diff --git a/.gitignore b/.gitignore
index e00450690da..9dda73b2e4d 100644
--- a/.gitignore
+++ b/.gitignore
@@ -45,6 +45,10 @@ ipch/
 /test/addons/doc-*/
 email.md
 deps/v8-*
+deps/icu
+deps/icu*.zip
+deps/icu*.tgz
+deps/icu-tmp
 ./node_modules
 .svn/
@@ -67,3 +71,4 @@ deps/zlib/zlib.target.mk
 
 # test artifacts
 tools/faketime
+icu_config.gypi
diff --git a/AUTHORS b/AUTHORS
index 5aa6137928e..ce538de7b5c 100644
--- a/AUTHORS
+++ b/AUTHORS
@@ -568,3 +568,4 @@ Kevin Simper
 Jackson Tian
 Tristan Berger
 Mathias Schreck
+Steven R. Loomis
diff --git a/ChangeLog b/ChangeLog
index 00cc51d4d0f..ef860651fc8 100644
--- a/ChangeLog
+++ b/ChangeLog
@@ -1,3 +1,172 @@
+2014.09.24, Version 0.11.14 (Unstable)
+
+* uv: Upgrade to v1.0.0-rc1
+
+* http_parser: Upgrade to v2.3.0
+
+* npm: Upgrade to v2.0.0
+
+* openssl: Upgrade to v1.0.1i
+
+* v8: Upgrade to 3.26.33
+
+* Add fast path for simple URL parsing (Gabriel Wicke)
+
+* Added support for options parameter in console.dir() (Xavi Magrinyà)
+
+* Cluster: fix shared handles on Windows (Alexis Campailla)
+
+* buffer: Fix incorrect Buffer.compare behavior (Feross Aboukhadijeh)
+
+* buffer: construct new buffer from buffer toJSON() output (cjihrig)
+
+* buffer: improve Buffer constructor (Kang-Hao Kenny)
+
+* build: linking CoreFoundation framework for OSX (Thorsten Lorenz)
+
+* child_process: accept uid/gid everywhere (Fedor Indutny)
+
+* child_process: add path to spawn ENOENT Error (Ryan Cole)
+
+* child_process: copy spawnSync() cwd option to proper buffer (cjihrig)
+
+* child_process: do not access stderr when stdio set to 'ignore' (cjihrig)
+
+* child_process: don't throw on EAGAIN (Charles)
+
+* child_process: don't throw on EMFILE/ENFILE (Ben Noordhuis)
+
+* child_process: use full path for cmd.exe on Win32 (Ed Morley)
+
+* cluster: allow multiple calls to setupMaster() (Ryan Graham)
+
+* cluster: centralize removal from workers list. (Julien Gilli)
+
+* cluster: enable error/message events using .worker (cjihrig)
+
+* cluster: include settings object in 'setup' event (Ryan Graham)
+
+* cluster: restore v0.10.x setupMaster() behaviour (Ryan Graham)
+
+* cluster: support options in Worker constructor (cjihrig)
+
+* cluster: test events emit on cluster.worker (Sam Roberts)
+
+* console: console.dir() accepts options object (Xavi Magrinyà)
+
+* crypto: add `honorCipherOrder` argument (Fedor Indutny)
+
+* crypto: allow padding in RSA methods (Fedor Indutny)
+
+* crypto: clarify RandomBytes() error msg (Mickael van der Beek)
+
+* crypto: never store pointer to conn in SSL_CTX (Fedor Indutny)
+
+* crypto: unsigned value can't be negative (Brian White)
+
+* dgram: remove new keyword from errnoException (Jackson Tian)
+
+* dns: always set variable family in lookup() (cjihrig)
+
+* dns: include host name in error message if available (Maciej Małecki)
+
+* dns: introduce lookupService function (Saúl Ibarra Corretgé)
+
+* dns: send lookup c-ares errors to callback (Chris Dickinson)
+
+* dns: throw if hostname is not string or falsey (cjihrig)
+
+* events: Output the event that is leaking (Arnout Kazemier)
+
+* fs: close file if fstat() fails in readFile() (cjihrig)
+
+* fs: fs.readFile should not throw uncaughtException (Jackson Tian)
+
+* http: add 308 status_code, see RFC7238 (Yazhong Liu)
+
+* http: don't default OPTIONS to chunked encoding (Nick Muerdter)
+
+* http: fix bailout for writeHead (Alex Kocharin)
+
+* http: remove unused code block (Fedor Indutny)
+
+* http: write() after end() emits an error. (Julien Gilli)
+
+* lib, src: add vm.runInDebugContext() (Ben Noordhuis)
+
+* lib: noisy deprecation of child_process customFds (Ryan Graham)
+
+* module: don't require fs several times (Robert Kowalski)
+
+* net,dgram: workers can listen on exclusive ports (cjihrig)
+
+* net,stream: add isPaused, don't read() when paused (Chris Dickinson)
+
+* net: Ensure consistent binding to IPV6 if address is absent (Raymond Feng)
+
+* net: add remoteFamily for socket (Jackson Tian)
+
+* net: don't emit listening if handle is closed (Eli Skeggs)
+
+* net: don't prefer IPv4 addresses during resolution (cjihrig)
+
+* net: don't throw on net.Server.close() (cjihrig)
+
+* net: reset `errorEmitted` on reconnect (Ed Umansky)
+
+* node: set names for prototype methods (Trevor Norris)
+
+* node: support v8 microtask queue (Vladimir Kurchatkin)
+
+* path: fix slice OOB in trim (Lucio M. Tato)
+
+* path: isAbsolute() should always return boolean (Herman Lee)
+
+* process: throw TypeError if kill pid not a number (Sam Roberts)
+
+* querystring: custom encode and decode (fengmk2)
+
+* querystring: do not add sep for empty array (cjihrig)
+
+* querystring: remove prepended ? from query field (Ezequiel Rabinovich)
+
+* readline: fix close event of readline.Interface() (Yazhong Liu)
+
+* readline: fixes scoping bug (Dan Kaplun)
+
+* readline: implements keypress buffering (Dan Kaplun)
+
+* repl: fix multi-line input (Fedor Indutny)
+
+* repl: fix overwrite for this._prompt (Yazhong Liu)
+
+* repl: proper `setPrompt()` and `multiline` support (Fedor Indutny)
+
+* stream: don't try to finish if buffer is not empty (Vladimir Kurchatkin)
+
+* stream: only end reading on null, not undefined (Jonathan Reem)
+
+* streams: set default hwm properly for Duplex (Andrew Oppenlander)
+
+* string_bytes: ucs2 support big endian (Andrew Low)
+
+* tls, crypto: add DHE support (Shigeki Ohtsu)
+
+* tls: `checkServerIdentity` option (Trevor Livingston)
+
+* tls: add DHE-RSA-AES128-SHA256 to the def ciphers (Shigeki Ohtsu)
+
+* tls: better error reporting at cert validation (Fedor Indutny)
+
+* tls: support multiple keys/certs (Fedor Indutny)
+
+* tls: throw an error, not string (Jackson Tian)
+
+* udp: make it possible to receive empty udp packets (Andrius Bentkus)
+
+* url: treat \ the same as / (isaacs)
+
+
 2014.05.01, Version 0.11.13 (Unstable), 99c9930ad626e2796af23def7cac19b65c608d18
 
 * v8: upgrade to 3.24.35.22
diff --git a/Makefile b/Makefile
index 11304e118f3..e463280ac4e 100644
--- a/Makefile
+++ b/Makefile
@@ -78,10 +78,12 @@ clean:
 
 distclean:
 	-rm -rf out
-	-rm -f config.gypi
+	-rm -f config.gypi icu_config.gypi
 	-rm -f config.mk
 	-rm -rf node node_g blog.html email.md
 	-rm -rf node_modules
+	-rm -rf deps/icu
+	-rm -rf deps/icu4c*.tgz deps/icu4c*.zip deps/icu-tmp
 
 test: all
 	$(PYTHON) tools/test.py --mode=release simple message
@@ -147,7 +149,19 @@ test-debugger: all
 	$(PYTHON) tools/test.py debugger
 
 test-npm: node
-	./node deps/npm/test/run.js
+	rm -rf npm-cache npm-tmp npm-prefix
+	mkdir npm-cache npm-tmp npm-prefix
+	cd deps/npm ; npm_config_cache="$(shell pwd)/npm-cache" \
+		npm_config_prefix="$(shell pwd)/npm-prefix" \
+		npm_config_tmp="$(shell pwd)/npm-tmp" \
+		../../node cli.js install
+	cd deps/npm ; npm_config_cache="$(shell pwd)/npm-cache" \
+		npm_config_prefix="$(shell pwd)/npm-prefix" \
+		npm_config_tmp="$(shell pwd)/npm-tmp" \
+		../../node cli.js run-script test-all && \
+		../../node cli.js prune --prod && \
+		cd ../.. && \
+		rm -rf npm-cache npm-tmp npm-prefix
 
 test-npm-publish: node
 	npm_package_config_publishtest=true ./node deps/npm/test/run.js
 
@@ -406,7 +420,7 @@ CPPLINT_EXCLUDE += src/queue.h
 CPPLINT_EXCLUDE += src/tree.h
 CPPLINT_EXCLUDE += src/v8abbr.h
 
-CPPLINT_FILES = $(filter-out $(CPPLINT_EXCLUDE), $(wildcard src/*.cc src/*.h src/*.c))
+CPPLINT_FILES = $(filter-out $(CPPLINT_EXCLUDE), $(wildcard src/*.cc src/*.h src/*.c tools/icu/*.h tools/icu/*.cc deps/debugger-agent/include/* deps/debugger-agent/src/*))
 
 cpplint:
 	@$(PYTHON) tools/cpplint.py $(CPPLINT_FILES)
diff --git a/README.md b/README.md
index 086590a6bb4..acaf24b372c 100644
--- a/README.md
+++ b/README.md
@@ -19,17 +19,6 @@ make
 make install
 ```
 
-With libicu i18n support:
-
-```sh
-svn checkout --force --revision 214189 \
-    http://src.chromium.org/svn/trunk/deps/third_party/icu46 \
-    deps/v8/third_party/icu46
-./configure --with-icu-path=deps/v8/third_party/icu46/icu.gyp
-make
-make install
-```
-
 If your python binary is in a non-standard location or has a
 non-standard name, run the following instead:
 
@@ -47,11 +36,13 @@ Prerequisites (Windows only):
 
 Windows:
 
-    vcbuild nosign
+```sh
+vcbuild nosign
+```
 
 You can download pre-built binaries for various operating systems from
 [http://nodejs.org/download/](http://nodejs.org/download/). The Windows
-and OS X installers will prompt you for the location to install to.
+and OS X installers will prompt you for the location in which to install.
 The tarballs are self-contained; you can extract them to a local directory
 with:
 
@@ -92,6 +83,108 @@ make doc
 man doc/node.1
 ```
 
+### `Intl` (ECMA-402) support:
+
+[Intl](https://github.com/joyent/node/wiki/Intl) support is not
+enabled by default.
+
+#### "small" (English only) support
+
+This option builds with "small" (English-only) data, but with the
+full `Intl` (ECMA-402) APIs. With `--download=all` it will download
+the ICU library as needed.
+
+Unix/Macintosh:
+
+```sh
+./configure --with-intl=small-icu --download=all
+```
+
+Windows:
+
+```sh
+vcbuild small-icu download-all
+```
+
+The `small-icu` mode builds with English-only data. You can add
+full data at runtime.
+
+*Note:* more docs are on
+[the wiki](https://github.com/joyent/node/wiki/Intl).
+
+#### Build with full ICU support (all locales supported by ICU):
+
+With `--download=all`, this may download ICU if you don't already
+have ICU in `deps/icu`.
+
+Unix/Macintosh:
+
+```sh
+./configure --with-intl=full-icu --download=all
+```
+
+Windows:
+
+```sh
+vcbuild full-icu download-all
+```
+
+#### Build with no Intl support `:-(`
+
+The `Intl` object will not be available.
+This is the default at present, so this option is not normally needed.
+
+Unix/Macintosh:
+
+```sh
+./configure --with-intl=none
+```
+
+Windows:
+
+```sh
+vcbuild intl-none
+```
+
+#### Use existing installed ICU (Unix/Macintosh only):
+
+```sh
+pkg-config --modversion icu-i18n && ./configure --with-intl=system-icu
+```
+
+#### Build with a specific ICU:
+
+You can find other ICU releases at
+[the ICU homepage](http://icu-project.org/download).
+Download the file named something like `icu4c-**##.#**-src.tgz` (or
+`.zip`).
+
+Unix/Macintosh: from an already-unpacked ICU
+
+```sh
+./configure --with-intl=[small-icu,full-icu] --with-icu-source=/path/to/icu
+```
+
+Unix/Macintosh: from a local ICU tarball
+
+```sh
+./configure --with-intl=[small-icu,full-icu] --with-icu-source=/path/to/icu.tgz
+```
+
+Unix/Macintosh: from a tarball URL
+
+```sh
+./configure --with-intl=full-icu --with-icu-source=http://url/to/icu.tgz
+```
+
+Windows: first unpack the latest ICU, e.g.
+[icu4c-**##.#**-src.tgz](http://icu-project.org/download) (or `.zip`),
+as `deps/icu`, so that you have `deps/icu/source/...`
+
+```sh
+vcbuild full-icu
+```
+
 Resources for Newcomers
 ---
 - [The Wiki](https://github.com/joyent/node/wiki)
diff --git a/benchmark/misc/module-loader.js b/benchmark/misc/module-loader.js
new file mode 100644
index 00000000000..96f8e7df1ea
--- /dev/null
+++ b/benchmark/misc/module-loader.js
@@ -0,0 +1,72 @@
+// Copyright Joyent, Inc. and other Node contributors.
+//
+// Permission is hereby granted, free of charge, to any person obtaining a
+// copy of this software and associated documentation files (the
+// "Software"), to deal in the Software without restriction, including
+// without limitation the rights to use, copy, modify, merge, publish,
+// distribute, sublicense, and/or sell copies of the Software, and to permit
+// persons to whom the Software is furnished to do so, subject to the
+// following conditions:
+//
+// The above copyright notice and this permission notice shall be included
+// in all copies or substantial portions of the Software.
+//
+// THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS
+// OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
+// MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN
+// NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM,
+// DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR
+// OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE
+// USE OR OTHER DEALINGS IN THE SOFTWARE.
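+// Module-loader benchmark: creates `thousands` x 1000 one-module packages
+// on disk (a package.json plus an index.js each), then measures how fast
+// require() can load them all.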
+
+
+var fs = require('fs');
+var path = require('path');
+var common = require('../common.js');
+var packageJson = '{"main": "index.js"}';
+
+var tmpDirectory = path.join(__dirname, '..', 'tmp');
+var benchmarkDirectory = path.join(tmpDirectory, 'nodejs-benchmark-module');
+
+var bench = common.createBenchmark(main, {
+  thousands: [50]
+});
+
+function main(conf) {
+  rmrf(tmpDirectory);
+  try { fs.mkdirSync(tmpDirectory); } catch (e) {}
+  try { fs.mkdirSync(benchmarkDirectory); } catch (e) {}
+
+  var n = +conf.thousands * 1e3;
+  for (var i = 0; i <= n; i++) {
+    fs.mkdirSync(benchmarkDirectory + i);
+    fs.writeFileSync(benchmarkDirectory + i + '/package.json',
+                     packageJson);
+    fs.writeFileSync(benchmarkDirectory + i + '/index.js',
+                     'module.exports = "";');
+  }
+
+  measure(n);
+}
+
+function measure(n) {
+  bench.start();
+  for (var i = 0; i <= n; i++) {
+    require(benchmarkDirectory + i);
+  }
+  bench.end(n / 1e3);
+}
+
+function rmrf(location) {
+  if (fs.existsSync(location)) {
+    var things = fs.readdirSync(location);
+    things.forEach(function(thing) {
+      var cur = path.join(location, thing),
+          isDirectory = fs.statSync(cur).isDirectory();
+      if (isDirectory) {
+        rmrf(cur);
+        return;
+      }
+      fs.unlinkSync(cur);
+    });
+    fs.rmdirSync(location);
+  }
+}
diff --git a/common.gypi b/common.gypi
index 6aa485bada2..8886b743927 100644
--- a/common.gypi
+++ b/common.gypi
@@ -26,10 +26,10 @@
     }],
     ['GENERATOR == "ninja" or OS== "mac"', {
       'OBJ_DIR': '<(PRODUCT_DIR)/obj',
-      'V8_BASE': '<(PRODUCT_DIR)/libv8_base.<(target_arch).a',
+      'V8_BASE': '<(PRODUCT_DIR)/libv8_base.a',
     }, {
       'OBJ_DIR': '<(PRODUCT_DIR)/obj.target',
-      'V8_BASE': '<(PRODUCT_DIR)/obj.target/deps/v8/tools/gyp/libv8_base.<(target_arch).a',
+      'V8_BASE': '<(PRODUCT_DIR)/obj.target/deps/v8/tools/gyp/libv8_base.a',
     }],
   ],
 },
diff --git a/configure b/configure
index 2e085832546..51475f03575 100755
--- a/configure
+++ b/configure
@@ -6,6 +6,8 @@ import re
 import shlex
 import subprocess
 import sys
+import shutil
+import string
 
 CC = os.environ.get('CC', 'cc')
 
@@ -13,6 +15,10 @@ root_dir = os.path.dirname(__file__)
 sys.path.insert(0, os.path.join(root_dir, 'tools', 'gyp', 'pylib'))
 from gyp.common import GetFlavor
 
+# imports in tools/configure.d
+sys.path.insert(0, os.path.join(root_dir, 'tools', 'configure.d'))
+import nodedownload
+
 # parse our options
 parser = optparse.OptionParser()
 
@@ -236,11 +242,31 @@ parser.add_option('--with-etw',
     dest='with_etw',
     help='build with ETW (default is true on Windows)')
 
+parser.add_option('--download',
+    action='store',
+    dest='download_list',
+    help=nodedownload.help())
+
 parser.add_option('--with-icu-path',
     action='store',
     dest='with_icu_path',
    help='Path to icu.gyp (ICU i18n, Chromium version only.)')
 
+parser.add_option('--with-icu-locales',
+    action='store',
+    dest='with_icu_locales',
+    help='Comma-separated list of locales for "small-icu". Default: "root,en". "root" is assumed.')
+
+parser.add_option('--with-intl',
+    action='store',
+    dest='with_intl',
+    help='Intl mode: none, full-icu, small-icu (default is none)')
+
+parser.add_option('--with-icu-source',
+    action='store',
+    dest='with_icu_source',
+    help='Intl mode: optional local path to icu/ dir, or path/URL of icu source archive.')
+
 parser.add_option('--with-perfctr',
     action='store_true',
     dest='with_perfctr',
@@ -289,6 +315,8 @@ parser.add_option('--xcode',
 
 (options, args) = parser.parse_args()
 
+# set up auto-download list
+auto_downloads = nodedownload.parse(options.download_list)
 
 def b(value):
   """Returns the string 'true' if value is truthy, 'false' otherwise."""
@@ -686,13 +714,259 @@ def configure_winsdk(o):
     print('ctrpp not found in WinSDK path--using pre-gen files '
           'from tools/msvs/genfiles.')
 
+def write(filename, data):
+  filename = os.path.join(root_dir, filename)
+  print 'creating ', filename
+  f = open(filename, 'w+')
+  f.write(data)
+
+do_not_edit = '# Do not edit. Generated by the configure script.\n'
+
+def glob_to_var(dir_base, dir_sub):
+  list = []
+  dir_all = os.path.join(dir_base, dir_sub)
+  files = os.walk(dir_all)
+  for ent in files:
+    (path, dirs, files) = ent
+    for file in files:
+      if file.endswith('.cpp') or file.endswith('.c') or file.endswith('.h'):
+        list.append('%s/%s' % (dir_sub, file))
+    break
+  return list
+
+def configure_intl(o):
+  icus = [
+    {
+      'url': 'http://download.icu-project.org/files/icu4c/54.1/icu4c-54_1-src.zip',
+      # from https://ssl.icu-project.org/files/icu4c/54.1/icu4c-src-54_1.md5:
+      'md5': '6b89d60e2f0e140898ae4d7f72323bca',
+    },
+  ]
+  def icu_download(path):
+    # download ICU, if needed
+    for icu in icus:
+      url = icu['url']
+      md5 = icu['md5']
+      local = url.split('/')[-1]
+      targetfile = os.path.join(root_dir, 'deps', local)
+      if not os.path.isfile(targetfile):
+        if nodedownload.candownload(auto_downloads, "icu"):
+          nodedownload.retrievefile(url, targetfile)
+      else:
+        print ' Re-using existing %s' % targetfile
+      if os.path.isfile(targetfile):
+        sys.stdout.write(' Checking file integrity with MD5:\r')
+        gotmd5 = nodedownload.md5sum(targetfile)
+        print ' MD5: %s %s' % (gotmd5, targetfile)
+        if (md5 == gotmd5):
+          return targetfile
+        else:
+          print ' Expected: %s *MISMATCH*' % md5
+          print '\n ** Corrupted ZIP? Delete %s to retry download.\n' % targetfile
+    return None
+  icu_config = {
+    'variables': {}
+  }
+  icu_config_name = 'icu_config.gypi'
+  def write_config(data, name):
+    return
+
+  # write an empty file to start with
+  write(icu_config_name, do_not_edit +
+        pprint.pformat(icu_config, indent=2) + '\n')
-def configure_icu(o):
+  # always set icu_small, node.gyp depends on it being defined.
+  o['variables']['icu_small'] = b(False)
+
+  with_intl = options.with_intl
+  with_icu_source = options.with_icu_source
   have_icu_path = bool(options.with_icu_path)
-  o['variables']['v8_enable_i18n_support'] = int(have_icu_path)
-  if have_icu_path:
+  if have_icu_path and with_intl:
+    print 'Error: Cannot specify both --with-icu-path and --with-intl'
+    sys.exit(1)
+  elif have_icu_path:
+    # Chromium .gyp mode: --with-icu-path
+    o['variables']['v8_enable_i18n_support'] = 1
+    # use the .gyp given
     o['variables']['icu_gyp_path'] = options.with_icu_path
-
+    return
+  # --with-intl=
+  # set the default
+  if with_intl is None:
+    with_intl = 'none'  # The default mode of Intl
+  # sanity check localelist
+  if options.with_icu_locales and (with_intl != 'small-icu'):
+    print 'Error: --with-icu-locales only makes sense with --with-intl=small-icu'
+    sys.exit(1)
+  if with_intl == 'none' or with_intl is None:
+    o['variables']['v8_enable_i18n_support'] = 0
+    return  # no Intl
+  elif with_intl == 'small-icu':
+    # small ICU (English only)
+    o['variables']['v8_enable_i18n_support'] = 1
+    o['variables']['icu_small'] = b(True)
+    with_icu_locales = options.with_icu_locales
+    if not with_icu_locales:
+      with_icu_locales = 'root,en'
+    locs = set(with_icu_locales.split(','))
+    locs.add('root')  # must have root
+    o['variables']['icu_locales'] = string.join(locs,',')
+  elif with_intl == 'full-icu':
+    # full ICU
+    o['variables']['v8_enable_i18n_support'] = 1
+  elif with_intl == 'system-icu':
+    # ICU from pkg-config.
+    o['variables']['v8_enable_i18n_support'] = 1
+    pkgicu = pkg_config('icu-i18n')
+    if not pkgicu:
+      print 'Error: could not load pkg-config data for "icu-i18n".'
+      print 'See above errors or the README.md.'
+      sys.exit(1)
+    (libs, cflags) = pkgicu
+    o['libraries'] += libs.split()
+    o['cflags'] += cflags.split()
+    # use the "system" .gyp
+    o['variables']['icu_gyp_path'] = 'tools/icu/icu-system.gyp'
+    return
+  else:
+    print 'Error: unknown value --with-intl=%s' % with_intl
+    sys.exit(1)
+  # Note: non-ICU implementations could use other 'with_intl'
+  # values.
+
+  # this is just the 'deps' dir. Used for unpacking.
+  icu_parent_path = os.path.join(root_dir, 'deps')
+
+  # The full path to the ICU source directory.
+  icu_full_path = os.path.join(icu_parent_path, 'icu')
+
+  # icu-tmp is used to download and unpack the ICU tarball.
+  icu_tmp_path = os.path.join(icu_parent_path, 'icu-tmp')
+
+  # --with-icu-source processing
+  # first, check that they didn't pass --with-icu-source=deps/icu
+  if with_icu_source and os.path.abspath(icu_full_path) == os.path.abspath(with_icu_source):
+    print 'Ignoring redundant --with-icu-source=%s' % (with_icu_source)
+    with_icu_source = None
+  # if with_icu_source is still set, try to use it.
+  if with_icu_source:
+    if os.path.isdir(icu_full_path):
+      print 'Deleting old ICU source: %s' % (icu_full_path)
+      shutil.rmtree(icu_full_path)
+    # now, what path was given?
+    if os.path.isdir(with_icu_source):
+      # it's a path. Copy it.
+      print '%s -> %s' % (with_icu_source, icu_full_path)
+      shutil.copytree(with_icu_source, icu_full_path)
+    else:
+      # could be file or URL.
+      # Set up temporary area
+      if os.path.isdir(icu_tmp_path):
+        shutil.rmtree(icu_tmp_path)
+      os.mkdir(icu_tmp_path)
+      icu_tarball = None
+      if os.path.isfile(with_icu_source):
+        # it's a file. Try to unpack it.
+        icu_tarball = with_icu_source
+      else:
+        # Can we download it?
+        local = os.path.join(icu_tmp_path, with_icu_source.split('/')[-1])  # local part
+        icu_tarball = nodedownload.retrievefile(with_icu_source, local)
+      # continue with "icu_tarball"
+      nodedownload.unpack(icu_tarball, icu_tmp_path)
+      # Did it unpack correctly? Should contain 'icu'
+      tmp_icu = os.path.join(icu_tmp_path, 'icu')
+      if os.path.isdir(tmp_icu):
+        os.rename(tmp_icu, icu_full_path)
+        shutil.rmtree(icu_tmp_path)
+      else:
+        print ' Error: --with-icu-source=%s did not result in an "icu" dir.' % with_icu_source
+        shutil.rmtree(icu_tmp_path)
+        sys.exit(1)
+
+  # ICU mode. (icu-generic.gyp)
+  byteorder = sys.byteorder
+  o['variables']['icu_gyp_path'] = 'tools/icu/icu-generic.gyp'
+  # ICU source dir relative to root
+  o['variables']['icu_path'] = icu_full_path
+  if not os.path.isdir(icu_full_path):
+    print '* ECMA-402 (Intl) support didn\'t find ICU in %s..' % (icu_full_path)
+    # can we download (or find) a zipfile?
+    localzip = icu_download(icu_full_path)
+    if localzip:
+      nodedownload.unpack(localzip, icu_parent_path)
+  if not os.path.isdir(icu_full_path):
+    print ' Cannot build Intl without ICU in %s.' % (icu_full_path)
+    print ' (Fix, or disable with "--with-intl=none" )'
+    sys.exit(1)
+  else:
+    print '* Using ICU in %s' % (icu_full_path)
+  # Now, what version of ICU is it? We just need the "major", such as 54.
+  # uvernum.h contains it as a #define.
+  uvernum_h = os.path.join(icu_full_path, 'source/common/unicode/uvernum.h')
+  if not os.path.isfile(uvernum_h):
+    print ' Error: could not load %s - is ICU installed?' % uvernum_h
+    sys.exit(1)
+  icu_ver_major = None
+  matchVerExp = r'^\s*#define\s+U_ICU_VERSION_SHORT\s+"([^"]*)".*'
+  match_version = re.compile(matchVerExp)
+  for line in open(uvernum_h).readlines():
+    m = match_version.match(line)
+    if m:
+      icu_ver_major = m.group(1)
+  if not icu_ver_major:
+    print ' Could not read U_ICU_VERSION_SHORT version from %s' % uvernum_h
+    sys.exit(1)
+  icu_endianness = sys.byteorder[0]  # TODO(srl295): EBCDIC should be 'e'
+  o['variables']['icu_ver_major'] = icu_ver_major
+  o['variables']['icu_endianness'] = icu_endianness
+  icu_data_file_l = 'icudt%s%s.dat' % (icu_ver_major, 'l')
+  icu_data_file = 'icudt%s%s.dat' % (icu_ver_major, icu_endianness)
+  # relative to configure
+  icu_data_path = os.path.join(icu_full_path,
+                               'source/data/in',
+                               icu_data_file_l)
+  # relative to dep..
+  icu_data_in = os.path.join('../../deps/icu/source/data/in', icu_data_file_l)
+  if not os.path.isfile(icu_data_path) and icu_endianness != 'l':
+    # use host endianness
+    icu_data_path = os.path.join(icu_full_path,
+                                 'source/data/in',
+                                 icu_data_file)
+    # relative to dep..
+    icu_data_in = os.path.join('icu/source/data/in',
+                               icu_data_file)
+  # this is the input '.dat' file to use .. icudt*.dat
+  # may be little-endian if from a icu-project.org tarball
+  o['variables']['icu_data_in'] = icu_data_in
+  # this is the icudt*.dat file which node will be using (platform endianness)
+  o['variables']['icu_data_file'] = icu_data_file
+  if not os.path.isfile(icu_data_path):
+    print ' Error: ICU prebuilt data file %s does not exist.' % icu_data_path
+    print ' See the README.md.'
+    # .. and we're not about to build it from .gyp!
+    sys.exit(1)
+  # map from variable name to subdirs
+  icu_src = {
+    'stubdata': 'stubdata',
+    'common': 'common',
+    'i18n': 'i18n',
+    'io': 'io',
+    'tools': 'tools/toolutil',
+    'genccode': 'tools/genccode',
+    'genrb': 'tools/genrb',
+    'icupkg': 'tools/icupkg',
+  }
+  # this creates a variable icu_src_XXX for each of the subdirs
+  # with a list of the src files to use
+  for i in icu_src:
+    var = 'icu_src_%s' % i
+    path = '../../deps/icu/source/%s' % icu_src[i]
+    icu_config['variables'][var] = glob_to_var('tools/icu', path)
+  # write updated icu_config.gypi with a bunch of paths
+  write(icu_config_name, do_not_edit +
+        pprint.pformat(icu_config, indent=2) + '\n')
+  return  # end of configure_intl
 
 # determine the "flavor" (operating system) we're building for,
 # leveraging gyp's GetFlavor function
@@ -717,7 +991,7 @@ configure_libuv(output)
 configure_v8(output)
 configure_openssl(output)
 configure_winsdk(output)
-configure_icu(output)
+configure_intl(output)
 configure_fullystatic(output)
 
 # variables should be a root level element,
@@ -730,13 +1004,7 @@ output = {
 }
 
 pprint.pprint(output, indent=2)
 
-def write(filename, data):
-  filename = os.path.join(root_dir, filename)
-  print 'creating ', filename
-  f = open(filename, 'w+')
-  f.write(data)
-
-write('config.gypi', '# Do not edit. Generated by the configure script.\n' +
+write('config.gypi', do_not_edit +
       pprint.pformat(output, indent=2) + '\n')
 
 config = {
diff --git a/deps/debugger-agent/debugger-agent.gyp b/deps/debugger-agent/debugger-agent.gyp
new file mode 100644
index 00000000000..e98206849ab
--- /dev/null
+++ b/deps/debugger-agent/debugger-agent.gyp
@@ -0,0 +1,24 @@
+{
+  "targets": [{
+    "target_name": "debugger-agent",
+    "type": "<(library)",
+    "include_dirs": [
+      "src",
+      "include",
+      "../v8/include",
+      "../uv/include",
+
+      # Private node.js folder and stuff needed to include from it
+      "../../src",
+      "../cares/include",
+    ],
+    "direct_dependent_settings": {
+      "include_dirs": [
+        "include",
+      ],
+    },
+    "sources": [
+      "src/agent.cc",
+    ],
+  }],
+}
diff --git a/deps/debugger-agent/include/debugger-agent.h b/deps/debugger-agent/include/debugger-agent.h
new file mode 100644
index 00000000000..762a687a0a0
--- /dev/null
+++ b/deps/debugger-agent/include/debugger-agent.h
@@ -0,0 +1,109 @@
+// Copyright Fedor Indutny and other Node contributors.
+//
+// Permission is hereby granted, free of charge, to any person obtaining a
+// copy of this software and associated documentation files (the
+// "Software"), to deal in the Software without restriction, including
+// without limitation the rights to use, copy, modify, merge, publish,
+// distribute, sublicense, and/or sell copies of the Software, and to permit
+// persons to whom the Software is furnished to do so, subject to the
+// following conditions:
+//
+// The above copyright notice and this permission notice shall be included
+// in all copies or substantial portions of the Software.
+//
+// THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS
+// OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
+// MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN
+// NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM,
+// DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR
+// OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE
+// USE OR OTHER DEALINGS IN THE SOFTWARE.
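+//
+// The Agent runs the V8 debug protocol on a dedicated thread with its own
+// event loop (child_loop_), so debugger clients can still be serviced
+// while the main JavaScript thread is paused at a breakpoint.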
+
+#ifndef DEPS_DEBUGGER_AGENT_INCLUDE_DEBUGGER_AGENT_H_
+#define DEPS_DEBUGGER_AGENT_INCLUDE_DEBUGGER_AGENT_H_
+
+#include "uv.h"
+#include "v8.h"
+#include "v8-debug.h"
+
+namespace node {
+
+// Forward declaration
+class Environment;
+
+namespace debugger {
+
+// Forward declaration
+class AgentMessage;
+
+class Agent {
+ public:
+  explicit Agent(node::Environment* env);
+  ~Agent();
+
+  typedef void (*DispatchHandler)(node::Environment* env);
+
+  // Start the debugger agent thread
+  bool Start(int port, bool wait);
+  // Listen for debug events
+  void Enable();
+  // Stop the debugger agent
+  void Stop();
+
+  inline void set_dispatch_handler(DispatchHandler handler) {
+    dispatch_handler_ = handler;
+  }
+
+  inline node::Environment* parent_env() const { return parent_env_; }
+  inline node::Environment* child_env() const { return child_env_; }
+
+ protected:
+  void InitAdaptor(Environment* env);
+
+  // Worker body
+  void WorkerRun();
+
+  static void ThreadCb(Agent* agent);
+  static void ParentSignalCb(uv_async_t* signal);
+  static void ChildSignalCb(uv_async_t* signal);
+  static void MessageHandler(const v8::Debug::Message& message);
+
+  // V8 API
+  static Agent* Unwrap(const v8::FunctionCallbackInfo<v8::Value>& args);
+  static void NotifyListen(const v8::FunctionCallbackInfo<v8::Value>& args);
+  static void NotifyWait(const v8::FunctionCallbackInfo<v8::Value>& args);
+  static void SendCommand(const v8::FunctionCallbackInfo<v8::Value>& args);
+
+  void EnqueueMessage(AgentMessage* message);
+
+  enum State {
+    kNone,
+    kRunning
+  };
+
+  // TODO(indutny): Verify that there are no races
+  State state_;
+
+  int port_;
+  bool wait_;
+
+  uv_sem_t start_sem_;
+  uv_mutex_t message_mutex_;
+  uv_async_t child_signal_;
+
+  uv_thread_t thread_;
+  node::Environment* parent_env_;
+  node::Environment* child_env_;
+  uv_loop_t child_loop_;
+  v8::Persistent<v8::Object> api_;
+
+  // QUEUE
+  void* messages_[2];
+
+  DispatchHandler dispatch_handler_;
+};
+
+}  // namespace debugger
+}  // namespace node
+
+#endif  // DEPS_DEBUGGER_AGENT_INCLUDE_DEBUGGER_AGENT_H_
diff --git a/deps/debugger-agent/lib/_debugger_agent.js b/deps/debugger-agent/lib/_debugger_agent.js
new file mode 100644
index 00000000000..680c5e95c49
--- /dev/null
+++ b/deps/debugger-agent/lib/_debugger_agent.js
@@ -0,0 +1,191 @@
+var assert = require('assert');
+var net = require('net');
+var util = require('util');
+var Buffer = require('buffer').Buffer;
+
+var Transform = require('stream').Transform;
+
+exports.start = function start() {
+  var agent = new Agent();
+
+  // Do not let `agent.listen()` request listening from cluster master
+  var cluster = require('cluster');
+  cluster.isWorker = false;
+  cluster.isMaster = true;
+
+  agent.on('error', function(err) {
+    process._rawDebug(err.stack || err);
+  });
+
+  agent.listen(process._debugAPI.port, function() {
+    var addr = this.address();
+    process._rawDebug('Debugger listening on port %d', addr.port);
+    process._debugAPI.notifyListen();
+  });
+
+  // Just to spin-off events
+  // TODO(indutny): Figure out why node.cc isn't doing this
+  setImmediate(function() {
+  });
+
+  process._debugAPI.onclose = function() {
+    // We don't care about it, but it prevents loop from cleaning up gently
+    // NOTE: removeAllListeners won't work, as it doesn't call `removeListener`
+    process.listeners('SIGWINCH').forEach(function(fn) {
+      process.removeListener('SIGWINCH', fn);
+    });
+
+    agent.close();
+  };
+
+  // Not used now, but anyway
+  return agent;
+};
+
+function Agent() {
+  net.Server.call(this, this.onConnection);
+
+  this.first = true;
+  this.binding = process._debugAPI;
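+  // process._debugAPI is the adaptor object installed from C++ by
+  // Agent::InitAdaptor(); it carries the port plus the notifyListen,
+  // notifyWait and sendCommand methods, and the onmessage/onclose hooks
+  // assigned in this file.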
+
+  var self = this;
+  this.binding.onmessage = function(msg) {
+    self.clients.forEach(function(client) {
+      client.send({}, msg);
+    });
+  };
+
+  this.clients = [];
+  assert(this.binding, 'Debugger agent running without bindings!');
+}
+util.inherits(Agent, net.Server);
+
+Agent.prototype.onConnection = function onConnection(socket) {
+  var c = new Client(this, socket);
+
+  c.start();
+  this.clients.push(c);
+
+  var self = this;
+  c.once('close', function() {
+    var index = self.clients.indexOf(c);
+    assert(index !== -1);
+    self.clients.splice(index, 1);
+  });
+};
+
+Agent.prototype.notifyWait = function notifyWait() {
+  if (this.first)
+    this.binding.notifyWait();
+  this.first = false;
+};
+
+function Client(agent, socket) {
+  Transform.call(this);
+  this._readableState.objectMode = true;
+
+  this.agent = agent;
+  this.binding = this.agent.binding;
+  this.socket = socket;
+
+  // Parse incoming data
+  this.state = 'headers';
+  this.headers = {};
+  this.buffer = '';
+  socket.pipe(this);
+
+  this.on('data', this.onCommand);
+
+  var self = this;
+  this.socket.on('close', function() {
+    self.destroy();
+  });
+}
+util.inherits(Client, Transform);
+
+Client.prototype.destroy = function destroy(msg) {
+  this.socket.destroy();
+
+  this.emit('close');
+};
+
+Client.prototype._transform = function _transform(data, enc, cb) {
+  cb();
+
+  this.buffer += data;
+
+  while (true) {
+    if (this.state === 'headers') {
+      // Not enough data
+      if (!/\r\n/.test(this.buffer))
+        break;
+
+      if (/^\r\n/.test(this.buffer)) {
+        this.buffer = this.buffer.slice(2);
+        this.state = 'body';
+        continue;
+      }
+
+      // Match:
+      //   Header-name: header-value\r\n
+      var match = this.buffer.match(/^([^:\s\r\n]+)\s*:\s*([^\s\r\n]+)\r\n/);
+      if (!match)
+        return this.destroy('Expected header, but failed to parse it');
+
+      this.headers[match[1].toLowerCase()] = match[2];
+
+      this.buffer = this.buffer.slice(match[0].length);
+    } else {
+      var len = this.headers['content-length'];
+      if (len === undefined)
+        return this.destroy('Expected content-length');
+
+      len = len | 0;
+      if (Buffer.byteLength(this.buffer) < len)
+        break;
+
+      this.push(new Command(this.headers, this.buffer.slice(0, len)));
+      this.state = 'headers';
+      this.buffer = this.buffer.slice(len);
+      this.headers = {};
+    }
+  }
+};
+
+Client.prototype.send = function send(headers, data) {
+  if (!data)
+    data = '';
+
+  var out = [];
+  Object.keys(headers).forEach(function(key) {
+    out.push(key + ': ' + headers[key]);
+  });
+  out.push('Content-Length: ' + Buffer.byteLength(data), '');
+
+  this.socket.cork();
+  this.socket.write(out.join('\r\n') + '\r\n');
+
+  if (data.length > 0)
+    this.socket.write(data);
+  this.socket.uncork();
+};
+
+Client.prototype.start = function start() {
+  this.send({
+    Type: 'connect',
+    'V8-Version': process.versions.v8,
+    'Protocol-Version': 1,
+    'Embedding-Host': 'node ' + process.version
+  });
+};
+
+Client.prototype.onCommand = function onCommand(cmd) {
+  this.binding.sendCommand(cmd.body);
+
+  this.agent.notifyWait();
+};
+
+function Command(headers, body) {
+  this.headers = headers;
+  this.body = body;
+}
diff --git a/deps/debugger-agent/src/agent.cc b/deps/debugger-agent/src/agent.cc
new file mode 100644
index 00000000000..335737ffe9e
--- /dev/null
+++ b/deps/debugger-agent/src/agent.cc
@@ -0,0 +1,347 @@
+// Copyright Fedor Indutny and other Node contributors.
+//
+// Permission is hereby granted, free of charge, to any person obtaining a
+// copy of this software and associated documentation files (the
+// "Software"), to deal in the Software without restriction, including
+// without limitation the rights to use, copy, modify, merge, publish,
+// distribute, sublicense, and/or sell copies of the Software, and to permit
+// persons to whom the Software is furnished to do so, subject to the
+// following conditions:
+//
+// The above copyright notice and this permission notice shall be included
+// in all copies or substantial portions of the Software.
+//
+// THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS
+// OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
+// MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN
+// NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM,
+// DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR
+// OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE
+// USE OR OTHER DEALINGS IN THE SOFTWARE.
+
+#include "agent.h"
+#include "debugger-agent.h"
+
+#include "node.h"
+#include "node_internals.h"  // ARRAY_SIZE
+#include "env.h"
+#include "env-inl.h"
+#include "v8.h"
+#include "v8-debug.h"
+#include "util.h"
+#include "util-inl.h"
+#include "queue.h"
+
+#include <string.h>
+
+namespace node {
+namespace debugger {
+
+using v8::Context;
+using v8::Function;
+using v8::FunctionCallbackInfo;
+using v8::FunctionTemplate;
+using v8::Handle;
+using v8::HandleScope;
+using v8::Integer;
+using v8::Isolate;
+using v8::Local;
+using v8::Locker;
+using v8::Object;
+using v8::String;
+using v8::Value;
+
+
+Agent::Agent(Environment* env) : state_(kNone),
+                                 port_(5858),
+                                 wait_(false),
+                                 parent_env_(env),
+                                 child_env_(NULL),
+                                 dispatch_handler_(NULL) {
+  int err;
+
+  err = uv_sem_init(&start_sem_, 0);
+  CHECK_EQ(err, 0);
+
+  err = uv_mutex_init(&message_mutex_);
+  CHECK_EQ(err, 0);
+
+  QUEUE_INIT(&messages_);
+}
+
+
+Agent::~Agent() {
+  Stop();
+
+  uv_sem_destroy(&start_sem_);
+  uv_mutex_destroy(&message_mutex_);
+
+  // Clean-up messages
+  while (!QUEUE_EMPTY(&messages_)) {
+    QUEUE* q = QUEUE_HEAD(&messages_);
+    QUEUE_REMOVE(q);
+    AgentMessage* msg = ContainerOf(&AgentMessage::member, q);
+    delete msg;
+  }
+}
+
+
+bool Agent::Start(int port, bool wait) {
+  int err;
+
+  if (state_ == kRunning)
+    return false;
+
+  err = uv_loop_init(&child_loop_);
+  if (err != 0)
+    goto loop_init_failed;
+
+  // Interruption signal handler
+  err = uv_async_init(&child_loop_, &child_signal_, ChildSignalCb);
+  if (err != 0)
+    goto async_init_failed;
+  uv_unref(reinterpret_cast<uv_handle_t*>(&child_signal_));
+
+  port_ = port;
+  wait_ = wait;
+
+  err = uv_thread_create(&thread_,
+                         reinterpret_cast<uv_thread_cb>(ThreadCb),
+                         this);
+  if (err != 0)
+    goto thread_create_failed;
+
+  uv_sem_wait(&start_sem_);
+
+  state_ = kRunning;
+
+  return true;
+
+ thread_create_failed:
+  uv_close(reinterpret_cast<uv_handle_t*>(&child_signal_), NULL);
+
+ async_init_failed:
+  err = uv_loop_close(&child_loop_);
+  CHECK_EQ(err, 0);
+
+ loop_init_failed:
+  return false;
+}
+
+
+void Agent::Enable() {
+  v8::Debug::SetMessageHandler(MessageHandler);
+
+  // Assign environment to the debugger's context
+  // NOTE: The debugger context is created after `SetMessageHandler()` call
+  parent_env()->AssignToContext(v8::Debug::GetDebugContext());
+}
+
+
+void Agent::Stop() {
+  int err;
+
+  if (state_ != kRunning) {
+    return;
+  }
+
+  v8::Debug::SetMessageHandler(NULL);
+
+  // Send empty message to terminate things
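+  // (an AgentMessage with a NULL payload acts as the shutdown sentinel;
+  // ChildSignalCb reacts to it by firing the `onclose` callback)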
+  EnqueueMessage(new AgentMessage(NULL, 0));
+
+  // Signal worker thread to make it stop
+  err = uv_async_send(&child_signal_);
+  CHECK_EQ(err, 0);
+
+  err = uv_thread_join(&thread_);
+  CHECK_EQ(err, 0);
+
+  uv_close(reinterpret_cast<uv_handle_t*>(&child_signal_), NULL);
+  uv_run(&child_loop_, UV_RUN_NOWAIT);
+
+  err = uv_loop_close(&child_loop_);
+  CHECK_EQ(err, 0);
+
+  state_ = kNone;
+}
+
+
+void Agent::WorkerRun() {
+  static const char* argv[] = { "node", "--debug-agent" };
+  Isolate* isolate = Isolate::New();
+  {
+    Locker locker(isolate);
+    Isolate::Scope isolate_scope(isolate);
+
+    HandleScope handle_scope(isolate);
+    Local<Context> context = Context::New(isolate);
+
+    Context::Scope context_scope(context);
+    Environment* env = CreateEnvironment(
+        isolate,
+        &child_loop_,
+        context,
+        ARRAY_SIZE(argv),
+        argv,
+        ARRAY_SIZE(argv),
+        argv);
+
+    child_env_ = env;
+
+    // Expose API
+    InitAdaptor(env);
+    LoadEnvironment(env);
+
+    CHECK_EQ(&child_loop_, env->event_loop());
+    uv_run(&child_loop_, UV_RUN_DEFAULT);
+
+    // Clean-up persistent
+    api_.Reset();
+
+    // Clean-up all running handles
+    env->CleanupHandles();
+
+    env->Dispose();
+    env = NULL;
+  }
+  isolate->Dispose();
+}
+
+
+void Agent::InitAdaptor(Environment* env) {
+  Isolate* isolate = env->isolate();
+  HandleScope scope(isolate);
+
+  // Create API adaptor
+  Local<FunctionTemplate> t = FunctionTemplate::New(isolate);
+  t->InstanceTemplate()->SetInternalFieldCount(1);
+  t->SetClassName(String::NewFromUtf8(isolate, "DebugAPI"));
+
+  NODE_SET_PROTOTYPE_METHOD(t, "notifyListen", NotifyListen);
+  NODE_SET_PROTOTYPE_METHOD(t, "notifyWait", NotifyWait);
+  NODE_SET_PROTOTYPE_METHOD(t, "sendCommand", SendCommand);
+
+  Local<Object> api = t->GetFunction()->NewInstance();
+  api->SetAlignedPointerInInternalField(0, this);
+
+  api->Set(String::NewFromUtf8(isolate, "port"), Integer::New(isolate, port_));
+
+  env->process_object()->Set(String::NewFromUtf8(isolate, "_debugAPI"), api);
+  api_.Reset(env->isolate(), api);
+}
+
+
+Agent* Agent::Unwrap(const v8::FunctionCallbackInfo<v8::Value>& args) {
+  void* ptr = args.Holder()->GetAlignedPointerFromInternalField(0);
+  return reinterpret_cast<Agent*>(ptr);
+}
+
+
+void Agent::NotifyListen(const FunctionCallbackInfo<Value>& args) {
+  Agent* a = Unwrap(args);
+
+  // Notify other thread that we are ready to process events
+  uv_sem_post(&a->start_sem_);
+}
+
+
+void Agent::NotifyWait(const FunctionCallbackInfo<Value>& args) {
+  Agent* a = Unwrap(args);
+
+  a->wait_ = false;
+
+  int err = uv_async_send(&a->child_signal_);
+  CHECK_EQ(err, 0);
+}
+
+
+void Agent::SendCommand(const FunctionCallbackInfo<Value>& args) {
+  Agent* a = Unwrap(args);
+  Environment* env = a->child_env();
+  HandleScope scope(env->isolate());
+
+  String::Value v(args[0]);
+
+  v8::Debug::SendCommand(a->parent_env()->isolate(), *v, v.length());
+  if (a->dispatch_handler_ != NULL)
+    a->dispatch_handler_(a->parent_env());
+}
+
+
+void Agent::ThreadCb(Agent* agent) {
+  agent->WorkerRun();
+}
+
+
+void Agent::ChildSignalCb(uv_async_t* signal) {
+  Agent* a = ContainerOf(&Agent::child_signal_, signal);
+  Isolate* isolate = a->child_env()->isolate();
+
+  HandleScope scope(isolate);
+  Local<Object> api = PersistentToLocal(isolate, a->api_);
+
+  uv_mutex_lock(&a->message_mutex_);
+  while (!QUEUE_EMPTY(&a->messages_)) {
+    QUEUE* q = QUEUE_HEAD(&a->messages_);
+    AgentMessage* msg = ContainerOf(&AgentMessage::member, q);
+
+    // Time to close everything
+    if (msg->data() == NULL) {
+      QUEUE_REMOVE(q);
+      delete msg;
+
+      MakeCallback(isolate, api, "onclose", 0, NULL);
+      break;
+    }
+
+    // Waiting for client, do not send anything just yet
+    // TODO(indutny): move this to js-land
+    if (a->wait_)
+      break;
+
+    QUEUE_REMOVE(q);
+    Local<Value> argv[] = {
+      String::NewFromTwoByte(isolate,
+                             msg->data(),
+                             String::kNormalString,
+                             msg->length())
+    };
+
+    // Emit message
+    MakeCallback(isolate,
+                 api,
+                 "onmessage",
+                 ARRAY_SIZE(argv),
+                 argv);
+    delete msg;
+  }
+  uv_mutex_unlock(&a->message_mutex_);
+}
+
+
+void Agent::EnqueueMessage(AgentMessage* message) {
+  uv_mutex_lock(&message_mutex_);
+  QUEUE_INSERT_TAIL(&messages_, &message->member);
+  uv_mutex_unlock(&message_mutex_);
+  uv_async_send(&child_signal_);
+}
+
+
+void Agent::MessageHandler(const v8::Debug::Message& message) {
+  Isolate* isolate = message.GetIsolate();
+  Environment* env = Environment::GetCurrent(isolate);
+  Agent* a = env->debugger_agent();
+  CHECK_NE(a, NULL);
+  CHECK_EQ(isolate, a->parent_env()->isolate());
+
+  HandleScope scope(isolate);
+  Local<String> json = message.GetJSON();
+  String::Value v(json);
+
+  AgentMessage* msg = new AgentMessage(*v, v.length());
+  a->EnqueueMessage(msg);
+}
+
+}  // namespace debugger
+}  // namespace node
diff --git a/deps/debugger-agent/src/agent.h b/deps/debugger-agent/src/agent.h
new file mode 100644
index 00000000000..82db5e5e181
--- /dev/null
+++ b/deps/debugger-agent/src/agent.h
@@ -0,0 +1,64 @@
+// Copyright Fedor Indutny and other Node contributors.
+//
+// Permission is hereby granted, free of charge, to any person obtaining a
+// copy of this software and associated documentation files (the
+// "Software"), to deal in the Software without restriction, including
+// without limitation the rights to use, copy, modify, merge, publish,
+// distribute, sublicense, and/or sell copies of the Software, and to permit
+// persons to whom the Software is furnished to do so, subject to the
+// following conditions:
+//
+// The above copyright notice and this permission notice shall be included
+// in all copies or substantial portions of the Software.
+//
+// THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS
+// OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
+// MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN
+// NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM,
+// DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR
+// OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE
+// USE OR OTHER DEALINGS IN THE SOFTWARE.
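+//
+// AgentMessage owns a heap copy of a single UTF-16 debug-protocol payload
+// and links into the agent's message QUEUE; a NULL payload marks shutdown.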
+
+#ifndef DEPS_DEBUGGER_AGENT_SRC_AGENT_H_
+#define DEPS_DEBUGGER_AGENT_SRC_AGENT_H_
+
+#include "v8.h"
+#include "v8-debug.h"
+#include "queue.h"
+
+#include <assert.h>
+#include <string.h>
+
+namespace node {
+namespace debugger {
+
+class AgentMessage {
+ public:
+  AgentMessage(uint16_t* val, int length) : length_(length) {
+    if (val == NULL) {
+      data_ = val;
+    } else {
+      data_ = new uint16_t[length];
+      memcpy(data_, val, length * sizeof(*data_));
+    }
+  }
+
+  ~AgentMessage() {
+    delete[] data_;
+    data_ = NULL;
+  }
+
+  inline const uint16_t* data() const { return data_; }
+  inline int length() const { return length_; }
+
+  QUEUE member;
+
+ private:
+  uint16_t* data_;
+  int length_;
+};
+
+}  // namespace debugger
+}  // namespace node
+
+#endif  // DEPS_DEBUGGER_AGENT_SRC_AGENT_H_
diff --git a/deps/npm/.eslintrc b/deps/npm/.eslintrc
new file mode 100644
index 00000000000..ba331504210
--- /dev/null
+++ b/deps/npm/.eslintrc
@@ -0,0 +1,17 @@
+{
+  "env" : {
+    "node" : true
+  },
+  "rules" : {
+    "semi": [2, "never"],
+    "strict": 0,
+    "quotes": [1, "double", "avoid-escape"],
+    "no-use-before-define": 0,
+    "curly": 0,
+    "no-underscore-dangle": 0,
+    "no-lonely-if": 1,
+    "no-unused-vars": [2, {"vars" : "all", "args" : "after-used"}],
+    "no-mixed-requires": 0,
+    "space-infix-ops": 0
+  }
+}
diff --git a/deps/npm/.npmignore b/deps/npm/.npmignore
index 7232cea50a8..a128c9b604b 100644
--- a/deps/npm/.npmignore
+++ b/deps/npm/.npmignore
@@ -25,3 +25,5 @@ html/*.png
 /npm-*.tgz
 
 *.pyc
+
+/test/tap/builtin-config
diff --git a/deps/npm/.travis.yml b/deps/npm/.travis.yml
index 0fbe8dc335f..2734148642f 100644
--- a/deps/npm/.travis.yml
+++ b/deps/npm/.travis.yml
@@ -1,5 +1,11 @@
 language: node_js
-script: "npm run-script tap"
 node_js:
   - "0.11"
   - "0.10"
+env:
+  - DEPLOY_VERSION=testing
+before_install:
+  - "npm config set spin false"
+  - "npm install -g npm@^2"
+  - "sudo mkdir -p /var/run/couchdb"
+script: "npm run-script tap"
diff --git a/deps/npm/CHANGELOG.md b/deps/npm/CHANGELOG.md
index 330c1ac17ad..e67cd290927 100644
--- a/deps/npm/CHANGELOG.md
+++ b/deps/npm/CHANGELOG.md
@@ -1,3 +1,384 @@
+### v2.1.6 (2014-10-23):
+
+* [`681b398`](https://github.com/npm/npm/commit/681b3987a18e7aba0aaf78c91a23c7cc0ab82ce8)
+  [#6523](https://github.com/npm/npm/issues/6523) fix default `logelevel` doc
+  ([@KenanY](https://github.com/KenanY))
+* [`80b368f`](https://github.com/npm/npm/commit/80b368ffd786d4d008734b56c4a6fe12d2cb2926)
+  [#6528](https://github.com/npm/npm/issues/6528) `npm version` should work in
+  a git directory without git ([@terinjokes](https://github.com/terinjokes))
+* [`5f5f9e4`](https://github.com/npm/npm/commit/5f5f9e4ddf544c2da6adf3f8c885238b0e745076)
+  [#6483](https://github.com/npm/npm/issues/6483) `init-package-json@1.1.1`:
+  Properly pick up default values from environment variables.
+  ([@othiym23](https://github.com/othiym23))
+* [`a114870`](https://github.com/npm/npm/commit/a1148702f53f82d49606b2e4dac7581261fff442)
+  perl 5.18.x doesn't like -pi without filenames
+  ([@othiym23](https://github.com/othiym23))
+* [`de5ba00`](https://github.com/npm/npm/commit/de5ba007a48db876eb5bfb6156435f3512d58977)
+  `request@2.46.0`: Tests and cleanup.
+  ([@othiym23](https://github.com/othiym23))
+* [`76933f1`](https://github.com/npm/npm/commit/76933f169f17b5273b32e924a7b392d5729931a7)
+  `fstream-npm@1.0.1`: Always include `LICENSE[.*]`, `LICENCE[.*]`,
+  `CHANGES[.*]`, `CHANGELOG[.*]`, and `HISTORY[.*]`.
+  ([@jonathanong](https://github.com/jonathanong))
+
+### v2.1.5 (2014-10-16):
+
+* [`6a14b23`](https://github.com/npm/npm/commit/6a14b232a0e34158bd95bb25c607167be995c204)
+  [#6397](https://github.com/npm/npm/issues/6397) Defactor npmconf back into
+  npm. ([@othiym23](https://github.com/othiym23))
+* [`4000e33`](https://github.com/npm/npm/commit/4000e3333a76ca4844681efa8737cfac24b7c2c8)
+  [#6323](https://github.com/npm/npm/issues/6323) Install `peerDependencies`
+  from top. ([@othiym23](https://github.com/othiym23))
+* [`5d119ae`](https://github.com/npm/npm/commit/5d119ae246f27353b14ff063559d1ba8c616bb89)
+  [#6498](https://github.com/npm/npm/issues/6498) Better error messages on
+  malformed `.npmrc` properties. ([@nicks](https://github.com/nicks))
+* [`ae18efb`](https://github.com/npm/npm/commit/ae18efb65fed427b1ef18e4862885bf60b87b92e)
+  [#6093](https://github.com/npm/npm/issues/6093) Replace instances of 'hash'
+  with 'object' in documentation. ([@zeke](https://github.com/zeke))
+* [`53108b2`](https://github.com/npm/npm/commit/53108b276fec5f97a38250933a2768d58b6928da)
+  [#1558](https://github.com/npm/npm/issues/1558) Clarify how local paths
+  should be used. ([@KenanY](https://github.com/KenanY))
+* [`344fa1a`](https://github.com/npm/npm/commit/344fa1a219ac8867022df3dc58a47636dde8a242)
+  [#6488](https://github.com/npm/npm/issues/6488) Work around bug in marked.
+  ([@othiym23](https://github.com/othiym23))
+
+OUTDATED DEPENDENCY CLEANUP JAMBOREE
+
+* [`60c2942`](https://github.com/npm/npm/commit/60c2942e13655d9ecdf6e0f1f97f10cb71a75255)
+  `realize-package-specifier@1.2.0`: Handle names and rawSpecs more
+  consistently. ([@iarna](https://github.com/iarna))
+* [`1b5c95f`](https://github.com/npm/npm/commit/1b5c95fbda77b87342bd48c5ecac5b1fd571ccfe)
+  `sha@1.3.0`: Change line endings?
+  ([@ForbesLindesay](https://github.com/ForbesLindesay))
+* [`d7dee3f`](https://github.com/npm/npm/commit/d7dee3f3f7d9e7c2061a4ecb4dd93e3e4bfe4f2e)
+  `request@2.45.0`: Dependency updates, better proxy support, better
+  compressed response handling, lots of 'use strict'.
+  ([@mikeal](https://github.com/mikeal))
+* [`3d75180`](https://github.com/npm/npm/commit/3d75180c2cc79fa3adfa0e4cb783a27192189a65)
+  `opener@1.4.0`: Added gratuitous return.
+  ([@Domenic](https://github.com/Domenic))
+* [`8e2703f`](https://github.com/npm/npm/commit/8e2703f78d280d1edeb749e257dda1f288bad6e3)
+  `retry@0.6.1` / `npm-registry-client@3.2.4`: Change of ownership.
+  ([@tim-kos](https://github.com/tim-kos))
+* [`c87b00f`](https://github.com/npm/npm/commit/c87b00f82f92434ee77831915012c77a6c244c39)
+  `once@1.3.1`: Wrap once with wrappy. ([@isaacs](https://github.com/isaacs))
+* [`01ec790`](https://github.com/npm/npm/commit/01ec790fd47def56eda6abb3b8d809093e8f493f)
+  `npm-user-validate@0.1.1`: Correct repository URL.
+  ([@robertkowalski](https://github.com/robertkowalski))
+* [`389e52c`](https://github.com/npm/npm/commit/389e52c2d94c818ca8935ccdcf392994fec564a2)
+  `glob@4.0.6`: Now absolutely requires `graceful-fs`.
+  ([@isaacs](https://github.com/isaacs))
+* [`e15ab15`](https://github.com/npm/npm/commit/e15ab15a27a8f14cf0d9dc6f11dee452080378a0)
+  `ini@1.3.0`: Tighten up whitespace handling.
+  ([@isaacs](https://github.com/isaacs))
+* [`7610f3e`](https://github.com/npm/npm/commit/7610f3e62e699292ece081bfd33084d436e3246d)
+  `archy@1.0.0` ([@substack](https://github.com/substack))
+* [`9c13149`](https://github.com/npm/npm/commit/9c1314985e513e20ffa3ea0ca333ba2ab78299c9)
+  `semver@4.1.0`: Add support for prerelease identifiers.
+  ([@bromanko](https://github.com/bromanko))
+* [`f096c25`](https://github.com/npm/npm/commit/f096c250441b031d758f03afbe8d2321f94c7703)
+  `graceful-fs@3.0.4`: Add a bunch of additional tests, skip the unfortunate
+  complications of `graceful-fs@3.0.3`. ([@isaacs](https://github.com/isaacs))
+
+### v2.1.4 (2014-10-09):
+
+* [`3aeb440`](https://github.com/npm/npm/commit/3aeb4401444fad83cc7a8d11bf2507658afa5248)
+  [#6442](https://github.com/npm/npm/issues/6442) proxying git needs
+  `GIT_SSL_CAINFO` ([@wmertens](https://github.com/wmertens))
+* [`a8da8d6`](https://github.com/npm/npm/commit/a8da8d6e0cd56d97728c0b76b51604ee06ef6264)
+  [#6413](https://github.com/npm/npm/issues/6413) write builtin config on any
+  global npm install ([@isaacs](https://github.com/isaacs))
+* [`9e4d632`](https://github.com/npm/npm/commit/9e4d632c0142ba55df07d624667738b8727336fc)
+  [#6343](https://github.com/npm/npm/issues/6343) don't pass run arguments to
+  pre & post scripts ([@TheLudd](https://github.com/TheLudd))
+* [`d831b1f`](https://github.com/npm/npm/commit/d831b1f7ca1a9921ea5b394e39b7130ecbc6d7b4)
+  [#6399](https://github.com/npm/npm/issues/6399) race condition: inflight
+  installs, prevent `peerDependency` problems
+  ([@othiym23](https://github.com/othiym23))
+* [`82b775d`](https://github.com/npm/npm/commit/82b775d6ff34c4beb6c70b2344d491a9f2026577)
+  [#6384](https://github.com/npm/npm/issues/6384) race condition: inflight
+  caching by URL rather than semver range
+  ([@othiym23](https://github.com/othiym23))
+* [`7bee042`](https://github.com/npm/npm/commit/7bee0429066fedcc9e6e962c043eb740b3792809)
+  `inflight@1.0.4`: callback can take arbitrary number of parameters
+  ([@othiym23](https://github.com/othiym23))
+* [`3bff494`](https://github.com/npm/npm/commit/3bff494f4abf17d6d7e0e4a3a76cf7421ecec35a)
+  [#5195](https://github.com/npm/npm/issues/5195) fixed regex color regression
+  for `npm search` ([@chrismeyersfsu](https://github.com/chrismeyersfsu))
+* [`33ba2d5`](https://github.com/npm/npm/commit/33ba2d585160a0a2a322cb76c4cd989acadcc984)
+  [#6387](https://github.com/npm/npm/issues/6387) allow `npm view global` if
+  package is specified ([@evanlucas](https://github.com/evanlucas))
+* [`99c4cfc`](https://github.com/npm/npm/commit/99c4cfceed413396d952cf05f4e3c710f9682c23)
+  [#6388](https://github.com/npm/npm/issues/6388) npm-publish →
+  npm-developers(7) ([@kennydude](https://github.com/kennydude))
+
+TEST CLEANUP EXTRAVAGANZA:
+
+* [`8d6bfcb`](https://github.com/npm/npm/commit/8d6bfcb88408f5885a2a67409854c43e5c3a23f6)
+  tap tests run with no system-wide side effects
+  ([@chrismeyersfsu](https://github.com/chrismeyersfsu))
+* [`7a1472f`](https://github.com/npm/npm/commit/7a1472fbdbe99956ad19f629e7eb1cc07ba026ef)
+  added npm cache cleanup script
+  ([@chrismeyersfsu](https://github.com/chrismeyersfsu))
+* [`0ce6a37`](https://github.com/npm/npm/commit/0ce6a3752fa9119298df15671254db6bc1d8e64c)
+  stripped out dead test code (othiym23)
+* replace spawn with common.npm (@chrismeyersfsu):
+    * [`0dcd614`](https://github.com/npm/npm/commit/0dcd61446335eaf541bf5f2d5186ec1419f86a42)
+      test/tap/cache-shasum-fork.js
+    * [`97f861c`](https://github.com/npm/npm/commit/97f861c967606a7e51e3d5047cf805d9d1adea5a)
+      test/tap/false_name.js
+    * [`d01b3de`](https://github.com/npm/npm/commit/d01b3de6ce03f25bbf3db97bfcd3cc85830d6801)
+      test/tap/git-cache-locking.js
+    * [`7b63016`](https://github.com/npm/npm/commit/7b63016778124c6728d6bd89a045c841ae3900b6)
+      test/tap/pack-scoped.js
+    * [`c877553`](https://github.com/npm/npm/commit/c877553265c39673e03f0a97972f692af81a595d)
+      test/tap/scripts-whitespace-windows.js
+    * [`df98525`](https://github.com/npm/npm/commit/df98525331e964131299d457173c697cfb3d95b9)
+      test/tap/prepublish.js
+    * [`99c4cfc`](https://github.com/npm/npm/commit/99c4cfceed413396d952cf05f4e3c710f9682c23)
+      test/tap/prune.js
+
+### v2.1.3 (2014-10-02):
+
+BREAKING CHANGE FOR THE SQRT(i) PEOPLE ACTUALLY USING `npm submodule`:
+
+* [`1e64473`](https://github.com/npm/npm/commit/1e6447360207f45ad6188e5780fdf4517de6e23d)
+  `rm -rf npm submodule` command, which has been broken since the Carter
+  Administration ([@isaacs](https://github.com/isaacs))
+
+BREAKING CHANGE IF YOU ARE FOR SOME REASON STILL USING NODE 0.6 AND YOU SHOULD
+NOT BE DOING THAT CAN YOU NOT:
+
+* [`3e431f9`](https://github.com/npm/npm/commit/3e431f9d6884acb4cde8bcb8a0b122a76b33ee1d)
+  [joyent/node#8492](https://github.com/joyent/node/issues/8492) bye bye
+  customFds, hello stdio ([@othiym23](https://github.com/othiym23))
+
+Other changes:
+
+* [`ea607a8`](https://github.com/npm/npm/commit/ea607a8a20e891ad38eed11b5ce2c3c0a65484b9)
+  [#6372](https://github.com/npm/npm/issues/6372) noisily error (without
+  aborting) on multi-{install,build} ([@othiym23](https://github.com/othiym23))
+* [`3ee2799`](https://github.com/npm/npm/commit/3ee2799b629fd079d2db21d7e8f25fa7fa1660d0)
+  [#6372](https://github.com/npm/npm/issues/6372) only make cache creation
+  requests in flight ([@othiym23](https://github.com/othiym23))
+* [`1a90ec2`](https://github.com/npm/npm/commit/1a90ec2f2cfbefc8becc6ef0c480e5edacc8a4cb)
+  [#6372](https://github.com/npm/npm/issues/6372) wait to put Git URLs in
+  flight until normalized ([@othiym23](https://github.com/othiym23))
+* [`664795b`](https://github.com/npm/npm/commit/664795bb7d8da7142417b3f4ef5986db3a394071)
+  [#6372](https://github.com/npm/npm/issues/6372) log what is and isn't in
+  flight ([@othiym23](https://github.com/othiym23))
+* [`00ef580`](https://github.com/npm/npm/commit/00ef58025a1f52dfabf2c4dc3898621d16a6e062)
+  `inflight@1.0.3`: fix largely theoretical race condition, because we really
+  really hate race conditions ([@isaacs](https://github.com/isaacs))
+* [`1cde465`](https://github.com/npm/npm/commit/1cde4658d897ae0f93ff1d65b258e1571b391182)
+  [#6363](https://github.com/npm/npm/issues/6363)
+  `realize-package-specifier@1.1.0`: handle local dependencies better
+  ([@iarna](https://github.com/iarna))
+* [`86f084c`](https://github.com/npm/npm/commit/86f084c6c6d7935cd85d72d9d94b8784c914d51e)
+  `realize-package-specifier@1.0.2`: dependency realization! in its own
+  module!
+ ([@iarna](https://github.com/iarna)) +* [`553d830`](https://github.com/npm/npm/commit/553d830334552b83606b6bebefd821c9ea71e964) + `npm-package-arg@2.1.3`: simplified semver, better tests + ([@iarna](https://github.com/iarna)) +* [`bec9b61`](https://github.com/npm/npm/commit/bec9b61a316c19f5240657594f0905a92a474352) + `readable-stream@1.0.32`: for some reason + ([@rvagg](https://github.com/rvagg)) +* [`ff08ec5`](https://github.com/npm/npm/commit/ff08ec5f6d717bdbd559de0b2ede769306a9a763) + `dezalgo@1.0.1`: use wrappy for instrumentability + ([@isaacs](https://github.com/isaacs)) + +### v2.1.2 (2014-09-29): + +* [`a1aa20e`](https://github.com/npm/npm/commit/a1aa20e44bb8285c6be1e7fa63b9da920e3a70ed) + [#6282](https://github.com/npm/npm/issues/6282) + `normalize-package-data@1.0.3`: don't prune bundledDependencies + ([@isaacs](https://github.com/isaacs)) +* [`a1f5fe1`](https://github.com/npm/npm/commit/a1f5fe1005043ce20a06e8b17a3e201aa3215357) + move locks back into cache, now path-aware + ([@othiym23](https://github.com/othiym23)) +* [`a432c4b`](https://github.com/npm/npm/commit/a432c4b48c881294d6d79b5f41c2e1c16ad15a8a) + convert lib/utils/tar.js to use atomic streams + ([@othiym23](https://github.com/othiym23)) +* [`b8c3c74`](https://github.com/npm/npm/commit/b8c3c74a3c963564233204161cc263e0912c930b) + `fs-write-stream-atomic@1.0.2`: Now works with streams1 fs.WriteStreams. + ([@isaacs](https://github.com/isaacs)) +* [`c7ab76f`](https://github.com/npm/npm/commit/c7ab76f44cce5f42add5e3ba879bd10e7e00c3e6) + logging cleanup ([@othiym23](https://github.com/othiym23)) +* [`4b2d95d`](https://github.com/npm/npm/commit/4b2d95d0641435b09d047ae5cb2226f292bf38f0) + [#6329](https://github.com/npm/npm/issues/6329) efficiently validate tmp + tarballs safely ([@othiym23](https://github.com/othiym23)) + +### v2.1.1 (2014-09-26): + +* [`563225d`](https://github.com/npm/npm/commit/563225d813ea4c12f46d4f7821ac7f76ba8ee2d6) + [#6318](https://github.com/npm/npm/issues/6318) clean up locking; prefix + lockfile with "." ([@othiym23](https://github.com/othiym23)) +* [`c7f30e4`](https://github.com/npm/npm/commit/c7f30e4550fea882d31fcd4a55b681cd30713c44) + [#6318](https://github.com/npm/npm/issues/6318) remove locking code around + tarball packing and unpacking ([@othiym23](https://github.com/othiym23)) + +### v2.1.0 (2014-09-25): + +NEW FEATURE: + +* [`3635601`](https://github.com/npm/npm/commit/36356011b6f2e6a5a81490e85a0a44eb27199dd7) + [#5520](https://github.com/npm/npm/issues/5520) Add `'npm view .'`. + ([@evanlucas](https://github.com/evanlucas)) + +Other changes: + +* [`f24b552`](https://github.com/npm/npm/commit/f24b552b596d0627549cdd7c2d68fcf9006ea50a) + [#6294](https://github.com/npm/npm/issues/6294) Lock cache → lock cache + target. ([@othiym23](https://github.com/othiym23)) +* [`ad54450`](https://github.com/npm/npm/commit/ad54450104f94c82c501138b4eee488ce3a4555e) + [#6296](https://github.com/npm/npm/issues/6296) Ensure that npm-debug.log + file is created when rollbacks are done. + ([@isaacs](https://github.com/isaacs)) +* [`6810071`](https://github.com/npm/npm/commit/681007155a40ac9d165293bd6ec5d8a1423ccfca) + docs: Default loglevel "http" → "warn". + ([@othiym23](https://github.com/othiym23)) +* [`35ac89a`](https://github.com/npm/npm/commit/35ac89a940f23db875e882ce2888208395130336) + Skip installation of installed scoped packages. 
+ ([@timoxley](https://github.com/timoxley)) +* [`e468527`](https://github.com/npm/npm/commit/e468527256ec599892b9b88d61205e061d1ab735) + Ensure cleanup executes for scripts-whitespace-windows test. + ([@timoxley](https://github.com/timoxley)) +* [`ef9101b`](https://github.com/npm/npm/commit/ef9101b7f346797749415086956a0394528a12c4) + Ensure cleanup executes for packed-scope test. + ([@timoxley](https://github.com/timoxley)) +* [`69b4d18`](https://github.com/npm/npm/commit/69b4d18cdbc2ae04c9afaffbd273b436a394f398) + `fs-write-stream-atomic@1.0.1`: Fix a race condition in our race-condition + fixer. ([@isaacs](https://github.com/isaacs)) +* [`26b17ff`](https://github.com/npm/npm/commit/26b17ff2e3b21ee26c6fdbecc8273520cff45718) + [#6272](https://github.com/npm/npm/issues/6272) `npmconf` decides what the + default prefix is. ([@othiym23](https://github.com/othiym23)) +* [`846faca`](https://github.com/npm/npm/commit/846facacc6427dafcf5756dcd36d9036539938de) + Fix development dependency is preferred over dependency. + ([@andersjanmyr](https://github.com/andersjanmyr)) +* [`9d1a9db`](https://github.com/npm/npm/commit/9d1a9db3af5adc48a7158a5a053eeb89ee41a0e7) + [#3265](https://github.com/npm/npm/issues/3265) Re-apply a71615a. Fixes + [#3265](https://github.com/npm/npm/issues/3265) again, with a test! + ([@glasser](https://github.com/glasser)) +* [`1d41db0`](https://github.com/npm/npm/commit/1d41db0b2744a7bd50971c35cc060ea0600fb4bf) + `marked-man@0.1.4`: Fixes formatting of synopsis blocks in man docs. + ([@kapouer](https://github.com/kapouer)) +* [`a623da0`](https://github.com/npm/npm/commit/a623da01bea1b2d3f3a18b9117cfd2d8e3cbdd77) + [#5867](https://github.com/npm/npm/issues/5867) Specify dummy git template + dir when cloning to prevent copying hooks. + ([@boneskull](https://github.com/boneskull)) + +### v2.0.2 (2014-09-19): + +* [`42c872b`](https://github.com/npm/npm/commit/42c872b32cadc0e555638fc78eab3a38a04401d8) + [#5920](https://github.com/npm/npm/issues/5920) + `fs-write-stream-atomic@1.0.0` ([@isaacs](https://github.com/isaacs)) +* [`6784767`](https://github.com/npm/npm/commit/6784767fe15e28b44c81a1d4bb1738c642a65d78) + [#5920](https://github.com/npm/npm/issues/5920) make all write streams atomic + ([@isaacs](https://github.com/isaacs)) +* [`f6fac00`](https://github.com/npm/npm/commit/f6fac000dd98ebdd5ea1d5921175735d463d328b) + [#5920](https://github.com/npm/npm/issues/5920) barf on 0-length cached + tarballs ([@isaacs](https://github.com/isaacs)) +* [`3b37592`](https://github.com/npm/npm/commit/3b37592a92ea98336505189ae8ca29248b0589f4) + `write-file-atomic@1.1.0`: use graceful-fs + ([@iarna](https://github.com/iarna)) + +### v2.0.1 (2014-09-18): + +* [`74c5ab0`](https://github.com/npm/npm/commit/74c5ab0a676793c6dc19a3fd5fe149f85fecb261) + [#6201](https://github.com/npm/npm/issues/6201) `npmconf@2.1.0`: scope + always-auth to registry URI ([@othiym23](https://github.com/othiym23)) +* [`774b127`](https://github.com/npm/npm/commit/774b127da1dd6fefe2f1299e73505d9146f00294) + [#6201](https://github.com/npm/npm/issues/6201) `npm-registry-client@3.2.2`: + use scoped always-auth settings ([@othiym23](https://github.com/othiym23)) +* [`f2d2190`](https://github.com/npm/npm/commit/f2d2190aa365d22378d03afab0da13f95614a583) + [#6201](https://github.com/npm/npm/issues/6201) support saving + `--always-auth` when logging in ([@othiym23](https://github.com/othiym23)) +* [`17c941a`](https://github.com/npm/npm/commit/17c941a2d583210fe97ed47e2968d94ce9f774ba) + 
[#6163](https://github.com/npm/npm/issues/6163) use `write-file-atomic` + instead of `fs.writeFile()` ([@fiws](https://github.com/fiws)) +* [`fb5724f`](https://github.com/npm/npm/commit/fb5724fd98e1509c939693568df83d11417ea337) + [#5925](https://github.com/npm/npm/issues/5925) `npm init -f`: allow `npm + init` to run without prompting + ([@michaelnisi](https://github.com/michaelnisi)) +* [`b706d63`](https://github.com/npm/npm/commit/b706d637d5965dbf8f7ce07dc5c4bc80887f30d8) + [#3059](https://github.com/npm/npm/issues/3059) disable prepublish when + running `npm install --production` + ([@jussi](https://github.com/jussi)-kalliokoski) +* [`119f068`](https://github.com/npm/npm/commit/119f068eae2a36fa8b9c9ca557c70377792243a4) + attach the node version used when publishing a package to its registry + metadata ([@othiym23](https://github.com/othiym23)) +* [`8fe0081`](https://github.com/npm/npm/commit/8fe008181665519c2ac201ee432a3ece9798c31f) + seriously, don't use `npm -g update npm` + ([@thomblake](https://github.com/thomblake)) +* [`ea5b3d4`](https://github.com/npm/npm/commit/ea5b3d446b86dcabb0dbc6dba374d3039342ecb3) + `request@2.44.0` ([@othiym23](https://github.com/othiym23)) + +### v2.0.0 (2014-09-12): + +BREAKING CHANGES: + +* [`4378a17`](https://github.com/npm/npm/commit/4378a17db340404a725ffe2eb75c9936f1612670) + `semver@4.0.0`: prerelease versions no longer show up in ranges; `^0.x.y` + behaves the way it did in `semver@2` rather than `semver@3`; docs have been + reorganized for comprehensibility ([@isaacs](https://github.com/isaacs)) +* [`c6ddb64`](https://github.com/npm/npm/commit/c6ddb6462fe32bf3a27b2c4a62a032a92e982429) + npm now assumes that node is newer than 0.6 + ([@isaacs](https://github.com/isaacs)) + +Other changes: + +* [`ea515c3`](https://github.com/npm/npm/commit/ea515c3b858bf493a7b87fa4cdc2110a0d9cef7f) + [#6043](https://github.com/npm/npm/issues/6043) `slide@1.1.6`: wait until all + callbacks have finished before proceeding + ([@othiym23](https://github.com/othiym23)) +* [`0b0a59d`](https://github.com/npm/npm/commit/0b0a59d504f20f424294b1590ace73a7464f0378) + [#6043](https://github.com/npm/npm/issues/6043) defer rollbacks until just + before the CLI exits ([@isaacs](https://github.com/isaacs)) +* [`a11c88b`](https://github.com/npm/npm/commit/a11c88bdb1488b87d8dcac69df9a55a7a91184b6) + [#6175](https://github.com/npm/npm/issues/6175) pack scoped packages + correctly ([@othiym23](https://github.com/othiym23)) +* [`e4e48e0`](https://github.com/npm/npm/commit/e4e48e037d4e95fdb6acec80b04c5c6eaee59970) + [#6121](https://github.com/npm/npm/issues/6121) `read-installed@3.1.2`: don't + mark linked dev dependencies as extraneous + ([@isaacs](https://github.com/isaacs)) +* [`d673e41`](https://github.com/npm/npm/commit/d673e4185d43362c2b2a91acbca8c057e7303c7b) + `cmd-shim@2.0.1`: depend on `graceful-fs` directly + ([@ForbesLindesay](https://github.com/ForbesLindesay)) +* [`9d54d45`](https://github.com/npm/npm/commit/9d54d45e602d595bdab7eae09b9fa1dc46370147) + `npm-registry-couchapp@2.5.3`: make tests more reliable on Travis + ([@iarna](https://github.com/iarna)) +* [`673d738`](https://github.com/npm/npm/commit/673d738c6142c3d043dcee0b7aa02c9831a2e0ca) + ensure permissions are set correctly in cache when running as root + ([@isaacs](https://github.com/isaacs)) +* [`6e6a5fb`](https://github.com/npm/npm/commit/6e6a5fb74af10fd345411df4e121e554e2e3f33e) + prepare for upgrade to `node-semver@4.0.0` + ([@isaacs](https://github.com/isaacs)) +* 
[`ab8dd87`](https://github.com/npm/npm/commit/ab8dd87b943262f5996744e8d4cc30cc9358b7d7) + swap out `ronn` for `marked-man@0.1.3` ([@isaacs](https://github.com/isaacs)) +* [`803da54`](https://github.com/npm/npm/commit/803da5404d5a0b7c9defa3fe7fa0f2d16a2b19d3) + `npm-registry-client@3.2.0`: prepare for `node-semver@4.0.0` and include more + error information ([@isaacs](https://github.com/isaacs)) +* [`4af0e71`](https://github.com/npm/npm/commit/4af0e7134f5757c3d456d83e8349224a4ba12660) + make default error display less scary ([@isaacs](https://github.com/isaacs)) +* [`4fd9e79`](https://github.com/npm/npm/commit/4fd9e7901a15abff7a3dd478d99ce239b9580bca) + `npm-registry-client@3.2.1`: handle errors returned by the registry much, + much better ([@othiym23](https://github.com/othiym23)) +* [`ca791e2`](https://github.com/npm/npm/commit/ca791e27e97e51c1dd491bff6622ac90b54c3e23) + restore a long (always?) missing pass for deduping + ([@othiym23](https://github.com/othiym23)) +* [`ca0ef0e`](https://github.com/npm/npm/commit/ca0ef0e99bbdeccf28d550d0296baa4cb5e7ece2) + correctly interpret relative paths for local dependencies + ([@othiym23](https://github.com/othiym23)) +* [`5eb8db2`](https://github.com/npm/npm/commit/5eb8db2c370eeb4cd34f6e8dc6a935e4ea325621) + `npm-package-arg@2.1.2`: support git+file:// URLs for local bare repos + ([@othiym23](https://github.com/othiym23)) +* [`860a185`](https://github.com/npm/npm/commit/860a185c43646aca84cb93d1c05e2266045c316b) + tweak docs to no longer advocate checking in `node_modules` + ([@hunterloftis](https://github.com/hunterloftis)) +* [`80e9033`](https://github.com/npm/npm/commit/80e9033c40e373775e35c674faa6c1948661782b) + add links to nodejs.org downloads to docs + ([@meetar](https://github.com/meetar)) + ### v1.4.28 (2014-09-12): * [`f4540b6`](https://github.com/npm/npm/commit/f4540b6537a87e653d7495a9ddcf72949fdd4d14) @@ -8,16 +389,101 @@ callbacks have finished before proceeding ([@othiym23](https://github.com/othiym23)) +### v2.0.0-beta.3 (2014-09-04): + +* [`fa79413`](https://github.com/npm/npm/commit/fa794138bec8edb7b88639db25ee9c010d2f4c2b) + [#6119](https://github.com/npm/npm/issues/6119) fall back to registry installs + if package.json is missing in a local directory ([@iarna](https://github.com/iarna)) +* [`16073e2`](https://github.com/npm/npm/commit/16073e2d8ae035961c4c189b602d4aacc6d6b387) + `npm-package-arg@2.1.0`: support file URIs as local specs + ([@othiym23](https://github.com/othiym23)) +* [`9164acb`](https://github.com/npm/npm/commit/9164acbdee28956fa816ce5e473c559395ae4ec2) + `github-url-from-username-repo@1.0.2`: don't match strings that are already + URIs ([@othiym23](https://github.com/othiym23)) +* [`4067d6b`](https://github.com/npm/npm/commit/4067d6bf303a69be13f3af4b19cf4fee1b0d3e12) + [#5629](https://github.com/npm/npm/issues/5629) support saving of local packages + in `package.json` ([@dylang](https://github.com/dylang)) +* [`1b2ffdf`](https://github.com/npm/npm/commit/1b2ffdf359a8c897a78f91fc5a5d535c97aaec97) + [#6097](https://github.com/npm/npm/issues/6097) document scoped packages + ([@seldo](https://github.com/seldo)) +* [`0a67d53`](https://github.com/npm/npm/commit/0a67d536067c4808a594d81288d34c0f7e97e105) + [#6007](https://github.com/npm/npm/issues/6007) `request@2.42.0`: properly + set headers on proxy requests ([@isaacs](https://github.com/isaacs)) +* [`9bac6b8`](https://github.com/npm/npm/commit/9bac6b860b674d24251bb7b8ba412fdb26cbc836) + `npmconf@2.0.8`: disallow semver ranges in tag configuration + 
([@isaacs](https://github.com/isaacs)) +* [`d2d4d7c`](https://github.com/npm/npm/commit/d2d4d7cd3c32f91a87ffa11fe464d524029011c3) + [#6082](https://github.com/npm/npm/issues/6082) don't allow tagging with a + semver range as the tag name ([@isaacs](https://github.com/isaacs)) + ### v1.4.27 (2014-09-04): * [`4cf3c8f`](https://github.com/npm/npm/commit/4cf3c8fd78c9e2693a5f899f50c28f4823c88e2e) - [#6007](https://github.com/npm/npm/issues/6007) `request@2.42.0`: properly set + [#6007](https://github.com/npm/npm/issues/6007) request@2.42.0: properly set headers on proxy requests ([@isaacs](https://github.com/isaacs)) * [`403cb52`](https://github.com/npm/npm/commit/403cb526be1472bb7545fa8e62d4976382cdbbe5) - [#6055](https://github.com/npm/npm/issues/6055) `npmconf@1.1.8`: restore + [#6055](https://github.com/npm/npm/issues/6055) npmconf@1.1.8: restore case-insensitivity of environmental config ([@iarna](https://github.com/iarna)) +### v2.0.0-beta.2 (2014-08-29): + +SPECIAL LABOR DAY WEEKEND RELEASE PARTY WOOO + +* [`ed207e8`](https://github.com/npm/npm/commit/ed207e88019de3150037048df6267024566e1093) + `npm-registry-client@3.1.7`: Clean up auth logic and improve logging around + auth decisions. Also error on trying to change a user document without + writing to it. ([@othiym23](https://github.com/othiym23)) +* [`66c7423`](https://github.com/npm/npm/commit/66c7423b7fb07a326b83c83727879410d43c439f) + `npmconf@2.0.7`: support -C as an alias for --prefix + ([@isaacs](https://github.com/isaacs)) +* [`0dc6a07`](https://github.com/npm/npm/commit/0dc6a07c778071c94c2251429c7d107e88a45095) + [#6059](https://github.com/npm/npm/issues/6059) run commands in prefix, not + cwd ([@isaacs](https://github.com/isaacs)) +* [`65d2179`](https://github.com/npm/npm/commit/65d2179af96737eb9038eaa24a293a62184aaa13) + `github-url-from-username-repo@1.0.1`: part 3 handle slashes in branch names + ([@robertkowalski](https://github.com/robertkowalski)) +* [`e8d75d0`](https://github.com/npm/npm/commit/e8d75d0d9f148ce2b3e8f7671fa281945bac363d) + [#6057](https://github.com/npm/npm/issues/6057) `read-installed@3.1.1`: + properly handle extraneous dev dependencies of required dependencies + ([@othiym23](https://github.com/othiym23)) +* [`0602f70`](https://github.com/npm/npm/commit/0602f708f070d524ad41573afd4c57171cab21ad) + [#6064](https://github.com/npm/npm/issues/6064) ls: do not show deps of + extraneous deps ([@isaacs](https://github.com/isaacs)) + +### v2.0.0-beta.1 (2014-08-28): + +* [`78a1fc1`](https://github.com/npm/npm/commit/78a1fc12307a0cbdbc944775ed831b876ee65855) + `github-url-from-git@1.4.0`: add support for git+https and git+ssh + ([@stefanbuck](https://github.com/stefanbuck)) +* [`bf247ed`](https://github.com/npm/npm/commit/bf247edf5429c6b3ec4d4cb798fa0eb0a9c19fc1) + `columnify@1.2.1` ([@othiym23](https://github.com/othiym23)) +* [`4bbe682`](https://github.com/npm/npm/commit/4bbe682a6d4eabcd23f892932308c9f228bf4de3) + `cmd-shim@2.0.0`: upgrade to graceful-fs 3 + ([@ForbesLindesay](https://github.com/ForbesLindesay)) +* [`ae1d590`](https://github.com/npm/npm/commit/ae1d590bdfc2476a4ed446e760fea88686e3ae05) + `npm-package-arg@2.0.4`: accept slashes in branch names + ([@thealphanerd](https://github.com/thealphanerd)) +* [`b2f51ae`](https://github.com/npm/npm/commit/b2f51aecadf585711e145b6516f99e7c05f53614) + `semver@3.0.1`: semver.clean() is cleaner + ([@isaacs](https://github.com/isaacs)) +* [`1d041a8`](https://github.com/npm/npm/commit/1d041a8a5ebd5bf6cecafab2072d4ec07823adab) + 
`github-url-from-username-repo@1.0.0`: accept slashes in branch names + ([@robertkowalski](https://github.com/robertkowalski)) +* [`02c85d5`](https://github.com/npm/npm/commit/02c85d592c4058e5d9eafb0be36b6743ae631998) + `async-some@1.0.1` ([@othiym23](https://github.com/othiym23)) +* [`5af493e`](https://github.com/npm/npm/commit/5af493efa8a463cd1acc4a9a394699e2c0793b9c) + ensure lifecycle spawn errors caught properly + ([@isaacs](https://github.com/isaacs)) +* [`60fe012`](https://github.com/npm/npm/commit/60fe012fac9570d6c72554cdf34a6fa95bf0f0a6) + `npmconf@2.0.6`: init.version defaults to 1.0.0 + ([@isaacs](https://github.com/isaacs)) +* [`b4c717b`](https://github.com/npm/npm/commit/b4c717bbf58fb6a0d64ad229036c79a184297ee2) + `npm-registry-client@3.1.4`: properly encode % in passwords + ([@isaacs](https://github.com/isaacs)) +* [`7b55f44`](https://github.com/npm/npm/commit/7b55f44420252baeb3f30da437d22956315c31c9) + doc: Fix 'npm help index' ([@isaacs](https://github.com/isaacs)) + ### v1.4.26 (2014-08-28): * [`eceea95`](https://github.com/npm/npm/commit/eceea95c804fa15b18e91c52c0beb08d42a3e77d) @@ -40,6 +506,51 @@ * [`91cfb58`](https://github.com/npm/npm/commit/91cfb58dda851377ec604782263519f01fd96ad8) doc: Fix 'npm help index' ([@isaacs](https://github.com/isaacs)) +### v2.0.0-beta.0 (2014-08-21): + +* [`685f8be`](https://github.com/npm/npm/commit/685f8be1f2770cc75fd0e519a8d7aac72735a270) + `npm-registry-client@3.1.3`: Print the notification header returned by the + registry, and make sure status codes are printed without gratuitous quotes + around them. ([@isaacs](https://github.com/isaacs) / + [@othiym23](https://github.com/othiym23)) +* [`a8cb676`](https://github.com/npm/npm/commit/a8cb676aef0561eaf04487d2719672b097392c85) + [#5900](https://github.com/npm/npm/issues/5900) remove `npm` from its own + `engines` field in `package.json`. None of us remember why it was there. + ([@timoxley](https://github.com/timoxley)) +* [`6c47201`](https://github.com/npm/npm/commit/6c47201a7d071e8bf091b36933daf4199cc98e80) + [#5752](https://github.com/npm/npm/issues/5752), + [#6013](https://github.com/npm/npm/issues/6013) save git URLs correctly in + `_resolved` fields ([@isaacs](https://github.com/isaacs)) +* [`e4e1223`](https://github.com/npm/npm/commit/e4e1223a91c37688ba3378e1fc9d5ae045654d00) + [#5936](https://github.com/npm/npm/issues/5936) document the use of tags in + `package.json` ([@KenanY](https://github.com/KenanY)) +* [`c92b8d4`](https://github.com/npm/npm/commit/c92b8d4db7bde2a501da5b7d612684de1d629a42) + [#6004](https://github.com/npm/npm/issues/6004) manually installed scoped + packages are tracked correctly ([@dead](https://github.com/dead)-horse) +* [`21ca0aa`](https://github.com/npm/npm/commit/21ca0aaacbcfe2b89b0a439d914da0cae62de550) + [#5945](https://github.com/npm/npm/issues/5945) link scoped packages + correctly ([@dead](https://github.com/dead)-horse) +* [`16bead7`](https://github.com/npm/npm/commit/16bead7f2c82aec35b83ff0ec04df051ba456764) + [#5958](https://github.com/npm/npm/issues/5958) ensure that file streams work + in all versions of node ([@dead](https://github.com/dead)-horse) +* [`dbf0cab`](https://github.com/npm/npm/commit/dbf0cab29d0db43ac95e4b5a1fbdea1e0af75f10) + you can now pass quoted args to `npm run-script` + ([@bcoe](https://github.com/bcoe)) +* [`0583874`](https://github.com/npm/npm/commit/05838743f01ccb8d2432b3858d66847002fb62df) + `tar@1.0.1`: Add test for removing an extract target immediately after + unpacking. 
+ ([@isaacs](https://github.com/isaacs)) +* [`cdf3b04`](https://github.com/npm/npm/commit/cdf3b0428bc0b0183fb41dcde9e34e8f42c5e3a7) + `lockfile@1.0.0`: Fix incorrect interaction between `wait`, `stale`, and + `retries` options. Part 2 of race condition leading to `ENOENT` + ([@isaacs](https://github.com/isaacs)) + errors. +* [`22d72a8`](https://github.com/npm/npm/commit/22d72a87a9e1a9ab56d9585397f63551887d9125) + `fstream@1.0.2`: Fix a double-finish call which can result in excess FS + operations after the `close` event. Part 1 of race condition leading to + `ENOENT` errors. + ([@isaacs](https://github.com/isaacs)) + ### v1.4.25 (2014-08-21): * [`64c0ec2`](https://github.com/npm/npm/commit/64c0ec241ef5d83761ca8de54acb3c41b079956e) @@ -61,6 +572,48 @@ leading to `ENOENT` errors. ([@isaacs](https://github.com/isaacs)) +### v2.0.0-alpha.7 (2014-08-14): + +* [`f23f1d8`](https://github.com/npm/npm/commit/f23f1d8e8f86ec1b7ab8dad68250bccaa67d61b1) + doc: update version doc to include `pre-*` increment args + ([@isaacs](https://github.com/isaacs)) +* [`b6bb746`](https://github.com/npm/npm/commit/b6bb7461824d4dc1c0936f46bd7929b5cd597986) + build: add 'make tag' to tag current release as latest + ([@isaacs](https://github.com/isaacs)) +* [`27c4bb6`](https://github.com/npm/npm/commit/27c4bb606e46e5eaf604b19fe8477bc6567f8b2e) + build: publish with `--tag=v1.4-next` ([@isaacs](https://github.com/isaacs)) +* [`cff66c3`](https://github.com/npm/npm/commit/cff66c3bf2850880058ebe2a26655dafd002495e) + build: add script to output `v1.4-next` publish tag + ([@isaacs](https://github.com/isaacs)) +* [`22abec8`](https://github.com/npm/npm/commit/22abec8833474879ac49b9604c103bc845dad779) + build: remove outdated `docpublish` make target + ([@isaacs](https://github.com/isaacs)) +* [`1be4de5`](https://github.com/npm/npm/commit/1be4de51c3976db8564f72b00d50384c921f0917) + build: remove `unpublish` step from `make publish` + ([@isaacs](https://github.com/isaacs)) +* [`e429e20`](https://github.com/npm/npm/commit/e429e2011f4d78e398f2461bca3e5a9a146fbd0c) + doc: add new changelog ([@othiym23](https://github.com/othiym23)) +* [`9243d20`](https://github.com/npm/npm/commit/9243d207896ea307082256604c10817f7c318d68) + lifecycle: test lifecycle path modification + ([@isaacs](https://github.com/isaacs)) +* [`021770b`](https://github.com/npm/npm/commit/021770b9cb07451509f0a44afff6c106311d8cf6) + lifecycle: BREAKING CHANGE do not add the directory containing node executable + ([@chulkilee](https://github.com/chulkilee)) +* [`1d5c41d`](https://github.com/npm/npm/commit/1d5c41dd0d757bce8b87f10c4135f04ece55aeb9) + install: rename .gitignore when unpacking foreign tarballs + ([@isaacs](https://github.com/isaacs)) +* [`9aac267`](https://github.com/npm/npm/commit/9aac2670a73423544d92b27cc301990a16a9563b) + cache: detect non-gzipped tar files more reliably + ([@isaacs](https://github.com/isaacs)) +* [`3f24755`](https://github.com/npm/npm/commit/3f24755c8fce3c7ab11ed1dc632cc40d7ef42f62) + `readdir-scoped-modules@1.0.0` ([@isaacs](https://github.com/isaacs)) +* [`151cd2f`](https://github.com/npm/npm/commit/151cd2ff87b8ac2fc9ea366bc9b7f766dc5b9684) + `read-installed@3.1.0` ([@isaacs](https://github.com/isaacs)) +* [`f5a9434`](https://github.com/npm/npm/commit/f5a94343a8ebe4a8cd987320b55137aef53fb3fd) + test: fix Travis timeouts ([@dylang](https://github.com/dylang)) +* [`126cafc`](https://github.com/npm/npm/commit/126cafcc6706814c88af3042f2ffff408747bff4) + `npm-registry-couchapp@2.5.0` ([@othiym23](https://github.com/othiym23)) + ### 
v1.4.24 (2014-08-14): * [`9344bd9`](https://github.com/npm/npm/commit/9344bd9b2929b5c399a0e0e0b34d45bce7bc24bb) @@ -89,7 +642,15 @@ cache: detect non-gzipped tar files more reliably ([@isaacs](https://github.com/isaacs)) -### v2.0.0-alpha-6 (2014-07-31): +### v2.0.0-alpha.6 (2014-08-07): + +BREAKING CHANGE: + +* [`ea547e2`](https://github.com/npm/npm/commit/ea547e2) Bump semver to + version 3: `^0.x.y` is now functionally the same as `=0.x.y`. + ([@isaacs](https://github.com/isaacs)) + +Other changes: * [`d987707`](https://github.com/npm/npm/commit/d987707) move fetch into npm-registry-client ([@othiym23](https://github.com/othiym23)) @@ -97,8 +658,6 @@ ([@isaacs](https://github.com/isaacs)) * [`9d73de7`](https://github.com/npm/npm/commit/9d73de7) remove unnecessary mkdirps ([@isaacs](https://github.com/isaacs)) -* [`ea547e2`](https://github.com/npm/npm/commit/ea547e2) Bump semver to version 3 - ([@isaacs](https://github.com/isaacs)) * [`33ccd13`](https://github.com/npm/npm/commit/33ccd13) Don't squash execute perms in `_git-remotes/` dir ([@adammeadows](https://github.com/adammeadows)) * [`48fd233`](https://github.com/npm/npm/commit/48fd233) `npm-package-arg@2.0.1` @@ -270,7 +829,7 @@ Other changes: ([@othiym23](https://github.com/othiym23)) * Allow to build all the docs OOTB. ([@GeJ](https://github.com/GeJ)) * Use core.longpaths on win32 git - fixes - [#5525](https://github.com/npm/npm/issues/5525) (Bradley Meck) + [#5525](https://github.com/npm/npm/issues/5525) ([@bmeck](https://github.com/bmeck)) * `npmconf@1.1.2` ([@isaacs](https://github.com/isaacs)) * Consolidate color sniffing in config/log loading process ([@isaacs](https://github.com/isaacs)) diff --git a/deps/npm/Makefile b/deps/npm/Makefile index 540e2da05b6..34d4b62de27 100644 --- a/deps/npm/Makefile +++ b/deps/npm/Makefile @@ -31,6 +31,28 @@ misc_mandocs = $(shell find doc/misc -name '*.md' \ |sed 's|doc/misc/|man/man7/|g' ) \ man/man7/npm-index.7 + +cli_partdocs = $(shell find doc/cli -name '*.md' \ + |sed 's|.md|.html|g' \ + |sed 's|doc/cli/|html/partial/doc/cli/|g' ) \ + html/partial/doc/README.html + +api_partdocs = $(shell find doc/api -name '*.md' \ + |sed 's|.md|.html|g' \ + |sed 's|doc/api/|html/partial/doc/api/|g' ) + +files_partdocs = $(shell find doc/files -name '*.md' \ + |sed 's|.md|.html|g' \ + |sed 's|doc/files/|html/partial/doc/files/|g' ) \ + html/partial/doc/files/npm-json.html \ + html/partial/doc/files/npm-global.html + +misc_partdocs = $(shell find doc/misc -name '*.md' \ + |sed 's|.md|.html|g' \ + |sed 's|doc/misc/|html/partial/doc/misc/|g' ) \ + html/partial/doc/index.html + + cli_htmldocs = $(shell find doc/cli -name '*.md' \ |sed 's|.md|.html|g' \ |sed 's|doc/cli/|html/doc/cli/|g' ) \ @@ -53,6 +75,8 @@ misc_htmldocs = $(shell find doc/misc -name '*.md' \ mandocs = $(api_mandocs) $(cli_mandocs) $(files_mandocs) $(misc_mandocs) +partdocs = $(api_partdocs) $(cli_partdocs) $(files_partdocs) $(misc_partdocs) + htmldocs = $(api_htmldocs) $(cli_htmldocs) $(files_htmldocs) $(misc_htmldocs) all: doc @@ -63,7 +87,7 @@ latest: @echo "in this folder that you're looking at right now." 
node cli.js install -g -f npm -install: docclean all +install: all node cli.js install -g -f # backwards compat @@ -72,31 +96,31 @@ dev: install link: uninstall node cli.js link -f -clean: markedclean ronnclean doc-clean uninstall +clean: markedclean marked-manclean doc-clean uninstall rm -rf npmrc node cli.js cache clean uninstall: node cli.js rm npm -g -f -doc: $(mandocs) $(htmldocs) +doc: $(mandocs) $(htmldocs) $(partdocs) markedclean: rm -rf node_modules/marked node_modules/.bin/marked .building_marked -ronnclean: - rm -rf node_modules/ronn node_modules/.bin/ronn .building_ronn +marked-manclean: + rm -rf node_modules/marked-man node_modules/.bin/marked-man .building_marked-man docclean: doc-clean doc-clean: rm -rf \ .building_marked \ - .building_ronn \ + .building_marked-man \ html/doc \ html/api \ man -# use `npm install ronn` for this to work. +# use `npm install marked-man` for this to work. man/man1/npm-README.1: README.md scripts/doc-build.sh package.json @[ -d man/man1 ] || mkdir -p man/man1 scripts/doc-build.sh $< $@ @@ -119,52 +143,82 @@ man/man5/%.5: doc/files/%.md scripts/doc-build.sh package.json @[ -d man/man5 ] || mkdir -p man/man5 scripts/doc-build.sh $< $@ +man/man7/%.7: doc/misc/%.md scripts/doc-build.sh package.json + @[ -d man/man7 ] || mkdir -p man/man7 + scripts/doc-build.sh $< $@ + + doc/misc/npm-index.md: scripts/index-build.js package.json node scripts/index-build.js > $@ -html/doc/index.html: doc/misc/npm-index.md $(html_docdeps) - @[ -d html/doc ] || mkdir -p html/doc - scripts/doc-build.sh $< $@ -man/man7/%.7: doc/misc/%.md scripts/doc-build.sh package.json - @[ -d man/man7 ] || mkdir -p man/man7 +# html/doc depends on html/partial/doc +html/doc/%.html: html/partial/doc/%.html + @[ -d html/doc ] || mkdir -p html/doc scripts/doc-build.sh $< $@ -html/doc/README.html: README.md $(html_docdeps) +html/doc/README.html: html/partial/doc/README.html @[ -d html/doc ] || mkdir -p html/doc scripts/doc-build.sh $< $@ -html/doc/cli/%.html: doc/cli/%.md $(html_docdeps) +html/doc/cli/%.html: html/partial/doc/cli/%.html @[ -d html/doc/cli ] || mkdir -p html/doc/cli scripts/doc-build.sh $< $@ -html/doc/api/%.html: doc/api/%.md $(html_docdeps) +html/doc/misc/%.html: html/partial/doc/misc/%.html + @[ -d html/doc/misc ] || mkdir -p html/doc/misc + scripts/doc-build.sh $< $@ + +html/doc/files/%.html: html/partial/doc/files/%.html + @[ -d html/doc/files ] || mkdir -p html/doc/files + scripts/doc-build.sh $< $@ + +html/doc/api/%.html: html/partial/doc/api/%.html @[ -d html/doc/api ] || mkdir -p html/doc/api scripts/doc-build.sh $< $@ -html/doc/files/npm-json.html: html/doc/files/package.json.html + +html/partial/doc/index.html: doc/misc/npm-index.md $(html_docdeps) + @[ -d html/partial/doc ] || mkdir -p html/partial/doc + scripts/doc-build.sh $< $@ + +html/partial/doc/README.html: README.md $(html_docdeps) + @[ -d html/partial/doc ] || mkdir -p html/partial/doc + scripts/doc-build.sh $< $@ + +html/partial/doc/cli/%.html: doc/cli/%.md $(html_docdeps) + @[ -d html/partial/doc/cli ] || mkdir -p html/partial/doc/cli + scripts/doc-build.sh $< $@ + +html/partial/doc/api/%.html: doc/api/%.md $(html_docdeps) + @[ -d html/partial/doc/api ] || mkdir -p html/partial/doc/api + scripts/doc-build.sh $< $@ + +html/partial/doc/files/npm-json.html: html/partial/doc/files/package.json.html cp $< $@ -html/doc/files/npm-global.html: html/doc/files/npm-folders.html +html/partial/doc/files/npm-global.html: html/partial/doc/files/npm-folders.html cp $< $@ -html/doc/files/%.html: doc/files/%.md 
$(html_docdeps) - @[ -d html/doc/files ] || mkdir -p html/doc/files +html/partial/doc/files/%.html: doc/files/%.md $(html_docdeps) + @[ -d html/partial/doc/files ] || mkdir -p html/partial/doc/files scripts/doc-build.sh $< $@ -html/doc/misc/%.html: doc/misc/%.md $(html_docdeps) - @[ -d html/doc/misc ] || mkdir -p html/doc/misc +html/partial/doc/misc/%.html: doc/misc/%.md $(html_docdeps) + @[ -d html/partial/doc/misc ] || mkdir -p html/partial/doc/misc scripts/doc-build.sh $< $@ + + marked: node_modules/.bin/marked node_modules/.bin/marked: node cli.js install marked --no-global -ronn: node_modules/.bin/ronn +marked-man: node_modules/.bin/marked-man -node_modules/.bin/ronn: - node cli.js install ronn --no-global +node_modules/.bin/marked-man: + node cli.js install marked-man --no-global doc: man diff --git a/deps/npm/README.md b/deps/npm/README.md index 0c08862fce9..ecb3f29e291 100644 --- a/deps/npm/README.md +++ b/deps/npm/README.md @@ -16,15 +16,15 @@ and prior, clone the git repo and dig through the old tags and branches. ## Super Easy Install -npm comes with node now. +npm comes with [node](http://nodejs.org/download/) now. ### Windows Computers -Get the MSI. npm is in it. +[Get the MSI](http://nodejs.org/download/). npm is in it. ### Apple Macintosh Computers -Get the pkg. npm is in it. +[Get the pkg](http://nodejs.org/download/). npm is in it. ### Other Sorts of Unices @@ -154,7 +154,7 @@ use npm itself to do. if (er) return commandFailed(er) // command succeeded, and data might have some info }) - npm.on("log", function (message) { .... }) + npm.registry.log.on("log", function (message) { .... }) }) The `load` function takes an object hash of the command-line configs. diff --git a/deps/npm/bin/npm-cli.js b/deps/npm/bin/npm-cli.js index ef8873542bb..ace40ca791c 100755 --- a/deps/npm/bin/npm-cli.js +++ b/deps/npm/bin/npm-cli.js @@ -19,10 +19,9 @@ var log = require("npmlog") log.pause() // will be unpaused when config is loaded. log.info("it worked if it ends with", "ok") -var fs = require("graceful-fs") - , path = require("path") +var path = require("path") , npm = require("../lib/npm.js") - , npmconf = require("npmconf") + , npmconf = require("../lib/config/core.js") , errorHandler = require("../lib/utils/error-handler.js") , configDefs = npmconf.defs @@ -58,16 +57,6 @@ if (conf.versions) { log.info("using", "npm@%s", npm.version) log.info("using", "node@%s", process.version) -// make sure that this version of node works with this version of npm. -var semver = require("semver") - , nodeVer = process.version - , reqVer = npm.nodeVersionRequired -if (reqVer && !semver.satisfies(nodeVer, reqVer)) { - return errorHandler(new Error( - "npm doesn't work with node " + nodeVer - + "\nRequired: node@" + reqVer), true) -} - process.on("uncaughtException", errorHandler) if (conf.usage && npm.command !== "help") { diff --git a/deps/npm/doc/api/npm-bin.md b/deps/npm/doc/api/npm-bin.md index f3dc48286d3..bd27af2fdca 100644 --- a/deps/npm/doc/api/npm-bin.md +++ b/deps/npm/doc/api/npm-bin.md @@ -10,4 +10,4 @@ npm-bin(3) -- Display npm bin folder Print the folder where npm will install executables. This function should not be used programmatically. Instead, just refer -to the `npm.bin` member. +to the `npm.bin` property. diff --git a/deps/npm/doc/api/npm-help-search.md b/deps/npm/doc/api/npm-help-search.md index 5c00cfc177d..01b235ce72b 100644 --- a/deps/npm/doc/api/npm-help-search.md +++ b/deps/npm/doc/api/npm-help-search.md @@ -27,4 +27,4 @@ array of results is returned. 
Each result is an object with these properties:

* file: Name of the file that matched

-The silent parameter is not neccessary not used, but it may in the future.
+The silent parameter is not currently used, but it may be in the future.
diff --git a/deps/npm/doc/api/npm-load.md b/deps/npm/doc/api/npm-load.md
index a95a6b295da..de412aff5b8 100644
--- a/deps/npm/doc/api/npm-load.md
+++ b/deps/npm/doc/api/npm-load.md
@@ -10,9 +10,9 @@ npm-load(3) -- Load config settings
npm.load() must be called before any other function call. Both parameters
are optional, but the second is recommended.

-The first parameter is an object hash of command-line config params, and the
-second parameter is a callback that will be called when npm is loaded and
-ready to serve.
+The first parameter is an object containing command-line config params, and the
+second parameter is a callback that will be called when npm is loaded and ready
+to serve.

The first parameter should follow a similar structure as the package.json
config object.
diff --git a/deps/npm/doc/api/npm-submodule.md b/deps/npm/doc/api/npm-submodule.md
deleted file mode 100644
index 2d8bafaa311..00000000000
--- a/deps/npm/doc/api/npm-submodule.md
+++ /dev/null
@@ -1,28 +0,0 @@
-npm-submodule(3) -- Add a package as a git submodule
-====================================================
-
-## SYNOPSIS
-
-    npm.commands.submodule(packages, callback)
-
-## DESCRIPTION
-
-For each package specified, npm will check if it has a git repository url
-in its package.json description then add it as a git submodule at
-`node_modules/<pkg name>`.
-
-This is a convenience only. From then on, it's up to you to manage
-updates by using the appropriate git commands. npm will stubbornly
-refuse to update, modify, or remove anything with a `.git` subfolder
-in it.
-
-This command also does not install missing dependencies, if the package
-does not include them in its git repository. If `npm ls` reports that
-things are missing, you can either install, link, or submodule them yourself,
-or you can do `npm explore <pkgname> -- npm install` to install the
-dependencies into the submodule folder.
-
-## SEE ALSO
-
-* npm help json
-* git help submodule
diff --git a/deps/npm/doc/api/npm.md b/deps/npm/doc/api/npm.md
index d05684e8b9d..4b4dfcaddd2 100644
--- a/deps/npm/doc/api/npm.md
+++ b/deps/npm/doc/api/npm.md
@@ -25,13 +25,12 @@
This is the API documentation for npm. To find documentation of the
command line client, see `npm(1)`.

-Prior to using npm's commands, `npm.load()` must be called.
-If you provide `configObject` as an object hash of top-level
-configs, they override the values stored in the various config
-locations. In the npm command line client, this set of configs
-is parsed from the command line options. Additional configuration
-params are loaded from two configuration files. See `npm-config(1)`,
-`npm-config(7)`, and `npmrc(5)` for more information.
+Prior to using npm's commands, `npm.load()` must be called. If you provide
+`configObject` as an object map of top-level configs, they override the values
+stored in the various config locations. In the npm command line client, this
+set of configs is parsed from the command line options. Additional
+configuration params are loaded from two configuration files. See
+`npm-config(1)`, `npm-config(7)`, and `npmrc(5)` for more information.

After that, each of the functions are accessible in the commands
object: `npm.commands.<cmd>`. See `npm-index(7)` for a list of
@@ -88,9 +87,9 @@ command.

## MAGIC

-For each of the methods in the `npm.commands` hash, a method is added to
-the npm object, which takes a set of positional string arguments rather
-than an array and a callback.
+For each of the methods in the `npm.commands` object, a method is added to the
+npm object, which takes a set of positional string arguments rather than an
+array and a callback.

If the last argument is a callback, then it will use the supplied
callback. However, if no callback is provided, then it will print out
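To make the loading sequence from npm-load(3) and the shorthand from the MAGIC
section concrete, here is a minimal sketch of the npm 2.x-era API (illustrative
only, not part of the patch), assuming npm is reachable via `require("npm")`
and using made-up config values:

    // npm.load() must run before any command; this config overrides defaults
    var npm = require("npm")

    npm.load({ loglevel: "warn" }, function (er) {
      if (er) return console.error(er)

      // canonical form: npm.commands.<cmd>(array of args, callback)
      npm.commands.install(["sax"], function (er) {
        if (er) return console.error(er)

        // "magic" shorthand: positional string args instead of an array
        npm.ls(function (er, tree) {
          if (er) return console.error(er)
          console.log(Object.keys(tree.dependencies || {}))
        })
      })
    })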
diff --git a/deps/npm/doc/cli/npm-adduser.md b/deps/npm/doc/cli/npm-adduser.md
index 68f3a3c0085..54e785b07fe 100644
--- a/deps/npm/doc/cli/npm-adduser.md
+++ b/deps/npm/doc/cli/npm-adduser.md
@@ -3,30 +3,62 @@ npm-adduser(1) -- Add a registry user account

## SYNOPSIS

-    npm adduser
+    npm adduser [--registry=url] [--scope=@orgname] [--always-auth]

## DESCRIPTION

-Create or verify a user named `<username>` in the npm registry, and
-save the credentials to the `.npmrc` file.
+Create or verify a user named `<username>` in the specified registry, and
+save the credentials to the `.npmrc` file. If no registry is specified,
+the default registry will be used (see `npm-config(7)`).

The username, password, and email are read in from prompts.

You may use this command to change your email address, but not username
or password.

-To reset your password, go to
+To reset your password, go to

You may use this command multiple times with the same user account to
authorize on a new machine.

+`npm login` is an alias for `adduser` and behaves exactly the same way.
+
## CONFIGURATION

### registry

Default: http://registry.npmjs.org/

-The base URL of the npm package registry.
+The base URL of the npm package registry. If `scope` is also specified,
+this registry will only be used for packages with that scope. See `npm-scope(7)`.
+
+### scope
+
+Default: none
+
+If specified, the user and login credentials given will be associated
+with the specified scope. See `npm-scope(7)`. You can use both at the same time,
+e.g.
+
+    npm adduser --registry=http://myregistry.example.com --scope=@myco
+
+This will set a registry for the given scope and login or create a user for
+that registry at the same time.
+
+### always-auth
+
+Default: false
+
+If specified, save configuration indicating that all requests to the given
+registry should include authorization information. Useful for private
+registries. Can be used with `--registry` and/or `--scope`, e.g.
+
+    npm adduser --registry=http://private-registry.example.com --always-auth
+
+This will ensure that all requests to that registry (including for tarballs)
+include an authorization header. See `always-auth` in `npm-config(7)` for more
+details on always-auth. Registry-specific configuration of `always-auth` takes
+precedence over any global configuration.

## SEE ALSO
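For orientation, a scoped login like the one shown above ends up recorded in
`.npmrc` roughly as follows — a sketch with made-up registry and scope names;
the exact credential keys written depend on the registry and the always-auth
setting:

    ; after: npm adduser --registry=http://myregistry.example.com --scope=@myco
    @myco:registry=http://myregistry.example.com/
    ; credentials are stored per registry URI, e.g. something like:
    //myregistry.example.com/:_authToken=xxxxx-xxxxx-xxxxx

From then on, installs of `@myco/*` packages resolve against the scoped
registry, while unscoped packages continue to use the default registry.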
diff --git a/deps/npm/doc/cli/npm-explore.md b/deps/npm/doc/cli/npm-explore.md
index 3642d7399d0..fded5340870 100644
--- a/deps/npm/doc/cli/npm-explore.md
+++ b/deps/npm/doc/cli/npm-explore.md
@@ -32,7 +32,6 @@ The shell to run for the `npm explore` command.

## SEE ALSO

-* npm-submodule(1)
* npm-folders(5)
* npm-edit(1)
* npm-rebuild(1)
diff --git a/deps/npm/doc/cli/npm-init.md b/deps/npm/doc/cli/npm-init.md
index bd63a8879da..08e517d79a4 100644
--- a/deps/npm/doc/cli/npm-init.md
+++ b/deps/npm/doc/cli/npm-init.md
@@ -3,7 +3,7 @@ npm-init(1) -- Interactively create a package.json file

## SYNOPSIS

-    npm init
+    npm init [-f|--force|-y|--yes]

## DESCRIPTION

@@ -18,6 +18,9 @@ the options in there.
It is strictly additive, so it does not delete options from your package.json
without a really good reason to do so.

+If you invoke it with `-f`, `--force`, `-y`, or `--yes`, it will use only
+defaults and not prompt you for any options.
+
## SEE ALSO

* <https://github.com/isaacs/init-package-json>
diff --git a/deps/npm/doc/cli/npm-install.md b/deps/npm/doc/cli/npm-install.md
index 62eec2d8e34..2d427163730 100644
--- a/deps/npm/doc/cli/npm-install.md
+++ b/deps/npm/doc/cli/npm-install.md
@@ -7,10 +7,10 @@ npm-install(1) -- Install a package
    npm install <tarball file>
    npm install <tarball url>
    npm install <folder>
-    npm install <name> [--save|--save-dev|--save-optional] [--save-exact]
-    npm install <name>@<tag>
-    npm install <name>@<version>
-    npm install <name>@<version range>
+    npm install [@<scope>/]<name> [--save|--save-dev|--save-optional] [--save-exact]
+    npm install [@<scope>/]<name>@<tag>
+    npm install [@<scope>/]<name>@<version>
+    npm install [@<scope>/]<name>@<version range>
    npm i (with any of the previous argument usage)

## DESCRIPTION
@@ -70,7 +70,7 @@ after packing it up into a tarball (b).

        npm install https://github.com/indexzero/forever/tarball/v0.5.6

-* `npm install <name> [--save|--save-dev|--save-optional]`:
+* `npm install [@<scope>/]<name> [--save|--save-dev|--save-optional]`:

    Do a `<name>@<tag>` install, where `<tag>` is the "tag" config. (See
    `npm-config(7)`.)
@@ -98,9 +98,19 @@ after packing it up into a tarball (b).
    exact version rather than using npm's default semver range operator.

+    `<scope>` is optional. The package will be downloaded from the registry
+    associated with the specified scope. If no registry is associated with
+    the given scope the default registry is assumed. See `npm-scope(7)`.
+
+    Note: if you do not include the @-symbol on your scope name, npm will
+    interpret this as a GitHub repository instead, see below. Scope names
+    must also be followed by a slash.
+
    Examples:

        npm install sax --save
+        npm install githubname/reponame
+        npm install @myorg/privatepackage
        npm install node-tap --save-dev
        npm install dtrace-provider --save-optional
        npm install readable-stream --save --save-exact

@@ -110,7 +120,7 @@ after packing it up into a tarball (b).
    working directory, then it will try to install that, and only try to
    fetch the package by name if it is not valid.

-* `npm install <name>@<tag>`:
+* `npm install [@<scope>/]<name>@<tag>`:

    Install the version of the package that is referenced by the specified tag.
    If the tag does not exist in the registry data for that package, then this
@@ -119,17 +129,19 @@ after packing it up into a tarball (b).
    Example:

        npm install sax@latest
+        npm install @myorg/mypackage@latest

-* `npm install <name>@<version>`:
+* `npm install [@<scope>/]<name>@<version>`:

-    Install the specified version of the package. This will fail if the version
-    has not been published to the registry.
+    Install the specified version of the package. This will fail if the
+    version has not been published to the registry.

    Example:

        npm install sax@0.1.1
+        npm install @myorg/privatepackage@1.5.0

-* `npm install <name>@<version range>`:
+* `npm install [@<scope>/]<name>@<version range>`:

    Install a version of the package matching the specified version range. This
    will follow the same rules for resolving dependencies described in `package.json(5)`.
@@ -140,6 +152,19 @@ after packing it up into a tarball (b).
    Example:

        npm install sax@">=0.1.0 <0.2.0"
+        npm install @myorg/privatepackage@">=0.1.0 <0.2.0"
+
+* `npm install <githubname>/<githubrepo>`:
+
+    Install the package at `https://github.com/githubname/githubrepo` by
+    attempting to clone it using `git`.
+
+    Example:
+
+        npm install mygithubuser/myproject
+
+    To reference a package in a git repo that is not on GitHub, see git
+    remote urls below.

* `npm install <git remote url>`:
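To tie the scoped forms above together, here is a sketch of what `--save`
records for a scoped dependency, assuming a hypothetical `@myorg/privatepackage`
published at version 1.5.0:

    npm install @myorg/privatepackage --save
    # with the default caret save-prefix, package.json gains:
    #   "dependencies": { "@myorg/privatepackage": "^1.5.0" }
    # and the code is unpacked into node_modules/@myorg/privatepackage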
diff --git a/deps/npm/doc/cli/npm-link.md b/deps/npm/doc/cli/npm-link.md
index c0fc01eb26d..a6c27479007 100644
--- a/deps/npm/doc/cli/npm-link.md
+++ b/deps/npm/doc/cli/npm-link.md
@@ -4,7 +4,7 @@ npm-link(1) -- Symlink a package folder
## SYNOPSIS

    npm link (in package folder)
-    npm link <pkg>
+    npm link [@<scope>/]<pkg>
    npm ln (with any of the previous argument usage)

## DESCRIPTION
@@ -12,7 +12,8 @@ npm-link(1) -- Symlink a package folder
Package linking is a two-step process.

First, `npm link` in a package folder will create a globally-installed
-symbolic link from `prefix/package-name` to the current folder.
+symbolic link from `prefix/package-name` to the current folder (see
+`npm-config(7)` for the value of `prefix`).

Next, in some other location, `npm link package-name` will create a
symlink from the local `node_modules` folder to the global symlink.
@@ -20,12 +21,14 @@ symlink from the local `node_modules` folder to the global symlink.
Note that `package-name` is taken from `package.json`,
not from directory name.

+The package name can be optionally prefixed with a scope. See `npm-scope(7)`.
+The scope must be preceded by an @-symbol and followed by a slash.
+
When creating tarballs for `npm publish`, the linked packages are
"snapshotted" to their current state by resolving the symbolic links.

-This is
-handy for installing your own stuff, so that you can work on it and test it
-iteratively without having to continually rebuild.
+This is handy for installing your own stuff, so that you can work on it and
+test it iteratively without having to continually rebuild.

For example:

@@ -51,6 +54,11 @@ The second line is the equivalent of doing:
That is, it first creates a global link, and then links the global
installation target into your project's `node_modules` folder.

+If your linked package is scoped (see `npm-scope(7)`) your link command must
+include that scope, e.g.
+
+    npm link @myorg/privatepackage
+
## SEE ALSO

* npm-developers(7)
diff --git a/deps/npm/doc/cli/npm-ls.md b/deps/npm/doc/cli/npm-ls.md
index 21f54264c7f..0f0d79489ae 100644
--- a/deps/npm/doc/cli/npm-ls.md
+++ b/deps/npm/doc/cli/npm-ls.md
@@ -3,10 +3,10 @@ npm-ls(1) -- List installed packages

## SYNOPSIS

-    npm list [<pkg> ...]
-    npm ls [<pkg> ...]
-    npm la [<pkg> ...]
-    npm ll [<pkg> ...]
+    npm list [[@<scope>/]<pkg> ...]
+    npm ls [[@<scope>/]<pkg> ...]
+    npm la [[@<scope>/]<pkg> ...]
+    npm ll [[@<scope>/]<pkg> ...]

## DESCRIPTION
diff --git a/deps/npm/doc/cli/npm-prefix.md b/deps/npm/doc/cli/npm-prefix.md
index f99a401d147..f262a36a752 100644
--- a/deps/npm/doc/cli/npm-prefix.md
+++ b/deps/npm/doc/cli/npm-prefix.md
@@ -3,11 +3,15 @@ npm-prefix(1) -- Display prefix

## SYNOPSIS

-    npm prefix
+    npm prefix [-g]

## DESCRIPTION

-Print the prefix to standard out.
+Print the local prefix to standard out. This is the closest parent directory
+to contain a package.json file unless `-g` is also specified.
+
+If `-g` is specified, this will be the value of the global prefix. See
+`npm-config(7)` for more detail.

## SEE ALSO
diff --git a/deps/npm/doc/cli/npm-publish.md b/deps/npm/doc/cli/npm-publish.md
index 338728e3e4c..30e816c7fdf 100644
--- a/deps/npm/doc/cli/npm-publish.md
+++ b/deps/npm/doc/cli/npm-publish.md
@@ -9,7 +9,13 @@ npm-publish(1) -- Publish a package

## DESCRIPTION

-Publishes a package to the registry so that it can be installed by name.
+Publishes a package to the registry so that it can be installed by name. See
+`npm-developers(7)` for details on what's included in the published package, as
+well as details on how the package is built.
+ +By default npm will publish to the public registry. This can be overridden by +specifying a different default registry or using a `npm-scope(7)` in the name +(see `package.json(5)`). * ``: A folder containing a package.json file @@ -24,7 +30,7 @@ Publishes a package to the registry so that it can be installed by name. and `npm install` installs the `latest` tag. Fails if the package name and version combination already exists in -the registry. +the specified registry. Once a package is published with a given name and version, that specific name and version combination can never be used again, even if diff --git a/deps/npm/doc/cli/npm-restart.md b/deps/npm/doc/cli/npm-restart.md index 4661d6b23ba..6d594a26c1b 100644 --- a/deps/npm/doc/cli/npm-restart.md +++ b/deps/npm/doc/cli/npm-restart.md @@ -3,15 +3,12 @@ npm-restart(1) -- Start a package ## SYNOPSIS - npm restart + npm restart [-- ] ## DESCRIPTION -This runs a package's "restart" script, if one was provided. -Otherwise it runs package's "stop" script, if one was provided, and then -the "start" script. - -If no version is specified, then it restarts the "active" version. +This runs a package's "restart" script, if one was provided. Otherwise it runs +package's "stop" script, if one was provided, and then the "start" script. ## SEE ALSO diff --git a/deps/npm/doc/cli/npm-run-script.md b/deps/npm/doc/cli/npm-run-script.md index 835dbf5df7f..74f416e0bec 100644 --- a/deps/npm/doc/cli/npm-run-script.md +++ b/deps/npm/doc/cli/npm-run-script.md @@ -3,8 +3,8 @@ npm-run-script(1) -- Run arbitrary package scripts ## SYNOPSIS - npm run-script [] [command] - npm run [] [command] + npm run-script [command] [-- ] + npm run [command] [-- ] ## DESCRIPTION @@ -16,6 +16,16 @@ is provided, it will list the available top level scripts. It is used by the test, start, restart, and stop commands, but can be called directly, as well. +As of [`npm@2.0.0`](http://blog.npmjs.org/post/98131109725/npm-2-0-0), you can +use custom arguments when executing scripts. The special option `--` is used by +[getopt](http://goo.gl/KxMmtG) to delimit the end of the options. npm will pass +all the arguments after the `--` directly to your script: + + npm run test -- --grep="pattern" + +The arguments will only be passed to the script specified after ```npm run``` +and not to any pre or post script. + ## SEE ALSO * npm-scripts(7) diff --git a/deps/npm/doc/cli/npm-start.md b/deps/npm/doc/cli/npm-start.md index 01347d2e469..759de221f38 100644 --- a/deps/npm/doc/cli/npm-start.md +++ b/deps/npm/doc/cli/npm-start.md @@ -3,7 +3,7 @@ npm-start(1) -- Start a package ## SYNOPSIS - npm start + npm start [-- ] ## DESCRIPTION diff --git a/deps/npm/doc/cli/npm-stop.md b/deps/npm/doc/cli/npm-stop.md index bda5cc8f47c..92b14b41796 100644 --- a/deps/npm/doc/cli/npm-stop.md +++ b/deps/npm/doc/cli/npm-stop.md @@ -3,7 +3,7 @@ npm-stop(1) -- Stop a package ## SYNOPSIS - npm stop + npm stop [-- ] ## DESCRIPTION diff --git a/deps/npm/doc/cli/npm-submodule.md b/deps/npm/doc/cli/npm-submodule.md deleted file mode 100644 index 7f0fbfc9fbf..00000000000 --- a/deps/npm/doc/cli/npm-submodule.md +++ /dev/null @@ -1,28 +0,0 @@ -npm-submodule(1) -- Add a package as a git submodule -==================================================== - -## SYNOPSIS - - npm submodule - -## DESCRIPTION - -If the specified package has a git repository url in its package.json -description, then this command will add it as a git submodule at -`node_modules/`. - -This is a convenience only. 
From then on, it's up to you to manage -updates by using the appropriate git commands. npm will stubbornly -refuse to update, modify, or remove anything with a `.git` subfolder -in it. - -This command also does not install missing dependencies, if the package -does not include them in its git repository. If `npm ls` reports that -things are missing, you can either install, link, or submodule them yourself, -or you can do `npm explore -- npm install` to install the -dependencies into the submodule folder. - -## SEE ALSO - -* package.json(5) -* git help submodule diff --git a/deps/npm/doc/cli/npm-test.md b/deps/npm/doc/cli/npm-test.md index 800f3ae104e..c2267082dfb 100644 --- a/deps/npm/doc/cli/npm-test.md +++ b/deps/npm/doc/cli/npm-test.md @@ -3,8 +3,8 @@ npm-test(1) -- Test a package ## SYNOPSIS - npm test - npm tst + npm test [-- ] + npm tst [-- ] ## DESCRIPTION diff --git a/deps/npm/doc/cli/npm-uninstall.md b/deps/npm/doc/cli/npm-uninstall.md index e24815bec79..bfa667c3e26 100644 --- a/deps/npm/doc/cli/npm-uninstall.md +++ b/deps/npm/doc/cli/npm-uninstall.md @@ -3,7 +3,7 @@ npm-rm(1) -- Remove a package ## SYNOPSIS - npm uninstall [--save|--save-dev|--save-optional] + npm uninstall [@/] [--save|--save-dev|--save-optional] npm rm (with any of the previous argument usage) ## DESCRIPTION @@ -27,9 +27,12 @@ the package version in your main package.json: * `--save-optional`: Package will be removed from your `optionalDependencies`. +Scope is optional and follows the usual rules for `npm-scope(7)`. + Examples: npm uninstall sax --save + npm uninstall @myorg/privatepackage --save npm uninstall node-tap --save-dev npm uninstall dtrace-provider --save-optional diff --git a/deps/npm/doc/cli/npm-unpublish.md b/deps/npm/doc/cli/npm-unpublish.md index 45026197e1f..1d5fe928780 100644 --- a/deps/npm/doc/cli/npm-unpublish.md +++ b/deps/npm/doc/cli/npm-unpublish.md @@ -3,7 +3,7 @@ npm-unpublish(1) -- Remove a package from the registry ## SYNOPSIS - npm unpublish [@] + npm unpublish [@/][@] ## WARNING @@ -27,6 +27,8 @@ Even if a package version is unpublished, that specific name and version combination can never be reused. In order to publish the package again, a new version number must be used. +The scope is optional and follows the usual rules for `npm-scope(7)`. + ## SEE ALSO * npm-deprecate(1) diff --git a/deps/npm/doc/cli/npm-update.md b/deps/npm/doc/cli/npm-update.md index 1ea6b627562..a53d2945928 100644 --- a/deps/npm/doc/cli/npm-update.md +++ b/deps/npm/doc/cli/npm-update.md @@ -12,8 +12,11 @@ This command will update all the packages listed to the latest version It will also install missing packages. -If the `-g` flag is specified, this command will update globally installed packages. -If no package name is specified, all packages in the specified location (global or local) will be updated. +If the `-g` flag is specified, this command will update globally installed +packages. + +If no package name is specified, all packages in the specified location (global +or local) will be updated. ## SEE ALSO diff --git a/deps/npm/doc/cli/npm-view.md b/deps/npm/doc/cli/npm-view.md index 1d19fe88d46..8f52a85a92f 100644 --- a/deps/npm/doc/cli/npm-view.md +++ b/deps/npm/doc/cli/npm-view.md @@ -3,8 +3,8 @@ npm-view(1) -- View registry info ## SYNOPSIS - npm view [@] [[.]...] - npm v [@] [[.]...] + npm view [@/][@] [[.]...] + npm v [@/][@] [[.]...] 
## DESCRIPTION

diff --git a/deps/npm/doc/files/npm-folders.md b/deps/npm/doc/files/npm-folders.md
index 1b1485d5ed3..1fb21b1a310 100644
--- a/deps/npm/doc/files/npm-folders.md
+++ b/deps/npm/doc/files/npm-folders.md
@@ -42,6 +42,12 @@ Global installs on Unix systems go to `{prefix}/lib/node_modules`.
Global installs on Windows go to `{prefix}/node_modules`
(that is, no `lib` folder.)

+Scoped packages are installed the same way, except they are grouped together
+in a sub-folder of the relevant `node_modules` folder with the name of that
+scope prefixed by the @ symbol, e.g. `npm install @myorg/package` would place
+the package in `{prefix}/node_modules/@myorg/package`. See `npm-scope(7)` for
+more details.
+
If you wish to `require()` a package, then install it locally.

### Executables
diff --git a/deps/npm/doc/files/package.json.md b/deps/npm/doc/files/package.json.md
index b9b05d4d4d9..1138bc2749e 100644
--- a/deps/npm/doc/files/package.json.md
+++ b/deps/npm/doc/files/package.json.md
@@ -30,6 +30,9 @@ The name is what your thing is called. Some tips:
* You may want to check the npm registry to see if there's something by that name
  already, before you get too attached to it. http://registry.npmjs.org/

+A name can be optionally prefixed by a scope, e.g. `@myorg/mypackage`. See
+`npm-scope(7)` for more detail.
+
## version

The *most* important things in your package.json are the name and version fields.
@@ -216,7 +219,7 @@ will create entries for `man foo` and `man 2 foo`

The CommonJS [Packages](http://wiki.commonjs.org/wiki/Packages/1.0) spec details a
few ways that you can indicate the structure of your package using a `directories`
-hash. If you look at [npm's package.json](http://registry.npmjs.org/npm/latest),
+object. If you look at [npm's package.json](http://registry.npmjs.org/npm/latest),
you'll see that it has directories for doc, lib, and man.

In the future, this information may be used in other creative ways.
@@ -228,10 +231,10 @@ with the lib folder in any way, but it's useful meta info.

### directories.bin

-If you specify a "bin" directory, then all the files in that folder will
-be used as the "bin" hash.
+If you specify a `bin` directory, then all the files in that folder will
+be added as children of the `bin` path.

-If you have a "bin" hash already, then this has no effect.
+If you have a `bin` path already, then this has no effect.

### directories.man

@@ -271,7 +274,7 @@ html project page that you put in your browser. It's for computers.

## scripts

-The "scripts" member is an object hash of script commands that are run
+The "scripts" property is a dictionary containing script commands that are run
at various times in the lifecycle of your package. The key is the lifecycle
event, and the value is the command to run at that point.

@@ -279,9 +282,9 @@ See `npm-scripts(7)` to find out more about writing package scripts.

## config

-A "config" hash can be used to set configuration
-parameters used in package scripts that persist across upgrades. For
-instance, if a package had the following:
+A "config" object can be used to set configuration parameters used in package
+scripts that persist across upgrades. For instance, if a package had the
+following:

    { "name" : "foo"
    , "config" : { "port" : "8080" } }

@@ -295,13 +298,13 @@ configs.

## dependencies

-Dependencies are specified with a simple hash of package name to
+Dependencies are specified in a simple object that maps a package name to a
version range.
The version range is a string which has one or more -space-separated descriptors. Dependencies can also be identified with -a tarball or git URL. +space-separated descriptors. Dependencies can also be identified with a +tarball or git URL. **Please do not put test harnesses or transpilers in your -`dependencies` hash.** See `devDependencies`, below. +`dependencies` object.** See `devDependencies`, below. See semver(7) for more details about specifying version ranges. @@ -320,6 +323,8 @@ See semver(7) for more details about specifying version ranges. * `range1 || range2` Passes if either range1 or range2 are satisfied. * `git...` See 'Git URLs as Dependencies' below * `user/repo` See 'GitHub URLs' below +* `tag` A specific version tagged and published as `tag` See `npm-tag(1)` +* `path/path/path` See Local Paths below For example, these are all valid: @@ -334,6 +339,8 @@ For example, these are all valid: , "elf" : "~1.2.3" , "two" : "2.x" , "thr" : "3.3.x" + , "lat" : "latest" + , "dyl" : "file:../dyl" } } @@ -369,14 +376,40 @@ As of version 1.1.65, you can refer to GitHub urls as just "foo": "user/foo-proj } } +## Local Paths + +As of version 2.0.0 you can provide a path to a local directory that contains a +package. Local paths can be saved using `npm install --save`, using any of +these forms: + + ../foo/bar + ~/foo/bar + ./foo/bar + /foo/bar + +in which case they will be normalized to a relative path and added to your +`package.json`. For example: + + { + "name": "baz", + "dependencies": { + "bar": "file:../foo/bar" + } + } + +This feature is helpful for local offline development and creating +tests that require npm installing where you don't want to hit an +external server, but should not be used when publishing packages +to the public registry. + ## devDependencies If someone is planning on downloading and using your module in their program, then they probably don't want or need to download and build the external test or documentation framework that you use. -In this case, it's best to list these additional items in a -`devDependencies` hash. +In this case, it's best to map these additional items in a `devDependencies` +object. These things will be installed when doing `npm link` or `npm install` from the root of a package, and can be managed like any other npm @@ -447,11 +480,11 @@ If this is spelled `"bundleDependencies"`, then that is also honorable. ## optionalDependencies -If a dependency can be used, but you would like npm to proceed if it -cannot be found or fails to install, then you may put it in the -`optionalDependencies` hash. This is a map of package name to version -or url, just like the `dependencies` hash. The difference is that -failure is tolerated. +If a dependency can be used, but you would like npm to proceed if it cannot be +found or fails to install, then you may put it in the `optionalDependencies` +object. This is a map of package name to version or url, just like the +`dependencies` object. The difference is that build failures do not cause +installation to fail. It is still your program's responsibility to handle the lack of the dependency. For example, something like this: @@ -499,12 +532,12 @@ field is advisory only. ## engineStrict If you are sure that your module will *definitely not* run properly on -versions of Node/npm other than those specified in the `engines` hash, +versions of Node/npm other than those specified in the `engines` object, then you can set `"engineStrict": true` in your package.json file. 
This will override the user's `engine-strict` config setting. Please do not do this unless you are really very very sure. If your -engines hash is something overly restrictive, you can quite easily and +engines object is something overly restrictive, you can quite easily and inadvertently lock yourself into obscurity and prevent your users from updating to new versions of Node. Consider this choice carefully. If people abuse it, it will be removed in a future version of npm. @@ -553,11 +586,11 @@ does help prevent some confusion if it doesn't work as expected. If you set `"private": true` in your package.json, then npm will refuse to publish it. -This is a way to prevent accidental publication of private repositories. -If you would like to ensure that a given package is only ever published -to a specific registry (for example, an internal registry), -then use the `publishConfig` hash described below -to override the `registry` config param at publish-time. +This is a way to prevent accidental publication of private repositories. If +you would like to ensure that a given package is only ever published to a +specific registry (for example, an internal registry), then use the +`publishConfig` dictionary described below to override the `registry` config +param at publish-time. ## publishConfig diff --git a/deps/npm/doc/misc/npm-coding-style.md b/deps/npm/doc/misc/npm-coding-style.md index b6a4a620fb6..80609f4f2fe 100644 --- a/deps/npm/doc/misc/npm-coding-style.md +++ b/deps/npm/doc/misc/npm-coding-style.md @@ -147,7 +147,7 @@ Use appropriate log levels. See `npm-config(7)` and search for ## Case, naming, etc. Use `lowerCamelCase` for multiword identifiers when they refer to objects, -functions, methods, members, or anything not specified in this section. +functions, methods, properties, or anything not specified in this section. Use `UpperCamelCase` for class names (things that you'd pass to "new"). diff --git a/deps/npm/doc/misc/npm-config.md b/deps/npm/doc/misc/npm-config.md index 035923fa6f4..6e7d995dd8e 100644 --- a/deps/npm/doc/misc/npm-config.md +++ b/deps/npm/doc/misc/npm-config.md @@ -50,6 +50,7 @@ The following shorthands are parsed on the command-line: * `-dd`, `--verbose`: `--loglevel verbose` * `-ddd`: `--loglevel silly` * `-g`: `--global` +* `-C`: `--prefix` * `-l`: `--long` * `-m`: `--message` * `-p`, `--porcelain`: `--parseable` @@ -253,12 +254,6 @@ set. The command to run for `npm edit` or `npm config edit`. -### email - -The email of the logged-in user. - -Set by the `npm adduser` command. Should not be set explicitly. - ### engine-strict * Default: false @@ -389,34 +384,42 @@ documentation for the [init-package-json](https://github.com/isaacs/init-package-json) module for more information, or npm-init(1). -### init.author.name +### init-author-name * Default: "" * Type: String The value `npm init` should use by default for the package author's name. -### init.author.email +### init-author-email * Default: "" * Type: String The value `npm init` should use by default for the package author's email. -### init.author.url +### init-author-url * Default: "" * Type: String The value `npm init` should use by default for the package author's homepage. -### init.license +### init-license * Default: "ISC" * Type: String The value `npm init` should use by default for the package license. +### init-version + +* Default: "0.0.0" +* Type: semver + +The value that `npm init` should use by default for the package +version number, if not already set in package.json. 
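To see how the renamed `init-*` keys fit together, here is a minimal sketch (the values are hypothetical) of seeding `npm init` defaults from the command line; once set, `npm init` pre-fills its prompts with these values:

    # set default author metadata, license, and version for future `npm init` runs
    npm config set init-author-name "Ada Lovelace"
    npm config set init-author-email "ada@example.com"
    npm config set init-license "MIT"
    npm config set init-version "0.1.0"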
+ ### json * Default: false @@ -461,15 +464,15 @@ to the npm registry. Must be IPv4 in versions of Node prior to 0.12. ### loglevel -* Default: "http" +* Default: "warn" * Type: String -* Values: "silent", "win", "error", "warn", "http", "info", "verbose", "silly" +* Values: "silent", "error", "warn", "http", "info", "verbose", "silly" What level of logs to report. On failure, *all* logs are written to `npm-debug.log` in the current working directory. Any logs of a higher level than the setting are shown. -The default is "http", which shows http, warn, and error output. +The default is "warn", which shows warn and error output. ### logstream @@ -507,7 +510,7 @@ Any "%s" in the message will be replaced with the version number. * Default: process.version * Type: semver or false -The node version to use when checking package's "engines" hash. +The node version to use when checking a package's `engines` map. ### npat @@ -529,7 +532,7 @@ usage. * Default: true * Type: Boolean -Attempt to install packages in the `optionalDependencies` hash. Note +Attempt to install packages in the `optionalDependencies` object. Note that if these packages fail to install, the overall installation process is not aborted. @@ -607,8 +610,8 @@ Remove failed installs. Save installed packages to a package.json file as dependencies. -When used with the `npm rm` command, it removes it from the dependencies -hash. +When used with the `npm rm` command, it removes it from the `dependencies` +object. Only works if there is already a package.json file present. @@ -629,10 +632,10 @@ bundledDependencies list. * Default: false * Type: Boolean -Save installed packages to a package.json file as devDependencies. +Save installed packages to a package.json file as `devDependencies`. When used with the `npm rm` command, it removes it from the -devDependencies hash. +`devDependencies` object. Only works if there is already a package.json file present. @@ -654,7 +657,7 @@ Save installed packages to a package.json file as optionalDependencies. When used with the `npm rm` command, it removes it from the -devDependencies hash. +`optionalDependencies` object. Only works if there is already a package.json file present. @@ -663,14 +666,25 @@ Only works if there is already a package.json file present. * Default: '^' * Type: String -Configure how versions of packages installed to a package.json file via +Configure how versions of packages installed to a package.json file via `--save` or `--save-dev` get prefixed. For example if a package has version `1.2.3`, by default its version is -set to `^1.2.3` which allows minor upgrades for that package, but after +set to `^1.2.3` which allows minor upgrades for that package, but after `npm config set save-prefix='~'` it would be set to `~1.2.3` which only allows patch upgrades. +### scope + +* Default: "" +* Type: String + +Associate an operation with a scope for a scoped registry. Useful when logging +in to a private registry for the first time: +`npm login --scope=@organization --registry=registry.organization.com`, which +will cause `@organization` to be mapped to the registry for future installation +of packages specified according to the pattern `@organization/package`. + ### searchopts * Default: "" @@ -794,13 +808,6 @@ instead of complete help when doing `npm-help(1)`. The UID to set to when running package scripts as root. -### username - -* Default: null -* Type: String - -The username on the npm registry.
Set with `npm adduser` - ### userconfig * Default: ~/.npmrc @@ -841,8 +848,8 @@ Only relevant when specified explicitly on the command line. * Default: false * Type: boolean -If true, output the npm version as well as node's `process.versions` -hash, and exit successfully. +If true, output the npm version as well as node's `process.versions` map, and +exit successfully. Only relevant when specified explicitly on the command line. diff --git a/deps/npm/doc/misc/npm-developers.md b/deps/npm/doc/misc/npm-developers.md index 5e53301f383..f6ea01176fa 100644 --- a/deps/npm/doc/misc/npm-developers.md +++ b/deps/npm/doc/misc/npm-developers.md @@ -76,7 +76,7 @@ least, you need: * scripts: If you have a special compilation or installation script, then you - should put it in the `scripts` hash. You should definitely have at + should put it in the `scripts` object. You should definitely have at least a basic smoke-test command as the "scripts.test" field. See npm-scripts(7). @@ -86,8 +86,8 @@ least, you need: then you need to specify that in the "main" field. * directories: - This is a hash of folders. The best ones to include are "lib" and - "doc", but if you specify a folder full of man pages in "man", then + This is an object mapping names to folders. The best ones to include are + "lib" and "doc", but if you use "man" to specify a folder full of man pages, they'll get installed just like these ones. You can use `npm init` in the root of your package in order to get you diff --git a/deps/npm/doc/misc/npm-faq.md b/deps/npm/doc/misc/npm-faq.md index 53fa03d629d..72891271f95 100644 --- a/deps/npm/doc/misc/npm-faq.md +++ b/deps/npm/doc/misc/npm-faq.md @@ -75,18 +75,20 @@ npm will not help you do something that is known to be a bad idea. ## Should I check my `node_modules` folder into git? -Mikeal Rogers answered this question very well: +Usually, no. Allow npm to resolve dependencies for your packages. - +For packages you **deploy**, such as websites and apps, +you should use npm shrinkwrap to lock down your full dependency tree: -tl;dr +https://www.npmjs.org/doc/cli/npm-shrinkwrap.html -* Check `node_modules` into git for things you **deploy**, such as - websites and apps. -* Do not check `node_modules` into git for libraries and modules - intended to be reused. -* Use npm to manage dependencies in your dev environment, but not in - your deployment scripts. +If you are paranoid about depending on the npm ecosystem, +you should run a private npm mirror or a private cache. + +If you want 100% confidence in being able to reproduce the specific bytes +included in a deployment, you should use an additional mechanism that can +verify contents rather than versions. For example, +Amazon machine images, DigitalOcean snapshots, Heroku slugs, or simple tarballs. ## Is it 'npm' or 'NPM' or 'Npm'? @@ -133,7 +135,7 @@ Arguments are greps. `npm search jsdom` shows jsdom packages. ## How do I update npm? - npm update npm -g + npm install npm -g You can also update all outdated local packages by doing `npm update` without any arguments, or global packages by doing `npm update -g`. 
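Since the FAQ above defers to `npm shrinkwrap` for deployments, a minimal sketch of that workflow may help (the commit step assumes a git-based project); `npm shrinkwrap` writes an `npm-shrinkwrap.json` that pins the exact versions of the installed tree, and subsequent installs are driven by it:

    npm install                    # resolve and install dependencies
    npm shrinkwrap                 # write npm-shrinkwrap.json
    git add npm-shrinkwrap.json    # ship the locked tree with the project
    # on the deployment target, `npm install` now reproduces the same tree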
diff --git a/deps/npm/doc/misc/npm-index.md b/deps/npm/doc/misc/npm-index.md index 4e9f70c99e2..9c804bf802c 100644 --- a/deps/npm/doc/misc/npm-index.md +++ b/deps/npm/doc/misc/npm-index.md @@ -161,10 +161,6 @@ Start a package Stop a package -### npm-submodule(1) - -Add a package as a git submodule - ### npm-tag(1) Tag a published version @@ -325,10 +321,6 @@ Start a package Stop a package -### npm-submodule(3) - -Add a package as a git submodule - ### npm-tag(3) Tag a published version @@ -409,6 +401,10 @@ Index of all npm documentation The JavaScript Package Registry +### npm-scope(7) + +Scoped packages + ### npm-scripts(7) How npm handles the "scripts" field diff --git a/deps/npm/doc/misc/npm-registry.md b/deps/npm/doc/misc/npm-registry.md index a8c4b0200d3..42cec59448a 100644 --- a/deps/npm/doc/misc/npm-registry.md +++ b/deps/npm/doc/misc/npm-registry.md @@ -12,15 +12,14 @@ write APIs as well, to allow for publishing packages and managing user account information. The official public npm registry is at <https://registry.npmjs.org/>. It -is powered by a CouchDB database at -<https://isaacs.iriscouch.com/registry>. The code for the couchapp is -available at <https://github.com/npm/npmjs.org>. npm user accounts -are CouchDB users, stored in the -<https://isaacs.iriscouch.com/_users> database. - -The registry URL is supplied by the `registry` config parameter. See -`npm-config(1)`, `npmrc(5)`, and `npm-config(7)` for more on managing -npm's configuration. +is powered by a CouchDB database, of which there is a public mirror at +<https://skimdb.npmjs.com/registry>. The code for the couchapp is +available at <https://github.com/npm/npm-registry-couchapp>. + +The registry URL used is determined by the scope of the package (see +`npm-scope(7)`). If no scope is specified, the default registry is used, which is +supplied by the `registry` config parameter. See `npm-config(1)`, +`npmrc(5)`, and `npm-config(7)` for more on managing npm's configuration. ## Can I run my own private registry? diff --git a/deps/npm/doc/misc/npm-scope.md b/deps/npm/doc/misc/npm-scope.md new file mode 100644 index 00000000000..66a9255d66d --- /dev/null +++ b/deps/npm/doc/misc/npm-scope.md @@ -0,0 +1,84 @@ +npm-scope(7) -- Scoped packages +=============================== + +## DESCRIPTION + +All npm packages have a name. Some package names also have a scope. A scope +follows the usual rules for package names (url-safe characters, no leading dots +or underscores). When used in package names, a scope is preceded by an @-symbol and +followed by a slash, e.g. + + @somescope/somepackagename + +Scopes are a way of grouping related packages together, and also affect a few +things about the way npm treats the package. + +**As of 2014-09-03, scoped packages are not supported by the public npm registry**. +However, the npm client is backwards-compatible with un-scoped registries, so +it can be used to work with scoped and un-scoped registries at the same time. + +## Installing scoped packages + +Scoped packages are installed to a sub-folder of the regular installation +folder, e.g. if your other packages are installed in `node_modules/packagename`, +scoped modules will be in `node_modules/@myorg/packagename`. The scope folder +(`@myorg`) is simply the name of the scope preceded by an @-symbol, and can +contain any number of scoped packages. + +A scoped package is installed by referencing it by name, preceded by an +@-symbol, in `npm install`: + + npm install @myorg/mypackage + +Or in `package.json`: + + "dependencies": { + "@myorg/mypackage": "^1.3.0" + } + +Note that if the @-symbol is omitted in either case npm will instead attempt to +install from GitHub; see `npm-install(1)`.
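To make the folder layout concrete, here is a rough sketch of what `node_modules` might contain after the install above (the sibling package is hypothetical):

    node_modules
    +-- @myorg
    |   `-- mypackage
    `-- somepackage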
+ +## Requiring scoped packages + +Because scoped packages are installed into a scope folder, you have to +include the name of the scope when requiring them in your code, e.g. + + require('@myorg/mypackage') + +There is nothing special about the way Node treats scope folders; this is +just specifying to require the module `mypackage` in the folder called `@myorg`. + +## Publishing scoped packages + +Scoped packages can be published to any registry that supports them. +*As of 2014-09-03, the public npm registry does not support scoped packages*, +so attempting to publish a scoped package to the registry will fail unless +you have associated that scope with a different registry; see below. + +## Associating a scope with a registry + +Scopes can be associated with a separate registry. This allows you to +seamlessly use a mix of packages from the public npm registry and one or more +private registries, such as npm Enterprise. + +You can associate a scope with a registry at login, e.g. + + npm login --registry=http://reg.example.com --scope=@myco + +Scopes have a many-to-one relationship with registries: one registry can +host multiple scopes, but a scope only ever points to one registry. + +You can also associate a scope with a registry using `npm config`: + + npm config set @myco:registry http://reg.example.com + +Once a scope is associated with a registry, any `npm install` for a package +with that scope will request packages from that registry instead. Any +`npm publish` for a package name that contains the scope will be published to +that registry instead. + +## SEE ALSO + +* npm-install(1) +* npm-publish(1) \ No newline at end of file diff --git a/deps/npm/doc/misc/npm-scripts.md b/deps/npm/doc/misc/npm-scripts.md index b49d9e23d14..054886b4d54 100644 --- a/deps/npm/doc/misc/npm-scripts.md +++ b/deps/npm/doc/misc/npm-scripts.md @@ -3,7 +3,7 @@ npm-scripts(7) -- How npm handles the "scripts" field ## DESCRIPTION -npm supports the "scripts" member of the package.json script, for the +npm supports the "scripts" property of the package.json file, for the following scripts: * prepublish: @@ -33,8 +33,10 @@ following scripts: Run by the `npm restart` command. Note: `npm restart` will run the stop and start scripts if no `restart` script is provided. -Additionally, arbitrary scripts can be run by doing -`npm run-script <stage>`. +Additionally, arbitrary scripts can be executed by running `npm +run-script <stage>`. *Pre* and *post* commands with matching +names will be run for those as well (e.g. `premyscript`, `myscript`, +`postmyscript`). ## NOTE: INSTALL SCRIPTS ARE AN ANTIPATTERN @@ -135,7 +137,7 @@ Configuration parameters are put in the environment with the `npm_config_` prefix. For instance, you can view the effective `root` config by checking the `npm_config_root` environment variable. -### Special: package.json "config" hash +### Special: package.json "config" object The package.json "config" keys are overwritten in the environment if there is a config param of `<name>[@<version>]:<key>`. For example, diff --git a/deps/npm/doc/misc/semver.md b/deps/npm/doc/misc/semver.md index 6c7d28061c6..bd697d959e1 100644 --- a/deps/npm/doc/misc/semver.md +++ b/deps/npm/doc/misc/semver.md @@ -41,53 +41,170 @@ A leading `"="` or `"v"` character is stripped off and ignored. ## Ranges -The following range styles are supported: - -* `1.2.3` A specific version. When nothing else will do. Must be a full - version number, with major, minor, and patch versions specified.
- Note that build metadata is still ignored, so `1.2.3+build2012` will - satisfy this range. -* `>1.2.3` Greater than a specific version. -* `<1.2.3` Less than a specific version. If there is no prerelease - tag on the version range, then no prerelease version will be allowed - either, even though these are technically "less than". -* `>=1.2.3` Greater than or equal to. Note that prerelease versions - are NOT equal to their "normal" equivalents, so `1.2.3-beta` will - not satisfy this range, but `2.3.0-beta` will. -* `<=1.2.3` Less than or equal to. In this case, prerelease versions - ARE allowed, so `1.2.3-beta` would satisfy. +A `version range` is a set of `comparators` which specify versions +that satisfy the range. + +A `comparator` is composed of an `operator` and a `version`. The set +of primitive `operators` is: + +* `<` Less than +* `<=` Less than or equal to +* `>` Greater than +* `>=` Greater than or equal to +* `=` Equal. If no operator is specified, then equality is assumed, + so this operator is optional, but MAY be included. + +For example, the comparator `>=1.2.7` would match the versions +`1.2.7`, `1.2.8`, `2.5.3`, and `1.3.9`, but not the versions `1.2.6` +or `1.1.0`. + +Comparators can be joined by whitespace to form a `comparator set`, +which is satisfied by the **intersection** of all of the comparators +it includes. + +A range is composed of one or more comparator sets, joined by `||`. A +version matches a range if and only if every comparator in at least +one of the `||`-separated comparator sets is satisfied by the version. + +For example, the range `>=1.2.7 <1.3.0` would match the versions +`1.2.7`, `1.2.8`, and `1.2.99`, but not the versions `1.2.6`, `1.3.0`, +or `1.1.0`. + +The range `1.2.7 || >=1.2.9 <2.0.0` would match the versions `1.2.7`, +`1.2.9`, and `1.4.6`, but not the versions `1.2.8` or `2.0.0`. + +### Prerelease Tags + +If a version has a prerelease tag (for example, `1.2.3-alpha.3`) then +it will only be allowed to satisfy comparator sets if at least one +comparator with the same `[major, minor, patch]` tuple also has a +prerelease tag. + +For example, the range `>1.2.3-alpha.3` would be allowed to match the +version `1.2.3-alpha.7`, but it would *not* be satisfied by +`3.4.5-alpha.9`, even though `3.4.5-alpha.9` is technically "greater +than" `1.2.3-alpha.3` according to the SemVer sort rules. The version +range only accepts prerelease tags on the `1.2.3` version. The +version `3.4.5` *would* satisfy the range, because it does not have a +prerelease flag, and `3.4.5` is greater than `1.2.3-alpha.7`. + +The purpose for this behavior is twofold. First, prerelease versions +frequently are updated very quickly, and contain many breaking changes +that are (by the author's design) not yet fit for public consumption. +Therefore, by default, they are excluded from range matching +semantics. + +Second, a user who has opted into using a prerelease version has +clearly indicated the intent to use *that specific* set of +alpha/beta/rc versions. By including a prerelease tag in the range, +the user is indicating that they are aware of the risk. However, it +is still not appropriate to assume that they have opted into taking a +similar risk on the *next* set of prerelease versions. + +### Advanced Range Syntax + +Advanced range syntax desugars to primitive comparators in +deterministic ways. + +Advanced ranges may be combined in the same way as primitive +comparators using white space or `||`. + +#### Hyphen Ranges `X.Y.Z - A.B.C` + +Specifies an inclusive set. 
+ * `1.2.3 - 2.3.4` := `>=1.2.3 <=2.3.4` -* `~1.2.3` := `>=1.2.3-0 <1.3.0-0` "Reasonably close to `1.2.3`". When - using tilde operators, prerelease versions are supported as well, - but a prerelease of the next significant digit will NOT be - satisfactory, so `1.3.0-beta` will not satisfy `~1.2.3`. -* `^1.2.3` := `>=1.2.3-0 <2.0.0-0` "Compatible with `1.2.3`". When - using caret operators, anything from the specified version (including - prerelease) will be supported up to, but not including, the next - major version (or its prereleases). `1.5.1` will satisfy `^1.2.3`, - while `1.2.2` and `2.0.0-beta` will not. -* `^0.1.3` := `>=0.1.3-0 <0.2.0-0` "Compatible with `0.1.3`". `0.x.x` versions are - special: the first non-zero component indicates potentially breaking changes, - meaning the caret operator matches any version with the same first non-zero - component starting at the specified version. -* `^0.0.2` := `=0.0.2` "Only the version `0.0.2` is considered compatible" -* `~1.2` := `>=1.2.0-0 <1.3.0-0` "Any version starting with `1.2`" -* `^1.2` := `>=1.2.0-0 <2.0.0-0` "Any version compatible with `1.2`" -* `1.2.x` := `>=1.2.0-0 <1.3.0-0` "Any version starting with `1.2`" -* `1.2.*` Same as `1.2.x`. -* `1.2` Same as `1.2.x`. -* `~1` := `>=1.0.0-0 <2.0.0-0` "Any version starting with `1`" -* `^1` := `>=1.0.0-0 <2.0.0-0` "Any version compatible with `1`" -* `1.x` := `>=1.0.0-0 <2.0.0-0` "Any version starting with `1`" -* `1.*` Same as `1.x`. -* `1` Same as `1.x`. -* `*` Any version whatsoever. -* `x` Same as `*`. -* `""` (just an empty string) Same as `*`. - - -Ranges can be joined with either a space (which implies "and") or a -`||` (which implies "or"). + +If a partial version is provided as the first version in the inclusive +range, then the missing pieces are replaced with zeroes. + +* `1.2 - 2.3.4` := `>=1.2.0 <=2.3.4` + +If a partial version is provided as the second version in the +inclusive range, then all versions that start with the supplied parts +of the tuple are accepted, but nothing that would be greater than the +provided tuple parts. + +* `1.2.3 - 2.3` := `>=1.2.3 <2.4.0` +* `1.2.3 - 2` := `>=1.2.3 <3.0.0` + +#### X-Ranges `1.2.x` `1.X` `1.2.*` `*` + +Any of `X`, `x`, or `*` may be used to "stand in" for one of the +numeric values in the `[major, minor, patch]` tuple. + +* `*` := `>=0.0.0` (Any version satisfies) +* `1.x` := `>=1.0.0 <2.0.0` (Matching major version) +* `1.2.x` := `>=1.2.0 <1.3.0` (Matching major and minor versions) + +A partial version range is treated as an X-Range, so the special +character is in fact optional. + +* `""` (empty string) := `*` := `>=0.0.0` +* `1` := `1.x.x` := `>=1.0.0 <2.0.0` +* `1.2` := `1.2.x` := `>=1.2.0 <1.3.0` + +#### Tilde Ranges `~1.2.3` `~1.2` `~1` + +Allows patch-level changes if a minor version is specified on the +comparator. Allows minor-level changes if not. + +* `~1.2.3` := `>=1.2.3 <1.(2+1).0` := `>=1.2.3 <1.3.0` +* `~1.2` := `>=1.2.0 <1.(2+1).0` := `>=1.2.0 <1.3.0` (Same as `1.2.x`) +* `~1` := `>=1.0.0 <(1+1).0.0` := `>=1.0.0 <2.0.0` (Same as `1.x`) +* `~0.2.3` := `>=0.2.3 <0.(2+1).0` := `>=0.2.3 <0.3.0` +* `~0.2` := `>=0.2.0 <0.(2+1).0` := `>=0.2.0 <0.3.0` (Same as `0.2.x`) +* `~0` := `>=0.0.0 <(0+1).0.0` := `>=0.0.0 <1.0.0` (Same as `0.x`) +* `~1.2.3-beta.2` := `>=1.2.3-beta.2 <1.3.0` Note that prereleases in + the `1.2.3` version will be allowed, if they are greater than or + equal to `beta.2`. 
So, `1.2.3-beta.4` would be allowed, but + `1.2.4-beta.2` would not, because it is a prerelease of a + different `[major, minor, patch]` tuple. + +Note: this is the same as the `~>` operator in rubygems. + +#### Caret Ranges `^1.2.3` `^0.2.5` `^0.0.4` + +Allows changes that do not modify the left-most non-zero digit in the +`[major, minor, patch]` tuple. In other words, this allows patch and +minor updates for versions `1.0.0` and above, patch updates for +versions `0.X >=0.1.0`, and *no* updates for versions `0.0.X`. + +Many authors treat a `0.x` version as if the `x` were the major +"breaking-change" indicator. + +Caret ranges are ideal when an author may make breaking changes +between `0.2.4` and `0.3.0` releases, which is a common practice. +However, it presumes that there will *not* be breaking changes between +`0.2.4` and `0.2.5`. It allows for changes that are presumed to be +additive (but non-breaking), according to commonly observed practices. + +* `^1.2.3` := `>=1.2.3 <2.0.0` +* `^0.2.3` := `>=0.2.3 <0.3.0` +* `^0.0.3` := `>=0.0.3 <0.0.4` +* `^1.2.3-beta.2` := `>=1.2.3-beta.2 <2.0.0` Note that prereleases in + the `1.2.3` version will be allowed, if they are greater than or + equal to `beta.2`. So, `1.2.3-beta.4` would be allowed, but + `1.2.4-beta.2` would not, because it is a prerelease of a + different `[major, minor, patch]` tuple. +* `^0.0.3-beta` := `>=0.0.3-beta <0.0.4` Note that prereleases in the + `0.0.3` version *only* will be allowed, if they are greater than or + equal to `beta`. So, `0.0.3-pr.2` would be allowed. + +When parsing caret ranges, a missing `patch` value desugars to the +number `0`, but will allow flexibility within that value, even if the +major and minor versions are both `0`. + +* `^1.2.x` := `>=1.2.0 <2.0.0` +* `^0.0.x` := `>=0.0.0 <0.1.0` +* `^0.0` := `>=0.0.0 <0.1.0` + +Missing `minor` and `patch` values will desugar to zero, but also +allow flexibility within those values, even if the major version is +zero. + +* `^1.x` := `>=1.0.0 <2.0.0` +* `^0.x` := `>=0.0.0 <1.0.0` ## Functions diff --git a/deps/npm/html/doc/README.html b/deps/npm/html/doc/README.html index 6a4832313b0..96bc06401a0 100644 --- a/deps/npm/html/doc/README.html +++ b/deps/npm/html/doc/README.html @@ -19,11 +19,11 @@

IMPORTANT

To install an old and unsupported version of npm that works on node 0.3 and prior, clone the git repo and dig through the old tags and branches.

Super Easy Install

-

npm comes with node now.

+

npm comes with node now.

Windows Computers

-

Get the MSI. npm is in it.

+

Get the MSI. npm is in it.

Apple Macintosh Computers

-

Get the pkg. npm is in it.

+

Get the pkg. npm is in it.

Other Sorts of Unices

Run make install. npm will be installed with node.

If you want a more fancy pants install (a different version, customized @@ -108,7 +108,7 @@

Using npm Programmatically

if (er) return commandFailed(er) // command succeeded, and data might have some info }) - npm.on("log", function (message) { .... }) + npm.registry.log.on("log", function (message) { .... }) })

The load function takes an object hash of the command-line configs. The various npm.commands.<cmd> functions take an array of @@ -141,7 +141,7 @@

If you have a complaint about a package in the public npm registry, and cannot resolve it with the package owner, please email -support@npmjs.com and explain the situation.

+support@npmjs.com and explain the situation.

Any data published to The npm Registry (including user account information) may be removed or modified at the sole discretion of the npm server administrators.

@@ -161,7 +161,7 @@

BUGS

  • web: https://github.com/npm/npm/issues
  • email: -npm-@googlegroups.com
  • +npm-@googlegroups.com

    Be sure to include all of the output from the npm command that didn't work as expected. The npm-debug.log file is also helpful to provide.

    @@ -169,10 +169,10 @@

    BUGS

    will no doubt tell you to put the output in a gist or email.

    SEE ALSO

    @@ -186,5 +186,5 @@

    SEE ALSO

           - + diff --git a/deps/npm/html/doc/api/npm-bin.html b/deps/npm/html/doc/api/npm-bin.html index 3a170b0244e..a2561247569 100644 --- a/deps/npm/html/doc/api/npm-bin.html +++ b/deps/npm/html/doc/api/npm-bin.html @@ -15,7 +15,7 @@

    SYNOPSIS

    DESCRIPTION

    Print the folder where npm will install executables.

    This function should not be used programmatically. Instead, just refer -to the npm.bin member.

    +to the npm.bin property.

    @@ -28,5 +28,5 @@

    SYNOPSIS

           - + diff --git a/deps/npm/html/doc/api/npm-bugs.html b/deps/npm/html/doc/api/npm-bugs.html index 1ab1393fff2..9cf2cc4131f 100644 --- a/deps/npm/html/doc/api/npm-bugs.html +++ b/deps/npm/html/doc/api/npm-bugs.html @@ -33,5 +33,5 @@

    SYNOPSIS

           - + diff --git a/deps/npm/html/doc/api/npm-cache.html b/deps/npm/html/doc/api/npm-cache.html index ed67808303f..6dfc4a0e5cc 100644 --- a/deps/npm/html/doc/api/npm-cache.html +++ b/deps/npm/html/doc/api/npm-cache.html @@ -18,7 +18,7 @@

    SYNOPSIS

    npm.commands.cache.add([args], callback) npm.commands.cache.read(name, version, forceBypass, callback)

    DESCRIPTION

    -

    This acts much the same ways as the npm-cache(1) command line +

    This acts much the same ways as the npm-cache(1) command line functionality.

    The callback is called with the package.json data of the thing that is eventually added to or read from the cache.

    @@ -42,5 +42,5 @@

    SYNOPSIS

           - + diff --git a/deps/npm/html/doc/api/npm-commands.html b/deps/npm/html/doc/api/npm-commands.html index 9ab89fb5cd0..3f3ae544e99 100644 --- a/deps/npm/html/doc/api/npm-commands.html +++ b/deps/npm/html/doc/api/npm-commands.html @@ -22,7 +22,7 @@

    SYNOPSIS

    usage, or man 3 npm-<command> for programmatic usage.

    SEE ALSO

    @@ -36,5 +36,5 @@

    SEE ALSO

           - + diff --git a/deps/npm/html/doc/api/npm-config.html b/deps/npm/html/doc/api/npm-config.html index 2fe37f217cb..3767a46aca5 100644 --- a/deps/npm/html/doc/api/npm-config.html +++ b/deps/npm/html/doc/api/npm-config.html @@ -43,7 +43,7 @@

    SYNOPSIS

    functions instead.

    SEE ALSO

    @@ -57,5 +57,5 @@

    SEE ALSO

           - + diff --git a/deps/npm/html/doc/api/npm-deprecate.html b/deps/npm/html/doc/api/npm-deprecate.html index 557d2efe3c4..a235c2baa96 100644 --- a/deps/npm/html/doc/api/npm-deprecate.html +++ b/deps/npm/html/doc/api/npm-deprecate.html @@ -31,9 +31,9 @@

    SYNOPSIS

    To un-deprecate a package, specify an empty string ("") for the message argument.

    SEE ALSO

    @@ -47,5 +47,5 @@

    SEE ALSO

           - + diff --git a/deps/npm/html/doc/api/npm-docs.html b/deps/npm/html/doc/api/npm-docs.html index d42b27b037f..222b90e70a8 100644 --- a/deps/npm/html/doc/api/npm-docs.html +++ b/deps/npm/html/doc/api/npm-docs.html @@ -33,5 +33,5 @@

    SYNOPSIS

           - + diff --git a/deps/npm/html/doc/api/npm-edit.html b/deps/npm/html/doc/api/npm-edit.html index f6f4617e128..aa3d7bdb0ba 100644 --- a/deps/npm/html/doc/api/npm-edit.html +++ b/deps/npm/html/doc/api/npm-edit.html @@ -36,5 +36,5 @@

    SYNOPSIS

           - + diff --git a/deps/npm/html/doc/api/npm-explore.html b/deps/npm/html/doc/api/npm-explore.html index 0136e705a2a..fbfd0cccc2d 100644 --- a/deps/npm/html/doc/api/npm-explore.html +++ b/deps/npm/html/doc/api/npm-explore.html @@ -31,5 +31,5 @@

    SYNOPSIS

           - + diff --git a/deps/npm/html/doc/api/npm-help-search.html b/deps/npm/html/doc/api/npm-help-search.html index e2bb08abbd0..886d0c5acbe 100644 --- a/deps/npm/html/doc/api/npm-help-search.html +++ b/deps/npm/html/doc/api/npm-help-search.html @@ -31,7 +31,7 @@

    SYNOPSIS

  • file: Name of the file that matched
  • -

    The silent parameter is not neccessary not used, but it may in the future.

    +

The silent parameter is not currently used, but it may be in the future.

    @@ -44,5 +44,5 @@

    SYNOPSIS

           - + diff --git a/deps/npm/html/doc/api/npm-init.html b/deps/npm/html/doc/api/npm-init.html index ca23df8f483..80b14a41df3 100644 --- a/deps/npm/html/doc/api/npm-init.html +++ b/deps/npm/html/doc/api/npm-init.html @@ -26,7 +26,7 @@

    SYNOPSIS

    preferred method. If you're sure you want to handle command-line prompting, then go ahead and use this programmatically.

    SEE ALSO

    -

    package.json(5)

    +

    package.json(5)

    @@ -39,5 +39,5 @@

    SEE ALSO

           - + diff --git a/deps/npm/html/doc/api/npm-install.html b/deps/npm/html/doc/api/npm-install.html index c0e0eb78bff..43cf4f166ff 100644 --- a/deps/npm/html/doc/api/npm-install.html +++ b/deps/npm/html/doc/api/npm-install.html @@ -32,5 +32,5 @@

    SYNOPSIS

           - + diff --git a/deps/npm/html/doc/api/npm-link.html b/deps/npm/html/doc/api/npm-link.html index aff1250a4cc..c41a31c9b4b 100644 --- a/deps/npm/html/doc/api/npm-link.html +++ b/deps/npm/html/doc/api/npm-link.html @@ -42,5 +42,5 @@

    SYNOPSIS

           - + diff --git a/deps/npm/html/doc/api/npm-load.html b/deps/npm/html/doc/api/npm-load.html index 7451a75728e..fbf22994f6d 100644 --- a/deps/npm/html/doc/api/npm-load.html +++ b/deps/npm/html/doc/api/npm-load.html @@ -15,9 +15,9 @@

    SYNOPSIS

    DESCRIPTION

    npm.load() must be called before any other function call. Both parameters are optional, but the second is recommended.

    -

    The first parameter is an object hash of command-line config params, and the -second parameter is a callback that will be called when npm is loaded and -ready to serve.

    +

    The first parameter is an object containing command-line config params, and the +second parameter is a callback that will be called when npm is loaded and ready +to serve.

    The first parameter should follow a similar structure as the package.json config object.

    For example, to emulate the --dev flag, pass an object that looks like this:

    @@ -37,5 +37,5 @@

    SYNOPSIS

           - + diff --git a/deps/npm/html/doc/api/npm-ls.html b/deps/npm/html/doc/api/npm-ls.html index f1c2504918d..e221bab4a0f 100644 --- a/deps/npm/html/doc/api/npm-ls.html +++ b/deps/npm/html/doc/api/npm-ls.html @@ -63,5 +63,5 @@

    global

           - + diff --git a/deps/npm/html/doc/api/npm-outdated.html b/deps/npm/html/doc/api/npm-outdated.html index a7e88b882c2..91fafce3297 100644 --- a/deps/npm/html/doc/api/npm-outdated.html +++ b/deps/npm/html/doc/api/npm-outdated.html @@ -28,5 +28,5 @@

    SYNOPSIS

           - + diff --git a/deps/npm/html/doc/api/npm-owner.html b/deps/npm/html/doc/api/npm-owner.html index eb8f9abafa1..878a9e59d86 100644 --- a/deps/npm/html/doc/api/npm-owner.html +++ b/deps/npm/html/doc/api/npm-owner.html @@ -32,8 +32,8 @@

    SYNOPSIS

    that is not implemented at this time.

    SEE ALSO

    @@ -47,5 +47,5 @@

    SEE ALSO

           - + diff --git a/deps/npm/html/doc/api/npm-pack.html b/deps/npm/html/doc/api/npm-pack.html index c2476f20e00..1e146e41f4f 100644 --- a/deps/npm/html/doc/api/npm-pack.html +++ b/deps/npm/html/doc/api/npm-pack.html @@ -33,5 +33,5 @@

    SYNOPSIS

           - + diff --git a/deps/npm/html/doc/api/npm-prefix.html b/deps/npm/html/doc/api/npm-prefix.html index 583079e336d..bd406009c5a 100644 --- a/deps/npm/html/doc/api/npm-prefix.html +++ b/deps/npm/html/doc/api/npm-prefix.html @@ -29,5 +29,5 @@

    SYNOPSIS

           - + diff --git a/deps/npm/html/doc/api/npm-prune.html b/deps/npm/html/doc/api/npm-prune.html index fabfab57d3c..0e446c26f58 100644 --- a/deps/npm/html/doc/api/npm-prune.html +++ b/deps/npm/html/doc/api/npm-prune.html @@ -30,5 +30,5 @@

    SYNOPSIS

           - + diff --git a/deps/npm/html/doc/api/npm-publish.html b/deps/npm/html/doc/api/npm-publish.html index 9d338cebbd7..0e41c2ad0ef 100644 --- a/deps/npm/html/doc/api/npm-publish.html +++ b/deps/npm/html/doc/api/npm-publish.html @@ -30,9 +30,9 @@

    SYNOPSIS

    the registry. Overwrites when the "force" environment variable is set.

    SEE ALSO

    @@ -46,5 +46,5 @@

    SEE ALSO

           - + diff --git a/deps/npm/html/doc/api/npm-rebuild.html b/deps/npm/html/doc/api/npm-rebuild.html index 821becbf543..f5d2e6a6629 100644 --- a/deps/npm/html/doc/api/npm-rebuild.html +++ b/deps/npm/html/doc/api/npm-rebuild.html @@ -30,5 +30,5 @@

    CONFIGURATION

           - + diff --git a/deps/npm/html/doc/api/npm-repo.html b/deps/npm/html/doc/api/npm-repo.html index d624659c155..024e7279b76 100644 --- a/deps/npm/html/doc/api/npm-repo.html +++ b/deps/npm/html/doc/api/npm-repo.html @@ -33,5 +33,5 @@

    SYNOPSIS

           - + diff --git a/deps/npm/html/doc/api/npm-restart.html b/deps/npm/html/doc/api/npm-restart.html index 67729df285d..29d1a566708 100644 --- a/deps/npm/html/doc/api/npm-restart.html +++ b/deps/npm/html/doc/api/npm-restart.html @@ -21,8 +21,8 @@

    SYNOPSIS

    in the packages parameter.

    SEE ALSO

    @@ -36,5 +36,5 @@

    SEE ALSO

           - + diff --git a/deps/npm/html/doc/api/npm-root.html b/deps/npm/html/doc/api/npm-root.html index 4f9b5293166..b639a33e7d8 100644 --- a/deps/npm/html/doc/api/npm-root.html +++ b/deps/npm/html/doc/api/npm-root.html @@ -29,5 +29,5 @@

    SYNOPSIS

           - + diff --git a/deps/npm/html/doc/api/npm-run-script.html b/deps/npm/html/doc/api/npm-run-script.html index c7ca6138338..26707808009 100644 --- a/deps/npm/html/doc/api/npm-run-script.html +++ b/deps/npm/html/doc/api/npm-run-script.html @@ -23,11 +23,11 @@

    SYNOPSIS

    assumed to be the command to run. All other elements are ignored.

    SEE ALSO

    @@ -41,5 +41,5 @@

    SEE ALSO

           - + diff --git a/deps/npm/html/doc/api/npm-search.html b/deps/npm/html/doc/api/npm-search.html index 72fa33509da..903aa521eb5 100644 --- a/deps/npm/html/doc/api/npm-search.html +++ b/deps/npm/html/doc/api/npm-search.html @@ -53,5 +53,5 @@

    SYNOPSIS

           - + diff --git a/deps/npm/html/doc/api/npm-shrinkwrap.html b/deps/npm/html/doc/api/npm-shrinkwrap.html index 172646a9ee5..eed523cdc5f 100644 --- a/deps/npm/html/doc/api/npm-shrinkwrap.html +++ b/deps/npm/html/doc/api/npm-shrinkwrap.html @@ -33,5 +33,5 @@

    SYNOPSIS

           - + diff --git a/deps/npm/html/doc/api/npm-start.html b/deps/npm/html/doc/api/npm-start.html index 813b28ce2aa..23678bc9ec1 100644 --- a/deps/npm/html/doc/api/npm-start.html +++ b/deps/npm/html/doc/api/npm-start.html @@ -28,5 +28,5 @@

    SYNOPSIS

           - + diff --git a/deps/npm/html/doc/api/npm-stop.html b/deps/npm/html/doc/api/npm-stop.html index 65f5c9f26e6..ed3b714f079 100644 --- a/deps/npm/html/doc/api/npm-stop.html +++ b/deps/npm/html/doc/api/npm-stop.html @@ -28,5 +28,5 @@

    SYNOPSIS

           - + diff --git a/deps/npm/html/doc/api/npm-submodule.html b/deps/npm/html/doc/api/npm-submodule.html index 35364403c35..d70ee36d49f 100644 --- a/deps/npm/html/doc/api/npm-submodule.html +++ b/deps/npm/html/doc/api/npm-submodule.html @@ -42,5 +42,5 @@

    SEE ALSO

           - + diff --git a/deps/npm/html/doc/api/npm-tag.html b/deps/npm/html/doc/api/npm-tag.html index cf9c71c3c50..b4a326161e1 100644 --- a/deps/npm/html/doc/api/npm-tag.html +++ b/deps/npm/html/doc/api/npm-tag.html @@ -36,5 +36,5 @@

    SYNOPSIS

           - + diff --git a/deps/npm/html/doc/api/npm-test.html b/deps/npm/html/doc/api/npm-test.html index f2d37483ac3..78168084c2d 100644 --- a/deps/npm/html/doc/api/npm-test.html +++ b/deps/npm/html/doc/api/npm-test.html @@ -30,5 +30,5 @@

    SYNOPSIS

           - + diff --git a/deps/npm/html/doc/api/npm-uninstall.html b/deps/npm/html/doc/api/npm-uninstall.html index 2abfd089964..962ff879c3d 100644 --- a/deps/npm/html/doc/api/npm-uninstall.html +++ b/deps/npm/html/doc/api/npm-uninstall.html @@ -30,5 +30,5 @@

    SYNOPSIS

           - + diff --git a/deps/npm/html/doc/api/npm-unpublish.html b/deps/npm/html/doc/api/npm-unpublish.html index f6412cf7d18..2b9a5c58f61 100644 --- a/deps/npm/html/doc/api/npm-unpublish.html +++ b/deps/npm/html/doc/api/npm-unpublish.html @@ -33,5 +33,5 @@

    SYNOPSIS

           - + diff --git a/deps/npm/html/doc/api/npm-update.html b/deps/npm/html/doc/api/npm-update.html index 60bcde26543..f60e83de3f5 100644 --- a/deps/npm/html/doc/api/npm-update.html +++ b/deps/npm/html/doc/api/npm-update.html @@ -27,5 +27,5 @@

    SYNOPSIS

           - + diff --git a/deps/npm/html/doc/api/npm-version.html b/deps/npm/html/doc/api/npm-version.html index 89858221647..c4ce078a482 100644 --- a/deps/npm/html/doc/api/npm-version.html +++ b/deps/npm/html/doc/api/npm-version.html @@ -32,5 +32,5 @@

    SYNOPSIS

           - + diff --git a/deps/npm/html/doc/api/npm-view.html b/deps/npm/html/doc/api/npm-view.html index 59f68ee64ac..75c75fdbb47 100644 --- a/deps/npm/html/doc/api/npm-view.html +++ b/deps/npm/html/doc/api/npm-view.html @@ -81,5 +81,5 @@

    RETURN VALUE

           - + diff --git a/deps/npm/html/doc/api/npm-whoami.html b/deps/npm/html/doc/api/npm-whoami.html index 9380f8664a7..4ed6d79a428 100644 --- a/deps/npm/html/doc/api/npm-whoami.html +++ b/deps/npm/html/doc/api/npm-whoami.html @@ -29,5 +29,5 @@

    SYNOPSIS

           - + diff --git a/deps/npm/html/doc/api/npm.html b/deps/npm/html/doc/api/npm.html index 2335b7a226d..67ff3d32f8a 100644 --- a/deps/npm/html/doc/api/npm.html +++ b/deps/npm/html/doc/api/npm.html @@ -23,20 +23,19 @@

    SYNOPSIS

    npm.commands.install(["package"], cb) })

    VERSION

    -

    1.4.28

    +

    2.1.6

    DESCRIPTION

    This is the API documentation for npm. To find documentation of the command line -client, see npm(1).

    -

    Prior to using npm's commands, npm.load() must be called. -If you provide configObject as an object hash of top-level -configs, they override the values stored in the various config -locations. In the npm command line client, this set of configs -is parsed from the command line options. Additional configuration -params are loaded from two configuration files. See npm-config(1), -npm-config(7), and npmrc(5) for more information.

    +client, see npm(1).

    +

    Prior to using npm's commands, npm.load() must be called. If you provide +configObject as an object map of top-level configs, they override the values +stored in the various config locations. In the npm command line client, this +set of configs is parsed from the command line options. Additional +configuration params are loaded from two configuration files. See +npm-config(1), npm-config(7), and npmrc(5) for more information.

    After that, each of the functions are accessible in the -commands object: npm.commands.<cmd>. See npm-index(7) for a list of +commands object: npm.commands.<cmd>. See npm-index(7) for a list of all possible commands.

    All commands on the command object take an array of positional argument strings. The last argument to any function is a callback. Some @@ -80,9 +79,9 @@

    METHODS AND PROPERTIES

    MAGIC

    -

    For each of the methods in the npm.commands hash, a method is added to -the npm object, which takes a set of positional string arguments rather -than an array and a callback.

    +

    For each of the methods in the npm.commands object, a method is added to the +npm object, which takes a set of positional string arguments rather than an +array and a callback.

    If the last argument is a callback, then it will use the supplied callback. However, if no callback is provided, then it will print out the error or results.

    @@ -110,5 +109,5 @@

    ABBREVS

           - + diff --git a/deps/npm/html/doc/cli/npm-adduser.html b/deps/npm/html/doc/cli/npm-adduser.html index 9a0e55ee607..84f0ac389ef 100644 --- a/deps/npm/html/doc/cli/npm-adduser.html +++ b/deps/npm/html/doc/cli/npm-adduser.html @@ -11,28 +11,49 @@

    npm-adduser

    Add a registry user account

    SYNOPSIS

    -
    npm adduser
    +
    npm adduser [--registry=url] [--scope=@orgname] [--always-auth]
     

    DESCRIPTION

    -

    Create or verify a user named <username> in the npm registry, and -save the credentials to the .npmrc file.

    +

    Create or verify a user named <username> in the specified registry, and +save the credentials to the .npmrc file. If no registry is specified, +the default registry will be used (see npm-config(7)).

    The username, password, and email are read in from prompts.

    You may use this command to change your email address, but not username or password.

    -

    To reset your password, go to https://npmjs.org/forgot

    +

    To reset your password, go to https://www.npmjs.org/forgot

    You may use this command multiple times with the same user account to authorize on a new machine.

    +

    npm login is an alias to adduser and behaves exactly the same way.

    CONFIGURATION

    registry

    Default: http://registry.npmjs.org/

    -

    The base URL of the npm package registry.

    +

    The base URL of the npm package registry. If scope is also specified, +this registry will only be used for packages with that scope. See npm-scope(7).

    +

    scope

    +

    Default: none

    +

    If specified, the user and login credentials given will be associated +with the specified scope. See npm-scope(7). You can use both at the same time, +e.g.

    +
    npm adduser --registry=http://myregistry.example.com --scope=@myco
    +

    This will set a registry for the given scope and login or create a user for +that registry at the same time.
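As a sanity check, a minimal sketch of confirming the association afterwards (the registry URL is the hypothetical one from the example above); npm config get prints the registry now mapped to the scope:

    npm config get @myco:registry
    # http://myregistry.example.com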

    +

    always-auth

    +

    Default: false

    +

    If specified, save configuration indicating that all requests to the given +registry should include authorization information. Useful for private +registries. Can be used with --registry and / or --scope, e.g.

    +
    npm adduser --registry=http://private-registry.example.com --always-auth
    +

This will ensure that all requests to that registry (including for tarballs) +include an authorization header. See always-auth in npm-config(7) for more +details on always-auth. Registry-specific configuration of always-auth takes +precedence over any global configuration.

    SEE ALSO

    @@ -46,5 +67,5 @@

    SEE ALSO

           - + diff --git a/deps/npm/html/doc/cli/npm-bin.html b/deps/npm/html/doc/cli/npm-bin.html index d6e055aaa3d..7f6e3a5c0dd 100644 --- a/deps/npm/html/doc/cli/npm-bin.html +++ b/deps/npm/html/doc/cli/npm-bin.html @@ -16,12 +16,12 @@

    SYNOPSIS

    Print the folder where npm will install executables.

    SEE ALSO

    @@ -35,5 +35,5 @@

    SEE ALSO

           - + diff --git a/deps/npm/html/doc/cli/npm-bugs.html b/deps/npm/html/doc/cli/npm-bugs.html index 8693485aa3b..7758efa7264 100644 --- a/deps/npm/html/doc/cli/npm-bugs.html +++ b/deps/npm/html/doc/cli/npm-bugs.html @@ -33,14 +33,14 @@

    registry

    The base URL of the npm package registry.

    SEE ALSO

    @@ -54,5 +54,5 @@

    SEE ALSO

           - + diff --git a/deps/npm/html/doc/cli/npm-build.html b/deps/npm/html/doc/cli/npm-build.html index 58dfd36fde0..ca62cb24651 100644 --- a/deps/npm/html/doc/cli/npm-build.html +++ b/deps/npm/html/doc/cli/npm-build.html @@ -21,10 +21,10 @@

    DESCRIPTION

    It should generally not be called directly.

    SEE ALSO

    @@ -38,5 +38,5 @@

    SEE ALSO

           - + diff --git a/deps/npm/html/doc/cli/npm-bundle.html b/deps/npm/html/doc/cli/npm-bundle.html index 47355abbf21..9b833d0b6dd 100644 --- a/deps/npm/html/doc/cli/npm-bundle.html +++ b/deps/npm/html/doc/cli/npm-bundle.html @@ -17,7 +17,7 @@

    DESCRIPTION

    Just use npm install now to do what npm bundle used to do.

    SEE ALSO

    @@ -31,5 +31,5 @@

    SEE ALSO

           - + diff --git a/deps/npm/html/doc/cli/npm-cache.html b/deps/npm/html/doc/cli/npm-cache.html index 3b84f979dcc..53835b35db3 100644 --- a/deps/npm/html/doc/cli/npm-cache.html +++ b/deps/npm/html/doc/cli/npm-cache.html @@ -61,13 +61,13 @@

    cache

    The root cache folder.

    SEE ALSO

    @@ -81,5 +81,5 @@

    SEE ALSO

           - + diff --git a/deps/npm/html/doc/cli/npm-completion.html b/deps/npm/html/doc/cli/npm-completion.html index cc56d45623d..5a678ba352f 100644 --- a/deps/npm/html/doc/cli/npm-completion.html +++ b/deps/npm/html/doc/cli/npm-completion.html @@ -26,9 +26,9 @@

    SYNOPSIS

    completions based on the arguments.

    SEE ALSO

    @@ -42,5 +42,5 @@

    SEE ALSO

           - + diff --git a/deps/npm/html/doc/cli/npm-config.html b/deps/npm/html/doc/cli/npm-config.html index 52efed825b9..d97845e708a 100644 --- a/deps/npm/html/doc/cli/npm-config.html +++ b/deps/npm/html/doc/cli/npm-config.html @@ -22,8 +22,8 @@

    SYNOPSIS

    DESCRIPTION

    npm gets its config settings from the command line, environment variables, npmrc files, and in some cases, the package.json file.

    -

    See npmrc(5) for more information about the npmrc files.

    -

    See npm-config(7) for a more thorough discussion of the mechanisms +

    See npmrc(5) for more information about the npmrc files.

    +

    See npm-config(7) for a more thorough discussion of the mechanisms involved.

    The npm config command can be used to update and edit the contents of the user and global npmrc files.

    @@ -48,11 +48,11 @@

    edit

    global config.

    SEE ALSO

    @@ -66,5 +66,5 @@

    SEE ALSO

           - + diff --git a/deps/npm/html/doc/cli/npm-dedupe.html b/deps/npm/html/doc/cli/npm-dedupe.html index f2d9794de90..bf6c9974ea4 100644 --- a/deps/npm/html/doc/cli/npm-dedupe.html +++ b/deps/npm/html/doc/cli/npm-dedupe.html @@ -23,7 +23,7 @@

    SYNOPSIS

    | `-- c@1.0.3 `-- d <-- depends on c@~1.0.9 `-- c@1.0.10 -

    In this case, npm-dedupe(1) will transform the tree to:

    +

    In this case, npm-dedupe(1) will transform the tree to:

    a
     +-- b
     +-- d
    @@ -47,9 +47,9 @@ 

    SYNOPSIS

    versions.

    SEE ALSO

    @@ -63,5 +63,5 @@

    SEE ALSO

           - + diff --git a/deps/npm/html/doc/cli/npm-deprecate.html b/deps/npm/html/doc/cli/npm-deprecate.html index 01640248b57..8182c6ecf1b 100644 --- a/deps/npm/html/doc/cli/npm-deprecate.html +++ b/deps/npm/html/doc/cli/npm-deprecate.html @@ -23,8 +23,8 @@

    SYNOPSIS

    To un-deprecate a package, specify an empty string ("") for the message argument.

    SEE ALSO

    @@ -38,5 +38,5 @@

    SEE ALSO

           - + diff --git a/deps/npm/html/doc/cli/npm-docs.html b/deps/npm/html/doc/cli/npm-docs.html index 05b445cdaed..e9f2c9e732e 100644 --- a/deps/npm/html/doc/cli/npm-docs.html +++ b/deps/npm/html/doc/cli/npm-docs.html @@ -36,13 +36,13 @@

    registry

    The base URL of the npm package registry.

    SEE ALSO

    @@ -56,5 +56,5 @@

    SEE ALSO

           - + diff --git a/deps/npm/html/doc/cli/npm-edit.html b/deps/npm/html/doc/cli/npm-edit.html index 50d21486a0e..24a70fe7fc1 100644 --- a/deps/npm/html/doc/cli/npm-edit.html +++ b/deps/npm/html/doc/cli/npm-edit.html @@ -14,7 +14,7 @@

    SYNOPSIS

    npm edit <name>[@<version>]
     

    DESCRIPTION

    Opens the package folder in the default editor (or whatever you've -configured as the npm editor config -- see npm-config(7).)

    +configured as the npm editor config -- see npm-config(7).)

    After it has been edited, the package is rebuilt so as to pick up any changes in compiled packages.

    For instance, you can do npm install connect to install connect @@ -30,12 +30,12 @@

    editor

    The command to run for npm edit or npm config edit.

    SEE ALSO

    @@ -49,5 +49,5 @@

    SEE ALSO

           - + diff --git a/deps/npm/html/doc/cli/npm-explore.html b/deps/npm/html/doc/cli/npm-explore.html index c3c3127de6c..eeb49064486 100644 --- a/deps/npm/html/doc/cli/npm-explore.html +++ b/deps/npm/html/doc/cli/npm-explore.html @@ -31,12 +31,11 @@

    shell

    The shell to run for the npm explore command.

    SEE ALSO

    @@ -50,5 +49,5 @@

    SEE ALSO

           - + diff --git a/deps/npm/html/doc/cli/npm-help-search.html b/deps/npm/html/doc/cli/npm-help-search.html index d09aecf8c1b..2cf7506f03f 100644 --- a/deps/npm/html/doc/cli/npm-help-search.html +++ b/deps/npm/html/doc/cli/npm-help-search.html @@ -30,9 +30,9 @@

    long

    If false, then help-search will just list out the help topics found.

    SEE ALSO

    @@ -46,5 +46,5 @@

    SEE ALSO

           - + diff --git a/deps/npm/html/doc/cli/npm-help.html b/deps/npm/html/doc/cli/npm-help.html index bb5a2e16677..12c6c0e8fd2 100644 --- a/deps/npm/html/doc/cli/npm-help.html +++ b/deps/npm/html/doc/cli/npm-help.html @@ -29,16 +29,16 @@

    viewer

    Set to "browser" to view html help content in the default web browser.

    SEE ALSO

    @@ -52,5 +52,5 @@

    SEE ALSO

           - + diff --git a/deps/npm/html/doc/cli/npm-init.html b/deps/npm/html/doc/cli/npm-init.html index ec53ca25679..e7640ea46c6 100644 --- a/deps/npm/html/doc/cli/npm-init.html +++ b/deps/npm/html/doc/cli/npm-init.html @@ -11,7 +11,7 @@

    npm-init

    Interactively create a package.json file

    SYNOPSIS

    -
    npm init
    +
    npm init [-f|--force|-y|--yes]
     

    DESCRIPTION

    This will ask you a bunch of questions, and then write a package.json for you.

    It attempts to make reasonable guesses about what you want things to be set to, @@ -20,11 +20,13 @@

    SYNOPSIS

    the options in there.

    It is strictly additive, so it does not delete options from your package.json without a really good reason to do so.

    +

    If you invoke it with -f, --force, -y, or --yes, it will use only +defaults and not prompt you for any options.

    SEE ALSO

    @@ -38,5 +40,5 @@

    SEE ALSO

diff --git a/deps/npm/html/doc/cli/npm-install.html b/deps/npm/html/doc/cli/npm-install.html
index 28b6afe39dd..3759f011559 100644
--- a/deps/npm/html/doc/cli/npm-install.html
+++ b/deps/npm/html/doc/cli/npm-install.html
@@ -15,21 +15,21 @@

    SYNOPSIS

npm install <tarball file>
npm install <tarball url>
npm install <folder>
-npm install <name> [--save|--save-dev|--save-optional] [--save-exact]
-npm install <name>@<tag>
-npm install <name>@<version>
-npm install <name>@<version range>
+npm install [@<scope>/]<name> [--save|--save-dev|--save-optional] [--save-exact]
+npm install [@<scope>/]<name>@<tag>
+npm install [@<scope>/]<name>@<version>
+npm install [@<scope>/]<name>@<version range>
npm i (with any of the previous argument usage)

    DESCRIPTION

    This command installs a package, and any packages that it depends on. If the package has a shrinkwrap file, the installation of dependencies will be driven -by that. See npm-shrinkwrap(1).

    +by that. See npm-shrinkwrap(1).

    A package is:

    • a) a folder containing a program described by a package.json file
    • b) a gzipped tarball containing (a)
    • c) a url that resolves to (b)
    • -
    • d) a <name>@<version> that is published on the registry (see npm-registry(7)) with (c)
    • +
    • d) a <name>@<version> that is published on the registry (see npm-registry(7)) with (c)
    • e) a <name>@<tag> that points to (d)
    • f) a <name> that has a "latest" tag satisfying (e)
    • g) a <git remote url> that resolves to (b)
    • @@ -64,9 +64,9 @@

      SYNOPSIS

      Example:

          npm install https://github.com/indexzero/forever/tarball/v0.5.6
       
      -
    • npm install <name> [--save|--save-dev|--save-optional]:

      +
    • npm install [@<scope>/]<name> [--save|--save-dev|--save-optional]:

      Do a <name>@<tag> install, where <tag> is the "tag" config. (See - npm-config(7).)

      + npm-config(7).)

      In most cases, this will install the latest version of the module published on npm.

      Example:

      @@ -85,8 +85,16 @@

      SYNOPSIS

    • --save-exact: Saved dependencies will be configured with an exact version rather than using npm's default semver range operator.

      +

      <scope> is optional. The package will be downloaded from the registry +associated with the specified scope. If no registry is associated with +the given scope the default registry is assumed. See npm-scope(7).

      +

Note: if you do not include the @-symbol on your scope name, npm will +interpret this as a GitHub repository instead, see below. Scope names +must also be followed by a slash.

      Examples:

      npm install sax --save
      +npm install githubname/reponame
      +npm install @myorg/privatepackage
       npm install node-tap --save-dev
       npm install dtrace-provider --save-optional
       npm install readable-stream --save --save-exact
      @@ -98,27 +106,38 @@ 

      SYNOPSIS

      working directory, then it will try to install that, and only try to fetch the package by name if it is not valid.
        -
      • npm install <name>@<tag>:

        +
      • npm install [@<scope>/]<name>@<tag>:

        Install the version of the package that is referenced by the specified tag. If the tag does not exist in the registry data for that package, then this will fail.

        Example:

            npm install sax@latest
        +    npm install @myorg/mypackage@latest
         
      • -
      • npm install <name>@<version>:

        -

        Install the specified version of the package. This will fail if the version - has not been published to the registry.

        +
      • npm install [@<scope>/]<name>@<version>:

        +

        Install the specified version of the package. This will fail if the + version has not been published to the registry.

        Example:

            npm install sax@0.1.1
        +    npm install @myorg/privatepackage@1.5.0
         
      • -
      • npm install <name>@<version range>:

        +
      • npm install [@<scope>/]<name>@<version range>:

        Install a version of the package matching the specified version range. This - will follow the same rules for resolving dependencies described in package.json(5).

        + will follow the same rules for resolving dependencies described in package.json(5).

        Note that most version ranges must be put in quotes so that your shell will treat it as a single argument.

        Example:

            npm install sax@">=0.1.0 <0.2.0"
        +    npm install @myorg/privatepackage@">=0.1.0 <0.2.0"
         
      • +
      • npm install <githubname>/<githubrepo>:

        +

Install the package at https://github.com/githubname/githubrepo by + attempting to clone it using git.

        +

        Example:

        +
            npm install mygithubuser/myproject
        +

        To reference a package in a git repo that is not on GitHub, see git + remote urls below.

        +
      • npm install <git remote url>:

        Install a package by cloning a git remote url. The format of the git url is:

        @@ -142,7 +161,7 @@

        SYNOPSIS

        local copy exists on disk.

        npm install sax --force
         

        The --global argument will cause npm to install the package globally -rather than locally. See npm-folders(5).

        +rather than locally. See npm-folders(5).

        The --link argument will cause npm to link global installs into the local space in some cases.

        The --no-bin-links argument will prevent npm from creating symlinks for @@ -153,7 +172,7 @@

        SYNOPSIS

        shrinkwrap file and use the package.json instead.

        The --nodedir=/path/to/node/source argument will allow npm to find the node source code so that npm can compile native modules.

        -

        See npm-config(7). Many of the configuration params have some +

        See npm-config(7). Many of the configuration params have some effect on installation, since that's most of what npm does.

        ALGORITHM

        To install a package, npm uses the following algorithm:

        @@ -174,7 +193,7 @@

        ALGORITHM

        `-- D

    That is, the dependency from B to C is satisfied by the fact that A already caused C to be installed at a higher level.

    -

    See npm-folders(5) for a more detailed description of the specific +

    See npm-folders(5) for a more detailed description of the specific folder structures that npm creates.

    Limitations of npm's Install Algorithm

There are some very rare and pathological edge-cases where a cycle can

@@ -194,19 +213,19 @@ Limitations of npm's Install

affects a real use-case, it will be investigated.

    SEE ALSO

    @@ -220,5 +239,5 @@

    SEE ALSO

diff --git a/deps/npm/html/doc/cli/npm-link.html b/deps/npm/html/doc/cli/npm-link.html
index 0324ca130ee..8d085892df2 100644
--- a/deps/npm/html/doc/cli/npm-link.html
+++ b/deps/npm/html/doc/cli/npm-link.html
@@ -12,21 +12,23 @@

    npm-link

    Symlink a package folder

    SYNOPSIS

    npm link (in package folder)
    -npm link <pkgname>
    +npm link [@<scope>/]<pkgname>
     npm ln (with any of the previous argument usage)
     

    DESCRIPTION

    Package linking is a two-step process.

    First, npm link in a package folder will create a globally-installed -symbolic link from prefix/package-name to the current folder.

    +symbolic link from prefix/package-name to the current folder (see +npm-config(7) for the value of prefix).

    Next, in some other location, npm link package-name will create a symlink from the local node_modules folder to the global symlink.

    Note that package-name is taken from package.json, not from directory name.

    +

The package name can be optionally prefixed with a scope. See npm-scope(7). +The scope must be preceded by an @-symbol and followed by a slash.

    When creating tarballs for npm publish, the linked packages are "snapshotted" to their current state by resolving the symbolic links.

    -

    This is -handy for installing your own stuff, so that you can work on it and test it -iteratively without having to continually rebuild.

    +

    This is handy for installing your own stuff, so that you can work on it and +test it iteratively without having to continually rebuild.

    For example:

    cd ~/projects/node-redis    # go into the package directory
     npm link                    # creates global link
    @@ -43,16 +45,19 @@ 

    SYNOPSIS

    npm link redis

    That is, it first creates a global link, and then links the global installation target into your project's node_modules folder.

    -

    SEE ALSO

    +

    If your linked package is scoped (see npm-scope(7)) your link command must +include that scope, e.g.

    +
    npm link @myorg/privatepackage
    +

    SEE ALSO

    @@ -66,5 +71,5 @@

    SEE ALSO

diff --git a/deps/npm/html/doc/cli/npm-ls.html b/deps/npm/html/doc/cli/npm-ls.html
index 0e1fe6f358c..30419bdb0d6 100644
--- a/deps/npm/html/doc/cli/npm-ls.html
+++ b/deps/npm/html/doc/cli/npm-ls.html
@@ -11,10 +11,10 @@

    npm-ls

    List installed packages

    SYNOPSIS

    -
    npm list [<pkg> ...]
    -npm ls [<pkg> ...]
    -npm la [<pkg> ...]
    -npm ll [<pkg> ...]
    +
    npm list [[@<scope>/]<pkg> ...]
    +npm ls [[@<scope>/]<pkg> ...]
    +npm la [[@<scope>/]<pkg> ...]
    +npm ll [[@<scope>/]<pkg> ...]
     

    DESCRIPTION

    This command will print to stdout all the versions of packages that are installed, as well as their dependencies, in a tree-structure.

    @@ -22,7 +22,7 @@

    SYNOPSIS

    limit the results to only the paths to the packages named. Note that nested packages will also show the paths to the specified packages. For example, running npm ls promzard in npm's source tree will show:

    -
    npm@1.4.28 /path/to/npm
    +
    npm@2.1.6 /path/to/npm
     └─┬ init-package-json@0.0.4
       └── promzard@0.1.5
     

    It will print out extraneous, missing, and invalid packages.

    @@ -63,15 +63,15 @@

    depth

    Max display depth of the dependency tree.

    SEE ALSO

    @@ -85,5 +85,5 @@

    SEE ALSO

diff --git a/deps/npm/html/doc/cli/npm-outdated.html b/deps/npm/html/doc/cli/npm-outdated.html
index f4b3e1307e6..07a0a933d76 100644
--- a/deps/npm/html/doc/cli/npm-outdated.html
+++ b/deps/npm/html/doc/cli/npm-outdated.html
@@ -51,9 +51,9 @@

    depth

    Max depth for checking dependency tree.

    SEE ALSO

    @@ -67,5 +67,5 @@

    SEE ALSO

diff --git a/deps/npm/html/doc/cli/npm-owner.html b/deps/npm/html/doc/cli/npm-owner.html
index 4dd7b386d63..3600e087f15 100644
--- a/deps/npm/html/doc/cli/npm-owner.html
+++ b/deps/npm/html/doc/cli/npm-owner.html
@@ -32,10 +32,10 @@

    SYNOPSIS

    that is not implemented at this time.

    SEE ALSO

    @@ -49,5 +49,5 @@

    SEE ALSO

diff --git a/deps/npm/html/doc/cli/npm-pack.html b/deps/npm/html/doc/cli/npm-pack.html
index b5a68f27b7c..987ba3f792b 100644
--- a/deps/npm/html/doc/cli/npm-pack.html
+++ b/deps/npm/html/doc/cli/npm-pack.html
@@ -23,11 +23,11 @@

    SYNOPSIS

    If no arguments are supplied, then npm packs the current package folder.

    SEE ALSO

    @@ -41,5 +41,5 @@

    SEE ALSO

diff --git a/deps/npm/html/doc/cli/npm-prefix.html b/deps/npm/html/doc/cli/npm-prefix.html
index 56ef84cbc93..7b6a3c58a51 100644
--- a/deps/npm/html/doc/cli/npm-prefix.html
+++ b/deps/npm/html/doc/cli/npm-prefix.html
@@ -11,17 +11,20 @@

    npm-prefix

    Display prefix

    SYNOPSIS

    -
    npm prefix
    +
    npm prefix [-g]
     

    DESCRIPTION

    -

    Print the prefix to standard out.

    +

    Print the local prefix to standard out. This is the closest parent directory +to contain a package.json file unless -g is also specified.

    +

    If -g is specified, this will be the value of the global prefix. See +npm-config(7) for more detail.
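
A quick sketch of the difference (the printed paths will vary by system):

npm prefix        # e.g. /path/to/your/project
npm prefix -g     # e.g. /usr/local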

    SEE ALSO

    @@ -35,5 +38,5 @@

    SEE ALSO

diff --git a/deps/npm/html/doc/cli/npm-prune.html b/deps/npm/html/doc/cli/npm-prune.html
index 9db5665b33b..dea291b4909 100644
--- a/deps/npm/html/doc/cli/npm-prune.html
+++ b/deps/npm/html/doc/cli/npm-prune.html
@@ -23,9 +23,9 @@

    SYNOPSIS

    packages specified in your devDependencies.

    SEE ALSO

    @@ -39,5 +39,5 @@

    SEE ALSO

diff --git a/deps/npm/html/doc/cli/npm-publish.html b/deps/npm/html/doc/cli/npm-publish.html
index 573f72314f0..e1b46d21d0b 100644
--- a/deps/npm/html/doc/cli/npm-publish.html
+++ b/deps/npm/html/doc/cli/npm-publish.html
@@ -14,7 +14,12 @@

    SYNOPSIS

    npm publish <tarball> [--tag <tag>]
     npm publish <folder> [--tag <tag>]
     

    DESCRIPTION

    -

    Publishes a package to the registry so that it can be installed by name.

    +

    Publishes a package to the registry so that it can be installed by name. See +npm-developers(7) for details on what's included in the published package, as +well as details on how the package is built.

    +

    By default npm will publish to the public registry. This can be overridden by +specifying a different default registry or using a npm-scope(7) in the name +(see package.json(5)).
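
For example (the tag name is illustrative):

npm publish                   # publish the package in the current folder
npm publish . --tag beta      # publish under the "beta" tag instead of "latest"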

    • <folder>: A folder containing a package.json file

      @@ -30,17 +35,17 @@

      SYNOPSIS

    Fails if the package name and version combination already exists in -the registry.

    +the specified registry.

    Once a package is published with a given name and version, that specific name and version combination can never be used again, even if -it is removed with npm-unpublish(1).

    +it is removed with npm-unpublish(1).

    SEE ALSO

    @@ -54,5 +59,5 @@

    SEE ALSO

diff --git a/deps/npm/html/doc/cli/npm-rebuild.html b/deps/npm/html/doc/cli/npm-rebuild.html
index 3334c695f8d..4da97a70518 100644
--- a/deps/npm/html/doc/cli/npm-rebuild.html
+++ b/deps/npm/html/doc/cli/npm-rebuild.html
@@ -23,8 +23,8 @@

    DESCRIPTION

    the new binary.

    SEE ALSO

    @@ -38,5 +38,5 @@

    SEE ALSO

diff --git a/deps/npm/html/doc/cli/npm-repo.html b/deps/npm/html/doc/cli/npm-repo.html
index 3ad91979de1..02335b4f4a1 100644
--- a/deps/npm/html/doc/cli/npm-repo.html
+++ b/deps/npm/html/doc/cli/npm-repo.html
@@ -27,8 +27,8 @@

    browser

    The browser that is called by the npm repo command to open websites.

    SEE ALSO

    @@ -42,5 +42,5 @@

    SEE ALSO

diff --git a/deps/npm/html/doc/cli/npm-restart.html b/deps/npm/html/doc/cli/npm-restart.html
index 115765cb07c..d7536f81fdf 100644
--- a/deps/npm/html/doc/cli/npm-restart.html
+++ b/deps/npm/html/doc/cli/npm-restart.html
@@ -11,19 +11,17 @@

    npm-restart

    Start a package

    SYNOPSIS

    -
    npm restart <name>
    +
    npm restart [-- <args>]
     

    DESCRIPTION

    -

    This runs a package's "restart" script, if one was provided. -Otherwise it runs package's "stop" script, if one was provided, and then -the "start" script.

    -

    If no version is specified, then it restarts the "active" version.

    +

This runs a package's "restart" script, if one was provided. Otherwise it runs +the package's "stop" script, if one was provided, and then the "start" script.
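
For example, in a package that defines "stop" and "start" scripts but no "restart" script, the following runs "stop" and then "start":

npm restart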

    SEE ALSO

    @@ -37,5 +35,5 @@

    SEE ALSO

diff --git a/deps/npm/html/doc/cli/npm-rm.html b/deps/npm/html/doc/cli/npm-rm.html
index 8622174552d..3b28aaad4d3 100644
--- a/deps/npm/html/doc/cli/npm-rm.html
+++ b/deps/npm/html/doc/cli/npm-rm.html
@@ -20,12 +20,12 @@

    SYNOPSIS

    on its behalf.

    SEE ALSO

    @@ -39,5 +39,5 @@

    SEE ALSO

diff --git a/deps/npm/html/doc/cli/npm-root.html b/deps/npm/html/doc/cli/npm-root.html
index b7be0eab402..f6b8b22dfcb 100644
--- a/deps/npm/html/doc/cli/npm-root.html
+++ b/deps/npm/html/doc/cli/npm-root.html
@@ -16,12 +16,12 @@

    SYNOPSIS

    Print the effective node_modules folder to standard out.

    SEE ALSO

    @@ -35,5 +35,5 @@

    SEE ALSO

diff --git a/deps/npm/html/doc/cli/npm-run-script.html b/deps/npm/html/doc/cli/npm-run-script.html
index efaed0d9455..8ca2ea2a5ed 100644
--- a/deps/npm/html/doc/cli/npm-run-script.html
+++ b/deps/npm/html/doc/cli/npm-run-script.html
@@ -11,8 +11,8 @@

    npm-run-script

    Run arbitrary package scripts

    SYNOPSIS

    -
    npm run-script [<pkg>] [command]
    -npm run [<pkg>] [command]
    +
    npm run-script [command] [-- <args>]
    +npm run [command] [-- <args>]
     

    DESCRIPTION

    This runs an arbitrary command from a package's "scripts" object. If no package name is provided, it will search for a package.json @@ -20,13 +20,20 @@

    SYNOPSIS

    is provided, it will list the available top level scripts.

    It is used by the test, start, restart, and stop commands, but can be called directly, as well.
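
For example (the script name is hypothetical):

npm run           # list the scripts defined in package.json
npm run lint      # run the "lint" script, if one is defined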

    +

    As of npm@2.0.0, you can +use custom arguments when executing scripts. The special option -- is used by +getopt to delimit the end of the options. npm will pass +all the arguments after the -- directly to your script:

    +
    npm run test -- --grep="pattern"
    +

    The arguments will only be passed to the script specified after npm run +and not to any pre or post script.

    SEE ALSO

    @@ -40,5 +47,5 @@

    SEE ALSO

diff --git a/deps/npm/html/doc/cli/npm-search.html b/deps/npm/html/doc/cli/npm-search.html
index cbed4886460..f5fe720baa1 100644
--- a/deps/npm/html/doc/cli/npm-search.html
+++ b/deps/npm/html/doc/cli/npm-search.html
@@ -31,11 +31,11 @@

    long

    fall on multiple lines.

    SEE ALSO

    @@ -49,5 +49,5 @@

    SEE ALSO

diff --git a/deps/npm/html/doc/cli/npm-shrinkwrap.html b/deps/npm/html/doc/cli/npm-shrinkwrap.html
index 37c8c9c05c8..fbfaa6c3dc0 100644
--- a/deps/npm/html/doc/cli/npm-shrinkwrap.html
+++ b/deps/npm/html/doc/cli/npm-shrinkwrap.html
@@ -120,7 +120,7 @@

    Building shrinkwrapped packages

  • Run "npm shrinkwrap", commit the new npm-shrinkwrap.json, and publish your package.
  • -

    You can use npm-outdated(1) to view dependencies with newer versions +

    You can use npm-outdated(1) to view dependencies with newer versions available.

    Other Notes

    A shrinkwrap file must be consistent with the package's package.json @@ -148,9 +148,9 @@

    Caveats

    contents rather than versions.

    SEE ALSO

    @@ -164,5 +164,5 @@

    SEE ALSO

diff --git a/deps/npm/html/doc/cli/npm-star.html b/deps/npm/html/doc/cli/npm-star.html
index 5e98d0f272d..d3bbde5b9d6 100644
--- a/deps/npm/html/doc/cli/npm-star.html
+++ b/deps/npm/html/doc/cli/npm-star.html
@@ -20,9 +20,9 @@

    SYNOPSIS

    It's a boolean thing. Starring repeatedly has no additional effect.

    SEE ALSO

    @@ -36,5 +36,5 @@

    SEE ALSO

diff --git a/deps/npm/html/doc/cli/npm-stars.html b/deps/npm/html/doc/cli/npm-stars.html
index c13ede22b8f..7873880f415 100644
--- a/deps/npm/html/doc/cli/npm-stars.html
+++ b/deps/npm/html/doc/cli/npm-stars.html
@@ -20,10 +20,10 @@

    SYNOPSIS

    you will most certainly enjoy this command.

    SEE ALSO

    @@ -37,5 +37,5 @@

    SEE ALSO

diff --git a/deps/npm/html/doc/cli/npm-start.html b/deps/npm/html/doc/cli/npm-start.html
index 305050e41a6..0a3134bc82e 100644
--- a/deps/npm/html/doc/cli/npm-start.html
+++ b/deps/npm/html/doc/cli/npm-start.html
@@ -11,16 +11,16 @@

    npm-start

    Start a package

    SYNOPSIS

    -
    npm start <name>
    +
    npm start [-- <args>]
     

    DESCRIPTION

    This runs a package's "start" script, if one was provided.

    SEE ALSO

    @@ -34,5 +34,5 @@

    SEE ALSO

diff --git a/deps/npm/html/doc/cli/npm-stop.html b/deps/npm/html/doc/cli/npm-stop.html
index 519fa856ce3..01638ee4daf 100644
--- a/deps/npm/html/doc/cli/npm-stop.html
+++ b/deps/npm/html/doc/cli/npm-stop.html
@@ -11,16 +11,16 @@

    npm-stop

    Stop a package

    SYNOPSIS

    -
    npm stop <name>
    +
    npm stop [-- <args>]
     

    DESCRIPTION

    This runs a package's "stop" script, if one was provided.

    SEE ALSO

    @@ -34,5 +34,5 @@

    SEE ALSO

diff --git a/deps/npm/html/doc/cli/npm-submodule.html b/deps/npm/html/doc/cli/npm-submodule.html
index 6716c4a11cc..4ac55a88525 100644
--- a/deps/npm/html/doc/cli/npm-submodule.html
+++ b/deps/npm/html/doc/cli/npm-submodule.html
@@ -27,7 +27,7 @@

    SYNOPSIS

    dependencies into the submodule folder.

    SEE ALSO

    @@ -42,5 +42,5 @@

    SEE ALSO

diff --git a/deps/npm/html/doc/cli/npm-tag.html b/deps/npm/html/doc/cli/npm-tag.html
index 40a2ffe89fa..946d5fa767c 100644
--- a/deps/npm/html/doc/cli/npm-tag.html
+++ b/deps/npm/html/doc/cli/npm-tag.html
@@ -24,13 +24,13 @@

    SYNOPSIS

    Publishing a package always sets the "latest" tag to the published version.

    SEE ALSO

    @@ -44,5 +44,5 @@

    SEE ALSO

diff --git a/deps/npm/html/doc/cli/npm-test.html b/deps/npm/html/doc/cli/npm-test.html
index cc3d56d010b..6c5467de51f 100644
--- a/deps/npm/html/doc/cli/npm-test.html
+++ b/deps/npm/html/doc/cli/npm-test.html
@@ -11,19 +11,19 @@

    npm-test

    Test a package

    SYNOPSIS

    -
      npm test <name>
    -  npm tst <name>
    +
      npm test [-- <args>]
    +  npm tst [-- <args>]
     

    DESCRIPTION

    This runs a package's "test" script, if one was provided.

    To run tests as a condition of installation, set the npat config to true.

    SEE ALSO

    @@ -37,5 +37,5 @@

    SEE ALSO

diff --git a/deps/npm/html/doc/cli/npm-uninstall.html b/deps/npm/html/doc/cli/npm-uninstall.html
index 4f5a064606d..2a3c12c148d 100644
--- a/deps/npm/html/doc/cli/npm-uninstall.html
+++ b/deps/npm/html/doc/cli/npm-uninstall.html
@@ -11,7 +11,7 @@

    npm-rm

    Remove a package

    SYNOPSIS

    -
    npm uninstall <name> [--save|--save-dev|--save-optional]
    +
    npm uninstall [@<scope>/]<package> [--save|--save-dev|--save-optional]
     npm rm (with any of the previous argument usage)
     

    DESCRIPTION

    This uninstalls a package, completely removing everything npm installed @@ -30,18 +30,20 @@

    SYNOPSIS

  • --save-optional: Package will be removed from your optionalDependencies.

  • +

    Scope is optional and follows the usual rules for npm-scope(7).

    Examples:

    npm uninstall sax --save
    +npm uninstall @myorg/privatepackage --save
     npm uninstall node-tap --save-dev
     npm uninstall dtrace-provider --save-optional
     

    SEE ALSO

    @@ -55,5 +57,5 @@

    SYNOPSIS

diff --git a/deps/npm/html/doc/cli/npm-unpublish.html b/deps/npm/html/doc/cli/npm-unpublish.html
index be9430a15c5..59b278e23da 100644
--- a/deps/npm/html/doc/cli/npm-unpublish.html
+++ b/deps/npm/html/doc/cli/npm-unpublish.html
@@ -11,7 +11,7 @@

    npm-unpublish

    Remove a package from the registry

    SYNOPSIS

    -
    npm unpublish <name>[@<version>]
    +
    npm unpublish [@<scope>/]<name>[@<version>]
     

    WARNING

    It is generally considered bad behavior to remove versions of a library that others are depending on!

    @@ -26,13 +26,14 @@

    DESCRIPTION

    Even if a package version is unpublished, that specific name and version combination can never be reused. In order to publish the package again, a new version number must be used.

    +

    The scope is optional and follows the usual rules for npm-scope(7).
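
For example (the scoped name and version are illustrative):

npm unpublish @myorg/privatepackage@0.1.0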

    SEE ALSO

    @@ -46,5 +47,5 @@

    SEE ALSO

diff --git a/deps/npm/html/doc/cli/npm-update.html b/deps/npm/html/doc/cli/npm-update.html
index 650fd89f56d..5fa7846f534 100644
--- a/deps/npm/html/doc/cli/npm-update.html
+++ b/deps/npm/html/doc/cli/npm-update.html
@@ -16,15 +16,17 @@

    SYNOPSIS

    This command will update all the packages listed to the latest version (specified by the tag config).

    It will also install missing packages.

    -

    If the -g flag is specified, this command will update globally installed packages. -If no package name is specified, all packages in the specified location (global or local) will be updated.

    +

    If the -g flag is specified, this command will update globally installed +packages.

    +

    If no package name is specified, all packages in the specified location (global +or local) will be updated.
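
For example:

npm update            # update all packages in the local node_modules
npm update -g         # update all globally installed packages
npm update -g npm     # update a single global package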

    SEE ALSO

    @@ -38,5 +40,5 @@

    SEE ALSO

diff --git a/deps/npm/html/doc/cli/npm-version.html b/deps/npm/html/doc/cli/npm-version.html
index 47ca7e58234..6477726f495 100644
--- a/deps/npm/html/doc/cli/npm-version.html
+++ b/deps/npm/html/doc/cli/npm-version.html
@@ -39,9 +39,9 @@

    SYNOPSIS

    Enter passphrase:

    SEE ALSO

    @@ -55,5 +55,5 @@

    SYNOPSIS

diff --git a/deps/npm/html/doc/cli/npm-view.html b/deps/npm/html/doc/cli/npm-view.html
index c30cc69f126..3b87ba0be20 100644
--- a/deps/npm/html/doc/cli/npm-view.html
+++ b/deps/npm/html/doc/cli/npm-view.html
@@ -11,8 +11,8 @@

    npm-view

    View registry info

    SYNOPSIS

    -
    npm view <name>[@<version>] [<field>[.<subfield>]...]
    -npm v <name>[@<version>] [<field>[.<subfield>]...]
    +
    npm view [@<scope>/]<name>[@<version>] [<field>[.<subfield>]...]
    +npm v [@<scope>/]<name>[@<version>] [<field>[.<subfield>]...]
     

    DESCRIPTION

    This command shows data about a package and prints it to the stream referenced by the outfd config, which defaults to stdout.

    @@ -46,7 +46,7 @@

    SYNOPSIS

    npm view express contributors.name contributors.email
     

    "Person" fields are shown as a string if they would be shown as an object. So, for example, this will show the list of npm contributors in -the shortened string format. (See package.json(5) for more on this.)

    +the shortened string format. (See package.json(5) for more on this.)

    npm view npm contributors
     

    If a version range is provided, then data will be printed for every matching version of the package. This will show which version of jsdom @@ -63,12 +63,12 @@

    SYNOPSIS

    the field name.

    SEE ALSO

    @@ -82,5 +82,5 @@

    SEE ALSO

diff --git a/deps/npm/html/doc/cli/npm-whoami.html b/deps/npm/html/doc/cli/npm-whoami.html
index 70dc893cd3e..a2705c40c3a 100644
--- a/deps/npm/html/doc/cli/npm-whoami.html
+++ b/deps/npm/html/doc/cli/npm-whoami.html
@@ -16,10 +16,10 @@

    SYNOPSIS

    Print the username config to standard output.

    SEE ALSO

    @@ -33,5 +33,5 @@

    SEE ALSO

diff --git a/deps/npm/html/doc/cli/npm.html b/deps/npm/html/doc/cli/npm.html
index 36ae925064a..3bd3849d2ad 100644
--- a/deps/npm/html/doc/cli/npm.html
+++ b/deps/npm/html/doc/cli/npm.html
@@ -13,7 +13,7 @@

    npm

    node package manager

    SYNOPSIS

    npm <command> [args]
     

    VERSION

    -

    1.4.28

    +

    2.1.6

    DESCRIPTION

    npm is the package manager for the Node JavaScript platform. It puts modules in place so that node can find them, and manages dependency @@ -25,7 +25,7 @@

    DESCRIPTION

    INTRODUCTION

    You probably got npm because you want to install stuff.

    Use npm install blerg to install the latest version of "blerg". Check out -npm-install(1) for more info. It can do a lot of stuff.

    +npm-install(1) for more info. It can do a lot of stuff.

    Use the npm search command to show everything that's available. Use npm ls to show everything you've installed.

    DEPENDENCIES

    @@ -42,7 +42,7 @@

    DEPENDENCIES

    the node-gyp repository and the node-gyp Wiki.

    DIRECTORIES

    -

    See npm-folders(5) to learn about where npm puts stuff.

    +

    See npm-folders(5) to learn about where npm puts stuff.

    In particular, npm has two modes of operation:

    • global mode:
      npm installs packages into the install prefix at @@ -58,7 +58,7 @@

      DEVELOPER USAGE

      following help topics:

      • json: -Make a package.json file. See package.json(5).
      • +Make a package.json file. See package.json(5).
      • link: For linking your current working code into Node's path, so that you don't have to reinstall every time you make a change. Use @@ -93,12 +93,12 @@

        CONFIGURATION

      • Defaults:
        npm's default configuration options are defined in lib/utils/config-defs.js. These must not be changed.
      -

      See npm-config(7) for much much more information.

      +

      See npm-config(7) for much much more information.

      CONTRIBUTIONS

      Patches welcome!

      Be sure to include all of the output from the npm command that didn't work as expected. The npm-debug.log file is also helpful to provide.

      @@ -128,19 +128,19 @@

      AUTHOR

      Isaac Z. Schlueter :: isaacs :: @izs :: -i@izs.me

      +i@izs.me

      SEE ALSO

      @@ -154,5 +154,5 @@

      SEE ALSO

diff --git a/deps/npm/html/doc/files/npm-folders.html b/deps/npm/html/doc/files/npm-folders.html
index 637b45c3821..ec320399971 100644
--- a/deps/npm/html/doc/files/npm-folders.html
+++ b/deps/npm/html/doc/files/npm-folders.html
@@ -41,6 +41,11 @@

      Node Modules

      Global installs on Unix systems go to {prefix}/lib/node_modules. Global installs on Windows go to {prefix}/node_modules (that is, no lib folder.)

      +

Scoped packages are installed the same way, except they are grouped together +in a sub-folder of the relevant node_modules folder with the name of that +scope prefixed by the @ symbol, e.g. npm install @myorg/package would place +the package in {prefix}/node_modules/@myorg/package. See scopes(7) for +more details.
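
For example (the scoped name is illustrative):

npm install @myorg/package    # lands in ./node_modules/@myorg/package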

      If you wish to require() a package, then install it locally.

      Executables

      When in global mode, executables are linked into {prefix}/bin on Unix, @@ -54,7 +59,7 @@

      Man Pages

      When in local mode, man pages are not installed.

      Man pages are not installed on Windows systems.

      Cache

      -

      See npm-cache(1). Cache files are stored in ~/.npm on Posix, or +

      See npm-cache(1). Cache files are stored in ~/.npm on Posix, or ~/npm-cache on Windows.

      This is controlled by the cache configuration param.

      Temp Files

      @@ -154,18 +159,18 @@

      Publishing

      not be included in the package tarball.

      This allows a package maintainer to install all of their dependencies (and dev dependencies) locally, but only re-publish those items that -cannot be found elsewhere. See package.json(5) for more information.

      +cannot be found elsewhere. See package.json(5) for more information.

      SEE ALSO

      @@ -179,5 +184,5 @@

      SEE ALSO

diff --git a/deps/npm/html/doc/files/npm-global.html b/deps/npm/html/doc/files/npm-global.html
index 637b45c3821..90e5dcaf1dc 100644
--- a/deps/npm/html/doc/files/npm-global.html
+++ b/deps/npm/html/doc/files/npm-global.html
@@ -1,9 +1,9 @@
- npm-folders
+ npm-global
@@ -41,6 +41,11 @@

      Node Modules

      Global installs on Unix systems go to {prefix}/lib/node_modules. Global installs on Windows go to {prefix}/node_modules (that is, no lib folder.)

      +

Scoped packages are installed the same way, except they are grouped together +in a sub-folder of the relevant node_modules folder with the name of that +scope prefixed by the @ symbol, e.g. npm install @myorg/package would place +the package in {prefix}/node_modules/@myorg/package. See scopes(7) for +more details.

      If you wish to require() a package, then install it locally.

      Executables

      When in global mode, executables are linked into {prefix}/bin on Unix, @@ -54,7 +59,7 @@

      Man Pages

      When in local mode, man pages are not installed.

      Man pages are not installed on Windows systems.

      Cache

      -

      See npm-cache(1). Cache files are stored in ~/.npm on Posix, or +

      See npm-cache(1). Cache files are stored in ~/.npm on Posix, or ~/npm-cache on Windows.

      This is controlled by the cache configuration param.

      Temp Files

      @@ -154,18 +159,18 @@

      Publishing

      not be included in the package tarball.

      This allows a package maintainer to install all of their dependencies (and dev dependencies) locally, but only re-publish those items that -cannot be found elsewhere. See package.json(5) for more information.

      +cannot be found elsewhere. See package.json(5) for more information.

      SEE ALSO

      @@ -179,5 +184,5 @@

      SEE ALSO

diff --git a/deps/npm/html/doc/files/npm-json.html b/deps/npm/html/doc/files/npm-json.html
index 129661d526f..ade4fd07ec4 100644
--- a/deps/npm/html/doc/files/npm-json.html
+++ b/deps/npm/html/doc/files/npm-json.html
@@ -1,9 +1,9 @@
- package.json
+ npm-json
@@ -14,7 +14,7 @@

      DESCRIPTION

      This document is all you need to know about what's required in your package.json file. It must be actual JSON, not just a JavaScript object literal.

      A lot of the behavior described in this document is affected by the config -settings described in npm-config(7).

      +settings described in npm-config(7).

      name

      The most important things in your package.json are the name and version fields. Those are actually required, and your package won't install without @@ -34,6 +34,8 @@

      name

    • You may want to check the npm registry to see if there's something by that name already, before you get too attached to it. http://registry.npmjs.org/
    +

    A name can be optionally prefixed by a scope, e.g. @myorg/mypackage. See +npm-scope(7) for more detail.

    version

    The most important things in your package.json are the name and version fields. Those are actually required, and your package won't install without @@ -43,7 +45,7 @@

    version

    Version must be parseable by node-semver, which is bundled with npm as a dependency. (npm install semver to use it yourself.)

    -

    More on version numbers and ranges at semver(7).

    +

    More on version numbers and ranges at semver(7).

    description

    Put a description in it. It's a string. This helps people discover your package, as it's listed in npm search.

    @@ -159,16 +161,16 @@

    bin

    directories

    The CommonJS Packages spec details a few ways that you can indicate the structure of your package using a directories -hash. If you look at npm's package.json, +object. If you look at npm's package.json, you'll see that it has directories for doc, lib, and man.

    In the future, this information may be used in other creative ways.

    directories.lib

    Tell people where the bulk of your library is. Nothing special is done with the lib folder in any way, but it's useful meta info.

    directories.bin

    -

    If you specify a "bin" directory, then all the files in that folder will -be used as the "bin" hash.

    -

    If you have a "bin" hash already, then this has no effect.

    +

    If you specify a bin directory, then all the files in that folder will +be added as children of the bin path.

    +

    If you have a bin path already, then this has no effect.

    directories.man

    A folder that is full of man pages. Sugar to generate a "man" array by walking the folder.

    @@ -195,37 +197,37 @@

    repository

    directly to a VCS program without any modification. It should not be a url to an html project page that you put in your browser. It's for computers.

    scripts

    -

    The "scripts" member is an object hash of script commands that are run +

    The "scripts" property is a dictionary containing script commands that are run at various times in the lifecycle of your package. The key is the lifecycle event, and the value is the command to run at that point.

    -

    See npm-scripts(7) to find out more about writing package scripts.

    +

    See npm-scripts(7) to find out more about writing package scripts.

    config

    -

    A "config" hash can be used to set configuration -parameters used in package scripts that persist across upgrades. For -instance, if a package had the following:

    +

    A "config" object can be used to set configuration parameters used in package +scripts that persist across upgrades. For instance, if a package had the +following:

    { "name" : "foo"
     , "config" : { "port" : "8080" } }
     

    and then had a "start" command that then referenced the npm_package_config_port environment variable, then the user could override that by doing npm config set foo:port 8001.

    -

    See npm-config(7) and npm-scripts(7) for more on package +

    See npm-config(7) and npm-scripts(7) for more on package configs.

    dependencies

    -

    Dependencies are specified with a simple hash of package name to +

    Dependencies are specified in a simple object that maps a package name to a version range. The version range is a string which has one or more -space-separated descriptors. Dependencies can also be identified with -a tarball or git URL.

    +space-separated descriptors. Dependencies can also be identified with a +tarball or git URL.

    Please do not put test harnesses or transpilers in your -dependencies hash. See devDependencies, below.

    -

    See semver(7) for more details about specifying version ranges.

    +dependencies object. See devDependencies, below.

    +

    See semver(7) for more details about specifying version ranges.

    • version Must match version exactly
    • >version Must be greater than version
    • >=version etc
    • <version
    • <=version
    • -
    • ~version "Approximately equivalent to version" See semver(7)
    • -
    • ^version "Compatible with version" See semver(7)
    • +
    • ~version "Approximately equivalent to version" See semver(7)
    • +
    • ^version "Compatible with version" See semver(7)
    • 1.2.x 1.2.0, 1.2.1, etc., but not 1.3.0
    • http://... See 'URLs as Dependencies' below
    • * Matches any version
    • @@ -234,6 +236,8 @@

      dependencies

    • range1 || range2 Passes if either range1 or range2 are satisfied.
    • git... See 'Git URLs as Dependencies' below
    • user/repo See 'GitHub URLs' below
    • +
    • tag A specific version tagged and published as tag See npm-tag(1)
    • +
    • path/path/path See Local Paths below

    For example, these are all valid:

    { "dependencies" :
    @@ -247,6 +251,8 @@ 

    dependencies

    , "elf" : "~1.2.3" , "two" : "2.x" , "thr" : "3.3.x" + , "lat" : "latest" + , "dyl" : "file:../dyl" } }

    URLs as Dependencies

    @@ -271,15 +277,35 @@

    GitHub URLs

    "express": "visionmedia/express" } } -

    devDependencies

    +

    Local Paths

    +

    As of version 2.0.0 you can provide a path to a local directory that contains a +package. Local paths can be saved using npm install --save, using any of +these forms:

    +
    ../foo/bar
    +~/foo/bar
    +./foo/bar
    +/foo/bar
    +

    in which case they will be normalized to a relative path and added to your +package.json. For example:

    +
    {
    +  "name": "baz",
    +  "dependencies": {
    +    "bar": "file:../foo/bar"
    +  }
    +}
    +

This feature is helpful for local offline development and creating +tests that require an npm install where you don't want to hit an +external server, but should not be used when publishing packages +to the public registry.

    +

    devDependencies

    If someone is planning on downloading and using your module in their program, then they probably don't want or need to download and build the external test or documentation framework that you use.

    -

    In this case, it's best to list these additional items in a -devDependencies hash.

    +

    In this case, it's best to map these additional items in a devDependencies +object.

    These things will be installed when doing npm link or npm install from the root of a package, and can be managed like any other npm -configuration param. See npm-config(7) for more on the topic.

    +configuration param. See npm-config(7) for more on the topic.

    For build steps that are not platform-specific, such as compiling CoffeeScript or other languages to JavaScript, use the prepublish script to do this, and make the required package a devDependency.

    @@ -329,11 +355,11 @@

    bundledDependencies

    Array of package names that will be bundled when publishing the package.

    If this is spelled "bundleDependencies", then that is also honorable.

    optionalDependencies

    -

    If a dependency can be used, but you would like npm to proceed if it -cannot be found or fails to install, then you may put it in the -optionalDependencies hash. This is a map of package name to version -or url, just like the dependencies hash. The difference is that -failure is tolerated.

    +

    If a dependency can be used, but you would like npm to proceed if it cannot be +found or fails to install, then you may put it in the optionalDependencies +object. This is a map of package name to version or url, just like the +dependencies object. The difference is that build failures do not cause +installation to fail.

    It is still your program's responsibility to handle the lack of the dependency. For example, something like this:

    try {
    @@ -368,11 +394,11 @@ 

    engines

    field is advisory only.

    engineStrict

    If you are sure that your module will definitely not run properly on -versions of Node/npm other than those specified in the engines hash, +versions of Node/npm other than those specified in the engines object, then you can set "engineStrict": true in your package.json file. This will override the user's engine-strict config setting.

    Please do not do this unless you are really very very sure. If your -engines hash is something overly restrictive, you can quite easily and +engines object is something overly restrictive, you can quite easily and inadvertently lock yourself into obscurity and prevent your users from updating to new versions of Node. Consider this choice carefully. If people abuse it, it will be removed in a future version of npm.

    @@ -402,11 +428,11 @@

    preferGlobal

    private

    If you set "private": true in your package.json, then npm will refuse to publish it.

    -

    This is a way to prevent accidental publication of private repositories. -If you would like to ensure that a given package is only ever published -to a specific registry (for example, an internal registry), -then use the publishConfig hash described below -to override the registry config param at publish-time.

    +

    This is a way to prevent accidental publication of private repositories. If +you would like to ensure that a given package is only ever published to a +specific registry (for example, an internal registry), then use the +publishConfig dictionary described below to override the registry config +param at publish-time.

    publishConfig

    This is a set of config values that will be used at publish-time. It's especially handy if you want to set the tag or registry, so that you can @@ -414,7 +440,7 @@

    publishConfig

    the global public registry by default.

    Any config values can be overridden, but of course only "tag" and "registry" probably matter for the purposes of publishing.

    -

    See npm-config(7) to see the list of config options that can be +

    See npm-config(7) to see the list of config options that can be overridden.

    DEFAULT VALUES

    npm will default some values based on package contents.

    @@ -436,16 +462,16 @@

    DEFAULT VALUES

    SEE ALSO

    @@ -459,5 +485,5 @@

    SEE ALSO

diff --git a/deps/npm/html/doc/files/npmrc.html b/deps/npm/html/doc/files/npmrc.html
index ee029cfe8f9..9f379006f09 100644
--- a/deps/npm/html/doc/files/npmrc.html
+++ b/deps/npm/html/doc/files/npmrc.html
@@ -15,7 +15,7 @@

    DESCRIPTION

    variables, and npmrc files.

    The npm config command can be used to update and edit the contents of the user and global npmrc files.

    -

    For a list of available configuration options, see npm-config(7).

    +

    For a list of available configuration options, see npm-config(7).

    FILES

    The four relevant files are:

      @@ -55,11 +55,11 @@

      Built-in config file

      manner.

      SEE ALSO

      @@ -73,5 +73,5 @@

      SEE ALSO

diff --git a/deps/npm/html/doc/files/package.json.html b/deps/npm/html/doc/files/package.json.html
index 129661d526f..183ad8ea5d6 100644
--- a/deps/npm/html/doc/files/package.json.html
+++ b/deps/npm/html/doc/files/package.json.html
@@ -14,7 +14,7 @@

      DESCRIPTION

      This document is all you need to know about what's required in your package.json file. It must be actual JSON, not just a JavaScript object literal.

      A lot of the behavior described in this document is affected by the config -settings described in npm-config(7).

      +settings described in npm-config(7).

      name

      The most important things in your package.json are the name and version fields. Those are actually required, and your package won't install without @@ -34,6 +34,8 @@

      name

    • You may want to check the npm registry to see if there's something by that name already, before you get too attached to it. http://registry.npmjs.org/
    +

    A name can be optionally prefixed by a scope, e.g. @myorg/mypackage. See +npm-scope(7) for more detail.

    version

    The most important things in your package.json are the name and version fields. Those are actually required, and your package won't install without @@ -43,7 +45,7 @@

    version

    Version must be parseable by node-semver, which is bundled with npm as a dependency. (npm install semver to use it yourself.)

    -

    More on version numbers and ranges at semver(7).

    +

    More on version numbers and ranges at semver(7).

    description

    Put a description in it. It's a string. This helps people discover your package, as it's listed in npm search.

    @@ -159,16 +161,16 @@

    bin

    directories

    The CommonJS Packages spec details a few ways that you can indicate the structure of your package using a directories -hash. If you look at npm's package.json, +object. If you look at npm's package.json, you'll see that it has directories for doc, lib, and man.

    In the future, this information may be used in other creative ways.

    directories.lib

    Tell people where the bulk of your library is. Nothing special is done with the lib folder in any way, but it's useful meta info.

    directories.bin

    -

    If you specify a "bin" directory, then all the files in that folder will -be used as the "bin" hash.

    -

    If you have a "bin" hash already, then this has no effect.

    +

    If you specify a bin directory, then all the files in that folder will +be added as children of the bin path.

    +

    If you have a bin path already, then this has no effect.

    directories.man

    A folder that is full of man pages. Sugar to generate a "man" array by walking the folder.

    @@ -195,37 +197,37 @@

    repository

    directly to a VCS program without any modification. It should not be a url to an html project page that you put in your browser. It's for computers.

    scripts

    -

    The "scripts" member is an object hash of script commands that are run +

    The "scripts" property is a dictionary containing script commands that are run at various times in the lifecycle of your package. The key is the lifecycle event, and the value is the command to run at that point.

    -

    See npm-scripts(7) to find out more about writing package scripts.

    +

    See npm-scripts(7) to find out more about writing package scripts.

    config

    -

    A "config" hash can be used to set configuration -parameters used in package scripts that persist across upgrades. For -instance, if a package had the following:

    +

    A "config" object can be used to set configuration parameters used in package +scripts that persist across upgrades. For instance, if a package had the +following:

    { "name" : "foo"
     , "config" : { "port" : "8080" } }
     

    and then had a "start" command that then referenced the npm_package_config_port environment variable, then the user could override that by doing npm config set foo:port 8001.

    -

    See npm-config(7) and npm-scripts(7) for more on package +

    See npm-config(7) and npm-scripts(7) for more on package configs.

    dependencies

    -

    Dependencies are specified with a simple hash of package name to +

    Dependencies are specified in a simple object that maps a package name to a version range. The version range is a string which has one or more -space-separated descriptors. Dependencies can also be identified with -a tarball or git URL.

    +space-separated descriptors. Dependencies can also be identified with a +tarball or git URL.

    Please do not put test harnesses or transpilers in your -dependencies hash. See devDependencies, below.

    -

    See semver(7) for more details about specifying version ranges.

    +dependencies object. See devDependencies, below.

    +

    See semver(7) for more details about specifying version ranges.

    • version Must match version exactly
    • >version Must be greater than version
    • >=version etc
    • <version
    • <=version
    • -
    • ~version "Approximately equivalent to version" See semver(7)
    • -
    • ^version "Compatible with version" See semver(7)
    • +
    • ~version "Approximately equivalent to version" See semver(7)
    • +
    • ^version "Compatible with version" See semver(7)
    • 1.2.x 1.2.0, 1.2.1, etc., but not 1.3.0
    • http://... See 'URLs as Dependencies' below
    • * Matches any version
    • @@ -234,6 +236,8 @@

      dependencies

    • range1 || range2 Passes if either range1 or range2 are satisfied.
    • git... See 'Git URLs as Dependencies' below
    • user/repo See 'GitHub URLs' below
    • +
    • tag A specific version tagged and published as tag See npm-tag(1)
    • +
    • path/path/path See Local Paths below

    For example, these are all valid:

    { "dependencies" :
    @@ -247,6 +251,8 @@ 

    dependencies

    , "elf" : "~1.2.3" , "two" : "2.x" , "thr" : "3.3.x" + , "lat" : "latest" + , "dyl" : "file:../dyl" } }

    URLs as Dependencies

    @@ -271,15 +277,35 @@

    GitHub URLs

    "express": "visionmedia/express" } } -

    devDependencies

    +

    Local Paths

    +

    As of version 2.0.0 you can provide a path to a local directory that contains a +package. Local paths can be saved using npm install --save, using any of +these forms:

    +
    ../foo/bar
    +~/foo/bar
    +./foo/bar
    +/foo/bar
    +

    in which case they will be normalized to a relative path and added to your +package.json. For example:

    +
    {
    +  "name": "baz",
    +  "dependencies": {
    +    "bar": "file:../foo/bar"
    +  }
    +}
    +

This feature is helpful for local offline development and creating +tests that require an npm install where you don't want to hit an +external server, but should not be used when publishing packages +to the public registry.

    +

    devDependencies

    If someone is planning on downloading and using your module in their program, then they probably don't want or need to download and build the external test or documentation framework that you use.

    -

    In this case, it's best to list these additional items in a -devDependencies hash.

    +

    In this case, it's best to map these additional items in a devDependencies +object.

    These things will be installed when doing npm link or npm install from the root of a package, and can be managed like any other npm -configuration param. See npm-config(7) for more on the topic.

    +configuration param. See npm-config(7) for more on the topic.

    For build steps that are not platform-specific, such as compiling CoffeeScript or other languages to JavaScript, use the prepublish script to do this, and make the required package a devDependency.

    @@ -329,11 +355,11 @@

    bundledDependencies

    Array of package names that will be bundled when publishing the package.

    If this is spelled "bundleDependencies", then that is also honorable.

    optionalDependencies

    -

    If a dependency can be used, but you would like npm to proceed if it -cannot be found or fails to install, then you may put it in the -optionalDependencies hash. This is a map of package name to version -or url, just like the dependencies hash. The difference is that -failure is tolerated.

    +

    If a dependency can be used, but you would like npm to proceed if it cannot be +found or fails to install, then you may put it in the optionalDependencies +object. This is a map of package name to version or url, just like the +dependencies object. The difference is that build failures do not cause +installation to fail.

    It is still your program's responsibility to handle the lack of the dependency. For example, something like this:

    try {
    @@ -368,11 +394,11 @@ 

    engines

    field is advisory only.

    engineStrict

    If you are sure that your module will definitely not run properly on -versions of Node/npm other than those specified in the engines hash, +versions of Node/npm other than those specified in the engines object, then you can set "engineStrict": true in your package.json file. This will override the user's engine-strict config setting.

    Please do not do this unless you are really very very sure. If your -engines hash is something overly restrictive, you can quite easily and +engines object is something overly restrictive, you can quite easily and inadvertently lock yourself into obscurity and prevent your users from updating to new versions of Node. Consider this choice carefully. If people abuse it, it will be removed in a future version of npm.

    @@ -402,11 +428,11 @@

    preferGlobal

    private

    If you set "private": true in your package.json, then npm will refuse to publish it.

    -

    This is a way to prevent accidental publication of private repositories. -If you would like to ensure that a given package is only ever published -to a specific registry (for example, an internal registry), -then use the publishConfig hash described below -to override the registry config param at publish-time.

    +

    This is a way to prevent accidental publication of private repositories. If +you would like to ensure that a given package is only ever published to a +specific registry (for example, an internal registry), then use the +publishConfig dictionary described below to override the registry config +param at publish-time.

    publishConfig

    This is a set of config values that will be used at publish-time. It's especially handy if you want to set the tag or registry, so that you can @@ -414,7 +440,7 @@

    publishConfig

    the global public registry by default.

    Any config values can be overridden, but of course only "tag" and "registry" probably matter for the purposes of publishing.

    -

    See npm-config(7) to see the list of config options that can be +

    See npm-config(7) to see the list of config options that can be overridden.

    DEFAULT VALUES

    npm will default some values based on package contents.

    @@ -436,16 +462,16 @@

    DEFAULT VALUES

    SEE ALSO

    @@ -459,5 +485,5 @@

    SEE ALSO

diff --git a/deps/npm/html/doc/index.html b/deps/npm/html/doc/index.html
index 90c300c88f3..6c68895e073 100644
--- a/deps/npm/html/doc/index.html
+++ b/deps/npm/html/doc/index.html
@@ -1,6 +1,6 @@
- npm-index
+ index
@@ -10,215 +10,213 @@

    npm-index

    Index of all npm documentation

    -

    README

    +

    README

    node package manager

    Command Line Documentation

    Using npm on the command line

    -

    npm(1)

    +

    npm(1)

    node package manager

    -

    npm-adduser(1)

    +

    npm-adduser(1)

    Add a registry user account

    -

    npm-bin(1)

    +

    npm-bin(1)

    Display npm bin folder

    -

    npm-bugs(1)

    +

    npm-bugs(1)

    Bugs for a package in a web browser maybe

    -

    npm-build(1)

    +

    npm-build(1)

    Build a package

    -

    npm-bundle(1)

    +

    npm-bundle(1)

    REMOVED

    -

    npm-cache(1)

    +

    npm-cache(1)

    Manipulates packages cache

    -

    npm-completion(1)

    +

    npm-completion(1)

    Tab Completion for npm

    -

    npm-config(1)

    +

    npm-config(1)

    Manage the npm configuration files

    -

    npm-dedupe(1)

    +

    npm-dedupe(1)

    Reduce duplication

    -

    npm-deprecate(1)

    +

    npm-deprecate(1)

    Deprecate a version of a package

    -

    npm-docs(1)

    +

    npm-docs(1)

    Docs for a package in a web browser maybe

    -

    npm-edit(1)

    +

    npm-edit(1)

    Edit an installed package

    -

    npm-explore(1)

    +

    npm-explore(1)

    Browse an installed package

    -

    npm-help-search(1)

    +

    npm-help-search(1)

    Search npm help documentation

    -

    npm-help(1)

    +

    npm-help(1)

    Get help on npm

    -

    npm-init(1)

    +

    npm-init(1)

    Interactively create a package.json file

    -

    npm-install(1)

    +

    npm-install(1)

    Install a package

    - +

    Symlink a package folder

    -

    npm-ls(1)

    +

    npm-ls(1)

    List installed packages

    -

    npm-outdated(1)

    +

    npm-outdated(1)

    Check for outdated packages

    -

    npm-owner(1)

    +

    npm-owner(1)

    Manage package owners

    -

    npm-pack(1)

    +

    npm-pack(1)

    Create a tarball from a package

    -

    npm-prefix(1)

    +

    npm-prefix(1)

    Display prefix

    -

    npm-prune(1)

    +

    npm-prune(1)

    Remove extraneous packages

    -

    npm-publish(1)

    +

    npm-publish(1)

    Publish a package

    -

    npm-rebuild(1)

    +

    npm-rebuild(1)

    Rebuild a package

    -

    npm-repo(1)

    +

    npm-repo(1)

    Open package repository page in the browser

    -

    npm-restart(1)

    +

    npm-restart(1)

    Start a package

    -

    npm-rm(1)

    +

    npm-rm(1)

    Remove a package

    -

    npm-root(1)

    +

    npm-root(1)

    Display npm root

    -

    npm-run-script(1)

    +

    npm-run-script(1)

    Run arbitrary package scripts

    -

    npm-search(1)

    +

    npm-search(1)

    Search for packages

    -

    npm-shrinkwrap(1)

    +

    npm-shrinkwrap(1)

    Lock down dependency versions

    -

    npm-star(1)

    +

    npm-star(1)

    Mark your favorite packages

    -

    npm-stars(1)

    +

    npm-stars(1)

    View packages marked as favorites

    -

    npm-start(1)

    +

    npm-start(1)

    Start a package

    -

    npm-stop(1)

    +

    npm-stop(1)

    Stop a package

    -

    npm-submodule(1)

    -

    Add a package as a git submodule

    -

    npm-tag(1)

    +

    npm-tag(1)

    Tag a published version

    -

    npm-test(1)

    +

    npm-test(1)

    Test a package

    -

    npm-uninstall(1)

    +

    npm-uninstall(1)

    Remove a package

    -

    npm-unpublish(1)

    +

    npm-unpublish(1)

    Remove a package from the registry

    -

    npm-update(1)

    +

    npm-update(1)

    Update a package

    -

    npm-version(1)

    +

    npm-version(1)

    Bump a package version

    -

    npm-view(1)

    +

    npm-view(1)

    View registry info

    -

    npm-whoami(1)

    +

    npm-whoami(1)

    Display npm username

    API Documentation

    Using npm in your Node programs

    -

    npm(3)

    +

    npm(3)

    node package manager

    -

    npm-bin(3)

    +

    npm-bin(3)

    Display npm bin folder

    -

    npm-bugs(3)

    +

    npm-bugs(3)

    Bugs for a package in a web browser maybe

    -

    npm-cache(3)

    +

    npm-cache(3)

    manage the npm cache programmatically

    -

    npm-commands(3)

    +

    npm-commands(3)

    npm commands

    -

    npm-config(3)

    +

    npm-config(3)

    Manage the npm configuration files

    -

    npm-deprecate(3)

    +

    npm-deprecate(3)

    Deprecate a version of a package

    -

    npm-docs(3)

    +

    npm-docs(3)

    Docs for a package in a web browser maybe

    -

    npm-edit(3)

    +

    npm-edit(3)

    Edit an installed package

    -

    npm-explore(3)

    +

    npm-explore(3)

    Browse an installed package

    -

    npm-help-search(3)

    +

    npm-help-search(3)

    Search the help pages

    -

    npm-init(3)

    +

    npm-init(3)

    Interactively create a package.json file

    -

    npm-install(3)

    +

    npm-install(3)

    install a package programmatically

    - +

    Symlink a package folder

    -

    npm-load(3)

    +

    npm-load(3)

    Load config settings

    -

    npm-ls(3)

    +

    npm-ls(3)

    List installed packages

    -

    npm-outdated(3)

    +

    npm-outdated(3)

    Check for outdated packages

    -

    npm-owner(3)

    +

    npm-owner(3)

    Manage package owners

    -

    npm-pack(3)

    +

    npm-pack(3)

    Create a tarball from a package

    -

    npm-prefix(3)

    +

    npm-prefix(3)

    Display prefix

    -

    npm-prune(3)

    +

    npm-prune(3)

    Remove extraneous packages

    -

    npm-publish(3)

    +

    npm-publish(3)

    Publish a package

    -

    npm-rebuild(3)

    +

    npm-rebuild(3)

    Rebuild a package

    -

    npm-repo(3)

    +

    npm-repo(3)

    Open package repository page in the browser

    -

    npm-restart(3)

    +

    npm-restart(3)

    Start a package

    -

    npm-root(3)

    +

    npm-root(3)

    Display npm root

    -

    npm-run-script(3)

    +

    npm-run-script(3)

    Run arbitrary package scripts

    -

    npm-search(3)

    +

    npm-search(3)

    Search for packages

    -

    npm-shrinkwrap(3)

    +

    npm-shrinkwrap(3)

    programmatically generate package shrinkwrap file

    -

    npm-start(3)

    +

    npm-start(3)

    Start a package

    -

    npm-stop(3)

    +

    npm-stop(3)

    Stop a package

    -

    npm-submodule(3)

    -

    Add a package as a git submodule

    -

    npm-tag(3)

    +

    npm-tag(3)

    Tag a published version

    -

    npm-test(3)

    +

    npm-test(3)

    Test a package

    -

    npm-uninstall(3)

    +

    npm-uninstall(3)

    uninstall a package programmatically

    -

    npm-unpublish(3)

    +

    npm-unpublish(3)

    Remove a package from the registry

    -

    npm-update(3)

    +

    npm-update(3)

    Update a package

    -

    npm-version(3)

    +

    npm-version(3)

    Bump a package version

    -

    npm-view(3)

    +

    npm-view(3)

    View registry info

    -

    npm-whoami(3)

    +

    npm-whoami(3)

    Display npm username

    Files

    File system structures npm uses

    -

    npm-folders(5)

    +

    npm-folders(5)

    Folder Structures Used by npm

    -

    npmrc(5)

    +

    npmrc(5)

    The npm config files

    -

    package.json(5)

    +

    package.json(5)

    Specifics of npm's package.json handling

    Misc

    Various other bits and bobs

    -

    npm-coding-style(7)

    +

    npm-coding-style(7)

    npm's "funny" coding style

    -

    npm-config(7)

    +

    npm-config(7)

    More than you probably want to know about npm configuration

    -

    npm-developers(7)

    +

    npm-developers(7)

    Developer Guide

    -

    npm-disputes(7)

    +

    npm-disputes(7)

    Handling Module Name Disputes

    -

    npm-faq(7)

    +

    npm-faq(7)

    Frequently Asked Questions

    -

    npm-index(7)

    +

    npm-index(7)

    Index of all npm documentation

    -

    npm-registry(7)

    +

    npm-registry(7)

    The JavaScript Package Registry

    -

    npm-scripts(7)

    +

    npm-scope(7)

    +

    Scoped packages

    +

    npm-scripts(7)

    How npm handles the "scripts" field

    -

    removing-npm(7)

    +

    removing-npm(7)

    Cleaning the Slate

    -

    semver(7)

    +

    semver(7)

    The semantic versioner for npm

    @@ -232,5 +230,5 @@

    semver(7)

diff --git a/deps/npm/html/doc/misc/index.html b/deps/npm/html/doc/misc/index.html
deleted file mode 100644
index 4db393c7c48..00000000000
--- a/deps/npm/html/doc/misc/index.html
+++ /dev/null
@@ -1,438 +0,0 @@
[All 438 lines deleted; the file carried the same documentation index as
deps/npm/html/doc/index.html above, from README through semver(7).]

diff --git a/deps/npm/html/doc/misc/npm-coding-style.html b/deps/npm/html/doc/misc/npm-coding-style.html
index cd523ac9106..30d3e07cfab 100644
--- a/deps/npm/html/doc/misc/npm-coding-style.html
+++ b/deps/npm/html/doc/misc/npm-coding-style.html
@@ -109,11 +109,11 @@

Logging

logging the same object over and over again is not helpful. Logs should
report what's happening so that it's easier to track down where a fault
occurs.

Use appropriate log levels. See npm-config(7) and search for
"loglevel".

Case, naming, etc.

-Use lowerCamelCase for multiword identifiers when they refer to objects,
-functions, methods, members, or anything not specified in this section.
+Use lowerCamelCase for multiword identifiers when they refer to objects,
+functions, methods, properties, or anything not specified in this section.

Use UpperCamelCase for class names (things that you'd pass to "new").
Use all-lower-hyphen-css-case for multiword filenames and config keys.
Use named functions. They make stack traces easier to follow.

@@ -131,9 +131,9 @@ null, undefined, false, 0

Boolean objects are verboten.

@@ -147,5 +147,5 @@ SEE ALSO

diff --git a/deps/npm/html/doc/misc/npm-config.html b/deps/npm/html/doc/misc/npm-config.html
index aa4c300bfa2..249d5934c9e 100644
--- a/deps/npm/html/doc/misc/npm-config.html
+++ b/deps/npm/html/doc/misc/npm-config.html
@@ -33,7 +33,7 @@

    npmrc Files

  • global config file ($PREFIX/npmrc)
  • npm builtin config file (/path/to/npm/npmrc)
See npmrc(5) for more details.

    Default Configs

    A set of configuration parameters that are internal to npm, and are defaults if nothing else is specified.

    @@ -48,6 +48,7 @@

    Shorthands and Other CLI Niceties

• -dd, --verbose: --loglevel verbose
• -ddd: --loglevel silly
• -g: --global
+• -C: --prefix
• -l: --long
• -m: --message
• -p, --porcelain: --parseable
  • @@ -78,7 +79,7 @@

    Shorthands and Other CLI Niceties

    Per-Package Config Settings

When running scripts (see npm-scripts(7)) the package.json "config"
keys are overwritten in the environment if there is a config param of
<name>[@<version>]:<key>. For example, if the package.json has this:
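The example object itself is elided by the hunk boundary below; in the npm
documentation it is a package.json along these lines (reconstructed for
readability, not part of the diff):

    {
      "name": "foo",
      "config": { "port": "8080" }
    }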

    @@ -89,7 +90,7 @@

    Shorthands and Other CLI Niceties

    http.createServer(...).listen(process.env.npm_package_config_port)

    then the user could change the behavior by doing:

    npm config set foo:port 80
See package.json(5) for more information.

    Config Settings

    always-auth

      @@ -137,7 +138,7 @@

      cache

• Default: Windows: %AppData%\npm-cache, Posix: ~/.npm
• Type: path

The location of npm's cache directory. See npm-cache(1)

    cache-lock-stale

    • Default: 60000 (1 minute)
    • @@ -215,9 +216,6 @@

      editor

    • Type: path

    The command to run for npm edit or npm config edit.

-email
-
-The email of the logged-in user.
-
-Set by the npm adduser command. Should not be set explicitly.

    engine-strict

    • Default: false
    • @@ -287,7 +285,7 @@

      global

Operates in "global" mode, so that packages are installed into the
prefix folder instead of the current working directory. See
npm-folders(5) for more on the differences in behavior.

    • packages are installed into the {prefix}/lib/node_modules folder, instead of the current working directory.
    • @@ -334,31 +332,38 @@

      init-module

A module that will be loaded by the npm init command. See the
documentation for the init-package-json module for more information, or
npm-init(1).

-init.author.name
+init-author-name

• Default: ""
• Type: String

The value npm init should use by default for the package author's name.

-init.author.email
+init-author-email

• Default: ""
• Type: String

The value npm init should use by default for the package author's email.

-init.author.url
+init-author-url

• Default: ""
• Type: String

The value npm init should use by default for the package author's homepage.

-init.license
+init-license

• Default: "ISC"
• Type: String

The value npm init should use by default for the package license.

+init-version
+
+• Default: "0.0.0"
+• Type: semver
+
+The value that npm init should use by default for the package
+version number, if not already set in package.json.
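With the renamed dash-style keys above, a user can seed npm init defaults
once from the shell; the values here are placeholders:

    npm config set init-author-name "Ada Lovelace"
    npm config set init-author-email "ada@example.com"
    npm config set init-license "MIT"
    npm config set init-version "0.1.0"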

      json

      • Default: false
      • @@ -398,14 +403,14 @@

        local-address

        to the npm registry. Must be IPv4 in versions of Node prior to 0.12.

loglevel

-• Default: "http"
+• Default: "warn"
• Type: String
-• Values: "silent", "win", "error", "warn", "http", "info", "verbose", "silly"
+• Values: "silent", "error", "warn", "http", "info", "verbose", "silly"

What level of logs to report. On failure, all logs are written to
npm-debug.log in the current working directory.

-Any logs of a higher level than the setting are shown. The default is
-"http", which shows http, warn, and error output.
+Any logs of a higher level than the setting are shown. The default is
+"warn", which shows warn and error output.

        logstream

        • Default: process.stderr
        • @@ -436,7 +441,7 @@

          node-version

• Default: process.version
• Type: semver or false

-The node version to use when checking package's "engines" hash.
+The node version to use when checking a package's engines map.

        npat

        • Default: false
        • @@ -455,7 +460,7 @@

          optional

• Default: true
• Type: Boolean

-Attempt to install packages in the optionalDependencies hash. Note
+Attempt to install packages in the optionalDependencies object. Note
that if these packages fail to install, the overall installation
process is not aborted.

        parseable

        @@ -467,7 +472,7 @@

        parseable

        standard output.

        prefix

        The location to install global items. If set on the command line, then @@ -523,8 +528,8 @@

        save

      • Type: Boolean

      Save installed packages to a package.json file as dependencies.

      -

-When used with the npm rm command, it removes it from the dependencies
-hash.
+When used with the npm rm command, it removes it from the dependencies
+object.

      Only works if there is already a package.json file present.

      save-bundle

        @@ -541,9 +546,9 @@

        save-dev

      • Default: false
      • Type: Boolean
      -

Save installed packages to a package.json file as devDependencies.

-When used with the npm rm command, it removes it from the
-devDependencies hash.
+When used with the npm rm command, it removes it from the
+devDependencies object.

      Only works if there is already a package.json file present.

      save-exact

        @@ -561,18 +566,29 @@

        save-optional

        Save installed packages to a package.json file as optionalDependencies.

-When used with the npm rm command, it removes it from the
-devDependencies hash.
+When used with the npm rm command, it removes it from the
+devDependencies object.

        Only works if there is already a package.json file present.

        save-prefix

        • Default: '^'
        • Type: String
        -

Configure how versions of packages installed to a package.json file via
--save or --save-dev get prefixed.

For example if a package has version 1.2.3, by default its version is
set to ^1.2.3 which allows minor upgrades for that package, but after
npm config set save-prefix='~' it would be set to ~1.2.3 which only
allows patch upgrades.

        +

        scope

        +
          +
        • Default: ""
        • +
        • Type: String
        • +
        +

        Associate an operation with a scope for a scoped registry. Useful when logging +in to a private registry for the first time: +npm login --scope=@organization --registry=registry.organization.com, which +will cause @organization to be mapped to the registry for future installation +of packages specified according to the pattern @organization/package.

        searchopts

        • Default: ""
        • @@ -671,19 +687,13 @@

          usage

        • Type: Boolean

        Set to show short usage output (like the -H output) -instead of complete help when doing npm-help(1).

        +instead of complete help when doing npm-help(1).

        user

        • Default: "nobody"
        • Type: String or Number

        The UID to set to when running package scripts as root.

-username
-
-• Default: null
-• Type: String
-
-The username on the npm registry. Set with npm adduser

        userconfig

        • Default: ~/.npmrc
        • @@ -718,8 +728,8 @@

          versions

• Default: false
• Type: boolean

-If true, output the npm version as well as node's process.versions
-hash, and exit successfully.
+If true, output the npm version as well as node's process.versions map, and
+exit successfully.

        Only relevant when specified explicitly on the command line.

        viewer

        SEE ALSO

@@ -218,5 +220,5 @@ SEE ALSO

diff --git a/deps/npm/html/doc/misc/removing-npm.html b/deps/npm/html/doc/misc/removing-npm.html
index afd747c9560..3028625d1bf 100644
--- a/deps/npm/html/doc/misc/removing-npm.html
+++ b/deps/npm/html/doc/misc/removing-npm.html
@@ -38,12 +38,12 @@

        SYNOPSIS

    Prior to version 0.3, npm used shim files for executables and node modules. To track those down, you can do the following:

    find /usr/local/{lib/node,bin} -exec grep -l npm \{\} \; ;
(This is also in the README file.)

    SEE ALSO

@@ -57,5 +57,5 @@ SEE ALSO

diff --git a/deps/npm/html/doc/misc/semver.html b/deps/npm/html/doc/misc/semver.html
index 5ab35fa819d..eeea8fbc40f 100644
--- a/deps/npm/html/doc/misc/semver.html
+++ b/deps/npm/html/doc/misc/semver.html
@@ -42,52 +42,150 @@

    Usage

    http://semver.org/.

    A leading "=" or "v" character is stripped off and ignored.

    Ranges

-The following range styles are supported:
+A version range is a set of comparators which specify versions
+that satisfy the range.

    +

    A comparator is composed of an operator and a version. The set +of primitive operators is:

    +
      +
    • < Less than
    • +
    • <= Less than or equal to
    • +
    • > Greater than
    • +
    • >= Greater than or equal to
    • +
    • = Equal. If no operator is specified, then equality is assumed, +so this operator is optional, but MAY be included.
    • +
    +

    For example, the comparator >=1.2.7 would match the versions +1.2.7, 1.2.8, 2.5.3, and 1.3.9, but not the versions 1.2.6 +or 1.1.0.

    +

    Comparators can be joined by whitespace to form a comparator set, +which is satisfied by the intersection of all of the comparators +it includes.

    +

    A range is composed of one or more comparator sets, joined by ||. A +version matches a range if and only if every comparator in at least +one of the ||-separated comparator sets is satisfied by the version.

    +

    For example, the range >=1.2.7 <1.3.0 would match the versions +1.2.7, 1.2.8, and 1.2.99, but not the versions 1.2.6, 1.3.0, +or 1.1.0.

    +

    The range 1.2.7 || >=1.2.9 <2.0.0 would match the versions 1.2.7, +1.2.9, and 1.4.6, but not the versions 1.2.8 or 2.0.0.

    +

    Prerelease Tags

    +

    If a version has a prerelease tag (for example, 1.2.3-alpha.3) then +it will only be allowed to satisfy comparator sets if at least one +comparator with the same [major, minor, patch] tuple also has a +prerelease tag.

    +

    For example, the range >1.2.3-alpha.3 would be allowed to match the +version 1.2.3-alpha.7, but it would not be satisfied by +3.4.5-alpha.9, even though 3.4.5-alpha.9 is technically "greater +than" 1.2.3-alpha.3 according to the SemVer sort rules. The version +range only accepts prerelease tags on the 1.2.3 version. The +version 3.4.5 would satisfy the range, because it does not have a +prerelease flag, and 3.4.5 is greater than 1.2.3-alpha.7.

    +

    The purpose for this behavior is twofold. First, prerelease versions +frequently are updated very quickly, and contain many breaking changes +that are (by the author's design) not yet fit for public consumption. +Therefore, by default, they are excluded from range matching +semantics.

    +

    Second, a user who has opted into using a prerelease version has +clearly indicated the intent to use that specific set of +alpha/beta/rc versions. By including a prerelease tag in the range, +the user is indicating that they are aware of the risk. However, it +is still not appropriate to assume that they have opted into taking a +similar risk on the next set of prerelease versions.

    +

    Advanced Range Syntax

    +

    Advanced range syntax desugars to primitive comparators in +deterministic ways.

    +

    Advanced ranges may be combined in the same way as primitive +comparators using white space or ||.

    +

    Hyphen Ranges X.Y.Z - A.B.C

    +

    Specifies an inclusive set.

      -
    • 1.2.3 A specific version. When nothing else will do. Must be a full -version number, with major, minor, and patch versions specified. -Note that build metadata is still ignored, so 1.2.3+build2012 will -satisfy this range.
    • -
    • >1.2.3 Greater than a specific version.
    • -
    • <1.2.3 Less than a specific version. If there is no prerelease -tag on the version range, then no prerelease version will be allowed -either, even though these are technically "less than".
    • -
    • >=1.2.3 Greater than or equal to. Note that prerelease versions -are NOT equal to their "normal" equivalents, so 1.2.3-beta will -not satisfy this range, but 2.3.0-beta will.
    • -
    • <=1.2.3 Less than or equal to. In this case, prerelease versions -ARE allowed, so 1.2.3-beta would satisfy.
    • 1.2.3 - 2.3.4 := >=1.2.3 <=2.3.4
    • -
    • ~1.2.3 := >=1.2.3-0 <1.3.0-0 "Reasonably close to 1.2.3". When -using tilde operators, prerelease versions are supported as well, -but a prerelease of the next significant digit will NOT be -satisfactory, so 1.3.0-beta will not satisfy ~1.2.3.
    • -
    • ^1.2.3 := >=1.2.3-0 <2.0.0-0 "Compatible with 1.2.3". When -using caret operators, anything from the specified version (including -prerelease) will be supported up to, but not including, the next -major version (or its prereleases). 1.5.1 will satisfy ^1.2.3, -while 1.2.2 and 2.0.0-beta will not.
    • -
    • ^0.1.3 := >=0.1.3-0 <0.2.0-0 "Compatible with 0.1.3". 0.x.x versions are -special: the first non-zero component indicates potentially breaking changes, -meaning the caret operator matches any version with the same first non-zero -component starting at the specified version.
    • -
    • ^0.0.2 := =0.0.2 "Only the version 0.0.2 is considered compatible"
    • -
    • ~1.2 := >=1.2.0-0 <1.3.0-0 "Any version starting with 1.2"
    • -
    • ^1.2 := >=1.2.0-0 <2.0.0-0 "Any version compatible with 1.2"
    • -
    • 1.2.x := >=1.2.0-0 <1.3.0-0 "Any version starting with 1.2"
    • -
    • 1.2.* Same as 1.2.x.
    • -
    • 1.2 Same as 1.2.x.
    • -
    • ~1 := >=1.0.0-0 <2.0.0-0 "Any version starting with 1"
    • -
    • ^1 := >=1.0.0-0 <2.0.0-0 "Any version compatible with 1"
    • -
    • 1.x := >=1.0.0-0 <2.0.0-0 "Any version starting with 1"
    • -
    • 1.* Same as 1.x.
    • -
    • 1 Same as 1.x.
    • -
    • * Any version whatsoever.
    • -
    • x Same as *.
    • -
    • "" (just an empty string) Same as *.
    -

    Ranges can be joined with either a space (which implies "and") or a -|| (which implies "or").

    +

    If a partial version is provided as the first version in the inclusive +range, then the missing pieces are replaced with zeroes.

    +
      +
    • 1.2 - 2.3.4 := >=1.2.0 <=2.3.4
    • +
    +

    If a partial version is provided as the second version in the +inclusive range, then all versions that start with the supplied parts +of the tuple are accepted, but nothing that would be greater than the +provided tuple parts.

    +
      +
    • 1.2.3 - 2.3 := >=1.2.3 <2.4.0
    • +
    • 1.2.3 - 2 := >=1.2.3 <3.0.0
    • +
    +

    X-Ranges 1.2.x 1.X 1.2.* *

    +

    Any of X, x, or * may be used to "stand in" for one of the +numeric values in the [major, minor, patch] tuple.

    +
      +
    • * := >=0.0.0 (Any version satisfies)
    • +
    • 1.x := >=1.0.0 <2.0.0 (Matching major version)
    • +
    • 1.2.x := >=1.2.0 <1.3.0 (Matching major and minor versions)
    • +
    +

    A partial version range is treated as an X-Range, so the special +character is in fact optional.

    +
      +
    • "" (empty string) := * := >=0.0.0
    • +
    • 1 := 1.x.x := >=1.0.0 <2.0.0
    • +
    • 1.2 := 1.2.x := >=1.2.0 <1.3.0
    • +
    +

    Tilde Ranges ~1.2.3 ~1.2 ~1

    +

    Allows patch-level changes if a minor version is specified on the +comparator. Allows minor-level changes if not.

    +
      +
    • ~1.2.3 := >=1.2.3 <1.(2+1).0 := >=1.2.3 <1.3.0
    • +
    • ~1.2 := >=1.2.0 <1.(2+1).0 := >=1.2.0 <1.3.0 (Same as 1.2.x)
    • +
    • ~1 := >=1.0.0 <(1+1).0.0 := >=1.0.0 <2.0.0 (Same as 1.x)
    • +
    • ~0.2.3 := >=0.2.3 <0.(2+1).0 := >=0.2.3 <0.3.0
    • +
    • ~0.2 := >=0.2.0 <0.(2+1).0 := >=0.2.0 <0.3.0 (Same as 0.2.x)
    • +
    • ~0 := >=0.0.0 <(0+1).0.0 := >=0.0.0 <1.0.0 (Same as 0.x)
    • +
    • ~1.2.3-beta.2 := >=1.2.3-beta.2 <1.3.0 Note that prereleases in +the 1.2.3 version will be allowed, if they are greater than or +equal to beta.2. So, 1.2.3-beta.4 would be allowed, but +1.2.4-beta.2 would not, because it is a prerelease of a +different [major, minor, patch] tuple.
    • +
    +

    Note: this is the same as the ~> operator in rubygems.
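A short sketch of the tilde rules above, using the semver package's
satisfies function:

    var semver = require("semver");

    semver.satisfies("1.2.9", "~1.2.3"); // true  (patch-level change)
    semver.satisfies("1.3.0", "~1.2.3"); // false (minor bump not allowed)
    semver.satisfies("1.9.0", "~1");     // true  (same as 1.x)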

    +

    Caret Ranges ^1.2.3 ^0.2.5 ^0.0.4

    +

    Allows changes that do not modify the left-most non-zero digit in the +[major, minor, patch] tuple. In other words, this allows patch and +minor updates for versions 1.0.0 and above, patch updates for +versions 0.X >=0.1.0, and no updates for versions 0.0.X.

    +

    Many authors treat a 0.x version as if the x were the major +"breaking-change" indicator.

    +

    Caret ranges are ideal when an author may make breaking changes +between 0.2.4 and 0.3.0 releases, which is a common practice. +However, it presumes that there will not be breaking changes between +0.2.4 and 0.2.5. It allows for changes that are presumed to be +additive (but non-breaking), according to commonly observed practices.

    +
      +
    • ^1.2.3 := >=1.2.3 <2.0.0
    • +
    • ^0.2.3 := >=0.2.3 <0.3.0
    • +
    • ^0.0.3 := >=0.0.3 <0.0.4
    • +
    • ^1.2.3-beta.2 := >=1.2.3-beta.2 <2.0.0 Note that prereleases in +the 1.2.3 version will be allowed, if they are greater than or +equal to beta.2. So, 1.2.3-beta.4 would be allowed, but +1.2.4-beta.2 would not, because it is a prerelease of a +different [major, minor, patch] tuple.
    • +
    • ^0.0.3-beta := >=0.0.3-beta <0.0.4 Note that prereleases in the +0.0.3 version only will be allowed, if they are greater than or +equal to beta. So, 0.0.3-pr.2 would be allowed.
    • +
    +

    When parsing caret ranges, a missing patch value desugars to the +number 0, but will allow flexibility within that value, even if the +major and minor versions are both 0.

    +
      +
    • ^1.2.x := >=1.2.0 <2.0.0
    • +
    • ^0.0.x := >=0.0.0 <0.1.0
    • +
    • ^0.0 := >=0.0.0 <0.1.0
    • +
    +

+Missing minor and patch values will desugar to zero, but also
+allow flexibility within those values, even if the major version is
+zero.

    +
      +
    • ^1.x := >=1.0.0 <2.0.0
    • +
    • ^0.x := >=0.0.0 <1.0.0
    • +

    Functions

All methods and classes take a final loose boolean argument that, if
true, will be more forgiving about not-quite-valid semver strings.

@@ -165,5 +263,5 @@ Ranges

diff --git a/deps/npm/html/partial/doc/README.html b/deps/npm/html/partial/doc/README.html
new file mode 100644
index 00000000000..13ff98d2c37
--- /dev/null
+++ b/deps/npm/html/partial/doc/README.html
@@ -0,0 +1,166 @@

    npm

    node package manager

    +

    Build Status

    +

    SYNOPSIS

    +

    This is just enough info to get you up and running.

    +

    Much more info available via npm help once it's installed.

    +

    IMPORTANT

    +

    You need node v0.8 or higher to run this program.

    +

    To install an old and unsupported version of npm that works on node 0.3 +and prior, clone the git repo and dig through the old tags and branches.

    +

    Super Easy Install

    +

    npm comes with node now.

    +

    Windows Computers

    +

    Get the MSI. npm is in it.

    +

    Apple Macintosh Computers

    +

    Get the pkg. npm is in it.

    +

    Other Sorts of Unices

    +

    Run make install. npm will be installed with node.

    +

    If you want a more fancy pants install (a different version, customized +paths, etc.) then read on.

    +

    Fancy Install (Unix)

    +

    There's a pretty robust install script at +https://www.npmjs.org/install.sh. You can download that and run it.

    +

    Here's an example using curl:

    +
    curl -L https://npmjs.org/install.sh | sh
    +

    Slightly Fancier

    +

    You can set any npm configuration params with that script:

    +
    npm_config_prefix=/some/path sh install.sh
    +

    Or, you can run it in uber-debuggery mode:

    +
    npm_debug=1 sh install.sh
    +

    Even Fancier

    +

    Get the code with git. Use make to build the docs and do other stuff. +If you plan on hacking on npm, make link is your friend.

    +

    If you've got the npm source code, you can also semi-permanently set +arbitrary config keys using the ./configure --key=val ..., and then +run npm commands by doing node cli.js <cmd> <args>. (This is helpful +for testing, or running stuff without actually installing npm itself.)

    +

    Fancy Windows Install

    +

    You can download a zip file from https://npmjs.org/dist/, and unpack it +in the same folder where node.exe lives.

    +

    If that's not fancy enough for you, then you can fetch the code with +git, and mess with it directly.

    +

    Installing on Cygwin

    +

    No.

    +

    Permissions when Using npm to Install Other Stuff

    +

    tl;dr

    +
      +
    • Use sudo for greater safety. Or don't, if you prefer not to.
    • +
    • npm will downgrade permissions if it's root before running any build +scripts that package authors specified.
    • +
    +

    More details...

    +

    As of version 0.3, it is recommended to run npm as root. +This allows npm to change the user identifier to the nobody user prior +to running any package build or test commands.

    +

    If you are not the root user, or if you are on a platform that does not +support uid switching, then npm will not attempt to change the userid.

    +

    If you would like to ensure that npm always runs scripts as the +"nobody" user, and have it fail if it cannot downgrade permissions, then +set the following configuration param:

    +
    npm config set unsafe-perm false
    +

    This will prevent running in unsafe mode, even as non-root users.

    +

    Uninstalling

    +

    So sad to see you go.

    +
    sudo npm uninstall npm -g
    +

    Or, if that fails,

    +
    sudo make uninstall
    +

    More Severe Uninstalling

    +

    Usually, the above instructions are sufficient. That will remove +npm, but leave behind anything you've installed.

    +

    If you would like to remove all the packages that you have installed, +then you can use the npm ls command to find them, and then npm rm to +remove them.

    +

    To remove cruft left behind by npm 0.x, you can use the included +clean-old.sh script file. You can run it conveniently like this:

    +
    npm explore npm -g -- sh scripts/clean-old.sh
    +

    npm uses two configuration files, one for per-user configs, and another +for global (every-user) configs. You can view them by doing:

    +
    npm config get userconfig   # defaults to ~/.npmrc
    +npm config get globalconfig # defaults to /usr/local/etc/npmrc
    +

    Uninstalling npm does not remove configuration files by default. You +must remove them yourself manually if you want them gone. Note that +this means that future npm installs will not remember the settings that +you have chosen.

    +

    Using npm Programmatically

    +

    If you would like to use npm programmatically, you can do that. +It's not very well documented, but it is rather simple.

    +

    Most of the time, unless you actually want to do all the things that +npm does, you should try using one of npm's dependencies rather than +using npm itself, if possible.

    +

    Eventually, npm will be just a thin cli wrapper around the modules +that it depends on, but for now, there are some things that you must +use npm itself to do.

    +
    var npm = require("npm")
    +npm.load(myConfigObject, function (er) {
+  if (er) return handleError(er)
    +  npm.commands.install(["some", "args"], function (er, data) {
    +    if (er) return commandFailed(er)
    +    // command succeeded, and data might have some info
    +  })
    +  npm.registry.log.on("log", function (message) { .... })
    +})
    +

    The load function takes an object hash of the command-line configs. +The various npm.commands.<cmd> functions take an array of +positional argument strings. The last argument to any +npm.commands.<cmd> function is a callback. Some commands take other +optional arguments. Read the source.

    +

    You cannot set configs individually for any single npm function at this +time. Since npm is a singleton, any call to npm.config.set will +change the value for all npm commands in that process.

    +

    See ./bin/npm-cli.js for an example of pulling config values off of the +command line arguments using nopt. You may also want to check out npm +help config to learn about all the options you can set there.

    +

    More Docs

    +

    Check out the docs, +especially the faq.

    +

    You can use the npm help command to read any of them.

    +

    If you're a developer, and you want to use npm to publish your program, +you should read this

    + +

    "npm" and "The npm Registry" are owned by npm, Inc. +All rights reserved. See the included LICENSE file for more details.

    +

    "Node.js" and "node" are trademarks owned by Joyent, Inc.

    +

    Modules published on the npm registry are not officially endorsed by +npm, Inc. or the Node.js project.

    +

    Data published to the npm registry is not part of npm itself, and is +the sole property of the publisher. While every effort is made to +ensure accountability, there is absolutely no guarantee, warrantee, or +assertion expressed or implied as to the quality, fitness for a +specific purpose, or lack of malice in any given npm package.

    +

    If you have a complaint about a package in the public npm registry, +and cannot resolve it with the package +owner, please email +support@npmjs.com and explain the situation.

    +

    Any data published to The npm Registry (including user account +information) may be removed or modified at the sole discretion of the +npm server administrators.

    +

    In plainer english

    +

    npm is the property of npm, Inc.

    +

    If you publish something, it's yours, and you are solely accountable +for it.

    +

    If other people publish something, it's theirs.

    +

    Users can publish Bad Stuff. It will be removed promptly if reported. +But there is no vetting process for published modules, and you use +them at your own risk. Please inspect the source.

    +

    If you publish Bad Stuff, we may delete it from the registry, or even +ban your account in extreme cases. So don't do that.

    +

    BUGS

    +

    When you find issues, please report them:

    + +

    Be sure to include all of the output from the npm command that didn't work +as expected. The npm-debug.log file is also helpful to provide.

    +

    You can also look for isaacs in #node.js on irc://irc.freenode.net. He +will no doubt tell you to put the output in a gist or email.

    +

    SEE ALSO

    + + diff --git a/deps/npm/html/partial/doc/api/npm-bin.html b/deps/npm/html/partial/doc/api/npm-bin.html new file mode 100644 index 00000000000..54f895518ab --- /dev/null +++ b/deps/npm/html/partial/doc/api/npm-bin.html @@ -0,0 +1,8 @@ +

    npm-bin

    Display npm bin folder

    +

    SYNOPSIS

    +
    npm.commands.bin(args, cb)
    +

    DESCRIPTION

    +

    Print the folder where npm will install executables.

    +

    This function should not be used programmatically. Instead, just refer +to the npm.bin property.
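A minimal programmatic sketch, then, reads the property rather than
calling the command:

    var npm = require("npm");

    npm.load({}, function (er) {
      if (er) throw er;
      // npm.bin holds the executable folder; no command call needed
      console.log(npm.bin);
    });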

    + diff --git a/deps/npm/html/partial/doc/api/npm-bugs.html b/deps/npm/html/partial/doc/api/npm-bugs.html new file mode 100644 index 00000000000..e9ff2a58aa5 --- /dev/null +++ b/deps/npm/html/partial/doc/api/npm-bugs.html @@ -0,0 +1,13 @@ +

    npm-bugs

    Bugs for a package in a web browser maybe

    +

    SYNOPSIS

    +
    npm.commands.bugs(package, callback)
    +

    DESCRIPTION

    +

    This command tries to guess at the likely location of a package's +bug tracker URL, and then tries to open it using the --browser +config param.

    +

    Like other commands, the first parameter is an array. This command only +uses the first element, which is expected to be a package name with an +optional version number.

    +

    This command will launch a browser, so this command may not be the most +friendly for programmatic use.
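Still, a hedged sketch of the call shape (the package name is arbitrary
here):

    var npm = require("npm");

    npm.load({ loglevel: "silent" }, function (er) {
      if (er) throw er;
      // first array element: a package name, optionally with a version
      npm.commands.bugs(["npm"], function (er) {
        if (er) return console.error(er);
        // the configured --browser has been launched at this point
      });
    });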

    + diff --git a/deps/npm/html/partial/doc/api/npm-cache.html b/deps/npm/html/partial/doc/api/npm-cache.html new file mode 100644 index 00000000000..b837a688695 --- /dev/null +++ b/deps/npm/html/partial/doc/api/npm-cache.html @@ -0,0 +1,22 @@ +

    npm-cache

    manage the npm cache programmatically

    +

    SYNOPSIS

    +
    npm.commands.cache([args], callback)
    +
    +// helpers
    +npm.commands.cache.clean([args], callback)
    +npm.commands.cache.add([args], callback)
    +npm.commands.cache.read(name, version, forceBypass, callback)
    +

    DESCRIPTION

    +

    This acts much the same ways as the npm-cache(1) command line +functionality.

    +

    The callback is called with the package.json data of the thing that is +eventually added to or read from the cache.

    +

    The top level npm.commands.cache(...) functionality is a public +interface, and like all commands on the npm.commands object, it will +match the command line behavior exactly.

    +

    However, the cache folder structure and the cache helper functions are +considered internal API surface, and as such, may change in future +releases of npm, potentially without warning or significant version +incrementation.

    +

    Use at your own risk.
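For instance, a sketch of the public interface mirroring `npm cache
clean` on the command line:

    var npm = require("npm");

    npm.load({}, function (er) {
      if (er) throw er;
      // equivalent to `npm cache clean`
      npm.commands.cache(["clean"], function (er) {
        if (er) return console.error(er);
        console.log("cache cleaned");
      });
    });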

    + diff --git a/deps/npm/html/partial/doc/api/npm-commands.html b/deps/npm/html/partial/doc/api/npm-commands.html new file mode 100644 index 00000000000..eaf57af4af0 --- /dev/null +++ b/deps/npm/html/partial/doc/api/npm-commands.html @@ -0,0 +1,16 @@ +

    npm-commands

    npm commands

    +

    SYNOPSIS

    +
    npm.commands[<command>](args, callback)
    +

    DESCRIPTION

    +

    npm comes with a full set of commands, and each of the commands takes a +similar set of arguments.

    +

    In general, all commands on the command object take an array of positional +argument strings. The last argument to any function is a callback. Some +commands are special and take other optional arguments.

    +

    All commands have their own man page. See man npm-<command> for command-line +usage, or man 3 npm-<command> for programmatic usage.

    +

    SEE ALSO

    + + diff --git a/deps/npm/html/partial/doc/api/npm-config.html b/deps/npm/html/partial/doc/api/npm-config.html new file mode 100644 index 00000000000..b34c02182d3 --- /dev/null +++ b/deps/npm/html/partial/doc/api/npm-config.html @@ -0,0 +1,37 @@ +

    npm-config

    Manage the npm configuration files

    +

    SYNOPSIS

    +
    npm.commands.config(args, callback)
    +var val = npm.config.get(key)
    +npm.config.set(key, val)
    +

    DESCRIPTION

    +

    This function acts much the same way as the command-line version. The first +element in the array tells config what to do. Possible values are:

    +
      +
    • set

      +

      Sets a config parameter. The second element in args is interpreted as the + key, and the third element is interpreted as the value.

      +
    • +
    • get

      +

      Gets the value of a config parameter. The second element in args is the + key to get the value of.

      +
    • +
    • delete (rm or del)

      +

      Deletes a parameter from the config. The second element in args is the + key to delete.

      +
    • +
    • list (ls)

      +

      Show all configs that aren't secret. No parameters necessary.

      +
    • +
    • edit:

      +

      Opens the config file in the default editor. This command isn't very useful + programmatically, but it is made available.

      +
    • +
    +

    To programmatically access npm configuration settings, or set them for +the duration of a program, use the npm.config.set and npm.config.get +functions instead.
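A small sketch of that programmatic path:

    var npm = require("npm");

    npm.load({}, function (er) {
      if (er) throw er;
      // set for the duration of this process only
      npm.config.set("loglevel", "verbose");
      console.log(npm.config.get("loglevel")); // "verbose"
    });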

    +

    SEE ALSO

    + + diff --git a/deps/npm/html/partial/doc/api/npm-deprecate.html b/deps/npm/html/partial/doc/api/npm-deprecate.html new file mode 100644 index 00000000000..f0ef298b2a2 --- /dev/null +++ b/deps/npm/html/partial/doc/api/npm-deprecate.html @@ -0,0 +1,27 @@ +

    npm-deprecate

    Deprecate a version of a package

    +

    SYNOPSIS

    +
    npm.commands.deprecate(args, callback)
    +

    DESCRIPTION

    +

    This command will update the npm registry entry for a package, providing +a deprecation warning to all who attempt to install it.

    +

    The 'args' parameter must have exactly two elements:

    +
      +
    • package[@version]

      +

      The version portion is optional, and may be either a range, or a + specific version, or a tag.

      +
    • +
    • message

      +

      The warning message that will be printed whenever a user attempts to + install the package.

      +
    • +
    +

    Note that you must be the package owner to deprecate something. See the +owner and adduser help topics.

    +

    To un-deprecate a package, specify an empty string ("") for the message argument.
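A sketch of the two-element args shape; the package name and range are
placeholders, and you must own the package for the call to succeed:

    var npm = require("npm");

    npm.load({}, function (er) {
      if (er) throw er;
      // element 1: package[@version-range], element 2: warning message
      npm.commands.deprecate(
        ["my-package@<0.3.0", "critical bug fixed in 0.3.0, please update"],
        function (er) {
          if (er) return console.error(er);
        }
      );
    });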

    +

    SEE ALSO

    + + diff --git a/deps/npm/html/partial/doc/api/npm-docs.html b/deps/npm/html/partial/doc/api/npm-docs.html new file mode 100644 index 00000000000..dde38920fd1 --- /dev/null +++ b/deps/npm/html/partial/doc/api/npm-docs.html @@ -0,0 +1,13 @@ +

    npm-docs

    Docs for a package in a web browser maybe

    +

    SYNOPSIS

    +
    npm.commands.docs(package, callback)
    +

    DESCRIPTION

    +

    This command tries to guess at the likely location of a package's +documentation URL, and then tries to open it using the --browser +config param.

    +

    Like other commands, the first parameter is an array. This command only +uses the first element, which is expected to be a package name with an +optional version number.

    +

    This command will launch a browser, so this command may not be the most +friendly for programmatic use.

    + diff --git a/deps/npm/html/partial/doc/api/npm-edit.html b/deps/npm/html/partial/doc/api/npm-edit.html new file mode 100644 index 00000000000..ef49f94e14e --- /dev/null +++ b/deps/npm/html/partial/doc/api/npm-edit.html @@ -0,0 +1,16 @@ +

    npm-edit

    Edit an installed package

    +

    SYNOPSIS

    +
    npm.commands.edit(package, callback)
    +

    DESCRIPTION

    +

    Opens the package folder in the default editor (or whatever you've +configured as the npm editor config -- see npm help config.)

    +

    After it has been edited, the package is rebuilt so as to pick up any +changes in compiled packages.

    +

    For instance, you can do npm install connect to install connect +into your package, and then npm.commands.edit(["connect"], callback) +to make a few changes to your locally installed copy.

    +

    The first parameter is a string array with a single element, the package +to open. The package can optionally have a version number attached.

    +

    Since this command opens an editor in a new process, be careful about where +and how this is used.

    + diff --git a/deps/npm/html/partial/doc/api/npm-explore.html b/deps/npm/html/partial/doc/api/npm-explore.html new file mode 100644 index 00000000000..60f3ac17802 --- /dev/null +++ b/deps/npm/html/partial/doc/api/npm-explore.html @@ -0,0 +1,11 @@ +

    npm-explore

    Browse an installed package

    +

    SYNOPSIS

    +
    npm.commands.explore(args, callback)
    +

    DESCRIPTION

    +

    Spawn a subshell in the directory of the installed package specified.

    +

    If a command is specified, then it is run in the subshell, which then +immediately terminates.

    +

    Note that the package is not automatically rebuilt afterwards, so be +sure to use npm rebuild <pkg> if you make any changes.

    +

    The first element in the 'args' parameter must be a package name. After that is the optional command, which can be any number of strings. All of the strings will be combined into one, space-delimited command.
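A sketch of that calling convention (the package name is a placeholder):

    var npm = require("npm");

    npm.load({}, function (er) {
      if (er) throw er;
      // runs `ls -l` inside the installed package's folder,
      // then the subshell terminates
      npm.commands.explore(["some-package", "ls", "-l"], function (er) {
        if (er) return console.error(er);
      });
    });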

    + diff --git a/deps/npm/html/partial/doc/api/npm-help-search.html b/deps/npm/html/partial/doc/api/npm-help-search.html new file mode 100644 index 00000000000..7818b6b1e74 --- /dev/null +++ b/deps/npm/html/partial/doc/api/npm-help-search.html @@ -0,0 +1,24 @@ +

    npm-help-search

    Search the help pages

    +

    SYNOPSIS

    +
    npm.commands.helpSearch(args, [silent,] callback)
    +

    DESCRIPTION

    +

    This command is rarely useful, but it exists in the rare case that it is.

    +

    This command takes an array of search terms and returns the help pages that +match in order of best match.

    +

    If there is only one match, then npm displays that help section. If there +are multiple results, the results are printed to the screen formatted and the +array of results is returned. Each result is an object with these properties:

    +
      +
    • hits: +A map of args to number of hits on that arg. For example, {"npm": 3}
    • +
    • found: +Total number of unique args that matched.
    • +
    • totalHits: +Total number of hits.
    • +
    • lines: +An array of all matching lines (and some adjacent lines).
    • +
    • file: +Name of the file that matched
    • +
    +

The silent parameter is not currently used, but it may be in the future.
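A sketch of a silent search that reads the documented result fields,
assuming the callback receives the results array described above:

    var npm = require("npm");

    npm.load({}, function (er) {
      if (er) throw er;
      // silent=true suppresses printing; results arrive in the callback
      npm.commands.helpSearch(["shrinkwrap"], true, function (er, results) {
        if (er) return console.error(er);
        results.forEach(function (r) {
          console.log(r.file, r.totalHits);
        });
      });
    });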

    + diff --git a/deps/npm/html/partial/doc/api/npm-init.html b/deps/npm/html/partial/doc/api/npm-init.html new file mode 100644 index 00000000000..723fbdebedb --- /dev/null +++ b/deps/npm/html/partial/doc/api/npm-init.html @@ -0,0 +1,19 @@ +

    npm init

    Interactively create a package.json file

    +

    SYNOPSIS

    +
    npm.commands.init(args, callback)
    +

    DESCRIPTION

    +

    This will ask you a bunch of questions, and then write a package.json for you.

    +

    It attempts to make reasonable guesses about what you want things to be set to, +and then writes a package.json file with the options you've selected.

    +

    If you already have a package.json file, it'll read that first, and default to +the options in there.

    +

    It is strictly additive, so it does not delete options from your package.json +without a really good reason to do so.

    +

Since this function expects to be run on the command-line, it doesn't work
very well programmatically. The best option is to roll your own, and since
JavaScript makes it stupid simple to output formatted JSON, that is the
preferred method. If you're sure you want to handle command-line prompting,
then go ahead and use this programmatically.

    +

    SEE ALSO

    +

    package.json(5)

    + diff --git a/deps/npm/html/partial/doc/api/npm-install.html b/deps/npm/html/partial/doc/api/npm-install.html new file mode 100644 index 00000000000..bfbd5668877 --- /dev/null +++ b/deps/npm/html/partial/doc/api/npm-install.html @@ -0,0 +1,12 @@ +

    npm-install

    install a package programmatically

    +

    SYNOPSIS

    +
    npm.commands.install([where,] packages, callback)
    +

    DESCRIPTION

    +

    This acts much the same ways as installing on the command-line.

    +

    The 'where' parameter is optional and only used internally, and it specifies +where the packages should be installed to.

    +

    The 'packages' parameter is an array of strings. Each element in the array is +the name of a package to be installed.

    +

    Finally, 'callback' is a function that will be called when all packages have been +installed or when an error has been encountered.
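A sketch of a plain programmatic install; the package names are examples:

    var npm = require("npm");

    npm.load({}, function (er) {
      if (er) throw er;
      // installs into the current working directory's node_modules
      npm.commands.install(["mkdirp", "rimraf"], function (er, data) {
        if (er) return console.error(er);
        // data describes what was installed
      });
    });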

    + diff --git a/deps/npm/html/partial/doc/api/npm-link.html b/deps/npm/html/partial/doc/api/npm-link.html new file mode 100644 index 00000000000..e2efe3ebc1f --- /dev/null +++ b/deps/npm/html/partial/doc/api/npm-link.html @@ -0,0 +1,22 @@ +

    npm-link

    Symlink a package folder

    +

    SYNOPSIS

    +
    npm.commands.link(callback)
    +npm.commands.link(packages, callback)
    +

    DESCRIPTION

    +

    Package linking is a two-step process.

    +

    Without parameters, link will create a globally-installed +symbolic link from prefix/package-name to the current folder.

    +

With parameters, link will create a symlink from the local node_modules folder to the global symlink.

    +

    When creating tarballs for npm publish, the linked packages are +"snapshotted" to their current state by resolving the symbolic links.

    +

    This is +handy for installing your own stuff, so that you can work on it and test it +iteratively without having to continually rebuild.

    +

    For example:

    +
    npm.commands.link(cb)           # creates global link from the cwd
    +                                # (say redis package)
    +npm.commands.link('redis', cb)  # link-install the package
    +

    Now, any changes to the redis package will be reflected in +the package in the current working directory

    + diff --git a/deps/npm/html/partial/doc/api/npm-load.html b/deps/npm/html/partial/doc/api/npm-load.html new file mode 100644 index 00000000000..0796cacdab6 --- /dev/null +++ b/deps/npm/html/partial/doc/api/npm-load.html @@ -0,0 +1,17 @@ +

    npm-load

    Load config settings

    +

    SYNOPSIS

    +
    npm.load(conf, cb)
    +

    DESCRIPTION

    +

    npm.load() must be called before any other function call. Both parameters are +optional, but the second is recommended.

    +

    The first parameter is an object containing command-line config params, and the +second parameter is a callback that will be called when npm is loaded and ready +to serve.

    +

    The first parameter should follow a similar structure as the package.json +config object.

    +

    For example, to emulate the --dev flag, pass an object that looks like this:

    +
    {
    +  "dev": true
    +}
    +

    For a list of all the available command-line configs, see npm help config
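Putting it together, a minimal sketch:

var npm = require("npm")

// emulate the --dev flag, then read the value back once npm is ready
npm.load({ "dev": true }, function (er, npm) {
  if (er) throw er
  console.log("dev = " + npm.config.get("dev"))
})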

    + diff --git a/deps/npm/html/partial/doc/api/npm-ls.html b/deps/npm/html/partial/doc/api/npm-ls.html new file mode 100644 index 00000000000..508003ca158 --- /dev/null +++ b/deps/npm/html/partial/doc/api/npm-ls.html @@ -0,0 +1,43 @@ +

    npm-ls

    List installed packages

    +

    SYNOPSIS

    +
    npm.commands.ls(args, [silent,] callback)
    +

    DESCRIPTION

    +

    This command will print to stdout all the versions of packages that are +installed, as well as their dependencies, in a tree-structure. It will also +return that data using the callback.

    +

    This command does not take any arguments, but args must be defined. +Beyond that, if any arguments are passed in, npm will politely warn that it +does not take positional arguments, though you may set config flags +like with any other command, such as global to list global packages.

    +

    It will print out extraneous, missing, and invalid packages.

    +

    If the silent parameter is set to true, nothing will be output to the screen, +but the data will still be returned.

    +

    Callback is provided an error if one occurred, the full data about which +packages are installed and which dependencies they will receive, and a +"lite" data object which just shows which versions are installed where. +Note that the full data object is a circular structure, so care must be +taken if it is serialized to JSON.

    +

    CONFIGURATION

    +

    long

    +
      +
    • Default: false
    • +
    • Type: Boolean
    • +
    +

    Show extended information.

    +

    parseable

    +
      +
    • Default: false
    • +
    • Type: Boolean
    • +
    +

    Show parseable output instead of tree view.

    +

    global

    +
      +
    • Default: false
    • +
    • Type: Boolean
    • +
    +

    List packages in the global install prefix instead of in the current +project.

    +

Note, if parseable is set or long isn't set, then duplicates will be trimmed. This means that if a submodule has the same dependency as a parent module, then the dependency will only be output once.
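A short sketch of a silent call that uses the "lite" data:

var npm = require("npm")

npm.load(function (er) {
  if (er) throw er
  // args must be defined even though ls takes no positional arguments
  npm.commands.ls([], true, function (er, data, lite) {
    if (er) throw er
    // the full data object is circular; lite is safe to serialize
    console.log(JSON.stringify(lite, null, 2))
  })
})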

    + diff --git a/deps/npm/html/partial/doc/api/npm-outdated.html b/deps/npm/html/partial/doc/api/npm-outdated.html new file mode 100644 index 00000000000..16d3150d3f7 --- /dev/null +++ b/deps/npm/html/partial/doc/api/npm-outdated.html @@ -0,0 +1,8 @@ +

    npm-outdated

    Check for outdated packages

    +

    SYNOPSIS

    +
    npm.commands.outdated([packages,] callback)
    +

    DESCRIPTION

    +

    This command will check the registry to see if the specified packages are +currently outdated.

    +

    If the 'packages' parameter is left out, npm will check all packages.

    + diff --git a/deps/npm/html/partial/doc/api/npm-owner.html b/deps/npm/html/partial/doc/api/npm-owner.html new file mode 100644 index 00000000000..20e8b6840e4 --- /dev/null +++ b/deps/npm/html/partial/doc/api/npm-owner.html @@ -0,0 +1,27 @@ +

    npm-owner

    Manage package owners

    +

    SYNOPSIS

    +
    npm.commands.owner(args, callback)
    +

    DESCRIPTION

    +

The first element of the 'args' parameter defines what to do, and the subsequent elements depend on the action. Possible values for the action are (the order of parameters is given in parentheses):

    +
      +
    • ls (package): +List all the users who have access to modify a package and push new versions. +Handy when you need to know who to bug for help.
    • +
    • add (user, package): +Add a new user as a maintainer of a package. This user is enabled to modify +metadata, publish new versions, and add other owners.
    • +
    • rm (user, package): +Remove a user from the package owner list. This immediately revokes their +privileges.
    • +
    +

    Note that there is only one level of access. Either you can modify a package, +or you can't. Future versions may contain more fine-grained access levels, but +that is not implemented at this time.
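For example, a sketch of the 'ls' action (the package name is hypothetical; the other actions take the same shape, e.g. ["add", "user", "pkg"]):

var npm = require("npm")

npm.load(function (er) {
  if (er) throw er
  npm.commands.owner(["ls", "somepackage"], function (er, data) {
    if (er) throw er
    // data, if any, depends on the action performed
    console.log(data)
  })
})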

    +

    SEE ALSO

    + + diff --git a/deps/npm/html/partial/doc/api/npm-pack.html b/deps/npm/html/partial/doc/api/npm-pack.html new file mode 100644 index 00000000000..6417688673c --- /dev/null +++ b/deps/npm/html/partial/doc/api/npm-pack.html @@ -0,0 +1,13 @@ +

    npm-pack

    Create a tarball from a package

    +

    SYNOPSIS

    +
    npm.commands.pack([packages,] callback)
    +

    DESCRIPTION

    +

    For anything that's installable (that is, a package folder, tarball, +tarball url, name@tag, name@version, or name), this command will fetch +it to the cache, and then copy the tarball to the current working +directory as <name>-<version>.tgz, and then write the filenames out to +stdout.

    +

    If the same package is specified multiple times, then the file will be +overwritten the second time.

    +

    If no arguments are supplied, then npm packs the current package folder.

    + diff --git a/deps/npm/html/partial/doc/api/npm-prefix.html b/deps/npm/html/partial/doc/api/npm-prefix.html new file mode 100644 index 00000000000..e9904b18d9f --- /dev/null +++ b/deps/npm/html/partial/doc/api/npm-prefix.html @@ -0,0 +1,9 @@ +

    npm-prefix

    Display prefix

    +

    SYNOPSIS

    +
    npm.commands.prefix(args, callback)
    +

    DESCRIPTION

    +

    Print the prefix to standard out.

    +

    'args' is never used and callback is never called with data. +'args' must be present or things will break.

    +

This function is not useful programmatically.

    + diff --git a/deps/npm/html/partial/doc/api/npm-prune.html b/deps/npm/html/partial/doc/api/npm-prune.html new file mode 100644 index 00000000000..5835a9b6a79 --- /dev/null +++ b/deps/npm/html/partial/doc/api/npm-prune.html @@ -0,0 +1,10 @@ +

    npm-prune

    Remove extraneous packages

    +

    SYNOPSIS

    +
    npm.commands.prune([packages,] callback)
    +

    DESCRIPTION

    +

    This command removes "extraneous" packages.

    +

    The first parameter is optional, and it specifies packages to be removed.

    +

If no packages are specified, then all packages will be checked.

    +

    Extraneous packages are packages that are not listed on the parent +package's dependencies list.
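A minimal sketch (an empty array here stands for "no packages specified"):

var npm = require("npm")

npm.load(function (er) {
  if (er) throw er
  npm.commands.prune([], function (er) {
    if (er) throw er
    console.log("extraneous packages removed")
  })
})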

    + diff --git a/deps/npm/html/partial/doc/api/npm-publish.html b/deps/npm/html/partial/doc/api/npm-publish.html new file mode 100644 index 00000000000..f0e5da91ba3 --- /dev/null +++ b/deps/npm/html/partial/doc/api/npm-publish.html @@ -0,0 +1,26 @@ +

    npm-publish

    Publish a package

    +

    SYNOPSIS

    +
    npm.commands.publish([packages,] callback)
    +

    DESCRIPTION

    +

    Publishes a package to the registry so that it can be installed by name. +Possible values in the 'packages' array are:

    +
      +
    • <folder>: +A folder containing a package.json file

      +
    • +
    • <tarball>: +A url or file path to a gzipped tar archive containing a single folder +with a package.json file inside.

      +
    • +
    +

    If the package array is empty, npm will try to publish something in the +current working directory.

    +

This command will fail if one of the packages specified already exists in the registry. It overwrites when the "force" environment variable is set.
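For example, a sketch that publishes the package in the current working directory (this assumes you have already run adduser):

var npm = require("npm")

npm.load(function (er) {
  if (er) throw er
  // an empty array means "publish whatever is in the cwd"
  npm.commands.publish([], function (er) {
    if (er) return console.error("publish failed:", er)
    console.log("published")
  })
})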

    +

    SEE ALSO

    + + diff --git a/deps/npm/html/partial/doc/api/npm-rebuild.html b/deps/npm/html/partial/doc/api/npm-rebuild.html new file mode 100644 index 00000000000..e428728a617 --- /dev/null +++ b/deps/npm/html/partial/doc/api/npm-rebuild.html @@ -0,0 +1,10 @@ +

    npm-rebuild

    Rebuild a package

    +

    SYNOPSIS

    +
    npm.commands.rebuild([packages,] callback)
    +

    DESCRIPTION

    +

This command runs the npm build command on each of the matched packages. This is useful when you install a new version of node, and must recompile all your C++ addons with the new binary. If no 'packages' parameter is specified, every package will be rebuilt.

    +

    CONFIGURATION

    +

    See npm help build

    + diff --git a/deps/npm/html/partial/doc/api/npm-repo.html b/deps/npm/html/partial/doc/api/npm-repo.html new file mode 100644 index 00000000000..9a18976cd41 --- /dev/null +++ b/deps/npm/html/partial/doc/api/npm-repo.html @@ -0,0 +1,13 @@ +

    npm-repo

    Open package repository page in the browser

    +

    SYNOPSIS

    +
    npm.commands.repo(package, callback)
    +

    DESCRIPTION

    +

    This command tries to guess at the likely location of a package's +repository URL, and then tries to open it using the --browser +config param.

    +

    Like other commands, the first parameter is an array. This command only +uses the first element, which is expected to be a package name with an +optional version number.

    +

    This command will launch a browser, so this command may not be the most +friendly for programmatic use.

    + diff --git a/deps/npm/html/partial/doc/api/npm-restart.html b/deps/npm/html/partial/doc/api/npm-restart.html new file mode 100644 index 00000000000..35db404d785 --- /dev/null +++ b/deps/npm/html/partial/doc/api/npm-restart.html @@ -0,0 +1,16 @@ +

    npm-restart

Restart a package

    +

    SYNOPSIS

    +
    npm.commands.restart(packages, callback)
    +

    DESCRIPTION

    +

This runs a package's "restart" script, if one was provided. Otherwise it runs the package's "stop" script, if one was provided, and then the "start" script.

    +

    If no version is specified, then it restarts the "active" version.

    +

npm can restart multiple packages. Just specify multiple packages in the packages parameter.

    +

    SEE ALSO

    + + diff --git a/deps/npm/html/partial/doc/api/npm-root.html b/deps/npm/html/partial/doc/api/npm-root.html new file mode 100644 index 00000000000..1549515122e --- /dev/null +++ b/deps/npm/html/partial/doc/api/npm-root.html @@ -0,0 +1,9 @@ +

    npm-root

    Display npm root

    +

    SYNOPSIS

    +
    npm.commands.root(args, callback)
    +

    DESCRIPTION

    +

    Print the effective node_modules folder to standard out.

    +

    'args' is never used and callback is never called with data. +'args' must be present or things will break.

    +

    This function is not useful programmatically.

    + diff --git a/deps/npm/html/partial/doc/api/npm-run-script.html b/deps/npm/html/partial/doc/api/npm-run-script.html new file mode 100644 index 00000000000..7cc42b601ad --- /dev/null +++ b/deps/npm/html/partial/doc/api/npm-run-script.html @@ -0,0 +1,21 @@ +

    npm-run-script

    Run arbitrary package scripts

    +

    SYNOPSIS

    +
npm.commands["run-script"](args, callback)
    +

    DESCRIPTION

    +

    This runs an arbitrary command from a package's "scripts" object.

    +

    It is used by the test, start, restart, and stop commands, but can be +called directly, as well.

    +

    The 'args' parameter is an array of strings. Behavior depends on the number +of elements. If there is only one element, npm assumes that the element +represents a command to be run on the local repository. If there is more than +one element, then the first is assumed to be the package and the second is +assumed to be the command to run. All other elements are ignored.
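For example (the script and package names are hypothetical):

var npm = require("npm")

npm.load(function (er) {
  if (er) throw er
  // one element: run the "lint" script of the local package
  npm.commands["run-script"](["lint"], function (er) {
    if (er) throw er
    // two elements: run the "test" script of the named package
    npm.commands["run-script"](["somepackage", "test"], function (er) {
      if (er) throw er
    })
  })
})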

    +

    SEE ALSO

    + + diff --git a/deps/npm/html/partial/doc/api/npm-search.html b/deps/npm/html/partial/doc/api/npm-search.html new file mode 100644 index 00000000000..13cceb3d321 --- /dev/null +++ b/deps/npm/html/partial/doc/api/npm-search.html @@ -0,0 +1,33 @@ +

    npm-search

    Search for packages

    +

    SYNOPSIS

    +
    npm.commands.search(searchTerms, [silent,] [staleness,] callback)
    +

    DESCRIPTION

    +

    Search the registry for packages matching the search terms. The available parameters are:

    +
      +
    • searchTerms: +Array of search terms. These terms are case-insensitive.
    • +
    • silent: +If true, npm will not log anything to the console.
    • +
    • staleness: +This is the threshold for stale packages. "Fresh" packages are not refreshed +from the registry. This value is measured in seconds.
    • +
    • callback: +Returns an object where each key is the name of a package, and the value +is information about that package along with a 'words' property, which is +a space-delimited string of all of the interesting words in that package. +The only properties included are those that are searched, which generally include:

      +
        +
      • name
      • +
      • description
      • +
      • maintainers
      • +
      • url
      • +
      • keywords
      • +
      +
    • +
    +

    A search on the registry excludes any result that does not match all of the +search terms. It also removes any items from the results that contain an +excluded term (the "searchexclude" config). The search is case insensitive +and doesn't try to read your mind (it doesn't do any verb tense matching or the +like).
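A sketch using all of the optional parameters (the staleness value is arbitrary):

var npm = require("npm")

npm.load(function (er) {
  if (er) throw er
  // silent = true, staleness = 600 seconds
  npm.commands.search(["web", "framework"], true, 600, function (er, results) {
    if (er) throw er
    Object.keys(results).forEach(function (name) {
      console.log(name + ": " + results[name].description)
    })
  })
})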

    + diff --git a/deps/npm/html/partial/doc/api/npm-shrinkwrap.html b/deps/npm/html/partial/doc/api/npm-shrinkwrap.html new file mode 100644 index 00000000000..b5f33599989 --- /dev/null +++ b/deps/npm/html/partial/doc/api/npm-shrinkwrap.html @@ -0,0 +1,13 @@ +

    npm-shrinkwrap

    programmatically generate package shrinkwrap file

    +

    SYNOPSIS

    +
    npm.commands.shrinkwrap(args, [silent,] callback)
    +

    DESCRIPTION

    +

This acts much the same way as shrinkwrapping on the command-line.

    +

    This command does not take any arguments, but 'args' must be defined. +Beyond that, if any arguments are passed in, npm will politely warn that it +does not take positional arguments.

    +

    If the 'silent' parameter is set to true, nothing will be output to the screen, +but the shrinkwrap file will still be written.

    +

    Finally, 'callback' is a function that will be called when the shrinkwrap has +been saved.
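A minimal sketch:

var npm = require("npm")

npm.load(function (er) {
  if (er) throw er
  // args must be defined even though shrinkwrap takes no positional arguments
  npm.commands.shrinkwrap([], true, function (er) {
    if (er) throw er
    console.log("shrinkwrap file written")
  })
})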

    + diff --git a/deps/npm/html/partial/doc/api/npm-start.html b/deps/npm/html/partial/doc/api/npm-start.html new file mode 100644 index 00000000000..2eae8ba0f59 --- /dev/null +++ b/deps/npm/html/partial/doc/api/npm-start.html @@ -0,0 +1,8 @@ +

    npm-start

    Start a package

    +

    SYNOPSIS

    +
    npm.commands.start(packages, callback)
    +

    DESCRIPTION

    +

    This runs a package's "start" script, if one was provided.

    +

npm can start multiple packages. Just specify multiple packages in the packages parameter.

    + diff --git a/deps/npm/html/partial/doc/api/npm-stop.html b/deps/npm/html/partial/doc/api/npm-stop.html new file mode 100644 index 00000000000..5b58289ed84 --- /dev/null +++ b/deps/npm/html/partial/doc/api/npm-stop.html @@ -0,0 +1,8 @@ +

    npm-stop

    Stop a package

    +

    SYNOPSIS

    +
    npm.commands.stop(packages, callback)
    +

    DESCRIPTION

    +

    This runs a package's "stop" script, if one was provided.

    +

    npm can run stop on multiple packages. Just specify multiple packages +in the packages parameter.

    + diff --git a/deps/npm/html/partial/doc/api/npm-submodule.html b/deps/npm/html/partial/doc/api/npm-submodule.html new file mode 100644 index 00000000000..669841402f6 --- /dev/null +++ b/deps/npm/html/partial/doc/api/npm-submodule.html @@ -0,0 +1,22 @@ +

    npm-submodule

    Add a package as a git submodule

    +

    SYNOPSIS

    +
    npm.commands.submodule(packages, callback)
    +

    DESCRIPTION

    +

For each package specified, npm will check if it has a git repository url in its package.json description, and then add it as a git submodule at node_modules/<pkg name>.

    +

    This is a convenience only. From then on, it's up to you to manage +updates by using the appropriate git commands. npm will stubbornly +refuse to update, modify, or remove anything with a .git subfolder +in it.

    +

This command also does not install missing dependencies if the package does not include them in its git repository. If npm ls reports that things are missing, you can either install, link, or submodule them yourself, or you can do npm explore <pkgname> -- npm install to install the dependencies into the submodule folder.

    +

    SEE ALSO

    +
      +
    • npm help json
    • +
    • git help submodule
    • +
    + diff --git a/deps/npm/html/partial/doc/api/npm-tag.html b/deps/npm/html/partial/doc/api/npm-tag.html new file mode 100644 index 00000000000..f288fc15cfd --- /dev/null +++ b/deps/npm/html/partial/doc/api/npm-tag.html @@ -0,0 +1,16 @@ +

    npm-tag

    Tag a published version

    +

    SYNOPSIS

    +
    npm.commands.tag(package@version, tag, callback)
    +

    DESCRIPTION

    +

    Tags the specified version of the package with the specified tag, or the +--tag config if not specified.

    +

    The 'package@version' is an array of strings, but only the first two elements are +currently used.

    +

    The first element must be in the form package@version, where package +is the package name and version is the version number (much like installing a +specific version).

    +

The second element is the name of the tag to tag this version with. If this parameter is missing or falsey (empty), the default from the config will be used. For more information about how to set this config, check man 3 npm-config for programmatic usage or man npm-config for cli usage.
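For example (the package name, version, and tag are hypothetical):

var npm = require("npm")

npm.load(function (er) {
  if (er) throw er
  // first element: package@version; second element: the tag to apply
  npm.commands.tag(["somepackage@1.2.3", "stable"], function (er) {
    if (er) throw er
    console.log("tagged")
  })
})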

    + diff --git a/deps/npm/html/partial/doc/api/npm-test.html b/deps/npm/html/partial/doc/api/npm-test.html new file mode 100644 index 00000000000..9c35bdcfdb8 --- /dev/null +++ b/deps/npm/html/partial/doc/api/npm-test.html @@ -0,0 +1,10 @@ +

    npm-test

    Test a package

    +

    SYNOPSIS

    +
npm.commands.test(packages, callback)
    +

    DESCRIPTION

    +

    This runs a package's "test" script, if one was provided.

    +

    To run tests as a condition of installation, set the npat config to +true.

    +

    npm can run tests on multiple packages. Just specify multiple packages +in the packages parameter.

    + diff --git a/deps/npm/html/partial/doc/api/npm-uninstall.html b/deps/npm/html/partial/doc/api/npm-uninstall.html new file mode 100644 index 00000000000..62369e4c7c6 --- /dev/null +++ b/deps/npm/html/partial/doc/api/npm-uninstall.html @@ -0,0 +1,10 @@ +

    npm-uninstall

    uninstall a package programmatically

    +

    SYNOPSIS

    +
    npm.commands.uninstall(packages, callback)
    +

    DESCRIPTION

    +

This acts much the same way as uninstalling on the command-line.

    +

    The 'packages' parameter is an array of strings. Each element in the array is +the name of a package to be uninstalled.

    +

    Finally, 'callback' is a function that will be called when all packages have been +uninstalled or when an error has been encountered.

    + diff --git a/deps/npm/html/partial/doc/api/npm-unpublish.html b/deps/npm/html/partial/doc/api/npm-unpublish.html new file mode 100644 index 00000000000..ed9948cd849 --- /dev/null +++ b/deps/npm/html/partial/doc/api/npm-unpublish.html @@ -0,0 +1,13 @@ +

    npm-unpublish

    Remove a package from the registry

    +

    SYNOPSIS

    +
    npm.commands.unpublish(package, callback)
    +

    DESCRIPTION

    +

    This removes a package version from the registry, deleting its +entry and removing the tarball.

    +

    The package parameter must be defined.

    +

    Only the first element in the package parameter is used. If there is no first +element, then npm assumes that the package at the current working directory +is what is meant.

    +

If no version is specified, or if all versions are removed, then the root package entry is removed from the registry entirely.

    + diff --git a/deps/npm/html/partial/doc/api/npm-update.html b/deps/npm/html/partial/doc/api/npm-update.html new file mode 100644 index 00000000000..d05771159b8 --- /dev/null +++ b/deps/npm/html/partial/doc/api/npm-update.html @@ -0,0 +1,7 @@ +

    npm-update

    Update a package

    +

    SYNOPSIS

    +
    npm.commands.update(packages, callback)
    +

    DESCRIPTION

    +

    Updates a package, upgrading it to the latest version. It also installs any missing packages.

    +

    The 'packages' argument is an array of packages to update. The 'callback' parameter will be called when done or when an error occurs.
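For example (the package names are hypothetical):

var npm = require("npm")

npm.load(function (er) {
  if (er) throw er
  npm.commands.update(["sax", "express"], function (er) {
    if (er) return console.error("update failed:", er)
    console.log("update complete")
  })
})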

    + diff --git a/deps/npm/html/partial/doc/api/npm-version.html b/deps/npm/html/partial/doc/api/npm-version.html new file mode 100644 index 00000000000..c2b8d5eb8b9 --- /dev/null +++ b/deps/npm/html/partial/doc/api/npm-version.html @@ -0,0 +1,12 @@ +

    npm-version

    Bump a package version

    +

    SYNOPSIS

    +
    npm.commands.version(newversion, callback)
    +

    DESCRIPTION

    +

    Run this in a package directory to bump the version and write the new +data back to the package.json file.

    +

    If run in a git repo, it will also create a version commit and tag, and +fail if the repo is not clean.

    +

    Like all other commands, this function takes a string array as its first +parameter. The difference, however, is this function will fail if it does +not have exactly one element. The only element should be a version number.
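For example, a sketch that bumps the current package to a hypothetical new version:

var npm = require("npm")

npm.load(function (er) {
  if (er) throw er
  // exactly one element: the new version number
  npm.commands.version(["1.0.1"], function (er) {
    if (er) throw er
    console.log("version written to package.json")
  })
})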

    + diff --git a/deps/npm/html/partial/doc/api/npm-view.html b/deps/npm/html/partial/doc/api/npm-view.html new file mode 100644 index 00000000000..4f5acf62439 --- /dev/null +++ b/deps/npm/html/partial/doc/api/npm-view.html @@ -0,0 +1,61 @@ +

    npm-view

    View registry info

    +

    SYNOPSIS

    +
    npm.commands.view(args, [silent,] callback)
    +

    DESCRIPTION

    +

    This command shows data about a package and prints it to the stream +referenced by the outfd config, which defaults to stdout.

    +

    The "args" parameter is an ordered list that closely resembles the command-line +usage. The elements should be ordered such that the first element is +the package and version (package@version). The version is optional. After that, +the rest of the parameters are fields with optional subfields ("field.subfield") +which can be used to get only the information desired from the registry.

    +

    The callback will be passed all of the data returned by the query.

    +

    For example, to get the package registry entry for the connect package, +you can do this:

    +
    npm.commands.view(["connect"], callback)
    +

    If no version is specified, "latest" is assumed.

    +

    Field names can be specified after the package descriptor. +For example, to show the dependencies of the ronn package at version +0.3.5, you could do the following:

    +
    npm.commands.view(["ronn@0.3.5", "dependencies"], callback)
    +

You can view child fields by separating them with a period. To view the git repository URL for the latest version of npm, you could do this:

    +
    npm.commands.view(["npm", "repository.url"], callback)
    +

For fields that are arrays, requesting a non-numeric field will return all of the values from the objects in the list. For example, to get all the contributor email addresses for the "express" project, you can do this:

    +
    npm.commands.view(["express", "contributors.email"], callback)
    +

    You may also use numeric indices in square braces to specifically select +an item in an array field. To just get the email address of the first +contributor in the list, you can do this:

    +
    npm.commands.view(["express", "contributors[0].email"], callback)
    +

Multiple fields may be specified, and will be printed one after another. For example, to get all the contributor names and email addresses, you can do this:

    +
    npm.commands.view(["express", "contributors.name", "contributors.email"], callback)
    +

    "Person" fields are shown as a string if they would be shown as an +object. So, for example, this will show the list of npm contributors in +the shortened string format. (See npm help json for more on this.)

    +
    npm.commands.view(["npm", "contributors"], callback)
    +

    If a version range is provided, then data will be printed for every +matching version of the package. This will show which version of jsdom +was required by each matching version of yui3:

    +
    npm.commands.view(["yui3@'>0.5.4'", "dependencies.jsdom"], callback)
    +

    OUTPUT

    +

    If only a single string field for a single version is output, then it +will not be colorized or quoted, so as to enable piping the output to +another command.

    +

If the version range matches multiple versions, then each printed value will be prefixed with the version it applies to.

    +

If multiple fields are requested, then each of them is prefixed with the field name.

    +

    Console output can be disabled by setting the 'silent' parameter to true.

    +

    RETURN VALUE

    +

The data returned will be an object in this format:

    +
    { <version>:
    +  { <field>: <value>
    +  , ... }
    +, ... }
    +

    corresponding to the list of fields selected.

    + diff --git a/deps/npm/html/partial/doc/api/npm-whoami.html b/deps/npm/html/partial/doc/api/npm-whoami.html new file mode 100644 index 00000000000..3428a9e7677 --- /dev/null +++ b/deps/npm/html/partial/doc/api/npm-whoami.html @@ -0,0 +1,9 @@ +

    npm-whoami

    Display npm username

    +

    SYNOPSIS

    +
    npm.commands.whoami(args, callback)
    +

    DESCRIPTION

    +

    Print the username config to standard output.

    +

    'args' is never used and callback is never called with data. +'args' must be present or things will break.

    +

This function is not useful programmatically.

    + diff --git a/deps/npm/html/partial/doc/api/npm.html b/deps/npm/html/partial/doc/api/npm.html new file mode 100644 index 00000000000..dbd481b380a --- /dev/null +++ b/deps/npm/html/partial/doc/api/npm.html @@ -0,0 +1,89 @@ +

    npm

    node package manager

    +

    SYNOPSIS

    +
    var npm = require("npm")
    +npm.load([configObject, ]function (er, npm) {
    +  // use the npm object, now that it's loaded.
    +
    +  npm.config.set(key, val)
    +  val = npm.config.get(key)
    +
    +  console.log("prefix = %s", npm.prefix)
    +
    +  npm.commands.install(["package"], cb)
    +})
    +

    VERSION

    +

    2.1.6

    +

    DESCRIPTION

    +

    This is the API documentation for npm. +To find documentation of the command line +client, see npm(1).

    +

    Prior to using npm's commands, npm.load() must be called. If you provide +configObject as an object map of top-level configs, they override the values +stored in the various config locations. In the npm command line client, this +set of configs is parsed from the command line options. Additional +configuration params are loaded from two configuration files. See +npm-config(1), npm-config(7), and npmrc(5) for more information.

    +

    After that, each of the functions are accessible in the +commands object: npm.commands.<cmd>. See npm-index(7) for a list of +all possible commands.

    +

    All commands on the command object take an array of positional argument +strings. The last argument to any function is a callback. Some +commands take other optional arguments.

    +

    Configs cannot currently be set on a per function basis, as each call to +npm.config.set will change the value for all npm commands in that process.

    +

    To find API documentation for a specific command, run the npm apihelp +command.

    +

    METHODS AND PROPERTIES

    +
      +
    • npm.load(configs, cb)

      +

      Load the configuration params, and call the cb function once the + globalconfig and userconfig files have been loaded as well, or on + nextTick if they've already been loaded.

      +
    • +
    • npm.config

      +

      An object for accessing npm configuration parameters.

      +
        +
      • npm.config.get(key)
      • +
      • npm.config.set(key, val)
      • +
      • npm.config.del(key)
      • +
      +
    • +
    • npm.dir or npm.root

      +

      The node_modules directory where npm will operate.

      +
    • +
    • npm.prefix

      +

      The prefix where npm is operating. (Most often the current working + directory.)

      +
    • +
    • npm.cache

      +

      The place where npm keeps JSON and tarballs it fetches from the + registry (or uploads to the registry).

      +
    • +
    • npm.tmp

      +

      npm's temporary working directory.

      +
    • +
    • npm.deref

      +

      Get the "real" name for a command that has either an alias or + abbreviation.

      +
    • +
    +

    MAGIC

    +

    For each of the methods in the npm.commands object, a method is added to the +npm object, which takes a set of positional string arguments rather than an +array and a callback.

    +

    If the last argument is a callback, then it will use the supplied +callback. However, if no callback is provided, then it will print out +the error or results.

    +

    For example, this would work in a node repl:

    +
    > npm = require("npm")
    +> npm.load()  // wait a sec...
    +> npm.install("dnode", "express")
    +

    Note that that won't work in a node program, since the install +method will get called before the configuration load is completed.

    +

    ABBREVS

    +

    In order to support npm ins foo instead of npm install foo, the +npm.commands object has a set of abbreviations as well as the full +method names. Use the npm.deref method to find the real name.

    +

    For example:

    +
    var cmd = npm.deref("unp") // cmd === "unpublish"
    +
    diff --git a/deps/npm/html/partial/doc/cli/npm-adduser.html b/deps/npm/html/partial/doc/cli/npm-adduser.html new file mode 100644 index 00000000000..ac9fa0086ca --- /dev/null +++ b/deps/npm/html/partial/doc/cli/npm-adduser.html @@ -0,0 +1,47 @@ +

    npm-adduser

    Add a registry user account

    +

    SYNOPSIS

    +
    npm adduser [--registry=url] [--scope=@orgname] [--always-auth]
    +

    DESCRIPTION

    +

    Create or verify a user named <username> in the specified registry, and +save the credentials to the .npmrc file. If no registry is specified, +the default registry will be used (see npm-config(7)).

    +

    The username, password, and email are read in from prompts.

    +

    You may use this command to change your email address, but not username +or password.

    +

    To reset your password, go to https://www.npmjs.org/forgot

    +

    You may use this command multiple times with the same user account to +authorize on a new machine.

    +

    npm login is an alias to adduser and behaves exactly the same way.

    +

    CONFIGURATION

    +

    registry

    +

    Default: http://registry.npmjs.org/

    +

    The base URL of the npm package registry. If scope is also specified, +this registry will only be used for packages with that scope. See npm-scope(7).

    +

    scope

    +

    Default: none

    +

    If specified, the user and login credentials given will be associated +with the specified scope. See npm-scope(7). You can use both at the same time, +e.g.

    +
    npm adduser --registry=http://myregistry.example.com --scope=@myco
    +

    This will set a registry for the given scope and login or create a user for +that registry at the same time.

    +

    always-auth

    +

    Default: false

    +

    If specified, save configuration indicating that all requests to the given +registry should include authorization information. Useful for private +registries. Can be used with --registry and / or --scope, e.g.

    +
    npm adduser --registry=http://private-registry.example.com --always-auth
    +

This will ensure that all requests to that registry (including for tarballs) include an authorization header. See always-auth in npm-config(7) for more details on always-auth. Registry-specific configuration of always-auth takes precedence over any global configuration.

    +

    SEE ALSO

    + + diff --git a/deps/npm/html/partial/doc/cli/npm-bin.html b/deps/npm/html/partial/doc/cli/npm-bin.html new file mode 100644 index 00000000000..1485681b5f6 --- /dev/null +++ b/deps/npm/html/partial/doc/cli/npm-bin.html @@ -0,0 +1,15 @@ +

    npm-bin

    Display npm bin folder

    +

    SYNOPSIS

    +
    npm bin
    +

    DESCRIPTION

    +

    Print the folder where npm will install executables.

    +

    SEE ALSO

    + + diff --git a/deps/npm/html/partial/doc/cli/npm-bugs.html b/deps/npm/html/partial/doc/cli/npm-bugs.html new file mode 100644 index 00000000000..d40152e3884 --- /dev/null +++ b/deps/npm/html/partial/doc/cli/npm-bugs.html @@ -0,0 +1,34 @@ +

    npm-bugs

    Bugs for a package in a web browser maybe

    +

    SYNOPSIS

    +
    npm bugs <pkgname>
    +npm bugs (with no args in a package dir)
    +

    DESCRIPTION

    +

    This command tries to guess at the likely location of a package's +bug tracker URL, and then tries to open it using the --browser +config param. If no package name is provided, it will search for +a package.json in the current folder and use the name property.

    +

    CONFIGURATION

    +

    browser

    +
      +
    • Default: OS X: "open", Windows: "start", Others: "xdg-open"
    • +
    • Type: String
    • +
    +

    The browser that is called by the npm bugs command to open websites.

    +

    registry

    + +

    The base URL of the npm package registry.

    +

    SEE ALSO

    + + diff --git a/deps/npm/html/partial/doc/cli/npm-build.html b/deps/npm/html/partial/doc/cli/npm-build.html new file mode 100644 index 00000000000..51f2e32960e --- /dev/null +++ b/deps/npm/html/partial/doc/cli/npm-build.html @@ -0,0 +1,18 @@ +

    npm-build

    Build a package

    +

    SYNOPSIS

    +
    npm build <package-folder>
    +
      +
    • <package-folder>: +A folder containing a package.json file in its root.
    • +
    +

    DESCRIPTION

    +

    This is the plumbing command called by npm link and npm install.

    +

    It should generally not be called directly.

    +

    SEE ALSO

    + + diff --git a/deps/npm/html/partial/doc/cli/npm-bundle.html b/deps/npm/html/partial/doc/cli/npm-bundle.html new file mode 100644 index 00000000000..38bbdd83e38 --- /dev/null +++ b/deps/npm/html/partial/doc/cli/npm-bundle.html @@ -0,0 +1,11 @@ +

    npm-bundle

    REMOVED

    +

    DESCRIPTION

    +

    The npm bundle command has been removed in 1.0, for the simple reason +that it is no longer necessary, as the default behavior is now to +install packages into the local space.

    +

    Just use npm install now to do what npm bundle used to do.

    +

    SEE ALSO

    + + diff --git a/deps/npm/html/partial/doc/cli/npm-cache.html b/deps/npm/html/partial/doc/cli/npm-cache.html new file mode 100644 index 00000000000..f1a3b189643 --- /dev/null +++ b/deps/npm/html/partial/doc/cli/npm-cache.html @@ -0,0 +1,61 @@ +

    npm-cache

    Manipulates packages cache

    +

    SYNOPSIS

    +
    npm cache add <tarball file>
    +npm cache add <folder>
    +npm cache add <tarball url>
    +npm cache add <name>@<version>
    +
    +npm cache ls [<path>]
    +
    +npm cache clean [<path>]
    +

    DESCRIPTION

    +

    Used to add, list, or clear the npm cache folder.

    +
      +
    • add: +Add the specified package to the local cache. This command is primarily +intended to be used internally by npm, but it can provide a way to +add data to the local installation cache explicitly.

      +
    • +
    • ls: +Show the data in the cache. Argument is a path to show in the cache +folder. Works a bit like the find program, but limited by the +depth config.

      +
    • +
    • clean: +Delete data out of the cache folder. If an argument is provided, then +it specifies a subpath to delete. If no argument is provided, then +the entire cache is cleared.

      +
    • +
    +

    DETAILS

    +

    npm stores cache data in the directory specified in npm config get cache. +For each package that is added to the cache, three pieces of information are +stored in {cache}/{name}/{version}:

    +
      +
    • .../package/package.json: +The package.json file, as npm sees it.
    • +
    • .../package.tgz: +The tarball for that version.
    • +
    +

    Additionally, whenever a registry request is made, a .cache.json file +is placed at the corresponding URI, to store the ETag and the requested +data. This is stored in {cache}/{hostname}/{path}/.cache.json.

    +

    Commands that make non-essential registry requests (such as search and +view, or the completion scripts) generally specify a minimum timeout. +If the .cache.json file is younger than the specified timeout, then +they do not make an HTTP request to the registry.

    +

    CONFIGURATION

    +

    cache

    +

    Default: ~/.npm on Posix, or %AppData%/npm-cache on Windows.

    +

    The root cache folder.

    +

    SEE ALSO

    + + diff --git a/deps/npm/html/partial/doc/cli/npm-completion.html b/deps/npm/html/partial/doc/cli/npm-completion.html new file mode 100644 index 00000000000..1c9879337a5 --- /dev/null +++ b/deps/npm/html/partial/doc/cli/npm-completion.html @@ -0,0 +1,22 @@ +

    npm-completion

    Tab Completion for npm

    +

    SYNOPSIS

    +
    . <(npm completion)
    +

    DESCRIPTION

    +

    Enables tab-completion in all npm commands.

    +

    The synopsis above +loads the completions into your current shell. Adding it to +your ~/.bashrc or ~/.zshrc will make the completions available +everywhere.

    +

    You may of course also pipe the output of npm completion to a file +such as /usr/local/etc/bash_completion.d/npm if you have a system +that will read that file for you.

    +

    When COMP_CWORD, COMP_LINE, and COMP_POINT are defined in the +environment, npm completion acts in "plumbing mode", and outputs +completions based on the arguments.

    +

    SEE ALSO

    + + diff --git a/deps/npm/html/partial/doc/cli/npm-config.html b/deps/npm/html/partial/doc/cli/npm-config.html new file mode 100644 index 00000000000..3fee266c1c0 --- /dev/null +++ b/deps/npm/html/partial/doc/cli/npm-config.html @@ -0,0 +1,46 @@ +

    npm-config

    Manage the npm configuration files

    +

    SYNOPSIS

    +
    npm config set <key> <value> [--global]
    +npm config get <key>
    +npm config delete <key>
    +npm config list
    +npm config edit
    +npm c [set|get|delete|list]
    +npm get <key>
    +npm set <key> <value> [--global]
    +

    DESCRIPTION

    +

    npm gets its config settings from the command line, environment +variables, npmrc files, and in some cases, the package.json file.

    +

    See npmrc(5) for more information about the npmrc files.

    +

    See npm-config(7) for a more thorough discussion of the mechanisms +involved.

    +

    The npm config command can be used to update and edit the contents +of the user and global npmrc files.

    +

    Sub-commands

    +

    Config supports the following sub-commands:

    +

    set

    +
    npm config set key value
    +

    Sets the config key to the value.

    +

    If value is omitted, then it sets it to "true".

    +

    get

    +
    npm config get key
    +

    Echo the config value to stdout.

    +

    list

    +
    npm config list
    +

    Show all the config settings.

    +

    delete

    +
    npm config delete key
    +

    Deletes the key from all configuration files.

    +

    edit

    +
    npm config edit
    +

    Opens the config file in an editor. Use the --global flag to edit the +global config.

    +

    SEE ALSO

    + + diff --git a/deps/npm/html/partial/doc/cli/npm-dedupe.html b/deps/npm/html/partial/doc/cli/npm-dedupe.html new file mode 100644 index 00000000000..56a37c32db0 --- /dev/null +++ b/deps/npm/html/partial/doc/cli/npm-dedupe.html @@ -0,0 +1,43 @@ +

    npm-dedupe

    Reduce duplication

    +

    SYNOPSIS

    +
    npm dedupe [package names...]
    +npm ddp [package names...]
    +

    DESCRIPTION

    +

    Searches the local package tree and attempts to simplify the overall +structure by moving dependencies further up the tree, where they can +be more effectively shared by multiple dependent packages.

    +

    For example, consider this dependency graph:

    +
    a
    ++-- b <-- depends on c@1.0.x
    +|   `-- c@1.0.3
    +`-- d <-- depends on c@~1.0.9
    +    `-- c@1.0.10
    +

    In this case, npm-dedupe(1) will transform the tree to:

    +
    a
    ++-- b
    ++-- d
    +`-- c@1.0.10
    +

    Because of the hierarchical nature of node's module lookup, b and d +will both get their dependency met by the single c package at the root +level of the tree.

    +

    If a suitable version exists at the target location in the tree +already, then it will be left untouched, but the other duplicates will +be deleted.

    +

    If no suitable version can be found, then a warning is printed, and +nothing is done.

    +

    If any arguments are supplied, then they are filters, and only the +named packages will be touched.

    +

    Note that this operation transforms the dependency tree, and may +result in packages getting updated versions, perhaps from the npm +registry.

    +

    This feature is experimental, and may change in future versions.

    +

    The --tag argument will apply to all of the affected dependencies. If a +tag with the given name exists, the tagged version is preferred over newer +versions.

    +

    SEE ALSO

    + + diff --git a/deps/npm/html/partial/doc/cli/npm-deprecate.html b/deps/npm/html/partial/doc/cli/npm-deprecate.html new file mode 100644 index 00000000000..0657facd8ef --- /dev/null +++ b/deps/npm/html/partial/doc/cli/npm-deprecate.html @@ -0,0 +1,18 @@ +

    npm-deprecate

    Deprecate a version of a package

    +

    SYNOPSIS

    +
    npm deprecate <name>[@<version>] <message>
    +

    DESCRIPTION

    +

    This command will update the npm registry entry for a package, providing +a deprecation warning to all who attempt to install it.

    +

    It works on version ranges as well as specific versions, so you can do +something like this:

    +
    npm deprecate my-thing@"< 0.2.3" "critical bug fixed in v0.2.3"
    +

    Note that you must be the package owner to deprecate something. See the +owner and adduser help topics.

    +

    To un-deprecate a package, specify an empty string ("") for the message argument.

    +

    SEE ALSO

    + + diff --git a/deps/npm/html/partial/doc/cli/npm-docs.html b/deps/npm/html/partial/doc/cli/npm-docs.html new file mode 100644 index 00000000000..3866ff1a0a4 --- /dev/null +++ b/deps/npm/html/partial/doc/cli/npm-docs.html @@ -0,0 +1,36 @@ +

    npm-docs

    Docs for a package in a web browser maybe

    +

    SYNOPSIS

    +
    npm docs [<pkgname> [<pkgname> ...]]
    +npm docs (with no args in a package dir)
    +npm home [<pkgname> [<pkgname> ...]]
    +npm home (with no args in a package dir)
    +

    DESCRIPTION

    +

    This command tries to guess at the likely location of a package's +documentation URL, and then tries to open it using the --browser +config param. You can pass multiple package names at once. If no +package name is provided, it will search for a package.json in +the current folder and use the name property.

    +

    CONFIGURATION

    +

    browser

    +
      +
    • Default: OS X: "open", Windows: "start", Others: "xdg-open"
    • +
    • Type: String
    • +
    +

    The browser that is called by the npm docs command to open websites.

    +

    registry

    + +

    The base URL of the npm package registry.

    +

    SEE ALSO

    + + diff --git a/deps/npm/html/partial/doc/cli/npm-edit.html b/deps/npm/html/partial/doc/cli/npm-edit.html new file mode 100644 index 00000000000..82b75ad7f3c --- /dev/null +++ b/deps/npm/html/partial/doc/cli/npm-edit.html @@ -0,0 +1,29 @@ +

    npm-edit

    Edit an installed package

    +

    SYNOPSIS

    +
    npm edit <name>[@<version>]
    +

    DESCRIPTION

    +

    Opens the package folder in the default editor (or whatever you've +configured as the npm editor config -- see npm-config(7).)

    +

    After it has been edited, the package is rebuilt so as to pick up any +changes in compiled packages.

    +

    For instance, you can do npm install connect to install connect +into your package, and then npm edit connect to make a few +changes to your locally installed copy.

    +

    CONFIGURATION

    +

    editor

    +
      +
    • Default: EDITOR environment variable if set, or "vi" on Posix, +or "notepad" on Windows.
    • +
    • Type: path
    • +
    +

    The command to run for npm edit or npm config edit.

    +

    SEE ALSO

    + + diff --git a/deps/npm/html/partial/doc/cli/npm-explore.html b/deps/npm/html/partial/doc/cli/npm-explore.html new file mode 100644 index 00000000000..fe2fbd494ca --- /dev/null +++ b/deps/npm/html/partial/doc/cli/npm-explore.html @@ -0,0 +1,29 @@ +

    npm-explore

    Browse an installed package

    +

    SYNOPSIS

    +
    npm explore <name> [ -- <cmd>]
    +

    DESCRIPTION

    +

    Spawn a subshell in the directory of the installed package specified.

    +

    If a command is specified, then it is run in the subshell, which then +immediately terminates.

    +

    This is particularly handy in the case of git submodules in the +node_modules folder:

    +
    npm explore some-dependency -- git pull origin master
    +

    Note that the package is not automatically rebuilt afterwards, so be +sure to use npm rebuild <pkg> if you make any changes.

    +

    CONFIGURATION

    +

    shell

    +
      +
    • Default: SHELL environment variable, or "bash" on Posix, or "cmd" on +Windows
    • +
    • Type: path
    • +
    +

    The shell to run for the npm explore command.

    +

    SEE ALSO

    + + diff --git a/deps/npm/html/partial/doc/cli/npm-help-search.html b/deps/npm/html/partial/doc/cli/npm-help-search.html new file mode 100644 index 00000000000..afd8fb47313 --- /dev/null +++ b/deps/npm/html/partial/doc/cli/npm-help-search.html @@ -0,0 +1,26 @@ +

    npm-help-search

    Search npm help documentation

    +

    SYNOPSIS

    +
    npm help-search some search terms
    +

    DESCRIPTION

    +

    This command will search the npm markdown documentation files for the +terms provided, and then list the results, sorted by relevance.

    +

    If only one result is found, then it will show that help topic.

    +

    If the argument to npm help is not a known help topic, then it will +call help-search. It is rarely if ever necessary to call this +command directly.

    +

    CONFIGURATION

    +

    long

    +
      +
    • Type: Boolean
    • +
• Default: false
    • +
    +

    If true, the "long" flag will cause help-search to output context around +where the terms were found in the documentation.

    +

    If false, then help-search will just list out the help topics found.

    +

    SEE ALSO

    + + diff --git a/deps/npm/html/partial/doc/cli/npm-help.html b/deps/npm/html/partial/doc/cli/npm-help.html new file mode 100644 index 00000000000..4217b8447c5 --- /dev/null +++ b/deps/npm/html/partial/doc/cli/npm-help.html @@ -0,0 +1,32 @@ +

    npm-help

    Get help on npm

    +

    SYNOPSIS

    +
    npm help <topic>
    +npm help some search terms
    +

    DESCRIPTION

    +

    If supplied a topic, then show the appropriate documentation page.

    +

    If the topic does not exist, or if multiple terms are provided, then run +the help-search command to find a match. Note that, if help-search +finds a single subject, then it will run help on that topic, so unique +matches are equivalent to specifying a topic name.

    +

    CONFIGURATION

    +

    viewer

    +
      +
    • Default: "man" on Posix, "browser" on Windows
    • +
    • Type: path
    • +
    +

    The program to use to view help content.

    +

    Set to "browser" to view html help content in the default web browser.

    +

    SEE ALSO

    + + diff --git a/deps/npm/html/partial/doc/cli/npm-init.html b/deps/npm/html/partial/doc/cli/npm-init.html new file mode 100644 index 00000000000..4f41ea88e1e --- /dev/null +++ b/deps/npm/html/partial/doc/cli/npm-init.html @@ -0,0 +1,20 @@ +

    npm-init

    Interactively create a package.json file

    +

    SYNOPSIS

    +
    npm init [-f|--force|-y|--yes]
    +

    DESCRIPTION

    +

    This will ask you a bunch of questions, and then write a package.json for you.

    +

    It attempts to make reasonable guesses about what you want things to be set to, +and then writes a package.json file with the options you've selected.

    +

    If you already have a package.json file, it'll read that first, and default to +the options in there.

    +

    It is strictly additive, so it does not delete options from your package.json +without a really good reason to do so.

    +

    If you invoke it with -f, --force, -y, or --yes, it will use only +defaults and not prompt you for any options.

    +

    SEE ALSO

    + + diff --git a/deps/npm/html/partial/doc/cli/npm-install.html b/deps/npm/html/partial/doc/cli/npm-install.html new file mode 100644 index 00000000000..bd1932ed58c --- /dev/null +++ b/deps/npm/html/partial/doc/cli/npm-install.html @@ -0,0 +1,219 @@ +

    npm-install

    Install a package

    +

    SYNOPSIS

    +
    npm install (with no args in a package dir)
    +npm install <tarball file>
    +npm install <tarball url>
    +npm install <folder>
    +npm install [@<scope>/]<name> [--save|--save-dev|--save-optional] [--save-exact]
    +npm install [@<scope>/]<name>@<tag>
    +npm install [@<scope>/]<name>@<version>
    +npm install [@<scope>/]<name>@<version range>
    +npm i (with any of the previous argument usage)
    +

    DESCRIPTION

    +

    This command installs a package, and any packages that it depends on. If the +package has a shrinkwrap file, the installation of dependencies will be driven +by that. See npm-shrinkwrap(1).

    +

    A package is:

    +
      +
    • a) a folder containing a program described by a package.json file
    • +
    • b) a gzipped tarball containing (a)
    • +
    • c) a url that resolves to (b)
    • +
    • d) a <name>@<version> that is published on the registry (see npm-registry(7)) with (c)
    • +
    • e) a <name>@<tag> that points to (d)
    • +
    • f) a <name> that has a "latest" tag satisfying (e)
    • +
    • g) a <git remote url> that resolves to (b)
    • +
    +

    Even if you never publish your package, you can still get a lot of +benefits of using npm if you just want to write a node program (a), and +perhaps if you also want to be able to easily install it elsewhere +after packing it up into a tarball (b).

    +
      +
    • npm install (in package directory, no arguments):

      +

      Install the dependencies in the local node_modules folder.

      +

      In global mode (ie, with -g or --global appended to the command), + it installs the current package context (ie, the current working + directory) as a global package.

      +

      By default, npm install will install all modules listed as + dependencies. With the --production flag, + npm will not install modules listed in devDependencies.

      +
    • +
    • npm install <folder>:

      +

      Install a package that is sitting in a folder on the filesystem.

      +
    • +
    • npm install <tarball file>:

      +

      Install a package that is sitting on the filesystem. Note: if you just want + to link a dev directory into your npm root, you can do this more easily by + using npm link.

      +

      Example:

      +
          npm install ./package.tgz
      +
    • +
    • npm install <tarball url>:

      +

Fetch the tarball url, and then install it. In order to distinguish between this and other options, the argument must start with "http://" or "https://".

      +

      Example:

      +
          npm install https://github.com/indexzero/forever/tarball/v0.5.6
      +
    • +
    • npm install [@<scope>/]<name> [--save|--save-dev|--save-optional]:

      +

      Do a <name>@<tag> install, where <tag> is the "tag" config. (See + npm-config(7).)

      +

      In most cases, this will install the latest version + of the module published on npm.

      +

      Example:

      +
          npm install sax
      +

      npm install takes 3 exclusive, optional flags which save or update + the package version in your main package.json:

      +
        +
      • --save: Package will appear in your dependencies.

        +
      • +
      • --save-dev: Package will appear in your devDependencies.

        +
      • +
      • --save-optional: Package will appear in your optionalDependencies.

        +

        When using any of the above options to save dependencies to your +package.json, there is an additional, optional flag:

        +
      • +
      • --save-exact: Saved dependencies will be configured with an +exact version rather than using npm's default semver range +operator.

        +

        <scope> is optional. The package will be downloaded from the registry +associated with the specified scope. If no registry is associated with +the given scope the default registry is assumed. See npm-scope(7).

        +

Note: if you do not include the @-symbol on your scope name, npm will interpret this as a GitHub repository instead, see below. Scope names must also be followed by a slash.

        +

        Examples:

        +
        npm install sax --save
        +npm install githubname/reponame
        +npm install @myorg/privatepackage
        +npm install node-tap --save-dev
        +npm install dtrace-provider --save-optional
        +npm install readable-stream --save --save-exact
        +
      • +
      +
    • +
    +
Note: If there is a file or folder named <name> in the current working directory, then it will try to install that, and only try to fetch the package by name if it is not valid.
    +
      +
    • npm install [@<scope>/]<name>@<tag>:

      +

      Install the version of the package that is referenced by the specified tag. + If the tag does not exist in the registry data for that package, then this + will fail.

      +

      Example:

      +
          npm install sax@latest
      +    npm install @myorg/mypackage@latest
      +
    • +
    • npm install [@<scope>/]<name>@<version>:

      +

      Install the specified version of the package. This will fail if the + version has not been published to the registry.

      +

      Example:

      +
          npm install sax@0.1.1
      +    npm install @myorg/privatepackage@1.5.0
      +

    • npm install [@<scope>/]<name>@<version range>:

      Install a version of the package matching the specified version
      range. This will follow the same rules for resolving dependencies
      described in package.json(5).

      Note that most version ranges must be put in quotes so that your
      shell will treat them as a single argument.

      Example:

          npm install sax@">=0.1.0 <0.2.0"
          npm install @myorg/privatepackage@">=0.1.0 <0.2.0"

    • npm install <githubname>/<githubrepo>:

      Install the package at https://github.com/githubname/githubrepo by
      attempting to clone it using git.

      Example:

          npm install mygithubuser/myproject

      To reference a package in a git repo that is not on GitHub, see git
      remote urls below.

    • npm install <git remote url>:

      Install a package by cloning a git remote url. The format of the git
      url is:

          <protocol>://[<user>@]<hostname><separator><path>[#<commit-ish>]

      <protocol> is one of git, git+ssh, git+http, or git+https. If no
      <commit-ish> is specified, then master is used.

      Examples:

          git+ssh://git@github.com:npm/npm.git#v1.0.27
          git+https://isaacs@github.com/npm/npm.git
          git://github.com/npm/npm.git#v1.0.27

    You may combine multiple arguments, and even multiple types of
    arguments. For example:

        npm install sax@">=0.1.0 <0.2.0" bench supervisor

    The --tag argument will apply to all of the specified install targets.
    If a tag with the given name exists, the tagged version is preferred
    over newer versions.

    The --force argument will force npm to fetch remote resources even if
    a local copy exists on disk.

        npm install sax --force

    The --global argument will cause npm to install the package globally
    rather than locally. See npm-folders(5).

    The --link argument will cause npm to link global installs into the
    local space in some cases.

    The --no-bin-links argument will prevent npm from creating symlinks
    for any binaries the package might contain.

    The --no-optional argument will prevent optional dependencies from
    being installed.

    The --no-shrinkwrap argument will ignore an available shrinkwrap file
    and use the package.json instead.

    The --nodedir=/path/to/node/source argument will allow npm to find the
    node source code so that npm can compile native modules.

    See npm-config(7). Many of the configuration params have some effect
    on installation, since that's most of what npm does.

    ALGORITHM

    To install a package, npm uses the following algorithm:

        install(where, what, family, ancestors)
        fetch what, unpack to <where>/node_modules/<what>
        for each dep in what.dependencies
          resolve dep to precise version
        for each dep@version in what.dependencies
            not in <where>/node_modules/<what>/node_modules/*
            and not in <family>
          add precise version deps to <family>
          install(<where>/node_modules/<what>, dep, family)

    For this package{dep} structure: A{B,C}, B{C}, C{D}, this algorithm
    produces:

        A
        +-- B
        `-- C
            `-- D

    That is, the dependency from B to C is satisfied by the fact that A
    already caused C to be installed at a higher level.

    See npm-folders(5) for a more detailed description of the specific
    folder structures that npm creates.

    Limitations of npm's Install Algorithm

    There are some very rare and pathological edge-cases where a cycle can
    cause npm to try to install a never-ending tree of packages. Here is
    the simplest case:

        A -> B -> A' -> B' -> A -> B -> A' -> B' -> A -> ...

    where A is some version of a package, and A' is a different version
    of the same package. Because B depends on a different version of A
    than the one that is already in the tree, it must install a separate
    copy. The same is true of A', which must install B'. Because B'
    depends on the original version of A, which has been overridden, the
    cycle falls into infinite regress.

    To avoid this situation, npm flat-out refuses to install any
    name@version that is already present anywhere in the tree of package
    folder ancestors. A more correct, but more complex, solution would be
    to symlink the existing version into the new location. If this ever
    affects a real use-case, it will be investigated.

    SEE ALSO

diff --git a/deps/npm/html/partial/doc/cli/npm-link.html b/deps/npm/html/partial/doc/cli/npm-link.html
new file mode 100644
index 00000000000..3c832399ddb
--- /dev/null
+++ b/deps/npm/html/partial/doc/cli/npm-link.html
@@ -0,0 +1,51 @@

    npm-link

    Symlink a package folder

    SYNOPSIS

        npm link (in package folder)
        npm link [@<scope>/]<pkgname>
        npm ln (with any of the previous argument usage)

    DESCRIPTION

    Package linking is a two-step process.

    First, npm link in a package folder will create a globally-installed
    symbolic link from prefix/package-name to the current folder (see
    npm-config(7) for the value of prefix).

    Next, in some other location, npm link package-name will create a
    symlink from the local node_modules folder to the global symlink.

    Note that package-name is taken from package.json, not from the
    directory name.

    The package name can be optionally prefixed with a scope. See
    npm-scope(7). The scope must be preceded by an @-symbol and followed
    by a slash.

    When creating tarballs for npm publish, the linked packages are
    "snapshotted" to their current state by resolving the symbolic links.

    This is handy for installing your own stuff, so that you can work on
    it and test it iteratively without having to continually rebuild.

    For example:

        cd ~/projects/node-redis    # go into the package directory
        npm link                    # creates global link
        cd ~/projects/node-bloggy   # go into some other package directory.
        npm link redis              # link-install the package

    Now, any changes to ~/projects/node-redis will be reflected in
    ~/projects/node-bloggy/node_modules/redis/

    You may also shortcut the two steps in one. For example, to do the
    above use-case in a shorter way:

        cd ~/projects/node-bloggy  # go into the dir of your main project
        npm link ../node-redis     # link the dir of your dependency

    The second line is the equivalent of doing:

        (cd ../node-redis; npm link)
        npm link redis

    That is, it first creates a global link, and then links the global
    installation target into your project's node_modules folder.

    If your linked package is scoped (see npm-scope(7)) your link command
    must include that scope, e.g.

        npm link @myorg/privatepackage

    SEE ALSO

diff --git a/deps/npm/html/partial/doc/cli/npm-ls.html b/deps/npm/html/partial/doc/cli/npm-ls.html
new file mode 100644
index 00000000000..199b6002b89
--- /dev/null
+++ b/deps/npm/html/partial/doc/cli/npm-ls.html
@@ -0,0 +1,65 @@

    npm-ls

    List installed packages

    SYNOPSIS

        npm list [[@<scope>/]<pkg> ...]
        npm ls [[@<scope>/]<pkg> ...]
        npm la [[@<scope>/]<pkg> ...]
        npm ll [[@<scope>/]<pkg> ...]

    DESCRIPTION

    This command will print to stdout all the versions of packages that
    are installed, as well as their dependencies, in a tree-structure.

    Positional arguments are name@version-range identifiers, which will
    limit the results to only the paths to the packages named. Note that
    nested packages will also show the paths to the specified packages.
    For example, running npm ls promzard in npm's source tree will show:

        npm@2.1.6 /path/to/npm
        └─┬ init-package-json@0.0.4
          └── promzard@0.1.5

    It will print out extraneous, missing, and invalid packages.

    If a project specifies git urls for dependencies these are shown
    in parentheses after the name@version to make it easier for users to
    recognize potential forks of a project.

    When run as ll or la, it shows extended information by default.

    CONFIGURATION

    json

    • Default: false
    • Type: Boolean

    Show information in JSON format.

    long

    • Default: false
    • Type: Boolean

    Show extended information.

    parseable

    • Default: false
    • Type: Boolean

    Show parseable output instead of tree view.

    global

    • Default: false
    • Type: Boolean

    List packages in the global install prefix instead of in the current
    project.

    depth

    • Type: Int

    Max display depth of the dependency tree.

    SEE ALSO

diff --git a/deps/npm/html/partial/doc/cli/npm-outdated.html b/deps/npm/html/partial/doc/cli/npm-outdated.html
new file mode 100644
index 00000000000..ea07e01c1ce
--- /dev/null
+++ b/deps/npm/html/partial/doc/cli/npm-outdated.html
@@ -0,0 +1,47 @@

    npm-outdated

    Check for outdated packages

    SYNOPSIS

        npm outdated [<name> [<name> ...]]

    DESCRIPTION

    This command will check the registry to see if any (or specific)
    installed packages are currently outdated.

    The resulting field 'wanted' shows the latest version according to the
    version range specified in the package.json; the field 'latest' shows
    the very latest version of the package.
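
    For example, to check only top-level dependencies and get
    machine-readable output, the depth and json options described below
    can be passed as command-line switches (an illustrative invocation;
    see npm-config(7) for how config options map to flags):

        npm outdated --depth 0 --json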

    CONFIGURATION

    json

    • Default: false
    • Type: Boolean

    Show information in JSON format.

    long

    • Default: false
    • Type: Boolean

    Show extended information.

    parseable

    • Default: false
    • Type: Boolean

    Show parseable output instead of tree view.

    global

    • Default: false
    • Type: Boolean

    Check packages in the global install prefix instead of in the current
    project.

    depth

    • Type: Int

    Max depth for checking dependency tree.

    SEE ALSO

diff --git a/deps/npm/html/partial/doc/cli/npm-owner.html b/deps/npm/html/partial/doc/cli/npm-owner.html
new file mode 100644
index 00000000000..0e0dc92e416
--- /dev/null
+++ b/deps/npm/html/partial/doc/cli/npm-owner.html
@@ -0,0 +1,29 @@

    npm-owner

    Manage package owners

    SYNOPSIS

        npm owner ls <package name>
        npm owner add <user> <package name>
        npm owner rm <user> <package name>

    DESCRIPTION

    Manage ownership of published packages.

    • ls: List all the users who have access to modify a package and push
      new versions. Handy when you need to know who to bug for help.

    • add: Add a new user as a maintainer of a package. This user is
      enabled to modify metadata, publish new versions, and add other
      owners.

    • rm: Remove a user from the package owner list. This immediately
      revokes their privileges.

    Note that there is only one level of access. Either you can modify a
    package, or you can't. Future versions may contain more fine-grained
    access levels, but that is not implemented at this time.
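
    For example, to see who may publish a package and then grant that
    right to a collaborator (both names here are placeholders):

        npm owner ls mypackage
        npm owner add somecollaborator mypackage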

    SEE ALSO

diff --git a/deps/npm/html/partial/doc/cli/npm-pack.html b/deps/npm/html/partial/doc/cli/npm-pack.html
new file mode 100644
index 00000000000..865f14afd46
--- /dev/null
+++ b/deps/npm/html/partial/doc/cli/npm-pack.html
@@ -0,0 +1,21 @@

    npm-pack

    Create a tarball from a package

    SYNOPSIS

        npm pack [<pkg> [<pkg> ...]]

    DESCRIPTION

    For anything that's installable (that is, a package folder, tarball,
    tarball url, name@tag, name@version, or name), this command will fetch
    it to the cache, and then copy the tarball to the current working
    directory as <name>-<version>.tgz, and then write the filenames out to
    stdout.

    If the same package is specified multiple times, then the file will be
    overwritten the second time.

    If no arguments are supplied, then npm packs the current package
    folder.
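
    For example, to fetch a registry package and drop its tarball into the
    current directory without installing it (sax is just an example name;
    the version in the filename is whatever is current at the time):

        npm pack sax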

    SEE ALSO

diff --git a/deps/npm/html/partial/doc/cli/npm-prefix.html b/deps/npm/html/partial/doc/cli/npm-prefix.html
new file mode 100644
index 00000000000..bca3f6689c7
--- /dev/null
+++ b/deps/npm/html/partial/doc/cli/npm-prefix.html
@@ -0,0 +1,18 @@

    npm-prefix

    Display prefix

    SYNOPSIS

        npm prefix [-g]

    DESCRIPTION

    Print the local prefix to standard out. This is the closest parent
    directory that contains a package.json file, unless -g is also
    specified.

    If -g is specified, this will be the value of the global prefix. See
    npm-config(7) for more detail.

    SEE ALSO

diff --git a/deps/npm/html/partial/doc/cli/npm-prune.html b/deps/npm/html/partial/doc/cli/npm-prune.html
new file mode 100644
index 00000000000..43dd8730d69
--- /dev/null
+++ b/deps/npm/html/partial/doc/cli/npm-prune.html
@@ -0,0 +1,19 @@

    npm-prune

    Remove extraneous packages

    SYNOPSIS

        npm prune [<name> [<name> ...]]
        npm prune [<name> [<name> ...]] [--production]

    DESCRIPTION

    This command removes "extraneous" packages. If a package name is
    provided, then only packages matching one of the supplied names are
    removed.

    Extraneous packages are packages that are not listed on the parent
    package's dependencies list.

    If the --production flag is specified, this command will remove the
    packages specified in your devDependencies.
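
    For example, a typical pre-deployment cleanup drops everything that is
    not a production dependency:

        npm prune --production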

    SEE ALSO

diff --git a/deps/npm/html/partial/doc/cli/npm-publish.html b/deps/npm/html/partial/doc/cli/npm-publish.html
new file mode 100644
index 00000000000..8df73e3d211
--- /dev/null
+++ b/deps/npm/html/partial/doc/cli/npm-publish.html
@@ -0,0 +1,39 @@

    npm-publish

    Publish a package

    SYNOPSIS

        npm publish <tarball> [--tag <tag>]
        npm publish <folder> [--tag <tag>]

    DESCRIPTION

    Publishes a package to the registry so that it can be installed by
    name. See npm-developers(7) for details on what's included in the
    published package, as well as details on how the package is built.

    By default npm will publish to the public registry. This can be
    overridden by specifying a different default registry or using an
    npm-scope(7) in the name (see package.json(5)).

    • <folder>: A folder containing a package.json file

    • <tarball>: A url or file path to a gzipped tar archive containing a
      single folder with a package.json file inside.

    • [--tag <tag>]: Registers the published package with the given tag,
      such that npm install <name>@<tag> will install this version. By
      default, npm publish updates and npm install installs the latest
      tag.

    Fails if the package name and version combination already exists in
    the specified registry.

    Once a package is published with a given name and version, that
    specific name and version combination can never be used again, even
    if it is removed with npm-unpublish(1).
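
    For example, to publish the package in the current folder under a
    non-default tag so that a plain npm install will not pick it up (the
    tag name "beta" is only an illustration):

        npm publish . --tag beta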

    SEE ALSO

diff --git a/deps/npm/html/partial/doc/cli/npm-rebuild.html b/deps/npm/html/partial/doc/cli/npm-rebuild.html
new file mode 100644
index 00000000000..b06f0705e3e
--- /dev/null
+++ b/deps/npm/html/partial/doc/cli/npm-rebuild.html
@@ -0,0 +1,18 @@

    npm-rebuild

    Rebuild a package

    SYNOPSIS

        npm rebuild [<name> [<name> ...]]
        npm rb [<name> [<name> ...]]

    • <name>: The package to rebuild

    DESCRIPTION

    This command runs the npm build command on the matched folders. This
    is useful when you install a new version of node, and must recompile
    all your C++ addons with the new binary.
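
    For example, after switching to a new node binary, running the command
    with no arguments rebuilds every compiled addon in the current
    project:

        npm rebuild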

    SEE ALSO

diff --git a/deps/npm/html/partial/doc/cli/npm-repo.html b/deps/npm/html/partial/doc/cli/npm-repo.html
new file mode 100644
index 00000000000..55fcb5f4c96
--- /dev/null
+++ b/deps/npm/html/partial/doc/cli/npm-repo.html
@@ -0,0 +1,22 @@

    npm-repo

    Open package repository page in the browser

    SYNOPSIS

        npm repo <pkgname>
        npm repo (with no args in a package dir)

    DESCRIPTION

    This command tries to guess at the likely location of a package's
    repository URL, and then tries to open it using the --browser config
    param. If no package name is provided, it will search for a
    package.json in the current folder and use the name property.
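
    For example, to jump straight to a package's source in your browser
    (express is just an example package name):

        npm repo express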

    CONFIGURATION

    browser

    • Default: OS X: "open", Windows: "start", Others: "xdg-open"
    • Type: String

    The browser that is called by the npm repo command to open websites.

    SEE ALSO

diff --git a/deps/npm/html/partial/doc/cli/npm-restart.html b/deps/npm/html/partial/doc/cli/npm-restart.html
new file mode 100644
index 00000000000..267e570eca8
--- /dev/null
+++ b/deps/npm/html/partial/doc/cli/npm-restart.html
@@ -0,0 +1,15 @@

    npm-restart

    Start a package

    SYNOPSIS

        npm restart [-- <args>]

    DESCRIPTION

    This runs a package's "restart" script, if one was provided. Otherwise
    it runs the package's "stop" script, if one was provided, and then the
    "start" script.

    SEE ALSO

diff --git a/deps/npm/html/partial/doc/cli/npm-rm.html b/deps/npm/html/partial/doc/cli/npm-rm.html
new file mode 100644
index 00000000000..24cd07eeecd
--- /dev/null
+++ b/deps/npm/html/partial/doc/cli/npm-rm.html
@@ -0,0 +1,19 @@

    npm-rm

    Remove a package

    SYNOPSIS

        npm rm <name>
        npm r <name>
        npm uninstall <name>
        npm un <name>

    DESCRIPTION

    This uninstalls a package, completely removing everything npm
    installed on its behalf.

    SEE ALSO

diff --git a/deps/npm/html/partial/doc/cli/npm-root.html b/deps/npm/html/partial/doc/cli/npm-root.html
new file mode 100644
index 00000000000..e9b5ad0df8c
--- /dev/null
+++ b/deps/npm/html/partial/doc/cli/npm-root.html
@@ -0,0 +1,15 @@

    npm-root

    Display npm root

    SYNOPSIS

        npm root

    DESCRIPTION

    Print the effective node_modules folder to standard out.

    SEE ALSO

diff --git a/deps/npm/html/partial/doc/cli/npm-run-script.html b/deps/npm/html/partial/doc/cli/npm-run-script.html
new file mode 100644
index 00000000000..b9a7cefce9f
--- /dev/null
+++ b/deps/npm/html/partial/doc/cli/npm-run-script.html
@@ -0,0 +1,27 @@

    npm-run-script

    Run arbitrary package scripts

    SYNOPSIS

        npm run-script [command] [-- <args>]
        npm run [command] [-- <args>]

    DESCRIPTION

    This runs an arbitrary command from a package's "scripts" object. If
    no package name is provided, it will search for a package.json in the
    current folder and use its "scripts" object. If no "command" is
    provided, it will list the available top level scripts.

    It is used by the test, start, restart, and stop commands, but can be
    called directly, as well.

    As of npm@2.0.0, you can use custom arguments when executing scripts.
    The special option -- is used by getopt to delimit the end of the
    options. npm will pass all the arguments after the -- directly to
    your script:

        npm run test -- --grep="pattern"

    The arguments will only be passed to the script specified after npm
    run and not to any pre or post script.

    SEE ALSO

diff --git a/deps/npm/html/partial/doc/cli/npm-search.html b/deps/npm/html/partial/doc/cli/npm-search.html
new file mode 100644
index 00000000000..ae66e47ead1
--- /dev/null
+++ b/deps/npm/html/partial/doc/cli/npm-search.html
@@ -0,0 +1,29 @@

    npm-search

    Search for packages

    SYNOPSIS

        npm search [--long] [search terms ...]
        npm s [search terms ...]
        npm se [search terms ...]

    DESCRIPTION

    Search the registry for packages matching the search terms.

    If a term starts with /, then it's interpreted as a regular
    expression. A trailing / will be ignored in this case. (Note that many
    regular expression characters must be escaped or quoted in most
    shells.)
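
    For example, quoting a regular expression keeps the shell from
    mangling it before npm sees it (the pattern here is only an
    illustration):

        npm search "/^connect/"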

    CONFIGURATION

    long

    • Default: false
    • Type: Boolean

    Display full package descriptions and other long text across multiple
    lines. When disabled (default) search results are truncated to fit
    neatly on a single line. Modules with extremely long names will
    fall on multiple lines.

    SEE ALSO

diff --git a/deps/npm/html/partial/doc/cli/npm-shrinkwrap.html b/deps/npm/html/partial/doc/cli/npm-shrinkwrap.html
new file mode 100644
index 00000000000..45b646c0300
--- /dev/null
+++ b/deps/npm/html/partial/doc/cli/npm-shrinkwrap.html
@@ -0,0 +1,144 @@

    npm-shrinkwrap

    Lock down dependency versions

    SYNOPSIS

        npm shrinkwrap

    DESCRIPTION

    This command locks down the versions of a package's dependencies so
    that you can control exactly which versions of each dependency will be
    used when your package is installed. The "package.json" file is still
    required if you want to use "npm install".

    By default, "npm install" recursively installs the target's
    dependencies (as specified in package.json), choosing the latest
    available version that satisfies the dependency's semver pattern. In
    some situations, particularly when shipping software where each change
    is tightly managed, it's desirable to fully specify each version of
    each dependency recursively so that subsequent builds and deploys do
    not inadvertently pick up newer versions of a dependency that satisfy
    the semver pattern. Specifying specific semver patterns in each
    dependency's package.json would facilitate this, but that's not always
    possible or desirable, as when another author owns the npm package.
    It's also possible to check dependencies directly into source control,
    but that may be undesirable for other reasons.

    As an example, consider package A:

        {
          "name": "A",
          "version": "0.1.0",
          "dependencies": {
            "B": "<0.1.0"
          }
        }

    package B:

        {
          "name": "B",
          "version": "0.0.1",
          "dependencies": {
            "C": "<0.1.0"
          }
        }

    and package C:

        {
          "name": "C",
          "version": "0.0.1"
        }

    If these are the only versions of A, B, and C available in the
    registry, then a normal "npm install A" will install:

        A@0.1.0
        `-- B@0.0.1
            `-- C@0.0.1

    However, if B@0.0.2 is published, then a fresh "npm install A" will
    install:

        A@0.1.0
        `-- B@0.0.2
            `-- C@0.0.1

    assuming the new version did not modify B's dependencies. Of course,
    the new version of B could include a new version of C and any number
    of new dependencies. If such changes are undesirable, the author of A
    could specify a dependency on B@0.0.1. However, if A's author and B's
    author are not the same person, there's no way for A's author to say
    that he or she does not want to pull in newly published versions of C
    when B hasn't changed at all.

    In this case, A's author can run

        npm shrinkwrap

    This generates npm-shrinkwrap.json, which will look something like
    this:

        {
          "name": "A",
          "version": "0.1.0",
          "dependencies": {
            "B": {
              "version": "0.0.1",
              "dependencies": {
                "C": {
                  "version": "0.0.1"
                }
              }
            }
          }
        }

    The shrinkwrap command has locked down the dependencies based on
    what's currently installed in node_modules. When "npm install"
    installs a package with an npm-shrinkwrap.json file in the package
    root, the shrinkwrap file (rather than package.json files) completely
    drives the installation of that package and all of its dependencies
    (recursively). So now the author publishes A@0.1.0, and subsequent
    installs of this package will use B@0.0.1 and C@0.0.1, regardless of
    the dependencies and versions listed in A's, B's, and C's package.json
    files.

    Using shrinkwrapped packages

    Using a shrinkwrapped package is no different than using any other
    package: you can "npm install" it by hand, or add a dependency to your
    package.json file and "npm install" it.

    Building shrinkwrapped packages

    To shrinkwrap an existing package:

    1. Run "npm install" in the package root to install the current
       versions of all dependencies.
    2. Validate that the package works as expected with these versions.
    3. Run "npm shrinkwrap", add npm-shrinkwrap.json to git, and publish
       your package.

    To add or update a dependency in a shrinkwrapped package:

    1. Run "npm install" in the package root to install the current
       versions of all dependencies.
    2. Add or update dependencies. "npm install" each new or updated
       package individually and then update package.json. Note that they
       must be explicitly named in order to be installed: running npm
       install with no arguments will merely reproduce the existing
       shrinkwrap.
    3. Validate that the package works as expected with the new
       dependencies.
    4. Run "npm shrinkwrap", commit the new npm-shrinkwrap.json, and
       publish your package.

    You can use npm-outdated(1) to view dependencies with newer versions
    available.

    Other Notes

    A shrinkwrap file must be consistent with the package's package.json
    file. "npm shrinkwrap" will fail if required dependencies are not
    already installed, since that would result in a shrinkwrap that
    wouldn't actually work. Similarly, the command will fail if there are
    extraneous packages (not referenced by package.json), since that would
    indicate that package.json is not correct.

    Since "npm shrinkwrap" is intended to lock down your dependencies for
    production use, devDependencies will not be included unless you
    explicitly set the --dev flag when you run npm shrinkwrap. If
    installed devDependencies are excluded, then npm will print a warning.
    If you want them to be installed with your module by default, please
    consider adding them to dependencies instead.

    If shrinkwrapped package A depends on shrinkwrapped package B, B's
    shrinkwrap will not be used as part of the installation of A. However,
    because A's shrinkwrap is constructed from a valid installation of B
    and recursively specifies all dependencies, the contents of B's
    shrinkwrap will implicitly be included in A's shrinkwrap.

    Caveats

    If you wish to lock down the specific bytes included in a package, for
    example to have 100% confidence in being able to reproduce a
    deployment or build, then you ought to check your dependencies into
    source control, or pursue some other mechanism that can verify
    contents rather than versions.

    SEE ALSO

diff --git a/deps/npm/html/partial/doc/cli/npm-star.html b/deps/npm/html/partial/doc/cli/npm-star.html
new file mode 100644
index 00000000000..7377d9bc5dd
--- /dev/null
+++ b/deps/npm/html/partial/doc/cli/npm-star.html
@@ -0,0 +1,16 @@

    npm-star

    Mark your favorite packages

    SYNOPSIS

        npm star <pkgname> [<pkg>, ...]
        npm unstar <pkgname> [<pkg>, ...]

    DESCRIPTION

    "Starring" a package means that you have some interest in it. It's
    a vaguely positive way to show that you care.

    "Unstarring" is the same thing, but in reverse.

    It's a boolean thing. Starring repeatedly has no additional effect.

    SEE ALSO

diff --git a/deps/npm/html/partial/doc/cli/npm-stars.html b/deps/npm/html/partial/doc/cli/npm-stars.html
new file mode 100644
index 00000000000..6ffda95b838
--- /dev/null
+++ b/deps/npm/html/partial/doc/cli/npm-stars.html
@@ -0,0 +1,17 @@

    npm-stars

    View packages marked as favorites

    SYNOPSIS

        npm stars
        npm stars [username]

    DESCRIPTION

    If you have starred a lot of neat things and want to find them again
    quickly, this command lets you do just that.

    You may also want to see your friends' favorite packages; in that case
    you will most certainly enjoy this command.

    SEE ALSO

diff --git a/deps/npm/html/partial/doc/cli/npm-start.html b/deps/npm/html/partial/doc/cli/npm-start.html
new file mode 100644
index 00000000000..bfd673ca26b
--- /dev/null
+++ b/deps/npm/html/partial/doc/cli/npm-start.html
@@ -0,0 +1,14 @@

    npm-start

    Start a package

    SYNOPSIS

        npm start [-- <args>]

    DESCRIPTION

    This runs a package's "start" script, if one was provided.

    SEE ALSO

diff --git a/deps/npm/html/partial/doc/cli/npm-stop.html b/deps/npm/html/partial/doc/cli/npm-stop.html
new file mode 100644
index 00000000000..3b974c46a1e
--- /dev/null
+++ b/deps/npm/html/partial/doc/cli/npm-stop.html
@@ -0,0 +1,14 @@

    npm-stop

    Stop a package

    SYNOPSIS

        npm stop [-- <args>]

    DESCRIPTION

    This runs a package's "stop" script, if one was provided.

    SEE ALSO

diff --git a/deps/npm/html/partial/doc/cli/npm-submodule.html b/deps/npm/html/partial/doc/cli/npm-submodule.html
new file mode 100644
index 00000000000..1e259e1f2f6
--- /dev/null
+++ b/deps/npm/html/partial/doc/cli/npm-submodule.html
@@ -0,0 +1,22 @@

    npm-submodule

    Add a package as a git submodule

    SYNOPSIS

        npm submodule <pkg>

    DESCRIPTION

    If the specified package has a git repository url in its package.json
    description, then this command will add it as a git submodule at
    node_modules/<pkg name>.

    This is a convenience only. From then on, it's up to you to manage
    updates by using the appropriate git commands. npm will stubbornly
    refuse to update, modify, or remove anything with a .git subfolder
    in it.

    This command also does not install missing dependencies, if the
    package does not include them in its git repository. If npm ls reports
    that things are missing, you can either install, link, or submodule
    them yourself, or you can do npm explore <pkgname> -- npm install to
    install the dependencies into the submodule folder.

    SEE ALSO

diff --git a/deps/npm/html/partial/doc/cli/npm-tag.html b/deps/npm/html/partial/doc/cli/npm-tag.html
new file mode 100644
index 00000000000..61b1c76e65c
--- /dev/null
+++ b/deps/npm/html/partial/doc/cli/npm-tag.html
@@ -0,0 +1,24 @@

    npm-tag

    Tag a published version

    SYNOPSIS

        npm tag <name>@<version> [<tag>]

    DESCRIPTION

    Tags the specified version of the package with the specified tag, or
    the --tag config if not specified.

    A tag can be used when installing packages as a reference to a version
    instead of using a specific version number:

        npm install <name>@<tag>

    When installing dependencies, a preferred tagged version may be
    specified:

        npm install --tag <tag>

    This also applies to npm dedupe.

    Publishing a package always sets the "latest" tag to the published
    version.

    SEE ALSO

diff --git a/deps/npm/html/partial/doc/cli/npm-test.html b/deps/npm/html/partial/doc/cli/npm-test.html
new file mode 100644
index 00000000000..4a48e657d92
--- /dev/null
+++ b/deps/npm/html/partial/doc/cli/npm-test.html
@@ -0,0 +1,17 @@

    npm-test

    Test a package

    SYNOPSIS

        npm test [-- <args>]
        npm tst [-- <args>]

    DESCRIPTION

    This runs a package's "test" script, if one was provided.

    To run tests as a condition of installation, set the npat config to
    true.

    SEE ALSO

diff --git a/deps/npm/html/partial/doc/cli/npm-uninstall.html b/deps/npm/html/partial/doc/cli/npm-uninstall.html
new file mode 100644
index 00000000000..5b247402bdc
--- /dev/null
+++ b/deps/npm/html/partial/doc/cli/npm-uninstall.html
@@ -0,0 +1,37 @@

    npm-rm

    Remove a package

    SYNOPSIS

        npm uninstall [@<scope>/]<package> [--save|--save-dev|--save-optional]
        npm rm (with any of the previous argument usage)

    DESCRIPTION

    This uninstalls a package, completely removing everything npm
    installed on its behalf.

    Example:

        npm uninstall sax

    In global mode (i.e., with -g or --global appended to the command),
    it uninstalls the current package context as a global package.

    npm uninstall takes 3 exclusive, optional flags which save or update
    the package version in your main package.json:

    • --save: Package will be removed from your dependencies.

    • --save-dev: Package will be removed from your devDependencies.

    • --save-optional: Package will be removed from your
      optionalDependencies.

    Scope is optional and follows the usual rules for npm-scope(7).

    Examples:

        npm uninstall sax --save
        npm uninstall @myorg/privatepackage --save
        npm uninstall node-tap --save-dev
        npm uninstall dtrace-provider --save-optional

    SEE ALSO

diff --git a/deps/npm/html/partial/doc/cli/npm-unpublish.html b/deps/npm/html/partial/doc/cli/npm-unpublish.html
new file mode 100644
index 00000000000..9790cd43274
--- /dev/null
+++ b/deps/npm/html/partial/doc/cli/npm-unpublish.html
@@ -0,0 +1,27 @@

    npm-unpublish

    Remove a package from the registry

    SYNOPSIS

        npm unpublish [@<scope>/]<name>[@<version>]

    WARNING

    It is generally considered bad behavior to remove versions of a
    library that others are depending on!

    Consider using the deprecate command instead, if your intent is to
    encourage users to upgrade.

    There is plenty of room on the registry.

    DESCRIPTION

    This removes a package version from the registry, deleting its entry
    and removing the tarball.

    If no version is specified, or if all versions are removed, then the
    root package entry is removed from the registry entirely.

    Even if a package version is unpublished, that specific name and
    version combination can never be reused. In order to publish the
    package again, a new version number must be used.

    The scope is optional and follows the usual rules for npm-scope(7).
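
    For example, to remove one broken release while leaving the other
    published versions intact (package name and version are placeholders):

        npm unpublish mypackage@0.2.3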

    SEE ALSO

diff --git a/deps/npm/html/partial/doc/cli/npm-update.html b/deps/npm/html/partial/doc/cli/npm-update.html
new file mode 100644
index 00000000000..3923be7faf9
--- /dev/null
+++ b/deps/npm/html/partial/doc/cli/npm-update.html
@@ -0,0 +1,20 @@

    npm-update

    Update a package

    SYNOPSIS

        npm update [-g] [<name> [<name> ...]]

    DESCRIPTION

    This command will update all the packages listed to the latest version
    (specified by the tag config).

    It will also install missing packages.

    If the -g flag is specified, this command will update globally
    installed packages.

    If no package name is specified, all packages in the specified
    location (global or local) will be updated.
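
    For example, to bring npm itself up to date in the global prefix:

        npm update -g npm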

    SEE ALSO

diff --git a/deps/npm/html/partial/doc/cli/npm-version.html b/deps/npm/html/partial/doc/cli/npm-version.html
new file mode 100644
index 00000000000..5217f019635
--- /dev/null
+++ b/deps/npm/html/partial/doc/cli/npm-version.html
@@ -0,0 +1,35 @@

    npm-version

    Bump a package version

    SYNOPSIS

        npm version [<newversion> | major | minor | patch | premajor | preminor | prepatch | prerelease]

    DESCRIPTION

    Run this in a package directory to bump the version and write the new
    data back to the package.json file.

    The newversion argument should be a valid semver string, or a valid
    second argument to semver.inc (one of "patch", "minor", "major",
    "prepatch", "preminor", "premajor", "prerelease"). In the second case,
    the existing version will be incremented by 1 in the specified field.

    If run in a git repo, it will also create a version commit and tag,
    and fail if the repo is not clean.

    If supplied with the --message (shorthand: -m) config option, npm will
    use it as a commit message when creating a version commit. If the
    message config contains %s then that will be replaced with the
    resulting version number. For example:

        npm version patch -m "Upgrade to %s for reasons"

    If the sign-git-tag config is set, then the tag will be signed using
    the -s flag to git. Note that you must have a default GPG key set up
    in your git config for this to work properly. For example:

        $ npm config set sign-git-tag true
        $ npm version patch

        You need a passphrase to unlock the secret key for
        user: "isaacs (http://blog.izs.me/) <i@izs.me>"
        2048-bit RSA key, ID 6C481CF6, created 2010-08-31

        Enter passphrase:

    SEE ALSO

diff --git a/deps/npm/html/partial/doc/cli/npm-view.html b/deps/npm/html/partial/doc/cli/npm-view.html
new file mode 100644
index 00000000000..a5b38518fb1
--- /dev/null
+++ b/deps/npm/html/partial/doc/cli/npm-view.html
@@ -0,0 +1,62 @@

    npm-view

    View registry info

    SYNOPSIS

        npm view [@<scope>/]<name>[@<version>] [<field>[.<subfield>]...]
        npm v [@<scope>/]<name>[@<version>] [<field>[.<subfield>]...]

    DESCRIPTION

    This command shows data about a package and prints it to the stream
    referenced by the outfd config, which defaults to stdout.

    To show the package registry entry for the connect package, you can do
    this:

        npm view connect

    The default version is "latest" if unspecified.

    Field names can be specified after the package descriptor. For
    example, to show the dependencies of the ronn package at version
    0.3.5, you could do the following:

        npm view ronn@0.3.5 dependencies

    You can view child fields by separating them with a period. To view
    the git repository URL for the latest version of npm, you could do
    this:

        npm view npm repository.url

    This makes it easy to view information about a dependency with a bit
    of shell scripting. For example, to view all the data about the
    version of opts that ronn depends on, you can do this:

        npm view opts@$(npm view ronn dependencies.opts)

    For fields that are arrays, requesting a non-numeric field will return
    all of the values from the objects in the list. For example, to get
    all the contributor email addresses for the "express" project, you can
    do this:

        npm view express contributors.email

    You may also use numeric indices in square braces to specifically
    select an item in an array field. To just get the email address of the
    first contributor in the list, you can do this:

        npm view express contributors[0].email

    Multiple fields may be specified, and will be printed one after
    another. For example, to get all the contributor names and email
    addresses, you can do this:

        npm view express contributors.name contributors.email

    "Person" fields are shown as a string if they would be shown as an
    object. So, for example, this will show the list of npm contributors
    in the shortened string format. (See package.json(5) for more on
    this.)

        npm view npm contributors

    If a version range is provided, then data will be printed for every
    matching version of the package. This will show which version of jsdom
    was required by each matching version of yui3:

        npm view yui3@'>0.5.4' dependencies.jsdom

    OUTPUT

    If only a single string field for a single version is output, then it
    will not be colorized or quoted, so as to enable piping the output to
    another command. If the field is an object, it will be output as a
    JavaScript object literal.

    If the --json flag is given, the outputted fields will be JSON.

    If the version range matches multiple versions, then each printed
    value will be prefixed with the version it applies to.

    If multiple fields are requested, then each of them is prefixed with
    the field name.

    SEE ALSO

diff --git a/deps/npm/html/partial/doc/cli/npm-whoami.html b/deps/npm/html/partial/doc/cli/npm-whoami.html
new file mode 100644
index 00000000000..a0c0dd4cd82
--- /dev/null
+++ b/deps/npm/html/partial/doc/cli/npm-whoami.html
@@ -0,0 +1,13 @@

    npm-whoami

    Display npm username

    SYNOPSIS

        npm whoami

    DESCRIPTION

    Print the username config to standard output.

    SEE ALSO

diff --git a/deps/npm/html/partial/doc/cli/npm.html b/deps/npm/html/partial/doc/cli/npm.html
new file mode 100644
index 00000000000..646fffcb378
--- /dev/null
+++ b/deps/npm/html/partial/doc/cli/npm.html
@@ -0,0 +1,134 @@

    npm

    node package manager

    SYNOPSIS

        npm <command> [args]

    VERSION

    2.1.6

    DESCRIPTION

    npm is the package manager for the Node JavaScript platform. It puts
    modules in place so that node can find them, and manages dependency
    conflicts intelligently.

    It is extremely configurable to support a wide variety of use cases.
    Most commonly, it is used to publish, discover, install, and develop
    node programs.

    Run npm help to get a list of available commands.

    INTRODUCTION

    You probably got npm because you want to install stuff.

    Use npm install blerg to install the latest version of "blerg". Check
    out npm-install(1) for more info. It can do a lot of stuff.

    Use the npm search command to show everything that's available.
    Use npm ls to show everything you've installed.

    DEPENDENCIES

    If a package references another package with a git URL, npm depends
    on a preinstalled git.

    If one of the packages npm tries to install is a native node module
    and requires compiling C++ code, npm will use node-gyp for that task.
    On a Unix system, node-gyp needs Python, make, and a buildchain like
    GCC. On Windows, Python and Microsoft Visual Studio C++ are needed.
    Python 3 is not supported by node-gyp. For more information visit the
    node-gyp repository and the node-gyp Wiki.

    DIRECTORIES

    See npm-folders(5) to learn about where npm puts stuff.

    In particular, npm has two modes of operation:

    • global mode:
      npm installs packages into the install prefix at
      prefix/lib/node_modules and bins are installed in prefix/bin.

    • local mode:
      npm installs packages into the current project directory, which
      defaults to the current working directory. Packages are installed
      to ./node_modules, and bins are installed to ./node_modules/.bin.

    Local mode is the default. Use --global or -g on any command to
    operate in global mode instead.

    DEVELOPER USAGE

    If you're using npm to develop and publish your code, check out the
    following help topics:

    • json:
      Make a package.json file. See package.json(5).

    • link:
      For linking your current working code into Node's path, so that you
      don't have to reinstall every time you make a change. Use npm link
      to do this.

    • install:
      It's a good idea to install things if you don't need the symbolic
      link. Especially, installing other people's code from the registry
      is done via npm install.

    • adduser:
      Create an account or log in. Credentials are stored in the user
      config file.

    • publish:
      Use the npm publish command to upload your code to the registry.

    CONFIGURATION

    npm is extremely configurable. It reads its configuration options from
    5 places.

    • Command line switches:
      Set a config with --key val. All keys take a value, even if they
      are booleans (the config parser doesn't know what the options are
      at the time of parsing). If no value is provided, then the option
      is set to boolean true.

    • Environment Variables:
      Set any config by prefixing the name in an environment variable
      with npm_config_. For example, export npm_config_key=val.

    • User Configs:
      The file at $HOME/.npmrc is an ini-formatted list of configs. If
      present, it is parsed. If the userconfig option is set in the cli
      or env, then that will be used instead.

    • Global Configs:
      The file found at ../etc/npmrc (from the node executable, by
      default this resolves to /usr/local/etc/npmrc) will be parsed if it
      is found. If the globalconfig option is set in the cli, env, or
      user config, then that file is parsed instead.

    • Defaults:
      npm's default configuration options are defined in
      lib/utils/config-defs.js. These must not be changed.

    See npm-config(7) for much much more information.

    CONTRIBUTIONS

    Patches welcome!

    • code:
      Read through npm-coding-style(7) if you plan to submit code. You
      don't have to agree with it, but you do have to follow it.

    • docs:
      If you find an error in the documentation, edit the appropriate
      markdown file in the "doc" folder. (Don't worry about generating
      the man page.)

    Contributors are listed in npm's package.json file. You can view them
    easily by doing npm view npm contributors.

    If you would like to contribute, but don't know what to work on,
    check the issues list or ask on the mailing list.

    BUGS

    When you find issues, please report them.

    Be sure to include all of the output from the npm command that didn't
    work as expected. The npm-debug.log file is also helpful to provide.

    You can also look for isaacs in #node.js on irc://irc.freenode.net.
    He will no doubt tell you to put the output in a gist or email.

    AUTHOR

    Isaac Z. Schlueter :: isaacs :: @izs :: i@izs.me

    SEE ALSO

diff --git a/deps/npm/html/partial/doc/files/npm-folders.html b/deps/npm/html/partial/doc/files/npm-folders.html
new file mode 100644
index 00000000000..08ea7ed13aa
--- /dev/null
+++ b/deps/npm/html/partial/doc/files/npm-folders.html
@@ -0,0 +1,164 @@

    npm-folders

    Folder Structures Used by npm

    DESCRIPTION

    npm puts various things on your computer. That's its job.

    This document will tell you what it puts where.

    tl;dr

    • Local install (default): puts stuff in ./node_modules of the current
      package root.

    • Global install (with -g): puts stuff in /usr/local or wherever node
      is installed.

    • Install it locally if you're going to require() it.

    • Install it globally if you're going to run it on the command line.

    • If you need both, then install it in both places, or use npm link.

    prefix Configuration

    The prefix config defaults to the location where node is installed.
    On most systems, this is /usr/local, and most of the time it is the
    same as node's process.installPrefix.

    On Windows, this is the exact location of the node.exe binary. On
    Unix systems, it's one level up, since node is typically installed at
    {prefix}/bin/node rather than {prefix}/node.exe.

    When the global flag is set, npm installs things into this prefix.
    When it is not set, it uses the root of the current package, or the
    current working directory if not in a package already.

    Node Modules

    Packages are dropped into the node_modules folder under the prefix.
    When installing locally, this means that you can
    require("packagename") to load its main module, or
    require("packagename/lib/path/to/sub/module") to load other modules.

    Global installs on Unix systems go to {prefix}/lib/node_modules.
    Global installs on Windows go to {prefix}/node_modules (that is, no
    lib folder).

    Scoped packages are installed the same way, except they are grouped
    together in a sub-folder of the relevant node_modules folder with the
    name of that scope prefixed by the @ symbol, e.g. npm install
    @myorg/package would place the package in
    {prefix}/node_modules/@myorg/package. See scopes(7) for more details.

    If you wish to require() a package, then install it locally.

    Executables

    When in global mode, executables are linked into {prefix}/bin on
    Unix, or directly into {prefix} on Windows.

    When in local mode, executables are linked into ./node_modules/.bin
    so that they can be made available to scripts run through npm. (For
    example, so that a test runner will be in the path when you run npm
    test.)

    Man Pages

    When in global mode, man pages are linked into {prefix}/share/man.

    When in local mode, man pages are not installed.

    Man pages are not installed on Windows systems.

    Cache

    See npm-cache(1). Cache files are stored in ~/.npm on Posix, or
    ~/npm-cache on Windows.

    This is controlled by the cache configuration param.

    Temp Files

    Temporary files are stored by default in the folder specified by the
    tmp config, which defaults to the TMPDIR, TMP, or TEMP environment
    variables, or /tmp on Unix and c:\windows\temp on Windows.

    Temp files are given a unique folder under this root for each run of
    the program, and are deleted upon successful exit.

    More Information

    When installing locally, npm first tries to find an appropriate
    prefix folder. This is so that npm install foo@1.2.3 will install to
    the sensible root of your package, even if you happen to have cded
    into some other folder.

    Starting at the $PWD, npm will walk up the folder tree checking for a
    folder that contains either a package.json file, or a node_modules
    folder. If such a thing is found, then that is treated as the
    effective "current directory" for the purpose of running npm
    commands. (This behavior is inspired by and similar to git's
    .git-folder seeking logic when running git commands in a working
    dir.)

    If no package root is found, then the current folder is used.

    When you run npm install foo@1.2.3, then the package is loaded into
    the cache, and then unpacked into ./node_modules/foo. Then, any of
    foo's dependencies are similarly unpacked into
    ./node_modules/foo/node_modules/....

    Any bin files are symlinked to ./node_modules/.bin/, so that they may
    be found by npm scripts when necessary.

    Global Installation

    If the global configuration is set to true, then npm will install
    packages "globally".

    For global installation, packages are installed roughly the same way,
    but using the folders described above.

    Cycles, Conflicts, and Folder Parsimony

    Cycles are handled using the property of node's module system that it
    walks up the directories looking for node_modules folders. So, at
    every stage, if a package is already installed in an ancestor
    node_modules folder, then it is not installed at the current location.

    Consider the case above, where foo -> bar -> baz. Imagine if, in
    addition to that, baz depended on bar, so you'd have:
    foo -> bar -> baz -> bar -> baz .... However, since the folder
    structure is foo/node_modules/bar/node_modules/baz, there's no need
    to put another copy of bar into .../baz/node_modules, since when it
    calls require("bar"), it will get the copy that is installed in
    foo/node_modules/bar.

    This shortcut is only used if the exact same version would be
    installed in multiple nested node_modules folders. It is still
    possible to have a/node_modules/b/node_modules/a if the two "a"
    packages are different versions. However, without repeating the exact
    same package multiple times, an infinite regress will always be
    prevented.

    Another optimization can be made by installing dependencies at the
    highest level possible, below the localized "target" folder.

    Example

    Consider this dependency graph:

        foo
        +-- blerg@1.2.5
        +-- bar@1.2.3
        |   +-- blerg@1.x (latest=1.3.7)
        |   +-- baz@2.x
        |   |   `-- quux@3.x
        |   |       `-- bar@1.2.3 (cycle)
        |   `-- asdf@*
        `-- baz@1.2.3
            `-- quux@3.x
                `-- bar

    In this case, we might expect a folder structure like this:

        foo
        +-- node_modules
            +-- blerg (1.2.5) <---[A]
            +-- bar (1.2.3) <---[B]
            |   `-- node_modules
            |       +-- baz (2.0.2) <---[C]
            |       |   `-- node_modules
            |       |       `-- quux (3.2.0)
            |       `-- asdf (2.3.4)
            `-- baz (1.2.3) <---[D]
                `-- node_modules
                    `-- quux (3.2.0) <---[E]

    Since foo depends directly on bar@1.2.3 and baz@1.2.3, those are
    installed in foo's node_modules folder.

    Even though the latest copy of blerg is 1.3.7, foo has a specific
    dependency on version 1.2.5. So, that gets installed at [A]. Since
    the parent installation of blerg satisfies bar's dependency on
    blerg@1.x, it does not install another copy under [B].

    Bar [B] also has dependencies on baz and asdf, so those are installed
    in bar's node_modules folder. Because it depends on baz@2.x, it
    cannot re-use the baz@1.2.3 installed in the parent node_modules
    folder [D], and must install its own copy [C].

    Underneath bar, the baz -> quux -> bar dependency creates a cycle.
    However, because bar is already in quux's ancestry [B], it does not
    unpack another copy of bar into that folder.

    Underneath foo -> baz [D], quux's [E] folder tree is empty, because
    its dependency on bar is satisfied by the parent folder copy
    installed at [B].

    For a graphical breakdown of what is installed where, use npm ls.

    Publishing

    Upon publishing, npm will look in the node_modules folder. If any of
    the items there are not in the bundledDependencies array, then they
    will not be included in the package tarball.

    This allows a package maintainer to install all of their dependencies
    (and dev dependencies) locally, but only re-publish those items that
    cannot be found elsewhere. See package.json(5) for more information.

    SEE ALSO

diff --git a/deps/npm/html/partial/doc/files/npm-global.html b/deps/npm/html/partial/doc/files/npm-global.html
new file mode 100644
index 00000000000..08ea7ed13aa
--- /dev/null
+++ b/deps/npm/html/partial/doc/files/npm-global.html
@@ -0,0 +1,164 @@

    npm-folders

    Folder Structures Used by npm

    DESCRIPTION

    npm puts various things on your computer. That's its job.

    This document will tell you what it puts where.

    tl;dr

    • Local install (default): puts stuff in ./node_modules of the current
      package root.

    • Global install (with -g): puts stuff in /usr/local or wherever node
      is installed.

    • Install it locally if you're going to require() it.

    • Install it globally if you're going to run it on the command line.

    • If you need both, then install it in both places, or use npm link.

    prefix Configuration

    The prefix config defaults to the location where node is installed.
    On most systems, this is /usr/local, and most of the time it is the
    same as node's process.installPrefix.

    On Windows, this is the exact location of the node.exe binary. On
    Unix systems, it's one level up, since node is typically installed at
    {prefix}/bin/node rather than {prefix}/node.exe.

    When the global flag is set, npm installs things into this prefix.
    When it is not set, it uses the root of the current package, or the
    current working directory if not in a package already.

    Node Modules

    +

    Packages are dropped into the node_modules folder under the prefix. +When installing locally, this means that you can +require("packagename") to load its main module, or +require("packagename/lib/path/to/sub/module") to load other modules.

    +
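
    As a minimal sketch, assuming a hypothetical package named foo has been +installed locally (the sub-module path is illustrative):

    +
    // loads the main module from ./node_modules/foo
    +var foo = require("foo")
    +// loads a file shipped inside the package
    +var sub = require("foo/lib/sub")
    +

    +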

    Global installs on Unix systems go to {prefix}/lib/node_modules. +Global installs on Windows go to {prefix}/node_modules (that is, no +lib folder.)

    +

    Scoped packages are installed the same way, except they are grouped together +in a sub-folder of the relevant node_modules folder with the name of that +scope prefixed by the @ symbol, e.g. npm install @myorg/package would place +the package in {prefix}/node_modules/@myorg/package. See scopes(7) for +more details.

    +

    If you wish to require() a package, then install it locally.

    +

    Executables

    +

    When in global mode, executables are linked into {prefix}/bin on Unix, +or directly into {prefix} on Windows.

    +

    When in local mode, executables are linked into +./node_modules/.bin so that they can be made available to scripts run +through npm. (For example, so that a test runner will be in the path +when you run npm test.)

    +
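
    For example, a hypothetical package.json along these lines lets npm test +find the locally installed tap runner through ./node_modules/.bin:

    +
    { "scripts" : { "test" : "tap test/*.js" }
    +, "devDependencies" : { "tap" : "*" } }
    +

    +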

    Man Pages

    +

    When in global mode, man pages are linked into {prefix}/share/man.

    +

    When in local mode, man pages are not installed.

    +

    Man pages are not installed on Windows systems.

    +

    Cache

    +

    See npm-cache(1). Cache files are stored in ~/.npm on Posix, or +~/npm-cache on Windows.

    +

    This is controlled by the cache configuration param.

    +

    Temp Files

    +

    Temporary files are stored by default in the folder specified by the +tmp config, which defaults to the TMPDIR, TMP, or TEMP environment +variables, or /tmp on Unix and c:\windows\temp on Windows.

    +

    Temp files are given a unique folder under this root for each run of the +program, and are deleted upon successful exit.

    +

    More Information

    +

    When installing locally, npm first tries to find an appropriate +prefix folder. This is so that npm install foo@1.2.3 will install +to the sensible root of your package, even if you happen to have cd'ed +into some other folder.

    +

    Starting at the $PWD, npm will walk up the folder tree checking for a +folder that contains either a package.json file, or a node_modules +folder. If such a thing is found, then that is treated as the effective +"current directory" for the purpose of running npm commands. (This +behavior is inspired by and similar to git's .git-folder seeking +logic when running git commands in a working dir.)

    +
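
    A rough sketch of that walk, under the stated rules (this is an +illustration, not npm's actual implementation):

    +
    var fs = require("fs")
    +var path = require("path")
    +
    +// walk up from dir until a package.json or node_modules is found
    +function findPrefix (dir) {
    +  if (fs.existsSync(path.join(dir, "package.json")) ||
    +      fs.existsSync(path.join(dir, "node_modules"))) return dir
    +  var parent = path.dirname(dir)
    +  // hit the filesystem root: fall back to the current folder
    +  if (parent === dir) return process.cwd()
    +  return findPrefix(parent)
    +}
    +

    +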

    If no package root is found, then the current folder is used.

    +

    When you run npm install foo@1.2.3, then the package is loaded into +the cache, and then unpacked into ./node_modules/foo. Then, any of +foo's dependencies are similarly unpacked into +./node_modules/foo/node_modules/....

    +

    Any bin files are symlinked to ./node_modules/.bin/, so that they may +be found by npm scripts when necessary.

    +

    Global Installation

    +

    If the global configuration is set to true, then npm will +install packages "globally".

    +

    For global installation, packages are installed roughly the same way, +but using the folders described above.

    +

    Cycles, Conflicts, and Folder Parsimony

    +

    Cycles are handled using the property of node's module system that it +walks up the directories looking for node_modules folders. So, at every +stage, if a package is already installed in an ancestor node_modules +folder, then it is not installed at the current location.

    +

    Consider the case above, where foo -> bar -> baz. Imagine if, in +addition to that, baz depended on bar, so you'd have: +foo -> bar -> baz -> bar -> baz .... However, since the folder +structure is: foo/node_modules/bar/node_modules/baz, there's no need to +put another copy of bar into .../baz/node_modules, since when it calls +require("bar"), it will get the copy that is installed in +foo/node_modules/bar.

    +

    This shortcut is only used if the exact same +version would be installed in multiple nested node_modules folders. It +is still possible to have a/node_modules/b/node_modules/a if the two +"a" packages are different versions. However, without repeating the +exact same package multiple times, an infinite regress will always be +prevented.

    +

    Another optimization can be made by installing dependencies at the +highest level possible, below the localized "target" folder.

    +

    Example

    +

    Consider this dependency graph:

    +
    foo
    ++-- blerg@1.2.5
    ++-- bar@1.2.3
    +|   +-- blerg@1.x (latest=1.3.7)
    +|   +-- baz@2.x
    +|   |   `-- quux@3.x
    +|   |       `-- bar@1.2.3 (cycle)
    +|   `-- asdf@*
    +`-- baz@1.2.3
    +    `-- quux@3.x
    +        `-- bar
    +

    In this case, we might expect a folder structure like this:

    +
    foo
    ++-- node_modules
    +    +-- blerg (1.2.5) <---[A]
    +    +-- bar (1.2.3) <---[B]
    +    |   `-- node_modules
    +    |       +-- baz (2.0.2) <---[C]
    +    |       |   `-- node_modules
    +    |       |       `-- quux (3.2.0)
    +    |       `-- asdf (2.3.4)
    +    `-- baz (1.2.3) <---[D]
    +        `-- node_modules
    +            `-- quux (3.2.0) <---[E]
    +

    Since foo depends directly on bar@1.2.3 and baz@1.2.3, those are +installed in foo's node_modules folder.

    +

    Even though the latest copy of blerg is 1.3.7, foo has a specific +dependency on version 1.2.5. So, that gets installed at [A]. Since the +parent installation of blerg satisfies bar's dependency on blerg@1.x, +it does not install another copy under [B].

    +

    Bar [B] also has dependencies on baz and asdf, so those are installed in +bar's node_modules folder. Because it depends on baz@2.x, it cannot +re-use the baz@1.2.3 installed in the parent node_modules folder [D], +and must install its own copy [C].

    +

    Underneath bar, the baz -> quux -> bar dependency creates a cycle. +However, because bar is already in quux's ancestry [B], it does not +unpack another copy of bar into that folder.

    +

    Underneath foo -> baz [D], quux's [E] folder tree is empty, because its +dependency on bar is satisfied by the parent folder copy installed at [B].

    +

    For a graphical breakdown of what is installed where, use npm ls.

    +

    Publishing

    +

    Upon publishing, npm will look in the node_modules folder. If any of +the items there are not in the bundledDependencies array, then they will +not be included in the package tarball.

    +

    This allows a package maintainer to install all of their dependencies +(and dev dependencies) locally, but only re-publish those items that +cannot be found elsewhere. See package.json(5) for more information.

    +

    SEE ALSO

    +
    +

    diff --git a/deps/npm/html/partial/doc/files/npm-json.html b/deps/npm/html/partial/doc/files/npm-json.html
    new file mode 100644
    index 00000000000..df3bea83742
    --- /dev/null
    +++ b/deps/npm/html/partial/doc/files/npm-json.html
    @@ -0,0 +1,465 @@
    +

    package.json

    Specifics of npm's package.json handling

    +

    DESCRIPTION

    +

    This document is all you need to know about what's required in your package.json +file. It must be actual JSON, not just a JavaScript object literal.

    +

    A lot of the behavior described in this document is affected by the config +settings described in npm-config(7).

    +

    name

    +

    The most important things in your package.json are the name and version fields. +Those are actually required, and your package won't install without +them. The name and version together form an identifier that is assumed +to be completely unique. Changes to the package should come along with +changes to the version.

    +
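
    So the smallest valid package.json looks something like this (the name +here is just a placeholder):

    +
    { "name" : "my-package"
    +, "version" : "1.0.0" }
    +

    +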

    The name is what your thing is called. Some tips:

    +
      +
    • Don't put "js" or "node" in the name. It's assumed that it's js, since you're +writing a package.json file, and you can specify the engine using the "engines" +field. (See below.)
    • +
    • The name ends up being part of a URL, an argument on the command line, and a +folder name. Any name with non-url-safe characters will be rejected. +Also, it can't start with a dot or an underscore.
    • +
    • The name will probably be passed as an argument to require(), so it should +be something short, but also reasonably descriptive.
    • +
    • You may want to check the npm registry to see if there's something by that name +already, before you get too attached to it. http://registry.npmjs.org/
    • +
    +

    A name can be optionally prefixed by a scope, e.g. @myorg/mypackage. See +npm-scope(7) for more detail.

    +

    version

    +

    The most important things in your package.json are the name and version fields. +Those are actually required, and your package won't install without +them. The name and version together form an identifier that is assumed +to be completely unique. Changes to the package should come along with +changes to the version.

    +

    Version must be parseable by +node-semver, which is bundled +with npm as a dependency. (npm install semver to use it yourself.)

    +

    More on version numbers and ranges at semver(7).

    +

    description

    +

    Put a description in it. It's a string. This helps people discover your +package, as it's listed in npm search.

    +

    keywords

    +

    Put keywords in it. It's an array of strings. This helps people +discover your package as it's listed in npm search.

    +

    homepage

    +

    The url to the project homepage.

    +
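
    For example (a placeholder address):

    +
    "homepage" : "https://github.com/owner/project"
    +

    +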

    NOTE: This is not the same as "url". If you put a "url" field, +then the registry will think it's a redirection to your package that has +been published somewhere else, and spit at you.

    +

    Literally. Spit. I'm so not kidding.

    +

    bugs

    +

    The url to your project's issue tracker and / or the email address to which +issues should be reported. These are helpful for people who encounter issues +with your package.

    +

    It should look like this:

    +
    { "url" : "http://github.com/owner/project/issues"
    +, "email" : "project@hostname.com"
    +}
    +

    You can specify either one or both values. If you want to provide only a url, +you can specify the value for "bugs" as a simple string instead of an object.

    +

    If a url is provided, it will be used by the npm bugs command.

    +

    license

    +

    You should specify a license for your package so that people know how they are +permitted to use it, and any restrictions you're placing on it.

    +

    The simplest way, assuming you're using a common license such as BSD-3-Clause +or MIT, is to just specify the standard SPDX ID of the license you're using, +like this:

    +
    { "license" : "BSD-3-Clause" }
    +

    You can check the full list of SPDX license IDs. +Ideally you should pick one that is +OSI approved.

    +

    It's also a good idea to include a LICENSE file at the top level in +your package.

    +

    people fields: author, contributors

    +

    The "author" is one person. "contributors" is an array of people. A "person" +is an object with a "name" field and optionally "url" and "email", like this:

    +
    { "name" : "Barney Rubble"
    +, "email" : "b@rubble.com"
    +, "url" : "http://barnyrubble.tumblr.com/"
    +}
    +

    Or you can shorten that all into a single string, and npm will parse it for you:

    +
    "Barney Rubble <b@rubble.com> (http://barnyrubble.tumblr.com/)
    +

    Both email and url are optional either way.

    +

    npm also sets a top-level "maintainers" field with your npm user info.

    +

    files

    +

    The "files" field is an array of files to include in your project. If +you name a folder in the array, then it will also include the files +inside that folder. (Unless they would be ignored by another rule.)

    +

    You can also provide a ".npmignore" file in the root of your package, +which will keep files from being included, even if they would be picked +up by the files array. The ".npmignore" file works just like a +".gitignore".

    +
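
    For instance, a hypothetical files array that ships only the library and +its entry point:

    +
    { "files" : [ "lib", "index.js", "README.md" ] }
    +

    +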

    main

    +

    The main field is a module ID that is the primary entry point to your program. +That is, if your package is named foo, and a user installs it, and then does +require("foo"), then your main module's exports object will be returned.

    +

    This should be a module ID relative to the root of your package folder.

    +

    For most modules, it makes the most sense to have a main script and often not +much else.

    +

    bin

    +

    A lot of packages have one or more executable files that they'd like to +install into the PATH. npm makes this pretty easy (in fact, it uses this +feature to install the "npm" executable.)

    +

    To use this, supply a bin field in your package.json which is a map of +command name to local file name. On install, npm will symlink that file into +prefix/bin for global installs, or ./node_modules/.bin/ for local +installs.

    +

    For example, npm has this:

    +
    { "bin" : { "npm" : "./cli.js" } }
    +

    So, when you install npm, it'll create a symlink from the cli.js script to +/usr/local/bin/npm.

    +

    If you have a single executable, and its name should be the name +of the package, then you can just supply it as a string. For example:

    +
    { "name": "my-program"
    +, "version": "1.2.5"
    +, "bin": "./path/to/program" }
    +

    would be the same as this:

    +
    { "name": "my-program"
    +, "version": "1.2.5"
    +, "bin" : { "my-program" : "./path/to/program" } }
    +

    man

    +

    Specify either a single file or an array of filenames to put in place for the +man program to find.

    +

    If only a single file is provided, then it's installed such that it is the +result from man <pkgname>, regardless of its actual filename. For example:

    +
    { "name" : "foo"
    +, "version" : "1.2.3"
    +, "description" : "A packaged foo fooer for fooing foos"
    +, "main" : "foo.js"
    +, "man" : "./man/doc.1"
    +}
    +

    would link the ./man/doc.1 file in such that it is the target for man foo

    +

    If the filename doesn't start with the package name, then it's prefixed. +So, this:

    +
    { "name" : "foo"
    +, "version" : "1.2.3"
    +, "description" : "A packaged foo fooer for fooing foos"
    +, "main" : "foo.js"
    +, "man" : [ "./man/foo.1", "./man/bar.1" ]
    +}
    +

    will create files to do man foo and man foo-bar.

    +

    Man files must end with a number, and optionally a .gz suffix if they are +compressed. The number dictates which man section the file is installed into.

    +
    { "name" : "foo"
    +, "version" : "1.2.3"
    +, "description" : "A packaged foo fooer for fooing foos"
    +, "main" : "foo.js"
    +, "man" : [ "./man/foo.1", "./man/foo.2" ]
    +}
    +

    will create entries for man foo and man 2 foo

    +

    directories

    +

    The CommonJS Packages spec details a +few ways that you can indicate the structure of your package using a directories +object. If you look at npm's package.json, +you'll see that it has directories for doc, lib, and man.

    +

    In the future, this information may be used in other creative ways.

    +

    directories.lib

    +

    Tell people where the bulk of your library is. Nothing special is done +with the lib folder in any way, but it's useful meta info.

    +

    directories.bin

    +

    If you specify a bin directory, then all the files in that folder will +be added as children of the bin path.

    +

    If you have a bin path already, then this has no effect.

    +

    directories.man

    +

    A folder that is full of man pages. Sugar to generate a "man" array by +walking the folder.

    +

    directories.doc

    +

    Put markdown files in here. Eventually, these will be displayed nicely, +maybe, someday.

    +

    directories.example

    +

    Put example scripts in here. Someday, it might be exposed in some clever way.

    +

    repository

    +

    Specify the place where your code lives. This is helpful for people who +want to contribute. If the git repo is on github, then the npm docs +command will be able to find you.

    +

    Do it like this:

    +
    "repository" :
    +  { "type" : "git"
    +  , "url" : "http://github.com/npm/npm.git"
    +  }
    +
    +"repository" :
    +  { "type" : "svn"
    +  , "url" : "http://v8.googlecode.com/svn/trunk/"
    +  }
    +

    The URL should be a publicly available (perhaps read-only) url that can be handed +directly to a VCS program without any modification. It should not be a url to an +html project page that you put in your browser. It's for computers.

    +

    scripts

    +

    The "scripts" property is a dictionary containing script commands that are run +at various times in the lifecycle of your package. The key is the lifecycle +event, and the value is the command to run at that point.

    +
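
    A sketch, assuming hypothetical test.js and build.js scripts in the +package root:

    +
    { "scripts" : { "test" : "node test.js"
    +              , "prepublish" : "node build.js" } }
    +

    +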

    See npm-scripts(7) to find out more about writing package scripts.

    +

    config

    +

    A "config" object can be used to set configuration parameters used in package +scripts that persist across upgrades. For instance, if a package had the +following:

    +
    { "name" : "foo"
    +, "config" : { "port" : "8080" } }
    +

    and then had a "start" command that then referenced the +npm_package_config_port environment variable, then the user could +override that by doing npm config set foo:port 8001.

    +
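
    For instance, a hypothetical server.js run by npm start could pick the +value up like this:

    +
    // npm exposes each config key as an npm_package_config_* variable
    +var port = process.env.npm_package_config_port || 8080
    +require("http").createServer(function (req, res) {
    +  res.end("hello\n")
    +}).listen(port)
    +

    +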

    See npm-config(7) and npm-scripts(7) for more on package +configs.

    +

    dependencies

    +

    Dependencies are specified in a simple object that maps a package name to a +version range. The version range is a string which has one or more +space-separated descriptors. Dependencies can also be identified with a +tarball or git URL.

    +

    Please do not put test harnesses or transpilers in your +dependencies object. See devDependencies, below.

    +

    See semver(7) for more details about specifying version ranges.

    +
      +
    • version Must match version exactly
    • +
    • >version Must be greater than version
    • +
    • >=version etc
    • +
    • <version
    • +
    • <=version
    • +
    • ~version "Approximately equivalent to version" See semver(7)
    • +
    • ^version "Compatible with version" See semver(7)
    • +
    • 1.2.x 1.2.0, 1.2.1, etc., but not 1.3.0
    • +
    • http://... See 'URLs as Dependencies' below
    • +
    • * Matches any version
    • +
    • "" (just an empty string) Same as *
    • +
    • version1 - version2 Same as >=version1 <=version2.
    • +
    • range1 || range2 Passes if either range1 or range2 are satisfied.
    • +
    • git... See 'Git URLs as Dependencies' below
    • +
    • user/repo See 'GitHub URLs' below
    • +
    • tag A specific version tagged and published as tag See npm-tag(1)
    • +
    • path/path/path See Local Paths below
    • +
    +

    For example, these are all valid:

    +
    { "dependencies" :
    +  { "foo" : "1.0.0 - 2.9999.9999"
    +  , "bar" : ">=1.0.2 <2.1.2"
    +  , "baz" : ">1.0.2 <=2.3.4"
    +  , "boo" : "2.0.1"
    +  , "qux" : "<1.0.0 || >=2.3.1 <2.4.5 || >=2.5.2 <3.0.0"
    +  , "asd" : "http://asdf.com/asdf.tar.gz"
    +  , "til" : "~1.2"
    +  , "elf" : "~1.2.3"
    +  , "two" : "2.x"
    +  , "thr" : "3.3.x"
    +  , "lat" : "latest"
    +  , "dyl" : "file:../dyl"
    +  }
    +}
    +

    URLs as Dependencies

    +

    You may specify a tarball URL in place of a version range.

    +

    This tarball will be downloaded and installed locally to your package at +install time.

    +

    Git URLs as Dependencies

    +

    Git urls can be of the form:

    +
    git://github.com/user/project.git#commit-ish
    +git+ssh://user@hostname:project.git#commit-ish
    +git+ssh://user@hostname/project.git#commit-ish
    +git+http://user@hostname/project/blah.git#commit-ish
    +git+https://user@hostname/project/blah.git#commit-ish
    +

    The commit-ish can be any tag, sha, or branch which can be supplied as +an argument to git checkout. The default is master.

    +

    GitHub URLs

    +

    As of version 1.1.65, you can refer to GitHub urls as just "foo": "user/foo-project". For example:

    +
    {
    +  "name": "foo",
    +  "version": "0.0.0",
    +  "dependencies": {
    +    "express": "visionmedia/express"
    +  }
    +}
    +

    Local Paths

    +

    As of version 2.0.0 you can provide a path to a local directory that contains a +package. Local paths can be saved using npm install --save, using any of +these forms:

    +
    ../foo/bar
    +~/foo/bar
    +./foo/bar
    +/foo/bar
    +

    in which case they will be normalized to a relative path and added to your +package.json. For example:

    +
    {
    +  "name": "baz",
    +  "dependencies": {
    +    "bar": "file:../foo/bar"
    +  }
    +}
    +

    This feature is helpful for local offline development, and for creating +tests that require an npm install where you don't want to hit an +external server, but it should not be used when publishing packages +to the public registry.

    +

    devDependencies

    +

    If someone is planning on downloading and using your module in their +program, then they probably don't want or need to download and build +the external test or documentation framework that you use.

    +

    In this case, it's best to map these additional items in a devDependencies +object.

    +

    These things will be installed when doing npm link or npm install +from the root of a package, and can be managed like any other npm +configuration param. See npm-config(7) for more on the topic.

    +

    For build steps that are not platform-specific, such as compiling +CoffeeScript or other languages to JavaScript, use the prepublish +script to do this, and make the required package a devDependency.

    +

    For example:

    +
    { "name": "ethopia-waza",
    +  "description": "a delightfully fruity coffee varietal",
    +  "version": "1.2.3",
    +  "devDependencies": {
    +    "coffee-script": "~1.6.3"
    +  },
    +  "scripts": {
    +    "prepublish": "coffee -o lib/ -c src/waza.coffee"
    +  },
    +  "main": "lib/waza.js"
    +}
    +

    The prepublish script will be run before publishing, so that users +can consume the functionality without requiring them to compile it +themselves. In dev mode (ie, locally running npm install), it'll +run this script as well, so that you can test it easily.

    +

    peerDependencies

    +

    In some cases, you want to express the compatibility of your package with a +host tool or library, while not necessarily doing a require of this host. +This is usually referred to as a plugin. Notably, your module may be exposing +a specific interface, expected and specified by the host documentation.

    +

    For example:

    +
    {
    +  "name": "tea-latte",
    +  "version": "1.3.5"
    +  "peerDependencies": {
    +    "tea": "2.x"
    +  }
    +}
    +

    This ensures your package tea-latte can be installed along with the second +major version of the host package tea only. The host package is automatically +installed if needed. npm install tea-latte could possibly yield the following +dependency graph:

    +
    ├── tea-latte@1.3.5
    +└── tea@2.2.0
    +

    Trying to install another plugin with a conflicting requirement will cause an +error. For this reason, make sure your plugin requirement is as broad as +possible, rather than locking it down to specific patch versions.

    +

    Assuming the host complies with semver, only changes in +the host package's major version will break your plugin. Thus, if you've worked +with every 1.x version of the host package, use "^1.0" or "1.x" to express +this. If you depend on features introduced in 1.5.2, use ">= 1.5.2 < 2".

    +

    bundledDependencies

    +

    Array of package names that will be bundled when publishing the package.

    +
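
    For example (the package names are placeholders):

    +
    { "bundledDependencies" : [ "renderized", "super-streams" ] }
    +

    +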

    If this is spelled "bundleDependencies", then that is also honorable.

    +

    optionalDependencies

    +

    If a dependency can be used, but you would like npm to proceed if it cannot be +found or fails to install, then you may put it in the optionalDependencies +object. This is a map of package name to version or url, just like the +dependencies object. The difference is that build failures do not cause +installation to fail.

    +

    It is still your program's responsibility to handle the lack of the +dependency. For example, something like this:

    +
    try {
    +  var foo = require('foo')
    +  var fooVersion = require('foo/package.json').version
    +} catch (er) {
    +  foo = null
    +}
    +if ( notGoodFooVersion(fooVersion) ) {
    +  foo = null
    +}
    +
    +// .. then later in your program ..
    +
    +if (foo) {
    +  foo.doFooThings()
    +}
    +

    Entries in optionalDependencies will override entries of the same name in +dependencies, so it's usually best to put an entry in only one place.

    +

    engines

    +

    You can specify the version of node that your stuff works on:

    +
    { "engines" : { "node" : ">=0.10.3 <0.12" } }
    +

    And, like with dependencies, if you don't specify the version (or if you +specify "*" as the version), then any version of node will do.

    +

    If you specify an "engines" field, then npm will require that "node" be +somewhere on that list. If "engines" is omitted, then npm will just assume +that it works on node.

    +

    You can also use the "engines" field to specify which versions of npm +are capable of properly installing your program. For example:

    +
    { "engines" : { "npm" : "~1.0.20" } }
    +

    Note that, unless the user has set the engine-strict config flag, this +field is advisory only.

    +

    engineStrict

    +

    If you are sure that your module will definitely not run properly on +versions of Node/npm other than those specified in the engines object, +then you can set "engineStrict": true in your package.json file. +This will override the user's engine-strict config setting.

    +

    Please do not do this unless you are really very very sure. If your +engines object is something overly restrictive, you can quite easily and +inadvertently lock yourself into obscurity and prevent your users from +updating to new versions of Node. Consider this choice carefully. If +people abuse it, it will be removed in a future version of npm.

    +

    os

    +

    You can specify which operating systems your +module will run on:

    +
    "os" : [ "darwin", "linux" ]
    +

    You can also blacklist instead of whitelist operating systems, +just prepend the blacklisted os with a '!':

    +
    "os" : [ "!win32" ]
    +

    The host operating system is determined by process.platform

    +

    You can both blacklist and whitelist, although there isn't any +good reason to do this.

    +

    cpu

    +

    If your code only runs on certain cpu architectures, +you can specify which ones.

    +
    "cpu" : [ "x64", "ia32" ]
    +

    Like the os option, you can also blacklist architectures:

    +
    "cpu" : [ "!arm", "!mips" ]
    +

    The host architecture is determined by process.arch

    +

    preferGlobal

    +

    If your package is primarily a command-line application that should be +installed globally, then set this value to true to provide a warning +if it is installed locally.

    +
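
    For example, a hypothetical command-line tool:

    +
    { "name" : "my-cli"
    +, "version" : "1.0.0"
    +, "preferGlobal" : true
    +, "bin" : { "my-cli" : "./cli.js" } }
    +

    +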

    It doesn't actually prevent users from installing it locally, but it +does help prevent some confusion if it doesn't work as expected.

    +

    private

    +

    If you set "private": true in your package.json, then npm will refuse +to publish it.

    +

    This is a way to prevent accidental publication of private repositories. If +you would like to ensure that a given package is only ever published to a +specific registry (for example, an internal registry), then use the +publishConfig dictionary described below to override the registry config +param at publish-time.

    +

    publishConfig

    +

    This is a set of config values that will be used at publish-time. It's +especially handy if you want to set the tag or registry, so that you can +ensure that a given package is not tagged with "latest" or published to +the global public registry by default.

    +
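
    For example, pinning publishes to a hypothetical internal registry:

    +
    { "publishConfig" : { "registry" : "https://registry.internal.example.com/" } }
    +

    +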

    Any config values can be overridden, but of course only "tag" and +"registry" probably matter for the purposes of publishing.

    +

    See npm-config(7) to see the list of config options that can be +overridden.

    +

    DEFAULT VALUES

    +

    npm will default some values based on package contents.

    +
      +
    • "scripts": {"start": "node server.js"}

      +

      If there is a server.js file in the root of your package, then npm +will default the start command to node server.js.

      +
    • +
    • "scripts":{"preinstall": "node-gyp rebuild"}

      +

      If there is a binding.gyp file in the root of your package, npm will +default the preinstall command to compile using node-gyp.

      +
    • +
    • "contributors": [...]

      +

      If there is an AUTHORS file in the root of your package, npm will +treat each line as a Name <email> (url) format, where email and url +are optional. Lines which start with a # or are blank, will be +ignored.

      +
    • +
    +

    SEE ALSO

    +
    +

    diff --git a/deps/npm/html/partial/doc/files/npmrc.html b/deps/npm/html/partial/doc/files/npmrc.html
    new file mode 100644
    index 00000000000..ac386ca85e2
    --- /dev/null
    +++ b/deps/npm/html/partial/doc/files/npmrc.html
    @@ -0,0 +1,53 @@
    +

    npmrc

    The npm config files

    +

    DESCRIPTION

    +

    npm gets its config settings from the command line, environment +variables, and npmrc files.

    +

    The npm config command can be used to update and edit the contents +of the user and global npmrc files.

    +

    For a list of available configuration options, see npm-config(7).

    +

    FILES

    +

    The four relevant files are:

    +
      +
    • per-project config file (/path/to/my/project/.npmrc)
    • +
    • per-user config file (~/.npmrc)
    • +
    • global config file ($PREFIX/npmrc)
    • +
    • npm builtin config file (/path/to/npm/npmrc)
    • +
    +

    All npm config files are an ini-formatted list of key = value +parameters. Environment variables can be replaced using +${VARIABLE_NAME}. For example:

    +
    prefix = ${HOME}/.npm-packages
    +

    Each of these files is loaded, and config options are resolved in +priority order. For example, a setting in the userconfig file would +override the setting in the globalconfig file.

    +
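
    As an illustration (the internal registry URL is a placeholder), setting +the same key at two levels and letting the more specific file win:

    +
    npm config set registry https://registry.npmjs.org/ --global
    +npm config set registry https://registry.internal.example.com/
    +npm config get registry    # prints the per-user value
    +

    +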

    Per-project config file

    +

    When working locally in a project, a .npmrc file in the root of the +project (ie, a sibling of node_modules and package.json) will set +config values specific to this project.

    +

    Note that this only applies to the root of the project that you're +running npm in. It has no effect when your module is published. For +example, you can't publish a module that forces itself to install +globally, or in a different location.

    +

    Per-user config file

    +

    $HOME/.npmrc (or the userconfig param, if set in the environment +or on the command line)

    +

    Global config file

    +

    $PREFIX/etc/npmrc (or the globalconfig param, if set above): +This file is an ini-file formatted list of key = value parameters. +Environment variables can be replaced as above.

    +

    Built-in config file

    +

    path/to/npm/itself/npmrc

    +

    This is an unchangeable "builtin" configuration file that npm keeps +consistent across updates. Set fields in here using the ./configure +script that comes with npm. This is primarily for distribution +maintainers to override default configs in a standard and consistent +manner.

    +

    SEE ALSO

    +
    +

    diff --git a/deps/npm/html/partial/doc/files/package.json.html b/deps/npm/html/partial/doc/files/package.json.html
    new file mode 100644
    index 00000000000..df3bea83742
    --- /dev/null
    +++ b/deps/npm/html/partial/doc/files/package.json.html
    @@ -0,0 +1,465 @@
    +

    package.json

    Specifics of npm's package.json handling

    +

    DESCRIPTION

    +

    This document is all you need to know about what's required in your package.json +file. It must be actual JSON, not just a JavaScript object literal.

    +

    A lot of the behavior described in this document is affected by the config +settings described in npm-config(7).

    +

    name

    +

    The most important things in your package.json are the name and version fields. +Those are actually required, and your package won't install without +them. The name and version together form an identifier that is assumed +to be completely unique. Changes to the package should come along with +changes to the version.

    +
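
    So the smallest valid package.json looks something like this (the name +here is just a placeholder):

    +
    { "name" : "my-package"
    +, "version" : "1.0.0" }
    +

    +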

    The name is what your thing is called. Some tips:

    +
      +
    • Don't put "js" or "node" in the name. It's assumed that it's js, since you're +writing a package.json file, and you can specify the engine using the "engines" +field. (See below.)
    • +
    • The name ends up being part of a URL, an argument on the command line, and a +folder name. Any name with non-url-safe characters will be rejected. +Also, it can't start with a dot or an underscore.
    • +
    • The name will probably be passed as an argument to require(), so it should +be something short, but also reasonably descriptive.
    • +
    • You may want to check the npm registry to see if there's something by that name +already, before you get too attached to it. http://registry.npmjs.org/
    • +
    +

    A name can be optionally prefixed by a scope, e.g. @myorg/mypackage. See +npm-scope(7) for more detail.

    +

    version

    +

    The most important things in your package.json are the name and version fields. +Those are actually required, and your package won't install without +them. The name and version together form an identifier that is assumed +to be completely unique. Changes to the package should come along with +changes to the version.

    +

    Version must be parseable by +node-semver, which is bundled +with npm as a dependency. (npm install semver to use it yourself.)

    +

    More on version numbers and ranges at semver(7).

    +

    description

    +

    Put a description in it. It's a string. This helps people discover your +package, as it's listed in npm search.

    +

    keywords

    +

    Put keywords in it. It's an array of strings. This helps people +discover your package as it's listed in npm search.

    +

    homepage

    +

    The url to the project homepage.

    +

    NOTE: This is not the same as "url". If you put a "url" field, +then the registry will think it's a redirection to your package that has +been published somewhere else, and spit at you.

    +

    Literally. Spit. I'm so not kidding.

    +

    bugs

    +

    The url to your project's issue tracker and / or the email address to which +issues should be reported. These are helpful for people who encounter issues +with your package.

    +

    It should look like this:

    +
    { "url" : "http://github.com/owner/project/issues"
    +, "email" : "project@hostname.com"
    +}
    +

    You can specify either one or both values. If you want to provide only a url, +you can specify the value for "bugs" as a simple string instead of an object.

    +

    If a url is provided, it will be used by the npm bugs command.

    +

    license

    +

    You should specify a license for your package so that people know how they are +permitted to use it, and any restrictions you're placing on it.

    +

    The simplest way, assuming you're using a common license such as BSD-3-Clause +or MIT, is to just specify the standard SPDX ID of the license you're using, +like this:

    +
    { "license" : "BSD-3-Clause" }
    +

    You can check the full list of SPDX license IDs. +Ideally you should pick one that is +OSI approved.

    +

    It's also a good idea to include a LICENSE file at the top level in +your package.

    +

    people fields: author, contributors

    +

    The "author" is one person. "contributors" is an array of people. A "person" +is an object with a "name" field and optionally "url" and "email", like this:

    +
    { "name" : "Barney Rubble"
    +, "email" : "b@rubble.com"
    +, "url" : "http://barnyrubble.tumblr.com/"
    +}
    +

    Or you can shorten that all into a single string, and npm will parse it for you:

    +
    "Barney Rubble <b@rubble.com> (http://barnyrubble.tumblr.com/)
    +

    Both email and url are optional either way.

    +

    npm also sets a top-level "maintainers" field with your npm user info.

    +

    files

    +

    The "files" field is an array of files to include in your project. If +you name a folder in the array, then it will also include the files +inside that folder. (Unless they would be ignored by another rule.)

    +

    You can also provide a ".npmignore" file in the root of your package, +which will keep files from being included, even if they would be picked +up by the files array. The ".npmignore" file works just like a +".gitignore".

    +

    main

    +

    The main field is a module ID that is the primary entry point to your program. +That is, if your package is named foo, and a user installs it, and then does +require("foo"), then your main module's exports object will be returned.

    +

    This should be a module ID relative to the root of your package folder.

    +

    For most modules, it makes the most sense to have a main script and often not +much else.

    +

    bin

    +

    A lot of packages have one or more executable files that they'd like to +install into the PATH. npm makes this pretty easy (in fact, it uses this +feature to install the "npm" executable.)

    +

    To use this, supply a bin field in your package.json which is a map of +command name to local file name. On install, npm will symlink that file into +prefix/bin for global installs, or ./node_modules/.bin/ for local +installs.

    +

    For example, npm has this:

    +
    { "bin" : { "npm" : "./cli.js" } }
    +

    So, when you install npm, it'll create a symlink from the cli.js script to +/usr/local/bin/npm.

    +

    If you have a single executable, and its name should be the name +of the package, then you can just supply it as a string. For example:

    +
    { "name": "my-program"
    +, "version": "1.2.5"
    +, "bin": "./path/to/program" }
    +

    would be the same as this:

    +
    { "name": "my-program"
    +, "version": "1.2.5"
    +, "bin" : { "my-program" : "./path/to/program" } }
    +

    man

    +

    Specify either a single file or an array of filenames to put in place for the +man program to find.

    +

    If only a single file is provided, then it's installed such that it is the +result from man <pkgname>, regardless of its actual filename. For example:

    +
    { "name" : "foo"
    +, "version" : "1.2.3"
    +, "description" : "A packaged foo fooer for fooing foos"
    +, "main" : "foo.js"
    +, "man" : "./man/doc.1"
    +}
    +

    would link the ./man/doc.1 file in such that it is the target for man foo

    +

    If the filename doesn't start with the package name, then it's prefixed. +So, this:

    +
    { "name" : "foo"
    +, "version" : "1.2.3"
    +, "description" : "A packaged foo fooer for fooing foos"
    +, "main" : "foo.js"
    +, "man" : [ "./man/foo.1", "./man/bar.1" ]
    +}
    +

    will create files to do man foo and man foo-bar.

    +

    Man files must end with a number, and optionally a .gz suffix if they are +compressed. The number dictates which man section the file is installed into.

    +
    { "name" : "foo"
    +, "version" : "1.2.3"
    +, "description" : "A packaged foo fooer for fooing foos"
    +, "main" : "foo.js"
    +, "man" : [ "./man/foo.1", "./man/foo.2" ]
    +}
    +

    will create entries for man foo and man 2 foo

    +

    directories

    +

    The CommonJS Packages spec details a +few ways that you can indicate the structure of your package using a directories +object. If you look at npm's package.json, +you'll see that it has directories for doc, lib, and man.

    +

    In the future, this information may be used in other creative ways.

    +

    directories.lib

    +

    Tell people where the bulk of your library is. Nothing special is done +with the lib folder in any way, but it's useful meta info.

    +

    directories.bin

    +

    If you specify a bin directory, then all the files in that folder will +be added as children of the bin path.

    +

    If you have a bin path already, then this has no effect.

    +

    directories.man

    +

    A folder that is full of man pages. Sugar to generate a "man" array by +walking the folder.

    +

    directories.doc

    +

    Put markdown files in here. Eventually, these will be displayed nicely, +maybe, someday.

    +

    directories.example

    +

    Put example scripts in here. Someday, it might be exposed in some clever way.

    +

    repository

    +

    Specify the place where your code lives. This is helpful for people who +want to contribute. If the git repo is on github, then the npm docs +command will be able to find you.

    +

    Do it like this:

    +
    "repository" :
    +  { "type" : "git"
    +  , "url" : "http://github.com/npm/npm.git"
    +  }
    +
    +"repository" :
    +  { "type" : "svn"
    +  , "url" : "http://v8.googlecode.com/svn/trunk/"
    +  }
    +

    The URL should be a publicly available (perhaps read-only) url that can be handed +directly to a VCS program without any modification. It should not be a url to an +html project page that you put in your browser. It's for computers.

    +

    scripts

    +

    The "scripts" property is a dictionary containing script commands that are run +at various times in the lifecycle of your package. The key is the lifecycle +event, and the value is the command to run at that point.

    +
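
    A sketch, assuming hypothetical test.js and build.js scripts in the +package root:

    +
    { "scripts" : { "test" : "node test.js"
    +              , "prepublish" : "node build.js" } }
    +

    +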

    See npm-scripts(7) to find out more about writing package scripts.

    +

    config

    +

    A "config" object can be used to set configuration parameters used in package +scripts that persist across upgrades. For instance, if a package had the +following:

    +
    { "name" : "foo"
    +, "config" : { "port" : "8080" } }
    +

    and then had a "start" command that then referenced the +npm_package_config_port environment variable, then the user could +override that by doing npm config set foo:port 8001.

    +
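
    For instance, a hypothetical server.js run by npm start could pick the +value up like this:

    +
    // npm exposes each config key as an npm_package_config_* variable
    +var port = process.env.npm_package_config_port || 8080
    +require("http").createServer(function (req, res) {
    +  res.end("hello\n")
    +}).listen(port)
    +

    +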

    See npm-config(7) and npm-scripts(7) for more on package +configs.

    +

    dependencies

    +

    Dependencies are specified in a simple object that maps a package name to a +version range. The version range is a string which has one or more +space-separated descriptors. Dependencies can also be identified with a +tarball or git URL.

    +

    Please do not put test harnesses or transpilers in your +dependencies object. See devDependencies, below.

    +

    See semver(7) for more details about specifying version ranges.

    +
      +
    • version Must match version exactly
    • +
    • >version Must be greater than version
    • +
    • >=version etc
    • +
    • <version
    • +
    • <=version
    • +
    • ~version "Approximately equivalent to version" See semver(7)
    • +
    • ^version "Compatible with version" See semver(7)
    • +
    • 1.2.x 1.2.0, 1.2.1, etc., but not 1.3.0
    • +
    • http://... See 'URLs as Dependencies' below
    • +
    • * Matches any version
    • +
    • "" (just an empty string) Same as *
    • +
    • version1 - version2 Same as >=version1 <=version2.
    • +
    • range1 || range2 Passes if either range1 or range2 are satisfied.
    • +
    • git... See 'Git URLs as Dependencies' below
    • +
    • user/repo See 'GitHub URLs' below
    • +
    • tag A specific version tagged and published as tag See npm-tag(1)
    • +
    • path/path/path See Local Paths below
    • +
    +

    For example, these are all valid:

    +
    { "dependencies" :
    +  { "foo" : "1.0.0 - 2.9999.9999"
    +  , "bar" : ">=1.0.2 <2.1.2"
    +  , "baz" : ">1.0.2 <=2.3.4"
    +  , "boo" : "2.0.1"
    +  , "qux" : "<1.0.0 || >=2.3.1 <2.4.5 || >=2.5.2 <3.0.0"
    +  , "asd" : "http://asdf.com/asdf.tar.gz"
    +  , "til" : "~1.2"
    +  , "elf" : "~1.2.3"
    +  , "two" : "2.x"
    +  , "thr" : "3.3.x"
    +  , "lat" : "latest"
    +  , "dyl" : "file:../dyl"
    +  }
    +}
    +

    URLs as Dependencies

    +

    You may specify a tarball URL in place of a version range.

    +

    This tarball will be downloaded and installed locally to your package at +install time.

    +

    Git URLs as Dependencies

    +

    Git urls can be of the form:

    +
    git://github.com/user/project.git#commit-ish
    +git+ssh://user@hostname:project.git#commit-ish
    +git+ssh://user@hostname/project.git#commit-ish
    +git+http://user@hostname/project/blah.git#commit-ish
    +git+https://user@hostname/project/blah.git#commit-ish
    +

    The commit-ish can be any tag, sha, or branch which can be supplied as +an argument to git checkout. The default is master.

    +

    GitHub URLs

    +

    As of version 1.1.65, you can refer to GitHub urls as just "foo": "user/foo-project". For example:

    +
    {
    +  "name": "foo",
    +  "version": "0.0.0",
    +  "dependencies": {
    +    "express": "visionmedia/express"
    +  }
    +}
    +

    Local Paths

    +

    As of version 2.0.0 you can provide a path to a local directory that contains a +package. Local paths can be saved using npm install --save, using any of +these forms:

    +
    ../foo/bar
    +~/foo/bar
    +./foo/bar
    +/foo/bar
    +

    in which case they will be normalized to a relative path and added to your +package.json. For example:

    +
    {
    +  "name": "baz",
    +  "dependencies": {
    +    "bar": "file:../foo/bar"
    +  }
    +}
    +

    This feature is helpful for local offline development, and for creating +tests that require an npm install where you don't want to hit an +external server, but it should not be used when publishing packages +to the public registry.

    +

    devDependencies

    +

    If someone is planning on downloading and using your module in their +program, then they probably don't want or need to download and build +the external test or documentation framework that you use.

    +

    In this case, it's best to map these additional items in a devDependencies +object.

    +

    These things will be installed when doing npm link or npm install +from the root of a package, and can be managed like any other npm +configuration param. See npm-config(7) for more on the topic.

    +

    For build steps that are not platform-specific, such as compiling +CoffeeScript or other languages to JavaScript, use the prepublish +script to do this, and make the required package a devDependency.

    +

    For example:

    +
    { "name": "ethopia-waza",
    +  "description": "a delightfully fruity coffee varietal",
    +  "version": "1.2.3",
    +  "devDependencies": {
    +    "coffee-script": "~1.6.3"
    +  },
    +  "scripts": {
    +    "prepublish": "coffee -o lib/ -c src/waza.coffee"
    +  },
    +  "main": "lib/waza.js"
    +}
    +

    The prepublish script will be run before publishing, so that users +can consume the functionality without requiring them to compile it +themselves. In dev mode (ie, locally running npm install), it'll +run this script as well, so that you can test it easily.

    +

    peerDependencies

    +

    In some cases, you want to express the compatibility of your package with a +host tool or library, while not necessarily doing a require of this host. +This is usually referred to as a plugin. Notably, your module may be exposing +a specific interface, expected and specified by the host documentation.

    +

    For example:

    +
    {
    +  "name": "tea-latte",
    +  "version": "1.3.5"
    +  "peerDependencies": {
    +    "tea": "2.x"
    +  }
    +}
    +

    This ensures your package tea-latte can be installed along with the second +major version of the host package tea only. The host package is automatically +installed if needed. npm install tea-latte could possibly yield the following +dependency graph:

    +
    ├── tea-latte@1.3.5
    +└── tea@2.2.0
    +

    Trying to install another plugin with a conflicting requirement will cause an +error. For this reason, make sure your plugin requirement is as broad as +possible, rather than locking it down to specific patch versions.

    +

    Assuming the host complies with semver, only changes in +the host package's major version will break your plugin. Thus, if you've worked +with every 1.x version of the host package, use "^1.0" or "1.x" to express +this. If you depend on features introduced in 1.5.2, use ">= 1.5.2 < 2".

    +

    bundledDependencies

    +

    Array of package names that will be bundled when publishing the package.

    +

    If this is spelled "bundleDependencies", then that is also honorable.

    +

    optionalDependencies

    +

    If a dependency can be used, but you would like npm to proceed if it cannot be +found or fails to install, then you may put it in the optionalDependencies +object. This is a map of package name to version or url, just like the +dependencies object. The difference is that build failures do not cause +installation to fail.

    +

    It is still your program's responsibility to handle the lack of the +dependency. For example, something like this:

    +
    try {
    +  var foo = require('foo')
    +  var fooVersion = require('foo/package.json').version
    +} catch (er) {
    +  foo = null
    +}
    +if ( notGoodFooVersion(fooVersion) ) {
    +  foo = null
    +}
    +
    +// .. then later in your program ..
    +
    +if (foo) {
    +  foo.doFooThings()
    +}
    +

    Entries in optionalDependencies will override entries of the same name in +dependencies, so it's usually best to put an entry in only one place.

    +

    engines

    +

    You can specify the version of node that your stuff works on:

    +
    { "engines" : { "node" : ">=0.10.3 <0.12" } }
    +

    And, like with dependencies, if you don't specify the version (or if you +specify "*" as the version), then any version of node will do.

    +

    If you specify an "engines" field, then npm will require that "node" be +somewhere on that list. If "engines" is omitted, then npm will just assume +that it works on node.

    +

    You can also use the "engines" field to specify which versions of npm +are capable of properly installing your program. For example:

    +
    { "engines" : { "npm" : "~1.0.20" } }
    +

    Note that, unless the user has set the engine-strict config flag, this +field is advisory only.

    +

    engineStrict

    +

    If you are sure that your module will definitely not run properly on +versions of Node/npm other than those specified in the engines object, +then you can set "engineStrict": true in your package.json file. +This will override the user's engine-strict config setting.

    +

    Please do not do this unless you are really very very sure. If your +engines object is something overly restrictive, you can quite easily and +inadvertently lock yourself into obscurity and prevent your users from +updating to new versions of Node. Consider this choice carefully. If +people abuse it, it will be removed in a future version of npm.

    +

    os

    +

    You can specify which operating systems your +module will run on:

    +
    "os" : [ "darwin", "linux" ]
    +

    You can also blacklist instead of whitelist operating systems, +just prepend the blacklisted os with a '!':

    +
    "os" : [ "!win32" ]
    +

    The host operating system is determined by process.platform

    +

    You can both blacklist and whitelist, although there isn't any +good reason to do this.

    +

    cpu

    +

    If your code only runs on certain cpu architectures, +you can specify which ones.

    +
    "cpu" : [ "x64", "ia32" ]
    +

    Like the os option, you can also blacklist architectures:

    +
    "cpu" : [ "!arm", "!mips" ]
    +

    The host architecture is determined by process.arch

    +

    preferGlobal

    +

    If your package is primarily a command-line application that should be +installed globally, then set this value to true to provide a warning +if it is installed locally.

    +

    It doesn't actually prevent users from installing it locally, but it +does help prevent some confusion if it doesn't work as expected.

    +

    private

    +

    If you set "private": true in your package.json, then npm will refuse +to publish it.

    +

    This is a way to prevent accidental publication of private repositories. If +you would like to ensure that a given package is only ever published to a +specific registry (for example, an internal registry), then use the +publishConfig dictionary described below to override the registry config +param at publish-time.

    +

    publishConfig

    +

    This is a set of config values that will be used at publish-time. It's +especially handy if you want to set the tag or registry, so that you can +ensure that a given package is not tagged with "latest" or published to +the global public registry by default.

    +

    Any config values can be overridden, but of course only "tag" and +"registry" probably matter for the purposes of publishing.

    +

    See npm-config(7) to see the list of config options that can be +overridden.

    +

    DEFAULT VALUES

    +

    npm will default some values based on package contents.

    +
      +
    • "scripts": {"start": "node server.js"}

      +

      If there is a server.js file in the root of your package, then npm +will default the start command to node server.js.

      +
    • +
    • "scripts":{"preinstall": "node-gyp rebuild"}

      +

      If there is a binding.gyp file in the root of your package, npm will +default the preinstall command to compile using node-gyp.

      +
    • +
    • "contributors": [...]

      +

      If there is an AUTHORS file in the root of your package, npm will +treat each line as a Name <email> (url) format, where email and url +are optional. Lines which start with a # or are blank, will be +ignored.

      +
    • +
    +

    SEE ALSO

    +
    +

    diff --git a/deps/npm/html/partial/doc/index.html b/deps/npm/html/partial/doc/index.html
    new file mode 100644
    index 00000000000..f6678d93715
    --- /dev/null
    +++ b/deps/npm/html/partial/doc/index.html
    @@ -0,0 +1,210 @@
    +

    npm-index

    Index of all npm documentation

    +

    README

    +

    node package manager

    +

    Command Line Documentation

    +

    Using npm on the command line

    +

    npm(1)

    +

    node package manager

    +

    npm-adduser(1)

    +

    Add a registry user account

    +

    npm-bin(1)

    +

    Display npm bin folder

    +

    npm-bugs(1)

    +

    Bugs for a package in a web browser maybe

    +

    npm-build(1)

    +

    Build a package

    +

    npm-bundle(1)

    +

    REMOVED

    +

    npm-cache(1)

    +

    Manipulates packages cache

    +

    npm-completion(1)

    +

    Tab Completion for npm

    +

    npm-config(1)

    +

    Manage the npm configuration files

    +

    npm-dedupe(1)

    +

    Reduce duplication

    +

    npm-deprecate(1)

    +

    Deprecate a version of a package

    +

    npm-docs(1)

    +

    Docs for a package in a web browser maybe

    +

    npm-edit(1)

    +

    Edit an installed package

    +

    npm-explore(1)

    +

    Browse an installed package

    +

    npm-help-search(1)

    +

    Search npm help documentation

    +

    npm-help(1)

    +

    Get help on npm

    +

    npm-init(1)

    +

    Interactively create a package.json file

    +

    npm-install(1)

    +

    Install a package

    +

    npm-link(1)

    +

    Symlink a package folder

    +

    npm-ls(1)

    +

    List installed packages

    +

    npm-outdated(1)

    +

    Check for outdated packages

    +

    npm-owner(1)

    +

    Manage package owners

    +

    npm-pack(1)

    +

    Create a tarball from a package

    +

    npm-prefix(1)

    +

    Display prefix

    +

    npm-prune(1)

    +

    Remove extraneous packages

    +

    npm-publish(1)

    +

    Publish a package

    +

    npm-rebuild(1)

    +

    Rebuild a package

    +

    npm-repo(1)

    +

    Open package repository page in the browser

    +

    npm-restart(1)

    +

    Start a package

    +

    npm-rm(1)

    +

    Remove a package

    +

    npm-root(1)

    +

    Display npm root

    +

    npm-run-script(1)

    +

    Run arbitrary package scripts

    +

    npm-search(1)

    +

    Search for packages

    +

    npm-shrinkwrap(1)

    +

    Lock down dependency versions

    +

    npm-star(1)

    +

    Mark your favorite packages

    +

    npm-stars(1)

    +

    View packages marked as favorites

    +

    npm-start(1)

    +

    Start a package

    +

    npm-stop(1)

    +

    Stop a package

    +

    npm-tag(1)

    +

    Tag a published version

    +

    npm-test(1)

    +

    Test a package

    +

    npm-uninstall(1)

    +

    Remove a package

    +

    npm-unpublish(1)

    +

    Remove a package from the registry

    +

    npm-update(1)

    +

    Update a package

    +

    npm-version(1)

    +

    Bump a package version

    +

    npm-view(1)

    +

    View registry info

    +

    npm-whoami(1)

    +

    Display npm username

    +

    API Documentation

    +

    Using npm in your Node programs

    +

    npm(3)

    +

    node package manager

    +

    npm-bin(3)

    +

    Display npm bin folder

    +

    npm-bugs(3)

    +

    Bugs for a package in a web browser maybe

    +

    npm-cache(3)

    +

    manage the npm cache programmatically

    +

    npm-commands(3)

    +

    npm commands

    +

    npm-config(3)

    +

    Manage the npm configuration files

    +

    npm-deprecate(3)

    +

    Deprecate a version of a package

    +

    npm-docs(3)

    +

    Docs for a package in a web browser maybe

    +

    npm-edit(3)

    +

    Edit an installed package

    +

    npm-explore(3)

    +

    Browse an installed package

    +

    npm-help-search(3)

    +

    Search the help pages

    +

    npm-init(3)

    +

    Interactively create a package.json file

    +

    npm-install(3)

    +

    install a package programmatically

    + +

    Symlink a package folder

    +

    npm-load(3)

    +

    Load config settings

    +

    npm-ls(3)

    +

    List installed packages

    +

    npm-outdated(3)

    +

    Check for outdated packages

    +

    npm-owner(3)

    +

    Manage package owners

    +

    npm-pack(3)

    +

    Create a tarball from a package

    +

    npm-prefix(3)

    +

    Display prefix

    +

    npm-prune(3)

    +

    Remove extraneous packages

    +

    npm-publish(3)

    +

    Publish a package

    +

    npm-rebuild(3)

    +

    Rebuild a package

    +

    npm-repo(3)

    +

    Open package repository page in the browser

    +

    npm-restart(3)

    +

    Start a package

    +

    npm-root(3)

    +

    Display npm root

    +

    npm-run-script(3)

    +

    Run arbitrary package scripts

    +

    npm-search(3)

    +

    Search for packages

    +

    npm-shrinkwrap(3)

    +

    programmatically generate package shrinkwrap file

    +

    npm-start(3)

    +

    Start a package

    +

    npm-stop(3)

    +

    Stop a package

    +

    npm-tag(3)

    +

    Tag a published version

    +

    npm-test(3)

    +

    Test a package

    +

    npm-uninstall(3)

    +

    uninstall a package programmatically

    +

    npm-unpublish(3)

    +

    Remove a package from the registry

    +

    npm-update(3)

    +

    Update a package

    +

    npm-version(3)

    +

    Bump a package version

    +

    npm-view(3)

    +

    View registry info

    +

    npm-whoami(3)

    +

    Display npm username

    +

    Files

    +

    File system structures npm uses

    +

    npm-folders(5)

    +

    Folder Structures Used by npm

    +

    npmrc(5)

    +

    The npm config files

    +

    package.json(5)

    +

    Specifics of npm's package.json handling

    +

    Misc

    +

    Various other bits and bobs

    +

    npm-coding-style(7)

    +

    npm's "funny" coding style

    +

    npm-config(7)

    +

    More than you probably want to know about npm configuration

    +

    npm-developers(7)

    +

    Developer Guide

    +

    npm-disputes(7)

    +

    Handling Module Name Disputes

    +

    npm-faq(7)

    +

    Frequently Asked Questions

    +

    npm-index(7)

    +

    Index of all npm documentation

    +

    npm-registry(7)

    +

    The JavaScript Package Registry

    +

    npm-scope(7)

    +

    Scoped packages

    +

    npm-scripts(7)

    +

    How npm handles the "scripts" field

    +

    removing-npm(7)

    +

    Cleaning the Slate

    +

    semver(7)

    +

    The semantic versioner for npm

    + diff --git a/deps/npm/html/partial/doc/misc/npm-coding-style.html b/deps/npm/html/partial/doc/misc/npm-coding-style.html new file mode 100644 index 00000000000..732b326c997 --- /dev/null +++ b/deps/npm/html/partial/doc/misc/npm-coding-style.html @@ -0,0 +1,127 @@ +

npm-coding-style

npm's "funny" coding style

DESCRIPTION

npm's coding style is a bit unconventional. It is not different for difference's sake, but rather a carefully crafted style that is designed to reduce visual clutter and make bugs more apparent.

If you want to contribute to npm (which is very encouraged), you should make your code conform to npm's style.

Note: this concerns npm's code, not the specific packages at npmjs.org.

Line Length

Keep lines shorter than 80 characters. It's better for lines to be too short than too long. Break up long lists, objects, and other statements onto multiple lines.

Indentation

Two spaces. Tabs are better, but they look like hell in web browsers (and on github), and node uses two spaces, so that's that.

Configure your editor appropriately.

Curly braces

Curly braces belong on the same line as the thing that necessitates them.

Bad:

    function ()
    {

Good:

    function () {

If a block needs to wrap to the next line, use a curly brace. Don't use it if it doesn't.

Bad:

    if (foo) { bar() }
    while (foo)
      bar()

Good:

    if (foo) bar()
    while (foo) {
      bar()
    }

Semicolons

Don't use them except in four situations:

• for (;;) loops. They're actually required.
• null loops like: while (something) ; (But you'd better have a good reason for doing that.)
• case "foo": doSomething(); break
• In front of a leading ( or [ at the start of the line. This prevents the expression from being interpreted as a function call or property access, respectively.

Some examples of good semicolon usage:

    ;(x || y).doSomething()
    ;[a, b, c].forEach(doSomething)
    for (var i = 0; i < 10; i ++) {
      switch (state) {
        case "begin": start(); continue
        case "end": finish(); break
        default: throw new Error("unknown state")
      }
      end()
    }

Note that starting lines with - and + also should be prefixed with a semicolon, but this is much less common.

Comma First

If there is a list of things separated by commas, and it wraps across multiple lines, put the comma at the start of the next line, directly below the token that starts the list. Put the final token in the list on a line by itself. For example:

    var magicWords = [ "abracadabra"
                     , "gesundheit"
                     , "ventrilo"
                     ]
      , spells = { "fireball" : function () { setOnFire() }
                 , "water" : function () { putOut() }
                 }
      , a = 1
      , b = "abc"
      , etc
      , somethingElse

Whitespace

Put a single space in front of ( for anything other than a function call. Also use a single space wherever it makes things more readable.

Don't leave trailing whitespace at the end of lines. Don't indent empty lines. Don't use more spaces than are helpful.

Functions

Use named functions. They make stack traces a lot easier to read.
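
A minimal sketch of why this matters (the function and file names below are made up for illustration): a named callback shows up by name in a stack trace, where an anonymous one would be reported as <anonymous>.

    var fs = require("fs")

    function readConfig (file, cb) {
      // "onRead" will appear in stack traces; an anonymous function
      // here would show up as <anonymous> instead.
      fs.readFile(file, "utf8", function onRead (er, data) {
        if (er) return cb(er)
        cb(null, data)
      })
    }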

Callbacks, Sync/async Style

Use the asynchronous/non-blocking versions of things as much as possible. It might make more sense for npm to use the synchronous fs APIs, but this way, the fs and http and child process stuff all uses the same callback-passing methodology.

The callback should always be the last argument in the list. Its first argument is the Error or null.

Be very careful never to ever ever throw anything. It's worse than useless. Just send the error message back as the first argument to the callback.

Errors

Always create a new Error object with your message. Don't just return a string message to the callback. Stack traces are handy.
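
Taken together, a small sketch of the two rules above (the validation logic is invented for illustration): the callback comes last, and failures go back as real Error objects rather than being thrown or returned as strings.

    function loadPort (config, cb) {
      var port = parseInt(config.port, 10)
      if (isNaN(port)) {
        // pass back an Error object, never a bare string
        return cb(new Error("invalid port: " + config.port))
      }
      cb(null, port)
    }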

Logging

Logging is done using the npmlog utility.

Please clean up logs when they are no longer helpful. In particular, logging the same object over and over again is not helpful. Logs should report what's happening so that it's easier to track down where a fault occurs.

Use appropriate log levels. See npm-config(7) and search for "loglevel".

Case, naming, etc.

Use lowerCamelCase for multiword identifiers when they refer to objects, functions, methods, properties, or anything not specified in this section.

Use UpperCamelCase for class names (things that you'd pass to "new").

Use all-lower-hyphen-css-case for multiword filenames and config keys.

Use named functions. They make stack traces easier to follow.

Use CAPS_SNAKE_CASE for constants, things that should never change and are rarely used.

Use a single uppercase letter for function names where the function would normally be anonymous, but needs to call itself recursively. It makes it clear that it's a "throwaway" function.
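
A compact, illustrative sketch of those conventions side by side (every identifier below is made up):

    var MAX_RETRIES = 3                 // CAPS_SNAKE_CASE constant

    function RetryQueue () {}           // UpperCamelCase class

    var retryQueue = new RetryQueue()   // lowerCamelCase instance

    // single uppercase letter for a throwaway self-recursive function
    ;(function R (n) {
      if (n > 0) R(n - 1)
    })(MAX_RETRIES)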

null, undefined, false, 0

Boolean variables and functions should always be either true or false. Don't set it to 0 unless it's supposed to be a number.

When something is intentionally missing or removed, set it to null.

Don't set things to undefined. Reserve that value to mean "not yet set to anything."

Boolean objects are verboten.

SEE ALSO

diff --git a/deps/npm/html/partial/doc/misc/npm-config.html b/deps/npm/html/partial/doc/misc/npm-config.html
new file mode 100644
index 00000000000..87409720b9f
--- /dev/null
+++ b/deps/npm/html/partial/doc/misc/npm-config.html
@@ -0,0 +1,739 @@

npm-config

More than you probably want to know about npm configuration

DESCRIPTION

npm gets its configuration values from 6 sources, in this priority:

Command Line Flags

Putting --foo bar on the command line sets the foo configuration parameter to "bar". A -- argument tells the cli parser to stop reading flags. A --flag parameter that is at the end of the command will be given the value of true.
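
For example (package names and the registry URL here are just placeholders):

    npm install sax --loglevel verbose      # sets loglevel to "verbose"
    npm install sax --force                 # trailing --flag means force = true
    npm install --registry http://localhost:4873 -- sax
                                            # -- stops flag parsing; sax is an argument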

Environment Variables

Any environment variables that start with npm_config_ will be interpreted as a configuration parameter. For example, putting npm_config_foo=bar in your environment will set the foo configuration parameter to bar. Any environment configurations that are not given a value will be given the value of true. Config values are case-insensitive, so NPM_CONFIG_FOO=bar will work the same.
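
A quick sketch of the same setting expressed both ways (the registry URL is illustrative):

    # one-off, via the environment:
    npm_config_registry=http://localhost:4873 npm install sax

    # equivalent command-line flag:
    npm install sax --registry http://localhost:4873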

npmrc Files

The four relevant files are:

• per-project config file (/path/to/my/project/.npmrc)
• per-user config file (~/.npmrc)
• global config file ($PREFIX/npmrc)
• npm builtin config file (/path/to/npm/npmrc)

See npmrc(5) for more details.

Default Configs

A set of configuration parameters that are internal to npm, and are defaults if nothing else is specified.

Shorthands and Other CLI Niceties

The following shorthands are parsed on the command-line:

• -v: --version
• -h, -?, --help, -H: --usage
• -s, --silent: --loglevel silent
• -q, --quiet: --loglevel warn
• -d: --loglevel info
• -dd, --verbose: --loglevel verbose
• -ddd: --loglevel silly
• -g: --global
• -C: --prefix
• -l: --long
• -m: --message
• -p, --porcelain: --parseable
• -reg: --registry
• -f: --force
• -desc: --description
• -S: --save
• -D: --save-dev
• -O: --save-optional
• -B: --save-bundle
• -E: --save-exact
• -y: --yes
• -n: --yes false
• ll and la commands: ls --long

If the specified configuration param resolves unambiguously to a known configuration parameter, then it is expanded to that configuration parameter. For example:

    npm ls --par
    # same as:
    npm ls --parseable

If multiple single-character shorthands are strung together, and the resulting combination is unambiguously not some other configuration param, then it is expanded to its various component pieces. For example:

    npm ls -gpld
    # same as:
    npm ls --global --parseable --long --loglevel info

Per-Package Config Settings

When running scripts (see npm-scripts(7)) the package.json "config" keys are overwritten in the environment if there is a config param of <name>[@<version>]:<key>. For example, if the package.json has this:

    { "name" : "foo"
    , "config" : { "port" : "8080" }
    , "scripts" : { "start" : "node server.js" } }

and the server.js is this:

    http.createServer(...).listen(process.env.npm_package_config_port)

then the user could change the behavior by doing:

    npm config set foo:port 80

See package.json(5) for more information.

Config Settings

always-auth

• Default: false
• Type: Boolean

Force npm to always require authentication when accessing the registry, even for GET requests.

bin-links

• Default: true
• Type: Boolean

Tells npm to create symlinks (or .cmd shims on Windows) for package executables.

Set to false to have it not do this. This can be used to work around the fact that some file systems don't support symlinks, even on ostensibly Unix systems.

browser

• Default: OS X: "open", Windows: "start", Others: "xdg-open"
• Type: String

The browser that is called by the npm docs command to open websites.

ca

• Default: The npm CA certificate
• Type: String or null

The Certificate Authority signing certificate that is trusted for SSL connections to the registry.

Set to null to only allow "known" registrars, or to a specific CA cert to trust only that specific signing authority.

See also the strict-ssl config.

cafile

• Default: null
• Type: path

A path to a file containing one or multiple Certificate Authority signing certificates. Similar to the ca setting, but allows for multiple CAs, as well as for the CA information to be stored in a file on disk.

cache

• Default: Windows: %AppData%\npm-cache, Posix: ~/.npm
• Type: path

The location of npm's cache directory. See npm-cache(1).

cache-lock-stale

• Default: 60000 (1 minute)
• Type: Number

The number of ms before cache folder lockfiles are considered stale.

cache-lock-retries

• Default: 10
• Type: Number

Number of times to retry to acquire a lock on cache folder lockfiles.

cache-lock-wait

• Default: 10000 (10 seconds)
• Type: Number

Number of ms to wait for cache lock files to expire.

cache-max

• Default: Infinity
• Type: Number

The maximum time (in seconds) to keep items in the registry cache before re-checking against the registry.

Note that no purging is done unless the npm cache clean command is explicitly used, and that only GET requests use the cache.

cache-min

• Default: 10
• Type: Number

The minimum time (in seconds) to keep items in the registry cache before re-checking against the registry.

Note that no purging is done unless the npm cache clean command is explicitly used, and that only GET requests use the cache.

cert

• Default: null
• Type: String

A client certificate to pass when accessing the registry.

color

• Default: true on Posix, false on Windows
• Type: Boolean or "always"

If false, never shows colors. If "always" then always shows colors. If true, then only prints color codes for tty file descriptors.

depth

• Default: Infinity
• Type: Number

The depth to go when recursing directories for npm ls and npm cache ls.

description

• Default: true
• Type: Boolean

Show the description in npm search.

dev

• Default: false
• Type: Boolean

Install dev-dependencies along with packages.

Note that dev-dependencies are also installed if the npat flag is set.

editor

• Default: EDITOR environment variable if set, or "vi" on Posix, or "notepad" on Windows.
• Type: path

The command to run for npm edit or npm config edit.

engine-strict

• Default: false
• Type: Boolean

If set to true, then npm will stubbornly refuse to install (or even consider installing) any package that claims to not be compatible with the current Node.js version.

force

• Default: false
• Type: Boolean

Makes various commands more forceful.

• lifecycle script failure does not block progress.
• publishing clobbers previously published versions.
• skips cache when requesting from the registry.
• prevents checks against clobbering non-npm files.

fetch-retries

• Default: 2
• Type: Number

The "retries" config for the retry module to use when fetching packages from the registry.

fetch-retry-factor

• Default: 10
• Type: Number

The "factor" config for the retry module to use when fetching packages.

fetch-retry-mintimeout

• Default: 10000 (10 seconds)
• Type: Number

The "minTimeout" config for the retry module to use when fetching packages.

fetch-retry-maxtimeout

• Default: 60000 (1 minute)
• Type: Number

The "maxTimeout" config for the retry module to use when fetching packages.

git

• Default: "git"
• Type: String

The command to use for git commands. If git is installed on the computer, but is not in the PATH, then set this to the full path to the git binary.

git-tag-version

• Default: true
• Type: Boolean

Tag the commit when using the npm version command.

global

• Default: false
• Type: Boolean

Operates in "global" mode, so that packages are installed into the prefix folder instead of the current working directory. See npm-folders(5) for more on the differences in behavior.

• packages are installed into the {prefix}/lib/node_modules folder, instead of the current working directory.
• bin files are linked to {prefix}/bin
• man pages are linked to {prefix}/share/man

globalconfig

• Default: {prefix}/etc/npmrc
• Type: path

The config file to read for global config options.

group

• Default: GID of the current process
• Type: String or Number

The group to use when running package scripts in global mode as the root user.

heading

• Default: "npm"
• Type: String

The string that starts all the debugging log output.

https-proxy

• Default: the HTTPS_PROXY or https_proxy or HTTP_PROXY or http_proxy environment variables.
• Type: url

A proxy to use for outgoing https requests.

ignore-scripts

• Default: false
• Type: Boolean

If true, npm does not run scripts specified in package.json files.

init-module

• Default: ~/.npm-init.js
• Type: path

A module that will be loaded by the npm init command. See the documentation for the init-package-json module for more information, or npm-init(1).

init-author-name

• Default: ""
• Type: String

The value npm init should use by default for the package author's name.

init-author-email

• Default: ""
• Type: String

The value npm init should use by default for the package author's email.

init-author-url

• Default: ""
• Type: String

The value npm init should use by default for the package author's homepage.

init-license

• Default: "ISC"
• Type: String

The value npm init should use by default for the package license.

init-version

• Default: "0.0.0"
• Type: semver

The value that npm init should use by default for the package version number, if not already set in package.json.

json

• Default: false
• Type: Boolean

Whether or not to output JSON data, rather than the normal output.

This feature is currently experimental, and the output data structures for many commands are either not implemented in JSON yet, or subject to change. Only the output from npm ls --json is currently valid.

key

• Default: null
• Type: String

A client key to pass when accessing the registry.

link

• Default: false
• Type: Boolean

If true, then local installs will link if there is a suitable globally installed package.

Note that this means that local installs can cause things to be installed into the global space at the same time. The link is only done if one of the two conditions is met:

• The package is not already installed globally, or
• the globally installed version is identical to the version that is being installed locally.

local-address

• Default: undefined
• Type: IP Address

The IP address of the local interface to use when making connections to the npm registry. Must be IPv4 in versions of Node prior to 0.12.

loglevel

• Default: "warn"
• Type: String
• Values: "silent", "error", "warn", "http", "info", "verbose", "silly"

What level of logs to report. On failure, all logs are written to npm-debug.log in the current working directory.

Any logs of a higher level than the setting are shown. The default is "warn", which shows warn and error output.

logstream

• Default: process.stderr
• Type: Stream

This is the stream that is passed to the npmlog module at run time.

It cannot be set from the command line, but if you are using npm programmatically, you may wish to send logs to somewhere other than stderr.

If the color config is set to true, then this stream will receive colored output if it is a TTY.
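
A hedged sketch of programmatic use, assuming the classic npm(3) API where npm.load(configObject, callback) accepts config overrides (the log file name is made up):

    var fs = require("fs")
    var npm = require("npm")

    // redirect npm's log output from stderr to a file
    npm.load({ logstream: fs.createWriteStream("npm-run.log") }, function (er) {
      if (er) return console.error(er)
      npm.commands.ls([], function (er, data) {
        if (er) console.error(er)
      })
    })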

long

• Default: false
• Type: Boolean

Show extended information in npm ls and npm search.

message

• Default: "%s"
• Type: String

Commit message which is used by npm version when creating version commit.

Any "%s" in the message will be replaced with the version number.

node-version

• Default: process.version
• Type: semver or false

The node version to use when checking a package's engines map.

npat

• Default: false
• Type: Boolean

Run tests on installation.

onload-script

• Default: false
• Type: path

A node module to require() when npm loads. Useful for programmatic usage.

optional

• Default: true
• Type: Boolean

Attempt to install packages in the optionalDependencies object. Note that if these packages fail to install, the overall installation process is not aborted.

parseable

• Default: false
• Type: Boolean

Output parseable results from commands that write to standard output.

prefix

• Default: see npm-folders(5)
• Type: path

The location to install global items. If set on the command line, then it forces non-global commands to run in the specified folder.

production

• Default: false
• Type: Boolean

Set to true to run in "production" mode.

1. devDependencies are not installed at the topmost level when running local npm install without any arguments.
2. Set the NODE_ENV="production" for lifecycle scripts.

proprietary-attribs

• Default: true
• Type: Boolean

Whether or not to include proprietary extended attributes in the tarballs created by npm.

Unless you are expecting to unpack package tarballs with something other than npm -- particularly a very outdated tar implementation -- leave this as true.

proxy

• Default: HTTP_PROXY or http_proxy environment variable, or null
• Type: url

A proxy to use for outgoing http requests.

rebuild-bundle

• Default: true
• Type: Boolean

Rebuild bundled dependencies after installation.

registry

• Default: https://registry.npmjs.org/
• Type: url

The base URL of the npm package registry.

rollback

• Default: true
• Type: Boolean

Remove failed installs.

save

• Default: false
• Type: Boolean

Save installed packages to a package.json file as dependencies.

When used with the npm rm command, it removes it from the dependencies object.

Only works if there is already a package.json file present.

save-bundle

• Default: false
• Type: Boolean

If a package would be saved at install time by the use of --save, --save-dev, or --save-optional, then also put it in the bundleDependencies list.

When used with the npm rm command, it removes it from the bundledDependencies list.

save-dev

• Default: false
• Type: Boolean

Save installed packages to a package.json file as devDependencies.

When used with the npm rm command, it removes it from the devDependencies object.

Only works if there is already a package.json file present.

save-exact

• Default: false
• Type: Boolean

Dependencies saved to package.json using --save, --save-dev or --save-optional will be configured with an exact version rather than using npm's default semver range operator.

save-optional

• Default: false
• Type: Boolean

Save installed packages to a package.json file as optionalDependencies.

When used with the npm rm command, it removes it from the optionalDependencies object.

Only works if there is already a package.json file present.

save-prefix

• Default: '^'
• Type: String

Configure how versions of packages installed to a package.json file via --save or --save-dev get prefixed.

For example, if a package has version 1.2.3, by default its version is set to ^1.2.3, which allows minor upgrades for that package, but after npm config set save-prefix='~' it would be set to ~1.2.3, which only allows patch upgrades.
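
As a sketch (the package name and resolved version are illustrative):

    npm config set save-prefix='~'
    npm install sax --save
    # package.json now records e.g. "sax": "~1.1.1" instead of "^1.1.1"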

scope

• Default: ""
• Type: String

Associate an operation with a scope for a scoped registry. Useful when logging in to a private registry for the first time: npm login --scope=@organization --registry=registry.organization.com, which will cause @organization to be mapped to the registry for future installation of packages specified according to the pattern @organization/package.

searchopts

• Default: ""
• Type: String

Space-separated options that are always passed to search.

searchexclude

• Default: ""
• Type: String

Space-separated options that limit the results from search.

searchsort

• Default: "name"
• Type: String
• Values: "name", "-name", "date", "-date", "description", "-description", "keywords", "-keywords"

Indication of which field to sort search results by. Prefix with a - character to indicate reverse sort.

shell

• Default: SHELL environment variable, or "bash" on Posix, or "cmd" on Windows
• Type: path

The shell to run for the npm explore command.

shrinkwrap

• Default: true
• Type: Boolean

If set to false, then ignore npm-shrinkwrap.json files when installing.

sign-git-tag

• Default: false
• Type: Boolean

If set to true, then the npm version command will tag the version using -s to add a signature.

Note that git requires you to have set up GPG keys in your git configs for this to work properly.

spin

• Default: true
• Type: Boolean or "always"

When set to true, npm will display an ascii spinner while it is doing things, if process.stderr is a TTY.

Set to false to suppress the spinner, or set to always to output the spinner even for non-TTY outputs.

strict-ssl

• Default: true
• Type: Boolean

Whether or not to do SSL key validation when making requests to the registry via https.

See also the ca config.

tag

• Default: latest
• Type: String

If you ask npm to install a package and don't tell it a specific version, then it will install the specified tag.

Also the tag that is added to the package@version specified by the npm tag command, if no explicit tag is given.

tmp

• Default: TMPDIR environment variable, or "/tmp"
• Type: path

Where to store temporary files and folders. All temp files are deleted on success, but left behind on failure for forensic purposes.

unicode

• Default: true
• Type: Boolean

When set to true, npm uses unicode characters in the tree output. When false, it uses ascii characters to draw trees.

unsafe-perm

• Default: false if running as root, true otherwise
• Type: Boolean

Set to true to suppress the UID/GID switching when running package scripts. If set explicitly to false, then installing as a non-root user will fail.

usage

• Default: false
• Type: Boolean

Set to show short usage output (like the -H output) instead of complete help when doing npm-help(1).

user

• Default: "nobody"
• Type: String or Number

The UID to set to when running package scripts as root.

userconfig

• Default: ~/.npmrc
• Type: path

The location of user-level configuration settings.

umask

• Default: 022
• Type: Octal numeric string

The "umask" value to use when setting the file creation mode on files and folders.

Folders and executables are given a mode which is 0777 masked against this value. Other files are given a mode which is 0666 masked against this value. Thus, the defaults are 0755 and 0644 respectively.
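
The masking is plain bitwise arithmetic; for the default umask of 022:

    0777 & ~0022 = 0755    # folders and executables: rwxr-xr-x
    0666 & ~0022 = 0644    # other files:             rw-r--r--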

user-agent

• Default: node/{process.version} {process.platform} {process.arch}
• Type: String

Sets the User-Agent request header.

version

• Default: false
• Type: boolean

If true, output the npm version and exit successfully.

Only relevant when specified explicitly on the command line.

versions

• Default: false
• Type: boolean

If true, output the npm version as well as node's process.versions map, and exit successfully.

Only relevant when specified explicitly on the command line.

viewer

• Default: "man" on Posix, "browser" on Windows
• Type: path

The program to use to view help content.

Set to "browser" to view html help content in the default web browser.

SEE ALSO

diff --git a/deps/npm/html/partial/doc/misc/npm-developers.html b/deps/npm/html/partial/doc/misc/npm-developers.html
new file mode 100644
index 00000000000..7ba880a44bc
--- /dev/null
+++ b/deps/npm/html/partial/doc/misc/npm-developers.html
@@ -0,0 +1,161 @@

npm-developers

Developer Guide

DESCRIPTION

So, you've decided to use npm to develop (and maybe publish/deploy) your project.

Fantastic!

There are a few things that you need to do above the simple steps that your users will do to install your program.

About These Documents

These are man pages. If you install npm, you should be able to then do man npm-thing to get the documentation on a particular topic, or npm help thing to see the same information.

What is a package

A package is:

• a) a folder containing a program described by a package.json file
• b) a gzipped tarball containing (a)
• c) a url that resolves to (b)
• d) a <name>@<version> that is published on the registry with (c)
• e) a <name>@<tag> that points to (d)
• f) a <name> that has a "latest" tag satisfying (e)
• g) a git url that, when cloned, results in (a).

Even if you never publish your package, you can still get a lot of benefits of using npm if you just want to write a node program (a), and perhaps if you also want to be able to easily install it elsewhere after packing it up into a tarball (b).

Git urls can be of the form:

    git://github.com/user/project.git#commit-ish
    git+ssh://user@hostname:project.git#commit-ish
    git+http://user@hostname/project/blah.git#commit-ish
    git+https://user@hostname/project/blah.git#commit-ish

The commit-ish can be any tag, sha, or branch which can be supplied as an argument to git checkout. The default is master.

The package.json File

You need to have a package.json file in the root of your project to do much of anything with npm. That is basically the whole interface.

See package.json(5) for details about what goes in that file. At the very least, you need:

• name: This should be a string that identifies your project. Please do not use the name to specify that it runs on node, or is in JavaScript. You can use the "engines" field to explicitly state the versions of node (or whatever else) that your program requires, and it's pretty well assumed that it's javascript.

  It does not necessarily need to match your github repository name.

  So, node-foo and bar-js are bad names. foo or bar are better.

• version: A semver-compatible version.

• engines: Specify the versions of node (or whatever else) that your program runs on. The node API changes a lot, and there may be bugs or new functionality that you depend on. Be explicit.

• author: Take some credit.

• scripts: If you have a special compilation or installation script, then you should put it in the scripts object. You should definitely have at least a basic smoke-test command as the "scripts.test" field. See npm-scripts(7).

• main: If you have a single module that serves as the entry point to your program (like what the "foo" package gives you at require("foo")), then you need to specify that in the "main" field.

• directories: This is an object mapping names to folders. The best ones to include are "lib" and "doc", but if you use "man" to specify a folder full of man pages, they'll get installed just like these ones.

You can use npm init in the root of your package in order to get you started with a pretty basic package.json file. See npm-init(1) for more info.
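
For instance, a bare-bones package.json covering the fields above might look like this (all values are placeholders, written in npm's comma-first style):

    { "name" : "foo"
    , "version" : "0.1.0"
    , "engines" : { "node" : ">=0.10" }
    , "author" : "Your Name <you@example.com> (http://example.com/)"
    , "scripts" : { "test" : "node test.js" }
    , "main" : "lib/foo.js"
    , "directories" : { "lib" : "./lib", "doc" : "./doc" }
    }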

Keeping files out of your package

Use a .npmignore file to keep stuff out of your package. If there's no .npmignore file, but there is a .gitignore file, then npm will ignore the stuff matched by the .gitignore file. If you want to include something that is excluded by your .gitignore file, you can create an empty .npmignore file to override it.

By default, the following paths and files are ignored, so there's no need to add them to .npmignore explicitly:

• .*.swp
• ._*
• .DS_Store
• .git
• .hg
• .lock-wscript
• .svn
• .wafpickle-*
• CVS
• npm-debug.log

Additionally, everything in node_modules is ignored, except for bundled dependencies. npm automatically handles this for you, so don't bother adding node_modules to .npmignore.

The following paths and files are never ignored, so adding them to .npmignore is pointless:

• package.json
• README (and its variants)
• CHANGELOG (and its variants)
• LICENSE / LICENCE

Link Packages

npm link is designed to install a development package and see the changes in real time without having to keep re-installing it. (You do need to either re-link or npm rebuild -g to update compiled packages, of course.)

More info at npm-link(1).

Before Publishing: Make Sure Your Package Installs and Works

This is important.

If you cannot install it locally, you'll have problems trying to publish it. Or, worse yet, you'll be able to publish it, but you'll be publishing a broken or pointless package. So don't do that.

In the root of your package, do this:

    npm install . -g

That'll show you that it's working. If you'd rather just create a symlink package that points to your working directory, then do this:

    npm link

Use npm ls -g to see if it's there.

To test a local install, go into some other folder, and then do:

    cd ../some-other-folder
    npm install ../my-package

to install it locally into the node_modules folder in that other place.

Then go into the node-repl, and try using require("my-thing") to bring in your module's main module.

Create a User Account

Create a user with the adduser command. It works like this:

    npm adduser

and then follow the prompts.

This is documented better in npm-adduser(1).

Publish your package

This part's easy. In the root of your folder, do this:

    npm publish

You can give publish a url to a tarball, or a filename of a tarball, or a path to a folder.

Note that pretty much everything in that folder will be exposed by default. So, if you have secret stuff in there, use a .npmignore file to list out the globs to ignore, or publish from a fresh checkout.

Brag about it

Send emails, write blogs, blab in IRC.

Tell the world how easy it is to install your program!

SEE ALSO

diff --git a/deps/npm/html/partial/doc/misc/npm-disputes.html b/deps/npm/html/partial/doc/misc/npm-disputes.html
new file mode 100644
index 00000000000..6a7abca7122
--- /dev/null
+++ b/deps/npm/html/partial/doc/misc/npm-disputes.html
@@ -0,0 +1,92 @@

npm-disputes

Handling Module Name Disputes

SYNOPSIS

1. Get the author email with npm owner ls <pkgname>
2. Email the author, CC support@npmjs.com
3. After a few weeks, if there's no resolution, we'll sort it out.

Don't squat on package names. Publish code or move out of the way.

DESCRIPTION

There sometimes arise cases where a user publishes a module, and then later, some other user wants to use that name. Here are some common ways that happens (each of these is based on actual events):

1. Joe writes a JavaScript module foo, which is not node-specific. Joe doesn't use node at all. Bob wants to use foo in node, so he wraps it in an npm module. Some time later, Joe starts using node, and wants to take over management of his program.

2. Bob writes an npm module foo, and publishes it. Perhaps much later, Joe finds a bug in foo, and fixes it. He sends a pull request to Bob, but Bob doesn't have the time to deal with it, because he has a new job and a new baby and is focused on his new erlang project, and kind of not involved with node any more. Joe would like to publish a new foo, but can't, because the name is taken.

3. Bob writes a 10-line flow-control library, and calls it foo, and publishes it to the npm registry. Being a simple little thing, it never really has to be updated. Joe works for Foo Inc, the makers of the critically acclaimed and widely-marketed foo JavaScript toolkit framework. They publish it to npm as foojs, but people are routinely confused when npm install foo is some different thing.

4. Bob writes a parser for the widely-known foo file format, because he needs it for work. Then, he gets a new job, and never updates the prototype. Later on, Joe writes a much more complete foo parser, but can't publish, because Bob's foo is in the way.

The validity of Joe's claim in each situation can be debated. However, Joe's appropriate course of action in each case is the same.

1. npm owner ls foo. This will tell Joe the email address of the owner (Bob).

2. Joe emails Bob, explaining the situation as respectfully as possible, and what he would like to do with the module name. He adds the npm support staff support@npmjs.com to the CC list of the email. Mention in the email that Bob can run npm owner add joe foo to add Joe as an owner of the foo package.

3. After a reasonable amount of time, if Bob has not responded, or if Bob and Joe can't come to any sort of resolution, email support support@npmjs.com and we'll sort it out. ("Reasonable" is usually at least 4 weeks, but extra time is allowed around common holidays.)

REASONING

In almost every case so far, the parties involved have been able to reach an amicable resolution without any major intervention. Most people really do want to be reasonable, and are probably not even aware that they're in your way.

Module ecosystems are most vibrant and powerful when they are as self-directed as possible. If an admin one day deletes something you had worked on, then that is going to make most people quite upset, regardless of the justification. When humans solve their problems by talking to other humans with respect, everyone has the chance to end up feeling good about the interaction.

EXCEPTIONS

Some things are not allowed, and will be removed without discussion if they are brought to the attention of the npm registry admins, including but not limited to:

1. Malware (that is, a package designed to exploit or harm the machine on which it is installed).

2. Violations of copyright or licenses (for example, cloning an MIT-licensed program, and then removing or changing the copyright and license statement).

3. Illegal content.

4. "Squatting" on a package name that you plan to use, but aren't actually using. Sorry, I don't care how great the name is, or how perfect a fit it is for the thing that someday might happen. If someone wants to use it today, and you're just taking up space with an empty tarball, you're going to be evicted.

5. Putting empty packages in the registry. Packages must have SOME functionality. It can be silly, but it can't be nothing. (See also: squatting.)

6. Doing weird things with the registry, like using it as your own personal application database or otherwise putting non-packagey things into it.

If you see bad behavior like this, please report it right away.

SEE ALSO

diff --git a/deps/npm/html/partial/doc/misc/npm-faq.html b/deps/npm/html/partial/doc/misc/npm-faq.html
new file mode 100644
index 00000000000..7fc16344f79
--- /dev/null
+++ b/deps/npm/html/partial/doc/misc/npm-faq.html
@@ -0,0 +1,264 @@

npm-faq

Frequently Asked Questions

Where can I find these docs in HTML?

https://www.npmjs.org/doc/, or run:

    npm config set viewer browser

to open these documents in your default web browser rather than man.

It didn't work.

That's not really a question.

Why didn't it work?

I don't know yet.

Read the error output, and if you can't figure out what it means, do what it says and post a bug with all the information it asks for.

Where does npm put stuff?

See npm-folders(5)

tl;dr:

• Use the npm root command to see where modules go, and the npm bin command to see where executables go
• Global installs are different from local installs. If you install something with the -g flag, then its executables go in npm bin -g and its modules go in npm root -g.
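
For example (the paths shown are typical but machine-specific):

    npm root      # e.g. /home/you/project/node_modules
    npm bin       # e.g. /home/you/project/node_modules/.bin
    npm root -g   # e.g. /usr/local/lib/node_modules
    npm bin -g    # e.g. /usr/local/bin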

How do I install something on my computer in a central location?

Install it globally by tacking -g or --global to the command. (This is especially important for command line utilities that need to add their bins to the global system PATH.)

I installed something globally, but I can't require() it

Install it locally.

The global install location is a place for command-line utilities to put their bins in the system PATH. It's not for use with require().

If you require() a module in your code, then that means it's a dependency, and a part of your program. You need to install it locally in your program.

Why can't npm just put everything in one place, like other package managers?

Not every change is an improvement, but every improvement is a change. This would be like asking git to do network IO for every commit. It's not going to happen, because it's a terrible idea that causes more problems than it solves.

It is much harder to avoid dependency conflicts without nesting dependencies. This is fundamental to the way that npm works, and has proven to be an extremely successful approach. See npm-folders(5) for more details.

If you want a package to be installed in one place, and have all your programs reference the same copy of it, then use the npm link command. That's what it's for. Install it globally, then link it into each program that uses it.

Whatever, I really want the old style 'everything global' style.

Write your own package manager. You could probably even wrap up npm in a shell script if you really wanted to.

npm will not help you do something that is known to be a bad idea.

Should I check my node_modules folder into git?

Usually, no. Allow npm to resolve dependencies for your packages.

For packages you deploy, such as websites and apps, you should use npm shrinkwrap to lock down your full dependency tree:

https://www.npmjs.org/doc/cli/npm-shrinkwrap.html

If you are paranoid about depending on the npm ecosystem, you should run a private npm mirror or a private cache.

If you want 100% confidence in being able to reproduce the specific bytes included in a deployment, you should use an additional mechanism that can verify contents rather than versions. For example, Amazon machine images, DigitalOcean snapshots, Heroku slugs, or simple tarballs.

Is it 'npm' or 'NPM' or 'Npm'?

npm should never be capitalized unless it is being displayed in a location that is customarily all-caps (such as the title of man pages.)

If 'npm' is an acronym, why is it never capitalized?

Contrary to the belief of many, "npm" is not in fact an abbreviation for "Node Package Manager". It is a recursive bacronymic abbreviation for "npm is not an acronym". (If it was "ninaa", then it would be an acronym, and thus incorrectly named.)

"NPM", however, is an acronym (more precisely, a capitonym) for the National Association of Pastoral Musicians. You can learn more about them at http://npm.org/.

In software, "NPM" is a Non-Parametric Mapping utility written by Chris Rorden. You can analyze pictures of brains with it. Learn more about the (capitalized) NPM program at http://www.cabiatl.com/mricro/npm/.

The first seed that eventually grew into this flower was a bash utility named "pm", which was a shortened descendent of "pkgmakeinst", a bash function that was used to install various different things on different platforms, most often using Yahoo's yinst. If npm was ever an acronym for anything, it was node pm or maybe new pm.

So, in all seriousness, the "npm" project is named after its command-line utility, which was organically selected to be easily typed by a right-handed programmer using a US QWERTY keyboard layout, ending with the right-ring-finger in a position to type the - key for flags and other command-line arguments. That command-line utility is always lower-case, though it starts most sentences it is a part of.

How do I list installed packages?

npm ls

How do I search for packages?

npm search

Arguments are greps. npm search jsdom shows jsdom packages.

How do I update npm?

    npm install npm -g

You can also update all outdated local packages by doing npm update without any arguments, or global packages by doing npm update -g.

Occasionally, the version of npm will progress such that the current version cannot be properly installed with the version that you have installed already. (Consider, if there is ever a bug in the update command.)

In those cases, you can do this:

    curl https://www.npmjs.org/install.sh | sh

What is a package?

A package is:

• a) a folder containing a program described by a package.json file
• b) a gzipped tarball containing (a)
• c) a url that resolves to (b)
• d) a <name>@<version> that is published on the registry with (c)
• e) a <name>@<tag> that points to (d)
• f) a <name> that has a "latest" tag satisfying (e)
• g) a git url that, when cloned, results in (a).

Even if you never publish your package, you can still get a lot of benefits of using npm if you just want to write a node program (a), and perhaps if you also want to be able to easily install it elsewhere after packing it up into a tarball (b).

Git urls can be of the form:

    git://github.com/user/project.git#commit-ish
    git+ssh://user@hostname:project.git#commit-ish
    git+http://user@hostname/project/blah.git#commit-ish
    git+https://user@hostname/project/blah.git#commit-ish

The commit-ish can be any tag, sha, or branch which can be supplied as an argument to git checkout. The default is master.

What is a module?

A module is anything that can be loaded with require() in a Node.js program. The following things are all examples of things that can be loaded as modules:

• A folder with a package.json file containing a main field.
• A folder with an index.js file in it.
• A JavaScript file.

Most npm packages are modules, because they are libraries that you load with require. However, there's no requirement that an npm package be a module! Some only contain an executable command-line interface, and don't provide a main field for use in Node programs.

Almost all npm packages (at least, those that are Node programs) contain many modules within them (because every file they load with require() is a module).

In the context of a Node program, the module is also the thing that was loaded from a file. For example, in the following program:

    var req = require('request')

we might say that "The variable req refers to the request module".

So, why is it the "node_modules" folder, but "package.json" file? Why not node_packages or module.json?

The package.json file defines the package. (See "What is a package?" above.)

The node_modules folder is the place Node.js looks for modules. (See "What is a module?" above.)

For example, if you create a file at node_modules/foo.js and then had a program that did var f = require('foo.js') then it would load the module. However, foo.js is not a "package" in this case, because it does not have a package.json.
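
A minimal sketch of that example (the file contents are made up for illustration):

    // node_modules/foo.js
    module.exports = function () { return "hi from foo" }

    // program.js, in the folder that contains node_modules
    var f = require("foo.js")
    console.log(f())   // prints "hi from foo"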

    Alternatively, if you create a package which does not have an +index.js or a "main" field in the package.json file, then it is +not a module. Even if it's installed in node_modules, it can't be +an argument to require().

    +

    "node_modules" is the name of my deity's arch-rival, and a Forbidden Word in my religion. Can I configure npm to use a different folder?

    +

    No. This will never happen. This question comes up sometimes, +because it seems silly from the outside that npm couldn't just be +configured to put stuff somewhere else, and then npm could load them +from there. It's an arbitrary spelling choice, right? What's the big +deal?

    +

    At the time of this writing, the string 'node_modules' appears 151 +times in 53 separate files in npm and node core (excluding tests and +documentation).

    +

    Some of these references are in node's built-in module loader. Since +npm is not involved at all at run-time, node itself would have to +be configured to know where you've decided to stick stuff. Complexity +hurdle #1. Since the Node module system is locked, this cannot be +changed, and is enough to kill this request. But I'll continue, in +deference to your deity's delicate feelings regarding spelling.

    +

    Many of the others are in dependencies that npm uses, which are not +necessarily tightly coupled to npm (in the sense that they do not read +npm's configuration files, etc.) Each of these would have to be +configured to take the name of the node_modules folder as a +parameter. Complexity hurdle #2.

    +

    Furthermore, npm has the ability to "bundle" dependencies by adding +the dep names to the "bundledDependencies" list in package.json, +which causes the folder to be included in the package tarball. What +if the author of a module bundles its dependencies, and they use a +different spelling for node_modules? npm would have to rename the +folder at publish time, and then be smart enough to unpack it using +your locally configured name. Complexity hurdle #3.

    +

    Furthermore, what happens when you change this name? Fine, it's +easy enough the first time, just rename the node_modules folders to +./blergyblerp/ or whatever name you choose. But what about when you +change it again? npm doesn't currently track any state about past +configuration settings, so this would be rather difficult to do +properly. It would have to track every previous value for this +config, and always accept any of them, or else yesterday's install may +be broken tomorrow. Complexity hurdle #4.

    +

    Never going to happen. The folder is named node_modules. It is +written indelibly in the Node Way, handed down from the ancient times +of Node 0.3.

    +

    How do I install node with npm?

    +

    You don't. Try one of these node version managers:

    +

    Unix:

    + +

    Windows:

    + +

    How can I use npm for development?

    +

    See npm-developers(7) and package.json(5).

    +

    You'll most likely want to npm link your development folder. That's +awesomely handy.

    +

    To set up your own private registry, check out npm-registry(7).

    +

    Can I list a url as a dependency?

    +

    Yes. It should be a url to a gzipped tarball containing a single folder +that has a package.json in its root, or a git url. +(See "what is a package?" above.)

    + +

    See npm-link(1)

    +

    The package registry website. What is that exactly?

    +

    See npm-registry(7).

    +

    I forgot my password, and can't publish. How do I reset it?

    +

    Go to https://npmjs.org/forgot.

    +

    I get ECONNREFUSED a lot. What's up?

    +

    Either the registry is down, or node's DNS isn't able to reach out.

    +

    To check if the registry is down, open up +https://registry.npmjs.org/ in a web browser. This will also tell +you if you are just unable to access the internet for some reason.

    +

    If the registry IS down, let us know by emailing support@npmjs.com +or posting an issue at https://github.com/npm/npm/issues. If it's +down for the world (and not just on your local network) then we're +probably already being pinged about it.

    +

    You can also often get a faster response by visiting the #npm channel +on Freenode IRC.

    +

    Why no namespaces?

    +

    Please see this discussion: https://github.com/npm/npm/issues/798

    +

    tl;dr - It doesn't actually make things better, and can make them worse.

    +

    If you want to namespace your own packages, you may simply use the - character to separate the names. npm is a mostly anarchic system. There is not sufficient need to impose namespace rules on everyone.

    +

    Who does npm?

    +

    npm was originally written by Isaac Z. Schlueter, and many others have +contributed to it, some of them quite substantially.

    +

    The npm open source project, The npm Registry, and the community +website are maintained and operated by the +good folks at npm, Inc.

    +

    I have a question or request not addressed here. Where should I put it?

    +

    Post an issue on the github project:

    + +

    Why does npm hate me?

    +

    npm is not capable of hatred. It loves everyone, especially you.

    +

    SEE ALSO

    + + diff --git a/deps/npm/html/partial/doc/misc/npm-index.html b/deps/npm/html/partial/doc/misc/npm-index.html new file mode 100644 index 00000000000..6e4c0ca8004 --- /dev/null +++ b/deps/npm/html/partial/doc/misc/npm-index.html @@ -0,0 +1,210 @@ +

    npm-index

    Index of all npm documentation

    +

    README

    +

    node package manager

    +

    Command Line Documentation

    +

    Using npm on the command line

    +

    npm(1)

    +

    node package manager

    +

    npm-adduser(1)

    +

    Add a registry user account

    +

    npm-bin(1)

    +

    Display npm bin folder

    +

    npm-bugs(1)

    +

    Bugs for a package in a web browser maybe

    +

    npm-build(1)

    +

    Build a package

    +

    npm-bundle(1)

    +

    REMOVED

    +

    npm-cache(1)

    +

    Manipulates packages cache

    +

    npm-completion(1)

    +

    Tab Completion for npm

    +

    npm-config(1)

    +

    Manage the npm configuration files

    +

    npm-dedupe(1)

    +

    Reduce duplication

    +

    npm-deprecate(1)

    +

    Deprecate a version of a package

    +

    npm-docs(1)

    +

    Docs for a package in a web browser maybe

    +

    npm-edit(1)

    +

    Edit an installed package

    +

    npm-explore(1)

    +

    Browse an installed package

    +

    npm-help-search(1)

    +

    Search npm help documentation

    +

    npm-help(1)

    +

    Get help on npm

    +

    npm-init(1)

    +

    Interactively create a package.json file

    +

    npm-install(1)

    +

    Install a package

    + +

    Symlink a package folder

    +

    npm-ls(1)

    +

    List installed packages

    +

    npm-outdated(1)

    +

    Check for outdated packages

    +

    npm-owner(1)

    +

    Manage package owners

    +

    npm-pack(1)

    +

    Create a tarball from a package

    +

    npm-prefix(1)

    +

    Display prefix

    +

    npm-prune(1)

    +

    Remove extraneous packages

    +

    npm-publish(1)

    +

    Publish a package

    +

    npm-rebuild(1)

    +

    Rebuild a package

    +

    npm-repo(1)

    +

    Open package repository page in the browser

    +

    npm-restart(1)

    +

    Restart a package

    +

    npm-rm(1)

    +

    Remove a package

    +

    npm-root(1)

    +

    Display npm root

    +

    npm-run-script(1)

    +

    Run arbitrary package scripts

    +

    npm-search(1)

    +

    Search for packages

    +

    npm-shrinkwrap(1)

    +

    Lock down dependency versions

    +

    npm-star(1)

    +

    Mark your favorite packages

    +

    npm-stars(1)

    +

    View packages marked as favorites

    +

    npm-start(1)

    +

    Start a package

    +

    npm-stop(1)

    +

    Stop a package

    +

    npm-tag(1)

    +

    Tag a published version

    +

    npm-test(1)

    +

    Test a package

    +

    npm-uninstall(1)

    +

    Remove a package

    +

    npm-unpublish(1)

    +

    Remove a package from the registry

    +

    npm-update(1)

    +

    Update a package

    +

    npm-version(1)

    +

    Bump a package version

    +

    npm-view(1)

    +

    View registry info

    +

    npm-whoami(1)

    +

    Display npm username

    +

    API Documentation

    +

    Using npm in your Node programs

    +

    npm(3)

    +

    node package manager

    +

    npm-bin(3)

    +

    Display npm bin folder

    +

    npm-bugs(3)

    +

    Bugs for a package in a web browser maybe

    +

    npm-cache(3)

    +

    manage the npm cache programmatically

    +

    npm-commands(3)

    +

    npm commands

    +

    npm-config(3)

    +

    Manage the npm configuration files

    +

    npm-deprecate(3)

    +

    Deprecate a version of a package

    +

    npm-docs(3)

    +

    Docs for a package in a web browser maybe

    +

    npm-edit(3)

    +

    Edit an installed package

    +

    npm-explore(3)

    +

    Browse an installed package

    +

    npm-help-search(3)

    +

    Search the help pages

    +

    npm-init(3)

    +

    Interactively create a package.json file

    +

    npm-install(3)

    +

    install a package programmatically

    + +

    Symlink a package folder

    +

    npm-load(3)

    +

    Load config settings

    +

    npm-ls(3)

    +

    List installed packages

    +

    npm-outdated(3)

    +

    Check for outdated packages

    +

    npm-owner(3)

    +

    Manage package owners

    +

    npm-pack(3)

    +

    Create a tarball from a package

    +

    npm-prefix(3)

    +

    Display prefix

    +

    npm-prune(3)

    +

    Remove extraneous packages

    +

    npm-publish(3)

    +

    Publish a package

    +

    npm-rebuild(3)

    +

    Rebuild a package

    +

    npm-repo(3)

    +

    Open package repository page in the browser

    +

    npm-restart(3)

    +

    Restart a package

    +

    npm-root(3)

    +

    Display npm root

    +

    npm-run-script(3)

    +

    Run arbitrary package scripts

    +

    npm-search(3)

    +

    Search for packages

    +

    npm-shrinkwrap(3)

    +

    programmatically generate package shrinkwrap file

    +

    npm-start(3)

    +

    Start a package

    +

    npm-stop(3)

    +

    Stop a package

    +

    npm-tag(3)

    +

    Tag a published version

    +

    npm-test(3)

    +

    Test a package

    +

    npm-uninstall(3)

    +

    uninstall a package programmatically

    +

    npm-unpublish(3)

    +

    Remove a package from the registry

    +

    npm-update(3)

    +

    Update a package

    +

    npm-version(3)

    +

    Bump a package version

    +

    npm-view(3)

    +

    View registry info

    +

    npm-whoami(3)

    +

    Display npm username

    +

    Files

    +

    File system structures npm uses

    +

    npm-folders(5)

    +

    Folder Structures Used by npm

    +

    npmrc(5)

    +

    The npm config files

    +

    package.json(5)

    +

    Specifics of npm's package.json handling

    +

    Misc

    +

    Various other bits and bobs

    +

    npm-coding-style(7)

    +

    npm's "funny" coding style

    +

    npm-config(7)

    +

    More than you probably want to know about npm configuration

    +

    npm-developers(7)

    +

    Developer Guide

    +

    npm-disputes(7)

    +

    Handling Module Name Disputes

    +

    npm-faq(7)

    +

    Frequently Asked Questions

    +

    npm-index(7)

    +

    Index of all npm documentation

    +

    npm-registry(7)

    +

    The JavaScript Package Registry

    +

    npm-scope(7)

    +

    Scoped packages

    +

    npm-scripts(7)

    +

    How npm handles the "scripts" field

    +

    removing-npm(7)

    +

    Cleaning the Slate

    +

    semver(7)

    +

    The semantic versioner for npm

    + diff --git a/deps/npm/html/partial/doc/misc/npm-registry.html b/deps/npm/html/partial/doc/misc/npm-registry.html new file mode 100644 index 00000000000..0031f61b10c --- /dev/null +++ b/deps/npm/html/partial/doc/misc/npm-registry.html @@ -0,0 +1,50 @@ +

    npm-registry

    The JavaScript Package Registry

    +

    DESCRIPTION

    +

    To resolve packages by name and version, npm talks to a registry website +that implements the CommonJS Package Registry specification for reading +package info.

    +

    Additionally, npm's package registry implementation supports several +write APIs as well, to allow for publishing packages and managing user +account information.

    +

    The official public npm registry is at http://registry.npmjs.org/. It +is powered by a CouchDB database, of which there is a public mirror at +http://skimdb.npmjs.com/registry. The code for the couchapp is +available at http://github.com/npm/npm-registry-couchapp.

    +

    The registry URL used is determined by the scope of the package (see +npm-scope(7)). If no scope is specified, the default registry is used, which is +supplied by the registry config parameter. See npm-config(1), +npmrc(5), and npm-config(7) for more on managing npm's configuration.

    +

    Can I run my own private registry?

    +

    Yes!

    +

    The easiest way is to replicate the couch database, and use the same (or +similar) design doc to implement the APIs.

    +

    If you set up continuous replication from the official CouchDB, and then +set your internal CouchDB as the registry config, then you'll be able +to read any published packages, in addition to your private ones, and by +default will only publish internally. If you then want to publish a +package for the whole world to see, you can simply override the +--registry config for that command.

    +

    I don't want my package published in the official registry. It's private.

    +

    Set "private": true in your package.json to prevent it from being +published at all, or +"publishConfig":{"registry":"http://my-internal-registry.local"} +to force it to be published only to your internal registry.

    +

    See package.json(5) for more info on what goes in the package.json file.

    +

    Will you replicate from my registry into the public one?

    +

    No. If you want things to be public, then publish them into the public +registry using npm. What little security there is would be for nought +otherwise.

    +

    Do I have to use couchdb to build a registry that npm can talk to?

    +

    No, but it's way easier. Basically, yes, you do, or you have to +effectively implement the entire CouchDB API anyway.

    +

    Is there a website or something to see package docs and such?

    +

    Yes, head over to https://npmjs.org/

    +

    SEE ALSO

    + + diff --git a/deps/npm/html/partial/doc/misc/npm-scope.html b/deps/npm/html/partial/doc/misc/npm-scope.html new file mode 100644 index 00000000000..5616efdcb8c --- /dev/null +++ b/deps/npm/html/partial/doc/misc/npm-scope.html @@ -0,0 +1,58 @@ +

    npm-scope

    Scoped packages

    +

    DESCRIPTION

    +

    All npm packages have a name. Some package names also have a scope. A scope follows the usual rules for package names (url-safe characters, no leading dots or underscores). When used in package names, a scope is preceded by an @-symbol and followed by a slash, e.g.

    +
    @somescope/somepackagename
    +

    Scopes are a way of grouping related packages together, and also affect a few +things about the way npm treats the package.

    +

    As of 2014-09-03, scoped packages are not supported by the public npm registry. +However, the npm client is backwards-compatible with un-scoped registries, so +it can be used to work with scoped and un-scoped registries at the same time.

    +

    Installing scoped packages

    +

    Scoped packages are installed to a sub-folder of the regular installation +folder, e.g. if your other packages are installed in node_modules/packagename, +scoped modules will be in node_modules/@myorg/packagename. The scope folder +(@myorg) is simply the name of the scope preceded by an @-symbol, and can +contain any number of scoped packages.

    +

    A scoped package is installed by referencing it by name, preceded by an +@-symbol, in npm install:

    +
    npm install @myorg/mypackage
    +

    Or in package.json:

    +
    "dependencies": {
    +  "@myorg/mypackage": "^1.3.0"
    +}
    +

    Note that if the @-symbol is omitted in either case npm will instead attempt to +install from GitHub; see npm-install(1).
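
    The two spellings therefore do very different things (myorg/mypackage
    is a hypothetical name here):

        npm install @myorg/mypackage   # scoped package, from the configured registry
        npm install myorg/mypackage    # GitHub shorthand for github.com/myorg/mypackage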

    +

    Requiring scoped packages

    +

    Because scoped packages are installed into a scope folder, you have to +include the name of the scope when requiring them in your code, e.g.

    +
    require('@myorg/mypackage')
    +

    There is nothing special about the way Node treats scope folders; this is just specifying to require the module mypackage in the folder called @myorg.

    +

    Publishing scoped packages

    +

    Scoped packages can be published to any registry that supports them. +As of 2014-09-03, the public npm registry does not support scoped packages, +so attempting to publish a scoped package to the registry will fail unless +you have associated that scope with a different registry, see below.

    +

    Associating a scope with a registry

    +

    Scopes can be associated with a separate registry. This allows you to +seamlessly use a mix of packages from the public npm registry and one or more +private registries, such as npm Enterprise.

    +

    You can associate a scope with a registry at login, e.g.

    +
    npm login --registry=http://reg.example.com --scope=@myco
    +

    Scopes have a many-to-one relationship with registries: one registry can +host multiple scopes, but a scope only ever points to one registry.

    +

    You can also associate a scope with a registry using npm config:

    +
    npm config set @myco:registry http://reg.example.com
    +

    Once a scope is associated with a registry, any npm install for a package +with that scope will request packages from that registry instead. Any +npm publish for a package name that contains the scope will be published to +that registry instead.
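
    Putting the pieces together, a hypothetical session might look like:

        npm config set @myco:registry http://reg.example.com
        npm install @myco/mypackage   # fetched from http://reg.example.com
        npm publish                   # inside a @myco/* package: goes to that registry
        npm install request           # unscoped: still uses the default registry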

    +

    SEE ALSO

    + + diff --git a/deps/npm/html/partial/doc/misc/npm-scripts.html b/deps/npm/html/partial/doc/misc/npm-scripts.html new file mode 100644 index 00000000000..08bcbd54a5b --- /dev/null +++ b/deps/npm/html/partial/doc/misc/npm-scripts.html @@ -0,0 +1,200 @@ +

    npm-scripts

    How npm handles the "scripts" field

    +

    DESCRIPTION

    +

    npm supports the "scripts" property of the package.json script, for the +following scripts:

    +
      +
    • prepublish: +Run BEFORE the package is published. (Also run on local npm +install without any arguments.)
    • +
    • publish, postpublish: +Run AFTER the package is published.
    • +
    • preinstall: +Run BEFORE the package is installed
    • +
    • install, postinstall: +Run AFTER the package is installed.
    • +
    • preuninstall, uninstall: +Run BEFORE the package is uninstalled.
    • +
    • postuninstall: +Run AFTER the package is uninstalled.
    • +
    • preupdate: +Run BEFORE the package is updated with the update command.
    • +
    • update, postupdate: +Run AFTER the package is updated with the update command.
    • +
    • pretest, test, posttest: +Run by the npm test command.
    • +
    • prestop, stop, poststop: +Run by the npm stop command.
    • +
    • prestart, start, poststart: +Run by the npm start command.
    • +
    • prerestart, restart, postrestart: +Run by the npm restart command. Note: npm restart will run the +stop and start scripts if no restart script is provided.
    • +
    +

    Additionally, arbitrary scripts can be executed by running npm +run-script <pkg> <stage>. Pre and post commands with matching +names will be run for those as well (e.g. premyscript, myscript, +postmyscript).
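
    For instance, with a hypothetical scripts section like the following,
    running the myscript stage executes all three commands in order:

        { "scripts" :
          { "premyscript"  : "echo before"
          , "myscript"     : "echo during"
          , "postmyscript" : "echo after"
          }
        }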

    +

    NOTE: INSTALL SCRIPTS ARE AN ANTIPATTERN

    +

    tl;dr Don't use install. Use a .gyp file for compilation, and +prepublish for anything else.

    +

    You should almost never have to explicitly set a preinstall or +install script. If you are doing this, please consider if there is +another option.

    +

    The only valid use of install or preinstall scripts is for +compilation which must be done on the target architecture. In early +versions of node, this was often done using the node-waf scripts, or +a standalone Makefile, and early versions of npm required that it be +explicitly set in package.json. This was not portable, and harder to +do properly.

    +

    In the current version of node, the standard way to do this is using a +.gyp file. If you have a file with a .gyp extension in the root +of your package, then npm will run the appropriate node-gyp commands +automatically at install time. This is the only officially supported +method for compiling binary addons, and does not require that you add +anything to your package.json file.

    +

    If you have to do other things before your package is used, in a way +that is not dependent on the operating system or architecture of the +target system, then use a prepublish script instead. This includes +tasks such as:

    +
      +
    • Compile CoffeeScript source code into JavaScript.
    • +
    • Create minified versions of JavaScript source code.
    • +
    • Fetch remote resources that your package will use.
    • +
    +

    The advantage of doing these things at prepublish time instead of +preinstall or install time is that they can be done once, in a +single place, and thus greatly reduce complexity and variability. +Additionally, this means that:

    +
      +
    • You can depend on coffee-script as a devDependency, and thus +your users don't need to have it installed.
    • +
    • You don't need to include the minifiers in your package, reducing +the size for your users.
    • +
    • You don't need to rely on your users having curl or wget or +other system tools on the target machines.
    • +
    +

    DEFAULT VALUES

    +

    npm will default some script values based on package contents.

    +
      +
    • "start": "node server.js":

      +

      If there is a server.js file in the root of your package, then npm +will default the start command to node server.js.

      +
    • +
    • "preinstall": "node-waf clean || true; node-waf configure build":

      +

      If there is a wscript file in the root of your package, npm will +default the preinstall command to compile using node-waf.

      +
    • +
    +

    USER

    +

    If npm was invoked with root privileges, then it will change the uid +to the user account or uid specified by the user config, which +defaults to nobody. Set the unsafe-perm flag to run scripts with +root privileges.

    +

    ENVIRONMENT

    +

    Package scripts run in an environment where many pieces of information +are made available regarding the setup of npm and the current state of +the process.

    +

    path

    +

    If you depend on modules that define executable scripts, like test +suites, then those executables will be added to the PATH for +executing the scripts. So, if your package.json has this:

    +
    { "name" : "foo"
    +, "dependencies" : { "bar" : "0.1.x" }
    +, "scripts": { "start" : "bar ./test" } }
    +

    then you could run npm start to execute the bar script, which is +exported into the node_modules/.bin directory on npm install.

    +

    package.json vars

    +

    The package.json fields are tacked onto the npm_package_ prefix. So, for instance, if you had {"name":"foo", "version":"1.2.5"} in your package.json file, then your package scripts would have the npm_package_name environment variable set to "foo", and the npm_package_version set to "1.2.5".
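
    A quick way to see this from inside any script (the values shown assume
    the {"name":"foo", "version":"1.2.5"} example above):

        // env.js, run via a script entry such as "show-env": "node env.js"
        console.log(process.env.npm_package_name)     // "foo"
        console.log(process.env.npm_package_version)  // "1.2.5"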

    +

    configuration

    +

    Configuration parameters are put in the environment with the +npm_config_ prefix. For instance, you can view the effective root +config by checking the npm_config_root environment variable.

    +

    Special: package.json "config" object

    +

    The package.json "config" keys are overwritten in the environment if +there is a config param of <name>[@<version>]:<key>. For example, +if the package.json has this:

    +
    { "name" : "foo"
    +, "config" : { "port" : "8080" }
    +, "scripts" : { "start" : "node server.js" } }
    +

    and the server.js is this:

    +
    http.createServer(...).listen(process.env.npm_package_config_port)
    +

    then the user could change the behavior by doing:

    +
    npm config set foo:port 80
    +

    current lifecycle event

    +

    Lastly, the npm_lifecycle_event environment variable is set to +whichever stage of the cycle is being executed. So, you could have a +single script used for different parts of the process which switches +based on what's currently happening.

    +

    Objects are flattened following this format, so if you had +{"scripts":{"install":"foo.js"}} in your package.json, then you'd +see this in the script:

    +
    process.env.npm_package_scripts_install === "foo.js"
    +

    EXAMPLES

    +

    For example, if your package.json contains this:

    +
    { "scripts" :
    +  { "install" : "scripts/install.js"
    +  , "postinstall" : "scripts/install.js"
    +  , "uninstall" : "scripts/uninstall.js"
    +  }
    +}
    +

    then the scripts/install.js will be called for the install and post-install stages of the lifecycle, and the scripts/uninstall.js would be called when the package is uninstalled. Since scripts/install.js is running for two different phases, it would be wise in this case to look at the npm_lifecycle_event environment variable.
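
    A sketch of such a dispatching script (the log lines are illustrative):

        // scripts/install.js
        switch (process.env.npm_lifecycle_event) {
          case "install":
            console.log("running for the install stage")
            break
          case "postinstall":
            console.log("running for the postinstall stage")
            break
        }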

    +

    If you want to run a make command, you can do so. This works just +fine:

    +
    { "scripts" :
    +  { "preinstall" : "./configure"
    +  , "install" : "make && make install"
    +  , "test" : "make test"
    +  }
    +}
    +

    EXITING

    +

    Scripts are run by passing the line as a script argument to sh.

    +

    If the script exits with a code other than 0, then this will abort the +process.

    +

    Note that these script files don't have to be nodejs or even +javascript programs. They just have to be some kind of executable +file.

    +

    HOOK SCRIPTS

    +

    If you want to run a specific script at a specific lifecycle event for +ALL packages, then you can use a hook script.

    +

    Place an executable file at node_modules/.hooks/{eventname}, and it will be run whenever any package installed in that root goes through that point in the package lifecycle.

    +

    Hook scripts are run exactly the same way as package.json scripts. +That is, they are in a separate child process, with the env described +above.
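
    As a sketch, a hypothetical hook that logs every package passing through
    the postinstall stage would be an executable file with contents like:

        #!/bin/sh
        # node_modules/.hooks/postinstall -- must be executable (chmod +x)
        echo "postinstall: $npm_package_name@$npm_package_version"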

    +

    BEST PRACTICES

    +
      +
    • Don't exit with a non-zero error code unless you really mean it. +Except for uninstall scripts, this will cause the npm action to +fail, and potentially be rolled back. If the failure is minor or +only will prevent some optional features, then it's better to just +print a warning and exit successfully.
    • +
    • Try not to use scripts to do what npm can do for you. Read through +package.json(5) to see all the things that you can specify and enable +by simply describing your package appropriately. In general, this +will lead to a more robust and consistent state.
    • +
    • Inspect the env to determine where to put things. For instance, if the npm_config_binroot environment variable is set to /home/user/bin, then don't try to install executables into /usr/local/bin. The user probably set it up that way for a reason.
    • +
    • Don't prefix your script commands with "sudo". If root permissions +are required for some reason, then it'll fail with that error, and +the user will sudo the npm command in question.
    • +
    +

    SEE ALSO

    + + diff --git a/deps/npm/html/partial/doc/misc/removing-npm.html b/deps/npm/html/partial/doc/misc/removing-npm.html new file mode 100644 index 00000000000..3b3968bfc01 --- /dev/null +++ b/deps/npm/html/partial/doc/misc/removing-npm.html @@ -0,0 +1,37 @@ +

    npm-removal

    Cleaning the Slate

    +

    SYNOPSIS

    +

    So sad to see you go.

    +
    sudo npm uninstall npm -g
    +

    Or, if that fails, get the npm source code, and do:

    +
    sudo make uninstall
    +

    More Severe Uninstalling

    +

    Usually, the above instructions are sufficient. That will remove +npm, but leave behind anything you've installed.

    +

    If that doesn't work, or if you require more drastic measures, +continue reading.

    +

    Note that this is only necessary for globally-installed packages. Local +installs are completely contained within a project's node_modules +folder. Delete that folder, and everything is gone (unless a package's +install script is particularly ill-behaved).

    +

    This assumes that you installed node and npm in the default place. If +you configured node with a different --prefix, or installed npm with a +different prefix setting, then adjust the paths accordingly, replacing +/usr/local with your install prefix.

    +

    To remove everything npm-related manually:

    +
    rm -rf /usr/local/{lib/node{,/.npm,_modules},bin,share/man}/npm*
    +

    If you installed things with npm, then your best bet is to uninstall +them with npm first, and then install them again once you have a +proper install. This can help find any symlinks that are lying +around:

    +
    ls -laF /usr/local/{lib/node{,/.npm},bin,share/man} | grep npm
    +

    Prior to version 0.3, npm used shim files for executables and node +modules. To track those down, you can do the following:

    +
    find /usr/local/{lib/node,bin} -exec grep -l npm \{\} \; ;
    +

    (This is also in the README file.)

    +

    SEE ALSO

    + + diff --git a/deps/npm/html/partial/doc/misc/semver.html b/deps/npm/html/partial/doc/misc/semver.html new file mode 100644 index 00000000000..691a277dc78 --- /dev/null +++ b/deps/npm/html/partial/doc/misc/semver.html @@ -0,0 +1,243 @@ +

    semver

    The semantic versioner for npm

    +

    Usage

    +
    $ npm install semver
    +
    +semver.valid('1.2.3') // '1.2.3'
    +semver.valid('a.b.c') // null
    +semver.clean('  =v1.2.3   ') // '1.2.3'
    +semver.satisfies('1.2.3', '1.x || >=2.5.0 || 5.0.0 - 7.2.3') // true
    +semver.gt('1.2.3', '9.8.7') // false
    +semver.lt('1.2.3', '9.8.7') // true
    +

    As a command-line utility:

    +
    $ semver -h
    +
    +Usage: semver <version> [<version> [...]] [-r <range> | -i <inc> | -d <dec>]
    +Test if version(s) satisfy the supplied range(s), and sort them.
    +
    +Multiple versions or ranges may be supplied, unless increment
    +or decrement options are specified.  In that case, only a single
    +version may be used, and it is incremented by the specified level
    +
    +Program exits successfully if any valid version satisfies
    +all supplied ranges, and prints all satisfying versions.
    +
    +If no versions are valid, or ranges are not satisfied,
    +then exits failure.
    +
    +Versions are printed in ascending order, so supplying
    +multiple versions to the utility will just sort them.
    +

    Versions

    +

    A "version" is described by the v2.0.0 specification found at +http://semver.org/.

    +

    A leading "=" or "v" character is stripped off and ignored.

    +

    Ranges

    +

    A version range is a set of comparators which specify versions +that satisfy the range.

    +

    A comparator is composed of an operator and a version. The set +of primitive operators is:

    +
      +
    • < Less than
    • +
    • <= Less than or equal to
    • +
    • > Greater than
    • +
    • >= Greater than or equal to
    • +
    • = Equal. If no operator is specified, then equality is assumed, +so this operator is optional, but MAY be included.
    • +
    +

    For example, the comparator >=1.2.7 would match the versions +1.2.7, 1.2.8, 2.5.3, and 1.3.9, but not the versions 1.2.6 +or 1.1.0.

    +

    Comparators can be joined by whitespace to form a comparator set, +which is satisfied by the intersection of all of the comparators +it includes.

    +

    A range is composed of one or more comparator sets, joined by ||. A +version matches a range if and only if every comparator in at least +one of the ||-separated comparator sets is satisfied by the version.

    +

    For example, the range >=1.2.7 <1.3.0 would match the versions +1.2.7, 1.2.8, and 1.2.99, but not the versions 1.2.6, 1.3.0, +or 1.1.0.

    +

    The range 1.2.7 || >=1.2.9 <2.0.0 would match the versions 1.2.7, +1.2.9, and 1.4.6, but not the versions 1.2.8 or 2.0.0.
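
    A short check of those two examples with the semver module itself:

        var semver = require("semver")
        semver.satisfies("1.2.8", ">=1.2.7 <1.3.0")           // true
        semver.satisfies("1.2.8", "1.2.7 || >=1.2.9 <2.0.0")  // false
        semver.satisfies("1.4.6", "1.2.7 || >=1.2.9 <2.0.0")  // true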

    +

    Prerelease Tags

    +

    If a version has a prerelease tag (for example, 1.2.3-alpha.3) then +it will only be allowed to satisfy comparator sets if at least one +comparator with the same [major, minor, patch] tuple also has a +prerelease tag.

    +

    For example, the range >1.2.3-alpha.3 would be allowed to match the +version 1.2.3-alpha.7, but it would not be satisfied by +3.4.5-alpha.9, even though 3.4.5-alpha.9 is technically "greater +than" 1.2.3-alpha.3 according to the SemVer sort rules. The version +range only accepts prerelease tags on the 1.2.3 version. The +version 3.4.5 would satisfy the range, because it does not have a +prerelease flag, and 3.4.5 is greater than 1.2.3-alpha.7.

    +

    The purpose for this behavior is twofold. First, prerelease versions +frequently are updated very quickly, and contain many breaking changes +that are (by the author's design) not yet fit for public consumption. +Therefore, by default, they are excluded from range matching +semantics.

    +

    Second, a user who has opted into using a prerelease version has +clearly indicated the intent to use that specific set of +alpha/beta/rc versions. By including a prerelease tag in the range, +the user is indicating that they are aware of the risk. However, it +is still not appropriate to assume that they have opted into taking a +similar risk on the next set of prerelease versions.

    +

    Advanced Range Syntax

    +

    Advanced range syntax desugars to primitive comparators in +deterministic ways.

    +

    Advanced ranges may be combined in the same way as primitive +comparators using white space or ||.

    +

    Hyphen Ranges X.Y.Z - A.B.C

    +

    Specifies an inclusive set.

    +
      +
    • 1.2.3 - 2.3.4 := >=1.2.3 <=2.3.4
    • +
    +

    If a partial version is provided as the first version in the inclusive +range, then the missing pieces are replaced with zeroes.

    +
      +
    • 1.2 - 2.3.4 := >=1.2.0 <=2.3.4
    • +
    +

    If a partial version is provided as the second version in the +inclusive range, then all versions that start with the supplied parts +of the tuple are accepted, but nothing that would be greater than the +provided tuple parts.

    +
      +
    • 1.2.3 - 2.3 := >=1.2.3 <2.4.0
    • +
    • 1.2.3 - 2 := >=1.2.3 <3.0.0
    • +
    +

    X-Ranges 1.2.x 1.X 1.2.* *

    +

    Any of X, x, or * may be used to "stand in" for one of the +numeric values in the [major, minor, patch] tuple.

    +
      +
    • * := >=0.0.0 (Any version satisfies)
    • +
    • 1.x := >=1.0.0 <2.0.0 (Matching major version)
    • +
    • 1.2.x := >=1.2.0 <1.3.0 (Matching major and minor versions)
    • +
    +

    A partial version range is treated as an X-Range, so the special +character is in fact optional.

    +
      +
    • "" (empty string) := * := >=0.0.0
    • +
    • 1 := 1.x.x := >=1.0.0 <2.0.0
    • +
    • 1.2 := 1.2.x := >=1.2.0 <1.3.0
    • +
    +

    Tilde Ranges ~1.2.3 ~1.2 ~1

    +

    Allows patch-level changes if a minor version is specified on the +comparator. Allows minor-level changes if not.

    +
      +
    • ~1.2.3 := >=1.2.3 <1.(2+1).0 := >=1.2.3 <1.3.0
    • +
    • ~1.2 := >=1.2.0 <1.(2+1).0 := >=1.2.0 <1.3.0 (Same as 1.2.x)
    • +
    • ~1 := >=1.0.0 <(1+1).0.0 := >=1.0.0 <2.0.0 (Same as 1.x)
    • +
    • ~0.2.3 := >=0.2.3 <0.(2+1).0 := >=0.2.3 <0.3.0
    • +
    • ~0.2 := >=0.2.0 <0.(2+1).0 := >=0.2.0 <0.3.0 (Same as 0.2.x)
    • +
    • ~0 := >=0.0.0 <(0+1).0.0 := >=0.0.0 <1.0.0 (Same as 0.x)
    • +
    • ~1.2.3-beta.2 := >=1.2.3-beta.2 <1.3.0 Note that prereleases in +the 1.2.3 version will be allowed, if they are greater than or +equal to beta.2. So, 1.2.3-beta.4 would be allowed, but +1.2.4-beta.2 would not, because it is a prerelease of a +different [major, minor, patch] tuple.
    • +
    +

    Note: this is the same as the ~> operator in rubygems.

    +

    Caret Ranges ^1.2.3 ^0.2.5 ^0.0.4

    +

    Allows changes that do not modify the left-most non-zero digit in the +[major, minor, patch] tuple. In other words, this allows patch and +minor updates for versions 1.0.0 and above, patch updates for +versions 0.X >=0.1.0, and no updates for versions 0.0.X.

    +

    Many authors treat a 0.x version as if the x were the major +"breaking-change" indicator.

    +

    Caret ranges are ideal when an author may make breaking changes +between 0.2.4 and 0.3.0 releases, which is a common practice. +However, it presumes that there will not be breaking changes between +0.2.4 and 0.2.5. It allows for changes that are presumed to be +additive (but non-breaking), according to commonly observed practices.

    +
      +
    • ^1.2.3 := >=1.2.3 <2.0.0
    • +
    • ^0.2.3 := >=0.2.3 <0.3.0
    • +
    • ^0.0.3 := >=0.0.3 <0.0.4
    • +
    • ^1.2.3-beta.2 := >=1.2.3-beta.2 <2.0.0 Note that prereleases in +the 1.2.3 version will be allowed, if they are greater than or +equal to beta.2. So, 1.2.3-beta.4 would be allowed, but +1.2.4-beta.2 would not, because it is a prerelease of a +different [major, minor, patch] tuple.
    • +
    • ^0.0.3-beta := >=0.0.3-beta <0.0.4 Note that prereleases in the +0.0.3 version only will be allowed, if they are greater than or +equal to beta. So, 0.0.3-pr.2 would be allowed.
    • +
    +

    When parsing caret ranges, a missing patch value desugars to the +number 0, but will allow flexibility within that value, even if the +major and minor versions are both 0.

    +
      +
    • ^1.2.x := >=1.2.0 <2.0.0
    • +
    • ^0.0.x := >=0.0.0 <0.1.0
    • +
    • ^0.0 := >=0.0.0 <0.1.0
    • +
    +

    Missing minor and patch values desugar to zero, but also allow flexibility within those values, even if the major version is zero.

    +
      +
    • ^1.x := >=1.0.0 <2.0.0
    • +
    • ^0.x := >=0.0.0 <1.0.0
    • +
    +

    Functions

    +

    All methods and classes take a final loose boolean argument that, if +true, will be more forgiving about not-quite-valid semver strings. +The resulting output will always be 100% strict, of course.

    +

    Strict-mode Comparators and Ranges will be strict about the SemVer +strings that they parse.

    +
      +
    • valid(v): Return the parsed version, or null if it's not valid.
    • +
    • inc(v, release): Return the version incremented by the release type (major, premajor, minor, preminor, patch, prepatch, or prerelease), or null if it's not valid. A short sketch follows this list.
        +
      • premajor in one call will bump the version up to the next major +version and down to a prerelease of that major version. +preminor, and prepatch work the same way.
      • +
      • If called from a non-prerelease version, the prerelease will work the +same as prepatch. It increments the patch version, then makes a +prerelease. If the input version is already a prerelease it simply +increments it.
      • +
      +
    • +
    +
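
    A sketch of inc with a few release types (outputs follow the rules above):

        var semver = require("semver")
        semver.inc("1.2.3", "patch")              // "1.2.4"
        semver.inc("1.2.3", "premajor")           // "2.0.0-0"
        semver.inc("1.2.3-beta.1", "prerelease")  // "1.2.3-beta.2"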

    Comparison

    +
      +
    • gt(v1, v2): v1 > v2
    • +
    • gte(v1, v2): v1 >= v2
    • +
    • lt(v1, v2): v1 < v2
    • +
    • lte(v1, v2): v1 <= v2
    • +
    • eq(v1, v2): v1 == v2 This is true if they're logically equivalent, +even if they're not the exact same string. You already know how to +compare strings.
    • +
    • neq(v1, v2): v1 != v2 The opposite of eq.
    • +
    • cmp(v1, comparator, v2): Pass in a comparison string, and it'll call +the corresponding function above. "===" and "!==" do simple +string comparison, but are included for completeness. Throws if an +invalid comparison string is provided.
    • +
    • compare(v1, v2): Return 0 if v1 == v2, or 1 if v1 is greater, or -1 if +v2 is greater. Sorts in ascending order if passed to Array.sort().
    • +
    • rcompare(v1, v2): The reverse of compare. Sorts an array of versions +in descending order when passed to Array.sort().
    • +
    +

    Ranges

    +
      +
    • validRange(range): Return the valid range or null if it's not valid
    • +
    • satisfies(version, range): Return true if the version satisfies the +range.
    • +
    • maxSatisfying(versions, range): Return the highest version in the list +that satisfies the range, or null if none of them do.
    • +
    • gtr(version, range): Return true if version is greater than all the +versions possible in the range.
    • +
    • ltr(version, range): Return true if version is less than all the +versions possible in the range.
    • +
    • outside(version, range, hilo): Return true if the version is outside +the bounds of the range in either the high or low direction. The +hilo argument must be either the string '>' or '<'. (This is +the function called by gtr and ltr.)
    • +
    +

    Note that, since ranges may be non-contiguous, a version might not be +greater than a range, less than a range, or satisfy a range! For +example, the range 1.2 <1.2.9 || >2.0.0 would have a hole from 1.2.9 +until 2.0.0, so the version 1.2.10 would not be greater than the +range (because 2.0.1 satisfies, which is higher), nor less than the +range (since 1.2.8 satisfies, which is lower), and it also does not +satisfy the range.

    +

    If you want to know if a version satisfies or does not satisfy a +range, use the satisfies(version, range) function.
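
    Checking the example above programmatically:

        var semver = require("semver")
        var range = "1.2 <1.2.9 || >2.0.0"  // hole from 1.2.9 up to 2.0.0
        semver.satisfies("1.2.10", range)   // false -- inside the hole
        semver.gtr("1.2.10", range)         // false -- 2.0.1 satisfies and is higher
        semver.ltr("1.2.10", range)         // false -- 1.2.8 satisfies and is lower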

    + diff --git a/deps/npm/lib/adduser.js b/deps/npm/lib/adduser.js index 579ecb0a9f8..9693aebd389 100644 --- a/deps/npm/lib/adduser.js +++ b/deps/npm/lib/adduser.js @@ -19,9 +19,10 @@ function adduser (args, cb) { if (!crypto) return cb(new Error( "You must compile node with ssl support to use the adduser feature")) - var c = { u : npm.config.get("username") || "" - , p : npm.config.get("_password") || "" - , e : npm.config.get("email") || "" + var creds = npm.config.getCredentialsByURI(npm.config.get("registry")) + var c = { u : creds.username || "" + , p : creds.password || "" + , e : creds.email || "" } , u = {} , fns = [readUsername, readPassword, readEmail, save] @@ -94,7 +95,7 @@ function readPassword (c, u, cb) { return readPassword(c, u, cb) } - c.changed = c.changed || c.p != pw + c.changed = c.changed || c.p !== pw u.p = pw cb(er) }) @@ -132,17 +133,46 @@ function save (c, u, cb) { registry.password = u.p } npm.spinner.start() + // save existing configs, but yank off for this PUT - registry.adduser(npm.config.get("registry"), u.u, u.p, u.e, function (er) { + var uri = npm.config.get("registry") + var scope = npm.config.get("scope") + + // there may be a saved scope and no --registry (for login) + if (scope) { + if (scope.charAt(0) !== "@") scope = "@" + scope + + var scopedRegistry = npm.config.get(scope + ":registry") + if (scopedRegistry) uri = scopedRegistry + } + + registry.adduser(uri, u.u, u.p, u.e, function (er, doc) { npm.spinner.stop() if (er) return cb(er) + registry.username = u.u registry.password = u.p registry.email = u.e - npm.config.set("username", u.u, "user") - npm.config.set("_password", u.p, "user") - npm.config.set("email", u.e, "user") + + // don't want this polluting the configuration npm.config.del("_token", "user") + + if (scope) npm.config.set(scope + ":registry", uri, "user") + + if (doc && doc.token) { + npm.config.setCredentialsByURI(uri, { + token : doc.token + }) + } + else { + npm.config.setCredentialsByURI(uri, { + username : u.u, + password : u.p, + email : u.e, + alwaysAuth : npm.config.get("always-auth") + }) + } + log.info("adduser", "Authorized user %s", u.u) npm.config.save("user", cb) }) diff --git a/deps/npm/lib/bugs.js b/deps/npm/lib/bugs.js index b3022bf2a20..16744cd5c84 100644 --- a/deps/npm/lib/bugs.js +++ b/deps/npm/lib/bugs.js @@ -9,19 +9,23 @@ var npm = require("./npm.js") , opener = require("opener") , path = require("path") , readJson = require("read-package-json") + , npa = require("npm-package-arg") , fs = require("fs") - , url = require("url") + , mapToRegistry = require("./utils/map-to-registry.js") bugs.completion = function (opts, cb) { if (opts.conf.argv.remain.length > 2) return cb() - var uri = url.resolve(npm.config.get("registry"), "-/short") - registry.get(uri, { timeout : 60000 }, function (er, list) { - return cb(null, list || []) + mapToRegistry("-/short", npm.config, function (er, uri) { + if (er) return cb(er) + + registry.get(uri, { timeout : 60000 }, function (er, list) { + return cb(null, list || []) + }) }) } function bugs (args, cb) { - var n = args.length && args[0].split("@").shift() || '.' + var n = args.length && npa(args[0]).name || '.' 
fs.stat(n, function (er, s) { if (er && er.code === "ENOENT") return callRegistry(n, cb) else if (er) return cb (er) @@ -56,9 +60,13 @@ function getUrlAndOpen (d, cb) { } function callRegistry (n, cb) { - var uri = url.resolve(npm.config.get("registry"), n + "/latest") - registry.get(uri, { timeout : 3600 }, function (er, d) { + mapToRegistry(n, npm.config, function (er, uri) { if (er) return cb(er) - getUrlAndOpen (d, cb) + + registry.get(uri + "/latest", { timeout : 3600 }, function (er, d) { + if (er) return cb(er) + + getUrlAndOpen (d, cb) + }) }) } diff --git a/deps/npm/lib/build.js b/deps/npm/lib/build.js index 350774a453f..2e01ef6eeaa 100644 --- a/deps/npm/lib/build.js +++ b/deps/npm/lib/build.js @@ -19,6 +19,8 @@ var npm = require("./npm.js") , cmdShim = require("cmd-shim") , cmdShimIfExists = cmdShim.ifExists , asyncMap = require("slide").asyncMap + , ini = require("ini") + , writeFile = require("write-file-atomic") module.exports = build build.usage = "npm build \n(this is plumbing)" @@ -41,6 +43,7 @@ function build (args, global, didPre, didRB, cb) { function build_ (global, didPre, didRB) { return function (folder, cb) { folder = path.resolve(folder) + if (build._didBuild[folder]) log.error("build", "already built", folder) build._didBuild[folder] = true log.info("build", folder) readJson(path.resolve(folder, "package.json"), function (er, pkg) { @@ -48,7 +51,7 @@ function build_ (global, didPre, didRB) { return function (folder, cb) { chain ( [ !didPre && [lifecycle, pkg, "preinstall", folder] , [linkStuff, pkg, folder, global, didRB] - , pkg.name === "npm" && [writeBuiltinConf, folder] + , [writeBuiltinConf, pkg, folder] , didPre !== build._noLC && [lifecycle, pkg, "install", folder] , didPre !== build._noLC && [lifecycle, pkg, "postinstall", folder] , didPre !== build._noLC @@ -58,14 +61,21 @@ function build_ (global, didPre, didRB) { return function (folder, cb) { }) }} -function writeBuiltinConf (folder, cb) { - // the builtin config is "sticky". Any time npm installs itself, - // it puts its builtin config file there, as well. - if (!npm.config.usingBuiltin - || folder !== path.dirname(__dirname)) { +function writeBuiltinConf (pkg, folder, cb) { + // the builtin config is "sticky". Any time npm installs + // itself globally, it puts its builtin config file there + var parent = path.dirname(folder) + var dir = npm.globalDir + + if (pkg.name !== "npm" || + !npm.config.get("global") || + !npm.config.usingBuiltin || + dir !== parent) { return cb() } - npm.config.save("builtin", cb) + + var data = ini.stringify(npm.config.sources.builtin.data) + writeFile(path.resolve(folder, "npmrc"), data, cb) } function linkStuff (pkg, folder, global, didRB, cb) { @@ -75,7 +85,7 @@ function linkStuff (pkg, folder, global, didRB, cb) { // if it's global, and folder is in {prefix}/node_modules, // then bins are in {prefix}/bin // otherwise, then bins are in folder/../.bin - var parent = path.dirname(folder) + var parent = pkg.name[0] === "@" ? 
path.dirname(path.dirname(folder)) : path.dirname(folder) , gnm = global && npm.globalDir , gtop = parent === gnm @@ -95,7 +105,7 @@ function linkStuff (pkg, folder, global, didRB, cb) { function shouldWarn(pkg, folder, global, cb) { var parent = path.dirname(folder) , top = parent === npm.dir - , cwd = process.cwd() + , cwd = npm.localPrefix readJson(path.resolve(cwd, "package.json"), function(er, topPkg) { if (er) return cb(er) diff --git a/deps/npm/lib/cache.js b/deps/npm/lib/cache.js index 37bba5a0653..e1afb0d1578 100644 --- a/deps/npm/lib/cache.js +++ b/deps/npm/lib/cache.js @@ -47,31 +47,41 @@ adding a name@range: adding a local tarball: 1. untar to tmp/random/{blah} 2. goto folder(2) + +adding a namespaced package: +1. lookup registry for @namespace +2. namespace_registry.get('name') +3. add url(namespace/latest.tarball) */ exports = module.exports = cache + cache.unpack = unpack cache.clean = clean cache.read = read var npm = require("./npm.js") , fs = require("graceful-fs") + , writeFileAtomic = require("write-file-atomic") , assert = require("assert") , rm = require("./utils/gently-rm.js") , readJson = require("read-package-json") , log = require("npmlog") , path = require("path") - , url = require("url") , asyncMap = require("slide").asyncMap , tar = require("./utils/tar.js") , fileCompletion = require("./utils/completion/file-completion.js") - , isGitUrl = require("./utils/is-git-url.js") , deprCheck = require("./utils/depr-check.js") , addNamed = require("./cache/add-named.js") , addLocal = require("./cache/add-local.js") , addRemoteTarball = require("./cache/add-remote-tarball.js") , addRemoteGit = require("./cache/add-remote-git.js") + , maybeGithub = require("./cache/maybe-github.js") , inflight = require("inflight") + , realizePackageSpecifier = require("realize-package-specifier") + , npa = require("npm-package-arg") + , getStat = require("./cache/get-stat.js") + , cachedPackageRoot = require("./cache/cached-package-root.js") cache.usage = "npm cache add " + "\nnpm cache add " @@ -108,9 +118,8 @@ function cache (args, cb) { switch (cmd) { case "rm": case "clear": case "clean": return clean(args, cb) case "list": case "sl": case "ls": return ls(args, cb) - case "add": return add(args, cb) - default: return cb(new Error( - "Invalid cache action: "+cmd)) + case "add": return add(args, npm.prefix, cb) + default: return cb("Usage: "+cache.usage) } } @@ -123,8 +132,11 @@ function read (name, ver, forceBypass, cb) { if (forceBypass === undefined || forceBypass === null) forceBypass = true - var jsonFile = path.join(npm.cache, name, ver, "package", "package.json") + var root = cachedPackageRoot({name : name, version : ver}) function c (er, data) { + log.silly("cache", "addNamed cb", name+"@"+ver) + if (er) log.verbose("cache", "addNamed error for", name+"@"+ver, er) + if (data) deprCheck(data) return cb(er, data) @@ -135,27 +147,43 @@ function read (name, ver, forceBypass, cb) { return addNamed(name, ver, null, c) } - readJson(jsonFile, function (er, data) { - er = needName(er, data) - er = needVersion(er, data) + readJson(path.join(root, "package", "package.json"), function (er, data) { if (er && er.code !== "ENOENT" && er.code !== "ENOTDIR") return cb(er) - if (er) return addNamed(name, ver, null, c) - deprCheck(data) + if (data) { + if (!data.name) return cb(new Error("No name provided")) + if (!data.version) return cb(new Error("No version provided")) + } - c(er, data) + if (er) return addNamed(name, ver, null, c) + else c(er, data) }) } +function normalize (args) { + var 
normalized = "" + if (args.length > 0) { + var a = npa(args[0]) + if (a.name) normalized = a.name + if (a.rawSpec) normalized = [normalized, a.rawSpec].join("/") + if (args.length > 1) normalized = [normalized].concat(args.slice(1)).join("/") + } + + if (normalized.substr(-1) === "/") { + normalized = normalized.substr(0, normalized.length - 1) + } + log.silly("ls", "normalized", normalized) + + return normalized +} + // npm cache ls [] function ls (args, cb) { - args = args.join("/").split("@").join("/") - if (args.substr(-1) === "/") args = args.substr(0, args.length - 1) var prefix = npm.config.get("cache") - if (0 === prefix.indexOf(process.env.HOME)) { + if (prefix.indexOf(process.env.HOME) === 0) { prefix = "~" + prefix.substr(process.env.HOME.length) } - ls_(args, npm.config.get("depth"), function (er, files) { + ls_(normalize(args), npm.config.get("depth"), function (er, files) { console.log(files.map(function (f) { return path.join(prefix, f) }).join("\n").trim()) @@ -174,9 +202,7 @@ function clean (args, cb) { if (!args) args = [] - args = args.join("/").split("@").join("/") - if (args.substr(-1) === "/") args = args.substr(0, args.length - 1) - var f = path.join(npm.cache, path.normalize(args)) + var f = path.join(npm.cache, path.normalize(normalize(args))) if (f === npm.cache) { fs.readdir(npm.cache, function (er, files) { if (er) return cb() @@ -187,30 +213,29 @@ function clean (args, cb) { }) , rm, cb ) }) - } else rm(path.join(npm.cache, path.normalize(args)), cb) + } else rm(path.join(npm.cache, path.normalize(normalize(args))), cb) } // npm cache add // npm cache add // npm cache add // npm cache add -cache.add = function (pkg, ver, scrub, cb) { +cache.add = function (pkg, ver, where, scrub, cb) { assert(typeof pkg === "string", "must include name of package to install") assert(typeof cb === "function", "must include callback") if (scrub) { return clean([], function (er) { if (er) return cb(er) - add([pkg, ver], cb) + add([pkg, ver], where, cb) }) } - log.verbose("cache add", [pkg, ver]) - return add([pkg, ver], cb) + return add([pkg, ver], where, cb) } var adding = 0 -function add (args, cb) { +function add (args, where, cb) { // this is hot code. almost everything passes through here. // the args can be any of: // ["url"] @@ -226,60 +251,54 @@ function add (args, cb) { + " npm cache add @\n" + " npm cache add \n" + " npm cache add \n" - , name , spec + log.silly("cache add", "args", args) + if (args[1] === undefined) args[1] = null // at this point the args length must ==2 if (args[1] !== null) { - name = args[0] - spec = args[1] + spec = args[0]+"@"+args[1] } else if (args.length === 2) { spec = args[0] } - log.verbose("cache add", "name=%j spec=%j args=%j", name, spec, args) + log.verbose("cache add", "spec", spec) - if (!name && !spec) return cb(usage) + if (!spec) return cb(usage) if (adding <= 0) { npm.spinner.start() } - adding ++ - cb = afterAdd([name, spec], cb) - - // see if the spec is a url - // otherwise, treat as name@version - var p = url.parse(spec) || {} - log.verbose("parsed url", p) - - // If there's a /, and it's a path, then install the path. - // If not, and there's a @, it could be that we got name@http://blah - // in that case, we will not have a protocol now, but if we - // split and check, we will. 
- if (!name && !p.protocol) { - return maybeFile(spec, cb) - } - else { - switch (p.protocol) { - case "http:": - case "https:": - return addRemoteTarball(spec, { name: name }, null, cb) - + adding++ + cb = afterAdd(cb) + + realizePackageSpecifier(spec, where, function (err, p) { + if (err) return cb(err) + + log.silly("cache add", "parsed spec", p) + + switch (p.type) { + case "local": + case "directory": + addLocal(p, null, cb) + break + case "remote": + addRemoteTarball(p.spec, {name : p.name}, null, cb) + break + case "git": + addRemoteGit(p.spec, false, cb) + break + case "github": + maybeGithub(p.spec, cb) + break default: - if (isGitUrl(p)) return addRemoteGit(spec, p, false, cb) - - // if we have a name and a spec, then try name@spec - if (name) { - addNamed(name, spec, null, cb) - } - // if not, then try just spec (which may try name@"" if not found) - else { - addLocal(spec, {}, cb) - } + if (p.name) return addNamed(p.name, p.spec, null, cb) + + cb(new Error("couldn't figure out how to install " + spec)) } - } + }) } function unpack (pkg, ver, unpackTarget, dMode, fMode, uid, gid, cb) { @@ -295,7 +314,7 @@ function unpack (pkg, ver, unpackTarget, dMode, fMode, uid, gid, cb) { } npm.commands.unbuild([unpackTarget], true, function (er) { if (er) return cb(er) - tar.unpack( path.join(npm.cache, pkg, ver, "package.tgz") + tar.unpack( path.join(cachedPackageRoot({name : pkg, version : ver}), "package.tgz") , unpackTarget , dMode, fMode , uid, gid @@ -304,68 +323,26 @@ function unpack (pkg, ver, unpackTarget, dMode, fMode, uid, gid, cb) { }) } -function afterAdd (arg, cb) { return function (er, data) { - adding -- - if (adding <= 0) { - npm.spinner.stop() - } - if (er || !data || !data.name || !data.version) { - return cb(er, data) - } +function afterAdd (cb) { return function (er, data) { + adding-- + if (adding <= 0) npm.spinner.stop() + + if (er || !data || !data.name || !data.version) return cb(er, data) + log.silly("cache", "afterAdd", data.name+"@"+data.version) // Save the resolved, shasum, etc. into the data so that the next // time we load from this cached data, we have all the same info. - var name = data.name - var ver = data.version - var pj = path.join(npm.cache, name, ver, "package", "package.json") - var tmp = pj + "." + process.pid + var pj = path.join(cachedPackageRoot(data), "package", "package.json") var done = inflight(pj, cb) + if (!done) return log.verbose("afterAdd", pj, "already in flight; not writing") + log.verbose("afterAdd", pj, "not in flight; writing") - if (!done) return - - fs.writeFile(tmp, JSON.stringify(data), "utf8", function (er) { + getStat(function (er, cs) { if (er) return done(er) - fs.rename(tmp, pj, function (er) { - done(er, data) + writeFileAtomic(pj, JSON.stringify(data), {chown : cs}, function (er) { + if (!er) log.verbose("afterAdd", pj, "written") + return done(er, data) }) }) }} - -function maybeFile (spec, cb) { - // split name@2.3.4 only if name is a valid package name, - // don't split in case of "./test@example.com/" (local path) - fs.stat(spec, function (er) { - if (!er) { - // definitely a local thing - return addLocal(spec, {}, cb) - } - - maybeAt(spec, cb) - }) -} - -function maybeAt (spec, cb) { - if (spec.indexOf("@") !== -1) { - var tmp = spec.split("@") - - var name = tmp.shift() - spec = tmp.join("@") - add([name, spec], cb) - } else { - // already know it's not a url, so must be local - addLocal(spec, {}, cb) - } -} - -function needName(er, data) { - return er ? er - : (data && !data.name) ? 
new Error("No name provided") - : null -} - -function needVersion(er, data) { - return er ? er - : (data && !data.version) ? new Error("No version provided") - : null -} diff --git a/deps/npm/lib/cache/add-local-tarball.js b/deps/npm/lib/cache/add-local-tarball.js index bcb938fa972..e84b66dd8dd 100644 --- a/deps/npm/lib/cache/add-local-tarball.js +++ b/deps/npm/lib/cache/add-local-tarball.js @@ -1,80 +1,52 @@ var mkdir = require("mkdirp") , assert = require("assert") , fs = require("graceful-fs") - , readJson = require("read-package-json") - , log = require("npmlog") + , writeFileAtomic = require("write-file-atomic") , path = require("path") , sha = require("sha") , npm = require("../npm.js") + , log = require("npmlog") , tar = require("../utils/tar.js") , pathIsInside = require("path-is-inside") - , locker = require("../utils/locker.js") - , lock = locker.lock - , unlock = locker.unlock , getCacheStat = require("./get-stat.js") + , cachedPackageRoot = require("./cached-package-root.js") , chownr = require("chownr") , inflight = require("inflight") , once = require("once") + , writeStream = require("fs-write-stream-atomic") + , randomBytes = require("crypto").pseudoRandomBytes // only need uniqueness module.exports = addLocalTarball -function addLocalTarball (p, pkgData, shasum, cb_) { +function addLocalTarball (p, pkgData, shasum, cb) { assert(typeof p === "string", "must have path") - assert(typeof cb_ === "function", "must have callback") + assert(typeof cb === "function", "must have callback") if (!pkgData) pkgData = {} - var name = pkgData.name || "" - // If we don't have a shasum yet, then get the shasum now. + // If we don't have a shasum yet, compute it. if (!shasum) { return sha.get(p, function (er, shasum) { - if (er) return cb_(er) - addLocalTarball(p, pkgData, shasum, cb_) + if (er) return cb(er) + log.silly("addLocalTarball", "shasum (computed)", shasum) + addLocalTarball(p, pkgData, shasum, cb) }) } - // if it's a tar, and not in place, - // then unzip to .tmp, add the tmp folder, and clean up tmp - if (pathIsInside(p, npm.tmp)) - return addTmpTarball(p, pkgData, shasum, cb_) - if (pathIsInside(p, npm.cache)) { - if (path.basename(p) !== "package.tgz") return cb_(new Error( - "Not a valid cache tarball name: "+p)) - return addPlacedTarball(p, pkgData, shasum, cb_) + if (path.basename(p) !== "package.tgz") { + return cb(new Error("Not a valid cache tarball name: "+p)) + } + log.verbose("addLocalTarball", "adding from inside cache", p) + return addPlacedTarball(p, pkgData, shasum, cb) } - function cb (er, data) { + addTmpTarball(p, pkgData, shasum, function (er, data) { if (data) { data._resolved = p data._shasum = data._shasum || shasum } - return cb_(er, data) - } - - // just copy it over and then add the temp tarball file. 
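The removed block here is exactly the kind of hand-rolled tmp-file-plus-rename dance that the new write-file-atomic and fs-write-stream-atomic dependencies replace throughout the cache code. A minimal sketch of the replacement, assuming write-file-atomic's (file, data, options, callback) signature; the file name and uid/gid are illustrative, and the chown option mirrors the {chown : cs} usage above:

var writeFileAtomic = require("write-file-atomic")

var json = JSON.stringify({name: "example", version: "1.0.0"}, null, 2)

// write to a temporary sibling file, then rename() it over the target,
// so a crash never leaves a half-written package.json behind
writeFileAtomic("package.json", json, {chown: {uid: 501, gid: 20}}, function (er) {
  if (er) return console.error("write failed:", er)
  console.log("package.json written atomically")
})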
- var tmp = path.join(npm.tmp, name + Date.now() - + "-" + Math.random(), "tmp.tgz") - mkdir(path.dirname(tmp), function (er) { - if (er) return cb(er) - var from = fs.createReadStream(p) - , to = fs.createWriteStream(tmp) - , errState = null - function errHandler (er) { - if (errState) return - return cb(errState = er) - } - from.on("error", errHandler) - to.on("error", errHandler) - to.on("close", function () { - if (errState) return - log.verbose("chmod", tmp, npm.modes.file.toString(8)) - fs.chmod(tmp, npm.modes.file, function (er) { - if (er) return cb(er) - addTmpTarball(tmp, pkgData, shasum, cb) - }) - }) - from.pipe(to) + return cb(er, data) }) } @@ -89,10 +61,7 @@ function addPlacedTarball (p, pkgData, shasum, cb) { } function addPlacedTarball_ (p, pkgData, uid, gid, resolvedSum, cb) { - // now we know it's in place already as .cache/name/ver/package.tgz - var name = pkgData.name - , version = pkgData.version - , folder = path.join(npm.cache, name, version, "package") + var folder = path.join(cachedPackageRoot(pkgData), "package") // First, make sure we have the shasum, if we don't already. if (!resolvedSum) { @@ -103,105 +72,105 @@ function addPlacedTarball_ (p, pkgData, uid, gid, resolvedSum, cb) { return } - lock(folder, function (er) { + mkdir(folder, function (er) { if (er) return cb(er) - - // async try/finally - var originalCb = cb - cb = function (er, data) { - unlock(folder, function (er2) { - return originalCb(er || er2, data) - }) - } - - mkdir(folder, function (er) { - if (er) return cb(er) - var pj = path.join(folder, "package.json") - var json = JSON.stringify(pkgData, null, 2) - fs.writeFile(pj, json, "utf8", function (er) { - cb(er, pkgData) - }) + var pj = path.join(folder, "package.json") + var json = JSON.stringify(pkgData, null, 2) + writeFileAtomic(pj, json, function (er) { + cb(er, pkgData) }) }) } function addTmpTarball (tgz, pkgData, shasum, cb) { assert(typeof cb === "function", "must have callback function") - assert(shasum, "should have shasum by now") + assert(shasum, "must have shasum by now") cb = inflight("addTmpTarball:" + tgz, cb) - if (!cb) return + if (!cb) return log.verbose("addTmpTarball", tgz, "already in flight; not adding") + log.verbose("addTmpTarball", tgz, "not in flight; adding") // we already have the package info, so just move into place if (pkgData && pkgData.name && pkgData.version) { + log.verbose( + "addTmpTarball", + "already have metadata; skipping unpack for", + pkgData.name + "@" + pkgData.version + ) return addTmpTarball_(tgz, pkgData, shasum, cb) } - // This is a tarball we probably downloaded from the internet. - // The shasum's already been checked, but we haven't ever had - // a peek inside, so we unpack it here just to make sure it is - // what it says it is. - // Note: we might not have any clue what we think it is, for - // example if the user just did `npm install ./foo.tgz` + // This is a tarball we probably downloaded from the internet. The shasum's + // already been checked, but we haven't ever had a peek inside, so we unpack + // it here just to make sure it is what it says it is. 
+ // + // NOTE: we might not have any clue what we think it is, for example if the + // user just did `npm install ./foo.tgz` - var target = tgz + "-unpack" - getCacheStat(function (er, cs) { - tar.unpack(tgz, target, null, null, cs.uid, cs.gid, next) - }) - - function next (er) { + // generate a unique filename + randomBytes(6, function (er, random) { if (er) return cb(er) - var pj = path.join(target, "package.json") - readJson(pj, function (er, data) { - // XXX dry with similar stanza in add-local.js - er = needName(er, data) - er = needVersion(er, data) - // check that this is what we expected. - if (!er && pkgData.name && pkgData.name !== data.name) { - er = new Error( "Invalid Package: expected " - + pkgData.name + " but found " - + data.name ) - } - - if (!er && pkgData.version && pkgData.version !== data.version) { - er = new Error( "Invalid Package: expected " - + pkgData.name + "@" + pkgData.version - + " but found " - + data.name + "@" + data.version ) - } + var target = path.join(npm.tmp, "unpack-" + random.toString("hex")) + getCacheStat(function (er, cs) { if (er) return cb(er) - addTmpTarball_(tgz, data, shasum, cb) + log.verbose("addTmpTarball", "validating metadata from", tgz) + tar.unpack(tgz, target, null, null, cs.uid, cs.gid, function (er, data) { + if (er) return cb(er) + + // check that this is what we expected. + if (!data.name) { + return cb(new Error("No name provided")) + } + else if (pkgData.name && data.name !== pkgData.name) { + return cb(new Error("Invalid Package: expected " + pkgData.name + + " but found " + data.name)) + } + + if (!data.version) { + return cb(new Error("No version provided")) + } + else if (pkgData.version && data.version !== pkgData.version) { + return cb(new Error("Invalid Package: expected " + + pkgData.name + "@" + pkgData.version + + " but found " + data.name + "@" + data.version)) + } + + addTmpTarball_(tgz, data, shasum, cb) + }) }) - } + }) } function addTmpTarball_ (tgz, data, shasum, cb) { assert(typeof cb === "function", "must have callback function") cb = once(cb) - var name = data.name - var version = data.version - assert(name, "should have package name by now") - assert(version, "should have package version by now") + assert(data.name, "should have package name by now") + assert(data.version, "should have package version by now") - var root = path.resolve(npm.cache, name, version) + var root = cachedPackageRoot(data) var pkg = path.resolve(root, "package") var target = path.resolve(root, "package.tgz") getCacheStat(function (er, cs) { if (er) return cb(er) - mkdir(pkg, function (er) { + mkdir(pkg, function (er, created) { + + // chown starting from the first dir created by mkdirp, + // or the root dir, if none had to be created, so that + // we know that we get all the children. + function chown () { + chownr(created || root, cs.uid, cs.gid, done) + } + if (er) return cb(er) var read = fs.createReadStream(tgz) - var write = fs.createWriteStream(target) + var write = writeStream(target, { mode: npm.modes.file }) var fin = cs.uid && cs.gid ? chown : done read.on("error", cb).pipe(write).on("error", cb).on("close", fin) }) - function chown () { - chownr(root, cs.uid, cs.gid, done) - } }) function done() { @@ -209,15 +178,3 @@ function addTmpTarball_ (tgz, data, shasum, cb) { cb(null, data) } } - -function needName(er, data) { - return er ? er - : (data && !data.name) ? new Error("No name provided") - : null -} - -function needVersion(er, data) { - return er ? er - : (data && !data.version) ? 
new Error("No version provided") - : null -} diff --git a/deps/npm/lib/cache/add-local.js b/deps/npm/lib/cache/add-local.js index 2a6d8cf884f..b425d7f9118 100644 --- a/deps/npm/lib/cache/add-local.js +++ b/deps/npm/lib/cache/add-local.js @@ -1,5 +1,4 @@ -var fs = require("graceful-fs") - , assert = require("assert") +var assert = require("assert") , path = require("path") , mkdir = require("mkdirp") , chownr = require("chownr") @@ -9,56 +8,36 @@ var fs = require("graceful-fs") , npm = require("../npm.js") , tar = require("../utils/tar.js") , deprCheck = require("../utils/depr-check.js") - , locker = require("../utils/locker.js") - , lock = locker.lock - , unlock = locker.unlock , getCacheStat = require("./get-stat.js") - , addNamed = require("./add-named.js") + , cachedPackageRoot = require("./cached-package-root.js") , addLocalTarball = require("./add-local-tarball.js") - , maybeGithub = require("./maybe-github.js") , sha = require("sha") module.exports = addLocal function addLocal (p, pkgData, cb_) { - assert(typeof p === "string", "must have path") + assert(typeof p === "object", "must have spec info") assert(typeof cb === "function", "must have callback") pkgData = pkgData || {} function cb (er, data) { - unlock(p, function () { - if (er) { - // if it doesn't have a / in it, it might be a - // remote thing. - if (p.indexOf("/") === -1 && p.charAt(0) !== "." - && (process.platform !== "win32" || p.indexOf("\\") === -1)) { - return addNamed(p, "", null, cb_) - } - log.error("addLocal", "Could not install %s", p) - return cb_(er) - } - if (data && !data._fromGithub) data._from = p - return cb_(er, data) - }) + if (er) { + log.error("addLocal", "Could not install %s", p.spec) + return cb_(er) + } + if (data && !data._fromGithub) { + data._from = path.relative(npm.prefix, p.spec) || "." + } + return cb_(er, data) } - lock(p, function (er) { - if (er) return cb(er) - // figure out if this is a folder or file. - fs.stat(p, function (er, s) { - if (er) { - // might be username/project - // in that case, try it as a github url. - if (p.split("/").length === 2) { - return maybeGithub(p, er, cb) - } - return cb(er) - } - if (s.isDirectory()) addLocalDirectory(p, pkgData, null, cb) - else addLocalTarball(p, pkgData, null, cb) - }) - }) + if (p.type === "directory") { + addLocalDirectory(p.spec, pkgData, null, cb) + } + else { + addLocalTarball(p.spec, pkgData, null, cb) + } } // At this point, if shasum is set, it's something that we've already @@ -73,30 +52,33 @@ function addLocalDirectory (p, pkgData, shasum, cb) { "Adding a cache directory to the cache will make the world implode.")) readJson(path.join(p, "package.json"), false, function (er, data) { - er = needName(er, data) - er = needVersion(er, data) + if (er) return cb(er) - // check that this is what we expected. 
- if (!er && pkgData.name && pkgData.name !== data.name) { - er = new Error( "Invalid Package: expected " - + pkgData.name + " but found " - + data.name ) + if (!data.name) { + return cb(new Error("No name provided in package.json")) + } + else if (pkgData.name && pkgData.name !== data.name) { + return cb(new Error( + "Invalid package: expected " + pkgData.name + " but found " + data.name + )) } - if (!er && pkgData.version && pkgData.version !== data.version) { - er = new Error( "Invalid Package: expected " - + pkgData.name + "@" + pkgData.version - + " but found " - + data.name + "@" + data.version ) + if (!data.version) { + return cb(new Error("No version provided in package.json")) + } + else if (pkgData.version && pkgData.version !== data.version) { + return cb(new Error( + "Invalid package: expected " + pkgData.name + "@" + pkgData.version + + " but found " + data.name + "@" + data.version + )) } - if (er) return cb(er) deprCheck(data) // pack to {cache}/name/ver/package.tgz - var croot = path.resolve(npm.cache, data.name, data.version) - var tgz = path.resolve(croot, "package.tgz") - var pj = path.resolve(croot, "package/package.json") + var root = cachedPackageRoot(data) + var tgz = path.resolve(root, "package.tgz") + var pj = path.resolve(root, "package/package.json") getCacheStat(function (er, cs) { mkdir(path.dirname(pj), function (er, made) { if (er) return cb(er) @@ -132,15 +114,3 @@ function addLocalDirectory (p, pkgData, shasum, cb) { } }) } - -function needName(er, data) { - return er ? er - : (data && !data.name) ? new Error("No name provided") - : null -} - -function needVersion(er, data) { - return er ? er - : (data && !data.version) ? new Error("No version provided") - : null -} diff --git a/deps/npm/lib/cache/add-named.js b/deps/npm/lib/cache/add-named.js index 7137cc9b569..1bd7af14486 100644 --- a/deps/npm/lib/cache/add-named.js +++ b/deps/npm/lib/cache/add-named.js @@ -10,45 +10,48 @@ var path = require("path") , registry = npm.registry , deprCheck = require("../utils/depr-check.js") , inflight = require("inflight") - , locker = require("../utils/locker.js") - , lock = locker.lock - , unlock = locker.unlock - , maybeGithub = require("./maybe-github.js") , addRemoteTarball = require("./add-remote-tarball.js") + , cachedPackageRoot = require("./cached-package-root.js") + , mapToRegistry = require("../utils/map-to-registry.js") module.exports = addNamed -var NAME_PREFIX = "addName:" +function getOnceFromRegistry (name, from, next, done) { + mapToRegistry(name, npm.config, function (er, uri) { + if (er) return done(er) + + var key = "registry:" + uri + next = inflight(key, next) + if (!next) return log.verbose(from, key, "already in flight; waiting") + else log.verbose(from, key, "not in flight; fetching") + + registry.get(uri, null, next) + }) +} + function addNamed (name, version, data, cb_) { assert(typeof name === "string", "must have module name") assert(typeof cb_ === "function", "must have callback") - log.verbose("addNamed", [name, version]) - var key = name + "@" + version + log.verbose("addNamed", key) + function cb (er, data) { if (data && !data._fromGithub) data._from = key - unlock(key, function () { cb_(er, data) }) + cb_(er, data) } - cb_ = inflight(NAME_PREFIX + key, cb_) - - if (!cb_) return - - log.verbose("addNamed", [semver.valid(version), semver.validRange(version)]) - lock(key, function (er) { - if (er) return cb(er) - - var fn = ( semver.valid(version, true) ? addNameVersion - : semver.validRange(version, true) ? 
addNameRange - : addNameTag - ) - fn(name, version, data, cb) - }) + log.silly("addNamed", "semver.valid", semver.valid(version)) + log.silly("addNamed", "semver.validRange", semver.validRange(version)) + var fn = ( semver.valid(version, true) ? addNameVersion + : semver.validRange(version, true) ? addNameRange + : addNameTag + ) + fn(name, version, data, cb) } -function addNameTag (name, tag, data, cb_) { +function addNameTag (name, tag, data, cb) { log.info("addNameTag", [name, tag]) var explicit = true if (!tag) { @@ -56,21 +59,14 @@ function addNameTag (name, tag, data, cb_) { tag = npm.config.get("tag") } - function cb(er, data) { - // might be username/project - // in that case, try it as a github url. - if (er && tag.split("/").length === 2) { - return maybeGithub(tag, er, cb_) - } - return cb_(er, data) - } + getOnceFromRegistry(name, "addNameTag", next, cb) - var uri = url.resolve(npm.config.get("registry"), name) - registry.get(uri, null, function (er, data, json, resp) { - if (!er) { - er = errorResponse(name, resp) - } + function next (er, data, json, resp) { + if (!er) er = errorResponse(name, resp) if (er) return cb(er) + + log.silly("addNameTag", "next cb for", name, "with tag", tag) + engineFilter(data) if (data["dist-tags"] && data["dist-tags"][tag] && data.versions[data["dist-tags"][tag]]) { @@ -83,7 +79,7 @@ function addNameTag (name, tag, data, cb_) { er = installTargetsError(tag, data) return cb(er) - }) + } } function engineFilter (data) { @@ -114,22 +110,24 @@ function addNameVersion (name, v, data, cb) { response = null return next() } - var uri = url.resolve(npm.config.get("registry"), name) - registry.get(uri, null, function (er, d, json, resp) { + + getOnceFromRegistry(name, "addNameVersion", setData, cb) + + function setData (er, d, json, resp) { if (!er) { er = errorResponse(name, resp) } if (er) return cb(er) data = d && d.versions[ver] if (!data) { - er = new Error('version not found: ' + name + '@' + ver) + er = new Error("version not found: "+name+"@"+ver) er.package = name er.statusCode = 404 return cb(er) } response = resp next() - }) + } function next () { deprCheck(data) @@ -145,18 +143,27 @@ function addNameVersion (name, v, data, cb) { } // we got cached data, so let's see if we have a tarball. - var pkgroot = path.join(npm.cache, name, ver) + var pkgroot = cachedPackageRoot({name : name, version : ver}) var pkgtgz = path.join(pkgroot, "package.tgz") var pkgjson = path.join(pkgroot, "package", "package.json") fs.stat(pkgtgz, function (er) { if (!er) { readJson(pkgjson, function (er, data) { - er = needName(er, data) - er = needVersion(er, data) - if (er && er.code !== "ENOENT" && er.code !== "ENOTDIR") - return cb(er) + if (er && er.code !== "ENOENT" && er.code !== "ENOTDIR") return cb(er) + + if (data) { + if (!data.name) return cb(new Error("No name provided")) + if (!data.version) return cb(new Error("No version provided")) + + // check the SHA of the package we have, to ensure it wasn't installed + // from somewhere other than the registry (eg, a fork) + if (data._shasum && dist.shasum && data._shasum !== dist.shasum) { + return fetchit() + } + } + if (er) return fetchit() - return cb(null, data) + else return cb(null, data) }) } else return fetchit() }) @@ -166,10 +173,9 @@ function addNameVersion (name, v, data, cb) { return cb(new Error("Cannot fetch: "+dist.tarball)) } - // use the same protocol as the registry. - // https registry --> https tarballs, but - // only if they're the same hostname, or else - // detached tarballs may not work. 
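The getOnceFromRegistry() helper added near the top of this file is the pattern that replaces the old lock/unlock pairs: concurrent requests for the same registry URI are coalesced with inflight rather than serialized with a lock file. A standalone sketch of that mechanism; fetchPackument and the setTimeout "fetch" are hypothetical stand-ins:

var inflight = require("inflight")

function fetchPackument (uri, cb) {
  // the first caller for a key gets a fanout callback back and does the
  // work; later callers get null and simply wait for the same result
  cb = inflight("registry:" + uri, cb)
  if (!cb) return
  setTimeout(function () { cb(null, {_id: uri}) }, 100)  // pretend network fetch
}

// both callbacks fire from a single underlying "request"
fetchPackument("https://registry.npmjs.org/npm", console.log)
fetchPackument("https://registry.npmjs.org/npm", console.log)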
+ // Use the same protocol as the registry. https registry --> https + // tarballs, but only if they're the same hostname, or else detached + // tarballs may not work. var tb = url.parse(dist.tarball) var rp = url.parse(npm.config.get("registry")) if (tb.hostname === rp.hostname @@ -179,8 +185,8 @@ function addNameVersion (name, v, data, cb) { } tb = url.format(tb) - // only add non-shasum'ed packages if --forced. - // only ancient things would lack this for good reasons nowadays. + // Only add non-shasum'ed packages if --forced. Only ancient things + // would lack this for good reasons nowadays. if (!dist.shasum && !npm.config.get("force")) { return cb(new Error("package lacks shasum: " + data._id)) } @@ -192,20 +198,23 @@ function addNameVersion (name, v, data, cb) { function addNameRange (name, range, data, cb) { range = semver.validRange(range, true) if (range === null) return cb(new Error( - "Invalid version range: "+range)) + "Invalid version range: " + range + )) log.silly("addNameRange", {name:name, range:range, hasData:!!data}) if (data) return next() - var uri = url.resolve(npm.config.get("registry"), name) - registry.get(uri, null, function (er, d, json, resp) { + + getOnceFromRegistry(name, "addNameRange", setData, cb) + + function setData (er, d, json, resp) { if (!er) { er = errorResponse(name, resp) } if (er) return cb(er) data = d next() - }) + } function next () { log.silly( "addNameRange", "number 2" @@ -264,15 +273,3 @@ function errorResponse (name, response) { } return er } - -function needName(er, data) { - return er ? er - : (data && !data.name) ? new Error("No name provided") - : null -} - -function needVersion(er, data) { - return er ? er - : (data && !data.version) ? new Error("No version provided") - : null -} diff --git a/deps/npm/lib/cache/add-remote-git.js b/deps/npm/lib/cache/add-remote-git.js index 7743aa4a450..d8f3f1cd88f 100644 --- a/deps/npm/lib/cache/add-remote-git.js +++ b/deps/npm/lib/cache/add-remote-git.js @@ -8,17 +8,13 @@ var mkdir = require("mkdirp") , url = require("url") , chownr = require("chownr") , zlib = require("zlib") - , which = require("which") , crypto = require("crypto") - , chmodr = require("chmodr") , npm = require("../npm.js") , rm = require("../utils/gently-rm.js") , inflight = require("inflight") - , locker = require("../utils/locker.js") - , lock = locker.lock - , unlock = locker.unlock , getCacheStat = require("./get-stat.js") , addLocalTarball = require("./add-local-tarball.js") + , writeStream = require("fs-write-stream-atomic") // 1. cacheDir = path.join(cache,'_git-remotes',sha1(u)) @@ -28,18 +24,13 @@ var mkdir = require("mkdirp") // 5. git archive /tmp/random.tgz // 6. addLocalTarball(/tmp/random.tgz) --format=tar --prefix=package/ // silent flag is used if this should error quietly -module.exports = function addRemoteGit (u, parsed, silent, cb_) { +module.exports = function addRemoteGit (u, silent, cb) { assert(typeof u === "string", "must have git URL") - assert(typeof parsed === "object", "must have parsed query") - assert(typeof cb_ === "function", "must have callback") + assert(typeof cb === "function", "must have callback") - function cb (er, data) { - unlock(u, function () { cb_(er, data) }) - } - - cb_ = inflight(u, cb_) - - if (!cb_) return + log.verbose("addRemoteGit", "u=%j silent=%j", u, silent) + var parsed = url.parse(u, true) + log.silly("addRemoteGit", "parsed", parsed) // git is so tricky! 
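Tricky indeed. The clone target computed in the hunk below combines a human-readable slug of the git URL with a short sha1 of it. A hypothetical helper that mirrors that naming scheme, for illustration only:

var crypto = require("crypto")
var path = require("path")

function gitRemoteCacheDir (cache, u) {
  // 8 hex chars of sha1 keep distinct URLs from colliding after the
  // slug has flattened every non-alphanumeric run to "-"
  var h = crypto.createHash("sha1").update(u).digest("hex").slice(0, 8)
  var slug = u.replace(/[^a-zA-Z0-9]+/g, "-")
  return path.join(cache, "_git-remotes", slug + "-" + h)
}

console.log(gitRemoteCacheDir("/tmp/cache", "git://github.com/npm/npm.git#v2.0.0"))
// prints /tmp/cache/_git-remotes/git-github-com-npm-npm-git-v2-0-0- plus 8 hex chars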
// if the path is like ssh://foo:22/some/path then it works, but
@@ -55,23 +46,28 @@ module.exports = function addRemoteGit (u, parsed, silent, cb_) {
     u = u.replace(/^ssh:\/\//, "")
   }

-  lock(u, function (er) {
-    if (er) return cb(er)
+  cb = inflight(u, cb)
+  if (!cb) return log.verbose("addRemoteGit", u, "already in flight; waiting")
+  log.verbose("addRemoteGit", u, "not in flight; cloning")
+
+  // figure out what we should check out.
+  var co = parsed.hash && parsed.hash.substr(1) || "master"

-    // figure out what we should check out.
-    var co = parsed.hash && parsed.hash.substr(1) || "master"
+  var v = crypto.createHash("sha1").update(u).digest("hex").slice(0, 8)
+  v = u.replace(/[^a-zA-Z0-9]+/g, "-")+"-"+v

-    var v = crypto.createHash("sha1").update(u).digest("hex").slice(0, 8)
-    v = u.replace(/[^a-zA-Z0-9]+/g, '-') + '-' + v
+  log.verbose("addRemoteGit", [u, co])

-    log.verbose("addRemoteGit", [u, co])
+  var p = path.join(npm.config.get("cache"), "_git-remotes", v)

-    var p = path.join(npm.config.get("cache"), "_git-remotes", v)
+  // we don't need global templates when cloning. use this empty dir as the template dir
+  mkdir(path.join(npm.config.get("cache"), "_git-remotes", "_templates"), function (er) {
+    if (er) return cb(er)
+    checkGitDir(p, u, co, origUrl, silent, function (er, data) {
+      if (er) return cb(er, data)

-    checkGitDir(p, u, co, origUrl, silent, function(er, data) {
-      chmodr(p, npm.modes.file, function(erChmod) {
-        if (er) return cb(er, data)
-        return cb(erChmod, data)
+      addModeRecursive(p, npm.modes.file, function (er) {
+        return cb(er, data)
       })
     })
   })
@@ -109,7 +105,8 @@ function cloneGitRemote (p, u, co, origUrl, silent, cb) {
   mkdir(p, function (er) {
     if (er) return cb(er)
-    var args = [ "clone", "--mirror", u, p ]
+    var args = [ "clone", "--template=" + path.join(npm.config.get("cache"),
+                 "_git-remotes", "_templates"), "--mirror", u, p ]
     var env = gitEnv()

     // check for git
@@ -145,10 +142,7 @@ function archiveGitRemote (p, u, co, origUrl, cb) {
     }
     log.verbose("git fetch -a origin ("+u+")", stdout)
     tmp = path.join(npm.tmp, Date.now()+"-"+Math.random(), "tmp.tgz")
-    verifyOwnership()
-  })

-  function verifyOwnership() {
     if (process.platform === "win32") {
       log.silly("verifyOwnership", "skipping for windows")
       resolveHead()
@@ -167,7 +161,7 @@ function archiveGitRemote (p, u, co, origUrl, cb) {
         })
       })
     }
-  }
+  })

   function resolveHead () {
     git.whichAndExec(resolve, {cwd: p, env: env}, function (er, stdout, stderr) {
@@ -181,16 +175,20 @@ function archiveGitRemote (p, u, co, origUrl, cb) {
       parsed.hash = stdout
       resolved = url.format(parsed)

+      if (parsed.protocol !== "git:") {
+        resolved = "git+" + resolved
+      }
+
       // https://github.com/npm/npm/issues/3224
       // node incorrectly sticks a / at the start of the path
       // We know that the host won't change, so split and detect this
       var spo = origUrl.split(parsed.host)
       var spr = resolved.split(parsed.host)
-      if (spo[1].charAt(0) === ':' && spr[1].charAt(0) === '/')
+      if (spo[1].charAt(0) === ":" && spr[1].charAt(0) === "/")
         spr[1] = spr[1].slice(1)
       resolved = spr.join(parsed.host)

-      log.verbose('resolved git url', resolved)
+      log.verbose("resolved git url", resolved)
       next()
     })
   }
@@ -200,7 +198,7 @@ function archiveGitRemote (p, u, co, origUrl, cb) {
     if (er) return cb(er)
     var gzip = zlib.createGzip({ level: 9 })
     var args = ["archive", co, "--format=tar", "--prefix=package/"]
-    var out = fs.createWriteStream(tmp)
+    var out = writeStream(tmp)
     var env = gitEnv()
     cb = once(cb)
     var cp = git.spawn(args, { env: env, cwd: p })
@@ -226,8 +224,47 @@ function gitEnv () {
   if (gitEnv_) return gitEnv_
   gitEnv_ = {}
   for (var k in process.env) {
-    if (!~['GIT_PROXY_COMMAND','GIT_SSH','GIT_SSL_NO_VERIFY'].indexOf(k) && k.match(/^GIT/)) continue
+    if (!~["GIT_PROXY_COMMAND","GIT_SSH","GIT_SSL_NO_VERIFY","GIT_SSL_CAINFO"].indexOf(k) && k.match(/^GIT/)) continue
     gitEnv_[k] = process.env[k]
   }
   return gitEnv_
 }
+
+// similar to chmodr except it adds permissions rather than overwriting them
+// adapted from https://github.com/isaacs/chmodr/blob/master/chmodr.js
+function addModeRecursive(p, mode, cb) {
+  fs.readdir(p, function (er, children) {
+    // Any error other than ENOTDIR means it's not readable, or doesn't exist.
+    // Give up.
+    if (er && er.code !== "ENOTDIR") return cb(er)
+    if (er || !children.length) return addMode(p, mode, cb)
+
+    var len = children.length
+    var errState = null
+    children.forEach(function (child) {
+      addModeRecursive(path.resolve(p, child), mode, then)
+    })
+
+    function then (er) {
+      if (errState) return undefined
+      if (er) return cb(errState = er)
+      if (--len === 0) return addMode(p, dirMode(mode), cb)
+    }
+  })
+}
+
+function addMode(p, mode, cb) {
+  fs.stat(p, function (er, stats) {
+    if (er) return cb(er)
+    mode = stats.mode | mode
+    fs.chmod(p, mode, cb)
+  })
+}
+
+// taken from https://github.com/isaacs/chmodr/blob/master/chmodr.js
+function dirMode(mode) {
+  if (mode & parseInt("0400", 8)) mode |= parseInt("0100", 8)
+  if (mode & parseInt( "040", 8)) mode |= parseInt( "010", 8)
+  if (mode & parseInt(  "04", 8)) mode |= parseInt(  "01", 8)
+  return mode
+}
diff --git a/deps/npm/lib/cache/add-remote-tarball.js b/deps/npm/lib/cache/add-remote-tarball.js
index db9a05d8254..9591ba89d23 100644
--- a/deps/npm/lib/cache/add-remote-tarball.js
+++ b/deps/npm/lib/cache/add-remote-tarball.js
@@ -4,12 +4,10 @@ var mkdir = require("mkdirp")
   , path = require("path")
   , sha = require("sha")
   , retry = require("retry")
+  , createWriteStream = require("fs-write-stream-atomic")
   , npm = require("../npm.js")
-  , fetch = require("../utils/fetch.js")
+  , registry = npm.registry
   , inflight = require("inflight")
-  , locker = require("../utils/locker.js")
-  , lock = locker.lock
-  , unlock = locker.unlock
   , addLocalTarball = require("./add-local-tarball.js")
   , cacheFile = require("npm-cache-filename")
@@ -25,14 +23,12 @@ function addRemoteTarball (u, pkgData, shasum, cb_) {
       data._shasum = data._shasum || shasum
       data._resolved = u
     }
-    unlock(u, function () {
-      cb_(er, data)
-    })
+    cb_(er, data)
   }

   cb_ = inflight(u, cb_)
-
-  if (!cb_) return
+  if (!cb_) return log.verbose("addRemoteTarball", u, "already in flight; waiting")
+  log.verbose("addRemoteTarball", u, "not in flight; adding")

   // XXX Fetch direct to cache location, store tarballs under
   // ${cache}/registry.npmjs.org/pkg/-/pkg-1.2.3.tgz
@@ -43,25 +39,22 @@ function addRemoteTarball (u, pkgData, shasum, cb_) {
     addLocalTarball(tmp, pkgData, shasum, cb)
   }

-  lock(u, function (er) {
+  log.verbose("addRemoteTarball", [u, shasum])
+  mkdir(path.dirname(tmp), function (er) {
     if (er) return cb(er)
-
-    log.verbose("addRemoteTarball", [u, shasum])
-    mkdir(path.dirname(tmp), function (er) {
-      if (er) return cb(er)
-      addRemoteTarball_(u, tmp, shasum, next)
-    })
+    addRemoteTarball_(u, tmp, shasum, next)
   })
 }

 function addRemoteTarball_(u, tmp, shasum, cb) {
   // Tuned to spread 3 attempts over about a minute.
   // See formula at .
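The configuration being reformatted just below drives the retry package. A sketch of how that API spreads the attempts out, assuming npm's default fetch-retry settings (retries 2, factor 10, timeouts between 10 and 60 seconds); download() and done() are hypothetical stand-ins for the real fetch and callback:

var retry = require("retry")

var operation = retry.operation({
    retries: 2
  , factor: 10
  , minTimeout: 10 * 1000
  , maxTimeout: 60 * 1000
})

operation.attempt(function (currentAttempt) {
  download(function (er) {
    // operation.retry(er) returns true, and schedules another attempt
    // with backoff, when er is truthy and retries remain
    if (operation.retry(er)) return
    done(er ? operation.mainError() : null)
  })
})

function download (cb) { process.nextTick(cb) }   // pretend fetch
function done (er) { console.log(er || "fetched") }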
- var operation = retry.operation - ( { retries: npm.config.get("fetch-retries") - , factor: npm.config.get("fetch-retry-factor") - , minTimeout: npm.config.get("fetch-retry-mintimeout") - , maxTimeout: npm.config.get("fetch-retry-maxtimeout") }) + var operation = retry.operation({ + retries: npm.config.get("fetch-retries") + , factor: npm.config.get("fetch-retry-factor") + , minTimeout: npm.config.get("fetch-retry-mintimeout") + , maxTimeout: npm.config.get("fetch-retry-maxtimeout") + }) operation.attempt(function (currentAttempt) { log.info("retry", "fetch attempt " + currentAttempt @@ -80,27 +73,39 @@ function addRemoteTarball_(u, tmp, shasum, cb) { } function fetchAndShaCheck (u, tmp, shasum, cb) { - fetch(u, tmp, function (er, response) { + registry.fetch(u, null, function (er, response) { if (er) { log.error("fetch failed", u) return cb(er, response) } - if (!shasum) { - // Well, we weren't given a shasum, so at least sha what we have - // in case we want to compare it to something else later - return sha.get(tmp, function (er, shasum) { - cb(er, response, shasum) - }) - } + var tarball = createWriteStream(tmp, { mode : npm.modes.file }) + tarball.on("error", function (er) { + cb(er) + tarball.destroy() + }) - // validate that the url we just downloaded matches the expected shasum. - sha.check(tmp, shasum, function (er) { - if (er && er.message) { - // add original filename for better debuggability - er.message = er.message + '\n' + 'From: ' + u + tarball.on("finish", function () { + if (!shasum) { + // Well, we weren't given a shasum, so at least sha what we have + // in case we want to compare it to something else later + return sha.get(tmp, function (er, shasum) { + log.silly("fetchAndShaCheck", "shasum", shasum) + cb(er, response, shasum) + }) } - return cb(er, response, shasum) + + // validate that the url we just downloaded matches the expected shasum. 
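Both branches of the rewritten fetchAndShaCheck() lean on the sha package: sha.get() to record a checksum when none was expected, and sha.check() to verify one that was (the check itself continues in the lines that follow). A minimal standalone sketch, with a hypothetical file name:

var sha = require("sha")

sha.get("package.tgz", function (er, shasum) {
  if (er) return console.error(er)
  console.log("computed shasum:", shasum)

  // comparing the file against an expected sum; er is non-null on mismatch
  sha.check("package.tgz", shasum, function (er) {
    console.log(er ? "shasum mismatch" : "shasum ok")
  })
})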
+ log.silly("fetchAndShaCheck", "shasum", shasum) + sha.check(tmp, shasum, function (er) { + if (er && er.message) { + // add original filename for better debuggability + er.message = er.message + "\n" + "From: " + u + } + return cb(er, response, shasum) + }) }) + + response.pipe(tarball) }) } diff --git a/deps/npm/lib/cache/cached-package-root.js b/deps/npm/lib/cache/cached-package-root.js new file mode 100644 index 00000000000..7163314a8ed --- /dev/null +++ b/deps/npm/lib/cache/cached-package-root.js @@ -0,0 +1,14 @@ +var assert = require("assert") +var resolve = require("path").resolve + +var npm = require("../npm.js") + +module.exports = getCacheRoot + +function getCacheRoot (data) { + assert(data, "must pass package metadata") + assert(data.name, "package metadata must include name") + assert(data.version, "package metadata must include version") + + return resolve(npm.cache, data.name, data.version) +} diff --git a/deps/npm/lib/cache/get-stat.js b/deps/npm/lib/cache/get-stat.js index 913f5af851f..45b60ce7936 100644 --- a/deps/npm/lib/cache/get-stat.js +++ b/deps/npm/lib/cache/get-stat.js @@ -10,9 +10,6 @@ var cacheStat = null module.exports = function getCacheStat (cb) { if (cacheStat) return cb(null, cacheStat) - cb = inflight("getCacheStat", cb) - if (!cb) return - fs.stat(npm.cache, function (er, st) { if (er) return makeCacheDir(cb) if (!st.isDirectory()) { @@ -24,7 +21,13 @@ module.exports = function getCacheStat (cb) { } function makeCacheDir (cb) { - if (!process.getuid) return mkdir(npm.cache, cb) + cb = inflight("makeCacheDir", cb) + if (!cb) return log.verbose("getCacheStat", "cache creation already in flight; waiting") + log.verbose("getCacheStat", "cache creation not in flight; initializing") + + if (!process.getuid) return mkdir(npm.cache, function (er) { + return cb(er, {}) + }) var uid = +process.getuid() , gid = +process.getgid() diff --git a/deps/npm/lib/cache/maybe-github.js b/deps/npm/lib/cache/maybe-github.js index fee64c5dfdf..5ecdb691552 100644 --- a/deps/npm/lib/cache/maybe-github.js +++ b/deps/npm/lib/cache/maybe-github.js @@ -1,29 +1,26 @@ -var url = require("url") - , assert = require("assert") +var assert = require("assert") , log = require("npmlog") , addRemoteGit = require("./add-remote-git.js") -module.exports = function maybeGithub (p, er, cb) { +module.exports = function maybeGithub (p, cb) { assert(typeof p === "string", "must pass package name") - assert(er instanceof Error, "must include error") assert(typeof cb === "function", "must pass callback") var u = "git://github.com/" + p - , up = url.parse(u) log.info("maybeGithub", "Attempting %s from %s", p, u) - return addRemoteGit(u, up, true, function (er2, data) { - if (er2) { + return addRemoteGit(u, true, function (er, data) { + if (er) { var upriv = "git+ssh://git@github.com:" + p - , uppriv = url.parse(upriv) - log.info("maybeGithub", "Attempting %s from %s", p, upriv) - return addRemoteGit(upriv, uppriv, false, function (er3, data) { - if (er3) return cb(er) + return addRemoteGit(upriv, false, function (er, data) { + if (er) return cb(er) + success(upriv, data) }) } + success(u, data) }) diff --git a/deps/npm/lib/completion.js b/deps/npm/lib/completion.js index 5c1098a599d..1d26ffcf8ac 100644 --- a/deps/npm/lib/completion.js +++ b/deps/npm/lib/completion.js @@ -6,7 +6,7 @@ completion.usage = "npm completion >> ~/.bashrc\n" + "source <(npm completion)" var npm = require("./npm.js") - , npmconf = require("npmconf") + , npmconf = require("./config/core.js") , configDefs = npmconf.defs , configTypes 
= configDefs.types , shorthands = configDefs.shorthands @@ -229,7 +229,7 @@ function configCompl (opts, cb) { // expand with the valid values of various config values. // not yet implemented. function configValueCompl (opts, cb) { - console.error('configValue', opts) + console.error("configValue", opts) return cb(null, []) } diff --git a/deps/npm/lib/config.js b/deps/npm/lib/config.js index 8dc814acd0a..f51156aad4e 100644 --- a/deps/npm/lib/config.js +++ b/deps/npm/lib/config.js @@ -11,8 +11,9 @@ config.usage = "npm config set " var log = require("npmlog") , npm = require("./npm.js") + , npmconf = require("./config/core.js") , fs = require("graceful-fs") - , npmconf = require("npmconf") + , writeFileAtomic = require("write-file-atomic") , types = npmconf.defs.types , ini = require("ini") , editor = require("editor") @@ -88,17 +89,16 @@ function edit (cb) { if (key === "logstream") return arr return arr.concat( ini.stringify(obj) - .replace(/\n$/m, '') - .replace(/^/g, '; ') - .replace(/\n/g, '\n; ') - .split('\n')) + .replace(/\n$/m, "") + .replace(/^/g, "; ") + .replace(/\n/g, "\n; ") + .split("\n")) }, [])) .concat([""]) .join(os.EOL) - fs.writeFile + writeFileAtomic ( f , data - , "utf8" , function (er) { if (er) return cb(er) editor(f, { editor: e }, cb) diff --git a/deps/npm/node_modules/npmconf/npmconf.js b/deps/npm/lib/config/core.js similarity index 64% rename from deps/npm/node_modules/npmconf/npmconf.js rename to deps/npm/lib/config/core.js index a17705447a6..6c6112532fa 100644 --- a/deps/npm/node_modules/npmconf/npmconf.js +++ b/deps/npm/lib/config/core.js @@ -1,16 +1,15 @@ -var CC = require('config-chain').ConfigChain -var inherits = require('inherits') -var configDefs = require('./config-defs.js') +var CC = require("config-chain").ConfigChain +var inherits = require("inherits") +var configDefs = require("./defaults.js") var types = configDefs.types -var once = require('once') -var fs = require('fs') -var path = require('path') -var nopt = require('nopt') -var ini = require('ini') +var once = require("once") +var fs = require("fs") +var path = require("path") +var nopt = require("nopt") +var ini = require("ini") var Octal = configDefs.Octal -var mkdirp = require('mkdirp') -var path = require('path') +var mkdirp = require("mkdirp") exports.load = load exports.Conf = Conf @@ -19,11 +18,11 @@ exports.rootConf = null exports.usingBuiltin = false exports.defs = configDefs -Object.defineProperty(exports, 'defaults', { get: function () { +Object.defineProperty(exports, "defaults", { get: function () { return configDefs.defaults }, enumerable: true }) -Object.defineProperty(exports, 'types', { get: function () { +Object.defineProperty(exports, "types", { get: function () { return configDefs.types }, enumerable: true }) @@ -37,13 +36,13 @@ var myGid = process.env.SUDO_GID !== undefined var loading = false var loadCbs = [] -function load (cli_, builtin_, cb_) { +function load () { var cli, builtin, cb for (var i = 0; i < arguments.length; i++) switch (typeof arguments[i]) { - case 'string': builtin = arguments[i]; break - case 'object': cli = arguments[i]; break - case 'function': cb = arguments[i]; break + case "string": builtin = arguments[i]; break + case "object": cli = arguments[i]; break + case "function": cb = arguments[i]; break } if (!cb) @@ -86,13 +85,14 @@ function load (cli_, builtin_, cb_) { exports.usingBuiltin = !!builtin var rc = exports.rootConf = new Conf() if (builtin) - rc.addFile(builtin, 'builtin') + rc.addFile(builtin, "builtin") else - rc.add({}, 'builtin') + 
rc.add({}, "builtin") - rc.on('load', function () { + rc.on("load", function () { load_(builtin, rc, cli, cb) }) + rc.on("error", cb) } function load_(builtin, rc, cli, cb) { @@ -100,7 +100,7 @@ function load_(builtin, rc, cli, cb) { var conf = new Conf(rc) conf.usingBuiltin = !!builtin - conf.add(cli, 'cli') + conf.add(cli, "cli") conf.addEnv() conf.loadPrefix(function(er) { @@ -123,24 +123,24 @@ function load_(builtin, rc, cli, cb) { // the default or resolved userconfig value. npm will log a "verbose" // message about this when it happens, but it is a rare enough edge case // that we don't have to be super concerned about it. - var projectConf = path.resolve(conf.localPrefix, '.npmrc') - var defaultUserConfig = rc.get('userconfig') - var resolvedUserConfig = conf.get('userconfig') - if (!conf.get('global') && + var projectConf = path.resolve(conf.localPrefix, ".npmrc") + var defaultUserConfig = rc.get("userconfig") + var resolvedUserConfig = conf.get("userconfig") + if (!conf.get("global") && projectConf !== defaultUserConfig && projectConf !== resolvedUserConfig) { - conf.addFile(projectConf, 'project') - conf.once('load', afterPrefix) + conf.addFile(projectConf, "project") + conf.once("load", afterPrefix) } else { - conf.add({}, 'project') + conf.add({}, "project") afterPrefix() } }) function afterPrefix() { - conf.addFile(conf.get('userconfig'), 'user') - conf.once('error', cb) - conf.once('load', afterUser) + conf.addFile(conf.get("userconfig"), "user") + conf.once("error", cb) + conf.once("load", afterUser) } function afterUser () { @@ -149,18 +149,18 @@ function load_(builtin, rc, cli, cb) { // Eg, `npm config get globalconfig --prefix ~/local` should // return `~/local/etc/npmrc` // annoying humans and their expectations! - if (conf.get('prefix')) { - var etc = path.resolve(conf.get('prefix'), 'etc') - defaults.globalconfig = path.resolve(etc, 'npmrc') - defaults.globalignorefile = path.resolve(etc, 'npmignore') + if (conf.get("prefix")) { + var etc = path.resolve(conf.get("prefix"), "etc") + defaults.globalconfig = path.resolve(etc, "npmrc") + defaults.globalignorefile = path.resolve(etc, "npmignore") } - conf.addFile(conf.get('globalconfig'), 'global') + conf.addFile(conf.get("globalconfig"), "global") // move the builtin into the conf stack now. conf.root = defaults - conf.add(rc.shift(), 'builtin') - conf.once('load', function () { + conf.add(rc.shift(), "builtin") + conf.once("load", function () { conf.loadExtras(afterExtras) }) } @@ -172,7 +172,7 @@ function load_(builtin, rc, cli, cb) { // warn about invalid bits. 
validate(conf) - var cafile = conf.get('cafile') + var cafile = conf.get("cafile") if (cafile) { return conf.loadCAFile(cafile, finalize) @@ -181,7 +181,7 @@ function load_(builtin, rc, cli, cb) { finalize() } - function finalize(er, cadata) { + function finalize(er) { if (er) { return cb(er) } @@ -212,11 +212,13 @@ function Conf (base) { this.root = configDefs.defaults } -Conf.prototype.loadPrefix = require('./lib/load-prefix.js') -Conf.prototype.loadCAFile = require('./lib/load-cafile.js') -Conf.prototype.loadUid = require('./lib/load-uid.js') -Conf.prototype.setUser = require('./lib/set-user.js') -Conf.prototype.findPrefix = require('./lib/find-prefix.js') +Conf.prototype.loadPrefix = require("./load-prefix.js") +Conf.prototype.loadCAFile = require("./load-cafile.js") +Conf.prototype.loadUid = require("./load-uid.js") +Conf.prototype.setUser = require("./set-user.js") +Conf.prototype.findPrefix = require("./find-prefix.js") +Conf.prototype.getCredentialsByURI = require("./get-credentials-by-uri.js") +Conf.prototype.setCredentialsByURI = require("./set-credentials-by-uri.js") Conf.prototype.loadExtras = function(cb) { this.setUser(function(er) { @@ -234,17 +236,17 @@ Conf.prototype.loadExtras = function(cb) { Conf.prototype.save = function (where, cb) { var target = this.sources[where] if (!target || !(target.path || target.source) || !target.data) { - if (where !== 'builtin') - var er = new Error('bad save target: '+where) + if (where !== "builtin") + var er = new Error("bad save target: " + where) if (cb) { process.nextTick(cb.bind(null, er)) return this } - return this.emit('error', er) + return this.emit("error", er) } if (target.source) { - var pref = target.prefix || '' + var pref = target.prefix || "" Object.keys(target.data).forEach(function (k) { target.source[pref + k] = target.data[k] }) @@ -252,30 +254,15 @@ Conf.prototype.save = function (where, cb) { return this } - var data = target.data - - if (typeof data._password === 'string' && - typeof data.username === 'string') { - var auth = data.username + ':' + data._password - data = Object.keys(data).reduce(function (c, k) { - if (k === 'username' || k === '_password') - return c - c[k] = data[k] - return c - }, { _auth: new Buffer(auth, 'utf8').toString('base64') }) - delete data.username - delete data._password - } - - data = ini.stringify(data) + var data = ini.stringify(target.data) then = then.bind(this) done = done.bind(this) this._saving ++ - var mode = where === 'user' ? 0600 : 0666 + var mode = where === "user" ? "0600" : "0666" if (!data.trim()) { - fs.unlink(target.path, function (er) { + fs.unlink(target.path, function () { // ignore the possible error (e.g. 
the file doesn't exist) done(null) }) @@ -283,10 +270,10 @@ Conf.prototype.save = function (where, cb) { mkdirp(path.dirname(target.path), function (er) { if (er) return then(er) - fs.writeFile(target.path, data, 'utf8', function (er) { + fs.writeFile(target.path, data, "utf8", function (er) { if (er) return then(er) - if (where === 'user' && myUid && myGid) + if (where === "user" && myUid && myGid) fs.chown(target.path, +myUid, +myGid, then) else then() @@ -303,12 +290,12 @@ Conf.prototype.save = function (where, cb) { function done (er) { if (er) { if (cb) return cb(er) - else return this.emit('error', er) + else return this.emit("error", er) } this._saving -- if (this._saving === 0) { if (cb) cb() - this.emit('save') + this.emit("save") } } @@ -318,32 +305,31 @@ Conf.prototype.save = function (where, cb) { Conf.prototype.addFile = function (file, name) { name = name || file var marker = {__source__:name} - this.sources[name] = { path: file, type: 'ini' } + this.sources[name] = { path: file, type: "ini" } this.push(marker) this._await() - fs.readFile(file, 'utf8', function (er, data) { + fs.readFile(file, "utf8", function (er, data) { if (er) // just ignore missing files. return this.add({}, marker) - this.addString(data, file, 'ini', marker) + this.addString(data, file, "ini", marker) }.bind(this)) return this } // always ini files. Conf.prototype.parse = function (content, file) { - return CC.prototype.parse.call(this, content, file, 'ini') + return CC.prototype.parse.call(this, content, file, "ini") } Conf.prototype.add = function (data, marker) { - Object.keys(data).forEach(function (k) { - data[k] = parseField(data[k], k) - }) - if (Object.prototype.hasOwnProperty.call(data, '_auth')) { - var auth = new Buffer(data._auth, 'base64').toString('utf8').split(':') - var username = auth.shift() - var password = auth.join(':') - data.username = username - data._password = password + try { + Object.keys(data).forEach(function (k) { + data[k] = parseField(data[k], k) + }) + } + catch (e) { + this.emit("error", e) + return this } return CC.prototype.add.call(this, data, marker) } @@ -360,15 +346,15 @@ Conf.prototype.addEnv = function (env) { // leave first char untouched, even if // it is a "_" - convert all other to "-" var p = k.toLowerCase() - .replace(/^npm_config_/, '') - .replace(/(?!^)_/g, '-') + .replace(/^npm_config_/, "") + .replace(/(?!^)_/g, "-") conf[p] = env[k] }) - return CC.prototype.addEnv.call(this, '', conf, 'env') + return CC.prototype.addEnv.call(this, "", conf, "env") } -function parseField (f, k, emptyIsFalse) { - if (typeof f !== 'string' && !(f instanceof String)) +function parseField (f, k) { + if (typeof f !== "string" && !(f instanceof String)) return f // type can be an array or single thing. 
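Every string that parseField() accepts, whose body continues in the next hunk, is run through envReplace() for ${ENV} substitution. A standalone re-statement of that substitution logic for illustration (this mirrors the helper defined later in this file rather than importing it):

function envReplace (f) {
  if (typeof f !== "string" || !f) return f
  // replace any ${ENV} values with the appropriate environ;
  // a backslash-escaped ${...} is left untouched
  return f.replace(/(\\*)\$\{([^}]+)\}/g, function (orig, esc, name) {
    if (esc.length && esc.length % 2) return orig
    if (undefined === process.env[name]) {
      throw new Error("Failed to replace env in config: " + orig)
    }
    return process.env[name]
  })
}

process.env.EXAMPLE_REGISTRY = "https://registry.example.com/"
console.log(envReplace("${EXAMPLE_REGISTRY}npm"))  // https://registry.example.com/npm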
@@ -379,25 +365,31 @@ function parseField (f, k, emptyIsFalse) { var isOctal = -1 !== typeList.indexOf(Octal) var isNumber = isOctal || (-1 !== typeList.indexOf(Number)) - f = (''+f).trim() + f = (""+f).trim() - if (f.match(/^".*"$/)) - f = JSON.parse(f) + if (f.match(/^".*"$/)) { + try { + f = JSON.parse(f) + } + catch (e) { + throw new Error("Failed parsing JSON config key " + k + ": " + f) + } + } - if (isBool && !isString && f === '') + if (isBool && !isString && f === "") return true switch (f) { - case 'true': return true - case 'false': return false - case 'null': return null - case 'undefined': return undefined + case "true": return true + case "false": return false + case "null": return null + case "undefined": return undefined } f = envReplace(f) if (isPath) { - var homePattern = process.platform === 'win32' ? /^~(\/|\\)/ : /^~\// + var homePattern = process.platform === "win32" ? /^~(\/|\\)/ : /^~\// if (f.match(homePattern) && process.env.HOME) { f = path.resolve(process.env.HOME, f.substr(2)) } @@ -411,23 +403,23 @@ function parseField (f, k, emptyIsFalse) { } function envReplace (f) { - if (typeof f !== 'string' || !f) return f + if (typeof f !== "string" || !f) return f // replace any ${ENV} values with the appropriate environ. var envExpr = /(\\*)\$\{([^}]+)\}/g - return f.replace(envExpr, function (orig, esc, name, i, s) { + return f.replace(envExpr, function (orig, esc, name) { esc = esc.length && esc.length % 2 if (esc) return orig if (undefined === process.env[name]) - throw new Error('Failed to replace env in config: '+orig) + throw new Error("Failed to replace env in config: "+orig) return process.env[name] }) } function validate (cl) { // warn about invalid configs at every level. - cl.list.forEach(function (conf, level) { + cl.list.forEach(function (conf) { nopt.clean(conf, configDefs.types) }) diff --git a/deps/npm/node_modules/npmconf/config-defs.js b/deps/npm/lib/config/defaults.js similarity index 91% rename from deps/npm/node_modules/npmconf/config-defs.js rename to deps/npm/lib/config/defaults.js index 31522fb6434..7bd672114de 100644 --- a/deps/npm/node_modules/npmconf/config-defs.js +++ b/deps/npm/lib/config/defaults.js @@ -16,7 +16,7 @@ try { } catch (er) { var util = require("util") log = { warn: function (m) { - console.warn(m + util.format.apply(util, [].slice.call(arguments, 1))) + console.warn(m + " " + util.format.apply(util, [].slice.call(arguments, 1))) } } } @@ -40,6 +40,12 @@ function validateSemver (data, k, val) { data[k] = semver.valid(val) } +function validateTag (data, k, val) { + val = ("" + val).trim() + if (!val || semver.validRange(val)) return false + data[k] = val +} + function validateStream (data, k, val) { if (!(val instanceof Stream)) return false data[k] = val @@ -49,6 +55,10 @@ nopt.typeDefs.semver = { type: semver, validate: validateSemver } nopt.typeDefs.Octal = { type: Octal, validate: validateOctal } nopt.typeDefs.Stream = { type: Stream, validate: validateStream } +// Don't let --tag=1.2.3 ever be a thing +var tag = {} +nopt.typeDefs.tag = { type: tag, validate: validateTag } + nopt.invalidHandler = function (k, val, type) { log.warn("invalid config", k + "=" + JSON.stringify(val)) @@ -58,6 +68,9 @@ nopt.invalidHandler = function (k, val, type) { } switch (type) { + case tag: + log.warn("invalid config", "Tag must not be a SemVer range") + break case Octal: log.warn("invalid config", "Must be octal number, starting with 0") break @@ -113,8 +126,8 @@ Object.defineProperty(exports, "defaults", {get: function () { } } - return 
defaults = - { "always-auth" : false + defaults = { + "always-auth" : false , "bin-links" : true , browser : null @@ -137,7 +150,6 @@ Object.defineProperty(exports, "defaults", {get: function () { , description : true , dev : false , editor : osenv.editor() - , email: "" , "engine-strict": false , force : false @@ -156,10 +168,11 @@ Object.defineProperty(exports, "defaults", {get: function () { , heading: "npm" , "ignore-scripts": false , "init-module": path.resolve(home, ".npm-init.js") - , "init.author.name" : "" - , "init.author.email" : "" - , "init.author.url" : "" - , "init.license": "ISC" + , "init-author-name" : "" + , "init-author-email" : "" + , "init-author-url" : "" + , "init-version": "1.0.0" + , "init-license": "ISC" , json: false , key: null , link: false @@ -192,6 +205,7 @@ Object.defineProperty(exports, "defaults", {get: function () { , "save-exact" : false , "save-optional" : false , "save-prefix": "^" + , scope : "" , searchopts: "" , searchexclude: null , searchsort: "name" @@ -210,7 +224,6 @@ Object.defineProperty(exports, "defaults", {get: function () { || process.getuid() !== 0 , usage : false , user : process.platform === "win32" ? 0 : "nobody" - , username : "" , userconfig : path.resolve(home, ".npmrc") , umask: process.umask ? process.umask() : parseInt("022", 8) , version : false @@ -218,7 +231,9 @@ Object.defineProperty(exports, "defaults", {get: function () { , viewer: process.platform === "win32" ? "browser" : "man" , _exit : true - } + } + + return defaults }}) exports.types = @@ -239,7 +254,6 @@ exports.types = , description : Boolean , dev : Boolean , editor : String - , email: [null, String] , "engine-strict": Boolean , force : Boolean , "fetch-retries": Number @@ -256,17 +270,18 @@ exports.types = , "heading": String , "ignore-scripts": Boolean , "init-module": path - , "init.author.name" : String - , "init.author.email" : String - , "init.author.url" : ["", url] - , "init.license": String + , "init-author-name" : String + , "init-author-email" : String + , "init-author-url" : ["", url] + , "init-license": String + , "init-version": semver , json: Boolean , key: [null, String] , link: Boolean // local-address must be listed as an IP for a local network interface // must be IPv4 due to node bug , "local-address" : getLocalAddresses() - , loglevel : ["silent","win","error","warn","http","info","verbose","silly"] + , loglevel : ["silent","error","warn","http","info","verbose","silly"] , logstream : Stream , long : Boolean , message: String @@ -288,6 +303,7 @@ exports.types = , "save-exact" : Boolean , "save-optional" : Boolean , "save-prefix": String + , scope : String , searchopts : String , searchexclude: [null, String] , searchsort: [ "name", "-name" @@ -300,20 +316,18 @@ exports.types = , "sign-git-tag": Boolean , spin: ["always", Boolean] , "strict-ssl": Boolean - , tag : String + , tag : tag , tmp : path , unicode : Boolean , "unsafe-perm" : Boolean , usage : Boolean , user : [Number, String] - , username : String , userconfig : path , umask: Octal , version : Boolean , versions : Boolean , viewer: String , _exit : Boolean - , _password: String } function getLocalAddresses() { @@ -365,4 +379,5 @@ exports.shorthands = , y : ["--yes"] , n : ["--no-yes"] , B : ["--save-bundle"] + , C : ["--prefix"] } diff --git a/deps/npm/node_modules/npmconf/lib/find-prefix.js b/deps/npm/lib/config/find-prefix.js similarity index 100% rename from deps/npm/node_modules/npmconf/lib/find-prefix.js rename to deps/npm/lib/config/find-prefix.js diff --git 
a/deps/npm/lib/config/get-credentials-by-uri.js b/deps/npm/lib/config/get-credentials-by-uri.js new file mode 100644 index 00000000000..26a7f4317c6 --- /dev/null +++ b/deps/npm/lib/config/get-credentials-by-uri.js @@ -0,0 +1,73 @@ +var assert = require("assert") + +var toNerfDart = require("./nerf-dart.js") + +module.exports = getCredentialsByURI + +function getCredentialsByURI (uri) { + assert(uri && typeof uri === "string", "registry URL is required") + var nerfed = toNerfDart(uri) + var defnerf = toNerfDart(this.get("registry")) + + // hidden class micro-optimization + var c = { + scope : nerfed, + token : undefined, + password : undefined, + username : undefined, + email : undefined, + auth : undefined, + alwaysAuth : undefined + } + + if (this.get(nerfed + ":_authToken")) { + c.token = this.get(nerfed + ":_authToken") + // the bearer token is enough, don't confuse things + return c + } + + // Handle the old-style _auth= style for the default + // registry, if set. + // + // XXX(isaacs): Remove when npm 1.4 is no longer relevant + var authDef = this.get("_auth") + var userDef = this.get("username") + var passDef = this.get("_password") + if (authDef && !(userDef && passDef)) { + authDef = new Buffer(authDef, "base64").toString() + authDef = authDef.split(":") + userDef = authDef.shift() + passDef = authDef.join(":") + } + + if (this.get(nerfed + ":_password")) { + c.password = new Buffer(this.get(nerfed + ":_password"), "base64").toString("utf8") + } else if (nerfed === defnerf && passDef) { + c.password = passDef + } + + if (this.get(nerfed + ":username")) { + c.username = this.get(nerfed + ":username") + } else if (nerfed === defnerf && userDef) { + c.username = userDef + } + + if (this.get(nerfed + ":email")) { + c.email = this.get(nerfed + ":email") + } else if (this.get("email")) { + c.email = this.get("email") + } + + if (this.get(nerfed + ":always-auth") !== undefined) { + var val = this.get(nerfed + ":always-auth") + c.alwaysAuth = val === "false" ? 
false : !!val + } else if (this.get("always-auth") !== undefined) { + c.alwaysAuth = this.get("always-auth") + } + + if (c.username && c.password) { + c.auth = new Buffer(c.username + ":" + c.password).toString("base64") + } + + return c +} diff --git a/deps/npm/node_modules/npmconf/lib/load-cafile.js b/deps/npm/lib/config/load-cafile.js similarity index 72% rename from deps/npm/node_modules/npmconf/lib/load-cafile.js rename to deps/npm/lib/config/load-cafile.js index b8c9fff2330..dc1ff9f03a9 100644 --- a/deps/npm/node_modules/npmconf/lib/load-cafile.js +++ b/deps/npm/lib/config/load-cafile.js @@ -1,18 +1,18 @@ module.exports = loadCAFile -var fs = require('fs') +var fs = require("fs") function loadCAFile(cafilePath, cb) { if (!cafilePath) return process.nextTick(cb) - fs.readFile(cafilePath, 'utf8', afterCARead.bind(this)) + fs.readFile(cafilePath, "utf8", afterCARead.bind(this)) function afterCARead(er, cadata) { if (er) return cb(er) - var delim = '-----END CERTIFICATE-----' + var delim = "-----END CERTIFICATE-----" var output output = cadata @@ -24,8 +24,7 @@ function loadCAFile(cafilePath, cb) { return xs.trimLeft() + delim }) - this.set('ca', output) + this.set("ca", output) cb(null) } - } diff --git a/deps/npm/node_modules/npmconf/lib/load-prefix.js b/deps/npm/lib/config/load-prefix.js similarity index 89% rename from deps/npm/node_modules/npmconf/lib/load-prefix.js rename to deps/npm/lib/config/load-prefix.js index bb39d9c98de..39d076fb7d4 100644 --- a/deps/npm/node_modules/npmconf/lib/load-prefix.js +++ b/deps/npm/lib/config/load-prefix.js @@ -1,7 +1,7 @@ module.exports = loadPrefix var findPrefix = require("./find-prefix.js") -var path = require('path') +var path = require("path") function loadPrefix (cb) { var cli = this.list[0] @@ -9,7 +9,7 @@ function loadPrefix (cb) { Object.defineProperty(this, "prefix", { set : function (prefix) { var g = this.get("global") - this[g ? 'globalPrefix' : 'localPrefix'] = prefix + this[g ? "globalPrefix" : "localPrefix"] = prefix }.bind(this) , get : function () { var g = this.get("global") @@ -20,7 +20,7 @@ function loadPrefix (cb) { Object.defineProperty(this, "globalPrefix", { set : function (prefix) { - this.set('prefix', prefix) + this.set("prefix", prefix) }.bind(this) , get : function () { return path.resolve(this.get("prefix")) @@ -44,6 +44,6 @@ function loadPrefix (cb) { findPrefix(process.cwd(), function (er, found) { p = found cb(er) - }.bind(this)) + }) } } diff --git a/deps/npm/node_modules/npmconf/lib/load-uid.js b/deps/npm/lib/config/load-uid.js similarity index 100% rename from deps/npm/node_modules/npmconf/lib/load-uid.js rename to deps/npm/lib/config/load-uid.js diff --git a/deps/npm/lib/config/nerf-dart.js b/deps/npm/lib/config/nerf-dart.js new file mode 100644 index 00000000000..3b26a56c65f --- /dev/null +++ b/deps/npm/lib/config/nerf-dart.js @@ -0,0 +1,21 @@ +var url = require("url") + +module.exports = toNerfDart + +/** + * Maps a URL to an identifier. + * + * Name courtesy schiffertronix media LLC, a New Jersey corporation + * + * @param {String} uri The URL to be nerfed. + * + * @returns {String} A nerfed URL. 
+ */ +function toNerfDart(uri) { + var parsed = url.parse(uri) + parsed.pathname = "/" + delete parsed.protocol + delete parsed.auth + + return url.format(parsed) +} diff --git a/deps/npm/lib/config/set-credentials-by-uri.js b/deps/npm/lib/config/set-credentials-by-uri.js new file mode 100644 index 00000000000..31eab4479ed --- /dev/null +++ b/deps/npm/lib/config/set-credentials-by-uri.js @@ -0,0 +1,42 @@ +var assert = require("assert") + +var toNerfDart = require("./nerf-dart.js") + +module.exports = setCredentialsByURI + +function setCredentialsByURI (uri, c) { + assert(uri && typeof uri === "string", "registry URL is required") + assert(c && typeof c === "object", "credentials are required") + + var nerfed = toNerfDart(uri) + + if (c.token) { + this.set(nerfed + ":_authToken", c.token, "user") + this.del(nerfed + ":_password", "user") + this.del(nerfed + ":username", "user") + this.del(nerfed + ":email", "user") + this.del(nerfed + ":always-auth", "user") + } + else if (c.username || c.password || c.email) { + assert(c.username, "must include username") + assert(c.password, "must include password") + assert(c.email, "must include email address") + + this.del(nerfed + ":_authToken", "user") + + var encoded = new Buffer(c.password, "utf8").toString("base64") + this.set(nerfed + ":_password", encoded, "user") + this.set(nerfed + ":username", c.username, "user") + this.set(nerfed + ":email", c.email, "user") + + if (c.alwaysAuth !== undefined) { + this.set(nerfed + ":always-auth", c.alwaysAuth, "user") + } + else { + this.del(nerfed + ":always-auth", "user") + } + } + else { + throw new Error("No credentials to set.") + } +} diff --git a/deps/npm/node_modules/npmconf/lib/set-user.js b/deps/npm/lib/config/set-user.js similarity index 63% rename from deps/npm/node_modules/npmconf/lib/set-user.js rename to deps/npm/lib/config/set-user.js index cf29b1ace21..4c207a6792a 100644 --- a/deps/npm/node_modules/npmconf/lib/set-user.js +++ b/deps/npm/lib/config/set-user.js @@ -1,9 +1,9 @@ module.exports = setUser -var Conf = require('../npmconf.js').Conf -var assert = require('assert') -var path = require('path') -var fs = require('fs') +var assert = require("assert") +var path = require("path") +var fs = require("fs") +var mkdirp = require("mkdirp") function setUser (cb) { var defaultConf = this.root @@ -19,8 +19,11 @@ function setUser (cb) { } var prefix = path.resolve(this.get("prefix")) - fs.stat(prefix, function (er, st) { - defaultConf.user = st && st.uid - return cb(er) + mkdirp(prefix, function (er) { + if (er) return cb(er) + fs.stat(prefix, function (er, st) { + defaultConf.user = st && st.uid + return cb(er) + }) }) } diff --git a/deps/npm/lib/dedupe.js b/deps/npm/lib/dedupe.js index e6762e15bc5..74397d0cb95 100644 --- a/deps/npm/lib/dedupe.js +++ b/deps/npm/lib/dedupe.js @@ -7,7 +7,6 @@ // much better "put pkg X at folder Y" abstraction. Oh well, // whatever. Perfect enemy of the good, and all that. 
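A note on the two new config files above: toNerfDart() reduces a registry URL to a protocol-less, auth-less, path-less prefix (a "nerf dart"), and getCredentialsByURI()/setCredentialsByURI() read and write config keys scoped under that prefix, so each registry gets its own credentials. A minimal sketch of the transform, reusing the same url-module logic shown in nerf-dart.js (the host below is made up for illustration):

    var url = require("url")

    // Same transform as deps/npm/lib/config/nerf-dart.js above.
    function toNerfDart(uri) {
      var parsed = url.parse(uri)
      parsed.pathname = "/"
      delete parsed.protocol
      delete parsed.auth
      return url.format(parsed)
    }

    console.log(toNerfDart("https://user:pass@registry.example.com:8080/some/path"))
    // => "//registry.example.com:8080/"
    //
    // Credentials then live in .npmrc under that prefix, e.g.:
    //   //registry.example.com:8080/:_authToken = xyz
    //   //registry.example.com:8080/:username   = someuser
    //   //registry.example.com:8080/:_password  = <base64>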
-var url = require("url") var fs = require("fs") var asyncMap = require("slide").asyncMap var path = require("path") @@ -16,6 +15,7 @@ var semver = require("semver") var rm = require("./utils/gently-rm.js") var log = require("npmlog") var npm = require("./npm.js") +var mapToRegistry = require("./utils/map-to-registry.js") module.exports = dedupe @@ -61,7 +61,7 @@ function dedupe_ (dir, filter, unavoidable, dryrun, silent, cb) { Object.keys(obj.children).forEach(function (k) { U(obj.children[k]) }) - }) + })(data) // then collect them up and figure out who needs them ;(function C (obj) { @@ -240,13 +240,19 @@ function findVersions (npm, summary, cb) { var versions = data.versions var ranges = data.ranges - var uri = url.resolve(npm.config.get("registry"), name) - npm.registry.get(uri, null, function (er, data) { + mapToRegistry(name, npm.config, function (er, uri) { + if (er) return cb(er) + + npm.registry.get(uri, null, next) + }) + + function next (er, data) { var regVersions = er ? [] : Object.keys(data.versions) var locMatch = bestMatch(versions, ranges) - var regMatch; var tag = npm.config.get("tag") var distTag = data["dist-tags"] && data["dist-tags"][tag] + + var regMatch if (distTag && data.versions[distTag] && matches(distTag, ranges)) { regMatch = distTag } else { @@ -254,7 +260,7 @@ } cb(null, [[name, has, loc, locMatch, regMatch, locs]]) - }) + } }, cb) } diff --git a/deps/npm/lib/deprecate.js b/deps/npm/lib/deprecate.js index 175b69ceb13..17dd4eab0c1 100644 --- a/deps/npm/lib/deprecate.js +++ b/deps/npm/lib/deprecate.js @@ -1,5 +1,6 @@ -var url = require("url") - , npm = require("./npm.js") +var npm = require("./npm.js") + , mapToRegistry = require("./utils/map-to-registry.js") + , npa = require("npm-package-arg") module.exports = deprecate @@ -8,16 +9,20 @@ deprecate.usage = "npm deprecate <pkg>[@<version>] <message>" deprecate.completion = function (opts, cb) { // first, get a list of remote packages this user owns. // once we have a user account, then don't complete anything. - var un = npm.config.get("username") - if (!npm.config.get("username")) return cb() if (opts.conf.argv.remain.length > 2) return cb() // get the list of packages by user - var path = "/-/by-user/"+encodeURIComponent(un) - , uri = url.resolve(npm.config.get("registry"), path) - npm.registry.get(uri, { timeout : 60000 }, function (er, list) { - if (er) return cb() - console.error(list) - return cb(null, list[un]) + var path = "/-/by-user/" + mapToRegistry(path, npm.config, function (er, uri) { + if (er) return cb(er) + + var c = npm.config.getCredentialsByURI(uri) + if (!(c && c.username)) return cb() + + npm.registry.get(uri + c.username, { timeout : 60000 }, function (er, list) { + if (er) return cb() + console.error(list) + return cb(null, list[c.username]) + }) }) } @@ -25,11 +30,15 @@ function deprecate (args, cb) { var pkg = args[0] , msg = args[1] if (msg === undefined) return cb("Usage: " + deprecate.usage) + // fetch the data and make sure it exists.
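The deprecate.completion rewrite above is one of the first call sites of getCredentialsByURI(): instead of reading the old global "username" key, it asks for the credentials attached to the resolved registry URI. Worth noting from get-credentials-by-uri.js earlier in this patch: a bearer token short-circuits everything, per-registry username/_password come next, and the legacy "_auth" setting is honored only for the default registry. That legacy value is just base64("user:pass"); a sketch of the decode it performs (values made up):

    // Mirrors the legacy fallback in get-credentials-by-uri.js.
    // "dXNlcjpwYXNz" is base64 for "user:pass" (made-up credentials).
    var authDef = new Buffer("dXNlcjpwYXNz", "base64").toString()
    var parts = authDef.split(":")
    var userDef = parts.shift()    // "user"
    var passDef = parts.join(":")  // "pass" -- rejoining keeps any ":" in the password

    console.log(userDef, passDef)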
- pkg = pkg.split(/@/) - var name = pkg.shift() - , ver = pkg.join("@") - , uri = url.resolve(npm.config.get("registry"), name) + var p = npa(pkg) + + mapToRegistry(p.name, npm.config, next) + + function next (er, uri) { + if (er) return cb(er) - npm.registry.deprecate(uri, ver, msg, cb) + npm.registry.deprecate(uri, p.spec, msg, cb) + } } diff --git a/deps/npm/lib/docs.js b/deps/npm/lib/docs.js index 77073fbb9c1..dead3f7551c 100644 --- a/deps/npm/lib/docs.js +++ b/deps/npm/lib/docs.js @@ -5,18 +5,21 @@ docs.usage += "\n" docs.usage += "npm docs ." docs.completion = function (opts, cb) { - var uri = url_.resolve(npm.config.get("registry"), "/-/short") - registry.get(uri, { timeout : 60000 }, function (er, list) { - return cb(null, list || []) + mapToRegistry("/-/short", npm.config, function (er, uri) { + if (er) return cb(er) + + registry.get(uri, { timeout : 60000 }, function (er, list) { + return cb(null, list || []) + }) }) } -var url_ = require("url") - , npm = require("./npm.js") +var npm = require("./npm.js") , registry = npm.registry , opener = require("opener") , path = require("path") , log = require("npmlog") + , mapToRegistry = require("./utils/map-to-registry.js") function url (json) { return json.homepage ? json.homepage : "https://npmjs.org/package/" + json.name @@ -38,7 +41,7 @@ function docs (args, cb) { function getDoc (project, cb) { project = project || '.' - var package = path.resolve(process.cwd(), "package.json") + var package = path.resolve(npm.localPrefix, "package.json") if (project === '.' || project === './') { var json @@ -54,8 +57,13 @@ function getDoc (project, cb) { return opener(url(json), { command: npm.config.get("browser") }, cb) } - var uri = url_.resolve(npm.config.get("registry"), project + "/latest") - registry.get(uri, { timeout : 3600 }, function (er, json) { + mapToRegistry(project, npm.config, function (er, uri) { + if (er) return cb(er) + + registry.get(uri + "/latest", { timeout : 3600 }, next) + }) + + function next (er, json) { var github = "https://github.com/" + project + "#readme" if (er) { @@ -64,5 +72,5 @@ function getDoc (project, cb) { } return opener(url(json), { command: npm.config.get("browser") }, cb) - }) + } } diff --git a/deps/npm/lib/explore.js b/deps/npm/lib/explore.js index 767d9a876a2..e87e839354b 100644 --- a/deps/npm/lib/explore.js +++ b/deps/npm/lib/explore.js @@ -27,7 +27,7 @@ function explore (args, cb) { "Type 'exit' or ^D when finished\n") npm.spinner.stop() - var shell = spawn(sh, args, { cwd: cwd, customFds: [0, 1, 2] }) + var shell = spawn(sh, args, { cwd: cwd, stdio: "inherit" }) shell.on("close", function (er) { // only fail if non-interactive. if (!args.length) return cb() diff --git a/deps/npm/lib/help.js b/deps/npm/lib/help.js index 8f54d69ded7..747bd5020da 100644 --- a/deps/npm/lib/help.js +++ b/deps/npm/lib/help.js @@ -31,7 +31,7 @@ function help (args, cb) { // npm help : show basic usage if (!section) { - var valid = argv[0] === 'help' ? 0 : 1 + var valid = argv[0] === "help" ? 
0 : 1 return npmUsage(valid, cb) } @@ -111,7 +111,7 @@ function viewMan (man, cb) { switch (viewer) { case "woman": var a = ["-e", "(woman-find-file \"" + man + "\")"] - conf = { env: env, customFds: [ 0, 1, 2] } + conf = { env: env, stdio: "inherit" } var woman = spawn("emacsclient", a, conf) woman.on("close", cb) break @@ -121,7 +121,7 @@ break default: - conf = { env: env, customFds: [ 0, 1, 2] } + conf = { env: env, stdio: "inherit" } var manProcess = spawn("man", [num, section], conf) manProcess.on("close", cb) break @@ -153,8 +153,8 @@ function htmlMan (man) { function npmUsage (valid, cb) { npm.config.set("loglevel", "silent") log.level = "silent" - console.log - ( ["\nUsage: npm <command>" + console.log( + [ "\nUsage: npm <command>" , "" , "where <command> is one of:" , npm.config.get("long") ? usages() @@ -196,7 +196,7 @@ function usages () function wrap (arr) { - var out = [''] + var out = [""] , l = 0 , line @@ -209,9 +209,9 @@ arr.sort(function (a,b) { return a<b?-1:1 }) diff --git a/deps/npm/lib/init.js b/deps/npm/lib/init.js --- a/deps/npm/lib/init.js +++ b/deps/npm/lib/init.js - console.log( - ["This utility will walk you through creating a package.json file." - ,"It only covers the most common items, and tries to guess sane defaults." - ,"" - ,"See `npm help json` for definitive documentation on these fields" - ,"and exactly what they do." - ,"" - ,"Use `npm install <pkg> --save` afterwards to install a package and" ,"save it as a dependency in the package.json file." ,"" ,"Press ^C at any time to quit." ].join("\n")) - + var initFile = npm.config.get("init-module") + if (!initJson.yes(npm.config)) { + console.log( + ["This utility will walk you through creating a package.json file." ,"It only covers the most common items, and tries to guess sane defaults." ,"" ,"See `npm help json` for definitive documentation on these fields" ,"and exactly what they do." ,"" ,"Use `npm install <pkg> --save` afterwards to install a package and" ,"save it as a dependency in the package.json file." ,"" ,"Press ^C at any time to quit." ].join("\n")) + } initJson(dir, initFile, npm.config, function (er, data) { log.resume() - log.silly('package data', data) - log.info('init', 'written successfully') - if (er && er.message === 'canceled') { - log.warn('init', 'canceled') + log.silly("package data", data) + log.info("init", "written successfully") + if (er && er.message === "canceled") { + log.warn("init", "canceled") return cb(null, data) } cb(er, data) diff --git a/deps/npm/lib/install.js b/deps/npm/lib/install.js index 9d2c2cfa279..e539307aff3 100644 --- a/deps/npm/lib/install.js +++ b/deps/npm/lib/install.js @@ -34,28 +34,34 @@ install.completion = function (opts, cb) { // if it starts with https?://, then just give up, because it's a url // for now, not yet implemented.
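The customFds changes in help.js and explore.js above track a child_process API change: customFds was the pre-0.10 way to hand a child the parent's file descriptors, and it has been deprecated in favor of the stdio option. A standalone sketch of the replacement pattern (the command is arbitrary):

    var spawn = require("child_process").spawn

    // Before: spawn(cmd, args, { customFds: [0, 1, 2] })
    // After:  the child shares the parent's stdin/stdout/stderr directly.
    var child = spawn("man", ["ls"], { env: process.env, stdio: "inherit" })
    child.on("close", function (code) {
      console.log("viewer exited with code", code)
    })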
var registry = npm.registry - , uri = url.resolve(npm.config.get("registry"), "-/short") - registry.get(uri, null, function (er, pkgs) { - if (er) return cb() - if (!opts.partialWord) return cb(null, pkgs) - - var name = opts.partialWord.split("@").shift() - pkgs = pkgs.filter(function (p) { - return p.indexOf(name) === 0 - }) - - if (pkgs.length !== 1 && opts.partialWord === name) { - return cb(null, pkgs) - } + mapToRegistry("-/short", npm.config, function (er, uri) { + if (er) return cb(er) - uri = url.resolve(npm.config.get("registry"), pkgs[0]) - registry.get(uri, null, function (er, d) { + registry.get(uri, null, function (er, pkgs) { if (er) return cb() - return cb(null, Object.keys(d["dist-tags"] || {}) - .concat(Object.keys(d.versions || {})) - .map(function (t) { - return pkgs[0] + "@" + t - })) + if (!opts.partialWord) return cb(null, pkgs) + + var name = npa(opts.partialWord).name + pkgs = pkgs.filter(function (p) { + return p.indexOf(name) === 0 + }) + + if (pkgs.length !== 1 && opts.partialWord === name) { + return cb(null, pkgs) + } + + mapToRegistry(pkgs[0], npm.config, function (er, uri) { + if (er) return cb(er) + + registry.get(uri, null, function (er, d) { + if (er) return cb() + return cb(null, Object.keys(d["dist-tags"] || {}) + .concat(Object.keys(d.versions || {})) + .map(function (t) { + return pkgs[0] + "@" + t + })) + }) + }) }) }) } @@ -67,6 +73,7 @@ var npm = require("./npm.js") , log = require("npmlog") , path = require("path") , fs = require("graceful-fs") + , writeFileAtomic = require("write-file-atomic") , cache = require("./cache.js") , asyncMap = require("slide").asyncMap , chain = require("slide").chain @@ -74,9 +81,14 @@ var npm = require("./npm.js") , mkdir = require("mkdirp") , lifecycle = require("./utils/lifecycle.js") , archy = require("archy") - , isGitUrl = require("./utils/is-git-url.js") , npmInstallChecks = require("npm-install-checks") , sortedObject = require("sorted-object") + , mapToRegistry = require("./utils/map-to-registry.js") + , npa = require("npm-package-arg") + , inflight = require("inflight") + , locker = require("./utils/locker.js") + , lock = locker.lock + , unlock = locker.unlock function install (args, cb_) { var hasArguments = !!args.length @@ -112,7 +124,7 @@ function install (args, cb_) { where = args args = [].concat(cb_) // pass in [] to do default dep-install cb_ = arguments[2] - log.verbose("install", "where,what", [where, args]) + log.verbose("install", "where, what", [where, args]) } if (!npm.config.get("global")) { @@ -136,6 +148,22 @@ function install (args, cb_) { } var deps = Object.keys(data.dependencies || {}) log.verbose("install", "where, deps", [where, deps]) + + // FIXME: Install peerDependencies as direct dependencies, but only at + // the top level. Should only last until peerDependencies are nerfed to + // no longer implicitly install themselves. 
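Tracing the block below on a small, invented package.json makes the top-level behavior concrete:

    // { "dependencies":     { "a": "^1.0.0" },
    //   "peerDependencies": { "b": "^2.0.0", "a": "^1.0.0" } }
    //
    // "a" is already a direct dependency, so only "b" is collected:
    //   peers = ["b"]
    // installManyTop() then receives ["a@^1.0.0"].concat(["b@^2.0.0"]),
    // so the otherwise-missing peer is installed at the top level
    // alongside the declared dependencies.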
+ var peers = [] + Object.keys(data.peerDependencies || {}).forEach(function (dep) { + if (!data.dependencies[dep]) { + log.verbose( + "install", + "peerDependency", dep, "wasn't going to be installed; adding" + ) + peers.push(dep) + } + }) + log.verbose("install", "where, peers", [where, peers]) + var context = { family: {} , ancestors: {} , explicit: false @@ -153,10 +181,12 @@ installManyTop(deps.map(function (dep) { var target = data.dependencies[dep] - target = dep + "@" + target - return target - }), where, context, function(er, results) { - if (er) return cb(er, results) + return dep + "@" + target + }).concat(peers.map(function (dep) { + var target = data.peerDependencies[dep] + return dep + "@" + target + })), where, context, function(er, results) { + if (er || npm.config.get("production")) return cb(er, results) lifecycle(data, "prepublish", where, function(er) { return cb(er, results) }) @@ -198,7 +228,7 @@ function findPeerInvalid (where, cb) { function findPeerInvalid_ (packageMap, fpiList) { if (fpiList.indexOf(packageMap) !== -1) - return + return undefined fpiList.push(packageMap) @@ -206,7 +236,7 @@ var pkg = packageMap[packageName] if (pkg.peerInvalid) { - var peersDepending = {}; + var peersDepending = {} for (var peerName in packageMap) { var peer = packageMap[peerName] if (peer.peerDependencies && peer.peerDependencies[packageName]) { @@ -253,7 +283,13 @@ function readDependencies (context, where, opts, cb) { if (opts && opts.dev) { if (!data.dependencies) data.dependencies = {} Object.keys(data.devDependencies || {}).forEach(function (k) { - data.dependencies[k] = data.devDependencies[k] + if (data.dependencies[k]) { + log.warn("package.json", "Dependency '%s' exists in both dependencies " + + "and devDependencies, using '%s@%s' from dependencies", + k, k, data.dependencies[k]) + } else { + data.dependencies[k] = data.devDependencies[k] + } }) } @@ -285,11 +321,9 @@ function readDependencies (context, where, opts, cb) { var wrapfile = path.resolve(where, "npm-shrinkwrap.json") fs.readFile(wrapfile, "utf8", function (er, wrapjson) { - if (er) { - log.verbose("readDependencies", "using package.json deps") - return cb(null, data, null) - } + if (er) return cb(null, data, null) + log.verbose("readDependencies", "npm-shrinkwrap.json is overriding dependencies") var newwrap try { newwrap = JSON.parse(wrapjson) @@ -338,21 +372,33 @@ function save (where, installed, tree, pretty, hasArguments, cb) { return cb(null, installed, tree, pretty) } - var saveBundle = npm.config.get('save-bundle') - var savePrefix = npm.config.get('save-prefix') || "^"; + var saveBundle = npm.config.get("save-bundle") + var savePrefix = npm.config.get("save-prefix") // each item in the tree is a top-level thing that should be saved // to the package.json file. // The relevant tree shape is { <folder>: {what:<pkg>} } var saveTarget = path.resolve(where, "package.json") + + asyncMap(Object.keys(tree), function (k, cb) { + // if "what" was a url, then save that instead.
+ var t = tree[k] + , u = url.parse(t.from) + , a = npa(t.what) + , w = [a.name, a.spec] + + + fs.stat(t.from, function (er){ + if (!er) { + w[1] = "file:" + t.from + } else if (u && u.protocol) { + w[1] = t.from + } + cb(null, [w]) + }) + } + , function (er, arr) { + var things = arr.reduce(function (set, k) { var rangeDescriptor = semver.valid(k[1], true) && semver.gte(k[1], "0.1.0", true) && !npm.config.get("save-exact") @@ -361,47 +407,49 @@ function save (where, installed, tree, pretty, hasArguments, cb) { return set }, {}) - // don't use readJson, because we don't want to do all the other - // tricky npm-specific stuff that's in there. - fs.readFile(saveTarget, function (er, data) { - // ignore errors here, just don't save it. - try { - data = JSON.parse(data.toString("utf8")) - } catch (ex) { - er = ex - } - if (er) { - return cb(null, installed, tree, pretty) - } + // don't use readJson, because we don't want to do all the other + // tricky npm-specific stuff that's in there. + fs.readFile(saveTarget, function (er, data) { + // ignore errors here, just don't save it. + try { + data = JSON.parse(data.toString("utf8")) + } catch (ex) { + er = ex + } - var deps = npm.config.get("save-optional") ? "optionalDependencies" - : npm.config.get("save-dev") ? "devDependencies" - : "dependencies" + if (er) { + return cb(null, installed, tree, pretty) + } - if (saveBundle) { - var bundle = data.bundleDependencies || data.bundledDependencies - delete data.bundledDependencies - if (!Array.isArray(bundle)) bundle = [] - data.bundleDependencies = bundle.sort() - } + var deps = npm.config.get("save-optional") ? "optionalDependencies" + : npm.config.get("save-dev") ? "devDependencies" + : "dependencies" - log.verbose('saving', things) - data[deps] = data[deps] || {} - Object.keys(things).forEach(function (t) { - data[deps][t] = things[t] if (saveBundle) { - var i = bundle.indexOf(t) - if (i === -1) bundle.push(t) + var bundle = data.bundleDependencies || data.bundledDependencies + delete data.bundledDependencies + if (!Array.isArray(bundle)) bundle = [] data.bundleDependencies = bundle.sort() } - }) - data[deps] = sortedObject(data[deps]) + log.verbose("saving", things) + data[deps] = data[deps] || {} + Object.keys(things).forEach(function (t) { + data[deps][t] = things[t] + if (saveBundle) { + var i = bundle.indexOf(t) + if (i === -1) bundle.push(t) + data.bundleDependencies = bundle.sort() + } + }) + + data[deps] = sortedObject(data[deps]) - data = JSON.stringify(data, null, 2) + "\n" - fs.writeFile(saveTarget, data, function (er) { - cb(er, installed, tree, pretty) + data = JSON.stringify(data, null, 2) + "\n" + writeFileAtomic(saveTarget, data, function (er) { + cb(er, installed, tree, pretty) + }) }) }) } @@ -412,22 +460,22 @@ function save (where, installed, tree, pretty, hasArguments, cb) { // that the submodules are not immediately require()able. 
// TODO: Show the complete tree, ls-style, but only if --long is provided function prettify (tree, installed) { - if (npm.config.get("json")) { - function red (set, kv) { - set[kv[0]] = kv[1] - return set - } + function red (set, kv) { + set[kv[0]] = kv[1] + return set + } + if (npm.config.get("json")) { tree = Object.keys(tree).map(function (p) { if (!tree[p]) return null - var what = tree[p].what.split("@") - , name = what.shift() - , version = what.join("@") + var what = npa(tree[p].what) + , name = what.name + , version = what.spec , o = { name: name, version: version, from: tree[p].from } o.dependencies = tree[p].children.map(function P (dep) { - var what = dep.what.split("@") - , name = what.shift() - , version = what.join("@") + var what = npa(dep.what) + , name = what.name + , version = what.spec , o = { version: version, from: dep.from } o.dependencies = dep.children.map(P).reduce(red, {}) return [name, o] @@ -530,30 +578,58 @@ function installManyTop_ (what, where, context, cb) { fs.readdir(nm, function (er, pkgs) { if (er) return installMany(what, where, context, cb) - pkgs = pkgs.filter(function (p) { + + var scopes = [], unscoped = [] + pkgs.filter(function (p) { return !p.match(/^[\._-]/) + }).forEach(function (p) { + // @names deserve deeper investigation + if (p[0] === "@") { + scopes.push(p) + } + else { + unscoped.push(p) + } }) - asyncMap(pkgs.map(function (p) { - return path.resolve(nm, p, "package.json") - }), function (jsonfile, cb) { - readJson(jsonfile, log.warn, function (er, data) { - if (er && er.code !== "ENOENT" && er.code !== "ENOTDIR") return cb(er) - if (er) return cb(null, []) - return cb(null, [[data.name, data.version]]) - }) - }, function (er, packages) { - // if there's nothing in node_modules, then don't freak out. - if (er) packages = [] - // add all the existing packages to the family list. - // however, do not add to the ancestors list. - packages.forEach(function (p) { - context.family[p[0]] = p[1] + + maybeScoped(scopes, nm, function (er, scoped) { + if (er && er.code !== "ENOENT" && er.code !== "ENOTDIR") return cb(er) + // recombine unscoped with @scope/package packages + asyncMap(unscoped.concat(scoped).map(function (p) { + return path.resolve(nm, p, "package.json") + }), function (jsonfile, cb) { + readJson(jsonfile, log.warn, function (er, data) { + if (er && er.code !== "ENOENT" && er.code !== "ENOTDIR") return cb(er) + if (er) return cb(null, []) + cb(null, [[data.name, data.version]]) + }) + }, function (er, packages) { + // if there's nothing in node_modules, then don't freak out. + if (er) packages = [] + // add all the existing packages to the family list. + // however, do not add to the ancestors list. 
+ packages.forEach(function (p) { + context.family[p[0]] = p[1] + }) + installMany(what, where, context, cb) }) - return installMany(what, where, context, cb) }) }) } +function maybeScoped (scopes, where, cb) { + // find packages in scopes + asyncMap(scopes, function (scope, cb) { + fs.readdir(path.resolve(where, scope), function (er, scoped) { + if (er) return cb(er) + var paths = scoped.map(function (p) { + return path.join(scope, p) + }) + cb(null, paths) + }) + }, cb) +} + function installMany (what, where, context, cb) { // readDependencies takes care of figuring out whether the list of // dependencies we'll iterate below comes from an existing shrinkwrap from a @@ -593,7 +669,7 @@ function installMany (what, where, context, cb) { targets.forEach(function (t) { newPrev[t.name] = t.version }) - log.silly("resolved", targets) + log.silly("install resolved", targets) targets.filter(function (t) { return t }).forEach(function (t) { log.info("install", "%s into %s", t._id, where) }) @@ -615,60 +691,69 @@ function installMany (what, where, context, cb) { } function targetResolver (where, context, deps) { - var alreadyInstalledManually = context.explicit ? [] : null + var alreadyInstalledManually = [] + , resolveLeft = 0 , nm = path.resolve(where, "node_modules") , parent = context.parent , wrap = context.wrap - if (!context.explicit) fs.readdir(nm, function (er, inst) { - if (er) return alreadyInstalledManually = [] + if (!context.explicit) readdir(nm) - // don't even mess with non-package looking things - inst = inst.filter(function (p) { - return !p.match(/^[\._-]/) - }) + function readdir(name) { + resolveLeft++ + fs.readdir(name, function (er, inst) { + if (er) return resolveLeft-- - asyncMap(inst, function (pkg, cb) { - readJson(path.resolve(nm, pkg, "package.json"), log.warn, function (er, d) { - if (er && er.code !== "ENOENT" && er.code !== "ENOTDIR") return cb(er) - // error means it's not a package, most likely. - if (er) return cb(null, []) - - // if it's a bundled dep, then assume that anything there is valid. - // otherwise, make sure that it's a semver match with what we want. - var bd = parent.bundleDependencies - if (bd && bd.indexOf(d.name) !== -1 || - semver.satisfies(d.version, deps[d.name] || "*", true) || - deps[d.name] === d._resolved) { - return cb(null, d.name) - } + // don't even mess with non-package looking things + inst = inst.filter(function (p) { + if (!p.match(/^[@\._-]/)) return true + // scoped packages + readdir(path.join(name, p)) + }) - // see if the package had been previously linked - fs.lstat(path.resolve(nm, pkg), function(err, s) { - if (err) return cb(null, []) - if (s.isSymbolicLink()) { + asyncMap(inst, function (pkg, cb) { + readJson(path.resolve(name, pkg, "package.json"), log.warn, function (er, d) { + if (er && er.code !== "ENOENT" && er.code !== "ENOTDIR") return cb(er) + // error means it's not a package, most likely. + if (er) return cb(null, []) + + // if it's a bundled dep, then assume that anything there is valid. + // otherwise, make sure that it's a semver match with what we want. + var bd = parent.bundleDependencies + if (bd && bd.indexOf(d.name) !== -1 || + semver.satisfies(d.version, deps[d.name] || "*", true) || + deps[d.name] === d._resolved) { return cb(null, d.name) } - // something is there, but it's not satisfactory. Clobber it. 
- return cb(null, []) + // see if the package had been previously linked + fs.lstat(path.resolve(nm, pkg), function(err, s) { + if (err) return cb(null, []) + if (s.isSymbolicLink()) { + return cb(null, d.name) + } + + // something is there, but it's not satisfactory. Clobber it. + return cb(null, []) + }) }) + }, function (er, inst) { + // this is the list of things that are valid and should be ignored. + alreadyInstalledManually = alreadyInstalledManually.concat(inst) + resolveLeft-- }) - }, function (er, inst) { - // this is the list of things that are valid and should be ignored. - alreadyInstalledManually = inst }) - }) + } var to = 0 return function resolver (what, cb) { - if (!alreadyInstalledManually) return setTimeout(function () { + if (resolveLeft) return setTimeout(function () { resolver(what, cb) }, to++) // now we know what's been installed here manually, // or tampered with in some way that npm doesn't want to overwrite. - if (alreadyInstalledManually.indexOf(what.split("@").shift()) !== -1) { + if (alreadyInstalledManually.indexOf(npa(what).name) !== -1) { log.verbose("already installed", "skipping %s %s", what, where) return cb(null, []) } @@ -692,7 +777,7 @@ function targetResolver (where, context, deps) { } if (wrap) { - var name = what.split(/@/).shift() + var name = npa(what).name if (wrap[name]) { var wrapTarget = readWrap(wrap[name]) what = name + "@" + wrapTarget @@ -709,19 +794,16 @@ function targetResolver (where, context, deps) { // already has a matching copy. // If it's not a git repo, and the parent already has that pkg, then // we can skip installing it again. - cache.add(what, null, false, function (er, data) { + var pkgroot = path.resolve(npm.prefix, (parent && parent._from) || "") + cache.add(what, null, pkgroot, false, function (er, data) { if (er && parent && parent.optionalDependencies && - parent.optionalDependencies.hasOwnProperty(what.split("@")[0])) { + parent.optionalDependencies.hasOwnProperty(npa(what).name)) { log.warn("optional dep failed, continuing", what) log.verbose("optional dep failed, continuing", [what, er]) return cb(null, []) } - var isGit = false - , maybeGit = what.split("@").slice(1).join() - - if (maybeGit) - isGit = isGitUrl(url.parse(maybeGit)) + var isGit = npa(what).type === "git" if (!er && data && @@ -733,6 +815,7 @@ function targetResolver (where, context, deps) { return cb(null, []) } + if (data && !data._from) data._from = what if (er && parent && parent.name) er.parent = parent.name return cb(er, data || []) @@ -771,6 +854,13 @@ function localLink (target, where, context, cb) { , parent = context.parent readJson(jsonFile, log.warn, function (er, data) { + function thenLink () { + npm.commands.link([target.name], function (er, d) { + log.silly("localLink", "back from link", [er, d]) + cb(er, [resultList(target, where, parent && parent._id)]) + }) + } + if (er && er.code !== "ENOENT" && er.code !== "ENOTDIR") return cb(er) if (er || data._id === target._id) { if (er) { @@ -781,14 +871,6 @@ function localLink (target, where, context, cb) { thenLink() }) } else thenLink() - - function thenLink () { - npm.commands.link([target.name], function (er, d) { - log.silly("localLink", "back from link", [er, d]) - cb(er, [resultList(target, where, parent && parent._id)]) - }) - } - } else { log.verbose("localLink", "install locally (no link)", target._id) installOne_(target, where, context, cb) @@ -819,15 +901,9 @@ function resultList (target, where, parentId) { , target._from ] } -// name => install locations -var 
installOnesInProgress = Object.create(null) +var installed = Object.create(null) -function isIncompatibleInstallOneInProgress(target, where) { - return target.name in installOnesInProgress && - installOnesInProgress[target.name].indexOf(where) !== -1 -} - -function installOne_ (target, where, context, cb) { +function installOne_ (target, where, context, cb_) { var nm = path.resolve(where, "node_modules") , targetFolder = path.resolve(nm, target.name) , prettyWhere = path.relative(process.cwd(), where) @@ -835,37 +911,55 @@ function installOne_ (target, where, context, cb) { if (prettyWhere === ".") prettyWhere = null - if (isIncompatibleInstallOneInProgress(target, where)) { - // just call back, with no error. the error will be detected in the - // final check for peer-invalid dependencies - return cb() + cb_ = inflight(target.name + ":" + where, cb_) + if (!cb_) return log.verbose( + "installOne", + "of", target.name, + "to", where, + "already in flight; waiting" + ) + else log.verbose( + "installOne", + "of", target.name, + "to", where, + "not in flight; installing" + ) + + function cb(er, data) { + unlock(nm, target.name, function () { cb_(er, data) }) } - if (!(target.name in installOnesInProgress)) { - installOnesInProgress[target.name] = [] - } - installOnesInProgress[target.name].push(where) - var indexOfIOIP = installOnesInProgress[target.name].length - 1 - , force = npm.config.get("force") - , nodeVersion = npm.config.get("node-version") - , strict = npm.config.get("engine-strict") - , c = npmInstallChecks - - chain - ( [ [c.checkEngine, target, npm.version, nodeVersion, force, strict] - , [c.checkPlatform, target, force] - , [c.checkCycle, target, context.ancestors] - , [c.checkGit, targetFolder] - , [write, target, targetFolder, context] ] - , function (er, d) { - installOnesInProgress[target.name].splice(indexOfIOIP, 1) + lock(nm, target.name, function (er) { + if (er) return cb(er) - if (er) return cb(er) + if (targetFolder in installed) { + log.error("install", "trying to install", target.version, "to", targetFolder) + log.error("install", "but already installed versions", installed[targetFolder]) + installed[targetFolder].push(target.version) + } + else { + installed[targetFolder] = [target.version] + } - d.push(resultList(target, where, parent && parent._id)) - cb(er, d) - } - ) + var force = npm.config.get("force") + , nodeVersion = npm.config.get("node-version") + , strict = npm.config.get("engine-strict") + , c = npmInstallChecks + + chain( + [ [c.checkEngine, target, npm.version, nodeVersion, force, strict] + , [c.checkPlatform, target, force] + , [c.checkCycle, target, context.ancestors] + , [c.checkGit, targetFolder] + , [write, target, targetFolder, context] ] + , function (er, d) { + if (er) return cb(er) + + d.push(resultList(target, where, parent && parent._id)) + cb(er, d) + } + ) + }) } function write (target, targetFolder, context, cb_) { @@ -879,15 +973,16 @@ function write (target, targetFolder, context, cb_) { // is the list of installed packages from that last thing. 
if (!er) return cb_(er, data) - if (false === npm.config.get("rollback")) return cb_(er) + if (npm.config.get("rollback") === false) return cb_(er) npm.rollbacks.push(targetFolder) cb_(er, data) } var bundled = [] - chain - ( [ [ cache.unpack, target.name, target.version, targetFolder + log.silly("install write", "writing", target.name, target.version, "to", targetFolder) + chain( + [ [ cache.unpack, target.name, target.version, targetFolder , null, null, user, group ] , [ fs, "writeFile" , path.resolve(targetFolder, "package.json") @@ -922,14 +1017,27 @@ function write (target, targetFolder, context, cb_) { , explicit: false , wrap: wrap } + var actions = + [ [ installManyAndBuild, deps, depsTargetFolder, depsContext ] ] + + // FIXME: This is an accident waiting to happen! + // + // 1. If multiple children at the same level of the tree share a + // peerDependency that's not in the parent's dependencies, because + // the peerDeps don't get added to the family, they will keep + // getting reinstalled (worked around by inflighting installOne). + // 2. The installer can't safely build at the parent level because + // that's already being done by the parent's installAndBuild. This + // runs the risk of the peerDependency never getting built. + // + // The fix: Don't install peerDependencies; require them to be + // included as explicit dependencies / devDependencies, and warn + // or error when they're missing. See #5080 for more arguments in + // favor of killing implicit peerDependency installs with fire. var peerDeps = prepareForInstallMany(data, "peerDependencies", bundled, wrap, family) var pdTargetFolder = path.resolve(targetFolder, "..", "..") var pdContext = context - - var actions = - [ [ installManyAndBuild, deps, depsTargetFolder, depsContext ] ] - if (peerDeps.length > 0) { actions.push( [ installMany, peerDeps, pdTargetFolder, pdContext ] @@ -972,8 +1080,9 @@ function prepareForInstallMany (packageData, depsKey, bundled, wrap, family) { return !semver.satisfies(family[d], packageData[depsKey][d], true) return true }).map(function (d) { - var t = packageData[depsKey][d] - t = d + "@" + t + var v = packageData[depsKey][d] + var t = d + "@" + v + log.silly("prepareForInstallMany", "adding", t, "from", packageData.name, depsKey) return t }) } diff --git a/deps/npm/lib/link.js b/deps/npm/lib/link.js index 8022fc78dfe..8c6a9302905 100644 --- a/deps/npm/lib/link.js +++ b/deps/npm/lib/link.js @@ -10,6 +10,7 @@ var npm = require("./npm.js") , path = require("path") , rm = require("./utils/gently-rm.js") , build = require("./build.js") + , npa = require("npm-package-arg") module.exports = link @@ -49,25 +50,26 @@ function link (args, cb) { function linkInstall (pkgs, cb) { asyncMap(pkgs, function (pkg, cb) { + var t = path.resolve(npm.globalDir, "..") + , pp = path.resolve(npm.globalDir, pkg) + , rp = null + , target = path.resolve(npm.dir, pkg) + function n (er, data) { if (er) return cb(er, data) // install returns [ [folder, pkgId], ... ] // but we definitely installed just one thing. 
var d = data.filter(function (d) { return !d[3] }) + var what = npa(d[0][0]) pp = d[0][1] - pkg = path.basename(pp) + pkg = what.name target = path.resolve(npm.dir, pkg) next() } - var t = path.resolve(npm.globalDir, "..") - , pp = path.resolve(npm.globalDir, pkg) - , rp = null - , target = path.resolve(npm.dir, pkg) - - // if it's a folder or a random not-installed thing, then - // link or install it first - if (pkg.indexOf("/") !== -1 || pkg.indexOf("\\") !== -1) { + // if it's a folder, a random not-installed thing, or not a scoped package, + // then link or install it first + if (pkg[0] !== "@" && (pkg.indexOf("/") !== -1 || pkg.indexOf("\\") !== -1)) { return fs.lstat(path.resolve(pkg), function (er, st) { if (er || !st.isDirectory()) { npm.commands.install(t, pkg, n) diff --git a/deps/npm/lib/ls.js b/deps/npm/lib/ls.js index 781b6443b99..ed329d19e1b 100644 --- a/deps/npm/lib/ls.js +++ b/deps/npm/lib/ls.js @@ -14,8 +14,8 @@ var npm = require("./npm.js") , archy = require("archy") , semver = require("semver") , url = require("url") - , isGitUrl = require("./utils/is-git-url.js") , color = require("ansicolors") + , npa = require("npm-package-arg") ls.usage = "npm ls" @@ -29,9 +29,9 @@ function ls (args, silent, cb) { // npm ls 'foo@~1.3' bar 'baz@<2' if (!args) args = [] else args = args.map(function (a) { - var nv = a.split("@") - , name = nv.shift() - , ver = semver.validRange(nv.join("@")) || "" + var p = npa(a) + , name = p.name + , ver = semver.validRange(p.rawSpec) || "" return [ name, ver ] }) @@ -39,6 +39,7 @@ function ls (args, silent, cb) { var depth = npm.config.get("depth") var opt = { depth: depth, log: log.warn, dev: true } readInstalled(dir, opt, function (er, data) { + pruneNestedExtraneous(data) var bfs = bfsify(data, args) , lite = getLite(bfs) @@ -75,6 +76,18 @@ function ls (args, silent, cb) { }) } +function pruneNestedExtraneous (data, visited) { + visited = visited || [] + visited.push(data) + for (var i in data.dependencies) { + if (data.dependencies[i].extraneous) { + data.dependencies[i].dependencies = {} + } else if (visited.indexOf(data.dependencies[i]) === -1) { + pruneNestedExtraneous(data.dependencies[i], visited) + } + } +} + function alphasort (a, b) { a = a.toLowerCase() b = b.toLowerCase() @@ -265,7 +278,7 @@ function makeArchy_ (data, long, dir, depth, parent, d) { // add giturl to name@version if (data._resolved) { - if (isGitUrl(url.parse(data._resolved))) + if (npa(data._resolved).type === "git") out.label += " (" + data._resolved + ")" } diff --git a/deps/npm/lib/npm.js b/deps/npm/lib/npm.js index 3139b1d1452..e933a1346cc 100644 --- a/deps/npm/lib/npm.js +++ b/deps/npm/lib/npm.js @@ -16,7 +16,7 @@ require('child-process-close') var EventEmitter = require("events").EventEmitter , npm = module.exports = new EventEmitter() - , npmconf = require("npmconf") + , npmconf = require("./config/core.js") , log = require("npmlog") , fs = require("graceful-fs") , path = require("path") @@ -46,16 +46,6 @@ try { var j = JSON.parse(fs.readFileSync( path.join(__dirname, "../package.json"))+"") npm.version = j.version - npm.nodeVersionRequired = j.engines.node - if (!semver.satisfies(pv, j.engines.node)) { - log.warn("unsupported version", ["" - ,"npm requires node version: "+j.engines.node - ,"And you have: "+pv - ,"which is not satisfactory." - ,"" - ,"Bad things will likely happen. You have been warned." 
- ,""].join("\n")) - } } catch (ex) { try { log.info("error reading version", ex) @@ -109,7 +99,6 @@ var commandCache = {} , "update" , "outdated" , "prune" - , "submodule" , "pack" , "dedupe" @@ -153,6 +142,7 @@ var commandCache = {} ] , plumbing = [ "build" , "unbuild" + , "isntall" , "xmas" , "substack" , "visnup" @@ -433,11 +423,7 @@ Object.defineProperty(npm, "cache", }) var tmpFolder -var crypto = require("crypto") -var rand = crypto.randomBytes(6) - .toString("base64") - .replace(/\//g, '_') - .replace(/\+/, '-') +var rand = require("crypto").randomBytes(4).toString("hex") Object.defineProperty(npm, "tmp", { get : function () { if (!tmpFolder) tmpFolder = "npm-" + process.pid + "-" + rand diff --git a/deps/npm/lib/outdated.js b/deps/npm/lib/outdated.js index a71df7fe76a..fdfd7624db2 100644 --- a/deps/npm/lib/outdated.js +++ b/deps/npm/lib/outdated.js @@ -28,12 +28,13 @@ var path = require("path") , asyncMap = require("slide").asyncMap , npm = require("./npm.js") , url = require("url") - , isGitUrl = require("./utils/is-git-url.js") , color = require("ansicolors") , styles = require("ansistyles") , table = require("text-table") , semver = require("semver") , os = require("os") + , mapToRegistry = require("./utils/map-to-registry.js") + , npa = require("npm-package-arg") function outdated (args, silent, cb) { if (typeof cb !== "function") cb = silent, silent = false @@ -43,7 +44,7 @@ function outdated (args, silent, cb) { if (npm.config.get("json")) { console.log(makeJSON(list)) } else if (npm.config.get("parseable")) { - console.log(makeParseable(list)); + console.log(makeParseable(list)) } else { var outList = list.map(makePretty) var outTable = [[ "Package" @@ -99,7 +100,7 @@ function makePretty (p) { function ansiTrim (str) { var r = new RegExp("\x1b(?:\\[(?:\\d+[ABCDEFGJKSTm]|\\d+;\\d+[Hfm]|" + - "\\d+;\\d+;\\d+m|6n|s|u|\\?25[lh])|\\w)", "g"); + "\\d+;\\d+;\\d+m|6n|s|u|\\?25[lh])|\\w)", "g") return str.replace(r, "") } @@ -114,7 +115,7 @@ function makeParseable (list) { , dir = path.resolve(p[0], "node_modules", dep) , has = p[2] , want = p[3] - , latest = p[4]; + , latest = p[4] return [ dir , dep + "@" + want @@ -264,20 +265,25 @@ function shouldUpdate (args, dir, dep, has, req, depth, cb) { return skip() } - if (isGitUrl(url.parse(req))) + if (npa(req).type === "git") return doIt("git", "git") // search for the latest package - var uri = url.resolve(npm.config.get("registry"), dep) - npm.registry.get(uri, null, function (er, d) { + mapToRegistry(dep, npm.config, function (er, uri) { + if (er) return cb(er) + + npm.registry.get(uri, null, updateDeps) + }) + + function updateDeps (er, d) { if (er) return cb() - if (!d || !d['dist-tags'] || !d.versions) return cb() - var l = d.versions[d['dist-tags'].latest] + if (!d || !d["dist-tags"] || !d.versions) return cb() + var l = d.versions[d["dist-tags"].latest] if (!l) return cb() var r = req - if (d['dist-tags'][req]) - r = d['dist-tags'][req] + if (d["dist-tags"][req]) + r = d["dist-tags"][req] if (semver.validRange(r, true)) { // some kind of semver range. @@ -290,13 +296,13 @@ function shouldUpdate (args, dir, dep, has, req, depth, cb) { } // We didn't find the version in the doc. See if cache can find it. - cache.add(dep, req, false, onCacheAdd) + cache.add(dep, req, null, false, onCacheAdd) function onCacheAdd(er, d) { // if this fails, then it means we can't update this thing. // it's probably a thing that isn't published. 
if (er) { - if (er.code && er.code === 'ETARGET') { + if (er.code && er.code === "ETARGET") { // no viable version found return skip(er) } @@ -315,6 +321,5 @@ else skip() } - - }) + } } diff --git a/deps/npm/lib/owner.js b/deps/npm/lib/owner.js index 34dbbc24722..2fdee7adb69 100644 --- a/deps/npm/lib/owner.js +++ b/deps/npm/lib/owner.js @@ -5,6 +5,12 @@ owner.usage = "npm owner add <username> <pkg>" + "\nnpm owner rm <username> <pkg>" + "\nnpm owner ls <pkg>" +var npm = require("./npm.js") + , registry = npm.registry + , log = require("npmlog") + , readJson = require("read-package-json") + , mapToRegistry = require("./utils/map-to-registry.js") + owner.completion = function (opts, cb) { var argv = opts.conf.argv.remain if (argv.length > 4) return cb() @@ -14,65 +20,78 @@ owner.completion = function (opts, cb) { else subs.push("ls", "list") return cb(null, subs) } - var un = encodeURIComponent(npm.config.get("username")) - var theUser, uri - switch (argv[2]) { - case "ls": - if (argv.length > 3) return cb() - uri = url.resolve(npm.config.get("registry"), "-/short") - return registry.get(uri, null, cb) - - case "rm": - if (argv.length > 3) { - theUser = encodeURIComponent(argv[3]) - uri = url.resolve(npm.config.get("registry"), "-/by-user/"+theUser+"|"+un) - console.error(uri) - return registry.get(uri, null, function (er, d) { + + npm.commands.whoami([], true, function (er, username) { + if (er) return cb() + + var un = encodeURIComponent(username) + var byUser, theUser + switch (argv[2]) { + case "ls": + if (argv.length > 3) return cb() + return mapToRegistry("-/short", npm.config, function (er, uri) { if (er) return cb(er) - // return the intersection - return cb(null, d[theUser].filter(function (p) { - // kludge for server adminery. - return un === "isaacs" || d[un].indexOf(p) === -1 - })) + + registry.get(uri, null, cb) }) - } - // else fallthrough - case "add": - if (argv.length > 3) { - theUser = encodeURIComponent(argv[3]) - uri = url.resolve(npm.config.get("registry"), "-/by-user/"+theUser+"|"+un) - console.error(uri) - return registry.get(uri, null, function (er, d) { - console.error(uri, er || d) - // return mine that they're not already on. + + case "rm": + if (argv.length > 3) { + theUser = encodeURIComponent(argv[3]) + byUser = "-/by-user/" + theUser + "|" + un + return mapToRegistry(byUser, npm.config, function (er, uri) { + if (er) return cb(er) + + console.error(uri) + registry.get(uri, null, function (er, d) { + if (er) return cb(er) + // return the intersection + return cb(null, d[theUser].filter(function (p) { + // kludge for server adminery. + return un === "isaacs" || d[un].indexOf(p) === -1 + })) + }) + }) + } + // else fallthrough + case "add": + if (argv.length > 3) { + theUser = encodeURIComponent(argv[3]) + byUser = "-/by-user/" + theUser + "|" + un + return mapToRegistry(byUser, npm.config, function (er, uri) { + if (er) return cb(er) + + console.error(uri) + registry.get(uri, null, function (er, d) { + console.error(uri, er || d) + // return mine that they're not already on. + if (er) return cb(er) + var mine = d[un] || [] + , theirs = d[theUser] || [] + return cb(null, mine.filter(function (p) { + return theirs.indexOf(p) === -1 + })) + }) + }) + } + // just list all users who aren't me.
+ return mapToRegistry("-/users", npm.config, function (er, uri) { if (er) return cb(er) - var mine = d[un] || [] - , theirs = d[theUser] || [] - return cb(null, mine.filter(function (p) { - return theirs.indexOf(p) === -1 - })) + + registry.get(uri, null, function (er, list) { + if (er) return cb() + return cb(null, Object.keys(list).filter(function (n) { + return n !== un + })) + }) }) - } - // just list all users who aren't me. - uri = url.resolve(npm.config.get("registry"), "-/users") - return registry.get(uri, null, function (er, list) { - if (er) return cb() - return cb(null, Object.keys(list).filter(function (n) { - return n !== un - })) - }) - default: - return cb() - } + default: + return cb() + } + }) } -var npm = require("./npm.js") - , registry = npm.registry - , log = require("npmlog") - , readJson = require("read-package-json") - , url = require("url") - function owner (args, cb) { var action = args.shift() switch (action) { @@ -90,18 +109,23 @@ function ls (pkg, cb) { ls(pkg, cb) }) - var uri = url.resolve(npm.config.get("registry"), pkg) - registry.get(uri, null, function (er, data) { - var msg = "" - if (er) { - log.error("owner ls", "Couldn't get owner data", pkg) - return cb(er) - } - var owners = data.maintainers - if (!owners || !owners.length) msg = "admin party!" - else msg = owners.map(function (o) { return o.name +" <"+o.email+">" }).join("\n") - console.log(msg) - cb(er, owners) + mapToRegistry(pkg, npm.config, function (er, uri) { + if (er) return cb(er) + + registry.get(uri, null, function (er, data) { + var msg = "" + if (er) { + log.error("owner ls", "Couldn't get owner data", pkg) + return cb(er) + } + var owners = data.maintainers + if (!owners || !owners.length) msg = "admin party!" + else msg = owners.map(function (o) { + return o.name + " <" + o.email + ">" + }).join("\n") + console.log(msg) + cb(er, owners) + }) }) } @@ -120,7 +144,7 @@ function add (user, pkg, cb) { var o = owners[i] if (o.name === u.name) { log.info( "owner add" - , "Already a package owner: "+o.name+" <"+o.email+">") + , "Already a package owner: " + o.name + " <" + o.email + ">") return false } } @@ -145,7 +169,7 @@ function rm (user, pkg, cb) { return !match }) if (!found) { - log.info("owner rm", "Not a package owner: "+user) + log.info("owner rm", "Not a package owner: " + user) return false } if (!m.length) return new Error( @@ -156,15 +180,19 @@ function rm (user, pkg, cb) { function mutate (pkg, user, mutation, cb) { if (user) { - var uri = url.resolve(npm.config.get("registry"), "-/user/org.couchdb.user:"+user) - registry.get(uri, null, mutate_) + var byUser = "-/user/org.couchdb.user:" + user + mapToRegistry(byUser, npm.config, function (er, uri) { + if (er) return cb(er) + + registry.get(uri, null, mutate_) + }) } else { mutate_(null, null) } function mutate_ (er, u) { if (!er && user && (!u || u.error)) er = new Error( - "Couldn't get user data for "+user+": "+JSON.stringify(u)) + "Couldn't get user data for " + user + ": " + JSON.stringify(u)) if (er) { log.error("owner mutate", "Error getting user data for %s", user) @@ -172,27 +200,34 @@ function mutate (pkg, user, mutation, cb) { } if (u) u = { "name" : u.name, "email" : u.email } - var uri = url.resolve(npm.config.get("registry"), pkg) - registry.get(uri, null, function (er, data) { - if (er) { - log.error("owner mutate", "Error getting package data for %s", pkg) - return cb(er) - } - var m = mutation(u, data.maintainers) - if (!m) return cb() // handled - if (m instanceof Error) return cb(m) // error - data = { _id : 
data._id - , _rev : data._rev - , maintainers : m - } - var uri = url.resolve(npm.config.get("registry"), pkg+"/-rev/"+data._rev) - registry.request("PUT", uri, { body : data }, function (er, data) { - if (!er && data.error) er = new Error( - "Failed to update package metadata: "+JSON.stringify(data)) + mapToRegistry(pkg, npm.config, function (er, uri) { + if (er) return cb(er) + + registry.get(uri, null, function (er, data) { if (er) { - log.error("owner mutate", "Failed to update package metadata") + log.error("owner mutate", "Error getting package data for %s", pkg) + return cb(er) } - cb(er, data) + var m = mutation(u, data.maintainers) + if (!m) return cb() // handled + if (m instanceof Error) return cb(m) // error + data = { _id : data._id + , _rev : data._rev + , maintainers : m + } + var dataPath = pkg + "/-rev/" + data._rev + mapToRegistry(dataPath, npm.config, function (er, uri) { + if (er) return cb(er) + + registry.request("PUT", uri, { body : data }, function (er, data) { + if (!er && data.error) er = new Error( + "Failed to update package metadata: " + JSON.stringify(data)) + if (er) { + log.error("owner mutate", "Failed to update package metadata") + } + cb(er, data) + }) + }) }) }) } @@ -207,5 +242,5 @@ function readLocalPkg (cb) { } function unknown (action, cb) { - cb("Usage: \n"+owner.usage) + cb("Usage: \n" + owner.usage) } diff --git a/deps/npm/lib/pack.js b/deps/npm/lib/pack.js index ea94dd15422..a5ce90094f6 100644 --- a/deps/npm/lib/pack.js +++ b/deps/npm/lib/pack.js @@ -11,6 +11,8 @@ var npm = require("./npm.js") , chain = require("slide").chain , path = require("path") , cwd = process.cwd() + , writeStream = require('fs-write-stream-atomic') + , cachedPackageRoot = require("./cache/cached-package-root.js") pack.usage = "npm pack <pkg>" @@ -40,15 +42,17 @@ function printFiles (files, cb) { // add to cache, then cp to the cwd function pack_ (pkg, cb) { - cache.add(pkg, null, false, function (er, data) { + cache.add(pkg, null, null, false, function (er, data) { if (er) return cb(er) - var fname = path.resolve(data._id.replace(/@/g, "-") + ".tgz") - , cached = path.resolve( npm.cache - , data.name - , data.version - , "package.tgz" ) + + // scoped packages get special treatment + var name = data.name + if (name[0] === "@") name = name.substr(1).replace(/\//g, "-") + var fname = name + "-" + data.version + ".tgz" + + var cached = path.join(cachedPackageRoot(data), "package.tgz") , from = fs.createReadStream(cached) - , to = fs.createWriteStream(fname) + , to = writeStream(fname) , errState = null from.on("error", cb_) diff --git a/deps/npm/lib/publish.js b/deps/npm/lib/publish.js index ccad3ea8270..2a0fcff5a5c 100644 --- a/deps/npm/lib/publish.js +++ b/deps/npm/lib/publish.js @@ -1,14 +1,17 @@ module.exports = publish -var npm = require("./npm.js") +var url = require("url") + , npm = require("./npm.js") , log = require("npmlog") , path = require("path") , readJson = require("read-package-json") , lifecycle = require("./utils/lifecycle.js") , chain = require("slide").chain - , Conf = require("npmconf").Conf + , Conf = require("./config/core.js").Conf , RegClient = require("npm-registry-client") + , mapToRegistry = require("./utils/map-to-registry.js") + , cachedPackageRoot = require("./cache/cached-package-root.js") publish.usage = "npm publish <tarball>" + "\nnpm publish <folder>" @@ -22,7 +25,10 @@ publish.completion = function (opts, cb) { } function publish (args, isRetry, cb) { - if (typeof cb !== "function") cb = isRetry, isRetry = false + if (typeof cb !== "function") { + cb = isRetry
+ isRetry = false + } if (args.length === 0) args = ["."] if (args.length !== 1) return cb(publish.usage) @@ -30,14 +36,18 @@ function publish (args, isRetry, cb) { var arg = args[0] // if it's a local folder, then run the prepublish there, first. readJson(path.resolve(arg, "package.json"), function (er, data) { - er = needVersion(er, data) if (er && er.code !== "ENOENT" && er.code !== "ENOTDIR") return cb(er) - // error is ok. could be publishing a url or tarball - // however, that means that we will not have automatically run - // the prepublish script, since that gets run when adding a folder - // to the cache. + + if (data) { + if (!data.name) return cb(new Error("No name provided")) + if (!data.version) return cb(new Error("No version provided")) + } + + // Error is OK. Could be publishing a URL or tarball, however, that means + // that we will not have automatically run the prepublish script, since + // that gets run when adding a folder to the cache. if (er) return cacheAddPublish(arg, false, isRetry, cb) - cacheAddPublish(arg, true, isRetry, cb) + else cacheAddPublish(arg, true, isRetry, cb) }) } @@ -47,15 +57,12 @@ function publish (args, isRetry, cb) { // That means that we can run publish/postpublish in the dir, rather than // in the cache dir. function cacheAddPublish (dir, didPre, isRetry, cb) { - npm.commands.cache.add(dir, null, false, function (er, data) { + npm.commands.cache.add(dir, null, null, false, function (er, data) { if (er) return cb(er) log.silly("publish", data) - var cachedir = path.resolve( npm.cache - , data.name - , data.version - , "package" ) - chain - ( [ !didPre && [lifecycle, data, "prepublish", cachedir] + var cachedir = path.resolve(cachedPackageRoot(data), "package") + chain([ !didPre && + [lifecycle, data, "prepublish", cachedir] , [publish_, dir, data, isRetry, cachedir] , [lifecycle, data, "publish", didPre ? dir : cachedir] , [lifecycle, data, "postpublish", didPre ? dir : cachedir] ] @@ -66,53 +73,61 @@ function cacheAddPublish (dir, didPre, isRetry, cb) { function publish_ (arg, data, isRetry, cachedir, cb) { if (!data) return cb(new Error("no package.json file found")) - // check for publishConfig hash var registry = npm.registry - var registryURI = npm.config.get("registry") + var config = npm.config + + // check for publishConfig hash if (data.publishConfig) { - var pubConf = new Conf(npm.config) - pubConf.save = npm.config.save.bind(npm.config) + config = new Conf(npm.config) + config.save = npm.config.save.bind(npm.config) // don't modify the actual publishConfig object, in case we have // to set a login token or some other data. 
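To make the publishConfig handling above concrete: when package.json carries a publishConfig hash, publish_() now layers a throwaway Conf on top of the live config, and the unshift below pushes those keys in as the highest-priority layer, so they win for this one publish without mutating npm.config. A hypothetical fragment that exercises the path:

    // package.json (hypothetical):
    // {
    //   "name": "@myco/thing",
    //   "version": "1.0.0",
    //   "publishConfig": { "registry": "https://registry.myco.example/" }
    // }

With that in place, the mapToRegistry() call further down resolves against the private registry, and config.getCredentialsByURI() picks up whatever //registry.myco.example/:_authToken the user has saved.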
- pubConf.unshift(Object.keys(data.publishConfig).reduce(function (s, k) { + config.unshift(Object.keys(data.publishConfig).reduce(function (s, k) { s[k] = data.publishConfig[k] return s }, {})) - registry = new RegClient(pubConf) - registryURI = pubConf.get("registry") + registry = new RegClient(config) } - data._npmVersion = npm.version - data._npmUser = { name: npm.config.get("username") - , email: npm.config.get("email") } + data._npmVersion = npm.version + data._nodeVersion = process.versions.node delete data.modules - if (data.private) return cb(new Error - ("This package has been marked as private\n" - +"Remove the 'private' field from the package.json to publish it.")) - - var tarball = cachedir + ".tgz" - registry.publish(registryURI, data, tarball, function (er) { - if (er && er.code === "EPUBLISHCONFLICT" - && npm.config.get("force") && !isRetry) { - log.warn("publish", "Forced publish over "+data._id) - return npm.commands.unpublish([data._id], function (er) { - // ignore errors. Use the force. Reach out with your feelings. - // but if it fails again, then report the first error. - publish([arg], er || true, cb) - }) - } - // report the unpublish error if this was a retry and unpublish failed - if (er && isRetry && isRetry !== true) return cb(isRetry) + if (data.private) return cb( + new Error( + "This package has been marked as private\n" + + "Remove the 'private' field from the package.json to publish it." + ) + ) + + mapToRegistry(data.name, config, function (er, registryURI) { if (er) return cb(er) - console.log("+ " + data._id) - cb() - }) -} -function needVersion(er, data) { - return er ? er - : (data && !data.version) ? new Error("No version provided") - : null + var tarball = cachedir + ".tgz" + + // we just want the base registry URL in this case + var registryBase = url.resolve(registryURI, ".") + log.verbose("publish", "registryBase", registryBase) + + var c = config.getCredentialsByURI(registryBase) + data._npmUser = {name: c.username, email: c.email} + + registry.publish(registryBase, data, tarball, function (er) { + if (er && er.code === "EPUBLISHCONFLICT" + && npm.config.get("force") && !isRetry) { + log.warn("publish", "Forced publish over " + data._id) + return npm.commands.unpublish([data._id], function (er) { + // ignore errors. Use the force. Reach out with your feelings. + // but if it fails again, then report the first error. 
+ publish([arg], er || true, cb)
+ })
+ }
+ // report the unpublish error if this was a retry and unpublish failed
+ if (er && isRetry && isRetry !== true) return cb(isRetry)
+ if (er) return cb(er)
+ console.log("+ " + data._id)
+ cb()
+ })
+ })
}
diff --git a/deps/npm/lib/rebuild.js b/deps/npm/lib/rebuild.js
index e296451b705..ab372c6ec07 100644
--- a/deps/npm/lib/rebuild.js
+++ b/deps/npm/lib/rebuild.js
@@ -5,6 +5,7 @@ var readInstalled = require("read-installed")
 , semver = require("semver")
 , log = require("npmlog")
 , npm = require("./npm.js")
+ , npa = require("npm-package-arg")

rebuild.usage = "npm rebuild [<name>[@<version>] [name[@<version>] ...]]"

@@ -46,9 +47,9 @@ function filter (data, args, set, seen) {
 else if (data.name && data._id) {
 for (var i = 0, l = args.length; i < l; i ++) {
 var arg = args[i]
- , nv = arg.split("@")
- , n = nv.shift()
- , v = nv.join("@")
+ , nv = npa(arg)
+ , n = nv.name
+ , v = nv.rawSpec
 if (n !== data.name) continue
 if (!semver.satisfies(data.version, v, true)) continue
 pass = true
diff --git a/deps/npm/lib/repo.js b/deps/npm/lib/repo.js
index d209c3ca836..c6db8e37b01 100644
--- a/deps/npm/lib/repo.js
+++ b/deps/npm/lib/repo.js
@@ -5,9 +5,12 @@
repo.usage = "npm repo <pkgname>"

repo.completion = function (opts, cb) {
 if (opts.conf.argv.remain.length > 2) return cb()
- var uri = url_.resolve(npm.config.get("registry"), "/-/short")
- registry.get(uri, { timeout : 60000 }, function (er, list) {
- return cb(null, list || [])
+ mapToRegistry("/-/short", npm.config, function (er, uri) {
+ if (er) return cb(er)
+
+ registry.get(uri, { timeout : 60000 }, function (er, list) {
+ return cb(null, list || [])
+ })
 })
}

@@ -19,10 +22,12 @@ var npm = require("./npm.js")
 , path = require("path")
 , readJson = require("read-package-json")
 , fs = require("fs")
- , url_ = require('url')
+ , url_ = require("url")
+ , mapToRegistry = require("./utils/map-to-registry.js")
+ , npa = require("npm-package-arg")

function repo (args, cb) {
- var n = args.length && args[0].split("@").shift() || '.'
+ var n = args.length && npa(args[0]).name || "."
fs.stat(n, function (er, s) {
 if (er && er.code === "ENOENT") return callRegistry(n, cb)
 else if (er) return cb(er)
@@ -35,8 +40,8 @@
 }

function getUrlAndOpen (d, cb) {
- var r = d.repository;
- if (!r) return cb(new Error('no repository'));
+ var r = d.repository
+ if (!r) return cb(new Error('no repository'))
 // XXX remove this when npm@v1.3.10 from node 0.10 is deprecated
 // from https://github.com/npm/npm-www/issues/418
 if (githubUserRepo(r.url))
@@ -52,10 +57,13 @@
 }

function callRegistry (n, cb) {
- var uri = url_.resolve(npm.config.get("registry"), n + "/latest")
- registry.get(uri, { timeout : 3600 }, function (er, d) {
+ mapToRegistry(n, npm.config, function (er, uri) {
 if (er) return cb(er)
- getUrlAndOpen(d, cb)
+
+ registry.get(uri + "/latest", { timeout : 3600 }, function (er, d) {
+ if (er) return cb(er)
+ getUrlAndOpen(d, cb)
+ })
 })
}
diff --git a/deps/npm/lib/run-script.js b/deps/npm/lib/run-script.js
index 25e98f01d65..4495b93c48e 100644
--- a/deps/npm/lib/run-script.js
+++ b/deps/npm/lib/run-script.js
@@ -8,7 +8,7 @@ var lifecycle = require("./utils/lifecycle.js")
 , log = require("npmlog")
 , chain = require("slide").chain

-runScript.usage = "npm run-script [<pkg>] <command>"
+runScript.usage = "npm run-script <command> [-- <args>]"

runScript.completion = function (opts, cb) {

@@ -21,7 +21,7 @@ runScript.completion = function (opts, cb) {
 if (argv.length === 3) {
 // either specified a script locally, in which case, done,
 // or a package, in which case, complete against its scripts
- var json = path.join(npm.prefix, "package.json")
+ var json = path.join(npm.localPrefix, "package.json")
 return readJson(json, function (er, d) {
 if (er && er.code !== "ENOENT" && er.code !== "ENOTDIR") return cb(er)
 if (er) d = {}
@@ -30,7 +30,7 @@ runScript.completion = function (opts, cb) {
 if (scripts.indexOf(argv[2]) !== -1) return cb()
 // ok, try to find out which package it was, then
 var pref = npm.config.get("global") ? npm.config.get("prefix")
- : npm.prefix
+ : npm.localPrefix
 var pkgDir = path.resolve( pref, "node_modules"
 , argv[2], "package.json" )
 readJson(pkgDir, function (er, d) {
@@ -53,8 +53,11 @@ runScript.completion = function (opts, cb) {
 next()
 })

- if (npm.config.get("global")) scripts = [], next()
- else readJson(path.join(npm.prefix, "package.json"), function (er, d) {
+ if (npm.config.get("global")) {
+ scripts = []
+ next()
+ }
+ else readJson(path.join(npm.localPrefix, "package.json"), function (er, d) {
 if (er && er.code !== "ENOENT" && er.code !== "ENOTDIR") return cb(er)
 d = d || {}
 scripts = Object.keys(d.scripts || {})
@@ -63,26 +66,27 @@ runScript.completion = function (opts, cb) {

 function next () {
 if (!installed || !scripts) return
- return cb(null, scripts.concat(installed))
+
+ cb(null, scripts.concat(installed))
 }
}

function runScript (args, cb) {
 if (!args.length) return list(cb)
- var pkgdir = args.length === 1 ?
process.cwd() - : path.resolve(npm.dir, args[0]) - , cmd = args.pop() + + var pkgdir = npm.localPrefix + , cmd = args.shift() readJson(path.resolve(pkgdir, "package.json"), function (er, d) { if (er) return cb(er) - run(d, pkgdir, cmd, cb) + run(d, pkgdir, cmd, args, cb) }) } function list(cb) { - var json = path.join(npm.prefix, 'package.json') + var json = path.join(npm.localPrefix, "package.json") return readJson(json, function(er, d) { - if (er && er.code !== 'ENOENT' && er.code !== 'ENOTDIR') return cb(er) + if (er && er.code !== "ENOENT" && er.code !== "ENOTDIR") return cb(er) if (er) d = {} var scripts = Object.keys(d.scripts || {}) @@ -109,22 +113,41 @@ function list(cb) { }) } -function run (pkg, wd, cmd, cb) { - var cmds = [] +function run (pkg, wd, cmd, args, cb) { if (!pkg.scripts) pkg.scripts = {} + + var cmds if (cmd === "restart") { - cmds = ["prestop","stop","poststop" - ,"restart" - ,"prestart","start","poststart"] + cmds = [ + "prestop", "stop", "poststop", + "restart", + "prestart", "start", "poststart" + ] } else { cmds = [cmd] } + if (!cmd.match(/^(pre|post)/)) { cmds = ["pre"+cmd].concat(cmds).concat("post"+cmd) } + log.verbose("run-script", cmds) chain(cmds.map(function (c) { + // pass cli arguments after -- to script. + if (pkg.scripts[c] && c === cmd) pkg.scripts[c] = pkg.scripts[c] + joinArgs(args) + // when running scripts explicitly, assume that they're trusted. return [lifecycle, pkg, c, wd, true] }), cb) } + +// join arguments after '--' and pass them to script, +// handle special characters such as ', ", ' '. +function joinArgs (args) { + var joinedArgs = "" + args.forEach(function(arg) { + if (arg.match(/[ '"]/)) arg = '"' + arg.replace(/"/g, '\\"') + '"' + joinedArgs += " " + arg + }) + return joinedArgs +} diff --git a/deps/npm/lib/search.js b/deps/npm/lib/search.js index e7892350ca6..5dd060f829c 100644 --- a/deps/npm/lib/search.js +++ b/deps/npm/lib/search.js @@ -1,10 +1,10 @@ module.exports = exports = search -var url = require("url") - , npm = require("./npm.js") +var npm = require("./npm.js") , registry = npm.registry , columnify = require('columnify') + , mapToRegistry = require("./utils/map-to-registry.js") search.usage = "npm search [some search terms ...]" @@ -63,10 +63,13 @@ function getFilteredData (staleness, args, notArgs, cb) { follow : true, staleOk : true } - var uri = url.resolve(npm.config.get("registry"), "-/all") - registry.get(uri, opts, function (er, data) { + mapToRegistry("-/all", npm.config, function (er, uri) { if (er) return cb(er) - return cb(null, filter(data, args, notArgs)) + + registry.get(uri, opts, function (er, data) { + if (er) return cb(er) + return cb(null, filter(data, args, notArgs)) + }) }) } @@ -215,7 +218,7 @@ function addColorMarker (str, arg, i) { if (arg.charAt(0) === "/") { //arg = arg.replace(/\/$/, "") - return str.replace( new RegExp(arg.substr(1, arg.length - 1), "gi") + return str.replace( new RegExp(arg.substr(1, arg.length - 2), "gi") , function (bit) { return markStart + bit + markEnd } ) } diff --git a/deps/npm/lib/shrinkwrap.js b/deps/npm/lib/shrinkwrap.js index 5f8261d095f..a5783837c67 100644 --- a/deps/npm/lib/shrinkwrap.js +++ b/deps/npm/lib/shrinkwrap.js @@ -6,6 +6,7 @@ module.exports = exports = shrinkwrap var npm = require("./npm.js") , log = require("npmlog") , fs = require("fs") + , writeFileAtomic = require("write-file-atomic") , path = require("path") , readJson = require("read-package-json") , sortedObject = require("sorted-object") @@ -70,7 +71,7 @@ function save (pkginfo, silent, cb) { 
var file = path.resolve(npm.prefix, "npm-shrinkwrap.json") - fs.writeFile(file, swdata, function (er) { + writeFileAtomic(file, swdata, function (er) { if (er) return cb(er) if (silent) return cb(null, pkginfo) console.log("wrote npm-shrinkwrap.json") diff --git a/deps/npm/lib/star.js b/deps/npm/lib/star.js index 9c0b4ea9ed9..123c4ebbb44 100644 --- a/deps/npm/lib/star.js +++ b/deps/npm/lib/star.js @@ -1,19 +1,22 @@ module.exports = star -var url = require("url") - , npm = require("./npm.js") +var npm = require("./npm.js") , registry = npm.registry , log = require("npmlog") , asyncMap = require("slide").asyncMap + , mapToRegistry = require("./utils/map-to-registry.js") star.usage = "npm star [pkg, pkg, ...]\n" + "npm unstar [pkg, pkg, ...]" star.completion = function (opts, cb) { - var uri = url.resolve(npm.config.get("registry"), "-/short") - registry.get(uri, { timeout : 60000 }, function (er, list) { - return cb(null, list || []) + mapToRegistry("-/short", npm.config, function (er, uri) { + if (er) return cb(er) + + registry.get(uri, { timeout : 60000 }, function (er, list) { + return cb(null, list || []) + }) }) } @@ -24,13 +27,16 @@ function star (args, cb) { , using = !(npm.command.match(/^un/)) if (!using) s = u asyncMap(args, function (pkg, cb) { - var uri = url.resolve(npm.config.get("registry"), pkg) - registry.star(uri, using, function (er, data, raw, req) { - if (!er) { - console.log(s + " "+pkg) - log.verbose("star", data) - } - cb(er, data, raw, req) + mapToRegistry(pkg, npm.config, function (er, uri) { + if (er) return cb(er) + + registry.star(uri, using, function (er, data, raw, req) { + if (!er) { + console.log(s + " "+pkg) + log.verbose("star", data) + } + cb(er, data, raw, req) + }) }) }, cb) } diff --git a/deps/npm/lib/stars.js b/deps/npm/lib/stars.js index f0d2ef73aeb..dee5c152afa 100644 --- a/deps/npm/lib/stars.js +++ b/deps/npm/lib/stars.js @@ -2,23 +2,26 @@ module.exports = stars stars.usage = "npm stars [username]" -var url = require("url") - , npm = require("./npm.js") +var npm = require("./npm.js") , registry = npm.registry , log = require("npmlog") + , mapToRegistry = require("./utils/map-to-registry.js") function stars (args, cb) { - var name = args.length === 1 ? args[0] : npm.config.get("username") - , uri = url.resolve(npm.config.get("registry"), name) - registry.stars(uri, showstars) + npm.commands.whoami([], true, function (er, username) { + var name = args.length === 1 ? args[0] : username + mapToRegistry("", npm.config, function (er, uri) { + if (er) return cb(er) + + registry.stars(uri, name, showstars) + }) + }) function showstars (er, data) { - if (er) { - return cb(er) - } + if (er) return cb(er) if (data.rows.length === 0) { - log.warn('stars', 'user has not starred any packages.') + log.warn("stars", "user has not starred any packages.") } else { data.rows.forEach(function(a) { console.log(a.value) diff --git a/deps/npm/lib/submodule.js b/deps/npm/lib/submodule.js deleted file mode 100644 index 2231ced9cfb..00000000000 --- a/deps/npm/lib/submodule.js +++ /dev/null @@ -1,91 +0,0 @@ -// npm submodule -// Check the package contents for a git repository url. -// If there is one, then create a git submodule in the node_modules folder. 
-
-module.exports = submodule
-
-var npm = require("./npm.js")
- , cache = require("./cache.js")
- , git = require("./utils/git.js")
- , asyncMap = require("slide").asyncMap
- , chain = require("slide").chain
- , which = require("which")
-
-submodule.usage = "npm submodule <pkg>"
-
-submodule.completion = require("./docs.js").completion
-
-function submodule (args, cb) {
- if (npm.config.get("global")) {
- return cb(new Error("Cannot use submodule command in global mode."))
- }
-
- if (args.length === 0) return cb(submodule.usage)
-
- asyncMap(args, function (arg, cb) {
- cache.add(arg, null, false, cb)
- }, function (er, pkgs) {
- if (er) return cb(er)
- chain(pkgs.map(function (pkg) { return function (cb) {
- submodule_(pkg, cb)
- }}), cb)
- })
-
-}
-
-function submodule_ (pkg, cb) {
- if (!pkg.repository
- || pkg.repository.type !== "git"
- || !pkg.repository.url) {
- return cb(new Error(pkg._id + ": No git repository listed"))
- }
-
- // prefer https:// github urls
- pkg.repository.url = pkg.repository.url
- .replace(/^(git:\/\/)?(git@)?github.com[:\/]/, "https://github.com/")
-
- // first get the list of submodules, and update if it's already there.
- getSubmodules(function (er, modules) {
- if (er) return cb(er)
- // if there's already a submodule, then just update it.
- if (modules.indexOf(pkg.name) !== -1) {
- return updateSubmodule(pkg.name, cb)
- }
- addSubmodule(pkg.name, pkg.repository.url, cb)
- })
-}
-
-function updateSubmodule (name, cb) {
- var args = [ "submodule", "update", "--init", "node_modules/", name ]
-
- git.whichAndExec(args, cb)
-}
-
-function addSubmodule (name, url, cb) {
- var args = [ "submodule", "add", url, "node_modules/", name ]
-
- git.whichAndExec(args, cb)
-}
-
-
-var getSubmodules = function (cb) {
- var args = [ "submodule", "status" ]
-
-
- git.whichAndExec(args, function _(er, stdout) {
- if (er) return cb(er)
- var res = stdout.trim().split(/\n/).map(function (line) {
- return line.trim().split(/\s+/)[1]
- }).filter(function (line) {
- // only care about submodules in the node_modules folder.
- return line && line.match(/^node_modules\//)
- }).map(function (line) {
- return line.replace(/^node_modules\//g, "")
- })
-
- // memoize.
- getSubmodules = function (cb) { return cb(null, res) }
-
- cb(null, res)
- })
-}
diff --git a/deps/npm/lib/tag.js b/deps/npm/lib/tag.js
index 1d04ad1f7e0..47e9a8c0ac7 100644
--- a/deps/npm/lib/tag.js
+++ b/deps/npm/lib/tag.js
@@ -5,16 +5,30 @@
tag.usage = "npm tag <project>@<version> [<tag>]"

tag.completion = require("./unpublish.js").completion

-var url = require("url")
- , npm = require("./npm.js")
+var npm = require("./npm.js")
 , registry = npm.registry
+ , mapToRegistry = require("./utils/map-to-registry.js")
+ , npa = require("npm-package-arg")
+ , semver = require("semver")

function tag (args, cb) {
- var thing = (args.shift() || "").split("@")
- , project = thing.shift()
- , version = thing.join("@")
+ var thing = npa(args.shift() || "")
+ , project = thing.name
+ , version = thing.rawSpec
 , t = args.shift() || npm.config.get("tag")
+
+ t = t.trim()
+
 if (!project || !version || !t) return cb("Usage:\n"+tag.usage)
- var uri = url.resolve(npm.config.get("registry"), project)
- registry.tag(uri, version, t, cb)
+
+ if (semver.validRange(t)) {
+ var er = new Error("Tag name must not be a valid SemVer range: " + t)
+ return cb(er)
+ }
+
+ mapToRegistry(project, npm.config, function (er, uri) {
+ if (er) return cb(er)
+
+ registry.tag(uri, version, t, cb)
+ })
}
diff --git a/deps/npm/lib/unbuild.js b/deps/npm/lib/unbuild.js
index b594f28a9ba..8bd6e8507f3 100644
--- a/deps/npm/lib/unbuild.js
+++ b/deps/npm/lib/unbuild.js
@@ -2,7 +2,6 @@ module.exports = unbuild
unbuild.usage = "npm unbuild <folder>\n(this is plumbing)"

var readJson = require("read-package-json")
- , rm = require("./utils/gently-rm.js")
 , gentlyRm = require("./utils/gently-rm.js")
 , npm = require("./npm.js")
 , path = require("path")
@@ -15,7 +14,7 @@ var readJson = require("read-package-json")
// args is a list of folders.
// remove any bins/etc, and then delete the folder.
function unbuild (args, silent, cb) {
- if (typeof silent === 'function') cb = silent, silent = false
+ if (typeof silent === "function") cb = silent, silent = false
 asyncMap(args, unbuild_(silent), cb)
}

@@ -25,10 +24,10 @@ function unbuild_ (silent) { return function (folder, cb_) {
 }
 folder = path.resolve(folder)
 delete build._didBuild[folder]
- log.verbose(folder.substr(npm.prefix.length + 1), "unbuild")
+ log.verbose("unbuild", folder.substr(npm.prefix.length + 1))
 readJson(path.resolve(folder, "package.json"), function (er, pkg) {
 // if no json, then just trash it, but no scripts or whatever.
- if (er) return rm(folder, cb)
+ if (er) return gentlyRm(folder, false, cb)
 readJson.cache.del(folder)
 chain
 ( [ [lifecycle, pkg, "preuninstall", folder, false, true]
@@ -39,7 +38,7 @@ function unbuild_ (silent) { return function (folder, cb_) {
 }
 , [rmStuff, pkg, folder]
 , [lifecycle, pkg, "postuninstall", folder, false, true]
- , [rm, folder] ]
+ , [gentlyRm, folder, undefined] ]
 , cb )
 })
}}
@@ -54,7 +53,8 @@ function rmStuff (pkg, folder, cb) {

 readJson.cache.del(path.resolve(folder, "package.json"))
- log.verbose([top, gnm, parent], "unbuild " + pkg._id)
+ log.verbose("unbuild rmStuff", pkg._id, "from", gnm)
+ if (!top) log.verbose("unbuild rmStuff", "in", parent)
 asyncMap([rmBins, rmMans], function (fn, cb) {
 fn(pkg, folder, parent, top, cb)
 }, cb)
@@ -66,8 +66,8 @@ function rmBins (pkg, folder, parent, top, cb) {
 log.verbose([binRoot, pkg.bin], "binRoot")
 asyncMap(Object.keys(pkg.bin), function (b, cb) {
 if (process.platform === "win32") {
- chain([ [rm, path.resolve(binRoot, b) + ".cmd"]
- , [rm, path.resolve(binRoot, b) ] ], cb)
+ chain([ [gentlyRm, path.resolve(binRoot, b) + ".cmd", undefined]
+ , [gentlyRm, path.resolve(binRoot, b), undefined] ], cb)
 } else {
 gentlyRm( path.resolve(binRoot, b)
 , !npm.config.get("force") && folder
diff --git a/deps/npm/lib/uninstall.js b/deps/npm/lib/uninstall.js
index 42a9a83e0e5..68869f57902 100644
--- a/deps/npm/lib/uninstall.js
+++ b/deps/npm/lib/uninstall.js
@@ -9,6 +9,7 @@
uninstall.usage = "npm uninstall <name>[@<version> [<name>[@<version>] ...]"

uninstall.completion = require("./utils/completion/installed-shallow.js")

var fs = require("graceful-fs")
+ , writeFileAtomic = require("write-file-atomic")
 , log = require("npmlog")
 , readJson = require("read-package-json")
 , path = require("path")
@@ -120,7 +121,7 @@ function saver (args, nm, cb_) {
 }
 }

- fs.writeFile(pj, JSON.stringify(pkg, null, 2) + "\n", function (er) {
+ writeFileAtomic(pj, JSON.stringify(pkg, null, 2) + "\n", function (er) {
 return cb_(er, data)
 })
 })
diff --git a/deps/npm/lib/unpublish.js b/deps/npm/lib/unpublish.js
index 225c1c3c455..2566cd5ae62 100644
--- a/deps/npm/lib/unpublish.js
+++ b/deps/npm/lib/unpublish.js
@@ -1,40 +1,51 @@ module.exports = unpublish

-var url = require("url")
- , log = require("npmlog")
+var log = require("npmlog")
 , npm = require("./npm.js")
 , registry = npm.registry
 , readJson = require("read-package-json")
 , path = require("path")
+ , mapToRegistry = require("./utils/map-to-registry.js")
+ , npa = require("npm-package-arg")

unpublish.usage = "npm unpublish <project>[@<version>]"

unpublish.completion = function (opts, cb) {
 if (opts.conf.argv.remain.length >= 3) return cb()
- var un = encodeURIComponent(npm.config.get("username"))
- if (!un) return cb()
- var uri = url.resolve(npm.config.get("registry"), "-/by-user/"+un)
- registry.get(uri, null, function (er, pkgs) {
- // do a bit of filtering at this point, so that we don't need
- // to fetch versions for more than one thing, but also don't
- // accidentally a whole project.
- pkgs = pkgs[un] - if (!pkgs || !pkgs.length) return cb() - var partial = opts.partialWord.split("@") - , pp = partial.shift() - pkgs = pkgs.filter(function (p) { - return p.indexOf(pp) === 0 - }) - if (pkgs.length > 1) return cb(null, pkgs) - var uri = url.resolve(npm.config.get("registry"), pkgs[0]) - registry.get(uri, null, function (er, d) { + npm.commands.whoami([], true, function (er, username) { + if (er) return cb() + + var un = encodeURIComponent(username) + if (!un) return cb() + var byUser = "-/by-user/" + un + mapToRegistry(byUser, npm.config, function (er, uri) { if (er) return cb(er) - var vers = Object.keys(d.versions) - if (!vers.length) return cb(null, pkgs) - return cb(null, vers.map(function (v) { - return pkgs[0]+"@"+v - })) + + registry.get(uri, null, function (er, pkgs) { + // do a bit of filtering at this point, so that we don't need + // to fetch versions for more than one thing, but also don't + // accidentally a whole project. + pkgs = pkgs[un] + if (!pkgs || !pkgs.length) return cb() + var pp = npa(opts.partialWord).name + pkgs = pkgs.filter(function (p) { + return p.indexOf(pp) === 0 + }) + if (pkgs.length > 1) return cb(null, pkgs) + mapToRegistry(pkgs[0], npm.config, function (er, uri) { + if (er) return cb(er) + + registry.get(uri, null, function (er, d) { + if (er) return cb(er) + var vers = Object.keys(d.versions) + if (!vers.length) return cb(null, pkgs) + return cb(null, vers.map(function (v) { + return pkgs[0] + "@" + v + })) + }) + }) + }) }) }) } @@ -42,23 +53,25 @@ unpublish.completion = function (opts, cb) { function unpublish (args, cb) { if (args.length > 1) return cb(unpublish.usage) - var thing = args.length ? args.shift().split("@") : [] - , project = thing.shift() - , version = thing.join("@") + var thing = args.length ? npa(args[0]) : {} + , project = thing.name + , version = thing.rawSpec + log.silly("unpublish", "args[0]", args[0]) + log.silly("unpublish", "thing", thing) if (!version && !npm.config.get("force")) { return cb("Refusing to delete entire project.\n" - +"Run with --force to do this.\n" - +unpublish.usage) + + "Run with --force to do this.\n" + + unpublish.usage) } - if (!project || path.resolve(project) === npm.prefix) { + if (!project || path.resolve(project) === npm.localPrefix) { // if there's a package.json in the current folder, then // read the package name and version out of that. 
- var cwdJson = path.join(process.cwd(), "package.json") + var cwdJson = path.join(npm.localPrefix, "package.json") return readJson(cwdJson, function (er, data) { if (er && er.code !== "ENOENT" && er.code !== "ENOTDIR") return cb(er) - if (er) return cb("Usage:\n"+unpublish.usage) + if (er) return cb("Usage:\n" + unpublish.usage) gotProject(data.name, data.version, cb) }) } @@ -79,7 +92,10 @@ function gotProject (project, version, cb_) { return cb(er) } - var uri = url.resolve(npm.config.get("registry"), project) - registry.unpublish(uri, version, cb) + mapToRegistry(project, npm.config, function (er, uri) { + if (er) return cb(er) + + registry.unpublish(uri, version, cb) + }) }) } diff --git a/deps/npm/lib/utils/error-handler.js b/deps/npm/lib/utils/error-handler.js index 5c4f4c99e80..95b78a8ccbe 100644 --- a/deps/npm/lib/utils/error-handler.js +++ b/deps/npm/lib/utils/error-handler.js @@ -11,6 +11,7 @@ var cbCalled = false , exitCode = 0 , rollbacks = npm.rollbacks , chain = require("slide").chain + , writeStream = require("fs-write-stream-atomic") process.on("exit", function (code) { @@ -24,13 +25,18 @@ process.on("exit", function (code) { } if (wroteLogFile) { - log.error("", ["" - ,"Additional logging details can be found in:" + // just a line break + if (log.levels[log.level] <= log.levels.error) console.error("") + + log.error("", + ["Please include the following file with any support request:" ," " + path.resolve("npm-debug.log") ].join("\n")) wroteLogFile = false } - log.error("not ok", "code", code) + if (code) { + log.error("code", code) + } } var doExit = npm.config.get("_exit") @@ -61,17 +67,21 @@ function exit (code, noLog) { if (er) { log.error("error rolling back", er) if (!code) errorHandler(er) - else reallyExit(er) + else if (noLog) rm("npm-debug.log", reallyExit.bind(null, er)) + else writeLogFile(reallyExit.bind(this, er)) } else { - rm("npm-debug.log", reallyExit) + if (!noLog && code) writeLogFile(reallyExit) + else rm("npm-debug.log", reallyExit) } }) rollbacks.length = 0 } else if (code && !noLog) writeLogFile(reallyExit) - else reallyExit() + else rm("npm-debug.log", reallyExit) + + function reallyExit (er) { + if (er && !code) code = typeof er.errno === "number" ? er.errno : 1 - function reallyExit() { // truncate once it's been written. 
log.record.length = 0 @@ -87,7 +97,6 @@ function exit (code, noLog) { function errorHandler (er) { - var printStack = false // console.error("errorHandler", er) if (!npm.config || !npm.config.loaded) { // logging won't work unless we pretend that it's ready @@ -112,13 +121,55 @@ function errorHandler (er) { var m = er.code || er.message.match(/^(?:Error: )?(E[A-Z]+)/) if (m && !er.code) er.code = m + ; [ "type" + , "fstream_path" + , "fstream_unc_path" + , "fstream_type" + , "fstream_class" + , "fstream_finish_call" + , "fstream_linkpath" + , "stack" + , "fstream_stack" + , "statusCode" + , "pkgid" + ].forEach(function (k) { + var v = er[k] + if (!v) return + if (k === "fstream_stack") v = v.join("\n") + log.verbose(k, v) + }) + + log.verbose("cwd", process.cwd()) + + var os = require("os") + // log.error("System", os.type() + " " + os.release()) + // log.error("command", process.argv.map(JSON.stringify).join(" ")) + // log.error("node -v", process.version) + // log.error("npm -v", npm.version) + log.error("", os.type() + " " + os.release()) + log.error("argv", process.argv.map(JSON.stringify).join(" ")) + log.error("node", process.version) + log.error("npm ", "v" + npm.version) + + ; [ "file" + , "path" + , "code" + , "errno" + , "syscall" + ].forEach(function (k) { + var v = er[k] + if (v) log.error(k, v) + }) + + // just a line break + if (log.levels[log.level] <= log.levels.error) console.error("") + switch (er.code) { case "ECONNREFUSED": log.error("", er) log.error("", ["\nIf you are behind a proxy, please make sure that the" ,"'proxy' config is set properly. See: 'npm help config'" ].join("\n")) - printStack = true break case "EACCES": @@ -126,7 +177,6 @@ function errorHandler (er) { log.error("", er) log.error("", ["\nPlease try running this command again as root/Administrator." ].join("\n")) - printStack = true break case "ELIFECYCLE": @@ -160,24 +210,22 @@ function errorHandler (er) { ].join("\n"), "JSON.parse") break + // TODO(isaacs) + // Add a special case here for E401 and E403 explaining auth issues? + case "E404": var msg = [er.message] if (er.pkgid && er.pkgid !== "-") { msg.push("", "'"+er.pkgid+"' is not in the npm registry." - ,"You should bug the author to publish it") + ,"You should bug the author to publish it (or use the name yourself!)") if (er.parent) { msg.push("It was specified as a dependency of '"+er.parent+"'") } - if (er.pkgid.match(/^node[\.\-]|[\.\-]js$/)) { - var s = er.pkgid.replace(/^node[\.\-]|[\.\-]js$/g, "") - if (s !== er.pkgid) { - s = s.replace(/[^a-z0-9]/g, ' ') - msg.push("\nMaybe try 'npm search " + s + "'") - } - } msg.push("\nNote that you can also install from a" - ,"tarball, folder, or http url, or git url.") + ,"tarball, folder, http url, or git url.") } + // There's no need to have 404 in the message as well. + msg[0] = msg[0].replace(/^404\s+/, "") log.error("404", msg.join("\n")) break @@ -185,9 +233,6 @@ function errorHandler (er) { log.error("publish fail", ["Cannot publish over existing version." ,"Update the 'version' field in package.json and try again." 
,"" - ,"If the previous version was published in error, see:" - ," npm help unpublish" - ,"" ,"To automatically increment version numbers, see:" ," npm help version" ].join("\n")) @@ -295,50 +340,13 @@ function errorHandler (er) { break default: - log.error("", er.stack || er.message || er) - log.error("", ["If you need help, you may report this *entire* log," - ,"including the npm and node versions, at:" + log.error("", er.message || er) + log.error("", ["", "If you need help, you may report this error at:" ," " ].join("\n")) - printStack = false break } - var os = require("os") - // just a line break - if (log.levels[log.level] <= log.levels.error) console.error("") - log.error("System", os.type() + " " + os.release()) - log.error("command", process.argv - .map(JSON.stringify).join(" ")) - log.error("cwd", process.cwd()) - log.error("node -v", process.version) - log.error("npm -v", npm.version) - - ; [ "file" - , "path" - , "type" - , "syscall" - , "fstream_path" - , "fstream_unc_path" - , "fstream_type" - , "fstream_class" - , "fstream_finish_call" - , "fstream_linkpath" - , "code" - , "errno" - , "stack" - , "fstream_stack" - ].forEach(function (k) { - var v = er[k] - if (k === "stack") { - if (!printStack) return - if (!v) v = er.message - } - if (!v) return - if (k === "fstream_stack") v = v.join("\n") - log.error(k, v) - }) - exit(typeof er.errno === "number" ? er.errno : 1) } @@ -348,19 +356,17 @@ function writeLogFile (cb) { writingLogFile = true wroteLogFile = true - var fs = require("graceful-fs") - , fstr = fs.createWriteStream("npm-debug.log") - , util = require("util") + var fstr = writeStream("npm-debug.log") , os = require("os") , out = "" log.record.forEach(function (m) { var pref = [m.id, m.level] if (m.prefix) pref.push(m.prefix) - pref = pref.join(' ') + pref = pref.join(" ") m.message.trim().split(/\r?\n/).map(function (line) { - return (pref + ' ' + line).trim() + return (pref + " " + line).trim() }).forEach(function (line) { out += line + os.EOL }) diff --git a/deps/npm/lib/utils/fetch.js b/deps/npm/lib/utils/fetch.js deleted file mode 100644 index f6e5166ff5f..00000000000 --- a/deps/npm/lib/utils/fetch.js +++ /dev/null @@ -1,106 +0,0 @@ -/** - * Fetch an HTTP url to a local file. - **/ - -var request = require("request") - , fs = require("graceful-fs") - , npm = require("../npm.js") - , url = require("url") - , log = require("npmlog") - , path = require("path") - , mkdir = require("mkdirp") - , chownr = require("chownr") - , regHost - , once = require("once") - , crypto = require("crypto") - -module.exports = fetch - -function fetch (remote, local, headers, cb) { - if (typeof cb !== "function") cb = headers, headers = {} - cb = once(cb) - log.verbose("fetch", "to=", local) - mkdir(path.dirname(local), function (er, made) { - if (er) return cb(er) - fetch_(remote, local, headers, cb) - }) -} - -function fetch_ (remote, local, headers, cb) { - var fstr = fs.createWriteStream(local, { mode : npm.modes.file }) - var response = null - - fstr.on("error", function (er) { - cb(er) - fstr.destroy() - }) - - var req = makeRequest(remote, fstr, headers) - req.on("response", function (res) { - log.http(res.statusCode, remote) - response = res - response.resume() - // Work around bug in node v0.10.0 where the CryptoStream - // gets stuck and never starts reading again. 
- if (process.version === "v0.10.0") { - response.resume = function (orig) { return function() { - var ret = orig.apply(response, arguments) - if (response.socket.encrypted) - response.socket.encrypted.read(0) - return ret - }}(response.resume) - } - }) - - fstr.on("close", function () { - var er - if (response && response.statusCode && response.statusCode >= 400) { - er = new Error(response.statusCode + " " - + require("http").STATUS_CODES[response.statusCode]) - } - cb(er, response) - }) -} - -function makeRequest (remote, fstr, headers) { - remote = url.parse(remote) - log.http("GET", remote.href) - regHost = regHost || url.parse(npm.config.get("registry")).host - - if (remote.host === regHost && npm.config.get("always-auth")) { - remote.auth = new Buffer( npm.config.get("_auth") - , "base64" ).toString("utf8") - if (!remote.auth) return fstr.emit("error", new Error( - "Auth required and none provided. Please run 'npm adduser'")) - } - - var proxy - if (remote.protocol !== "https:" || !(proxy = npm.config.get("https-proxy"))) { - proxy = npm.config.get("proxy") - } - - var sessionToken = npm.registry.sessionToken - if (!sessionToken) { - sessionToken = crypto.randomBytes(8).toString("hex") - npm.registry.sessionToken = sessionToken - } - - var ca = remote.host === regHost ? npm.config.get("ca") : undefined - var opts = { url: remote - , proxy: proxy - , strictSSL: npm.config.get("strict-ssl") - , rejectUnauthorized: npm.config.get("strict-ssl") - , ca: ca - , headers: - { "user-agent": npm.config.get("user-agent") - , "npm-session": sessionToken - , referer: npm.registry.refer - } - } - var req = request(opts) - req.on("error", function (er) { - fstr.emit("error", er) - }) - req.pipe(fstr) - return req -} diff --git a/deps/npm/lib/utils/gently-rm.js b/deps/npm/lib/utils/gently-rm.js index 241740fed6a..d43d0725ebb 100644 --- a/deps/npm/lib/utils/gently-rm.js +++ b/deps/npm/lib/utils/gently-rm.js @@ -3,54 +3,159 @@ module.exports = gentlyRm -var rimraf = require("rimraf") - , fs = require("graceful-fs") - , npm = require("../npm.js") - , path = require("path") +var npm = require("../npm.js") + , log = require("npmlog") + , resolve = require("path").resolve + , dirname = require("path").dirname + , lstat = require("graceful-fs").lstat + , readlink = require("graceful-fs").readlink + , isInside = require("path-is-inside") + , vacuum = require("fs-vacuum") + , rimraf = require("rimraf") + , some = require("async-some") -function gentlyRm (p, gently, cb) { - if (!cb) cb = gently, gently = null +function gentlyRm (path, gently, cb) { + if (!cb) { + cb = gently + gently = null + } // never rm the root, prefix, or bin dirs. // just a safety precaution. 
- p = path.resolve(p) - if (p === npm.dir || - p === npm.root || - p === npm.bin || - p === npm.prefix || - p === npm.globalDir || - p === npm.globalRoot || - p === npm.globalBin || - p === npm.globalPrefix) { - return cb(new Error("May not delete: " + p)) + var prefixes = [ + npm.dir, npm.root, npm.bin, npm.prefix, + npm.globalDir, npm.globalRoot, npm.globalBin, npm.globalPrefix + ] + + var resolved = resolve(path) + if (prefixes.indexOf(resolved) !== -1) { + log.verbose("gentlyRm", resolved, "is part of npm and can't be removed") + return cb(new Error("May not delete: "+resolved)) } - if (npm.config.get("force") || !gently) { - return rimraf(p, cb) + var options = {log : log.silly.bind(log, "gentlyRm")} + if (npm.config.get("force") || !gently) options.purge = true + + if (!gently) { + log.verbose("gentlyRm", "vacuuming", resolved) + return vacuum(resolved, options, cb) } - gently = path.resolve(gently) + var parent = resolve(gently) + log.verbose("gentlyRm", "verifying that", parent, "is managed by npm") + some(prefixes, isManaged(parent), function (er, matched) { + if (er) return cb(er) + + if (!matched) { + log.verbose("gentlyRm", parent, "is not managed by npm") + return clobberFail(resolved, parent, cb) + } + + log.silly("gentlyRm", parent, "is managed by npm") + + if (isInside(resolved, parent)) { + log.silly("gentlyRm", resolved, "is under", parent) + log.verbose("gentlyRm", "vacuuming", resolved, "up to", parent) + options.base = parent + return vacuum(resolved, options, cb) + } + + log.silly("gentlyRm", resolved, "is not under", parent) + log.silly("gentlyRm", "checking to see if", resolved, "is a link") + lstat(resolved, function (er, stat) { + if (er) { + if (er.code === "ENOENT") return cb(null) + return cb(er) + } + + if (!stat.isSymbolicLink()) { + log.verbose("gentlyRm", resolved, "is outside", parent, "and not a link") + return clobberFail(resolved, parent, cb) + } + + log.silly("gentlyRm", resolved, "is a link") + readlink(resolved, function (er, link) { + if (er) { + if (er.code === "ENOENT") return cb(null) + return cb(er) + } - // lstat it, see if it's a symlink. - fs.lstat(p, function (er, s) { - if (er) return rimraf(p, cb) - if (!s.isSymbolicLink()) next(null, path.resolve(p)) - realish(p, next) + var source = resolve(dirname(resolved), link) + if (isInside(source, parent)) { + log.silly("gentlyRm", source, "inside", parent) + log.verbose("gentlyRm", "vacuuming", resolved) + return vacuum(resolved, options, cb) + } + + log.silly("gentlyRm", "checking to see if", source, "is managed by npm") + some(prefixes, isManaged(source), function (er, matched) { + if (er) return cb(er) + + if (matched) { + log.silly("gentlyRm", source, "is under", matched) + log.verbose("gentlyRm", "removing", resolved) + rimraf(resolved, cb) + } + + log.verbose("gentlyRm", source, "is not managed by npm") + return clobberFail(path, parent, cb) + }) + }) + }) }) +} - function next (er, rp) { - if (rp && rp.indexOf(gently) !== 0) { - return clobberFail(p, gently, cb) +var resolvedPaths = {} +function isManaged (target) { + return predicate + + function predicate (path, cb) { + if (!path) { + log.verbose("isManaged", "no path") + return cb(null, false) + } + + path = resolve(path) + + // if the path has already been memoized, return immediately + var resolved = resolvedPaths[path] + if (resolved) { + var inside = isInside(target, resolved) + log.silly("isManaged", target, inside ? 
"is" : "is not", "inside", resolved) + + return cb(null, inside && path) } - rimraf(p, cb) + + // otherwise, check the path + lstat(path, function (er, stat) { + if (er) { + if (er.code === "ENOENT") return cb(null, false) + + return cb(er) + } + + // if it's not a link, cache & test the path itself + if (!stat.isSymbolicLink()) return cacheAndTest(path, path, target, cb) + + // otherwise, cache & test the link's source + readlink(path, function (er, source) { + if (er) { + if (er.code === "ENOENT") return cb(null, false) + + return cb(er) + } + + cacheAndTest(resolve(path, source), path, target, cb) + }) + }) } -} -function realish (p, cb) { - fs.readlink(p, function (er, r) { - if (er) return cb(er) - return cb(null, path.resolve(path.dirname(p), r)) - }) + function cacheAndTest (resolved, source, target, cb) { + resolvedPaths[source] = resolved + var inside = isInside(target, resolved) + log.silly("cacheAndTest", target, inside ? "is" : "is not", "inside", resolved) + cb(null, inside && source) + } } function clobberFail (p, g, cb) { diff --git a/deps/npm/lib/utils/git.js b/deps/npm/lib/utils/git.js index 7e20151938c..db5cc7baf07 100644 --- a/deps/npm/lib/utils/git.js +++ b/deps/npm/lib/utils/git.js @@ -10,16 +10,20 @@ var exec = require("child_process").execFile , npm = require("../npm.js") , which = require("which") , git = npm.config.get("git") + , assert = require("assert") + , log = require("npmlog") function prefixGitArgs() { return process.platform === "win32" ? ["-c", "core.longpaths=true"] : [] } function execGit(args, options, cb) { + log.info("git", args) return exec(git, prefixGitArgs().concat(args || []), options, cb) } function spawnGit(args, options, cb) { + log.info("git", args) return spawn(git, prefixGitArgs().concat(args || []), options) } @@ -33,6 +37,7 @@ function whichGit(cb) { } function whichAndExec(args, options, cb) { + assert.equal(typeof cb, "function", "no callback provided") // check for git whichGit(function (err) { if (err) { diff --git a/deps/npm/lib/utils/is-git-url.js b/deps/npm/lib/utils/is-git-url.js deleted file mode 100644 index 7ded4b602a2..00000000000 --- a/deps/npm/lib/utils/is-git-url.js +++ /dev/null @@ -1,13 +0,0 @@ -module.exports = isGitUrl - -function isGitUrl (url) { - switch (url.protocol) { - case "git:": - case "git+http:": - case "git+https:": - case "git+rsync:": - case "git+ftp:": - case "git+ssh:": - return true - } -} diff --git a/deps/npm/lib/utils/lifecycle.js b/deps/npm/lib/utils/lifecycle.js index 8bcb99689f4..c0eb83dfb1d 100644 --- a/deps/npm/lib/utils/lifecycle.js +++ b/deps/npm/lib/utils/lifecycle.js @@ -71,11 +71,6 @@ function lifecycle_ (pkg, stage, wd, env, unsafe, failOk, cb) { , p = wd.split("node_modules") , acc = path.resolve(p.shift()) - // first add the directory containing the `node` executable currently - // running, so that any lifecycle script that invoke "node" will execute - // this same one. 
- pathArr.unshift(path.dirname(process.execPath))
-
 p.forEach(function (pp) {
 pathArr.unshift(path.join(acc, "node_modules", ".bin"))
 acc = path.join(acc, "node_modules", pp)
@@ -353,13 +348,9 @@ function makeEnv (data, prefix, env) {

function cmd (stage) {
 function CMD (args, cb) {
- if (args.length) {
- chain(args.map(function (p) {
- return [npm.commands, "run-script", [p, stage]]
- }), cb)
- } else npm.commands["run-script"]([stage], cb)
+ npm.commands["run-script"]([stage].concat(args), cb)
 }
- CMD.usage = "npm "+stage+" <args>"
+ CMD.usage = "npm "+stage+" [-- <args>]"
 var installedShallow = require("./completion/installed-shallow.js")
 CMD.completion = function (opts, cb) {
 installedShallow(opts, function (d) {
diff --git a/deps/npm/lib/utils/locker.js b/deps/npm/lib/utils/locker.js
index 9e322d7af3e..4479f241dab 100644
--- a/deps/npm/lib/utils/locker.js
+++ b/deps/npm/lib/utils/locker.js
@@ -1,52 +1,75 @@ var crypto = require("crypto")
-var path = require("path")
+var resolve = require("path").resolve

-var npm = require("../npm.js")
-var lockFile = require("lockfile")
+var lockfile = require("lockfile")
var log = require("npmlog")
-var getCacheStat = require("../cache/get-stat.js")
-
-function lockFileName (u) {
- var c = u.replace(/[^a-zA-Z0-9]+/g, "-").replace(/^-+|-+$/g, "")
- , h = crypto.createHash("sha1").update(u).digest("hex")
- h = h.substr(0, 8)
- c = c.substr(-32)
- log.silly("lockFile", h + "-" + c, u)
- return path.resolve(npm.config.get("cache"), h + "-" + c + ".lock")
+var mkdirp = require("mkdirp")
+
+var npm = require("../npm.js")
+var getStat = require("../cache/get-stat.js")
+
+var installLocks = {}
+
+function lockFileName (base, name) {
+ var c = name.replace(/[^a-zA-Z0-9]+/g, "-").replace(/^-+|-+$/g, "")
+ , p = resolve(base, name)
+ , h = crypto.createHash("sha1").update(p).digest("hex")
+ , l = resolve(npm.cache, "_locks")
+
+ return resolve(l, c.substr(0, 24)+"-"+h.substr(0, 16)+".lock")
}

-var myLocks = {}
-function lock (u, cb) {
- // the cache dir needs to exist already for this.
- getCacheStat(function (er, cs) { - if (er) return cb(er) - var opts = { stale: npm.config.get("cache-lock-stale") - , retries: npm.config.get("cache-lock-retries") - , wait: npm.config.get("cache-lock-wait") } - var lf = lockFileName(u) - log.verbose("lock", u, lf) - lockFile.lock(lf, opts, function(er) { - if (!er) myLocks[lf] = true - cb(er) +function lock (base, name, cb) { + getStat(function (er) { + var lockDir = resolve(npm.cache, "_locks") + mkdirp(lockDir, function () { + if (er) return cb(er) + + var opts = { stale: npm.config.get("cache-lock-stale") + , retries: npm.config.get("cache-lock-retries") + , wait: npm.config.get("cache-lock-wait") } + var lf = lockFileName(base, name) + lockfile.lock(lf, opts, function (er) { + if (er) log.warn("locking", lf, "failed", er) + + if (!er) { + log.verbose("lock", "using", lf, "for", resolve(base, name)) + installLocks[lf] = true + } + + cb(er) + }) }) }) } -function unlock (u, cb) { - var lf = lockFileName(u) - , locked = myLocks[lf] +function unlock (base, name, cb) { + var lf = lockFileName(base, name) + , locked = installLocks[lf] if (locked === false) { return process.nextTick(cb) - } else if (locked === true) { - myLocks[lf] = false - lockFile.unlock(lockFileName(u), cb) - } else { - throw new Error("Attempt to unlock " + u + ", which hasn't been locked") + } + else if (locked === true) { + lockfile.unlock(lf, function (er) { + if (er) { + log.warn("unlocking", lf, "failed", er) + } + else { + installLocks[lf] = false + log.verbose("unlock", "done using", lf, "for", resolve(base, name)) + } + + cb(er) + }) + } + else { + throw new Error( + "Attempt to unlock " + resolve(base, name) + ", which hasn't been locked" + ) } } module.exports = { - lock: lock, - unlock: unlock, - _lockFileName: lockFileName + lock : lock, + unlock : unlock } diff --git a/deps/npm/lib/utils/map-to-registry.js b/deps/npm/lib/utils/map-to-registry.js new file mode 100644 index 00000000000..cf665e4f656 --- /dev/null +++ b/deps/npm/lib/utils/map-to-registry.js @@ -0,0 +1,54 @@ +var url = require("url") + +var log = require("npmlog") + , npa = require("npm-package-arg") + +module.exports = mapToRegistry + +function mapToRegistry(name, config, cb) { + var uri + var scopedRegistry + + // the name itself takes precedence + var data = npa(name) + if (data.scope) { + // the name is definitely scoped, so escape now + name = name.replace("/", "%2f") + + log.silly("mapToRegistry", "scope", data.scope) + + scopedRegistry = config.get(data.scope + ":registry") + if (scopedRegistry) { + log.silly("mapToRegistry", "scopedRegistry (scoped package)", scopedRegistry) + uri = url.resolve(scopedRegistry, name) + } + else { + log.verbose("mapToRegistry", "no registry URL found for scope", data.scope) + } + } + + // ...then --scope=@scope or --scope=scope + var scope = config.get("scope") + if (!uri && scope) { + // I'm an enabler, sorry + if (scope.charAt(0) !== "@") scope = "@" + scope + + scopedRegistry = config.get(scope + ":registry") + if (scopedRegistry) { + log.silly("mapToRegistry", "scopedRegistry (scope in config)", scopedRegistry) + uri = url.resolve(scopedRegistry, name) + } + else { + log.verbose("mapToRegistry", "no registry URL found for scope", scope) + } + } + + // ...and finally use the default registry + if (!uri) { + uri = url.resolve(config.get("registry"), name) + } + + log.verbose("mapToRegistry", "name", name) + log.verbose("mapToRegistry", "uri", uri) + cb(null, uri) +} diff --git a/deps/npm/lib/utils/tar.js b/deps/npm/lib/utils/tar.js index 
192de7a26a2..ede49a121ed 100644 --- a/deps/npm/lib/utils/tar.js +++ b/deps/npm/lib/utils/tar.js @@ -3,6 +3,7 @@ var npm = require("../npm.js") , fs = require("graceful-fs") + , writeFileAtomic = require("write-file-atomic") , path = require("path") , log = require("npmlog") , uidNumber = require("uid-number") @@ -15,15 +16,6 @@ var npm = require("../npm.js") , fstream = require("fstream") , Packer = require("fstream-npm") , lifecycle = require("./lifecycle.js") - , locker = require("./locker.js") - -function lock(path, cb) { - return locker.lock('tar://' + path, cb) -} - -function unlock(path, cb) { - return locker.unlock('tar://' + path, cb) -} if (process.env.SUDO_UID && myUid === 0) { if (!isNaN(process.env.SUDO_UID)) myUid = +process.env.SUDO_UID @@ -51,73 +43,40 @@ function pack (tarball, folder, pkg, dfc, cb) { } } -function pack_ (tarball, folder, pkg, cb_) { - var tarballLock = false - , folderLock = false - - function cb (er) { - if (folderLock) - unlock(folder, function() { - folderLock = false - cb(er) - }) - else if (tarballLock) - unlock(tarball, function() { - tarballLock = false - cb(er) - }) - else - cb_(er) - } - - lock(folder, function(er) { - if (er) return cb(er) - folderLock = true - next() - }) - - lock(tarball, function (er) { - if (er) return cb(er) - tarballLock = true - next() - }) - - function next () { - if (!tarballLock || !folderLock) return - - new Packer({ path: folder, type: "Directory", isDirectory: true }) - .on("error", function (er) { - if (er) log.error("tar pack", "Error reading " + folder) - return cb(er) - }) +function pack_ (tarball, folder, pkg, cb) { + new Packer({ path: folder, type: "Directory", isDirectory: true }) + .on("error", function (er) { + if (er) log.error("tar pack", "Error reading " + folder) + return cb(er) + }) - // By default, npm includes some proprietary attributes in the - // package tarball. This is sane, and allowed by the spec. - // However, npm *itself* excludes these from its own package, - // so that it can be more easily bootstrapped using old and - // non-compliant tar implementations. - .pipe(tar.Pack({ noProprietary: !npm.config.get("proprietary-attribs") })) - .on("error", function (er) { - if (er) log.error("tar.pack", "tar creation error", tarball) - cb(er) - }) - .pipe(zlib.Gzip()) - .on("error", function (er) { - if (er) log.error("tar.pack", "gzip error "+tarball) - cb(er) - }) - .pipe(fstream.Writer({ type: "File", path: tarball })) - .on("error", function (er) { - if (er) log.error("tar.pack", "Could not write "+tarball) - cb(er) - }) - .on("close", cb) - } + // By default, npm includes some proprietary attributes in the + // package tarball. This is sane, and allowed by the spec. + // However, npm *itself* excludes these from its own package, + // so that it can be more easily bootstrapped using old and + // non-compliant tar implementations. 
+ .pipe(tar.Pack({ noProprietary: !npm.config.get("proprietary-attribs") })) + .on("error", function (er) { + if (er) log.error("tar.pack", "tar creation error", tarball) + cb(er) + }) + .pipe(zlib.Gzip()) + .on("error", function (er) { + if (er) log.error("tar.pack", "gzip error "+tarball) + cb(er) + }) + .pipe(fstream.Writer({ type: "File", path: tarball })) + .on("error", function (er) { + if (er) log.error("tar.pack", "Could not write "+tarball) + cb(er) + }) + .on("close", cb) } function unpack (tarball, unpackTarget, dMode, fMode, uid, gid, cb) { - log.verbose("tar unpack", tarball) + log.verbose("tar", "unpack", tarball) + log.verbose("tar", "unpacking to", unpackTarget) if (typeof cb !== "function") cb = gid, gid = null if (typeof cb !== "function") cb = uid, uid = null if (typeof cb !== "function") cb = fMode, fMode = npm.modes.file @@ -129,52 +88,9 @@ function unpack (tarball, unpackTarget, dMode, fMode, uid, gid, cb) { }) } -function unpack_ ( tarball, unpackTarget, dMode, fMode, uid, gid, cb_ ) { - var parent = path.dirname(unpackTarget) - , base = path.basename(unpackTarget) - , folderLock - , tarballLock - - function cb (er) { - if (folderLock) - unlock(unpackTarget, function() { - folderLock = false - cb(er) - }) - else if (tarballLock) - unlock(tarball, function() { - tarballLock = false - cb(er) - }) - else - cb_(er) - } - - lock(unpackTarget, function (er) { - if (er) return cb(er) - folderLock = true - next() - }) - - lock(tarball, function (er) { +function unpack_ ( tarball, unpackTarget, dMode, fMode, uid, gid, cb ) { + rm(unpackTarget, function (er) { if (er) return cb(er) - tarballLock = true - next() - }) - - function next() { - if (!tarballLock || !folderLock) return - rmGunz() - } - - function rmGunz () { - rm(unpackTarget, function (er) { - if (er) return cb(er) - gtp() - }) - } - - function gtp () { // gzip {tarball} --decompress --stdout \ // | tar -mvxpf - --strip-components=1 -C {unpackTarget} gunzTarPerm( tarball, unpackTarget @@ -184,7 +100,7 @@ function unpack_ ( tarball, unpackTarget, dMode, fMode, uid, gid, cb_ ) { if (er) return cb(er) readJson(path.resolve(folder, "package.json"), cb) }) - } + }) } @@ -202,6 +118,17 @@ function gunzTarPerm (tarball, target, dMode, fMode, uid, gid, cb_) { var fst = fs.createReadStream(tarball) + fst.on("open", function (fd) { + fs.fstat(fd, function (er, st) { + if (er) return fst.emit("error", er) + if (st.size === 0) { + er = new Error("0-byte tarball\n" + + "Please run `npm cache clean`") + fst.emit("error", er) + } + }) + }) + // figure out who we're supposed to be, if we're not pretending // to be a specific user. if (npm.config.get("unsafe-perm") && process.platform !== "win32") { @@ -275,73 +202,74 @@ function gunzTarPerm (tarball, target, dMode, fMode, uid, gid, cb_) { } - fst.on("error", function (er) { - if (er) log.error("tar.unpack", "error reading "+tarball) - cb(er) - }) - fst.on("data", function OD (c) { - // detect what it is. - // Then, depending on that, we'll figure out whether it's - // a single-file module, gzipped tarball, or naked tarball. 
- // gzipped files all start with 1f8b08 - if (c[0] === 0x1F && - c[1] === 0x8B && - c[2] === 0x08) { - fst - .pipe(zlib.Unzip()) - .on("error", function (er) { - if (er) log.error("tar.unpack", "unzip error "+tarball) - cb(er) - }) - .pipe(tar.Extract(extractOpts)) - .on("entry", extractEntry) - .on("error", function (er) { - if (er) log.error("tar.unpack", "untar error "+tarball) - cb(er) - }) - .on("close", cb) - } else if (c.toString().match(/^package\//) || - c.toString().match(/^pax_global_header/)) { - // naked tar - fst - .pipe(tar.Extract(extractOpts)) - .on("entry", extractEntry) - .on("error", function (er) { - if (er) log.error("tar.unpack", "untar error "+tarball) - cb(er) - }) - .on("close", cb) - } else { - // naked js file - var jsOpts = { path: path.resolve(target, "index.js") } - - if (process.platform !== "win32" && - typeof uid === "number" && - typeof gid === "number") { - jsOpts.uid = uid - jsOpts.gid = gid - } + fst + .on("error", function (er) { + if (er) log.error("tar.unpack", "error reading "+tarball) + cb(er) + }) + .on("data", function OD (c) { + // detect what it is. + // Then, depending on that, we'll figure out whether it's + // a single-file module, gzipped tarball, or naked tarball. + // gzipped files all start with 1f8b08 + if (c[0] === 0x1F && + c[1] === 0x8B && + c[2] === 0x08) { + fst + .pipe(zlib.Unzip()) + .on("error", function (er) { + if (er) log.error("tar.unpack", "unzip error "+tarball) + cb(er) + }) + .pipe(tar.Extract(extractOpts)) + .on("entry", extractEntry) + .on("error", function (er) { + if (er) log.error("tar.unpack", "untar error "+tarball) + cb(er) + }) + .on("close", cb) + } else if (c.toString().match(/^package\//) || + c.toString().match(/^pax_global_header/)) { + // naked tar + fst + .pipe(tar.Extract(extractOpts)) + .on("entry", extractEntry) + .on("error", function (er) { + if (er) log.error("tar.unpack", "untar error "+tarball) + cb(er) + }) + .on("close", cb) + } else { + // naked js file + var jsOpts = { path: path.resolve(target, "index.js") } + + if (process.platform !== "win32" && + typeof uid === "number" && + typeof gid === "number") { + jsOpts.uid = uid + jsOpts.gid = gid + } - fst - .pipe(fstream.Writer(jsOpts)) - .on("error", function (er) { - if (er) log.error("tar.unpack", "copy error "+tarball) - cb(er) - }) - .on("close", function () { - var j = path.resolve(target, "package.json") - readJson(j, function (er, d) { - if (er) { - log.error("not a package", tarball) - return cb(er) - } - fs.writeFile(j, JSON.stringify(d) + "\n", cb) + fst + .pipe(fstream.Writer(jsOpts)) + .on("error", function (er) { + if (er) log.error("tar.unpack", "copy error "+tarball) + cb(er) }) - }) - } + .on("close", function () { + var j = path.resolve(target, "package.json") + readJson(j, function (er, d) { + if (er) { + log.error("not a package", tarball) + return cb(er) + } + writeFileAtomic(j, JSON.stringify(d) + "\n", cb) + }) + }) + } - // now un-hook, and re-emit the chunk - fst.removeListener("data", OD) - fst.emit("data", c) - }) + // now un-hook, and re-emit the chunk + fst.removeListener("data", OD) + fst.emit("data", c) + }) } diff --git a/deps/npm/lib/version.js b/deps/npm/lib/version.js index 95d5ff2ee2b..a15e2c391c8 100644 --- a/deps/npm/lib/version.js +++ b/deps/npm/lib/version.js @@ -6,6 +6,7 @@ var exec = require("child_process").execFile , semver = require("semver") , path = require("path") , fs = require("graceful-fs") + , writeFileAtomic = require("write-file-atomic") , chain = require("slide").chain , log = 
require("npmlog") , which = require("which") @@ -23,7 +24,7 @@ version.usage = "npm version [ | major | minor | patch | prerelease function version (args, silent, cb_) { if (typeof cb_ !== "function") cb_ = silent, silent = false if (args.length > 1) return cb_(version.usage) - fs.readFile(path.join(process.cwd(), "package.json"), function (er, data) { + fs.readFile(path.join(npm.localPrefix, "package.json"), function (er, data) { if (!args.length) { var v = {} Object.keys(process.versions).forEach(function (k) { @@ -63,7 +64,7 @@ function version (args, silent, cb_) { if (data.version === newVer) return cb_(new Error("Version not changed")) data.version = newVer - fs.stat(path.join(process.cwd(), ".git"), function (er, s) { + fs.stat(path.join(npm.localPrefix, ".git"), function (er, s) { function cb (er) { if (!er && !silent) console.log("v" + newVer) cb_(er) @@ -83,6 +84,15 @@ function checkGit (data, cb) { // check for git git.whichAndExec(args, options, function (er, stdout) { + if (er && er.code === "ENOGIT") { + log.warn( + "version", + "This is a Git checkout, but the git command was not found.", + "npm could not create a Git tag for this release!" + ) + return write(data, cb) + } + var lines = stdout.trim().split("\n").filter(function (line) { return line.trim() && !line.match(/^\?\? /) }).map(function (line) { @@ -111,7 +121,7 @@ function checkGit (data, cb) { } function write (data, cb) { - fs.writeFile( path.join(process.cwd(), "package.json") + writeFileAtomic( path.join(npm.localPrefix, "package.json") , new Buffer(JSON.stringify(data, null, 2) + "\n") , cb ) } diff --git a/deps/npm/lib/view.js b/deps/npm/lib/view.js index 33bf550dd9f..6b45cca2ec0 100644 --- a/deps/npm/lib/view.js +++ b/deps/npm/lib/view.js @@ -4,21 +4,26 @@ module.exports = view view.usage = "npm view pkg[@version] [[.subfield]...]" view.completion = function (opts, cb) { - var uri if (opts.conf.argv.remain.length <= 2) { - uri = url.resolve(npm.config.get("registry"), "-/short") - return registry.get(uri, null, cb) + return mapToRegistry("-/short", npm.config, function (er, uri) { + if (er) return cb(er) + + registry.get(uri, null, cb) + }) } // have the package, get the fields. 
var tag = npm.config.get("tag") - uri = url.resolve(npm.config.get("registry"), opts.conf.argv.remain[2]) - registry.get(uri, null, function (er, d) { + mapToRegistry(opts.conf.argv.remain[2], npm.config, function (er, uri) { if (er) return cb(er) - var dv = d.versions[d["dist-tags"][tag]] - , fields = [] - d.versions = Object.keys(d.versions).sort(semver.compareLoose) - fields = getFields(d).concat(getFields(dv)) - cb(null, fields) + + registry.get(uri, null, function (er, d) { + if (er) return cb(er) + var dv = d.versions[d["dist-tags"][tag]] + , fields = [] + d.versions = Object.keys(d.versions).sort(semver.compareLoose) + fields = getFields(d).concat(getFields(dv)) + cb(null, fields) + }) }) function getFields (d, f, pref) { @@ -30,11 +35,12 @@ view.completion = function (opts, cb) { var p = pref.concat(k).join(".") f.push(p) if (Array.isArray(d[k])) { - return d[k].forEach(function (val, i) { + d[k].forEach(function (val, i) { var pi = p + "[" + i + "]" if (val && typeof val === "object") getFields(val, f, [p]) else f.push(pi) }) + return } if (typeof d[k] === "object") getFields(d[k], f, [p]) }) @@ -42,71 +48,105 @@ view.completion = function (opts, cb) { } } -var url = require("url") - , npm = require("./npm.js") +var npm = require("./npm.js") + , readJson = require("read-package-json") , registry = npm.registry , log = require("npmlog") , util = require("util") , semver = require("semver") + , mapToRegistry = require("./utils/map-to-registry.js") + , npa = require("npm-package-arg") + , path = require("path") function view (args, silent, cb) { if (typeof cb !== "function") cb = silent, silent = false - if (!args.length) return cb("Usage: "+view.usage) + + if (!args.length) args = ["."] + var pkg = args.shift() - , nv = pkg.split("@") - , name = nv.shift() - , version = nv.join("@") || npm.config.get("tag") + , nv = npa(pkg) + , name = nv.name + , local = (name === "." 
|| !name) + + if (npm.config.get("global") && local) { + return cb(new Error("Cannot use view command in global mode.")) + } - if (name === ".") return cb(view.usage) + if (local) { + var dir = npm.prefix + readJson(path.resolve(dir, "package.json"), function (er, d) { + d = d || {} + if (er && er.code !== "ENOENT" && er.code !== "ENOTDIR") return cb(er) + if (!d.name) return cb(new Error("Invalid package.json")) + var p = d.name + nv = npa(p) + if (pkg && ~pkg.indexOf("@")) { + nv.rawSpec = pkg.split("@")[1] + } + + fetchAndRead(nv, args, silent, cb) + }) + } else { + fetchAndRead(nv, args, silent, cb) + } +} + +function fetchAndRead (nv, args, silent, cb) { // get the data about this package - var uri = url.resolve(npm.config.get("registry"), name) - registry.get(uri, null, function (er, data) { + var name = nv.name + , version = nv.rawSpec || npm.config.get("tag") + + mapToRegistry(name, npm.config, function (er, uri) { if (er) return cb(er) - if (data["dist-tags"] && data["dist-tags"].hasOwnProperty(version)) { - version = data["dist-tags"][version] - } - if (data.time && data.time.unpublished) { - var u = data.time.unpublished - er = new Error("Unpublished by " + u.name + " on " + u.time) - er.statusCode = 404 - er.code = "E404" - er.pkgid = data._id - return cb(er, data) - } + registry.get(uri, null, function (er, data) { + if (er) return cb(er) + if (data["dist-tags"] && data["dist-tags"].hasOwnProperty(version)) { + version = data["dist-tags"][version] + } + if (data.time && data.time.unpublished) { + var u = data.time.unpublished + er = new Error("Unpublished by " + u.name + " on " + u.time) + er.statusCode = 404 + er.code = "E404" + er.pkgid = data._id + return cb(er, data) + } - var results = [] - , error = null - , versions = data.versions || {} - data.versions = Object.keys(versions).sort(semver.compareLoose) - if (!args.length) args = [""] - - // remove readme unless we asked for it - if (-1 === args.indexOf("readme")) { - delete data.readme - } - Object.keys(versions).forEach(function (v) { - if (semver.satisfies(v, version, true)) args.forEach(function (args) { - // remove readme unless we asked for it - if (-1 === args.indexOf("readme")) { - delete versions[v].readme - } - results.push(showFields(data, versions[v], args)) + var results = [] + , error = null + , versions = data.versions || {} + data.versions = Object.keys(versions).sort(semver.compareLoose) + if (!args.length) args = [""] + + // remove readme unless we asked for it + if (-1 === args.indexOf("readme")) { + delete data.readme + } + + Object.keys(versions).forEach(function (v) { + if (semver.satisfies(v, version, true)) args.forEach(function (args) { + // remove readme unless we asked for it + if (-1 === args.indexOf("readme")) { + delete versions[v].readme + } + results.push(showFields(data, versions[v], args)) + }) }) - }) - results = results.reduce(reducer, {}) - var retval = results + results = results.reduce(reducer, {}) + var retval = results - if (args.length === 1 && args[0] === "") { - retval = cleanBlanks(retval) - log.silly("cleanup", retval) - } + if (args.length === 1 && args[0] === "") { + retval = cleanBlanks(retval) + log.silly("cleanup", retval) + } - if (error || silent) cb(error, retval) - else printData(results, data._id, cb.bind(null, error, retval)) + if (error || silent) cb(error, retval) + else printData(results, data._id, cb.bind(null, error, retval)) + }) }) } @@ -175,9 +215,7 @@ function search (data, fields, version, title) { results = results.reduce(reducer, {})
return results } - if (!data.hasOwnProperty(field)) { - return - } + if (!data.hasOwnProperty(field)) return undefined data = data[field] if (tail.length) { if (typeof data === "object") { @@ -196,15 +234,15 @@ function search (data, fields, version, title) { function printData (data, name, cb) { var versions = Object.keys(data) , msg = "" - , showVersions = versions.length > 1 - , showFields + , includeVersions = versions.length > 1 + , includeFields versions.forEach(function (v) { var fields = Object.keys(data[v]) - showFields = showFields || (fields.length > 1) + includeFields = includeFields || (fields.length > 1) fields.forEach(function (f) { var d = cleanup(data[v][f]) - if (showVersions || showFields || typeof d !== "string") { + if (includeVersions || includeFields || typeof d !== "string") { d = cleanup(data[v][f]) d = npm.config.get("json") ? JSON.stringify(d, null, 2) @@ -212,10 +250,10 @@ function printData (data, name, cb) { } else if (typeof d === "string" && npm.config.get("json")) { d = JSON.stringify(d) } - if (f && showFields) f += " = " + if (f && includeFields) f += " = " if (d.indexOf("\n") !== -1) d = " \n" + d - msg += (showVersions ? name + "@" + v + " " : "") - + (showFields ? f : "") + d + "\n" + msg += (includeVersions ? name + "@" + v + " " : "") + + (includeFields ? f : "") + d + "\n" }) }) @@ -259,4 +297,3 @@ function unparsePerson (d) { + (d.email ? " <"+d.email+">" : "") + (d.url ? " ("+d.url+")" : "") } - diff --git a/deps/npm/lib/whoami.js b/deps/npm/lib/whoami.js index f1c67e2b0df..b33f93743d2 100644 --- a/deps/npm/lib/whoami.js +++ b/deps/npm/lib/whoami.js @@ -1,13 +1,39 @@ -module.exports = whoami - var npm = require("./npm.js") -whoami.usage = "npm whoami\n(just prints the 'username' config)" +module.exports = whoami + +whoami.usage = "npm whoami\n(just prints username according to given registry)" function whoami (args, silent, cb) { - if (typeof cb !== "function") cb = silent, silent = false - var me = npm.config.get("username") - var msg = me ? me : "Not authed. Run 'npm adduser'" + // FIXME: need tighter checking on this, but is a breaking change + if (typeof cb !== "function") { + cb = silent + silent = false + } + + var registry = npm.config.get("registry") + if (!registry) return cb(new Error("no default registry set")) + + var credentials = npm.config.getCredentialsByURI(registry) + if (credentials) { + if (credentials.username) { + if (!silent) console.log(credentials.username) + return process.nextTick(cb.bind(this, null, credentials.username)) + } + else if (credentials.token) { + return npm.registry.whoami(registry, function (er, username) { + if (er) return cb(er) + + if (!silent) console.log(username) + cb(null, username) + }) + } + } + + // At this point, if they have a credentials object, it doesn't + // have a token or auth in it. Probably just the default + // registry. + var msg = "Not authed. Run 'npm adduser'" if (!silent) console.log(msg) - process.nextTick(cb.bind(this, null, me)) + process.nextTick(cb.bind(this, null, msg)) } diff --git a/deps/npm/man/man1/npm-README.1 b/deps/npm/man/man1/npm-README.1 index cfa8b459836..a7cf1046f0c 100644 --- a/deps/npm/man/man1/npm-README.1 +++ b/deps/npm/man/man1/npm-README.1 @@ -1,220 +1,176 @@ -.\" Generated with Ronnjs 0.3.8 -.\" http://github.com/kapouer/ronnjs/ -. -.TH "NPM" "1" "September 2014" "" "" -. 
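The whoami rewrite above resolves the username in two steps: print it directly when the stored credentials include one, otherwise ask the registry, which is the only option when nothing but a bearer token is on file. A minimal sketch of that flow using the same APIs the diff uses; the empty object passed to npm.load is illustrative:

    var npm = require("npm")

    npm.load({}, function (er) {
      if (er) throw er
      var registry = npm.config.get("registry")
      var creds = npm.config.getCredentialsByURI(registry)
      if (creds && creds.username) {
        // username/password credentials carry the name directly
        console.log(creds.username)
      } else if (creds && creds.token) {
        // a token says nothing about who we are, so ask the registry
        npm.registry.whoami(registry, function (er, username) {
          if (er) throw er
          console.log(username)
        })
      } else {
        console.log("Not authed. Run 'npm adduser'")
      }
    })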
+.TH "NPM" "1" "October 2014" "" "" .SH "NAME" -\fBnpm\fR \-\- node package manager![Build Status \fIhttps://img\.shields\.io/travis/npm/npm/master\.svg)](https://travis\-ci\.org/npm/npm\fR -## SYNOPSIS -. +\fBnpm\fR \- node package manager +.P +Build Status \fIhttps://img\.shields\.io/travis/npm/npm/master\.svg\fR \fIhttps://travis\-ci\.org/npm/npm\fR +.SH SYNOPSIS .P This is just enough info to get you up and running\. -. .P -Much more info available via \fBnpm help\fR once it\'s installed\. -. -.SH "IMPORTANT" +Much more info available via \fBnpm help\fR once it's installed\. +.SH IMPORTANT +.P \fBYou need node v0\.8 or higher to run this program\.\fR -. .P To install an old \fBand unsupported\fR version of npm that works on node 0\.3 and prior, clone the git repo and dig through the old tags and branches\. -. -.SH "Super Easy Install" -npm comes with node now\. -. -.SS "Windows Computers" -Get the MSI\. npm is in it\. -. -.SS "Apple Macintosh Computers" -Get the pkg\. npm is in it\. -. -.SS "Other Sorts of Unices" +.SH Super Easy Install +.P +npm comes with node \fIhttp://nodejs\.org/download/\fR now\. +.SS Windows Computers +.P +Get the MSI \fIhttp://nodejs\.org/download/\fR\|\. npm is in it\. +.SS Apple Macintosh Computers +.P +Get the pkg \fIhttp://nodejs\.org/download/\fR\|\. npm is in it\. +.SS Other Sorts of Unices +.P Run \fBmake install\fR\|\. npm will be installed with node\. -. .P If you want a more fancy pants install (a different version, customized paths, etc\.) then read on\. -. -.SH "Fancy Install (Unix)" -There\'s a pretty robust install script at \fIhttps://www\.npmjs\.org/install\.sh\fR\|\. You can download that and run it\. -. -.P -Here\'s an example using curl: -. -.IP "" 4 -. +.SH Fancy Install (Unix) +.P +There's a pretty robust install script at +https://www\.npmjs\.org/install\.sh\|\. You can download that and run it\. +.P +Here's an example using curl: +.P +.RS 2 .nf curl \-L https://npmjs\.org/install\.sh | sh -. .fi -. -.IP "" 0 -. -.SS "Slightly Fancier" +.RE +.SS Slightly Fancier +.P You can set any npm configuration params with that script: -. -.IP "" 4 -. +.P +.RS 2 .nf npm_config_prefix=/some/path sh install\.sh -. .fi -. -.IP "" 0 -. +.RE .P Or, you can run it in uber\-debuggery mode: -. -.IP "" 4 -. +.P +.RS 2 .nf npm_debug=1 sh install\.sh -. .fi -. -.IP "" 0 -. -.SS "Even Fancier" +.RE +.SS Even Fancier +.P Get the code with git\. Use \fBmake\fR to build the docs and do other stuff\. If you plan on hacking on npm, \fBmake link\fR is your friend\. -. .P -If you\'ve got the npm source code, you can also semi\-permanently set +If you've got the npm source code, you can also semi\-permanently set arbitrary config keys using the \fB\|\./configure \-\-key=val \.\.\.\fR, and then run npm commands by doing \fBnode cli\.js \fR\|\. (This is helpful for testing, or running stuff without actually installing npm itself\.) -. -.SH "Fancy Windows Install" -You can download a zip file from \fIhttps://npmjs\.org/dist/\fR, and unpack it +.SH Fancy Windows Install +.P +You can download a zip file from https://npmjs\.org/dist/, and unpack it in the same folder where node\.exe lives\. -. .P -If that\'s not fancy enough for you, then you can fetch the code with +If that's not fancy enough for you, then you can fetch the code with git, and mess with it directly\. -. -.SH "Installing on Cygwin" +.SH Installing on Cygwin +.P No\. -. -.SH "Permissions when Using npm to Install Other Stuff" +.SH Permissions when Using npm to Install Other Stuff +.P \fBtl;dr\fR -. 
-.IP "\(bu" 4 -Use \fBsudo\fR for greater safety\. Or don\'t, if you prefer not to\. -. -.IP "\(bu" 4 -npm will downgrade permissions if it\'s root before running any build +.RS 0 +.IP \(bu 2 +Use \fBsudo\fR for greater safety\. Or don't, if you prefer not to\. +.IP \(bu 2 +npm will downgrade permissions if it's root before running any build scripts that package authors specified\. -. -.IP "" 0 -. -.SS "More details\.\.\." + +.RE +.SS More details\.\.\. +.P As of version 0\.3, it is recommended to run npm as root\. This allows npm to change the user identifier to the \fBnobody\fR user prior to running any package build or test commands\. -. .P If you are not the root user, or if you are on a platform that does not support uid switching, then npm will not attempt to change the userid\. -. .P If you would like to ensure that npm \fBalways\fR runs scripts as the "nobody" user, and have it fail if it cannot downgrade permissions, then set the following configuration param: -. -.IP "" 4 -. +.P +.RS 2 .nf npm config set unsafe\-perm false -. .fi -. -.IP "" 0 -. +.RE .P This will prevent running in unsafe mode, even as non\-root users\. -. -.SH "Uninstalling" +.SH Uninstalling +.P So sad to see you go\. -. -.IP "" 4 -. +.P +.RS 2 .nf sudo npm uninstall npm \-g -. .fi -. -.IP "" 0 -. +.RE .P Or, if that fails, -. -.IP "" 4 -. +.P +.RS 2 .nf sudo make uninstall -. .fi -. -.IP "" 0 -. -.SH "More Severe Uninstalling" +.RE +.SH More Severe Uninstalling +.P Usually, the above instructions are sufficient\. That will remove -npm, but leave behind anything you\'ve installed\. -. +npm, but leave behind anything you've installed\. .P If you would like to remove all the packages that you have installed, then you can use the \fBnpm ls\fR command to find them, and then \fBnpm rm\fR to remove them\. -. .P -To remove cruft left behind by npm 0\.x, you can use the included \fBclean\-old\.sh\fR script file\. You can run it conveniently like this: -. -.IP "" 4 -. +To remove cruft left behind by npm 0\.x, you can use the included +\fBclean\-old\.sh\fR script file\. You can run it conveniently like this: +.P +.RS 2 .nf npm explore npm \-g \-\- sh scripts/clean\-old\.sh -. .fi -. -.IP "" 0 -. +.RE .P npm uses two configuration files, one for per\-user configs, and another for global (every\-user) configs\. You can view them by doing: -. -.IP "" 4 -. +.P +.RS 2 .nf npm config get userconfig # defaults to ~/\.npmrc npm config get globalconfig # defaults to /usr/local/etc/npmrc -. .fi -. -.IP "" 0 -. +.RE .P Uninstalling npm does not remove configuration files by default\. You must remove them yourself manually if you want them gone\. Note that this means that future npm installs will not remember the settings that you have chosen\. -. -.SH "Using npm Programmatically" +.SH Using npm Programmatically +.P If you would like to use npm programmatically, you can do that\. -It\'s not very well documented, but it \fIis\fR rather simple\. -. +It's not very well documented, but it \fIis\fR rather simple\. .P Most of the time, unless you actually want to do all the things that -npm does, you should try using one of npm\'s dependencies rather than +npm does, you should try using one of npm's dependencies rather than using npm itself, if possible\. -. .P Eventually, npm will be just a thin cli wrapper around the modules that it depends on, but for now, there are some things that you must use npm itself to do\. -. -.IP "" 4 -. 
+.P +.RS 2 .nf var npm = require("npm") npm\.load(myConfigObject, function (er) { @@ -223,119 +179,100 @@ npm\.load(myConfigObject, function (er) { if (er) return commandFailed(er) // command succeeded, and data might have some info }) - npm\.on("log", function (message) { \.\.\.\. }) + npm\.registry\.log\.on("log", function (message) { \.\.\.\. }) }) -. .fi -. -.IP "" 0 -. +.RE .P The \fBload\fR function takes an object hash of the command\-line configs\. The various \fBnpm\.commands\.\fR functions take an \fBarray\fR of -positional argument \fBstrings\fR\|\. The last argument to any \fBnpm\.commands\.\fR function is a callback\. Some commands take other +positional argument \fBstrings\fR\|\. The last argument to any +\fBnpm\.commands\.\fR function is a callback\. Some commands take other optional arguments\. Read the source\. -. .P You cannot set configs individually for any single npm function at this time\. Since \fBnpm\fR is a singleton, any call to \fBnpm\.config\.set\fR will change the value for \fIall\fR npm commands in that process\. -. .P See \fB\|\./bin/npm\-cli\.js\fR for an example of pulling config values off of the command line arguments using nopt\. You may also want to check out \fBnpm help config\fR to learn about all the options you can set there\. -. -.SH "More Docs" +.SH More Docs +.P Check out the docs \fIhttps://www\.npmjs\.org/doc/\fR, especially the faq \fIhttps://www\.npmjs\.org/doc/faq\.html\fR\|\. -. .P You can use the \fBnpm help\fR command to read any of them\. -. .P -If you\'re a developer, and you want to use npm to publish your program, +If you're a developer, and you want to use npm to publish your program, you should read this \fIhttps://www\.npmjs\.org/doc/developers\.html\fR -. -.SH "Legal Stuff" +.SH Legal Stuff +.P "npm" and "The npm Registry" are owned by npm, Inc\. All rights reserved\. See the included LICENSE file for more details\. -. .P "Node\.js" and "node" are trademarks owned by Joyent, Inc\. -. .P Modules published on the npm registry are not officially endorsed by npm, Inc\. or the Node\.js project\. -. .P Data published to the npm registry is not part of npm itself, and is the sole property of the publisher\. While every effort is made to ensure accountability, there is absolutely no guarantee, warrantee, or assertion expressed or implied as to the quality, fitness for a specific purpose, or lack of malice in any given npm package\. -. .P If you have a complaint about a package in the public npm registry, and cannot resolve it with the package -owner \fIhttps://www\.npmjs\.org/doc/misc/npm\-disputes\.html\fR, please email \fIsupport@npmjs\.com\fR and explain the situation\. -. +owner \fIhttps://www\.npmjs\.org/doc/misc/npm\-disputes\.html\fR, please email +support@npmjs\.com and explain the situation\. .P Any data published to The npm Registry (including user account information) may be removed or modified at the sole discretion of the npm server administrators\. -. -.SS "In plainer english" +.SS In plainer english +.P npm is the property of npm, Inc\. -. .P -If you publish something, it\'s yours, and you are solely accountable +If you publish something, it's yours, and you are solely accountable for it\. -. .P -If other people publish something, it\'s theirs\. -. +If other people publish something, it's theirs\. .P Users can publish Bad Stuff\. It will be removed promptly if reported\. But there is no vetting process for published modules, and you use them at your own risk\. Please inspect the source\. -. 
.P If you publish Bad Stuff, we may delete it from the registry, or even -ban your account in extreme cases\. So don\'t do that\. -. -.SH "BUGS" +ban your account in extreme cases\. So don't do that\. +.SH BUGS +.P When you find issues, please report them: -. -.IP "\(bu" 4 -web: \fIhttps://github\.com/npm/npm/issues\fR -. -.IP "\(bu" 4 -email: \fInpm\-@googlegroups\.com\fR -. -.IP "" 0 -. -.P -Be sure to include \fIall\fR of the output from the npm command that didn\'t work +.RS 0 +.IP \(bu 2 +web: +https://github\.com/npm/npm/issues +.IP \(bu 2 +email: +npm\-@googlegroups\.com + +.RE +.P +Be sure to include \fIall\fR of the output from the npm command that didn't work as expected\. The \fBnpm\-debug\.log\fR file is also helpful to provide\. -. .P You can also look for isaacs in #node\.js on irc://irc\.freenode\.net\. He will no doubt tell you to put the output in a gist or email\. -. -.SH "SEE ALSO" -. -.IP "\(bu" 4 +.SH SEE ALSO +.RS 0 +.IP \(bu 2 npm help npm -. -.IP "\(bu" 4 +.IP \(bu 2 npm help 7 faq -. -.IP "\(bu" 4 +.IP \(bu 2 npm help help -. -.IP "\(bu" 4 +.IP \(bu 2 npm help 7 index -. -.IP "" 0 + +.RE diff --git a/deps/npm/man/man1/npm-adduser.1 b/deps/npm/man/man1/npm-adduser.1 index da1dcdbc3f3..6b85986e02e 100644 --- a/deps/npm/man/man1/npm-adduser.1 +++ b/deps/npm/man/man1/npm-adduser.1 @@ -1,63 +1,85 @@ -.\" Generated with Ronnjs 0.3.8 -.\" http://github.com/kapouer/ronnjs/ -. -.TH "NPM\-ADDUSER" "1" "September 2014" "" "" -. +.TH "NPM\-ADDUSER" "1" "October 2014" "" "" .SH "NAME" -\fBnpm-adduser\fR \-\- Add a registry user account -. -.SH "SYNOPSIS" -. +\fBnpm-adduser\fR \- Add a registry user account +.SH SYNOPSIS +.P +.RS 2 .nf -npm adduser -. +npm adduser [\-\-registry=url] [\-\-scope=@orgname] [\-\-always\-auth] .fi -. -.SH "DESCRIPTION" -Create or verify a user named \fB\fR in the npm registry, and -save the credentials to the \fB\|\.npmrc\fR file\. -. +.RE +.SH DESCRIPTION +.P +Create or verify a user named \fB\fR in the specified registry, and +save the credentials to the \fB\|\.npmrc\fR file\. If no registry is specified, +the default registry will be used (see npm help 7 \fBnpm\-config\fR)\. .P The username, password, and email are read in from prompts\. -. .P You may use this command to change your email address, but not username or password\. -. .P -To reset your password, go to \fIhttps://npmjs\.org/forgot\fR -. +To reset your password, go to https://www\.npmjs\.org/forgot .P You may use this command multiple times with the same user account to authorize on a new machine\. -. -.SH "CONFIGURATION" -. -.SS "registry" +.P +\fBnpm login\fR is an alias to \fBadduser\fR and behaves exactly the same way\. +.SH CONFIGURATION +.SS registry +.P Default: http://registry\.npmjs\.org/ -. .P -The base URL of the npm package registry\. -. -.SH "SEE ALSO" -. -.IP "\(bu" 4 +The base URL of the npm package registry\. If \fBscope\fR is also specified, +this registry will only be used for packages with that scope\. See npm help 7 \fBnpm\-scope\fR\|\. +.SS scope +.P +Default: none +.P +If specified, the user and login credentials given will be associated +with the specified scope\. See npm help 7 \fBnpm\-scope\fR\|\. You can use both at the same time, +e\.g\. +.P +.RS 2 +.nf +npm adduser \-\-registry=http://myregistry\.example\.com \-\-scope=@myco +.fi +.RE +.P +This will set a registry for the given scope and login or create a user for +that registry at the same time\. 
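A scoped adduser like the example above records the scope-to-registry mapping in .npmrc, and later commands consult that mapping whenever they resolve a package in that scope. A minimal sketch of reading the association back, assuming the hypothetical @myco scope and registry from the example, and assuming scoped registries are stored under "@scope:registry" config keys:

    var npm = require("npm")

    npm.load({}, function (er) {
      if (er) throw er
      // assumed key shape: "@myco:registry=http://myregistry.example.com"
      var reg = npm.config.get("@myco:registry")
      if (!reg) return console.log("no registry configured for @myco")
      console.log("registry for @myco packages:", reg)
      // credentials saved by adduser are looked up per registry URI
      console.log(npm.config.getCredentialsByURI(reg))
    })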
+.SS always\-auth +.P +Default: false +.P +If specified, save configuration indicating that all requests to the given +registry should include authorization information\. Useful for private +registries\. Can be used with \fB\-\-registry\fR and / or \fB\-\-scope\fR, e\.g\. +.P +.RS 2 +.nf +npm adduser \-\-registry=http://private\-registry\.example\.com \-\-always\-auth +.fi +.RE +.P +This will ensure that all requests to that registry (including for tarballs) +include an authorization header\. See \fBalways\-auth\fR in npm help 7 \fBnpm\-config\fR for more +details on always\-auth\. Registry\-specific configuration of \fBalways\-auth\fR takes +precedence over any global configuration\. +.SH SEE ALSO +.RS 0 +.IP \(bu 2 npm help 7 registry -. -.IP "\(bu" 4 +.IP \(bu 2 npm help config -. -.IP "\(bu" 4 +.IP \(bu 2 npm help 7 config -. -.IP "\(bu" 4 +.IP \(bu 2 npm help 5 npmrc -. -.IP "\(bu" 4 +.IP \(bu 2 npm help owner -. -.IP "\(bu" 4 +.IP \(bu 2 npm help whoami -. -.IP "" 0 + +.RE diff --git a/deps/npm/man/man1/npm-bin.1 b/deps/npm/man/man1/npm-bin.1 index 548bb6ad347..6552d6cf4d5 100644 --- a/deps/npm/man/man1/npm-bin.1 +++ b/deps/npm/man/man1/npm-bin.1 @@ -1,40 +1,30 @@ -.\" Generated with Ronnjs 0.3.8 -.\" http://github.com/kapouer/ronnjs/ -. -.TH "NPM\-BIN" "1" "September 2014" "" "" -. +.TH "NPM\-BIN" "1" "October 2014" "" "" .SH "NAME" -\fBnpm-bin\fR \-\- Display npm bin folder -. -.SH "SYNOPSIS" -. +\fBnpm-bin\fR \- Display npm bin folder +.SH SYNOPSIS +.P +.RS 2 .nf npm bin -. .fi -. -.SH "DESCRIPTION" +.RE +.SH DESCRIPTION +.P Print the folder where npm will install executables\. -. -.SH "SEE ALSO" -. -.IP "\(bu" 4 +.SH SEE ALSO +.RS 0 +.IP \(bu 2 npm help prefix -. -.IP "\(bu" 4 +.IP \(bu 2 npm help root -. -.IP "\(bu" 4 +.IP \(bu 2 npm help 5 folders -. -.IP "\(bu" 4 +.IP \(bu 2 npm help config -. -.IP "\(bu" 4 +.IP \(bu 2 npm help 7 config -. -.IP "\(bu" 4 +.IP \(bu 2 npm help 5 npmrc -. -.IP "" 0 + +.RE diff --git a/deps/npm/man/man1/npm-bugs.1 b/deps/npm/man/man1/npm-bugs.1 index 328ac304515..09c7659c600 100644 --- a/deps/npm/man/man1/npm-bugs.1 +++ b/deps/npm/man/man1/npm-bugs.1 @@ -1,78 +1,59 @@ -.\" Generated with Ronnjs 0.3.8 -.\" http://github.com/kapouer/ronnjs/ -. -.TH "NPM\-BUGS" "1" "September 2014" "" "" -. +.TH "NPM\-BUGS" "1" "October 2014" "" "" .SH "NAME" -\fBnpm-bugs\fR \-\- Bugs for a package in a web browser maybe -. -.SH "SYNOPSIS" -. +\fBnpm-bugs\fR \- Bugs for a package in a web browser maybe +.SH SYNOPSIS +.P +.RS 2 .nf npm bugs <pkgname> npm bugs (with no args in a package dir) -. .fi -. -.SH "DESCRIPTION" -This command tries to guess at the likely location of a package\'s +.RE +.SH DESCRIPTION +.P +This command tries to guess at the likely location of a package's bug tracker URL, and then tries to open it using the \fB\-\-browser\fR config param\. If no package name is provided, it will search for a \fBpackage\.json\fR in the current folder and use the \fBname\fR property\. -. -.SH "CONFIGURATION" -. -.SS "browser" -. -.IP "\(bu" 4 +.SH CONFIGURATION +.SS browser +.RS 0 +.IP \(bu 2 Default: OS X: \fB"open"\fR, Windows: \fB"start"\fR, Others: \fB"xdg\-open"\fR -. -.IP "\(bu" 4 +.IP \(bu 2 Type: String -. -.IP "" 0 -. + +.RE .P The browser that is called by the \fBnpm bugs\fR command to open websites\. -. -.SS "registry" -. -.IP "\(bu" 4 +.SS registry +.RS 0 +.IP \(bu 2 Default: https://registry\.npmjs\.org/ -. -.IP "\(bu" 4 +.IP \(bu 2 Type: url -. -.IP "" 0 -. + +.RE .P The base URL of the npm package registry\. -. -.SH "SEE ALSO" -.
-.IP "\(bu" 4 +.SH SEE ALSO +.RS 0 +.IP \(bu 2 npm help docs -. -.IP "\(bu" 4 +.IP \(bu 2 npm help view -. -.IP "\(bu" 4 +.IP \(bu 2 npm help publish -. -.IP "\(bu" 4 +.IP \(bu 2 npm help 7 registry -. -.IP "\(bu" 4 +.IP \(bu 2 npm help config -. -.IP "\(bu" 4 +.IP \(bu 2 npm help 7 config -. -.IP "\(bu" 4 +.IP \(bu 2 npm help 5 npmrc -. -.IP "\(bu" 4 +.IP \(bu 2 npm help 5 package\.json -. -.IP "" 0 + +.RE diff --git a/deps/npm/man/man1/npm-build.1 b/deps/npm/man/man1/npm-build.1 index cc815b63b51..0f2184292a7 100644 --- a/deps/npm/man/man1/npm-build.1 +++ b/deps/npm/man/man1/npm-build.1 @@ -1,43 +1,34 @@ -.\" Generated with Ronnjs 0.3.8 -.\" http://github.com/kapouer/ronnjs/ -. -.TH "NPM\-BUILD" "1" "September 2014" "" "" -. +.TH "NPM\-BUILD" "1" "October 2014" "" "" .SH "NAME" -\fBnpm-build\fR \-\- Build a package -. -.SH "SYNOPSIS" -. +\fBnpm-build\fR \- Build a package +.SH SYNOPSIS +.P +.RS 2 .nf npm build -. .fi -. -.IP "\(bu" 4 +.RE +.RS 0 +.IP \(bu 2 \fB\fR: A folder containing a \fBpackage\.json\fR file in its root\. -. -.IP "" 0 -. -.SH "DESCRIPTION" + +.RE +.SH DESCRIPTION +.P This is the plumbing command called by \fBnpm link\fR and \fBnpm install\fR\|\. -. .P It should generally not be called directly\. -. -.SH "SEE ALSO" -. -.IP "\(bu" 4 +.SH SEE ALSO +.RS 0 +.IP \(bu 2 npm help install -. -.IP "\(bu" 4 +.IP \(bu 2 npm help link -. -.IP "\(bu" 4 +.IP \(bu 2 npm help 7 scripts -. -.IP "\(bu" 4 +.IP \(bu 2 npm help 5 package\.json -. -.IP "" 0 + +.RE diff --git a/deps/npm/man/man1/npm-bundle.1 b/deps/npm/man/man1/npm-bundle.1 index 5799f4b19d9..0748922dae2 100644 --- a/deps/npm/man/man1/npm-bundle.1 +++ b/deps/npm/man/man1/npm-bundle.1 @@ -1,23 +1,17 @@ -.\" Generated with Ronnjs 0.3.8 -.\" http://github.com/kapouer/ronnjs/ -. -.TH "NPM\-BUNDLE" "1" "September 2014" "" "" -. +.TH "NPM\-BUNDLE" "1" "October 2014" "" "" .SH "NAME" -\fBnpm-bundle\fR \-\- REMOVED -. -.SH "DESCRIPTION" +\fBnpm-bundle\fR \- REMOVED +.SH DESCRIPTION +.P The \fBnpm bundle\fR command has been removed in 1\.0, for the simple reason that it is no longer necessary, as the default behavior is now to install packages into the local space\. -. .P Just use \fBnpm install\fR now to do what \fBnpm bundle\fR used to do\. -. -.SH "SEE ALSO" -. -.IP "\(bu" 4 +.SH SEE ALSO +.RS 0 +.IP \(bu 2 npm help install -. -.IP "" 0 + +.RE diff --git a/deps/npm/man/man1/npm-cache.1 b/deps/npm/man/man1/npm-cache.1 index 3977da0b1af..c49015ae6e7 100644 --- a/deps/npm/man/man1/npm-cache.1 +++ b/deps/npm/man/man1/npm-cache.1 @@ -1,100 +1,86 @@ -.\" Generated with Ronnjs 0.3.8 -.\" http://github.com/kapouer/ronnjs/ -. -.TH "NPM\-CACHE" "1" "September 2014" "" "" -. +.TH "NPM\-CACHE" "1" "October 2014" "" "" .SH "NAME" -\fBnpm-cache\fR \-\- Manipulates packages cache -. -.SH "SYNOPSIS" -. +\fBnpm-cache\fR \- Manipulates packages cache +.SH SYNOPSIS +.P +.RS 2 .nf npm cache add npm cache add npm cache add npm cache add @ + npm cache ls [] + npm cache clean [] -. .fi -. -.SH "DESCRIPTION" +.RE +.SH DESCRIPTION +.P Used to add, list, or clear the npm cache folder\. -. -.IP "\(bu" 4 +.RS 0 +.IP \(bu 2 add: Add the specified package to the local cache\. This command is primarily intended to be used internally by npm, but it can provide a way to add data to the local installation cache explicitly\. -. -.IP "\(bu" 4 +.IP \(bu 2 ls: Show the data in the cache\. Argument is a path to show in the cache -folder\. Works a bit like the \fBfind\fR program, but limited by the \fBdepth\fR config\. -. -.IP "\(bu" 4 +folder\. 
Works a bit like the \fBfind\fR program, but limited by the +\fBdepth\fR config\. +.IP \(bu 2 clean: Delete data out of the cache folder\. If an argument is provided, then it specifies a subpath to delete\. If no argument is provided, then the entire cache is cleared\. -. -.IP "" 0 -. -.SH "DETAILS" + +.RE +.SH DETAILS +.P npm stores cache data in the directory specified in \fBnpm config get cache\fR\|\. For each package that is added to the cache, three pieces of information are stored in \fB{cache}/{name}/{version}\fR: -. -.IP "\(bu" 4 +.RS 0 +.IP \(bu 2 \|\.\.\./package/package\.json: The package\.json file, as npm sees it\. -. -.IP "\(bu" 4 +.IP \(bu 2 \|\.\.\./package\.tgz: The tarball for that version\. -. -.IP "" 0 -. + +.RE .P Additionally, whenever a registry request is made, a \fB\|\.cache\.json\fR file is placed at the corresponding URI, to store the ETag and the requested data\. This is stored in \fB{cache}/{hostname}/{path}/\.cache\.json\fR\|\. -. .P -Commands that make non\-essential registry requests (such as \fBsearch\fR and \fBview\fR, or the completion scripts) generally specify a minimum timeout\. +Commands that make non\-essential registry requests (such as \fBsearch\fR and +\fBview\fR, or the completion scripts) generally specify a minimum timeout\. If the \fB\|\.cache\.json\fR file is younger than the specified timeout, then they do not make an HTTP request to the registry\. -. -.SH "CONFIGURATION" -. -.SS "cache" +.SH CONFIGURATION +.SS cache +.P Default: \fB~/\.npm\fR on Posix, or \fB%AppData%/npm\-cache\fR on Windows\. -. .P The root cache folder\. -. -.SH "SEE ALSO" -. -.IP "\(bu" 4 +.SH SEE ALSO +.RS 0 +.IP \(bu 2 npm help 5 folders -. -.IP "\(bu" 4 +.IP \(bu 2 npm help config -. -.IP "\(bu" 4 +.IP \(bu 2 npm help 7 config -. -.IP "\(bu" 4 +.IP \(bu 2 npm help 5 npmrc -. -.IP "\(bu" 4 +.IP \(bu 2 npm help install -. -.IP "\(bu" 4 +.IP \(bu 2 npm help publish -. -.IP "\(bu" 4 +.IP \(bu 2 npm help pack -. -.IP "" 0 + +.RE diff --git a/deps/npm/man/man1/npm-completion.1 b/deps/npm/man/man1/npm-completion.1 index 2ae25687a68..a89cc6fd5ff 100644 --- a/deps/npm/man/man1/npm-completion.1 +++ b/deps/npm/man/man1/npm-completion.1 @@ -1,47 +1,37 @@ -.\" Generated with Ronnjs 0.3.8 -.\" http://github.com/kapouer/ronnjs/ -. -.TH "NPM\-COMPLETION" "1" "September 2014" "" "" -. +.TH "NPM\-COMPLETION" "1" "October 2014" "" "" .SH "NAME" -\fBnpm-completion\fR \-\- Tab Completion for npm -. -.SH "SYNOPSIS" -. +\fBnpm-completion\fR \- Tab Completion for npm +.SH SYNOPSIS +.P +.RS 2 .nf \|\. <(npm completion) -. .fi -. -.SH "DESCRIPTION" +.RE +.SH DESCRIPTION +.P Enables tab\-completion in all npm commands\. -. .P The synopsis above loads the completions into your current shell\. Adding it to your ~/\.bashrc or ~/\.zshrc will make the completions available everywhere\. -. .P You may of course also pipe the output of npm completion to a file such as \fB/usr/local/etc/bash_completion\.d/npm\fR if you have a system that will read that file for you\. -. .P When \fBCOMP_CWORD\fR, \fBCOMP_LINE\fR, and \fBCOMP_POINT\fR are defined in the environment, \fBnpm completion\fR acts in "plumbing mode", and outputs completions based on the arguments\. -. -.SH "SEE ALSO" -. -.IP "\(bu" 4 +.SH SEE ALSO +.RS 0 +.IP \(bu 2 npm help 7 developers -. -.IP "\(bu" 4 +.IP \(bu 2 npm help 7 faq -. -.IP "\(bu" 4 +.IP \(bu 2 npm help npm -. 
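Given the cache layout just described, the cache can also be inspected directly with core fs calls, not only via npm cache ls. A minimal sketch that lists the cached versions of one package, assuming the default Posix cache root and a hypothetical package name:

    var fs = require("fs")
    var path = require("path")

    // {cache}/{name}/{version}/package.tgz, per the layout described above
    var cacheRoot = path.join(process.env.HOME, ".npm")
    fs.readdir(path.join(cacheRoot, "sax"), function (er, versions) {
      if (er) throw er
      versions.forEach(function (v) {
        console.log("sax@" + v, "->",
            path.join(cacheRoot, "sax", v, "package.tgz"))
      })
    })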
-.IP "" 0 + +.RE diff --git a/deps/npm/man/man1/npm-config.1 b/deps/npm/man/man1/npm-config.1 index 0b019c7c025..a93ebace724 100644 --- a/deps/npm/man/man1/npm-config.1 +++ b/deps/npm/man/man1/npm-config.1 @@ -1,13 +1,9 @@ -.\" Generated with Ronnjs 0.3.8 -.\" http://github.com/kapouer/ronnjs/ -. -.TH "NPM\-CONFIG" "1" "September 2014" "" "" -. +.TH "NPM\-CONFIG" "1" "October 2014" "" "" .SH "NAME" -\fBnpm-config\fR \-\- Manage the npm configuration files -. -.SH "SYNOPSIS" -. +\fBnpm-config\fR \- Manage the npm configuration files +.SH SYNOPSIS +.P +.RS 2 .nf npm config set [\-\-global] npm config get @@ -17,97 +13,83 @@ npm config edit npm c [set|get|delete|list] npm get npm set [\-\-global] -. .fi -. -.SH "DESCRIPTION" +.RE +.SH DESCRIPTION +.P npm gets its config settings from the command line, environment variables, \fBnpmrc\fR files, and in some cases, the \fBpackage\.json\fR file\. -. .P See npm help 5 npmrc for more information about the npmrc files\. -. .P See npm help 7 \fBnpm\-config\fR for a more thorough discussion of the mechanisms involved\. -. .P The \fBnpm config\fR command can be used to update and edit the contents of the user and global npmrc files\. -. -.SH "Sub\-commands" +.SH Sub\-commands +.P Config supports the following sub\-commands: -. -.SS "set" -. +.SS set +.P +.RS 2 .nf npm config set key value -. .fi -. +.RE .P Sets the config key to the value\. -. .P If value is omitted, then it sets it to "true"\. -. -.SS "get" -. +.SS get +.P +.RS 2 .nf npm config get key -. .fi -. +.RE .P Echo the config value to stdout\. -. -.SS "list" -. +.SS list +.P +.RS 2 .nf npm config list -. .fi -. +.RE .P Show all the config settings\. -. -.SS "delete" -. +.SS delete +.P +.RS 2 .nf npm config delete key -. .fi -. +.RE .P Deletes the key from all configuration files\. -. -.SS "edit" -. +.SS edit +.P +.RS 2 .nf npm config edit -. .fi -. +.RE .P Opens the config file in an editor\. Use the \fB\-\-global\fR flag to edit the global config\. -. -.SH "SEE ALSO" -. -.IP "\(bu" 4 +.SH SEE ALSO +.RS 0 +.IP \(bu 2 npm help 5 folders -. -.IP "\(bu" 4 +.IP \(bu 2 npm help 7 config -. -.IP "\(bu" 4 +.IP \(bu 2 npm help 5 package\.json -. -.IP "\(bu" 4 +.IP \(bu 2 npm help 5 npmrc -. -.IP "\(bu" 4 +.IP \(bu 2 npm help npm -. -.IP "" 0 + +.RE diff --git a/deps/npm/man/man1/npm-dedupe.1 b/deps/npm/man/man1/npm-dedupe.1 index cdfa3520f68..24548077e30 100644 --- a/deps/npm/man/man1/npm-dedupe.1 +++ b/deps/npm/man/man1/npm-dedupe.1 @@ -1,96 +1,74 @@ -.\" Generated with Ronnjs 0.3.8 -.\" http://github.com/kapouer/ronnjs/ -. -.TH "NPM\-DEDUPE" "1" "September 2014" "" "" -. +.TH "NPM\-DEDUPE" "1" "October 2014" "" "" .SH "NAME" -\fBnpm-dedupe\fR \-\- Reduce duplication -. -.SH "SYNOPSIS" -. +\fBnpm-dedupe\fR \- Reduce duplication +.SH SYNOPSIS +.P +.RS 2 .nf npm dedupe [package names\.\.\.] npm ddp [package names\.\.\.] -. .fi -. -.SH "DESCRIPTION" +.RE +.SH DESCRIPTION +.P Searches the local package tree and attempts to simplify the overall structure by moving dependencies further up the tree, where they can be more effectively shared by multiple dependent packages\. -. .P For example, consider this dependency graph: -. -.IP "" 4 -. +.P +.RS 2 .nf a +\-\- b <\-\- depends on c@1\.0\.x | `\-\- c@1\.0\.3 `\-\- d <\-\- depends on c@~1\.0\.9 `\-\- c@1\.0\.10 -. .fi -. -.IP "" 0 -. +.RE .P In this case, npm help \fBnpm\-dedupe\fR will transform the tree to: -. -.IP "" 4 -. +.P +.RS 2 .nf a +\-\- b +\-\- d `\-\- c@1\.0\.10 -. .fi -. -.IP "" 0 -. 
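The dedupe hoist shown above is only legal because one concrete version of c satisfies both dependents. A quick check with the semver package, the same range logic npm itself uses, shows why c@1.0.10 can be shared while c@1.0.3 cannot:

    var semver = require("semver")

    console.log(semver.satisfies("1.0.10", "1.0.x"))  // true: b is satisfied
    console.log(semver.satisfies("1.0.10", "~1.0.9")) // true: d is satisfied
    console.log(semver.satisfies("1.0.3", "~1.0.9"))  // false: so c@1.0.3 is
                                                      // the duplicate to drop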
+.RE .P -Because of the hierarchical nature of node\'s module lookup, b and d +Because of the hierarchical nature of node's module lookup, b and d will both get their dependency met by the single c package at the root level of the tree\. -. .P If a suitable version exists at the target location in the tree already, then it will be left untouched, but the other duplicates will be deleted\. -. .P If no suitable version can be found, then a warning is printed, and nothing is done\. -. .P If any arguments are supplied, then they are filters, and only the named packages will be touched\. -. .P Note that this operation transforms the dependency tree, and may result in packages getting updated versions, perhaps from the npm registry\. -. .P This feature is experimental, and may change in future versions\. -. .P The \fB\-\-tag\fR argument will apply to all of the affected dependencies\. If a tag with the given name exists, the tagged version is preferred over newer versions\. -. -.SH "SEE ALSO" -. -.IP "\(bu" 4 +.SH SEE ALSO +.RS 0 +.IP \(bu 2 npm help ls -. -.IP "\(bu" 4 +.IP \(bu 2 npm help update -. -.IP "\(bu" 4 +.IP \(bu 2 npm help install -. -.IP "" 0 + +.RE diff --git a/deps/npm/man/man1/npm-deprecate.1 b/deps/npm/man/man1/npm-deprecate.1 index cc2d18ee52f..581a58948f3 100644 --- a/deps/npm/man/man1/npm-deprecate.1 +++ b/deps/npm/man/man1/npm-deprecate.1 @@ -1,48 +1,37 @@ -.\" Generated with Ronnjs 0.3.8 -.\" http://github.com/kapouer/ronnjs/ -. -.TH "NPM\-DEPRECATE" "1" "September 2014" "" "" -. +.TH "NPM\-DEPRECATE" "1" "October 2014" "" "" .SH "NAME" -\fBnpm-deprecate\fR \-\- Deprecate a version of a package -. -.SH "SYNOPSIS" -. +\fBnpm-deprecate\fR \- Deprecate a version of a package +.SH SYNOPSIS +.P +.RS 2 .nf npm deprecate [@] -. .fi -. -.SH "DESCRIPTION" +.RE +.SH DESCRIPTION +.P This command will update the npm registry entry for a package, providing a deprecation warning to all who attempt to install it\. -. .P It works on version ranges as well as specific versions, so you can do something like this: -. -.IP "" 4 -. +.P +.RS 2 .nf npm deprecate my\-thing@"< 0\.2\.3" "critical bug fixed in v0\.2\.3" -. .fi -. -.IP "" 0 -. +.RE .P -Note that you must be the package owner to deprecate something\. See the \fBowner\fR and \fBadduser\fR help topics\. -. +Note that you must be the package owner to deprecate something\. See the +\fBowner\fR and \fBadduser\fR help topics\. .P To un\-deprecate a package, specify an empty string (\fB""\fR) for the \fBmessage\fR argument\. -. -.SH "SEE ALSO" -. -.IP "\(bu" 4 +.SH SEE ALSO +.RS 0 +.IP \(bu 2 npm help publish -. -.IP "\(bu" 4 +.IP \(bu 2 npm help 7 registry -. -.IP "" 0 + +.RE diff --git a/deps/npm/man/man1/npm-docs.1 b/deps/npm/man/man1/npm-docs.1 index db3d4e768fb..1e9e5c19901 100644 --- a/deps/npm/man/man1/npm-docs.1 +++ b/deps/npm/man/man1/npm-docs.1 @@ -1,78 +1,60 @@ -.\" Generated with Ronnjs 0.3.8 -.\" http://github.com/kapouer/ronnjs/ -. -.TH "NPM\-DOCS" "1" "September 2014" "" "" -. +.TH "NPM\-DOCS" "1" "October 2014" "" "" .SH "NAME" -\fBnpm-docs\fR \-\- Docs for a package in a web browser maybe -. -.SH "SYNOPSIS" -. +\fBnpm-docs\fR \- Docs for a package in a web browser maybe +.SH SYNOPSIS +.P +.RS 2 .nf npm docs [ [ \.\.\.]] npm docs (with no args in a package dir) npm home [ [ \.\.\.]] npm home (with no args in a package dir) -. .fi -. 
-.SH "DESCRIPTION" -This command tries to guess at the likely location of a package\'s +.RE +.SH DESCRIPTION +.P +This command tries to guess at the likely location of a package's documentation URL, and then tries to open it using the \fB\-\-browser\fR config param\. You can pass multiple package names at once\. If no package name is provided, it will search for a \fBpackage\.json\fR in the current folder and use the \fBname\fR property\. -. -.SH "CONFIGURATION" -. -.SS "browser" -. -.IP "\(bu" 4 +.SH CONFIGURATION +.SS browser +.RS 0 +.IP \(bu 2 Default: OS X: \fB"open"\fR, Windows: \fB"start"\fR, Others: \fB"xdg\-open"\fR -. -.IP "\(bu" 4 +.IP \(bu 2 Type: String -. -.IP "" 0 -. + +.RE .P The browser that is called by the \fBnpm docs\fR command to open websites\. -. -.SS "registry" -. -.IP "\(bu" 4 +.SS registry +.RS 0 +.IP \(bu 2 Default: https://registry\.npmjs\.org/ -. -.IP "\(bu" 4 +.IP \(bu 2 Type: url -. -.IP "" 0 -. + +.RE .P The base URL of the npm package registry\. -. -.SH "SEE ALSO" -. -.IP "\(bu" 4 +.SH SEE ALSO +.RS 0 +.IP \(bu 2 npm help view -. -.IP "\(bu" 4 +.IP \(bu 2 npm help publish -. -.IP "\(bu" 4 +.IP \(bu 2 npm help 7 registry -. -.IP "\(bu" 4 +.IP \(bu 2 npm help config -. -.IP "\(bu" 4 +.IP \(bu 2 npm help 7 config -. -.IP "\(bu" 4 +.IP \(bu 2 npm help 5 npmrc -. -.IP "\(bu" 4 +.IP \(bu 2 npm help 5 package\.json -. -.IP "" 0 + +.RE diff --git a/deps/npm/man/man1/npm-edit.1 b/deps/npm/man/man1/npm-edit.1 index 036d0715a4f..8a19d125788 100644 --- a/deps/npm/man/man1/npm-edit.1 +++ b/deps/npm/man/man1/npm-edit.1 @@ -1,66 +1,50 @@ -.\" Generated with Ronnjs 0.3.8 -.\" http://github.com/kapouer/ronnjs/ -. -.TH "NPM\-EDIT" "1" "September 2014" "" "" -. +.TH "NPM\-EDIT" "1" "October 2014" "" "" .SH "NAME" -\fBnpm-edit\fR \-\- Edit an installed package -. -.SH "SYNOPSIS" -. +\fBnpm-edit\fR \- Edit an installed package +.SH SYNOPSIS +.P +.RS 2 .nf npm edit [@] -. .fi -. -.SH "DESCRIPTION" -Opens the package folder in the default editor (or whatever you\'ve +.RE +.SH DESCRIPTION +.P +Opens the package folder in the default editor (or whatever you've configured as the npm \fBeditor\fR config \-\- see npm help 7 \fBnpm\-config\fR\|\.) -. .P After it has been edited, the package is rebuilt so as to pick up any changes in compiled packages\. -. .P For instance, you can do \fBnpm install connect\fR to install connect into your package, and then \fBnpm edit connect\fR to make a few changes to your locally installed copy\. -. -.SH "CONFIGURATION" -. -.SS "editor" -. -.IP "\(bu" 4 +.SH CONFIGURATION +.SS editor +.RS 0 +.IP \(bu 2 Default: \fBEDITOR\fR environment variable if set, or \fB"vi"\fR on Posix, or \fB"notepad"\fR on Windows\. -. -.IP "\(bu" 4 +.IP \(bu 2 Type: path -. -.IP "" 0 -. + +.RE .P The command to run for \fBnpm edit\fR or \fBnpm config edit\fR\|\. -. -.SH "SEE ALSO" -. -.IP "\(bu" 4 +.SH SEE ALSO +.RS 0 +.IP \(bu 2 npm help 5 folders -. -.IP "\(bu" 4 +.IP \(bu 2 npm help explore -. -.IP "\(bu" 4 +.IP \(bu 2 npm help install -. -.IP "\(bu" 4 +.IP \(bu 2 npm help config -. -.IP "\(bu" 4 +.IP \(bu 2 npm help 7 config -. -.IP "\(bu" 4 +.IP \(bu 2 npm help 5 npmrc -. -.IP "" 0 + +.RE diff --git a/deps/npm/man/man1/npm-explore.1 b/deps/npm/man/man1/npm-explore.1 index c7d570745cf..0211aef43e9 100644 --- a/deps/npm/man/man1/npm-explore.1 +++ b/deps/npm/man/man1/npm-explore.1 @@ -1,76 +1,55 @@ -.\" Generated with Ronnjs 0.3.8 -.\" http://github.com/kapouer/ronnjs/ -. -.TH "NPM\-EXPLORE" "1" "September 2014" "" "" -. 
+.TH "NPM\-EXPLORE" "1" "October 2014" "" "" .SH "NAME" -\fBnpm-explore\fR \-\- Browse an installed package -. -.SH "SYNOPSIS" -. +\fBnpm-explore\fR \- Browse an installed package +.SH SYNOPSIS +.P +.RS 2 .nf npm explore [ \-\- ] -. .fi -. -.SH "DESCRIPTION" +.RE +.SH DESCRIPTION +.P Spawn a subshell in the directory of the installed package specified\. -. .P If a command is specified, then it is run in the subshell, which then immediately terminates\. -. .P -This is particularly handy in the case of git submodules in the \fBnode_modules\fR folder: -. -.IP "" 4 -. +This is particularly handy in the case of git submodules in the +\fBnode_modules\fR folder: +.P +.RS 2 .nf npm explore some\-dependency \-\- git pull origin master -. .fi -. -.IP "" 0 -. +.RE .P Note that the package is \fInot\fR automatically rebuilt afterwards, so be sure to use \fBnpm rebuild \fR if you make any changes\. -. -.SH "CONFIGURATION" -. -.SS "shell" -. -.IP "\(bu" 4 +.SH CONFIGURATION +.SS shell +.RS 0 +.IP \(bu 2 Default: SHELL environment variable, or "bash" on Posix, or "cmd" on Windows -. -.IP "\(bu" 4 +.IP \(bu 2 Type: path -. -.IP "" 0 -. + +.RE .P The shell to run for the \fBnpm explore\fR command\. -. -.SH "SEE ALSO" -. -.IP "\(bu" 4 -npm help submodule -. -.IP "\(bu" 4 +.SH SEE ALSO +.RS 0 +.IP \(bu 2 npm help 5 folders -. -.IP "\(bu" 4 +.IP \(bu 2 npm help edit -. -.IP "\(bu" 4 +.IP \(bu 2 npm help rebuild -. -.IP "\(bu" 4 +.IP \(bu 2 npm help build -. -.IP "\(bu" 4 +.IP \(bu 2 npm help install -. -.IP "" 0 + +.RE diff --git a/deps/npm/man/man1/npm-help-search.1 b/deps/npm/man/man1/npm-help-search.1 index 37ba03c7960..a18a8e97a68 100644 --- a/deps/npm/man/man1/npm-help-search.1 +++ b/deps/npm/man/man1/npm-help-search.1 @@ -1,59 +1,45 @@ -.\" Generated with Ronnjs 0.3.8 -.\" http://github.com/kapouer/ronnjs/ -. -.TH "NPM\-HELP\-SEARCH" "1" "September 2014" "" "" -. +.TH "NPM\-HELP\-SEARCH" "1" "October 2014" "" "" .SH "NAME" -\fBnpm-help-search\fR \-\- Search npm help documentation -. -.SH "SYNOPSIS" -. +\fBnpm-help-search\fR \- Search npm help documentation +.SH SYNOPSIS +.P +.RS 2 .nf npm help\-search some search terms -. .fi -. -.SH "DESCRIPTION" +.RE +.SH DESCRIPTION +.P This command will search the npm markdown documentation files for the terms provided, and then list the results, sorted by relevance\. -. .P If only one result is found, then it will show that help topic\. -. .P If the argument to \fBnpm help\fR is not a known help topic, then it will call \fBhelp\-search\fR\|\. It is rarely if ever necessary to call this command directly\. -. -.SH "CONFIGURATION" -. -.SS "long" -. -.IP "\(bu" 4 +.SH CONFIGURATION +.SS long +.RS 0 +.IP \(bu 2 Type: Boolean -. -.IP "\(bu" 4 +.IP \(bu 2 Default false -. -.IP "" 0 -. + +.RE .P If true, the "long" flag will cause help\-search to output context around where the terms were found in the documentation\. -. .P If false, then help\-search will just list out the help topics found\. -. -.SH "SEE ALSO" -. -.IP "\(bu" 4 +.SH SEE ALSO +.RS 0 +.IP \(bu 2 npm help npm -. -.IP "\(bu" 4 +.IP \(bu 2 npm help 7 faq -. -.IP "\(bu" 4 +.IP \(bu 2 npm help help -. -.IP "" 0 + +.RE diff --git a/deps/npm/man/man1/npm-help.1 b/deps/npm/man/man1/npm-help.1 index 7cc361f463b..556eeb52ee8 100644 --- a/deps/npm/man/man1/npm-help.1 +++ b/deps/npm/man/man1/npm-help.1 @@ -1,77 +1,57 @@ -.\" Generated with Ronnjs 0.3.8 -.\" http://github.com/kapouer/ronnjs/ -. -.TH "NPM\-HELP" "1" "September 2014" "" "" -. 
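npm explore, documented above, amounts to spawning a shell, or a single command, with its working directory set to the installed package's folder. A minimal sketch with core child_process; the dependency name and the command are illustrative:

    var path = require("path")
    var spawn = require("child_process").spawn

    var pkgDir = path.join(process.cwd(), "node_modules", "some-dependency")
    // run one command inside the package folder, sharing this terminal
    var child = spawn(process.env.SHELL || "bash",
        ["-c", "git pull origin master"],
        { cwd: pkgDir, stdio: "inherit" })
    child.on("exit", function (code) { process.exit(code || 0) })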
+.TH "NPM\-HELP" "1" "October 2014" "" "" .SH "NAME" -\fBnpm-help\fR \-\- Get help on npm -. -.SH "SYNOPSIS" -. +\fBnpm-help\fR \- Get help on npm +.SH SYNOPSIS +.P +.RS 2 .nf npm help npm help some search terms -. .fi -. -.SH "DESCRIPTION" +.RE +.SH DESCRIPTION +.P If supplied a topic, then show the appropriate documentation page\. -. .P If the topic does not exist, or if multiple terms are provided, then run the \fBhelp\-search\fR command to find a match\. Note that, if \fBhelp\-search\fR finds a single subject, then it will run \fBhelp\fR on that topic, so unique matches are equivalent to specifying a topic name\. -. -.SH "CONFIGURATION" -. -.SS "viewer" -. -.IP "\(bu" 4 +.SH CONFIGURATION +.SS viewer +.RS 0 +.IP \(bu 2 Default: "man" on Posix, "browser" on Windows -. -.IP "\(bu" 4 +.IP \(bu 2 Type: path -. -.IP "" 0 -. + +.RE .P The program to use to view help content\. -. .P Set to \fB"browser"\fR to view html help content in the default web browser\. -. -.SH "SEE ALSO" -. -.IP "\(bu" 4 +.SH SEE ALSO +.RS 0 +.IP \(bu 2 npm help npm -. -.IP "\(bu" 4 +.IP \(bu 2 README -. -.IP "\(bu" 4 +.IP \(bu 2 npm help 7 faq -. -.IP "\(bu" 4 +.IP \(bu 2 npm help 5 folders -. -.IP "\(bu" 4 +.IP \(bu 2 npm help config -. -.IP "\(bu" 4 +.IP \(bu 2 npm help 7 config -. -.IP "\(bu" 4 +.IP \(bu 2 npm help 5 npmrc -. -.IP "\(bu" 4 +.IP \(bu 2 npm help 5 package\.json -. -.IP "\(bu" 4 +.IP \(bu 2 npm help help\-search -. -.IP "\(bu" 4 +.IP \(bu 2 npm help 7 index -. -.IP "" 0 + +.RE diff --git a/deps/npm/man/man1/npm-init.1 b/deps/npm/man/man1/npm-init.1 index 5091fdefd83..3d4ed0957f5 100644 --- a/deps/npm/man/man1/npm-init.1 +++ b/deps/npm/man/man1/npm-init.1 @@ -1,43 +1,36 @@ -.\" Generated with Ronnjs 0.3.8 -.\" http://github.com/kapouer/ronnjs/ -. -.TH "NPM\-INIT" "1" "September 2014" "" "" -. +.TH "NPM\-INIT" "1" "October 2014" "" "" .SH "NAME" -\fBnpm-init\fR \-\- Interactively create a package\.json file -. -.SH "SYNOPSIS" -. +\fBnpm-init\fR \- Interactively create a package\.json file +.SH SYNOPSIS +.P +.RS 2 .nf -npm init -. +npm init [\-f|\-\-force|\-y|\-\-yes] .fi -. -.SH "DESCRIPTION" +.RE +.SH DESCRIPTION +.P This will ask you a bunch of questions, and then write a package\.json for you\. -. .P It attempts to make reasonable guesses about what you want things to be set to, -and then writes a package\.json file with the options you\'ve selected\. -. +and then writes a package\.json file with the options you've selected\. .P -If you already have a package\.json file, it\'ll read that first, and default to +If you already have a package\.json file, it'll read that first, and default to the options in there\. -. .P It is strictly additive, so it does not delete options from your package\.json without a really good reason to do so\. -. -.SH "SEE ALSO" -. -.IP "\(bu" 4 -\fIhttps://github\.com/isaacs/init\-package\-json\fR -. -.IP "\(bu" 4 +.P +If you invoke it with \fB\-f\fR, \fB\-\-force\fR, \fB\-y\fR, or \fB\-\-yes\fR, it will use only +defaults and not prompt you for any options\. +.SH SEE ALSO +.RS 0 +.IP \(bu 2 +https://github\.com/isaacs/init\-package\-json +.IP \(bu 2 npm help 5 package\.json -. -.IP "\(bu" 4 +.IP \(bu 2 npm help version -. -.IP "" 0 + +.RE diff --git a/deps/npm/man/man1/npm-install.1 b/deps/npm/man/man1/npm-install.1 index 7e874f34900..0df0197b0c5 100644 --- a/deps/npm/man/man1/npm-install.1 +++ b/deps/npm/man/man1/npm-install.1 @@ -1,334 +1,269 @@ -.\" Generated with Ronnjs 0.3.8 -.\" http://github.com/kapouer/ronnjs/ -. -.TH "NPM\-INSTALL" "1" "September 2014" "" "" -. 
+.TH "NPM\-INSTALL" "1" "October 2014" "" "" .SH "NAME" -\fBnpm-install\fR \-\- Install a package -. -.SH "SYNOPSIS" -. +\fBnpm-install\fR \- Install a package +.SH SYNOPSIS +.P +.RS 2 .nf npm install (with no args in a package dir) npm install <tarball file> npm install <tarball url> npm install <folder> -npm install <name> [\-\-save|\-\-save\-dev|\-\-save\-optional] [\-\-save\-exact] -npm install <name>@<tag> -npm install <name>@<version> -npm install <name>@<version range> +npm install [@<scope>/]<name> [\-\-save|\-\-save\-dev|\-\-save\-optional] [\-\-save\-exact] +npm install [@<scope>/]<name>@<tag> +npm install [@<scope>/]<name>@<version> +npm install [@<scope>/]<name>@<version range> npm i (with any of the previous argument usage) -. .fi -. -.SH "DESCRIPTION" +.RE +.SH DESCRIPTION +.P This command installs a package, and any packages that it depends on\. If the package has a shrinkwrap file, the installation of dependencies will be driven by that\. See npm help shrinkwrap\. -. .P A \fBpackage\fR is: -. -.IP "\(bu" 4 +.RS 0 +.IP \(bu 2 a) a folder containing a program described by a package\.json file -. -.IP "\(bu" 4 +.IP \(bu 2 b) a gzipped tarball containing (a) -. -.IP "\(bu" 4 +.IP \(bu 2 c) a url that resolves to (b) -. -.IP "\(bu" 4 +.IP \(bu 2 d) a \fB<name>@<version>\fR that is published on the registry (see npm help 7 \fBnpm\-registry\fR) with (c) -. -.IP "\(bu" 4 +.IP \(bu 2 e) a \fB<name>@<tag>\fR that points to (d) -. -.IP "\(bu" 4 +.IP \(bu 2 f) a \fB<name>\fR that has a "latest" tag satisfying (e) -. -.IP "\(bu" 4 +.IP \(bu 2 g) a \fB<git remote url>\fR that resolves to (b) -. -.IP "" 0 -. + +.RE .P Even if you never publish your package, you can still get a lot of benefits of using npm if you just want to write a node program (a), and perhaps if you also want to be able to easily install it elsewhere after packing it up into a tarball (b)\. -. -.IP "\(bu" 4 +.RS 0 +.IP \(bu 2 \fBnpm install\fR (in package directory, no arguments): -. -.IP -Install the dependencies in the local node_modules folder\. -. -.IP -In global mode (ie, with \fB\-g\fR or \fB\-\-global\fR appended to the command), -it installs the current package context (ie, the current working -directory) as a global package\. -. -.IP -By default, \fBnpm install\fR will install all modules listed as -dependencies\. With the \fB\-\-production\fR flag, -npm will not install modules listed in \fBdevDependencies\fR\|\. -. -.IP "\(bu" 4 + Install the dependencies in the local node_modules folder\. + In global mode (ie, with \fB\-g\fR or \fB\-\-global\fR appended to the command), + it installs the current package context (ie, the current working + directory) as a global package\. + By default, \fBnpm install\fR will install all modules listed as + dependencies\. With the \fB\-\-production\fR flag, + npm will not install modules listed in \fBdevDependencies\fR\|\. +.IP \(bu 2 \fBnpm install <folder>\fR: -. -.IP -Install a package that is sitting in a folder on the filesystem\. -. -.IP "\(bu" 4 + Install a package that is sitting in a folder on the filesystem\. +.IP \(bu 2 \fBnpm install <tarball file>\fR: -. -.IP -Install a package that is sitting on the filesystem\. Note: if you just want -to link a dev directory into your npm root, you can do this more easily by -using \fBnpm link\fR\|\. -. -.IP -Example: -. -.IP "" 4 -. + Install a package that is sitting on the filesystem\. Note: if you just want + to link a dev directory into your npm root, you can do this more easily by + using \fBnpm link\fR\|\. + Example: +.P +.RS 2 .nf - npm install \./package\.tgz -. + npm install \./package\.tgz .fi -. -.IP "" 0 - -. -.IP "\(bu" 4 +.RE +.IP \(bu 2 \fBnpm install <tarball url>\fR: -. -.IP -Fetch the tarball url, and then install it\. In order to distinguish between -this and other options, the argument must start with "http://" or "https://" -. -.IP -Example: -. -.IP "" 4 -. + Fetch the tarball url, and then install it\. In order to distinguish between + this and other options, the argument must start with "http://" or "https://" + Example: +.P +.RS 2 .nf - npm install https://github\.com/indexzero/forever/tarball/v0\.5\.6 -. + npm install https://github\.com/indexzero/forever/tarball/v0\.5\.6 .fi -. -.IP "" 0 - -. -.IP "\(bu" 4 -\fBnpm install <name> [\-\-save|\-\-save\-dev|\-\-save\-optional]\fR: -. -.IP -Do a \fB<name>@<tag>\fR install, where \fB<tag>\fR is the "tag" config\. (See npm help 7 \fBnpm\-config\fR\|\.) -. -.IP -In most cases, this will install the latest version -of the module published on npm\. -. -.IP -Example: -. -.IP - npm install sax -. -.IP -\fBnpm install\fR takes 3 exclusive, optional flags which save or update -the package version in your main package\.json: -. -.IP "\(bu" 4 +.RE +.IP \(bu 2 +\fBnpm install [@<scope>/]<name> [\-\-save|\-\-save\-dev|\-\-save\-optional]\fR: + Do a \fB<name>@<tag>\fR install, where \fB<tag>\fR is the "tag" config\. (See + npm help 7 \fBnpm\-config\fR\|\.) + In most cases, this will install the latest version + of the module published on npm\. + Example: +.P +.RS 2 +.nf + npm install sax +.fi +.RE + \fBnpm install\fR takes 3 exclusive, optional flags which save or update + the package version in your main package\.json: +.RS 0 +.IP \(bu 2 \fB\-\-save\fR: Package will appear in your \fBdependencies\fR\|\. -. -.IP "\(bu" 4 +.IP \(bu 2 \fB\-\-save\-dev\fR: Package will appear in your \fBdevDependencies\fR\|\. -. -.IP "\(bu" 4 +.IP \(bu 2 \fB\-\-save\-optional\fR: Package will appear in your \fBoptionalDependencies\fR\|\. -. -.IP When using any of the above options to save dependencies to your package\.json, there is an additional, optional flag: -. -.IP "\(bu" 4 +.IP \(bu 2 \fB\-\-save\-exact\fR: Saved dependencies will be configured with an -exact version rather than using npm\'s default semver range +exact version rather than using npm's default semver range operator\. -. -.IP +\fB<scope>\fR is optional\. The package will be downloaded from the registry +associated with the specified scope\. If no registry is associated with +the given scope the default registry is assumed\. See npm help 7 \fBnpm\-scope\fR\|\. +Note: if you do not include the @\-symbol on your scope name, npm will +interpret this as a GitHub repository instead, see below\. Scope names +must also be followed by a slash\. Examples: -. -.IP - npm install sax \-\-save - npm install node\-tap \-\-save\-dev - npm install dtrace\-provider \-\-save\-optional - npm install readable\-stream \-\-save \-\-save\-exact -. -.IP -\fBNote\fR: If there is a file or folder named \fB<name>\fR in the current +.P +.RS 2 +.nf +npm install sax \-\-save +npm install githubname/reponame +npm install @myorg/privatepackage +npm install node\-tap \-\-save\-dev +npm install dtrace\-provider \-\-save\-optional +npm install readable\-stream \-\-save \-\-save\-exact +.fi +.RE + +.RE + +.RE +.P +\fBNote\fR: If there is a file or folder named \fB<name>\fR in the current working directory, then it will try to install that, and only try to fetch the package by name if it is not valid\. -. -.IP "" 0 - -. -.IP "\(bu" 4 -\fBnpm install <name>@<tag>\fR: -. -.IP -Install the version of the package that is referenced by the specified tag\. -If the tag does not exist in the registry data for that package, then this -will fail\. -. -.IP -Example: -. -.IP "" 4 -. +.RS 0 +.IP \(bu 2 +\fBnpm install [@<scope>/]<name>@<tag>\fR: + Install the version of the package that is referenced by the specified tag\. + If the tag does not exist in the registry data for that package, then this + will fail\. + Example: +.P +.RS 2 .nf - npm install sax@latest -. + npm install sax@latest + npm install @myorg/mypackage@latest .fi -. -.IP "" 0 - -. -.IP "\(bu" 4 -\fBnpm install <name>@<version>\fR: -. -.IP -Install the specified version of the package\. This will fail if the version -has not been published to the registry\. -. -.IP -Example: -. -.IP "" 4 -. +.RE +.IP \(bu 2 +\fBnpm install [@<scope>/]<name>@<version>\fR: + Install the specified version of the package\. This will fail if the + version has not been published to the registry\. + Example: +.P +.RS 2 .nf - npm install sax@0\.1\.1 -. + npm install sax@0\.1\.1 + npm install @myorg/privatepackage@1\.5\.0 .fi -. -.IP "" 0 - -. -.IP "\(bu" 4 -\fBnpm install <name>@<version range>\fR: -. -.IP -Install a version of the package matching the specified version range\. This -will follow the same rules for resolving dependencies described in npm help 5 \fBpackage\.json\fR\|\. -. -.IP -Note that most version ranges must be put in quotes so that your shell will -treat it as a single argument\. -. -.IP -Example: -. -.IP - npm install sax@">=0\.1\.0 <0\.2\.0" -. -.IP "\(bu" 4 +.RE +.IP \(bu 2 +\fBnpm install [@<scope>/]<name>@<version range>\fR: + Install a version of the package matching the specified version range\. This + will follow the same rules for resolving dependencies described in npm help 5 \fBpackage\.json\fR\|\. + Note that most version ranges must be put in quotes so that your shell will + treat it as a single argument\. + Example: +.P +.RS 2 +.nf + npm install sax@">=0\.1\.0 <0\.2\.0" + npm install @myorg/privatepackage@">=0\.1\.0 <0\.2\.0" +.fi +.RE +.IP \(bu 2 +\fBnpm install <githubname>/<githubrepo>\fR: + Install the package at \fBhttps://github\.com/githubname/githubrepo\fR by + attempting to clone it using \fBgit\fR\|\. + Example: +.P +.RS 2 +.nf + npm install mygithubuser/myproject +.fi +.RE + To reference a package in a git repo that is not on GitHub, see git + remote urls below\. +.IP \(bu 2 \fBnpm install <git remote url>\fR: -. -.IP -Install a package by cloning a git remote url\. The format of the git -url is: -. -.IP - <protocol>://[<user>@]<hostname><separator><path>[#<commit\-ish>] -. -.IP -\fB<protocol>\fR is one of \fBgit\fR, \fBgit+ssh\fR, \fBgit+http\fR, or \fBgit+https\fR\|\. If no \fB<commit\-ish>\fR is specified, then \fBmaster\fR is -used\. -. -.IP -Examples: -. -.IP "" 4 -. + Install a package by cloning a git remote url\. The format of the git + url is: +.P +.RS 2 .nf - git+ssh://git@github\.com:npm/npm\.git#v1\.0\.27 - git+https://isaacs@github\.com/npm/npm\.git - git://github\.com/npm/npm\.git#v1\.0\.27 -. + <protocol>://[<user>@]<hostname><separator><path>[#<commit\-ish>] .fi -. -.IP "" 0 +.RE + \fB<protocol>\fR is one of \fBgit\fR, \fBgit+ssh\fR, \fBgit+http\fR, or + \fBgit+https\fR\|\. If no \fB<commit\-ish>\fR is specified, then \fBmaster\fR is + used\. + Examples: +.P +.RS 2 +.nf + git+ssh://git@github\.com:npm/npm\.git#v1\.0\.27 + git+https://isaacs@github\.com/npm/npm\.git + git://github\.com/npm/npm\.git#v1\.0\.27 .fi +.RE -. -.IP "" 0 -. +.RE .P You may combine multiple arguments, and even multiple types of arguments\. For example: -. -.IP "" 4 -. +.P +.RS 2 .nf npm install sax@">=0\.1\.0 <0\.2\.0" bench supervisor -. .fi -. -.IP "" 0 -. +.RE .P The \fB\-\-tag\fR argument will apply to all of the specified install targets\. If a tag with the given name exists, the tagged version is preferred over newer versions\. -. .P The \fB\-\-force\fR argument will force npm to fetch remote resources even if a local copy exists on disk\. -. -.IP "" 4 -.
.P
The \fB\-\-global\fR argument will cause npm to install the package globally
rather than locally\. See npm help 5 \fBnpm\-folders\fR\|\.
-.
.P
The \fB\-\-link\fR argument will cause npm to link global installs into the
local space in some cases\.
-.
.P
The \fB\-\-no\-bin\-links\fR argument will prevent npm from creating symlinks for
any binaries the package might contain\.
-.
.P
The \fB\-\-no\-optional\fR argument will prevent optional dependencies from
being installed\.
-.
.P
The \fB\-\-no\-shrinkwrap\fR argument will ignore an available
shrinkwrap file and use the package\.json instead\.
-.
.P
The \fB\-\-nodedir=/path/to/node/source\fR argument will allow npm to find the
node source code so that npm can compile native modules\.
-.
.P
See npm help 7 \fBnpm\-config\fR\|\. Many of the configuration params have some
-effect on installation, since that\'s most of what npm does\.
-.
-.SH "ALGORITHM"
+effect on installation, since that's most of what npm does\.
+.SH ALGORITHM
+.P
To install a package, npm uses the following algorithm:
-.
-.IP "" 4
-.
+.P
+.RS 2
.nf
install(where, what, family, ancestors)
fetch what, unpack to <where>/node_modules/<what>
@@ -339,103 +274,78 @@ for each dep@version in what\.dependencies
and not in <family>
add precise version deps to <family>
install(<where>/node_modules/<what>, dep, family)
-.
.fi
-.
-.IP "" 0
-.
+.RE
.P
For this \fBpackage{dep}\fR structure: \fBA{B,C}, B{C}, C{D}\fR, this
algorithm produces:
-.
-.IP "" 4
-.
+.P
+.RS 2
.nf
A
+\-\- B
`\-\- C
`\-\- D
-.
.fi
-.
-.IP "" 0
-.
+.RE
.P
That is, the dependency from B to C is satisfied by the fact that A
already caused C to be installed at a higher level\.
-.
.P
See npm help 5 folders for a more detailed description of the specific
folder structures that npm creates\.
-.
-.SS "Limitations of npm's Install Algorithm"
+.SS Limitations of npm's Install Algorithm
+.P
There are some very rare and pathological edge\-cases where a cycle can
cause npm to try to install a never\-ending tree of packages\. Here is
the simplest case:
-.
-.IP "" 4
-.
+.P
+.RS 2
.nf
-A \-> B \-> A\' \-> B\' \-> A \-> B \-> A\' \-> B\' \-> A \-> \.\.\.
-.
+A \-> B \-> A' \-> B' \-> A \-> B \-> A' \-> B' \-> A \-> \.\.\.
.fi
-.
-.IP "" 0
-.
+.RE
.P
-where \fBA\fR is some version of a package, and \fBA\'\fR is a different version
+where \fBA\fR is some version of a package, and \fBA'\fR is a different version
of the same package\. Because \fBB\fR depends on a different version of \fBA\fR
than the one that is already in the tree, it must install a separate
-copy\. The same is true of \fBA\'\fR, which must install \fBB\'\fR\|\. Because \fBB\'\fR
+copy\. The same is true of \fBA'\fR, which must install \fBB'\fR\|\. Because \fBB'\fR
depends on the original version of \fBA\fR, which has been overridden, the
cycle falls into infinite regress\.
-.
.P
-To avoid this situation, npm flat\-out refuses to install any \fBname@version\fR that is already present anywhere in the tree of package
+To avoid this situation, npm flat\-out refuses to install any
+\fBname@version\fR that is already present anywhere in the tree of package
folder ancestors\. A more correct, but more complex, solution would be
to symlink the existing version into the new location\. If this ever
affects a real use\-case, it will be investigated\.
-.
-.SH "SEE ALSO"
-.
-.IP "\(bu" 4
+.SH SEE ALSO
+.RS 0
+.IP \(bu 2
npm help 5 folders
-.
-.IP "\(bu" 4
+.IP \(bu 2
npm help update
-.
-.IP "\(bu" 4
+.IP \(bu 2
npm help link
-.
-.IP "\(bu" 4
+.IP \(bu 2
npm help rebuild
-.
-.IP "\(bu" 4
+.IP \(bu 2
npm help 7 scripts
-.
-.IP "\(bu" 4
+.IP \(bu 2
npm help build
-.
-.IP "\(bu" 4
+.IP \(bu 2
npm help config
-.
-.IP "\(bu" 4
+.IP \(bu 2
npm help 7 config
-.
-.IP "\(bu" 4
+.IP \(bu 2
npm help 5 npmrc
-.
-.IP "\(bu" 4
+.IP \(bu 2
npm help 7 registry
-.
-.IP "\(bu" 4
+.IP \(bu 2
npm help tag
-.
-.IP "\(bu" 4
+.IP \(bu 2
npm help rm
-.
-.IP "\(bu" 4
+.IP \(bu 2
npm help shrinkwrap
-.
-.IP "" 0
+
+.RE
diff --git a/deps/npm/man/man1/npm-link.1 b/deps/npm/man/man1/npm-link.1
index 15d45e4e079..62d76503f6a 100644
--- a/deps/npm/man/man1/npm-link.1
+++ b/deps/npm/man/man1/npm-link.1
@@ -1,119 +1,100 @@
-.\" Generated with Ronnjs 0.3.8
-.\" http://github.com/kapouer/ronnjs/
-.
-.TH "NPM\-LINK" "1" "September 2014" "" ""
-.
+.TH "NPM\-LINK" "1" "October 2014" "" ""
.SH "NAME"
-\fBnpm-link\fR \-\- Symlink a package folder
-.
-.SH "SYNOPSIS"
-.
+\fBnpm-link\fR \- Symlink a package folder
+.SH SYNOPSIS
+.P
+.RS 2
.nf
npm link (in package folder)
-npm link <pkgname>
+npm link [@<scope>/]<pkgname>
npm ln (with any of the previous argument usage)
-.
.fi
-.
-.SH "DESCRIPTION"
+.RE
+.SH DESCRIPTION
+.P
Package linking is a two\-step process\.
-.
.P
First, \fBnpm link\fR in a package folder will create a globally\-installed
-symbolic link from \fBprefix/package\-name\fR to the current folder\.
-.
+symbolic link from \fBprefix/package\-name\fR to the current folder (see
+npm help 7 \fBnpm\-config\fR for the value of \fBprefix\fR)\.
.P
Next, in some other location, \fBnpm link package\-name\fR will create a
symlink from the local \fBnode_modules\fR folder to the global symlink\.
-.
.P
Note that \fBpackage\-name\fR is taken from \fBpackage\.json\fR,
not from directory name\.
-.
+.P
+The package name can be optionally prefixed with a scope\. See npm help 7 \fBnpm\-scope\fR\|\.
+The scope must be preceded by an @\-symbol and followed by a slash\.
.P
When creating tarballs for \fBnpm publish\fR, the linked packages are
"snapshotted" to their current state by resolving the symbolic links\.
-.
.P
-This is
-handy for installing your own stuff, so that you can work on it and test it
-iteratively without having to continually rebuild\.
-.
+This is handy for installing your own stuff, so that you can work on it and
+test it iteratively without having to continually rebuild\.
.P
For example:
-.
-.IP "" 4
-.
+.P
+.RS 2
.nf
cd ~/projects/node\-redis # go into the package directory
npm link # creates global link
cd ~/projects/node\-bloggy # go into some other package directory\.
npm link redis # link\-install the package
-.
.fi
-.
-.IP "" 0
-.
+.RE
.P
Now, any changes to ~/projects/node\-redis will be reflected in
~/projects/node\-bloggy/node_modules/redis/
-.
.P
You may also shortcut the two steps in one\. For example, to do the
above use\-case in a shorter way:
-.
-.IP "" 4
-.
+.P
+.RS 2
.nf
cd ~/projects/node\-bloggy # go into the dir of your main project
npm link \.\./node\-redis # link the dir of your dependency
-.
.fi
-.
-.IP "" 0
-.
+.RE
.P
The second line is the equivalent of doing:
-.
-.IP "" 4
-.
+.P
+.RS 2
.nf
(cd \.\./node\-redis; npm link)
npm link redis
-.
.fi
-.
-.IP "" 0
-.
+.RE
.P
That is, it first creates a global link, and then links the global
-installation target into your project\'s \fBnode_modules\fR folder\.
-.
-.SH "SEE ALSO"
-.
-.IP "\(bu" 4
+installation target into your project's \fBnode_modules\fR folder\.
+.P
+If your linked package is scoped (see npm help 7 \fBnpm\-scope\fR) your link command must
+include that scope, e\.g\.
+.P +.RS 2 +.nf +npm link @myorg/privatepackage +.fi +.RE +.SH SEE ALSO +.RS 0 +.IP \(bu 2 npm help 7 developers -. -.IP "\(bu" 4 +.IP \(bu 2 npm help 7 faq -. -.IP "\(bu" 4 +.IP \(bu 2 npm help 5 package\.json -. -.IP "\(bu" 4 +.IP \(bu 2 npm help install -. -.IP "\(bu" 4 +.IP \(bu 2 npm help 5 folders -. -.IP "\(bu" 4 +.IP \(bu 2 npm help config -. -.IP "\(bu" 4 +.IP \(bu 2 npm help 7 config -. -.IP "\(bu" 4 +.IP \(bu 2 npm help 5 npmrc -. -.IP "" 0 + +.RE diff --git a/deps/npm/man/man1/npm-ls.1 b/deps/npm/man/man1/npm-ls.1 index 1584fb0f28c..9cf4823c9a6 100644 --- a/deps/npm/man/man1/npm-ls.1 +++ b/deps/npm/man/man1/npm-ls.1 @@ -1,146 +1,111 @@ -.\" Generated with Ronnjs 0.3.8 -.\" http://github.com/kapouer/ronnjs/ -. -.TH "NPM\-LS" "1" "September 2014" "" "" -. +.TH "NPM\-LS" "1" "October 2014" "" "" .SH "NAME" -\fBnpm-ls\fR \-\- List installed packages -. -.SH "SYNOPSIS" -. +\fBnpm-ls\fR \- List installed packages +.SH SYNOPSIS +.P +.RS 2 .nf -npm list [ \.\.\.] -npm ls [ \.\.\.] -npm la [ \.\.\.] -npm ll [ \.\.\.] -. +npm list [[@/] \.\.\.] +npm ls [[@/] \.\.\.] +npm la [[@/] \.\.\.] +npm ll [[@/] \.\.\.] .fi -. -.SH "DESCRIPTION" +.RE +.SH DESCRIPTION +.P This command will print to stdout all the versions of packages that are installed, as well as their dependencies, in a tree\-structure\. -. .P Positional arguments are \fBname@version\-range\fR identifiers, which will limit the results to only the paths to the packages named\. Note that nested packages will \fIalso\fR show the paths to the specified packages\. -For example, running \fBnpm ls promzard\fR in npm\'s source tree will show: -. -.IP "" 4 -. +For example, running \fBnpm ls promzard\fR in npm's source tree will show: +.P +.RS 2 .nf -npm@1.4.28 /path/to/npm +npm@2.1.6 /path/to/npm └─┬ init\-package\-json@0\.0\.4 └── promzard@0\.1\.5 -. .fi -. -.IP "" 0 -. +.RE .P It will print out extraneous, missing, and invalid packages\. -. .P If a project specifies git urls for dependencies these are shown in parentheses after the name@version to make it easier for users to recognize potential forks of a project\. -. .P When run as \fBll\fR or \fBla\fR, it shows extended information by default\. -. -.SH "CONFIGURATION" -. -.SS "json" -. -.IP "\(bu" 4 +.SH CONFIGURATION +.SS json +.RS 0 +.IP \(bu 2 Default: false -. -.IP "\(bu" 4 +.IP \(bu 2 Type: Boolean -. -.IP "" 0 -. + +.RE .P Show information in JSON format\. -. -.SS "long" -. -.IP "\(bu" 4 +.SS long +.RS 0 +.IP \(bu 2 Default: false -. -.IP "\(bu" 4 +.IP \(bu 2 Type: Boolean -. -.IP "" 0 -. + +.RE .P Show extended information\. -. -.SS "parseable" -. -.IP "\(bu" 4 +.SS parseable +.RS 0 +.IP \(bu 2 Default: false -. -.IP "\(bu" 4 +.IP \(bu 2 Type: Boolean -. -.IP "" 0 -. + +.RE .P Show parseable output instead of tree view\. -. -.SS "global" -. -.IP "\(bu" 4 +.SS global +.RS 0 +.IP \(bu 2 Default: false -. -.IP "\(bu" 4 +.IP \(bu 2 Type: Boolean -. -.IP "" 0 -. + +.RE .P List packages in the global install prefix instead of in the current project\. -. -.SS "depth" -. -.IP "\(bu" 4 +.SS depth +.RS 0 +.IP \(bu 2 Type: Int -. -.IP "" 0 -. + +.RE .P Max display depth of the dependency tree\. -. -.SH "SEE ALSO" -. -.IP "\(bu" 4 +.SH SEE ALSO +.RS 0 +.IP \(bu 2 npm help config -. -.IP "\(bu" 4 +.IP \(bu 2 npm help 7 config -. -.IP "\(bu" 4 +.IP \(bu 2 npm help 5 npmrc -. -.IP "\(bu" 4 +.IP \(bu 2 npm help 5 folders -. -.IP "\(bu" 4 +.IP \(bu 2 npm help install -. -.IP "\(bu" 4 +.IP \(bu 2 npm help link -. -.IP "\(bu" 4 +.IP \(bu 2 npm help prune -. 
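The npm ls configuration switches above compose; a quick sketch (assuming a
project whose dependencies are already installed):

    npm ls --depth=0            # top-level packages only
    npm ls --parseable --long   # absolute paths plus extended info
    npm ls --json promzard      # JSON output, limited to one package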
-.IP "\(bu" 4 +.IP \(bu 2 npm help outdated -. -.IP "\(bu" 4 +.IP \(bu 2 npm help update -. -.IP "" 0 + +.RE diff --git a/deps/npm/man/man1/npm-outdated.1 b/deps/npm/man/man1/npm-outdated.1 index 7376fcd24af..45433a814e2 100644 --- a/deps/npm/man/man1/npm-outdated.1 +++ b/deps/npm/man/man1/npm-outdated.1 @@ -1,102 +1,79 @@ -.\" Generated with Ronnjs 0.3.8 -.\" http://github.com/kapouer/ronnjs/ -. -.TH "NPM\-OUTDATED" "1" "September 2014" "" "" -. +.TH "NPM\-OUTDATED" "1" "October 2014" "" "" .SH "NAME" -\fBnpm-outdated\fR \-\- Check for outdated packages -. -.SH "SYNOPSIS" -. +\fBnpm-outdated\fR \- Check for outdated packages +.SH SYNOPSIS +.P +.RS 2 .nf npm outdated [ [ \.\.\.]] -. .fi -. -.SH "DESCRIPTION" +.RE +.SH DESCRIPTION +.P This command will check the registry to see if any (or, specific) installed packages are currently outdated\. -. .P -The resulting field \'wanted\' shows the latest version according to the -version specified in the package\.json, the field \'latest\' the very latest +The resulting field 'wanted' shows the latest version according to the +version specified in the package\.json, the field 'latest' the very latest version of the package\. -. -.SH "CONFIGURATION" -. -.SS "json" -. -.IP "\(bu" 4 +.SH CONFIGURATION +.SS json +.RS 0 +.IP \(bu 2 Default: false -. -.IP "\(bu" 4 +.IP \(bu 2 Type: Boolean -. -.IP "" 0 -. + +.RE .P Show information in JSON format\. -. -.SS "long" -. -.IP "\(bu" 4 +.SS long +.RS 0 +.IP \(bu 2 Default: false -. -.IP "\(bu" 4 +.IP \(bu 2 Type: Boolean -. -.IP "" 0 -. + +.RE .P Show extended information\. -. -.SS "parseable" -. -.IP "\(bu" 4 +.SS parseable +.RS 0 +.IP \(bu 2 Default: false -. -.IP "\(bu" 4 +.IP \(bu 2 Type: Boolean -. -.IP "" 0 -. + +.RE .P Show parseable output instead of tree view\. -. -.SS "global" -. -.IP "\(bu" 4 +.SS global +.RS 0 +.IP \(bu 2 Default: false -. -.IP "\(bu" 4 +.IP \(bu 2 Type: Boolean -. -.IP "" 0 -. + +.RE .P Check packages in the global install prefix instead of in the current project\. -. -.SS "depth" -. -.IP "\(bu" 4 +.SS depth +.RS 0 +.IP \(bu 2 Type: Int -. -.IP "" 0 -. + +.RE .P Max depth for checking dependency tree\. -. -.SH "SEE ALSO" -. -.IP "\(bu" 4 +.SH SEE ALSO +.RS 0 +.IP \(bu 2 npm help update -. -.IP "\(bu" 4 +.IP \(bu 2 npm help 7 registry -. -.IP "\(bu" 4 +.IP \(bu 2 npm help 5 folders -. -.IP "" 0 + +.RE diff --git a/deps/npm/man/man1/npm-owner.1 b/deps/npm/man/man1/npm-owner.1 index f204431e5dc..3ed5549f71c 100644 --- a/deps/npm/man/man1/npm-owner.1 +++ b/deps/npm/man/man1/npm-owner.1 @@ -1,58 +1,47 @@ -.\" Generated with Ronnjs 0.3.8 -.\" http://github.com/kapouer/ronnjs/ -. -.TH "NPM\-OWNER" "1" "September 2014" "" "" -. +.TH "NPM\-OWNER" "1" "October 2014" "" "" .SH "NAME" -\fBnpm-owner\fR \-\- Manage package owners -. -.SH "SYNOPSIS" -. +\fBnpm-owner\fR \- Manage package owners +.SH SYNOPSIS +.P +.RS 2 .nf npm owner ls npm owner add npm owner rm -. .fi -. -.SH "DESCRIPTION" +.RE +.SH DESCRIPTION +.P Manage ownership of published packages\. -. -.IP "\(bu" 4 +.RS 0 +.IP \(bu 2 ls: List all the users who have access to modify a package and push new versions\. Handy when you need to know who to bug for help\. -. -.IP "\(bu" 4 +.IP \(bu 2 add: Add a new user as a maintainer of a package\. This user is enabled to modify metadata, publish new versions, and add other owners\. -. -.IP "\(bu" 4 +.IP \(bu 2 rm: Remove a user from the package owner list\. This immediately revokes their privileges\. -. -.IP "" 0 -. + +.RE .P Note that there is only one level of access\. 
Either you can modify a package, -or you can\'t\. Future versions may contain more fine\-grained access levels, but +or you can't\. Future versions may contain more fine\-grained access levels, but that is not implemented at this time\. -. -.SH "SEE ALSO" -. -.IP "\(bu" 4 +.SH SEE ALSO +.RS 0 +.IP \(bu 2 npm help publish -. -.IP "\(bu" 4 +.IP \(bu 2 npm help 7 registry -. -.IP "\(bu" 4 +.IP \(bu 2 npm help adduser -. -.IP "\(bu" 4 +.IP \(bu 2 npm help 7 disputes -. -.IP "" 0 + +.RE diff --git a/deps/npm/man/man1/npm-pack.1 b/deps/npm/man/man1/npm-pack.1 index 951d209adb1..8b9408abb27 100644 --- a/deps/npm/man/man1/npm-pack.1 +++ b/deps/npm/man/man1/npm-pack.1 @@ -1,48 +1,37 @@ -.\" Generated with Ronnjs 0.3.8 -.\" http://github.com/kapouer/ronnjs/ -. -.TH "NPM\-PACK" "1" "September 2014" "" "" -. +.TH "NPM\-PACK" "1" "October 2014" "" "" .SH "NAME" -\fBnpm-pack\fR \-\- Create a tarball from a package -. -.SH "SYNOPSIS" -. +\fBnpm-pack\fR \- Create a tarball from a package +.SH SYNOPSIS +.P +.RS 2 .nf npm pack [ [ \.\.\.]] -. .fi -. -.SH "DESCRIPTION" -For anything that\'s installable (that is, a package folder, tarball, +.RE +.SH DESCRIPTION +.P +For anything that's installable (that is, a package folder, tarball, tarball url, name@tag, name@version, or name), this command will fetch it to the cache, and then copy the tarball to the current working directory as \fB\-\.tgz\fR, and then write the filenames out to stdout\. -. .P If the same package is specified multiple times, then the file will be overwritten the second time\. -. .P If no arguments are supplied, then npm packs the current package folder\. -. -.SH "SEE ALSO" -. -.IP "\(bu" 4 +.SH SEE ALSO +.RS 0 +.IP \(bu 2 npm help cache -. -.IP "\(bu" 4 +.IP \(bu 2 npm help publish -. -.IP "\(bu" 4 +.IP \(bu 2 npm help config -. -.IP "\(bu" 4 +.IP \(bu 2 npm help 7 config -. -.IP "\(bu" 4 +.IP \(bu 2 npm help 5 npmrc -. -.IP "" 0 + +.RE diff --git a/deps/npm/man/man1/npm-prefix.1 b/deps/npm/man/man1/npm-prefix.1 index 9cc3f7cadd6..b7bcac63956 100644 --- a/deps/npm/man/man1/npm-prefix.1 +++ b/deps/npm/man/man1/npm-prefix.1 @@ -1,40 +1,34 @@ -.\" Generated with Ronnjs 0.3.8 -.\" http://github.com/kapouer/ronnjs/ -. -.TH "NPM\-PREFIX" "1" "September 2014" "" "" -. +.TH "NPM\-PREFIX" "1" "October 2014" "" "" .SH "NAME" -\fBnpm-prefix\fR \-\- Display prefix -. -.SH "SYNOPSIS" -. +\fBnpm-prefix\fR \- Display prefix +.SH SYNOPSIS +.P +.RS 2 .nf -npm prefix -. +npm prefix [\-g] .fi -. -.SH "DESCRIPTION" -Print the prefix to standard out\. -. -.SH "SEE ALSO" -. -.IP "\(bu" 4 +.RE +.SH DESCRIPTION +.P +Print the local prefix to standard out\. This is the closest parent directory +to contain a package\.json file unless \fB\-g\fR is also specified\. +.P +If \fB\-g\fR is specified, this will be the value of the global prefix\. See +npm help 7 \fBnpm\-config\fR for more detail\. +.SH SEE ALSO +.RS 0 +.IP \(bu 2 npm help root -. -.IP "\(bu" 4 +.IP \(bu 2 npm help bin -. -.IP "\(bu" 4 +.IP \(bu 2 npm help 5 folders -. -.IP "\(bu" 4 +.IP \(bu 2 npm help config -. -.IP "\(bu" 4 +.IP \(bu 2 npm help 7 config -. -.IP "\(bu" 4 +.IP \(bu 2 npm help 5 npmrc -. -.IP "" 0 + +.RE diff --git a/deps/npm/man/man1/npm-prune.1 b/deps/npm/man/man1/npm-prune.1 index 71bb77c407d..1a8cc952156 100644 --- a/deps/npm/man/man1/npm-prune.1 +++ b/deps/npm/man/man1/npm-prune.1 @@ -1,42 +1,33 @@ -.\" Generated with Ronnjs 0.3.8 -.\" http://github.com/kapouer/ronnjs/ -. -.TH "NPM\-PRUNE" "1" "September 2014" "" "" -. 
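A quick way to see the two prefixes described above side by side (the printed
paths are illustrative and depend on your setup):

    npm prefix       # e.g. /home/me/projects/myapp
    npm prefix -g    # e.g. /usr/local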
+.TH "NPM\-PRUNE" "1" "October 2014" "" "" .SH "NAME" -\fBnpm-prune\fR \-\- Remove extraneous packages -. -.SH "SYNOPSIS" -. +\fBnpm-prune\fR \- Remove extraneous packages +.SH SYNOPSIS +.P +.RS 2 .nf npm prune [ [ [ [\-\-tag ] npm publish [\-\-tag ] -. .fi -. -.SH "DESCRIPTION" -Publishes a package to the registry so that it can be installed by name\. -. -.IP "\(bu" 4 +.RE +.SH DESCRIPTION +.P +Publishes a package to the registry so that it can be installed by name\. See +npm help 7 \fBnpm\-developers\fR for details on what's included in the published package, as +well as details on how the package is built\. +.P +By default npm will publish to the public registry\. This can be overridden by +specifying a different default registry or using a npm help 7 \fBnpm\-scope\fR in the name +(see npm help 5 \fBpackage\.json\fR)\. +.RS 0 +.IP \(bu 2 \fB\fR: A folder containing a package\.json file -. -.IP "\(bu" 4 +.IP \(bu 2 \fB\fR: A url or file path to a gzipped tar archive containing a single folder with a package\.json file inside\. -. -.IP "\(bu" 4 +.IP \(bu 2 \fB[\-\-tag ]\fR Registers the published package with the given tag, such that \fBnpm install @\fR will install this version\. By default, \fBnpm publish\fR updates and \fBnpm install\fR installs the \fBlatest\fR tag\. -. -.IP "" 0 -. + +.RE .P Fails if the package name and version combination already exists in -the registry\. -. +the specified registry\. .P Once a package is published with a given name and version, that specific name and version combination can never be used again, even if it is removed with npm help unpublish\. -. -.SH "SEE ALSO" -. -.IP "\(bu" 4 +.SH SEE ALSO +.RS 0 +.IP \(bu 2 npm help 7 registry -. -.IP "\(bu" 4 +.IP \(bu 2 npm help adduser -. -.IP "\(bu" 4 +.IP \(bu 2 npm help owner -. -.IP "\(bu" 4 +.IP \(bu 2 npm help deprecate -. -.IP "\(bu" 4 +.IP \(bu 2 npm help tag -. -.IP "" 0 + +.RE diff --git a/deps/npm/man/man1/npm-rebuild.1 b/deps/npm/man/man1/npm-rebuild.1 index 4130eb773f2..0e04b9cfbe1 100644 --- a/deps/npm/man/man1/npm-rebuild.1 +++ b/deps/npm/man/man1/npm-rebuild.1 @@ -1,37 +1,31 @@ -.\" Generated with Ronnjs 0.3.8 -.\" http://github.com/kapouer/ronnjs/ -. -.TH "NPM\-REBUILD" "1" "September 2014" "" "" -. +.TH "NPM\-REBUILD" "1" "October 2014" "" "" .SH "NAME" -\fBnpm-rebuild\fR \-\- Rebuild a package -. -.SH "SYNOPSIS" -. +\fBnpm-rebuild\fR \- Rebuild a package +.SH SYNOPSIS +.P +.RS 2 .nf npm rebuild [ [ \.\.\.]] npm rb [ [ \.\.\.]] -. .fi -. -.IP "\(bu" 4 +.RE +.RS 0 +.IP \(bu 2 \fB\fR: The package to rebuild -. -.IP "" 0 -. -.SH "DESCRIPTION" + +.RE +.SH DESCRIPTION +.P This command runs the \fBnpm build\fR command on the matched folders\. This is useful when you install a new version of node, and must recompile all your C++ addons with the new binary\. -. -.SH "SEE ALSO" -. -.IP "\(bu" 4 +.SH SEE ALSO +.RS 0 +.IP \(bu 2 npm help build -. -.IP "\(bu" 4 +.IP \(bu 2 npm help install -. -.IP "" 0 + +.RE diff --git a/deps/npm/man/man1/npm-repo.1 b/deps/npm/man/man1/npm-repo.1 index 557a3566eaa..dc8428d0242 100644 --- a/deps/npm/man/man1/npm-repo.1 +++ b/deps/npm/man/man1/npm-repo.1 @@ -1,47 +1,37 @@ -.\" Generated with Ronnjs 0.3.8 -.\" http://github.com/kapouer/ronnjs/ -. -.TH "NPM\-REPO" "1" "September 2014" "" "" -. +.TH "NPM\-REPO" "1" "October 2014" "" "" .SH "NAME" -\fBnpm-repo\fR \-\- Open package repository page in the browser -. -.SH "SYNOPSIS" -. +\fBnpm-repo\fR \- Open package repository page in the browser +.SH SYNOPSIS +.P +.RS 2 .nf npm repo npm repo (with no args in a package dir) -. 
.fi -. -.SH "DESCRIPTION" -This command tries to guess at the likely location of a package\'s +.RE +.SH DESCRIPTION +.P +This command tries to guess at the likely location of a package's repository URL, and then tries to open it using the \fB\-\-browser\fR config param\. If no package name is provided, it will search for a \fBpackage\.json\fR in the current folder and use the \fBname\fR property\. -. -.SH "CONFIGURATION" -. -.SS "browser" -. -.IP "\(bu" 4 +.SH CONFIGURATION +.SS browser +.RS 0 +.IP \(bu 2 Default: OS X: \fB"open"\fR, Windows: \fB"start"\fR, Others: \fB"xdg\-open"\fR -. -.IP "\(bu" 4 +.IP \(bu 2 Type: String -. -.IP "" 0 -. + +.RE .P The browser that is called by the \fBnpm repo\fR command to open websites\. -. -.SH "SEE ALSO" -. -.IP "\(bu" 4 +.SH SEE ALSO +.RS 0 +.IP \(bu 2 npm help docs -. -.IP "\(bu" 4 +.IP \(bu 2 npm help config -. -.IP "" 0 + +.RE diff --git a/deps/npm/man/man1/npm-restart.1 b/deps/npm/man/man1/npm-restart.1 index 828a43f30fc..234d0aa76e9 100644 --- a/deps/npm/man/man1/npm-restart.1 +++ b/deps/npm/man/man1/npm-restart.1 @@ -1,42 +1,29 @@ -.\" Generated with Ronnjs 0.3.8 -.\" http://github.com/kapouer/ronnjs/ -. -.TH "NPM\-RESTART" "1" "September 2014" "" "" -. +.TH "NPM\-RESTART" "1" "October 2014" "" "" .SH "NAME" -\fBnpm-restart\fR \-\- Start a package -. -.SH "SYNOPSIS" -. +\fBnpm-restart\fR \- Start a package +.SH SYNOPSIS +.P +.RS 2 .nf -npm restart -. +npm restart [\-\- ] .fi -. -.SH "DESCRIPTION" -This runs a package\'s "restart" script, if one was provided\. -Otherwise it runs package\'s "stop" script, if one was provided, and then -the "start" script\. -. +.RE +.SH DESCRIPTION .P -If no version is specified, then it restarts the "active" version\. -. -.SH "SEE ALSO" -. -.IP "\(bu" 4 +This runs a package's "restart" script, if one was provided\. Otherwise it runs +package's "stop" script, if one was provided, and then the "start" script\. +.SH SEE ALSO +.RS 0 +.IP \(bu 2 npm help run\-script -. -.IP "\(bu" 4 +.IP \(bu 2 npm help 7 scripts -. -.IP "\(bu" 4 +.IP \(bu 2 npm help test -. -.IP "\(bu" 4 +.IP \(bu 2 npm help start -. -.IP "\(bu" 4 +.IP \(bu 2 npm help stop -. -.IP "" 0 + +.RE diff --git a/deps/npm/man/man1/npm-rm.1 b/deps/npm/man/man1/npm-rm.1 index 424314c7d57..c7f92fb52bf 100644 --- a/deps/npm/man/man1/npm-rm.1 +++ b/deps/npm/man/man1/npm-rm.1 @@ -1,44 +1,34 @@ -.\" Generated with Ronnjs 0.3.8 -.\" http://github.com/kapouer/ronnjs/ -. -.TH "NPM\-RM" "1" "September 2014" "" "" -. +.TH "NPM\-RM" "1" "October 2014" "" "" .SH "NAME" -\fBnpm-rm\fR \-\- Remove a package -. -.SH "SYNOPSIS" -. +\fBnpm-rm\fR \- Remove a package +.SH SYNOPSIS +.P +.RS 2 .nf npm rm npm r npm uninstall npm un -. .fi -. -.SH "DESCRIPTION" +.RE +.SH DESCRIPTION +.P This uninstalls a package, completely removing everything npm installed on its behalf\. -. -.SH "SEE ALSO" -. -.IP "\(bu" 4 +.SH SEE ALSO +.RS 0 +.IP \(bu 2 npm help prune -. -.IP "\(bu" 4 +.IP \(bu 2 npm help install -. -.IP "\(bu" 4 +.IP \(bu 2 npm help 5 folders -. -.IP "\(bu" 4 +.IP \(bu 2 npm help config -. -.IP "\(bu" 4 +.IP \(bu 2 npm help 7 config -. -.IP "\(bu" 4 +.IP \(bu 2 npm help 5 npmrc -. -.IP "" 0 + +.RE diff --git a/deps/npm/man/man1/npm-root.1 b/deps/npm/man/man1/npm-root.1 index 463eeaf934f..f85ebb97085 100644 --- a/deps/npm/man/man1/npm-root.1 +++ b/deps/npm/man/man1/npm-root.1 @@ -1,40 +1,30 @@ -.\" Generated with Ronnjs 0.3.8 -.\" http://github.com/kapouer/ronnjs/ -. -.TH "NPM\-ROOT" "1" "September 2014" "" "" -. 
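Because the npm restart synopsis above accepts arguments after \-\-, the
restart script can receive flags; assuming a server whose scripts understand
a --port option (hypothetical):

    npm restart -- --port=8080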
+.TH "NPM\-ROOT" "1" "October 2014" "" "" .SH "NAME" -\fBnpm-root\fR \-\- Display npm root -. -.SH "SYNOPSIS" -. +\fBnpm-root\fR \- Display npm root +.SH SYNOPSIS +.P +.RS 2 .nf npm root -. .fi -. -.SH "DESCRIPTION" +.RE +.SH DESCRIPTION +.P Print the effective \fBnode_modules\fR folder to standard out\. -. -.SH "SEE ALSO" -. -.IP "\(bu" 4 +.SH SEE ALSO +.RS 0 +.IP \(bu 2 npm help prefix -. -.IP "\(bu" 4 +.IP \(bu 2 npm help bin -. -.IP "\(bu" 4 +.IP \(bu 2 npm help 5 folders -. -.IP "\(bu" 4 +.IP \(bu 2 npm help config -. -.IP "\(bu" 4 +.IP \(bu 2 npm help 7 config -. -.IP "\(bu" 4 +.IP \(bu 2 npm help 5 npmrc -. -.IP "" 0 + +.RE diff --git a/deps/npm/man/man1/npm-run-script.1 b/deps/npm/man/man1/npm-run-script.1 index aa2740c1198..905908a7a10 100644 --- a/deps/npm/man/man1/npm-run-script.1 +++ b/deps/npm/man/man1/npm-run-script.1 @@ -1,45 +1,49 @@ -.\" Generated with Ronnjs 0.3.8 -.\" http://github.com/kapouer/ronnjs/ -. -.TH "NPM\-RUN\-SCRIPT" "1" "September 2014" "" "" -. +.TH "NPM\-RUN\-SCRIPT" "1" "October 2014" "" "" .SH "NAME" -\fBnpm-run-script\fR \-\- Run arbitrary package scripts -. -.SH "SYNOPSIS" -. +\fBnpm-run-script\fR \- Run arbitrary package scripts +.SH SYNOPSIS +.P +.RS 2 .nf -npm run\-script [] [command] -npm run [] [command] -. +npm run\-script [command] [\-\- ] +npm run [command] [\-\- ] .fi -. -.SH "DESCRIPTION" -This runs an arbitrary command from a package\'s \fB"scripts"\fR object\. +.RE +.SH DESCRIPTION +.P +This runs an arbitrary command from a package's \fB"scripts"\fR object\. If no package name is provided, it will search for a \fBpackage\.json\fR in the current folder and use its \fB"scripts"\fR object\. If no \fB"command"\fR is provided, it will list the available top level scripts\. -. .P It is used by the test, start, restart, and stop commands, but can be called directly, as well\. -. -.SH "SEE ALSO" -. -.IP "\(bu" 4 +.P +As of \fBnpm@2\.0\.0\fR \fIhttp://blog\.npmjs\.org/post/98131109725/npm\-2\-0\-0\fR, you can +use custom arguments when executing scripts\. The special option \fB\-\-\fR is used by +getopt \fIhttp://goo\.gl/KxMmtG\fR to delimit the end of the options\. npm will pass +all the arguments after the \fB\-\-\fR directly to your script: +.P +.RS 2 +.nf +npm run test \-\- \-\-grep="pattern" +.fi +.RE +.P +The arguments will only be passed to the script specified after \fBnpm run\fR +and not to any pre or post script\. +.SH SEE ALSO +.RS 0 +.IP \(bu 2 npm help 7 scripts -. -.IP "\(bu" 4 +.IP \(bu 2 npm help test -. -.IP "\(bu" 4 +.IP \(bu 2 npm help start -. -.IP "\(bu" 4 +.IP \(bu 2 npm help restart -. -.IP "\(bu" 4 +.IP \(bu 2 npm help stop -. -.IP "" 0 + +.RE diff --git a/deps/npm/man/man1/npm-search.1 b/deps/npm/man/man1/npm-search.1 index 2c7edcd2ad0..4ad5a67b8c9 100644 --- a/deps/npm/man/man1/npm-search.1 +++ b/deps/npm/man/man1/npm-search.1 @@ -1,62 +1,48 @@ -.\" Generated with Ronnjs 0.3.8 -.\" http://github.com/kapouer/ronnjs/ -. -.TH "NPM\-SEARCH" "1" "September 2014" "" "" -. +.TH "NPM\-SEARCH" "1" "October 2014" "" "" .SH "NAME" -\fBnpm-search\fR \-\- Search for packages -. -.SH "SYNOPSIS" -. +\fBnpm-search\fR \- Search for packages +.SH SYNOPSIS +.P +.RS 2 .nf npm search [\-\-long] [search terms \.\.\.] npm s [search terms \.\.\.] npm se [search terms \.\.\.] -. .fi -. -.SH "DESCRIPTION" +.RE +.SH DESCRIPTION +.P Search the registry for packages matching the search terms\. -. .P -If a term starts with \fB/\fR, then it\'s interpreted as a regular expression\. 
+If a term starts with \fB/\fR, then it's interpreted as a regular expression\.
A trailing \fB/\fR will be ignored in this case\. (Note that many regular
expression characters must be escaped or quoted in most shells\.)
-.
-.SH "CONFIGURATION"
-.
-.SS "long"
-.
-.IP "\(bu" 4
+.SH CONFIGURATION
+.SS long
+.RS 0
+.IP \(bu 2
Default: false
-.
-.IP "\(bu" 4
+.IP \(bu 2
Type: Boolean
-.
-.IP "" 0
-.
+
+.RE
.P
Display full package descriptions and other long text across multiple
lines\. When disabled (default) search results are truncated to fit
neatly on a single line\. Modules with extremely long names will
fall on multiple lines\.
-.
-.SH "SEE ALSO"
-.
-.IP "\(bu" 4
+.SH SEE ALSO
+.RS 0
+.IP \(bu 2
npm help 7 registry
-.
-.IP "\(bu" 4
+.IP \(bu 2
npm help config
-.
-.IP "\(bu" 4
+.IP \(bu 2
npm help 7 config
-.
-.IP "\(bu" 4
+.IP \(bu 2
npm help 5 npmrc
-.
-.IP "\(bu" 4
+.IP \(bu 2
npm help view
-.
-.IP "" 0
+
+.RE
diff --git a/deps/npm/man/man1/npm-shrinkwrap.1 b/deps/npm/man/man1/npm-shrinkwrap.1
index 2a053a5b0b7..fa2b313ab2b 100644
--- a/deps/npm/man/man1/npm-shrinkwrap.1
+++ b/deps/npm/man/man1/npm-shrinkwrap.1
@@ -1,43 +1,36 @@
-.\" Generated with Ronnjs 0.3.8
-.\" http://github.com/kapouer/ronnjs/
-.
-.TH "NPM\-SHRINKWRAP" "1" "September 2014" "" ""
-.
+.TH "NPM\-SHRINKWRAP" "1" "October 2014" "" ""
.SH "NAME"
-\fBnpm-shrinkwrap\fR \-\- Lock down dependency versions
-.
-.SH "SYNOPSIS"
-.
+\fBnpm-shrinkwrap\fR \- Lock down dependency versions
+.SH SYNOPSIS
+.P
+.RS 2
.nf
npm shrinkwrap
-.
.fi
-.
-.SH "DESCRIPTION"
-This command locks down the versions of a package\'s dependencies so
+.RE
+.SH DESCRIPTION
+.P
+This command locks down the versions of a package's dependencies so
that you can control exactly which versions of each dependency will be
used when your package is installed\. The "package\.json" file is still
required if you want to use "npm install"\.
-.
.P
-By default, "npm install" recursively installs the target\'s
+By default, "npm install" recursively installs the target's
dependencies (as specified in package\.json), choosing the latest
-available version that satisfies the dependency\'s semver pattern\. In
+available version that satisfies the dependency's semver pattern\. In
some situations, particularly when shipping software where each change
-is tightly managed, it\'s desirable to fully specify each version of
+is tightly managed, it's desirable to fully specify each version of
each dependency recursively so that subsequent builds and deploys do
not inadvertently pick up newer versions of a dependency that satisfy
the semver pattern\. Specifying specific semver patterns in each
-dependency\'s package\.json would facilitate this, but that\'s not always
+dependency's package\.json would facilitate this, but that's not always
possible or desirable, as when another author owns the npm package\.
-It\'s also possible to check dependencies directly into source control,
+It's also possible to check dependencies directly into source control,
but that may be undesirable for other reasons\.
-.
.P
As an example, consider package A:
-.
-.IP "" 4
-.
+.P
+.RS 2
.nf
{
"name": "A",
@@ -46,16 +39,12 @@ As an example, consider package A:
"B": "<0\.1\.0"
}
}
-.
.fi
-.
-.IP "" 0
-.
+.RE
.P
package B:
-.
-.IP "" 4
-.
+.P
+.RS 2
.nf
{
"name": "B",
@@ -64,82 +53,61 @@ package B:
"C": "<0\.1\.0"
}
}
-.
.fi
-.
-.IP "" 0
-.
+.RE
.P
and package C:
-.
-.IP "" 4
-.
+.P
+.RS 2
.nf
{
"name": "C",
"version": "0\.0\.1"
}
-.
.fi
-.
-.IP "" 0
-.
+.RE
.P
If these are the only versions of A, B, and C available in the
registry, then a normal "npm install A" will install:
-.
-.IP "" 4
-.
+.P
+.RS 2
.nf
A@0\.1\.0
`\-\- B@0\.0\.1
`\-\- C@0\.0\.1
-.
.fi
-.
-.IP "" 0
-.
+.RE
.P
However, if B@0\.0\.2 is published, then a fresh "npm install A" will
install:
-.
-.IP "" 4
-.
+.P
+.RS 2
.nf
A@0\.1\.0
`\-\- B@0\.0\.2
`\-\- C@0\.0\.1
-.
.fi
-.
-.IP "" 0
-.
+.RE
.P
-assuming the new version did not modify B\'s dependencies\. Of course,
+assuming the new version did not modify B's dependencies\. Of course,
the new version of B could include a new version of C and any number
of new dependencies\. If such changes are undesirable, the author of A
-could specify a dependency on B@0\.0\.1\. However, if A\'s author and B\'s
-author are not the same person, there\'s no way for A\'s author to say
+could specify a dependency on B@0\.0\.1\. However, if A's author and B's
+author are not the same person, there's no way for A's author to say
that he or she does not want to pull in newly published versions of C
-when B hasn\'t changed at all\.
-.
+when B hasn't changed at all\.
+.P
+In this case, A's author can run
.P
-In this case, A\'s author can run
-.
-.IP "" 4
-.
+.RS 2
.nf
npm shrinkwrap
-.
.fi
-.
-.IP "" 0
-.
+.RE
.P
This generates npm\-shrinkwrap\.json, which will look something like this:
-.
-.IP "" 4
-.
+.P
+.RS 2
.nf
{
"name": "A",
@@ -155,79 +123,68 @@ This generates npm\-shrinkwrap\.json, which will look something like this:
}
}
}
-.
.fi
-.
-.IP "" 0
-.
+.RE
.P
The shrinkwrap command has locked down the dependencies based on
-what\'s currently installed in node_modules\. When "npm install"
+what's currently installed in node_modules\. When "npm install"
installs a package with a npm\-shrinkwrap\.json file in the package
root, the shrinkwrap file (rather than package\.json files) completely
drives the installation of that package and all of its dependencies
(recursively)\. So now the author publishes A@0\.1\.0, and subsequent
installs of this package will use B@0\.0\.1 and C@0\.0\.1, regardless of the
-dependencies and versions listed in A\'s, B\'s, and C\'s package\.json
+dependencies and versions listed in A's, B's, and C's package\.json
files\.
-.
-.SS "Using shrinkwrapped packages"
+.SS Using shrinkwrapped packages
+.P
Using a shrinkwrapped package is no different than using any other
package: you can "npm install" it by hand, or add a dependency to your
package\.json file and "npm install" it\.
-.
-.SS "Building shrinkwrapped packages"
+.SS Building shrinkwrapped packages
+.P
To shrinkwrap an existing package:
-.
-.IP "1" 4
+.RS 0
+.IP 1. 3
Run "npm install" in the package root to install the current
versions of all dependencies\.
-.
-.IP "2" 4
+.IP 2. 3
Validate that the package works as expected with these versions\.
-.
-.IP "3" 4
+.IP 3. 3
Run "npm shrinkwrap", add npm\-shrinkwrap\.json to git, and publish
your package\.
-.
-.IP "" 0
-.
+
+.RE
.P
To add or update a dependency in a shrinkwrapped package:
-.
-.IP "1" 4
+.RS 0
+.IP 1. 3
Run "npm install" in the package root to install the current versions
of all dependencies\.
-.
-.IP "2" 4
+.IP 2. 3
Add or update dependencies\. "npm install" each new or updated
package individually and then update package\.json\. Note that they
must be explicitly named in order to be installed: running \fBnpm
install\fR with no arguments will merely reproduce the existing
shrinkwrap\.
-.
-.IP "3" 4
+.IP 3. 3
Validate that the package works as expected with the new
dependencies\.
-.
-.IP "4" 4
+.IP 4. 
3 Run "npm shrinkwrap", commit the new npm\-shrinkwrap\.json, and publish your package\. -. -.IP "" 0 -. + +.RE .P You can use npm help outdated to view dependencies with newer versions available\. -. -.SS "Other Notes" -A shrinkwrap file must be consistent with the package\'s package\.json +.SS Other Notes +.P +A shrinkwrap file must be consistent with the package's package\.json file\. "npm shrinkwrap" will fail if required dependencies are not already installed, since that would result in a shrinkwrap that -wouldn\'t actually work\. Similarly, the command will fail if there are +wouldn't actually work\. Similarly, the command will fail if there are extraneous packages (not referenced by package\.json), since that would indicate that package\.json is not correct\. -. .P Since "npm shrinkwrap" is intended to lock down your dependencies for production use, \fBdevDependencies\fR will not be included unless you @@ -235,31 +192,27 @@ explicitly set the \fB\-\-dev\fR flag when you run \fBnpm shrinkwrap\fR\|\. If installed \fBdevDependencies\fR are excluded, then npm will print a warning\. If you want them to be installed with your module by default, please consider adding them to \fBdependencies\fR instead\. -. .P -If shrinkwrapped package A depends on shrinkwrapped package B, B\'s +If shrinkwrapped package A depends on shrinkwrapped package B, B's shrinkwrap will not be used as part of the installation of A\. However, -because A\'s shrinkwrap is constructed from a valid installation of B -and recursively specifies all dependencies, the contents of B\'s -shrinkwrap will implicitly be included in A\'s shrinkwrap\. -. -.SS "Caveats" +because A's shrinkwrap is constructed from a valid installation of B +and recursively specifies all dependencies, the contents of B's +shrinkwrap will implicitly be included in A's shrinkwrap\. +.SS Caveats +.P If you wish to lock down the specific bytes included in a package, for example to have 100% confidence in being able to reproduce a deployment or build, then you ought to check your dependencies into source control, or pursue some other mechanism that can verify contents rather than versions\. -. -.SH "SEE ALSO" -. -.IP "\(bu" 4 +.SH SEE ALSO +.RS 0 +.IP \(bu 2 npm help install -. -.IP "\(bu" 4 +.IP \(bu 2 npm help 5 package\.json -. -.IP "\(bu" 4 +.IP \(bu 2 npm help ls -. -.IP "" 0 + +.RE diff --git a/deps/npm/man/man1/npm-star.1 b/deps/npm/man/man1/npm-star.1 index bbcfee19eb1..8dbc0292ae7 100644 --- a/deps/npm/man/man1/npm-star.1 +++ b/deps/npm/man/man1/npm-star.1 @@ -1,39 +1,30 @@ -.\" Generated with Ronnjs 0.3.8 -.\" http://github.com/kapouer/ronnjs/ -. -.TH "NPM\-STAR" "1" "September 2014" "" "" -. +.TH "NPM\-STAR" "1" "October 2014" "" "" .SH "NAME" -\fBnpm-star\fR \-\- Mark your favorite packages -. -.SH "SYNOPSIS" -. +\fBnpm-star\fR \- Mark your favorite packages +.SH SYNOPSIS +.P +.RS 2 .nf npm star [, \.\.\.] npm unstar [, \.\.\.] -. .fi -. -.SH "DESCRIPTION" -"Starring" a package means that you have some interest in it\. It\'s +.RE +.SH DESCRIPTION +.P +"Starring" a package means that you have some interest in it\. It's a vaguely positive way to show that you care\. -. .P "Unstarring" is the same thing, but in reverse\. -. .P -It\'s a boolean thing\. Starring repeatedly has no additional effect\. -. -.SH "SEE ALSO" -. -.IP "\(bu" 4 +It's a boolean thing\. Starring repeatedly has no additional effect\. +.SH SEE ALSO +.RS 0 +.IP \(bu 2 npm help view -. -.IP "\(bu" 4 +.IP \(bu 2 npm help whoami -. -.IP "\(bu" 4 +.IP \(bu 2 npm help adduser -. 
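Recapping the add-or-update shrinkwrap workflow above as a shell sketch (the
dependency name here is hypothetical):

    npm install                # reproduce the current shrinkwrapped tree
    npm install underscore     # explicitly install the new or updated package
    npm test                   # validate with the new dependency
    npm shrinkwrap             # regenerate npm-shrinkwrap.json
    git add npm-shrinkwrap.json
    git commit -m "Update shrinkwrap"
    npm publish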
-.IP "" 0 + +.RE diff --git a/deps/npm/man/man1/npm-stars.1 b/deps/npm/man/man1/npm-stars.1 index 9b2d6d187e4..1762a0f08bd 100644 --- a/deps/npm/man/man1/npm-stars.1 +++ b/deps/npm/man/man1/npm-stars.1 @@ -1,40 +1,31 @@ -.\" Generated with Ronnjs 0.3.8 -.\" http://github.com/kapouer/ronnjs/ -. -.TH "NPM\-STARS" "1" "September 2014" "" "" -. +.TH "NPM\-STARS" "1" "October 2014" "" "" .SH "NAME" -\fBnpm-stars\fR \-\- View packages marked as favorites -. -.SH "SYNOPSIS" -. +\fBnpm-stars\fR \- View packages marked as favorites +.SH SYNOPSIS +.P +.RS 2 .nf npm stars npm stars [username] -. .fi -. -.SH "DESCRIPTION" +.RE +.SH DESCRIPTION +.P If you have starred a lot of neat things and want to find them again quickly this command lets you do just that\. -. .P -You may also want to see your friend\'s favorite packages, in this case +You may also want to see your friend's favorite packages, in this case you will most certainly enjoy this command\. -. -.SH "SEE ALSO" -. -.IP "\(bu" 4 +.SH SEE ALSO +.RS 0 +.IP \(bu 2 npm help star -. -.IP "\(bu" 4 +.IP \(bu 2 npm help view -. -.IP "\(bu" 4 +.IP \(bu 2 npm help whoami -. -.IP "\(bu" 4 +.IP \(bu 2 npm help adduser -. -.IP "" 0 + +.RE diff --git a/deps/npm/man/man1/npm-start.1 b/deps/npm/man/man1/npm-start.1 index c76e2c92a5d..0a342ee1f1d 100644 --- a/deps/npm/man/man1/npm-start.1 +++ b/deps/npm/man/man1/npm-start.1 @@ -1,37 +1,28 @@ -.\" Generated with Ronnjs 0.3.8 -.\" http://github.com/kapouer/ronnjs/ -. -.TH "NPM\-START" "1" "September 2014" "" "" -. +.TH "NPM\-START" "1" "October 2014" "" "" .SH "NAME" -\fBnpm-start\fR \-\- Start a package -. -.SH "SYNOPSIS" -. +\fBnpm-start\fR \- Start a package +.SH SYNOPSIS +.P +.RS 2 .nf -npm start -. +npm start [\-\- ] .fi -. -.SH "DESCRIPTION" -This runs a package\'s "start" script, if one was provided\. -. -.SH "SEE ALSO" -. -.IP "\(bu" 4 +.RE +.SH DESCRIPTION +.P +This runs a package's "start" script, if one was provided\. +.SH SEE ALSO +.RS 0 +.IP \(bu 2 npm help run\-script -. -.IP "\(bu" 4 +.IP \(bu 2 npm help 7 scripts -. -.IP "\(bu" 4 +.IP \(bu 2 npm help test -. -.IP "\(bu" 4 +.IP \(bu 2 npm help restart -. -.IP "\(bu" 4 +.IP \(bu 2 npm help stop -. -.IP "" 0 + +.RE diff --git a/deps/npm/man/man1/npm-stop.1 b/deps/npm/man/man1/npm-stop.1 index 37c1a5fe03f..8622d18d964 100644 --- a/deps/npm/man/man1/npm-stop.1 +++ b/deps/npm/man/man1/npm-stop.1 @@ -1,37 +1,28 @@ -.\" Generated with Ronnjs 0.3.8 -.\" http://github.com/kapouer/ronnjs/ -. -.TH "NPM\-STOP" "1" "September 2014" "" "" -. +.TH "NPM\-STOP" "1" "October 2014" "" "" .SH "NAME" -\fBnpm-stop\fR \-\- Stop a package -. -.SH "SYNOPSIS" -. +\fBnpm-stop\fR \- Stop a package +.SH SYNOPSIS +.P +.RS 2 .nf -npm stop -. +npm stop [\-\- ] .fi -. -.SH "DESCRIPTION" -This runs a package\'s "stop" script, if one was provided\. -. -.SH "SEE ALSO" -. -.IP "\(bu" 4 +.RE +.SH DESCRIPTION +.P +This runs a package's "stop" script, if one was provided\. +.SH SEE ALSO +.RS 0 +.IP \(bu 2 npm help run\-script -. -.IP "\(bu" 4 +.IP \(bu 2 npm help 7 scripts -. -.IP "\(bu" 4 +.IP \(bu 2 npm help test -. -.IP "\(bu" 4 +.IP \(bu 2 npm help start -. -.IP "\(bu" 4 +.IP \(bu 2 npm help restart -. -.IP "" 0 + +.RE diff --git a/deps/npm/man/man1/npm-submodule.1 b/deps/npm/man/man1/npm-submodule.1 index 71853335c59..4999ac64e91 100644 --- a/deps/npm/man/man1/npm-submodule.1 +++ b/deps/npm/man/man1/npm-submodule.1 @@ -1,42 +1,35 @@ -.\" Generated with Ronnjs 0.3.8 -.\" http://github.com/kapouer/ronnjs/ -. -.TH "NPM\-SUBMODULE" "1" "September 2014" "" "" -. 
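For example, to list your own starred packages or another user's (the
username is illustrative):

    npm stars
    npm stars substack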
+.TH "NPM\-SUBMODULE" "1" "October 2014" "" "" .SH "NAME" -\fBnpm-submodule\fR \-\- Add a package as a git submodule -. -.SH "SYNOPSIS" -. +\fBnpm-submodule\fR \- Add a package as a git submodule +.SH SYNOPSIS +.P +.RS 2 .nf npm submodule -. .fi -. -.SH "DESCRIPTION" +.RE +.SH DESCRIPTION +.P If the specified package has a git repository url in its package\.json -description, then this command will add it as a git submodule at \fBnode_modules/\fR\|\. -. +description, then this command will add it as a git submodule at +\fBnode_modules/\fR\|\. .P -This is a convenience only\. From then on, it\'s up to you to manage +This is a convenience only\. From then on, it's up to you to manage updates by using the appropriate git commands\. npm will stubbornly refuse to update, modify, or remove anything with a \fB\|\.git\fR subfolder in it\. -. .P This command also does not install missing dependencies, if the package does not include them in its git repository\. If \fBnpm ls\fR reports that things are missing, you can either install, link, or submodule them yourself, or you can do \fBnpm explore \-\- npm install\fR to install the dependencies into the submodule folder\. -. -.SH "SEE ALSO" -. -.IP "\(bu" 4 +.SH SEE ALSO +.RS 0 +.IP \(bu 2 npm help 5 package\.json -. -.IP "\(bu" 4 +.IP \(bu 2 git help submodule -. -.IP "" 0 + +.RE diff --git a/deps/npm/man/man1/npm-tag.1 b/deps/npm/man/man1/npm-tag.1 index c1d463f8cef..5aace75083f 100644 --- a/deps/npm/man/man1/npm-tag.1 +++ b/deps/npm/man/man1/npm-tag.1 @@ -1,74 +1,54 @@ -.\" Generated with Ronnjs 0.3.8 -.\" http://github.com/kapouer/ronnjs/ -. -.TH "NPM\-TAG" "1" "September 2014" "" "" -. +.TH "NPM\-TAG" "1" "October 2014" "" "" .SH "NAME" -\fBnpm-tag\fR \-\- Tag a published version -. -.SH "SYNOPSIS" -. +\fBnpm-tag\fR \- Tag a published version +.SH SYNOPSIS +.P +.RS 2 .nf npm tag @ [] -. .fi -. -.SH "DESCRIPTION" -Tags the specified version of the package with the specified tag, or the \fB\-\-tag\fR config if not specified\. -. +.RE +.SH DESCRIPTION +.P +Tags the specified version of the package with the specified tag, or the +\fB\-\-tag\fR config if not specified\. .P A tag can be used when installing packages as a reference to a version instead of using a specific version number: -. -.IP "" 4 -. +.P +.RS 2 .nf npm install @ -. .fi -. -.IP "" 0 -. +.RE .P When installing dependencies, a preferred tagged version may be specified: -. -.IP "" 4 -. +.P +.RS 2 .nf npm install \-\-tag -. .fi -. -.IP "" 0 -. +.RE .P This also applies to \fBnpm dedupe\fR\|\. -. .P Publishing a package always sets the "latest" tag to the published version\. -. -.SH "SEE ALSO" -. -.IP "\(bu" 4 +.SH SEE ALSO +.RS 0 +.IP \(bu 2 npm help publish -. -.IP "\(bu" 4 +.IP \(bu 2 npm help install -. -.IP "\(bu" 4 +.IP \(bu 2 npm help dedupe -. -.IP "\(bu" 4 +.IP \(bu 2 npm help 7 registry -. -.IP "\(bu" 4 +.IP \(bu 2 npm help config -. -.IP "\(bu" 4 +.IP \(bu 2 npm help 7 config -. -.IP "\(bu" 4 +.IP \(bu 2 npm help 5 npmrc -. -.IP "" 0 + +.RE diff --git a/deps/npm/man/man1/npm-test.1 b/deps/npm/man/man1/npm-test.1 index 063fc926793..0b4a9f4dbb3 100644 --- a/deps/npm/man/man1/npm-test.1 +++ b/deps/npm/man/man1/npm-test.1 @@ -1,42 +1,32 @@ -.\" Generated with Ronnjs 0.3.8 -.\" http://github.com/kapouer/ronnjs/ -. -.TH "NPM\-TEST" "1" "September 2014" "" "" -. +.TH "NPM\-TEST" "1" "October 2014" "" "" .SH "NAME" -\fBnpm-test\fR \-\- Test a package -. -.SH "SYNOPSIS" -. +\fBnpm-test\fR \- Test a package +.SH SYNOPSIS +.P +.RS 2 .nf - npm test - npm tst -. 
+ npm test [\-\- ] + npm tst [\-\- ] .fi -. -.SH "DESCRIPTION" -This runs a package\'s "test" script, if one was provided\. -. +.RE +.SH DESCRIPTION +.P +This runs a package's "test" script, if one was provided\. .P To run tests as a condition of installation, set the \fBnpat\fR config to true\. -. -.SH "SEE ALSO" -. -.IP "\(bu" 4 +.SH SEE ALSO +.RS 0 +.IP \(bu 2 npm help run\-script -. -.IP "\(bu" 4 +.IP \(bu 2 npm help 7 scripts -. -.IP "\(bu" 4 +.IP \(bu 2 npm help start -. -.IP "\(bu" 4 +.IP \(bu 2 npm help restart -. -.IP "\(bu" 4 +.IP \(bu 2 npm help stop -. -.IP "" 0 + +.RE diff --git a/deps/npm/man/man1/npm-uninstall.1 b/deps/npm/man/man1/npm-uninstall.1 index 364d9c1d7c4..a56f8bb9187 100644 --- a/deps/npm/man/man1/npm-uninstall.1 +++ b/deps/npm/man/man1/npm-uninstall.1 @@ -1,87 +1,68 @@ -.\" Generated with Ronnjs 0.3.8 -.\" http://github.com/kapouer/ronnjs/ -. -.TH "NPM\-RM" "1" "September 2014" "" "" -. +.TH "NPM\-RM" "1" "October 2014" "" "" .SH "NAME" -\fBnpm-rm\fR \-\- Remove a package -. -.SH "SYNOPSIS" -. +\fBnpm-rm\fR \- Remove a package +.SH SYNOPSIS +.P +.RS 2 .nf -npm uninstall [\-\-save|\-\-save\-dev|\-\-save\-optional] +npm uninstall [@/] [\-\-save|\-\-save\-dev|\-\-save\-optional] npm rm (with any of the previous argument usage) -. .fi -. -.SH "DESCRIPTION" +.RE +.SH DESCRIPTION +.P This uninstalls a package, completely removing everything npm installed on its behalf\. -. .P Example: -. -.IP "" 4 -. +.P +.RS 2 .nf npm uninstall sax -. .fi -. -.IP "" 0 -. +.RE .P In global mode (ie, with \fB\-g\fR or \fB\-\-global\fR appended to the command), it uninstalls the current package context as a global package\. -. .P \fBnpm uninstall\fR takes 3 exclusive, optional flags which save or update the package version in your main package\.json: -. -.IP "\(bu" 4 +.RS 0 +.IP \(bu 2 \fB\-\-save\fR: Package will be removed from your \fBdependencies\fR\|\. -. -.IP "\(bu" 4 +.IP \(bu 2 \fB\-\-save\-dev\fR: Package will be removed from your \fBdevDependencies\fR\|\. -. -.IP "\(bu" 4 +.IP \(bu 2 \fB\-\-save\-optional\fR: Package will be removed from your \fBoptionalDependencies\fR\|\. -. -.IP "" 0 -. + +.RE +.P +Scope is optional and follows the usual rules for npm help 7 \fBnpm\-scope\fR\|\. .P Examples: -. -.IP "" 4 -. +.P +.RS 2 .nf npm uninstall sax \-\-save +npm uninstall @myorg/privatepackage \-\-save npm uninstall node\-tap \-\-save\-dev npm uninstall dtrace\-provider \-\-save\-optional -. .fi -. -.IP "" 0 -. -.SH "SEE ALSO" -. -.IP "\(bu" 4 +.RE +.SH SEE ALSO +.RS 0 +.IP \(bu 2 npm help prune -. -.IP "\(bu" 4 +.IP \(bu 2 npm help install -. -.IP "\(bu" 4 +.IP \(bu 2 npm help 5 folders -. -.IP "\(bu" 4 +.IP \(bu 2 npm help config -. -.IP "\(bu" 4 +.IP \(bu 2 npm help 7 config -. -.IP "\(bu" 4 +.IP \(bu 2 npm help 5 npmrc -. -.IP "" 0 + +.RE diff --git a/deps/npm/man/man1/npm-unpublish.1 b/deps/npm/man/man1/npm-unpublish.1 index e5b8a656010..6cb1df7263c 100644 --- a/deps/npm/man/man1/npm-unpublish.1 +++ b/deps/npm/man/man1/npm-unpublish.1 @@ -1,58 +1,47 @@ -.\" Generated with Ronnjs 0.3.8 -.\" http://github.com/kapouer/ronnjs/ -. -.TH "NPM\-UNPUBLISH" "1" "September 2014" "" "" -. +.TH "NPM\-UNPUBLISH" "1" "October 2014" "" "" .SH "NAME" -\fBnpm-unpublish\fR \-\- Remove a package from the registry -. -.SH "SYNOPSIS" -. +\fBnpm-unpublish\fR \- Remove a package from the registry +.SH SYNOPSIS +.P +.RS 2 .nf -npm unpublish [@] -. +npm unpublish [@/][@] .fi -. 
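The [-- <args>] form in the npm test synopsis above forwards arguments to the
test script itself; for instance, with a mocha-based test script
(hypothetical setup):

    npm test -- --grep="parser"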
-.SH "WARNING" +.RE +.SH WARNING +.P \fBIt is generally considered bad behavior to remove versions of a library that others are depending on!\fR -. .P Consider using the \fBdeprecate\fR command instead, if your intent is to encourage users to upgrade\. -. .P There is plenty of room on the registry\. -. -.SH "DESCRIPTION" +.SH DESCRIPTION +.P This removes a package version from the registry, deleting its entry and removing the tarball\. -. .P If no version is specified, or if all versions are removed then the root package entry is removed from the registry entirely\. -. .P Even if a package version is unpublished, that specific name and version combination can never be reused\. In order to publish the package again, a new version number must be used\. -. -.SH "SEE ALSO" -. -.IP "\(bu" 4 +.P +The scope is optional and follows the usual rules for npm help 7 \fBnpm\-scope\fR\|\. +.SH SEE ALSO +.RS 0 +.IP \(bu 2 npm help deprecate -. -.IP "\(bu" 4 +.IP \(bu 2 npm help publish -. -.IP "\(bu" 4 +.IP \(bu 2 npm help 7 registry -. -.IP "\(bu" 4 +.IP \(bu 2 npm help adduser -. -.IP "\(bu" 4 +.IP \(bu 2 npm help owner -. -.IP "" 0 + +.RE diff --git a/deps/npm/man/man1/npm-update.1 b/deps/npm/man/man1/npm-update.1 index de2201209f8..19adfc92765 100644 --- a/deps/npm/man/man1/npm-update.1 +++ b/deps/npm/man/man1/npm-update.1 @@ -1,45 +1,37 @@ -.\" Generated with Ronnjs 0.3.8 -.\" http://github.com/kapouer/ronnjs/ -. -.TH "NPM\-UPDATE" "1" "September 2014" "" "" -. +.TH "NPM\-UPDATE" "1" "October 2014" "" "" .SH "NAME" -\fBnpm-update\fR \-\- Update a package -. -.SH "SYNOPSIS" -. +\fBnpm-update\fR \- Update a package +.SH SYNOPSIS +.P +.RS 2 .nf npm update [\-g] [ [ \.\.\.]] -. .fi -. -.SH "DESCRIPTION" +.RE +.SH DESCRIPTION +.P This command will update all the packages listed to the latest version (specified by the \fBtag\fR config)\. -. .P It will also install missing packages\. -. .P -If the \fB\-g\fR flag is specified, this command will update globally installed packages\. -If no package name is specified, all packages in the specified location (global or local) will be updated\. -. -.SH "SEE ALSO" -. -.IP "\(bu" 4 +If the \fB\-g\fR flag is specified, this command will update globally installed +packages\. +.P +If no package name is specified, all packages in the specified location (global +or local) will be updated\. +.SH SEE ALSO +.RS 0 +.IP \(bu 2 npm help install -. -.IP "\(bu" 4 +.IP \(bu 2 npm help outdated -. -.IP "\(bu" 4 +.IP \(bu 2 npm help 7 registry -. -.IP "\(bu" 4 +.IP \(bu 2 npm help 5 folders -. -.IP "\(bu" 4 +.IP \(bu 2 npm help ls -. -.IP "" 0 + +.RE diff --git a/deps/npm/man/man1/npm-version.1 b/deps/npm/man/man1/npm-version.1 index fc52da6e8f5..21fde3452fa 100644 --- a/deps/npm/man/man1/npm-version.1 +++ b/deps/npm/man/man1/npm-version.1 @@ -1,75 +1,61 @@ -.\" Generated with Ronnjs 0.3.8 -.\" http://github.com/kapouer/ronnjs/ -. -.TH "NPM\-VERSION" "1" "September 2014" "" "" -. +.TH "NPM\-VERSION" "1" "October 2014" "" "" .SH "NAME" -\fBnpm-version\fR \-\- Bump a package version -. -.SH "SYNOPSIS" -. +\fBnpm-version\fR \- Bump a package version +.SH SYNOPSIS +.P +.RS 2 .nf npm version [ | major | minor | patch | premajor | preminor | prepatch | prerelease] -. .fi -. -.SH "DESCRIPTION" +.RE +.SH DESCRIPTION +.P Run this in a package directory to bump the version and write the new data back to the package\.json file\. -. 
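For instance, each of the following bumps the version recorded in
package.json (the starting version 1.2.3 is illustrative):

    npm version 2.0.1      # explicit semver string
    npm version patch      # 1.2.3 -> 1.2.4
    npm version preminor   # 1.2.3 -> 1.3.0-0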
.P The \fBnewversion\fR argument should be a valid semver string, \fIor\fR a valid second argument to semver\.inc (one of "patch", "minor", "major", "prepatch", "preminor", "premajor", "prerelease")\. In the second case, the existing version will be incremented by 1 in the specified field\. -. .P If run in a git repo, it will also create a version commit and tag, and fail if the repo is not clean\. -. .P If supplied with \fB\-\-message\fR (shorthand: \fB\-m\fR) config option, npm will -use it as a commit message when creating a version commit\. If the \fBmessage\fR config contains \fB%s\fR then that will be replaced with the +use it as a commit message when creating a version commit\. If the +\fBmessage\fR config contains \fB%s\fR then that will be replaced with the resulting version number\. For example: -. -.IP "" 4 -. +.P +.RS 2 .nf npm version patch \-m "Upgrade to %s for reasons" -. .fi -. -.IP "" 0 -. +.RE .P If the \fBsign\-git\-tag\fR config is set, then the tag will be signed using the \fB\-s\fR flag to git\. Note that you must have a default GPG key set up in your git config for this to work properly\. For example: -. -.IP "" 4 -. +.P +.RS 2 .nf $ npm config set sign\-git\-tag true $ npm version patch + You need a passphrase to unlock the secret key for user: "isaacs (http://blog\.izs\.me/) " 2048\-bit RSA key, ID 6C481CF6, created 2010\-08\-31 + Enter passphrase: -. .fi -. -.IP "" 0 -. -.SH "SEE ALSO" -. -.IP "\(bu" 4 +.RE +.SH SEE ALSO +.RS 0 +.IP \(bu 2 npm help init -. -.IP "\(bu" 4 +.IP \(bu 2 npm help 5 package\.json -. -.IP "\(bu" 4 +.IP \(bu 2 npm help 7 semver -. -.IP "" 0 + +.RE diff --git a/deps/npm/man/man1/npm-view.1 b/deps/npm/man/man1/npm-view.1 index 44b42b308d6..35ef045329e 100644 --- a/deps/npm/man/man1/npm-view.1 +++ b/deps/npm/man/man1/npm-view.1 @@ -1,186 +1,136 @@ -.\" Generated with Ronnjs 0.3.8 -.\" http://github.com/kapouer/ronnjs/ -. -.TH "NPM\-VIEW" "1" "September 2014" "" "" -. +.TH "NPM\-VIEW" "1" "October 2014" "" "" .SH "NAME" -\fBnpm-view\fR \-\- View registry info -. -.SH "SYNOPSIS" -. +\fBnpm-view\fR \- View registry info +.SH SYNOPSIS +.P +.RS 2 .nf -npm view [@] [[\.]\.\.\.] -npm v [@] [[\.]\.\.\.] -. +npm view [@/][@] [[\.]\.\.\.] +npm v [@/][@] [[\.]\.\.\.] .fi -. -.SH "DESCRIPTION" +.RE +.SH DESCRIPTION +.P This command shows data about a package and prints it to the stream referenced by the \fBoutfd\fR config, which defaults to stdout\. -. .P To show the package registry entry for the \fBconnect\fR package, you can do this: -. -.IP "" 4 -. +.P +.RS 2 .nf npm view connect -. .fi -. -.IP "" 0 -. +.RE .P The default version is "latest" if unspecified\. -. .P Field names can be specified after the package descriptor\. For example, to show the dependencies of the \fBronn\fR package at version 0\.3\.5, you could do the following: -. -.IP "" 4 -. +.P +.RS 2 .nf npm view ronn@0\.3\.5 dependencies -. .fi -. -.IP "" 0 -. +.RE .P You can view child field by separating them with a period\. To view the git repository URL for the latest version of npm, you could do this: -. -.IP "" 4 -. +.P +.RS 2 .nf npm view npm repository\.url -. .fi -. -.IP "" 0 -. +.RE .P This makes it easy to view information about a dependency with a bit of shell scripting\. For example, to view all the data about the version of opts that ronn depends on, you can do this: -. -.IP "" 4 -. +.P +.RS 2 .nf npm view opts@$(npm view ronn dependencies\.opts) -. .fi -. -.IP "" 0 -. 
+.RE
.P
For fields that are arrays, requesting a non\-numeric field will return
all of the values from the objects in the list\. For example, to get all
the contributor email addresses for the "express" project, you can do this:
-.
-.IP "" 4
-.
+.P
+.RS 2
.nf
npm view express contributors\.email
-.
.fi
-.
-.IP "" 0
-.
+.RE
.P
You may also use numeric indices in square braces to specifically select
an item in an array field\. To just get the email address of the first
contributor in the list, you can do this:
-.
-.IP "" 4
-.
+.P
+.RS 2
.nf
npm view express contributors[0]\.email
-.
.fi
-.
-.IP "" 0
-.
+.RE
.P
Multiple fields may be specified, and will be printed one after another\.
For example, to get all the contributor names and email addresses, you
can do this:
-.
-.IP "" 4
-.
+.P
+.RS 2
.nf
npm view express contributors\.name contributors\.email
-.
.fi
-.
-.IP "" 0
-.
+.RE
.P
"Person" fields are shown as a string if they would be shown as an
object\. So, for example, this will show the list of npm contributors in
the shortened string format\. (See npm help 5 \fBpackage\.json\fR for more on this\.)
-.
-.IP "" 4
-.
+.P
+.RS 2
.nf
npm view npm contributors
-.
.fi
-.
-.IP "" 0
-.
+.RE
.P
If a version range is provided, then data will be printed for every
matching version of the package\. This will show which version of jsdom
was required by each matching version of yui3:
-.
-.IP "" 4
-.
+.P
+.RS 2
.nf
-npm view yui3@\'>0\.5\.4\' dependencies\.jsdom
-.
+npm view yui3@'>0\.5\.4' dependencies\.jsdom
.fi
-.
-.IP "" 0
-.
-.SH "OUTPUT"
+.RE
+.SH OUTPUT
+.P
If only a single string field for a single version is output, then it
will not be colorized or quoted, so as to enable piping the output to
another command\. If the field is an object, it will be output as a JavaScript object literal\.
-.
.P
If the \-\-json flag is given, the outputted fields will be JSON\.
-.
.P
If the version range matches multiple versions, then each printed value
will be prefixed with the version it applies to\.
-.
.P
If multiple fields are requested, then each of them is prefixed with
the field name\.
-.
-.SH "SEE ALSO"
-.
-.IP "\(bu" 4
+.SH SEE ALSO
+.RS 0
+.IP \(bu 2
npm help search
-.
-.IP "\(bu" 4
+.IP \(bu 2
npm help 7 registry
-.
-.IP "\(bu" 4
+.IP \(bu 2
npm help config
-.
-.IP "\(bu" 4
+.IP \(bu 2
npm help 7 config
-.
-.IP "\(bu" 4
+.IP \(bu 2
npm help 5 npmrc
-.
-.IP "\(bu" 4
+.IP \(bu 2
npm help docs
-.
-.IP "" 0
+
+.RE
diff --git a/deps/npm/man/man1/npm-whoami.1 b/deps/npm/man/man1/npm-whoami.1
index bf43ae7eee4..34a3f04ac35 100644
--- a/deps/npm/man/man1/npm-whoami.1
+++ b/deps/npm/man/man1/npm-whoami.1
@@ -1,34 +1,26 @@
-.\" Generated with Ronnjs 0.3.8
-.\" http://github.com/kapouer/ronnjs/
-.
-.TH "NPM\-WHOAMI" "1" "September 2014" "" ""
-.
+.TH "NPM\-WHOAMI" "1" "October 2014" "" ""
.SH "NAME"
-\fBnpm-whoami\fR \-\- Display npm username
-.
-.SH "SYNOPSIS"
-.
+\fBnpm-whoami\fR \- Display npm username
+.SH SYNOPSIS
+.P
+.RS 2
.nf
npm whoami
-.
.fi
-.
-.SH "DESCRIPTION"
+.RE
+.SH DESCRIPTION
+.P
Print the \fBusername\fR config to standard output\.
-.
-.SH "SEE ALSO"
-.
-.IP "\(bu" 4
+.SH SEE ALSO
+.RS 0
+.IP \(bu 2
npm help config
-.
-.IP "\(bu" 4
+.IP \(bu 2
npm help 7 config
-.
-.IP "\(bu" 4
+.IP \(bu 2
npm help 5 npmrc
-.
-.IP "\(bu" 4
+.IP \(bu 2
npm help adduser
-.
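A few npm view invocations combining the ideas above:

    npm view express version                     # single field, plain output
    npm view express versions --json             # whole array as JSON
    npm view express repository.url maintainers  # multiple fields, prefixed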
-.IP "" 0 + +.RE diff --git a/deps/npm/man/man1/npm.1 b/deps/npm/man/man1/npm.1 index 5a0f94c740b..a275e4728d5 100644 --- a/deps/npm/man/man1/npm.1 +++ b/deps/npm/man/man1/npm.1 @@ -1,244 +1,212 @@ -.\" Generated with Ronnjs 0.3.8 -.\" http://github.com/kapouer/ronnjs/ -. -.TH "NPM" "1" "September 2014" "" "" -. +.TH "NPM" "1" "October 2014" "" "" .SH "NAME" -\fBnpm\fR \-\- node package manager -. -.SH "SYNOPSIS" -. +\fBnpm\fR \- node package manager +.SH SYNOPSIS +.P +.RS 2 .nf npm [args] -. .fi -. -.SH "VERSION" -1.4.28 -. -.SH "DESCRIPTION" +.RE +.SH VERSION +.P +2.1.6 +.SH DESCRIPTION +.P npm is the package manager for the Node JavaScript platform\. It puts modules in place so that node can find them, and manages dependency conflicts intelligently\. -. .P It is extremely configurable to support a wide variety of use cases\. Most commonly, it is used to publish, discover, install, and develop node programs\. -. .P Run \fBnpm help\fR to get a list of available commands\. -. -.SH "INTRODUCTION" +.SH INTRODUCTION +.P You probably got npm because you want to install stuff\. -. .P -Use \fBnpm install blerg\fR to install the latest version of "blerg"\. Check out npm help \fBnpm\-install\fR for more info\. It can do a lot of stuff\. -. +Use \fBnpm install blerg\fR to install the latest version of "blerg"\. Check out +npm help \fBnpm\-install\fR for more info\. It can do a lot of stuff\. +.P +Use the \fBnpm search\fR command to show everything that's available\. +Use \fBnpm ls\fR to show everything you've installed\. +.SH DEPENDENCIES .P -Use the \fBnpm search\fR command to show everything that\'s available\. -Use \fBnpm ls\fR to show everything you\'ve installed\. -. -.SH "DEPENDENCIES" If a package references to another package with a git URL, npm depends on a preinstalled git\. -. .P If one of the packages npm tries to install is a native node module and -requires compiling of C++ Code, npm will use node\-gyp \fIhttps://github\.com/TooTallNate/node\-gyp\fR for that task\. +requires compiling of C++ Code, npm will use +node\-gyp \fIhttps://github\.com/TooTallNate/node\-gyp\fR for that task\. For a Unix system, node\-gyp \fIhttps://github\.com/TooTallNate/node\-gyp\fR needs Python, make and a buildchain like GCC\. On Windows, Python and Microsoft Visual Studio C++ is needed\. Python 3 is not supported by node\-gyp \fIhttps://github\.com/TooTallNate/node\-gyp\fR\|\. -For more information visit the node\-gyp repository \fIhttps://github\.com/TooTallNate/node\-gyp\fR and +For more information visit +the node\-gyp repository \fIhttps://github\.com/TooTallNate/node\-gyp\fR and the node\-gyp Wiki \fIhttps://github\.com/TooTallNate/node\-gyp/wiki\fR\|\. -. -.SH "DIRECTORIES" +.SH DIRECTORIES +.P See npm help 5 \fBnpm\-folders\fR to learn about where npm puts stuff\. -. .P In particular, npm has two modes of operation: -. -.IP "\(bu" 4 +.RS 0 +.IP \(bu 2 global mode: -. .br -npm installs packages into the install prefix at \fBprefix/lib/node_modules\fR and bins are installed in \fBprefix/bin\fR\|\. -. -.IP "\(bu" 4 +npm installs packages into the install prefix at +\fBprefix/lib/node_modules\fR and bins are installed in \fBprefix/bin\fR\|\. +.IP \(bu 2 local mode: -. .br npm installs packages into the current project directory, which -defaults to the current working directory\. Packages are installed to \fB\|\./node_modules\fR, and bins are installed to \fB\|\./node_modules/\.bin\fR\|\. -. -.IP "" 0 -. +defaults to the current working directory\. 
Packages are installed to +\fB\|\./node_modules\fR, and bins are installed to \fB\|\./node_modules/\.bin\fR\|\. + +.RE .P Local mode is the default\. Use \fB\-\-global\fR or \fB\-g\fR on any command to operate in global mode instead\. -. -.SH "DEVELOPER USAGE" -If you\'re using npm to develop and publish your code, check out the +.SH DEVELOPER USAGE +.P +If you're using npm to develop and publish your code, check out the following help topics: -. -.IP "\(bu" 4 +.RS 0 +.IP \(bu 2 json: Make a package\.json file\. See npm help 5 \fBpackage\.json\fR\|\. -. -.IP "\(bu" 4 +.IP \(bu 2 link: -For linking your current working code into Node\'s path, so that you -don\'t have to reinstall every time you make a change\. Use \fBnpm link\fR to do this\. -. -.IP "\(bu" 4 +For linking your current working code into Node's path, so that you +don't have to reinstall every time you make a change\. Use +\fBnpm link\fR to do this\. +.IP \(bu 2 install: -It\'s a good idea to install things if you don\'t need the symbolic link\. -Especially, installing other peoples code from the registry is done via \fBnpm install\fR -. -.IP "\(bu" 4 +It's a good idea to install things if you don't need the symbolic link\. +Especially, installing other people's code from the registry is done via +\fBnpm install\fR +.IP \(bu 2 adduser: Create an account or log in\. Credentials are stored in the user config file\. -. -.IP "\(bu" 4 +.IP \(bu 2 publish: Use the \fBnpm publish\fR command to upload your code to the registry\. -. -.IP "" 0 -. -.SH "CONFIGURATION" + +.RE +.SH CONFIGURATION +.P npm is extremely configurable\. It reads its configuration options from 5 places\. -. -.IP "\(bu" 4 +.RS 0 +.IP \(bu 2 Command line switches: -. .br Set a config with \fB\-\-key val\fR\|\. All keys take a value, even if they -are booleans (the config parser doesn\'t know what the options are at +are booleans (the config parser doesn't know what the options are at the time of parsing\.) If no value is provided, then the option is set to boolean \fBtrue\fR\|\. -. -.IP "\(bu" 4 +.IP \(bu 2 Environment Variables: -. .br -Set any config by prefixing the name in an environment variable with \fBnpm_config_\fR\|\. For example, \fBexport npm_config_key=val\fR\|\. -. -.IP "\(bu" 4 +Set any config by prefixing the name in an environment variable with +\fBnpm_config_\fR\|\. For example, \fBexport npm_config_key=val\fR\|\. +.IP \(bu 2 User Configs: -. .br The file at $HOME/\.npmrc is an ini\-formatted list of configs\. If present, it is parsed\. If the \fBuserconfig\fR option is set in the cli or env, then that will be used instead\. -. -.IP "\(bu" 4 +.IP \(bu 2 Global Configs: -. .br The file found at \.\./etc/npmrc (from the node executable, by default this resolves to /usr/local/etc/npmrc) will be parsed if it is found\. If the \fBglobalconfig\fR option is set in the cli, env, or user config, then that file is parsed instead\. -. -.IP "\(bu" 4 +.IP \(bu 2 Defaults: -. .br -npm\'s default configuration options are defined in +npm's default configuration options are defined in lib/utils/config\-defs\.js\. These must not be changed\. -. -.IP "" 0 -. + +.RE .P See npm help 7 \fBnpm\-config\fR for much much more information\. -. -.SH "CONTRIBUTIONS" +.SH CONTRIBUTIONS +.P Patches welcome! -. -.IP "\(bu" 4 +.RS 0 +.IP \(bu 2 code: Read through npm help 7 \fBnpm\-coding\-style\fR if you plan to submit code\. -You don\'t have to agree with it, but you do have to follow it\.
+.IP \(bu 2 docs: If you find an error in the documentation, edit the appropriate markdown -file in the "doc" folder\. (Don\'t worry about generating the man page\.) -. -.IP "" 0 -. +file in the "doc" folder\. (Don't worry about generating the man page\.) + +.RE .P -Contributors are listed in npm\'s \fBpackage\.json\fR file\. You can view them +Contributors are listed in npm's \fBpackage\.json\fR file\. You can view them easily by doing \fBnpm view npm contributors\fR\|\. -. .P -If you would like to contribute, but don\'t know what to work on, check +If you would like to contribute, but don't know what to work on, check the issues list or ask on the mailing list\. -. -.IP "\(bu" 4 -\fIhttp://github\.com/npm/npm/issues\fR -. -.IP "\(bu" 4 -\fInpm\-@googlegroups\.com\fR -. -.IP "" 0 -. -.SH "BUGS" +.RS 0 +.IP \(bu 2 +http://github\.com/npm/npm/issues +.IP \(bu 2 +npm\-@googlegroups\.com + +.RE +.SH BUGS +.P When you find issues, please report them: -. -.IP "\(bu" 4 -web: \fIhttp://github\.com/npm/npm/issues\fR -. -.IP "\(bu" 4 -email: \fInpm\-@googlegroups\.com\fR -. -.IP "" 0 -. -.P -Be sure to include \fIall\fR of the output from the npm command that didn\'t work +.RS 0 +.IP \(bu 2 +web: +http://github\.com/npm/npm/issues +.IP \(bu 2 +email: +npm\-@googlegroups\.com + +.RE +.P +Be sure to include \fIall\fR of the output from the npm command that didn't work as expected\. The \fBnpm\-debug\.log\fR file is also helpful to provide\. -. .P You can also look for isaacs in #node\.js on irc://irc\.freenode\.net\. He will no doubt tell you to put the output in a gist or email\. -. -.SH "AUTHOR" -Isaac Z\. Schlueter \fIhttp://blog\.izs\.me/\fR :: isaacs \fIhttps://github\.com/isaacs/\fR :: @izs \fIhttp://twitter\.com/izs\fR :: \fIi@izs\.me\fR -. -.SH "SEE ALSO" -. -.IP "\(bu" 4 +.SH AUTHOR +.P +Isaac Z\. Schlueter \fIhttp://blog\.izs\.me/\fR :: +isaacs \fIhttps://github\.com/isaacs/\fR :: +@izs \fIhttp://twitter\.com/izs\fR :: +i@izs\.me +.SH SEE ALSO +.RS 0 +.IP \(bu 2 npm help help -. -.IP "\(bu" 4 +.IP \(bu 2 npm help 7 faq -. -.IP "\(bu" 4 +.IP \(bu 2 README -. -.IP "\(bu" 4 +.IP \(bu 2 npm help 5 package\.json -. -.IP "\(bu" 4 +.IP \(bu 2 npm help install -. -.IP "\(bu" 4 +.IP \(bu 2 npm help config -. -.IP "\(bu" 4 +.IP \(bu 2 npm help 7 config -. -.IP "\(bu" 4 +.IP \(bu 2 npm help 5 npmrc -. -.IP "\(bu" 4 +.IP \(bu 2 npm help 7 index -. -.IP "\(bu" 4 +.IP \(bu 2 npm apihelp npm -. -.IP "" 0 + +.RE diff --git a/deps/npm/man/man3/npm-bin.3 b/deps/npm/man/man3/npm-bin.3 index 97de75de859..4c76b8a0cdb 100644 --- a/deps/npm/man/man3/npm-bin.3 +++ b/deps/npm/man/man3/npm-bin.3 @@ -1,21 +1,17 @@ -.\" Generated with Ronnjs 0.3.8 -.\" http://github.com/kapouer/ronnjs/ -. -.TH "NPM\-BIN" "3" "September 2014" "" "" -. +.TH "NPM\-BIN" "3" "October 2014" "" "" .SH "NAME" -\fBnpm-bin\fR \-\- Display npm bin folder -. -.SH "SYNOPSIS" -. +\fBnpm-bin\fR \- Display npm bin folder +.SH SYNOPSIS +.P +.RS 2 .nf npm\.commands\.bin(args, cb) -. .fi -. -.SH "DESCRIPTION" +.RE +.SH DESCRIPTION +.P Print the folder where npm will install executables\. -. .P This function should not be used programmatically\. Instead, just refer -to the \fBnpm\.bin\fR member\. +to the \fBnpm\.bin\fR property\. + diff --git a/deps/npm/man/man3/npm-bugs.3 b/deps/npm/man/man3/npm-bugs.3 index bb85060afe1..cd8dda6ea5e 100644 --- a/deps/npm/man/man3/npm-bugs.3 +++ b/deps/npm/man/man3/npm-bugs.3 @@ -1,28 +1,23 @@ -.\" Generated with Ronnjs 0.3.8 -.\" http://github.com/kapouer/ronnjs/ -. -.TH "NPM\-BUGS" "3" "September 2014" "" "" -. 
+.TH "NPM\-BUGS" "3" "October 2014" "" "" .SH "NAME" -\fBnpm-bugs\fR \-\- Bugs for a package in a web browser maybe -. -.SH "SYNOPSIS" -. +\fBnpm-bugs\fR \- Bugs for a package in a web browser maybe +.SH SYNOPSIS +.P +.RS 2 .nf npm\.commands\.bugs(package, callback) -. .fi -. -.SH "DESCRIPTION" -This command tries to guess at the likely location of a package\'s +.RE +.SH DESCRIPTION +.P +This command tries to guess at the likely location of a package's bug tracker URL, and then tries to open it using the \fB\-\-browser\fR config param\. -. .P Like other commands, the first parameter is an array\. This command only uses the first element, which is expected to be a package name with an optional version number\. -. .P This command will launch a browser, so this command may not be the most friendly for programmatic use\. + diff --git a/deps/npm/man/man3/npm-cache.3 b/deps/npm/man/man3/npm-cache.3 index b3396446ff3..1dccd8fd0c1 100644 --- a/deps/npm/man/man3/npm-cache.3 +++ b/deps/npm/man/man3/npm-cache.3 @@ -1,40 +1,34 @@ -.\" Generated with Ronnjs 0.3.8 -.\" http://github.com/kapouer/ronnjs/ -. -.TH "NPM\-CACHE" "3" "September 2014" "" "" -. +.TH "NPM\-CACHE" "3" "October 2014" "" "" .SH "NAME" -\fBnpm-cache\fR \-\- manage the npm cache programmatically -. -.SH "SYNOPSIS" -. +\fBnpm-cache\fR \- manage the npm cache programmatically +.SH SYNOPSIS +.P +.RS 2 .nf npm\.commands\.cache([args], callback) + // helpers npm\.commands\.cache\.clean([args], callback) npm\.commands\.cache\.add([args], callback) npm\.commands\.cache\.read(name, version, forceBypass, callback) -. .fi -. -.SH "DESCRIPTION" +.RE +.SH DESCRIPTION +.P This acts much the same ways as the npm help cache command line functionality\. -. .P The callback is called with the package\.json data of the thing that is eventually added to or read from the cache\. -. .P The top level \fBnpm\.commands\.cache(\.\.\.)\fR functionality is a public interface, and like all commands on the \fBnpm\.commands\fR object, it will match the command line behavior exactly\. -. .P However, the cache folder structure and the cache helper functions are considered \fBinternal\fR API surface, and as such, may change in future releases of npm, potentially without warning or significant version incrementation\. -. .P Use at your own risk\. + diff --git a/deps/npm/man/man3/npm-commands.3 b/deps/npm/man/man3/npm-commands.3 index 003f5e5ab47..87ad3253b12 100644 --- a/deps/npm/man/man3/npm-commands.3 +++ b/deps/npm/man/man3/npm-commands.3 @@ -1,35 +1,28 @@ -.\" Generated with Ronnjs 0.3.8 -.\" http://github.com/kapouer/ronnjs/ -. -.TH "NPM\-COMMANDS" "3" "September 2014" "" "" -. +.TH "NPM\-COMMANDS" "3" "October 2014" "" "" .SH "NAME" -\fBnpm-commands\fR \-\- npm commands -. -.SH "SYNOPSIS" -. +\fBnpm-commands\fR \- npm commands +.SH SYNOPSIS +.P +.RS 2 .nf npm\.commands[](args, callback) -. .fi -. -.SH "DESCRIPTION" +.RE +.SH DESCRIPTION +.P npm comes with a full set of commands, and each of the commands takes a similar set of arguments\. -. .P In general, all commands on the command object take an \fBarray\fR of positional argument \fBstrings\fR\|\. The last argument to any function is a callback\. Some commands are special and take other optional arguments\. -. .P All commands have their own man page\. See \fBman npm\-\fR for command\-line usage, or \fBman 3 npm\-\fR for programmatic usage\. -. -.SH "SEE ALSO" -. -.IP "\(bu" 4 +.SH SEE ALSO +.RS 0 +.IP \(bu 2 npm help 7 index -. 
-.IP "" 0 + +.RE diff --git a/deps/npm/man/man3/npm-config.3 b/deps/npm/man/man3/npm-config.3 index 578b939a636..763e2252548 100644 --- a/deps/npm/man/man3/npm-config.3 +++ b/deps/npm/man/man3/npm-config.3 @@ -1,69 +1,49 @@ -.\" Generated with Ronnjs 0.3.8 -.\" http://github.com/kapouer/ronnjs/ -. -.TH "NPM\-CONFIG" "3" "September 2014" "" "" -. +.TH "NPM\-CONFIG" "3" "October 2014" "" "" .SH "NAME" -\fBnpm-config\fR \-\- Manage the npm configuration files -. -.SH "SYNOPSIS" -. +\fBnpm-config\fR \- Manage the npm configuration files +.SH SYNOPSIS +.P +.RS 2 .nf npm\.commands\.config(args, callback) var val = npm\.config\.get(key) npm\.config\.set(key, val) -. .fi -. -.SH "DESCRIPTION" +.RE +.SH DESCRIPTION +.P This function acts much the same way as the command\-line version\. The first element in the array tells config what to do\. Possible values are: -. -.IP "\(bu" 4 +.RS 0 +.IP \(bu 2 \fBset\fR -. -.IP -Sets a config parameter\. The second element in \fBargs\fR is interpreted as the -key, and the third element is interpreted as the value\. -. -.IP "\(bu" 4 + Sets a config parameter\. The second element in \fBargs\fR is interpreted as the + key, and the third element is interpreted as the value\. +.IP \(bu 2 \fBget\fR -. -.IP -Gets the value of a config parameter\. The second element in \fBargs\fR is the -key to get the value of\. -. -.IP "\(bu" 4 + Gets the value of a config parameter\. The second element in \fBargs\fR is the + key to get the value of\. +.IP \(bu 2 \fBdelete\fR (\fBrm\fR or \fBdel\fR) -. -.IP -Deletes a parameter from the config\. The second element in \fBargs\fR is the -key to delete\. -. -.IP "\(bu" 4 + Deletes a parameter from the config\. The second element in \fBargs\fR is the + key to delete\. +.IP \(bu 2 \fBlist\fR (\fBls\fR) -. -.IP -Show all configs that aren\'t secret\. No parameters necessary\. -. -.IP "\(bu" 4 + Show all configs that aren't secret\. No parameters necessary\. +.IP \(bu 2 \fBedit\fR: -. -.IP -Opens the config file in the default editor\. This command isn\'t very useful -programmatically, but it is made available\. -. -.IP "" 0 -. + Opens the config file in the default editor\. This command isn't very useful + programmatically, but it is made available\. + +.RE .P To programmatically access npm configuration settings, or set them for the duration of a program, use the \fBnpm\.config\.set\fR and \fBnpm\.config\.get\fR functions instead\. -. -.SH "SEE ALSO" -. -.IP "\(bu" 4 +.SH SEE ALSO +.RS 0 +.IP \(bu 2 npm apihelp npm -. -.IP "" 0 + +.RE diff --git a/deps/npm/man/man3/npm-deprecate.3 b/deps/npm/man/man3/npm-deprecate.3 index 29e4c345154..9b543d36d5a 100644 --- a/deps/npm/man/man3/npm-deprecate.3 +++ b/deps/npm/man/man3/npm-deprecate.3 @@ -1,57 +1,43 @@ -.\" Generated with Ronnjs 0.3.8 -.\" http://github.com/kapouer/ronnjs/ -. -.TH "NPM\-DEPRECATE" "3" "September 2014" "" "" -. +.TH "NPM\-DEPRECATE" "3" "October 2014" "" "" .SH "NAME" -\fBnpm-deprecate\fR \-\- Deprecate a version of a package -. -.SH "SYNOPSIS" -. +\fBnpm-deprecate\fR \- Deprecate a version of a package +.SH SYNOPSIS +.P +.RS 2 .nf npm\.commands\.deprecate(args, callback) -. .fi -. -.SH "DESCRIPTION" +.RE +.SH DESCRIPTION +.P This command will update the npm registry entry for a package, providing a deprecation warning to all who attempt to install it\. -. .P -The \'args\' parameter must have exactly two elements: -. -.IP "\(bu" 4 +The 'args' parameter must have exactly two elements: +.RS 0 +.IP \(bu 2 \fBpackage[@version]\fR -. 
-.IP -The \fBversion\fR portion is optional, and may be either a range, or a -specific version, or a tag\. -. -.IP "\(bu" 4 + The \fBversion\fR portion is optional, and may be either a range, or a + specific version, or a tag\. +.IP \(bu 2 \fBmessage\fR -. -.IP -The warning message that will be printed whenever a user attempts to -install the package\. -. -.IP "" 0 -. + The warning message that will be printed whenever a user attempts to + install the package\. + +.RE .P -Note that you must be the package owner to deprecate something\. See the \fBowner\fR and \fBadduser\fR help topics\. -. +Note that you must be the package owner to deprecate something\. See the +\fBowner\fR and \fBadduser\fR help topics\. .P To un\-deprecate a package, specify an empty string (\fB""\fR) for the \fBmessage\fR argument\. -. -.SH "SEE ALSO" -. -.IP "\(bu" 4 +.SH SEE ALSO +.RS 0 +.IP \(bu 2 npm apihelp publish -. -.IP "\(bu" 4 +.IP \(bu 2 npm apihelp unpublish -. -.IP "\(bu" 4 +.IP \(bu 2 npm help 7 registry -. -.IP "" 0 + +.RE diff --git a/deps/npm/man/man3/npm-docs.3 b/deps/npm/man/man3/npm-docs.3 index e3039c2ef5e..ad93e305bd5 100644 --- a/deps/npm/man/man3/npm-docs.3 +++ b/deps/npm/man/man3/npm-docs.3 @@ -1,28 +1,23 @@ -.\" Generated with Ronnjs 0.3.8 -.\" http://github.com/kapouer/ronnjs/ -. -.TH "NPM\-DOCS" "3" "September 2014" "" "" -. +.TH "NPM\-DOCS" "3" "October 2014" "" "" .SH "NAME" -\fBnpm-docs\fR \-\- Docs for a package in a web browser maybe -. -.SH "SYNOPSIS" -. +\fBnpm-docs\fR \- Docs for a package in a web browser maybe +.SH SYNOPSIS +.P +.RS 2 .nf npm\.commands\.docs(package, callback) -. .fi -. -.SH "DESCRIPTION" -This command tries to guess at the likely location of a package\'s +.RE +.SH DESCRIPTION +.P +This command tries to guess at the likely location of a package's documentation URL, and then tries to open it using the \fB\-\-browser\fR config param\. -. .P Like other commands, the first parameter is an array\. This command only uses the first element, which is expected to be a package name with an optional version number\. -. .P This command will launch a browser, so this command may not be the most friendly for programmatic use\. + diff --git a/deps/npm/man/man3/npm-edit.3 b/deps/npm/man/man3/npm-edit.3 index bcdabb6f237..82767c8b7e1 100644 --- a/deps/npm/man/man3/npm-edit.3 +++ b/deps/npm/man/man3/npm-edit.3 @@ -1,35 +1,28 @@ -.\" Generated with Ronnjs 0.3.8 -.\" http://github.com/kapouer/ronnjs/ -. -.TH "NPM\-EDIT" "3" "September 2014" "" "" -. +.TH "NPM\-EDIT" "3" "October 2014" "" "" .SH "NAME" -\fBnpm-edit\fR \-\- Edit an installed package -. -.SH "SYNOPSIS" -. +\fBnpm-edit\fR \- Edit an installed package +.SH SYNOPSIS +.P +.RS 2 .nf npm\.commands\.edit(package, callback) -. .fi -. -.SH "DESCRIPTION" -Opens the package folder in the default editor (or whatever you\'ve +.RE +.SH DESCRIPTION +.P +Opens the package folder in the default editor (or whatever you've configured as the npm \fBeditor\fR config \-\- see \fBnpm help config\fR\|\.) -. .P After it has been edited, the package is rebuilt so as to pick up any changes in compiled packages\. -. .P For instance, you can do \fBnpm install connect\fR to install connect into your package, and then \fBnpm\.commands\.edit(["connect"], callback)\fR to make a few changes to your locally installed copy\. -. .P The first parameter is a string array with a single element, the package to open\. The package can optionally have a version number attached\. -. 
.P Since this command opens an editor in a new process, be careful about where and how this is used\. + diff --git a/deps/npm/man/man3/npm-explore.3 b/deps/npm/man/man3/npm-explore.3 index 0918dae972c..54948eadc17 100644 --- a/deps/npm/man/man3/npm-explore.3 +++ b/deps/npm/man/man3/npm-explore.3 @@ -1,28 +1,22 @@ -.\" Generated with Ronnjs 0.3.8 -.\" http://github.com/kapouer/ronnjs/ -. -.TH "NPM\-EXPLORE" "3" "September 2014" "" "" -. +.TH "NPM\-EXPLORE" "3" "October 2014" "" "" .SH "NAME" -\fBnpm-explore\fR \-\- Browse an installed package -. -.SH "SYNOPSIS" -. +\fBnpm-explore\fR \- Browse an installed package +.SH SYNOPSIS +.P +.RS 2 .nf npm\.commands\.explore(args, callback) -. .fi -. -.SH "DESCRIPTION" +.RE +.SH DESCRIPTION +.P Spawn a subshell in the directory of the installed package specified\. -. .P If a command is specified, then it is run in the subshell, which then immediately terminates\. -. .P Note that the package is \fInot\fR automatically rebuilt afterwards, so be sure to use \fBnpm rebuild \fR if you make any changes\. -. .P -The first element in the \'args\' parameter must be a package name\. After that is the optional command, which can be any number of strings\. All of the strings will be combined into one, space\-delimited command\. +The first element in the 'args' parameter must be a package name\. After that is the optional command, which can be any number of strings\. All of the strings will be combined into one, space\-delimited command\. + diff --git a/deps/npm/man/man3/npm-help-search.3 b/deps/npm/man/man3/npm-help-search.3 index 2c39f5c7b42..8f4f346c260 100644 --- a/deps/npm/man/man3/npm-help-search.3 +++ b/deps/npm/man/man3/npm-help-search.3 @@ -1,51 +1,41 @@ -.\" Generated with Ronnjs 0.3.8 -.\" http://github.com/kapouer/ronnjs/ -. -.TH "NPM\-HELP\-SEARCH" "3" "September 2014" "" "" -. +.TH "NPM\-HELP\-SEARCH" "3" "October 2014" "" "" .SH "NAME" -\fBnpm-help-search\fR \-\- Search the help pages -. -.SH "SYNOPSIS" -. +\fBnpm-help-search\fR \- Search the help pages +.SH SYNOPSIS +.P +.RS 2 .nf npm\.commands\.helpSearch(args, [silent,] callback) -. .fi -. -.SH "DESCRIPTION" +.RE +.SH DESCRIPTION +.P This command is rarely useful, but it exists in the rare case that it is\. -. .P This command takes an array of search terms and returns the help pages that match in order of best match\. -. .P If there is only one match, then npm displays that help section\. If there are multiple results, the results are printed to the screen formatted and the array of results is returned\. Each result is an object with these properties: -. -.IP "\(bu" 4 +.RS 0 +.IP \(bu 2 hits: A map of args to number of hits on that arg\. For example, {"npm": 3} -. -.IP "\(bu" 4 +.IP \(bu 2 found: Total number of unique args that matched\. -. -.IP "\(bu" 4 +.IP \(bu 2 totalHits: Total number of hits\. -. -.IP "\(bu" 4 +.IP \(bu 2 lines: An array of all matching lines (and some adjacent lines)\. -. -.IP "\(bu" 4 +.IP \(bu 2 file: Name of the file that matched -. -.IP "" 0 -. + +.RE .P -The silent parameter is not neccessary not used, but it may in the future\. +The silent parameter is not necessary, but it may be used in the future\. + diff --git a/deps/npm/man/man3/npm-init.3 b/deps/npm/man/man3/npm-init.3 index d4eba220526..d5da00dd8f6 100644 --- a/deps/npm/man/man3/npm-init.3 +++ b/deps/npm/man/man3/npm-init.3 @@ -1,39 +1,32 @@ -.\" Generated with Ronnjs 0.3.8 -.\" http://github.com/kapouer/ronnjs/ -. -.TH "INIT" "3" "September 2014" "" "" -.
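A minimal sketch of calling helpSearch programmatically, assuming require('npm') as the entry point and the result properties listed above (file, totalHits, etc.); the search term is a placeholder:

    var npm = require('npm');

    npm.load({}, function (er) {
      if (er) throw er;
      // silent=true suppresses printing; results come back in the callback
      npm.commands.helpSearch(['install'], true, function (er, results) {
        if (er) throw er;
        results.forEach(function (r) {
          console.log(r.file, '- total hits:', r.totalHits);
        });
      });
    });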
+.TH "NPM" "" "October 2014" "" "" .SH "NAME" -\fBinit\fR \-\- Interactively create a package\.json file -. -.SH "SYNOPSIS" -. +\fBnpm\fR +.SH SYNOPSIS +.P +.RS 2 .nf npm\.commands\.init(args, callback) -. .fi -. -.SH "DESCRIPTION" +.RE +.SH DESCRIPTION +.P This will ask you a bunch of questions, and then write a package\.json for you\. -. .P It attempts to make reasonable guesses about what you want things to be set to, -and then writes a package\.json file with the options you\'ve selected\. -. +and then writes a package\.json file with the options you've selected\. .P -If you already have a package\.json file, it\'ll read that first, and default to +If you already have a package\.json file, it'll read that first, and default to the options in there\. -. .P It is strictly additive, so it does not delete options from your package\.json without a really good reason to do so\. -. .P -Since this function expects to be run on the command\-line, it doesn\'t work very +Since this function expects to be run on the command\-line, it doesn't work very well as a programmatically\. The best option is to roll your own, and since JavaScript makes it stupid simple to output formatted JSON, that is the -preferred method\. If you\'re sure you want to handle command\-line prompting, +preferred method\. If you're sure you want to handle command\-line prompting, then go ahead and use this programmatically\. -. -.SH "SEE ALSO" +.SH SEE ALSO +.P npm help 5 package\.json + diff --git a/deps/npm/man/man3/npm-install.3 b/deps/npm/man/man3/npm-install.3 index 4b09fbe80fe..ec98278cad9 100644 --- a/deps/npm/man/man3/npm-install.3 +++ b/deps/npm/man/man3/npm-install.3 @@ -1,29 +1,23 @@ -.\" Generated with Ronnjs 0.3.8 -.\" http://github.com/kapouer/ronnjs/ -. -.TH "NPM\-INSTALL" "3" "September 2014" "" "" -. +.TH "NPM\-INSTALL" "3" "October 2014" "" "" .SH "NAME" -\fBnpm-install\fR \-\- install a package programmatically -. -.SH "SYNOPSIS" -. +\fBnpm-install\fR \- install a package programmatically +.SH SYNOPSIS +.P +.RS 2 .nf npm\.commands\.install([where,] packages, callback) -. .fi -. -.SH "DESCRIPTION" +.RE +.SH DESCRIPTION +.P This acts much the same ways as installing on the command\-line\. -. .P -The \'where\' parameter is optional and only used internally, and it specifies +The 'where' parameter is optional and only used internally, and it specifies where the packages should be installed to\. -. .P -The \'packages\' parameter is an array of strings\. Each element in the array is +The 'packages' parameter is an array of strings\. Each element in the array is the name of a package to be installed\. -. .P -Finally, \'callback\' is a function that will be called when all packages have been +Finally, 'callback' is a function that will be called when all packages have been installed or when an error has been encountered\. + diff --git a/deps/npm/man/man3/npm-link.3 b/deps/npm/man/man3/npm-link.3 index dbecc0edb7d..0c379a48c59 100644 --- a/deps/npm/man/man3/npm-link.3 +++ b/deps/npm/man/man3/npm-link.3 @@ -1,53 +1,41 @@ -.\" Generated with Ronnjs 0.3.8 -.\" http://github.com/kapouer/ronnjs/ -. -.TH "NPM\-LINK" "3" "September 2014" "" "" -. +.TH "NPM\-LINK" "3" "October 2014" "" "" .SH "NAME" -\fBnpm-link\fR \-\- Symlink a package folder -. -.SH "SYNOPSIS" -. +\fBnpm-link\fR \- Symlink a package folder +.SH SYNOPSIS +.P +.RS 2 .nf npm\.commands\.link(callback) npm\.commands\.link(packages, callback) -. .fi -. -.SH "DESCRIPTION" +.RE +.SH DESCRIPTION +.P Package linking is a two\-step process\. -. 
.P Without parameters, link will create a globally\-installed symbolic link from \fBprefix/package\-name\fR to the current folder\. -. .P With a parameter, link will create a symlink from the local \fBnode_modules\fR folder to the global symlink\. -. .P When creating tarballs for \fBnpm publish\fR, the linked packages are "snapshotted" to their current state by resolving the symbolic links\. -. .P This is handy for installing your own stuff, so that you can work on it and test it iteratively without having to continually rebuild\. -. .P For example: -. -.IP "" 4 -. +.P +.RS 2 .nf npm\.commands\.link(cb)           # creates global link from the cwd # (say redis package) -npm\.commands\.link(\'redis\', cb)  # link\-install the package -. +npm\.commands\.link('redis', cb)  # link\-install the package .fi -. -.IP "" 0 -. +.RE .P Now, any changes to the redis package will be reflected in the package in the current working directory + diff --git a/deps/npm/man/man3/npm-load.3 b/deps/npm/man/man3/npm-load.3 index 4180127d7e1..61fac42ebd1 100644 --- a/deps/npm/man/man3/npm-load.3 +++ b/deps/npm/man/man3/npm-load.3 @@ -1,44 +1,34 @@ -.\" Generated with Ronnjs 0.3.8 -.\" http://github.com/kapouer/ronnjs/ -. -.TH "NPM\-LOAD" "3" "September 2014" "" "" -. +.TH "NPM\-LOAD" "3" "October 2014" "" "" .SH "NAME" -\fBnpm-load\fR \-\- Load config settings -. -.SH "SYNOPSIS" -. +\fBnpm-load\fR \- Load config settings +.SH SYNOPSIS +.P +.RS 2 .nf npm\.load(conf, cb) -. .fi -. -.SH "DESCRIPTION" +.RE +.SH DESCRIPTION +.P npm\.load() must be called before any other function call\. Both parameters are optional, but the second is recommended\. -. .P -The first parameter is an object hash of command\-line config params, and the -second parameter is a callback that will be called when npm is loaded and -ready to serve\. -. +The first parameter is an object containing command\-line config params, and the +second parameter is a callback that will be called when npm is loaded and ready +to serve\. .P The first parameter should follow a similar structure as the package\.json config object\. -. .P For example, to emulate the \-\-dev flag, pass an object that looks like this: -. -.IP "" 4 -. +.P +.RS 2 .nf { "dev": true } -. .fi -. -.IP "" 0 -. +.RE .P For a list of all the available command\-line configs, see \fBnpm help config\fR + diff --git a/deps/npm/man/man3/npm-ls.3 b/deps/npm/man/man3/npm-ls.3 index 723e2bc45b0..84558abeb3a 100644 --- a/deps/npm/man/man3/npm-ls.3 +++ b/deps/npm/man/man3/npm-ls.3 @@ -1,86 +1,68 @@ -.\" Generated with Ronnjs 0.3.8 -.\" http://github.com/kapouer/ronnjs/ -. -.TH "NPM\-LS" "3" "September 2014" "" "" -. +.TH "NPM\-LS" "3" "October 2014" "" "" .SH "NAME" -\fBnpm-ls\fR \-\- List installed packages -. -.SH "SYNOPSIS" -. +\fBnpm-ls\fR \- List installed packages +.SH SYNOPSIS +.P +.RS 2 .nf npm\.commands\.ls(args, [silent,] callback) -. .fi -. -.SH "DESCRIPTION" +.RE +.SH DESCRIPTION +.P This command will print to stdout all the versions of packages that are installed, as well as their dependencies, in a tree\-structure\. It will also return that data using the callback\. -. .P This command does not take any arguments, but args must be defined\. Beyond that, if any arguments are passed in, npm will politely warn that it does not take positional arguments, though you may set config flags like with any other command, such as \fBglobal\fR to list global packages\. -. .P It will print out extraneous, missing, and invalid packages\. -.
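Putting the npm-load description above into a runnable form, with the conf object the page itself suggests for emulating the --dev flag (assumes require('npm') as the entry point):

    var npm = require('npm');

    // emulate the --dev flag, as the npm-load page above shows
    npm.load({ dev: true }, function (er) {
      if (er) throw er;
      console.log('npm loaded; dev =', npm.config.get('dev'));
    });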
.P If the silent parameter is set to true, nothing will be output to the screen, but the data will still be returned\. -. .P Callback is provided an error if one occurred, the full data about which packages are installed and which dependencies they will receive, and a "lite" data object which just shows which versions are installed where\. Note that the full data object is a circular structure, so care must be taken if it is serialized to JSON\. -. -.SH "CONFIGURATION" -. -.SS "long" -. -.IP "\(bu" 4 +.SH CONFIGURATION +.SS long +.RS 0 +.IP \(bu 2 Default: false -. -.IP "\(bu" 4 +.IP \(bu 2 Type: Boolean -. -.IP "" 0 -. + +.RE .P Show extended information\. -. -.SS "parseable" -. -.IP "\(bu" 4 +.SS parseable +.RS 0 +.IP \(bu 2 Default: false -. -.IP "\(bu" 4 +.IP \(bu 2 Type: Boolean -. -.IP "" 0 -. + +.RE .P Show parseable output instead of tree view\. -. -.SS "global" -. -.IP "\(bu" 4 +.SS global +.RS 0 +.IP \(bu 2 Default: false -. -.IP "\(bu" 4 +.IP \(bu 2 Type: Boolean -. -.IP "" 0 -. + +.RE .P List packages in the global install prefix instead of in the current project\. -. .P -Note, if parseable is set or long isn\'t set, then duplicates will be trimmed\. +Note, if parseable is set or long isn't set, then duplicates will be trimmed\. This means that if a submodule has the same dependency as a parent module, then the dependency will only be output once\. + diff --git a/deps/npm/man/man3/npm-outdated.3 b/deps/npm/man/man3/npm-outdated.3 index 3da841dc3dd..2bba8469fc5 100644 --- a/deps/npm/man/man3/npm-outdated.3 +++ b/deps/npm/man/man3/npm-outdated.3 @@ -1,21 +1,17 @@ -.\" Generated with Ronnjs 0.3.8 -.\" http://github.com/kapouer/ronnjs/ -. -.TH "NPM\-OUTDATED" "3" "September 2014" "" "" -. +.TH "NPM\-OUTDATED" "3" "October 2014" "" "" .SH "NAME" -\fBnpm-outdated\fR \-\- Check for outdated packages -. -.SH "SYNOPSIS" -. +\fBnpm-outdated\fR \- Check for outdated packages +.SH SYNOPSIS +.P +.RS 2 .nf npm\.commands\.outdated([packages,] callback) -. .fi -. -.SH "DESCRIPTION" +.RE +.SH DESCRIPTION +.P This command will check the registry to see if the specified packages are currently outdated\. -. .P -If the \'packages\' parameter is left out, npm will check all packages\. +If the 'packages' parameter is left out, npm will check all packages\. + diff --git a/deps/npm/man/man3/npm-owner.3 b/deps/npm/man/man3/npm-owner.3 index 38cc42d6992..101b752e9f7 100644 --- a/deps/npm/man/man3/npm-owner.3 +++ b/deps/npm/man/man3/npm-owner.3 @@ -1,52 +1,43 @@ -.\" Generated with Ronnjs 0.3.8 -.\" http://github.com/kapouer/ronnjs/ -. -.TH "NPM\-OWNER" "3" "September 2014" "" "" -. +.TH "NPM\-OWNER" "3" "October 2014" "" "" .SH "NAME" -\fBnpm-owner\fR \-\- Manage package owners -. -.SH "SYNOPSIS" -. +\fBnpm-owner\fR \- Manage package owners +.SH SYNOPSIS +.P +.RS 2 .nf npm\.commands\.owner(args, callback) -. .fi -. -.SH "DESCRIPTION" -The first element of the \'args\' parameter defines what to do, and the subsequent +.RE +.SH DESCRIPTION +.P +The first element of the 'args' parameter defines what to do, and the subsequent elements depend on the action\. Possible values for the action are (the order of parameters is given in parentheses): -. -.IP "\(bu" 4 +.RS 0 +.IP \(bu 2 ls (package): List all the users who have access to modify a package and push new versions\. Handy when you need to know who to bug for help\. -. -.IP "\(bu" 4 +.IP \(bu 2 add (user, package): Add a new user as a maintainer of a package\. This user is enabled to modify metadata, publish new versions, and add other owners\. -.
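A sketch of reading the ls callback data described above, assuming require('npm') as the entry point; the exact shape of the "lite" object is an assumption here (a dependencies map of name to { version, ... }), so treat it as illustrative:

    var npm = require('npm');

    npm.load({}, function (er) {
      if (er) throw er;
      // silent=true: nothing printed; read the "lite" data instead
      npm.commands.ls([], true, function (er, data, lite) {
        if (er) throw er;
        // assumed shape: lite.dependencies maps names to { version, ... }
        Object.keys(lite.dependencies || {}).forEach(function (name) {
          console.log(name, lite.dependencies[name].version);
        });
      });
    });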
-.IP "\(bu" 4 +.IP \(bu 2 rm (user, package): Remove a user from the package owner list\. This immediately revokes their privileges\. -. -.IP "" 0 -. + +.RE .P Note that there is only one level of access\. Either you can modify a package, -or you can\'t\. Future versions may contain more fine\-grained access levels, but +or you can't\. Future versions may contain more fine\-grained access levels, but that is not implemented at this time\. -. -.SH "SEE ALSO" -. -.IP "\(bu" 4 +.SH SEE ALSO +.RS 0 +.IP \(bu 2 npm apihelp publish -. -.IP "\(bu" 4 +.IP \(bu 2 npm help 7 registry -. -.IP "" 0 + +.RE diff --git a/deps/npm/man/man3/npm-pack.3 b/deps/npm/man/man3/npm-pack.3 index a7ccab0a732..d9da93e33c2 100644 --- a/deps/npm/man/man3/npm-pack.3 +++ b/deps/npm/man/man3/npm-pack.3 @@ -1,28 +1,23 @@ -.\" Generated with Ronnjs 0.3.8 -.\" http://github.com/kapouer/ronnjs/ -. -.TH "NPM\-PACK" "3" "September 2014" "" "" -. +.TH "NPM\-PACK" "3" "October 2014" "" "" .SH "NAME" -\fBnpm-pack\fR \-\- Create a tarball from a package -. -.SH "SYNOPSIS" -. +\fBnpm-pack\fR \- Create a tarball from a package +.SH SYNOPSIS +.P +.RS 2 .nf npm\.commands\.pack([packages,] callback) -. .fi -. -.SH "DESCRIPTION" -For anything that\'s installable (that is, a package folder, tarball, +.RE +.SH DESCRIPTION +.P +For anything that's installable (that is, a package folder, tarball, tarball url, name@tag, name@version, or name), this command will fetch it to the cache, and then copy the tarball to the current working directory as \fB\-\.tgz\fR, and then write the filenames out to stdout\. -. .P If the same package is specified multiple times, then the file will be overwritten the second time\. -. .P If no arguments are supplied, then npm packs the current package folder\. + diff --git a/deps/npm/man/man3/npm-prefix.3 b/deps/npm/man/man3/npm-prefix.3 index 3e800556574..e2da6d67eda 100644 --- a/deps/npm/man/man3/npm-prefix.3 +++ b/deps/npm/man/man3/npm-prefix.3 @@ -1,24 +1,19 @@ -.\" Generated with Ronnjs 0.3.8 -.\" http://github.com/kapouer/ronnjs/ -. -.TH "NPM\-PREFIX" "3" "September 2014" "" "" -. +.TH "NPM\-PREFIX" "3" "October 2014" "" "" .SH "NAME" -\fBnpm-prefix\fR \-\- Display prefix -. -.SH "SYNOPSIS" -. +\fBnpm-prefix\fR \- Display prefix +.SH SYNOPSIS +.P +.RS 2 .nf npm\.commands\.prefix(args, callback) -. .fi -. -.SH "DESCRIPTION" +.RE +.SH DESCRIPTION +.P Print the prefix to standard out\. -. .P -\'args\' is never used and callback is never called with data\. -\'args\' must be present or things will break\. -. +\|'args' is never used and callback is never called with data\. +\|'args' must be present or things will break\. .P This function is not useful programmatically + diff --git a/deps/npm/man/man3/npm-prune.3 b/deps/npm/man/man3/npm-prune.3 index f9aff4ad322..48a06c97c7a 100644 --- a/deps/npm/man/man3/npm-prune.3 +++ b/deps/npm/man/man3/npm-prune.3 @@ -1,27 +1,21 @@ -.\" Generated with Ronnjs 0.3.8 -.\" http://github.com/kapouer/ronnjs/ -. -.TH "NPM\-PRUNE" "3" "September 2014" "" "" -. +.TH "NPM\-PRUNE" "3" "October 2014" "" "" .SH "NAME" -\fBnpm-prune\fR \-\- Remove extraneous packages -. -.SH "SYNOPSIS" -. +\fBnpm-prune\fR \- Remove extraneous packages +.SH SYNOPSIS +.P +.RS 2 .nf npm\.commands\.prune([packages,] callback) -. .fi -. -.SH "DESCRIPTION" +.RE +.SH DESCRIPTION +.P This command removes "extraneous" packages\. -. .P The first parameter is optional, and it specifies packages to be removed\. -. .P No packages are specified, then all packages will be checked\. -. 
.P Extraneous packages are packages that are not listed on the parent -package\'s dependencies list\. +package's dependencies list\. + diff --git a/deps/npm/man/man3/npm-publish.3 b/deps/npm/man/man3/npm-publish.3 index 842da1bb808..13dbd95f33b 100644 --- a/deps/npm/man/man3/npm-publish.3 +++ b/deps/npm/man/man3/npm-publish.3 @@ -1,51 +1,41 @@ -.\" Generated with Ronnjs 0.3.8 -.\" http://github.com/kapouer/ronnjs/ -. -.TH "NPM\-PUBLISH" "3" "September 2014" "" "" -. +.TH "NPM\-PUBLISH" "3" "October 2014" "" "" .SH "NAME" -\fBnpm-publish\fR \-\- Publish a package -. -.SH "SYNOPSIS" -. +\fBnpm-publish\fR \- Publish a package +.SH SYNOPSIS +.P +.RS 2 .nf npm\.commands\.publish([packages,] callback) -. .fi -. -.SH "DESCRIPTION" +.RE +.SH DESCRIPTION +.P Publishes a package to the registry so that it can be installed by name\. -Possible values in the \'packages\' array are: -. -.IP "\(bu" 4 +Possible values in the 'packages' array are: +.RS 0 +.IP \(bu 2 \fB\fR: A folder containing a package\.json file -. -.IP "\(bu" 4 +.IP \(bu 2 \fB\fR: A url or file path to a gzipped tar archive containing a single folder with a package\.json file inside\. -. -.IP "" 0 -. + +.RE .P If the package array is empty, npm will try to publish something in the current working directory\. -. .P This command could fail if one of the packages specified already exists in the registry\. It overwrites when the "force" environment variable is set\. -. -.SH "SEE ALSO" -. -.IP "\(bu" 4 +.SH SEE ALSO +.RS 0 +.IP \(bu 2 npm help 7 registry -. -.IP "\(bu" 4 +.IP \(bu 2 npm help adduser -. -.IP "\(bu" 4 +.IP \(bu 2 npm apihelp owner -. -.IP "" 0 + +.RE diff --git a/deps/npm/man/man3/npm-rebuild.3 b/deps/npm/man/man3/npm-rebuild.3 index f6233c2f290..21a5aba1deb 100644 --- a/deps/npm/man/man3/npm-rebuild.3 +++ b/deps/npm/man/man3/npm-rebuild.3 @@ -1,22 +1,19 @@ -.\" Generated with Ronnjs 0.3.8 -.\" http://github.com/kapouer/ronnjs/ -. -.TH "NPM\-REBUILD" "3" "September 2014" "" "" -. +.TH "NPM\-REBUILD" "3" "October 2014" "" "" .SH "NAME" -\fBnpm-rebuild\fR \-\- Rebuild a package -. -.SH "SYNOPSIS" -. +\fBnpm-rebuild\fR \- Rebuild a package +.SH SYNOPSIS +.P +.RS 2 .nf npm\.commands\.rebuild([packages,] callback) -. .fi -. -.SH "DESCRIPTION" +.RE
+.SH DESCRIPTION +.P This command runs the \fBnpm build\fR command on each of the matched packages\. This is useful when you install a new version of node, and must recompile all your C++ addons with -the new binary\. If no \'packages\' parameter is specify, every package will be rebuilt\. -. -.SH "CONFIGURATION" +the new binary\. If no 'packages' parameter is specified, every package will be rebuilt\. +.SH CONFIGURATION +.P See \fBnpm help build\fR + diff --git a/deps/npm/man/man3/npm-repo.3 b/deps/npm/man/man3/npm-repo.3 index 06db0d50a25..5638d434aec 100644 --- a/deps/npm/man/man3/npm-repo.3 +++ b/deps/npm/man/man3/npm-repo.3 @@ -1,28 +1,23 @@ -.\" Generated with Ronnjs 0.3.8 -.\" http://github.com/kapouer/ronnjs/ -. -.TH "NPM\-REPO" "3" "September 2014" "" "" -. +.TH "NPM\-REPO" "3" "October 2014" "" "" .SH "NAME" -\fBnpm-repo\fR \-\- Open package repository page in the browser -. -.SH "SYNOPSIS" -. +\fBnpm-repo\fR \- Open package repository page in the browser +.SH SYNOPSIS +.P +.RS 2 .nf npm\.commands\.repo(package, callback) -. .fi -.
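A minimal sketch of the publish call described above, assuming require('npm') as the entry point and that credentials are already configured; the empty array publishes the current working directory, per the npm-publish page:

    var npm = require('npm');

    npm.load({}, function (er) {
      if (er) throw er;
      // empty array: publish whatever is in the current working directory
      npm.commands.publish([], function (er) {
        if (er) return console.error('publish failed:', er.message);
        console.log('published');
      });
    });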
-.SH "DESCRIPTION" -This command tries to guess at the likely location of a package\'s +.RE +.SH DESCRIPTION +.P +This command tries to guess at the likely location of a package's repository URL, and then tries to open it using the \fB\-\-browser\fR config param\. -. .P Like other commands, the first parameter is an array\. This command only uses the first element, which is expected to be a package name with an optional version number\. -. .P This command will launch a browser, so this command may not be the most friendly for programmatic use\. + diff --git a/deps/npm/man/man3/npm-restart.3 b/deps/npm/man/man3/npm-restart.3 index 5c0ed9ca0e8..be948334aa1 100644 --- a/deps/npm/man/man3/npm-restart.3 +++ b/deps/npm/man/man3/npm-restart.3 @@ -1,37 +1,29 @@ -.\" Generated with Ronnjs 0.3.8 -.\" http://github.com/kapouer/ronnjs/ -. -.TH "NPM\-RESTART" "3" "September 2014" "" "" -. +.TH "NPM\-RESTART" "3" "October 2014" "" "" .SH "NAME" -\fBnpm-restart\fR \-\- Start a package -. -.SH "SYNOPSIS" -. +\fBnpm-restart\fR \- Start a package +.SH SYNOPSIS +.P +.RS 2 .nf npm\.commands\.restart(packages, callback) -. .fi -. -.SH "DESCRIPTION" -This runs a package\'s "restart" script, if one was provided\. -Otherwise it runs package\'s "stop" script, if one was provided, and then +.RE +.SH DESCRIPTION +.P +This runs a package's "restart" script, if one was provided\. +Otherwise it runs package's "stop" script, if one was provided, and then the "start" script\. -. .P If no version is specified, then it restarts the "active" version\. -. .P npm can run tests on multiple packages\. Just specify multiple packages in the \fBpackages\fR parameter\. -. -.SH "SEE ALSO" -. -.IP "\(bu" 4 +.SH SEE ALSO +.RS 0 +.IP \(bu 2 npm apihelp start -. -.IP "\(bu" 4 +.IP \(bu 2 npm apihelp stop -. -.IP "" 0 + +.RE diff --git a/deps/npm/man/man3/npm-root.3 b/deps/npm/man/man3/npm-root.3 index 5772cb40d33..68bac79fa03 100644 --- a/deps/npm/man/man3/npm-root.3 +++ b/deps/npm/man/man3/npm-root.3 @@ -1,24 +1,19 @@ -.\" Generated with Ronnjs 0.3.8 -.\" http://github.com/kapouer/ronnjs/ -. -.TH "NPM\-ROOT" "3" "September 2014" "" "" -. +.TH "NPM\-ROOT" "3" "October 2014" "" "" .SH "NAME" -\fBnpm-root\fR \-\- Display npm root -. -.SH "SYNOPSIS" -. +\fBnpm-root\fR \- Display npm root +.SH SYNOPSIS +.P +.RS 2 .nf npm\.commands\.root(args, callback) -. .fi -. -.SH "DESCRIPTION" +.RE +.SH DESCRIPTION +.P Print the effective \fBnode_modules\fR folder to standard out\. -. .P -\'args\' is never used and callback is never called with data\. -\'args\' must be present or things will break\. -. +\|'args' is never used and callback is never called with data\. +\|'args' must be present or things will break\. .P This function is not useful programmatically\. + diff --git a/deps/npm/man/man3/npm-run-script.3 b/deps/npm/man/man3/npm-run-script.3 index 5c5d435a309..866f1e01f3f 100644 --- a/deps/npm/man/man3/npm-run-script.3 +++ b/deps/npm/man/man3/npm-run-script.3 @@ -1,48 +1,37 @@ -.\" Generated with Ronnjs 0.3.8 -.\" http://github.com/kapouer/ronnjs/ -. -.TH "NPM\-RUN\-SCRIPT" "3" "September 2014" "" "" -. +.TH "NPM\-RUN\-SCRIPT" "3" "October 2014" "" "" .SH "NAME" -\fBnpm-run-script\fR \-\- Run arbitrary package scripts -. -.SH "SYNOPSIS" -. +\fBnpm-run-script\fR \- Run arbitrary package scripts +.SH SYNOPSIS +.P +.RS 2 .nf npm\.commands\.run\-script(args, callback) -. .fi -. -.SH "DESCRIPTION" -This runs an arbitrary command from a package\'s "scripts" object\. -. 
+.RE +.SH DESCRIPTION +.P +This runs an arbitrary command from a package's "scripts" object\. .P It is used by the test, start, restart, and stop commands, but can be called directly, as well\. -. .P -The \'args\' parameter is an array of strings\. Behavior depends on the number +The 'args' parameter is an array of strings\. Behavior depends on the number of elements\. If there is only one element, npm assumes that the element represents a command to be run on the local repository\. If there is more than one element, then the first is assumed to be the package and the second is assumed to be the command to run\. All other elements are ignored\. -. -.SH "SEE ALSO" -. -.IP "\(bu" 4 +.SH SEE ALSO +.RS 0 +.IP \(bu 2 npm help 7 scripts -. -.IP "\(bu" 4 +.IP \(bu 2 npm apihelp test -. -.IP "\(bu" 4 +.IP \(bu 2 npm apihelp start -. -.IP "\(bu" 4 +.IP \(bu 2 npm apihelp restart -. -.IP "\(bu" 4 +.IP \(bu 2 npm apihelp stop -. -.IP "" 0 + +.RE diff --git a/deps/npm/man/man3/npm-search.3 b/deps/npm/man/man3/npm-search.3 index f7692a637c6..ba0cc5f4c3b 100644 --- a/deps/npm/man/man3/npm-search.3 +++ b/deps/npm/man/man3/npm-search.3 @@ -1,64 +1,52 @@ -.\" Generated with Ronnjs 0.3.8 -.\" http://github.com/kapouer/ronnjs/ -. -.TH "NPM\-SEARCH" "3" "September 2014" "" "" -. +.TH "NPM\-SEARCH" "3" "October 2014" "" "" .SH "NAME" -\fBnpm-search\fR \-\- Search for packages -. -.SH "SYNOPSIS" -. +\fBnpm-search\fR \- Search for packages +.SH SYNOPSIS +.P +.RS 2 .nf npm\.commands\.search(searchTerms, [silent,] [staleness,] callback) -. .fi -. -.SH "DESCRIPTION" +.RE +.SH DESCRIPTION +.P Search the registry for packages matching the search terms\. The available parameters are: -. -.IP "\(bu" 4 +.RS 0 +.IP \(bu 2 searchTerms: Array of search terms\. These terms are case\-insensitive\. -. -.IP "\(bu" 4 +.IP \(bu 2 silent: If true, npm will not log anything to the console\. -. -.IP "\(bu" 4 +.IP \(bu 2 staleness: This is the threshold for stale packages\. "Fresh" packages are not refreshed from the registry\. This value is measured in seconds\. -. -.IP "\(bu" 4 +.IP \(bu 2 callback: Returns an object where each key is the name of a package, and the value -is information about that package along with a \'words\' property, which is +is information about that package along with a 'words' property, which is a space\-delimited string of all of the interesting words in that package\. The only properties included are those that are searched, which generally include: -. -.IP "\(bu" 4 +.RS 0 +.IP \(bu 2 name -. -.IP "\(bu" 4 +.IP \(bu 2 description -. -.IP "\(bu" 4 +.IP \(bu 2 maintainers -. -.IP "\(bu" 4 +.IP \(bu 2 url -. -.IP "\(bu" 4 +.IP \(bu 2 keywords -. -.IP "" 0 -. -.IP "" 0 -. +.RE + +.RE .P A search on the registry excludes any result that does not match all of the search terms\. It also removes any items from the results that contain an excluded term (the "searchexclude" config)\. The search is case insensitive -and doesn\'t try to read your mind (it doesn\'t do any verb tense matching or the +and doesn't try to read your mind (it doesn't do any verb tense matching or the like)\. + diff --git a/deps/npm/man/man3/npm-shrinkwrap.3 b/deps/npm/man/man3/npm-shrinkwrap.3 index e5cdb59d9c3..0f87c50906c 100644 --- a/deps/npm/man/man3/npm-shrinkwrap.3 +++ b/deps/npm/man/man3/npm-shrinkwrap.3 @@ -1,30 +1,24 @@ -.\" Generated with Ronnjs 0.3.8 -.\" http://github.com/kapouer/ronnjs/ -. -.TH "NPM\-SHRINKWRAP" "3" "September 2014" "" "" -. 
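A sketch of the search call with all the parameters described above (silent, then staleness in seconds), assuming require('npm') as the entry point; the search term and staleness value are placeholders:

    var npm = require('npm');

    npm.load({}, function (er) {
      if (er) throw er;
      // silent=true; staleness of 600s tolerates a 10-minute-old cache
      npm.commands.search(['websocket'], true, 600, function (er, results) {
        if (er) throw er;
        // results are keyed by package name, per the npm-search page above
        Object.keys(results).forEach(function (name) {
          console.log(name, '-', results[name].description);
        });
      });
    });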
+.TH "NPM\-SHRINKWRAP" "3" "October 2014" "" "" .SH "NAME" -\fBnpm-shrinkwrap\fR \-\- programmatically generate package shrinkwrap file -. -.SH "SYNOPSIS" -. +\fBnpm-shrinkwrap\fR \- programmatically generate package shrinkwrap file +.SH SYNOPSIS +.P +.RS 2 .nf npm\.commands\.shrinkwrap(args, [silent,] callback) -. .fi -. -.SH "DESCRIPTION" +.RE +.SH DESCRIPTION +.P This acts much the same ways as shrinkwrapping on the command\-line\. -. .P -This command does not take any arguments, but \'args\' must be defined\. +This command does not take any arguments, but 'args' must be defined\. Beyond that, if any arguments are passed in, npm will politely warn that it does not take positional arguments\. -. .P -If the \'silent\' parameter is set to true, nothing will be output to the screen, +If the 'silent' parameter is set to true, nothing will be output to the screen, but the shrinkwrap file will still be written\. -. .P -Finally, \'callback\' is a function that will be called when the shrinkwrap has +Finally, 'callback' is a function that will be called when the shrinkwrap has been saved\. + diff --git a/deps/npm/man/man3/npm-start.3 b/deps/npm/man/man3/npm-start.3 index 6e2cb647713..4eabb36c444 100644 --- a/deps/npm/man/man3/npm-start.3 +++ b/deps/npm/man/man3/npm-start.3 @@ -1,21 +1,17 @@ -.\" Generated with Ronnjs 0.3.8 -.\" http://github.com/kapouer/ronnjs/ -. -.TH "NPM\-START" "3" "September 2014" "" "" -. +.TH "NPM\-START" "3" "October 2014" "" "" .SH "NAME" -\fBnpm-start\fR \-\- Start a package -. -.SH "SYNOPSIS" -. +\fBnpm-start\fR \- Start a package +.SH SYNOPSIS +.P +.RS 2 .nf npm\.commands\.start(packages, callback) -. .fi -. -.SH "DESCRIPTION" -This runs a package\'s "start" script, if one was provided\. -. +.RE +.SH DESCRIPTION +.P +This runs a package's "start" script, if one was provided\. .P npm can run tests on multiple packages\. Just specify multiple packages in the \fBpackages\fR parameter\. + diff --git a/deps/npm/man/man3/npm-stop.3 b/deps/npm/man/man3/npm-stop.3 index b1f4ee75030..aa55b84736b 100644 --- a/deps/npm/man/man3/npm-stop.3 +++ b/deps/npm/man/man3/npm-stop.3 @@ -1,21 +1,17 @@ -.\" Generated with Ronnjs 0.3.8 -.\" http://github.com/kapouer/ronnjs/ -. -.TH "NPM\-STOP" "3" "September 2014" "" "" -. +.TH "NPM\-STOP" "3" "October 2014" "" "" .SH "NAME" -\fBnpm-stop\fR \-\- Stop a package -. -.SH "SYNOPSIS" -. +\fBnpm-stop\fR \- Stop a package +.SH SYNOPSIS +.P +.RS 2 .nf npm\.commands\.stop(packages, callback) -. .fi -. -.SH "DESCRIPTION" -This runs a package\'s "stop" script, if one was provided\. -. +.RE +.SH DESCRIPTION +.P +This runs a package's "stop" script, if one was provided\. .P npm can run stop on multiple packages\. Just specify multiple packages in the \fBpackages\fR parameter\. + diff --git a/deps/npm/man/man3/npm-submodule.3 b/deps/npm/man/man3/npm-submodule.3 index 95739ce3b08..378862a563a 100644 --- a/deps/npm/man/man3/npm-submodule.3 +++ b/deps/npm/man/man3/npm-submodule.3 @@ -1,42 +1,35 @@ -.\" Generated with Ronnjs 0.3.8 -.\" http://github.com/kapouer/ronnjs/ -. -.TH "NPM\-SUBMODULE" "3" "September 2014" "" "" -. +.TH "NPM\-SUBMODULE" "3" "October 2014" "" "" .SH "NAME" -\fBnpm-submodule\fR \-\- Add a package as a git submodule -. -.SH "SYNOPSIS" -. +\fBnpm-submodule\fR \- Add a package as a git submodule +.SH SYNOPSIS +.P +.RS 2 .nf npm\.commands\.submodule(packages, callback) -. .fi -. 
-.SH "DESCRIPTION" +.RE +.SH DESCRIPTION +.P For each package specified, npm will check if it has a git repository url -in its package\.json description then add it as a git submodule at \fBnode_modules/\fR\|\. -. +in its package\.json description then add it as a git submodule at +\fBnode_modules/\fR\|\. .P -This is a convenience only\. From then on, it\'s up to you to manage +This is a convenience only\. From then on, it's up to you to manage updates by using the appropriate git commands\. npm will stubbornly refuse to update, modify, or remove anything with a \fB\|\.git\fR subfolder in it\. -. .P This command also does not install missing dependencies, if the package does not include them in its git repository\. If \fBnpm ls\fR reports that things are missing, you can either install, link, or submodule them yourself, or you can do \fBnpm explore \-\- npm install\fR to install the dependencies into the submodule folder\. -. -.SH "SEE ALSO" -. -.IP "\(bu" 4 +.SH SEE ALSO +.RS 0 +.IP \(bu 2 npm help json -. -.IP "\(bu" 4 +.IP \(bu 2 git help submodule -. -.IP "" 0 + +.RE diff --git a/deps/npm/man/man3/npm-tag.3 b/deps/npm/man/man3/npm-tag.3 index fe00dbcc2e4..4da13767f93 100644 --- a/deps/npm/man/man3/npm-tag.3 +++ b/deps/npm/man/man3/npm-tag.3 @@ -1,31 +1,27 @@ -.\" Generated with Ronnjs 0.3.8 -.\" http://github.com/kapouer/ronnjs/ -. -.TH "NPM\-TAG" "3" "September 2014" "" "" -. +.TH "NPM\-TAG" "3" "October 2014" "" "" .SH "NAME" -\fBnpm-tag\fR \-\- Tag a published version -. -.SH "SYNOPSIS" -. +\fBnpm-tag\fR \- Tag a published version +.SH SYNOPSIS +.P +.RS 2 .nf npm\.commands\.tag(package@version, tag, callback) -. .fi -. -.SH "DESCRIPTION" -Tags the specified version of the package with the specified tag, or the \fB\-\-tag\fR config if not specified\. -. +.RE +.SH DESCRIPTION +.P +Tags the specified version of the package with the specified tag, or the +\fB\-\-tag\fR config if not specified\. .P -The \'package@version\' is an array of strings, but only the first two elements are +The 'package@version' is an array of strings, but only the first two elements are currently used\. -. .P The first element must be in the form package@version, where package is the package name and version is the version number (much like installing a specific version)\. -. .P The second element is the name of the tag to tag this version with\. If this parameter is missing or falsey (empty), the default froom the config will be -used\. For more information about how to set this config, check \fBman 3 npm\-config\fR for programmatic usage or \fBman npm\-config\fR for cli usage\. +used\. For more information about how to set this config, check +\fBman 3 npm\-config\fR for programmatic usage or \fBman npm\-config\fR for cli usage\. + diff --git a/deps/npm/man/man3/npm-test.3 b/deps/npm/man/man3/npm-test.3 index 86aa780ac1c..f6d0f6d3f11 100644 --- a/deps/npm/man/man3/npm-test.3 +++ b/deps/npm/man/man3/npm-test.3 @@ -1,25 +1,20 @@ -.\" Generated with Ronnjs 0.3.8 -.\" http://github.com/kapouer/ronnjs/ -. -.TH "NPM\-TEST" "3" "September 2014" "" "" -. +.TH "NPM\-TEST" "3" "October 2014" "" "" .SH "NAME" -\fBnpm-test\fR \-\- Test a package -. -.SH "SYNOPSIS" -. +\fBnpm-test\fR \- Test a package +.SH SYNOPSIS +.P +.RS 2 .nf npm\.commands\.test(packages, callback) -. .fi -. -.SH "DESCRIPTION" -This runs a package\'s "test" script, if one was provided\. -. +.RE +.SH DESCRIPTION +.P +This runs a package's "test" script, if one was provided\. 
.P To run tests as a condition of installation, set the \fBnpat\fR config to true\. -. .P npm can run tests on multiple packages\. Just specify multiple packages in the \fBpackages\fR parameter\. + diff --git a/deps/npm/man/man3/npm-uninstall.3 b/deps/npm/man/man3/npm-uninstall.3 index 7ae13684231..8505f399559 100644 --- a/deps/npm/man/man3/npm-uninstall.3 +++ b/deps/npm/man/man3/npm-uninstall.3 @@ -1,25 +1,20 @@ -.\" Generated with Ronnjs 0.3.8 -.\" http://github.com/kapouer/ronnjs/ -. -.TH "NPM\-UNINSTALL" "3" "September 2014" "" "" -. +.TH "NPM\-UNINSTALL" "3" "October 2014" "" "" .SH "NAME" -\fBnpm-uninstall\fR \-\- uninstall a package programmatically -. -.SH "SYNOPSIS" -. +\fBnpm-uninstall\fR \- uninstall a package programmatically +.SH SYNOPSIS +.P +.RS 2 .nf npm\.commands\.uninstall(packages, callback) -. .fi -. -.SH "DESCRIPTION" +.RE +.SH DESCRIPTION +.P This acts much the same way as uninstalling on the command\-line\. -. .P -The \'packages\' parameter is an array of strings\. Each element in the array is +The 'packages' parameter is an array of strings\. Each element in the array is the name of a package to be uninstalled\. -. .P -Finally, \'callback\' is a function that will be called when all packages have been +Finally, 'callback' is a function that will be called when all packages have been uninstalled or when an error has been encountered\. + diff --git a/deps/npm/man/man3/npm-unpublish.3 b/deps/npm/man/man3/npm-unpublish.3 index 63be8506ee7..9b4ab467d2f 100644 --- a/deps/npm/man/man3/npm-unpublish.3 +++ b/deps/npm/man/man3/npm-unpublish.3 @@ -1,30 +1,24 @@ -.\" Generated with Ronnjs 0.3.8 -.\" http://github.com/kapouer/ronnjs/ -. -.TH "NPM\-UNPUBLISH" "3" "September 2014" "" "" -. +.TH "NPM\-UNPUBLISH" "3" "October 2014" "" "" .SH "NAME" -\fBnpm-unpublish\fR \-\- Remove a package from the registry -. -.SH "SYNOPSIS" -. +\fBnpm-unpublish\fR \- Remove a package from the registry +.SH SYNOPSIS +.P +.RS 2 .nf npm\.commands\.unpublish(package, callback) -. .fi -. -.SH "DESCRIPTION" +.RE +.SH DESCRIPTION +.P This removes a package version from the registry, deleting its entry and removing the tarball\. -. .P The package parameter must be defined\. -. .P Only the first element in the package parameter is used\. If there is no first element, then npm assumes that the package at the current working directory is what is meant\. -. .P If no version is specified, or if all versions are removed then the root package entry is removed from the registry entirely\. + diff --git a/deps/npm/man/man3/npm-update.3 b/deps/npm/man/man3/npm-update.3 index 740038b419b..3f40eb0db2d 100644 --- a/deps/npm/man/man3/npm-update.3 +++ b/deps/npm/man/man3/npm-update.3 @@ -1,18 +1,16 @@ -.\" Generated with Ronnjs 0.3.8 -.\" http://github.com/kapouer/ronnjs/ -. -.TH "NPM\-UPDATE" "3" "September 2014" "" "" -. +.TH "NPM\-UPDATE" "3" "October 2014" "" "" .SH "NAME" -\fBnpm-update\fR \-\- Update a package -. -.SH "SYNOPSIS" -. +\fBnpm-update\fR \- Update a package +.SH SYNOPSIS +.P +.RS 2 .nf npm\.commands\.update(packages, callback) -. .fi +.RE +.SH DESCRIPTION +.P Updates a package, upgrading it to the latest version\. It also installs any missing packages\. -. .P -The \'packages\' argument is an array of packages to update\. The \'callback\' parameter will be called when done or when an error occurs\. +The 'packages' argument is an array of packages to update\. The 'callback' parameter will be called when done or when an error occurs\.
+ diff --git a/deps/npm/man/man3/npm-version.3 b/deps/npm/man/man3/npm-version.3 index 2c79f3782f6..16979247fe4 100644 --- a/deps/npm/man/man3/npm-version.3 +++ b/deps/npm/man/man3/npm-version.3 @@ -1,27 +1,22 @@ -.\" Generated with Ronnjs 0.3.8 -.\" http://github.com/kapouer/ronnjs/ -. -.TH "NPM\-VERSION" "3" "September 2014" "" "" -. +.TH "NPM\-VERSION" "3" "October 2014" "" "" .SH "NAME" -\fBnpm-version\fR \-\- Bump a package version -. -.SH "SYNOPSIS" -. +\fBnpm-version\fR \- Bump a package version +.SH SYNOPSIS +.P +.RS 2 .nf npm\.commands\.version(newversion, callback) -. .fi -. -.SH "DESCRIPTION" +.RE +.SH DESCRIPTION +.P Run this in a package directory to bump the version and write the new data back to the package\.json file\. -. .P If run in a git repo, it will also create a version commit and tag, and fail if the repo is not clean\. -. .P Like all other commands, this function takes a string array as its first parameter\. The difference, however, is this function will fail if it does not have exactly one element\. The only element should be a version number\. + diff --git a/deps/npm/man/man3/npm-view.3 b/deps/npm/man/man3/npm-view.3 index 3e91ce67168..e49f28d7edc 100644 --- a/deps/npm/man/man3/npm-view.3 +++ b/deps/npm/man/man3/npm-view.3 @@ -1,176 +1,131 @@ -.\" Generated with Ronnjs 0.3.8 -.\" http://github.com/kapouer/ronnjs/ -. -.TH "NPM\-VIEW" "3" "September 2014" "" "" -. +.TH "NPM\-VIEW" "3" "October 2014" "" "" .SH "NAME" -\fBnpm-view\fR \-\- View registry info -. -.SH "SYNOPSIS" -. +\fBnpm-view\fR \- View registry info +.SH SYNOPSIS +.P +.RS 2 .nf npm\.commands\.view(args, [silent,] callback) -. .fi -. -.SH "DESCRIPTION" +.RE +.SH DESCRIPTION +.P This command shows data about a package and prints it to the stream referenced by the \fBoutfd\fR config, which defaults to stdout\. -. .P The "args" parameter is an ordered list that closely resembles the command\-line usage\. The elements should be ordered such that the first element is the package and version (package@version)\. The version is optional\. After that, the rest of the parameters are fields with optional subfields ("field\.subfield") which can be used to get only the information desired from the registry\. -. .P The callback will be passed all of the data returned by the query\. -. .P For example, to get the package registry entry for the \fBconnect\fR package, you can do this: -. -.IP "" 4 -. +.P +.RS 2 .nf npm\.commands\.view(["connect"], callback) -. .fi -. -.IP "" 0 -. +.RE .P If no version is specified, "latest" is assumed\. -. .P Field names can be specified after the package descriptor\. For example, to show the dependencies of the \fBronn\fR package at version 0\.3\.5, you could do the following: -. -.IP "" 4 -. +.P +.RS 2 .nf npm\.commands\.view(["ronn@0\.3\.5", "dependencies"], callback) -. .fi -. -.IP "" 0 -. +.RE .P You can view child field by separating them with a period\. To view the git repository URL for the latest version of npm, you could do this: -. -.IP "" 4 -. +.P +.RS 2 .nf npm\.commands\.view(["npm", "repository\.url"], callback) -. .fi -. -.IP "" 0 -. +.RE .P For fields that are arrays, requesting a non\-numeric field will return all of the values from the objects in the list\. For example, to get all the contributor names for the "express" project, you can do this: -. -.IP "" 4 -. +.P +.RS 2 .nf npm\.commands\.view(["express", "contributors\.email"], callback) -. .fi -. -.IP "" 0 -. 
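For instance, combining the args list with the optional silent flag from the
synopsis (a sketch, assuming npm has already been loaded; passing true
suppresses console output so the data only reaches the callback):

    // fetch one field for one version; data arrives keyed by version,
    // per the RETURN VALUE section below
    npm.commands.view(["connect@2.0.0", "dependencies"], true, function (er, data) {
      if (er) return console.error(er)
      console.log(data["2.0.0"].dependencies)
    })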
+.RE
.P
You may also use numeric indices in square brackets to specifically select
an item in an array field\. To just get the email address of the first
contributor in the list, you can do this:
-.
-.IP "" 4
-.
+.P
+.RS 2
.nf
npm\.commands\.view(["express", "contributors[0]\.email"], callback)
-.
.fi
-.
-.IP "" 0
-.
+.RE
.P
Multiple fields may be specified, and will be printed one after another\. For
example, to get all the contributor names and email addresses, you can do
this:
-.
-.IP "" 4
-.
+.P
+.RS 2
.nf
npm\.commands\.view(["express", "contributors\.name", "contributors\.email"], callback)
-.
.fi
-.
-.IP "" 0
-.
+.RE
.P
"Person" fields are shown as a string if they would be shown as an
object\. So, for example, this will show the list of npm contributors in
the shortened string format\. (See \fBnpm help json\fR for more on this\.)
-.
-.IP "" 4
-.
+.P
+.RS 2
.nf
npm\.commands\.view(["npm", "contributors"], callback)
-.
.fi
-.
-.IP "" 0
-.
+.RE
.P
If a version range is provided, then data will be printed for every
matching version of the package\. This will show which version of jsdom
was required by each matching version of yui3:
-.
-.IP "" 4
-.
+.P
+.RS 2
.nf
-npm\.commands\.view(["yui3@\'>0\.5\.4\'", "dependencies\.jsdom"], callback)
-.
+npm\.commands\.view(["yui3@'>0\.5\.4'", "dependencies\.jsdom"], callback)
.fi
-.
-.IP "" 0
-.
-.SH "OUTPUT"
+.RE
+.SH OUTPUT
+.P
If only a single string field for a single version is output, then it
will not be colorized or quoted, so as to enable piping the output to
another command\.
-.
.P
If the version range matches multiple versions, then each printed value
will be prefixed with the version it applies to\.
-.
.P
If multiple fields are requested, then each of them is prefixed with
the field name\.
-.
.P
-Console output can be disabled by setting the \'silent\' parameter to true\.
-.
-.SH "RETURN VALUE"
+Console output can be disabled by setting the 'silent' parameter to true\.
+.SH RETURN VALUE
+.P
The data returned will be an object in this format:
-.
-.IP "" 4
-.
+.P
+.RS 2
.nf
{ <version>:
  { <field>: <value>
  , \.\.\. }
, \.\.\. }
-.
.fi
-.
-.IP "" 0
-.
+.RE
.P
corresponding to the list of fields selected\.
+
diff --git a/deps/npm/man/man3/npm-whoami.3 b/deps/npm/man/man3/npm-whoami.3
index 1a0a43cf51b..2d32507f513 100644
--- a/deps/npm/man/man3/npm-whoami.3
+++ b/deps/npm/man/man3/npm-whoami.3
@@ -1,24 +1,19 @@
-.\" Generated with Ronnjs 0.3.8
-.\" http://github.com/kapouer/ronnjs/
-.
-.TH "NPM\-WHOAMI" "3" "September 2014" "" ""
-.
+.TH "NPM\-WHOAMI" "3" "October 2014" "" ""
.SH "NAME"
-\fBnpm-whoami\fR \-\- Display npm username
-.
-.SH "SYNOPSIS"
-.
+\fBnpm-whoami\fR \- Display npm username
+.SH SYNOPSIS
+.P
+.RS 2
.nf
npm\.commands\.whoami(args, callback)
-.
.fi
-.
-.SH "DESCRIPTION"
+.RE
+.SH DESCRIPTION
+.P
Print the \fBusername\fR config to standard output\.
-.
.P
-\'args\' is never used and callback is never called with data\.
-\'args\' must be present or things will break\.
-.
+\|'args' is never used and callback is never called with data\.
+\|'args' must be present or things will break\.
.P
This function is not useful programmatically\.
+
diff --git a/deps/npm/man/man3/npm.3 b/deps/npm/man/man3/npm.3
index e762dc4851f..71bbc58ad09 100644
--- a/deps/npm/man/man3/npm.3
+++ b/deps/npm/man/man3/npm.3
@@ -1,162 +1,124 @@
-.\" Generated with Ronnjs 0.3.8
-.\" http://github.com/kapouer/ronnjs/
-.
-.TH "NPM" "3" "September 2014" "" ""
-.
+.TH "NPM" "3" "October 2014" "" ""
.SH "NAME"
-\fBnpm\fR \-\- node package manager
-.
-.SH "SYNOPSIS"
-.
+\fBnpm\fR \- node package manager +.SH SYNOPSIS +.P +.RS 2 .nf var npm = require("npm") npm\.load([configObject, ]function (er, npm) { - // use the npm object, now that it\'s loaded\. + // use the npm object, now that it's loaded\. + npm\.config\.set(key, val) val = npm\.config\.get(key) + console\.log("prefix = %s", npm\.prefix) + npm\.commands\.install(["package"], cb) }) -. .fi -. -.SH "VERSION" -1.4.28 -. -.SH "DESCRIPTION" +.RE +.SH VERSION +.P +2.1.6 +.SH DESCRIPTION +.P This is the API documentation for npm\. To find documentation of the command line client, see npm help \fBnpm\fR\|\. -. .P -Prior to using npm\'s commands, \fBnpm\.load()\fR must be called\. -If you provide \fBconfigObject\fR as an object hash of top\-level -configs, they override the values stored in the various config -locations\. In the npm command line client, this set of configs -is parsed from the command line options\. Additional configuration -params are loaded from two configuration files\. See npm help \fBnpm\-config\fR, npm help 7 \fBnpm\-config\fR, and npm help 5 \fBnpmrc\fR for more information\. -. +Prior to using npm's commands, \fBnpm\.load()\fR must be called\. If you provide +\fBconfigObject\fR as an object map of top\-level configs, they override the values +stored in the various config locations\. In the npm command line client, this +set of configs is parsed from the command line options\. Additional +configuration params are loaded from two configuration files\. See +npm help \fBnpm\-config\fR, npm help 7 \fBnpm\-config\fR, and npm help 5 \fBnpmrc\fR for more information\. .P After that, each of the functions are accessible in the commands object: \fBnpm\.commands\.\fR\|\. See npm help 7 \fBnpm\-index\fR for a list of all possible commands\. -. .P -All commands on the command object take an \fBarray\fR of positional argument \fBstrings\fR\|\. The last argument to any function is a callback\. Some +All commands on the command object take an \fBarray\fR of positional argument +\fBstrings\fR\|\. The last argument to any function is a callback\. Some commands take other optional arguments\. -. .P Configs cannot currently be set on a per function basis, as each call to npm\.config\.set will change the value for \fIall\fR npm commands in that process\. -. .P To find API documentation for a specific command, run the \fBnpm apihelp\fR command\. -. -.SH "METHODS AND PROPERTIES" -. -.IP "\(bu" 4 +.SH METHODS AND PROPERTIES +.RS 0 +.IP \(bu 2 \fBnpm\.load(configs, cb)\fR -. -.IP -Load the configuration params, and call the \fBcb\fR function once the -globalconfig and userconfig files have been loaded as well, or on -nextTick if they\'ve already been loaded\. -. -.IP "\(bu" 4 + Load the configuration params, and call the \fBcb\fR function once the + globalconfig and userconfig files have been loaded as well, or on + nextTick if they've already been loaded\. +.IP \(bu 2 \fBnpm\.config\fR -. -.IP -An object for accessing npm configuration parameters\. -. -.IP "\(bu" 4 + An object for accessing npm configuration parameters\. +.RS 0 +.IP \(bu 2 \fBnpm\.config\.get(key)\fR -. -.IP "\(bu" 4 +.IP \(bu 2 \fBnpm\.config\.set(key, val)\fR -. -.IP "\(bu" 4 +.IP \(bu 2 \fBnpm\.config\.del(key)\fR -. -.IP "" 0 -. -.IP "\(bu" 4 +.RE +.IP \(bu 2 \fBnpm\.dir\fR or \fBnpm\.root\fR -. -.IP -The \fBnode_modules\fR directory where npm will operate\. -. -.IP "\(bu" 4 + The \fBnode_modules\fR directory where npm will operate\. +.IP \(bu 2 \fBnpm\.prefix\fR -. -.IP -The prefix where npm is operating\. 
(Most often the current working -directory\.) -. -.IP "\(bu" 4 + The prefix where npm is operating\. (Most often the current working + directory\.) +.IP \(bu 2 \fBnpm\.cache\fR -. -.IP -The place where npm keeps JSON and tarballs it fetches from the -registry (or uploads to the registry)\. -. -.IP "\(bu" 4 + The place where npm keeps JSON and tarballs it fetches from the + registry (or uploads to the registry)\. +.IP \(bu 2 \fBnpm\.tmp\fR -. -.IP -npm\'s temporary working directory\. -. -.IP "\(bu" 4 + npm's temporary working directory\. +.IP \(bu 2 \fBnpm\.deref\fR -. -.IP -Get the "real" name for a command that has either an alias or -abbreviation\. -. -.IP "" 0 -. -.SH "MAGIC" -For each of the methods in the \fBnpm\.commands\fR hash, a method is added to -the npm object, which takes a set of positional string arguments rather -than an array and a callback\. -. + Get the "real" name for a command that has either an alias or + abbreviation\. + +.RE +.SH MAGIC +.P +For each of the methods in the \fBnpm\.commands\fR object, a method is added to the +npm object, which takes a set of positional string arguments rather than an +array and a callback\. .P If the last argument is a callback, then it will use the supplied callback\. However, if no callback is provided, then it will print out the error or results\. -. .P For example, this would work in a node repl: -. -.IP "" 4 -. +.P +.RS 2 .nf > npm = require("npm") > npm\.load() // wait a sec\.\.\. > npm\.install("dnode", "express") -. .fi -. -.IP "" 0 -. +.RE .P -Note that that \fIwon\'t\fR work in a node program, since the \fBinstall\fR +Note that that \fIwon't\fR work in a node program, since the \fBinstall\fR method will get called before the configuration load is completed\. -. -.SH "ABBREVS" -In order to support \fBnpm ins foo\fR instead of \fBnpm install foo\fR, the \fBnpm\.commands\fR object has a set of abbreviations as well as the full +.SH ABBREVS +.P +In order to support \fBnpm ins foo\fR instead of \fBnpm install foo\fR, the +\fBnpm\.commands\fR object has a set of abbreviations as well as the full method names\. Use the \fBnpm\.deref\fR method to find the real name\. -. .P For example: -. -.IP "" 4 -. +.P +.RS 2 .nf var cmd = npm\.deref("unp") // cmd === "unpublish" -. .fi -. -.IP "" 0 +.RE diff --git a/deps/npm/man/man5/npm-folders.5 b/deps/npm/man/man5/npm-folders.5 index d349c1f43a5..9cd3436f894 100644 --- a/deps/npm/man/man5/npm-folders.5 +++ b/deps/npm/man/man5/npm-folders.5 @@ -1,141 +1,132 @@ -.\" Generated with Ronnjs 0.3.8 -.\" http://github.com/kapouer/ronnjs/ -. -.TH "NPM\-FOLDERS" "5" "September 2014" "" "" -. +.TH "NPM\-FOLDERS" "5" "October 2014" "" "" .SH "NAME" -\fBnpm-folders\fR \-\- Folder Structures Used by npm -. -.SH "DESCRIPTION" -npm puts various things on your computer\. That\'s its job\. -. +\fBnpm-folders\fR \- Folder Structures Used by npm +.SH DESCRIPTION +.P +npm puts various things on your computer\. That's its job\. .P This document will tell you what it puts where\. -. -.SS "tl;dr" -. -.IP "\(bu" 4 +.SS tl;dr +.RS 0 +.IP \(bu 2 Local install (default): puts stuff in \fB\|\./node_modules\fR of the current package root\. -. -.IP "\(bu" 4 +.IP \(bu 2 Global install (with \fB\-g\fR): puts stuff in /usr/local or wherever node is installed\. -. -.IP "\(bu" 4 -Install it \fBlocally\fR if you\'re going to \fBrequire()\fR it\. -. -.IP "\(bu" 4 -Install it \fBglobally\fR if you\'re going to run it on the command line\. -. 
-.IP "\(bu" 4 +.IP \(bu 2 +Install it \fBlocally\fR if you're going to \fBrequire()\fR it\. +.IP \(bu 2 +Install it \fBglobally\fR if you're going to run it on the command line\. +.IP \(bu 2 If you need both, then install it in both places, or use \fBnpm link\fR\|\. -. -.IP "" 0 -. -.SS "prefix Configuration" + +.RE +.SS prefix Configuration +.P The \fBprefix\fR config defaults to the location where node is installed\. On most systems, this is \fB/usr/local\fR, and most of the time is the same -as node\'s \fBprocess\.installPrefix\fR\|\. -. +as node's \fBprocess\.installPrefix\fR\|\. .P On windows, this is the exact location of the node\.exe binary\. On Unix -systems, it\'s one level up, since node is typically installed at \fB{prefix}/bin/node\fR rather than \fB{prefix}/node\.exe\fR\|\. -. +systems, it's one level up, since node is typically installed at +\fB{prefix}/bin/node\fR rather than \fB{prefix}/node\.exe\fR\|\. .P When the \fBglobal\fR flag is set, npm installs things into this prefix\. When it is not set, it uses the root of the current package, or the current working directory if not in a package already\. -. -.SS "Node Modules" +.SS Node Modules +.P Packages are dropped into the \fBnode_modules\fR folder under the \fBprefix\fR\|\. -When installing locally, this means that you can \fBrequire("packagename")\fR to load its main module, or \fBrequire("packagename/lib/path/to/sub/module")\fR to load other modules\. -. +When installing locally, this means that you can +\fBrequire("packagename")\fR to load its main module, or +\fBrequire("packagename/lib/path/to/sub/module")\fR to load other modules\. .P Global installs on Unix systems go to \fB{prefix}/lib/node_modules\fR\|\. -Global installs on Windows go to \fB{prefix}/node_modules\fR (that is, no \fBlib\fR folder\.) -. +Global installs on Windows go to \fB{prefix}/node_modules\fR (that is, no +\fBlib\fR folder\.) +.P +Scoped packages are installed the same way, except they are grouped together +in a sub\-folder of the relevant \fBnode_modules\fR folder with the name of that +scope prefix by the @ symbol, e\.g\. \fBnpm install @myorg/package\fR would place +the package in \fB{prefix}/node_modules/@myorg/package\fR\|\. See npm help 7 \fBscopes\fR for +more details\. .P If you wish to \fBrequire()\fR a package, then install it locally\. -. -.SS "Executables" +.SS Executables +.P When in global mode, executables are linked into \fB{prefix}/bin\fR on Unix, or directly into \fB{prefix}\fR on Windows\. -. .P -When in local mode, executables are linked into \fB\|\./node_modules/\.bin\fR so that they can be made available to scripts run +When in local mode, executables are linked into +\fB\|\./node_modules/\.bin\fR so that they can be made available to scripts run through npm\. (For example, so that a test runner will be in the path when you run \fBnpm test\fR\|\.) -. -.SS "Man Pages" +.SS Man Pages +.P When in global mode, man pages are linked into \fB{prefix}/share/man\fR\|\. -. .P When in local mode, man pages are not installed\. -. .P Man pages are not installed on Windows systems\. -. -.SS "Cache" -See npm help \fBnpm\-cache\fR\|\. Cache files are stored in \fB~/\.npm\fR on Posix, or \fB~/npm\-cache\fR on Windows\. -. +.SS Cache +.P +See npm help \fBnpm\-cache\fR\|\. Cache files are stored in \fB~/\.npm\fR on Posix, or +\fB~/npm\-cache\fR on Windows\. .P This is controlled by the \fBcache\fR configuration param\. -. 
-.SS "Temp Files" -Temporary files are stored by default in the folder specified by the \fBtmp\fR config, which defaults to the TMPDIR, TMP, or TEMP environment +.SS Temp Files +.P +Temporary files are stored by default in the folder specified by the +\fBtmp\fR config, which defaults to the TMPDIR, TMP, or TEMP environment variables, or \fB/tmp\fR on Unix and \fBc:\\windows\\temp\fR on Windows\. -. .P Temp files are given a unique folder under this root for each run of the program, and are deleted upon successful exit\. -. -.SH "More Information" -When installing locally, npm first tries to find an appropriate \fBprefix\fR folder\. This is so that \fBnpm install foo@1\.2\.3\fR will install +.SH More Information +.P +When installing locally, npm first tries to find an appropriate +\fBprefix\fR folder\. This is so that \fBnpm install foo@1\.2\.3\fR will install to the sensible root of your package, even if you happen to have \fBcd\fRed into some other folder\. -. .P Starting at the $PWD, npm will walk up the folder tree checking for a folder that contains either a \fBpackage\.json\fR file, or a \fBnode_modules\fR folder\. If such a thing is found, then that is treated as the effective "current directory" for the purpose of running npm commands\. (This -behavior is inspired by and similar to git\'s \.git\-folder seeking +behavior is inspired by and similar to git's \.git\-folder seeking logic when running git commands in a working dir\.) -. .P If no package root is found, then the current folder is used\. -. .P When you run \fBnpm install foo@1\.2\.3\fR, then the package is loaded into the cache, and then unpacked into \fB\|\./node_modules/foo\fR\|\. Then, any of -foo\'s dependencies are similarly unpacked into \fB\|\./node_modules/foo/node_modules/\.\.\.\fR\|\. -. +foo's dependencies are similarly unpacked into +\fB\|\./node_modules/foo/node_modules/\.\.\.\fR\|\. .P Any bin files are symlinked to \fB\|\./node_modules/\.bin/\fR, so that they may be found by npm scripts when necessary\. -. -.SS "Global Installation" +.SS Global Installation +.P If the \fBglobal\fR configuration is set to true, then npm will install packages "globally"\. -. .P For global installation, packages are installed roughly the same way, but using the folders described above\. -. -.SS "Cycles, Conflicts, and Folder Parsimony" -Cycles are handled using the property of node\'s module system that it +.SS Cycles, Conflicts, and Folder Parsimony +.P +Cycles are handled using the property of node's module system that it walks up the directories looking for \fBnode_modules\fR folders\. So, at every stage, if a package is already installed in an ancestor \fBnode_modules\fR folder, then it is not installed at the current location\. -. .P Consider the case above, where \fBfoo \-> bar \-> baz\fR\|\. Imagine if, in -addition to that, baz depended on bar, so you\'d have: \fBfoo \-> bar \-> baz \-> bar \-> baz \.\.\.\fR\|\. However, since the folder -structure is: \fBfoo/node_modules/bar/node_modules/baz\fR, there\'s no need to +addition to that, baz depended on bar, so you'd have: +\fBfoo \-> bar \-> baz \-> bar \-> baz \.\.\.\fR\|\. However, since the folder +structure is: \fBfoo/node_modules/bar/node_modules/baz\fR, there's no need to put another copy of bar into \fB\|\.\.\./baz/node_modules\fR, since when it calls -require("bar"), it will get the copy that is installed in \fBfoo/node_modules/bar\fR\|\. -. +require("bar"), it will get the copy that is installed in +\fBfoo/node_modules/bar\fR\|\. 
.P This shortcut is only used if the exact same version would be installed in multiple nested \fBnode_modules\fR folders\. It @@ -143,16 +134,14 @@ is still possible to have \fBa/node_modules/b/node_modules/a\fR if the two "a" packages are different versions\. However, without repeating the exact same package multiple times, an infinite regress will always be prevented\. -. .P Another optimization can be made by installing dependencies at the highest level possible, below the localized "target" folder\. -. -.SS "\fIExample\fR" +.SS Example +.P Consider this dependency graph: -. -.IP "" 4 -. +.P +.RS 2 .nf foo +\-\- blerg@1\.2\.5 @@ -165,16 +154,12 @@ foo `\-\- baz@1\.2\.3 `\-\- quux@3\.x `\-\- bar -. .fi -. -.IP "" 0 -. +.RE .P In this case, we might expect a folder structure like this: -. -.IP "" 4 -. +.P +.RS 2 .nf foo +\-\- node_modules @@ -188,77 +173,59 @@ foo `\-\- baz (1\.2\.3) <\-\-\-[D] `\-\- node_modules `\-\- quux (3\.2\.0) <\-\-\-[E] -. .fi -. -.IP "" 0 -. +.RE .P Since foo depends directly on \fBbar@1\.2\.3\fR and \fBbaz@1\.2\.3\fR, those are -installed in foo\'s \fBnode_modules\fR folder\. -. +installed in foo's \fBnode_modules\fR folder\. .P Even though the latest copy of blerg is 1\.3\.7, foo has a specific dependency on version 1\.2\.5\. So, that gets installed at [A]\. Since the -parent installation of blerg satisfies bar\'s dependency on \fBblerg@1\.x\fR, +parent installation of blerg satisfies bar's dependency on \fBblerg@1\.x\fR, it does not install another copy under [B]\. -. .P Bar [B] also has dependencies on baz and asdf, so those are installed in -bar\'s \fBnode_modules\fR folder\. Because it depends on \fBbaz@2\.x\fR, it cannot +bar's \fBnode_modules\fR folder\. Because it depends on \fBbaz@2\.x\fR, it cannot re\-use the \fBbaz@1\.2\.3\fR installed in the parent \fBnode_modules\fR folder [D], and must install its own copy [C]\. -. .P Underneath bar, the \fBbaz \-> quux \-> bar\fR dependency creates a cycle\. -However, because bar is already in quux\'s ancestry [B], it does not +However, because bar is already in quux's ancestry [B], it does not unpack another copy of bar into that folder\. -. .P -Underneath \fBfoo \-> baz\fR [D], quux\'s [E] folder tree is empty, because its +Underneath \fBfoo \-> baz\fR [D], quux's [E] folder tree is empty, because its dependency on bar is satisfied by the parent folder copy installed at [B]\. -. .P For a graphical breakdown of what is installed where, use \fBnpm ls\fR\|\. -. -.SS "Publishing" +.SS Publishing +.P Upon publishing, npm will look in the \fBnode_modules\fR folder\. If any of the items there are not in the \fBbundledDependencies\fR array, then they will not be included in the package tarball\. -. .P This allows a package maintainer to install all of their dependencies (and dev dependencies) locally, but only re\-publish those items that cannot be found elsewhere\. See npm help 5 \fBpackage\.json\fR for more information\. -. -.SH "SEE ALSO" -. -.IP "\(bu" 4 +.SH SEE ALSO +.RS 0 +.IP \(bu 2 npm help 7 faq -. -.IP "\(bu" 4 +.IP \(bu 2 npm help 5 package\.json -. -.IP "\(bu" 4 +.IP \(bu 2 npm help install -. -.IP "\(bu" 4 +.IP \(bu 2 npm help pack -. -.IP "\(bu" 4 +.IP \(bu 2 npm help cache -. -.IP "\(bu" 4 +.IP \(bu 2 npm help config -. -.IP "\(bu" 4 +.IP \(bu 2 npm help 5 npmrc -. -.IP "\(bu" 4 +.IP \(bu 2 npm help 7 config -. -.IP "\(bu" 4 +.IP \(bu 2 npm help publish -. 
-.IP "" 0 + +.RE diff --git a/deps/npm/man/man5/npm-global.5 b/deps/npm/man/man5/npm-global.5 index d349c1f43a5..9cd3436f894 100644 --- a/deps/npm/man/man5/npm-global.5 +++ b/deps/npm/man/man5/npm-global.5 @@ -1,141 +1,132 @@ -.\" Generated with Ronnjs 0.3.8 -.\" http://github.com/kapouer/ronnjs/ -. -.TH "NPM\-FOLDERS" "5" "September 2014" "" "" -. +.TH "NPM\-FOLDERS" "5" "October 2014" "" "" .SH "NAME" -\fBnpm-folders\fR \-\- Folder Structures Used by npm -. -.SH "DESCRIPTION" -npm puts various things on your computer\. That\'s its job\. -. +\fBnpm-folders\fR \- Folder Structures Used by npm +.SH DESCRIPTION +.P +npm puts various things on your computer\. That's its job\. .P This document will tell you what it puts where\. -. -.SS "tl;dr" -. -.IP "\(bu" 4 +.SS tl;dr +.RS 0 +.IP \(bu 2 Local install (default): puts stuff in \fB\|\./node_modules\fR of the current package root\. -. -.IP "\(bu" 4 +.IP \(bu 2 Global install (with \fB\-g\fR): puts stuff in /usr/local or wherever node is installed\. -. -.IP "\(bu" 4 -Install it \fBlocally\fR if you\'re going to \fBrequire()\fR it\. -. -.IP "\(bu" 4 -Install it \fBglobally\fR if you\'re going to run it on the command line\. -. -.IP "\(bu" 4 +.IP \(bu 2 +Install it \fBlocally\fR if you're going to \fBrequire()\fR it\. +.IP \(bu 2 +Install it \fBglobally\fR if you're going to run it on the command line\. +.IP \(bu 2 If you need both, then install it in both places, or use \fBnpm link\fR\|\. -. -.IP "" 0 -. -.SS "prefix Configuration" + +.RE +.SS prefix Configuration +.P The \fBprefix\fR config defaults to the location where node is installed\. On most systems, this is \fB/usr/local\fR, and most of the time is the same -as node\'s \fBprocess\.installPrefix\fR\|\. -. +as node's \fBprocess\.installPrefix\fR\|\. .P On windows, this is the exact location of the node\.exe binary\. On Unix -systems, it\'s one level up, since node is typically installed at \fB{prefix}/bin/node\fR rather than \fB{prefix}/node\.exe\fR\|\. -. +systems, it's one level up, since node is typically installed at +\fB{prefix}/bin/node\fR rather than \fB{prefix}/node\.exe\fR\|\. .P When the \fBglobal\fR flag is set, npm installs things into this prefix\. When it is not set, it uses the root of the current package, or the current working directory if not in a package already\. -. -.SS "Node Modules" +.SS Node Modules +.P Packages are dropped into the \fBnode_modules\fR folder under the \fBprefix\fR\|\. -When installing locally, this means that you can \fBrequire("packagename")\fR to load its main module, or \fBrequire("packagename/lib/path/to/sub/module")\fR to load other modules\. -. +When installing locally, this means that you can +\fBrequire("packagename")\fR to load its main module, or +\fBrequire("packagename/lib/path/to/sub/module")\fR to load other modules\. .P Global installs on Unix systems go to \fB{prefix}/lib/node_modules\fR\|\. -Global installs on Windows go to \fB{prefix}/node_modules\fR (that is, no \fBlib\fR folder\.) -. +Global installs on Windows go to \fB{prefix}/node_modules\fR (that is, no +\fBlib\fR folder\.) +.P +Scoped packages are installed the same way, except they are grouped together +in a sub\-folder of the relevant \fBnode_modules\fR folder with the name of that +scope prefix by the @ symbol, e\.g\. \fBnpm install @myorg/package\fR would place +the package in \fB{prefix}/node_modules/@myorg/package\fR\|\. See npm help 7 \fBscopes\fR for +more details\. .P If you wish to \fBrequire()\fR a package, then install it locally\. -. 
-.SS "Executables" +.SS Executables +.P When in global mode, executables are linked into \fB{prefix}/bin\fR on Unix, or directly into \fB{prefix}\fR on Windows\. -. .P -When in local mode, executables are linked into \fB\|\./node_modules/\.bin\fR so that they can be made available to scripts run +When in local mode, executables are linked into +\fB\|\./node_modules/\.bin\fR so that they can be made available to scripts run through npm\. (For example, so that a test runner will be in the path when you run \fBnpm test\fR\|\.) -. -.SS "Man Pages" +.SS Man Pages +.P When in global mode, man pages are linked into \fB{prefix}/share/man\fR\|\. -. .P When in local mode, man pages are not installed\. -. .P Man pages are not installed on Windows systems\. -. -.SS "Cache" -See npm help \fBnpm\-cache\fR\|\. Cache files are stored in \fB~/\.npm\fR on Posix, or \fB~/npm\-cache\fR on Windows\. -. +.SS Cache +.P +See npm help \fBnpm\-cache\fR\|\. Cache files are stored in \fB~/\.npm\fR on Posix, or +\fB~/npm\-cache\fR on Windows\. .P This is controlled by the \fBcache\fR configuration param\. -. -.SS "Temp Files" -Temporary files are stored by default in the folder specified by the \fBtmp\fR config, which defaults to the TMPDIR, TMP, or TEMP environment +.SS Temp Files +.P +Temporary files are stored by default in the folder specified by the +\fBtmp\fR config, which defaults to the TMPDIR, TMP, or TEMP environment variables, or \fB/tmp\fR on Unix and \fBc:\\windows\\temp\fR on Windows\. -. .P Temp files are given a unique folder under this root for each run of the program, and are deleted upon successful exit\. -. -.SH "More Information" -When installing locally, npm first tries to find an appropriate \fBprefix\fR folder\. This is so that \fBnpm install foo@1\.2\.3\fR will install +.SH More Information +.P +When installing locally, npm first tries to find an appropriate +\fBprefix\fR folder\. This is so that \fBnpm install foo@1\.2\.3\fR will install to the sensible root of your package, even if you happen to have \fBcd\fRed into some other folder\. -. .P Starting at the $PWD, npm will walk up the folder tree checking for a folder that contains either a \fBpackage\.json\fR file, or a \fBnode_modules\fR folder\. If such a thing is found, then that is treated as the effective "current directory" for the purpose of running npm commands\. (This -behavior is inspired by and similar to git\'s \.git\-folder seeking +behavior is inspired by and similar to git's \.git\-folder seeking logic when running git commands in a working dir\.) -. .P If no package root is found, then the current folder is used\. -. .P When you run \fBnpm install foo@1\.2\.3\fR, then the package is loaded into the cache, and then unpacked into \fB\|\./node_modules/foo\fR\|\. Then, any of -foo\'s dependencies are similarly unpacked into \fB\|\./node_modules/foo/node_modules/\.\.\.\fR\|\. -. +foo's dependencies are similarly unpacked into +\fB\|\./node_modules/foo/node_modules/\.\.\.\fR\|\. .P Any bin files are symlinked to \fB\|\./node_modules/\.bin/\fR, so that they may be found by npm scripts when necessary\. -. -.SS "Global Installation" +.SS Global Installation +.P If the \fBglobal\fR configuration is set to true, then npm will install packages "globally"\. -. .P For global installation, packages are installed roughly the same way, but using the folders described above\. -. 
-.SS "Cycles, Conflicts, and Folder Parsimony" -Cycles are handled using the property of node\'s module system that it +.SS Cycles, Conflicts, and Folder Parsimony +.P +Cycles are handled using the property of node's module system that it walks up the directories looking for \fBnode_modules\fR folders\. So, at every stage, if a package is already installed in an ancestor \fBnode_modules\fR folder, then it is not installed at the current location\. -. .P Consider the case above, where \fBfoo \-> bar \-> baz\fR\|\. Imagine if, in -addition to that, baz depended on bar, so you\'d have: \fBfoo \-> bar \-> baz \-> bar \-> baz \.\.\.\fR\|\. However, since the folder -structure is: \fBfoo/node_modules/bar/node_modules/baz\fR, there\'s no need to +addition to that, baz depended on bar, so you'd have: +\fBfoo \-> bar \-> baz \-> bar \-> baz \.\.\.\fR\|\. However, since the folder +structure is: \fBfoo/node_modules/bar/node_modules/baz\fR, there's no need to put another copy of bar into \fB\|\.\.\./baz/node_modules\fR, since when it calls -require("bar"), it will get the copy that is installed in \fBfoo/node_modules/bar\fR\|\. -. +require("bar"), it will get the copy that is installed in +\fBfoo/node_modules/bar\fR\|\. .P This shortcut is only used if the exact same version would be installed in multiple nested \fBnode_modules\fR folders\. It @@ -143,16 +134,14 @@ is still possible to have \fBa/node_modules/b/node_modules/a\fR if the two "a" packages are different versions\. However, without repeating the exact same package multiple times, an infinite regress will always be prevented\. -. .P Another optimization can be made by installing dependencies at the highest level possible, below the localized "target" folder\. -. -.SS "\fIExample\fR" +.SS Example +.P Consider this dependency graph: -. -.IP "" 4 -. +.P +.RS 2 .nf foo +\-\- blerg@1\.2\.5 @@ -165,16 +154,12 @@ foo `\-\- baz@1\.2\.3 `\-\- quux@3\.x `\-\- bar -. .fi -. -.IP "" 0 -. +.RE .P In this case, we might expect a folder structure like this: -. -.IP "" 4 -. +.P +.RS 2 .nf foo +\-\- node_modules @@ -188,77 +173,59 @@ foo `\-\- baz (1\.2\.3) <\-\-\-[D] `\-\- node_modules `\-\- quux (3\.2\.0) <\-\-\-[E] -. .fi -. -.IP "" 0 -. +.RE .P Since foo depends directly on \fBbar@1\.2\.3\fR and \fBbaz@1\.2\.3\fR, those are -installed in foo\'s \fBnode_modules\fR folder\. -. +installed in foo's \fBnode_modules\fR folder\. .P Even though the latest copy of blerg is 1\.3\.7, foo has a specific dependency on version 1\.2\.5\. So, that gets installed at [A]\. Since the -parent installation of blerg satisfies bar\'s dependency on \fBblerg@1\.x\fR, +parent installation of blerg satisfies bar's dependency on \fBblerg@1\.x\fR, it does not install another copy under [B]\. -. .P Bar [B] also has dependencies on baz and asdf, so those are installed in -bar\'s \fBnode_modules\fR folder\. Because it depends on \fBbaz@2\.x\fR, it cannot +bar's \fBnode_modules\fR folder\. Because it depends on \fBbaz@2\.x\fR, it cannot re\-use the \fBbaz@1\.2\.3\fR installed in the parent \fBnode_modules\fR folder [D], and must install its own copy [C]\. -. .P Underneath bar, the \fBbaz \-> quux \-> bar\fR dependency creates a cycle\. -However, because bar is already in quux\'s ancestry [B], it does not +However, because bar is already in quux's ancestry [B], it does not unpack another copy of bar into that folder\. -. 
.P -Underneath \fBfoo \-> baz\fR [D], quux\'s [E] folder tree is empty, because its +Underneath \fBfoo \-> baz\fR [D], quux's [E] folder tree is empty, because its dependency on bar is satisfied by the parent folder copy installed at [B]\. -. .P For a graphical breakdown of what is installed where, use \fBnpm ls\fR\|\. -. -.SS "Publishing" +.SS Publishing +.P Upon publishing, npm will look in the \fBnode_modules\fR folder\. If any of the items there are not in the \fBbundledDependencies\fR array, then they will not be included in the package tarball\. -. .P This allows a package maintainer to install all of their dependencies (and dev dependencies) locally, but only re\-publish those items that cannot be found elsewhere\. See npm help 5 \fBpackage\.json\fR for more information\. -. -.SH "SEE ALSO" -. -.IP "\(bu" 4 +.SH SEE ALSO +.RS 0 +.IP \(bu 2 npm help 7 faq -. -.IP "\(bu" 4 +.IP \(bu 2 npm help 5 package\.json -. -.IP "\(bu" 4 +.IP \(bu 2 npm help install -. -.IP "\(bu" 4 +.IP \(bu 2 npm help pack -. -.IP "\(bu" 4 +.IP \(bu 2 npm help cache -. -.IP "\(bu" 4 +.IP \(bu 2 npm help config -. -.IP "\(bu" 4 +.IP \(bu 2 npm help 5 npmrc -. -.IP "\(bu" 4 +.IP \(bu 2 npm help 7 config -. -.IP "\(bu" 4 +.IP \(bu 2 npm help publish -. -.IP "" 0 + +.RE diff --git a/deps/npm/man/man5/npm-json.5 b/deps/npm/man/man5/npm-json.5 index 8233dc17315..fa9ef95c4ba 100644 --- a/deps/npm/man/man5/npm-json.5 +++ b/deps/npm/man/man5/npm-json.5 @@ -1,253 +1,209 @@ -.\" Generated with Ronnjs 0.3.8 -.\" http://github.com/kapouer/ronnjs/ -. -.TH "PACKAGE\.JSON" "5" "September 2014" "" "" -. +.TH "PACKAGE\.JSON" "5" "October 2014" "" "" .SH "NAME" -\fBpackage.json\fR \-\- Specifics of npm\'s package\.json handling -. -.SH "DESCRIPTION" -This document is all you need to know about what\'s required in your package\.json +\fBpackage.json\fR \- Specifics of npm's package\.json handling +.SH DESCRIPTION +.P +This document is all you need to know about what's required in your package\.json file\. It must be actual JSON, not just a JavaScript object literal\. -. .P A lot of the behavior described in this document is affected by the config settings described in npm help 7 \fBnpm\-config\fR\|\. -. -.SH "name" +.SH name +.P The \fImost\fR important things in your package\.json are the name and version fields\. -Those are actually required, and your package won\'t install without +Those are actually required, and your package won't install without them\. The name and version together form an identifier that is assumed to be completely unique\. Changes to the package should come along with changes to the version\. -. .P The name is what your thing is called\. Some tips: -. -.IP "\(bu" 4 -Don\'t put "js" or "node" in the name\. It\'s assumed that it\'s js, since you\'re +.RS 0 +.IP \(bu 2 +Don't put "js" or "node" in the name\. It's assumed that it's js, since you're writing a package\.json file, and you can specify the engine using the "engines" field\. (See below\.) -. -.IP "\(bu" 4 +.IP \(bu 2 The name ends up being part of a URL, an argument on the command line, and a folder name\. Any name with non\-url\-safe characters will be rejected\. -Also, it can\'t start with a dot or an underscore\. -. -.IP "\(bu" 4 +Also, it can't start with a dot or an underscore\. +.IP \(bu 2 The name will probably be passed as an argument to require(), so it should be something short, but also reasonably descriptive\. -. 
-.IP "\(bu" 4 -You may want to check the npm registry to see if there\'s something by that name +.IP \(bu 2 +You may want to check the npm registry to see if there's something by that name already, before you get too attached to it\. http://registry\.npmjs\.org/ -. -.IP "" 0 -. -.SH "version" + +.RE +.P +A name can be optionally prefixed by a scope, e\.g\. \fB@myorg/mypackage\fR\|\. See +npm help 7 \fBnpm\-scope\fR for more detail\. +.SH version +.P The \fImost\fR important things in your package\.json are the name and version fields\. -Those are actually required, and your package won\'t install without +Those are actually required, and your package won't install without them\. The name and version together form an identifier that is assumed to be completely unique\. Changes to the package should come along with changes to the version\. -. .P -Version must be parseable by node\-semver \fIhttps://github\.com/isaacs/node\-semver\fR, which is bundled +Version must be parseable by +node\-semver \fIhttps://github\.com/isaacs/node\-semver\fR, which is bundled with npm as a dependency\. (\fBnpm install semver\fR to use it yourself\.) -. .P More on version numbers and ranges at npm help 7 semver\. -. -.SH "description" -Put a description in it\. It\'s a string\. This helps people discover your -package, as it\'s listed in \fBnpm search\fR\|\. -. -.SH "keywords" -Put keywords in it\. It\'s an array of strings\. This helps people -discover your package as it\'s listed in \fBnpm search\fR\|\. -. -.SH "homepage" +.SH description +.P +Put a description in it\. It's a string\. This helps people discover your +package, as it's listed in \fBnpm search\fR\|\. +.SH keywords +.P +Put keywords in it\. It's an array of strings\. This helps people +discover your package as it's listed in \fBnpm search\fR\|\. +.SH homepage +.P The url to the project homepage\. -. .P \fBNOTE\fR: This is \fInot\fR the same as "url"\. If you put a "url" field, -then the registry will think it\'s a redirection to your package that has +then the registry will think it's a redirection to your package that has been published somewhere else, and spit at you\. -. .P -Literally\. Spit\. I\'m so not kidding\. -. -.SH "bugs" -The url to your project\'s issue tracker and / or the email address to which +Literally\. Spit\. I'm so not kidding\. +.SH bugs +.P +The url to your project's issue tracker and / or the email address to which issues should be reported\. These are helpful for people who encounter issues with your package\. -. .P It should look like this: -. -.IP "" 4 -. +.P +.RS 2 .nf { "url" : "http://github\.com/owner/project/issues" , "email" : "project@hostname\.com" } -. .fi -. -.IP "" 0 -. +.RE .P You can specify either one or both values\. If you want to provide only a url, you can specify the value for "bugs" as a simple string instead of an object\. -. .P If a url is provided, it will be used by the \fBnpm bugs\fR command\. -. -.SH "license" +.SH license +.P You should specify a license for your package so that people know how they are -permitted to use it, and any restrictions you\'re placing on it\. -. +permitted to use it, and any restrictions you're placing on it\. .P -The simplest way, assuming you\'re using a common license such as BSD\-3\-Clause -or MIT, is to just specify the standard SPDX ID of the license you\'re using, +The simplest way, assuming you're using a common license such as BSD\-3\-Clause +or MIT, is to just specify the standard SPDX ID of the license you're using, like this: -. -.IP "" 4 -. 
+.P +.RS 2 .nf { "license" : "BSD\-3\-Clause" } -. .fi -. -.IP "" 0 -. +.RE .P You can check the full list of SPDX license IDs \fIhttps://spdx\.org/licenses/\fR\|\. -Ideally you should pick one that is OSI \fIhttp://opensource\.org/licenses/alphabetical\fR approved\. -. +Ideally you should pick one that is +OSI \fIhttp://opensource\.org/licenses/alphabetical\fR approved\. .P -It\'s also a good idea to include a LICENSE file at the top level in +It's also a good idea to include a LICENSE file at the top level in your package\. -. -.SH "people fields: author, contributors" +.SH people fields: author, contributors +.P The "author" is one person\. "contributors" is an array of people\. A "person" is an object with a "name" field and optionally "url" and "email", like this: -. -.IP "" 4 -. +.P +.RS 2 .nf { "name" : "Barney Rubble" , "email" : "b@rubble\.com" , "url" : "http://barnyrubble\.tumblr\.com/" } -. .fi -. -.IP "" 0 -. +.RE .P Or you can shorten that all into a single string, and npm will parse it for you: -. -.IP "" 4 -. +.P +.RS 2 .nf "Barney Rubble (http://barnyrubble\.tumblr\.com/) -. .fi -. -.IP "" 0 -. +.RE .P Both email and url are optional either way\. -. .P npm also sets a top\-level "maintainers" field with your npm user info\. -. -.SH "files" +.SH files +.P The "files" field is an array of files to include in your project\. If you name a folder in the array, then it will also include the files inside that folder\. (Unless they would be ignored by another rule\.) -. .P You can also provide a "\.npmignore" file in the root of your package, which will keep files from being included, even if they would be picked up by the files array\. The "\.npmignore" file works just like a "\.gitignore"\. -. -.SH "main" +.SH main +.P The main field is a module ID that is the primary entry point to your program\. -That is, if your package is named \fBfoo\fR, and a user installs it, and then does \fBrequire("foo")\fR, then your main module\'s exports object will be returned\. -. +That is, if your package is named \fBfoo\fR, and a user installs it, and then does +\fBrequire("foo")\fR, then your main module's exports object will be returned\. .P This should be a module ID relative to the root of your package folder\. -. .P For most modules, it makes the most sense to have a main script and often not much else\. -. -.SH "bin" -A lot of packages have one or more executable files that they\'d like to +.SH bin +.P +A lot of packages have one or more executable files that they'd like to install into the PATH\. npm makes this pretty easy (in fact, it uses this feature to install the "npm" executable\.) -. .P To use this, supply a \fBbin\fR field in your package\.json which is a map of -command name to local file name\. On install, npm will symlink that file into \fBprefix/bin\fR for global installs, or \fB\|\./node_modules/\.bin/\fR for local +command name to local file name\. On install, npm will symlink that file into +\fBprefix/bin\fR for global installs, or \fB\|\./node_modules/\.bin/\fR for local installs\. -. .P For example, npm has this: -. -.IP "" 4 -. +.P +.RS 2 .nf { "bin" : { "npm" : "\./cli\.js" } } -. .fi -. -.IP "" 0 -. +.RE .P -So, when you install npm, it\'ll create a symlink from the \fBcli\.js\fR script to \fB/usr/local/bin/npm\fR\|\. -. +So, when you install npm, it'll create a symlink from the \fBcli\.js\fR script to +\fB/usr/local/bin/npm\fR\|\. .P If you have a single executable, and its name should be the name of the package, then you can just supply it as a string\. 
For example: -. -.IP "" 4 -. +.P +.RS 2 .nf { "name": "my\-program" , "version": "1\.2\.5" , "bin": "\./path/to/program" } -. .fi -. -.IP "" 0 -. +.RE .P would be the same as this: -. -.IP "" 4 -. +.P +.RS 2 .nf { "name": "my\-program" , "version": "1\.2\.5" , "bin" : { "my\-program" : "\./path/to/program" } } -. .fi -. -.IP "" 0 -. -.SH "man" -Specify either a single file or an array of filenames to put in place for the \fBman\fR program to find\. -. -.P -If only a single file is provided, then it\'s installed such that it is the +.RE +.SH man +.P +Specify either a single file or an array of filenames to put in place for the +\fBman\fR program to find\. +.P +If only a single file is provided, then it's installed such that it is the result from \fBman \fR, regardless of its actual filename\. For example: -. -.IP "" 4 -. +.P +.RS 2 .nf { "name" : "foo" , "version" : "1\.2\.3" @@ -255,20 +211,15 @@ result from \fBman \fR, regardless of its actual filename\. For exampl , "main" : "foo\.js" , "man" : "\./man/doc\.1" } -. .fi -. -.IP "" 0 -. +.RE .P would link the \fB\|\./man/doc\.1\fR file in such that it is the target for \fBman foo\fR -. .P -If the filename doesn\'t start with the package name, then it\'s prefixed\. +If the filename doesn't start with the package name, then it's prefixed\. So, this: -. -.IP "" 4 -. +.P +.RS 2 .nf { "name" : "foo" , "version" : "1\.2\.3" @@ -276,20 +227,15 @@ So, this: , "main" : "foo\.js" , "man" : [ "\./man/foo\.1", "\./man/bar\.1" ] } -. .fi -. -.IP "" 0 -. +.RE .P will create files to do \fBman foo\fR and \fBman foo\-bar\fR\|\. -. .P Man files must end with a number, and optionally a \fB\|\.gz\fR suffix if they are compressed\. The number dictates which man section the file is installed into\. -. -.IP "" 4 -. +.P +.RS 2 .nf { "name" : "foo" , "version" : "1\.2\.3" @@ -297,169 +243,142 @@ compressed\. The number dictates which man section the file is installed into\. , "main" : "foo\.js" , "man" : [ "\./man/foo\.1", "\./man/foo\.2" ] } -. .fi -. -.IP "" 0 -. +.RE .P will create entries for \fBman foo\fR and \fBman 2 foo\fR -. -.SH "directories" +.SH directories +.P The CommonJS Packages \fIhttp://wiki\.commonjs\.org/wiki/Packages/1\.0\fR spec details a few ways that you can indicate the structure of your package using a \fBdirectories\fR -hash\. If you look at npm\'s package\.json \fIhttp://registry\.npmjs\.org/npm/latest\fR, -you\'ll see that it has directories for doc, lib, and man\. -. +object\. If you look at npm's package\.json \fIhttp://registry\.npmjs\.org/npm/latest\fR, +you'll see that it has directories for doc, lib, and man\. .P In the future, this information may be used in other creative ways\. -. -.SS "directories\.lib" +.SS directories\.lib +.P Tell people where the bulk of your library is\. Nothing special is done -with the lib folder in any way, but it\'s useful meta info\. -. -.SS "directories\.bin" -If you specify a "bin" directory, then all the files in that folder will -be used as the "bin" hash\. -. -.P -If you have a "bin" hash already, then this has no effect\. -. -.SS "directories\.man" +with the lib folder in any way, but it's useful meta info\. +.SS directories\.bin +.P +If you specify a \fBbin\fR directory, then all the files in that folder will +be added as children of the \fBbin\fR path\. +.P +If you have a \fBbin\fR path already, then this has no effect\. +.SS directories\.man +.P A folder that is full of man pages\. Sugar to generate a "man" array by walking the folder\. -. 
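For instance, this package.json sketch lets npm build the "man" array by
walking ./man instead of listing each page by hand:

    { "name" : "foo"
    , "version" : "1.2.3"
    , "directories" : { "man" : "./man" }
    }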
-.SS "directories\.doc" +.SS directories\.doc +.P Put markdown files in here\. Eventually, these will be displayed nicely, maybe, someday\. -. -.SS "directories\.example" +.SS directories\.example +.P Put example scripts in here\. Someday, it might be exposed in some clever way\. -. -.SH "repository" +.SH repository +.P Specify the place where your code lives\. This is helpful for people who want to contribute\. If the git repo is on github, then the \fBnpm docs\fR command will be able to find you\. -. .P Do it like this: -. -.IP "" 4 -. +.P +.RS 2 .nf "repository" : { "type" : "git" , "url" : "http://github\.com/npm/npm\.git" } + "repository" : { "type" : "svn" , "url" : "http://v8\.googlecode\.com/svn/trunk/" } -. .fi -. -.IP "" 0 -. +.RE .P The URL should be a publicly available (perhaps read\-only) url that can be handed directly to a VCS program without any modification\. It should not be a url to an -html project page that you put in your browser\. It\'s for computers\. -. -.SH "scripts" -The "scripts" member is an object hash of script commands that are run +html project page that you put in your browser\. It's for computers\. +.SH scripts +.P +The "scripts" property is a dictionary containing script commands that are run at various times in the lifecycle of your package\. The key is the lifecycle event, and the value is the command to run at that point\. -. .P See npm help 7 \fBnpm\-scripts\fR to find out more about writing package scripts\. -. -.SH "config" -A "config" hash can be used to set configuration -parameters used in package scripts that persist across upgrades\. For -instance, if a package had the following: -. -.IP "" 4 -. +.SH config +.P +A "config" object can be used to set configuration parameters used in package +scripts that persist across upgrades\. For instance, if a package had the +following: +.P +.RS 2 .nf { "name" : "foo" , "config" : { "port" : "8080" } } -. .fi -. -.IP "" 0 -. +.RE .P -and then had a "start" command that then referenced the \fBnpm_package_config_port\fR environment variable, then the user could +and then had a "start" command that then referenced the +\fBnpm_package_config_port\fR environment variable, then the user could override that by doing \fBnpm config set foo:port 8001\fR\|\. -. .P See npm help 7 \fBnpm\-config\fR and npm help 7 \fBnpm\-scripts\fR for more on package configs\. -. -.SH "dependencies" -Dependencies are specified with a simple hash of package name to +.SH dependencies +.P +Dependencies are specified in a simple object that maps a package name to a version range\. The version range is a string which has one or more -space\-separated descriptors\. Dependencies can also be identified with -a tarball or git URL\. -. +space\-separated descriptors\. Dependencies can also be identified with a +tarball or git URL\. .P -\fBPlease do not put test harnesses or transpilers in your \fBdependencies\fR hash\.\fR See \fBdevDependencies\fR, below\. -. +\fBPlease do not put test harnesses or transpilers in your +\fBdependencies\fR object\.\fR See \fBdevDependencies\fR, below\. .P See npm help 7 semver for more details about specifying version ranges\. -. -.IP "\(bu" 4 +.RS 0 +.IP \(bu 2 \fBversion\fR Must match \fBversion\fR exactly -. -.IP "\(bu" 4 +.IP \(bu 2 \fB>version\fR Must be greater than \fBversion\fR -. -.IP "\(bu" 4 +.IP \(bu 2 \fB>=version\fR etc -. -.IP "\(bu" 4 +.IP \(bu 2 \fB=version1 <=version2\fR\|\. -. -.IP "\(bu" 4 +.IP \(bu 2 \fBrange1 || range2\fR Passes if either range1 or range2 are satisfied\. -. 
-.IP "\(bu" 4 -\fBgit\.\.\.\fR See \'Git URLs as Dependencies\' below -. -.IP "\(bu" 4 -\fBuser/repo\fR See \'GitHub URLs\' below -. -.IP "" 0 -. +.IP \(bu 2 +\fBgit\.\.\.\fR See 'Git URLs as Dependencies' below +.IP \(bu 2 +\fBuser/repo\fR See 'GitHub URLs' below +.IP \(bu 2 +\fBtag\fR A specific version tagged and published as \fBtag\fR See npm help \fBnpm\-tag\fR +.IP \(bu 2 +\fBpath/path/path\fR See Local Paths below + +.RE .P For example, these are all valid: -. -.IP "" 4 -. +.P +.RS 2 .nf { "dependencies" : { "foo" : "1\.0\.0 \- 2\.9999\.9999" @@ -472,45 +391,39 @@ For example, these are all valid: , "elf" : "~1\.2\.3" , "two" : "2\.x" , "thr" : "3\.3\.x" + , "lat" : "latest" + , "dyl" : "file:\.\./dyl" } } -. .fi -. -.IP "" 0 -. -.SS "URLs as Dependencies" +.RE +.SS URLs as Dependencies +.P You may specify a tarball URL in place of a version range\. -. .P This tarball will be downloaded and installed locally to your package at install time\. -. -.SS "Git URLs as Dependencies" +.SS Git URLs as Dependencies +.P Git urls can be of the form: -. -.IP "" 4 -. +.P +.RS 2 .nf git://github\.com/user/project\.git#commit\-ish git+ssh://user@hostname:project\.git#commit\-ish git+ssh://user@hostname/project\.git#commit\-ish git+http://user@hostname/project/blah\.git#commit\-ish git+https://user@hostname/project/blah\.git#commit\-ish -. .fi -. -.IP "" 0 -. +.RE .P The \fBcommit\-ish\fR can be any tag, sha, or branch which can be supplied as an argument to \fBgit checkout\fR\|\. The default is \fBmaster\fR\|\. -. -.SH "GitHub URLs" +.SH GitHub URLs +.P As of version 1\.1\.65, you can refer to GitHub urls as just "foo": "user/foo\-project"\. For example: -. -.IP "" 4 -. +.P +.RS 2 .nf { "name": "foo", @@ -519,34 +432,61 @@ As of version 1\.1\.65, you can refer to GitHub urls as just "foo": "user/foo\-p "express": "visionmedia/express" } } -. .fi -. -.IP "" 0 -. -.SH "devDependencies" +.RE +.SH Local Paths +.P +As of version 2\.0\.0 you can provide a path to a local directory that contains a +package\. Local paths can be saved using \fBnpm install \-\-save\fR, using any of +these forms: +.P +.RS 2 +.nf +\|\.\./foo/bar +~/foo/bar +\|\./foo/bar +/foo/bar +.fi +.RE +.P +in which case they will be normalized to a relative path and added to your +\fBpackage\.json\fR\|\. For example: +.P +.RS 2 +.nf +{ + "name": "baz", + "dependencies": { + "bar": "file:\.\./foo/bar" + } +} +.fi +.RE +.P +This feature is helpful for local offline development and creating +tests that require npm installing where you don't want to hit an +external server, but should not be used when publishing packages +to the public registry\. +.SH devDependencies +.P If someone is planning on downloading and using your module in their -program, then they probably don\'t want or need to download and build +program, then they probably don't want or need to download and build the external test or documentation framework that you use\. -. .P -In this case, it\'s best to list these additional items in a \fBdevDependencies\fR hash\. -. +In this case, it's best to map these additional items in a \fBdevDependencies\fR +object\. .P These things will be installed when doing \fBnpm link\fR or \fBnpm install\fR from the root of a package, and can be managed like any other npm configuration param\. See npm help 7 \fBnpm\-config\fR for more on the topic\. -. 
.P
For build steps that are not platform\-specific, such as compiling
CoffeeScript or other languages to JavaScript, use the \fBprepublish\fR
script to do this, and make the required package a devDependency\.
-.
.P
For example:
-.
-.IP "" 4
-.
+.P
+.RS 2
.nf
{ "name": "ethopia\-waza",
  "description": "a delightfully fruity coffee varietal",
@@ -559,28 +499,23 @@ For example:
  },
  "main": "lib/waza\.js"
}
-.
.fi
-.
-.IP "" 0
-.
+.RE
.P
The \fBprepublish\fR script will be run before publishing, so that users
can consume the functionality without requiring them to compile it
-themselves\. In dev mode (ie, locally running \fBnpm install\fR), it\'ll
+themselves\. In dev mode (ie, locally running \fBnpm install\fR), it'll
run this script as well, so that you can test it easily\.
-.
-.SH "peerDependencies"
+.SH peerDependencies
+.P
In some cases, you want to express the compatibility of your package with a
host tool or library, while not necessarily doing a \fBrequire\fR of this host\.
This is usually referred to as a \fIplugin\fR\|\. Notably, your module may be
exposing a specific interface, expected and specified by the host documentation\.
-.
.P
For example:
-.
-.IP "" 4
-.
+.P
+.RS 2
.nf
{
  "name": "tea\-latte",
@@ -589,283 +524,223 @@
    "tea": "2\.x"
  }
}
-.
.fi
-.
-.IP "" 0
-.
+.RE
.P
This ensures your package \fBtea\-latte\fR can be installed \fIalong\fR with the
second major version of the host package \fBtea\fR only\. The host package is
automatically installed if needed\. \fBnpm install tea\-latte\fR could possibly
yield the following dependency graph:
-.
-.IP "" 4
-.
+.P
+.RS 2
.nf
├── tea\-latte@1\.3\.5
└── tea@2\.2\.0
-.
.fi
-.
-.IP "" 0
-.
+.RE
.P
Trying to install another plugin with a conflicting requirement will cause an
error\. For this reason, make sure your plugin requirement is as broad as
possible, and not to lock it down to specific patch versions\.
-.
.P
Assuming the host complies with semver \fIhttp://semver\.org/\fR, only changes in
-the host package\'s major version will break your plugin\. Thus, if you\'ve worked
+the host package's major version will break your plugin\. Thus, if you've worked
with every 1\.x version of the host package, use \fB"^1\.0"\fR or \fB"1\.x"\fR to
express this\. If you depend on features introduced in 1\.5\.2, use
\fB">= 1\.5\.2 < 2"\fR\|\.
-.
-.SH "bundledDependencies"
+.SH bundledDependencies
+.P
Array of package names that will be bundled when publishing the package\.
-.
.P
If this is spelled \fB"bundleDependencies"\fR, then that is also honorable\.
-.
-.SH "optionalDependencies"
-If a dependency can be used, but you would like npm to proceed if it
-cannot be found or fails to install, then you may put it in the \fBoptionalDependencies\fR hash\. This is a map of package name to version
-or url, just like the \fBdependencies\fR hash\. The difference is that
-failure is tolerated\.
-.
-.P
-It is still your program\'s responsibility to handle the lack of the
+.SH optionalDependencies
+.P
+If a dependency can be used, but you would like npm to proceed if it cannot be
+found or fails to install, then you may put it in the \fBoptionalDependencies\fR
+object\. This is a map of package name to version or url, just like the
+\fBdependencies\fR object\. The difference is that build failures do not cause
+installation to fail\.
+.P
+It is still your program's responsibility to handle the lack of the
dependency\. For example, something like this:
-.
-.IP "" 4
-.
+.P +.RS 2 .nf try { - var foo = require(\'foo\') - var fooVersion = require(\'foo/package\.json\')\.version + var foo = require('foo') + var fooVersion = require('foo/package\.json')\.version } catch (er) { foo = null } if ( notGoodFooVersion(fooVersion) ) { foo = null } + // \.\. then later in your program \.\. + if (foo) { foo\.doFooThings() } -. .fi -. -.IP "" 0 -. +.RE +.P +Entries in \fBoptionalDependencies\fR will override entries of the same name in +\fBdependencies\fR, so it's usually best to only put in one place\. +.SH engines .P -Entries in \fBoptionalDependencies\fR will override entries of the same name in \fBdependencies\fR, so it\'s usually best to only put in one place\. -. -.SH "engines" You can specify the version of node that your stuff works on: -. -.IP "" 4 -. +.P +.RS 2 .nf { "engines" : { "node" : ">=0\.10\.3 <0\.12" } } -. .fi -. -.IP "" 0 -. +.RE .P -And, like with dependencies, if you don\'t specify the version (or if you +And, like with dependencies, if you don't specify the version (or if you specify "*" as the version), then any version of node will do\. -. .P If you specify an "engines" field, then npm will require that "node" be somewhere on that list\. If "engines" is omitted, then npm will just assume that it works on node\. -. .P You can also use the "engines" field to specify which versions of npm are capable of properly installing your program\. For example: -. -.IP "" 4 -. +.P +.RS 2 .nf { "engines" : { "npm" : "~1\.0\.20" } } -. .fi -. -.IP "" 0 -. +.RE .P Note that, unless the user has set the \fBengine\-strict\fR config flag, this field is advisory only\. -. -.SH "engineStrict" +.SH engineStrict +.P If you are sure that your module will \fIdefinitely not\fR run properly on -versions of Node/npm other than those specified in the \fBengines\fR hash, +versions of Node/npm other than those specified in the \fBengines\fR object, then you can set \fB"engineStrict": true\fR in your package\.json file\. -This will override the user\'s \fBengine\-strict\fR config setting\. -. +This will override the user's \fBengine\-strict\fR config setting\. .P Please do not do this unless you are really very very sure\. If your -engines hash is something overly restrictive, you can quite easily and +engines object is something overly restrictive, you can quite easily and inadvertently lock yourself into obscurity and prevent your users from updating to new versions of Node\. Consider this choice carefully\. If people abuse it, it will be removed in a future version of npm\. -. -.SH "os" +.SH os +.P You can specify which operating systems your module will run on: -. -.IP "" 4 -. +.P +.RS 2 .nf "os" : [ "darwin", "linux" ] -. .fi -. -.IP "" 0 -. +.RE .P You can also blacklist instead of whitelist operating systems, -just prepend the blacklisted os with a \'!\': -. -.IP "" 4 -. +just prepend the blacklisted os with a '!': +.P +.RS 2 .nf "os" : [ "!win32" ] -. .fi -. -.IP "" 0 -. +.RE .P The host operating system is determined by \fBprocess\.platform\fR -. .P -It is allowed to both blacklist, and whitelist, although there isn\'t any +It is allowed to both blacklist, and whitelist, although there isn't any good reason to do this\. -. -.SH "cpu" +.SH cpu +.P If your code only runs on certain cpu architectures, you can specify which ones\. -. -.IP "" 4 -. +.P +.RS 2 .nf "cpu" : [ "x64", "ia32" ] -. .fi -. -.IP "" 0 -. +.RE .P Like the \fBos\fR option, you can also blacklist architectures: -. -.IP "" 4 -. +.P +.RS 2 .nf "cpu" : [ "!arm", "!mips" ] -. .fi -. -.IP "" 0 -. 
+.RE .P The host architecture is determined by \fBprocess\.arch\fR -. -.SH "preferGlobal" +.SH preferGlobal +.P If your package is primarily a command\-line application that should be installed globally, then set this value to \fBtrue\fR to provide a warning if it is installed locally\. -. .P -It doesn\'t actually prevent users from installing it locally, but it -does help prevent some confusion if it doesn\'t work as expected\. -. -.SH "private" +It doesn't actually prevent users from installing it locally, but it +does help prevent some confusion if it doesn't work as expected\. +.SH private +.P If you set \fB"private": true\fR in your package\.json, then npm will refuse to publish it\. -. -.P -This is a way to prevent accidental publication of private repositories\. -If you would like to ensure that a given package is only ever published -to a specific registry (for example, an internal registry), -then use the \fBpublishConfig\fR hash described below -to override the \fBregistry\fR config param at publish\-time\. -. -.SH "publishConfig" -This is a set of config values that will be used at publish\-time\. It\'s +.P +This is a way to prevent accidental publication of private repositories\. If +you would like to ensure that a given package is only ever published to a +specific registry (for example, an internal registry), then use the +\fBpublishConfig\fR dictionary described below to override the \fBregistry\fR config +param at publish\-time\. +.SH publishConfig +.P +This is a set of config values that will be used at publish\-time\. It's especially handy if you want to set the tag or registry, so that you can ensure that a given package is not tagged with "latest" or published to the global public registry by default\. -. .P Any config values can be overridden, but of course only "tag" and "registry" probably matter for the purposes of publishing\. -. .P See npm help 7 \fBnpm\-config\fR to see the list of config options that can be overridden\. -. -.SH "DEFAULT VALUES" +.SH DEFAULT VALUES +.P npm will default some values based on package contents\. -. -.IP "\(bu" 4 +.RS 0 +.IP \(bu 2 \fB"scripts": {"start": "node server\.js"}\fR -. -.IP If there is a \fBserver\.js\fR file in the root of your package, then npm will default the \fBstart\fR command to \fBnode server\.js\fR\|\. -. -.IP "\(bu" 4 +.IP \(bu 2 \fB"scripts":{"preinstall": "node\-gyp rebuild"}\fR -. -.IP If there is a \fBbinding\.gyp\fR file in the root of your package, npm will default the \fBpreinstall\fR command to compile using node\-gyp\. -. -.IP "\(bu" 4 +.IP \(bu 2 \fB"contributors": [\.\.\.]\fR -. -.IP If there is an \fBAUTHORS\fR file in the root of your package, npm will treat each line as a \fBName (url)\fR format, where email and url are optional\. Lines which start with a \fB#\fR or are blank, will be ignored\. -. -.IP "" 0 -. -.SH "SEE ALSO" -. -.IP "\(bu" 4 + +.RE +.SH SEE ALSO +.RS 0 +.IP \(bu 2 npm help 7 semver -. -.IP "\(bu" 4 +.IP \(bu 2 npm help init -. -.IP "\(bu" 4 +.IP \(bu 2 npm help version -. -.IP "\(bu" 4 +.IP \(bu 2 npm help config -. -.IP "\(bu" 4 +.IP \(bu 2 npm help 7 config -. -.IP "\(bu" 4 +.IP \(bu 2 npm help help -. -.IP "\(bu" 4 +.IP \(bu 2 npm help 7 faq -. -.IP "\(bu" 4 +.IP \(bu 2 npm help install -. -.IP "\(bu" 4 +.IP \(bu 2 npm help publish -. -.IP "\(bu" 4 +.IP \(bu 2 npm help rm -. 
-.IP "" 0 + +.RE diff --git a/deps/npm/man/man5/npmrc.5 b/deps/npm/man/man5/npmrc.5 index d0b63236574..d2846869abf 100644 --- a/deps/npm/man/man5/npmrc.5 +++ b/deps/npm/man/man5/npmrc.5 @@ -1,103 +1,83 @@ -.\" Generated with Ronnjs 0.3.8 -.\" http://github.com/kapouer/ronnjs/ -. -.TH "NPMRC" "5" "September 2014" "" "" -. +.TH "NPMRC" "5" "October 2014" "" "" .SH "NAME" -\fBnpmrc\fR \-\- The npm config files -. -.SH "DESCRIPTION" +\fBnpmrc\fR \- The npm config files +.SH DESCRIPTION +.P npm gets its config settings from the command line, environment variables, and \fBnpmrc\fR files\. -. .P The \fBnpm config\fR command can be used to update and edit the contents of the user and global npmrc files\. -. .P For a list of available configuration options, see npm help 7 config\. -. -.SH "FILES" +.SH FILES +.P The four relevant files are: -. -.IP "\(bu" 4 +.RS 0 +.IP \(bu 2 per\-project config file (/path/to/my/project/\.npmrc) -. -.IP "\(bu" 4 +.IP \(bu 2 per\-user config file (~/\.npmrc) -. -.IP "\(bu" 4 +.IP \(bu 2 global config file ($PREFIX/npmrc) -. -.IP "\(bu" 4 +.IP \(bu 2 npm builtin config file (/path/to/npm/npmrc) -. -.IP "" 0 -. + +.RE .P All npm config files are an ini\-formatted list of \fBkey = value\fR -parameters\. Environment variables can be replaced using \fB${VARIABLE_NAME}\fR\|\. For example: -. -.IP "" 4 -. +parameters\. Environment variables can be replaced using +\fB${VARIABLE_NAME}\fR\|\. For example: +.P +.RS 2 .nf prefix = ${HOME}/\.npm\-packages -. .fi -. -.IP "" 0 -. +.RE .P Each of these files is loaded, and config options are resolved in priority order\. For example, a setting in the userconfig file would override the setting in the globalconfig file\. -. -.SS "Per\-project config file" +.SS Per\-project config file +.P When working locally in a project, a \fB\|\.npmrc\fR file in the root of the project (ie, a sibling of \fBnode_modules\fR and \fBpackage\.json\fR) will set config values specific to this project\. -. .P -Note that this only applies to the root of the project that you\'re +Note that this only applies to the root of the project that you're running npm in\. It has no effect when your module is published\. For -example, you can\'t publish a module that forces itself to install +example, you can't publish a module that forces itself to install globally, or in a different location\. -. -.SS "Per\-user config file" +.SS Per\-user config file +.P \fB$HOME/\.npmrc\fR (or the \fBuserconfig\fR param, if set in the environment or on the command line) -. -.SS "Global config file" +.SS Global config file +.P \fB$PREFIX/etc/npmrc\fR (or the \fBglobalconfig\fR param, if set above): This file is an ini\-file formatted list of \fBkey = value\fR parameters\. Environment variables can be replaced as above\. -. -.SS "Built\-in config file" +.SS Built\-in config file +.P \fBpath/to/npm/itself/npmrc\fR -. .P This is an unchangeable "builtin" configuration file that npm keeps consistent across updates\. Set fields in here using the \fB\|\./configure\fR script that comes with npm\. This is primarily for distribution maintainers to override default configs in a standard and consistent manner\. -. -.SH "SEE ALSO" -. -.IP "\(bu" 4 +.SH SEE ALSO +.RS 0 +.IP \(bu 2 npm help 5 folders -. -.IP "\(bu" 4 +.IP \(bu 2 npm help config -. -.IP "\(bu" 4 +.IP \(bu 2 npm help 7 config -. -.IP "\(bu" 4 +.IP \(bu 2 npm help 5 package\.json -. -.IP "\(bu" 4 +.IP \(bu 2 npm help npm -. 
-.IP "" 0 + +.RE diff --git a/deps/npm/man/man5/package.json.5 b/deps/npm/man/man5/package.json.5 index 8233dc17315..fa9ef95c4ba 100644 --- a/deps/npm/man/man5/package.json.5 +++ b/deps/npm/man/man5/package.json.5 @@ -1,253 +1,209 @@ -.\" Generated with Ronnjs 0.3.8 -.\" http://github.com/kapouer/ronnjs/ -. -.TH "PACKAGE\.JSON" "5" "September 2014" "" "" -. +.TH "PACKAGE\.JSON" "5" "October 2014" "" "" .SH "NAME" -\fBpackage.json\fR \-\- Specifics of npm\'s package\.json handling -. -.SH "DESCRIPTION" -This document is all you need to know about what\'s required in your package\.json +\fBpackage.json\fR \- Specifics of npm's package\.json handling +.SH DESCRIPTION +.P +This document is all you need to know about what's required in your package\.json file\. It must be actual JSON, not just a JavaScript object literal\. -. .P A lot of the behavior described in this document is affected by the config settings described in npm help 7 \fBnpm\-config\fR\|\. -. -.SH "name" +.SH name +.P The \fImost\fR important things in your package\.json are the name and version fields\. -Those are actually required, and your package won\'t install without +Those are actually required, and your package won't install without them\. The name and version together form an identifier that is assumed to be completely unique\. Changes to the package should come along with changes to the version\. -. .P The name is what your thing is called\. Some tips: -. -.IP "\(bu" 4 -Don\'t put "js" or "node" in the name\. It\'s assumed that it\'s js, since you\'re +.RS 0 +.IP \(bu 2 +Don't put "js" or "node" in the name\. It's assumed that it's js, since you're writing a package\.json file, and you can specify the engine using the "engines" field\. (See below\.) -. -.IP "\(bu" 4 +.IP \(bu 2 The name ends up being part of a URL, an argument on the command line, and a folder name\. Any name with non\-url\-safe characters will be rejected\. -Also, it can\'t start with a dot or an underscore\. -. -.IP "\(bu" 4 +Also, it can't start with a dot or an underscore\. +.IP \(bu 2 The name will probably be passed as an argument to require(), so it should be something short, but also reasonably descriptive\. -. -.IP "\(bu" 4 -You may want to check the npm registry to see if there\'s something by that name +.IP \(bu 2 +You may want to check the npm registry to see if there's something by that name already, before you get too attached to it\. http://registry\.npmjs\.org/ -. -.IP "" 0 -. -.SH "version" + +.RE +.P +A name can be optionally prefixed by a scope, e\.g\. \fB@myorg/mypackage\fR\|\. See +npm help 7 \fBnpm\-scope\fR for more detail\. +.SH version +.P The \fImost\fR important things in your package\.json are the name and version fields\. -Those are actually required, and your package won\'t install without +Those are actually required, and your package won't install without them\. The name and version together form an identifier that is assumed to be completely unique\. Changes to the package should come along with changes to the version\. -. .P -Version must be parseable by node\-semver \fIhttps://github\.com/isaacs/node\-semver\fR, which is bundled +Version must be parseable by +node\-semver \fIhttps://github\.com/isaacs/node\-semver\fR, which is bundled with npm as a dependency\. (\fBnpm install semver\fR to use it yourself\.) -. .P More on version numbers and ranges at npm help 7 semver\. -. -.SH "description" -Put a description in it\. It\'s a string\. This helps people discover your -package, as it\'s listed in \fBnpm search\fR\|\. 
-.
-.SH "keywords"
-Put keywords in it\. It\'s an array of strings\. This helps people
-discover your package as it\'s listed in \fBnpm search\fR\|\.
-.
-.SH "homepage"
+.SH description
+.P
+Put a description in it\. It's a string\. This helps people discover your
+package, as it's listed in \fBnpm search\fR\|\.
+.SH keywords
+.P
+Put keywords in it\. It's an array of strings\. This helps people
+discover your package as it's listed in \fBnpm search\fR\|\.
+.SH homepage
+.P
The url to the project homepage\.
-.
.P
\fBNOTE\fR: This is \fInot\fR the same as "url"\. If you put a "url" field,
-then the registry will think it\'s a redirection to your package that has
+then the registry will think it's a redirection to your package that has
been published somewhere else, and spit at you\.
-.
.P
-Literally\. Spit\. I\'m so not kidding\.
-.
-.SH "bugs"
-The url to your project\'s issue tracker and / or the email address to which
+Literally\. Spit\. I'm so not kidding\.
+.SH bugs
+.P
+The url to your project's issue tracker and / or the email address to which
issues should be reported\. These are helpful for people who encounter issues
with your package\.
-.
.P
It should look like this:
-.
-.IP "" 4
-.
+.P
+.RS 2
.nf
{ "url" : "http://github\.com/owner/project/issues"
, "email" : "project@hostname\.com"
}
-.
.fi
-.
-.IP "" 0
-.
+.RE
.P
You can specify either one or both values\. If you want to provide only a
url, you can specify the value for "bugs" as a simple string instead of an
object\.
-.
.P
If a url is provided, it will be used by the \fBnpm bugs\fR command\.
-.
-.SH "license"
+.SH license
+.P
You should specify a license for your package so that people know how they are
-permitted to use it, and any restrictions you\'re placing on it\.
-.
+permitted to use it, and any restrictions you're placing on it\.
.P
-The simplest way, assuming you\'re using a common license such as BSD\-3\-Clause
-or MIT, is to just specify the standard SPDX ID of the license you\'re using,
+The simplest way, assuming you're using a common license such as BSD\-3\-Clause
+or MIT, is to just specify the standard SPDX ID of the license you're using,
like this:
-.
-.IP "" 4
-.
+.P
+.RS 2
.nf
{ "license" : "BSD\-3\-Clause" }
-.
.fi
-.
-.IP "" 0
-.
+.RE
.P
You can check the full list of SPDX license IDs \fIhttps://spdx\.org/licenses/\fR\|\.
-Ideally you should pick one that is OSI \fIhttp://opensource\.org/licenses/alphabetical\fR approved\.
-.
+Ideally you should pick one that is
+OSI \fIhttp://opensource\.org/licenses/alphabetical\fR approved\.
.P
-It\'s also a good idea to include a LICENSE file at the top level in
+It's also a good idea to include a LICENSE file at the top level in
your package\.
-.
-.SH "people fields: author, contributors"
+.SH people fields: author, contributors
+.P
The "author" is one person\. "contributors" is an array of people\. A "person"
is an object with a "name" field and optionally "url" and "email", like this:
-.
-.IP "" 4
-.
+.P
+.RS 2
.nf
{ "name" : "Barney Rubble"
, "email" : "b@rubble\.com"
, "url" : "http://barnyrubble\.tumblr\.com/"
}
-.
.fi
-.
-.IP "" 0
-.
+.RE
.P
Or you can shorten that all into a single string, and npm will parse it for you:
-.
-.IP "" 4
-.
+.P
+.RS 2
.nf
"Barney Rubble <b@rubble\.com> (http://barnyrubble\.tumblr\.com/)"
-.
.fi
-.
-.IP "" 0
-.
+.RE
.P
Both email and url are optional either way\.
-.
.P
npm also sets a top\-level "maintainers" field with your npm user info\.
-.
-.SH "files"
+.SH files
+.P
The "files" field is an array of files to include in your project\.
If you name a folder in the array, then it will also include the files inside that folder\. (Unless they would be ignored by another rule\.) -. .P You can also provide a "\.npmignore" file in the root of your package, which will keep files from being included, even if they would be picked up by the files array\. The "\.npmignore" file works just like a "\.gitignore"\. -. -.SH "main" +.SH main +.P The main field is a module ID that is the primary entry point to your program\. -That is, if your package is named \fBfoo\fR, and a user installs it, and then does \fBrequire("foo")\fR, then your main module\'s exports object will be returned\. -. +That is, if your package is named \fBfoo\fR, and a user installs it, and then does +\fBrequire("foo")\fR, then your main module's exports object will be returned\. .P This should be a module ID relative to the root of your package folder\. -. .P For most modules, it makes the most sense to have a main script and often not much else\. -. -.SH "bin" -A lot of packages have one or more executable files that they\'d like to +.SH bin +.P +A lot of packages have one or more executable files that they'd like to install into the PATH\. npm makes this pretty easy (in fact, it uses this feature to install the "npm" executable\.) -. .P To use this, supply a \fBbin\fR field in your package\.json which is a map of -command name to local file name\. On install, npm will symlink that file into \fBprefix/bin\fR for global installs, or \fB\|\./node_modules/\.bin/\fR for local +command name to local file name\. On install, npm will symlink that file into +\fBprefix/bin\fR for global installs, or \fB\|\./node_modules/\.bin/\fR for local installs\. -. .P For example, npm has this: -. -.IP "" 4 -. +.P +.RS 2 .nf { "bin" : { "npm" : "\./cli\.js" } } -. .fi -. -.IP "" 0 -. +.RE .P -So, when you install npm, it\'ll create a symlink from the \fBcli\.js\fR script to \fB/usr/local/bin/npm\fR\|\. -. +So, when you install npm, it'll create a symlink from the \fBcli\.js\fR script to +\fB/usr/local/bin/npm\fR\|\. .P If you have a single executable, and its name should be the name of the package, then you can just supply it as a string\. For example: -. -.IP "" 4 -. +.P +.RS 2 .nf { "name": "my\-program" , "version": "1\.2\.5" , "bin": "\./path/to/program" } -. .fi -. -.IP "" 0 -. +.RE .P would be the same as this: -. -.IP "" 4 -. +.P +.RS 2 .nf { "name": "my\-program" , "version": "1\.2\.5" , "bin" : { "my\-program" : "\./path/to/program" } } -. .fi -. -.IP "" 0 -. -.SH "man" -Specify either a single file or an array of filenames to put in place for the \fBman\fR program to find\. -. -.P -If only a single file is provided, then it\'s installed such that it is the +.RE +.SH man +.P +Specify either a single file or an array of filenames to put in place for the +\fBman\fR program to find\. +.P +If only a single file is provided, then it's installed such that it is the result from \fBman \fR, regardless of its actual filename\. For example: -. -.IP "" 4 -. +.P +.RS 2 .nf { "name" : "foo" , "version" : "1\.2\.3" @@ -255,20 +211,15 @@ result from \fBman \fR, regardless of its actual filename\. For exampl , "main" : "foo\.js" , "man" : "\./man/doc\.1" } -. .fi -. -.IP "" 0 -. +.RE .P would link the \fB\|\./man/doc\.1\fR file in such that it is the target for \fBman foo\fR -. .P -If the filename doesn\'t start with the package name, then it\'s prefixed\. +If the filename doesn't start with the package name, then it's prefixed\. So, this: -. -.IP "" 4 -. 
+.P +.RS 2 .nf { "name" : "foo" , "version" : "1\.2\.3" @@ -276,20 +227,15 @@ So, this: , "main" : "foo\.js" , "man" : [ "\./man/foo\.1", "\./man/bar\.1" ] } -. .fi -. -.IP "" 0 -. +.RE .P will create files to do \fBman foo\fR and \fBman foo\-bar\fR\|\. -. .P Man files must end with a number, and optionally a \fB\|\.gz\fR suffix if they are compressed\. The number dictates which man section the file is installed into\. -. -.IP "" 4 -. +.P +.RS 2 .nf { "name" : "foo" , "version" : "1\.2\.3" @@ -297,169 +243,142 @@ compressed\. The number dictates which man section the file is installed into\. , "main" : "foo\.js" , "man" : [ "\./man/foo\.1", "\./man/foo\.2" ] } -. .fi -. -.IP "" 0 -. +.RE .P will create entries for \fBman foo\fR and \fBman 2 foo\fR -. -.SH "directories" +.SH directories +.P The CommonJS Packages \fIhttp://wiki\.commonjs\.org/wiki/Packages/1\.0\fR spec details a few ways that you can indicate the structure of your package using a \fBdirectories\fR -hash\. If you look at npm\'s package\.json \fIhttp://registry\.npmjs\.org/npm/latest\fR, -you\'ll see that it has directories for doc, lib, and man\. -. +object\. If you look at npm's package\.json \fIhttp://registry\.npmjs\.org/npm/latest\fR, +you'll see that it has directories for doc, lib, and man\. .P In the future, this information may be used in other creative ways\. -. -.SS "directories\.lib" +.SS directories\.lib +.P Tell people where the bulk of your library is\. Nothing special is done -with the lib folder in any way, but it\'s useful meta info\. -. -.SS "directories\.bin" -If you specify a "bin" directory, then all the files in that folder will -be used as the "bin" hash\. -. -.P -If you have a "bin" hash already, then this has no effect\. -. -.SS "directories\.man" +with the lib folder in any way, but it's useful meta info\. +.SS directories\.bin +.P +If you specify a \fBbin\fR directory, then all the files in that folder will +be added as children of the \fBbin\fR path\. +.P +If you have a \fBbin\fR path already, then this has no effect\. +.SS directories\.man +.P A folder that is full of man pages\. Sugar to generate a "man" array by walking the folder\. -. -.SS "directories\.doc" +.SS directories\.doc +.P Put markdown files in here\. Eventually, these will be displayed nicely, maybe, someday\. -. -.SS "directories\.example" +.SS directories\.example +.P Put example scripts in here\. Someday, it might be exposed in some clever way\. -. -.SH "repository" +.SH repository +.P Specify the place where your code lives\. This is helpful for people who want to contribute\. If the git repo is on github, then the \fBnpm docs\fR command will be able to find you\. -. .P Do it like this: -. -.IP "" 4 -. +.P +.RS 2 .nf "repository" : { "type" : "git" , "url" : "http://github\.com/npm/npm\.git" } + "repository" : { "type" : "svn" , "url" : "http://v8\.googlecode\.com/svn/trunk/" } -. .fi -. -.IP "" 0 -. +.RE .P The URL should be a publicly available (perhaps read\-only) url that can be handed directly to a VCS program without any modification\. It should not be a url to an -html project page that you put in your browser\. It\'s for computers\. -. -.SH "scripts" -The "scripts" member is an object hash of script commands that are run +html project page that you put in your browser\. It's for computers\. +.SH scripts +.P +The "scripts" property is a dictionary containing script commands that are run at various times in the lifecycle of your package\. The key is the lifecycle event, and the value is the command to run at that point\. 
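+.P
+For instance, a sketch of a "scripts" dictionary (the file names
+\fBserver\.js\fR and \fBtest\.js\fR are purely illustrative):
+.P
+.RS 2
+.nf
+{ "scripts" :
+  { "start" : "node server\.js"
+  , "test" : "node test\.js"
+  }
+}
+.fi
+.RE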
-. .P See npm help 7 \fBnpm\-scripts\fR to find out more about writing package scripts\. -. -.SH "config" -A "config" hash can be used to set configuration -parameters used in package scripts that persist across upgrades\. For -instance, if a package had the following: -. -.IP "" 4 -. +.SH config +.P +A "config" object can be used to set configuration parameters used in package +scripts that persist across upgrades\. For instance, if a package had the +following: +.P +.RS 2 .nf { "name" : "foo" , "config" : { "port" : "8080" } } -. .fi -. -.IP "" 0 -. +.RE .P -and then had a "start" command that then referenced the \fBnpm_package_config_port\fR environment variable, then the user could +and then had a "start" command that then referenced the +\fBnpm_package_config_port\fR environment variable, then the user could override that by doing \fBnpm config set foo:port 8001\fR\|\. -. .P See npm help 7 \fBnpm\-config\fR and npm help 7 \fBnpm\-scripts\fR for more on package configs\. -. -.SH "dependencies" -Dependencies are specified with a simple hash of package name to +.SH dependencies +.P +Dependencies are specified in a simple object that maps a package name to a version range\. The version range is a string which has one or more -space\-separated descriptors\. Dependencies can also be identified with -a tarball or git URL\. -. +space\-separated descriptors\. Dependencies can also be identified with a +tarball or git URL\. .P -\fBPlease do not put test harnesses or transpilers in your \fBdependencies\fR hash\.\fR See \fBdevDependencies\fR, below\. -. +\fBPlease do not put test harnesses or transpilers in your +\fBdependencies\fR object\.\fR See \fBdevDependencies\fR, below\. .P See npm help 7 semver for more details about specifying version ranges\. -. -.IP "\(bu" 4 +.RS 0 +.IP \(bu 2 \fBversion\fR Must match \fBversion\fR exactly -. -.IP "\(bu" 4 +.IP \(bu 2 \fB>version\fR Must be greater than \fBversion\fR -. -.IP "\(bu" 4 +.IP \(bu 2 \fB>=version\fR etc -. -.IP "\(bu" 4 +.IP \(bu 2 \fB=version1 <=version2\fR\|\. -. -.IP "\(bu" 4 +.IP \(bu 2 \fBrange1 || range2\fR Passes if either range1 or range2 are satisfied\. -. -.IP "\(bu" 4 -\fBgit\.\.\.\fR See \'Git URLs as Dependencies\' below -. -.IP "\(bu" 4 -\fBuser/repo\fR See \'GitHub URLs\' below -. -.IP "" 0 -. +.IP \(bu 2 +\fBgit\.\.\.\fR See 'Git URLs as Dependencies' below +.IP \(bu 2 +\fBuser/repo\fR See 'GitHub URLs' below +.IP \(bu 2 +\fBtag\fR A specific version tagged and published as \fBtag\fR See npm help \fBnpm\-tag\fR +.IP \(bu 2 +\fBpath/path/path\fR See Local Paths below + +.RE .P For example, these are all valid: -. -.IP "" 4 -. +.P +.RS 2 .nf { "dependencies" : { "foo" : "1\.0\.0 \- 2\.9999\.9999" @@ -472,45 +391,39 @@ For example, these are all valid: , "elf" : "~1\.2\.3" , "two" : "2\.x" , "thr" : "3\.3\.x" + , "lat" : "latest" + , "dyl" : "file:\.\./dyl" } } -. .fi -. -.IP "" 0 -. -.SS "URLs as Dependencies" +.RE +.SS URLs as Dependencies +.P You may specify a tarball URL in place of a version range\. -. .P This tarball will be downloaded and installed locally to your package at install time\. -. -.SS "Git URLs as Dependencies" +.SS Git URLs as Dependencies +.P Git urls can be of the form: -. -.IP "" 4 -. +.P +.RS 2 .nf git://github\.com/user/project\.git#commit\-ish git+ssh://user@hostname:project\.git#commit\-ish git+ssh://user@hostname/project\.git#commit\-ish git+http://user@hostname/project/blah\.git#commit\-ish git+https://user@hostname/project/blah\.git#commit\-ish -. .fi -. -.IP "" 0 -. 
+.RE
.P
The \fBcommit\-ish\fR can be any tag, sha, or branch which can be supplied as an
argument to \fBgit checkout\fR\|\. The default is \fBmaster\fR\|\.
-.
-.SH "GitHub URLs"
+.SH GitHub URLs
+.P
As of version 1\.1\.65, you can refer to GitHub urls as just "foo": "user/foo\-project"\. For example:
-.
-.IP "" 4
-.
+.P
+.RS 2
.nf
{
  "name": "foo",
@@ -519,34 +432,61 @@ As of version 1\.1\.65, you can refer to GitHub urls as just "foo": "user/foo\-p
   "express": "visionmedia/express"
  }
}
-.
.fi
-.
-.IP "" 0
-.
-.SH "devDependencies"
+.RE
+.SH Local Paths
+.P
+As of version 2\.0\.0 you can provide a path to a local directory that contains a
+package\. Local paths can be saved using \fBnpm install \-\-save\fR, using any of
+these forms:
+.P
+.RS 2
+.nf
+\|\.\./foo/bar
+~/foo/bar
+\|\./foo/bar
+/foo/bar
+.fi
+.RE
+.P
+in which case they will be normalized to a relative path and added to your
+\fBpackage\.json\fR\|\. For example:
+.P
+.RS 2
+.nf
+{
+  "name": "baz",
+  "dependencies": {
+    "bar": "file:\.\./foo/bar"
+  }
+}
+.fi
+.RE
+.P
+This feature is helpful for local offline development and creating
+tests that require npm installing where you don't want to hit an
+external server, but should not be used when publishing packages
+to the public registry\.
+.SH devDependencies
+.P
If someone is planning on downloading and using your module in their
-program, then they probably don\'t want or need to download and build
+program, then they probably don't want or need to download and build
the external test or documentation framework that you use\.
-.
.P
-In this case, it\'s best to list these additional items in a \fBdevDependencies\fR hash\.
-.
+In this case, it's best to map these additional items in a \fBdevDependencies\fR
+object\.
.P
These things will be installed when doing \fBnpm link\fR or \fBnpm install\fR
from the root of a package, and can be managed like any other npm
configuration param\. See npm help 7 \fBnpm\-config\fR for more on the topic\.
-.
.P
For build steps that are not platform\-specific, such as compiling
CoffeeScript or other languages to JavaScript, use the \fBprepublish\fR
script to do this, and make the required package a devDependency\.
-.
.P
For example:
-.
-.IP "" 4
-.
+.P
+.RS 2
.nf
{ "name": "ethopia\-waza",
  "description": "a delightfully fruity coffee varietal",
@@ -559,28 +499,23 @@ For example:
 },
 "main": "lib/waza\.js"
}
-.
.fi
-.
-.IP "" 0
-.
+.RE
.P
The \fBprepublish\fR script will be run before publishing, so that users
can consume the functionality without requiring them to compile it
-themselves\. In dev mode (ie, locally running \fBnpm install\fR), it\'ll
+themselves\. In dev mode (ie, locally running \fBnpm install\fR), it'll
run this script as well, so that you can test it easily\.
-.
-.SH "peerDependencies"
+.SH peerDependencies
+.P
In some cases, you want to express the compatibility of your package
with a host tool or library, while not necessarily doing a \fBrequire\fR
of this host\. This is usually referred to as a \fIplugin\fR\|\. Notably, your
module may be exposing a specific interface, expected and specified by the
host documentation\.
-.
.P
For example:
-.
-.IP "" 4
-.
+.P
+.RS 2
.nf
{
  "name": "tea\-latte",
@@ -589,283 +524,223 @@ For example:
   "tea": "2\.x"
  }
}
-.
.fi
-.
-.IP "" 0
-.
+.RE
.P
This ensures your package \fBtea\-latte\fR can be installed \fIalong\fR with the
second major version of the host package \fBtea\fR only\. The host package is
automatically installed if needed\.
\fBnpm install tea\-latte\fR could possibly yield the following dependency graph: -. -.IP "" 4 -. +.P +.RS 2 .nf ├── tea\-latte@1\.3\.5 └── tea@2\.2\.0 -. .fi -. -.IP "" 0 -. +.RE .P Trying to install another plugin with a conflicting requirement will cause an error\. For this reason, make sure your plugin requirement is as broad as possible, and not to lock it down to specific patch versions\. -. .P Assuming the host complies with semver \fIhttp://semver\.org/\fR, only changes in -the host package\'s major version will break your plugin\. Thus, if you\'ve worked +the host package's major version will break your plugin\. Thus, if you've worked with every 1\.x version of the host package, use \fB"^1\.0"\fR or \fB"1\.x"\fR to express this\. If you depend on features introduced in 1\.5\.2, use \fB">= 1\.5\.2 < 2"\fR\|\. -. -.SH "bundledDependencies" +.SH bundledDependencies +.P Array of package names that will be bundled when publishing the package\. -. .P If this is spelled \fB"bundleDependencies"\fR, then that is also honorable\. -. -.SH "optionalDependencies" -If a dependency can be used, but you would like npm to proceed if it -cannot be found or fails to install, then you may put it in the \fBoptionalDependencies\fR hash\. This is a map of package name to version -or url, just like the \fBdependencies\fR hash\. The difference is that -failure is tolerated\. -. -.P -It is still your program\'s responsibility to handle the lack of the +.SH optionalDependencies +.P +If a dependency can be used, but you would like npm to proceed if it cannot be +found or fails to install, then you may put it in the \fBoptionalDependencies\fR +object\. This is a map of package name to version or url, just like the +\fBdependencies\fR object\. The difference is that build failures do not cause +installation to fail\. +.P +It is still your program's responsibility to handle the lack of the dependency\. For example, something like this: -. -.IP "" 4 -. +.P +.RS 2 .nf try { - var foo = require(\'foo\') - var fooVersion = require(\'foo/package\.json\')\.version + var foo = require('foo') + var fooVersion = require('foo/package\.json')\.version } catch (er) { foo = null } if ( notGoodFooVersion(fooVersion) ) { foo = null } + // \.\. then later in your program \.\. + if (foo) { foo\.doFooThings() } -. .fi -. -.IP "" 0 -. +.RE +.P +Entries in \fBoptionalDependencies\fR will override entries of the same name in +\fBdependencies\fR, so it's usually best to only put in one place\. +.SH engines .P -Entries in \fBoptionalDependencies\fR will override entries of the same name in \fBdependencies\fR, so it\'s usually best to only put in one place\. -. -.SH "engines" You can specify the version of node that your stuff works on: -. -.IP "" 4 -. +.P +.RS 2 .nf { "engines" : { "node" : ">=0\.10\.3 <0\.12" } } -. .fi -. -.IP "" 0 -. +.RE .P -And, like with dependencies, if you don\'t specify the version (or if you +And, like with dependencies, if you don't specify the version (or if you specify "*" as the version), then any version of node will do\. -. .P If you specify an "engines" field, then npm will require that "node" be somewhere on that list\. If "engines" is omitted, then npm will just assume that it works on node\. -. .P You can also use the "engines" field to specify which versions of npm are capable of properly installing your program\. For example: -. -.IP "" 4 -. +.P +.RS 2 .nf { "engines" : { "npm" : "~1\.0\.20" } } -. .fi -. -.IP "" 0 -. 
+.RE .P Note that, unless the user has set the \fBengine\-strict\fR config flag, this field is advisory only\. -. -.SH "engineStrict" +.SH engineStrict +.P If you are sure that your module will \fIdefinitely not\fR run properly on -versions of Node/npm other than those specified in the \fBengines\fR hash, +versions of Node/npm other than those specified in the \fBengines\fR object, then you can set \fB"engineStrict": true\fR in your package\.json file\. -This will override the user\'s \fBengine\-strict\fR config setting\. -. +This will override the user's \fBengine\-strict\fR config setting\. .P Please do not do this unless you are really very very sure\. If your -engines hash is something overly restrictive, you can quite easily and +engines object is something overly restrictive, you can quite easily and inadvertently lock yourself into obscurity and prevent your users from updating to new versions of Node\. Consider this choice carefully\. If people abuse it, it will be removed in a future version of npm\. -. -.SH "os" +.SH os +.P You can specify which operating systems your module will run on: -. -.IP "" 4 -. +.P +.RS 2 .nf "os" : [ "darwin", "linux" ] -. .fi -. -.IP "" 0 -. +.RE .P You can also blacklist instead of whitelist operating systems, -just prepend the blacklisted os with a \'!\': -. -.IP "" 4 -. +just prepend the blacklisted os with a '!': +.P +.RS 2 .nf "os" : [ "!win32" ] -. .fi -. -.IP "" 0 -. +.RE .P The host operating system is determined by \fBprocess\.platform\fR -. .P -It is allowed to both blacklist, and whitelist, although there isn\'t any +It is allowed to both blacklist, and whitelist, although there isn't any good reason to do this\. -. -.SH "cpu" +.SH cpu +.P If your code only runs on certain cpu architectures, you can specify which ones\. -. -.IP "" 4 -. +.P +.RS 2 .nf "cpu" : [ "x64", "ia32" ] -. .fi -. -.IP "" 0 -. +.RE .P Like the \fBos\fR option, you can also blacklist architectures: -. -.IP "" 4 -. +.P +.RS 2 .nf "cpu" : [ "!arm", "!mips" ] -. .fi -. -.IP "" 0 -. +.RE .P The host architecture is determined by \fBprocess\.arch\fR -. -.SH "preferGlobal" +.SH preferGlobal +.P If your package is primarily a command\-line application that should be installed globally, then set this value to \fBtrue\fR to provide a warning if it is installed locally\. -. .P -It doesn\'t actually prevent users from installing it locally, but it -does help prevent some confusion if it doesn\'t work as expected\. -. -.SH "private" +It doesn't actually prevent users from installing it locally, but it +does help prevent some confusion if it doesn't work as expected\. +.SH private +.P If you set \fB"private": true\fR in your package\.json, then npm will refuse to publish it\. -. -.P -This is a way to prevent accidental publication of private repositories\. -If you would like to ensure that a given package is only ever published -to a specific registry (for example, an internal registry), -then use the \fBpublishConfig\fR hash described below -to override the \fBregistry\fR config param at publish\-time\. -. -.SH "publishConfig" -This is a set of config values that will be used at publish\-time\. It\'s +.P +This is a way to prevent accidental publication of private repositories\. If +you would like to ensure that a given package is only ever published to a +specific registry (for example, an internal registry), then use the +\fBpublishConfig\fR dictionary described below to override the \fBregistry\fR config +param at publish\-time\. 
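+.P
+For example, a sketch of pinning publishes to an internal registry (the
+url is purely illustrative):
+.P
+.RS 2
+.nf
+{ "publishConfig" : { "registry" : "http://registry\.internal\.example\.com/" } }
+.fi
+.RE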
+.SH publishConfig +.P +This is a set of config values that will be used at publish\-time\. It's especially handy if you want to set the tag or registry, so that you can ensure that a given package is not tagged with "latest" or published to the global public registry by default\. -. .P Any config values can be overridden, but of course only "tag" and "registry" probably matter for the purposes of publishing\. -. .P See npm help 7 \fBnpm\-config\fR to see the list of config options that can be overridden\. -. -.SH "DEFAULT VALUES" +.SH DEFAULT VALUES +.P npm will default some values based on package contents\. -. -.IP "\(bu" 4 +.RS 0 +.IP \(bu 2 \fB"scripts": {"start": "node server\.js"}\fR -. -.IP If there is a \fBserver\.js\fR file in the root of your package, then npm will default the \fBstart\fR command to \fBnode server\.js\fR\|\. -. -.IP "\(bu" 4 +.IP \(bu 2 \fB"scripts":{"preinstall": "node\-gyp rebuild"}\fR -. -.IP If there is a \fBbinding\.gyp\fR file in the root of your package, npm will default the \fBpreinstall\fR command to compile using node\-gyp\. -. -.IP "\(bu" 4 +.IP \(bu 2 \fB"contributors": [\.\.\.]\fR -. -.IP If there is an \fBAUTHORS\fR file in the root of your package, npm will treat each line as a \fBName (url)\fR format, where email and url are optional\. Lines which start with a \fB#\fR or are blank, will be ignored\. -. -.IP "" 0 -. -.SH "SEE ALSO" -. -.IP "\(bu" 4 + +.RE +.SH SEE ALSO +.RS 0 +.IP \(bu 2 npm help 7 semver -. -.IP "\(bu" 4 +.IP \(bu 2 npm help init -. -.IP "\(bu" 4 +.IP \(bu 2 npm help version -. -.IP "\(bu" 4 +.IP \(bu 2 npm help config -. -.IP "\(bu" 4 +.IP \(bu 2 npm help 7 config -. -.IP "\(bu" 4 +.IP \(bu 2 npm help help -. -.IP "\(bu" 4 +.IP \(bu 2 npm help 7 faq -. -.IP "\(bu" 4 +.IP \(bu 2 npm help install -. -.IP "\(bu" 4 +.IP \(bu 2 npm help publish -. -.IP "\(bu" 4 +.IP \(bu 2 npm help rm -. -.IP "" 0 + +.RE diff --git a/deps/npm/man/man7/index.7 b/deps/npm/man/man7/index.7 deleted file mode 100644 index 911d8efab01..00000000000 --- a/deps/npm/man/man7/index.7 +++ /dev/null @@ -1,298 +0,0 @@ -.\" Generated with Ronnjs 0.4.0 -.\" http://github.com/kapouer/ronnjs -. -.TH "NPM\-INDEX" "7" "July 2013" "" "" -. -.SH "NAME" -\fBnpm-index\fR \-\- Index of all npm documentation -. -npm help .SH "README" -node package manager -. -npm help .SH "npm" -node package manager -. -npm help .SH "npm\-adduser" -Add a registry user account -. -npm help .SH "npm\-bin" -Display npm bin folder -. -npm help .SH "npm\-bugs" -Bugs for a package in a web browser maybe -. -npm help .SH "npm\-build" -Build a package -. -npm help .SH "npm\-bundle" -REMOVED -. -npm help .SH "npm\-cache" -Manipulates packages cache -. -npm help .SH "npm\-completion" -Tab Completion for npm -. -npm help .SH "npm\-config" -Manage the npm configuration files -. -npm help .SH "npm\-dedupe" -Reduce duplication -. -npm help .SH "npm\-deprecate" -Deprecate a version of a package -. -npm help .SH "npm\-docs" -Docs for a package in a web browser maybe -. -npm help .SH "npm\-edit" -Edit an installed package -. -npm help .SH "npm\-explore" -Browse an installed package -. -npm help .SH "npm\-help\-search" -Search npm help documentation -. -npm help .SH "npm\-help" -Get help on npm -. -npm help .SH "npm\-init" -Interactively create a package\.json file -. -npm help .SH "npm\-install" -Install a package -. -npm help .SH "npm\-link" -Symlink a package folder -. -npm help .SH "npm\-ls" -List installed packages -. -npm help .SH "npm\-outdated" -Check for outdated packages -. 
-npm help .SH "npm\-owner" -Manage package owners -. -npm help .SH "npm\-pack" -Create a tarball from a package -. -npm help .SH "npm\-prefix" -Display prefix -. -npm help .SH "npm\-prune" -Remove extraneous packages -. -npm help .SH "npm\-publish" -Publish a package -. -npm help .SH "npm\-rebuild" -Rebuild a package -. -npm help .SH "npm\-restart" -Start a package -. -npm help .SH "npm\-rm" -Remove a package -. -npm help .SH "npm\-root" -Display npm root -. -npm help .SH "npm\-run\-script" -Run arbitrary package scripts -. -npm help .SH "npm\-search" -Search for packages -. -npm help .SH "npm\-shrinkwrap" -Lock down dependency versions -. -npm help .SH "npm\-star" -Mark your favorite packages -. -npm help .SH "npm\-stars" -View packages marked as favorites -. -npm help .SH "npm\-start" -Start a package -. -npm help .SH "npm\-stop" -Stop a package -. -npm help .SH "npm\-submodule" -Add a package as a git submodule -. -npm help .SH "npm\-tag" -Tag a published version -. -npm help .SH "npm\-test" -Test a package -. -npm help .SH "npm\-uninstall" -Remove a package -. -npm help .SH "npm\-unpublish" -Remove a package from the registry -. -npm help .SH "npm\-update" -Update a package -. -npm help .SH "npm\-version" -Bump a package version -. -npm help .SH "npm\-view" -View registry info -. -npm help .SH "npm\-whoami" -Display npm username -. -npm apihelp .SH "npm" -node package manager -. -npm apihelp .SH "npm\-bin" -Display npm bin folder -. -npm apihelp .SH "npm\-bugs" -Bugs for a package in a web browser maybe -. -npm apihelp .SH "npm\-commands" -npm commands -. -npm apihelp .SH "npm\-config" -Manage the npm configuration files -. -npm apihelp .SH "npm\-deprecate" -Deprecate a version of a package -. -npm apihelp .SH "npm\-docs" -Docs for a package in a web browser maybe -. -npm apihelp .SH "npm\-edit" -Edit an installed package -. -npm apihelp .SH "npm\-explore" -Browse an installed package -. -npm apihelp .SH "npm\-help\-search" -Search the help pages -. -npm apihelp .SH "npm\-init" -Interactively create a package\.json file -. -npm apihelp .SH "npm\-install" -install a package programmatically -. -npm apihelp .SH "npm\-link" -Symlink a package folder -. -npm apihelp .SH "npm\-load" -Load config settings -. -npm apihelp .SH "npm\-ls" -List installed packages -. -npm apihelp .SH "npm\-outdated" -Check for outdated packages -. -npm apihelp .SH "npm\-owner" -Manage package owners -. -npm apihelp .SH "npm\-pack" -Create a tarball from a package -. -npm apihelp .SH "npm\-prefix" -Display prefix -. -npm apihelp .SH "npm\-prune" -Remove extraneous packages -. -npm apihelp .SH "npm\-publish" -Publish a package -. -npm apihelp .SH "npm\-rebuild" -Rebuild a package -. -npm apihelp .SH "npm\-restart" -Start a package -. -npm apihelp .SH "npm\-root" -Display npm root -. -npm apihelp .SH "npm\-run\-script" -Run arbitrary package scripts -. -npm apihelp .SH "npm\-search" -Search for packages -. -npm apihelp .SH "npm\-shrinkwrap" -programmatically generate package shrinkwrap file -. -npm apihelp .SH "npm\-start" -Start a package -. -npm apihelp .SH "npm\-stop" -Stop a package -. -npm apihelp .SH "npm\-submodule" -Add a package as a git submodule -. -npm apihelp .SH "npm\-tag" -Tag a published version -. -npm apihelp .SH "npm\-test" -Test a package -. -npm apihelp .SH "npm\-uninstall" -uninstall a package programmatically -. -npm apihelp .SH "npm\-unpublish" -Remove a package from the registry -. -npm apihelp .SH "npm\-update" -Update a package -. -npm apihelp .SH "npm\-version" -Bump a package version -. 
-npm apihelp .SH "npm\-view" -View registry info -. -npm apihelp .SH "npm\-whoami" -Display npm username -. -npm help .SH "npm\-folders" -Folder Structures Used by npm -. -npm help .SH "npmrc" -The npm config files -. -npm help .SH "package\.json" -Specifics of npm\'s package\.json handling -. -npm help .SH "npm\-coding\-style" -npm\'s "funny" coding style -. -npm help .SH "npm\-config" -More than you probably want to know about npm configuration -. -npm help .SH "npm\-developers" -Developer Guide -. -npm help .SH "npm\-disputes" -Handling Module Name Disputes -. -npm help .SH "npm\-faq" -Frequently Asked Questions -. -npm help .SH "npm\-registry" -The JavaScript Package Registry -. -npm help .SH "npm\-scripts" -How npm handles the "scripts" field -. -npm help .SH "removing\-npm" -Cleaning the Slate -. -npm help .SH "semver" -The semantic versioner for npm diff --git a/deps/npm/man/man7/npm-coding-style.7 b/deps/npm/man/man7/npm-coding-style.7 index 385a3908728..0dc15318ab4 100644 --- a/deps/npm/man/man7/npm-coding-style.7 +++ b/deps/npm/man/man7/npm-coding-style.7 @@ -1,121 +1,92 @@ -.\" Generated with Ronnjs 0.3.8 -.\" http://github.com/kapouer/ronnjs/ -. -.TH "NPM\-CODING\-STYLE" "7" "September 2014" "" "" -. +.TH "NPM\-CODING\-STYLE" "7" "October 2014" "" "" .SH "NAME" -\fBnpm-coding-style\fR \-\- npm\'s "funny" coding style -. -.SH "DESCRIPTION" -npm\'s coding style is a bit unconventional\. It is not different for -difference\'s sake, but rather a carefully crafted style that is +\fBnpm-coding-style\fR \- npm's "funny" coding style +.SH DESCRIPTION +.P +npm's coding style is a bit unconventional\. It is not different for +difference's sake, but rather a carefully crafted style that is designed to reduce visual clutter and make bugs more apparent\. -. .P If you want to contribute to npm (which is very encouraged), you should -make your code conform to npm\'s style\. -. +make your code conform to npm's style\. +.P +Note: this concerns npm's code not the specific packages at npmjs\.org +.SH Line Length .P -Note: this concerns npm\'s code not the specific packages at npmjs\.org -. -.SH "Line Length" -Keep lines shorter than 80 characters\. It\'s better for lines to be +Keep lines shorter than 80 characters\. It's better for lines to be too short than to be too long\. Break up long lists, objects, and other statements onto multiple lines\. -. -.SH "Indentation" +.SH Indentation +.P Two\-spaces\. Tabs are better, but they look like hell in web browsers -(and on github), and node uses 2 spaces, so that\'s that\. -. +(and on github), and node uses 2 spaces, so that's that\. .P Configure your editor appropriately\. -. -.SH "Curly braces" +.SH Curly braces +.P Curly braces belong on the same line as the thing that necessitates them\. -. .P Bad: -. -.IP "" 4 -. +.P +.RS 2 .nf function () { -. .fi -. -.IP "" 0 -. +.RE .P Good: -. -.IP "" 4 -. +.P +.RS 2 .nf function () { -. .fi -. -.IP "" 0 -. +.RE .P -If a block needs to wrap to the next line, use a curly brace\. Don\'t -use it if it doesn\'t\. -. +If a block needs to wrap to the next line, use a curly brace\. Don't +use it if it doesn't\. .P Bad: -. -.IP "" 4 -. +.P +.RS 2 .nf if (foo) { bar() } while (foo) bar() -. .fi -. -.IP "" 0 -. +.RE .P Good: -. -.IP "" 4 -. +.P +.RS 2 .nf if (foo) bar() while (foo) { bar() } -. .fi -. -.IP "" 0 -. -.SH "Semicolons" -Don\'t use them except in four situations: -. -.IP "\(bu" 4 -\fBfor (;;)\fR loops\. They\'re actually required\. -. 
-.IP "\(bu" 4 -null loops like: \fBwhile (something) ;\fR (But you\'d better have a good +.RE +.SH Semicolons +.P +Don't use them except in four situations: +.RS 0 +.IP \(bu 2 +\fBfor (;;)\fR loops\. They're actually required\. +.IP \(bu 2 +null loops like: \fBwhile (something) ;\fR (But you'd better have a good reason for doing that\.) -. -.IP "\(bu" 4 +.IP \(bu 2 \fBcase "foo": doSomething(); break\fR -. -.IP "\(bu" 4 +.IP \(bu 2 In front of a leading \fB(\fR or \fB[\fR at the start of the line\. This prevents the expression from being interpreted as a function call or property access, respectively\. -. -.IP "" 0 -. + +.RE .P Some examples of good semicolon usage: -. -.IP "" 4 -. +.P +.RS 2 .nf ;(x || y)\.doSomething() ;[a, b, c]\.forEach(doSomething) @@ -127,23 +98,19 @@ for (var i = 0; i < 10; i ++) { } end() } -. .fi -. -.IP "" 0 -. +.RE .P Note that starting lines with \fB\-\fR and \fB+\fR also should be prefixed with a semicolon, but this is much less common\. -. -.SH "Comma First" +.SH Comma First +.P If there is a list of things separated by commas, and it wraps across multiple lines, put the comma at the start of the next line, directly below the token that starts the list\. Put the final token in the list on a line by itself\. For example: -. -.IP "" 4 -. +.P +.RS 2 .nf var magicWords = [ "abracadabra" , "gesundheit" @@ -156,99 +123,82 @@ var magicWords = [ "abracadabra" , b = "abc" , etc , somethingElse -. .fi -. -.IP "" 0 -. -.SH "Whitespace" +.RE +.SH Whitespace +.P Put a single space in front of ( for anything other than a function call\. Also use a single space wherever it makes things more readable\. -. .P -Don\'t leave trailing whitespace at the end of lines\. Don\'t indent empty -lines\. Don\'t use more spaces than are helpful\. -. -.SH "Functions" +Don't leave trailing whitespace at the end of lines\. Don't indent empty +lines\. Don't use more spaces than are helpful\. +.SH Functions +.P Use named functions\. They make stack traces a lot easier to read\. -. -.SH "Callbacks, Sync/async Style" +.SH Callbacks, Sync/async Style +.P Use the asynchronous/non\-blocking versions of things as much as possible\. It might make more sense for npm to use the synchronous fs APIs, but this way, the fs and http and child process stuff all uses the same callback\-passing methodology\. -. .P The callback should always be the last argument in the list\. Its first argument is the Error or null\. -. .P -Be very careful never to ever ever throw anything\. It\'s worse than useless\. +Be very careful never to ever ever throw anything\. It's worse than useless\. Just send the error message back as the first argument to the callback\. -. -.SH "Errors" -Always create a new Error object with your message\. Don\'t just return a +.SH Errors +.P +Always create a new Error object with your message\. Don't just return a string message to the callback\. Stack traces are handy\. -. -.SH "Logging" +.SH Logging +.P Logging is done using the npmlog \fIhttps://github\.com/npm/npmlog\fR utility\. -. .P Please clean up logs when they are no longer helpful\. In particular, logging the same object over and over again is not helpful\. Logs should -report what\'s happening so that it\'s easier to track down where a fault +report what's happening so that it's easier to track down where a fault occurs\. -. .P Use appropriate log levels\. See npm help 7 \fBnpm\-config\fR and search for "loglevel"\. -. -.SH "Case, naming, etc\." +.SH Case, naming, etc\. 
+.P Use \fBlowerCamelCase\fR for multiword identifiers when they refer to objects, -functions, methods, members, or anything not specified in this section\. -. +functions, methods, properties, or anything not specified in this section\. .P -Use \fBUpperCamelCase\fR for class names (things that you\'d pass to "new")\. -. +Use \fBUpperCamelCase\fR for class names (things that you'd pass to "new")\. .P Use \fBall\-lower\-hyphen\-css\-case\fR for multiword filenames and config keys\. -. .P Use named functions\. They make stack traces easier to follow\. -. .P Use \fBCAPS_SNAKE_CASE\fR for constants, things that should never change and are rarely used\. -. .P Use a single uppercase letter for function names where the function would normally be anonymous, but needs to call itself recursively\. It -makes it clear that it\'s a "throwaway" function\. -. -.SH "null, undefined, false, 0" -Boolean variables and functions should always be either \fBtrue\fR or \fBfalse\fR\|\. Don\'t set it to 0 unless it\'s supposed to be a number\. -. +makes it clear that it's a "throwaway" function\. +.SH null, undefined, false, 0 +.P +Boolean variables and functions should always be either \fBtrue\fR or +\fBfalse\fR\|\. Don't set it to 0 unless it's supposed to be a number\. .P When something is intentionally missing or removed, set it to \fBnull\fR\|\. -. .P -Don\'t set things to \fBundefined\fR\|\. Reserve that value to mean "not yet +Don't set things to \fBundefined\fR\|\. Reserve that value to mean "not yet set to anything\." -. .P Boolean objects are verboten\. -. -.SH "SEE ALSO" -. -.IP "\(bu" 4 +.SH SEE ALSO +.RS 0 +.IP \(bu 2 npm help 7 developers -. -.IP "\(bu" 4 +.IP \(bu 2 npm help 7 faq -. -.IP "\(bu" 4 +.IP \(bu 2 npm help npm -. -.IP "" 0 + +.RE diff --git a/deps/npm/man/man7/npm-config.7 b/deps/npm/man/man7/npm-config.7 index 7bdf1c00598..e3b4c5c6f63 100644 --- a/deps/npm/man/man7/npm-config.7 +++ b/deps/npm/man/man7/npm-config.7 @@ -1,1520 +1,1198 @@ -.\" Generated with Ronnjs 0.3.8 -.\" http://github.com/kapouer/ronnjs/ -. -.TH "NPM\-CONFIG" "7" "September 2014" "" "" -. +.TH "NPM\-CONFIG" "7" "October 2014" "" "" .SH "NAME" -\fBnpm-config\fR \-\- More than you probably want to know about npm configuration -. -.SH "DESCRIPTION" +\fBnpm-config\fR \- More than you probably want to know about npm configuration +.SH DESCRIPTION +.P npm gets its configuration values from 6 sources, in this priority: -. -.SS "Command Line Flags" +.SS Command Line Flags +.P Putting \fB\-\-foo bar\fR on the command line sets the \fBfoo\fR configuration parameter to \fB"bar"\fR\|\. A \fB\-\-\fR argument tells the cli parser to stop reading flags\. A \fB\-\-flag\fR parameter that is at the \fIend\fR of the command will be given the value of \fBtrue\fR\|\. -. -.SS "Environment Variables" +.SS Environment Variables +.P Any environment variables that start with \fBnpm_config_\fR will be -interpreted as a configuration parameter\. For example, putting \fBnpm_config_foo=bar\fR in your environment will set the \fBfoo\fR +interpreted as a configuration parameter\. For example, putting +\fBnpm_config_foo=bar\fR in your environment will set the \fBfoo\fR configuration parameter to \fBbar\fR\|\. Any environment configurations that are not given a value will be given the value of \fBtrue\fR\|\. Config values are case\-insensitive, so \fBNPM_CONFIG_FOO=bar\fR will work the same\. -. -.SS "npmrc Files" +.SS npmrc Files +.P The four relevant files are: -. 
-.IP "\(bu" 4 +.RS 0 +.IP \(bu 2 per\-project config file (/path/to/my/project/\.npmrc) -. -.IP "\(bu" 4 +.IP \(bu 2 per\-user config file (~/\.npmrc) -. -.IP "\(bu" 4 +.IP \(bu 2 global config file ($PREFIX/npmrc) -. -.IP "\(bu" 4 +.IP \(bu 2 npm builtin config file (/path/to/npm/npmrc) -. -.IP "" 0 -. + +.RE .P See npm help 5 npmrc for more details\. -. -.SS "Default Configs" +.SS Default Configs +.P A set of configuration parameters that are internal to npm, and are defaults if nothing else is specified\. -. -.SH "Shorthands and Other CLI Niceties" +.SH Shorthands and Other CLI Niceties +.P The following shorthands are parsed on the command\-line: -. -.IP "\(bu" 4 +.RS 0 +.IP \(bu 2 \fB\-v\fR: \fB\-\-version\fR -. -.IP "\(bu" 4 +.IP \(bu 2 \fB\-h\fR, \fB\-?\fR, \fB\-\-help\fR, \fB\-H\fR: \fB\-\-usage\fR -. -.IP "\(bu" 4 +.IP \(bu 2 \fB\-s\fR, \fB\-\-silent\fR: \fB\-\-loglevel silent\fR -. -.IP "\(bu" 4 +.IP \(bu 2 \fB\-q\fR, \fB\-\-quiet\fR: \fB\-\-loglevel warn\fR -. -.IP "\(bu" 4 +.IP \(bu 2 \fB\-d\fR: \fB\-\-loglevel info\fR -. -.IP "\(bu" 4 +.IP \(bu 2 \fB\-dd\fR, \fB\-\-verbose\fR: \fB\-\-loglevel verbose\fR -. -.IP "\(bu" 4 +.IP \(bu 2 \fB\-ddd\fR: \fB\-\-loglevel silly\fR -. -.IP "\(bu" 4 +.IP \(bu 2 \fB\-g\fR: \fB\-\-global\fR -. -.IP "\(bu" 4 +.IP \(bu 2 +\fB\-C\fR: \fB\-\-prefix\fR +.IP \(bu 2 \fB\-l\fR: \fB\-\-long\fR -. -.IP "\(bu" 4 +.IP \(bu 2 \fB\-m\fR: \fB\-\-message\fR -. -.IP "\(bu" 4 +.IP \(bu 2 \fB\-p\fR, \fB\-\-porcelain\fR: \fB\-\-parseable\fR -. -.IP "\(bu" 4 +.IP \(bu 2 \fB\-reg\fR: \fB\-\-registry\fR -. -.IP "\(bu" 4 +.IP \(bu 2 \fB\-v\fR: \fB\-\-version\fR -. -.IP "\(bu" 4 +.IP \(bu 2 \fB\-f\fR: \fB\-\-force\fR -. -.IP "\(bu" 4 +.IP \(bu 2 \fB\-desc\fR: \fB\-\-description\fR -. -.IP "\(bu" 4 +.IP \(bu 2 \fB\-S\fR: \fB\-\-save\fR -. -.IP "\(bu" 4 +.IP \(bu 2 \fB\-D\fR: \fB\-\-save\-dev\fR -. -.IP "\(bu" 4 +.IP \(bu 2 \fB\-O\fR: \fB\-\-save\-optional\fR -. -.IP "\(bu" 4 +.IP \(bu 2 \fB\-B\fR: \fB\-\-save\-bundle\fR -. -.IP "\(bu" 4 +.IP \(bu 2 \fB\-E\fR: \fB\-\-save\-exact\fR -. -.IP "\(bu" 4 +.IP \(bu 2 \fB\-y\fR: \fB\-\-yes\fR -. -.IP "\(bu" 4 +.IP \(bu 2 \fB\-n\fR: \fB\-\-yes false\fR -. -.IP "\(bu" 4 +.IP \(bu 2 \fBll\fR and \fBla\fR commands: \fBls \-\-long\fR -. -.IP "" 0 -. + +.RE .P If the specified configuration param resolves unambiguously to a known configuration parameter, then it is expanded to that configuration parameter\. For example: -. -.IP "" 4 -. +.P +.RS 2 .nf npm ls \-\-par # same as: npm ls \-\-parseable -. .fi -. -.IP "" 0 -. +.RE .P If multiple single\-character shorthands are strung together, and the resulting combination is unambiguously not some other configuration param, then it is expanded to its various component pieces\. For example: -. -.IP "" 4 -. +.P +.RS 2 .nf npm ls \-gpld # same as: npm ls \-\-global \-\-parseable \-\-long \-\-loglevel info -. .fi -. -.IP "" 0 -. -.SH "Per\-Package Config Settings" +.RE +.SH Per\-Package Config Settings +.P When running scripts (see npm help 7 \fBnpm\-scripts\fR) the package\.json "config" -keys are overwritten in the environment if there is a config param of \fB[@]:\fR\|\. For example, if the package\.json has +keys are overwritten in the environment if there is a config param of +\fB[@]:\fR\|\. For example, if the package\.json has this: -. -.IP "" 4 -. +.P +.RS 2 .nf { "name" : "foo" , "config" : { "port" : "8080" } , "scripts" : { "start" : "node server\.js" } } -. .fi -. -.IP "" 0 -. +.RE .P and the server\.js is this: -. -.IP "" 4 -. 
+.P +.RS 2 .nf http\.createServer(\.\.\.)\.listen(process\.env\.npm_package_config_port) -. .fi -. -.IP "" 0 -. +.RE .P then the user could change the behavior by doing: -. -.IP "" 4 -. +.P +.RS 2 .nf npm config set foo:port 80 -. .fi -. -.IP "" 0 -. +.RE .P See npm help 5 package\.json for more information\. -. -.SH "Config Settings" -. -.SS "always\-auth" -. -.IP "\(bu" 4 +.SH Config Settings +.SS always\-auth +.RS 0 +.IP \(bu 2 Default: false -. -.IP "\(bu" 4 +.IP \(bu 2 Type: Boolean -. -.IP "" 0 -. + +.RE .P Force npm to always require authentication when accessing the registry, even for \fBGET\fR requests\. -. -.SS "bin\-links" -. -.IP "\(bu" 4 +.SS bin\-links +.RS 0 +.IP \(bu 2 Default: \fBtrue\fR -. -.IP "\(bu" 4 +.IP \(bu 2 Type: Boolean -. -.IP "" 0 -. + +.RE .P Tells npm to create symlinks (or \fB\|\.cmd\fR shims on Windows) for package executables\. -. .P Set to false to have it not do this\. This can be used to work around -the fact that some file systems don\'t support symlinks, even on +the fact that some file systems don't support symlinks, even on ostensibly Unix systems\. -. -.SS "browser" -. -.IP "\(bu" 4 +.SS browser +.RS 0 +.IP \(bu 2 Default: OS X: \fB"open"\fR, Windows: \fB"start"\fR, Others: \fB"xdg\-open"\fR -. -.IP "\(bu" 4 +.IP \(bu 2 Type: String -. -.IP "" 0 -. + +.RE .P The browser that is called by the \fBnpm docs\fR command to open websites\. -. -.SS "ca" -. -.IP "\(bu" 4 +.SS ca +.RS 0 +.IP \(bu 2 Default: The npm CA certificate -. -.IP "\(bu" 4 +.IP \(bu 2 Type: String or null -. -.IP "" 0 -. + +.RE .P The Certificate Authority signing certificate that is trusted for SSL connections to the registry\. -. .P Set to \fBnull\fR to only allow "known" registrars, or to a specific CA cert to trust only that specific signing authority\. -. .P See also the \fBstrict\-ssl\fR config\. -. -.SS "cafile" -. -.IP "\(bu" 4 +.SS cafile +.RS 0 +.IP \(bu 2 Default: \fBnull\fR -. -.IP "\(bu" 4 +.IP \(bu 2 Type: path -. -.IP "" 0 -. + +.RE .P A path to a file containing one or multiple Certificate Authority signing -certificates\. Similar to the \fBca\fR setting, but allows for multiple CA\'s, as +certificates\. Similar to the \fBca\fR setting, but allows for multiple CA's, as well as for the CA information to be stored in a file on disk\. -. -.SS "cache" -. -.IP "\(bu" 4 +.SS cache +.RS 0 +.IP \(bu 2 Default: Windows: \fB%AppData%\\npm\-cache\fR, Posix: \fB~/\.npm\fR -. -.IP "\(bu" 4 +.IP \(bu 2 Type: path -. -.IP "" 0 -. -.P -The location of npm\'s cache directory\. See npm help \fBnpm\-cache\fR -. -.SS "cache\-lock\-stale" -. -.IP "\(bu" 4 + +.RE +.P +The location of npm's cache directory\. See npm help \fBnpm\-cache\fR +.SS cache\-lock\-stale +.RS 0 +.IP \(bu 2 Default: 60000 (1 minute) -. -.IP "\(bu" 4 +.IP \(bu 2 Type: Number -. -.IP "" 0 -. + +.RE .P The number of ms before cache folder lockfiles are considered stale\. -. -.SS "cache\-lock\-retries" -. -.IP "\(bu" 4 +.SS cache\-lock\-retries +.RS 0 +.IP \(bu 2 Default: 10 -. -.IP "\(bu" 4 +.IP \(bu 2 Type: Number -. -.IP "" 0 -. + +.RE .P Number of times to retry to acquire a lock on cache folder lockfiles\. -. -.SS "cache\-lock\-wait" -. -.IP "\(bu" 4 +.SS cache\-lock\-wait +.RS 0 +.IP \(bu 2 Default: 10000 (10 seconds) -. -.IP "\(bu" 4 +.IP \(bu 2 Type: Number -. -.IP "" 0 -. + +.RE .P Number of ms to wait for cache lock files to expire\. -. -.SS "cache\-max" -. -.IP "\(bu" 4 +.SS cache\-max +.RS 0 +.IP \(bu 2 Default: Infinity -. -.IP "\(bu" 4 +.IP \(bu 2 Type: Number -. -.IP "" 0 -. 
+ +.RE .P The maximum time (in seconds) to keep items in the registry cache before re\-checking against the registry\. -. .P Note that no purging is done unless the \fBnpm cache clean\fR command is explicitly used, and that only GET requests use the cache\. -. -.SS "cache\-min" -. -.IP "\(bu" 4 +.SS cache\-min +.RS 0 +.IP \(bu 2 Default: 10 -. -.IP "\(bu" 4 +.IP \(bu 2 Type: Number -. -.IP "" 0 -. + +.RE .P The minimum time (in seconds) to keep items in the registry cache before re\-checking against the registry\. -. .P Note that no purging is done unless the \fBnpm cache clean\fR command is explicitly used, and that only GET requests use the cache\. -. -.SS "cert" -. -.IP "\(bu" 4 +.SS cert +.RS 0 +.IP \(bu 2 Default: \fBnull\fR -. -.IP "\(bu" 4 +.IP \(bu 2 Type: String -. -.IP "" 0 -. + +.RE .P A client certificate to pass when accessing the registry\. -. -.SS "color" -. -.IP "\(bu" 4 +.SS color +.RS 0 +.IP \(bu 2 Default: true on Posix, false on Windows -. -.IP "\(bu" 4 +.IP \(bu 2 Type: Boolean or \fB"always"\fR -. -.IP "" 0 -. + +.RE .P If false, never shows colors\. If \fB"always"\fR then always shows colors\. If true, then only prints color codes for tty file descriptors\. -. -.SS "depth" -. -.IP "\(bu" 4 +.SS depth +.RS 0 +.IP \(bu 2 Default: Infinity -. -.IP "\(bu" 4 +.IP \(bu 2 Type: Number -. -.IP "" 0 -. -.P -The depth to go when recursing directories for \fBnpm ls\fR and \fBnpm cache ls\fR\|\. -. -.SS "description" -. -.IP "\(bu" 4 + +.RE +.P +The depth to go when recursing directories for \fBnpm ls\fR and +\fBnpm cache ls\fR\|\. +.SS description +.RS 0 +.IP \(bu 2 Default: true -. -.IP "\(bu" 4 +.IP \(bu 2 Type: Boolean -. -.IP "" 0 -. + +.RE .P Show the description in \fBnpm search\fR -. -.SS "dev" -. -.IP "\(bu" 4 +.SS dev +.RS 0 +.IP \(bu 2 Default: false -. -.IP "\(bu" 4 +.IP \(bu 2 Type: Boolean -. -.IP "" 0 -. + +.RE .P Install \fBdev\-dependencies\fR along with packages\. -. .P Note that \fBdev\-dependencies\fR are also installed if the \fBnpat\fR flag is set\. -. -.SS "editor" -. -.IP "\(bu" 4 +.SS editor +.RS 0 +.IP \(bu 2 Default: \fBEDITOR\fR environment variable if set, or \fB"vi"\fR on Posix, or \fB"notepad"\fR on Windows\. -. -.IP "\(bu" 4 +.IP \(bu 2 Type: path -. -.IP "" 0 -. + +.RE .P The command to run for \fBnpm edit\fR or \fBnpm config edit\fR\|\. -. -.SS "email" -The email of the logged\-in user\. -. -.P -Set by the \fBnpm adduser\fR command\. Should not be set explicitly\. -. -.SS "engine\-strict" -. -.IP "\(bu" 4 +.SS engine\-strict +.RS 0 +.IP \(bu 2 Default: false -. -.IP "\(bu" 4 +.IP \(bu 2 Type: Boolean -. -.IP "" 0 -. + +.RE .P If set to true, then npm will stubbornly refuse to install (or even consider installing) any package that claims to not be compatible with the current Node\.js version\. -. -.SS "force" -. -.IP "\(bu" 4 +.SS force +.RS 0 +.IP \(bu 2 Default: false -. -.IP "\(bu" 4 +.IP \(bu 2 Type: Boolean -. -.IP "" 0 -. + +.RE .P Makes various commands more forceful\. -. -.IP "\(bu" 4 +.RS 0 +.IP \(bu 2 lifecycle script failure does not block progress\. -. -.IP "\(bu" 4 +.IP \(bu 2 publishing clobbers previously published versions\. -. -.IP "\(bu" 4 +.IP \(bu 2 skips cache when requesting from the registry\. -. -.IP "\(bu" 4 +.IP \(bu 2 prevents checks against clobbering non\-npm files\. -. -.IP "" 0 -. -.SS "fetch\-retries" -. -.IP "\(bu" 4 + +.RE +.SS fetch\-retries +.RS 0 +.IP \(bu 2 Default: 2 -. -.IP "\(bu" 4 +.IP \(bu 2 Type: Number -. -.IP "" 0 -. 
+ +.RE .P The "retries" config for the \fBretry\fR module to use when fetching packages from the registry\. -. -.SS "fetch\-retry\-factor" -. -.IP "\(bu" 4 +.SS fetch\-retry\-factor +.RS 0 +.IP \(bu 2 Default: 10 -. -.IP "\(bu" 4 +.IP \(bu 2 Type: Number -. -.IP "" 0 -. + +.RE .P The "factor" config for the \fBretry\fR module to use when fetching packages\. -. -.SS "fetch\-retry\-mintimeout" -. -.IP "\(bu" 4 +.SS fetch\-retry\-mintimeout +.RS 0 +.IP \(bu 2 Default: 10000 (10 seconds) -. -.IP "\(bu" 4 +.IP \(bu 2 Type: Number -. -.IP "" 0 -. + +.RE .P The "minTimeout" config for the \fBretry\fR module to use when fetching packages\. -. -.SS "fetch\-retry\-maxtimeout" -. -.IP "\(bu" 4 +.SS fetch\-retry\-maxtimeout +.RS 0 +.IP \(bu 2 Default: 60000 (1 minute) -. -.IP "\(bu" 4 +.IP \(bu 2 Type: Number -. -.IP "" 0 -. + +.RE .P The "maxTimeout" config for the \fBretry\fR module to use when fetching packages\. -. -.SS "git" -. -.IP "\(bu" 4 +.SS git +.RS 0 +.IP \(bu 2 Default: \fB"git"\fR -. -.IP "\(bu" 4 +.IP \(bu 2 Type: String -. -.IP "" 0 -. + +.RE .P The command to use for git commands\. If git is installed on the computer, but is not in the \fBPATH\fR, then set this to the full path to the git binary\. -. -.SS "git\-tag\-version" -. -.IP "\(bu" 4 +.SS git\-tag\-version +.RS 0 +.IP \(bu 2 Default: \fBtrue\fR -. -.IP "\(bu" 4 +.IP \(bu 2 Type: Boolean -. -.IP "" 0 -. + +.RE .P Tag the commit when using the \fBnpm version\fR command\. -. -.SS "global" -. -.IP "\(bu" 4 +.SS global +.RS 0 +.IP \(bu 2 Default: false -. -.IP "\(bu" 4 +.IP \(bu 2 Type: Boolean -. -.IP "" 0 -. + +.RE .P -Operates in "global" mode, so that packages are installed into the \fBprefix\fR folder instead of the current working directory\. See npm help 5 \fBnpm\-folders\fR for more on the differences in behavior\. -. -.IP "\(bu" 4 +Operates in "global" mode, so that packages are installed into the +\fBprefix\fR folder instead of the current working directory\. See +npm help 5 \fBnpm\-folders\fR for more on the differences in behavior\. +.RS 0 +.IP \(bu 2 packages are installed into the \fB{prefix}/lib/node_modules\fR folder, instead of the current working directory\. -. -.IP "\(bu" 4 +.IP \(bu 2 bin files are linked to \fB{prefix}/bin\fR -. -.IP "\(bu" 4 +.IP \(bu 2 man pages are linked to \fB{prefix}/share/man\fR -. -.IP "" 0 -. -.SS "globalconfig" -. -.IP "\(bu" 4 + +.RE +.SS globalconfig +.RS 0 +.IP \(bu 2 Default: {prefix}/etc/npmrc -. -.IP "\(bu" 4 +.IP \(bu 2 Type: path -. -.IP "" 0 -. + +.RE .P The config file to read for global config options\. -. -.SS "group" -. -.IP "\(bu" 4 +.SS group +.RS 0 +.IP \(bu 2 Default: GID of the current process -. -.IP "\(bu" 4 +.IP \(bu 2 Type: String or Number -. -.IP "" 0 -. + +.RE .P The group to use when running package scripts in global mode as the root user\. -. -.SS "heading" -. -.IP "\(bu" 4 +.SS heading +.RS 0 +.IP \(bu 2 Default: \fB"npm"\fR -. -.IP "\(bu" 4 +.IP \(bu 2 Type: String -. -.IP "" 0 -. + +.RE .P The string that starts all the debugging log output\. -. -.SS "https\-proxy" -. -.IP "\(bu" 4 -Default: the \fBHTTPS_PROXY\fR or \fBhttps_proxy\fR or \fBHTTP_PROXY\fR or \fBhttp_proxy\fR environment variables\. -. -.IP "\(bu" 4 +.SS https\-proxy +.RS 0 +.IP \(bu 2 +Default: the \fBHTTPS_PROXY\fR or \fBhttps_proxy\fR or \fBHTTP_PROXY\fR or +\fBhttp_proxy\fR environment variables\. +.IP \(bu 2 Type: url -. -.IP "" 0 -. + +.RE .P A proxy to use for outgoing https requests\. -. -.SS "ignore\-scripts" -. -.IP "\(bu" 4 +.SS ignore\-scripts +.RS 0 +.IP \(bu 2 Default: false -. 
-.IP "\(bu" 4 +.IP \(bu 2 Type: Boolean -. -.IP "" 0 -. + +.RE .P If true, npm does not run scripts specified in package\.json files\. -. -.SS "init\-module" -. -.IP "\(bu" 4 +.SS init\-module +.RS 0 +.IP \(bu 2 Default: ~/\.npm\-init\.js -. -.IP "\(bu" 4 +.IP \(bu 2 Type: path -. -.IP "" 0 -. + +.RE .P A module that will be loaded by the \fBnpm init\fR command\. See the -documentation for the init\-package\-json \fIhttps://github\.com/isaacs/init\-package\-json\fR module +documentation for the +init\-package\-json \fIhttps://github\.com/isaacs/init\-package\-json\fR module for more information, or npm help init\. -. -.SS "init\.author\.name" -. -.IP "\(bu" 4 +.SS init\-author\-name +.RS 0 +.IP \(bu 2 Default: "" -. -.IP "\(bu" 4 +.IP \(bu 2 Type: String -. -.IP "" 0 -. -.P -The value \fBnpm init\fR should use by default for the package author\'s name\. -. -.SS "init\.author\.email" -. -.IP "\(bu" 4 + +.RE +.P +The value \fBnpm init\fR should use by default for the package author's name\. +.SS init\-author\-email +.RS 0 +.IP \(bu 2 Default: "" -. -.IP "\(bu" 4 +.IP \(bu 2 Type: String -. -.IP "" 0 -. -.P -The value \fBnpm init\fR should use by default for the package author\'s email\. -. -.SS "init\.author\.url" -. -.IP "\(bu" 4 + +.RE +.P +The value \fBnpm init\fR should use by default for the package author's email\. +.SS init\-author\-url +.RS 0 +.IP \(bu 2 Default: "" -. -.IP "\(bu" 4 +.IP \(bu 2 Type: String -. -.IP "" 0 -. -.P -The value \fBnpm init\fR should use by default for the package author\'s homepage\. -. -.SS "init\.license" -. -.IP "\(bu" 4 + +.RE +.P +The value \fBnpm init\fR should use by default for the package author's homepage\. +.SS init\-license +.RS 0 +.IP \(bu 2 Default: "ISC" -. -.IP "\(bu" 4 +.IP \(bu 2 Type: String -. -.IP "" 0 -. + +.RE .P The value \fBnpm init\fR should use by default for the package license\. -. -.SS "json" -. -.IP "\(bu" 4 +.SS init\-version +.RS 0 +.IP \(bu 2 +Default: "0\.0\.0" +.IP \(bu 2 +Type: semver + +.RE +.P +The value that \fBnpm init\fR should use by default for the package +version number, if not already set in package\.json\. +.SS json +.RS 0 +.IP \(bu 2 Default: false -. -.IP "\(bu" 4 +.IP \(bu 2 Type: Boolean -. -.IP "" 0 -. + +.RE .P Whether or not to output JSON data, rather than the normal output\. -. .P This feature is currently experimental, and the output data structures for many commands is either not implemented in JSON yet, or subject to change\. Only the output from \fBnpm ls \-\-json\fR is currently valid\. -. -.SS "key" -. -.IP "\(bu" 4 +.SS key +.RS 0 +.IP \(bu 2 Default: \fBnull\fR -. -.IP "\(bu" 4 +.IP \(bu 2 Type: String -. -.IP "" 0 -. + +.RE .P A client key to pass when accessing the registry\. -. -.SS "link" -. -.IP "\(bu" 4 +.SS link +.RS 0 +.IP \(bu 2 Default: false -. -.IP "\(bu" 4 +.IP \(bu 2 Type: Boolean -. -.IP "" 0 -. + +.RE .P If true, then local installs will link if there is a suitable globally installed package\. -. .P Note that this means that local installs can cause things to be installed into the global space at the same time\. The link is only done if one of the two conditions are met: -. -.IP "\(bu" 4 +.RS 0 +.IP \(bu 2 The package is not already installed globally, or -. -.IP "\(bu" 4 +.IP \(bu 2 the globally installed version is identical to the version that is being installed locally\. -. -.IP "" 0 -. -.SS "local\-address" -. -.IP "\(bu" 4 + +.RE +.SS local\-address +.RS 0 +.IP \(bu 2 Default: undefined -. -.IP "\(bu" 4 +.IP \(bu 2 Type: IP Address -. -.IP "" 0 -. 
+ +.RE .P The IP address of the local interface to use when making connections to the npm registry\. Must be IPv4 in versions of Node prior to 0\.12\. -. -.SS "loglevel" -. -.IP "\(bu" 4 -Default: "http" -. -.IP "\(bu" 4 +.SS loglevel +.RS 0 +.IP \(bu 2 +Default: "warn" +.IP \(bu 2 Type: String -. -.IP "\(bu" 4 -Values: "silent", "win", "error", "warn", "http", "info", "verbose", "silly" -. -.IP "" 0 -. +.IP \(bu 2 +Values: "silent", "error", "warn", "http", "info", "verbose", "silly" + +.RE .P -What level of logs to report\. On failure, \fIall\fR logs are written to \fBnpm\-debug\.log\fR in the current working directory\. -. +What level of logs to report\. On failure, \fIall\fR logs are written to +\fBnpm\-debug\.log\fR in the current working directory\. .P Any logs of a higher level than the setting are shown\. -The default is "http", which shows http, warn, and error output\. -. -.SS "logstream" -. -.IP "\(bu" 4 +The default is "warn", which shows warn and error output\. +.SS logstream +.RS 0 +.IP \(bu 2 Default: process\.stderr -. -.IP "\(bu" 4 +.IP \(bu 2 Type: Stream -. -.IP "" 0 -. + +.RE .P -This is the stream that is passed to the npmlog \fIhttps://github\.com/npm/npmlog\fR module at run time\. -. +This is the stream that is passed to the +npmlog \fIhttps://github\.com/npm/npmlog\fR module at run time\. .P It cannot be set from the command line, but if you are using npm programmatically, you may wish to send logs to somewhere other than stderr\. -. .P If the \fBcolor\fR config is set to true, then this stream will receive colored output if it is a TTY\. -. -.SS "long" -. -.IP "\(bu" 4 +.SS long +.RS 0 +.IP \(bu 2 Default: false -. -.IP "\(bu" 4 +.IP \(bu 2 Type: Boolean -. -.IP "" 0 -. + +.RE .P Show extended information in \fBnpm ls\fR and \fBnpm search\fR\|\. -. -.SS "message" -. -.IP "\(bu" 4 +.SS message +.RS 0 +.IP \(bu 2 Default: "%s" -. -.IP "\(bu" 4 +.IP \(bu 2 Type: String -. -.IP "" 0 -. + +.RE .P Commit message which is used by \fBnpm version\fR when creating version commit\. -. .P Any "%s" in the message will be replaced with the version number\. -. -.SS "node\-version" -. -.IP "\(bu" 4 +.SS node\-version +.RS 0 +.IP \(bu 2 Default: process\.version -. -.IP "\(bu" 4 +.IP \(bu 2 Type: semver or false -. -.IP "" 0 -. -.P -The node version to use when checking package\'s "engines" hash\. -. -.SS "npat" -. -.IP "\(bu" 4 + +.RE +.P +The node version to use when checking a package's \fBengines\fR map\. +.SS npat +.RS 0 +.IP \(bu 2 Default: false -. -.IP "\(bu" 4 +.IP \(bu 2 Type: Boolean -. -.IP "" 0 -. + +.RE .P Run tests on installation\. -. -.SS "onload\-script" -. -.IP "\(bu" 4 +.SS onload\-script +.RS 0 +.IP \(bu 2 Default: false -. -.IP "\(bu" 4 +.IP \(bu 2 Type: path -. -.IP "" 0 -. + +.RE .P A node module to \fBrequire()\fR when npm loads\. Useful for programmatic usage\. -. -.SS "optional" -. -.IP "\(bu" 4 +.SS optional +.RS 0 +.IP \(bu 2 Default: true -. -.IP "\(bu" 4 +.IP \(bu 2 Type: Boolean -. -.IP "" 0 -. + +.RE .P -Attempt to install packages in the \fBoptionalDependencies\fR hash\. Note +Attempt to install packages in the \fBoptionalDependencies\fR object\. Note that if these packages fail to install, the overall installation process is not aborted\. -. -.SS "parseable" -. -.IP "\(bu" 4 +.SS parseable +.RS 0 +.IP \(bu 2 Default: false -. -.IP "\(bu" 4 +.IP \(bu 2 Type: Boolean -. -.IP "" 0 -. + +.RE .P Output parseable results from commands that write to standard output\. -. -.SS "prefix" -. 
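+.P
+For example (a sketch; the paths printed depend on your own project),
+parseable output is one path per line, which pipes cleanly into
+standard shell tools:
+.P
+.RS 2
+.nf
+npm ls \-\-parseable | xargs \-n1 basename
+.fi
+.RE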
-.IP "\(bu" 4 +.SS prefix +.RS 0 +.IP \(bu 2 Default: see npm help 5 folders -. -.IP "\(bu" 4 +.IP \(bu 2 Type: path -. -.IP "" 0 -. + +.RE .P The location to install global items\. If set on the command line, then it forces non\-global commands to run in the specified folder\. -. -.SS "production" -. -.IP "\(bu" 4 +.SS production +.RS 0 +.IP \(bu 2 Default: false -. -.IP "\(bu" 4 +.IP \(bu 2 Type: Boolean -. -.IP "" 0 -. + +.RE .P Set to true to run in "production" mode\. -. -.IP "1" 4 +.RS 0 +.IP 1. 3 devDependencies are not installed at the topmost level when running local \fBnpm install\fR without any arguments\. -. -.IP "2" 4 +.IP 2. 3 Set the NODE_ENV="production" for lifecycle scripts\. -. -.IP "" 0 -. -.SS "proprietary\-attribs" -. -.IP "\(bu" 4 + +.RE +.SS proprietary\-attribs +.RS 0 +.IP \(bu 2 Default: true -. -.IP "\(bu" 4 +.IP \(bu 2 Type: Boolean -. -.IP "" 0 -. + +.RE .P Whether or not to include proprietary extended attributes in the tarballs created by npm\. -. .P Unless you are expecting to unpack package tarballs with something other than npm \-\- particularly a very outdated tar implementation \-\- leave this as true\. -. -.SS "proxy" -. -.IP "\(bu" 4 +.SS proxy +.RS 0 +.IP \(bu 2 Default: \fBHTTP_PROXY\fR or \fBhttp_proxy\fR environment variable, or null -. -.IP "\(bu" 4 +.IP \(bu 2 Type: url -. -.IP "" 0 -. + +.RE .P A proxy to use for outgoing http requests\. -. -.SS "rebuild\-bundle" -. -.IP "\(bu" 4 +.SS rebuild\-bundle +.RS 0 +.IP \(bu 2 Default: true -. -.IP "\(bu" 4 +.IP \(bu 2 Type: Boolean -. -.IP "" 0 -. + +.RE .P Rebuild bundled dependencies after installation\. -. -.SS "registry" -. -.IP "\(bu" 4 +.SS registry +.RS 0 +.IP \(bu 2 Default: https://registry\.npmjs\.org/ -. -.IP "\(bu" 4 +.IP \(bu 2 Type: url -. -.IP "" 0 -. + +.RE .P The base URL of the npm package registry\. -. -.SS "rollback" -. -.IP "\(bu" 4 +.SS rollback +.RS 0 +.IP \(bu 2 Default: true -. -.IP "\(bu" 4 +.IP \(bu 2 Type: Boolean -. -.IP "" 0 -. + +.RE .P Remove failed installs\. -. -.SS "save" -. -.IP "\(bu" 4 +.SS save +.RS 0 +.IP \(bu 2 Default: false -. -.IP "\(bu" 4 +.IP \(bu 2 Type: Boolean -. -.IP "" 0 -. + +.RE .P Save installed packages to a package\.json file as dependencies\. -. .P -When used with the \fBnpm rm\fR command, it removes it from the dependencies -hash\. -. +When used with the \fBnpm rm\fR command, it removes it from the \fBdependencies\fR +object\. .P Only works if there is already a package\.json file present\. -. -.SS "save\-bundle" -. -.IP "\(bu" 4 +.SS save\-bundle +.RS 0 +.IP \(bu 2 Default: false -. -.IP "\(bu" 4 +.IP \(bu 2 Type: Boolean -. -.IP "" 0 -. + +.RE .P -If a package would be saved at install time by the use of \fB\-\-save\fR, \fB\-\-save\-dev\fR, or \fB\-\-save\-optional\fR, then also put it in the \fBbundleDependencies\fR list\. -. +If a package would be saved at install time by the use of \fB\-\-save\fR, +\fB\-\-save\-dev\fR, or \fB\-\-save\-optional\fR, then also put it in the +\fBbundleDependencies\fR list\. .P When used with the \fBnpm rm\fR command, it removes it from the bundledDependencies list\. -. -.SS "save\-dev" -. -.IP "\(bu" 4 +.SS save\-dev +.RS 0 +.IP \(bu 2 Default: false -. -.IP "\(bu" 4 +.IP \(bu 2 Type: Boolean -. -.IP "" 0 -. + +.RE .P -Save installed packages to a package\.json file as devDependencies\. -. +Save installed packages to a package\.json file as \fBdevDependencies\fR\|\. .P When used with the \fBnpm rm\fR command, it removes it from the -devDependencies hash\. -. +\fBdevDependencies\fR object\. 
.P
Only works if there is already a package\.json file present\.
-.
-.SS "save\-exact"
-.
-.IP "\(bu" 4
+.SS save\-exact
+.RS 0
+.IP \(bu 2
Default: false
-.
-.IP "\(bu" 4
+.IP \(bu 2
Type: Boolean
-.
-.IP "" 0
-.
+
+.RE
.P
-Dependencies saved to package\.json using \fB\-\-save\fR, \fB\-\-save\-dev\fR or \fB\-\-save\-optional\fR will be configured with an exact version rather than
-using npm\'s default semver range operator\.
-.
-.SS "save\-optional"
-.
-.IP "\(bu" 4
+Dependencies saved to package\.json using \fB\-\-save\fR, \fB\-\-save\-dev\fR or
+\fB\-\-save\-optional\fR will be configured with an exact version rather than
+using npm's default semver range operator\.
+.SS save\-optional
+.RS 0
+.IP \(bu 2
Default: false
-.
-.IP "\(bu" 4
+.IP \(bu 2
Type: Boolean
-.
-.IP "" 0
-.
+
+.RE
.P
Save installed packages to a package\.json file as optionalDependencies\.
-.
.P
When used with the \fBnpm rm\fR command, it removes it from the
-devDependencies hash\.
-.
+\fBoptionalDependencies\fR object\.
.P
Only works if there is already a package\.json file present\.
-.
-.SS "save\-prefix"
-.
-.IP "\(bu" 4
-Default: \'^\'
-.
-.IP "\(bu" 4
+.SS save\-prefix
+.RS 0
+.IP \(bu 2
+Default: '^'
+.IP \(bu 2
Type: String
-.
-.IP "" 0
-.
+
+.RE
.P
-Configure how versions of packages installed to a package\.json file via \fB\-\-save\fR or \fB\-\-save\-dev\fR get prefixed\.
-.
+Configure how versions of packages installed to a package\.json file via
+\fB\-\-save\fR or \fB\-\-save\-dev\fR get prefixed\.
.P
-For example if a package has version \fB1\.2\.3\fR, by default it\'s version is
+For example, if a package has version \fB1\.2\.3\fR, by default its version is
set to \fB^1\.2\.3\fR which allows minor upgrades for that package, but after
-.
-.br
-\fBnpm config set save\-prefix=\'~\'\fR it would be set to \fB~1\.2\.3\fR which only allows
+\fBnpm config set save\-prefix='~'\fR it would be set to \fB~1\.2\.3\fR which only allows
patch upgrades\.
-.
-.SS "searchopts"
-.
-.IP "\(bu" 4
+.SS scope
+.RS 0
+.IP \(bu 2
Default: ""
-.
-.IP "\(bu" 4
+.IP \(bu 2
Type: String
-.
-.IP "" 0
-.
+
+.RE
+.P
+Associate an operation with a scope for a scoped registry\. Useful when logging
+in to a private registry for the first time:
+\fBnpm login \-\-scope=@organization \-\-registry=registry\.organization\.com\fR, which
+will cause \fB@organization\fR to be mapped to the registry for future installation
+of packages specified according to the pattern \fB@organization/package\fR\|\.
+.SS searchopts
+.RS 0
+.IP \(bu 2
+Default: ""
+.IP \(bu 2
+Type: String
+
+.RE
.P
Space\-separated options that are always passed to search\.
-.
-.SS "searchexclude"
-.
-.IP "\(bu" 4
+.SS searchexclude
+.RS 0
+.IP \(bu 2
Default: ""
-.
-.IP "\(bu" 4
+.IP \(bu 2
Type: String
-.
-.IP "" 0
-.
+
+.RE
.P
Space\-separated options that limit the results from search\.
-.
-.SS "searchsort"
-.
-.IP "\(bu" 4
+.SS searchsort
+.RS 0
+.IP \(bu 2
Default: "name"
-.
-.IP "\(bu" 4
+.IP \(bu 2
Type: String
-.
-.IP "\(bu" 4
+.IP \(bu 2
Values: "name", "\-name", "date", "\-date", "description",
"\-description", "keywords", "\-keywords"
-.
-.IP "" 0
-.
+
+.RE
.P
Indication of which field to sort search results by\. Prefix with a \fB\-\fR
character to indicate reverse sort\.
-.
-.SS "shell"
-.
-.IP "\(bu" 4
+.SS shell
+.RS 0
+.IP \(bu 2
Default: SHELL environment variable, or "bash" on Posix, or "cmd" on
Windows
-.
-.IP "\(bu" 4
+.IP \(bu 2
Type: path
-.
-.IP "" 0
-.
+
+.RE
.P
The shell to run for the \fBnpm explore\fR command\.
-.
-.SS "shrinkwrap"
-.
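+.P
+As a sketch (the shell path and package name are only placeholders),
+overriding the shell that \fBnpm explore\fR spawns looks like this:
+.P
+.RS 2
+.nf
+npm config set shell /bin/zsh
+npm explore some\-package
+.fi
+.RE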
-.IP "\(bu" 4 +.SS shrinkwrap +.RS 0 +.IP \(bu 2 Default: true -. -.IP "\(bu" 4 +.IP \(bu 2 Type: Boolean -. -.IP "" 0 -. + +.RE .P If set to false, then ignore \fBnpm\-shrinkwrap\.json\fR files when installing\. -. -.SS "sign\-git\-tag" -. -.IP "\(bu" 4 +.SS sign\-git\-tag +.RS 0 +.IP \(bu 2 Default: false -. -.IP "\(bu" 4 +.IP \(bu 2 Type: Boolean -. -.IP "" 0 -. + +.RE .P If set to true, then the \fBnpm version\fR command will tag the version using \fB\-s\fR to add a signature\. -. .P Note that git requires you to have set up GPG keys in your git configs for this to work properly\. -. -.SS "spin" -. -.IP "\(bu" 4 +.SS spin +.RS 0 +.IP \(bu 2 Default: true -. -.IP "\(bu" 4 +.IP \(bu 2 Type: Boolean or \fB"always"\fR -. -.IP "" 0 -. + +.RE .P When set to \fBtrue\fR, npm will display an ascii spinner while it is doing things, if \fBprocess\.stderr\fR is a TTY\. -. .P Set to \fBfalse\fR to suppress the spinner, or set to \fBalways\fR to output the spinner even for non\-TTY outputs\. -. -.SS "strict\-ssl" -. -.IP "\(bu" 4 +.SS strict\-ssl +.RS 0 +.IP \(bu 2 Default: true -. -.IP "\(bu" 4 +.IP \(bu 2 Type: Boolean -. -.IP "" 0 -. + +.RE .P Whether or not to do SSL key validation when making requests to the registry via https\. -. .P See also the \fBca\fR config\. -. -.SS "tag" -. -.IP "\(bu" 4 +.SS tag +.RS 0 +.IP \(bu 2 Default: latest -. -.IP "\(bu" 4 +.IP \(bu 2 Type: String -. -.IP "" 0 -. + +.RE .P -If you ask npm to install a package and don\'t tell it a specific version, then +If you ask npm to install a package and don't tell it a specific version, then it will install the specified tag\. -. .P Also the tag that is added to the package@version specified by the \fBnpm tag\fR command, if no explicit tag is given\. -. -.SS "tmp" -. -.IP "\(bu" 4 +.SS tmp +.RS 0 +.IP \(bu 2 Default: TMPDIR environment variable, or "/tmp" -. -.IP "\(bu" 4 +.IP \(bu 2 Type: path -. -.IP "" 0 -. + +.RE .P Where to store temporary files and folders\. All temp files are deleted on success, but left behind on failure for forensic purposes\. -. -.SS "unicode" -. -.IP "\(bu" 4 +.SS unicode +.RS 0 +.IP \(bu 2 Default: true -. -.IP "\(bu" 4 +.IP \(bu 2 Type: Boolean -. -.IP "" 0 -. + +.RE .P When set to true, npm uses unicode characters in the tree output\. When false, it uses ascii characters to draw trees\. -. -.SS "unsafe\-perm" -. -.IP "\(bu" 4 +.SS unsafe\-perm +.RS 0 +.IP \(bu 2 Default: false if running as root, true otherwise -. -.IP "\(bu" 4 +.IP \(bu 2 Type: Boolean -. -.IP "" 0 -. + +.RE .P Set to true to suppress the UID/GID switching when running package scripts\. If set explicitly to false, then installing as a non\-root user will fail\. -. -.SS "usage" -. -.IP "\(bu" 4 +.SS usage +.RS 0 +.IP \(bu 2 Default: false -. -.IP "\(bu" 4 +.IP \(bu 2 Type: Boolean -. -.IP "" 0 -. + +.RE .P Set to show short usage output (like the \-H output) instead of complete help when doing npm help \fBnpm\-help\fR\|\. -. -.SS "user" -. -.IP "\(bu" 4 +.SS user +.RS 0 +.IP \(bu 2 Default: "nobody" -. -.IP "\(bu" 4 +.IP \(bu 2 Type: String or Number -. -.IP "" 0 -. + +.RE .P The UID to set to when running package scripts as root\. -. -.SS "username" -. -.IP "\(bu" 4 -Default: null -. -.IP "\(bu" 4 -Type: String -. -.IP "" 0 -. -.P -The username on the npm registry\. Set with \fBnpm adduser\fR -. -.SS "userconfig" -. -.IP "\(bu" 4 +.SS userconfig +.RS 0 +.IP \(bu 2 Default: ~/\.npmrc -. -.IP "\(bu" 4 +.IP \(bu 2 Type: path -. -.IP "" 0 -. + +.RE .P The location of user\-level configuration settings\. -. -.SS "umask" -. 
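+.P
+For example (the file path here is hypothetical), a one\-off run
+against an alternate user config file:
+.P
+.RS 2
+.nf
+npm \-\-userconfig /tmp/alt\-npmrc config get registry
+.fi
+.RE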
-.IP "\(bu" 4 +.SS umask +.RS 0 +.IP \(bu 2 Default: 022 -. -.IP "\(bu" 4 +.IP \(bu 2 Type: Octal numeric string -. -.IP "" 0 -. + +.RE .P The "umask" value to use when setting the file creation mode on files and folders\. -. .P Folders and executables are given a mode which is \fB0777\fR masked against this value\. Other files are given a mode which is \fB0666\fR masked against this value\. Thus, the defaults are \fB0755\fR and \fB0644\fR respectively\. -. -.SS "user\-agent" -. -.IP "\(bu" 4 +.SS user\-agent +.RS 0 +.IP \(bu 2 Default: node/{process\.version} {process\.platform} {process\.arch} -. -.IP "\(bu" 4 +.IP \(bu 2 Type: String -. -.IP "" 0 -. + +.RE .P Sets a User\-Agent to the request header -. -.SS "version" -. -.IP "\(bu" 4 +.SS version +.RS 0 +.IP \(bu 2 Default: false -. -.IP "\(bu" 4 +.IP \(bu 2 Type: boolean -. -.IP "" 0 -. + +.RE .P If true, output the npm version and exit successfully\. -. .P Only relevant when specified explicitly on the command line\. -. -.SS "versions" -. -.IP "\(bu" 4 +.SS versions +.RS 0 +.IP \(bu 2 Default: false -. -.IP "\(bu" 4 +.IP \(bu 2 Type: boolean -. -.IP "" 0 -. + +.RE .P -If true, output the npm version as well as node\'s \fBprocess\.versions\fR -hash, and exit successfully\. -. +If true, output the npm version as well as node's \fBprocess\.versions\fR map, and +exit successfully\. .P Only relevant when specified explicitly on the command line\. -. -.SS "viewer" -. -.IP "\(bu" 4 +.SS viewer +.RS 0 +.IP \(bu 2 Default: "man" on Posix, "browser" on Windows -. -.IP "\(bu" 4 +.IP \(bu 2 Type: path -. -.IP "" 0 -. + +.RE .P The program to use to view help content\. -. .P Set to \fB"browser"\fR to view html help content in the default web browser\. -. -.SH "SEE ALSO" -. -.IP "\(bu" 4 +.SH SEE ALSO +.RS 0 +.IP \(bu 2 npm help config -. -.IP "\(bu" 4 +.IP \(bu 2 npm help 7 config -. -.IP "\(bu" 4 +.IP \(bu 2 npm help 5 npmrc -. -.IP "\(bu" 4 +.IP \(bu 2 npm help 7 scripts -. -.IP "\(bu" 4 +.IP \(bu 2 npm help 5 folders -. -.IP "\(bu" 4 +.IP \(bu 2 npm help npm -. -.IP "" 0 + +.RE diff --git a/deps/npm/man/man7/npm-developers.7 b/deps/npm/man/man7/npm-developers.7 index 071b8c2d79a..bf8edb29f23 100644 --- a/deps/npm/man/man7/npm-developers.7 +++ b/deps/npm/man/man7/npm-developers.7 @@ -1,335 +1,258 @@ -.\" Generated with Ronnjs 0.3.8 -.\" http://github.com/kapouer/ronnjs/ -. -.TH "NPM\-DEVELOPERS" "7" "September 2014" "" "" -. +.TH "NPM\-DEVELOPERS" "7" "October 2014" "" "" .SH "NAME" -\fBnpm-developers\fR \-\- Developer Guide -. -.SH "DESCRIPTION" -So, you\'ve decided to use npm to develop (and maybe publish/deploy) +\fBnpm-developers\fR \- Developer Guide +.SH DESCRIPTION +.P +So, you've decided to use npm to develop (and maybe publish/deploy) your project\. -. .P Fantastic! -. .P There are a few things that you need to do above the simple steps that your users will do to install your program\. -. -.SH "About These Documents" +.SH About These Documents +.P These are man pages\. If you install npm, you should be able to then do \fBman npm\-thing\fR to get the documentation on a particular topic, or \fBnpm help thing\fR to see the same information\. -. -.SH "What is a " +.SH What is a \fBpackage\fR +.P A package is: -. -.IP "\(bu" 4 +.RS 0 +.IP \(bu 2 a) a folder containing a program described by a package\.json file -. -.IP "\(bu" 4 +.IP \(bu 2 b) a gzipped tarball containing (a) -. -.IP "\(bu" 4 +.IP \(bu 2 c) a url that resolves to (b) -. -.IP "\(bu" 4 +.IP \(bu 2 d) a \fB@\fR that is published on the registry with (c) -. 
-.IP "\(bu" 4 +.IP \(bu 2 e) a \fB@\fR that points to (d) -. -.IP "\(bu" 4 +.IP \(bu 2 f) a \fB\fR that has a "latest" tag satisfying (e) -. -.IP "\(bu" 4 +.IP \(bu 2 g) a \fBgit\fR url that, when cloned, results in (a)\. -. -.IP "" 0 -. + +.RE .P Even if you never publish your package, you can still get a lot of benefits of using npm if you just want to write a node program (a), and perhaps if you also want to be able to easily install it elsewhere after packing it up into a tarball (b)\. -. .P Git urls can be of the form: -. -.IP "" 4 -. +.P +.RS 2 .nf git://github\.com/user/project\.git#commit\-ish git+ssh://user@hostname:project\.git#commit\-ish git+http://user@hostname/project/blah\.git#commit\-ish git+https://user@hostname/project/blah\.git#commit\-ish -. .fi -. -.IP "" 0 -. +.RE .P The \fBcommit\-ish\fR can be any tag, sha, or branch which can be supplied as an argument to \fBgit checkout\fR\|\. The default is \fBmaster\fR\|\. -. -.SH "The package\.json File" +.SH The package\.json File +.P You need to have a \fBpackage\.json\fR file in the root of your project to do much of anything with npm\. That is basically the whole interface\. -. .P See npm help 5 \fBpackage\.json\fR for details about what goes in that file\. At the very least, you need: -. -.IP "\(bu" 4 +.RS 0 +.IP \(bu 2 name: This should be a string that identifies your project\. Please do not use the name to specify that it runs on node, or is in JavaScript\. You can use the "engines" field to explicitly state the versions of -node (or whatever else) that your program requires, and it\'s pretty -well assumed that it\'s javascript\. -. -.IP +node (or whatever else) that your program requires, and it's pretty +well assumed that it's javascript\. It does not necessarily need to match your github repository name\. -. -.IP So, \fBnode\-foo\fR and \fBbar\-js\fR are bad names\. \fBfoo\fR or \fBbar\fR are better\. -. -.IP "\(bu" 4 +.IP \(bu 2 version: A semver\-compatible version\. -. -.IP "\(bu" 4 +.IP \(bu 2 engines: Specify the versions of node (or whatever else) that your program runs on\. The node API changes a lot, and there may be bugs or new functionality that you depend on\. Be explicit\. -. -.IP "\(bu" 4 +.IP \(bu 2 author: Take some credit\. -. -.IP "\(bu" 4 +.IP \(bu 2 scripts: If you have a special compilation or installation script, then you -should put it in the \fBscripts\fR hash\. You should definitely have at +should put it in the \fBscripts\fR object\. You should definitely have at least a basic smoke\-test command as the "scripts\.test" field\. See npm help 7 scripts\. -. -.IP "\(bu" 4 +.IP \(bu 2 main: If you have a single module that serves as the entry point to your program (like what the "foo" package gives you at require("foo")), then you need to specify that in the "main" field\. -. -.IP "\(bu" 4 +.IP \(bu 2 directories: -This is a hash of folders\. The best ones to include are "lib" and -"doc", but if you specify a folder full of man pages in "man", then -they\'ll get installed just like these ones\. -. -.IP "" 0 -. +This is an object mapping names to folders\. The best ones to include are +"lib" and "doc", but if you use "man" to specify a folder full of man pages, +they'll get installed just like these ones\. + +.RE .P You can use \fBnpm init\fR in the root of your package in order to get you started with a pretty basic package\.json file\. See npm help \fBnpm\-init\fR for more info\. -. -.SH "Keeping files " -Use a \fB\|\.npmignore\fR file to keep stuff out of your package\. 
If there\'s +.SH Keeping files \fIout\fR of your package +.P +Use a \fB\|\.npmignore\fR file to keep stuff out of your package\. If there's no \fB\|\.npmignore\fR file, but there \fIis\fR a \fB\|\.gitignore\fR file, then npm will ignore the stuff matched by the \fB\|\.gitignore\fR file\. If you \fIwant\fR to include something that is excluded by your \fB\|\.gitignore\fR file, you can create an empty \fB\|\.npmignore\fR file to override it\. -. .P -By default, the following paths and files are ignored, so there\'s no +By default, the following paths and files are ignored, so there's no need to add them to \fB\|\.npmignore\fR explicitly: -. -.IP "\(bu" 4 +.RS 0 +.IP \(bu 2 \fB\|\.*\.swp\fR -. -.IP "\(bu" 4 +.IP \(bu 2 \fB\|\._*\fR -. -.IP "\(bu" 4 +.IP \(bu 2 \fB\|\.DS_Store\fR -. -.IP "\(bu" 4 +.IP \(bu 2 \fB\|\.git\fR -. -.IP "\(bu" 4 +.IP \(bu 2 \fB\|\.hg\fR -. -.IP "\(bu" 4 +.IP \(bu 2 \fB\|\.lock\-wscript\fR -. -.IP "\(bu" 4 +.IP \(bu 2 \fB\|\.svn\fR -. -.IP "\(bu" 4 +.IP \(bu 2 \fB\|\.wafpickle\-*\fR -. -.IP "\(bu" 4 +.IP \(bu 2 \fBCVS\fR -. -.IP "\(bu" 4 +.IP \(bu 2 \fBnpm\-debug\.log\fR -. -.IP "" 0 -. + +.RE .P Additionally, everything in \fBnode_modules\fR is ignored, except for -bundled dependencies\. npm automatically handles this for you, so don\'t +bundled dependencies\. npm automatically handles this for you, so don't bother adding \fBnode_modules\fR to \fB\|\.npmignore\fR\|\. -. .P -The following paths and files are never ignored, so adding them to \fB\|\.npmignore\fR is pointless: -. -.IP "\(bu" 4 +The following paths and files are never ignored, so adding them to +\fB\|\.npmignore\fR is pointless: +.RS 0 +.IP \(bu 2 \fBpackage\.json\fR -. -.IP "\(bu" 4 +.IP \(bu 2 \fBREADME\.*\fR -. -.IP "" 0 -. -.SH "Link Packages" + +.RE +.SH Link Packages +.P \fBnpm link\fR is designed to install a development package and see the changes in real time without having to keep re\-installing it\. (You do need to either re\-link or \fBnpm rebuild \-g\fR to update compiled packages, of course\.) -. .P More info at npm help \fBnpm\-link\fR\|\. -. -.SH "Before Publishing: Make Sure Your Package Installs and Works" +.SH Before Publishing: Make Sure Your Package Installs and Works +.P \fBThis is important\.\fR -. .P -If you can not install it locally, you\'ll have -problems trying to publish it\. Or, worse yet, you\'ll be able to -publish it, but you\'ll be publishing a broken or pointless package\. -So don\'t do that\. -. +If you can not install it locally, you'll have +problems trying to publish it\. Or, worse yet, you'll be able to +publish it, but you'll be publishing a broken or pointless package\. +So don't do that\. .P In the root of your package, do this: -. -.IP "" 4 -. +.P +.RS 2 .nf npm install \. \-g -. .fi -. -.IP "" 0 -. +.RE .P -That\'ll show you that it\'s working\. If you\'d rather just create a symlink +That'll show you that it's working\. If you'd rather just create a symlink package that points to your working directory, then do this: -. -.IP "" 4 -. +.P +.RS 2 .nf npm link -. .fi -. -.IP "" 0 -. +.RE .P -Use \fBnpm ls \-g\fR to see if it\'s there\. -. +Use \fBnpm ls \-g\fR to see if it's there\. .P To test a local install, go into some other folder, and then do: -. -.IP "" 4 -. +.P +.RS 2 .nf cd \.\./some\-other\-folder npm install \.\./my\-package -. .fi -. -.IP "" 0 -. +.RE .P to install it locally into the node_modules folder in that other place\. -. .P Then go into the node\-repl, and try using require("my\-thing") to -bring in your module\'s main module\. -. 
-.SH "Create a User Account" +bring in your module's main module\. +.SH Create a User Account +.P Create a user with the adduser command\. It works like this: -. -.IP "" 4 -. +.P +.RS 2 .nf npm adduser -. .fi -. -.IP "" 0 -. +.RE .P and then follow the prompts\. -. .P This is documented better in npm help adduser\. -. -.SH "Publish your package" -This part\'s easy\. IN the root of your folder, do this: -. -.IP "" 4 -. +.SH Publish your package +.P +This part's easy\. IN the root of your folder, do this: +.P +.RS 2 .nf npm publish -. .fi -. -.IP "" 0 -. +.RE .P You can give publish a url to a tarball, or a filename of a tarball, or a path to a folder\. -. .P Note that pretty much \fBeverything in that folder will be exposed\fR -by default\. So, if you have secret stuff in there, use a \fB\|\.npmignore\fR file to list out the globs to ignore, or publish +by default\. So, if you have secret stuff in there, use a +\fB\|\.npmignore\fR file to list out the globs to ignore, or publish from a fresh checkout\. -. -.SH "Brag about it" +.SH Brag about it +.P Send emails, write blogs, blab in IRC\. -. .P Tell the world how easy it is to install your program! -. -.SH "SEE ALSO" -. -.IP "\(bu" 4 +.SH SEE ALSO +.RS 0 +.IP \(bu 2 npm help 7 faq -. -.IP "\(bu" 4 +.IP \(bu 2 npm help npm -. -.IP "\(bu" 4 +.IP \(bu 2 npm help init -. -.IP "\(bu" 4 +.IP \(bu 2 npm help 5 package\.json -. -.IP "\(bu" 4 +.IP \(bu 2 npm help 7 scripts -. -.IP "\(bu" 4 +.IP \(bu 2 npm help publish -. -.IP "\(bu" 4 +.IP \(bu 2 npm help adduser -. -.IP "\(bu" 4 +.IP \(bu 2 npm help 7 registry -. -.IP "" 0 + +.RE diff --git a/deps/npm/man/man7/npm-disputes.7 b/deps/npm/man/man7/npm-disputes.7 index a3163bcaec1..9cf7009f620 100644 --- a/deps/npm/man/man7/npm-disputes.7 +++ b/deps/npm/man/man7/npm-disputes.7 @@ -1,92 +1,78 @@ -.\" Generated with Ronnjs 0.3.8 -.\" http://github.com/kapouer/ronnjs/ -. -.TH "NPM\-DISPUTES" "7" "September 2014" "" "" -. +.TH "NPM\-DISPUTES" "7" "October 2014" "" "" .SH "NAME" -\fBnpm-disputes\fR \-\- Handling Module Name Disputes -. -.SH "SYNOPSIS" -. -.IP "1" 4 +\fBnpm-disputes\fR \- Handling Module Name Disputes +.SH SYNOPSIS +.RS 0 +.IP 1. 3 Get the author email with \fBnpm owner ls \fR -. -.IP "2" 4 -Email the author, CC \fIsupport@npmjs\.com\fR -. -.IP "3" 4 -After a few weeks, if there\'s no resolution, we\'ll sort it out\. -. -.IP "" 0 -. +.IP 2. 3 +Email the author, CC support@npmjs\.com +.IP 3. 3 +After a few weeks, if there's no resolution, we'll sort it out\. + +.RE +.P +Don't squat on package names\. Publish code or move out of the way\. +.SH DESCRIPTION .P -Don\'t squat on package names\. Publish code or move out of the way\. -. -.SH "DESCRIPTION" There sometimes arise cases where a user publishes a module, and then later, some other user wants to use that name\. Here are some common ways that happens (each of these is based on actual events\.) -. -.IP "1" 4 +.RS 0 +.IP 1. 3 Joe writes a JavaScript module \fBfoo\fR, which is not node\-specific\. -Joe doesn\'t use node at all\. Bob wants to use \fBfoo\fR in node, so he +Joe doesn't use node at all\. Bob wants to use \fBfoo\fR in node, so he wraps it in an npm module\. Some time later, Joe starts using node, and wants to take over management of his program\. -. -.IP "2" 4 +.IP 2. 3 Bob writes an npm module \fBfoo\fR, and publishes it\. Perhaps much later, Joe finds a bug in \fBfoo\fR, and fixes it\. 
He sends a pull -request to Bob, but Bob doesn\'t have the time to deal with it, +request to Bob, but Bob doesn't have the time to deal with it, because he has a new job and a new baby and is focused on his new erlang project, and kind of not involved with node any more\. Joe -would like to publish a new \fBfoo\fR, but can\'t, because the name is +would like to publish a new \fBfoo\fR, but can't, because the name is taken\. -. -.IP "3" 4 +.IP 3. 3 Bob writes a 10\-line flow\-control library, and calls it \fBfoo\fR, and publishes it to the npm registry\. Being a simple little thing, it never really has to be updated\. Joe works for Foo Inc, the makers of the critically acclaimed and widely\-marketed \fBfoo\fR JavaScript toolkit framework\. They publish it to npm as \fBfoojs\fR, but people are routinely confused when \fBnpm install foo\fR is some different thing\. -. -.IP "4" 4 +.IP 4. 3 Bob writes a parser for the widely\-known \fBfoo\fR file format, because he needs it for work\. Then, he gets a new job, and never updates the prototype\. Later on, Joe writes a much more complete \fBfoo\fR parser, -but can\'t publish, because Bob\'s \fBfoo\fR is in the way\. -. -.IP "" 0 -. +but can't publish, because Bob's \fBfoo\fR is in the way\. + +.RE .P -The validity of Joe\'s claim in each situation can be debated\. However, -Joe\'s appropriate course of action in each case is the same\. -. -.IP "1" 4 +The validity of Joe's claim in each situation can be debated\. However, +Joe's appropriate course of action in each case is the same\. +.RS 0 +.IP 1. 3 \fBnpm owner ls foo\fR\|\. This will tell Joe the email address of the owner (Bob)\. -. -.IP "2" 4 +.IP 2. 3 Joe emails Bob, explaining the situation \fBas respectfully as possible\fR, and what he would like to do with the module name\. He -adds the npm support staff \fIsupport@npmjs\.com\fR to the CC list of +adds the npm support staff support@npmjs\.com to the CC list of the email\. Mention in the email that Bob can run \fBnpm owner add joe foo\fR to add Joe as an owner of the \fBfoo\fR package\. -. -.IP "3" 4 +.IP 3. 3 After a reasonable amount of time, if Bob has not responded, or if -Bob and Joe can\'t come to any sort of resolution, email support \fIsupport@npmjs\.com\fR and we\'ll sort it out\. ("Reasonable" is +Bob and Joe can't come to any sort of resolution, email support +support@npmjs\.com and we'll sort it out\. ("Reasonable" is usually at least 4 weeks, but extra time is allowed around common holidays\.) -. -.IP "" 0 -. -.SH "REASONING" + +.RE +.SH REASONING +.P In almost every case so far, the parties involved have been able to reach an amicable resolution without any major intervention\. Most people really do want to be reasonable, and are probably not even aware that -they\'re in your way\. -. +they're in your way\. .P Module ecosystems are most vibrant and powerful when they are as self\-directed as possible\. If an admin one day deletes something you @@ -94,53 +80,45 @@ had worked on, then that is going to make most people quite upset, regardless of the justification\. When humans solve their problems by talking to other humans with respect, everyone has the chance to end up feeling good about the interaction\. -. -.SH "EXCEPTIONS" +.SH EXCEPTIONS +.P Some things are not allowed, and will be removed without discussion if they are brought to the attention of the npm registry admins, including but not limited to: -. -.IP "1" 4 +.RS 0 +.IP 1. 
3 Malware (that is, a package designed to exploit or harm the machine on which it is installed)\. -. -.IP "2" 4 +.IP 2. 3 Violations of copyright or licenses (for example, cloning an MIT\-licensed program, and then removing or changing the copyright and license statement)\. -. -.IP "3" 4 +.IP 3. 3 Illegal content\. -. -.IP "4" 4 -"Squatting" on a package name that you \fIplan\fR to use, but aren\'t -actually using\. Sorry, I don\'t care how great the name is, or how +.IP 4. 3 +"Squatting" on a package name that you \fIplan\fR to use, but aren't +actually using\. Sorry, I don't care how great the name is, or how perfect a fit it is for the thing that someday might happen\. If -someone wants to use it today, and you\'re just taking up space with -an empty tarball, you\'re going to be evicted\. -. -.IP "5" 4 +someone wants to use it today, and you're just taking up space with +an empty tarball, you're going to be evicted\. +.IP 5. 3 Putting empty packages in the registry\. Packages must have SOME -functionality\. It can be silly, but it can\'t be \fInothing\fR\|\. (See +functionality\. It can be silly, but it can't be \fInothing\fR\|\. (See also: squatting\.) -. -.IP "6" 4 +.IP 6. 3 Doing weird things with the registry, like using it as your own personal application database or otherwise putting non\-packagey things into it\. -. -.IP "" 0 -. + +.RE .P If you see bad behavior like this, please report it right away\. -. -.SH "SEE ALSO" -. -.IP "\(bu" 4 +.SH SEE ALSO +.RS 0 +.IP \(bu 2 npm help 7 registry -. -.IP "\(bu" 4 +.IP \(bu 2 npm help owner -. -.IP "" 0 + +.RE diff --git a/deps/npm/man/man7/npm-faq.7 b/deps/npm/man/man7/npm-faq.7 index 5eefee8d06b..563509a8728 100644 --- a/deps/npm/man/man7/npm-faq.7 +++ b/deps/npm/man/man7/npm-faq.7 @@ -1,145 +1,118 @@ -.\" Generated with Ronnjs 0.3.8 -.\" http://github.com/kapouer/ronnjs/ -. -.TH "NPM\-FAQ" "7" "September 2014" "" "" -. +.TH "NPM\-FAQ" "7" "October 2014" "" "" .SH "NAME" -\fBnpm-faq\fR \-\- Frequently Asked Questions -. -.SH "Where can I find these docs in HTML?" -\fIhttps://www\.npmjs\.org/doc/\fR, or run: -. -.IP "" 4 -. +\fBnpm-faq\fR \- Frequently Asked Questions +.SH Where can I find these docs in HTML? +.P +https://www\.npmjs\.org/doc/, or run: +.P +.RS 2 .nf npm config set viewer browser -. .fi -. -.IP "" 0 -. +.RE .P to open these documents in your default web browser rather than \fBman\fR\|\. -. -.SH "It didn't work\." -That\'s not really a question\. -. -.SH "Why didn't it work?" -I don\'t know yet\. -. -.P -Read the error output, and if you can\'t figure out what it means, +.SH It didn't work\. +.P +That's not really a question\. +.SH Why didn't it work? +.P +I don't know yet\. +.P +Read the error output, and if you can't figure out what it means, do what it says and post a bug with all the information it asks for\. -. -.SH "Where does npm put stuff?" +.SH Where does npm put stuff? +.P See npm help 5 \fBnpm\-folders\fR -. .P tl;dr: -. -.IP "\(bu" 4 +.RS 0 +.IP \(bu 2 Use the \fBnpm root\fR command to see where modules go, and the \fBnpm bin\fR command to see where executables go -. -.IP "\(bu" 4 +.IP \(bu 2 Global installs are different from local installs\. If you install something with the \fB\-g\fR flag, then its executables go in \fBnpm bin \-g\fR and its modules go in \fBnpm root \-g\fR\|\. -. -.IP "" 0 -. -.SH "How do I install something on my computer in a central location?" + +.RE +.SH How do I install something on my computer in a central location? 
+.P Install it globally by tacking \fB\-g\fR or \fB\-\-global\fR to the command\. (This is especially important for command line utilities that need to add their bins to the global system \fBPATH\fR\|\.) -. -.SH "I installed something globally, but I can't " +.SH I installed something globally, but I can't \fBrequire()\fR it +.P Install it locally\. -. .P The global install location is a place for command\-line utilities -to put their bins in the system \fBPATH\fR\|\. It\'s not for use with \fBrequire()\fR\|\. -. +to put their bins in the system \fBPATH\fR\|\. It's not for use with \fBrequire()\fR\|\. .P -If you \fBrequire()\fR a module in your code, then that means it\'s a +If you \fBrequire()\fR a module in your code, then that means it's a dependency, and a part of your program\. You need to install it locally in your program\. -. -.SH "Why can't npm just put everything in one place, like other package managers?" +.SH Why can't npm just put everything in one place, like other package managers? +.P Not every change is an improvement, but every improvement is a change\. -This would be like asking git to do network IO for every commit\. It\'s -not going to happen, because it\'s a terrible idea that causes more +This would be like asking git to do network IO for every commit\. It's +not going to happen, because it's a terrible idea that causes more problems than it solves\. -. .P It is much harder to avoid dependency conflicts without nesting dependencies\. This is fundamental to the way that npm works, and has proven to be an extremely successful approach\. See npm help 5 \fBnpm\-folders\fR for more details\. -. .P If you want a package to be installed in one place, and have all your programs reference the same copy of it, then use the \fBnpm link\fR command\. -That\'s what it\'s for\. Install it globally, then link it into each +That's what it's for\. Install it globally, then link it into each program that uses it\. -. -.SH "Whatever, I really want the old style 'everything global' style\." +.SH Whatever, I really want the old style 'everything global' style\. +.P Write your own package manager\. You could probably even wrap up \fBnpm\fR in a shell script if you really wanted to\. -. .P npm will not help you do something that is known to be a bad idea\. -. -.SH "Should I check my " -Mikeal Rogers answered this question very well: -. -.P -\fIhttp://www\.futurealoof\.com/posts/nodemodules\-in\-git\.html\fR -. -.P -tl;dr -. -.IP "\(bu" 4 -Check \fBnode_modules\fR into git for things you \fBdeploy\fR, such as -websites and apps\. -. -.IP "\(bu" 4 -Do not check \fBnode_modules\fR into git for libraries and modules -intended to be reused\. -. -.IP "\(bu" 4 -Use npm to manage dependencies in your dev environment, but not in -your deployment scripts\. -. -.IP "" 0 -. -.SH "Is it 'npm' or 'NPM' or 'Npm'?" +.SH Should I check my \fBnode_modules\fR folder into git? +.P +Usually, no\. Allow npm to resolve dependencies for your packages\. +.P +For packages you \fBdeploy\fR, such as websites and apps, +you should use npm shrinkwrap to lock down your full dependency tree: +.P +https://www\.npmjs\.org/doc/cli/npm\-shrinkwrap\.html +.P +If you are paranoid about depending on the npm ecosystem, +you should run a private npm mirror or a private cache\. +.P +If you want 100% confidence in being able to reproduce the specific bytes +included in a deployment, you should use an additional mechanism that can +verify contents rather than versions\. 
For example, +Amazon machine images, DigitalOcean snapshots, Heroku slugs, or simple tarballs\. +.SH Is it 'npm' or 'NPM' or 'Npm'? +.P npm should never be capitalized unless it is being displayed in a location that is customarily all\-caps (such as the title of man pages\.) -. -.SH "If 'npm' is an acronym, why is it never capitalized?" +.SH If 'npm' is an acronym, why is it never capitalized? +.P Contrary to the belief of many, "npm" is not in fact an abbreviation for "Node Package Manager"\. It is a recursive bacronymic abbreviation for "npm is not an acronym"\. (If it was "ninaa", then it would be an acronym, and thus incorrectly named\.) -. .P "NPM", however, \fIis\fR an acronym (more precisely, a capitonym) for the National Association of Pastoral Musicians\. You can learn more -about them at \fIhttp://npm\.org/\fR\|\. -. +about them at http://npm\.org/\|\. .P In software, "NPM" is a Non\-Parametric Mapping utility written by Chris Rorden\. You can analyze pictures of brains with it\. Learn more -about the (capitalized) NPM program at \fIhttp://www\.cabiatl\.com/mricro/npm/\fR\|\. -. +about the (capitalized) NPM program at http://www\.cabiatl\.com/mricro/npm/\|\. .P The first seed that eventually grew into this flower was a bash utility named "pm", which was a shortened descendent of "pkgmakeinst", a bash function that was used to install various different things on different -platforms, most often using Yahoo\'s \fByinst\fR\|\. If \fBnpm\fR was ever an +platforms, most often using Yahoo's \fByinst\fR\|\. If \fBnpm\fR was ever an acronym for anything, it was \fBnode pm\fR or maybe \fBnew pm\fR\|\. -. .P So, in all seriousness, the "npm" project is named after its command\-line utility, which was organically selected to be easily typed by a right\-handed @@ -147,183 +120,151 @@ programmer using a US QWERTY keyboard layout, ending with the right\-ring\-finger in a postition to type the \fB\-\fR key for flags and other command\-line arguments\. That command\-line utility is always lower\-case, though it starts most sentences it is a part of\. -. -.SH "How do I list installed packages?" +.SH How do I list installed packages? +.P \fBnpm ls\fR -. -.SH "How do I search for packages?" +.SH How do I search for packages? +.P \fBnpm search\fR -. .P Arguments are greps\. \fBnpm search jsdom\fR shows jsdom packages\. -. -.SH "How do I update npm?" -. +.SH How do I update npm? +.P +.RS 2 .nf -npm update npm \-g -. +npm install npm \-g .fi -. +.RE .P You can also update all outdated local packages by doing \fBnpm update\fR without any arguments, or global packages by doing \fBnpm update \-g\fR\|\. -. .P Occasionally, the version of npm will progress such that the current version cannot be properly installed with the version that you have installed already\. (Consider, if there is ever a bug in the \fBupdate\fR command\.) -. .P In those cases, you can do this: -. -.IP "" 4 -. +.P +.RS 2 .nf curl https://www\.npmjs\.org/install\.sh | sh -. .fi -. -.IP "" 0 -. -.SH "What is a " +.RE +.SH What is a \fBpackage\fR? +.P A package is: -. -.IP "\(bu" 4 +.RS 0 +.IP \(bu 2 a) a folder containing a program described by a package\.json file -. -.IP "\(bu" 4 +.IP \(bu 2 b) a gzipped tarball containing (a) -. -.IP "\(bu" 4 +.IP \(bu 2 c) a url that resolves to (b) -. -.IP "\(bu" 4 +.IP \(bu 2 d) a \fB@\fR that is published on the registry with (c) -. -.IP "\(bu" 4 +.IP \(bu 2 e) a \fB@\fR that points to (d) -. -.IP "\(bu" 4 +.IP \(bu 2 f) a \fB\fR that has a "latest" tag satisfying (e) -. 
-.IP "\(bu" 4 +.IP \(bu 2 g) a \fBgit\fR url that, when cloned, results in (a)\. -. -.IP "" 0 -. + +.RE .P Even if you never publish your package, you can still get a lot of benefits of using npm if you just want to write a node program (a), and perhaps if you also want to be able to easily install it elsewhere after packing it up into a tarball (b)\. -. .P Git urls can be of the form: -. -.IP "" 4 -. +.P +.RS 2 .nf git://github\.com/user/project\.git#commit\-ish git+ssh://user@hostname:project\.git#commit\-ish git+http://user@hostname/project/blah\.git#commit\-ish git+https://user@hostname/project/blah\.git#commit\-ish -. .fi -. -.IP "" 0 -. +.RE .P The \fBcommit\-ish\fR can be any tag, sha, or branch which can be supplied as an argument to \fBgit checkout\fR\|\. The default is \fBmaster\fR\|\. -. -.SH "What is a " +.SH What is a \fBmodule\fR? +.P A module is anything that can be loaded with \fBrequire()\fR in a Node\.js program\. The following things are all examples of things that can be loaded as modules: -. -.IP "\(bu" 4 +.RS 0 +.IP \(bu 2 A folder with a \fBpackage\.json\fR file containing a \fBmain\fR field\. -. -.IP "\(bu" 4 +.IP \(bu 2 A folder with an \fBindex\.js\fR file in it\. -. -.IP "\(bu" 4 +.IP \(bu 2 A JavaScript file\. -. -.IP "" 0 -. + +.RE .P Most npm packages are modules, because they are libraries that you -load with \fBrequire\fR\|\. However, there\'s no requirement that an npm +load with \fBrequire\fR\|\. However, there's no requirement that an npm package be a module! Some only contain an executable command\-line -interface, and don\'t provide a \fBmain\fR field for use in Node programs\. -. +interface, and don't provide a \fBmain\fR field for use in Node programs\. .P -Almost all npm packages (at least, those that are Node programs) \fIcontain\fR many modules within them (because every file they load with \fBrequire()\fR is a module)\. -. +Almost all npm packages (at least, those that are Node programs) +\fIcontain\fR many modules within them (because every file they load with +\fBrequire()\fR is a module)\. .P In the context of a Node program, the \fBmodule\fR is also the thing that was loaded \fIfrom\fR a file\. For example, in the following program: -. -.IP "" 4 -. +.P +.RS 2 .nf -var req = require(\'request\') -. +var req = require('request') .fi -. -.IP "" 0 -. +.RE .P we might say that "The variable \fBreq\fR refers to the \fBrequest\fR module"\. -. -.SH "So, why is it the "" +.SH So, why is it the "\fBnode_modules\fR" folder, but "\fBpackage\.json\fR" file? Why not \fBnode_packages\fR or \fBmodule\.json\fR? +.P The \fBpackage\.json\fR file defines the package\. (See "What is a package?" above\.) -. .P The \fBnode_modules\fR folder is the place Node\.js looks for modules\. (See "What is a module?" above\.) -. .P For example, if you create a file at \fBnode_modules/foo\.js\fR and then -had a program that did \fBvar f = require(\'foo\.js\')\fR then it would load +had a program that did \fBvar f = require('foo\.js')\fR then it would load the module\. However, \fBfoo\.js\fR is not a "package" in this case, because it does not have a package\.json\. -. .P -Alternatively, if you create a package which does not have an \fBindex\.js\fR or a \fB"main"\fR field in the \fBpackage\.json\fR file, then it is -not a module\. Even if it\'s installed in \fBnode_modules\fR, it can\'t be +Alternatively, if you create a package which does not have an +\fBindex\.js\fR or a \fB"main"\fR field in the \fBpackage\.json\fR file, then it is +not a module\. 
Even if it's installed in \fBnode_modules\fR, it can't be an argument to \fBrequire()\fR\|\. -. -.SH ""node_modules"" +.SH \fB"node_modules"\fR is the name of my deity's arch\-rival, and a Forbidden Word in my religion\. Can I configure npm to use a different folder? +.P No\. This will never happen\. This question comes up sometimes, -because it seems silly from the outside that npm couldn\'t just be +because it seems silly from the outside that npm couldn't just be configured to put stuff somewhere else, and then npm could load them -from there\. It\'s an arbitrary spelling choice, right? What\'s the big +from there\. It's an arbitrary spelling choice, right? What's the big deal? -. .P -At the time of this writing, the string \fB\'node_modules\'\fR appears 151 +At the time of this writing, the string \fB\|'node_modules'\fR appears 151 times in 53 separate files in npm and node core (excluding tests and documentation)\. -. .P -Some of these references are in node\'s built\-in module loader\. Since +Some of these references are in node's built\-in module loader\. Since npm is not involved \fBat all\fR at run\-time, node itself would have to -be configured to know where you\'ve decided to stick stuff\. Complexity +be configured to know where you've decided to stick stuff\. Complexity hurdle #1\. Since the Node module system is locked, this cannot be -changed, and is enough to kill this request\. But I\'ll continue, in -deference to your deity\'s delicate feelings regarding spelling\. -. +changed, and is enough to kill this request\. But I'll continue, in +deference to your deity's delicate feelings regarding spelling\. .P Many of the others are in dependencies that npm uses, which are not necessarily tightly coupled to npm (in the sense that they do not read -npm\'s configuration files, etc\.) Each of these would have to be +npm's configuration files, etc\.) Each of these would have to be configured to take the name of the \fBnode_modules\fR folder as a parameter\. Complexity hurdle #2\. -. .P Furthermore, npm has the ability to "bundle" dependencies by adding the dep names to the \fB"bundledDependencies"\fR list in package\.json, @@ -332,148 +273,127 @@ if the author of a module bundles its dependencies, and they use a different spelling for \fBnode_modules\fR? npm would have to rename the folder at publish time, and then be smart enough to unpack it using your locally configured name\. Complexity hurdle #3\. -. .P -Furthermore, what happens when you \fIchange\fR this name? Fine, it\'s -easy enough the first time, just rename the \fBnode_modules\fR folders to \fB\|\./blergyblerp/\fR or whatever name you choose\. But what about when you -change it again? npm doesn\'t currently track any state about past +Furthermore, what happens when you \fIchange\fR this name? Fine, it's +easy enough the first time, just rename the \fBnode_modules\fR folders to +\fB\|\./blergyblerp/\fR or whatever name you choose\. But what about when you +change it again? npm doesn't currently track any state about past configuration settings, so this would be rather difficult to do properly\. It would have to track every previous value for this -config, and always accept any of them, or else yesterday\'s install may +config, and always accept any of them, or else yesterday's install may be broken tomorrow\. Complexity hurdle #4\. -. .P Never going to happen\. The folder is named \fBnode_modules\fR\|\. It is written indelibly in the Node Way, handed down from the ancient times of Node 0\.3\. -. 
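.P
To make the package/module distinction above concrete, here is a minimal sketch (the \fBfoo\.js\fR body and the \fBapp\.js\fR file are hypothetical, standing in for the \fBnode_modules/foo\.js\fR example discussed earlier):
.P
.RS 2
.nf
// node_modules/foo\.js: a lone file is a module, but not a package,
// because no package\.json describes it\.
module\.exports = function () { return 'hi' }

// app\.js: Node's loader still finds the file by name in node_modules\.
var f = require('foo\.js')
console\.log(f())  // prints: hi
.fi
.RE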
-.SH "How do I install node with npm?" -You don\'t\. Try one of these node version managers: -. +.SH How do I install node with npm? +.P +You don't\. Try one of these node version managers: .P Unix: -. -.IP "\(bu" 4 -\fIhttp://github\.com/isaacs/nave\fR -. -.IP "\(bu" 4 -\fIhttp://github\.com/visionmedia/n\fR -. -.IP "\(bu" 4 -\fIhttp://github\.com/creationix/nvm\fR -. -.IP "" 0 -. +.RS 0 +.IP \(bu 2 +http://github\.com/isaacs/nave +.IP \(bu 2 +http://github\.com/visionmedia/n +.IP \(bu 2 +http://github\.com/creationix/nvm + +.RE .P Windows: -. -.IP "\(bu" 4 -\fIhttp://github\.com/marcelklehr/nodist\fR -. -.IP "\(bu" 4 -\fIhttps://github\.com/hakobera/nvmw\fR -. -.IP "\(bu" 4 -\fIhttps://github\.com/nanjingboy/nvmw\fR -. -.IP "" 0 -. -.SH "How can I use npm for development?" +.RS 0 +.IP \(bu 2 +http://github\.com/marcelklehr/nodist +.IP \(bu 2 +https://github\.com/hakobera/nvmw +.IP \(bu 2 +https://github\.com/nanjingboy/nvmw + +.RE +.SH How can I use npm for development? +.P See npm help 7 \fBnpm\-developers\fR and npm help 5 \fBpackage\.json\fR\|\. -. .P -You\'ll most likely want to \fBnpm link\fR your development folder\. That\'s +You'll most likely want to \fBnpm link\fR your development folder\. That's awesomely handy\. -. .P To set up your own private registry, check out npm help 7 \fBnpm\-registry\fR\|\. -. -.SH "Can I list a url as a dependency?" +.SH Can I list a url as a dependency? +.P Yes\. It should be a url to a gzipped tarball containing a single folder that has a package\.json in its root, or a git url\. (See "what is a package?" above\.) -. -.SH "How do I symlink to a dev folder so I don't have to keep re\-installing?" +.SH How do I symlink to a dev folder so I don't have to keep re\-installing? +.P See npm help \fBnpm\-link\fR -. -.SH "The package registry website\. What is that exactly?" +.SH The package registry website\. What is that exactly? +.P See npm help 7 \fBnpm\-registry\fR\|\. -. -.SH "I forgot my password, and can't publish\. How do I reset it?" -Go to \fIhttps://npmjs\.org/forgot\fR\|\. -. -.SH "I get ECONNREFUSED a lot\. What's up?" -Either the registry is down, or node\'s DNS isn\'t able to reach out\. -. -.P -To check if the registry is down, open up \fIhttps://registry\.npmjs\.org/\fR in a web browser\. This will also tell +.SH I forgot my password, and can't publish\. How do I reset it? +.P +Go to https://npmjs\.org/forgot\|\. +.SH I get ECONNREFUSED a lot\. What's up? +.P +Either the registry is down, or node's DNS isn't able to reach out\. +.P +To check if the registry is down, open up +https://registry\.npmjs\.org/ in a web browser\. This will also tell you if you are just unable to access the internet for some reason\. -. .P -If the registry IS down, let us know by emailing \fIsupport@npmjs\.com\fR -or posting an issue at \fIhttps://github\.com/npm/npm/issues\fR\|\. If it\'s -down for the world (and not just on your local network) then we\'re +If the registry IS down, let us know by emailing support@npmjs\.com +or posting an issue at https://github\.com/npm/npm/issues\|\. If it's +down for the world (and not just on your local network) then we're probably already being pinged about it\. -. .P You can also often get a faster response by visiting the #npm channel on Freenode IRC\. -. -.SH "Why no namespaces?" -Please see this discussion: \fIhttps://github\.com/npm/npm/issues/798\fR -. +.SH Why no namespaces? 
+.P +Please see this discussion: https://github\.com/npm/npm/issues/798 .P -tl;dr \- It doesn\'t actually make things better, and can make them worse\. -. +tl;dr \- It doesn't actually make things better, and can make them worse\. .P -If you want to namespace your own packages, you may: simply use the \fB\-\fR character to separate the names\. npm is a mostly anarchic system\. +If you want to namespace your own packages, you may: simply use the +\fB\-\fR character to separate the names\. npm is a mostly anarchic system\. There is not sufficient need to impose namespace rules on everyone\. -. -.SH "Who does npm?" +.SH Who does npm? +.P npm was originally written by Isaac Z\. Schlueter, and many others have contributed to it, some of them quite substantially\. -. .P The npm open source project, The npm Registry, and the community website \fIhttps://www\.npmjs\.org\fR are maintained and operated by the good folks at npm, Inc\. \fIhttp://www\.npmjs\.com\fR -. -.SH "I have a question or request not addressed here\. Where should I put it?" +.SH I have a question or request not addressed here\. Where should I put it? +.P Post an issue on the github project: -. -.IP "\(bu" 4 -\fIhttps://github\.com/npm/npm/issues\fR -. -.IP "" 0 -. -.SH "Why does npm hate me?" +.RS 0 +.IP \(bu 2 +https://github\.com/npm/npm/issues + +.RE +.SH Why does npm hate me? +.P npm is not capable of hatred\. It loves everyone, especially you\. -. -.SH "SEE ALSO" -. -.IP "\(bu" 4 +.SH SEE ALSO +.RS 0 +.IP \(bu 2 npm help npm -. -.IP "\(bu" 4 +.IP \(bu 2 npm help 7 developers -. -.IP "\(bu" 4 +.IP \(bu 2 npm help 5 package\.json -. -.IP "\(bu" 4 +.IP \(bu 2 npm help config -. -.IP "\(bu" 4 +.IP \(bu 2 npm help 7 config -. -.IP "\(bu" 4 +.IP \(bu 2 npm help 5 npmrc -. -.IP "\(bu" 4 +.IP \(bu 2 npm help 7 config -. -.IP "\(bu" 4 +.IP \(bu 2 npm help 5 folders -. -.IP "" 0 + +.RE diff --git a/deps/npm/man/man7/npm-index.7 b/deps/npm/man/man7/npm-index.7 index 763b3dd3e52..442815a2e52 100644 --- a/deps/npm/man/man7/npm-index.7 +++ b/deps/npm/man/man7/npm-index.7 @@ -1,322 +1,316 @@ -.\" Generated with Ronnjs 0.3.8 -.\" http://github.com/kapouer/ronnjs/ -. -.TH "NPM\-INDEX" "7" "September 2014" "" "" -. +.TH "NPM\-INDEX" "7" "October 2014" "" "" .SH "NAME" -\fBnpm-index\fR \-\- Index of all npm documentation -. -.SS "npm help README" +\fBnpm-index\fR \- Index of all npm documentation +.SS npm help README +.P node package manager -. -.SH "Command Line Documentation" +.SH Command Line Documentation +.P Using npm on the command line -. -.SS "npm help npm" +.SS npm help npm +.P node package manager -. -.SS "npm help adduser" +.SS npm help adduser +.P Add a registry user account -. -.SS "npm help bin" +.SS npm help bin +.P Display npm bin folder -. -.SS "npm help bugs" +.SS npm help bugs +.P Bugs for a package in a web browser maybe -. -.SS "npm help build" +.SS npm help build +.P Build a package -. -.SS "npm help bundle" +.SS npm help bundle +.P REMOVED -. -.SS "npm help cache" +.SS npm help cache +.P Manipulates packages cache -. -.SS "npm help completion" +.SS npm help completion +.P Tab Completion for npm -. -.SS "npm help config" +.SS npm help config +.P Manage the npm configuration files -. -.SS "npm help dedupe" +.SS npm help dedupe +.P Reduce duplication -. -.SS "npm help deprecate" +.SS npm help deprecate +.P Deprecate a version of a package -. -.SS "npm help docs" +.SS npm help docs +.P Docs for a package in a web browser maybe -. -.SS "npm help edit" +.SS npm help edit +.P Edit an installed package -. 
-.SS "npm help explore" +.SS npm help explore +.P Browse an installed package -. -.SS "npm help help\-search" +.SS npm help help\-search +.P Search npm help documentation -. -.SS "npm help help" +.SS npm help help +.P Get help on npm -. -.SS "npm help init" +.SS npm help init +.P Interactively create a package\.json file -. -.SS "npm help install" +.SS npm help install +.P Install a package -. -.SS "npm help link" +.SS npm help link +.P Symlink a package folder -. -.SS "npm help ls" +.SS npm help ls +.P List installed packages -. -.SS "npm help outdated" +.SS npm help outdated +.P Check for outdated packages -. -.SS "npm help owner" +.SS npm help owner +.P Manage package owners -. -.SS "npm help pack" +.SS npm help pack +.P Create a tarball from a package -. -.SS "npm help prefix" +.SS npm help prefix +.P Display prefix -. -.SS "npm help prune" +.SS npm help prune +.P Remove extraneous packages -. -.SS "npm help publish" +.SS npm help publish +.P Publish a package -. -.SS "npm help rebuild" +.SS npm help rebuild +.P Rebuild a package -. -.SS "npm help repo" +.SS npm help repo +.P Open package repository page in the browser -. -.SS "npm help restart" +.SS npm help restart +.P Start a package -. -.SS "npm help rm" +.SS npm help rm +.P Remove a package -. -.SS "npm help root" +.SS npm help root +.P Display npm root -. -.SS "npm help run\-script" +.SS npm help run\-script +.P Run arbitrary package scripts -. -.SS "npm help search" +.SS npm help search +.P Search for packages -. -.SS "npm help shrinkwrap" +.SS npm help shrinkwrap +.P Lock down dependency versions -. -.SS "npm help star" +.SS npm help star +.P Mark your favorite packages -. -.SS "npm help stars" +.SS npm help stars +.P View packages marked as favorites -. -.SS "npm help start" +.SS npm help start +.P Start a package -. -.SS "npm help stop" +.SS npm help stop +.P Stop a package -. -.SS "npm help submodule" -Add a package as a git submodule -. -.SS "npm help tag" +.SS npm help tag +.P Tag a published version -. -.SS "npm help test" +.SS npm help test +.P Test a package -. -.SS "npm help uninstall" +.SS npm help uninstall +.P Remove a package -. -.SS "npm help unpublish" +.SS npm help unpublish +.P Remove a package from the registry -. -.SS "npm help update" +.SS npm help update +.P Update a package -. -.SS "npm help version" +.SS npm help version +.P Bump a package version -. -.SS "npm help view" +.SS npm help view +.P View registry info -. -.SS "npm help whoami" +.SS npm help whoami +.P Display npm username -. -.SH "API Documentation" +.SH API Documentation +.P Using npm in your Node programs -. -.SS "npm apihelp npm" +.SS npm apihelp npm +.P node package manager -. -.SS "npm apihelp bin" +.SS npm apihelp bin +.P Display npm bin folder -. -.SS "npm apihelp bugs" +.SS npm apihelp bugs +.P Bugs for a package in a web browser maybe -. -.SS "npm apihelp cache" +.SS npm apihelp cache +.P manage the npm cache programmatically -. -.SS "npm apihelp commands" +.SS npm apihelp commands +.P npm commands -. -.SS "npm apihelp config" +.SS npm apihelp config +.P Manage the npm configuration files -. -.SS "npm apihelp deprecate" +.SS npm apihelp deprecate +.P Deprecate a version of a package -. -.SS "npm apihelp docs" +.SS npm apihelp docs +.P Docs for a package in a web browser maybe -. -.SS "npm apihelp edit" +.SS npm apihelp edit +.P Edit an installed package -. -.SS "npm apihelp explore" +.SS npm apihelp explore +.P Browse an installed package -. -.SS "npm apihelp help\-search" +.SS npm apihelp help\-search +.P Search the help pages -. 
-.SS "npm apihelp init" +.SS npm apihelp init +.P Interactively create a package\.json file -. -.SS "npm apihelp install" +.SS npm apihelp install +.P install a package programmatically -. -.SS "npm apihelp link" +.SS npm apihelp link +.P Symlink a package folder -. -.SS "npm apihelp load" +.SS npm apihelp load +.P Load config settings -. -.SS "npm apihelp ls" +.SS npm apihelp ls +.P List installed packages -. -.SS "npm apihelp outdated" +.SS npm apihelp outdated +.P Check for outdated packages -. -.SS "npm apihelp owner" +.SS npm apihelp owner +.P Manage package owners -. -.SS "npm apihelp pack" +.SS npm apihelp pack +.P Create a tarball from a package -. -.SS "npm apihelp prefix" +.SS npm apihelp prefix +.P Display prefix -. -.SS "npm apihelp prune" +.SS npm apihelp prune +.P Remove extraneous packages -. -.SS "npm apihelp publish" +.SS npm apihelp publish +.P Publish a package -. -.SS "npm apihelp rebuild" +.SS npm apihelp rebuild +.P Rebuild a package -. -.SS "npm apihelp repo" +.SS npm apihelp repo +.P Open package repository page in the browser -. -.SS "npm apihelp restart" +.SS npm apihelp restart +.P Start a package -. -.SS "npm apihelp root" +.SS npm apihelp root +.P Display npm root -. -.SS "npm apihelp run\-script" +.SS npm apihelp run\-script +.P Run arbitrary package scripts -. -.SS "npm apihelp search" +.SS npm apihelp search +.P Search for packages -. -.SS "npm apihelp shrinkwrap" +.SS npm apihelp shrinkwrap +.P programmatically generate package shrinkwrap file -. -.SS "npm apihelp start" +.SS npm apihelp start +.P Start a package -. -.SS "npm apihelp stop" +.SS npm apihelp stop +.P Stop a package -. -.SS "npm apihelp submodule" -Add a package as a git submodule -. -.SS "npm apihelp tag" +.SS npm apihelp tag +.P Tag a published version -. -.SS "npm apihelp test" +.SS npm apihelp test +.P Test a package -. -.SS "npm apihelp uninstall" +.SS npm apihelp uninstall +.P uninstall a package programmatically -. -.SS "npm apihelp unpublish" +.SS npm apihelp unpublish +.P Remove a package from the registry -. -.SS "npm apihelp update" +.SS npm apihelp update +.P Update a package -. -.SS "npm apihelp version" +.SS npm apihelp version +.P Bump a package version -. -.SS "npm apihelp view" +.SS npm apihelp view +.P View registry info -. -.SS "npm apihelp whoami" +.SS npm apihelp whoami +.P Display npm username -. -.SH "Files" +.SH Files +.P File system structures npm uses -. -.SS "npm help 5 folders" +.SS npm help 5 folders +.P Folder Structures Used by npm -. -.SS "npm help 5 npmrc" +.SS npm help 5 npmrc +.P The npm config files -. -.SS "npm help 5 package\.json" -Specifics of npm\'s package\.json handling -. -.SH "Misc" +.SS npm help 5 package\.json +.P +Specifics of npm's package\.json handling +.SH Misc +.P Various other bits and bobs -. -.SS "npm help 7 coding\-style" -npm\'s "funny" coding style -. -.SS "npm help 7 config" +.SS npm help 7 coding\-style +.P +npm's "funny" coding style +.SS npm help 7 config +.P More than you probably want to know about npm configuration -. -.SS "npm help 7 developers" +.SS npm help 7 developers +.P Developer Guide -. -.SS "npm help 7 disputes" +.SS npm help 7 disputes +.P Handling Module Name Disputes -. -.SS "npm help 7 faq" +.SS npm help 7 faq +.P Frequently Asked Questions -. -.SS "npm help 7 index" +.SS npm help 7 index +.P Index of all npm documentation -. -.SS "npm help 7 registry" +.SS npm help 7 registry +.P The JavaScript Package Registry -. 
-.SS "npm help 7 scripts" +.SS npm help 7 scope +.P +Scoped packages +.SS npm help 7 scripts +.P How npm handles the "scripts" field -. -.SS "npm help 7 removing\-npm" +.SS npm help 7 removing\-npm +.P Cleaning the Slate -. -.SS "npm help 7 semver" +.SS npm help 7 semver +.P The semantic versioner for npm + diff --git a/deps/npm/man/man7/npm-registry.7 b/deps/npm/man/man7/npm-registry.7 index c190779ad1a..9de209d468f 100644 --- a/deps/npm/man/man7/npm-registry.7 +++ b/deps/npm/man/man7/npm-registry.7 @@ -1,82 +1,70 @@ -.\" Generated with Ronnjs 0.3.8 -.\" http://github.com/kapouer/ronnjs/ -. -.TH "NPM\-REGISTRY" "7" "September 2014" "" "" -. +.TH "NPM\-REGISTRY" "7" "October 2014" "" "" .SH "NAME" -\fBnpm-registry\fR \-\- The JavaScript Package Registry -. -.SH "DESCRIPTION" +\fBnpm-registry\fR \- The JavaScript Package Registry +.SH DESCRIPTION +.P To resolve packages by name and version, npm talks to a registry website that implements the CommonJS Package Registry specification for reading package info\. -. .P -Additionally, npm\'s package registry implementation supports several +Additionally, npm's package registry implementation supports several write APIs as well, to allow for publishing packages and managing user account information\. -. .P -The official public npm registry is at \fIhttp://registry\.npmjs\.org/\fR\|\. It -is powered by a CouchDB database at \fIhttp://isaacs\.iriscouch\.com/registry\fR\|\. The code for the couchapp is -available at \fIhttp://github\.com/npm/npmjs\.org\fR\|\. npm user accounts -are CouchDB users, stored in the \fIhttp://isaacs\.iriscouch\.com/_users\fR -database\. -. +The official public npm registry is at http://registry\.npmjs\.org/\|\. It +is powered by a CouchDB database, of which there is a public mirror at +http://skimdb\.npmjs\.com/registry\|\. The code for the couchapp is +available at http://github\.com/npm/npm\-registry\-couchapp\|\. +.P +The registry URL used is determined by the scope of the package (see +npm help 7 \fBnpm\-scope\fR)\. If no scope is specified, the default registry is used, which is +supplied by the \fBregistry\fR config parameter\. See npm help \fBnpm\-config\fR, +npm help 5 \fBnpmrc\fR, and npm help 7 \fBnpm\-config\fR for more on managing npm's configuration\. +.SH Can I run my own private registry? .P -The registry URL is supplied by the \fBregistry\fR config parameter\. See npm help \fBnpm\-config\fR, npm help 5 \fBnpmrc\fR, and npm help 7 \fBnpm\-config\fR for more on managing -npm\'s configuration\. -. -.SH "Can I run my own private registry?" Yes! -. .P The easiest way is to replicate the couch database, and use the same (or similar) design doc to implement the APIs\. -. .P If you set up continuous replication from the official CouchDB, and then -set your internal CouchDB as the registry config, then you\'ll be able +set your internal CouchDB as the registry config, then you'll be able to read any published packages, in addition to your private ones, and by default will only publish internally\. If you then want to publish a -package for the whole world to see, you can simply override the \fB\-\-registry\fR config for that command\. -. -.SH "I don't want my package published in the official registry\. It's private\." +package for the whole world to see, you can simply override the +\fB\-\-registry\fR config for that command\. +.SH I don't want my package published in the official registry\. It's private\. 
+.P Set \fB"private": true\fR in your package\.json to prevent it from being -published at all, or \fB"publishConfig":{"registry":"http://my\-internal\-registry\.local"}\fR +published at all, or +\fB"publishConfig":{"registry":"http://my\-internal\-registry\.local"}\fR to force it to be published only to your internal registry\. -. .P See npm help 5 \fBpackage\.json\fR for more info on what goes in the package\.json file\. -. -.SH "Will you replicate from my registry into the public one?" +.SH Will you replicate from my registry into the public one? +.P No\. If you want things to be public, then publish them into the public registry using npm\. What little security there is would be for nought otherwise\. -. -.SH "Do I have to use couchdb to build a registry that npm can talk to?" -No, but it\'s way easier\. Basically, yes, you do, or you have to +.SH Do I have to use couchdb to build a registry that npm can talk to? +.P +No, but it's way easier\. Basically, yes, you do, or you have to effectively implement the entire CouchDB API anyway\. -. -.SH "Is there a website or something to see package docs and such?" -Yes, head over to \fIhttps://npmjs\.org/\fR -. -.SH "SEE ALSO" -. -.IP "\(bu" 4 +.SH Is there a website or something to see package docs and such? +.P +Yes, head over to https://npmjs\.org/ +.SH SEE ALSO +.RS 0 +.IP \(bu 2 npm help config -. -.IP "\(bu" 4 +.IP \(bu 2 npm help 7 config -. -.IP "\(bu" 4 +.IP \(bu 2 npm help 5 npmrc -. -.IP "\(bu" 4 +.IP \(bu 2 npm help 7 developers -. -.IP "\(bu" 4 +.IP \(bu 2 npm help 7 disputes -. -.IP "" 0 + +.RE diff --git a/deps/npm/man/man7/npm-scope.7 b/deps/npm/man/man7/npm-scope.7 index ef8d251e0a8..f876e4eaaa6 100644 --- a/deps/npm/man/man7/npm-scope.7 +++ b/deps/npm/man/man7/npm-scope.7 @@ -1,4 +1,4 @@ -.TH "NPM\-SCOPE" "7" "September 2014" "" "" +.TH "NPM\-SCOPE" "7" "October 2014" "" "" .SH "NAME" \fBnpm-scope\fR \- Scoped packages .SH DESCRIPTION @@ -9,9 +9,9 @@ or underscores)\. When used in package names, preceded by an @\-symbol and followed by a slash, e\.g\. .P .RS 2 -.EX +.nf @somescope/somepackagename -.EE +.fi .RE .P Scopes are a way of grouping related packages together, and also affect a few @@ -28,23 +28,23 @@ scoped modules will be in \fBnode_modules/@myorg/packagename\fR\|\. The scope fo (\fB@myorg\fR) is simply the name of the scope preceded by an @\-symbol, and can contain any number of scoped packages\. .P -A scoped package is install by referencing it by name, preceded by an @\-symbol, -in \fBnpm install\fR: +A scoped package is installed by referencing it by name, preceded by an +@\-symbol, in \fBnpm install\fR: .P .RS 2 -.EX +.nf npm install @myorg/mypackage -.EE +.fi .RE .P Or in \fBpackage\.json\fR: .P .RS 2 -.EX +.nf "dependencies": { "@myorg/mypackage": "^1\.3\.0" } -.EE +.fi .RE .P Note that if the @\-symbol is omitted in either case npm will instead attempt to @@ -55,9 +55,9 @@ Because scoped packages are installed into a scope folder, you have to include the name of the scope when requiring them in your code, e\.g\. .P .RS 2 -.EX +.nf require('@myorg/mypackage') -.EE +.fi .RE .P There is nothing special about the way Node treats scope folders, this is @@ -77,9 +77,9 @@ private registries, such as npm Enterprise\. You can associate a scope with a registry at login, e\.g\. 
.P .RS 2 -.EX +.nf npm login \-\-registry=http://reg\.example\.com \-\-scope=@myco -.EE +.fi .RE .P Scopes have a many\-to\-one relationship with registries: one registry can @@ -88,9 +88,9 @@ host multiple scopes, but a scope only ever points to one registry\. You can also associate a scope with a registry using \fBnpm config\fR: .P .RS 2 -.EX +.nf npm config set @myco:registry http://reg\.example\.com -.EE +.fi .RE .P Once a scope is associated with a registry, any \fBnpm install\fR for a package diff --git a/deps/npm/man/man7/npm-scripts.7 b/deps/npm/man/man7/npm-scripts.7 index d4d045f8709..9d11f4626cb 100644 --- a/deps/npm/man/man7/npm-scripts.7 +++ b/deps/npm/man/man7/npm-scripts.7 @@ -1,77 +1,64 @@ -.\" Generated with Ronnjs 0.3.8 -.\" http://github.com/kapouer/ronnjs/ -. -.TH "NPM\-SCRIPTS" "7" "September 2014" "" "" -. +.TH "NPM\-SCRIPTS" "7" "October 2014" "" "" .SH "NAME" -\fBnpm-scripts\fR \-\- How npm handles the "scripts" field -. -.SH "DESCRIPTION" -npm supports the "scripts" member of the package\.json script, for the +\fBnpm-scripts\fR \- How npm handles the "scripts" field +.SH DESCRIPTION +.P +npm supports the "scripts" property of the package\.json script, for the following scripts: -. -.IP "\(bu" 4 +.RS 0 +.IP \(bu 2 prepublish: Run BEFORE the package is published\. (Also run on local \fBnpm install\fR without any arguments\.) -. -.IP "\(bu" 4 +.IP \(bu 2 publish, postpublish: Run AFTER the package is published\. -. -.IP "\(bu" 4 +.IP \(bu 2 preinstall: Run BEFORE the package is installed -. -.IP "\(bu" 4 +.IP \(bu 2 install, postinstall: Run AFTER the package is installed\. -. -.IP "\(bu" 4 +.IP \(bu 2 preuninstall, uninstall: Run BEFORE the package is uninstalled\. -. -.IP "\(bu" 4 +.IP \(bu 2 postuninstall: Run AFTER the package is uninstalled\. -. -.IP "\(bu" 4 +.IP \(bu 2 preupdate: Run BEFORE the package is updated with the update command\. -. -.IP "\(bu" 4 +.IP \(bu 2 update, postupdate: Run AFTER the package is updated with the update command\. -. -.IP "\(bu" 4 +.IP \(bu 2 pretest, test, posttest: Run by the \fBnpm test\fR command\. -. -.IP "\(bu" 4 +.IP \(bu 2 prestop, stop, poststop: Run by the \fBnpm stop\fR command\. -. -.IP "\(bu" 4 +.IP \(bu 2 prestart, start, poststart: Run by the \fBnpm start\fR command\. -. -.IP "\(bu" 4 +.IP \(bu 2 prerestart, restart, postrestart: Run by the \fBnpm restart\fR command\. Note: \fBnpm restart\fR will run the stop and start scripts if no \fBrestart\fR script is provided\. -. -.IP "" 0 -. + +.RE .P -Additionally, arbitrary scripts can be run by doing \fBnpm run\-script \fR\|\. -. -.SH "NOTE: INSTALL SCRIPTS ARE AN ANTIPATTERN" -\fBtl;dr\fR Don\'t use \fBinstall\fR\|\. Use a \fB\|\.gyp\fR file for compilation, and \fBprepublish\fR for anything else\. -. +Additionally, arbitrary scripts can be executed by running \fBnpm +run\-script \fR\|\. \fIPre\fR and \fIpost\fR commands with matching +names will be run for those as well (e\.g\. \fBpremyscript\fR, \fBmyscript\fR, +\fBpostmyscript\fR)\. +.SH NOTE: INSTALL SCRIPTS ARE AN ANTIPATTERN .P -You should almost never have to explicitly set a \fBpreinstall\fR or \fBinstall\fR script\. If you are doing this, please consider if there is +\fBtl;dr\fR Don't use \fBinstall\fR\|\. Use a \fB\|\.gyp\fR file for compilation, and +\fBprepublish\fR for anything else\. +.P +You should almost never have to explicitly set a \fBpreinstall\fR or +\fBinstall\fR script\. If you are doing this, please consider if there is another option\. -. 
.P The only valid use of \fBinstall\fR or \fBpreinstall\fR scripts is for compilation which must be done on the target architecture\. In early @@ -79,173 +66,147 @@ versions of node, this was often done using the \fBnode\-waf\fR scripts, or a standalone \fBMakefile\fR, and early versions of npm required that it be explicitly set in package\.json\. This was not portable, and harder to do properly\. -. .P -In the current version of node, the standard way to do this is using a \fB\|\.gyp\fR file\. If you have a file with a \fB\|\.gyp\fR extension in the root +In the current version of node, the standard way to do this is using a +\fB\|\.gyp\fR file\. If you have a file with a \fB\|\.gyp\fR extension in the root of your package, then npm will run the appropriate \fBnode\-gyp\fR commands automatically at install time\. This is the only officially supported method for compiling binary addons, and does not require that you add anything to your package\.json file\. -. .P If you have to do other things before your package is used, in a way that is not dependent on the operating system or architecture of the target system, then use a \fBprepublish\fR script instead\. This includes tasks such as: -. -.IP "\(bu" 4 +.RS 0 +.IP \(bu 2 Compile CoffeeScript source code into JavaScript\. -. -.IP "\(bu" 4 +.IP \(bu 2 Create minified versions of JavaScript source code\. -. -.IP "\(bu" 4 +.IP \(bu 2 Fetch remote resources that your package will use\. -. -.IP "" 0 -. + +.RE .P -The advantage of doing these things at \fBprepublish\fR time instead of \fBpreinstall\fR or \fBinstall\fR time is that they can be done once, in a +The advantage of doing these things at \fBprepublish\fR time instead of +\fBpreinstall\fR or \fBinstall\fR time is that they can be done once, in a single place, and thus greatly reduce complexity and variability\. Additionally, this means that: -. -.IP "\(bu" 4 +.RS 0 +.IP \(bu 2 You can depend on \fBcoffee\-script\fR as a \fBdevDependency\fR, and thus -your users don\'t need to have it installed\. -. -.IP "\(bu" 4 -You don\'t need to include the minifiers in your package, reducing +your users don't need to have it installed\. +.IP \(bu 2 +You don't need to include the minifiers in your package, reducing the size for your users\. -. -.IP "\(bu" 4 -You don\'t need to rely on your users having \fBcurl\fR or \fBwget\fR or +.IP \(bu 2 +You don't need to rely on your users having \fBcurl\fR or \fBwget\fR or other system tools on the target machines\. -. -.IP "" 0 -. -.SH "DEFAULT VALUES" + +.RE +.SH DEFAULT VALUES +.P npm will default some script values based on package contents\. -. -.IP "\(bu" 4 +.RS 0 +.IP \(bu 2 \fB"start": "node server\.js"\fR: -. -.IP If there is a \fBserver\.js\fR file in the root of your package, then npm will default the \fBstart\fR command to \fBnode server\.js\fR\|\. -. -.IP "\(bu" 4 +.IP \(bu 2 \fB"preinstall": "node\-waf clean || true; node\-waf configure build"\fR: -. -.IP If there is a \fBwscript\fR file in the root of your package, npm will default the \fBpreinstall\fR command to compile using node\-waf\. -. -.IP "" 0 -. -.SH "USER" + +.RE +.SH USER +.P If npm was invoked with root privileges, then it will change the uid to the user account or uid specified by the \fBuser\fR config, which defaults to \fBnobody\fR\|\. Set the \fBunsafe\-perm\fR flag to run scripts with root privileges\. -.
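.P
To tie this section together, here is a minimal sketch of a package\.json that follows the recommendations above (the \fBmycheck\fR name and the three script files are hypothetical): \fBprepublish\fR does the build work instead of an \fBinstall\fR script, and an arbitrary script has a matching \fBpre\fR hook:
.P
.RS 2
.nf
{ "name" : "example\-pkg"
, "version" : "1\.0\.0"
, "scripts" :
  { "prepublish" : "node build\.js"
  , "premycheck" : "node setup\.js"
  , "mycheck" : "node check\.js"
  }
}
.fi
.RE
.P
With this layout, \fBbuild\.js\fR runs at publish time (and on a plain local \fBnpm install\fR) rather than on every user's machine, and \fBnpm run\-script mycheck\fR runs \fBsetup\.js\fR and then \fBcheck\.js\fR, per the pre/post matching described earlier\.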
-.SH "ENVIRONMENT" +.SH ENVIRONMENT +.P Package scripts run in an environment where many pieces of information are made available regarding the setup of npm and the current state of the process\. -. -.SS "path" +.SS path +.P If you depend on modules that define executable scripts, like test suites, then those executables will be added to the \fBPATH\fR for executing the scripts\. So, if your package\.json has this: -. -.IP "" 4 -. +.P +.RS 2 .nf { "name" : "foo" , "dependencies" : { "bar" : "0\.1\.x" } , "scripts": { "start" : "bar \./test" } } -. .fi -. -.IP "" 0 -. +.RE .P then you could run \fBnpm start\fR to execute the \fBbar\fR script, which is exported into the \fBnode_modules/\.bin\fR directory on \fBnpm install\fR\|\. -. -.SS "package\.json vars" +.SS package\.json vars +.P The package\.json fields are tacked onto the \fBnpm_package_\fR prefix\. So, for instance, if you had \fB{"name":"foo", "version":"1\.2\.5"}\fR in your -package\.json file, then your package scripts would have the \fBnpm_package_name\fR environment variable set to "foo", and the \fBnpm_package_version\fR set to "1\.2\.5" -. -.SS "configuration" -Configuration parameters are put in the environment with the \fBnpm_config_\fR prefix\. For instance, you can view the effective \fBroot\fR +package\.json file, then your package scripts would have the +\fBnpm_package_name\fR environment variable set to "foo", and the +\fBnpm_package_version\fR set to "1\.2\.5" +.SS configuration +.P +Configuration parameters are put in the environment with the +\fBnpm_config_\fR prefix\. For instance, you can view the effective \fBroot\fR config by checking the \fBnpm_config_root\fR environment variable\. -. -.SS "Special: package\.json "config" hash" +.SS Special: package\.json "config" object +.P The package\.json "config" keys are overwritten in the environment if there is a config param of \fB[@]:\fR\|\. For example, if the package\.json has this: -. -.IP "" 4 -. +.P +.RS 2 .nf { "name" : "foo" , "config" : { "port" : "8080" } , "scripts" : { "start" : "node server\.js" } } -. .fi -. -.IP "" 0 -. +.RE .P and the server\.js is this: -. -.IP "" 4 -. +.P +.RS 2 .nf http\.createServer(\.\.\.)\.listen(process\.env\.npm_package_config_port) -. .fi -. -.IP "" 0 -. +.RE .P then the user could change the behavior by doing: -. -.IP "" 4 -. +.P +.RS 2 .nf npm config set foo:port 80 -. .fi -. -.IP "" 0 -. -.SS "current lifecycle event" +.RE +.SS current lifecycle event +.P Lastly, the \fBnpm_lifecycle_event\fR environment variable is set to whichever stage of the cycle is being executed\. So, you could have a single script used for different parts of the process which switches -based on what\'s currently happening\. -. +based on what's currently happening\. .P -Objects are flattened following this format, so if you had \fB{"scripts":{"install":"foo\.js"}}\fR in your package\.json, then you\'d +Objects are flattened following this format, so if you had +\fB{"scripts":{"install":"foo\.js"}}\fR in your package\.json, then you'd see this in the script: -. -.IP "" 4 -. +.P +.RS 2 .nf process\.env\.npm_package_scripts_install === "foo\.js" -. .fi -. -.IP "" 0 -. -.SH "EXAMPLES" +.RE +.SH EXAMPLES +.P For example, if your package\.json contains this: -. -.IP "" 4 -. +.P +.RS 2 .nf { "scripts" : { "install" : "scripts/install\.js" @@ -253,24 +214,20 @@ For example, if your package\.json contains this: , "uninstall" : "scripts/uninstall\.js" } } -. .fi -. -.IP "" 0 -. 
+.RE .P then the \fBscripts/install\.js\fR will be called for the install and post\-install stages of the lifecycle, and the \fBscripts/uninstall\.js\fR -would be called when the package is uninstalled\. Since \fBscripts/install\.js\fR is running for three different phases, it would +would be called when the package is uninstalled\. Since +\fBscripts/install\.js\fR is running for three different phases, it would be wise in this case to look at the \fBnpm_lifecycle_event\fR environment variable\. -. .P If you want to run a make command, you can do so\. This works just fine: -. -.IP "" 4 -. +.P +.RS 2 .nf { "scripts" : { "preinstall" : "\./configure" @@ -278,77 +235,64 @@ fine: , "test" : "make test" } } -. .fi -. -.IP "" 0 -. -.SH "EXITING" +.RE +.SH EXITING +.P Scripts are run by passing the line as a script argument to \fBsh\fR\|\. -. .P If the script exits with a code other than 0, then this will abort the process\. -. .P -Note that these script files don\'t have to be nodejs or even +Note that these script files don't have to be nodejs or even javascript programs\. They just have to be some kind of executable file\. -. -.SH "HOOK SCRIPTS" +.SH HOOK SCRIPTS +.P If you want to run a specific script at a specific lifecycle event for ALL packages, then you can use a hook script\. -. .P Place an executable file at \fBnode_modules/\.hooks/{eventname}\fR, and -it\'ll get run for all packages when they are going through that point +it'll get run for all packages when they are going through that point in the package lifecycle for any packages installed in that root\. -. .P Hook scripts are run exactly the same way as package\.json scripts\. That is, they are in a separate child process, with the env described above\. -. -.SH "BEST PRACTICES" -. -.IP "\(bu" 4 -Don\'t exit with a non\-zero error code unless you \fIreally\fR mean it\. +.SH BEST PRACTICES +.RS 0 +.IP \(bu 2 +Don't exit with a non\-zero error code unless you \fIreally\fR mean it\. Except for uninstall scripts, this will cause the npm action to fail, and potentially be rolled back\. If the failure is minor or -only will prevent some optional features, then it\'s better to just +only will prevent some optional features, then it's better to just print a warning and exit successfully\. -. -.IP "\(bu" 4 -Try not to use scripts to do what npm can do for you\. Read through npm help 5 \fBpackage\.json\fR to see all the things that you can specify and enable +.IP \(bu 2 +Try not to use scripts to do what npm can do for you\. Read through +npm help 5 \fBpackage\.json\fR to see all the things that you can specify and enable by simply describing your package appropriately\. In general, this will lead to a more robust and consistent state\. -. -.IP "\(bu" 4 +.IP \(bu 2 Inspect the env to determine where to put things\. For instance, if the \fBnpm_config_binroot\fR environ is set to \fB/home/user/bin\fR, then -don\'t try to install executables into \fB/usr/local/bin\fR\|\. The user +don't try to install executables into \fB/usr/local/bin\fR\|\. The user probably set it up that way for a reason\. -. -.IP "\(bu" 4 -Don\'t prefix your script commands with "sudo"\. If root permissions -are required for some reason, then it\'ll fail with that error, and +.IP \(bu 2 +Don't prefix your script commands with "sudo"\. If root permissions +are required for some reason, then it'll fail with that error, and the user will sudo the npm command in question\. -. -.IP "" 0 -. -.SH "SEE ALSO" -.
-.IP "\(bu" 4 + +.RE +.SH SEE ALSO +.RS 0 +.IP \(bu 2 npm help run\-script -. -.IP "\(bu" 4 +.IP \(bu 2 npm help 5 package\.json -. -.IP "\(bu" 4 +.IP \(bu 2 npm help 7 developers -. -.IP "\(bu" 4 +.IP \(bu 2 npm help install -. -.IP "" 0 + +.RE diff --git a/deps/npm/man/man7/removing-npm.7 b/deps/npm/man/man7/removing-npm.7 index e8a60cdf954..b0a4fca5d7c 100644 --- a/deps/npm/man/man7/removing-npm.7 +++ b/deps/npm/man/man7/removing-npm.7 @@ -1,107 +1,78 @@ -.\" Generated with Ronnjs 0.3.8 -.\" http://github.com/kapouer/ronnjs/ -. -.TH "NPM\-REMOVAL" "1" "September 2014" "" "" -. +.TH "NPM\-REMOVAL" "1" "October 2014" "" "" .SH "NAME" -\fBnpm-removal\fR \-\- Cleaning the Slate -. -.SH "SYNOPSIS" +\fBnpm-removal\fR \- Cleaning the Slate +.SH SYNOPSIS +.P So sad to see you go\. -. -.IP "" 4 -. +.P +.RS 2 .nf sudo npm uninstall npm \-g -. .fi -. -.IP "" 0 -. +.RE .P Or, if that fails, get the npm source code, and do: -. -.IP "" 4 -. +.P +.RS 2 .nf sudo make uninstall -. .fi -. -.IP "" 0 -. -.SH "More Severe Uninstalling" +.RE +.SH More Severe Uninstalling +.P Usually, the above instructions are sufficient\. That will remove -npm, but leave behind anything you\'ve installed\. -. +npm, but leave behind anything you've installed\. .P -If that doesn\'t work, or if you require more drastic measures, +If that doesn't work, or if you require more drastic measures, continue reading\. -. .P Note that this is only necessary for globally\-installed packages\. Local -installs are completely contained within a project\'s \fBnode_modules\fR -folder\. Delete that folder, and everything is gone (unless a package\'s +installs are completely contained within a project's \fBnode_modules\fR +folder\. Delete that folder, and everything is gone (unless a package's install script is particularly ill\-behaved)\. -. .P This assumes that you installed node and npm in the default place\. If you configured node with a different \fB\-\-prefix\fR, or installed npm with a -different prefix setting, then adjust the paths accordingly, replacing \fB/usr/local\fR with your install prefix\. -. +different prefix setting, then adjust the paths accordingly, replacing +\fB/usr/local\fR with your install prefix\. .P To remove everything npm\-related manually: -. -.IP "" 4 -. +.P +.RS 2 .nf rm \-rf /usr/local/{lib/node{,/\.npm,_modules},bin,share/man}/npm* -. .fi -. -.IP "" 0 -. +.RE .P If you installed things \fIwith\fR npm, then your best bet is to uninstall them with npm first, and then install them again once you have a proper install\. This can help find any symlinks that are lying around: -. -.IP "" 4 -. +.P +.RS 2 .nf ls \-laF /usr/local/{lib/node{,/\.npm},bin,share/man} | grep npm -. .fi -. -.IP "" 0 -. +.RE .P Prior to version 0\.3, npm used shim files for executables and node modules\. To track those down, you can do the following: -. -.IP "" 4 -. +.P +.RS 2 .nf find /usr/local/{lib/node,bin} \-exec grep \-l npm \\{\\} \\; ; -. .fi -. -.IP "" 0 -. +.RE .P (This is also in the README file\.) -. -.SH "SEE ALSO" -. -.IP "\(bu" 4 +.SH SEE ALSO +.RS 0 +.IP \(bu 2 README -. -.IP "\(bu" 4 +.IP \(bu 2 npm help rm -. -.IP "\(bu" 4 +.IP \(bu 2 npm help prune -. -.IP "" 0 + +.RE diff --git a/deps/npm/man/man7/semver.7 b/deps/npm/man/man7/semver.7 index 1e64a8df20e..a6be932b474 100644 --- a/deps/npm/man/man7/semver.7 +++ b/deps/npm/man/man7/semver.7 @@ -1,243 +1,343 @@ -.\" Generated with Ronnjs 0.3.8 -.\" http://github.com/kapouer/ronnjs/ -. -.TH "SEMVER" "7" "September 2014" "" "" -. 
+.TH "SEMVER" "7" "October 2014" "" "" .SH "NAME" -\fBsemver\fR \-\- The semantic versioner for npm -. -.SH "Usage" -. +\fBsemver\fR \- The semantic versioner for npm +.SH Usage +.P +.RS 2 .nf $ npm install semver -semver\.valid(\'1\.2\.3\') // \'1\.2\.3\' -semver\.valid(\'a\.b\.c\') // null -semver\.clean(\' =v1\.2\.3 \') // \'1\.2\.3\' -semver\.satisfies(\'1\.2\.3\', \'1\.x || >=2\.5\.0 || 5\.0\.0 \- 7\.2\.3\') // true -semver\.gt(\'1\.2\.3\', \'9\.8\.7\') // false -semver\.lt(\'1\.2\.3\', \'9\.8\.7\') // true -. + +semver\.valid('1\.2\.3') // '1\.2\.3' +semver\.valid('a\.b\.c') // null +semver\.clean(' =v1\.2\.3 ') // '1\.2\.3' +semver\.satisfies('1\.2\.3', '1\.x || >=2\.5\.0 || 5\.0\.0 \- 7\.2\.3') // true +semver\.gt('1\.2\.3', '9\.8\.7') // false +semver\.lt('1\.2\.3', '9\.8\.7') // true .fi -. +.RE .P As a command\-line utility: -. -.IP "" 4 -. +.P +.RS 2 .nf $ semver \-h + Usage: semver [ [\.\.\.]] [\-r | \-i | \-d ] Test if version(s) satisfy the supplied range(s), and sort them\. + Multiple versions or ranges may be supplied, unless increment or decrement options are specified\. In that case, only a single version may be used, and it is incremented by the specified level + Program exits successfully if any valid version satisfies all supplied ranges, and prints all satisfying versions\. + If no versions are valid, or ranges are not satisfied, then exits failure\. + Versions are printed in ascending order, so supplying multiple versions to the utility will just sort them\. -. .fi -. -.IP "" 0 -. -.SH "Versions" -A "version" is described by the \fBv2\.0\.0\fR specification found at \fIhttp://semver\.org/\fR\|\. -. +.RE +.SH Versions +.P +A "version" is described by the \fBv2\.0\.0\fR specification found at +http://semver\.org/\|\. .P A leading \fB"="\fR or \fB"v"\fR character is stripped off and ignored\. -. -.SH "Ranges" -The following range styles are supported: -. -.IP "\(bu" 4 -\fB1\.2\.3\fR A specific version\. When nothing else will do\. Must be a full -version number, with major, minor, and patch versions specified\. -Note that build metadata is still ignored, so \fB1\.2\.3+build2012\fR will -satisfy this range\. -. -.IP "\(bu" 4 -\fB>1\.2\.3\fR Greater than a specific version\. -. -.IP "\(bu" 4 -\fB<1\.2\.3\fR Less than a specific version\. If there is no prerelease -tag on the version range, then no prerelease version will be allowed -either, even though these are technically "less than"\. -. -.IP "\(bu" 4 -\fB>=1\.2\.3\fR Greater than or equal to\. Note that prerelease versions -are NOT equal to their "normal" equivalents, so \fB1\.2\.3\-beta\fR will -not satisfy this range, but \fB2\.3\.0\-beta\fR will\. -. -.IP "\(bu" 4 -\fB<=1\.2\.3\fR Less than or equal to\. In this case, prerelease versions -ARE allowed, so \fB1\.2\.3\-beta\fR would satisfy\. -. -.IP "\(bu" 4 +.SH Ranges +.P +A \fBversion range\fR is a set of \fBcomparators\fR which specify versions +that satisfy the range\. +.P +A \fBcomparator\fR is composed of an \fBoperator\fR and a \fBversion\fR\|\. The set +of primitive \fBoperators\fR is: +.RS 0 +.IP \(bu 2 +\fB<\fR Less than +.IP \(bu 2 +\fB<=\fR Less than or equal to +.IP \(bu 2 +\fB>\fR Greater than +.IP \(bu 2 +\fB>=\fR Greater than or equal to +.IP \(bu 2 +\fB=\fR Equal\. If no operator is specified, then equality is assumed, +so this operator is optional, but MAY be included\. 
+ +.RE +.P +For example, the comparator \fB>=1\.2\.7\fR would match the versions +\fB1\.2\.7\fR, \fB1\.2\.8\fR, \fB2\.5\.3\fR, and \fB1\.3\.9\fR, but not the versions \fB1\.2\.6\fR +or \fB1\.1\.0\fR\|\. +.P +Comparators can be joined by whitespace to form a \fBcomparator set\fR, +which is satisfied by the \fBintersection\fR of all of the comparators +it includes\. +.P +A range is composed of one or more comparator sets, joined by \fB||\fR\|\. A +version matches a range if and only if every comparator in at least +one of the \fB||\fR\-separated comparator sets is satisfied by the version\. +.P +For example, the range \fB>=1\.2\.7 <1\.3\.0\fR would match the versions +\fB1\.2\.7\fR, \fB1\.2\.8\fR, and \fB1\.2\.99\fR, but not the versions \fB1\.2\.6\fR, \fB1\.3\.0\fR, +or \fB1\.1\.0\fR\|\. +.P +The range \fB1\.2\.7 || >=1\.2\.9 <2\.0\.0\fR would match the versions \fB1\.2\.7\fR, +\fB1\.2\.9\fR, and \fB1\.4\.6\fR, but not the versions \fB1\.2\.8\fR or \fB2\.0\.0\fR\|\. +.SS Prerelease Tags +.P +If a version has a prerelease tag (for example, \fB1\.2\.3\-alpha\.3\fR) then +it will only be allowed to satisfy comparator sets if at least one +comparator with the same \fB[major, minor, patch]\fR tuple also has a +prerelease tag\. +.P +For example, the range \fB>1\.2\.3\-alpha\.3\fR would be allowed to match the +version \fB1\.2\.3\-alpha\.7\fR, but it would \fInot\fR be satisfied by +\fB3\.4\.5\-alpha\.9\fR, even though \fB3\.4\.5\-alpha\.9\fR is technically "greater +than" \fB1\.2\.3\-alpha\.3\fR according to the SemVer sort rules\. The version +range only accepts prerelease tags on the \fB1\.2\.3\fR version\. The +version \fB3\.4\.5\fR \fIwould\fR satisfy the range, because it does not have a +prerelease flag, and \fB3\.4\.5\fR is greater than \fB1\.2\.3\-alpha\.7\fR\|\. +.P +The purpose for this behavior is twofold\. First, prerelease versions +frequently are updated very quickly, and contain many breaking changes +that are (by the author's design) not yet fit for public consumption\. +Therefore, by default, they are excluded from range matching +semantics\. +.P +Second, a user who has opted into using a prerelease version has +clearly indicated the intent to use \fIthat specific\fR set of +alpha/beta/rc versions\. By including a prerelease tag in the range, +the user is indicating that they are aware of the risk\. However, it +is still not appropriate to assume that they have opted into taking a +similar risk on the \fInext\fR set of prerelease versions\. +.SS Advanced Range Syntax +.P +Advanced range syntax desugars to primitive comparators in +deterministic ways\. +.P +Advanced ranges may be combined in the same way as primitive +comparators using white space or \fB||\fR\|\. +.SS Hyphen Ranges \fBX\.Y\.Z \- A\.B\.C\fR +.P +Specifies an inclusive set\. +.RS 0 +.IP \(bu 2 \fB1\.2\.3 \- 2\.3\.4\fR := \fB>=1\.2\.3 <=2\.3\.4\fR -. -.IP "\(bu" 4 -\fB~1\.2\.3\fR := \fB>=1\.2\.3\-0 <1\.3\.0\-0\fR "Reasonably close to \fB1\.2\.3\fR"\. When -using tilde operators, prerelease versions are supported as well, -but a prerelease of the next significant digit will NOT be -satisfactory, so \fB1\.3\.0\-beta\fR will not satisfy \fB~1\.2\.3\fR\|\. -. -.IP "\(bu" 4 -\fB^1\.2\.3\fR := \fB>=1\.2\.3\-0 <2\.0\.0\-0\fR "Compatible with \fB1\.2\.3\fR"\. When -using caret operators, anything from the specified version (including -prerelease) will be supported up to, but not including, the next -major version (or its prereleases)\. 
\fB1\.5\.1\fR will satisfy \fB^1\.2\.3\fR, -while \fB1\.2\.2\fR and \fB2\.0\.0\-beta\fR will not\. -. -.IP "\(bu" 4 -\fB^0\.1\.3\fR := \fB>=0\.1\.3\-0 <0\.2\.0\-0\fR "Compatible with \fB0\.1\.3\fR"\. \fB0\.x\.x\fR versions are -special: the first non\-zero component indicates potentially breaking changes, -meaning the caret operator matches any version with the same first non\-zero -component starting at the specified version\. -. -.IP "\(bu" 4 -\fB^0\.0\.2\fR := \fB=0\.0\.2\fR "Only the version \fB0\.0\.2\fR is considered compatible" -. -.IP "\(bu" 4 -\fB~1\.2\fR := \fB>=1\.2\.0\-0 <1\.3\.0\-0\fR "Any version starting with \fB1\.2\fR" -. -.IP "\(bu" 4 -\fB^1\.2\fR := \fB>=1\.2\.0\-0 <2\.0\.0\-0\fR "Any version compatible with \fB1\.2\fR" -. -.IP "\(bu" 4 -\fB1\.2\.x\fR := \fB>=1\.2\.0\-0 <1\.3\.0\-0\fR "Any version starting with \fB1\.2\fR" -. -.IP "\(bu" 4 -\fB1\.2\.*\fR Same as \fB1\.2\.x\fR\|\. -. -.IP "\(bu" 4 -\fB1\.2\fR Same as \fB1\.2\.x\fR\|\. -. -.IP "\(bu" 4 -\fB~1\fR := \fB>=1\.0\.0\-0 <2\.0\.0\-0\fR "Any version starting with \fB1\fR" -. -.IP "\(bu" 4 -\fB^1\fR := \fB>=1\.0\.0\-0 <2\.0\.0\-0\fR "Any version compatible with \fB1\fR" -. -.IP "\(bu" 4 -\fB1\.x\fR := \fB>=1\.0\.0\-0 <2\.0\.0\-0\fR "Any version starting with \fB1\fR" -. -.IP "\(bu" 4 -\fB1\.*\fR Same as \fB1\.x\fR\|\. -. -.IP "\(bu" 4 -\fB1\fR Same as \fB1\.x\fR\|\. -. -.IP "\(bu" 4 -\fB*\fR Any version whatsoever\. -. -.IP "\(bu" 4 -\fBx\fR Same as \fB*\fR\|\. -. -.IP "\(bu" 4 -\fB""\fR (just an empty string) Same as \fB*\fR\|\. -. -.IP "" 0 -. -.P -Ranges can be joined with either a space (which implies "and") or a \fB||\fR (which implies "or")\. -. -.SH "Functions" + +.RE +.P +If a partial version is provided as the first version in the inclusive +range, then the missing pieces are replaced with zeroes\. +.RS 0 +.IP \(bu 2 +\fB1\.2 \- 2\.3\.4\fR := \fB>=1\.2\.0 <=2\.3\.4\fR + +.RE +.P +If a partial version is provided as the second version in the +inclusive range, then all versions that start with the supplied parts +of the tuple are accepted, but nothing that would be greater than the +provided tuple parts\. +.RS 0 +.IP \(bu 2 +\fB1\.2\.3 \- 2\.3\fR := \fB>=1\.2\.3 <2\.4\.0\fR +.IP \(bu 2 +\fB1\.2\.3 \- 2\fR := \fB>=1\.2\.3 <3\.0\.0\fR + +.RE +.SS X\-Ranges \fB1\.2\.x\fR \fB1\.X\fR \fB1\.2\.*\fR \fB*\fR +.P +Any of \fBX\fR, \fBx\fR, or \fB*\fR may be used to "stand in" for one of the +numeric values in the \fB[major, minor, patch]\fR tuple\. +.RS 0 +.IP \(bu 2 +\fB*\fR := \fB>=0\.0\.0\fR (Any version satisfies) +.IP \(bu 2 +\fB1\.x\fR := \fB>=1\.0\.0 <2\.0\.0\fR (Matching major version) +.IP \(bu 2 +\fB1\.2\.x\fR := \fB>=1\.2\.0 <1\.3\.0\fR (Matching major and minor versions) + +.RE +.P +A partial version range is treated as an X\-Range, so the special +character is in fact optional\. +.RS 0 +.IP \(bu 2 +\fB""\fR (empty string) := \fB*\fR := \fB>=0\.0\.0\fR +.IP \(bu 2 +\fB1\fR := \fB1\.x\.x\fR := \fB>=1\.0\.0 <2\.0\.0\fR +.IP \(bu 2 +\fB1\.2\fR := \fB1\.2\.x\fR := \fB>=1\.2\.0 <1\.3\.0\fR + +.RE +.SS Tilde Ranges \fB~1\.2\.3\fR \fB~1\.2\fR \fB~1\fR +.P +Allows patch\-level changes if a minor version is specified on the +comparator\. Allows minor\-level changes if not\. 
+.RS 0 +.IP \(bu 2 +\fB~1\.2\.3\fR := \fB>=1\.2\.3 <1\.(2+1)\.0\fR := \fB>=1\.2\.3 <1\.3\.0\fR +.IP \(bu 2 +\fB~1\.2\fR := \fB>=1\.2\.0 <1\.(2+1)\.0\fR := \fB>=1\.2\.0 <1\.3\.0\fR (Same as \fB1\.2\.x\fR) +.IP \(bu 2 +\fB~1\fR := \fB>=1\.0\.0 <(1+1)\.0\.0\fR := \fB>=1\.0\.0 <2\.0\.0\fR (Same as \fB1\.x\fR) +.IP \(bu 2 +\fB~0\.2\.3\fR := \fB>=0\.2\.3 <0\.(2+1)\.0\fR := \fB>=0\.2\.3 <0\.3\.0\fR +.IP \(bu 2 +\fB~0\.2\fR := \fB>=0\.2\.0 <0\.(2+1)\.0\fR := \fB>=0\.2\.0 <0\.3\.0\fR (Same as \fB0\.2\.x\fR) +.IP \(bu 2 +\fB~0\fR := \fB>=0\.0\.0 <(0+1)\.0\.0\fR := \fB>=0\.0\.0 <1\.0\.0\fR (Same as \fB0\.x\fR) +.IP \(bu 2 +\fB~1\.2\.3\-beta\.2\fR := \fB>=1\.2\.3\-beta\.2 <1\.3\.0\fR Note that prereleases in +the \fB1\.2\.3\fR version will be allowed, if they are greater than or +equal to \fBbeta\.2\fR\|\. So, \fB1\.2\.3\-beta\.4\fR would be allowed, but +\fB1\.2\.4\-beta\.2\fR would not, because it is a prerelease of a +different \fB[major, minor, patch]\fR tuple\. + +.RE +.P +Note: this is the same as the \fB~>\fR operator in rubygems\. +.SS Caret Ranges \fB^1\.2\.3\fR \fB^0\.2\.5\fR \fB^0\.0\.4\fR +.P +Allows changes that do not modify the left\-most non\-zero digit in the +\fB[major, minor, patch]\fR tuple\. In other words, this allows patch and +minor updates for versions \fB1\.0\.0\fR and above, patch updates for +versions \fB0\.X >=0\.1\.0\fR, and \fIno\fR updates for versions \fB0\.0\.X\fR\|\. +.P +Many authors treat a \fB0\.x\fR version as if the \fBx\fR were the major +"breaking\-change" indicator\. +.P +Caret ranges are ideal when an author may make breaking changes +between \fB0\.2\.4\fR and \fB0\.3\.0\fR releases, which is a common practice\. +However, it presumes that there will \fInot\fR be breaking changes between +\fB0\.2\.4\fR and \fB0\.2\.5\fR\|\. It allows for changes that are presumed to be +additive (but non\-breaking), according to commonly observed practices\. +.RS 0 +.IP \(bu 2 +\fB^1\.2\.3\fR := \fB>=1\.2\.3 <2\.0\.0\fR +.IP \(bu 2 +\fB^0\.2\.3\fR := \fB>=0\.2\.3 <0\.3\.0\fR +.IP \(bu 2 +\fB^0\.0\.3\fR := \fB>=0\.0\.3 <0\.0\.4\fR +.IP \(bu 2 +\fB^1\.2\.3\-beta\.2\fR := \fB>=1\.2\.3\-beta\.2 <2\.0\.0\fR Note that prereleases in +the \fB1\.2\.3\fR version will be allowed, if they are greater than or +equal to \fBbeta\.2\fR\|\. So, \fB1\.2\.3\-beta\.4\fR would be allowed, but +\fB1\.2\.4\-beta\.2\fR would not, because it is a prerelease of a +different \fB[major, minor, patch]\fR tuple\. +.IP \(bu 2 +\fB^0\.0\.3\-beta\fR := \fB>=0\.0\.3\-beta <0\.0\.4\fR Note that prereleases in the +\fB0\.0\.3\fR version \fIonly\fR will be allowed, if they are greater than or +equal to \fBbeta\fR\|\. So, \fB0\.0\.3\-pr\.2\fR would be allowed\. + +.RE +.P +When parsing caret ranges, a missing \fBpatch\fR value desugars to the +number \fB0\fR, but will allow flexibility within that value, even if the +major and minor versions are both \fB0\fR\|\. +.RS 0 +.IP \(bu 2 +\fB^1\.2\.x\fR := \fB>=1\.2\.0 <2\.0\.0\fR +.IP \(bu 2 +\fB^0\.0\.x\fR := \fB>=0\.0\.0 <0\.1\.0\fR +.IP \(bu 2 +\fB^0\.0\fR := \fB>=0\.0\.0 <0\.1\.0\fR + +.RE +.P +Missing \fBminor\fR and \fBpatch\fR values will desugar to zero, but also +allow flexibility within those values, even if the major version is +zero\. +.RS 0 +.IP \(bu 2 +\fB^1\.x\fR := \fB>=1\.0\.0 <2\.0\.0\fR +.IP \(bu 2 +\fB^0\.x\fR := \fB>=0\.0\.0 <1\.0\.0\fR + +.RE +.SH Functions +.P All methods and classes take a final \fBloose\fR boolean argument that, if true, will be more forgiving about not\-quite\-valid semver strings\.
The resulting output will always be 100% strict, of course\. -. .P Strict\-mode Comparators and Ranges will be strict about the SemVer strings that they parse\. -. -.IP "\(bu" 4 -\fBvalid(v)\fR: Return the parsed version, or null if it\'s not valid\. -. -.IP "\(bu" 4 -\fBinc(v, release)\fR\fBmajor\fR\fBpremajor\fR\fBminor\fR\fBpreminor\fR\fBpatch\fR\fBprepatch\fR\fBprerelease\fR -. -.IP "\(bu" 4 +.RS 0 +.IP \(bu 2 +\fBvalid(v)\fR: Return the parsed version, or null if it's not valid\. +.IP \(bu 2 +\fBinc(v, release)\fR: Return the version incremented by the release +type (\fBmajor\fR, \fBpremajor\fR, \fBminor\fR, \fBpreminor\fR, \fBpatch\fR, +\fBprepatch\fR, or \fBprerelease\fR), or null if it's not valid +.RS 0 +.IP \(bu 2 \fBpremajor\fR in one call will bump the version up to the next major -version and down to a prerelease of that major version\. \fBpreminor\fR, and \fBprepatch\fR work the same way\. -. -.IP "\(bu" 4 +version and down to a prerelease of that major version\. +\fBpreminor\fR, and \fBprepatch\fR work the same way\. +.IP \(bu 2 If called from a non\-prerelease version, the \fBprerelease\fR will work the same as \fBprepatch\fR\|\. It increments the patch version, then makes a prerelease\. If the input version is already a prerelease it simply increments it\. -. -.IP "" 0 - -. -.IP "" 0 -. -.SS "Comparison" -. -.IP "\(bu" 4 + +.RE + +.RE +.SS Comparison +.RS 0 +.IP \(bu 2 \fBgt(v1, v2)\fR: \fBv1 > v2\fR -. -.IP "\(bu" 4 +.IP \(bu 2 \fBgte(v1, v2)\fR: \fBv1 >= v2\fR -. -.IP "\(bu" 4 +.IP \(bu 2 \fBlt(v1, v2)\fR: \fBv1 < v2\fR -. -.IP "\(bu" 4 +.IP \(bu 2 \fBlte(v1, v2)\fR: \fBv1 <= v2\fR -. -.IP "\(bu" 4 -\fBeq(v1, v2)\fR: \fBv1 == v2\fR This is true if they\'re logically equivalent, -even if they\'re not the exact same string\. You already know how to +.IP \(bu 2 +\fBeq(v1, v2)\fR: \fBv1 == v2\fR This is true if they're logically equivalent, +even if they're not the exact same string\. You already know how to compare strings\. -. -.IP "\(bu" 4 +.IP \(bu 2 \fBneq(v1, v2)\fR: \fBv1 != v2\fR The opposite of \fBeq\fR\|\. -. -.IP "\(bu" 4 -\fBcmp(v1, comparator, v2)\fR: Pass in a comparison string, and it\'ll call +.IP \(bu 2 +\fBcmp(v1, comparator, v2)\fR: Pass in a comparison string, and it'll call the corresponding function above\. \fB"==="\fR and \fB"!=="\fR do simple string comparison, but are included for completeness\. Throws if an invalid comparison string is provided\. -. -.IP "\(bu" 4 -\fBcompare(v1, v2)\fR: Return \fB0\fR if \fBv1 == v2\fR, or \fB1\fR if \fBv1\fR is greater, or \fB\-1\fR if \fBv2\fR is greater\. Sorts in ascending order if passed to \fBArray\.sort()\fR\|\. -. -.IP "\(bu" 4 +.IP \(bu 2 +\fBcompare(v1, v2)\fR: Return \fB0\fR if \fBv1 == v2\fR, or \fB1\fR if \fBv1\fR is greater, or \fB\-1\fR if +\fBv2\fR is greater\. Sorts in ascending order if passed to \fBArray\.sort()\fR\|\. +.IP \(bu 2 \fBrcompare(v1, v2)\fR: The reverse of compare\. Sorts an array of versions in descending order when passed to \fBArray\.sort()\fR\|\. -. -.IP "" 0 -. -.SS "Ranges" -. -.IP "\(bu" 4 -\fBvalidRange(range)\fR: Return the valid range or null if it\'s not valid -. -.IP "\(bu" 4 + +.RE +.SS Ranges +.RS 0 +.IP \(bu 2 +\fBvalidRange(range)\fR: Return the valid range or null if it's not valid +.IP \(bu 2 \fBsatisfies(version, range)\fR: Return true if the version satisfies the range\. -. -.IP "\(bu" 4 +.IP \(bu 2 \fBmaxSatisfying(versions, range)\fR: Return the highest version in the list that satisfies the range, or \fBnull\fR if none of them do\. -. 
-.IP "\(bu" 4 +.IP \(bu 2 \fBgtr(version, range)\fR: Return \fBtrue\fR if version is greater than all the versions possible in the range\. -. -.IP "\(bu" 4 +.IP \(bu 2 \fBltr(version, range)\fR: Return \fBtrue\fR if version is less than all the versions possible in the range\. -. -.IP "\(bu" 4 +.IP \(bu 2 \fBoutside(version, range, hilo)\fR: Return true if the version is outside -the bounds of the range in either the high or low direction\. The \fBhilo\fR argument must be either the string \fB\'>\'\fR or \fB\'<\'\fR\|\. (This is +the bounds of the range in either the high or low direction\. The +\fBhilo\fR argument must be either the string \fB\|'>'\fR or \fB\|'<'\fR\|\. (This is the function called by \fBgtr\fR and \fBltr\fR\|\.) -. -.IP "" 0 -. + +.RE .P Note that, since ranges may be non\-contiguous, a version might not be greater than a range, less than a range, \fIor\fR satisfy a range! For @@ -246,7 +346,7 @@ until \fB2\.0\.0\fR, so the version \fB1\.2\.10\fR would not be greater than the range (because \fB2\.0\.1\fR satisfies, which is higher), nor less than the range (since \fB1\.2\.8\fR satisfies, which is lower), and it also does not satisfy the range\. -. .P If you want to know if a version satisfies or does not satisfy a range, use the \fBsatisfies(version, range)\fR function\. + diff --git a/deps/npm/node_modules/archy/LICENSE b/deps/npm/node_modules/archy/LICENSE new file mode 100644 index 00000000000..ee27ba4b441 --- /dev/null +++ b/deps/npm/node_modules/archy/LICENSE @@ -0,0 +1,18 @@ +This software is released under the MIT license: + +Permission is hereby granted, free of charge, to any person obtaining a copy of +this software and associated documentation files (the "Software"), to deal in +the Software without restriction, including without limitation the rights to +use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of +the Software, and to permit persons to whom the Software is furnished to do so, +subject to the following conditions: + +The above copyright notice and this permission notice shall be included in all +copies or substantial portions of the Software. + +THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR +IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS +FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR +COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER +IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN +CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. diff --git a/deps/npm/node_modules/archy/README.markdown b/deps/npm/node_modules/archy/README.markdown index deaba0fd1c3..ef7a5cf34be 100644 --- a/deps/npm/node_modules/archy/README.markdown +++ b/deps/npm/node_modules/archy/README.markdown @@ -1,12 +1,12 @@ -archy -===== +# archy Render nested hierarchies `npm ls` style with unicode pipes. +[![browser support](http://ci.testling.com/substack/node-archy.png)](http://ci.testling.com/substack/node-archy) + [![build status](https://secure.travis-ci.org/substack/node-archy.png)](http://travis-ci.org/substack/node-archy) -example -======= +# example ``` js var archy = require('archy'); @@ -50,13 +50,11 @@ beep time! ``` -methods -======= +# methods var archy = require('archy') -archy(obj, prefix='', opts={}) ------------------------------- +## archy(obj, prefix='', opts={}) Return a string representation of `obj` with unicode pipe characters like how `npm ls` looks. 
@@ -77,8 +75,7 @@ with the current prefix. To disable unicode results in favor of all-ansi output set `opts.unicode` to `false`. -install -======= +# install With [npm](http://npmjs.org) do: @@ -86,7 +83,6 @@ With [npm](http://npmjs.org) do: npm install archy ``` -license -======= +# license -MIT/X11 +MIT diff --git a/deps/npm/node_modules/archy/examples/beep.js b/deps/npm/node_modules/archy/examples/beep.js new file mode 100644 index 00000000000..9c0704797c8 --- /dev/null +++ b/deps/npm/node_modules/archy/examples/beep.js @@ -0,0 +1,24 @@ +var archy = require('../'); +var s = archy({ + label : 'beep', + nodes : [ + 'ity', + { + label : 'boop', + nodes : [ + { + label : 'o_O', + nodes : [ + { + label : 'oh', + nodes : [ 'hello', 'puny' ] + }, + 'human' + ] + }, + 'party\ntime!' + ] + } + ] +}); +console.log(s); diff --git a/deps/npm/node_modules/archy/examples/multi_line.js b/deps/npm/node_modules/archy/examples/multi_line.js new file mode 100644 index 00000000000..8afdfada910 --- /dev/null +++ b/deps/npm/node_modules/archy/examples/multi_line.js @@ -0,0 +1,25 @@ +var archy = require('../'); + +var s = archy({ + label : 'beep\none\ntwo', + nodes : [ + 'ity', + { + label : 'boop', + nodes : [ + { + label : 'o_O\nwheee', + nodes : [ + { + label : 'oh', + nodes : [ 'hello', 'puny\nmeat' ] + }, + 'creature' + ] + }, + 'party\ntime!' + ] + } + ] +}); +console.log(s); diff --git a/deps/npm/node_modules/archy/package.json b/deps/npm/node_modules/archy/package.json index 81c3e2669d7..4b3da663726 100644 --- a/deps/npm/node_modules/archy/package.json +++ b/deps/npm/node_modules/archy/package.json @@ -1,22 +1,42 @@ { "name": "archy", - "version": "0.0.2", + "version": "1.0.0", "description": "render nested hierarchies `npm ls` style with unicode pipes", "main": "index.js", - "directories": { - "lib": ".", - "example": "example", - "test": "test" - }, "devDependencies": { - "tap": "~0.2.3" + "tap": "~0.3.3", + "tape": "~0.1.1" }, "scripts": { "test": "tap test" }, + "testling": { + "files": "test/*.js", + "browsers": { + "iexplore": [ + "6.0", + "7.0", + "8.0", + "9.0" + ], + "chrome": [ + "20.0" + ], + "firefox": [ + "10.0", + "15.0" + ], + "safari": [ + "5.1" + ], + "opera": [ + "12.0" + ] + } + }, "repository": { "type": "git", - "url": "git://github.com/substack/node-archy.git" + "url": "http://github.com/substack/node-archy.git" }, "keywords": [ "hierarchy", @@ -30,23 +50,30 @@ "email": "mail@substack.net", "url": "http://substack.net" }, - "license": "MIT/X11", - "engine": { - "node": ">=0.4" + "license": "MIT", + "gitHead": "30223c16191e877bf027b15b12daf077b9b55b84", + "bugs": { + "url": "https://github.com/substack/node-archy/issues" }, + "homepage": "https://github.com/substack/node-archy", + "_id": "archy@1.0.0", + "_shasum": "f9c8c13757cc1dd7bc379ac77b2c62a5c2868c40", + "_from": "archy@>=1.0.0 <2.0.0", + "_npmVersion": "1.4.25", "_npmUser": { - "name": "isaacs", - "email": "i@izs.me" - }, - "_id": "archy@0.0.2", - "dependencies": {}, - "optionalDependencies": {}, - "engines": { - "node": "*" - }, - "_engineSupported": true, - "_npmVersion": "1.1.13", - "_nodeVersion": "v0.7.7-pre", - "_defaultsLoaded": true, - "_from": "archy@0.0.2" + "name": "substack", + "email": "mail@substack.net" + }, + "maintainers": [ + { + "name": "substack", + "email": "mail@substack.net" + } + ], + "dist": { + "shasum": "f9c8c13757cc1dd7bc379ac77b2c62a5c2868c40", + "tarball": "http://registry.npmjs.org/archy/-/archy-1.0.0.tgz" + }, + "directories": {}, + "_resolved": 
"https://registry.npmjs.org/archy/-/archy-1.0.0.tgz" } diff --git a/deps/npm/node_modules/archy/test/beep.js b/deps/npm/node_modules/archy/test/beep.js new file mode 100644 index 00000000000..4ea74f9cee4 --- /dev/null +++ b/deps/npm/node_modules/archy/test/beep.js @@ -0,0 +1,40 @@ +var test = require('tape'); +var archy = require('../'); + +test('beep', function (t) { + var s = archy({ + label : 'beep', + nodes : [ + 'ity', + { + label : 'boop', + nodes : [ + { + label : 'o_O', + nodes : [ + { + label : 'oh', + nodes : [ 'hello', 'puny' ] + }, + 'human' + ] + }, + 'party!' + ] + } + ] + }); + t.equal(s, [ + 'beep', + '├── ity', + '└─┬ boop', + ' ├─┬ o_O', + ' │ ├─┬ oh', + ' │ │ ├── hello', + ' │ │ └── puny', + ' │ └── human', + ' └── party!', + '' + ].join('\n')); + t.end(); +}); diff --git a/deps/npm/node_modules/archy/test/multi_line.js b/deps/npm/node_modules/archy/test/multi_line.js new file mode 100644 index 00000000000..2cf2154d8a3 --- /dev/null +++ b/deps/npm/node_modules/archy/test/multi_line.js @@ -0,0 +1,45 @@ +var test = require('tape'); +var archy = require('../'); + +test('multi-line', function (t) { + var s = archy({ + label : 'beep\none\ntwo', + nodes : [ + 'ity', + { + label : 'boop', + nodes : [ + { + label : 'o_O\nwheee', + nodes : [ + { + label : 'oh', + nodes : [ 'hello', 'puny\nmeat' ] + }, + 'creature' + ] + }, + 'party\ntime!' + ] + } + ] + }); + t.equal(s, [ + 'beep', + '│ one', + '│ two', + '├── ity', + '└─┬ boop', + ' ├─┬ o_O', + ' │ │ wheee', + ' │ ├─┬ oh', + ' │ │ ├── hello', + ' │ │ └── puny', + ' │ │ meat', + ' │ └── creature', + ' └── party', + ' time!', + '' + ].join('\n')); + t.end(); +}); diff --git a/deps/npm/node_modules/archy/test/non_unicode.js b/deps/npm/node_modules/archy/test/non_unicode.js new file mode 100644 index 00000000000..7204d33271d --- /dev/null +++ b/deps/npm/node_modules/archy/test/non_unicode.js @@ -0,0 +1,40 @@ +var test = require('tape'); +var archy = require('../'); + +test('beep', function (t) { + var s = archy({ + label : 'beep', + nodes : [ + 'ity', + { + label : 'boop', + nodes : [ + { + label : 'o_O', + nodes : [ + { + label : 'oh', + nodes : [ 'hello', 'puny' ] + }, + 'human' + ] + }, + 'party!' 
+ ] + } + ] + }, '', { unicode : false }); + t.equal(s, [ + 'beep', + '+-- ity', + '`-- boop', + ' +-- o_O', + ' | +-- oh', + ' | | +-- hello', + ' | | `-- puny', + ' | `-- human', + ' `-- party!', + '' + ].join('\n')); + t.end(); +}); diff --git a/deps/npm/node_modules/async-some/.eslintrc b/deps/npm/node_modules/async-some/.eslintrc new file mode 100644 index 00000000000..5c39c67eca0 --- /dev/null +++ b/deps/npm/node_modules/async-some/.eslintrc @@ -0,0 +1,18 @@ +{ + "env" : { + "node" : true + }, + "rules" : { + "curly" : 0, + "no-lonely-if" : 1, + "no-mixed-requires" : 0, + "no-underscore-dangle" : 0, + "no-unused-vars" : [2, {"vars" : "all", "args" : "after-used"}], + "no-use-before-define" : [2, "nofunc"], + "quotes" : [1, "double", "avoid-escape"], + "semi" : [2, "never"], + "space-after-keywords" : 1, + "space-infix-ops" : 0, + "strict" : 0 + } +} diff --git a/deps/npm/node_modules/async-some/.npmignore b/deps/npm/node_modules/async-some/.npmignore new file mode 100644 index 00000000000..3c3629e647f --- /dev/null +++ b/deps/npm/node_modules/async-some/.npmignore @@ -0,0 +1 @@ +node_modules diff --git a/deps/npm/node_modules/async-some/README.md b/deps/npm/node_modules/async-some/README.md new file mode 100644 index 00000000000..bb502ee0608 --- /dev/null +++ b/deps/npm/node_modules/async-some/README.md @@ -0,0 +1,62 @@ +# some + +Short-circuited async Array.prototype.some implementation. + +Serially evaluates a list of values from a JS array or arraylike +against an asynchronous predicate, terminating on the first truthy +value. If the predicate encounters an error, pass it to the completion +callback. Otherwise, pass the truthy value passed by the predicate, or +`false` if no truthy value was passed. + +Is +[Zalgo](http://blog.izs.me/post/59142742143/designing-apis-for-asynchrony)-proof, +browser-safe, and pretty efficient. + +## Usage + +```javascript +var some = require("async-some"); +var resolve = require("path").resolve; +var stat = require("fs").stat; +var readFileSync = require("fs").readFileSync; + +some(["apple", "seaweed", "ham", "quince"], porkDetector, function (error, match) { + if (error) return console.error(error); + + if (match) return console.dir(JSON.parse(readFileSync(match))); + + console.error("time to buy more Sporkle™!"); +}); + +var PREFIX = resolve(__dirname, "../pork_store"); +function porkDetector(value, cb) { + var path = resolve(PREFIX, value + ".json"); + stat(path, function (er, stat) { + if (er) { + if (er.code === "ENOENT") return cb(null, false); + + return cb(er); + } + + cb(er, path); + }); +} +``` + +### some(list, test, callback) + +* `list` {Object} An arraylike (either an Array or the arguments arraylike) to + be checked. +* `test` {Function} The predicate against which the elements of `list` will be + tested. Takes two parameters: + * `element` {any} The element of the list to be tested. + * `callback` {Function} The continuation to be called once the test is + complete. Takes (again) two values: + * `error` {Error} Any errors that the predicate encountered. + * `value` {any} A truthy value. A non-falsy result terminates checking the + entire list. +* `callback` {Function} The callback to invoke when either a value has been + found or the entire input list has been processed with no result. Is invoked + with the traditional two parameters: + * `error` {Error} Errors that were encountered during the evaluation of some(). + * `match` {any} Value successfully matched by `test`, if any. 
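A more compact sketch of the contract above, with a hypothetical numeric predicate in place of the README's filesystem probe:

```javascript
var some = require("async-some");

// Serial, short-circuiting scan: evaluation stops at the first truthy result.
some([1, 2, 3, 4], function (n, cb) {
  process.nextTick(function () {
    cb(null, n > 2 && n); // pass the value through once it qualifies
  });
}, function (error, match) {
  if (error) return console.error(error);
  console.log(match); // 3 -- the predicate never sees 4
});
```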
diff --git a/deps/npm/node_modules/async-some/package.json b/deps/npm/node_modules/async-some/package.json new file mode 100644 index 00000000000..d32ae73fb2a --- /dev/null +++ b/deps/npm/node_modules/async-some/package.json @@ -0,0 +1,57 @@ +{ + "name": "async-some", + "version": "1.0.1", + "description": "short-circuited, asynchronous version of Array.protototype.some", + "main": "some.js", + "scripts": { + "test": "tap test/*.js" + }, + "repository": { + "type": "git", + "url": "https://github.com/othiym23/async-some.git" + }, + "keywords": [ + "async", + "some", + "array", + "collections", + "fp" + ], + "author": { + "name": "Forrest L Norvell", + "email": "ogd@aoaioxxysz.net" + }, + "license": "ISC", + "bugs": { + "url": "https://github.com/othiym23/async-some/issues" + }, + "homepage": "https://github.com/othiym23/async-some", + "dependencies": { + "dezalgo": "^1.0.0" + }, + "devDependencies": { + "tap": "^0.4.11" + }, + "gitHead": "e73d6d1fbc03cca5a0d54f456f39bab294a4c7b7", + "_id": "async-some@1.0.1", + "_shasum": "8b54f08d46f0f9babc72ea9d646c245d23a4d9e5", + "_from": "async-some@>=1.0.1-0 <2.0.0-0", + "_npmVersion": "1.5.0-pre", + "_npmUser": { + "name": "othiym23", + "email": "ogd@aoaioxxysz.net" + }, + "maintainers": [ + { + "name": "othiym23", + "email": "ogd@aoaioxxysz.net" + } + ], + "dist": { + "shasum": "8b54f08d46f0f9babc72ea9d646c245d23a4d9e5", + "tarball": "http://registry.npmjs.org/async-some/-/async-some-1.0.1.tgz" + }, + "directories": {}, + "_resolved": "https://registry.npmjs.org/async-some/-/async-some-1.0.1.tgz", + "readme": "ERROR: No README data found!" +} diff --git a/deps/npm/node_modules/async-some/some.js b/deps/npm/node_modules/async-some/some.js new file mode 100644 index 00000000000..0419709f763 --- /dev/null +++ b/deps/npm/node_modules/async-some/some.js @@ -0,0 +1,47 @@ +var assert = require("assert") +var dezalgoify = require("dezalgo") + +module.exports = some + +/** + * short-circuited async Array.prototype.some implementation + * + * Serially evaluates a list of values from a JS array or arraylike + * against an asynchronous predicate, terminating on the first truthy + * value. If the predicate encounters an error, pass it to the completion + * callback. Otherwise, pass the truthy value passed by the predicate, or + * `false` if no truthy value was passed. 
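+ *
+ * Note: the completion callback is wrapped with dezalgo below, so callers
+ * always observe completion asynchronously, even when the predicate
+ * calls back synchronously.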
+ */ +function some (list, test, cb) { + assert("length" in list, "array must be arraylike") + assert.equal(typeof test, "function", "predicate must be callable") + assert.equal(typeof cb, "function", "callback must be callable") + + var array = slice(list) + , index = 0 + , length = array.length + , hecomes = dezalgoify(cb) + + map() + + function map () { + if (index >= length) return hecomes(null, false) + + test(array[index], reduce) + } + + function reduce (er, result) { + if (er) return hecomes(er, false) + if (result) return hecomes(null, result) + + index++ + map() + } +} + +// Array.prototype.slice on arguments arraylike is expensive +function slice(args) { + var l = args.length, a = [], i + for (i = 0; i < l; i++) a[i] = args[i] + return a +} diff --git a/deps/npm/node_modules/async-some/test/base-case.js b/deps/npm/node_modules/async-some/test/base-case.js new file mode 100644 index 00000000000..356890521d6 --- /dev/null +++ b/deps/npm/node_modules/async-some/test/base-case.js @@ -0,0 +1,35 @@ +var test = require("tap").test + +var some = require("../some.js") + +test("some() array base case", function (t) { + some([], failer, function (error, match) { + t.ifError(error, "ran successfully") + + t.notOk(match, "nothing to find, so nothing found") + + t.end() + }) + + function failer(value, cb) { + cb(new Error("test should never have been called")) + } +}) + +test("some() arguments arraylike base case", function (t) { + go() + + function go() { + some(arguments, failer, function (error, match) { + t.ifError(error, "ran successfully") + + t.notOk(match, "nothing to find, so nothing found") + + t.end() + }) + + function failer(value, cb) { + cb(new Error("test should never have been called")) + } + } +}) diff --git a/deps/npm/node_modules/async-some/test/parameters.js b/deps/npm/node_modules/async-some/test/parameters.js new file mode 100644 index 00000000000..0706d1da6fc --- /dev/null +++ b/deps/npm/node_modules/async-some/test/parameters.js @@ -0,0 +1,37 @@ +var test = require("tap").test + +var some = require("../some.js") + +var NOP = function () {} + +test("some() called with bogus parameters", function (t) { + t.throws(function () { + some() + }, "throws when called with no parameters") + + t.throws(function () { + some(null, NOP, NOP) + }, "throws when called with no list") + + t.throws(function () { + some([], null, NOP) + }, "throws when called with no predicate") + + t.throws(function () { + some([], NOP, null) + }, "throws when called with no callback") + + t.throws(function () { + some({}, NOP, NOP) + }, "throws when called with wrong list type") + + t.throws(function () { + some([], "ham", NOP) + }, "throws when called with wrong test type") + + t.throws(function () { + some([], NOP, "ham") + }, "throws when called with wrong test type") + + t.end() +}) diff --git a/deps/npm/node_modules/async-some/test/simple.js b/deps/npm/node_modules/async-some/test/simple.js new file mode 100644 index 00000000000..3d68e1e5076 --- /dev/null +++ b/deps/npm/node_modules/async-some/test/simple.js @@ -0,0 +1,60 @@ +var test = require("tap").test + +var some = require("../some.js") + +test("some() doesn't find anything asynchronously", function (t) { + some(["a", "b", "c", "d", "e", "f", "g"], predicate, function (error, match) { + t.ifError(error, "ran successfully") + + t.notOk(match, "nothing to find, so nothing found") + + t.end() + }) + + function predicate(value, cb) { + // dezalgo ensures it's safe to not do this, but just in case + setTimeout(function () { cb(null, value > "j" && 
value) }) + } +}) + +test("some() doesn't find anything synchronously", function (t) { + some(["a", "b", "c", "d", "e", "f", "g"], predicate, function (error, match) { + t.ifError(error, "ran successfully") + + t.notOk(match, "nothing to find, so nothing found") + + t.end() + }) + + function predicate(value, cb) { + cb(null, value > "j" && value) + } +}) + +test("some() doesn't find anything asynchronously", function (t) { + some(["a", "b", "c", "d", "e", "f", "g"], predicate, function (error, match) { + t.ifError(error, "ran successfully") + + t.equals(match, "d", "found expected element") + + t.end() + }) + + function predicate(value, cb) { + setTimeout(function () { cb(null, value > "c" && value) }) + } +}) + +test("some() doesn't find anything synchronously", function (t) { + some(["a", "b", "c", "d", "e", "f", "g"], predicate, function (error, match) { + t.ifError(error, "ran successfully") + + t.equals(match, "d", "found expected") + + t.end() + }) + + function predicate(value, cb) { + cb(null, value > "c" && value) + } +}) diff --git a/deps/npm/node_modules/cmd-shim/index.js b/deps/npm/node_modules/cmd-shim/index.js index 7853e8605db..59a1f6cbd62 100644 --- a/deps/npm/node_modules/cmd-shim/index.js +++ b/deps/npm/node_modules/cmd-shim/index.js @@ -11,11 +11,7 @@ module.exports = cmdShim cmdShim.ifExists = cmdShimIfExists -try { - var fs = require("graceful-fs") -} catch (e) { - var fs = require("fs") -} +var fs = require("graceful-fs") var mkdir = require("mkdirp") , path = require("path") diff --git a/deps/npm/node_modules/cmd-shim/package.json b/deps/npm/node_modules/cmd-shim/package.json index 09f0c48a4dc..e1f4f543ea7 100644 --- a/deps/npm/node_modules/cmd-shim/package.json +++ b/deps/npm/node_modules/cmd-shim/package.json @@ -1,6 +1,6 @@ { "name": "cmd-shim", - "version": "2.0.0", + "version": "2.0.1", "description": "Used in npm for command line application support", "scripts": { "test": "tap test/*.js" @@ -10,26 +10,37 @@ "url": "https://github.com/ForbesLindesay/cmd-shim.git" }, "license": "BSD", - "optionalDependencies": { - "graceful-fs": "^3.0.2" - }, "dependencies": { - "mkdirp": "~0.5.0", - "graceful-fs": "^3.0.2" + "graceful-fs": ">3.0.1 <4.0.0-0", + "mkdirp": "~0.5.0" }, "devDependencies": { "tap": "~0.4.11", "rimraf": "~2.2.8" }, - "readme": "# cmd-shim\n\nThe cmd-shim used in npm to create executable scripts on Windows,\nsince symlinks are not suitable for this purpose there.\n\nOn Unix systems, you should use a symbolic link instead.\n\n[![Build Status](https://img.shields.io/travis/ForbesLindesay/cmd-shim/master.svg)](https://travis-ci.org/ForbesLindesay/cmd-shim)\n[![Dependency Status](https://img.shields.io/gemnasium/ForbesLindesay/cmd-shim.svg)](https://gemnasium.com/ForbesLindesay/cmd-shim)\n[![NPM version](https://img.shields.io/npm/v/cmd-shim.svg)](http://badge.fury.io/js/cmd-shim)\n\n## Installation\n\n```\nnpm install cmd-shim\n```\n\n## API\n\n### cmdShim(from, to, cb)\n\nCreate a cmd shim at `to` for the command line program at `from`.\ne.g.\n\n```javascript\nvar cmdShim = require('cmd-shim');\ncmdShim(__dirname + '/cli.js', '/usr/bin/command-name', function (err) {\n if (err) throw err;\n});\n```\n\n### cmdShim.ifExists(from, to, cb)\n\nThe same as above, but will just continue if the file does not exist.\nSource:\n\n```javascript\nfunction cmdShimIfExists (from, to, cb) {\n fs.stat(from, function (er) {\n if (er) return cb()\n cmdShim(from, to, cb)\n })\n}\n```\n", - "readmeFilename": "README.md", + "gitHead": "6f53d506be590fe9ac20c9801512cd1a3aad5974", 
"bugs": { "url": "https://github.com/ForbesLindesay/cmd-shim/issues" }, "homepage": "https://github.com/ForbesLindesay/cmd-shim", - "_id": "cmd-shim@2.0.0", - "_shasum": "64fd5859110051571406f61821bf37d366bc3cb3", - "_resolved": "git://github.com/othiym23/cmd-shim#12de64ca97f45ac600910092f19afacc3d5376dd", - "_from": "git://github.com/othiym23/cmd-shim", - "_fromGithub": true + "_id": "cmd-shim@2.0.1", + "_shasum": "4512a373d2391679aec51ad1d4733559e9b85d4a", + "_from": "cmd-shim@>=2.0.1-0 <3.0.0-0", + "_npmVersion": "1.5.0-alpha-4", + "_npmUser": { + "name": "forbeslindesay", + "email": "forbes@lindesay.co.uk" + }, + "maintainers": [ + { + "name": "forbeslindesay", + "email": "forbes@lindesay.co.uk" + } + ], + "dist": { + "shasum": "4512a373d2391679aec51ad1d4733559e9b85d4a", + "tarball": "http://registry.npmjs.org/cmd-shim/-/cmd-shim-2.0.1.tgz" + }, + "directories": {}, + "_resolved": "https://registry.npmjs.org/cmd-shim/-/cmd-shim-2.0.1.tgz" } diff --git a/deps/npm/node_modules/columnify/node_modules/strip-ansi/node_modules/ansi-regex/package.json b/deps/npm/node_modules/columnify/node_modules/strip-ansi/node_modules/ansi-regex/package.json index 716ae00899d..ca610250c9e 100644 --- a/deps/npm/node_modules/columnify/node_modules/strip-ansi/node_modules/ansi-regex/package.json +++ b/deps/npm/node_modules/columnify/node_modules/strip-ansi/node_modules/ansi-regex/package.json @@ -57,7 +57,7 @@ "homepage": "https://github.com/sindresorhus/ansi-regex", "_id": "ansi-regex@0.2.1", "_shasum": "0d8e946967a3d8143f93e24e298525fc1b2235f9", - "_from": "ansi-regex@^0.2.1", + "_from": "ansi-regex@0.2.1", "_npmVersion": "1.4.9", "_npmUser": { "name": "sindresorhus", @@ -74,6 +74,5 @@ "tarball": "http://registry.npmjs.org/ansi-regex/-/ansi-regex-0.2.1.tgz" }, "directories": {}, - "_resolved": "https://registry.npmjs.org/ansi-regex/-/ansi-regex-0.2.1.tgz", - "readme": "ERROR: No README data found!" + "_resolved": "https://registry.npmjs.org/ansi-regex/-/ansi-regex-0.2.1.tgz" } diff --git a/deps/npm/node_modules/columnify/node_modules/strip-ansi/package.json b/deps/npm/node_modules/columnify/node_modules/strip-ansi/package.json index 0fd180b6f27..64c4dee52c4 100644 --- a/deps/npm/node_modules/columnify/node_modules/strip-ansi/package.json +++ b/deps/npm/node_modules/columnify/node_modules/strip-ansi/package.json @@ -63,7 +63,7 @@ "homepage": "https://github.com/sindresorhus/strip-ansi", "_id": "strip-ansi@1.0.0", "_shasum": "6c021321d6ece161a3c608fbab268c7328901c73", - "_from": "strip-ansi@^1.0.0", + "_from": "strip-ansi@>=1.0.0-0 <2.0.0-0", "_npmVersion": "1.4.14", "_npmUser": { "name": "sindresorhus", @@ -84,6 +84,5 @@ "tarball": "http://registry.npmjs.org/strip-ansi/-/strip-ansi-1.0.0.tgz" }, "directories": {}, - "_resolved": "https://registry.npmjs.org/strip-ansi/-/strip-ansi-1.0.0.tgz", - "readme": "ERROR: No README data found!" 
+ "_resolved": "https://registry.npmjs.org/strip-ansi/-/strip-ansi-1.0.0.tgz" } diff --git a/deps/npm/node_modules/columnify/node_modules/wcwidth/node_modules/defaults/node_modules/clone/package.json b/deps/npm/node_modules/columnify/node_modules/wcwidth/node_modules/defaults/node_modules/clone/package.json index ee00ac7e54b..3c6b7764709 100644 --- a/deps/npm/node_modules/columnify/node_modules/wcwidth/node_modules/defaults/node_modules/clone/package.json +++ b/deps/npm/node_modules/columnify/node_modules/wcwidth/node_modules/defaults/node_modules/clone/package.json @@ -100,7 +100,7 @@ "homepage": "https://github.com/pvorb/node-clone", "_id": "clone@0.1.18", "_shasum": "64a0d5d57eaa85a1a8af380cd1db8c7b3a895f66", - "_from": "clone@~0.1.5", + "_from": "clone@>=0.1.5-0 <0.2.0-0", "_npmVersion": "1.4.14", "_npmUser": { "name": "pvorb", @@ -117,6 +117,5 @@ "tarball": "http://registry.npmjs.org/clone/-/clone-0.1.18.tgz" }, "directories": {}, - "_resolved": "https://registry.npmjs.org/clone/-/clone-0.1.18.tgz", - "readme": "ERROR: No README data found!" + "_resolved": "https://registry.npmjs.org/clone/-/clone-0.1.18.tgz" } diff --git a/deps/npm/node_modules/columnify/node_modules/wcwidth/node_modules/defaults/package.json b/deps/npm/node_modules/columnify/node_modules/wcwidth/node_modules/defaults/package.json index ba00482142e..f9243a12005 100644 --- a/deps/npm/node_modules/columnify/node_modules/wcwidth/node_modules/defaults/package.json +++ b/deps/npm/node_modules/columnify/node_modules/wcwidth/node_modules/defaults/package.json @@ -45,10 +45,6 @@ ], "directories": {}, "_shasum": "3ae25f44416c6c01f9809a25fcdd285912d2a6b1", - "_from": "defaults@^1.0.0", - "_resolved": "https://registry.npmjs.org/defaults/-/defaults-1.0.0.tgz", - "bugs": { - "url": "https://github.com/tmpvar/defaults/issues" - }, - "homepage": "https://github.com/tmpvar/defaults" + "_from": "defaults@>=1.0.0-0 <2.0.0-0", + "_resolved": "https://registry.npmjs.org/defaults/-/defaults-1.0.0.tgz" } diff --git a/deps/npm/node_modules/columnify/node_modules/wcwidth/package.json b/deps/npm/node_modules/columnify/node_modules/wcwidth/package.json index 0045c3cdba5..f12d49b789e 100644 --- a/deps/npm/node_modules/columnify/node_modules/wcwidth/package.json +++ b/deps/npm/node_modules/columnify/node_modules/wcwidth/package.json @@ -40,7 +40,7 @@ "gitHead": "5bc3aafd45c89f233c27b9479c18a23ca91ba660", "_id": "wcwidth@1.0.0", "_shasum": "02d059ff7a8fc741e0f6b5da1e69b2b40daeca6f", - "_from": "wcwidth@^1.0.0", + "_from": "wcwidth@>=1.0.0-0 <2.0.0-0", "_npmVersion": "1.4.23", "_npmUser": { "name": "timoxley", @@ -56,6 +56,5 @@ "shasum": "02d059ff7a8fc741e0f6b5da1e69b2b40daeca6f", "tarball": "http://registry.npmjs.org/wcwidth/-/wcwidth-1.0.0.tgz" }, - "_resolved": "https://registry.npmjs.org/wcwidth/-/wcwidth-1.0.0.tgz", - "readme": "ERROR: No README data found!" 
+ "_resolved": "https://registry.npmjs.org/wcwidth/-/wcwidth-1.0.0.tgz" } diff --git a/deps/npm/node_modules/columnify/package.json b/deps/npm/node_modules/columnify/package.json index 01ac64bb210..ef307b50925 100644 --- a/deps/npm/node_modules/columnify/package.json +++ b/deps/npm/node_modules/columnify/package.json @@ -43,7 +43,7 @@ "gitHead": "14e77bef3f57acaa3f390145915a9f2d2a4f882c", "_id": "columnify@1.2.1", "_shasum": "921ec51c178f4126d3c07e9acecd67a55c7953e4", - "_from": "columnify@^1.2.1", + "_from": "columnify@>=1.2.1-0 <2.0.0-0", "_npmVersion": "1.4.23", "_npmUser": { "name": "timoxley", @@ -59,6 +59,5 @@ "shasum": "921ec51c178f4126d3c07e9acecd67a55c7953e4", "tarball": "http://registry.npmjs.org/columnify/-/columnify-1.2.1.tgz" }, - "_resolved": "https://registry.npmjs.org/columnify/-/columnify-1.2.1.tgz", - "readme": "ERROR: No README data found!" + "_resolved": "https://registry.npmjs.org/columnify/-/columnify-1.2.1.tgz" } diff --git a/deps/npm/node_modules/npmconf/node_modules/config-chain/.npmignore b/deps/npm/node_modules/config-chain/.npmignore similarity index 100% rename from deps/npm/node_modules/npmconf/node_modules/config-chain/.npmignore rename to deps/npm/node_modules/config-chain/.npmignore diff --git a/deps/npm/node_modules/npmconf/node_modules/config-chain/LICENCE b/deps/npm/node_modules/config-chain/LICENCE similarity index 100% rename from deps/npm/node_modules/npmconf/node_modules/config-chain/LICENCE rename to deps/npm/node_modules/config-chain/LICENCE diff --git a/deps/npm/node_modules/npmconf/node_modules/config-chain/index.js b/deps/npm/node_modules/config-chain/index.js similarity index 100% rename from deps/npm/node_modules/npmconf/node_modules/config-chain/index.js rename to deps/npm/node_modules/config-chain/index.js diff --git a/deps/npm/node_modules/npmconf/node_modules/config-chain/node_modules/proto-list/LICENSE b/deps/npm/node_modules/config-chain/node_modules/proto-list/LICENSE similarity index 100% rename from deps/npm/node_modules/npmconf/node_modules/config-chain/node_modules/proto-list/LICENSE rename to deps/npm/node_modules/config-chain/node_modules/proto-list/LICENSE diff --git a/deps/npm/node_modules/npmconf/node_modules/config-chain/node_modules/proto-list/README.md b/deps/npm/node_modules/config-chain/node_modules/proto-list/README.md similarity index 100% rename from deps/npm/node_modules/npmconf/node_modules/config-chain/node_modules/proto-list/README.md rename to deps/npm/node_modules/config-chain/node_modules/proto-list/README.md diff --git a/deps/npm/node_modules/npmconf/node_modules/config-chain/node_modules/proto-list/package.json b/deps/npm/node_modules/config-chain/node_modules/proto-list/package.json similarity index 100% rename from deps/npm/node_modules/npmconf/node_modules/config-chain/node_modules/proto-list/package.json rename to deps/npm/node_modules/config-chain/node_modules/proto-list/package.json diff --git a/deps/npm/node_modules/npmconf/node_modules/config-chain/node_modules/proto-list/proto-list.js b/deps/npm/node_modules/config-chain/node_modules/proto-list/proto-list.js similarity index 100% rename from deps/npm/node_modules/npmconf/node_modules/config-chain/node_modules/proto-list/proto-list.js rename to deps/npm/node_modules/config-chain/node_modules/proto-list/proto-list.js diff --git a/deps/npm/node_modules/npmconf/node_modules/config-chain/node_modules/proto-list/test/basic.js b/deps/npm/node_modules/config-chain/node_modules/proto-list/test/basic.js similarity index 100% rename from 
deps/npm/node_modules/npmconf/node_modules/config-chain/node_modules/proto-list/test/basic.js rename to deps/npm/node_modules/config-chain/node_modules/proto-list/test/basic.js diff --git a/deps/npm/node_modules/npmconf/node_modules/config-chain/package.json b/deps/npm/node_modules/config-chain/package.json similarity index 99% rename from deps/npm/node_modules/npmconf/node_modules/config-chain/package.json rename to deps/npm/node_modules/config-chain/package.json index c59f5ceeb6d..a07f2f41433 100644 --- a/deps/npm/node_modules/npmconf/node_modules/config-chain/package.json +++ b/deps/npm/node_modules/config-chain/package.json @@ -32,7 +32,7 @@ "shasum": "0943d0b7227213a20d4eaff4434f4a1c0a052cad", "tarball": "http://registry.npmjs.org/config-chain/-/config-chain-1.1.8.tgz" }, - "_from": "config-chain@~1.1.8", + "_from": "config-chain@^1.1.8", "_npmVersion": "1.3.6", "_npmUser": { "name": "dominictarr", diff --git a/deps/npm/node_modules/npmconf/node_modules/config-chain/readme.markdown b/deps/npm/node_modules/config-chain/readme.markdown similarity index 100% rename from deps/npm/node_modules/npmconf/node_modules/config-chain/readme.markdown rename to deps/npm/node_modules/config-chain/readme.markdown diff --git a/deps/npm/node_modules/npmconf/node_modules/config-chain/test/broken.js b/deps/npm/node_modules/config-chain/test/broken.js similarity index 100% rename from deps/npm/node_modules/npmconf/node_modules/config-chain/test/broken.js rename to deps/npm/node_modules/config-chain/test/broken.js diff --git a/deps/npm/node_modules/npmconf/node_modules/config-chain/test/broken.json b/deps/npm/node_modules/config-chain/test/broken.json similarity index 100% rename from deps/npm/node_modules/npmconf/node_modules/config-chain/test/broken.json rename to deps/npm/node_modules/config-chain/test/broken.json diff --git a/deps/npm/node_modules/npmconf/node_modules/config-chain/test/chain-class.js b/deps/npm/node_modules/config-chain/test/chain-class.js similarity index 100% rename from deps/npm/node_modules/npmconf/node_modules/config-chain/test/chain-class.js rename to deps/npm/node_modules/config-chain/test/chain-class.js diff --git a/deps/npm/node_modules/npmconf/node_modules/config-chain/test/env.js b/deps/npm/node_modules/config-chain/test/env.js similarity index 100% rename from deps/npm/node_modules/npmconf/node_modules/config-chain/test/env.js rename to deps/npm/node_modules/config-chain/test/env.js diff --git a/deps/npm/node_modules/npmconf/node_modules/config-chain/test/find-file.js b/deps/npm/node_modules/config-chain/test/find-file.js similarity index 100% rename from deps/npm/node_modules/npmconf/node_modules/config-chain/test/find-file.js rename to deps/npm/node_modules/config-chain/test/find-file.js diff --git a/deps/npm/node_modules/npmconf/node_modules/config-chain/test/get.js b/deps/npm/node_modules/config-chain/test/get.js similarity index 100% rename from deps/npm/node_modules/npmconf/node_modules/config-chain/test/get.js rename to deps/npm/node_modules/config-chain/test/get.js diff --git a/deps/npm/node_modules/npmconf/node_modules/config-chain/test/ignore-unfound-file.js b/deps/npm/node_modules/config-chain/test/ignore-unfound-file.js similarity index 100% rename from deps/npm/node_modules/npmconf/node_modules/config-chain/test/ignore-unfound-file.js rename to deps/npm/node_modules/config-chain/test/ignore-unfound-file.js diff --git a/deps/npm/node_modules/npmconf/node_modules/config-chain/test/ini.js b/deps/npm/node_modules/config-chain/test/ini.js similarity index 100% 
rename from deps/npm/node_modules/npmconf/node_modules/config-chain/test/ini.js rename to deps/npm/node_modules/config-chain/test/ini.js diff --git a/deps/npm/node_modules/npmconf/node_modules/config-chain/test/save.js b/deps/npm/node_modules/config-chain/test/save.js similarity index 100% rename from deps/npm/node_modules/npmconf/node_modules/config-chain/test/save.js rename to deps/npm/node_modules/config-chain/test/save.js diff --git a/deps/npm/node_modules/dezalgo/README.md b/deps/npm/node_modules/dezalgo/README.md new file mode 100644 index 00000000000..bdfc8ba80d0 --- /dev/null +++ b/deps/npm/node_modules/dezalgo/README.md @@ -0,0 +1,29 @@ +# dezalgo + +Contain async insanity so that the dark pony lord doesn't eat souls + +See [this blog +post](http://blog.izs.me/post/59142742143/designing-apis-for-asynchrony). + +## USAGE + +Pass a callback to `dezalgo` and it will ensure that it is *always* +called in a future tick, and never in this tick. + +```javascript +var dz = require('dezalgo') + +var cache = {} +function maybeSync(arg, cb) { + cb = dz(cb) + + // this will actually defer to nextTick + if (cache[arg]) cb(null, cache[arg]) + + fs.readFile(arg, function (er, data) { + // since this is *already* defered, it will call immediately + if (er) cb(er) + cb(null, cache[arg] = data) + }) +} +``` diff --git a/deps/npm/node_modules/dezalgo/dezalgo.js b/deps/npm/node_modules/dezalgo/dezalgo.js new file mode 100644 index 00000000000..04fd3ba7814 --- /dev/null +++ b/deps/npm/node_modules/dezalgo/dezalgo.js @@ -0,0 +1,22 @@ +var wrappy = require('wrappy') +module.exports = wrappy(dezalgo) + +var asap = require('asap') + +function dezalgo (cb) { + var sync = true + asap(function () { + sync = false + }) + + return function zalgoSafe() { + var args = arguments + var me = this + if (sync) + asap(function() { + cb.apply(me, args) + }) + else + cb.apply(me, args) + } +} diff --git a/deps/npm/node_modules/dezalgo/node_modules/asap/LICENSE.md b/deps/npm/node_modules/dezalgo/node_modules/asap/LICENSE.md new file mode 100644 index 00000000000..5d98ad8fe99 --- /dev/null +++ b/deps/npm/node_modules/dezalgo/node_modules/asap/LICENSE.md @@ -0,0 +1,20 @@ + +Copyright 2009–2013 Contributors. All rights reserved. +Permission is hereby granted, free of charge, to any person obtaining a copy +of this software and associated documentation files (the "Software"), to +deal in the Software without restriction, including without limitation the +rights to use, copy, modify, merge, publish, distribute, sublicense, and/or +sell copies of the Software, and to permit persons to whom the Software is +furnished to do so, subject to the following conditions: + +The above copyright notice and this permission notice shall be included in +all copies or substantial portions of the Software. + +THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR +IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, +FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE +AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER +LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING +FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS +IN THE SOFTWARE. 
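To make the guarantee concrete, a minimal sketch assuming only the dezalgo module added above (no semicolons, matching its house style):

```javascript
var dz = require('dezalgo')

// dezalgo's job: a wrapped callback never fires in the current tick.
function maybeSync(cb) {
  cb = dz(cb)
  cb(null, 'hi') // invoked synchronously here...
}

maybeSync(function (er, data) {
  console.log('callback') // ...but delivered on a later tick
})
console.log('after call')
// Prints "after call" before "callback": callers see one consistent,
// always-asynchronous ordering regardless of how cb was invoked.
```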
+ diff --git a/deps/npm/node_modules/dezalgo/node_modules/asap/README.md b/deps/npm/node_modules/dezalgo/node_modules/asap/README.md new file mode 100644 index 00000000000..9a42759761d --- /dev/null +++ b/deps/npm/node_modules/dezalgo/node_modules/asap/README.md @@ -0,0 +1,81 @@ + +# ASAP + +This `asap` CommonJS package contains a single `asap` module that +exports a single `asap` function that executes a function **as soon as +possible**. + +```javascript +asap(function () { + // ... +}); +``` + +More formally, ASAP provides a fast event queue that will execute tasks +until it is empty before yielding to the JavaScript engine's underlying +event-loop. When the event queue becomes non-empty, ASAP schedules a +flush event, preferring for that event to occur before the JavaScript +engine has an opportunity to perform IO tasks or rendering, thus making +the first task and subsequent tasks semantically indistinguishable. +ASAP uses a variety of techniques to preserve this invariant on +different versions of browsers and NodeJS. + +By design, ASAP can starve the event loop on the theory that, if there +is enough work to be done synchronously, albeit in separate events, long +enough to starve input or output, it is a strong indicator that the +program needs to push back on scheduling more work. + +Take care. ASAP can sustain infinite recursive calls indefinitely +without warning. This is behaviorally equivalent to an infinite loop. +It will not halt from a stack overflow, but it *will* chew through +memory (which is an oddity I cannot explain at this time). Just as with +infinite loops, you can monitor a Node process for this behavior with a +heart-beat signal. As with infinite loops, a very small amount of +caution goes a long way to avoiding problems. + +```javascript +function loop() { + asap(loop); +} +loop(); +``` + +ASAP is distinct from `setImmediate` in that it does not suffer the +overhead of returning a handle and being possible to cancel. For a +`setImmediate` shim, consider [setImmediate][]. + +[setImmediate]: https://github.com/noblejs/setimmediate + +If a task throws an exception, it will not interrupt the flushing of +high-priority tasks. The exception will be postponed to a later, +low-priority event to avoid slow-downs, when the underlying JavaScript +engine will treat it as it does any unhandled exception. + +## Heritage + +ASAP has been factored out of the [Q][] asynchronous promise library. +It originally had a naïve implementation in terms of `setTimeout`, but +[Malte Ubl][NonBlocking] provided an insight that `postMessage` might be +useful for creating a high-priority, no-delay event dispatch hack. +Since then, Internet Explorer proposed and implemented `setImmediate`. +Robert Kratić began contributing to Q by measuring the performance of +the internal implementation of `asap`, paying particular attention to +error recovery. Domenic, Robert, and I collectively settled on the +current strategy of unrolling the high-priority event queue internally +regardless of what strategy we used to dispatch the potentially +lower-priority flush event. Domenic went on to make ASAP cooperate with +NodeJS domains. + +[Q]: https://github.com/kriskowal/q +[NonBlocking]: http://www.nonblocking.io/2011/06/windownexttick.html + +For further reading, Nicholas Zakas provided a thorough article on [The +Case for setImmediate][NCZ]. 
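+
+A small ordering sketch (hypothetical, Node.js flavor): tasks queued with
+`asap` drain before timers get a chance to fire.
+
+```javascript
+var asap = require("asap");
+
+setTimeout(function () { console.log("timeout"); }, 0);
+asap(function () { console.log("first"); });
+asap(function () { console.log("second"); });
+// On Node this prints "first", "second", "timeout": the ASAP queue is
+// flushed to empty before control returns to the event loop.
+```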
+ +[NCZ]: http://www.nczonline.net/blog/2013/07/09/the-case-for-setimmediate/ + +## License + +Copyright 2009-2013 by Contributors +MIT License (enclosed) + diff --git a/deps/npm/node_modules/dezalgo/node_modules/asap/asap.js b/deps/npm/node_modules/dezalgo/node_modules/asap/asap.js new file mode 100644 index 00000000000..2f85516cde0 --- /dev/null +++ b/deps/npm/node_modules/dezalgo/node_modules/asap/asap.js @@ -0,0 +1,113 @@ + +// Use the fastest possible means to execute a task in a future turn +// of the event loop. + +// linked list of tasks (single, with head node) +var head = {task: void 0, next: null}; +var tail = head; +var flushing = false; +var requestFlush = void 0; +var isNodeJS = false; + +function flush() { + /* jshint loopfunc: true */ + + while (head.next) { + head = head.next; + var task = head.task; + head.task = void 0; + var domain = head.domain; + + if (domain) { + head.domain = void 0; + domain.enter(); + } + + try { + task(); + + } catch (e) { + if (isNodeJS) { + // In node, uncaught exceptions are considered fatal errors. + // Re-throw them synchronously to interrupt flushing! + + // Ensure continuation if the uncaught exception is suppressed + // listening "uncaughtException" events (as domains does). + // Continue in next event to avoid tick recursion. + if (domain) { + domain.exit(); + } + setTimeout(flush, 0); + if (domain) { + domain.enter(); + } + + throw e; + + } else { + // In browsers, uncaught exceptions are not fatal. + // Re-throw them asynchronously to avoid slow-downs. + setTimeout(function() { + throw e; + }, 0); + } + } + + if (domain) { + domain.exit(); + } + } + + flushing = false; +} + +if (typeof process !== "undefined" && process.nextTick) { + // Node.js before 0.9. Note that some fake-Node environments, like the + // Mocha test runner, introduce a `process` global without a `nextTick`. 
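+ // Queued via nextTick, flush runs ahead of timers and IO callbacks,
+ // matching the "flush before yielding to the event loop" behavior
+ // described in the README.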
+ isNodeJS = true; + + requestFlush = function () { + process.nextTick(flush); + }; + +} else if (typeof setImmediate === "function") { + // In IE10, Node.js 0.9+, or https://github.com/NobleJS/setImmediate + if (typeof window !== "undefined") { + requestFlush = setImmediate.bind(window, flush); + } else { + requestFlush = function () { + setImmediate(flush); + }; + } + +} else if (typeof MessageChannel !== "undefined") { + // modern browsers + // http://www.nonblocking.io/2011/06/windownexttick.html + var channel = new MessageChannel(); + channel.port1.onmessage = flush; + requestFlush = function () { + channel.port2.postMessage(0); + }; + +} else { + // old browsers + requestFlush = function () { + setTimeout(flush, 0); + }; +} + +function asap(task) { + tail = tail.next = { + task: task, + domain: isNodeJS && process.domain, + next: null + }; + + if (!flushing) { + flushing = true; + requestFlush(); + } +}; + +module.exports = asap; + diff --git a/deps/npm/node_modules/dezalgo/node_modules/asap/package.json b/deps/npm/node_modules/dezalgo/node_modules/asap/package.json new file mode 100644 index 00000000000..311f9fc0c7e --- /dev/null +++ b/deps/npm/node_modules/dezalgo/node_modules/asap/package.json @@ -0,0 +1,39 @@ +{ + "name": "asap", + "version": "1.0.0", + "description": "High-priority task queue for Node.js and browsers", + "keywords": [ + "event", + "task", + "queue" + ], + "licenses": [ + { + "type": "MIT", + "url": "https://github.com/kriskowal/asap/raw/master/LICENSE.md" + } + ], + "main": "asap", + "readme": "\n# ASAP\n\nThis `asap` CommonJS package contains a single `asap` module that\nexports a single `asap` function that executes a function **as soon as\npossible**.\n\n```javascript\nasap(function () {\n // ...\n});\n```\n\nMore formally, ASAP provides a fast event queue that will execute tasks\nuntil it is empty before yielding to the JavaScript engine's underlying\nevent-loop. When the event queue becomes non-empty, ASAP schedules a\nflush event, preferring for that event to occur before the JavaScript\nengine has an opportunity to perform IO tasks or rendering, thus making\nthe first task and subsequent tasks semantically indistinguishable.\nASAP uses a variety of techniques to preserve this invariant on\ndifferent versions of browsers and NodeJS.\n\nBy design, ASAP can starve the event loop on the theory that, if there\nis enough work to be done synchronously, albeit in separate events, long\nenough to starve input or output, it is a strong indicator that the\nprogram needs to push back on scheduling more work.\n\nTake care. ASAP can sustain infinite recursive calls indefinitely\nwithout warning. This is behaviorally equivalent to an infinite loop.\nIt will not halt from a stack overflow, but it *will* chew through\nmemory (which is an oddity I cannot explain at this time). Just as with\ninfinite loops, you can monitor a Node process for this behavior with a\nheart-beat signal. As with infinite loops, a very small amount of\ncaution goes a long way to avoiding problems.\n\n```javascript\nfunction loop() {\n asap(loop);\n}\nloop();\n```\n\nASAP is distinct from `setImmediate` in that it does not suffer the\noverhead of returning a handle and being possible to cancel. For a\n`setImmediate` shim, consider [setImmediate][].\n\n[setImmediate]: https://github.com/noblejs/setimmediate\n\nIf a task throws an exception, it will not interrupt the flushing of\nhigh-priority tasks. 
The exception will be postponed to a later,\nlow-priority event to avoid slow-downs, when the underlying JavaScript\nengine will treat it as it does any unhandled exception.\n\n## Heritage\n\nASAP has been factored out of the [Q][] asynchronous promise library.\nIt originally had a naïve implementation in terms of `setTimeout`, but\n[Malte Ubl][NonBlocking] provided an insight that `postMessage` might be\nuseful for creating a high-priority, no-delay event dispatch hack.\nSince then, Internet Explorer proposed and implemented `setImmediate`.\nRobert Kratić began contributing to Q by measuring the performance of\nthe internal implementation of `asap`, paying particular attention to\nerror recovery. Domenic, Robert, and I collectively settled on the\ncurrent strategy of unrolling the high-priority event queue internally\nregardless of what strategy we used to dispatch the potentially\nlower-priority flush event. Domenic went on to make ASAP cooperate with\nNodeJS domains.\n\n[Q]: https://github.com/kriskowal/q\n[NonBlocking]: http://www.nonblocking.io/2011/06/windownexttick.html\n\nFor further reading, Nicholas Zakas provided a thorough article on [The\nCase for setImmediate][NCZ].\n\n[NCZ]: http://www.nczonline.net/blog/2013/07/09/the-case-for-setimmediate/\n\n## License\n\nCopyright 2009-2013 by Contributors\nMIT License (enclosed)\n\n", + "readmeFilename": "README.md", + "_id": "asap@1.0.0", + "dist": { + "shasum": "b2a45da5fdfa20b0496fc3768cc27c12fa916a7d", + "tarball": "http://registry.npmjs.org/asap/-/asap-1.0.0.tgz" + }, + "_from": "asap@>=1.0.0 <2.0.0", + "_npmVersion": "1.2.15", + "_npmUser": { + "name": "kriskowal", + "email": "kris.kowal@cixar.com" + }, + "maintainers": [ + { + "name": "kriskowal", + "email": "kris.kowal@cixar.com" + } + ], + "directories": {}, + "_shasum": "b2a45da5fdfa20b0496fc3768cc27c12fa916a7d", + "_resolved": "https://registry.npmjs.org/asap/-/asap-1.0.0.tgz" +} diff --git a/deps/npm/node_modules/dezalgo/package.json b/deps/npm/node_modules/dezalgo/package.json new file mode 100644 index 00000000000..1f63e83a18a --- /dev/null +++ b/deps/npm/node_modules/dezalgo/package.json @@ -0,0 +1,67 @@ +{ + "name": "dezalgo", + "version": "1.0.1", + "description": "Contain async insanity so that the dark pony lord doesn't eat souls", + "main": "dezalgo.js", + "directories": { + "test": "test" + }, + "dependencies": { + "asap": "^1.0.0", + "wrappy": "1" + }, + "devDependencies": { + "tap": "^0.4.11" + }, + "scripts": { + "test": "tap test/*.js" + }, + "repository": { + "type": "git", + "url": "https://github.com/npm/dezalgo" + }, + "keywords": [ + "async", + "zalgo", + "the dark pony", + "he comes", + "asynchrony of all holy and good", + "T̯̪ͅo̯͖̹ ̻̮̖̲͢i̥̖n̢͈͇̝͍v͏͉ok̭̬̝ͅe̞͍̩̫͍̩͝ ̩̮̖̟͇͉́t͔͔͎̗h͏̗̟e̘͉̰̦̠̞͓ ͕h͉̟͎̪̠̱͠ḭ̮̩v̺͉͇̩e̵͖-̺̪m͍i̜n̪̲̲̲̮d̷ ̢r̠̼̯̹̦̦͘ͅe͓̳͓̙p̺̗̫͙͘ͅr͔̰͜e̴͓̞s͉̩̩͟ͅe͏̣n͚͇̗̭̺͍tì͙̣n͏̖̥̗͎̰̪g̞͓̭̱̯̫̕ ̣̱͜ͅc̦̰̰̠̮͎͙̀hao̺̜̻͍͙ͅs͉͓̘.͎̼̺̼͕̹͘", + "̠̞̱̰I͖͇̝̻n̦̰͍̰̟v̤̺̫̳̭̼̗͘ò̹̟̩̩͚k̢̥̠͍͉̦̬i̖͓͔̮̱̻͘n̶̳͙̫͎g̖̯̣̲̪͉ ̞͎̗͕͚ͅt̲͕̘̺̯̗̦h̘̦̲̜̻e̳͎͉̬͙ ̴̞̪̲̥f̜̯͓͓̭̭͢e̱̘͔̮e̜̤l̺̱͖̯͓͙͈͢i̵̦̬͉͔̫͚͕n͉g̨͖̙̙̹̹̟̤ ͉̪o̞̠͍̪̰͙ͅf̬̲̺ ͔͕̲͕͕̲̕c̙͉h̝͔̩̙̕ͅa̲͖̻̗̹o̥̼̫s̝̖̜̝͚̫̟.̺͚ ̸̱̲W̶̥̣͖̦i͏̤̬̱̳̣ͅt͉h̗̪̪ ̷̱͚̹̪ǫ͕̗̣̳̦͎u̼̦͔̥̮̕ţ͖͎̻͔͉ ̴͎̩òr̹̰̖͉͈͝d̷̲̦̖͓e̲͓̠r", + "̧͚̜͓̰̭̭Ṯ̫̹̜̮̟̮͝h͚̘̩̘̖̰́e ̥̘͓͉͔͙̼N̟̜̣̘͔̪e̞̞̤͢z̰̖̘͇p̠͟e̺̱̣͍͙̝ṛ̘̬͔̙͇̠d͝ḭ̯̱̥̗̩a̛ͅn͏̦ ̷̥hi̥v̖̳̹͉̮̱͝e̹̪̘̖̰̟-̴͙͓͚̜̻mi̗̺̻͙̺ͅn̪̯͈d ͏̘͓̫̳ͅơ̹͔̳̖̣͓f͈̹̘ ͕ͅc̗̤̠̜̮̥̥h̡͍̩̭̫͚̱a̤͉̤͔͜os͕̤̼͍̲̀ͅ.̡̱ ̦Za̯̱̗̭͍̣͚l̗͉̰̤g͏̣̭̬̗̲͖ͅo̶̭̩̳̟͈.̪̦̰̳", + "H̴̱̦̗̬̣͓̺e̮ ͉̠̰̞͎̖͟ẁh̛̺̯ͅo̖̫͡ ̢Ẁa̡̗i̸t͖̣͉̀ş͔̯̩ ̤̦̮͇̞̦̲B͎̭͇̦̼e̢hin͏͙̟̪d̴̰͓̻̣̮͕ͅ T͖̮̕h͖e̘̺̰̙͘ ̥Ẁ̦͔̻͚a̞͖̪͉l̪̠̻̰̣̠l̲͎͞", + 
"Z̘͍̼͎̣͔͝Ą̲̜̱̱̹̤͇L̶̝̰̭͔G͍̖͍O̫͜ͅ!̼̤ͅ", + "H̝̪̜͓̀̌̂̒E̢̙̠̣ ̴̳͇̥̟̠͍̐C̹̓̑̐̆͝Ó̶̭͓̚M̬̼Ĕ̖̤͔͔̟̹̽̿̊ͥ̍ͫS̻̰̦̻̖̘̱̒ͪ͌̅͟" + ], + "author": { + "name": "Isaac Z. Schlueter", + "email": "i@izs.me", + "url": "http://blog.izs.me/" + }, + "license": "ISC", + "bugs": { + "url": "https://github.com/npm/dezalgo/issues" + }, + "homepage": "https://github.com/npm/dezalgo", + "gitHead": "0a5eee75c179611f8b67f663015d68bb517e57d2", + "_id": "dezalgo@1.0.1", + "_shasum": "12bde135060807900d5a7aebb607c2abb7c76937", + "_from": "dezalgo@latest", + "_npmVersion": "2.0.0", + "_nodeVersion": "0.10.31", + "_npmUser": { + "name": "isaacs", + "email": "i@izs.me" + }, + "maintainers": [ + { + "name": "isaacs", + "email": "i@izs.me" + } + ], + "dist": { + "shasum": "12bde135060807900d5a7aebb607c2abb7c76937", + "tarball": "http://registry.npmjs.org/dezalgo/-/dezalgo-1.0.1.tgz" + }, + "_resolved": "https://registry.npmjs.org/dezalgo/-/dezalgo-1.0.1.tgz" +} diff --git a/deps/npm/node_modules/dezalgo/test/basic.js b/deps/npm/node_modules/dezalgo/test/basic.js new file mode 100644 index 00000000000..da09e724da3 --- /dev/null +++ b/deps/npm/node_modules/dezalgo/test/basic.js @@ -0,0 +1,29 @@ +var test = require('tap').test +var dz = require('../dezalgo.js') + +test('the dark pony', function(t) { + + var n = 0 + function foo(i, cb) { + cb = dz(cb) + if (++n % 2) cb(true, i) + else process.nextTick(cb.bind(null, false, i)) + } + + var called = 0 + var order = [0, 2, 4, 6, 8, 1, 3, 5, 7, 9] + var o = 0 + for (var i = 0; i < 10; i++) { + foo(i, function(cached, i) { + t.equal(i, order[o++]) + t.equal(i % 2, cached ? 0 : 1) + called++ + }) + t.equal(called, 0) + } + + setTimeout(function() { + t.equal(called, 10) + t.end() + }) +}) diff --git a/deps/npm/node_modules/fs-vacuum/.eslintrc b/deps/npm/node_modules/fs-vacuum/.eslintrc new file mode 100644 index 00000000000..5c39c67eca0 --- /dev/null +++ b/deps/npm/node_modules/fs-vacuum/.eslintrc @@ -0,0 +1,18 @@ +{ + "env" : { + "node" : true + }, + "rules" : { + "curly" : 0, + "no-lonely-if" : 1, + "no-mixed-requires" : 0, + "no-underscore-dangle" : 0, + "no-unused-vars" : [2, {"vars" : "all", "args" : "after-used"}], + "no-use-before-define" : [2, "nofunc"], + "quotes" : [1, "double", "avoid-escape"], + "semi" : [2, "never"], + "space-after-keywords" : 1, + "space-infix-ops" : 0, + "strict" : 0 + } +} diff --git a/deps/npm/node_modules/fs-vacuum/.npmignore b/deps/npm/node_modules/fs-vacuum/.npmignore new file mode 100644 index 00000000000..3c3629e647f --- /dev/null +++ b/deps/npm/node_modules/fs-vacuum/.npmignore @@ -0,0 +1 @@ +node_modules diff --git a/deps/npm/node_modules/fs-vacuum/README.md b/deps/npm/node_modules/fs-vacuum/README.md new file mode 100644 index 00000000000..df31243df5c --- /dev/null +++ b/deps/npm/node_modules/fs-vacuum/README.md @@ -0,0 +1,33 @@ +# fs-vacuum + +Remove the empty branches of a directory tree, optionally up to (but not +including) a specified base directory. Optionally nukes the leaf directory. + +## Usage + +```javascript +var logger = require("npmlog"); +var vacuum = require("fs-vacuum"); + +var options = { + base : "/path/to/my/tree/root", + purge : true, + log : logger.silly.bind(logger, "myCleanup") +}; + +/* Assuming there are no other files or directories in "out", "to", or "my", + * the final path will just be "/path/to/my/tree/root". 
+ */ +vacuum("/path/to/my/tree/root/out/to/my/files", function (error) { + if (error) console.error("Unable to cleanly vacuum:", error.message); +}); +``` +# vacuum(directory, options, callback) + +* `directory` {String} Leaf node to remove. **Must be a directory, symlink, or file.** +* `options` {Object} + * `base` {String} No directories at or above this level of the filesystem will be removed. + * `purge` {Boolean} If set, nuke the whole leaf directory, including its contents. + * `log` {Function} A logging function that takes `npmlog`-compatible argument lists. +* `callback` {Function} Function to call once vacuuming is complete. + * `error` {Error} What went wrong along the way, if anything. diff --git a/deps/npm/node_modules/fs-vacuum/package.json b/deps/npm/node_modules/fs-vacuum/package.json new file mode 100644 index 00000000000..140536797f8 --- /dev/null +++ b/deps/npm/node_modules/fs-vacuum/package.json @@ -0,0 +1,42 @@ +{ + "name": "fs-vacuum", + "version": "1.2.1", + "description": "recursively remove empty directories -- to a point", + "main": "vacuum.js", + "scripts": { + "test": "tap test/*.js" + }, + "repository": { + "type": "git", + "url": "https://github.com/npm/fs-vacuum.git" + }, + "keywords": [ + "rm", + "rimraf", + "clean" + ], + "author": { + "name": "Forrest L Norvell", + "email": "ogd@aoaioxxysz.net" + }, + "license": "ISC", + "bugs": { + "url": "https://github.com/npm/fs-vacuum/issues" + }, + "homepage": "https://github.com/npm/fs-vacuum", + "devDependencies": { + "mkdirp": "^0.5.0", + "tap": "^0.4.11", + "tmp": "0.0.23" + }, + "dependencies": { + "graceful-fs": "^3.0.2", + "rimraf": "^2.2.8" + }, + "readme": "# fs-vacuum\n\nRemove the empty branches of a directory tree, optionally up to (but not\nincluding) a specified base directory. Optionally nukes the leaf directory.\n\n## Usage\n\n```javascript\nvar logger = require(\"npmlog\");\nvar vacuum = require(\"fs-vacuum\");\n\nvar options = {\n base : \"/path/to/my/tree/root\",\n purge : true,\n log : logger.silly.bind(logger, \"myCleanup\")\n};\n\n/* Assuming there are no other files or directories in \"out\", \"to\", or \"my\",\n * the final path will just be \"/path/to/my/tree/root\".\n */\nvacuum(\"/path/to/my/tree/root/out/to/my/files\", function (error) {\n if (error) console.error(\"Unable to cleanly vacuum:\", error.message);\n});\n```\n# vacuum(directory, options, callback)\n\n* `directory` {String} Leaf node to remove. 
**Must be a directory, symlink, or file.**\n* `options` {Object}\n * `base` {String} No directories at or above this level of the filesystem will be removed.\n * `purge` {Boolean} If set, nuke the whole leaf directory, including its contents.\n * `log` {Function} A logging function that takes `npmlog`-compatible argument lists.\n* `callback` {Function} Function to call once vacuuming is complete.\n * `error` {Error} What went wrong along the way, if anything.\n", + "readmeFilename": "README.md", + "gitHead": "bad24b21c45d86b3da991f2c3d058ef03546d83e", + "_id": "fs-vacuum@1.2.1", + "_shasum": "1bc3c62da30d6272569b8b9089c9811abb0a600b", + "_from": "fs-vacuum@>=1.2.1-0 <1.3.0-0" +} diff --git a/deps/npm/node_modules/fs-vacuum/test/arguments.js b/deps/npm/node_modules/fs-vacuum/test/arguments.js new file mode 100644 index 00000000000..d77ce0627d2 --- /dev/null +++ b/deps/npm/node_modules/fs-vacuum/test/arguments.js @@ -0,0 +1,24 @@ +var test = require("tap").test + +var vacuum = require("../vacuum.js") + +test("vacuum throws on missing parameters", function (t) { + t.throws(vacuum, "called with no parameters") + t.throws(function () { vacuum("directory", {}) }, "called with no callback") + + t.end() +}) + +test('vacuum throws on incorrect types ("Forrest is pedantic" section)', function (t) { + t.throws(function () { + vacuum({}, {}, function () {}) + }, "called with path parameter of incorrect type") + t.throws(function () { + vacuum("directory", "directory", function () {}) + }, "called with options of wrong type") + t.throws(function () { + vacuum("directory", {}, "whoops") + }, "called with callback that isn't callable") + + t.end() +}) diff --git a/deps/npm/node_modules/fs-vacuum/test/base-leaf-mismatch.js b/deps/npm/node_modules/fs-vacuum/test/base-leaf-mismatch.js new file mode 100644 index 00000000000..cfdf074fe43 --- /dev/null +++ b/deps/npm/node_modules/fs-vacuum/test/base-leaf-mismatch.js @@ -0,0 +1,16 @@ +var test = require("tap").test + +var vacuum = require("../vacuum.js") + +test("vacuum errors when base is set and path is not under it", function (t) { + vacuum("/a/made/up/path", {base : "/root/elsewhere"}, function (er) { + t.ok(er, "got an error") + t.equal( + er.message, + "/a/made/up/path is not a child of /root/elsewhere", + "got the expected error message" + ) + + t.end() + }) +}) diff --git a/deps/npm/node_modules/fs-vacuum/test/no-entries-file-no-purge.js b/deps/npm/node_modules/fs-vacuum/test/no-entries-file-no-purge.js new file mode 100644 index 00000000000..6a6e51bcab8 --- /dev/null +++ b/deps/npm/node_modules/fs-vacuum/test/no-entries-file-no-purge.js @@ -0,0 +1,78 @@ +var path = require("path") + +var test = require("tap").test +var statSync = require("graceful-fs").statSync +var writeFile = require("graceful-fs").writeFile +var readdirSync = require("graceful-fs").readdirSync +var mkdtemp = require("tmp").dir +var mkdirp = require("mkdirp") + +var vacuum = require("../vacuum.js") + +// CONSTANTS +var TEMP_OPTIONS = { + unsafeCleanup : true, + mode : "0700" +} +var SHORT_PATH = path.join("i", "am", "a", "path") +var PARTIAL_PATH = path.join(SHORT_PATH, "that", "ends", "at", "a") +var FULL_PATH = path.join(PARTIAL_PATH, "file") + +var messages = [] +function log() { messages.push(Array.prototype.slice.call(arguments).join(" ")) } + +var testBase, partialPath, fullPath +test("xXx setup xXx", function (t) { + mkdtemp(TEMP_OPTIONS, function (er, tmpdir) { + t.ifError(er, "temp directory exists") + + testBase = path.resolve(tmpdir, SHORT_PATH) + partialPath = 
path.resolve(tmpdir, PARTIAL_PATH) + fullPath = path.resolve(tmpdir, FULL_PATH) + + mkdirp(partialPath, function (er) { + t.ifError(er, "made test path") + + writeFile(fullPath, new Buffer("hi"), function (er) { + t.ifError(er, "made file") + + t.end() + }) + }) + }) +}) + +test("remove up to a point", function (t) { + vacuum(fullPath, {purge : false, base : testBase, log : log}, function (er) { + t.ifError(er, "cleaned up to base") + + t.equal(messages.length, 6, "got 5 removal & 1 finish message") + t.equal(messages[5], "finished vacuuming up to " + testBase) + + var stat + var verifyPath = fullPath + + function verify() { stat = statSync(verifyPath) } + + // handle the file separately + t.throws(verify, verifyPath + " cannot be statted") + t.notOk(stat && stat.isFile(), verifyPath + " is totally gone") + verifyPath = path.dirname(verifyPath) + + for (var i = 0; i < 4; i++) { + t.throws(verify, verifyPath + " cannot be statted") + t.notOk(stat && stat.isDirectory(), verifyPath + " is totally gone") + verifyPath = path.dirname(verifyPath) + } + + t.doesNotThrow(function () { + stat = statSync(testBase) + }, testBase + " can be statted") + t.ok(stat && stat.isDirectory(), testBase + " is still a directory") + + var files = readdirSync(testBase) + t.equal(files.length, 0, "nothing left in base directory") + + t.end() + }) +}) diff --git a/deps/npm/node_modules/fs-vacuum/test/no-entries-link-no-purge.js b/deps/npm/node_modules/fs-vacuum/test/no-entries-link-no-purge.js new file mode 100644 index 00000000000..087c039d618 --- /dev/null +++ b/deps/npm/node_modules/fs-vacuum/test/no-entries-link-no-purge.js @@ -0,0 +1,78 @@ +var path = require("path") + +var test = require("tap").test +var statSync = require("graceful-fs").statSync +var symlinkSync = require("graceful-fs").symlinkSync +var readdirSync = require("graceful-fs").readdirSync +var mkdtemp = require("tmp").dir +var mkdirp = require("mkdirp") + +var vacuum = require("../vacuum.js") + +// CONSTANTS +var TEMP_OPTIONS = { + unsafeCleanup : true, + mode : "0700" +} +var SHORT_PATH = path.join("i", "am", "a", "path") +var TARGET_PATH = path.join("target-link", "in", "the", "middle") +var PARTIAL_PATH = path.join(SHORT_PATH, "with", "a") +var FULL_PATH = path.join(PARTIAL_PATH, "link") +var EXPANDO_PATH = path.join(SHORT_PATH, "with", "a", "link", "in", "the", "middle") + +var messages = [] +function log() { messages.push(Array.prototype.slice.call(arguments).join(" ")) } + +var testBase, targetPath, partialPath, fullPath, expandoPath +test("xXx setup xXx", function (t) { + mkdtemp(TEMP_OPTIONS, function (er, tmpdir) { + t.ifError(er, "temp directory exists") + + testBase = path.resolve(tmpdir, SHORT_PATH) + targetPath = path.resolve(tmpdir, TARGET_PATH) + partialPath = path.resolve(tmpdir, PARTIAL_PATH) + fullPath = path.resolve(tmpdir, FULL_PATH) + expandoPath = path.resolve(tmpdir, EXPANDO_PATH) + + mkdirp(partialPath, function (er) { + t.ifError(er, "made test path") + + mkdirp(targetPath, function (er) { + t.ifError(er, "made target path") + + symlinkSync(path.join(tmpdir, "target-link"), fullPath) + + t.end() + }) + }) + }) +}) + +test("remove up to a point", function (t) { + vacuum(expandoPath, {purge : false, base : testBase, log : log}, function (er) { + t.ifError(er, "cleaned up to base") + + t.equal(messages.length, 7, "got 6 removal & 1 finish message") + t.equal(messages[6], "finished vacuuming up to " + testBase) + + var stat + var verifyPath = expandoPath + function verify() { stat = statSync(verifyPath) } + + for (var i = 0; 
i < 6; i++) { + t.throws(verify, verifyPath + " cannot be statted") + t.notOk(stat && stat.isDirectory(), verifyPath + " is totally gone") + verifyPath = path.dirname(verifyPath) + } + + t.doesNotThrow(function () { + stat = statSync(testBase) + }, testBase + " can be statted") + t.ok(stat && stat.isDirectory(), testBase + " is still a directory") + + var files = readdirSync(testBase) + t.equal(files.length, 0, "nothing left in base directory") + + t.end() + }) +}) diff --git a/deps/npm/node_modules/fs-vacuum/test/no-entries-no-purge.js b/deps/npm/node_modules/fs-vacuum/test/no-entries-no-purge.js new file mode 100644 index 00000000000..346ab566976 --- /dev/null +++ b/deps/npm/node_modules/fs-vacuum/test/no-entries-no-purge.js @@ -0,0 +1,61 @@ +var path = require("path") + +var test = require("tap").test +var statSync = require("graceful-fs").statSync +var mkdtemp = require("tmp").dir +var mkdirp = require("mkdirp") + +var vacuum = require("../vacuum.js") + +// CONSTANTS +var TEMP_OPTIONS = { + unsafeCleanup : true, + mode : "0700" +} +var SHORT_PATH = path.join("i", "am", "a", "path") +var LONG_PATH = path.join(SHORT_PATH, "of", "a", "certain", "length") + +var messages = [] +function log() { messages.push(Array.prototype.slice.call(arguments).join(" ")) } + +var testPath, testBase +test("xXx setup xXx", function (t) { + mkdtemp(TEMP_OPTIONS, function (er, tmpdir) { + t.ifError(er, "temp directory exists") + + testBase = path.resolve(tmpdir, SHORT_PATH) + testPath = path.resolve(tmpdir, LONG_PATH) + + mkdirp(testPath, function (er) { + t.ifError(er, "made test path") + + t.end() + }) + }) +}) + +test("remove up to a point", function (t) { + vacuum(testPath, {purge : false, base : testBase, log : log}, function (er) { + t.ifError(er, "cleaned up to base") + + t.equal(messages.length, 5, "got 4 removal & 1 finish message") + t.equal(messages[4], "finished vacuuming up to " + testBase) + + var stat + var verifyPath = testPath + function verify() { stat = statSync(verifyPath) } + + for (var i = 0; i < 4; i++) { + t.throws(verify, verifyPath + " cannot be statted") + t.notOk(stat && stat.isDirectory(), verifyPath + " is totally gone") + verifyPath = path.dirname(verifyPath) + } + + t.doesNotThrow(function () { + stat = statSync(testBase) + }, testBase + " can be statted") + t.ok(stat && stat.isDirectory(), testBase + " is still a directory") + + t.end() + }) +}) diff --git a/deps/npm/node_modules/fs-vacuum/test/no-entries-with-link-purge.js b/deps/npm/node_modules/fs-vacuum/test/no-entries-with-link-purge.js new file mode 100644 index 00000000000..4ed1a393974 --- /dev/null +++ b/deps/npm/node_modules/fs-vacuum/test/no-entries-with-link-purge.js @@ -0,0 +1,78 @@ +var path = require("path") + +var test = require("tap").test +var statSync = require("graceful-fs").statSync +var writeFileSync = require("graceful-fs").writeFileSync +var symlinkSync = require("graceful-fs").symlinkSync +var mkdtemp = require("tmp").dir +var mkdirp = require("mkdirp") + +var vacuum = require("../vacuum.js") + +// CONSTANTS +var TEMP_OPTIONS = { + unsafeCleanup : true, + mode : "0700" +} +var SHORT_PATH = path.join("i", "am", "a", "path") +var TARGET_PATH = "link-target" +var FIRST_FILE = path.join(TARGET_PATH, "monsieurs") +var SECOND_FILE = path.join(TARGET_PATH, "mesdames") +var PARTIAL_PATH = path.join(SHORT_PATH, "with", "a", "definite") +var FULL_PATH = path.join(PARTIAL_PATH, "target") + +var messages = [] +function log() { messages.push(Array.prototype.slice.call(arguments).join(" ")) } + +var testBase, 
partialPath, fullPath, targetPath +test("xXx setup xXx", function (t) { + mkdtemp(TEMP_OPTIONS, function (er, tmpdir) { + t.ifError(er, "temp directory exists") + + testBase = path.resolve(tmpdir, SHORT_PATH) + targetPath = path.resolve(tmpdir, TARGET_PATH) + partialPath = path.resolve(tmpdir, PARTIAL_PATH) + fullPath = path.resolve(tmpdir, FULL_PATH) + + mkdirp(partialPath, function (er) { + t.ifError(er, "made test path") + + mkdirp(targetPath, function (er) { + t.ifError(er, "made target path") + + writeFileSync(path.resolve(tmpdir, FIRST_FILE), new Buffer("c'est vraiment joli")) + writeFileSync(path.resolve(tmpdir, SECOND_FILE), new Buffer("oui oui")) + symlinkSync(targetPath, fullPath) + + t.end() + }) + }) + }) +}) + +test("remove up to a point", function (t) { + vacuum(fullPath, {purge : true, base : testBase, log : log}, function (er) { + t.ifError(er, "cleaned up to base") + + t.equal(messages.length, 5, "got 4 removal & 1 finish message") + t.equal(messages[0], "purging " + fullPath) + t.equal(messages[4], "finished vacuuming up to " + testBase) + + var stat + var verifyPath = fullPath + function verify() { stat = statSync(verifyPath) } + + for (var i = 0; i < 4; i++) { + t.throws(verify, verifyPath + " cannot be statted") + t.notOk(stat && stat.isDirectory(), verifyPath + " is totally gone") + verifyPath = path.dirname(verifyPath) + } + + t.doesNotThrow(function () { + stat = statSync(testBase) + }, testBase + " can be statted") + t.ok(stat && stat.isDirectory(), testBase + " is still a directory") + + t.end() + }) +}) diff --git a/deps/npm/node_modules/fs-vacuum/test/no-entries-with-purge.js b/deps/npm/node_modules/fs-vacuum/test/no-entries-with-purge.js new file mode 100644 index 00000000000..10fa558552a --- /dev/null +++ b/deps/npm/node_modules/fs-vacuum/test/no-entries-with-purge.js @@ -0,0 +1,67 @@ +var path = require("path") + +var test = require("tap").test +var statSync = require("graceful-fs").statSync +var writeFileSync = require("graceful-fs").writeFileSync +var mkdtemp = require("tmp").dir +var mkdirp = require("mkdirp") + +var vacuum = require("../vacuum.js") + +// CONSTANTS +var TEMP_OPTIONS = { + unsafeCleanup : true, + mode : "0700" +} +var SHORT_PATH = path.join("i", "am", "a", "path") +var LONG_PATH = path.join(SHORT_PATH, "of", "a", "certain", "kind") +var FIRST_FILE = path.join(LONG_PATH, "monsieurs") +var SECOND_FILE = path.join(LONG_PATH, "mesdames") + +var messages = [] +function log() { messages.push(Array.prototype.slice.call(arguments).join(" ")) } + +var testPath, testBase +test("xXx setup xXx", function (t) { + mkdtemp(TEMP_OPTIONS, function (er, tmpdir) { + t.ifError(er, "temp directory exists") + + testBase = path.resolve(tmpdir, SHORT_PATH) + testPath = path.resolve(tmpdir, LONG_PATH) + + mkdirp(testPath, function (er) { + t.ifError(er, "made test path") + + writeFileSync(path.resolve(tmpdir, FIRST_FILE), new Buffer("c'est vraiment joli")) + writeFileSync(path.resolve(tmpdir, SECOND_FILE), new Buffer("oui oui")) + t.end() + }) + }) +}) + +test("remove up to a point", function (t) { + vacuum(testPath, {purge : true, base : testBase, log : log}, function (er) { + t.ifError(er, "cleaned up to base") + + t.equal(messages.length, 5, "got 4 removal & 1 finish message") + t.equal(messages[0], "purging " + testPath) + t.equal(messages[4], "finished vacuuming up to " + testBase) + + var stat + var verifyPath = testPath + function verify() { stat = statSync(verifyPath) } + + for (var i = 0; i < 4; i++) { + t.throws(verify, verifyPath + " cannot be statted") 
+ t.notOk(stat && stat.isDirectory(), verifyPath + " is totally gone") + verifyPath = path.dirname(verifyPath) + } + + t.doesNotThrow(function () { + stat = statSync(testBase) + }, testBase + " can be statted") + t.ok(stat && stat.isDirectory(), testBase + " is still a directory") + + t.end() + }) +}) diff --git a/deps/npm/node_modules/fs-vacuum/test/other-directories-no-purge.js b/deps/npm/node_modules/fs-vacuum/test/other-directories-no-purge.js new file mode 100644 index 00000000000..dce785623e2 --- /dev/null +++ b/deps/npm/node_modules/fs-vacuum/test/other-directories-no-purge.js @@ -0,0 +1,76 @@ +var path = require("path") + +var test = require("tap").test +var statSync = require("graceful-fs").statSync +var mkdtemp = require("tmp").dir +var mkdirp = require("mkdirp") + +var vacuum = require("../vacuum.js") + +// CONSTANTS +var TEMP_OPTIONS = { + unsafeCleanup : true, + mode : "0700" +} +var SHORT_PATH = path.join("i", "am", "a", "path") +var REMOVE_PATH = path.join(SHORT_PATH, "of", "a", "certain", "length") +var OTHER_PATH = path.join(SHORT_PATH, "of", "no", "qualities") + +var messages = [] +function log() { messages.push(Array.prototype.slice.call(arguments).join(" ")) } + +var testBase, testPath, otherPath +test("xXx setup xXx", function (t) { + mkdtemp(TEMP_OPTIONS, function (er, tmpdir) { + t.ifError(er, "temp directory exists") + + testBase = path.resolve(tmpdir, SHORT_PATH) + testPath = path.resolve(tmpdir, REMOVE_PATH) + otherPath = path.resolve(tmpdir, OTHER_PATH) + + mkdirp(testPath, function (er) { + t.ifError(er, "made test path") + + mkdirp(otherPath, function (er) { + t.ifError(er, "made other path") + + t.end() + }) + }) + }) +}) + +test("remove up to a point", function (t) { + vacuum(testPath, {purge : false, base : testBase, log : log}, function (er) { + t.ifError(er, "cleaned up to base") + + t.equal(messages.length, 4, "got 3 removal & 1 finish message") + t.equal( + messages[3], "quitting because other entries in " + testBase + "/of", + "got expected final message" + ) + + var stat + var verifyPath = testPath + function verify() { stat = statSync(verifyPath) } + + for (var i = 0; i < 3; i++) { + t.throws(verify, verifyPath + " cannot be statted") + t.notOk(stat && stat.isDirectory(), verifyPath + " is totally gone") + verifyPath = path.dirname(verifyPath) + } + + t.doesNotThrow(function () { + stat = statSync(otherPath) + }, otherPath + " can be statted") + t.ok(stat && stat.isDirectory(), otherPath + " is still a directory") + + var intersection = path.join(testBase, "of") + t.doesNotThrow(function () { + stat = statSync(intersection) + }, intersection + " can be statted") + t.ok(stat && stat.isDirectory(), intersection + " is still a directory") + + t.end() + }) +}) diff --git a/deps/npm/node_modules/fs-vacuum/vacuum.js b/deps/npm/node_modules/fs-vacuum/vacuum.js new file mode 100644 index 00000000000..f706a4be68c --- /dev/null +++ b/deps/npm/node_modules/fs-vacuum/vacuum.js @@ -0,0 +1,104 @@ +var assert = require("assert") +var dirname = require("path").dirname +var resolve = require("path").resolve + +var rimraf = require("rimraf") +var lstat = require("graceful-fs").lstat +var readdir = require("graceful-fs").readdir +var rmdir = require("graceful-fs").rmdir +var unlink = require("graceful-fs").unlink + +module.exports = vacuum + +function vacuum(leaf, options, cb) { + assert(typeof leaf === "string", "must pass in path to remove") + assert(typeof cb === "function", "must pass in callback") + + if (!options) options = {} + assert(typeof options === "object", 
"options must be an object") + + var log = options.log ? options.log : function () {} + + var base = options.base + if (base && resolve(leaf).indexOf(resolve(base)) !== 0) { + return cb(new Error(resolve(leaf) + " is not a child of " + resolve(base))) + } + + lstat(leaf, function (error, stat) { + if (error) { + if (error.code === "ENOENT") return cb(null) + + log(error.stack) + return cb(error) + } + + if (!(stat && (stat.isDirectory() || stat.isSymbolicLink() || stat.isFile()))) { + log(leaf, "is not a directory, file, or link") + return cb(new Error(leaf + " is not a directory, file, or link")) + } + + if (options.purge) { + log("purging", leaf) + rimraf(leaf, function (error) { + if (error) return cb(error) + + next(dirname(leaf)) + }) + } + else if (!stat.isDirectory()) { + log("removing", leaf) + unlink(leaf, function (error) { + if (error) return cb(error) + + next(dirname(leaf)) + }) + } + else { + next(leaf) + } + }) + + function next(branch) { + // either we've reached the base or we've reached the root + if ((base && resolve(branch) === resolve(base)) || branch === dirname(branch)) { + log("finished vacuuming up to", branch) + return cb(null) + } + + readdir(branch, function (error, files) { + if (error) { + if (error.code === "ENOENT") return cb(null) + + log("unable to check directory", branch, "due to", error.message) + return cb(error) + } + + if (files.length > 0) { + log("quitting because other entries in", branch) + return cb(null) + } + + log("removing", branch) + lstat(branch, function (error, stat) { + if (error) { + if (error.code === "ENOENT") return cb(null) + + log("unable to lstat", branch, "due to", error.message) + return cb(error) + } + + var remove = stat.isDirectory() ? rmdir : unlink + remove(branch, function (error) { + if (error) { + if (error.code === "ENOENT") return cb(null) + + log("unable to remove", branch, "due to", error.message) + return cb(error) + } + + next(dirname(branch)) + }) + }) + }) + } +} diff --git a/deps/npm/node_modules/npmconf/LICENSE b/deps/npm/node_modules/fs-write-stream-atomic/LICENSE similarity index 100% rename from deps/npm/node_modules/npmconf/LICENSE rename to deps/npm/node_modules/fs-write-stream-atomic/LICENSE diff --git a/deps/npm/node_modules/fs-write-stream-atomic/README.md b/deps/npm/node_modules/fs-write-stream-atomic/README.md new file mode 100644 index 00000000000..9a15d056764 --- /dev/null +++ b/deps/npm/node_modules/fs-write-stream-atomic/README.md @@ -0,0 +1,35 @@ +# fs-write-stream-atomic + +Like `fs.createWriteStream(...)`, but atomic. + +Writes to a tmp file and does an atomic `fs.rename` to move it into +place when it's done. + +First rule of debugging: **It's always a race condition.** + +## USAGE + +```javascript +var fsWriteStreamAtomic = require('fs-write-stream-atomic') +// options are optional. +var write = fsWriteStreamAtomic('output.txt', options) +var read = fs.createReadStream('input.txt') +read.pipe(write) + +// When the write stream emits a 'finish' or 'close' event, +// you can be sure that it is moved into place, and contains +// all the bytes that were written to it, even if something else +// was writing to `output.txt` at the same time. 
+```
+
+### `fsWriteStreamAtomic(filename, [options])`
+
+* `filename` {String} The file we want to write to
+* `options` {Object}
+  * `chown` {Object} User and group to set ownership after write
+    * `uid` {Number}
+    * `gid` {Number}
+  * `encoding` {String} default = 'utf8'
+  * `mode` {Number} default = `0666`
+  * `flags` {String} default = `'w'`
+
diff --git a/deps/npm/node_modules/fs-write-stream-atomic/index.js b/deps/npm/node_modules/fs-write-stream-atomic/index.js
new file mode 100644
index 00000000000..42a9a8825ec
--- /dev/null
+++ b/deps/npm/node_modules/fs-write-stream-atomic/index.js
@@ -0,0 +1,96 @@
+var fs = require('graceful-fs')
+var util = require('util')
+var crypto = require('crypto')
+
+function md5hex () {
+  var hash = crypto.createHash('md5')
+  for (var ii = 0; ii < arguments.length; ii++) {
+    hash.update('' + arguments[ii])
+  }
+  return hash.digest('hex')
+}
diff --git a/deps/npm/node_modules/fs-write-stream-atomic/package.json b/deps/npm/node_modules/fs-write-stream-atomic/package.json
new file mode 100644
--- /dev/null
+++ b/deps/npm/node_modules/fs-write-stream-atomic/package.json
+{
+  "name": "fs-write-stream-atomic",
+  "version": "1.0.2",
+  "_id": "fs-write-stream-atomic@1.0.2",
+  "_shasum": "fe0c6cec75256072b2fef8180d97e309fe3f5efb",
+  "_from": "fs-write-stream-atomic@>=1.0.2 <1.1.0",
+  "_npmVersion": "2.1.0",
+  "_nodeVersion": "0.10.31",
+  "_npmUser": {
+    "name": "isaacs",
+    "email": "i@izs.me"
+  },
+  "maintainers": [
+    {
+      "name": "isaacs",
+      "email": "i@izs.me"
+    }
+  ],
+  "dist": {
+    "shasum": "fe0c6cec75256072b2fef8180d97e309fe3f5efb",
+    "tarball": "http://registry.npmjs.org/fs-write-stream-atomic/-/fs-write-stream-atomic-1.0.2.tgz"
+  },
+  "_resolved": "https://registry.npmjs.org/fs-write-stream-atomic/-/fs-write-stream-atomic-1.0.2.tgz"
+}
diff --git a/deps/npm/node_modules/fs-write-stream-atomic/test/basic.js b/deps/npm/node_modules/fs-write-stream-atomic/test/basic.js
new file mode 100644
index 00000000000..159c596ab01
--- /dev/null
+++ b/deps/npm/node_modules/fs-write-stream-atomic/test/basic.js
@@ -0,0 +1,89 @@
+var test = require('tap').test
+var writeStream = require('../index.js')
+var fs = require('fs')
+var path = require('path')
+
+test('basic', function (t) {
+  // open 10 write streams to the same file.
+  // then write to each of them, and to the target
+  // and verify at the end that each of them does their thing
+  var target = path.resolve(__dirname, 'test.txt')
+  var n = 10
+
+  var streams = []
+  for (var i = 0; i < n; i++) {
+    var s = writeStream(target)
+    s.on('finish', verifier('finish'))
+    s.on('close', verifier('close'))
+    streams.push(s)
+  }
+
+  var verifierCalled = 0
+  function verifier (ev) { return function () {
+    if (ev === 'close')
+      t.equal(this.__emittedFinish, true)
+    else {
+      this.__emittedFinish = true
+      t.equal(ev, 'finish')
+    }
+
+    // make sure that one of the atomic streams won.
+    var res = fs.readFileSync(target, 'utf8')
+    var lines = res.trim().split(/\n/)
+    lines.forEach(function (line) {
+      var first = lines[0].match(/\d+$/)[0]
+      var cur = line.match(/\d+$/)[0]
+      t.equal(cur, first)
+    })
+
+    var resExpr = /^first write \d+\nsecond write \d+\nthird write \d+\nfinal write \d+\n$/
+    t.similar(res, resExpr)
+
+    // should be called once for each close, and each finish
+    if (++verifierCalled === n * 2) {
+      t.end()
+    }
+  }}
+
+  // now write something to each stream.
+  streams.forEach(function (stream, i) {
+    stream.write('first write ' + i + '\n')
+  })
+
+  // wait a sec for those writes to go out.
+  setTimeout(function () {
+    // write something else to the target.
+    fs.writeFileSync(target, 'brutality!\n')
+
+    // write some more stuff.
+    streams.forEach(function (stream, i) {
+      stream.write('second write ' + i + '\n')
+    })
+
+    setTimeout(function () {
+      // Oops!  Deleted the file!
+      fs.unlinkSync(target)
+
+      // write some more stuff.
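+      // (the atomic streams don't mind: each one is still writing to
+      // its own temp file, so unlinking the target can't disturb
+      // them, and the target reappears when the winning stream
+      // renames its temp file into place on end)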
+ streams.forEach(function (stream, i) { + stream.write('third write ' + i + '\n') + }) + + setTimeout(function () { + fs.writeFileSync(target, 'brutality TWO!\n') + streams.forEach(function (stream, i) { + stream.end('final write ' + i + '\n') + }) + }, 50) + }, 50) + }, 50) +}) + +test('cleanup', function (t) { + fs.readdirSync(__dirname).filter(function (f) { + return f.match(/^test.txt/) + }).forEach(function (file) { + fs.unlinkSync(path.resolve(__dirname, file)) + }) + t.end() +}) diff --git a/deps/npm/node_modules/fstream-npm/fstream-npm.js b/deps/npm/node_modules/fstream-npm/fstream-npm.js index ca05880a67a..863f5884544 100644 --- a/deps/npm/node_modules/fstream-npm/fstream-npm.js +++ b/deps/npm/node_modules/fstream-npm/fstream-npm.js @@ -93,6 +93,12 @@ Packer.prototype.applyIgnores = function (entry, partial, entryObj) { // readme files should never be ignored. if (entry.match(/^readme(\.[^\.]*)$/i)) return true + // license files should never be ignored. + if (entry.match(/^(license|licence)(\.[^\.]*)?$/i)) return true + + // changelogs should never be ignored. + if (entry.match(/^(changes|changelog|history)(\.[^\.]*)?$/i)) return true + // special rules. see below. if (entry === "node_modules" && this.packageRoot) return true diff --git a/deps/npm/node_modules/fstream-npm/node_modules/fstream-ignore/package.json b/deps/npm/node_modules/fstream-npm/node_modules/fstream-ignore/package.json index 558d3dc9095..29e508673a5 100644 --- a/deps/npm/node_modules/fstream-npm/node_modules/fstream-ignore/package.json +++ b/deps/npm/node_modules/fstream-npm/node_modules/fstream-ignore/package.json @@ -26,8 +26,6 @@ "mkdirp": "" }, "license": "ISC", - "readme": "# fstream-ignore\n\nA fstream DirReader that filters out files that match globs in `.ignore`\nfiles throughout the tree, like how git ignores files based on a\n`.gitignore` file.\n\nHere's an example:\n\n```javascript\nvar Ignore = require(\"fstream-ignore\")\nIgnore({ path: __dirname\n , ignoreFiles: [\".ignore\", \".gitignore\"]\n })\n .on(\"child\", function (c) {\n console.error(c.path.substr(c.root.path.length + 1))\n })\n .pipe(tar.Pack())\n .pipe(fs.createWriteStream(\"foo.tar\"))\n```\n\nThis will tar up the files in __dirname into `foo.tar`, ignoring\nanything matched by the globs in any .iginore or .gitignore file.\n", - "readmeFilename": "README.md", "gitHead": "290f2b621fa4f8fe3eec97307d22527fa2065375", "bugs": { "url": "https://github.com/isaacs/fstream-ignore/issues" @@ -35,5 +33,23 @@ "homepage": "https://github.com/isaacs/fstream-ignore", "_id": "fstream-ignore@1.0.1", "_shasum": "153df36c4fa2cb006fb915dc71ac9d75f6a17c82", - "_from": "fstream-ignore@^1.0.0" + "_from": "fstream-ignore@>=1.0.0 <2.0.0", + "_npmVersion": "1.4.22", + "_npmUser": { + "name": "isaacs", + "email": "i@izs.me" + }, + "maintainers": [ + { + "name": "isaacs", + "email": "i@izs.me" + } + ], + "dist": { + "shasum": "153df36c4fa2cb006fb915dc71ac9d75f6a17c82", + "tarball": "http://registry.npmjs.org/fstream-ignore/-/fstream-ignore-1.0.1.tgz" + }, + "directories": {}, + "_resolved": "https://registry.npmjs.org/fstream-ignore/-/fstream-ignore-1.0.1.tgz", + "readme": "ERROR: No README data found!" 
} diff --git a/deps/npm/node_modules/fstream-npm/package.json b/deps/npm/node_modules/fstream-npm/package.json index 31a5af5d364..e7de77086af 100644 --- a/deps/npm/node_modules/fstream-npm/package.json +++ b/deps/npm/node_modules/fstream-npm/package.json @@ -6,7 +6,7 @@ }, "name": "fstream-npm", "description": "fstream class for creating npm packages", - "version": "1.0.0", + "version": "1.0.1", "repository": { "type": "git", "url": "git://github.com/isaacs/fstream-npm.git" @@ -17,15 +17,31 @@ "inherits": "2" }, "license": "ISC", - "readme": "# fstream-npm\n\nThis is an fstream DirReader class that will read a directory and filter\nthings according to the semantics of what goes in an npm package.\n\nFor example:\n\n```javascript\n// This will print out all the files that would be included\n// by 'npm publish' or 'npm install' of this directory.\n\nvar FN = require(\"fstream-npm\")\nFN({ path: \"./\" })\n .on(\"child\", function (e) {\n console.error(e.path.substr(e.root.path.length + 1))\n })\n```\n\n", - "readmeFilename": "README.md", - "gitHead": "807e0a8653ab793dc2e1b3b798e6256d09f972e7", + "gitHead": "4a95e1903f93dc122320349bb55e367ddd08ad6b", "bugs": { "url": "https://github.com/isaacs/fstream-npm/issues" }, "homepage": "https://github.com/isaacs/fstream-npm", - "_id": "fstream-npm@1.0.0", + "_id": "fstream-npm@1.0.1", "scripts": {}, - "_shasum": "0262c95c771d393e7cf59fcfeabce621703f3d27", - "_from": "fstream-npm@latest" + "_shasum": "1e35c77f0fa24f5d6367e6d447ae7d6ddb482db2", + "_from": "fstream-npm@>=1.0.1 <1.1.0", + "_npmVersion": "2.1.3", + "_nodeVersion": "0.10.31", + "_npmUser": { + "name": "isaacs", + "email": "i@izs.me" + }, + "maintainers": [ + { + "name": "isaacs", + "email": "i@izs.me" + } + ], + "dist": { + "shasum": "1e35c77f0fa24f5d6367e6d447ae7d6ddb482db2", + "tarball": "http://registry.npmjs.org/fstream-npm/-/fstream-npm-1.0.1.tgz" + }, + "directories": {}, + "_resolved": "https://registry.npmjs.org/fstream-npm/-/fstream-npm-1.0.1.tgz" } diff --git a/deps/npm/node_modules/fstream/package.json b/deps/npm/node_modules/fstream/package.json index de7f7bc14f0..d0ac58243ad 100644 --- a/deps/npm/node_modules/fstream/package.json +++ b/deps/npm/node_modules/fstream/package.json @@ -28,8 +28,6 @@ "test": "tap examples/*.js" }, "license": "BSD", - "readme": "Like FS streams, but with stat on them, and supporting directories and\nsymbolic links, as well as normal files. Also, you can use this to set\nthe stats on a file, even if you don't change its contents, or to create\na symlink, etc.\n\nSo, for example, you can \"write\" a directory, and it'll call `mkdir`. You\ncan specify a uid and gid, and it'll call `chown`. You can specify a\n`mtime` and `atime`, and it'll call `utimes`. You can call it a symlink\nand provide a `linkpath` and it'll call `symlink`.\n\nNote that it won't automatically resolve symbolic links. So, if you\ncall `fstream.Reader('/some/symlink')` then you'll get an object\nthat stats and then ends immediately (since it has no data). 
To follow\nsymbolic links, do this: `fstream.Reader({path:'/some/symlink', follow:\ntrue })`.\n\nThere are various checks to make sure that the bytes emitted are the\nsame as the intended size, if the size is set.\n\n## Examples\n\n```javascript\nfstream\n .Writer({ path: \"path/to/file\"\n , mode: 0755\n , size: 6\n })\n .write(\"hello\\n\")\n .end()\n```\n\nThis will create the directories if they're missing, and then write\n`hello\\n` into the file, chmod it to 0755, and assert that 6 bytes have\nbeen written when it's done.\n\n```javascript\nfstream\n .Writer({ path: \"path/to/file\"\n , mode: 0755\n , size: 6\n , flags: \"a\"\n })\n .write(\"hello\\n\")\n .end()\n```\n\nYou can pass flags in, if you want to append to a file.\n\n```javascript\nfstream\n .Writer({ path: \"path/to/symlink\"\n , linkpath: \"./file\"\n , SymbolicLink: true\n , mode: \"0755\" // octal strings supported\n })\n .end()\n```\n\nIf isSymbolicLink is a function, it'll be called, and if it returns\ntrue, then it'll treat it as a symlink. If it's not a function, then\nany truish value will make a symlink, or you can set `type:\n'SymbolicLink'`, which does the same thing.\n\nNote that the linkpath is relative to the symbolic link location, not\nthe parent dir or cwd.\n\n```javascript\nfstream\n .Reader(\"path/to/dir\")\n .pipe(fstream.Writer(\"path/to/other/dir\"))\n```\n\nThis will do like `cp -Rp path/to/dir path/to/other/dir`. If the other\ndir exists and isn't a directory, then it'll emit an error. It'll also\nset the uid, gid, mode, etc. to be identical. In this way, it's more\nlike `rsync -a` than simply a copy.\n", - "readmeFilename": "README.md", "gitHead": "b3b74e92ef4a91ae206fab90b7998c7cd2e4290d", "bugs": { "url": "https://github.com/isaacs/fstream/issues" @@ -37,5 +35,23 @@ "homepage": "https://github.com/isaacs/fstream", "_id": "fstream@1.0.2", "_shasum": "56930ff1b4d4d7b1a689c8656b3a11e744ab92c6", - "_from": "fstream@latest" + "_from": "fstream@1.0.2", + "_npmVersion": "1.4.23", + "_npmUser": { + "name": "isaacs", + "email": "i@izs.me" + }, + "maintainers": [ + { + "name": "isaacs", + "email": "i@izs.me" + } + ], + "dist": { + "shasum": "56930ff1b4d4d7b1a689c8656b3a11e744ab92c6", + "tarball": "http://registry.npmjs.org/fstream/-/fstream-1.0.2.tgz" + }, + "directories": {}, + "_resolved": "https://registry.npmjs.org/fstream/-/fstream-1.0.2.tgz", + "readme": "ERROR: No README data found!" } diff --git a/deps/npm/node_modules/github-url-from-git/package.json b/deps/npm/node_modules/github-url-from-git/package.json index 978435c7da2..229af333ca4 100644 --- a/deps/npm/node_modules/github-url-from-git/package.json +++ b/deps/npm/node_modules/github-url-from-git/package.json @@ -32,7 +32,7 @@ "homepage": "https://github.com/visionmedia/node-github-url-from-git", "_id": "github-url-from-git@1.4.0", "_shasum": "285e6b520819001bde128674704379e4ff03e0de", - "_from": "github-url-from-git@^1.4.0", + "_from": "github-url-from-git@>=1.4.0-0 <2.0.0-0", "_npmVersion": "2.0.0-alpha.7", "_npmUser": { "name": "bcoe", @@ -53,6 +53,5 @@ "tarball": "http://registry.npmjs.org/github-url-from-git/-/github-url-from-git-1.4.0.tgz" }, "directories": {}, - "_resolved": "https://registry.npmjs.org/github-url-from-git/-/github-url-from-git-1.4.0.tgz", - "readme": "ERROR: No README data found!" 
+ "_resolved": "https://registry.npmjs.org/github-url-from-git/-/github-url-from-git-1.4.0.tgz" } diff --git a/deps/npm/node_modules/github-url-from-username-repo/index.js b/deps/npm/node_modules/github-url-from-username-repo/index.js index 794daabf3bc..f9d77f952f5 100644 --- a/deps/npm/node_modules/github-url-from-username-repo/index.js +++ b/deps/npm/node_modules/github-url-from-username-repo/index.js @@ -2,7 +2,16 @@ module.exports = getUrl function getUrl (r, forBrowser) { if (!r) return null - if (/^[\w-]+\/[\w\.-]+(#[a-z0-9]*)?$/.test(r)) { + // Regex taken from https://github.com/npm/npm-package-arg/commit/01dce583c64afae07b66a2a8a6033aeba871c3cd + // Note: This does not fully test the git ref format. + // See https://www.kernel.org/pub/software/scm/git/docs/git-check-ref-format.html + // + // The only way to do this properly would be to shell out to + // git-check-ref-format, and as this is a fast sync function, + // we don't want to do that. Just let git fail if it turns + // out that the commit-ish is invalid. + // GH usernames cannot start with . or - + if (/^[^@%\/\s\.-][^:@%\/\s]*\/[^@\s\/%]+(?:#.*)?$/.test(r)) { if (forBrowser) r = r.replace("#", "/tree/") return "https://github.com/" + r diff --git a/deps/npm/node_modules/github-url-from-username-repo/package.json b/deps/npm/node_modules/github-url-from-username-repo/package.json index 8b6be10115d..f8aa80d5b6f 100644 --- a/deps/npm/node_modules/github-url-from-username-repo/package.json +++ b/deps/npm/node_modules/github-url-from-username-repo/package.json @@ -1,6 +1,6 @@ { "name": "github-url-from-username-repo", - "version": "1.0.0", + "version": "1.0.2", "description": "Create urls from username/repo", "main": "index.js", "scripts": { @@ -26,34 +26,11 @@ "github", "repo" ], - "gitHead": "d5b3c01193504d67b3ecc030e93d5c58c9b0df63", + "readme": "[![Build Status](https://travis-ci.org/robertkowalski/github-url-from-username-repo.png?branch=master)](https://travis-ci.org/robertkowalski/github-url-from-username-repo)\n[![Dependency Status](https://gemnasium.com/robertkowalski/github-url-from-username-repo.png)](https://gemnasium.com/robertkowalski/github-url-from-username-repo)\n\n\n# github-url-from-username-repo\n\n## API\n\n### getUrl(url, [forBrowser])\n\nGet's the url normalized for npm.\nIf `forBrowser` is true, return a GitHub url that is usable in a webbrowser.\n\n## Usage\n\n```javascript\n\nvar getUrl = require(\"github-url-from-username-repo\")\ngetUrl(\"visionmedia/express\") // https://github.com/visionmedia/express\n\n```\n", + "readmeFilename": "README.md", + "gitHead": "d404a13f7f04edaed0e2f068a43b81230b8c7aee", "homepage": "https://github.com/robertkowalski/github-url-from-username-repo", - "_id": "github-url-from-username-repo@1.0.0", - "_shasum": "848d4f19bc838dc428484ce0dc33da593e8400ed", - "_from": "github-url-from-username-repo@^1.0.0", - "_npmVersion": "1.4.21", - "_npmUser": { - "name": "robertkowalski", - "email": "rok@kowalski.gd" - }, - "maintainers": [ - { - "name": "robertkowalski", - "email": "rok@kowalski.gd" - }, - { - "name": "othiym23", - "email": "ogd@aoaioxxysz.net" - }, - { - "name": "forbeslindesay", - "email": "forbes@lindesay.co.uk" - } - ], - "dist": { - "shasum": "848d4f19bc838dc428484ce0dc33da593e8400ed", - "tarball": "http://registry.npmjs.org/github-url-from-username-repo/-/github-url-from-username-repo-1.0.0.tgz" - }, - "directories": {}, - "_resolved": "https://registry.npmjs.org/github-url-from-username-repo/-/github-url-from-username-repo-1.0.0.tgz" + "_id": 
"github-url-from-username-repo@1.0.2", + "_shasum": "7dd79330d2abe69c10c2cef79714c97215791dfa", + "_from": "github-url-from-username-repo@>=1.0.2-0 <2.0.0-0" } diff --git a/deps/npm/node_modules/github-url-from-username-repo/test/index.js b/deps/npm/node_modules/github-url-from-username-repo/test/index.js index 935bb439d3e..10fe3a34cc3 100644 --- a/deps/npm/node_modules/github-url-from-username-repo/test/index.js +++ b/deps/npm/node_modules/github-url-from-username-repo/test/index.js @@ -17,6 +17,11 @@ describe("github url from username/repo", function () { assert.deepEqual(null, url) }) + it("returns null for something that's already a URI", function () { + var url = getUrl("file:../relative") + assert.deepEqual(null, url) + }) + it("works with .", function () { var url = getUrl("component/.download.er.js.") assert.equal("https://github.com/component/.download.er.js.", url) @@ -40,6 +45,13 @@ describe("github url from username/repo", function () { "4b477f04d947bd53c473799b277", url) }) + it("can handle branches with slashes", function () { + var url = getUrl( + "component/entejs#some/branch/name") + + assert.equal("https://github.com/component/entejs#some/branch/name", url) + }) + describe("browser mode", function () { it("is able to return urls for branches", function () { var url = getUrl( diff --git a/deps/npm/node_modules/glob/README.md b/deps/npm/node_modules/glob/README.md index cc691645104..82b7ef6d61d 100644 --- a/deps/npm/node_modules/glob/README.md +++ b/deps/npm/node_modules/glob/README.md @@ -5,16 +5,7 @@ Match files using the patterns the shell uses, like stars and stuff. This is a glob implementation in JavaScript. It uses the `minimatch` library to do its matching. -## Attention: node-glob users! - -The API has changed dramatically between 2.x and 3.x. This library is -now 100% JavaScript, and the integer flags have been replaced with an -options object. - -Also, there's an event emitter class, proper tests, and all the other -things you've come to expect from node modules. - -And best of all, no compilation! 
+![](oh-my-glob.gif) ## Usage diff --git a/deps/npm/node_modules/glob/glob.js b/deps/npm/node_modules/glob/glob.js index 6941fc7cf13..564f3b1217c 100644 --- a/deps/npm/node_modules/glob/glob.js +++ b/deps/npm/node_modules/glob/glob.js @@ -36,10 +36,8 @@ module.exports = glob -var fs -try { fs = require("graceful-fs") } catch (e) { fs = require("fs") } - -var minimatch = require("minimatch") +var fs = require("graceful-fs") +, minimatch = require("minimatch") , Minimatch = minimatch.Minimatch , inherits = require("inherits") , EE = require("events").EventEmitter diff --git a/deps/npm/node_modules/glob/oh-my-glob.gif b/deps/npm/node_modules/glob/oh-my-glob.gif new file mode 100644 index 00000000000..a1568c13ca6 Binary files /dev/null and b/deps/npm/node_modules/glob/oh-my-glob.gif differ diff --git a/deps/npm/node_modules/glob/package.json b/deps/npm/node_modules/glob/package.json index 5905b40cd4f..6bf85e8dfeb 100644 --- a/deps/npm/node_modules/glob/package.json +++ b/deps/npm/node_modules/glob/package.json @@ -6,7 +6,7 @@ }, "name": "glob", "description": "a little globber", - "version": "4.0.5", + "version": "4.0.6", "repository": { "type": "git", "url": "git://github.com/isaacs/node-glob.git" @@ -15,14 +15,11 @@ "engines": { "node": "*" }, - "optionalDependencies": { - "graceful-fs": "^3.0.2" - }, "dependencies": { + "graceful-fs": "^3.0.2", "inherits": "2", "minimatch": "^1.0.0", - "once": "^1.3.0", - "graceful-fs": "^3.0.2" + "once": "^1.3.0" }, "devDependencies": { "tap": "~0.4.0", @@ -34,15 +31,16 @@ "test-regen": "TEST_REGEN=1 node test/00-setup.js" }, "license": "ISC", - "gitHead": "a7d85acf4e89fa26d17396ab022ef98fbe1f8a4b", + "gitHead": "6825c425e738eaffa315d8cdb1a4c3255ededcb3", "bugs": { "url": "https://github.com/isaacs/node-glob/issues" }, "homepage": "https://github.com/isaacs/node-glob", - "_id": "glob@4.0.5", - "_shasum": "95e42c9efdb3ab1f4788fd7793dfded4a3378063", - "_from": "glob@latest", - "_npmVersion": "1.4.21", + "_id": "glob@4.0.6", + "_shasum": "695c50bdd4e2fb5c5d370b091f388d3707e291a7", + "_from": "glob@>=4.0.6 <5.0.0", + "_npmVersion": "2.0.0", + "_nodeVersion": "0.10.31", "_npmUser": { "name": "isaacs", "email": "i@izs.me" @@ -54,9 +52,10 @@ } ], "dist": { - "shasum": "95e42c9efdb3ab1f4788fd7793dfded4a3378063", - "tarball": "http://registry.npmjs.org/glob/-/glob-4.0.5.tgz" + "shasum": "695c50bdd4e2fb5c5d370b091f388d3707e291a7", + "tarball": "http://registry.npmjs.org/glob/-/glob-4.0.6.tgz" }, "directories": {}, - "_resolved": "https://registry.npmjs.org/glob/-/glob-4.0.5.tgz" + "_resolved": "https://registry.npmjs.org/glob/-/glob-4.0.6.tgz", + "readme": "ERROR: No README data found!" 
} diff --git a/deps/npm/node_modules/glob/test/negation-test.js b/deps/npm/node_modules/glob/test/negation-test.js new file mode 100644 index 00000000000..fc679e2cf86 --- /dev/null +++ b/deps/npm/node_modules/glob/test/negation-test.js @@ -0,0 +1,16 @@ +// Negation test +// Show that glob respect's minimatch's negate flag + +var glob = require('../glob.js') +var test = require('tap').test + +test('glob respects minimatch negate flag when activated with leading !', function(t) { + var expect = ["abcdef/g", "abcfed/g", "c/d", "cb/e", "symlink/a"] + var results = glob("!b**/*", {cwd: 'a'}, function (er, results) { + if (er) + throw er + + t.same(results, expect) + t.end() + }); +}); diff --git a/deps/npm/node_modules/graceful-fs/package.json b/deps/npm/node_modules/graceful-fs/package.json index a77f90c7dce..57ce64885ac 100644 --- a/deps/npm/node_modules/graceful-fs/package.json +++ b/deps/npm/node_modules/graceful-fs/package.json @@ -6,7 +6,7 @@ }, "name": "graceful-fs", "description": "A drop-in replacement for fs, making various improvements.", - "version": "3.0.2", + "version": "3.0.4", "repository": { "type": "git", "url": "git://github.com/isaacs/node-graceful-fs.git" @@ -38,15 +38,20 @@ "EACCESS" ], "license": "BSD", - "gitHead": "0caa11544c0c9001db78bf593cf0c5805d149a41", + "devDependencies": { + "mkdirp": "^0.5.0", + "rimraf": "^2.2.8", + "tap": "^0.4.13" + }, + "gitHead": "d3fd03247ccc4fa8a3eee399307fd266c70efb06", "bugs": { "url": "https://github.com/isaacs/node-graceful-fs/issues" }, "homepage": "https://github.com/isaacs/node-graceful-fs", - "_id": "graceful-fs@3.0.2", - "_shasum": "2cb5bf7f742bea8ad47c754caeee032b7e71a577", - "_from": "graceful-fs@~3.0.0", - "_npmVersion": "1.4.14", + "_id": "graceful-fs@3.0.4", + "_shasum": "a0306d9b0940e0fc512d33b5df1014e88e0637a3", + "_from": "graceful-fs@>=3.0.4 <4.0.0", + "_npmVersion": "1.4.28", "_npmUser": { "name": "isaacs", "email": "i@izs.me" @@ -58,8 +63,9 @@ } ], "dist": { - "shasum": "2cb5bf7f742bea8ad47c754caeee032b7e71a577", - "tarball": "http://registry.npmjs.org/graceful-fs/-/graceful-fs-3.0.2.tgz" + "shasum": "a0306d9b0940e0fc512d33b5df1014e88e0637a3", + "tarball": "http://registry.npmjs.org/graceful-fs/-/graceful-fs-3.0.4.tgz" }, - "_resolved": "https://registry.npmjs.org/graceful-fs/-/graceful-fs-3.0.2.tgz" + "_resolved": "https://registry.npmjs.org/graceful-fs/-/graceful-fs-3.0.4.tgz", + "readme": "ERROR: No README data found!" } diff --git a/deps/npm/node_modules/graceful-fs/test/max-open.js b/deps/npm/node_modules/graceful-fs/test/max-open.js new file mode 100644 index 00000000000..44d52679b9b --- /dev/null +++ b/deps/npm/node_modules/graceful-fs/test/max-open.js @@ -0,0 +1,69 @@ +var test = require('tap').test +var fs = require('../') + +test('open lots of stuff', function (t) { + // Get around EBADF from libuv by making sure that stderr is opened + // Otherwise Darwin will refuse to give us a FD for stderr! 
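+  //
+  // (That EBADF dance is incidental; the real subject here, as I read
+  // graceful-fs, is that open() calls past the fd ulimit get queued
+  // rather than failing with EMFILE, so `opens - fds.length` counts
+  // the opens still waiting for a free descriptor.)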
+ process.stderr.write('') + + // How many parallel open()'s to do + var n = 1024 + var opens = 0 + var fds = [] + var going = true + var closing = false + var doneCalled = 0 + + for (var i = 0; i < n; i++) { + go() + } + + function go() { + opens++ + fs.open(__filename, 'r', function (er, fd) { + if (er) throw er + fds.push(fd) + if (going) go() + }) + } + + // should hit ulimit pretty fast + setTimeout(function () { + going = false + t.equal(opens - fds.length, n) + done() + }, 100) + + + function done () { + if (closing) return + doneCalled++ + + if (fds.length === 0) { + //console.error('done called %d times', doneCalled) + // First because of the timeout + // Then to close the fd's opened afterwards + // Then this time, to complete. + // Might take multiple passes, depending on CPU speed + // and ulimit, but at least 3 in every case. + t.ok(doneCalled >= 3) + return t.end() + } + + closing = true + setTimeout(function () { + // console.error('do closing again') + closing = false + done() + }, 100) + + // console.error('closing time') + var closes = fds.slice(0) + fds.length = 0 + closes.forEach(function (fd) { + fs.close(fd, function (er) { + if (er) throw er + }) + }) + } +}) diff --git a/deps/npm/node_modules/graceful-fs/test/readdir-sort.js b/deps/npm/node_modules/graceful-fs/test/readdir-sort.js index fe005aa7a2e..cb63a6846ed 100644 --- a/deps/npm/node_modules/graceful-fs/test/readdir-sort.js +++ b/deps/npm/node_modules/graceful-fs/test/readdir-sort.js @@ -14,7 +14,6 @@ test("readdir reorder", function (t) { g.readdir("whatevers", function (er, files) { if (er) throw er - console.error(files) t.same(files, [ "a", "b", "z" ]) t.end() }) diff --git a/deps/npm/node_modules/graceful-fs/test/write-then-read.js b/deps/npm/node_modules/graceful-fs/test/write-then-read.js new file mode 100644 index 00000000000..3a3db54b0c8 --- /dev/null +++ b/deps/npm/node_modules/graceful-fs/test/write-then-read.js @@ -0,0 +1,45 @@ +var fs = require('../'); +var rimraf = require('rimraf'); +var mkdirp = require('mkdirp'); +var test = require('tap').test; +var p = require('path').resolve(__dirname, 'files'); + +// Make sure to reserve the stderr fd +process.stderr.write(''); + +var num = 4097; +var paths = new Array(num); + +test('make files', function (t) { + rimraf.sync(p); + mkdirp.sync(p); + + for (var i = 0; i < num; ++i) { + paths[i] = 'files/file-' + i; + fs.writeFileSync(paths[i], 'content'); + } + + t.end(); +}) + +test('read files', function (t) { + // now read them + var done = 0; + for (var i = 0; i < num; ++i) { + fs.readFile(paths[i], function(err, data) { + if (err) + throw err; + + ++done; + if (done === num) { + t.pass('success'); + t.end() + } + }); + } +}); + +test('cleanup', function (t) { + rimraf.sync(p); + t.end(); +}); diff --git a/deps/npm/node_modules/inflight/.eslintrc b/deps/npm/node_modules/inflight/.eslintrc new file mode 100644 index 00000000000..b7a1550efc2 --- /dev/null +++ b/deps/npm/node_modules/inflight/.eslintrc @@ -0,0 +1,17 @@ +{ + "env" : { + "node" : true + }, + "rules" : { + "semi": [2, "never"], + "strict": 0, + "quotes": [1, "single", "avoid-escape"], + "no-use-before-define": 0, + "curly": 0, + "no-underscore-dangle": 0, + "no-lonely-if": 1, + "no-unused-vars": [2, {"vars" : "all", "args" : "after-used"}], + "no-mixed-requires": 0, + "space-infix-ops": 0 + } +} diff --git a/deps/npm/node_modules/inflight/inflight.js b/deps/npm/node_modules/inflight/inflight.js index 1fe279f9adc..8bc96cbd373 100644 --- a/deps/npm/node_modules/inflight/inflight.js +++ 
b/deps/npm/node_modules/inflight/inflight.js @@ -1,8 +1,9 @@ -module.exports = inflight - +var wrappy = require('wrappy') var reqs = Object.create(null) var once = require('once') +module.exports = wrappy(inflight) + function inflight (key, cb) { if (reqs[key]) { reqs[key].push(cb) @@ -13,13 +14,31 @@ function inflight (key, cb) { } } -function makeres(key) { - return once(res) - function res(error, data) { +function makeres (key) { + return once(function RES () { var cbs = reqs[key] - delete reqs[key] - cbs.forEach(function(cb) { - cb(error, data) - }) - } + var len = cbs.length + var args = slice(arguments) + for (var i = 0; i < len; i++) { + cbs[i].apply(null, args) + } + if (cbs.length > len) { + // added more in the interim. + // de-zalgo, just in case, but don't call again. + cbs.splice(0, len) + process.nextTick(function () { + RES.apply(null, args) + }) + } else { + delete reqs[key] + } + }) +} + +function slice (args) { + var length = args.length + var array = [] + + for (var i = 0; i < length; i++) array[i] = args[i] + return array } diff --git a/deps/npm/node_modules/inflight/package.json b/deps/npm/node_modules/inflight/package.json index f4e294aadf4..e0b63729cc6 100644 --- a/deps/npm/node_modules/inflight/package.json +++ b/deps/npm/node_modules/inflight/package.json @@ -1,10 +1,11 @@ { "name": "inflight", - "version": "1.0.1", + "version": "1.0.4", "description": "Add callbacks to requests in flight to avoid async duplication", "main": "inflight.js", "dependencies": { - "once": "^1.3.0" + "once": "^1.3.0", + "wrappy": "1" }, "devDependencies": { "tap": "^0.4.10" @@ -26,25 +27,10 @@ }, "homepage": "https://github.com/isaacs/inflight", "license": "ISC", - "_id": "inflight@1.0.1", - "_shasum": "01f6911821535243c790ac0f998f54e9023ffb6f", - "_from": "inflight@~1.0.1", - "_npmVersion": "1.4.9", - "_npmUser": { - "name": "isaacs", - "email": "i@izs.me" - }, - "maintainers": [ - { - "name": "isaacs", - "email": "i@izs.me" - } - ], - "dist": { - "shasum": "01f6911821535243c790ac0f998f54e9023ffb6f", - "tarball": "http://registry.npmjs.org/inflight/-/inflight-1.0.1.tgz" - }, - "directories": {}, - "_resolved": "https://registry.npmjs.org/inflight/-/inflight-1.0.1.tgz", - "readme": "ERROR: No README data found!" + "readme": "# inflight\n\nAdd callbacks to requests in flight to avoid async duplication\n\n## USAGE\n\n```javascript\nvar inflight = require('inflight')\n\n// some request that does some stuff\nfunction req(key, callback) {\n // key is any random string. like a url or filename or whatever.\n //\n // will return either a falsey value, indicating that the\n // request for this key is already in flight, or a new callback\n // which when called will call all callbacks passed to inflightk\n // with the same key\n callback = inflight(key, callback)\n\n // If we got a falsey value back, then there's already a req going\n if (!callback) return\n\n // this is where you'd fetch the url or whatever\n // callback is also once()-ified, so it can safely be assigned\n // to multiple events etc. 
First call wins.\n setTimeout(function() {\n callback(null, key)\n }, 100)\n}\n\n// only assigns a single setTimeout\n// when it dings, all cbs get called\nreq('foo', cb1)\nreq('foo', cb2)\nreq('foo', cb3)\nreq('foo', cb4)\n```\n", + "readmeFilename": "README.md", + "gitHead": "c7b5531d572a867064d4a1da9e013e8910b7d1ba", + "_id": "inflight@1.0.4", + "_shasum": "6cbb4521ebd51ce0ec0a936bfd7657ef7e9b172a", + "_from": "inflight@>=1.0.4 <1.1.0" } diff --git a/deps/npm/node_modules/inflight/test.js b/deps/npm/node_modules/inflight/test.js index 28fc1450346..2bb75b38814 100644 --- a/deps/npm/node_modules/inflight/test.js +++ b/deps/npm/node_modules/inflight/test.js @@ -31,3 +31,67 @@ test('basic', function (t) { t.notOk(b, 'second should get falsey inflight response') }) + +test('timing', function (t) { + var expect = [ + 'method one', + 'start one', + 'end one', + 'two', + 'tick', + 'three' + ] + var i = 0 + + function log (m) { + t.equal(m, expect[i], m + ' === ' + expect[i]) + ++i + if (i === expect.length) + t.end() + } + + function method (name, cb) { + log('method ' + name) + process.nextTick(cb) + } + + var one = inf('foo', function () { + log('start one') + var three = inf('foo', function () { + log('three') + }) + if (three) method('three', three) + log('end one') + }) + + method('one', one) + + var two = inf('foo', function () { + log('two') + }) + if (two) method('one', two) + + process.nextTick(log.bind(null, 'tick')) +}) + +test('parameters', function (t) { + t.plan(8) + + var a = inf('key', function (first, second, third) { + t.equal(first, 1) + t.equal(second, 2) + t.equal(third, 3) + }) + t.ok(a, 'first returned cb function') + + var b = inf('key', function (first, second, third) { + t.equal(first, 1) + t.equal(second, 2) + t.equal(third, 3) + }) + t.notOk(b, 'second should get falsey inflight response') + + setTimeout(function () { + a(1, 2, 3) + }) +}) diff --git a/deps/npm/node_modules/inherits/package.json b/deps/npm/node_modules/inherits/package.json index 3b4843a6d61..a0cd4268361 100644 --- a/deps/npm/node_modules/inherits/package.json +++ b/deps/npm/node_modules/inherits/package.json @@ -28,5 +28,24 @@ "url": "https://github.com/isaacs/inherits/issues" }, "_id": "inherits@2.0.1", - "_from": "inherits@" + "dist": { + "shasum": "b17d08d326b4423e568eff719f91b0b1cbdf69f1", + "tarball": "http://registry.npmjs.org/inherits/-/inherits-2.0.1.tgz" + }, + "_from": "inherits@latest", + "_npmVersion": "1.3.8", + "_npmUser": { + "name": "isaacs", + "email": "i@izs.me" + }, + "maintainers": [ + { + "name": "isaacs", + "email": "i@izs.me" + } + ], + "directories": {}, + "_shasum": "b17d08d326b4423e568eff719f91b0b1cbdf69f1", + "_resolved": "https://registry.npmjs.org/inherits/-/inherits-2.0.1.tgz", + "homepage": "https://github.com/isaacs/inherits" } diff --git a/deps/npm/node_modules/ini/README.md b/deps/npm/node_modules/ini/README.md index acbe8ec895f..33df258297d 100644 --- a/deps/npm/node_modules/ini/README.md +++ b/deps/npm/node_modules/ini/README.md @@ -1,7 +1,7 @@ An ini format parser and serializer for node. -Sections are treated as nested objects. Items before the first heading -are saved on the object directly. +Sections are treated as nested objects. Items before the first +heading are saved on the object directly. 
## Usage @@ -34,40 +34,62 @@ You can read, manipulate and write the ini-file like so: delete config.paths.default.datadir config.paths.default.array.push('fourth value') - fs.writeFileSync('./config_modified.ini', ini.stringify(config, 'section')) + fs.writeFileSync('./config_modified.ini', ini.stringify(config, { section: 'section' })) -This will result in a file called `config_modified.ini` being written to the filesystem with the following content: +This will result in a file called `config_modified.ini` being written +to the filesystem with the following content: [section] - scope = local + scope=local [section.database] - user = dbuser - password = dbpassword - database = use_another_database + user=dbuser + password=dbpassword + database=use_another_database [section.paths.default] - tmpdir = /tmp - array[] = first value - array[] = second value - array[] = third value - array[] = fourth value + tmpdir=/tmp + array[]=first value + array[]=second value + array[]=third value + array[]=fourth value ## API ### decode(inistring) + Decode the ini-style formatted `inistring` into a nested object. ### parse(inistring) + Alias for `decode(inistring)` -### encode(object, [section]) -Encode the object `object` into an ini-style formatted string. If the optional parameter `section` is given, then all top-level properties of the object are put into this section and the `section`-string is prepended to all sub-sections, see the usage example above. +### encode(object, [options]) -### stringify(object, [section]) -Alias for `encode(object, [section])` +Encode the object `object` into an ini-style formatted string. If the +optional parameter `section` is given, then all top-level properties +of the object are put into this section and the `section`-string is +prepended to all sub-sections, see the usage example above. + +The `options` object may contain the following: + +* `section` A string which will be the first `section` in the encoded + ini data. Defaults to none. +* `whitespace` Boolean to specify whether to put whitespace around the + `=` character. By default, whitespace is omitted, to be friendly to + some persnickety old parsers that don't tolerate it well. But some + find that it's more human-readable and pretty with the whitespace. + +For backwards compatibility reasons, if a `string` options is passed +in, then it is assumed to be the `section` value. + +### stringify(object, [options]) + +Alias for `encode(object, [options])` ### safe(val) -Escapes the string `val` such that it is safe to be used as a key or value in an ini-file. Basically escapes quotes. For example + +Escapes the string `val` such that it is safe to be used as a key or +value in an ini-file. Basically escapes quotes. For example ini.safe('"unsafe string"') @@ -76,4 +98,5 @@ would result in "\"unsafe string\"" ### unsafe(val) + Unescapes the string `val` diff --git a/deps/npm/node_modules/ini/ini.js b/deps/npm/node_modules/ini/ini.js index f5e44411863..1e232e74387 100644 --- a/deps/npm/node_modules/ini/ini.js +++ b/deps/npm/node_modules/ini/ini.js @@ -7,31 +7,47 @@ exports.unsafe = unsafe var eol = process.platform === "win32" ? "\r\n" : "\n" -function encode (obj, section) { +function encode (obj, opt) { var children = [] , out = "" + if (typeof opt === "string") { + opt = { + section: opt, + whitespace: false + } + } else { + opt = opt || {} + opt.whitespace = opt.whitespace === true + } + + var separator = opt.whitespace ? 
" = " : "=" + Object.keys(obj).forEach(function (k, _, __) { var val = obj[k] if (val && Array.isArray(val)) { val.forEach(function(item) { - out += safe(k + "[]") + "=" + safe(item) + "\n" + out += safe(k + "[]") + separator + safe(item) + "\n" }) } else if (val && typeof val === "object") { children.push(k) } else { - out += safe(k) + "=" + safe(val) + eol + out += safe(k) + separator + safe(val) + eol } }) - if (section && out.length) { - out = "[" + safe(section) + "]" + eol + out + if (opt.section && out.length) { + out = "[" + safe(opt.section) + "]" + eol + out } children.forEach(function (k, _, __) { var nk = dotSplit(k).join('\\.') - var child = encode(obj[k], (section ? section + "." : "") + nk) + var section = (opt.section ? opt.section + "." : "") + nk + var child = encode(obj[k], { + section: section, + whitespace: opt.whitespace + }) if (out.length && child.length) { out += eol } diff --git a/deps/npm/node_modules/ini/package.json b/deps/npm/node_modules/ini/package.json index 5212afb063f..3042c406277 100644 --- a/deps/npm/node_modules/ini/package.json +++ b/deps/npm/node_modules/ini/package.json @@ -6,7 +6,7 @@ }, "name": "ini", "description": "An ini encoder/decoder for node", - "version": "1.2.1", + "version": "1.3.0", "repository": { "type": "git", "url": "git://github.com/isaacs/ini.git" @@ -22,15 +22,15 @@ "devDependencies": { "tap": "~0.4.0" }, - "gitHead": "13498ce1ba5a6a20cd77ed2b55de0e714786f70c", + "gitHead": "6c314944d0201f3199e1189aeb5687d0aaf1c575", "bugs": { "url": "https://github.com/isaacs/ini/issues" }, "homepage": "https://github.com/isaacs/ini", - "_id": "ini@1.2.1", - "_shasum": "7f774e2f22752cd1dacbf9c63323df2a164ebca3", - "_from": "ini@latest", - "_npmVersion": "1.4.11", + "_id": "ini@1.3.0", + "_shasum": "625483e56c643a7721014c76604d3353f44bd429", + "_from": "ini@>=1.3.0 <2.0.0", + "_npmVersion": "2.0.0", "_npmUser": { "name": "isaacs", "email": "i@izs.me" @@ -42,9 +42,10 @@ } ], "dist": { - "shasum": "7f774e2f22752cd1dacbf9c63323df2a164ebca3", - "tarball": "http://registry.npmjs.org/ini/-/ini-1.2.1.tgz" + "shasum": "625483e56c643a7721014c76604d3353f44bd429", + "tarball": "http://registry.npmjs.org/ini/-/ini-1.3.0.tgz" }, "directories": {}, - "_resolved": "https://registry.npmjs.org/ini/-/ini-1.2.1.tgz" + "_resolved": "https://registry.npmjs.org/ini/-/ini-1.3.0.tgz", + "readme": "ERROR: No README data found!" 
} diff --git a/deps/npm/node_modules/ini/test/foo.js b/deps/npm/node_modules/ini/test/foo.js index 5c59890e8b6..9d34aa6fdaf 100644 --- a/deps/npm/node_modules/ini/test/foo.js +++ b/deps/npm/node_modules/ini/test/foo.js @@ -59,6 +59,16 @@ var i = require("../") } } } + , expectF = '[prefix.log]\n' + + 'type=file\n\n' + + '[prefix.log.level]\n' + + 'label=debug\n' + + 'value=10\n' + , expectG = '[log]\n' + + 'type = file\n\n' + + '[log.level]\n' + + 'label = debug\n' + + 'value = 10\n' test("decode from file", function (t) { var d = i.decode(data) @@ -77,3 +87,19 @@ test("encode from data", function (t) { t.end() }) + +test("encode with option", function (t) { + var obj = {log: { type:'file', level: {label:'debug', value:10} } } + var e = i.encode(obj, {section: 'prefix'}) + + t.equal(e, expectF) + t.end() +}) + +test("encode with whitespace", function (t) { + var obj = {log: { type:'file', level: {label:'debug', value:10} } } + var e = i.encode(obj, {whitespace: true}) + + t.equal(e, expectG) + t.end() +}) diff --git a/deps/npm/node_modules/init-package-json/.npmignore b/deps/npm/node_modules/init-package-json/.npmignore new file mode 100644 index 00000000000..44a3be18e88 --- /dev/null +++ b/deps/npm/node_modules/init-package-json/.npmignore @@ -0,0 +1,2 @@ +node_modules/ +.eslintrc diff --git a/deps/npm/node_modules/init-package-json/default-input.js b/deps/npm/node_modules/init-package-json/default-input.js index 8b335f96740..c86894b26a2 100644 --- a/deps/npm/node_modules/init-package-json/default-input.js +++ b/deps/npm/node_modules/init-package-json/default-input.js @@ -38,11 +38,14 @@ function readDeps (test) { return function (cb) { }) }} +var name = package.name || basename +exports.name = yes ? name : prompt('name', name) + +var version = package.version || config.get('init-version') || '1.0.0' +exports.version = yes ? version : prompt('version', version) -exports.name = prompt('name', package.name || basename) -exports.version = prompt('version', package.version || config.get('init.version') || '1.0.0') if (!package.description) { - exports.description = prompt('description') + exports.description = yes ? '' : prompt('description') } if (!package.main) { @@ -63,7 +66,8 @@ if (!package.main) { else f = f[0] - return cb(null, prompt('entry point', f || 'index.js')) + var index = f || 'index.js' + return cb(null, yes ? index : prompt('entry point', index)) }) } } @@ -121,26 +125,32 @@ function setupScripts (d, cb) { function tx (test) { return test || notest } - if (!s.test || s.test === notest) { - if (d.indexOf('tap') !== -1) - s.test = prompt('test command', 'tap test/*.js', tx) - else if (d.indexOf('expresso') !== -1) - s.test = prompt('test command', 'expresso test', tx) - else if (d.indexOf('mocha') !== -1) - s.test = prompt('test command', 'mocha', tx) - else - s.test = prompt('test command', tx) + var commands = { + 'tap':'tap test/*.js' + , 'expresso':'expresso test' + , 'mocha':'mocha' + } + var command + Object.keys(commands).forEach(function (k) { + if (d.indexOf(k) !== -1) command = commands[k] + }) + var ps = 'test command' + if (yes) { + s.test = command || notest + } else { + s.test = command ? prompt(ps, command, tx) : prompt(ps, tx) + } } - return cb(null, s) } if (!package.repository) { exports.repository = function (cb) { fs.readFile('.git/config', 'utf8', function (er, gconf) { - if (er || !gconf) return cb(null, prompt('git repository')) - + if (er || !gconf) { + return cb(null, yes ?
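// An illustrative note (not part of the upstream patch): every prompt in
// default-input.js now follows the same short-circuit pattern, so that
// `npm init --yes` (or -y / --force / -f, per the yes() helper added to
// init-package-json.js below) writes the defaults without ever prompting:
//
//   exports.field = yes ? defaultValue : prompt('field', defaultValue)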
'' : prompt('git repository')) + } gconf = gconf.split(/\r?\n/) var i = gconf.indexOf('[remote "origin"]') if (i !== -1) { @@ -152,13 +162,13 @@ if (!package.repository) { if (u && u.match(/^git@github.com:/)) u = u.replace(/^git@github.com:/, 'https://github.com/') - return cb(null, prompt('git repository', u)) + return cb(null, yes ? u : prompt('git repository', u)) }) } } if (!package.keywords) { - exports.keywords = prompt('keywords', function (s) { + exports.keywords = yes ? '' : prompt('keywords', function (s) { if (!s) return undefined if (Array.isArray(s)) s = s.join(' ') if (typeof s !== 'string') return s @@ -167,15 +177,14 @@ if (!package.keywords) { } if (!package.author) { - exports.author = config.get('init.author.name') + exports.author = config.get('init-author-name') ? { - "name" : config.get('init.author.name'), - "email" : config.get('init.author.email'), - "url" : config.get('init.author.url') + "name" : config.get('init-author-name'), + "email" : config.get('init-author-email'), + "url" : config.get('init-author-url') } : prompt('author') } -exports.license = prompt('license', package.license || - config.get('init.license') || - 'ISC') +var license = package.license || config.get('init-license') || 'ISC' +exports.license = yes ? license : prompt('license', license) diff --git a/deps/npm/node_modules/init-package-json/init-package-json.js b/deps/npm/node_modules/init-package-json/init-package-json.js index 2600e77b0af..cac761c39e0 100644 --- a/deps/npm/node_modules/init-package-json/init-package-json.js +++ b/deps/npm/node_modules/init-package-json/init-package-json.js @@ -1,5 +1,6 @@ module.exports = init +module.exports.yes = yes var PZ = require('promzard').PromZard var path = require('path') @@ -14,6 +15,13 @@ var read = require('read') // readJson.extras(file, data, cb) var readJson = require('read-package-json') +function yes (conf) { + return !!( + conf.get('yes') || conf.get('y') || + conf.get('force') || conf.get('f') + ) +} + function init (dir, input, config, cb) { if (typeof config === 'function') cb = config, config = {} @@ -35,7 +43,7 @@ function init (dir, input, config, cb) { var package = path.resolve(dir, 'package.json') input = path.resolve(input) var pkg - var ctx = {} + var ctx = { yes: yes(config) } var es = readJson.extraSet readJson.extraSet = es.filter(function (fn) { @@ -91,14 +99,21 @@ function init (dir, input, config, cb) { delete pkg.repository var d = JSON.stringify(pkg, null, 2) + '\n' + function write (yes) { + fs.writeFile(package, d, 'utf8', function (er) { + if (!er && yes) console.log('Wrote to %s:\n\n%s\n', package, d) + return cb(er, pkg) + }) + } + if (ctx.yes) { + return write(true) + } console.log('About to write to %s:\n\n%s\n', package, d) read({prompt:'Is this ok? 
', default: 'yes'}, function (er, ok) { if (!ok || ok.toLowerCase().charAt(0) !== 'y') { console.log('Aborted.') } else { - fs.writeFile(package, d, 'utf8', function (er) { - return cb(er, pkg) - }) + return write() } }) }) diff --git a/deps/npm/node_modules/init-package-json/node_modules/promzard/package.json b/deps/npm/node_modules/init-package-json/node_modules/promzard/package.json index bba3057d99a..f66857539f6 100644 --- a/deps/npm/node_modules/init-package-json/node_modules/promzard/package.json +++ b/deps/npm/node_modules/init-package-json/node_modules/promzard/package.json @@ -27,7 +27,7 @@ "homepage": "https://github.com/isaacs/promzard", "_id": "promzard@0.2.2", "_shasum": "918b9f2b29458cb001781a8856502e4a79b016e0", - "_from": "promzard@~0.2.0", + "_from": "promzard@>=0.2.0 <0.3.0", "_npmVersion": "1.4.10", "_npmUser": { "name": "isaacs", diff --git a/deps/npm/node_modules/init-package-json/package.json b/deps/npm/node_modules/init-package-json/package.json index ff9f926fc97..54aa7cbdf33 100644 --- a/deps/npm/node_modules/init-package-json/package.json +++ b/deps/npm/node_modules/init-package-json/package.json @@ -1,6 +1,6 @@ { "name": "init-package-json", - "version": "1.0.0", + "version": "1.1.1", "main": "init-package-json.js", "scripts": { "test": "tap test/*.js" @@ -21,11 +21,12 @@ "promzard": "~0.2.0", "read": "~1.0.1", "read-package-json": "1", - "semver": "2.x || 3.x" + "semver": "2.x || 3.x || 4" }, "devDependencies": { - "tap": "~0.2.5", - "rimraf": "~2.0.2" + "npm": "^2.1.4", + "rimraf": "^2.1.4", + "tap": "^0.4.13" }, "keywords": [ "init", @@ -37,29 +38,14 @@ "prompt", "start" ], - "gitHead": "e8c42e4be8877195e0ef2cd0b50d806afd2eec08", + "readme": "# init-package-json\n\nA node module to get your node module started.\n\n## Usage\n\n```javascript\nvar init = require('init-package-json')\nvar path = require('path')\n\n// a path to a promzard module. In the event that this file is\n// not found, one will be provided for you.\nvar initFile = path.resolve(process.env.HOME, '.npm-init')\n\n// the dir where we're doin stuff.\nvar dir = process.cwd()\n\n// extra stuff that gets put into the PromZard module's context.\n// In npm, this is the resolved config object. Exposed as 'config'\n// Optional.\nvar configData = { some: 'extra stuff' }\n\n// Any existing stuff from the package.json file is also exposed in the\n// PromZard module as the `package` object. 
There will also be free\n// vars for:\n// * `filename` path to the package.json file\n// * `basename` the tip of the package dir\n// * `dirname` the parent of the package dir\n\ninit(dir, initFile, configData, function (er, data) {\n // the data's already been written to {dir}/package.json\n // now you can do stuff with it\n})\n```\n\nOr from the command line:\n\n```\n$ npm-init\n```\n\nSee [PromZard](https://github.com/isaacs/promzard) for details about\nwhat can go in the config file.\n", "readmeFilename": "README.md", "gitHead": "a4df4e57f9b6a2bf906ad50612dbed7dcb2f2c2b", "bugs": { "url": "https://github.com/isaacs/init-package-json/issues" }, "homepage": "https://github.com/isaacs/init-package-json", - "_id": "init-package-json@1.0.0", - "_shasum": "8985a99ef11589695d6d3a5d03300b1eab0dd47a", - "_from": "init-package-json@1.0.0", - "_npmVersion": "1.4.21", - "_npmUser": { - "name": "isaacs", - "email": "i@izs.me" - }, - "maintainers": [ - { - "name": "isaacs", - "email": "i@izs.me" - } - ], - "dist": { - "shasum": "8985a99ef11589695d6d3a5d03300b1eab0dd47a", - "tarball": "http://registry.npmjs.org/init-package-json/-/init-package-json-1.0.0.tgz" - }, - "directories": {}, - "_resolved": "https://registry.npmjs.org/init-package-json/-/init-package-json-1.0.0.tgz" + "_id": "init-package-json@1.1.1", + "_shasum": "e09e9f1fb541e0fddc9175c5ce1736fd45ff4bf8", + "_from": "init-package-json@>=1.1.1 <2.0.0" } diff --git a/deps/npm/node_modules/init-package-json/test/npm-defaults.js b/deps/npm/node_modules/init-package-json/test/npm-defaults.js new file mode 100644 index 00000000000..8229c84a002 --- /dev/null +++ b/deps/npm/node_modules/init-package-json/test/npm-defaults.js @@ -0,0 +1,49 @@ +var test = require("tap").test +var rimraf = require("rimraf") +var resolve = require("path").resolve + +var npm = require("npm") +var init = require("../") + +var EXPECTED = { + name : "test", + version : "3.1.4", + description : "", + main : "basic.js", + scripts : { + test : 'echo "Error: no test specified" && exit 1' + }, + keywords : [], + author : "npmbot (http://npm.im)", + license : "WTFPL" +} + +test("npm configuration values pulled from environment", function (t) { + /*eslint camelcase:0 */ + process.env.npm_config_yes = "yes" + + process.env.npm_config_init_author_name = "npmbot" + process.env.npm_config_init_author_email = "n@p.m" + process.env.npm_config_init_author_url = "http://npm.im" + + process.env.npm_config_init_license = EXPECTED.license + process.env.npm_config_init_version = EXPECTED.version + + npm.load({}, function (err) { + t.ifError(err, "npm loaded successfully") + + process.chdir(resolve(__dirname)) + init(__dirname, __dirname, npm.config, function (er, data) { + t.ifError(er, "init ran successfully") + + t.same(data, EXPECTED, "got the package data from the environment") + t.end() + }) + }) +}) + +test("cleanup", function (t) { + rimraf.sync(resolve(__dirname, "package.json")) + t.pass("cleaned up") + t.end() +}) diff --git a/deps/npm/node_modules/lockfile/package.json b/deps/npm/node_modules/lockfile/package.json index bf4a90dcfb5..27bd23777dc 100644 --- a/deps/npm/node_modules/lockfile/package.json +++ b/deps/npm/node_modules/lockfile/package.json @@ -31,8 +31,6 @@ }, "license": "BSD", "description": "A very polite lock file utility, which endeavors to not litter, and to wait patiently for others.", - "readme": "# lockfile\n\nA very polite lock file utility, which endeavors to not litter, and to\nwait patiently for others.\n\n## Usage\n\n```javascript\nvar lockFile = 
require('lockfile')\n\n// opts is optional, and defaults to {}\nlockFile.lock('some-file.lock', opts, function (er) {\n // if the er happens, then it failed to acquire a lock.\n // if there was not an error, then the file was created,\n // and won't be deleted until we unlock it.\n\n // do my stuff, free of interruptions\n // then, some time later, do:\n lockFile.unlock('some-file.lock', function (er) {\n // er means that an error happened, and is probably bad.\n })\n})\n```\n\n## Methods\n\nSync methods return the value/throw the error, others don't. Standard\nnode fs stuff.\n\nAll known locks are removed when the process exits. Of course, it's\npossible for certain types of failures to cause this to fail, but a best\neffort is made to not be a litterbug.\n\n### lockFile.lock(path, [opts], cb)\n\nAcquire a file lock on the specified path\n\n### lockFile.lockSync(path, [opts])\n\nAcquire a file lock on the specified path\n\n### lockFile.unlock(path, cb)\n\nClose and unlink the lockfile.\n\n### lockFile.unlockSync(path)\n\nClose and unlink the lockfile.\n\n### lockFile.check(path, [opts], cb)\n\nCheck if the lockfile is locked and not stale.\n\nReturns boolean.\n\n### lockFile.checkSync(path, [opts], cb)\n\nCheck if the lockfile is locked and not stale.\n\nCallback is called with `cb(error, isLocked)`.\n\n## Options\n\n### opts.wait\n\nA number of milliseconds to wait for locks to expire before giving up.\nOnly used by lockFile.lock. Poll for `opts.wait` ms. If the lock is\nnot cleared by the time the wait expires, then it returns with the\noriginal error.\n\n### opts.pollPeriod\n\nWhen using `opts.wait`, this is the period in ms in which it polls to\ncheck if the lock has expired. Defaults to `100`.\n\n### opts.stale\n\nA number of milliseconds before locks are considered to have expired.\n\n### opts.retries\n\nUsed by lock and lockSync. Retry `n` number of times before giving up.\n\n### opts.retryWait\n\nUsed by lock. Wait `n` milliseconds before retrying.\n", - "readmeFilename": "README.md", "gitHead": "9590c6f02521eb1bb154ddc3ca9a7e84ce770c45", "bugs": { "url": "https://github.com/isaacs/lockfile/issues" @@ -40,5 +38,26 @@ "homepage": "https://github.com/isaacs/lockfile", "_id": "lockfile@1.0.0", "_shasum": "b3a7609dda6012060083bacb0ab0ecbca58e9203", - "_from": "lockfile@latest" + "_from": "lockfile@1.0.0", + "_npmVersion": "1.4.23", + "_npmUser": { + "name": "isaacs", + "email": "i@izs.me" + }, + "maintainers": [ + { + "name": "trevorburnham", + "email": "trevorburnham@gmail.com" + }, + { + "name": "isaacs", + "email": "i@izs.me" + } + ], + "dist": { + "shasum": "b3a7609dda6012060083bacb0ab0ecbca58e9203", + "tarball": "http://registry.npmjs.org/lockfile/-/lockfile-1.0.0.tgz" + }, + "_resolved": "https://registry.npmjs.org/lockfile/-/lockfile-1.0.0.tgz", + "readme": "ERROR: No README data found!" 
} diff --git a/deps/npm/node_modules/node-gyp/package.json b/deps/npm/node_modules/node-gyp/package.json index 8ee98695186..2e2e47c7a37 100644 --- a/deps/npm/node_modules/node-gyp/package.json +++ b/deps/npm/node_modules/node-gyp/package.json @@ -10,7 +10,7 @@ "bindings", "gyp" ], - "version": "1.0.1", + "version": "1.0.2", "installVersion": 9, "author": { "name": "Nathan Rajlich", @@ -37,22 +37,45 @@ "osenv": "0", "request": "2", "rimraf": "2", - "semver": "2.x || 3.x", + "semver": "2.x || 3.x || 4", "tar": "^1.0.0", "which": "1" }, "engines": { "node": ">= 0.8.0" }, - "readme": "node-gyp\n=========\n### Node.js native addon build tool\n\n`node-gyp` is a cross-platform command-line tool written in Node.js for compiling\nnative addon modules for Node.js. It bundles the [gyp](https://code.google.com/p/gyp/)\nproject used by the Chromium team and takes away the pain of dealing with the\nvarious differences in build platforms. It is the replacement to the `node-waf`\nprogram which is removed for node `v0.8`. If you have a native addon for node that\nstill has a `wscript` file, then you should definitely add a `binding.gyp` file\nto support the latest versions of node.\n\nMultiple target versions of node are supported (i.e. `0.8`, `0.9`, `0.10`, ..., `1.0`,\netc.), regardless of what version of node is actually installed on your system\n(`node-gyp` downloads the necessary development files for the target version).\n\n#### Features:\n\n * Easy to use, consistent interface\n * Same commands to build your module on every platform\n * Supports multiple target versions of Node\n\n\nInstallation\n------------\n\nYou can install with `npm`:\n\n``` bash\n$ npm install -g node-gyp\n```\n\nYou will also need to install:\n\n * On Unix:\n * `python` (`v2.7` recommended, `v3.x.x` is __*not*__ supported)\n * `make`\n * A proper C/C++ compiler toolchain, like GCC\n * On Windows:\n * [Python][windows-python] ([`v2.7.3`][windows-python-v2.7.3] recommended, `v3.x.x` is __*not*__ supported)\n * Windows XP/Vista/7:\n * Microsoft Visual Studio C++ 2010 ([Express][msvc2010] version works well)\n * For 64-bit builds of node and native modules you will _**also**_ need the [Windows 7 64-bit SDK][win7sdk]\n * If the install fails, try uninstalling any C++ 2010 x64&x86 Redistributable that you have installed first.\n * If you get errors that the 64-bit compilers are not installed you may also need the [compiler update for the Windows SDK 7.1]\n * Windows 7/8:\n * Microsoft Visual Studio C++ 2012 for Windows Desktop ([Express][msvc2012] version works well)\n\nIf you have multiple Python versions installed, you can identify which Python\nversion `node-gyp` uses by setting the '--python' variable:\n\n``` bash\n$ node-gyp --python /path/to/python2.7\n```\n\nIf `node-gyp` is called by way of `npm` *and* you have multiple versions of\nPython installed, then you can set `npm`'s 'python' config key to the appropriate\nvalue:\n\n``` bash\n$ npm config set python /path/to/executable/python2.7\n```\n\nNote that OS X is just a flavour of Unix and so needs `python`, `make`, and C/C++.\nAn easy way to obtain these is to install XCode from Apple,\nand then use it to install the command line tools (under Preferences -> Downloads).\n\nHow to Use\n----------\n\nTo compile your native addon, first go to its root directory:\n\n``` bash\n$ cd my_node_addon\n```\n\nThe next step is to generate the appropriate project build files for the current\nplatform. 
Use `configure` for that:\n\n``` bash\n$ node-gyp configure\n```\n\n__Note__: The `configure` step looks for the `binding.gyp` file in the current\ndirectory to processs. See below for instructions on creating the `binding.gyp` file.\n\nNow you will have either a `Makefile` (on Unix platforms) or a `vcxproj` file\n(on Windows) in the `build/` directory. Next invoke the `build` command:\n\n``` bash\n$ node-gyp build\n```\n\nNow you have your compiled `.node` bindings file! The compiled bindings end up\nin `build/Debug/` or `build/Release/`, depending on the build mode. At this point\nyou can require the `.node` file with Node and run your tests!\n\n__Note:__ To create a _Debug_ build of the bindings file, pass the `--debug` (or\n`-d`) switch when running the either `configure` or `build` command.\n\n\nThe \"binding.gyp\" file\n----------------------\n\nPreviously when node had `node-waf` you had to write a `wscript` file. The\nreplacement for that is the `binding.gyp` file, which describes the configuration\nto build your module in a JSON-like format. This file gets placed in the root of\nyour package, alongside the `package.json` file.\n\nA barebones `gyp` file appropriate for building a node addon looks like:\n\n``` python\n{\n \"targets\": [\n {\n \"target_name\": \"binding\",\n \"sources\": [ \"src/binding.cc\" ]\n }\n ]\n}\n```\n\nSome additional resources for writing `gyp` files:\n\n * [\"Hello World\" node addon example](https://github.com/joyent/node/tree/master/test/addons/hello-world)\n * [gyp user documentation](http://code.google.com/p/gyp/wiki/GypUserDocumentation)\n * [gyp input format reference](http://code.google.com/p/gyp/wiki/InputFormatReference)\n * [*\"binding.gyp\" files out in the wild* wiki page](https://github.com/TooTallNate/node-gyp/wiki/%22binding.gyp%22-files-out-in-the-wild)\n\n\nCommands\n--------\n\n`node-gyp` responds to the following commands:\n\n| **Command** | **Description**\n|:--------------|:---------------------------------------------------------------\n| `build` | Invokes `make`/`msbuild.exe` and builds the native addon\n| `clean` | Removes any the `build` dir if it exists\n| `configure` | Generates project build files for the current platform\n| `rebuild` | Runs \"clean\", \"configure\" and \"build\" all in a row\n| `install` | Installs node development header files for the given version\n| `list` | Lists the currently installed node development file versions\n| `remove` | Removes the node development header files for the given version\n\n\nLicense\n-------\n\n(The MIT License)\n\nCopyright (c) 2012 Nathan Rajlich <nathan@tootallnate.net>\n\nPermission is hereby granted, free of charge, to any person obtaining\na copy of this software and associated documentation files (the\n'Software'), to deal in the Software without restriction, including\nwithout limitation the rights to use, copy, modify, merge, publish,\ndistribute, sublicense, and/or sell copies of the Software, and to\npermit persons to whom the Software is furnished to do so, subject to\nthe following conditions:\n\nThe above copyright notice and this permission notice shall be\nincluded in all copies or substantial portions of the Software.\n\nTHE SOFTWARE IS PROVIDED 'AS IS', WITHOUT WARRANTY OF ANY KIND,\nEXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF\nMERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.\nIN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY\nCLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT,\nTORT 
OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE\nSOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.\n\n\n[windows-python]: http://www.python.org/getit/windows\n[windows-python-v2.7.3]: http://www.python.org/download/releases/2.7.3#download\n[msvc2010]: http://go.microsoft.com/?linkid=9709949\n[msvc2012]: http://go.microsoft.com/?linkid=9816758\n[win7sdk]: http://www.microsoft.com/en-us/download/details.aspx?id=8279\n[compiler update for the Windows SDK 7.1]: http://www.microsoft.com/en-us/download/details.aspx?id=4422\n", - "readmeFilename": "README.md", - "gitHead": "b2abd70377c356483c98509b14a01d71f1eaa17f", + "gitHead": "1e399b471945b35f3bfbca4a10fba31a6739b5db", "bugs": { "url": "https://github.com/TooTallNate/node-gyp/issues" }, "homepage": "https://github.com/TooTallNate/node-gyp", - "_id": "node-gyp@1.0.1", + "_id": "node-gyp@1.0.2", "scripts": {}, - "_shasum": "d5e364145ff10b259be9986855c83b5a76a2d975", - "_from": "node-gyp@latest" + "_shasum": "b0bb6d2d762271408dd904853e7aa3000ed2eb57", + "_from": "node-gyp@>=1.0.1-0 <1.1.0-0", + "_npmVersion": "2.0.0-beta.3", + "_npmUser": { + "name": "isaacs", + "email": "i@izs.me" + }, + "maintainers": [ + { + "name": "TooTallNate", + "email": "nathan@tootallnate.net" + }, + { + "name": "tootallnate", + "email": "nathan@tootallnate.net" + }, + { + "name": "isaacs", + "email": "i@izs.me" + } + ], + "dist": { + "shasum": "b0bb6d2d762271408dd904853e7aa3000ed2eb57", + "tarball": "http://registry.npmjs.org/node-gyp/-/node-gyp-1.0.2.tgz" + }, + "directories": {}, + "_resolved": "https://registry.npmjs.org/node-gyp/-/node-gyp-1.0.2.tgz" } diff --git a/deps/npm/node_modules/read-package-json/node_modules/normalize-package-data/.npmignore b/deps/npm/node_modules/normalize-package-data/.npmignore similarity index 100% rename from deps/npm/node_modules/read-package-json/node_modules/normalize-package-data/.npmignore rename to deps/npm/node_modules/normalize-package-data/.npmignore diff --git a/deps/npm/node_modules/read-package-json/node_modules/normalize-package-data/.travis.yml b/deps/npm/node_modules/normalize-package-data/.travis.yml similarity index 100% rename from deps/npm/node_modules/read-package-json/node_modules/normalize-package-data/.travis.yml rename to deps/npm/node_modules/normalize-package-data/.travis.yml diff --git a/deps/npm/node_modules/read-package-json/node_modules/normalize-package-data/AUTHORS b/deps/npm/node_modules/normalize-package-data/AUTHORS similarity index 100% rename from deps/npm/node_modules/read-package-json/node_modules/normalize-package-data/AUTHORS rename to deps/npm/node_modules/normalize-package-data/AUTHORS diff --git a/deps/npm/node_modules/read-package-json/node_modules/normalize-package-data/LICENSE b/deps/npm/node_modules/normalize-package-data/LICENSE similarity index 100% rename from deps/npm/node_modules/read-package-json/node_modules/normalize-package-data/LICENSE rename to deps/npm/node_modules/normalize-package-data/LICENSE diff --git a/deps/npm/node_modules/read-package-json/node_modules/normalize-package-data/README.md b/deps/npm/node_modules/normalize-package-data/README.md similarity index 97% rename from deps/npm/node_modules/read-package-json/node_modules/normalize-package-data/README.md rename to deps/npm/node_modules/normalize-package-data/README.md index bdcc8b04db4..1429e404208 100644 --- a/deps/npm/node_modules/read-package-json/node_modules/normalize-package-data/README.md +++ b/deps/npm/node_modules/normalize-package-data/README.md @@ -16,7 +16,7 @@ Basic usage is really 
simple. You call the function that normalize-package-data ```javascript normalizeData = require('normalize-package-data') -packageData = fs.readfileSync("package.json") +packageData = fs.readFileSync("package.json") normalizeData(packageData) // packageData is now normalized ``` @@ -27,7 +27,7 @@ You may activate strict validation by passing true as the second argument. ```javascript normalizeData = require('normalize-package-data') -packageData = fs.readfileSync("package.json") +packageData = fs.readFileSync("package.json") warnFn = function(msg) { console.error(msg) } normalizeData(packageData, true) // packageData is now normalized @@ -41,7 +41,7 @@ Optionally, you may pass a "warning" function. It gets called whenever the `norm ```javascript normalizeData = require('normalize-package-data') -packageData = fs.readfileSync("package.json") +packageData = fs.readFileSync("package.json") warnFn = function(msg) { console.error(msg) } normalizeData(packageData, warnFn) // packageData is now normalized. Any number of warnings may have been logged. diff --git a/deps/npm/node_modules/read-package-json/node_modules/normalize-package-data/lib/core_module_names.json b/deps/npm/node_modules/normalize-package-data/lib/core_module_names.json similarity index 100% rename from deps/npm/node_modules/read-package-json/node_modules/normalize-package-data/lib/core_module_names.json rename to deps/npm/node_modules/normalize-package-data/lib/core_module_names.json diff --git a/deps/npm/node_modules/read-package-json/node_modules/normalize-package-data/lib/extract_description.js b/deps/npm/node_modules/normalize-package-data/lib/extract_description.js similarity index 100% rename from deps/npm/node_modules/read-package-json/node_modules/normalize-package-data/lib/extract_description.js rename to deps/npm/node_modules/normalize-package-data/lib/extract_description.js diff --git a/deps/npm/node_modules/read-package-json/node_modules/normalize-package-data/lib/fixer.js b/deps/npm/node_modules/normalize-package-data/lib/fixer.js similarity index 97% rename from deps/npm/node_modules/read-package-json/node_modules/normalize-package-data/lib/fixer.js rename to deps/npm/node_modules/normalize-package-data/lib/fixer.js index 72836002fea..14c0abc8e91 100644 --- a/deps/npm/node_modules/read-package-json/node_modules/normalize-package-data/lib/fixer.js +++ b/deps/npm/node_modules/normalize-package-data/lib/fixer.js @@ -111,6 +111,13 @@ var fixer = module.exports = { this.warn("nonStringBundleDependency", bd) return false } else { + if (!data.dependencies) { + data.dependencies = {} + } + if (!data.dependencies.hasOwnProperty(bd)) { + this.warn("nonDependencyBundleDependency", bd) + data.dependencies[bd] = "*" + } return true } }, this) diff --git a/deps/npm/node_modules/read-package-json/node_modules/normalize-package-data/lib/make_warning.js b/deps/npm/node_modules/normalize-package-data/lib/make_warning.js similarity index 100% rename from deps/npm/node_modules/read-package-json/node_modules/normalize-package-data/lib/make_warning.js rename to deps/npm/node_modules/normalize-package-data/lib/make_warning.js diff --git a/deps/npm/node_modules/read-package-json/node_modules/normalize-package-data/lib/normalize.js b/deps/npm/node_modules/normalize-package-data/lib/normalize.js similarity index 100% rename from deps/npm/node_modules/read-package-json/node_modules/normalize-package-data/lib/normalize.js rename to deps/npm/node_modules/normalize-package-data/lib/normalize.js diff --git 
a/deps/npm/node_modules/read-package-json/node_modules/normalize-package-data/lib/safe_format.js b/deps/npm/node_modules/normalize-package-data/lib/safe_format.js similarity index 100% rename from deps/npm/node_modules/read-package-json/node_modules/normalize-package-data/lib/safe_format.js rename to deps/npm/node_modules/normalize-package-data/lib/safe_format.js diff --git a/deps/npm/node_modules/read-package-json/node_modules/normalize-package-data/lib/typos.json b/deps/npm/node_modules/normalize-package-data/lib/typos.json similarity index 100% rename from deps/npm/node_modules/read-package-json/node_modules/normalize-package-data/lib/typos.json rename to deps/npm/node_modules/normalize-package-data/lib/typos.json diff --git a/deps/npm/node_modules/read-package-json/node_modules/normalize-package-data/lib/warning_messages.json b/deps/npm/node_modules/normalize-package-data/lib/warning_messages.json similarity index 95% rename from deps/npm/node_modules/read-package-json/node_modules/normalize-package-data/lib/warning_messages.json rename to deps/npm/node_modules/normalize-package-data/lib/warning_messages.json index 9605f5cc640..1877fe5de39 100644 --- a/deps/npm/node_modules/read-package-json/node_modules/normalize-package-data/lib/warning_messages.json +++ b/deps/npm/node_modules/normalize-package-data/lib/warning_messages.json @@ -8,6 +8,7 @@ ,"invalidFilename": "Invalid filename in 'files' list: %s" ,"nonArrayBundleDependencies": "Invalid 'bundleDependencies' list. Must be array of package names" ,"nonStringBundleDependency": "Invalid bundleDependencies member: %s" + ,"nonDependencyBundleDependency": "Non-dependency in bundleDependencies: %s" ,"nonObjectDependencies": "%s field must be an object" ,"nonStringDependency": "Invalid dependency: %s %s" ,"deprecatedArrayDependencies": "specifying %s as array is deprecated" @@ -25,4 +26,4 @@ ,"nonUrlHomepage": "homepage field must be a string url. Deleted." ,"missingProtocolHomepage": "homepage field must start with a protocol." ,"typo": "%s should probably be %s." 
-} \ No newline at end of file +} diff --git a/deps/npm/node_modules/read-package-json/node_modules/normalize-package-data/package.json b/deps/npm/node_modules/normalize-package-data/package.json similarity index 74% rename from deps/npm/node_modules/read-package-json/node_modules/normalize-package-data/package.json rename to deps/npm/node_modules/normalize-package-data/package.json index 084068ead7a..6da54694c2f 100644 --- a/deps/npm/node_modules/read-package-json/node_modules/normalize-package-data/package.json +++ b/deps/npm/node_modules/normalize-package-data/package.json @@ -1,6 +1,6 @@ { "name": "normalize-package-data", - "version": "1.0.1", + "version": "1.0.3", "author": { "name": "Meryn Stol", "email": "merynstol@gmail.com" @@ -17,7 +17,7 @@ "dependencies": { "github-url-from-git": "^1.3.0", "github-url-from-username-repo": "^1.0.0", - "semver": "2 || 3" + "semver": "2 || 3 || 4" }, "devDependencies": { "tap": "~0.2.5", @@ -38,18 +38,19 @@ "email": "rok@kowalski.gd" } ], - "gitHead": "d260644f514672cc84f1cc471024679cccc4fd65", + "gitHead": "8c30091c83b1a41e113757148c4543ef61ff863d", "bugs": { "url": "https://github.com/meryn/normalize-package-data/issues" }, "homepage": "https://github.com/meryn/normalize-package-data", - "_id": "normalize-package-data@1.0.1", - "_shasum": "2a4b5200c82cc47bb91c8c9cf47d645499d200bf", - "_from": "normalize-package-data@^1.0.0", - "_npmVersion": "2.0.0-beta.0", + "_id": "normalize-package-data@1.0.3", + "_shasum": "8be955b8907af975f1a4584ea8bb9b41492312f5", + "_from": "normalize-package-data@>=1.0.3 <1.1.0", + "_npmVersion": "2.1.0", + "_nodeVersion": "0.10.31", "_npmUser": { - "name": "othiym23", - "email": "ogd@aoaioxxysz.net" + "name": "isaacs", + "email": "i@izs.me" }, "maintainers": [ { @@ -66,9 +67,9 @@ } ], "dist": { - "shasum": "2a4b5200c82cc47bb91c8c9cf47d645499d200bf", - "tarball": "http://registry.npmjs.org/normalize-package-data/-/normalize-package-data-1.0.1.tgz" + "shasum": "8be955b8907af975f1a4584ea8bb9b41492312f5", + "tarball": "http://registry.npmjs.org/normalize-package-data/-/normalize-package-data-1.0.3.tgz" }, "directories": {}, - "_resolved": "https://registry.npmjs.org/normalize-package-data/-/normalize-package-data-1.0.1.tgz" + "_resolved": "https://registry.npmjs.org/normalize-package-data/-/normalize-package-data-1.0.3.tgz" } diff --git a/deps/npm/node_modules/read-package-json/node_modules/normalize-package-data/test/basic.js b/deps/npm/node_modules/normalize-package-data/test/basic.js similarity index 100% rename from deps/npm/node_modules/read-package-json/node_modules/normalize-package-data/test/basic.js rename to deps/npm/node_modules/normalize-package-data/test/basic.js diff --git a/deps/npm/node_modules/read-package-json/node_modules/normalize-package-data/test/consistency.js b/deps/npm/node_modules/normalize-package-data/test/consistency.js similarity index 100% rename from deps/npm/node_modules/read-package-json/node_modules/normalize-package-data/test/consistency.js rename to deps/npm/node_modules/normalize-package-data/test/consistency.js diff --git a/deps/npm/node_modules/read-package-json/node_modules/normalize-package-data/test/dependencies.js b/deps/npm/node_modules/normalize-package-data/test/dependencies.js similarity index 94% rename from deps/npm/node_modules/read-package-json/node_modules/normalize-package-data/test/dependencies.js rename to deps/npm/node_modules/normalize-package-data/test/dependencies.js index dda24dc4f9c..3e493ab0237 100644 --- 
a/deps/npm/node_modules/read-package-json/node_modules/normalize-package-data/test/dependencies.js +++ b/deps/npm/node_modules/normalize-package-data/test/dependencies.js @@ -37,7 +37,9 @@ tap.test("warn if bundleDependencies array contains anything else but strings", var wanted1 = safeFormat(warningMessages.nonStringBundleDependency, 123) var wanted2 = safeFormat(warningMessages.nonStringBundleDependency, {foo:"bar"}) + var wanted3 = safeFormat(warningMessages.nonDependencyBundleDependency, "abc") t.ok(~warnings.indexOf(wanted1), wanted1) t.ok(~warnings.indexOf(wanted2), wanted2) + t.ok(~warnings.indexOf(wanted3), wanted3) t.end() -}) \ No newline at end of file +}) diff --git a/deps/npm/node_modules/read-package-json/node_modules/normalize-package-data/test/fixtures/async.json b/deps/npm/node_modules/normalize-package-data/test/fixtures/async.json similarity index 100% rename from deps/npm/node_modules/read-package-json/node_modules/normalize-package-data/test/fixtures/async.json rename to deps/npm/node_modules/normalize-package-data/test/fixtures/async.json diff --git a/deps/npm/node_modules/read-package-json/node_modules/normalize-package-data/test/fixtures/bcrypt.json b/deps/npm/node_modules/normalize-package-data/test/fixtures/bcrypt.json similarity index 100% rename from deps/npm/node_modules/read-package-json/node_modules/normalize-package-data/test/fixtures/bcrypt.json rename to deps/npm/node_modules/normalize-package-data/test/fixtures/bcrypt.json diff --git a/deps/npm/node_modules/read-package-json/node_modules/normalize-package-data/test/fixtures/coffee-script.json b/deps/npm/node_modules/normalize-package-data/test/fixtures/coffee-script.json similarity index 100% rename from deps/npm/node_modules/read-package-json/node_modules/normalize-package-data/test/fixtures/coffee-script.json rename to deps/npm/node_modules/normalize-package-data/test/fixtures/coffee-script.json diff --git a/deps/npm/node_modules/read-package-json/node_modules/normalize-package-data/test/fixtures/http-server.json b/deps/npm/node_modules/normalize-package-data/test/fixtures/http-server.json similarity index 100% rename from deps/npm/node_modules/read-package-json/node_modules/normalize-package-data/test/fixtures/http-server.json rename to deps/npm/node_modules/normalize-package-data/test/fixtures/http-server.json diff --git a/deps/npm/node_modules/read-package-json/node_modules/normalize-package-data/test/fixtures/movefile.json b/deps/npm/node_modules/normalize-package-data/test/fixtures/movefile.json similarity index 100% rename from deps/npm/node_modules/read-package-json/node_modules/normalize-package-data/test/fixtures/movefile.json rename to deps/npm/node_modules/normalize-package-data/test/fixtures/movefile.json diff --git a/deps/npm/node_modules/read-package-json/node_modules/normalize-package-data/test/fixtures/no-description.json b/deps/npm/node_modules/normalize-package-data/test/fixtures/no-description.json similarity index 100% rename from deps/npm/node_modules/read-package-json/node_modules/normalize-package-data/test/fixtures/no-description.json rename to deps/npm/node_modules/normalize-package-data/test/fixtures/no-description.json diff --git a/deps/npm/node_modules/read-package-json/node_modules/normalize-package-data/test/fixtures/node-module_exist.json b/deps/npm/node_modules/normalize-package-data/test/fixtures/node-module_exist.json similarity index 100% rename from deps/npm/node_modules/read-package-json/node_modules/normalize-package-data/test/fixtures/node-module_exist.json rename to
deps/npm/node_modules/normalize-package-data/test/fixtures/node-module_exist.json diff --git a/deps/npm/node_modules/read-package-json/node_modules/normalize-package-data/test/fixtures/npm.json b/deps/npm/node_modules/normalize-package-data/test/fixtures/npm.json similarity index 100% rename from deps/npm/node_modules/read-package-json/node_modules/normalize-package-data/test/fixtures/npm.json rename to deps/npm/node_modules/normalize-package-data/test/fixtures/npm.json diff --git a/deps/npm/node_modules/read-package-json/node_modules/normalize-package-data/test/fixtures/read-package-json.json b/deps/npm/node_modules/normalize-package-data/test/fixtures/read-package-json.json similarity index 100% rename from deps/npm/node_modules/read-package-json/node_modules/normalize-package-data/test/fixtures/read-package-json.json rename to deps/npm/node_modules/normalize-package-data/test/fixtures/read-package-json.json diff --git a/deps/npm/node_modules/read-package-json/node_modules/normalize-package-data/test/fixtures/request.json b/deps/npm/node_modules/normalize-package-data/test/fixtures/request.json similarity index 100% rename from deps/npm/node_modules/read-package-json/node_modules/normalize-package-data/test/fixtures/request.json rename to deps/npm/node_modules/normalize-package-data/test/fixtures/request.json diff --git a/deps/npm/node_modules/read-package-json/node_modules/normalize-package-data/test/fixtures/underscore.json b/deps/npm/node_modules/normalize-package-data/test/fixtures/underscore.json similarity index 100% rename from deps/npm/node_modules/read-package-json/node_modules/normalize-package-data/test/fixtures/underscore.json rename to deps/npm/node_modules/normalize-package-data/test/fixtures/underscore.json diff --git a/deps/npm/node_modules/read-package-json/node_modules/normalize-package-data/test/github-urls.js b/deps/npm/node_modules/normalize-package-data/test/github-urls.js similarity index 100% rename from deps/npm/node_modules/read-package-json/node_modules/normalize-package-data/test/github-urls.js rename to deps/npm/node_modules/normalize-package-data/test/github-urls.js diff --git a/deps/npm/node_modules/read-package-json/node_modules/normalize-package-data/test/normalize.js b/deps/npm/node_modules/normalize-package-data/test/normalize.js similarity index 100% rename from deps/npm/node_modules/read-package-json/node_modules/normalize-package-data/test/normalize.js rename to deps/npm/node_modules/normalize-package-data/test/normalize.js diff --git a/deps/npm/node_modules/read-package-json/node_modules/normalize-package-data/test/scoped.js b/deps/npm/node_modules/normalize-package-data/test/scoped.js similarity index 100% rename from deps/npm/node_modules/read-package-json/node_modules/normalize-package-data/test/scoped.js rename to deps/npm/node_modules/normalize-package-data/test/scoped.js diff --git a/deps/npm/node_modules/read-package-json/node_modules/normalize-package-data/test/strict.js b/deps/npm/node_modules/normalize-package-data/test/strict.js similarity index 100% rename from deps/npm/node_modules/read-package-json/node_modules/normalize-package-data/test/strict.js rename to deps/npm/node_modules/normalize-package-data/test/strict.js diff --git a/deps/npm/node_modules/read-package-json/node_modules/normalize-package-data/test/typo.js b/deps/npm/node_modules/normalize-package-data/test/typo.js similarity index 100% rename from deps/npm/node_modules/read-package-json/node_modules/normalize-package-data/test/typo.js rename to 
deps/npm/node_modules/normalize-package-data/test/typo.js diff --git a/deps/npm/node_modules/npm-install-checks/package.json b/deps/npm/node_modules/npm-install-checks/package.json index 9457df0b5c3..06ca052e410 100644 --- a/deps/npm/node_modules/npm-install-checks/package.json +++ b/deps/npm/node_modules/npm-install-checks/package.json @@ -1,11 +1,11 @@ { "name": "npm-install-checks", - "version": "1.0.2", + "version": "1.0.4", "description": "checks that npm runs during the installation of a module", "main": "index.js", "dependencies": { "npmlog": "0.1", - "semver": "^2.3.0" + "semver": "^2.3.0 || 3.x || 4" }, "devDependencies": { "tap": "~0.4.8", @@ -32,10 +32,29 @@ "bugs": { "url": "https://github.com/npm/npm-install-checks/issues" }, - "readme": "# npm-install-checks\n\nA package that contains checks that npm runs during the installation.\n\n## API\n\n### .checkEngine(target, npmVer, nodeVer, force, strict, cb)\nCheck if node/npm version is supported by the package.\n\nError type: `ENOTSUP`\n\n### .checkPlatform(target, force, cb)\nCheck if OS/Arch is supported by the package.\n\nError type: `EBADPLATFORM`\n\n### .checkCycle(target, ancestors, cb)\nCheck for cyclic dependencies.\n\nError type: `ECYCLE`\n\n### .checkGit(folder, cb)\nCheck if a folder is a .git folder.\n\nError type: `EISGIT`\n", - "readmeFilename": "README.md", - "gitHead": "056ade7c5e3a6b3c720ca6a743c1b99a0705d29e", - "_id": "npm-install-checks@1.0.2", - "_shasum": "ebba769753fc8551308333ef411920743a6809f6", - "_from": "npm-install-checks@latest" + "gitHead": "05944f95860b0ac3769667551c4b7aa3d3fcdc32", + "_id": "npm-install-checks@1.0.4", + "_shasum": "9757c6f9d4d493c2489465da6d07a8ed416d44c8", + "_from": "npm-install-checks@>=1.0.2-0 <1.1.0-0", + "_npmVersion": "2.0.0-beta.3", + "_npmUser": { + "name": "isaacs", + "email": "i@izs.me" + }, + "maintainers": [ + { + "name": "robertkowalski", + "email": "rok@kowalski.gd" + }, + { + "name": "isaacs", + "email": "i@izs.me" + } + ], + "dist": { + "shasum": "9757c6f9d4d493c2489465da6d07a8ed416d44c8", + "tarball": "http://registry.npmjs.org/npm-install-checks/-/npm-install-checks-1.0.4.tgz" + }, + "directories": {}, + "_resolved": "https://registry.npmjs.org/npm-install-checks/-/npm-install-checks-1.0.4.tgz" } diff --git a/deps/npm/node_modules/npm-package-arg/LICENSE b/deps/npm/node_modules/npm-package-arg/LICENSE new file mode 100644 index 00000000000..05eeeb88c2e --- /dev/null +++ b/deps/npm/node_modules/npm-package-arg/LICENSE @@ -0,0 +1,15 @@ +The ISC License + +Copyright (c) Isaac Z. Schlueter + +Permission to use, copy, modify, and/or distribute this software for any +purpose with or without fee is hereby granted, provided that the above +copyright notice and this permission notice appear in all copies. + +THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES +WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF +MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR +ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES +WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN +ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF OR +IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE. 
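The next diff vendors the new npm-package-arg module. As its README below explains, `npa()` classifies anything that can be an argument to `npm install`. A minimal sketch of typical results, assuming npm-package-arg@2.1.3 as added here (the expectations mirror test/basic.js further below):

```javascript
var npa = require("npm-package-arg")

npa("foo@1.2")       // { name: "foo", type: "range",  spec: ">=1.2.0 <1.3.0" }
npa("foo@latest")    // { name: "foo", type: "tag",    spec: "latest" }
npa("user/foo-js")   // { name: null,  type: "github", spec: "user/foo-js" }
npa("/path/to/foo")  // { name: null,  type: "local",  spec: "/path/to/foo" }
```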
diff --git a/deps/npm/node_modules/npm-package-arg/README.md b/deps/npm/node_modules/npm-package-arg/README.md new file mode 100644 index 00000000000..602277a378d --- /dev/null +++ b/deps/npm/node_modules/npm-package-arg/README.md @@ -0,0 +1,55 @@ +# npm-package-arg + +Parse the things that can be arguments to `npm install` + +Takes an argument like `foo@1.2`, or `foo@user/foo`, or +`http://x.com/foo.tgz`, or `git+https://github.com/user/foo`, and +figures out what type of thing it is. + +## USAGE + +```javascript +var assert = require("assert") +var npa = require("npm-package-arg") + +// Pass in the descriptor, and it'll return an object +var parsed = npa("foo@1.2") + +// Returns an object like: +// { +// name: "foo", // The bit in front of the @ +// type: "range", // the type of descriptor this is +// spec: "1.2" // the specifier for this descriptor +// } + +// Completely unreasonable invalid garbage throws an error +// Make sure you wrap this in a try/catch if you have not +// already sanitized the inputs! +assert.throws(function() { + npa("this is not \0 a valid package name or url") +}) +``` + +For more examples, see the test file. + +## Result Objects + +The objects that are returned by npm-package-arg contain the following +fields: + +* `name` - If known, the `name` field expected in the resulting pkg. +* `type` - One of the following strings: + * `git` - A git repo + * `github` - A github shorthand, like `user/project` + * `tag` - A tagged version, like `"foo@latest"` + * `version` - A specific version number, like `"foo@1.2.3"` + * `range` - A version range, like `"foo@2.x"` + * `local` - A local file or folder path + * `remote` - An http url (presumably to a tgz) +* `spec` - The "thing". URL, the range, git repo, etc. +* `raw` - The original un-modified string that was provided. +* `rawSpec` - The part after the `name@...`, as it was originally + provided. +* `scope` - If a name is something like `@org/module` then the `scope` + field will be set to `org`. If it doesn't have a scoped name, then + scope is `null`. diff --git a/deps/npm/node_modules/npm-package-arg/npa.js b/deps/npm/node_modules/npm-package-arg/npa.js new file mode 100644 index 00000000000..8333c75f442 --- /dev/null +++ b/deps/npm/node_modules/npm-package-arg/npa.js @@ -0,0 +1,187 @@ +var url = require("url") +var assert = require("assert") +var util = require("util") +var semver = require("semver") +var path = require("path") + +module.exports = npa + +var isWindows = process.platform === "win32" || global.FAKE_WINDOWS +var slashRe = isWindows ? /\\|\// : /\// + +var parseName = /^(?:@([^\/]+?)\/)?([^\/]+?)$/ +var nameAt = /^(@([^\/]+?)\/)?([^\/]+?)@/ +var debug = util.debuglog ? util.debuglog("npa") + : /\bnpa\b/i.test(process.env.NODE_DEBUG || "") + ? function () { + console.error("NPA: " + util.format.apply(util, arguments).split("\n").join("\nNPA: ")) + } : function () {} + +function validName (name) { + if (!name) { + debug("not a name %j", name) + return false + } + var n = name.trim() + if (!n || n.charAt(0) === "." + || !n.match(/^[a-zA-Z0-9]/) + || n.match(/[\/\(\)&\?#\|<>@:%\s\\\*'"!~`]/) + || n.toLowerCase() === "node_modules" + || n !== encodeURIComponent(n) + || n.toLowerCase() === "favicon.ico") { + debug("not a valid name %j", name) + return false + } + return n +} + +function npa (arg) { + assert.equal(typeof arg, "string") + arg = arg.trim() + + var res = new Result + res.raw = arg + res.scope = null + + // See if it's something like foo@... 
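// An illustrative trace (not part of the upstream patch), assuming the
// regexes defined above: for "@foo/bar@1.x", nameAt captures
// ["@foo/bar@", "@foo/", "foo", "bar"], so res.name becomes "@foo/bar",
// res.scope becomes "@foo", and the remaining "1.x" is classified below
// as a semver range.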
+ var nameparse = arg.match(nameAt) + debug("nameparse", nameparse) + if (nameparse && validName(nameparse[3]) && + (!nameparse[2] || validName(nameparse[2]))) { + res.name = (nameparse[1] || "") + nameparse[3] + if (nameparse[2]) + res.scope = "@" + nameparse[2] + arg = arg.substr(nameparse[0].length) + } else { + res.name = null + } + + res.rawSpec = arg + res.spec = arg + + var urlparse = url.parse(arg) + debug("urlparse", urlparse) + + // windows paths look like urls + // don't be fooled! + if (isWindows && urlparse && urlparse.protocol && + urlparse.protocol.match(/^[a-zA-Z]:$/)) { + debug("windows url-ish local path", urlparse) + urlparse = {} + } + + if (urlparse.protocol) { + return parseUrl(res, arg, urlparse) + } + + // parse git stuff + // parse tag/range/local/remote + + if (maybeGitHubShorthand(arg)) { + res.type = "github" + res.spec = arg + return res + } + + // at this point, it's not a url, and not github + // If it's a valid name, and doesn't already have a name, then assume + // $name@"" range + // + // if it's got / chars in it, then assume that it's local. + + if (res.name) { + var version = semver.valid(arg, true) + var range = semver.validRange(arg, true) + // foo@... + if (version) { + res.spec = version + res.type = "version" + } else if (range) { + res.spec = range + res.type = "range" + } else if (slashRe.test(arg)) { + parseLocal(res, arg) + } else { + res.type = "tag" + res.spec = arg + } + } else { + var p = arg.match(parseName) + if (p && validName(p[2]) && + (!p[1] || validName(p[1]))) { + res.type = "range" + res.spec = "*" + res.rawSpec = "" + res.name = arg + if (p[1]) + res.scope = "@" + p[1] + } else { + parseLocal(res, arg) + } + } + + return res +} + +function parseLocal (res, arg) { + // turns out nearly every character is allowed in fs paths + if (/\0/.test(arg)) { + throw new Error("Invalid Path: " + JSON.stringify(arg)) + } + res.type = "local" + res.spec = path.resolve(arg) +} + +function maybeGitHubShorthand (arg) { + // Note: This does not fully test the git ref format. + // See https://www.kernel.org/pub/software/scm/git/docs/git-check-ref-format.html + // + // The only way to do this properly would be to shell out to + // git-check-ref-format, and as this is a fast sync function, + // we don't want to do that. Just let git fail if it turns + // out that the commit-ish is invalid. + // GH usernames cannot start with . 
or - + return /^[^@%\/\s\.-][^@%\/\s]*\/[^@\s\/%]+(?:#.*)?$/.test(arg) +} + +function parseUrl (res, arg, urlparse) { + // check the protocol, and then see if it's git or not + switch (urlparse.protocol) { + case "git:": + case "git+http:": + case "git+https:": + case "git+rsync:": + case "git+ftp:": + case "git+ssh:": + case "git+file:": + res.type = 'git' + res.spec = arg.replace(/^git\+/, '') + break + + case 'http:': + case 'https:': + res.type = 'remote' + res.spec = arg + break + + case 'file:': + res.type = 'local' + res.spec = urlparse.pathname + break; + + default: + throw new Error('Unsupported URL Type: ' + arg) + break + } + + return res +} + + +function Result () { + if (!(this instanceof Result)) return new Result +} +Result.prototype.name = null +Result.prototype.type = null +Result.prototype.spec = null +Result.prototype.raw = null diff --git a/deps/npm/node_modules/npm-package-arg/package.json b/deps/npm/node_modules/npm-package-arg/package.json new file mode 100644 index 00000000000..babbd7312a0 --- /dev/null +++ b/deps/npm/node_modules/npm-package-arg/package.json @@ -0,0 +1,38 @@ +{ + "name": "npm-package-arg", + "version": "2.1.3", + "description": "Parse the things that can be arguments to `npm install`", + "main": "npa.js", + "directories": { + "test": "test" + }, + "dependencies": { + "semver": "4" + }, + "devDependencies": { + "tap": "^0.4.9" + }, + "scripts": { + "test": "tap test/*.js" + }, + "repository": { + "type": "git", + "url": "https://github.com/npm/npm-package-arg" + }, + "author": { + "name": "Isaac Z. Schlueter", + "email": "i@izs.me", + "url": "http://blog.izs.me/" + }, + "license": "ISC", + "bugs": { + "url": "https://github.com/npm/npm-package-arg/issues" + }, + "homepage": "https://github.com/npm/npm-package-arg", + "readme": "# npm-package-arg\n\nParse the things that can be arguments to `npm install`\n\nTakes an argument like `foo@1.2`, or `foo@user/foo`, or\n`http://x.com/foo.tgz`, or `git+https://github.com/user/foo`, and\nfigures out what type of thing it is.\n\n## USAGE\n\n```javascript\nvar assert = require(\"assert\")\nvar npa = require(\"npm-package-arg\")\n\n// Pass in the descriptor, and it'll return an object\nvar parsed = npa(\"foo@1.2\")\n\n// Returns an object like:\n// {\n// name: \"foo\", // The bit in front of the @\n// type: \"range\", // the type of descriptor this is\n// spec: \"1.2\" // the specifier for this descriptor\n// }\n\n// Completely unreasonable invalid garbage throws an error\n// Make sure you wrap this in a try/catch if you have not\n// already sanitized the inputs!\nassert.throws(function() {\n npa(\"this is not \\0 a valid package name or url\")\n})\n```\n\nFor more examples, see the test file.\n\n## Result Objects\n\nThe objects that are returned by npm-package-arg contain the following\nfields:\n\n* `name` - If known, the `name` field expected in the resulting pkg.\n* `type` - One of the following strings:\n * `git` - A git repo\n * `github` - A github shorthand, like `user/project`\n * `tag` - A tagged version, like `\"foo@latest\"`\n * `version` - A specific version number, like `\"foo@1.2.3\"`\n * `range` - A version range, like `\"foo@2.x\"`\n * `local` - A local file or folder path\n * `remote` - An http url (presumably to a tgz)\n* `spec` - The \"thing\". 
URL, the range, git repo, etc.\n* `raw` - The original un-modified string that was provided.\n* `rawSpec` - The part after the `name@...`, as it was originally\n provided.\n* `scope` - If a name is something like `@org/module` then the `scope`\n field will be set to `org`. If it doesn't have a scoped name, then\n scope is `null`.\n", + "readmeFilename": "README.md", + "gitHead": "9aaabc2aae746371a05f54cdb57a5f9ada003d8f", + "_id": "npm-package-arg@2.1.3", + "_shasum": "dfba34bd82dd327c10cb43a65c8db6ef0b812bf7", + "_from": "npm-package-arg@~2.1.3" +} diff --git a/deps/npm/node_modules/npm-package-arg/test/basic.js b/deps/npm/node_modules/npm-package-arg/test/basic.js new file mode 100644 index 00000000000..98206db205c --- /dev/null +++ b/deps/npm/node_modules/npm-package-arg/test/basic.js @@ -0,0 +1,203 @@ +var npa = require("../npa.js") +var path = require("path") + +require("tap").test("basic", function (t) { + t.setMaxListeners(999) + + var tests = { + "foo@1.2": { + name: "foo", + type: "range", + spec: ">=1.2.0 <1.3.0", + raw: "foo@1.2", + rawSpec: "1.2" + }, + + "@foo/bar": { + raw: "@foo/bar", + name: "@foo/bar", + scope: "@foo", + rawSpec: "", + spec: "*", + type: "range" + }, + + "@foo/bar@": { + raw: "@foo/bar@", + name: "@foo/bar", + scope: "@foo", + rawSpec: "", + spec: "*", + type: "range" + }, + + "@foo/bar@baz": { + raw: "@foo/bar@baz", + name: "@foo/bar", + scope: "@foo", + rawSpec: "baz", + spec: "baz", + type: "tag" + }, + + "@f fo o al/ a d s ;f ": { + raw: "@f fo o al/ a d s ;f", + name: null, + rawSpec: "@f fo o al/ a d s ;f", + spec: path.resolve("@f fo o al/ a d s ;f"), + type: "local" + }, + + "foo@1.2.3": { + name: "foo", + type: "version", + spec: "1.2.3", + raw: "foo@1.2.3" + }, + + "foo@=v1.2.3": { + name: "foo", + type: "version", + spec: "1.2.3", + raw: "foo@=v1.2.3", + rawSpec: "=v1.2.3" + }, + + "git+ssh://git@github.com/user/foo#1.2.3": { + name: null, + type: "git", + spec: "ssh://git@github.com/user/foo#1.2.3", + raw: "git+ssh://git@github.com/user/foo#1.2.3" + }, + + "git+file://path/to/repo#1.2.3": { + name: null, + type: "git", + spec: "file://path/to/repo#1.2.3", + raw: "git+file://path/to/repo#1.2.3" + }, + + "git://github.com/user/foo": { + name: null, + type: "git", + spec: "git://github.com/user/foo", + raw: "git://github.com/user/foo" + }, + + "@foo/bar@git+ssh://github.com/user/foo": { + name: "@foo/bar", + scope: "@foo", + spec: "ssh://github.com/user/foo", + rawSpec: "git+ssh://github.com/user/foo", + raw: "@foo/bar@git+ssh://github.com/user/foo" + }, + + "/path/to/foo": { + name: null, + type: "local", + spec: "/path/to/foo", + raw: "/path/to/foo" + }, + + "file:path/to/foo": { + name: null, + type: "local", + spec: "path/to/foo", + raw: "file:path/to/foo" + }, + + "file:~/path/to/foo": { + name: null, + type: "local", + spec: "~/path/to/foo", + raw: "file:~/path/to/foo" + }, + + "file:../path/to/foo": { + name: null, + type: "local", + spec: "../path/to/foo", + raw: "file:../path/to/foo" + }, + + "file:///path/to/foo": { + name: null, + type: "local", + spec: "/path/to/foo", + raw: "file:///path/to/foo" + }, + + "https://server.com/foo.tgz": { + name: null, + type: "remote", + spec: "https://server.com/foo.tgz", + raw: "https://server.com/foo.tgz" + }, + + "user/foo-js": { + name: null, + type: "github", + spec: "user/foo-js", + raw: "user/foo-js" + }, + + "user/foo-js#bar/baz": { + name: null, + type: "github", + spec: "user/foo-js#bar/baz", + raw: "user/foo-js#bar/baz" + }, + + "user..blerg--/..foo-js# . . . . . some . 
tags / / /": { + name: null, + type: "github", + spec: "user..blerg--/..foo-js# . . . . . some . tags / / /", + raw: "user..blerg--/..foo-js# . . . . . some . tags / / /" + }, + + "user/foo-js#bar/baz/bin": { + name: null, + type: "github", + spec: "user/foo-js#bar/baz/bin", + raw: "user/foo-js#bar/baz/bin" + }, + + "foo@user/foo-js": { + name: "foo", + type: "github", + spec: "user/foo-js", + raw: "foo@user/foo-js" + }, + + "foo@latest": { + name: "foo", + type: "tag", + spec: "latest", + raw: "foo@latest" + }, + + "foo": { + name: "foo", + type: "range", + spec: "*", + raw: "foo" + } + } + + Object.keys(tests).forEach(function (arg) { + var res = npa(arg) + t.type(res, "Result") + t.has(res, tests[arg]) + }) + + // Completely unreasonable invalid garbage throws an error + t.throws(function() { + npa("this is not a \0 valid package name or url") + }) + + t.throws(function() { + npa("gopher://yea right") + }, "Unsupported URL Type: gopher://yea right") + + t.end() +}) diff --git a/deps/npm/node_modules/npm-package-arg/test/windows.js b/deps/npm/node_modules/npm-package-arg/test/windows.js new file mode 100644 index 00000000000..51629fe075e --- /dev/null +++ b/deps/npm/node_modules/npm-package-arg/test/windows.js @@ -0,0 +1,41 @@ +global.FAKE_WINDOWS = true + +var npa = require("../npa.js") +var test = require("tap").test +var path = require("path") + +var cases = { + "C:\\x\\y\\z": { + raw: 'C:\\x\\y\\z', + scope: null, + name: null, + rawSpec: 'C:\\x\\y\\z', + spec: path.resolve('C:\\x\\y\\z'), + type: 'local' + }, + "foo@C:\\x\\y\\z": { + raw: 'foo@C:\\x\\y\\z', + scope: null, + name: 'foo', + rawSpec: 'C:\\x\\y\\z', + spec: path.resolve('C:\\x\\y\\z'), + type: 'local' + }, + "foo@/foo/bar/baz": { + raw: 'foo@/foo/bar/baz', + scope: null, + name: 'foo', + rawSpec: '/foo/bar/baz', + spec: path.resolve('/foo/bar/baz'), + type: 'local' + } +} + +test("parse a windows path", function (t) { + Object.keys(cases).forEach(function (c) { + var expect = cases[c] + var actual = npa(c) + t.same(actual, expect, c) + }) + t.end() +}) diff --git a/deps/npm/node_modules/npm-registry-client/.npmignore b/deps/npm/node_modules/npm-registry-client/.npmignore index 187ab679536..bea2db6203a 100644 --- a/deps/npm/node_modules/npm-registry-client/.npmignore +++ b/deps/npm/node_modules/npm-registry-client/.npmignore @@ -1,3 +1,5 @@ test/fixtures/cache node_modules npm-debug.log +.eslintrc +.jshintrc diff --git a/deps/npm/node_modules/npm-registry-client/lib/adduser.js b/deps/npm/node_modules/npm-registry-client/lib/adduser.js index d1fcac8e918..e449c258089 100644 --- a/deps/npm/node_modules/npm-registry-client/lib/adduser.js +++ b/deps/npm/node_modules/npm-registry-client/lib/adduser.js @@ -29,15 +29,13 @@ function adduser (base, username, password, email, cb) { // pluck off any other username/password/token. it needs to be the // same as the user we're becoming now. replace them on error. - var pre = { username: this.conf.get('username') - , password: this.conf.get('_password') - , auth: this.conf.get('_auth') + var c = this.conf.getCredentialsByURI(base) + var pre = { username: c.username + , password: c.password + , email: c.email , token: this.conf.get('_token') } this.conf.del('_token') - this.conf.del('username') - this.conf.del('_auth') - this.conf.del('_password') if (this.couchLogin) { this.couchLogin.token = null } @@ -61,13 +59,15 @@ function adduser (base, username, password, email, cb) { , function (error, data, json, response) { // if it worked, then we just created a new user, and all is well. 
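// (couch answers the PUT that creates a brand-new user document with a 201, which is what done() below checks for)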
// but if we're updating a current record, then it'll 409 first - if (error && !this.conf.get('_auth')) { + var c = this.conf.getCredentialsByURI(base) + if (error && !c.auth) { // must be trying to re-auth on a new machine. // use this info as auth - var b = new Buffer(username + ":" + password) - this.conf.set('_auth', b.toString("base64")) - this.conf.set('username', username) - this.conf.set('_password', password) + this.conf.setCredentialsByURI(base, { + username : username, + password : password, + email : email + }) } if (!error || !response || response.statusCode !== 409) { @@ -94,39 +94,43 @@ function adduser (base, username, password, email, cb) { , cb) }.bind(this)) }.bind(this)) -} -function done (cb, pre) { - return function (error, data, json, response) { - if (!error && (!response || response.statusCode === 201)) { - return cb(error, data, json, response) - } - - // there was some kind of error, re-instate previous auth/token/etc. - this.conf.set('_token', pre.token) - if (this.couchLogin) { - this.couchLogin.token = pre.token - if (this.couchLogin.tokenSet) { - this.couchLogin.tokenSet(pre.token) + function done (cb, pre) { + return function (error, data, json, response) { + if (!error && (!response || response.statusCode === 201)) { + return cb(error, data, json, response) + } + + // there was some kind of error, re-instate previous auth/token/etc. + this.conf.set('_token', pre.token) + if (this.couchLogin) { + this.couchLogin.token = pre.token + if (this.couchLogin.tokenSet) { + this.couchLogin.tokenSet(pre.token) + } + } + this.conf.setCredentialsByURI(base, { + username : pre.username, + password : pre.password, + email : pre.email + }) + + this.log.verbose("adduser", "back", [error, data, json]) + if (!error) { + error = new Error( + (response && response.statusCode || "") + " " + + "Could not create user\n" + JSON.stringify(data) + ) } - } - this.conf.set('username', pre.username) - this.conf.set('_password', pre.password) - this.conf.set('_auth', pre.auth) - - this.log.verbose("adduser", "back", [error, data, json]) - if (!error) { - error = new Error( (response && response.statusCode || "") + " "+ - "Could not create user\n"+JSON.stringify(data)) - } - if (response - && (response.statusCode === 401 || response.statusCode === 403)) { - this.log.warn("adduser", "Incorrect username or password\n" - +"You can reset your account by visiting:\n" - +"\n" - +" https://npmjs.org/forgot\n") - } - - return cb(error) - }.bind(this) + + if (response && (response.statusCode === 401 || response.statusCode === 403)) { + this.log.warn("adduser", "Incorrect username or password\n" + + "You can reset your account by visiting:\n" + + "\n" + + " https://npmjs.org/forgot\n") + } + + return cb(error) + }.bind(this) + } } diff --git a/deps/npm/node_modules/npm-registry-client/lib/attempt.js b/deps/npm/node_modules/npm-registry-client/lib/attempt.js new file mode 100644 index 00000000000..0794fdc3bff --- /dev/null +++ b/deps/npm/node_modules/npm-registry-client/lib/attempt.js @@ -0,0 +1,22 @@ +var retry = require("retry") + +module.exports = attempt + +function attempt(cb) { + // Tuned to spread 3 attempts over about a minute. + // See formula at . 
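+ // retry.operation() schedules the re-attempts with exponential backoff: each failed try waits roughly "factor" times longer than the last, bounded by the minTimeout/maxTimeout configured below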
+ var operation = retry.operation({ + retries : this.conf.get("fetch-retries") || 2, + factor : this.conf.get("fetch-retry-factor"), + minTimeout : this.conf.get("fetch-retry-mintimeout") || 10000, + maxTimeout : this.conf.get("fetch-retry-maxtimeout") || 60000 + }) + + var client = this + operation.attempt(function (currentAttempt) { + client.log.info("attempt", "registry request try #"+currentAttempt+ + " at "+(new Date()).toLocaleTimeString()) + + cb(operation) + }) +} diff --git a/deps/npm/node_modules/npm-registry-client/lib/authify.js b/deps/npm/node_modules/npm-registry-client/lib/authify.js new file mode 100644 index 00000000000..2b0c7a2a33a --- /dev/null +++ b/deps/npm/node_modules/npm-registry-client/lib/authify.js @@ -0,0 +1,27 @@ +var url = require("url") + +module.exports = authify + +function authify (authed, parsed, headers) { + var c = this.conf.getCredentialsByURI(url.format(parsed)) + + if (c && c.token) { + this.log.verbose("request", "using bearer token for auth") + headers.authorization = "Bearer " + c.token + + return null + } + + if (authed) { + if (c && c.username && c.password) { + var username = encodeURIComponent(c.username) + var password = encodeURIComponent(c.password) + parsed.auth = username + ":" + password + } + else { + return new Error( + "This request requires auth credentials. Run `npm login` and repeat the request." + ) + } + } +} diff --git a/deps/npm/node_modules/npm-registry-client/lib/deprecate.js b/deps/npm/node_modules/npm-registry-client/lib/deprecate.js index 078968dd327..f5fd597047b 100644 --- a/deps/npm/node_modules/npm-registry-client/lib/deprecate.js +++ b/deps/npm/node_modules/npm-registry-client/lib/deprecate.js @@ -4,7 +4,8 @@ var url = require("url") var semver = require("semver") function deprecate (uri, ver, message, cb) { - if (!this.conf.get('username')) { + var c = this.conf.getCredentialsByURI(uri) + if (!(c.token || c.auth)) { return cb(new Error("Must be logged in to deprecate a package")) } diff --git a/deps/npm/node_modules/npm-registry-client/lib/fetch.js b/deps/npm/node_modules/npm-registry-client/lib/fetch.js new file mode 100644 index 00000000000..75c52de3ae9 --- /dev/null +++ b/deps/npm/node_modules/npm-registry-client/lib/fetch.js @@ -0,0 +1,89 @@ +var assert = require("assert") + , url = require("url") + +var request = require("request") + , once = require("once") + +module.exports = fetch + +function fetch (uri, headers, cb) { + assert(uri, "must pass resource to fetch") + assert(cb, "must pass callback") + + if (!headers) headers = {} + + cb = once(cb) + + var client = this + this.attempt(function (operation) { + makeRequest.call(client, uri, headers, function (er, req) { + if (er) return cb(er) + + req.on("error", function (er) { + if (operation.retry(er)) { + client.log.info("retry", "will retry, error on last attempt: " + er) + } + }) + + req.on("response", function (res) { + client.log.http("fetch", "" + res.statusCode, uri) + + var er + var statusCode = res && res.statusCode + if (statusCode === 200) { + // Work around bug in node v0.10.0 where the CryptoStream + // gets stuck and never starts reading again. + res.resume() + if (process.version === "v0.10.0") unstick(res) + + return cb(null, res) + } + // Only retry on 408, 5xx or no `response`. 
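+ // any other 4xx (e.g. a 404 for a missing tarball) is a permanent failure and falls through to the generic "fetch failed" error below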
+ else if (statusCode === 408) { + er = new Error("request timed out") + } + else if (statusCode >= 500) { + er = new Error("server error " + statusCode) + } + + if (er && operation.retry(er)) { + client.log.info("retry", "will retry, error on last attempt: " + er) + } + else { + cb(new Error("fetch failed with status code " + statusCode)) + } + }) + }) + }) +} + +function unstick(response) { + response.resume = function (orig) { return function() { + var ret = orig.apply(response, arguments) + if (response.socket.encrypted) response.socket.encrypted.read(0) + return ret + }}(response.resume) +} + +function makeRequest (remote, headers, cb) { + var parsed = url.parse(remote) + this.log.http("fetch", "GET", parsed.href) + + var er = this.authify( + this.conf.getCredentialsByURI(remote).alwaysAuth, + parsed, + headers + ) + if (er) return cb(er) + + var opts = this.initialize( + parsed, + "GET", + "application/x-tar", + headers + ) + // always want to follow redirects for fetch + opts.followRedirect = true + + cb(null, request(opts)) +} diff --git a/deps/npm/node_modules/npm-registry-client/lib/initialize.js b/deps/npm/node_modules/npm-registry-client/lib/initialize.js new file mode 100644 index 00000000000..b6e89ffe957 --- /dev/null +++ b/deps/npm/node_modules/npm-registry-client/lib/initialize.js @@ -0,0 +1,41 @@ +var crypto = require("crypto") + +var pkg = require("../package.json") + +module.exports = initialize + +function initialize (uri, method, accept, headers) { + if (!this.sessionToken) { + this.sessionToken = crypto.randomBytes(8).toString("hex") + this.log.verbose("request id", this.sessionToken) + } + + var strict = this.conf.get("strict-ssl") + if (strict === undefined) strict = true + + var p = this.conf.get("proxy") + var sp = this.conf.get("https-proxy") || p + + var opts = { + url : uri, + method : method, + headers : headers, + proxy : uri.protocol === "https:" ? 
sp : p, + localAddress : this.conf.get("local-address"), + strictSSL : strict, + cert : this.conf.get("cert"), + key : this.conf.get("key"), + ca : this.conf.get("ca") + } + + headers.version = this.version || pkg.version + headers.accept = accept + + if (this.refer) headers.referer = this.refer + + headers["npm-session"] = this.sessionToken + headers["user-agent"] = this.conf.get("user-agent") || + "node/" + process.version + + return opts +} diff --git a/deps/npm/node_modules/npm-registry-client/lib/publish.js b/deps/npm/node_modules/npm-registry-client/lib/publish.js index 5504658d332..c3b2f3e1f2a 100644 --- a/deps/npm/node_modules/npm-registry-client/lib/publish.js +++ b/deps/npm/node_modules/npm-registry-client/lib/publish.js @@ -5,20 +5,26 @@ var url = require("url") , semver = require("semver") , crypto = require("crypto") , fs = require("fs") + , fixNameField = require("normalize-package-data/lib/fixer.js").fixNameField -function publish (uri, data, tarball, cb) { - var email = this.conf.get('email') - var auth = this.conf.get('_auth') - var username = this.conf.get('username') +function escaped(name) { + return name.replace("/", "%2f") +} - if (!email || !auth || !username) { +function publish (uri, data, tarball, cb) { + var c = this.conf.getCredentialsByURI(uri) + if (!(c.token || (c.auth && c.username && c.email))) { var er = new Error("auth and email required for publishing") er.code = 'ENEEDAUTH' return cb(er) } - if (data.name !== encodeURIComponent(data.name)) - return cb(new Error('invalid name: must be url-safe')) + try { + fixNameField(data, true) + } + catch (er) { + return cb(er) + } var ver = semver.clean(data.version) if (!ver) @@ -30,12 +36,12 @@ function publish (uri, data, tarball, cb) { if (er) return cb(er) fs.readFile(tarball, function(er, tarbuffer) { if (er) return cb(er) - putFirst.call(self, uri, data, tarbuffer, s, username, email, cb) + putFirst.call(self, uri, data, tarbuffer, s, c, cb) }) }) } -function putFirst (registry, data, tarbuffer, stat, username, email, cb) { +function putFirst (registry, data, tarbuffer, stat, creds, cb) { // optimistically try to PUT all in one single atomic thing. // If 409, then GET and merge, try again. // If other error, then fail. @@ -47,15 +53,14 @@ function putFirst (registry, data, tarbuffer, stat, username, email, cb) { , "dist-tags" : {} , versions : {} , readme: data.readme || "" - , maintainers : - [ { name : username - , email : email - } - ] } + if (!creds.token) { + root.maintainers = [{name : creds.username, email : creds.email}] + data.maintainers = JSON.parse(JSON.stringify(root.maintainers)) + } + root.versions[ data.version ] = data - data.maintainers = JSON.parse(JSON.stringify(root.maintainers)) var tag = data.tag || this.conf.get('tag') || "latest" root["dist-tags"][tag] = data.version @@ -70,12 +75,12 @@ function putFirst (registry, data, tarbuffer, stat, username, email, cb) { root._attachments = {} root._attachments[ tbName ] = { - content_type: 'application/octet-stream', - data: tarbuffer.toString('base64'), - length: stat.size - }; + "content_type": "application/octet-stream", + "data": tarbuffer.toString("base64"), + "length": stat.size + } - var fixed = url.resolve(registry, data.name) + var fixed = url.resolve(registry, escaped(data.name)) this.request("PUT", fixed, { body : root }, function (er, parsed, json, res) { var r409 = "must supply latest _rev to update existing package" var r409b = "Document update conflict." 
@@ -94,8 +99,7 @@ function putFirst (registry, data, tarbuffer, stat, username, email, cb) { return cb(er, parsed, json, res) // let's see what versions are already published. - var getUrl = url.resolve(registry, data.name + "?write=true") - this.request("GET", getUrl, null, function (er, current) { + this.request("GET", fixed + "?write=true", null, function (er, current) { if (er) return cb(er) putNext.call(this, registry, data.version, root, current, cb) @@ -133,7 +137,7 @@ function putNext(registry, newVersion, root, current, cb) { // ignore these case 'maintainers': - break; + break // copy default: @@ -143,7 +147,8 @@ function putNext(registry, newVersion, root, current, cb) { var maint = JSON.parse(JSON.stringify(root.maintainers)) root.versions[newVersion].maintainers = maint - this.request("PUT", url.resolve(registry, root.name), { body : current }, cb) + var uri = url.resolve(registry, escaped(root.name)) + this.request("PUT", uri, { body : current }, cb) } function conflictError (pkgid, version) { diff --git a/deps/npm/node_modules/npm-registry-client/lib/request.js b/deps/npm/node_modules/npm-registry-client/lib/request.js index 7a770a6d22a..910fe013142 100644 --- a/deps/npm/node_modules/npm-registry-client/lib/request.js +++ b/deps/npm/node_modules/npm-registry-client/lib/request.js @@ -1,15 +1,13 @@ -module.exports = regRequest - -var url = require("url") +var assert = require("assert") + , url = require("url") , zlib = require("zlib") - , assert = require("assert") - , rm = require("rimraf") , Stream = require("stream").Stream + +var rm = require("rimraf") , request = require("request") - , retry = require("retry") - , crypto = require("crypto") - , pkg = require("../package.json") + , once = require("once") +module.exports = regRequest // npm: means // 1. https @@ -20,59 +18,44 @@ function regRequest (method, uri, options, cb_) { assert(cb_, "must pass callback") options = options || {} - var nofollow = (typeof options.follow === 'boolean' ? !options.follow : false) - var etag = options.etag - var what = options.body var parsed = url.parse(uri) - - var authThis = false - if (parsed.protocol === "npm") { - parsed.protocol = "https" - authThis = true - } - var where = parsed.pathname + var what = options.body + var follow = (typeof options.follow === "boolean" ? options.follow : true) + this.log.verbose("request", "on initialization, where is", where) + if (parsed.search) { where = where + parsed.search parsed.search = "" } parsed.pathname = "/" - this.log.verbose("request", "where is", where) - - var registry = url.format(parsed) - this.log.verbose("request", "registry", registry) - - if (!this.sessionToken) { - this.sessionToken = crypto.randomBytes(8).toString("hex") - this.log.verbose("request id", this.sessionToken) - } + this.log.verbose("request", "after pass 1, where is", where) // Since there are multiple places where an error could occur, // don't let the cb be called more than once. 
- var errState = null - function cb (er) { - if (errState) return - if (er) errState = er - cb_.apply(null, arguments) - } + var cb = once(cb_) if (where.match(/^\/?favicon.ico/)) { return cb(new Error("favicon.ico isn't a package, it's a picture.")) } var adduserChange = /^\/?-\/user\/org\.couchdb\.user:([^\/]+)\/-rev/ - , adduserNew = /^\/?-\/user\/org\.couchdb\.user:([^\/]+)/ - , nu = where.match(adduserNew) - , uc = where.match(adduserChange) - , alwaysAuth = this.conf.get('always-auth') - , isDel = method === "DELETE" - , isWrite = what || isDel - , authRequired = (authThis || alwaysAuth || isWrite) && !nu || uc || isDel + , isUserChange = where.match(adduserChange) + , adduserNew = /^\/?-\/user\/org\.couchdb\.user:([^\/]+)$/ + , isNewUser = where.match(adduserNew) + , registry = url.format(parsed) + , alwaysAuth = this.conf.getCredentialsByURI(registry).alwaysAuth + , isDelete = method === "DELETE" + , isWrite = what || isDelete + + if (isUserChange && !isWrite) { + return cb(new Error("trying to change user document without writing(?!)")) + } // resolve to a full url on the registry if (!where.match(/^https?:\/\//)) { - this.log.verbose("url raw", where) + this.log.verbose("request", "url raw", where) var q = where.split("?") where = q.shift() @@ -84,56 +67,42 @@ function regRequest (method, uri, options, cb_) { if (p.match(/^org.couchdb.user/)) { return p.replace(/\//g, encodeURIComponent("/")) } - return encodeURIComponent(p) + return p }).join("/") if (q) where += "?" + q - this.log.verbose("url resolving", [registry, where]) + + this.log.verbose("request", "resolving registry", [registry, where]) where = url.resolve(registry, where) - this.log.verbose("url resolved", where) + this.log.verbose("request", "after pass 2, where is", where) } - this.log.verbose("request", "where is", where) - - var remote = url.parse(where) - , auth = this.conf.get('_auth') - if (authRequired && !auth) { - var un = this.conf.get('username') - var pw = this.conf.get('_password') - if (un && pw) - auth = new Buffer(un + ':' + pw).toString('base64') + var authed + // new users can *not* use auth, because they don't *have* auth yet + if (isNewUser) { + this.log.verbose("request", "new user, so can't send auth") + authed = false } - - if (authRequired && !auth) { - return cb(new Error( - "This request requires auth credentials. Run `npm login` and repeat the request.")) + else if (alwaysAuth) { + this.log.verbose("request", "always-auth set; sending authorization") + authed = true } - - if (auth && authRequired) { - // Escape any weird characters that might be in the auth string - // TODO(isaacs) Clean up this awful back and forth mess. - var remoteAuth = new Buffer(auth, "base64").toString("utf8") - remoteAuth = encodeURIComponent(remoteAuth).replace(/%3A/, ":") - remote.auth = remoteAuth + else if (isWrite) { + this.log.verbose("request", "sending authorization for write operation") + authed = true + } + else { + // most of the time we don't want to auth + this.log.verbose("request", "no auth needed") + authed = false } - - // Tuned to spread 3 attempts over about a minute. - // See formula at . 
- var operation = retry.operation({ - retries: this.conf.get('fetch-retries') || 2, - factor: this.conf.get('fetch-retry-factor'), - minTimeout: this.conf.get('fetch-retry-mintimeout') || 10000, - maxTimeout: this.conf.get('fetch-retry-maxtimeout') || 60000 - }) var self = this - operation.attempt(function (currentAttempt) { - self.log.info("trying", "registry request attempt " + currentAttempt - + " at " + (new Date()).toLocaleTimeString()) - makeRequest.call(self, method, remote, where, what, etag, nofollow + this.attempt(function (operation) { + makeRequest.call(self, method, where, what, options.etag, follow, authed , function (er, parsed, raw, response) { if (!er || (er.message && er.message.match(/^SSL Error/))) { if (er) - er.code = 'ESSL' + er.code = "ESSL" return cb(er, parsed, raw, response) } @@ -145,61 +114,47 @@ function regRequest (method, uri, options, cb_) { var statusRetry = !statusCode || timeout || serverError if (er && statusRetry && operation.retry(er)) { self.log.info("retry", "will retry, error on last attempt: " + er) - return + return undefined } if (response) { - this.log.verbose("headers", response.headers) + self.log.verbose("headers", response.headers) if (response.headers["npm-notice"]) { - this.log.warn("notice", response.headers["npm-notice"]) + self.log.warn("notice", response.headers["npm-notice"]) } } cb.apply(null, arguments) - }.bind(this)) - }.bind(this)) + }) + }) } -function makeRequest (method, remote, where, what, etag, nofollow, cb_) { - var cbCalled = false - function cb () { - if (cbCalled) return - cbCalled = true - cb_.apply(null, arguments) - } +function makeRequest (method, where, what, etag, follow, authed, cb_) { + var cb = once(cb_) - var strict = this.conf.get('strict-ssl') - if (strict === undefined) strict = true - var opts = { url: remote - , method: method - , encoding: null // tell request let body be Buffer instance - , ca: this.conf.get('ca') - , localAddress: this.conf.get('local-address') - , cert: this.conf.get('cert') - , key: this.conf.get('key') - , strictSSL: strict } - , headers = opts.headers = {} - if (etag) { - this.log.verbose("etag", etag) - headers[method === "GET" ? "if-none-match" : "if-match"] = etag - } + var parsed = url.parse(where) + var headers = {} - headers['npm-session'] = this.sessionToken - headers.version = this.version || pkg.version + // metadata should be compressed + headers["accept-encoding"] = "gzip" - if (this.refer) { - headers.referer = this.refer - } + var er = this.authify(authed, parsed, headers) + if (er) return cb_(er) - headers.accept = "application/json" - headers['accept-encoding'] = 'gzip' + var opts = this.initialize( + parsed, + method, + "application/json", + headers + ) - headers["user-agent"] = this.conf.get('user-agent') || - 'node/' + process.version + opts.followRedirect = follow + opts.encoding = null // tell request let body be Buffer instance - var p = this.conf.get('proxy') - var sp = this.conf.get('https-proxy') || p - opts.proxy = remote.protocol === "https:" ? sp : p + if (etag) { + this.log.verbose("etag", etag) + headers[method === "GET" ? 
"if-none-match" : "if-match"] = etag + } - // figure out wth 'what' is + // figure out wth "what" is if (what) { if (Buffer.isBuffer(what) || typeof what === "string") { opts.body = what @@ -214,11 +169,7 @@ function makeRequest (method, remote, where, what, etag, nofollow, cb_) { } } - if (nofollow) { - opts.followRedirect = false - } - - this.log.http(method, remote.href || "/") + this.log.http("request", method, parsed.href || "/") var done = requestDone.call(this, method, where, cb) var req = request(opts, decodeResponseBody(done)) @@ -243,7 +194,7 @@ function decodeResponseBody(cb) { response.socket.destroy() } - if (response.headers['content-encoding'] !== 'gzip') return cb(er, response, data) + if (response.headers["content-encoding"] !== "gzip") return cb(er, response, data) zlib.gunzip(data, function (er, buf) { if (er) return cb(er, response, data) @@ -260,7 +211,7 @@ function requestDone (method, where, cb) { var urlObj = url.parse(where) if (urlObj.auth) - urlObj.auth = '***' + urlObj.auth = "***" this.log.http(response.statusCode, url.format(urlObj)) var parsed @@ -298,16 +249,21 @@ function requestDone (method, where, cb) { if (parsed && parsed.error && response.statusCode >= 400) { var w = url.parse(where).pathname.substr(1) var name - if (!w.match(/^-/) && parsed.error === "not_found") { + if (!w.match(/^-/)) { w = w.split("/") name = w[w.indexOf("_rewrite") + 1] - er = new Error("404 Not Found: "+name) - er.code = "E404" - er.pkgid = name + } + + if (name && parsed.error === "not_found") { + er = new Error("404 Not Found: " + name) } else { er = new Error( parsed.error + " " + (parsed.reason || "") + ": " + w) } + if (name) er.pkgid = name + er.statusCode = response.statusCode + er.code = "E" + er.statusCode + } else if (method !== "HEAD" && method !== "GET") { // invalidate cache // This is irrelevant for commands that do etag caching, but diff --git a/deps/npm/node_modules/npm-registry-client/lib/star.js b/deps/npm/node_modules/npm-registry-client/lib/star.js index c0590f1e2ee..97745851ea1 100644 --- a/deps/npm/node_modules/npm-registry-client/lib/star.js +++ b/deps/npm/node_modules/npm-registry-client/lib/star.js @@ -2,10 +2,15 @@ module.exports = star function star (uri, starred, cb) { - if (!this.conf.get('username')) return cb(new Error( - "Must be logged in to star/unstar packages")) + var c = this.conf.getCredentialsByURI(uri) + if (c.token) { + return cb(new Error("This operation is unsupported for token-based auth")) + } + else if (!c.auth) { + return cb(new Error("Must be logged in to star/unstar packages")) + } - this.request("GET", uri+"?write=true", null, function (er, fullData) { + this.request("GET", uri + "?write=true", null, function (er, fullData) { if (er) return cb(er) fullData = { _id: fullData._id @@ -14,10 +19,10 @@ function star (uri, starred, cb) { if (starred) { this.log.info("starring", fullData._id) - fullData.users[this.conf.get('username')] = true + fullData.users[c.username] = true this.log.verbose("starring", fullData) } else { - delete fullData.users[this.conf.get('username')] + delete fullData.users[c.username] this.log.info("unstarring", fullData._id) this.log.verbose("unstarring", fullData) } diff --git a/deps/npm/node_modules/npm-registry-client/lib/unpublish.js b/deps/npm/node_modules/npm-registry-client/lib/unpublish.js index 6a4ac8a1916..346d537fe6f 100644 --- a/deps/npm/node_modules/npm-registry-client/lib/unpublish.js +++ b/deps/npm/node_modules/npm-registry-client/lib/unpublish.js @@ -22,7 +22,7 @@ function unpublish (uri, ver, 
cb) { // remove all if no version specified if (!ver) { this.log.info("unpublish", "No version specified, removing all") - return this.request("DELETE", uri+'/-rev/'+data._rev, null, cb) + return this.request("DELETE", uri+"/-rev/"+data._rev, null, cb) } var versions = data.versions || {} @@ -72,7 +72,7 @@ function unpublish (uri, ver, cb) { function detacher (uri, data, dist, cb) { return function (er) { if (er) return cb(er) - this.get(url.resolve(uri, data.name), null, function (er, data) { + this.get(escape(uri, data.name), null, function (er, data) { if (er) return cb(er) var tb = url.parse(dist.tarball) @@ -96,10 +96,15 @@ function detach (uri, data, path, rev, cb) { this.log.info("detach", path) return this.request("DELETE", url.resolve(uri, path), null, cb) } - this.get(url.resolve(uri, data.name), null, function (er, data) { + this.get(escape(uri, data.name), null, function (er, data) { rev = data._rev if (!rev) return cb(new Error( "No _rev found in "+data._id)) detach.call(this, data, path, rev, cb) }.bind(this)) } + +function escape (base, name) { + var escaped = name.replace(/\//, "%2f") + return url.resolve(base, escaped) +} diff --git a/deps/npm/node_modules/npm-registry-client/lib/util/nerf-dart.js b/deps/npm/node_modules/npm-registry-client/lib/util/nerf-dart.js new file mode 100644 index 00000000000..3b26a56c65f --- /dev/null +++ b/deps/npm/node_modules/npm-registry-client/lib/util/nerf-dart.js @@ -0,0 +1,21 @@ +var url = require("url") + +module.exports = toNerfDart + +/** + * Maps a URL to an identifier. + * + * Name courtesy schiffertronix media LLC, a New Jersey corporation + * + * @param {String} uri The URL to be nerfed. + * + * @returns {String} A nerfed URL. + */ +function toNerfDart(uri) { + var parsed = url.parse(uri) + parsed.pathname = "/" + delete parsed.protocol + delete parsed.auth + + return url.format(parsed) +} diff --git a/deps/npm/node_modules/npm-registry-client/lib/whoami.js b/deps/npm/node_modules/npm-registry-client/lib/whoami.js new file mode 100644 index 00000000000..ffa7bd704e6 --- /dev/null +++ b/deps/npm/node_modules/npm-registry-client/lib/whoami.js @@ -0,0 +1,15 @@ +module.exports = whoami + +var url = require("url") + +function whoami (uri, cb) { + if (!this.conf.getCredentialsByURI(uri)) { + return cb(new Error("Must be logged in to see who you are")) + } + + this.request("GET", url.resolve(uri, "whoami"), null, function (er, userdata) { + if (er) return cb(er) + + cb(null, userdata.username) + }) +} diff --git a/deps/npm/node_modules/npm-registry-client/package.json b/deps/npm/node_modules/npm-registry-client/package.json index 6d29da9ddfd..f9c447ee2be 100644 --- a/deps/npm/node_modules/npm-registry-client/package.json +++ b/deps/npm/node_modules/npm-registry-client/package.json @@ -6,7 +6,7 @@ }, "name": "npm-registry-client", "description": "Client for the npm registry", - "version": "2.0.7", + "version": "3.2.4", "repository": { "url": "git://github.com/isaacs/npm-registry-client" }, @@ -18,15 +18,19 @@ "chownr": "0", "graceful-fs": "^3.0.0", "mkdirp": "^0.5.0", + "normalize-package-data": "~1.0.1", "npm-cache-filename": "^1.0.0", + "once": "^1.3.0", "request": "2 >=2.25.0", - "retry": "0.6.0", - "rimraf": "~2", - "semver": "2 >=2.2.1", - "slide": "~1.1.3", + "retry": "^0.6.1", + "rimraf": "2", + "semver": "2 >=2.2.1 || 3.x || 4", + "slide": "^1.1.3", "npmlog": "" }, "devDependencies": { + "concat-stream": "^1.4.6", + "npmconf": "^2.1.0", "tap": "" }, "optionalDependencies": { @@ -35,12 +39,12 @@ "license": "ISC", "readme": "# 
npm-registry-client\n\nThe code that npm uses to talk to the registry.\n\nIt handles all the caching and HTTP calls.\n\n## Usage\n\n```javascript\nvar RegClient = require('npm-registry-client')\nvar client = new RegClient(config)\nvar uri = \"npm://registry.npmjs.org/npm\"\nvar options = {timeout: 1000}\n\nclient.get(uri, options, function (error, data, raw, res) {\n // error is an error if there was a problem.\n // data is the parsed data object\n // raw is the json string\n // res is the response from couch\n})\n```\n\n# Registry URLs\n\nThe registry calls take either a full URL pointing to a resource in the\nregistry, or a base URL for the registry as a whole (for the base URL, any path\nwill be ignored). In addition to `http` and `https`, `npm` URLs are allowed.\n`npm` URLs are `https` URLs with the additional restrictions that they will\nalways include authorization credentials, and the response is always registry\nmetadata (and not tarballs or other attachments).\n\n# Configuration\n\nThis program is designed to work with\n[npmconf](https://npmjs.org/package/npmconf), but you can also pass in\na plain-jane object with the appropriate configs, and it'll shim it\nfor you. Any configuration thingie that has get/set/del methods will\nalso be accepted.\n\n* `cache` **Required** {String} Path to the cache folder\n* `always-auth` {Boolean} Auth even for GET requests.\n* `auth` {String} A base64-encoded `username:password`\n* `email` {String} User's email address\n* `tag` {String} The default tag to use when publishing new packages.\n Default = `\"latest\"`\n* `ca` {String} Certificate signing authority certificates to trust.\n* `cert` {String} Client certificate (PEM encoded). Enable access\n to servers that require client certificates\n* `key` {String} Private key (PEM encoded) for client certificate 'cert'\n* `strict-ssl` {Boolean} Whether or not to be strict with SSL\n certificates. Default = `true`\n* `user-agent` {String} User agent header to send. Default =\n `\"node/{process.version} {process.platform} {process.arch}\"`\n* `log` {Object} The logger to use. Defaults to `require(\"npmlog\")` if\n that works, otherwise logs are disabled.\n* `fetch-retries` {Number} Number of times to retry on GET failures.\n Default=2\n* `fetch-retry-factor` {Number} `factor` setting for `node-retry`. Default=10\n* `fetch-retry-mintimeout` {Number} `minTimeout` setting for `node-retry`.\n Default=10000 (10 seconds)\n* `fetch-retry-maxtimeout` {Number} `maxTimeout` setting for `node-retry`.\n Default=60000 (60 seconds)\n* `proxy` {URL} The url to proxy requests through.\n* `https-proxy` {URL} The url to proxy https requests through.\n Defaults to be the same as `proxy` if unset.\n* `_auth` {String} The base64-encoded authorization header.\n* `username` `_password` {String} Username/password to use to generate\n `_auth` if not supplied.\n* `_token` {Object} A token for use with\n [couch-login](https://npmjs.org/package/couch-login)\n\n# client.request(method, uri, options, cb)\n\n* `method` {String} HTTP method\n* `uri` {String} URI pointing to the resource to request\n* `options` {Object} Object containing optional per-request properties.\n * `what` {Stream | Buffer | String | Object} The request body.
Objects\n that are not Buffers or Streams are encoded as JSON.\n * `etag` {String} The cached ETag\n * `follow` {Boolean} Follow 302/301 responses (defaults to true)\n* `cb` {Function}\n * `error` {Error | null}\n * `data` {Object} the parsed data object\n * `raw` {String} the json\n * `res` {Response Object} response from couch\n\nMake a request to the registry. All the other methods are wrappers around\n`request`.\n\n# client.adduser(base, username, password, email, cb)\n\n* `base` {String} Base registry URL\n* `username` {String}\n* `password` {String}\n* `email` {String}\n* `cb` {Function}\n\nAdd a user account to the registry, or verify the credentials.\n\n# client.deprecate(uri, version, message, cb)\n\n* `uri` {String} Full registry URI for the deprecated package\n* `version` {String} Semver version range\n* `message` {String} The message to use as a deprecation warning\n* `cb` {Function}\n\nDeprecate a version of a package in the registry.\n\n# client.bugs(uri, cb)\n\n* `uri` {String} Full registry URI for the package\n* `cb` {Function}\n\nGet the url for bugs of a package\n\n# client.get(uri, options, cb)\n\n* `uri` {String} The complete registry URI to fetch\n* `options` {Object} Object containing optional per-request properties.\n * `timeout` {Number} Duration before the request times out.\n * `follow` {Boolean} Follow 302/301 responses (defaults to true)\n * `staleOk` {Boolean} If there's cached data available, then return that\n to the callback quickly, and update the cache in the background.\n\nFetches data from the registry via a GET request, saving it in the cache folder\nwith the ETag.\n\n# client.publish(uri, data, tarball, cb)\n\n* `uri` {String} The registry URI to publish to\n* `data` {Object} Package data\n* `tarball` {String | Stream} Filename or stream of the package tarball\n* `cb` {Function}\n\nPublish a package to the registry.\n\nNote that this does not create the tarball from a folder. However, it can\naccept a gzipped tar stream or a filename to a tarball.\n\n# client.star(uri, starred, cb)\n\n* `uri` {String} The complete registry URI to star\n* `starred` {Boolean} True to star the package, false to unstar it.\n* `cb` {Function}\n\nStar or unstar a package.\n\nNote that the user does not have to be the package owner to star or unstar a\npackage, though other writes do require that the user be the package owner.\n\n# client.stars(base, username, cb)\n\n* `base` {String} The base URL for the registry\n* `username` {String} Name of user to fetch starred packages for.\n* `cb` {Function}\n\nView your own or another user's starred packages.\n\n# client.tag(uri, version, tag, cb)\n\n* `uri` {String} The complete registry URI to tag\n* `version` {String} Version to tag\n* `tag` {String} Tag name to apply\n* `cb` {Function}\n\nMark a version in the `dist-tags` hash, so that `pkg@tag` will fetch the\nspecified version.\n\n# client.unpublish(uri, [ver], cb)\n\n* `uri` {String} The complete registry URI to unpublish\n* `ver` {String} version to unpublish. Leave blank to unpublish all\n versions.\n* `cb` {Function}\n\nRemove a version of a package (or all versions) from the registry.
When the\nlast version is unpublished, the entire document is removed from the database.\n\n# client.upload(uri, file, [etag], [nofollow], cb)\n\n* `uri` {String} The complete registry URI to upload to\n* `file` {String | Stream} Either the filename or a readable stream\n* `etag` {String} Cache ETag\n* `nofollow` {Boolean} Do not follow 301/302 responses\n* `cb` {Function}\n\nUpload an attachment. Mostly used by `client.publish()`.\n", "readmeFilename": "README.md", "gitHead": "ddafd4913bdca30a1f9111660767f71653604b57", "bugs": { "url": "https://github.com/isaacs/npm-registry-client/issues" }, "homepage": "https://github.com/isaacs/npm-registry-client", "_id": "npm-registry-client@3.2.4", "_shasum": "8659b3449e1c9a9f8181dad142cadb048bfe521f", "_from": "npm-registry-client@>=3.2.4 <3.3.0" } diff --git a/deps/npm/node_modules/npm-registry-client/test/bugs.js b/deps/npm/node_modules/npm-registry-client/test/bugs.js index a7336b4a585..799445295d3 100644 --- a/deps/npm/node_modules/npm-registry-client/test/bugs.js +++ b/deps/npm/node_modules/npm-registry-client/test/bugs.js @@ -2,13 +2,7 @@ var tap = require("tap") var server = require("./lib/server.js") var common = require("./lib/common.js") -var client = common.freshClient({ - username : "username", - password : "%1234@asdf%", - email : "ogd@aoaioxxysz.net", - _auth : new Buffer("username:%1234@asdf%").toString("base64"), - "always-auth" : true -}) +var client = common.freshClient() tap.test("get the URL for the bugs page on a package", function (t) { server.expect("GET", "/sample/latest", function (req, res) { @@ -23,7 +17,8 @@ tap.test("get the URL for the bugs page on a package", function (t) { }) client.bugs("http://localhost:1337/sample", function (error, info) { - t.notOk(error, "no errors") + t.ifError(error) + t.ok(info.url, "got the URL") t.ok(info.email, "got the email address") diff --git a/deps/npm/node_modules/npm-registry-client/test/deprecate.js b/deps/npm/node_modules/npm-registry-client/test/deprecate.js index 29d33742c70..76a5ba128d8 100644 --- a/deps/npm/node_modules/npm-registry-client/test/deprecate.js +++ b/deps/npm/node_modules/npm-registry-client/test/deprecate.js @@ -2,13 +2,13 @@ var tap = require("tap") var server = require("./lib/server.js") var common = require("./lib/common.js") -var client = common.freshClient({ - username : "username", - password : "password", - email : "ogd@aoaioxxysz.net", - _auth : new Buffer("username:%1234@asdf%").toString("base64"), - "always-auth" : true -}) + +var nerfed = "//localhost:" + server.port + "/:" + +var configuration = {} +configuration[nerfed + "_authToken"] = "not-bad-meaning-bad-but-bad-meaning-wombat" + +var client = common.freshClient(configuration) var cache = require("./fixtures/underscore/cache.json") @@ -57,8 +57,8 @@ tap.test("deprecate a package", function (t) { }) }) - client.deprecate("http://localhost:1337/underscore", VERSION, MESSAGE, function (error, data) { - t.notOk(error, "no errors") + client.deprecate(common.registry + "/underscore", VERSION, MESSAGE, function (er, data) { + t.ifError(er) t.ok(data.deprecated, "was deprecated") t.end() diff --git a/deps/npm/node_modules/npm-registry-client/test/fetch-404.js b/deps/npm/node_modules/npm-registry-client/test/fetch-404.js new file mode 100644 index 00000000000..2ce3b212b04 --- /dev/null +++
b/deps/npm/node_modules/npm-registry-client/test/fetch-404.js @@ -0,0 +1,44 @@ +var resolve = require("path").resolve +var createReadStream = require("graceful-fs").createReadStream +var readFileSync = require("graceful-fs").readFileSync + +var tap = require("tap") +var cat = require("concat-stream") + +var server = require("./lib/server.js") +var common = require("./lib/common.js") + +var tgz = resolve(__dirname, "./fixtures/underscore/1.3.3/package.tgz") + +tap.test("basic fetch", function (t) { + server.expect("/underscore/-/underscore-1.3.3.tgz", function (req, res) { + t.equal(req.method, "GET", "got expected method") + + res.writeHead(200, { + "content-type" : "application/x-tar", + "content-encoding" : "gzip" + }) + + createReadStream(tgz).pipe(res) + }) + + var client = common.freshClient() + client.fetch( + "http://localhost:1337/underscore/-/underscore-1.3.3.tgz", + null, + function (er, res) { + t.ifError(er, "loaded successfully") + + var sink = cat(function (data) { + t.deepEqual(data, readFileSync(tgz)) + t.end() + }) + + res.on("error", function (error) { + t.ifError(error, "no errors on stream") + }) + + res.pipe(sink) + } + ) +}) diff --git a/deps/npm/node_modules/npm-registry-client/test/fetch-408.js b/deps/npm/node_modules/npm-registry-client/test/fetch-408.js new file mode 100644 index 00000000000..bdd8bf07034 --- /dev/null +++ b/deps/npm/node_modules/npm-registry-client/test/fetch-408.js @@ -0,0 +1,52 @@ +var resolve = require("path").resolve +var createReadStream = require("graceful-fs").createReadStream +var readFileSync = require("graceful-fs").readFileSync + +var tap = require("tap") +var cat = require("concat-stream") + +var server = require("./lib/server.js") +var common = require("./lib/common.js") + +var tgz = resolve(__dirname, "./fixtures/underscore/1.3.3/package.tgz") + +tap.test("fetch with retry on timeout", function (t) { + server.expect("/underscore/-/underscore-1.3.3.tgz", function (req, res) { + t.equal(req.method, "GET", "got expected method") + + res.writeHead(408) + res.end() + }) + + server.expect("/underscore/-/underscore-1.3.3.tgz", function (req, res) { + t.equal(req.method, "GET", "got expected method") + + res.writeHead(200, { + "content-type" : "application/x-tar", + "content-encoding" : "gzip" + }) + + createReadStream(tgz).pipe(res) + }) + + var client = common.freshClient() + client.conf.set("fetch-retry-mintimeout", 100) + client.fetch( + "http://localhost:1337/underscore/-/underscore-1.3.3.tgz", + {}, + function (er, res) { + t.ifError(er, "loaded successfully") + + var sink = cat(function (data) { + t.deepEqual(data, readFileSync(tgz)) + t.end() + }) + + res.on("error", function (error) { + t.ifError(error, "no errors on stream") + }) + + res.pipe(sink) + } + ) +}) diff --git a/deps/npm/node_modules/npm-registry-client/test/fetch-503.js b/deps/npm/node_modules/npm-registry-client/test/fetch-503.js new file mode 100644 index 00000000000..91cd6754daf --- /dev/null +++ b/deps/npm/node_modules/npm-registry-client/test/fetch-503.js @@ -0,0 +1,52 @@ +var resolve = require("path").resolve +var createReadStream = require("graceful-fs").createReadStream +var readFileSync = require("graceful-fs").readFileSync + +var tap = require("tap") +var cat = require("concat-stream") + +var server = require("./lib/server.js") +var common = require("./lib/common.js") + +var tgz = resolve(__dirname, "./fixtures/underscore/1.3.3/package.tgz") + +tap.test("fetch with retry on server error", function (t) { + server.expect("/underscore/-/underscore-1.3.3.tgz", 
function (req, res) { + t.equal(req.method, "GET", "got expected method") + + res.writeHead(503) + res.end() + }) + + server.expect("/underscore/-/underscore-1.3.3.tgz", function (req, res) { + t.equal(req.method, "GET", "got expected method") + + res.writeHead(200, { + "content-type" : "application/x-tar", + "content-encoding" : "gzip" + }) + + createReadStream(tgz).pipe(res) + }) + + var client = common.freshClient() + client.conf.set("fetch-retry-mintimeout", 100) + client.fetch( + "http://localhost:1337/underscore/-/underscore-1.3.3.tgz", + {}, + function (er, res) { + t.ifError(er, "loaded successfully") + + var sink = cat(function (data) { + t.deepEqual(data, readFileSync(tgz)) + t.end() + }) + + res.on("error", function (error) { + t.ifError(error, "no errors on stream") + }) + + res.pipe(sink) + } + ) +}) diff --git a/deps/npm/node_modules/npm-registry-client/test/fetch-authed.js b/deps/npm/node_modules/npm-registry-client/test/fetch-authed.js new file mode 100644 index 00000000000..da359296c41 --- /dev/null +++ b/deps/npm/node_modules/npm-registry-client/test/fetch-authed.js @@ -0,0 +1,56 @@ +var resolve = require("path").resolve +var createReadStream = require("graceful-fs").createReadStream +var readFileSync = require("graceful-fs").readFileSync + +var tap = require("tap") +var cat = require("concat-stream") + +var server = require("./lib/server.js") +var common = require("./lib/common.js") + +var tgz = resolve(__dirname, "./fixtures/underscore/1.3.3/package.tgz") + +tap.test("basic fetch with scoped always-auth enabled", function (t) { + server.expect("/underscore/-/underscore-1.3.3.tgz", function (req, res) { + t.equal(req.method, "GET", "got expected method") + t.equal( + req.headers.authorization, + "Basic dXNlcm5hbWU6JTEyMzRAYXNkZiU=", + "got expected auth header" + ) + + res.writeHead(200, { + "content-type" : "application/x-tar", + "content-encoding" : "gzip" + }) + + createReadStream(tgz).pipe(res) + }) + + var nerfed = "//localhost:" + server.port + "/:" + var configuration = {} + configuration[nerfed + "username"] = "username" + configuration[nerfed + "_password"] = new Buffer("%1234@asdf%").toString("base64") + configuration[nerfed + "email"] = "i@izs.me" + configuration[nerfed + "always-auth"] = true + + var client = common.freshClient(configuration) + client.fetch( + "http://localhost:1337/underscore/-/underscore-1.3.3.tgz", + null, + function (er, res) { + t.ifError(er, "loaded successfully") + + var sink = cat(function (data) { + t.deepEqual(data, readFileSync(tgz)) + t.end() + }) + + res.on("error", function (error) { + t.ifError(error, "no errors on stream") + }) + + res.pipe(sink) + } + ) +}) diff --git a/deps/npm/node_modules/npm-registry-client/test/fetch-basic.js b/deps/npm/node_modules/npm-registry-client/test/fetch-basic.js new file mode 100644 index 00000000000..2ce3b212b04 --- /dev/null +++ b/deps/npm/node_modules/npm-registry-client/test/fetch-basic.js @@ -0,0 +1,44 @@ +var resolve = require("path").resolve +var createReadStream = require("graceful-fs").createReadStream +var readFileSync = require("graceful-fs").readFileSync + +var tap = require("tap") +var cat = require("concat-stream") + +var server = require("./lib/server.js") +var common = require("./lib/common.js") + +var tgz = resolve(__dirname, "./fixtures/underscore/1.3.3/package.tgz") + +tap.test("basic fetch", function (t) { + server.expect("/underscore/-/underscore-1.3.3.tgz", function (req, res) { + t.equal(req.method, "GET", "got expected method") + + res.writeHead(200, { + "content-type" : 
"application/x-tar", + "content-encoding" : "gzip" + }) + + createReadStream(tgz).pipe(res) + }) + + var client = common.freshClient() + client.fetch( + "http://localhost:1337/underscore/-/underscore-1.3.3.tgz", + null, + function (er, res) { + t.ifError(er, "loaded successfully") + + var sink = cat(function (data) { + t.deepEqual(data, readFileSync(tgz)) + t.end() + }) + + res.on("error", function (error) { + t.ifError(error, "no errors on stream") + }) + + res.pipe(sink) + } + ) +}) diff --git a/deps/npm/node_modules/npm-registry-client/test/fetch-not-authed.js b/deps/npm/node_modules/npm-registry-client/test/fetch-not-authed.js new file mode 100644 index 00000000000..0275dc2b96b --- /dev/null +++ b/deps/npm/node_modules/npm-registry-client/test/fetch-not-authed.js @@ -0,0 +1,52 @@ +var resolve = require("path").resolve +var createReadStream = require("graceful-fs").createReadStream +var readFileSync = require("graceful-fs").readFileSync + +var tap = require("tap") +var cat = require("concat-stream") + +var server = require("./lib/server.js") +var common = require("./lib/common.js") + +var tgz = resolve(__dirname, "./fixtures/underscore/1.3.3/package.tgz") + +tap.test("basic fetch with scoped always-auth disabled", function (t) { + server.expect("/underscore/-/underscore-1.3.3.tgz", function (req, res) { + t.equal(req.method, "GET", "got expected method") + t.notOk(req.headers.authorization, "received no auth header") + + res.writeHead(200, { + "content-type" : "application/x-tar", + "content-encoding" : "gzip" + }) + + createReadStream(tgz).pipe(res) + }) + + var nerfed = "//localhost:" + server.port + "/:" + var configuration = {} + configuration[nerfed + "username"] = "username" + configuration[nerfed + "_password"] = new Buffer("%1234@asdf%").toString("base64") + configuration[nerfed + "email"] = "i@izs.me" + configuration[nerfed + "always-auth"] = false + + var client = common.freshClient(configuration) + client.fetch( + "http://localhost:1337/underscore/-/underscore-1.3.3.tgz", + null, + function (er, res) { + t.ifError(er, "loaded successfully") + + var sink = cat(function (data) { + t.deepEqual(data, readFileSync(tgz)) + t.end() + }) + + res.on("error", function (error) { + t.ifError(error, "no errors on stream") + }) + + res.pipe(sink) + } + ) +}) diff --git a/deps/npm/node_modules/npm-registry-client/test/fixtures/@npm/npm-registry-client/cache.json b/deps/npm/node_modules/npm-registry-client/test/fixtures/@npm/npm-registry-client/cache.json new file mode 100644 index 00000000000..4561db502b1 --- /dev/null +++ b/deps/npm/node_modules/npm-registry-client/test/fixtures/@npm/npm-registry-client/cache.json @@ -0,0 +1 @@ +{"_id":"@npm%2fnpm-registry-client","_rev":"213-0a1049cf56172b7d9a1184742c6477b9","name":"@npm/npm-registry-client","description":"Client for the npm registry","dist-tags":{"latest":"2.0.4","v2.0":"2.0.3"},"versions":{"0.0.1":{"author":{"name":"Isaac Z. 
Schlueter","email":"i@izs.me","url":"http://blog.izs.me/"},"name":"@npm/npm-registry-client","description":"Client for the npm registry","version":"0.0.1","repository":{"url":"git://github.com/isaacs/npm-registry-client"},"main":"index.js","scripts":{"test":"tap test/*.js"},"dependencies":{"node-uuid":"~1.3.3","request":"~2.9.202","graceful-fs":"~1.1.8","semver":"~1.0.14","slide":"~1.1.3","chownr":"0","mkdirp":"~0.3.3","rimraf":"~2.0.1","npmlog":""},"devDependencies":{"tap":""},"optionalDependencies":{"npmlog":""},"engines":{"node":"*"},"_npmUser":{"name":"isaacs","email":"i@izs.me"},"_id":"@npm%2fnpm-registry-client@0.0.1","_engineSupported":true,"_npmVersion":"1.1.24","_nodeVersion":"v0.7.10-pre","_defaultsLoaded":true,"dist":{"shasum":"693a08f6d2faea22bbd2bf412508a63d3e6229a7","tarball":"http://registry.npmjs.org/@npm%2fnpm-registry-client/-/@npm%2fnpm-registry-client-0.0.1.tgz"},"maintainers":[{"name":"isaacs","email":"i@izs.me"}],"directories":{}},"0.0.2":{"author":{"name":"Isaac Z. Schlueter","email":"i@izs.me","url":"http://blog.izs.me/"},"name":"@npm/npm-registry-client","description":"Client for the npm registry","version":"0.0.2","repository":{"url":"git://github.com/isaacs/npm-registry-client"},"main":"index.js","scripts":{"test":"tap test/*.js"},"dependencies":{"node-uuid":"~1.3.3","request":"~2.9.202","graceful-fs":"~1.1.8","semver":"~1.0.14","slide":"~1.1.3","chownr":"0","mkdirp":"~0.3.3","rimraf":"~2.0.1","npmlog":""},"devDependencies":{"tap":""},"optionalDependencies":{"npmlog":""},"engines":{"node":"*"},"_npmUser":{"name":"isaacs","email":"i@izs.me"},"_id":"@npm%2fnpm-registry-client@0.0.2","_engineSupported":true,"_npmVersion":"1.1.24","_nodeVersion":"v0.7.10-pre","_defaultsLoaded":true,"dist":{"shasum":"b48c0ec5563c6a6fdc253454fc56d2c60c5a26f4","tarball":"http://registry.npmjs.org/@npm%2fnpm-registry-client/-/@npm%2fnpm-registry-client-0.0.2.tgz"},"maintainers":[{"name":"isaacs","email":"i@izs.me"}],"directories":{}},"0.0.3":{"author":{"name":"Isaac Z. Schlueter","email":"i@izs.me","url":"http://blog.izs.me/"},"name":"@npm/npm-registry-client","description":"Client for the npm registry","version":"0.0.3","repository":{"url":"git://github.com/isaacs/npm-registry-client"},"main":"index.js","scripts":{"test":"tap test/*.js"},"dependencies":{"node-uuid":"~1.3.3","request":"~2.9.202","graceful-fs":"~1.1.8","semver":"~1.0.14","slide":"~1.1.3","chownr":"0","mkdirp":"~0.3.3","rimraf":"~2.0.1","npmlog":""},"devDependencies":{"tap":""},"optionalDependencies":{"npmlog":""},"engines":{"node":"*"},"_npmUser":{"name":"isaacs","email":"i@izs.me"},"_id":"@npm%2fnpm-registry-client@0.0.3","_engineSupported":true,"_npmVersion":"1.1.24","_nodeVersion":"v0.7.10-pre","_defaultsLoaded":true,"dist":{"shasum":"ccc0254c2d59e3ea9b9050e2b16edef78df1a1e8","tarball":"http://registry.npmjs.org/@npm%2fnpm-registry-client/-/@npm%2fnpm-registry-client-0.0.3.tgz"},"maintainers":[{"name":"isaacs","email":"i@izs.me"}],"directories":{}},"0.0.4":{"author":{"name":"Isaac Z. 
Schlueter","email":"i@izs.me","url":"http://blog.izs.me/"},"name":"@npm/npm-registry-client","description":"Client for the npm registry","version":"0.0.4","repository":{"url":"git://github.com/isaacs/npm-registry-client"},"main":"index.js","scripts":{"test":"tap test/*.js"},"dependencies":{"node-uuid":"~1.3.3","request":"~2.9.202","graceful-fs":"~1.1.8","semver":"~1.0.14","slide":"~1.1.3","chownr":"0","mkdirp":"~0.3.3","rimraf":"~2.0.1","npmlog":""},"devDependencies":{"tap":""},"optionalDependencies":{"npmlog":""},"engines":{"node":"*"},"_npmUser":{"name":"isaacs","email":"i@izs.me"},"_id":"@npm%2fnpm-registry-client@0.0.4","_engineSupported":true,"_npmVersion":"1.1.25","_nodeVersion":"v0.7.10-pre","_defaultsLoaded":true,"dist":{"shasum":"faabd25ef477521c74ac21e0f4cf3a2f66d18fb3","tarball":"http://registry.npmjs.org/@npm%2fnpm-registry-client/-/@npm%2fnpm-registry-client-0.0.4.tgz"},"maintainers":[{"name":"isaacs","email":"i@izs.me"}],"directories":{}},"0.0.5":{"author":{"name":"Isaac Z. Schlueter","email":"i@izs.me","url":"http://blog.izs.me/"},"name":"@npm/npm-registry-client","description":"Client for the npm registry","version":"0.0.5","repository":{"url":"git://github.com/isaacs/npm-registry-client"},"main":"index.js","scripts":{"test":"tap test/*.js"},"dependencies":{"node-uuid":"~1.3.3","request":"~2.9.202","graceful-fs":"~1.1.8","semver":"~1.0.14","slide":"~1.1.3","chownr":"0","mkdirp":"~0.3.3","rimraf":"~2.0.1","npmlog":""},"devDependencies":{"tap":""},"optionalDependencies":{"npmlog":""},"engines":{"node":"*"},"_id":"@npm%2fnpm-registry-client@0.0.5","dist":{"shasum":"85219810c9d89ae8d28ea766e7cf74efbd9f1e52","tarball":"http://registry.npmjs.org/@npm%2fnpm-registry-client/-/@npm%2fnpm-registry-client-0.0.5.tgz"},"maintainers":[{"name":"isaacs","email":"i@izs.me"}],"directories":{}},"0.0.6":{"author":{"name":"Isaac Z. Schlueter","email":"i@izs.me","url":"http://blog.izs.me/"},"name":"@npm/npm-registry-client","description":"The code that npm uses to talk to the registry","version":"0.0.6","repository":{"url":"git://github.com/isaacs/npm-registry-client"},"main":"index.js","scripts":{"test":"tap test/*.js"},"dependencies":{"node-uuid":"~1.3.3","request":"~2.9.202","graceful-fs":"~1.1.8","semver":"~1.0.14","slide":"~1.1.3","chownr":"0","mkdirp":"~0.3.3","rimraf":"~2.0.1","npmlog":""},"devDependencies":{"tap":""},"optionalDependencies":{"npmlog":""},"engines":{"node":"*"},"_id":"@npm%2fnpm-registry-client@0.0.6","dist":{"shasum":"cc6533b3b41df65e6e9db2601fbbf1a509a7e94c","tarball":"http://registry.npmjs.org/@npm%2fnpm-registry-client/-/@npm%2fnpm-registry-client-0.0.6.tgz"},"maintainers":[{"name":"isaacs","email":"i@izs.me"}],"directories":{}},"0.0.7":{"author":{"name":"Isaac Z. 
Schlueter","email":"i@izs.me","url":"http://blog.izs.me/"},"name":"@npm/npm-registry-client","description":"The code that npm uses to talk to the registry","version":"0.0.7","repository":{"url":"git://github.com/isaacs/npm-registry-client"},"main":"index.js","scripts":{"test":"tap test/*.js"},"dependencies":{"node-uuid":"~1.3.3","request":"~2.9.202","graceful-fs":"~1.1.8","semver":"~1.0.14","slide":"~1.1.3","chownr":"0","mkdirp":"~0.3.3","rimraf":"~2.0.1","npmlog":""},"devDependencies":{"tap":""},"optionalDependencies":{"npmlog":""},"engines":{"node":"*"},"_id":"@npm%2fnpm-registry-client@0.0.7","dist":{"shasum":"0cee1d1c61f1c8e483774fe1f7bbb81c4f394a3a","tarball":"http://registry.npmjs.org/@npm%2fnpm-registry-client/-/@npm%2fnpm-registry-client-0.0.7.tgz"},"maintainers":[{"name":"isaacs","email":"i@izs.me"}],"directories":{}},"0.0.8":{"author":{"name":"Isaac Z. Schlueter","email":"i@izs.me","url":"http://blog.izs.me/"},"name":"@npm/npm-registry-client","description":"Client for the npm registry","version":"0.0.8","repository":{"url":"git://github.com/isaacs/npm-registry-client"},"main":"index.js","scripts":{"test":"tap test/*.js"},"dependencies":{"node-uuid":"~1.3.3","request":"~2.9.202","graceful-fs":"~1.1.8","semver":"~1.0.14","slide":"~1.1.3","chownr":"0","mkdirp":"~0.3.3","rimraf":"~2.0.1","retry":"0.6.0","npmlog":""},"devDependencies":{"tap":""},"optionalDependencies":{"npmlog":""},"license":"BSD","_id":"@npm%2fnpm-registry-client@0.0.8","dist":{"shasum":"1b7411c3f7310ec2a96b055b00e7ca606e47bd07","tarball":"http://registry.npmjs.org/@npm%2fnpm-registry-client/-/@npm%2fnpm-registry-client-0.0.8.tgz"},"maintainers":[{"name":"isaacs","email":"i@izs.me"}],"directories":{}},"0.0.9":{"author":{"name":"Isaac Z. Schlueter","email":"i@izs.me","url":"http://blog.izs.me/"},"name":"@npm/npm-registry-client","description":"Client for the npm registry","version":"0.0.9","repository":{"url":"git://github.com/isaacs/npm-registry-client"},"main":"index.js","scripts":{"test":"tap test/*.js"},"dependencies":{"node-uuid":"~1.3.3","request":"~2.9.202","graceful-fs":"~1.1.8","semver":"~1.0.14","slide":"~1.1.3","chownr":"0","mkdirp":"~0.3.3","rimraf":"~2.0.1","retry":"0.6.0","couch-login":"~0.1.6","npmlog":""},"devDependencies":{"tap":""},"optionalDependencies":{"npmlog":""},"license":"BSD","_id":"@npm%2fnpm-registry-client@0.0.9","dist":{"shasum":"6d5bfde431559ac9e2e52a7db85f5839b874f022","tarball":"http://registry.npmjs.org/@npm%2fnpm-registry-client/-/@npm%2fnpm-registry-client-0.0.9.tgz"},"maintainers":[{"name":"isaacs","email":"i@izs.me"}],"directories":{}},"0.0.10":{"author":{"name":"Isaac Z. Schlueter","email":"i@izs.me","url":"http://blog.izs.me/"},"name":"@npm/npm-registry-client","description":"Client for the npm registry","version":"0.0.10","repository":{"url":"git://github.com/isaacs/npm-registry-client"},"main":"index.js","scripts":{"test":"tap test/*.js"},"dependencies":{"node-uuid":"~1.3.3","request":"~2.9.202","graceful-fs":"~1.1.8","semver":"~1.0.14","slide":"~1.1.3","chownr":"0","mkdirp":"~0.3.3","rimraf":"~2.0.1","retry":"0.6.0","couch-login":"~0.1.6","npmlog":""},"devDependencies":{"tap":""},"optionalDependencies":{"npmlog":""},"license":"BSD","_id":"@npm%2fnpm-registry-client@0.0.10","dist":{"shasum":"0c8b6a4615bce82aa6cc04a0d1f7dc89921f7a38","tarball":"http://registry.npmjs.org/@npm%2fnpm-registry-client/-/@npm%2fnpm-registry-client-0.0.10.tgz"},"maintainers":[{"name":"isaacs","email":"i@izs.me"}],"directories":{}},"0.0.11":{"author":{"name":"Isaac Z. 
Schlueter","email":"i@izs.me","url":"http://blog.izs.me/"},"name":"@npm/npm-registry-client","description":"Client for the npm registry","version":"0.0.11","repository":{"url":"git://github.com/isaacs/npm-registry-client"},"main":"index.js","scripts":{"test":"tap test/*.js"},"dependencies":{"node-uuid":"~1.3.3","request":"~2.9.202","graceful-fs":"~1.1.8","semver":"~1.0.14","slide":"~1.1.3","chownr":"0","mkdirp":"~0.3.3","rimraf":"~2.0.1","retry":"0.6.0","couch-login":"~0.1.6","npmlog":""},"devDependencies":{"tap":""},"optionalDependencies":{"npmlog":""},"license":"BSD","_id":"@npm%2fnpm-registry-client@0.0.11","dist":{"shasum":"afab40be5bed1faa946d8e1827844698f2ec1db7","tarball":"http://registry.npmjs.org/@npm%2fnpm-registry-client/-/@npm%2fnpm-registry-client-0.0.11.tgz"},"maintainers":[{"name":"isaacs","email":"i@izs.me"}],"directories":{}},"0.1.0":{"author":{"name":"Isaac Z. Schlueter","email":"i@izs.me","url":"http://blog.izs.me/"},"name":"@npm/npm-registry-client","description":"Client for the npm registry","version":"0.1.0","repository":{"url":"git://github.com/isaacs/npm-registry-client"},"main":"index.js","scripts":{"test":"tap test/*.js"},"dependencies":{"node-uuid":"~1.3.3","request":"~2.9.202","graceful-fs":"~1.1.8","semver":"~1.0.14","slide":"~1.1.3","chownr":"0","mkdirp":"~0.3.3","rimraf":"~2.0.1","retry":"0.6.0","couch-login":"~0.1.6","npmlog":""},"devDependencies":{"tap":""},"optionalDependencies":{"npmlog":""},"license":"BSD","_id":"@npm%2fnpm-registry-client@0.1.0","dist":{"shasum":"1077d6bbb5e432450239dc6622a59474953ffbea","tarball":"http://registry.npmjs.org/@npm%2fnpm-registry-client/-/@npm%2fnpm-registry-client-0.1.0.tgz"},"maintainers":[{"name":"isaacs","email":"i@izs.me"}],"directories":{}},"0.1.1":{"author":{"name":"Isaac Z. Schlueter","email":"i@izs.me","url":"http://blog.izs.me/"},"name":"@npm/npm-registry-client","description":"Client for the npm registry","version":"0.1.1","repository":{"url":"git://github.com/isaacs/npm-registry-client"},"main":"index.js","scripts":{"test":"tap test/*.js"},"dependencies":{"node-uuid":"~1.3.3","request":"~2.9.202","graceful-fs":"~1.1.8","semver":"~1.0.14","slide":"~1.1.3","chownr":"0","mkdirp":"~0.3.3","rimraf":"~2.0.1","retry":"0.6.0","couch-login":"~0.1.6","npmlog":""},"devDependencies":{"tap":""},"optionalDependencies":{"npmlog":""},"license":"BSD","_id":"@npm%2fnpm-registry-client@0.1.1","dist":{"shasum":"759765361d09b715270f59cf50f10908e4e9c5fc","tarball":"http://registry.npmjs.org/@npm%2fnpm-registry-client/-/@npm%2fnpm-registry-client-0.1.1.tgz"},"maintainers":[{"name":"isaacs","email":"i@izs.me"}],"directories":{}},"0.1.2":{"author":{"name":"Isaac Z. Schlueter","email":"i@izs.me","url":"http://blog.izs.me/"},"name":"@npm/npm-registry-client","description":"Client for the npm registry","version":"0.1.2","repository":{"url":"git://github.com/isaacs/npm-registry-client"},"main":"index.js","scripts":{"test":"tap test/*.js"},"dependencies":{"node-uuid":"~1.3.3","request":"~2.9.202","graceful-fs":"~1.1.8","semver":"~1.0.14","slide":"~1.1.3","chownr":"0","mkdirp":"~0.3.3","rimraf":"~2.0.1","retry":"0.6.0","couch-login":"~0.1.6","npmlog":""},"devDependencies":{"tap":""},"optionalDependencies":{"npmlog":""},"license":"BSD","_id":"@npm%2fnpm-registry-client@0.1.2","dist":{"shasum":"541ce93abb3d35f5c325545c718dd3bbeaaa9ff0","tarball":"http://registry.npmjs.org/@npm%2fnpm-registry-client/-/@npm%2fnpm-registry-client-0.1.2.tgz"},"maintainers":[{"name":"isaacs","email":"i@izs.me"}],"directories":{}},"0.1.3":{"author":{"name":"Isaac Z. 
Schlueter","email":"i@izs.me","url":"http://blog.izs.me/"},"name":"@npm/npm-registry-client","description":"Client for the npm registry","version":"0.1.3","repository":{"url":"git://github.com/isaacs/npm-registry-client"},"main":"index.js","scripts":{"test":"tap test/*.js"},"dependencies":{"node-uuid":"~1.3.3","request":"~2.9.202","graceful-fs":"~1.1.8","semver":"~1.0.14","slide":"~1.1.3","chownr":"0","mkdirp":"~0.3.3","rimraf":"~2.0.1","retry":"0.6.0","couch-login":"~0.1.6","npmlog":""},"devDependencies":{"tap":""},"optionalDependencies":{"npmlog":""},"license":"BSD","_id":"@npm%2fnpm-registry-client@0.1.3","dist":{"shasum":"e9a40d7031e8f809af5fd85aa9aac979e17efc97","tarball":"http://registry.npmjs.org/@npm%2fnpm-registry-client/-/@npm%2fnpm-registry-client-0.1.3.tgz"},"maintainers":[{"name":"isaacs","email":"i@izs.me"}],"directories":{}},"0.1.4":{"author":{"name":"Isaac Z. Schlueter","email":"i@izs.me","url":"http://blog.izs.me/"},"name":"@npm/npm-registry-client","description":"Client for the npm registry","version":"0.1.4","repository":{"url":"git://github.com/isaacs/npm-registry-client"},"main":"index.js","scripts":{"test":"tap test/*.js"},"dependencies":{"node-uuid":"~1.3.3","request":"~2.9.202","graceful-fs":"~1.1.8","semver":"~1.0.14","slide":"~1.1.3","chownr":"0","mkdirp":"~0.3.3","rimraf":"~2.0.1","retry":"0.6.0","couch-login":"~0.1.6","npmlog":""},"devDependencies":{"tap":""},"optionalDependencies":{"npmlog":""},"license":"BSD","_id":"@npm%2fnpm-registry-client@0.1.4","dist":{"shasum":"b211485b046191a1085362376530316f0cab0420","tarball":"http://registry.npmjs.org/@npm%2fnpm-registry-client/-/@npm%2fnpm-registry-client-0.1.4.tgz"},"_npmVersion":"1.1.48","_npmUser":{"name":"isaacs","email":"i@izs.me"},"maintainers":[{"name":"isaacs","email":"i@izs.me"}],"directories":{}},"0.2.0":{"author":{"name":"Isaac Z. Schlueter","email":"i@izs.me","url":"http://blog.izs.me/"},"name":"@npm/npm-registry-client","description":"Client for the npm registry","version":"0.2.0","repository":{"url":"git://github.com/isaacs/npm-registry-client"},"main":"index.js","scripts":{"test":"tap test/*.js"},"dependencies":{"node-uuid":"~1.3.3","request":"~2.9.202","graceful-fs":"~1.1.8","semver":"~1.0.14","slide":"~1.1.3","chownr":"0","mkdirp":"~0.3.3","rimraf":"~2.0.1","retry":"0.6.0","couch-login":"~0.1.6","npmlog":""},"devDependencies":{"tap":""},"optionalDependencies":{"npmlog":""},"license":"BSD","_id":"@npm%2fnpm-registry-client@0.2.0","dist":{"shasum":"6508a4b4d96f31057d5200ca5779531bafd2b840","tarball":"http://registry.npmjs.org/@npm%2fnpm-registry-client/-/@npm%2fnpm-registry-client-0.2.0.tgz"},"_npmVersion":"1.1.49","_npmUser":{"name":"isaacs","email":"i@izs.me"},"maintainers":[{"name":"isaacs","email":"i@izs.me"}],"directories":{}},"0.2.1":{"author":{"name":"Isaac Z. 
Schlueter","email":"i@izs.me","url":"http://blog.izs.me/"},"name":"@npm/npm-registry-client","description":"Client for the npm registry","version":"0.2.1","repository":{"url":"git://github.com/isaacs/npm-registry-client"},"main":"index.js","scripts":{"test":"tap test/*.js"},"dependencies":{"node-uuid":"~1.3.3","request":"~2.9.202","graceful-fs":"~1.1.8","semver":"~1.0.14","slide":"~1.1.3","chownr":"0","mkdirp":"~0.3.3","rimraf":"~2.0.1","retry":"0.6.0","couch-login":"~0.1.6","npmlog":""},"devDependencies":{"tap":""},"optionalDependencies":{"npmlog":""},"license":"BSD","_id":"@npm%2fnpm-registry-client@0.2.1","dist":{"shasum":"1bc8c4576c368cd88253d8a52daf40c55b89bb1a","tarball":"http://registry.npmjs.org/@npm%2fnpm-registry-client/-/@npm%2fnpm-registry-client-0.2.1.tgz"},"_npmVersion":"1.1.49","_npmUser":{"name":"isaacs","email":"i@izs.me"},"maintainers":[{"name":"isaacs","email":"i@izs.me"}],"directories":{}},"0.2.5":{"author":{"name":"Isaac Z. Schlueter","email":"i@izs.me","url":"http://blog.izs.me/"},"name":"@npm/npm-registry-client","description":"Client for the npm registry","version":"0.2.5","repository":{"url":"git://github.com/isaacs/npm-registry-client"},"main":"index.js","scripts":{"test":"tap test/*.js"},"dependencies":{"request":"~2.9.202","graceful-fs":"~1.1.8","semver":"~1.0.14","slide":"~1.1.3","chownr":"0","mkdirp":"~0.3.3","rimraf":"~2.0.1","retry":"0.6.0","couch-login":"~0.1.6","npmlog":""},"devDependencies":{"tap":""},"optionalDependencies":{"npmlog":""},"license":"BSD","_id":"@npm%2fnpm-registry-client@0.2.5","dist":{"shasum":"2f55d675dfb977403b1ad0d96874c1d30e8058d7","tarball":"http://registry.npmjs.org/@npm%2fnpm-registry-client/-/@npm%2fnpm-registry-client-0.2.5.tgz"},"_npmVersion":"1.1.51","_npmUser":{"name":"isaacs","email":"i@izs.me"},"maintainers":[{"name":"isaacs","email":"i@izs.me"}],"directories":{}},"0.2.6":{"author":{"name":"Isaac Z. Schlueter","email":"i@izs.me","url":"http://blog.izs.me/"},"name":"@npm/npm-registry-client","description":"Client for the npm registry","version":"0.2.6","repository":{"url":"git://github.com/isaacs/npm-registry-client"},"main":"index.js","scripts":{"test":"tap test/*.js"},"dependencies":{"request":"~2.9.202","graceful-fs":"~1.1.8","semver":"~1.0.14","slide":"~1.1.3","chownr":"0","mkdirp":"~0.3.3","rimraf":"~2.0.1","retry":"0.6.0","couch-login":"~0.1.6","npmlog":""},"devDependencies":{"tap":""},"optionalDependencies":{"npmlog":""},"license":"BSD","_id":"@npm%2fnpm-registry-client@0.2.6","dist":{"shasum":"f05df6695360360ad220e6e13a6a7bace7165fbe","tarball":"http://registry.npmjs.org/@npm%2fnpm-registry-client/-/@npm%2fnpm-registry-client-0.2.6.tgz"},"_npmVersion":"1.1.56","_npmUser":{"name":"isaacs","email":"i@izs.me"},"maintainers":[{"name":"isaacs","email":"i@izs.me"}],"directories":{}},"0.2.7":{"author":{"name":"Isaac Z. 
Schlueter","email":"i@izs.me","url":"http://blog.izs.me/"},"name":"@npm/npm-registry-client","description":"Client for the npm registry","version":"0.2.7","repository":{"url":"git://github.com/isaacs/npm-registry-client"},"main":"index.js","scripts":{"test":"tap test/*.js"},"dependencies":{"request":"~2.9.202","graceful-fs":"~1.1.8","semver":"~1.0.14","slide":"~1.1.3","chownr":"0","mkdirp":"~0.3.3","rimraf":"~2.0.1","retry":"0.6.0","couch-login":"~0.1.6","npmlog":""},"devDependencies":{"tap":""},"optionalDependencies":{"npmlog":""},"license":"BSD","_id":"@npm%2fnpm-registry-client@0.2.7","dist":{"shasum":"867bad8854cae82ed89ee3b7f1d391af59491671","tarball":"http://registry.npmjs.org/@npm%2fnpm-registry-client/-/@npm%2fnpm-registry-client-0.2.7.tgz"},"_npmVersion":"1.1.59","_npmUser":{"name":"isaacs","email":"i@izs.me"},"maintainers":[{"name":"isaacs","email":"i@izs.me"}],"directories":{}},"0.2.8":{"author":{"name":"Isaac Z. Schlueter","email":"i@izs.me","url":"http://blog.izs.me/"},"name":"@npm/npm-registry-client","description":"Client for the npm registry","version":"0.2.8","repository":{"url":"git://github.com/isaacs/npm-registry-client"},"main":"index.js","scripts":{"test":"tap test/*.js"},"dependencies":{"request":"~2.9.202","graceful-fs":"~1.1.8","semver":"~1.1.0","slide":"~1.1.3","chownr":"0","mkdirp":"~0.3.3","rimraf":"~2.0.1","retry":"0.6.0","couch-login":"~0.1.6","npmlog":""},"devDependencies":{"tap":""},"optionalDependencies":{"npmlog":""},"license":"BSD","_id":"@npm%2fnpm-registry-client@0.2.8","dist":{"shasum":"ef194cdb70f1ea03a576cff2c97392fa96e36563","tarball":"http://registry.npmjs.org/@npm%2fnpm-registry-client/-/@npm%2fnpm-registry-client-0.2.8.tgz"},"_npmVersion":"1.1.62","_npmUser":{"name":"isaacs","email":"i@izs.me"},"maintainers":[{"name":"isaacs","email":"i@izs.me"}],"directories":{}},"0.2.9":{"author":{"name":"Isaac Z. Schlueter","email":"i@izs.me","url":"http://blog.izs.me/"},"name":"@npm/npm-registry-client","description":"Client for the npm registry","version":"0.2.9","repository":{"url":"git://github.com/isaacs/npm-registry-client"},"main":"index.js","scripts":{"test":"tap test/*.js"},"dependencies":{"request":"~2.9.202","graceful-fs":"~1.1.8","semver":"~1.1.0","slide":"~1.1.3","chownr":"0","mkdirp":"~0.3.3","rimraf":"~2.0.1","retry":"0.6.0","couch-login":"~0.1.15","npmlog":""},"devDependencies":{"tap":""},"optionalDependencies":{"npmlog":""},"license":"BSD","_id":"@npm%2fnpm-registry-client@0.2.9","dist":{"shasum":"3cec10431dfed1594adaf99c50f482ee56ecf9e4","tarball":"http://registry.npmjs.org/@npm%2fnpm-registry-client/-/@npm%2fnpm-registry-client-0.2.9.tgz"},"_npmVersion":"1.1.59","_npmUser":{"name":"isaacs","email":"i@izs.me"},"maintainers":[{"name":"isaacs","email":"i@izs.me"}],"directories":{}},"0.2.10":{"author":{"name":"Isaac Z. 
Schlueter","email":"i@izs.me","url":"http://blog.izs.me/"},"name":"@npm/npm-registry-client","description":"Client for the npm registry","version":"0.2.10","repository":{"url":"git://github.com/isaacs/npm-registry-client"},"main":"index.js","scripts":{"test":"tap test/*.js"},"dependencies":{"request":"~2.9.202","graceful-fs":"~1.1.8","semver":"~1.1.0","slide":"~1.1.3","chownr":"0","mkdirp":"~0.3.3","rimraf":"~2.0.1","retry":"0.6.0","couch-login":"~0.1.15","npmlog":""},"devDependencies":{"tap":""},"optionalDependencies":{"npmlog":""},"license":"BSD","_id":"@npm%2fnpm-registry-client@0.2.10","dist":{"shasum":"1e69726dae0944e78562fd77243f839c6a2ced1e","tarball":"http://registry.npmjs.org/@npm%2fnpm-registry-client/-/@npm%2fnpm-registry-client-0.2.10.tgz"},"_npmVersion":"1.1.64","_npmUser":{"name":"isaacs","email":"i@izs.me"},"maintainers":[{"name":"isaacs","email":"i@izs.me"}],"directories":{}},"0.2.11":{"author":{"name":"Isaac Z. Schlueter","email":"i@izs.me","url":"http://blog.izs.me/"},"name":"@npm/npm-registry-client","description":"Client for the npm registry","version":"0.2.11","repository":{"url":"git://github.com/isaacs/npm-registry-client"},"main":"index.js","scripts":{"test":"tap test/*.js"},"dependencies":{"request":"~2.9.202","graceful-fs":"~1.1.8","semver":"~1.1.0","slide":"~1.1.3","chownr":"0","mkdirp":"~0.3.3","rimraf":"~2","retry":"0.6.0","couch-login":"~0.1.15","npmlog":""},"devDependencies":{"tap":""},"optionalDependencies":{"npmlog":""},"license":"BSD","_id":"@npm%2fnpm-registry-client@0.2.11","dist":{"shasum":"d92f33c297eb1bbd57fd597c3d8f5f7e9340a0b5","tarball":"http://registry.npmjs.org/@npm%2fnpm-registry-client/-/@npm%2fnpm-registry-client-0.2.11.tgz"},"_npmVersion":"1.1.70","_npmUser":{"name":"isaacs","email":"i@izs.me"},"maintainers":[{"name":"isaacs","email":"i@izs.me"}],"directories":{}},"0.2.12":{"author":{"name":"Isaac Z. Schlueter","email":"i@izs.me","url":"http://blog.izs.me/"},"name":"@npm/npm-registry-client","description":"Client for the npm registry","version":"0.2.12","repository":{"url":"git://github.com/isaacs/npm-registry-client"},"main":"index.js","scripts":{"test":"tap test/*.js"},"dependencies":{"request":"~2.9.202","graceful-fs":"~1.1.8","semver":"~1.1.0","slide":"~1.1.3","chownr":"0","mkdirp":"~0.3.3","rimraf":"~2","retry":"0.6.0","couch-login":"~0.1.15","npmlog":""},"devDependencies":{"tap":""},"optionalDependencies":{"npmlog":""},"license":"BSD","_id":"@npm%2fnpm-registry-client@0.2.12","dist":{"shasum":"3bfb6fc0e4b131d665580cd1481c341fe521bfd3","tarball":"http://registry.npmjs.org/@npm%2fnpm-registry-client/-/@npm%2fnpm-registry-client-0.2.12.tgz"},"_from":".","_npmVersion":"1.2.2","_npmUser":{"name":"isaacs","email":"i@izs.me"},"maintainers":[{"name":"isaacs","email":"i@izs.me"}],"directories":{}},"0.2.13":{"author":{"name":"Isaac Z. 
Schlueter","email":"i@izs.me","url":"http://blog.izs.me/"},"name":"@npm/npm-registry-client","description":"Client for the npm registry","version":"0.2.13","repository":{"url":"git://github.com/isaacs/npm-registry-client"},"main":"index.js","scripts":{"test":"tap test/*.js"},"dependencies":{"request":"~2.9.202","graceful-fs":"~1.2.0","semver":"~1.1.0","slide":"~1.1.3","chownr":"0","mkdirp":"~0.3.3","rimraf":"~2","retry":"0.6.0","couch-login":"~0.1.15","npmlog":""},"devDependencies":{"tap":""},"optionalDependencies":{"npmlog":""},"license":"BSD","_id":"@npm%2fnpm-registry-client@0.2.13","dist":{"shasum":"e03f2a4340065511b7184a3e2862cd5d459ef027","tarball":"http://registry.npmjs.org/@npm%2fnpm-registry-client/-/@npm%2fnpm-registry-client-0.2.13.tgz"},"_from":".","_npmVersion":"1.2.4","_npmUser":{"name":"isaacs","email":"i@izs.me"},"maintainers":[{"name":"isaacs","email":"i@izs.me"}],"directories":{}},"0.2.14":{"author":{"name":"Isaac Z. Schlueter","email":"i@izs.me","url":"http://blog.izs.me/"},"name":"@npm/npm-registry-client","description":"Client for the npm registry","version":"0.2.14","repository":{"url":"git://github.com/isaacs/npm-registry-client"},"main":"index.js","scripts":{"test":"tap test/*.js"},"dependencies":{"request":"~2.9.202","graceful-fs":"~1.2.0","semver":"~1.1.0","slide":"~1.1.3","chownr":"0","mkdirp":"~0.3.3","rimraf":"~2","retry":"0.6.0","couch-login":"~0.1.15","npmlog":""},"devDependencies":{"tap":""},"optionalDependencies":{"npmlog":""},"license":"BSD","_id":"@npm%2fnpm-registry-client@0.2.14","dist":{"shasum":"186874a7790417a340d582b1cd4a7c338087ee12","tarball":"http://registry.npmjs.org/@npm%2fnpm-registry-client/-/@npm%2fnpm-registry-client-0.2.14.tgz"},"_from":".","_npmVersion":"1.2.10","_npmUser":{"name":"isaacs","email":"i@izs.me"},"maintainers":[{"name":"isaacs","email":"i@izs.me"}],"directories":{}},"0.2.15":{"author":{"name":"Isaac Z. Schlueter","email":"i@izs.me","url":"http://blog.izs.me/"},"name":"@npm/npm-registry-client","description":"Client for the npm registry","version":"0.2.15","repository":{"url":"git://github.com/isaacs/npm-registry-client"},"main":"index.js","scripts":{"test":"tap test/*.js"},"dependencies":{"request":"~2.9.202","graceful-fs":"~1.2.0","semver":"~1.1.0","slide":"~1.1.3","chownr":"0","mkdirp":"~0.3.3","rimraf":"~2","retry":"0.6.0","couch-login":"~0.1.15","npmlog":""},"devDependencies":{"tap":""},"optionalDependencies":{"npmlog":""},"license":"BSD","_id":"@npm%2fnpm-registry-client@0.2.15","dist":{"shasum":"f71f32b7185855f1f8b7a5ef49e49d2357c2c552","tarball":"http://registry.npmjs.org/@npm%2fnpm-registry-client/-/@npm%2fnpm-registry-client-0.2.15.tgz"},"_from":".","_npmVersion":"1.2.10","_npmUser":{"name":"isaacs","email":"i@izs.me"},"maintainers":[{"name":"isaacs","email":"i@izs.me"}],"directories":{}},"0.2.16":{"author":{"name":"Isaac Z. 
Schlueter","email":"i@izs.me","url":"http://blog.izs.me/"},"name":"@npm/npm-registry-client","description":"Client for the npm registry","version":"0.2.16","repository":{"url":"git://github.com/isaacs/npm-registry-client"},"main":"index.js","scripts":{"test":"tap test/*.js"},"dependencies":{"request":"~2.9.202","graceful-fs":"~1.2.0","semver":"~1.1.0","slide":"~1.1.3","chownr":"0","mkdirp":"~0.3.3","rimraf":"~2","retry":"0.6.0","couch-login":"~0.1.15","npmlog":""},"devDependencies":{"tap":""},"optionalDependencies":{"npmlog":""},"license":"BSD","_id":"@npm%2fnpm-registry-client@0.2.16","dist":{"shasum":"3331323b5050fc5afdf77c3a35913c16f3e43964","tarball":"http://registry.npmjs.org/@npm%2fnpm-registry-client/-/@npm%2fnpm-registry-client-0.2.16.tgz"},"_from":".","_npmVersion":"1.2.10","_npmUser":{"name":"isaacs","email":"i@izs.me"},"maintainers":[{"name":"isaacs","email":"i@izs.me"}],"directories":{}},"0.2.17":{"author":{"name":"Isaac Z. Schlueter","email":"i@izs.me","url":"http://blog.izs.me/"},"name":"@npm/npm-registry-client","description":"Client for the npm registry","version":"0.2.17","repository":{"url":"git://github.com/isaacs/npm-registry-client"},"main":"index.js","scripts":{"test":"tap test/*.js"},"dependencies":{"request":"~2.9.202","graceful-fs":"~1.2.0","semver":"~1.1.0","slide":"~1.1.3","chownr":"0","mkdirp":"~0.3.3","rimraf":"~2","retry":"0.6.0","couch-login":"~0.1.15","npmlog":""},"devDependencies":{"tap":""},"optionalDependencies":{"npmlog":""},"license":"BSD","_id":"@npm%2fnpm-registry-client@0.2.17","dist":{"shasum":"1df2bbecac6751f5d9600fb43722aef96d956773","tarball":"http://registry.npmjs.org/@npm%2fnpm-registry-client/-/@npm%2fnpm-registry-client-0.2.17.tgz"},"_from":".","_npmVersion":"1.2.11","_npmUser":{"name":"isaacs","email":"i@izs.me"},"maintainers":[{"name":"isaacs","email":"i@izs.me"}],"directories":{}},"0.2.18":{"author":{"name":"Isaac Z. Schlueter","email":"i@izs.me","url":"http://blog.izs.me/"},"name":"@npm/npm-registry-client","description":"Client for the npm registry","version":"0.2.18","repository":{"url":"git://github.com/isaacs/npm-registry-client"},"main":"index.js","scripts":{"test":"tap test/*.js"},"dependencies":{"request":"~2.9.202","graceful-fs":"~1.2.0","semver":"~1.1.0","slide":"~1.1.3","chownr":"0","mkdirp":"~0.3.3","rimraf":"~2","retry":"0.6.0","couch-login":"~0.1.15","npmlog":""},"devDependencies":{"tap":""},"optionalDependencies":{"npmlog":""},"license":"BSD","_id":"@npm%2fnpm-registry-client@0.2.18","dist":{"shasum":"198c8d15ed9b1ed546faf6e431eb63a6b18193ad","tarball":"http://registry.npmjs.org/@npm%2fnpm-registry-client/-/@npm%2fnpm-registry-client-0.2.18.tgz"},"_from":".","_npmVersion":"1.2.13","_npmUser":{"name":"isaacs","email":"i@izs.me"},"maintainers":[{"name":"isaacs","email":"i@izs.me"}],"directories":{}},"0.2.19":{"author":{"name":"Isaac Z. 
Schlueter","email":"i@izs.me","url":"http://blog.izs.me/"},"name":"@npm/npm-registry-client","description":"Client for the npm registry","version":"0.2.19","repository":{"url":"git://github.com/isaacs/npm-registry-client"},"main":"index.js","scripts":{"test":"tap test/*.js"},"dependencies":{"request":"~2.16","graceful-fs":"~1.2.0","semver":"~1.1.0","slide":"~1.1.3","chownr":"0","mkdirp":"~0.3.3","rimraf":"~2","retry":"0.6.0","couch-login":"~0.1.15","npmlog":""},"devDependencies":{"tap":""},"optionalDependencies":{"npmlog":""},"license":"BSD","_id":"@npm%2fnpm-registry-client@0.2.19","dist":{"shasum":"106da826f0d2007f6e081f2b68fb6f26fa951b20","tarball":"http://registry.npmjs.org/@npm%2fnpm-registry-client/-/@npm%2fnpm-registry-client-0.2.19.tgz"},"_from":".","_npmVersion":"1.2.14","_npmUser":{"name":"isaacs","email":"i@izs.me"},"maintainers":[{"name":"isaacs","email":"i@izs.me"}],"directories":{}},"0.2.20":{"author":{"name":"Isaac Z. Schlueter","email":"i@izs.me","url":"http://blog.izs.me/"},"name":"@npm/npm-registry-client","description":"Client for the npm registry","version":"0.2.20","repository":{"url":"git://github.com/isaacs/npm-registry-client"},"main":"index.js","scripts":{"test":"tap test/*.js"},"dependencies":{"request":"~2.16","graceful-fs":"~1.2.0","semver":"~1.1.0","slide":"~1.1.3","chownr":"0","mkdirp":"~0.3.3","rimraf":"~2","retry":"0.6.0","couch-login":"~0.1.15","npmlog":""},"devDependencies":{"tap":""},"optionalDependencies":{"npmlog":""},"license":"BSD","_id":"@npm%2fnpm-registry-client@0.2.20","dist":{"shasum":"3fff194331e26660be2cf8ebf45ddf7d36add5f6","tarball":"http://registry.npmjs.org/@npm%2fnpm-registry-client/-/@npm%2fnpm-registry-client-0.2.20.tgz"},"_from":".","_npmVersion":"1.2.15","_npmUser":{"name":"isaacs","email":"i@izs.me"},"maintainers":[{"name":"isaacs","email":"i@izs.me"}],"directories":{}},"0.2.21":{"author":{"name":"Isaac Z. Schlueter","email":"i@izs.me","url":"http://blog.izs.me/"},"name":"@npm/npm-registry-client","description":"Client for the npm registry","version":"0.2.21","repository":{"url":"git://github.com/isaacs/npm-registry-client"},"main":"index.js","scripts":{"test":"tap test/*.js"},"dependencies":{"request":"~2.16","graceful-fs":"~1.2.0","semver":"~1.1.0","slide":"~1.1.3","chownr":"0","mkdirp":"~0.3.3","rimraf":"~2","retry":"0.6.0","couch-login":"~0.1.15","npmlog":""},"devDependencies":{"tap":""},"optionalDependencies":{"npmlog":""},"license":"BSD","bugs":{"url":"https://github.com/isaacs/npm-registry-client/issues"},"_id":"@npm%2fnpm-registry-client@0.2.21","dist":{"shasum":"d85dd32525f193925c46ff9eb0e0f529dfd1b254","tarball":"http://registry.npmjs.org/@npm%2fnpm-registry-client/-/@npm%2fnpm-registry-client-0.2.21.tgz"},"_from":".","_npmVersion":"1.2.18","_npmUser":{"name":"isaacs","email":"i@izs.me"},"maintainers":[{"name":"isaacs","email":"i@izs.me"}],"directories":{}},"0.2.22":{"author":{"name":"Isaac Z. 
Schlueter","email":"i@izs.me","url":"http://blog.izs.me/"},"name":"@npm/npm-registry-client","description":"Client for the npm registry","version":"0.2.22","repository":{"url":"git://github.com/isaacs/npm-registry-client"},"main":"index.js","scripts":{"test":"tap test/*.js"},"dependencies":{"request":"~2.20.0","graceful-fs":"~1.2.0","semver":"~1.1.0","slide":"~1.1.3","chownr":"0","mkdirp":"~0.3.3","rimraf":"~2","retry":"0.6.0","couch-login":"~0.1.15","npmlog":""},"devDependencies":{"tap":""},"optionalDependencies":{"npmlog":""},"license":"BSD","bugs":{"url":"https://github.com/isaacs/npm-registry-client/issues"},"_id":"@npm%2fnpm-registry-client@0.2.22","dist":{"shasum":"caa22ff40a1ccd632a660b8b80c333c8f92d5a17","tarball":"http://registry.npmjs.org/@npm%2fnpm-registry-client/-/@npm%2fnpm-registry-client-0.2.22.tgz"},"_from":".","_npmVersion":"1.2.18","_npmUser":{"name":"isaacs","email":"i@izs.me"},"maintainers":[{"name":"isaacs","email":"i@izs.me"}],"directories":{}},"0.2.23":{"author":{"name":"Isaac Z. Schlueter","email":"i@izs.me","url":"http://blog.izs.me/"},"name":"@npm/npm-registry-client","description":"Client for the npm registry","version":"0.2.23","repository":{"url":"git://github.com/isaacs/npm-registry-client"},"main":"index.js","scripts":{"test":"tap test/*.js"},"dependencies":{"request":"2 >=2.20.0","graceful-fs":"~1.2.0","semver":"~1.1.0","slide":"~1.1.3","chownr":"0","mkdirp":"~0.3.3","rimraf":"~2","retry":"0.6.0","couch-login":"~0.1.15","npmlog":""},"devDependencies":{"tap":""},"optionalDependencies":{"npmlog":""},"license":"BSD","bugs":{"url":"https://github.com/isaacs/npm-registry-client/issues"},"_id":"@npm%2fnpm-registry-client@0.2.23","dist":{"shasum":"a320ab2b1d048b4f7b88e40bd86974ca322b4c24","tarball":"http://registry.npmjs.org/@npm%2fnpm-registry-client/-/@npm%2fnpm-registry-client-0.2.23.tgz"},"_from":".","_npmVersion":"1.2.19","_npmUser":{"name":"isaacs","email":"i@izs.me"},"maintainers":[{"name":"isaacs","email":"i@izs.me"}],"directories":{}},"0.2.24":{"author":{"name":"Isaac Z. Schlueter","email":"i@izs.me","url":"http://blog.izs.me/"},"name":"@npm/npm-registry-client","description":"Client for the npm registry","version":"0.2.24","repository":{"url":"git://github.com/isaacs/npm-registry-client"},"main":"index.js","scripts":{"test":"tap test/*.js"},"dependencies":{"request":"2 >=2.20.0","graceful-fs":"~1.2.0","semver":"~1.1.0","slide":"~1.1.3","chownr":"0","mkdirp":"~0.3.3","rimraf":"~2","retry":"0.6.0","couch-login":"~0.1.15","npmlog":""},"devDependencies":{"tap":""},"optionalDependencies":{"npmlog":""},"license":"BSD","bugs":{"url":"https://github.com/isaacs/npm-registry-client/issues"},"_id":"@npm%2fnpm-registry-client@0.2.24","dist":{"shasum":"e12f644338619319ee7f233363a1714a87f3c72d","tarball":"http://registry.npmjs.org/@npm%2fnpm-registry-client/-/@npm%2fnpm-registry-client-0.2.24.tgz"},"_from":".","_npmVersion":"1.2.22","_npmUser":{"name":"isaacs","email":"i@izs.me"},"maintainers":[{"name":"isaacs","email":"i@izs.me"}],"directories":{}},"0.2.25":{"author":{"name":"Isaac Z. 
Schlueter","email":"i@izs.me","url":"http://blog.izs.me/"},"name":"@npm/npm-registry-client","description":"Client for the npm registry","version":"0.2.25","repository":{"url":"git://github.com/isaacs/npm-registry-client"},"main":"index.js","scripts":{"test":"tap test/*.js"},"dependencies":{"request":"2 >=2.20.0","graceful-fs":"~1.2.0","semver":"~2.0.5","slide":"~1.1.3","chownr":"0","mkdirp":"~0.3.3","rimraf":"~2","retry":"0.6.0","couch-login":"~0.1.15","npmlog":""},"devDependencies":{"tap":""},"optionalDependencies":{"npmlog":""},"license":"BSD","bugs":{"url":"https://github.com/isaacs/npm-registry-client/issues"},"_id":"@npm%2fnpm-registry-client@0.2.25","dist":{"shasum":"c2caeb1dcf937d6fcc4a187765d401f5e2f54027","tarball":"http://registry.npmjs.org/@npm%2fnpm-registry-client/-/@npm%2fnpm-registry-client-0.2.25.tgz"},"_from":".","_npmVersion":"1.2.32","_npmUser":{"name":"isaacs","email":"i@izs.me"},"maintainers":[{"name":"isaacs","email":"i@izs.me"}],"directories":{}},"0.2.26":{"author":{"name":"Isaac Z. Schlueter","email":"i@izs.me","url":"http://blog.izs.me/"},"name":"@npm/npm-registry-client","description":"Client for the npm registry","version":"0.2.26","repository":{"url":"git://github.com/isaacs/npm-registry-client"},"main":"index.js","scripts":{"test":"tap test/*.js"},"dependencies":{"request":"2 >=2.20.0","graceful-fs":"~1.2.0","semver":"~2.0.5","slide":"~1.1.3","chownr":"0","mkdirp":"~0.3.3","rimraf":"~2","retry":"0.6.0","couch-login":"~0.1.15","npmlog":""},"devDependencies":{"tap":""},"optionalDependencies":{"npmlog":""},"license":"BSD","bugs":{"url":"https://github.com/isaacs/npm-registry-client/issues"},"_id":"@npm%2fnpm-registry-client@0.2.26","dist":{"shasum":"4c5a2b3de946e383032f10fa497d0c15ee5f4c60","tarball":"http://registry.npmjs.org/@npm%2fnpm-registry-client/-/@npm%2fnpm-registry-client-0.2.26.tgz"},"_from":".","_npmVersion":"1.3.1","_npmUser":{"name":"isaacs","email":"i@izs.me"},"maintainers":[{"name":"isaacs","email":"i@izs.me"}],"directories":{}},"0.2.27":{"author":{"name":"Isaac Z. Schlueter","email":"i@izs.me","url":"http://blog.izs.me/"},"name":"@npm/npm-registry-client","description":"Client for the npm registry","version":"0.2.27","repository":{"url":"git://github.com/isaacs/npm-registry-client"},"main":"index.js","scripts":{"test":"tap test/*.js"},"dependencies":{"request":"2 >=2.20.0","graceful-fs":"~2.0.0","semver":"~2.0.5","slide":"~1.1.3","chownr":"0","mkdirp":"~0.3.3","rimraf":"~2","retry":"0.6.0","couch-login":"~0.1.15","npmlog":""},"devDependencies":{"tap":""},"optionalDependencies":{"npmlog":""},"license":"BSD","bugs":{"url":"https://github.com/isaacs/npm-registry-client/issues"},"_id":"@npm%2fnpm-registry-client@0.2.27","dist":{"shasum":"8f338189d32769267886a07ad7b7fd2267446adf","tarball":"http://registry.npmjs.org/@npm%2fnpm-registry-client/-/@npm%2fnpm-registry-client-0.2.27.tgz"},"_from":".","_npmVersion":"1.3.2","_npmUser":{"name":"isaacs","email":"i@izs.me"},"maintainers":[{"name":"isaacs","email":"i@izs.me"}],"directories":{}},"0.2.28":{"author":{"name":"Isaac Z. 
Schlueter","email":"i@izs.me","url":"http://blog.izs.me/"},"name":"@npm/npm-registry-client","description":"Client for the npm registry","version":"0.2.28","repository":{"url":"git://github.com/isaacs/npm-registry-client"},"main":"index.js","scripts":{"test":"tap test/*.js"},"dependencies":{"request":"2 >=2.25.0","graceful-fs":"~2.0.0","semver":"~2.1.0","slide":"~1.1.3","chownr":"0","mkdirp":"~0.3.3","rimraf":"~2","retry":"0.6.0","couch-login":"~0.1.18","npmlog":""},"devDependencies":{"tap":""},"optionalDependencies":{"npmlog":""},"license":"BSD","bugs":{"url":"https://github.com/isaacs/npm-registry-client/issues"},"_id":"@npm%2fnpm-registry-client@0.2.28","dist":{"shasum":"959141fc0180d7b1ad089e87015a8a2142a8bffc","tarball":"http://registry.npmjs.org/@npm%2fnpm-registry-client/-/@npm%2fnpm-registry-client-0.2.28.tgz"},"_from":".","_npmVersion":"1.3.6","_npmUser":{"name":"isaacs","email":"i@izs.me"},"maintainers":[{"name":"isaacs","email":"i@izs.me"}],"directories":{}},"0.2.29":{"author":{"name":"Isaac Z. Schlueter","email":"i@izs.me","url":"http://blog.izs.me/"},"name":"@npm/npm-registry-client","description":"Client for the npm registry","version":"0.2.29","repository":{"url":"git://github.com/isaacs/npm-registry-client"},"main":"index.js","scripts":{"test":"tap test/*.js"},"dependencies":{"request":"2 >=2.25.0","graceful-fs":"~2.0.0","semver":"^2.2.1","slide":"~1.1.3","chownr":"0","mkdirp":"~0.3.3","rimraf":"~2","retry":"0.6.0","couch-login":"~0.1.18","npmlog":""},"devDependencies":{"tap":""},"optionalDependencies":{"npmlog":""},"license":"BSD","bugs":{"url":"https://github.com/isaacs/npm-registry-client/issues"},"homepage":"https://github.com/isaacs/npm-registry-client","_id":"@npm%2fnpm-registry-client@0.2.29","dist":{"shasum":"66ff2766f0c61d41e8a6139d3692d8833002c686","tarball":"http://registry.npmjs.org/@npm%2fnpm-registry-client/-/@npm%2fnpm-registry-client-0.2.29.tgz"},"_from":".","_npmVersion":"1.3.12","_npmUser":{"name":"isaacs","email":"i@izs.me"},"maintainers":[{"name":"isaacs","email":"i@izs.me"}],"directories":{}},"0.2.30":{"author":{"name":"Isaac Z. Schlueter","email":"i@izs.me","url":"http://blog.izs.me/"},"name":"@npm/npm-registry-client","description":"Client for the npm registry","version":"0.2.30","repository":{"url":"git://github.com/isaacs/npm-registry-client"},"main":"index.js","scripts":{"test":"tap test/*.js"},"dependencies":{"request":"2 >=2.25.0","graceful-fs":"~2.0.0","semver":"^2.2.1","slide":"~1.1.3","chownr":"0","mkdirp":"~0.3.3","rimraf":"~2","retry":"0.6.0","couch-login":"~0.1.18","npmlog":""},"devDependencies":{"tap":""},"optionalDependencies":{"npmlog":""},"license":"BSD","bugs":{"url":"https://github.com/isaacs/npm-registry-client/issues"},"homepage":"https://github.com/isaacs/npm-registry-client","_id":"@npm%2fnpm-registry-client@0.2.30","dist":{"shasum":"f01cae5c51aa0a1c5dc2516cbad3ebde068d3eaa","tarball":"http://registry.npmjs.org/@npm%2fnpm-registry-client/-/@npm%2fnpm-registry-client-0.2.30.tgz"},"_from":".","_npmVersion":"1.3.14","_npmUser":{"name":"isaacs","email":"i@izs.me"},"maintainers":[{"name":"isaacs","email":"i@izs.me"}],"directories":{}},"0.2.31":{"author":{"name":"Isaac Z. 
Schlueter","email":"i@izs.me","url":"http://blog.izs.me/"},"name":"@npm/npm-registry-client","description":"Client for the npm registry","version":"0.2.31","repository":{"url":"git://github.com/isaacs/npm-registry-client"},"main":"index.js","scripts":{"test":"tap test/*.js"},"dependencies":{"request":"2 >=2.25.0","graceful-fs":"~2.0.0","semver":"^2.2.1","slide":"~1.1.3","chownr":"0","mkdirp":"~0.3.3","rimraf":"~2","retry":"0.6.0","couch-login":"~0.1.18","npmlog":""},"devDependencies":{"tap":""},"optionalDependencies":{"npmlog":""},"license":"BSD","bugs":{"url":"https://github.com/isaacs/npm-registry-client/issues"},"homepage":"https://github.com/isaacs/npm-registry-client","_id":"@npm%2fnpm-registry-client@0.2.31","dist":{"shasum":"24a23e24e43246677cb485f8391829e9536563d4","tarball":"http://registry.npmjs.org/@npm%2fnpm-registry-client/-/@npm%2fnpm-registry-client-0.2.31.tgz"},"_from":".","_npmVersion":"1.3.17","_npmUser":{"name":"isaacs","email":"i@izs.me"},"maintainers":[{"name":"isaacs","email":"i@izs.me"}],"directories":{}},"0.3.0":{"author":{"name":"Isaac Z. Schlueter","email":"i@izs.me","url":"http://blog.izs.me/"},"name":"@npm/npm-registry-client","description":"Client for the npm registry","version":"0.3.0","repository":{"url":"git://github.com/isaacs/npm-registry-client"},"main":"index.js","scripts":{"test":"tap test/*.js"},"dependencies":{"request":"2 >=2.25.0","graceful-fs":"~2.0.0","semver":"^2.2.1","slide":"~1.1.3","chownr":"0","mkdirp":"~0.3.3","rimraf":"~2","retry":"0.6.0","couch-login":"~0.1.18","npmlog":""},"devDependencies":{"tap":""},"optionalDependencies":{"npmlog":""},"license":"BSD","bugs":{"url":"https://github.com/isaacs/npm-registry-client/issues"},"homepage":"https://github.com/isaacs/npm-registry-client","_id":"@npm%2fnpm-registry-client@0.3.0","dist":{"shasum":"66eab02a69be67f232ac14023eddfb8308c2eccd","tarball":"http://registry.npmjs.org/@npm%2fnpm-registry-client/-/@npm%2fnpm-registry-client-0.3.0.tgz"},"_from":".","_npmVersion":"1.3.18","_npmUser":{"name":"isaacs","email":"i@izs.me"},"maintainers":[{"name":"isaacs","email":"i@izs.me"}],"directories":{}},"0.3.1":{"author":{"name":"Isaac Z. Schlueter","email":"i@izs.me","url":"http://blog.izs.me/"},"name":"@npm/npm-registry-client","description":"Client for the npm registry","version":"0.3.1","repository":{"url":"git://github.com/isaacs/npm-registry-client"},"main":"index.js","scripts":{"test":"tap test/*.js"},"dependencies":{"request":"2 >=2.25.0","graceful-fs":"~2.0.0","semver":"^2.2.1","slide":"~1.1.3","chownr":"0","mkdirp":"~0.3.3","rimraf":"~2","retry":"0.6.0","couch-login":"~0.1.18","npmlog":""},"devDependencies":{"tap":""},"optionalDependencies":{"npmlog":""},"license":"BSD","bugs":{"url":"https://github.com/isaacs/npm-registry-client/issues"},"homepage":"https://github.com/isaacs/npm-registry-client","_id":"@npm%2fnpm-registry-client@0.3.1","dist":{"shasum":"16dba07cc304442edcece378218672d0a1258ef8","tarball":"http://registry.npmjs.org/@npm%2fnpm-registry-client/-/@npm%2fnpm-registry-client-0.3.1.tgz"},"_from":".","_npmVersion":"1.3.18","_npmUser":{"name":"isaacs","email":"i@izs.me"},"maintainers":[{"name":"isaacs","email":"i@izs.me"}],"directories":{}},"0.3.2":{"author":{"name":"Isaac Z. 
Schlueter","email":"i@izs.me","url":"http://blog.izs.me/"},"name":"@npm/npm-registry-client","description":"Client for the npm registry","version":"0.3.2","repository":{"url":"git://github.com/isaacs/npm-registry-client"},"main":"index.js","scripts":{"test":"tap test/*.js"},"dependencies":{"request":"2 >=2.25.0","graceful-fs":"~2.0.0","semver":"^2.2.1","slide":"~1.1.3","chownr":"0","mkdirp":"~0.3.3","rimraf":"~2","retry":"0.6.0","couch-login":"~0.1.18","npmlog":""},"devDependencies":{"tap":""},"optionalDependencies":{"npmlog":""},"license":"BSD","bugs":{"url":"https://github.com/isaacs/npm-registry-client/issues"},"homepage":"https://github.com/isaacs/npm-registry-client","_id":"@npm%2fnpm-registry-client@0.3.2","dist":{"shasum":"ea3060bd0a87fb1d97b87433b50f38f7272b1686","tarball":"http://registry.npmjs.org/@npm%2fnpm-registry-client/-/@npm%2fnpm-registry-client-0.3.2.tgz"},"_from":".","_npmVersion":"1.3.20","_npmUser":{"name":"isaacs","email":"i@izs.me"},"maintainers":[{"name":"isaacs","email":"i@izs.me"}],"directories":{}},"0.3.3":{"author":{"name":"Isaac Z. Schlueter","email":"i@izs.me","url":"http://blog.izs.me/"},"name":"@npm/npm-registry-client","description":"Client for the npm registry","version":"0.3.3","repository":{"url":"git://github.com/isaacs/npm-registry-client"},"main":"index.js","scripts":{"test":"tap test/*.js"},"dependencies":{"request":"2 >=2.25.0","graceful-fs":"~2.0.0","semver":"^2.2.1","slide":"~1.1.3","chownr":"0","mkdirp":"~0.3.3","rimraf":"~2","retry":"0.6.0","couch-login":"~0.1.18","npmlog":""},"devDependencies":{"tap":""},"optionalDependencies":{"npmlog":""},"license":"BSD","bugs":{"url":"https://github.com/isaacs/npm-registry-client/issues"},"homepage":"https://github.com/isaacs/npm-registry-client","_id":"@npm%2fnpm-registry-client@0.3.3","dist":{"shasum":"da08bb681fb24aa5c988ca71f8c10f27f09daf4a","tarball":"http://registry.npmjs.org/@npm%2fnpm-registry-client/-/@npm%2fnpm-registry-client-0.3.3.tgz"},"_from":".","_npmVersion":"1.3.21","_npmUser":{"name":"isaacs","email":"i@izs.me"},"maintainers":[{"name":"isaacs","email":"i@izs.me"}],"directories":{}},"0.3.4":{"author":{"name":"Isaac Z. Schlueter","email":"i@izs.me","url":"http://blog.izs.me/"},"name":"@npm/npm-registry-client","description":"Client for the npm registry","version":"0.3.4","repository":{"url":"git://github.com/isaacs/npm-registry-client"},"main":"index.js","scripts":{"test":"tap test/*.js"},"dependencies":{"request":"2 >=2.25.0","graceful-fs":"~2.0.0","semver":"^2.2.1","slide":"~1.1.3","chownr":"0","mkdirp":"~0.3.3","rimraf":"~2","retry":"0.6.0","couch-login":"~0.1.18","npmlog":""},"devDependencies":{"tap":""},"optionalDependencies":{"npmlog":""},"license":"BSD","bugs":{"url":"https://github.com/isaacs/npm-registry-client/issues"},"homepage":"https://github.com/isaacs/npm-registry-client","_id":"@npm%2fnpm-registry-client@0.3.4","dist":{"shasum":"25d771771590b1ca39277aea4506af234c5f4342","tarball":"http://registry.npmjs.org/@npm%2fnpm-registry-client/-/@npm%2fnpm-registry-client-0.3.4.tgz"},"_from":".","_npmVersion":"1.3.25","_npmUser":{"name":"isaacs","email":"i@izs.me"},"maintainers":[{"name":"isaacs","email":"i@izs.me"}],"directories":{}},"0.3.5":{"author":{"name":"Isaac Z. 
Schlueter","email":"i@izs.me","url":"http://blog.izs.me/"},"name":"@npm/npm-registry-client","description":"Client for the npm registry","version":"0.3.5","repository":{"url":"git://github.com/isaacs/npm-registry-client"},"main":"index.js","scripts":{"test":"tap test/*.js"},"dependencies":{"request":"2 >=2.25.0","graceful-fs":"~2.0.0","semver":"^2.2.1","slide":"~1.1.3","chownr":"0","mkdirp":"~0.3.3","rimraf":"~2","retry":"0.6.0","couch-login":"~0.1.18","npmlog":""},"devDependencies":{"tap":""},"optionalDependencies":{"npmlog":""},"license":"BSD","bugs":{"url":"https://github.com/isaacs/npm-registry-client/issues"},"homepage":"https://github.com/isaacs/npm-registry-client","_id":"@npm%2fnpm-registry-client@0.3.5","dist":{"shasum":"98ba1ac851a3939a3fb9917c28fa8da522dc635f","tarball":"http://registry.npmjs.org/@npm%2fnpm-registry-client/-/@npm%2fnpm-registry-client-0.3.5.tgz"},"_from":".","_npmVersion":"1.3.25","_npmUser":{"name":"isaacs","email":"i@izs.me"},"maintainers":[{"name":"isaacs","email":"i@izs.me"}],"directories":{}},"0.3.6":{"author":{"name":"Isaac Z. Schlueter","email":"i@izs.me","url":"http://blog.izs.me/"},"name":"@npm/npm-registry-client","description":"Client for the npm registry","version":"0.3.6","repository":{"url":"git://github.com/isaacs/npm-registry-client"},"main":"index.js","scripts":{"test":"tap test/*.js"},"dependencies":{"request":"2 >=2.25.0","graceful-fs":"~2.0.0","semver":"^2.2.1","slide":"~1.1.3","chownr":"0","mkdirp":"~0.3.3","rimraf":"~2","retry":"0.6.0","npmlog":""},"devDependencies":{"tap":""},"optionalDependencies":{"npmlog":""},"license":"BSD","bugs":{"url":"https://github.com/isaacs/npm-registry-client/issues"},"homepage":"https://github.com/isaacs/npm-registry-client","_id":"@npm%2fnpm-registry-client@0.3.6","dist":{"shasum":"c48a2a03643769acc49672860f7920ec6bffac6e","tarball":"http://registry.npmjs.org/@npm%2fnpm-registry-client/-/@npm%2fnpm-registry-client-0.3.6.tgz"},"_from":".","_npmVersion":"1.3.26","_npmUser":{"name":"isaacs","email":"i@izs.me"},"maintainers":[{"name":"isaacs","email":"i@izs.me"}],"directories":{}},"0.4.0":{"author":{"name":"Isaac Z. Schlueter","email":"i@izs.me","url":"http://blog.izs.me/"},"name":"@npm/npm-registry-client","description":"Client for the npm registry","version":"0.4.0","repository":{"url":"git://github.com/isaacs/npm-registry-client"},"main":"index.js","scripts":{"test":"tap test/*.js"},"dependencies":{"request":"2 >=2.25.0","graceful-fs":"~2.0.0","semver":"^2.2.1","slide":"~1.1.3","chownr":"0","mkdirp":"~0.3.3","rimraf":"~2","retry":"0.6.0","npmlog":""},"devDependencies":{"tap":""},"optionalDependencies":{"npmlog":""},"license":"BSD","bugs":{"url":"https://github.com/isaacs/npm-registry-client/issues"},"homepage":"https://github.com/isaacs/npm-registry-client","_id":"@npm%2fnpm-registry-client@0.4.0","dist":{"shasum":"30d0c178b7f2e54183a6a3fc9fe4071eb10290bf","tarball":"http://registry.npmjs.org/@npm%2fnpm-registry-client/-/@npm%2fnpm-registry-client-0.4.0.tgz"},"_from":".","_npmVersion":"1.3.26","_npmUser":{"name":"isaacs","email":"i@izs.me"},"maintainers":[{"name":"isaacs","email":"i@izs.me"}],"directories":{}},"0.4.1":{"author":{"name":"Isaac Z. 
Schlueter","email":"i@izs.me","url":"http://blog.izs.me/"},"name":"@npm/npm-registry-client","description":"Client for the npm registry","version":"0.4.1","repository":{"url":"git://github.com/isaacs/npm-registry-client"},"main":"index.js","scripts":{"test":"tap test/*.js"},"dependencies":{"request":"2 >=2.25.0","graceful-fs":"~2.0.0","semver":"^2.2.1","slide":"~1.1.3","chownr":"0","mkdirp":"~0.3.3","rimraf":"~2","retry":"0.6.0","npmlog":""},"devDependencies":{"tap":""},"optionalDependencies":{"npmlog":""},"license":"BSD","bugs":{"url":"https://github.com/isaacs/npm-registry-client/issues"},"homepage":"https://github.com/isaacs/npm-registry-client","_id":"@npm%2fnpm-registry-client@0.4.1","dist":{"shasum":"9c49b3e44558e2072158fb085be8a083c5f83537","tarball":"http://registry.npmjs.org/@npm%2fnpm-registry-client/-/@npm%2fnpm-registry-client-0.4.1.tgz"},"_from":".","_npmVersion":"1.4.0","_npmUser":{"name":"npm-www","email":"npm@npmjs.com"},"maintainers":[{"name":"isaacs","email":"i@izs.me"}],"directories":{}},"0.4.2":{"author":{"name":"Isaac Z. Schlueter","email":"i@izs.me","url":"http://blog.izs.me/"},"name":"@npm/npm-registry-client","description":"Client for the npm registry","version":"0.4.2","repository":{"url":"git://github.com/isaacs/npm-registry-client"},"main":"index.js","scripts":{"test":"tap test/*.js"},"dependencies":{"request":"2 >=2.25.0","graceful-fs":"~2.0.0","semver":"^2.2.1","slide":"~1.1.3","chownr":"0","mkdirp":"~0.3.3","rimraf":"~2","retry":"0.6.0","npmlog":""},"devDependencies":{"tap":""},"optionalDependencies":{"npmlog":""},"license":"BSD","bugs":{"url":"https://github.com/isaacs/npm-registry-client/issues"},"homepage":"https://github.com/isaacs/npm-registry-client","_id":"@npm%2fnpm-registry-client@0.4.2","dist":{"shasum":"d9568a9413bee14951201ce73f3b3992ec6658c0","tarball":"http://registry.npmjs.org/@npm%2fnpm-registry-client/-/@npm%2fnpm-registry-client-0.4.2.tgz"},"_from":".","_npmVersion":"1.4.1","_npmUser":{"name":"npm-www","email":"npm@npmjs.com"},"maintainers":[{"name":"isaacs","email":"i@izs.me"}],"directories":{}},"0.4.3":{"author":{"name":"Isaac Z. Schlueter","email":"i@izs.me","url":"http://blog.izs.me/"},"name":"@npm/npm-registry-client","description":"Client for the npm registry","version":"0.4.3","repository":{"url":"git://github.com/isaacs/npm-registry-client"},"main":"index.js","scripts":{"test":"tap test/*.js"},"dependencies":{"request":"2 >=2.25.0","graceful-fs":"~2.0.0","semver":"^2.2.1","slide":"~1.1.3","chownr":"0","mkdirp":"~0.3.3","rimraf":"~2","retry":"0.6.0","npmlog":""},"devDependencies":{"tap":""},"optionalDependencies":{"npmlog":""},"license":"BSD","bugs":{"url":"https://github.com/isaacs/npm-registry-client/issues"},"homepage":"https://github.com/isaacs/npm-registry-client","_id":"@npm%2fnpm-registry-client@0.4.3","dist":{"shasum":"aa188fc5067158e991a57f4697c54994108f5389","tarball":"http://registry.npmjs.org/@npm%2fnpm-registry-client/-/@npm%2fnpm-registry-client-0.4.3.tgz"},"_from":".","_npmVersion":"1.4.2","_npmUser":{"name":"isaacs","email":"i@izs.me"},"maintainers":[{"name":"isaacs","email":"i@izs.me"}],"directories":{}},"0.4.4":{"author":{"name":"Isaac Z. 
Schlueter","email":"i@izs.me","url":"http://blog.izs.me/"},"name":"@npm/npm-registry-client","description":"Client for the npm registry","version":"0.4.4","repository":{"url":"git://github.com/isaacs/npm-registry-client"},"main":"index.js","scripts":{"test":"tap test/*.js"},"dependencies":{"request":"2 >=2.25.0","graceful-fs":"~2.0.0","semver":"^2.2.1","slide":"~1.1.3","chownr":"0","mkdirp":"~0.3.3","rimraf":"~2","retry":"0.6.0","npmlog":""},"devDependencies":{"tap":""},"optionalDependencies":{"npmlog":""},"license":"BSD","bugs":{"url":"https://github.com/isaacs/npm-registry-client/issues"},"homepage":"https://github.com/isaacs/npm-registry-client","_id":"@npm%2fnpm-registry-client@0.4.4","dist":{"shasum":"f9dbc383a49069d8c7f67755a3ff6e424aff584f","tarball":"http://registry.npmjs.org/@npm%2fnpm-registry-client/-/@npm%2fnpm-registry-client-0.4.4.tgz"},"_from":".","_npmVersion":"1.4.2","_npmUser":{"name":"isaacs","email":"i@izs.me"},"maintainers":[{"name":"isaacs","email":"i@izs.me"}],"directories":{}},"0.4.5":{"author":{"name":"Isaac Z. Schlueter","email":"i@izs.me","url":"http://blog.izs.me/"},"name":"@npm/npm-registry-client","description":"Client for the npm registry","version":"0.4.5","repository":{"url":"git://github.com/isaacs/npm-registry-client"},"main":"index.js","scripts":{"test":"tap test/*.js"},"dependencies":{"request":"2 >=2.25.0","graceful-fs":"~2.0.0","semver":"^2.2.1","slide":"~1.1.3","chownr":"0","mkdirp":"~0.3.3","rimraf":"~2","retry":"0.6.0","npmlog":""},"devDependencies":{"tap":""},"optionalDependencies":{"npmlog":""},"license":"BSD","bugs":{"url":"https://github.com/isaacs/npm-registry-client/issues"},"homepage":"https://github.com/isaacs/npm-registry-client","_id":"@npm%2fnpm-registry-client@0.4.5","dist":{"shasum":"7d6fdca46139470715f9477ddb5ad3e770d4de7b","tarball":"http://registry.npmjs.org/@npm%2fnpm-registry-client/-/@npm%2fnpm-registry-client-0.4.5.tgz"},"_from":".","_npmVersion":"1.4.4","_npmUser":{"name":"isaacs","email":"i@izs.me"},"maintainers":[{"name":"isaacs","email":"i@izs.me"}],"directories":{}},"0.4.6":{"author":{"name":"Isaac Z. Schlueter","email":"i@izs.me","url":"http://blog.izs.me/"},"name":"@npm/npm-registry-client","description":"Client for the npm registry","version":"0.4.6","repository":{"url":"git://github.com/isaacs/npm-registry-client"},"main":"index.js","scripts":{"test":"tap test/*.js"},"dependencies":{"request":"2 >=2.25.0","graceful-fs":"~2.0.0","semver":"^2.2.1","slide":"~1.1.3","chownr":"0","mkdirp":"~0.3.3","rimraf":"~2","retry":"0.6.0","npmlog":""},"devDependencies":{"tap":""},"optionalDependencies":{"npmlog":""},"license":"BSD","bugs":{"url":"https://github.com/isaacs/npm-registry-client/issues"},"homepage":"https://github.com/isaacs/npm-registry-client","_id":"@npm%2fnpm-registry-client@0.4.6","_from":".","_npmVersion":"1.4.6","_npmUser":{"name":"isaacs","email":"i@izs.me"},"maintainers":[{"name":"isaacs","email":"i@izs.me"}],"dist":{"shasum":"657f69a79543fc4cc264c3b2de958bd15f7140fe","tarball":"http://registry.npmjs.org/@npm%2fnpm-registry-client/-/@npm%2fnpm-registry-client-0.4.6.tgz"},"directories":{}},"0.4.7":{"author":{"name":"Isaac Z. 
Schlueter","email":"i@izs.me","url":"http://blog.izs.me/"},"name":"@npm/npm-registry-client","description":"Client for the npm registry","version":"0.4.7","repository":{"url":"git://github.com/isaacs/npm-registry-client"},"main":"index.js","scripts":{"test":"tap test/*.js"},"dependencies":{"request":"2 >=2.25.0","graceful-fs":"~2.0.0","semver":"^2.2.1","slide":"~1.1.3","chownr":"0","mkdirp":"~0.3.3","rimraf":"~2","retry":"0.6.0","npmlog":""},"devDependencies":{"tap":""},"optionalDependencies":{"npmlog":""},"license":"BSD","bugs":{"url":"https://github.com/isaacs/npm-registry-client/issues"},"homepage":"https://github.com/isaacs/npm-registry-client","_id":"@npm%2fnpm-registry-client@0.4.7","dist":{"shasum":"f4369b59890da7882527eb7c427dd95d43707afb","tarball":"http://registry.npmjs.org/@npm%2fnpm-registry-client/-/@npm%2fnpm-registry-client-0.4.7.tgz"},"_from":".","_npmVersion":"1.4.6","_npmUser":{"name":"isaacs","email":"i@izs.me"},"maintainers":[{"name":"isaacs","email":"i@izs.me"}],"directories":{}},"0.4.8":{"author":{"name":"Isaac Z. Schlueter","email":"i@izs.me","url":"http://blog.izs.me/"},"name":"@npm/npm-registry-client","description":"Client for the npm registry","version":"0.4.8","repository":{"url":"git://github.com/isaacs/npm-registry-client"},"main":"index.js","scripts":{"test":"tap test/*.js"},"dependencies":{"request":"2 >=2.25.0","graceful-fs":"~2.0.0","semver":"^2.2.1","slide":"~1.1.3","chownr":"0","mkdirp":"~0.3.3","rimraf":"~2","retry":"0.6.0","npmlog":""},"devDependencies":{"tap":""},"optionalDependencies":{"npmlog":""},"license":"BSD","bugs":{"url":"https://github.com/isaacs/npm-registry-client/issues"},"homepage":"https://github.com/isaacs/npm-registry-client","_id":"@npm%2fnpm-registry-client@0.4.8","_shasum":"a6685a161033101be6064b7af887ab440e8695d0","_from":".","_npmVersion":"1.4.8","_npmUser":{"name":"isaacs","email":"i@izs.me"},"maintainers":[{"name":"isaacs","email":"i@izs.me"}],"dist":{"shasum":"a6685a161033101be6064b7af887ab440e8695d0","tarball":"http://registry.npmjs.org/@npm%2fnpm-registry-client/-/@npm%2fnpm-registry-client-0.4.8.tgz"},"directories":{}},"0.4.9":{"author":{"name":"Isaac Z. Schlueter","email":"i@izs.me","url":"http://blog.izs.me/"},"name":"@npm/npm-registry-client","description":"Client for the npm registry","version":"0.4.9","repository":{"url":"git://github.com/isaacs/npm-registry-client"},"main":"index.js","scripts":{"test":"tap test/*.js"},"dependencies":{"request":"2 >=2.25.0","graceful-fs":"~2.0.0","semver":"^2.2.1","slide":"~1.1.3","chownr":"0","mkdirp":"~0.3.3","rimraf":"~2","retry":"0.6.0","npmlog":""},"devDependencies":{"tap":""},"optionalDependencies":{"npmlog":""},"license":"BSD","bugs":{"url":"https://github.com/isaacs/npm-registry-client/issues"},"homepage":"https://github.com/isaacs/npm-registry-client","_id":"@npm%2fnpm-registry-client@0.4.9","_shasum":"304d3d4726a58e33d8cc965afdc9ed70b996580c","_from":".","_npmVersion":"1.4.10","_npmUser":{"name":"isaacs","email":"i@izs.me"},"maintainers":[{"name":"isaacs","email":"i@izs.me"}],"dist":{"shasum":"304d3d4726a58e33d8cc965afdc9ed70b996580c","tarball":"http://registry.npmjs.org/@npm%2fnpm-registry-client/-/@npm%2fnpm-registry-client-0.4.9.tgz"},"directories":{}},"0.4.10":{"author":{"name":"Isaac Z. 
Schlueter","email":"i@izs.me","url":"http://blog.izs.me/"},"name":"@npm/npm-registry-client","description":"Client for the npm registry","version":"0.4.10","repository":{"url":"git://github.com/isaacs/npm-registry-client"},"main":"index.js","scripts":{"test":"tap test/*.js"},"dependencies":{"request":"2 >=2.25.0","graceful-fs":"~2.0.0","semver":"^2.2.1","slide":"~1.1.3","chownr":"0","mkdirp":"~0.3.3","rimraf":"~2","retry":"0.6.0","npmlog":""},"devDependencies":{"tap":""},"optionalDependencies":{"npmlog":""},"license":"BSD","bugs":{"url":"https://github.com/isaacs/npm-registry-client/issues"},"homepage":"https://github.com/isaacs/npm-registry-client","_id":"@npm%2fnpm-registry-client@0.4.10","_shasum":"ab7bf1be3ba07d769eaf74dee3c9347e02283116","_from":".","_npmVersion":"1.4.10","_npmUser":{"name":"isaacs","email":"i@izs.me"},"maintainers":[{"name":"isaacs","email":"i@izs.me"}],"dist":{"shasum":"ab7bf1be3ba07d769eaf74dee3c9347e02283116","tarball":"http://registry.npmjs.org/@npm%2fnpm-registry-client/-/@npm%2fnpm-registry-client-0.4.10.tgz"},"directories":{}},"0.4.11":{"author":{"name":"Isaac Z. Schlueter","email":"i@izs.me","url":"http://blog.izs.me/"},"name":"@npm/npm-registry-client","description":"Client for the npm registry","version":"0.4.11","repository":{"url":"git://github.com/isaacs/npm-registry-client"},"main":"index.js","scripts":{"test":"tap test/*.js"},"dependencies":{"request":"2 >=2.25.0","graceful-fs":"~2.0.0","semver":"2 >=2.2.1","slide":"~1.1.3","chownr":"0","mkdirp":"~0.3.3","rimraf":"~2","retry":"0.6.0","npmlog":""},"devDependencies":{"tap":""},"optionalDependencies":{"npmlog":""},"license":"BSD","bugs":{"url":"https://github.com/isaacs/npm-registry-client/issues"},"homepage":"https://github.com/isaacs/npm-registry-client","_id":"@npm%2fnpm-registry-client@0.4.11","_shasum":"032e9b6b050ed052ee9441841a945a184ea6bc33","_from":".","_npmVersion":"1.4.10","_npmUser":{"name":"isaacs","email":"i@izs.me"},"maintainers":[{"name":"isaacs","email":"i@izs.me"}],"dist":{"shasum":"032e9b6b050ed052ee9441841a945a184ea6bc33","tarball":"http://registry.npmjs.org/@npm%2fnpm-registry-client/-/@npm%2fnpm-registry-client-0.4.11.tgz"},"directories":{}},"0.4.12":{"author":{"name":"Isaac Z. Schlueter","email":"i@izs.me","url":"http://blog.izs.me/"},"name":"@npm/npm-registry-client","description":"Client for the npm registry","version":"0.4.12","repository":{"url":"git://github.com/isaacs/npm-registry-client"},"main":"index.js","scripts":{"test":"tap test/*.js"},"dependencies":{"request":"2 >=2.25.0","graceful-fs":"~2.0.0","semver":"2 >=2.2.1","slide":"~1.1.3","chownr":"0","mkdirp":"~0.3.3","rimraf":"~2","retry":"0.6.0","npmlog":""},"devDependencies":{"tap":""},"optionalDependencies":{"npmlog":""},"license":"BSD","bugs":{"url":"https://github.com/isaacs/npm-registry-client/issues"},"homepage":"https://github.com/isaacs/npm-registry-client","_id":"@npm%2fnpm-registry-client@0.4.12","_shasum":"34303422f6a3da93ca3a387a2650d707c8595b99","_from":".","_npmVersion":"1.4.10","_npmUser":{"name":"isaacs","email":"i@izs.me"},"maintainers":[{"name":"isaacs","email":"i@izs.me"}],"dist":{"shasum":"34303422f6a3da93ca3a387a2650d707c8595b99","tarball":"http://registry.npmjs.org/@npm%2fnpm-registry-client/-/@npm%2fnpm-registry-client-0.4.12.tgz"},"directories":{}},"1.0.0":{"author":{"name":"Isaac Z. 
Schlueter","email":"i@izs.me","url":"http://blog.izs.me/"},"name":"@npm/npm-registry-client","description":"Client for the npm registry","version":"1.0.0","repository":{"url":"git://github.com/isaacs/npm-registry-client"},"main":"index.js","scripts":{"test":"tap test/*.js"},"dependencies":{"chownr":"0","graceful-fs":"~2.0.0","mkdirp":"~0.3.3","npm-cache-filename":"^1.0.0","request":"2 >=2.25.0","retry":"0.6.0","rimraf":"~2","semver":"2 >=2.2.1","slide":"~1.1.3","npmlog":""},"devDependencies":{"tap":""},"optionalDependencies":{"npmlog":""},"license":"BSD","bugs":{"url":"https://github.com/isaacs/npm-registry-client/issues"},"homepage":"https://github.com/isaacs/npm-registry-client","_id":"@npm%2fnpm-registry-client@1.0.0","_shasum":"2a6f9dfdce5f8ebf4b9af4dbfd738384d25014e5","_from":".","_npmVersion":"1.4.10","_npmUser":{"name":"isaacs","email":"i@izs.me"},"maintainers":[{"name":"isaacs","email":"i@izs.me"}],"dist":{"shasum":"2a6f9dfdce5f8ebf4b9af4dbfd738384d25014e5","tarball":"http://registry.npmjs.org/@npm%2fnpm-registry-client/-/@npm%2fnpm-registry-client-1.0.0.tgz"},"directories":{}},"1.0.1":{"author":{"name":"Isaac Z. Schlueter","email":"i@izs.me","url":"http://blog.izs.me/"},"name":"@npm/npm-registry-client","description":"Client for the npm registry","version":"1.0.1","repository":{"url":"git://github.com/isaacs/npm-registry-client"},"main":"index.js","scripts":{"test":"tap test/*.js"},"dependencies":{"chownr":"0","graceful-fs":"~2.0.0","mkdirp":"~0.3.3","npm-cache-filename":"^1.0.0","request":"2 >=2.25.0","retry":"0.6.0","rimraf":"~2","semver":"2 >=2.2.1","slide":"~1.1.3","npmlog":""},"devDependencies":{"tap":""},"optionalDependencies":{"npmlog":""},"license":"BSD","gitHead":"98b1278c230cf6c159f189e2f8c69daffa727ab8","bugs":{"url":"https://github.com/isaacs/npm-registry-client/issues"},"homepage":"https://github.com/isaacs/npm-registry-client","_id":"@npm%2fnpm-registry-client@1.0.1","_shasum":"c5f6a87d285f2005a35d3f67d9c724bce551b0f1","_from":".","_npmVersion":"1.4.13","_npmUser":{"name":"isaacs","email":"i@izs.me"},"maintainers":[{"name":"isaacs","email":"i@izs.me"}],"dist":{"shasum":"c5f6a87d285f2005a35d3f67d9c724bce551b0f1","tarball":"http://registry.npmjs.org/@npm%2fnpm-registry-client/-/@npm%2fnpm-registry-client-1.0.1.tgz"},"directories":{}},"2.0.0":{"author":{"name":"Isaac Z. Schlueter","email":"i@izs.me","url":"http://blog.izs.me/"},"name":"@npm/npm-registry-client","description":"Client for the npm registry","version":"2.0.0","repository":{"url":"git://github.com/isaacs/npm-registry-client"},"main":"index.js","scripts":{"test":"tap test/*.js"},"dependencies":{"chownr":"0","graceful-fs":"~2.0.0","mkdirp":"~0.3.3","npm-cache-filename":"^1.0.0","request":"2 >=2.25.0","retry":"0.6.0","rimraf":"~2","semver":"2 >=2.2.1","slide":"~1.1.3","npmlog":""},"devDependencies":{"tap":""},"optionalDependencies":{"npmlog":""},"license":"BSD","gitHead":"47a98069b6a34e751cbd5b84ce92858cae5abe70","bugs":{"url":"https://github.com/isaacs/npm-registry-client/issues"},"homepage":"https://github.com/isaacs/npm-registry-client","_id":"@npm%2fnpm-registry-client@2.0.0","_shasum":"88810dac2d534c0df1d905c79e723392fcfc791a","_from":".","_npmVersion":"1.4.14","_npmUser":{"name":"isaacs","email":"i@izs.me"},"maintainers":[{"name":"isaacs","email":"i@izs.me"}],"dist":{"shasum":"88810dac2d534c0df1d905c79e723392fcfc791a","tarball":"http://registry.npmjs.org/@npm%2fnpm-registry-client/-/@npm%2fnpm-registry-client-2.0.0.tgz"},"directories":{}},"2.0.1":{"author":{"name":"Isaac Z. 
Schlueter","email":"i@izs.me","url":"http://blog.izs.me/"},"name":"@npm/npm-registry-client","description":"Client for the npm registry","version":"2.0.1","repository":{"url":"git://github.com/isaacs/npm-registry-client"},"main":"index.js","scripts":{"test":"tap test/*.js"},"dependencies":{"chownr":"0","graceful-fs":"^3.0.0","mkdirp":"~0.3.3","npm-cache-filename":"^1.0.0","request":"2 >=2.25.0","retry":"0.6.0","rimraf":"~2","semver":"2 >=2.2.1","slide":"~1.1.3","npmlog":""},"devDependencies":{"tap":""},"optionalDependencies":{"npmlog":""},"license":"BSD","gitHead":"123e40131f83f7265f66ecd2a558cce44a3aea86","bugs":{"url":"https://github.com/isaacs/npm-registry-client/issues"},"homepage":"https://github.com/isaacs/npm-registry-client","_id":"@npm%2fnpm-registry-client@2.0.1","_shasum":"611c7cb7c8f7ff22be2ebc6398423b5de10db0e2","_from":".","_npmVersion":"1.4.14","_npmUser":{"name":"isaacs","email":"i@izs.me"},"maintainers":[{"name":"isaacs","email":"i@izs.me"},{"name":"othiym23","email":"ogd@aoaioxxysz.net"}],"dist":{"shasum":"611c7cb7c8f7ff22be2ebc6398423b5de10db0e2","tarball":"http://registry.npmjs.org/@npm%2fnpm-registry-client/-/@npm%2fnpm-registry-client-2.0.1.tgz"},"directories":{}},"2.0.2":{"author":{"name":"Isaac Z. Schlueter","email":"i@izs.me","url":"http://blog.izs.me/"},"name":"@npm/npm-registry-client","description":"Client for the npm registry","version":"2.0.2","repository":{"url":"git://github.com/isaacs/npm-registry-client"},"main":"index.js","scripts":{"test":"tap test/*.js"},"dependencies":{"chownr":"0","graceful-fs":"^3.0.0","mkdirp":"~0.3.3","npm-cache-filename":"^1.0.0","request":"2 >=2.25.0","retry":"0.6.0","rimraf":"~2","semver":"2 >=2.2.1","slide":"~1.1.3","npmlog":""},"devDependencies":{"tap":""},"optionalDependencies":{"npmlog":""},"license":"BSD","gitHead":"6ecc311c9dd4890f2d9b6bae60447070a3321e12","bugs":{"url":"https://github.com/isaacs/npm-registry-client/issues"},"homepage":"https://github.com/isaacs/npm-registry-client","_id":"@npm%2fnpm-registry-client@2.0.2","_shasum":"a82b000354c7f830114fb18444764bc477d5740f","_from":".","_npmVersion":"1.4.15","_npmUser":{"name":"isaacs","email":"i@izs.me"},"maintainers":[{"name":"isaacs","email":"i@izs.me"},{"name":"othiym23","email":"ogd@aoaioxxysz.net"}],"dist":{"shasum":"a82b000354c7f830114fb18444764bc477d5740f","tarball":"http://registry.npmjs.org/@npm%2fnpm-registry-client/-/@npm%2fnpm-registry-client-2.0.2.tgz"},"directories":{}},"3.0.0":{"author":{"name":"Isaac Z. 
Schlueter","email":"i@izs.me","url":"http://blog.izs.me/"},"name":"@npm/npm-registry-client","description":"Client for the npm registry","version":"3.0.0","repository":{"url":"git://github.com/isaacs/npm-registry-client"},"main":"index.js","scripts":{"test":"tap test/*.js"},"dependencies":{"chownr":"0","graceful-fs":"^3.0.0","mkdirp":"~0.3.3","normalize-package-data":"^0.4.0","npm-cache-filename":"^1.0.0","request":"2 >=2.25.0","retry":"0.6.0","rimraf":"~2","semver":"2 >=2.2.1","slide":"~1.1.3","npmlog":""},"devDependencies":{"tap":""},"optionalDependencies":{"npmlog":""},"license":"BSD","gitHead":"6bb1aec1e85fa82ee075bd997d6fb9f2dbb7f643","bugs":{"url":"https://github.com/isaacs/npm-registry-client/issues"},"homepage":"https://github.com/isaacs/npm-registry-client","_id":"@npm%2fnpm-registry-client@3.0.0","_shasum":"4febc5cdb274e9fa06bc3008910e3fa1ec007994","_from":".","_npmVersion":"1.5.0-pre","_npmUser":{"name":"othiym23","email":"ogd@aoaioxxysz.net"},"maintainers":[{"name":"isaacs","email":"i@izs.me"},{"name":"othiym23","email":"ogd@aoaioxxysz.net"}],"dist":{"shasum":"4febc5cdb274e9fa06bc3008910e3fa1ec007994","tarball":"http://registry.npmjs.org/@npm%2fnpm-registry-client/-/@npm%2fnpm-registry-client-3.0.0.tgz"},"directories":{}},"3.0.1":{"author":{"name":"Isaac Z. Schlueter","email":"i@izs.me","url":"http://blog.izs.me/"},"name":"@npm/npm-registry-client","description":"Client for the npm registry","version":"3.0.1","repository":{"url":"git://github.com/isaacs/npm-registry-client"},"main":"index.js","scripts":{"test":"tap test/*.js"},"dependencies":{"chownr":"0","graceful-fs":"^3.0.0","mkdirp":"~0.3.3","normalize-package-data":"^0.4.0","npm-cache-filename":"^1.0.0","request":"2 >=2.25.0","retry":"0.6.0","rimraf":"~2","semver":"2 >=2.2.1","slide":"~1.1.3","npmlog":""},"devDependencies":{"tap":""},"optionalDependencies":{"npmlog":""},"license":"BSD","gitHead":"fe8382dde609ea1e3580fcdc5bc3d0bba119cfc6","bugs":{"url":"https://github.com/isaacs/npm-registry-client/issues"},"homepage":"https://github.com/isaacs/npm-registry-client","_id":"@npm%2fnpm-registry-client@3.0.1","_shasum":"5f3ee362ce5c237cfb798fce22c77875fc1a63c2","_from":".","_npmVersion":"1.5.0-alpha-1","_npmUser":{"name":"othiym23","email":"ogd@aoaioxxysz.net"},"maintainers":[{"name":"isaacs","email":"i@izs.me"},{"name":"othiym23","email":"ogd@aoaioxxysz.net"}],"dist":{"shasum":"5f3ee362ce5c237cfb798fce22c77875fc1a63c2","tarball":"http://registry.npmjs.org/@npm%2fnpm-registry-client/-/@npm%2fnpm-registry-client-3.0.1.tgz"},"directories":{}},"2.0.3":{"author":{"name":"Isaac Z. 
Schlueter","email":"i@izs.me","url":"http://blog.izs.me/"},"name":"@npm/npm-registry-client","description":"Client for the npm registry","version":"2.0.3","repository":{"url":"git://github.com/isaacs/npm-registry-client"},"main":"index.js","scripts":{"test":"tap test/*.js"},"dependencies":{"chownr":"0","graceful-fs":"^3.0.0","mkdirp":"~0.3.3","npm-cache-filename":"^1.0.0","request":"2 >=2.25.0","retry":"0.6.0","rimraf":"~2","semver":"2 >=2.2.1","slide":"~1.1.3","npmlog":""},"devDependencies":{"tap":""},"optionalDependencies":{"npmlog":""},"license":"BSD","gitHead":"2578fb9a807d77417554ba235ba8fac39405e832","bugs":{"url":"https://github.com/isaacs/npm-registry-client/issues"},"homepage":"https://github.com/isaacs/npm-registry-client","_id":"@npm%2fnpm-registry-client@2.0.3","_shasum":"93dad3d9a162c99404badb71739c622c0f3b9a72","_from":".","_npmVersion":"1.5.0-alpha-1","_npmUser":{"name":"othiym23","email":"ogd@aoaioxxysz.net"},"maintainers":[{"name":"isaacs","email":"i@izs.me"},{"name":"othiym23","email":"ogd@aoaioxxysz.net"}],"dist":{"shasum":"93dad3d9a162c99404badb71739c622c0f3b9a72","tarball":"http://registry.npmjs.org/@npm%2fnpm-registry-client/-/@npm%2fnpm-registry-client-2.0.3.tgz"},"directories":{}},"3.0.2":{"author":{"name":"Isaac Z. Schlueter","email":"i@izs.me","url":"http://blog.izs.me/"},"name":"@npm/npm-registry-client","description":"Client for the npm registry","version":"3.0.2","repository":{"url":"git://github.com/isaacs/npm-registry-client"},"main":"index.js","scripts":{"test":"tap test/*.js"},"dependencies":{"chownr":"0","graceful-fs":"^3.0.0","mkdirp":"~0.3.3","normalize-package-data":"^0.4.0","npm-cache-filename":"^1.0.0","request":"2 >=2.25.0","retry":"0.6.0","rimraf":"~2","semver":"2 >=2.2.1","slide":"~1.1.3","npmlog":""},"devDependencies":{"tap":""},"optionalDependencies":{"npmlog":""},"license":"BSD","gitHead":"15343019160ace0b9874cf0ec186b3425dbc7301","bugs":{"url":"https://github.com/isaacs/npm-registry-client/issues"},"homepage":"https://github.com/isaacs/npm-registry-client","_id":"@npm%2fnpm-registry-client@3.0.2","_shasum":"5dd0910157ce55f4286a1871d39f9a2128cd3c99","_from":".","_npmVersion":"1.5.0-alpha-2","_npmUser":{"name":"othiym23","email":"ogd@aoaioxxysz.net"},"maintainers":[{"name":"isaacs","email":"i@izs.me"},{"name":"othiym23","email":"ogd@aoaioxxysz.net"}],"dist":{"shasum":"5dd0910157ce55f4286a1871d39f9a2128cd3c99","tarball":"http://registry.npmjs.org/@npm%2fnpm-registry-client/-/@npm%2fnpm-registry-client-3.0.2.tgz"},"directories":{}},"3.0.3":{"author":{"name":"Isaac Z. 
Schlueter","email":"i@izs.me","url":"http://blog.izs.me/"},"name":"@npm/npm-registry-client","description":"Client for the npm registry","version":"3.0.3","repository":{"url":"git://github.com/isaacs/npm-registry-client"},"main":"index.js","scripts":{"test":"tap test/*.js"},"dependencies":{"chownr":"0","graceful-fs":"^3.0.0","mkdirp":"~0.3.3","normalize-package-data":"^0.4.0","npm-cache-filename":"^1.0.0","request":"2 >=2.25.0","retry":"0.6.0","rimraf":"~2","semver":"2 >=2.2.1 || 3.x","slide":"~1.1.3","npmlog":""},"devDependencies":{"tap":""},"optionalDependencies":{"npmlog":""},"license":"BSD","gitHead":"b18a780d1185f27c06c27812147b83aba0d4a2f5","bugs":{"url":"https://github.com/isaacs/npm-registry-client/issues"},"homepage":"https://github.com/isaacs/npm-registry-client","_id":"@npm%2fnpm-registry-client@3.0.3","_shasum":"2377dc1cf69b4d374b3a95fb7feba8c804d8cb30","_from":".","_npmVersion":"2.0.0-alpha-5","_npmUser":{"name":"isaacs","email":"i@izs.me"},"maintainers":[{"name":"isaacs","email":"i@izs.me"},{"name":"othiym23","email":"ogd@aoaioxxysz.net"}],"dist":{"shasum":"2377dc1cf69b4d374b3a95fb7feba8c804d8cb30","tarball":"http://registry.npmjs.org/@npm%2fnpm-registry-client/-/@npm%2fnpm-registry-client-3.0.3.tgz"},"directories":{}},"3.0.4":{"author":{"name":"Isaac Z. Schlueter","email":"i@izs.me","url":"http://blog.izs.me/"},"name":"@npm/npm-registry-client","description":"Client for the npm registry","version":"3.0.4","repository":{"url":"git://github.com/isaacs/npm-registry-client"},"main":"index.js","scripts":{"test":"tap test/*.js"},"dependencies":{"chownr":"0","graceful-fs":"^3.0.0","mkdirp":"~0.5.0","normalize-package-data":"^0.4.0","npm-cache-filename":"^1.0.0","request":"2 >=2.25.0","retry":"0.6.0","rimraf":"~2","semver":"2 >=2.2.1 || 3.x","slide":"~1.1.3","npmlog":""},"devDependencies":{"tap":""},"optionalDependencies":{"npmlog":""},"license":"BSD","gitHead":"54900fe4b2eb5b99ee6dfe173f145732fdfae80e","bugs":{"url":"https://github.com/isaacs/npm-registry-client/issues"},"homepage":"https://github.com/isaacs/npm-registry-client","_id":"@npm%2fnpm-registry-client@3.0.4","_shasum":"d4a177d1f25615cfaef9b6844fa366ffbf5f578a","_from":".","_npmVersion":"2.0.0-alpha-5","_npmUser":{"name":"isaacs","email":"i@izs.me"},"maintainers":[{"name":"isaacs","email":"i@izs.me"},{"name":"othiym23","email":"ogd@aoaioxxysz.net"}],"dist":{"shasum":"d4a177d1f25615cfaef9b6844fa366ffbf5f578a","tarball":"http://registry.npmjs.org/@npm%2fnpm-registry-client/-/@npm%2fnpm-registry-client-3.0.4.tgz"},"directories":{}},"3.0.5":{"author":{"name":"Isaac Z. 
Schlueter","email":"i@izs.me","url":"http://blog.izs.me/"},"name":"@npm/npm-registry-client","description":"Client for the npm registry","version":"3.0.5","repository":{"url":"git://github.com/isaacs/npm-registry-client"},"main":"index.js","scripts":{"test":"tap test/*.js"},"dependencies":{"chownr":"0","graceful-fs":"^3.0.0","mkdirp":"0.5","normalize-package-data":"0.4","npm-cache-filename":"^1.0.0","request":"2 >=2.25.0","retry":"0.6.0","rimraf":"2","semver":"2 >=2.2.1 || 3.x","slide":"^1.1.3","npmlog":""},"devDependencies":{"tap":""},"optionalDependencies":{"npmlog":""},"license":"BSD","gitHead":"635db1654346bc86473df7b39626601425f46177","bugs":{"url":"https://github.com/isaacs/npm-registry-client/issues"},"homepage":"https://github.com/isaacs/npm-registry-client","_id":"@npm%2fnpm-registry-client@3.0.5","_shasum":"cdabaefa399b81ac8a86a48718aefd80e7b19ff3","_from":".","_npmVersion":"2.0.0-alpha-5","_npmUser":{"name":"isaacs","email":"i@izs.me"},"maintainers":[{"name":"isaacs","email":"i@izs.me"},{"name":"othiym23","email":"ogd@aoaioxxysz.net"}],"dist":{"shasum":"cdabaefa399b81ac8a86a48718aefd80e7b19ff3","tarball":"http://registry.npmjs.org/@npm%2fnpm-registry-client/-/@npm%2fnpm-registry-client-3.0.5.tgz"},"directories":{}},"3.0.6":{"author":{"name":"Isaac Z. Schlueter","email":"i@izs.me","url":"http://blog.izs.me/"},"name":"@npm/npm-registry-client","description":"Client for the npm registry","version":"3.0.6","repository":{"url":"git://github.com/isaacs/npm-registry-client"},"main":"index.js","scripts":{"test":"tap test/*.js"},"dependencies":{"chownr":"0","graceful-fs":"^3.0.0","mkdirp":"^0.5.0","normalize-package-data":"0.4","npm-cache-filename":"^1.0.0","request":"2 >=2.25.0","retry":"0.6.0","rimraf":"2","semver":"2 >=2.2.1 || 3.x","slide":"^1.1.3","npmlog":""},"devDependencies":{"tap":""},"optionalDependencies":{"npmlog":""},"license":"ISC","gitHead":"eba30fadd724ed5cad1aec95ac3ee907a59b7317","bugs":{"url":"https://github.com/isaacs/npm-registry-client/issues"},"homepage":"https://github.com/isaacs/npm-registry-client","_id":"@npm%2fnpm-registry-client@3.0.6","_shasum":"14a17d9a60ed2a80b04edcbc596dbce0d96540ee","_from":".","_npmVersion":"1.4.22","_npmUser":{"name":"isaacs","email":"i@izs.me"},"maintainers":[{"name":"isaacs","email":"i@izs.me"},{"name":"othiym23","email":"ogd@aoaioxxysz.net"}],"dist":{"shasum":"14a17d9a60ed2a80b04edcbc596dbce0d96540ee","tarball":"http://registry.npmjs.org/@npm%2fnpm-registry-client/-/@npm%2fnpm-registry-client-3.0.6.tgz"},"directories":{}},"2.0.4":{"author":{"name":"Isaac Z. 
Schlueter","email":"i@izs.me","url":"http://blog.izs.me/"},"name":"@npm/npm-registry-client","description":"Client for the npm registry","version":"2.0.4","repository":{"url":"git://github.com/isaacs/npm-registry-client"},"main":"index.js","scripts":{"test":"tap test/*.js"},"dependencies":{"chownr":"0","graceful-fs":"^3.0.0","mkdirp":"^0.5.0","npm-cache-filename":"^1.0.0","request":"2 >=2.25.0","retry":"0.6.0","rimraf":"~2","semver":"2 >=2.2.1","slide":"~1.1.3","npmlog":""},"devDependencies":{"tap":""},"optionalDependencies":{"npmlog":""},"license":"ISC","gitHead":"a10f621d9cdc813b9d3092a14b661f65bfa6d40d","bugs":{"url":"https://github.com/isaacs/npm-registry-client/issues"},"homepage":"https://github.com/isaacs/npm-registry-client","_id":"@npm%2fnpm-registry-client@2.0.4","_shasum":"528e08900d7655c12096d1637d1c3a7a5b451019","_from":".","_npmVersion":"1.4.22","_npmUser":{"name":"isaacs","email":"i@izs.me"},"maintainers":[{"name":"isaacs","email":"i@izs.me"},{"name":"othiym23","email":"ogd@aoaioxxysz.net"}],"dist":{"shasum":"528e08900d7655c12096d1637d1c3a7a5b451019","tarball":"http://registry.npmjs.org/@npm%2fnpm-registry-client/-/@npm%2fnpm-registry-client-2.0.4.tgz"},"directories":{}}},"readme":"# npm-registry-client\u000a\u000aThe code that npm uses to talk to the registry.\u000a\u000aIt handles all the caching and HTTP calls.\u000a\u000a## Usage\u000a\u000a```javascript\u000avar RegClient = require('npm-registry-client')\u000avar client = new RegClient(config)\u000avar uri = \"npm://registry.npmjs.org/npm\"\u000avar options = {timeout: 1000}\u000a\u000aclient.get(uri, options, function (error, data, raw, res) {\u000a // error is an error if there was a problem.\u000a // data is the parsed data object\u000a // raw is the json string\u000a // res is the response from couch\u000a})\u000a```\u000a\u000a# Registry URLs\u000a\u000aThe registry calls take either a full URL pointing to a resource in the\u000aregistry, or a base URL for the registry as a whole (for the base URL, any path\u000awill be ignored). In addition to `http` and `https`, `npm` URLs are allowed.\u000a`npm` URLs are `https` URLs with the additional restrictions that they will\u000aalways include authorization credentials, and the response is always registry\u000ametadata (and not tarballs or other attachments).\u000a\u000a# Configuration\u000a\u000aThis program is designed to work with\u000a[npmconf](https://npmjs.org/package/npmconf), but you can also pass in\u000aa plain-jane object with the appropriate configs, and it'll shim it\u000afor you. Any configuration thingie that has get/set/del methods will\u000aalso be accepted.\u000a\u000a* `cache` **Required** {String} Path to the cache folder\u000a* `always-auth` {Boolean} Auth even for GET requests.\u000a* `auth` {String} A base64-encoded `username:password`\u000a* `email` {String} User's email address\u000a* `tag` {String} The default tag to use when publishing new packages.\u000a Default = `\"latest\"`\u000a* `ca` {String} Cerficate signing authority certificates to trust.\u000a* `cert` {String} Client certificate (PEM encoded). Enable access\u000a to servers that require client certificates\u000a* `key` {String} Private key (PEM encoded) for client certificate 'cert'\u000a* `strict-ssl` {Boolean} Whether or not to be strict with SSL\u000a certificates. Default = `true`\u000a* `user-agent` {String} User agent header to send. Default =\u000a `\"node/{process.version} {process.platform} {process.arch}\"`\u000a* `log` {Object} The logger to use. 
Defaults to `require(\"npmlog\")` if\u000a that works, otherwise logs are disabled.\u000a* `fetch-retries` {Number} Number of times to retry on GET failures.\u000a Default=2\u000a* `fetch-retry-factor` {Number} `factor` setting for `node-retry`. Default=10\u000a* `fetch-retry-mintimeout` {Number} `minTimeout` setting for `node-retry`.\u000a Default=10000 (10 seconds)\u000a* `fetch-retry-maxtimeout` {Number} `maxTimeout` setting for `node-retry`.\u000a Default=60000 (60 seconds)\u000a* `proxy` {URL} The url to proxy requests through.\u000a* `https-proxy` {URL} The url to proxy https requests through.\u000a Defaults to be the same as `proxy` if unset.\u000a* `_auth` {String} The base64-encoded authorization header.\u000a* `username` `_password` {String} Username/password to use to generate\u000a `_auth` if not supplied.\u000a* `_token` {Object} A token for use with\u000a [couch-login](https://npmjs.org/package/couch-login)\u000a\u000a# client.request(method, uri, options, cb)\u000a\u000a* `method` {String} HTTP method\u000a* `uri` {String} URI pointing to the resource to request\u000a* `options` {Object} Object containing optional per-request properties.\u000a * `what` {Stream | Buffer | String | Object} The request body. Objects\u000a that are not Buffers or Streams are encoded as JSON.\u000a * `etag` {String} The cached ETag\u000a * `follow` {Boolean} Follow 302/301 responses (defaults to true)\u000a* `cb` {Function}\u000a * `error` {Error | null}\u000a * `data` {Object} the parsed data object\u000a * `raw` {String} the json\u000a * `res` {Response Object} response from couch\u000a\u000aMake a request to the registry. All the other methods are wrappers around\u000a`request`.\u000a\u000a# client.adduser(base, username, password, email, cb)\u000a\u000a* `base` {String} Base registry URL\u000a* `username` {String}\u000a* `password` {String}\u000a* `email` {String}\u000a* `cb` {Function}\u000a\u000aAdd a user account to the registry, or verify the credentials.\u000a\u000a# client.deprecate(uri, version, message, cb)\u000a\u000a* `uri` {String} Full registry URI for the deprecated package\u000a* `version` {String} Semver version range\u000a* `message` {String} The message to use as a deprecation warning\u000a* `cb` {Function}\u000a\u000aDeprecate a version of a package in the registry.\u000a\u000a# client.bugs(uri, cb)\u000a\u000a* `uri` {String} Full registry URI for the package\u000a* `cb` {Function}\u000a\u000aGet the url for bugs of a package\u000a\u000a# client.get(uri, options, cb)\u000a\u000a* `uri` {String} The complete registry URI to fetch\u000a* `options` {Object} Object containing optional per-request properties.\u000a * `timeout` {Number} Duration before the request times out.\u000a * `follow` {Boolean} Follow 302/301 responses (defaults to true)\u000a * `staleOk` {Boolean} If there's cached data available, then return that\u000a to the callback quickly, and update the cache the background.\u000a\u000aFetches data from the registry via a GET request, saving it in the cache folder\u000awith the ETag.\u000a\u000a# client.publish(uri, data, tarball, cb)\u000a\u000a* `uri` {String} The registry URI to publish to\u000a* `data` {Object} Package data\u000a* `tarball` {String | Stream} Filename or stream of the package tarball\u000a* `cb` {Function}\u000a\u000aPublish a package to the registry.\u000a\u000aNote that this does not create the tarball from a folder. 
However, it can\u000aaccept a gzipped tar stream or a filename to a tarball.\u000a\u000a# client.star(uri, starred, cb)\u000a\u000a* `uri` {String} The complete registry URI to star\u000a* `starred` {Boolean} True to star the package, false to unstar it.\u000a* `cb` {Function}\u000a\u000aStar or unstar a package.\u000a\u000aNote that the user does not have to be the package owner to star or unstar a\u000apackage, though other writes do require that the user be the package owner.\u000a\u000a# client.stars(base, username, cb)\u000a\u000a* `base` {String} The base URL for the registry\u000a* `username` {String} Name of user to fetch starred packages for.\u000a* `cb` {Function}\u000a\u000aView your own or another user's starred packages.\u000a\u000a# client.tag(uri, version, tag, cb)\u000a\u000a* `uri` {String} The complete registry URI to tag\u000a* `version` {String} Version to tag\u000a* `tag` {String} Tag name to apply\u000a* `cb` {Function}\u000a\u000aMark a version in the `dist-tags` hash, so that `pkg@tag` will fetch the\u000aspecified version.\u000a\u000a# client.unpublish(uri, [ver], cb)\u000a\u000a* `uri` {String} The complete registry URI to unpublish\u000a* `ver` {String} version to unpublish. Leave blank to unpublish all\u000a versions.\u000a* `cb` {Function}\u000a\u000aRemove a version of a package (or all versions) from the registry. When the\u000alast version us unpublished, the entire document is removed from the database.\u000a\u000a# client.upload(uri, file, [etag], [nofollow], cb)\u000a\u000a* `uri` {String} The complete registry URI to upload to\u000a* `file` {String | Stream} Either the filename or a readable stream\u000a* `etag` {String} Cache ETag\u000a* `nofollow` {Boolean} Do not follow 301/302 responses\u000a* `cb` {Function}\u000a\u000aUpload an attachment. 
Mostly used by `client.publish()`.\u000a","maintainers":[{"name":"isaacs","email":"i@izs.me"},{"name":"othiym23","email":"ogd@aoaioxxysz.net"}],"time":{"modified":"2014-07-31T21:59:52.896Z","created":"2012-06-07T04:43:36.581Z","0.0.1":"2012-06-07T04:43:38.123Z","0.0.2":"2012-06-07T05:35:05.937Z","0.0.3":"2012-06-09T00:55:25.861Z","0.0.4":"2012-06-11T03:53:26.548Z","0.0.5":"2012-06-11T23:48:11.235Z","0.0.6":"2012-06-17T06:23:27.320Z","0.0.7":"2012-06-18T19:19:38.315Z","0.0.8":"2012-06-28T20:40:20.563Z","0.0.9":"2012-07-10T03:28:04.651Z","0.0.10":"2012-07-11T17:03:45.151Z","0.0.11":"2012-07-17T14:06:37.489Z","0.1.0":"2012-07-23T18:17:38.007Z","0.1.1":"2012-07-23T21:21:28.196Z","0.1.2":"2012-07-24T06:14:12.831Z","0.1.3":"2012-08-07T02:02:20.564Z","0.1.4":"2012-08-15T03:04:52.822Z","0.1.5":"2012-08-17T21:59:33.310Z","0.2.0":"2012-08-17T22:00:18.081Z","0.2.1":"2012-08-17T22:07:28.827Z","0.2.2":"2012-08-17T22:37:24.352Z","0.2.3":"2012-08-19T19:16:44.808Z","0.2.4":"2012-08-19T19:18:51.792Z","0.2.5":"2012-08-20T16:54:50.794Z","0.2.6":"2012-08-22T00:25:04.766Z","0.2.7":"2012-08-27T19:07:34.829Z","0.2.8":"2012-10-02T19:53:50.661Z","0.2.9":"2012-10-03T22:09:50.766Z","0.2.10":"2012-10-25T14:55:54.216Z","0.2.11":"2012-12-21T16:26:38.094Z","0.2.12":"2013-01-18T22:22:41.668Z","0.2.13":"2013-02-06T00:16:35.939Z","0.2.14":"2013-02-10T02:44:02.764Z","0.2.15":"2013-02-11T19:18:55.678Z","0.2.16":"2013-02-15T17:09:03.249Z","0.2.17":"2013-02-16T03:47:13.898Z","0.2.18":"2013-03-06T22:09:23.536Z","0.2.19":"2013-03-20T06:27:39.128Z","0.2.20":"2013-03-28T00:43:07.558Z","0.2.21":"2013-04-29T15:46:54.094Z","0.2.22":"2013-04-29T15:51:02.178Z","0.2.23":"2013-05-11T00:28:14.198Z","0.2.24":"2013-05-24T21:27:50.693Z","0.2.25":"2013-06-20T15:36:46.277Z","0.2.26":"2013-07-06T17:12:54.670Z","0.2.27":"2013-07-11T07:14:45.740Z","0.2.28":"2013-08-02T20:27:41.732Z","0.2.29":"2013-10-28T18:23:24.477Z","0.2.30":"2013-11-18T23:12:00.540Z","0.2.31":"2013-12-16T08:36:43.044Z","0.3.0":"2013-12-17T07:03:10.699Z","0.3.1":"2013-12-17T16:53:27.867Z","0.3.2":"2013-12-17T22:25:14.882Z","0.3.3":"2013-12-21T16:07:06.773Z","0.3.4":"2014-01-29T15:24:05.163Z","0.3.5":"2014-01-31T01:53:19.656Z","0.3.6":"2014-02-07T00:17:21.362Z","0.4.0":"2014-02-13T01:17:18.973Z","0.4.1":"2014-02-13T23:47:37.892Z","0.4.2":"2014-02-14T00:29:13.086Z","0.4.3":"2014-02-16T03:40:54.640Z","0.4.4":"2014-02-16T03:41:48.856Z","0.4.5":"2014-03-12T05:09:17.474Z","0.4.6":"2014-03-29T19:44:15.041Z","0.4.7":"2014-04-02T19:41:07.149Z","0.4.8":"2014-05-01T22:24:54.980Z","0.4.9":"2014-05-12T21:52:55.127Z","0.4.10":"2014-05-13T16:44:29.801Z","0.4.11":"2014-05-13T20:33:04.738Z","0.4.12":"2014-05-14T06:14:22.842Z","1.0.0":"2014-05-14T23:04:37.188Z","1.0.1":"2014-06-03T00:55:54.448Z","2.0.0":"2014-06-06T04:23:46.579Z","2.0.1":"2014-06-06T06:25:14.419Z","2.0.2":"2014-06-14T00:33:10.205Z","3.0.0":"2014-07-02T00:30:29.154Z","3.0.1":"2014-07-14T23:29:05.057Z","2.0.3":"2014-07-15T00:09:36.043Z","3.0.2":"2014-07-17T06:30:02.659Z","3.0.3":"2014-07-23T21:20:42.406Z","3.0.4":"2014-07-25T00:27:26.007Z","3.0.5":"2014-07-25T00:28:48.007Z","3.0.6":"2014-07-31T21:57:49.043Z","2.0.4":"2014-07-31T21:59:52.896Z"},"author":{"name":"Isaac Z. 
Schlueter","email":"i@izs.me","url":"http://blog.izs.me/"},"repository":{"url":"git://github.com/isaacs/npm-registry-client"},"users":{"fgribreau":true,"fengmk2":true},"readmeFilename":"README.md","homepage":"https://github.com/isaacs/npm-registry-client","bugs":{"url":"https://github.com/isaacs/npm-registry-client/issues"},"license":"ISC","_attachments":{}} diff --git a/deps/npm/node_modules/npm-registry-client/test/fixtures/underscore/1.3.3/cache.json b/deps/npm/node_modules/npm-registry-client/test/fixtures/underscore/1.3.3/cache.json new file mode 100644 index 00000000000..01da3002763 --- /dev/null +++ b/deps/npm/node_modules/npm-registry-client/test/fixtures/underscore/1.3.3/cache.json @@ -0,0 +1 @@ +{"name":"underscore","description":"JavaScript's functional programming helper library.","homepage":"http://documentcloud.github.com/underscore/","keywords":["util","functional","server","client","browser"],"author":{"name":"Jeremy Ashkenas","email":"jeremy@documentcloud.org"},"repository":{"type":"git","url":"git://github.com/documentcloud/underscore.git"},"main":"underscore.js","version":"1.3.3","_npmUser":{"name":"jashkenas","email":"jashkenas@gmail.com"},"_id":"underscore@1.3.3","dependencies":{},"devDependencies":{},"optionalDependencies":{},"engines":{"node":"*"},"_engineSupported":true,"_npmVersion":"1.1.1","_nodeVersion":"v0.6.11","_defaultsLoaded":true,"dist":{"shasum":"47ac53683daf832bfa952e1774417da47817ae42","tarball":"http://registry.npmjs.org/underscore/-/underscore-1.3.3.tgz"},"readme":" __ \n /\\ \\ __ \n __ __ ___ \\_\\ \\ __ _ __ ____ ___ ___ _ __ __ /\\_\\ ____ \n /\\ \\/\\ \\ /' _ `\\ /'_ \\ /'__`\\/\\ __\\/ ,__\\ / ___\\ / __`\\/\\ __\\/'__`\\ \\/\\ \\ /',__\\ \n \\ \\ \\_\\ \\/\\ \\/\\ \\/\\ \\ \\ \\/\\ __/\\ \\ \\//\\__, `\\/\\ \\__//\\ \\ \\ \\ \\ \\//\\ __/ __ \\ \\ \\/\\__, `\\\n \\ \\____/\\ \\_\\ \\_\\ \\___,_\\ \\____\\\\ \\_\\\\/\\____/\\ \\____\\ \\____/\\ \\_\\\\ \\____\\/\\_\\ _\\ \\ \\/\\____/\n \\/___/ \\/_/\\/_/\\/__,_ /\\/____/ \\/_/ \\/___/ \\/____/\\/___/ \\/_/ \\/____/\\/_//\\ \\_\\ \\/___/ \n \\ \\____/ \n \\/___/\n \nUnderscore.js is a utility-belt library for JavaScript that provides \nsupport for the usual functional suspects (each, map, reduce, filter...) 
\nwithout extending any core JavaScript objects.\n\nFor Docs, License, Tests, and pre-packed downloads, see:\nhttp://documentcloud.github.com/underscore/\n\nMany thanks to our contributors:\nhttps://github.com/documentcloud/underscore/contributors\n","maintainers":[{"name":"documentcloud","email":"jeremy@documentcloud.org"},{"name":"jashkenas","email":"jashkenas@gmail.com"}],"directories":{}}
\ No newline at end of file
diff --git a/deps/npm/node_modules/npm-registry-client/test/fixtures/underscore/1.3.3/package.tgz b/deps/npm/node_modules/npm-registry-client/test/fixtures/underscore/1.3.3/package.tgz
new file mode 100644
index 00000000000..19da9baa7fb
Binary files /dev/null and b/deps/npm/node_modules/npm-registry-client/test/fixtures/underscore/1.3.3/package.tgz differ
diff --git a/deps/npm/node_modules/npm-registry-client/test/fixtures/underscore/cache.json b/deps/npm/node_modules/npm-registry-client/test/fixtures/underscore/cache.json
new file mode 100644
index 00000000000..d899f11922a
--- /dev/null
+++ b/deps/npm/node_modules/npm-registry-client/test/fixtures/underscore/cache.json
@@ -0,0 +1 @@
+{"_id":"underscore","_rev":"72-47f2986bfd8e8b55068b204588bbf484","name":"underscore","description":"JavaScript's functional programming helper library.","dist-tags":{"latest":"1.3.3","stable":"1.3.3"},"versions":{"1.0.3":{"name":"underscore","description":"Functional programming aid for JavaScript. Works well with jQuery.","url":"http://documentcloud.github.com/underscore/","keywords":["util","functional","server","client","browser"],"author":{"name":"Jeremy Ashkenas","email":"jeremy@documentcloud.org"},"contributors":[],"dependencies":{},"lib":".","main":"underscore","version":"1.0.3","_id":"underscore@1.0.3","engines":{"node":"*"},"_nodeSupported":true,"_npmVersion":"0.2.7-2","_nodeVersion":"v0.3.1-pre","dist":{"tarball":"http://registry.npmjs.org/underscore/-/underscore-1.0.3.tgz"},"directories":{},"_npmUser":{"name":"jashkenas","email":"jashkenas@gmail.com"},"maintainers":[{"name":"documentcloud","email":"jeremy@documentcloud.org"},{"name":"jashkenas","email":"jashkenas@gmail.com"}]},"1.0.4":{"name":"underscore","description":"Functional programming aid for JavaScript. Works well with jQuery.","url":"http://documentcloud.github.com/underscore/","keywords":["util","functional","server","client","browser"],"author":{"name":"Jeremy Ashkenas","email":"jeremy@documentcloud.org"},"contributors":[],"dependencies":{},"lib":".","main":"underscore","version":"1.0.4","_id":"underscore@1.0.4","engines":{"node":"*"},"_nodeSupported":true,"_npmVersion":"0.2.7-2","_nodeVersion":"v0.3.1-pre","dist":{"tarball":"http://registry.npmjs.org/underscore/-/underscore-1.0.4.tgz"},"directories":{},"_npmUser":{"name":"jashkenas","email":"jashkenas@gmail.com"},"maintainers":[{"name":"documentcloud","email":"jeremy@documentcloud.org"},{"name":"jashkenas","email":"jashkenas@gmail.com"}]},"1.1.0":{"name":"underscore","description":"Functional programming aid for JavaScript. 
Works well with jQuery.","url":"http://documentcloud.github.com/underscore/","keywords":["util","functional","server","client","browser"],"author":{"name":"Jeremy Ashkenas","email":"jeremy@documentcloud.org"},"contributors":[],"dependencies":{},"lib":".","main":"underscore","version":"1.1.0","_id":"underscore@1.1.0","engines":{"node":"*"},"_nodeSupported":true,"_npmVersion":"0.2.7-2","_nodeVersion":"v0.3.1-pre","dist":{"tarball":"http://registry.npmjs.org/underscore/-/underscore-1.1.0.tgz"},"directories":{},"_npmUser":{"name":"jashkenas","email":"jashkenas@gmail.com"},"maintainers":[{"name":"documentcloud","email":"jeremy@documentcloud.org"},{"name":"jashkenas","email":"jashkenas@gmail.com"}]},"1.1.1":{"name":"underscore","description":"Functional programming aid for JavaScript. Works well with jQuery.","url":"http://documentcloud.github.com/underscore/","keywords":["util","functional","server","client","browser"],"author":{"name":"Jeremy Ashkenas","email":"jeremy@documentcloud.org"},"contributors":[],"dependencies":{},"lib":".","main":"underscore","version":"1.1.1","_id":"underscore@1.1.1","engines":{"node":"*"},"_nodeSupported":true,"_npmVersion":"0.2.7-2","_nodeVersion":"v0.3.1-pre","dist":{"tarball":"http://registry.npmjs.org/underscore/-/underscore-1.1.1.tgz"},"directories":{},"_npmUser":{"name":"jashkenas","email":"jashkenas@gmail.com"},"maintainers":[{"name":"documentcloud","email":"jeremy@documentcloud.org"},{"name":"jashkenas","email":"jashkenas@gmail.com"}]},"1.1.2":{"name":"underscore","description":"Functional programming aid for JavaScript. Works well with jQuery.","url":"http://documentcloud.github.com/underscore/","keywords":["util","functional","server","client","browser"],"author":{"name":"Jeremy Ashkenas","email":"jeremy@documentcloud.org"},"contributors":[],"dependencies":{},"lib":".","main":"underscore","version":"1.1.2","_id":"underscore@1.1.2","engines":{"node":"*"},"_nodeSupported":true,"_npmVersion":"0.2.7-2","_nodeVersion":"v0.3.1-pre","dist":{"tarball":"http://registry.npmjs.org/underscore/-/underscore-1.1.2.tgz"},"directories":{},"_npmUser":{"name":"jashkenas","email":"jashkenas@gmail.com"},"maintainers":[{"name":"documentcloud","email":"jeremy@documentcloud.org"},{"name":"jashkenas","email":"jashkenas@gmail.com"}]},"1.1.3":{"name":"underscore","description":"Functional programming aid for JavaScript. Works well with jQuery.","url":"http://documentcloud.github.com/underscore/","keywords":["util","functional","server","client","browser"],"author":{"name":"Jeremy Ashkenas","email":"jeremy@documentcloud.org"},"contributors":[],"dependencies":{},"lib":".","main":"underscore","version":"1.1.3","_id":"underscore@1.1.3","engines":{"node":"*"},"_nodeSupported":true,"_npmVersion":"0.2.8-1","_nodeVersion":"v0.2.5","dist":{"tarball":"http://registry.npmjs.org/underscore/-/underscore-1.1.3.tgz"},"directories":{},"_npmUser":{"name":"jashkenas","email":"jashkenas@gmail.com"},"maintainers":[{"name":"documentcloud","email":"jeremy@documentcloud.org"},{"name":"jashkenas","email":"jashkenas@gmail.com"}]},"1.1.4":{"name":"underscore","description":"Functional programming aid for JavaScript. 
Works well with jQuery.","url":"http://documentcloud.github.com/underscore/","keywords":["util","functional","server","client","browser"],"author":{"name":"Jeremy Ashkenas","email":"jeremy@documentcloud.org"},"contributors":[],"dependencies":{},"lib":".","main":"underscore.js","version":"1.1.4","_id":"underscore@1.1.4","engines":{"node":"*"},"_engineSupported":true,"_npmVersion":"0.3.9","_nodeVersion":"v0.5.0-pre","dist":{"shasum":"9e82274902865625b3a6d4c315a38ffd80047dae","tarball":"http://registry.npmjs.org/underscore/-/underscore-1.1.4.tgz"},"_npmUser":{"name":"jashkenas","email":"jashkenas@gmail.com"},"maintainers":[{"name":"documentcloud","email":"jeremy@documentcloud.org"},{"name":"jashkenas","email":"jashkenas@gmail.com"}],"directories":{}},"1.1.5":{"name":"underscore","description":"JavaScript's functional programming helper library.","homepage":"http://documentcloud.github.com/underscore/","keywords":["util","functional","server","client","browser"],"author":{"name":"Jeremy Ashkenas","email":"jeremy@documentcloud.org"},"contributors":[],"dependencies":{},"repository":{"type":"git","url":"git://github.com/documentcloud/underscore.git"},"main":"underscore.js","version":"1.1.5","_id":"underscore@1.1.5","engines":{"node":"*"},"_engineSupported":true,"_npmVersion":"0.3.16","_nodeVersion":"v0.4.2","directories":{},"files":[""],"_defaultsLoaded":true,"dist":{"shasum":"23601d62c75619998b2f0db24938102793336a56","tarball":"http://registry.npmjs.org/underscore/-/underscore-1.1.5.tgz"},"_npmUser":{"name":"jashkenas","email":"jashkenas@gmail.com"},"maintainers":[{"name":"documentcloud","email":"jeremy@documentcloud.org"},{"name":"jashkenas","email":"jashkenas@gmail.com"}]},"1.1.6":{"name":"underscore","description":"JavaScript's functional programming helper library.","homepage":"http://documentcloud.github.com/underscore/","keywords":["util","functional","server","client","browser"],"author":{"name":"Jeremy Ashkenas","email":"jeremy@documentcloud.org"},"contributors":[],"dependencies":{},"repository":{"type":"git","url":"git://github.com/documentcloud/underscore.git"},"main":"underscore.js","version":"1.1.6","_id":"underscore@1.1.6","engines":{"node":"*"},"_engineSupported":true,"_npmVersion":"0.3.18","_nodeVersion":"v0.4.2","directories":{},"files":[""],"_defaultsLoaded":true,"dist":{"shasum":"6868da1bdd72d75285be0b4e50f228e70d001a2c","tarball":"http://registry.npmjs.org/underscore/-/underscore-1.1.6.tgz"},"_npmUser":{"name":"jashkenas","email":"jashkenas@gmail.com"},"maintainers":[{"name":"documentcloud","email":"jeremy@documentcloud.org"},{"name":"jashkenas","email":"jashkenas@gmail.com"}]},"1.1.7":{"name":"underscore","description":"JavaScript's functional programming helper library.","homepage":"http://documentcloud.github.com/underscore/","keywords":["util","functional","server","client","browser"],"author":{"name":"Jeremy 
Ashkenas","email":"jeremy@documentcloud.org"},"contributors":[],"dependencies":{},"repository":{"type":"git","url":"git://github.com/documentcloud/underscore.git"},"main":"underscore.js","version":"1.1.7","devDependencies":{},"_id":"underscore@1.1.7","engines":{"node":"*"},"_engineSupported":true,"_npmVersion":"1.0.3","_nodeVersion":"v0.4.7","_defaultsLoaded":true,"dist":{"shasum":"40bab84bad19d230096e8d6ef628bff055d83db0","tarball":"http://registry.npmjs.org/underscore/-/underscore-1.1.7.tgz"},"scripts":{},"_npmUser":{"name":"jashkenas","email":"jashkenas@gmail.com"},"maintainers":[{"name":"documentcloud","email":"jeremy@documentcloud.org"},{"name":"jashkenas","email":"jashkenas@gmail.com"}],"directories":{}},"1.2.0":{"name":"underscore","description":"JavaScript's functional programming helper library.","homepage":"http://documentcloud.github.com/underscore/","keywords":["util","functional","server","client","browser"],"author":{"name":"Jeremy Ashkenas","email":"jeremy@documentcloud.org"},"contributors":[],"dependencies":{},"repository":{"type":"git","url":"git://github.com/documentcloud/underscore.git"},"main":"underscore.js","version":"1.2.0","_npmJsonOpts":{"file":"/Users/jashkenas/.npm/underscore/1.2.0/package/package.json","wscript":false,"contributors":false,"serverjs":false},"_id":"underscore@1.2.0","devDependencies":{},"engines":{"node":"*"},"_engineSupported":true,"_npmVersion":"1.0.22","_nodeVersion":"v0.4.10","_defaultsLoaded":true,"dist":{"shasum":"b32ce32c8c118caa8031c10b54c7f65ab3b557fd","tarball":"http://registry.npmjs.org/underscore/-/underscore-1.2.0.tgz"},"scripts":{},"maintainers":[{"name":"documentcloud","email":"jeremy@documentcloud.org"},{"name":"jashkenas","email":"jashkenas@gmail.com"}],"_npmUser":{"name":"jashkenas","email":"jashkenas@gmail.com"},"directories":{}},"1.2.1":{"name":"underscore","description":"JavaScript's functional programming helper library.","homepage":"http://documentcloud.github.com/underscore/","keywords":["util","functional","server","client","browser"],"author":{"name":"Jeremy Ashkenas","email":"jeremy@documentcloud.org"},"contributors":[],"dependencies":{},"repository":{"type":"git","url":"git://github.com/documentcloud/underscore.git"},"main":"underscore.js","version":"1.2.1","_npmJsonOpts":{"file":"/Users/jashkenas/.npm/underscore/1.2.1/package/package.json","wscript":false,"contributors":false,"serverjs":false},"_id":"underscore@1.2.1","devDependencies":{},"engines":{"node":"*"},"_engineSupported":true,"_npmVersion":"1.0.22","_nodeVersion":"v0.4.10","_defaultsLoaded":true,"dist":{"shasum":"fc5c6b0765673d92a2d4ac8b4dc0aa88702e2bd4","tarball":"http://registry.npmjs.org/underscore/-/underscore-1.2.1.tgz"},"scripts":{},"maintainers":[{"name":"documentcloud","email":"jeremy@documentcloud.org"},{"name":"jashkenas","email":"jashkenas@gmail.com"}],"_npmUser":{"name":"jashkenas","email":"jashkenas@gmail.com"},"directories":{}},"1.2.2":{"name":"underscore","description":"JavaScript's functional programming helper library.","homepage":"http://documentcloud.github.com/underscore/","keywords":["util","functional","server","client","browser"],"author":{"name":"Jeremy 
Ashkenas","email":"jeremy@documentcloud.org"},"contributors":[],"dependencies":{},"repository":{"type":"git","url":"git://github.com/documentcloud/underscore.git"},"main":"underscore.js","version":"1.2.2","_npmUser":{"name":"jashkenas","email":"jashkenas@gmail.com"},"_id":"underscore@1.2.2","devDependencies":{},"engines":{"node":"*"},"_engineSupported":true,"_npmVersion":"1.0.104","_nodeVersion":"v0.6.0","_defaultsLoaded":true,"dist":{"shasum":"74dd40e9face84e724eb2edae945b8aedc233ba3","tarball":"http://registry.npmjs.org/underscore/-/underscore-1.2.2.tgz"},"maintainers":[{"name":"documentcloud","email":"jeremy@documentcloud.org"},{"name":"jashkenas","email":"jashkenas@gmail.com"}],"directories":{}},"1.2.3":{"name":"underscore","description":"JavaScript's functional programming helper library.","homepage":"http://documentcloud.github.com/underscore/","keywords":["util","functional","server","client","browser"],"author":{"name":"Jeremy Ashkenas","email":"jeremy@documentcloud.org"},"contributors":[],"dependencies":{},"repository":{"type":"git","url":"git://github.com/documentcloud/underscore.git"},"main":"underscore.js","version":"1.2.3","_npmUser":{"name":"jashkenas","email":"jashkenas@gmail.com"},"_id":"underscore@1.2.3","devDependencies":{},"engines":{"node":"*"},"_engineSupported":true,"_npmVersion":"1.0.104","_nodeVersion":"v0.6.0","_defaultsLoaded":true,"dist":{"shasum":"11b874da70f4683d7d48bba2b44be1e600d2f6cf","tarball":"http://registry.npmjs.org/underscore/-/underscore-1.2.3.tgz"},"maintainers":[{"name":"documentcloud","email":"jeremy@documentcloud.org"},{"name":"jashkenas","email":"jashkenas@gmail.com"}],"directories":{}},"1.2.4":{"name":"underscore","description":"JavaScript's functional programming helper library.","homepage":"http://documentcloud.github.com/underscore/","keywords":["util","functional","server","client","browser"],"author":{"name":"Jeremy Ashkenas","email":"jeremy@documentcloud.org"},"contributors":[],"repository":{"type":"git","url":"git://github.com/documentcloud/underscore.git"},"main":"underscore.js","version":"1.2.4","_npmUser":{"name":"jashkenas","email":"jashkenas@gmail.com"},"_id":"underscore@1.2.4","dependencies":{},"devDependencies":{},"engines":{"node":"*"},"_engineSupported":true,"_npmVersion":"1.0.104","_nodeVersion":"v0.6.6","_defaultsLoaded":true,"dist":{"shasum":"e8da6241aa06f64df2473bb2590b8c17c84c3c7e","tarball":"http://registry.npmjs.org/underscore/-/underscore-1.2.4.tgz"},"maintainers":[{"name":"documentcloud","email":"jeremy@documentcloud.org"},{"name":"jashkenas","email":"jashkenas@gmail.com"}],"directories":{}},"1.3.0":{"name":"underscore","description":"JavaScript's functional programming helper library.","homepage":"http://documentcloud.github.com/underscore/","keywords":["util","functional","server","client","browser"],"author":{"name":"Jeremy 
Ashkenas","email":"jeremy@documentcloud.org"},"contributors":[],"repository":{"type":"git","url":"git://github.com/documentcloud/underscore.git"},"main":"underscore.js","version":"1.3.0","_npmUser":{"name":"jashkenas","email":"jashkenas@gmail.com"},"_id":"underscore@1.3.0","dependencies":{},"devDependencies":{},"engines":{"node":"*"},"_engineSupported":true,"_npmVersion":"1.0.104","_nodeVersion":"v0.6.6","_defaultsLoaded":true,"dist":{"shasum":"253b2d79b7bb67943ced0fc744eb18267963ede8","tarball":"http://registry.npmjs.org/underscore/-/underscore-1.3.0.tgz"},"maintainers":[{"name":"documentcloud","email":"jeremy@documentcloud.org"},{"name":"jashkenas","email":"jashkenas@gmail.com"}],"directories":{}},"1.3.1":{"name":"underscore","description":"JavaScript's functional programming helper library.","homepage":"http://documentcloud.github.com/underscore/","keywords":["util","functional","server","client","browser"],"author":{"name":"Jeremy Ashkenas","email":"jeremy@documentcloud.org"},"repository":{"type":"git","url":"git://github.com/documentcloud/underscore.git"},"main":"underscore.js","version":"1.3.1","_npmUser":{"name":"jashkenas","email":"jashkenas@gmail.com"},"_id":"underscore@1.3.1","dependencies":{},"devDependencies":{},"engines":{"node":"*"},"_engineSupported":true,"_npmVersion":"1.0.104","_nodeVersion":"v0.6.6","_defaultsLoaded":true,"dist":{"shasum":"6cb8aad0e77eb5dbbfb54b22bcd8697309cf9641","tarball":"http://registry.npmjs.org/underscore/-/underscore-1.3.1.tgz"},"maintainers":[{"name":"documentcloud","email":"jeremy@documentcloud.org"},{"name":"jashkenas","email":"jashkenas@gmail.com"}],"directories":{}},"1.3.2":{"name":"underscore","description":"JavaScript's functional programming helper library.","homepage":"http://documentcloud.github.com/underscore/","keywords":["util","functional","server","client","browser"],"author":{"name":"Jeremy Ashkenas","email":"jeremy@documentcloud.org"},"repository":{"type":"git","url":"git://github.com/documentcloud/underscore.git"},"main":"underscore.js","version":"1.3.2","_npmUser":{"name":"jashkenas","email":"jashkenas@gmail.com"},"_id":"underscore@1.3.2","dependencies":{},"devDependencies":{},"optionalDependencies":{},"engines":{"node":"*"},"_engineSupported":true,"_npmVersion":"1.1.1","_nodeVersion":"v0.6.11","_defaultsLoaded":true,"dist":{"shasum":"1b4e455089ab1d1d38ab6794ffe6cf08f764394a","tarball":"http://registry.npmjs.org/underscore/-/underscore-1.3.2.tgz"},"readme":" __ \n /\\ \\ __ \n __ __ ___ \\_\\ \\ __ _ __ ____ ___ ___ _ __ __ /\\_\\ ____ \n /\\ \\/\\ \\ /' _ `\\ /'_ \\ /'__`\\/\\ __\\/ ,__\\ / ___\\ / __`\\/\\ __\\/'__`\\ \\/\\ \\ /',__\\ \n \\ \\ \\_\\ \\/\\ \\/\\ \\/\\ \\ \\ \\/\\ __/\\ \\ \\//\\__, `\\/\\ \\__//\\ \\ \\ \\ \\ \\//\\ __/ __ \\ \\ \\/\\__, `\\\n \\ \\____/\\ \\_\\ \\_\\ \\___,_\\ \\____\\\\ \\_\\\\/\\____/\\ \\____\\ \\____/\\ \\_\\\\ \\____\\/\\_\\ _\\ \\ \\/\\____/\n \\/___/ \\/_/\\/_/\\/__,_ /\\/____/ \\/_/ \\/___/ \\/____/\\/___/ \\/_/ \\/____/\\/_//\\ \\_\\ \\/___/ \n \\ \\____/ \n \\/___/\n \nUnderscore.js is a utility-belt library for JavaScript that provides \nsupport for the usual functional suspects (each, map, reduce, filter...) 
\nwithout extending any core JavaScript objects.\n\nFor Docs, License, Tests, and pre-packed downloads, see:\nhttp://documentcloud.github.com/underscore/\n\nMany thanks to our contributors:\nhttps://github.com/documentcloud/underscore/contributors\n","maintainers":[{"name":"documentcloud","email":"jeremy@documentcloud.org"},{"name":"jashkenas","email":"jashkenas@gmail.com"}],"directories":{}},"1.3.3":{"name":"underscore","description":"JavaScript's functional programming helper library.","homepage":"http://documentcloud.github.com/underscore/","keywords":["util","functional","server","client","browser"],"author":{"name":"Jeremy Ashkenas","email":"jeremy@documentcloud.org"},"repository":{"type":"git","url":"git://github.com/documentcloud/underscore.git"},"main":"underscore.js","version":"1.3.3","_npmUser":{"name":"jashkenas","email":"jashkenas@gmail.com"},"_id":"underscore@1.3.3","dependencies":{},"devDependencies":{},"optionalDependencies":{},"engines":{"node":"*"},"_engineSupported":true,"_npmVersion":"1.1.1","_nodeVersion":"v0.6.11","_defaultsLoaded":true,"dist":{"shasum":"47ac53683daf832bfa952e1774417da47817ae42","tarball":"http://registry.npmjs.org/underscore/-/underscore-1.3.3.tgz"},"readme":" __ \n /\\ \\ __ \n __ __ ___ \\_\\ \\ __ _ __ ____ ___ ___ _ __ __ /\\_\\ ____ \n /\\ \\/\\ \\ /' _ `\\ /'_ \\ /'__`\\/\\ __\\/ ,__\\ / ___\\ / __`\\/\\ __\\/'__`\\ \\/\\ \\ /',__\\ \n \\ \\ \\_\\ \\/\\ \\/\\ \\/\\ \\ \\ \\/\\ __/\\ \\ \\//\\__, `\\/\\ \\__//\\ \\ \\ \\ \\ \\//\\ __/ __ \\ \\ \\/\\__, `\\\n \\ \\____/\\ \\_\\ \\_\\ \\___,_\\ \\____\\\\ \\_\\\\/\\____/\\ \\____\\ \\____/\\ \\_\\\\ \\____\\/\\_\\ _\\ \\ \\/\\____/\n \\/___/ \\/_/\\/_/\\/__,_ /\\/____/ \\/_/ \\/___/ \\/____/\\/___/ \\/_/ \\/____/\\/_//\\ \\_\\ \\/___/ \n \\ \\____/ \n \\/___/\n \nUnderscore.js is a utility-belt library for JavaScript that provides \nsupport for the usual functional suspects (each, map, reduce, filter...) 
\nwithout extending any core JavaScript objects.\n\nFor Docs, License, Tests, and pre-packed downloads, see:\nhttp://documentcloud.github.com/underscore/\n\nMany thanks to our contributors:\nhttps://github.com/documentcloud/underscore/contributors\n","maintainers":[{"name":"documentcloud","email":"jeremy@documentcloud.org"},{"name":"jashkenas","email":"jashkenas@gmail.com"}],"directories":{}}},"maintainers":[{"name":"documentcloud","email":"jeremy@documentcloud.org"},{"name":"jashkenas","email":"jashkenas@gmail.com"}],"author":{"name":"Jeremy Ashkenas","email":"jeremy@documentcloud.org"},"time":{"1.0.3":"2011-12-07T15:12:18.045Z","1.0.4":"2011-12-07T15:12:18.045Z","1.1.0":"2011-12-07T15:12:18.045Z","1.1.1":"2011-12-07T15:12:18.045Z","1.1.2":"2011-12-07T15:12:18.045Z","1.1.3":"2011-12-07T15:12:18.045Z","1.1.4":"2011-12-07T15:12:18.045Z","1.1.5":"2011-12-07T15:12:18.045Z","1.1.6":"2011-12-07T15:12:18.045Z","1.1.7":"2011-12-07T15:12:18.045Z","1.2.0":"2011-12-07T15:12:18.045Z","1.2.1":"2011-12-07T15:12:18.045Z","1.2.2":"2011-11-14T20:28:47.115Z","1.2.3":"2011-12-07T15:12:18.045Z","1.2.4":"2012-01-09T17:23:14.818Z","1.3.0":"2012-01-11T16:41:38.459Z","1.3.1":"2012-01-23T22:57:36.474Z","1.3.2":"2012-04-09T18:38:14.345Z","1.3.3":"2012-04-10T14:43:48.089Z"},"repository":{"type":"git","url":"git://github.com/documentcloud/underscore.git"},"users":{"vesln":true,"mvolkmann":true,"lancehunt":true,"mikl":true,"linus":true,"vasc":true,"bat":true,"dmalam":true,"mbrevoort":true,"danielr":true,"rsimoes":true,"thlorenz":true}}
\ No newline at end of file
diff --git a/deps/npm/node_modules/npm-registry-client/test/get-all.js b/deps/npm/node_modules/npm-registry-client/test/get-all.js
index 86978b26703..75570fcbb6c 100644
--- a/deps/npm/node_modules/npm-registry-client/test/get-all.js
+++ b/deps/npm/node_modules/npm-registry-client/test/get-all.js
@@ -10,7 +10,7 @@ tap.test("basic request", function (t) {
   })

   client.get("http://localhost:1337/-/all", null, function (er) {
-    t.notOk(er, "no error")
+    t.ifError(er, "no error")
     t.end()
   })
 })
diff --git a/deps/npm/node_modules/npm-registry-client/test/get-basic.js b/deps/npm/node_modules/npm-registry-client/test/get-basic.js
index 10c48b0b876..240dc876221 100644
--- a/deps/npm/node_modules/npm-registry-client/test/get-basic.js
+++ b/deps/npm/node_modules/npm-registry-client/test/get-basic.js
@@ -16,7 +16,11 @@ tap.test("basic request", function (t) {
     res.json(usroot)
   })

-  t.plan(2)
+  server.expect("/@bigco%2funderscore", function (req, res) {
+    res.json(usroot)
+  })
+
+  t.plan(3)

   client.get("http://localhost:1337/underscore/1.3.3", null, function (er, data) {
     t.deepEqual(data, us)
@@ -24,4 +28,8 @@
   client.get("http://localhost:1337/underscore", null, function (er, data) {
     t.deepEqual(data, usroot)
   })
+
+  client.get("http://localhost:1337/@bigco%2funderscore", null, function (er, data) {
+    t.deepEqual(data, usroot)
+  })
 })
diff --git a/deps/npm/node_modules/npm-registry-client/test/get-error-403.js b/deps/npm/node_modules/npm-registry-client/test/get-error-403.js
new file mode 100644
index 00000000000..27406b1680c
--- /dev/null
+++ b/deps/npm/node_modules/npm-registry-client/test/get-error-403.js
@@ -0,0 +1,33 @@
+var tap = require("tap")
+
+var server = require("./lib/server.js")
+var common = require("./lib/common.js")
+
+tap.test("get fails with 403", function (t) {
+  server.expect("/habanero", function (req, res) {
+    t.equal(req.method, "GET", "got expected method")
+
+    res.writeHead(403)
+    res.end("{\"error\":\"get that cat out of the toilet that's gross omg\"}")
+  })
+
+  var client = common.freshClient()
+  client.conf.set("fetch-retry-mintimeout", 100)
+  client.get(
+    "http://localhost:1337/habanero",
+    {},
+    function (er) {
+      t.ok(er, "failed as expected")
+
+      t.equal(er.statusCode, 403, "status code was attached as expected")
+      t.equal(er.code, "E403", "error code was formatted as expected")
+      t.equal(
+        er.message,
+        "get that cat out of the toilet that's gross omg : habanero",
+        "got error message"
+      )
+
+      t.end()
+    }
+  )
+})
diff --git a/deps/npm/node_modules/npm-registry-client/test/lib/common.js b/deps/npm/node_modules/npm-registry-client/test/lib/common.js
index f9048c09453..712f6632f29 100644
--- a/deps/npm/node_modules/npm-registry-client/test/lib/common.js
+++ b/deps/npm/node_modules/npm-registry-client/test/lib/common.js
@@ -1,16 +1,80 @@
 var resolve = require("path").resolve
-var server = require('./server.js')
-var RC = require('../../')
+
+var server = require("./server.js")
+var RC = require("../../")
+var toNerfDart = require("../../lib/util/nerf-dart.js")
+
+var REGISTRY = "http://localhost:" + server.port

 module.exports = {
+  port : server.port,
+  registry : REGISTRY,
   freshClient : function freshClient(config) {
     config = config || {}
-    config.cache = resolve(__dirname, '../fixtures/cache')
-    config.registry = 'http://localhost:' + server.port
+    config.cache = resolve(__dirname, "../fixtures/cache")
+    config.registry = REGISTRY
+    var container = {
+      get: function (k) { return config[k] },
+      set: function (k, v) { config[k] = v },
+      del: function (k) { delete config[k] },
+      getCredentialsByURI: function(uri) {
+        var nerfed = toNerfDart(uri)
+        var c = {scope : nerfed}
+
+        if (this.get(nerfed + ":_authToken")) {
+          c.token = this.get(nerfed + ":_authToken")
+          // the bearer token is enough, don't confuse things
+          return c
+        }
+
+        if (this.get(nerfed + ":_password")) {
+          c.password = new Buffer(this.get(nerfed + ":_password"), "base64").toString("utf8")
+        }
+
+        if (this.get(nerfed + ":username")) {
+          c.username = this.get(nerfed + ":username")
+        }
+
+        if (this.get(nerfed + ":email")) {
+          c.email = this.get(nerfed + ":email")
+        }
+
+        if (this.get(nerfed + ":always-auth") !== undefined) {
+          c.alwaysAuth = this.get(nerfed + ":always-auth")
+        }
+
+        if (c.username && c.password) {
+          c.auth = new Buffer(c.username + ":" + c.password).toString("base64")
+        }
+
+        return c
+      },
+      setCredentialsByURI: function (uri, c) {
+        var nerfed = toNerfDart(uri)
+
+        if (c.token) {
+          this.set(nerfed + ":_authToken", c.token, "user")
+          this.del(nerfed + ":_password", "user")
+          this.del(nerfed + ":username", "user")
+          this.del(nerfed + ":email", "user")
+        }
+        else if (c.username || c.password || c.email) {
+          this.del(nerfed + ":_authToken", "user")
+
+          var encoded = new Buffer(c.password, "utf8").toString("base64")
+          this.set(nerfed + ":_password", encoded, "user")
+          this.set(nerfed + ":username", c.username, "user")
+          this.set(nerfed + ":email", c.email, "user")
+        }
+        else {
+          throw new Error("No credentials to set.")
+        }
+      }
+    }

-    var client = new RC(config)
+    var client = new RC(container)
     server.log = client.log
-    client.log.level = 'silent'
+    client.log.level = "silent"

     return client
   }
diff --git a/deps/npm/node_modules/npm-registry-client/test/lib/server.js b/deps/npm/node_modules/npm-registry-client/test/lib/server.js
index b195d9a9b30..37cfae04177 100644
--- a/deps/npm/node_modules/npm-registry-client/test/lib/server.js
+++ b/deps/npm/node_modules/npm-registry-client/test/lib/server.js
@@ -14,7 +14,7 @@ function handler (req, res) {
   req.connection.setTimeout(1000)

   // If we got authorization, make sure it's the right password.
-  if (req.headers.authorization) {
+  if (req.headers.authorization && req.headers.authorization.match(/^Basic/)) {
     var auth = req.headers.authorization.replace(/^Basic /, "")
     auth = new Buffer(auth, "base64").toString("utf8")
     assert.equal(auth, "username:%1234@asdf%")
diff --git a/deps/npm/node_modules/npm-registry-client/test/publish-again-scoped.js b/deps/npm/node_modules/npm-registry-client/test/publish-again-scoped.js
new file mode 100644
index 00000000000..97838ca44dc
--- /dev/null
+++ b/deps/npm/node_modules/npm-registry-client/test/publish-again-scoped.js
@@ -0,0 +1,82 @@
+var tap = require("tap")
+var fs = require("fs")
+
+var server = require("./lib/server.js")
+var common = require("./lib/common.js")
+
+var nerfed = "//localhost:" + server.port + "/:"
+
+var configuration = {}
+configuration[nerfed + "username"] = "username"
+configuration[nerfed + "_password"] = new Buffer("%1234@asdf%").toString("base64")
+configuration[nerfed + "email"] = "i@izs.me"
+
+var client = common.freshClient(configuration)
+
+tap.test("publish again", function (t) {
+  // not really a tarball, but doesn't matter
+  var tarball = require.resolve("../package.json")
+  var pd = fs.readFileSync(tarball, "base64")
+  var pkg = require("../package.json")
+  var lastTime = null
+
+  server.expect("/@npm%2fnpm-registry-client", function (req, res) {
+    t.equal(req.method, "PUT")
+    var b = ""
+    req.setEncoding("utf8")
+    req.on("data", function (d) {
+      b += d
+    })
+
+    req.on("end", function () {
+      var o = lastTime = JSON.parse(b)
+      t.equal(o._id, "@npm/npm-registry-client")
+      t.equal(o["dist-tags"].latest, pkg.version)
+      t.has(o.versions[pkg.version], pkg)
+      t.same(o.maintainers, [ { name: "username", email: "i@izs.me" } ])
+      var att = o._attachments[ pkg.name + "-" + pkg.version + ".tgz" ]
+      t.same(att.data, pd)
+      res.statusCode = 409
+      res.json({reason: "must supply latest _rev to update existing package"})
+    })
+  })
+
+  server.expect("/@npm%2fnpm-registry-client?write=true", function (req, res) {
+    t.equal(req.method, "GET")
+    t.ok(lastTime)
+    for (var i in lastTime.versions) {
+      var v = lastTime.versions[i]
+      delete lastTime.versions[i]
+      lastTime.versions["0.0.2"] = v
+      lastTime["dist-tags"] = { latest: "0.0.2" }
+    }
+    lastTime._rev = "asdf"
+    res.json(lastTime)
+  })
+
+  server.expect("/@npm%2fnpm-registry-client", function (req, res) {
+    t.equal(req.method, "PUT")
+    t.ok(lastTime)
+
+    var b = ""
+    req.setEncoding("utf8")
+    req.on("data", function (d) {
+      b += d
+    })
+
+    req.on("end", function() {
+      var o = JSON.parse(b)
+      t.equal(o._rev, "asdf")
+      t.deepEqual(o.versions["0.0.2"], o.versions[pkg.version])
+      res.statusCode = 201
+      res.json({created: true})
+    })
+  })
+
+  pkg.name = "@npm/npm-registry-client"
+  client.publish("http://localhost:1337/", pkg, tarball, function (er, data) {
+    if (er) throw er
+    t.deepEqual(data, { created: true })
+    t.end()
+  })
+})
diff --git a/deps/npm/node_modules/npm-registry-client/test/publish-again.js b/deps/npm/node_modules/npm-registry-client/test/publish-again.js
index 6d286fb7eba..39c368fd35b 100644
--- a/deps/npm/node_modules/npm-registry-client/test/publish-again.js
+++ b/deps/npm/node_modules/npm-registry-client/test/publish-again.js
@@ -3,16 +3,23 @@ var fs = require("fs")

 var server = require("./lib/server.js")
 var common = require("./lib/common.js")
-var client = common.freshClient({
-  username: "username",
-  password: "%1234@asdf%",
-  email: "i@izs.me",
-  _auth: new Buffer("username:%1234@asdf%").toString("base64"),
-  "always-auth": true
-})
+
+var nerfed = "//localhost:" + server.port + "/:"
+
+var configuration = {}
+configuration[nerfed + "username"] = "username"
+configuration[nerfed + "_password"] = new Buffer("%1234@asdf%").toString("base64")
+configuration[nerfed + "email"] = "i@izs.me"
+
+var client = common.freshClient(configuration)

 tap.test("publish again", function (t) {
+  // not really a tarball, but doesn't matter
+  var tarball = require.resolve("../package.json")
+  var pd = fs.readFileSync(tarball, "base64")
+  var pkg = require("../package.json")
   var lastTime = null
+
   server.expect("/npm-registry-client", function (req, res) {
     t.equal(req.method, "PUT")
     var b = ""
@@ -66,11 +73,6 @@ tap.test("publish again", function (t) {
     })
   })

-
-  // not really a tarball, but doesn't matter
-  var tarball = require.resolve("../package.json")
-  var pd = fs.readFileSync(tarball, "base64")
-  var pkg = require("../package.json")
   client.publish("http://localhost:1337/", pkg, tarball, function (er, data) {
     if (er) throw er
     t.deepEqual(data, { created: true })
diff --git a/deps/npm/node_modules/npm-registry-client/test/publish-scoped-auth-token.js b/deps/npm/node_modules/npm-registry-client/test/publish-scoped-auth-token.js
new file mode 100644
index 00000000000..e1bb7dd1ee2
--- /dev/null
+++ b/deps/npm/node_modules/npm-registry-client/test/publish-scoped-auth-token.js
@@ -0,0 +1,52 @@
+var tap = require("tap")
+var crypto = require("crypto")
+var fs = require("fs")
+
+var server = require("./lib/server.js")
+var common = require("./lib/common.js")
+
+var nerfed = "//localhost:" + server.port + "/:"
+
+var configuration = {}
+configuration[nerfed + "_authToken"] = "of-glad-tidings"
+
+var client = common.freshClient(configuration)
+
+tap.test("publish", function (t) {
+  // not really a tarball, but doesn't matter
+  var tarball = require.resolve("../package.json")
+  var pd = fs.readFileSync(tarball, "base64")
+  var pkg = require("../package.json")
+  pkg.name = "@npm/npm-registry-client"
+
+  server.expect("/@npm%2fnpm-registry-client", function (req, res) {
+    t.equal(req.method, "PUT")
+    t.equal(req.headers.authorization, "Bearer of-glad-tidings")
+
+    var b = ""
+    req.setEncoding("utf8")
+    req.on("data", function (d) {
+      b += d
+    })
+
+    req.on("end", function () {
+      var o = JSON.parse(b)
+      t.equal(o._id, "@npm/npm-registry-client")
+      t.equal(o["dist-tags"].latest, pkg.version)
+      t.has(o.versions[pkg.version], pkg)
+      t.same(o.maintainers, o.versions[pkg.version].maintainers)
+      var att = o._attachments[ pkg.name + "-" + pkg.version + ".tgz" ]
+      t.same(att.data, pd)
+      var hash = crypto.createHash("sha1").update(pd, "base64").digest("hex")
+      t.equal(o.versions[pkg.version].dist.shasum, hash)
+      res.statusCode = 201
+      res.json({created:true})
+    })
+  })
+
+  client.publish(common.registry, pkg, tarball, function (er, data) {
+    if (er) throw er
+    t.deepEqual(data, { created: true })
+    t.end()
+  })
+})
diff --git a/deps/npm/node_modules/npm-registry-client/test/publish-scoped.js b/deps/npm/node_modules/npm-registry-client/test/publish-scoped.js
new file mode 100644
index 00000000000..b5dea3649c3
--- /dev/null
+++ b/deps/npm/node_modules/npm-registry-client/test/publish-scoped.js
@@ -0,0 +1,57 @@
+var tap = require("tap")
+var crypto = require("crypto")
+var fs = require("fs")
+
+var server = require("./lib/server.js")
+var common = require("./lib/common.js")
+
+var nerfed = "//localhost:" + server.port + "/:"
+
+var configuration = {}
+configuration[nerfed + "username"] = "username" +configuration[nerfed + "_password"] = new Buffer("%1234@asdf%").toString("base64") +configuration[nerfed + "email"] = "ogd@aoaioxxysz.net" + +var client = common.freshClient(configuration) + +var _auth = new Buffer("username:%1234@asdf%").toString("base64") + +tap.test("publish", function (t) { + // not really a tarball, but doesn't matter + var tarball = require.resolve("../package.json") + var pd = fs.readFileSync(tarball, "base64") + var pkg = require("../package.json") + pkg.name = "@npm/npm-registry-client" + + server.expect("/@npm%2fnpm-registry-client", function (req, res) { + t.equal(req.method, "PUT") + t.equal(req.headers.authorization, "Basic " + _auth) + + var b = "" + req.setEncoding("utf8") + req.on("data", function (d) { + b += d + }) + + req.on("end", function () { + var o = JSON.parse(b) + t.equal(o._id, "@npm/npm-registry-client") + t.equal(o["dist-tags"].latest, pkg.version) + t.has(o.versions[pkg.version], pkg) + t.same(o.maintainers, [ { name: "username", email: "ogd@aoaioxxysz.net" } ]) + t.same(o.maintainers, o.versions[pkg.version].maintainers) + var att = o._attachments[ pkg.name + "-" + pkg.version + ".tgz" ] + t.same(att.data, pd) + var hash = crypto.createHash("sha1").update(pd, "base64").digest("hex") + t.equal(o.versions[pkg.version].dist.shasum, hash) + res.statusCode = 201 + res.json({created:true}) + }) + }) + + client.publish(common.registry, pkg, tarball, function (er, data) { + if (er) throw er + t.deepEqual(data, { created: true }) + t.end() + }) +}) diff --git a/deps/npm/node_modules/npm-registry-client/test/publish.js b/deps/npm/node_modules/npm-registry-client/test/publish.js index c34bf6c5340..2d76dfae202 100644 --- a/deps/npm/node_modules/npm-registry-client/test/publish.js +++ b/deps/npm/node_modules/npm-registry-client/test/publish.js @@ -4,16 +4,22 @@ var fs = require("fs") var server = require("./lib/server.js") var common = require("./lib/common.js") -var client = common.freshClient({ - username: "username", - password: "%1234@asdf%", - email: "i@izs.me", - _auth: new Buffer("username:%1234@asdf%").toString("base64"), - "always-auth": true -}) +var nerfed = "//localhost:" + server.port + "/:" + +var configuration = {} +configuration[nerfed + "username"] = "username" +configuration[nerfed + "_password"] = new Buffer("%1234@asdf%").toString("base64") +configuration[nerfed + "email"] = "i@izs.me" + +var client = common.freshClient(configuration) tap.test("publish", function (t) { + // not really a tarball, but doesn't matter + var tarball = require.resolve("../package.json") + var pd = fs.readFileSync(tarball, "base64") + var pkg = require("../package.json") + server.expect("/npm-registry-client", function (req, res) { t.equal(req.method, "PUT") var b = "" @@ -38,10 +44,6 @@ tap.test("publish", function (t) { }) }) - // not really a tarball, but doesn't matter - var tarball = require.resolve("../package.json") - var pd = fs.readFileSync(tarball, "base64") - var pkg = require("../package.json") client.publish("http://localhost:1337/", pkg, tarball, function (er, data) { if (er) throw er t.deepEqual(data, { created: true }) diff --git a/deps/npm/node_modules/npm-registry-client/test/request-gzip-content.js b/deps/npm/node_modules/npm-registry-client/test/request-gzip-content.js index 79c2e8dc02e..1085bfaca20 100644 --- a/deps/npm/node_modules/npm-registry-client/test/request-gzip-content.js +++ b/deps/npm/node_modules/npm-registry-client/test/request-gzip-content.js @@ -19,10 +19,12 @@ var pkg = { 
zlib.gzip(JSON.stringify(pkg), function (err, pkgGzip) { tap.test("request gzip package content", function (t) { + t.ifError(err, "example package compressed") + server.expect("GET", "/some-package-gzip/1.2.3", function (req, res) { res.statusCode = 200 - res.setHeader("Content-Encoding", "gzip"); - res.setHeader("Content-Type", "application/json"); + res.setHeader("Content-Encoding", "gzip") + res.setHeader("Content-Type", "application/json") res.end(pkgGzip) }) @@ -46,4 +48,4 @@ zlib.gzip(JSON.stringify(pkg), function (err, pkgGzip) { t.end() }) }) -}); +}) diff --git a/deps/npm/node_modules/npm-registry-client/test/star.js b/deps/npm/node_modules/npm-registry-client/test/star.js index 0e43e10d76d..43c8888ef20 100644 --- a/deps/npm/node_modules/npm-registry-client/test/star.js +++ b/deps/npm/node_modules/npm-registry-client/test/star.js @@ -2,18 +2,20 @@ var tap = require("tap") var server = require("./lib/server.js") var common = require("./lib/common.js") -var client = common.freshClient({ - username : "username", - password : "%1234@asdf%", - email : "ogd@aoaioxxysz.net", - _auth : new Buffer("username:%1234@asdf%").toString("base64"), - "always-auth" : true -}) - -var cache = require("./fixtures/underscore/cache.json") var DEP_USER = "username" +var nerfed = "//localhost:" + server.port + "/:" + +var configuration = {} +configuration[nerfed + "username"] = DEP_USER +configuration[nerfed + "_password"] = new Buffer("%1234@asdf%").toString("base64") +configuration[nerfed + "email"] = "i@izs.me" + +var client = common.freshClient(configuration) + +var cache = require("./fixtures/underscore/cache.json") + tap.test("star a package", function (t) { server.expect("GET", "/underscore?write=true", function (req, res) { t.equal(req.method, "GET") @@ -52,7 +54,7 @@ tap.test("star a package", function (t) { }) client.star("http://localhost:1337/underscore", true, function (error, data) { - t.notOk(error, "no errors") + t.ifError(error, "no errors") t.ok(data.starred, "was starred") t.end() diff --git a/deps/npm/node_modules/npm-registry-client/test/stars.js b/deps/npm/node_modules/npm-registry-client/test/stars.js index ae1ddbb49d6..28f8a98d766 100644 --- a/deps/npm/node_modules/npm-registry-client/test/stars.js +++ b/deps/npm/node_modules/npm-registry-client/test/stars.js @@ -2,13 +2,7 @@ var tap = require("tap") var server = require("./lib/server.js") var common = require("./lib/common.js") -var client = common.freshClient({ - username : "username", - password : "%1234@asdf%", - email : "ogd@aoaioxxysz.net", - _auth : new Buffer("username:%1234@asdf%").toString("base64"), - "always-auth" : true -}) +var client = common.freshClient() var users = [ "benjamincoe", @@ -24,7 +18,7 @@ tap.test("get the URL for the bugs page on a package", function (t) { }) client.stars("http://localhost:1337/", "sample", function (error, info) { - t.notOk(error, "no errors") + t.ifError(error, "no errors") t.deepEqual(info, users, "got the list of users") t.end() diff --git a/deps/npm/node_modules/npm-registry-client/test/tag.js b/deps/npm/node_modules/npm-registry-client/test/tag.js index 216ac6c520b..7551569307b 100644 --- a/deps/npm/node_modules/npm-registry-client/test/tag.js +++ b/deps/npm/node_modules/npm-registry-client/test/tag.js @@ -2,13 +2,15 @@ var tap = require("tap") var server = require("./lib/server.js") var common = require("./lib/common.js") -var client = common.freshClient({ - username : "username", - password : "%1234@asdf%", - email : "ogd@aoaioxxysz.net", - _auth : new 
Buffer("username:%1234@asdf%").toString("base64"), - "always-auth" : true -}) + +var nerfed = "//localhost:" + server.port + "/:" + +var configuration = {} +configuration[nerfed + "username"] = "username" +configuration[nerfed + "_password"] = new Buffer("%1234@asdf%").toString("base64") +configuration[nerfed + "email"] = "i@izs.me" + +var client = common.freshClient(configuration) tap.test("tag a package", function (t) { server.expect("PUT", "/underscore/not-lodash", function (req, res) { @@ -31,7 +33,7 @@ tap.test("tag a package", function (t) { }) client.tag("http://localhost:1337/underscore", {"1.3.2":{}}, "not-lodash", function (error, data) { - t.notOk(error, "no errors") + t.ifError(error, "no errors") t.ok(data.tagged, "was tagged") t.end() diff --git a/deps/npm/node_modules/npm-registry-client/test/unpublish-scoped.js b/deps/npm/node_modules/npm-registry-client/test/unpublish-scoped.js new file mode 100644 index 00000000000..0e5cb8606d1 --- /dev/null +++ b/deps/npm/node_modules/npm-registry-client/test/unpublish-scoped.js @@ -0,0 +1,59 @@ +var tap = require("tap") + +var server = require("./lib/server.js") +var common = require("./lib/common.js") + +var nerfed = "//localhost:" + server.port + "/:" + +var configuration = {} +configuration[nerfed + "_authToken"] = "of-glad-tidings" + +var client = common.freshClient(configuration) + +var cache = require("./fixtures/@npm/npm-registry-client/cache.json") + +var REV = "/-rev/213-0a1049cf56172b7d9a1184742c6477b9" +var VERSION = "3.0.6" + +tap.test("unpublish a package", function (t) { + server.expect("GET", "/@npm%2fnpm-registry-client?write=true", function (req, res) { + t.equal(req.method, "GET") + + res.json(cache) + }) + + server.expect("PUT", "/@npm%2fnpm-registry-client" + REV, function (req, res) { + t.equal(req.method, "PUT") + + var b = "" + req.setEncoding("utf-8") + req.on("data", function (d) { + b += d + }) + + req.on("end", function () { + var updated = JSON.parse(b) + t.notOk(updated.versions[VERSION]) + }) + + res.json(cache) + }) + + server.expect("GET", "/@npm%2fnpm-registry-client", function (req, res) { + t.equal(req.method, "GET") + + res.json(cache) + }) + + server.expect("DELETE", "/@npm%2fnpm-registry-client/-/@npm%2fnpm-registry-client-" + VERSION + ".tgz" + REV, function (req, res) { + t.equal(req.method, "DELETE") + + res.json({unpublished:true}) + }) + + client.unpublish("http://localhost:1337/@npm%2fnpm-registry-client", VERSION, function (error) { + t.ifError(error, "no errors") + + t.end() + }) +}) diff --git a/deps/npm/node_modules/npm-registry-client/test/unpublish.js b/deps/npm/node_modules/npm-registry-client/test/unpublish.js index 47c5617c8ac..7a60431faca 100644 --- a/deps/npm/node_modules/npm-registry-client/test/unpublish.js +++ b/deps/npm/node_modules/npm-registry-client/test/unpublish.js @@ -2,13 +2,13 @@ var tap = require("tap") var server = require("./lib/server.js") var common = require("./lib/common.js") -var client = common.freshClient({ - username : "username", - password : "%1234@asdf%", - email : "ogd@aoaioxxysz.net", - _auth : new Buffer("username:%1234@asdf%").toString("base64"), - "always-auth" : true -}) + +var nerfed = "//localhost:" + server.port + "/:" + +var configuration = {} +configuration[nerfed + "_authToken"] = "of-glad-tidings" + +var client = common.freshClient(configuration) var cache = require("./fixtures/underscore/cache.json") @@ -52,7 +52,7 @@ tap.test("unpublish a package", function (t) { }) client.unpublish("http://localhost:1337/underscore", VERSION, function 
(error) { - t.notOk(error, "no errors") + t.ifError(error, "no errors") t.end() }) diff --git a/deps/npm/node_modules/npm-registry-client/test/upload.js b/deps/npm/node_modules/npm-registry-client/test/upload.js index 434ad36f019..fa197e3681d 100644 --- a/deps/npm/node_modules/npm-registry-client/test/upload.js +++ b/deps/npm/node_modules/npm-registry-client/test/upload.js @@ -7,13 +7,12 @@ var server = require("./lib/server.js") var cache = require("./fixtures/underscore/cache.json") -var client = common.freshClient({ - username : "username", - password : "%1234@asdf%", - email : "ogd@aoaioxxysz.net", - _auth : new Buffer("username:%1234@asdf%").toString("base64"), - "always-auth" : true -}) +var nerfed = "//localhost:" + server.port + "/:" + +var configuration = {} +configuration[nerfed + "_authToken"] = "of-glad-tidings" + +var client = common.freshClient(configuration) function OneA() { Readable.call(this) @@ -22,7 +21,7 @@ function OneA() { } inherits(OneA, Readable) -tap.test("unpublish a package", function (t) { +tap.test("uploading a tarball", function (t) { server.expect("PUT", "/underscore", function (req, res) { t.equal(req.method, "PUT") @@ -30,7 +29,7 @@ tap.test("unpublish a package", function (t) { }) client.upload("http://localhost:1337/underscore", new OneA(), "daedabeefa", true, function (error) { - t.notOk(error, "no errors") + t.ifError(error, "no errors") t.end() }) diff --git a/deps/npm/node_modules/npm-registry-client/test/whoami.js b/deps/npm/node_modules/npm-registry-client/test/whoami.js new file mode 100644 index 00000000000..f9c817684f2 --- /dev/null +++ b/deps/npm/node_modules/npm-registry-client/test/whoami.js @@ -0,0 +1,30 @@ +var tap = require("tap") + +var server = require("./lib/server.js") +var common = require("./lib/common.js") + +var nerfed = "//localhost:" + server.port + "/:" + +var configuration = {} +configuration[nerfed + "_authToken"] = "not-bad-meaning-bad-but-bad-meaning-wombat" + +var client = common.freshClient(configuration) + +var WHOIAM = "wombat" + +tap.test("whoami", function (t) { + server.expect("GET", "/whoami", function (req, res) { + t.equal(req.method, "GET") + // only available for token-based auth for now + t.equal(req.headers.authorization, "Bearer not-bad-meaning-bad-but-bad-meaning-wombat") + + res.json({username : WHOIAM}) + }) + + client.whoami(common.registry, function (error, wombat) { + t.ifError(error, "no errors") + t.equal(wombat, WHOIAM, "im a wombat") + + t.end() + }) +}) diff --git a/deps/npm/node_modules/npm-user-validate/package.json b/deps/npm/node_modules/npm-user-validate/package.json index b9377c77168..cf4440364be 100644 --- a/deps/npm/node_modules/npm-user-validate/package.json +++ b/deps/npm/node_modules/npm-user-validate/package.json @@ -1,6 +1,6 @@ { "name": "npm-user-validate", - "version": "0.1.0", + "version": "0.1.1", "description": "User validations for npm", "main": "npm-user-validate.js", "devDependencies": { @@ -11,7 +11,7 @@ }, "repository": { "type": "git", - "url": "git://github.com/npm/npm-registry-mock.git" + "url": "git://github.com/npm/npm-user-validate.git" }, "keywords": [ "npm", @@ -23,13 +23,34 @@ "email": "rok@kowalski.gd" }, "license": "BSD", - "readme": "[![Build Status](https://travis-ci.org/npm/npm-user-validate.png?branch=master)](https://travis-ci.org/npm/npm-user-validate)\n[![devDependency Status](https://david-dm.org/npm/npm-user-validate/dev-status.png)](https://david-dm.org/npm/npm-user-validate#info=devDependencies)\n\n# npm-user-validate\n\nValidation for the npm client and 
npm-www (and probably other npm projects)\n", - "readmeFilename": "README.md", + "gitHead": "64c9bd4ded742c41afdb3a8414fbbfdbfdcdf6b7", "bugs": { - "url": "https://github.com/npm/npm-registry-mock/issues" + "url": "https://github.com/npm/npm-user-validate/issues" }, - "homepage": "https://github.com/npm/npm-registry-mock", - "_id": "npm-user-validate@0.1.0", - "_shasum": "358a5b5148ed3f79771d980388c6e34c4a61f638", - "_from": "npm-user-validate@latest" + "homepage": "https://github.com/npm/npm-user-validate", + "_id": "npm-user-validate@0.1.1", + "_shasum": "ea7774636c3c8fe6d01e174bd9f2ee0e22eeed57", + "_from": "npm-user-validate@>=0.1.1 <0.2.0", + "_npmVersion": "2.1.3", + "_nodeVersion": "0.10.31", + "_npmUser": { + "name": "isaacs", + "email": "i@izs.me" + }, + "maintainers": [ + { + "name": "robertkowalski", + "email": "rok@kowalski.gd" + }, + { + "name": "isaacs", + "email": "i@izs.me" + } + ], + "dist": { + "shasum": "ea7774636c3c8fe6d01e174bd9f2ee0e22eeed57", + "tarball": "http://registry.npmjs.org/npm-user-validate/-/npm-user-validate-0.1.1.tgz" + }, + "directories": {}, + "_resolved": "https://registry.npmjs.org/npm-user-validate/-/npm-user-validate-0.1.1.tgz" } diff --git a/deps/npm/node_modules/npmconf/.npmignore b/deps/npm/node_modules/npmconf/.npmignore deleted file mode 100644 index baa471ca800..00000000000 --- a/deps/npm/node_modules/npmconf/.npmignore +++ /dev/null @@ -1 +0,0 @@ -/test/fixtures/userconfig-with-gc diff --git a/deps/npm/node_modules/npmconf/README.md b/deps/npm/node_modules/npmconf/README.md deleted file mode 100644 index afc995d1afe..00000000000 --- a/deps/npm/node_modules/npmconf/README.md +++ /dev/null @@ -1,33 +0,0 @@ -# npmconf - -The config thing npm uses - -If you are interested in interacting with the config settings that npm -uses, then use this module. - -However, if you are writing a new Node.js program, and want -configuration functionality similar to what npm has, but for your -own thing, then I'd recommend using [rc](https://github.com/dominictarr/rc), -which is probably what you want. - -If I were to do it all over again, that's what I'd do for npm. But, -alas, there are many systems depending on many of the particulars of -npm's configuration setup, so it's not worth the cost of changing. - -## USAGE - -```javascript -var npmconf = require('npmconf') - -// pass in the cli options that you read from the cli -// or whatever top-level configs you want npm to use for now. 
-npmconf.load({some:'configs'}, function (er, conf) { - // do stuff with conf - conf.get('some', 'cli') // 'configs' - conf.get('username') // 'joebobwhatevers' - conf.set('foo', 'bar', 'user') - conf.save('user', function (er) { - // foo = bar is now saved to ~/.npmrc or wherever - }) -}) -``` diff --git a/deps/npm/node_modules/npmconf/package.json b/deps/npm/node_modules/npmconf/package.json deleted file mode 100644 index 55daab66e1e..00000000000 --- a/deps/npm/node_modules/npmconf/package.json +++ /dev/null @@ -1,53 +0,0 @@ -{ - "name": "npmconf", - "version": "1.1.8", - "description": "The config thing npm uses", - "main": "npmconf.js", - "directories": { - "test": "test" - }, - "dependencies": { - "config-chain": "~1.1.8", - "inherits": "~2.0.0", - "ini": "^1.2.0", - "mkdirp": "^0.5.0", - "nopt": "~3.0.1", - "once": "~1.3.0", - "osenv": "^0.1.0", - "semver": "2", - "uid-number": "0.0.5" - }, - "devDependencies": { - "tap": "~0.4.0" - }, - "scripts": { - "test": "tap test/*.js" - }, - "repository": { - "type": "git", - "url": "git://github.com/isaacs/npmconf" - }, - "keywords": [ - "npm", - "config", - "config-chain", - "conf", - "ini" - ], - "author": { - "name": "Isaac Z. Schlueter", - "email": "i@izs.me", - "url": "http://blog.izs.me" - }, - "license": "ISC", - "readme": "# npmconf\n\nThe config thing npm uses\n\nIf you are interested in interacting with the config settings that npm\nuses, then use this module.\n\nHowever, if you are writing a new Node.js program, and want\nconfiguration functionality similar to what npm has, but for your\nown thing, then I'd recommend using [rc](https://github.com/dominictarr/rc),\nwhich is probably what you want.\n\nIf I were to do it all over again, that's what I'd do for npm. But,\nalas, there are many systems depending on many of the particulars of\nnpm's configuration setup, so it's not worth the cost of changing.\n\n## USAGE\n\n```javascript\nvar npmconf = require('npmconf')\n\n// pass in the cli options that you read from the cli\n// or whatever top-level configs you want npm to use for now.\nnpmconf.load({some:'configs'}, function (er, conf) {\n // do stuff with conf\n conf.get('some', 'cli') // 'configs'\n conf.get('username') // 'joebobwhatevers'\n conf.set('foo', 'bar', 'user')\n conf.save('user', function (er) {\n // foo = bar is now saved to ~/.npmrc or wherever\n })\n})\n```\n", - "readmeFilename": "README.md", - "gitHead": "98e8ed0e2a307470f8db14d2727a165d8524b567", - "bugs": { - "url": "https://github.com/isaacs/npmconf/issues" - }, - "homepage": "https://github.com/isaacs/npmconf", - "_id": "npmconf@1.1.8", - "_shasum": "350e3d7a4da8e4958dfd0391c81e9a750b01cde2", - "_from": "npmconf@^1.1.8" -} diff --git a/deps/npm/node_modules/npmconf/test/00-setup.js b/deps/npm/node_modules/npmconf/test/00-setup.js deleted file mode 100644 index d009e81eb6b..00000000000 --- a/deps/npm/node_modules/npmconf/test/00-setup.js +++ /dev/null @@ -1,43 +0,0 @@ -var path = require('path') -var userconfigSrc = path.resolve(__dirname, 'fixtures', 'userconfig') -exports.userconfig = userconfigSrc + '-with-gc' -exports.globalconfig = path.resolve(__dirname, 'fixtures', 'globalconfig') -exports.builtin = path.resolve(__dirname, 'fixtures', 'builtin') - -// set the userconfig in the env -// unset anything else that npm might be trying to foist on us -Object.keys(process.env).forEach(function (k) { - if (k.match(/^npm_config_/i)) { - delete process.env[k] - } -}) -process.env.npm_config_userconfig = exports.userconfig -process.env.npm_config_other_env_thing = 
1000 -process.env.random_env_var = 'asdf' -process.env.npm_config__underbar_env_thing = 'underful' -process.env.NPM_CONFIG_UPPERCASE_ENV_THING = 42 - -exports.envData = { - userconfig: exports.userconfig, - '_underbar-env-thing': 'underful', - 'uppercase-env-thing': '42', - 'other-env-thing': '1000' -} -exports.envDataFix = { - userconfig: exports.userconfig, - '_underbar-env-thing': 'underful', - 'uppercase-env-thing': 42, - 'other-env-thing': 1000, -} - - -if (module === require.main) { - // set the globalconfig in the userconfig - var fs = require('fs') - var uc = fs.readFileSync(userconfigSrc) - var gcini = 'globalconfig = ' + exports.globalconfig + '\n' - fs.writeFileSync(exports.userconfig, gcini + uc) - - console.log('0..1') - console.log('ok 1 setup done') -} diff --git a/deps/npm/node_modules/npmconf/test/basic.js b/deps/npm/node_modules/npmconf/test/basic.js deleted file mode 100644 index 29d708b3a67..00000000000 --- a/deps/npm/node_modules/npmconf/test/basic.js +++ /dev/null @@ -1,83 +0,0 @@ -var test = require('tap').test -var npmconf = require('../npmconf.js') -var common = require('./00-setup.js') -var path = require('path') - -var projectData = {} - -var ucData = - { globalconfig: common.globalconfig, - email: 'i@izs.me', - 'env-thing': 'asdf', - 'init.author.name': 'Isaac Z. Schlueter', - 'init.author.email': 'i@izs.me', - 'init.author.url': 'http://blog.izs.me/', - 'proprietary-attribs': false, - 'npm:publishtest': true, - '_npmjs.org:couch': 'https://admin:password@localhost:5984/registry', - _auth: 'dXNlcm5hbWU6cGFzc3dvcmQ=', - 'npm-www:nocache': '1', - nodedir: '/Users/isaacs/dev/js/node-v0.8', - 'sign-git-tag': true, - message: 'v%s', - 'strict-ssl': false, - 'tmp': process.env.HOME + '/.tmp', - username : "username", - _password : "password", - _token: - { AuthSession: 'yabba-dabba-doodle', - version: '1', - expires: '1345001053415', - path: '/', - httponly: true } } - -var envData = common.envData -var envDataFix = common.envDataFix - -var gcData = { 'package-config:foo': 'boo' } - -var biData = {} - -var cli = { foo: 'bar', umask: 022 } - -var expectList = -[ cli, - envDataFix, - projectData, - ucData, - gcData, - biData ] - -var expectSources = -{ cli: { data: cli }, - env: - { data: envDataFix, - source: envData, - prefix: '' }, - project: - { path: path.resolve(__dirname, '..', '.npmrc'), - type: 'ini', - data: projectData }, - user: - { path: common.userconfig, - type: 'ini', - data: ucData }, - global: - { path: common.globalconfig, - type: 'ini', - data: gcData }, - builtin: { data: biData } } - -test('no builtin', function (t) { - npmconf.load(cli, function (er, conf) { - if (er) throw er - t.same(conf.list, expectList) - t.same(conf.sources, expectSources) - t.same(npmconf.rootConf.list, []) - t.equal(npmconf.rootConf.root, npmconf.defs.defaults) - t.equal(conf.root, npmconf.defs.defaults) - t.equal(conf.get('umask'), 022) - t.equal(conf.get('heading'), 'npm') - t.end() - }) -}) diff --git a/deps/npm/node_modules/npmconf/test/builtin.js b/deps/npm/node_modules/npmconf/test/builtin.js deleted file mode 100644 index 15cb9083aaf..00000000000 --- a/deps/npm/node_modules/npmconf/test/builtin.js +++ /dev/null @@ -1,83 +0,0 @@ -var test = require('tap').test -var npmconf = require('../npmconf.js') -var common = require('./00-setup.js') -var path = require('path') - -var ucData = - { globalconfig: common.globalconfig, - email: 'i@izs.me', - 'env-thing': 'asdf', - 'init.author.name': 'Isaac Z. 
Schlueter', - 'init.author.email': 'i@izs.me', - 'init.author.url': 'http://blog.izs.me/', - 'proprietary-attribs': false, - 'npm:publishtest': true, - '_npmjs.org:couch': 'https://admin:password@localhost:5984/registry', - _auth: 'dXNlcm5hbWU6cGFzc3dvcmQ=', - 'npm-www:nocache': '1', - nodedir: '/Users/isaacs/dev/js/node-v0.8', - 'sign-git-tag': true, - message: 'v%s', - 'strict-ssl': false, - 'tmp': process.env.HOME + '/.tmp', - username : "username", - _password : "password", - _token: - { AuthSession: 'yabba-dabba-doodle', - version: '1', - expires: '1345001053415', - path: '/', - httponly: true } } - -var envData = common.envData -var envDataFix = common.envDataFix - -var gcData = { 'package-config:foo': 'boo' } - -var biData = { 'builtin-config': true } - -var cli = { foo: 'bar', heading: 'foo', 'git-tag-version': false } - -var projectData = {} - -var expectList = -[ cli, - envDataFix, - projectData, - ucData, - gcData, - biData ] - -var expectSources = -{ cli: { data: cli }, - env: - { data: envDataFix, - source: envData, - prefix: '' }, - project: - { path: path.resolve(__dirname, '..', '.npmrc'), - type: 'ini', - data: projectData }, - user: - { path: common.userconfig, - type: 'ini', - data: ucData }, - global: - { path: common.globalconfig, - type: 'ini', - data: gcData }, - builtin: { data: biData } } - -test('with builtin', function (t) { - npmconf.load(cli, common.builtin, function (er, conf) { - if (er) throw er - t.same(conf.list, expectList) - t.same(conf.sources, expectSources) - t.same(npmconf.rootConf.list, []) - t.equal(npmconf.rootConf.root, npmconf.defs.defaults) - t.equal(conf.root, npmconf.defs.defaults) - t.equal(conf.get('heading'), 'foo') - t.equal(conf.get('git-tag-version'), false) - t.end() - }) -}) diff --git a/deps/npm/node_modules/npmconf/test/certfile.js b/deps/npm/node_modules/npmconf/test/certfile.js deleted file mode 100644 index 3dfb6e90f98..00000000000 --- a/deps/npm/node_modules/npmconf/test/certfile.js +++ /dev/null @@ -1,17 +0,0 @@ -var test = require('tap').test -var npmconf = require('../npmconf.js') -var common = require('./00-setup.js') -var path = require('path') -var fs = require('fs') - -test('cafile loads as ca', function (t) { - var cafile = path.join(__dirname, 'fixtures', 'multi-ca') - - npmconf.load({cafile: cafile}, function (er, conf) { - if (er) throw er - - t.same(conf.get('cafile'), cafile) - t.same(conf.get('ca').join('\n'), fs.readFileSync(cafile, 'utf8').trim()) - t.end() - }) -}) diff --git a/deps/npm/node_modules/npmconf/test/project.js b/deps/npm/node_modules/npmconf/test/project.js deleted file mode 100644 index fa21e43d22b..00000000000 --- a/deps/npm/node_modules/npmconf/test/project.js +++ /dev/null @@ -1,85 +0,0 @@ -var test = require('tap').test -var npmconf = require('../npmconf.js') -var common = require('./00-setup.js') -var path = require('path') -var fix = path.resolve(__dirname, 'fixtures') -var projectRc = path.resolve(fix, '.npmrc') - -var projectData = { just: 'testing' } - -var ucData = - { globalconfig: common.globalconfig, - email: 'i@izs.me', - 'env-thing': 'asdf', - 'init.author.name': 'Isaac Z. 
Schlueter', - 'init.author.email': 'i@izs.me', - 'init.author.url': 'http://blog.izs.me/', - 'proprietary-attribs': false, - 'npm:publishtest': true, - '_npmjs.org:couch': 'https://admin:password@localhost:5984/registry', - _auth: 'dXNlcm5hbWU6cGFzc3dvcmQ=', - 'npm-www:nocache': '1', - nodedir: '/Users/isaacs/dev/js/node-v0.8', - 'sign-git-tag': true, - message: 'v%s', - 'strict-ssl': false, - 'tmp': process.env.HOME + '/.tmp', - username : "username", - _password : "password", - _token: - { AuthSession: 'yabba-dabba-doodle', - version: '1', - expires: '1345001053415', - path: '/', - httponly: true } } - -var envData = common.envData -var envDataFix = common.envDataFix - -var gcData = { 'package-config:foo': 'boo' } - -var biData = {} - -var cli = { foo: 'bar', umask: 022, prefix: fix } - -var expectList = -[ cli, - envDataFix, - projectData, - ucData, - gcData, - biData ] - -var expectSources = -{ cli: { data: cli }, - env: - { data: envDataFix, - source: envData, - prefix: '' }, - project: - { path: projectRc, - type: 'ini', - data: projectData }, - user: - { path: common.userconfig, - type: 'ini', - data: ucData }, - global: - { path: common.globalconfig, - type: 'ini', - data: gcData }, - builtin: { data: biData } } - -test('no builtin', function (t) { - npmconf.load(cli, function (er, conf) { - if (er) throw er - t.same(conf.list, expectList) - t.same(conf.sources, expectSources) - t.same(npmconf.rootConf.list, []) - t.equal(npmconf.rootConf.root, npmconf.defs.defaults) - t.equal(conf.root, npmconf.defs.defaults) - t.equal(conf.get('umask'), 022) - t.equal(conf.get('heading'), 'npm') - t.end() - }) -}) diff --git a/deps/npm/node_modules/npmconf/test/save.js b/deps/npm/node_modules/npmconf/test/save.js deleted file mode 100644 index 64b114449ee..00000000000 --- a/deps/npm/node_modules/npmconf/test/save.js +++ /dev/null @@ -1,84 +0,0 @@ -var test = require('tap').test -var npmconf = require('../npmconf.js') -var common = require('./00-setup.js') -var fs = require('fs') -var ini = require('ini') -var expectConf = - [ 'globalconfig = ' + common.globalconfig, - 'email = i@izs.me', - 'env-thing = asdf', - 'init.author.name = Isaac Z. Schlueter', - 'init.author.email = i@izs.me', - 'init.author.url = http://blog.izs.me/', - 'proprietary-attribs = false', - 'npm:publishtest = true', - '_npmjs.org:couch = https://admin:password@localhost:5984/registry', - '_auth = dXNlcm5hbWU6cGFzc3dvcmQ=', - 'npm-www:nocache = 1', - 'sign-git-tag = false', - 'message = v%s', - 'strict-ssl = false', - 'username = username', - '_password = password', - '', - '[_token]', - 'AuthSession = yabba-dabba-doodle', - 'version = 1', - 'expires = 1345001053415', - 'path = /', - 'httponly = true', - '' ].join('\n') -var expectFile = - [ 'globalconfig = ' + common.globalconfig, - 'email = i@izs.me', - 'env-thing = asdf', - 'init.author.name = Isaac Z. 
Schlueter', - 'init.author.email = i@izs.me', - 'init.author.url = http://blog.izs.me/', - 'proprietary-attribs = false', - 'npm:publishtest = true', - '_npmjs.org:couch = https://admin:password@localhost:5984/registry', - '_auth = dXNlcm5hbWU6cGFzc3dvcmQ=', - 'npm-www:nocache = 1', - 'sign-git-tag = false', - 'message = v%s', - 'strict-ssl = false', - '', - '[_token]', - 'AuthSession = yabba-dabba-doodle', - 'version = 1', - 'expires = 1345001053415', - 'path = /', - 'httponly = true', - '' ].join('\n') - -test('saving configs', function (t) { - npmconf.load(function (er, conf) { - if (er) - throw er - conf.set('sign-git-tag', false, 'user') - conf.del('nodedir') - conf.del('tmp') - var foundConf = ini.stringify(conf.sources.user.data) - t.same(ini.parse(foundConf), ini.parse(expectConf)) - fs.unlinkSync(common.userconfig) - conf.save('user', function (er) { - if (er) - throw er - var uc = fs.readFileSync(conf.get('userconfig'), 'utf8') - t.same(ini.parse(uc), ini.parse(expectFile)) - t.end() - }) - }) -}) - -test('setting prefix', function (t) { - npmconf.load(function (er, conf) { - if (er) - throw er - - conf.prefix = 'newvalue' - t.same(conf.prefix, 'newvalue'); - t.end(); - }) -}) diff --git a/deps/npm/node_modules/once/once.js b/deps/npm/node_modules/once/once.js index 0770a73cdaa..2e1e721bfec 100644 --- a/deps/npm/node_modules/once/once.js +++ b/deps/npm/node_modules/once/once.js @@ -1,4 +1,5 @@ -module.exports = once +var wrappy = require('wrappy') +module.exports = wrappy(once) once.proto = once(function () { Object.defineProperty(Function.prototype, 'once', { diff --git a/deps/npm/node_modules/once/package.json b/deps/npm/node_modules/once/package.json index 96af3dedfe1..eb8a4217a27 100644 --- a/deps/npm/node_modules/once/package.json +++ b/deps/npm/node_modules/once/package.json @@ -1,12 +1,14 @@ { "name": "once", - "version": "1.3.0", + "version": "1.3.1", "description": "Run a function exactly one time", "main": "once.js", "directories": { "test": "test" }, - "dependencies": {}, + "dependencies": { + "wrappy": "1" + }, "devDependencies": { "tap": "~0.3.0" }, @@ -29,11 +31,30 @@ "url": "http://blog.izs.me/" }, "license": "BSD", - "readme": "# once\n\nOnly call a function once.\n\n## usage\n\n```javascript\nvar once = require('once')\n\nfunction load (file, cb) {\n cb = once(cb)\n loader.load('file')\n loader.once('load', cb)\n loader.once('error', cb)\n}\n```\n\nOr add to the Function.prototype in a responsible way:\n\n```javascript\n// only has to be done once\nrequire('once').proto()\n\nfunction load (file, cb) {\n cb = cb.once()\n loader.load('file')\n loader.once('load', cb)\n loader.once('error', cb)\n}\n```\n\nIronically, the prototype feature makes this module twice as\ncomplicated as necessary.\n\nTo check whether you function has been called, use `fn.called`. 
Once the\nfunction is called for the first time the return value of the original\nfunction is saved in `fn.value` and subsequent calls will continue to\nreturn this value.\n\n```javascript\nvar once = require('once')\n\nfunction load (cb) {\n cb = once(cb)\n var stream = createStream()\n stream.once('data', cb)\n stream.once('end', function () {\n if (!cb.called) cb(new Error('not found'))\n })\n}\n```\n", - "readmeFilename": "README.md", + "gitHead": "c90ac02a74f433ce47f6938869e68dd6196ffc2c", "bugs": { "url": "https://github.com/isaacs/once/issues" }, - "_id": "once@1.3.0", - "_from": "once@latest" + "homepage": "https://github.com/isaacs/once", + "_id": "once@1.3.1", + "_shasum": "f3f3e4da5b7d27b5c732969ee3e67e729457b31f", + "_from": "once@>=1.3.1 <2.0.0", + "_npmVersion": "2.0.0", + "_nodeVersion": "0.10.31", + "_npmUser": { + "name": "isaacs", + "email": "i@izs.me" + }, + "maintainers": [ + { + "name": "isaacs", + "email": "i@izs.me" + } + ], + "dist": { + "shasum": "f3f3e4da5b7d27b5c732969ee3e67e729457b31f", + "tarball": "http://registry.npmjs.org/once/-/once-1.3.1.tgz" + }, + "_resolved": "https://registry.npmjs.org/once/-/once-1.3.1.tgz", + "readme": "ERROR: No README data found!" } diff --git a/deps/npm/node_modules/once/test/once.js b/deps/npm/node_modules/once/test/once.js index a77951f110c..c618360dfae 100644 --- a/deps/npm/node_modules/once/test/once.js +++ b/deps/npm/node_modules/once/test/once.js @@ -3,11 +3,14 @@ var once = require('../once.js') test('once', function (t) { var f = 0 - var foo = once(function (g) { + function fn (g) { t.equal(f, 0) f ++ return f + g + this - }) + } + fn.ownProperty = {} + var foo = once(fn) + t.equal(fn.ownProperty, foo.ownProperty) t.notOk(foo.called) for (var i = 0; i < 1E3; i++) { t.same(f, i === 0 ? 0 : 1) diff --git a/deps/npm/node_modules/opener/LICENSE.txt b/deps/npm/node_modules/opener/LICENSE.txt index 60008853930..0407ecda831 100644 --- a/deps/npm/node_modules/opener/LICENSE.txt +++ b/deps/npm/node_modules/opener/LICENSE.txt @@ -1,14 +1,19 @@ - DO WHAT THE FUCK YOU WANT TO PUBLIC LICENSE - Version 2, December 2004 - - Copyright (C) 2012 Domenic Denicola - - Everyone is permitted to copy and distribute verbatim or modified - copies of this license document, and changing it is allowed as long - as the name is changed. - - DO WHAT THE FUCK YOU WANT TO PUBLIC LICENSE - TERMS AND CONDITIONS FOR COPYING, DISTRIBUTION AND MODIFICATION - - 0. You just DO WHAT THE FUCK YOU WANT TO. - +Copyright © 2012–2014 Domenic Denicola + +This work is free. You can redistribute it and/or modify it under the +terms of the Do What The Fuck You Want To Public License, Version 2, +as published by Sam Hocevar. See below for more details. + + DO WHAT THE FUCK YOU WANT TO PUBLIC LICENSE + Version 2, December 2004 + + Copyright (C) 2004 Sam Hocevar + + Everyone is permitted to copy and distribute verbatim or modified + copies of this license document, and changing it is allowed as long + as the name is changed. + + DO WHAT THE FUCK YOU WANT TO PUBLIC LICENSE + TERMS AND CONDITIONS FOR COPYING, DISTRIBUTION AND MODIFICATION + + 0. You just DO WHAT THE FUCK YOU WANT TO. diff --git a/deps/npm/node_modules/opener/README.md b/deps/npm/node_modules/opener/README.md index fc643654456..8a803f3384e 100644 --- a/deps/npm/node_modules/opener/README.md +++ b/deps/npm/node_modules/opener/README.md @@ -1,44 +1,57 @@ -# It Opens Stuff - -That is, in your desktop environment. 
This will make *actual windows pop up*, with stuff in them: - -```bash -npm install opener -g - -opener http://google.com -opener ./my-file.txt -opener firefox -opener npm run lint -``` - -Also if you want to use it programmatically you can do that too: - -```js -var opener = require("opener"); - -opener("http://google.com"); -opener("./my-file.txt"); -opener("firefox"); -opener("npm run lint"); -``` - -## Use It for Good - -Like opening the user's browser with a test harness in your package's test script: - -```json -{ - "scripts": { - "test": "opener ./test/runner.html" - }, - "devDependencies": { - "opener": "*" - } -} -``` - -## Why - -Because Windows has `start`, Macs have `open`, and *nix has `xdg-open`. At least -[according to some guy on StackOverflow](http://stackoverflow.com/q/1480971/3191). And I like things that work on all -three. Like Node.js. And Opener. +# It Opens Stuff + +That is, in your desktop environment. This will make *actual windows pop up*, with stuff in them: + +```bash +npm install opener -g + +opener http://google.com +opener ./my-file.txt +opener firefox +opener npm run lint +``` + +Also if you want to use it programmatically you can do that too: + +```js +var opener = require("opener"); + +opener("http://google.com"); +opener("./my-file.txt"); +opener("firefox"); +opener("npm run lint"); +``` + +Plus, it returns the child process created, so you can do things like let your script exit while the window stays open: + +```js +var editor = opener("documentation.odt"); +editor.unref(); +// These other unrefs may be necessary if your OS's opener process +// exits before the process it started is complete. +editor.stdin.unref(); +editor.stdout.unref(); +editor.stderr.unref(); +``` + + +## Use It for Good + +Like opening the user's browser with a test harness in your package's test script: + +```json +{ + "scripts": { + "test": "opener ./test/runner.html" + }, + "devDependencies": { + "opener": "*" + } +} +``` + +## Why + +Because Windows has `start`, Macs have `open`, and *nix has `xdg-open`. At least +[according to some guy on StackOverflow](http://stackoverflow.com/q/1480971/3191). And I like things that work on all +three. Like Node.js. And Opener. diff --git a/deps/npm/node_modules/opener/opener.js b/deps/npm/node_modules/opener/opener.js index 5477da52b64..3f95d0635a5 100755 --- a/deps/npm/node_modules/opener/opener.js +++ b/deps/npm/node_modules/opener/opener.js @@ -38,7 +38,7 @@ function opener(args, options, callback) { args = ["/c", "start", '""'].concat(args); } - childProcess.execFile(command, args, options, callback); + return childProcess.execFile(command, args, options, callback); } // Export `opener` for programmatic access. diff --git a/deps/npm/node_modules/opener/package.json b/deps/npm/node_modules/opener/package.json index 0d18470bd62..b62915e6ef9 100644 --- a/deps/npm/node_modules/opener/package.json +++ b/deps/npm/node_modules/opener/package.json @@ -1,11 +1,11 @@ { "name": "opener", "description": "Opens stuff, like webpages and files and executables, cross-platform", - "version": "1.3.0", + "version": "1.4.0", "author": { "name": "Domenic Denicola", "email": "domenic@domenicdenicola.com", - "url": "http://domenicdenicola.com" + "url": "http://domenic.me/" }, "license": "WTFPL", "repository": { @@ -23,12 +23,28 @@ "lint": "jshint opener.js" }, "devDependencies": { - "jshint": ">= 0.9.0" + "jshint": "^2.5.4" }, - "readme": "# It Opens Stuff\r\n\r\nThat is, in your desktop environment. 
This will make *actual windows pop up*, with stuff in them:\r\n\r\n```bash\r\nnpm install opener -g\r\n\r\nopener http://google.com\r\nopener ./my-file.txt\r\nopener firefox\r\nopener npm run lint\r\n```\r\n\r\nAlso if you want to use it programmatically you can do that too:\r\n\r\n```js\r\nvar opener = require(\"opener\");\r\n\r\nopener(\"http://google.com\");\r\nopener(\"./my-file.txt\");\r\nopener(\"firefox\");\r\nopener(\"npm run lint\");\r\n```\r\n\r\n## Use It for Good\r\n\r\nLike opening the user's browser with a test harness in your package's test script:\r\n\r\n```json\r\n{\r\n \"scripts\": {\r\n \"test\": \"opener ./test/runner.html\"\r\n },\r\n \"devDependencies\": {\r\n \"opener\": \"*\"\r\n }\r\n}\r\n```\r\n\r\n## Why\r\n\r\nBecause Windows has `start`, Macs have `open`, and *nix has `xdg-open`. At least\r\n[according to some guy on StackOverflow](http://stackoverflow.com/q/1480971/3191). And I like things that work on all\r\nthree. Like Node.js. And Opener.\r\n", - "_id": "opener@1.3.0", + "gitHead": "b9d36d4f82c26560acdadbabbb10ddba46a30dc5", + "homepage": "https://github.com/domenic/opener", + "_id": "opener@1.4.0", + "_shasum": "d11f86eeeb076883735c9d509f538fe82d10b941", + "_from": "opener@>=1.4.0 <1.5.0", + "_npmVersion": "1.4.23", + "_npmUser": { + "name": "domenic", + "email": "domenic@domenicdenicola.com" + }, + "maintainers": [ + { + "name": "domenic", + "email": "domenic@domenicdenicola.com" + } + ], "dist": { - "shasum": "d72b4b2e61b0a4ca7822a7554070620002fb90d9" + "shasum": "d11f86eeeb076883735c9d509f538fe82d10b941", + "tarball": "http://registry.npmjs.org/opener/-/opener-1.4.0.tgz" }, - "_from": "opener@latest" + "directories": {}, + "_resolved": "https://registry.npmjs.org/opener/-/opener-1.4.0.tgz" } diff --git a/deps/npm/node_modules/read-installed/node_modules/debuglog/LICENSE b/deps/npm/node_modules/read-installed/node_modules/debuglog/LICENSE new file mode 100644 index 00000000000..a3187cc1002 --- /dev/null +++ b/deps/npm/node_modules/read-installed/node_modules/debuglog/LICENSE @@ -0,0 +1,19 @@ +Copyright Joyent, Inc. and other Node contributors. All rights reserved. + +Permission is hereby granted, free of charge, to any person obtaining a copy +of this software and associated documentation files (the "Software"), to +deal in the Software without restriction, including without limitation the +rights to use, copy, modify, merge, publish, distribute, sublicense, and/or +sell copies of the Software, and to permit persons to whom the Software is +furnished to do so, subject to the following conditions: + +The above copyright notice and this permission notice shall be included in +all copies or substantial portions of the Software. + +THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR +IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, +FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE +AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER +LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING +FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS +IN THE SOFTWARE. 
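The new files here vendor the `debuglog` helper into read-installed's dependency tree; the updated read-installed.js at the end of this changeset pulls it in as `require("debuglog")("read-installed")`. As a quick orientation before the README below, a minimal usage sketch (not part of the patch itself): the module hands back the built-in `util.debuglog` on node v0.11+ and falls back to the bundled backport on older nodes.

```javascript
var debuglog = require('debuglog')

// util.debuglog on node v0.11+, the bundled backport otherwise.
var debug = debuglog('read-installed')

// Writes "READ-INSTALLED <pid>: scanning node_modules" to stderr only when
// NODE_DEBUG includes "read-installed"; otherwise debug() is a no-op.
debug('scanning %s', 'node_modules')
```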
diff --git a/deps/npm/node_modules/read-installed/node_modules/debuglog/README.md b/deps/npm/node_modules/read-installed/node_modules/debuglog/README.md new file mode 100644 index 00000000000..dc6fccecc32 --- /dev/null +++ b/deps/npm/node_modules/read-installed/node_modules/debuglog/README.md @@ -0,0 +1,40 @@ +# debuglog - backport of util.debuglog() from node v0.11 + +To facilitate using the `util.debuglog()` function that will be available when +node v0.12 is released now, this is a copy extracted from the source. + +## require('debuglog') + +Return `util.debuglog`, if it exists, otherwise it will return an internal copy +of the implementation from node v0.11. + +## debuglog(section) + +* `section` {String} The section of the program to be debugged +* Returns: {Function} The logging function + +This is used to create a function which conditionally writes to stderr +based on the existence of a `NODE_DEBUG` environment variable. If the +`section` name appears in that environment variable, then the returned +function will be similar to `console.error()`. If not, then the +returned function is a no-op. + +For example: + +```javascript +var debuglog = util.debuglog('foo'); + +var bar = 123; +debuglog('hello from foo [%d]', bar); +``` + +If this program is run with `NODE_DEBUG=foo` in the environment, then +it will output something like: + + FOO 3245: hello from foo [123] + +where `3245` is the process id. If it is not run with that +environment variable set, then it will not print anything. + +You may separate multiple `NODE_DEBUG` environment variables with a +comma. For example, `NODE_DEBUG=fs,net,tls`. diff --git a/deps/npm/node_modules/read-installed/node_modules/debuglog/debuglog.js b/deps/npm/node_modules/read-installed/node_modules/debuglog/debuglog.js new file mode 100644 index 00000000000..748fd72a1a6 --- /dev/null +++ b/deps/npm/node_modules/read-installed/node_modules/debuglog/debuglog.js @@ -0,0 +1,22 @@ +var util = require('util'); + +module.exports = (util && util.debuglog) || debuglog; + +var debugs = {}; +var debugEnviron = process.env.NODE_DEBUG || ''; + +function debuglog(set) { + set = set.toUpperCase(); + if (!debugs[set]) { + if (new RegExp('\\b' + set + '\\b', 'i').test(debugEnviron)) { + var pid = process.pid; + debugs[set] = function() { + var msg = util.format.apply(exports, arguments); + console.error('%s %d: %s', set, pid, msg); + }; + } else { + debugs[set] = function() {}; + } + } + return debugs[set]; +}; diff --git a/deps/npm/node_modules/read-installed/node_modules/debuglog/package.json b/deps/npm/node_modules/read-installed/node_modules/debuglog/package.json new file mode 100644 index 00000000000..3966625621e --- /dev/null +++ b/deps/npm/node_modules/read-installed/node_modules/debuglog/package.json @@ -0,0 +1,45 @@ +{ + "name": "debuglog", + "version": "1.0.1", + "description": "backport of util.debuglog from node v0.11", + "license": "MIT", + "main": "debuglog.js", + "repository": { + "type": "git", + "url": "https://github.com/sam-github/node-debuglog.git" + }, + "author": { + "name": "Sam Roberts", + "email": "sam@strongloop.com" + }, + "engines": { + "node": "*" + }, + "browser": { + "util": false + }, + "bugs": { + "url": "https://github.com/sam-github/node-debuglog/issues" + }, + "homepage": "https://github.com/sam-github/node-debuglog", + "_id": "debuglog@1.0.1", + "dist": { + "shasum": "aa24ffb9ac3df9a2351837cfb2d279360cd78492", + "tarball": "http://registry.npmjs.org/debuglog/-/debuglog-1.0.1.tgz" + }, + "_from": "debuglog@>=1.0.1-0 <2.0.0-0", + 
"_npmVersion": "1.4.3", + "_npmUser": { + "name": "octet", + "email": "sam@strongloop.com" + }, + "maintainers": [ + { + "name": "octet", + "email": "sam@strongloop.com" + } + ], + "directories": {}, + "_shasum": "aa24ffb9ac3df9a2351837cfb2d279360cd78492", + "_resolved": "https://registry.npmjs.org/debuglog/-/debuglog-1.0.1.tgz" +} diff --git a/deps/npm/node_modules/read-installed/node_modules/readdir-scoped-modules/LICENSE b/deps/npm/node_modules/read-installed/node_modules/readdir-scoped-modules/LICENSE new file mode 100644 index 00000000000..19129e315fe --- /dev/null +++ b/deps/npm/node_modules/read-installed/node_modules/readdir-scoped-modules/LICENSE @@ -0,0 +1,15 @@ +The ISC License + +Copyright (c) Isaac Z. Schlueter and Contributors + +Permission to use, copy, modify, and/or distribute this software for any +purpose with or without fee is hereby granted, provided that the above +copyright notice and this permission notice appear in all copies. + +THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES +WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF +MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR +ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES +WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN +ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF OR +IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE. diff --git a/deps/npm/node_modules/read-installed/node_modules/readdir-scoped-modules/README.md b/deps/npm/node_modules/read-installed/node_modules/readdir-scoped-modules/README.md new file mode 100644 index 00000000000..ade57a186dc --- /dev/null +++ b/deps/npm/node_modules/read-installed/node_modules/readdir-scoped-modules/README.md @@ -0,0 +1,17 @@ +# readdir-scoped-modules + +Like `fs.readdir` but handling `@org/module` dirs as if they were +a single entry. + +Used by npm. + +## USAGE + +```javascript +var readdir = require('readdir-scoped-modules') + +readdir('node_modules', function (er, entries) { + // entries will be something like + // ['a', '@org/foo', '@org/bar'] +}) +``` diff --git a/deps/npm/node_modules/read-installed/node_modules/readdir-scoped-modules/package.json b/deps/npm/node_modules/read-installed/node_modules/readdir-scoped-modules/package.json new file mode 100644 index 00000000000..84b91e75a55 --- /dev/null +++ b/deps/npm/node_modules/read-installed/node_modules/readdir-scoped-modules/package.json @@ -0,0 +1,54 @@ +{ + "name": "readdir-scoped-modules", + "version": "1.0.0", + "description": "Like `fs.readdir` but handling `@org/module` dirs as if they were a single entry.", + "main": "readdir.js", + "directories": { + "test": "test" + }, + "dependencies": { + "debuglog": "^1.0.1", + "dezalgo": "^1.0.0", + "once": "^1.3.0" + }, + "devDependencies": { + "tap": "0.4" + }, + "scripts": { + "test": "tap test/*.js" + }, + "repository": { + "type": "git", + "url": "https://github.com/npm/readdir-scoped-modules" + }, + "author": { + "name": "Isaac Z. 
Schlueter", + "email": "i@izs.me", + "url": "http://blog.izs.me/" + }, + "license": "ISC", + "bugs": { + "url": "https://github.com/npm/readdir-scoped-modules/issues" + }, + "homepage": "https://github.com/npm/readdir-scoped-modules", + "gitHead": "35a4a7a2325d12ed25ed322cd61f976b740f7fb7", + "_id": "readdir-scoped-modules@1.0.0", + "_shasum": "e939de969b38b3e7dfaa14fbcfe7a2fd15a4ea37", + "_from": "readdir-scoped-modules@>=1.0.0-0 <2.0.0-0", + "_npmVersion": "2.0.0-alpha.6.0", + "_npmUser": { + "name": "isaacs", + "email": "i@izs.me" + }, + "maintainers": [ + { + "name": "isaacs", + "email": "i@izs.me" + } + ], + "dist": { + "shasum": "e939de969b38b3e7dfaa14fbcfe7a2fd15a4ea37", + "tarball": "http://registry.npmjs.org/readdir-scoped-modules/-/readdir-scoped-modules-1.0.0.tgz" + }, + "_resolved": "https://registry.npmjs.org/readdir-scoped-modules/-/readdir-scoped-modules-1.0.0.tgz" +} diff --git a/deps/npm/node_modules/read-installed/node_modules/readdir-scoped-modules/readdir.js b/deps/npm/node_modules/read-installed/node_modules/readdir-scoped-modules/readdir.js new file mode 100644 index 00000000000..91978a739db --- /dev/null +++ b/deps/npm/node_modules/read-installed/node_modules/readdir-scoped-modules/readdir.js @@ -0,0 +1,71 @@ +var fs = require ('fs') +var dz = require ('dezalgo') +var once = require ('once') +var path = require ('path') +var debug = require ('debuglog') ('rds') + +module . exports = readdir + +function readdir (dir, cb) { + fs . readdir (dir, function (er, kids) { + if (er) + return cb (er) + + debug ('dir=%j, kids=%j', dir, kids) + readScopes (dir, kids, function (er, data) { + if (er) + return cb (er) + + // Sort for bonus consistency points + data = data . sort (function (a, b) { + return a > b ? 1 : -1 + }) + + return cb (null, data) + }) + }) +} + +// Turn [ 'a', '@scope' ] into +// ['a', '@scope/foo', '@scope/bar'] +function readScopes (root, kids, cb) { + var scopes = kids . filter (function (kid) { + return kid . charAt (0) === '@' + }) + + kids = kids . filter (function (kid) { + return kid . charAt (0) !== '@' + }) + + debug ('scopes=%j', scopes) + + if (scopes . length === 0) + dz (cb) (null, kids) // prevent maybe-sync zalgo release + + cb = once (cb) + var l = scopes . length + scopes . forEach (function (scope) { + var scopedir = path . resolve (root, scope) + debug ('root=%j scope=%j scopedir=%j', root, scope, scopedir) + fs . readdir (scopedir, then . bind (null, scope)) + }) + + function then (scope, er, scopekids) { + if (er) + return cb (er) + + // XXX: Not sure how old this node bug is. Maybe superstition? + scopekids = scopekids . filter (function (scopekid) { + return !(scopekid === '.' || scopekid === '..' || !scopekid) + }) + + kids . push . apply (kids, scopekids . map (function (scopekid) { + return scope + '/' + scopekid + })) + + debug ('scope=%j scopekids=%j kids=%j', scope, scopekids, kids) + + if (--l === 0) + cb (null, kids) + } +} diff --git a/deps/npm/node_modules/read-installed/node_modules/readdir-scoped-modules/test/basic.js b/deps/npm/node_modules/read-installed/node_modules/readdir-scoped-modules/test/basic.js new file mode 100644 index 00000000000..715c40d584b --- /dev/null +++ b/deps/npm/node_modules/read-installed/node_modules/readdir-scoped-modules/test/basic.js @@ -0,0 +1,14 @@ +var test = require ('tap') . 
test +var readdir = require ('../readdir.js') + +test ('basic', function (t) { + // should not get {a,b}/{x,y}, but SHOULD get @org/ and @scope children + var expect = [ '@org/x', '@org/y', '@scope/x', '@scope/y', 'a', 'b' ] + + readdir (__dirname + '/fixtures', function (er, kids) { + if (er) + throw er + t.same(kids, expect) + t.end() + }) +}) diff --git a/deps/npm/node_modules/npmconf/test/fixtures/package.json b/deps/npm/node_modules/read-installed/node_modules/readdir-scoped-modules/test/fixtures/@org/x/.keep similarity index 100% rename from deps/npm/node_modules/npmconf/test/fixtures/package.json rename to deps/npm/node_modules/read-installed/node_modules/readdir-scoped-modules/test/fixtures/@org/x/.keep diff --git a/deps/npm/node_modules/read-installed/node_modules/readdir-scoped-modules/test/fixtures/@org/y/.keep b/deps/npm/node_modules/read-installed/node_modules/readdir-scoped-modules/test/fixtures/@org/y/.keep new file mode 100644 index 00000000000..e69de29bb2d diff --git a/deps/npm/node_modules/read-installed/node_modules/readdir-scoped-modules/test/fixtures/@scope/x/.keep b/deps/npm/node_modules/read-installed/node_modules/readdir-scoped-modules/test/fixtures/@scope/x/.keep new file mode 100644 index 00000000000..e69de29bb2d diff --git a/deps/npm/node_modules/read-installed/node_modules/readdir-scoped-modules/test/fixtures/@scope/y/.keep b/deps/npm/node_modules/read-installed/node_modules/readdir-scoped-modules/test/fixtures/@scope/y/.keep new file mode 100644 index 00000000000..e69de29bb2d diff --git a/deps/npm/node_modules/read-installed/node_modules/readdir-scoped-modules/test/fixtures/a/x/.keep b/deps/npm/node_modules/read-installed/node_modules/readdir-scoped-modules/test/fixtures/a/x/.keep new file mode 100644 index 00000000000..e69de29bb2d diff --git a/deps/npm/node_modules/read-installed/node_modules/readdir-scoped-modules/test/fixtures/a/y/.keep b/deps/npm/node_modules/read-installed/node_modules/readdir-scoped-modules/test/fixtures/a/y/.keep new file mode 100644 index 00000000000..e69de29bb2d diff --git a/deps/npm/node_modules/read-installed/node_modules/readdir-scoped-modules/test/fixtures/b/x/.keep b/deps/npm/node_modules/read-installed/node_modules/readdir-scoped-modules/test/fixtures/b/x/.keep new file mode 100644 index 00000000000..e69de29bb2d diff --git a/deps/npm/node_modules/read-installed/node_modules/readdir-scoped-modules/test/fixtures/b/y/.keep b/deps/npm/node_modules/read-installed/node_modules/readdir-scoped-modules/test/fixtures/b/y/.keep new file mode 100644 index 00000000000..e69de29bb2d diff --git a/deps/npm/node_modules/read-installed/node_modules/util-extend/package.json b/deps/npm/node_modules/read-installed/node_modules/util-extend/package.json index 96f5a3f51bf..0bab48d3297 100644 --- a/deps/npm/node_modules/read-installed/node_modules/util-extend/package.json +++ b/deps/npm/node_modules/read-installed/node_modules/util-extend/package.json @@ -22,7 +22,7 @@ "shasum": "bb703b79480293ddcdcfb3c6a9fea20f483415bc", "tarball": "http://registry.npmjs.org/util-extend/-/util-extend-1.0.1.tgz" }, - "_from": "util-extend@^1.0.1", + "_from": "util-extend@>=1.0.1-0 <2.0.0-0", "_npmVersion": "1.3.4", "_npmUser": { "name": "isaacs", diff --git a/deps/npm/node_modules/read-installed/package.json b/deps/npm/node_modules/read-installed/package.json index de958a544e8..2c50225534f 100644 --- a/deps/npm/node_modules/read-installed/package.json +++ b/deps/npm/node_modules/read-installed/package.json @@ -1,7 +1,7 @@ { "name": "read-installed", "description": "Read 
all the installed packages in a folder, and return a tree structure with all the data.", - "version": "2.0.5", + "version": "3.1.3", "repository": { "type": "git", "url": "git://github.com/isaacs/read-installed" @@ -11,8 +11,10 @@ "test": "tap ./test/*.js" }, "dependencies": { + "debuglog": "^1.0.1", "read-package-json": "1", - "semver": "2", + "readdir-scoped-modules": "^1.0.0", + "semver": "2 || 3 || 4", "slide": "~1.1.3", "util-extend": "^1.0.1", "graceful-fs": "2 || 3" @@ -27,16 +29,37 @@ }, "license": "ISC", "devDependencies": { + "mkdirp": "^0.5.0", + "rimraf": "^2.2.8", "tap": "~0.4.8" }, - "readme": "# read-installed\n\nRead all the installed packages in a folder, and return a tree\nstructure with all the data.\n\nnpm uses this.\n\n## 2.0.0\n\nBreaking changes in `2.0.0`:\n\nThe second argument is now an `Object` that contains the following keys:\n\n * `depth` optional, defaults to Infinity\n * `log` optional log Function\n * `dev` optional, default false, set to true to include devDependencies\n\n## Usage\n\n```javascript\nvar readInstalled = require(\"read-installed\")\n// optional options\nvar options = { dev: false, log: fn, depth: 2 }\nreadInstalled(folder, options, function (er, data) {\n ...\n})\n```\n", - "readmeFilename": "README.md", - "gitHead": "2595631e4d3cbd64b26cee63dc3b5ce9f53e3533", + "gitHead": "50e45af7581b1a879c62146fafbfa1b92842f7df", "bugs": { "url": "https://github.com/isaacs/read-installed/issues" }, "homepage": "https://github.com/isaacs/read-installed", - "_id": "read-installed@2.0.5", - "_shasum": "761eda1fd2dc322f8e77844a8bf1ddedbcfc754b", - "_from": "read-installed@latest" + "_id": "read-installed@3.1.3", + "_shasum": "c09092a13c2117f22842cad16804f3b059129d11", + "_from": "read-installed@>=3.1.2-0 <3.2.0-0", + "_npmVersion": "2.0.0-beta.3", + "_npmUser": { + "name": "isaacs", + "email": "i@izs.me" + }, + "maintainers": [ + { + "name": "isaacs", + "email": "i@izs.me" + }, + { + "name": "othiym23", + "email": "ogd@aoaioxxysz.net" + } + ], + "dist": { + "shasum": "c09092a13c2117f22842cad16804f3b059129d11", + "tarball": "http://registry.npmjs.org/read-installed/-/read-installed-3.1.3.tgz" + }, + "directories": {}, + "_resolved": "https://registry.npmjs.org/read-installed/-/read-installed-3.1.3.tgz" } diff --git a/deps/npm/node_modules/read-installed/read-installed.js b/deps/npm/node_modules/read-installed/read-installed.js index 9b5a4796226..a92ed3fbee3 100644 --- a/deps/npm/node_modules/read-installed/read-installed.js +++ b/deps/npm/node_modules/read-installed/read-installed.js @@ -101,6 +101,10 @@ var url = require("url") var util = require("util") var extend = require("util-extend") +var debug = require("debuglog")("read-installed") + +var readdir = require("readdir-scoped-modules") + module.exports = readInstalled function readInstalled (folder, opts, cb) { @@ -120,25 +124,29 @@ function readInstalled (folder, opts, cb) { opts.log = function () {} opts.dev = !!opts.dev + opts.realpathSeen = {} + opts.findUnmetSeen = [] + readInstalled_(folder, null, null, null, 0, opts, function (er, obj) { if (er) return cb(er) // now obj has all the installed things, where they're installed // figure out the inheritance links, now that the object is built. 
resolveInheritance(obj, opts) - markExtraneous(obj) + obj.root = true + unmarkExtraneous(obj, opts) cb(null, obj) }) } -var rpSeen = {} function readInstalled_ (folder, parent, name, reqver, depth, opts, cb) { var installed , obj , real , link + , realpathSeen = opts.realpathSeen - fs.readdir(path.resolve(folder, "node_modules"), function (er, i) { + readdir(path.resolve(folder, "node_modules"), function (er, i) { // error indicates that nothing is installed here if (er) i = [] installed = i.filter(function (f) { return f.charAt(0) !== "." }) @@ -161,7 +169,7 @@ function readInstalled_ (folder, parent, name, reqver, depth, opts, cb) { return next(er) } fs.realpath(folder, function (er, rp) { - //console.error("realpath(%j) = %j", folder, rp) + debug("realpath(%j) = %j", folder, rp) real = rp if (st.isSymbolicLink()) link = rp next(er) @@ -176,10 +184,10 @@ function readInstalled_ (folder, parent, name, reqver, depth, opts, cb) { errState = er return cb(null, []) } - //console.error('next', installed, obj && typeof obj, name, real) + debug('next', installed, obj && typeof obj, name, real) if (!installed || !obj || !real || called) return called = true - if (rpSeen[real]) return cb(null, rpSeen[real]) + if (realpathSeen[real]) return cb(null, realpathSeen[real]) if (obj === true) { obj = {dependencies:{}, path:folder} installed.forEach(function (i) { obj.dependencies[i] = "*" }) @@ -188,6 +196,9 @@ function readInstalled_ (folder, parent, name, reqver, depth, opts, cb) { obj.realName = name || obj.name obj.dependencies = obj.dependencies || {} + // At this point, figure out what dependencies we NEED to get met + obj._dependencies = copy(obj.dependencies) + // "foo":"http://blah" and "foo":"latest" are always presumed valid if (reqver && semver.validRange(reqver, true) @@ -195,21 +206,17 @@ function readInstalled_ (folder, parent, name, reqver, depth, opts, cb) { obj.invalid = true } - if (parent) { - var deps = parent.dependencies || {} - var inDeps = name in deps - var devDeps = parent.devDependencies || {} - var inDev = opts.dev && (name in devDeps) - if (!inDeps && !inDev) { - obj.extraneous = true - } - } + // Mark as extraneous at this point. + // This will be un-marked in unmarkExtraneous, where we mark as + // not-extraneous everything that is required in some way from + // the root object. + obj.extraneous = true obj.path = obj.path || folder obj.realPath = real obj.link = link if (parent && !obj.link) obj.parent = parent - rpSeen[real] = obj + realpathSeen[real] = obj obj.depth = depth //if (depth >= opts.depth) return cb(null, obj) asyncMap(installed, function (pkg, cb) { @@ -259,50 +266,45 @@ function resolveInheritance (obj, opts) { findUnmet(obj.dependencies[dep], opts) }) Object.keys(obj.dependencies).forEach(function (dep) { - resolveInheritance(obj.dependencies[dep], opts) + if (typeof obj.dependencies[dep] === "object") { + resolveInheritance(obj.dependencies[dep], opts) + } else { + debug("unmet dep! %s %s@%s", obj.name, dep, obj.dependencies[dep]) + } }) findUnmet(obj, opts) } // find unmet deps by walking up the tree object. 
// No I/O -var fuSeen = [] function findUnmet (obj, opts) { - if (fuSeen.indexOf(obj) !== -1) return - fuSeen.push(obj) - //console.error("find unmet", obj.name, obj.parent && obj.parent.name) + var findUnmetSeen = opts.findUnmetSeen + if (findUnmetSeen.indexOf(obj) !== -1) return + findUnmetSeen.push(obj) + debug("find unmet parent=%s obj=", obj.parent && obj.parent.name, obj.name || obj) var deps = obj.dependencies = obj.dependencies || {} - //console.error(deps) + debug(deps) Object.keys(deps) .filter(function (d) { return typeof deps[d] === "string" }) .forEach(function (d) { - //console.error("find unmet", obj.name, d, deps[d]) - var r = obj.parent - , found = null - while (r && !found && typeof deps[d] === "string") { - // if r is a valid choice, then use that. - found = r.dependencies[d] - if (!found && r.realName === d) found = r - - if (!found) { - r = r.link ? null : r.parent - continue - } - // "foo":"http://blah" and "foo":"latest" are always presumed valid - if ( typeof deps[d] === "string" - && semver.validRange(deps[d], true) - && !semver.satisfies(found.version, deps[d], true)) { - // the bad thing will happen - opts.log("unmet dependency", obj.path + " requires "+d+"@'"+deps[d] - +"' but will load\n" - +found.path+",\nwhich is version "+found.version - ) - found.invalid = true - } + var found = findDep(obj, d) + debug("finding dep %j", d, found && found.name || found) + // "foo":"http://blah" and "foo":"latest" are always presumed valid + if (typeof deps[d] === "string" && + semver.validRange(deps[d], true) && + found && + !semver.satisfies(found.version, deps[d], true)) { + // the bad thing will happen + opts.log( "unmet dependency" + , obj.path + " requires "+d+"@'"+deps[d] + + "' but will load\n" + + found.path+",\nwhich is version "+found.version ) + found.invalid = true + } + if (found) { deps[d] = found } - }) var peerDeps = obj.peerDependencies = obj.peerDependencies || {} @@ -329,34 +331,58 @@ function findUnmet (obj, opts) { obj.dependencies[d] = peerDeps[d] } else if (!semver.satisfies(dependency.version, peerDeps[d], true)) { dependency.peerInvalid = true - } else { - dependency.extraneous = false } }) return obj } -function recursivelyMarkExtraneous (obj, extraneous) { - // stop recursion if we're not changing anything - if (obj.extraneous === extraneous) return +function unmarkExtraneous (obj, opts) { + // Mark all non-required deps as extraneous. 
+ // start from the root object and mark as non-extraneous all modules + // that haven't been previously flagged as extraneous then propagate + // to all their dependencies - obj.extraneous = extraneous - var deps = obj.dependencies = obj.dependencies || {} - Object.keys(deps).forEach(function(d){ - recursivelyMarkExtraneous(deps[d], extraneous) - }); + obj.extraneous = false + + var deps = obj._dependencies + if (opts.dev && obj.devDependencies && (obj.root || obj.link)) { + Object.keys(obj.devDependencies).forEach(function (k) { + deps[k] = obj.devDependencies[k] + }) + } + + if (obj.peerDependencies) { + Object.keys(obj.peerDependencies).forEach(function (k) { + deps[k] = obj.peerDependencies[k] + }) + } + + debug("not extraneous", obj._id, deps) + Object.keys(deps).forEach(function (d) { + var dep = findDep(obj, d) + if (dep && dep.extraneous) { + unmarkExtraneous(dep, opts) + } + }) } -function markExtraneous (obj) { - // start from the root object and mark as non-extraneous all modules that haven't been previously flagged as - // extraneous then propagate to all their dependencies - var deps = obj.dependencies = obj.dependencies || {} - Object.keys(deps).forEach(function(d){ - if (!deps[d].extraneous){ - recursivelyMarkExtraneous(deps[d], false); +// Find the one that will actually be loaded by require() +// so we can make sure it's valid etc. +function findDep (obj, d) { + var r = obj + , found = null + while (r && !found) { + // if r is a valid choice, then use that. + // kinda weird if a pkg depends on itself, but after the first + // iteration of this loop, it indicates a dep cycle. + if (typeof r.dependencies[d] === "object") { + found = r.dependencies[d] } - }); + if (!found && r.realName === d) found = r + r = r.link ? null : r.parent + } + return found } function copy (obj) { diff --git a/deps/npm/node_modules/read-installed/test/basic.js b/deps/npm/node_modules/read-installed/test/basic.js index 4d83cd0ca59..f497848879d 100644 --- a/deps/npm/node_modules/read-installed/test/basic.js +++ b/deps/npm/node_modules/read-installed/test/basic.js @@ -1,8 +1,9 @@ var readInstalled = require("../read-installed.js") -var json = require("./fixtures/package.json") -var known = [].concat(Object.keys(json.dependencies) - , Object.keys(json.optionalDependencies) - , Object.keys(json.devDependencies)).sort() +var json = require("../package.json") +var d = Object.keys(json.dependencies) +var dd = Object.keys(json.devDependencies) +var od = Object.keys(json.optionalDependencies) +var known = d.concat(dd).concat(od).sort() var test = require("tap").test var path = require("path") @@ -36,9 +37,7 @@ function cleanup (map) { default: delete map[i] } var dep = map.dependencies -// delete map.dependencies if (dep) { -// map.dependencies = dep for (var i in dep) if (typeof dep[i] === "object") { cleanup(dep[i]) } diff --git a/deps/npm/node_modules/read-installed/test/cyclic-extraneous-peer-deps.js b/deps/npm/node_modules/read-installed/test/cyclic-extraneous-peer-deps.js new file mode 100644 index 00000000000..58bf6a649a0 --- /dev/null +++ b/deps/npm/node_modules/read-installed/test/cyclic-extraneous-peer-deps.js @@ -0,0 +1,81 @@ +var test = require("tap").test +var mkdirp = require("mkdirp") +var rimraf = require("rimraf") +var fs = require("fs") +var path = require("path") +var readInstalled = require("../read-installed.js") + +var parent = { + name: "parent", + version: "1.2.3", + dependencies: {}, + devDependencies: { + "child1":"*" + }, + readme:"." 
+} + +var child1 = { + name: "child1", + version: "1.2.3", + peerDependencies: { + child2: "*" + }, + readme:"." +} + +var child2 = { + name: "child2", + version: "1.2.3", + peerDependencies: { + child1: "*" + }, + readme:"." +} + + +var root = path.resolve(__dirname, "cyclic-extraneous-peer-deps") +var parentjson = path.resolve(root, "package.json") +var child1root = path.resolve(root, "node_modules/child1") +var child1json = path.resolve(child1root, "package.json") +var child2root = path.resolve(root, "node_modules/child2") +var child2json = path.resolve(child2root, "package.json") + +test("setup", function (t) { + rimraf.sync(root) + mkdirp.sync(child1root) + mkdirp.sync(child2root) + fs.writeFileSync(parentjson, JSON.stringify(parent, null, 2) + "\n", "utf8") + fs.writeFileSync(child1json, JSON.stringify(child1, null, 2) + "\n", "utf8") + fs.writeFileSync(child2json, JSON.stringify(child2, null, 2) + "\n", "utf8") + t.pass("setup done") + t.end() +}) + +test("dev mode", function (t) { + // peer dev deps should both be not extraneous. + readInstalled(root, { dev: true }, function (er, data) { + if (er) + throw er + t.notOk(data.dependencies.child1.extraneous, "c1 not extra") + t.notOk(data.dependencies.child2.extraneous, "c2 not extra") + t.end() + }) +}) + +test("prod mode", function (t) { + readInstalled(root, { dev: false }, function (er, data) { + if (er) + throw er + t.ok(data.dependencies.child1.extraneous, "c1 extra") + t.ok(data.dependencies.child2.extraneous, "c2 extra") + t.end() + }) +}) + + +test("cleanup", function (t) { + rimraf.sync(root) + t.pass("cleanup done") + t.end() +}) diff --git a/deps/npm/node_modules/read-installed/test/dev.js b/deps/npm/node_modules/read-installed/test/dev.js index f6f4857bb09..5e5a994a88d 100644 --- a/deps/npm/node_modules/read-installed/test/dev.js +++ b/deps/npm/node_modules/read-installed/test/dev.js @@ -1,6 +1,6 @@ var readInstalled = require("../read-installed.js") var test = require("tap").test -var json = require("./fixtures/package.json") +var json = require("../package.json") var path = require("path") var known = [].concat(Object.keys(json.dependencies) , Object.keys(json.optionalDependencies) @@ -17,7 +17,7 @@ test("make sure that it works without dev deps", function (t) { var deps = Object.keys(map.dependencies).sort() t.equal(deps.length, known.length, "array lengths are equal") t.deepEqual(deps, known, "arrays should be equal") - t.ok(map.dependencies.tap.extraneous, 'extraneous is set on devDep') + t.ok(map.dependencies.tap.extraneous, "extraneous is set on devDep") t.end() }) }) diff --git a/deps/npm/node_modules/read-installed/test/extraneous-dev.js b/deps/npm/node_modules/read-installed/test/extraneous-dev.js new file mode 100644 index 00000000000..2f9012d548b --- /dev/null +++ b/deps/npm/node_modules/read-installed/test/extraneous-dev.js @@ -0,0 +1,20 @@ +var readInstalled = require("../read-installed.js") +var test = require("tap").test +var path = require("path") + +test("extraneous detected", function(t) { + // This test verifies read-installed#16 + readInstalled( + path.join(__dirname, "fixtures/extraneous-dev-dep"), + { + log: console.error, + dev: true + }, + function (err, map) { + t.ifError(err, "read-installed made it") + + t.notOk(map.dependencies.d.extraneous, "d is not extraneous, it's required by root") + t.ok(map.dependencies.x.extraneous, "x is extraneous, it's only a dev dep of d") + t.end() + }) +}) diff --git a/deps/npm/node_modules/read-installed/test/extraneous.js 
b/deps/npm/node_modules/read-installed/test/extraneous.js index 2cc0d04e7a5..e999c9b4fc3 100644 --- a/deps/npm/node_modules/read-installed/test/extraneous.js +++ b/deps/npm/node_modules/read-installed/test/extraneous.js @@ -1,6 +1,6 @@ var readInstalled = require('../read-installed.js') var test = require('tap').test -var path = require('path'); +var path = require('path') test('extraneous detected', function(t) { // This test verifies read-installed#16 @@ -12,6 +12,6 @@ test('extraneous detected', function(t) { t.ok(map.dependencies.bar.extraneous, 'bar is extraneous, it\'s not required by any module') t.notOk(map.dependencies.asdf.extraneous, 'asdf is not extraneous, it\'s required by ghjk') t.notOk(map.dependencies.ghjk.extraneous, 'ghjk is not extraneous, it\'s required by our root module') - t.end(); + t.end() }) }) diff --git a/deps/npm/node_modules/read-installed/test/fixtures/extraneous-dev-dep/package.json b/deps/npm/node_modules/read-installed/test/fixtures/extraneous-dev-dep/package.json new file mode 100644 index 00000000000..9bfa7ce8f58 --- /dev/null +++ b/deps/npm/node_modules/read-installed/test/fixtures/extraneous-dev-dep/package.json @@ -0,0 +1,7 @@ +{ + "name": "extraneous-dev-dep", + "version": "0.0.0", + "dependencies": { + "d": "1.0.0" + } +} diff --git a/deps/npm/node_modules/read-installed/test/fixtures/grandparent-peer-dev/package.json b/deps/npm/node_modules/read-installed/test/fixtures/grandparent-peer-dev/package.json new file mode 100644 index 00000000000..1a229c1cff0 --- /dev/null +++ b/deps/npm/node_modules/read-installed/test/fixtures/grandparent-peer-dev/package.json @@ -0,0 +1,8 @@ +{ + "name": "example", + "version": "0.0.0", + "devDependencies": { + "plugin-wrapper": "0.0.0", + "framework": "0.0.0" + } +} diff --git a/deps/npm/node_modules/read-installed/test/grandparent-peer-dev.js b/deps/npm/node_modules/read-installed/test/grandparent-peer-dev.js new file mode 100644 index 00000000000..fd7c2d2bc9c --- /dev/null +++ b/deps/npm/node_modules/read-installed/test/grandparent-peer-dev.js @@ -0,0 +1,20 @@ +var readInstalled = require('../read-installed.js') +var test = require('tap').test +var path = require('path'); + +function allValid(t, map) { + var deps = Object.keys(map.dependencies || {}) + deps.forEach(function (dep) { + t.ok(map.dependencies[dep].extraneous, 'dependency ' + dep + ' of ' + map.name + ' is extraneous') + }) +} + +test('grandparent dev peer dependencies should be extraneous', function(t) { + readInstalled( + path.join(__dirname, 'fixtures/grandparent-peer-dev'), + { log: console.error }, + function(err, map) { + allValid(t, map) + t.end() + }) +}) diff --git a/deps/npm/node_modules/read-installed/test/linked-dep-dev-deps-extraneous.js b/deps/npm/node_modules/read-installed/test/linked-dep-dev-deps-extraneous.js new file mode 100644 index 00000000000..65605133045 --- /dev/null +++ b/deps/npm/node_modules/read-installed/test/linked-dep-dev-deps-extraneous.js @@ -0,0 +1,59 @@ +var test = require('tap').test +var path = require('path') +var fs = require('fs') +var mkdirp = require('mkdirp') +var rimraf = require('rimraf') +var readInstalled = require('../') + +var root = path.resolve(__dirname, 'root') +var pkg = path.resolve(root, 'pkg') +var pkgnm = path.resolve(pkg, 'node_modules') +var linkdepSrc = path.resolve(root, 'linkdep') +var linkdepLink = path.resolve(pkgnm, 'linkdep') +var devdep = path.resolve(linkdepSrc, 'node_modules', 'devdep') + +function pjson (dir, data) { + mkdirp.sync(dir) + var d = path.resolve(dir, 'package.json') + 
fs.writeFileSync(d, JSON.stringify(data)) +} + +test('setup', function (t) { + rimraf.sync(root) + pjson(pkg, { + name: 'root', + version: '1.2.3', + dependencies: { + linkdep: '' + } + }) + pjson(linkdepSrc, { + name: 'linkdep', + version: '1.2.3', + devDependencies: { + devdep: '' + } + }) + pjson(devdep, { + name: 'devdep', + version: '1.2.3' + }) + + mkdirp.sync(pkgnm) + fs.symlinkSync(linkdepSrc, linkdepLink, 'dir') + + t.end() +}) + +test('basic', function (t) { + readInstalled(pkg, { dev: true }, function (er, data) { + var dd = data.dependencies.linkdep.dependencies.devdep + t.notOk(dd.extraneous, 'linked dev dep should not be extraneous') + t.end() + }) +}) + +test('cleanup', function (t) { + rimraf.sync(root) + t.end() +}) diff --git a/deps/npm/node_modules/read-installed/test/noargs.js b/deps/npm/node_modules/read-installed/test/noargs.js index a84a8f4cfa2..66fabeb74ec 100644 --- a/deps/npm/node_modules/read-installed/test/noargs.js +++ b/deps/npm/node_modules/read-installed/test/noargs.js @@ -1,6 +1,6 @@ var readInstalled = require("../read-installed.js") var test = require("tap").test -var json = require("./fixtures/package.json") +var json = require("../package.json") var path = require("path") var known = [].concat(Object.keys(json.dependencies) , Object.keys(json.optionalDependencies) diff --git a/deps/npm/node_modules/read-package-json/package.json b/deps/npm/node_modules/read-package-json/package.json index 8b67330c1d5..1fd2f674f78 100644 --- a/deps/npm/node_modules/read-package-json/package.json +++ b/deps/npm/node_modules/read-package-json/package.json @@ -37,7 +37,7 @@ "homepage": "https://github.com/isaacs/read-package-json", "_id": "read-package-json@1.2.7", "_shasum": "f0b440c461a218f4dbf48b094e80fc65c5248502", - "_from": "read-package-json@^1.2.7", + "_from": "read-package-json@>=1.2.7-0 <1.3.0-0", "_npmVersion": "2.0.0-beta.0", "_npmUser": { "name": "othiym23", diff --git a/deps/npm/node_modules/request/node_modules/bl/node_modules/readable-stream/.npmignore b/deps/npm/node_modules/readable-stream/.npmignore similarity index 100% rename from deps/npm/node_modules/request/node_modules/bl/node_modules/readable-stream/.npmignore rename to deps/npm/node_modules/readable-stream/.npmignore diff --git a/deps/npm/node_modules/request/node_modules/bl/node_modules/readable-stream/LICENSE b/deps/npm/node_modules/readable-stream/LICENSE similarity index 100% rename from deps/npm/node_modules/request/node_modules/bl/node_modules/readable-stream/LICENSE rename to deps/npm/node_modules/readable-stream/LICENSE diff --git a/deps/npm/node_modules/request/node_modules/bl/node_modules/readable-stream/README.md b/deps/npm/node_modules/readable-stream/README.md similarity index 100% rename from deps/npm/node_modules/request/node_modules/bl/node_modules/readable-stream/README.md rename to deps/npm/node_modules/readable-stream/README.md diff --git a/deps/npm/node_modules/request/node_modules/bl/node_modules/readable-stream/duplex.js b/deps/npm/node_modules/readable-stream/duplex.js similarity index 100% rename from deps/npm/node_modules/request/node_modules/bl/node_modules/readable-stream/duplex.js rename to deps/npm/node_modules/readable-stream/duplex.js diff --git a/deps/npm/node_modules/request/node_modules/bl/node_modules/readable-stream/lib/_stream_duplex.js b/deps/npm/node_modules/readable-stream/lib/_stream_duplex.js similarity index 100% rename from deps/npm/node_modules/request/node_modules/bl/node_modules/readable-stream/lib/_stream_duplex.js rename to 
deps/npm/node_modules/readable-stream/lib/_stream_duplex.js diff --git a/deps/npm/node_modules/request/node_modules/bl/node_modules/readable-stream/lib/_stream_passthrough.js b/deps/npm/node_modules/readable-stream/lib/_stream_passthrough.js similarity index 100% rename from deps/npm/node_modules/request/node_modules/bl/node_modules/readable-stream/lib/_stream_passthrough.js rename to deps/npm/node_modules/readable-stream/lib/_stream_passthrough.js diff --git a/deps/npm/node_modules/request/node_modules/bl/node_modules/readable-stream/lib/_stream_readable.js b/deps/npm/node_modules/readable-stream/lib/_stream_readable.js similarity index 100% rename from deps/npm/node_modules/request/node_modules/bl/node_modules/readable-stream/lib/_stream_readable.js rename to deps/npm/node_modules/readable-stream/lib/_stream_readable.js diff --git a/deps/npm/node_modules/request/node_modules/bl/node_modules/readable-stream/lib/_stream_transform.js b/deps/npm/node_modules/readable-stream/lib/_stream_transform.js similarity index 100% rename from deps/npm/node_modules/request/node_modules/bl/node_modules/readable-stream/lib/_stream_transform.js rename to deps/npm/node_modules/readable-stream/lib/_stream_transform.js diff --git a/deps/npm/node_modules/request/node_modules/bl/node_modules/readable-stream/lib/_stream_writable.js b/deps/npm/node_modules/readable-stream/lib/_stream_writable.js similarity index 100% rename from deps/npm/node_modules/request/node_modules/bl/node_modules/readable-stream/lib/_stream_writable.js rename to deps/npm/node_modules/readable-stream/lib/_stream_writable.js diff --git a/deps/npm/node_modules/request/node_modules/bl/node_modules/readable-stream/node_modules/core-util-is/README.md b/deps/npm/node_modules/readable-stream/node_modules/core-util-is/README.md similarity index 100% rename from deps/npm/node_modules/request/node_modules/bl/node_modules/readable-stream/node_modules/core-util-is/README.md rename to deps/npm/node_modules/readable-stream/node_modules/core-util-is/README.md diff --git a/deps/npm/node_modules/request/node_modules/bl/node_modules/readable-stream/node_modules/core-util-is/float.patch b/deps/npm/node_modules/readable-stream/node_modules/core-util-is/float.patch similarity index 100% rename from deps/npm/node_modules/request/node_modules/bl/node_modules/readable-stream/node_modules/core-util-is/float.patch rename to deps/npm/node_modules/readable-stream/node_modules/core-util-is/float.patch diff --git a/deps/npm/node_modules/request/node_modules/bl/node_modules/readable-stream/node_modules/core-util-is/lib/util.js b/deps/npm/node_modules/readable-stream/node_modules/core-util-is/lib/util.js similarity index 100% rename from deps/npm/node_modules/request/node_modules/bl/node_modules/readable-stream/node_modules/core-util-is/lib/util.js rename to deps/npm/node_modules/readable-stream/node_modules/core-util-is/lib/util.js diff --git a/deps/npm/node_modules/request/node_modules/bl/node_modules/readable-stream/node_modules/core-util-is/package.json b/deps/npm/node_modules/readable-stream/node_modules/core-util-is/package.json similarity index 94% rename from deps/npm/node_modules/request/node_modules/bl/node_modules/readable-stream/node_modules/core-util-is/package.json rename to deps/npm/node_modules/readable-stream/node_modules/core-util-is/package.json index add87edf58d..4eb9ce4f3c1 100644 --- a/deps/npm/node_modules/request/node_modules/bl/node_modules/readable-stream/node_modules/core-util-is/package.json +++ 
b/deps/npm/node_modules/readable-stream/node_modules/core-util-is/package.json @@ -35,7 +35,7 @@ "shasum": "6b07085aef9a3ccac6ee53bf9d3df0c1521a5538", "tarball": "http://registry.npmjs.org/core-util-is/-/core-util-is-1.0.1.tgz" }, - "_from": "core-util-is@~1.0.0", + "_from": "core-util-is@>=1.0.0 <1.1.0", "_npmVersion": "1.3.23", "_npmUser": { "name": "isaacs", @@ -49,5 +49,6 @@ ], "directories": {}, "_shasum": "6b07085aef9a3ccac6ee53bf9d3df0c1521a5538", - "_resolved": "https://registry.npmjs.org/core-util-is/-/core-util-is-1.0.1.tgz" + "_resolved": "https://registry.npmjs.org/core-util-is/-/core-util-is-1.0.1.tgz", + "scripts": {} } diff --git a/deps/npm/node_modules/request/node_modules/bl/node_modules/readable-stream/node_modules/core-util-is/util.js b/deps/npm/node_modules/readable-stream/node_modules/core-util-is/util.js similarity index 100% rename from deps/npm/node_modules/request/node_modules/bl/node_modules/readable-stream/node_modules/core-util-is/util.js rename to deps/npm/node_modules/readable-stream/node_modules/core-util-is/util.js diff --git a/deps/npm/node_modules/request/node_modules/bl/node_modules/readable-stream/node_modules/isarray/README.md b/deps/npm/node_modules/readable-stream/node_modules/isarray/README.md similarity index 100% rename from deps/npm/node_modules/request/node_modules/bl/node_modules/readable-stream/node_modules/isarray/README.md rename to deps/npm/node_modules/readable-stream/node_modules/isarray/README.md diff --git a/deps/npm/node_modules/request/node_modules/bl/node_modules/readable-stream/node_modules/isarray/build/build.js b/deps/npm/node_modules/readable-stream/node_modules/isarray/build/build.js similarity index 100% rename from deps/npm/node_modules/request/node_modules/bl/node_modules/readable-stream/node_modules/isarray/build/build.js rename to deps/npm/node_modules/readable-stream/node_modules/isarray/build/build.js diff --git a/deps/npm/node_modules/request/node_modules/bl/node_modules/readable-stream/node_modules/isarray/component.json b/deps/npm/node_modules/readable-stream/node_modules/isarray/component.json similarity index 100% rename from deps/npm/node_modules/request/node_modules/bl/node_modules/readable-stream/node_modules/isarray/component.json rename to deps/npm/node_modules/readable-stream/node_modules/isarray/component.json diff --git a/deps/npm/node_modules/request/node_modules/bl/node_modules/readable-stream/node_modules/isarray/index.js b/deps/npm/node_modules/readable-stream/node_modules/isarray/index.js similarity index 100% rename from deps/npm/node_modules/request/node_modules/bl/node_modules/readable-stream/node_modules/isarray/index.js rename to deps/npm/node_modules/readable-stream/node_modules/isarray/index.js diff --git a/deps/npm/node_modules/request/node_modules/bl/node_modules/readable-stream/node_modules/isarray/package.json b/deps/npm/node_modules/readable-stream/node_modules/isarray/package.json similarity index 100% rename from deps/npm/node_modules/request/node_modules/bl/node_modules/readable-stream/node_modules/isarray/package.json rename to deps/npm/node_modules/readable-stream/node_modules/isarray/package.json diff --git a/deps/npm/node_modules/request/node_modules/bl/node_modules/readable-stream/node_modules/string_decoder/.npmignore b/deps/npm/node_modules/readable-stream/node_modules/string_decoder/.npmignore similarity index 100% rename from deps/npm/node_modules/request/node_modules/bl/node_modules/readable-stream/node_modules/string_decoder/.npmignore rename to 
deps/npm/node_modules/readable-stream/node_modules/string_decoder/.npmignore diff --git a/deps/npm/node_modules/request/node_modules/bl/node_modules/readable-stream/node_modules/string_decoder/LICENSE b/deps/npm/node_modules/readable-stream/node_modules/string_decoder/LICENSE similarity index 100% rename from deps/npm/node_modules/request/node_modules/bl/node_modules/readable-stream/node_modules/string_decoder/LICENSE rename to deps/npm/node_modules/readable-stream/node_modules/string_decoder/LICENSE diff --git a/deps/npm/node_modules/request/node_modules/bl/node_modules/readable-stream/node_modules/string_decoder/README.md b/deps/npm/node_modules/readable-stream/node_modules/string_decoder/README.md similarity index 100% rename from deps/npm/node_modules/request/node_modules/bl/node_modules/readable-stream/node_modules/string_decoder/README.md rename to deps/npm/node_modules/readable-stream/node_modules/string_decoder/README.md diff --git a/deps/npm/node_modules/request/node_modules/bl/node_modules/readable-stream/node_modules/string_decoder/index.js b/deps/npm/node_modules/readable-stream/node_modules/string_decoder/index.js similarity index 100% rename from deps/npm/node_modules/request/node_modules/bl/node_modules/readable-stream/node_modules/string_decoder/index.js rename to deps/npm/node_modules/readable-stream/node_modules/string_decoder/index.js diff --git a/deps/npm/node_modules/request/node_modules/bl/node_modules/readable-stream/node_modules/string_decoder/package.json b/deps/npm/node_modules/readable-stream/node_modules/string_decoder/package.json similarity index 96% rename from deps/npm/node_modules/request/node_modules/bl/node_modules/readable-stream/node_modules/string_decoder/package.json rename to deps/npm/node_modules/readable-stream/node_modules/string_decoder/package.json index a8c586bfb90..0364d54ba46 100644 --- a/deps/npm/node_modules/request/node_modules/bl/node_modules/readable-stream/node_modules/string_decoder/package.json +++ b/deps/npm/node_modules/readable-stream/node_modules/string_decoder/package.json @@ -28,7 +28,7 @@ }, "_id": "string_decoder@0.10.31", "_shasum": "62e203bc41766c6c28c9fc84301dab1c5310fa94", - "_from": "string_decoder@~0.10.x", + "_from": "string_decoder@>=0.10.0 <0.11.0", "_npmVersion": "1.4.23", "_npmUser": { "name": "rvagg", diff --git a/deps/npm/node_modules/request/node_modules/bl/node_modules/readable-stream/package.json b/deps/npm/node_modules/readable-stream/package.json similarity index 78% rename from deps/npm/node_modules/request/node_modules/bl/node_modules/readable-stream/package.json rename to deps/npm/node_modules/readable-stream/package.json index 14485870130..2fbd99751fb 100644 --- a/deps/npm/node_modules/request/node_modules/bl/node_modules/readable-stream/package.json +++ b/deps/npm/node_modules/readable-stream/package.json @@ -1,6 +1,6 @@ { "name": "readable-stream", - "version": "1.0.31", + "version": "1.0.32", "description": "Streams2, a user-land copy of the stream library from Node.js v0.10.x", "main": "readable.js", "dependencies": { @@ -33,14 +33,16 @@ "url": "http://blog.izs.me/" }, "license": "MIT", + "gitHead": "2024ad52b1e475465488b4ad39eb41d067ffcbb9", "bugs": { "url": "https://github.com/isaacs/readable-stream/issues" }, "homepage": "https://github.com/isaacs/readable-stream", - "_id": "readable-stream@1.0.31", - "_shasum": "8f2502e0bc9e3b0da1b94520aabb4e2603ecafae", - "_from": "readable-stream@~1.0.26", - "_npmVersion": "1.4.9", + "_id": "readable-stream@1.0.32", + "_shasum": 
"6b44a88ba984cd0ec0834ae7d59a47c39aef48ec", + "_from": "readable-stream@*", + "_npmVersion": "2.0.2", + "_nodeVersion": "0.10.31", "_npmUser": { "name": "rvagg", "email": "rod@vagg.org" @@ -60,10 +62,10 @@ } ], "dist": { - "shasum": "8f2502e0bc9e3b0da1b94520aabb4e2603ecafae", - "tarball": "http://registry.npmjs.org/readable-stream/-/readable-stream-1.0.31.tgz" + "shasum": "6b44a88ba984cd0ec0834ae7d59a47c39aef48ec", + "tarball": "http://registry.npmjs.org/readable-stream/-/readable-stream-1.0.32.tgz" }, "directories": {}, - "_resolved": "https://registry.npmjs.org/readable-stream/-/readable-stream-1.0.31.tgz", + "_resolved": "https://registry.npmjs.org/readable-stream/-/readable-stream-1.0.32.tgz", "readme": "ERROR: No README data found!" } diff --git a/deps/npm/node_modules/request/node_modules/bl/node_modules/readable-stream/passthrough.js b/deps/npm/node_modules/readable-stream/passthrough.js similarity index 100% rename from deps/npm/node_modules/request/node_modules/bl/node_modules/readable-stream/passthrough.js rename to deps/npm/node_modules/readable-stream/passthrough.js diff --git a/deps/npm/node_modules/request/node_modules/bl/node_modules/readable-stream/readable.js b/deps/npm/node_modules/readable-stream/readable.js similarity index 100% rename from deps/npm/node_modules/request/node_modules/bl/node_modules/readable-stream/readable.js rename to deps/npm/node_modules/readable-stream/readable.js diff --git a/deps/npm/node_modules/request/node_modules/bl/node_modules/readable-stream/transform.js b/deps/npm/node_modules/readable-stream/transform.js similarity index 100% rename from deps/npm/node_modules/request/node_modules/bl/node_modules/readable-stream/transform.js rename to deps/npm/node_modules/readable-stream/transform.js diff --git a/deps/npm/node_modules/request/node_modules/bl/node_modules/readable-stream/writable.js b/deps/npm/node_modules/readable-stream/writable.js similarity index 100% rename from deps/npm/node_modules/request/node_modules/bl/node_modules/readable-stream/writable.js rename to deps/npm/node_modules/readable-stream/writable.js diff --git a/deps/npm/node_modules/realize-package-specifier/.npmignore b/deps/npm/node_modules/realize-package-specifier/.npmignore new file mode 100644 index 00000000000..926ddf616c7 --- /dev/null +++ b/deps/npm/node_modules/realize-package-specifier/.npmignore @@ -0,0 +1,3 @@ +*~ +.#* +node_modules diff --git a/deps/npm/node_modules/realize-package-specifier/README.md b/deps/npm/node_modules/realize-package-specifier/README.md new file mode 100644 index 00000000000..577014a48cb --- /dev/null +++ b/deps/npm/node_modules/realize-package-specifier/README.md @@ -0,0 +1,58 @@ +realize-package-specifier +------------------------- + +Parse a package specifier, peeking at the disk to differentiate between +local tarballs, directories and named modules. This implements the logic +used by `npm install` and `npm cache` to determine where to get packages +from. + +```javascript +var realizePackageSpecifier = require("realize-package-specifier") +realizePackageSpecifier("foo.tar.gz", ".", function (err, package) { + … +}) +``` + +* realizePackageSpecifier(*spec*, [*where*,] *callback*) + +Parses *spec* using `npm-package-arg` and then uses stat to check to see if +it refers to a local tarball or package directory. Stats are done relative +to *where*. If it does then the local module is loaded. If it doesn't then +target is left as a remote package specifier. Package directories are +recognized by the presence of a package.json in them. 
+ +*spec* -- a package specifier, like: `foo@1.2`, or `foo@user/foo`, or +`http://x.com/foo.tgz`, or `git+https://github.com/user/foo` + +*where* (optional, default: .) -- The directory in which we should look for local tarballs or package directories. + +*callback* function(*err*, *result*) -- Called once we've determined what kind of specifier this is. The *result* object will be very like the one returned by `npm-package-arg` except with three differences: 1) There's a new type of `directory`. 2) The `local` type only refers to tarballs. 3) For all `local` and `directory` type results, `spec` will contain the full path of the local package. + +## Result Objects + +The full definition of the result object is: + +* `name` - If known, the `name` field expected in the resulting pkg. +* `type` - One of the following strings: + * `git` - A git repo + * `github` - A github shorthand, like `user/project` + * `tag` - A tagged version, like `"foo@latest"` + * `version` - A specific version number, like `"foo@1.2.3"` + * `range` - A version range, like `"foo@2.x"` + * `local` - A local file path + * `directory` - A local package directory + * `remote` - An http url (presumably to a tgz) +* `spec` - The "thing". URL, the range, git repo, etc. +* `raw` - The original un-modified string that was provided. +* `rawSpec` - The part after the `name@...`, as it was originally + provided. +* `scope` - If a name is something like `@org/module` then the `scope` + field will be set to `org`. If it doesn't have a scoped name, then + scope is `null`. + diff --git a/deps/npm/node_modules/realize-package-specifier/index.js b/deps/npm/node_modules/realize-package-specifier/index.js new file mode 100644 index 00000000000..261ad663077 --- /dev/null +++ b/deps/npm/node_modules/realize-package-specifier/index.js @@ -0,0 +1,38 @@ +"use strict" +var fs = require("fs") +var path = require("path") +var dz = require("dezalgo") +var npa = require("npm-package-arg") + +module.exports = function (spec, where, cb) { + if (where instanceof Function) { cb = where; where = null } + if (where == null) where = "." + cb = dz(cb) + try { + var dep = npa(spec) + } + catch (e) { + return cb(e) + } + var specpath = dep.type == "local" + ? path.resolve(where, dep.spec) + : path.resolve(dep.rawSpec? dep.rawSpec: dep.name) + fs.stat(specpath, function (er, s) { + if (er) return finalize() + if (!s.isDirectory()) return finalize("local") + fs.stat(path.join(specpath, "package.json"), function (er) { + finalize(er ? null : "directory") + }) + }) + function finalize(type) { + if (type != null && type != dep.type) { + dep.type = type + if (!
dep.rawSpec) { + dep.rawSpec = dep.name + dep.name = null + } + } + if (dep.type == "local" || dep.type == "directory") dep.spec = specpath + cb(null, dep) + } +} diff --git a/deps/npm/node_modules/realize-package-specifier/package.json b/deps/npm/node_modules/realize-package-specifier/package.json new file mode 100644 index 00000000000..53645763748 --- /dev/null +++ b/deps/npm/node_modules/realize-package-specifier/package.json @@ -0,0 +1,53 @@ +{ + "name": "realize-package-specifier", + "version": "1.2.0", + "description": "Like npm-package-arg, but more so, producing full file paths and differentiating local tar and directory sources.", + "main": "index.js", + "scripts": { + "test": "tap test/*.js" + }, + "license": "ISC", + "repository": { + "type": "git", + "url": "https://github.com/npm/realize-package-specifier.git" + }, + "author": { + "name": "Rebecca Turner", + "email": "me@re-becca.org", + "url": "http://re-becca.org" + }, + "homepage": "https://github.com/npm/realize-package-specifier", + "dependencies": { + "dezalgo": "^1.0.1", + "npm-package-arg": "^2.1.3" + }, + "devDependencies": { + "require-inject": "^1.1.0", + "tap": "^0.4.12" + }, + "gitHead": "39016343d5bd5572ab39374323e9588e54985910", + "bugs": { + "url": "https://github.com/npm/realize-package-specifier/issues" + }, + "_id": "realize-package-specifier@1.2.0", + "_shasum": "93364e40dee38369f92e9b0c76124500342132f2", + "_from": "realize-package-specifier@>=1.2.0 <1.3.0", + "_npmVersion": "2.1.2", + "_nodeVersion": "0.10.32", + "_npmUser": { + "name": "iarna", + "email": "me@re-becca.org" + }, + "maintainers": [ + { + "name": "iarna", + "email": "me@re-becca.org" + } + ], + "dist": { + "shasum": "93364e40dee38369f92e9b0c76124500342132f2", + "tarball": "http://registry.npmjs.org/realize-package-specifier/-/realize-package-specifier-1.2.0.tgz" + }, + "directories": {}, + "_resolved": "https://registry.npmjs.org/realize-package-specifier/-/realize-package-specifier-1.2.0.tgz" +} diff --git a/deps/npm/node_modules/realize-package-specifier/test/basic.js b/deps/npm/node_modules/realize-package-specifier/test/basic.js new file mode 100644 index 00000000000..d5d8fc6c07f --- /dev/null +++ b/deps/npm/node_modules/realize-package-specifier/test/basic.js @@ -0,0 +1,121 @@ +"use strict" +var test = require("tap").test +var requireInject = require("require-inject") +var path = require("path") + +var re = { + tarball: /[\/\\]a.tar.gz$/, + packagedir: /[\/\\]b$/, + packagejson: /[\/\\]b[\/\\]package.json$/, + nonpackagedir: /[\/\\]c$/, + nopackagejson: /[\/\\]c[\/\\]package.json$/, + remotename: /[\/\\]d$/, + packagedirlikegithub: /[\/\\]e[\/\\]1$/, + packagejsonlikegithub: /[\/\\]e[\/\\]1[\/\\]package.json$/, + github: /[\/\\]e[\/\\]2$/ +} + +var rps = requireInject("../index", { + "fs": { + "stat": function (path, callback) { + if (re.tarball.test(path)) { + callback(null,{isDirectory:function(){ return false }}) + } + else if (re.packagedir.test(path)) { + callback(null,{isDirectory:function(){ return true }}) + } + else if (re.packagejson.test(path)) { + callback(null,{}) + } + else if (re.nonpackagedir.test(path)) { + callback(null,{isDirectory:function(){ return true }}) + } + else if (re.nopackagejson.test(path)) { + callback(new Error("EFILENOTFOUND")) + } + else if (re.remotename.test(path)) { + callback(new Error("EFILENOTFOUND")) + } + else if (re.packagedirlikegithub.test(path)) { + callback(null,{isDirectory:function(){ return true }}) + } + else if (re.packagejsonlikegithub.test(path)) { + callback(null,{}) + } + else if 
(re.github.test(path)) { + callback(new Error("EFILENOTFOUND")) + } + else { + throw new Error("Unknown stat fixture path: "+path) + } + } + } +}) + +test("realize-package-specifier", function (t) { + t.plan(10) + rps("a.tar.gz", function (err, result) { + t.is(result.type, "local", "local tarball") + }) + rps("b", function (err, result) { + t.is(result.type, "directory", "local package directory") + }) + rps("c", function (err, result) { + t.is(result.type, "range", "remote package, non-package local directory") + }) + rps("d", function (err, result) { + t.is(result.type, "range", "remote package, no local directory") + }) + rps("file:./a.tar.gz", function (err, result) { + t.is(result.type, "local", "local tarball") + }) + rps("file:./b", function (err, result) { + t.is(result.type, "directory", "local package directory") + }) + rps("file:./c", function (err, result) { + t.is(result.type, "local", "non-package local directory, specified with a file URL") + }) + rps("file:./d", function (err, result) { + t.is(result.type, "local", "no local directory, specified with a file URL") + }) + rps("e/1", function (err, result) { + t.is(result.type, "directory", "local package directory") + }) + rps("e/2", function (err, result) { + t.is(result.type, "github", "github package dependency") + }) +}) +test("named realize-package-specifier", function (t) { + t.plan(10) + + rps("a@a.tar.gz", function (err, result) { + t.is(result.type, "local", "named local tarball") + }) + rps("b@b", function (err, result) { + t.is(result.type, "directory", "named local package directory") + }) + rps("c@c", function (err, result) { + t.is(result.type, "tag", "remote package, non-package local directory") + }) + rps("d@d", function (err, result) { + t.is(result.type, "tag", "remote package, no local directory") + }) + rps("a@file:./a.tar.gz", function (err, result) { + t.is(result.type, "local", "local tarball") + }) + rps("b@file:./b", function (err, result) { + t.is(result.type, "directory", "local package directory") + }) + rps("c@file:./c", function (err, result) { + t.is(result.type, "local", "non-package local directory, specified with a file URL") + }) + rps("d@file:./d", function (err, result) { + t.is(result.type, "local", "no local directory, specified with a file URL") + }) + rps("e@e/1", function (err, result) { + t.is(result.type, "directory", "local package directory") + }) + rps("e@e/2", function (err, result) { + t.is(result.type, "github", "github package dependency") + }) +}) diff --git a/deps/npm/node_modules/realize-package-specifier/test/npa-basic.js b/deps/npm/node_modules/realize-package-specifier/test/npa-basic.js new file mode 100644 index 00000000000..be07aa56a38 --- /dev/null +++ b/deps/npm/node_modules/realize-package-specifier/test/npa-basic.js @@ -0,0 +1,207 @@ +var test = require("tap").test; +var rps = require("../index.js") +var path = require("path") + +test("npa-basic", function (t) { + t.setMaxListeners(999) + + var tests = { + "foo@1.2": { + name: "foo", + type: "range", + spec: ">=1.2.0 <1.3.0", + raw: "foo@1.2", + rawSpec: "1.2" + }, + + "@foo/bar": { + raw: "@foo/bar", + name: "@foo/bar", + scope: "@foo", + rawSpec: "", + spec: "*", + type: "range" + }, + + "@foo/bar@": { + raw: "@foo/bar@", + name: "@foo/bar", + scope: "@foo", + rawSpec: "", + spec: "*", + type: "range" + }, + + "@foo/bar@baz": { + raw: "@foo/bar@baz", + name: "@foo/bar", + scope: "@foo", + rawSpec: "baz", + spec: "baz", + type: "tag" + }, + + "@f fo o al/ a d s ;f ": { + raw: "@f fo o al/ a d s ;f", + name: 
null, + rawSpec: "@f fo o al/ a d s ;f", + spec: path.resolve("@f fo o al/ a d s ;f"), + type: "local" + }, + + "foo@1.2.3": { + name: "foo", + type: "version", + spec: "1.2.3", + raw: "foo@1.2.3" + }, + + "foo@=v1.2.3": { + name: "foo", + type: "version", + spec: "1.2.3", + raw: "foo@=v1.2.3", + rawSpec: "=v1.2.3" + }, + + "git+ssh://git@github.com/user/foo#1.2.3": { + name: null, + type: "git", + spec: "ssh://git@github.com/user/foo#1.2.3", + raw: "git+ssh://git@github.com/user/foo#1.2.3" + }, + + "git+file://path/to/repo#1.2.3": { + name: null, + type: "git", + spec: "file://path/to/repo#1.2.3", + raw: "git+file://path/to/repo#1.2.3" + }, + + "git://github.com/user/foo": { + name: null, + type: "git", + spec: "git://github.com/user/foo", + raw: "git://github.com/user/foo" + }, + + "@foo/bar@git+ssh://github.com/user/foo": { + name: "@foo/bar", + scope: "@foo", + spec: "ssh://github.com/user/foo", + rawSpec: "git+ssh://github.com/user/foo", + raw: "@foo/bar@git+ssh://github.com/user/foo" + }, + + "/path/to/foo": { + name: null, + type: "local", + spec: "/path/to/foo", + raw: "/path/to/foo" + }, + + "file:path/to/foo": { + name: null, + type: "local", + spec: path.resolve(__dirname,"..","path/to/foo"), + raw: "file:path/to/foo" + }, + + "file:~/path/to/foo": { + name: null, + type: "local", + spec: path.resolve(__dirname,"..","~/path/to/foo"), + raw: "file:~/path/to/foo" + }, + + "file:../path/to/foo": { + name: null, + type: "local", + spec: path.resolve(__dirname,"..","../path/to/foo"), + raw: "file:../path/to/foo" + }, + + "file:///path/to/foo": { + name: null, + type: "local", + spec: "/path/to/foo", + raw: "file:///path/to/foo" + }, + + "https://server.com/foo.tgz": { + name: null, + type: "remote", + spec: "https://server.com/foo.tgz", + raw: "https://server.com/foo.tgz" + }, + + "user/foo-js": { + name: null, + type: "github", + spec: "user/foo-js", + raw: "user/foo-js" + }, + + "user/foo-js#bar/baz": { + name: null, + type: "github", + spec: "user/foo-js#bar/baz", + raw: "user/foo-js#bar/baz" + }, + + "user..blerg--/..foo-js# . . . . . some . tags / / /": { + name: null, + type: "github", + spec: "user..blerg--/..foo-js# . . . . . some . tags / / /", + raw: "user..blerg--/..foo-js# . . . . . some . 
tags / / /" + }, + + "user/foo-js#bar/baz/bin": { + name: null, + type: "github", + spec: "user/foo-js#bar/baz/bin", + raw: "user/foo-js#bar/baz/bin" + }, + + "foo@user/foo-js": { + name: "foo", + type: "github", + spec: "user/foo-js", + raw: "foo@user/foo-js" + }, + + "foo@latest": { + name: "foo", + type: "tag", + spec: "latest", + raw: "foo@latest" + }, + + "foo": { + name: "foo", + type: "range", + spec: "*", + raw: "foo" + } + } + + t.plan( 2 + Object.keys(tests).length * 3 ) + + Object.keys(tests).forEach(function (arg) { + rps(arg, path.resolve(__dirname,'..'), function(err, res) { + t.notOk(err, "No error") + t.type(res, "Result") + t.has(res, tests[arg]) + }) + }) + + // Completely unreasonable invalid garbage throws an error + rps("this is not a \0 valid package name or url", path.resolve(__dirname,'..'), function (err) { + t.ok(err, "error") + }) + + rps("gopher://yea right", path.resolve(__dirname,'..'), function (err) { + t.ok(err, "Unsupported URL Type: gopher://yea right") + }) + +}) diff --git a/deps/npm/node_modules/realize-package-specifier/test/npa-windows.js b/deps/npm/node_modules/realize-package-specifier/test/npa-windows.js new file mode 100644 index 00000000000..f6275bea9cb --- /dev/null +++ b/deps/npm/node_modules/realize-package-specifier/test/npa-windows.js @@ -0,0 +1,42 @@ +global.FAKE_WINDOWS = true + +var rps = require('../index.js') +var test = require("tap").test +var path = require("path") + +var cases = { + "C:\\x\\y\\z": { + raw: 'C:\\x\\y\\z', + scope: null, + name: null, + rawSpec: 'C:\\x\\y\\z', + spec: path.resolve('C:\\x\\y\\z'), + type: 'local' + }, + "foo@C:\\x\\y\\z": { + raw: 'foo@C:\\x\\y\\z', + scope: null, + name: 'foo', + rawSpec: 'C:\\x\\y\\z', + spec: path.resolve('C:\\x\\y\\z'), + type: 'local' + }, + "foo@/foo/bar/baz": { + raw: 'foo@/foo/bar/baz', + scope: null, + name: 'foo', + rawSpec: '/foo/bar/baz', + spec: path.resolve('/foo/bar/baz'), + type: 'local' + } +} + +test("parse a windows path", function (t) { + t.plan( Object.keys(cases).length ) + Object.keys(cases).forEach(function (c) { + var expect = cases[c] + rps(c, function(err, actual) { + t.same(actual, expect, c) + }) + }) +}) diff --git a/deps/npm/node_modules/request/.eslintrc b/deps/npm/node_modules/request/.eslintrc new file mode 100644 index 00000000000..9c3350d6bb7 --- /dev/null +++ b/deps/npm/node_modules/request/.eslintrc @@ -0,0 +1,22 @@ +{ + "env": { + "node": true + }, + "rules": { + // Disallow semi-colons, unless needed to disambiguate statement + "semi": [2, "never"], + // Require strings to use single quotes + "quotes": [2, "single"], + // Require curly braces for all control statements + "curly": 2, + // Disallow using variables and functions before they've been defined + "no-use-before-define": 2, + // Allow any case for variable naming + "camelcase": 0, + // Disallow unused variables, except as function arguments + "no-unused-vars": [2, {"args":"none"}], + // Allow leading underscores for method names + // REASON: we use underscores to denote private methods + "no-underscore-dangle": 0 + } +} diff --git a/deps/npm/node_modules/request/.travis.yml b/deps/npm/node_modules/request/.travis.yml index 6e4887af80f..742c7dfa0c5 100644 --- a/deps/npm/node_modules/request/.travis.yml +++ b/deps/npm/node_modules/request/.travis.yml @@ -2,11 +2,8 @@ language: node_js node_js: - "0.8" - "0.10" - -env: - - OPTIONALS=Y - - OPTIONALS=N - -install: - - if [[ "$OPTIONALS" == "Y" ]]; then npm install; fi - - if [[ "$OPTIONALS" == "N" ]]; then npm install --no-optional; fi 
+webhooks: + urls: https://webhooks.gitter.im/e/237280ed4796c19cc626 + on_success: change # options: [always|never|change] default: always + on_failure: always # options: [always|never|change] default: always + on_start: false # default: false diff --git a/deps/npm/node_modules/request/CONTRIBUTING.md b/deps/npm/node_modules/request/CONTRIBUTING.md index 06367a1b0c1..17d383e8e3d 100644 --- a/deps/npm/node_modules/request/CONTRIBUTING.md +++ b/deps/npm/node_modules/request/CONTRIBUTING.md @@ -4,7 +4,9 @@ ## What? -Individuals making significant and valuable contributions are given commit-access to the project to contribute as they see fit. This project is more like an open wiki than a standard guarded open source project. +Individuals making significant and valuable contributions are given +commit-access to the project to contribute as they see fit. This project is +more like an open wiki than a standard guarded open source project. ## Rules @@ -12,10 +14,21 @@ There are a few basic ground-rules for contributors: 1. **No `--force` pushes** or modifying the Git history in any way. 1. **Non-master branches** ought to be used for ongoing work. -1. **External API changes and significant modifications** ought to be subject to an **internal pull-request** to solicit feedback from other contributors. -1. Internal pull-requests to solicit feedback are *encouraged* for any other non-trivial contribution but left to the discretion of the contributor. -1. For significant changes wait a full 24 hours before merging so that active contributors who are distributed throughout the world have a chance to weigh in. +1. **External API changes and significant modifications** ought to be subject + to an **internal pull-request** to solicit feedback from other contributors. +1. Internal pull-requests to solicit feedback are *encouraged* for any other + non-trivial contribution but left to the discretion of the contributor. +1. For significant changes wait a full 24 hours before merging so that active + contributors who are distributed throughout the world have a chance to weigh + in. 1. Contributors should attempt to adhere to the prevailing code-style. +1. Run `npm test` locally before submitting your PR, to catch any easy to miss + style & testing issues. To diagnose test failures, there are two ways to + run a single test file: + - `node_modules/.bin/taper tests/test-file.js` - run using the default + [`taper`](/nylen/taper) test reporter. + - `node tests/test-file.js` - view the raw + [tap](https://testanything.org/) output. ## Releases @@ -24,6 +37,8 @@ Declaring formal releases remains the prerogative of the project maintainer. ## Changes to this arrangement -This is an experiment and feedback is welcome! This document may also be subject to pull-requests or changes by contributors where you believe you have something valuable to add or change. +This is an experiment and feedback is welcome! This document may also be +subject to pull-requests or changes by contributors where you believe you have +something valuable to add or change. 
-----------------------------------------
diff --git a/deps/npm/node_modules/request/README.md b/deps/npm/node_modules/request/README.md
index 1878fdfbb82..56604207841 100644
--- a/deps/npm/node_modules/request/README.md
+++ b/deps/npm/node_modules/request/README.md
@@ -1,4 +1,5 @@
 # Request — Simplified HTTP client
+[![Gitter](https://badges.gitter.im/Join Chat.svg)](https://gitter.im/mikeal/request?utm_source=badge&utm_medium=badge&utm_campaign=pr-badge&utm_content=badge)
 
 [![NPM](https://nodei.co/npm/request.png?downloads=true&downloadRank=true&stars=true)](https://nodei.co/npm/request/)
 
@@ -35,6 +36,18 @@ Request can also `pipe` to itself. When doing so, `content-type` and `content-le
 request.get('http://google.com/img.png').pipe(request.put('http://mysite.com/img.png'))
 ```
 
+Request emits a "response" event when a response is received. The `response` argument will be an instance of [http.IncomingMessage](http://nodejs.org/api/http.html#http_http_incomingmessage).
+
+```javascript
+request
+  .get('http://google.com/img.png')
+  .on('response', function(response) {
+    console.log(response.statusCode) // 200
+    console.log(response.headers['content-type']) // 'image/png'
+  })
+  .pipe(request.put('http://mysite.com/img.png'))
+```
+
 Now let’s get fancy.
 
 ```javascript
@@ -108,9 +121,8 @@ HTTP/1.1 200 OK
 At this point, the connection is left open, and the client is
 communicating directly with the `endpoint-server.com` machine.
 
-See (the wikipedia page on HTTP
-Tunneling)[http://en.wikipedia.org/wiki/HTTP_tunnel] for more
-information.
+See [the wikipedia page on HTTP Tunneling](http://en.wikipedia.org/wiki/HTTP_tunnel)
+for more information.
 
 By default, when proxying `http` traffic, request will simply make a
 standard proxied `http` request. This is done by making the `url`
@@ -169,48 +181,128 @@ header is *never* sent to the endpoint server, but only to the proxy
 server. All other headers are sent as-is over the established
 connection.
 
-## UNIX Socket
+### Controlling proxy behaviour using environment variables
+
+The following environment variables are respected by `request`:
+
+ * `HTTP_PROXY` / `http_proxy`
+ * `HTTPS_PROXY` / `https_proxy`
+ * `NO_PROXY` / `no_proxy`
+
+When `HTTP_PROXY` / `http_proxy` are set, they will be used to proxy non-SSL requests that do not have an explicit `proxy` configuration option present. Similarly, `HTTPS_PROXY` / `https_proxy` will be respected for SSL requests that do not have an explicit `proxy` configuration option. It is valid to define a proxy in one of the environment variables, but then override it for a specific request using the `proxy` configuration option. Furthermore, the `proxy` configuration option can be explicitly set to `false` / `null` to opt out of proxying altogether for that request.
+
+`request` is also aware of the `NO_PROXY`/`no_proxy` environment variables. These variables provide a granular way to opt out of proxying, on a per-host basis. They should contain a comma-separated list of hosts to opt out of proxying. It is also possible to opt out of proxying when a particular destination port is used. Finally, the variables may be set to `*` to opt out of the implicit proxy configuration of the other environment variables.
+
+Here are some examples of valid `no_proxy` values:
-`request` supports the `unix://` protocol for all requests. The path is assumed to be absolute to the root of the host file system.
+ * `google.com` - don't proxy HTTP/HTTPS requests to Google.
+ * `google.com:443` - don't proxy HTTPS requests to Google, but *do* proxy HTTP requests to Google.
+ * `google.com:443, yahoo.com:80` - don't proxy HTTPS requests to Google, and don't proxy HTTP requests to Yahoo!
+ * `*` - ignore `https_proxy`/`http_proxy` environment variables altogether.
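The proxy section added above describes the per-request `proxy` override and opt-out but ships no sample. Here is a minimal sketch of both, with hypothetical placeholder URLs:

```javascript
var request = require('request')

// Override any HTTP_PROXY / http_proxy environment setting for this one request
// (the target and proxy URLs here are placeholders)
request({
  url: 'http://example.com/',
  proxy: 'http://internal-proxy.example.com:8080'
}, function (error, response, body) {
  if (error) { return console.error(error) }
  console.log(body)
})

// Setting proxy to false opts this single request out of proxying entirely,
// regardless of what the environment variables say
request({url: 'http://example.com/', proxy: false}, function (error, response, body) {
  // this request is made as a direct connection
})
```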
-HTTP paths are extracted from the supplied URL by testing each level of the full URL against net.connect for a socket response.
+## UNIX Socket
 
-Thus the following request will GET `/httppath` from the HTTP server listening on `/tmp/unix.socket`
+`request` supports making requests to [UNIX Domain Sockets](http://en.wikipedia.org/wiki/Unix_domain_socket). To make one, use the following URL scheme:
 
 ```javascript
-request.get('unix://tmp/unix.socket/httppath')
+/* Pattern */ 'http://unix:SOCKET:PATH'
+/* Example */ request.get('http://unix:/absolute/path/to/unix.socket:/request/path')
 ```
 
+Note: The `SOCKET` path is assumed to be absolute to the root of the host file system.
+
+
 ## Forms
 
 `request` supports `application/x-www-form-urlencoded` and `multipart/form-data` form uploads. For `multipart/related` refer to the `multipart` API.
 
+#### application/x-www-form-urlencoded (URL-Encoded Forms)
+
 URL-encoded forms are simple.
 
 ```javascript
 request.post('http://service.com/upload', {form:{key:'value'}})
 // or
 request.post('http://service.com/upload').form({key:'value'})
+// or
+request.post({url:'http://service.com/upload', form: {key:'value'}}, function(err,httpResponse,body){ /* ... */ })
 ```
 
-For `multipart/form-data` we use the [form-data](https://github.com/felixge/node-form-data) library by [@felixge](https://github.com/felixge). You don’t need to worry about piping the form object or setting the headers, `request` will handle that for you.
+#### multipart/form-data (Multipart Form Uploads)
+
+For `multipart/form-data` we use the [form-data](https://github.com/felixge/node-form-data) library by [@felixge](https://github.com/felixge). In most cases, you can pass your upload form data via the `formData` option.
+
 ```javascript
-var r = request.post('http://service.com/upload', function optionalCallback (err, httpResponse, body) {
+var formData = {
+  // Pass a simple key-value pair
+  my_field: 'my_value',
+  // Pass data via Buffers
+  my_buffer: new Buffer([1, 2, 3]),
+  // Pass data via Streams
+  my_file: fs.createReadStream(__dirname + '/unicycle.jpg'),
+  // Pass multiple values with an Array
+  attachments: [
+    fs.createReadStream(__dirname + '/attachment1.jpg'),
+    fs.createReadStream(__dirname + '/attachment2.jpg')
+  ],
+  // Pass optional meta-data with an 'options' object with style: {value: DATA, options: OPTIONS}
+  // See the `form-data` README for more information about options: https://github.com/felixge/node-form-data
+  custom_file: {
+    value: fs.createReadStream('/dev/urandom'),
+    options: {
+      filename: 'topsecret.jpg',
+      contentType: 'image/jpg'
+    }
+  }
+};
+request.post({url:'http://service.com/upload', formData: formData}, function optionalCallback(err, httpResponse, body) {
   if (err) {
     return console.error('upload failed:', err);
   }
   console.log('Upload successful! Server responded with:', body);
-})
-var form = r.form()
-form.append('my_field', 'my_value')
-form.append('my_buffer', new Buffer([1, 2, 3]))
-form.append('my_file', fs.createReadStream(path.join(__dirname, 'doodle.png')))
-form.append('remote_file', request('http://google.com/doodle.png'))
+});
+```
+
+For advanced cases, you can access the form-data object itself via `r.form()`. This can be modified until the request is fired on the next cycle of the event-loop. (Note that calling `form()` will clear the currently set form data for that request.)
+
+```javascript
+// NOTE: Advanced use-case, for normal use see 'formData' usage above
+var r = request.post('http://service.com/upload', function optionalCallback(err, httpResponse, body) {
   // ...
-// Just like always, `r` is a writable stream, and can be used as such (you have until nextTick to pipe it, etc.)
-// Alternatively, you can provide a callback (that's what this example does — see `optionalCallback` above).
+var form = r.form();
+form.append('my_field', 'my_value');
+form.append('my_buffer', new Buffer([1, 2, 3]));
+form.append('custom_file', fs.createReadStream(__dirname + '/unicycle.jpg'), {filename: 'unicycle.jpg'});
 ```
+See the [form-data README](https://github.com/felixge/node-form-data) for more information & examples.
+
+#### multipart/related
+
+Some variations in different HTTP implementations require a newline/CRLF before, after, or both before and after the boundary of a `multipart/related` request (using the multipart option). This has been observed in the .NET WebAPI version 4.0. You can turn on a boundary preamble or postamble CRLF by setting `preambleCRLF` or `postambleCRLF` to `true` in your request options.
+
+```javascript
+  request(
+    { method: 'PUT'
+    , preambleCRLF: true
+    , postambleCRLF: true
+    , uri: 'http://service.com/upload'
+    , multipart:
+      [ { 'content-type': 'application/json'
+        , body: JSON.stringify({foo: 'bar', _attachments: {'message.txt': {follows: true, length: 18, 'content_type': 'text/plain' }}})
+        }
+      , { body: 'I am an attachment' }
+      ]
+    }
+  , function (error, response, body) {
+      if (error) {
+        return console.error('upload failed:', error);
+      }
+      console.log('Upload successful! Server responded with:', body);
+    }
+  )
+```
 
 
 ## HTTP Authentication
 
@@ -238,7 +330,7 @@ If passed as an option, `auth` should be a hash containing values `user` || `use
 
 `sendImmediately` defaults to `true`, which causes a basic authentication header to be sent. If `sendImmediately` is `false`, then `request` will retry with a proper authentication header after receiving a `401` response from the server (which must contain a `WWW-Authenticate` header indicating the required authentication method).
 
-Note that you can also use for basic authentication a trick using the URL itself, as specified in [RFC 1738](http://www.ietf.org/rfc/rfc1738.txt).
+Note that for basic authentication you can also use a trick with the URL itself, as specified in [RFC 1738](http://www.ietf.org/rfc/rfc1738.txt). Simply pass the `user:password` before the host with an `@` sign.
 
 ```javascript
@@ -331,35 +423,86 @@ function callback(error, response, body) {
 
 request(options, callback);
 ```
 
+## TLS/SSL Protocol
+
+TLS/SSL Protocol options, such as `cert`, `key` and `passphrase`, can be
+set in the `agentOptions` property of the `options` object.
+In the example below, we call an API that requires a client-side SSL certificate
+(in PEM format) with a passphrase-protected private key (in PEM format), and disable the SSLv3 protocol:
+
+```javascript
+var fs = require('fs')
+    , path = require('path')
+    , certFile = path.resolve(__dirname, 'ssl/client.crt')
+    , keyFile = path.resolve(__dirname, 'ssl/client.key')
+    , request = require('request');
+
+var options = {
+    url: 'https://api.some-server.com/',
+    agentOptions: {
+        'cert': fs.readFileSync(certFile),
+        'key': fs.readFileSync(keyFile),
+        // Or use `pfx` property replacing `cert` and `key` when using private key, certificate and CA certs in PFX or PKCS12 format:
+        // 'pfx': fs.readFileSync(pfxFilePath),
+        'passphrase': 'password',
+        'securityOptions': 'SSL_OP_NO_SSLv3'
+    }
+};
+
+request.get(options);
+```
+
+You can also force only SSLv3 to be used by specifying `secureProtocol`:
+
+```javascript
+
+request.get({
+    url: 'https://api.some-server.com/',
+    agentOptions: {
+        'secureProtocol': 'SSLv3_method'
+    }
+});
+```
+
 ## request(options, callback)
 
 The first argument can be either a `url` or an `options` object. The only required option is `uri`; all others are optional.
 
 * `uri` || `url` - fully qualified uri or a parsed url object from `url.parse()`
 * `qs` - object containing querystring values to be appended to the `uri`
+* `useQuerystring` - If true, use `querystring` to stringify and parse
+  querystrings, otherwise use `qs` (default: `false`). Set this option to
+  `true` if you need arrays to be serialized as `foo=bar&foo=baz` instead of the
+  default `foo[0]=bar&foo[1]=baz`.
 * `method` - http method (default: `"GET"`)
 * `headers` - http headers (default: `{}`)
-* `body` - entity body for PATCH, POST and PUT requests. Must be a `Buffer` or `String`.
-* `form` - when passed an object or a querystring, this sets `body` to a querystring representation of value, and adds `Content-type: application/x-www-form-urlencoded; charset=utf-8` header. When passed no options, a `FormData` instance is returned (and is piped to request).
+* `body` - entity body for PATCH, POST and PUT requests. Must be a `Buffer` or `String`, unless `json` is `true`. If `json` is `true`, then `body` must be a JSON-serializable object.
+* `form` - when passed an object or a querystring, this sets `body` to a querystring representation of value, and adds `Content-type: application/x-www-form-urlencoded` header. When passed no options, a `FormData` instance is returned (and is piped to request). See "Forms" section above.
+* `formData` - Data to pass for a `multipart/form-data` request. See "Forms" section above.
+* `multipart` - (experimental) Data to pass for a `multipart/related` request. See "Forms" section above.
 * `auth` - A hash containing values `user` || `username`, `pass` || `password`, and `sendImmediately` (optional). See documentation above.
 * `json` - sets `body` to JSON representation of value and adds `Content-type: application/json` header. Additionally, parses the response body as JSON.
 * `multipart` - (experimental) array of objects which contains their own headers and `body` attribute. Sends `multipart/related` request. See example below.
+* `preambleCRLF` - append a newline/CRLF before the boundary of your `multipart/form-data` request.
+* `postambleCRLF` - append a newline/CRLF at the end of the boundary of your `multipart/form-data` request.
 * `followRedirect` - follow HTTP 3xx responses as redirects (default: `true`). This property can also be implemented as a function which gets the `response` object as a single argument and should return `true` if redirects should continue, or `false` otherwise.
 * `followAllRedirects` - follow non-GET HTTP 3xx responses as redirects (default: `false`)
 * `maxRedirects` - the maximum number of redirects to follow (default: `10`)
-* `encoding` - Encoding to be used on `setEncoding` of response data. If `null`, the `body` is returned as a `Buffer`.
-* `pool` - A hash object containing the agents for these requests. If omitted, the request will use the global pool (which is set to node's default `maxSockets`)
-* `pool.maxSockets` - Integer containing the maximum amount of sockets in the pool.
+* `encoding` - Encoding to be used on `setEncoding` of response data. If `null`, the `body` is returned as a `Buffer`. Anything else **(including the default value of `undefined`)** will be passed as the [encoding](http://nodejs.org/api/buffer.html#buffer_buffer) parameter to `toString()` (meaning this is effectively `utf8` by default).
+* `pool` - An object describing which agents to use for the request. If this option is omitted the request will use the global agent (as long as [your options allow for it](request.js#L747)). Otherwise, request will search the pool for your custom agent. If no custom agent is found, a new agent will be created and added to the pool.
+  * A `maxSockets` property can also be provided on the `pool` object to set the max number of sockets for all agents created (ex: `pool: {maxSockets: Infinity}`).
 * `timeout` - Integer containing the number of milliseconds to wait for a request to respond before aborting the request
 * `proxy` - An HTTP proxy to be used. Supports proxy Auth with Basic Auth, identical to support for the `url` parameter (by embedding the auth info in the `uri`)
 * `oauth` - Options for OAuth HMAC-SHA1 signing. See documentation above.
 * `hawk` - Options for [Hawk signing](https://github.com/hueniverse/hawk). The `credentials` key must contain the necessary signing info, [see hawk docs for details](https://github.com/hueniverse/hawk#usage-example).
 * `strictSSL` - If `true`, requires SSL certificates be valid. **Note:** to use your own certificate authority, you need to specify an agent that was created with that CA as an option.
+* `agentOptions` - Object containing agent options. See documentation above. **Note:** [see the tls API doc for TLS/SSL options](http://nodejs.org/api/tls.html#tls_tls_connect_options_callback).
+
 * `jar` - If `true` and `tough-cookie` is installed, remember cookies for future use (or define your custom cookie jar; see examples section)
 * `aws` - `object` containing AWS signing information. Should have the properties `key`, `secret`. Also requires the property `bucket`, unless you’re specifying your `bucket` as part of the path, or the request doesn’t use a bucket (i.e. GET Services)
 * `httpSignature` - Options for the [HTTP Signature Scheme](https://github.com/joyent/node-http-signature/blob/master/http_signing.md) using [Joyent's library](https://github.com/joyent/node-http-signature). The `keyId` and `key` properties must be specified. See the docs for other options.
 * `localAddress` - Local interface to bind for network connections.
-* `gzip` - If `true`, add an `Accept-Encoding` header to request compressed content encodings from the server (if not already present) and decode supported content encodings in the response.
+* `gzip` - If `true`, add an `Accept-Encoding` header to request compressed content encodings from the server (if not already present) and decode supported content encodings in the response. **Note:** Automatic decoding of the response content is performed on the body data returned through `request` (both through the `request` stream and passed to the callback function) but is not performed on the `response` stream (available from the `response` event) which is the unmodified `http.IncomingMessage` object which may contain compressed data. See example below. * `tunnel` - If `true`, then *always* use a tunneling proxy. If `false` (default), then tunneling will only be used if the destination is `https`, or if a previous request in the redirect @@ -368,7 +511,7 @@ The first argument can be either a `url` or an `options` object. The only requir tunneling proxy. -The callback argument gets 3 arguments: +The callback argument gets 3 arguments: 1. An `error` when applicable (usually from [`http.ClientRequest`](http://nodejs.org/api/http.html#http_class_http_clientrequest) object) 2. An [`http.IncomingMessage`](http://nodejs.org/api/http.html#http_http_incomingmessage) object @@ -382,7 +525,7 @@ There are also shorthand methods for different HTTP METHODs and some other conve This method returns a wrapper around the normal request API that defaults to whatever options you pass in to it. -**Note:** You can call `.defaults()` on the wrapper that is returned from `request.defaults` to add/override defaults that were previously defaulted. +**Note:** You can call `.defaults()` on the wrapper that is returned from `request.defaults` to add/override defaults that were previously defaulted. For example: ```javascript @@ -450,7 +593,7 @@ request.get(url) Function that creates a new cookie. ```javascript -request.cookie('cookie_string_here') +request.cookie('key1=value1') ``` ### request.jar @@ -488,6 +631,37 @@ request.jar() ) ``` +For backwards-compatibility, response compression is not supported by default. +To accept gzip-compressed responses, set the `gzip` option to `true`. Note +that the body data passed through `request` is automatically decompressed +while the response object is unmodified and will contain compressed data if +the server sent a compressed response. + +```javascript + var request = require('request') + request( + { method: 'GET' + , uri: 'http://www.google.com' + , gzip: true + } + , function (error, response, body) { + // body is the decompressed response body + console.log('server encoded the data as: ' + (response.headers['content-encoding'] || 'identity')) + console.log('the decoded data is: ' + body) + } + ).on('data', function(data) { + // decompressed data as it is received + console.log('decoded chunk: ' + data) + }) + .on('response', function(response) { + // unmodified http.IncomingMessage object + response.on('data', function(data) { + // compressed data as it is received + console.log('received ' + data.length + ' bytes of compressed data') + }) + }) +``` + Cookies are disabled by default (else, they would be used in subsequent requests). To enable cookies, set `jar` to `true` (either in `defaults` or `options`) and install `tough-cookie`. 
```javascript @@ -511,10 +685,11 @@ OR ```javascript // `npm install --save tough-cookie` before this works -var j = request.jar() -var cookie = request.cookie('your_cookie_here') -j.setCookie(cookie, uri); -request({url: 'http://www.google.com', jar: j}, function () { +var j = request.jar(); +var cookie = request.cookie('key1=value1'); +var url = 'http://www.google.com'; +j.setCookie(cookie, url); +request({url: url, jar: j}, function () { request('http://images.google.com') }) ``` @@ -522,10 +697,10 @@ request({url: 'http://www.google.com', jar: j}, function () { To inspect your cookie jar after a request ```javascript -var j = request.jar() +var j = request.jar() request({url: 'http://www.google.com', jar: j}, function () { var cookie_string = j.getCookieString(uri); // "key1=value1; key2=value2; ..." - var cookies = j.getCookies(uri); + var cookies = j.getCookies(uri); // [{key: 'key1', value: 'value1', domain: "www.google.com", ...}, ...] }) ``` diff --git a/deps/npm/node_modules/request/index.js b/deps/npm/node_modules/request/index.js index 8e8a133e243..033268405df 100755 --- a/deps/npm/node_modules/request/index.js +++ b/deps/npm/node_modules/request/index.js @@ -12,16 +12,18 @@ // See the License for the specific language governing permissions and // limitations under the License. +'use strict' + var extend = require('util')._extend , cookies = require('./lib/cookies') - , copy = require('./lib/copy') , helpers = require('./lib/helpers') - , isFunction = helpers.isFunction + +var isFunction = helpers.isFunction , constructObject = helpers.constructObject , filterForCallback = helpers.filterForCallback , constructOptionsFrom = helpers.constructOptionsFrom , paramsHaveRequestBody = helpers.paramsHaveRequestBody - ; + // organize params for patch, post, put, head, del function initParams(uri, options, callback) { @@ -36,8 +38,9 @@ function initParams(uri, options, callback) { } function request (uri, options, callback) { - if (typeof uri === 'undefined') + if (typeof uri === 'undefined') { throw new Error('undefined is not a valid uri or options object.') + } var params = initParams(uri, options, callback) options = params.options @@ -48,8 +51,9 @@ function request (uri, options, callback) { } function requester(params) { - if(typeof params.options._requester === 'function') + if(typeof params.options._requester === 'function') { return params.options._requester + } return request } @@ -63,8 +67,9 @@ request.head = function (uri, options, callback) { var params = initParams(uri, options, callback) params.options.method = 'HEAD' - if (paramsHaveRequestBody(params)) - throw new Error("HTTP HEAD requests MUST NOT include a request body.") + if (paramsHaveRequestBody(params)) { + throw new Error('HTTP HEAD requests MUST NOT include a request body.') + } return requester(params)(params.uri || null, params.options, params.callback) } @@ -102,7 +107,7 @@ request.cookie = function (str) { } request.defaults = function (options, requester) { - + var self = this var wrap = function (method) { var headerlessOptions = function (options) { options = extend({}, options) @@ -119,13 +124,14 @@ request.defaults = function (options, requester) { return function (uri, opts, callback) { var params = initParams(uri, opts, callback) - params.options = extend(params.options, headerlessOptions(options)) + params.options = extend(headerlessOptions(options), params.options) - if (options.headers) + if (options.headers) { params.options.headers = getHeaders(params, options) + } if (isFunction(requester)) { - 
if (method === request) { + if (method === self) { method = requester } else { params.options._requester = requester @@ -136,23 +142,27 @@ request.defaults = function (options, requester) { } } - defaults = wrap(this) - defaults.get = wrap(this.get) - defaults.patch = wrap(this.patch) - defaults.post = wrap(this.post) - defaults.put = wrap(this.put) - defaults.head = wrap(this.head) - defaults.del = wrap(this.del) - defaults.cookie = wrap(this.cookie) - defaults.jar = this.jar - defaults.defaults = this.defaults + var defaults = wrap(self) + defaults.get = wrap(self.get) + defaults.patch = wrap(self.patch) + defaults.post = wrap(self.post) + defaults.put = wrap(self.put) + defaults.head = wrap(self.head) + defaults.del = wrap(self.del) + defaults.cookie = wrap(self.cookie) + defaults.jar = self.jar + defaults.defaults = self.defaults return defaults } request.forever = function (agentOptions, optionsArg) { var options = constructObject() - if (optionsArg) options.extend(optionsArg) - if (agentOptions) options.agentOptions = agentOptions + if (optionsArg) { + options.extend(optionsArg) + } + if (agentOptions) { + options.agentOptions = agentOptions + } options.extend({forever: true}) return request.defaults(options.done()) diff --git a/deps/npm/node_modules/request/lib/cookies.js b/deps/npm/node_modules/request/lib/cookies.js index 7e61c62bcd5..017bdb467e4 100644 --- a/deps/npm/node_modules/request/lib/cookies.js +++ b/deps/npm/node_modules/request/lib/cookies.js @@ -1,31 +1,41 @@ -var optional = require('./optional') - , tough = optional('tough-cookie') - , Cookie = tough && tough.Cookie - , CookieJar = tough && tough.CookieJar - ; +'use strict' + +var tough = require('tough-cookie') + +var Cookie = tough.Cookie + , CookieJar = tough.CookieJar + exports.parse = function(str) { - if (str && str.uri) str = str.uri - if (typeof str !== 'string') throw new Error("The cookie function only accepts STRING as param") + if (str && str.uri) { + str = str.uri + } + if (typeof str !== 'string') { + throw new Error('The cookie function only accepts STRING as param') + } if (!Cookie) { - return null; + return null } return Cookie.parse(str) -}; +} // Adapt the sometimes-Async api of tough.CookieJar to our requirements function RequestJar() { - this._jar = new CookieJar(); + var self = this + self._jar = new CookieJar() } RequestJar.prototype.setCookie = function(cookieOrStr, uri, options) { - return this._jar.setCookieSync(cookieOrStr, uri, options || {}); -}; + var self = this + return self._jar.setCookieSync(cookieOrStr, uri, options || {}) +} RequestJar.prototype.getCookieString = function(uri) { - return this._jar.getCookieStringSync(uri); -}; + var self = this + return self._jar.getCookieStringSync(uri) +} RequestJar.prototype.getCookies = function(uri) { - return this._jar.getCookiesSync(uri); -}; + var self = this + return self._jar.getCookiesSync(uri) +} exports.jar = function() { if (!CookieJar) { @@ -34,7 +44,7 @@ exports.jar = function() { setCookie: function(){}, getCookieString: function(){}, getCookies: function(){} - }; + } } - return new RequestJar(); -}; + return new RequestJar() +} diff --git a/deps/npm/node_modules/request/lib/copy.js b/deps/npm/node_modules/request/lib/copy.js index 56831ff80f9..ad162a50892 100644 --- a/deps/npm/node_modules/request/lib/copy.js +++ b/deps/npm/node_modules/request/lib/copy.js @@ -1,3 +1,5 @@ +'use strict' + module.exports = function copy (obj) { var o = {} @@ -5,4 +7,4 @@ function copy (obj) { o[i] = obj[i] }) return o -} \ No newline at end of file 
+} diff --git a/deps/npm/node_modules/request/lib/debug.js b/deps/npm/node_modules/request/lib/debug.js index d61ec88d7f5..25e3dedc7ef 100644 --- a/deps/npm/node_modules/request/lib/debug.js +++ b/deps/npm/node_modules/request/lib/debug.js @@ -1,6 +1,8 @@ +'use strict' + var util = require('util') , request = require('../index') - ; + module.exports = function debug() { if (request.debug) { diff --git a/deps/npm/node_modules/request/lib/helpers.js b/deps/npm/node_modules/request/lib/helpers.js index eb3f3e1f29b..fa5712ffbc2 100644 --- a/deps/npm/node_modules/request/lib/helpers.js +++ b/deps/npm/node_modules/request/lib/helpers.js @@ -1,4 +1,16 @@ +'use strict' + var extend = require('util')._extend + , jsonSafeStringify = require('json-stringify-safe') + , crypto = require('crypto') + +function deferMethod() { + if(typeof setImmediate === 'undefined') { + return process.nextTick + } + + return setImmediate +} function constructObject(initialObject) { initialObject = initialObject || {} @@ -15,21 +27,25 @@ function constructObject(initialObject) { function constructOptionsFrom(uri, options) { var params = constructObject() - if (typeof uri === 'object') params.extend(uri) - if (typeof uri === 'string') params.extend({uri: uri}) - params.extend(options) + if (typeof options === 'object') { + params.extend(options).extend({uri: uri}) + } else if (typeof uri === 'string') { + params.extend({uri: uri}) + } else { + params.extend(uri) + } return params.done() } +function isFunction(value) { + return typeof value === 'function' +} + function filterForCallback(values) { var callbacks = values.filter(isFunction) return callbacks[0] } -function isFunction(value) { - return typeof value === 'function' -} - function paramsHaveRequestBody(params) { return ( params.options.body || @@ -39,8 +55,35 @@ function paramsHaveRequestBody(params) { ) } +function safeStringify (obj) { + var ret + try { + ret = JSON.stringify(obj) + } catch (e) { + ret = jsonSafeStringify(obj) + } + return ret +} + +function md5 (str) { + return crypto.createHash('md5').update(str).digest('hex') +} + +function isReadStream (rs) { + return rs.readable && rs.path && rs.mode +} + +function toBase64 (str) { + return (new Buffer(str || '', 'ascii')).toString('base64') +} + exports.isFunction = isFunction exports.constructObject = constructObject exports.constructOptionsFrom = constructOptionsFrom exports.filterForCallback = filterForCallback exports.paramsHaveRequestBody = paramsHaveRequestBody +exports.safeStringify = safeStringify +exports.md5 = md5 +exports.isReadStream = isReadStream +exports.toBase64 = toBase64 +exports.defer = deferMethod() diff --git a/deps/npm/node_modules/request/lib/optional.js b/deps/npm/node_modules/request/lib/optional.js deleted file mode 100644 index af0cc15f8c1..00000000000 --- a/deps/npm/node_modules/request/lib/optional.js +++ /dev/null @@ -1,13 +0,0 @@ -module.exports = function(moduleName) { - try { - return module.parent.require(moduleName); - } catch (e) { - // This could mean that we are in a browser context. - // Add another try catch like it used to be, for backwards compability - // and browserify reasons. 
- try { - return require(moduleName); - } - catch (e) {} - } -}; diff --git a/deps/npm/node_modules/request/node_modules/aws-sign2/package.json b/deps/npm/node_modules/request/node_modules/aws-sign2/package.json index 719d4887064..9104550c823 100644 --- a/deps/npm/node_modules/request/node_modules/aws-sign2/package.json +++ b/deps/npm/node_modules/request/node_modules/aws-sign2/package.json @@ -27,7 +27,7 @@ "shasum": "c57103f7a17fc037f02d7c2e64b602ea223f7d63", "tarball": "http://registry.npmjs.org/aws-sign2/-/aws-sign2-0.5.0.tgz" }, - "_from": "aws-sign2@~0.5.0", + "_from": "aws-sign2@>=0.5.0 <0.6.0", "_npmVersion": "1.3.2", "_npmUser": { "name": "mikeal", @@ -42,5 +42,6 @@ "directories": {}, "_shasum": "c57103f7a17fc037f02d7c2e64b602ea223f7d63", "_resolved": "https://registry.npmjs.org/aws-sign2/-/aws-sign2-0.5.0.tgz", - "homepage": "https://github.com/mikeal/aws-sign" + "homepage": "https://github.com/mikeal/aws-sign", + "scripts": {} } diff --git a/deps/npm/node_modules/request/node_modules/bl/LICENSE b/deps/npm/node_modules/request/node_modules/bl/LICENSE deleted file mode 100644 index f6a0029de11..00000000000 --- a/deps/npm/node_modules/request/node_modules/bl/LICENSE +++ /dev/null @@ -1,39 +0,0 @@ -Copyright 2013, Rod Vagg (the "Original Author") -All rights reserved. - -MIT +no-false-attribs License - -Permission is hereby granted, free of charge, to any person -obtaining a copy of this software and associated documentation -files (the "Software"), to deal in the Software without -restriction, including without limitation the rights to use, -copy, modify, merge, publish, distribute, sublicense, and/or sell -copies of the Software, and to permit persons to whom the -Software is furnished to do so, subject to the following -conditions: - -The above copyright notice and this permission notice shall be -included in all copies or substantial portions of the Software. - -Distributions of all or part of the Software intended to be used -by the recipients as they would use the unmodified Software, -containing modifications that substantially alter, remove, or -disable functionality of the Software, outside of the documented -configuration mechanisms provided by the Software, shall be -modified such that the Original Author's bug reporting email -addresses and urls are either replaced with the contact information -of the parties responsible for the changes, or removed entirely. - -THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, -EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES -OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND -NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT -HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, -WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING -FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR -OTHER DEALINGS IN THE SOFTWARE. - - -Except where noted, this license applies to any and all software -programs and associated documentation files created by the -Original Author, when distributed with the Software. 
\ No newline at end of file diff --git a/deps/npm/node_modules/request/node_modules/bl/LICENSE.md b/deps/npm/node_modules/request/node_modules/bl/LICENSE.md new file mode 100644 index 00000000000..ccb24797c89 --- /dev/null +++ b/deps/npm/node_modules/request/node_modules/bl/LICENSE.md @@ -0,0 +1,13 @@ +The MIT License (MIT) +===================== + +Copyright (c) 2014 bl contributors +---------------------------------- + +*bl contributors listed at * + +Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions: + +The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software. + +THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. diff --git a/deps/npm/node_modules/request/node_modules/bl/README.md b/deps/npm/node_modules/request/node_modules/bl/README.md index 386fbd5f671..1753cc40b10 100644 --- a/deps/npm/node_modules/request/node_modules/bl/README.md +++ b/deps/npm/node_modules/request/node_modules/bl/README.md @@ -1,7 +1,5 @@ # bl *(BufferList)* -[![Build Status](https://secure.travis-ci.org/rvagg/bl.png)](http://travis-ci.org/rvagg/bl) - **A Node.js Buffer list collector, reader and streamer thingy.** [![NPM](https://nodei.co/npm/bl.png?downloads=true&downloadRank=true)](https://nodei.co/npm/bl/) @@ -169,7 +167,7 @@ console.log(bl.toString()) -------------------------------------------------------- -### bl.readDoubleBE(), bl.readDoubleLE(), bl.readFloatBE(), bl.readFloatLE(), bl.readInt32BE(), bl.readInt32LE(), bl.readUInt32BE(), bl.readUInt32LE(), bl.readInt16BE(), bl.readInt16LE(), bl.readUInt16BE(), bl.readUInt16LE(), bl.readInt8(), bl.readUInt8() +### bl.readDoubleBE(), bl.readDoubleLE(), bl.readFloatBE(), bl.readFloatLE(), bl.readInt32BE(), bl.readInt32LE(), bl.readUInt32BE(), bl.readUInt32LE(), bl.readInt16BE(), bl.readInt16LE(), bl.readUInt16BE(), bl.readUInt16LE(), bl.readInt8(), bl.readUInt8() All of the standard byte-reading methods of the `Buffer` interface are implemented and will operate across internal Buffer boundaries transparently. @@ -188,9 +186,10 @@ See the [Buffer](http://nodejs.org/docs/latest/api/buffer.html)< * [Rod Vagg](https://github.com/rvagg) * [Matteo Collina](https://github.com/mcollina) + * [Jarett Cruger](https://github.com/jcrugzz) ======= ## License -**bl** is Copyright (c) 2013 Rod Vagg [@rvagg](https://twitter.com/rvagg) and licenced under the MIT licence. All rights not explicitly granted in the MIT license are reserved. See the included LICENSE file for more details. +**bl** is Copyright (c) 2013 Rod Vagg [@rvagg](https://twitter.com/rvagg) and licenced under the MIT licence. All rights not explicitly granted in the MIT license are reserved. See the included LICENSE.md file for more details. 
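As a quick illustration of the `bl` behaviour covered by the README hunk above (byte reads that operate across the list's internal Buffer boundaries), here is a minimal sketch; the byte values are arbitrary:

```javascript
var BufferList = require('bl')

var bl = new BufferList()
bl.append(new Buffer([0x12]))       // first internal buffer
bl.append(new Buffer([0x34, 0x56])) // second internal buffer

// The 16-bit read crosses the boundary between the two buffers transparently
console.log(bl.readUInt16BE(0).toString(16)) // '1234'
console.log(bl.length)                       // 3
```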
diff --git a/deps/npm/node_modules/request/node_modules/bl/package.json b/deps/npm/node_modules/request/node_modules/bl/package.json index 19c4ac079f4..a5692e03c70 100644 --- a/deps/npm/node_modules/request/node_modules/bl/package.json +++ b/deps/npm/node_modules/request/node_modules/bl/package.json @@ -1,6 +1,6 @@ { "name": "bl", - "version": "0.9.1", + "version": "0.9.3", "description": "Buffer List: collect buffers and access with a standard readable Buffer interface, streamable too!", "main": "bl.js", "scripts": { @@ -14,7 +14,8 @@ "homepage": "https://github.com/rvagg/bl", "authors": [ "Rod Vagg (https://github.com/rvagg)", - "Matteo Collina (https://github.com/mcollina)" + "Matteo Collina (https://github.com/mcollina)", + "Jarett Cruger (https://github.com/jcrugzz)" ], "keywords": [ "buffer", @@ -32,14 +33,14 @@ "faucet": "~0.0.1", "brtapsauce": "~0.3.0" }, - "gitHead": "53d3d10e39be326feb049ab27437173b3ce47ec4", + "gitHead": "4987a76bf6bafd7616e62c7023c955e62f3a9461", "bugs": { "url": "https://github.com/rvagg/bl/issues" }, - "_id": "bl@0.9.1", - "_shasum": "d262c5b83aa5cf4386cea1d998c82b36d7ae2942", - "_from": "bl@~0.9.0", - "_npmVersion": "1.4.21", + "_id": "bl@0.9.3", + "_shasum": "c41eff3e7cb31bde107c8f10076d274eff7f7d44", + "_from": "bl@>=0.9.0 <0.10.0", + "_npmVersion": "1.4.27", "_npmUser": { "name": "rvagg", "email": "rod@vagg.org" @@ -51,10 +52,10 @@ } ], "dist": { - "shasum": "d262c5b83aa5cf4386cea1d998c82b36d7ae2942", - "tarball": "http://registry.npmjs.org/bl/-/bl-0.9.1.tgz" + "shasum": "c41eff3e7cb31bde107c8f10076d274eff7f7d44", + "tarball": "http://registry.npmjs.org/bl/-/bl-0.9.3.tgz" }, "directories": {}, - "_resolved": "https://registry.npmjs.org/bl/-/bl-0.9.1.tgz", + "_resolved": "https://registry.npmjs.org/bl/-/bl-0.9.3.tgz", "readme": "ERROR: No README data found!" } diff --git a/deps/npm/node_modules/request/node_modules/caseless/package.json b/deps/npm/node_modules/request/node_modules/caseless/package.json index 62d467544dd..3725c102644 100644 --- a/deps/npm/node_modules/request/node_modules/caseless/package.json +++ b/deps/npm/node_modules/request/node_modules/caseless/package.json @@ -30,7 +30,7 @@ "homepage": "https://github.com/mikeal/caseless", "_id": "caseless@0.6.0", "_shasum": "8167c1ab8397fb5bb95f96d28e5a81c50f247ac4", - "_from": "caseless@~0.6.0", + "_from": "caseless@>=0.6.0 <0.7.0", "_npmVersion": "1.4.9", "_npmUser": { "name": "mikeal", diff --git a/deps/npm/node_modules/request/node_modules/forever-agent/package.json b/deps/npm/node_modules/request/node_modules/forever-agent/package.json index 764ca1e2c4f..1bb44419367 100644 --- a/deps/npm/node_modules/request/node_modules/forever-agent/package.json +++ b/deps/npm/node_modules/request/node_modules/forever-agent/package.json @@ -26,7 +26,7 @@ "shasum": "6d0e09c4921f94a27f63d3b49c5feff1ea4c5130", "tarball": "http://registry.npmjs.org/forever-agent/-/forever-agent-0.5.2.tgz" }, - "_from": "forever-agent@~0.5.0", + "_from": "forever-agent@>=0.5.0 <0.6.0", "_npmVersion": "1.3.21", "_npmUser": { "name": "mikeal", @@ -41,5 +41,6 @@ "directories": {}, "_shasum": "6d0e09c4921f94a27f63d3b49c5feff1ea4c5130", "_resolved": "https://registry.npmjs.org/forever-agent/-/forever-agent-0.5.2.tgz", - "readme": "ERROR: No README data found!" 
+ "readme": "ERROR: No README data found!", + "scripts": {} } diff --git a/deps/npm/node_modules/request/node_modules/form-data/node_modules/async/package.json b/deps/npm/node_modules/request/node_modules/form-data/node_modules/async/package.json index bdbe740109c..e8f9ed81b6c 100644 --- a/deps/npm/node_modules/request/node_modules/form-data/node_modules/async/package.json +++ b/deps/npm/node_modules/request/node_modules/form-data/node_modules/async/package.json @@ -41,7 +41,7 @@ "shasum": "ac3613b1da9bed1b47510bb4651b8931e47146c7", "tarball": "http://registry.npmjs.org/async/-/async-0.9.0.tgz" }, - "_from": "async@~0.9.0", + "_from": "async@>=0.9.0 <0.10.0", "_npmVersion": "1.4.3", "_npmUser": { "name": "caolan", diff --git a/deps/npm/node_modules/request/node_modules/form-data/node_modules/combined-stream/node_modules/delayed-stream/package.json b/deps/npm/node_modules/request/node_modules/form-data/node_modules/combined-stream/node_modules/delayed-stream/package.json index cbafd00ee70..3324a13e97b 100644 --- a/deps/npm/node_modules/request/node_modules/form-data/node_modules/combined-stream/node_modules/delayed-stream/package.json +++ b/deps/npm/node_modules/request/node_modules/form-data/node_modules/combined-stream/node_modules/delayed-stream/package.json @@ -33,8 +33,8 @@ "scripts": {}, "directories": {}, "_shasum": "d4b1f43a93e8296dfe02694f4680bc37a313c73f", - "_from": "delayed-stream@0.0.5", "_resolved": "https://registry.npmjs.org/delayed-stream/-/delayed-stream-0.0.5.tgz", + "_from": "delayed-stream@0.0.5", "bugs": { "url": "https://github.com/felixge/node-delayed-stream/issues" }, diff --git a/deps/npm/node_modules/request/node_modules/form-data/node_modules/combined-stream/package.json b/deps/npm/node_modules/request/node_modules/form-data/node_modules/combined-stream/package.json index 37c37314cc0..080953f1600 100644 --- a/deps/npm/node_modules/request/node_modules/form-data/node_modules/combined-stream/package.json +++ b/deps/npm/node_modules/request/node_modules/form-data/node_modules/combined-stream/package.json @@ -31,7 +31,7 @@ }, "_id": "combined-stream@0.0.5", "_shasum": "29ed76e5c9aad07c4acf9ca3d32601cce28697a2", - "_from": "combined-stream@~0.0.4", + "_from": "combined-stream@>=0.0.4 <0.1.0", "_npmVersion": "1.4.14", "_npmUser": { "name": "alexindigo", diff --git a/deps/npm/node_modules/request/node_modules/form-data/node_modules/mime/package.json b/deps/npm/node_modules/request/node_modules/form-data/node_modules/mime/package.json index 259822b788e..b666b72a2a1 100644 --- a/deps/npm/node_modules/request/node_modules/form-data/node_modules/mime/package.json +++ b/deps/npm/node_modules/request/node_modules/form-data/node_modules/mime/package.json @@ -35,7 +35,7 @@ "shasum": "58203eed86e3a5ef17aed2b7d9ebd47f0a60dd10", "tarball": "http://registry.npmjs.org/mime/-/mime-1.2.11.tgz" }, - "_from": "mime@~1.2.11", + "_from": "mime@>=1.2.11 <1.3.0", "_npmVersion": "1.3.6", "_npmUser": { "name": "broofa", @@ -54,5 +54,6 @@ "directories": {}, "_shasum": "58203eed86e3a5ef17aed2b7d9ebd47f0a60dd10", "_resolved": "https://registry.npmjs.org/mime/-/mime-1.2.11.tgz", - "homepage": "https://github.com/broofa/node-mime" + "homepage": "https://github.com/broofa/node-mime", + "scripts": {} } diff --git a/deps/npm/node_modules/request/node_modules/form-data/package.json b/deps/npm/node_modules/request/node_modules/form-data/package.json index afda8b6c30c..7700d99929d 100644 --- a/deps/npm/node_modules/request/node_modules/form-data/package.json +++ 
b/deps/npm/node_modules/request/node_modules/form-data/package.json @@ -42,7 +42,7 @@ "homepage": "https://github.com/felixge/node-form-data", "_id": "form-data@0.1.4", "_shasum": "91abd788aba9702b1aabfa8bc01031a2ac9e3b12", - "_from": "form-data@~0.1.0", + "_from": "form-data@>=0.1.0 <0.2.0", "_npmVersion": "1.4.14", "_npmUser": { "name": "alexindigo", diff --git a/deps/npm/node_modules/request/node_modules/hawk/node_modules/boom/package.json b/deps/npm/node_modules/request/node_modules/hawk/node_modules/boom/package.json index 2406a49a5db..c7875b4cbb2 100755 --- a/deps/npm/node_modules/request/node_modules/hawk/node_modules/boom/package.json +++ b/deps/npm/node_modules/request/node_modules/hawk/node_modules/boom/package.json @@ -41,7 +41,7 @@ "shasum": "7a636e9ded4efcefb19cef4947a3c67dfaee911b", "tarball": "http://registry.npmjs.org/boom/-/boom-0.4.2.tgz" }, - "_from": "boom@0.4.x", + "_from": "boom@>=0.4.0 <0.5.0", "_npmVersion": "1.2.18", "_npmUser": { "name": "hueniverse", diff --git a/deps/npm/node_modules/request/node_modules/hawk/node_modules/cryptiles/package.json b/deps/npm/node_modules/request/node_modules/hawk/node_modules/cryptiles/package.json index c4cd1b23426..1248613351a 100755 --- a/deps/npm/node_modules/request/node_modules/hawk/node_modules/cryptiles/package.json +++ b/deps/npm/node_modules/request/node_modules/hawk/node_modules/cryptiles/package.json @@ -45,7 +45,7 @@ "shasum": "ed91ff1f17ad13d3748288594f8a48a0d26f325c", "tarball": "http://registry.npmjs.org/cryptiles/-/cryptiles-0.2.2.tgz" }, - "_from": "cryptiles@0.2.x", + "_from": "cryptiles@>=0.2.0 <0.3.0", "_npmVersion": "1.2.24", "_npmUser": { "name": "hueniverse", diff --git a/deps/npm/node_modules/request/node_modules/hawk/node_modules/hoek/package.json b/deps/npm/node_modules/request/node_modules/hawk/node_modules/hoek/package.json index 4e4eb74b7a2..789de2adbfb 100755 --- a/deps/npm/node_modules/request/node_modules/hawk/node_modules/hoek/package.json +++ b/deps/npm/node_modules/request/node_modules/hawk/node_modules/hoek/package.json @@ -43,7 +43,7 @@ "shasum": "3d322462badf07716ea7eb85baf88079cddce505", "tarball": "http://registry.npmjs.org/hoek/-/hoek-0.9.1.tgz" }, - "_from": "hoek@0.9.x", + "_from": "hoek@>=0.9.0 <0.10.0", "_npmVersion": "1.2.18", "_npmUser": { "name": "hueniverse", diff --git a/deps/npm/node_modules/request/node_modules/hawk/node_modules/sntp/package.json b/deps/npm/node_modules/request/node_modules/hawk/node_modules/sntp/package.json index c96e8482aca..0656c84e198 100755 --- a/deps/npm/node_modules/request/node_modules/hawk/node_modules/sntp/package.json +++ b/deps/npm/node_modules/request/node_modules/hawk/node_modules/sntp/package.json @@ -42,7 +42,7 @@ "shasum": "fb885f18b0f3aad189f824862536bceeec750900", "tarball": "http://registry.npmjs.org/sntp/-/sntp-0.2.4.tgz" }, - "_from": "sntp@0.2.x", + "_from": "sntp@>=0.2.0 <0.3.0", "_npmVersion": "1.2.18", "_npmUser": { "name": "hueniverse", diff --git a/deps/npm/node_modules/request/node_modules/http-signature/node_modules/asn1/package.json b/deps/npm/node_modules/request/node_modules/http-signature/node_modules/asn1/package.json index ad8294e904a..8c68193cd1e 100644 --- a/deps/npm/node_modules/request/node_modules/http-signature/node_modules/asn1/package.json +++ b/deps/npm/node_modules/request/node_modules/http-signature/node_modules/asn1/package.json @@ -53,8 +53,8 @@ ], "directories": {}, "_shasum": "559be18376d08a4ec4dbe80877d27818639b2df7", - "_from": "asn1@0.1.11", "_resolved": "https://registry.npmjs.org/asn1/-/asn1-0.1.11.tgz", + 
"_from": "asn1@0.1.11", "bugs": { "url": "https://github.com/mcavage/node-asn1/issues" }, diff --git a/deps/npm/node_modules/request/node_modules/http-signature/node_modules/assert-plus/package.json b/deps/npm/node_modules/request/node_modules/http-signature/node_modules/assert-plus/package.json index d55a6bd65ab..d06cbfd7347 100644 --- a/deps/npm/node_modules/request/node_modules/http-signature/node_modules/assert-plus/package.json +++ b/deps/npm/node_modules/request/node_modules/http-signature/node_modules/assert-plus/package.json @@ -13,7 +13,6 @@ "engines": { "node": ">=0.6" }, - "readme": "# node-assert-plus\n\nThis library is a super small wrapper over node's assert module that has two\nthings: (1) the ability to disable assertions with the environment variable\nNODE_NDEBUG, and (2) some API wrappers for argument testing. Like\n`assert.string(myArg, 'myArg')`. As a simple example, most of my code looks\nlike this:\n\n var assert = require('assert-plus');\n\n function fooAccount(options, callback) {\n\t assert.object(options, 'options');\n\t\tassert.number(options.id, 'options.id);\n\t\tassert.bool(options.isManager, 'options.isManager');\n\t\tassert.string(options.name, 'options.name');\n\t\tassert.arrayOfString(options.email, 'options.email');\n\t\tassert.func(callback, 'callback');\n\n // Do stuff\n\t\tcallback(null, {});\n }\n\n# API\n\nAll methods that *aren't* part of node's core assert API are simply assumed to\ntake an argument, and then a string 'name' that's not a message; `AssertionError`\nwill be thrown if the assertion fails with a message like:\n\n AssertionError: foo (string) is required\n\tat test (/home/mark/work/foo/foo.js:3:9)\n\tat Object. (/home/mark/work/foo/foo.js:15:1)\n\tat Module._compile (module.js:446:26)\n\tat Object..js (module.js:464:10)\n\tat Module.load (module.js:353:31)\n\tat Function._load (module.js:311:12)\n\tat Array.0 (module.js:484:10)\n\tat EventEmitter._tickCallback (node.js:190:38)\n\nfrom:\n\n function test(foo) {\n\t assert.string(foo, 'foo');\n }\n\nThere you go. You can check that arrays are of a homogenous type with `Arrayof$Type`:\n\n function test(foo) {\n\t assert.arrayOfString(foo, 'foo');\n }\n\nYou can assert IFF an argument is not `undefined` (i.e., an optional arg):\n\n assert.optionalString(foo, 'foo');\n\nLastly, you can opt-out of assertion checking altogether by setting the\nenvironment variable `NODE_NDEBUG=1`. 
This is pseudo-useful if you have\nlots of assertions, and don't want to pay `typeof ()` taxes to v8 in\nproduction.\n\nThe complete list of APIs is:\n\n* assert.bool\n* assert.buffer\n* assert.func\n* assert.number\n* assert.object\n* assert.string\n* assert.arrayOfBool\n* assert.arrayOfFunc\n* assert.arrayOfNumber\n* assert.arrayOfObject\n* assert.arrayOfString\n* assert.optionalBool\n* assert.optionalBuffer\n* assert.optionalFunc\n* assert.optionalNumber\n* assert.optionalObject\n* assert.optionalString\n* assert.optionalArrayOfBool\n* assert.optionalArrayOfFunc\n* assert.optionalArrayOfNumber\n* assert.optionalArrayOfObject\n* assert.optionalArrayOfString\n* assert.AssertionError\n* assert.fail\n* assert.ok\n* assert.equal\n* assert.notEqual\n* assert.deepEqual\n* assert.notDeepEqual\n* assert.strictEqual\n* assert.notStrictEqual\n* assert.throws\n* assert.doesNotThrow\n* assert.ifError\n\n# Installation\n\n npm install assert-plus\n\n## License\n\nThe MIT License (MIT)\nCopyright (c) 2012 Mark Cavage\n\nPermission is hereby granted, free of charge, to any person obtaining a copy of\nthis software and associated documentation files (the \"Software\"), to deal in\nthe Software without restriction, including without limitation the rights to\nuse, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of\nthe Software, and to permit persons to whom the Software is furnished to do so,\nsubject to the following conditions:\n\nThe above copyright notice and this permission notice shall be included in all\ncopies or substantial portions of the Software.\n\nTHE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\nIMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\nFITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE\nAUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\nLIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,\nOUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE\nSOFTWARE.\n\n## Bugs\n\nSee .\n", "_id": "assert-plus@0.1.2", "dist": { "shasum": "d93ffdbb67ac5507779be316a7d65146417beef8", @@ -32,6 +31,8 @@ ], "directories": {}, "_shasum": "d93ffdbb67ac5507779be316a7d65146417beef8", + "_resolved": "https://registry.npmjs.org/assert-plus/-/assert-plus-0.1.2.tgz", "_from": "assert-plus@0.1.2", - "_resolved": "https://registry.npmjs.org/assert-plus/-/assert-plus-0.1.2.tgz" + "readme": "ERROR: No README data found!", + "scripts": {} } diff --git a/deps/npm/node_modules/request/node_modules/http-signature/node_modules/ctype/package.json b/deps/npm/node_modules/request/node_modules/http-signature/node_modules/ctype/package.json index 474e54daf22..5840d050c39 100644 --- a/deps/npm/node_modules/request/node_modules/http-signature/node_modules/ctype/package.json +++ b/deps/npm/node_modules/request/node_modules/http-signature/node_modules/ctype/package.json @@ -29,7 +29,8 @@ ], "directories": {}, "_shasum": "fe8091d468a373a0b0c9ff8bbfb3425c00973a1d", - "_from": "ctype@0.5.2", "_resolved": "https://registry.npmjs.org/ctype/-/ctype-0.5.2.tgz", - "readme": "ERROR: No README data found!" 
+ "_from": "ctype@0.5.2", + "readme": "ERROR: No README data found!", + "scripts": {} } diff --git a/deps/npm/node_modules/request/node_modules/http-signature/package.json b/deps/npm/node_modules/request/node_modules/http-signature/package.json index 6d646d4ad08..c6bfef9750f 100644 --- a/deps/npm/node_modules/request/node_modules/http-signature/package.json +++ b/deps/npm/node_modules/request/node_modules/http-signature/package.json @@ -32,7 +32,7 @@ "shasum": "1494e4f5000a83c0f11bcc12d6007c530cb99582", "tarball": "http://registry.npmjs.org/http-signature/-/http-signature-0.10.0.tgz" }, - "_from": "http-signature@~0.10.0", + "_from": "http-signature@>=0.10.0 <0.11.0", "_npmVersion": "1.2.18", "_npmUser": { "name": "mcavage", diff --git a/deps/npm/node_modules/request/node_modules/json-stringify-safe/package.json b/deps/npm/node_modules/request/node_modules/json-stringify-safe/package.json index 3ddf83680c1..90549cb6a6b 100644 --- a/deps/npm/node_modules/request/node_modules/json-stringify-safe/package.json +++ b/deps/npm/node_modules/request/node_modules/json-stringify-safe/package.json @@ -22,8 +22,6 @@ "url": "http://blog.izs.me" }, "license": "BSD", - "readmeFilename": "README.md", - "readme": "# json-stringify-safe\n\nLike JSON.stringify, but doesn't throw on circular references.\n\n## Usage\n\nTakes the same arguments as `JSON.stringify`.\n\n```javascript\nvar stringify = require('json-stringify-safe');\nvar circularObj = {};\ncircularObj.circularRef = circularObj;\ncircularObj.list = [ circularObj, circularObj ];\nconsole.log(stringify(circularObj, null, 2));\n```\n\nOutput:\n\n```json\n{\n \"circularRef\": \"[Circular]\",\n \"list\": [\n \"[Circular]\",\n \"[Circular]\"\n ]\n}\n```\n\n## Details\n\n```\nstringify(obj, serializer, indent, decycler)\n```\n\nThe first three arguments are the same as to JSON.stringify. The last\nis an argument that's only used when the object has been seen already.\n\nThe default `decycler` function returns the string `'[Circular]'`.\nIf, for example, you pass in `function(k,v){}` (return nothing) then it\nwill prune cycles. If you pass in `function(k,v){ return {foo: 'bar'}}`,\nthen cyclical objects will always be represented as `{\"foo\":\"bar\"}` in\nthe result.\n\n```\nstringify.getSerialize(serializer, decycler)\n```\n\nReturns a serializer that can be used elsewhere. 
This is the actual\nfunction that's passed to JSON.stringify.\n", "bugs": { "url": "https://github.com/isaacs/json-stringify-safe/issues" }, @@ -32,7 +30,7 @@ "shasum": "4c1f228b5050837eba9d21f50c2e6e320624566e", "tarball": "http://registry.npmjs.org/json-stringify-safe/-/json-stringify-safe-5.0.0.tgz" }, - "_from": "json-stringify-safe@~5.0.0", + "_from": "json-stringify-safe@>=5.0.0 <5.1.0", "_npmVersion": "1.3.6", "_npmUser": { "name": "isaacs", @@ -47,5 +45,6 @@ "directories": {}, "_shasum": "4c1f228b5050837eba9d21f50c2e6e320624566e", "_resolved": "https://registry.npmjs.org/json-stringify-safe/-/json-stringify-safe-5.0.0.tgz", + "readme": "ERROR: No README data found!", "homepage": "https://github.com/isaacs/json-stringify-safe" } diff --git a/deps/npm/node_modules/request/node_modules/mime-types/package.json b/deps/npm/node_modules/request/node_modules/mime-types/package.json index baa79a956c7..67432028b84 100644 --- a/deps/npm/node_modules/request/node_modules/mime-types/package.json +++ b/deps/npm/node_modules/request/node_modules/mime-types/package.json @@ -39,7 +39,7 @@ "homepage": "https://github.com/expressjs/mime-types", "_id": "mime-types@1.0.2", "_shasum": "995ae1392ab8affcbfcb2641dd054e943c0d5dce", - "_from": "mime-types@~1.0.1", + "_from": "mime-types@>=1.0.1 <1.1.0", "_npmVersion": "1.4.21", "_npmUser": { "name": "dougwilson", diff --git a/deps/npm/node_modules/request/node_modules/node-uuid/package.json b/deps/npm/node_modules/request/node_modules/node-uuid/package.json index ee93121fc32..bead110cc36 100644 --- a/deps/npm/node_modules/request/node_modules/node-uuid/package.json +++ b/deps/npm/node_modules/request/node_modules/node-uuid/package.json @@ -34,7 +34,7 @@ "shasum": "39aef510e5889a3dca9c895b506c73aae1bac048", "tarball": "http://registry.npmjs.org/node-uuid/-/node-uuid-1.4.1.tgz" }, - "_from": "node-uuid@~1.4.0", + "_from": "node-uuid@>=1.4.0 <1.5.0", "_npmVersion": "1.3.6", "_npmUser": { "name": "broofa", @@ -49,5 +49,6 @@ "directories": {}, "_shasum": "39aef510e5889a3dca9c895b506c73aae1bac048", "_resolved": "https://registry.npmjs.org/node-uuid/-/node-uuid-1.4.1.tgz", - "homepage": "https://github.com/broofa/node-uuid" + "homepage": "https://github.com/broofa/node-uuid", + "scripts": {} } diff --git a/deps/npm/node_modules/request/node_modules/oauth-sign/package.json b/deps/npm/node_modules/request/node_modules/oauth-sign/package.json index d0e82fecb07..d8765b6e9fa 100644 --- a/deps/npm/node_modules/request/node_modules/oauth-sign/package.json +++ b/deps/npm/node_modules/request/node_modules/oauth-sign/package.json @@ -30,7 +30,7 @@ "shasum": "f22956f31ea7151a821e5f2fb32c113cad8b9f69", "tarball": "http://registry.npmjs.org/oauth-sign/-/oauth-sign-0.4.0.tgz" }, - "_from": "oauth-sign@~0.4.0", + "_from": "oauth-sign@>=0.4.0 <0.5.0", "_npmVersion": "1.3.2", "_npmUser": { "name": "mikeal", diff --git a/deps/npm/node_modules/request/node_modules/qs/package.json b/deps/npm/node_modules/request/node_modules/qs/package.json index 7b1917023f9..d65387274f2 100755 --- a/deps/npm/node_modules/request/node_modules/qs/package.json +++ b/deps/npm/node_modules/request/node_modules/qs/package.json @@ -35,7 +35,7 @@ }, "_id": "qs@1.2.2", "_shasum": "19b57ff24dc2a99ce1f8bdf6afcda59f8ef61f88", - "_from": "qs@~1.2.0", + "_from": "qs@>=1.2.0 <1.3.0", "_npmVersion": "1.4.21", "_npmUser": { "name": "hueniverse", diff --git a/deps/npm/node_modules/request/node_modules/stringstream/package.json b/deps/npm/node_modules/request/node_modules/stringstream/package.json index 
diff --git a/deps/npm/node_modules/request/node_modules/tough-cookie/node_modules/punycode/LICENSE-MIT.txt b/deps/npm/node_modules/request/node_modules/tough-cookie/node_modules/punycode/LICENSE-MIT.txt
index 97067e54637..a41e0a7ef97 100644
--- a/deps/npm/node_modules/request/node_modules/tough-cookie/node_modules/punycode/LICENSE-MIT.txt
+++ b/deps/npm/node_modules/request/node_modules/tough-cookie/node_modules/punycode/LICENSE-MIT.txt
@@ -1,4 +1,4 @@
-Copyright Mathias Bynens <http://mathiasbynens.be/>
+Copyright Mathias Bynens <https://mathiasbynens.be/>
 
 Permission is hereby granted, free of charge, to any person obtaining
 a copy of this software and associated documentation files (the
diff --git a/deps/npm/node_modules/request/node_modules/tough-cookie/node_modules/punycode/README.md b/deps/npm/node_modules/request/node_modules/tough-cookie/node_modules/punycode/README.md
index 577f2c7a1ab..831e6379b5b 100644
--- a/deps/npm/node_modules/request/node_modules/tough-cookie/node_modules/punycode/README.md
+++ b/deps/npm/node_modules/request/node_modules/tough-cookie/node_modules/punycode/README.md
@@ -124,7 +124,7 @@ punycode.toASCII('джумла@джpумлатест.bрфa');
 
 #### `punycode.ucs2.decode(string)`
 
-Creates an array containing the numeric code point values of each Unicode symbol in the string. While [JavaScript uses UCS-2 internally](http://mathiasbynens.be/notes/javascript-encoding), this function will convert a pair of surrogate halves (each of which UCS-2 exposes as separate characters) into a single code point, matching UTF-16.
+Creates an array containing the numeric code point values of each Unicode symbol in the string. While [JavaScript uses UCS-2 internally](https://mathiasbynens.be/notes/javascript-encoding), this function will convert a pair of surrogate halves (each of which UCS-2 exposes as separate characters) into a single code point, matching UTF-16.
 
 ```js
 punycode.ucs2.decode('abc');
@@ -163,7 +163,7 @@ Feel free to fork if you see possible improvements!
 
 | [![twitter/mathias](https://gravatar.com/avatar/24e08a9ea84deb17ae121074d0f17125?s=70)](https://twitter.com/mathias "Follow @mathias on Twitter") |
 |---|
-| [Mathias Bynens](http://mathiasbynens.be/) |
+| [Mathias Bynens](https://mathiasbynens.be/) |
 
 ## Contributors
 
@@ -173,4 +173,4 @@ Feel free to fork if you see possible improvements!
 
 ## License
 
-Punycode.js is available under the [MIT](http://mths.be/mit) license.
+Punycode.js is available under the [MIT](https://mths.be/mit) license.
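The README hunk above documents `punycode.ucs2.decode` merging surrogate halves into single code points. A small illustration, assuming the module is resolvable via `require` (Node also bundles a compatible `punycode`):

```js
var punycode = require('punycode');

// 'abc' is three code units and three code points.
console.log(punycode.ucs2.decode('abc')); // [97, 98, 99]

// U+1D306 is stored as the surrogate pair \uD834\uDF06 (two UCS-2 code
// units) but decodes to a single code point, matching UTF-16.
console.log(punycode.ucs2.decode('\uD834\uDF06')); // [119558]
console.log('\uD834\uDF06'.length); // 2 -- JS string length counts code units
```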
diff --git a/deps/npm/node_modules/request/node_modules/tough-cookie/node_modules/punycode/package.json b/deps/npm/node_modules/request/node_modules/tough-cookie/node_modules/punycode/package.json
index ab13fc836f4..4f627138189 100644
--- a/deps/npm/node_modules/request/node_modules/tough-cookie/node_modules/punycode/package.json
+++ b/deps/npm/node_modules/request/node_modules/tough-cookie/node_modules/punycode/package.json
@@ -1,8 +1,8 @@
 {
   "name": "punycode",
-  "version": "1.3.1",
+  "version": "1.3.2",
   "description": "A robust Punycode converter that fully complies to RFC 3492 and RFC 5891, and works on nearly all JavaScript platforms.",
-  "homepage": "http://mths.be/punycode",
+  "homepage": "https://mths.be/punycode",
   "main": "punycode.js",
   "keywords": [
     "punycode",
@@ -13,20 +13,15 @@
     "url",
     "domain"
   ],
-  "licenses": [
-    {
-      "type": "MIT",
-      "url": "http://mths.be/mit"
-    }
-  ],
+  "license": "MIT",
   "author": {
     "name": "Mathias Bynens",
-    "url": "http://mathiasbynens.be/"
+    "url": "https://mathiasbynens.be/"
   },
   "contributors": [
     {
       "name": "Mathias Bynens",
-      "url": "http://mathiasbynens.be/"
+      "url": "https://mathiasbynens.be/"
     },
     {
       "name": "John-David Dalton",
@@ -44,9 +39,6 @@
     "LICENSE-MIT.txt",
     "punycode.js"
   ],
-  "directories": {
-    "test": "tests"
-  },
   "scripts": {
     "test": "node tests/tests.js"
   },
@@ -60,10 +52,11 @@
     "qunitjs": "~1.11.0",
     "requirejs": "^2.1.14"
   },
-  "_id": "punycode@1.3.1",
-  "_shasum": "710afe5123c20a1530b712e3e682b9118fe8058e",
+  "gitHead": "38c8d3131a82567bfef18da09f7f4db68c84f8a3",
+  "_id": "punycode@1.3.2",
+  "_shasum": "9653a036fb7c1ee42342f2325cceefea3926c48d",
   "_from": "punycode@>=0.2.0",
-  "_npmVersion": "1.4.9",
+  "_npmVersion": "1.4.28",
   "_npmUser": {
     "name": "mathias",
     "email": "mathias@qiwi.be"
@@ -79,9 +72,10 @@
     }
   ],
   "dist": {
-    "shasum": "710afe5123c20a1530b712e3e682b9118fe8058e",
-    "tarball": "http://registry.npmjs.org/punycode/-/punycode-1.3.1.tgz"
+    "shasum": "9653a036fb7c1ee42342f2325cceefea3926c48d",
+    "tarball": "http://registry.npmjs.org/punycode/-/punycode-1.3.2.tgz"
   },
-  "_resolved": "https://registry.npmjs.org/punycode/-/punycode-1.3.1.tgz",
+  "directories": {},
+  "_resolved": "https://registry.npmjs.org/punycode/-/punycode-1.3.2.tgz",
   "readme": "ERROR: No README data found!"
 }
diff --git a/deps/npm/node_modules/request/node_modules/tough-cookie/node_modules/punycode/punycode.js b/deps/npm/node_modules/request/node_modules/tough-cookie/node_modules/punycode/punycode.js
index 6ab1df3a039..ac685973833 100644
--- a/deps/npm/node_modules/request/node_modules/tough-cookie/node_modules/punycode/punycode.js
+++ b/deps/npm/node_modules/request/node_modules/tough-cookie/node_modules/punycode/punycode.js
@@ -1,4 +1,4 @@
-/*! http://mths.be/punycode v1.3.1 by @mathias */
+/*! https://mths.be/punycode v1.3.2 by @mathias */
 ;(function(root) {
 
 	/** Detect free variables */
@@ -103,7 +103,9 @@
 			result = parts[0] + '@';
 			string = parts[1];
 		}
-		var labels = string.split(regexSeparators);
+		// Avoid `split(regex)` for IE8 compatibility. See #17.
+		string = string.replace(regexSeparators, '\x2E');
+		var labels = string.split('.');
 		var encoded = map(labels, fn).join('.');
 		return result + encoded;
 	}
@@ -115,7 +117,7 @@
 	 * UCS-2 exposes as separate characters) into a single code point,
 	 * matching UTF-16.
 	 * @see `punycode.ucs2.encode`
-	 * @see <http://mathiasbynens.be/notes/javascript-encoding>
+	 * @see <https://mathiasbynens.be/notes/javascript-encoding>
 	 * @memberOf punycode.ucs2
 	 * @name decode
 	 * @param {String} string The Unicode input string (UCS-2).
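The `mapDomain` change above works around an old-IE quirk: `String#split` with a regex argument drops or mangles results on IE8, so punycode 1.3.2 normalizes every separator to an ASCII dot with `replace` first and then splits on the literal dot. A sketch of the pattern (the separator regex is the one defined in punycode.js):

```js
// RFC 3490 dot separators: full stop, ideographic full stop,
// fullwidth full stop, halfwidth ideographic full stop.
var regexSeparators = /[\x2E\u3002\uFF0E\uFF61]/g;

function splitDomain(string) {
  // Replace-then-split behaves identically everywhere, including IE8,
  // whereas string.split(regexSeparators) does not.
  return string.replace(regexSeparators, '\x2E').split('.');
}

console.log(splitDomain('example\u3002com')); // ['example', 'com']
```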
@@ -484,11 +486,11 @@
 	 * @memberOf punycode
 	 * @type String
 	 */
-	'version': '1.3.1',
+	'version': '1.3.2',
 
 	/**
 	 * An object of methods to convert from JavaScript's internal character
 	 * representation (UCS-2) to Unicode code points, and back.
-	 * @see <http://mathiasbynens.be/notes/javascript-encoding>
+	 * @see <https://mathiasbynens.be/notes/javascript-encoding>
 	 * @memberOf punycode
 	 * @type Object
 	 */
diff --git a/deps/npm/node_modules/request/node_modules/tunnel-agent/package.json b/deps/npm/node_modules/request/node_modules/tunnel-agent/package.json
index 59c7f5cb509..5b1ebba1507 100644
--- a/deps/npm/node_modules/request/node_modules/tunnel-agent/package.json
+++ b/deps/npm/node_modules/request/node_modules/tunnel-agent/package.json
@@ -26,7 +26,7 @@
     "shasum": "b1184e312ffbcf70b3b4c78e8c219de7ebb1c550",
     "tarball": "http://registry.npmjs.org/tunnel-agent/-/tunnel-agent-0.4.0.tgz"
   },
-  "_from": "tunnel-agent@~0.4.0",
+  "_from": "tunnel-agent@>=0.4.0 <0.5.0",
   "_npmVersion": "1.3.21",
   "_npmUser": {
     "name": "mikeal",
@@ -41,5 +41,6 @@
   "directories": {},
   "_shasum": "b1184e312ffbcf70b3b4c78e8c219de7ebb1c550",
   "_resolved": "https://registry.npmjs.org/tunnel-agent/-/tunnel-agent-0.4.0.tgz",
-  "readme": "ERROR: No README data found!"
+  "readme": "ERROR: No README data found!",
+  "scripts": {}
 }
diff --git a/deps/npm/node_modules/request/package.json b/deps/npm/node_modules/request/package.json
index 0e94d8d149b..d68d512fd1d 100755
--- a/deps/npm/node_modules/request/package.json
+++ b/deps/npm/node_modules/request/package.json
@@ -7,7 +7,7 @@
     "util",
     "utility"
   ],
-  "version": "2.42.0",
+  "version": "2.46.0",
   "author": {
     "name": "Mikeal Rogers",
     "email": "mikeal.rogers@gmail.com"
@@ -20,30 +20,21 @@
     "url": "http://github.com/mikeal/request/issues"
   },
   "license": "Apache-2.0",
-  "engines": [
-    "node >= 0.8.0"
-  ],
+  "engines": {
+    "node": ">=0.8.0"
+  },
   "main": "index.js",
   "dependencies": {
     "bl": "~0.9.0",
     "caseless": "~0.6.0",
     "forever-agent": "~0.5.0",
-    "qs": "~1.2.0",
+    "form-data": "~0.1.0",
     "json-stringify-safe": "~5.0.0",
     "mime-types": "~1.0.1",
     "node-uuid": "~1.4.0",
+    "qs": "~1.2.0",
     "tunnel-agent": "~0.4.0",
     "tough-cookie": ">=0.12.0",
-    "form-data": "~0.1.0",
-    "http-signature": "~0.10.0",
-    "oauth-sign": "~0.4.0",
-    "hawk": "1.1.1",
-    "aws-sign2": "~0.5.0",
-    "stringstream": "~0.0.4"
-  },
-  "optionalDependencies": {
-    "tough-cookie": ">=0.12.0",
-    "form-data": "~0.1.0",
     "http-signature": "~0.10.0",
     "oauth-sign": "~0.4.0",
     "hawk": "1.1.1",
@@ -51,31 +42,40 @@
     "stringstream": "~0.0.4"
   },
   "scripts": {
-    "test": "node tests/run.js"
+    "test": "npm run lint && node node_modules/.bin/taper tests/test-*.js",
+    "lint": "node node_modules/.bin/eslint lib/ *.js tests/ && echo Lint passed."
   },
   "devDependencies": {
-    "rimraf": "~2.2.8"
+    "eslint": "0.5.1",
+    "rimraf": "~2.2.8",
+    "tape": "~3.0.0",
+    "taper": "~0.3.0"
   },
+  "gitHead": "7cdd75ec184868bba3be88a780bfb6e10fe33be4",
   "homepage": "https://github.com/mikeal/request",
-  "_id": "request@2.42.0",
-  "_shasum": "572bd0148938564040ac7ab148b96423a063304a",
-  "_from": "request@^2.42.0",
-  "_npmVersion": "1.4.9",
+  "_id": "request@2.46.0",
+  "_shasum": "359195d52eaf720bc69742579d04ad6d265a8274",
+  "_from": "request@>=2.46.0 <2.47.0",
+  "_npmVersion": "1.4.14",
   "_npmUser": {
-    "name": "mikeal",
-    "email": "mikeal.rogers@gmail.com"
+    "name": "nylen",
+    "email": "jnylen@gmail.com"
  },
   "maintainers": [
     {
       "name": "mikeal",
       "email": "mikeal.rogers@gmail.com"
+    },
+    {
+      "name": "nylen",
+      "email": "jnylen@gmail.com"
     }
   ],
   "dist": {
-    "shasum": "572bd0148938564040ac7ab148b96423a063304a",
-    "tarball": "http://registry.npmjs.org/request/-/request-2.42.0.tgz"
+    "shasum": "359195d52eaf720bc69742579d04ad6d265a8274",
+    "tarball": "http://registry.npmjs.org/request/-/request-2.46.0.tgz"
   },
   "directories": {},
-  "_resolved": "https://registry.npmjs.org/request/-/request-2.42.0.tgz",
+  "_resolved": "https://registry.npmjs.org/request/-/request-2.46.0.tgz",
   "readme": "ERROR: No README data found!"
 }
diff --git a/deps/npm/node_modules/request/release.sh b/deps/npm/node_modules/request/release.sh
new file mode 100755
index 00000000000..05e7767fc10
--- /dev/null
+++ b/deps/npm/node_modules/request/release.sh
@@ -0,0 +1,3 @@
+#!/bin/sh
+
+npm version minor && npm publish && npm version patch && git push --tags && git push origin master
diff --git a/deps/npm/node_modules/request/request.js b/deps/npm/node_modules/request/request.js
index e528dd5ff43..466d9165588 100644
--- a/deps/npm/node_modules/request/request.js
+++ b/deps/npm/node_modules/request/request.js
@@ -1,47 +1,41 @@
-var optional = require('./lib/optional')
-  , http = require('http')
-  , https = optional('https')
-  , tls = optional('tls')
+'use strict'
+
+var http = require('http')
+  , https = require('https')
   , url = require('url')
   , util = require('util')
   , stream = require('stream')
   , qs = require('qs')
   , querystring = require('querystring')
-  , crypto = require('crypto')
   , zlib = require('zlib')
-
+  , helpers = require('./lib/helpers')
   , bl = require('bl')
-  , oauth = optional('oauth-sign')
-  , hawk = optional('hawk')
-  , aws = optional('aws-sign2')
-  , httpSignature = optional('http-signature')
+  , oauth = require('oauth-sign')
+  , hawk = require('hawk')
+  , aws = require('aws-sign2')
+  , httpSignature = require('http-signature')
   , uuid = require('node-uuid')
   , mime = require('mime-types')
   , tunnel = require('tunnel-agent')
-  , _safeStringify = require('json-stringify-safe')
-  , stringstream = optional('stringstream')
+  , stringstream = require('stringstream')
   , caseless = require('caseless')
   , ForeverAgent = require('forever-agent')
-  , FormData = optional('form-data')
-
+  , FormData = require('form-data')
   , cookies = require('./lib/cookies')
-  , globalCookieJar = cookies.jar()
-
   , copy = require('./lib/copy')
   , debug = require('./lib/debug')
   , net = require('net')
-  ;
 
-function safeStringify (obj) {
-  var ret
-  try { ret = JSON.stringify(obj) }
-  catch (e) { ret = _safeStringify(obj) }
-  return ret
-}
+var safeStringify = helpers.safeStringify
+  , md5 = helpers.md5
+  , isReadStream = helpers.isReadStream
+  , toBase64 = helpers.toBase64
+  , defer = helpers.defer
+  , globalCookieJar = cookies.jar()
 
 var globalPool = {}
-var isUrl = /^https?:|^unix:/
+  , isUrl = /^https?:/
 
 var defaultProxyHeaderWhiteList = [
   'accept',
@@ -70,148 +64,272 @@ var defaultProxyHeaderWhiteList = [
   'via'
 ]
 
-function isReadStream (rs) {
-  return rs.readable && rs.path && rs.mode;
+function filterForNonReserved(reserved, options) {
+  // Filter out properties that are not reserved.
+  // Reserved values are passed in at call site.
+
+  var object = {}
+  for (var i in options) {
+    var notReserved = (reserved.indexOf(i) === -1)
+    if (notReserved) {
+      object[i] = options[i]
+    }
+  }
+  return object
+}
+
+function filterOutReservedFunctions(reserved, options) {
+  // Filter out properties that are functions and are reserved.
+  // Reserved values are passed in at call site.
+
+  var object = {}
+  for (var i in options) {
+    var isReserved = !(reserved.indexOf(i) === -1)
+    var isFunction = (typeof options[i] === 'function')
+    if (!(isReserved && isFunction)) {
+      object[i] = options[i]
+    }
+  }
+  return object
+
+}
+
+function constructProxyHost(uriObject) {
+  var port = uriObject.port
+    , protocol = uriObject.protocol
+    , proxyHost = uriObject.hostname + ':'
+
+  if (port) {
+    proxyHost += port
+  } else if (protocol === 'https:') {
+    proxyHost += '443'
+  } else {
+    proxyHost += '80'
+  }
+
+  return proxyHost
+}
+
+function constructProxyHeaderWhiteList(headers, proxyHeaderWhiteList) {
+  return Object.keys(headers)
+    .filter(function (header) {
+      return proxyHeaderWhiteList.indexOf(header.toLowerCase()) !== -1
+    })
+    .reduce(function (set, header) {
+      set[header] = headers[header]
+      return set
+    }, {})
+}
+
+function construcTunnelOptions(request) {
+  var proxy = request.proxy
+  var proxyHeaders = request.proxyHeaders
+  var proxyAuth
+
+  if (proxy.auth) {
+    proxyAuth = proxy.auth
+  }
+
+  if (!proxy.auth && request.proxyAuthorization) {
+    proxyHeaders['Proxy-Authorization'] = request.proxyAuthorization
+  }
+
+  var tunnelOptions = {
+    proxy: {
+      host: proxy.hostname,
+      port: +proxy.port,
+      proxyAuth: proxyAuth,
+      headers: proxyHeaders
+    },
+    rejectUnauthorized: request.rejectUnauthorized,
+    headers: request.headers,
+    ca: request.ca,
+    cert: request.cert,
+    key: request.key
+  }
+
+  return tunnelOptions
 }
 
-function toBase64 (str) {
-  return (new Buffer(str || "", "ascii")).toString("base64")
+function constructTunnelFnName(uri, proxy) {
+  var uriProtocol = (uri.protocol === 'https:' ? 'https' : 'http')
+  var proxyProtocol = (proxy.protocol === 'https:' ? 'Https' : 'Http')
+  return [uriProtocol, proxyProtocol].join('Over')
 }
 
-function md5 (str) {
-  return crypto.createHash('md5').update(str).digest('hex')
+function getTunnelFn(request) {
+  var uri = request.uri
+  var proxy = request.proxy
+  var tunnelFnName = constructTunnelFnName(uri, proxy)
+  return tunnel[tunnelFnName]
 }
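`constructTunnelFnName` above simply selects one of tunnel-agent's four exported helpers by combining the destination and proxy protocols. A hedged sketch of the mapping (the export names are tunnel-agent's real API):

```js
// The four combinations resolve to tunnel-agent exports:
//   http  over http  -> tunnel.httpOverHttp
//   http  over https -> tunnel.httpOverHttps
//   https over http  -> tunnel.httpsOverHttp
//   https over https -> tunnel.httpsOverHttps
function tunnelFnName(uriProtocol, proxyProtocol) {
  var left = uriProtocol === 'https:' ? 'https' : 'http';
  var right = proxyProtocol === 'https:' ? 'Https' : 'Http';
  return left + 'Over' + right;
}

console.log(tunnelFnName('https:', 'http:')); // 'httpsOverHttp'
```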
+
+// Decide the proper request proxy to use based on the request URI object and the
+// environmental variables (NO_PROXY, HTTP_PROXY, etc.)
+function getProxyFromURI(uri) {
+  // respect NO_PROXY environment variables (see: http://lynx.isc.org/current/breakout/lynx_help/keystrokes/environments.html)
+  var noProxy = process.env.NO_PROXY || process.env.no_proxy || null
+
+  // easy case first - if NO_PROXY is '*'
+  if (noProxy === '*') {
+    return null
+  }
+
+  // otherwise, parse the noProxy value to see if it applies to the URL
+  if (noProxy !== null) {
+    var noProxyItem, hostname, port, noProxyItemParts, noProxyHost, noProxyPort, noProxyList
+
+    // canonicalize the hostname, so that 'oogle.com' won't match 'google.com'
+    hostname = uri.hostname.replace(/^\.*/, '.').toLowerCase()
+    noProxyList = noProxy.split(',')
+
+    for (var i = 0, len = noProxyList.length; i < len; i++) {
+      noProxyItem = noProxyList[i].trim().toLowerCase()
+
+      // no_proxy can be granular at the port level, which complicates things a bit.
+      if (noProxyItem.indexOf(':') > -1) {
+        noProxyItemParts = noProxyItem.split(':', 2)
+        noProxyHost = noProxyItemParts[0].replace(/^\.*/, '.')
+        noProxyPort = noProxyItemParts[1]
+        port = uri.port || (uri.protocol === 'https:' ? '443' : '80')
+
+        // we've found a match - ports are same and host ends with no_proxy entry.
+        if (port === noProxyPort && hostname.indexOf(noProxyHost) === hostname.length - noProxyHost.length) {
+          return null
+        }
+      } else {
+        noProxyItem = noProxyItem.replace(/^\.*/, '.')
+        if (hostname.indexOf(noProxyItem) === hostname.length - noProxyItem.length) {
+          return null
+        }
+      }
+    }
+  }
+
+  // check for HTTP(S)_PROXY environment variables
+  if (uri.protocol === 'http:') {
+    return process.env.HTTP_PROXY || process.env.http_proxy || null
+  } else if (uri.protocol === 'https:') {
+    return process.env.HTTPS_PROXY || process.env.https_proxy || process.env.HTTP_PROXY || process.env.http_proxy || null
+  }
+
+  // return null if all else fails (What uri protocol are you using then?)
+  return null
+}
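The suffix-matching rule `getProxyFromURI` applies to each `NO_PROXY` entry can be exercised in isolation. A standalone reimplementation for illustration only (not an export of the module), mirroring the canonicalization above:

```js
// A leading dot is forced onto both sides so that lookalike hosts such
// as 'evilgoogle.com' do not match a NO_PROXY entry of 'google.com'.
function hostMatchesNoProxyEntry(hostname, entry) {
  hostname = hostname.replace(/^\.*/, '.').toLowerCase();
  entry = entry.trim().toLowerCase().replace(/^\.*/, '.');
  return hostname.indexOf(entry) === hostname.length - entry.length;
}

console.log(hostMatchesNoProxyEntry('google.com', 'google.com'));     // true
console.log(hostMatchesNoProxyEntry('www.google.com', 'google.com')); // true
console.log(hostMatchesNoProxyEntry('evilgoogle.com', 'google.com')); // false
```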
+
+// Function for properly handling a connection error
+function connectionErrorHandler(error) {
+  var socket = this
+  if (socket.res) {
+    if (socket.res.request) {
+      socket.res.request.emit('error', error)
+    } else {
+      socket.res.emit('error', error)
+    }
+  } else {
+    socket._httpMessage.emit('error', error)
+  }
 }
 
 // Return a simpler request object to allow serialization
 function requestToJSON() {
+  var self = this
   return {
-    uri: this.uri,
-    method: this.method,
-    headers: this.headers
+    uri: self.uri,
+    method: self.method,
+    headers: self.headers
   }
 }
 
 // Return a simpler response object to allow serialization
 function responseToJSON() {
+  var self = this
   return {
-    statusCode: this.statusCode,
-    body: this.body,
-    headers: this.headers,
-    request: requestToJSON.call(this.request)
+    statusCode: self.statusCode,
+    body: self.body,
+    headers: self.headers,
+    request: requestToJSON.call(self.request)
   }
 }
 
 function Request (options) {
-  stream.Stream.call(this)
-  this.readable = true
-  this.writable = true
+  // if tunnel property of options was not given default to false
+  // if given the method property in options, set property explicitMethod to true
 
-  if (typeof options === 'string') {
-    options = {uri:options}
-  }
+  // extend the Request instance with any non-reserved properties
+  // remove any reserved functions from the options object
+  // set Request instance to be readable and writable
+  // call init
 
+  var self = this
+  stream.Stream.call(self)
   var reserved = Object.keys(Request.prototype)
-  for (var i in options) {
-    if (reserved.indexOf(i) === -1) {
-      this[i] = options[i]
-    } else {
-      if (typeof options[i] === 'function') {
-        delete options[i]
-      }
-    }
-  }
+  var nonReserved = filterForNonReserved(reserved, options)
 
+  stream.Stream.call(self)
+  util._extend(self, nonReserved)
+  options = filterOutReservedFunctions(reserved, options)
+
+  self.readable = true
+  self.writable = true
+  if (typeof options.tunnel === 'undefined') {
+    options.tunnel = false
+  }
   if (options.method) {
-    this.explicitMethod = true
+    self.explicitMethod = true
   }
-
-  // Assume that we're not going to tunnel unless we need to
-  if (typeof options.tunnel === 'undefined') options.tunnel = false
-
-  this.init(options)
+  self.canTunnel = options.tunnel !== false && tunnel
+  self.init(options)
 }
 
-util.inherits(Request, stream.Stream)
+util.inherits(Request, stream.Stream)
 
-// Set up the tunneling agent if necessary
 Request.prototype.setupTunnel = function () {
-  var self = this
-  if (typeof self.proxy == 'string') self.proxy = url.parse(self.proxy)
+  // Set up the tunneling agent if necessary
+  // Only send the proxy whitelisted header names.
+  // Turn on tunneling for the rest of request.
 
-  if (!self.proxy) return false
+  var self = this
 
-  // Don't need to use a tunneling proxy
-  if (!self.tunnel && self.uri.protocol !== 'https:')
-    return
+  if (typeof self.proxy === 'string') {
+    self.proxy = url.parse(self.proxy)
+  }
 
-  // do the HTTP CONNECT dance using koichik/node-tunnel
+  if (!self.proxy) {
+    return false
+  }
 
-  // The host to tell the proxy to CONNECT to
-  var proxyHost = self.uri.hostname + ':'
-  if (self.uri.port)
-    proxyHost += self.uri.port
-  else if (self.uri.protocol === 'https:')
-    proxyHost += '443'
-  else
-    proxyHost += '80'
+  if (!self.tunnel && self.uri.protocol !== 'https:') {
+    return false
+  }
 
-  if (!self.proxyHeaderWhiteList)
+  if (!self.proxyHeaderWhiteList) {
     self.proxyHeaderWhiteList = defaultProxyHeaderWhiteList
+  }
 
-  // Only send the proxy the whitelisted header names.
-  var proxyHeaders = Object.keys(self.headers).filter(function (h) {
-    return self.proxyHeaderWhiteList.indexOf(h.toLowerCase()) !== -1
-  }).reduce(function (set, h) {
-    set[h] = self.headers[h]
-    return set
-  }, {})
-
-  proxyHeaders.host = proxyHost
-
-  var tunnelFnName =
-    (self.uri.protocol === 'https:' ? 'https' : 'http') +
-    'Over' +
-    (self.proxy.protocol === 'https:' ? 'Https' : 'Http')
+  var proxyHost = constructProxyHost(self.uri)
+  self.proxyHeaders = constructProxyHeaderWhiteList(self.headers, self.proxyHeaderWhiteList)
+  self.proxyHeaders.host = proxyHost
 
-  var tunnelFn = tunnel[tunnelFnName]
-
-  var proxyAuth
-  if (self.proxy.auth)
-    proxyAuth = self.proxy.auth
-  else if (self.proxyAuthorization)
-    proxyHeaders['Proxy-Authorization'] = self.proxyAuthorization
-
-  var tunnelOptions = { proxy: { host: self.proxy.hostname
-                               , port: +self.proxy.port
-                               , proxyAuth: proxyAuth
-                               , headers: proxyHeaders }
-                      , rejectUnauthorized: self.rejectUnauthorized
-                      , headers: self.headers
-                      , ca: self.ca
-                      , cert: self.cert
-                      , key: self.key}
+  var tunnelFn = getTunnelFn(self)
+  var tunnelOptions = construcTunnelOptions(self)
 
   self.agent = tunnelFn(tunnelOptions)
-
-  // At this point, we know that the proxy will support tunneling
-  // (or fail miserably), so we're going to tunnel all proxied requests
-  // from here on out.
   self.tunnel = true
-
   return true
 }
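The rewritten constructor above splits option handling across the two helpers defined earlier: keys that would shadow a `Request.prototype` member are kept off the instance, and reserved keys holding functions are stripped from `options` before `init()` runs. A toy illustration of the first half, with a made-up reserved list standing in for `Object.keys(Request.prototype)`:

```js
function filterForNonReserved(reserved, options) {
  var object = {};
  for (var i in options) {
    if (reserved.indexOf(i) === -1) {
      object[i] = options[i];
    }
  }
  return object;
}

// Hypothetical reserved names; the real list comes from the prototype.
var reserved = ['init', 'abort'];
var options = { uri: 'http://example.com', abort: function () {}, timeout: 100 };

// Only non-reserved keys would be copied onto the instance.
console.log(Object.keys(filterForNonReserved(reserved, options)));
// -> [ 'uri', 'timeout' ]
```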
-
-
-
 Request.prototype.init = function (options) {
   // init() contains all the code to setup the request object.
   // the actual outgoing request is not started until start() is called
   // this function is called from both the constructor and on redirect.
   var self = this
-  if (!options) options = {}
+  if (!options) {
+    options = {}
+  }
+
   self.headers = self.headers ? copy(self.headers) : {}
-  caseless.httpify(self, self.headers || {})
+  caseless.httpify(self, self.headers)
 
   // Never send proxy-auth to the endpoint!
   if (self.hasHeader('proxy-authorization')) {
@@ -219,11 +337,19 @@ Request.prototype.init = function (options) {
     self.removeHeader('proxy-authorization')
   }
 
-  if (!self.method) self.method = options.method || 'GET'
+  if (!self.method) {
+    self.method = options.method || 'GET'
+  }
   self.localAddress = options.localAddress
 
+  if (!self.qsLib) {
+    self.qsLib = (options.useQuerystring ? querystring : qs)
+  }
+
   debug(options)
-  if (!self.pool && self.pool !== false) self.pool = globalPool
+  if (!self.pool && self.pool !== false) {
+    self.pool = globalPool
+  }
   self.dests = self.dests || []
   self.__isRequestRequest = true
@@ -231,7 +357,9 @@ Request.prototype.init = function (options) {
   if (!self._callback && self.callback) {
     self._callback = self.callback
     self.callback = function () {
-      if (self._callbackCalled) return // Print a warning maybe?
+      if (self._callbackCalled) {
+        return // Print a warning maybe?
+      }
       self._callbackCalled = true
       self._callback.apply(self, arguments)
     }
@@ -239,17 +367,40 @@ Request.prototype.init = function (options) {
     self.on('complete', self.callback.bind(self, null))
   }
 
-  if (self.url && !self.uri) {
-    // People use this property instead all the time so why not just support it.
+  // People use this property instead all the time, so support it
+  if (!self.uri && self.url) {
     self.uri = self.url
     delete self.url
   }
 
+  // A URI is needed by this point, throw if we haven't been able to get one
   if (!self.uri) {
-    // this will throw if unhandled but is handleable when in a redirect
-    return self.emit('error', new Error("options.uri is a required argument"))
-  } else {
-    if (typeof self.uri == "string") self.uri = url.parse(self.uri)
+    return self.emit('error', new Error('options.uri is a required argument'))
+  }
+
+  // If a string URI/URL was given, parse it into a URL object
+  if(typeof self.uri === 'string') {
+    self.uri = url.parse(self.uri)
+  }
+
+  // DEPRECATED: Warning for users of the old Unix Sockets URL Scheme
+  if (self.uri.protocol === 'unix:') {
+    return self.emit('error', new Error('`unix://` URL scheme is no longer supported. Please use the format `http://unix:SOCKET:PATH`'))
+  }
+
+  // Support Unix Sockets
+  if(self.uri.host === 'unix') {
+    // Get the socket & request paths from the URL
+    var unixParts = self.uri.path.split(':')
+      , host = unixParts[0]
+      , path = unixParts[1]
+    // Apply unix properties to request
+    self.socketPath = host
+    self.uri.pathname = path
+    self.uri.path = path
+    self.uri.host = host
+    self.uri.hostname = host
+    self.uri.isUnix = true
   }
 
   if (self.strictSSL === false) {
@@ -257,13 +408,7 @@ Request.prototype.init = function (options) {
   }
 
   if(!self.hasOwnProperty('proxy')) {
-    // check for HTTP(S)_PROXY environment variables
-    if(self.uri.protocol == "http:") {
-      self.proxy = process.env.HTTP_PROXY || process.env.http_proxy || null;
-    } else if(self.uri.protocol == "https:") {
-      self.proxy = process.env.HTTPS_PROXY || process.env.https_proxy ||
-                   process.env.HTTP_PROXY || process.env.http_proxy || null;
-    }
+    self.proxy = getProxyFromURI(self.uri)
   }
 
   // Pass in `tunnel:true` to *always* tunnel through proxies
@@ -274,8 +419,8 @@ Request.prototype.init = function (options) {
 
   if (!self.uri.pathname) {self.uri.pathname = '/'}
 
-  if (!self.uri.host && !self.protocol=='unix:') {
-    // Invalid URI: it may generate lot of bad errors, like "TypeError: Cannot call method 'indexOf' of undefined" in CookieJar
+  if (!(self.uri.host || (self.uri.hostname && self.uri.port)) && !self.uri.isUnix) {
+    // Invalid URI: it may generate lot of bad errors, like 'TypeError: Cannot call method `indexOf` of undefined' in CookieJar
     // Detect and reject it as soon as possible
     var faultyUri = url.format(self.uri)
     var message = 'Invalid URI "' + faultyUri + '"'
@@ -285,27 +430,29 @@ Request.prototype.init = function (options) {
       // they should be warned that it can be caused by a redirection (can save some hair)
       message += '. This can be caused by a crappy redirection.'
    }
-    self.emit('error', new Error(message))
-    return // This error was fatal
+    // This error was fatal
+    return self.emit('error', new Error(message))
   }
 
   self._redirectsFollowed = self._redirectsFollowed || 0
   self.maxRedirects = (self.maxRedirects !== undefined) ? self.maxRedirects : 10
   self.allowRedirect = (typeof self.followRedirect === 'function') ? self.followRedirect : function(response) {
-    return true;
-  };
-  self.followRedirect = (self.followRedirect !== undefined) ? !!self.followRedirect : true
+    return true
+  }
+  self.followRedirects = (self.followRedirect !== undefined) ? !!self.followRedirect : true
   self.followAllRedirects = (self.followAllRedirects !== undefined) ? self.followAllRedirects : false
-  if (self.followRedirect || self.followAllRedirects)
+  if (self.followRedirects || self.followAllRedirects) {
     self.redirects = self.redirects || []
+  }
 
   self.setHost = false
   if (!self.hasHeader('host')) {
     self.setHeader('host', self.uri.hostname)
     if (self.uri.port) {
       if ( !(self.uri.port === 80 && self.uri.protocol === 'http:') &&
-           !(self.uri.port === 443 && self.uri.protocol === 'https:') )
-        self.setHeader('host', self.getHeader('host') + (':'+self.uri.port) )
+           !(self.uri.port === 443 && self.uri.protocol === 'https:') ) {
+        self.setHeader('host', self.getHeader('host') + (':' + self.uri.port) )
+      }
     }
     self.setHost = true
   }
@@ -313,8 +460,8 @@ Request.prototype.init = function (options) {
   self.jar(self._jar || options.jar)
 
   if (!self.uri.port) {
-    if (self.uri.protocol == 'http:') {self.uri.port = 80}
-    else if (self.uri.protocol == 'https:') {self.uri.port = 443}
+    if (self.uri.protocol === 'http:') {self.uri.port = 80}
+    else if (self.uri.protocol === 'https:') {self.uri.port = 443}
   }
 
   if (self.proxy && !self.tunnel) {
@@ -325,317 +472,253 @@ Request.prototype.init = function (options) {
     self.host = self.uri.hostname
   }
 
-  self.clientErrorHandler = function (error) {
-    if (self._aborted) return
-    if (self.req && self.req._reusedSocket && error.code === 'ECONNRESET'
-        && self.agent.addRequestNoreuse) {
-      self.agent = { addRequest: self.agent.addRequestNoreuse.bind(self.agent) }
-      self.start()
-      self.req.end()
-      return
-    }
-    if (self.timeout && self.timeoutTimer) {
-      clearTimeout(self.timeoutTimer)
-      self.timeoutTimer = null
-    }
-    self.emit('error', error)
+  if (options.form) {
+    self.form(options.form)
   }
 
-  self._parserErrorHandler = function (error) {
-    if (this.res) {
-      if (this.res.request) {
-        this.res.request.emit('error', error)
+  if (options.formData) {
+    var formData = options.formData
+    var requestForm = self.form()
+    var appendFormValue = function (key, value) {
+      if (value.hasOwnProperty('value') && value.hasOwnProperty('options')) {
+        requestForm.append(key, value.value, value.options)
       } else {
-        this.res.emit('error', error)
+        requestForm.append(key, value)
+      }
+    }
+    for (var formKey in formData) {
+      if (formData.hasOwnProperty(formKey)) {
+        var formValue = formData[formKey]
+        if (formValue instanceof Array) {
+          for (var j = 0; j < formValue.length; j++) {
+            appendFormValue(formKey, formValue[j])
+          }
+        } else {
+          appendFormValue(formKey, formValue)
+        }
       }
-    } else {
-      this._httpMessage.emit('error', error)
     }
   }
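The new `options.formData` block above walks a plain object and forwards each entry to the underlying `form-data` stream, unwrapping `{value, options}` pairs and expanding arrays into one part per item. Roughly how a caller exercises it (the URL and field names here are placeholders):

```js
var request = require('request');

request.post({
  url: 'http://example.com/upload', // placeholder endpoint
  formData: {
    field: 'plain value',
    attachments: ['first', 'second'], // arrays append one part per item
    file: {
      value: new Buffer('file contents'),
      options: { filename: 'notes.txt', contentType: 'text/plain' }
    }
  }
}, function (err, res, body) {
  if (err) {
    return console.error('upload failed:', err);
  }
  console.log('status:', res.statusCode);
});
```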
-  self._buildRequest = function(){
-    var self = this;
-
-    if (options.form) {
-      self.form(options.form)
-    }
+  if (options.qs) {
+    self.qs(options.qs)
+  }
 
-    if (options.qs) self.qs(options.qs)
+  if (self.uri.path) {
+    self.path = self.uri.path
+  } else {
+    self.path = self.uri.pathname + (self.uri.search || '')
+  }
 
-    if (self.uri.path) {
-      self.path = self.uri.path
-    } else {
-      self.path = self.uri.pathname + (self.uri.search || "")
-    }
+  if (self.path.length === 0) {
+    self.path = '/'
+  }
 
-    if (self.path.length === 0) self.path = '/'
+  // Auth must happen last in case signing is dependent on other headers
+  if (options.oauth) {
+    self.oauth(options.oauth)
+  }
 
+  if (options.aws) {
+    self.aws(options.aws)
+  }
 
-    // Auth must happen last in case signing is dependent on other headers
-    if (options.oauth) {
-      self.oauth(options.oauth)
-    }
+  if (options.hawk) {
+    self.hawk(options.hawk)
+  }
 
-    if (options.aws) {
-      self.aws(options.aws)
-    }
+  if (options.httpSignature) {
+    self.httpSignature(options.httpSignature)
+  }
 
-    if (options.hawk) {
-      self.hawk(options.hawk)
+  if (options.auth) {
+    if (Object.prototype.hasOwnProperty.call(options.auth, 'username')) {
+      options.auth.user = options.auth.username
     }
-
-    if (options.httpSignature) {
-      self.httpSignature(options.httpSignature)
+    if (Object.prototype.hasOwnProperty.call(options.auth, 'password')) {
+      options.auth.pass = options.auth.password
     }
 
-    if (options.auth) {
-      if (Object.prototype.hasOwnProperty.call(options.auth, 'username')) options.auth.user = options.auth.username
-      if (Object.prototype.hasOwnProperty.call(options.auth, 'password')) options.auth.pass = options.auth.password
+    self.auth(
+      options.auth.user,
+      options.auth.pass,
+      options.auth.sendImmediately,
+      options.auth.bearer
+    )
+  }
 
-      self.auth(
-        options.auth.user,
-        options.auth.pass,
-        options.auth.sendImmediately,
-        options.auth.bearer
-      )
-    }
+  if (self.gzip && !self.hasHeader('accept-encoding')) {
+    self.setHeader('accept-encoding', 'gzip')
+  }
 
-    if (self.gzip && !self.hasHeader('accept-encoding')) {
-      self.setHeader('accept-encoding', 'gzip')
-    }
+  if (self.uri.auth && !self.hasHeader('authorization')) {
+    var uriAuthPieces = self.uri.auth.split(':').map(function(item){ return querystring.unescape(item) })
+    self.auth(uriAuthPieces[0], uriAuthPieces.slice(1).join(':'), true)
+  }
 
-    if (self.uri.auth && !self.hasHeader('authorization')) {
-      var authPieces = self.uri.auth.split(':').map(function(item){ return querystring.unescape(item) })
-      self.auth(authPieces[0], authPieces.slice(1).join(':'), true)
+  if (self.proxy && !self.tunnel) {
+    if (self.proxy.auth && !self.proxyAuthorization) {
+      var proxyAuthPieces = self.proxy.auth.split(':').map(function(item){
+        return querystring.unescape(item)
+      })
+      var authHeader = 'Basic ' + toBase64(proxyAuthPieces.join(':'))
+      self.proxyAuthorization = authHeader
     }
-
-    if (self.proxy && !self.tunnel) {
-      if (self.proxy.auth && !self.proxyAuthorization) {
-        var authPieces = self.proxy.auth.split(':').map(function(item){
-          return querystring.unescape(item)
-        })
-        var authHeader = 'Basic ' + toBase64(authPieces.join(':'))
-        self.proxyAuthorization = authHeader
-      }
-      if (self.proxyAuthorization)
-        self.setHeader('proxy-authorization', self.proxyAuthorization)
+    if (self.proxyAuthorization) {
+      self.setHeader('proxy-authorization', self.proxyAuthorization)
     }
+  }
 
-    if (self.proxy && !self.tunnel) self.path = (self.uri.protocol + '//' + self.uri.host + self.path)
+  if (self.proxy && !self.tunnel) {
+    self.path = (self.uri.protocol + '//' + self.uri.host + self.path)
+  }
 
-    if (options.json) {
-      self.json(options.json)
-    } else if (options.multipart) {
-      self.boundary = uuid()
-      self.multipart(options.multipart)
-    }
+  if (options.json) {
+    self.json(options.json)
+  } else if (options.multipart) {
+    self.boundary = uuid()
+    self.multipart(options.multipart)
+  }
 
-    if (self.body) {
-      var length = 0
-      if (!Buffer.isBuffer(self.body)) {
-        if (Array.isArray(self.body)) {
-          for (var i = 0; i < self.body.length; i++) {
-            length += self.body[i].length
-          }
-        } else {
-          self.body = new Buffer(self.body)
-          length = self.body.length
+  if (self.body) {
+    var length = 0
+    if (!Buffer.isBuffer(self.body)) {
+      if (Array.isArray(self.body)) {
+        for (var i = 0; i < self.body.length; i++) {
+          length += self.body[i].length
         }
       } else {
+        self.body = new Buffer(self.body)
         length = self.body.length
       }
-      if (length) {
-        if (!self.hasHeader('content-length')) self.setHeader('content-length', length)
-      } else {
-        throw new Error('Argument error, options.body.')
+    } else {
+      length = self.body.length
+    }
+    if (length) {
+      if (!self.hasHeader('content-length')) {
+        self.setHeader('content-length', length)
       }
+    } else {
+      throw new Error('Argument error, options.body.')
     }
+  }
 
-    var protocol = self.proxy && !self.tunnel ? self.proxy.protocol : self.uri.protocol
-      , defaultModules = {'http:':http, 'https:':https, 'unix:':http}
-      , httpModules = self.httpModules || {}
-      ;
-    self.httpModule = httpModules[protocol] || defaultModules[protocol]
+  var protocol = self.proxy && !self.tunnel ? self.proxy.protocol : self.uri.protocol
+    , defaultModules = {'http:':http, 'https:':https}
+    , httpModules = self.httpModules || {}
 
-    if (!self.httpModule) return this.emit('error', new Error("Invalid protocol: " + protocol))
+  self.httpModule = httpModules[protocol] || defaultModules[protocol]
 
-    if (options.ca) self.ca = options.ca
+  if (!self.httpModule) {
+    return self.emit('error', new Error('Invalid protocol: ' + protocol))
+  }
 
-    if (!self.agent) {
-      if (options.agentOptions) self.agentOptions = options.agentOptions
+  if (options.ca) {
+    self.ca = options.ca
+  }
 
-      if (options.agentClass) {
-        self.agentClass = options.agentClass
-      } else if (options.forever) {
-        self.agentClass = protocol === 'http:' ? ForeverAgent : ForeverAgent.SSL
-      } else {
-        self.agentClass = self.httpModule.Agent
-      }
+  if (!self.agent) {
+    if (options.agentOptions) {
+      self.agentOptions = options.agentOptions
     }
 
-    if (self.pool === false) {
-      self.agent = false
+    if (options.agentClass) {
+      self.agentClass = options.agentClass
+    } else if (options.forever) {
+      self.agentClass = protocol === 'http:' ? ForeverAgent : ForeverAgent.SSL
     } else {
-      self.agent = self.agent || self.getAgent()
-      if (self.maxSockets) {
-        // Don't use our pooling if node has the refactored client
-        self.agent.maxSockets = self.maxSockets
-      }
-      if (self.pool.maxSockets) {
-        // Don't use our pooling if node has the refactored client
-        self.agent.maxSockets = self.pool.maxSockets
-      }
+      self.agentClass = self.httpModule.Agent
     }
+  }
 
-    self.on('pipe', function (src) {
-      if (self.ntick && self._started) throw new Error("You cannot pipe to this stream after the outbound request has started.")
-      self.src = src
-      if (isReadStream(src)) {
-        if (!self.hasHeader('content-type')) self.setHeader('content-type', mime.lookup(src.path))
-      } else {
-        if (src.headers) {
-          for (var i in src.headers) {
-            if (!self.hasHeader(i)) {
-              self.setHeader(i, src.headers[i])
-            }
-          }
-        }
-        if (self._json && !self.hasHeader('content-type'))
-          self.setHeader('content-type', 'application/json')
-        if (src.method && !self.explicitMethod) {
-          self.method = src.method
-        }
-      }
-
-      // self.on('pipe', function () {
-      //   console.error("You have already piped to this stream. Pipeing twice is likely to break the request.")
-      // })
-    })
+  if (self.pool === false) {
+    self.agent = false
+  } else {
+    self.agent = self.agent || self.getAgent()
+    if (self.maxSockets) {
+      // Don't use our pooling if node has the refactored client
+      self.agent.maxSockets = self.maxSockets
+    }
+    if (self.pool.maxSockets) {
+      // Don't use our pooling if node has the refactored client
+      self.agent.maxSockets = self.pool.maxSockets
+    }
+  }
 
-    process.nextTick(function () {
-      if (self._aborted) return
+  self.on('pipe', function (src) {
+    if (self.ntick && self._started) {
+      throw new Error('You cannot pipe to this stream after the outbound request has started.')
+    }
+    self.src = src
+    if (isReadStream(src)) {
+      if (!self.hasHeader('content-type')) {
+        self.setHeader('content-type', mime.lookup(src.path))
+      }
+    } else {
+      if (src.headers) {
+        for (var i in src.headers) {
+          if (!self.hasHeader(i)) {
+            self.setHeader(i, src.headers[i])
          }
        }
      }
+      if (self._json && !self.hasHeader('content-type')) {
+        self.setHeader('content-type', 'application/json')
+      }
+      if (src.method && !self.explicitMethod) {
+        self.method = src.method
+      }
+    }
 
-      var end = function () {
-        if (self._form) {
-          self._form.pipe(self)
-        }
-        if (self.body) {
-          if (Array.isArray(self.body)) {
-            self.body.forEach(function (part) {
-              self.write(part)
-            })
-          } else {
-            self.write(self.body)
-          }
-          self.end()
-        } else if (self.requestBodyStream) {
-          console.warn("options.requestBodyStream is deprecated, please pass the request object to stream.pipe.")
-          self.requestBodyStream.pipe(self)
-        } else if (!self.src) {
-          if (self.method !== 'GET' && typeof self.method !== 'undefined') {
-            self.setHeader('content-length', 0)
-          }
-          self.end()
-        }
-      }
-
-      if (self._form && !self.hasHeader('content-length')) {
-        // Before ending the request, we had to compute the length of the whole form, asyncly
-        self.setHeader(self._form.getHeaders())
-        self._form.getLength(function (err, length) {
-          if (!err) {
-            self.setHeader('content-length', length)
-          }
-          end()
-        })
-      } else {
-        end()
+    // self.on('pipe', function () {
+    //   console.error('You have already piped to this stream. Pipeing twice is likely to break the request.')
+    // })
+  })
 
-      self.ntick = true
-    })
-
-  } // End _buildRequest
-
-  self._handleUnixSocketURI = function(self){
-    // Parse URI and extract a socket path (tested as a valid socket using net.connect), and a http style path suffix
-    // Thus http requests can be made to a socket using the uri unix://tmp/my.socket/urlpath
-    // and a request for '/urlpath' will be sent to the unix socket at /tmp/my.socket
-
-    self.unixsocket = true;
-
-    var full_path = self.uri.href.replace(self.uri.protocol+'/', '');
-
-    var lookup = full_path.split('/');
-    var error_connecting = true;
-
-    var lookup_table = {};
-    do { lookup_table[lookup.join('/')]={} } while(lookup.pop())
-    for (r in lookup_table){
-      try_next(r);
-    }
-
-    function try_next(table_row){
-      var client = net.connect( table_row );
-      client.path = table_row
-      client.on('error', function(){ lookup_table[this.path].error_connecting=true; this.end(); });
-      client.on('connect', function(){ lookup_table[this.path].error_connecting=false; this.end(); });
-      table_row.client = client;
    }
 
-    wait_for_socket_response();
-
-    response_counter = 0;
+  defer(function () {
+    if (self._aborted) {
+      return
    }
 
-    function wait_for_socket_response(){
-      var detach;
-      if('undefined' == typeof setImmediate ) detach = process.nextTick
-      else detach = setImmediate;
-      detach(function(){
-        // counter to prevent infinite blocking waiting for an open socket to be found.
-        response_counter++;
-        var trying = false;
-        for (r in lookup_table){
-          if('undefined' == typeof lookup_table[r].error_connecting)
-            trying = true;
-        }
-        if(trying && response_counter<1000)
-          wait_for_socket_response()
-        else
-          set_socket_properties();
-      })
+    var end = function () {
+      if (self._form) {
+        self._form.pipe(self)
      }
-
-    function set_socket_properties(){
-      var host;
-      for (r in lookup_table){
-        if(lookup_table[r].error_connecting === false){
-          host = r
-        }
+      if (self.body) {
+        if (Array.isArray(self.body)) {
+          self.body.forEach(function (part) {
+            self.write(part)
+          })
+        } else {
+          self.write(self.body)
+        }
+        self.end()
+      } else if (self.requestBodyStream) {
+        console.warn('options.requestBodyStream is deprecated, please pass the request object to stream.pipe.')
+        self.requestBodyStream.pipe(self)
+      } else if (!self.src) {
+        if (self.method !== 'GET' && typeof self.method !== 'undefined') {
+          self.setHeader('content-length', 0)
+        }
+        self.end()
       }
-      if(!host){
-        self.emit('error', new Error("Failed to connect to any socket in "+full_path))
-      }
-      var path = full_path.replace(host, '')
+    }
 
-      self.socketPath = host
-      self.uri.pathname = path
-      self.uri.href = path
-      self.uri.path = path
-      self.host = ''
-      self.hostname = ''
-      delete self.host
-      delete self.hostname
-      self._buildRequest();
+    if (self._form && !self.hasHeader('content-length')) {
+      // Before ending the request, we had to compute the length of the whole form, asyncly
+      self.setHeader(self._form.getHeaders())
+      self._form.getLength(function (err, length) {
+        if (!err) {
+          self.setHeader('content-length', length)
+        }
+        end()
+      })
+    } else {
+      end()
     }
-  }
 
-  // Intercept UNIX protocol requests to change properties to match socket
-  if(/^unix:/.test(self.uri.protocol)){
-    self._handleUnixSocketURI(self);
-  } else {
-    self._buildRequest();
-  }
+    self.ntick = true
+  })
 }
@@ -650,7 +733,9 @@ Request.prototype._updateProtocol = function () {
     // previously was doing http, now doing https
     // if it's https, then we might need to tunnel now.
     if (self.proxy) {
-      if (self.setupTunnel()) return
+      if (self.setupTunnel()) {
+        return
+      }
     }
 
     self.httpModule = https
@@ -667,7 +752,9 @@ Request.prototype._updateProtocol = function () {
     }
 
     // if there's an agent, we need to get a new one.
-    if (self.agent) self.agent = self.getAgent()
+    if (self.agent) {
+      self.agent = self.getAgent()
+    }
 
   } else {
     // previously was doing https, now doing http
@@ -693,85 +780,114 @@ Request.prototype._updateProtocol = function () {
 }
 
 Request.prototype.getAgent = function () {
-  var Agent = this.agentClass
+  var self = this
+  var Agent = self.agentClass
   var options = {}
-  if (this.agentOptions) {
-    for (var i in this.agentOptions) {
-      options[i] = this.agentOptions[i]
+  if (self.agentOptions) {
+    for (var i in self.agentOptions) {
+      options[i] = self.agentOptions[i]
     }
   }
-  if (this.ca) options.ca = this.ca
-  if (this.ciphers) options.ciphers = this.ciphers
-  if (this.secureProtocol) options.secureProtocol = this.secureProtocol
-  if (this.secureOptions) options.secureOptions = this.secureOptions
-  if (typeof this.rejectUnauthorized !== 'undefined') options.rejectUnauthorized = this.rejectUnauthorized
+  if (self.ca) {
+    options.ca = self.ca
+  }
+  if (self.ciphers) {
+    options.ciphers = self.ciphers
+  }
+  if (self.secureProtocol) {
+    options.secureProtocol = self.secureProtocol
+  }
+  if (self.secureOptions) {
+    options.secureOptions = self.secureOptions
+  }
+  if (typeof self.rejectUnauthorized !== 'undefined') {
+    options.rejectUnauthorized = self.rejectUnauthorized
+  }
 
-  if (this.cert && this.key) {
-    options.key = this.key
-    options.cert = this.cert
+  if (self.cert && self.key) {
+    options.key = self.key
+    options.cert = self.cert
   }
 
   var poolKey = ''
 
   // different types of agents are in different pools
-  if (Agent !== this.httpModule.Agent) {
+  if (Agent !== self.httpModule.Agent) {
     poolKey += Agent.name
   }
 
-  if (!this.httpModule.globalAgent) {
+  if (!self.httpModule.globalAgent) {
     // node 0.4.x
-    options.host = this.host
-    options.port = this.port
-    if (poolKey) poolKey += ':'
-    poolKey += this.host + ':' + this.port
+    options.host = self.host
+    options.port = self.port
+    if (poolKey) {
+      poolKey += ':'
+    }
+    poolKey += self.host + ':' + self.port
   }
 
   // ca option is only relevant if proxy or destination are https
-  var proxy = this.proxy
-  if (typeof proxy === 'string') proxy = url.parse(proxy)
+  var proxy = self.proxy
+  if (typeof proxy === 'string') {
+    proxy = url.parse(proxy)
+  }
   var isHttps = (proxy && proxy.protocol === 'https:') || this.uri.protocol === 'https:'
+
   if (isHttps) {
     if (options.ca) {
-      if (poolKey) poolKey += ':'
+      if (poolKey) {
+        poolKey += ':'
+      }
       poolKey += options.ca
     }
 
     if (typeof options.rejectUnauthorized !== 'undefined') {
-      if (poolKey) poolKey += ':'
+      if (poolKey) {
+        poolKey += ':'
+      }
       poolKey += options.rejectUnauthorized
     }
 
-    if (options.cert)
+    if (options.cert) {
       poolKey += options.cert.toString('ascii') + options.key.toString('ascii')
+    }
 
     if (options.ciphers) {
-      if (poolKey) poolKey += ':'
+      if (poolKey) {
+        poolKey += ':'
+      }
      poolKey += options.ciphers
     }
 
     if (options.secureProtocol) {
-      if (poolKey) poolKey += ':'
+      if (poolKey) {
+        poolKey += ':'
+      }
      poolKey += options.secureProtocol
    }
 
    if (options.secureOptions) {
-      if (poolKey) poolKey += ':'
+      if (poolKey) {
+        poolKey += ':'
+      }
      poolKey += options.secureOptions
    }
  }
 
-  if (this.pool === globalPool && !poolKey && Object.keys(options).length === 0 && this.httpModule.globalAgent) {
+  if (self.pool === globalPool && !poolKey && Object.keys(options).length === 0 && self.httpModule.globalAgent) {
     // not doing anything special. Use the globalAgent
-    return this.httpModule.globalAgent
+    return self.httpModule.globalAgent
   }
 
   // we're using a stored agent. Make sure it's protocol-specific
-  poolKey = this.uri.protocol + poolKey
+  poolKey = self.uri.protocol + poolKey
 
-  // already generated an agent for this setting
-  if (this.pool[poolKey]) return this.pool[poolKey]
+  // generate a new agent for this setting if none yet exists
+  if (!self.pool[poolKey]) {
+    self.pool[poolKey] = new Agent(options)
+  }
 
-  return this.pool[poolKey] = new Agent(options)
+  return self.pool[poolKey]
 }
 
 Request.prototype.start = function () {
@@ -779,7 +895,9 @@ Request.prototype.start = function () {
   // this is usually called on the first write(), end() or on nextTick()
   var self = this
 
-  if (self._aborted) return
+  if (self._aborted) {
+    return
+  }
 
   self._started = true
   self.method = self.method || 'GET'
@@ -798,14 +916,14 @@ Request.prototype.start = function () {
   delete reqOptions.auth
 
   debug('make request', self.uri.href)
-  self.req = self.httpModule.request(reqOptions, self.onResponse.bind(self))
+  self.req = self.httpModule.request(reqOptions)
 
   if (self.timeout && !self.timeoutTimer) {
     self.timeoutTimer = setTimeout(function () {
-      self.req.abort()
-      var e = new Error("ETIMEDOUT")
-      e.code = "ETIMEDOUT"
-      self.emit("error", e)
+      self.abort()
+      var e = new Error('ETIMEDOUT')
+      e.code = 'ETIMEDOUT'
+      self.emit('error', e)
     }, self.timeout)
 
     // Set additional timeout on socket - in case if remote
@@ -814,43 +932,73 @@ Request.prototype.start = function () {
       self.req.setTimeout(self.timeout, function () {
        if (self.req) {
          self.req.abort()
-          var e = new Error("ESOCKETTIMEDOUT")
-          e.code = "ESOCKETTIMEDOUT"
-          self.emit("error", e)
+          var e = new Error('ESOCKETTIMEDOUT')
+          e.code = 'ESOCKETTIMEDOUT'
+          self.emit('error', e)
        }
      })
    }
  }
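With the hunk above, a user-set `timeout` surfaces one of two error codes: `ETIMEDOUT` when the connection itself times out (now torn down via `self.abort()`), or `ESOCKETTIMEDOUT` when the server goes silent after headers. A small consumer-side sketch (the URL is a placeholder):

```js
var request = require('request');

request({ url: 'http://example.com/slow', timeout: 500 }, function (err, res) {
  if (err && err.code === 'ETIMEDOUT') {
    return console.error('connection timed out');
  }
  if (err && err.code === 'ESOCKETTIMEDOUT') {
    return console.error('server stopped responding mid-response');
  }
  if (err) {
    return console.error(err);
  }
  console.log(res.statusCode);
});
```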
- if (response.connection && response.connection.listeners('error').indexOf(self._parserErrorHandler) === -1) { + if (response.connection && response.connection.listeners('error').indexOf(connectionErrorHandler) === -1) { response.connection.setMaxListeners(0) - response.connection.once('error', self._parserErrorHandler) + response.connection.once('error', connectionErrorHandler) } if (self._aborted) { debug('aborted', self.uri.href) response.resume() return } - if (self._paused) response.pause() - // Check that response.resume is defined. Workaround for browserify. - else response.resume && response.resume() + if (self._paused) { + response.pause() + } else if (response.resume) { + // response.resume should be defined, but check anyway before calling. Workaround for browserify. + response.resume() + } self.response = response response.request = self @@ -861,24 +1009,29 @@ Request.prototype.onResponse = function (response) { self.strictSSL && (!response.hasOwnProperty('client') || !response.client.authorized)) { debug('strict ssl error', self.uri.href) - var sslErr = response.hasOwnProperty('client') ? response.client.authorizationError : self.uri.href + " does not support SSL"; - self.emit('error', new Error('SSL Error: '+ sslErr)) + var sslErr = response.hasOwnProperty('client') ? response.client.authorizationError : self.uri.href + ' does not support SSL' + self.emit('error', new Error('SSL Error: ' + sslErr)) return } - if (self.setHost) self.removeHeader('host') + // Save the original host before any redirect (if it changes, we need to + // remove any authorization headers) + self.originalHost = self.headers.host + if (self.setHost) { + self.removeHeader('host') + } if (self.timeout && self.timeoutTimer) { clearTimeout(self.timeoutTimer) self.timeoutTimer = null } - var targetCookieJar = (self._jar && self._jar.setCookie)?self._jar:globalCookieJar; + var targetCookieJar = (self._jar && self._jar.setCookie) ? self._jar : globalCookieJar var addCookie = function (cookie) { //set the cookie if it's domain in the href's domain. 
 
@@ -886,8 +1039,11 @@ Request.prototype.onResponse = function (response) {
 
   if (response.caseless.has('set-cookie') && (!self._disableCookies)) {
     var headerName = response.caseless.has('set-cookie')
-    if (Array.isArray(response.headers[headerName])) response.headers[headerName].forEach(addCookie)
-    else addCookie(response.headers[headerName])
+    if (Array.isArray(response.headers[headerName])) {
+      response.headers[headerName].forEach(addCookie)
+    } else {
+      addCookie(response.headers[headerName])
+    }
   }
 
   var redirectTo = null
@@ -897,7 +1053,7 @@ Request.prototype.onResponse = function (response) {
 
     if (self.followAllRedirects) {
       redirectTo = location
-    } else if (self.followRedirect) {
+    } else if (self.followRedirects) {
       switch (self.method) {
         case 'PATCH':
         case 'PUT':
@@ -910,7 +1066,7 @@ Request.prototype.onResponse = function (response) {
           break
       }
     }
-  } else if (response.statusCode == 401 && self._hasAuth && !self._sentAuth) {
+  } else if (response.statusCode === 401 && self._hasAuth && !self._sentAuth) {
     var authHeader = response.caseless.get('www-authenticate')
     var authVerb = authHeader && authHeader.split(' ')[0].toLowerCase()
     debug('reauth', authVerb)
@@ -943,8 +1099,10 @@ Request.prototype.onResponse = function (response) {
         var re = /([a-z0-9_-]+)=(?:"([^"]+)"|([a-z0-9_-]+))/gi
         for (;;) {
           var match = re.exec(authHeader)
-          if (!match) break
-          challenge[match[1]] = match[2] || match[3];
+          if (!match) {
+            break
+          }
+          challenge[match[1]] = match[2] || match[3]
         }
 
         var ha1 = md5(self._user + ':' + challenge.realm + ':' + self._pass)
@@ -968,12 +1126,12 @@ Request.prototype.onResponse = function (response) {
 
         authHeader = []
         for (var k in authValues) {
-          if (!authValues[k]) {
-            //ignore
-          } else if (k === 'qop' || k === 'nc' || k === 'algorithm') {
-            authHeader.push(k + '=' + authValues[k])
-          } else {
-            authHeader.push(k + '="' + authValues[k] + '"')
+          if (authValues[k]) {
+            if (k === 'qop' || k === 'nc' || k === 'algorithm') {
+              authHeader.push(k + '=' + authValues[k])
+            } else {
+              authHeader.push(k + '="' + authValues[k] + '"')
+            }
           }
         }
         authHeader = 'Digest ' + authHeader.join(', ')
@@ -990,10 +1148,12 @@ Request.prototype.onResponse = function (response) {
 
     // ignore any potential response body. it cannot possibly be useful
     // to us at this point.
-    if (self._paused) response.resume()
+    if (self._paused) {
+      response.resume()
+    }
 
     if (self._redirectsFollowed >= self.maxRedirects) {
-      self.emit('error', new Error("Exceeded maxRedirects. Probably stuck in a redirect loop "+self.uri.href))
+      self.emit('error', new Error('Exceeded maxRedirects. Probably stuck in a redirect loop ' + self.uri.href))
       return
     }
     self._redirectsFollowed += 1
@@ -1015,13 +1175,15 @@ Request.prototype.onResponse = function (response) {
       , redirectUri: redirectTo
       }
     )
-    if (self.followAllRedirects && response.statusCode != 401 && response.statusCode != 307) self.method = 'GET'
+    if (self.followAllRedirects && response.statusCode !== 401 && response.statusCode !== 307) {
+      self.method = 'GET'
+    }
     // self.method = 'GET' // Force all redirects to use GET || commented out fixes #215
     delete self.src
     delete self.req
     delete self.agent
     delete self._started
-    if (response.statusCode != 401 && response.statusCode != 307) {
+    if (response.statusCode !== 401 && response.statusCode !== 307) {
       // Remove parameters from the previous response, unless this is the second request
       // for a server that requires digest authentication.
       delete self.body
@@ -1030,10 +1192,16 @@ Request.prototype.onResponse = function (response) {
       self.removeHeader('host')
       self.removeHeader('content-type')
       self.removeHeader('content-length')
+      if (self.uri.hostname !== self.originalHost.split(':')[0]) {
+        // Remove authorization if changing hostnames (but not if just
+        // changing ports or protocols).  This matches the behavior of curl:
+        // https://github.com/bagder/curl/blob/6beb0eee/lib/http.c#L710
+        self.removeHeader('authorization')
+      }
     }
   }
 
-  self.emit('redirect');
+  self.emit('redirect')
 
   self.init()
   return // Ignore the rest of the response
@@ -1042,7 +1210,9 @@ Request.prototype.onResponse = function (response) {
   // Be a good stream and emit end when the response is finished.
   // Hack to emit end on close because of a core bug that never fires end
   response.on('close', function () {
-    if (!self._ended) self.response.emit('end')
+    if (!self._ended) {
+      self.response.emit('end')
+    }
   })
 
   response.on('end', function () {
@@ -1051,17 +1221,17 @@ Request.prototype.onResponse = function (response) {
 
   var dataStream
   if (self.gzip) {
-    var contentEncoding = response.headers["content-encoding"] || "identity"
+    var contentEncoding = response.headers['content-encoding'] || 'identity'
     contentEncoding = contentEncoding.trim().toLowerCase()
 
-    if (contentEncoding === "gzip") {
+    if (contentEncoding === 'gzip') {
       dataStream = zlib.createGunzip()
       response.pipe(dataStream)
     } else {
       // Since previous versions didn't check for Content-Encoding header,
       // ignore any invalid values to preserve backwards-compatibility
-      if (contentEncoding !== "identity") {
-        debug("ignoring unrecognized Content-Encoding " + contentEncoding)
+      if (contentEncoding !== 'identity') {
+        debug('ignoring unrecognized Content-Encoding ' + contentEncoding)
      }
      dataStream = response
    }
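The content-encoding hunk above only gunzips when the caller opted in, and anything other than `gzip`/`identity` passes through untouched for backwards compatibility. Consumer-side sketch (placeholder URL):

```js
var request = require('request');

request({ url: 'http://example.com/', gzip: true }, function (err, res, body) {
  if (err) {
    return console.error(err);
  }
  // With gzip: true the request advertises Accept-Encoding: gzip, and
  // body arrives already decompressed when the server sent it gzipped.
  console.log(res.headers['content-encoding']); // 'gzip' or undefined
  console.log(body.length);
});
```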
chunk) }) - dataStream.on("close", function () {self.emit("close")}) + dataStream.on('error', function (error) { + self.emit('error', error) + }) + dataStream.on('close', function () {self.emit('close')}) if (self.callback) { var buffer = bl() , strings = [] - ; - self.on("data", function (chunk) { - if (Buffer.isBuffer(chunk)) buffer.append(chunk) - else strings.push(chunk) + + self.on('data', function (chunk) { + if (Buffer.isBuffer(chunk)) { + buffer.append(chunk) + } else { + strings.push(chunk) + } }) - self.on("end", function () { + self.on('end', function () { debug('end event', self.uri.href) if (self._aborted) { debug('aborted', self.uri.href) @@ -1124,7 +1300,7 @@ Request.prototype.onResponse = function (response) { } else if (strings.length) { // The UTF8 BOM [0xEF,0xBB,0xBF] is converted to [0xFE,0xFF] in the JS UTC16/UCS2 representation. // Strip this value out when the encoding is set to 'utf8', as upstream consumers won't expect it and it breaks JSON.parse(). - if (self.encoding === 'utf8' && strings[0].length > 0 && strings[0][0] === "\uFEFF") { + if (self.encoding === 'utf8' && strings[0].length > 0 && strings[0][0] === '\uFEFF') { strings[0] = strings[0].substring(1) } response.body = strings.join('') @@ -1136,96 +1312,112 @@ Request.prototype.onResponse = function (response) { } catch (e) {} } debug('emitting complete', self.uri.href) - if(response.body == undefined && !self._json) { - response.body = ""; + if(typeof response.body === 'undefined' && !self._json) { + response.body = '' } self.emit('complete', response, response.body) }) } //if no callback else{ - self.on("end", function () { + self.on('end', function () { if (self._aborted) { debug('aborted', self.uri.href) return } - self.emit('complete', response); - }); + self.emit('complete', response) + }) } } debug('finish init function', self.uri.href) } Request.prototype.abort = function () { - this._aborted = true + var self = this + self._aborted = true - if (this.req) { - this.req.abort() + if (self.req) { + self.req.abort() } - else if (this.response) { - this.response.abort() + else if (self.response) { + self.response.abort() } - this.emit("abort") + self.emit('abort') } Request.prototype.pipeDest = function (dest) { - var response = this.response + var self = this + var response = self.response // Called after the response is received if (dest.headers && !dest.headersSent) { if (response.caseless.has('content-type')) { var ctname = response.caseless.has('content-type') - if (dest.setHeader) dest.setHeader(ctname, response.headers[ctname]) - else dest.headers[ctname] = response.headers[ctname] + if (dest.setHeader) { + dest.setHeader(ctname, response.headers[ctname]) + } + else { + dest.headers[ctname] = response.headers[ctname] + } } if (response.caseless.has('content-length')) { var clname = response.caseless.has('content-length') - if (dest.setHeader) dest.setHeader(clname, response.headers[clname]) - else dest.headers[clname] = response.headers[clname] + if (dest.setHeader) { + dest.setHeader(clname, response.headers[clname]) + } else { + dest.headers[clname] = response.headers[clname] + } } } if (dest.setHeader && !dest.headersSent) { for (var i in response.headers) { // If the response content is being decoded, the Content-Encoding header // of the response doesn't represent the piped content, so don't pass it. 
- if (!this.gzip || i !== 'content-encoding') { + if (!self.gzip || i !== 'content-encoding') { dest.setHeader(i, response.headers[i]) } } dest.statusCode = response.statusCode } - if (this.pipefilter) this.pipefilter(response, dest) + if (self.pipefilter) { + self.pipefilter(response, dest) + } } Request.prototype.qs = function (q, clobber) { + var self = this var base - if (!clobber && this.uri.query) base = qs.parse(this.uri.query) - else base = {} + if (!clobber && self.uri.query) { + base = self.qsLib.parse(self.uri.query) + } else { + base = {} + } for (var i in q) { base[i] = q[i] } - if (qs.stringify(base) === ''){ - return this + if (self.qsLib.stringify(base) === ''){ + return self } - this.uri = url.parse(this.uri.href.split('?')[0] + '?' + qs.stringify(base)) - this.url = this.uri - this.path = this.uri.path + self.uri = url.parse(self.uri.href.split('?')[0] + '?' + self.qsLib.stringify(base)) + self.url = self.uri + self.path = self.uri.path - return this + return self } Request.prototype.form = function (form) { + var self = this if (form) { - this.setHeader('content-type', 'application/x-www-form-urlencoded; charset=utf-8') - this.body = (typeof form === 'string') ? form.toString('utf8') : qs.stringify(form).toString('utf8') - return this + self.setHeader('content-type', 'application/x-www-form-urlencoded') + self.body = (typeof form === 'string') ? form.toString('utf8') : self.qsLib.stringify(form).toString('utf8') + return self } // create form-data object - this._form = new FormData() - return this._form + self._form = new FormData() + return self._form } Request.prototype.multipart = function (multipart) { var self = this @@ -1234,11 +1426,13 @@ Request.prototype.multipart = function (multipart) { if (!self.hasHeader('content-type')) { self.setHeader('content-type', 'multipart/related; boundary=' + self.boundary) } else { - var headerName = self.hasHeader('content-type'); + var headerName = self.hasHeader('content-type') self.setHeader(headerName, self.headers[headerName].split(';')[0] + '; boundary=' + self.boundary) } - if (!multipart.forEach) throw new Error('Argument error, options.multipart.') + if (!multipart.forEach) { + throw new Error('Argument error, options.multipart.') + } if (self.preambleCRLF) { self.body.push(new Buffer('\r\n')) @@ -1246,7 +1440,9 @@ Request.prototype.multipart = function (multipart) { multipart.forEach(function (part) { var body = part.body - if(body == null) throw Error('Body attribute missing in multipart.') + if(typeof body === 'undefined') { + throw new Error('Body attribute missing in multipart.') + } delete part.body var preamble = '--' + self.boundary + '\r\n' Object.keys(part).forEach(function (key) { @@ -1258,85 +1454,104 @@ Request.prototype.multipart = function (multipart) { self.body.push(new Buffer('\r\n')) }) self.body.push(new Buffer('--' + self.boundary + '--')) + + if (self.postambleCRLF) { + self.body.push(new Buffer('\r\n')) + } + return self } Request.prototype.json = function (val) { var self = this - if (!self.hasHeader('accept')) self.setHeader('accept', 'application/json') + if (!self.hasHeader('accept')) { + self.setHeader('accept', 'application/json') + } - this._json = true + self._json = true if (typeof val === 'boolean') { - if (typeof this.body === 'object') { - this.body = safeStringify(this.body) - if (!self.hasHeader('content-type')) + if (typeof self.body === 'object') { + self.body = safeStringify(self.body) + if (!self.hasHeader('content-type')) { self.setHeader('content-type', 'application/json') + } } 
} else { - this.body = safeStringify(val) - if (!self.hasHeader('content-type')) + self.body = safeStringify(val) + if (!self.hasHeader('content-type')) { self.setHeader('content-type', 'application/json') + } } - return this + return self } Request.prototype.getHeader = function (name, headers) { + var self = this var result, re, match - if (!headers) headers = this.headers + if (!headers) { + headers = self.headers + } Object.keys(headers).forEach(function (key) { - if (key.length !== name.length) return + if (key.length !== name.length) { + return + } re = new RegExp(name, 'i') match = key.match(re) - if (match) result = headers[key] + if (match) { + result = headers[key] + } }) return result } var getHeader = Request.prototype.getHeader Request.prototype.auth = function (user, pass, sendImmediately, bearer) { + var self = this if (bearer !== undefined) { - this._bearer = bearer - this._hasAuth = true - if (sendImmediately || typeof sendImmediately == 'undefined') { + self._bearer = bearer + self._hasAuth = true + if (sendImmediately || typeof sendImmediately === 'undefined') { if (typeof bearer === 'function') { bearer = bearer() } - this.setHeader('authorization', 'Bearer ' + bearer) - this._sentAuth = true + self.setHeader('authorization', 'Bearer ' + bearer) + self._sentAuth = true } - return this + return self } if (typeof user !== 'string' || (pass !== undefined && typeof pass !== 'string')) { throw new Error('auth() received invalid user or password') } - this._user = user - this._pass = pass - this._hasAuth = true + self._user = user + self._pass = pass + self._hasAuth = true var header = typeof pass !== 'undefined' ? user + ':' + pass : user - if (sendImmediately || typeof sendImmediately == 'undefined') { - this.setHeader('authorization', 'Basic ' + toBase64(header)) - this._sentAuth = true + if (sendImmediately || typeof sendImmediately === 'undefined') { + self.setHeader('authorization', 'Basic ' + toBase64(header)) + self._sentAuth = true } - return this + return self } Request.prototype.aws = function (opts, now) { + var self = this + if (!now) { - this._aws = opts - return this + self._aws = opts + return self } var date = new Date() - this.setHeader('date', date.toUTCString()) + self.setHeader('date', date.toUTCString()) var auth = { key: opts.key , secret: opts.secret - , verb: this.method.toUpperCase() + , verb: self.method.toUpperCase() , date: date - , contentType: this.getHeader('content-type') || '' - , md5: this.getHeader('content-md5') || '' - , amazonHeaders: aws.canonicalizeHeaders(this.headers) + , contentType: self.getHeader('content-type') || '' + , md5: self.getHeader('content-md5') || '' + , amazonHeaders: aws.canonicalizeHeaders(self.headers) } - var path = this.uri.path; + var path = self.uri.path if (opts.bucket && path) { auth.resource = '/' + opts.bucket + path } else if (opts.bucket && !path) { @@ -1347,50 +1562,61 @@ Request.prototype.aws = function (opts, now) { auth.resource = '/' } auth.resource = aws.canonicalizeResource(auth.resource) - this.setHeader('authorization', aws.authorization(auth)) + self.setHeader('authorization', aws.authorization(auth)) - return this + return self } Request.prototype.httpSignature = function (opts) { - var req = this + var self = this httpSignature.signRequest({ getHeader: function(header) { - return getHeader(header, req.headers) + return getHeader(header, self.headers) }, setHeader: function(header, value) { - req.setHeader(header, value) + self.setHeader(header, value) }, - method: this.method, - path: 
this.path + method: self.method, + path: self.path }, opts) - debug('httpSignature authorization', this.getHeader('authorization')) + debug('httpSignature authorization', self.getHeader('authorization')) - return this + return self } Request.prototype.hawk = function (opts) { - this.setHeader('Authorization', hawk.client.header(this.uri, this.method, opts).field) + var self = this + self.setHeader('Authorization', hawk.client.header(self.uri, self.method, opts).field) } Request.prototype.oauth = function (_oauth) { + var self = this var form, query - if (this.hasHeader('content-type') && - this.getHeader('content-type').slice(0, 'application/x-www-form-urlencoded'.length) === + if (self.hasHeader('content-type') && + self.getHeader('content-type').slice(0, 'application/x-www-form-urlencoded'.length) === 'application/x-www-form-urlencoded' ) { - form = this.body + form = self.body } - if (this.uri.query) { - query = this.uri.query + if (self.uri.query) { + query = self.uri.query } var oa = {} - for (var i in _oauth) oa['oauth_'+i] = _oauth[i] - if ('oauth_realm' in oa) delete oa.oauth_realm - - if (!oa.oauth_version) oa.oauth_version = '1.0' - if (!oa.oauth_timestamp) oa.oauth_timestamp = Math.floor( Date.now() / 1000 ).toString() - if (!oa.oauth_nonce) oa.oauth_nonce = uuid().replace(/-/g, '') + for (var i in _oauth) { + oa['oauth_' + i] = _oauth[i] + } + if ('oauth_realm' in oa) { + delete oa.oauth_realm + } + if (!oa.oauth_version) { + oa.oauth_version = '1.0' + } + if (!oa.oauth_timestamp) { + oa.oauth_timestamp = Math.floor( Date.now() / 1000 ).toString() + } + if (!oa.oauth_nonce) { + oa.oauth_nonce = uuid().replace(/-/g, '') + } oa.oauth_signature_method = 'HMAC-SHA1' @@ -1399,95 +1625,118 @@ Request.prototype.oauth = function (_oauth) { var token_secret = oa.oauth_token_secret delete oa.oauth_token_secret - var baseurl = this.uri.protocol + '//' + this.uri.host + this.uri.pathname - var params = qs.parse([].concat(query, form, qs.stringify(oa)).join('&')) - var signature = oauth.hmacsign(this.method, baseurl, params, consumer_secret, token_secret) + var baseurl = self.uri.protocol + '//' + self.uri.host + self.uri.pathname + var params = self.qsLib.parse([].concat(query, form, self.qsLib.stringify(oa)).join('&')) + var signature = oauth.hmacsign(self.method, baseurl, params, consumer_secret, token_secret) - var realm = _oauth.realm ? 'realm="' + _oauth.realm + '",' : ''; + var realm = _oauth.realm ? 'realm="' + _oauth.realm + '",' : '' var authHeader = 'OAuth ' + realm + - Object.keys(oa).sort().map(function (i) {return i+'="'+oauth.rfc3986(oa[i])+'"'}).join(',') + Object.keys(oa).sort().map(function (i) {return i + '="' + oauth.rfc3986(oa[i]) + '"'}).join(',') authHeader += ',oauth_signature="' + oauth.rfc3986(signature) + '"' - this.setHeader('Authorization', authHeader) - return this + self.setHeader('Authorization', authHeader) + return self } Request.prototype.jar = function (jar) { + var self = this var cookies - if (this._redirectsFollowed === 0) { - this.originalCookieHeader = this.getHeader('cookie') + if (self._redirectsFollowed === 0) { + self.originalCookieHeader = self.getHeader('cookie') } if (!jar) { // disable cookies cookies = false - this._disableCookies = true + self._disableCookies = true } else { - var targetCookieJar = (jar && jar.getCookieString)?jar:globalCookieJar; - var urihref = this.uri.href + var targetCookieJar = (jar && jar.getCookieString) ? 
jar : globalCookieJar + var urihref = self.uri.href //fetch cookie in the Specified host if (targetCookieJar) { - cookies = targetCookieJar.getCookieString(urihref); + cookies = targetCookieJar.getCookieString(urihref) } } //if need cookie and cookie is not empty if (cookies && cookies.length) { - if (this.originalCookieHeader) { + if (self.originalCookieHeader) { // Don't overwrite existing Cookie header - this.setHeader('cookie', this.originalCookieHeader + '; ' + cookies) + self.setHeader('cookie', self.originalCookieHeader + '; ' + cookies) } else { - this.setHeader('cookie', cookies) + self.setHeader('cookie', cookies) } } - this._jar = jar - return this + self._jar = jar + return self } // Stream API Request.prototype.pipe = function (dest, opts) { - if (this.response) { - if (this._destdata) { - throw new Error("You cannot pipe after data has been emitted from the response.") - } else if (this._ended) { - throw new Error("You cannot pipe after the response has been ended.") + var self = this + + if (self.response) { + if (self._destdata) { + throw new Error('You cannot pipe after data has been emitted from the response.') + } else if (self._ended) { + throw new Error('You cannot pipe after the response has been ended.') } else { - stream.Stream.prototype.pipe.call(this, dest, opts) - this.pipeDest(dest) + stream.Stream.prototype.pipe.call(self, dest, opts) + self.pipeDest(dest) return dest } } else { - this.dests.push(dest) - stream.Stream.prototype.pipe.call(this, dest, opts) + self.dests.push(dest) + stream.Stream.prototype.pipe.call(self, dest, opts) return dest } } Request.prototype.write = function () { - if (!this._started) this.start() - return this.req.write.apply(this.req, arguments) + var self = this + if (!self._started) { + self.start() + } + return self.req.write.apply(self.req, arguments) } Request.prototype.end = function (chunk) { - if (chunk) this.write(chunk) - if (!this._started) this.start() - this.req.end() + var self = this + if (chunk) { + self.write(chunk) + } + if (!self._started) { + self.start() + } + self.req.end() } Request.prototype.pause = function () { - if (!this.response) this._paused = true - else this.response.pause.apply(this.response, arguments) + var self = this + if (!self.response) { + self._paused = true + } else { + self.response.pause.apply(self.response, arguments) + } } Request.prototype.resume = function () { - if (!this.response) this._paused = false - else this.response.resume.apply(this.response, arguments) + var self = this + if (!self.response) { + self._paused = false + } else { + self.response.resume.apply(self.response, arguments) + } } Request.prototype.destroy = function () { - if (!this._ended) this.end() - else if (this.response) this.response.destroy() + var self = this + if (!self._ended) { + self.end() + } else if (self.response) { + self.response.destroy() + } } -Request.prototype.toJSON = requestToJSON - Request.defaultProxyHeaderWhiteList = defaultProxyHeaderWhiteList.slice() +// Exports +Request.prototype.toJSON = requestToJSON module.exports = Request diff --git a/deps/npm/node_modules/retry/Readme.md b/deps/npm/node_modules/retry/Readme.md index 2bb865097d4..ba6602205ab 100644 --- a/deps/npm/node_modules/retry/Readme.md +++ b/deps/npm/node_modules/retry/Readme.md @@ -14,7 +14,7 @@ This module has been tested and is ready to be used. The example below will retry a potentially failing `dns.resolve` operation `10` times using an exponential backoff strategy. 
With the default settings, this
-means the last attempt is made after `34 minutes and 7 seconds`.
+means the last attempt is made after `17 minutes and 3 seconds`.
 
 ``` javascript
 var dns = require('dns');
@@ -29,7 +29,7 @@ function faultTolerantResolve(address, cb) {
       return;
     }
 
-    cb(operation.mainError(), addresses);
+    cb(err ? operation.mainError() : null, addresses);
   });
 });
 }
@@ -69,8 +69,8 @@ milliseconds. If `options` is an array, a copy of that array is returned.
 
 * `retries`: The maximum amount of times to retry the operation. Default is `10`.
 * `factor`: The exponential factor to use. Default is `2`.
-* `minTimeout`: The amount of time before starting the first retry. Default is `1000`.
-* `maxTimeout`: The maximum amount of time between two retries. Default is `Infinity`.
+* `minTimeout`: The number of milliseconds before starting the first retry. Default is `1000`.
+* `maxTimeout`: The maximum number of milliseconds between two retries. Default is `Infinity`.
 * `randomize`: Randomizes the timeouts by multiplying with a factor between `1` to `2`. Default is `false`.
 
 The formula used to calculate the individual timeouts is:
diff --git a/deps/npm/node_modules/retry/package.json b/deps/npm/node_modules/retry/package.json
index 8f5e6d21f24..130fcae13aa 100644
--- a/deps/npm/node_modules/retry/package.json
+++ b/deps/npm/node_modules/retry/package.json
@@ -6,11 +6,11 @@
   },
   "name": "retry",
   "description": "Abstraction for exponential and custom retry strategies for failed operations.",
-  "version": "0.6.0",
+  "version": "0.6.1",
   "homepage": "https://github.com/tim-kos/node-retry",
   "repository": {
     "type": "git",
-    "url": "git://github.com/felixge/node-retry.git"
+    "url": "git://github.com/tim-kos/node-retry.git"
   },
   "directories": {
     "lib": "./lib"
@@ -24,6 +24,26 @@
     "fake": "0.2.0",
     "far": "0.0.1"
   },
-  "_id": "retry@0.6.0",
-  "_from": "retry"
+  "bugs": {
+    "url": "https://github.com/tim-kos/node-retry/issues"
+  },
+  "_id": "retry@0.6.1",
+  "_shasum": "fdc90eed943fde11b893554b8cc63d0e899ba918",
+  "_from": "retry@>=0.6.1 <0.7.0",
+  "_npmVersion": "1.4.9",
+  "_npmUser": {
+    "name": "tim-kos",
+    "email": "tim@debuggable.com"
+  },
+  "maintainers": [
+    {
+      "name": "tim-kos",
+      "email": "tim@debuggable.com"
+    }
+  ],
+  "dist": {
+    "shasum": "fdc90eed943fde11b893554b8cc63d0e899ba918",
+    "tarball": "http://registry.npmjs.org/retry/-/retry-0.6.1.tgz"
+  },
+  "_resolved": "https://registry.npmjs.org/retry/-/retry-0.6.1.tgz"
 }
diff --git a/deps/npm/node_modules/semver/Makefile b/deps/npm/node_modules/semver/Makefile
index 5717ccf42bf..71af0e9750c 100644
--- a/deps/npm/node_modules/semver/Makefile
+++ b/deps/npm/node_modules/semver/Makefile
@@ -8,12 +8,12 @@ all: $(files)
 clean:
 	rm -f $(files)
 
-semver.browser.js: head.js semver.js foot.js
-	( cat head.js; \
+semver.browser.js: head.js.txt semver.js foot.js.txt
+	( cat head.js.txt; \
 		cat semver.js | \
 		egrep -v '^ *\/\* nomin \*\/' | \
 		perl -pi -e 's/debug\([^\)]+\)//g'; \
-		cat foot.js ) > semver.browser.js
+		cat foot.js.txt ) > semver.browser.js
 
 semver.min.js: semver.browser.js
 	uglifyjs -m <semver.browser.js >semver.min.js
diff --git a/deps/npm/node_modules/semver/README.md b/deps/npm/node_modules/semver/README.md
index 4e95b846566..7e1961d4578 100644
--- a/deps/npm/node_modules/semver/README.md
+++ b/deps/npm/node_modules/semver/README.md
@@ -41,53 +41,170 @@ A leading `"="` or `"v"` character is stripped off and ignored.
 
 ## Ranges
 
-The following range styles are supported:
-
-* `1.2.3` A specific version. When nothing else will do. 
Must be a full - version number, with major, minor, and patch versions specified. - Note that build metadata is still ignored, so `1.2.3+build2012` will - satisfy this range. -* `>1.2.3` Greater than a specific version. -* `<1.2.3` Less than a specific version. If there is no prerelease - tag on the version range, then no prerelease version will be allowed - either, even though these are technically "less than". -* `>=1.2.3` Greater than or equal to. Note that prerelease versions - are NOT equal to their "normal" equivalents, so `1.2.3-beta` will - not satisfy this range, but `2.3.0-beta` will. -* `<=1.2.3` Less than or equal to. In this case, prerelease versions - ARE allowed, so `1.2.3-beta` would satisfy. +A `version range` is a set of `comparators` which specify versions +that satisfy the range. + +A `comparator` is composed of an `operator` and a `version`. The set +of primitive `operators` is: + +* `<` Less than +* `<=` Less than or equal to +* `>` Greater than +* `>=` Greater than or equal to +* `=` Equal. If no operator is specified, then equality is assumed, + so this operator is optional, but MAY be included. + +For example, the comparator `>=1.2.7` would match the versions +`1.2.7`, `1.2.8`, `2.5.3`, and `1.3.9`, but not the versions `1.2.6` +or `1.1.0`. + +Comparators can be joined by whitespace to form a `comparator set`, +which is satisfied by the **intersection** of all of the comparators +it includes. + +A range is composed of one or more comparator sets, joined by `||`. A +version matches a range if and only if every comparator in at least +one of the `||`-separated comparator sets is satisfied by the version. + +For example, the range `>=1.2.7 <1.3.0` would match the versions +`1.2.7`, `1.2.8`, and `1.2.99`, but not the versions `1.2.6`, `1.3.0`, +or `1.1.0`. + +The range `1.2.7 || >=1.2.9 <2.0.0` would match the versions `1.2.7`, +`1.2.9`, and `1.4.6`, but not the versions `1.2.8` or `2.0.0`. + +### Prerelease Tags + +If a version has a prerelease tag (for example, `1.2.3-alpha.3`) then +it will only be allowed to satisfy comparator sets if at least one +comparator with the same `[major, minor, patch]` tuple also has a +prerelease tag. + +For example, the range `>1.2.3-alpha.3` would be allowed to match the +version `1.2.3-alpha.7`, but it would *not* be satisfied by +`3.4.5-alpha.9`, even though `3.4.5-alpha.9` is technically "greater +than" `1.2.3-alpha.3` according to the SemVer sort rules. The version +range only accepts prerelease tags on the `1.2.3` version. The +version `3.4.5` *would* satisfy the range, because it does not have a +prerelease flag, and `3.4.5` is greater than `1.2.3-alpha.7`. + +The purpose for this behavior is twofold. First, prerelease versions +frequently are updated very quickly, and contain many breaking changes +that are (by the author's design) not yet fit for public consumption. +Therefore, by default, they are excluded from range matching +semantics. + +Second, a user who has opted into using a prerelease version has +clearly indicated the intent to use *that specific* set of +alpha/beta/rc versions. By including a prerelease tag in the range, +the user is indicating that they are aware of the risk. However, it +is still not appropriate to assume that they have opted into taking a +similar risk on the *next* set of prerelease versions. + +### Advanced Range Syntax + +Advanced range syntax desugars to primitive comparators in +deterministic ways. 
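+
+As a small illustration (not part of the formal grammar), the
+`validRange` function listed under "Functions" below returns the
+desugared form of a range, so the translations in this section can
+be checked directly:
+
+    var semver = require('semver')
+
+    semver.validRange('~1.2.3')  // '>=1.2.3 <1.3.0'
+    semver.validRange('^0.2.3')  // '>=0.2.3 <0.3.0'
+    semver.validRange('1.2.x')   // '>=1.2.0 <1.3.0'
+    semver.validRange('1.2 - 2') // '>=1.2.0 <3.0.0'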
+ +Advanced ranges may be combined in the same way as primitive +comparators using white space or `||`. + +#### Hyphen Ranges `X.Y.Z - A.B.C` + +Specifies an inclusive set. + * `1.2.3 - 2.3.4` := `>=1.2.3 <=2.3.4` -* `~1.2.3` := `>=1.2.3-0 <1.3.0-0` "Reasonably close to `1.2.3`". When - using tilde operators, prerelease versions are supported as well, - but a prerelease of the next significant digit will NOT be - satisfactory, so `1.3.0-beta` will not satisfy `~1.2.3`. -* `^1.2.3` := `>=1.2.3-0 <2.0.0-0` "Compatible with `1.2.3`". When - using caret operators, anything from the specified version (including - prerelease) will be supported up to, but not including, the next - major version (or its prereleases). `1.5.1` will satisfy `^1.2.3`, - while `1.2.2` and `2.0.0-beta` will not. -* `^0.1.3` := `>=0.1.3-0 <0.2.0-0` "Compatible with `0.1.3`". `0.x.x` versions are - special: the first non-zero component indicates potentially breaking changes, - meaning the caret operator matches any version with the same first non-zero - component starting at the specified version. -* `^0.0.2` := `=0.0.2` "Only the version `0.0.2` is considered compatible" -* `~1.2` := `>=1.2.0-0 <1.3.0-0` "Any version starting with `1.2`" -* `^1.2` := `>=1.2.0-0 <2.0.0-0` "Any version compatible with `1.2`" -* `1.2.x` := `>=1.2.0-0 <1.3.0-0` "Any version starting with `1.2`" -* `1.2.*` Same as `1.2.x`. -* `1.2` Same as `1.2.x`. -* `~1` := `>=1.0.0-0 <2.0.0-0` "Any version starting with `1`" -* `^1` := `>=1.0.0-0 <2.0.0-0` "Any version compatible with `1`" -* `1.x` := `>=1.0.0-0 <2.0.0-0` "Any version starting with `1`" -* `1.*` Same as `1.x`. -* `1` Same as `1.x`. -* `*` Any version whatsoever. -* `x` Same as `*`. -* `""` (just an empty string) Same as `*`. - - -Ranges can be joined with either a space (which implies "and") or a -`||` (which implies "or"). + +If a partial version is provided as the first version in the inclusive +range, then the missing pieces are replaced with zeroes. + +* `1.2 - 2.3.4` := `>=1.2.0 <=2.3.4` + +If a partial version is provided as the second version in the +inclusive range, then all versions that start with the supplied parts +of the tuple are accepted, but nothing that would be greater than the +provided tuple parts. + +* `1.2.3 - 2.3` := `>=1.2.3 <2.4.0` +* `1.2.3 - 2` := `>=1.2.3 <3.0.0` + +#### X-Ranges `1.2.x` `1.X` `1.2.*` `*` + +Any of `X`, `x`, or `*` may be used to "stand in" for one of the +numeric values in the `[major, minor, patch]` tuple. + +* `*` := `>=0.0.0` (Any version satisfies) +* `1.x` := `>=1.0.0 <2.0.0` (Matching major version) +* `1.2.x` := `>=1.2.0 <1.3.0` (Matching major and minor versions) + +A partial version range is treated as an X-Range, so the special +character is in fact optional. + +* `""` (empty string) := `*` := `>=0.0.0` +* `1` := `1.x.x` := `>=1.0.0 <2.0.0` +* `1.2` := `1.2.x` := `>=1.2.0 <1.3.0` + +#### Tilde Ranges `~1.2.3` `~1.2` `~1` + +Allows patch-level changes if a minor version is specified on the +comparator. Allows minor-level changes if not. 
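+
+For instance, a brief sketch of how this plays out with the
+`satisfies` function (see "Functions" below), using the desugarings
+listed next:
+
+    var semver = require('semver')
+
+    semver.satisfies('1.2.8', '~1.2.3') // true: patch-level change
+    semver.satisfies('1.3.0', '~1.2.3') // false: minor-level change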
+ +* `~1.2.3` := `>=1.2.3 <1.(2+1).0` := `>=1.2.3 <1.3.0` +* `~1.2` := `>=1.2.0 <1.(2+1).0` := `>=1.2.0 <1.3.0` (Same as `1.2.x`) +* `~1` := `>=1.0.0 <(1+1).0.0` := `>=1.0.0 <2.0.0` (Same as `1.x`) +* `~0.2.3` := `>=0.2.3 <0.(2+1).0` := `>=0.2.3 <0.3.0` +* `~0.2` := `>=0.2.0 <0.(2+1).0` := `>=0.2.0 <0.3.0` (Same as `0.2.x`) +* `~0` := `>=0.0.0 <(0+1).0.0` := `>=0.0.0 <1.0.0` (Same as `0.x`) +* `~1.2.3-beta.2` := `>=1.2.3-beta.2 <1.3.0` Note that prereleases in + the `1.2.3` version will be allowed, if they are greater than or + equal to `beta.2`. So, `1.2.3-beta.4` would be allowed, but + `1.2.4-beta.2` would not, because it is a prerelease of a + different `[major, minor, patch]` tuple. + +Note: this is the same as the `~>` operator in rubygems. + +#### Caret Ranges `^1.2.3` `^0.2.5` `^0.0.4` + +Allows changes that do not modify the left-most non-zero digit in the +`[major, minor, patch]` tuple. In other words, this allows patch and +minor updates for versions `1.0.0` and above, patch updates for +versions `0.X >=0.1.0`, and *no* updates for versions `0.0.X`. + +Many authors treat a `0.x` version as if the `x` were the major +"breaking-change" indicator. + +Caret ranges are ideal when an author may make breaking changes +between `0.2.4` and `0.3.0` releases, which is a common practice. +However, it presumes that there will *not* be breaking changes between +`0.2.4` and `0.2.5`. It allows for changes that are presumed to be +additive (but non-breaking), according to commonly observed practices. + +* `^1.2.3` := `>=1.2.3 <2.0.0` +* `^0.2.3` := `>=0.2.3 <0.3.0` +* `^0.0.3` := `>=0.0.3 <0.0.4` +* `^1.2.3-beta.2` := `>=1.2.3-beta.2 <2.0.0` Note that prereleases in + the `1.2.3` version will be allowed, if they are greater than or + equal to `beta.2`. So, `1.2.3-beta.4` would be allowed, but + `1.2.4-beta.2` would not, because it is a prerelease of a + different `[major, minor, patch]` tuple. +* `^0.0.3-beta` := `>=0.0.3-beta <0.0.4` Note that prereleases in the + `0.0.3` version *only* will be allowed, if they are greater than or + equal to `beta`. So, `0.0.3-pr.2` would be allowed. + +When parsing caret ranges, a missing `patch` value desugars to the +number `0`, but will allow flexibility within that value, even if the +major and minor versions are both `0`. + +* `^1.2.x` := `>=1.2.0 <2.0.0` +* `^0.0.x` := `>=0.0.0 <0.1.0` +* `^0.0` := `>=0.0.0 <0.1.0` + +A missing `minor` and `patch` values will desugar to zero, but also +allow flexibility within those values, even if the major version is +zero. + +* `^1.x` := `>=1.0.0 <2.0.0` +* `^0.x` := `>=0.0.0 <1.0.0` ## Functions diff --git a/deps/npm/node_modules/semver/bin/semver b/deps/npm/node_modules/semver/bin/semver index 848420630b6..c5f2e857e82 100755 --- a/deps/npm/node_modules/semver/bin/semver +++ b/deps/npm/node_modules/semver/bin/semver @@ -12,6 +12,7 @@ var argv = process.argv.slice(2) , inc = null , version = require("../package.json").version , loose = false + , identifier = undefined , semver = require("../semver") , reverse = false @@ -47,6 +48,9 @@ function main () { break } break + case "--preid": + identifier = argv.shift() + break case "-r": case "--range": range.push(argv.shift()) break @@ -88,7 +92,7 @@ function success () { }).map(function (v) { return semver.clean(v, loose) }).map(function (v) { - return inc ? semver.inc(v, inc, loose) : v + return inc ? 
semver.inc(v, inc, loose, identifier) : v
   }).forEach(function (v,i,_) { console.log(v) })
 }
@@ -107,10 +111,14 @@ function help () {
       ,""
       ,"-i --increment [<level>]"
       ,"        Increment a version by the specified level.  Level can"
-      ,"        be one of: major, minor, patch, or prerelease"
-      ,"        Default level is 'patch'."
+      ,"        be one of: major, minor, patch, premajor, preminor,"
+      ,"        prepatch, or prerelease.  Default level is 'patch'."
       ,"        Only one version may be specified."
       ,""
+      ,"--preid <identifier>"
+      ,"        Identifier to be used to prefix premajor, preminor,"
+      ,"        prepatch or prerelease version increments."
+      ,""
       ,"-l --loose"
       ,"        Interpret versions and ranges loosely"
       ,""
diff --git a/deps/npm/node_modules/semver/foot.js b/deps/npm/node_modules/semver/foot.js.txt
similarity index 100%
rename from deps/npm/node_modules/semver/foot.js
rename to deps/npm/node_modules/semver/foot.js.txt
diff --git a/deps/npm/node_modules/semver/head.js b/deps/npm/node_modules/semver/head.js.txt
similarity index 100%
rename from deps/npm/node_modules/semver/head.js
rename to deps/npm/node_modules/semver/head.js.txt
diff --git a/deps/npm/node_modules/semver/package.json b/deps/npm/node_modules/semver/package.json
index b65d866c307..a22dc9737fd 100644
--- a/deps/npm/node_modules/semver/package.json
+++ b/deps/npm/node_modules/semver/package.json
@@ -1,6 +1,6 @@
 {
   "name": "semver",
-  "version": "2.3.0",
+  "version": "4.1.0",
   "description": "The semantic version parser used by npm.",
   "main": "semver.js",
   "browser": "semver.browser.js",
@@ -21,13 +21,35 @@
   "bin": {
     "semver": "./bin/semver"
   },
-  "readme": "semver(1) -- The semantic versioner for npm\n===========================================\n\n## Usage\n\n    $ npm install semver\n\n    semver.valid('1.2.3') // '1.2.3'\n    semver.valid('a.b.c') // null\n    semver.clean('  =v1.2.3   ') // '1.2.3'\n    semver.satisfies('1.2.3', '1.x || >=2.5.0 || 5.0.0 - 7.2.3') // true\n    semver.gt('1.2.3', '9.8.7') // false\n    semver.lt('1.2.3', '9.8.7') // true\n\nAs a command-line utility:\n\n    $ semver -h\n\n    Usage: semver [<version> [...]] [-r <range> | -i <inc> | -d <dec>]\n    Test if version(s) satisfy the supplied range(s), and sort them.\n\n    Multiple versions or ranges may be supplied, unless increment\n    or decrement options are specified. In that case, only a single\n    version may be used, and it is incremented by the specified level\n\n    Program exits successfully if any valid version satisfies\n    all supplied ranges, and prints all satisfying versions.\n\n    If no versions are valid, or ranges are not satisfied,\n    then exits failure.\n\n    Versions are printed in ascending order, so supplying\n    multiple versions to the utility will just sort them.\n\n## Versions\n\nA \"version\" is described by the `v2.0.0` specification found at\n<http://semver.org/>.\n\nA leading `\"=\"` or `\"v\"` character is stripped off and ignored.\n\n## Ranges\n\nThe following range styles are supported:\n\n* `1.2.3` A specific version. When nothing else will do. Must be a full\n  version number, with major, minor, and patch versions specified.\n  Note that build metadata is still ignored, so `1.2.3+build2012` will\n  satisfy this range.\n* `>1.2.3` Greater than a specific version.\n* `<1.2.3` Less than a specific version. If there is no prerelease\n  tag on the version range, then no prerelease version will be allowed\n  either, even though these are technically \"less than\".\n* `>=1.2.3` Greater than or equal to. Note that prerelease versions\n  are NOT equal to their \"normal\" equivalents, so `1.2.3-beta` will\n  not satisfy this range, but `2.3.0-beta` will.\n* `<=1.2.3` Less than or equal to. 
In this case, prerelease versions\n ARE allowed, so `1.2.3-beta` would satisfy.\n* `1.2.3 - 2.3.4` := `>=1.2.3 <=2.3.4`\n* `~1.2.3` := `>=1.2.3-0 <1.3.0-0` \"Reasonably close to `1.2.3`\". When\n using tilde operators, prerelease versions are supported as well,\n but a prerelease of the next significant digit will NOT be\n satisfactory, so `1.3.0-beta` will not satisfy `~1.2.3`.\n* `^1.2.3` := `>=1.2.3-0 <2.0.0-0` \"Compatible with `1.2.3`\". When\n using caret operators, anything from the specified version (including\n prerelease) will be supported up to, but not including, the next\n major version (or its prereleases). `1.5.1` will satisfy `^1.2.3`,\n while `1.2.2` and `2.0.0-beta` will not.\n* `^0.1.3` := `>=0.1.3-0 <0.2.0-0` \"Compatible with `0.1.3`\". `0.x.x` versions are\n special: the first non-zero component indicates potentially breaking changes,\n meaning the caret operator matches any version with the same first non-zero\n component starting at the specified version.\n* `^0.0.2` := `=0.0.2` \"Only the version `0.0.2` is considered compatible\"\n* `~1.2` := `>=1.2.0-0 <1.3.0-0` \"Any version starting with `1.2`\"\n* `^1.2` := `>=1.2.0-0 <2.0.0-0` \"Any version compatible with `1.2`\"\n* `1.2.x` := `>=1.2.0-0 <1.3.0-0` \"Any version starting with `1.2`\"\n* `1.2.*` Same as `1.2.x`.\n* `1.2` Same as `1.2.x`.\n* `~1` := `>=1.0.0-0 <2.0.0-0` \"Any version starting with `1`\"\n* `^1` := `>=1.0.0-0 <2.0.0-0` \"Any version compatible with `1`\"\n* `1.x` := `>=1.0.0-0 <2.0.0-0` \"Any version starting with `1`\"\n* `1.*` Same as `1.x`.\n* `1` Same as `1.x`.\n* `*` Any version whatsoever.\n* `x` Same as `*`.\n* `\"\"` (just an empty string) Same as `*`.\n\n\nRanges can be joined with either a space (which implies \"and\") or a\n`||` (which implies \"or\").\n\n## Functions\n\nAll methods and classes take a final `loose` boolean argument that, if\ntrue, will be more forgiving about not-quite-valid semver strings.\nThe resulting output will always be 100% strict, of course.\n\nStrict-mode Comparators and Ranges will be strict about the SemVer\nstrings that they parse.\n\n* `valid(v)`: Return the parsed version, or null if it's not valid.\n* `inc(v, release)`: Return the version incremented by the release\n type (`major`, `premajor`, `minor`, `preminor`, `patch`,\n `prepatch`, or `prerelease`), or null if it's not valid\n * `premajor` in one call will bump the version up to the next major\n version and down to a prerelease of that major version.\n `preminor`, and `prepatch` work the same way.\n * If called from a non-prerelease version, the `prerelease` will work the\n same as `prepatch`. It increments the patch version, then makes a\n prerelease. If the input version is already a prerelease it simply\n increments it.\n\n### Comparison\n\n* `gt(v1, v2)`: `v1 > v2`\n* `gte(v1, v2)`: `v1 >= v2`\n* `lt(v1, v2)`: `v1 < v2`\n* `lte(v1, v2)`: `v1 <= v2`\n* `eq(v1, v2)`: `v1 == v2` This is true if they're logically equivalent,\n even if they're not the exact same string. You already know how to\n compare strings.\n* `neq(v1, v2)`: `v1 != v2` The opposite of `eq`.\n* `cmp(v1, comparator, v2)`: Pass in a comparison string, and it'll call\n the corresponding function above. `\"===\"` and `\"!==\"` do simple\n string comparison, but are included for completeness. Throws if an\n invalid comparison string is provided.\n* `compare(v1, v2)`: Return `0` if `v1 == v2`, or `1` if `v1` is greater, or `-1` if\n `v2` is greater. 
Sorts in ascending order if passed to `Array.sort()`.\n* `rcompare(v1, v2)`: The reverse of compare. Sorts an array of versions\n in descending order when passed to `Array.sort()`.\n\n\n### Ranges\n\n* `validRange(range)`: Return the valid range or null if it's not valid\n* `satisfies(version, range)`: Return true if the version satisfies the\n range.\n* `maxSatisfying(versions, range)`: Return the highest version in the list\n that satisfies the range, or `null` if none of them do.\n* `gtr(version, range)`: Return `true` if version is greater than all the\n versions possible in the range.\n* `ltr(version, range)`: Return `true` if version is less than all the\n versions possible in the range.\n* `outside(version, range, hilo)`: Return true if the version is outside\n the bounds of the range in either the high or low direction. The\n `hilo` argument must be either the string `'>'` or `'<'`. (This is\n the function called by `gtr` and `ltr`.)\n\nNote that, since ranges may be non-contiguous, a version might not be\ngreater than a range, less than a range, *or* satisfy a range! For\nexample, the range `1.2 <1.2.9 || >2.0.0` would have a hole from `1.2.9`\nuntil `2.0.0`, so the version `1.2.10` would not be greater than the\nrange (because `2.0.1` satisfies, which is higher), nor less than the\nrange (since `1.2.8` satisfies, which is lower), and it also does not\nsatisfy the range.\n\nIf you want to know if a version satisfies or does not satisfy a\nrange, use the `satisfies(version, range)` function.\n", - "readmeFilename": "README.md", + "gitHead": "f8db569b9fd00788d14064aaf81854ed81e1337a", "bugs": { "url": "https://github.com/isaacs/node-semver/issues" }, "homepage": "https://github.com/isaacs/node-semver", - "_id": "semver@2.3.0", - "_shasum": "d31b2903ebe2a1806c05b8e763916a7183108a15", - "_from": "semver@latest" + "_id": "semver@4.1.0", + "_shasum": "bc80a9ff68532814362cc3cfda3c7b75ed9c321c", + "_from": "semver@>=4.1.0 <5.0.0", + "_npmVersion": "2.1.3", + "_nodeVersion": "0.10.31", + "_npmUser": { + "name": "isaacs", + "email": "i@izs.me" + }, + "maintainers": [ + { + "name": "isaacs", + "email": "i@izs.me" + }, + { + "name": "othiym23", + "email": "ogd@aoaioxxysz.net" + } + ], + "dist": { + "shasum": "bc80a9ff68532814362cc3cfda3c7b75ed9c321c", + "tarball": "http://registry.npmjs.org/semver/-/semver-4.1.0.tgz" + }, + "directories": {}, + "_resolved": "https://registry.npmjs.org/semver/-/semver-4.1.0.tgz", + "readme": "ERROR: No README data found!" } diff --git a/deps/npm/node_modules/semver/semver.browser.js b/deps/npm/node_modules/semver/semver.browser.js index 0f414c3d8d3..712de835cb7 100644 --- a/deps/npm/node_modules/semver/semver.browser.js +++ b/deps/npm/node_modules/semver/semver.browser.js @@ -128,18 +128,18 @@ var XRANGEPLAIN = R++; src[XRANGEPLAIN] = '[v=\\s]*(' + src[XRANGEIDENTIFIER] + ')' + '(?:\\.(' + src[XRANGEIDENTIFIER] + ')' + '(?:\\.(' + src[XRANGEIDENTIFIER] + ')' + - '(?:(' + src[PRERELEASE] + ')' + - ')?)?)?'; + '(?:' + src[PRERELEASE] + ')?' + + src[BUILD] + '?' + + ')?)?'; var XRANGEPLAINLOOSE = R++; src[XRANGEPLAINLOOSE] = '[v=\\s]*(' + src[XRANGEIDENTIFIERLOOSE] + ')' + '(?:\\.(' + src[XRANGEIDENTIFIERLOOSE] + ')' + '(?:\\.(' + src[XRANGEIDENTIFIERLOOSE] + ')' + - '(?:(' + src[PRERELEASELOOSE] + ')' + - ')?)?)?'; + '(?:' + src[PRERELEASELOOSE] + ')?' + + src[BUILD] + '?' + + ')?)?'; -// >=2.x, for example, means >=2.0.0-0 -// <1.x would be the same as "<1.0.0-0", though. 
var XRANGE = R++; src[XRANGE] = '^' + src[GTLT] + '\\s*' + src[XRANGEPLAIN] + '$'; var XRANGELOOSE = R++; @@ -236,7 +236,7 @@ function valid(version, loose) { exports.clean = clean; function clean(version, loose) { - var s = parse(version, loose); + var s = parse(version.trim().replace(/^[=v]+/, ''), loose); return s ? s.version : null; } @@ -345,32 +345,55 @@ SemVer.prototype.comparePre = function(other) { // preminor will bump the version up to the next minor release, and immediately // down to pre-release. premajor and prepatch work the same way. -SemVer.prototype.inc = function(release) { +SemVer.prototype.inc = function(release, identifier) { switch (release) { case 'premajor': - this.inc('major'); - this.inc('pre'); + this.prerelease.length = 0; + this.patch = 0; + this.minor = 0; + this.major++; + this.inc('pre', identifier); break; case 'preminor': - this.inc('minor'); - this.inc('pre'); + this.prerelease.length = 0; + this.patch = 0; + this.minor++; + this.inc('pre', identifier); break; case 'prepatch': - this.inc('patch'); - this.inc('pre'); + // If this is already a prerelease, it will bump to the next version + // drop any prereleases that might already exist, since they are not + // relevant at this point. + this.prerelease.length = 0; + this.inc('patch', identifier); + this.inc('pre', identifier); break; // If the input is a non-prerelease version, this acts the same as // prepatch. case 'prerelease': if (this.prerelease.length === 0) - this.inc('patch'); - this.inc('pre'); + this.inc('patch', identifier); + this.inc('pre', identifier); break; + case 'major': - this.major++; - this.minor = -1; + // If this is a pre-major version, bump up to the same major version. + // Otherwise increment major. + // 1.0.0-5 bumps to 1.0.0 + // 1.1.0 bumps to 2.0.0 + if (this.minor !== 0 || this.patch !== 0 || this.prerelease.length === 0) + this.major++; + this.minor = 0; + this.patch = 0; + this.prerelease = []; + break; case 'minor': - this.minor++; + // If this is a pre-minor version, bump up to the same minor version. + // Otherwise increment minor. + // 1.2.0-5 bumps to 1.2.0 + // 1.2.1 bumps to 1.3.0 + if (this.patch !== 0 || this.prerelease.length === 0) + this.minor++; this.patch = 0; this.prerelease = []; break; @@ -383,7 +406,7 @@ SemVer.prototype.inc = function(release) { this.patch++; this.prerelease = []; break; - // This probably shouldn't be used publically. + // This probably shouldn't be used publicly. // 1.0.0 "pre" would become 1.0.0-0 which is the wrong direction. 
case 'pre': if (this.prerelease.length === 0) @@ -399,6 +422,15 @@ SemVer.prototype.inc = function(release) { if (i === -1) // didn't increment anything this.prerelease.push(0); } + if (identifier) { + // 1.2.0-beta.1 bumps to 1.2.0-beta.2, + // 1.2.0-beta.fooblz or 1.2.0-beta bumps to 1.2.0-beta.0 + if (this.prerelease[0] === identifier) { + if (isNaN(this.prerelease[1])) + this.prerelease = [identifier, 0]; + } else + this.prerelease = [identifier, 0]; + } break; default: @@ -409,9 +441,14 @@ SemVer.prototype.inc = function(release) { }; exports.inc = inc; -function inc(version, release, loose) { +function inc(version, release, loose, identifier) { + if (typeof(loose) === 'string') { + identifier = loose; + loose = undefined; + } + try { - return new SemVer(version, loose).inc(release).version; + return new SemVer(version, loose).inc(release, identifier).version; } catch (er) { return null; } @@ -504,8 +541,16 @@ exports.cmp = cmp; function cmp(a, op, b, loose) { var ret; switch (op) { - case '===': ret = a === b; break; - case '!==': ret = a !== b; break; + case '===': + if (typeof a === 'object') a = a.version; + if (typeof b === 'object') b = b.version; + ret = a === b; + break; + case '!==': + if (typeof a === 'object') a = a.version; + if (typeof b === 'object') b = b.version; + ret = a !== b; + break; case '': case '=': case '==': ret = eq(a, b, loose); break; case '!=': ret = neq(a, b, loose); break; case '>': ret = gt(a, b, loose); break; @@ -537,6 +582,8 @@ function Comparator(comp, loose) { this.value = ''; else this.value = this.operator + this.semver.version; + + ; } var ANY = {}; @@ -548,24 +595,14 @@ Comparator.prototype.parse = function(comp) { throw new TypeError('Invalid comparator: ' + comp); this.operator = m[1]; + if (this.operator === '=') + this.operator = ''; + // if it literally is just '>' or '' then allow anything. if (!m[2]) this.semver = ANY; - else { + else this.semver = new SemVer(m[2], this.loose); - - // <1.2.3-rc DOES allow 1.2.3-beta (has prerelease) - // >=1.2.3 DOES NOT allow 1.2.3-beta - // <=1.2.3 DOES allow 1.2.3-beta - // However, <1.2.3 does NOT allow 1.2.3-beta, - // even though `1.2.3-beta < 1.2.3` - // The assumption is that the 1.2.3 version has something you - // *don't* want, so we push the prerelease down to the minimum. - if (this.operator === '<' && !this.semver.prerelease.length) { - this.semver.prerelease = ['0']; - this.semver.format(); - } - } }; Comparator.prototype.inspect = function() { @@ -578,8 +615,14 @@ Comparator.prototype.toString = function() { Comparator.prototype.test = function(version) { ; - return (this.semver === ANY) ? true : - cmp(version, this.operator, this.semver, this.loose); + + if (this.semver === ANY) + return true; + + if (typeof version === 'string') + version = new SemVer(version, this.loose); + + return cmp(version, this.operator, this.semver, this.loose); }; @@ -716,20 +759,20 @@ function replaceTilde(comp, loose) { if (isX(M)) ret = ''; else if (isX(m)) - ret = '>=' + M + '.0.0-0 <' + (+M + 1) + '.0.0-0'; + ret = '>=' + M + '.0.0 <' + (+M + 1) + '.0.0'; else if (isX(p)) // ~1.2 == >=1.2.0- <1.3.0- - ret = '>=' + M + '.' + m + '.0-0 <' + M + '.' + (+m + 1) + '.0-0'; + ret = '>=' + M + '.' + m + '.0 <' + M + '.' + (+m + 1) + '.0'; else if (pr) { ; if (pr.charAt(0) !== '-') pr = '-' + pr; ret = '>=' + M + '.' + m + '.' + p + pr + - ' <' + M + '.' + (+m + 1) + '.0-0'; + ' <' + M + '.' + (+m + 1) + '.0'; } else - // ~1.2.3 == >=1.2.3-0 <1.3.0-0 - ret = '>=' + M + '.' + m + '.' + p + '-0' + - ' <' + M + '.' 
+ (+m + 1) + '.0-0'; + // ~1.2.3 == >=1.2.3 <1.3.0 + ret = '>=' + M + '.' + m + '.' + p + + ' <' + M + '.' + (+m + 1) + '.0'; ; return ret; @@ -749,6 +792,7 @@ function replaceCarets(comp, loose) { } function replaceCaret(comp, loose) { + ; var r = loose ? re[CARETLOOSE] : re[CARET]; return comp.replace(r, function(_, M, m, p, pr) { ; @@ -757,35 +801,38 @@ function replaceCaret(comp, loose) { if (isX(M)) ret = ''; else if (isX(m)) - ret = '>=' + M + '.0.0-0 <' + (+M + 1) + '.0.0-0'; + ret = '>=' + M + '.0.0 <' + (+M + 1) + '.0.0'; else if (isX(p)) { if (M === '0') - ret = '>=' + M + '.' + m + '.0-0 <' + M + '.' + (+m + 1) + '.0-0'; + ret = '>=' + M + '.' + m + '.0 <' + M + '.' + (+m + 1) + '.0'; else - ret = '>=' + M + '.' + m + '.0-0 <' + (+M + 1) + '.0.0-0'; + ret = '>=' + M + '.' + m + '.0 <' + (+M + 1) + '.0.0'; } else if (pr) { ; if (pr.charAt(0) !== '-') pr = '-' + pr; if (M === '0') { if (m === '0') - ret = '=' + M + '.' + m + '.' + p + pr; + ret = '>=' + M + '.' + m + '.' + p + pr + + ' <' + M + '.' + m + '.' + (+p + 1); else ret = '>=' + M + '.' + m + '.' + p + pr + - ' <' + M + '.' + (+m + 1) + '.0-0'; + ' <' + M + '.' + (+m + 1) + '.0'; } else ret = '>=' + M + '.' + m + '.' + p + pr + - ' <' + (+M + 1) + '.0.0-0'; + ' <' + (+M + 1) + '.0.0'; } else { + ; if (M === '0') { if (m === '0') - ret = '=' + M + '.' + m + '.' + p; + ret = '>=' + M + '.' + m + '.' + p + + ' <' + M + '.' + m + '.' + (+p + 1); else - ret = '>=' + M + '.' + m + '.' + p + '-0' + - ' <' + M + '.' + (+m + 1) + '.0-0'; + ret = '>=' + M + '.' + m + '.' + p + + ' <' + M + '.' + (+m + 1) + '.0'; } else - ret = '>=' + M + '.' + m + '.' + p + '-0' + - ' <' + (+M + 1) + '.0.0-0'; + ret = '>=' + M + '.' + m + '.' + p + + ' <' + (+M + 1) + '.0.0'; } ; @@ -813,23 +860,27 @@ function replaceXRange(comp, loose) { if (gtlt === '=' && anyX) gtlt = ''; - if (gtlt && anyX) { - // replace X with 0, and then append the -0 min-prerelease - if (xM) - M = 0; + if (xM) { + if (gtlt === '>' || gtlt === '<') { + // nothing is allowed + ret = '<0.0.0'; + } else { + // nothing is forbidden + ret = '*'; + } + } else if (gtlt && anyX) { + // replace X with 0 if (xm) m = 0; if (xp) p = 0; if (gtlt === '>') { - // >1 => >=2.0.0-0 - // >1.2 => >=1.3.0-0 - // >1.2.3 => >= 1.2.4-0 + // >1 => >=2.0.0 + // >1.2 => >=1.3.0 + // >1.2.3 => >= 1.2.4 gtlt = '>='; - if (xM) { - // no change - } else if (xm) { + if (xm) { M = +M + 1; m = 0; p = 0; @@ -837,20 +888,21 @@ function replaceXRange(comp, loose) { m = +m + 1; p = 0; } + } else if (gtlt === '<=') { + // <=0.7.x is actually <0.8.0, since any 0.7.x should + // pass. Similarly, <=7.x is actually <8.0.0, etc. + gtlt = '<' + if (xm) + M = +M + 1 + else + m = +m + 1 } - - ret = gtlt + M + '.' + m + '.' + p + '-0'; - } else if (xM) { - // allow any - ret = '*'; + ret = gtlt + M + '.' + m + '.' + p; } else if (xm) { - // append '-0' onto the version, otherwise - // '1.x.x' matches '2.0.0-beta', since the tag - // *lowers* the version value - ret = '>=' + M + '.0.0-0 <' + (+M + 1) + '.0.0-0'; + ret = '>=' + M + '.0.0 <' + (+M + 1) + '.0.0'; } else if (xp) { - ret = '>=' + M + '.' + m + '.0-0 <' + M + '.' + (+m + 1) + '.0-0'; + ret = '>=' + M + '.' + m + '.0 <' + M + '.' 
+ (+m + 1) + '.0'; } ; @@ -869,9 +921,9 @@ function replaceStars(comp, loose) { // This function is passed to string.replace(re[HYPHENRANGE]) // M, m, patch, prerelease, build -// 1.2 - 3.4.5 => >=1.2.0-0 <=3.4.5 -// 1.2.3 - 3.4 => >=1.2.0-0 <3.5.0-0 Any 3.4.x will do -// 1.2 - 3.4 => >=1.2.0-0 <3.5.0-0 +// 1.2 - 3.4.5 => >=1.2.0 <=3.4.5 +// 1.2.3 - 3.4 => >=1.2.0 <3.5.0 Any 3.4.x will do +// 1.2 - 3.4 => >=1.2.0 <3.5.0 function hyphenReplace($0, from, fM, fm, fp, fpr, fb, to, tM, tm, tp, tpr, tb) { @@ -879,18 +931,18 @@ function hyphenReplace($0, if (isX(fM)) from = ''; else if (isX(fm)) - from = '>=' + fM + '.0.0-0'; + from = '>=' + fM + '.0.0'; else if (isX(fp)) - from = '>=' + fM + '.' + fm + '.0-0'; + from = '>=' + fM + '.' + fm + '.0'; else from = '>=' + from; if (isX(tM)) to = ''; else if (isX(tm)) - to = '<' + (+tM + 1) + '.0.0-0'; + to = '<' + (+tM + 1) + '.0.0'; else if (isX(tp)) - to = '<' + tM + '.' + (+tm + 1) + '.0-0'; + to = '<' + tM + '.' + (+tm + 1) + '.0'; else if (tpr) to = '<=' + tM + '.' + tm + '.' + tp + '-' + tpr; else @@ -904,6 +956,10 @@ function hyphenReplace($0, Range.prototype.test = function(version) { if (!version) return false; + + if (typeof version === 'string') + version = new SemVer(version, this.loose); + for (var i = 0; i < this.set.length; i++) { if (testSet(this.set[i], version)) return true; @@ -916,6 +972,31 @@ function testSet(set, version) { if (!set[i].test(version)) return false; } + + if (version.prerelease.length) { + // Find the set of versions that are allowed to have prereleases + // For example, ^1.2.3-pr.1 desugars to >=1.2.3-pr.1 <2.0.0 + // That should allow `1.2.3-pr.2` to pass. + // However, `1.2.4-alpha.notready` should NOT be allowed, + // even though it's within the range set by the comparators. + for (var i = 0; i < set.length; i++) { + ; + if (set[i].semver === ANY) + return true; + + if (set[i].semver.prerelease.length > 0) { + var allowed = set[i].semver; + if (allowed.major === version.major && + allowed.minor === version.minor && + allowed.patch === version.patch) + return true; + } + } + + // Version has a -pre, but it's not one of the ones we like. + return false; + } + return true; } diff --git a/deps/npm/node_modules/semver/semver.browser.js.gz b/deps/npm/node_modules/semver/semver.browser.js.gz index 2b07bae519b..e3066055506 100644 Binary files a/deps/npm/node_modules/semver/semver.browser.js.gz and b/deps/npm/node_modules/semver/semver.browser.js.gz differ diff --git a/deps/npm/node_modules/semver/semver.js b/deps/npm/node_modules/semver/semver.js index a7385b41c51..22673fdd193 100644 --- a/deps/npm/node_modules/semver/semver.js +++ b/deps/npm/node_modules/semver/semver.js @@ -138,18 +138,18 @@ var XRANGEPLAIN = R++; src[XRANGEPLAIN] = '[v=\\s]*(' + src[XRANGEIDENTIFIER] + ')' + '(?:\\.(' + src[XRANGEIDENTIFIER] + ')' + '(?:\\.(' + src[XRANGEIDENTIFIER] + ')' + - '(?:(' + src[PRERELEASE] + ')' + - ')?)?)?'; + '(?:' + src[PRERELEASE] + ')?' + + src[BUILD] + '?' + + ')?)?'; var XRANGEPLAINLOOSE = R++; src[XRANGEPLAINLOOSE] = '[v=\\s]*(' + src[XRANGEIDENTIFIERLOOSE] + ')' + '(?:\\.(' + src[XRANGEIDENTIFIERLOOSE] + ')' + '(?:\\.(' + src[XRANGEIDENTIFIERLOOSE] + ')' + - '(?:(' + src[PRERELEASELOOSE] + ')' + - ')?)?)?'; + '(?:' + src[PRERELEASELOOSE] + ')?' + + src[BUILD] + '?' + + ')?)?'; -// >=2.x, for example, means >=2.0.0-0 -// <1.x would be the same as "<1.0.0-0", though. 
var XRANGE = R++; src[XRANGE] = '^' + src[GTLT] + '\\s*' + src[XRANGEPLAIN] + '$'; var XRANGELOOSE = R++; @@ -246,7 +246,7 @@ function valid(version, loose) { exports.clean = clean; function clean(version, loose) { - var s = parse(version, loose); + var s = parse(version.trim().replace(/^[=v]+/, ''), loose); return s ? s.version : null; } @@ -355,32 +355,55 @@ SemVer.prototype.comparePre = function(other) { // preminor will bump the version up to the next minor release, and immediately // down to pre-release. premajor and prepatch work the same way. -SemVer.prototype.inc = function(release) { +SemVer.prototype.inc = function(release, identifier) { switch (release) { case 'premajor': - this.inc('major'); - this.inc('pre'); + this.prerelease.length = 0; + this.patch = 0; + this.minor = 0; + this.major++; + this.inc('pre', identifier); break; case 'preminor': - this.inc('minor'); - this.inc('pre'); + this.prerelease.length = 0; + this.patch = 0; + this.minor++; + this.inc('pre', identifier); break; case 'prepatch': - this.inc('patch'); - this.inc('pre'); + // If this is already a prerelease, it will bump to the next version + // drop any prereleases that might already exist, since they are not + // relevant at this point. + this.prerelease.length = 0; + this.inc('patch', identifier); + this.inc('pre', identifier); break; // If the input is a non-prerelease version, this acts the same as // prepatch. case 'prerelease': if (this.prerelease.length === 0) - this.inc('patch'); - this.inc('pre'); + this.inc('patch', identifier); + this.inc('pre', identifier); break; + case 'major': - this.major++; - this.minor = -1; + // If this is a pre-major version, bump up to the same major version. + // Otherwise increment major. + // 1.0.0-5 bumps to 1.0.0 + // 1.1.0 bumps to 2.0.0 + if (this.minor !== 0 || this.patch !== 0 || this.prerelease.length === 0) + this.major++; + this.minor = 0; + this.patch = 0; + this.prerelease = []; + break; case 'minor': - this.minor++; + // If this is a pre-minor version, bump up to the same minor version. + // Otherwise increment minor. + // 1.2.0-5 bumps to 1.2.0 + // 1.2.1 bumps to 1.3.0 + if (this.patch !== 0 || this.prerelease.length === 0) + this.minor++; this.patch = 0; this.prerelease = []; break; @@ -393,7 +416,7 @@ SemVer.prototype.inc = function(release) { this.patch++; this.prerelease = []; break; - // This probably shouldn't be used publically. + // This probably shouldn't be used publicly. // 1.0.0 "pre" would become 1.0.0-0 which is the wrong direction. 
case 'pre': if (this.prerelease.length === 0) @@ -409,6 +432,15 @@ SemVer.prototype.inc = function(release) { if (i === -1) // didn't increment anything this.prerelease.push(0); } + if (identifier) { + // 1.2.0-beta.1 bumps to 1.2.0-beta.2, + // 1.2.0-beta.fooblz or 1.2.0-beta bumps to 1.2.0-beta.0 + if (this.prerelease[0] === identifier) { + if (isNaN(this.prerelease[1])) + this.prerelease = [identifier, 0]; + } else + this.prerelease = [identifier, 0]; + } break; default: @@ -419,9 +451,14 @@ SemVer.prototype.inc = function(release) { }; exports.inc = inc; -function inc(version, release, loose) { +function inc(version, release, loose, identifier) { + if (typeof(loose) === 'string') { + identifier = loose; + loose = undefined; + } + try { - return new SemVer(version, loose).inc(release).version; + return new SemVer(version, loose).inc(release, identifier).version; } catch (er) { return null; } @@ -514,8 +551,16 @@ exports.cmp = cmp; function cmp(a, op, b, loose) { var ret; switch (op) { - case '===': ret = a === b; break; - case '!==': ret = a !== b; break; + case '===': + if (typeof a === 'object') a = a.version; + if (typeof b === 'object') b = b.version; + ret = a === b; + break; + case '!==': + if (typeof a === 'object') a = a.version; + if (typeof b === 'object') b = b.version; + ret = a !== b; + break; case '': case '=': case '==': ret = eq(a, b, loose); break; case '!=': ret = neq(a, b, loose); break; case '>': ret = gt(a, b, loose); break; @@ -547,6 +592,8 @@ function Comparator(comp, loose) { this.value = ''; else this.value = this.operator + this.semver.version; + + debug('comp', this); } var ANY = {}; @@ -558,24 +605,14 @@ Comparator.prototype.parse = function(comp) { throw new TypeError('Invalid comparator: ' + comp); this.operator = m[1]; + if (this.operator === '=') + this.operator = ''; + // if it literally is just '>' or '' then allow anything. if (!m[2]) this.semver = ANY; - else { + else this.semver = new SemVer(m[2], this.loose); - - // <1.2.3-rc DOES allow 1.2.3-beta (has prerelease) - // >=1.2.3 DOES NOT allow 1.2.3-beta - // <=1.2.3 DOES allow 1.2.3-beta - // However, <1.2.3 does NOT allow 1.2.3-beta, - // even though `1.2.3-beta < 1.2.3` - // The assumption is that the 1.2.3 version has something you - // *don't* want, so we push the prerelease down to the minimum. - if (this.operator === '<' && !this.semver.prerelease.length) { - this.semver.prerelease = ['0']; - this.semver.format(); - } - } }; Comparator.prototype.inspect = function() { @@ -588,8 +625,14 @@ Comparator.prototype.toString = function() { Comparator.prototype.test = function(version) { debug('Comparator.test', version, this.loose); - return (this.semver === ANY) ? true : - cmp(version, this.operator, this.semver, this.loose); + + if (this.semver === ANY) + return true; + + if (typeof version === 'string') + version = new SemVer(version, this.loose); + + return cmp(version, this.operator, this.semver, this.loose); }; @@ -726,20 +769,20 @@ function replaceTilde(comp, loose) { if (isX(M)) ret = ''; else if (isX(m)) - ret = '>=' + M + '.0.0-0 <' + (+M + 1) + '.0.0-0'; + ret = '>=' + M + '.0.0 <' + (+M + 1) + '.0.0'; else if (isX(p)) // ~1.2 == >=1.2.0- <1.3.0- - ret = '>=' + M + '.' + m + '.0-0 <' + M + '.' + (+m + 1) + '.0-0'; + ret = '>=' + M + '.' + m + '.0 <' + M + '.' + (+m + 1) + '.0'; else if (pr) { debug('replaceTilde pr', pr); if (pr.charAt(0) !== '-') pr = '-' + pr; ret = '>=' + M + '.' + m + '.' + p + pr + - ' <' + M + '.' + (+m + 1) + '.0-0'; + ' <' + M + '.' 
+ (+m + 1) + '.0'; } else - // ~1.2.3 == >=1.2.3-0 <1.3.0-0 - ret = '>=' + M + '.' + m + '.' + p + '-0' + - ' <' + M + '.' + (+m + 1) + '.0-0'; + // ~1.2.3 == >=1.2.3 <1.3.0 + ret = '>=' + M + '.' + m + '.' + p + + ' <' + M + '.' + (+m + 1) + '.0'; debug('tilde return', ret); return ret; @@ -759,6 +802,7 @@ function replaceCarets(comp, loose) { } function replaceCaret(comp, loose) { + debug('caret', comp, loose); var r = loose ? re[CARETLOOSE] : re[CARET]; return comp.replace(r, function(_, M, m, p, pr) { debug('caret', comp, _, M, m, p, pr); @@ -767,35 +811,38 @@ function replaceCaret(comp, loose) { if (isX(M)) ret = ''; else if (isX(m)) - ret = '>=' + M + '.0.0-0 <' + (+M + 1) + '.0.0-0'; + ret = '>=' + M + '.0.0 <' + (+M + 1) + '.0.0'; else if (isX(p)) { if (M === '0') - ret = '>=' + M + '.' + m + '.0-0 <' + M + '.' + (+m + 1) + '.0-0'; + ret = '>=' + M + '.' + m + '.0 <' + M + '.' + (+m + 1) + '.0'; else - ret = '>=' + M + '.' + m + '.0-0 <' + (+M + 1) + '.0.0-0'; + ret = '>=' + M + '.' + m + '.0 <' + (+M + 1) + '.0.0'; } else if (pr) { debug('replaceCaret pr', pr); if (pr.charAt(0) !== '-') pr = '-' + pr; if (M === '0') { if (m === '0') - ret = '=' + M + '.' + m + '.' + p + pr; + ret = '>=' + M + '.' + m + '.' + p + pr + + ' <' + M + '.' + m + '.' + (+p + 1); else ret = '>=' + M + '.' + m + '.' + p + pr + - ' <' + M + '.' + (+m + 1) + '.0-0'; + ' <' + M + '.' + (+m + 1) + '.0'; } else ret = '>=' + M + '.' + m + '.' + p + pr + - ' <' + (+M + 1) + '.0.0-0'; + ' <' + (+M + 1) + '.0.0'; } else { + debug('no pr'); if (M === '0') { if (m === '0') - ret = '=' + M + '.' + m + '.' + p; + ret = '>=' + M + '.' + m + '.' + p + + ' <' + M + '.' + m + '.' + (+p + 1); else - ret = '>=' + M + '.' + m + '.' + p + '-0' + - ' <' + M + '.' + (+m + 1) + '.0-0'; + ret = '>=' + M + '.' + m + '.' + p + + ' <' + M + '.' + (+m + 1) + '.0'; } else - ret = '>=' + M + '.' + m + '.' + p + '-0' + - ' <' + (+M + 1) + '.0.0-0'; + ret = '>=' + M + '.' + m + '.' + p + + ' <' + (+M + 1) + '.0.0'; } debug('caret return', ret); @@ -823,23 +870,27 @@ function replaceXRange(comp, loose) { if (gtlt === '=' && anyX) gtlt = ''; - if (gtlt && anyX) { - // replace X with 0, and then append the -0 min-prerelease - if (xM) - M = 0; + if (xM) { + if (gtlt === '>' || gtlt === '<') { + // nothing is allowed + ret = '<0.0.0'; + } else { + // nothing is forbidden + ret = '*'; + } + } else if (gtlt && anyX) { + // replace X with 0 if (xm) m = 0; if (xp) p = 0; if (gtlt === '>') { - // >1 => >=2.0.0-0 - // >1.2 => >=1.3.0-0 - // >1.2.3 => >= 1.2.4-0 + // >1 => >=2.0.0 + // >1.2 => >=1.3.0 + // >1.2.3 => >= 1.2.4 gtlt = '>='; - if (xM) { - // no change - } else if (xm) { + if (xm) { M = +M + 1; m = 0; p = 0; @@ -847,20 +898,21 @@ function replaceXRange(comp, loose) { m = +m + 1; p = 0; } + } else if (gtlt === '<=') { + // <=0.7.x is actually <0.8.0, since any 0.7.x should + // pass. Similarly, <=7.x is actually <8.0.0, etc. + gtlt = '<' + if (xm) + M = +M + 1 + else + m = +m + 1 } - - ret = gtlt + M + '.' + m + '.' + p + '-0'; - } else if (xM) { - // allow any - ret = '*'; + ret = gtlt + M + '.' + m + '.' + p; } else if (xm) { - // append '-0' onto the version, otherwise - // '1.x.x' matches '2.0.0-beta', since the tag - // *lowers* the version value - ret = '>=' + M + '.0.0-0 <' + (+M + 1) + '.0.0-0'; + ret = '>=' + M + '.0.0 <' + (+M + 1) + '.0.0'; } else if (xp) { - ret = '>=' + M + '.' + m + '.0-0 <' + M + '.' + (+m + 1) + '.0-0'; + ret = '>=' + M + '.' + m + '.0 <' + M + '.' 
+ (+m + 1) + '.0'; } debug('xRange return', ret); @@ -879,9 +931,9 @@ function replaceStars(comp, loose) { // This function is passed to string.replace(re[HYPHENRANGE]) // M, m, patch, prerelease, build -// 1.2 - 3.4.5 => >=1.2.0-0 <=3.4.5 -// 1.2.3 - 3.4 => >=1.2.0-0 <3.5.0-0 Any 3.4.x will do -// 1.2 - 3.4 => >=1.2.0-0 <3.5.0-0 +// 1.2 - 3.4.5 => >=1.2.0 <=3.4.5 +// 1.2.3 - 3.4 => >=1.2.0 <3.5.0 Any 3.4.x will do +// 1.2 - 3.4 => >=1.2.0 <3.5.0 function hyphenReplace($0, from, fM, fm, fp, fpr, fb, to, tM, tm, tp, tpr, tb) { @@ -889,18 +941,18 @@ function hyphenReplace($0, if (isX(fM)) from = ''; else if (isX(fm)) - from = '>=' + fM + '.0.0-0'; + from = '>=' + fM + '.0.0'; else if (isX(fp)) - from = '>=' + fM + '.' + fm + '.0-0'; + from = '>=' + fM + '.' + fm + '.0'; else from = '>=' + from; if (isX(tM)) to = ''; else if (isX(tm)) - to = '<' + (+tM + 1) + '.0.0-0'; + to = '<' + (+tM + 1) + '.0.0'; else if (isX(tp)) - to = '<' + tM + '.' + (+tm + 1) + '.0-0'; + to = '<' + tM + '.' + (+tm + 1) + '.0'; else if (tpr) to = '<=' + tM + '.' + tm + '.' + tp + '-' + tpr; else @@ -914,6 +966,10 @@ function hyphenReplace($0, Range.prototype.test = function(version) { if (!version) return false; + + if (typeof version === 'string') + version = new SemVer(version, this.loose); + for (var i = 0; i < this.set.length; i++) { if (testSet(this.set[i], version)) return true; @@ -926,6 +982,31 @@ function testSet(set, version) { if (!set[i].test(version)) return false; } + + if (version.prerelease.length) { + // Find the set of versions that are allowed to have prereleases + // For example, ^1.2.3-pr.1 desugars to >=1.2.3-pr.1 <2.0.0 + // That should allow `1.2.3-pr.2` to pass. + // However, `1.2.4-alpha.notready` should NOT be allowed, + // even though it's within the range set by the comparators. + for (var i = 0; i < set.length; i++) { + debug(set[i].semver); + if (set[i].semver === ANY) + return true; + + if (set[i].semver.prerelease.length > 0) { + var allowed = set[i].semver; + if (allowed.major === version.major && + allowed.minor === version.minor && + allowed.patch === version.patch) + return true; + } + } + + // Version has a -pre, but it's not one of the ones we like. 
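The comment above is implemented by the `return false` just below it. Taken together with the tilde/caret/x-range rewrites earlier in this file, the user-visible effect is easiest to see as a few calls; expected outputs follow the hunk comments and the `valid range test` fixtures later in this diff, and again assume the patched module is on the require path.

```js
var semver = require('semver');

// Desugared ranges no longer carry the old '-0' minimum-prerelease suffix:
semver.validRange('~1.2.3');  // '>=1.2.3 <1.3.0'
semver.validRange('^0.0.1');  // '>=0.0.1 <0.0.2'  (previously '=0.0.1')
semver.validRange('<=0.7.x'); // '<0.8.0' — any 0.7.x should pass

// A prerelease version satisfies a range only when some comparator in
// the range carries a prerelease on the same [major, minor, patch]:
semver.satisfies('1.2.3-pr.2', '^1.2.3-pr.1');           // true
semver.satisfies('1.2.4-alpha.notready', '^1.2.3-pr.1'); // false
semver.satisfies('1.2.3-beta', '^1.2.3');                // false
```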
+ return false; + } + return true; } diff --git a/deps/npm/node_modules/semver/semver.min.js b/deps/npm/node_modules/semver/semver.min.js index 66e13b86332..56c9249e1ca 100644 --- a/deps/npm/node_modules/semver/semver.min.js +++ b/deps/npm/node_modules/semver/semver.min.js @@ -1 +1 @@ -(function(e){if(typeof module==="object"&&module.exports===e)e=module.exports=H;e.SEMVER_SPEC_VERSION="2.0.0";var r=e.re=[];var t=e.src=[];var n=0;var i=n++;t[i]="0|[1-9]\\d*";var s=n++;t[s]="[0-9]+";var a=n++;t[a]="\\d*[a-zA-Z-][a-zA-Z0-9-]*";var o=n++;t[o]="("+t[i]+")\\."+"("+t[i]+")\\."+"("+t[i]+")";var f=n++;t[f]="("+t[s]+")\\."+"("+t[s]+")\\."+"("+t[s]+")";var u=n++;t[u]="(?:"+t[i]+"|"+t[a]+")";var c=n++;t[c]="(?:"+t[s]+"|"+t[a]+")";var l=n++;t[l]="(?:-("+t[u]+"(?:\\."+t[u]+")*))";var p=n++;t[p]="(?:-?("+t[c]+"(?:\\."+t[c]+")*))";var h=n++;t[h]="[0-9A-Za-z-]+";var v=n++;t[v]="(?:\\+("+t[h]+"(?:\\."+t[h]+")*))";var m=n++;var g="v?"+t[o]+t[l]+"?"+t[v]+"?";t[m]="^"+g+"$";var w="[v=\\s]*"+t[f]+t[p]+"?"+t[v]+"?";var d=n++;t[d]="^"+w+"$";var y=n++;t[y]="((?:<|>)?=?)";var b=n++;t[b]=t[s]+"|x|X|\\*";var $=n++;t[$]=t[i]+"|x|X|\\*";var j=n++;t[j]="[v=\\s]*("+t[$]+")"+"(?:\\.("+t[$]+")"+"(?:\\.("+t[$]+")"+"(?:("+t[l]+")"+")?)?)?";var k=n++;t[k]="[v=\\s]*("+t[b]+")"+"(?:\\.("+t[b]+")"+"(?:\\.("+t[b]+")"+"(?:("+t[p]+")"+")?)?)?";var E=n++;t[E]="^"+t[y]+"\\s*"+t[j]+"$";var x=n++;t[x]="^"+t[y]+"\\s*"+t[k]+"$";var R=n++;t[R]="(?:~>?)";var S=n++;t[S]="(\\s*)"+t[R]+"\\s+";r[S]=new RegExp(t[S],"g");var V="$1~";var I=n++;t[I]="^"+t[R]+t[j]+"$";var T=n++;t[T]="^"+t[R]+t[k]+"$";var A=n++;t[A]="(?:\\^)";var C=n++;t[C]="(\\s*)"+t[A]+"\\s+";r[C]=new RegExp(t[C],"g");var M="$1^";var z=n++;t[z]="^"+t[A]+t[j]+"$";var P=n++;t[P]="^"+t[A]+t[k]+"$";var Z=n++;t[Z]="^"+t[y]+"\\s*("+w+")$|^$";var q=n++;t[q]="^"+t[y]+"\\s*("+g+")$|^$";var L=n++;t[L]="(\\s*)"+t[y]+"\\s*("+w+"|"+t[j]+")";r[L]=new RegExp(t[L],"g");var X="$1$2$3";var _=n++;t[_]="^\\s*("+t[j]+")"+"\\s+-\\s+"+"("+t[j]+")"+"\\s*$";var N=n++;t[N]="^\\s*("+t[k]+")"+"\\s+-\\s+"+"("+t[k]+")"+"\\s*$";var O=n++;t[O]="(<|>)?=?\\s*\\*";for(var B=0;B'};H.prototype.toString=function(){return this.version};H.prototype.compare=function(e){if(!(e instanceof H))e=new H(e,this.loose);return this.compareMain(e)||this.comparePre(e)};H.prototype.compareMain=function(e){if(!(e instanceof H))e=new H(e,this.loose);return Q(this.major,e.major)||Q(this.minor,e.minor)||Q(this.patch,e.patch)};H.prototype.comparePre=function(e){if(!(e instanceof H))e=new H(e,this.loose);if(this.prerelease.length&&!e.prerelease.length)return-1;else if(!this.prerelease.length&&e.prerelease.length)return 1;else if(!this.prerelease.length&&!e.prerelease.length)return 0;var r=0;do{var t=this.prerelease[r];var n=e.prerelease[r];if(t===undefined&&n===undefined)return 0;else if(n===undefined)return 1;else if(t===undefined)return-1;else if(t===n)continue;else return Q(t,n)}while(++r)};H.prototype.inc=function(e){switch(e){case"premajor":this.inc("major");this.inc("pre");break;case"preminor":this.inc("minor");this.inc("pre");break;case"prepatch":this.inc("patch");this.inc("pre");break;case"prerelease":if(this.prerelease.length===0)this.inc("patch");this.inc("pre");break;case"major":this.major++;this.minor=-1;case"minor":this.minor++;this.patch=0;this.prerelease=[];break;case"patch":if(this.prerelease.length===0)this.patch++;this.prerelease=[];break;case"pre":if(this.prerelease.length===0)this.prerelease=[0];else{var r=this.prerelease.length;while(--r>=0){if(typeof 
this.prerelease[r]==="number"){this.prerelease[r]++;r=-2}}if(r===-1)this.prerelease.push(0)}break;default:throw new Error("invalid increment argument: "+e)}this.format();return this};e.inc=J;function J(e,r,t){try{return new H(e,t).inc(r).version}catch(n){return null}}e.compareIdentifiers=Q;var K=/^[0-9]+$/;function Q(e,r){var t=K.test(e);var n=K.test(r);if(t&&n){e=+e;r=+r}return t&&!n?-1:n&&!t?1:er?1:0}e.rcompareIdentifiers=U;function U(e,r){return Q(r,e)}e.compare=W;function W(e,r,t){return new H(e,t).compare(r)}e.compareLoose=Y;function Y(e,r){return W(e,r,true)}e.rcompare=er;function er(e,r,t){return W(r,e,t)}e.sort=rr;function rr(r,t){return r.sort(function(r,n){return e.compare(r,n,t)})}e.rsort=tr;function tr(r,t){return r.sort(function(r,n){return e.rcompare(r,n,t)})}e.gt=nr;function nr(e,r,t){return W(e,r,t)>0}e.lt=ir;function ir(e,r,t){return W(e,r,t)<0}e.eq=sr;function sr(e,r,t){return W(e,r,t)===0}e.neq=ar;function ar(e,r,t){return W(e,r,t)!==0}e.gte=or;function or(e,r,t){return W(e,r,t)>=0}e.lte=fr;function fr(e,r,t){return W(e,r,t)<=0}e.cmp=ur;function ur(e,r,t,n){var i;switch(r){case"===":i=e===t;break;case"!==":i=e!==t;break;case"":case"=":case"==":i=sr(e,t,n);break;case"!=":i=ar(e,t,n);break;case">":i=nr(e,t,n);break;case">=":i=or(e,t,n);break;case"<":i=ir(e,t,n);break;case"<=":i=fr(e,t,n);break;default:throw new TypeError("Invalid operator: "+r)}return i}e.Comparator=cr;function cr(e,r){if(e instanceof cr){if(e.loose===r)return e;else e=e.value}if(!(this instanceof cr))return new cr(e,r);this.loose=r;this.parse(e);if(this.semver===lr)this.value="";else this.value=this.operator+this.semver.version}var lr={};cr.prototype.parse=function(e){var t=this.loose?r[Z]:r[q];var n=e.match(t);if(!n)throw new TypeError("Invalid comparator: "+e);this.operator=n[1];if(!n[2])this.semver=lr;else{this.semver=new H(n[2],this.loose);if(this.operator==="<"&&!this.semver.prerelease.length){this.semver.prerelease=["0"];this.semver.format()}}};cr.prototype.inspect=function(){return''};cr.prototype.toString=function(){return this.value};cr.prototype.test=function(e){return this.semver===lr?true:ur(e,this.operator,this.semver,this.loose)};e.Range=pr;function pr(e,r){if(e instanceof pr&&e.loose===r)return e;if(!(this instanceof pr))return new pr(e,r);this.loose=r;this.raw=e;this.set=e.split(/\s*\|\|\s*/).map(function(e){return this.parseRange(e.trim())},this).filter(function(e){return e.length});if(!this.set.length){throw new TypeError("Invalid SemVer Range: "+e)}this.format()}pr.prototype.inspect=function(){return''};pr.prototype.format=function(){this.range=this.set.map(function(e){return e.join(" ").trim()}).join("||").trim();return this.range};pr.prototype.toString=function(){return this.range};pr.prototype.parseRange=function(e){var t=this.loose;e=e.trim();var n=t?r[N]:r[_];e=e.replace(n,kr);e=e.replace(r[L],X);e=e.replace(r[S],V);e=e.replace(r[C],M);e=e.split(/\s+/).join(" ");var i=t?r[Z]:r[q];var s=e.split(" ").map(function(e){return vr(e,t)}).join(" ").split(/\s+/);if(this.loose){s=s.filter(function(e){return!!e.match(i)})}s=s.map(function(e){return new cr(e,t)});return s};e.toComparators=hr;function hr(e,r){return new pr(e,r).set.map(function(e){return e.map(function(e){return e.value}).join(" ").trim().split(" ")})}function vr(e,r){e=dr(e,r);e=gr(e,r);e=br(e,r);e=jr(e,r);return e}function mr(e){return!e||e.toLowerCase()==="x"||e==="*"}function gr(e,r){return e.trim().split(/\s+/).map(function(e){return wr(e,r)}).join(" ")}function wr(e,t){var n=t?r[T]:r[I];return 
e.replace(n,function(e,r,t,n,i){var s;if(mr(r))s="";else if(mr(t))s=">="+r+".0.0-0 <"+(+r+1)+".0.0-0";else if(mr(n))s=">="+r+"."+t+".0-0 <"+r+"."+(+t+1)+".0-0";else if(i){if(i.charAt(0)!=="-")i="-"+i;s=">="+r+"."+t+"."+n+i+" <"+r+"."+(+t+1)+".0-0"}else s=">="+r+"."+t+"."+n+"-0"+" <"+r+"."+(+t+1)+".0-0";return s})}function dr(e,r){return e.trim().split(/\s+/).map(function(e){return yr(e,r)}).join(" ")}function yr(e,t){var n=t?r[P]:r[z];return e.replace(n,function(e,r,t,n,i){var s;if(mr(r))s="";else if(mr(t))s=">="+r+".0.0-0 <"+(+r+1)+".0.0-0";else if(mr(n)){if(r==="0")s=">="+r+"."+t+".0-0 <"+r+"."+(+t+1)+".0-0";else s=">="+r+"."+t+".0-0 <"+(+r+1)+".0.0-0"}else if(i){if(i.charAt(0)!=="-")i="-"+i;if(r==="0"){if(t==="0")s="="+r+"."+t+"."+n+i;else s=">="+r+"."+t+"."+n+i+" <"+r+"."+(+t+1)+".0-0"}else s=">="+r+"."+t+"."+n+i+" <"+(+r+1)+".0.0-0"}else{if(r==="0"){if(t==="0")s="="+r+"."+t+"."+n;else s=">="+r+"."+t+"."+n+"-0"+" <"+r+"."+(+t+1)+".0-0"}else s=">="+r+"."+t+"."+n+"-0"+" <"+(+r+1)+".0.0-0"}return s})}function br(e,r){return e.split(/\s+/).map(function(e){return $r(e,r)}).join(" ")}function $r(e,t){e=e.trim();var n=t?r[x]:r[E];return e.replace(n,function(e,r,t,n,i,s){var a=mr(t);var o=a||mr(n);var f=o||mr(i);var u=f;if(r==="="&&u)r="";if(r&&u){if(a)t=0;if(o)n=0;if(f)i=0;if(r===">"){r=">=";if(a){}else if(o){t=+t+1;n=0;i=0}else if(f){n=+n+1;i=0}}e=r+t+"."+n+"."+i+"-0"}else if(a){e="*"}else if(o){e=">="+t+".0.0-0 <"+(+t+1)+".0.0-0"}else if(f){e=">="+t+"."+n+".0-0 <"+t+"."+(+n+1)+".0-0"}return e})}function jr(e,t){return e.trim().replace(r[O],"")}function kr(e,r,t,n,i,s,a,o,f,u,c,l,p){if(mr(t))r="";else if(mr(n))r=">="+t+".0.0-0";else if(mr(i))r=">="+t+"."+n+".0-0";else r=">="+r;if(mr(f))o="";else if(mr(u))o="<"+(+f+1)+".0.0-0";else if(mr(c))o="<"+f+"."+(+u+1)+".0-0";else if(l)o="<="+f+"."+u+"."+c+"-"+l;else o="<="+o;return(r+" "+o).trim()}pr.prototype.test=function(e){if(!e)return false;for(var r=0;r",t)}e.outside=Tr;function Tr(e,r,t,n){e=new H(e,n);r=new pr(r,n);var i,s,a,o,f;switch(t){case">":i=nr;s=fr;a=ir;o=">";f=">=";break;case"<":i=ir;s=or;a=nr;o="<";f="<=";break;default:throw new TypeError('Must provide a hilo val of "<" or ">"')}if(xr(e,r,n)){return false}for(var u=0;u)?=?)";var b=n++;t[b]=t[s]+"|x|X|\\*";var j=n++;t[j]=t[i]+"|x|X|\\*";var $=n++;t[$]="[v=\\s]*("+t[j]+")"+"(?:\\.("+t[j]+")"+"(?:\\.("+t[j]+")"+"(?:"+t[p]+")?"+t[v]+"?"+")?)?";var k=n++;t[k]="[v=\\s]*("+t[b]+")"+"(?:\\.("+t[b]+")"+"(?:\\.("+t[b]+")"+"(?:"+t[c]+")?"+t[v]+"?"+")?)?";var E=n++;t[E]="^"+t[y]+"\\s*"+t[$]+"$";var x=n++;t[x]="^"+t[y]+"\\s*"+t[k]+"$";var R=n++;t[R]="(?:~>?)";var S=n++;t[S]="(\\s*)"+t[R]+"\\s+";r[S]=new RegExp(t[S],"g");var V="$1~";var I=n++;t[I]="^"+t[R]+t[$]+"$";var T=n++;t[T]="^"+t[R]+t[k]+"$";var A=n++;t[A]="(?:\\^)";var C=n++;t[C]="(\\s*)"+t[A]+"\\s+";r[C]=new RegExp(t[C],"g");var M="$1^";var z=n++;t[z]="^"+t[A]+t[$]+"$";var N=n++;t[N]="^"+t[A]+t[k]+"$";var P=n++;t[P]="^"+t[y]+"\\s*("+w+")$|^$";var Z=n++;t[Z]="^"+t[y]+"\\s*("+g+")$|^$";var q=n++;t[q]="(\\s*)"+t[y]+"\\s*("+w+"|"+t[$]+")";r[q]=new RegExp(t[q],"g");var L="$1$2$3";var X=n++;t[X]="^\\s*("+t[$]+")"+"\\s+-\\s+"+"("+t[$]+")"+"\\s*$";var _=n++;t[_]="^\\s*("+t[k]+")"+"\\s+-\\s+"+"("+t[k]+")"+"\\s*$";var O=n++;t[O]="(<|>)?=?\\s*\\*";for(var B=0;B'};H.prototype.toString=function(){return this.version};H.prototype.compare=function(e){if(!(e instanceof H))e=new H(e,this.loose);return this.compareMain(e)||this.comparePre(e)};H.prototype.compareMain=function(e){if(!(e instanceof H))e=new H(e,this.loose);return 
Q(this.major,e.major)||Q(this.minor,e.minor)||Q(this.patch,e.patch)};H.prototype.comparePre=function(e){if(!(e instanceof H))e=new H(e,this.loose);if(this.prerelease.length&&!e.prerelease.length)return-1;else if(!this.prerelease.length&&e.prerelease.length)return 1;else if(!this.prerelease.length&&!e.prerelease.length)return 0;var r=0;do{var t=this.prerelease[r];var n=e.prerelease[r];if(t===undefined&&n===undefined)return 0;else if(n===undefined)return 1;else if(t===undefined)return-1;else if(t===n)continue;else return Q(t,n)}while(++r)};H.prototype.inc=function(e,r){switch(e){case"premajor":this.prerelease.length=0;this.patch=0;this.minor=0;this.major++;this.inc("pre",r);break;case"preminor":this.prerelease.length=0;this.patch=0;this.minor++;this.inc("pre",r);break;case"prepatch":this.prerelease.length=0;this.inc("patch",r);this.inc("pre",r);break;case"prerelease":if(this.prerelease.length===0)this.inc("patch",r);this.inc("pre",r);break;case"major":if(this.minor!==0||this.patch!==0||this.prerelease.length===0)this.major++;this.minor=0;this.patch=0;this.prerelease=[];break;case"minor":if(this.patch!==0||this.prerelease.length===0)this.minor++;this.patch=0;this.prerelease=[];break;case"patch":if(this.prerelease.length===0)this.patch++;this.prerelease=[];break;case"pre":if(this.prerelease.length===0)this.prerelease=[0];else{var t=this.prerelease.length;while(--t>=0){if(typeof this.prerelease[t]==="number"){this.prerelease[t]++;t=-2}}if(t===-1)this.prerelease.push(0)}if(r){if(this.prerelease[0]===r){if(isNaN(this.prerelease[1]))this.prerelease=[r,0]}else this.prerelease=[r,0]}break;default:throw new Error("invalid increment argument: "+e)}this.format();return this};e.inc=J;function J(e,r,t,n){if(typeof t==="string"){n=t;t=undefined}try{return new H(e,t).inc(r,n).version}catch(i){return null}}e.compareIdentifiers=Q;var K=/^[0-9]+$/;function Q(e,r){var t=K.test(e);var n=K.test(r);if(t&&n){e=+e;r=+r}return t&&!n?-1:n&&!t?1:er?1:0}e.rcompareIdentifiers=U;function U(e,r){return Q(r,e)}e.compare=W;function W(e,r,t){return new H(e,t).compare(r)}e.compareLoose=Y;function Y(e,r){return W(e,r,true)}e.rcompare=er;function er(e,r,t){return W(r,e,t)}e.sort=rr;function rr(r,t){return r.sort(function(r,n){return e.compare(r,n,t)})}e.rsort=tr;function tr(r,t){return r.sort(function(r,n){return e.rcompare(r,n,t)})}e.gt=nr;function nr(e,r,t){return W(e,r,t)>0}e.lt=ir;function ir(e,r,t){return W(e,r,t)<0}e.eq=sr;function sr(e,r,t){return W(e,r,t)===0}e.neq=ar;function ar(e,r,t){return W(e,r,t)!==0}e.gte=or;function or(e,r,t){return W(e,r,t)>=0}e.lte=fr;function fr(e,r,t){return W(e,r,t)<=0}e.cmp=ur;function ur(e,r,t,n){var i;switch(r){case"===":if(typeof e==="object")e=e.version;if(typeof t==="object")t=t.version;i=e===t;break;case"!==":if(typeof e==="object")e=e.version;if(typeof t==="object")t=t.version;i=e!==t;break;case"":case"=":case"==":i=sr(e,t,n);break;case"!=":i=ar(e,t,n);break;case">":i=nr(e,t,n);break;case">=":i=or(e,t,n);break;case"<":i=ir(e,t,n);break;case"<=":i=fr(e,t,n);break;default:throw new TypeError("Invalid operator: "+r)}return i}e.Comparator=lr;function lr(e,r){if(e instanceof lr){if(e.loose===r)return e;else e=e.value}if(!(this instanceof lr))return new lr(e,r);this.loose=r;this.parse(e);if(this.semver===pr)this.value="";else this.value=this.operator+this.semver.version}var pr={};lr.prototype.parse=function(e){var t=this.loose?r[P]:r[Z];var n=e.match(t);if(!n)throw new TypeError("Invalid comparator: 
"+e);this.operator=n[1];if(this.operator==="=")this.operator="";if(!n[2])this.semver=pr;else this.semver=new H(n[2],this.loose)};lr.prototype.inspect=function(){return''};lr.prototype.toString=function(){return this.value};lr.prototype.test=function(e){if(this.semver===pr)return true;if(typeof e==="string")e=new H(e,this.loose);return ur(e,this.operator,this.semver,this.loose)};e.Range=cr;function cr(e,r){if(e instanceof cr&&e.loose===r)return e;if(!(this instanceof cr))return new cr(e,r);this.loose=r;this.raw=e;this.set=e.split(/\s*\|\|\s*/).map(function(e){return this.parseRange(e.trim())},this).filter(function(e){return e.length});if(!this.set.length){throw new TypeError("Invalid SemVer Range: "+e)}this.format()}cr.prototype.inspect=function(){return''};cr.prototype.format=function(){this.range=this.set.map(function(e){return e.join(" ").trim()}).join("||").trim();return this.range};cr.prototype.toString=function(){return this.range};cr.prototype.parseRange=function(e){var t=this.loose;e=e.trim();var n=t?r[_]:r[X];e=e.replace(n,kr);e=e.replace(r[q],L);e=e.replace(r[S],V);e=e.replace(r[C],M);e=e.split(/\s+/).join(" ");var i=t?r[P]:r[Z];var s=e.split(" ").map(function(e){return vr(e,t)}).join(" ").split(/\s+/);if(this.loose){s=s.filter(function(e){return!!e.match(i)})}s=s.map(function(e){return new lr(e,t)});return s};e.toComparators=hr;function hr(e,r){return new cr(e,r).set.map(function(e){return e.map(function(e){return e.value}).join(" ").trim().split(" ")})}function vr(e,r){e=dr(e,r);e=gr(e,r);e=br(e,r);e=$r(e,r);return e}function mr(e){return!e||e.toLowerCase()==="x"||e==="*"}function gr(e,r){return e.trim().split(/\s+/).map(function(e){return wr(e,r)}).join(" ")}function wr(e,t){var n=t?r[T]:r[I];return e.replace(n,function(e,r,t,n,i){var s;if(mr(r))s="";else if(mr(t))s=">="+r+".0.0 <"+(+r+1)+".0.0";else if(mr(n))s=">="+r+"."+t+".0 <"+r+"."+(+t+1)+".0";else if(i){if(i.charAt(0)!=="-")i="-"+i;s=">="+r+"."+t+"."+n+i+" <"+r+"."+(+t+1)+".0"}else s=">="+r+"."+t+"."+n+" <"+r+"."+(+t+1)+".0";return s})}function dr(e,r){return e.trim().split(/\s+/).map(function(e){return yr(e,r)}).join(" ")}function yr(e,t){var n=t?r[N]:r[z];return e.replace(n,function(e,r,t,n,i){var s;if(mr(r))s="";else if(mr(t))s=">="+r+".0.0 <"+(+r+1)+".0.0";else if(mr(n)){if(r==="0")s=">="+r+"."+t+".0 <"+r+"."+(+t+1)+".0";else s=">="+r+"."+t+".0 <"+(+r+1)+".0.0"}else if(i){if(i.charAt(0)!=="-")i="-"+i;if(r==="0"){if(t==="0")s=">="+r+"."+t+"."+n+i+" <"+r+"."+t+"."+(+n+1);else s=">="+r+"."+t+"."+n+i+" <"+r+"."+(+t+1)+".0"}else s=">="+r+"."+t+"."+n+i+" <"+(+r+1)+".0.0"}else{if(r==="0"){if(t==="0")s=">="+r+"."+t+"."+n+" <"+r+"."+t+"."+(+n+1);else s=">="+r+"."+t+"."+n+" <"+r+"."+(+t+1)+".0"}else s=">="+r+"."+t+"."+n+" <"+(+r+1)+".0.0"}return s})}function br(e,r){return e.split(/\s+/).map(function(e){return jr(e,r)}).join(" ")}function jr(e,t){e=e.trim();var n=t?r[x]:r[E];return e.replace(n,function(e,r,t,n,i,s){var a=mr(t);var o=a||mr(n);var f=o||mr(i);var u=f;if(r==="="&&u)r="";if(a){if(r===">"||r==="<"){e="<0.0.0"}else{e="*"}}else if(r&&u){if(o)n=0;if(f)i=0;if(r===">"){r=">=";if(o){t=+t+1;n=0;i=0}else if(f){n=+n+1;i=0}}else if(r==="<="){r="<";if(o)t=+t+1;else n=+n+1}e=r+t+"."+n+"."+i}else if(o){e=">="+t+".0.0 <"+(+t+1)+".0.0"}else if(f){e=">="+t+"."+n+".0 <"+t+"."+(+n+1)+".0"}return e})}function $r(e,t){return e.trim().replace(r[O],"")}function kr(e,r,t,n,i,s,a,o,f,u,l,p,c){if(mr(t))r="";else if(mr(n))r=">="+t+".0.0";else if(mr(i))r=">="+t+"."+n+".0";else r=">="+r;if(mr(f))o="";else if(mr(u))o="<"+(+f+1)+".0.0";else 
if(mr(l))o="<"+f+"."+(+u+1)+".0";else if(p)o="<="+f+"."+u+"."+l+"-"+p;else o="<="+o;return(r+" "+o).trim()}cr.prototype.test=function(e){if(!e)return false;if(typeof e==="string")e=new H(e,this.loose);for(var r=0;r0){var n=e[t].semver;if(n.major===r.major&&n.minor===r.minor&&n.patch===r.patch)return true}}return false}return true}e.satisfies=xr;function xr(e,r,t){try{r=new cr(r,t)}catch(n){return false}return r.test(e)}e.maxSatisfying=Rr;function Rr(e,r,t){return e.filter(function(e){return xr(e,r,t)}).sort(function(e,r){return er(e,r,t)})[0]||null}e.validRange=Sr;function Sr(e,r){try{return new cr(e,r).range||"*"}catch(t){return null}}e.ltr=Vr;function Vr(e,r,t){return Tr(e,r,"<",t)}e.gtr=Ir;function Ir(e,r,t){return Tr(e,r,">",t)}e.outside=Tr;function Tr(e,r,t,n){e=new H(e,n);r=new cr(r,n);var i,s,a,o,f;switch(t){case">":i=nr;s=fr;a=ir;o=">";f=">=";break;case"<":i=ir;s=or;a=nr;o="<";f="<=";break;default:throw new TypeError('Must provide a hilo val of "<" or ">"')}if(xr(e,r,n)){return false}for(var u=0;u1.2.3', null], + ['~1.2.3', null], + ['<=1.2.3', null], + ['1.2.x', null] + ].forEach(function(tuple) { + var range = tuple[0]; + var version = tuple[1]; + var msg = 'clean(' + range + ') = ' + version; + t.equal(clean(range), version, msg); + }); + t.end(); +}); diff --git a/deps/npm/node_modules/semver/test/gtr.js b/deps/npm/node_modules/semver/test/gtr.js index cb6199efcf8..bbb87896c64 100644 --- a/deps/npm/node_modules/semver/test/gtr.js +++ b/deps/npm/node_modules/semver/test/gtr.js @@ -39,7 +39,7 @@ test('\ngtr tests', function(t) { ['~v0.5.4-pre', '0.6.1-pre'], ['=0.7.x', '0.8.0'], ['=0.7.x', '0.8.0-asdf'], - ['<=0.7.x', '0.7.0'], + ['<0.7.x', '0.7.0'], ['~1.2.2', '1.3.0'], ['1.0.0 - 2.0.0', '2.2.3'], ['1.0.0', '1.0.1'], @@ -66,7 +66,7 @@ test('\ngtr tests', function(t) { ['<1', '1.0.0beta', true], ['< 1', '1.0.0beta', true], ['=0.7.x', '0.8.2'], - ['<=0.7.x', '0.7.2'] + ['<0.7.x', '0.7.2'] ].forEach(function(tuple) { var range = tuple[0]; var version = tuple[1]; diff --git a/deps/npm/node_modules/semver/test/index.js b/deps/npm/node_modules/semver/test/index.js index 6285b693f9a..de8acaedfb5 100644 --- a/deps/npm/node_modules/semver/test/index.js +++ b/deps/npm/node_modules/semver/test/index.js @@ -1,3 +1,5 @@ +'use strict'; + var tap = require('tap'); var test = tap.test; var semver = require('../semver.js'); @@ -130,6 +132,15 @@ test('\nrange tests', function(t) { // [range, version] // version should be included by range [['1.0.0 - 2.0.0', '1.2.3'], + ['^1.2.3+build', '1.2.3'], + ['^1.2.3+build', '1.3.0'], + ['1.2.3-pre+asdf - 2.4.3-pre+asdf', '1.2.3'], + ['1.2.3pre+asdf - 2.4.3-pre+asdf', '1.2.3', true], + ['1.2.3-pre+asdf - 2.4.3pre+asdf', '1.2.3', true], + ['1.2.3pre+asdf - 2.4.3pre+asdf', '1.2.3', true], + ['1.2.3-pre+asdf - 2.4.3-pre+asdf', '1.2.3-pre.2'], + ['1.2.3-pre+asdf - 2.4.3-pre+asdf', '2.4.3-alpha'], + ['1.2.3+asdf - 2.4.3+asdf', '1.2.3'], ['1.0.0', '1.0.0'], ['>=*', '0.2.4'], ['', '1.0.0'], @@ -187,13 +198,11 @@ test('\nrange tests', function(t) { ['>= 1', '1.0.0'], ['<1.2', '1.1.1'], ['< 1.2', '1.1.1'], - ['1', '1.0.0beta', true], ['~v0.5.4-pre', '0.5.5'], ['~v0.5.4-pre', '0.5.4'], ['=0.7.x', '0.7.2'], + ['<=0.7.x', '0.7.2'], ['>=0.7.x', '0.7.2'], - ['=0.7.x', '0.7.0-asdf'], - ['>=0.7.x', '0.7.0-asdf'], ['<=0.7.x', '0.6.2'], ['~1.2.1 >=1.2.3', '1.2.3'], ['~1.2.1 =1.2.3', '1.2.3'], @@ -205,17 +214,15 @@ test('\nrange tests', function(t) { ['1.2.3 >=1.2.1', '1.2.3'], ['>=1.2.3 >=1.2.1', '1.2.3'], ['>=1.2.1 >=1.2.3', '1.2.3'], - ['<=1.2.3', '1.2.3-beta'], - ['>1.2', 
'1.3.0-beta'], ['>=1.2', '1.2.8'], ['^1.2.3', '1.8.1'], - ['^1.2.3', '1.2.3-beta'], ['^0.1.2', '0.1.2'], ['^0.1', '0.1.2'], ['^1.2', '1.4.2'], ['^1.2 ^1', '1.4.2'], - ['^1.2', '1.2.0-pre'], - ['^1.2.3', '1.2.3-pre'] + ['^1.2.3-alpha', '1.2.3-pre'], + ['^1.2.0-alpha', '1.2.0-pre'], + ['^0.0.1-alpha', '0.0.1-beta'] ].forEach(function(v) { var range = v[0]; var ver = v[1]; @@ -229,6 +236,20 @@ test('\nnegative range tests', function(t) { // [range, version] // version should not be included by range [['1.0.0 - 2.0.0', '2.2.3'], + ['1.2.3+asdf - 2.4.3+asdf', '1.2.3-pre.2'], + ['1.2.3+asdf - 2.4.3+asdf', '2.4.3-alpha'], + ['^1.2.3+build', '2.0.0'], + ['^1.2.3+build', '1.2.0'], + ['^1.2.3', '1.2.3-pre'], + ['^1.2', '1.2.0-pre'], + ['>1.2', '1.3.0-beta'], + ['<=1.2.3', '1.2.3-beta'], + ['^1.2.3', '1.2.3-beta'], + ['=0.7.x', '0.7.0-asdf'], + ['>=0.7.x', '0.7.0-asdf'], + ['1', '1.0.0beta', true], + ['<1', '1.0.0beta', true], + ['< 1', '1.0.0beta', true], ['1.0.0', '1.0.1'], ['>=1.0.0', '0.0.0'], ['>=1.0.0', '0.0.1'], @@ -268,11 +289,9 @@ test('\nnegative range tests', function(t) { ['>=1.2', '1.1.1'], ['1', '2.0.0beta', true], ['~v0.5.4-beta', '0.5.4-alpha'], - ['<1', '1.0.0beta', true], - ['< 1', '1.0.0beta', true], ['=0.7.x', '0.8.2'], ['>=0.7.x', '0.6.2'], - ['<=0.7.x', '0.7.2'], + ['<0.7.x', '0.7.2'], ['<1.2.3', '1.2.3-beta'], ['=1.2.3', '1.2.3-beta'], ['>1.2', '1.2.8'], @@ -294,8 +313,8 @@ test('\nnegative range tests', function(t) { }); test('\nincrement versions test', function(t) { - // [version, inc, result] - // inc(version, inc) -> result +// [version, inc, result, identifier] +// inc(version, inc) -> result [['1.2.3', 'major', '2.0.0'], ['1.2.3', 'minor', '1.3.0'], ['1.2.3', 'patch', '1.2.4'], @@ -327,19 +346,66 @@ test('\nincrement versions test', function(t) { ['1.2.3-alpha.9.beta', 'prerelease', '1.2.3-alpha.10.beta'], ['1.2.3-alpha.10.beta', 'prerelease', '1.2.3-alpha.11.beta'], ['1.2.3-alpha.11.beta', 'prerelease', '1.2.3-alpha.12.beta'], + ['1.2.0', 'prepatch', '1.2.1-0'], + ['1.2.0-1', 'prepatch', '1.2.1-0'], ['1.2.0', 'preminor', '1.3.0-0'], + ['1.2.3-1', 'preminor', '1.3.0-0'], ['1.2.0', 'premajor', '2.0.0-0'], - ['1.2.0', 'preminor', '1.3.0-0'], - ['1.2.0', 'premajor', '2.0.0-0'] + ['1.2.3-1', 'premajor', '2.0.0-0'], + ['1.2.0-1', 'minor', '1.2.0'], + ['1.0.0-1', 'major', '1.0.0'], + ['1.2.3', 'major', '2.0.0', false, 'dev'], + ['1.2.3', 'minor', '1.3.0', false, 'dev'], + ['1.2.3', 'patch', '1.2.4', false, 'dev'], + ['1.2.3tag', 'major', '2.0.0', true, 'dev'], + ['1.2.3-tag', 'major', '2.0.0', false, 'dev'], + ['1.2.3', 'fake', null, false, 'dev'], + ['1.2.0-0', 'patch', '1.2.0', false, 'dev'], + ['fake', 'major', null, false, 'dev'], + ['1.2.3-4', 'major', '2.0.0', false, 'dev'], + ['1.2.3-4', 'minor', '1.3.0', false, 'dev'], + ['1.2.3-4', 'patch', '1.2.3', false, 'dev'], + ['1.2.3-alpha.0.beta', 'major', '2.0.0', false, 'dev'], + ['1.2.3-alpha.0.beta', 'minor', '1.3.0', false, 'dev'], + ['1.2.3-alpha.0.beta', 'patch', '1.2.3', false, 'dev'], + ['1.2.4', 'prerelease', '1.2.5-dev.0', false, 'dev'], + ['1.2.3-0', 'prerelease', '1.2.3-dev.0', false, 'dev'], + ['1.2.3-alpha.0', 'prerelease', '1.2.3-dev.0', false, 'dev'], + ['1.2.3-alpha.0', 'prerelease', '1.2.3-alpha.1', false, 'alpha'], + ['1.2.3-alpha.0.beta', 'prerelease', '1.2.3-dev.0', false, 'dev'], + ['1.2.3-alpha.0.beta', 'prerelease', '1.2.3-alpha.1.beta', false, 'alpha'], + ['1.2.3-alpha.10.0.beta', 'prerelease', '1.2.3-dev.0', false, 'dev'], + ['1.2.3-alpha.10.0.beta', 'prerelease', '1.2.3-alpha.10.1.beta', false, 
'alpha'], + ['1.2.3-alpha.10.1.beta', 'prerelease', '1.2.3-alpha.10.2.beta', false, 'alpha'], + ['1.2.3-alpha.10.2.beta', 'prerelease', '1.2.3-alpha.10.3.beta', false, 'alpha'], + ['1.2.3-alpha.10.beta.0', 'prerelease', '1.2.3-dev.0', false, 'dev'], + ['1.2.3-alpha.10.beta.0', 'prerelease', '1.2.3-alpha.10.beta.1', false, 'alpha'], + ['1.2.3-alpha.10.beta.1', 'prerelease', '1.2.3-alpha.10.beta.2', false, 'alpha'], + ['1.2.3-alpha.10.beta.2', 'prerelease', '1.2.3-alpha.10.beta.3', false, 'alpha'], + ['1.2.3-alpha.9.beta', 'prerelease', '1.2.3-dev.0', false, 'dev'], + ['1.2.3-alpha.9.beta', 'prerelease', '1.2.3-alpha.10.beta', false, 'alpha'], + ['1.2.3-alpha.10.beta', 'prerelease', '1.2.3-alpha.11.beta', false, 'alpha'], + ['1.2.3-alpha.11.beta', 'prerelease', '1.2.3-alpha.12.beta', false, 'alpha'], + ['1.2.0', 'prepatch', '1.2.1-dev.0', 'dev'], + ['1.2.0-1', 'prepatch', '1.2.1-dev.0', 'dev'], + ['1.2.0', 'preminor', '1.3.0-dev.0', 'dev'], + ['1.2.3-1', 'preminor', '1.3.0-dev.0', 'dev'], + ['1.2.0', 'premajor', '2.0.0-dev.0', 'dev'], + ['1.2.3-1', 'premajor', '2.0.0-dev.0', 'dev'], + ['1.2.0-1', 'minor', '1.2.0', 'dev'], + ['1.0.0-1', 'major', '1.0.0', 'dev'], + ['1.2.3-dev.bar', 'prerelease', '1.2.3-dev.0', false, 'dev'] ].forEach(function(v) { var pre = v[0]; var what = v[1]; var wanted = v[2]; var loose = v[3]; - var found = inc(pre, what, loose); - t.equal(found, wanted, 'inc(' + pre + ', ' + what + ') === ' + wanted); + var id = v[4]; + var found = inc(pre, what, loose, id); + var cmd = 'inc(' + pre + ', ' + what + ', ' + id + ')'; + t.equal(found, wanted, cmd + ' === ' + wanted); }); t.end(); @@ -351,18 +417,18 @@ test('\nvalid range test', function(t) { // translate ranges into their canonical form [['1.0.0 - 2.0.0', '>=1.0.0 <=2.0.0'], ['1.0.0', '1.0.0'], - ['>=*', '>=0.0.0-0'], + ['>=*', '*'], ['', '*'], ['*', '*'], ['*', '*'], ['>=1.0.0', '>=1.0.0'], ['>1.0.0', '>1.0.0'], ['<=2.0.0', '<=2.0.0'], - ['1', '>=1.0.0-0 <2.0.0-0'], + ['1', '>=1.0.0 <2.0.0'], ['<=2.0.0', '<=2.0.0'], ['<=2.0.0', '<=2.0.0'], - ['<2.0.0', '<2.0.0-0'], - ['<2.0.0', '<2.0.0-0'], + ['<2.0.0', '<2.0.0'], + ['<2.0.0', '<2.0.0'], ['>= 1.0.0', '>=1.0.0'], ['>= 1.0.0', '>=1.0.0'], ['>= 1.0.0', '>=1.0.0'], @@ -371,56 +437,56 @@ test('\nvalid range test', function(t) { ['<= 2.0.0', '<=2.0.0'], ['<= 2.0.0', '<=2.0.0'], ['<= 2.0.0', '<=2.0.0'], - ['< 2.0.0', '<2.0.0-0'], - ['< 2.0.0', '<2.0.0-0'], + ['< 2.0.0', '<2.0.0'], + ['< 2.0.0', '<2.0.0'], ['>=0.1.97', '>=0.1.97'], ['>=0.1.97', '>=0.1.97'], ['0.1.20 || 1.2.4', '0.1.20||1.2.4'], - ['>=0.2.3 || <0.0.1', '>=0.2.3||<0.0.1-0'], - ['>=0.2.3 || <0.0.1', '>=0.2.3||<0.0.1-0'], - ['>=0.2.3 || <0.0.1', '>=0.2.3||<0.0.1-0'], + ['>=0.2.3 || <0.0.1', '>=0.2.3||<0.0.1'], + ['>=0.2.3 || <0.0.1', '>=0.2.3||<0.0.1'], + ['>=0.2.3 || <0.0.1', '>=0.2.3||<0.0.1'], ['||', '||'], - ['2.x.x', '>=2.0.0-0 <3.0.0-0'], - ['1.2.x', '>=1.2.0-0 <1.3.0-0'], - ['1.2.x || 2.x', '>=1.2.0-0 <1.3.0-0||>=2.0.0-0 <3.0.0-0'], - ['1.2.x || 2.x', '>=1.2.0-0 <1.3.0-0||>=2.0.0-0 <3.0.0-0'], + ['2.x.x', '>=2.0.0 <3.0.0'], + ['1.2.x', '>=1.2.0 <1.3.0'], + ['1.2.x || 2.x', '>=1.2.0 <1.3.0||>=2.0.0 <3.0.0'], + ['1.2.x || 2.x', '>=1.2.0 <1.3.0||>=2.0.0 <3.0.0'], ['x', '*'], - ['2.*.*', '>=2.0.0-0 <3.0.0-0'], - ['1.2.*', '>=1.2.0-0 <1.3.0-0'], - ['1.2.* || 2.*', '>=1.2.0-0 <1.3.0-0||>=2.0.0-0 <3.0.0-0'], + ['2.*.*', '>=2.0.0 <3.0.0'], + ['1.2.*', '>=1.2.0 <1.3.0'], + ['1.2.* || 2.*', '>=1.2.0 <1.3.0||>=2.0.0 <3.0.0'], ['*', '*'], - ['2', '>=2.0.0-0 <3.0.0-0'], - ['2.3', '>=2.3.0-0 <2.4.0-0'], - ['~2.4', 
'>=2.4.0-0 <2.5.0-0'], - ['~2.4', '>=2.4.0-0 <2.5.0-0'], - ['~>3.2.1', '>=3.2.1-0 <3.3.0-0'], - ['~1', '>=1.0.0-0 <2.0.0-0'], - ['~>1', '>=1.0.0-0 <2.0.0-0'], - ['~> 1', '>=1.0.0-0 <2.0.0-0'], - ['~1.0', '>=1.0.0-0 <1.1.0-0'], - ['~ 1.0', '>=1.0.0-0 <1.1.0-0'], - ['^0', '>=0.0.0-0 <1.0.0-0'], - ['^ 1', '>=1.0.0-0 <2.0.0-0'], - ['^0.1', '>=0.1.0-0 <0.2.0-0'], - ['^1.0', '>=1.0.0-0 <2.0.0-0'], - ['^1.2', '>=1.2.0-0 <2.0.0-0'], - ['^0.0.1', '=0.0.1'], - ['^0.0.1-beta', '=0.0.1-beta'], - ['^0.1.2', '>=0.1.2-0 <0.2.0-0'], - ['^1.2.3', '>=1.2.3-0 <2.0.0-0'], - ['^1.2.3-beta.4', '>=1.2.3-beta.4 <2.0.0-0'], - ['<1', '<1.0.0-0'], - ['< 1', '<1.0.0-0'], - ['>=1', '>=1.0.0-0'], - ['>= 1', '>=1.0.0-0'], - ['<1.2', '<1.2.0-0'], - ['< 1.2', '<1.2.0-0'], - ['1', '>=1.0.0-0 <2.0.0-0'], + ['2', '>=2.0.0 <3.0.0'], + ['2.3', '>=2.3.0 <2.4.0'], + ['~2.4', '>=2.4.0 <2.5.0'], + ['~2.4', '>=2.4.0 <2.5.0'], + ['~>3.2.1', '>=3.2.1 <3.3.0'], + ['~1', '>=1.0.0 <2.0.0'], + ['~>1', '>=1.0.0 <2.0.0'], + ['~> 1', '>=1.0.0 <2.0.0'], + ['~1.0', '>=1.0.0 <1.1.0'], + ['~ 1.0', '>=1.0.0 <1.1.0'], + ['^0', '>=0.0.0 <1.0.0'], + ['^ 1', '>=1.0.0 <2.0.0'], + ['^0.1', '>=0.1.0 <0.2.0'], + ['^1.0', '>=1.0.0 <2.0.0'], + ['^1.2', '>=1.2.0 <2.0.0'], + ['^0.0.1', '>=0.0.1 <0.0.2'], + ['^0.0.1-beta', '>=0.0.1-beta <0.0.2'], + ['^0.1.2', '>=0.1.2 <0.2.0'], + ['^1.2.3', '>=1.2.3 <2.0.0'], + ['^1.2.3-beta.4', '>=1.2.3-beta.4 <2.0.0'], + ['<1', '<1.0.0'], + ['< 1', '<1.0.0'], + ['>=1', '>=1.0.0'], + ['>= 1', '>=1.0.0'], + ['<1.2', '<1.2.0'], + ['< 1.2', '<1.2.0'], + ['1', '>=1.0.0 <2.0.0'], ['>01.02.03', '>1.2.3', true], ['>01.02.03', null], - ['~1.2.3beta', '>=1.2.3-beta <1.3.0-0', true], + ['~1.2.3beta', '>=1.2.3-beta <1.3.0', true], ['~1.2.3beta', null], - ['^ 1.2 ^ 1', '>=1.2.0-0 <2.0.0-0 >=1.0.0-0 <2.0.0-0'] + ['^ 1.2 ^ 1', '>=1.2.0 <2.0.0 >=1.0.0 <2.0.0'] ].forEach(function(v) { var pre = v[0]; var wanted = v[1]; @@ -438,7 +504,7 @@ test('\ncomparators test', function(t) { // turn range into a set of individual comparators [['1.0.0 - 2.0.0', [['>=1.0.0', '<=2.0.0']]], ['1.0.0', [['1.0.0']]], - ['>=*', [['>=0.0.0-0']]], + ['>=*', [['']]], ['', [['']]], ['*', [['']]], ['*', [['']]], @@ -448,11 +514,11 @@ test('\ncomparators test', function(t) { ['>1.0.0', [['>1.0.0']]], ['>1.0.0', [['>1.0.0']]], ['<=2.0.0', [['<=2.0.0']]], - ['1', [['>=1.0.0-0', '<2.0.0-0']]], + ['1', [['>=1.0.0', '<2.0.0']]], ['<=2.0.0', [['<=2.0.0']]], ['<=2.0.0', [['<=2.0.0']]], - ['<2.0.0', [['<2.0.0-0']]], - ['<2.0.0', [['<2.0.0-0']]], + ['<2.0.0', [['<2.0.0']]], + ['<2.0.0', [['<2.0.0']]], ['>= 1.0.0', [['>=1.0.0']]], ['>= 1.0.0', [['>=1.0.0']]], ['>= 1.0.0', [['>=1.0.0']]], @@ -461,47 +527,50 @@ test('\ncomparators test', function(t) { ['<= 2.0.0', [['<=2.0.0']]], ['<= 2.0.0', [['<=2.0.0']]], ['<= 2.0.0', [['<=2.0.0']]], - ['< 2.0.0', [['<2.0.0-0']]], - ['<\t2.0.0', [['<2.0.0-0']]], + ['< 2.0.0', [['<2.0.0']]], + ['<\t2.0.0', [['<2.0.0']]], ['>=0.1.97', [['>=0.1.97']]], ['>=0.1.97', [['>=0.1.97']]], ['0.1.20 || 1.2.4', [['0.1.20'], ['1.2.4']]], - ['>=0.2.3 || <0.0.1', [['>=0.2.3'], ['<0.0.1-0']]], - ['>=0.2.3 || <0.0.1', [['>=0.2.3'], ['<0.0.1-0']]], - ['>=0.2.3 || <0.0.1', [['>=0.2.3'], ['<0.0.1-0']]], + ['>=0.2.3 || <0.0.1', [['>=0.2.3'], ['<0.0.1']]], + ['>=0.2.3 || <0.0.1', [['>=0.2.3'], ['<0.0.1']]], + ['>=0.2.3 || <0.0.1', [['>=0.2.3'], ['<0.0.1']]], ['||', [[''], ['']]], - ['2.x.x', [['>=2.0.0-0', '<3.0.0-0']]], - ['1.2.x', [['>=1.2.0-0', '<1.3.0-0']]], - ['1.2.x || 2.x', [['>=1.2.0-0', '<1.3.0-0'], ['>=2.0.0-0', '<3.0.0-0']]], - ['1.2.x || 2.x', 
[['>=1.2.0-0', '<1.3.0-0'], ['>=2.0.0-0', '<3.0.0-0']]], + ['2.x.x', [['>=2.0.0', '<3.0.0']]], + ['1.2.x', [['>=1.2.0', '<1.3.0']]], + ['1.2.x || 2.x', [['>=1.2.0', '<1.3.0'], ['>=2.0.0', '<3.0.0']]], + ['1.2.x || 2.x', [['>=1.2.0', '<1.3.0'], ['>=2.0.0', '<3.0.0']]], ['x', [['']]], - ['2.*.*', [['>=2.0.0-0', '<3.0.0-0']]], - ['1.2.*', [['>=1.2.0-0', '<1.3.0-0']]], - ['1.2.* || 2.*', [['>=1.2.0-0', '<1.3.0-0'], ['>=2.0.0-0', '<3.0.0-0']]], - ['1.2.* || 2.*', [['>=1.2.0-0', '<1.3.0-0'], ['>=2.0.0-0', '<3.0.0-0']]], + ['2.*.*', [['>=2.0.0', '<3.0.0']]], + ['1.2.*', [['>=1.2.0', '<1.3.0']]], + ['1.2.* || 2.*', [['>=1.2.0', '<1.3.0'], ['>=2.0.0', '<3.0.0']]], + ['1.2.* || 2.*', [['>=1.2.0', '<1.3.0'], ['>=2.0.0', '<3.0.0']]], ['*', [['']]], - ['2', [['>=2.0.0-0', '<3.0.0-0']]], - ['2.3', [['>=2.3.0-0', '<2.4.0-0']]], - ['~2.4', [['>=2.4.0-0', '<2.5.0-0']]], - ['~2.4', [['>=2.4.0-0', '<2.5.0-0']]], - ['~>3.2.1', [['>=3.2.1-0', '<3.3.0-0']]], - ['~1', [['>=1.0.0-0', '<2.0.0-0']]], - ['~>1', [['>=1.0.0-0', '<2.0.0-0']]], - ['~> 1', [['>=1.0.0-0', '<2.0.0-0']]], - ['~1.0', [['>=1.0.0-0', '<1.1.0-0']]], - ['~ 1.0', [['>=1.0.0-0', '<1.1.0-0']]], - ['~ 1.0.3', [['>=1.0.3-0', '<1.1.0-0']]], - ['~> 1.0.3', [['>=1.0.3-0', '<1.1.0-0']]], - ['<1', [['<1.0.0-0']]], - ['< 1', [['<1.0.0-0']]], - ['>=1', [['>=1.0.0-0']]], - ['>= 1', [['>=1.0.0-0']]], - ['<1.2', [['<1.2.0-0']]], - ['< 1.2', [['<1.2.0-0']]], - ['1', [['>=1.0.0-0', '<2.0.0-0']]], - ['1 2', [['>=1.0.0-0', '<2.0.0-0', '>=2.0.0-0', '<3.0.0-0']]], - ['1.2 - 3.4.5', [['>=1.2.0-0', '<=3.4.5']]], - ['1.2.3 - 3.4', [['>=1.2.3', '<3.5.0-0']]] + ['2', [['>=2.0.0', '<3.0.0']]], + ['2.3', [['>=2.3.0', '<2.4.0']]], + ['~2.4', [['>=2.4.0', '<2.5.0']]], + ['~2.4', [['>=2.4.0', '<2.5.0']]], + ['~>3.2.1', [['>=3.2.1', '<3.3.0']]], + ['~1', [['>=1.0.0', '<2.0.0']]], + ['~>1', [['>=1.0.0', '<2.0.0']]], + ['~> 1', [['>=1.0.0', '<2.0.0']]], + ['~1.0', [['>=1.0.0', '<1.1.0']]], + ['~ 1.0', [['>=1.0.0', '<1.1.0']]], + ['~ 1.0.3', [['>=1.0.3', '<1.1.0']]], + ['~> 1.0.3', [['>=1.0.3', '<1.1.0']]], + ['<1', [['<1.0.0']]], + ['< 1', [['<1.0.0']]], + ['>=1', [['>=1.0.0']]], + ['>= 1', [['>=1.0.0']]], + ['<1.2', [['<1.2.0']]], + ['< 1.2', [['<1.2.0']]], + ['1', [['>=1.0.0', '<2.0.0']]], + ['1 2', [['>=1.0.0', '<2.0.0', '>=2.0.0', '<3.0.0']]], + ['1.2 - 3.4.5', [['>=1.2.0', '<=3.4.5']]], + ['1.2.3 - 3.4', [['>=1.2.3', '<3.5.0']]], + ['1.2.3 - 3', [['>=1.2.3', '<4.0.0']]], + ['>*', [['<0.0.0']]], + ['<*', [['<0.0.0']]] ].forEach(function(v) { var pre = v[0]; var wanted = v[1]; @@ -555,7 +624,7 @@ test('\nstrict vs loose version numbers', function(t) { test('\nstrict vs loose ranges', function(t) { [['>=01.02.03', '>=1.2.3'], - ['~1.02.03beta', '>=1.2.3-beta <1.3.0-0'] + ['~1.02.03beta', '>=1.2.3-beta <1.3.0'] ].forEach(function(v) { var loose = v[0]; var comps = v[1]; diff --git a/deps/npm/node_modules/semver/test/ltr.js b/deps/npm/node_modules/semver/test/ltr.js index a4f503a3c42..ecd1387ddfe 100644 --- a/deps/npm/node_modules/semver/test/ltr.js +++ b/deps/npm/node_modules/semver/test/ltr.js @@ -66,6 +66,10 @@ test('\nltr tests', function(t) { ['>1', '1.0.0beta', true], ['> 1', '1.0.0beta', true], ['=0.7.x', '0.6.2'], + ['=0.7.x', '0.7.0-asdf'], + ['^1', '1.0.0-0'], + ['>=0.7.x', '0.7.0-asdf'], + ['1', '1.0.0beta', true], ['>=0.7.x', '0.6.2'] ].forEach(function(tuple) { var range = tuple[0]; @@ -145,24 +149,27 @@ test('\nnegative ltr tests', function(t) { ['>= 1', '1.0.0'], ['<1.2', '1.1.1'], ['< 1.2', '1.1.1'], - ['1', '1.0.0beta', true], ['~v0.5.4-pre', '0.5.5'], 
['~v0.5.4-pre', '0.5.4'], ['=0.7.x', '0.7.2'], ['>=0.7.x', '0.7.2'], - ['=0.7.x', '0.7.0-asdf'], - ['>=0.7.x', '0.7.0-asdf'], ['<=0.7.x', '0.6.2'], ['>0.2.3 >0.2.4 <=0.2.5', '0.2.5'], ['>=0.2.3 <=0.2.4', '0.2.4'], ['1.0.0 - 2.0.0', '2.0.0'], - ['^1', '1.0.0-0'], ['^3.0.0', '4.0.0'], ['^1.0.0 || ~2.0.1', '2.0.0'], ['^0.1.0 || ~3.0.1 || 5.0.0', '3.2.0'], ['^0.1.0 || ~3.0.1 || 5.0.0', '1.0.0beta', true], ['^0.1.0 || ~3.0.1 || 5.0.0', '5.0.0-0', true], - ['^0.1.0 || ~3.0.1 || >4 <=5.0.0', '3.5.0'] + ['^0.1.0 || ~3.0.1 || >4 <=5.0.0', '3.5.0'], + ['^1.0.0alpha', '1.0.0beta', true], + ['~1.0.0alpha', '1.0.0beta', true], + ['^1.0.0-alpha', '1.0.0beta', true], + ['~1.0.0-alpha', '1.0.0beta', true], + ['^1.0.0-alpha', '1.0.0-beta'], + ['~1.0.0-alpha', '1.0.0-beta'], + ['=0.1.0', '1.0.0'] ].forEach(function(tuple) { var range = tuple[0]; var version = tuple[1]; diff --git a/deps/npm/node_modules/semver/test/no-module.js b/deps/npm/node_modules/semver/test/no-module.js index 96d1cd1fc52..8b50873f138 100644 --- a/deps/npm/node_modules/semver/test/no-module.js +++ b/deps/npm/node_modules/semver/test/no-module.js @@ -4,9 +4,9 @@ var test = tap.test; test('no module system', function(t) { var fs = require('fs'); var vm = require('vm'); - var head = fs.readFileSync(require.resolve('../head.js'), 'utf8'); + var head = fs.readFileSync(require.resolve('../head.js.txt'), 'utf8'); var src = fs.readFileSync(require.resolve('../'), 'utf8'); - var foot = fs.readFileSync(require.resolve('../foot.js'), 'utf8'); + var foot = fs.readFileSync(require.resolve('../foot.js.txt'), 'utf8'); vm.runInThisContext(head + src + foot, 'semver.js'); // just some basic poking to see if it did some stuff diff --git a/deps/npm/node_modules/sha/.npmignore b/deps/npm/node_modules/sha/.npmignore index ac4d7d173be..fcfd9449475 100644 --- a/deps/npm/node_modules/sha/.npmignore +++ b/deps/npm/node_modules/sha/.npmignore @@ -1,4 +1,4 @@ -node_modules -test -.gitignore +node_modules +test +.gitignore .travis.yml \ No newline at end of file diff --git a/deps/npm/node_modules/sha/LICENSE b/deps/npm/node_modules/sha/LICENSE index 3d8f089d214..048a6f99d22 100644 --- a/deps/npm/node_modules/sha/LICENSE +++ b/deps/npm/node_modules/sha/LICENSE @@ -1,46 +1,46 @@ -Copyright (c) 2013 Forbes Lindesay - -The BSD License - -Redistribution and use in source and binary forms, with or without -modification, are permitted provided that the following conditions -are met: - -1. Redistributions of source code must retain the above copyright - notice, this list of conditions and the following disclaimer. - -2. Redistributions in binary form must reproduce the above copyright - notice, this list of conditions and the following disclaimer in the - documentation and/or other materials provided with the distribution. - -THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS ``AS IS'' AND -ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE -IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR -PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE AUTHOR OR CONTRIBUTORS -BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR -CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF -SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR -BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, -WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE -OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN -IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. 
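Stepping back to the `gtr`/`ltr` fixtures above: both helpers ask whether a version falls wholly outside a range on one side, which is why the `<=0.7.x` rows had to become `<0.7.x` — under the new desugaring, `<=0.7.x` means `<0.8.0` and so admits any 0.7.x. A quick sketch under the same assumptions as the earlier snippets:

```js
var semver = require('semver');

// gtr: the version is greater than every version the range allows.
semver.gtr('2.2.3', '1.0.0 - 2.0.0'); // true
semver.gtr('0.7.0', '<0.7.x');        // true  — but <=0.7.x would now admit 0.7.0
// ltr: the version is less than every version the range allows.
semver.ltr('0.6.2', '>=0.7.x');       // true
semver.ltr('0.7.2', '>=0.7.x');       // false — 0.7.2 is inside the range
```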
- -The MIT License (MIT) - -Permission is hereby granted, free of charge, to any person obtaining a copy -of this software and associated documentation files (the "Software"), to deal -in the Software without restriction, including without limitation the rights -to use, copy, modify, merge, publish, distribute, sublicense, and/or sell -copies of the Software, and to permit persons to whom the Software is -furnished to do so, subject to the following conditions: - -The above copyright notice and this permission notice shall be included in -all copies or substantial portions of the Software. - -THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR -IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, -FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE -AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER -LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, -OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN +Copyright (c) 2013 Forbes Lindesay + +The BSD License + +Redistribution and use in source and binary forms, with or without +modification, are permitted provided that the following conditions +are met: + +1. Redistributions of source code must retain the above copyright + notice, this list of conditions and the following disclaimer. + +2. Redistributions in binary form must reproduce the above copyright + notice, this list of conditions and the following disclaimer in the + documentation and/or other materials provided with the distribution. + +THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS ``AS IS'' AND +ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE +IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR +PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE AUTHOR OR CONTRIBUTORS +BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR +CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF +SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR +BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, +WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE +OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN +IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. + +The MIT License (MIT) + +Permission is hereby granted, free of charge, to any person obtaining a copy +of this software and associated documentation files (the "Software"), to deal +in the Software without restriction, including without limitation the rights +to use, copy, modify, merge, publish, distribute, sublicense, and/or sell +copies of the Software, and to permit persons to whom the Software is +furnished to do so, subject to the following conditions: + +The above copyright notice and this permission notice shall be included in +all copies or substantial portions of the Software. + +THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR +IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, +FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE +AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER +LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, +OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. 
\ No newline at end of file diff --git a/deps/npm/node_modules/sha/README.md b/deps/npm/node_modules/sha/README.md index c881194c213..a2b300cc036 100644 --- a/deps/npm/node_modules/sha/README.md +++ b/deps/npm/node_modules/sha/README.md @@ -1,49 +1,49 @@ -# sha - -Check and get file hashes (using any algorithm) - -[![Build Status](https://img.shields.io/travis/ForbesLindesay/sha/master.svg)](https://travis-ci.org/ForbesLindesay/sha) -[![Dependency Status](https://img.shields.io/gemnasium/ForbesLindesay/sha.svg)](https://gemnasium.com/ForbesLindesay/sha) -[![NPM version](https://img.shields.io/npm/v/sha.svg)](http://badge.fury.io/js/sha) - -## Installation - - $ npm install sha - -## API - -### check(fileName, expected, [options,] cb) / checkSync(filename, expected, [options]) - -Asynchronously check that `fileName` has a "hash" of `expected`. The callback will be called with either `null` or an error (indicating that they did not match). - -Options: - -- algorithm: defaults to `sha1` and can be any of the algorithms supported by `crypto.createHash` - -### get(fileName, [options,] cb) / getSync(filename, [options]) - -Asynchronously get the "hash" of `fileName`. The callback will be called with an optional `error` object and the (lower cased) hex digest of the hash. - -Options: - -- algorithm: defaults to `sha1` and can be any of the algorithms supported by `crypto.createHash` - -### stream(expected, [options]) - -Check the hash of a stream without ever buffering it. This is a pass through stream so you can do things like: - -```js -fs.createReadStream('src') - .pipe(sha.stream('expected')) - .pipe(fs.createWriteStream('dest')) -``` - -`dest` will be a complete copy of `src` and an error will be emitted if the hash did not match `'expected'`. - -Options: - -- algorithm: defaults to `sha1` and can be any of the algorithms supported by `crypto.createHash` - -## License - +# sha + +Check and get file hashes (using any algorithm) + +[![Build Status](https://img.shields.io/travis/ForbesLindesay/sha/master.svg)](https://travis-ci.org/ForbesLindesay/sha) +[![Dependency Status](https://img.shields.io/gemnasium/ForbesLindesay/sha.svg)](https://gemnasium.com/ForbesLindesay/sha) +[![NPM version](https://img.shields.io/npm/v/sha.svg)](http://badge.fury.io/js/sha) + +## Installation + + $ npm install sha + +## API + +### check(fileName, expected, [options,] cb) / checkSync(filename, expected, [options]) + +Asynchronously check that `fileName` has a "hash" of `expected`. The callback will be called with either `null` or an error (indicating that they did not match). + +Options: + +- algorithm: defaults to `sha1` and can be any of the algorithms supported by `crypto.createHash` + +### get(fileName, [options,] cb) / getSync(filename, [options]) + +Asynchronously get the "hash" of `fileName`. The callback will be called with an optional `error` object and the (lower cased) hex digest of the hash. + +Options: + +- algorithm: defaults to `sha1` and can be any of the algorithms supported by `crypto.createHash` + +### stream(expected, [options]) + +Check the hash of a stream without ever buffering it. This is a pass through stream so you can do things like: + +```js +fs.createReadStream('src') + .pipe(sha.stream('expected')) + .pipe(fs.createWriteStream('dest')) +``` + +`dest` will be a complete copy of `src` and an error will be emitted if the hash did not match `'expected'`. 
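To make the `check`/`get` half of this API concrete alongside the `stream` example above, here is a usage sketch; the file name and expected digest are hypothetical.

```js
var sha = require('sha');

// get(): callback receives the lower-cased hex digest (sha1 by default).
sha.get('archive.tgz', function (er, actual) {
  if (er) throw er;
  console.log('sha1:', actual);
});

// check(): callback gets null on a match, an Error on a mismatch.
sha.check('archive.tgz', 'deadbeefdeadbeefdeadbeefdeadbeefdeadbeef',
          { algorithm: 'sha1' }, function (er) {
  if (er) console.error(er.message);
});
```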
+ +Options: + +- algorithm: defaults to `sha1` and can be any of the algorithms supported by `crypto.createHash` + +## License + You may use this software under the BSD or MIT. Take your pick. If you want me to release it under another license, open a pull request. \ No newline at end of file diff --git a/deps/npm/node_modules/sha/index.js b/deps/npm/node_modules/sha/index.js index 45f152c8d4e..d50ac2fb92f 100644 --- a/deps/npm/node_modules/sha/index.js +++ b/deps/npm/node_modules/sha/index.js @@ -1,120 +1,120 @@ -'use strict' - -var Transform = require('stream').Transform || require('readable-stream').Transform -var crypto = require('crypto') -var fs -try { - fs = require('graceful-fs') -} catch (ex) { - fs = require('fs') -} -try { - process.binding('crypto') -} catch (e) { - var er = new Error( 'crypto binding not found.\n' - + 'Please build node with openssl.\n' - + e.message ) - throw er -} - -exports.check = check -exports.checkSync = checkSync -exports.get = get -exports.getSync = getSync -exports.stream = stream - -function check(file, expected, options, cb) { - if (typeof options === 'function') { - cb = options - options = undefined - } - expected = expected.toLowerCase().trim() - get(file, options, function (er, actual) { - if (er) { - if (er.message) er.message += ' while getting shasum for ' + file - return cb(er) - } - if (actual === expected) return cb(null) - cb(new Error( - 'shasum check failed for ' + file + '\n' - + 'Expected: ' + expected + '\n' - + 'Actual: ' + actual)) - }) -} -function checkSync(file, expected, options) { - expected = expected.toLowerCase().trim() - var actual - try { - actual = getSync(file, options) - } catch (er) { - if (er.message) er.message += ' while getting shasum for ' + file - throw er - } - if (actual !== expected) { - var ex = new Error( - 'shasum check failed for ' + file + '\n' - + 'Expected: ' + expected + '\n' - + 'Actual: ' + actual) - throw ex - } -} - - -function get(file, options, cb) { - if (typeof options === 'function') { - cb = options - options = undefined - } - options = options || {} - var algorithm = options.algorithm || 'sha1' - var hash = crypto.createHash(algorithm) - var source = fs.createReadStream(file) - var errState = null - source - .on('error', function (er) { - if (errState) return - return cb(errState = er) - }) - .on('data', function (chunk) { - if (errState) return - hash.update(chunk) - }) - .on('end', function () { - if (errState) return - var actual = hash.digest("hex").toLowerCase().trim() - cb(null, actual) - }) -} - -function getSync(file, options) { - options = options || {} - var algorithm = options.algorithm || 'sha1' - var hash = crypto.createHash(algorithm) - var source = fs.readFileSync(file) - hash.update(source) - return hash.digest("hex").toLowerCase().trim() -} - -function stream(expected, options) { - expected = expected.toLowerCase().trim() - options = options || {} - var algorithm = options.algorithm || 'sha1' - var hash = crypto.createHash(algorithm) - - var stream = new Transform() - stream._transform = function (chunk, encoding, callback) { - hash.update(chunk) - stream.push(chunk) - callback() - } - stream._flush = function (cb) { - var actual = hash.digest("hex").toLowerCase().trim() - if (actual === expected) return cb(null) - cb(new Error( - 'shasum check failed for:\n' - + ' Expected: ' + expected + '\n' - + ' Actual: ' + actual)) - this.push(null) - } - return stream +'use strict' + +var Transform = require('stream').Transform || require('readable-stream').Transform +var crypto = 
require('crypto') +var fs +try { + fs = require('graceful-fs') +} catch (ex) { + fs = require('fs') +} +try { + process.binding('crypto') +} catch (e) { + var er = new Error( 'crypto binding not found.\n' + + 'Please build node with openssl.\n' + + e.message ) + throw er +} + +exports.check = check +exports.checkSync = checkSync +exports.get = get +exports.getSync = getSync +exports.stream = stream + +function check(file, expected, options, cb) { + if (typeof options === 'function') { + cb = options + options = undefined + } + expected = expected.toLowerCase().trim() + get(file, options, function (er, actual) { + if (er) { + if (er.message) er.message += ' while getting shasum for ' + file + return cb(er) + } + if (actual === expected) return cb(null) + cb(new Error( + 'shasum check failed for ' + file + '\n' + + 'Expected: ' + expected + '\n' + + 'Actual: ' + actual)) + }) +} +function checkSync(file, expected, options) { + expected = expected.toLowerCase().trim() + var actual + try { + actual = getSync(file, options) + } catch (er) { + if (er.message) er.message += ' while getting shasum for ' + file + throw er + } + if (actual !== expected) { + var ex = new Error( + 'shasum check failed for ' + file + '\n' + + 'Expected: ' + expected + '\n' + + 'Actual: ' + actual) + throw ex + } +} + + +function get(file, options, cb) { + if (typeof options === 'function') { + cb = options + options = undefined + } + options = options || {} + var algorithm = options.algorithm || 'sha1' + var hash = crypto.createHash(algorithm) + var source = fs.createReadStream(file) + var errState = null + source + .on('error', function (er) { + if (errState) return + return cb(errState = er) + }) + .on('data', function (chunk) { + if (errState) return + hash.update(chunk) + }) + .on('end', function () { + if (errState) return + var actual = hash.digest("hex").toLowerCase().trim() + cb(null, actual) + }) +} + +function getSync(file, options) { + options = options || {} + var algorithm = options.algorithm || 'sha1' + var hash = crypto.createHash(algorithm) + var source = fs.readFileSync(file) + hash.update(source) + return hash.digest("hex").toLowerCase().trim() +} + +function stream(expected, options) { + expected = expected.toLowerCase().trim() + options = options || {} + var algorithm = options.algorithm || 'sha1' + var hash = crypto.createHash(algorithm) + + var stream = new Transform() + stream._transform = function (chunk, encoding, callback) { + hash.update(chunk) + stream.push(chunk) + callback() + } + stream._flush = function (cb) { + var actual = hash.digest("hex").toLowerCase().trim() + if (actual === expected) return cb(null) + cb(new Error( + 'shasum check failed for:\n' + + ' Expected: ' + expected + '\n' + + ' Actual: ' + actual)) + this.push(null) + } + return stream } \ No newline at end of file diff --git a/deps/npm/node_modules/sha/node_modules/readable-stream/LICENSE b/deps/npm/node_modules/sha/node_modules/readable-stream/LICENSE index 0c44ae716db..e3d4e695a4c 100644 --- a/deps/npm/node_modules/sha/node_modules/readable-stream/LICENSE +++ b/deps/npm/node_modules/sha/node_modules/readable-stream/LICENSE @@ -1,27 +1,18 @@ -Copyright (c) Isaac Z. Schlueter ("Author") -All rights reserved. +Copyright Joyent, Inc. and other Node contributors. All rights reserved. 
+Permission is hereby granted, free of charge, to any person obtaining a copy +of this software and associated documentation files (the "Software"), to +deal in the Software without restriction, including without limitation the +rights to use, copy, modify, merge, publish, distribute, sublicense, and/or +sell copies of the Software, and to permit persons to whom the Software is +furnished to do so, subject to the following conditions: -The BSD License +The above copyright notice and this permission notice shall be included in +all copies or substantial portions of the Software. -Redistribution and use in source and binary forms, with or without -modification, are permitted provided that the following conditions -are met: - -1. Redistributions of source code must retain the above copyright - notice, this list of conditions and the following disclaimer. - -2. Redistributions in binary form must reproduce the above copyright - notice, this list of conditions and the following disclaimer in the - documentation and/or other materials provided with the distribution. - -THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS ``AS IS'' AND -ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE -IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR -PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE AUTHOR OR CONTRIBUTORS -BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR -CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF -SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR -BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, -WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE -OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN -IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. +THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR +IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, +FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE +AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER +LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING +FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS +IN THE SOFTWARE. diff --git a/deps/npm/node_modules/sha/node_modules/readable-stream/README.md b/deps/npm/node_modules/sha/node_modules/readable-stream/README.md index 34c11897927..e46b823903d 100644 --- a/deps/npm/node_modules/sha/node_modules/readable-stream/README.md +++ b/deps/npm/node_modules/sha/node_modules/readable-stream/README.md @@ -2,8 +2,8 @@ ***Node-core streams for userland*** -[![NPM](https://nodei.co/npm/readable-stream.png?downloads=true)](https://nodei.co/npm/readable-stream/) -[![NPM](https://nodei.co/npm-dl/readable-stream.png)](https://nodei.co/npm/readable-stream/) +[![NPM](https://nodei.co/npm/readable-stream.png?downloads=true&downloadRank=true)](https://nodei.co/npm/readable-stream/) +[![NPM](https://nodei.co/npm-dl/readable-stream.png?months=6&height=3)](https://nodei.co/npm/readable-stream/) This package is a mirror of the Streams2 and Streams3 implementations in Node-core.
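The README hunk above describes readable-stream as a userland mirror of core Streams2/Streams3, and the rewritten sha/index.js earlier in this diff shows the intended consumption pattern: prefer the core implementation and fall back to the mirror. A minimal sketch of that pattern follows (editorial illustration, not part of the patch; the chunk contents are made up):

```js
// Prefer the core implementation when present, fall back to the userland
// mirror otherwise -- the same guard sha/index.js above applies to Transform.
var Readable = require('stream').Readable || require('readable-stream').Readable;

var r = new Readable();
var chunks = ['streams2', ' and ', 'streams3', null]; // null signals end-of-stream

// _read() is called whenever the internal buffer wants more data.
r._read = function () {
  r.push(chunks.shift());
};

// Attaching a 'data' listener flips the stream into flowing mode -- the
// semantics reworked by the _stream_readable.js changes later in this diff.
r.on('data', function (c) { process.stdout.write(c); });
r.on('end', function () { process.stdout.write('\n'); });
```

Either require path yields the same API, which is what lets vendored modules like sha behave identically across Node versions.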
diff --git a/deps/npm/node_modules/sha/node_modules/readable-stream/float.patch b/deps/npm/node_modules/sha/node_modules/readable-stream/float.patch new file mode 100644 index 00000000000..b984607a41c --- /dev/null +++ b/deps/npm/node_modules/sha/node_modules/readable-stream/float.patch @@ -0,0 +1,923 @@ +diff --git a/lib/_stream_duplex.js b/lib/_stream_duplex.js +index c5a741c..a2e0d8e 100644 +--- a/lib/_stream_duplex.js ++++ b/lib/_stream_duplex.js +@@ -26,8 +26,8 @@ + + module.exports = Duplex; + var util = require('util'); +-var Readable = require('_stream_readable'); +-var Writable = require('_stream_writable'); ++var Readable = require('./_stream_readable'); ++var Writable = require('./_stream_writable'); + + util.inherits(Duplex, Readable); + +diff --git a/lib/_stream_passthrough.js b/lib/_stream_passthrough.js +index a5e9864..330c247 100644 +--- a/lib/_stream_passthrough.js ++++ b/lib/_stream_passthrough.js +@@ -25,7 +25,7 @@ + + module.exports = PassThrough; + +-var Transform = require('_stream_transform'); ++var Transform = require('./_stream_transform'); + var util = require('util'); + util.inherits(PassThrough, Transform); + +diff --git a/lib/_stream_readable.js b/lib/_stream_readable.js +index 0c3fe3e..90a8298 100644 +--- a/lib/_stream_readable.js ++++ b/lib/_stream_readable.js +@@ -23,10 +23,34 @@ module.exports = Readable; + Readable.ReadableState = ReadableState; + + var EE = require('events').EventEmitter; ++if (!EE.listenerCount) EE.listenerCount = function(emitter, type) { ++ return emitter.listeners(type).length; ++}; ++ ++if (!global.setImmediate) global.setImmediate = function setImmediate(fn) { ++ return setTimeout(fn, 0); ++}; ++if (!global.clearImmediate) global.clearImmediate = function clearImmediate(i) { ++ return clearTimeout(i); ++}; ++ + var Stream = require('stream'); + var util = require('util'); ++if (!util.isUndefined) { ++ var utilIs = require('core-util-is'); ++ for (var f in utilIs) { ++ util[f] = utilIs[f]; ++ } ++} + var StringDecoder; +-var debug = util.debuglog('stream'); ++var debug; ++if (util.debuglog) ++ debug = util.debuglog('stream'); ++else try { ++ debug = require('debuglog')('stream'); ++} catch (er) { ++ debug = function() {}; ++} + + util.inherits(Readable, Stream); + +@@ -380,7 +404,7 @@ function chunkInvalid(state, chunk) { + + + function onEofChunk(stream, state) { +- if (state.decoder && !state.ended) { ++ if (state.decoder && !state.ended && state.decoder.end) { + var chunk = state.decoder.end(); + if (chunk && chunk.length) { + state.buffer.push(chunk); +diff --git a/lib/_stream_transform.js b/lib/_stream_transform.js +index b1f9fcc..b0caf57 100644 +--- a/lib/_stream_transform.js ++++ b/lib/_stream_transform.js +@@ -64,8 +64,14 @@ + + module.exports = Transform; + +-var Duplex = require('_stream_duplex'); ++var Duplex = require('./_stream_duplex'); + var util = require('util'); ++if (!util.isUndefined) { ++ var utilIs = require('core-util-is'); ++ for (var f in utilIs) { ++ util[f] = utilIs[f]; ++ } ++} + util.inherits(Transform, Duplex); + + +diff --git a/lib/_stream_writable.js b/lib/_stream_writable.js +index ba2e920..f49288b 100644 +--- a/lib/_stream_writable.js ++++ b/lib/_stream_writable.js +@@ -27,6 +27,12 @@ module.exports = Writable; + Writable.WritableState = WritableState; + + var util = require('util'); ++if (!util.isUndefined) { ++ var utilIs = require('core-util-is'); ++ for (var f in utilIs) { ++ util[f] = utilIs[f]; ++ } ++} + var Stream = require('stream'); + + util.inherits(Writable, Stream); +@@ -119,7 +125,7 @@ 
function WritableState(options, stream) { + function Writable(options) { + // Writable ctor is applied to Duplexes, though they're not + // instanceof Writable, they're instanceof Readable. +- if (!(this instanceof Writable) && !(this instanceof Stream.Duplex)) ++ if (!(this instanceof Writable) && !(this instanceof require('./_stream_duplex'))) + return new Writable(options); + + this._writableState = new WritableState(options, this); +diff --git a/test/simple/test-stream-big-push.js b/test/simple/test-stream-big-push.js +index e3787e4..8cd2127 100644 +--- a/test/simple/test-stream-big-push.js ++++ b/test/simple/test-stream-big-push.js +@@ -21,7 +21,7 @@ + + var common = require('../common'); + var assert = require('assert'); +-var stream = require('stream'); ++var stream = require('../../'); + var str = 'asdfasdfasdfasdfasdf'; + + var r = new stream.Readable({ +diff --git a/test/simple/test-stream-end-paused.js b/test/simple/test-stream-end-paused.js +index bb73777..d40efc7 100644 +--- a/test/simple/test-stream-end-paused.js ++++ b/test/simple/test-stream-end-paused.js +@@ -25,7 +25,7 @@ var gotEnd = false; + + // Make sure we don't miss the end event for paused 0-length streams + +-var Readable = require('stream').Readable; ++var Readable = require('../../').Readable; + var stream = new Readable(); + var calledRead = false; + stream._read = function() { +diff --git a/test/simple/test-stream-pipe-after-end.js b/test/simple/test-stream-pipe-after-end.js +index b46ee90..0be8366 100644 +--- a/test/simple/test-stream-pipe-after-end.js ++++ b/test/simple/test-stream-pipe-after-end.js +@@ -22,8 +22,8 @@ + var common = require('../common'); + var assert = require('assert'); + +-var Readable = require('_stream_readable'); +-var Writable = require('_stream_writable'); ++var Readable = require('../../lib/_stream_readable'); ++var Writable = require('../../lib/_stream_writable'); + var util = require('util'); + + util.inherits(TestReadable, Readable); +diff --git a/test/simple/test-stream-pipe-cleanup.js b/test/simple/test-stream-pipe-cleanup.js +deleted file mode 100644 +index f689358..0000000 +--- a/test/simple/test-stream-pipe-cleanup.js ++++ /dev/null +@@ -1,122 +0,0 @@ +-// Copyright Joyent, Inc. and other Node contributors. +-// +-// Permission is hereby granted, free of charge, to any person obtaining a +-// copy of this software and associated documentation files (the +-// "Software"), to deal in the Software without restriction, including +-// without limitation the rights to use, copy, modify, merge, publish, +-// distribute, sublicense, and/or sell copies of the Software, and to permit +-// persons to whom the Software is furnished to do so, subject to the +-// following conditions: +-// +-// The above copyright notice and this permission notice shall be included +-// in all copies or substantial portions of the Software. +-// +-// THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS +-// OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF +-// MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN +-// NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, +-// DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR +-// OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE +-// USE OR OTHER DEALINGS IN THE SOFTWARE. +- +-// This test asserts that Stream.prototype.pipe does not leave listeners +-// hanging on the source or dest. 
+- +-var common = require('../common'); +-var stream = require('stream'); +-var assert = require('assert'); +-var util = require('util'); +- +-function Writable() { +- this.writable = true; +- this.endCalls = 0; +- stream.Stream.call(this); +-} +-util.inherits(Writable, stream.Stream); +-Writable.prototype.end = function() { +- this.endCalls++; +-}; +- +-Writable.prototype.destroy = function() { +- this.endCalls++; +-}; +- +-function Readable() { +- this.readable = true; +- stream.Stream.call(this); +-} +-util.inherits(Readable, stream.Stream); +- +-function Duplex() { +- this.readable = true; +- Writable.call(this); +-} +-util.inherits(Duplex, Writable); +- +-var i = 0; +-var limit = 100; +- +-var w = new Writable(); +- +-var r; +- +-for (i = 0; i < limit; i++) { +- r = new Readable(); +- r.pipe(w); +- r.emit('end'); +-} +-assert.equal(0, r.listeners('end').length); +-assert.equal(limit, w.endCalls); +- +-w.endCalls = 0; +- +-for (i = 0; i < limit; i++) { +- r = new Readable(); +- r.pipe(w); +- r.emit('close'); +-} +-assert.equal(0, r.listeners('close').length); +-assert.equal(limit, w.endCalls); +- +-w.endCalls = 0; +- +-r = new Readable(); +- +-for (i = 0; i < limit; i++) { +- w = new Writable(); +- r.pipe(w); +- w.emit('close'); +-} +-assert.equal(0, w.listeners('close').length); +- +-r = new Readable(); +-w = new Writable(); +-var d = new Duplex(); +-r.pipe(d); // pipeline A +-d.pipe(w); // pipeline B +-assert.equal(r.listeners('end').length, 2); // A.onend, A.cleanup +-assert.equal(r.listeners('close').length, 2); // A.onclose, A.cleanup +-assert.equal(d.listeners('end').length, 2); // B.onend, B.cleanup +-assert.equal(d.listeners('close').length, 3); // A.cleanup, B.onclose, B.cleanup +-assert.equal(w.listeners('end').length, 0); +-assert.equal(w.listeners('close').length, 1); // B.cleanup +- +-r.emit('end'); +-assert.equal(d.endCalls, 1); +-assert.equal(w.endCalls, 0); +-assert.equal(r.listeners('end').length, 0); +-assert.equal(r.listeners('close').length, 0); +-assert.equal(d.listeners('end').length, 2); // B.onend, B.cleanup +-assert.equal(d.listeners('close').length, 2); // B.onclose, B.cleanup +-assert.equal(w.listeners('end').length, 0); +-assert.equal(w.listeners('close').length, 1); // B.cleanup +- +-d.emit('end'); +-assert.equal(d.endCalls, 1); +-assert.equal(w.endCalls, 1); +-assert.equal(r.listeners('end').length, 0); +-assert.equal(r.listeners('close').length, 0); +-assert.equal(d.listeners('end').length, 0); +-assert.equal(d.listeners('close').length, 0); +-assert.equal(w.listeners('end').length, 0); +-assert.equal(w.listeners('close').length, 0); +diff --git a/test/simple/test-stream-pipe-error-handling.js b/test/simple/test-stream-pipe-error-handling.js +index c5d724b..c7d6b7d 100644 +--- a/test/simple/test-stream-pipe-error-handling.js ++++ b/test/simple/test-stream-pipe-error-handling.js +@@ -21,7 +21,7 @@ + + var common = require('../common'); + var assert = require('assert'); +-var Stream = require('stream').Stream; ++var Stream = require('../../').Stream; + + (function testErrorListenerCatches() { + var source = new Stream(); +diff --git a/test/simple/test-stream-pipe-event.js b/test/simple/test-stream-pipe-event.js +index cb9d5fe..56f8d61 100644 +--- a/test/simple/test-stream-pipe-event.js ++++ b/test/simple/test-stream-pipe-event.js +@@ -20,7 +20,7 @@ + // USE OR OTHER DEALINGS IN THE SOFTWARE. 
+ + var common = require('../common'); +-var stream = require('stream'); ++var stream = require('../../'); + var assert = require('assert'); + var util = require('util'); + +diff --git a/test/simple/test-stream-push-order.js b/test/simple/test-stream-push-order.js +index f2e6ec2..a5c9bf9 100644 +--- a/test/simple/test-stream-push-order.js ++++ b/test/simple/test-stream-push-order.js +@@ -20,7 +20,7 @@ + // USE OR OTHER DEALINGS IN THE SOFTWARE. + + var common = require('../common.js'); +-var Readable = require('stream').Readable; ++var Readable = require('../../').Readable; + var assert = require('assert'); + + var s = new Readable({ +diff --git a/test/simple/test-stream-push-strings.js b/test/simple/test-stream-push-strings.js +index 06f43dc..1701a9a 100644 +--- a/test/simple/test-stream-push-strings.js ++++ b/test/simple/test-stream-push-strings.js +@@ -22,7 +22,7 @@ + var common = require('../common'); + var assert = require('assert'); + +-var Readable = require('stream').Readable; ++var Readable = require('../../').Readable; + var util = require('util'); + + util.inherits(MyStream, Readable); +diff --git a/test/simple/test-stream-readable-event.js b/test/simple/test-stream-readable-event.js +index ba6a577..a8e6f7b 100644 +--- a/test/simple/test-stream-readable-event.js ++++ b/test/simple/test-stream-readable-event.js +@@ -22,7 +22,7 @@ + var common = require('../common'); + var assert = require('assert'); + +-var Readable = require('stream').Readable; ++var Readable = require('../../').Readable; + + (function first() { + // First test, not reading when the readable is added. +diff --git a/test/simple/test-stream-readable-flow-recursion.js b/test/simple/test-stream-readable-flow-recursion.js +index 2891ad6..11689ba 100644 +--- a/test/simple/test-stream-readable-flow-recursion.js ++++ b/test/simple/test-stream-readable-flow-recursion.js +@@ -27,7 +27,7 @@ var assert = require('assert'); + // more data continuously, but without triggering a nextTick + // warning or RangeError. + +-var Readable = require('stream').Readable; ++var Readable = require('../../').Readable; + + // throw an error if we trigger a nextTick warning. + process.throwDeprecation = true; +diff --git a/test/simple/test-stream-unshift-empty-chunk.js b/test/simple/test-stream-unshift-empty-chunk.js +index 0c96476..7827538 100644 +--- a/test/simple/test-stream-unshift-empty-chunk.js ++++ b/test/simple/test-stream-unshift-empty-chunk.js +@@ -24,7 +24,7 @@ var assert = require('assert'); + + // This test verifies that stream.unshift(Buffer(0)) or + // stream.unshift('') does not set state.reading=false. +-var Readable = require('stream').Readable; ++var Readable = require('../../').Readable; + + var r = new Readable(); + var nChunks = 10; +diff --git a/test/simple/test-stream-unshift-read-race.js b/test/simple/test-stream-unshift-read-race.js +index 83fd9fa..17c18aa 100644 +--- a/test/simple/test-stream-unshift-read-race.js ++++ b/test/simple/test-stream-unshift-read-race.js +@@ -29,7 +29,7 @@ var assert = require('assert'); + // 3. push() after the EOF signaling null is an error. + // 4. _read() is not called after pushing the EOF null chunk. + +-var stream = require('stream'); ++var stream = require('../../'); + var hwm = 10; + var r = stream.Readable({ highWaterMark: hwm }); + var chunks = 10; +@@ -51,7 +51,14 @@ r._read = function(n) { + + function push(fast) { + assert(!pushedNull, 'push() after null push'); +- var c = pos >= data.length ? 
null : data.slice(pos, pos + n); ++ var c; ++ if (pos >= data.length) ++ c = null; ++ else { ++ if (n + pos > data.length) ++ n = data.length - pos; ++ c = data.slice(pos, pos + n); ++ } + pushedNull = c === null; + if (fast) { + pos += n; +diff --git a/test/simple/test-stream-writev.js b/test/simple/test-stream-writev.js +index 5b49e6e..b5321f3 100644 +--- a/test/simple/test-stream-writev.js ++++ b/test/simple/test-stream-writev.js +@@ -22,7 +22,7 @@ + var common = require('../common'); + var assert = require('assert'); + +-var stream = require('stream'); ++var stream = require('../../'); + + var queue = []; + for (var decode = 0; decode < 2; decode++) { +diff --git a/test/simple/test-stream2-basic.js b/test/simple/test-stream2-basic.js +index 3814bf0..248c1be 100644 +--- a/test/simple/test-stream2-basic.js ++++ b/test/simple/test-stream2-basic.js +@@ -21,7 +21,7 @@ + + + var common = require('../common.js'); +-var R = require('_stream_readable'); ++var R = require('../../lib/_stream_readable'); + var assert = require('assert'); + + var util = require('util'); +diff --git a/test/simple/test-stream2-compatibility.js b/test/simple/test-stream2-compatibility.js +index 6cdd4e9..f0fa84b 100644 +--- a/test/simple/test-stream2-compatibility.js ++++ b/test/simple/test-stream2-compatibility.js +@@ -21,7 +21,7 @@ + + + var common = require('../common.js'); +-var R = require('_stream_readable'); ++var R = require('../../lib/_stream_readable'); + var assert = require('assert'); + + var util = require('util'); +diff --git a/test/simple/test-stream2-finish-pipe.js b/test/simple/test-stream2-finish-pipe.js +index 39b274f..006a19b 100644 +--- a/test/simple/test-stream2-finish-pipe.js ++++ b/test/simple/test-stream2-finish-pipe.js +@@ -20,7 +20,7 @@ + // USE OR OTHER DEALINGS IN THE SOFTWARE. + + var common = require('../common.js'); +-var stream = require('stream'); ++var stream = require('../../'); + var Buffer = require('buffer').Buffer; + + var r = new stream.Readable(); +diff --git a/test/simple/test-stream2-fs.js b/test/simple/test-stream2-fs.js +deleted file mode 100644 +index e162406..0000000 +--- a/test/simple/test-stream2-fs.js ++++ /dev/null +@@ -1,72 +0,0 @@ +-// Copyright Joyent, Inc. and other Node contributors. +-// +-// Permission is hereby granted, free of charge, to any person obtaining a +-// copy of this software and associated documentation files (the +-// "Software"), to deal in the Software without restriction, including +-// without limitation the rights to use, copy, modify, merge, publish, +-// distribute, sublicense, and/or sell copies of the Software, and to permit +-// persons to whom the Software is furnished to do so, subject to the +-// following conditions: +-// +-// The above copyright notice and this permission notice shall be included +-// in all copies or substantial portions of the Software. +-// +-// THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS +-// OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF +-// MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN +-// NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, +-// DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR +-// OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE +-// USE OR OTHER DEALINGS IN THE SOFTWARE. 
+- +- +-var common = require('../common.js'); +-var R = require('_stream_readable'); +-var assert = require('assert'); +- +-var fs = require('fs'); +-var FSReadable = fs.ReadStream; +- +-var path = require('path'); +-var file = path.resolve(common.fixturesDir, 'x1024.txt'); +- +-var size = fs.statSync(file).size; +- +-var expectLengths = [1024]; +- +-var util = require('util'); +-var Stream = require('stream'); +- +-util.inherits(TestWriter, Stream); +- +-function TestWriter() { +- Stream.apply(this); +- this.buffer = []; +- this.length = 0; +-} +- +-TestWriter.prototype.write = function(c) { +- this.buffer.push(c.toString()); +- this.length += c.length; +- return true; +-}; +- +-TestWriter.prototype.end = function(c) { +- if (c) this.buffer.push(c.toString()); +- this.emit('results', this.buffer); +-} +- +-var r = new FSReadable(file); +-var w = new TestWriter(); +- +-w.on('results', function(res) { +- console.error(res, w.length); +- assert.equal(w.length, size); +- var l = 0; +- assert.deepEqual(res.map(function (c) { +- return c.length; +- }), expectLengths); +- console.log('ok'); +-}); +- +-r.pipe(w); +diff --git a/test/simple/test-stream2-httpclient-response-end.js b/test/simple/test-stream2-httpclient-response-end.js +deleted file mode 100644 +index 15cffc2..0000000 +--- a/test/simple/test-stream2-httpclient-response-end.js ++++ /dev/null +@@ -1,52 +0,0 @@ +-// Copyright Joyent, Inc. and other Node contributors. +-// +-// Permission is hereby granted, free of charge, to any person obtaining a +-// copy of this software and associated documentation files (the +-// "Software"), to deal in the Software without restriction, including +-// without limitation the rights to use, copy, modify, merge, publish, +-// distribute, sublicense, and/or sell copies of the Software, and to permit +-// persons to whom the Software is furnished to do so, subject to the +-// following conditions: +-// +-// The above copyright notice and this permission notice shall be included +-// in all copies or substantial portions of the Software. +-// +-// THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS +-// OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF +-// MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN +-// NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, +-// DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR +-// OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE +-// USE OR OTHER DEALINGS IN THE SOFTWARE. 
+- +-var common = require('../common.js'); +-var assert = require('assert'); +-var http = require('http'); +-var msg = 'Hello'; +-var readable_event = false; +-var end_event = false; +-var server = http.createServer(function(req, res) { +- res.writeHead(200, {'Content-Type': 'text/plain'}); +- res.end(msg); +-}).listen(common.PORT, function() { +- http.get({port: common.PORT}, function(res) { +- var data = ''; +- res.on('readable', function() { +- console.log('readable event'); +- readable_event = true; +- data += res.read(); +- }); +- res.on('end', function() { +- console.log('end event'); +- end_event = true; +- assert.strictEqual(msg, data); +- server.close(); +- }); +- }); +-}); +- +-process.on('exit', function() { +- assert(readable_event); +- assert(end_event); +-}); +- +diff --git a/test/simple/test-stream2-large-read-stall.js b/test/simple/test-stream2-large-read-stall.js +index 2fbfbca..667985b 100644 +--- a/test/simple/test-stream2-large-read-stall.js ++++ b/test/simple/test-stream2-large-read-stall.js +@@ -30,7 +30,7 @@ var PUSHSIZE = 20; + var PUSHCOUNT = 1000; + var HWM = 50; + +-var Readable = require('stream').Readable; ++var Readable = require('../../').Readable; + var r = new Readable({ + highWaterMark: HWM + }); +@@ -39,23 +39,23 @@ var rs = r._readableState; + r._read = push; + + r.on('readable', function() { +- console.error('>> readable'); ++ //console.error('>> readable'); + do { +- console.error(' > read(%d)', READSIZE); ++ //console.error(' > read(%d)', READSIZE); + var ret = r.read(READSIZE); +- console.error(' < %j (%d remain)', ret && ret.length, rs.length); ++ //console.error(' < %j (%d remain)', ret && ret.length, rs.length); + } while (ret && ret.length === READSIZE); + +- console.error('<< after read()', +- ret && ret.length, +- rs.needReadable, +- rs.length); ++ //console.error('<< after read()', ++ // ret && ret.length, ++ // rs.needReadable, ++ // rs.length); + }); + + var endEmitted = false; + r.on('end', function() { + endEmitted = true; +- console.error('end'); ++ //console.error('end'); + }); + + var pushes = 0; +@@ -64,11 +64,11 @@ function push() { + return; + + if (pushes++ === PUSHCOUNT) { +- console.error(' push(EOF)'); ++ //console.error(' push(EOF)'); + return r.push(null); + } + +- console.error(' push #%d', pushes); ++ //console.error(' push #%d', pushes); + if (r.push(new Buffer(PUSHSIZE))) + setTimeout(push); + } +diff --git a/test/simple/test-stream2-objects.js b/test/simple/test-stream2-objects.js +index 3e6931d..ff47d89 100644 +--- a/test/simple/test-stream2-objects.js ++++ b/test/simple/test-stream2-objects.js +@@ -21,8 +21,8 @@ + + + var common = require('../common.js'); +-var Readable = require('_stream_readable'); +-var Writable = require('_stream_writable'); ++var Readable = require('../../lib/_stream_readable'); ++var Writable = require('../../lib/_stream_writable'); + var assert = require('assert'); + + // tiny node-tap lookalike. 
+diff --git a/test/simple/test-stream2-pipe-error-handling.js b/test/simple/test-stream2-pipe-error-handling.js +index cf7531c..e3f3e4e 100644 +--- a/test/simple/test-stream2-pipe-error-handling.js ++++ b/test/simple/test-stream2-pipe-error-handling.js +@@ -21,7 +21,7 @@ + + var common = require('../common'); + var assert = require('assert'); +-var stream = require('stream'); ++var stream = require('../../'); + + (function testErrorListenerCatches() { + var count = 1000; +diff --git a/test/simple/test-stream2-pipe-error-once-listener.js b/test/simple/test-stream2-pipe-error-once-listener.js +index 5e8e3cb..53b2616 100755 +--- a/test/simple/test-stream2-pipe-error-once-listener.js ++++ b/test/simple/test-stream2-pipe-error-once-listener.js +@@ -24,7 +24,7 @@ var common = require('../common.js'); + var assert = require('assert'); + + var util = require('util'); +-var stream = require('stream'); ++var stream = require('../../'); + + + var Read = function() { +diff --git a/test/simple/test-stream2-push.js b/test/simple/test-stream2-push.js +index b63edc3..eb2b0e9 100644 +--- a/test/simple/test-stream2-push.js ++++ b/test/simple/test-stream2-push.js +@@ -20,7 +20,7 @@ + // USE OR OTHER DEALINGS IN THE SOFTWARE. + + var common = require('../common.js'); +-var stream = require('stream'); ++var stream = require('../../'); + var Readable = stream.Readable; + var Writable = stream.Writable; + var assert = require('assert'); +diff --git a/test/simple/test-stream2-read-sync-stack.js b/test/simple/test-stream2-read-sync-stack.js +index e8a7305..9740a47 100644 +--- a/test/simple/test-stream2-read-sync-stack.js ++++ b/test/simple/test-stream2-read-sync-stack.js +@@ -21,7 +21,7 @@ + + var common = require('../common'); + var assert = require('assert'); +-var Readable = require('stream').Readable; ++var Readable = require('../../').Readable; + var r = new Readable(); + var N = 256 * 1024; + +diff --git a/test/simple/test-stream2-readable-empty-buffer-no-eof.js b/test/simple/test-stream2-readable-empty-buffer-no-eof.js +index cd30178..4b1659d 100644 +--- a/test/simple/test-stream2-readable-empty-buffer-no-eof.js ++++ b/test/simple/test-stream2-readable-empty-buffer-no-eof.js +@@ -22,10 +22,9 @@ + var common = require('../common'); + var assert = require('assert'); + +-var Readable = require('stream').Readable; ++var Readable = require('../../').Readable; + + test1(); +-test2(); + + function test1() { + var r = new Readable(); +@@ -88,31 +87,3 @@ function test1() { + console.log('ok'); + }); + } +- +-function test2() { +- var r = new Readable({ encoding: 'base64' }); +- var reads = 5; +- r._read = function(n) { +- if (!reads--) +- return r.push(null); // EOF +- else +- return r.push(new Buffer('x')); +- }; +- +- var results = []; +- function flow() { +- var chunk; +- while (null !== (chunk = r.read())) +- results.push(chunk + ''); +- } +- r.on('readable', flow); +- r.on('end', function() { +- results.push('EOF'); +- }); +- flow(); +- +- process.on('exit', function() { +- assert.deepEqual(results, [ 'eHh4', 'eHg=', 'EOF' ]); +- console.log('ok'); +- }); +-} +diff --git a/test/simple/test-stream2-readable-from-list.js b/test/simple/test-stream2-readable-from-list.js +index 7c96ffe..04a96f5 100644 +--- a/test/simple/test-stream2-readable-from-list.js ++++ b/test/simple/test-stream2-readable-from-list.js +@@ -21,7 +21,7 @@ + + var assert = require('assert'); + var common = require('../common.js'); +-var fromList = require('_stream_readable')._fromList; ++var fromList = 
require('../../lib/_stream_readable')._fromList; + + // tiny node-tap lookalike. + var tests = []; +diff --git a/test/simple/test-stream2-readable-legacy-drain.js b/test/simple/test-stream2-readable-legacy-drain.js +index 675da8e..51fd3d5 100644 +--- a/test/simple/test-stream2-readable-legacy-drain.js ++++ b/test/simple/test-stream2-readable-legacy-drain.js +@@ -22,7 +22,7 @@ + var common = require('../common'); + var assert = require('assert'); + +-var Stream = require('stream'); ++var Stream = require('../../'); + var Readable = Stream.Readable; + + var r = new Readable(); +diff --git a/test/simple/test-stream2-readable-non-empty-end.js b/test/simple/test-stream2-readable-non-empty-end.js +index 7314ae7..c971898 100644 +--- a/test/simple/test-stream2-readable-non-empty-end.js ++++ b/test/simple/test-stream2-readable-non-empty-end.js +@@ -21,7 +21,7 @@ + + var assert = require('assert'); + var common = require('../common.js'); +-var Readable = require('_stream_readable'); ++var Readable = require('../../lib/_stream_readable'); + + var len = 0; + var chunks = new Array(10); +diff --git a/test/simple/test-stream2-readable-wrap-empty.js b/test/simple/test-stream2-readable-wrap-empty.js +index 2e5cf25..fd8a3dc 100644 +--- a/test/simple/test-stream2-readable-wrap-empty.js ++++ b/test/simple/test-stream2-readable-wrap-empty.js +@@ -22,7 +22,7 @@ + var common = require('../common'); + var assert = require('assert'); + +-var Readable = require('_stream_readable'); ++var Readable = require('../../lib/_stream_readable'); + var EE = require('events').EventEmitter; + + var oldStream = new EE(); +diff --git a/test/simple/test-stream2-readable-wrap.js b/test/simple/test-stream2-readable-wrap.js +index 90eea01..6b177f7 100644 +--- a/test/simple/test-stream2-readable-wrap.js ++++ b/test/simple/test-stream2-readable-wrap.js +@@ -22,8 +22,8 @@ + var common = require('../common'); + var assert = require('assert'); + +-var Readable = require('_stream_readable'); +-var Writable = require('_stream_writable'); ++var Readable = require('../../lib/_stream_readable'); ++var Writable = require('../../lib/_stream_writable'); + var EE = require('events').EventEmitter; + + var testRuns = 0, completedRuns = 0; +diff --git a/test/simple/test-stream2-set-encoding.js b/test/simple/test-stream2-set-encoding.js +index 5d2c32a..685531b 100644 +--- a/test/simple/test-stream2-set-encoding.js ++++ b/test/simple/test-stream2-set-encoding.js +@@ -22,7 +22,7 @@ + + var common = require('../common.js'); + var assert = require('assert'); +-var R = require('_stream_readable'); ++var R = require('../../lib/_stream_readable'); + var util = require('util'); + + // tiny node-tap lookalike. +diff --git a/test/simple/test-stream2-transform.js b/test/simple/test-stream2-transform.js +index 9c9ddd8..a0cacc6 100644 +--- a/test/simple/test-stream2-transform.js ++++ b/test/simple/test-stream2-transform.js +@@ -21,8 +21,8 @@ + + var assert = require('assert'); + var common = require('../common.js'); +-var PassThrough = require('_stream_passthrough'); +-var Transform = require('_stream_transform'); ++var PassThrough = require('../../').PassThrough; ++var Transform = require('../../').Transform; + + // tiny node-tap lookalike. 
+ var tests = []; +diff --git a/test/simple/test-stream2-unpipe-drain.js b/test/simple/test-stream2-unpipe-drain.js +index d66dc3c..365b327 100644 +--- a/test/simple/test-stream2-unpipe-drain.js ++++ b/test/simple/test-stream2-unpipe-drain.js +@@ -22,7 +22,7 @@ + + var common = require('../common.js'); + var assert = require('assert'); +-var stream = require('stream'); ++var stream = require('../../'); + var crypto = require('crypto'); + + var util = require('util'); +diff --git a/test/simple/test-stream2-unpipe-leak.js b/test/simple/test-stream2-unpipe-leak.js +index 99f8746..17c92ae 100644 +--- a/test/simple/test-stream2-unpipe-leak.js ++++ b/test/simple/test-stream2-unpipe-leak.js +@@ -22,7 +22,7 @@ + + var common = require('../common.js'); + var assert = require('assert'); +-var stream = require('stream'); ++var stream = require('../../'); + + var chunk = new Buffer('hallo'); + +diff --git a/test/simple/test-stream2-writable.js b/test/simple/test-stream2-writable.js +index 704100c..209c3a6 100644 +--- a/test/simple/test-stream2-writable.js ++++ b/test/simple/test-stream2-writable.js +@@ -20,8 +20,8 @@ + // USE OR OTHER DEALINGS IN THE SOFTWARE. + + var common = require('../common.js'); +-var W = require('_stream_writable'); +-var D = require('_stream_duplex'); ++var W = require('../../').Writable; ++var D = require('../../').Duplex; + var assert = require('assert'); + + var util = require('util'); +diff --git a/test/simple/test-stream3-pause-then-read.js b/test/simple/test-stream3-pause-then-read.js +index b91bde3..2f72c15 100644 +--- a/test/simple/test-stream3-pause-then-read.js ++++ b/test/simple/test-stream3-pause-then-read.js +@@ -22,7 +22,7 @@ + var common = require('../common'); + var assert = require('assert'); + +-var stream = require('stream'); ++var stream = require('../../'); + var Readable = stream.Readable; + var Writable = stream.Writable; + diff --git a/deps/npm/node_modules/sha/node_modules/readable-stream/lib/_stream_readable.js b/deps/npm/node_modules/sha/node_modules/readable-stream/lib/_stream_readable.js index 0ca77052840..19ab3588984 100644 --- a/deps/npm/node_modules/sha/node_modules/readable-stream/lib/_stream_readable.js +++ b/deps/npm/node_modules/sha/node_modules/readable-stream/lib/_stream_readable.js @@ -49,15 +49,29 @@ util.inherits = require('inherits'); var StringDecoder; + +/**/ +var debug = require('util'); +if (debug && debug.debuglog) { + debug = debug.debuglog('stream'); +} else { + debug = function () {}; +} +/**/ + + util.inherits(Readable, Stream); function ReadableState(options, stream) { + var Duplex = require('./_stream_duplex'); + options = options || {}; // the point at which it stops calling _read() to fill the buffer // Note: 0 is a valid value, means "don't call _read preemptively ever" var hwm = options.highWaterMark; - this.highWaterMark = (hwm || hwm === 0) ? hwm : 16 * 1024; + var defaultHwm = options.objectMode ? 16 : 16 * 1024; + this.highWaterMark = (hwm || hwm === 0) ? hwm : defaultHwm; // cast to ints. this.highWaterMark = ~~this.highWaterMark; @@ -66,19 +80,13 @@ function ReadableState(options, stream) { this.length = 0; this.pipes = null; this.pipesCount = 0; - this.flowing = false; + this.flowing = null; this.ended = false; this.endEmitted = false; this.reading = false; - // In streams that never have any data, and do push(null) right away, - // the consumer can miss the 'end' event if they do some I/O before - // consuming the stream. So, we don't emit('end') until some reading - // happens. 
- this.calledRead = false; - // a flag to be able to tell if the onwrite cb is called immediately, - // or on a later tick. We set this to true at first, becuase any + // or on a later tick. We set this to true at first, because any // actions that shouldn't happen until "later" should generally also // not happen before the first write call. this.sync = true; @@ -94,6 +102,9 @@ function ReadableState(options, stream) { // make all the buffer merging and length checks go away this.objectMode = !!options.objectMode; + if (stream instanceof Duplex) + this.objectMode = this.objectMode || !!options.readableObjectMode; + // Crypto is kind of old and crusty. Historically, its default string // encoding is 'binary' so we have to make this configurable. // Everything else in the universe uses 'utf8', though. @@ -120,6 +131,8 @@ function ReadableState(options, stream) { } function Readable(options) { + var Duplex = require('./_stream_duplex'); + if (!(this instanceof Readable)) return new Readable(options); @@ -138,7 +151,7 @@ function Readable(options) { Readable.prototype.push = function(chunk, encoding) { var state = this._readableState; - if (typeof chunk === 'string' && !state.objectMode) { + if (util.isString(chunk) && !state.objectMode) { encoding = encoding || state.defaultEncoding; if (encoding !== state.encoding) { chunk = new Buffer(chunk, encoding); @@ -159,7 +172,7 @@ function readableAddChunk(stream, state, chunk, encoding, addToFront) { var er = chunkInvalid(state, chunk); if (er) { stream.emit('error', er); - } else if (chunk === null || chunk === undefined) { + } else if (util.isNullOrUndefined(chunk)) { state.reading = false; if (!state.ended) onEofChunk(stream, state); @@ -174,17 +187,24 @@ function readableAddChunk(stream, state, chunk, encoding, addToFront) { if (state.decoder && !addToFront && !encoding) chunk = state.decoder.write(chunk); - // update the buffer info. - state.length += state.objectMode ? 1 : chunk.length; - if (addToFront) { - state.buffer.unshift(chunk); - } else { + if (!addToFront) state.reading = false; - state.buffer.push(chunk); - } - if (state.needReadable) - emitReadable(stream); + // if we want the data now, just emit it. + if (state.flowing && state.length === 0 && !state.sync) { + stream.emit('data', chunk); + stream.read(0); + } else { + // update the buffer info. + state.length += state.objectMode ? 1 : chunk.length; + if (addToFront) + state.buffer.unshift(chunk); + else + state.buffer.push(chunk); + + if (state.needReadable) + emitReadable(stream); + } maybeReadMore(stream, state); } @@ -217,6 +237,7 @@ Readable.prototype.setEncoding = function(enc) { StringDecoder = require('string_decoder/').StringDecoder; this._readableState.decoder = new StringDecoder(enc); this._readableState.encoding = enc; + return this; }; // Don't raise the hwm > 128MB @@ -240,7 +261,7 @@ function howMuchToRead(n, state) { if (state.objectMode) return n === 0 ? 0 : 1; - if (isNaN(n) || n === null) { + if (isNaN(n) || util.isNull(n)) { // only flow one buffer at a time if (state.flowing && state.buffer.length) return state.buffer[0].length; @@ -272,11 +293,11 @@ function howMuchToRead(n, state) { // you can override either this method, or the async _read(n) below. 
Readable.prototype.read = function(n) { + debug('read', n); var state = this._readableState; - state.calledRead = true; var nOrig = n; - if (typeof n !== 'number' || n > 0) + if (!util.isNumber(n) || n > 0) state.emittedReadable = false; // if we're doing read(0) to trigger a readable event, but we @@ -285,7 +306,11 @@ Readable.prototype.read = function(n) { if (n === 0 && state.needReadable && (state.length >= state.highWaterMark || state.ended)) { - emitReadable(this); + debug('read: emitReadable', state.length, state.ended); + if (state.length === 0 && state.ended) + endReadable(this); + else + emitReadable(this); return null; } @@ -322,17 +347,23 @@ Readable.prototype.read = function(n) { // if we need a readable event, then we need to do some reading. var doRead = state.needReadable; + debug('need readable', doRead); // if we currently have less than the highWaterMark, then also read some - if (state.length - n <= state.highWaterMark) + if (state.length === 0 || state.length - n < state.highWaterMark) { doRead = true; + debug('length less than watermark', doRead); + } // however, if we've ended, then there's no point, and if we're already // reading, then it's unnecessary. - if (state.ended || state.reading) + if (state.ended || state.reading) { doRead = false; + debug('reading or ended', doRead); + } if (doRead) { + debug('do read'); state.reading = true; state.sync = true; // if the length is currently zero, then we *need* a readable event. @@ -343,9 +374,8 @@ Readable.prototype.read = function(n) { state.sync = false; } - // If _read called its callback synchronously, then `reading` - // will be false, and we need to re-evaluate how much data we - // can return to the user. + // If _read pushed data synchronously, then `reading` will be false, + // and we need to re-evaluate how much data we can return to the user. if (doRead && !state.reading) n = howMuchToRead(nOrig, state); @@ -355,7 +385,7 @@ Readable.prototype.read = function(n) { else ret = null; - if (ret === null) { + if (util.isNull(ret)) { state.needReadable = true; n = 0; } @@ -367,23 +397,22 @@ Readable.prototype.read = function(n) { if (state.length === 0 && !state.ended) state.needReadable = true; - // If we happened to read() exactly the remaining amount in the - // buffer, and the EOF has been seen at this point, then make sure - // that we emit 'end' on the very next tick. - if (state.ended && !state.endEmitted && state.length === 0) + // If we tried to read() past the EOF, then emit end on the next tick. + if (nOrig !== n && state.ended && state.length === 0) endReadable(this); + if (!util.isNull(ret)) + this.emit('data', ret); + return ret; }; function chunkInvalid(state, chunk) { var er = null; - if (!Buffer.isBuffer(chunk) && - 'string' !== typeof chunk && - chunk !== null && - chunk !== undefined && - !state.objectMode && - !er) { + if (!util.isBuffer(chunk) && + !util.isString(chunk) && + !util.isNullOrUndefined(chunk) && + !state.objectMode) { er = new TypeError('Invalid non-string/buffer chunk'); } return er; @@ -400,12 +429,8 @@ function onEofChunk(stream, state) { } state.ended = true; - // if we've ended and we have some data left, then emit - // 'readable' now to make sure it gets picked up. - if (state.length > 0) - emitReadable(stream); - else - endReadable(stream); + // emit 'readable' now to make sure it gets picked up. 
+ emitReadable(stream); } // Don't emit readable right away in sync mode, because this can trigger @@ -414,20 +439,22 @@ function onEofChunk(stream, state) { function emitReadable(stream) { var state = stream._readableState; state.needReadable = false; - if (state.emittedReadable) - return; - - state.emittedReadable = true; - if (state.sync) - process.nextTick(function() { + if (!state.emittedReadable) { + debug('emitReadable', state.flowing); + state.emittedReadable = true; + if (state.sync) + process.nextTick(function() { + emitReadable_(stream); + }); + else emitReadable_(stream); - }); - else - emitReadable_(stream); + } } function emitReadable_(stream) { + debug('emit readable'); stream.emit('readable'); + flow(stream); } @@ -450,6 +477,7 @@ function maybeReadMore_(stream, state) { var len = state.length; while (!state.reading && !state.flowing && !state.ended && state.length < state.highWaterMark) { + debug('maybeReadMore read 0'); stream.read(0); if (len === state.length) // didn't get any data, stop spinning. @@ -484,6 +512,7 @@ Readable.prototype.pipe = function(dest, pipeOpts) { break; } state.pipesCount += 1; + debug('pipe count=%d opts=%j', state.pipesCount, pipeOpts); var doEnd = (!pipeOpts || pipeOpts.end !== false) && dest !== process.stdout && @@ -497,11 +526,14 @@ Readable.prototype.pipe = function(dest, pipeOpts) { dest.on('unpipe', onunpipe); function onunpipe(readable) { - if (readable !== src) return; - cleanup(); + debug('onunpipe'); + if (readable === src) { + cleanup(); + } } function onend() { + debug('onend'); dest.end(); } @@ -513,6 +545,7 @@ Readable.prototype.pipe = function(dest, pipeOpts) { dest.on('drain', ondrain); function cleanup() { + debug('cleanup'); // cleanup event handlers once the pipe is broken dest.removeListener('close', onclose); dest.removeListener('finish', onfinish); @@ -521,19 +554,34 @@ Readable.prototype.pipe = function(dest, pipeOpts) { dest.removeListener('unpipe', onunpipe); src.removeListener('end', onend); src.removeListener('end', cleanup); + src.removeListener('data', ondata); // if the reader is waiting for a drain event from this // specific writer, then it would cause it to never start // flowing again. // So, if this is awaiting a drain, then we just call it now. // If we don't know, then assume that we are waiting for one. - if (!dest._writableState || dest._writableState.needDrain) + if (state.awaitDrain && + (!dest._writableState || dest._writableState.needDrain)) ondrain(); } + src.on('data', ondata); + function ondata(chunk) { + debug('ondata'); + var ret = dest.write(chunk); + if (false === ret) { + debug('false write response, pause', + src._readableState.awaitDrain); + src._readableState.awaitDrain++; + src.pause(); + } + } + // if the dest has an error, then stop piping into it. // however, don't suppress the throwing behavior for this. function onerror(er) { + debug('onerror', er); unpipe(); dest.removeListener('error', onerror); if (EE.listenerCount(dest, 'error') === 0) @@ -557,12 +605,14 @@ Readable.prototype.pipe = function(dest, pipeOpts) { } dest.once('close', onclose); function onfinish() { + debug('onfinish'); dest.removeListener('close', onclose); unpipe(); } dest.once('finish', onfinish); function unpipe() { + debug('unpipe'); src.unpipe(dest); } @@ -571,16 +621,8 @@ Readable.prototype.pipe = function(dest, pipeOpts) { // start the flow if it hasn't been started already. if (!state.flowing) { - // the handler that waits for readable events after all - // the data gets sucked out in flow. 
- // This would be easier to follow with a .once() handler - // in flow(), but that is too slow. - this.on('readable', pipeOnReadable); - - state.flowing = true; - process.nextTick(function() { - flow(src); - }); + debug('pipe resume'); + src.resume(); } return dest; @@ -588,63 +630,15 @@ Readable.prototype.pipe = function(dest, pipeOpts) { function pipeOnDrain(src) { return function() { - var dest = this; var state = src._readableState; - state.awaitDrain--; - if (state.awaitDrain === 0) + debug('pipeOnDrain', state.awaitDrain); + if (state.awaitDrain) + state.awaitDrain--; + if (state.awaitDrain === 0 && EE.listenerCount(src, 'data')) { + state.flowing = true; flow(src); - }; -} - -function flow(src) { - var state = src._readableState; - var chunk; - state.awaitDrain = 0; - - function write(dest, i, list) { - var written = dest.write(chunk); - if (false === written) { - state.awaitDrain++; } - } - - while (state.pipesCount && null !== (chunk = src.read())) { - - if (state.pipesCount === 1) - write(state.pipes, 0, null); - else - forEach(state.pipes, write); - - src.emit('data', chunk); - - // if anyone needs a drain, then we have to wait for that. - if (state.awaitDrain > 0) - return; - } - - // if every destination was unpiped, either before entering this - // function, or in the while loop, then stop flowing. - // - // NB: This is a pretty rare edge case. - if (state.pipesCount === 0) { - state.flowing = false; - - // if there were data event listeners added, then switch to old mode. - if (EE.listenerCount(src, 'data') > 0) - emitDataEvents(src); - return; - } - - // at this point, no one needed a drain, so we just ran out of data - // on the next readable event, start it over again. - state.ranOut = true; -} - -function pipeOnReadable() { - if (this._readableState.ranOut) { - this._readableState.ranOut = false; - flow(this); - } + }; } @@ -667,7 +661,6 @@ Readable.prototype.unpipe = function(dest) { // got a match. state.pipes = null; state.pipesCount = 0; - this.removeListener('readable', pipeOnReadable); state.flowing = false; if (dest) dest.emit('unpipe', this); @@ -682,7 +675,6 @@ Readable.prototype.unpipe = function(dest) { var len = state.pipesCount; state.pipes = null; state.pipesCount = 0; - this.removeListener('readable', pipeOnReadable); state.flowing = false; for (var i = 0; i < len; i++) @@ -710,8 +702,11 @@ Readable.prototype.unpipe = function(dest) { Readable.prototype.on = function(ev, fn) { var res = Stream.prototype.on.call(this, ev, fn); - if (ev === 'data' && !this._readableState.flowing) - emitDataEvents(this); + // If listening to data, and it has not explicitly been paused, + // then call resume to start the flow of data on the next tick. + if (ev === 'data' && false !== this._readableState.flowing) { + this.resume(); + } if (ev === 'readable' && this.readable) { var state = this._readableState; @@ -720,7 +715,11 @@ Readable.prototype.on = function(ev, fn) { state.emittedReadable = false; state.needReadable = true; if (!state.reading) { - this.read(0); + var self = this; + process.nextTick(function() { + debug('readable nexttick read 0'); + self.read(0); + }); } else if (state.length) { emitReadable(this, state); } @@ -734,63 +733,54 @@ Readable.prototype.addListener = Readable.prototype.on; // pause() and resume() are remnants of the legacy readable stream API // If the user uses them, then switch into old mode. 
Readable.prototype.resume = function() { - emitDataEvents(this); - this.read(0); - this.emit('resume'); + var state = this._readableState; + if (!state.flowing) { + debug('resume'); + state.flowing = true; + if (!state.reading) { + debug('resume read 0'); + this.read(0); + } + resume(this, state); + } + return this; }; +function resume(stream, state) { + if (!state.resumeScheduled) { + state.resumeScheduled = true; + process.nextTick(function() { + resume_(stream, state); + }); + } +} + +function resume_(stream, state) { + state.resumeScheduled = false; + stream.emit('resume'); + flow(stream); + if (state.flowing && !state.reading) + stream.read(0); +} + Readable.prototype.pause = function() { - emitDataEvents(this, true); - this.emit('pause'); + debug('call pause flowing=%j', this._readableState.flowing); + if (false !== this._readableState.flowing) { + debug('pause'); + this._readableState.flowing = false; + this.emit('pause'); + } + return this; }; -function emitDataEvents(stream, startPaused) { +function flow(stream) { var state = stream._readableState; - + debug('flow', state.flowing); if (state.flowing) { - // https://github.com/isaacs/readable-stream/issues/16 - throw new Error('Cannot switch to old mode now.'); + do { + var chunk = stream.read(); + } while (null !== chunk && state.flowing); } - - var paused = startPaused || false; - var readable = false; - - // convert to an old-style stream. - stream.readable = true; - stream.pipe = Stream.prototype.pipe; - stream.on = stream.addListener = Stream.prototype.on; - - stream.on('readable', function() { - readable = true; - - var c; - while (!paused && (null !== (c = stream.read()))) - stream.emit('data', c); - - if (c === null) { - readable = false; - stream._readableState.needReadable = true; - } - }); - - stream.pause = function() { - paused = true; - this.emit('pause'); - }; - - stream.resume = function() { - paused = false; - if (readable) - process.nextTick(function() { - stream.emit('readable'); - }); - else - this.read(0); - this.emit('resume'); - }; - - // now make it start, just in case it hadn't already. - stream.emit('readable'); } // wrap an old-style stream as the async data source. @@ -802,6 +792,7 @@ Readable.prototype.wrap = function(stream) { var self = this; stream.on('end', function() { + debug('wrapped end'); if (state.decoder && !state.ended) { var chunk = state.decoder.end(); if (chunk && chunk.length) @@ -812,6 +803,7 @@ Readable.prototype.wrap = function(stream) { }); stream.on('data', function(chunk) { + debug('wrapped data'); if (state.decoder) chunk = state.decoder.write(chunk); if (!chunk || !state.objectMode && !chunk.length) @@ -827,8 +819,7 @@ Readable.prototype.wrap = function(stream) { // proxy all the other methods. // important when wrapping filters and duplexes. for (var i in stream) { - if (typeof stream[i] === 'function' && - typeof this[i] === 'undefined') { + if (util.isFunction(stream[i]) && util.isUndefined(this[i])) { this[i] = function(method) { return function() { return stream[method].apply(stream, arguments); }}(i); @@ -844,6 +835,7 @@ Readable.prototype.wrap = function(stream) { // when we try to consume some more bytes, simply unpause the // underlying stream. 
self._read = function(n) { + debug('wrapped _read', n); if (paused) { paused = false; stream.resume(); @@ -932,7 +924,7 @@ function endReadable(stream) { if (state.length > 0) throw new Error('endReadable called on non-empty stream'); - if (!state.endEmitted && state.calledRead) { + if (!state.endEmitted) { state.ended = true; process.nextTick(function() { // Check that we didn't get one last unshift. diff --git a/deps/npm/node_modules/sha/node_modules/readable-stream/lib/_stream_transform.js b/deps/npm/node_modules/sha/node_modules/readable-stream/lib/_stream_transform.js index eb188df3e86..905c5e45075 100644 --- a/deps/npm/node_modules/sha/node_modules/readable-stream/lib/_stream_transform.js +++ b/deps/npm/node_modules/sha/node_modules/readable-stream/lib/_stream_transform.js @@ -97,7 +97,7 @@ function afterTransform(stream, er, data) { ts.writechunk = null; ts.writecb = null; - if (data !== null && data !== undefined) + if (!util.isNullOrUndefined(data)) stream.push(data); if (cb) @@ -117,7 +117,7 @@ function Transform(options) { Duplex.call(this, options); - var ts = this._transformState = new TransformState(options, this); + this._transformState = new TransformState(options, this); // when the writable side finishes, then flush out anything remaining. var stream = this; @@ -130,8 +130,8 @@ function Transform(options) { // sync guard flag. this._readableState.sync = false; - this.once('finish', function() { - if ('function' === typeof this._flush) + this.once('prefinish', function() { + if (util.isFunction(this._flush)) this._flush(function(er) { done(stream, er); }); @@ -179,7 +179,7 @@ Transform.prototype._write = function(chunk, encoding, cb) { Transform.prototype._read = function(n) { var ts = this._transformState; - if (ts.writechunk !== null && ts.writecb && !ts.transforming) { + if (!util.isNull(ts.writechunk) && ts.writecb && !ts.transforming) { ts.transforming = true; this._transform(ts.writechunk, ts.writeencoding, ts.afterTransform); } else { @@ -197,7 +197,6 @@ function done(stream, er) { // if there's nothing in the write buffer, then that means // that nothing more will ever be provided var ws = stream._writableState; - var rs = stream._readableState; var ts = stream._transformState; if (ws.length) diff --git a/deps/npm/node_modules/sha/node_modules/readable-stream/lib/_stream_writable.js b/deps/npm/node_modules/sha/node_modules/readable-stream/lib/_stream_writable.js index d0254d5a714..db8539cd5b8 100644 --- a/deps/npm/node_modules/sha/node_modules/readable-stream/lib/_stream_writable.js +++ b/deps/npm/node_modules/sha/node_modules/readable-stream/lib/_stream_writable.js @@ -37,7 +37,6 @@ var util = require('core-util-is'); util.inherits = require('inherits'); /**/ - var Stream = require('stream'); util.inherits(Writable, Stream); @@ -49,18 +48,24 @@ function WriteReq(chunk, encoding, cb) { } function WritableState(options, stream) { + var Duplex = require('./_stream_duplex'); + options = options || {}; // the point at which write() starts returning false // Note: 0 is a valid value, means that we always return false if // the entire buffer is not flushed immediately on write() var hwm = options.highWaterMark; - this.highWaterMark = (hwm || hwm === 0) ? hwm : 16 * 1024; + var defaultHwm = options.objectMode ? 16 : 16 * 1024; + this.highWaterMark = (hwm || hwm === 0) ? hwm : defaultHwm; // object stream flag to indicate whether or not this stream // contains buffers or objects. 
this.objectMode = !!options.objectMode; + if (stream instanceof Duplex) + this.objectMode = this.objectMode || !!options.writableObjectMode; + // cast to ints. this.highWaterMark = ~~this.highWaterMark; @@ -91,8 +96,11 @@ function WritableState(options, stream) { // a flag to see when we're in the middle of a write. this.writing = false; + // when true all writes will be buffered until .uncork() call + this.corked = 0; + // a flag to be able to tell if the onwrite cb is called immediately, - // or on a later tick. We set this to true at first, becuase any + // or on a later tick. We set this to true at first, because any // actions that shouldn't happen until "later" should generally also // not happen before the first write call. this.sync = true; @@ -115,6 +123,14 @@ function WritableState(options, stream) { this.buffer = []; + // number of pending user-supplied write callbacks + // this must be 0 before 'finish' can be emitted + this.pendingcb = 0; + + // emit prefinish if the only thing we're waiting for is _write cbs + // This is relevant for synchronous Transform streams + this.prefinished = false; + // True if the error was already emitted and should not be thrown again this.errorEmitted = false; } @@ -157,10 +173,9 @@ function writeAfterEnd(stream, state, cb) { // how many bytes or characters. function validChunk(stream, state, chunk, cb) { var valid = true; - if (!Buffer.isBuffer(chunk) && - 'string' !== typeof chunk && - chunk !== null && - chunk !== undefined && + if (!util.isBuffer(chunk) && + !util.isString(chunk) && + !util.isNullOrUndefined(chunk) && !state.objectMode) { var er = new TypeError('Invalid non-string/buffer chunk'); stream.emit('error', er); @@ -176,31 +191,54 @@ Writable.prototype.write = function(chunk, encoding, cb) { var state = this._writableState; var ret = false; - if (typeof encoding === 'function') { + if (util.isFunction(encoding)) { cb = encoding; encoding = null; } - if (Buffer.isBuffer(chunk)) + if (util.isBuffer(chunk)) encoding = 'buffer'; else if (!encoding) encoding = state.defaultEncoding; - if (typeof cb !== 'function') + if (!util.isFunction(cb)) cb = function() {}; if (state.ended) writeAfterEnd(this, state, cb); - else if (validChunk(this, state, chunk, cb)) + else if (validChunk(this, state, chunk, cb)) { + state.pendingcb++; ret = writeOrBuffer(this, state, chunk, encoding, cb); + } return ret; }; +Writable.prototype.cork = function() { + var state = this._writableState; + + state.corked++; +}; + +Writable.prototype.uncork = function() { + var state = this._writableState; + + if (state.corked) { + state.corked--; + + if (!state.writing && + !state.corked && + !state.finished && + !state.bufferProcessing && + state.buffer.length) + clearBuffer(this, state); + } +}; + function decodeChunk(state, chunk, encoding) { if (!state.objectMode && state.decodeStrings !== false && - typeof chunk === 'string') { + util.isString(chunk)) { chunk = new Buffer(chunk, encoding); } return chunk; @@ -211,7 +249,7 @@ function decodeChunk(state, chunk, encoding) { // If we return false, then we need a drain event, so set that flag. function writeOrBuffer(stream, state, chunk, encoding, cb) { chunk = decodeChunk(state, chunk, encoding); - if (Buffer.isBuffer(chunk)) + if (util.isBuffer(chunk)) encoding = 'buffer'; var len = state.objectMode ? 
1 : chunk.length; @@ -222,30 +260,36 @@ function writeOrBuffer(stream, state, chunk, encoding, cb) { if (!ret) state.needDrain = true; - if (state.writing) + if (state.writing || state.corked) state.buffer.push(new WriteReq(chunk, encoding, cb)); else - doWrite(stream, state, len, chunk, encoding, cb); + doWrite(stream, state, false, len, chunk, encoding, cb); return ret; } -function doWrite(stream, state, len, chunk, encoding, cb) { +function doWrite(stream, state, writev, len, chunk, encoding, cb) { state.writelen = len; state.writecb = cb; state.writing = true; state.sync = true; - stream._write(chunk, encoding, state.onwrite); + if (writev) + stream._writev(chunk, state.onwrite); + else + stream._write(chunk, encoding, state.onwrite); state.sync = false; } function onwriteError(stream, state, sync, er, cb) { if (sync) process.nextTick(function() { + state.pendingcb--; cb(er); }); - else + else { + state.pendingcb--; cb(er); + } stream._writableState.errorEmitted = true; stream.emit('error', er); @@ -271,8 +315,12 @@ function onwrite(stream, er) { // Check if we're actually ready to finish, but don't emit yet var finished = needFinish(stream, state); - if (!finished && !state.bufferProcessing && state.buffer.length) + if (!finished && + !state.corked && + !state.bufferProcessing && + state.buffer.length) { clearBuffer(stream, state); + } if (sync) { process.nextTick(function() { @@ -287,9 +335,9 @@ function onwrite(stream, er) { function afterWrite(stream, state, finished, cb) { if (!finished) onwriteDrain(stream, state); + state.pendingcb--; cb(); - if (finished) - finishMaybe(stream, state); + finishMaybe(stream, state); } // Must force callback to be called on nextTick, so that we don't @@ -307,51 +355,82 @@ function onwriteDrain(stream, state) { function clearBuffer(stream, state) { state.bufferProcessing = true; - for (var c = 0; c < state.buffer.length; c++) { - var entry = state.buffer[c]; - var chunk = entry.chunk; - var encoding = entry.encoding; - var cb = entry.callback; - var len = state.objectMode ? 1 : chunk.length; - - doWrite(stream, state, len, chunk, encoding, cb); - - // if we didn't call the onwrite immediately, then - // it means that we need to wait until it does. - // also, that means that the chunk and cb are currently - // being processed, so move the buffer counter past them. - if (state.writing) { - c++; - break; + if (stream._writev && state.buffer.length > 1) { + // Fast case, write everything using _writev() + var cbs = []; + for (var c = 0; c < state.buffer.length; c++) + cbs.push(state.buffer[c].callback); + + // count the one we are adding, as well. + // TODO(isaacs) clean this up + state.pendingcb++; + doWrite(stream, state, true, state.length, state.buffer, '', function(err) { + for (var i = 0; i < cbs.length; i++) { + state.pendingcb--; + cbs[i](err); + } + }); + + // Clear buffer + state.buffer = []; + } else { + // Slow case, write chunks one-by-one + for (var c = 0; c < state.buffer.length; c++) { + var entry = state.buffer[c]; + var chunk = entry.chunk; + var encoding = entry.encoding; + var cb = entry.callback; + var len = state.objectMode ? 1 : chunk.length; + + doWrite(stream, state, false, len, chunk, encoding, cb); + + // if we didn't call the onwrite immediately, then + // it means that we need to wait until it does. + // also, that means that the chunk and cb are currently + // being processed, so move the buffer counter past them. 
+ if (state.writing) { + c++; + break; + } } + + if (c < state.buffer.length) + state.buffer = state.buffer.slice(c); + else + state.buffer.length = 0; } state.bufferProcessing = false; - if (c < state.buffer.length) - state.buffer = state.buffer.slice(c); - else - state.buffer.length = 0; } Writable.prototype._write = function(chunk, encoding, cb) { cb(new Error('not implemented')); + }; +Writable.prototype._writev = null; + Writable.prototype.end = function(chunk, encoding, cb) { var state = this._writableState; - if (typeof chunk === 'function') { + if (util.isFunction(chunk)) { cb = chunk; chunk = null; encoding = null; - } else if (typeof encoding === 'function') { + } else if (util.isFunction(encoding)) { cb = encoding; encoding = null; } - if (typeof chunk !== 'undefined' && chunk !== null) + if (!util.isNullOrUndefined(chunk)) this.write(chunk, encoding); + // .end() fully uncorks + if (state.corked) { + state.corked = 1; + this.uncork(); + } + // ignore unnecessary end() calls. if (!state.ending && !state.finished) endWritable(this, state, cb); @@ -365,11 +444,22 @@ function needFinish(stream, state) { !state.writing); } +function prefinish(stream, state) { + if (!state.prefinished) { + state.prefinished = true; + stream.emit('prefinish'); + } +} + function finishMaybe(stream, state) { var need = needFinish(stream, state); if (need) { - state.finished = true; - stream.emit('finish'); + if (state.pendingcb === 0) { + prefinish(stream, state); + state.finished = true; + stream.emit('finish'); + } else + prefinish(stream, state); } return need; } diff --git a/deps/npm/node_modules/sha/node_modules/readable-stream/node_modules/core-util-is/package.json b/deps/npm/node_modules/sha/node_modules/readable-stream/node_modules/core-util-is/package.json index 2b7593c1939..4eb9ce4f3c1 100644 --- a/deps/npm/node_modules/sha/node_modules/readable-stream/node_modules/core-util-is/package.json +++ b/deps/npm/node_modules/sha/node_modules/readable-stream/node_modules/core-util-is/package.json @@ -35,7 +35,7 @@ "shasum": "6b07085aef9a3ccac6ee53bf9d3df0c1521a5538", "tarball": "http://registry.npmjs.org/core-util-is/-/core-util-is-1.0.1.tgz" }, - "_from": "core-util-is@~1.0.0", + "_from": "core-util-is@>=1.0.0 <1.1.0", "_npmVersion": "1.3.23", "_npmUser": { "name": "isaacs", diff --git a/deps/npm/node_modules/sha/node_modules/readable-stream/node_modules/string_decoder/index.js b/deps/npm/node_modules/sha/node_modules/readable-stream/node_modules/string_decoder/index.js index 2e44a03e15d..b00e54fb790 100644 --- a/deps/npm/node_modules/sha/node_modules/readable-stream/node_modules/string_decoder/index.js +++ b/deps/npm/node_modules/sha/node_modules/readable-stream/node_modules/string_decoder/index.js @@ -36,6 +36,14 @@ function assertEncoding(encoding) { } } +// StringDecoder provides an interface for efficiently splitting a series of +// buffers into a series of JS strings without breaking apart multi-byte +// characters. CESU-8 is handled as part of the UTF-8 encoding. +// +// @TODO Handling all encodings inside a single object makes it very difficult +// to reason about this code, so it should be split up in the future. +// @TODO There should be a utf8-strict encoding that rejects invalid UTF-8 code +// points as used by CESU-8. 
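Two of the behaviors introduced above are worth a concrete illustration. First, the cork/uncork machinery added to _stream_writable.js: while a stream is corked, write() buffers every chunk, and on uncork() the whole backlog is handed to a single _writev() call when the stream implements one. A minimal sketch, assuming the readable-stream ~1.1 ("streams3") API vendored here; the stream instance and the logging are illustrative, not part of the patch:

```javascript
var Writable = require('readable-stream').Writable;

var w = new Writable();
w._write = function (chunk, encoding, cb) {
  console.log('_write:', chunk.toString());
  cb();
};
// When defined, _writev receives the entire buffered backlog at once;
// in this implementation each entry carries chunk/encoding/callback.
w._writev = function (chunks, cb) {
  console.log('_writev got', chunks.length, 'chunks in one batch');
  cb();
};

w.cork();   // from here on, writes are buffered instead of hitting _write
w.write('a');
w.write('b');
w.write('c');
w.uncork(); // flushes all three chunks through a single _writev call
```

Second, the StringDecoder contract described in the comment block directly above: a multi-byte character split across two writes is buffered until it can be emitted whole. A small demonstration of that guarantee (the byte values are hypothetical, and `new Buffer` is the constructor style of this era):

```javascript
var StringDecoder = require('string_decoder').StringDecoder;
var decoder = new StringDecoder('utf8');

// The euro sign is one character but three UTF-8 bytes.
var euro = new Buffer([0xE2, 0x82, 0xAC]);

console.log(decoder.write(euro.slice(0, 1))); // '' -- partial char, buffered
console.log(decoder.write(euro.slice(1)));    // '€' -- emitted once complete
```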
var StringDecoder = exports.StringDecoder = function(encoding) { this.encoding = (encoding || 'utf8').toLowerCase().replace(/[-_]/, ''); assertEncoding(encoding); @@ -60,37 +68,50 @@ var StringDecoder = exports.StringDecoder = function(encoding) { return; } + // Enough space to store all bytes of a single character. UTF-8 needs 4 + // bytes, but CESU-8 may require up to 6 (3 bytes per surrogate). this.charBuffer = new Buffer(6); + // Number of bytes received for the current incomplete multi-byte character. this.charReceived = 0; + // Number of bytes expected for the current incomplete multi-byte character. this.charLength = 0; }; +// write decodes the given buffer and returns it as JS string that is +// guaranteed to not contain any partial multi-byte characters. Any partial +// character found at the end of the buffer is buffered up, and will be +// returned when calling write again with the remaining bytes. +// +// Note: Converting a Buffer containing an orphan surrogate to a String +// currently works, but converting a String to a Buffer (via `new Buffer`, or +// Buffer#write) will replace incomplete surrogates with the unicode +// replacement character. See https://codereview.chromium.org/121173009/ . StringDecoder.prototype.write = function(buffer) { var charStr = ''; - var offset = 0; - // if our last write ended with an incomplete multibyte character while (this.charLength) { // determine how many remaining bytes this buffer has to offer for this char - var i = (buffer.length >= this.charLength - this.charReceived) ? - this.charLength - this.charReceived : - buffer.length; + var available = (buffer.length >= this.charLength - this.charReceived) ? + this.charLength - this.charReceived : + buffer.length; // add the new bytes to the char buffer - buffer.copy(this.charBuffer, this.charReceived, offset, i); - this.charReceived += (i - offset); - offset = i; + buffer.copy(this.charBuffer, this.charReceived, 0, available); + this.charReceived += available; if (this.charReceived < this.charLength) { // still not enough chars in this buffer? wait for more ... 
return ''; } + // remove bytes belonging to the current character from the buffer + buffer = buffer.slice(available, buffer.length); + // get the character that was split charStr = this.charBuffer.slice(0, this.charLength).toString(this.encoding); - // lead surrogate (D800-DBFF) is also the incomplete character + // CESU-8: lead surrogate (D800-DBFF) is also the incomplete character var charCode = charStr.charCodeAt(charStr.length - 1); if (charCode >= 0xD800 && charCode <= 0xDBFF) { this.charLength += this.surrogateSize; @@ -100,34 +121,33 @@ StringDecoder.prototype.write = function(buffer) { this.charReceived = this.charLength = 0; // if there are no more bytes in this buffer, just emit our char - if (i == buffer.length) return charStr; - - // otherwise cut off the characters end from the beginning of this buffer - buffer = buffer.slice(i, buffer.length); + if (buffer.length === 0) { + return charStr; + } break; } - var lenIncomplete = this.detectIncompleteChar(buffer); + // determine and set charLength / charReceived + this.detectIncompleteChar(buffer); var end = buffer.length; if (this.charLength) { // buffer the incomplete character bytes we got - buffer.copy(this.charBuffer, 0, buffer.length - lenIncomplete, end); - this.charReceived = lenIncomplete; - end -= lenIncomplete; + buffer.copy(this.charBuffer, 0, buffer.length - this.charReceived, end); + end -= this.charReceived; } charStr += buffer.toString(this.encoding, 0, end); var end = charStr.length - 1; var charCode = charStr.charCodeAt(end); - // lead surrogate (D800-DBFF) is also the incomplete character + // CESU-8: lead surrogate (D800-DBFF) is also the incomplete character if (charCode >= 0xD800 && charCode <= 0xDBFF) { var size = this.surrogateSize; this.charLength += size; this.charReceived += size; this.charBuffer.copy(this.charBuffer, size, 0, size); - this.charBuffer.write(charStr.charAt(charStr.length - 1), this.encoding); + buffer.copy(this.charBuffer, 0, 0, size); return charStr.substring(0, end); } @@ -135,6 +155,10 @@ StringDecoder.prototype.write = function(buffer) { return charStr; }; +// detectIncompleteChar determines if there is an incomplete UTF-8 character at +// the end of the given buffer. If so, it sets this.charLength to the byte +// length of that character, and sets this.charReceived to the number of bytes +// that are available for this character. StringDecoder.prototype.detectIncompleteChar = function(buffer) { // determine how many bytes we have to check at the end of this buffer var i = (buffer.length >= 3) ? 3 : buffer.length; @@ -164,8 +188,7 @@ StringDecoder.prototype.detectIncompleteChar = function(buffer) { break; } } - - return i; + this.charReceived = i; }; StringDecoder.prototype.end = function(buffer) { @@ -188,13 +211,11 @@ function passThroughWrite(buffer) { } function utf16DetectIncompleteChar(buffer) { - var incomplete = this.charReceived = buffer.length % 2; - this.charLength = incomplete ? 2 : 0; - return incomplete; + this.charReceived = buffer.length % 2; + this.charLength = this.charReceived ? 2 : 0; } function base64DetectIncompleteChar(buffer) { - var incomplete = this.charReceived = buffer.length % 3; - this.charLength = incomplete ? 3 : 0; - return incomplete; + this.charReceived = buffer.length % 3; + this.charLength = this.charReceived ? 
3 : 0; } diff --git a/deps/npm/node_modules/sha/node_modules/readable-stream/node_modules/string_decoder/package.json b/deps/npm/node_modules/sha/node_modules/readable-stream/node_modules/string_decoder/package.json index 2e827f5921f..0364d54ba46 100644 --- a/deps/npm/node_modules/sha/node_modules/readable-stream/node_modules/string_decoder/package.json +++ b/deps/npm/node_modules/sha/node_modules/readable-stream/node_modules/string_decoder/package.json @@ -1,6 +1,6 @@ { "name": "string_decoder", - "version": "0.10.25-1", + "version": "0.10.31", "description": "The string_decoder module from Node core", "main": "index.js", "dependencies": {}, @@ -22,16 +22,14 @@ "browserify" ], "license": "MIT", + "gitHead": "d46d4fd87cf1d06e031c23f1ba170ca7d4ade9a0", "bugs": { "url": "https://github.com/rvagg/string_decoder/issues" }, - "_id": "string_decoder@0.10.25-1", - "dist": { - "shasum": "f387babd95d23a2bb73b1fbf2cb3efab6f78baab", - "tarball": "http://registry.npmjs.org/string_decoder/-/string_decoder-0.10.25-1.tgz" - }, - "_from": "string_decoder@~0.10.x", - "_npmVersion": "1.3.24", + "_id": "string_decoder@0.10.31", + "_shasum": "62e203bc41766c6c28c9fc84301dab1c5310fa94", + "_from": "string_decoder@>=0.10.0 <0.11.0", + "_npmVersion": "1.4.23", "_npmUser": { "name": "rvagg", "email": "rod@vagg.org" @@ -46,8 +44,11 @@ "email": "rod@vagg.org" } ], + "dist": { + "shasum": "62e203bc41766c6c28c9fc84301dab1c5310fa94", + "tarball": "http://registry.npmjs.org/string_decoder/-/string_decoder-0.10.31.tgz" + }, "directories": {}, - "_shasum": "f387babd95d23a2bb73b1fbf2cb3efab6f78baab", - "_resolved": "https://registry.npmjs.org/string_decoder/-/string_decoder-0.10.25-1.tgz", + "_resolved": "https://registry.npmjs.org/string_decoder/-/string_decoder-0.10.31.tgz", "readme": "ERROR: No README data found!" 
} diff --git a/deps/npm/node_modules/sha/node_modules/readable-stream/package.json b/deps/npm/node_modules/sha/node_modules/readable-stream/package.json index 8d8961b95aa..9344b0f9c49 100644 --- a/deps/npm/node_modules/sha/node_modules/readable-stream/package.json +++ b/deps/npm/node_modules/sha/node_modules/readable-stream/package.json @@ -1,7 +1,7 @@ { "name": "readable-stream", - "version": "1.0.27-1", - "description": "Streams2, a user-land copy of the stream library from Node.js v0.10.x", + "version": "1.1.13", + "description": "Streams3, a user-land copy of the stream library from Node.js v0.11.x", "main": "readable.js", "dependencies": { "core-util-is": "~1.0.0", @@ -33,17 +33,15 @@ "url": "http://blog.izs.me/" }, "license": "MIT", + "gitHead": "3b672fd7ae92acf5b4ffdbabf74b372a0a56b051", "bugs": { "url": "https://github.com/isaacs/readable-stream/issues" }, "homepage": "https://github.com/isaacs/readable-stream", - "_id": "readable-stream@1.0.27-1", - "dist": { - "shasum": "6b67983c20357cefd07f0165001a16d710d91078", - "tarball": "http://registry.npmjs.org/readable-stream/-/readable-stream-1.0.27-1.tgz" - }, - "_from": "readable-stream@1.0", - "_npmVersion": "1.4.3", + "_id": "readable-stream@1.1.13", + "_shasum": "f6eef764f514c89e2b9e23146a75ba106756d23e", + "_from": "readable-stream@>=1.1.0 <1.2.0", + "_npmVersion": "1.4.23", "_npmUser": { "name": "rvagg", "email": "rod@vagg.org" @@ -62,8 +60,11 @@ "email": "rod@vagg.org" } ], + "dist": { + "shasum": "f6eef764f514c89e2b9e23146a75ba106756d23e", + "tarball": "http://registry.npmjs.org/readable-stream/-/readable-stream-1.1.13.tgz" + }, "directories": {}, - "_shasum": "6b67983c20357cefd07f0165001a16d710d91078", - "_resolved": "https://registry.npmjs.org/readable-stream/-/readable-stream-1.0.27-1.tgz", + "_resolved": "https://registry.npmjs.org/readable-stream/-/readable-stream-1.1.13.tgz", "readme": "ERROR: No README data found!" 
} diff --git a/deps/npm/node_modules/sha/node_modules/readable-stream/readable.js b/deps/npm/node_modules/sha/node_modules/readable-stream/readable.js index 4d1ddfc734e..09b8bf5091a 100644 --- a/deps/npm/node_modules/sha/node_modules/readable-stream/readable.js +++ b/deps/npm/node_modules/sha/node_modules/readable-stream/readable.js @@ -1,4 +1,5 @@ exports = module.exports = require('./lib/_stream_readable.js'); +exports.Stream = require('stream'); exports.Readable = exports; exports.Writable = require('./lib/_stream_writable.js'); exports.Duplex = require('./lib/_stream_duplex.js'); diff --git a/deps/npm/node_modules/sha/package.json b/deps/npm/node_modules/sha/package.json index 091919964a9..f5aff490799 100644 --- a/deps/npm/node_modules/sha/package.json +++ b/deps/npm/node_modules/sha/package.json @@ -1,6 +1,6 @@ { "name": "sha", - "version": "1.2.4", + "version": "1.3.0", "description": "Check and get file hashes", "scripts": { "test": "mocha -R spec" @@ -12,29 +12,27 @@ "license": "BSD", "optionalDependencies": { "graceful-fs": "2 || 3", - "readable-stream": "1.0" + "readable-stream": "~1.1" }, "devDependencies": { "mocha": "~1.9.0" }, + "gitHead": "f1985eefbf7538e5809a2157c728d2f740901600", "bugs": { "url": "https://github.com/ForbesLindesay/sha/issues" }, "homepage": "https://github.com/ForbesLindesay/sha", "dependencies": { "graceful-fs": "2 || 3", - "readable-stream": "1.0" + "readable-stream": "~1.1" }, - "_id": "sha@1.2.4", - "dist": { - "shasum": "1f9a377f27b6fdee409b9b858e43da702be48a4d", - "tarball": "http://registry.npmjs.org/sha/-/sha-1.2.4.tgz" - }, - "_from": "sha@latest", - "_npmVersion": "1.4.3", + "_id": "sha@1.3.0", + "_shasum": "79f4787045d0ede7327d702c25c443460dbc6764", + "_from": "sha@>=1.3.0 <1.4.0", + "_npmVersion": "1.5.0-alpha-4", "_npmUser": { "name": "forbeslindesay", - "email": "forbes@lindeay.co.uk" + "email": "forbes@lindesay.co.uk" }, "maintainers": [ { @@ -42,7 +40,10 @@ "email": "forbes@lindesay.co.uk" } ], + "dist": { + "shasum": "79f4787045d0ede7327d702c25c443460dbc6764", + "tarball": "http://registry.npmjs.org/sha/-/sha-1.3.0.tgz" + }, "directories": {}, - "_shasum": "1f9a377f27b6fdee409b9b858e43da702be48a4d", - "_resolved": "https://registry.npmjs.org/sha/-/sha-1.2.4.tgz" + "_resolved": "https://registry.npmjs.org/sha/-/sha-1.3.0.tgz" } diff --git a/deps/npm/node_modules/slide/package.json b/deps/npm/node_modules/slide/package.json index 481ff526567..1c0b30bf2a9 100644 --- a/deps/npm/node_modules/slide/package.json +++ b/deps/npm/node_modules/slide/package.json @@ -33,7 +33,7 @@ "_id": "slide@1.1.6", "scripts": {}, "_shasum": "56eb027d65b4d2dce6cb2e2d32c4d4afc9e1d707", - "_from": "slide@~1.1.6", + "_from": "slide@>=1.1.6 <1.2.0", "_npmVersion": "2.0.0-beta.3", "_npmUser": { "name": "isaacs", @@ -50,6 +50,5 @@ "tarball": "http://registry.npmjs.org/slide/-/slide-1.1.6.tgz" }, "directories": {}, - "_resolved": "https://registry.npmjs.org/slide/-/slide-1.1.6.tgz", - "readme": "ERROR: No README data found!" 
+ "_resolved": "https://registry.npmjs.org/slide/-/slide-1.1.6.tgz" } diff --git a/deps/npm/node_modules/tar/package.json b/deps/npm/node_modules/tar/package.json index 4f660303506..207eaa1fdd2 100644 --- a/deps/npm/node_modules/tar/package.json +++ b/deps/npm/node_modules/tar/package.json @@ -26,8 +26,6 @@ "tap": "0.x" }, "license": "BSD", - "readme": "# node-tar\n\nTar for Node.js.\n\n[![NPM](https://nodei.co/npm/tar.png)](https://nodei.co/npm/tar/)\n\n## API\n\nSee `examples/` for usage examples.\n\n### var tar = require('tar')\n\nReturns an object with `.Pack`, `.Extract` and `.Parse` methods.\n\n### tar.Pack([properties])\n\nReturns a through stream. Use\n[fstream](https://npmjs.org/package/fstream) to write files into the\npack stream and you will receive tar archive data from the pack\nstream.\n\nThis only works with directories, it does not work with individual files.\n\nThe optional `properties` object are used to set properties in the tar\n'Global Extended Header'.\n\n### tar.Extract([options])\n\nReturns a through stream. Write tar data to the stream and the files\nin the tarball will be extracted onto the filesystem.\n\n`options` can be:\n\n```js\n{\n path: '/path/to/extract/tar/into',\n strip: 0, // how many path segments to strip from the root when extracting\n}\n```\n\n`options` also get passed to the `fstream.Writer` instance that `tar`\nuses internally.\n\n### tar.Parse()\n\nReturns a writable stream. Write tar data to it and it will emit\n`entry` events for each entry parsed from the tarball. This is used by\n`tar.Extract`.\n", - "readmeFilename": "README.md", "gitHead": "476bf6f5882b9c33d1cbf66f175d0f25e3981044", "bugs": { "url": "https://github.com/isaacs/node-tar/issues" @@ -35,5 +33,23 @@ "homepage": "https://github.com/isaacs/node-tar", "_id": "tar@1.0.1", "_shasum": "6075b5a1f236defe0c7e3756d3d9b3ebdad0f19a", - "_from": "tar@latest" + "_from": "tar@1.0.1", + "_npmVersion": "1.4.23", + "_npmUser": { + "name": "isaacs", + "email": "i@izs.me" + }, + "maintainers": [ + { + "name": "isaacs", + "email": "i@izs.me" + } + ], + "dist": { + "shasum": "6075b5a1f236defe0c7e3756d3d9b3ebdad0f19a", + "tarball": "http://registry.npmjs.org/tar/-/tar-1.0.1.tgz" + }, + "directories": {}, + "_resolved": "https://registry.npmjs.org/tar/-/tar-1.0.1.tgz", + "readme": "ERROR: No README data found!" } diff --git a/deps/npm/node_modules/wrappy/LICENSE b/deps/npm/node_modules/wrappy/LICENSE new file mode 100644 index 00000000000..19129e315fe --- /dev/null +++ b/deps/npm/node_modules/wrappy/LICENSE @@ -0,0 +1,15 @@ +The ISC License + +Copyright (c) Isaac Z. Schlueter and Contributors + +Permission to use, copy, modify, and/or distribute this software for any +purpose with or without fee is hereby granted, provided that the above +copyright notice and this permission notice appear in all copies. + +THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES +WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF +MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR +ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES +WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN +ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF OR +IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE. 
diff --git a/deps/npm/node_modules/wrappy/README.md b/deps/npm/node_modules/wrappy/README.md new file mode 100644 index 00000000000..98eab2522b8 --- /dev/null +++ b/deps/npm/node_modules/wrappy/README.md @@ -0,0 +1,36 @@ +# wrappy + +Callback wrapping utility + +## USAGE + +```javascript +var wrappy = require("wrappy") + +// var wrapper = wrappy(wrapperFunction) + +// make sure a cb is called only once +// See also: http://npm.im/once for this specific use case +var once = wrappy(function (cb) { + var called = false + return function () { + if (called) return + called = true + return cb.apply(this, arguments) + } +}) + +function printBoo () { + console.log('boo') +} +// has some rando property +printBoo.iAmBooPrinter = true + +var onlyPrintOnce = once(printBoo) + +onlyPrintOnce() // prints 'boo' +onlyPrintOnce() // does nothing + +// random property is retained! +assert.equal(onlyPrintOnce.iAmBooPrinter, true) +``` diff --git a/deps/npm/node_modules/wrappy/package.json b/deps/npm/node_modules/wrappy/package.json new file mode 100644 index 00000000000..b88e6628329 --- /dev/null +++ b/deps/npm/node_modules/wrappy/package.json @@ -0,0 +1,52 @@ +{ + "name": "wrappy", + "version": "1.0.1", + "description": "Callback wrapping utility", + "main": "wrappy.js", + "directories": { + "test": "test" + }, + "dependencies": {}, + "devDependencies": { + "tap": "^0.4.12" + }, + "scripts": { + "test": "tap test/*.js" + }, + "repository": { + "type": "git", + "url": "https://github.com/npm/wrappy" + }, + "author": { + "name": "Isaac Z. Schlueter", + "email": "i@izs.me", + "url": "http://blog.izs.me/" + }, + "license": "ISC", + "bugs": { + "url": "https://github.com/npm/wrappy/issues" + }, + "homepage": "https://github.com/npm/wrappy", + "gitHead": "006a8cbac6b99988315834c207896eed71fd069a", + "_id": "wrappy@1.0.1", + "_shasum": "1e65969965ccbc2db4548c6b84a6f2c5aedd4739", + "_from": "wrappy@1.0.1", + "_npmVersion": "2.0.0", + "_nodeVersion": "0.10.31", + "_npmUser": { + "name": "isaacs", + "email": "i@izs.me" + }, + "maintainers": [ + { + "name": "isaacs", + "email": "i@izs.me" + } + ], + "dist": { + "shasum": "1e65969965ccbc2db4548c6b84a6f2c5aedd4739", + "tarball": "http://registry.npmjs.org/wrappy/-/wrappy-1.0.1.tgz" + }, + "_resolved": "https://registry.npmjs.org/wrappy/-/wrappy-1.0.1.tgz", + "readme": "ERROR: No README data found!" +} diff --git a/deps/npm/node_modules/wrappy/test/basic.js b/deps/npm/node_modules/wrappy/test/basic.js new file mode 100644 index 00000000000..5ed0fcdfd9c --- /dev/null +++ b/deps/npm/node_modules/wrappy/test/basic.js @@ -0,0 +1,51 @@ +var test = require('tap').test +var wrappy = require('../wrappy.js') + +test('basic', function (t) { + function onceifier (cb) { + var called = false + return function () { + if (called) return + called = true + return cb.apply(this, arguments) + } + } + onceifier.iAmOnce = {} + var once = wrappy(onceifier) + t.equal(once.iAmOnce, onceifier.iAmOnce) + + var called = 0 + function boo () { + t.equal(called, 0) + called++ + } + // has some rando property + boo.iAmBoo = true + + var onlyPrintOnce = once(boo) + + onlyPrintOnce() // prints 'boo' + onlyPrintOnce() // does nothing + t.equal(called, 1) + + // random property is retained! 
+ t.equal(onlyPrintOnce.iAmBoo, true) + + var logs = [] + var logwrap = wrappy(function (msg, cb) { + logs.push(msg + ' wrapping cb') + return function () { + logs.push(msg + ' before cb') + var ret = cb.apply(this, arguments) + logs.push(msg + ' after cb') + } + }) + + var c = logwrap('foo', function () { + t.same(logs, [ 'foo wrapping cb', 'foo before cb' ]) + }) + c() + t.same(logs, [ 'foo wrapping cb', 'foo before cb', 'foo after cb' ]) + + t.end() +}) diff --git a/deps/npm/node_modules/wrappy/wrappy.js b/deps/npm/node_modules/wrappy/wrappy.js new file mode 100644 index 00000000000..bb7e7d6fcf7 --- /dev/null +++ b/deps/npm/node_modules/wrappy/wrappy.js @@ -0,0 +1,33 @@ +// Returns a wrapper function that returns a wrapped callback +// The wrapper function should do some stuff, and return a +// presumably different callback function. +// This makes sure that own properties are retained, so that +// decorations and such are not lost along the way. +module.exports = wrappy +function wrappy (fn, cb) { + if (fn && cb) return wrappy(fn)(cb) + + if (typeof fn !== 'function') + throw new TypeError('need wrapper function') + + Object.keys(fn).forEach(function (k) { + wrapper[k] = fn[k] + }) + + return wrapper + + function wrapper() { + var args = new Array(arguments.length) + for (var i = 0; i < args.length; i++) { + args[i] = arguments[i] + } + var ret = fn.apply(this, args) + var cb = args[args.length-1] + if (typeof ret === 'function' && ret !== cb) { + Object.keys(cb).forEach(function (k) { + ret[k] = cb[k] + }) + } + return ret + } +} diff --git a/deps/npm/node_modules/write-file-atomic/.npmignore b/deps/npm/node_modules/write-file-atomic/.npmignore new file mode 100644 index 00000000000..454382637b4 --- /dev/null +++ b/deps/npm/node_modules/write-file-atomic/.npmignore @@ -0,0 +1,4 @@ +*~ +DEADJOE +.#* +node_modules \ No newline at end of file diff --git a/deps/npm/node_modules/write-file-atomic/README.md b/deps/npm/node_modules/write-file-atomic/README.md new file mode 100644 index 00000000000..26e434d1943 --- /dev/null +++ b/deps/npm/node_modules/write-file-atomic/README.md @@ -0,0 +1,44 @@ +write-file-atomic +----------------- + +This is an extension for node's `fs.writeFile` that makes its operation +atomic and allows you to set ownership (uid/gid of the file). + +### var writeFileAtomic = require('write-file-atomic')
    writeFileAtomic(filename, data, [options], callback) + +* filename **String** +* data **String** | **Buffer** +* options **Object** + * chown **Object** + * uid **Number** + * gid **Number** + * encoding **String** | **Null** default = 'utf8' + * mode **Number** default = 438 (aka 0666 in Octal) +* callback **Function** + +Atomically and asynchronously writes data to a file, replacing the file if it already +exists. data can be a string or a buffer. + +The file is initially named `filename + "." + md5hex(__filename, process.pid, ++invocations)`. +If writeFile completes successfully then, if passed the **chown** option, it will change +the ownership of the file. Finally it renames the file back to the filename you specified. If +it encounters errors at any of these steps it will attempt to unlink the temporary file and then +pass the error back to the caller. + +If provided, the **chown** option requires both **uid** and **gid** properties or else +you'll get an error. + +The **encoding** option is ignored if **data** is a buffer. It defaults to 'utf8'. + +Example: + +```javascript +writeFileAtomic('message.txt', 'Hello Node', {chown:{uid:100,gid:50}}, function (err) { + if (err) throw err; + console.log('It\'s saved!'); +}); +``` + +### var writeFileAtomicSync = require('write-file-atomic').sync
    writeFileAtomicSync(filename, data, [options]) + +The synchronous version of **writeFileAtomic**. diff --git a/deps/npm/node_modules/write-file-atomic/index.js b/deps/npm/node_modules/write-file-atomic/index.js new file mode 100644 index 00000000000..f61a17038bd --- /dev/null +++ b/deps/npm/node_modules/write-file-atomic/index.js @@ -0,0 +1,45 @@ +'use strict' +var fs = require('graceful-fs'); +var chain = require('slide').chain; +var crypto = require('crypto'); + +var md5hex = function () { + var hash = crypto.createHash('md5'); + for (var ii=0; ii=1.1.0 <2.0.0", + "_npmVersion": "1.4.28", + "_npmUser": { + "name": "iarna", + "email": "me@re-becca.org" + }, + "maintainers": [ + { + "name": "iarna", + "email": "me@re-becca.org" + } + ], + "dist": { + "shasum": "e114cfb8f82188353f98217c5945451c9b4dc060", + "tarball": "http://registry.npmjs.org/write-file-atomic/-/write-file-atomic-1.1.0.tgz" + }, + "directories": {}, + "_resolved": "https://registry.npmjs.org/write-file-atomic/-/write-file-atomic-1.1.0.tgz" +} diff --git a/deps/npm/node_modules/write-file-atomic/test/basic.js b/deps/npm/node_modules/write-file-atomic/test/basic.js new file mode 100644 index 00000000000..a3227eaa1de --- /dev/null +++ b/deps/npm/node_modules/write-file-atomic/test/basic.js @@ -0,0 +1,97 @@ +"use strict"; +var test = require('tap').test; +var requireInject = require('require-inject'); +var writeFileAtomic = requireInject('../index', { + 'graceful-fs': { + writeFile: function (tmpfile, data, options, cb) { + if (/nowrite/.test(tmpfile)) return cb('ENOWRITE'); + cb(); + }, + chown: function (tmpfile, uid, gid, cb) { + if (/nochown/.test(tmpfile)) return cb('ENOCHOWN'); + cb(); + }, + rename: function (tmpfile, filename, cb) { + if (/norename/.test(tmpfile)) return cb('ENORENAME'); + cb(); + }, + unlink: function (tmpfile, cb) { + if (/nounlink/.test(tmpfile)) return cb('ENOUNLINK'); + cb(); + }, + writeFileSync: function (tmpfile, data, options) { + if (/nowrite/.test(tmpfile)) throw 'ENOWRITE'; + }, + chownSync: function (tmpfile, uid, gid) { + if (/nochown/.test(tmpfile)) throw 'ENOCHOWN'; + }, + renameSync: function (tmpfile, filename) { + if (/norename/.test(tmpfile)) throw 'ENORENAME'; + }, + unlinkSync: function (tmpfile) { + if (/nounlink/.test(tmpfile)) throw 'ENOUNLINK'; + }, + } +}); +var writeFileAtomicSync = writeFileAtomic.sync; + +test('async tests', function (t) { + t.plan(7); + writeFileAtomic('good', 'test', {mode: '0777'}, function (err) { + t.notOk(err, 'No errors occur when passing in options'); + }); + writeFileAtomic('good', 'test', function (err) { + t.notOk(err, 'No errors occur when NOT passing in options'); + }); + writeFileAtomic('nowrite', 'test', function (err) { + t.is(err, 'ENOWRITE', 'writeFile failures propagate'); + }); + writeFileAtomic('nochown', 'test', {chown: {uid:100,gid:100}}, function (err) { + t.is(err, 'ENOCHOWN', 'Chown failures propagate'); + }); + writeFileAtomic('nochown', 'test', function (err) { + t.notOk(err, 'No attempt to chown when no uid/gid passed in'); + }); + writeFileAtomic('norename', 'test', function (err) { + t.is(err, 'ENORENAME', 'Rename errors propagate'); + }); + writeFileAtomic('norename nounlink', 'test', function (err) { + t.is(err, 'ENORENAME', 'Failure to unlink the temp file does not clobber the original error'); + }); +}); + +test('sync tests', function (t) { + t.plan(7); + var throws = function (shouldthrow, msg, todo) { + var err; + try { todo() } catch (e) { err = e } + t.is(shouldthrow,err,msg); + } + var noexception = function 
(msg, todo) { + var err; + try { todo() } catch (e) { err = e } + t.notOk(err,msg); + } + + noexception('No errors occur when passing in options',function (){ + writeFileAtomicSync('good', 'test', {mode: '0777'}); + }) + noexception('No errors occur when NOT passing in options',function (){ + writeFileAtomicSync('good', 'test'); + }); + throws('ENOWRITE', 'writeFile failures propagate', function () { + writeFileAtomicSync('nowrite', 'test'); + }); + throws('ENOCHOWN', 'Chown failures propagate', function () { + writeFileAtomicSync('nochown', 'test', {chown: {uid:100,gid:100}}); + }); + noexception('No attempt to chown when no uid/gid passed in', function (){ + writeFileAtomicSync('nochown', 'test'); + }); + throws('ENORENAME', 'Rename errors propagate', function (){ + writeFileAtomicSync('norename', 'test'); + }); + throws('ENORENAME', 'Failure to unlink the temp file does not clobber the original error', function (){ + writeFileAtomicSync('norename nounlink', 'test'); + }); +}); diff --git a/deps/npm/package.json b/deps/npm/package.json index a2961a3606b..761c8c814e0 100644 --- a/deps/npm/package.json +++ b/deps/npm/package.json @@ -1,5 +1,5 @@ { - "version": "1.4.28", + "version": "2.1.6", "name": "npm", "description": "A package manager for node", "keywords": [ @@ -35,54 +35,65 @@ "ansi": "~0.3.0", "ansicolors": "~0.3.2", "ansistyles": "~0.1.3", - "archy": "0", + "archy": "~1.0.0", + "async-some": "~1.0.1", "block-stream": "0.0.7", "char-spinner": "~1.0.1", "child-process-close": "~0.1.1", "chmodr": "~0.1.0", "chownr": "0", - "cmd-shim": "2.0.0", + "cmd-shim": "~2.0.1", "columnify": "~1.2.1", + "config-chain": "~1.1.8", + "dezalgo": "~1.0.1", "editor": "~0.1.0", + "fs-vacuum": "~1.2.1", + "fs-write-stream-atomic": "~1.0.2", "fstream": "~1.0.2", - "fstream-npm": "~1.0.0", + "fstream-npm": "~1.0.1", "github-url-from-git": "~1.4.0", - "github-url-from-username-repo": "~1.0.0", - "glob": "~4.0.5", - "graceful-fs": "~3.0.0", - "inflight": "~1.0.1", - "ini": "~1.2.0", - "init-package-json": "~1.0.0", + "github-url-from-username-repo": "~1.0.2", + "glob": "~4.0.6", + "graceful-fs": "~3.0.4", + "inflight": "~1.0.4", + "inherits": "~2.0.1", + "ini": "~1.3.0", + "init-package-json": "~1.1.1", "lockfile": "~1.0.0", "lru-cache": "~2.5.0", "minimatch": "~1.0.0", "mkdirp": "~0.5.0", "node-gyp": "~1.0.1", "nopt": "~3.0.1", + "normalize-package-data": "~1.0.3", "npm-cache-filename": "~1.0.1", "npm-install-checks": "~1.0.2", - "npm-registry-client": "~2.0.7", - "npm-user-validate": "~0.1.0", - "npmconf": "~1.1.8", + "npm-package-arg": "~2.1.3", + "npm-registry-client": "~3.2.4", + "npm-user-validate": "~0.1.1", "npmlog": "~0.1.1", - "once": "~1.3.0", - "opener": "~1.3.0", + "once": "~1.3.1", + "opener": "~1.4.0", "osenv": "~0.1.0", "path-is-inside": "~1.0.0", "read": "~1.0.4", - "read-installed": "~2.0.5", + "read-installed": "~3.1.2", "read-package-json": "~1.2.7", - "request": "~2.42.0", - "retry": "~0.6.0", + "readable-stream": "~1.0.32", + "realize-package-specifier": "~1.2.0", + "request": "~2.46.0", + "retry": "~0.6.1", "rimraf": "~2.2.8", - "semver": "~2.3.0", - "sha": "~1.2.1", + "semver": "~4.1.0", + "sha": "~1.3.0", "slide": "~1.1.6", "sorted-object": "~1.0.0", "tar": "~1.0.1", "text-table": "~0.2.0", "uid-number": "0.0.5", - "which": "1" + "which": "1", + "wrappy": "~1.0.1", + "write-file-atomic": "~1.1.0" }, "bundleDependencies": [ "abbrev", @@ -90,6 +101,7 @@ "ansicolors", "ansistyles", "archy", + "async-some", "block-stream", "char-spinner", "child-process-close", @@ -97,7 +109,11 @@ 
"chownr", "cmd-shim", "columnify", + "config-chain", + "dezalgo", "editor", + "fs-vacuum", + "fs-write-stream-atomic", "fstream", "fstream-npm", "github-url-from-git", @@ -114,11 +130,12 @@ "mkdirp", "node-gyp", "nopt", + "normalize-package-data", "npm-cache-filename", "npm-install-checks", + "npm-package-arg", "npm-registry-client", "npm-user-validate", - "npmconf", "npmlog", "once", "opener", @@ -127,6 +144,8 @@ "read", "read-installed", "read-package-json", + "readable-stream", + "realize-package-specifier", "request", "retry", "rimraf", @@ -137,18 +156,18 @@ "tar", "text-table", "uid-number", - "which" + "which", + "wrappy", + "write-file-atomic" ], "devDependencies": { "marked": "~0.3.2", - "npm-registry-couchapp": "~2.3.6", + "marked-man": "~0.1.4", + "nock": "~0.48.1", + "npm-registry-couchapp": "~2.6.2", "npm-registry-mock": "~0.6.3", - "ronn": "~0.3.6", - "tap": "~0.4.9" - }, - "engines": { - "node": ">=0.8", - "npm": "1" + "require-inject": "~1.1.0", + "tap": "~0.4.12" }, "scripts": { "test-legacy": "node ./test/run.js", diff --git a/deps/npm/scripts/doc-build.sh b/deps/npm/scripts/doc-build.sh index 9afab0782b7..79629c63dc9 100755 --- a/deps/npm/scripts/doc-build.sh +++ b/deps/npm/scripts/doc-build.sh @@ -6,26 +6,26 @@ fi set -o errexit set -o pipefail -if ! [ -x node_modules/.bin/ronn ]; then +if ! [ -x node_modules/.bin/marked-man ]; then ps=0 - if [ -f .building_ronn ]; then - pid=$(cat .building_ronn) + if [ -f .building_marked-man ]; then + pid=$(cat .building_marked-man) ps=$(ps -p $pid | grep $pid | wc -l) || true fi - if [ -f .building_ronn ] && [ $ps != 0 ]; then - while [ -f .building_ronn ]; do + if [ -f .building_marked-man ] && [ $ps != 0 ]; then + while [ -f .building_marked-man ]; do sleep 1 done else - # a race to see which make process will be the one to install ronn - echo $$ > .building_ronn + # a race to see which make process will be the one to install marked-man + echo $$ > .building_marked-man sleep 1 - if [ $(cat .building_ronn) == $$ ]; then - make node_modules/.bin/ronn - rm .building_ronn + if [ $(cat .building_marked-man) == $$ ]; then + make node_modules/.bin/marked-man + rm .building_marked-man else - while [ -f .building_ronn ]; do + while [ -f .building_marked-man ]; do sleep 1 done fi @@ -66,44 +66,59 @@ version=$(node cli.js -v) mkdir -p $(dirname $dest) +html_replace_tokens () { + local url=$1 + sed "s|@NAME@|$name|g" \ + | sed "s|@DATE@|$date|g" \ + | sed "s|@URL@|$url|g" \ + | sed "s|@VERSION@|$version|g" \ + | perl -p -e 's/]*)>([^\(]*\([0-9]\)) -- (.*?)<\/h1>/

    \2<\/h1>

    \3<\/p>/g' \ + | perl -p -e 's/npm-npm/npm/g' \ + | perl -p -e 's/([^"-])(npm-)?README(?!\.html)(\(1\))?/\1README<\/a>/g' \ + | perl -p -e 's/<a href="[^"]+README.html">README<\/a><\/title>/<title>README<\/title>/g' \ + | perl -p -e 's/([^"-])([^\(> ]+)(\(1\))/\1<a href="..\/cli\/\2.html">\2\3<\/a>/g' \ + | perl -p -e 's/([^"-])([^\(> ]+)(\(3\))/\1<a href="..\/api\/\2.html">\2\3<\/a>/g' \ + | perl -p -e 's/([^"-])([^\(> ]+)(\(5\))/\1<a href="..\/files\/\2.html">\2\3<\/a>/g' \ + | perl -p -e 's/([^"-])([^\(> ]+)(\(7\))/\1<a href="..\/misc\/\2.html">\2\3<\/a>/g' \ + | perl -p -e 's/\([1357]\)<\/a><\/h1>/<\/a><\/h1>/g' \ + | (if [ $(basename $(dirname $dest)) == "doc" ]; then + perl -p -e 's/ href="\.\.\// href="/g' + else + cat + fi) +} + +man_replace_tokens () { + sed "s|@VERSION@|$version|g" \ + | perl -p -e 's/(npm\\-)?([a-zA-Z\\\.\-]*)\(1\)/npm help \2/g' \ + | perl -p -e 's/(npm\\-)?([a-zA-Z\\\.\-]*)\(([57])\)/npm help \3 \2/g' \ + | perl -p -e 's/(npm\\-)?([a-zA-Z\\\.\-]*)\(3\)/npm apihelp \2/g' \ + | perl -p -e 's/npm\(1\)/npm help npm/g' \ + | perl -p -e 's/npm\(3\)/npm apihelp npm/g' +} + case $dest in *.[1357]) - ./node_modules/.bin/ronn --roff $src \ - | sed "s|@VERSION@|$version|g" \ - | perl -pi -e 's/(npm\\-)?([a-zA-Z\\\.\-]*)\(1\)/npm help \2/g' \ - | perl -pi -e 's/(npm\\-)?([a-zA-Z\\\.\-]*)\(([57])\)/npm help \3 \2/g' \ - | perl -pi -e 's/(npm\\-)?([a-zA-Z\\\.\-]*)\(3\)/npm apihelp \2/g' \ - | perl -pi -e 's/npm\(1\)/npm help npm/g' \ - | perl -pi -e 's/npm\(3\)/npm apihelp npm/g' \ - > $dest + ./node_modules/.bin/marked-man --roff $src \ + | man_replace_tokens > $dest exit $? ;; - *.html) + + html/partial/*.html) + url=${dest/html\/partial\//} + cat $src | ./node_modules/.bin/marked | html_replace_tokens $url > $dest + ;; + + html/*.html) url=${dest/html\//} (cat html/dochead.html && \ - cat $src | ./node_modules/.bin/marked && + cat $src && \ cat html/docfoot.html)\ - | sed "s|@NAME@|$name|g" \ - | sed "s|@DATE@|$date|g" \ - | sed "s|@URL@|$url|g" \ - | sed "s|@VERSION@|$version|g" \ - | perl -pi -e 's/<h1([^>]*)>([^\(]*\([0-9]\)) -- (.*?)<\/h1>/<h1>\2<\/h1> <p>\3<\/p>/g' \ - | perl -pi -e 's/npm-npm/npm/g' \ - | perl -pi -e 's/([^"-])(npm-)?README(?!\.html)(\(1\))?/\1<a href="..\/..\/doc\/README.html">README<\/a>/g' \ - | perl -pi -e 's/<title><a href="[^"]+README.html">README<\/a><\/title>/<title>README<\/title>/g' \ - | perl -pi -e 's/([^"-])([^\(> ]+)(\(1\))/\1<a href="..\/cli\/\2.html">\2\3<\/a>/g' \ - | perl -pi -e 's/([^"-])([^\(> ]+)(\(3\))/\1<a href="..\/api\/\2.html">\2\3<\/a>/g' \ - | perl -pi -e 's/([^"-])([^\(> ]+)(\(5\))/\1<a href="..\/files\/\2.html">\2\3<\/a>/g' \ - | perl -pi -e 's/([^"-])([^\(> ]+)(\(7\))/\1<a href="..\/misc\/\2.html">\2\3<\/a>/g' \ - | perl -pi -e 's/\([1357]\)<\/a><\/h1>/<\/a><\/h1>/g' \ - | (if [ $(basename $(dirname $dest)) == "doc" ]; then - perl -pi -e 's/ href="\.\.\// href="/g' - else - cat - fi) \ + | html_replace_tokens $url \ > $dest exit $? 
;; + *) echo "Invalid destination type: $dest" >&2 exit 1 diff --git a/deps/npm/test/common-tap.js b/deps/npm/test/common-tap.js index d6d09ed9bca..2efca30c481 100644 --- a/deps/npm/test/common-tap.js +++ b/deps/npm/test/common-tap.js @@ -1,16 +1,27 @@ var spawn = require("child_process").spawn +var path = require("path") +var fs = require("fs") var port = exports.port = 1337 exports.registry = "http://localhost:" + port process.env.npm_config_loglevel = "error" +var npm_config_cache = path.resolve(__dirname, "npm_cache") +exports.npm_config_cache = npm_config_cache + var bin = exports.bin = require.resolve("../bin/npm-cli.js") var once = require("once") + exports.npm = function (cmd, opts, cb) { cb = once(cb) cmd = [bin].concat(cmd) opts = opts || {} + opts.env = opts.env ? opts.env : process.env + if (!opts.env.npm_config_cache) { + opts.env.npm_config_cache = npm_config_cache + } + var stdout = "" , stderr = "" , node = process.execPath @@ -26,7 +37,31 @@ exports.npm = function (cmd, opts, cb) { child.on("error", cb) - child.on("close", function (code, signal) { + child.on("close", function (code) { cb(null, code, stdout, stderr) }) + return child +} + +// based on http://bit.ly/1tkI6DJ +function deleteNpmCacheRecursivelySync(cache) { + cache = cache ? cache : npm_config_cache + var files = [] + var res + if( fs.existsSync(cache) ) { + files = fs.readdirSync(cache) + files.forEach(function(file,index) { + var curPath = path.resolve(cache, file) + if(fs.lstatSync(curPath).isDirectory()) { // recurse + deleteNpmCacheRecursivelySync(curPath) + } else { // delete file + if (res = fs.unlinkSync(curPath)) + throw Error("Failed to delete file " + curPath + ", error " + res) + } + }) + if (res = fs.rmdirSync(cache)) + throw Error("Failed to delete directory " + cache + ", error " + res) + } + return 0 } +exports.deleteNpmCacheRecursivelySync = deleteNpmCacheRecursivelySync \ No newline at end of file diff --git a/deps/npm/node_modules/npmconf/test/fixtures/.npmrc b/deps/npm/test/fixtures/config/.npmrc similarity index 100% rename from deps/npm/node_modules/npmconf/test/fixtures/.npmrc rename to deps/npm/test/fixtures/config/.npmrc diff --git a/deps/npm/node_modules/npmconf/test/fixtures/builtin b/deps/npm/test/fixtures/config/builtin similarity index 100% rename from deps/npm/node_modules/npmconf/test/fixtures/builtin rename to deps/npm/test/fixtures/config/builtin diff --git a/deps/npm/node_modules/npmconf/test/fixtures/globalconfig b/deps/npm/test/fixtures/config/globalconfig similarity index 100% rename from deps/npm/node_modules/npmconf/test/fixtures/globalconfig rename to deps/npm/test/fixtures/config/globalconfig diff --git a/deps/npm/test/fixtures/config/malformed b/deps/npm/test/fixtures/config/malformed new file mode 100644 index 00000000000..182c4d2c71c --- /dev/null +++ b/deps/npm/test/fixtures/config/malformed @@ -0,0 +1 @@ +email = """ \ No newline at end of file diff --git a/deps/npm/node_modules/npmconf/test/fixtures/multi-ca b/deps/npm/test/fixtures/config/multi-ca similarity index 100% rename from deps/npm/node_modules/npmconf/test/fixtures/multi-ca rename to deps/npm/test/fixtures/config/multi-ca diff --git a/deps/npm/test/fixtures/config/package.json b/deps/npm/test/fixtures/config/package.json new file mode 100644 index 00000000000..e69de29bb2d diff --git a/deps/npm/node_modules/npmconf/test/fixtures/userconfig b/deps/npm/test/fixtures/config/userconfig similarity index 96% rename from deps/npm/node_modules/npmconf/test/fixtures/userconfig rename to 
deps/npm/test/fixtures/config/userconfig index bda1eb82ae8..d600c0664e2 100644 --- a/deps/npm/node_modules/npmconf/test/fixtures/userconfig +++ b/deps/npm/test/fixtures/config/userconfig @@ -3,16 +3,17 @@ env-thing = ${random_env_var} init.author.name = Isaac Z. Schlueter init.author.email = i@izs.me init.author.url = http://blog.izs.me/ +init.version = 1.2.3 proprietary-attribs = false npm:publishtest = true _npmjs.org:couch = https://admin:password@localhost:5984/registry -_auth = dXNlcm5hbWU6cGFzc3dvcmQ= npm-www:nocache = 1 nodedir = /Users/isaacs/dev/js/node-v0.8 sign-git-tag = true message = v%s strict-ssl = false tmp = ~/.tmp +_auth = dXNlcm5hbWU6cGFzc3dvcmQ= [_token] AuthSession = yabba-dabba-doodle diff --git a/deps/npm/test/fixtures/config/userconfig-with-gc b/deps/npm/test/fixtures/config/userconfig-with-gc new file mode 100644 index 00000000000..3e5d605f4e3 --- /dev/null +++ b/deps/npm/test/fixtures/config/userconfig-with-gc @@ -0,0 +1,22 @@ +globalconfig=/Users/ogd/Documents/projects/npm/npm/test/fixtures/config/globalconfig +email=i@izs.me +env-thing=asdf +init.author.name=Isaac Z. Schlueter +init.author.email=i@izs.me +init.author.url=http://blog.izs.me/ +init.version=1.2.3 +proprietary-attribs=false +npm:publishtest=true +_npmjs.org:couch=https://admin:password@localhost:5984/registry +npm-www:nocache=1 +sign-git-tag=false +message=v%s +strict-ssl=false +_auth=dXNlcm5hbWU6cGFzc3dvcmQ= + +[_token] +AuthSession=yabba-dabba-doodle +version=1 +expires=1345001053415 +path=/ +httponly=true diff --git a/deps/npm/test/packages/npm-test-optional-deps/package.json b/deps/npm/test/packages/npm-test-optional-deps/package.json index 56c6f09ed01..67545ca9da1 100644 --- a/deps/npm/test/packages/npm-test-optional-deps/package.json +++ b/deps/npm/test/packages/npm-test-optional-deps/package.json @@ -5,7 +5,6 @@ { "npm-test-foobarzaaakakaka": "http://example.com/" , "dnode": "10.999.14234" , "sax": "0.3.5" - , "999 invalid name": "1.2.3" , "glob": "some invalid version 99 #! 
$$ x y z" , "npm-test-failer":"*" } diff --git a/deps/npm/test/run.js b/deps/npm/test/run.js index 008cfbac45a..904df5b8e46 100644 --- a/deps/npm/test/run.js +++ b/deps/npm/test/run.js @@ -7,7 +7,7 @@ var path = require("path") , testdir = __dirname , fs = require("graceful-fs") , npmpkg = path.dirname(testdir) - , npmcli = path.join(__dirname, "bin", "npm-cli.js") + , npmcli = path.resolve(npmpkg, "bin", "npm-cli.js") var temp = process.env.TMPDIR || process.env.TMP @@ -63,7 +63,7 @@ function prefix (content, pref) { } var execCount = 0 -function exec (cmd, shouldFail, cb) { +function exec (cmd, cwd, shouldFail, cb) { if (typeof shouldFail === "function") { cb = shouldFail, shouldFail = false } @@ -81,7 +81,10 @@ function exec (cmd, shouldFail, cb) { cmd = cmd.replace(/^npm /, npmReplace + " ") cmd = cmd.replace(/^node /, nodeReplace + " ") - child_process.exec(cmd, {env: env}, function (er, stdout, stderr) { + console.error("$$$$$$ cd %s; PATH=%s %s", cwd, env.PATH, cmd) + + child_process.exec(cmd, {cwd: cwd, env: env}, function (er, stdout, stderr) { + console.error("$$$$$$ after command", cmd, cwd) if (stdout) { console.error(prefix(stdout, " 1> ")) } @@ -102,10 +105,8 @@ function exec (cmd, shouldFail, cb) { } function execChain (cmds, cb) { - chain(cmds.reduce(function (l, r) { - return l.concat(r) - }, []).map(function (cmd) { - return [exec, cmd] + chain(cmds.map(function (args) { + return [exec].concat(args) }), cb) } @@ -118,9 +119,8 @@ function flatten (arr) { function setup (cb) { cleanup(function (er) { if (er) return cb(er) - execChain([ "node \""+path.resolve(npmpkg, "bin", "npm-cli.js") - + "\" install \""+npmpkg+"\"" - , "npm config set package-config:foo boo" + execChain([ [ "node \""+npmcli+"\" install \""+npmpkg+"\"", root ], + [ "npm config set package-config:foo boo", root ] ], cb) }) } @@ -134,6 +134,7 @@ function main (cb) { failures = 0 process.chdir(testdir) + var base = path.resolve(root, path.join("lib", "node_modules")) // get the list of packages var packages = fs.readdirSync(path.resolve(testdir, "packages")) @@ -150,17 +151,17 @@ function main (cb) { packagesToRm.push("npm") } - chain - ( [ setup - , [ exec, "npm install "+npmpkg ] + chain( + [ setup + , [ exec, "npm install "+npmpkg, testdir ] , [ execChain, packages.map(function (p) { - return "npm install packages/"+p + return [ "npm install packages/"+p, testdir ] }) ] , [ execChain, packages.map(function (p) { - return "npm test "+p + return [ "npm test -ddd", path.resolve(base, p) ] }) ] , [ execChain, packagesToRm.map(function (p) { - return "npm rm " + p + return [ "npm rm "+p, root ] }) ] , installAndTestEach ] @@ -171,15 +172,15 @@ function main (cb) { function installAndTestEach (cb) { var thingsToChain = [ setup - , [ execChain, packages.map(function (p) { - return [ "npm install packages/"+p - , "npm test "+p - , "npm rm "+p ] - }) ] + , [ execChain, flatten(packages.map(function (p) { + return [ [ "npm install packages/"+p, testdir ] + , [ "npm test", path.resolve(base, p) ] + , [ "npm rm "+p, root ] ] + })) ] ] if (process.platform !== "win32") { // Windows can't handle npm rm npm due to file-in-use issues. 
- thingsToChain.push([exec, "npm rm npm"]) + thingsToChain.push([exec, "npm rm npm", testdir]) } chain(thingsToChain, cb) diff --git a/deps/npm/test/tap/00-check-mock-dep.js b/deps/npm/test/tap/00-check-mock-dep.js index c4d2ff2c224..1c862317c9a 100644 --- a/deps/npm/test/tap/00-check-mock-dep.js +++ b/deps/npm/test/tap/00-check-mock-dep.js @@ -1,6 +1,7 @@ console.log("TAP Version 13") -process.on("uncaughtException", function(er) { +process.on("uncaughtException", function (er) { + if (er) { throw er } console.log("not ok - Failed checking mock registry dep. Expect much fail!") console.log("1..1") process.exit(1) @@ -10,6 +11,7 @@ var assert = require("assert") var semver = require("semver") var mock = require("npm-registry-mock/package.json").version var req = require("../../package.json").devDependencies["npm-registry-mock"] + assert(semver.satisfies(mock, req)) console.log("ok") console.log("1..1") diff --git a/deps/npm/test/tap/00-config-setup.js b/deps/npm/test/tap/00-config-setup.js new file mode 100644 index 00000000000..aaad5462715 --- /dev/null +++ b/deps/npm/test/tap/00-config-setup.js @@ -0,0 +1,68 @@ +var path = require("path") +var userconfigSrc = path.resolve(__dirname, "..", "fixtures", "config", "userconfig") +exports.userconfig = userconfigSrc + "-with-gc" +exports.globalconfig = path.resolve(__dirname, "..", "fixtures", "config", "globalconfig") +exports.builtin = path.resolve(__dirname, "..", "fixtures", "config", "builtin") +exports.malformed = path.resolve(__dirname, "..", "fixtures", "config", "malformed") +exports.ucData = + { globalconfig: exports.globalconfig, + email: "i@izs.me", + "env-thing": "asdf", + "init.author.name": "Isaac Z. Schlueter", + "init.author.email": "i@izs.me", + "init.author.url": "http://blog.izs.me/", + "init.version": "1.2.3", + "proprietary-attribs": false, + "npm:publishtest": true, + "_npmjs.org:couch": "https://admin:password@localhost:5984/registry", + "npm-www:nocache": "1", + nodedir: "/Users/isaacs/dev/js/node-v0.8", + "sign-git-tag": true, + message: "v%s", + "strict-ssl": false, + "tmp": process.env.HOME + "/.tmp", + _auth: "dXNlcm5hbWU6cGFzc3dvcmQ=", + _token: + { AuthSession: "yabba-dabba-doodle", + version: "1", + expires: "1345001053415", + path: "/", + httponly: true } } + +// set the userconfig in the env +// unset anything else that npm might be trying to foist on us +Object.keys(process.env).forEach(function (k) { + if (k.match(/^npm_config_/i)) { + delete process.env[k] + } +}) +process.env.npm_config_userconfig = exports.userconfig +process.env.npm_config_other_env_thing = 1000 +process.env.random_env_var = "asdf" +process.env.npm_config__underbar_env_thing = "underful" +process.env.NPM_CONFIG_UPPERCASE_ENV_THING = 42 + +exports.envData = { + userconfig: exports.userconfig, + "_underbar-env-thing": "underful", + "uppercase-env-thing": "42", + "other-env-thing": "1000" +} +exports.envDataFix = { + userconfig: exports.userconfig, + "_underbar-env-thing": "underful", + "uppercase-env-thing": 42, + "other-env-thing": 1000 +} + + +if (module === require.main) { + // set the globalconfig in the userconfig + var fs = require("fs") + var uc = fs.readFileSync(userconfigSrc) + var gcini = "globalconfig = " + exports.globalconfig + "\n" + fs.writeFileSync(exports.userconfig, gcini + uc) + + console.log("0..1") + console.log("ok 1 setup done") +} diff --git a/deps/npm/test/tap/00-verify-bundle-deps.js b/deps/npm/test/tap/00-verify-bundle-deps.js index 00291a6c481..9d16b2d3b12 100644 --- a/deps/npm/test/tap/00-verify-bundle-deps.js 
+++ b/deps/npm/test/tap/00-verify-bundle-deps.js @@ -16,7 +16,7 @@ test("all deps are bundled deps or dev deps", function (t) { }) t.same( - fs.readdirSync(path.resolve(__dirname, '../../node_modules')).filter(function (name) { + fs.readdirSync(path.resolve(__dirname, "../../node_modules")).filter(function (name) { return (dev.indexOf(name) === -1) && (name !== ".bin") }).sort(), bundled.sort(), diff --git a/deps/npm/test/tap/00-verify-ls-ok.js b/deps/npm/test/tap/00-verify-ls-ok.js new file mode 100644 index 00000000000..aa6acdbc56f --- /dev/null +++ b/deps/npm/test/tap/00-verify-ls-ok.js @@ -0,0 +1,18 @@ +var common = require("../common-tap") +var test = require("tap").test +var path = require("path") +var cwd = path.resolve(__dirname, "..", "..") +var fs = require("fs") + +test("npm ls in npm", function (t) { + t.ok(fs.existsSync(cwd), "ensure that the path we are calling ls within exists") + var files = fs.readdirSync(cwd) + t.notEqual(files.length, 0, "ensure there are files in the directory we are to ls") + + var opt = { cwd: cwd, stdio: [ "ignore", "ignore", 2 ] } + common.npm(["ls"], opt, function (err, code) { + t.ifError(err, "error should not exist") + t.equal(code, 0, "npm ls exited with code") + t.end() + }) +}) diff --git a/deps/npm/test/tap/404-parent.js b/deps/npm/test/tap/404-parent.js index b3c353827f7..e40d850de73 100644 --- a/deps/npm/test/tap/404-parent.js +++ b/deps/npm/test/tap/404-parent.js @@ -1,26 +1,25 @@ -var common = require('../common-tap.js') -var test = require('tap').test -var npm = require('../../') -var osenv = require('osenv') -var path = require('path') -var fs = require('fs') -var rimraf = require('rimraf') -var mkdirp = require('mkdirp') -var pkg = path.resolve(__dirname, '404-parent') +var common = require("../common-tap.js") +var test = require("tap").test +var npm = require("../../") +var osenv = require("osenv") +var path = require("path") +var fs = require("fs") +var rimraf = require("rimraf") +var mkdirp = require("mkdirp") +var pkg = path.resolve(__dirname, "404-parent") var mr = require("npm-registry-mock") -test('404-parent: if parent exists, specify parent in error message', function(t) { +test("404-parent: if parent exists, specify parent in error message", function (t) { setup() - rimraf.sync(path.resolve(pkg, 'node_modules')) - performInstall(function(err) { - t.ok(err instanceof Error) - t.pass('error was returned') - t.ok(err.parent === '404-parent-test') + rimraf.sync(path.resolve(pkg, "node_modules")) + performInstall(function (err) { + t.ok(err instanceof Error, "error was returned") + t.ok(err.parent === "404-parent-test", "error's parent set") t.end() }) }) -test('cleanup', function(t) { +test("cleanup", function (t) { process.chdir(osenv.tmpdir()) rimraf.sync(pkg) t.end() @@ -28,23 +27,23 @@ test('cleanup', function(t) { function setup() { mkdirp.sync(pkg) - mkdirp.sync(path.resolve(pkg, 'cache')) - fs.writeFileSync(path.resolve(pkg, 'package.json'), JSON.stringify({ - author: 'Evan Lucas', - name: '404-parent-test', - version: '0.0.0', - description: 'Test for 404-parent', + mkdirp.sync(path.resolve(pkg, "cache")) + fs.writeFileSync(path.resolve(pkg, "package.json"), JSON.stringify({ + author: "Evan Lucas", + name: "404-parent-test", + version: "0.0.0", + description: "Test for 404-parent", dependencies: { - 'test-npm-404-parent-test': '*' + "test-npm-404-parent-test": "*" } - }), 'utf8') + }), "utf8") process.chdir(pkg) } function performInstall(cb) { mr(common.port, function (s) { // create mock registry. 
- npm.load({registry: common.registry}, function() { - npm.commands.install(pkg, [], function(err) { + npm.load({registry: common.registry}, function () { + npm.commands.install(pkg, [], function (err) { cb(err) s.close() // shutdown mock npm server. }) diff --git a/deps/npm/test/tap/builtin-config.js b/deps/npm/test/tap/builtin-config.js new file mode 100644 index 00000000000..75acd2be276 --- /dev/null +++ b/deps/npm/test/tap/builtin-config.js @@ -0,0 +1,125 @@ +var fs = require("fs") + +if (process.argv[2] === "write-builtin") { + var pid = process.argv[3] + fs.writeFileSync("npmrc", "foo=bar\npid=" + pid + "\n") + return +} + +var rcdata = "foo=bar\npid=" + process.pid + "\n" +var common = require("../common-tap.js") +var path = require("path") +var rimraf = require("rimraf") +var mkdirp = require("mkdirp") +var folder = path.resolve(__dirname, "builtin-config") +var test = require("tap").test +var npm = path.resolve(__dirname, "../..") +var spawn = require("child_process").spawn +var node = process.execPath + +test("setup", function (t) { + rimraf.sync(folder) + mkdirp.sync(folder + "/first") + mkdirp.sync(folder + "/second") + mkdirp.sync(folder + "/cache") + mkdirp.sync(folder + "/tmp") + + t.pass("finished setup") + t.end() +}) + + +test("install npm into first folder", function (t) { + var args = ["install", npm, "-g", + "--prefix=" + folder + "/first", + "--cache=" + folder + "/cache", + "--no-spin", + "--loglevel=silent", + "--tmp=" + folder + "/tmp"] + common.npm(args, {stdio: "inherit"}, function (er, code) { + if (er) throw er + t.equal(code, 0) + t.end() + }) +}) + +test("write npmrc file", function (t) { + common.npm(["explore", "npm", "-g", + "--prefix=" + folder + "/first", + "--cache=" + folder + "/cache", + "--tmp=" + folder + "/tmp", + "--no-spin", + "--", + node, __filename, "write-builtin", process.pid + ], + {"stdio": "inherit"}, + function (er, code) { + if (er) throw er + t.equal(code, 0) + t.end() + }) +}) + +test("use first npm to install second npm", function (t) { + // get the root location + common.npm([ "root", "-g", + "--prefix=" + folder + "/first", + "--cache=" + folder + "/cache", + "--tmp=" + folder + "/tmp", + "--no-spin" + ], {}, function (er, code, so) { + if (er) throw er + t.equal(code, 0) + var root = so.trim() + t.ok(fs.statSync(root).isDirectory()) + + var bin = path.resolve(root, "npm/bin/npm-cli.js") + spawn( node + , [ bin + , "install", npm + , "-g" + , "--prefix=" + folder + "/second" + , "--cache=" + folder + "/cache" + , "--tmp=" + folder + "/tmp" + , "--no-spin" + ]) + .on("error", function (er) { throw er }) + .on("close", function (code) { + t.equal(code, 0, "code is zero") + t.end() + }) + }) +}) + +test("verify that the builtin config matches", function (t) { + common.npm([ "root", "-g", + "--prefix=" + folder + "/first", + "--cache=" + folder + "/cache", + "--tmp=" + folder + "/tmp" + ], {}, function (er, code, so) { + if (er) throw er + t.equal(code, 0) + var firstRoot = so.trim() + common.npm([ "root", "-g", + "--prefix=" + folder + "/second", + "--cache=" + folder + "/cache", + "--tmp=" + folder + "/tmp" + ], {}, function (er, code, so) { + if (er) throw er + t.equal(code, 0) + var secondRoot = so.trim() + var firstRc = path.resolve(firstRoot, "npm", "npmrc") + var secondRc = path.resolve(secondRoot, "npm", "npmrc") + var firstData = fs.readFileSync(firstRc, "utf8") + var secondData = fs.readFileSync(secondRc, "utf8") + t.equal(firstData, secondData) + t.end() + }) + }) +}) + + +test("clean", function (t) { + rimraf.sync(folder) + 
t.end() +}) diff --git a/deps/npm/test/tap/cache-add-localdir-fallback.js b/deps/npm/test/tap/cache-add-localdir-fallback.js new file mode 100644 index 00000000000..facd95c3ad4 --- /dev/null +++ b/deps/npm/test/tap/cache-add-localdir-fallback.js @@ -0,0 +1,84 @@ +var path = require("path") +var test = require("tap").test +var npm = require("../../lib/npm.js") +var requireInject = require("require-inject") + +var realizePackageSpecifier = requireInject("realize-package-specifier", { + "fs": { + stat: function (file, cb) { + process.nextTick(function () { + switch (file) { + case path.resolve("named"): + cb(new Error("ENOENT")) + break + case path.resolve("file.tgz"): + cb(null, { isDirectory: function () { return false } }) + break + case path.resolve("dir-no-package"): + cb(null, { isDirectory: function () { return true } }) + break + case path.resolve("dir-no-package/package.json"): + cb(new Error("ENOENT")) + break + case path.resolve("dir-with-package"): + cb(null, { isDirectory: function () { return true } }) + break + case path.resolve("dir-with-package/package.json"): + cb(null, {}) + break + case path.resolve(__dirname, "dir-with-package"): + cb(null, { isDirectory: function () { return true } }) + break + case path.join(__dirname, "dir-with-package", "package.json"): + cb(null, {}) + break + case path.resolve(__dirname, "file.tgz"): + cb(null, { isDirectory: function () { return false } }) + break + default: + throw new Error("Unknown test file passed to stat: " + file) + } + }) + } + } +}) + +npm.load({loglevel : "silent"}, function () { + var cache = requireInject("../../lib/cache.js", { + "realize-package-specifier": realizePackageSpecifier, + "../../lib/cache/add-named.js": function addNamed (name, version, data, cb) { + cb(null, "addNamed") + }, + "../../lib/cache/add-local.js": function addLocal (name, data, cb) { + cb(null, "addLocal") + } + }) + + test("npm install localdir fallback", function (t) { + t.plan(12) + cache.add("named", null, null, false, function (er, which) { + t.ifError(er, "named was cached") + t.is(which, "addNamed", "registry package name") + }) + cache.add("file.tgz", null, null, false, function (er, which) { + t.ifError(er, "file.tgz was cached") + t.is(which, "addLocal", "local file") + }) + cache.add("dir-no-package", null, null, false, function (er, which) { + t.ifError(er, "local directory was cached") + t.is(which, "addNamed", "local directory w/o package.json") + }) + cache.add("dir-with-package", null, null, false, function (er, which) { + t.ifError(er, "local directory with package was cached") + t.is(which,"addLocal", "local directory with package.json") + }) + cache.add("file:./dir-with-package", null, __dirname, false, function (er, which) { + t.ifError(er, "local directory (as URI) with package was cached") + t.is(which, "addLocal", "file: URI to local directory with package.json") + }) + cache.add("file:./file.tgz", null, __dirname, false, function (er, which) { + t.ifError(er, "local file (as URI) with package was cached") + t.is(which, "addLocal", "file: URI to local file with package.json") + }) + }) +}) diff --git a/deps/npm/test/tap/cache-add-unpublished.js b/deps/npm/test/tap/cache-add-unpublished.js index e3132131453..46f0db232eb 100644 --- a/deps/npm/test/tap/cache-add-unpublished.js +++ b/deps/npm/test/tap/cache-add-unpublished.js @@ -1,61 +1,12 @@ -var common = require('../common-tap.js') -var test = require('tap').test - -var server - -var port = common.port -var http = require("http") - -var doc = { - "_id": "superfoo", - 
"_rev": "5-d11adeec0fdfea6b96b120610d2bed71", - "name": "superfoo", - "time": { - "modified": "2014-02-18T18:35:02.930Z", - "created": "2014-02-18T18:34:08.437Z", - "1.1.0": "2014-02-18T18:34:08.437Z", - "unpublished": { - "name": "isaacs", - "time": "2014-04-30T18:26:45.584Z", - "tags": { - "latest": "1.1.0" - }, - "maintainers": [ - { - "name": "foo", - "email": "foo@foo.com" - } - ], - "description": "do lots a foo", - "versions": [ - "1.1.0" - ] - } - }, - "_attachments": {} -} - -test("setup", function (t) { - server = http.createServer(function(req, res) { - res.end(JSON.stringify(doc)) - }) - server.listen(port, function() { - t.end() - }) -}) +var common = require("../common-tap.js") +var test = require("tap").test test("cache add", function (t) { common.npm(["cache", "add", "superfoo"], {}, function (er, c, so, se) { if (er) throw er - t.ok(c) - t.equal(so, "") - t.similar(se, /404 Not Found: superfoo/) - t.end() - }) -}) - -test("cleanup", function (t) { - server.close(function() { + t.ok(c, "got non-zero exit code") + t.equal(so, "", "nothing printed to stdout") + t.similar(se, /404 Not Found: superfoo/, "got expected error") t.end() }) }) diff --git a/deps/npm/test/tap/cache-shasum-fork.js b/deps/npm/test/tap/cache-shasum-fork.js new file mode 100644 index 00000000000..383f08c7152 --- /dev/null +++ b/deps/npm/test/tap/cache-shasum-fork.js @@ -0,0 +1,83 @@ +var test = require("tap").test +var path = require("path") +var fs = require("fs") +var rimraf = require("rimraf") +var mkdirp = require("mkdirp") +var mr = require("npm-registry-mock") +var common = require("../common-tap.js") +var cache = path.resolve(__dirname, "cache-shasum-fork", "CACHE") +var cwd = path.resolve(__dirname, "cache-shasum-fork", "CWD") +var server + +// Test for https://github.com/npm/npm/issues/3265 + +test("mock reg", function (t) { + rimraf.sync(cache) + mkdirp.sync(cache) + rimraf.sync(cwd) + mkdirp.sync(path.join(cwd, "node_modules")) + mr(common.port, function (s) { + server = s + t.pass("ok") + t.end() + }) +}) + +test("npm cache - install from fork", function (t) { + // Install from a tarball that thinks it is underscore@1.5.1 + // (but is actually a fork) + var forkPath = path.resolve( + __dirname, "cache-shasum-fork", "underscore-1.5.1.tgz") + common.npm(["install", forkPath], { + cwd: cwd, + env: { + "npm_config_cache" : cache, + "npm_config_registry" : common.registry, + "npm_config_loglevel" : "silent" + } + }, function (err, code, stdout, stderr) { + t.ifErr(err, "install finished without error") + t.notOk(stderr, "Should not get data on stderr: " + stderr) + t.equal(code, 0, "install finished successfully") + + t.equal(stdout, "underscore@1.5.1 node_modules/underscore\n") + var index = fs.readFileSync( + path.join(cwd, "node_modules", "underscore", "index.js"), + "utf8" + ) + t.equal(index, 'console.log("This is the fork");\n\n') + t.end() + }) +}) + +test("npm cache - install from origin", function (t) { + // Now install the real 1.5.1. 
+ rimraf.sync(path.join(cwd, "node_modules")) + mkdirp.sync(path.join(cwd, "node_modules")) + common.npm(["install", "underscore"], { + cwd: cwd, + env: { + "npm_config_cache" : cache, + "npm_config_registry" : common.registry, + "npm_config_loglevel" : "silent" + } + }, function (err, code, stdout, stderr) { + t.ifErr(err, "install finished without error") + t.equal(code, 0, "install finished successfully") + t.notOk(stderr, "Should not get data on stderr: " + stderr) + t.equal(stdout, "underscore@1.5.1 node_modules/underscore\n") + var index = fs.readFileSync( + path.join(cwd, "node_modules", "underscore", "index.js"), + "utf8" + ) + t.equal(index, "module.exports = require('./underscore');\n") + t.end() + }) +}) + +test("cleanup", function (t) { + server.close() + rimraf.sync(cache) + rimraf.sync(cwd) + t.end() +}) diff --git a/deps/npm/test/tap/cache-shasum-fork/underscore-1.5.1.tgz b/deps/npm/test/tap/cache-shasum-fork/underscore-1.5.1.tgz new file mode 100644 index 00000000000..5aca6247ac3 Binary files /dev/null and b/deps/npm/test/tap/cache-shasum-fork/underscore-1.5.1.tgz differ diff --git a/deps/npm/test/tap/cache-shasum.js b/deps/npm/test/tap/cache-shasum.js index 2139d8fb79c..c7784ecff51 100644 --- a/deps/npm/test/tap/cache-shasum.js +++ b/deps/npm/test/tap/cache-shasum.js @@ -1,7 +1,6 @@ var npm = require.resolve("../../") var test = require("tap").test var path = require("path") -var fs = require("fs") var rimraf = require("rimraf") var mkdirp = require("mkdirp") var mr = require("npm-registry-mock") @@ -11,7 +10,7 @@ var spawn = require("child_process").spawn var sha = require("sha") var server -test("mock reg", function(t) { +test("mock reg", function (t) { rimraf.sync(cache) mkdirp.sync(cache) mr(common.port, function (s) { @@ -21,7 +20,7 @@ test("mock reg", function(t) { }) }) -test("npm cache add request", function(t) { +test("npm cache add request", function (t) { var c = spawn(process.execPath, [ npm, "cache", "add", "request@2.27.0", "--cache=" + cache, @@ -30,21 +29,21 @@ test("npm cache add request", function(t) { ]) c.stderr.pipe(process.stderr) - c.stdout.on("data", function(d) { + c.stdout.on("data", function (d) { t.fail("Should not get data on stdout: " + d) }) - c.on("close", function(code) { + c.on("close", function (code) { t.notOk(code, "exit ok") t.end() }) }) -test("compare", function(t) { +test("compare", function (t) { var d = path.resolve(__dirname, "cache-shasum/request") var p = path.resolve(d, "2.27.0/package.tgz") var r = require("./cache-shasum/localhost_1337/request/.cache.json") - var rshasum = r.versions['2.27.0'].dist.shasum + var rshasum = r.versions["2.27.0"].dist.shasum sha.get(p, function (er, pshasum) { if (er) throw er @@ -53,7 +52,7 @@ test("compare", function(t) { }) }) -test("cleanup", function(t) { +test("cleanup", function (t) { server.close() rimraf.sync(cache) t.end() diff --git a/deps/npm/test/tap/circular-dep.js b/deps/npm/test/tap/circular-dep.js index 533f46451c6..60487dd3816 100644 --- a/deps/npm/test/tap/circular-dep.js +++ b/deps/npm/test/tap/circular-dep.js @@ -17,12 +17,12 @@ test("installing a package that depends on the current package", function (t) { setup(function () { npm.install("optimist", function (err) { if (err) return t.fail(err) - npm.dedupe(function(err) { + npm.dedupe(function (err) { if (err) return t.fail(err) t.ok(existsSync(path.resolve(pkg, "minimist", "node_modules", "optimist", "node_modules", "minimist" - ))) + )), "circular dependency uncircled") cleanup() server.close() }) diff --git 
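Both cache tests above reduce to one invariant: the tarball npm keeps in its
cache must hash to the shasum recorded in the registry metadata, so a fork
published under the same name@version (with a different shasum) can never be
served from the cache in place of the original. A minimal sketch of that
check, using the same "sha" module the cache-shasum test uses (the path and
expected digest here are hypothetical):

    var sha = require("sha")

    var tarball = "cache/request/2.27.0/package.tgz"            // hypothetical cache path
    var expected = "0123456789abcdef0123456789abcdef01234567"   // dist.shasum from .cache.json

    sha.get(tarball, function (er, actual) {
      if (er) throw er
      // a mismatch means the cached tarball is not the bits the registry described
      console.log(actual === expected ? "cache entry verified" : "shasum mismatch")
    })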
a/deps/npm/test/tap/config-basic.js b/deps/npm/test/tap/config-basic.js new file mode 100644 index 00000000000..d5a950a8e5a --- /dev/null +++ b/deps/npm/test/tap/config-basic.js @@ -0,0 +1,66 @@ +var test = require("tap").test +var npmconf = require("../../lib/config/core.js") +var common = require("./00-config-setup.js") +var path = require("path") + +var projectData = { + "save-prefix": "~", + "proprietary-attribs": false +} + +var ucData = common.ucData +var envData = common.envData +var envDataFix = common.envDataFix + +var gcData = { "package-config:foo": "boo" } + +var biData = {} + +var cli = { foo: "bar", umask: 022 } + +var expectList = +[ cli, + envDataFix, + projectData, + ucData, + gcData, + biData ] + +var expectSources = { + cli: { data: cli }, + env: { + data: envDataFix, + source: envData, + prefix: "" + }, + project: { + path: path.resolve(__dirname, "..", "..", ".npmrc"), + type: "ini", + data: projectData + }, + user: { + path: common.userconfig, + type: "ini", + data: ucData + }, + global: { + path: common.globalconfig, + type: "ini", + data: gcData + }, + builtin: { data: biData } +} + +test("no builtin", function (t) { + npmconf.load(cli, function (er, conf) { + if (er) throw er + t.same(conf.list, expectList) + t.same(conf.sources, expectSources) + t.same(npmconf.rootConf.list, []) + t.equal(npmconf.rootConf.root, npmconf.defs.defaults) + t.equal(conf.root, npmconf.defs.defaults) + t.equal(conf.get("umask"), 022) + t.equal(conf.get("heading"), "npm") + t.end() + }) +}) diff --git a/deps/npm/test/tap/config-builtin.js b/deps/npm/test/tap/config-builtin.js new file mode 100644 index 00000000000..5a1589ff6a2 --- /dev/null +++ b/deps/npm/test/tap/config-builtin.js @@ -0,0 +1,68 @@ +var test = require("tap").test +var npmconf = require("../../lib/config/core.js") +var common = require("./00-config-setup.js") +var path = require("path") + +var ucData = common.ucData + +var envData = common.envData +var envDataFix = common.envDataFix + +var gcData = { "package-config:foo": "boo" } + +var biData = { "builtin-config": true } + +var cli = { foo: "bar", heading: "foo", "git-tag-version": false } + +var projectData = { + "save-prefix": "~", + "proprietary-attribs": false +} + +var expectList = [ + cli, + envDataFix, + projectData, + ucData, + gcData, + biData +] + +var expectSources = { + cli: { data: cli }, + env: { + data: envDataFix, + source: envData, + prefix: "" + }, + project: { + path: path.resolve(__dirname, "..", "..", ".npmrc"), + type: "ini", + data: projectData + }, + user: { + path: common.userconfig, + type: "ini", + data: ucData + }, + global: { + path: common.globalconfig, + type: "ini", + data: gcData + }, + builtin: { data: biData } +} + +test("with builtin", function (t) { + npmconf.load(cli, common.builtin, function (er, conf) { + if (er) throw er + t.same(conf.list, expectList) + t.same(conf.sources, expectSources) + t.same(npmconf.rootConf.list, []) + t.equal(npmconf.rootConf.root, npmconf.defs.defaults) + t.equal(conf.root, npmconf.defs.defaults) + t.equal(conf.get("heading"), "foo") + t.equal(conf.get("git-tag-version"), false) + t.end() + }) +}) diff --git a/deps/npm/test/tap/config-certfile.js b/deps/npm/test/tap/config-certfile.js new file mode 100644 index 00000000000..25de9963a9f --- /dev/null +++ b/deps/npm/test/tap/config-certfile.js @@ -0,0 +1,18 @@ +require("./00-config-setup.js") + +var path = require("path") +var fs = require("fs") +var test = require("tap").test +var npmconf = require("../../lib/config/core.js") + +test("cafile loads as ca", 
function (t) {
+  var cafile = path.join(__dirname, "..", "fixtures", "config", "multi-ca")
+
+  npmconf.load({cafile: cafile}, function (er, conf) {
+    if (er) throw er
+
+    t.same(conf.get("cafile"), cafile)
+    t.same(conf.get("ca").join("\n"), fs.readFileSync(cafile, "utf8").trim())
+    t.end()
+  })
+})
diff --git a/deps/npm/test/tap/config-credentials.js b/deps/npm/test/tap/config-credentials.js
new file mode 100644
index 00000000000..c24bb7e1b27
--- /dev/null
+++ b/deps/npm/test/tap/config-credentials.js
@@ -0,0 +1,295 @@
+var test = require("tap").test
+
+var npmconf = require("../../lib/config/core.js")
+var common = require("./00-config-setup.js")
+
+var URI = "https://registry.lvh.me:8661/"
+
+test("getting scope with no credentials set", function (t) {
+  npmconf.load({}, function (er, conf) {
+    t.ifError(er, "configuration loaded")
+
+    var basic = conf.getCredentialsByURI(URI)
+    t.equal(basic.scope, "//registry.lvh.me:8661/", "nerfed URL extracted")
+
+    t.end()
+  })
+})
+
+test("trying to set credentials with no URI", function (t) {
+  npmconf.load(common.builtin, function (er, conf) {
+    t.ifError(er, "configuration loaded")
+
+    t.throws(function () {
+      conf.setCredentialsByURI()
+    }, "enforced missing URI")
+
+    t.end()
+  })
+})
+
+test("set with missing credentials object", function (t) {
+  npmconf.load(common.builtin, function (er, conf) {
+    t.ifError(er, "configuration loaded")
+
+    t.throws(function () {
+      conf.setCredentialsByURI(URI)
+    }, "enforced missing credentials")
+
+    t.end()
+  })
+})
+
+test("set with empty credentials object", function (t) {
+  npmconf.load(common.builtin, function (er, conf) {
+    t.ifError(er, "configuration loaded")
+
+    t.throws(function () {
+      conf.setCredentialsByURI(URI, {})
+    }, "enforced missing credentials")
+
+    t.end()
+  })
+})
+
+test("set with token", function (t) {
+  npmconf.load(common.builtin, function (er, conf) {
+    t.ifError(er, "configuration loaded")
+
+    t.doesNotThrow(function () {
+      conf.setCredentialsByURI(URI, {token : "simple-token"})
+    }, "needs only token")
+
+    var expected = {
+      scope : "//registry.lvh.me:8661/",
+      token : "simple-token",
+      username : undefined,
+      password : undefined,
+      email : undefined,
+      auth : undefined,
+      alwaysAuth : undefined
+    }
+
+    t.same(conf.getCredentialsByURI(URI), expected, "got bearer token and scope")
+
+    t.end()
+  })
+})
+
+test("set with missing username", function (t) {
+  npmconf.load(common.builtin, function (er, conf) {
+    t.ifError(er, "configuration loaded")
+
+    var credentials = {
+      password : "password",
+      email : "ogd@aoaioxxysz.net"
+    }
+
+    t.throws(function () {
+      conf.setCredentialsByURI(URI, credentials)
+    }, "enforced missing username")
+
+    t.end()
+  })
+})
+
+test("set with missing password", function (t) {
+  npmconf.load(common.builtin, function (er, conf) {
+    t.ifError(er, "configuration loaded")
+
+    var credentials = {
+      username : "username",
+      email : "ogd@aoaioxxysz.net"
+    }
+
+    t.throws(function () {
+      conf.setCredentialsByURI(URI, credentials)
+    }, "enforced missing password")
+
+    t.end()
+  })
+})
+
+test("set with missing email", function (t) {
+  npmconf.load(common.builtin, function (er, conf) {
+    t.ifError(er, "configuration loaded")
+
+    var credentials = {
+      username : "username",
+      password : "password"
+    }
+
+    t.throws(function () {
+      conf.setCredentialsByURI(URI, credentials)
+    }, "enforced missing email")
+
+    t.end()
+  })
+})
+
+test("set with old-style credentials", function (t) {
+  npmconf.load(common.builtin, function (er, conf) {
+    t.ifError(er, "configuration loaded")
+
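// (Two conventions underlie all of the expectations in this file and are
// worth spelling out. A registry URI is "nerfed" to a scope by dropping the
// scheme and everything after the host, so "https://registry.lvh.me:8661/"
// becomes "//registry.lvh.me:8661/". And the legacy _auth value is nothing
// more than base64("username:password"), which is where the repeated
// "dXNlcm5hbWU6cGFzc3dvcmQ=" literal comes from. A minimal sketch of both,
// simplified: npm's real scope computation also preserves any path prefix
// on the registry URL.)
var url = require("url")

function toScope (uri) {
  // "https://registry.lvh.me:8661/" -> "//registry.lvh.me:8661/"
  return "//" + url.parse(uri).host + "/"
}

console.log(toScope("https://registry.lvh.me:8661/"))
console.log(new Buffer("username:password").toString("base64"))
// -> "dXNlcm5hbWU6cGFzc3dvcmQ="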
var credentials = { + username : "username", + password : "password", + email : "ogd@aoaioxxysz.net" + } + + t.doesNotThrow(function () { + conf.setCredentialsByURI(URI, credentials) + }, "requires all of username, password, and email") + + var expected = { + scope : "//registry.lvh.me:8661/", + token : undefined, + username : "username", + password : "password", + email : "ogd@aoaioxxysz.net", + auth : "dXNlcm5hbWU6cGFzc3dvcmQ=", + alwaysAuth : false + } + + t.same(conf.getCredentialsByURI(URI), expected, "got credentials") + + t.end() + }) +}) + +test("get old-style credentials for default registry", function (t) { + npmconf.load(common.builtin, function (er, conf) { + var actual = conf.getCredentialsByURI(conf.get("registry")) + var expected = { + scope : "//registry.npmjs.org/", + token : undefined, + password : "password", + username : "username", + email : "i@izs.me", + auth : "dXNlcm5hbWU6cGFzc3dvcmQ=", + alwaysAuth : false + } + t.same(actual, expected) + t.end() + }) +}) + +test("set with always-auth enabled", function (t) { + npmconf.load(common.builtin, function (er, conf) { + t.ifError(er, "configuration loaded") + + var credentials = { + username : "username", + password : "password", + email : "ogd@aoaioxxysz.net", + alwaysAuth : true + } + + conf.setCredentialsByURI(URI, credentials) + + var expected = { + scope : "//registry.lvh.me:8661/", + token : undefined, + username : "username", + password : "password", + email : "ogd@aoaioxxysz.net", + auth : "dXNlcm5hbWU6cGFzc3dvcmQ=", + alwaysAuth : true + } + + t.same(conf.getCredentialsByURI(URI), expected, "got credentials") + + t.end() + }) +}) + +test("set with always-auth disabled", function (t) { + npmconf.load(common.builtin, function (er, conf) { + t.ifError(er, "configuration loaded") + + var credentials = { + username : "username", + password : "password", + email : "ogd@aoaioxxysz.net", + alwaysAuth : false + } + + conf.setCredentialsByURI(URI, credentials) + + var expected = { + scope : "//registry.lvh.me:8661/", + token : undefined, + username : "username", + password : "password", + email : "ogd@aoaioxxysz.net", + auth : "dXNlcm5hbWU6cGFzc3dvcmQ=", + alwaysAuth : false + } + + t.same(conf.getCredentialsByURI(URI), expected, "got credentials") + + t.end() + }) +}) + +test("set with global always-auth enabled", function (t) { + npmconf.load(common.builtin, function (er, conf) { + t.ifError(er, "configuration loaded") + var original = conf.get("always-auth") + conf.set("always-auth", true) + + var credentials = { + username : "username", + password : "password", + email : "ogd@aoaioxxysz.net" + } + + conf.setCredentialsByURI(URI, credentials) + + var expected = { + scope : "//registry.lvh.me:8661/", + token : undefined, + username : "username", + password : "password", + email : "ogd@aoaioxxysz.net", + auth : "dXNlcm5hbWU6cGFzc3dvcmQ=", + alwaysAuth : true + } + + t.same(conf.getCredentialsByURI(URI), expected, "got credentials") + + conf.set("always-auth", original) + t.end() + }) +}) + +test("set with global always-auth disabled", function (t) { + npmconf.load(common.builtin, function (er, conf) { + t.ifError(er, "configuration loaded") + var original = conf.get("always-auth") + conf.set("always-auth", false) + + var credentials = { + username : "username", + password : "password", + email : "ogd@aoaioxxysz.net" + } + + conf.setCredentialsByURI(URI, credentials) + + var expected = { + scope : "//registry.lvh.me:8661/", + token : undefined, + username : "username", + password : "password", + email : 
"ogd@aoaioxxysz.net", + auth : "dXNlcm5hbWU6cGFzc3dvcmQ=", + alwaysAuth : false + } + + t.same(conf.getCredentialsByURI(URI), expected, "got credentials") + + conf.set("always-auth", original) + t.end() + }) +}) diff --git a/deps/npm/test/tap/config-malformed.js b/deps/npm/test/tap/config-malformed.js new file mode 100644 index 00000000000..04502214621 --- /dev/null +++ b/deps/npm/test/tap/config-malformed.js @@ -0,0 +1,14 @@ +var test = require('tap').test + +var npmconf = require("../../lib/config/core.js") +var common = require("./00-config-setup.js") + +test('with malformed', function (t) { + npmconf.load({}, common.malformed, function (er, conf) { + t.ok(er, 'Expected parse error') + if (!(er && /Failed parsing JSON config key email/.test(er.message))) { + throw er + } + t.end() + }) +}) diff --git a/deps/npm/test/tap/config-meta.js b/deps/npm/test/tap/config-meta.js index 75a66604cf6..faced80d99e 100644 --- a/deps/npm/test/tap/config-meta.js +++ b/deps/npm/test/tap/config-meta.js @@ -16,8 +16,13 @@ var CONFS = {} var DOC = {} var exceptions = [ + path.resolve(lib, "adduser.js"), path.resolve(lib, "config.js"), - path.resolve(lib, "utils", "lifecycle.js") + path.resolve(lib, "publish.js"), + path.resolve(lib, "utils", "lifecycle.js"), + path.resolve(lib, "utils", "map-to-registry.js"), + path.resolve(nm, "npm-registry-client", "lib", "publish.js"), + path.resolve(nm, "npm-registry-client", "lib", "request.js") ] test("get files", function (t) { @@ -46,16 +51,16 @@ test("get files", function (t) { test("get lines", function (t) { FILES.forEach(function (f) { - var lines = fs.readFileSync(f, 'utf8').split('\n') + var lines = fs.readFileSync(f, "utf8").split(/\r|\n/) lines.forEach(function (l, i) { var matches = l.split(/conf(?:ig)?\.get\(/g) matches.shift() matches.forEach(function (m) { - m = m.split(')').shift() + m = m.split(")").shift() var literal = m.match(/^['"].+['"]$/) if (literal) { m = m.slice(1, -1) - if (!m.match(/^\_/) && m !== 'argv') + if (!m.match(/^\_/) && m !== "argv") CONFS[m] = { file: f, line: i @@ -71,53 +76,51 @@ test("get lines", function (t) { }) test("get docs", function (t) { - var d = fs.readFileSync(doc, "utf8").split("\n") + var d = fs.readFileSync(doc, "utf8").split(/\r|\n/) // walk down until the "## Config Settings" section for (var i = 0; i < d.length && d[i] !== "## Config Settings"; i++); i++ // now gather up all the ^###\s lines until the next ^##\s - var doclines = [] for (; i < d.length && !d[i].match(/^## /); i++) { if (d[i].match(/^### /)) - DOC[ d[i].replace(/^### /, '').trim() ] = true + DOC[ d[i].replace(/^### /, "").trim() ] = true } t.pass("read the docs") t.end() }) test("check configs", function (t) { - var defs = require("npmconf/config-defs.js") + var defs = require("../../lib/config/defaults.js") var types = Object.keys(defs.types) var defaults = Object.keys(defs.defaults) - - for (var c in CONFS) { - if (CONFS[c].file.indexOf(lib) === 0) { - t.ok(DOC[c], "should be documented " + c + " " - + CONFS[c].file + ":" + CONFS[c].line) - t.ok(types.indexOf(c) !== -1, "should be defined in npmconf " + c) - t.ok(defaults.indexOf(c) !== -1, "should have default in npmconf " + c) + for (var c1 in CONFS) { + if (CONFS[c1].file.indexOf(lib) === 0) { + t.ok(DOC[c1], "should be documented " + c1 + " " + + CONFS[c1].file + ":" + CONFS[c1].line) + t.ok(types.indexOf(c1) !== -1, "should be defined in npmconf " + c1) + t.ok(defaults.indexOf(c1) !== -1, "should have default in npmconf " + c1) } } - for (var c in DOC) { - if (c !== "versions" && c !== 
"version") { - t.ok(CONFS[c], "config in doc should be used somewhere " + c) - t.ok(types.indexOf(c) !== -1, "should be defined in npmconf " + c) - t.ok(defaults.indexOf(c) !== -1, "should have default in npmconf " + c) + for (var c2 in DOC) { + if (c2 !== "versions" && c2 !== "version" && c2 !== "init.version") { + t.ok(CONFS[c2], "config in doc should be used somewhere " + c2) + t.ok(types.indexOf(c2) !== -1, "should be defined in npmconf " + c2) + t.ok(defaults.indexOf(c2) !== -1, "should have default in npmconf " + c2) } } - types.forEach(function(c) { - if (!c.match(/^\_/) && c !== 'argv' && !c.match(/^versions?$/)) { - t.ok(DOC[c], 'defined type should be documented ' + c) - t.ok(CONFS[c], 'defined type should be used ' + c) + types.forEach(function (c) { + if (!c.match(/^\_/) && c !== "argv" && !c.match(/^versions?$/)) { + t.ok(DOC[c], "defined type should be documented " + c) + t.ok(CONFS[c], "defined type should be used " + c) } }) - defaults.forEach(function(c) { - if (!c.match(/^\_/) && c !== 'argv' && !c.match(/^versions?$/)) { - t.ok(DOC[c], 'defaulted type should be documented ' + c) - t.ok(CONFS[c], 'defaulted type should be used ' + c) + defaults.forEach(function (c) { + if (!c.match(/^\_/) && c !== "argv" && !c.match(/^versions?$/)) { + t.ok(DOC[c], "defaulted type should be documented " + c) + t.ok(CONFS[c], "defaulted type should be used " + c) } }) diff --git a/deps/npm/test/tap/config-project.js b/deps/npm/test/tap/config-project.js new file mode 100644 index 00000000000..337355bf286 --- /dev/null +++ b/deps/npm/test/tap/config-project.js @@ -0,0 +1,66 @@ +var test = require("tap").test +var path = require("path") +var fix = path.resolve(__dirname, "..", "fixtures", "config") +var projectRc = path.resolve(fix, ".npmrc") +var npmconf = require("../../lib/config/core.js") +var common = require("./00-config-setup.js") + +var projectData = { just: "testing" } + +var ucData = common.ucData +var envData = common.envData +var envDataFix = common.envDataFix + +var gcData = { "package-config:foo": "boo" } + +var biData = {} + +var cli = { foo: "bar", umask: 022, prefix: fix } + +var expectList = [ + cli, + envDataFix, + projectData, + ucData, + gcData, + biData +] + +var expectSources = { + cli: { data: cli }, + env: { + data: envDataFix, + source: envData, + prefix: "" + }, + project: { + path: projectRc, + type: "ini", + data: projectData + }, + user: { + path: common.userconfig, + type: "ini", + data: ucData + }, + global: { + path: common.globalconfig, + type: "ini", + data: gcData + }, + builtin: { data: biData } +} + +test("no builtin", function (t) { + npmconf.load(cli, function (er, conf) { + if (er) throw er + t.same(conf.list, expectList) + t.same(conf.sources, expectSources) + t.same(npmconf.rootConf.list, []) + t.equal(npmconf.rootConf.root, npmconf.defs.defaults) + t.equal(conf.root, npmconf.defs.defaults) + t.equal(conf.get("umask"), 022) + t.equal(conf.get("heading"), "npm") + t.end() + }) +}) diff --git a/deps/npm/test/tap/config-save.js b/deps/npm/test/tap/config-save.js new file mode 100644 index 00000000000..88526a38af8 --- /dev/null +++ b/deps/npm/test/tap/config-save.js @@ -0,0 +1,88 @@ +var fs = require("fs") +var ini = require("ini") +var test = require("tap").test +var npmconf = require("../../lib/config/core.js") +var common = require("./00-config-setup.js") + +var expectConf = [ + "globalconfig = " + common.globalconfig, + "email = i@izs.me", + "env-thing = asdf", + "init.author.name = Isaac Z. 
Schlueter", + "init.author.email = i@izs.me", + "init.author.url = http://blog.izs.me/", + "init.version = 1.2.3", + "proprietary-attribs = false", + "npm:publishtest = true", + "_npmjs.org:couch = https://admin:password@localhost:5984/registry", + "npm-www:nocache = 1", + "sign-git-tag = false", + "message = v%s", + "strict-ssl = false", + "_auth = dXNlcm5hbWU6cGFzc3dvcmQ=", + "", + "[_token]", + "AuthSession = yabba-dabba-doodle", + "version = 1", + "expires = 1345001053415", + "path = /", + "httponly = true", + "" +].join("\n") + +var expectFile = [ + "globalconfig = " + common.globalconfig, + "email = i@izs.me", + "env-thing = asdf", + "init.author.name = Isaac Z. Schlueter", + "init.author.email = i@izs.me", + "init.author.url = http://blog.izs.me/", + "init.version = 1.2.3", + "proprietary-attribs = false", + "npm:publishtest = true", + "_npmjs.org:couch = https://admin:password@localhost:5984/registry", + "npm-www:nocache = 1", + "sign-git-tag = false", + "message = v%s", + "strict-ssl = false", + "_auth = dXNlcm5hbWU6cGFzc3dvcmQ=", + "", + "[_token]", + "AuthSession = yabba-dabba-doodle", + "version = 1", + "expires = 1345001053415", + "path = /", + "httponly = true", + "" +].join("\n") + +test("saving configs", function (t) { + npmconf.load(function (er, conf) { + if (er) + throw er + conf.set("sign-git-tag", false, "user") + conf.del("nodedir") + conf.del("tmp") + var foundConf = ini.stringify(conf.sources.user.data) + t.same(ini.parse(foundConf), ini.parse(expectConf)) + fs.unlinkSync(common.userconfig) + conf.save("user", function (er) { + if (er) + throw er + var uc = fs.readFileSync(conf.get("userconfig"), "utf8") + t.same(ini.parse(uc), ini.parse(expectFile)) + t.end() + }) + }) +}) + +test("setting prefix", function (t) { + npmconf.load(function (er, conf) { + if (er) + throw er + + conf.prefix = "newvalue" + t.same(conf.prefix, "newvalue") + t.end() + }) +}) diff --git a/deps/npm/test/tap/config-semver-tag.js b/deps/npm/test/tap/config-semver-tag.js new file mode 100644 index 00000000000..4ce1cb219e5 --- /dev/null +++ b/deps/npm/test/tap/config-semver-tag.js @@ -0,0 +1,27 @@ +var util = require("util") +var test = require("tap").test +var npmconf = require("../../lib/config/core.js") +var common = require("./00-config-setup.js") + +var cli = { tag: "v2.x" } + +var log = require("npmlog") + +test("tag cannot be a SemVer", function (t) { + var messages = [] + log.warn = function (m) { + messages.push(m + " " + util.format.apply(util, [].slice.call(arguments, 1))) + } + + var expect = [ + 'invalid config tag="v2.x"', + "invalid config Tag must not be a SemVer range" + ] + + npmconf.load(cli, common.builtin, function (er, conf) { + if (er) throw er + t.equal(conf.get("tag"), "latest") + t.same(messages, expect) + t.end() + }) +}) diff --git a/deps/npm/test/tap/dedupe.js b/deps/npm/test/tap/dedupe.js index b4b7495aa87..c0a648e738c 100644 --- a/deps/npm/test/tap/dedupe.js +++ b/deps/npm/test/tap/dedupe.js @@ -2,17 +2,26 @@ var test = require("tap").test , fs = require("fs") , path = require("path") , existsSync = fs.existsSync || path.existsSync - , npm = require("../../") , rimraf = require("rimraf") , mr = require("npm-registry-mock") - , common = require('../common-tap.js') + , common = require("../common-tap.js") + +var EXEC_OPTS = {} test("dedupe finds the common module and moves it up one level", function (t) { setup(function (s) { - npm.install(".", function (err) { - if (err) return t.fail(err) - npm.dedupe(function(err) { - if (err) return t.fail(err) + common.npm( + [ 
+ "install", ".", + "--registry", common.registry + ], + EXEC_OPTS, + function (err, code) { + t.ifError(err, "successfully installed directory") + t.equal(code, 0, "npm install exited with code") + common.npm(["dedupe"], {}, function (err, code) { + t.ifError(err, "successfully deduped against previous install") + t.notOk(code, "npm dedupe exited with code") t.ok(existsSync(path.join(__dirname, "dedupe", "node_modules", "minimist"))) t.ok(!existsSync(path.join(__dirname, "dedupe", "node_modules", "checker"))) s.close() // shutdown mock registry. @@ -25,10 +34,8 @@ test("dedupe finds the common module and moves it up one level", function (t) { function setup (cb) { process.chdir(path.join(__dirname, "dedupe")) mr(common.port, function (s) { // create mock registry. - npm.load({registry: common.registry}, function() { - rimraf.sync(path.join(__dirname, "dedupe", "node_modules")) - fs.mkdirSync(path.join(__dirname, "dedupe", "node_modules")) - cb(s) - }) + rimraf.sync(path.join(__dirname, "dedupe", "node_modules")) + fs.mkdirSync(path.join(__dirname, "dedupe", "node_modules")) + cb(s) }) } diff --git a/deps/npm/test/tap/dev-dep-duplicate/desired-ls-results.json b/deps/npm/test/tap/dev-dep-duplicate/desired-ls-results.json new file mode 100644 index 00000000000..355039a0929 --- /dev/null +++ b/deps/npm/test/tap/dev-dep-duplicate/desired-ls-results.json @@ -0,0 +1,9 @@ +{ + "name": "dev-dep-duplicate", + "version": "0.0.0", + "dependencies": { + "underscore": { + "version": "1.5.1" + } + } +} diff --git a/deps/npm/test/tap/dev-dep-duplicate/package.json b/deps/npm/test/tap/dev-dep-duplicate/package.json new file mode 100644 index 00000000000..87061b9d542 --- /dev/null +++ b/deps/npm/test/tap/dev-dep-duplicate/package.json @@ -0,0 +1,11 @@ +{ + "author": "Anders Janmyr", + "name": "dev-dep-duplicate", + "version": "0.0.0", + "dependencies": { + "underscore": "1.5.1" + }, + "devDependencies": { + "underscore": "1.3.1" + } +} diff --git a/deps/npm/test/tap/false_name.js b/deps/npm/test/tap/false_name.js index 5ab1a67ecc6..b02eafec99d 100644 --- a/deps/npm/test/tap/false_name.js +++ b/deps/npm/test/tap/false_name.js @@ -11,41 +11,45 @@ var test = require("tap").test , fs = require("fs") , path = require("path") , existsSync = fs.existsSync || path.existsSync - , spawn = require("child_process").spawn - , npm = require("../../") , rimraf = require("rimraf") , common = require("../common-tap.js") , mr = require("npm-registry-mock") - , pkg = __dirname + "/false_name" + , pkg = path.resolve(__dirname, "false_name") + , cache = path.resolve(pkg, "cache") + , nodeModules = path.resolve(pkg, "node_modules") -test("not every pkg.name can be required", function (t) { - rimraf.sync(pkg + "/cache") +var EXEC_OPTS = { cwd: pkg } - t.plan(1) +test("setup", function(t) { + cleanup() + fs.mkdirSync(nodeModules) + t.end() +}) + +test("not every pkg.name can be required", function (t) { + t.plan(3) mr(common.port, function (s) { - setup(function () { - npm.install(".", function (err) { - if (err) return t.fail(err) - s.close() - t.ok(existsSync(pkg + "/node_modules/test-package-with-one-dep/" + - "node_modules/test-package")) - }) + common.npm([ + "install", ".", + "--cache", cache, + "--registry", common.registry + ], EXEC_OPTS, function (err, code) { + s.close() + t.ifErr(err, "install finished without error") + t.equal(code, 0, "install exited ok") + t.ok(existsSync(path.resolve(pkg, + "node_modules/test-package-with-one-dep", + "node_modules/test-package"))) }) }) }) +function cleanup() { + rimraf.sync(cache) 
+ rimraf.sync(nodeModules) +} + test("cleanup", function (t) { - rimraf.sync(pkg + "/cache") - rimraf.sync(pkg + "/node_modules") + cleanup() t.end() }) - -function setup (cb) { - process.chdir(pkg) - npm.load({cache: pkg + "/cache", registry: common.registry}, - function () { - rimraf.sync(pkg + "/node_modules") - fs.mkdirSync(pkg + "/node_modules") - cb() - }) -} diff --git a/deps/npm/test/tap/git-cache-locking.js b/deps/npm/test/tap/git-cache-locking.js index b9b328f30c6..39f8b279c3b 100644 --- a/deps/npm/test/tap/git-cache-locking.js +++ b/deps/npm/test/tap/git-cache-locking.js @@ -1,10 +1,8 @@ var test = require("tap").test + , common = require("../common-tap") , path = require("path") , rimraf = require("rimraf") , mkdirp = require("mkdirp") - , spawn = require("child_process").spawn - , npm = require.resolve("../../bin/npm-cli.js") - , node = process.execPath , pkg = path.resolve(__dirname, "git-cache-locking") , tmp = path.join(pkg, "tmp") , cache = path.join(pkg, "cache") @@ -12,10 +10,7 @@ var test = require("tap").test test("setup", function (t) { rimraf.sync(pkg) - mkdirp.sync(pkg) - mkdirp.sync(cache) - mkdirp.sync(tmp) - mkdirp.sync(path.resolve(pkg, 'node_modules')) + mkdirp.sync(path.resolve(pkg, "node_modules")) t.end() }) @@ -26,27 +21,28 @@ test("git-cache-locking: install a git dependency", function (t) { // package c depends on a.git#master and b.git#master // package b depends on a.git#master - var child = spawn(node, [npm, "install", "git://github.com/nigelzor/npm-4503-c.git"], { + common.npm([ + "install", + "git://github.com/nigelzor/npm-4503-c.git" + ], { cwd: pkg, env: { - npm_config_cache: cache, - npm_config_tmp: tmp, - npm_config_prefix: pkg, - npm_config_global: "false", + "npm_config_cache": cache, + "npm_config_tmp": tmp, + "npm_config_prefix": pkg, + "npm_config_global": "false", HOME: process.env.HOME, Path: process.env.PATH, PATH: process.env.PATH - }, - stdio: "inherit" - }) - - child.on("close", function (code) { + } + }, function (err, code) { + t.ifErr(err, "npm install finished without error") t.equal(0, code, "npm install should succeed") t.end() }) }) -test('cleanup', function(t) { +test("cleanup", function(t) { rimraf.sync(pkg) t.end() }) diff --git a/deps/npm/test/tap/git-cache-no-hooks.js b/deps/npm/test/tap/git-cache-no-hooks.js new file mode 100644 index 00000000000..32731fa1b01 --- /dev/null +++ b/deps/npm/test/tap/git-cache-no-hooks.js @@ -0,0 +1,63 @@ +var test = require("tap").test + , fs = require("fs") + , path = require("path") + , rimraf = require("rimraf") + , mkdirp = require("mkdirp") + , spawn = require("child_process").spawn + , npmCli = require.resolve("../../bin/npm-cli.js") + , node = process.execPath + , pkg = path.resolve(__dirname, "git-cache-no-hooks") + , tmp = path.join(pkg, "tmp") + , cache = path.join(pkg, "cache") + + +test("setup", function (t) { + rimraf.sync(pkg) + mkdirp.sync(pkg) + mkdirp.sync(cache) + mkdirp.sync(tmp) + mkdirp.sync(path.resolve(pkg, "node_modules")) + t.end() +}) + +test("git-cache-no-hooks: install a git dependency", function (t) { + + // disable git integration tests on Travis. 
+ if (process.env.TRAVIS) return t.end() + + var command = [ npmCli + , "install" + , "git://github.com/nigelzor/npm-4503-a.git" + ] + var child = spawn(node, command, { + cwd: pkg, + env: { + "npm_config_cache" : cache, + "npm_config_tmp" : tmp, + "npm_config_prefix" : pkg, + "npm_config_global" : "false", + "npm_config_umask" : "00", + HOME : process.env.HOME, + Path : process.env.PATH, + PATH : process.env.PATH + }, + stdio: "inherit" + }) + + child.on("close", function (code) { + t.equal(code, 0, "npm install should succeed") + + // verify permissions on git hooks + var repoDir = "git-github-com-nigelzor-npm-4503-a-git-40c5cb24" + var hooksPath = path.join(cache, "_git-remotes", repoDir, "hooks") + fs.readdir(hooksPath, function (err) { + t.equal(err && err.code, "ENOENT", "hooks are not brought along with repo") + t.end() + }) + }) +}) + +test("cleanup", function (t) { + rimraf.sync(pkg) + t.end() +}) diff --git a/deps/npm/test/tap/global-prefix-set-in-userconfig.js b/deps/npm/test/tap/global-prefix-set-in-userconfig.js index 85fa4f610ab..f820a27727d 100644 --- a/deps/npm/test/tap/global-prefix-set-in-userconfig.js +++ b/deps/npm/test/tap/global-prefix-set-in-userconfig.js @@ -15,9 +15,9 @@ test("setup", function (t) { test("run command", function (t) { var args = ["prefix", "-g", "--userconfig=" + rcfile] - common.npm(args, {env: {}}, function (er, code, so, se) { + common.npm(args, {env: {}}, function (er, code, so) { if (er) throw er - t.equal(code, 0) + t.notOk(code, "npm prefix exited with code 0") t.equal(so.trim(), prefix) t.end() }) diff --git a/deps/npm/test/tap/ignore-install-link.js b/deps/npm/test/tap/ignore-install-link.js index 2c90b9a6d42..45db51d30f7 100644 --- a/deps/npm/test/tap/ignore-install-link.js +++ b/deps/npm/test/tap/ignore-install-link.js @@ -4,20 +4,17 @@ if (process.platform === "win32") { } var common = require("../common-tap.js") var test = require("tap").test -var npm = require.resolve("../../bin/npm-cli.js") -var node = process.execPath var path = require("path") var fs = require("fs") var rimraf = require("rimraf") var mkdirp = require("mkdirp") -var spawn = require("child_process").spawn var root = path.resolve(__dirname, "ignore-install-link") var pkg = path.resolve(root, "pkg") var dep = path.resolve(root, "dep") var target = path.resolve(pkg, "node_modules", "dep") var cache = path.resolve(root, "cache") -var global = path.resolve(root, "global") +var globalPath = path.resolve(root, "global") var pkgj = { "name":"pkg", "version": "1.2.3" , "dependencies": { "dep": "1.2.3" } } @@ -34,7 +31,7 @@ test("setup", function (t) { mkdirp.sync(path.resolve(pkg, "node_modules")) mkdirp.sync(dep) mkdirp.sync(cache) - mkdirp.sync(global) + mkdirp.sync(globalPath) fs.writeFileSync(path.resolve(pkg, "package.json"), JSON.stringify(pkgj)) fs.writeFileSync(path.resolve(dep, "package.json"), JSON.stringify(depj)) fs.symlinkSync(dep, target, "dir") @@ -47,15 +44,15 @@ test("ignore install if package is linked", function (t) { env: { PATH: process.env.PATH || process.env.Path, HOME: process.env.HOME, - npm_config_prefix: global, - npm_config_cache: cache, - npm_config_registry: common.registry, - npm_config_loglevel: "silent" + "npm_config_prefix": globalPath, + "npm_config_cache": cache, + "npm_config_registry": common.registry, + "npm_config_loglevel": "silent" }, stdio: "inherit" - }, function (er, code, stdout, stderr) { + }, function (er, code) { if (er) throw er - t.equal(code, 0) + t.equal(code, 0, "npm install exited with code") t.end() }) }) diff --git 
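A detail worth noticing across the spawn-based tests above: each child npm
process is given a hand-built environment instead of inheriting process.env,
so stray npm_config_* settings from the developer's shell cannot leak into
the test, while HOME and PATH (and Windows' "Path") are passed through. A
minimal sketch of that pattern with plain child_process (the scratch
directories are hypothetical):

    var spawn = require("child_process").spawn
    var npmCli = require.resolve("../../bin/npm-cli.js")

    var child = spawn(process.execPath, [npmCli, "install"], {
      env: {
        HOME: process.env.HOME,
        PATH: process.env.PATH,
        Path: process.env.PATH,                  // Windows spells it "Path"
        "npm_config_cache": "/tmp/test-cache",   // hypothetical scratch dirs
        "npm_config_tmp": "/tmp/test-tmp",
        "npm_config_prefix": "/tmp/test-prefix",
        "npm_config_global": "false"
      },
      stdio: "inherit"
    })

    child.on("close", function (code) {
      console.log("npm install exited with code", code)
    })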
a/deps/npm/test/tap/ignore-scripts.js b/deps/npm/test/tap/ignore-scripts.js index 0115b7571d8..9526443e92c 100644 --- a/deps/npm/test/tap/ignore-scripts.js +++ b/deps/npm/test/tap/ignore-scripts.js @@ -1,24 +1,24 @@ +var common = require("../common-tap") var test = require("tap").test -var npm = require.resolve("../../bin/npm-cli.js") - -var spawn = require("child_process").spawn -var node = process.execPath +var path = require("path") // ignore-scripts/package.json has scripts that always exit with non-zero error // codes. The "install" script is omitted so that npm tries to run node-gyp, // which should also fail. -var pkg = __dirname + "/ignore-scripts" +var pkg = path.resolve(__dirname, "ignore-scripts") -test("ignore-scripts: install using the option", function(t) { - createChild([npm, "install", "--ignore-scripts"]).on("close", function(code) { - t.equal(code, 0) +test("ignore-scripts: install using the option", function (t) { + createChild(["install", "--ignore-scripts"], function (err, code) { + t.ifError(err, "install with scripts ignored finished successfully") + t.equal(code, 0, "npm install exited with code") t.end() }) }) -test("ignore-scripts: install NOT using the option", function(t) { - createChild([npm, "install"]).on("close", function(code) { - t.notEqual(code, 0) +test("ignore-scripts: install NOT using the option", function (t) { + createChild(["install"], function (err, code) { + t.ifError(err, "install with scripts successful") + t.notEqual(code, 0, "npm install exited with code") t.end() }) }) @@ -34,39 +34,40 @@ var scripts = [ "prerestart", "restart", "postrestart" ] -scripts.forEach(function(script) { - test("ignore-scripts: run-script "+script+" using the option", function(t) { - createChild([npm, "--ignore-scripts", "run-script", script]) - .on("close", function(code) { - t.equal(code, 0) - t.end() - }) +scripts.forEach(function (script) { + test("ignore-scripts: run-script "+script+" using the option", function (t) { + createChild(["--ignore-scripts", "run-script", script], function (err, code) { + t.ifError(err, "run-script " + script + " with ignore-scripts successful") + t.equal(code, 0, "npm run-script exited with code") + t.end() + }) }) }) -scripts.forEach(function(script) { - test("ignore-scripts: run-script "+script+" NOT using the option", function(t) { - createChild([npm, "run-script", script]).on("close", function(code) { - t.notEqual(code, 0) +scripts.forEach(function (script) { + test("ignore-scripts: run-script "+script+" NOT using the option", function (t) { + createChild(["run-script", script], function (err, code) { + t.ifError(err, "run-script " + script + " finished successfully") + t.notEqual(code, 0, "npm run-script exited with code") t.end() }) }) }) -function createChild (args) { +function createChild (args, cb) { var env = { HOME: process.env.HOME, Path: process.env.PATH, PATH: process.env.PATH, - npm_config_loglevel: "silent" + "npm_config_loglevel": "silent" } if (process.platform === "win32") env.npm_config_cache = "%APPDATA%\\npm-cache" - return spawn(node, args, { + return common.npm(args, { cwd: pkg, stdio: "inherit", env: env - }) + }, cb) } diff --git a/deps/npm/test/tap/ignore-shrinkwrap.js b/deps/npm/test/tap/ignore-shrinkwrap.js index ce1c66425c0..6744a868a20 100644 --- a/deps/npm/test/tap/ignore-shrinkwrap.js +++ b/deps/npm/test/tap/ignore-shrinkwrap.js @@ -1,10 +1,9 @@ var common = require("../common-tap.js") var test = require("tap").test -var pkg = './ignore-shrinkwrap' +var pkg = 
require("path").join(__dirname,"ignore-shrinkwrap") var mr = require("npm-registry-mock") -var child var spawn = require("child_process").spawn var npm = require.resolve("../../bin/npm-cli.js") var node = process.execPath @@ -18,7 +17,7 @@ var customMocks = { test("ignore-shrinkwrap: using the option", function (t) { mr({port: common.port, mocks: customMocks}, function (s) { - s._server.on("request", function (req, res) { + s._server.on("request", function (req) { switch (req.url) { case "/shrinkwrap.js": t.fail() @@ -28,7 +27,7 @@ test("ignore-shrinkwrap: using the option", function (t) { } }) var child = createChild(true) - child.on("close", function (m) { + child.on("close", function () { s.close() t.end() }) @@ -37,7 +36,7 @@ test("ignore-shrinkwrap: using the option", function (t) { test("ignore-shrinkwrap: NOT using the option", function (t) { mr({port: common.port, mocks: customMocks}, function (s) { - s._server.on("request", function (req, res) { + s._server.on("request", function (req) { switch (req.url) { case "/shrinkwrap.js": t.pass("shrinkwrap used") @@ -47,7 +46,7 @@ test("ignore-shrinkwrap: NOT using the option", function (t) { } }) var child = createChild(false) - child.on("close", function (m) { + child.on("close", function () { s.close() t.end() }) @@ -65,13 +64,12 @@ function createChild (ignoreShrinkwrap) { return spawn(node, args, { cwd: pkg, env: { - npm_config_registry: common.registry, - npm_config_cache_lock_stale: 1000, - npm_config_cache_lock_wait: 1000, + "npm_config_registry": common.registry, + "npm_config_cache_lock_stale": 1000, + "npm_config_cache_lock_wait": 1000, HOME: process.env.HOME, Path: process.env.PATH, PATH: process.env.PATH } }) - } diff --git a/deps/npm/test/tap/install-at-locally.js b/deps/npm/test/tap/install-at-locally.js index 18ea6c3a60e..02874d0cd8f 100644 --- a/deps/npm/test/tap/install-at-locally.js +++ b/deps/npm/test/tap/install-at-locally.js @@ -1,43 +1,42 @@ -var common = require('../common-tap.js') -var test = require('tap').test -var npm = require('../../') -var osenv = require('osenv') -var path = require('path') -var fs = require('fs') -var rimraf = require('rimraf') -var mkdirp = require('mkdirp') -var pkg = path.join(__dirname, 'install-at-locally') +var common = require("../common-tap.js") +var test = require("tap").test +var path = require("path") +var fs = require("fs") +var rimraf = require("rimraf") +var mkdirp = require("mkdirp") +var pkg = path.join(__dirname, "install-at-locally") + +var EXEC_OPTS = { } test("setup", function (t) { mkdirp.sync(pkg) - mkdirp.sync(path.resolve(pkg, 'node_modules')) + mkdirp.sync(path.resolve(pkg, "node_modules")) process.chdir(pkg) t.end() }) -test('"npm install ./package@1.2.3" should install local pkg', function(t) { - npm.load(function() { - npm.commands.install(['./package@1.2.3'], function(err) { - var p = path.resolve(pkg, 'node_modules/install-at-locally/package.json') - t.ok(JSON.parse(fs.readFileSync(p, 'utf8'))) - t.end() - }) +test("\"npm install ./package@1.2.3\" should install local pkg", function(t) { + common.npm(["install", "./package@1.2.3"], EXEC_OPTS, function(err, code) { + var p = path.resolve(pkg, "node_modules/install-at-locally/package.json") + t.ifError(err, "install local package successful") + t.equal(code, 0, "npm install exited with code") + t.ok(JSON.parse(fs.readFileSync(p, "utf8"))) + t.end() }) }) -test('"npm install install/at/locally@./package@1.2.3" should install local pkg', function(t) { - npm.load(function() { - 
npm.commands.install(['./package@1.2.3'], function(err) { - var p = path.resolve(pkg, 'node_modules/install-at-locally/package.json') - t.ok(JSON.parse(fs.readFileSync(p, 'utf8'))) - t.end() - }) +test("\"npm install install/at/locally@./package@1.2.3\" should install local pkg", function(t) { + common.npm(["install", "./package@1.2.3"], EXEC_OPTS, function(err, code) { + var p = path.resolve(pkg, "node_modules/install-at-locally/package.json") + t.ifError(err, "install local package in explicit directory successful") + t.equal(code, 0, "npm install exited with code") + t.ok(JSON.parse(fs.readFileSync(p, "utf8"))) + t.end() }) }) -test('cleanup', function(t) { +test("cleanup", function(t) { process.chdir(__dirname) - rimraf.sync(path.resolve(pkg, 'node_modules')) + rimraf.sync(path.resolve(pkg, "node_modules")) t.end() }) - diff --git a/deps/npm/test/tap/install-cli-production.js b/deps/npm/test/tap/install-cli-production.js new file mode 100644 index 00000000000..00c93552701 --- /dev/null +++ b/deps/npm/test/tap/install-cli-production.js @@ -0,0 +1,44 @@ +var common = require("../common-tap.js") +var test = require("tap").test +var path = require("path") +var fs = require("fs") +var rimraf = require("rimraf") +var mkdirp = require("mkdirp") +var pkg = path.join(__dirname, "install-cli-production") + +var EXEC_OPTS = { + cwd: pkg +} + +test("setup", function(t) { + mkdirp.sync(pkg) + mkdirp.sync(path.resolve(pkg, "node_modules")) + process.chdir(pkg) + t.end() +}) + +test("\"npm install --production\" should install dependencies", function(t) { + common.npm(["install", "--production"], EXEC_OPTS, function(err, code) { + t.ifError(err, "install production successful") + t.equal(code, 0, "npm install exited with code") + var p = path.resolve(pkg, "node_modules/dependency/package.json") + t.ok(JSON.parse(fs.readFileSync(p, "utf8"))) + t.end() + }) +}) + +test("\"npm install --production\" should not install dev dependencies", function(t) { + common.npm(["install", "--production"], EXEC_OPTS, function(err, code) { + t.ifError(err, "install production successful") + t.equal(code, 0, "npm install exited with code") + var p = path.resolve(pkg, "node_modules/dev-dependency/package.json") + t.ok(!fs.existsSync(p), "") + t.end() + }) +}) + +test("cleanup", function(t) { + process.chdir(__dirname) + rimraf.sync(path.resolve(pkg, "node_modules")) + t.end() +}) diff --git a/deps/npm/test/tap/install-cli-production/dependency/package.json b/deps/npm/test/tap/install-cli-production/dependency/package.json new file mode 100644 index 00000000000..6ee6be0c3ba --- /dev/null +++ b/deps/npm/test/tap/install-cli-production/dependency/package.json @@ -0,0 +1,5 @@ +{ + "name": "dependency", + "description": "fixture", + "version": "0.0.0" +} diff --git a/deps/npm/test/tap/install-cli-production/dev-dependency/package.json b/deps/npm/test/tap/install-cli-production/dev-dependency/package.json new file mode 100644 index 00000000000..a6a8f69763e --- /dev/null +++ b/deps/npm/test/tap/install-cli-production/dev-dependency/package.json @@ -0,0 +1,5 @@ +{ + "name": "dev-dependency", + "description": "fixture", + "version": "0.0.0" +} diff --git a/deps/npm/test/tap/install-cli-production/package.json b/deps/npm/test/tap/install-cli-production/package.json new file mode 100644 index 00000000000..8f2f0e2ec2e --- /dev/null +++ b/deps/npm/test/tap/install-cli-production/package.json @@ -0,0 +1,14 @@ +{ + "name": "install-cli-production", + "description": "fixture", + "version": "0.0.0", + "scripts": { + "prepublish": "exit 
123" + }, + "dependencies": { + "dependency": "file:./dependency" + }, + "devDependencies": { + "dev-dependency": "file:./dev-dependency" + } +} diff --git a/deps/npm/test/tap/install-cli-unicode.js b/deps/npm/test/tap/install-cli-unicode.js index bb9b4f5eed4..7318deffca4 100644 --- a/deps/npm/test/tap/install-cli-unicode.js +++ b/deps/npm/test/tap/install-cli-unicode.js @@ -1,23 +1,24 @@ -var common = require('../common-tap.js') -var test = require('tap').test -var npm = require('../../') -var mkdirp = require('mkdirp') -var mr = require('npm-registry-mock') -var exec = require('child_process').exec +var common = require("../common-tap.js") +var test = require("tap").test +var mr = require("npm-registry-mock") +var path = require("path") -var pkg = __dirname + '/install-cli' -var NPM_BIN = __dirname + '/../../bin/npm-cli.js' +var pkg = path.resolve(__dirname, "install-cli") function hasOnlyAscii (s) { - return /^[\000-\177]*$/.test(s) ; + return /^[\000-\177]*$/.test(s) } -test('does not use unicode with --unicode false', function (t) { - t.plan(3) +var EXEC_OPTS = { + cwd : pkg +} + +test("does not use unicode with --unicode false", function (t) { + t.plan(5) mr(common.port, function (s) { - exec('node ' + NPM_BIN + ' install --unicode false read', { - cwd: pkg - }, function(err, stdout) { + common.npm(["install", "--unicode", "false", "read"], EXEC_OPTS, function (err, code, stdout) { + t.ifError(err, "install package read without unicode success") + t.notOk(code, "npm install exited with code 0") t.ifError(err) t.ok(stdout, stdout.length) t.ok(hasOnlyAscii(stdout)) @@ -26,11 +27,11 @@ test('does not use unicode with --unicode false', function (t) { }) }) -test('cleanup', function (t) { +test("cleanup", function (t) { mr(common.port, function (s) { - exec('node ' + NPM_BIN + ' uninstall read', { - cwd: pkg - }, function(err, stdout) { + common.npm(["uninstall", "read"], EXEC_OPTS, function (err, code) { + t.ifError(err, "uninstall read package success") + t.notOk(code, "npm uninstall exited with code 0") s.close() }) }) diff --git a/deps/npm/test/tap/install-from-local.js b/deps/npm/test/tap/install-from-local.js new file mode 100644 index 00000000000..d1fbb3b909a --- /dev/null +++ b/deps/npm/test/tap/install-from-local.js @@ -0,0 +1,39 @@ +var common = require("../common-tap") +var test = require("tap").test +var path = require("path") +var fs = require("fs") +var rimraf = require("rimraf") +var pkg = path.join(__dirname, "install-from-local", "package-with-local-paths") + +var EXEC_OPTS = { } + +test("setup", function (t) { + process.chdir(pkg) + t.end() +}) + +test('"npm install" should install local packages', function (t) { + common.npm(["install", "."], EXEC_OPTS, function (err, code) { + t.ifError(err, "error should not exist") + t.notOk(code, "npm install exited with code 0") + var dependencyPackageJson = path.resolve(pkg, "node_modules/package-local-dependency/package.json") + t.ok( + JSON.parse(fs.readFileSync(dependencyPackageJson, "utf8")), + "package with local dependency installed" + ) + + var devDependencyPackageJson = path.resolve(pkg, "node_modules/package-local-dev-dependency/package.json") + t.ok( + JSON.parse(fs.readFileSync(devDependencyPackageJson, "utf8")), + "package with local dev dependency installed" + ) + + t.end() + }) +}) + +test("cleanup", function (t) { + process.chdir(__dirname) + rimraf.sync(path.resolve(pkg, "node_modules")) + t.end() +}) diff --git a/deps/npm/test/tap/install-from-local/package-local-dependency/package.json 
b/deps/npm/test/tap/install-from-local/package-local-dependency/package.json new file mode 100644 index 00000000000..a524d826245 --- /dev/null +++ b/deps/npm/test/tap/install-from-local/package-local-dependency/package.json @@ -0,0 +1,5 @@ +{ + "name": "package-local-dependency", + "version": "0.0.0", + "description": "Test for local installs" +} diff --git a/deps/npm/test/tap/install-from-local/package-local-dev-dependency/package.json b/deps/npm/test/tap/install-from-local/package-local-dev-dependency/package.json new file mode 100644 index 00000000000..23f3ad68240 --- /dev/null +++ b/deps/npm/test/tap/install-from-local/package-local-dev-dependency/package.json @@ -0,0 +1,5 @@ +{ + "name": "package-local-dev-dependency", + "version": "0.0.0", + "description": "Test for local installs" +} diff --git a/deps/npm/test/tap/install-from-local/package-scoped-dependency/package.json b/deps/npm/test/tap/install-from-local/package-scoped-dependency/package.json new file mode 100644 index 00000000000..ec3e13214ea --- /dev/null +++ b/deps/npm/test/tap/install-from-local/package-scoped-dependency/package.json @@ -0,0 +1,5 @@ +{ + "name": "@scoped/package", + "version": "0.0.0", + "description": "Test for local installs" +} diff --git a/deps/npm/test/tap/install-from-local/package-with-local-paths/package.json b/deps/npm/test/tap/install-from-local/package-with-local-paths/package.json new file mode 100644 index 00000000000..bf4a3e946c6 --- /dev/null +++ b/deps/npm/test/tap/install-from-local/package-with-local-paths/package.json @@ -0,0 +1,10 @@ +{ + "name": "package-with-local-paths", + "version": "0.0.0", + "dependencies": { + "package-local-dependency": "file:../package-local-dependency" + }, + "devDependencies": { + "package-local-dev-dependency": "file:../package-local-dev-dependency" + } +} diff --git a/deps/npm/test/tap/install-from-local/package-with-scoped-paths/package.json b/deps/npm/test/tap/install-from-local/package-with-scoped-paths/package.json new file mode 100644 index 00000000000..262aa57e065 --- /dev/null +++ b/deps/npm/test/tap/install-from-local/package-with-scoped-paths/package.json @@ -0,0 +1,8 @@ +{ + "name": "package-with-scoped-paths", + "version": "0.0.0", + "dependencies": { + "package-local-dependency": "file:../package-local-dependency", + "@scoped/package-scoped-dependency": "file:../package-scoped-dependency" + } +} diff --git a/deps/npm/test/tap/install-save-exact.js b/deps/npm/test/tap/install-save-exact.js index cf25b779bc2..ef785f240e3 100644 --- a/deps/npm/test/tap/install-save-exact.js +++ b/deps/npm/test/tap/install-save-exact.js @@ -1,41 +1,41 @@ -var common = require('../common-tap.js') -var test = require('tap').test -var npm = require('../../') -var osenv = require('osenv') -var path = require('path') -var fs = require('fs') -var rimraf = require('rimraf') -var mkdirp = require('mkdirp') -var pkg = path.join(__dirname, 'install-save-exact') +var common = require("../common-tap.js") +var test = require("tap").test +var npm = require("../../") +var path = require("path") +var fs = require("fs") +var rimraf = require("rimraf") +var mkdirp = require("mkdirp") +var pkg = path.join(__dirname, "install-save-exact") var mr = require("npm-registry-mock") test("setup", function (t) { mkdirp.sync(pkg) - mkdirp.sync(path.resolve(pkg, 'node_modules')) + mkdirp.sync(path.resolve(pkg, "node_modules")) process.chdir(pkg) t.end() }) -test('"npm install --save --save-exact should install local pkg', function(t) { +test("\"npm install --save --save-exact\" should install 
local pkg", function (t) { resetPackageJSON(pkg) mr(common.port, function (s) { npm.load({ cache: pkg + "/cache", - loglevel: 'silent', - registry: common.registry }, function(err) { + loglevel: "silent", + registry: common.registry }, function (err) { t.ifError(err) - npm.config.set('save', true) - npm.config.set('save-exact', true) - npm.commands.install(['underscore@1.3.1'], function(err) { + npm.config.set("save", true) + npm.config.set("save-exact", true) + npm.commands.install(["underscore@1.3.1"], function (err) { t.ifError(err) - var p = path.resolve(pkg, 'node_modules/underscore/package.json') + var p = path.resolve(pkg, "node_modules/underscore/package.json") t.ok(JSON.parse(fs.readFileSync(p))) - var pkgJson = JSON.parse(fs.readFileSync(pkg + '/package.json', 'utf8')) + p = path.resolve(pkg, "package.json") + var pkgJson = JSON.parse(fs.readFileSync(p, "utf8")) t.deepEqual(pkgJson.dependencies, { - 'underscore': '1.3.1' - }, 'Underscore dependency should specify exactly 1.3.1') - npm.config.set('save', undefined) - npm.config.set('save-exact', undefined) + "underscore": "1.3.1" + }, "Underscore dependency should specify exactly 1.3.1") + npm.config.set("save", undefined) + npm.config.set("save-exact", undefined) s.close() t.end() }) @@ -43,50 +43,50 @@ test('"npm install --save --save-exact should install local pkg', function(t) { }) }) -test('"npm install --save-dev --save-exact should install local pkg', function(t) { +test("\"npm install --save-dev --save-exact\" should install local pkg", function (t) { resetPackageJSON(pkg) mr(common.port, function (s) { npm.load({ cache: pkg + "/cache", - loglevel: 'silent', - registry: common.registry }, function(err) { + loglevel: "silent", + registry: common.registry }, function (err) { t.ifError(err) - npm.config.set('save-dev', true) - npm.config.set('save-exact', true) - npm.commands.install(['underscore@1.3.1'], function(err) { + npm.config.set("save-dev", true) + npm.config.set("save-exact", true) + npm.commands.install(["underscore@1.3.1"], function (err) { t.ifError(err) - var p = path.resolve(pkg, 'node_modules/underscore/package.json') + var p = path.resolve(pkg, "node_modules/underscore/package.json") t.ok(JSON.parse(fs.readFileSync(p))) - var pkgJson = JSON.parse(fs.readFileSync(pkg + '/package.json', 'utf8')) + p = path.resolve(pkg, "package.json") + var pkgJson = JSON.parse(fs.readFileSync(p, "utf8")) console.log(pkgJson) t.deepEqual(pkgJson.devDependencies, { - 'underscore': '1.3.1' - }, 'underscore devDependency should specify exactly 1.3.1') + "underscore": "1.3.1" + }, "underscore devDependency should specify exactly 1.3.1") s.close() - npm.config.set('save-dev', undefined) - npm.config.set('save-exact', undefined) + npm.config.set("save-dev", undefined) + npm.config.set("save-exact", undefined) t.end() }) }) }) }) -test('cleanup', function(t) { +test("cleanup", function (t) { process.chdir(__dirname) - rimraf.sync(path.resolve(pkg, 'node_modules')) - rimraf.sync(path.resolve(pkg, 'cache')) + rimraf.sync(path.resolve(pkg, "node_modules")) + rimraf.sync(path.resolve(pkg, "cache")) resetPackageJSON(pkg) t.end() }) function resetPackageJSON(pkg) { - var pkgJson = JSON.parse(fs.readFileSync(pkg + '/package.json', 'utf8')) + var pkgJson = JSON.parse(fs.readFileSync(pkg + "/package.json", "utf8")) delete pkgJson.dependencies delete pkgJson.devDependencies delete pkgJson.optionalDependencies var json = JSON.stringify(pkgJson, null, 2) + "\n" - fs.writeFileSync(pkg + '/package.json', json, "ascii") + var p = path.resolve(pkg, 
"package.json") + fs.writeFileSync(p, json, "ascii") } - - diff --git a/deps/npm/test/tap/install-save-local.js b/deps/npm/test/tap/install-save-local.js new file mode 100644 index 00000000000..2a1f839984b --- /dev/null +++ b/deps/npm/test/tap/install-save-local.js @@ -0,0 +1,65 @@ +var common = require("../common-tap.js") +var test = require("tap").test +var path = require("path") +var fs = require("fs") +var rimraf = require("rimraf") +var pkg = path.join(__dirname, "install-save-local", "package") + +var EXEC_OPTS = { } + +test("setup", function (t) { + resetPackageJSON(pkg) + process.chdir(pkg) + t.end() +}) + +test('"npm install --save ../local/path" should install local package and save to package.json', function (t) { + resetPackageJSON(pkg) + common.npm(["install", "--save", "../package-local-dependency"], EXEC_OPTS, function (err, code) { + t.ifError(err) + t.notOk(code, "npm install exited with code 0") + + var dependencyPackageJson = path.resolve(pkg, "node_modules/package-local-dependency/package.json") + t.ok(JSON.parse(fs.readFileSync(dependencyPackageJson, "utf8"))) + + var pkgJson = JSON.parse(fs.readFileSync(pkg + "/package.json", "utf8")) + t.deepEqual(pkgJson.dependencies, { + "package-local-dependency": "file:../package-local-dependency" + }) + t.end() + }) +}) + +test('"npm install --save-dev ../local/path" should install local package and save to package.json', function (t) { + resetPackageJSON(pkg) + common.npm(["install", "--save-dev", "../package-local-dev-dependency"], EXEC_OPTS, function (err, code) { + t.ifError(err) + t.notOk(code, "npm install exited with code 0") + + var dependencyPackageJson = path.resolve(pkg, "node_modules/package-local-dev-dependency/package.json") + t.ok(JSON.parse(fs.readFileSync(dependencyPackageJson, "utf8"))) + + var pkgJson = JSON.parse(fs.readFileSync(pkg + "/package.json", "utf8")) + t.deepEqual(pkgJson.devDependencies, { + "package-local-dev-dependency": "file:../package-local-dev-dependency" + }) + + t.end() + }) +}) + + +test("cleanup", function (t) { + resetPackageJSON(pkg) + process.chdir(__dirname) + rimraf.sync(path.resolve(pkg, "node_modules")) + t.end() +}) + +function resetPackageJSON(pkg) { + var pkgJson = JSON.parse(fs.readFileSync(pkg + "/package.json", "utf8")) + delete pkgJson.dependencies + delete pkgJson.devDependencies + var json = JSON.stringify(pkgJson, null, 2) + "\n" + fs.writeFileSync(pkg + "/package.json", json, "ascii") +} diff --git a/deps/npm/test/tap/install-save-local/package-local-dependency/package.json b/deps/npm/test/tap/install-save-local/package-local-dependency/package.json new file mode 100644 index 00000000000..a524d826245 --- /dev/null +++ b/deps/npm/test/tap/install-save-local/package-local-dependency/package.json @@ -0,0 +1,5 @@ +{ + "name": "package-local-dependency", + "version": "0.0.0", + "description": "Test for local installs" +} diff --git a/deps/npm/test/tap/install-save-local/package-local-dev-dependency/package.json b/deps/npm/test/tap/install-save-local/package-local-dev-dependency/package.json new file mode 100644 index 00000000000..23f3ad68240 --- /dev/null +++ b/deps/npm/test/tap/install-save-local/package-local-dev-dependency/package.json @@ -0,0 +1,5 @@ +{ + "name": "package-local-dev-dependency", + "version": "0.0.0", + "description": "Test for local installs" +} diff --git a/deps/npm/test/tap/install-save-local/package/package.json b/deps/npm/test/tap/install-save-local/package/package.json new file mode 100644 index 00000000000..c6a5cb99d58 --- /dev/null +++ 
b/deps/npm/test/tap/install-save-local/package/package.json @@ -0,0 +1,4 @@ +{ + "name": "package", + "version": "0.0.0" +} diff --git a/deps/npm/test/tap/install-save-prefix.js b/deps/npm/test/tap/install-save-prefix.js index bbdeddf3fec..d4efef4b615 100644 --- a/deps/npm/test/tap/install-save-prefix.js +++ b/deps/npm/test/tap/install-save-prefix.js @@ -1,40 +1,39 @@ -var common = require('../common-tap.js') -var test = require('tap').test -var npm = require('../../') -var osenv = require('osenv') -var path = require('path') -var fs = require('fs') -var rimraf = require('rimraf') -var mkdirp = require('mkdirp') -var pkg = path.join(__dirname, 'install-save-prefix') +var common = require("../common-tap.js") +var test = require("tap").test +var npm = require("../../") +var path = require("path") +var fs = require("fs") +var rimraf = require("rimraf") +var mkdirp = require("mkdirp") +var pkg = path.join(__dirname, "install-save-prefix") var mr = require("npm-registry-mock") test("setup", function (t) { mkdirp.sync(pkg) - mkdirp.sync(path.resolve(pkg, 'node_modules')) + mkdirp.sync(path.resolve(pkg, "node_modules")) process.chdir(pkg) t.end() }) -test('"npm install --save with default save-prefix should install local pkg versioned to allow minor updates', function(t) { +test("npm install --save with default save-prefix should install local pkg versioned to allow minor updates", function (t) { resetPackageJSON(pkg) mr(common.port, function (s) { npm.load({ cache: pkg + "/cache", - loglevel: 'silent', - 'save-prefix': '^', - registry: common.registry }, function(err) { + loglevel: "silent", + "save-prefix": "^", + registry: common.registry }, function (err) { t.ifError(err) - npm.config.set('save', true) - npm.commands.install(['underscore@latest'], function(err) { + npm.config.set("save", true) + npm.commands.install(["underscore@latest"], function (err) { t.ifError(err) - var p = path.resolve(pkg, 'node_modules/underscore/package.json') + var p = path.resolve(pkg, "node_modules/underscore/package.json") t.ok(JSON.parse(fs.readFileSync(p))) - var pkgJson = JSON.parse(fs.readFileSync(pkg + '/package.json', 'utf8')) + var pkgJson = JSON.parse(fs.readFileSync(pkg + "/package.json", "utf8")) t.deepEqual(pkgJson.dependencies, { - 'underscore': '^1.5.1' - }, 'Underscore dependency should specify ^1.5.1') - npm.config.set('save', undefined) + "underscore": "^1.5.1" + }, "Underscore dependency should specify ^1.5.1") + npm.config.set("save", undefined) s.close() t.end() }) @@ -42,25 +41,25 @@ test('"npm install --save with default save-prefix should install local pkg vers }) }) -test('"npm install --save-dev with default save-prefix should install local pkg to dev dependencies versioned to allow minor updates', function(t) { +test("npm install --save-dev with default save-prefix should install local pkg to dev dependencies versioned to allow minor updates", function (t) { resetPackageJSON(pkg) mr(common.port, function (s) { npm.load({ cache: pkg + "/cache", - loglevel: 'silent', - 'save-prefix': '^', - registry: common.registry }, function(err) { + loglevel: "silent", + "save-prefix": "^", + registry: common.registry }, function (err) { t.ifError(err) - npm.config.set('save-dev', true) - npm.commands.install(['underscore@1.3.1'], function(err) { + npm.config.set("save-dev", true) + npm.commands.install(["underscore@1.3.1"], function (err) { t.ifError(err) - var p = path.resolve(pkg, 'node_modules/underscore/package.json') + var p = path.resolve(pkg, "node_modules/underscore/package.json") 
t.ok(JSON.parse(fs.readFileSync(p))) - var pkgJson = JSON.parse(fs.readFileSync(pkg + '/package.json', 'utf8')) + var pkgJson = JSON.parse(fs.readFileSync(pkg + "/package.json", "utf8")) t.deepEqual(pkgJson.devDependencies, { - 'underscore': '^1.3.1' - }, 'Underscore devDependency should specify ^1.3.1') - npm.config.set('save-dev', undefined) + "underscore": "^1.3.1" + }, "Underscore devDependency should specify ^1.3.1") + npm.config.set("save-dev", undefined) s.close() t.end() }) @@ -68,26 +67,26 @@ test('"npm install --save-dev with default save-prefix should install local pkg }) }) -test('"npm install --save with "~" save-prefix should install local pkg versioned to allow patch updates', function(t) { +test("npm install --save with \"~\" save-prefix should install local pkg versioned to allow patch updates", function (t) { resetPackageJSON(pkg) mr(common.port, function (s) { npm.load({ cache: pkg + "/cache", - loglevel: 'silent', - registry: common.registry }, function(err) { + loglevel: "silent", + registry: common.registry }, function (err) { t.ifError(err) - npm.config.set('save', true) - npm.config.set('save-prefix', '~') - npm.commands.install(['underscore@1.3.1'], function(err) { + npm.config.set("save", true) + npm.config.set("save-prefix", "~") + npm.commands.install(["underscore@1.3.1"], function (err) { t.ifError(err) - var p = path.resolve(pkg, 'node_modules/underscore/package.json') + var p = path.resolve(pkg, "node_modules/underscore/package.json") t.ok(JSON.parse(fs.readFileSync(p))) - var pkgJson = JSON.parse(fs.readFileSync(pkg + '/package.json', 'utf8')) + var pkgJson = JSON.parse(fs.readFileSync(pkg + "/package.json", "utf8")) t.deepEqual(pkgJson.dependencies, { - 'underscore': '~1.3.1' - }, 'Underscore dependency should specify ~1.3.1') - npm.config.set('save', undefined) - npm.config.set('save-prefix', undefined) + "underscore": "~1.3.1" + }, "Underscore dependency should specify ~1.3.1") + npm.config.set("save", undefined) + npm.config.set("save-prefix", undefined) s.close() t.end() }) @@ -95,26 +94,26 @@ test('"npm install --save with "~" save-prefix should install local pkg versione }) }) -test('"npm install --save-dev with "~" save-prefix should install local pkg to dev dependencies versioned to allow patch updates', function(t) { +test("npm install --save-dev with \"~\" save-prefix should install local pkg to dev dependencies versioned to allow patch updates", function (t) { resetPackageJSON(pkg) mr(common.port, function (s) { npm.load({ cache: pkg + "/cache", - loglevel: 'silent', - registry: common.registry }, function(err) { + loglevel: "silent", + registry: common.registry }, function (err) { t.ifError(err) - npm.config.set('save-dev', true) - npm.config.set('save-prefix', '~') - npm.commands.install(['underscore@1.3.1'], function(err) { + npm.config.set("save-dev", true) + npm.config.set("save-prefix", "~") + npm.commands.install(["underscore@1.3.1"], function (err) { t.ifError(err) - var p = path.resolve(pkg, 'node_modules/underscore/package.json') + var p = path.resolve(pkg, "node_modules/underscore/package.json") t.ok(JSON.parse(fs.readFileSync(p))) - var pkgJson = JSON.parse(fs.readFileSync(pkg + '/package.json', 'utf8')) + var pkgJson = JSON.parse(fs.readFileSync(pkg + "/package.json", "utf8")) t.deepEqual(pkgJson.devDependencies, { - 'underscore': '~1.3.1' - }, 'Underscore devDependency should specify ~1.3.1') - npm.config.set('save-dev', undefined) - npm.config.set('save-prefix', undefined) + "underscore": "~1.3.1" + }, "Underscore devDependency 
should specify ~1.3.1") + npm.config.set("save-dev", undefined) + npm.config.set("save-prefix", undefined) s.close() t.end() }) @@ -122,21 +121,19 @@ test('"npm install --save-dev with "~" save-prefix should install local pkg to d }) }) -test('cleanup', function(t) { +test("cleanup", function (t) { process.chdir(__dirname) - rimraf.sync(path.resolve(pkg, 'node_modules')) - rimraf.sync(path.resolve(pkg, 'cache')) + rimraf.sync(path.resolve(pkg, "node_modules")) + rimraf.sync(path.resolve(pkg, "cache")) resetPackageJSON(pkg) t.end() }) function resetPackageJSON(pkg) { - var pkgJson = JSON.parse(fs.readFileSync(pkg + '/package.json', 'utf8')) + var pkgJson = JSON.parse(fs.readFileSync(pkg + "/package.json", "utf8")) delete pkgJson.dependencies delete pkgJson.devDependencies delete pkgJson.optionalDependencies var json = JSON.stringify(pkgJson, null, 2) + "\n" - fs.writeFileSync(pkg + '/package.json', json, "ascii") + fs.writeFileSync(pkg + "/package.json", json, "ascii") } - - diff --git a/deps/npm/test/tap/install-scoped-already-installed.js b/deps/npm/test/tap/install-scoped-already-installed.js new file mode 100644 index 00000000000..a355a4a50bb --- /dev/null +++ b/deps/npm/test/tap/install-scoped-already-installed.js @@ -0,0 +1,86 @@ +var common = require("../common-tap") +var existsSync = require("fs").existsSync +var join = require("path").join + +var test = require("tap").test +var rimraf = require("rimraf") +var mkdirp = require("mkdirp") + +var pkg = join(__dirname, "install-from-local", "package-with-scoped-paths") +var modules = join(pkg, "node_modules") + +var EXEC_OPTS = { + cwd : pkg +} + +test("setup", function (t) { + rimraf.sync(modules) + rimraf.sync(join(pkg, "cache")) + process.chdir(pkg) + mkdirp.sync(modules) + t.end() +}) + +test("installing already installed local scoped package", function (t) { + common.npm(["install", "--loglevel", "silent"], EXEC_OPTS, function (err, code, stdout) { + var installed = parseNpmInstallOutput(stdout) + t.ifError(err, "error should not exist") + t.notOk(code, "npm install exited with code 0") + t.ifError(err, "install ran to completion without error") + t.ok( + existsSync(join(modules, "@scoped", "package", "package.json")), + "package installed" + ) + t.ok( + contains(installed, "node_modules/@scoped/package"), + "installed @scoped/package" + ) + t.ok( + contains(installed, "node_modules/package-local-dependency"), + "installed package-local-dependency" + ) + + common.npm(["install", "--loglevel", "silent"], EXEC_OPTS, function (err, code, stdout) { + installed = parseNpmInstallOutput(stdout) + t.ifError(err, "error should not exist") + t.notOk(code, "npm install exited with code 0") + + t.ifError(err, "install ran to completion without error") + + t.ok( + existsSync(join(modules, "@scoped", "package", "package.json")), + "package installed" + ) + + t.notOk( + contains(installed, "node_modules/@scoped/package"), + "did not reinstall @scoped/package" + ) + t.notOk( + contains(installed, "node_modules/package-local-dependency"), + "did not reinstall package-local-dependency" + ) + t.end() + }) + }) +}) + +test("cleanup", function (t) { + process.chdir(__dirname) + rimraf.sync(join(modules)) + rimraf.sync(join(pkg, "cache")) + t.end() +}) + +function contains(list, element) { + for (var i=0; i < list.length; ++i) { + if (list[i] === element) { + return true + } + } + return false +} + +function parseNpmInstallOutput(stdout) { + return stdout.trim().split(/\n\n|\s+/) +} diff --git a/deps/npm/test/tap/install-scoped-link.js 
b/deps/npm/test/tap/install-scoped-link.js new file mode 100644 index 00000000000..b1e6ca0b229 --- /dev/null +++ b/deps/npm/test/tap/install-scoped-link.js @@ -0,0 +1,51 @@ +var common = require("../common-tap.js") +var existsSync = require("fs").existsSync +var join = require("path").join +var exec = require("child_process").exec + +var test = require("tap").test +var rimraf = require("rimraf") +var mkdirp = require("mkdirp") + +var pkg = join(__dirname, "install-scoped") +var work = join(__dirname, "install-scoped-TEST") +var modules = join(work, "node_modules") + +var EXEC_OPTS = {} + +test("setup", function (t) { + mkdirp.sync(modules) + process.chdir(work) + + t.end() +}) + +test("installing package with links", function (t) { + common.npm(["install", pkg], EXEC_OPTS, function (err, code) { + t.ifError(err, "install ran to completion without error") + t.notOk(code, "npm install exited with code 0") + + t.ok( + existsSync(join(modules, "@scoped", "package", "package.json")), + "package installed" + ) + t.ok(existsSync(join(modules, ".bin")), "binary link directory exists") + + var hello = join(modules, ".bin", "hello") + t.ok(existsSync(hello), "binary link exists") + + exec("node " + hello, function (err, stdout, stderr) { + t.ifError(err, "command ran fine") + t.notOk(stderr, "got no error output back") + t.equal(stdout, "hello blrbld\n", "output was as expected") + + t.end() + }) + }) +}) + +test("cleanup", function (t) { + process.chdir(__dirname) + rimraf.sync(work) + t.end() +}) diff --git a/deps/npm/test/tap/install-scoped/package.json b/deps/npm/test/tap/install-scoped/package.json new file mode 100644 index 00000000000..32700cf6af9 --- /dev/null +++ b/deps/npm/test/tap/install-scoped/package.json @@ -0,0 +1,7 @@ +{ + "name": "@scoped/package", + "version": "0.0.0", + "bin": { + "hello": "./world.js" + } +} diff --git a/deps/npm/test/tap/install-scoped/world.js b/deps/npm/test/tap/install-scoped/world.js new file mode 100644 index 00000000000..f6333ba5b13 --- /dev/null +++ b/deps/npm/test/tap/install-scoped/world.js @@ -0,0 +1 @@ +console.log("hello blrbld") diff --git a/deps/npm/test/tap/install-with-dev-dep-duplicate.js b/deps/npm/test/tap/install-with-dev-dep-duplicate.js new file mode 100644 index 00000000000..d0f86aa77ad --- /dev/null +++ b/deps/npm/test/tap/install-with-dev-dep-duplicate.js @@ -0,0 +1,57 @@ +var npm = require("../../") +var test = require("tap").test +var path = require("path") +var fs = require("fs") +var osenv = require("osenv") +var rimraf = require("rimraf") +var mr = require("npm-registry-mock") +var common = require("../common-tap.js") + +var pkg = path.resolve(__dirname, "dev-dep-duplicate") +var desiredResultsPath = path.resolve(pkg, "desired-ls-results.json") + +test("prefers version from dependencies over devDependencies", function (t) { + t.plan(1) + + mr(common.port, function (s) { + setup(function (err) { + if (err) return t.fail(err) + + npm.install(".", function (err) { + if (err) return t.fail(err) + + npm.commands.ls([], true, function (err, _, results) { + if (err) return t.fail(err) + + fs.readFile(desiredResultsPath, function (err, desired) { + if (err) return t.fail(err) + + t.deepEqual(results, JSON.parse(desired)) + s.close() + t.end() + }) + }) + }) + }) + }) +}) + +test("cleanup", function (t) { + cleanup() + t.end() +}) + + +function setup (cb) { + cleanup() + process.chdir(pkg) + + var opts = { cache: path.resolve(pkg, "cache"), registry: common.registry} + npm.load(opts, cb) +} + +function cleanup () { + 
process.chdir(osenv.tmpdir()) + rimraf.sync(path.resolve(pkg, "node_modules")) + rimraf.sync(path.resolve(pkg, "cache")) +} diff --git a/deps/npm/test/tap/invalid-cmd-exit-code.js b/deps/npm/test/tap/invalid-cmd-exit-code.js index 14db8669e9d..c9918e5a79d 100644 --- a/deps/npm/test/tap/invalid-cmd-exit-code.js +++ b/deps/npm/test/tap/invalid-cmd-exit-code.js @@ -1,10 +1,9 @@ var test = require("tap").test -var node = process.execPath var common = require("../common-tap.js") var opts = { cwd: process.cwd() } -test("npm asdf should return exit code 1", function(t) { +test("npm asdf should return exit code 1", function (t) { common.npm(["asdf"], opts, function (er, c) { if (er) throw er t.ok(c, "exit code should not be zero") @@ -12,7 +11,7 @@ test("npm asdf should return exit code 1", function(t) { }) }) -test("npm help should return exit code 0", function(t) { +test("npm help should return exit code 0", function (t) { common.npm(["help"], opts, function (er, c) { if (er) throw er t.equal(c, 0, "exit code should be 0") @@ -20,7 +19,7 @@ test("npm help fadf should return exit code 0", function(t) { +test("npm help fadf should return exit code 0", function (t) { common.npm(["help", "fadf"], opts, function (er, c) { if (er) throw er t.equal(c, 0, "exit code should be 0") diff --git a/deps/npm/test/tap/lifecycle-path.js b/deps/npm/test/tap/lifecycle-path.js new file mode 100644 index 00000000000..cf4bbdc99ce --- /dev/null +++ b/deps/npm/test/tap/lifecycle-path.js @@ -0,0 +1,59 @@ +var test = require("tap").test +var common = require("../common-tap.js") +var path = require("path") +var rimraf = require("rimraf") +var pkg = path.resolve(__dirname, "lifecycle-path") +var fs = require("fs") +var link = path.resolve(pkg, "node-bin") + +// Without the path to the shell, nothing works usually. +var PATH +if (process.platform === "win32") { + PATH = "C:\\Windows\\system32;C:\\Windows" +} else { + PATH = "/bin:/usr/bin" +} + +test("setup", function (t) { + rimraf.sync(link) + fs.symlinkSync(path.dirname(process.execPath), link, "dir") + t.end() +}) + +test("make sure the path is correct", function (t) { + common.npm(["run-script", "path"], { + cwd: pkg, + env: { + PATH: PATH + }, + stdio: [ 0, "pipe", 2 ] + }, function (er, code, stdout) { + if (er) throw er + t.equal(code, 0, "exit code") + // remove the banner, we just care about the last line + stdout = stdout.trim().split(/\r|\n/).pop() + var pathSplit = process.platform === "win32" ? 
";" : ":" + var root = path.resolve(__dirname, "../..") + var actual = stdout.split(pathSplit).map(function (p) { + if (p.indexOf(root) === 0) { + p = "{{ROOT}}" + p.substr(root.length) + } + return p.replace(/\\/g, "/") + }) + + // get the ones we tacked on, then the system-specific requirements + var expect = [ + "{{ROOT}}/bin/node-gyp-bin", + "{{ROOT}}/test/tap/lifecycle-path/node_modules/.bin" + ].concat(PATH.split(pathSplit).map(function (p) { + return p.replace(/\\/g, "/") + })) + t.same(actual, expect) + t.end() + }) +}) + +test("clean", function (t) { + rimraf.sync(link) + t.end() +}) diff --git a/deps/npm/test/tap/lifecycle-path/package.json b/deps/npm/test/tap/lifecycle-path/package.json new file mode 100644 index 00000000000..42e792e4676 --- /dev/null +++ b/deps/npm/test/tap/lifecycle-path/package.json @@ -0,0 +1 @@ +{"name":"glorb","version":"1.2.3","scripts":{"path":"./node-bin/node print-path.js"}} diff --git a/deps/npm/test/tap/lifecycle-path/print-path.js b/deps/npm/test/tap/lifecycle-path/print-path.js new file mode 100644 index 00000000000..c7ad00b3d39 --- /dev/null +++ b/deps/npm/test/tap/lifecycle-path/print-path.js @@ -0,0 +1 @@ +console.log(process.env.PATH) diff --git a/deps/npm/test/tap/lifecycle.js b/deps/npm/test/tap/lifecycle.js index 288329c2445..aa0efc52669 100644 --- a/deps/npm/test/tap/lifecycle.js +++ b/deps/npm/test/tap/lifecycle.js @@ -1,12 +1,12 @@ var test = require("tap").test -var npm = require('../../') -var lifecycle = require('../../lib/utils/lifecycle') +var npm = require("../../") +var lifecycle = require("../../lib/utils/lifecycle") test("lifecycle: make env correctly", function (t) { - npm.load({enteente: Infinity}, function() { + npm.load({enteente: Infinity}, function () { var env = lifecycle.makeEnv({}, null, process.env) - t.equal('Infinity', env.npm_config_enteente) + t.equal("Infinity", env.npm_config_enteente) t.end() }) }) diff --git a/deps/npm/test/tap/locker.js b/deps/npm/test/tap/locker.js new file mode 100644 index 00000000000..bc43c30e95e --- /dev/null +++ b/deps/npm/test/tap/locker.js @@ -0,0 +1,89 @@ +var test = require("tap").test + , path = require("path") + , fs = require("graceful-fs") + , crypto = require("crypto") + , rimraf = require("rimraf") + , osenv = require("osenv") + , mkdirp = require("mkdirp") + , npm = require("../../") + , locker = require("../../lib/utils/locker.js") + , lock = locker.lock + , unlock = locker.unlock + +var pkg = path.join(__dirname, "/locker") + , cache = path.join(pkg, "/cache") + , tmp = path.join(pkg, "/tmp") + , nm = path.join(pkg, "/node_modules") + +function cleanup () { + process.chdir(osenv.tmpdir()) + rimraf.sync(pkg) +} + +test("setup", function (t) { + cleanup() + mkdirp.sync(cache) + mkdirp.sync(tmp) + t.end() +}) + +test("locking file puts lock in correct place", function (t) { + npm.load({cache: cache, tmpdir: tmp}, function (er) { + t.ifError(er, "npm bootstrapped OK") + + var n = "correct" + , c = n.replace(/[^a-zA-Z0-9]+/g, "-").replace(/^-+|-+$/g, "") + , p = path.resolve(nm, n) + , h = crypto.createHash("sha1").update(p).digest("hex") + , l = c.substr(0, 24)+"-"+h.substr(0, 16)+".lock" + , v = path.join(cache, "_locks", l) + + lock(nm, n, function (er) { + t.ifError(er, "locked path") + + fs.exists(v, function (found) { + t.ok(found, "lock found OK") + + unlock(nm, n, function (er) { + t.ifError(er, "unlocked path") + + fs.exists(v, function (found) { + t.notOk(found, "lock deleted OK") + t.end() + }) + }) + }) + }) + }) +}) + +test("unlocking out of order errors out", 
function (t) { + npm.load({cache: cache, tmpdir: tmp}, function (er) { + t.ifError(er, "npm bootstrapped OK") + + var n = "busted" + , c = n.replace(/[^a-zA-Z0-9]+/g, "-").replace(/^-+|-+$/g, "") + , p = path.resolve(nm, n) + , h = crypto.createHash("sha1").update(p).digest("hex") + , l = c.substr(0, 24)+"-"+h.substr(0, 16)+".lock" + , v = path.join(cache, "_locks", l) + + fs.exists(v, function (found) { + t.notOk(found, "no lock to unlock") + + t.throws(function () { + unlock(nm, n, function () { + t.fail("shouldn't get here") + t.end() + }) + }, "blew up as expected") + + t.end() + }) + }) +}) + +test("cleanup", function (t) { + cleanup() + t.end() +}) diff --git a/deps/npm/test/tap/login-always-auth.js b/deps/npm/test/tap/login-always-auth.js new file mode 100644 index 00000000000..8311096c2e1 --- /dev/null +++ b/deps/npm/test/tap/login-always-auth.js @@ -0,0 +1,142 @@ +var fs = require("fs") +var path = require("path") +var rimraf = require("rimraf") +var mr = require("npm-registry-mock") + +var test = require("tap").test +var common = require("../common-tap.js") + +var opts = {cwd : __dirname} +var outfile = path.resolve(__dirname, "_npmrc") +var responses = { + "Username" : "u\n", + "Password" : "p\n", + "Email" : "u@p.me\n" +} + +function mocks(server) { + server.filteringRequestBody(function (r) { + if (r.match(/\"_id\":\"org\.couchdb\.user:u\"/)) { + return "auth" + } + }) + server.put("/-/user/org.couchdb.user:u", "auth") + .reply(201, {username : "u", password : "p", email : "u@p.me"}) +} + +test("npm login", function (t) { + mr({port : common.port, mocks : mocks}, function (s) { + var runner = common.npm( + [ + "login", + "--registry", common.registry, + "--loglevel", "silent", + "--userconfig", outfile + ], + opts, + function (err, code) { + t.notOk(code, "exited OK") + t.notOk(err, "no error output") + var config = fs.readFileSync(outfile, "utf8") + t.like(config, /:always-auth=false/, "always-auth is scoped and false (by default)") + s.close() + rimraf(outfile, function (err) { + t.ifError(err, "removed config file OK") + t.end() + }) + }) + + var o = "", e = "", remaining = Object.keys(responses).length + runner.stdout.on("data", function (chunk) { + remaining-- + o += chunk + + var label = chunk.toString("utf8").split(":")[0] + runner.stdin.write(responses[label]) + + if (remaining === 0) runner.stdin.end() + }) + runner.stderr.on("data", function (chunk) { e += chunk }) + }) +}) + +test("npm login --always-auth", function (t) { + mr({port : common.port, mocks : mocks}, function (s) { + var runner = common.npm( + [ + "login", + "--registry", common.registry, + "--loglevel", "silent", + "--userconfig", outfile, + "--always-auth" + ], + opts, + function (err, code) { + t.notOk(code, "exited OK") + t.notOk(err, "no error output") + var config = fs.readFileSync(outfile, "utf8") + t.like(config, /:always-auth=true/, "always-auth is scoped and true") + s.close() + rimraf(outfile, function (err) { + t.ifError(err, "removed config file OK") + t.end() + }) + }) + + var o = "", e = "", remaining = Object.keys(responses).length + runner.stdout.on("data", function (chunk) { + remaining-- + o += chunk + + var label = chunk.toString("utf8").split(":")[0] + runner.stdin.write(responses[label]) + + if (remaining === 0) runner.stdin.end() + }) + runner.stderr.on("data", function (chunk) { e += chunk }) + }) +}) + +test("npm login --no-always-auth", function (t) { + mr({port : common.port, mocks : mocks}, function (s) { + var runner = common.npm( + [ + "login", + "--registry", 
common.registry, + "--loglevel", "silent", + "--userconfig", outfile, + "--no-always-auth" + ], + opts, + function (err, code) { + t.notOk(code, "exited OK") + t.notOk(err, "no error output") + var config = fs.readFileSync(outfile, "utf8") + t.like(config, /:always-auth=false/, "always-auth is scoped and false") + s.close() + rimraf(outfile, function (err) { + t.ifError(err, "removed config file OK") + t.end() + }) + }) + + var o = "", e = "", remaining = Object.keys(responses).length + runner.stdout.on("data", function (chunk) { + remaining-- + o += chunk + + var label = chunk.toString("utf8").split(":")[0] + runner.stdin.write(responses[label]) + + if (remaining === 0) runner.stdin.end() + }) + runner.stderr.on("data", function (chunk) { e += chunk }) + }) +}) + + +test("cleanup", function (t) { + rimraf.sync(outfile) + t.pass("cleaned up") + t.end() +}) diff --git a/deps/npm/test/tap/ls-depth-cli.js b/deps/npm/test/tap/ls-depth-cli.js index fcbc4364fad..89c4cc35473 100644 --- a/deps/npm/test/tap/ls-depth-cli.js +++ b/deps/npm/test/tap/ls-depth-cli.js @@ -1,31 +1,27 @@ -var common = require('../common-tap') - , test = require('tap').test - , path = require('path') - , rimraf = require('rimraf') - , osenv = require('osenv') - , mkdirp = require('mkdirp') - , pkg = __dirname + '/ls-depth' - , cache = pkg + '/cache' - , tmp = pkg + '/tmp' - , node = process.execPath - , npm = path.resolve(__dirname, '../../cli.js') - , mr = require('npm-registry-mock') +var common = require("../common-tap") + , test = require("tap").test + , path = require("path") + , rimraf = require("rimraf") + , osenv = require("osenv") + , mkdirp = require("mkdirp") + , pkg = path.resolve(__dirname, "ls-depth") + , mr = require("npm-registry-mock") , opts = {cwd: pkg} function cleanup () { process.chdir(osenv.tmpdir()) - rimraf.sync(pkg + '/cache') - rimraf.sync(pkg + '/tmp') - rimraf.sync(pkg + '/node_modules') + rimraf.sync(pkg + "/cache") + rimraf.sync(pkg + "/tmp") + rimraf.sync(pkg + "/node_modules") } -test('setup', function (t) { +test("setup", function (t) { cleanup() - mkdirp.sync(pkg + '/cache') - mkdirp.sync(pkg + '/tmp') + mkdirp.sync(pkg + "/cache") + mkdirp.sync(pkg + "/tmp") mr(common.port, function (s) { - var cmd = ['install', '--registry=' + common.registry] + var cmd = ["install", "--registry=" + common.registry] common.npm(cmd, opts, function (er, c) { if (er) throw er t.equal(c, 0) @@ -35,8 +31,8 @@ test('setup', function (t) { }) }) -test('npm ls --depth=0', function (t) { - common.npm(['ls', '--depth=0'], opts, function (er, c, out) { +test("npm ls --depth=0", function (t) { + common.npm(["ls", "--depth=0"], opts, function (er, c, out) { if (er) throw er t.equal(c, 0) t.has(out, /test-package-with-one-dep@0\.0\.0/ @@ -47,8 +43,8 @@ test('npm ls --depth=0', function (t) { }) }) -test('npm ls --depth=1', function (t) { - common.npm(['ls', '--depth=1'], opts, function (er, c, out) { +test("npm ls --depth=1", function (t) { + common.npm(["ls", "--depth=1"], opts, function (er, c, out) { if (er) throw er t.equal(c, 0) t.has(out, /test-package-with-one-dep@0\.0\.0/ @@ -59,10 +55,10 @@ test('npm ls --depth=1', function (t) { }) }) -test('npm ls --depth=Infinity', function (t) { +test("npm ls --depth=Infinity", function (t) { // travis has a preconfigured depth=0, in general we can not depend // on the default value in all environments, so explicitly set it here - common.npm(['ls', '--depth=Infinity'], opts, function (er, c, out) { + common.npm(["ls", "--depth=Infinity"], opts, function (er, c, out) { if 
(er) throw er t.equal(c, 0) t.has(out, /test-package-with-one-dep@0\.0\.0/ @@ -73,7 +69,7 @@ test('npm ls --depth=Infinity', function (t) { }) }) -test('cleanup', function (t) { +test("cleanup", function (t) { cleanup() t.end() }) diff --git a/deps/npm/test/tap/ls-depth-unmet.js b/deps/npm/test/tap/ls-depth-unmet.js index 1ac85efc94b..37a0cb5fea7 100644 --- a/deps/npm/test/tap/ls-depth-unmet.js +++ b/deps/npm/test/tap/ls-depth-unmet.js @@ -1,31 +1,30 @@ -var common = require('../common-tap') - , test = require('tap').test - , path = require('path') - , rimraf = require('rimraf') - , osenv = require('osenv') - , mkdirp = require('mkdirp') - , pkg = __dirname + '/ls-depth-unmet' - , cache = pkg + '/cache' - , tmp = pkg + '/tmp' - , node = process.execPath - , npm = path.resolve(__dirname, '../../cli.js') - , mr = require('npm-registry-mock') +var common = require("../common-tap") + , test = require("tap").test + , path = require("path") + , rimraf = require("rimraf") + , osenv = require("osenv") + , mkdirp = require("mkdirp") + , pkg = path.resolve(__dirname, "ls-depth-unmet") + , mr = require("npm-registry-mock") , opts = {cwd: pkg} + , cache = path.resolve(pkg, "cache") + , tmp = path.resolve(pkg, "tmp") + , nodeModules = path.resolve(pkg, "node_modules") function cleanup () { process.chdir(osenv.tmpdir()) - rimraf.sync(pkg + '/cache') - rimraf.sync(pkg + '/tmp') - rimraf.sync(pkg + '/node_modules') + rimraf.sync(cache) + rimraf.sync(tmp) + rimraf.sync(nodeModules) } -test('setup', function (t) { +test("setup", function (t) { cleanup() - mkdirp.sync(pkg + '/cache') - mkdirp.sync(pkg + '/tmp') + mkdirp.sync(cache) + mkdirp.sync(tmp) mr(common.port, function (s) { - var cmd = ['install', 'underscore@1.3.1', 'mkdirp', 'test-package-with-one-dep', '--registry=' + common.registry] + var cmd = ["install", "underscore@1.3.1", "mkdirp", "test-package-with-one-dep", "--registry=" + common.registry] common.npm(cmd, opts, function (er, c) { if (er) throw er t.equal(c, 0) @@ -35,8 +34,8 @@ test('setup', function (t) { }) }) -test('npm ls --depth=0', function (t) { - common.npm(['ls', '--depth=0'], opts, function (er, c, out) { +test("npm ls --depth=0", function (t) { + common.npm(["ls", "--depth=0"], opts, function (er, c, out) { if (er) throw er t.equal(c, 1) t.has(out, /UNMET DEPENDENCY optimist@0\.6\.0/ @@ -53,8 +52,8 @@ test('npm ls --depth=0', function (t) { }) }) -test('npm ls --depth=1', function (t) { - common.npm(['ls', '--depth=1'], opts, function (er, c, out) { +test("npm ls --depth=1", function (t) { + common.npm(["ls", "--depth=1"], opts, function (er, c, out) { if (er) throw er t.equal(c, 1) t.has(out, /UNMET DEPENDENCY optimist@0\.6\.0/ @@ -71,10 +70,10 @@ test('npm ls --depth=1', function (t) { }) }) -test('npm ls --depth=Infinity', function (t) { +test("npm ls --depth=Infinity", function (t) { // travis has a preconfigured depth=0, in general we can not depend // on the default value in all environments, so explicitly set it here - common.npm(['ls', '--depth=Infinity'], opts, function (er, c, out) { + common.npm(["ls", "--depth=Infinity"], opts, function (er, c, out) { if (er) throw er t.equal(c, 1) t.has(out, /UNMET DEPENDENCY optimist@0\.6\.0/ @@ -91,7 +90,7 @@ test('npm ls --depth=Infinity', function (t) { }) }) -test('cleanup', function (t) { +test("cleanup", function (t) { cleanup() t.end() }) diff --git a/deps/npm/test/tap/ls-no-results.js b/deps/npm/test/tap/ls-no-results.js index 9792774c69b..10f3ce00145 100644 --- a/deps/npm/test/tap/ls-no-results.js +++ 
b/deps/npm/test/tap/ls-no-results.js @@ -1,11 +1,11 @@ -var test = require('tap').test -var spawn = require('child_process').spawn +var test = require("tap").test +var spawn = require("child_process").spawn var node = process.execPath -var npm = require.resolve('../../') -var args = [ npm, 'ls', 'ceci n’est pas une package' ] -test('ls exits non-zero when nothing found', function (t) { +var npm = require.resolve("../../") +var args = [ npm, "ls", "ceci n’est pas une package" ] +test("ls exits non-zero when nothing found", function (t) { var child = spawn(node, args) - child.on('exit', function (code) { + child.on("exit", function (code) { t.notEqual(code, 0) t.end() }) diff --git a/deps/npm/test/tap/maybe-github.js b/deps/npm/test/tap/maybe-github.js index 8b7105e6ea1..52a62e11bb2 100644 --- a/deps/npm/test/tap/maybe-github.js +++ b/deps/npm/test/tap/maybe-github.js @@ -4,15 +4,15 @@ var npm = require("../../lib/npm.js") // this is the narrowest way to replace a function in the module cache var found = true -var remoteGitPath = require.resolve('../../lib/cache/add-remote-git.js') +var remoteGitPath = require.resolve("../../lib/cache/add-remote-git.js") require("module")._cache[remoteGitPath] = { id: remoteGitPath, - exports: function stub(_, error, __, cb) { + exports: function stub(_, __, cb) { if (found) { cb(null, {}) } else { - cb(error) + cb(new Error("not on filesystem")) } } } @@ -24,23 +24,19 @@ test("should throw with no parameters", function (t) { t.plan(1) t.throws(function () { - maybeGithub(); + maybeGithub() }, "throws when called without parameters") }) test("should throw with wrong parameter types", function (t) { - t.plan(3) + t.plan(2) t.throws(function () { - maybeGithub({}, new Error(), function () {}) + maybeGithub({}, function () {}) }, "expects only a package name") t.throws(function () { - maybeGithub("npm/xxx-noexist", null, function () {}) - }, "expects to be called after a previous check already failed") - - t.throws(function () { - maybeGithub("npm/xxx-noexist", new Error(), "ham") + maybeGithub("npm/xxx-noexist", "ham") }, "is always async") }) @@ -49,7 +45,7 @@ test("should find an existing package on Github", function (t) { npm.load({}, function (error) { t.notOk(error, "bootstrapping succeeds") t.doesNotThrow(function () { - maybeGithub("npm/npm", new Error("not on filesystem"), function (error, data) { + maybeGithub("npm/npm", function (error, data) { t.notOk(error, "no issues in looking things up") t.ok(data, "received metadata from Github") t.end() @@ -62,7 +58,7 @@ test("shouldn't find a nonexistent package on Github", function (t) { found = false npm.load({}, function () { t.doesNotThrow(function () { - maybeGithub("npm/xxx-noexist", new Error("not on filesystem"), function (error, data) { + maybeGithub("npm/xxx-noexist", function (error, data) { t.equal( error.message, "not on filesystem", diff --git a/deps/npm/test/tap/nested-extraneous.js b/deps/npm/test/tap/nested-extraneous.js new file mode 100644 index 00000000000..fcba0418e68 --- /dev/null +++ b/deps/npm/test/tap/nested-extraneous.js @@ -0,0 +1,53 @@ +var common = require("../common-tap.js") +var test = require("tap").test +var mkdirp = require("mkdirp") +var fs = require("fs") +var rimraf = require("rimraf") +var path = require("path") + +var pkg = path.resolve(__dirname, "nested-extraneous") +var pj = { + name: "nested-extraneous", + version: "1.2.3" +} + +var dep = path.resolve(pkg, "node_modules", "dep") +var deppj = { + name: "nested-extraneous-dep", + version: "1.2.3", + dependencies: { + 
"nested-extra-depdep": "*" + } +} + +var depdep = path.resolve(dep, "node_modules", "depdep") +var depdeppj = { + name: "nested-extra-depdep", + version: "1.2.3" +} + +test("setup", function (t) { + rimraf.sync(pkg) + mkdirp.sync(depdep) + fs.writeFileSync(path.resolve(pkg, "package.json"), JSON.stringify(pj)) + fs.writeFileSync(path.resolve(dep, "package.json"), JSON.stringify(deppj)) + fs.writeFileSync(path.resolve(depdep, "package.json"), JSON.stringify(depdeppj)) + t.end() +}) + +test("test", function (t) { + common.npm(["ls"], { + cwd: pkg + }, function (er, code, sto, ste) { + if (er) throw er + t.notEqual(code, 0) + t.notSimilar(ste, /depdep/) + t.notSimilar(sto, /depdep/) + t.end() + }) +}) + +test("clean", function (t) { + rimraf.sync(pkg) + t.end() +}) diff --git a/deps/npm/test/tap/noargs-install-config-save.js b/deps/npm/test/tap/noargs-install-config-save.js index c821000c2f3..328da7d17bd 100644 --- a/deps/npm/test/tap/noargs-install-config-save.js +++ b/deps/npm/test/tap/noargs-install-config-save.js @@ -11,8 +11,8 @@ var mr = require("npm-registry-mock") var spawn = require("child_process").spawn var node = process.execPath -var pkg = process.env.npm_config_tmp || "/tmp" -pkg += path.sep + "noargs-install-config-save" +var pkg = path.resolve(process.env.npm_config_tmp || "/tmp", + "noargs-install-config-save") function writePackageJson() { rimraf.sync(pkg) @@ -26,14 +26,14 @@ function writePackageJson() { "devDependencies": { "underscore": "1.3.1" } - }), 'utf8') + }), "utf8") } function createChild (args) { var env = { - npm_config_save: true, - npm_config_registry: common.registry, - npm_config_cache: pkg + "/cache", + "npm_config_save": true, + "npm_config_registry": common.registry, + "npm_config_cache": pkg + "/cache", HOME: process.env.HOME, Path: process.env.PATH, PATH: process.env.PATH @@ -56,7 +56,7 @@ test("does not update the package.json with empty arguments", function (t) { var child = createChild([npm, "install"]) child.on("close", function () { var text = JSON.stringify(fs.readFileSync(pkg + "/package.json", "utf8")) - t.ok(text.indexOf('"dependencies') === -1) + t.ok(text.indexOf("\"dependencies") === -1) s.close() t.end() }) @@ -71,7 +71,7 @@ test("updates the package.json (adds dependencies) with an argument", function ( var child = createChild([npm, "install", "underscore"]) child.on("close", function () { var text = JSON.stringify(fs.readFileSync(pkg + "/package.json", "utf8")) - t.ok(text.indexOf('"dependencies') !== -1) + t.ok(text.indexOf("\"dependencies") !== -1) s.close() t.end() }) diff --git a/deps/npm/test/tap/npm-api-not-loaded-error.js b/deps/npm/test/tap/npm-api-not-loaded-error.js index 3fd07311039..afedfbcd076 100644 --- a/deps/npm/test/tap/npm-api-not-loaded-error.js +++ b/deps/npm/test/tap/npm-api-not-loaded-error.js @@ -21,7 +21,7 @@ test("calling set/get on config pre-load should throw", function (t) { t.ok(threw, "get before load should throw") } - var threw = true + threw = true try { npm.config.set("foo", "bar") threw = false diff --git a/deps/npm/test/tap/optional-metadep-rollback-collision.js b/deps/npm/test/tap/optional-metadep-rollback-collision.js new file mode 100644 index 00000000000..f8645840bdd --- /dev/null +++ b/deps/npm/test/tap/optional-metadep-rollback-collision.js @@ -0,0 +1,56 @@ +var test = require("tap").test +var rimraf = require("rimraf") +var common = require("../common-tap.js") +var path = require("path") +var fs = require("fs") + +var pkg = path.resolve(__dirname, "optional-metadep-rollback-collision") +var nm = 
path.resolve(pkg, "node_modules") +var cache = path.resolve(pkg, "cache") +var pidfile = path.resolve(pkg, "child.pid") + +test("setup", function (t) { + cleanup() + t.end() +}) + +test("go go test racer", function (t) { + common.npm(["install", "--prefix=" + pkg, "--fetch-retries=0", "--cache=" + cache], { + cwd: pkg, + env: { + PATH: process.env.PATH, + Path: process.env.Path, + "npm_config_loglevel": "silent" + }, + stdio: [ 0, "pipe", 2 ] + }, function (er, code, sout) { + if (er) throw er + t.notOk(code, "npm install exited with code 0") + t.equal(sout, "ok\nok\n") + t.notOk(/not ok/.test(sout), "should not contain the string 'not ok'") + t.end() + }) +}) + +test("verify results", function (t) { + t.throws(function () { + fs.statSync(nm) + }) + t.end() +}) + +test("cleanup", function (t) { + cleanup() + t.end() +}) + +function cleanup () { + try { + var pid = +fs.readFileSync(pidfile) + process.kill(pid, "SIGKILL") + } catch (er) {} + + rimraf.sync(cache) + rimraf.sync(nm) + rimraf.sync(pidfile) +} diff --git a/deps/npm/test/tap/optional-metadep-rollback-collision/deps/d1/package.json b/deps/npm/test/tap/optional-metadep-rollback-collision/deps/d1/package.json new file mode 100644 index 00000000000..26cd1dea32a --- /dev/null +++ b/deps/npm/test/tap/optional-metadep-rollback-collision/deps/d1/package.json @@ -0,0 +1,13 @@ +{ + "name": "d1", + "version": "1.0.0", + "description": "I FAIL CONSTANTLY", + "scripts": { + "preinstall": "sleep 1" + }, + "dependencies": { + "foo": "http://localhost:8080/" + }, + "author": "Forrest L Norvell <ogd@aoaioxxysz.net>", + "license": "ISC" +} diff --git a/deps/npm/test/tap/optional-metadep-rollback-collision/deps/d2/blart.js b/deps/npm/test/tap/optional-metadep-rollback-collision/deps/d2/blart.js new file mode 100644 index 00000000000..c69b8a5d084 --- /dev/null +++ b/deps/npm/test/tap/optional-metadep-rollback-collision/deps/d2/blart.js @@ -0,0 +1,52 @@ +var rando = require("crypto").randomBytes +var resolve = require("path").resolve + +var mkdirp = require("mkdirp") +var rimraf = require("rimraf") +var writeFile = require("graceful-fs").writeFile + +var BASEDIR = resolve(__dirname, "arena") + +var keepItGoingLouder = {} +var writers = 0 +var errors = 0 + +function gensym() { return rando(16).toString("hex") } + +function writeAlmostForever(filename) { + if (!keepItGoingLouder[filename]) { + writers-- + if (writers < 1) return done() + } + else { + writeFile(filename, keepItGoingLouder[filename], function (err) { + if (err) errors++ + + writeAlmostForever(filename) + }) + } +} + +function done() { + rimraf(BASEDIR, function () { + if (errors > 0) console.log("not ok - %d errors", errors) + else console.log("ok") + }) +} + +mkdirp(BASEDIR, function go() { + for (var i = 0; i < 16; i++) { + var filename = resolve(BASEDIR, gensym() + ".txt") + + keepItGoingLouder[filename] = "" + for (var j = 0; j < 512; j++) keepItGoingLouder[filename] += filename + + writers++ + writeAlmostForever(filename) + } + + setTimeout(function viktor() { + // kill all the writers + keepItGoingLouder = {} + }, 3 * 1000) +}) diff --git a/deps/npm/test/tap/optional-metadep-rollback-collision/deps/d2/package.json b/deps/npm/test/tap/optional-metadep-rollback-collision/deps/d2/package.json new file mode 100644 index 00000000000..08eeba4f7ee --- /dev/null +++ b/deps/npm/test/tap/optional-metadep-rollback-collision/deps/d2/package.json @@ -0,0 +1,15 @@ +{ + "name": "d2", + "version": "1.0.0", + "description": "how do you *really* know you exist?", + "scripts": { + "postinstall": "node 
blart.js" + }, + "dependencies": { + "graceful-fs": "^3.0.2", + "mkdirp": "^0.5.0", + "rimraf": "^2.2.8" + }, + "author": "Forrest L Norvell <ogd@aoaioxxysz.net>", + "license": "ISC" +} diff --git a/deps/npm/test/tap/optional-metadep-rollback-collision/deps/opdep/bad-server.js b/deps/npm/test/tap/optional-metadep-rollback-collision/deps/opdep/bad-server.js new file mode 100644 index 00000000000..4818884c496 --- /dev/null +++ b/deps/npm/test/tap/optional-metadep-rollback-collision/deps/opdep/bad-server.js @@ -0,0 +1,35 @@ +var createServer = require("http").createServer +var spawn = require("child_process").spawn +var fs = require("fs") +var path = require("path") +var pidfile = path.resolve(__dirname, "..", "..", "child.pid") + +if (process.argv[2]) { + console.log("ok") + createServer(function (req, res) { + setTimeout(function () { + res.writeHead(404) + res.end() + }, 1000) + this.close() + }).listen(8080) +} +else { + var child = spawn( + process.execPath, + [__filename, "whatever"], + { + stdio: [0, 1, 2], + detached: true + } + ) + child.unref() + + // kill any prior children, if existing. + try { + var pid = +fs.readFileSync(pidfile) + process.kill(pid, "SIGKILL") + } catch (er) {} + + fs.writeFileSync(pidfile, child.pid + "\n") +} diff --git a/deps/npm/test/tap/optional-metadep-rollback-collision/deps/opdep/package.json b/deps/npm/test/tap/optional-metadep-rollback-collision/deps/opdep/package.json new file mode 100644 index 00000000000..3289c123e82 --- /dev/null +++ b/deps/npm/test/tap/optional-metadep-rollback-collision/deps/opdep/package.json @@ -0,0 +1,15 @@ +{ + "name": "opdep", + "version": "1.0.0", + "description": "To explode, of course!", + "main": "index.js", + "scripts": { + "preinstall": "node bad-server.js" + }, + "dependencies": { + "d1": "file:../d1", + "d2": "file:../d2" + }, + "author": "Forrest L Norvell <ogd@aoaioxxysz.net>", + "license": "ISC" +} diff --git a/deps/npm/test/tap/optional-metadep-rollback-collision/package.json b/deps/npm/test/tap/optional-metadep-rollback-collision/package.json new file mode 100644 index 00000000000..0d812a6e002 --- /dev/null +++ b/deps/npm/test/tap/optional-metadep-rollback-collision/package.json @@ -0,0 +1,10 @@ +{ + "name": "optional-metadep-rollback-collision", + "version": "1.0.0", + "description": "let's just see about that race condition", + "optionalDependencies": { + "opdep": "file:./deps/opdep" + }, + "author": "Forrest L Norvell <ogd@aoaioxxysz.net>", + "license": "ISC" +} diff --git a/deps/npm/test/tap/outdated-color.js b/deps/npm/test/tap/outdated-color.js index f20bcea93cf..4e90277fe28 100644 --- a/deps/npm/test/tap/outdated-color.js +++ b/deps/npm/test/tap/outdated-color.js @@ -1,15 +1,17 @@ var common = require("../common-tap.js") var test = require("tap").test -var npm = require("../../") var mkdirp = require("mkdirp") var rimraf = require("rimraf") var mr = require("npm-registry-mock") -var exec = require('child_process').exec -var mr = require("npm-registry-mock") +var path = require("path") + +var pkg = path.resolve(__dirname, "outdated") +var cache = path.resolve(pkg, "cache") +mkdirp.sync(cache) -var pkg = __dirname + '/outdated' -var NPM_BIN = __dirname + '/../../bin/npm-cli.js' -mkdirp.sync(pkg + "/cache") +var EXEC_OPTS = { + cwd: pkg +} function hasControlCodes(str) { return str.length !== ansiTrim(str).length @@ -17,20 +19,26 @@ function hasControlCodes(str) { function ansiTrim (str) { var r = new RegExp("\x1b(?:\\[(?:\\d+[ABCDEFGJKSTm]|\\d+;\\d+[Hfm]|" + - "\\d+;\\d+;\\d+m|6n|s|u|\\?25[lh])|\\w)", "g"); 
+ "\\d+;\\d+;\\d+m|6n|s|u|\\?25[lh])|\\w)", "g") return str.replace(r, "") } // note hard to automate tests for color = true // as npm kills the color config when it detects -// it's not running in a tty +// it"s not running in a tty test("does not use ansi styling", function (t) { - t.plan(3) + t.plan(4) mr(common.port, function (s) { // create mock registry. - exec('node ' + NPM_BIN + ' outdated --registry ' + common.registry + ' --color false underscore', { - cwd: pkg - }, function(err, stdout) { + common.npm( + [ + "outdated", + "--registry", common.registry, + "underscore" + ], + EXEC_OPTS, + function (err, code, stdout) { t.ifError(err) + t.notOk(code, "npm outdated exited with code 0") t.ok(stdout, stdout.length) t.ok(!hasControlCodes(stdout)) s.close() @@ -39,6 +47,6 @@ test("does not use ansi styling", function (t) { }) test("cleanup", function (t) { - rimraf.sync(pkg + "/cache") + rimraf.sync(cache) t.end() }) diff --git a/deps/npm/test/tap/outdated-depth.js b/deps/npm/test/tap/outdated-depth.js index 7ccc0c62547..112f363aa50 100644 --- a/deps/npm/test/tap/outdated-depth.js +++ b/deps/npm/test/tap/outdated-depth.js @@ -1,37 +1,39 @@ -var common = require('../common-tap') - , path = require('path') - , test = require('tap').test - , rimraf = require('rimraf') - , npm = require('../../') - , mr = require('npm-registry-mock') - , pkg = path.resolve(__dirname, 'outdated-depth') +var common = require("../common-tap") + , path = require("path") + , test = require("tap").test + , rimraf = require("rimraf") + , npm = require("../../") + , mr = require("npm-registry-mock") + , pkg = path.resolve(__dirname, "outdated-depth") + , cache = path.resolve(pkg, "cache") + , nodeModules = path.resolve(pkg, "node_modules") function cleanup () { - rimraf.sync(pkg + '/node_modules') - rimraf.sync(pkg + '/cache') + rimraf.sync(nodeModules) + rimraf.sync(cache) } -test('outdated depth zero', function (t) { +test("outdated depth zero", function (t) { var expected = [ pkg, - 'underscore', - '1.3.1', - '1.3.1', - '1.5.1', - '1.3.1' + "underscore", + "1.3.1", + "1.3.1", + "1.5.1", + "1.3.1" ] process.chdir(pkg) mr(common.port, function (s) { npm.load({ - cache: pkg + '/cache' - , loglevel: 'silent' + cache: cache + , loglevel: "silent" , registry: common.registry , depth: 0 } , function () { - npm.install('.', function (er) { + npm.install(".", function (er) { if (er) throw new Error(er) npm.outdated(function (err, d) { if (err) throw new Error(err) diff --git a/deps/npm/test/tap/outdated-git.js b/deps/npm/test/tap/outdated-git.js index 11c20402270..cc89ff19d98 100644 --- a/deps/npm/test/tap/outdated-git.js +++ b/deps/npm/test/tap/outdated-git.js @@ -3,28 +3,29 @@ var test = require("tap").test var npm = require("../../") var mkdirp = require("mkdirp") var rimraf = require("rimraf") -var mr = require("npm-registry-mock") +var path = require("path") // config -var pkg = __dirname + "/outdated-git" -mkdirp.sync(pkg + "/cache") +var pkg = path.resolve(__dirname, "outdated-git") +var cache = path.resolve(pkg, "cache") +mkdirp.sync(cache) test("dicovers new versions in outdated", function (t) { process.chdir(pkg) t.plan(5) - npm.load({cache: pkg + "/cache", registry: common.registry}, function () { + npm.load({cache: cache, registry: common.registry}, function () { npm.commands.outdated([], function (er, d) { - t.equal('git', d[0][3]) - t.equal('git', d[0][4]) - t.equal('git://github.com/robertkowalski/foo-private.git', d[0][5]) - t.equal('git://user:pass@github.com/robertkowalski/foo-private.git', d[1][5]) - 
t.equal('git+https://github.com/robertkowalski/foo', d[2][5]) + t.equal("git", d[0][3]) + t.equal("git", d[0][4]) + t.equal("git://github.com/robertkowalski/foo-private.git", d[0][5]) + t.equal("git://user:pass@github.com/robertkowalski/foo-private.git", d[1][5]) + t.equal("git+https://github.com/robertkowalski/foo", d[2][5]) }) }) }) test("cleanup", function (t) { - rimraf.sync(pkg + "/cache") + rimraf.sync(cache) t.end() }) diff --git a/deps/npm/test/tap/outdated-include-devdependencies.js b/deps/npm/test/tap/outdated-include-devdependencies.js index b78948a24e2..bdb656ff18d 100644 --- a/deps/npm/test/tap/outdated-include-devdependencies.js +++ b/deps/npm/test/tap/outdated-include-devdependencies.js @@ -4,15 +4,17 @@ var npm = require("../../") var mkdirp = require("mkdirp") var rimraf = require("rimraf") var mr = require("npm-registry-mock") +var path = require("path") // config -var pkg = __dirname + '/outdated-include-devdependencies' -mkdirp.sync(pkg + "/cache") +var pkg = path.resolve(__dirname, "outdated-include-devdependencies") +var cache = path.resolve(pkg, "cache") +mkdirp.sync(cache) test("includes devDependencies in outdated", function (t) { process.chdir(pkg) mr(common.port, function (s) { - npm.load({cache: pkg + "/cache", registry: common.registry}, function () { + npm.load({cache: cache, registry: common.registry}, function () { npm.outdated(function (er, d) { t.equal("1.5.1", d[0][3]) s.close() @@ -23,6 +25,6 @@ test("includes devDependencies in outdated", function (t) { }) test("cleanup", function (t) { - rimraf.sync(pkg + "/cache") + rimraf.sync(cache) t.end() }) diff --git a/deps/npm/test/tap/outdated-json.js b/deps/npm/test/tap/outdated-json.js index 7c19561ee3e..b874f28e695 100644 --- a/deps/npm/test/tap/outdated-json.js +++ b/deps/npm/test/tap/outdated-json.js @@ -5,29 +5,29 @@ var common = require("../common-tap.js") , mr = require("npm-registry-mock") , path = require("path") , osenv = require("osenv") - , spawn = require('child_process').spawn + , spawn = require("child_process").spawn , node = process.execPath - , npmc = require.resolve('../../') - , pkg = path.resolve(__dirname, 'outdated-new-versions') + , npmc = require.resolve("../../") + , pkg = path.resolve(__dirname, "outdated-new-versions") , args = [ npmc - , 'outdated' - , '--json' - , '--silent' - , '--registry=' + common.registry - , '--cache=' + pkg + '/cache' + , "outdated" + , "--json" + , "--silent" + , "--registry=" + common.registry + , "--cache=" + pkg + "/cache" ] var expected = { underscore: - { current: '1.3.3' - , wanted: '1.3.3' - , latest: '1.5.1' - , location: 'node_modules' + path.sep + 'underscore' + { current: "1.3.3" + , wanted: "1.3.3" + , latest: "1.5.1" + , location: "node_modules" + path.sep + "underscore" } , request: - { current: '0.9.5' - , wanted: '0.9.5' - , latest: '2.27.0' - , location: 'node_modules' + path.sep + 'request' + { current: "0.9.5" + , wanted: "0.9.5" + , latest: "2.27.0" + , location: "node_modules" + path.sep + "request" } } @@ -38,18 +38,19 @@ test("it should log json data", function (t) { mr(common.port, function (s) { npm.load({ cache: pkg + "/cache", - loglevel: 'silent', + loglevel: "silent", registry: common.registry } , function () { npm.install(".", function (err) { + t.ifError(err, "error should not exist") var child = spawn(node, args) - , out = '' + , out = "" child.stdout - .on('data', function (buf) { + .on("data", function (buf) { out += buf.toString() }) .pipe(process.stdout) - child.on('exit', function () { + child.on("exit", function () { 
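+          // the child's stdout has been accumulated above; by this point it
+          // should be one complete JSON report, parsed and checked against `expected`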
out = JSON.parse(out) t.deepEqual(out, expected) s.close() diff --git a/deps/npm/test/tap/outdated-new-versions.js b/deps/npm/test/tap/outdated-new-versions.js index 17797361777..cb7eaa47334 100644 --- a/deps/npm/test/tap/outdated-new-versions.js +++ b/deps/npm/test/tap/outdated-new-versions.js @@ -3,11 +3,13 @@ var test = require("tap").test var npm = require("../../") var mkdirp = require("mkdirp") var rimraf = require("rimraf") +var path = require("path") var mr = require("npm-registry-mock") -var pkg = __dirname + "/outdated-new-versions" -mkdirp.sync(pkg + "/cache") +var pkg = path.resolve(__dirname, "outdated-new-versions") +var cache = path.resolve(pkg, "cache") +mkdirp.sync(cache) test("dicovers new versions in outdated", function (t) { @@ -15,7 +17,7 @@ test("dicovers new versions in outdated", function (t) { t.plan(2) mr(common.port, function (s) { - npm.load({cache: pkg + "/cache", registry: common.registry}, function () { + npm.load({cache: cache, registry: common.registry}, function () { npm.outdated(function (er, d) { for (var i = 0; i < d.length; i++) { if (d[i][1] === "underscore") @@ -31,6 +33,6 @@ test("dicovers new versions in outdated", function (t) { }) test("cleanup", function (t) { - rimraf.sync(pkg + "/cache") + rimraf.sync(cache) t.end() }) diff --git a/deps/npm/test/tap/outdated-notarget.js b/deps/npm/test/tap/outdated-notarget.js index 79fb88c67cc..f1032eab37e 100644 --- a/deps/npm/test/tap/outdated-notarget.js +++ b/deps/npm/test/tap/outdated-notarget.js @@ -1,23 +1,23 @@ // Fixes Issue #1770 -var common = require('../common-tap.js') -var test = require('tap').test -var npm = require('../../') -var osenv = require('osenv') -var path = require('path') -var fs = require('fs') -var rimraf = require('rimraf') -var mkdirp = require('mkdirp') -var pkg = path.resolve(__dirname, 'outdated-notarget') -var cache = path.resolve(pkg, 'cache') -var mr = require('npm-registry-mock') +var common = require("../common-tap.js") +var test = require("tap").test +var npm = require("../../") +var osenv = require("osenv") +var path = require("path") +var fs = require("fs") +var rimraf = require("rimraf") +var mkdirp = require("mkdirp") +var pkg = path.resolve(__dirname, "outdated-notarget") +var cache = path.resolve(pkg, "cache") +var mr = require("npm-registry-mock") -test('outdated-target: if no viable version is found, show error', function(t) { +test("outdated-target: if no viable version is found, show error", function (t) { t.plan(1) setup() - mr({port: common.port}, function(s) { - npm.load({ cache: cache, registry: common.registry}, function() { - npm.commands.update(function(er, d) { - t.equal(er.code, 'ETARGET') + mr({port: common.port}, function (s) { + npm.load({ cache: cache, registry: common.registry}, function () { + npm.commands.update(function (er) { + t.equal(er.code, "ETARGET") s.close() t.end() }) @@ -25,7 +25,7 @@ test('outdated-target: if no viable version is found, show error', function(t) { }) }) -test('cleanup', function(t) { +test("cleanup", function (t) { process.chdir(osenv.tmpdir()) rimraf.sync(pkg) t.end() @@ -34,14 +34,14 @@ test('cleanup', function(t) { function setup() { mkdirp.sync(pkg) mkdirp.sync(cache) - fs.writeFileSync(path.resolve(pkg, 'package.json'), JSON.stringify({ - author: 'Evan Lucas', - name: 'outdated-notarget', - version: '0.0.0', - description: 'Test for outdated-target', + fs.writeFileSync(path.resolve(pkg, "package.json"), JSON.stringify({ + author: "Evan Lucas", + name: "outdated-notarget", + version: "0.0.0", + description: "Test 
for outdated-target", dependencies: { - underscore: '~199.7.1' + underscore: "~199.7.1" } - }), 'utf8') + }), "utf8") process.chdir(pkg) } diff --git a/deps/npm/test/tap/outdated.js b/deps/npm/test/tap/outdated.js index dddec77ea2c..9f4b14d1f3e 100644 --- a/deps/npm/test/tap/outdated.js +++ b/deps/npm/test/tap/outdated.js @@ -1,14 +1,14 @@ var common = require("../common-tap.js") -var fs = require("fs") var test = require("tap").test var rimraf = require("rimraf") var npm = require("../../") +var path = require("path") var mr = require("npm-registry-mock") // config -var pkg = __dirname + '/outdated' - -var path = require("path") +var pkg = path.resolve(__dirname, "outdated") +var cache = path.resolve(pkg, "cache") +var nodeModules = path.resolve(pkg, "node_modules") test("it should not throw", function (t) { cleanup() @@ -33,13 +33,15 @@ test("it should not throw", function (t) { } mr(common.port, function (s) { npm.load({ - cache: pkg + "/cache", - loglevel: 'silent', + cache: "cache", + loglevel: "silent", parseable: true, registry: common.registry } , function () { npm.install(".", function (err) { + t.ifError(err, "install success") npm.outdated(function (er, d) { + t.ifError(er, "outdated success") console.log = originalLog t.same(output, expOut) t.same(d, expData) @@ -57,6 +59,6 @@ test("cleanup", function (t) { }) function cleanup () { - rimraf.sync(pkg + "/node_modules") - rimraf.sync(pkg + "/cache") + rimraf.sync(nodeModules) + rimraf.sync(cache) } diff --git a/deps/npm/test/tap/pack-scoped.js b/deps/npm/test/tap/pack-scoped.js new file mode 100644 index 00000000000..5c351339cb6 --- /dev/null +++ b/deps/npm/test/tap/pack-scoped.js @@ -0,0 +1,81 @@ +// verify that prepublish runs on pack and publish +var test = require("tap").test +var common = require("../common-tap") +var fs = require("graceful-fs") +var join = require("path").join +var mkdirp = require("mkdirp") +var rimraf = require("rimraf") + +var pkg = join(__dirname, "scoped_package") +var manifest = join(pkg, "package.json") +var tmp = join(pkg, "tmp") +var cache = join(pkg, "cache") + +var data = { + name : "@scope/generic-package", + version : "90000.100001.5" +} + +test("setup", function (t) { + var n = 0 + + rimraf.sync(pkg) + + mkdirp(pkg, then()) + mkdirp(cache, then()) + mkdirp(tmp, then()) + + function then () { + n++ + return function (er) { + t.ifError(er) + if (--n === 0) next() + } + } + + function next () { + fs.writeFile(manifest, JSON.stringify(data), "ascii", done) + } + + function done (er) { + t.ifError(er) + + t.pass("setup done") + t.end() + } +}) + +test("test", function (t) { + var env = { + "npm_config_cache" : cache, + "npm_config_tmp" : tmp, + "npm_config_prefix" : pkg, + "npm_config_global" : "false" + } + + for (var i in process.env) { + if (!/^npm_config_/.test(i)) env[i] = process.env[i] + } + + common.npm([ + "pack", + "--loglevel", "warn" + ], { + cwd: pkg, + env: env + }, function(err, code, stdout, stderr) { + t.ifErr(err, "npm pack finished without error") + t.equal(code, 0, "npm pack exited ok") + t.notOk(stderr, "got stderr data: " + JSON.stringify("" + stderr)) + stdout = stdout.trim() + var regex = new RegExp("scope-generic-package-90000.100001.5.tgz", "ig") + t.ok(stdout.match(regex), "found package") + t.end() + }) +}) + +test("cleanup", function (t) { + rimraf.sync(pkg) + t.pass("cleaned up") + t.end() +}) diff --git a/deps/npm/test/tap/peer-deps-invalid.js b/deps/npm/test/tap/peer-deps-invalid.js index c2873655017..098f3ccb187 100644 --- a/deps/npm/test/tap/peer-deps-invalid.js 
+++ b/deps/npm/test/tap/peer-deps-invalid.js @@ -1,18 +1,20 @@ -var common = require('../common-tap.js') +var common = require("../common-tap") var fs = require("fs") var path = require("path") var test = require("tap").test var rimraf = require("rimraf") var npm = require("../../") var mr = require("npm-registry-mock") -var pkg = __dirname + "/peer-deps-invalid" +var pkg = path.resolve(__dirname, "peer-deps-invalid") +var cache = path.resolve(pkg, "cache") +var nodeModules = path.resolve(pkg, "node_modules") var okFile = fs.readFileSync(path.join(pkg, "file-ok.js"), "utf8") var failFile = fs.readFileSync(path.join(pkg, "file-fail.js"), "utf8") test("installing dependencies that have conflicting peerDependencies", function (t) { - rimraf.sync(pkg + "/node_modules") - rimraf.sync(pkg + "/cache") + rimraf.sync(nodeModules) + rimraf.sync(cache) process.chdir(pkg) var customMocks = { @@ -23,7 +25,7 @@ test("installing dependencies that have conflicting peerDependencies", function } mr({port: common.port, mocks: customMocks}, function (s) { // create mock registry. npm.load({ - cache: pkg + "/cache", + cache: cache, registry: common.registry }, function () { npm.commands.install([], function (err) { @@ -40,7 +42,7 @@ test("installing dependencies that have conflicting peerDependencies", function }) test("cleanup", function (t) { - rimraf.sync(pkg + "/node_modules") - rimraf.sync(pkg + "/cache") + rimraf.sync(nodeModules) + rimraf.sync(cache) t.end() }) diff --git a/deps/npm/test/tap/peer-deps-toplevel.js b/deps/npm/test/tap/peer-deps-toplevel.js new file mode 100644 index 00000000000..30ab657dc8f --- /dev/null +++ b/deps/npm/test/tap/peer-deps-toplevel.js @@ -0,0 +1,55 @@ +var npm = npm = require("../../") +var test = require("tap").test +var path = require("path") +var fs = require("fs") +var osenv = require("osenv") +var rimraf = require("rimraf") +var mr = require("npm-registry-mock") +var common = require("../common-tap.js") + +var pkg = path.resolve(__dirname, "peer-deps-toplevel") +var desiredResultsPath = path.resolve(pkg, "desired-ls-results.json") + +test("installs the peer dependency directory structure", function (t) { + mr(common.port, function (s) { + setup(function (err) { + t.ifError(err, "setup ran successfully") + + npm.install(".", function (err) { + t.ifError(err, "packages were installed") + + npm.commands.ls([], true, function (err, _, results) { + t.ifError(err, "listed tree without problems") + + fs.readFile(desiredResultsPath, function (err, desired) { + t.ifError(err, "read desired results") + + t.deepEqual(results, JSON.parse(desired), "got expected output from ls") + s.close() + t.end() + }) + }) + }) + }) + }) +}) + +test("cleanup", function (t) { + cleanup() + t.end() +}) + + +function setup (cb) { + cleanup() + process.chdir(pkg) + + var opts = { cache: path.resolve(pkg, "cache"), registry: common.registry} + npm.load(opts, cb) +} + +function cleanup () { + process.chdir(osenv.tmpdir()) + rimraf.sync(path.resolve(pkg, "node_modules")) + rimraf.sync(path.resolve(pkg, "cache")) +} diff --git a/deps/npm/test/tap/peer-deps-toplevel/desired-ls-results.json b/deps/npm/test/tap/peer-deps-toplevel/desired-ls-results.json new file mode 100644 index 00000000000..28eff4c6d11 --- /dev/null +++ b/deps/npm/test/tap/peer-deps-toplevel/desired-ls-results.json @@ -0,0 +1,20 @@ +{ + "name": "npm-test-peer-deps-toplevel", + "version": "0.0.0", + "dependencies": { + "npm-test-peer-deps": { + "version": "0.0.0", + "dependencies": { + "underscore": { + "version": "1.3.1" + } + } + }, + 
"mkdirp": { + "version": "0.3.5" + }, + "request": { + "version": "0.9.5" + } + } +} diff --git a/deps/npm/test/tap/peer-deps-toplevel/package.json b/deps/npm/test/tap/peer-deps-toplevel/package.json new file mode 100644 index 00000000000..ab77daeec3a --- /dev/null +++ b/deps/npm/test/tap/peer-deps-toplevel/package.json @@ -0,0 +1,11 @@ +{ + "author": "Domenic Denicola", + "name": "npm-test-peer-deps-toplevel", + "version": "0.0.0", + "dependencies": { + "npm-test-peer-deps": "*" + }, + "peerDependencies": { + "mkdirp": "*" + } +} diff --git a/deps/npm/test/tap/peer-deps-without-package-json.js b/deps/npm/test/tap/peer-deps-without-package-json.js index ce01be4d088..eb000782f95 100644 --- a/deps/npm/test/tap/peer-deps-without-package-json.js +++ b/deps/npm/test/tap/peer-deps-without-package-json.js @@ -1,37 +1,39 @@ -var common = require('../common-tap.js') +var common = require("../common-tap") var fs = require("fs") var path = require("path") var test = require("tap").test var rimraf = require("rimraf") var npm = require("../../") var mr = require("npm-registry-mock") -var pkg = __dirname + "/peer-deps-without-package-json" +var pkg = path.resolve(__dirname, "peer-deps-without-package-json") +var cache = path.resolve(pkg, "cache") +var nodeModules = path.resolve(pkg, "node_modules") var js = fs.readFileSync(path.join(pkg, "file-js.js"), "utf8") test("installing a peerDependencies-using package without a package.json present (GH-3049)", function (t) { - rimraf.sync(pkg + "/node_modules") - rimraf.sync(pkg + "/cache") + rimraf.sync(nodeModules) + rimraf.sync(cache) - fs.mkdirSync(pkg + "/node_modules") + fs.mkdirSync(nodeModules) process.chdir(pkg) var customMocks = { "get": { - "/ok.js": [200, js], + "/ok.js": [200, js] } } mr({port: common.port, mocks: customMocks}, function (s) { // create mock registry. npm.load({ registry: common.registry, - cache: pkg + "/cache" + cache: cache }, function () { npm.install(common.registry + "/ok.js", function (err) { if (err) { t.fail(err) } else { - t.ok(fs.existsSync(pkg + "/node_modules/npm-test-peer-deps-file")) - t.ok(fs.existsSync(pkg + "/node_modules/underscore")) + t.ok(fs.existsSync(path.join(nodeModules, "/npm-test-peer-deps-file"))) + t.ok(fs.existsSync(path.join(nodeModules, "/underscore"))) } t.end() s.close() // shutdown mock registry. 
@@ -41,7 +43,7 @@ test("installing a peerDependencies-using package without a package.json present }) test("cleanup", function (t) { - rimraf.sync(pkg + "/node_modules") - rimraf.sync(pkg + "/cache") + rimraf.sync(nodeModules) + rimraf.sync(cache) t.end() }) diff --git a/deps/npm/test/tap/peer-deps.js b/deps/npm/test/tap/peer-deps.js index 097a9217915..6e60fc10321 100644 --- a/deps/npm/test/tap/peer-deps.js +++ b/deps/npm/test/tap/peer-deps.js @@ -11,8 +11,6 @@ var pkg = path.resolve(__dirname, "peer-deps") var desiredResultsPath = path.resolve(pkg, "desired-ls-results.json") test("installs the peer dependency directory structure", function (t) { - t.plan(1) - mr(common.port, function (s) { setup(function (err) { if (err) return t.fail(err) @@ -46,7 +44,7 @@ function setup (cb) { cleanup() process.chdir(pkg) - var opts = { cache: path.resolve(pkg, "cache"), registry: common.registry}; + var opts = { cache: path.resolve(pkg, "cache"), registry: common.registry} npm.load(opts, cb) } diff --git a/deps/npm/test/tap/prepublish.js b/deps/npm/test/tap/prepublish.js index f80085d92c6..36391beeb34 100644 --- a/deps/npm/test/tap/prepublish.js +++ b/deps/npm/test/tap/prepublish.js @@ -1,79 +1,64 @@ // verify that prepublish runs on pack and publish -var test = require('tap').test -var npm = require('../../') -var fs = require('fs') -var pkg = __dirname + '/prepublish_package' -var tmp = pkg + '/tmp' -var cache = pkg + '/cache' -var mkdirp = require('mkdirp') -var rimraf = require('rimraf') -var path = require('path') -var os = require('os') +var common = require("../common-tap") +var test = require("tap").test +var fs = require("graceful-fs") +var join = require("path").join +var mkdirp = require("mkdirp") +var rimraf = require("rimraf") -test('setup', function (t) { +var pkg = join(__dirname, "prepublish_package") +var tmp = join(pkg, "tmp") +var cache = join(pkg, "cache") + +test("setup", function (t) { var n = 0 + cleanup() mkdirp(pkg, then()) mkdirp(cache, then()) mkdirp(tmp, then()) - function then (er) { - n ++ + function then () { + n++ return function (er) { - if (er) - throw er - if (--n === 0) - next() + if (er) throw er + if (--n === 0) next() } } function next () { - fs.writeFile(pkg + '/package.json', JSON.stringify({ - name: 'npm-test-prepublish', - version: '1.2.5', - scripts: { prepublish: 'echo ok' } - }), 'ascii', function (er) { - if (er) - throw er - t.pass('setup done') + fs.writeFile(join(pkg, "package.json"), JSON.stringify({ + name: "npm-test-prepublish", + version: "1.2.5", + scripts: { prepublish: "echo ok" } + }), "ascii", function (er) { + if (er) throw er + + t.pass("setup done") t.end() }) } }) -test('test', function (t) { - var spawn = require('child_process').spawn - var node = process.execPath - var npm = path.resolve(__dirname, '../../cli.js') +test("test", function (t) { var env = { - npm_config_cache: cache, - npm_config_tmp: tmp, - npm_config_prefix: pkg, - npm_config_global: 'false' + "npm_config_cache" : cache, + "npm_config_tmp" : tmp, + "npm_config_prefix" : pkg, + "npm_config_global" : "false" } for (var i in process.env) { if (!/^npm_config_/.test(i)) env[i] = process.env[i] } - var child = spawn(node, [npm, 'pack'], { - cwd: pkg, - env: env - }) - child.stdout.setEncoding('utf8') - child.stderr.on('data', onerr) - child.stdout.on('data', ondata) - child.on('close', onend) - var c = '' - , e = '' - function ondata (chunk) { - c += chunk - } - function onerr (chunk) { - e += chunk - } - function onend () { - if (e) { - throw new Error('got stderr data: ' + 
JSON.stringify('' + e)) - } - c = c.trim() + + common.npm([ + "pack", + "--loglevel", "warn" + ], { cwd: pkg, env: env }, function(err, code, stdout, stderr) { + t.equal(code, 0, "pack finished successfully") + t.ifErr(err, "pack finished successfully") + + t.notOk(stderr, "got stderr data:" + JSON.stringify("" + stderr)) + var c = stdout.trim() var regex = new RegExp("" + "> npm-test-prepublish@1.2.5 prepublish [^\\r\\n]+\\r?\\n" + "> echo ok\\r?\\n" + @@ -83,15 +68,15 @@ test('test', function (t) { t.ok(c.match(regex)) t.end() - } + }) }) -test('cleanup', function (t) { - rimraf(pkg, function(er) { - if (er) - throw er - t.pass('cleaned up') - t.end() - }) +test("cleanup", function (t) { + cleanup() + t.pass("cleaned up") + t.end() }) +function cleanup() { + rimraf.sync(pkg) +} diff --git a/deps/npm/test/tap/prune.js b/deps/npm/test/tap/prune.js index a303d840c01..24edf5e6184 100644 --- a/deps/npm/test/tap/prune.js +++ b/deps/npm/test/tap/prune.js @@ -1,16 +1,17 @@ var test = require("tap").test +var common = require("../common-tap") var fs = require("fs") -var node = process.execPath -var npm = require.resolve("../../bin/npm-cli.js") var rimraf = require("rimraf") var mr = require("npm-registry-mock") -var common = require("../common-tap.js") -var spawn = require("child_process").spawn var env = process.env -process.env.npm_config_depth = "Infinity" +var path = require("path") -var pkg = __dirname + "/prune" -var cache = pkg + "/cache" +var pkg = path.resolve(__dirname, "prune") +var cache = path.resolve(pkg, "cache") +var nodeModules = path.resolve(pkg, "node_modules") + +var EXEC_OPTS = { cwd: pkg, env: env } +EXEC_OPTS.env.npm_config_depth = "Infinity" var server @@ -22,38 +23,43 @@ test("reg mock", function (t) { }) }) +function cleanup () { + rimraf.sync(cache) + rimraf.sync(nodeModules) +} + +test("setup", function (t) { + cleanup() + t.pass("setup") + t.end() +}) test("npm install", function (t) { - rimraf.sync(pkg + "/node_modules") - var c = spawn(node, [ - npm, "install", - "--cache=" + cache, - "--registry=" + common.registry, - "--loglevel=silent", - "--production=false" - ], { cwd: pkg, env: env }) - c.stderr.on("data", function(d) { - t.fail("Should not get data on stderr: " + d) - }) - c.on("close", function(code) { + common.npm([ + "install", + "--cache", cache, + "--registry", common.registry, + "--loglevel", "silent", + "--production", "false" + ], EXEC_OPTS, function (err, code, stdout, stderr) { + t.ifErr(err, "install finished successfully") t.notOk(code, "exit ok") + t.notOk(stderr, "Should not get data on stderr: " + stderr) t.end() }) }) test("npm install test-package", function (t) { - var c = spawn(node, [ - npm, "install", "test-package", - "--cache=" + cache, - "--registry=" + common.registry, - "--loglevel=silent", - "--production=false" - ], { cwd: pkg, env: env }) - c.stderr.on("data", function(d) { - t.fail("Should not get data on stderr: " + d) - }) - c.on("close", function(code) { + common.npm([ + "install", "test-package", + "--cache", cache, + "--registry", common.registry, + "--loglevel", "silent", + "--production", "false" + ], EXEC_OPTS, function (err, code, stdout, stderr) { + t.ifErr(err, "install finished successfully") t.notOk(code, "exit ok") + t.notOk(stderr, "Should not get data on stderr: " + stderr) t.end() }) }) @@ -65,16 +71,14 @@ test("verify installs", function (t) { }) test("npm prune", function (t) { - var c = spawn(node, [ - npm, "prune", - "--loglevel=silent", - "--production=false" - ], { cwd: pkg, env: env }) - 
c.stderr.on("data", function(d) { - t.fail("Should not get data on stderr: " + d) - }) - c.on("close", function(code) { + common.npm([ + "prune", + "--loglevel", "silent", + "--production", "false" + ], EXEC_OPTS, function (err, code, stdout, stderr) { + t.ifErr(err, "prune finished successfully") t.notOk(code, "exit ok") + t.notOk(stderr, "Should not get data on stderr: " + stderr) t.end() }) }) @@ -86,16 +90,14 @@ test("verify installs", function (t) { }) test("npm prune", function (t) { - var c = spawn(node, [ - npm, "prune", - "--loglevel=silent", + common.npm([ + "prune", + "--loglevel", "silent", "--production" - ], { cwd: pkg, env: env }) - c.stderr.on("data", function(d) { - t.fail("Should not get data on stderr: " + d) - }) - c.on("close", function(code) { + ], EXEC_OPTS, function (err, code, stderr) { + t.ifErr(err, "prune finished successfully") t.notOk(code, "exit ok") + t.equal(stderr, "unbuild mkdirp@0.3.5\n") t.end() }) }) @@ -108,8 +110,7 @@ test("verify installs", function (t) { test("cleanup", function (t) { server.close() - rimraf.sync(pkg + "/node_modules") - rimraf.sync(pkg + "/cache") + cleanup() t.pass("cleaned up") t.end() }) diff --git a/deps/npm/test/tap/publish-config.js b/deps/npm/test/tap/publish-config.js index 3c4624eeaf7..9e537a92064 100644 --- a/deps/npm/test/tap/publish-config.js +++ b/deps/npm/test/tap/publish-config.js @@ -1,33 +1,37 @@ -var common = require('../common-tap.js') -var test = require('tap').test -var fs = require('fs') -var osenv = require('osenv') -var pkg = process.env.npm_config_tmp || '/tmp' -pkg += '/npm-test-publish-config' +var common = require("../common-tap.js") +var test = require("tap").test +var fs = require("fs") +var osenv = require("osenv") +var pkg = process.env.npm_config_tmp || "/tmp" +pkg += "/npm-test-publish-config" -require('mkdirp').sync(pkg) +require("mkdirp").sync(pkg) -fs.writeFileSync(pkg + '/package.json', JSON.stringify({ - name: 'npm-test-publish-config', - version: '1.2.3', +fs.writeFileSync(pkg + "/package.json", JSON.stringify({ + name: "npm-test-publish-config", + version: "1.2.3", publishConfig: { registry: common.registry } -}), 'utf8') +}), "utf8") -var spawn = require('child_process').spawn -var npm = require.resolve('../../bin/npm-cli.js') -var node = process.execPath +fs.writeFileSync(pkg + "/fixture_npmrc", + "//localhost:1337/:email = fancy@feast.net\n" + + "//localhost:1337/:username = fancy\n" + + "//localhost:1337/:_password = " + new Buffer("feast").toString("base64") + "\n" + + "registry = http://localhost:1337/") test(function (t) { var child - require('http').createServer(function (req, res) { - t.pass('got request on the fakey fake registry') + require("http").createServer(function (req, res) { + t.pass("got request on the fakey fake registry") t.end() this.close() res.statusCode = 500 - res.end('{"error":"sshhh. naptime nao. \\^O^/ <(YAWWWWN!)"}') + res.end(JSON.stringify({ + error: "sshhh. naptime nao. \\^O^/ <(YAWWWWN!)" + })) child.kill() }).listen(common.port, function () { - t.pass('server is listening') + t.pass("server is listening") // don't much care about listening to the child's results // just wanna make sure it hits the server we just set up. @@ -36,16 +40,20 @@ test(function (t) { // itself functions normally. 
// // Make sure that we don't sit around waiting for lock files - child = spawn(node, [npm, 'publish', '--email=fancy', '--_auth=feast'], { + child = common.npm(["publish", "--userconfig=" + pkg + "/fixture_npmrc"], { cwd: pkg, + stdio: "inherit", env: { - npm_config_cache_lock_stale: 1000, - npm_config_cache_lock_wait: 1000, + "npm_config_cache_lock_stale": 1000, + "npm_config_cache_lock_wait": 1000, HOME: process.env.HOME, Path: process.env.PATH, PATH: process.env.PATH, USERPROFILE: osenv.home() } + }, function (err, code) { + t.ifError(err, "publish command finished successfully") + t.notOk(code, "npm install exited with code 0") }) }) }) diff --git a/deps/npm/test/tap/publish-scoped.js b/deps/npm/test/tap/publish-scoped.js new file mode 100644 index 00000000000..2658c8dd2bd --- /dev/null +++ b/deps/npm/test/tap/publish-scoped.js @@ -0,0 +1,101 @@ +var fs = require("fs") +var path = require("path") + +var test = require("tap").test +var mkdirp = require("mkdirp") +var rimraf = require("rimraf") +var nock = require("nock") + +var npm = require("../../") +var common = require("../common-tap.js") + +var pkg = path.join(__dirname, "prepublish_package") + +// TODO: nock uses setImmediate, breaks 0.8: replace with mockRegistry +if (!global.setImmediate) { + global.setImmediate = function () { + var args = [arguments[0], 0].concat([].slice.call(arguments, 1)) + setTimeout.apply(this, args) + } +} + +test("setup", function (t) { + mkdirp(path.join(pkg, "cache"), next) + + function next () { + process.chdir(pkg) + fs.writeFile( + path.join(pkg, "package.json"), + JSON.stringify({ + name: "@bigco/publish-organized", + version: "1.2.5" + }), + "ascii", + function (er) { + t.ifError(er) + + t.pass("setup done") + t.end() + } + ) + } +}) + +test("npm publish should honor scoping", function (t) { + var put = nock(common.registry) + .put("/@bigco%2fpublish-organized") + .reply(201, verify) + + var configuration = { + cache : path.join(pkg, "cache"), + loglevel : "silent", + registry : "http://nonexistent.lvh.me", + "//localhost:1337/:username" : "username", + "//localhost:1337/:_password" : new Buffer("password").toString("base64"), + "//localhost:1337/:email" : "ogd@aoaioxxysz.net" + } + + npm.load(configuration, onload) + + function onload (er) { + t.ifError(er, "npm bootstrapped successfully") + + npm.config.set("@bigco:registry", common.registry) + npm.commands.publish([], false, function (er) { + t.ifError(er, "published without error") + + put.done() + + t.end() + }) + } + + function verify (_, body) { + t.doesNotThrow(function () { + var parsed = JSON.parse(body) + var current = parsed.versions["1.2.5"] + t.equal( + current._npmVersion, + require(path.resolve(__dirname, "../../package.json")).version, + "npm version is correct" + ) + + t.equal( + current._nodeVersion, + process.versions.node, + "node version is correct" + ) + }, "converted body back into object") + + return {ok: true} + } +}) + +test("cleanup", function (t) { + process.chdir(__dirname) + rimraf(pkg, function (er) { + t.ifError(er) + + t.end() + }) +}) diff --git a/deps/npm/test/tap/pwd-prefix.js b/deps/npm/test/tap/pwd-prefix.js new file mode 100644 index 00000000000..237096e0a2c --- /dev/null +++ b/deps/npm/test/tap/pwd-prefix.js @@ -0,0 +1,35 @@ +// This test ensures that a few commands do the same +// thing when the cwd is where package.json is, and when +// the package.json is one level up. 
+ +var test = require("tap").test +var common = require("../common-tap.js") +var path = require("path") +var root = path.resolve(__dirname, "../..") +var lib = path.resolve(root, "lib") +var commands = ["run", "version"] + +commands.forEach(function (cmd) { + // Should get the same stdout and stderr each time + var stdout, stderr + + test(cmd + " in root", function (t) { + common.npm([cmd], {cwd: root}, function (er, code, so, se) { + if (er) throw er + t.notOk(code, "npm " + cmd + " exited with code 0") + stdout = so + stderr = se + t.end() + }) + }) + + test(cmd + " in lib", function (t) { + common.npm([cmd], {cwd: lib}, function (er, code, so, se) { + if (er) throw er + t.notOk(code, "npm " + cmd + " exited with code 0") + t.equal(so, stdout) + t.equal(se, stderr) + t.end() + }) + }) +}) diff --git a/deps/npm/test/tap/referer.js b/deps/npm/test/tap/referer.js index 1b55ab02613..c1b173d9765 100644 --- a/deps/npm/test/tap/referer.js +++ b/deps/npm/test/tap/referer.js @@ -1,18 +1,17 @@ var common = require("../common-tap.js") var test = require("tap").test var http = require("http") -var server test("should send referer http header", function (t) { - var server = http.createServer(function (q, s) { + http.createServer(function (q, s) { t.equal(q.headers.referer, "install foo") s.statusCode = 404 s.end(JSON.stringify({error: "whatever"})) this.close() }).listen(common.port, function () { - var reg = "--registry=http://localhost:" + common.port - var args = [ "install", "foo", reg ] - common.npm(args, {}, function (er, code, so, se) { + var reg = "http://localhost:" + common.port + var args = [ "install", "foo", "--registry", reg ] + common.npm(args, {}, function (er, code) { if (er) { throw er } diff --git a/deps/npm/test/tap/registry.js b/deps/npm/test/tap/registry.js index 8ea1c2f2daf..20e7bbe8115 100644 --- a/deps/npm/test/tap/registry.js +++ b/deps/npm/test/tap/registry.js @@ -1,55 +1,75 @@ // Run all the tests in the `npm-registry-couchapp` suite // This verifies that the server-side stuff still works. 
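
Two gates in the rewritten runner below decide whether this suite runs at all: Node must be at least 0.10 (npm-registry-couchapp needs newer APIs) and a couchdb binary must be on the PATH, and failing either now prints a warning and skips instead of recording a test failure. Both checks are ordinary patterns, shown here in isolation:

    // the version gate compares parsed components of process.versions.node,
    // e.g. "0.8.28" -> [0, 8, 28]
    var v = process.versions.node.split(".").map(function (n) { return parseInt(n, 10) })
    var tooOld = v[0] === 0 && v[1] < 10

    // which() probes the PATH and calls back with an error when the
    // executable cannot be found
    var which = require("which")
    which("couchdb", function (er) {
      if (er) console.error("WARNING: need couch to run test: " + er.message)
    })
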
+var common = require("../common-tap") var test = require("tap").test -var spawn = require("child_process").spawn var npmExec = require.resolve("../../bin/npm-cli.js") var path = require("path") var ca = path.resolve(__dirname, "../../node_modules/npm-registry-couchapp") var which = require("which") -var hasCouch = false - -which("couchdb", function(er, couch) { - if (er) { - return test("need couchdb", function (t) { - t.fail("need couch to run test: " + er.message) - t.end() - }) - } else { - runTests() - } -}) + +var v = process.versions.node.split(".").map(function (n) { return parseInt(n, 10) }) +if (v[0] === 0 && v[1] < 10) { + console.error( + "WARNING: need a recent Node for npm-registry-couchapp tests to run, have", + process.versions.node + ) +} +else { + which("couchdb", function (er) { + if (er) { + console.error("WARNING: need couch to run test: " + er.message) + } + else { + runTests() + } + }) +} + function runTests () { - spawn(process.execPath, [ - npmExec, "install" - ], { + var env = {} + for (var i in process.env) env[i] = process.env[i] + env.npm = npmExec + + var opts = { cwd: ca, stdio: "inherit" - }).on("close", function (code, sig) { - if (code || sig) { + } + common.npm(["install"], opts, function (err, code) { + if (err) { throw err } + if (code) { return test("need install to work", function (t) { - t.fail("install failed with: " (code || sig)) + t.fail("install failed with: " + code) t.end() }) } else { - var env = {} - for (var i in process.env) env[i] = process.env[i] - env.npm = npmExec - - spawn(process.execPath, [ - npmExec, "test" - ], { + opts = { cwd: ca, env: env, stdio: "inherit" - }).on("close", function (code, sig) { - process.exit(code || sig) + } + common.npm(["test"], opts, function (err, code) { + if (err) { throw err } + if (code) { + return test("need test to work", function (t) { + t.fail("test failed with: " + code) + t.end() + }) + } + opts = { + cwd: ca, + env: env, + stdio: "inherit" + } + common.npm(["prune", "--production"], opts, function (err, code) { + if (err) { throw err } + process.exit(code || 0) + }) }) } - }) } diff --git a/deps/npm/test/tap/repo.js b/deps/npm/test/tap/repo.js index bf7b574f866..caae7d3c3f6 100644 --- a/deps/npm/test/tap/repo.js +++ b/deps/npm/test/tap/repo.js @@ -33,7 +33,7 @@ test("npm repo underscore", function (t) { '--registry=' + common.registry, '--loglevel=silent', '--browser=' + __dirname + '/_script.sh' - ], opts, function(err, code, stdout, stderr) { + ], opts, function (err, code, stdout, stderr) { t.equal(code, 0, 'exit ok') var res = fs.readFileSync(outFile, 'ascii') s.close() @@ -52,7 +52,7 @@ test('npm repo optimist - github (https://)', function (t) { '--registry=' + common.registry, '--loglevel=silent', '--browser=' + __dirname + '/_script.sh' - ], opts, function(err, code, stdout, stderr) { + ], opts, function (err, code, stdout, stderr) { t.equal(code, 0, 'exit ok') var res = fs.readFileSync(outFile, 'ascii') s.close() @@ -70,7 +70,7 @@ test("npm repo npm-test-peer-deps - no repo", function (t) { '--registry=' + common.registry, '--loglevel=silent', '--browser=' + __dirname + '/_script.sh' - ], opts, function(err, code, stdout, stderr) { + ], opts, function (err, code, stdout, stderr) { t.equal(code, 1, 'exit not ok') s.close() t.end() @@ -85,7 +85,7 @@ test("npm repo test-repo-url-http - non-github (http://)", function (t) { '--registry=' + common.registry, '--loglevel=silent', '--browser=' + __dirname + '/_script.sh' - ], opts, function(err, code, stdout, stderr) { + ], opts, function 
(err, code, stdout, stderr) { t.equal(code, 0, 'exit ok') var res = fs.readFileSync(outFile, 'ascii') s.close() @@ -103,7 +103,7 @@ test("npm repo test-repo-url-https - non-github (https://)", function (t) { '--registry=' + common.registry, '--loglevel=silent', '--browser=' + __dirname + '/_script.sh' - ], opts, function(err, code, stdout, stderr) { + ], opts, function (err, code, stdout, stderr) { t.equal(code, 0, 'exit ok') var res = fs.readFileSync(outFile, 'ascii') s.close() @@ -121,7 +121,7 @@ test("npm repo test-repo-url-ssh - non-github (ssh://)", function (t) { '--registry=' + common.registry, '--loglevel=silent', '--browser=' + __dirname + '/_script.sh' - ], opts, function(err, code, stdout, stderr) { + ], opts, function (err, code, stdout, stderr) { t.equal(code, 0, 'exit ok') var res = fs.readFileSync(outFile, 'ascii') s.close() diff --git a/deps/npm/test/tap/run-script.js b/deps/npm/test/tap/run-script.js new file mode 100644 index 00000000000..6ce361968db --- /dev/null +++ b/deps/npm/test/tap/run-script.js @@ -0,0 +1,84 @@ +var common = require("../common-tap") + , test = require("tap").test + , path = require("path") + , rimraf = require("rimraf") + , mkdirp = require("mkdirp") + , pkg = path.resolve(__dirname, "run-script") + , cache = path.resolve(pkg, "cache") + , tmp = path.resolve(pkg, "tmp") + , opts = { cwd: pkg } + +function testOutput (t, command, er, code, stdout, stderr) { + var lines + + if (er) + throw er + + if (stderr) + throw new Error("npm " + command + " stderr: " + stderr.toString()) + + lines = stdout.trim().split("\n") + stdout = lines.filter(function(line) { + return line.trim() !== "" && line[0] !== '>' + }).join(';') + + t.equal(stdout, command) + t.end() +} + +function cleanup () { + rimraf.sync(cache) + rimraf.sync(tmp) +} + +test("setup", function (t) { + cleanup() + mkdirp.sync(cache) + mkdirp.sync(tmp) + t.end() +}) + +test("npm run-script", function (t) { + common.npm(["run-script", "start"], opts, testOutput.bind(null, t, "start")) +}) + +test("npm run-script with args", function (t) { + common.npm(["run-script", "start", "--", "stop"], opts, testOutput.bind(null, t, "stop")) +}) + +test("npm run-script with args that contain spaces", function (t) { + common.npm(["run-script", "start", "--", "hello world"], opts, testOutput.bind(null, t, "hello world")) +}) + +test("npm run-script with args that contain single quotes", function (t) { + common.npm(["run-script", "start", "--", "they're awesome"], opts, testOutput.bind(null, t, "they're awesome")) +}) + +test("npm run-script with args that contain double quotes", function (t) { + common.npm(["run-script", "start", "--", "what's \"up\"?"], opts, testOutput.bind(null, t, "what's \"up\"?")) +}) + +test("npm run-script with pre script", function (t) { + common.npm(["run-script", "with-post"], opts, testOutput.bind(null, t, "main;post")) +}) + +test("npm run-script with post script", function (t) { + common.npm(["run-script", "with-pre"], opts, testOutput.bind(null, t, "pre;main")) +}) + +test("npm run-script with both pre and post script", function (t) { + common.npm(["run-script", "with-both"], opts, testOutput.bind(null, t, "pre;main;post")) +}) + +test("npm run-script with both pre and post script and with args", function (t) { + common.npm(["run-script", "with-both", "--", "an arg"], opts, testOutput.bind(null, t, "pre;an arg;post")) +}) + +test("npm run-script explicitly call pre script with arg", function (t) { + common.npm(["run-script", "prewith-pre", "--", "an arg"], opts, 
testOutput.bind(null, t, "an arg")) +}) + +test("cleanup", function (t) { + cleanup() + t.end() +}) diff --git a/deps/npm/test/tap/run-script/package.json b/deps/npm/test/tap/run-script/package.json new file mode 100644 index 00000000000..afa0e3f0c8d --- /dev/null +++ b/deps/npm/test/tap/run-script/package.json @@ -0,0 +1,13 @@ +{"name":"runscript" +,"version":"1.2.3" +,"scripts":{ + "start":"node -e 'console.log(process.argv[1] || \"start\")'", + "prewith-pre":"node -e 'console.log(process.argv[1] || \"pre\")'", + "with-pre":"node -e 'console.log(process.argv[1] || \"main\")'", + "with-post":"node -e 'console.log(process.argv[1] || \"main\")'", + "postwith-post":"node -e 'console.log(process.argv[1] || \"post\")'", + "prewith-both":"node -e 'console.log(process.argv[1] || \"pre\")'", + "with-both":"node -e 'console.log(process.argv[1] || \"main\")'", + "postwith-both":"node -e 'console.log(process.argv[1] || \"post\")'" + } +} diff --git a/deps/npm/test/tap/scripts-whitespace-windows.js b/deps/npm/test/tap/scripts-whitespace-windows.js index 97bed98cb71..44170af5b9e 100644 --- a/deps/npm/test/tap/scripts-whitespace-windows.js +++ b/deps/npm/test/tap/scripts-whitespace-windows.js @@ -1,71 +1,54 @@ -var test = require('tap').test -var path = require('path') -var npm = path.resolve(__dirname, '../../cli.js') -var pkg = __dirname + '/scripts-whitespace-windows' -var tmp = pkg + '/tmp' -var cache = pkg + '/cache' -var modules = pkg + '/node_modules' -var dep = pkg + '/dep' - -var mkdirp = require('mkdirp') -var rimraf = require('rimraf') -var node = process.execPath -var spawn = require('child_process').spawn - -test('setup', function (t) { +var test = require("tap").test +var common = require("../common-tap") +var path = require("path") +var pkg = path.resolve(__dirname, "scripts-whitespace-windows") +var tmp = path.resolve(pkg, "tmp") +var cache = path.resolve(pkg, "cache") +var modules = path.resolve(pkg, "node_modules") +var dep = path.resolve(pkg, "dep") + +var mkdirp = require("mkdirp") +var rimraf = require("rimraf") + +test("setup", function (t) { + cleanup() mkdirp.sync(cache) mkdirp.sync(tmp) - rimraf.sync(modules) - var env = { - npm_config_cache: cache, - npm_config_tmp: tmp, - npm_config_prefix: pkg, - npm_config_global: 'false' - } - - var child = spawn(node, [npm, 'i', dep], { + common.npm(["i", dep], { cwd: pkg, - env: env - }) - - child.stdout.setEncoding('utf8') - child.stderr.on('data', function(chunk) { - throw new Error('got stderr data: ' + JSON.stringify('' + chunk)) - }) - child.on('close', function () { + env: { + "npm_config_cache": cache, + "npm_config_tmp": tmp, + "npm_config_prefix": pkg, + "npm_config_global": "false" + } + }, function (err, code, stdout, stderr) { + t.ifErr(err, "npm i " + dep + " finished without error") + t.equal(code, 0, "npm i " + dep + " exited ok") + t.notOk(stderr, "no output stderr") t.end() }) }) -test('test', function (t) { - - var child = spawn(node, [npm, 'run', 'foo'], { - cwd: pkg, - env: process.env - }) - - child.stdout.setEncoding('utf8') - child.stderr.on('data', function(chunk) { - throw new Error('got stderr data: ' + JSON.stringify('' + chunk)) +test("test", function (t) { + common.npm(["run", "foo"], { cwd: pkg }, function (err, code, stdout, stderr) { + t.ifErr(err, "npm run finished without error") + t.equal(code, 0, "npm run exited ok") + t.notOk(stderr, "no output stderr: ", stderr) + stdout = stdout.trim() + t.ok(/npm-test-fine/.test(stdout)) + t.end() }) - child.stdout.on('data', ondata) - child.on('close', onend) - 
var c = '' - function ondata (chunk) { - c += chunk - } - function onend () { - c = c.trim() +}) - t.ok(/npm-test-fine/.test(c)) - t.end() - } +test("cleanup", function (t) { + cleanup() + t.end() }) -test('cleanup', function (t) { +function cleanup() { rimraf.sync(cache) rimraf.sync(tmp) rimraf.sync(modules) - t.end() -}) +} diff --git a/deps/npm/test/tap/search.js b/deps/npm/test/tap/search.js new file mode 100644 index 00000000000..d4c56697330 --- /dev/null +++ b/deps/npm/test/tap/search.js @@ -0,0 +1,265 @@ +var common = require("../common-tap.js") +var test = require("tap").test +var rimraf = require("rimraf") +var mr = require("npm-registry-mock") +var fs = require("fs") +var path = require("path") +var pkg = path.resolve(__dirname, "search") +var cache = path.resolve(pkg, "cache") +var registryCache = path.resolve(cache, "localhost_1337", "-", "all") +var cacheJsonFile = path.resolve(registryCache, ".cache.json") +var mkdirp = require("mkdirp") + +var timeMock = { + epoch: 1411727900, + future: 1411727900+100, + all: 1411727900+25, + since: 0 // filled by since server callback +} + +var EXEC_OPTS = {} + +function cleanupCache() { + rimraf.sync(cache) +} +function cleanup () { cleanupCache() } + +function setupCache() { + mkdirp.sync(cache) + mkdirp.sync(registryCache) + var res = fs.writeFileSync(cacheJsonFile, stringifyUpdated(timeMock.epoch)) + if (res) throw new Error("Creating cache file failed") +} + +var mocks = { + /* Since request, always response with an _update time > the time requested */ + sinceFuture: function(server) { + server.filteringPathRegEx(/startkey=[^&]*/g, function(s) { + var _allMock = JSON.parse(JSON.stringify(allMock)) + timeMock.since = _allMock._updated = s.replace("startkey=", "") + server.get("/-/all/since?stale=update_after&" + s) + .reply(200, _allMock) + return s + }) + }, + allFutureUpdatedOnly: function(server) { + server.get("/-/all") + .reply(200, stringifyUpdated(timeMock.future)) + }, + all: function(server) { + server.get("/-/all") + .reply(200, allMock) + } +} + + +test("No previous cache, init cache triggered by first search", function(t) { + cleanupCache() + + mr({ port: common.port, mocks: mocks.allFutureUpdatedOnly }, function (s) { + common.npm([ + "search", "do not do extra search work on my behalf", + "--registry", common.registry, + "--cache", cache, + "--loglevel", "silent", + "--color", "always" + ], + EXEC_OPTS, + function(err, code) { + s.close() + t.equal(code, 0, "search finished successfully") + t.ifErr(err, "search finished successfully") + + t.ok(fs.existsSync(cacheJsonFile), + cacheJsonFile + " expected to have been created") + var cacheData = JSON.parse(fs.readFileSync(cacheJsonFile, "utf8")) + t.equal(cacheData._updated, String(timeMock.future)) + t.end() + }) + }) +}) + +test("previous cache, _updated set, should trigger since request", function(t) { + cleanupCache() + setupCache() + + function m(server) { + [ mocks.all, mocks.sinceFuture ].forEach(function(m) { + m(server) + }) + } + mr({ port: common.port, mocks: m }, function (s) { + common.npm([ + "search", "do not do extra search work on my behalf", + "--registry", common.registry, + "--cache", cache, + "--loglevel", "silly", + "--color", "always" + ], + EXEC_OPTS, + function(err, code) { + s.close() + t.equal(code, 0, "search finished successfully") + t.ifErr(err, "search finished successfully") + + var cacheData = JSON.parse(fs.readFileSync(cacheJsonFile, "utf8")) + t.equal(cacheData._updated, + timeMock.since, + "cache update time gotten from since response") + 
cleanupCache() + t.end() + }) + }) +}) + + +var searches = [ + { + term: "f36b6a6123da50959741e2ce4d634f96ec668c56", + description: "non-regex", + location: 241 + }, + { + term: "/f36b6a6123da50959741e2ce4d634f96ec668c56/", + description: "regex", + location: 241 + } +] + +searches.forEach(function(search) { + test(search.description + " search in color", function(t) { + cleanupCache() + mr({ port: common.port, mocks: mocks.all }, function (s) { + common.npm([ + "search", search.term, + "--registry", common.registry, + "--cache", cache, + "--loglevel", "silent", + "--color", "always" + ], + EXEC_OPTS, + function(err, code, stdout) { + s.close() + t.equal(code, 0, "search finished successfully") + t.ifErr(err, "search finished successfully") + // \033 == \u001B + var markStart = "\u001B\\[[0-9][0-9]m" + var markEnd = "\u001B\\[0m" + + var re = new RegExp(markStart + ".*?" + markEnd) + + var cnt = stdout.search(re) + t.equal(cnt, search.location, + search.description + " search for " + search.term) + t.end() + }) + }) + }) +}) + +test("cleanup", function (t) { + cleanup() + t.end() +}) + +function stringifyUpdated(time) { + return JSON.stringify({ _updated : String(time) }) +} + +var allMock = { + "_updated": timeMock.all, + "generator-frontcow": { + "name": "generator-frontcow", + "description": "f36b6a6123da50959741e2ce4d634f96ec668c56 This is a fake description to ensure we do not accidentally search the real npm registry or use some kind of cache", + "dist-tags": { + "latest": "0.1.19" + }, + "maintainers": [ + { + "name": "bcabanes", + "email": "contact@benjamincabanes.com" + } + ], + "homepage": "https://github.com/bcabanes/generator-frontcow", + "keywords": [ + "sass", + "frontend", + "yeoman-generator", + "atomic", + "design", + "sass", + "foundation", + "foundation5", + "atomic design", + "bourbon", + "polyfill", + "font awesome" + ], + "repository": { + "type": "git", + "url": "https://github.com/bcabanes/generator-frontcow" + }, + "author": { + "name": "ben", + "email": "contact@benjamincabanes.com", + "url": "https://github.com/bcabanes" + }, + "bugs": { + "url": "https://github.com/bcabanes/generator-frontcow/issues" + }, + "license": "MIT", + "readmeFilename": "README.md", + "time": { + "modified": "2014-10-03T02:26:18.406Z" + }, + "versions": { + "0.1.19": "latest" + } + }, + "marko": { + "name": "marko", + "description": "Marko is an extensible, streaming, asynchronous, high performance, HTML-based templating language that can be used in Node.js or in the browser.", + "dist-tags": { + "latest": "1.2.16" + }, + "maintainers": [ + { + "name": "pnidem", + "email": "pnidem@gmail.com" + }, + { + "name": "philidem", + "email": "phillip.idem@gmail.com" + } + ], + "homepage": "https://github.com/raptorjs/marko", + "keywords": [ + "templating", + "template", + "async", + "streaming" + ], + "repository": { + "type": "git", + "url": "https://github.com/raptorjs/marko.git" + }, + "author": { + "name": "Patrick Steele-Idem", + "email": "pnidem@gmail.com" + }, + "bugs": { + "url": "https://github.com/raptorjs/marko/issues" + }, + "license": "Apache License v2.0", + "readmeFilename": "README.md", + "users": { + "pnidem": true + }, + "time": { + "modified": "2014-10-03T02:27:31.775Z" + }, + "versions": { + "1.2.16": "latest" + } + } +} diff --git a/deps/npm/test/tap/semver-doc.js b/deps/npm/test/tap/semver-doc.js index 5133f465990..963cace101f 100644 --- a/deps/npm/test/tap/semver-doc.js +++ b/deps/npm/test/tap/semver-doc.js @@ -1,11 +1,11 @@ var test = require("tap").test -test("semver doc 
is up to date", function(t) { +test("semver doc is up to date", function (t) { var path = require("path") var moddoc = path.join(__dirname, "../../node_modules/semver/README.md") var mydoc = path.join(__dirname, "../../doc/misc/semver.md") var fs = require("fs") - var mod = fs.readFileSync(moddoc, "utf8").replace(/semver\(1\)/, 'semver(7)') + var mod = fs.readFileSync(moddoc, "utf8").replace(/semver\(1\)/, "semver(7)") var my = fs.readFileSync(mydoc, "utf8") t.equal(my, mod) t.end() diff --git a/deps/npm/test/tap/semver-tag.js b/deps/npm/test/tap/semver-tag.js new file mode 100644 index 00000000000..03dcdf85b64 --- /dev/null +++ b/deps/npm/test/tap/semver-tag.js @@ -0,0 +1,15 @@ +// should not allow tagging with a valid semver range +var common = require("../common-tap.js") +var test = require("tap").test + +test("try to tag with semver range as tag name", function (t) { + var cmd = ["tag", "zzzz@1.2.3", "v2.x", "--registry=http://localhost"] + common.npm(cmd, { + stdio: "pipe" + }, function (er, code, so, se) { + if (er) throw er + t.similar(se, /Tag name must not be a valid SemVer range: v2.x\n/) + t.equal(code, 1) + t.end() + }) +}) diff --git a/deps/npm/test/tap/shrinkwrap-empty-deps.js b/deps/npm/test/tap/shrinkwrap-empty-deps.js index 9ec8e71e0ba..6be67af7443 100644 --- a/deps/npm/test/tap/shrinkwrap-empty-deps.js +++ b/deps/npm/test/tap/shrinkwrap-empty-deps.js @@ -6,7 +6,8 @@ var test = require("tap").test , fs = require("fs") , osenv = require("osenv") , rimraf = require("rimraf") - , pkg = __dirname + "/shrinkwrap-empty-deps" + , pkg = path.resolve(__dirname, "shrinkwrap-empty-deps") + , cache = path.resolve(pkg, "cache") test("returns a list of removed items", function (t) { var desiredResultsPath = path.resolve(pkg, "npm-shrinkwrap.json") @@ -36,7 +37,7 @@ test("returns a list of removed items", function (t) { function setup (cb) { cleanup() process.chdir(pkg) - npm.load({cache: pkg + "/cache", registry: common.registry}, function () { + npm.load({cache: cache, registry: common.registry}, function () { cb() }) } diff --git a/deps/npm/test/tap/sorted-package-json.js b/deps/npm/test/tap/sorted-package-json.js index 41c90855a87..1dec1d2ae12 100644 --- a/deps/npm/test/tap/sorted-package-json.js +++ b/deps/npm/test/tap/sorted-package-json.js @@ -30,11 +30,11 @@ test("sorting dependencies", function (t) { var child = spawn(node, [npm, "install", "--save", "underscore@1.3.3"], { cwd: pkg, env: { - npm_config_registry: common.registry, - npm_config_cache: cache, - npm_config_tmp: tmp, - npm_config_prefix: pkg, - npm_config_global: "false", + "npm_config_registry": common.registry, + "npm_config_cache": cache, + "npm_config_tmp": tmp, + "npm_config_prefix": pkg, + "npm_config_global": "false", HOME: process.env.HOME, Path: process.env.PATH, PATH: process.env.PATH @@ -42,6 +42,7 @@ test("sorting dependencies", function (t) { }) child.on("close", function (code) { + t.equal(code, 0, "npm install exited with code") var result = fs.readFileSync(packageJson).toString() , resultAsJson = JSON.parse(result) @@ -83,7 +84,7 @@ function setup() { "underscore": "^1.3.3", "request": "^0.9.0" } - }, null, 2), 'utf8') + }, null, 2), "utf8") } function cleanup() { diff --git a/deps/npm/test/tap/spawn-enoent.js b/deps/npm/test/tap/spawn-enoent.js index 7ea9dab5606..20fed21bcf4 100644 --- a/deps/npm/test/tap/spawn-enoent.js +++ b/deps/npm/test/tap/spawn-enoent.js @@ -26,7 +26,7 @@ test("enoent script", function (t) { env: { PATH: process.env.PATH, Path: process.env.Path, - npm_config_loglevel: "warn" + 
"npm_config_loglevel": "warn" } }, function (er, code, sout, serr) { t.similar(serr, /npm ERR! Failed at the x@1\.2\.3 start script\./) diff --git a/deps/npm/test/tap/startstop.js b/deps/npm/test/tap/startstop.js index f056aa78929..334551ed295 100644 --- a/deps/npm/test/tap/startstop.js +++ b/deps/npm/test/tap/startstop.js @@ -1,22 +1,18 @@ -var common = require('../common-tap') - , test = require('tap').test - , path = require('path') - , spawn = require('child_process').spawn - , rimraf = require('rimraf') - , mkdirp = require('mkdirp') - , pkg = __dirname + '/startstop' - , cache = pkg + '/cache' - , tmp = pkg + '/tmp' - , node = process.execPath - , npm = path.resolve(__dirname, '../../cli.js') +var common = require("../common-tap") + , test = require("tap").test + , path = require("path") + , rimraf = require("rimraf") + , mkdirp = require("mkdirp") + , pkg = path.resolve(__dirname, "startstop") + , cache = path.resolve(pkg, "cache") + , tmp = path.resolve(pkg, "tmp") , opts = { cwd: pkg } function testOutput (t, command, er, code, stdout, stderr) { - if (er) - throw er + t.notOk(code, "npm " + command + " exited with code 0") if (stderr) - throw new Error('npm ' + command + ' stderr: ' + stderr.toString()) + throw new Error("npm " + command + " stderr: " + stderr.toString()) stdout = stdout.trim().split(/\n|\r/) stdout = stdout[stdout.length - 1] @@ -25,40 +21,40 @@ function testOutput (t, command, er, code, stdout, stderr) { } function cleanup () { - rimraf.sync(pkg + '/cache') - rimraf.sync(pkg + '/tmp') + rimraf.sync(cache) + rimraf.sync(tmp) } -test('setup', function (t) { +test("setup", function (t) { cleanup() - mkdirp.sync(pkg + '/cache') - mkdirp.sync(pkg + '/tmp') + mkdirp.sync(cache) + mkdirp.sync(tmp) t.end() }) -test('npm start', function (t) { - common.npm(['start'], opts, testOutput.bind(null, t, "start")) +test("npm start", function (t) { + common.npm(["start"], opts, testOutput.bind(null, t, "start")) }) -test('npm stop', function (t) { - common.npm(['stop'], opts, testOutput.bind(null, t, "stop")) +test("npm stop", function (t) { + common.npm(["stop"], opts, testOutput.bind(null, t, "stop")) }) -test('npm restart', function (t) { - common.npm(['restart'], opts, function (er, c, stdout, stderr) { +test("npm restart", function (t) { + common.npm(["restart"], opts, function (er, c, stdout) { if (er) throw er - var output = stdout.split('\n').filter(function (val) { + var output = stdout.split("\n").filter(function (val) { return val.match(/^s/) }) - t.same(output.sort(), ['start', 'stop'].sort()) + t.same(output.sort(), ["start", "stop"].sort()) t.end() }) }) -test('cleanup', function (t) { +test("cleanup", function (t) { cleanup() t.end() }) diff --git a/deps/npm/test/tap/test-run-ls.js b/deps/npm/test/tap/test-run-ls.js index 4c869e5e247..252c6e8f931 100644 --- a/deps/npm/test/tap/test-run-ls.js +++ b/deps/npm/test/tap/test-run-ls.js @@ -6,7 +6,7 @@ var testscript = require("../../package.json").scripts.test var tsregexp = testscript.replace(/([\[\.\*\]])/g, "\\$1") test("default", function (t) { - common.npm(["run"], { cwd: cwd }, function (er, code, so, se) { + common.npm(["run"], { cwd: cwd }, function (er, code, so) { if (er) throw er t.notOk(code) t.similar(so, new RegExp("\\n test\\n " + tsregexp + "\\n")) @@ -15,7 +15,7 @@ test("default", function (t) { }) test("parseable", function (t) { - common.npm(["run", "-p"], { cwd: cwd }, function (er, code, so, se) { + common.npm(["run", "-p"], { cwd: cwd }, function (er, code, so) { if (er) throw er t.notOk(code) 
t.similar(so, new RegExp("\\ntest:" + tsregexp + "\\n")) @@ -24,7 +24,7 @@ test("parseable", function (t) { }) test("parseable", function (t) { - common.npm(["run", "--json"], { cwd: cwd }, function (er, code, so, se) { + common.npm(["run", "--json"], { cwd: cwd }, function (er, code, so) { if (er) throw er t.notOk(code) t.equal(JSON.parse(so).test, testscript) diff --git a/deps/npm/test/tap/uninstall-package.js b/deps/npm/test/tap/uninstall-package.js index 06827012501..a0ba4c6c1a5 100644 --- a/deps/npm/test/tap/uninstall-package.js +++ b/deps/npm/test/tap/uninstall-package.js @@ -3,7 +3,8 @@ var test = require("tap").test , rimraf = require("rimraf") , mr = require("npm-registry-mock") , common = require("../common-tap.js") - , pkg = __dirname + "/uninstall-package" + , path = require("path") + , pkg = path.join(__dirname, "uninstall-package") test("returns a list of removed items", function (t) { t.plan(1) diff --git a/deps/npm/test/tap/unpack-foreign-tarball.js b/deps/npm/test/tap/unpack-foreign-tarball.js index a03f9b17f40..d2e2e73c918 100644 --- a/deps/npm/test/tap/unpack-foreign-tarball.js +++ b/deps/npm/test/tap/unpack-foreign-tarball.js @@ -12,8 +12,8 @@ var tmp = path.resolve(dir, "tmp") var pkg = path.resolve(nm, "npm-test-gitignore") var env = { - npm_config_cache: cache, - npm_config_tmp: tmp + "npm_config_cache": cache, + "npm_config_tmp": tmp } var conf = { @@ -22,36 +22,39 @@ var conf = { stdio: [ "pipe", "pipe", 2 ] } +function verify (t, files, err, code) { + if (code) { + t.fail("exited with failure: " + code) + return t.end() + } + var actual = fs.readdirSync(pkg).sort() + var expect = files.concat([".npmignore", "package.json"]).sort() + t.same(actual, expect) + t.end() +} + test("npmignore only", function (t) { setup() var file = path.resolve(dir, "npmignore.tgz") - common.npm(["install", file], conf, function (code, stdout, stderr) { - verify(t, code, ["foo"]) - }) + common.npm(["install", file], conf, verify.bind(null, t, ["foo"])) }) test("gitignore only", function (t) { setup() var file = path.resolve(dir, "gitignore.tgz") - common.npm(["install", file], conf, function (code, stdout, stderr) { - verify(t, code, ["foo"]) - }) + common.npm(["install", file], conf, verify.bind(null, t, ["foo"])) }) test("gitignore and npmignore", function (t) { setup() var file = path.resolve(dir, "gitignore-and-npmignore.tgz") - common.npm(["install", file], conf, function (code, stdout, stderr) { - verify(t, code, ["foo", "bar"]) - }) + common.npm(["install", file], conf, verify.bind(null, t, ["foo", "bar"])) }) test("gitignore and npmignore, not gzipped", function (t) { setup() var file = path.resolve(dir, "gitignore-and-npmignore.tar") - common.npm(["install", file], conf, function (code, stdout, stderr) { - verify(t, code, ["foo", "bar"]) - }) + common.npm(["install", file], conf, verify.bind(null, t, ["foo", "bar"])) }) test("clean", function (t) { @@ -59,17 +62,6 @@ test("clean", function (t) { t.end() }) -function verify (t, code, files) { - if (code) { - t.fail("exited with failure: " + code) - return t.end() - } - var actual = fs.readdirSync(pkg).sort() - var expect = files.concat([".npmignore", "package.json"]).sort() - t.same(actual, expect) - t.end() -} - function setup () { clean() mkdirp.sync(nm) diff --git a/deps/npm/test/tap/update-save.js b/deps/npm/test/tap/update-save.js index 6323ef8515a..5f871b26c8a 100644 --- a/deps/npm/test/tap/update-save.js +++ b/deps/npm/test/tap/update-save.js @@ -3,8 +3,8 @@ var test = require("tap").test var npm = require("../../") var 
mkdirp = require("mkdirp") var rimraf = require("rimraf") -var fs = require('fs') -var path = require('path') +var fs = require("fs") +var path = require("path") var mr = require("npm-registry-mock") var PKG_DIR = path.resolve(__dirname, "update-save") @@ -14,10 +14,10 @@ var MODULES_DIR = path.resolve(PKG_DIR, "node_modules") var EXEC_OPTS = { cwd: PKG_DIR, - stdio: 'ignore', + stdio: "ignore", env: { - npm_config_registry: common.registry, - npm_config_loglevel: 'verbose' + "npm_config_registry": common.registry, + "npm_config_loglevel": "verbose" } } @@ -32,9 +32,9 @@ var DEFAULT_PKG = { } } -var s = undefined // mock server reference +var s // mock server reference -test('setup', function (t) { +test("setup", function (t) { resetPackage() mr(common.port, function (server) { @@ -49,14 +49,14 @@ test('setup', function (t) { test("update regular dependencies only", function (t) { resetPackage() - common.npm(['update', '--save'], EXEC_OPTS, function (err, code) { + common.npm(["update", "--save"], EXEC_OPTS, function (err, code) { t.ifError(err) - t.equal(code, 0) + t.notOk(code, "npm update exited with code 0") - var pkgdata = JSON.parse(fs.readFileSync(PKG, 'utf8')) - t.deepEqual(pkgdata.dependencies, {mkdirp: '^0.3.5'}, 'only dependencies updated') - t.deepEqual(pkgdata.devDependencies, DEFAULT_PKG.devDependencies, 'dev dependencies should be untouched') - t.deepEqual(pkgdata.optionalDependencies, DEFAULT_PKG.optionalDependencies, 'optional dependencies should be untouched') + var pkgdata = JSON.parse(fs.readFileSync(PKG, "utf8")) + t.deepEqual(pkgdata.dependencies, {mkdirp: "^0.3.5"}, "only dependencies updated") + t.deepEqual(pkgdata.devDependencies, DEFAULT_PKG.devDependencies, "dev dependencies should be untouched") + t.deepEqual(pkgdata.optionalDependencies, DEFAULT_PKG.optionalDependencies, "optional dependencies should be untouched") t.end() }) }) @@ -64,14 +64,14 @@ test("update regular dependencies only", function (t) { test("update devDependencies only", function (t) { resetPackage() - common.npm(['update', '--save-dev'], EXEC_OPTS, function (err, code, stdout, stderr) { + common.npm(["update", "--save-dev"], EXEC_OPTS, function (err, code) { t.ifError(err) - t.equal(code, 0) + t.notOk(code, "npm update exited with code 0") - var pkgdata = JSON.parse(fs.readFileSync(PKG, 'utf8')) - t.deepEqual(pkgdata.dependencies, DEFAULT_PKG.dependencies, 'dependencies should be untouched') - t.deepEqual(pkgdata.devDependencies, {underscore: '^1.3.3'}, 'dev dependencies should be updated') - t.deepEqual(pkgdata.optionalDependencies, DEFAULT_PKG.optionalDependencies, 'optional dependencies should be untouched') + var pkgdata = JSON.parse(fs.readFileSync(PKG, "utf8")) + t.deepEqual(pkgdata.dependencies, DEFAULT_PKG.dependencies, "dependencies should be untouched") + t.deepEqual(pkgdata.devDependencies, {underscore: "^1.3.3"}, "dev dependencies should be updated") + t.deepEqual(pkgdata.optionalDependencies, DEFAULT_PKG.optionalDependencies, "optional dependencies should be untouched") t.end() }) }) @@ -83,14 +83,14 @@ test("update optionalDependencies only", function (t) { } }) - common.npm(['update', '--save-optional'], EXEC_OPTS, function (err, code) { + common.npm(["update", "--save-optional"], EXEC_OPTS, function (err, code) { t.ifError(err) - t.equal(code, 0) + t.notOk(code, "npm update exited with code 0") - var pkgdata = JSON.parse(fs.readFileSync(PKG, 'utf8')) - t.deepEqual(pkgdata.dependencies, DEFAULT_PKG.dependencies, 'dependencies should be untouched') - 
t.deepEqual(pkgdata.devDependencies, DEFAULT_PKG.devDependencies, 'dev dependencies should be untouched') - t.deepEqual(pkgdata.optionalDependencies, {underscore: '^1.3.3'}, 'optional dependencies should be updated') + var pkgdata = JSON.parse(fs.readFileSync(PKG, "utf8")) + t.deepEqual(pkgdata.dependencies, DEFAULT_PKG.dependencies, "dependencies should be untouched") + t.deepEqual(pkgdata.devDependencies, DEFAULT_PKG.devDependencies, "dev dependencies should be untouched") + t.deepEqual(pkgdata.optionalDependencies, {underscore: "^1.3.3"}, "optional dependencies should be updated") t.end() }) }) @@ -102,14 +102,14 @@ test("optionalDependencies are merged into dependencies during --save", function } }) - common.npm(['update', '--save'], EXEC_OPTS, function (err, code) { + common.npm(["update", "--save"], EXEC_OPTS, function (err, code) { t.ifError(err) - t.equal(code, 0) + t.notOk(code, "npm update exited with code 0") - var pkgdata = JSON.parse(fs.readFileSync(PKG, 'utf8')) - t.deepEqual(pkgdata.dependencies, {mkdirp: '^0.3.5'}, 'dependencies should not include optional dependencies') - t.deepEqual(pkgdata.devDependencies, pkg.devDependencies, 'dev dependencies should be untouched') - t.deepEqual(pkgdata.optionalDependencies, pkg.optionalDependencies, 'optional dependencies should be untouched') + var pkgdata = JSON.parse(fs.readFileSync(PKG, "utf8")) + t.deepEqual(pkgdata.dependencies, {mkdirp: "^0.3.5"}, "dependencies should not include optional dependencies") + t.deepEqual(pkgdata.devDependencies, pkg.devDependencies, "dev dependencies should be untouched") + t.deepEqual(pkgdata.optionalDependencies, pkg.optionalDependencies, "optional dependencies should be untouched") t.end() }) }) @@ -117,16 +117,16 @@ test("optionalDependencies are merged into dependencies during --save", function test("semver prefix is replaced with configured save-prefix", function (t) { resetPackage() - common.npm(['update', '--save', '--save-prefix', '~'], EXEC_OPTS, function (err, code) { + common.npm(["update", "--save", "--save-prefix", "~"], EXEC_OPTS, function (err, code) { t.ifError(err) - t.equal(code, 0) + t.notOk(code, "npm update exited with code 0") - var pkgdata = JSON.parse(fs.readFileSync(PKG, 'utf8')) + var pkgdata = JSON.parse(fs.readFileSync(PKG, "utf8")) t.deepEqual(pkgdata.dependencies, { - mkdirp: '~0.3.5' - }, 'dependencies should be updated') - t.deepEqual(pkgdata.devDependencies, DEFAULT_PKG.devDependencies, 'dev dependencies should be untouched') - t.deepEqual(pkgdata.optionalDependencies, DEFAULT_PKG.optionalDependencies, 'optional dependencies should be updated') + mkdirp: "~0.3.5" + }, "dependencies should be updated") + t.deepEqual(pkgdata.devDependencies, DEFAULT_PKG.devDependencies, "dev dependencies should be untouched") + t.deepEqual(pkgdata.optionalDependencies, DEFAULT_PKG.optionalDependencies, "optional dependencies should be updated") t.end() }) }) @@ -137,8 +137,8 @@ function resetPackage(extendWith) { mkdirp.sync(CACHE_DIR) var pkg = clone(DEFAULT_PKG) extend(pkg, extendWith) - for (key in extend) { pkg[key] = extend[key]} - fs.writeFileSync(PKG, JSON.stringify(pkg, null, 2), 'ascii') + for (var key in extend) { pkg[key] = extend[key]} + fs.writeFileSync(PKG, JSON.stringify(pkg, null, 2), "ascii") return pkg } @@ -155,6 +155,6 @@ function clone(a) { } function extend(a, b) { - for (key in b) { a[key] = b[key]} + for (var key in b) { a[key] = b[key]} return a } diff --git a/deps/npm/test/tap/url-dependencies.js b/deps/npm/test/tap/url-dependencies.js index 
7f8cc78644e..a77b3d380dd 100644 --- a/deps/npm/test/tap/url-dependencies.js +++ b/deps/npm/test/tap/url-dependencies.js @@ -3,11 +3,8 @@ var rimraf = require("rimraf") var path = require("path") var osenv = require("osenv") var mr = require("npm-registry-mock") -var spawn = require("child_process").spawn -var npm = require.resolve("../../bin/npm-cli.js") -var node = process.execPath var pkg = path.resolve(__dirname, "url-dependencies") -var common = require('../common-tap') +var common = require("../common-tap") var mockRoutes = { "get": { @@ -15,27 +12,27 @@ var mockRoutes = { } } -test("url-dependencies: download first time", function(t) { +test("url-dependencies: download first time", function (t) { cleanup() - performInstall(function(output){ - if(!tarballWasFetched(output)){ + performInstall(t, function (output){ + if (!tarballWasFetched(output)){ t.fail("Tarball was not fetched") - }else{ + } else { t.pass("Tarball was fetched") } t.end() }) }) -test("url-dependencies: do not download subsequent times", function(t) { +test("url-dependencies: do not download subsequent times", function (t) { cleanup() - performInstall(function(){ - performInstall(function(output){ - if(tarballWasFetched(output)){ + performInstall(t, function () { + performInstall(t, function (output) { + if (tarballWasFetched(output)){ t.fail("Tarball was fetched second time around") - }else{ + } else { t.pass("Tarball was not fetched") } t.end() @@ -44,31 +41,28 @@ test("url-dependencies: do not download subsequent times", function(t) { }) function tarballWasFetched(output){ - return output.indexOf("http GET " + common.registry + "/underscore/-/underscore-1.3.1.tgz") > -1 + return output.indexOf("http fetch GET " + common.registry + "/underscore/-/underscore-1.3.1.tgz") > -1 } -function performInstall (cb) { - mr({port: common.port, mocks: mockRoutes}, function(s){ - var output = "" - , child = spawn(node, [npm, "install"], { - cwd: pkg, - env: { - npm_config_registry: common.registry, - npm_config_cache_lock_stale: 1000, - npm_config_cache_lock_wait: 1000, - npm_config_loglevel: "http", - HOME: process.env.HOME, - Path: process.env.PATH, - PATH: process.env.PATH - } - }) - - child.stderr.on("data", function(data){ - output += data.toString() - }) - child.on("close", function () { +function performInstall (t, cb) { + mr({port: common.port, mocks: mockRoutes}, function (s) { + var opts = { + cwd : pkg, + env: { + "npm_config_registry": common.registry, + "npm_config_cache_lock_stale": 1000, + "npm_config_cache_lock_wait": 1000, + "npm_config_loglevel": "http", + HOME: process.env.HOME, + Path: process.env.PATH, + PATH: process.env.PATH + } + } + common.npm(["install"], opts, function (err, code, stdout, stderr) { + t.ifError(err, "install success") + t.notOk(code, "npm install exited with code 0") s.close() - cb(output) + cb(stderr) }) }) } diff --git a/deps/npm/test/tap/version-no-git.js b/deps/npm/test/tap/version-no-git.js new file mode 100644 index 00000000000..e5a5d23467e --- /dev/null +++ b/deps/npm/test/tap/version-no-git.js @@ -0,0 +1,54 @@ +var common = require("../common-tap.js") +var test = require("tap").test +var npm = require("../../") +var osenv = require("osenv") +var path = require("path") +var fs = require("fs") +var mkdirp = require("mkdirp") +var rimraf = require("rimraf") +var requireInject = require("require-inject") + +var pkg = path.resolve(__dirname, "version-no-git") +var cache = path.resolve(pkg, "cache") +var gitDir = path.resolve(pkg, ".git") + +test("npm version <semver> in a git repo 
without the git binary", function(t) { + setup() + npm.load({cache: cache, registry: common.registry}, function() { + var version = requireInject("../../lib/version", { + which: function(cmd, cb) { + process.nextTick(function() { + cb(new Error('ENOGIT!')) + }) + } + }) + + version(["patch"], function(err) { + if (err) return t.fail("Error performing version patch") + var p = path.resolve(pkg, "package") + var testPkg = require(p) + t.equal("0.0.1", testPkg.version, "\"" + testPkg.version+"\" === \"0.0.1\"") + t.end() + }) + }) +}) + +test("cleanup", function(t) { + process.chdir(osenv.tmpdir()) + + rimraf.sync(pkg) + t.end() +}) + +function setup() { + mkdirp.sync(pkg) + mkdirp.sync(cache) + mkdirp.sync(gitDir) + fs.writeFileSync(path.resolve(pkg, "package.json"), JSON.stringify({ + author: "Terin Stock", + name: "version-no-git-test", + version: "0.0.0", + description: "Test for npm version if git binary doesn't exist" + }), "utf8") + process.chdir(pkg) +} diff --git a/deps/npm/test/tap/version-no-tags.js b/deps/npm/test/tap/version-no-tags.js index e731c315443..cb6f195f8ba 100644 --- a/deps/npm/test/tap/version-no-tags.js +++ b/deps/npm/test/tap/version-no-tags.js @@ -1,49 +1,47 @@ -var common = require('../common-tap.js') -var test = require('tap').test -var npm = require('../../') -var npmc = require.resolve('../../') -var osenv = require('osenv') -var path = require('path') -var fs = require('fs') -var rimraf = require('rimraf') -var mkdirp = require('mkdirp') -var which = require('which') -var util = require('util') -var spawn = require('child_process').spawn -var args = [ npmc - , 'version' - , 'patch' - , '--no-git-tag-version' - ] -var pkg = __dirname + '/version-no-tags' +var common = require("../common-tap.js") +var test = require("tap").test +var npm = require("../../") +var osenv = require("osenv") +var path = require("path") +var fs = require("fs") +var rimraf = require("rimraf") +var mkdirp = require("mkdirp") +var which = require("which") +var spawn = require("child_process").spawn + +var pkg = path.resolve(__dirname, "version-no-tags") +var cache = path.resolve(pkg, "cache") test("npm version <semver> without git tag", function (t) { setup() - npm.load({ cache: pkg + '/cache', registry: common.registry}, function () { - which('git', function(err, git) { + npm.load({ cache: cache, registry: common.registry}, function () { + which("git", function (err, git) { + t.ifError(err, "git found on system") function tagExists(tag, _cb) { - var child = spawn(git, ['tag', '-l', tag]) - var out = '' - child.stdout.on('data', function(d) { - out += data.toString() + var child1 = spawn(git, ["tag", "-l", tag]) + var out = "" + child1.stdout.on("data", function (d) { + out += d.toString() }) - child.on('exit', function() { + child1.on("exit", function () { return _cb(null, Boolean(~out.indexOf(tag))) }) } - var child = spawn(git, ['init']) - child.stdout.pipe(process.stdout) - child.on('exit', function() { - npm.config.set('git-tag-version', false) - npm.commands.version(['patch'], function(err) { - if (err) return t.fail('Error perform version patch') - var testPkg = require(pkg+'/package') - if (testPkg.version !== '0.0.1') t.fail(testPkg.version+' !== \'0.0.1\'') - t.ok('0.0.1' === testPkg.version) - tagExists('v0.0.1', function(err, exists) { - t.equal(exists, false, 'git tag DOES exist') - t.pass('git tag does not exist') + var child2 = spawn(git, ["init"]) + child2.stdout.pipe(process.stdout) + child2.on("exit", function () { + npm.config.set("git-tag-version", false) + 
npm.commands.version(["patch"], function (err) { + if (err) return t.fail("Error perform version patch") + var p = path.resolve(pkg, "package") + var testPkg = require(p) + if (testPkg.version !== "0.0.1") t.fail(testPkg.version+" !== \"0.0.1\"") + t.equal("0.0.1", testPkg.version) + tagExists("v0.0.1", function (err, exists) { + t.ifError(err, "tag found to exist") + t.equal(exists, false, "git tag DOES exist") + t.pass("git tag does not exist") t.end() }) }) @@ -52,7 +50,7 @@ test("npm version <semver> without git tag", function (t) { }) }) -test('cleanup', function(t) { +test("cleanup", function (t) { // windows fix for locked files process.chdir(osenv.tmpdir()) @@ -62,12 +60,12 @@ test('cleanup', function(t) { function setup() { mkdirp.sync(pkg) - mkdirp.sync(pkg + '/cache') - fs.writeFileSync(pkg + '/package.json', JSON.stringify({ + mkdirp.sync(cache) + fs.writeFileSync(path.resolve(pkg, "package.json"), JSON.stringify({ author: "Evan Lucas", name: "version-no-tags-test", version: "0.0.0", description: "Test for git-tag-version flag" - }), 'utf8') + }), "utf8") process.chdir(pkg) } diff --git a/deps/npm/test/tap/view.js b/deps/npm/test/tap/view.js new file mode 100644 index 00000000000..c36abfe1f6d --- /dev/null +++ b/deps/npm/test/tap/view.js @@ -0,0 +1,253 @@ +var common = require("../common-tap.js") +var test = require("tap").test +var osenv = require("osenv") +var path = require("path") +var fs = require("fs") +var rimraf = require("rimraf") +var mkdirp = require("mkdirp") +var tmp = osenv.tmpdir() +var t1dir = path.resolve(tmp, "view-local-no-pkg") +var t2dir = path.resolve(tmp, "view-local-notmine") +var t3dir = path.resolve(tmp, "view-local-mine") +var mr = require("npm-registry-mock") + +test("setup", function (t) { + mkdirp.sync(t1dir) + mkdirp.sync(t2dir) + mkdirp.sync(t3dir) + + fs.writeFileSync(t2dir + "/package.json", JSON.stringify({ + author: "Evan Lucas" + , name: "test-repo-url-https" + , version: "0.0.1" + }), "utf8") + + fs.writeFileSync(t3dir + "/package.json", JSON.stringify({ + author: "Evan Lucas" + , name: "biscuits" + , version: "0.0.1" + }), "utf8") + + t.pass("created fixtures") + t.end() +}) + +test("npm view . in global mode", function (t) { + process.chdir(t1dir) + common.npm([ + "view" + , "." + , "--registry=" + common.registry + , "--global" + ], { cwd: t1dir }, function (err, code, stdout, stderr) { + t.ifError(err, "view command finished successfully") + t.equal(code, 1, "exit not ok") + t.similar(stderr, /Cannot use view command in global mode./m) + t.end() + }) +}) + +test("npm view --global", function(t) { + process.chdir(t1dir) + common.npm([ + "view" + , "--registry=" + common.registry + , "--global" + ], { cwd: t1dir }, function(err, code, stdout, stderr) { + t.ifError(err, "view command finished successfully") + t.equal(code, 1, "exit not ok") + t.similar(stderr, /Cannot use view command in global mode./m) + t.end() + }) +}) + +test("npm view . with no package.json", function(t) { + process.chdir(t1dir) + common.npm([ + "view" + , "." + , "--registry=" + common.registry + ], { cwd: t1dir }, function (err, code, stdout, stderr) { + t.ifError(err, "view command finished successfully") + t.equal(code, 1, "exit not ok") + t.similar(stderr, /Invalid package.json/m) + t.end() + }) +}) + +test("npm view . with no published package", function (t) { + process.chdir(t3dir) + mr(common.port, function (s) { + common.npm([ + "view" + , "." 
+ , "--registry=" + common.registry + ], { cwd: t3dir }, function (err, code, stdout, stderr) { + t.ifError(err, "view command finished successfully") + t.equal(code, 1, "exit not ok") + t.similar(stderr, /version not found/m) + s.close() + t.end() + }) + }) +}) + +test("npm view .", function (t) { + process.chdir(t2dir) + mr(common.port, function (s) { + common.npm([ + "view" + , "." + , "--registry=" + common.registry + ], { cwd: t2dir }, function (err, code, stdout) { + t.ifError(err, "view command finished successfully") + t.equal(code, 0, "exit ok") + var re = new RegExp("name: 'test-repo-url-https'") + t.similar(stdout, re) + s.close() + t.end() + }) + }) +}) + +test("npm view . select fields", function (t) { + process.chdir(t2dir) + mr(common.port, function (s) { + common.npm([ + "view" + , "." + , "main" + , "--registry=" + common.registry + ], { cwd: t2dir }, function (err, code, stdout) { + t.ifError(err, "view command finished successfully") + t.equal(code, 0, "exit ok") + t.equal(stdout.trim(), "index.js", "should print `index.js`") + s.close() + t.end() + }) + }) +}) + +test("npm view .@<version>", function (t) { + process.chdir(t2dir) + mr(common.port, function (s) { + common.npm([ + "view" + , ".@0.0.0" + , "version" + , "--registry=" + common.registry + ], { cwd: t2dir }, function (err, code, stdout) { + t.ifError(err, "view command finished successfully") + t.equal(code, 0, "exit ok") + t.equal(stdout.trim(), "0.0.0", "should print `0.0.0`") + s.close() + t.end() + }) + }) +}) + +test("npm view .@<version> --json", function (t) { + process.chdir(t2dir) + mr(common.port, function (s) { + common.npm([ + "view" + , ".@0.0.0" + , "version" + , "--json" + , "--registry=" + common.registry + ], { cwd: t2dir }, function (err, code, stdout) { + t.ifError(err, "view command finished successfully") + t.equal(code, 0, "exit ok") + t.equal(stdout.trim(), "\"0.0.0\"", "should print `\"0.0.0\"`") + s.close() + t.end() + }) + }) +}) + +test("npm view <package name>", function (t) { + mr(common.port, function (s) { + common.npm([ + "view" + , "underscore" + , "--registry=" + common.registry + ], { cwd: t2dir }, function (err, code, stdout) { + t.ifError(err, "view command finished successfully") + t.equal(code, 0, "exit ok") + var re = new RegExp("name: 'underscore'") + t.similar(stdout, re, "should have name `underscore`") + s.close() + t.end() + }) + }) +}) + +test("npm view <package name> --global", function(t) { + mr(common.port, function(s) { + common.npm([ + "view" + , "underscore" + , "--global" + , "--registry=" + common.registry + ], { cwd: t2dir }, function(err, code, stdout) { + t.ifError(err, "view command finished successfully") + t.equal(code, 0, "exit ok") + var re = new RegExp("name: 'underscore'") + t.similar(stdout, re, "should have name `underscore`") + s.close() + t.end() + }) + }) +}) + +test("npm view <package name> --json", function(t) { + t.plan(3) + mr(common.port, function (s) { + common.npm([ + "view" + , "underscore" + , "--json" + , "--registry=" + common.registry + ], { cwd: t2dir }, function (err, code, stdout) { + t.ifError(err, "view command finished successfully") + t.equal(code, 0, "exit ok") + s.close() + try { + var out = JSON.parse(stdout.trim()) + t.similar(out, { + maintainers: "jashkenas <jashkenas@gmail.com>" + }, "should have the same maintainer") + } + catch (er) { + t.fail("Unable to parse JSON") + } + }) + }) +}) + +test("npm view <package name> <field>", function (t) { + mr(common.port, function (s) { + common.npm([ + "view" + , "underscore" 
+ , "homepage" + , "--registry=" + common.registry + ], { cwd: t2dir }, function (err, code, stdout) { + t.ifError(err, "view command finished successfully") + t.equal(code, 0, "exit ok") + t.equal(stdout.trim(), "http://underscorejs.org", + "homepage should equal `http://underscorejs.org`") + s.close() + t.end() + }) + }) +}) + +test("cleanup", function (t) { + process.chdir(osenv.tmpdir()) + rimraf.sync(t1dir) + rimraf.sync(t2dir) + rimraf.sync(t3dir) + t.pass("cleaned up") + t.end() +}) diff --git a/deps/npm/test/tap/whoami.js b/deps/npm/test/tap/whoami.js new file mode 100644 index 00000000000..e4ed30df773 --- /dev/null +++ b/deps/npm/test/tap/whoami.js @@ -0,0 +1,77 @@ +var common = require("../common-tap.js") + +var fs = require("fs") +var path = require("path") +var createServer = require("http").createServer + +var test = require("tap").test +var rimraf = require("rimraf") + +var opts = { cwd: __dirname } + +var FIXTURE_PATH = path.resolve(__dirname, "fixture_npmrc") + +test("npm whoami with basic auth", function (t) { + var s = "//registry.lvh.me/:username = wombat\n" + + "//registry.lvh.me/:_password = YmFkIHBhc3N3b3Jk\n" + + "//registry.lvh.me/:email = lindsay@wdu.org.au\n" + fs.writeFileSync(FIXTURE_PATH, s, "ascii") + fs.chmodSync(FIXTURE_PATH, "0444") + + common.npm( + [ + "whoami", + "--userconfig=" + FIXTURE_PATH, + "--registry=http://registry.lvh.me/" + ], + opts, + function (err, code, stdout, stderr) { + t.ifError(err) + + t.equal(stderr, "", "got nothing on stderr") + t.equal(code, 0, "exit ok") + t.equal(stdout, "wombat\n", "got username") + rimraf.sync(FIXTURE_PATH) + t.end() + } + ) +}) + +test("npm whoami with bearer auth", {timeout : 2 * 1000}, function (t) { + var s = "//localhost:" + common.port + + "/:_authToken = wombat-developers-union\n" + fs.writeFileSync(FIXTURE_PATH, s, "ascii") + fs.chmodSync(FIXTURE_PATH, "0444") + + function verify(req, res) { + t.equal(req.method, "GET") + t.equal(req.url, "/whoami") + + res.setHeader("content-type", "application/json") + res.writeHeader(200) + res.end(JSON.stringify({username : "wombat"}), "utf8") + } + + var server = createServer(verify) + + server.listen(common.port, function () { + common.npm( + [ + "whoami", + "--userconfig=" + FIXTURE_PATH, + "--registry=http://localhost:" + common.port + "/" + ], + opts, + function (err, code, stdout, stderr) { + t.ifError(err) + + t.equal(stderr, "", "got nothing on stderr") + t.equal(code, 0, "exit ok") + t.equal(stdout, "wombat\n", "got username") + rimraf.sync(FIXTURE_PATH) + server.close() + t.end() + } + ) + }) +}) diff --git a/deps/npm/test/tap/zz-cleanup.js b/deps/npm/test/tap/zz-cleanup.js new file mode 100644 index 00000000000..7167537e060 --- /dev/null +++ b/deps/npm/test/tap/zz-cleanup.js @@ -0,0 +1,15 @@ +var common = require("../common-tap") +var test = require("tap").test +var fs = require("fs") + +test("cleanup", function (t) { + var res = common.deleteNpmCacheRecursivelySync() + t.equal(res, 0, "Deleted test npm cache successfully") + + // ensure cache is clean + fs.readdir(common.npm_config_cache, function (err) { + t.ok(err, "error expected") + t.equal(err.code, "ENOENT", "npm cache directory no longer exists") + t.end() + }) +}) diff --git a/deps/uv/.gitignore b/deps/uv/.gitignore index a2e2558115b..14a174adf63 100644 --- a/deps/uv/.gitignore +++ b/deps/uv/.gitignore @@ -61,3 +61,9 @@ UpgradeLog*.XML Debug Release ipch + +# sphinx generated files +/docs/build/ + +*.xcodeproj +*.xcworkspace diff --git a/deps/uv/.mailmap b/deps/uv/.mailmap index 
2ca07c83813..34f5e4daf35 100644 --- a/deps/uv/.mailmap +++ b/deps/uv/.mailmap @@ -14,11 +14,13 @@ Isaac Z. Schlueter <i@izs.me> Justin Venus <justin.venus@gmail.com> <justin.venus@orbitz.com> Keno Fischer <kenof@stanford.edu> <kfischer+github@college.harvard.edu> Keno Fischer <kenof@stanford.edu> <kfischer@college.harvard.edu> +Leonard Hecker <leonard.hecker91@gmail.com> <leonard@hecker.io> Maciej Małecki <maciej.malecki@notimplemented.org> <me@mmalecki.com> Marc Schlaich <marc.schlaich@googlemail.com> <marc.schlaich@gmail.com> Rasmus Christian Pedersen <ruysch@outlook.com> Rasmus Christian Pedersen <ruysch@outlook.com> Rasmus Christian Pedersen <ruysch@outlook.com> +Rasmus Christian Pedersen <ruysch@outlook.com> Rasmus Christian Pedersen <zerhacken@yahoo.com> <ruysch@outlook.com> Rasmus Pedersen <ruysch@outlook.com> <zerhacken@yahoo.com> Robert Mustacchi <rm@joyent.com> <rm@fingolfin.org> diff --git a/deps/uv/AUTHORS b/deps/uv/AUTHORS index 19f911f1131..d4c18cf532f 100644 --- a/deps/uv/AUTHORS +++ b/deps/uv/AUTHORS @@ -86,9 +86,7 @@ Nicholas Vavilov <vvnicholas@gmail.com> Miroslav Bajtoš <miro.bajtos@gmail.com> Sean Silva <chisophugis@gmail.com> Wynn Wilkes <wynnw@movenetworks.com> -Linus Mårtensson <linus.martensson@sonymobile.com> Andrei Sedoi <bsnote@gmail.com> -Navaneeth Kedaram Nambiathan <navaneethkn@gmail.com> Alex Crichton <alex@alexcrichton.com> Brent Cook <brent@boundary.com> Brian Kaisner <bkize1@gmail.com> @@ -110,7 +108,6 @@ Yazhong Liu <yorkiefixer@gmail.com> Sam Roberts <vieuxtech@gmail.com> River Tarnell <river@loreley.flyingparchment.org.uk> Nathan Sweet <nathanjsweet@gmail.com> -Luca Bruno <lucab@debian.org> Trevor Norris <trev.norris@gmail.com> Oguz Bastemur <obastemur@gmail.com> Dylan Cali <calid1984@gmail.com> @@ -155,3 +152,24 @@ Pavel Platto <hinidu@gmail.com> Tony Kelman <tony@kelman.net> John Firebaugh <john.firebaugh@gmail.com> lilohuang <lilohuang@hotmail.com> +Paul Goldsmith <paul.goldsmith@aplink.net> +Julien Gilli <julien.gilli@joyent.com> +Michael Hudson-Doyle <michael.hudson@linaro.org> +Recep ASLANTAS <m@recp.me> +Rob Adams <readams@readams.net> +Zachary Newman <znewman01@gmail.com> +Robin Hahling <robin.hahling@gw-computing.net> +Jeff Widman <jeff@jeffwidman.com> +cjihrig <cjihrig@gmail.com> +Tomasz Kołodziejski <tkolodziejski@mozilla.com> +Unknown W. Brackets <checkins@unknownbrackets.org> +Emmanuel Odeke <odeke@ualberta.ca> +Mikhail Mukovnikov <yndi@me.com> +Thorsten Lorenz <thlorenz@gmx.de> +Yuri D'Elia <yuri.delia@eurac.edu> +Manos Nikolaidis <manos@shadowrobot.com> +Elijah Andrews <elijah@busbud.com> +Michael Ira Krufky <m.krufky@samsung.com> +Helge Deller <deller@gmx.de> +Joey Geralnik <jgeralnik@gmail.com> +Tim Caswell <tim@creationix.com> diff --git a/deps/uv/CONTRIBUTING.md b/deps/uv/CONTRIBUTING.md index 28a32baaea5..332ed1129b8 100644 --- a/deps/uv/CONTRIBUTING.md +++ b/deps/uv/CONTRIBUTING.md @@ -6,13 +6,13 @@ through the process. ### FORK -Fork the project [on GitHub](https://github.com/joyent/libuv) and check out +Fork the project [on GitHub](https://github.com/libuv/libuv) and check out your copy. ``` $ git clone https://github.com/username/libuv.git $ cd libuv -$ git remote add upstream https://github.com/joyent/libuv.git +$ git remote add upstream https://github.com/libuv/libuv.git ``` Now decide if you want your feature or bug fix to go into the master branch @@ -37,10 +37,10 @@ Okay, so you have decided on the proper branch. 
Create a feature branch and start hacking: ``` -$ git checkout -b my-feature-branch -t origin/v0.10 +$ git checkout -b my-feature-branch -t origin/v1.x ``` -(Where v0.10 is the latest stable branch as of this writing.) +(Where v1.x is the latest stable branch as of this writing.) ### CODE @@ -131,7 +131,7 @@ Use `git rebase` (not `git merge`) to sync your work from time to time. ``` $ git fetch upstream -$ git rebase upstream/v0.10 # or upstream/master +$ git rebase upstream/v1.x # or upstream/master ``` @@ -160,7 +160,7 @@ feature branch. Post a comment in the pull request afterwards; GitHub does not send out notifications when you add commits. -[issue tracker]: https://github.com/joyent/libuv/issues +[issue tracker]: https://github.com/libuv/libuv/issues [libuv mailing list]: http://groups.google.com/group/libuv [IRC]: http://webchat.freenode.net/?channels=libuv [Google C/C++ style guide]: http://google-styleguide.googlecode.com/svn/trunk/cppguide.xml diff --git a/deps/uv/ChangeLog b/deps/uv/ChangeLog index db13f188c67..e2169998429 100644 --- a/deps/uv/ChangeLog +++ b/deps/uv/ChangeLog @@ -1,4 +1,265 @@ -2014.08.08, Version 0.11.28 (Unstable) +2014.12.10, Version 1.0.2 (Stable), eec671f0059953505f9a3c9aeb7f9f31466dd7cd + +Changes since version 1.0.1: + +* linux: fix sigmask size arg in epoll_pwait() call (Ben Noordhuis) + +* linux: handle O_NONBLOCK != SOCK_NONBLOCK case (Helge Deller) + +* doc: fix spelling (Joey Geralnik) + +* unix, windows: fix typos in comments (Joey Geralnik) + +* test: canonicalize test runner path (Ben Noordhuis) + +* test: fix compilation warnings (Saúl Ibarra Corretgé) + +* test: skip tty test if detected width and height are 0 (Saúl Ibarra Corretgé) + +* doc: update README with IRC channel (Saúl Ibarra Corretgé) + +* Revert "unix: use cfmakeraw() for setting raw TTY mode" (Ben Noordhuis) + +* doc: document how to get result of uv_fs_mkdtemp (Tim Caswell) + +* unix: add flag for blocking SIGPROF during poll (Ben Noordhuis) + +* unix, windows: add uv_loop_configure() function (Ben Noordhuis) + +* win: keep a reference to AFD_POLL_INFO in cancel poll (Marc Schlaich) + +* test: raise fd limit for OSX select test (Saúl Ibarra Corretgé) + +* unix: remove overzealous assert in uv_read_stop (Saúl Ibarra Corretgé) + +* unix: reset the reading flag when a stream gets EOF (Saúl Ibarra Corretgé) + +* unix: stop reading if an error is produced (Saúl Ibarra Corretgé) + +* cleanup: remove all dead assignments (Maciej Małecki) + +* linux: return early if we have no interfaces (Maciej Małecki) + +* cleanup: remove a dead increment (Maciej Małecki) + + +2014.12.10, Version 0.10.30 (Stable), 5a63f5e9546dca482eeebc3054139b21f509f21f + +Changes since version 0.10.29: + +* linux: fix sigmask size arg in epoll_pwait() call (Ben Noordhuis) + +* linux: handle O_NONBLOCK != SOCK_NONBLOCK case (Helge Deller) + +* doc: update project links (Ben Noordhuis) + +* windows: fix compilation of tests (Marc Schlaich) + +* unix: add flag for blocking SIGPROF during poll (Ben Noordhuis) + +* unix, windows: add uv_loop_configure() function (Ben Noordhuis) + +* win: keep a reference to AFD_POLL_INFO in cancel poll (Marc Schlaich) + + +2014.11.27, Version 1.0.1 (Stable), 0a8e81374e861d425b56c45c8599595d848911d2 + +Changes since version 1.0.0: + +* readme: remove Rust from users (Elijah Andrews) + +* doc,build,include: update project links (Ben Noordhuis) + +* doc: fix typo: Strcutures -> Structures (Michael Ira Krufky) + +* unix: fix processing process handles queue (Saúl Ibarra Corretgé) + +* win:
replace non-ansi characters in source file (Bert Belder) + + +2014.11.21, Version 1.0.0 (Stable), feb2a9e6947d892f449b2770c4090f7d8c88381b + +Changes since version 1.0.0-rc2: + +* doc: fix git/svn url for gyp repo in README (Emmanuel Odeke) + +* windows: fix fs_read with nbufs > 1 and offset (Unknown W. Brackets) + +* win: add missing IP_ADAPTER_UNICAST_ADDRESS_LH definition for MinGW + (huxingyi) + +* doc: mention homebrew in README (Mikhail Mukovnikov) + +* doc: add learnuv workshop to README (Thorsten Lorenz) + +* doc: fix parameter name in uv_fs_access (Saúl Ibarra Corretgé) + +* unix: use cfmakeraw() for setting raw TTY mode (Yuri D'Elia) + +* win: fix uv_thread_self() (Alexis Campailla) + +* build: add x32 support to gyp build (Ben Noordhuis) + +* build: remove dtrace probes (Ben Noordhuis) + +* doc: fix link in misc.rst (Manos Nikolaidis) + +* mailmap: remove duplicated entries (Saúl Ibarra Corretgé) + +* gyp: fix comment regarding version info location (Saúl Ibarra Corretgé) + + +2014.10.21, Version 1.0.0-rc2 (Pre-release) + +Changes since version 1.0.0-rc1: + +* build: add missing fixtures to distribution tarball (Rob Adams) + +* doc: update references to current stable branch (Zachary Newman) + +* fs: fix readdir on empty directory (Fedor Indutny) + +* fs: rename uv_fs_readdir to uv_fs_scandir (Saúl Ibarra Corretgé) + +* doc: document uv_alloc_cb (Saúl Ibarra Corretgé) + +* doc: add migration guide from version 0.10 (Saúl Ibarra Corretgé) + +* build: add DragonFly BSD support in autotools (Robin Hahling) + +* doc: document missing stream related structures (Saúl Ibarra Corretgé) + +* doc: clarify uv_loop_t.data field lifetime (Saúl Ibarra Corretgé) + +* doc: add documentation for missing functions and structures (Saúl Ibarra + Corretgé) + +* doc: fix punctuation and grammar in README (Jeff Widman) + +* windows: return libuv error codes in uv_poll_init() (cjihrig) + +* unix, windows: add uv_fs_access() (cjihrig) + +* windows: fix netmask detection (Alexis Campailla) + +* unix, windows: don't include null byte in uv_cwd size (Saúl Ibarra Corretgé) + +* unix, windows: add uv_thread_equal (Tomasz Kołodziejski) + +* windows: fix fs_write with nbufs > 1 and offset (Unknown W. 
Brackets) + + +2014.10.21, Version 0.10.29 (Stable), 2d728542d3790183417f8f122a110693cd85db14 + +Changes since version 0.10.28: + +* darwin: allocate enough space for select() hack (Fedor Indutny) + +* linux: try epoll_pwait if epoll_wait is missing (Michael Hudson-Doyle) + +* windows: map ERROR_INVALID_DRIVE to UV_ENOENT (Saúl Ibarra Corretgé) + + +2014.09.18, Version 1.0.0-rc1 (Unstable), 0c28bbf7b42882853d1799ab96ff68b07f7f8d49 + +Changes since version 0.11.29: + +* windows: improve timer precision (Alexis Campailla) + +* build, gyp: set xcode flags (Recep ASLANTAS) + +* ignore: include m4 files which are created manually (Recep ASLANTAS) + +* build: add m4 for feature/flag-testing (Recep ASLANTAS) + +* ignore: ignore Xcode project and workspace files (Recep ASLANTAS) + +* unix: fix warnings about dollar symbol usage in identifiers (Recep ASLANTAS) + +* unix: fix warnings when loading functions with dlsym (Recep ASLANTAS) + +* linux: try epoll_pwait if epoll_wait is missing (Michael Hudson-Doyle) + +* test: add test for closing and recreating default loop (Saúl Ibarra Corretgé) + +* windows: properly close the default loop (Saúl Ibarra Corretgé) + +* version: add ability to specify a version suffix (Saúl Ibarra Corretgé) + +* doc: add API documentation (Saúl Ibarra Corretgé) + +* test: don't close connection on write error (Trevor Norris) + +* windows: further simplify the code for timers (Saúl Ibarra Corretgé) + +* gyp: remove UNLIMITED_SELECT from dependent define (Fedor Indutny) + +* darwin: allocate enough space for select() hack (Fedor Indutny) + +* unix, windows: don't allow a NULL callback on timers (Saúl Ibarra Corretgé) + +* windows: simplify code in uv_timer_again (Saúl Ibarra Corretgé) + +* test: use less requests on tcp-write-queue-order (Saúl Ibarra Corretgé) + +* unix: stop child process watcher after last one exits (Saúl Ibarra Corretgé) + +* unix: simplify how process handle queue is managed (Saúl Ibarra Corretgé) + +* windows: remove duplicated field (mattn) + +* core: add a reserved field to uv_handle_t and uv_req_t (Saúl Ibarra Corretgé) + +* windows: fix buffer leak after failed udp send (Bert Belder) + +* windows: make sure sockets and handles are reset on close (Saúl Ibarra Corretgé) + +* unix, windows: add uv_fileno (Saúl Ibarra Corretgé) + +* build: use same CFLAGS in autotools build as in gyp (Saúl Ibarra Corretgé) + +* build: remove unneeded define in uv.gyp (Saúl Ibarra Corretgé) + +* test: fix watcher_cross_stop on Windows (Saúl Ibarra Corretgé) + +* unix, windows: move includes for EAI constants (Saúl Ibarra Corretgé) + +* unix: fix exposing EAI_* glibc-isms (Saúl Ibarra Corretgé) + +* unix: fix tcp write after bad connect freezing (Andrius Bentkus) + + +2014.08.20, Version 0.11.29 (Unstable), 35451fed830807095bbae8ef981af004a4b9259e + +Changes since version 0.11.28: + +* windows: make uv_read_stop immediately stop reading (Jameson Nash) + +* windows: fix uv__getaddrinfo_translate_error (Alexis Campailla) + +* netbsd: fix build (Saúl Ibarra Corretgé) + +* unix, windows: add uv_recv_buffer_size and uv_send_buffer_size (Andrius + Bentkus) + +* windows: add support for UNC paths on uv_spawn (Paul Goldsmith) + +* windows: replace use of inet_addr with uv_inet_pton (Saúl Ibarra Corretgé) + +* unix: replace some asserts with returning errors (Andrius Bentkus) + +* windows: use OpenBSD implementation for uv_fs_mkdtemp (Pavel Platto) + +* windows: fix GetNameInfoW error handling (Alexis Campailla) + +* fs: introduce uv_readdir_next() and report types (Fedor Indutny) 
+ +* fs: extend reported types in uv_fs_readdir_next (Saúl Ibarra Corretgé) + +* unix: read on stream even when UV__POLLHUP set. (Julien Gilli) + + +2014.08.08, Version 0.11.28 (Unstable), fc9e2a0bc487b299c0cd3b2c9a23aeb554b5d8d1 Changes since version 0.11.27: @@ -87,6 +348,20 @@ Changes since version 0.11.26: * windows: relay TCP bind errors via ipc (Alexis Campailla) +2014.07.32, Version 0.10.28 (Stable), 9c14b616f5fb84bfd7d45707bab4bbb85894443e + +Changes since version 0.10.27: + +* windows: fix handling closed socket while poll handle is closing (Saúl Ibarra + Corretgé) + +* unix: return system error on EAI_SYSTEM (Saúl Ibarra Corretgé) + +* unix: fix bogus structure field name (Saúl Ibarra Corretgé) + +* darwin: invoke `mach_timebase_info` only once (Fedor Indutny) + + 2014.06.28, Version 0.11.26 (Unstable), 115281a1058c4034d5c5ccedacb667fe3f6327ea Changes since version 0.11.25: diff --git a/deps/uv/Makefile.am b/deps/uv/Makefile.am index 861b632bbf4..371df711d65 100644 --- a/deps/uv/Makefile.am +++ b/deps/uv/Makefile.am @@ -23,7 +23,7 @@ CLEANFILES = lib_LTLIBRARIES = libuv.la libuv_la_CFLAGS = @CFLAGS@ -libuv_la_LDFLAGS = -no-undefined -version-info 11:0:0 +libuv_la_LDFLAGS = -no-undefined -version-info 1:0:0 libuv_la_SOURCES = src/fs-poll.c \ src/heap-inl.h \ src/inet.c \ @@ -81,6 +81,7 @@ else # WINNT include_HEADERS += include/uv-unix.h AM_CPPFLAGS += -I$(top_srcdir)/src/unix +libuv_la_CFLAGS += -g --std=gnu89 -pedantic -Wall -Wextra -Wno-unused-parameter libuv_la_SOURCES += src/unix/async.c \ src/unix/atomic-ops.h \ src/unix/core.c \ @@ -105,6 +106,9 @@ libuv_la_SOURCES += src/unix/async.c \ endif # WINNT +EXTRA_DIST = test/fixtures/empty_file \ + test/fixtures/load_error.node + TESTS = test/run-tests check_PROGRAMS = test/run-tests test_run_tests_CFLAGS = @@ -126,6 +130,7 @@ test_run_tests_SOURCES = test/blackhole-server.c \ test/test-condvar.c \ test/test-connection-fail.c \ test/test-cwd-and-chdir.c \ + test/test-default-loop-close.c \ test/test-delayed-accept.c \ test/test-dlerror.c \ test/test-embed.c \ @@ -141,6 +146,7 @@ test_run_tests_SOURCES = test/blackhole-server.c \ test/test-getaddrinfo.c \ test/test-getnameinfo.c \ test/test-getsockname.c \ + test/test-handle-fileno.c \ test/test-hrtime.c \ test/test-idle.c \ test/test-ip4-addr.c \ @@ -163,6 +169,7 @@ test_run_tests_SOURCES = test/blackhole-server.c \ test/test-pipe-getsockname.c \ test/test-pipe-sendmsg.c \ test/test-pipe-server-close.c \ + test/test-pipe-close-stdout-read-stdin.c \ test/test-platform-output.c \ test/test-poll-close.c \ test/test-poll-closesocket.c \ @@ -177,6 +184,7 @@ test_run_tests_SOURCES = test/blackhole-server.c \ test/test-shutdown-twice.c \ test/test-signal-multiple-loops.c \ test/test-signal.c \ + test/test-socket-buffer-size.c \ test/test-spawn.c \ test/test-stdio-over-pipes.c \ test/test-tcp-bind-error.c \ @@ -194,9 +202,11 @@ test_run_tests_SOURCES = test/blackhole-server.c \ test/test-tcp-shutdown-after-write.c \ test/test-tcp-unexpected-read.c \ test/test-tcp-write-to-half-open-connection.c \ + test/test-tcp-write-after-connect.c \ test/test-tcp-writealot.c \ test/test-tcp-try-write.c \ test/test-tcp-write-queue-order.c \ + test/test-thread-equal.c \ test/test-thread.c \ test/test-threadpool-cancel.c \ test/test-threadpool.c \ @@ -216,6 +226,7 @@ test_run_tests_SOURCES = test/blackhole-server.c \ test/test-udp-options.c \ test/test-udp-send-and-recv.c \ test/test-udp-send-immediate.c \ + test/test-udp-send-unreachable.c \ test/test-udp-try-send.c \ test/test-walk-handles.c \ 
test/test-watcher-cross-stop.c @@ -253,6 +264,7 @@ endif if DARWIN include_HEADERS += include/uv-darwin.h libuv_la_CFLAGS += -D_DARWIN_USE_64_BIT_INODE=1 +libuv_la_CFLAGS += -D_DARWIN_UNLIMITED_SELECT=1 libuv_la_SOURCES += src/unix/darwin.c \ src/unix/darwin-proctitle.c \ src/unix/fsevents.c \ @@ -260,6 +272,11 @@ libuv_la_SOURCES += src/unix/darwin.c \ src/unix/proctitle.c endif +if DRAGONFLY +include_HEADERS += include/uv-bsd.h +libuv_la_SOURCES += src/unix/kqueue.c src/unix/freebsd.c +endif + if FREEBSD include_HEADERS += include/uv-bsd.h libuv_la_SOURCES += src/unix/freebsd.c src/unix/kqueue.c @@ -290,46 +307,7 @@ libuv_la_CFLAGS += -D__EXTENSIONS__ -D_XOPEN_SOURCE=500 libuv_la_SOURCES += src/unix/sunos.c endif -if HAVE_DTRACE -BUILT_SOURCES = include/uv-dtrace.h -CLEANFILES += include/uv-dtrace.h -if FREEBSD -libuv_la_LDFLAGS += -lelf -endif -endif - -if DTRACE_NEEDS_OBJECTS -libuv_la_SOURCES += src/unix/uv-dtrace.d -libuv_la_DEPENDENCIES = src/unix/uv-dtrace.o -libuv_la_LIBADD = uv-dtrace.lo -CLEANFILES += src/unix/uv-dtrace.o src/unix/uv-dtrace.lo -endif - if HAVE_PKG_CONFIG pkgconfigdir = $(libdir)/pkgconfig pkgconfig_DATA = @PACKAGE_NAME@.pc endif - -if HAVE_DTRACE -include/uv-dtrace.h: src/unix/uv-dtrace.d - $(AM_V_GEN)$(DTRACE) $(DTRACEFLAGS) -h -xnolibs -s $< -o $(top_srcdir)/$@ -endif - -if DTRACE_NEEDS_OBJECTS -SUFFIXES = .d - -src/unix/uv-dtrace.o: src/unix/uv-dtrace.d ${libuv_la_OBJECTS} - -# It's ok to specify the output here, because we have 1 .d file, and we process -# every created .o, most projects don't need to include more than one .d -.d.o: - $(AM_V_GEN)$(DTRACE) $(DTRACEFLAGS) -G -o $(top_builddir)/uv-dtrace.o -s $< \ - `find ${top_builddir}/src -name "*.o"` - $(AM_V_GEN)printf %s\\n \ - '# ${top_builddir}/uv-dtrace.lo - a libtool object file' \ - '# Generated by libtool (GNU libtool) 2.4' \ - '# libtool wants a .lo not a .o' \ - "pic_object='uv-dtrace.o'" \ - "non_pic_object='uv-dtrace.o'" \ - > ${top_builddir}/uv-dtrace.lo -endif diff --git a/deps/uv/README.md b/deps/uv/README.md index 364cf695c41..a267f0d5b52 100644 --- a/deps/uv/README.md +++ b/deps/uv/README.md @@ -4,9 +4,8 @@ libuv is a multi-platform support library with a focus on asynchronous I/O. It was primarily developed for use by [Node.js](http://nodejs.org), but it's also -used by Mozilla's [Rust language](http://www.rust-lang.org/), -[Luvit](http://luvit.io/), [Julia](http://julialang.org/), -[pyuv](https://github.com/saghul/pyuv), and [others](https://github.com/joyent/libuv/wiki/Projects-that-use-libuv). +used by [Luvit](http://luvit.io/), [Julia](http://julialang.org/), +[pyuv](https://github.com/saghul/pyuv), and [others](https://github.com/libuv/libuv/wiki/Projects-that-use-libuv). ## Feature highlights @@ -34,27 +33,61 @@ used by Mozilla's [Rust language](http://www.rust-lang.org/), * Threading and synchronization primitives +## Versioning + +Starting with version 1.0.0 libuv follows the [semantic versioning](http://semver.org/) +scheme. The API change and backwards compatibility rules are those indicated by +SemVer. libuv will keep a stable ABI across major releases. ## Community * [Mailing list](http://groups.google.com/group/libuv) + * [IRC chatroom (#libuv@irc.freenode.org)](http://webchat.freenode.net?channels=libuv&uio=d4) ## Documentation - * [include/uv.h](https://github.com/joyent/libuv/blob/master/include/uv.h) - — API documentation in the form of detailed header comments. +### Official API documentation + +Located in the docs/ subdirectory. 
It uses the [Sphinx](http://sphinx-doc.org/) +framework, which makes it possible to build the documentation in multiple +formats. + +Show the different supported build options: + + $ make help + +Build documentation as HTML: + + $ make html + +Build documentation as man pages: + + $ make man + +Build documentation as ePub: + + $ make epub + +NOTE: Windows users need to use make.bat instead of plain 'make'. + +Documentation can be browsed online [here](http://docs.libuv.org). + +### Other resources + * [An Introduction to libuv](http://nikhilm.github.com/uvbook/) — An overview of libuv with tutorials. * [LXJS 2012 talk](http://www.youtube.com/watch?v=nGn60vDSxQ4) — High-level introductory talk about libuv. - * [Tests and benchmarks](https://github.com/joyent/libuv/tree/master/test) + * [Tests and benchmarks](https://github.com/libuv/libuv/tree/master/test) — API specification and usage examples. * [libuv-dox](https://github.com/thlorenz/libuv-dox) — Documenting types and methods of libuv, mostly by reading uv.h. + * [learnuv](https://github.com/thlorenz/learnuv) + — Learn uv for fun and profit, a self-guided workshop to libuv. ## Build Instructions -For GCC there are two methods building: via autotools or via [GYP][]. +For GCC there are two build methods: via autotools or via [GYP][]. GYP is a meta-build system which can generate MSVS, Makefile, and XCode backends. It is best used for integration into other projects. @@ -69,7 +102,7 @@ To build with autotools: ### Windows First, [Python][] 2.6 or 2.7 must be installed as it is required by [GYP][]. -If python is not in your path set the environment variable `PYTHON` to its +If python is not in your path, set the environment variable `PYTHON` to its location. For example: `set PYTHON=C:\Python27\python.exe` To build with Visual Studio, launch a git shell (e.g. Cmd or PowerShell) @@ -79,8 +112,9 @@ generate uv.sln as well as related project files. To have GYP generate a build script for another system, check out GYP into the project tree manually: - $ mkdir -p build - $ git clone https://git.chromium.org/external/gyp.git build/gyp + $ git clone https://chromium.googlesource.com/external/gyp.git build/gyp + OR + $ svn co http://gyp.googlecode.com/svn/trunk build/gyp ### Unix @@ -89,6 +123,8 @@ Run: $ ./gyp_uv.py -f make $ make -C out +Run `./gyp_uv.py -f make -Dtarget_arch=x32` to build [x32][] binaries. + ### OS X Run: @@ -97,6 +133,10 @@ Run: $ xcodebuild -ARCHS="x86_64" -project uv.xcodeproj \ -configuration Release -target All +Using Homebrew: + + $ brew install --HEAD libuv + Note to OS X users: Make sure that you specify the architecture you wish to build for in the @@ -142,5 +182,5 @@ See the [guidelines for contributing][].
[GYP]: http://code.google.com/p/gyp/ [Python]: https://www.python.org/downloads/ [Visual Studio Express 2010]: http://www.microsoft.com/visualstudio/eng/products/visual-studio-2010-express -[guidelines for contributing]: https://github.com/joyent/libuv/blob/master/CONTRIBUTING.md -[libuv_banner]: https://raw.githubusercontent.com/joyent/libuv/master/img/banner.png +[guidelines for contributing]: https://github.com/libuv/libuv/blob/master/CONTRIBUTING.md +[libuv_banner]: https://raw.githubusercontent.com/libuv/libuv/master/img/banner.png diff --git a/deps/uv/common.gypi b/deps/uv/common.gypi index a0e0eea06fe..a8e2ef44c61 100644 --- a/deps/uv/common.gypi +++ b/deps/uv/common.gypi @@ -143,6 +143,10 @@ 'cflags': [ '-m32' ], 'ldflags': [ '-m32' ], }], + [ 'target_arch=="x32"', { + 'cflags': [ '-mx32' ], + 'ldflags': [ '-mx32' ], + }], [ 'OS=="linux"', { 'cflags': [ '-ansi' ], }], diff --git a/deps/uv/configure.ac b/deps/uv/configure.ac index ac789524b56..6ae53cc9164 100644 --- a/deps/uv/configure.ac +++ b/deps/uv/configure.ac @@ -13,16 +13,18 @@ # OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE. AC_PREREQ(2.57) -AC_INIT([libuv], [0.11.28], [https://github.com/joyent/libuv/issues]) +AC_INIT([libuv], [1.0.2], [https://github.com/libuv/libuv/issues]) AC_CONFIG_MACRO_DIR([m4]) m4_include([m4/libuv-extra-automake-flags.m4]) m4_include([m4/as_case.m4]) +m4_include([m4/libuv-check-flags.m4]) AM_INIT_AUTOMAKE([-Wall -Werror foreign subdir-objects] UV_EXTRA_AUTOMAKE_FLAGS) AC_CANONICAL_HOST AC_ENABLE_SHARED AC_ENABLE_STATIC AC_PROG_CC AM_PROG_CC_C_O +CC_CHECK_CFLAGS_APPEND([-Wno-dollar-in-identifier-extension]) # AM_PROG_AR is not available in automake v1.11 but it's essential in v1.12. m4_ifdef([AM_PROG_AR], [AM_PROG_AR]) m4_ifdef([AM_SILENT_RULES], [AM_SILENT_RULES([yes])]) @@ -38,16 +40,16 @@ AC_CHECK_LIB([rt], [clock_gettime]) AC_CHECK_LIB([sendfile], [sendfile]) AC_CHECK_LIB([socket], [socket]) AC_SYS_LARGEFILE -AM_CONDITIONAL([AIX], [AS_CASE([$host_os],[aix*], [true], [false])]) -AM_CONDITIONAL([ANDROID],[AS_CASE([$host_os],[linux-android*],[true], [false])]) -AM_CONDITIONAL([DARWIN], [AS_CASE([$host_os],[darwin*], [true], [false])]) -AM_CONDITIONAL([FREEBSD],[AS_CASE([$host_os],[freebsd*], [true], [false])]) -AM_CONDITIONAL([LINUX], [AS_CASE([$host_os],[linux*], [true], [false])]) -AM_CONDITIONAL([NETBSD], [AS_CASE([$host_os],[netbsd*], [true], [false])]) -AM_CONDITIONAL([OPENBSD],[AS_CASE([$host_os],[openbsd*], [true], [false])]) -AM_CONDITIONAL([SUNOS], [AS_CASE([$host_os],[solaris*], [true], [false])]) -AM_CONDITIONAL([WINNT], [AS_CASE([$host_os],[mingw*], [true], [false])]) -PANDORA_ENABLE_DTRACE +AM_CONDITIONAL([AIX], [AS_CASE([$host_os],[aix*], [true], [false])]) +AM_CONDITIONAL([ANDROID], [AS_CASE([$host_os],[linux-android*],[true], [false])]) +AM_CONDITIONAL([DARWIN], [AS_CASE([$host_os],[darwin*], [true], [false])]) +AM_CONDITIONAL([DRAGONFLY],[AS_CASE([$host_os],[dragonfly*], [true], [false])]) +AM_CONDITIONAL([FREEBSD], [AS_CASE([$host_os],[freebsd*], [true], [false])]) +AM_CONDITIONAL([LINUX], [AS_CASE([$host_os],[linux*], [true], [false])]) +AM_CONDITIONAL([NETBSD], [AS_CASE([$host_os],[netbsd*], [true], [false])]) +AM_CONDITIONAL([OPENBSD], [AS_CASE([$host_os],[openbsd*], [true], [false])]) +AM_CONDITIONAL([SUNOS], [AS_CASE([$host_os],[solaris*], [true], [false])]) +AM_CONDITIONAL([WINNT], [AS_CASE([$host_os],[mingw*], [true], [false])]) AC_CHECK_PROG(PKG_CONFIG, pkg-config, yes) AM_CONDITIONAL([HAVE_PKG_CONFIG], [test "x$PKG_CONFIG" != "x"]) AS_IF([test
"x$PKG_CONFIG" != "x"], [ diff --git a/deps/uv/docs/make.bat b/deps/uv/docs/make.bat new file mode 100644 index 00000000000..10eb94b013b --- /dev/null +++ b/deps/uv/docs/make.bat @@ -0,0 +1,243 @@ +@ECHO OFF + +REM Command file for Sphinx documentation + +if "%SPHINXBUILD%" == "" ( + set SPHINXBUILD=sphinx-build +) +set BUILDDIR=build +set SRCDIR=src +set ALLSPHINXOPTS=-d %BUILDDIR%/doctrees %SPHINXOPTS% %SRCDIR% +set I18NSPHINXOPTS=%SPHINXOPTS% %SRCDIR% +if NOT "%PAPER%" == "" ( + set ALLSPHINXOPTS=-D latex_paper_size=%PAPER% %ALLSPHINXOPTS% + set I18NSPHINXOPTS=-D latex_paper_size=%PAPER% %I18NSPHINXOPTS% +) + +if "%1" == "" goto help + +if "%1" == "help" ( + :help + echo.Please use `make ^<target^>` where ^<target^> is one of + echo. html to make standalone HTML files + echo. dirhtml to make HTML files named index.html in directories + echo. singlehtml to make a single large HTML file + echo. pickle to make pickle files + echo. json to make JSON files + echo. htmlhelp to make HTML files and a HTML help project + echo. qthelp to make HTML files and a qthelp project + echo. devhelp to make HTML files and a Devhelp project + echo. epub to make an epub + echo. latex to make LaTeX files, you can set PAPER=a4 or PAPER=letter + echo. text to make text files + echo. man to make manual pages + echo. texinfo to make Texinfo files + echo. gettext to make PO message catalogs + echo. changes to make an overview over all changed/added/deprecated items + echo. xml to make Docutils-native XML files + echo. pseudoxml to make pseudoxml-XML files for display purposes + echo. linkcheck to check all external links for integrity + echo. doctest to run all doctests embedded in the documentation if enabled + goto end +) + +if "%1" == "clean" ( + for /d %%i in (%BUILDDIR%\*) do rmdir /q /s %%i + del /q /s %BUILDDIR%\* + goto end +) + + +%SPHINXBUILD% 2> nul +if errorlevel 9009 ( + echo. + echo.The 'sphinx-build' command was not found. Make sure you have Sphinx + echo.installed, then set the SPHINXBUILD environment variable to point + echo.to the full path of the 'sphinx-build' executable. Alternatively you + echo.may add the Sphinx directory to PATH. + echo. + echo.If you don't have Sphinx installed, grab it from + echo.http://sphinx-doc.org/ + exit /b 1 +) + +if "%1" == "html" ( + %SPHINXBUILD% -b html %ALLSPHINXOPTS% %BUILDDIR%/html + if errorlevel 1 exit /b 1 + echo. + echo.Build finished. The HTML pages are in %BUILDDIR%/html. + goto end +) + +if "%1" == "dirhtml" ( + %SPHINXBUILD% -b dirhtml %ALLSPHINXOPTS% %BUILDDIR%/dirhtml + if errorlevel 1 exit /b 1 + echo. + echo.Build finished. The HTML pages are in %BUILDDIR%/dirhtml. + goto end +) + +if "%1" == "singlehtml" ( + %SPHINXBUILD% -b singlehtml %ALLSPHINXOPTS% %BUILDDIR%/singlehtml + if errorlevel 1 exit /b 1 + echo. + echo.Build finished. The HTML pages are in %BUILDDIR%/singlehtml. + goto end +) + +if "%1" == "pickle" ( + %SPHINXBUILD% -b pickle %ALLSPHINXOPTS% %BUILDDIR%/pickle + if errorlevel 1 exit /b 1 + echo. + echo.Build finished; now you can process the pickle files. + goto end +) + +if "%1" == "json" ( + %SPHINXBUILD% -b json %ALLSPHINXOPTS% %BUILDDIR%/json + if errorlevel 1 exit /b 1 + echo. + echo.Build finished; now you can process the JSON files. + goto end +) + +if "%1" == "htmlhelp" ( + %SPHINXBUILD% -b htmlhelp %ALLSPHINXOPTS% %BUILDDIR%/htmlhelp + if errorlevel 1 exit /b 1 + echo. + echo.Build finished; now you can run HTML Help Workshop with the ^ +.hhp project file in %BUILDDIR%/htmlhelp. 
+ goto end +) + +if "%1" == "qthelp" ( + %SPHINXBUILD% -b qthelp %ALLSPHINXOPTS% %BUILDDIR%/qthelp + if errorlevel 1 exit /b 1 + echo. + echo.Build finished; now you can run "qcollectiongenerator" with the ^ +.qhcp project file in %BUILDDIR%/qthelp, like this: + echo.^> qcollectiongenerator %BUILDDIR%\qthelp\libuv.qhcp + echo.To view the help file: + echo.^> assistant -collectionFile %BUILDDIR%\qthelp\libuv.ghc + goto end +) + +if "%1" == "devhelp" ( + %SPHINXBUILD% -b devhelp %ALLSPHINXOPTS% %BUILDDIR%/devhelp + if errorlevel 1 exit /b 1 + echo. + echo.Build finished. + goto end +) + +if "%1" == "epub" ( + %SPHINXBUILD% -b epub %ALLSPHINXOPTS% %BUILDDIR%/epub + if errorlevel 1 exit /b 1 + echo. + echo.Build finished. The epub file is in %BUILDDIR%/epub. + goto end +) + +if "%1" == "latex" ( + %SPHINXBUILD% -b latex %ALLSPHINXOPTS% %BUILDDIR%/latex + if errorlevel 1 exit /b 1 + echo. + echo.Build finished; the LaTeX files are in %BUILDDIR%/latex. + goto end +) + +if "%1" == "latexpdf" ( + %SPHINXBUILD% -b latex %ALLSPHINXOPTS% %BUILDDIR%/latex + cd %BUILDDIR%/latex + make all-pdf + cd %BUILDDIR%/.. + echo. + echo.Build finished; the PDF files are in %BUILDDIR%/latex. + goto end +) + +if "%1" == "latexpdfja" ( + %SPHINXBUILD% -b latex %ALLSPHINXOPTS% %BUILDDIR%/latex + cd %BUILDDIR%/latex + make all-pdf-ja + cd %BUILDDIR%/.. + echo. + echo.Build finished; the PDF files are in %BUILDDIR%/latex. + goto end +) + +if "%1" == "text" ( + %SPHINXBUILD% -b text %ALLSPHINXOPTS% %BUILDDIR%/text + if errorlevel 1 exit /b 1 + echo. + echo.Build finished. The text files are in %BUILDDIR%/text. + goto end +) + +if "%1" == "man" ( + %SPHINXBUILD% -b man %ALLSPHINXOPTS% %BUILDDIR%/man + if errorlevel 1 exit /b 1 + echo. + echo.Build finished. The manual pages are in %BUILDDIR%/man. + goto end +) + +if "%1" == "texinfo" ( + %SPHINXBUILD% -b texinfo %ALLSPHINXOPTS% %BUILDDIR%/texinfo + if errorlevel 1 exit /b 1 + echo. + echo.Build finished. The Texinfo files are in %BUILDDIR%/texinfo. + goto end +) + +if "%1" == "gettext" ( + %SPHINXBUILD% -b gettext %I18NSPHINXOPTS% %BUILDDIR%/locale + if errorlevel 1 exit /b 1 + echo. + echo.Build finished. The message catalogs are in %BUILDDIR%/locale. + goto end +) + +if "%1" == "changes" ( + %SPHINXBUILD% -b changes %ALLSPHINXOPTS% %BUILDDIR%/changes + if errorlevel 1 exit /b 1 + echo. + echo.The overview file is in %BUILDDIR%/changes. + goto end +) + +if "%1" == "linkcheck" ( + %SPHINXBUILD% -b linkcheck %ALLSPHINXOPTS% %BUILDDIR%/linkcheck + if errorlevel 1 exit /b 1 + echo. + echo.Link check complete; look for any errors in the above output ^ +or in %BUILDDIR%/linkcheck/output.txt. + goto end +) + +if "%1" == "doctest" ( + %SPHINXBUILD% -b doctest %ALLSPHINXOPTS% %BUILDDIR%/doctest + if errorlevel 1 exit /b 1 + echo. + echo.Testing of doctests in the sources finished, look at the ^ +results in %BUILDDIR%/doctest/output.txt. + goto end +) + +if "%1" == "xml" ( + %SPHINXBUILD% -b xml %ALLSPHINXOPTS% %BUILDDIR%/xml + if errorlevel 1 exit /b 1 + echo. + echo.Build finished. The XML files are in %BUILDDIR%/xml. + goto end +) + +if "%1" == "pseudoxml" ( + %SPHINXBUILD% -b pseudoxml %ALLSPHINXOPTS% %BUILDDIR%/pseudoxml + if errorlevel 1 exit /b 1 + echo. + echo.Build finished. The pseudo-XML files are in %BUILDDIR%/pseudoxml. + goto end +) + +:end diff --git a/deps/uv/docs/src/async.rst b/deps/uv/docs/src/async.rst new file mode 100644 index 00000000000..7afc92a71bc --- /dev/null +++ b/deps/uv/docs/src/async.rst @@ -0,0 +1,56 @@ + +.. 
_async:
+
+:c:type:`uv_async_t` --- Async handle
+=====================================
+
+Async handles allow the user to "wake up" the event loop and get a callback
+called from another thread.
+
+
+Data types
+----------
+
+.. c:type:: uv_async_t
+
+    Async handle type.
+
+.. c:type:: void (*uv_async_cb)(uv_async_t* handle)
+
+    Type definition for callback passed to :c:func:`uv_async_init`.
+
+
+Public members
+^^^^^^^^^^^^^^
+
+N/A
+
+.. seealso:: The :c:type:`uv_handle_t` members also apply.
+
+
+API
+---
+
+.. c:function:: int uv_async_init(uv_loop_t* loop, uv_async_t* async, uv_async_cb async_cb)
+
+    Initialize the handle. A NULL callback is allowed.
+
+    .. note::
+        Unlike other handle initialization functions, this one immediately starts the handle.
+
+.. c:function:: int uv_async_send(uv_async_t* async)
+
+    Wake up the event loop and call the async handle's callback.
+
+    .. note::
+        It's safe to call this function from any thread. The callback will be called on the
+        loop thread.
+
+    .. warning::
+        libuv will coalesce calls to :c:func:`uv_async_send`; that is, not every call to it
+        will yield an execution of the callback. The only guarantee is that the callback is
+        called at least once. Thus, calling this function may not wake up the event loop if
+        it was already called previously within a short period of time.
+
+.. seealso::
+    The :c:type:`uv_handle_t` API functions also apply.
diff --git a/deps/uv/docs/src/check.rst b/deps/uv/docs/src/check.rst
new file mode 100644
index 00000000000..8d48f222767
--- /dev/null
+++ b/deps/uv/docs/src/check.rst
@@ -0,0 +1,46 @@
+
+.. _check:
+
+:c:type:`uv_check_t` --- Check handle
+=====================================
+
+Check handles will run the given callback once per loop iteration, right
+after polling for i/o.
+
+
+Data types
+----------
+
+.. c:type:: uv_check_t
+
+    Check handle type.
+
+.. c:type:: void (*uv_check_cb)(uv_check_t* handle)
+
+    Type definition for callback passed to :c:func:`uv_check_start`.
+
+
+Public members
+^^^^^^^^^^^^^^
+
+N/A
+
+.. seealso:: The :c:type:`uv_handle_t` members also apply.
+
+
+API
+---
+
+.. c:function:: int uv_check_init(uv_loop_t*, uv_check_t* check)
+
+    Initialize the handle.
+
+.. c:function:: int uv_check_start(uv_check_t* check, uv_check_cb cb)
+
+    Start the handle with the given callback.
+
+.. c:function:: int uv_check_stop(uv_check_t* check)
+
+    Stop the handle, the callback will no longer be called.
+
+.. seealso:: The :c:type:`uv_handle_t` API functions also apply.
diff --git a/deps/uv/docs/src/conf.py b/deps/uv/docs/src/conf.py
new file mode 100644
index 00000000000..f614fc5b434
--- /dev/null
+++ b/deps/uv/docs/src/conf.py
@@ -0,0 +1,348 @@
+# -*- coding: utf-8 -*-
+#
+# libuv API documentation documentation build configuration file, created by
+# sphinx-quickstart on Sun Jul 27 11:47:51 2014.
+#
+# This file is execfile()d with the current directory set to its
+# containing dir.
+#
+# Note that not all possible configuration values are present in this
+# autogenerated file.
+#
+# All configuration values have a default; values that are commented out
+# serve to show the default.
+ +import os +import re +import sys + + +def get_libuv_version(): + with open('../../include/uv-version.h') as f: + data = f.read() + try: + m = re.search(r"""^#define UV_VERSION_MAJOR (\d)$""", data, re.MULTILINE) + major = int(m.group(1)) + m = re.search(r"""^#define UV_VERSION_MINOR (\d)$""", data, re.MULTILINE) + minor = int(m.group(1)) + m = re.search(r"""^#define UV_VERSION_PATCH (\d)$""", data, re.MULTILINE) + patch = int(m.group(1)) + m = re.search(r"""^#define UV_VERSION_IS_RELEASE (\d)$""", data, re.MULTILINE) + is_release = int(m.group(1)) + m = re.search(r"""^#define UV_VERSION_SUFFIX \"(\w*)\"$""", data, re.MULTILINE) + suffix = m.group(1) + return '%d.%d.%d%s' % (major, minor, patch, '-%s' % suffix if not is_release else '') + except Exception: + return 'unknown' + +# If extensions (or modules to document with autodoc) are in another directory, +# add these directories to sys.path here. If the directory is relative to the +# documentation root, use os.path.abspath to make it absolute, like shown here. +#sys.path.insert(0, os.path.abspath('.')) + +# -- General configuration ------------------------------------------------ + +# If your documentation needs a minimal Sphinx version, state it here. +#needs_sphinx = '1.0' + +# Add any Sphinx extension module names here, as strings. They can be +# extensions coming with Sphinx (named 'sphinx.ext.*') or your custom +# ones. +extensions = [] + +# Add any paths that contain templates here, relative to this directory. +templates_path = ['templates'] + +# The suffix of source filenames. +source_suffix = '.rst' + +# The encoding of source files. +#source_encoding = 'utf-8-sig' + +# The master toctree document. +master_doc = 'index' + +# General information about the project. +project = u'libuv API documentation' +copyright = u'libuv contributors' + +# The version info for the project you're documenting, acts as replacement for +# |version| and |release|, also used in various other places throughout the +# built documents. +# +# The short X.Y version. +version = get_libuv_version() +# The full version, including alpha/beta/rc tags. +release = version + +# The language for content autogenerated by Sphinx. Refer to documentation +# for a list of supported languages. +#language = None + +# There are two options for replacing |today|: either, you set today to some +# non-false value, then it is used: +#today = '' +# Else, today_fmt is used as the format for a strftime call. +#today_fmt = '%B %d, %Y' + +# List of patterns, relative to source directory, that match files and +# directories to ignore when looking for source files. +exclude_patterns = [] + +# The reST default role (used for this markup: `text`) to use for all +# documents. +#default_role = None + +# If true, '()' will be appended to :func: etc. cross-reference text. +#add_function_parentheses = True + +# If true, the current module name will be prepended to all description +# unit titles (such as .. function::). +#add_module_names = True + +# If true, sectionauthor and moduleauthor directives will be shown in the +# output. They are ignored by default. +#show_authors = False + +# The name of the Pygments (syntax highlighting) style to use. +pygments_style = 'sphinx' + +# A list of ignored prefixes for module index sorting. +#modindex_common_prefix = [] + +# If true, keep warnings as "system message" paragraphs in the built documents. 
+#keep_warnings = False + + +# -- Options for HTML output ---------------------------------------------- + +# The theme to use for HTML and HTML Help pages. See the documentation for +# a list of builtin themes. +html_theme = 'nature' + +# Theme options are theme-specific and customize the look and feel of a theme +# further. For a list of options available for each theme, see the +# documentation. +#html_theme_options = {} + +# Add any paths that contain custom themes here, relative to this directory. +#html_theme_path = [] + +# The name for this set of Sphinx documents. If None, it defaults to +# "<project> v<release> documentation". +html_title = 'libuv API documentation' + +# A shorter title for the navigation bar. Default is the same as html_title. +html_short_title = 'libuv %s API documentation' % version + +# The name of an image file (relative to this directory) to place at the top +# of the sidebar. +html_logo = 'static/logo.png' + +# The name of an image file (within the static path) to use as favicon of the +# docs. This file should be a Windows icon file (.ico) being 16x16 or 32x32 +# pixels large. +html_favicon = 'static/favicon.ico' + +# Add any paths that contain custom static files (such as style sheets) here, +# relative to this directory. They are copied after the builtin static files, +# so a file named "default.css" will overwrite the builtin "default.css". +html_static_path = ['static'] + +# Add any extra paths that contain custom files (such as robots.txt or +# .htaccess) here, relative to this directory. These files are copied +# directly to the root of the documentation. +#html_extra_path = [] + +# If not '', a 'Last updated on:' timestamp is inserted at every page bottom, +# using the given strftime format. +#html_last_updated_fmt = '%b %d, %Y' + +# If true, SmartyPants will be used to convert quotes and dashes to +# typographically correct entities. +#html_use_smartypants = True + +# Custom sidebar templates, maps document names to template names. +#html_sidebars = {} + +# Additional templates that should be rendered to pages, maps page names to +# template names. +#html_additional_pages = {} + +# If false, no module index is generated. +#html_domain_indices = True + +# If false, no index is generated. +#html_use_index = True + +# If true, the index is split into individual pages for each letter. +#html_split_index = False + +# If true, links to the reST sources are added to the pages. +#html_show_sourcelink = True + +# If true, "Created using Sphinx" is shown in the HTML footer. Default is True. +#html_show_sphinx = True + +# If true, "(C) Copyright ..." is shown in the HTML footer. Default is True. +#html_show_copyright = True + +# If true, an OpenSearch description file will be output, and all pages will +# contain a <link> tag referring to it. The value of this option must be the +# base URL from which the finished HTML is served. +#html_use_opensearch = '' + +# This is the file name suffix for HTML files (e.g. ".xhtml"). +#html_file_suffix = None + +# Output file base name for HTML help builder. +htmlhelp_basename = 'libuv' + + +# -- Options for LaTeX output --------------------------------------------- + +latex_elements = { +# The paper size ('letterpaper' or 'a4paper'). +#'papersize': 'letterpaper', + +# The font size ('10pt', '11pt' or '12pt'). +#'pointsize': '10pt', + +# Additional stuff for the LaTeX preamble. +#'preamble': '', +} + +# Grouping the document tree into LaTeX files. 
List of tuples +# (source start file, target name, title, +# author, documentclass [howto, manual, or own class]). +latex_documents = [ + ('index', 'libuv.tex', u'libuv API documentation', + u'libuv contributors', 'manual'), +] + +# The name of an image file (relative to this directory) to place at the top of +# the title page. +#latex_logo = None + +# For "manual" documents, if this is true, then toplevel headings are parts, +# not chapters. +#latex_use_parts = False + +# If true, show page references after internal links. +#latex_show_pagerefs = False + +# If true, show URL addresses after external links. +#latex_show_urls = False + +# Documents to append as an appendix to all manuals. +#latex_appendices = [] + +# If false, no module index is generated. +#latex_domain_indices = True + + +# -- Options for manual page output --------------------------------------- + +# One entry per manual page. List of tuples +# (source start file, name, description, authors, manual section). +man_pages = [ + ('index', 'libuv', u'libuv API documentation', + [u'libuv contributors'], 1) +] + +# If true, show URL addresses after external links. +#man_show_urls = False + + +# -- Options for Texinfo output ------------------------------------------- + +# Grouping the document tree into Texinfo files. List of tuples +# (source start file, target name, title, author, +# dir menu entry, description, category) +texinfo_documents = [ + ('index', 'libuv', u'libuv API documentation', + u'libuv contributors', 'libuv', 'Cross-platform asynchronous I/O', + 'Miscellaneous'), +] + +# Documents to append as an appendix to all manuals. +#texinfo_appendices = [] + +# If false, no module index is generated. +#texinfo_domain_indices = True + +# How to display URL addresses: 'footnote', 'no', or 'inline'. +#texinfo_show_urls = 'footnote' + +# If true, do not generate a @detailmenu in the "Top" node's menu. +#texinfo_no_detailmenu = False + + +# -- Options for Epub output ---------------------------------------------- + +# Bibliographic Dublin Core info. +epub_title = u'libuv API documentation' +epub_author = u'libuv contributors' +epub_publisher = u'libuv contributors' +epub_copyright = u'2014, libuv contributors' + +# The basename for the epub file. It defaults to the project name. +epub_basename = u'libuv' + +# The HTML theme for the epub output. Since the default themes are not optimized +# for small screen space, using the same theme for HTML and epub output is +# usually not wise. This defaults to 'epub', a theme designed to save visual +# space. +#epub_theme = 'epub' + +# The language of the text. It defaults to the language option +# or en if the language is not set. +#epub_language = '' + +# The scheme of the identifier. Typical schemes are ISBN or URL. +#epub_scheme = '' + +# The unique identifier of the text. This can be a ISBN number +# or the project homepage. +#epub_identifier = '' + +# A unique identification for the text. +#epub_uid = '' + +# A tuple containing the cover image and cover page html template filenames. +#epub_cover = () + +# A sequence of (type, uri, title) tuples for the guide element of content.opf. +#epub_guide = () + +# HTML files that should be inserted before the pages created by sphinx. +# The format is a list of tuples containing the path and title. +#epub_pre_files = [] + +# HTML files shat should be inserted after the pages created by sphinx. +# The format is a list of tuples containing the path and title. 
+#epub_post_files = []
+
+# A list of files that should not be packed into the epub file.
+epub_exclude_files = ['search.html']
+
+# The depth of the table of contents in toc.ncx.
+#epub_tocdepth = 3
+
+# Allow duplicate toc entries.
+#epub_tocdup = True
+
+# Choose between 'default' and 'includehidden'.
+#epub_tocscope = 'default'
+
+# Fix unsupported image types using the PIL.
+#epub_fix_images = False
+
+# Scale large images.
+#epub_max_image_width = 0
+
+# How to display URL addresses: 'footnote', 'no', or 'inline'.
+#epub_show_urls = 'inline'
+
+# If false, no index is generated.
+#epub_use_index = True
diff --git a/deps/uv/docs/src/design.rst b/deps/uv/docs/src/design.rst
new file mode 100644
index 00000000000..63141bedf58
--- /dev/null
+++ b/deps/uv/docs/src/design.rst
@@ -0,0 +1,137 @@
+
+.. _design:
+
+Design overview
+===============
+
+libuv is a cross-platform support library which was originally written for Node.js. It's designed
+around the event-driven asynchronous I/O model.
+
+The library provides much more than a simple abstraction over different I/O polling mechanisms:
+'handles' and 'streams' provide a high-level abstraction for sockets and other entities;
+cross-platform file I/O and threading functionality is also provided, amongst other things.
+
+Here is a diagram illustrating the different parts that compose libuv and what subsystem they
+relate to:
+
+.. image:: static/architecture.png
+    :scale: 75%
+    :align: center
+
+
+Handles and requests
+^^^^^^^^^^^^^^^^^^^^
+
+libuv provides users with two abstractions to work with, in combination with the event loop:
+handles and requests.
+
+Handles represent long-lived objects capable of performing certain operations while active. Some
+examples: a prepare handle gets its callback called once every loop iteration when active, and
+a TCP server handle gets its connection callback called every time there is a new connection.
+
+Requests represent (typically) short-lived operations. These operations can be performed over a
+handle: write requests are used to write data on a handle; or standalone: getaddrinfo requests
+don't need a handle; they run directly on the loop.
+
+
+The I/O loop
+^^^^^^^^^^^^
+
+The I/O (or event) loop is the central part of libuv. It establishes the context for all I/O
+operations, and it's meant to be tied to a single thread. One can run multiple event loops
+as long as each runs in a different thread. The libuv event loop (or any other API involving
+the loop or handles, for that matter) **is not thread-safe** except where stated otherwise.
+
+The event loop follows the rather usual single-threaded asynchronous I/O approach: all (network)
+I/O is performed on non-blocking sockets which are polled using the best mechanism available
+on the given platform: epoll on Linux, kqueue on OSX and other BSDs, event ports on SunOS and IOCP
+on Windows. As part of a loop iteration the loop will block waiting for I/O activity on sockets
+which have been added to the poller, and callbacks will be fired indicating socket conditions
+(readable, writable, hangup) so handles can read, write or perform the desired I/O operation.
+
+In order to better understand how the event loop operates, the following diagram illustrates all
+stages of a loop iteration:
+
+.. image:: static/loop_iteration.png
+    :scale: 75%
+    :align: center
+
+
+#. The loop's concept of 'now' is updated. The event loop caches the current time at the start of
+   the event loop tick in order to reduce the number of time-related system calls.
+
+#. If the loop is *alive* an iteration is started, otherwise the loop will exit immediately. So,
+   when is a loop considered to be *alive*? If a loop has active and ref'd handles, active
+   requests or closing handles it's considered to be *alive*.
+
+#. Due timers are run. All active timers scheduled for a time before the loop's concept of *now*
+   get their callbacks called.
+
+#. Pending callbacks are called. All I/O callbacks are called right after polling for I/O, for the
+   most part. There are cases, however, in which calling such a callback is deferred for the next
+   loop iteration. If the previous iteration deferred any I/O callback it will be run at this point.
+
+#. Idle handle callbacks are called. Despite the unfortunate name, idle handles are run on every
+   loop iteration, if they are active.
+
+#. Prepare handle callbacks are called. Prepare handles get their callbacks called right before
+   the loop blocks for I/O.
+
+#. Poll timeout is calculated. Before blocking for I/O the loop calculates for how long it should
+   block. These are the rules when calculating the timeout:
+
+    * If the loop was run with the ``UV_RUN_NOWAIT`` flag, the timeout is 0.
+    * If the loop is going to be stopped (:c:func:`uv_stop` was called), the timeout is 0.
+    * If there are no active handles or requests, the timeout is 0.
+    * If there are any idle handles active, the timeout is 0.
+    * If there are any handles pending to be closed, the timeout is 0.
+    * If none of the above cases matches, the timeout of the closest timer is taken, or
+      if there are no active timers, infinity.
+
+#. The loop blocks for I/O. At this point the loop will block for I/O for the timeout calculated
+   in the previous step. All I/O related handles that were monitoring a given file descriptor
+   for a read or write operation get their callbacks called at this point.
+
+#. Check handle callbacks are called. Check handles get their callbacks called right after the
+   loop has blocked for I/O. Check handles are essentially the counterpart of prepare handles.
+
+#. Close callbacks are called. If a handle was closed by calling :c:func:`uv_close` it will
+   get the close callback called.
+
+#. A special case applies when the loop was run with ``UV_RUN_ONCE``, as it implies forward
+   progress. It's possible that no I/O callbacks were fired after blocking for I/O, but some
+   time has passed, so there might be timers which are due; those timers get their callbacks
+   called.
+
+#. Iteration ends. If the loop was run with ``UV_RUN_NOWAIT`` or ``UV_RUN_ONCE`` modes, the
+   iteration is ended and :c:func:`uv_run` will return. If the loop was run with ``UV_RUN_DEFAULT``
+   it will continue from the start if it's still *alive*, otherwise it will also end.
+
+
+.. important::
+    libuv uses a thread pool to make asynchronous file I/O operations possible, but
+    network I/O is **always** performed in a single thread, each loop's thread.
+
+.. note::
+    While the polling mechanism is different, libuv makes the execution model consistent
+    across Unix systems and Windows.
+
+
+File I/O
+^^^^^^^^
+
+Unlike network I/O, there are no platform-specific file I/O primitives libuv could rely on,
+so the current approach is to run blocking file I/O operations in a thread pool.
+
+For a thorough explanation of the cross-platform file I/O landscape, check out
+`this post <http://blog.libtorrent.org/2012/10/asynchronous-disk-io/>`_.
+
+libuv currently uses a global thread pool on which all loops can queue work. Three types of
+operations are currently run on this pool:
+
+    * Filesystem operations
+    * DNS functions (getaddrinfo and getnameinfo)
+    * User-specified code via :c:func:`uv_queue_work`
+
+.. warning::
+    See the :c:ref:`threadpool` section for more details, but keep in mind the thread pool size
+    is quite limited.
diff --git a/deps/uv/docs/src/dll.rst b/deps/uv/docs/src/dll.rst
new file mode 100644
index 00000000000..3fb11e192db
--- /dev/null
+++ b/deps/uv/docs/src/dll.rst
@@ -0,0 +1,44 @@
+
+.. _dll:
+
+Shared library handling
+=======================
+
+libuv provides cross-platform utilities for loading shared libraries and
+retrieving symbols from them, using the following API.
+
+
+Data types
+----------
+
+.. c:type:: uv_lib_t
+
+    Shared library data type.
+
+
+Public members
+^^^^^^^^^^^^^^
+
+N/A
+
+
+API
+---
+
+.. c:function:: int uv_dlopen(const char* filename, uv_lib_t* lib)
+
+    Opens a shared library. The filename is in UTF-8. Returns 0 on success and
+    -1 on error. Call :c:func:`uv_dlerror` to get the error message.
+
+.. c:function:: void uv_dlclose(uv_lib_t* lib)
+
+    Close the shared library.
+
+.. c:function:: int uv_dlsym(uv_lib_t* lib, const char* name, void** ptr)
+
+    Retrieves a data pointer from a dynamic library. It is legal for a symbol
+    to map to NULL. Returns 0 on success and -1 if the symbol was not found.
+
+.. c:function:: const char* uv_dlerror(const uv_lib_t* lib)
+
+    Returns the last uv_dlopen() or uv_dlsym() error message.
diff --git a/deps/uv/docs/src/dns.rst b/deps/uv/docs/src/dns.rst
new file mode 100644
index 00000000000..d7c889f7ada
--- /dev/null
+++ b/deps/uv/docs/src/dns.rst
@@ -0,0 +1,83 @@
+
+.. _dns:
+
+DNS utility functions
+=====================
+
+libuv provides asynchronous variants of `getaddrinfo` and `getnameinfo`.
+
+
+Data types
+----------
+
+.. c:type:: uv_getaddrinfo_t
+
+    `getaddrinfo` request type.
+
+.. c:type:: void (*uv_getaddrinfo_cb)(uv_getaddrinfo_t* req, int status, struct addrinfo* res)
+
+    Callback which will be called with the getaddrinfo request result once
+    complete. In case it was cancelled, `status` will have a value of
+    ``UV_ECANCELED``.
+
+.. c:type:: uv_getnameinfo_t
+
+    `getnameinfo` request type.
+
+.. c:type:: void (*uv_getnameinfo_cb)(uv_getnameinfo_t* req, int status, const char* hostname, const char* service)
+
+    Callback which will be called with the getnameinfo request result once
+    complete. In case it was cancelled, `status` will have a value of
+    ``UV_ECANCELED``.
+
+
+Public members
+^^^^^^^^^^^^^^
+
+.. c:member:: uv_loop_t* uv_getaddrinfo_t.loop
+
+    Loop that started this getaddrinfo request and where completion will be
+    reported. Readonly.
+
+.. c:member:: uv_loop_t* uv_getnameinfo_t.loop
+
+    Loop that started this getnameinfo request and where completion will be
+    reported. Readonly.
+
+.. seealso:: The :c:type:`uv_req_t` members also apply.
+
+
+API
+---
+
+.. c:function:: int uv_getaddrinfo(uv_loop_t* loop, uv_getaddrinfo_t* req, uv_getaddrinfo_cb getaddrinfo_cb, const char* node, const char* service, const struct addrinfo* hints)
+
+    Asynchronous ``getaddrinfo(3)``.
+
+    Either node or service may be NULL but not both.
+
+    `hints` is a pointer to a struct addrinfo with additional address type
+    constraints, or NULL. Consult `man -s 3 getaddrinfo` for more details.
+
+    Returns 0 on success or an error code < 0 on failure.
If successful, the + callback will get called sometime in the future with the lookup result, + which is either: + + * status == 0, the res argument points to a valid `struct addrinfo`, or + * status < 0, the res argument is NULL. See the UV_EAI_* constants. + + Call :c:func:`uv_freeaddrinfo` to free the addrinfo structure. + +.. c:function:: void uv_freeaddrinfo(struct addrinfo* ai) + + Free the struct addrinfo. Passing NULL is allowed and is a no-op. + +.. c:function:: int uv_getnameinfo(uv_loop_t* loop, uv_getnameinfo_t* req, uv_getnameinfo_cb getnameinfo_cb, const struct sockaddr* addr, int flags) + + Asynchronous ``getnameinfo(3)``. + + Returns 0 on success or an error code < 0 on failure. If successful, the + callback will get called sometime in the future with the lookup result. + Consult `man -s 3 getnameinfo` for more details. + +.. seealso:: The :c:type:`uv_req_t` API functions also apply. diff --git a/deps/uv/docs/src/errors.rst b/deps/uv/docs/src/errors.rst new file mode 100644 index 00000000000..5d59dc30f28 --- /dev/null +++ b/deps/uv/docs/src/errors.rst @@ -0,0 +1,329 @@ + +.. _errors: + +Error handling +============== + +In libuv errors are negative numbered constants. As a rule of thumb, whenever +there is a status parameter, or an API functions returns an integer, a negative +number will imply an error. + +.. note:: + Implementation detail: on Unix error codes are the negated `errno` (or `-errno`), while on + Windows they are defined by libuv to arbitrary negative numbers. + + +Error constants +--------------- + +.. c:macro:: UV_E2BIG + + argument list too long + +.. c:macro:: UV_EACCES + + permission denied + +.. c:macro:: UV_EADDRINUSE + + address already in use + +.. c:macro:: UV_EADDRNOTAVAIL + + address not available + +.. c:macro:: UV_EAFNOSUPPORT + + address family not supported + +.. c:macro:: UV_EAGAIN + + resource temporarily unavailable + +.. c:macro:: UV_EAI_ADDRFAMILY + + address family not supported + +.. c:macro:: UV_EAI_AGAIN + + temporary failure + +.. c:macro:: UV_EAI_BADFLAGS + + bad ai_flags value + +.. c:macro:: UV_EAI_BADHINTS + + invalid value for hints + +.. c:macro:: UV_EAI_CANCELED + + request canceled + +.. c:macro:: UV_EAI_FAIL + + permanent failure + +.. c:macro:: UV_EAI_FAMILY + + ai_family not supported + +.. c:macro:: UV_EAI_MEMORY + + out of memory + +.. c:macro:: UV_EAI_NODATA + + no address + +.. c:macro:: UV_EAI_NONAME + + unknown node or service + +.. c:macro:: UV_EAI_OVERFLOW + + argument buffer overflow + +.. c:macro:: UV_EAI_PROTOCOL + + resolved protocol is unknown + +.. c:macro:: UV_EAI_SERVICE + + service not available for socket type + +.. c:macro:: UV_EAI_SOCKTYPE + + socket type not supported + +.. c:macro:: UV_EALREADY + + connection already in progress + +.. c:macro:: UV_EBADF + + bad file descriptor + +.. c:macro:: UV_EBUSY + + resource busy or locked + +.. c:macro:: UV_ECANCELED + + operation canceled + +.. c:macro:: UV_ECHARSET + + invalid Unicode character + +.. c:macro:: UV_ECONNABORTED + + software caused connection abort + +.. c:macro:: UV_ECONNREFUSED + + connection refused + +.. c:macro:: UV_ECONNRESET + + connection reset by peer + +.. c:macro:: UV_EDESTADDRREQ + + destination address required + +.. c:macro:: UV_EEXIST + + file already exists + +.. c:macro:: UV_EFAULT + + bad address in system call argument + +.. c:macro:: UV_EFBIG + + file too large + +.. c:macro:: UV_EHOSTUNREACH + + host is unreachable + +.. c:macro:: UV_EINTR + + interrupted system call + +.. c:macro:: UV_EINVAL + + invalid argument + +.. 
c:macro:: UV_EIO + + i/o error + +.. c:macro:: UV_EISCONN + + socket is already connected + +.. c:macro:: UV_EISDIR + + illegal operation on a directory + +.. c:macro:: UV_ELOOP + + too many symbolic links encountered + +.. c:macro:: UV_EMFILE + + too many open files + +.. c:macro:: UV_EMSGSIZE + + message too long + +.. c:macro:: UV_ENAMETOOLONG + + name too long + +.. c:macro:: UV_ENETDOWN + + network is down + +.. c:macro:: UV_ENETUNREACH + + network is unreachable + +.. c:macro:: UV_ENFILE + + file table overflow + +.. c:macro:: UV_ENOBUFS + + no buffer space available + +.. c:macro:: UV_ENODEV + + no such device + +.. c:macro:: UV_ENOENT + + no such file or directory + +.. c:macro:: UV_ENOMEM + + not enough memory + +.. c:macro:: UV_ENONET + + machine is not on the network + +.. c:macro:: UV_ENOPROTOOPT + + protocol not available + +.. c:macro:: UV_ENOSPC + + no space left on device + +.. c:macro:: UV_ENOSYS + + function not implemented + +.. c:macro:: UV_ENOTCONN + + socket is not connected + +.. c:macro:: UV_ENOTDIR + + not a directory + +.. c:macro:: UV_ENOTEMPTY + + directory not empty + +.. c:macro:: UV_ENOTSOCK + + socket operation on non-socket + +.. c:macro:: UV_ENOTSUP + + operation not supported on socket + +.. c:macro:: UV_EPERM + + operation not permitted + +.. c:macro:: UV_EPIPE + + broken pipe + +.. c:macro:: UV_EPROTO + + protocol error + +.. c:macro:: UV_EPROTONOSUPPORT + + protocol not supported + +.. c:macro:: UV_EPROTOTYPE + + protocol wrong type for socket + +.. c:macro:: UV_ERANGE + + result too large + +.. c:macro:: UV_EROFS + + read-only file system + +.. c:macro:: UV_ESHUTDOWN + + cannot send after transport endpoint shutdown + +.. c:macro:: UV_ESPIPE + + invalid seek + +.. c:macro:: UV_ESRCH + + no such process + +.. c:macro:: UV_ETIMEDOUT + + connection timed out + +.. c:macro:: UV_ETXTBSY + + text file is busy + +.. c:macro:: UV_EXDEV + + cross-device link not permitted + +.. c:macro:: UV_UNKNOWN + + unknown error + +.. c:macro:: UV_EOF + + end of file + +.. c:macro:: UV_ENXIO + + no such device or address + +.. c:macro:: UV_EMLINK + + too many links + + +API +--- + +.. c:function:: const char* uv_strerror(int err) + + Returns the error message for the given error code. + +.. c:function:: const char* uv_err_name(int err) + + Returns the error name for the given error code. diff --git a/deps/uv/docs/src/fs.rst b/deps/uv/docs/src/fs.rst new file mode 100644 index 00000000000..cd535f756fc --- /dev/null +++ b/deps/uv/docs/src/fs.rst @@ -0,0 +1,278 @@ + +.. _fs: + +Filesystem operations +===================== + +libuv provides a wide variety of cross-platform sync and async filesystem +operations. All functions defined in this document take a callback, which is +allowed to be NULL. If the callback is NULL the request is completed synchronously, +otherwise it will be performed asynchronously. + +All file operations are run on the threadpool, see :ref:`threadpool` for information +on the threadpool size. + + +Data types +---------- + +.. c:type:: uv_fs_t + + Filesystem request type. + +.. c:type:: uv_timespec_t + + Portable equivalent of ``struct timespec``. + + :: + + typedef struct { + long tv_sec; + long tv_nsec; + } uv_timespec_t; + +.. c:type:: uv_stat_t + + Portable equivalent of ``struct stat``. 
+ + :: + + typedef struct { + uint64_t st_dev; + uint64_t st_mode; + uint64_t st_nlink; + uint64_t st_uid; + uint64_t st_gid; + uint64_t st_rdev; + uint64_t st_ino; + uint64_t st_size; + uint64_t st_blksize; + uint64_t st_blocks; + uint64_t st_flags; + uint64_t st_gen; + uv_timespec_t st_atim; + uv_timespec_t st_mtim; + uv_timespec_t st_ctim; + uv_timespec_t st_birthtim; + } uv_stat_t; + +.. c:type:: uv_fs_type + + Filesystem request type. + + :: + + typedef enum { + UV_FS_UNKNOWN = -1, + UV_FS_CUSTOM, + UV_FS_OPEN, + UV_FS_CLOSE, + UV_FS_READ, + UV_FS_WRITE, + UV_FS_SENDFILE, + UV_FS_STAT, + UV_FS_LSTAT, + UV_FS_FSTAT, + UV_FS_FTRUNCATE, + UV_FS_UTIME, + UV_FS_FUTIME, + UV_FS_ACCESS, + UV_FS_CHMOD, + UV_FS_FCHMOD, + UV_FS_FSYNC, + UV_FS_FDATASYNC, + UV_FS_UNLINK, + UV_FS_RMDIR, + UV_FS_MKDIR, + UV_FS_MKDTEMP, + UV_FS_RENAME, + UV_FS_SCANDIR, + UV_FS_LINK, + UV_FS_SYMLINK, + UV_FS_READLINK, + UV_FS_CHOWN, + UV_FS_FCHOWN + } uv_fs_type; + +.. c:type:: uv_dirent_t + + Cross platform (reduced) equivalent of ``struct dirent``. + Used in :c:func:`uv_fs_scandir_next`. + + :: + + typedef enum { + UV_DIRENT_UNKNOWN, + UV_DIRENT_FILE, + UV_DIRENT_DIR, + UV_DIRENT_LINK, + UV_DIRENT_FIFO, + UV_DIRENT_SOCKET, + UV_DIRENT_CHAR, + UV_DIRENT_BLOCK + } uv_dirent_type_t; + + typedef struct uv_dirent_s { + const char* name; + uv_dirent_type_t type; + } uv_dirent_t; + + +Public members +^^^^^^^^^^^^^^ + +.. c:member:: uv_loop_t* uv_fs_t.loop + + Loop that started this request and where completion will be reported. + Readonly. + +.. c:member:: uv_fs_type uv_fs_t.fs_type + + FS request type. + +.. c:member:: const char* uv_fs_t.path + + Path affecting the request. + +.. c:member:: ssize_t uv_fs_t.result + + Result of the request. < 0 means error, success otherwise. On requests such + as :c:func:`uv_fs_read` or :c:func:`uv_fs_write` it indicates the amount of + data that was read or written, respectively. + +.. c:member:: uv_stat_t uv_fs_t.statbuf + + Stores the result of :c:func:`uv_fs_stat` and other stat requests. + +.. c:member:: void* uv_fs_t.ptr + + Stores the result of :c:func:`uv_fs_readlink` and serves as an alias to + `statbuf`. + +.. seealso:: The :c:type:`uv_req_t` members also apply. + + +API +--- + +.. c:function:: void uv_fs_req_cleanup(uv_fs_t* req) + + Cleanup request. Must be called after a request is finished to deallocate + any memory libuv might have allocated. + +.. c:function:: int uv_fs_close(uv_loop_t* loop, uv_fs_t* req, uv_file file, uv_fs_cb cb) + + Equivalent to ``close(2)``. + +.. c:function:: int uv_fs_open(uv_loop_t* loop, uv_fs_t* req, const char* path, int flags, int mode, uv_fs_cb cb) + + Equivalent to ``open(2)``. + +.. c:function:: int uv_fs_read(uv_loop_t* loop, uv_fs_t* req, uv_file file, const uv_buf_t bufs[], unsigned int nbufs, int64_t offset, uv_fs_cb cb) + + Equivalent to ``preadv(2)``. + +.. c:function:: int uv_fs_unlink(uv_loop_t* loop, uv_fs_t* req, const char* path, uv_fs_cb cb) + + Equivalent to ``unlink(2)``. + +.. c:function:: int uv_fs_write(uv_loop_t* loop, uv_fs_t* req, uv_file file, const uv_buf_t bufs[], unsigned int nbufs, int64_t offset, uv_fs_cb cb) + + Equivalent to ``pwritev(2)``. + +.. c:function:: int uv_fs_mkdir(uv_loop_t* loop, uv_fs_t* req, const char* path, int mode, uv_fs_cb cb) + + Equivalent to ``mkdir(2)``. + + .. note:: + `mode` is currently not implemented on Windows. + +.. c:function:: int uv_fs_mkdtemp(uv_loop_t* loop, uv_fs_t* req, const char* tpl, uv_fs_cb cb) + + Equivalent to ``mkdtemp(3)``. + + .. 
note:: + The result can be found as a null terminated string at `req->path`. + +.. c:function:: int uv_fs_rmdir(uv_loop_t* loop, uv_fs_t* req, const char* path, uv_fs_cb cb) + + Equivalent to ``rmdir(2)``. + +.. c:function:: int uv_fs_scandir(uv_loop_t* loop, uv_fs_t* req, const char* path, int flags, uv_fs_cb cb) +.. c:function:: int uv_fs_scandir_next(uv_fs_t* req, uv_dirent_t* ent) + + Equivalent to ``scandir(3)``, with a slightly different API. Once the callback + for the request is called, the user can use :c:func:`uv_fs_scandir_next` to + get `ent` populated with the next directory entry data. When there are no + more entries ``UV_EOF`` will be returned. + +.. c:function:: int uv_fs_stat(uv_loop_t* loop, uv_fs_t* req, const char* path, uv_fs_cb cb) +.. c:function:: int uv_fs_fstat(uv_loop_t* loop, uv_fs_t* req, uv_file file, uv_fs_cb cb) +.. c:function:: int uv_fs_lstat(uv_loop_t* loop, uv_fs_t* req, const char* path, uv_fs_cb cb) + + Equivalent to ``(f/l)stat(2)``. + +.. c:function:: int uv_fs_rename(uv_loop_t* loop, uv_fs_t* req, const char* path, const char* new_path, uv_fs_cb cb) + + Equivalent to ``rename(2)``. + +.. c:function:: int uv_fs_fsync(uv_loop_t* loop, uv_fs_t* req, uv_file file, uv_fs_cb cb) + + Equivalent to ``fsync(2)``. + +.. c:function:: int uv_fs_fdatasync(uv_loop_t* loop, uv_fs_t* req, uv_file file, uv_fs_cb cb) + + Equivalent to ``fdatasync(2)``. + +.. c:function:: int uv_fs_ftruncate(uv_loop_t* loop, uv_fs_t* req, uv_file file, int64_t offset, uv_fs_cb cb) + + Equivalent to ``ftruncate(2)``. + +.. c:function:: int uv_fs_sendfile(uv_loop_t* loop, uv_fs_t* req, uv_file out_fd, uv_file in_fd, int64_t in_offset, size_t length, uv_fs_cb cb) + + Limited equivalent to ``sendfile(2)``. + +.. c:function:: int uv_fs_access(uv_loop_t* loop, uv_fs_t* req, const char* path, int mode, uv_fs_cb cb) + + Equivalent to ``access(2)`` on Unix. Windows uses ``GetFileAttributesW()``. + +.. c:function:: int uv_fs_chmod(uv_loop_t* loop, uv_fs_t* req, const char* path, int mode, uv_fs_cb cb) +.. c:function:: int uv_fs_fchmod(uv_loop_t* loop, uv_fs_t* req, uv_file file, int mode, uv_fs_cb cb) + + Equivalent to ``(f)chmod(2)``. + +.. c:function:: int uv_fs_utime(uv_loop_t* loop, uv_fs_t* req, const char* path, double atime, double mtime, uv_fs_cb cb) +.. c:function:: int uv_fs_futime(uv_loop_t* loop, uv_fs_t* req, uv_file file, double atime, double mtime, uv_fs_cb cb) + + Equivalent to ``(f)utime(s)(2)``. + +.. c:function:: int uv_fs_link(uv_loop_t* loop, uv_fs_t* req, const char* path, const char* new_path, uv_fs_cb cb) + + Equivalent to ``link(2)``. + +.. c:function:: int uv_fs_symlink(uv_loop_t* loop, uv_fs_t* req, const char* path, const char* new_path, int flags, uv_fs_cb cb) + + Equivalent to ``symlink(2)``. + + .. note:: + On Windows the `flags` parameter can be specified to control how the symlink will + be created: + + * ``UV_FS_SYMLINK_DIR``: indicates that `path` points to a directory. + + * ``UV_FS_SYMLINK_JUNCTION``: request that the symlink is created + using junction points. + +.. c:function:: int uv_fs_readlink(uv_loop_t* loop, uv_fs_t* req, const char* path, uv_fs_cb cb) + + Equivalent to ``readlink(2)``. + +.. c:function:: int uv_fs_chown(uv_loop_t* loop, uv_fs_t* req, const char* path, uv_uid_t uid, uv_gid_t gid, uv_fs_cb cb) +.. c:function:: int uv_fs_fchown(uv_loop_t* loop, uv_fs_t* req, uv_file file, uv_uid_t uid, uv_gid_t gid, uv_fs_cb cb) + + Equivalent to ``(f)chown(2)``. + + .. note:: + These functions are not implemented on Windows. + +.. 
seealso:: The :c:type:`uv_req_t` API functions also apply.
diff --git a/deps/uv/docs/src/fs_event.rst b/deps/uv/docs/src/fs_event.rst
new file mode 100644
index 00000000000..9bc9939fc2c
--- /dev/null
+++ b/deps/uv/docs/src/fs_event.rst
@@ -0,0 +1,102 @@
+
+.. _fs_event:
+
+:c:type:`uv_fs_event_t` --- FS Event handle
+===========================================
+
+FS Event handles allow the user to monitor a given path for changes, for example,
+if the file was renamed or there was a generic change in it. This handle uses
+the best backend for the job on each platform.
+
+
+Data types
+----------
+
+.. c:type:: uv_fs_event_t
+
+    FS Event handle type.
+
+.. c:type:: void (*uv_fs_event_cb)(uv_fs_event_t* handle, const char* filename, int events, int status)
+
+    Callback passed to :c:func:`uv_fs_event_start` which will be called repeatedly
+    after the handle is started. If the handle was started with a directory the
+    `filename` parameter will be a relative path to a file contained in the directory.
+    The `events` parameter is an ORed mask of :c:type:`uv_fs_event` elements.
+
+.. c:type:: uv_fs_event
+
+    Event types that :c:type:`uv_fs_event_t` handles monitor.
+
+    ::
+
+        enum uv_fs_event {
+            UV_RENAME = 1,
+            UV_CHANGE = 2
+        };
+
+.. c:type:: uv_fs_event_flags
+
+    Flags that can be passed to :c:func:`uv_fs_event_start` to control its
+    behavior.
+
+    ::
+
+        enum uv_fs_event_flags {
+            /*
+             * By default, if the fs event watcher is given a directory name, we will
+             * watch for all events in that directory. This flag overrides this behavior
+             * and makes fs_event report only changes to the directory entry itself. This
+             * flag does not affect individual files watched.
+             * This flag is currently not implemented on any backend.
+             */
+            UV_FS_EVENT_WATCH_ENTRY = 1,
+            /*
+             * By default uv_fs_event will try to use a kernel interface such as inotify
+             * or kqueue to detect events. This may not work on remote filesystems such
+             * as NFS mounts. This flag makes fs_event fall back to calling stat() on a
+             * regular interval.
+             * This flag is currently not implemented on any backend.
+             */
+            UV_FS_EVENT_STAT = 2,
+            /*
+             * By default, the event watcher, when watching a directory, ignores
+             * changes in its subdirectories.
+             * This flag will override this behaviour on platforms that support it.
+             */
+            UV_FS_EVENT_RECURSIVE = 4
+        };
+
+
+Public members
+^^^^^^^^^^^^^^
+
+N/A
+
+.. seealso:: The :c:type:`uv_handle_t` members also apply.
+
+
+API
+---
+
+.. c:function:: int uv_fs_event_init(uv_loop_t* loop, uv_fs_event_t* handle)
+
+    Initialize the handle.
+
+.. c:function:: int uv_fs_event_start(uv_fs_event_t* handle, uv_fs_event_cb cb, const char* path, unsigned int flags)
+
+    Start the handle with the given callback, which will watch the specified
+    `path` for changes. `flags` can be an ORed mask of :c:type:`uv_fs_event_flags`.
+
+.. c:function:: int uv_fs_event_stop(uv_fs_event_t* handle)
+
+    Stop the handle, the callback will no longer be called.
+
+.. c:function:: int uv_fs_event_getpath(uv_fs_event_t* handle, char* buf, size_t* len)
+
+    Get the path being monitored by the handle. The buffer must be preallocated
+    by the user. Returns 0 on success or an error code < 0 in case of failure.
+    On success, `buf` will contain the path and `len` its length. If the buffer
+    is not big enough, UV_ENOBUFS will be returned and `len` will be set to the
+    required size.
+
+.. seealso:: The :c:type:`uv_handle_t` API functions also apply.
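+
+As an illustrative sketch (not part of the reference above; error checking is
+omitted and the callback name is arbitrary), a watcher on the current
+directory could be set up like this:
+
+::
+
+    #include <stdio.h>
+    #include <uv.h>
+
+    static void on_change(uv_fs_event_t* handle, const char* filename,
+                          int events, int status) {
+        if (status < 0)
+            return;
+        /* `filename` is relative to the watched directory and may be NULL. */
+        if (events & UV_RENAME)
+            printf("renamed: %s\n", filename ? filename : "(unknown)");
+        if (events & UV_CHANGE)
+            printf("changed: %s\n", filename ? filename : "(unknown)");
+    }
+
+    int main(void) {
+        uv_fs_event_t handle;
+        uv_fs_event_init(uv_default_loop(), &handle);
+        uv_fs_event_start(&handle, on_change, ".", 0);
+        return uv_run(uv_default_loop(), UV_RUN_DEFAULT);
+    }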
diff --git a/deps/uv/docs/src/fs_poll.rst b/deps/uv/docs/src/fs_poll.rst new file mode 100644 index 00000000000..df310535214 --- /dev/null +++ b/deps/uv/docs/src/fs_poll.rst @@ -0,0 +1,69 @@ + +.. _fs_poll: + +:c:type:`uv_fs_poll_t` --- FS Poll handle +========================================= + +FS Poll handles allow the user to monitor a given path for changes. Unlike +:c:type:`uv_fs_event_t`, fs poll handles use `stat` to detect when a file has +changed so they can work on file systems where fs event handles can't. + + +Data types +---------- + +.. c:type:: uv_fs_poll_t + + FS Poll handle type. + +.. c:type:: void (*uv_fs_poll_cb)(uv_fs_poll_t* handle, int status, const uv_stat_t* prev, const uv_stat_t* curr) + + Callback passed to :c:func:`uv_fs_poll_start` which will be called repeatedly + after the handle is started, when any change happens to the monitored path. + + The callback is invoked with `status < 0` if `path` does not exist + or is inaccessible. The watcher is *not* stopped but your callback is + not called again until something changes (e.g. when the file is created + or the error reason changes). + + When `status == 0`, the callback receives pointers to the old and new + :c:type:`uv_stat_t` structs. They are valid for the duration of the + callback only. + + +Public members +^^^^^^^^^^^^^^ + +N/A + +.. seealso:: The :c:type:`uv_handle_t` members also apply. + + +API +--- + +.. c:function:: int uv_fs_poll_init(uv_loop_t* loop, uv_fs_poll_t* handle) + + Initialize the handle. + +.. c:function:: int uv_fs_poll_start(uv_fs_poll_t* handle, uv_fs_poll_cb poll_cb, const char* path, unsigned int interval) + + Check the file at `path` for changes every `interval` milliseconds. + + .. note:: + For maximum portability, use multi-second intervals. Sub-second intervals will not detect + all changes on many file systems. + +.. c:function:: int uv_fs_poll_stop(uv_fs_poll_t* handle) + + Stop the handle, the callback will no longer be called. + +.. c:function:: int uv_fs_poll_getpath(uv_fs_poll_t* handle, char* buf, size_t* len) + + Get the path being monitored by the handle. The buffer must be preallocated + by the user. Returns 0 on success or an error code < 0 in case of failure. + On success, `buf` will contain the path and `len` its length. If the buffer + is not big enough UV_ENOBUFS will be returned and len will be set to the + required size. + +.. seealso:: The :c:type:`uv_handle_t` API functions also apply. diff --git a/deps/uv/docs/src/handle.rst b/deps/uv/docs/src/handle.rst new file mode 100644 index 00000000000..6ba597a21ab --- /dev/null +++ b/deps/uv/docs/src/handle.rst @@ -0,0 +1,181 @@ + +.. _handle: + +:c:type:`uv_handle_t` --- Base handle +===================================== + +`uv_handle_t` is the base type for all libuv handle types. + +Structures are aligned so that any libuv handle can be cast to `uv_handle_t`. +All API functions defined here work with any handle type. + + +Data types +---------- + +.. c:type:: uv_handle_t + + The base libuv handle type. + +.. c:type:: uv_any_handle + + Union of all handle types. + +.. c:type:: void (*uv_alloc_cb)(uv_handle_t* handle, size_t suggested_size, uv_buf_t* buf) + + Type definition for callback passed to :c:func:`uv_read_start` and + :c:func:`uv_udp_recv_start`. The user must fill the supplied :c:type:`uv_buf_t` + structure with whatever size, as long as it's > 0. A suggested size (65536 at the moment) + is provided, but it doesn't need to be honored. 
Setting the buffer's length to 0
+    will trigger a ``UV_ENOBUFS`` error in the :c:type:`uv_udp_recv_cb` or
+    :c:type:`uv_read_cb` callback.
+
+.. c:type:: void (*uv_close_cb)(uv_handle_t* handle)
+
+    Type definition for callback passed to :c:func:`uv_close`.
+
+
+Public members
+^^^^^^^^^^^^^^
+
+.. c:member:: uv_loop_t* uv_handle_t.loop
+
+    Pointer to the :c:type:`uv_loop_t` where the handle is running. Readonly.
+
+.. c:member:: void* uv_handle_t.data
+
+    Space for user-defined arbitrary data. libuv does not use this field.
+
+
+API
+---
+
+.. c:function:: int uv_is_active(const uv_handle_t* handle)
+
+    Returns non-zero if the handle is active, zero if it's inactive. What
+    "active" means depends on the type of handle:
+
+    - A uv_async_t handle is always active and cannot be deactivated, except
+      by closing it with uv_close().
+
+    - A uv_pipe_t, uv_tcp_t, uv_udp_t, etc. handle - basically any handle that
+      deals with i/o - is active when it is doing something that involves i/o,
+      like reading, writing, connecting, accepting new connections, etc.
+
+    - A uv_check_t, uv_idle_t, uv_timer_t, etc. handle is active when it has
+      been started with a call to uv_check_start(), uv_idle_start(), etc.
+
+    Rule of thumb: if a handle of type `uv_foo_t` has a `uv_foo_start()`
+    function, then it's active from the moment that function is called.
+    Likewise, `uv_foo_stop()` deactivates the handle again.
+
+.. c:function:: int uv_is_closing(const uv_handle_t* handle)
+
+    Returns non-zero if the handle is closing or closed, zero otherwise.
+
+    .. note::
+        This function should only be used between the initialization of the handle and the
+        arrival of the close callback.
+
+.. c:function:: void uv_close(uv_handle_t* handle, uv_close_cb close_cb)
+
+    Request the handle to be closed. `close_cb` will be called asynchronously after
+    this call. This MUST be called on each handle before memory is released.
+
+    Handles that wrap file descriptors are closed immediately but
+    `close_cb` will still be deferred to the next iteration of the event loop.
+    It gives you a chance to free up any resources associated with the handle.
+
+    In-progress requests, like uv_connect_t or uv_write_t, are cancelled and
+    have their callbacks called asynchronously with status=UV_ECANCELED.
+
+.. c:function:: void uv_ref(uv_handle_t* handle)
+
+    Reference the given handle. References are idempotent, that is, if a handle
+    is already referenced calling this function again will have no effect.
+
+    See :ref:`refcount`.
+
+.. c:function:: void uv_unref(uv_handle_t* handle)
+
+    Un-reference the given handle. References are idempotent, that is, if a handle
+    is not referenced calling this function again will have no effect.
+
+    See :ref:`refcount`.
+
+.. c:function:: int uv_has_ref(const uv_handle_t* handle)
+
+    Returns non-zero if the handle is referenced, zero otherwise.
+
+    See :ref:`refcount`.
+
+.. c:function:: size_t uv_handle_size(uv_handle_type type)
+
+    Returns the size of the given handle type. Useful for FFI binding writers
+    who don't want to know the structure layout.
+
+
+Miscellaneous API functions
+---------------------------
+
+The following API functions take a :c:type:`uv_handle_t` argument but they work
+only for some handle types.
+
+.. c:function:: int uv_send_buffer_size(uv_handle_t* handle, int* value)
+
+    Gets or sets the size of the send buffer that the operating
+    system uses for the socket.
+ + If `*value` == 0, it will return the current send buffer size, + otherwise it will use `*value` to set the new send buffer size. + + This function works for TCP, pipe and UDP handles on Unix and for TCP and + UDP handles on Windows. + + .. note:: + Linux will set double the size and return double the size of the original set value. + +.. c:function:: int uv_recv_buffer_size(uv_handle_t* handle, int* value) + + Gets or sets the size of the receive buffer that the operating + system uses for the socket. + + If `*value` == 0, it will return the current receive buffer size, + otherwise it will use `*value` to set the new receive buffer size. + + This function works for TCP, pipe and UDP handles on Unix and for TCP and + UDP handles on Windows. + + .. note:: + Linux will set double the size and return double the size of the original set value. + +.. c:function:: int uv_fileno(const uv_handle_t* handle, uv_os_fd_t* fd) + + Gets the platform dependent file descriptor equivalent. + + The following handles are supported: TCP, pipes, TTY, UDP and poll. Passing + any other handle type will fail with `UV_EINVAL`. + + If a handle doesn't have an attached file descriptor yet or the handle + itself has been closed, this function will return `UV_EBADF`. + + .. warning:: + Be very careful when using this function. libuv assumes it's in control of the file + descriptor so any change to it may lead to malfunction. + + +.. _refcount: + +Reference counting +------------------ + +The libuv event loop (if run in the default mode) will run until there are no +active `and` referenced handles left. The user can force the loop to exit early +by unreferencing handles which are active, for example by calling :c:func:`uv_unref` +after calling :c:func:`uv_timer_start`. + +A handle can be referenced or unreferenced, the refcounting scheme doesn't use +a counter, so both operations are idempotent. + +All handles are referenced when active by default, see :c:func:`uv_is_active` +for a more detailed explanation on what being `active` involves. diff --git a/deps/uv/docs/src/idle.rst b/deps/uv/docs/src/idle.rst new file mode 100644 index 00000000000..81f51d2076e --- /dev/null +++ b/deps/uv/docs/src/idle.rst @@ -0,0 +1,54 @@ + +.. _idle: + +:c:type:`uv_idle_t` --- Idle handle +=================================== + +Idle handles will run the given callback once per loop iteration, right +before the :c:type:`uv_prepare_t` handles. + +.. note:: + The notable difference with prepare handles is that when there are active idle handles, + the loop will perform a zero timeout poll instead of blocking for i/o. + +.. warning:: + Despite the name, idle handles will get their callbacks called on every loop iteration, + not when the loop is actually "idle". + + +Data types +---------- + +.. c:type:: uv_idle_t + + Idle handle type. + +.. c:type:: void (*uv_idle_cb)(uv_idle_t* handle) + + Type definition for callback passed to :c:func:`uv_idle_start`. + + +Public members +^^^^^^^^^^^^^^ + +N/A + +.. seealso:: The :c:type:`uv_handle_t` members also apply. + + +API +--- + +.. c:function:: int uv_idle_init(uv_loop_t*, uv_idle_t* idle) + + Initialize the handle. + +.. c:function:: int uv_idle_start(uv_idle_t* idle, uv_idle_cb cb) + + Start the handle with the given callback. + +.. c:function:: int uv_idle_stop(uv_idle_t* idle) + + Stop the handle, the callback will no longer be called. + +.. seealso:: The :c:type:`uv_handle_t` API functions also apply. 
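+
+A minimal sketch (the iteration cap is arbitrary, chosen only so the example
+terminates; note the zero-timeout polling caveat above):
+
+::
+
+    #include <uv.h>
+
+    static int64_t counter = 0;
+
+    static void on_idle(uv_idle_t* handle) {
+        /* Called once per loop iteration while the handle is active. */
+        if (++counter >= 1000)
+            uv_idle_stop(handle);
+    }
+
+    int main(void) {
+        uv_idle_t idler;
+        uv_idle_init(uv_default_loop(), &idler);
+        uv_idle_start(&idler, on_idle);
+        /* Once the idle handle stops there are no active handles left,
+         * so uv_run() returns. */
+        return uv_run(uv_default_loop(), UV_RUN_DEFAULT);
+    }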
diff --git a/deps/uv/docs/src/index.rst b/deps/uv/docs/src/index.rst new file mode 100644 index 00000000000..9cdc494aecb --- /dev/null +++ b/deps/uv/docs/src/index.rst @@ -0,0 +1,94 @@ + +Welcome to the libuv API documentation +====================================== + +Overview +-------- + +libuv is a multi-platform support library with a focus on asynchronous I/O. It +was primarily developed for use by `Node.js`_, but it's also used by `Luvit`_, +`Julia`_, `pyuv`_, and `others`_. + +.. note:: + In case you find errors in this documentation you can help by sending + `pull requests <https://github.com/libuv/libuv>`_! + +.. _Node.js: http://nodejs.org +.. _Luvit: http://luvit.io +.. _Julia: http://julialang.org +.. _pyuv: https://github.com/saghul/pyuv +.. _others: https://github.com/libuv/libuv/wiki/Projects-that-use-libuv + + +Features +-------- + +* Full-featured event loop backed by epoll, kqueue, IOCP, event ports. +* Asynchronous TCP and UDP sockets +* Asynchronous DNS resolution +* Asynchronous file and file system operations +* File system events +* ANSI escape code controlled TTY +* IPC with socket sharing, using Unix domain sockets or named pipes (Windows) +* Child processes +* Thread pool +* Signal handling +* High resolution clock +* Threading and synchronization primitives + + +Downloads +--------- + +libuv can be downloaded from `here <http://dist.libuv.org/dist/>`_. + + +Installation +------------ + +Installation instructions can be found on `the README <https://github.com/libuv/libuv/blob/master/README.md>`_. + + +Upgrading +--------- + +Migration guides for different libuv versions, starting with 1.0. + +.. toctree:: + :maxdepth: 1 + + migration_010_100 + + +Documentation +------------- + +.. toctree:: + :maxdepth: 1 + + design + errors + loop + handle + request + timer + prepare + check + idle + async + poll + signal + process + stream + tcp + pipe + tty + udp + fs_event + fs_poll + fs + threadpool + dns + dll + threading + misc diff --git a/deps/uv/docs/src/loop.rst b/deps/uv/docs/src/loop.rst new file mode 100644 index 00000000000..0a9e8a60869 --- /dev/null +++ b/deps/uv/docs/src/loop.rst @@ -0,0 +1,156 @@ + +.. _loop: + +:c:type:`uv_loop_t` --- Event loop +================================== + +The event loop is the central part of libuv's functionality. It takes care +of polling for i/o and scheduling callbacks to be run based on different sources +of events. + + +Data types +---------- + +.. c:type:: uv_loop_t + + Loop data type. + +.. c:type:: uv_run_mode + + Mode used to run the loop with :c:func:`uv_run`. + + :: + + typedef enum { + UV_RUN_DEFAULT = 0, + UV_RUN_ONCE, + UV_RUN_NOWAIT + } uv_run_mode; + +.. c:type:: void (*uv_walk_cb)(uv_handle_t* handle, void* arg) + + Type definition for callback passed to :c:func:`uv_walk`. + + +Public members +^^^^^^^^^^^^^^ + +.. c:member:: void* uv_loop_t.data + + Space for user-defined arbitrary data. libuv does not use this field. libuv does, however, + initialize it to NULL in :c:func:`uv_loop_init`, and it poisons the value (on debug builds) + on :c:func:`uv_loop_close`. + + +API +--- + +.. c:function:: int uv_loop_init(uv_loop_t* loop) + + Initializes the given `uv_loop_t` structure. + +.. c:function:: int uv_loop_configure(uv_loop_t* loop, uv_loop_option option, ...) + + Set additional loop options. You should normally call this before the + first call to :c:func:`uv_run` unless mentioned otherwise. + + Returns 0 on success or a UV_E* error code on failure. 
Be prepared to + handle UV_ENOSYS; it means the loop option is not supported by the platform. + + Supported options: + + - UV_LOOP_BLOCK_SIGNAL: Block a signal when polling for new events. The + second argument to :c:func:`uv_loop_configure` is the signal number. + + This operation is currently only implemented for SIGPROF signals, + to suppress unnecessary wakeups when using a sampling profiler. + Requesting other signals will fail with UV_EINVAL. + +.. c:function:: int uv_loop_close(uv_loop_t* loop) + + Closes all internal loop resources. This function must only be called once + the loop has finished its execution or it will return UV_EBUSY. After this + function returns the user shall free the memory allocated for the loop. + +.. c:function:: uv_loop_t* uv_default_loop(void) + + Returns the initialized default loop. It may return NULL in case of + allocation failure. + +.. c:function:: int uv_run(uv_loop_t* loop, uv_run_mode mode) + + This function runs the event loop. It will act differently depending on the + specified mode: + + - UV_RUN_DEFAULT: Runs the event loop until there are no more active and + referenced handles or requests. Always returns zero. + - UV_RUN_ONCE: Poll for i/o once. Note that this function blocks if + there are no pending callbacks. Returns zero when done (no active handles + or requests left), or non-zero if more callbacks are expected (meaning + you should run the event loop again sometime in the future). + - UV_RUN_NOWAIT: Poll for i/o once but don't block if there are no + pending callbacks. Returns zero if done (no active handles + or requests left), or non-zero if more callbacks are expected (meaning + you should run the event loop again sometime in the future). + +.. c:function:: int uv_loop_alive(const uv_loop_t* loop) + + Returns non-zero if there are active handles or requests in the loop. + +.. c:function:: void uv_stop(uv_loop_t* loop) + + Stop the event loop, causing :c:func:`uv_run` to end as soon as + possible. This will happen no sooner than the next loop iteration. + If this function was called before blocking for i/o, the loop won't block + for i/o on this iteration. + +.. c:function:: size_t uv_loop_size(void) + + Returns the size of the `uv_loop_t` structure. Useful for FFI binding + writers who don't want to know the structure layout. + +.. c:function:: int uv_backend_fd(const uv_loop_t* loop) + + Get backend file descriptor. Only kqueue, epoll and event ports are + supported. + + This can be used in conjunction with `uv_run(loop, UV_RUN_NOWAIT)` to + poll in one thread and run the event loop's callbacks in another; see + test/test-embed.c for an example. + + .. note:: + Embedding a kqueue fd in another kqueue pollset doesn't work on all platforms. It's not + an error to add the fd but it never generates events. + +.. c:function:: int uv_backend_timeout(const uv_loop_t* loop) + + Get the poll timeout. The return value is in milliseconds, or -1 for no + timeout. + +.. c:function:: uint64_t uv_now(const uv_loop_t* loop) + + Return the current timestamp in milliseconds. The timestamp is cached at + the start of the event loop tick; see :c:func:`uv_update_time` for details + and rationale. + + The timestamp increases monotonically from some arbitrary point in time. + Don't make assumptions about the starting point; you will only get + disappointed. + + .. note:: + Use :c:func:`uv_hrtime` if you need sub-millisecond granularity. + +.. c:function:: void uv_update_time(uv_loop_t* loop) + + Update the event loop's concept of "now".
Libuv caches the current time + at the start of the event loop tick in order to reduce the number of + time-related system calls. + + You won't normally need to call this function unless you have callbacks + that block the event loop for longer periods of time, where "longer" is + somewhat subjective but probably on the order of a millisecond or more. + +.. c:function:: void uv_walk(uv_loop_t* loop, uv_walk_cb walk_cb, void* arg) + + Walk the list of handles: `walk_cb` will be executed with the given `arg`. diff --git a/deps/uv/docs/src/migration_010_100.rst b/deps/uv/docs/src/migration_010_100.rst new file mode 100644 index 00000000000..bb6ac1a8092 --- /dev/null +++ b/deps/uv/docs/src/migration_010_100.rst @@ -0,0 +1,244 @@ + +.. _migration_010_100: + +libuv 0.10 -> 1.0.0 migration guide +=================================== + +Some APIs changed quite a bit throughout the 1.0.0 development process. Here +is a migration guide for the most significant changes that happened after 0.10 +was released. + + +Loop initialization and closing +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ + +In libuv 0.10 (and previous versions), loops were created with `uv_loop_new`, which +allocated memory for a new loop and initialized it; and destroyed with `uv_loop_delete`, +which destroyed the loop and freed the memory. Starting with 1.0, those are deprecated +and the user is responsible for allocating the memory and then initializing the loop. + +libuv 0.10 + +:: + + uv_loop_t* loop = uv_loop_new(); + ... + uv_loop_delete(loop); + +libuv 1.0 + +:: + + uv_loop_t* loop = malloc(sizeof *loop); + uv_loop_init(loop); + ... + uv_loop_close(loop); + free(loop); + +.. note:: + Error handling was omitted for brevity. Check the documentation for :c:func:`uv_loop_init` + and :c:func:`uv_loop_close`. + + +Error handling +~~~~~~~~~~~~~~ + +Error handling had a major overhaul in libuv 1.0. In general, functions and status parameters +would get 0 for success and -1 for failure on libuv 0.10, and the user had to use `uv_last_error` +to fetch the error code, which was a positive number. + +In 1.0, functions and status parameters contain the actual error code, which is 0 for success, or +a negative number in case of error. + +libuv 0.10 + +:: + + ... assume 'server' is a TCP server which is already listening + r = uv_listen((uv_stream_t*) server, 511, NULL); + if (r == -1) { + uv_err_t err = uv_last_error(uv_default_loop()); + /* err.code contains UV_EADDRINUSE */ + } + +libuv 1.0 + +:: + + ... assume 'server' is a TCP server which is already listening + r = uv_listen((uv_stream_t*) server, 511, NULL); + if (r < 0) { + /* r contains UV_EADDRINUSE */ + } + + +Threadpool changes +~~~~~~~~~~~~~~~~~~ + +In libuv 0.10 Unix used a threadpool which defaulted to 4 threads, while Windows used the +`QueueUserWorkItem` API, which uses a Windows internal threadpool, which defaults to 512 +threads per process. + +In 1.0, we unified both implementations, so Windows now uses the same implementation Unix +does. The threadpool size can be set by exporting the ``UV_THREADPOOL_SIZE`` environment +variable. See :c:ref:`threadpool`. 
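+For illustration only, a minimal sketch of scheduling work on the unified
+threadpool (the callback names are placeholders and error handling is
+omitted):
+
+::
+
+    void work_cb(uv_work_t* req) {
+        /* Runs on a thread from the (now cross-platform) threadpool. */
+    }
+
+    void after_work_cb(uv_work_t* req, int status) {
+        /* Runs on the loop thread; status is UV_ECANCELED if the work
+         * was cancelled with uv_cancel(). */
+    }
+
+    uv_work_t req;
+    uv_queue_work(uv_default_loop(), &req, work_cb, after_work_cb);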
+ + +Allocation callback API change +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ + +In libuv 0.10 the callback had to return a filled :c:type:`uv_buf_t` by value: + +:: + + uv_buf_t alloc_cb(uv_handle_t* handle, size_t size) { + return uv_buf_init(malloc(size), size); + } + +In libuv 1.0 a pointer to a buffer is passed to the callback, which the user +needs to fill: + +:: + + void alloc_cb(uv_handle_t* handle, size_t size, uv_buf_t* buf) { + buf->base = malloc(size); + buf->len = size; + } + + +Unification of IPv4 / IPv6 APIs +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ + +libuv 1.0 unified the IPv4 and IPv6 APIs. There is no longer a `uv_tcp_bind` and `uv_tcp_bind6` +duality; there is only :c:func:`uv_tcp_bind` now. + +IPv4 functions took ``struct sockaddr_in`` structures by value, and IPv6 functions took +``struct sockaddr_in6``. Now functions take a ``struct sockaddr*`` (note it's a pointer). +It can be stack allocated. + +libuv 0.10 + +:: + + struct sockaddr_in addr = uv_ip4_addr("0.0.0.0", 1234); + ... + uv_tcp_bind(&server, addr); + +libuv 1.0 + +:: + + struct sockaddr_in addr; + uv_ip4_addr("0.0.0.0", 1234, &addr); + ... + uv_tcp_bind(&server, (const struct sockaddr*) &addr, 0); + +The IPv4 and IPv6 struct creating functions (:c:func:`uv_ip4_addr` and :c:func:`uv_ip6_addr`) +have also changed; make sure you check the documentation. + +.. note:: + This change applies to all functions that made a distinction between IPv4 and IPv6 + addresses. + + +Streams / UDP data receive callback API change +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ + +The streams and UDP data receive callbacks now get a pointer to a :c:type:`uv_buf_t` buffer, +not a structure by value. + +libuv 0.10 + +:: + + void on_read(uv_stream_t* handle, + ssize_t nread, + uv_buf_t buf) { + ... + } + + void recv_cb(uv_udp_t* handle, + ssize_t nread, + uv_buf_t buf, + struct sockaddr* addr, + unsigned flags) { + ... + } + +libuv 1.0 + +:: + + void on_read(uv_stream_t* handle, + ssize_t nread, + const uv_buf_t* buf) { + ... + } + + void recv_cb(uv_udp_t* handle, + ssize_t nread, + const uv_buf_t* buf, + const struct sockaddr* addr, + unsigned flags) { + ... + } + + +Receiving handles over pipes API change +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ + +In libuv 0.10 (and earlier versions) the `uv_read2_start` function was used to start reading +data on a pipe, which could also result in the reception of handles over it. The callback +for that function looked like this: + +:: + + void on_read(uv_pipe_t* pipe, + ssize_t nread, + uv_buf_t buf, + uv_handle_type pending) { + ... + } + +In libuv 1.0, `uv_read2_start` was removed, and the user needs to check if there are pending +handles using :c:func:`uv_pipe_pending_count` and :c:func:`uv_pipe_pending_type` while in +the read callback: + +:: + + void on_read(uv_stream_t* handle, + ssize_t nread, + const uv_buf_t* buf) { + ... + while (uv_pipe_pending_count((uv_pipe_t*) handle) != 0) { + pending = uv_pipe_pending_type((uv_pipe_t*) handle); + ... + } + ... + } + + +Extracting the file descriptor out of a handle +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ + +While it wasn't supported by the API, users often accessed the libuv internals in +order to get access to the file descriptor of a TCP handle, for example. + +:: + + fd = handle->io_watcher.fd; + +This is now properly exposed through the :c:func:`uv_fileno` function. + + +uv_fs_readdir rename and API change +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ + +`uv_fs_readdir` returned a list of strings in the `req->ptr` field upon completion in +libuv 0.10.
In 1.0, this function got renamed to :c:func:`uv_fs_scandir`, since it's +actually implemented using ``scandir(3)``. + +In addition, instead of allocating a full list of strings, the user is able to get one +result at a time by using the :c:func:`uv_fs_scandir_next` function. This function +does not need to make a roundtrip to the threadpool, because libuv will keep the +list of *dents* returned by ``scandir(3)`` around. diff --git a/deps/uv/docs/src/misc.rst b/deps/uv/docs/src/misc.rst new file mode 100644 index 00000000000..4b810fe0847 --- /dev/null +++ b/deps/uv/docs/src/misc.rst @@ -0,0 +1,228 @@ + +.. _misc: + +Miscellaneous utilities +======================= + +This section contains miscellaneous functions that don't really belong in any +other section. + + +Data types +---------- + +.. c:type:: uv_buf_t + + Buffer data type. + +.. c:type:: uv_file + + Cross platform representation of a file handle. + +.. c:type:: uv_os_sock_t + + Cross platform representation of a socket handle. + +.. c:type:: uv_os_fd_t + + Abstract representation of a file descriptor. On Unix systems this is a + `typedef` of `int` and on Windows a `HANDLE`. + +.. c:type:: uv_rusage_t + + Data type for resource usage results. + + :: + + typedef struct { + uv_timeval_t ru_utime; /* user CPU time used */ + uv_timeval_t ru_stime; /* system CPU time used */ + uint64_t ru_maxrss; /* maximum resident set size */ + uint64_t ru_ixrss; /* integral shared memory size */ + uint64_t ru_idrss; /* integral unshared data size */ + uint64_t ru_isrss; /* integral unshared stack size */ + uint64_t ru_minflt; /* page reclaims (soft page faults) */ + uint64_t ru_majflt; /* page faults (hard page faults) */ + uint64_t ru_nswap; /* swaps */ + uint64_t ru_inblock; /* block input operations */ + uint64_t ru_oublock; /* block output operations */ + uint64_t ru_msgsnd; /* IPC messages sent */ + uint64_t ru_msgrcv; /* IPC messages received */ + uint64_t ru_nsignals; /* signals received */ + uint64_t ru_nvcsw; /* voluntary context switches */ + uint64_t ru_nivcsw; /* involuntary context switches */ + } uv_rusage_t; + +.. c:type:: uv_cpu_info_t + + Data type for CPU information. + + :: + + typedef struct uv_cpu_info_s { + char* model; + int speed; + struct uv_cpu_times_s { + uint64_t user; + uint64_t nice; + uint64_t sys; + uint64_t idle; + uint64_t irq; + } cpu_times; + } uv_cpu_info_t; + +.. c:type:: uv_interface_address_t + + Data type for interface addresses. + + :: + + typedef struct uv_interface_address_s { + char* name; + char phys_addr[6]; + int is_internal; + union { + struct sockaddr_in address4; + struct sockaddr_in6 address6; + } address; + union { + struct sockaddr_in netmask4; + struct sockaddr_in6 netmask6; + } netmask; + } uv_interface_address_t; + + +API +--- + +.. c:function:: uv_handle_type uv_guess_handle(uv_file file) + + Used to detect what type of stream should be used with a given file + descriptor. Usually this will be used during initialization to guess the + type of the stdio streams. + + For ``isatty()`` functionality use this function and test for ``UV_TTY``. + +.. c:function:: unsigned int uv_version(void) + + Returns the libuv version packed into a single integer. 8 bits are used for + each component, with the patch number stored in the 8 least significant + bits. E.g. for libuv 1.2.3 this would return 0x010203. + +.. c:function:: const char* uv_version_string(void) + + Returns the libuv version number as a string. For non-release versions + "-pre" is appended, so the version number could be "1.2.3-pre". + +.. 
c:function:: uv_buf_t uv_buf_init(char* base, unsigned int len) + + Constructor for :c:type:`uv_buf_t`. + + Due to platform differences the user cannot rely on the ordering of the + `base` and `len` members of the uv_buf_t struct. The user is responsible for + freeing `base` after the uv_buf_t is done. The resulting struct is passed by value. + +.. c:function:: char** uv_setup_args(int argc, char** argv) + + Store the program arguments. Required for getting / setting the process title. + +.. c:function:: int uv_get_process_title(char* buffer, size_t size) + + Gets the title of the current process. + +.. c:function:: int uv_set_process_title(const char* title) + + Sets the current process title. + +.. c:function:: int uv_resident_set_memory(size_t* rss) + + Gets the resident set size (RSS) for the current process. + +.. c:function:: int uv_uptime(double* uptime) + + Gets the current system uptime. + +.. c:function:: int uv_getrusage(uv_rusage_t* rusage) + + Gets the resource usage measures for the current process. + + .. note:: + On Windows not all fields are set; the unsupported fields are filled with zeroes. + +.. c:function:: int uv_cpu_info(uv_cpu_info_t** cpu_infos, int* count) + + Gets information about the CPUs on the system. The `cpu_infos` array will + have `count` elements and needs to be freed with :c:func:`uv_free_cpu_info`. + +.. c:function:: void uv_free_cpu_info(uv_cpu_info_t* cpu_infos, int count) + + Frees the `cpu_infos` array previously allocated with :c:func:`uv_cpu_info`. + +.. c:function:: int uv_interface_addresses(uv_interface_address_t** addresses, int* count) + + Gets address information about the network interfaces on the system. An + array of `count` elements is allocated and returned in `addresses`. It must + be freed by the user by calling :c:func:`uv_free_interface_addresses`. + +.. c:function:: void uv_free_interface_addresses(uv_interface_address_t* addresses, int count) + + Free an array of :c:type:`uv_interface_address_t` which was returned by + :c:func:`uv_interface_addresses`. + +.. c:function:: void uv_loadavg(double avg[3]) + + Gets the load average. See: `<http://en.wikipedia.org/wiki/Load_(computing)>`_ + + .. note:: + Returns [0,0,0] on Windows (i.e., it's not implemented). + +.. c:function:: int uv_ip4_addr(const char* ip, int port, struct sockaddr_in* addr) + + Convert a string containing an IPv4 address to a binary structure. + +.. c:function:: int uv_ip6_addr(const char* ip, int port, struct sockaddr_in6* addr) + + Convert a string containing an IPv6 address to a binary structure. + +.. c:function:: int uv_ip4_name(const struct sockaddr_in* src, char* dst, size_t size) + + Convert a binary structure containing an IPv4 address to a string. + +.. c:function:: int uv_ip6_name(const struct sockaddr_in6* src, char* dst, size_t size) + + Convert a binary structure containing an IPv6 address to a string. + +.. c:function:: int uv_inet_ntop(int af, const void* src, char* dst, size_t size) +.. c:function:: int uv_inet_pton(int af, const char* src, void* dst) + + Cross-platform IPv6-capable implementation of the 'standard' ``inet_ntop()`` + and ``inet_pton()`` functions. On success they return 0. In case of error + the target `dst` pointer is unmodified. + +.. c:function:: int uv_exepath(char* buffer, size_t* size) + + Gets the executable path. + +.. c:function:: int uv_cwd(char* buffer, size_t* size) + + Gets the current working directory. + +.. c:function:: int uv_chdir(const char* dir) + + Changes the current working directory. + +.. c:function::
uint64_t uv_get_free_memory(void) +.. c:function:: uint64_t uv_get_total_memory(void) + + Gets memory information (in bytes). + +.. c:function:: uint64_t uv_hrtime(void) + + Returns the current high-resolution real time. This is expressed in + nanoseconds. It is relative to an arbitrary time in the past. It is not + related to the time of day and therefore not subject to clock drift. The + primary use is for measuring performance between intervals. + + .. note:: + Not every platform can support nanosecond resolution; however, this value will always + be in nanoseconds. diff --git a/deps/uv/docs/src/pipe.rst b/deps/uv/docs/src/pipe.rst new file mode 100644 index 00000000000..614bb2e3b1f --- /dev/null +++ b/deps/uv/docs/src/pipe.rst @@ -0,0 +1,86 @@ + +.. _pipe: + +:c:type:`uv_pipe_t` --- Pipe handle +=================================== + +Pipe handles provide an abstraction over local domain sockets on Unix and named +pipes on Windows. + +:c:type:`uv_pipe_t` is a 'subclass' of :c:type:`uv_stream_t`. + + +Data types +---------- + +.. c:type:: uv_pipe_t + + Pipe handle type. + + +Public members +^^^^^^^^^^^^^^ + +N/A + +.. seealso:: The :c:type:`uv_stream_t` members also apply. + + +API +--- + +.. c:function:: int uv_pipe_init(uv_loop_t*, uv_pipe_t* handle, int ipc) + + Initialize a pipe handle. The `ipc` argument is a boolean to indicate if + this pipe will be used for handle passing between processes. + +.. c:function:: int uv_pipe_open(uv_pipe_t*, uv_file file) + + Open an existing file descriptor or HANDLE as a pipe. + + .. note:: + The user is responsible for setting the file descriptor in non-blocking mode. + +.. c:function:: int uv_pipe_bind(uv_pipe_t* handle, const char* name) + + Bind the pipe to a file path (Unix) or a name (Windows). + + .. note:: + Paths on Unix get truncated to ``sizeof(sockaddr_un.sun_path)`` bytes, typically between + 92 and 108 bytes. + +.. c:function:: void uv_pipe_connect(uv_connect_t* req, uv_pipe_t* handle, const char* name, uv_connect_cb cb) + + Connect to the Unix domain socket or the named pipe. + + .. note:: + Paths on Unix get truncated to ``sizeof(sockaddr_un.sun_path)`` bytes, typically between + 92 and 108 bytes. + +.. c:function:: int uv_pipe_getsockname(const uv_pipe_t* handle, char* buf, size_t* len) + + Get the name of the Unix domain socket or the named pipe. + + A preallocated buffer must be provided. The len parameter holds the length + of the buffer and it's set to the number of bytes written to the buffer on + output. If the buffer is not big enough ``UV_ENOBUFS`` will be returned and + len will contain the required size. + +.. c:function:: void uv_pipe_pending_instances(uv_pipe_t* handle, int count) + + Set the number of pending pipe instance handles when the pipe server is + waiting for connections. + + .. note:: + This setting applies to Windows only. + +.. c:function:: int uv_pipe_pending_count(uv_pipe_t* handle) +.. c:function:: uv_handle_type uv_pipe_pending_type(uv_pipe_t* handle) + + Used to receive handles over IPC pipes. + + First - call :c:func:`uv_pipe_pending_count`, if it's > 0 then initialize + a handle of the given `type`, returned by :c:func:`uv_pipe_pending_type` + and call ``uv_accept(pipe, handle)``. + +.. seealso:: The :c:type:`uv_stream_t` API functions also apply. diff --git a/deps/uv/docs/src/poll.rst b/deps/uv/docs/src/poll.rst new file mode 100644 index 00000000000..f34842256b6 --- /dev/null +++ b/deps/uv/docs/src/poll.rst @@ -0,0 +1,99 @@ + +.. 
_poll: + +:c:type:`uv_poll_t` --- Poll handle +=================================== + +Poll handles are used to watch file descriptors for readability and +writability, similar to the purpose of poll(2). + +The purpose of poll handles is to enable integrating external libraries that +rely on the event loop to signal them about socket status changes, like +c-ares or libssh2. Using uv_poll_t for any other purpose is not recommended; +:c:type:`uv_tcp_t`, :c:type:`uv_udp_t`, etc. provide an implementation that is faster and +more scalable than what can be achieved with :c:type:`uv_poll_t`, especially on +Windows. + +It is possible that poll handles occasionally signal that a file descriptor is +readable or writable even when it isn't. The user should therefore always +be prepared to handle EAGAIN or equivalent when attempting to read from or +write to the fd. + +It is not okay to have multiple active poll handles for the same socket; this +can cause libuv to busyloop or otherwise malfunction. + +The user should not close a file descriptor while it is being polled by an +active poll handle. This can cause the handle to report an error, +but it might also start polling another socket. However, the fd can be safely +closed immediately after a call to :c:func:`uv_poll_stop` or :c:func:`uv_close`. + +.. note:: + On Windows only sockets can be polled with poll handles. On Unix any file + descriptor that would be accepted by poll(2) can be used. + + +Data types +---------- + +.. c:type:: uv_poll_t + + Poll handle type. + +.. c:type:: void (*uv_poll_cb)(uv_poll_t* handle, int status, int events) + + Type definition for callback passed to :c:func:`uv_poll_start`. + +.. c:type:: uv_poll_event + + Poll event types + + :: + + enum uv_poll_event { + UV_READABLE = 1, + UV_WRITABLE = 2 + }; + + +Public members +^^^^^^^^^^^^^^ + +N/A + +.. seealso:: The :c:type:`uv_handle_t` members also apply. + + +API +--- + +.. c:function:: int uv_poll_init(uv_loop_t* loop, uv_poll_t* handle, int fd) + + Initialize the handle using a file descriptor. + +.. c:function:: int uv_poll_init_socket(uv_loop_t* loop, uv_poll_t* handle, uv_os_sock_t socket) + + Initialize the handle using a socket descriptor. On Unix this is identical + to :c:func:`uv_poll_init`. On Windows it takes a SOCKET handle. + +.. c:function:: int uv_poll_start(uv_poll_t* handle, int events, uv_poll_cb cb) + + Starts polling the file descriptor. `events` is a bitmask made up + of UV_READABLE and UV_WRITABLE. As soon as an event is detected the callback + will be called with `status` set to 0, and the detected events set on the + `events` field. + + If an error happens while polling, `status` will be < 0 and correspond + to one of the UV_E* error codes (see :ref:`errors`). The user should + not close the socket while the handle is active. If the user does that + anyway, the callback *may* be called reporting an error status, but this + is **not** guaranteed. + + .. note:: + Calling :c:func:`uv_poll_start` on a handle that is already active is fine. Doing so + will update the events mask that is being watched for. + +.. c:function:: int uv_poll_stop(uv_poll_t* poll) + + Stop polling the file descriptor, the callback will no longer be called. + +.. seealso:: The :c:type:`uv_handle_t` API functions also apply. diff --git a/deps/uv/docs/src/prepare.rst b/deps/uv/docs/src/prepare.rst new file mode 100644 index 00000000000..aca58155809 --- /dev/null +++ b/deps/uv/docs/src/prepare.rst @@ -0,0 +1,46 @@ + +.. 
_prepare: + +:c:type:`uv_prepare_t` --- Prepare handle +========================================= + +Prepare handles will run the given callback once per loop iteration, right +before polling for i/o. + + +Data types +---------- + +.. c:type:: uv_prepare_t + + Prepare handle type. + +.. c:type:: void (*uv_prepare_cb)(uv_prepare_t* handle) + + Type definition for callback passed to :c:func:`uv_prepare_start`. + + +Public members +^^^^^^^^^^^^^^ + +N/A + +.. seealso:: The :c:type:`uv_handle_t` members also apply. + + +API +--- + +.. c:function:: int uv_prepare_init(uv_loop_t* loop, uv_prepare_t* prepare) + + Initialize the handle. + +.. c:function:: int uv_prepare_start(uv_prepare_t* prepare, uv_prepare_cb cb) + + Start the handle with the given callback. + +.. c:function:: int uv_prepare_stop(uv_prepare_t* prepare) + + Stop the handle, the callback will no longer be called. + +.. seealso:: The :c:type:`uv_handle_t` API functions also apply. diff --git a/deps/uv/docs/src/process.rst b/deps/uv/docs/src/process.rst new file mode 100644 index 00000000000..b0380ddfb72 --- /dev/null +++ b/deps/uv/docs/src/process.rst @@ -0,0 +1,225 @@ + +.. _process: + +:c:type:`uv_process_t` --- Process handle +========================================= + +Process handles will spawn a new process and allow the user to control it and +establish communication channels with it using streams. + + +Data types +---------- + +.. c:type:: uv_process_t + + Process handle type. + +.. c:type:: uv_process_options_t + + Options for spawning the process (passed to :c:func:`uv_spawn`. + + :: + + typedef struct uv_process_options_s { + uv_exit_cb exit_cb; + const char* file; + char** args; + char** env; + const char* cwd; + unsigned int flags; + int stdio_count; + uv_stdio_container_t* stdio; + uv_uid_t uid; + uv_gid_t gid; + } uv_process_options_t; + +.. c:type:: void (*uv_exit_cb)(uv_process_t*, int64_t exit_status, int term_signal) + + Type definition for callback passed in :c:type:`uv_process_options_t` which + will indicate the exit status and the signal that caused the process to + terminate, if any. + +.. c:type:: uv_process_flags + + Flags to be set on the flags field of :c:type:`uv_process_options_t`. + + :: + + enum uv_process_flags { + /* + * Set the child process' user id. + */ + UV_PROCESS_SETUID = (1 << 0), + /* + * Set the child process' group id. + */ + UV_PROCESS_SETGID = (1 << 1), + /* + * Do not wrap any arguments in quotes, or perform any other escaping, when + * converting the argument list into a command line string. This option is + * only meaningful on Windows systems. On Unix it is silently ignored. + */ + UV_PROCESS_WINDOWS_VERBATIM_ARGUMENTS = (1 << 2), + /* + * Spawn the child process in a detached state - this will make it a process + * group leader, and will effectively enable the child to keep running after + * the parent exits. Note that the child process will still keep the + * parent's event loop alive unless the parent process calls uv_unref() on + * the child's process handle. + */ + UV_PROCESS_DETACHED = (1 << 3), + /* + * Hide the subprocess console window that would normally be created. This + * option is only meaningful on Windows systems. On Unix it is silently + * ignored. + */ + UV_PROCESS_WINDOWS_HIDE = (1 << 4) + }; + +.. c:type:: uv_stdio_container_t + + Container for each stdio handle or fd passed to a child process. + + :: + + typedef struct uv_stdio_container_s { + uv_stdio_flags flags; + union { + uv_stream_t* stream; + int fd; + } data; + } uv_stdio_container_t; + +.. 
c:type:: uv_stdio_flags + + Flags specifying how a stdio should be transmitted to the child process. + + :: + + typedef enum { + UV_IGNORE = 0x00, + UV_CREATE_PIPE = 0x01, + UV_INHERIT_FD = 0x02, + UV_INHERIT_STREAM = 0x04, + /* + * When UV_CREATE_PIPE is specified, UV_READABLE_PIPE and UV_WRITABLE_PIPE + * determine the direction of flow, from the child process' perspective. Both + * flags may be specified to create a duplex data stream. + */ + UV_READABLE_PIPE = 0x10, + UV_WRITABLE_PIPE = 0x20 + } uv_stdio_flags; + + +Public members +^^^^^^^^^^^^^^ + +.. c:member:: uv_process_t.pid + + The PID of the spawned process. It's set after calling :c:func:`uv_spawn`. + +.. note:: + The :c:type:`uv_handle_t` members also apply. + +.. c:member:: uv_process_options_t.exit_cb + + Callback called after the process exits. + +.. c:member:: uv_process_options_t.file + + Path pointing to the program to be executed. + +.. c:member:: uv_process_options_t.args + + Command line arguments. args[0] should be the path to the program. On + Windows this uses `CreateProcess` which concatenates the arguments into a + string; this can cause some strange errors. See the + ``UV_PROCESS_WINDOWS_VERBATIM_ARGUMENTS`` flag on :c:type:`uv_process_flags`. + +.. c:member:: uv_process_options_t.env + + Environment for the new process. If NULL, the parent's environment is used. + +.. c:member:: uv_process_options_t.cwd + + Current working directory for the subprocess. + +.. c:member:: uv_process_options_t.flags + + Various flags that control how :c:func:`uv_spawn` behaves. See + :c:type:`uv_process_flags`. + +.. c:member:: uv_process_options_t.stdio_count +.. c:member:: uv_process_options_t.stdio + + The `stdio` field points to an array of :c:type:`uv_stdio_container_t` + structs that describe the file descriptors that will be made available to + the child process. The convention is that stdio[0] points to stdin, + fd 1 is used for stdout, and fd 2 is stderr. + + .. note:: + On Windows file descriptors greater than 2 are available to the child process only if + the child process uses the MSVCRT runtime. + +.. c:member:: uv_process_options_t.uid +.. c:member:: uv_process_options_t.gid + + Libuv can change the child process' user/group id. This happens only when + the appropriate bits are set in the flags field. + + .. note:: + This is not supported on Windows; :c:func:`uv_spawn` will fail and set the error + to ``UV_ENOTSUP``. + +.. c:member:: uv_stdio_container_t.flags + + Flags specifying how the stdio container should be passed to the child. See + :c:type:`uv_stdio_flags`. + +.. c:member:: uv_stdio_container_t.data + + Union containing either the stream or fd to be passed on to the child + process. + + +API +--- + +.. c:function:: void uv_disable_stdio_inheritance(void) + + Disables inheritance for file descriptors / handles that this process + inherited from its parent. The effect is that child processes spawned by + this process don't accidentally inherit these handles. + + It is recommended to call this function as early in your program as possible, + before the inherited file descriptors can be closed or duplicated. + + .. note:: + This function works on a best-effort basis: there is no guarantee that libuv can discover + all file descriptors that were inherited. In general it does a better job on Windows than + it does on Unix. + +.. c:function:: int uv_spawn(uv_loop_t* loop, uv_process_t* handle, const uv_process_options_t* options) + + Initializes the process handle and starts the process.
If the process is + successfully spawned, this function will return 0. Otherwise, the + negative error code corresponding to the reason it couldn't spawn is + returned. + + Possible reasons for failing to spawn would include (but not be limited to) + the file to execute not existing, not having permissions to use the setuid or + setgid specified, or not having enough memory to allocate for the new + process. + +.. c:function:: int uv_process_kill(uv_process_t* handle, int signum) + + Sends the specified signal to the given process handle. Check the documentation + on :ref:`signal` for signal support, especially on Windows. + +.. c:function:: int uv_kill(int pid, int signum) + + Sends the specified signal to the given PID. Check the documentation + on :ref:`signal` for signal support, especially on Windows. + +.. seealso:: The :c:type:`uv_handle_t` API functions also apply. diff --git a/deps/uv/docs/src/request.rst b/deps/uv/docs/src/request.rst new file mode 100644 index 00000000000..2f58d46b143 --- /dev/null +++ b/deps/uv/docs/src/request.rst @@ -0,0 +1,82 @@ + +.. _request: + +:c:type:`uv_req_t` --- Base request +=================================== + +`uv_req_t` is the base type for all libuv request types. + +Structures are aligned so that any libuv request can be cast to `uv_req_t`. +All API functions defined here work with any request type. + + +Data types +---------- + +.. c:type:: uv_req_t + + The base libuv request structure. + +.. c:type:: uv_any_req + + Union of all request types. + + +Public members +^^^^^^^^^^^^^^ + +.. c:member:: void* uv_req_t.data + + Space for user-defined arbitrary data. libuv does not use this field. + +.. c:member:: uv_req_type uv_req_t.type + + Indicates the type of request. Readonly. + + :: + + typedef enum { + UV_UNKNOWN_REQ = 0, + UV_REQ, + UV_CONNECT, + UV_WRITE, + UV_SHUTDOWN, + UV_UDP_SEND, + UV_FS, + UV_WORK, + UV_GETADDRINFO, + UV_GETNAMEINFO, + UV_REQ_TYPE_PRIVATE, + UV_REQ_TYPE_MAX, + } uv_req_type; + + +API +--- + +.. c:function:: int uv_cancel(uv_req_t* req) + + Cancel a pending request. Fails if the request is executing or has finished + executing. + + Returns 0 on success, or an error code < 0 on failure. + + Only cancellation of :c:type:`uv_fs_t`, :c:type:`uv_getaddrinfo_t`, + :c:type:`uv_getnameinfo_t` and :c:type:`uv_work_t` requests is + currently supported. + + Cancelled requests have their callbacks invoked some time in the future. + It's **not** safe to free the memory associated with the request until the + callback is called. + + Here is how cancellation is reported to the callback: + + * A :c:type:`uv_fs_t` request has its req->result field set to `UV_ECANCELED`. + + * A :c:type:`uv_work_t`, :c:type:`uv_getaddrinfo_t` or :c:type:`uv_getnameinfo_t` + request has its callback invoked with status == `UV_ECANCELED`. + +.. c:function:: size_t uv_req_size(uv_req_type type) + + Returns the size of the given request type. Useful for FFI binding writers + who don't want to know the structure layout. diff --git a/deps/uv/docs/src/signal.rst b/deps/uv/docs/src/signal.rst new file mode 100644 index 00000000000..21675945fc4 --- /dev/null +++ b/deps/uv/docs/src/signal.rst @@ -0,0 +1,77 @@ + +.. _signal: + +:c:type:`uv_signal_t` --- Signal handle +======================================= + +Signal handles implement Unix style signal handling on a per-event loop basis. + +Reception of some signals is emulated on Windows: + +* SIGINT is normally delivered when the user presses CTRL+C.
However, like + on Unix, it is not generated when terminal raw mode is enabled. + +* SIGBREAK is delivered when the user presses CTRL+BREAK. + +* SIGHUP is generated when the user closes the console window. On SIGHUP the + program is given approximately 10 seconds to perform cleanup. After that + Windows will unconditionally terminate it. + +* SIGWINCH is raised whenever libuv detects that the console has been + resized. SIGWINCH is emulated by libuv when the program uses a :c:type:`uv_tty_t` + handle to write to the console. SIGWINCH may not always be delivered in a + timely manner; libuv will only detect size changes when the cursor is + being moved. When a readable :c:type:`uv_tty_t` handle is used in raw mode, + resizing the console buffer will also trigger a SIGWINCH signal. + +Watchers for other signals can be successfully created, but these signals +are never received. These signals are: `SIGILL`, `SIGABRT`, `SIGFPE`, `SIGSEGV`, +`SIGTERM` and `SIGKILL`. + +Calls to raise() or abort() to programmatically raise a signal are +not detected by libuv; these will not trigger a signal watcher. + +.. note:: + On Linux SIGRT0 and SIGRT1 (signals 32 and 33) are used by the NPTL pthreads library to + manage threads. Installing watchers for those signals will lead to unpredictable behavior + and is strongly discouraged. Future versions of libuv may simply reject them. + + +Data types
---------- + +.. c:type:: uv_signal_t + + Signal handle type. + +.. c:type:: void (*uv_signal_cb)(uv_signal_t* handle, int signum) + + Type definition for callback passed to :c:func:`uv_signal_start`. + + +Public members
^^^^^^^^^^^^^^ + +.. c:member:: int uv_signal_t.signum + + Signal being monitored by this handle. Readonly. + +.. seealso:: The :c:type:`uv_handle_t` members also apply. + + +API
--- + +.. c:function:: int uv_signal_init(uv_loop_t*, uv_signal_t* signal) + + Initialize the handle. + +.. c:function:: int uv_signal_start(uv_signal_t* signal, uv_signal_cb cb, int signum) + + Start the handle with the given callback, watching for the given signal. + +.. c:function:: int uv_signal_stop(uv_signal_t* signal) + + Stop the handle, the callback will no longer be called. + +.. seealso:: The :c:type:`uv_handle_t` API functions also apply.
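+For illustration only, a minimal sketch of watching for SIGINT (the callback
+name is a placeholder; error handling is omitted):
+
+::
+
+    #include <signal.h>
+    #include <uv.h>
+
+    static void signal_cb(uv_signal_t* handle, int signum) {
+        /* signum == SIGINT here; stop watching so the loop can exit. */
+        uv_signal_stop(handle);
+    }
+
+    int main(void) {
+        uv_signal_t sig;
+        uv_signal_init(uv_default_loop(), &sig);
+        uv_signal_start(&sig, signal_cb, SIGINT);
+        return uv_run(uv_default_loop(), UV_RUN_DEFAULT);
+    }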
diff --git a/deps/uv/docs/src/static/architecture.png b/deps/uv/docs/src/static/architecture.png new file mode 100644 index 00000000000..81e8749f249 Binary files /dev/null and b/deps/uv/docs/src/static/architecture.png differ diff --git a/deps/uv/docs/src/static/diagrams.key/Data/st0-311.jpg b/deps/uv/docs/src/static/diagrams.key/Data/st0-311.jpg new file mode 100644 index 00000000000..439f5810936 Binary files /dev/null and b/deps/uv/docs/src/static/diagrams.key/Data/st0-311.jpg differ diff --git a/deps/uv/docs/src/static/diagrams.key/Data/st1-475.jpg b/deps/uv/docs/src/static/diagrams.key/Data/st1-475.jpg new file mode 100644 index 00000000000..ffb21ff2245 Binary files /dev/null and b/deps/uv/docs/src/static/diagrams.key/Data/st1-475.jpg differ diff --git a/deps/uv/docs/src/static/diagrams.key/Index.zip b/deps/uv/docs/src/static/diagrams.key/Index.zip new file mode 100644 index 00000000000..17aedace14f Binary files /dev/null and b/deps/uv/docs/src/static/diagrams.key/Index.zip differ diff --git a/deps/uv/docs/src/static/diagrams.key/Metadata/BuildVersionHistory.plist b/deps/uv/docs/src/static/diagrams.key/Metadata/BuildVersionHistory.plist new file mode 100644 index 00000000000..39dd4fe62fb --- /dev/null +++ b/deps/uv/docs/src/static/diagrams.key/Metadata/BuildVersionHistory.plist @@ -0,0 +1,8 @@ +<?xml version="1.0" encoding="UTF-8"?> +<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd"> +<plist version="1.0"> +<array> + <string>Template: White (2014-02-28 09:41)</string> + <string>M6.2.2-1878-1</string> +</array> +</plist> diff --git a/deps/uv/docs/src/static/diagrams.key/Metadata/DocumentIdentifier b/deps/uv/docs/src/static/diagrams.key/Metadata/DocumentIdentifier new file mode 100644 index 00000000000..ddb18f01f99 --- /dev/null +++ b/deps/uv/docs/src/static/diagrams.key/Metadata/DocumentIdentifier @@ -0,0 +1 @@ +F69E9CD9-EEF1-4223-9DA4-A1EA7FE112BA \ No newline at end of file diff --git a/deps/uv/docs/src/static/diagrams.key/Metadata/Properties.plist b/deps/uv/docs/src/static/diagrams.key/Metadata/Properties.plist new file mode 100644 index 00000000000..74bc69317de Binary files /dev/null and b/deps/uv/docs/src/static/diagrams.key/Metadata/Properties.plist differ diff --git a/deps/uv/docs/src/static/diagrams.key/preview-micro.jpg b/deps/uv/docs/src/static/diagrams.key/preview-micro.jpg new file mode 100644 index 00000000000..dd8decd6303 Binary files /dev/null and b/deps/uv/docs/src/static/diagrams.key/preview-micro.jpg differ diff --git a/deps/uv/docs/src/static/diagrams.key/preview-web.jpg b/deps/uv/docs/src/static/diagrams.key/preview-web.jpg new file mode 100644 index 00000000000..aadd401f1f0 Binary files /dev/null and b/deps/uv/docs/src/static/diagrams.key/preview-web.jpg differ diff --git a/deps/uv/docs/src/static/diagrams.key/preview.jpg b/deps/uv/docs/src/static/diagrams.key/preview.jpg new file mode 100644 index 00000000000..fc80025a4be Binary files /dev/null and b/deps/uv/docs/src/static/diagrams.key/preview.jpg differ diff --git a/deps/uv/docs/src/static/favicon.ico b/deps/uv/docs/src/static/favicon.ico new file mode 100644 index 00000000000..2c40694cd28 Binary files /dev/null and b/deps/uv/docs/src/static/favicon.ico differ diff --git a/deps/uv/docs/src/static/logo.png b/deps/uv/docs/src/static/logo.png new file mode 100644 index 00000000000..eaf1eee577b Binary files /dev/null and b/deps/uv/docs/src/static/logo.png differ diff --git a/deps/uv/docs/src/static/loop_iteration.png b/deps/uv/docs/src/static/loop_iteration.png new 
file mode 100644 index 00000000000..e769cf338b4 Binary files /dev/null and b/deps/uv/docs/src/static/loop_iteration.png differ diff --git a/deps/uv/docs/src/stream.rst b/deps/uv/docs/src/stream.rst new file mode 100644 index 00000000000..2c669cf0418 --- /dev/null +++ b/deps/uv/docs/src/stream.rst @@ -0,0 +1,217 @@ + +.. _stream: + +:c:type:`uv_stream_t` --- Stream handle +======================================= + +Stream handles provide an abstraction of a duplex communication channel. +:c:type:`uv_stream_t` is an abstract type, libuv provides 3 stream implementations +in the form of :c:type:`uv_tcp_t`, :c:type:`uv_pipe_t` and :c:type:`uv_tty_t`. + + +Data types
---------- + +.. c:type:: uv_stream_t + + Stream handle type. + +.. c:type:: uv_connect_t + + Connect request type. + +.. c:type:: uv_shutdown_t + + Shutdown request type. + +.. c:type:: uv_write_t + + Write request type. + +.. c:type:: void (*uv_read_cb)(uv_stream_t* stream, ssize_t nread, const uv_buf_t* buf) + + Callback called when data was read on a stream. + + `nread` is > 0 if there is data available, 0 if libuv is done reading for + now, or < 0 on error. + + The callee is responsible for stopping/closing the stream when an error happens + by calling :c:func:`uv_read_stop` or :c:func:`uv_close`. Trying to read + from the stream again is undefined. + + The callee is responsible for freeing the buffer; libuv does not reuse it. + The buffer may be a null buffer (where buf->base=NULL and buf->len=0) on + error. + +.. c:type:: void (*uv_write_cb)(uv_write_t* req, int status) + + Callback called after data was written on a stream. `status` will be 0 in + case of success, < 0 otherwise. + +.. c:type:: void (*uv_connect_cb)(uv_connect_t* req, int status) + + Callback called after a connection started by :c:func:`uv_tcp_connect` or + :c:func:`uv_pipe_connect` is done. + `status` will be 0 in case of success, < 0 otherwise. + +.. c:type:: void (*uv_shutdown_cb)(uv_shutdown_t* req, int status) + + Callback called after a shutdown request has been completed. `status` will + be 0 in case of success, < 0 otherwise. + +.. c:type:: void (*uv_connection_cb)(uv_stream_t* server, int status) + + Callback called when a stream server has received an incoming connection. + The user can accept the connection by calling :c:func:`uv_accept`. + `status` will be 0 in case of success, < 0 otherwise. + + +Public members
^^^^^^^^^^^^^^ + +.. c:member:: size_t uv_stream_t.write_queue_size + + Contains the amount of queued bytes waiting to be sent. Readonly. + +.. c:member:: uv_stream_t* uv_connect_t.handle + + Pointer to the stream where this connection request is running. + +.. c:member:: uv_stream_t* uv_shutdown_t.handle + + Pointer to the stream where this shutdown request is running. + +.. c:member:: uv_stream_t* uv_write_t.handle + + Pointer to the stream where this write request is running. + +.. c:member:: uv_stream_t* uv_write_t.send_handle + + Pointer to the stream being sent using this write request. + +.. seealso:: The :c:type:`uv_handle_t` members also apply. + + +API
--- + +.. c:function:: int uv_shutdown(uv_shutdown_t* req, uv_stream_t* handle, uv_shutdown_cb cb) + + Shutdown the outgoing (write) side of a duplex stream. It waits for pending + write requests to complete. The `handle` should refer to an initialized stream. + `req` should be an uninitialized shutdown request struct. The `cb` is called + after shutdown is complete. + +.. c:function:: int uv_listen(uv_stream_t* stream, int backlog, uv_connection_cb cb) + + Start listening for incoming connections.
`backlog` indicates the number of + connections the kernel might queue, same as ``listen(2)``. When a new + incoming connection is received the :c:type:`uv_connection_cb` callback is + called. + +.. c:function:: int uv_accept(uv_stream_t* server, uv_stream_t* client) + + This call is used in conjunction with :c:func:`uv_listen` to accept incoming + connections. Call this function after receiving a :c:type:`uv_connection_cb` + to accept the connection. Before calling this function the client handle must + be initialized. A return value < 0 indicates an error. + + When the :c:type:`uv_connection_cb` callback is called it is guaranteed that + this function will complete successfully the first time. If you attempt to use + it more than once, it may fail. It is suggested to only call this function once + per :c:type:`uv_connection_cb` call. + + .. note:: + `server` and `client` must be handles running on the same loop. + +.. c:function:: int uv_read_start(uv_stream_t*, uv_alloc_cb alloc_cb, uv_read_cb read_cb) + + Read data from an incoming stream. The callback will be made several + times until there is no more data to read or :c:func:`uv_read_stop` is called. + When we've reached EOF, `nread` will be set to ``UV_EOF``. + + When `nread` < 0, the `buf` parameter might not point to a valid buffer; + in that case `buf.len` and `buf.base` are both set to 0. + + .. note:: + `nread` might also be 0, which does *not* indicate an error or EOF, it happens when + libuv requested a buffer through the alloc callback but then decided that it didn't + need that buffer. + +.. c:function:: int uv_read_stop(uv_stream_t*) + + Stop reading data from the stream. The :c:type:`uv_read_cb` callback will + no longer be called. + +.. c:function:: int uv_write(uv_write_t* req, uv_stream_t* handle, const uv_buf_t bufs[], unsigned int nbufs, uv_write_cb cb) + + Write data to the stream. Buffers are written in order. Example: + + :: + + uv_buf_t a[] = { + { .base = "1", .len = 1 }, + { .base = "2", .len = 1 } + }; + + uv_buf_t b[] = { + { .base = "3", .len = 1 }, + { .base = "4", .len = 1 } + }; + + uv_write_t req1; + uv_write_t req2; + + /* writes "1234" */ + uv_write(&req1, stream, a, 2, cb); + uv_write(&req2, stream, b, 2, cb); + +.. c:function:: int uv_write2(uv_write_t* req, uv_stream_t* handle, const uv_buf_t bufs[], unsigned int nbufs, uv_stream_t* send_handle, uv_write_cb cb) + + Extended write function for sending handles over a pipe. The pipe must be + initialized with `ipc` == 1. + + .. note:: + `send_handle` must be a TCP socket or pipe, which is a server or a connection (listening + or connected state). Bound sockets or pipes will be assumed to be servers. + +.. c:function:: int uv_try_write(uv_stream_t* handle, const uv_buf_t bufs[], unsigned int nbufs) + + Same as :c:func:`uv_write`, but won't queue a write request if it can't be + completed immediately. + + Will return either: + + * > 0: number of bytes written (can be less than the supplied buffer size). + * < 0: negative error code (``UV_EAGAIN`` is returned if no data can be sent + immediately). + +.. c:function:: int uv_is_readable(const uv_stream_t* handle) + + Returns 1 if the stream is readable, 0 otherwise. + +.. c:function:: int uv_is_writable(const uv_stream_t* handle) + + Returns 1 if the stream is writable, 0 otherwise. + +.. c:function:: int uv_stream_set_blocking(uv_stream_t* handle, int blocking) + + Enable or disable blocking mode for a stream. + + When blocking mode is enabled all writes complete synchronously. The
The + interface remains unchanged otherwise, e.g. completion or failure of the + operation will still be reported through a callback which is made + asynchronously. + + .. warning:: + Relying too much on this API is not recommended. It is likely to change + significantly in the future. + + Currently this only works on Windows and only for + :c:type:`uv_pipe_t` handles. + + Also libuv currently makes no ordering guarantee when the blocking mode + is changed after write requests have already been submitted. Therefore it is + recommended to set the blocking mode immediately after opening or creating + the stream. + +.. seealso:: The :c:type:`uv_handle_t` API functions also apply. diff --git a/deps/uv/docs/src/tcp.rst b/deps/uv/docs/src/tcp.rst new file mode 100644 index 00000000000..2c1001b531f --- /dev/null +++ b/deps/uv/docs/src/tcp.rst @@ -0,0 +1,97 @@ + +.. _tcp: + +:c:type:`uv_tcp_t` --- TCP handle +================================= + +TCP handles are used to represent both TCP streams and servers. + +:c:type:`uv_tcp_t` is a 'subclass' of :c:type:`uv_stream_t`. + + +Data types +---------- + +.. c:type:: uv_tcp_t + + TCP handle type. + + +Public members +^^^^^^^^^^^^^^ + +N/A + +.. seealso:: The :c:type:`uv_stream_t` members also apply. + + +API +--- + +.. c:function:: int uv_tcp_init(uv_loop_t*, uv_tcp_t* handle) + + Initialize the handle. + +.. c:function:: int uv_tcp_open(uv_tcp_t* handle, uv_os_sock_t sock) + + Open an existing file descriptor or SOCKET as a TCP handle. + + .. note:: + The user is responsible for setting the file descriptor in + non-blocking mode. + +.. c:function:: int uv_tcp_nodelay(uv_tcp_t* handle, int enable) + + Enable / disable Nagle's algorithm. + +.. c:function:: int uv_tcp_keepalive(uv_tcp_t* handle, int enable, unsigned int delay) + + Enable / disable TCP keep-alive. `delay` is the initial delay in seconds, + ignored when `enable` is zero. + +.. c:function:: int uv_tcp_simultaneous_accepts(uv_tcp_t* handle, int enable) + + Enable / disable simultaneous asynchronous accept requests that are + queued by the operating system when listening for new TCP connections. + + This setting is used to tune a TCP server for the desired performance. + Having simultaneous accepts can significantly improve the rate of accepting + connections (which is why it is enabled by default) but may lead to uneven + load distribution in multi-process setups. + +.. c:function:: int uv_tcp_bind(uv_tcp_t* handle, const struct sockaddr* addr, unsigned int flags) + + Bind the handle to an address and port. `addr` should point to an + initialized ``struct sockaddr_in`` or ``struct sockaddr_in6``. + + When the port is already taken, you can expect to see an ``UV_EADDRINUSE`` + error from either :c:func:`uv_tcp_bind`, :c:func:`uv_listen` or + :c:func:`uv_tcp_connect`. That is, a successful call to this function does + not guarantee that the call to :c:func:`uv_listen` or :c:func:`uv_tcp_connect` + will succeed as well. + + `flags` con contain ``UV_TCP_IPV6ONLY``, in which case dual-stack support + is disabled and only IPv6 is used. + +.. c:function:: int uv_tcp_getsockname(const uv_tcp_t* handle, struct sockaddr* name, int* namelen) + + Get the current address to which the handle is bound. `addr` must point to + a valid and big enough chunk of memory, ``struct sockaddr_storage`` is + recommended for IPv4 and IPv6 support. + +.. c:function:: int uv_tcp_getpeername(const uv_tcp_t* handle, struct sockaddr* name, int* namelen) + + Get the address of the peer connected to the handle. 
`name` must point to + a valid and big enough chunk of memory; ``struct sockaddr_storage`` is + recommended for IPv4 and IPv6 support. + +.. c:function:: int uv_tcp_connect(uv_connect_t* req, uv_tcp_t* handle, const struct sockaddr* addr, uv_connect_cb cb) + + Establish an IPv4 or IPv6 TCP connection. Provide an initialized TCP handle + and an uninitialized :c:type:`uv_connect_t`. `addr` should point to an + initialized ``struct sockaddr_in`` or ``struct sockaddr_in6``. + + The callback is made when the connection has been established or when a + connection error happened. + +.. seealso:: The :c:type:`uv_stream_t` API functions also apply. diff --git a/deps/uv/docs/src/threading.rst b/deps/uv/docs/src/threading.rst new file mode 100644 index 00000000000..aab13f84b62 --- /dev/null +++ b/deps/uv/docs/src/threading.rst @@ -0,0 +1,157 @@ + +.. _threading: + +Threading and synchronization utilities +======================================= + +libuv provides cross-platform implementations for multiple threading and +synchronization primitives. The API largely follows the pthreads API. + + +Data types
---------- + +.. c:type:: uv_thread_t + + Thread data type. + +.. c:type:: void (*uv_thread_cb)(void* arg) + + Callback that is invoked to initialize thread execution. `arg` is the same + value that was passed to :c:func:`uv_thread_create`. + +.. c:type:: uv_key_t + + Thread-local key data type. + +.. c:type:: uv_once_t + + Once-only initializer data type. + +.. c:type:: uv_mutex_t + + Mutex data type. + +.. c:type:: uv_rwlock_t + + Read-write lock data type. + +.. c:type:: uv_sem_t + + Semaphore data type. + +.. c:type:: uv_cond_t + + Condition data type. + +.. c:type:: uv_barrier_t + + Barrier data type. + + +API
--- + +Threads
^^^^^^^ + +.. c:function:: int uv_thread_create(uv_thread_t* tid, uv_thread_cb entry, void* arg) +.. c:function:: uv_thread_t uv_thread_self(void) +.. c:function:: int uv_thread_join(uv_thread_t *tid) +.. c:function:: int uv_thread_equal(const uv_thread_t* t1, const uv_thread_t* t2) + +Thread-local storage
^^^^^^^^^^^^^^^^^^^^ + +.. note:: + The total thread-local storage size may be limited. That is, it may not be possible to + create many TLS keys. + +.. c:function:: int uv_key_create(uv_key_t* key) +.. c:function:: void uv_key_delete(uv_key_t* key) +.. c:function:: void* uv_key_get(uv_key_t* key) +.. c:function:: void uv_key_set(uv_key_t* key, void* value) + +Once-only initialization
^^^^^^^^^^^^^^^^^^^^^^^^ + +Runs a function once and only once. Concurrent calls to :c:func:`uv_once` with the +same guard will block all callers except one (it's unspecified which one). +The guard should be initialized statically with the UV_ONCE_INIT macro. + +.. c:function:: void uv_once(uv_once_t* guard, void (*callback)(void)) + +Mutex locks
^^^^^^^^^^^ + +Functions return 0 on success or an error code < 0 (unless the +return type is void, of course). + +.. c:function:: int uv_mutex_init(uv_mutex_t* handle) +.. c:function:: void uv_mutex_destroy(uv_mutex_t* handle) +.. c:function:: void uv_mutex_lock(uv_mutex_t* handle) +.. c:function:: int uv_mutex_trylock(uv_mutex_t* handle) +.. c:function:: void uv_mutex_unlock(uv_mutex_t* handle) + +Read-write locks
^^^^^^^^^^^^^^^^ + +Functions return 0 on success or an error code < 0 (unless the +return type is void, of course). + +.. c:function:: int uv_rwlock_init(uv_rwlock_t* rwlock) +.. c:function:: void uv_rwlock_destroy(uv_rwlock_t* rwlock) +.. c:function:: void uv_rwlock_rdlock(uv_rwlock_t* rwlock) +.. 
c:function:: int uv_rwlock_tryrdlock(uv_rwlock_t* rwlock) +.. c:function:: void uv_rwlock_rdunlock(uv_rwlock_t* rwlock) +.. c:function:: void uv_rwlock_wrlock(uv_rwlock_t* rwlock) +.. c:function:: int uv_rwlock_trywrlock(uv_rwlock_t* rwlock) +.. c:function:: void uv_rwlock_wrunlock(uv_rwlock_t* rwlock) + +Semaphores +^^^^^^^^^^ + +Functions return 0 on success or an error code < 0 (unless the +return type is void, of course). + +.. c:function:: int uv_sem_init(uv_sem_t* sem, unsigned int value) +.. c:function:: void uv_sem_destroy(uv_sem_t* sem) +.. c:function:: void uv_sem_post(uv_sem_t* sem) +.. c:function:: void uv_sem_wait(uv_sem_t* sem) +.. c:function:: int uv_sem_trywait(uv_sem_t* sem) + +Conditions +^^^^^^^^^^ + +Functions return 0 on success or an error code < 0 (unless the +return type is void, of course). + +.. note:: + Callers should be prepared to deal with spurious wakeups on :c:func:`uv_cond_wait` and + :c:func:`uv_cond_timedwait`. + +.. c:function:: int uv_cond_init(uv_cond_t* cond) +.. c:function:: void uv_cond_destroy(uv_cond_t* cond) +.. c:function:: void uv_cond_signal(uv_cond_t* cond) +.. c:function:: void uv_cond_broadcast(uv_cond_t* cond) +.. c:function:: void uv_cond_wait(uv_cond_t* cond, uv_mutex_t* mutex) +.. c:function:: int uv_cond_timedwait(uv_cond_t* cond, uv_mutex_t* mutex, uint64_t timeout) + +Barriers +^^^^^^^^ + +Functions return 0 on success or an error code < 0 (unless the +return type is void, of course). + +.. note:: + :c:func:`uv_barrier_wait` returns a value > 0 to an arbitrarily chosen "serializer" thread + to facilitate cleanup, i.e. + + :: + + if (uv_barrier_wait(&barrier) > 0) + uv_barrier_destroy(&barrier); + +.. c:function:: int uv_barrier_init(uv_barrier_t* barrier, unsigned int count) +.. c:function:: void uv_barrier_destroy(uv_barrier_t* barrier) +.. c:function:: int uv_barrier_wait(uv_barrier_t* barrier) diff --git a/deps/uv/docs/src/threadpool.rst b/deps/uv/docs/src/threadpool.rst new file mode 100644 index 00000000000..66ff53e2305 --- /dev/null +++ b/deps/uv/docs/src/threadpool.rst @@ -0,0 +1,59 @@ + +.. _threadpool: + +Thread pool work scheduling +=========================== + +libuv provides a threadpool which can be used to run user code and get notified +in the loop thread. This thread pool is internally used to run all filesystem +operations, as well as getaddrinfo and getnameinfo requests. + +Its default size is 4, but it can be changed at startup time by setting the +``UV_THREADPOOL_SIZE`` environment variable to any value (the absolute maximum +is 128). + +The threadpool is global and shared across all event loops. + + +Data types +---------- + +.. c:type:: uv_work_t + + Work request type. + +.. c:type:: void (*uv_work_cb)(uv_work_t* req) + + Callback passed to :c:func:`uv_queue_work` which will be run on the thread + pool. + +.. c:type:: void (*uv_after_work_cb)(uv_work_t* req, int status) + + Callback passed to :c:func:`uv_queue_work` which will be called on the loop + thread after the work on the threadpool has been completed. If the work + was cancelled using :c:func:`uv_cancel` `status` will be ``UV_ECANCELED``. + + +Public members +^^^^^^^^^^^^^^ + +.. c:member:: uv_loop_t* uv_work_t.loop + + Loop that started this request and where completion will be reported. + Readonly. + +.. seealso:: The :c:type:`uv_req_t` members also apply. + + +API +--- + +.. 
c:function:: int uv_queue_work(uv_loop_t* loop, uv_work_t* req, uv_work_cb work_cb, uv_after_work_cb after_work_cb)
+
+    Initializes a work request which will run the given `work_cb` in a thread
+    from the threadpool. Once `work_cb` is completed, `after_work_cb` will be
+    called on the loop thread.
+
+    This request can be cancelled with :c:func:`uv_cancel`.
+
+.. seealso:: The :c:type:`uv_req_t` API functions also apply.
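+
+As a brief illustration (this sketch is not part of the upstream docs), a
+hypothetical `fib_cb` runs on the threadpool while the loop thread stays
+free, and `after_fib` observes the result or the cancellation status; all
+names are illustrative::
+
+    #include <stdio.h>
+    #include <uv.h>
+
+    static int result;  /* written by the worker, read after completion */
+
+    static void fib_cb(uv_work_t* req) {
+        /* Runs in a threadpool thread; must not touch loop-owned state. */
+        int a = 0, b = 1, i;
+        for (i = 0; i < 30; i++) {
+            int t = a + b;
+            a = b;
+            b = t;
+        }
+        result = a;
+    }
+
+    static void after_fib(uv_work_t* req, int status) {
+        /* Runs on the loop thread; status is UV_ECANCELED if cancelled. */
+        if (status == UV_ECANCELED)
+            printf("work was cancelled\n");
+        else
+            printf("fib(30) = %d\n", result);
+    }
+
+    int main(void) {
+        uv_work_t req;  /* may live on the stack: uv_run() blocks until done */
+        uv_queue_work(uv_default_loop(), &req, fib_cb, after_fib);
+        return uv_run(uv_default_loop(), UV_RUN_DEFAULT);
+    }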
diff --git a/deps/uv/docs/src/timer.rst b/deps/uv/docs/src/timer.rst
new file mode 100644
index 00000000000..e558704cb20
--- /dev/null
+++ b/deps/uv/docs/src/timer.rst
@@ -0,0 +1,68 @@
+
+.. _timer:
+
+:c:type:`uv_timer_t` --- Timer handle
+=====================================
+
+Timer handles are used to schedule callbacks to be called in the future.
+
+
+Data types
+----------
+
+.. c:type:: uv_timer_t
+
+    Timer handle type.
+
+.. c:type:: void (*uv_timer_cb)(uv_timer_t* handle)
+
+    Type definition for callback passed to :c:func:`uv_timer_start`.
+
+
+Public members
+^^^^^^^^^^^^^^
+
+N/A
+
+.. seealso:: The :c:type:`uv_handle_t` members also apply.
+
+
+API
+---
+
+.. c:function:: int uv_timer_init(uv_loop_t* loop, uv_timer_t* handle)
+
+    Initialize the handle.
+
+.. c:function:: int uv_timer_start(uv_timer_t* handle, uv_timer_cb cb, uint64_t timeout, uint64_t repeat)
+
+    Start the timer. `timeout` and `repeat` are in milliseconds.
+
+    If `timeout` is zero, the callback fires on the next event loop iteration.
+    If `repeat` is non-zero, the callback fires first after `timeout`
+    milliseconds and then repeatedly after `repeat` milliseconds.
+
+.. c:function:: int uv_timer_stop(uv_timer_t* handle)
+
+    Stop the timer; the callback will not be called anymore.
+
+.. c:function:: int uv_timer_again(uv_timer_t* handle)
+
+    Stop the timer, and if it is repeating restart it using the repeat value
+    as the timeout. If the timer has never been started before it returns
+    UV_EINVAL.
+
+.. c:function:: void uv_timer_set_repeat(uv_timer_t* handle, uint64_t repeat)
+
+    Set the repeat value in milliseconds.
+
+    .. note::
+        If the repeat value is set from a timer callback it does not immediately take effect.
+        If the timer was non-repeating before, it will have been stopped. If it was repeating,
+        then the old repeat value will have been used to schedule the next timeout.
+
+.. c:function:: uint64_t uv_timer_get_repeat(const uv_timer_t* handle)
+
+    Get the timer repeat value.
+
+.. seealso:: The :c:type:`uv_handle_t` API functions also apply.
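+
+A minimal sketch (not from the upstream docs) of a repeating timer that
+fires once per second and stops itself after five ticks; `on_tick` and
+`ticks` are illustrative names::
+
+    #include <stdio.h>
+    #include <uv.h>
+
+    static int ticks;
+
+    static void on_tick(uv_timer_t* handle) {
+        printf("tick %d\n", ++ticks);
+        if (ticks == 5)
+            uv_timer_stop(handle);  /* no further callbacks after this */
+    }
+
+    int main(void) {
+        uv_timer_t timer;
+        uv_timer_init(uv_default_loop(), &timer);
+        /* first callback after 1000 ms, then every 1000 ms */
+        uv_timer_start(&timer, on_tick, 1000, 1000);
+        return uv_run(uv_default_loop(), UV_RUN_DEFAULT);
+    }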
diff --git a/deps/uv/docs/src/tty.rst b/deps/uv/docs/src/tty.rst
new file mode 100644
index 00000000000..8cb00663203
--- /dev/null
+++ b/deps/uv/docs/src/tty.rst
@@ -0,0 +1,63 @@
+
+.. _tty:
+
+:c:type:`uv_tty_t` --- TTY handle
+=================================
+
+TTY handles represent a stream for the console.
+
+:c:type:`uv_tty_t` is a 'subclass' of :c:type:`uv_stream_t`.
+
+
+Data types
+----------
+
+.. c:type:: uv_tty_t
+
+    TTY handle type.
+
+
+Public members
+^^^^^^^^^^^^^^
+
+N/A
+
+.. seealso:: The :c:type:`uv_stream_t` members also apply.
+
+
+API
+---
+
+.. c:function:: int uv_tty_init(uv_loop_t*, uv_tty_t*, uv_file fd, int readable)
+
+    Initialize a new TTY stream with the given file descriptor. Usually the
+    file descriptor will be:
+
+    * 0 = stdin
+    * 1 = stdout
+    * 2 = stderr
+
+    `readable` specifies if you plan on calling :c:func:`uv_read_start` with
+    this stream. stdin is readable, stdout is not.
+
+    .. note::
+        TTY streams which are not readable have blocking writes.
+
+.. c:function:: int uv_tty_set_mode(uv_tty_t*, int mode)
+
+    Set the TTY mode. 0 for normal, 1 for raw.
+
+.. c:function:: int uv_tty_reset_mode(void)
+
+    To be called when the program exits. Resets TTY settings to default
+    values for the next process to take over.
+
+    This function is async signal-safe on Unix platforms but can fail with error
+    code ``UV_EBUSY`` if you call it when execution is inside
+    :c:func:`uv_tty_set_mode`.
+
+.. c:function:: int uv_tty_get_winsize(uv_tty_t*, int* width, int* height)
+
+    Gets the current window size. On success it returns 0.
+
+.. seealso:: The :c:type:`uv_stream_t` API functions also apply.
diff --git a/deps/uv/docs/src/udp.rst b/deps/uv/docs/src/udp.rst
new file mode 100644
index 00000000000..175ce07a2db
--- /dev/null
+++ b/deps/uv/docs/src/udp.rst
@@ -0,0 +1,280 @@
+
+.. _udp:
+
+:c:type:`uv_udp_t` --- UDP handle
+=================================
+
+UDP handles encapsulate UDP communication for both clients and servers.
+
+
+Data types
+----------
+
+.. c:type:: uv_udp_t
+
+    UDP handle type.
+
+.. c:type:: uv_udp_send_t
+
+    UDP send request type.
+
+.. c:type:: uv_udp_flags
+
+    Flags used in :c:func:`uv_udp_bind` and :c:type:`uv_udp_recv_cb`.
+
+    ::
+
+        enum uv_udp_flags {
+            /* Disables dual stack mode. */
+            UV_UDP_IPV6ONLY = 1,
+            /*
+             * Indicates message was truncated because read buffer was too small. The
+             * remainder was discarded by the OS. Used in uv_udp_recv_cb.
+             */
+            UV_UDP_PARTIAL = 2,
+            /*
+             * Indicates if SO_REUSEADDR will be set when binding the handle in
+             * uv_udp_bind.
+             * This sets the SO_REUSEPORT socket flag on the BSDs and OS X. On other
+             * Unix platforms, it sets the SO_REUSEADDR flag. What that means is that
+             * multiple threads or processes can bind to the same address without error
+             * (provided they all set the flag) but only the last one to bind will receive
+             * any traffic, in effect "stealing" the port from the previous listener.
+             */
+            UV_UDP_REUSEADDR = 4
+        };
+
+.. c:type:: void (*uv_udp_send_cb)(uv_udp_send_t* req, int status)
+
+    Type definition for callback passed to :c:func:`uv_udp_send`, which is
+    called after the data was sent.
+
+.. c:type:: void (*uv_udp_recv_cb)(uv_udp_t* handle, ssize_t nread, const uv_buf_t* buf, const struct sockaddr* addr, unsigned flags)
+
+    Type definition for callback passed to :c:func:`uv_udp_recv_start`, which
+    is called when the endpoint receives data.
+
+    * `handle`: UDP handle
+    * `nread`: Number of bytes that have been received.
+      0 if there is no more data to read. You may discard or repurpose
+      the read buffer. Note that 0 may also mean that an empty datagram
+      was received (in this case `addr` is not NULL). < 0 if a transmission
+      error was detected.
+    * `buf`: :c:type:`uv_buf_t` with the received data.
+    * `addr`: ``struct sockaddr*`` containing the address of the sender.
+      Can be NULL. Valid for the duration of the callback only.
+    * `flags`: One or more or'ed UV_UDP_* constants. Right now only
+      ``UV_UDP_PARTIAL`` is used.
+
+    .. note::
+        The receive callback will be called with `nread` == 0 and `addr` == NULL when there is
+        nothing to read, and with `nread` == 0 and `addr` != NULL when an empty UDP packet is
+        received.
+
+.. c:type:: uv_membership
+
+    Membership type for a multicast address.
+
+    ::
+
+        typedef enum {
+            UV_LEAVE_GROUP = 0,
+            UV_JOIN_GROUP
+        } uv_membership;
+
+
+Public members
+^^^^^^^^^^^^^^
+
+.. c:member:: size_t uv_udp_t.send_queue_size
+
+    Number of bytes queued for sending. This field strictly shows how much
+    information is currently queued.
+
+.. c:member:: size_t uv_udp_t.send_queue_count
+
+    Number of send requests currently in the queue awaiting processing.
+
+.. c:member:: uv_udp_t* uv_udp_send_t.handle
+
+    UDP handle where this send request is taking place.
+
+.. seealso:: The :c:type:`uv_handle_t` members also apply.
+
+
+API
+---
+
+.. c:function:: int uv_udp_init(uv_loop_t*, uv_udp_t* handle)
+
+    Initialize a new UDP handle. The actual socket is created lazily.
+    Returns 0 on success.
+
+.. c:function:: int uv_udp_open(uv_udp_t* handle, uv_os_sock_t sock)
+
+    Opens an existing file descriptor or Windows SOCKET as a UDP handle.
+
+    Unix only:
+    The only requirement of the `sock` argument is that it follows the datagram
+    contract (works in unconnected mode, supports sendmsg()/recvmsg(), etc).
+    In other words, other datagram-type sockets like raw sockets or netlink
+    sockets can also be passed to this function.
+
+.. c:function:: int uv_udp_bind(uv_udp_t* handle, const struct sockaddr* addr, unsigned int flags)
+
+    Bind the UDP handle to an IP address and port.
+
+    :param handle: UDP handle. Should have been initialized with
+        :c:func:`uv_udp_init`.
+
+    :param addr: `struct sockaddr_in` or `struct sockaddr_in6`
+        with the address and port to bind to.
+
+    :param flags: Indicate how the socket will be bound,
+        ``UV_UDP_IPV6ONLY`` and ``UV_UDP_REUSEADDR`` are supported.
+
+    :returns: 0 on success, or an error code < 0 on failure.
+
+.. c:function:: int uv_udp_getsockname(const uv_udp_t* handle, struct sockaddr* name, int* namelen)
+
+    Get the local IP and port of the UDP handle.
+
+    :param handle: UDP handle. Should have been initialized with
+        :c:func:`uv_udp_init` and bound.
+
+    :param name: Pointer to the structure to be filled with the address data.
+        In order to support IPv4 and IPv6 `struct sockaddr_storage` should be
+        used.
+
+    :param namelen: On input it indicates the size of the `name` field. On
+        output it indicates how much of it was filled.
+
+    :returns: 0 on success, or an error code < 0 on failure.
+
+.. c:function:: int uv_udp_set_membership(uv_udp_t* handle, const char* multicast_addr, const char* interface_addr, uv_membership membership)
+
+    Set membership for a multicast address.
+
+    :param handle: UDP handle. Should have been initialized with
+        :c:func:`uv_udp_init`.
+
+    :param multicast_addr: Multicast address to set membership for.
+
+    :param interface_addr: Interface address.
+
+    :param membership: Should be ``UV_JOIN_GROUP`` or ``UV_LEAVE_GROUP``.
+
+    :returns: 0 on success, or an error code < 0 on failure.
+
+.. c:function:: int uv_udp_set_multicast_loop(uv_udp_t* handle, int on)
+
+    Set IP multicast loop flag. Makes multicast packets loop back to
+    local sockets.
+
+    :param handle: UDP handle. Should have been initialized with
+        :c:func:`uv_udp_init`.
+
+    :param on: 1 for on, 0 for off.
+
+    :returns: 0 on success, or an error code < 0 on failure.
+
+.. c:function:: int uv_udp_set_multicast_ttl(uv_udp_t* handle, int ttl)
+
+    Set the multicast ttl.
+
+    :param handle: UDP handle. Should have been initialized with
+        :c:func:`uv_udp_init`.
+
+    :param ttl: 1 through 255.
+
+    :returns: 0 on success, or an error code < 0 on failure.
+
+.. c:function:: int uv_udp_set_multicast_interface(uv_udp_t* handle, const char* interface_addr)
+
+    Set the multicast interface to send or receive data on.
+
+    :param handle: UDP handle. Should have been initialized with
+        :c:func:`uv_udp_init`.
+
+    :param interface_addr: interface address.
+
+    :returns: 0 on success, or an error code < 0 on failure.
+
+..
c:function:: int uv_udp_set_broadcast(uv_udp_t* handle, int on) + + Set broadcast on or off. + + :param handle: UDP handle. Should have been initialized with + :c:func:`uv_udp_init`. + + :param on: 1 for on, 0 for off. + + :returns: 0 on success, or an error code < 0 on failure. + +.. c:function:: int uv_udp_set_ttl(uv_udp_t* handle, int ttl) + + Set the time to live. + + :param handle: UDP handle. Should have been initialized with + :c:func:`uv_udp_init`. + + :param ttl: 1 through 255. + + :returns: 0 on success, or an error code < 0 on failure. + +.. c:function:: int uv_udp_send(uv_udp_send_t* req, uv_udp_t* handle, const uv_buf_t bufs[], unsigned int nbufs, const struct sockaddr* addr, uv_udp_send_cb send_cb) + + Send data over the UDP socket. If the socket has not previously been bound + with :c:func:`uv_udp_bind` it will be bound to 0.0.0.0 + (the "all interfaces" IPv4 address) and a random port number. + + :param req: UDP request handle. Need not be initialized. + + :param handle: UDP handle. Should have been initialized with + :c:func:`uv_udp_init`. + + :param bufs: List of buffers to send. + + :param nbufs: Number of buffers in `bufs`. + + :param addr: `struct sockaddr_in` or `struct sockaddr_in6` with the + address and port of the remote peer. + + :param send_cb: Callback to invoke when the data has been sent out. + + :returns: 0 on success, or an error code < 0 on failure. + +.. c:function:: int uv_udp_try_send(uv_udp_t* handle, const uv_buf_t bufs[], unsigned int nbufs, const struct sockaddr* addr) + + Same as :c:func:`uv_udp_send`, but won't queue a send request if it can't + be completed immediately. + + :returns: >= 0: number of bytes sent (it matches the given buffer size). + < 0: negative error code (``UV_EAGAIN`` is returned when the message + can't be sent immediately). + +.. c:function:: int uv_udp_recv_start(uv_udp_t* handle, uv_alloc_cb alloc_cb, uv_udp_recv_cb recv_cb) + + Prepare for receiving data. If the socket has not previously been bound + with :c:func:`uv_udp_bind` it is bound to 0.0.0.0 (the "all interfaces" + IPv4 address) and a random port number. + + :param handle: UDP handle. Should have been initialized with + :c:func:`uv_udp_init`. + + :param alloc_cb: Callback to invoke when temporary storage is needed. + + :param recv_cb: Callback to invoke with received data. + + :returns: 0 on success, or an error code < 0 on failure. + +.. c:function:: int uv_udp_recv_stop(uv_udp_t* handle) + + Stop listening for incoming datagrams. + + :param handle: UDP handle. Should have been initialized with + :c:func:`uv_udp_init`. + + :returns: 0 on success, or an error code < 0 on failure. + +.. seealso:: The :c:type:`uv_handle_t` API functions also apply. 
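+
+To tie the pieces together, here is a minimal sketch (not part of the
+upstream docs) of a UDP echo server: it binds to a port, receives
+datagrams and sends each one back to its sender. The callback names, the
+port number and the single static slab are illustrative; real code should
+manage buffer lifetime more carefully, since the slab stays in use until
+the queued send completes::
+
+    #include <stdio.h>
+    #include <stdlib.h>
+    #include <uv.h>
+
+    static void on_alloc(uv_handle_t* handle, size_t suggested_size, uv_buf_t* buf) {
+        static char slab[65536];  /* one slab keeps the sketch short */
+        buf->base = slab;
+        buf->len = sizeof(slab);
+    }
+
+    static void on_send(uv_udp_send_t* req, int status) {
+        if (status < 0)
+            fprintf(stderr, "send error: %s\n", uv_strerror(status));
+        free(req);
+    }
+
+    static void on_recv(uv_udp_t* handle, ssize_t nread, const uv_buf_t* buf,
+                        const struct sockaddr* addr, unsigned flags) {
+        uv_udp_send_t* req;
+        uv_buf_t reply;
+
+        if (nread <= 0 || addr == NULL)
+            return;  /* transmission error, or nothing to read */
+
+        /* Echo the datagram back to the sender. */
+        req = malloc(sizeof(*req));
+        if (req == NULL)
+            return;
+        reply = uv_buf_init(buf->base, (unsigned int) nread);
+        uv_udp_send(req, handle, &reply, 1, addr, on_send);
+    }
+
+    int main(void) {
+        uv_udp_t server;
+        struct sockaddr_in addr;
+
+        uv_ip4_addr("0.0.0.0", 9999, &addr);
+        uv_udp_init(uv_default_loop(), &server);
+        uv_udp_bind(&server, (const struct sockaddr*) &addr, 0);
+        uv_udp_recv_start(&server, on_alloc, on_recv);
+        return uv_run(uv_default_loop(), UV_RUN_DEFAULT);
+    }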
diff --git a/deps/uv/include/uv-errno.h b/deps/uv/include/uv-errno.h index 00b48609a9a..c34132795cd 100644 --- a/deps/uv/include/uv-errno.h +++ b/deps/uv/include/uv-errno.h @@ -57,12 +57,6 @@ # define UV__EACCES (-4092) #endif -#if defined(EADDRINFO) && !defined(_WIN32) -# define UV__EADDRINFO EADDRINFO -#else -# define UV__EADDRINFO (-4091) -#endif - #if defined(EADDRINUSE) && !defined(_WIN32) # define UV__EADDRINUSE (-EADDRINUSE) #else diff --git a/deps/uv/include/uv-unix.h b/deps/uv/include/uv-unix.h index bbaaefc3ed1..e72492564db 100644 --- a/deps/uv/include/uv-unix.h +++ b/deps/uv/include/uv-unix.h @@ -25,6 +25,7 @@ #include <sys/types.h> #include <sys/stat.h> #include <fcntl.h> +#include <dirent.h> #include <sys/socket.h> #include <netinet/in.h> @@ -117,13 +118,14 @@ struct uv__async { #endif /* Note: May be cast to struct iovec. See writev(2). */ -typedef struct { +typedef struct uv_buf_t { char* base; size_t len; } uv_buf_t; typedef int uv_file; typedef int uv_os_sock_t; +typedef int uv_os_fd_t; #define UV_ONCE_INIT PTHREAD_ONCE_INIT @@ -155,6 +157,47 @@ typedef pthread_barrier_t uv_barrier_t; typedef gid_t uv_gid_t; typedef uid_t uv_uid_t; +typedef struct dirent uv__dirent_t; + +#if defined(DT_UNKNOWN) +# define HAVE_DIRENT_TYPES +# if defined(DT_REG) +# define UV__DT_FILE DT_REG +# else +# define UV__DT_FILE -1 +# endif +# if defined(DT_DIR) +# define UV__DT_DIR DT_DIR +# else +# define UV__DT_DIR -2 +# endif +# if defined(DT_LNK) +# define UV__DT_LINK DT_LNK +# else +# define UV__DT_LINK -3 +# endif +# if defined(DT_FIFO) +# define UV__DT_FIFO DT_FIFO +# else +# define UV__DT_FIFO -4 +# endif +# if defined(DT_SOCK) +# define UV__DT_SOCKET DT_SOCK +# else +# define UV__DT_SOCKET -5 +# endif +# if defined(DT_CHR) +# define UV__DT_CHAR DT_CHR +# else +# define UV__DT_CHAR -6 +# endif +# if defined(DT_BLK) +# define UV__DT_BLOCK DT_BLK +# else +# define UV__DT_BLOCK -7 +# endif +#endif + /* Platform-specific definitions for uv_dlopen support. */ #define UV_DYNAMIC /* empty */ @@ -176,7 +219,7 @@ typedef struct { uv_async_t wq_async; \ uv_rwlock_t cloexec_lock; \ uv_handle_t* closing_handles; \ - void* process_handles[1][2]; \ + void* process_handles[2]; \ void* prepare_handles[2]; \ void* check_handles[2]; \ void* idle_handles[2]; \ diff --git a/deps/uv/include/uv-version.h b/deps/uv/include/uv-version.h index d33c8f8bde7..25c31ab5e10 100644 --- a/deps/uv/include/uv-version.h +++ b/deps/uv/include/uv-version.h @@ -23,16 +23,17 @@ #define UV_VERSION_H /* - * Versions with an even minor version (e.g. 0.6.1 or 1.0.4) are API and ABI - * stable. When the minor version is odd, the API can change between patch - * releases. Make sure you update the -soname directives in configure.ac + * Versions with the same major number are ABI stable. API is allowed to + * evolve between minor releases, but only in a backwards compatible way. + * Make sure you update the -soname directives in configure.ac * and uv.gyp whenever you bump UV_VERSION_MAJOR or UV_VERSION_MINOR (but * not UV_VERSION_PATCH.) 
*/ -#define UV_VERSION_MAJOR 0 -#define UV_VERSION_MINOR 11 -#define UV_VERSION_PATCH 28 +#define UV_VERSION_MAJOR 1 +#define UV_VERSION_MINOR 0 +#define UV_VERSION_PATCH 2 #define UV_VERSION_IS_RELEASE 1 +#define UV_VERSION_SUFFIX "" #endif /* UV_VERSION_H */ diff --git a/deps/uv/include/uv-win.h b/deps/uv/include/uv-win.h index 136b0b45de5..0c188e7e22a 100644 --- a/deps/uv/include/uv-win.h +++ b/deps/uv/include/uv-win.h @@ -39,6 +39,20 @@ typedef struct pollfd { } WSAPOLLFD, *PWSAPOLLFD, *LPWSAPOLLFD; #endif +#ifndef LOCALE_INVARIANT +# define LOCALE_INVARIANT 0x007f +#endif + +#ifndef _malloca +# if defined(_DEBUG) +# define _malloca(size) malloc(size) +# define _freea(ptr) free(ptr) +# else +# define _malloca(size) alloca(size) +# define _freea(ptr) +# endif +#endif + #include <mswsock.h> #include <ws2tcpip.h> #include <windows.h> @@ -215,8 +229,8 @@ typedef struct uv_buf_t { } uv_buf_t; typedef int uv_file; - typedef SOCKET uv_os_sock_t; +typedef HANDLE uv_os_fd_t; typedef HANDLE uv_thread_t; @@ -275,6 +289,19 @@ typedef struct uv_once_s { typedef unsigned char uv_uid_t; typedef unsigned char uv_gid_t; +typedef struct uv__dirent_s { + int d_type; + char d_name[1]; +} uv__dirent_t; + +#define UV__DT_DIR UV_DIRENT_DIR +#define UV__DT_FILE UV_DIRENT_FILE +#define UV__DT_LINK UV_DIRENT_LINK +#define UV__DT_FIFO UV_DIRENT_FIFO +#define UV__DT_SOCKET UV_DIRENT_SOCKET +#define UV__DT_CHAR UV_DIRENT_CHAR +#define UV__DT_BLOCK UV_DIRENT_BLOCK + /* Platform-specific definitions for uv_dlopen support. */ #define UV_DYNAMIC FAR WINAPI typedef struct { @@ -289,8 +316,6 @@ RB_HEAD(uv_timer_tree_s, uv_timer_s); HANDLE iocp; \ /* The current time according to the event loop. in msecs. */ \ uint64_t time; \ - /* GetTickCount() result when the event loop time was last updated. */ \ - DWORD last_tick_count; \ /* Tail of a single-linked circular queue of pending reqs. If the queue */ \ /* is empty, tail_ is NULL. If there is only one item, */ \ /* tail_->next_req == tail_ */ \ @@ -443,7 +468,8 @@ RB_HEAD(uv_timer_tree_s, uv_timer_s); int queue_len; \ } pending_ipc_info; \ uv_write_t* non_overlapped_writes_tail; \ - void* reserved; + uv_mutex_t readfile_mutex; \ + volatile HANDLE readfile_thread; #define UV_PIPE_PRIVATE_FIELDS \ HANDLE handle; \ @@ -491,7 +517,10 @@ RB_HEAD(uv_timer_tree_s, uv_timer_s); /* Used in fast mode */ \ SOCKET peer_socket; \ AFD_POLL_INFO afd_poll_info_1; \ - AFD_POLL_INFO afd_poll_info_2; \ + union { \ + AFD_POLL_INFO* afd_poll_info_ptr; \ + AFD_POLL_INFO afd_poll_info; \ + } afd_poll_info_2; \ /* Used in fast and slow mode. */ \ uv_req_t poll_req_1; \ uv_req_t poll_req_2; \ @@ -613,3 +642,15 @@ int uv_utf16_to_utf8(const WCHAR* utf16Buffer, size_t utf16Size, int uv_utf8_to_utf16(const char* utf8Buffer, WCHAR* utf16Buffer, size_t utf16Size); +#ifndef F_OK +#define F_OK 0 +#endif +#ifndef R_OK +#define R_OK 4 +#endif +#ifndef W_OK +#define W_OK 2 +#endif +#ifndef X_OK +#define X_OK 1 +#endif diff --git a/deps/uv/include/uv.h b/deps/uv/include/uv.h index df6d9549c11..7b3c25223b2 100644 --- a/deps/uv/include/uv.h +++ b/deps/uv/include/uv.h @@ -19,7 +19,7 @@ * IN THE SOFTWARE. */ -/* See https://github.com/joyent/libuv#documentation for documentation. */ +/* See https://github.com/libuv/libuv#documentation for documentation. */ #ifndef UV_H #define UV_H @@ -227,7 +227,11 @@ typedef struct uv_work_s uv_work_t; /* None of the above. 
*/ typedef struct uv_cpu_info_s uv_cpu_info_t; typedef struct uv_interface_address_s uv_interface_address_t; +typedef struct uv_dirent_s uv_dirent_t; +typedef enum { + UV_LOOP_BLOCK_SIGNAL +} uv_loop_option; typedef enum { UV_RUN_DEFAULT = 0, @@ -236,180 +240,44 @@ typedef enum { } uv_run_mode; -/* - * Returns the libuv version packed into a single integer. 8 bits are used for - * each component, with the patch number stored in the 8 least significant - * bits. E.g. for libuv 1.2.3 this would return 0x010203. - */ UV_EXTERN unsigned int uv_version(void); - -/* - * Returns the libuv version number as a string. For non-release versions - * "-pre" is appended, so the version number could be "1.2.3-pre". - */ UV_EXTERN const char* uv_version_string(void); - -/* - * All functions besides uv_run() are non-blocking. - * - * All callbacks in libuv are made asynchronously. That is they are never - * made by the function that takes them as a parameter. - */ - -/* - * Returns the initialized default loop. It may return NULL in case of - * allocation failture. - */ UV_EXTERN uv_loop_t* uv_default_loop(void); - -/* - * Initializes a uv_loop_t structure. - */ UV_EXTERN int uv_loop_init(uv_loop_t* loop); - -/* - * Closes all internal loop resources. This function must only be called once - * the loop has finished it's execution or it will return UV_EBUSY. After this - * function returns the user shall free the memory allocated for the loop. - */ UV_EXTERN int uv_loop_close(uv_loop_t* loop); - /* - * Allocates and initializes a new loop. - * * NOTE: * This function is DEPRECATED (to be removed after 0.12), users should * allocate the loop manually and use uv_loop_init instead. */ UV_EXTERN uv_loop_t* uv_loop_new(void); - /* - * Cleans up a loop once it has finished executio and frees its memory. - * * NOTE: * This function is DEPRECATED (to be removed after 0.12). Users should use * uv_loop_close and free the memory manually instead. */ UV_EXTERN void uv_loop_delete(uv_loop_t*); - -/* - * Returns size of the loop struct, useful for dynamic lookup with FFI. - */ UV_EXTERN size_t uv_loop_size(void); - -/* - * This function runs the event loop. It will act differently depending on the - * specified mode: - * - UV_RUN_DEFAULT: Runs the event loop until the reference count drops to - * zero. Always returns zero. - * - UV_RUN_ONCE: Poll for new events once. Note that this function blocks if - * there are no pending events. Returns zero when done (no active handles - * or requests left), or non-zero if more events are expected (meaning you - * should run the event loop again sometime in the future). - * - UV_RUN_NOWAIT: Poll for new events once but don't block if there are no - * pending events. Returns zero when done (no active handles - * or requests left), or non-zero if more events are expected (meaning you - * should run the event loop again sometime in the future). - */ -UV_EXTERN int uv_run(uv_loop_t*, uv_run_mode mode); - -/* - * This function checks whether the reference count, the number of active - * handles or requests left in the event loop, is non-zero. - */ UV_EXTERN int uv_loop_alive(const uv_loop_t* loop); +UV_EXTERN int uv_loop_configure(uv_loop_t* loop, uv_loop_option option, ...); -/* - * This function will stop the event loop by forcing uv_run to end as soon as - * possible, but not sooner than the next loop iteration. - * If this function was called before blocking for i/o, the loop won't block - * for i/o on this iteration. 
- */ +UV_EXTERN int uv_run(uv_loop_t*, uv_run_mode mode); UV_EXTERN void uv_stop(uv_loop_t*); -/* - * Manually modify the event loop's reference count. Useful if the user wants - * to have a handle or timeout that doesn't keep the loop alive. - */ UV_EXTERN void uv_ref(uv_handle_t*); UV_EXTERN void uv_unref(uv_handle_t*); UV_EXTERN int uv_has_ref(const uv_handle_t*); -/* - * Update the event loop's concept of "now". Libuv caches the current time - * at the start of the event loop tick in order to reduce the number of - * time-related system calls. - * - * You won't normally need to call this function unless you have callbacks - * that block the event loop for longer periods of time, where "longer" is - * somewhat subjective but probably on the order of a millisecond or more. - */ UV_EXTERN void uv_update_time(uv_loop_t*); - -/* - * Return the current timestamp in milliseconds. The timestamp is cached at - * the start of the event loop tick, see |uv_update_time()| for details and - * rationale. - * - * The timestamp increases monotonically from some arbitrary point in time. - * Don't make assumptions about the starting point, you will only get - * disappointed. - * - * Use uv_hrtime() if you need sub-millisecond granularity. - */ UV_EXTERN uint64_t uv_now(const uv_loop_t*); -/* - * Get backend file descriptor. Only kqueue, epoll and event ports are - * supported. - * - * This can be used in conjunction with `uv_run(loop, UV_RUN_NOWAIT)` to - * poll in one thread and run the event loop's event callbacks in another. - * - * Useful for embedding libuv's event loop in another event loop. - * See test/test-embed.c for an example. - * - * Note that embedding a kqueue fd in another kqueue pollset doesn't work on - * all platforms. It's not an error to add the fd but it never generates - * events. - */ UV_EXTERN int uv_backend_fd(const uv_loop_t*); - -/* - * Get the poll timeout. The return value is in milliseconds, or -1 for no - * timeout. - */ UV_EXTERN int uv_backend_timeout(const uv_loop_t*); - -/* - * Should prepare a buffer that libuv can use to read data into. - * - * `suggested_size` is a hint. Returning a buffer that is smaller is perfectly - * okay as long as `buf.len > 0`. - * - * If you return a buffer with `buf.len == 0`, libuv skips the read and calls - * your read or recv callback with nread=UV_ENOBUFS. - * - * Note that returning a zero-length buffer does not stop the handle, call - * uv_read_stop() or uv_udp_recv_stop() for that. - */ typedef void (*uv_alloc_cb)(uv_handle_t* handle, size_t suggested_size, uv_buf_t* buf); - -/* - * `nread` is > 0 if there is data available, 0 if libuv is done reading for - * now, or < 0 on error. - * - * The callee is responsible for closing the stream when an error happens - * by calling uv_close(). Trying to read from the stream again is undefined. - * - * The callee is responsible for freeing the buffer, libuv does not reuse it. - * The buffer may be a null buffer (where buf->base=NULL and buf->len=0) on - * error. - */ typedef void (*uv_read_cb)(uv_stream_t* stream, ssize_t nread, const uv_buf_t* buf); @@ -463,12 +331,6 @@ typedef struct { } uv_stat_t; -/* -* This will be called repeatedly after the uv_fs_event_t is initialized. -* If uv_fs_event_t was initialized with a directory the filename parameter -* will be a relative path to a file contained in the directory. -* The events parameter is an ORed mask of enum uv_fs_event elements. 
-*/ typedef void (*uv_fs_event_cb)(uv_fs_event_t* handle, const char* filename, int events, @@ -488,9 +350,6 @@ typedef enum { } uv_membership; -/* - * Most functions return 0 on success or an error code < 0 on failure. - */ UV_EXTERN const char* uv_strerror(int err); UV_EXTERN const char* uv_err_name(int err); @@ -502,6 +361,7 @@ UV_EXTERN const char* uv_err_name(int err); uv_req_type type; \ /* private */ \ void* active_queue[2]; \ + void* reserved[4]; \ UV_REQ_PRIVATE_FIELDS \ /* Abstract base class of all requests. */ @@ -514,14 +374,6 @@ struct uv_req_s { UV_PRIVATE_REQ_TYPES -/* - * uv_shutdown_t is a subclass of uv_req_t. - * - * Shutdown the outgoing (write) side of a duplex stream. It waits for pending - * write requests to complete. The handle should refer to a initialized stream. - * req should be an uninitialized shutdown request struct. The cb is called - * after shutdown is complete. - */ UV_EXTERN int uv_shutdown(uv_shutdown_t* req, uv_stream_t* handle, uv_shutdown_cb cb); @@ -543,6 +395,7 @@ struct uv_shutdown_s { /* private */ \ uv_close_cb close_cb; \ void* handle_queue[2]; \ + void* reserved[4]; \ UV_HANDLE_PRIVATE_FIELDS \ /* The abstract base class of all handles. */ @@ -550,66 +403,20 @@ struct uv_handle_s { UV_HANDLE_FIELDS }; -/* - * Returns size of various handle types, useful for FFI bindings to allocate - * correct memory without copying struct definitions. - */ UV_EXTERN size_t uv_handle_size(uv_handle_type type); - -/* - * Returns size of request types, useful for dynamic lookup with FFI. - */ UV_EXTERN size_t uv_req_size(uv_req_type type); -/* - * Returns non-zero if the handle is active, zero if it's inactive. - * - * What "active" means depends on the type of handle: - * - * - A uv_async_t handle is always active and cannot be deactivated, except - * by closing it with uv_close(). - * - * - A uv_pipe_t, uv_tcp_t, uv_udp_t, etc. handle - basically any handle that - * deals with i/o - is active when it is doing something that involves i/o, - * like reading, writing, connecting, accepting new connections, etc. - * - * - A uv_check_t, uv_idle_t, uv_timer_t, etc. handle is active when it has - * been started with a call to uv_check_start(), uv_idle_start(), etc. - * - * Rule of thumb: if a handle of type uv_foo_t has a uv_foo_start() - * function, then it's active from the moment that function is called. - * Likewise, uv_foo_stop() deactivates the handle again. - * - */ UV_EXTERN int uv_is_active(const uv_handle_t* handle); -/* - * Walk the list of open handles. - */ UV_EXTERN void uv_walk(uv_loop_t* loop, uv_walk_cb walk_cb, void* arg); - -/* - * Request handle to be closed. close_cb will be called asynchronously after - * this call. This MUST be called on each handle before memory is released. - * - * Note that handles that wrap file descriptors are closed immediately but - * close_cb will still be deferred to the next iteration of the event loop. - * It gives you a chance to free up any resources associated with the handle. - * - * In-progress requests, like uv_connect_t or uv_write_t, are cancelled and - * have their callbacks called asynchronously with status=UV_ECANCELED. - */ UV_EXTERN void uv_close(uv_handle_t* handle, uv_close_cb close_cb); +UV_EXTERN int uv_send_buffer_size(uv_handle_t* handle, int* value); +UV_EXTERN int uv_recv_buffer_size(uv_handle_t* handle, int* value); + +UV_EXTERN int uv_fileno(const uv_handle_t* handle, uv_os_fd_t* fd); -/* - * Constructor for uv_buf_t. 
- * - * Due to platform differences the user cannot rely on the ordering of the - * base and len members of the uv_buf_t struct. The user is responsible for - * freeing base after the uv_buf_t is done. Return struct passed by value. - */ UV_EXTERN uv_buf_t uv_buf_init(char* base, unsigned int len); @@ -634,89 +441,24 @@ struct uv_stream_s { }; UV_EXTERN int uv_listen(uv_stream_t* stream, int backlog, uv_connection_cb cb); - -/* - * This call is used in conjunction with uv_listen() to accept incoming - * connections. Call uv_accept after receiving a uv_connection_cb to accept - * the connection. Before calling uv_accept use uv_*_init() must be - * called on the client. Non-zero return value indicates an error. - * - * When the uv_connection_cb is called it is guaranteed that uv_accept() will - * complete successfully the first time. If you attempt to use it more than - * once, it may fail. It is suggested to only call uv_accept() once per - * uv_connection_cb call. - */ UV_EXTERN int uv_accept(uv_stream_t* server, uv_stream_t* client); -/* - * Read data from an incoming stream. The callback will be made several - * times until there is no more data to read or uv_read_stop() is called. - * When we've reached EOF nread will be set to UV_EOF. - * - * When nread < 0, the buf parameter might not point to a valid buffer; - * in that case buf.len and buf.base are both set to 0. - * - * Note that nread might also be 0, which does *not* indicate an error or - * eof; it happens when libuv requested a buffer through the alloc callback - * but then decided that it didn't need that buffer. - */ UV_EXTERN int uv_read_start(uv_stream_t*, uv_alloc_cb alloc_cb, uv_read_cb read_cb); - UV_EXTERN int uv_read_stop(uv_stream_t*); - -/* - * Write data to stream. Buffers are written in order. Example: - * - * uv_buf_t a[] = { - * { .base = "1", .len = 1 }, - * { .base = "2", .len = 1 } - * }; - * - * uv_buf_t b[] = { - * { .base = "3", .len = 1 }, - * { .base = "4", .len = 1 } - * }; - * - * uv_write_t req1; - * uv_write_t req2; - * - * // writes "1234" - * uv_write(&req1, stream, a, 2); - * uv_write(&req2, stream, b, 2); - * - */ UV_EXTERN int uv_write(uv_write_t* req, uv_stream_t* handle, const uv_buf_t bufs[], unsigned int nbufs, uv_write_cb cb); - -/* - * Extended write function for sending handles over a pipe. The pipe must be - * initialized with ipc == 1. - * send_handle must be a TCP socket or pipe, which is a server or a connection - * (listening or connected state). Bound sockets or pipes will be assumed to - * be servers. - */ UV_EXTERN int uv_write2(uv_write_t* req, uv_stream_t* handle, const uv_buf_t bufs[], unsigned int nbufs, uv_stream_t* send_handle, uv_write_cb cb); - -/* - * Same as uv_write(), but won't queue write request if it can't be completed - * immediately. - * - * Will return either: - * - > 0: number of bytes written (can be less than the supplied buffer size). - * - < 0: negative error code (UV_EAGAIN is returned if no data can be sent - * immediately). - */ UV_EXTERN int uv_try_write(uv_stream_t* handle, const uv_buf_t bufs[], unsigned int nbufs); @@ -731,40 +473,11 @@ struct uv_write_s { }; -/* - * Used to determine whether a stream is readable or writable. - */ UV_EXTERN int uv_is_readable(const uv_stream_t* handle); UV_EXTERN int uv_is_writable(const uv_stream_t* handle); - -/* - * Enable or disable blocking mode for a stream. - * - * When blocking mode is enabled all writes complete synchronously. The - * interface remains unchanged otherwise, e.g. 
completion or failure of the - * operation will still be reported through a callback which is made - * asychronously. - * - * Relying too much on this API is not recommended. It is likely to change - * significantly in the future. - * - * Currently this only works on Windows and only for uv_pipe_t handles. - * - * Also libuv currently makes no ordering guarantee when the blocking mode - * is changed after write requests have already been submitted. Therefore it is - * recommended to set the blocking mode immediately after opening or creating - * the stream. - */ UV_EXTERN int uv_stream_set_blocking(uv_stream_t* handle, int blocking); - -/* - * Used to determine whether a stream is closing or closed. - * - * N.B. is only valid between the initialization of the handle and the arrival - * of the close callback, and cannot be used to validate the handle. - */ UV_EXTERN int uv_is_closing(const uv_handle_t* handle); @@ -780,33 +493,11 @@ struct uv_tcp_s { }; UV_EXTERN int uv_tcp_init(uv_loop_t*, uv_tcp_t* handle); - -/* - * Opens an existing file descriptor or SOCKET as a tcp handle. - */ UV_EXTERN int uv_tcp_open(uv_tcp_t* handle, uv_os_sock_t sock); - -/* Enable/disable Nagle's algorithm. */ UV_EXTERN int uv_tcp_nodelay(uv_tcp_t* handle, int enable); - -/* - * Enable/disable TCP keep-alive. - * - * `delay` is the initial delay in seconds, ignored when `enable` is zero. - */ UV_EXTERN int uv_tcp_keepalive(uv_tcp_t* handle, int enable, unsigned int delay); - -/* - * Enable/disable simultaneous asynchronous accept requests that are - * queued by the operating system when listening for new tcp connections. - * - * This setting is used to tune a tcp server for the desired performance. - * Having simultaneous accepts can significantly improve the rate of accepting - * connections (which is why it is enabled by default) but may lead to uneven - * load distribution in multi-process setups. - */ UV_EXTERN int uv_tcp_simultaneous_accepts(uv_tcp_t* handle, int enable); enum uv_tcp_flags { @@ -814,16 +505,6 @@ enum uv_tcp_flags { UV_TCP_IPV6ONLY = 1 }; -/* - * Bind the handle to an address and port. `addr` should point to an - * initialized struct sockaddr_in or struct sockaddr_in6. - * - * When the port is already taken, you can expect to see an UV_EADDRINUSE error - * from either uv_tcp_bind(), uv_listen() or uv_tcp_connect(). - * - * That is, a successful call to uv_tcp_bind() does not guarantee that the call - * to uv_listen() or uv_tcp_connect() will succeed as well. - */ UV_EXTERN int uv_tcp_bind(uv_tcp_t* handle, const struct sockaddr* addr, unsigned int flags); @@ -833,15 +514,6 @@ UV_EXTERN int uv_tcp_getsockname(const uv_tcp_t* handle, UV_EXTERN int uv_tcp_getpeername(const uv_tcp_t* handle, struct sockaddr* name, int* namelen); - -/* - * Establish an IPv4 or IPv6 TCP connection. Provide an initialized TCP handle - * and an uninitialized uv_connect_t*. `addr` should point to an initialized - * struct sockaddr_in or struct sockaddr_in6. - * - * The callback is made when the connection has been established or when a - * connection error happened. - */ UV_EXTERN int uv_tcp_connect(uv_connect_t* req, uv_tcp_t* handle, const struct sockaddr* addr, @@ -879,31 +551,7 @@ enum uv_udp_flags { UV_UDP_REUSEADDR = 4 }; -/* - * Called after uv_udp_send(). status 0 indicates success otherwise error. - */ typedef void (*uv_udp_send_cb)(uv_udp_send_t* req, int status); - -/* - * Callback that is invoked when a new UDP datagram is received. - * - * handle UDP handle. 
- * nread Number of bytes that have been received. - * - 0 if there is no more data to read. You may discard or repurpose - * the read buffer. Note that 0 may also mean that an empty datagram - * was received (in this case `addr` is not NULL). - * - < 0 if a transmission error was detected. - * buf uv_buf_t with the received data. - * addr struct sockaddr* containing the address of the sender. Can be NULL. - * Valid for the duration of the callback only. - * flags One or more OR'ed UV_UDP_* constants. Right now only UV_UDP_PARTIAL - * is used. - * - * NOTE: - * The receive callback will be called with nread == 0 and addr == NULL when - * there is nothing to read, and with nread == 0 and addr != NULL when an empty - * UDP packet is received. - */ typedef void (*uv_udp_recv_cb)(uv_udp_t* handle, ssize_t nread, const uv_buf_t* buf, @@ -934,44 +582,8 @@ struct uv_udp_send_s { UV_UDP_SEND_PRIVATE_FIELDS }; -/* - * Initialize a new UDP handle. The actual socket is created lazily. - * Returns 0 on success. - */ UV_EXTERN int uv_udp_init(uv_loop_t*, uv_udp_t* handle); - -/* - * Opens an existing file descriptor or SOCKET as a udp handle. - * - * Unix only: - * The only requirement of the sock argument is that it follows the datagram - * contract (works in unconnected mode, supports sendmsg()/recvmsg(), etc). - * In other words, other datagram-type sockets like raw sockets or netlink - * sockets can also be passed to this function. - * - * This sets the SO_REUSEPORT socket flag on the BSDs and OS X. On other Unix - * platforms, it sets the SO_REUSEADDR flag. What that means is that multiple - * threads or processes can bind to the same address without error (provided - * they all set the flag) but only the last one to bind will receive any - * traffic, in effect "stealing" the port from the previous listener. - * This behavior is something of an anomaly and may be replaced by an explicit - * opt-in mechanism in future versions of libuv. - */ UV_EXTERN int uv_udp_open(uv_udp_t* handle, uv_os_sock_t sock); - -/* - * Bind to an IP address and port. - * - * Arguments: - * handle UDP handle. Should have been initialized with uv_udp_init(). - * addr struct sockaddr_in or struct sockaddr_in6 with the address and - * port to bind to. - * flags Indicate how the socket will be bound, UV_UDP_IPV6ONLY and - * UV_UDP_REUSEADDR are supported. - * - * Returns: - * 0 on success, or an error code < 0 on failure. - */ UV_EXTERN int uv_udp_bind(uv_udp_t* handle, const struct sockaddr* addr, unsigned int flags); @@ -979,155 +591,29 @@ UV_EXTERN int uv_udp_bind(uv_udp_t* handle, UV_EXTERN int uv_udp_getsockname(const uv_udp_t* handle, struct sockaddr* name, int* namelen); - -/* - * Set membership for a multicast address - * - * Arguments: - * handle UDP handle. Should have been initialized with - * uv_udp_init(). - * multicast_addr multicast address to set membership for. - * interface_addr interface address. - * membership Should be UV_JOIN_GROUP or UV_LEAVE_GROUP. - * - * Returns: - * 0 on success, or an error code < 0 on failure. - */ UV_EXTERN int uv_udp_set_membership(uv_udp_t* handle, const char* multicast_addr, const char* interface_addr, uv_membership membership); - -/* - * Set IP multicast loop flag. Makes multicast packets loop back to - * local sockets. - * - * Arguments: - * handle UDP handle. Should have been initialized with - * uv_udp_init(). - * on 1 for on, 0 for off. - * - * Returns: - * 0 on success, or an error code < 0 on failure. 
- */ UV_EXTERN int uv_udp_set_multicast_loop(uv_udp_t* handle, int on); - -/* - * Set the multicast ttl. - * - * Arguments: - * handle UDP handle. Should have been initialized with - * uv_udp_init(). - * ttl 1 through 255. - * - * Returns: - * 0 on success, or an error code < 0 on failure. - */ UV_EXTERN int uv_udp_set_multicast_ttl(uv_udp_t* handle, int ttl); - - -/* - * Set the multicast interface to send on. - * - * Arguments: - * handle UDP handle. Should have been initialized with - * uv_udp_init(). - * interface_addr interface address. - * - * Returns: - * 0 on success, or an error code < 0 on failure. - */ UV_EXTERN int uv_udp_set_multicast_interface(uv_udp_t* handle, const char* interface_addr); - -/* - * Set broadcast on or off. - * - * Arguments: - * handle UDP handle. Should have been initialized with - * uv_udp_init(). - * on 1 for on, 0 for off. - * - * Returns: - * 0 on success, or an error code < 0 on failure. - */ UV_EXTERN int uv_udp_set_broadcast(uv_udp_t* handle, int on); - -/* - * Set the time to live. - * - * Arguments: - * handle UDP handle. Should have been initialized with - * uv_udp_init(). - * ttl 1 through 255. - * - * Returns: - * 0 on success, or an error code < 0 on failure. - */ UV_EXTERN int uv_udp_set_ttl(uv_udp_t* handle, int ttl); - -/* - * Send data. If the socket has not previously been bound with uv_udp_bind() it - * is bound to 0.0.0.0 (the "all interfaces" address) and a random port number. - * - * Arguments: - * req UDP request handle. Need not be initialized. - * handle UDP handle. Should have been initialized with uv_udp_init(). - * bufs List of buffers to send. - * nbufs Number of buffers in `bufs`. - * addr struct sockaddr_in or struct sockaddr_in6 with the address and - * port of the remote peer. - * send_cb Callback to invoke when the data has been sent out. - * - * Returns: - * 0 on success, or an error code < 0 on failure. - */ UV_EXTERN int uv_udp_send(uv_udp_send_t* req, uv_udp_t* handle, const uv_buf_t bufs[], unsigned int nbufs, const struct sockaddr* addr, uv_udp_send_cb send_cb); - -/* - * Same as uv_udp_send(), but won't queue a send request if it can't be completed - * immediately. - * - * Will return either: - * - >= 0: number of bytes sent (it matches the given buffer size). - * - < 0: negative error code (UV_EAGAIN is returned when the message can't be - * sent immediately). - */ UV_EXTERN int uv_udp_try_send(uv_udp_t* handle, const uv_buf_t bufs[], unsigned int nbufs, const struct sockaddr* addr); -/* - * Receive data. If the socket has not previously been bound with uv_udp_bind() - * it is bound to 0.0.0.0 (the "all interfaces" address) and a random port - * number. - * - * Arguments: - * handle UDP handle. Should have been initialized with uv_udp_init(). - * alloc_cb Callback to invoke when temporary storage is needed. - * recv_cb Callback to invoke with received data. - * - * Returns: - * 0 on success, or an error code < 0 on failure. - */ UV_EXTERN int uv_udp_recv_start(uv_udp_t* handle, uv_alloc_cb alloc_cb, uv_udp_recv_cb recv_cb); - -/* - * Stop listening for incoming datagrams. - * - * Arguments: - * handle UDP handle. Should have been initialized with uv_udp_init(). - * - * Returns: - * 0 on success, or an error code < 0 on failure. - */ UV_EXTERN int uv_udp_recv_stop(uv_udp_t* handle); @@ -1142,45 +628,11 @@ struct uv_tty_s { UV_TTY_PRIVATE_FIELDS }; -/* - * Initialize a new TTY stream with the given file descriptor. 
Usually the - * file descriptor will be: - * 0 = stdin - * 1 = stdout - * 2 = stderr - * The last argument, readable, specifies if you plan on calling - * uv_read_start() with this stream. stdin is readable, stdout is not. - * - * TTY streams which are not readable have blocking writes. - */ UV_EXTERN int uv_tty_init(uv_loop_t*, uv_tty_t*, uv_file fd, int readable); - -/* - * Set mode. 0 for normal, 1 for raw. - */ UV_EXTERN int uv_tty_set_mode(uv_tty_t*, int mode); - -/* - * To be called when the program exits. Resets TTY settings to default - * values for the next process to take over. - * - * This function is async signal-safe on Unix platforms but can fail with error - * code UV_EBUSY if you call it when execution is inside uv_tty_set_mode(). - */ UV_EXTERN int uv_tty_reset_mode(void); - -/* - * Gets the current Window size. On success zero is returned. - */ UV_EXTERN int uv_tty_get_winsize(uv_tty_t*, int* width, int* height); -/* - * Used to detect what type of stream should be used with a given file - * descriptor. Usually this will be used during initialization to guess the - * type of the stdio streams. - * - * For isatty() functionality use this function and test for UV_TTY. - */ UV_EXTERN uv_handle_type uv_guess_handle(uv_file file); /* @@ -1196,95 +648,21 @@ struct uv_pipe_s { UV_PIPE_PRIVATE_FIELDS }; -/* - * Initialize a pipe. The last argument is a boolean to indicate if - * this pipe will be used for handle passing between processes. - */ UV_EXTERN int uv_pipe_init(uv_loop_t*, uv_pipe_t* handle, int ipc); - -/* - * Opens an existing file descriptor or HANDLE as a pipe. - */ UV_EXTERN int uv_pipe_open(uv_pipe_t*, uv_file file); - -/* - * Bind the pipe to a file path (Unix) or a name (Windows). - * - * Paths on Unix get truncated to `sizeof(sockaddr_un.sun_path)` bytes, - * typically between 92 and 108 bytes. - */ UV_EXTERN int uv_pipe_bind(uv_pipe_t* handle, const char* name); - -/* - * Connect to the Unix domain socket or the named pipe. - * - * Paths on Unix get truncated to `sizeof(sockaddr_un.sun_path)` bytes, - * typically between 92 and 108 bytes. - */ UV_EXTERN void uv_pipe_connect(uv_connect_t* req, uv_pipe_t* handle, const char* name, uv_connect_cb cb); - -/* - * Get the name of the Unix domain socket or the named pipe. - * - * A preallocated buffer must be provided. The len parameter holds the length - * of the buffer and it's set to the number of bytes written to the buffer on - * output. If the buffer is not big enough UV_ENOBUFS will be returned and len - * will contain the required size. - */ UV_EXTERN int uv_pipe_getsockname(const uv_pipe_t* handle, char* buf, size_t* len); - -/* - * This setting applies to Windows only. - * - * Set the number of pending pipe instance handles when the pipe server is - * waiting for connections. - */ UV_EXTERN void uv_pipe_pending_instances(uv_pipe_t* handle, int count); - -/* - * Used to receive handles over ipc pipes. - * - * First - call uv_pipe_pending_count(), if it is > 0 - initialize handle - * using type, returned by uv_pipe_pending_type() and call - * uv_accept(pipe, handle). - */ UV_EXTERN int uv_pipe_pending_count(uv_pipe_t* handle); UV_EXTERN uv_handle_type uv_pipe_pending_type(uv_pipe_t* handle); -/* - * uv_poll_t is a subclass of uv_handle_t. - * - * The uv_poll watcher is used to watch file descriptors for readability and - * writability, similar to the purpose of poll(2). 
- * - * The purpose of uv_poll is to enable integrating external libraries that - * rely on the event loop to signal it about the socket status changes, like - * c-ares or libssh2. Using uv_poll_t for any other purpose is not recommended; - * uv_tcp_t, uv_udp_t, etc. provide an implementation that is much faster and - * more scalable than what can be achieved with uv_poll_t, especially on - * Windows. - * - * It is possible that uv_poll occasionally signals that a file descriptor is - * readable or writable even when it isn't. The user should therefore always - * be prepared to handle EAGAIN or equivalent when it attempts to read from or - * write to the fd. - * - * It is not okay to have multiple active uv_poll watchers for the same socket. - * This can cause libuv to busyloop or otherwise malfunction. - * - * The user should not close a file descriptor while it is being polled by an - * active uv_poll watcher. This can cause the poll watcher to report an error, - * but it might also start polling another socket. However the fd can be safely - * closed immediately after a call to uv_poll_stop() or uv_close(). - * - * On windows only sockets can be polled with uv_poll. On Unix any file - * descriptor that would be accepted by poll(2) can be used with uv_poll. - */ + struct uv_poll_s { UV_HANDLE_FIELDS uv_poll_cb poll_cb; @@ -1296,124 +674,52 @@ enum uv_poll_event { UV_WRITABLE = 2 }; -/* Initialize the poll watcher using a file descriptor. */ UV_EXTERN int uv_poll_init(uv_loop_t* loop, uv_poll_t* handle, int fd); - -/* - * Initialize the poll watcher using a socket descriptor. On Unix this is - * identical to uv_poll_init. On windows it takes a SOCKET handle. - */ UV_EXTERN int uv_poll_init_socket(uv_loop_t* loop, uv_poll_t* handle, uv_os_sock_t socket); - -/* - * Starts polling the file descriptor. `events` is a bitmask consisting made up - * of UV_READABLE and UV_WRITABLE. As soon as an event is detected the callback - * will be called with `status` set to 0, and the detected events set en the - * `events` field. - * - * If an error happens while polling status, `status` < 0 and corresponds - * with one of the UV_E* error codes. The user should not close the socket - * while uv_poll is active. If the user does that anyway, the callback *may* - * be called reporting an error status, but this is not guaranteed. - * - * Calling uv_poll_start on an uv_poll watcher that is already active is fine. - * Doing so will update the events mask that is being watched for. - */ UV_EXTERN int uv_poll_start(uv_poll_t* handle, int events, uv_poll_cb cb); - -/* Stops polling the file descriptor. */ UV_EXTERN int uv_poll_stop(uv_poll_t* handle); -/* - * uv_prepare_t is a subclass of uv_handle_t. - * - * Every active prepare handle gets its callback called exactly once per loop - * iteration, just before the system blocks to wait for completed i/o. - */ struct uv_prepare_s { UV_HANDLE_FIELDS UV_PREPARE_PRIVATE_FIELDS }; UV_EXTERN int uv_prepare_init(uv_loop_t*, uv_prepare_t* prepare); - UV_EXTERN int uv_prepare_start(uv_prepare_t* prepare, uv_prepare_cb cb); - UV_EXTERN int uv_prepare_stop(uv_prepare_t* prepare); -/* - * uv_check_t is a subclass of uv_handle_t. - * - * Every active check handle gets its callback called exactly once per loop - * iteration, just after the system returns from blocking. 
- */ struct uv_check_s { UV_HANDLE_FIELDS UV_CHECK_PRIVATE_FIELDS }; UV_EXTERN int uv_check_init(uv_loop_t*, uv_check_t* check); - UV_EXTERN int uv_check_start(uv_check_t* check, uv_check_cb cb); - UV_EXTERN int uv_check_stop(uv_check_t* check); -/* - * uv_idle_t is a subclass of uv_handle_t. - * - * Every active idle handle gets its callback called repeatedly until it is - * stopped. This happens after all other types of callbacks are processed. - * When there are multiple "idle" handles active, their callbacks are called - * in turn. - */ struct uv_idle_s { UV_HANDLE_FIELDS UV_IDLE_PRIVATE_FIELDS }; UV_EXTERN int uv_idle_init(uv_loop_t*, uv_idle_t* idle); - UV_EXTERN int uv_idle_start(uv_idle_t* idle, uv_idle_cb cb); - UV_EXTERN int uv_idle_stop(uv_idle_t* idle); -/* - * uv_async_t is a subclass of uv_handle_t. - * - * uv_async_send() wakes up the event loop and calls the async handle's callback. - * - * Unlike all other libuv functions, uv_async_send() can be called from another - * thread. - * - * NOTE: - * There is no guarantee that every uv_async_send() call leads to exactly one - * invocation of the callback; the only guarantee is that the callback - * function is called at least once after the call to async_send. - */ struct uv_async_s { UV_HANDLE_FIELDS UV_ASYNC_PRIVATE_FIELDS }; -/* - * Initialize the uv_async_t handle. A NULL callback is allowed. - * - * Note that uv_async_init(), unlike other libuv functions, immediately - * starts the handle. To stop the handle again, close it with uv_close(). - */ UV_EXTERN int uv_async_init(uv_loop_t*, uv_async_t* async, uv_async_cb async_cb); - -/* - * This can be called from other threads to wake up a libuv thread. - */ UV_EXTERN int uv_async_send(uv_async_t* async); @@ -1428,37 +734,13 @@ struct uv_timer_s { }; UV_EXTERN int uv_timer_init(uv_loop_t*, uv_timer_t* handle); - -/* - * Start the timer. `timeout` and `repeat` are in milliseconds. - * - * If timeout is zero, the callback fires on the next tick of the event loop. - * - * If repeat is non-zero, the callback fires first after timeout milliseconds - * and then repeatedly after repeat milliseconds. - */ UV_EXTERN int uv_timer_start(uv_timer_t* handle, uv_timer_cb cb, uint64_t timeout, uint64_t repeat); - UV_EXTERN int uv_timer_stop(uv_timer_t* handle); - -/* - * Stop the timer, and if it is repeating restart it using the repeat value - * as the timeout. If the timer has never been started before it returns - * UV_EINVAL. - */ UV_EXTERN int uv_timer_again(uv_timer_t* handle); - -/* - * Set the repeat value in milliseconds. Note that if the repeat value is set - * from a timer callback it does not immediately take effect. If the timer was - * non-repeating before, it will have been stopped. If it was repeating, then - * the old repeat value will have been used to schedule the next timeout. - */ UV_EXTERN void uv_timer_set_repeat(uv_timer_t* handle, uint64_t repeat); - UV_EXTERN uint64_t uv_timer_get_repeat(const uv_timer_t* handle); @@ -1475,34 +757,12 @@ struct uv_getaddrinfo_s { }; -/* - * Asynchronous getaddrinfo(3). - * - * Either node or service may be NULL but not both. - * - * hints is a pointer to a struct addrinfo with additional address type - * constraints, or NULL. Consult `man -s 3 getaddrinfo` for details. - * - * Returns 0 on success or an error code < 0 on failure. 
- * - * If successful, your callback gets called sometime in the future with the - * lookup result, which is either: - * - * a) err == 0, the res argument points to a valid struct addrinfo, or - * b) err < 0, the res argument is NULL. See the UV_EAI_* constants. - * - * Call uv_freeaddrinfo() to free the addrinfo structure. - */ UV_EXTERN int uv_getaddrinfo(uv_loop_t* loop, uv_getaddrinfo_t* req, uv_getaddrinfo_cb getaddrinfo_cb, const char* node, const char* service, const struct addrinfo* hints); - -/* - * Free the struct addrinfo. Passing NULL is allowed and is a no-op. - */ UV_EXTERN void uv_freeaddrinfo(struct addrinfo* ai); @@ -1518,14 +778,6 @@ struct uv_getnameinfo_s { UV_GETNAMEINFO_PRIVATE_FIELDS }; -/* - * Asynchronous getnameinfo. - * - * Returns 0 on success or an error code < 0 on failure. - * - * If successful, your callback gets called sometime in the future with the - * lookup result. - */ UV_EXTERN int uv_getnameinfo(uv_loop_t* loop, uv_getnameinfo_t* req, uv_getnameinfo_cb getnameinfo_cb, @@ -1651,46 +903,10 @@ struct uv_process_s { UV_PROCESS_PRIVATE_FIELDS }; -/* - * Initializes the uv_process_t and starts the process. If the process is - * successfully spawned, then this function will return 0. Otherwise, the - * negative error code corresponding to the reason it couldn't spawn is - * returned. - * - * Possible reasons for failing to spawn would include (but not be limited to) - * the file to execute not existing, not having permissions to use the setuid or - * setgid specified, or not having enough memory to allocate for the new - * process. - */ UV_EXTERN int uv_spawn(uv_loop_t* loop, uv_process_t* handle, const uv_process_options_t* options); - - -/* - * Kills the process with the specified signal. The user must still - * call uv_close() on the process. - * - * Emulates some aspects of Unix exit status on Windows, in that while the - * underlying process will be terminated with a status of `1`, - * `uv_process_t.exit_signal` will be set to signum, so the process will appear - * to have been killed by `signum`. - */ UV_EXTERN int uv_process_kill(uv_process_t*, int signum); - - -/* Kills the process with the specified signal. - * - * Emulates some aspects of Unix signals on Windows: - * - SIGTERM, SIGKILL, and SIGINT call TerminateProcess() to unconditionally - * cause the target to exit with status 1. Unlike Unix, this cannot be caught - * or ignored (but see uv_process_kill() and uv_signal_start()). - * - Signal number `0` causes a check for target existence, as in Unix. Return - * value is 0 on existence, UV_ESRCH on non-existence. - * - * Returns 0 on success, or an error code on failure. UV_ESRCH is portably used - * for non-existence of target process, other errors may be system specific. - */ UV_EXTERN int uv_kill(int pid, int signum); @@ -1705,34 +921,11 @@ struct uv_work_s { UV_WORK_PRIVATE_FIELDS }; -/* Queues a work request to execute asynchronously on the thread pool. */ UV_EXTERN int uv_queue_work(uv_loop_t* loop, uv_work_t* req, uv_work_cb work_cb, uv_after_work_cb after_work_cb); -/* Cancel a pending request. Fails if the request is executing or has finished - * executing. - * - * Returns 0 on success, or an error code < 0 on failure. - * - * Only cancellation of uv_fs_t, uv_getaddrinfo_t and uv_work_t requests is - * currently supported. - * - * Cancelled requests have their callbacks invoked some time in the future. - * It's _not_ safe to free the memory associated with the request until your - * callback is called. 
- * - * Here is how cancellation is reported to your callback: - * - * - A uv_fs_t request has its req->result field set to UV_ECANCELED. - * - * - A uv_work_t or uv_getaddrinfo_t request has its callback invoked with - * status == UV_ECANCELED. - * - * This function is currently only implemented on Unix platforms. On Windows, - * it always returns UV_ENOSYS. - */ UV_EXTERN int uv_cancel(uv_req_t* req); @@ -1762,6 +955,22 @@ struct uv_interface_address_s { } netmask; }; +typedef enum { + UV_DIRENT_UNKNOWN, + UV_DIRENT_FILE, + UV_DIRENT_DIR, + UV_DIRENT_LINK, + UV_DIRENT_FIFO, + UV_DIRENT_SOCKET, + UV_DIRENT_CHAR, + UV_DIRENT_BLOCK +} uv_dirent_type_t; + +struct uv_dirent_s { + const char* name; + uv_dirent_type_t type; +}; + UV_EXTERN char** uv_setup_args(int argc, char** argv); UV_EXTERN int uv_get_process_title(char* buffer, size_t size); UV_EXTERN int uv_set_process_title(const char* title); @@ -1792,41 +1001,16 @@ typedef struct { uint64_t ru_nivcsw; /* involuntary context switches */ } uv_rusage_t; -/* - * Get information about OS resource utilization for the current process. - * Please note that not all uv_rusage_t struct fields will be filled on Windows. - */ UV_EXTERN int uv_getrusage(uv_rusage_t* rusage); -/* - * This allocates cpu_infos array, and sets count. The array is freed - * using uv_free_cpu_info(). - */ UV_EXTERN int uv_cpu_info(uv_cpu_info_t** cpu_infos, int* count); UV_EXTERN void uv_free_cpu_info(uv_cpu_info_t* cpu_infos, int count); -/* - * This allocates addresses array, and sets count. The array is freed - * using uv_free_interface_addresses(). - */ UV_EXTERN int uv_interface_addresses(uv_interface_address_t** addresses, - int* count); + int* count); UV_EXTERN void uv_free_interface_addresses(uv_interface_address_t* addresses, - int count); + int count); -/* - * File System Methods. - * - * The uv_fs_* functions execute a blocking system call asynchronously (in a - * thread pool) and call the specified callback in the specified loop after - * completion. If the user gives NULL as the callback the blocking system - * call will be called synchronously. req should be a pointer to an - * uninitialized uv_fs_t object. - * - * uv_fs_req_cleanup() must be called after completion of the uv_fs_ - * function to free any internal memory allocations associated with the - * request. 
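
The comment above captures the uv_fs_* contract: pass a callback to run the blocking syscall on the thread pool, pass NULL to block the caller instead, and always call uv_fs_req_cleanup() afterwards. The hunk below also renames uv_fs_readdir() to uv_fs_scandir() and pairs it with uv_fs_scandir_next(). A minimal sketch of the synchronous form, assuming uv_default_loop() and a hypothetical list_dir() helper:

    #include <stdio.h>
    #include "uv.h"

    /* Hypothetical helper: list a directory with the renamed scandir API.
     * A NULL callback makes the call run synchronously, no thread pool. */
    static void list_dir(const char* path) {
      uv_fs_t req;
      uv_dirent_t ent;
      int n;

      n = uv_fs_scandir(uv_default_loop(), &req, path, 0, NULL);
      if (n < 0) {
        fprintf(stderr, "scandir %s: %s\n", path, uv_strerror(n));
        return;
      }

      /* Entries are pulled one at a time until UV_EOF. */
      while (uv_fs_scandir_next(&req, &ent) != UV_EOF)
        printf("%s%s\n", ent.name, ent.type == UV_DIRENT_DIR ? "/" : "");

      uv_fs_req_cleanup(&req);  /* frees the request's internal allocations */
    }
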
- */ typedef enum { UV_FS_UNKNOWN = -1, @@ -1842,6 +1026,7 @@ typedef enum { UV_FS_FTRUNCATE, UV_FS_UTIME, UV_FS_FUTIME, + UV_FS_ACCESS, UV_FS_CHMOD, UV_FS_FCHMOD, UV_FS_FSYNC, @@ -1851,7 +1036,7 @@ typedef enum { UV_FS_MKDIR, UV_FS_MKDTEMP, UV_FS_RENAME, - UV_FS_READDIR, + UV_FS_SCANDIR, UV_FS_LINK, UV_FS_SYMLINK, UV_FS_READLINK, @@ -1873,69 +1058,118 @@ struct uv_fs_s { }; UV_EXTERN void uv_fs_req_cleanup(uv_fs_t* req); - -UV_EXTERN int uv_fs_close(uv_loop_t* loop, uv_fs_t* req, uv_file file, - uv_fs_cb cb); - -UV_EXTERN int uv_fs_open(uv_loop_t* loop, uv_fs_t* req, const char* path, - int flags, int mode, uv_fs_cb cb); - -UV_EXTERN int uv_fs_read(uv_loop_t* loop, uv_fs_t* req, uv_file file, - const uv_buf_t bufs[], unsigned int nbufs, int64_t offset, uv_fs_cb cb); - -UV_EXTERN int uv_fs_unlink(uv_loop_t* loop, uv_fs_t* req, const char* path, - uv_fs_cb cb); - -UV_EXTERN int uv_fs_write(uv_loop_t* loop, uv_fs_t* req, uv_file file, - const uv_buf_t bufs[], unsigned int nbufs, int64_t offset, uv_fs_cb cb); - -UV_EXTERN int uv_fs_mkdir(uv_loop_t* loop, uv_fs_t* req, const char* path, - int mode, uv_fs_cb cb); - -UV_EXTERN int uv_fs_mkdtemp(uv_loop_t* loop, uv_fs_t* req, const char* tpl, - uv_fs_cb cb); - -UV_EXTERN int uv_fs_rmdir(uv_loop_t* loop, uv_fs_t* req, const char* path, - uv_fs_cb cb); - -UV_EXTERN int uv_fs_readdir(uv_loop_t* loop, uv_fs_t* req, - const char* path, int flags, uv_fs_cb cb); - -UV_EXTERN int uv_fs_stat(uv_loop_t* loop, uv_fs_t* req, const char* path, - uv_fs_cb cb); - -UV_EXTERN int uv_fs_fstat(uv_loop_t* loop, uv_fs_t* req, uv_file file, - uv_fs_cb cb); - -UV_EXTERN int uv_fs_rename(uv_loop_t* loop, uv_fs_t* req, const char* path, - const char* new_path, uv_fs_cb cb); - -UV_EXTERN int uv_fs_fsync(uv_loop_t* loop, uv_fs_t* req, uv_file file, - uv_fs_cb cb); - -UV_EXTERN int uv_fs_fdatasync(uv_loop_t* loop, uv_fs_t* req, uv_file file, - uv_fs_cb cb); - -UV_EXTERN int uv_fs_ftruncate(uv_loop_t* loop, uv_fs_t* req, uv_file file, - int64_t offset, uv_fs_cb cb); - -UV_EXTERN int uv_fs_sendfile(uv_loop_t* loop, uv_fs_t* req, uv_file out_fd, - uv_file in_fd, int64_t in_offset, size_t length, uv_fs_cb cb); - -UV_EXTERN int uv_fs_chmod(uv_loop_t* loop, uv_fs_t* req, const char* path, - int mode, uv_fs_cb cb); - -UV_EXTERN int uv_fs_utime(uv_loop_t* loop, uv_fs_t* req, const char* path, - double atime, double mtime, uv_fs_cb cb); - -UV_EXTERN int uv_fs_futime(uv_loop_t* loop, uv_fs_t* req, uv_file file, - double atime, double mtime, uv_fs_cb cb); - -UV_EXTERN int uv_fs_lstat(uv_loop_t* loop, uv_fs_t* req, const char* path, - uv_fs_cb cb); - -UV_EXTERN int uv_fs_link(uv_loop_t* loop, uv_fs_t* req, const char* path, - const char* new_path, uv_fs_cb cb); +UV_EXTERN int uv_fs_close(uv_loop_t* loop, + uv_fs_t* req, + uv_file file, + uv_fs_cb cb); +UV_EXTERN int uv_fs_open(uv_loop_t* loop, + uv_fs_t* req, + const char* path, + int flags, + int mode, + uv_fs_cb cb); +UV_EXTERN int uv_fs_read(uv_loop_t* loop, + uv_fs_t* req, + uv_file file, + const uv_buf_t bufs[], + unsigned int nbufs, + int64_t offset, + uv_fs_cb cb); +UV_EXTERN int uv_fs_unlink(uv_loop_t* loop, + uv_fs_t* req, + const char* path, + uv_fs_cb cb); +UV_EXTERN int uv_fs_write(uv_loop_t* loop, + uv_fs_t* req, + uv_file file, + const uv_buf_t bufs[], + unsigned int nbufs, + int64_t offset, + uv_fs_cb cb); +UV_EXTERN int uv_fs_mkdir(uv_loop_t* loop, + uv_fs_t* req, + const char* path, + int mode, + uv_fs_cb cb); +UV_EXTERN int uv_fs_mkdtemp(uv_loop_t* loop, + uv_fs_t* req, + const char* tpl, + uv_fs_cb cb); +UV_EXTERN 
int uv_fs_rmdir(uv_loop_t* loop, + uv_fs_t* req, + const char* path, + uv_fs_cb cb); +UV_EXTERN int uv_fs_scandir(uv_loop_t* loop, + uv_fs_t* req, + const char* path, + int flags, + uv_fs_cb cb); +UV_EXTERN int uv_fs_scandir_next(uv_fs_t* req, + uv_dirent_t* ent); +UV_EXTERN int uv_fs_stat(uv_loop_t* loop, + uv_fs_t* req, + const char* path, + uv_fs_cb cb); +UV_EXTERN int uv_fs_fstat(uv_loop_t* loop, + uv_fs_t* req, + uv_file file, + uv_fs_cb cb); +UV_EXTERN int uv_fs_rename(uv_loop_t* loop, + uv_fs_t* req, + const char* path, + const char* new_path, + uv_fs_cb cb); +UV_EXTERN int uv_fs_fsync(uv_loop_t* loop, + uv_fs_t* req, + uv_file file, + uv_fs_cb cb); +UV_EXTERN int uv_fs_fdatasync(uv_loop_t* loop, + uv_fs_t* req, + uv_file file, + uv_fs_cb cb); +UV_EXTERN int uv_fs_ftruncate(uv_loop_t* loop, + uv_fs_t* req, + uv_file file, + int64_t offset, + uv_fs_cb cb); +UV_EXTERN int uv_fs_sendfile(uv_loop_t* loop, + uv_fs_t* req, + uv_file out_fd, + uv_file in_fd, + int64_t in_offset, + size_t length, + uv_fs_cb cb); +UV_EXTERN int uv_fs_access(uv_loop_t* loop, + uv_fs_t* req, + const char* path, + int mode, + uv_fs_cb cb); +UV_EXTERN int uv_fs_chmod(uv_loop_t* loop, + uv_fs_t* req, + const char* path, + int mode, + uv_fs_cb cb); +UV_EXTERN int uv_fs_utime(uv_loop_t* loop, + uv_fs_t* req, + const char* path, + double atime, + double mtime, + uv_fs_cb cb); +UV_EXTERN int uv_fs_futime(uv_loop_t* loop, + uv_fs_t* req, + uv_file file, + double atime, + double mtime, + uv_fs_cb cb); +UV_EXTERN int uv_fs_lstat(uv_loop_t* loop, + uv_fs_t* req, + const char* path, + uv_fs_cb cb); +UV_EXTERN int uv_fs_link(uv_loop_t* loop, + uv_fs_t* req, + const char* path, + const char* new_path, + uv_fs_cb cb); /* * This flag can be used with uv_fs_symlink() on Windows to specify whether @@ -1949,20 +1183,33 @@ UV_EXTERN int uv_fs_link(uv_loop_t* loop, uv_fs_t* req, const char* path, */ #define UV_FS_SYMLINK_JUNCTION 0x0002 -UV_EXTERN int uv_fs_symlink(uv_loop_t* loop, uv_fs_t* req, const char* path, - const char* new_path, int flags, uv_fs_cb cb); - -UV_EXTERN int uv_fs_readlink(uv_loop_t* loop, uv_fs_t* req, const char* path, - uv_fs_cb cb); - -UV_EXTERN int uv_fs_fchmod(uv_loop_t* loop, uv_fs_t* req, uv_file file, - int mode, uv_fs_cb cb); - -UV_EXTERN int uv_fs_chown(uv_loop_t* loop, uv_fs_t* req, const char* path, - uv_uid_t uid, uv_gid_t gid, uv_fs_cb cb); - -UV_EXTERN int uv_fs_fchown(uv_loop_t* loop, uv_fs_t* req, uv_file file, - uv_uid_t uid, uv_gid_t gid, uv_fs_cb cb); +UV_EXTERN int uv_fs_symlink(uv_loop_t* loop, + uv_fs_t* req, + const char* path, + const char* new_path, + int flags, + uv_fs_cb cb); +UV_EXTERN int uv_fs_readlink(uv_loop_t* loop, + uv_fs_t* req, + const char* path, + uv_fs_cb cb); +UV_EXTERN int uv_fs_fchmod(uv_loop_t* loop, + uv_fs_t* req, + uv_file file, + int mode, + uv_fs_cb cb); +UV_EXTERN int uv_fs_chown(uv_loop_t* loop, + uv_fs_t* req, + const char* path, + uv_uid_t uid, + uv_gid_t gid, + uv_fs_cb cb); +UV_EXTERN int uv_fs_fchown(uv_loop_t* loop, + uv_fs_t* req, + uv_file file, + uv_uid_t uid, + uv_gid_t gid, + uv_fs_cb cb); enum uv_fs_event { @@ -1989,77 +1236,14 @@ struct uv_fs_poll_s { }; UV_EXTERN int uv_fs_poll_init(uv_loop_t* loop, uv_fs_poll_t* handle); - -/* - * Check the file at `path` for changes every `interval` milliseconds. - * - * Your callback is invoked with `status < 0` if `path` does not exist - * or is inaccessible. The watcher is *not* stopped but your callback is - * not called again until something changes (e.g. 
when the file is created - * or the error reason changes). - * - * When `status == 0`, your callback receives pointers to the old and new - * `uv_stat_t` structs. They are valid for the duration of the callback - * only! - * - * For maximum portability, use multi-second intervals. Sub-second intervals - * will not detect all changes on many file systems. - */ UV_EXTERN int uv_fs_poll_start(uv_fs_poll_t* handle, uv_fs_poll_cb poll_cb, const char* path, unsigned int interval); - UV_EXTERN int uv_fs_poll_stop(uv_fs_poll_t* handle); - -/* - * Get the path being monitored by the handle. The buffer must be preallocated - * by the user. Returns 0 on success or an error code < 0 in case of failure. - * On sucess, `buf` will contain the path and `len` its length. If the buffer - * is not big enough UV_ENOBUFS will be returned and len will be set to the - * required size. - */ UV_EXTERN int uv_fs_poll_getpath(uv_fs_poll_t* handle, char* buf, size_t* len); -/* - * Unix signal handling on a per-event loop basis. The implementation is not - * ultra efficient so don't go creating a million event loops with a million - * signal watchers. - * - * Note to Linux users: SIGRT0 and SIGRT1 (signals 32 and 33) are used by the - * NPTL pthreads library to manage threads. Installing watchers for those - * signals will lead to unpredictable behavior and is strongly discouraged. - * Future versions of libuv may simply reject them. - * - * Reception of some signals is emulated on Windows: - * - * SIGINT is normally delivered when the user presses CTRL+C. However, like - * on Unix, it is not generated when terminal raw mode is enabled. - * - * SIGBREAK is delivered when the user pressed CTRL+BREAK. - * - * SIGHUP is generated when the user closes the console window. On SIGHUP the - * program is given approximately 10 seconds to perform cleanup. After that - * Windows will unconditionally terminate it. - * - * SIGWINCH is raised whenever libuv detects that the console has been - * resized. SIGWINCH is emulated by libuv when the program uses an uv_tty_t - * handle to write to the console. SIGWINCH may not always be delivered in a - * timely manner; libuv will only detect size changes when the cursor is - * being moved. When a readable uv_tty_handle is used in raw mode, resizing - * the console buffer will also trigger a SIGWINCH signal. - * - * Watchers for other signals can be successfully created, but these signals - * are never received. These signals are: SIGILL, SIGABRT, SIGFPE, SIGSEGV, - * SIGTERM and SIGKILL. - * - * Note that calls to raise() or abort() to programmatically raise a signal are - * not detected by libuv; these will not trigger a signal watcher. - * - * See uv_process_kill() and uv_kill() for information about support for sending - * signals. - */ struct uv_signal_s { UV_HANDLE_FIELDS uv_signal_cb signal_cb; @@ -2068,21 +1252,11 @@ struct uv_signal_s { }; UV_EXTERN int uv_signal_init(uv_loop_t* loop, uv_signal_t* handle); - UV_EXTERN int uv_signal_start(uv_signal_t* handle, uv_signal_cb signal_cb, int signum); - UV_EXTERN int uv_signal_stop(uv_signal_t* handle); - -/* - * Gets load average. - * - * See: http://en.wikipedia.org/wiki/Load_(computing) - * - * Returns [0,0,0] on Windows. 
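
The uv_loadavg() comment above is short but complete; a minimal sketch of the call it documents, with the Windows caveat it mentions:

    #include <stdio.h>
    #include "uv.h"

    int main(void) {
      double avg[3];
      uv_loadavg(avg);  /* 1-, 5- and 15-minute averages; [0,0,0] on Windows */
      printf("load: %.2f %.2f %.2f\n", avg[0], avg[1], avg[2]);
      return 0;
    }
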
- */ UV_EXTERN void uv_loadavg(double avg[3]); @@ -2118,118 +1292,48 @@ enum uv_fs_event_flags { UV_EXTERN int uv_fs_event_init(uv_loop_t* loop, uv_fs_event_t* handle); - UV_EXTERN int uv_fs_event_start(uv_fs_event_t* handle, uv_fs_event_cb cb, const char* path, unsigned int flags); - UV_EXTERN int uv_fs_event_stop(uv_fs_event_t* handle); - -/* - * Get the path being monitored by the handle. The buffer must be preallocated - * by the user. Returns 0 on success or an error code < 0 in case of failure. - * On sucess, `buf` will contain the path and `len` its length. If the buffer - * is not big enough UV_ENOBUFS will be returned and len will be set to the - * required size. - */ UV_EXTERN int uv_fs_event_getpath(uv_fs_event_t* handle, char* buf, size_t* len); - -/* Utilities. */ - -/* Convert string ip addresses to binary structures. */ UV_EXTERN int uv_ip4_addr(const char* ip, int port, struct sockaddr_in* addr); UV_EXTERN int uv_ip6_addr(const char* ip, int port, struct sockaddr_in6* addr); -/* Convert binary addresses to strings. */ UV_EXTERN int uv_ip4_name(const struct sockaddr_in* src, char* dst, size_t size); UV_EXTERN int uv_ip6_name(const struct sockaddr_in6* src, char* dst, size_t size); -/* - * Cross-platform IPv6-capable implementation of the 'standard' inet_ntop() and - * inet_pton() functions. On success they return 0. If an error the target of - * the `dst` pointer is unmodified. - */ UV_EXTERN int uv_inet_ntop(int af, const void* src, char* dst, size_t size); UV_EXTERN int uv_inet_pton(int af, const char* src, void* dst); -/* Gets the executable path. */ UV_EXTERN int uv_exepath(char* buffer, size_t* size); -/* Gets the current working directory. */ UV_EXTERN int uv_cwd(char* buffer, size_t* size); -/* Changes the current working directory. */ UV_EXTERN int uv_chdir(const char* dir); -/* Gets memory info in bytes. */ UV_EXTERN uint64_t uv_get_free_memory(void); UV_EXTERN uint64_t uv_get_total_memory(void); -/* - * Returns the current high-resolution real time. This is expressed in - * nanoseconds. It is relative to an arbitrary time in the past. It is not - * related to the time of day and therefore not subject to clock drift. The - * primary use is for measuring performance between intervals. - * - * Note not every platform can support nanosecond resolution; however, this - * value will always be in nanoseconds. - */ UV_EXTERN extern uint64_t uv_hrtime(void); - -/* - * Disables inheritance for file descriptors / handles that this process - * inherited from its parent. The effect is that child processes spawned by - * this process don't accidentally inherit these handles. - * - * It is recommended to call this function as early in your program as possible, - * before the inherited file descriptors can be closed or duplicated. - * - * Note that this function works on a best-effort basis: there is no guarantee - * that libuv can discover all file descriptors that were inherited. In general - * it does a better job on Windows than it does on Unix. - */ UV_EXTERN void uv_disable_stdio_inheritance(void); -/* - * Opens a shared library. The filename is in utf-8. Returns 0 on success and - * -1 on error. Call uv_dlerror(uv_lib_t*) to get the error message. - */ UV_EXTERN int uv_dlopen(const char* filename, uv_lib_t* lib); - -/* - * Close the shared library. - */ UV_EXTERN void uv_dlclose(uv_lib_t* lib); - -/* - * Retrieves a data pointer from a dynamic library. It is legal for a symbol to - * map to NULL. Returns 0 on success and -1 if the symbol was not found. 
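
The four dl* wrappers documented above compose in the usual dlopen/dlsym pattern. A minimal sketch; "plugin.so" and "plugin_init" are made-up names:

    #include <stdio.h>
    #include "uv.h"

    typedef int (*plugin_init_fn)(void);

    static int load_plugin(void) {
      uv_lib_t lib;
      plugin_init_fn init;

      if (uv_dlopen("plugin.so", &lib)) {           /* returns 0 or -1 */
        fprintf(stderr, "dlopen: %s\n", uv_dlerror(&lib));
        return -1;
      }

      if (uv_dlsym(&lib, "plugin_init", (void**) &init)) {
        fprintf(stderr, "dlsym: %s\n", uv_dlerror(&lib));
        uv_dlclose(&lib);
        return -1;
      }

      init();
      uv_dlclose(&lib);
      return 0;
    }
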
- */ UV_EXTERN int uv_dlsym(uv_lib_t* lib, const char* name, void** ptr); - -/* - * Returns the last uv_dlopen() or uv_dlsym() error message. - */ UV_EXTERN const char* uv_dlerror(const uv_lib_t* lib); -/* - * The mutex functions return 0 on success or an error code < 0 (unless the - * return type is void, of course). - */ UV_EXTERN int uv_mutex_init(uv_mutex_t* handle); UV_EXTERN void uv_mutex_destroy(uv_mutex_t* handle); UV_EXTERN void uv_mutex_lock(uv_mutex_t* handle); UV_EXTERN int uv_mutex_trylock(uv_mutex_t* handle); UV_EXTERN void uv_mutex_unlock(uv_mutex_t* handle); -/* - * Same goes for the read/write lock functions. - */ UV_EXTERN int uv_rwlock_init(uv_rwlock_t* rwlock); UV_EXTERN void uv_rwlock_destroy(uv_rwlock_t* rwlock); UV_EXTERN void uv_rwlock_rdlock(uv_rwlock_t* rwlock); @@ -2239,85 +1343,39 @@ UV_EXTERN void uv_rwlock_wrlock(uv_rwlock_t* rwlock); UV_EXTERN int uv_rwlock_trywrlock(uv_rwlock_t* rwlock); UV_EXTERN void uv_rwlock_wrunlock(uv_rwlock_t* rwlock); -/* - * Same goes for the semaphore functions. - */ UV_EXTERN int uv_sem_init(uv_sem_t* sem, unsigned int value); UV_EXTERN void uv_sem_destroy(uv_sem_t* sem); UV_EXTERN void uv_sem_post(uv_sem_t* sem); UV_EXTERN void uv_sem_wait(uv_sem_t* sem); UV_EXTERN int uv_sem_trywait(uv_sem_t* sem); -/* - * Same goes for the condition variable functions. - */ UV_EXTERN int uv_cond_init(uv_cond_t* cond); UV_EXTERN void uv_cond_destroy(uv_cond_t* cond); UV_EXTERN void uv_cond_signal(uv_cond_t* cond); UV_EXTERN void uv_cond_broadcast(uv_cond_t* cond); -/* - * Same goes for the barrier functions. Note that uv_barrier_wait() returns - * a value > 0 to an arbitrarily chosen "serializer" thread to facilitate - * cleanup, i.e.: - * - * if (uv_barrier_wait(&barrier) > 0) - * uv_barrier_destroy(&barrier); - */ UV_EXTERN int uv_barrier_init(uv_barrier_t* barrier, unsigned int count); UV_EXTERN void uv_barrier_destroy(uv_barrier_t* barrier); UV_EXTERN int uv_barrier_wait(uv_barrier_t* barrier); -/* - * Waits on a condition variable without a timeout. - * - * NOTE: - * 1. callers should be prepared to deal with spurious wakeups. - */ UV_EXTERN void uv_cond_wait(uv_cond_t* cond, uv_mutex_t* mutex); -/* - * Waits on a condition variable with a timeout in nano seconds. - * Returns 0 for success or UV_ETIMEDOUT on timeout, It aborts when other - * errors happen. - * - * NOTE: - * 1. callers should be prepared to deal with spurious wakeups. - * 2. the granularity of timeout on Windows is never less than one millisecond. - * 3. uv_cond_timedwait() takes a relative timeout, not an absolute time. - */ -UV_EXTERN int uv_cond_timedwait(uv_cond_t* cond, uv_mutex_t* mutex, - uint64_t timeout); +UV_EXTERN int uv_cond_timedwait(uv_cond_t* cond, + uv_mutex_t* mutex, + uint64_t timeout); -/* - * Runs a function once and only once. Concurrent calls to uv_once() with the - * same guard will block all callers except one (it's unspecified which one). - * The guard should be initialized statically with the UV_ONCE_INIT macro. - */ UV_EXTERN void uv_once(uv_once_t* guard, void (*callback)(void)); -/* - * Thread-local storage. These functions largely follow the semantics of - * pthread_key_create(), pthread_key_delete(), pthread_getspecific() and - * pthread_setspecific(). - * - * Note that the total thread-local storage size may be limited. - * That is, it may not be possible to create many TLS keys. 
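
uv_once() plus the TLS key functions above cover the common lazy-init pattern. A minimal sketch, assuming a hypothetical per-thread context pointer:

    #include <stdlib.h>
    #include "uv.h"

    static uv_once_t guard = UV_ONCE_INIT;  /* must be initialized statically */
    static uv_key_t context_key;

    static void init_once(void) {
      /* Runs exactly once even if many threads race into uv_once(). */
      if (uv_key_create(&context_key))
        abort();
    }

    static void thread_setup(void* context) {
      uv_once(&guard, init_once);
      uv_key_set(&context_key, context);  /* per-thread slot, akin to
                                             pthread_setspecific() */
    }
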
- */ UV_EXTERN int uv_key_create(uv_key_t* key); UV_EXTERN void uv_key_delete(uv_key_t* key); UV_EXTERN void* uv_key_get(uv_key_t* key); UV_EXTERN void uv_key_set(uv_key_t* key, void* value); -/* - * Callback that is invoked to initialize thread execution. - * - * `arg` is the same value that was passed to uv_thread_create(). - */ typedef void (*uv_thread_cb)(void* arg); UV_EXTERN int uv_thread_create(uv_thread_t* tid, uv_thread_cb entry, void* arg); -UV_EXTERN unsigned long uv_thread_self(void); +UV_EXTERN uv_thread_t uv_thread_self(void); UV_EXTERN int uv_thread_join(uv_thread_t *tid); +UV_EXTERN int uv_thread_equal(const uv_thread_t* t1, const uv_thread_t* t2); /* The presence of these unions force similar struct layout. */ #define XX(_, name) uv_ ## name ## _t name; diff --git a/deps/uv/m4/.gitignore b/deps/uv/m4/.gitignore index bde78f43f99..c44e4c2929a 100644 --- a/deps/uv/m4/.gitignore +++ b/deps/uv/m4/.gitignore @@ -1,2 +1,4 @@ # Ignore libtoolize-generated files. *.m4 +!as_case.m4 +!libuv-check-flags.m4 diff --git a/deps/uv/m4/as_case.m4 b/deps/uv/m4/as_case.m4 new file mode 100644 index 00000000000..c7ae0f0f5ed --- /dev/null +++ b/deps/uv/m4/as_case.m4 @@ -0,0 +1,21 @@ +# AS_CASE(WORD, [PATTERN1], [IF-MATCHED1]...[DEFAULT]) +# ---------------------------------------------------- +# Expand into +# | case WORD in +# | PATTERN1) IF-MATCHED1 ;; +# | ... +# | *) DEFAULT ;; +# | esac +m4_define([_AS_CASE], +[m4_if([$#], 0, [m4_fatal([$0: too few arguments: $#])], + [$#], 1, [ *) $1 ;;], + [$#], 2, [ $1) m4_default([$2], [:]) ;;], + [ $1) m4_default([$2], [:]) ;; +$0(m4_shiftn(2, $@))])dnl +]) +m4_defun([AS_CASE], +[m4_ifval([$2$3], +[case $1 in +_AS_CASE(m4_shift($@)) +esac])]) + diff --git a/deps/uv/m4/dtrace.m4 b/deps/uv/m4/dtrace.m4 new file mode 100644 index 00000000000..09f7dc89cf5 --- /dev/null +++ b/deps/uv/m4/dtrace.m4 @@ -0,0 +1,66 @@ +dnl Copyright (C) 2009 Sun Microsystems +dnl This file is free software; Sun Microsystems +dnl gives unlimited permission to copy and/or distribute it, +dnl with or without modifications, as long as this notice is preserved. + +dnl --------------------------------------------------------------------------- +dnl Macro: PANDORA_ENABLE_DTRACE +dnl --------------------------------------------------------------------------- +AC_DEFUN([PANDORA_ENABLE_DTRACE],[ + AC_ARG_ENABLE([dtrace], + [AS_HELP_STRING([--disable-dtrace], + [enable DTrace USDT probes. @<:@default=yes@:>@])], + [ac_cv_enable_dtrace="$enableval"], + [ac_cv_enable_dtrace="yes"]) + + AS_IF([test "$ac_cv_enable_dtrace" = "yes"],[ + AC_CHECK_PROGS([DTRACE], [dtrace]) + AS_IF([test "x$ac_cv_prog_DTRACE" = "xdtrace"],[ + + AC_CACHE_CHECK([if dtrace works],[ac_cv_dtrace_works],[ + cat >conftest.d <<_ACEOF +provider Example { + probe increment(int); +}; +_ACEOF + $DTRACE -h -o conftest.h -s conftest.d 2>/dev/zero + AS_IF([test $? 
-eq 0],[ac_cv_dtrace_works=yes], + [ac_cv_dtrace_works=no]) + rm -f conftest.h conftest.d + ]) + AS_IF([test "x$ac_cv_dtrace_works" = "xyes"],[ + AC_DEFINE([HAVE_DTRACE], [1], [Enables DTRACE Support]) + AC_CACHE_CHECK([if dtrace should instrument object files], + [ac_cv_dtrace_needs_objects],[ + dnl DTrace on MacOSX does not use -G option + cat >conftest.d <<_ACEOF +provider Example { + probe increment(int); +}; +_ACEOF + cat > conftest.c <<_ACEOF +#include "conftest.h" +void foo() { + EXAMPLE_INCREMENT(1); +} +_ACEOF + $DTRACE -h -o conftest.h -s conftest.d 2>/dev/zero + $CC -c -o conftest.o conftest.c + $DTRACE -G -o conftest.d.o -s conftest.d conftest.o 2>/dev/zero + AS_IF([test $? -eq 0],[ac_cv_dtrace_needs_objects=yes], + [ac_cv_dtrace_needs_objects=no]) + rm -f conftest.d.o conftest.d conftest.h conftest.o conftest.c + ]) + ]) + AC_SUBST(DTRACEFLAGS) dnl TODO: test for -G on OSX + ac_cv_have_dtrace=yes + ])]) + +AM_CONDITIONAL([HAVE_DTRACE], [test "x$ac_cv_dtrace_works" = "xyes"]) +AM_CONDITIONAL([DTRACE_NEEDS_OBJECTS], + [test "x$ac_cv_dtrace_needs_objects" = "xyes"]) + +]) +dnl --------------------------------------------------------------------------- +dnl End Macro: PANDORA_ENABLE_DTRACE +dnl --------------------------------------------------------------------------- diff --git a/deps/uv/m4/libuv-check-flags.m4 b/deps/uv/m4/libuv-check-flags.m4 new file mode 100644 index 00000000000..59c30635577 --- /dev/null +++ b/deps/uv/m4/libuv-check-flags.m4 @@ -0,0 +1,319 @@ +dnl Macros to check the presence of generic (non-typed) symbols. +dnl Copyright (c) 2006-2008 Diego Pettenà <flameeyes gmail com> +dnl Copyright (c) 2006-2008 xine project +dnl +dnl This program is free software; you can redistribute it and/or modify +dnl it under the terms of the GNU General Public License as published by +dnl the Free Software Foundation; either version 3, or (at your option) +dnl any later version. +dnl +dnl This program is distributed in the hope that it will be useful, +dnl but WITHOUT ANY WARRANTY; without even the implied warranty of +dnl MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the +dnl GNU General Public License for more details. +dnl +dnl You should have received a copy of the GNU General Public License +dnl along with this program; if not, write to the Free Software +dnl Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA +dnl 02110-1301, USA. +dnl +dnl As a special exception, the copyright owners of the +dnl macro gives unlimited permission to copy, distribute and modify the +dnl configure scripts that are the output of Autoconf when processing the +dnl Macro. You need not follow the terms of the GNU General Public +dnl License when using or distributing such scripts, even though portions +dnl of the text of the Macro appear in them. The GNU General Public +dnl License (GPL) does govern all other use of the material that +dnl constitutes the Autoconf Macro. +dnl +dnl This special exception to the GPL applies to versions of the +dnl Autoconf Macro released by this project. When you make and +dnl distribute a modified version of the Autoconf Macro, you may extend +dnl this special exception to the GPL to apply to your modified version as +dnl well. 
+ +dnl Check if the flag is supported by compiler +dnl CC_CHECK_CFLAGS_SILENT([FLAG], [ACTION-IF-FOUND],[ACTION-IF-NOT-FOUND]) + +AC_DEFUN([CC_CHECK_CFLAGS_SILENT], [ + AC_CACHE_VAL(AS_TR_SH([cc_cv_cflags_$1]), + [ac_save_CFLAGS="$CFLAGS" + CFLAGS="$CFLAGS $1" + AC_COMPILE_IFELSE([AC_LANG_SOURCE([int a;])], + [eval "AS_TR_SH([cc_cv_cflags_$1])='yes'"], + [eval "AS_TR_SH([cc_cv_cflags_$1])='no'"]) + CFLAGS="$ac_save_CFLAGS" + ]) + + AS_IF([eval test x$]AS_TR_SH([cc_cv_cflags_$1])[ = xyes], + [$2], [$3]) +]) + +dnl Check if the flag is supported by compiler (cacheable) +dnl CC_CHECK_CFLAGS([FLAG], [ACTION-IF-FOUND],[ACTION-IF-NOT-FOUND]) + +AC_DEFUN([CC_CHECK_CFLAGS], [ + AC_CACHE_CHECK([if $CC supports $1 flag], + AS_TR_SH([cc_cv_cflags_$1]), + CC_CHECK_CFLAGS_SILENT([$1]) dnl Don't execute actions here! + ) + + AS_IF([eval test x$]AS_TR_SH([cc_cv_cflags_$1])[ = xyes], + [$2], [$3]) +]) + +dnl CC_CHECK_CFLAG_APPEND(FLAG, [action-if-found], [action-if-not-found]) +dnl Check for CFLAG and appends them to CFLAGS if supported +AC_DEFUN([CC_CHECK_CFLAG_APPEND], [ + AC_CACHE_CHECK([if $CC supports $1 flag], + AS_TR_SH([cc_cv_cflags_$1]), + CC_CHECK_CFLAGS_SILENT([$1]) dnl Don't execute actions here! + ) + + AS_IF([eval test x$]AS_TR_SH([cc_cv_cflags_$1])[ = xyes], + [CFLAGS="$CFLAGS $1"; DEBUG_CFLAGS="$DEBUG_CFLAGS $1"; $2], [$3]) +]) + +dnl CC_CHECK_CFLAGS_APPEND([FLAG1 FLAG2], [action-if-found], [action-if-not]) +AC_DEFUN([CC_CHECK_CFLAGS_APPEND], [ + for flag in $1; do + CC_CHECK_CFLAG_APPEND($flag, [$2], [$3]) + done +]) + +dnl Check if the flag is supported by linker (cacheable) +dnl CC_CHECK_LDFLAGS([FLAG], [ACTION-IF-FOUND],[ACTION-IF-NOT-FOUND]) + +AC_DEFUN([CC_CHECK_LDFLAGS], [ + AC_CACHE_CHECK([if $CC supports $1 flag], + AS_TR_SH([cc_cv_ldflags_$1]), + [ac_save_LDFLAGS="$LDFLAGS" + LDFLAGS="$LDFLAGS $1" + AC_LANG_PUSH([C]) + AC_LINK_IFELSE([AC_LANG_SOURCE([int main() { return 1; }])], + [eval "AS_TR_SH([cc_cv_ldflags_$1])='yes'"], + [eval "AS_TR_SH([cc_cv_ldflags_$1])="]) + AC_LANG_POP([C]) + LDFLAGS="$ac_save_LDFLAGS" + ]) + + AS_IF([eval test x$]AS_TR_SH([cc_cv_ldflags_$1])[ = xyes], + [$2], [$3]) +]) + +dnl define the LDFLAGS_NOUNDEFINED variable with the correct value for +dnl the current linker to avoid undefined references in a shared object. +AC_DEFUN([CC_NOUNDEFINED], [ + dnl We check $host for which systems to enable this for. + AC_REQUIRE([AC_CANONICAL_HOST]) + + case $host in + dnl FreeBSD (et al.) does not complete linking for shared objects when pthreads + dnl are requested, as different implementations are present; to avoid problems + dnl use -Wl,-z,defs only for those platform not behaving this way. + *-freebsd* | *-openbsd*) ;; + *) + dnl First of all check for the --no-undefined variant of GNU ld. This allows + dnl for a much more readable commandline, so that people can understand what + dnl it does without going to look for what the heck -z defs does. + for possible_flags in "-Wl,--no-undefined" "-Wl,-z,defs"; do + CC_CHECK_LDFLAGS([$possible_flags], [LDFLAGS_NOUNDEFINED="$possible_flags"]) + break + done + ;; + esac + + AC_SUBST([LDFLAGS_NOUNDEFINED]) +]) + +dnl Check for a -Werror flag or equivalent. -Werror is the GCC +dnl and ICC flag that tells the compiler to treat all the warnings +dnl as fatal. We usually need this option to make sure that some +dnl constructs (like attributes) are not simply ignored. 
+dnl +dnl Other compilers don't support -Werror per se, but they support +dnl an equivalent flag: +dnl - Sun Studio compiler supports -errwarn=%all +AC_DEFUN([CC_CHECK_WERROR], [ + AC_CACHE_CHECK( + [for $CC way to treat warnings as errors], + [cc_cv_werror], + [CC_CHECK_CFLAGS_SILENT([-Werror], [cc_cv_werror=-Werror], + [CC_CHECK_CFLAGS_SILENT([-errwarn=%all], [cc_cv_werror=-errwarn=%all])]) + ]) +]) + +AC_DEFUN([CC_CHECK_ATTRIBUTE], [ + AC_REQUIRE([CC_CHECK_WERROR]) + AC_CACHE_CHECK([if $CC supports __attribute__(( ifelse([$2], , [$1], [$2]) ))], + AS_TR_SH([cc_cv_attribute_$1]), + [ac_save_CFLAGS="$CFLAGS" + CFLAGS="$CFLAGS $cc_cv_werror" + AC_LANG_PUSH([C]) + AC_COMPILE_IFELSE([AC_LANG_SOURCE([$3])], + [eval "AS_TR_SH([cc_cv_attribute_$1])='yes'"], + [eval "AS_TR_SH([cc_cv_attribute_$1])='no'"]) + AC_LANG_POP([C]) + CFLAGS="$ac_save_CFLAGS" + ]) + + AS_IF([eval test x$]AS_TR_SH([cc_cv_attribute_$1])[ = xyes], + [AC_DEFINE( + AS_TR_CPP([SUPPORT_ATTRIBUTE_$1]), 1, + [Define this if the compiler supports __attribute__(( ifelse([$2], , [$1], [$2]) ))] + ) + $4], + [$5]) +]) + +AC_DEFUN([CC_ATTRIBUTE_CONSTRUCTOR], [ + CC_CHECK_ATTRIBUTE( + [constructor],, + [void __attribute__((constructor)) ctor() { int a; }], + [$1], [$2]) +]) + +AC_DEFUN([CC_ATTRIBUTE_FORMAT], [ + CC_CHECK_ATTRIBUTE( + [format], [format(printf, n, n)], + [void __attribute__((format(printf, 1, 2))) printflike(const char *fmt, ...) { fmt = (void *)0; }], + [$1], [$2]) +]) + +AC_DEFUN([CC_ATTRIBUTE_FORMAT_ARG], [ + CC_CHECK_ATTRIBUTE( + [format_arg], [format_arg(printf)], + [char *__attribute__((format_arg(1))) gettextlike(const char *fmt) { fmt = (void *)0; }], + [$1], [$2]) +]) + +AC_DEFUN([CC_ATTRIBUTE_VISIBILITY], [ + CC_CHECK_ATTRIBUTE( + [visibility_$1], [visibility("$1")], + [void __attribute__((visibility("$1"))) $1_function() { }], + [$2], [$3]) +]) + +AC_DEFUN([CC_ATTRIBUTE_NONNULL], [ + CC_CHECK_ATTRIBUTE( + [nonnull], [nonnull()], + [void __attribute__((nonnull())) some_function(void *foo, void *bar) { foo = (void*)0; bar = (void*)0; }], + [$1], [$2]) +]) + +AC_DEFUN([CC_ATTRIBUTE_UNUSED], [ + CC_CHECK_ATTRIBUTE( + [unused], , + [void some_function(void *foo, __attribute__((unused)) void *bar);], + [$1], [$2]) +]) + +AC_DEFUN([CC_ATTRIBUTE_SENTINEL], [ + CC_CHECK_ATTRIBUTE( + [sentinel], , + [void some_function(void *foo, ...) __attribute__((sentinel));], + [$1], [$2]) +]) + +AC_DEFUN([CC_ATTRIBUTE_DEPRECATED], [ + CC_CHECK_ATTRIBUTE( + [deprecated], , + [void some_function(void *foo, ...) 
__attribute__((deprecated));], + [$1], [$2]) +]) + +AC_DEFUN([CC_ATTRIBUTE_ALIAS], [ + CC_CHECK_ATTRIBUTE( + [alias], [weak, alias], + [void other_function(void *foo) { } + void some_function(void *foo) __attribute__((weak, alias("other_function")));], + [$1], [$2]) +]) + +AC_DEFUN([CC_ATTRIBUTE_MALLOC], [ + CC_CHECK_ATTRIBUTE( + [malloc], , + [void * __attribute__((malloc)) my_alloc(int n);], + [$1], [$2]) +]) + +AC_DEFUN([CC_ATTRIBUTE_PACKED], [ + CC_CHECK_ATTRIBUTE( + [packed], , + [struct astructure { char a; int b; long c; void *d; } __attribute__((packed));], + [$1], [$2]) +]) + +AC_DEFUN([CC_ATTRIBUTE_CONST], [ + CC_CHECK_ATTRIBUTE( + [const], , + [int __attribute__((const)) twopow(int n) { return 1 << n; } ], + [$1], [$2]) +]) + +AC_DEFUN([CC_FLAG_VISIBILITY], [ + AC_REQUIRE([CC_CHECK_WERROR]) + AC_CACHE_CHECK([if $CC supports -fvisibility=hidden], + [cc_cv_flag_visibility], + [cc_flag_visibility_save_CFLAGS="$CFLAGS" + CFLAGS="$CFLAGS $cc_cv_werror" + CC_CHECK_CFLAGS_SILENT([-fvisibility=hidden], + cc_cv_flag_visibility='yes', + cc_cv_flag_visibility='no') + CFLAGS="$cc_flag_visibility_save_CFLAGS"]) + + AS_IF([test "x$cc_cv_flag_visibility" = "xyes"], + [AC_DEFINE([SUPPORT_FLAG_VISIBILITY], 1, + [Define this if the compiler supports the -fvisibility flag]) + $1], + [$2]) +]) + +AC_DEFUN([CC_FUNC_EXPECT], [ + AC_REQUIRE([CC_CHECK_WERROR]) + AC_CACHE_CHECK([if compiler has __builtin_expect function], + [cc_cv_func_expect], + [ac_save_CFLAGS="$CFLAGS" + CFLAGS="$CFLAGS $cc_cv_werror" + AC_LANG_PUSH([C]) + AC_COMPILE_IFELSE([AC_LANG_SOURCE( + [int some_function() { + int a = 3; + return (int)__builtin_expect(a, 3); + }])], + [cc_cv_func_expect=yes], + [cc_cv_func_expect=no]) + AC_LANG_POP([C]) + CFLAGS="$ac_save_CFLAGS" + ]) + + AS_IF([test "x$cc_cv_func_expect" = "xyes"], + [AC_DEFINE([SUPPORT__BUILTIN_EXPECT], 1, + [Define this if the compiler supports __builtin_expect() function]) + $1], + [$2]) +]) + +AC_DEFUN([CC_ATTRIBUTE_ALIGNED], [ + AC_REQUIRE([CC_CHECK_WERROR]) + AC_CACHE_CHECK([highest __attribute__ ((aligned ())) supported], + [cc_cv_attribute_aligned], + [ac_save_CFLAGS="$CFLAGS" + CFLAGS="$CFLAGS $cc_cv_werror" + AC_LANG_PUSH([C]) + for cc_attribute_align_try in 64 32 16 8 4 2; do + AC_COMPILE_IFELSE([AC_LANG_SOURCE([ + int main() { + static char c __attribute__ ((aligned($cc_attribute_align_try))) = 0; + return c; + }])], [cc_cv_attribute_aligned=$cc_attribute_align_try; break]) + done + AC_LANG_POP([C]) + CFLAGS="$ac_save_CFLAGS" + ]) + + if test "x$cc_cv_attribute_aligned" != "x"; then + AC_DEFINE_UNQUOTED([ATTRIBUTE_ALIGNED_MAX], [$cc_cv_attribute_aligned], + [Define the highest alignment supported]) + fi +]) \ No newline at end of file diff --git a/deps/uv/src/fs-poll.c b/deps/uv/src/fs-poll.c index 871228fc8de..0de15b1739f 100644 --- a/deps/uv/src/fs-poll.c +++ b/deps/uv/src/fs-poll.c @@ -60,6 +60,7 @@ int uv_fs_poll_start(uv_fs_poll_t* handle, struct poll_ctx* ctx; uv_loop_t* loop; size_t len; + int err; if (uv__is_active(handle)) return 0; @@ -78,19 +79,25 @@ int uv_fs_poll_start(uv_fs_poll_t* handle, ctx->parent_handle = handle; memcpy(ctx->path, path, len + 1); - if (uv_timer_init(loop, &ctx->timer_handle)) - abort(); + err = uv_timer_init(loop, &ctx->timer_handle); + if (err < 0) + goto error; ctx->timer_handle.flags |= UV__HANDLE_INTERNAL; uv__handle_unref(&ctx->timer_handle); - if (uv_fs_stat(loop, &ctx->fs_req, ctx->path, poll_cb)) - abort(); + err = uv_fs_stat(loop, &ctx->fs_req, ctx->path, poll_cb); + if (err < 0) + goto error; handle->poll_ctx = ctx; 
uv__handle_start(handle); return 0; + +error: + free(ctx); + return err; } diff --git a/deps/uv/src/unix/aix.c b/deps/uv/src/unix/aix.c index eb901113451..349c2b558e4 100644 --- a/deps/uv/src/unix/aix.c +++ b/deps/uv/src/unix/aix.c @@ -151,7 +151,7 @@ void uv__io_poll(uv_loop_t* loop, int timeout) { * Could maybe mod if we knew for sure no events are removed, but * content of w->events is handled above as not reliable (falls back) * so may require a pollset_query() which would have to be pretty cheap - * compared to a PS_DELETE to be worth optimising. Alternatively, could + * compared to a PS_DELETE to be worth optimizing. Alternatively, could * lazily remove events, squelching them in the mean time. */ pc.cmd = PS_DELETE; if (pollset_ctl(loop->backend_fd, &pc, 1)) { @@ -332,7 +332,7 @@ int uv_exepath(char* buffer, size_t* size) { res = readlink(symlink, temp_buffer, PATH_MAX-1); /* if readlink fails, it is a normal file just copy symlink to the - * outbut buffer. + * output buffer. */ if (res < 0) { assert(*size > strlen(symlink)); diff --git a/deps/uv/src/unix/core.c b/deps/uv/src/unix/core.c index 4770d8d8c68..c08040e5378 100644 --- a/deps/uv/src/unix/core.c +++ b/deps/uv/src/unix/core.c @@ -163,6 +163,33 @@ void uv_close(uv_handle_t* handle, uv_close_cb close_cb) { uv__make_close_pending(handle); } +int uv__socket_sockopt(uv_handle_t* handle, int optname, int* value) { + int r; + int fd; + socklen_t len; + + if (handle == NULL || value == NULL) + return -EINVAL; + + if (handle->type == UV_TCP || handle->type == UV_NAMED_PIPE) + fd = uv__stream_fd((uv_stream_t*) handle); + else if (handle->type == UV_UDP) + fd = ((uv_udp_t *) handle)->io_watcher.fd; + else + return -ENOTSUP; + + len = sizeof(*value); + + if (*value == 0) + r = getsockopt(fd, SOL_SOCKET, optname, value, &len); + else + r = setsockopt(fd, SOL_SOCKET, optname, (const void*) value, len); + + if (r < 0) + return -errno; + + return 0; +} void uv__make_close_pending(uv_handle_t* handle) { assert(handle->flags & UV_CLOSING); @@ -283,8 +310,6 @@ int uv_run(uv_loop_t* loop, uv_run_mode mode) { uv__update_time(loop); while (r != 0 && loop->stop_flag == 0) { - UV_TICK_START(loop, mode); - uv__update_time(loop); uv__run_timers(loop); uv__run_pending(loop); @@ -300,7 +325,7 @@ int uv_run(uv_loop_t* loop, uv_run_mode mode) { uv__run_closing_handles(loop); if (mode == UV_RUN_ONCE) { - /* UV_RUN_ONCE implies forward progess: at least one callback must have + /* UV_RUN_ONCE implies forward progress: at least one callback must have * been invoked when it returns. uv__io_poll() can return without doing * I/O (meaning: no callbacks) when its timeout expires - which means we * have pending timers that satisfy the forward progress constraint. 
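
The context comment above states the UV_RUN_ONCE contract: every return implies forward progress, i.e. at least one callback ran. A minimal sketch of driving a loop in that mode, assuming uv_default_loop():

    #include "uv.h"

    int main(void) {
      uv_loop_t* loop = uv_default_loop();

      /* Each iteration blocks until it has made forward progress; uv_run()
       * returns non-zero while active handles or requests remain. */
      while (uv_run(loop, UV_RUN_ONCE))
        ;

      return 0;
    }
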
@@ -313,7 +338,6 @@ int uv_run(uv_loop_t* loop, uv_run_mode mode) { } r = uv__loop_alive(loop); - UV_TICK_STOP(loop, mode); if (mode & (UV_RUN_ONCE | UV_RUN_NOWAIT)) break; @@ -610,7 +634,7 @@ int uv_cwd(char* buffer, size_t* size) { if (getcwd(buffer, *size) == NULL) return -errno; - *size = strlen(buffer) + 1; + *size = strlen(buffer); return 0; } @@ -635,6 +659,36 @@ void uv_disable_stdio_inheritance(void) { } +int uv_fileno(const uv_handle_t* handle, uv_os_fd_t* fd) { + int fd_out; + + switch (handle->type) { + case UV_TCP: + case UV_NAMED_PIPE: + case UV_TTY: + fd_out = uv__stream_fd((uv_stream_t*) handle); + break; + + case UV_UDP: + fd_out = ((uv_udp_t *) handle)->io_watcher.fd; + break; + + case UV_POLL: + fd_out = ((uv_poll_t *) handle)->io_watcher.fd; + break; + + default: + return -EINVAL; + } + + if (uv__is_closing(handle) || fd_out == -1) + return -EBADF; + + *fd = fd_out; + return 0; +} + + static void uv__run_pending(uv_loop_t* loop) { QUEUE* q; uv__io_t* w; diff --git a/deps/uv/src/unix/darwin-proctitle.c b/deps/uv/src/unix/darwin-proctitle.c index 8cd358bcf01..b7267caa99d 100644 --- a/deps/uv/src/unix/darwin-proctitle.c +++ b/deps/uv/src/unix/darwin-proctitle.c @@ -36,7 +36,7 @@ static int uv__pthread_setname_np(const char* name) { int err; /* pthread_setname_np() first appeared in OS X 10.6 and iOS 3.2. */ - dynamic_pthread_setname_np = dlsym(RTLD_DEFAULT, "pthread_setname_np"); + *(void **)(&dynamic_pthread_setname_np) = dlsym(RTLD_DEFAULT, "pthread_setname_np"); if (dynamic_pthread_setname_np == NULL) return -ENOSYS; @@ -94,13 +94,13 @@ int uv__set_process_title(const char* title) { if (application_services_handle == NULL || core_foundation_handle == NULL) goto out; - pCFStringCreateWithCString = + *(void **)(&pCFStringCreateWithCString) = dlsym(core_foundation_handle, "CFStringCreateWithCString"); - pCFBundleGetBundleWithIdentifier = + *(void **)(&pCFBundleGetBundleWithIdentifier) = dlsym(core_foundation_handle, "CFBundleGetBundleWithIdentifier"); - pCFBundleGetDataPointerForName = + *(void **)(&pCFBundleGetDataPointerForName) = dlsym(core_foundation_handle, "CFBundleGetDataPointerForName"); - pCFBundleGetFunctionPointerForName = + *(void **)(&pCFBundleGetFunctionPointerForName) = dlsym(core_foundation_handle, "CFBundleGetFunctionPointerForName"); if (pCFStringCreateWithCString == NULL || @@ -118,14 +118,14 @@ int uv__set_process_title(const char* title) { if (launch_services_bundle == NULL) goto out; - pLSGetCurrentApplicationASN = + *(void **)(&pLSGetCurrentApplicationASN) = pCFBundleGetFunctionPointerForName(launch_services_bundle, S("_LSGetCurrentApplicationASN")); if (pLSGetCurrentApplicationASN == NULL) goto out; - pLSSetApplicationInformationItem = + *(void **)(&pLSSetApplicationInformationItem) = pCFBundleGetFunctionPointerForName(launch_services_bundle, S("_LSSetApplicationInformationItem")); @@ -138,9 +138,9 @@ int uv__set_process_title(const char* title) { if (display_name_key == NULL || *display_name_key == NULL) goto out; - pCFBundleGetInfoDictionary = dlsym(core_foundation_handle, + *(void **)(&pCFBundleGetInfoDictionary) = dlsym(core_foundation_handle, "CFBundleGetInfoDictionary"); - pCFBundleGetMainBundle = dlsym(core_foundation_handle, + *(void **)(&pCFBundleGetMainBundle) = dlsym(core_foundation_handle, "CFBundleGetMainBundle"); if (pCFBundleGetInfoDictionary == NULL || pCFBundleGetMainBundle == NULL) goto out; @@ -152,13 +152,13 @@ int uv__set_process_title(const char* title) { if (hi_services_bundle == NULL) goto out; - pSetApplicationIsDaemon = 
pCFBundleGetFunctionPointerForName( + *(void **)(&pSetApplicationIsDaemon) = pCFBundleGetFunctionPointerForName( hi_services_bundle, S("SetApplicationIsDaemon")); - pLSApplicationCheckIn = pCFBundleGetFunctionPointerForName( + *(void **)(&pLSApplicationCheckIn) = pCFBundleGetFunctionPointerForName( launch_services_bundle, S("_LSApplicationCheckIn")); - pLSSetApplicationLaunchServicesServerConnectionStatus = + *(void **)(&pLSSetApplicationLaunchServicesServerConnectionStatus) = pCFBundleGetFunctionPointerForName( launch_services_bundle, S("_LSSetApplicationLaunchServicesServerConnectionStatus")); diff --git a/deps/uv/src/unix/fs.c b/deps/uv/src/unix/fs.c index 47f667229db..65fd01230b3 100644 --- a/deps/uv/src/unix/fs.c +++ b/deps/uv/src/unix/fs.c @@ -38,7 +38,6 @@ #include <sys/stat.h> #include <sys/time.h> #include <pthread.h> -#include <dirent.h> #include <unistd.h> #include <fcntl.h> #include <utime.h> @@ -296,64 +295,47 @@ static ssize_t uv__fs_read(uv_fs_t* req) { #if defined(__OpenBSD__) || (defined(__APPLE__) && !defined(MAC_OS_X_VERSION_10_8)) -static int uv__fs_readdir_filter(struct dirent* dent) { +static int uv__fs_scandir_filter(uv__dirent_t* dent) { #else -static int uv__fs_readdir_filter(const struct dirent* dent) { +static int uv__fs_scandir_filter(const uv__dirent_t* dent) { #endif return strcmp(dent->d_name, ".") != 0 && strcmp(dent->d_name, "..") != 0; } -/* This should have been called uv__fs_scandir(). */ -static ssize_t uv__fs_readdir(uv_fs_t* req) { - struct dirent **dents; +static ssize_t uv__fs_scandir(uv_fs_t* req) { + uv__dirent_t **dents; int saved_errno; - size_t off; - size_t len; - char *buf; - int i; int n; dents = NULL; - n = scandir(req->path, &dents, uv__fs_readdir_filter, alphasort); + n = scandir(req->path, &dents, uv__fs_scandir_filter, alphasort); + + /* NOTE: We will use nbufs as an index field */ + req->nbufs = 0; if (n == 0) goto out; /* osx still needs to deallocate some memory */ else if (n == -1) return n; - len = 0; - - for (i = 0; i < n; i++) - len += strlen(dents[i]->d_name) + 1; - - buf = malloc(len); + req->ptr = dents; - if (buf == NULL) { - errno = ENOMEM; - n = -1; - goto out; - } - - off = 0; - - for (i = 0; i < n; i++) { - len = strlen(dents[i]->d_name) + 1; - memcpy(buf + off, dents[i]->d_name, len); - off += len; - } - - req->ptr = buf; + return n; out: saved_errno = errno; if (dents != NULL) { + int i; + for (i = 0; i < n; i++) free(dents[i]); free(dents); } errno = saved_errno; + req->ptr = NULL; + return n; } @@ -781,6 +763,7 @@ static void uv__fs_work(struct uv__work* w) { break; switch (req->fs_type) { + X(ACCESS, access(req->path, req->flags)); X(CHMOD, chmod(req->path, req->mode)); X(CHOWN, chown(req->path, req->uid, req->gid)); X(CLOSE, close(req->file)); @@ -796,7 +779,7 @@ static void uv__fs_work(struct uv__work* w) { X(MKDIR, mkdir(req->path, req->mode)); X(MKDTEMP, uv__fs_mkdtemp(req)); X(READ, uv__fs_read(req)); - X(READDIR, uv__fs_readdir(req)); + X(SCANDIR, uv__fs_scandir(req)); X(READLINK, uv__fs_readlink(req)); X(RENAME, rename(req->path, req->new_path)); X(RMDIR, rmdir(req->path)); @@ -871,6 +854,18 @@ static void uv__fs_done(struct uv__work* w, int status) { } +int uv_fs_access(uv_loop_t* loop, + uv_fs_t* req, + const char* path, + int flags, + uv_fs_cb cb) { + INIT(ACCESS); + PATH; + req->flags = flags; + POST; +} + + int uv_fs_chmod(uv_loop_t* loop, uv_fs_t* req, const char* path, @@ -1057,12 +1052,12 @@ int uv_fs_read(uv_loop_t* loop, uv_fs_t* req, } -int uv_fs_readdir(uv_loop_t* loop, +int 
uv_fs_scandir(uv_loop_t* loop, uv_fs_t* req, const char* path, int flags, uv_fs_cb cb) { - INIT(READDIR); + INIT(SCANDIR); PATH; req->flags = flags; POST; @@ -1184,6 +1179,9 @@ void uv_fs_req_cleanup(uv_fs_t* req) { req->path = NULL; req->new_path = NULL; + if (req->fs_type == UV_FS_SCANDIR && req->ptr != NULL) + uv__fs_scandir_cleanup(req); + if (req->ptr != &req->statbuf) free(req->ptr); req->ptr = NULL; diff --git a/deps/uv/src/unix/fsevents.c b/deps/uv/src/unix/fsevents.c index 1c7896d8d7c..49085306b90 100644 --- a/deps/uv/src/unix/fsevents.c +++ b/deps/uv/src/unix/fsevents.c @@ -525,7 +525,7 @@ static int uv__fsevents_global_init(void) { err = -ENOENT; #define V(handle, symbol) \ do { \ - p ## symbol = dlsym((handle), #symbol); \ + *(void **)(&p ## symbol) = dlsym((handle), #symbol); \ if (p ## symbol == NULL) \ goto out; \ } \ diff --git a/deps/uv/src/unix/getaddrinfo.c b/deps/uv/src/unix/getaddrinfo.c index 1db00680d1a..faf9add9285 100644 --- a/deps/uv/src/unix/getaddrinfo.c +++ b/deps/uv/src/unix/getaddrinfo.c @@ -18,6 +18,13 @@ * IN THE SOFTWARE. */ +/* Expose glibc-specific EAI_* error codes. Needs to be defined before we + * include any headers. + */ +#ifndef _GNU_SOURCE +# define _GNU_SOURCE +#endif + #include "uv.h" #include "internal.h" @@ -26,6 +33,66 @@ #include <stdlib.h> #include <string.h> +/* EAI_* constants. */ +#include <netdb.h> + + +int uv__getaddrinfo_translate_error(int sys_err) { + switch (sys_err) { + case 0: return 0; +#if defined(EAI_ADDRFAMILY) + case EAI_ADDRFAMILY: return UV_EAI_ADDRFAMILY; +#endif +#if defined(EAI_AGAIN) + case EAI_AGAIN: return UV_EAI_AGAIN; +#endif +#if defined(EAI_BADFLAGS) + case EAI_BADFLAGS: return UV_EAI_BADFLAGS; +#endif +#if defined(EAI_BADHINTS) + case EAI_BADHINTS: return UV_EAI_BADHINTS; +#endif +#if defined(EAI_CANCELED) + case EAI_CANCELED: return UV_EAI_CANCELED; +#endif +#if defined(EAI_FAIL) + case EAI_FAIL: return UV_EAI_FAIL; +#endif +#if defined(EAI_FAMILY) + case EAI_FAMILY: return UV_EAI_FAMILY; +#endif +#if defined(EAI_MEMORY) + case EAI_MEMORY: return UV_EAI_MEMORY; +#endif +#if defined(EAI_NODATA) + case EAI_NODATA: return UV_EAI_NODATA; +#endif +#if defined(EAI_NONAME) +# if !defined(EAI_NODATA) || EAI_NODATA != EAI_NONAME + case EAI_NONAME: return UV_EAI_NONAME; +# endif +#endif +#if defined(EAI_OVERFLOW) + case EAI_OVERFLOW: return UV_EAI_OVERFLOW; +#endif +#if defined(EAI_PROTOCOL) + case EAI_PROTOCOL: return UV_EAI_PROTOCOL; +#endif +#if defined(EAI_SERVICE) + case EAI_SERVICE: return UV_EAI_SERVICE; +#endif +#if defined(EAI_SOCKTYPE) + case EAI_SOCKTYPE: return UV_EAI_SOCKTYPE; +#endif +#if defined(EAI_SYSTEM) + case EAI_SYSTEM: return -errno; +#endif + } + assert(!"unknown EAI_* error code"); + abort(); + return 0; /* Pacify compiler. */ +} + static void uv__getaddrinfo_work(struct uv__work* w) { uv_getaddrinfo_t* req; @@ -115,10 +182,8 @@ int uv_getaddrinfo(uv_loop_t* loop, len += service_len; } - if (hostname) { + if (hostname) req->hostname = memcpy(buf + len, hostname, hostname_len); - len += hostname_len; - } uv__work_submit(loop, &req->work_req, diff --git a/deps/uv/src/unix/internal.h b/deps/uv/src/unix/internal.h index 114cb696ee8..daad61b782f 100644 --- a/deps/uv/src/unix/internal.h +++ b/deps/uv/src/unix/internal.h @@ -143,7 +143,12 @@ enum { UV_TCP_NODELAY = 0x400, /* Disable Nagle. */ UV_TCP_KEEPALIVE = 0x800, /* Turn on keep-alive. */ UV_TCP_SINGLE_ACCEPT = 0x1000, /* Only accept() when idle. */ - UV_HANDLE_IPV6 = 0x2000 /* Handle is bound to a IPv6 socket. 
*/ + UV_HANDLE_IPV6 = 0x10000 /* Handle is bound to a IPv6 socket. */ +}; + +/* loop flags */ +enum { + UV_LOOP_BLOCK_SIGPROF = 1 }; typedef enum { @@ -306,12 +311,4 @@ UV_UNUSED(static char* uv__basename_r(const char* path)) { return s + 1; } - -#ifdef HAVE_DTRACE -#include "uv-dtrace.h" -#else -#define UV_TICK_START(arg0, arg1) -#define UV_TICK_STOP(arg0, arg1) -#endif - #endif /* UV_UNIX_INTERNAL_H_ */ diff --git a/deps/uv/src/unix/kqueue.c b/deps/uv/src/unix/kqueue.c index b4f9f5d8405..aaadcd8419a 100644 --- a/deps/uv/src/unix/kqueue.c +++ b/deps/uv/src/unix/kqueue.c @@ -55,9 +55,11 @@ void uv__io_poll(uv_loop_t* loop, int timeout) { unsigned int nevents; unsigned int revents; QUEUE* q; + uv__io_t* w; + sigset_t* pset; + sigset_t set; uint64_t base; uint64_t diff; - uv__io_t* w; int filter; int fflags; int count; @@ -117,6 +119,13 @@ void uv__io_poll(uv_loop_t* loop, int timeout) { w->events = w->pevents; } + pset = NULL; + if (loop->flags & UV_LOOP_BLOCK_SIGPROF) { + pset = &set; + sigemptyset(pset); + sigaddset(pset, SIGPROF); + } + assert(timeout >= -1); base = loop->time; count = 48; /* Benchmarks suggest this gives the best throughput. */ @@ -127,6 +136,9 @@ void uv__io_poll(uv_loop_t* loop, int timeout) { spec.tv_nsec = (timeout % 1000) * 1000000; } + if (pset != NULL) + pthread_sigmask(SIG_BLOCK, pset, NULL); + nfds = kevent(loop->backend_fd, events, nevents, @@ -134,6 +146,9 @@ void uv__io_poll(uv_loop_t* loop, int timeout) { ARRAY_SIZE(events), timeout == -1 ? NULL : &spec); + if (pset != NULL) + pthread_sigmask(SIG_UNBLOCK, pset, NULL); + /* Update loop->time unconditionally. It's tempting to skip the update when * timeout == 0 (i.e. non-blocking poll) but there is no guarantee that the * operating system didn't reschedule our process while in the syscall. diff --git a/deps/uv/src/unix/linux-core.c b/deps/uv/src/unix/linux-core.c index 5f6215998c0..a2145b0f369 100644 --- a/deps/uv/src/unix/linux-core.c +++ b/deps/uv/src/unix/linux-core.c @@ -33,6 +33,7 @@ #include <sys/prctl.h> #include <sys/sysinfo.h> #include <unistd.h> +#include <signal.h> #include <fcntl.h> #include <time.h> @@ -141,6 +142,8 @@ void uv__io_poll(uv_loop_t* loop, int timeout) { struct uv__epoll_event e; QUEUE* q; uv__io_t* w; + sigset_t* pset; + sigset_t set; uint64_t base; uint64_t diff; int nevents; @@ -149,6 +152,7 @@ void uv__io_poll(uv_loop_t* loop, int timeout) { int fd; int op; int i; + static int no_epoll_wait; if (loop->nfds == 0) { assert(QUEUE_EMPTY(&loop->watcher_queue)); @@ -190,15 +194,34 @@ void uv__io_poll(uv_loop_t* loop, int timeout) { w->events = w->pevents; } + pset = NULL; + if (loop->flags & UV_LOOP_BLOCK_SIGPROF) { + pset = &set; + sigemptyset(pset); + sigaddset(pset, SIGPROF); + } + assert(timeout >= -1); base = loop->time; count = 48; /* Benchmarks suggest this gives the best throughput. */ for (;;) { - nfds = uv__epoll_wait(loop->backend_fd, - events, - ARRAY_SIZE(events), - timeout); + if (no_epoll_wait || pset != NULL) { + nfds = uv__epoll_pwait(loop->backend_fd, + events, + ARRAY_SIZE(events), + timeout, + pset); + } else { + nfds = uv__epoll_wait(loop->backend_fd, + events, + ARRAY_SIZE(events), + timeout); + if (nfds == -1 && errno == ENOSYS) { + no_epoll_wait = 1; + continue; + } + } /* Update loop->time unconditionally. It's tempting to skip the update when * timeout == 0 (i.e. 
non-blocking poll) but there is no guarantee that the @@ -731,6 +754,7 @@ int uv_interface_addresses(uv_interface_address_t** addresses, return -errno; *count = 0; + *addresses = NULL; /* Count the number of interfaces */ for (ent = addrs; ent != NULL; ent = ent->ifa_next) { @@ -743,6 +767,9 @@ int uv_interface_addresses(uv_interface_address_t** addresses, (*count)++; } + if (*count == 0) + return 0; + *addresses = malloc(*count * sizeof(**addresses)); if (!(*addresses)) return -ENOMEM; diff --git a/deps/uv/src/unix/linux-syscalls.c b/deps/uv/src/unix/linux-syscalls.c index 1ff8abd197f..e036fad5ef6 100644 --- a/deps/uv/src/unix/linux-syscalls.c +++ b/deps/uv/src/unix/linux-syscalls.c @@ -21,6 +21,7 @@ #include "linux-syscalls.h" #include <unistd.h> +#include <signal.h> #include <sys/syscall.h> #include <sys/types.h> #include <errno.h> @@ -328,7 +329,7 @@ int uv__epoll_pwait(int epfd, nevents, timeout, sigmask, - sizeof(*sigmask)); + _NSIG / 8); #else return errno = ENOSYS, -1; #endif diff --git a/deps/uv/src/unix/linux-syscalls.h b/deps/uv/src/unix/linux-syscalls.h index 0f0b34b1ed3..fd6bb48665f 100644 --- a/deps/uv/src/unix/linux-syscalls.h +++ b/deps/uv/src/unix/linux-syscalls.h @@ -44,7 +44,7 @@ #if defined(__alpha__) # define UV__O_NONBLOCK 0x4 #elif defined(__hppa__) -# define UV__O_NONBLOCK 0x10004 +# define UV__O_NONBLOCK O_NONBLOCK #elif defined(__mips__) # define UV__O_NONBLOCK 0x80 #elif defined(__sparc__) @@ -60,7 +60,11 @@ #define UV__IN_NONBLOCK UV__O_NONBLOCK #define UV__SOCK_CLOEXEC UV__O_CLOEXEC -#define UV__SOCK_NONBLOCK UV__O_NONBLOCK +#if defined(SOCK_NONBLOCK) +# define UV__SOCK_NONBLOCK SOCK_NONBLOCK +#else +# define UV__SOCK_NONBLOCK UV__O_NONBLOCK +#endif /* epoll flags */ #define UV__EPOLL_CLOEXEC UV__O_CLOEXEC diff --git a/deps/uv/src/unix/loop.c b/deps/uv/src/unix/loop.c index aa74be6455e..616cf5bc43b 100644 --- a/deps/uv/src/unix/loop.c +++ b/deps/uv/src/unix/loop.c @@ -99,7 +99,6 @@ void uv_loop_delete(uv_loop_t* loop) { static int uv__loop_init(uv_loop_t* loop, int default_loop) { - unsigned int i; int err; uv__signal_global_once_init(); @@ -138,9 +137,7 @@ static int uv__loop_init(uv_loop_t* loop, int default_loop) { uv_signal_init(loop, &loop->child_watcher); uv__handle_unref(&loop->child_watcher); loop->child_watcher.flags |= UV__HANDLE_INTERNAL; - - for (i = 0; i < ARRAY_SIZE(loop->process_handles); i++) - QUEUE_INIT(loop->process_handles + i); + QUEUE_INIT(&loop->process_handles); if (uv_rwlock_init(&loop->cloexec_lock)) abort(); @@ -195,3 +192,15 @@ static void uv__loop_close(uv_loop_t* loop) { loop->watchers = NULL; loop->nwatchers = 0; } + + +int uv__loop_configure(uv_loop_t* loop, uv_loop_option option, va_list ap) { + if (option != UV_LOOP_BLOCK_SIGNAL) + return UV_ENOSYS; + + if (va_arg(ap, int) != SIGPROF) + return UV_EINVAL; + + loop->flags |= UV_LOOP_BLOCK_SIGPROF; + return 0; +} diff --git a/deps/uv/src/unix/netbsd.c b/deps/uv/src/unix/netbsd.c index 7423a71078f..5f1182f8b43 100644 --- a/deps/uv/src/unix/netbsd.c +++ b/deps/uv/src/unix/netbsd.c @@ -38,6 +38,7 @@ #include <sys/resource.h> #include <sys/types.h> #include <sys/sysctl.h> +#include <uvm/uvm_extern.h> #include <unistd.h> #include <time.h> diff --git a/deps/uv/src/unix/pipe.c b/deps/uv/src/unix/pipe.c index a26c3dbc135..b20fb9210c0 100644 --- a/deps/uv/src/unix/pipe.c +++ b/deps/uv/src/unix/pipe.c @@ -44,13 +44,10 @@ int uv_pipe_bind(uv_pipe_t* handle, const char* name) { struct sockaddr_un saddr; const char* pipe_fname; int sockfd; - int bound; int err; pipe_fname = NULL; sockfd = -1; 
- bound = 0; - err = -EINVAL; /* Already bound? */ if (uv__stream_fd(handle) >= 0) @@ -83,7 +80,6 @@ int uv_pipe_bind(uv_pipe_t* handle, const char* name) { err = -EACCES; goto out; } - bound = 1; /* Success. */ handle->pipe_fname = pipe_fname; /* Is a strdup'ed copy. */ @@ -91,11 +87,9 @@ int uv_pipe_bind(uv_pipe_t* handle, const char* name) { return 0; out: - if (bound) { - /* unlink() before uv__close() to avoid races. */ - assert(pipe_fname != NULL); - unlink(pipe_fname); - } + /* unlink() before uv__close() to avoid races. */ + assert(pipe_fname != NULL); + unlink(pipe_fname); uv__close(sockfd); free((void*)pipe_fname); return err; @@ -158,7 +152,6 @@ void uv_pipe_connect(uv_connect_t* req, int r; new_sock = (uv__stream_fd(handle) == -1); - err = -EINVAL; if (new_sock) { err = uv__socket(AF_UNIX, SOCK_STREAM, 0); diff --git a/deps/uv/src/unix/process.c b/deps/uv/src/unix/process.c index 52e4eb28130..be283b480d6 100644 --- a/deps/uv/src/unix/process.c +++ b/deps/uv/src/unix/process.c @@ -45,77 +45,70 @@ extern char **environ; #endif -static QUEUE* uv__process_queue(uv_loop_t* loop, int pid) { - assert(pid > 0); - return loop->process_handles + pid % ARRAY_SIZE(loop->process_handles); -} - - static void uv__chld(uv_signal_t* handle, int signum) { uv_process_t* process; uv_loop_t* loop; int exit_status; int term_signal; - unsigned int i; int status; pid_t pid; QUEUE pending; - QUEUE* h; QUEUE* q; + QUEUE* h; assert(signum == SIGCHLD); QUEUE_INIT(&pending); loop = handle->loop; - for (i = 0; i < ARRAY_SIZE(loop->process_handles); i++) { - h = loop->process_handles + i; - q = QUEUE_HEAD(h); + h = &loop->process_handles; + q = QUEUE_HEAD(h); + while (q != h) { + process = QUEUE_DATA(q, uv_process_t, queue); + q = QUEUE_NEXT(q); - while (q != h) { - process = QUEUE_DATA(q, uv_process_t, queue); - q = QUEUE_NEXT(q); + do + pid = waitpid(process->pid, &status, WNOHANG); + while (pid == -1 && errno == EINTR); - do - pid = waitpid(process->pid, &status, WNOHANG); - while (pid == -1 && errno == EINTR); - - if (pid == 0) - continue; - - if (pid == -1) { - if (errno != ECHILD) - abort(); - continue; - } + if (pid == 0) + continue; - process->status = status; - QUEUE_REMOVE(&process->queue); - QUEUE_INSERT_TAIL(&pending, &process->queue); + if (pid == -1) { + if (errno != ECHILD) + abort(); + continue; } - while (!QUEUE_EMPTY(&pending)) { - q = QUEUE_HEAD(&pending); - QUEUE_REMOVE(q); - QUEUE_INIT(q); + process->status = status; + QUEUE_REMOVE(&process->queue); + QUEUE_INSERT_TAIL(&pending, &process->queue); + } - process = QUEUE_DATA(q, uv_process_t, queue); - uv__handle_stop(process); + h = &pending; + q = QUEUE_HEAD(h); + while (q != h) { + process = QUEUE_DATA(q, uv_process_t, queue); + q = QUEUE_NEXT(q); - if (process->exit_cb == NULL) - continue; + QUEUE_REMOVE(&process->queue); + QUEUE_INIT(&process->queue); + uv__handle_stop(process); - exit_status = 0; - if (WIFEXITED(process->status)) - exit_status = WEXITSTATUS(process->status); + if (process->exit_cb == NULL) + continue; - term_signal = 0; - if (WIFSIGNALED(process->status)) - term_signal = WTERMSIG(process->status); + exit_status = 0; + if (WIFEXITED(process->status)) + exit_status = WEXITSTATUS(process->status); - process->exit_cb(process, exit_status, term_signal); - } + term_signal = 0; + if (WIFSIGNALED(process->status)) + term_signal = WTERMSIG(process->status); + + process->exit_cb(process, exit_status, term_signal); } + assert(QUEUE_EMPTY(&pending)); } @@ -369,7 +362,6 @@ int uv_spawn(uv_loop_t* loop, int signal_pipe[2] = { -1, 
-1 }; int (*pipes)[2]; int stdio_count; - QUEUE* q; ssize_t r; pid_t pid; int err; @@ -483,8 +475,7 @@ int uv_spawn(uv_loop_t* loop, /* Only activate this handle if exec() happened successfully */ if (exec_errorno == 0) { - q = uv__process_queue(loop, pid); - QUEUE_INSERT_TAIL(q, &process->queue); + QUEUE_INSERT_TAIL(&loop->process_handles, &process->queue); uv__handle_start(process); } @@ -526,7 +517,8 @@ int uv_kill(int pid, int signum) { void uv__process_close(uv_process_t* handle) { - /* TODO stop signal watcher when this is the last handle */ QUEUE_REMOVE(&handle->queue); uv__handle_stop(handle); + if (QUEUE_EMPTY(&handle->loop->process_handles)) + uv_signal_stop(&handle->loop->child_watcher); } diff --git a/deps/uv/src/unix/stream.c b/deps/uv/src/unix/stream.c index ae7880c33fa..d41a3429a78 100644 --- a/deps/uv/src/unix/stream.c +++ b/deps/uv/src/unix/stream.c @@ -53,6 +53,10 @@ struct uv__stream_select_s { int fake_fd; int int_fd; int fd; + fd_set* sread; + size_t sread_sz; + fd_set* swrite; + size_t swrite_sz; }; #endif /* defined(__APPLE__) */ @@ -127,8 +131,6 @@ static void uv__stream_osx_select(void* arg) { uv_stream_t* stream; uv__stream_select_t* s; char buf[1024]; - fd_set sread; - fd_set swrite; int events; int fd; int r; @@ -149,17 +151,17 @@ static void uv__stream_osx_select(void* arg) { break; /* Watch fd using select(2) */ - FD_ZERO(&sread); - FD_ZERO(&swrite); + memset(s->sread, 0, s->sread_sz); + memset(s->swrite, 0, s->swrite_sz); if (uv__io_active(&stream->io_watcher, UV__POLLIN)) - FD_SET(fd, &sread); + FD_SET(fd, s->sread); if (uv__io_active(&stream->io_watcher, UV__POLLOUT)) - FD_SET(fd, &swrite); - FD_SET(s->int_fd, &sread); + FD_SET(fd, s->swrite); + FD_SET(s->int_fd, s->sread); /* Wait indefinitely for fd events */ - r = select(max_fd + 1, &sread, &swrite, NULL, NULL); + r = select(max_fd + 1, s->sread, s->swrite, NULL, NULL); if (r == -1) { if (errno == EINTR) continue; @@ -173,7 +175,7 @@ static void uv__stream_osx_select(void* arg) { continue; /* Empty socketpair's buffer in case of interruption */ - if (FD_ISSET(s->int_fd, &sread)) + if (FD_ISSET(s->int_fd, s->sread)) while (1) { r = read(s->int_fd, buf, sizeof(buf)); @@ -194,12 +196,12 @@ static void uv__stream_osx_select(void* arg) { /* Handle events */ events = 0; - if (FD_ISSET(fd, &sread)) + if (FD_ISSET(fd, s->sread)) events |= UV__POLLIN; - if (FD_ISSET(fd, &swrite)) + if (FD_ISSET(fd, s->swrite)) events |= UV__POLLOUT; - assert(events != 0 || FD_ISSET(s->int_fd, &sread)); + assert(events != 0 || FD_ISSET(s->int_fd, s->sread)); if (events != 0) { ACCESS_ONCE(int, s->events) = events; @@ -261,6 +263,9 @@ int uv__stream_try_select(uv_stream_t* stream, int* fd) { int ret; int kq; int old_fd; + int max_fd; + size_t sread_sz; + size_t swrite_sz; kq = kqueue(); if (kq == -1) { @@ -284,31 +289,48 @@ int uv__stream_try_select(uv_stream_t* stream, int* fd) { return 0; /* At this point we definitely know that this fd won't work with kqueue */ - s = malloc(sizeof(*s)); - if (s == NULL) - return -ENOMEM; + + /* + * Create fds for io watcher and to interrupt the select() loop. 
+ * NOTE: do it ahead of malloc below to allocate enough space for fd_sets + */ + if (socketpair(AF_UNIX, SOCK_STREAM, 0, fds)) + return -errno; + + max_fd = *fd; + if (fds[1] > max_fd) + max_fd = fds[1]; + + sread_sz = (max_fd + NBBY) / NBBY; + swrite_sz = sread_sz; + + s = malloc(sizeof(*s) + sread_sz + swrite_sz); + if (s == NULL) { + err = -ENOMEM; + goto failed_malloc; + } s->events = 0; s->fd = *fd; + s->sread = (fd_set*) ((char*) s + sizeof(*s)); + s->sread_sz = sread_sz; + s->swrite = (fd_set*) ((char*) s->sread + sread_sz); + s->swrite_sz = swrite_sz; err = uv_async_init(stream->loop, &s->async, uv__stream_osx_select_cb); - if (err) { - free(s); - return err; - } + if (err) + goto failed_async_init; s->async.flags |= UV__HANDLE_INTERNAL; uv__handle_unref(&s->async); - if (uv_sem_init(&s->close_sem, 0)) - goto fatal1; - - if (uv_sem_init(&s->async_sem, 0)) - goto fatal2; + err = uv_sem_init(&s->close_sem, 0); + if (err != 0) + goto failed_close_sem_init; - /* Create fds for io watcher and to interrupt the select() loop. */ - if (socketpair(AF_UNIX, SOCK_STREAM, 0, fds)) - goto fatal3; + err = uv_sem_init(&s->async_sem, 0); + if (err != 0) + goto failed_async_sem_init; s->fake_fd = fds[0]; s->int_fd = fds[1]; @@ -318,26 +340,36 @@ int uv__stream_try_select(uv_stream_t* stream, int* fd) { stream->select = s; *fd = s->fake_fd; - if (uv_thread_create(&s->thread, uv__stream_osx_select, stream)) - goto fatal4; + err = uv_thread_create(&s->thread, uv__stream_osx_select, stream); + if (err != 0) + goto failed_thread_create; return 0; -fatal4: +failed_thread_create: s->stream = NULL; stream->select = NULL; *fd = old_fd; - uv__close(s->fake_fd); - uv__close(s->int_fd); - s->fake_fd = -1; - s->int_fd = -1; -fatal3: + uv_sem_destroy(&s->async_sem); -fatal2: + +failed_async_sem_init: uv_sem_destroy(&s->close_sem); -fatal1: + +failed_close_sem_init: + uv__close(fds[0]); + uv__close(fds[1]); uv_close((uv_handle_t*) &s->async, uv__stream_osx_cb_close); - return -errno; + return err; + +failed_async_init: + free(s); + +failed_malloc: + uv__close(fds[0]); + uv__close(fds[1]); + + return err; } #endif /* defined(__APPLE__) */ @@ -361,10 +393,22 @@ int uv__stream_open(uv_stream_t* stream, int fd, int flags) { } -void uv__stream_destroy(uv_stream_t* stream) { +void uv__stream_flush_write_queue(uv_stream_t* stream, int error) { uv_write_t* req; QUEUE* q; + while (!QUEUE_EMPTY(&stream->write_queue)) { + q = QUEUE_HEAD(&stream->write_queue); + QUEUE_REMOVE(q); + + req = QUEUE_DATA(q, uv_write_t, queue); + req->error = error; + + QUEUE_INSERT_TAIL(&stream->write_completed_queue, &req->queue); + } +} + +void uv__stream_destroy(uv_stream_t* stream) { assert(!uv__io_active(&stream->io_watcher, UV__POLLIN | UV__POLLOUT)); assert(stream->flags & UV_CLOSED); @@ -374,16 +418,7 @@ void uv__stream_destroy(uv_stream_t* stream) { stream->connect_req = NULL; } - while (!QUEUE_EMPTY(&stream->write_queue)) { - q = QUEUE_HEAD(&stream->write_queue); - QUEUE_REMOVE(q); - - req = QUEUE_DATA(q, uv_write_t, queue); - req->error = -ECANCELED; - - QUEUE_INSERT_TAIL(&stream->write_completed_queue, &req->queue); - } - + uv__stream_flush_write_queue(stream, -ECANCELED); uv__write_callbacks(stream); if (stream->shutdown_req) { @@ -514,7 +549,6 @@ int uv_accept(uv_stream_t* server, uv_stream_t* client) { if (server->accepted_fd == -1) return -EAGAIN; - err = 0; switch (client->type) { case UV_NAMED_PIPE: case UV_TCP: @@ -537,7 +571,7 @@ int uv_accept(uv_stream_t* server, uv_stream_t* client) { break; default: - assert(0); + return 
-EINVAL; } done: @@ -573,7 +607,6 @@ int uv_accept(uv_stream_t* server, uv_stream_t* client) { int uv_listen(uv_stream_t* stream, int backlog, uv_connection_cb cb) { int err; - err = -EINVAL; switch (stream->type) { case UV_TCP: err = uv_tcp_listen((uv_tcp_t*)stream, backlog, cb); @@ -584,7 +617,7 @@ int uv_listen(uv_stream_t* stream, int backlog, uv_connection_cb cb) { break; default: - assert(0); + err = -EINVAL; } if (err == 0) @@ -917,6 +950,7 @@ static void uv__stream_eof(uv_stream_t* stream, const uv_buf_t* buf) { uv__handle_stop(stream); uv__stream_osx_interrupt_select(stream); stream->read_cb(stream, UV_EOF, buf); + stream->flags &= ~UV_STREAM_READING; } @@ -1083,8 +1117,13 @@ static void uv__read(uv_stream_t* stream) { } else { /* Error. User should call uv_close(). */ stream->read_cb(stream, -errno, &buf); - assert(!uv__io_active(&stream->io_watcher, UV__POLLIN) && - "stream->read_cb(status=-1) did not call uv_close()"); + if (stream->flags & UV_STREAM_READING) { + stream->flags &= ~UV_STREAM_READING; + uv__io_stop(stream->loop, &stream->io_watcher, UV__POLLIN); + if (!uv__io_active(&stream->io_watcher, UV__POLLOUT)) + uv__handle_stop(stream); + uv__stream_osx_interrupt_select(stream); + } } return; } else if (nread == 0) { @@ -1163,7 +1202,7 @@ static void uv__stream_io(uv_loop_t* loop, uv__io_t* w, unsigned int events) { assert(uv__stream_fd(stream) >= 0); /* Ignore POLLHUP here. Even if it's set, there may still be data to read. */ - if (events & (UV__POLLIN | UV__POLLERR)) + if (events & (UV__POLLIN | UV__POLLERR | UV__POLLHUP)) uv__read(stream); if (uv__stream_fd(stream) == -1) @@ -1233,10 +1272,21 @@ static void uv__stream_connect(uv_stream_t* stream) { stream->connect_req = NULL; uv__req_unregister(stream->loop, req); - uv__io_stop(stream->loop, &stream->io_watcher, UV__POLLOUT); + + if (error < 0 || QUEUE_EMPTY(&stream->write_queue)) { + uv__io_stop(stream->loop, &stream->io_watcher, UV__POLLOUT); + } if (req->cb) req->cb(req, error); + + if (uv__stream_fd(stream) == -1) + return; + + if (error < 0) { + uv__stream_flush_write_queue(stream, -ECANCELED); + uv__write_callbacks(stream); + } } @@ -1274,7 +1324,7 @@ int uv_write2(uv_write_t* req, /* It's legal for write_queue_size > 0 even when the write_queue is empty; * it means there are error-state requests in the write_completed_queue that * will touch up write_queue_size later, see also uv__write_req_finish(). - * We chould check that write_queue is empty instead but that implies making + * We could check that write_queue is empty instead but that implies making * a write() syscall when we know that the handle is in error mode. */ empty_queue = (stream->write_queue_size == 0); @@ -1426,15 +1476,8 @@ int uv_read_start(uv_stream_t* stream, int uv_read_stop(uv_stream_t* stream) { - /* Sanity check. We're going to stop the handle unless it's primed for - * writing but that means there should be some kind of write action in - * progress. 
- */ - assert(!uv__io_active(&stream->io_watcher, UV__POLLOUT) || - !QUEUE_EMPTY(&stream->write_completed_queue) || - !QUEUE_EMPTY(&stream->write_queue) || - stream->shutdown_req != NULL || - stream->connect_req != NULL); + if (!(stream->flags & UV_STREAM_READING)) + return 0; stream->flags &= ~UV_STREAM_READING; uv__io_stop(stream->loop, &stream->io_watcher, UV__POLLIN); diff --git a/deps/uv/src/unix/sunos.c b/deps/uv/src/unix/sunos.c index a630dba759a..d6fb7f49509 100644 --- a/deps/uv/src/unix/sunos.c +++ b/deps/uv/src/unix/sunos.c @@ -122,6 +122,8 @@ void uv__io_poll(uv_loop_t* loop, int timeout) { struct timespec spec; QUEUE* q; uv__io_t* w; + sigset_t* pset; + sigset_t set; uint64_t base; uint64_t diff; unsigned int nfds; @@ -129,6 +131,7 @@ void uv__io_poll(uv_loop_t* loop, int timeout) { int saved_errno; int nevents; int count; + int err; int fd; if (loop->nfds == 0) { @@ -150,6 +153,13 @@ void uv__io_poll(uv_loop_t* loop, int timeout) { w->events = w->pevents; } + pset = NULL; + if (loop->flags & UV_LOOP_BLOCK_SIGPROF) { + pset = &set; + sigemptyset(pset); + sigaddset(pset, SIGPROF); + } + assert(timeout >= -1); base = loop->time; count = 48; /* Benchmarks suggest this gives the best throughput. */ @@ -165,11 +175,20 @@ void uv__io_poll(uv_loop_t* loop, int timeout) { nfds = 1; saved_errno = 0; - if (port_getn(loop->backend_fd, - events, - ARRAY_SIZE(events), - &nfds, - timeout == -1 ? NULL : &spec)) { + + if (pset != NULL) + pthread_sigmask(SIG_BLOCK, pset, NULL); + + err = port_getn(loop->backend_fd, + events, + ARRAY_SIZE(events), + &nfds, + timeout == -1 ? NULL : &spec); + + if (pset != NULL) + pthread_sigmask(SIG_UNBLOCK, pset, NULL); + + if (err) { /* Work around another kernel bug: port_getn() may return events even * on error. */ diff --git a/deps/uv/src/unix/thread.c b/deps/uv/src/unix/thread.c index 522426f634d..7a55bd63247 100644 --- a/deps/uv/src/unix/thread.c +++ b/deps/uv/src/unix/thread.c @@ -31,11 +31,61 @@ #undef NANOSEC #define NANOSEC ((uint64_t) 1e9) + +struct thread_ctx { + void (*entry)(void* arg); + void* arg; +}; + + +static void* uv__thread_start(void *arg) +{ + struct thread_ctx *ctx_p; + struct thread_ctx ctx; + + ctx_p = arg; + ctx = *ctx_p; + free(ctx_p); + ctx.entry(ctx.arg); + + return 0; +} + + +int uv_thread_create(uv_thread_t *tid, void (*entry)(void *arg), void *arg) { + struct thread_ctx* ctx; + int err; + + ctx = malloc(sizeof(*ctx)); + if (ctx == NULL) + return UV_ENOMEM; + + ctx->entry = entry; + ctx->arg = arg; + + err = pthread_create(tid, NULL, uv__thread_start, ctx); + + if (err) + free(ctx); + + return err ? 
-1 : 0; +} + + +uv_thread_t uv_thread_self(void) { + return pthread_self(); +} + int uv_thread_join(uv_thread_t *tid) { return -pthread_join(*tid, NULL); } +int uv_thread_equal(const uv_thread_t* t1, const uv_thread_t* t2) { + return pthread_equal(*t1, *t2); +} + + int uv_mutex_init(uv_mutex_t* mutex) { #if defined(NDEBUG) || !defined(PTHREAD_MUTEX_ERRORCHECK) return -pthread_mutex_init(mutex, NULL); diff --git a/deps/uv/src/unix/timer.c b/deps/uv/src/unix/timer.c index 9bd0423b5d3..ca3ec3db957 100644 --- a/deps/uv/src/unix/timer.c +++ b/deps/uv/src/unix/timer.c @@ -65,6 +65,9 @@ int uv_timer_start(uv_timer_t* handle, uint64_t repeat) { uint64_t clamped_timeout; + if (cb == NULL) + return -EINVAL; + if (uv__is_active(handle)) uv_timer_stop(handle); diff --git a/deps/uv/src/unix/udp.c b/deps/uv/src/unix/udp.c index bf91cbdf9f2..71a0e41f1f7 100644 --- a/deps/uv/src/unix/udp.c +++ b/deps/uv/src/unix/udp.c @@ -278,9 +278,6 @@ int uv__udp_bind(uv_udp_t* handle, int yes; int fd; - err = -EINVAL; - fd = -1; - /* Check for bad flags. */ if (flags & ~(UV_UDP_IPV6ONLY | UV_UDP_REUSEADDR)) return -EINVAL; @@ -340,8 +337,6 @@ static int uv__udp_maybe_deferred_bind(uv_udp_t* handle, unsigned char taddr[sizeof(struct sockaddr_in6)]; socklen_t addrlen; - assert(domain == AF_INET || domain == AF_INET6); - if (handle->io_watcher.fd != -1) return 0; diff --git a/deps/uv/src/uv-common.c b/deps/uv/src/uv-common.c index 4e3968cb448..f84f8c4ae10 100644 --- a/deps/uv/src/uv-common.c +++ b/deps/uv/src/uv-common.c @@ -19,18 +19,12 @@ * IN THE SOFTWARE. */ -/* Expose glibc-specific EAI_* error codes. Needs to be defined before we - * include any headers. - */ -#ifndef _GNU_SOURCE -# define _GNU_SOURCE -#endif - #include "uv.h" #include "uv-common.h" #include <stdio.h> #include <assert.h> +#include <stdarg.h> #include <stddef.h> /* NULL */ #include <stdlib.h> /* malloc */ #include <string.h> /* memset */ @@ -39,13 +33,6 @@ # include <net/if.h> /* if_nametoindex */ #endif -/* EAI_* constants. */ -#if !defined(_WIN32) -# include <sys/types.h> -# include <sys/socket.h> -# include <netdb.h> -#endif - #define XX(uc, lc) case UV_##uc: return sizeof(uv_##lc##_t); size_t uv_handle_size(uv_handle_type type) { @@ -271,64 +258,6 @@ int uv_udp_recv_stop(uv_udp_t* handle) { } -struct thread_ctx { - void (*entry)(void* arg); - void* arg; -}; - - -#ifdef _WIN32 -static UINT __stdcall uv__thread_start(void* arg) -#else -static void* uv__thread_start(void *arg) -#endif -{ - struct thread_ctx *ctx_p; - struct thread_ctx ctx; - - ctx_p = arg; - ctx = *ctx_p; - free(ctx_p); - ctx.entry(ctx.arg); - - return 0; -} - - -int uv_thread_create(uv_thread_t *tid, void (*entry)(void *arg), void *arg) { - struct thread_ctx* ctx; - int err; - - ctx = malloc(sizeof(*ctx)); - if (ctx == NULL) - return UV_ENOMEM; - - ctx->entry = entry; - ctx->arg = arg; - -#ifdef _WIN32 - *tid = (HANDLE) _beginthreadex(NULL, 0, uv__thread_start, ctx, 0, NULL); - err = *tid ? 0 : errno; -#else - err = pthread_create(tid, NULL, uv__thread_start, ctx); -#endif - - if (err) - free(ctx); - - return err ? 
-1 : 0; -} - - -unsigned long uv_thread_self(void) { -#ifdef _WIN32 - return (unsigned long) GetCurrentThreadId(); -#else - return (unsigned long) pthread_self(); -#endif -} - - void uv_walk(uv_loop_t* loop, uv_walk_cb walk_cb, void* arg) { QUEUE* q; uv_handle_t* h; @@ -410,62 +339,6 @@ uint64_t uv_now(const uv_loop_t* loop) { } -int uv__getaddrinfo_translate_error(int sys_err) { - switch (sys_err) { - case 0: return 0; -#if defined(EAI_ADDRFAMILY) - case EAI_ADDRFAMILY: return UV_EAI_ADDRFAMILY; -#endif -#if defined(EAI_AGAIN) - case EAI_AGAIN: return UV_EAI_AGAIN; -#endif -#if defined(EAI_BADFLAGS) - case EAI_BADFLAGS: return UV_EAI_BADFLAGS; -#endif -#if defined(EAI_BADHINTS) - case EAI_BADHINTS: return UV_EAI_BADHINTS; -#endif -#if defined(EAI_CANCELED) - case EAI_CANCELED: return UV_EAI_CANCELED; -#endif -#if defined(EAI_FAIL) - case EAI_FAIL: return UV_EAI_FAIL; -#endif -#if defined(EAI_FAMILY) - case EAI_FAMILY: return UV_EAI_FAMILY; -#endif -#if defined(EAI_MEMORY) - case EAI_MEMORY: return UV_EAI_MEMORY; -#endif -#if defined(EAI_NODATA) - case EAI_NODATA: return UV_EAI_NODATA; -#endif -#if defined(EAI_NONAME) -# if !defined(EAI_NODATA) || EAI_NODATA != EAI_NONAME - case EAI_NONAME: return UV_EAI_NONAME; -# endif -#endif -#if defined(EAI_OVERFLOW) - case EAI_OVERFLOW: return UV_EAI_OVERFLOW; -#endif -#if defined(EAI_PROTOCOL) - case EAI_PROTOCOL: return UV_EAI_PROTOCOL; -#endif -#if defined(EAI_SERVICE) - case EAI_SERVICE: return UV_EAI_SERVICE; -#endif -#if defined(EAI_SOCKTYPE) - case EAI_SOCKTYPE: return UV_EAI_SOCKTYPE; -#endif -#if defined(EAI_SYSTEM) - case EAI_SYSTEM: return -errno; -#endif - } - assert(!"unknown EAI_* error code"); - abort(); - return 0; /* Pacify compiler. */ -} - size_t uv__count_bufs(const uv_buf_t bufs[], unsigned int nbufs) { unsigned int i; @@ -478,6 +351,13 @@ size_t uv__count_bufs(const uv_buf_t bufs[], unsigned int nbufs) { return bytes; } +int uv_recv_buffer_size(uv_handle_t* handle, int* value) { + return uv__socket_sockopt(handle, SO_RCVBUF, value); +} + +int uv_send_buffer_size(uv_handle_t* handle, int *value) { + return uv__socket_sockopt(handle, SO_SNDBUF, value); +} int uv_fs_event_getpath(uv_fs_event_t* handle, char* buf, size_t* len) { size_t required_len; @@ -498,3 +378,81 @@ int uv_fs_event_getpath(uv_fs_event_t* handle, char* buf, size_t* len) { return 0; } + + +void uv__fs_scandir_cleanup(uv_fs_t* req) { + uv__dirent_t** dents; + + dents = req->ptr; + if (req->nbufs > 0 && req->nbufs != (unsigned int) req->result) + req->nbufs--; + for (; req->nbufs < (unsigned int) req->result; req->nbufs++) + free(dents[req->nbufs]); +} + + +int uv_fs_scandir_next(uv_fs_t* req, uv_dirent_t* ent) { + uv__dirent_t** dents; + uv__dirent_t* dent; + + dents = req->ptr; + + /* Free previous entity */ + if (req->nbufs > 0) + free(dents[req->nbufs - 1]); + + /* End was already reached */ + if (req->nbufs == (unsigned int) req->result) { + free(dents); + req->ptr = NULL; + return UV_EOF; + } + + dent = dents[req->nbufs++]; + + ent->name = dent->d_name; +#ifdef HAVE_DIRENT_TYPES + switch (dent->d_type) { + case UV__DT_DIR: + ent->type = UV_DIRENT_DIR; + break; + case UV__DT_FILE: + ent->type = UV_DIRENT_FILE; + break; + case UV__DT_LINK: + ent->type = UV_DIRENT_LINK; + break; + case UV__DT_FIFO: + ent->type = UV_DIRENT_FIFO; + break; + case UV__DT_SOCKET: + ent->type = UV_DIRENT_SOCKET; + break; + case UV__DT_CHAR: + ent->type = UV_DIRENT_CHAR; + break; + case UV__DT_BLOCK: + ent->type = UV_DIRENT_BLOCK; + break; + default: + ent->type = UV_DIRENT_UNKNOWN; + } 
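[Editor's aside] The uv_recv_buffer_size() and uv_send_buffer_size() helpers introduced in the uv-common.c hunk above fold the get and set paths into one call: a zero *value performs a getsockopt() and writes the current size back, while a non-zero *value performs a setsockopt() (see the uv__socket_sockopt() implementations later in this patch). A short usage sketch; the double_recv_buffer() helper is hypothetical:

#include <uv.h>

/* Hypothetical helper: read the current receive buffer size for a TCP
 * handle, then double it. Returns 0 or a negative UV_E* code. */
int double_recv_buffer(uv_tcp_t* tcp) {
  int size;
  int err;

  size = 0;  /* zero means "query": the current size is written back */
  err = uv_recv_buffer_size((uv_handle_t*) tcp, &size);
  if (err != 0)
    return err;

  size *= 2;  /* non-zero means "set" */
  return uv_recv_buffer_size((uv_handle_t*) tcp, &size);
}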
+#else + ent->type = UV_DIRENT_UNKNOWN; +#endif + + return 0; +} + + +int uv_loop_configure(uv_loop_t* loop, uv_loop_option option, ...) { + va_list ap; + int err; + + va_start(ap, option); + /* Any platform-agnostic options should be handled here. */ + err = uv__loop_configure(loop, option, ap); + va_end(ap); + + return err; +} diff --git a/deps/uv/src/uv-common.h b/deps/uv/src/uv-common.h index 34c287898cd..7d3c58f1218 100644 --- a/deps/uv/src/uv-common.h +++ b/deps/uv/src/uv-common.h @@ -28,6 +28,7 @@ #define UV_COMMON_H_ #include <assert.h> +#include <stdarg.h> #include <stddef.h> #if defined(_MSC_VER) && _MSC_VER < 1600 @@ -59,6 +60,8 @@ enum { # define UV__HANDLE_CLOSING 0x01 #endif +int uv__loop_configure(uv_loop_t* loop, uv_loop_option option, va_list ap); + int uv__tcp_bind(uv_tcp_t* tcp, const struct sockaddr* addr, unsigned int addrlen, @@ -107,6 +110,10 @@ void uv__work_done(uv_async_t* handle); size_t uv__count_bufs(const uv_buf_t bufs[], unsigned int nbufs); +int uv__socket_sockopt(uv_handle_t* handle, int optname, int* value); + +void uv__fs_scandir_cleanup(uv_fs_t* req); + #define uv__has_active_reqs(loop) \ (QUEUE_EMPTY(&(loop)->active_reqs) == 0) diff --git a/deps/uv/src/version.c b/deps/uv/src/version.c index 02de6de305c..ff91a460904 100644 --- a/deps/uv/src/version.c +++ b/deps/uv/src/version.c @@ -35,7 +35,7 @@ #if UV_VERSION_IS_RELEASE # define UV_VERSION_STRING UV_VERSION_STRING_BASE #else -# define UV_VERSION_STRING UV_VERSION_STRING_BASE "-pre" +# define UV_VERSION_STRING UV_VERSION_STRING_BASE "-" UV_VERSION_SUFFIX #endif diff --git a/deps/uv/src/win/core.c b/deps/uv/src/win/core.c index c39597561dd..48897cf29bc 100644 --- a/deps/uv/src/win/core.c +++ b/deps/uv/src/win/core.c @@ -36,12 +36,11 @@ #include "req-inl.h" -/* The only event loop we support right now */ -static uv_loop_t uv_default_loop_; +static uv_loop_t default_loop_struct; +static uv_loop_t* default_loop_ptr; -/* uv_once intialization guards */ +/* uv_once initialization guards */ static uv_once_t uv_init_guard_ = UV_ONCE_INIT; -static uv_once_t uv_default_loop_init_guard_ = UV_ONCE_INIT; #if defined(_DEBUG) && (defined(_MSC_VER) || defined(__MINGW64_VERSION_MAJOR)) @@ -104,7 +103,7 @@ static void uv_init(void) { #endif /* Fetch winapi function pointers. This must be done first because other - * intialization code might need these function pointers to be loaded. + * initialization code might need these function pointers to be loaded. */ uv_winapi_init(); @@ -134,11 +133,10 @@ int uv_loop_init(uv_loop_t* loop) { if (loop->iocp == NULL) return uv_translate_sys_error(GetLastError()); - /* To prevent uninitialized memory access, loop->time must be intialized + /* To prevent uninitialized memory access, loop->time must be initialized * to zero before calling uv_update_time for the first time. 
*/ loop->time = 0; - loop->last_tick_count = 0; uv_update_time(loop); QUEUE_INIT(&loop->wq); @@ -181,48 +179,45 @@ int uv_loop_init(uv_loop_t* loop) { } -static void uv_default_loop_init(void) { - /* Initialize libuv itself first */ - uv__once_init(); - - /* Initialize the main loop */ - uv_loop_init(&uv_default_loop_); -} - - void uv__once_init(void) { uv_once(&uv_init_guard_, uv_init); } uv_loop_t* uv_default_loop(void) { - uv_once(&uv_default_loop_init_guard_, uv_default_loop_init); - return &uv_default_loop_; + if (default_loop_ptr != NULL) + return default_loop_ptr; + + if (uv_loop_init(&default_loop_struct)) + return NULL; + + default_loop_ptr = &default_loop_struct; + return default_loop_ptr; } static void uv__loop_close(uv_loop_t* loop) { - /* close the async handle without needeing an extra loop iteration */ + size_t i; + + /* close the async handle without needing an extra loop iteration */ assert(!loop->wq_async.async_sent); loop->wq_async.close_cb = NULL; uv__handle_closing(&loop->wq_async); uv__handle_close(&loop->wq_async); - if (loop != &uv_default_loop_) { - size_t i; - for (i = 0; i < ARRAY_SIZE(loop->poll_peer_sockets); i++) { - SOCKET sock = loop->poll_peer_sockets[i]; - if (sock != 0 && sock != INVALID_SOCKET) - closesocket(sock); - } + for (i = 0; i < ARRAY_SIZE(loop->poll_peer_sockets); i++) { + SOCKET sock = loop->poll_peer_sockets[i]; + if (sock != 0 && sock != INVALID_SOCKET) + closesocket(sock); } - /* TODO: cleanup default loop*/ uv_mutex_lock(&loop->wq_mutex); assert(QUEUE_EMPTY(&loop->wq) && "thread pool work queue not empty!"); assert(!uv__has_active_reqs(loop)); uv_mutex_unlock(&loop->wq_mutex); uv_mutex_destroy(&loop->wq_mutex); + + CloseHandle(loop->iocp); } @@ -242,6 +237,8 @@ int uv_loop_close(uv_loop_t* loop) { #ifndef NDEBUG memset(loop, -1, sizeof(*loop)); #endif + if (loop == default_loop_ptr) + default_loop_ptr = NULL; return 0; } @@ -265,13 +262,21 @@ uv_loop_t* uv_loop_new(void) { void uv_loop_delete(uv_loop_t* loop) { - int err = uv_loop_close(loop); + uv_loop_t* default_loop; + int err; + default_loop = default_loop_ptr; + err = uv_loop_close(loop); assert(err == 0); - if (loop != &uv_default_loop_) + if (loop != default_loop) free(loop); } +int uv__loop_configure(uv_loop_t* loop, uv_loop_option option, va_list ap) { + return UV_ENOSYS; +} + + int uv_backend_fd(const uv_loop_t* loop) { return -1; } @@ -313,13 +318,17 @@ static void uv_poll(uv_loop_t* loop, DWORD timeout) { /* Package was dequeued */ req = uv_overlapped_to_req(overlapped); uv_insert_pending_req(loop, req); + + /* Some time might have passed waiting for I/O, + * so update the loop time here. + */ + uv_update_time(loop); } else if (GetLastError() != WAIT_TIMEOUT) { /* Serious error */ uv_fatal_error(GetLastError(), "GetQueuedCompletionStatus"); - } else { - /* We're sure that at least `timeout` milliseconds have expired, but - * this may not be reflected yet in the GetTickCount() return value. - * Therefore we ensure it's taken into account here. + } else if (timeout > 0) { + /* GetQueuedCompletionStatus can occasionally return a little early. + * Make sure that the desired timeout is reflected in the loop time. */ uv__time_forward(loop, timeout); } @@ -346,13 +355,17 @@ static void uv_poll_ex(uv_loop_t* loop, DWORD timeout) { req = uv_overlapped_to_req(overlappeds[i].lpOverlapped); uv_insert_pending_req(loop, req); } + + /* Some time might have passed waiting for I/O, + * so update the loop time here. 
+ */ + uv_update_time(loop); } else if (GetLastError() != WAIT_TIMEOUT) { /* Serious error */ uv_fatal_error(GetLastError(), "GetQueuedCompletionStatusEx"); } else if (timeout > 0) { - /* We're sure that at least `timeout` milliseconds have expired, but - * this may not be reflected yet in the GetTickCount() return value. - * Therefore we ensure it's taken into account here. + /* GetQueuedCompletionStatus can occasionally return a little early. + * Make sure that the desired timeout is reflected in the loop time. */ uv__time_forward(loop, timeout); } @@ -403,7 +416,7 @@ int uv_run(uv_loop_t *loop, uv_run_mode mode) { uv_process_endgames(loop); if (mode == UV_RUN_ONCE) { - /* UV_RUN_ONCE implies forward progess: at least one callback must have + /* UV_RUN_ONCE implies forward progress: at least one callback must have * been invoked when it returns. uv__io_poll() can return without doing * I/O (meaning: no callbacks) when its timeout expires - which means we * have pending timers that satisfy the forward progress constraint. @@ -411,7 +424,6 @@ int uv_run(uv_loop_t *loop, uv_run_mode mode) { * UV_RUN_NOWAIT makes no guarantees about progress so it's omitted from * the check. */ - uv_update_time(loop); uv_process_timers(loop); } @@ -428,3 +440,68 @@ int uv_run(uv_loop_t *loop, uv_run_mode mode) { return r; } + + +int uv_fileno(const uv_handle_t* handle, uv_os_fd_t* fd) { + uv_os_fd_t fd_out; + + switch (handle->type) { + case UV_TCP: + fd_out = (uv_os_fd_t)((uv_tcp_t*) handle)->socket; + break; + + case UV_NAMED_PIPE: + fd_out = ((uv_pipe_t*) handle)->handle; + break; + + case UV_TTY: + fd_out = ((uv_tty_t*) handle)->handle; + break; + + case UV_UDP: + fd_out = (uv_os_fd_t)((uv_udp_t*) handle)->socket; + break; + + case UV_POLL: + fd_out = (uv_os_fd_t)((uv_poll_t*) handle)->socket; + break; + + default: + return UV_EINVAL; + } + + if (uv_is_closing(handle) || fd_out == INVALID_HANDLE_VALUE) + return UV_EBADF; + + *fd = fd_out; + return 0; +} + + +int uv__socket_sockopt(uv_handle_t* handle, int optname, int* value) { + int r; + int len; + SOCKET socket; + + if (handle == NULL || value == NULL) + return UV_EINVAL; + + if (handle->type == UV_TCP) + socket = ((uv_tcp_t*) handle)->socket; + else if (handle->type == UV_UDP) + socket = ((uv_udp_t*) handle)->socket; + else + return UV_ENOTSUP; + + len = sizeof(*value); + + if (*value == 0) + r = getsockopt(socket, SOL_SOCKET, optname, (char*) value, &len); + else + r = setsockopt(socket, SOL_SOCKET, optname, (const char*) value, len); + + if (r == SOCKET_ERROR) + return uv_translate_sys_error(WSAGetLastError()); + + return 0; +} diff --git a/deps/uv/src/win/fs.c b/deps/uv/src/win/fs.c index 8b52e610f42..30a457a023b 100644 --- a/deps/uv/src/win/fs.c +++ b/deps/uv/src/win/fs.c @@ -36,11 +36,15 @@ #include "req-inl.h" #include "handle-inl.h" +#include <wincrypt.h> + #define UV_FS_FREE_PATHS 0x0002 #define UV_FS_FREE_PTR 0x0008 #define UV_FS_CLEANEDUP 0x0010 +static const int uv__fs_dirent_slide = 0x20; + #define QUEUE_FS_TP_JOB(loop, req) \ do { \ @@ -279,7 +283,7 @@ INLINE static int fs__readlink_handle(HANDLE handle, char** target_ptr, (w_target[4] >= L'a' && w_target[4] <= L'z')) && w_target[5] == L':' && (w_target_len == 6 || w_target[6] == L'\\')) { - /* \??\drive:\ */ + /* \??\<drive>:\ */ w_target += 4; w_target_len -= 4; @@ -288,8 +292,8 @@ INLINE static int fs__readlink_handle(HANDLE handle, char** target_ptr, (w_target[5] == L'N' || w_target[5] == L'n') && (w_target[6] == L'C' || w_target[6] == L'c') && w_target[7] == L'\\') { - /* 
\??\UNC\server\share\ - make sure the final path looks like */ - /* \\server\share\ */ + /* \??\UNC\<server>\<share>\ - make sure the final path looks like */ + /* \\<server>\<share>\ */ w_target += 6; w_target[0] = L'\\'; w_target_len -= 6; @@ -304,8 +308,8 @@ INLINE static int fs__readlink_handle(HANDLE handle, char** target_ptr, w_target_len = reparse_data->MountPointReparseBuffer.SubstituteNameLength / sizeof(WCHAR); - /* Only treat junctions that look like \??\drive:\ as symlink. */ - /* Junctions can also be used as mount points, like \??\Volume{guid}, */ + /* Only treat junctions that look like \??\<drive>:\ as symlink. */ + /* Junctions can also be used as mount points, like \??\Volume{<guid>}, */ /* but that's confusing for programs since they wouldn't be able to */ /* actually understand such a path when returned by uv_readlink(). */ /* UNC paths are never valid for junctions so we don't care about them. */ @@ -553,11 +557,6 @@ void fs__read(uv_fs_t* req) { if (offset != -1) { memset(&overlapped, 0, sizeof overlapped); - - offset_.QuadPart = offset; - overlapped.Offset = offset_.LowPart; - overlapped.OffsetHigh = offset_.HighPart; - overlapped_ptr = &overlapped; } else { overlapped_ptr = NULL; @@ -567,6 +566,13 @@ void fs__read(uv_fs_t* req) { bytes = 0; do { DWORD incremental_bytes; + + if (offset != -1) { + offset_.QuadPart = offset + bytes; + overlapped.Offset = offset_.LowPart; + overlapped.OffsetHigh = offset_.HighPart; + } + result = ReadFile(handle, req->bufs[index].base, req->bufs[index].len, @@ -609,11 +615,6 @@ void fs__write(uv_fs_t* req) { if (offset != -1) { memset(&overlapped, 0, sizeof overlapped); - - offset_.QuadPart = offset; - overlapped.Offset = offset_.LowPart; - overlapped.OffsetHigh = offset_.HighPart; - overlapped_ptr = &overlapped; } else { overlapped_ptr = NULL; @@ -623,6 +624,13 @@ void fs__write(uv_fs_t* req) { bytes = 0; do { DWORD incremental_bytes; + + if (offset != -1) { + offset_.QuadPart = offset + bytes; + overlapped.Offset = offset_.LowPart; + overlapped.OffsetHigh = offset_.HighPart; + } + result = WriteFile(handle, req->bufs[index].base, req->bufs[index].len, @@ -721,88 +729,75 @@ void fs__mkdir(uv_fs_t* req) { } -/* Some parts of the implementation were borrowed from glibc. */ +/* OpenBSD original: lib/libc/stdio/mktemp.c */ void fs__mkdtemp(uv_fs_t* req) { - static const WCHAR letters[] = + static const WCHAR *tempchars = L"abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789"; + static const size_t num_chars = 62; + static const size_t num_x = 6; + WCHAR *cp, *ep; + unsigned int tries, i; size_t len; - WCHAR* template_part; - static uint64_t value; - unsigned int count; - int fd; - - /* A lower bound on the number of temporary files to attempt to - generate. The maximum total number of temporary file names that - can exist for a given template is 62**6. It should never be - necessary to try all these combinations. Instead if a reasonable - number of names is tried (we define reasonable as 62**3) fail to - give the system administrator the chance to remove the problems. */ -#define ATTEMPTS_MIN (62 * 62 * 62) - - /* The number of times to attempt to generate a temporary file. To - conform to POSIX, this must be no smaller than TMP_MAX. 
*/ -#if ATTEMPTS_MIN < TMP_MAX - unsigned int attempts = TMP_MAX; -#else - unsigned int attempts = ATTEMPTS_MIN; -#endif + HCRYPTPROV h_crypt_prov; + uint64_t v; + BOOL released; len = wcslen(req->pathw); - if (len < 6 || wcsncmp(&req->pathw[len - 6], L"XXXXXX", 6)) { + ep = req->pathw + len; + if (len < num_x || wcsncmp(ep - num_x, L"XXXXXX", num_x)) { SET_REQ_UV_ERROR(req, UV_EINVAL, ERROR_INVALID_PARAMETER); return; } - /* This is where the Xs start. */ - template_part = &req->pathw[len - 6]; - - /* Get some random data. */ - value += uv_hrtime() ^ _getpid(); - - for (count = 0; count < attempts; value += 7777, ++count) { - uint64_t v = value; + if (!CryptAcquireContext(&h_crypt_prov, NULL, NULL, PROV_RSA_FULL, + CRYPT_VERIFYCONTEXT)) { + SET_REQ_WIN32_ERROR(req, GetLastError()); + return; + } - /* Fill in the random bits. */ - template_part[0] = letters[v % 62]; - v /= 62; - template_part[1] = letters[v % 62]; - v /= 62; - template_part[2] = letters[v % 62]; - v /= 62; - template_part[3] = letters[v % 62]; - v /= 62; - template_part[4] = letters[v % 62]; - v /= 62; - template_part[5] = letters[v % 62]; + tries = TMP_MAX; + do { + if (!CryptGenRandom(h_crypt_prov, sizeof(v), (BYTE*) &v)) { + SET_REQ_WIN32_ERROR(req, GetLastError()); + break; + } - fd = _wmkdir(req->pathw); + cp = ep - num_x; + for (i = 0; i < num_x; i++) { + *cp++ = tempchars[v % num_chars]; + v /= num_chars; + } - if (fd >= 0) { + if (_wmkdir(req->pathw) == 0) { len = strlen(req->path); - wcstombs((char*) req->path + len - 6, template_part, 6); + wcstombs((char*) req->path + len - num_x, ep - num_x, num_x); SET_REQ_RESULT(req, 0); - return; + break; } else if (errno != EEXIST) { SET_REQ_RESULT(req, -1); - return; + break; } - } + } while (--tries); - /* We got out of the loop because we ran out of combinations to try. */ - SET_REQ_RESULT(req, -1); + released = CryptReleaseContext(h_crypt_prov, 0); + assert(released); + if (tries == 0) { + SET_REQ_RESULT(req, -1); + } } -void fs__readdir(uv_fs_t* req) { +void fs__scandir(uv_fs_t* req) { WCHAR* pathw = req->pathw; size_t len = wcslen(pathw); - int result, size; - WCHAR* buf = NULL, *ptr, *name; + int result; + WCHAR* name; HANDLE dir; WIN32_FIND_DATAW ent = { 0 }; - size_t buf_char_len = 4096; WCHAR* path2; const WCHAR* fmt; + uv__dirent_t** dents; + int dent_size; if (len == 0) { fmt = L"./*"; @@ -821,7 +816,8 @@ void fs__readdir(uv_fs_t* req) { path2 = (WCHAR*)malloc(sizeof(WCHAR) * (len + 4)); if (!path2) { - uv_fatal_error(ERROR_OUTOFMEMORY, "malloc"); + SET_REQ_UV_ERROR(req, UV_ENOMEM, ERROR_OUTOFMEMORY); + return; } _snwprintf(path2, len + 3, fmt, pathw); @@ -834,71 +830,81 @@ void fs__readdir(uv_fs_t* req) { } result = 0; + dents = NULL; + dent_size = 0; do { - name = ent.cFileName; - - if (name[0] != L'.' || (name[1] && (name[1] != L'.' || name[2]))) { - len = wcslen(name); + uv__dirent_t* dent; + int utf8_len; - if (!buf) { - buf = (WCHAR*)malloc(buf_char_len * sizeof(WCHAR)); - if (!buf) { - uv_fatal_error(ERROR_OUTOFMEMORY, "malloc"); - } + name = ent.cFileName; - ptr = buf; - } + if (!(name[0] != L'.' || (name[1] && (name[1] != L'.' 
|| name[2])))) + continue; - while ((ptr - buf) + len + 1 > buf_char_len) { - buf_char_len *= 2; - path2 = buf; - buf = (WCHAR*)realloc(buf, buf_char_len * sizeof(WCHAR)); - if (!buf) { - uv_fatal_error(ERROR_OUTOFMEMORY, "realloc"); - } + /* Grow dents buffer, if needed */ + if (result >= dent_size) { + uv__dirent_t** tmp; - ptr = buf + (ptr - path2); + dent_size += uv__fs_dirent_slide; + tmp = realloc(dents, dent_size * sizeof(*dents)); + if (tmp == NULL) { + SET_REQ_UV_ERROR(req, UV_ENOMEM, ERROR_OUTOFMEMORY); + goto fatal; } - - wcscpy(ptr, name); - ptr += len + 1; - result++; + dents = tmp; } - } while(FindNextFileW(dir, &ent)); - FindClose(dir); - - if (buf) { - /* Convert result to UTF8. */ - size = uv_utf16_to_utf8(buf, buf_char_len, NULL, 0); - if (!size) { + /* Allocate enough space to fit utf8 encoding of file name */ + len = wcslen(name); + utf8_len = uv_utf16_to_utf8(name, len, NULL, 0); + if (!utf8_len) { SET_REQ_WIN32_ERROR(req, GetLastError()); - return; + goto fatal; } - req->ptr = (char*)malloc(size + 1); - if (!req->ptr) { - uv_fatal_error(ERROR_OUTOFMEMORY, "malloc"); + dent = malloc(sizeof(*dent) + utf8_len + 1); + if (dent == NULL) { + SET_REQ_UV_ERROR(req, UV_ENOMEM, ERROR_OUTOFMEMORY); + goto fatal; } - size = uv_utf16_to_utf8(buf, buf_char_len, (char*)req->ptr, size); - if (!size) { - free(buf); - free(req->ptr); - req->ptr = NULL; + /* Copy file name */ + utf8_len = uv_utf16_to_utf8(name, len, dent->d_name, utf8_len); + if (!utf8_len) { + free(dent); SET_REQ_WIN32_ERROR(req, GetLastError()); - return; + goto fatal; } - free(buf); + dent->d_name[utf8_len] = '\0'; - ((char*)req->ptr)[size] = '\0'; + /* Copy file type */ + if ((ent.dwFileAttributes & FILE_ATTRIBUTE_DIRECTORY) != 0) + dent->d_type = UV__DT_DIR; + else if ((ent.dwFileAttributes & FILE_ATTRIBUTE_REPARSE_POINT) != 0) + dent->d_type = UV__DT_LINK; + else + dent->d_type = UV__DT_FILE; + + dents[result++] = dent; + } while(FindNextFileW(dir, &ent)); + + FindClose(dir); + + if (dents != NULL) req->flags |= UV_FS_FREE_PTR; - } else { - req->ptr = NULL; - } + /* NOTE: nbufs will be used as index */ + req->nbufs = 0; + req->ptr = dents; SET_REQ_RESULT(req, result); + return; + +fatal: + /* Deallocate dents */ + for (result--; result >= 0; result--) + free(dents[result]); + free(dents); } @@ -941,7 +947,7 @@ INLINE static int fs__stat_handle(HANDLE handle, uv_stat_t* statbuf) { * * Currently it's based on whether the 'readonly' attribute is set, which * makes little sense because the semantics are so different: the 'read-only' - * flag is just a way for a user to protect against accidental deleteion, and + * flag is just a way for a user to protect against accidental deletion, and * serves no security purpose. Windows uses ACLs for that. * * Also people now use uv_fs_chmod() to take away the writable bit for good @@ -950,7 +956,7 @@ INLINE static int fs__stat_handle(HANDLE handle, uv_stat_t* statbuf) { * deleted. * * IOW it's all just a clusterfuck and we should think of something that - * makes slighty more sense. + * makes slightly more sense. * * And uv_fs_chmod should probably just fail on windows or be a total no-op. * There's nothing sensible it can do anyway. 
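[Editor's aside] With fs__readdir rewritten as fs__scandir above, results are no longer packed into one flat name buffer; instead the request carries an array of uv__dirent_t records that uv_fs_scandir_next() (added in the uv-common.c hunk earlier) drains one entry at a time, freeing as it goes and returning UV_EOF at the end. A minimal synchronous sketch of the new public API; the list_dir() helper is hypothetical:

#include <stdio.h>
#include <uv.h>

/* Hypothetical helper: list the entries of `path` synchronously. */
int list_dir(const char* path) {
  uv_fs_t req;
  uv_dirent_t ent;
  int err;

  /* A NULL callback makes the request run synchronously. */
  err = uv_fs_scandir(uv_default_loop(), &req, path, 0, NULL);
  if (err < 0) {
    uv_fs_req_cleanup(&req);
    return err;
  }

  /* Each call frees the previous entry; UV_EOF marks exhaustion. */
  while (uv_fs_scandir_next(&req, &ent) != UV_EOF)
    printf("%s (%s)\n", ent.name,
           ent.type == UV_DIRENT_DIR ? "dir" : "other");

  uv_fs_req_cleanup(&req);
  return 0;
}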
@@ -1218,6 +1224,25 @@ static void fs__sendfile(uv_fs_t* req) { } +static void fs__access(uv_fs_t* req) { + DWORD attr = GetFileAttributesW(req->pathw); + + if (attr == INVALID_FILE_ATTRIBUTES) { + SET_REQ_WIN32_ERROR(req, GetLastError()); + return; + } + + if ((req->flags & W_OK) && + ((attr & FILE_ATTRIBUTE_READONLY) || + (attr & FILE_ATTRIBUTE_DIRECTORY))) { + SET_REQ_WIN32_ERROR(req, UV_EPERM); + return; + } + + SET_REQ_RESULT(req, 0); +} + + static void fs__chmod(uv_fs_t* req) { int result = _wchmod(req->pathw, req->mode); SET_REQ_RESULT(req, result); @@ -1593,6 +1618,7 @@ static void uv__fs_work(struct uv__work* w) { XX(FTRUNCATE, ftruncate) XX(UTIME, utime) XX(FUTIME, futime) + XX(ACCESS, access) XX(CHMOD, chmod) XX(FCHMOD, fchmod) XX(FSYNC, fsync) @@ -1602,7 +1628,7 @@ static void uv__fs_work(struct uv__work* w) { XX(MKDIR, mkdir) XX(MKDTEMP, mkdtemp) XX(RENAME, rename) - XX(READDIR, readdir) + XX(SCANDIR, scandir) XX(LINK, link) XX(SYMLINK, symlink) XX(READLINK, readlink) @@ -1837,11 +1863,11 @@ int uv_fs_rmdir(uv_loop_t* loop, uv_fs_t* req, const char* path, uv_fs_cb cb) { } -int uv_fs_readdir(uv_loop_t* loop, uv_fs_t* req, const char* path, int flags, +int uv_fs_scandir(uv_loop_t* loop, uv_fs_t* req, const char* path, int flags, uv_fs_cb cb) { int err; - uv_fs_req_init(loop, req, UV_FS_READDIR, cb); + uv_fs_req_init(loop, req, UV_FS_SCANDIR, cb); err = fs__capture_path(loop, req, path, NULL, cb != NULL); if (err) { @@ -1854,7 +1880,7 @@ int uv_fs_readdir(uv_loop_t* loop, uv_fs_t* req, const char* path, int flags, QUEUE_FS_TP_JOB(loop, req); return 0; } else { - fs__readdir(req); + fs__scandir(req); return req->result; } } @@ -2100,6 +2126,31 @@ int uv_fs_sendfile(uv_loop_t* loop, uv_fs_t* req, uv_file fd_out, } +int uv_fs_access(uv_loop_t* loop, + uv_fs_t* req, + const char* path, + int flags, + uv_fs_cb cb) { + int err; + + uv_fs_req_init(loop, req, UV_FS_ACCESS, cb); + + err = fs__capture_path(loop, req, path, NULL, cb != NULL); + if (err) + return uv_translate_sys_error(err); + + req->flags = flags; + + if (cb) { + QUEUE_FS_TP_JOB(loop, req); + return 0; + } + + fs__access(req); + return req->result; +} + + int uv_fs_chmod(uv_loop_t* loop, uv_fs_t* req, const char* path, int mode, uv_fs_cb cb) { int err; diff --git a/deps/uv/src/win/getaddrinfo.c b/deps/uv/src/win/getaddrinfo.c index 086200a9ea3..53a6084efe5 100644 --- a/deps/uv/src/win/getaddrinfo.c +++ b/deps/uv/src/win/getaddrinfo.c @@ -26,6 +26,25 @@ #include "internal.h" #include "req-inl.h" +/* EAI_* constants. */ +#include <winsock2.h> + + +int uv__getaddrinfo_translate_error(int sys_err) { + switch (sys_err) { + case 0: return 0; + case WSATRY_AGAIN: return UV_EAI_AGAIN; + case WSAEINVAL: return UV_EAI_BADFLAGS; + case WSANO_RECOVERY: return UV_EAI_FAIL; + case WSAEAFNOSUPPORT: return UV_EAI_FAMILY; + case WSA_NOT_ENOUGH_MEMORY: return UV_EAI_MEMORY; + case WSAHOST_NOT_FOUND: return UV_EAI_NONAME; + case WSATYPE_NOT_FOUND: return UV_EAI_SERVICE; + case WSAESOCKTNOSUPPORT: return UV_EAI_SOCKTYPE; + default: return uv_translate_sys_error(sys_err); + } +} + /* * MinGW is missing this @@ -277,7 +296,7 @@ int uv_getaddrinfo(uv_loop_t* loop, req->alloc = (void*)alloc_ptr; /* convert node string to UTF16 into allocated memory and save pointer in */ - /* the reques. */ + /* the request. 
*/ if (node != NULL) { req->node = (WCHAR*)alloc_ptr; if (uv_utf8_to_utf16(node, diff --git a/deps/uv/src/win/getnameinfo.c b/deps/uv/src/win/getnameinfo.c index 45608dae85d..52cc7908892 100644 --- a/deps/uv/src/win/getnameinfo.c +++ b/deps/uv/src/win/getnameinfo.c @@ -46,13 +46,15 @@ static void uv__getnameinfo_work(struct uv__work* w) { int ret = 0; req = container_of(w, uv_getnameinfo_t, work_req); - ret = GetNameInfoW((struct sockaddr*)&req->storage, - sizeof(req->storage), - host, - ARRAY_SIZE(host), - service, - ARRAY_SIZE(service), - req->flags); + if (GetNameInfoW((struct sockaddr*)&req->storage, + sizeof(req->storage), + host, + ARRAY_SIZE(host), + service, + ARRAY_SIZE(service), + req->flags)) { + ret = WSAGetLastError(); + } req->retcode = uv__getaddrinfo_translate_error(ret); /* convert results to UTF-8 */ diff --git a/deps/uv/src/win/internal.h b/deps/uv/src/win/internal.h index 9eadb71235c..d87402b73a0 100644 --- a/deps/uv/src/win/internal.h +++ b/deps/uv/src/win/internal.h @@ -65,7 +65,6 @@ extern UV_THREAD_LOCAL int uv__crt_assert_enabled; /* Used by all handles. */ #define UV_HANDLE_CLOSED 0x00000002 #define UV_HANDLE_ENDGAME_QUEUED 0x00000004 -#define UV_HANDLE_ACTIVE 0x00000010 /* uv-common.h: #define UV__HANDLE_CLOSING 0x00000001 */ /* uv-common.h: #define UV__HANDLE_ACTIVE 0x00000040 */ @@ -100,6 +99,7 @@ extern UV_THREAD_LOCAL int uv__crt_assert_enabled; /* Only used by uv_pipe_t handles. */ #define UV_HANDLE_NON_OVERLAPPED_PIPE 0x01000000 #define UV_HANDLE_PIPESERVER 0x02000000 +#define UV_HANDLE_PIPE_READ_CANCELABLE 0x04000000 /* Only used by uv_tty_t handles. */ #define UV_HANDLE_TTY_READABLE 0x01000000 @@ -181,6 +181,9 @@ int uv_pipe_write(uv_loop_t* loop, uv_write_t* req, uv_pipe_t* handle, int uv_pipe_write2(uv_loop_t* loop, uv_write_t* req, uv_pipe_t* handle, const uv_buf_t bufs[], unsigned int nbufs, uv_stream_t* send_handle, uv_write_cb cb); +void uv__pipe_pause_read(uv_pipe_t* handle); +void uv__pipe_unpause_read(uv_pipe_t* handle); +void uv__pipe_stop_read(uv_pipe_t* handle); void uv_process_pipe_read_req(uv_loop_t* loop, uv_pipe_t* handle, uv_req_t* req); @@ -319,6 +322,7 @@ void uv__fs_poll_endgame(uv_loop_t* loop, uv_fs_poll_t* handle); */ void uv__util_init(); +uint64_t uv__hrtime(double scale); int uv_parent_pid(); __declspec(noreturn) void uv_fatal_error(const int errorno, const char* syscall); diff --git a/deps/uv/src/win/loop-watcher.c b/deps/uv/src/win/loop-watcher.c index eb49f7cbc55..20e4509f838 100644 --- a/deps/uv/src/win/loop-watcher.c +++ b/deps/uv/src/win/loop-watcher.c @@ -49,7 +49,7 @@ void uv_loop_watcher_endgame(uv_loop_t* loop, uv_handle_t* handle) { \ assert(handle->type == UV_##NAME); \ \ - if (handle->flags & UV_HANDLE_ACTIVE) \ + if (uv__is_active(handle)) \ return 0; \ \ if (cb == NULL) \ @@ -67,7 +67,6 @@ void uv_loop_watcher_endgame(uv_loop_t* loop, uv_handle_t* handle) { loop->name##_handles = handle; \ \ handle->name##_cb = cb; \ - handle->flags |= UV_HANDLE_ACTIVE; \ uv__handle_start(handle); \ \ return 0; \ @@ -79,7 +78,7 @@ void uv_loop_watcher_endgame(uv_loop_t* loop, uv_handle_t* handle) { \ assert(handle->type == UV_##NAME); \ \ - if (!(handle->flags & UV_HANDLE_ACTIVE)) \ + if (!uv__is_active(handle)) \ return 0; \ \ /* Update loop head if needed */ \ @@ -99,7 +98,6 @@ void uv_loop_watcher_endgame(uv_loop_t* loop, uv_handle_t* handle) { handle->name##_next->name##_prev = handle->name##_prev; \ } \ \ - handle->flags &= ~UV_HANDLE_ACTIVE; \ uv__handle_stop(handle); \ \ return 0; \ diff --git a/deps/uv/src/win/pipe.c 
b/deps/uv/src/win/pipe.c index 3bf2a220d0c..c78051db7c9 100644 --- a/deps/uv/src/win/pipe.c +++ b/deps/uv/src/win/pipe.c @@ -101,6 +101,7 @@ int uv_pipe_init(uv_loop_t* loop, uv_pipe_t* handle, int ipc) { handle->pending_ipc_info.queue_len = 0; handle->ipc = ipc; handle->non_overlapped_writes_tail = NULL; + handle->readfile_thread = NULL; uv_req_init(loop, (uv_req_t*) &handle->ipc_header_write_req); @@ -112,6 +113,12 @@ static void uv_pipe_connection_init(uv_pipe_t* handle) { uv_connection_init((uv_stream_t*) handle); handle->read_req.data = handle; handle->eof_timer = NULL; + assert(!(handle->flags & UV_HANDLE_PIPESERVER)); + if (pCancelSynchronousIo && + handle->flags & UV_HANDLE_NON_OVERLAPPED_PIPE) { + uv_mutex_init(&handle->readfile_mutex); + handle->flags |= UV_HANDLE_PIPE_READ_CANCELABLE; + } } @@ -321,6 +328,11 @@ void uv_pipe_endgame(uv_loop_t* loop, uv_pipe_t* handle) { FILE_PIPE_LOCAL_INFORMATION pipe_info; uv__ipc_queue_item_t* item; + if (handle->flags & UV_HANDLE_PIPE_READ_CANCELABLE) { + handle->flags &= ~UV_HANDLE_PIPE_READ_CANCELABLE; + uv_mutex_destroy(&handle->readfile_mutex); + } + if ((handle->flags & UV_HANDLE_CONNECTION) && handle->shutdown_req != NULL && handle->write_reqs_pending == 0) { @@ -658,12 +670,49 @@ void uv_pipe_connect(uv_connect_t* req, uv_pipe_t* handle, } +void uv__pipe_pause_read(uv_pipe_t* handle) { + if (handle->flags & UV_HANDLE_PIPE_READ_CANCELABLE) { + /* Pause the ReadFile task briefly, to work + around the Windows kernel bug that causes + any access to a NamedPipe to deadlock if + any process has called ReadFile */ + HANDLE h; + uv_mutex_lock(&handle->readfile_mutex); + h = handle->readfile_thread; + while (h) { + /* spinlock: we expect this to finish quickly, + or we are probably about to deadlock anyways + (in the kernel), so it doesn't matter */ + pCancelSynchronousIo(h); + SwitchToThread(); /* yield thread control briefly */ + h = handle->readfile_thread; + } + } +} + + +void uv__pipe_unpause_read(uv_pipe_t* handle) { + if (handle->flags & UV_HANDLE_PIPE_READ_CANCELABLE) { + uv_mutex_unlock(&handle->readfile_mutex); + } +} + + +void uv__pipe_stop_read(uv_pipe_t* handle) { + handle->flags &= ~UV_HANDLE_READING; + uv__pipe_pause_read((uv_pipe_t*)handle); + uv__pipe_unpause_read((uv_pipe_t*)handle); +} + + /* Cleans up uv_pipe_t (server or connection) and all resources associated */ /* with it. 
*/ void uv_pipe_cleanup(uv_loop_t* loop, uv_pipe_t* handle) { int i; HANDLE pipeHandle; + uv__pipe_stop_read(handle); + if (handle->name) { free(handle->name); handle->name = NULL; @@ -689,6 +738,7 @@ void uv_pipe_cleanup(uv_loop_t* loop, uv_pipe_t* handle) { CloseHandle(handle->handle); handle->handle = INVALID_HANDLE_VALUE; } + } @@ -867,19 +917,61 @@ static DWORD WINAPI uv_pipe_zero_readfile_thread_proc(void* parameter) { uv_read_t* req = (uv_read_t*) parameter; uv_pipe_t* handle = (uv_pipe_t*) req->data; uv_loop_t* loop = handle->loop; + HANDLE hThread = NULL; + DWORD err; + uv_mutex_t *m = &handle->readfile_mutex; assert(req != NULL); assert(req->type == UV_READ); assert(handle->type == UV_NAMED_PIPE); + if (handle->flags & UV_HANDLE_PIPE_READ_CANCELABLE) { + uv_mutex_lock(m); /* mutex controls *setting* of readfile_thread */ + if (DuplicateHandle(GetCurrentProcess(), GetCurrentThread(), + GetCurrentProcess(), &hThread, + 0, TRUE, DUPLICATE_SAME_ACCESS)) { + handle->readfile_thread = hThread; + } else { + hThread = NULL; + } + uv_mutex_unlock(m); + } +restart_readfile: result = ReadFile(handle->handle, &uv_zero_, 0, &bytes, NULL); + if (!result) { + err = GetLastError(); + if (err == ERROR_OPERATION_ABORTED && + handle->flags & UV_HANDLE_PIPE_READ_CANCELABLE) { + if (handle->flags & UV_HANDLE_READING) { + /* just a brief break to do something else */ + handle->readfile_thread = NULL; + /* resume after it is finished */ + uv_mutex_lock(m); + handle->readfile_thread = hThread; + uv_mutex_unlock(m); + goto restart_readfile; + } else { + result = 1; /* successfully stopped reading */ + } + } + } + if (hThread) { + assert(hThread == handle->readfile_thread); + /* mutex does not control clearing readfile_thread */ + handle->readfile_thread = NULL; + uv_mutex_lock(m); + /* only when we hold the mutex lock is it safe to + open or close the handle */ + CloseHandle(hThread); + uv_mutex_unlock(m); + } if (!result) { - SET_REQ_ERROR(req, GetLastError()); + SET_REQ_ERROR(req, err); } POST_COMPLETION_FOR_REQ(loop, req); @@ -1836,6 +1928,8 @@ int uv_pipe_getsockname(const uv_pipe_t* handle, char* buf, size_t* len) { return UV_EINVAL; } + uv__pipe_pause_read((uv_pipe_t*)handle); /* cast away const warning */ + nt_status = pNtQueryInformationFile(handle->handle, &io_status, &tmp_name_info, @@ -1846,7 +1940,8 @@ int uv_pipe_getsockname(const uv_pipe_t* handle, char* buf, size_t* len) { name_info = malloc(name_size); if (!name_info) { *len = 0; - return UV_ENOMEM; + err = UV_ENOMEM; + goto cleanup; } nt_status = pNtQueryInformationFile(handle->handle, @@ -1918,10 +2013,14 @@ int uv_pipe_getsockname(const uv_pipe_t* handle, char* buf, size_t* len) { buf[addrlen++] = '\0'; *len = addrlen; - return 0; + err = 0; + goto cleanup; error: free(name_info); + +cleanup: + uv__pipe_unpause_read((uv_pipe_t*)handle); /* cast away const warning */ return err; } diff --git a/deps/uv/src/win/poll.c b/deps/uv/src/win/poll.c index bf3739985a7..622cbabe399 100644 --- a/deps/uv/src/win/poll.c +++ b/deps/uv/src/win/poll.c @@ -79,7 +79,7 @@ static void uv__fast_poll_submit_poll_req(uv_loop_t* loop, uv_poll_t* handle) { handle->mask_events_2 = handle->events; } else if (handle->submitted_events_2 == 0) { req = &handle->poll_req_2; - afd_poll_info = &handle->afd_poll_info_2; + afd_poll_info = &handle->afd_poll_info_2.afd_poll_info_ptr[0]; handle->submitted_events_2 = handle->events; handle->mask_events_1 = handle->events; handle->mask_events_2 = 0; @@ -119,18 +119,19 @@ static void uv__fast_poll_submit_poll_req(uv_loop_t* 
loop, uv_poll_t* handle) { static int uv__fast_poll_cancel_poll_req(uv_loop_t* loop, uv_poll_t* handle) { - AFD_POLL_INFO afd_poll_info; - int result; + AFD_POLL_INFO* afd_poll_info; + DWORD result; - afd_poll_info.Exclusive = TRUE; - afd_poll_info.NumberOfHandles = 1; - afd_poll_info.Timeout.QuadPart = INT64_MAX; - afd_poll_info.Handles[0].Handle = (HANDLE) handle->socket; - afd_poll_info.Handles[0].Status = 0; - afd_poll_info.Handles[0].Events = AFD_POLL_ALL; + afd_poll_info = &handle->afd_poll_info_2.afd_poll_info_ptr[1]; + afd_poll_info->Exclusive = TRUE; + afd_poll_info->NumberOfHandles = 1; + afd_poll_info->Timeout.QuadPart = INT64_MAX; + afd_poll_info->Handles[0].Handle = (HANDLE) handle->socket; + afd_poll_info->Handles[0].Status = 0; + afd_poll_info->Handles[0].Events = AFD_POLL_ALL; result = uv_msafd_poll(handle->socket, - &afd_poll_info, + afd_poll_info, uv__get_overlapped_dummy()); if (result == SOCKET_ERROR) { @@ -154,7 +155,7 @@ static void uv__fast_poll_process_poll_req(uv_loop_t* loop, uv_poll_t* handle, handle->submitted_events_1 = 0; mask_events = handle->mask_events_1; } else if (req == &handle->poll_req_2) { - afd_poll_info = &handle->afd_poll_info_2; + afd_poll_info = &handle->afd_poll_info_2.afd_poll_info_ptr[0]; handle->submitted_events_2 = 0; mask_events = handle->mask_events_2; } else { @@ -530,7 +531,7 @@ int uv_poll_init_socket(uv_loop_t* loop, uv_poll_t* handle, SO_PROTOCOL_INFOW, (char*) &protocol_info, &len) != 0) { - return WSAGetLastError(); + return uv_translate_sys_error(WSAGetLastError()); } /* Get the peer socket that is needed to enable fast poll. If the returned */ @@ -546,7 +547,7 @@ int uv_poll_init_socket(uv_loop_t* loop, uv_poll_t* handle, handle->flags |= UV_HANDLE_POLL_SLOW; } - /* Intialize 2 poll reqs. */ + /* Initialize 2 poll reqs. */ handle->submitted_events_1 = 0; uv_req_init(loop, (uv_req_t*) &(handle->poll_req_1)); handle->poll_req_1.type = UV_POLL_REQ; @@ -557,6 +558,11 @@ int uv_poll_init_socket(uv_loop_t* loop, uv_poll_t* handle, handle->poll_req_2.type = UV_POLL_REQ; handle->poll_req_2.data = handle; + handle->afd_poll_info_2.afd_poll_info_ptr = malloc(sizeof(*handle->afd_poll_info_2.afd_poll_info_ptr) * 2); + if (handle->afd_poll_info_2.afd_poll_info_ptr == NULL) { + return UV_ENOMEM; + } + return 0; } @@ -618,5 +624,9 @@ void uv_poll_endgame(uv_loop_t* loop, uv_poll_t* handle) { assert(handle->submitted_events_1 == 0); assert(handle->submitted_events_2 == 0); + if (handle->afd_poll_info_2.afd_poll_info_ptr) { + free(handle->afd_poll_info_2.afd_poll_info_ptr); + handle->afd_poll_info_2.afd_poll_info_ptr = NULL; + } uv__handle_close(handle); } diff --git a/deps/uv/src/win/process.c b/deps/uv/src/win/process.c index 40023e55723..3a0106f82d6 100644 --- a/deps/uv/src/win/process.c +++ b/deps/uv/src/win/process.c @@ -171,8 +171,10 @@ static WCHAR* search_path_join_test(const WCHAR* dir, size_t cwd_len) { WCHAR *result, *result_pos; DWORD attrs; - - if (dir_len >= 1 && (dir[0] == L'/' || dir[0] == L'\\')) { + if (dir_len > 2 && dir[0] == L'\\' && dir[1] == L'\\') { + /* It's a UNC path so ignore cwd */ + cwd_len = 0; + } else if (dir_len >= 1 && (dir[0] == L'/' || dir[0] == L'\\')) { /* It's a full path without drive letter, use cwd's drive letter only */ cwd_len = 2; } else if (dir_len >= 2 && dir[1] == L':' && @@ -331,7 +333,11 @@ static WCHAR* path_search_walk_ext(const WCHAR *dir, * file that is not readable/executable; if the spawn fails it will not * continue searching. 
* - * TODO: correctly interpret UNC paths + * UNC path support: we are dealing with UNC paths in both the path and the + * filename. This is a deviation from what cmd.exe does (it does not let you + * start a program by specifying an UNC path on the command line) but this is + * really a pointless restriction. + * */ static WCHAR* search_path(const WCHAR *file, WCHAR *cwd, @@ -794,10 +800,8 @@ int make_program_env(char* env_block[], WCHAR** dst_ptr) { i++; } else { /* copy var from env_block */ - DWORD r; len = wcslen(*ptr_copy) + 1; - r = wmemcpy_s(ptr, (env_len - (ptr - dst)), *ptr_copy, len); - assert(!r); + wmemcpy(ptr, *ptr_copy, len); ptr_copy++; if (cmp == 0) i++; @@ -1059,7 +1063,7 @@ int uv_spawn(uv_loop_t* loop, if (options->flags & UV_PROCESS_DETACHED) { /* Note that we're not setting the CREATE_BREAKAWAY_FROM_JOB flag. That - * means that libuv might not let you create a fully deamonized process + * means that libuv might not let you create a fully daemonized process * when run under job control. However the type of job control that libuv * itself creates doesn't trickle down to subprocesses so they can still * daemonize. @@ -1137,7 +1141,7 @@ int uv_spawn(uv_loop_t* loop, assert(!err); /* Make the handle active. It will remain active until the exit callback */ - /* iis made or the handle is closed, whichever happens first. */ + /* is made or the handle is closed, whichever happens first. */ uv__handle_start(process); /* Cleanup, whether we succeeded or failed. */ @@ -1173,7 +1177,7 @@ static int uv__kill(HANDLE process_handle, int signum) { return 0; /* If the process already exited before TerminateProcess was called, */ - /* TerminateProcess will fail with ERROR_ACESS_DENIED. */ + /* TerminateProcess will fail with ERROR_ACCESS_DENIED. */ err = GetLastError(); if (err == ERROR_ACCESS_DENIED && GetExitCodeProcess(process_handle, &status) && diff --git a/deps/uv/src/win/stream.c b/deps/uv/src/win/stream.c index 6553ab11d72..057f72ecad8 100644 --- a/deps/uv/src/win/stream.c +++ b/deps/uv/src/win/stream.c @@ -106,7 +106,11 @@ int uv_read_stop(uv_stream_t* handle) { if (handle->type == UV_TTY) { err = uv_tty_read_stop((uv_tty_t*) handle); } else { - handle->flags &= ~UV_HANDLE_READING; + if (handle->type == UV_NAMED_PIPE) { + uv__pipe_stop_read((uv_pipe_t*) handle); + } else { + handle->flags &= ~UV_HANDLE_READING; + } DECREASE_ACTIVE_COUNT(handle->loop, handle); } diff --git a/deps/uv/src/win/tcp.c b/deps/uv/src/win/tcp.c index a213ad63e77..cff2929e4cc 100644 --- a/deps/uv/src/win/tcp.c +++ b/deps/uv/src/win/tcp.c @@ -196,6 +196,7 @@ void uv_tcp_endgame(uv_loop_t* loop, uv_tcp_t* handle) { if (!(handle->flags & UV_HANDLE_TCP_SOCKET_CLOSED)) { closesocket(handle->socket); + handle->socket = INVALID_SOCKET; handle->flags |= UV_HANDLE_TCP_SOCKET_CLOSED; } @@ -240,7 +241,7 @@ void uv_tcp_endgame(uv_loop_t* loop, uv_tcp_t* handle) { * allow binding to addresses that are in use by sockets in TIME_WAIT, it * effectively allows 'stealing' a port which is in use by another application. * - * SO_EXCLUSIVEADDRUSE is also not good here because it does cehck all sockets, + * SO_EXCLUSIVEADDRUSE is also not good here because it does check all sockets, * regardless of state, so we'd get an error even if the port is in use by a * socket in TIME_WAIT state. 
* @@ -589,7 +590,7 @@ int uv_tcp_listen(uv_tcp_t* handle, int backlog, uv_connection_cb cb) { } /* Initialize other unused requests too, because uv_tcp_endgame */ - /* doesn't know how how many requests were intialized, so it will */ + /* doesn't know how many requests were initialized, so it will */ /* try to clean up {uv_simultaneous_server_accepts} requests. */ for (i = simultaneous_accepts; i < uv_simultaneous_server_accepts; i++) { req = &handle->accept_reqs[i]; @@ -1341,7 +1342,7 @@ void uv_tcp_close(uv_loop_t* loop, uv_tcp_t* tcp) { if (uv_tcp_try_cancel_io(tcp) != 0) { /* When cancellation is not possible, there is another option: we can */ /* close the incoming sockets, which will also cancel the accept */ - /* operations. However this is not cool because we might inadvertedly */ + /* operations. However this is not cool because we might inadvertently */ /* close a socket that just accepted a new connection, which will */ /* cause the connection to be aborted. */ unsigned int i; @@ -1368,6 +1369,7 @@ void uv_tcp_close(uv_loop_t* loop, uv_tcp_t* tcp) { if (close_socket) { closesocket(tcp->socket); + tcp->socket = INVALID_SOCKET; tcp->flags |= UV_HANDLE_TCP_SOCKET_CLOSED; } diff --git a/deps/uv/src/win/thread.c b/deps/uv/src/win/thread.c index ccc5579fa7a..a697d7ae744 100644 --- a/deps/uv/src/win/thread.c +++ b/deps/uv/src/win/thread.c @@ -100,7 +100,7 @@ static NOINLINE void uv__once_inner(uv_once_t* guard, } else { /* We lost the race. Destroy the event we created and wait for the */ - /* existing one todv become signaled. */ + /* existing one to become signaled. */ CloseHandle(created_event); result = WaitForSingleObject(existing_event, INFINITE); assert(result == WAIT_OBJECT_0); @@ -117,6 +117,68 @@ void uv_once(uv_once_t* guard, void (*callback)(void)) { uv__once_inner(guard, callback); } +static UV_THREAD_LOCAL uv_thread_t uv__current_thread = NULL; + +struct thread_ctx { + void (*entry)(void* arg); + void* arg; + uv_thread_t self; +}; + + +static UINT __stdcall uv__thread_start(void* arg) +{ + struct thread_ctx *ctx_p; + struct thread_ctx ctx; + + ctx_p = arg; + ctx = *ctx_p; + free(ctx_p); + + uv__current_thread = ctx.self; + ctx.entry(ctx.arg); + + return 0; +} + + +int uv_thread_create(uv_thread_t *tid, void (*entry)(void *arg), void *arg) { + struct thread_ctx* ctx; + int err; + HANDLE thread; + + ctx = malloc(sizeof(*ctx)); + if (ctx == NULL) + return UV_ENOMEM; + + ctx->entry = entry; + ctx->arg = arg; + + /* Create the thread in a suspended state so we have a chance to pass + * its own creation handle to it */ + thread = (HANDLE) _beginthreadex(NULL, + 0, + uv__thread_start, + ctx, + CREATE_SUSPENDED, + NULL); + if (thread == NULL) { + err = errno; + free(ctx); + } else { + err = 0; + *tid = thread; + ctx->self = thread; + ResumeThread(thread); + } + + return err; +} + + +uv_thread_t uv_thread_self(void) { + return uv__current_thread; +} int uv_thread_join(uv_thread_t *tid) { if (WaitForSingleObject(*tid, INFINITE)) @@ -129,6 +191,11 @@ int uv_thread_join(uv_thread_t *tid) { } +int uv_thread_equal(const uv_thread_t* t1, const uv_thread_t* t2) { + return *t1 == *t2; +} + + int uv_mutex_init(uv_mutex_t* mutex) { InitializeCriticalSection(mutex); return 0; diff --git a/deps/uv/src/win/timer.c b/deps/uv/src/win/timer.c index c229d4c897f..0da541a2c86 100644 --- a/deps/uv/src/win/timer.c +++ b/deps/uv/src/win/timer.c @@ -28,39 +28,17 @@ #include "handle-inl.h" -void uv_update_time(uv_loop_t* loop) { - DWORD ticks; - ULARGE_INTEGER time; - - ticks = GetTickCount(); +/* The
number of milliseconds in one second. */ +#define UV__MILLISEC 1000 - time.QuadPart = loop->time; - /* GetTickCount() can conceivably wrap around, so when the current tick - * count is lower than the last tick count, we'll assume it has wrapped. - * uv_poll must make sure that the timer can never overflow more than - * once between two subsequent uv_update_time calls. - */ - time.LowPart = ticks; - if (ticks < loop->last_tick_count) - time.HighPart++; - - /* Remember the last tick count. */ - loop->last_tick_count = ticks; - - /* The GetTickCount() resolution isn't too good. Sometimes it'll happen - * that GetQueuedCompletionStatus() or GetQueuedCompletionStatusEx() has - * waited for a couple of ms but this is not reflected in the GetTickCount - * result yet. Therefore whenever GetQueuedCompletionStatus times out - * we'll add the number of ms that it has waited to the current loop time. - * When that happened the loop time might be a little ms farther than what - * we've just computed, and we shouldn't update the loop time. - */ - if (loop->time < time.QuadPart) - loop->time = time.QuadPart; +void uv_update_time(uv_loop_t* loop) { + uint64_t new_time = uv__hrtime(UV__MILLISEC); + if (new_time > loop->time) { + loop->time = new_time; + } } - void uv__time_forward(uv_loop_t* loop, uint64_t msecs) { loop->time += msecs; } @@ -119,14 +97,15 @@ int uv_timer_start(uv_timer_t* handle, uv_timer_cb timer_cb, uint64_t timeout, uv_loop_t* loop = handle->loop; uv_timer_t* old; - if (handle->flags & UV_HANDLE_ACTIVE) { - RB_REMOVE(uv_timer_tree_s, &loop->timers, handle); - } + if (timer_cb == NULL) + return UV_EINVAL; + + if (uv__is_active(handle)) + uv_timer_stop(handle); handle->timer_cb = timer_cb; handle->due = get_clamped_due_time(loop->time, timeout); handle->repeat = repeat; - handle->flags |= UV_HANDLE_ACTIVE; uv__handle_start(handle); /* start_id is the second index to be compared in uv__timer_cmp() */ @@ -142,12 +121,10 @@ int uv_timer_start(uv_timer_t* handle, uv_timer_cb timer_cb, uint64_t timeout, int uv_timer_stop(uv_timer_t* handle) { uv_loop_t* loop = handle->loop; - if (!(handle->flags & UV_HANDLE_ACTIVE)) + if (!uv__is_active(handle)) return 0; RB_REMOVE(uv_timer_tree_s, &loop->timers, handle); - - handle->flags &= ~UV_HANDLE_ACTIVE; uv__handle_stop(handle); return 0; @@ -155,28 +132,14 @@ int uv_timer_stop(uv_timer_t* handle) { int uv_timer_again(uv_timer_t* handle) { - uv_loop_t* loop = handle->loop; - /* If timer_cb is NULL that means that the timer was never started. */ if (!handle->timer_cb) { return UV_EINVAL; } - if (handle->flags & UV_HANDLE_ACTIVE) { - RB_REMOVE(uv_timer_tree_s, &loop->timers, handle); - handle->flags &= ~UV_HANDLE_ACTIVE; - uv__handle_stop(handle); - } - if (handle->repeat) { - handle->due = get_clamped_due_time(loop->time, handle->repeat); - - if (RB_INSERT(uv_timer_tree_s, &loop->timers, handle) != NULL) { - uv_fatal_error(ERROR_INVALID_DATA, "RB_INSERT"); - } - - handle->flags |= UV_HANDLE_ACTIVE; - uv__handle_start(handle); + uv_timer_stop(handle); + uv_timer_start(handle, handle->timer_cb, handle->repeat, handle->repeat); } return 0; @@ -206,16 +169,9 @@ DWORD uv__next_timeout(const uv_loop_t* loop) { timer = RB_MIN(uv_timer_tree_s, &((uv_loop_t*)loop)->timers); if (timer) { delta = timer->due - loop->time; - if (delta >= UINT_MAX >> 1) { - /* A timeout value of UINT_MAX means infinite, so that's no good. But - * more importantly, there's always the risk that GetTickCount wraps. 
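A small usage sketch (assumed, not from the patch) of the thread identity API added above: because uv_thread_create() resumes the worker only after storing its handle, the worker can safely compare uv_thread_self() against the id the creator received:

#include <assert.h>
#include "uv.h"

static void worker(void* arg) {
  uv_thread_t* created_id = arg;
  uv_thread_t self = uv_thread_self();
  /* The id seen inside the thread matches the one uv_thread_create()
   * handed back to the creator before resuming the thread. */
  assert(uv_thread_equal(&self, created_id));
}

static void demo(void) {
  uv_thread_t tid;
  assert(0 == uv_thread_create(&tid, worker, &tid));
  assert(0 == uv_thread_join(&tid));
}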
- * uv_update_time can detect this, but we must make sure that the - * tick counter never overflows twice between two subsequent - * uv_update_time calls. We do this by never sleeping more than half - * the time it takes to wrap the counter - which is huge overkill, - * but hey, it's not so bad to wake up every 25 days. - */ - return UINT_MAX >> 1; + if (delta >= UINT_MAX - 1) { + /* A timeout value of UINT_MAX means infinite, so that's no good. */ + return UINT_MAX - 1; } else if (delta < 0) { /* Negative timeout values are not allowed */ return 0; @@ -236,23 +192,9 @@ void uv_process_timers(uv_loop_t* loop) { for (timer = RB_MIN(uv_timer_tree_s, &loop->timers); timer != NULL && timer->due <= loop->time; timer = RB_MIN(uv_timer_tree_s, &loop->timers)) { - RB_REMOVE(uv_timer_tree_s, &loop->timers, timer); - - if (timer->repeat != 0) { - /* If it is a repeating timer, reschedule with repeat timeout. */ - timer->due = get_clamped_due_time(timer->due, timer->repeat); - if (timer->due < loop->time) { - timer->due = loop->time; - } - if (RB_INSERT(uv_timer_tree_s, &loop->timers, timer) != NULL) { - uv_fatal_error(ERROR_INVALID_DATA, "RB_INSERT"); - } - } else { - /* If non-repeating, mark the timer as inactive. */ - timer->flags &= ~UV_HANDLE_ACTIVE; - uv__handle_stop(timer); - } + uv_timer_stop(timer); + uv_timer_again(timer); timer->timer_cb((uv_timer_t*) timer); } } diff --git a/deps/uv/src/win/tty.c b/deps/uv/src/win/tty.c index 6b8297cbd9a..6d6709f79e1 100644 --- a/deps/uv/src/win/tty.c +++ b/deps/uv/src/win/tty.c @@ -1903,6 +1903,7 @@ void uv_tty_close(uv_tty_t* handle) { if (handle->flags & UV_HANDLE_READING) uv_tty_read_stop(handle); + handle->handle = INVALID_HANDLE_VALUE; handle->flags &= ~(UV_HANDLE_READABLE | UV_HANDLE_WRITABLE); uv__handle_closing(handle); diff --git a/deps/uv/src/win/udp.c b/deps/uv/src/win/udp.c index ef63dd73dfd..73b5bd5e467 100644 --- a/deps/uv/src/win/udp.c +++ b/deps/uv/src/win/udp.c @@ -83,7 +83,7 @@ static int uv_udp_set_socket(uv_loop_t* loop, uv_udp_t* handle, SOCKET socket, } if (pSetFileCompletionNotificationModes) { - /* All know windowses that support SetFileCompletionNotificationModes */ + /* All known Windows versions that support SetFileCompletionNotificationModes */ /* have a bug that makes it impossible to use this function in */ /* conjunction with datagram sockets. We can work around that but only */ /* if the user is using the default UDP driver (AFD) and has no other */ @@ -144,6 +144,7 @@ int uv_udp_init(uv_loop_t* loop, uv_udp_t* handle) { void uv_udp_close(uv_loop_t* loop, uv_udp_t* handle) { uv_udp_recv_stop(handle); closesocket(handle->socket); + handle->socket = INVALID_SOCKET; uv__handle_closing(handle); @@ -505,9 +506,13 @@ void uv_process_udp_recv_req(uv_loop_t* loop, uv_udp_t* handle, } else if (err == WSAEWOULDBLOCK) { /* Kernel buffer empty */ handle->recv_cb(handle, 0, &buf, NULL, 0); - } else if (err != WSAECONNRESET && err != WSAENETRESET) { - /* Serious error. WSAECONNRESET/WSANETRESET is ignored because this */ - /* just indicates that a previous sendto operation failed. */ + } else if (err == WSAECONNRESET || err == WSAENETRESET) { + /* WSAECONNRESET/WSAENETRESET are ignored because they just indicate + * that a previous sendto operation failed. + */ + handle->recv_cb(handle, 0, &buf, NULL, 0); + } else { + /* Any other error is reported back to the user.
*/ uv_udp_recv_stop(handle); handle->recv_cb(handle, uv_translate_sys_error(err), &buf, NULL, 0); } @@ -572,7 +577,9 @@ static int uv__udp_set_membership4(uv_udp_t* handle, memset(&mreq, 0, sizeof mreq); if (interface_addr) { - mreq.imr_interface.s_addr = inet_addr(interface_addr); + err = uv_inet_pton(AF_INET, interface_addr, &mreq.imr_interface.s_addr); + if (err) + return err; } else { mreq.imr_interface.s_addr = htonl(INADDR_ANY); } diff --git a/deps/uv/src/win/util.c b/deps/uv/src/win/util.c index a56fbea500d..43d843ff5c4 100644 --- a/deps/uv/src/win/util.c +++ b/deps/uv/src/win/util.c @@ -44,7 +44,7 @@ * of the console title is that it is smaller than 64K. However in practice * it is much smaller, and there is no way to figure out what the exact length * of the title is or can be, at least not on XP. To make it even more - * annoying, GetConsoleTitle failes when the buffer to be read into is bigger + * annoying, GetConsoleTitle fails when the buffer to be read into is bigger * than the actual maximum length. So we make a conservative guess here; * just don't put the novel you're writing in the title, unless the plot * survives truncation. @@ -52,20 +52,19 @@ #define MAX_TITLE_LENGTH 8192 /* The number of nanoseconds in one second. */ -#undef NANOSEC -#define NANOSEC 1000000000 +#define UV__NANOSEC 1000000000 /* Cached copy of the process title, plus a mutex guarding it. */ static char *process_title; static CRITICAL_SECTION process_title_lock; -/* Frequency (ticks per nanosecond) of the high-resolution clock. */ -static double hrtime_frequency_ = 0; +/* Interval (in seconds) of the high-resolution clock. */ +static double hrtime_interval_ = 0; /* - * One-time intialization code for functionality defined in util.c. + * One-time initialization code for functionality defined in util.c. */ void uv__util_init() { LARGE_INTEGER perf_frequency; @@ -73,11 +72,14 @@ void uv__util_init() { /* Initialize process title access mutex. */ InitializeCriticalSection(&process_title_lock); - /* Retrieve high-resolution timer frequency. */ - if (QueryPerformanceFrequency(&perf_frequency)) - hrtime_frequency_ = (double) perf_frequency.QuadPart / (double) NANOSEC; - else - hrtime_frequency_= 0; + /* Retrieve high-resolution timer frequency + * and precompute its reciprocal. + */ + if (QueryPerformanceFrequency(&perf_frequency)) { + hrtime_interval_ = 1.0 / perf_frequency.QuadPart; + } else { + hrtime_interval_ = 0; + } } @@ -204,7 +206,7 @@ int uv_cwd(char* buffer, size_t* size) { if (r == 0) { return uv_translate_sys_error(GetLastError()); } else if (r > (int) *size) { - *size = r; + *size = r - 1; return UV_ENOBUFS; } @@ -221,7 +223,7 @@ int uv_cwd(char* buffer, size_t* size) { return uv_translate_sys_error(GetLastError()); } - *size = r; + *size = r - 1; return 0; } @@ -463,26 +465,27 @@ int uv_get_process_title(char* buffer, size_t size) { uint64_t uv_hrtime(void) { - LARGE_INTEGER counter; - uv__once_init(); + return uv__hrtime(UV__NANOSEC); +} + +uint64_t uv__hrtime(double scale) { + LARGE_INTEGER counter; - /* If the performance frequency is zero, there's no support. */ - if (hrtime_frequency_ == 0) { - /* uv__set_sys_error(loop, ERROR_NOT_SUPPORTED); */ + /* If the performance interval is zero, there's no support.
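A worked example of the reciprocal trick above (a standalone sketch, not libuv code): assuming a 10 MHz performance counter, hrtime_interval_ is 1e-7 seconds per tick, so a counter reading of 5,000,000 ticks scaled by UV__NANOSEC (1e9) yields 500,000,000 ns, i.e. 0.5 s:

#include <stdint.h>

static uint64_t scale_counter(uint64_t counter, double interval, double scale) {
  /* Floating point math avoids 64-bit overflow for counter frequencies
   * of unknown magnitude, mirroring the computation in uv__hrtime(). */
  return (uint64_t) ((double) counter * interval * scale);
}

/* scale_counter(5000000, 1.0 / 10000000, 1e9) == 500000000 (nanoseconds) */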
*/ + if (hrtime_interval_ == 0) { return 0; } if (!QueryPerformanceCounter(&counter)) { - /* uv__set_sys_error(loop, GetLastError()); */ return 0; } /* Because we have no guarantee about the order of magnitude of the - * performance counter frequency, integer math could cause this computation + * performance counter interval, integer math could cause this computation * to overflow. Therefore we resort to floating point math. */ - return (uint64_t) ((double) counter.QuadPart / hrtime_frequency_); + return (uint64_t) ((double) counter.QuadPart * hrtime_interval_ * scale); } @@ -775,11 +778,76 @@ void uv_free_cpu_info(uv_cpu_info_t* cpu_infos, int count) { } +static int is_windows_version_or_greater(DWORD os_major, + DWORD os_minor, + WORD service_pack_major, + WORD service_pack_minor) { + OSVERSIONINFOEX osvi; + DWORDLONG condition_mask = 0; + int op = VER_GREATER_EQUAL; + + /* Initialize the OSVERSIONINFOEX structure. */ + ZeroMemory(&osvi, sizeof(OSVERSIONINFOEX)); + osvi.dwOSVersionInfoSize = sizeof(OSVERSIONINFOEX); + osvi.dwMajorVersion = os_major; + osvi.dwMinorVersion = os_minor; + osvi.wServicePackMajor = service_pack_major; + osvi.wServicePackMinor = service_pack_minor; + + /* Initialize the condition mask. */ + VER_SET_CONDITION(condition_mask, VER_MAJORVERSION, op); + VER_SET_CONDITION(condition_mask, VER_MINORVERSION, op); + VER_SET_CONDITION(condition_mask, VER_SERVICEPACKMAJOR, op); + VER_SET_CONDITION(condition_mask, VER_SERVICEPACKMINOR, op); + + /* Perform the test. */ + return (int) VerifyVersionInfo( + &osvi, + VER_MAJORVERSION | VER_MINORVERSION | + VER_SERVICEPACKMAJOR | VER_SERVICEPACKMINOR, + condition_mask); +} + + +static int address_prefix_match(int family, + struct sockaddr* address, + struct sockaddr* prefix_address, + int prefix_len) { + uint8_t* address_data; + uint8_t* prefix_address_data; + int i; + + assert(address->sa_family == family); + assert(prefix_address->sa_family == family); + + if (family == AF_INET6) { + address_data = (uint8_t*) &(((struct sockaddr_in6 *) address)->sin6_addr); + prefix_address_data = + (uint8_t*) &(((struct sockaddr_in6 *) prefix_address)->sin6_addr); + } else { + address_data = (uint8_t*) &(((struct sockaddr_in *) address)->sin_addr); + prefix_address_data = + (uint8_t*) &(((struct sockaddr_in *) prefix_address)->sin_addr); + } + + for (i = 0; i < prefix_len >> 3; i++) { + if (address_data[i] != prefix_address_data[i]) + return 0; + } + + if (prefix_len % 8) + return prefix_address_data[i] == + (address_data[i] & (0xff << (8 - prefix_len % 8))); + + return 1; +} + + int uv_interface_addresses(uv_interface_address_t** addresses_ptr, int* count_ptr) { IP_ADAPTER_ADDRESSES* win_address_buf; ULONG win_address_buf_size; - IP_ADAPTER_ADDRESSES* win_address; + IP_ADAPTER_ADDRESSES* adapter; uv_interface_address_t* uv_address_buf; char* name_buf; @@ -788,6 +856,23 @@ int uv_interface_addresses(uv_interface_address_t** addresses_ptr, int count; + int is_vista_or_greater; + ULONG flags; + + is_vista_or_greater = is_windows_version_or_greater(6, 0, 0, 0); + if (is_vista_or_greater) { + flags = GAA_FLAG_SKIP_ANYCAST | GAA_FLAG_SKIP_MULTICAST | + GAA_FLAG_SKIP_DNS_SERVER; + } else { + /* We need at least XP SP1. */ + if (!is_windows_version_or_greater(5, 1, 1, 0)) + return UV_ENOTSUP; + + flags = GAA_FLAG_SKIP_ANYCAST | GAA_FLAG_SKIP_MULTICAST | + GAA_FLAG_SKIP_DNS_SERVER | GAA_FLAG_INCLUDE_PREFIX; + } + + /* Fetch the size of the adapters reported by windows, and then get the */ /* list itself. 
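The code below uses the standard two-call pattern for GetAdaptersAddresses: query the required buffer size first, then allocate and fetch. A minimal standalone sketch of that pattern (error handling and the retry loop a production version needs are elided; fetch_adapters is a hypothetical helper):

#include <winsock2.h>
#include <iphlpapi.h>
#include <stdlib.h>

static IP_ADAPTER_ADDRESSES* fetch_adapters(ULONG flags) {
  IP_ADAPTER_ADDRESSES* buf;
  ULONG size = 0;

  /* The first call with a NULL buffer fails with ERROR_BUFFER_OVERFLOW
   * and stores the required size in `size`. */
  if (GetAdaptersAddresses(AF_UNSPEC, flags, NULL, NULL, &size) !=
      ERROR_BUFFER_OVERFLOW)
    return NULL;

  buf = malloc(size);
  if (buf != NULL &&
      GetAdaptersAddresses(AF_UNSPEC, flags, NULL, buf, &size) !=
          ERROR_SUCCESS) {
    free(buf);
    buf = NULL;
  }
  return buf;
}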
*/ win_address_buf_size = 0; @@ -800,7 +885,7 @@ int uv_interface_addresses(uv_interface_address_t** addresses_ptr, /* ERROR_BUFFER_OVERFLOW, and the required buffer size will be stored in */ /* win_address_buf_size. */ r = GetAdaptersAddresses(AF_UNSPEC, - GAA_FLAG_INCLUDE_PREFIX, + flags, NULL, win_address_buf, &win_address_buf_size); @@ -859,25 +944,23 @@ int uv_interface_addresses(uv_interface_address_t** addresses_ptr, count = 0; uv_address_buf_size = 0; - for (win_address = win_address_buf; - win_address != NULL; - win_address = win_address->Next) { - /* Use IP_ADAPTER_UNICAST_ADDRESS_XP to retain backwards compatibility */ - /* with Windows XP */ - IP_ADAPTER_UNICAST_ADDRESS_XP* unicast_address; + for (adapter = win_address_buf; + adapter != NULL; + adapter = adapter->Next) { + IP_ADAPTER_UNICAST_ADDRESS* unicast_address; int name_size; /* Interfaces that are not 'up' should not be reported. Also skip */ /* interfaces that have no associated unicast address, as to avoid */ /* allocating space for the name for this interface. */ - if (win_address->OperStatus != IfOperStatusUp || - win_address->FirstUnicastAddress == NULL) + if (adapter->OperStatus != IfOperStatusUp || + adapter->FirstUnicastAddress == NULL) continue; /* Compute the size of the interface name. */ name_size = WideCharToMultiByte(CP_UTF8, 0, - win_address->FriendlyName, + adapter->FriendlyName, -1, NULL, 0, @@ -891,8 +974,8 @@ int uv_interface_addresses(uv_interface_address_t** addresses_ptr, /* Count the number of addresses associated with this interface, and */ /* compute the size. */ - for (unicast_address = (IP_ADAPTER_UNICAST_ADDRESS_XP*) - win_address->FirstUnicastAddress; + for (unicast_address = (IP_ADAPTER_UNICAST_ADDRESS*) + adapter->FirstUnicastAddress; unicast_address != NULL; unicast_address = unicast_address->Next) { count++; @@ -913,16 +996,15 @@ int uv_interface_addresses(uv_interface_address_t** addresses_ptr, name_buf = (char*) (uv_address_buf + count); /* Fill out the output buffer. */ - for (win_address = win_address_buf; - win_address != NULL; - win_address = win_address->Next) { - IP_ADAPTER_UNICAST_ADDRESS_XP* unicast_address; - IP_ADAPTER_PREFIX* prefix; + for (adapter = win_address_buf; + adapter != NULL; + adapter = adapter->Next) { + IP_ADAPTER_UNICAST_ADDRESS* unicast_address; int name_size; size_t max_name_size; - if (win_address->OperStatus != IfOperStatusUp || - win_address->FirstUnicastAddress == NULL) + if (adapter->OperStatus != IfOperStatusUp || + adapter->FirstUnicastAddress == NULL) continue; /* Convert the interface name to UTF8. */ @@ -931,7 +1013,7 @@ int uv_interface_addresses(uv_interface_address_t** addresses_ptr, max_name_size = INT_MAX; name_size = WideCharToMultiByte(CP_UTF8, 0, - win_address->FriendlyName, + adapter->FriendlyName, -1, name_buf, (int) max_name_size, @@ -943,47 +1025,78 @@ int uv_interface_addresses(uv_interface_address_t** addresses_ptr, return uv_translate_sys_error(GetLastError()); } - prefix = win_address->FirstPrefix; - /* Add an uv_interface_address_t element for every unicast address. */ - /* Walk the prefix list in tandem with the address list. 
*/ - for (unicast_address = (IP_ADAPTER_UNICAST_ADDRESS_XP*) - win_address->FirstUnicastAddress; - unicast_address != NULL && prefix != NULL; - unicast_address = unicast_address->Next, prefix = prefix->Next) { + for (unicast_address = (IP_ADAPTER_UNICAST_ADDRESS*) + adapter->FirstUnicastAddress; + unicast_address != NULL; + unicast_address = unicast_address->Next) { struct sockaddr* sa; ULONG prefix_len; sa = unicast_address->Address.lpSockaddr; - prefix_len = prefix->PrefixLength; + + /* XP has no OnLinkPrefixLength field. */ + if (is_vista_or_greater) { + prefix_len = + ((IP_ADAPTER_UNICAST_ADDRESS_LH*) unicast_address)->OnLinkPrefixLength; + } else { + /* Prior to Windows Vista, FirstPrefix pointed to a list with a + * single prefix for each IP address assigned to the adapter. + * The order of FirstPrefix does not match the order of + * FirstUnicastAddress, so we need to find the corresponding prefix. + */ + IP_ADAPTER_PREFIX* prefix; + prefix_len = 0; + + for (prefix = adapter->FirstPrefix; prefix; prefix = prefix->Next) { + /* We want the longest matching prefix. */ + if (prefix->Address.lpSockaddr->sa_family != sa->sa_family || + prefix->PrefixLength <= prefix_len) + continue; + + if (address_prefix_match(sa->sa_family, sa, + prefix->Address.lpSockaddr, prefix->PrefixLength)) { + prefix_len = prefix->PrefixLength; + } + } + + /* If there is no matching prefix information, return a single-host + * subnet mask (e.g. 255.255.255.255 for IPv4). + */ + if (!prefix_len) + prefix_len = (sa->sa_family == AF_INET6) ? 128 : 32; + } memset(uv_address, 0, sizeof *uv_address); uv_address->name = name_buf; - if (win_address->PhysicalAddressLength == sizeof(uv_address->phys_addr)) { + if (adapter->PhysicalAddressLength == sizeof(uv_address->phys_addr)) { memcpy(uv_address->phys_addr, - win_address->PhysicalAddress, + adapter->PhysicalAddress, sizeof(uv_address->phys_addr)); } uv_address->is_internal = - (win_address->IfType == IF_TYPE_SOFTWARE_LOOPBACK); + (adapter->IfType == IF_TYPE_SOFTWARE_LOOPBACK); if (sa->sa_family == AF_INET6) { uv_address->address.address6 = *((struct sockaddr_in6 *) sa); uv_address->netmask.netmask6.sin6_family = AF_INET6; memset(uv_address->netmask.netmask6.sin6_addr.s6_addr, 0xff, prefix_len >> 3); - uv_address->netmask.netmask6.sin6_addr.s6_addr[prefix_len >> 3] = - 0xff << (8 - prefix_len % 8); + /* This check ensures that we don't write past the size of the data. */ + if (prefix_len % 8) { + uv_address->netmask.netmask6.sin6_addr.s6_addr[prefix_len >> 3] = + 0xff << (8 - prefix_len % 8); + } } else { uv_address->address.address4 = *((struct sockaddr_in *) sa); uv_address->netmask.netmask4.sin_family = AF_INET; - uv_address->netmask.netmask4.sin_addr.s_addr = - htonl(0xffffffff << (32 - prefix_len)); + uv_address->netmask.netmask4.sin_addr.s_addr = (prefix_len > 0) ?
+ htonl(0xffffffff << (32 - prefix_len)) : 0; } uv_address++; diff --git a/deps/uv/src/win/winapi.c b/deps/uv/src/win/winapi.c index 3e439ea5b2a..84ce73e3a02 100644 --- a/deps/uv/src/win/winapi.c +++ b/deps/uv/src/win/winapi.c @@ -51,6 +51,7 @@ sSleepConditionVariableCS pSleepConditionVariableCS; sSleepConditionVariableSRW pSleepConditionVariableSRW; sWakeAllConditionVariable pWakeAllConditionVariable; sWakeConditionVariable pWakeConditionVariable; +sCancelSynchronousIo pCancelSynchronousIo; void uv_winapi_init() { @@ -156,4 +157,7 @@ void uv_winapi_init() { pWakeConditionVariable = (sWakeConditionVariable) GetProcAddress(kernel32_module, "WakeConditionVariable"); + + pCancelSynchronousIo = (sCancelSynchronousIo) + GetProcAddress(kernel32_module, "CancelSynchronousIo"); } diff --git a/deps/uv/src/win/winapi.h b/deps/uv/src/win/winapi.h index 21d7fe4ac33..1bb0e9aae1e 100644 --- a/deps/uv/src/win/winapi.h +++ b/deps/uv/src/win/winapi.h @@ -4617,6 +4617,8 @@ typedef VOID (WINAPI* sWakeAllConditionVariable) typedef VOID (WINAPI* sWakeConditionVariable) (PCONDITION_VARIABLE ConditionVariable); +typedef BOOL (WINAPI* sCancelSynchronousIo) + (HANDLE hThread); /* Ntdll function pointers */ extern sRtlNtStatusToDosError pRtlNtStatusToDosError; @@ -4644,5 +4646,6 @@ extern sSleepConditionVariableCS pSleepConditionVariableCS; extern sSleepConditionVariableSRW pSleepConditionVariableSRW; extern sWakeAllConditionVariable pWakeAllConditionVariable; extern sWakeConditionVariable pWakeConditionVariable; +extern sCancelSynchronousIo pCancelSynchronousIo; #endif /* UV_WIN_WINAPI_H_ */ diff --git a/deps/uv/src/win/winsock.h b/deps/uv/src/win/winsock.h index 957d08ec198..7c007ab4934 100644 --- a/deps/uv/src/win/winsock.h +++ b/deps/uv/src/win/winsock.h @@ -166,6 +166,25 @@ typedef struct _IP_ADAPTER_UNICAST_ADDRESS_XP { ULONG LeaseLifetime; } IP_ADAPTER_UNICAST_ADDRESS_XP,*PIP_ADAPTER_UNICAST_ADDRESS_XP; +typedef struct _IP_ADAPTER_UNICAST_ADDRESS_LH { + union { + ULONGLONG Alignment; + struct { + ULONG Length; + DWORD Flags; + }; + }; + struct _IP_ADAPTER_UNICAST_ADDRESS_LH *Next; + SOCKET_ADDRESS Address; + IP_PREFIX_ORIGIN PrefixOrigin; + IP_SUFFIX_ORIGIN SuffixOrigin; + IP_DAD_STATE DadState; + ULONG ValidLifetime; + ULONG PreferredLifetime; + ULONG LeaseLifetime; + UINT8 OnLinkPrefixLength; +} IP_ADAPTER_UNICAST_ADDRESS_LH,*PIP_ADAPTER_UNICAST_ADDRESS_LH; + #endif #endif /* UV_WIN_WINSOCK_H_ */ diff --git a/deps/uv/test/echo-server.c b/deps/uv/test/echo-server.c index f0937ccaac3..f223981c261 100644 --- a/deps/uv/test/echo-server.c +++ b/deps/uv/test/echo-server.c @@ -51,20 +51,21 @@ static void after_write(uv_write_t* req, int status) { /* Free the read/write buffer and the request */ wr = (write_req_t*) req; free(wr->buf.base); + free(wr); - if (status == 0) { - free(wr); + if (status == 0) return; - } fprintf(stderr, "uv_write error: %s - %s\n", uv_err_name(status), uv_strerror(status)); +} - if (!uv_is_closing((uv_handle_t*) req->handle)) - uv_close((uv_handle_t*) req->handle, on_close); - free(wr); + +static void after_shutdown(uv_shutdown_t* req, int status) { + uv_close((uv_handle_t*) req->handle, on_close); + free(req); } @@ -73,16 +74,15 @@ static void after_read(uv_stream_t* handle, const uv_buf_t* buf) { int i; write_req_t *wr; + uv_shutdown_t* sreq; if (nread < 0) { /* Error or EOF */ ASSERT(nread == UV_EOF); - if (buf->base) { - free(buf->base); - } - - uv_close((uv_handle_t*) handle, on_close); + free(buf->base); + sreq = malloc(sizeof* sreq); + ASSERT(0 == uv_shutdown(sreq, handle, 
after_shutdown)); return; } diff --git a/deps/uv/test/run-benchmarks.c b/deps/uv/test/run-benchmarks.c index 61f062f99aa..8d4f549799e 100644 --- a/deps/uv/test/run-benchmarks.c +++ b/deps/uv/test/run-benchmarks.c @@ -33,7 +33,8 @@ static int maybe_run_test(int argc, char **argv); int main(int argc, char **argv) { - platform_init(argc, argv); + if (platform_init(argc, argv)) + return EXIT_FAILURE; switch (argc) { case 1: return run_tests(1); @@ -41,8 +42,10 @@ int main(int argc, char **argv) { case 3: return run_test_part(argv[1], argv[2]); default: LOGF("Too many arguments.\n"); - return 1; + return EXIT_FAILURE; } + + return EXIT_SUCCESS; } diff --git a/deps/uv/test/run-tests.c b/deps/uv/test/run-tests.c index d8f3cda540a..e92c93008e7 100644 --- a/deps/uv/test/run-tests.c +++ b/deps/uv/test/run-tests.c @@ -46,7 +46,8 @@ static int maybe_run_test(int argc, char **argv); int main(int argc, char **argv) { - platform_init(argc, argv); + if (platform_init(argc, argv)) + return EXIT_FAILURE; argv = uv_setup_args(argc, argv); @@ -56,8 +57,10 @@ int main(int argc, char **argv) { case 3: return run_test_part(argv[1], argv[2]); default: LOGF("Too many arguments.\n"); - return 1; + return EXIT_FAILURE; } + + return EXIT_SUCCESS; } diff --git a/deps/uv/test/runner-unix.c b/deps/uv/test/runner-unix.c index 9afcd1e4881..1f12c6f12d9 100644 --- a/deps/uv/test/runner-unix.c +++ b/deps/uv/test/runner-unix.c @@ -22,10 +22,11 @@ #include "runner-unix.h" #include "runner.h" +#include <limits.h> #include <stdint.h> /* uintptr_t */ #include <errno.h> -#include <unistd.h> /* usleep */ +#include <unistd.h> /* readlink, usleep */ #include <string.h> /* strdup */ #include <stdio.h> #include <stdlib.h> @@ -40,7 +41,7 @@ /* Do platform-specific initialization. */ -void platform_init(int argc, char **argv) { +int platform_init(int argc, char **argv) { const char* tap; tap = getenv("UV_TAP_OUTPUT"); @@ -49,8 +50,14 @@ void platform_init(int argc, char **argv) { /* Disable stdio output buffering. */ setvbuf(stdout, NULL, _IONBF, 0); setvbuf(stderr, NULL, _IONBF, 0); - strncpy(executable_path, argv[0], sizeof(executable_path) - 1); signal(SIGPIPE, SIG_IGN); + + if (realpath(argv[0], executable_path) == NULL) { + perror("realpath"); + return -1; + } + + return 0; } diff --git a/deps/uv/test/runner-win.c b/deps/uv/test/runner-win.c index 83d76783f6b..97ef7599eb8 100644 --- a/deps/uv/test/runner-win.c +++ b/deps/uv/test/runner-win.c @@ -43,7 +43,7 @@ /* Do platform-specific initialization. 
*/ -void platform_init(int argc, char **argv) { +int platform_init(int argc, char **argv) { const char* tap; tap = getenv("UV_TAP_OUTPUT"); @@ -66,6 +66,8 @@ void platform_init(int argc, char **argv) { setvbuf(stderr, NULL, _IONBF, 0); strcpy(executable_path, argv[0]); + + return 0; } diff --git a/deps/uv/test/runner.c b/deps/uv/test/runner.c index a934b24c6e5..e896d43b762 100644 --- a/deps/uv/test/runner.c +++ b/deps/uv/test/runner.c @@ -26,7 +26,7 @@ #include "task.h" #include "uv.h" -char executable_path[PATHMAX] = { '\0' }; +char executable_path[sizeof(executable_path)]; int tap_output = 0; diff --git a/deps/uv/test/runner.h b/deps/uv/test/runner.h index 97c7312da7b..78f3c880a98 100644 --- a/deps/uv/test/runner.h +++ b/deps/uv/test/runner.h @@ -22,6 +22,7 @@ #ifndef RUNNER_H_ #define RUNNER_H_ +#include <limits.h> /* PATH_MAX */ #include <stdio.h> /* FILE */ @@ -83,8 +84,11 @@ typedef struct { #define TEST_HELPER HELPER_ENTRY #define BENCHMARK_HELPER HELPER_ENTRY -#define PATHMAX 1024 -extern char executable_path[PATHMAX]; +#ifdef PATH_MAX +extern char executable_path[PATH_MAX]; +#else +extern char executable_path[4096]; +#endif /* * Include platform-dependent definitions @@ -130,7 +134,7 @@ void print_tests(FILE* stream); */ /* Do platform-specific initialization. */ -void platform_init(int argc, char** argv); +int platform_init(int argc, char** argv); /* Invoke "argv[0] test-name [test-part]". Store process info in *p. */ /* Make sure that all stdio output of the processes is buffered up. */ diff --git a/deps/uv/test/test-default-loop-close.c b/deps/uv/test/test-default-loop-close.c new file mode 100644 index 00000000000..fd11cfa8c12 --- /dev/null +++ b/deps/uv/test/test-default-loop-close.c @@ -0,0 +1,59 @@ +/* Copyright Joyent, Inc. and other Node contributors. All rights reserved. + * + * Permission is hereby granted, free of charge, to any person obtaining a copy + * of this software and associated documentation files (the "Software"), to + * deal in the Software without restriction, including without limitation the + * rights to use, copy, modify, merge, publish, distribute, sublicense, and/or + * sell copies of the Software, and to permit persons to whom the Software is + * furnished to do so, subject to the following conditions: + * + * The above copyright notice and this permission notice shall be included in + * all copies or substantial portions of the Software. + * + * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR + * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, + * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE + * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER + * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING + * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS + * IN THE SOFTWARE. 
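A standalone sketch (assumptions: POSIX realpath() and 4096 as the fallback size) of the executable-path sizing and resolution pattern the runner now uses:

#include <limits.h>
#include <stdio.h>
#include <stdlib.h>

#ifndef PATH_MAX
# define PATH_MAX 4096  /* same fallback the runner header picks */
#endif

static char executable_path[PATH_MAX];

static int resolve_executable(const char* argv0) {
  /* realpath() both canonicalizes argv0 and bounds the result to
   * PATH_MAX bytes, unlike the old unchecked strncpy(). */
  if (realpath(argv0, executable_path) == NULL) {
    perror("realpath");
    return -1;
  }
  return 0;
}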
+ */ + +#include "uv.h" +#include "task.h" + + +static int timer_cb_called; + + +static void timer_cb(uv_timer_t* timer) { + timer_cb_called++; + uv_close((uv_handle_t*) timer, NULL); +} + + +TEST_IMPL(default_loop_close) { + uv_loop_t* loop; + uv_timer_t timer_handle; + + loop = uv_default_loop(); + ASSERT(loop != NULL); + + ASSERT(0 == uv_timer_init(loop, &timer_handle)); + ASSERT(0 == uv_timer_start(&timer_handle, timer_cb, 1, 0)); + ASSERT(0 == uv_run(loop, UV_RUN_DEFAULT)); + ASSERT(1 == timer_cb_called); + ASSERT(0 == uv_loop_close(loop)); + + loop = uv_default_loop(); + ASSERT(loop != NULL); + + ASSERT(0 == uv_timer_init(loop, &timer_handle)); + ASSERT(0 == uv_timer_start(&timer_handle, timer_cb, 1, 0)); + ASSERT(0 == uv_run(loop, UV_RUN_DEFAULT)); + ASSERT(2 == timer_cb_called); + ASSERT(0 == uv_loop_close(loop)); + + MAKE_VALGRIND_HAPPY(); + return 0; +} diff --git a/deps/uv/test/test-fs.c b/deps/uv/test/test-fs.c index 4c6ccfab2cf..471860a76c4 100644 --- a/deps/uv/test/test-fs.c +++ b/deps/uv/test/test-fs.c @@ -67,7 +67,7 @@ static int unlink_cb_count; static int mkdir_cb_count; static int mkdtemp_cb_count; static int rmdir_cb_count; -static int readdir_cb_count; +static int scandir_cb_count; static int stat_cb_count; static int rename_cb_count; static int fsync_cb_count; @@ -75,6 +75,7 @@ static int fdatasync_cb_count; static int ftruncate_cb_count; static int sendfile_cb_count; static int fstat_cb_count; +static int access_cb_count; static int chmod_cb_count; static int fchmod_cb_count; static int chown_cb_count; @@ -97,7 +98,7 @@ static uv_fs_t mkdir_req; static uv_fs_t mkdtemp_req1; static uv_fs_t mkdtemp_req2; static uv_fs_t rmdir_req; -static uv_fs_t readdir_req; +static uv_fs_t scandir_req; static uv_fs_t stat_req; static uv_fs_t rename_req; static uv_fs_t fsync_req; @@ -108,7 +109,9 @@ static uv_fs_t utime_req; static uv_fs_t futime_req; static char buf[32]; +static char buf2[32]; static char test_buf[] = "test-buffer\n"; +static char test_buf2[] = "second-buffer\n"; static uv_buf_t iov; static void check_permission(const char* filename, unsigned int mode) { @@ -164,6 +167,14 @@ static void readlink_cb(uv_fs_t* req) { uv_fs_req_cleanup(req); } + +static void access_cb(uv_fs_t* req) { + ASSERT(req->fs_type == UV_FS_ACCESS); + access_cb_count++; + uv_fs_req_cleanup(req); +} + + static void fchmod_cb(uv_fs_t* req) { ASSERT(req->fs_type == UV_FS_FCHMOD); ASSERT(req->result == 0); @@ -416,14 +427,18 @@ static void rmdir_cb(uv_fs_t* req) { } -static void readdir_cb(uv_fs_t* req) { - ASSERT(req == &readdir_req); - ASSERT(req->fs_type == UV_FS_READDIR); +static void scandir_cb(uv_fs_t* req) { + uv_dirent_t dent; + ASSERT(req == &scandir_req); + ASSERT(req->fs_type == UV_FS_SCANDIR); ASSERT(req->result == 2); ASSERT(req->ptr); - ASSERT(memcmp(req->ptr, "file1\0file2\0", 12) == 0 - || memcmp(req->ptr, "file2\0file1\0", 12) == 0); - readdir_cb_count++; + + while (UV_EOF != uv_fs_scandir_next(req, &dent)) { + ASSERT(strcmp(dent.name, "file1") == 0 || strcmp(dent.name, "file2") == 0); + ASSERT(dent.type == UV_DIRENT_FILE || dent.type == UV_DIRENT_UNKNOWN); + } + scandir_cb_count++; ASSERT(req->path); ASSERT(memcmp(req->path, "test_dir\0", 9) == 0); uv_fs_req_cleanup(req); @@ -431,23 +446,26 @@ static void readdir_cb(uv_fs_t* req) { } -static void empty_readdir_cb(uv_fs_t* req) { - ASSERT(req == &readdir_req); - ASSERT(req->fs_type == UV_FS_READDIR); +static void empty_scandir_cb(uv_fs_t* req) { + uv_dirent_t dent; + + ASSERT(req == &scandir_req); + ASSERT(req->fs_type == 
UV_FS_SCANDIR); ASSERT(req->result == 0); ASSERT(req->ptr == NULL); + ASSERT(UV_EOF == uv_fs_scandir_next(req, &dent)); uv_fs_req_cleanup(req); - readdir_cb_count++; + scandir_cb_count++; } -static void file_readdir_cb(uv_fs_t* req) { - ASSERT(req == &readdir_req); - ASSERT(req->fs_type == UV_FS_READDIR); +static void file_scandir_cb(uv_fs_t* req) { + ASSERT(req == &scandir_req); + ASSERT(req->fs_type == UV_FS_SCANDIR); ASSERT(req->result == UV_ENOTDIR); ASSERT(req->ptr == NULL); uv_fs_req_cleanup(req); - readdir_cb_count++; + scandir_cb_count++; } @@ -802,6 +820,7 @@ TEST_IMPL(fs_file_write_null_buffer) { TEST_IMPL(fs_async_dir) { int r; + uv_dirent_t dent; /* Setup */ unlink("test_dir/file1"); @@ -833,21 +852,23 @@ TEST_IMPL(fs_async_dir) { ASSERT(r == 0); uv_fs_req_cleanup(&close_req); - r = uv_fs_readdir(loop, &readdir_req, "test_dir", 0, readdir_cb); + r = uv_fs_scandir(loop, &scandir_req, "test_dir", 0, scandir_cb); ASSERT(r == 0); uv_run(loop, UV_RUN_DEFAULT); - ASSERT(readdir_cb_count == 1); + ASSERT(scandir_cb_count == 1); - /* sync uv_fs_readdir */ - r = uv_fs_readdir(loop, &readdir_req, "test_dir", 0, NULL); + /* sync uv_fs_scandir */ + r = uv_fs_scandir(loop, &scandir_req, "test_dir", 0, NULL); ASSERT(r == 2); - ASSERT(readdir_req.result == 2); - ASSERT(readdir_req.ptr); - ASSERT(memcmp(readdir_req.ptr, "file1\0file2\0", 12) == 0 - || memcmp(readdir_req.ptr, "file2\0file1\0", 12) == 0); - uv_fs_req_cleanup(&readdir_req); - ASSERT(!readdir_req.ptr); + ASSERT(scandir_req.result == 2); + ASSERT(scandir_req.ptr); + while (UV_EOF != uv_fs_scandir_next(&scandir_req, &dent)) { + ASSERT(strcmp(dent.name, "file1") == 0 || strcmp(dent.name, "file2") == 0); + ASSERT(dent.type == UV_DIRENT_FILE || dent.type == UV_DIRENT_UNKNOWN); + } + uv_fs_req_cleanup(&scandir_req); + ASSERT(!scandir_req.ptr); r = uv_fs_stat(loop, &stat_req, "test_dir", stat_cb); ASSERT(r == 0); @@ -1109,6 +1130,70 @@ TEST_IMPL(fs_fstat) { } +TEST_IMPL(fs_access) { + int r; + uv_fs_t req; + uv_file file; + + /* Setup. */ + unlink("test_file"); + + loop = uv_default_loop(); + + /* File should not exist */ + r = uv_fs_access(loop, &req, "test_file", F_OK, NULL); + ASSERT(r < 0); + ASSERT(req.result < 0); + uv_fs_req_cleanup(&req); + + /* File should not exist */ + r = uv_fs_access(loop, &req, "test_file", F_OK, access_cb); + ASSERT(r == 0); + uv_run(loop, UV_RUN_DEFAULT); + ASSERT(access_cb_count == 1); + access_cb_count = 0; /* reset for the next test */ + + /* Create file */ + r = uv_fs_open(loop, &req, "test_file", O_RDWR | O_CREAT, + S_IWUSR | S_IRUSR, NULL); + ASSERT(r >= 0); + ASSERT(req.result >= 0); + file = req.result; + uv_fs_req_cleanup(&req); + + /* File should exist */ + r = uv_fs_access(loop, &req, "test_file", F_OK, NULL); + ASSERT(r == 0); + ASSERT(req.result == 0); + uv_fs_req_cleanup(&req); + + /* File should exist */ + r = uv_fs_access(loop, &req, "test_file", F_OK, access_cb); + ASSERT(r == 0); + uv_run(loop, UV_RUN_DEFAULT); + ASSERT(access_cb_count == 1); + access_cb_count = 0; /* reset for the next test */ + + /* Close file */ + r = uv_fs_close(loop, &req, file, NULL); + ASSERT(r == 0); + ASSERT(req.result == 0); + uv_fs_req_cleanup(&req); + + /* + * Run the loop just to check we don't make any extraneous uv_ref() + * calls. This should drop out immediately. + */ + uv_run(loop, UV_RUN_DEFAULT); + + /* Cleanup.
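Minimal synchronous usage of the scandir API these tests exercise (a sketch; error handling elided):

#include <stdio.h>
#include "uv.h"

static void list_dir(uv_loop_t* loop, const char* path) {
  uv_fs_t req;
  uv_dirent_t dent;

  /* With a NULL callback the request completes synchronously and the
   * return value is the number of entries (or an error code). */
  if (uv_fs_scandir(loop, &req, path, 0, NULL) >= 0) {
    while (uv_fs_scandir_next(&req, &dent) != UV_EOF)
      printf("%s\n", dent.name);
  }
  uv_fs_req_cleanup(&req);
}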
*/ + unlink("test_file"); + + MAKE_VALGRIND_HAPPY(); + return 0; +} + + TEST_IMPL(fs_chmod) { int r; uv_fs_t req; @@ -1521,6 +1606,7 @@ TEST_IMPL(fs_symlink_dir) { uv_fs_t req; int r; char* test_dir; + uv_dirent_t dent; /* set-up */ unlink("test_dir/file1"); @@ -1593,32 +1679,36 @@ TEST_IMPL(fs_symlink_dir) { ASSERT(r == 0); uv_fs_req_cleanup(&close_req); - r = uv_fs_readdir(loop, &readdir_req, "test_dir_symlink", 0, NULL); + r = uv_fs_scandir(loop, &scandir_req, "test_dir_symlink", 0, NULL); ASSERT(r == 2); - ASSERT(readdir_req.result == 2); - ASSERT(readdir_req.ptr); - ASSERT(memcmp(readdir_req.ptr, "file1\0file2\0", 12) == 0 - || memcmp(readdir_req.ptr, "file2\0file1\0", 12) == 0); - uv_fs_req_cleanup(&readdir_req); - ASSERT(!readdir_req.ptr); + ASSERT(scandir_req.result == 2); + ASSERT(scandir_req.ptr); + while (UV_EOF != uv_fs_scandir_next(&scandir_req, &dent)) { + ASSERT(strcmp(dent.name, "file1") == 0 || strcmp(dent.name, "file2") == 0); + ASSERT(dent.type == UV_DIRENT_FILE || dent.type == UV_DIRENT_UNKNOWN); + } + uv_fs_req_cleanup(&scandir_req); + ASSERT(!scandir_req.ptr); /* unlink will remove the directory symlink */ r = uv_fs_unlink(loop, &req, "test_dir_symlink", NULL); ASSERT(r == 0); uv_fs_req_cleanup(&req); - r = uv_fs_readdir(loop, &readdir_req, "test_dir_symlink", 0, NULL); + r = uv_fs_scandir(loop, &scandir_req, "test_dir_symlink", 0, NULL); ASSERT(r == UV_ENOENT); - uv_fs_req_cleanup(&readdir_req); + uv_fs_req_cleanup(&scandir_req); - r = uv_fs_readdir(loop, &readdir_req, "test_dir", 0, NULL); + r = uv_fs_scandir(loop, &scandir_req, "test_dir", 0, NULL); ASSERT(r == 2); - ASSERT(readdir_req.result == 2); - ASSERT(readdir_req.ptr); - ASSERT(memcmp(readdir_req.ptr, "file1\0file2\0", 12) == 0 - || memcmp(readdir_req.ptr, "file2\0file1\0", 12) == 0); - uv_fs_req_cleanup(&readdir_req); - ASSERT(!readdir_req.ptr); + ASSERT(scandir_req.result == 2); + ASSERT(scandir_req.ptr); + while (UV_EOF != uv_fs_scandir_next(&scandir_req, &dent)) { + ASSERT(strcmp(dent.name, "file1") == 0 || strcmp(dent.name, "file2") == 0); + ASSERT(dent.type == UV_DIRENT_FILE || dent.type == UV_DIRENT_UNKNOWN); + } + uv_fs_req_cleanup(&scandir_req); + ASSERT(!scandir_req.ptr); /* clean-up */ unlink("test_dir/file1"); @@ -1790,9 +1880,10 @@ TEST_IMPL(fs_stat_missing_path) { } -TEST_IMPL(fs_readdir_empty_dir) { +TEST_IMPL(fs_scandir_empty_dir) { const char* path; uv_fs_t req; + uv_dirent_t dent; int r; path = "./empty_dir/"; @@ -1801,18 +1892,22 @@ TEST_IMPL(fs_readdir_empty_dir) { uv_fs_mkdir(loop, &req, path, 0777, NULL); uv_fs_req_cleanup(&req); - r = uv_fs_readdir(loop, &req, path, 0, NULL); + /* Fill the req to ensure that required fields are cleaned up */ + memset(&req, 0xdb, sizeof(req)); + + r = uv_fs_scandir(loop, &req, path, 0, NULL); ASSERT(r == 0); ASSERT(req.result == 0); ASSERT(req.ptr == NULL); + ASSERT(UV_EOF == uv_fs_scandir_next(&req, &dent)); uv_fs_req_cleanup(&req); - r = uv_fs_readdir(loop, &readdir_req, path, 0, empty_readdir_cb); + r = uv_fs_scandir(loop, &scandir_req, path, 0, empty_scandir_cb); ASSERT(r == 0); - ASSERT(readdir_cb_count == 0); + ASSERT(scandir_cb_count == 0); uv_run(loop, UV_RUN_DEFAULT); - ASSERT(readdir_cb_count == 1); + ASSERT(scandir_cb_count == 1); uv_fs_rmdir(loop, &req, path, NULL); uv_fs_req_cleanup(&req); @@ -1822,23 +1917,23 @@ TEST_IMPL(fs_readdir_empty_dir) { } -TEST_IMPL(fs_readdir_file) { +TEST_IMPL(fs_scandir_file) { const char* path; int r; path = "test/fixtures/empty_file"; loop = uv_default_loop(); - r = uv_fs_readdir(loop, &readdir_req, path, 0, 
NULL); + r = uv_fs_scandir(loop, &scandir_req, path, 0, NULL); ASSERT(r == UV_ENOTDIR); - uv_fs_req_cleanup(&readdir_req); + uv_fs_req_cleanup(&scandir_req); - r = uv_fs_readdir(loop, &readdir_req, path, 0, file_readdir_cb); + r = uv_fs_scandir(loop, &scandir_req, path, 0, file_scandir_cb); ASSERT(r == 0); - ASSERT(readdir_cb_count == 0); + ASSERT(scandir_cb_count == 0); uv_run(loop, UV_RUN_DEFAULT); - ASSERT(readdir_cb_count == 1); + ASSERT(scandir_cb_count == 1); MAKE_VALGRIND_HAPPY(); return 0; @@ -2071,3 +2166,67 @@ TEST_IMPL(fs_read_file_eof) { MAKE_VALGRIND_HAPPY(); return 0; } + + +TEST_IMPL(fs_write_multiple_bufs) { + uv_buf_t iovs[2]; + int r; + + /* Setup. */ + unlink("test_file"); + + loop = uv_default_loop(); + + r = uv_fs_open(loop, &open_req1, "test_file", O_WRONLY | O_CREAT, + S_IWUSR | S_IRUSR, NULL); + ASSERT(r >= 0); + ASSERT(open_req1.result >= 0); + uv_fs_req_cleanup(&open_req1); + + iovs[0] = uv_buf_init(test_buf, sizeof(test_buf)); + iovs[1] = uv_buf_init(test_buf2, sizeof(test_buf2)); + r = uv_fs_write(loop, &write_req, open_req1.result, iovs, 2, 0, NULL); + ASSERT(r >= 0); + ASSERT(write_req.result >= 0); + uv_fs_req_cleanup(&write_req); + + r = uv_fs_close(loop, &close_req, open_req1.result, NULL); + ASSERT(r == 0); + ASSERT(close_req.result == 0); + uv_fs_req_cleanup(&close_req); + + r = uv_fs_open(loop, &open_req1, "test_file", O_RDONLY, 0, NULL); + ASSERT(r >= 0); + ASSERT(open_req1.result >= 0); + uv_fs_req_cleanup(&open_req1); + + memset(buf, 0, sizeof(buf)); + memset(buf2, 0, sizeof(buf2)); + /* Read the strings back to separate buffers. */ + iovs[0] = uv_buf_init(buf, sizeof(test_buf)); + iovs[1] = uv_buf_init(buf2, sizeof(test_buf2)); + r = uv_fs_read(loop, &read_req, open_req1.result, iovs, 2, 0, NULL); + ASSERT(r >= 0); + ASSERT(read_req.result >= 0); + ASSERT(strcmp(buf, test_buf) == 0); + ASSERT(strcmp(buf2, test_buf2) == 0); + uv_fs_req_cleanup(&read_req); + + iov = uv_buf_init(buf, sizeof(buf)); + r = uv_fs_read(loop, &read_req, open_req1.result, &iov, 1, + read_req.result, NULL); + ASSERT(r == 0); + ASSERT(read_req.result == 0); + uv_fs_req_cleanup(&read_req); + + r = uv_fs_close(loop, &close_req, open_req1.result, NULL); + ASSERT(r == 0); + ASSERT(close_req.result == 0); + uv_fs_req_cleanup(&close_req); + + /* Cleanup */ + unlink("test_file"); + + MAKE_VALGRIND_HAPPY(); + return 0; +} diff --git a/deps/uv/test/test-handle-fileno.c b/deps/uv/test/test-handle-fileno.c new file mode 100644 index 00000000000..df5e984ab74 --- /dev/null +++ b/deps/uv/test/test-handle-fileno.c @@ -0,0 +1,120 @@ +/* Copyright Joyent, Inc. and other Node contributors. All rights reserved. + * + * Permission is hereby granted, free of charge, to any person obtaining a copy + * of this software and associated documentation files (the "Software"), to + * deal in the Software without restriction, including without limitation the + * rights to use, copy, modify, merge, publish, distribute, sublicense, and/or + * sell copies of the Software, and to permit persons to whom the Software is + * furnished to do so, subject to the following conditions: + * + * The above copyright notice and this permission notice shall be included in + * all copies or substantial portions of the Software. + * + * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR + * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, + * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. 
IN NO EVENT SHALL THE + * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER + * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING + * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS + * IN THE SOFTWARE. + */ + +#include "uv.h" +#include "task.h" + + +static int get_tty_fd(void) { + /* Make sure we have an FD that refers to a tty */ +#ifdef _WIN32 + HANDLE handle; + handle = CreateFileA("conout$", + GENERIC_READ | GENERIC_WRITE, + FILE_SHARE_READ | FILE_SHARE_WRITE, + NULL, + OPEN_EXISTING, + FILE_ATTRIBUTE_NORMAL, + NULL); + if (handle == INVALID_HANDLE_VALUE) + return -1; + return _open_osfhandle((intptr_t) handle, 0); +#else /* unix */ + return open("/dev/tty", O_RDONLY, 0); +#endif +} + + +TEST_IMPL(handle_fileno) { + int r; + int tty_fd; + struct sockaddr_in addr; + uv_os_fd_t fd; + uv_tcp_t tcp; + uv_udp_t udp; + uv_pipe_t pipe; + uv_tty_t tty; + uv_idle_t idle; + uv_loop_t* loop; + + loop = uv_default_loop(); + ASSERT(0 == uv_ip4_addr("127.0.0.1", TEST_PORT, &addr)); + + r = uv_idle_init(loop, &idle); + ASSERT(r == 0); + r = uv_fileno((uv_handle_t*) &idle, &fd); + ASSERT(r == UV_EINVAL); + uv_close((uv_handle_t*) &idle, NULL); + + r = uv_tcp_init(loop, &tcp); + ASSERT(r == 0); + r = uv_fileno((uv_handle_t*) &tcp, &fd); + ASSERT(r == UV_EBADF); + r = uv_tcp_bind(&tcp, (const struct sockaddr*) &addr, 0); + ASSERT(r == 0); + r = uv_fileno((uv_handle_t*) &tcp, &fd); + ASSERT(r == 0); + uv_close((uv_handle_t*) &tcp, NULL); + r = uv_fileno((uv_handle_t*) &tcp, &fd); + ASSERT(r == UV_EBADF); + + r = uv_udp_init(loop, &udp); + ASSERT(r == 0); + r = uv_fileno((uv_handle_t*) &udp, &fd); + ASSERT(r == UV_EBADF); + r = uv_udp_bind(&udp, (const struct sockaddr*) &addr, 0); + ASSERT(r == 0); + r = uv_fileno((uv_handle_t*) &udp, &fd); + ASSERT(r == 0); + uv_close((uv_handle_t*) &udp, NULL); + r = uv_fileno((uv_handle_t*) &udp, &fd); + ASSERT(r == UV_EBADF); + + r = uv_pipe_init(loop, &pipe, 0); + ASSERT(r == 0); + r = uv_fileno((uv_handle_t*) &pipe, &fd); + ASSERT(r == UV_EBADF); + r = uv_pipe_bind(&pipe, TEST_PIPENAME); + ASSERT(r == 0); + r = uv_fileno((uv_handle_t*) &pipe, &fd); + ASSERT(r == 0); + uv_close((uv_handle_t*) &pipe, NULL); + r = uv_fileno((uv_handle_t*) &pipe, &fd); + ASSERT(r == UV_EBADF); + + tty_fd = get_tty_fd(); + if (tty_fd < 0) { + LOGF("Cannot open a TTY fd"); + } else { + r = uv_tty_init(loop, &tty, tty_fd, 0); + ASSERT(r == 0); + r = uv_fileno((uv_handle_t*) &tty, &fd); + ASSERT(r == 0); + uv_close((uv_handle_t*) &tty, NULL); + r = uv_fileno((uv_handle_t*) &tty, &fd); + ASSERT(r == UV_EBADF); + } + + uv_run(loop, UV_RUN_DEFAULT); + + MAKE_VALGRIND_HAPPY(); + return 0; +} diff --git a/deps/uv/test/test-list.h b/deps/uv/test/test-list.h index 6dbe22307eb..85ddac82ae1 100644 --- a/deps/uv/test/test-list.h +++ b/deps/uv/test/test-list.h @@ -29,6 +29,7 @@ TEST_DECLARE (loop_close) TEST_DECLARE (loop_stop) TEST_DECLARE (loop_update_time) TEST_DECLARE (loop_backend_timeout) +TEST_DECLARE (default_loop_close) TEST_DECLARE (barrier_1) TEST_DECLARE (barrier_2) TEST_DECLARE (barrier_3) @@ -55,6 +56,9 @@ TEST_DECLARE (tcp_ping_pong_v6) TEST_DECLARE (pipe_ping_pong) TEST_DECLARE (delayed_accept) TEST_DECLARE (multiple_listen) +#ifndef _WIN32 +TEST_DECLARE (tcp_write_after_connect) +#endif TEST_DECLARE (tcp_writealot) TEST_DECLARE (tcp_try_write) TEST_DECLARE (tcp_write_queue_order) @@ -89,6 +93,7 @@ TEST_DECLARE (udp_bind) TEST_DECLARE (udp_bind_reuseaddr) TEST_DECLARE (udp_send_and_recv) TEST_DECLARE 
(udp_send_immediate) +TEST_DECLARE (udp_send_unreachable) TEST_DECLARE (udp_multicast_join) TEST_DECLARE (udp_multicast_join6) TEST_DECLARE (udp_multicast_ttl) @@ -109,6 +114,7 @@ TEST_DECLARE (pipe_connect_bad_name) TEST_DECLARE (pipe_connect_to_file) TEST_DECLARE (pipe_getsockname) TEST_DECLARE (pipe_getsockname_abstract) +TEST_DECLARE (pipe_getsockname_blocking) TEST_DECLARE (pipe_sendmsg) TEST_DECLARE (pipe_server_close) TEST_DECLARE (connection_fail) @@ -128,6 +134,7 @@ TEST_DECLARE (timer_huge_timeout) TEST_DECLARE (timer_huge_repeat) TEST_DECLARE (timer_run_once) TEST_DECLARE (timer_from_check) +TEST_DECLARE (timer_null_callback) TEST_DECLARE (idle_starvation) TEST_DECLARE (loop_handles) TEST_DECLARE (get_loadavg) @@ -155,6 +162,9 @@ TEST_DECLARE (pipe_ref) TEST_DECLARE (pipe_ref2) TEST_DECLARE (pipe_ref3) TEST_DECLARE (pipe_ref4) +#ifndef _WIN32 +TEST_DECLARE (pipe_close_stdout_read_stdin) +#endif TEST_DECLARE (process_ref) TEST_DECLARE (has_ref) TEST_DECLARE (active) @@ -165,6 +175,7 @@ TEST_DECLARE (get_currentexe) TEST_DECLARE (process_title) TEST_DECLARE (cwd_and_chdir) TEST_DECLARE (get_memory) +TEST_DECLARE (handle_fileno) TEST_DECLARE (hrtime) TEST_DECLARE (getaddrinfo_fail) TEST_DECLARE (getaddrinfo_basic) @@ -175,6 +186,7 @@ TEST_DECLARE (getsockname_tcp) TEST_DECLARE (getsockname_udp) TEST_DECLARE (fail_always) TEST_DECLARE (pass_always) +TEST_DECLARE (socket_buffer_size) TEST_DECLARE (spawn_fails) TEST_DECLARE (spawn_exit_code) TEST_DECLARE (spawn_stdout) @@ -206,6 +218,7 @@ TEST_DECLARE (fs_async_dir) TEST_DECLARE (fs_async_sendfile) TEST_DECLARE (fs_mkdtemp) TEST_DECLARE (fs_fstat) +TEST_DECLARE (fs_access) TEST_DECLARE (fs_chmod) TEST_DECLARE (fs_chown) TEST_DECLARE (fs_link) @@ -229,10 +242,11 @@ TEST_DECLARE (fs_event_close_in_callback) TEST_DECLARE (fs_event_start_and_close) TEST_DECLARE (fs_event_error_reporting) TEST_DECLARE (fs_event_getpath) -TEST_DECLARE (fs_readdir_empty_dir) -TEST_DECLARE (fs_readdir_file) +TEST_DECLARE (fs_scandir_empty_dir) +TEST_DECLARE (fs_scandir_file) TEST_DECLARE (fs_open_dir) TEST_DECLARE (fs_rename_to_existing_file) +TEST_DECLARE (fs_write_multiple_bufs) TEST_DECLARE (threadpool_queue_work_simple) TEST_DECLARE (threadpool_queue_work_einval) TEST_DECLARE (threadpool_multiple_event_loops) @@ -245,6 +259,7 @@ TEST_DECLARE (thread_local_storage) TEST_DECLARE (thread_mutex) TEST_DECLARE (thread_rwlock) TEST_DECLARE (thread_create) +TEST_DECLARE (thread_equal) TEST_DECLARE (dlerror) TEST_DECLARE (poll_duplex) TEST_DECLARE (poll_unidirectional) @@ -275,6 +290,7 @@ TEST_DECLARE (closed_fd_events) #endif #ifdef __APPLE__ TEST_DECLARE (osx_select) +TEST_DECLARE (osx_select_many_fds) #endif HELPER_DECLARE (tcp4_echo_server) HELPER_DECLARE (tcp6_echo_server) @@ -296,6 +312,7 @@ TASK_LIST_START TEST_ENTRY (loop_stop) TEST_ENTRY (loop_update_time) TEST_ENTRY (loop_backend_timeout) + TEST_ENTRY (default_loop_close) TEST_ENTRY (barrier_1) TEST_ENTRY (barrier_2) TEST_ENTRY (barrier_3) @@ -312,6 +329,9 @@ TASK_LIST_START TEST_ENTRY (pipe_connect_to_file) TEST_ENTRY (pipe_server_close) +#ifndef _WIN32 + TEST_ENTRY (pipe_close_stdout_read_stdin) +#endif TEST_ENTRY (tty) TEST_ENTRY (stdio_over_pipes) TEST_ENTRY (ip6_pton) @@ -335,6 +355,10 @@ TASK_LIST_START TEST_ENTRY (delayed_accept) TEST_ENTRY (multiple_listen) +#ifndef _WIN32 + TEST_ENTRY (tcp_write_after_connect) +#endif + TEST_ENTRY (tcp_writealot) TEST_HELPER (tcp_writealot, tcp4_echo_server) @@ -381,6 +405,7 @@ TASK_LIST_START TEST_ENTRY (udp_bind_reuseaddr) TEST_ENTRY (udp_send_and_recv) 
TEST_ENTRY (udp_send_immediate) + TEST_ENTRY (udp_send_unreachable) TEST_ENTRY (udp_dgram_too_big) TEST_ENTRY (udp_dual_stack) TEST_ENTRY (udp_ipv6_only) @@ -402,6 +427,7 @@ TASK_LIST_START TEST_ENTRY (pipe_listen_without_bind) TEST_ENTRY (pipe_getsockname) TEST_ENTRY (pipe_getsockname_abstract) + TEST_ENTRY (pipe_getsockname_blocking) TEST_ENTRY (pipe_sendmsg) TEST_ENTRY (connection_fail) @@ -432,6 +458,7 @@ TASK_LIST_START TEST_ENTRY (timer_huge_repeat) TEST_ENTRY (timer_run_once) TEST_ENTRY (timer_from_check) + TEST_ENTRY (timer_null_callback) TEST_ENTRY (idle_starvation) @@ -487,6 +514,8 @@ TASK_LIST_START TEST_ENTRY (get_loadavg) + TEST_ENTRY (handle_fileno) + TEST_ENTRY (hrtime) TEST_ENTRY_CUSTOM (getaddrinfo_fail, 0, 0, 10000) @@ -504,6 +533,8 @@ TASK_LIST_START TEST_ENTRY (poll_unidirectional) TEST_ENTRY (poll_close) + TEST_ENTRY (socket_buffer_size) + TEST_ENTRY (spawn_fails) TEST_ENTRY (spawn_exit_code) TEST_ENTRY (spawn_stdout) @@ -549,6 +580,7 @@ TASK_LIST_START #ifdef __APPLE__ TEST_ENTRY (osx_select) + TEST_ENTRY (osx_select_many_fds) #endif TEST_ENTRY (fs_file_noent) @@ -561,6 +593,7 @@ TASK_LIST_START TEST_ENTRY (fs_async_sendfile) TEST_ENTRY (fs_mkdtemp) TEST_ENTRY (fs_fstat) + TEST_ENTRY (fs_access) TEST_ENTRY (fs_chmod) TEST_ENTRY (fs_chown) TEST_ENTRY (fs_utime) @@ -583,10 +616,11 @@ TASK_LIST_START TEST_ENTRY (fs_event_start_and_close) TEST_ENTRY (fs_event_error_reporting) TEST_ENTRY (fs_event_getpath) - TEST_ENTRY (fs_readdir_empty_dir) - TEST_ENTRY (fs_readdir_file) + TEST_ENTRY (fs_scandir_empty_dir) + TEST_ENTRY (fs_scandir_file) TEST_ENTRY (fs_open_dir) TEST_ENTRY (fs_rename_to_existing_file) + TEST_ENTRY (fs_write_multiple_bufs) TEST_ENTRY (threadpool_queue_work_simple) TEST_ENTRY (threadpool_queue_work_einval) TEST_ENTRY (threadpool_multiple_event_loops) @@ -599,6 +633,7 @@ TASK_LIST_START TEST_ENTRY (thread_mutex) TEST_ENTRY (thread_rwlock) TEST_ENTRY (thread_create) + TEST_ENTRY (thread_equal) TEST_ENTRY (dlerror) TEST_ENTRY (ip4_addr) TEST_ENTRY (ip6_addr_link_local) diff --git a/deps/uv/test/test-osx-select.c b/deps/uv/test/test-osx-select.c index e5e1bf8b469..49b1bb8229a 100644 --- a/deps/uv/test/test-osx-select.c +++ b/deps/uv/test/test-osx-select.c @@ -79,4 +79,54 @@ TEST_IMPL(osx_select) { return 0; } + +TEST_IMPL(osx_select_many_fds) { + int r; + int fd; + size_t i; + size_t len; + const char* str; + struct sockaddr_in addr; + uv_tty_t tty; + uv_tcp_t tcps[1500]; + + TEST_FILE_LIMIT(ARRAY_SIZE(tcps) + 2); + + r = uv_ip4_addr("127.0.0.1", 0, &addr); + ASSERT(r == 0); + + for (i = 0; i < ARRAY_SIZE(tcps); i++) { + r = uv_tcp_init(uv_default_loop(), &tcps[i]); + ASSERT(r == 0); + r = uv_tcp_bind(&tcps[i], (const struct sockaddr *) &addr, 0); + ASSERT(r == 0); + uv_unref((uv_handle_t*) &tcps[i]); + } + + fd = open("/dev/tty", O_RDONLY); + ASSERT(fd >= 0); + + r = uv_tty_init(uv_default_loop(), &tty, fd, 1); + ASSERT(r == 0); + + r = uv_read_start((uv_stream_t*) &tty, alloc_cb, read_cb); + ASSERT(r == 0); + + /* Emulate user-input */ + str = "got some input\n" + "with a couple of lines\n" + "feel pretty happy\n"; + for (i = 0, len = strlen(str); i < len; i++) { + r = ioctl(fd, TIOCSTI, str + i); + ASSERT(r == 0); + } + + uv_run(uv_default_loop(), UV_RUN_DEFAULT); + + ASSERT(read_count == 3); + + MAKE_VALGRIND_HAPPY(); + return 0; +} + #endif /* __APPLE__ */ diff --git a/deps/uv/test/test-pipe-close-stdout-read-stdin.c b/deps/uv/test/test-pipe-close-stdout-read-stdin.c new file mode 100644 index 00000000000..3064babf98c --- /dev/null +++ 
b/deps/uv/test/test-pipe-close-stdout-read-stdin.c @@ -0,0 +1,105 @@ +/* Copyright Joyent, Inc. and other Node contributors. All rights reserved. + * + * Permission is hereby granted, free of charge, to any person obtaining a copy + * of this software and associated documentation files (the "Software"), to + * deal in the Software without restriction, including without limitation the + * rights to use, copy, modify, merge, publish, distribute, sublicense, and/or + * sell copies of the Software, and to permit persons to whom the Software is + * furnished to do so, subject to the following conditions: + * + * The above copyright notice and this permission notice shall be included in + * all copies or substantial portions of the Software. + * + * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR + * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, + * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE + * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER + * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING + * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS + * IN THE SOFTWARE. + */ + +#ifndef _WIN32 + +#include <stdlib.h> +#include <unistd.h> +#include <sys/wait.h> +#include <sys/types.h> + +#include "uv.h" +#include "task.h" + +void alloc_buffer(uv_handle_t *handle, size_t suggested_size, uv_buf_t* buf) +{ + static char buffer[1024]; + + buf->base = buffer; + buf->len = sizeof(buffer); +} + +void read_stdin(uv_stream_t *stream, ssize_t nread, const uv_buf_t* buf) +{ + if (nread < 0) { + uv_close((uv_handle_t*)stream, NULL); + return; + } +} + +/* + * This test is a reproduction of joyent/libuv#1419. + */ +TEST_IMPL(pipe_close_stdout_read_stdin) { + int r = -1; + int pid; + int fd[2]; + int status; + + r = pipe(fd); + ASSERT(r == 0); + + if ((pid = fork()) == 0) { + /* + * Make the read side of the pipe our stdin. + * The write side will be closed by the parent process. + */ + close(fd[1]); + close(0); + r = dup(fd[0]); + ASSERT(r != -1); + + /* Create a stream that reads from the pipe. */ + uv_pipe_t stdin_pipe; + + r = uv_pipe_init(uv_default_loop(), (uv_pipe_t *)&stdin_pipe, 0); + ASSERT(r == 0); + + r = uv_pipe_open((uv_pipe_t *)&stdin_pipe, 0); + ASSERT(r == 0); + + r = uv_read_start((uv_stream_t *)&stdin_pipe, alloc_buffer, read_stdin); + ASSERT(r == 0); + + /* + * Because the other end of the pipe was closed, there should + * be no event left to process after one run of the event loop. + * Otherwise, it means that events were not processed correctly. + */ + ASSERT(uv_run(uv_default_loop(), UV_RUN_NOWAIT) == 0); + } else { + /* + * Close both ends of the pipe so that the child + * gets a POLLHUP event when it tries to read from + * the other end.
+ */ + close(fd[1]); + close(fd[0]); + + waitpid(pid, &status, 0); + ASSERT(WIFEXITED(status) && WEXITSTATUS(status) == 0); + } + + MAKE_VALGRIND_HAPPY(); + return 0; +} + +#endif /* ifndef _WIN32 */ diff --git a/deps/uv/test/test-pipe-getsockname.c b/deps/uv/test/test-pipe-getsockname.c index 396f72577db..d4010f3b507 100644 --- a/deps/uv/test/test-pipe-getsockname.c +++ b/deps/uv/test/test-pipe-getsockname.c @@ -32,6 +32,8 @@ #ifndef _WIN32 # include <unistd.h> /* close */ +#else +# include <fcntl.h> #endif @@ -120,3 +122,59 @@ TEST_IMPL(pipe_getsockname_abstract) { #endif } +TEST_IMPL(pipe_getsockname_blocking) { +#ifdef _WIN32 + uv_pipe_t reader; + HANDLE readh, writeh; + int readfd; + char buf1[1024], buf2[1024]; + size_t len1, len2; + int r; + + r = CreatePipe(&readh, &writeh, NULL, 65536); + ASSERT(r != 0); + + r = uv_pipe_init(uv_default_loop(), &reader, 0); + ASSERT(r == 0); + readfd = _open_osfhandle((intptr_t)readh, _O_RDONLY); + ASSERT(readfd != -1); + r = uv_pipe_open(&reader, readfd); + ASSERT(r == 0); + r = uv_read_start((uv_stream_t*)&reader, NULL, NULL); + ASSERT(r == 0); + Sleep(100); + r = uv_read_stop((uv_stream_t*)&reader); + ASSERT(r == 0); + + len1 = sizeof buf1; + r = uv_pipe_getsockname(&reader, buf1, &len1); + ASSERT(r == 0); + + r = uv_read_start((uv_stream_t*)&reader, NULL, NULL); + ASSERT(r == 0); + Sleep(100); + + len2 = sizeof buf2; + r = uv_pipe_getsockname(&reader, buf2, &len2); + ASSERT(r == 0); + + r = uv_read_stop((uv_stream_t*)&reader); + ASSERT(r == 0); + + ASSERT(len1 == len2); + ASSERT(memcmp(buf1, buf2, len1) == 0); + + close_cb_called = 0; + uv_close((uv_handle_t*)&reader, close_cb); + + uv_run(uv_default_loop(), UV_RUN_DEFAULT); + + ASSERT(close_cb_called == 1); + + _close(readfd); + CloseHandle(writeh); +#endif + + MAKE_VALGRIND_HAPPY(); + return 0; +} diff --git a/deps/uv/test/test-socket-buffer-size.c b/deps/uv/test/test-socket-buffer-size.c new file mode 100644 index 00000000000..72f8c2524c0 --- /dev/null +++ b/deps/uv/test/test-socket-buffer-size.c @@ -0,0 +1,77 @@ +/* Copyright Joyent, Inc. and other Node contributors. All rights reserved. + * + * Permission is hereby granted, free of charge, to any person obtaining a copy + * of this software and associated documentation files (the "Software"), to + * deal in the Software without restriction, including without limitation the + * rights to use, copy, modify, merge, publish, distribute, sublicense, and/or + * sell copies of the Software, and to permit persons to whom the Software is + * furnished to do so, subject to the following conditions: + * + * The above copyright notice and this permission notice shall be included in + * all copies or substantial portions of the Software. + * + * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR + * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, + * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE + * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER + * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING + * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS + * IN THE SOFTWARE.
+ */ + +#include "uv.h" +#include "task.h" + +#include <stdio.h> +#include <stdlib.h> +#include <string.h> + +static uv_udp_t udp; +static uv_tcp_t tcp; +static int close_cb_called; + + +static void close_cb(uv_handle_t* handle) { + close_cb_called++; +} + + +static void check_buffer_size(uv_handle_t* handle) { + int value; + + value = 0; + ASSERT(0 == uv_recv_buffer_size(handle, &value)); + ASSERT(value > 0); + + value = 10000; + ASSERT(0 == uv_recv_buffer_size(handle, &value)); + + value = 0; + ASSERT(0 == uv_recv_buffer_size(handle, &value)); + /* linux sets double the value */ + ASSERT(value == 10000 || value == 20000); +} + + +TEST_IMPL(socket_buffer_size) { + struct sockaddr_in addr; + + ASSERT(0 == uv_ip4_addr("127.0.0.1", TEST_PORT, &addr)); + + ASSERT(0 == uv_tcp_init(uv_default_loop(), &tcp)); + ASSERT(0 == uv_tcp_bind(&tcp, (struct sockaddr*) &addr, 0)); + check_buffer_size((uv_handle_t*) &tcp); + uv_close((uv_handle_t*) &tcp, close_cb); + + ASSERT(0 == uv_udp_init(uv_default_loop(), &udp)); + ASSERT(0 == uv_udp_bind(&udp, (struct sockaddr*) &addr, 0)); + check_buffer_size((uv_handle_t*) &udp); + uv_close((uv_handle_t*) &udp, close_cb); + + ASSERT(0 == uv_run(uv_default_loop(), UV_RUN_DEFAULT)); + + ASSERT(close_cb_called == 2); + + MAKE_VALGRIND_HAPPY(); + return 0; +} diff --git a/deps/uv/test/test-spawn.c b/deps/uv/test/test-spawn.c index 57f0862f944..11f43bdf134 100644 --- a/deps/uv/test/test-spawn.c +++ b/deps/uv/test/test-spawn.c @@ -1295,23 +1295,25 @@ TEST_IMPL(closed_fd_events) { TEST_IMPL(spawn_reads_child_path) { int r; int len; + char file[64]; char path[1024]; char *env[2] = {path, NULL}; /* Set up the process, but make sure that the file to run is relative and */ /* requires a lookup into PATH */ init_process_options("spawn_helper1", exit_cb); - options.file = "run-tests"; - args[0] = "run-tests"; /* Set up the PATH env variable */ for (len = strlen(exepath); exepath[len - 1] != '/' && exepath[len - 1] != '\\'; len--); + strcpy(file, exepath + len); exepath[len] = 0; strcpy(path, "PATH="); strcpy(path + 5, exepath); + options.file = file; + options.args[0] = file; options.env = env; r = uv_spawn(uv_default_loop(), &process, &options); diff --git a/deps/uv/test/test-tcp-write-after-connect.c b/deps/uv/test/test-tcp-write-after-connect.c new file mode 100644 index 00000000000..aa03228f134 --- /dev/null +++ b/deps/uv/test/test-tcp-write-after-connect.c @@ -0,0 +1,68 @@ +/* Copyright Joyent, Inc. and other Node contributors. All rights reserved. + * + * Permission is hereby granted, free of charge, to any person obtaining a copy + * of this software and associated documentation files (the "Software"), to + * deal in the Software without restriction, including without limitation the + * rights to use, copy, modify, merge, publish, distribute, sublicense, and/or + * sell copies of the Software, and to permit persons to whom the Software is + * furnished to do so, subject to the following conditions: + * + * The above copyright notice and this permission notice shall be included in + * all copies or substantial portions of the Software. + * + * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR + * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, + * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. 
IN NO EVENT SHALL THE + * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER + * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING + * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS + * IN THE SOFTWARE. + */ + +#ifndef _WIN32 + +#include "uv.h" +#include "task.h" + +uv_loop_t loop; +uv_tcp_t tcp_client; +uv_connect_t connection_request; +uv_write_t write_request; +uv_buf_t buf = { "HELLO", 4 }; + + +static void write_cb(uv_write_t *req, int status) { + ASSERT(status == UV_ECANCELED); + uv_close((uv_handle_t*) req->handle, NULL); +} + + +static void connect_cb(uv_connect_t *req, int status) { + ASSERT(status == UV_ECONNREFUSED); +} + + +TEST_IMPL(tcp_write_after_connect) { + struct sockaddr_in sa; + ASSERT(0 == uv_ip4_addr("127.0.0.1", TEST_PORT, &sa)); + ASSERT(0 == uv_loop_init(&loop)); + ASSERT(0 == uv_tcp_init(&loop, &tcp_client)); + + ASSERT(0 == uv_tcp_connect(&connection_request, + &tcp_client, + (const struct sockaddr *) + &sa, + connect_cb)); + + ASSERT(0 == uv_write(&write_request, + (uv_stream_t *)&tcp_client, + &buf, 1, + write_cb)); + + uv_run(&loop, UV_RUN_DEFAULT); + + MAKE_VALGRIND_HAPPY(); + return 0; +} + +#endif diff --git a/deps/uv/test/test-tcp-write-queue-order.c b/deps/uv/test/test-tcp-write-queue-order.c index 18e1f192b62..aa4d2acc24a 100644 --- a/deps/uv/test/test-tcp-write-queue-order.c +++ b/deps/uv/test/test-tcp-write-queue-order.c @@ -26,7 +26,7 @@ #include "uv.h" #include "task.h" -#define REQ_COUNT 100000 +#define REQ_COUNT 10000 static uv_timer_t timer; static uv_tcp_t server; diff --git a/deps/uv/test/test-thread-equal.c b/deps/uv/test/test-thread-equal.c new file mode 100644 index 00000000000..27c07ee2c7d --- /dev/null +++ b/deps/uv/test/test-thread-equal.c @@ -0,0 +1,45 @@ +/* Copyright Joyent, Inc. and other Node contributors. All rights reserved. + * + * Permission is hereby granted, free of charge, to any person obtaining a copy + * of this software and associated documentation files (the "Software"), to + * deal in the Software without restriction, including without limitation the + * rights to use, copy, modify, merge, publish, distribute, sublicense, and/or + * sell copies of the Software, and to permit persons to whom the Software is + * furnished to do so, subject to the following conditions: + * + * The above copyright notice and this permission notice shall be included in + * all copies or substantial portions of the Software. + * + * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR + * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, + * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE + * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER + * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING + * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS + * IN THE SOFTWARE. 
+ */ + +#include "uv.h" +#include "task.h" + +uv_thread_t main_thread_id; +uv_thread_t subthreads[2]; + +static void check_thread(void* arg) { + uv_thread_t *thread_id = arg; + uv_thread_t self_id = uv_thread_self(); + ASSERT(uv_thread_equal(&main_thread_id, &self_id) == 0); + *thread_id = uv_thread_self(); +} + +TEST_IMPL(thread_equal) { + uv_thread_t threads[2]; + main_thread_id = uv_thread_self(); + ASSERT(0 != uv_thread_equal(&main_thread_id, &main_thread_id)); + ASSERT(0 == uv_thread_create(threads + 0, check_thread, subthreads + 0)); + ASSERT(0 == uv_thread_create(threads + 1, check_thread, subthreads + 1)); + ASSERT(0 == uv_thread_join(threads + 0)); + ASSERT(0 == uv_thread_join(threads + 1)); + ASSERT(0 == uv_thread_equal(subthreads + 0, subthreads + 1)); + return 0; +} diff --git a/deps/uv/test/test-threadpool-cancel.c b/deps/uv/test/test-threadpool-cancel.c index d852e488b6c..f999cba818f 100644 --- a/deps/uv/test/test-threadpool-cancel.c +++ b/deps/uv/test/test-threadpool-cancel.c @@ -301,7 +301,7 @@ TEST_IMPL(threadpool_cancel_fs) { ASSERT(0 == uv_fs_mkdir(loop, reqs + n++, "/", 0, fs_cb)); ASSERT(0 == uv_fs_open(loop, reqs + n++, "/", 0, 0, fs_cb)); ASSERT(0 == uv_fs_read(loop, reqs + n++, 0, NULL, 0, 0, fs_cb)); - ASSERT(0 == uv_fs_readdir(loop, reqs + n++, "/", 0, fs_cb)); + ASSERT(0 == uv_fs_scandir(loop, reqs + n++, "/", 0, fs_cb)); ASSERT(0 == uv_fs_readlink(loop, reqs + n++, "/", fs_cb)); ASSERT(0 == uv_fs_rename(loop, reqs + n++, "/", "/", fs_cb)); ASSERT(0 == uv_fs_mkdir(loop, reqs + n++, "/", 0, fs_cb)); diff --git a/deps/uv/test/test-timer.c b/deps/uv/test/test-timer.c index f26dae577ff..aba050fd64c 100644 --- a/deps/uv/test/test-timer.c +++ b/deps/uv/test/test-timer.c @@ -290,3 +290,14 @@ TEST_IMPL(timer_run_once) { MAKE_VALGRIND_HAPPY(); return 0; } + + +TEST_IMPL(timer_null_callback) { + uv_timer_t handle; + + ASSERT(0 == uv_timer_init(uv_default_loop(), &handle)); + ASSERT(UV_EINVAL == uv_timer_start(&handle, NULL, 100, 100)); + + MAKE_VALGRIND_HAPPY(); + return 0; +} diff --git a/deps/uv/test/test-tty.c b/deps/uv/test/test-tty.c index fb69910732c..7e1ce266889 100644 --- a/deps/uv/test/test-tty.c +++ b/deps/uv/test/test-tty.c @@ -96,6 +96,13 @@ TEST_IMPL(tty) { printf("width=%d height=%d\n", width, height); + if (width == 0 && height == 0) { + /* Some environments such as containers or Jenkins behave like this + * sometimes */ + MAKE_VALGRIND_HAPPY(); + return TEST_SKIP; + } + /* * Is it a safe assumption that most people have terminals larger than * 10x10? 
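The test-threadpool-cancel.c hunk above tracks libuv's rename of uv_fs_readdir to uv_fs_scandir, also visible in the fs_scandir_* TASK_LIST entries earlier in this patch. For context, here is a minimal sketch of the renamed API; it is not part of the patch and assumes the libuv 1.x interface as it eventually shipped: uv_fs_scandir() with a NULL callback runs synchronously, and results are drained one uv_dirent_t at a time with uv_fs_scandir_next() until it returns UV_EOF.

#include <stdio.h>
#include "uv.h"

int main(void) {
  uv_fs_t req;
  uv_dirent_t ent;
  int r;

  /* NULL callback == synchronous; r is the entry count or a negative error. */
  r = uv_fs_scandir(uv_default_loop(), &req, ".", 0, NULL);
  if (r < 0) {
    fprintf(stderr, "scandir: %s\n", uv_strerror(r));
    return 1;
  }

  /* Drain one entry at a time; UV_EOF signals the end of the listing. */
  while (uv_fs_scandir_next(&req, &ent) != UV_EOF)
    printf("%s\n", ent.name);

  uv_fs_req_cleanup(&req);  /* Frees the results held by the request. */
  return 0;
}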
diff --git a/deps/uv/test/test-udp-ipv6.c b/deps/uv/test/test-udp-ipv6.c index 0e2fe2dcf2d..0ca9f4dcff6 100644 --- a/deps/uv/test/test-udp-ipv6.c +++ b/deps/uv/test/test-udp-ipv6.c @@ -147,12 +147,19 @@ static void do_test(uv_udp_recv_cb recv_cb, int bind_flags) { TEST_IMPL(udp_dual_stack) { +#if defined(__DragonFly__) || \ + defined(__FreeBSD__) || \ + defined(__OpenBSD__) || \ + defined(__NetBSD__) + RETURN_SKIP("dual stack not enabled by default in this OS."); +#else do_test(ipv6_recv_ok, 0); ASSERT(recv_cb_called == 1); ASSERT(send_cb_called == 1); return 0; +#endif } diff --git a/deps/uv/test/test-udp-multicast-interface6.c b/deps/uv/test/test-udp-multicast-interface6.c index e58d7711974..e54e738b0be 100644 --- a/deps/uv/test/test-udp-multicast-interface6.c +++ b/deps/uv/test/test-udp-multicast-interface6.c @@ -69,7 +69,7 @@ TEST_IMPL(udp_multicast_interface6) { r = uv_udp_bind(&server, (const struct sockaddr*)&baddr, 0); ASSERT(r == 0); -#if defined(__APPLE__) +#if defined(__APPLE__) || defined(__FreeBSD__) r = uv_udp_set_multicast_interface(&server, "::1%lo0"); #else r = uv_udp_set_multicast_interface(&server, NULL); diff --git a/deps/uv/test/test-udp-send-unreachable.c b/deps/uv/test/test-udp-send-unreachable.c new file mode 100644 index 00000000000..c6500320d78 --- /dev/null +++ b/deps/uv/test/test-udp-send-unreachable.c @@ -0,0 +1,150 @@ +/* Copyright Joyent, Inc. and other Node contributors. All rights reserved. + * + * Permission is hereby granted, free of charge, to any person obtaining a copy + * of this software and associated documentation files (the "Software"), to + * deal in the Software without restriction, including without limitation the + * rights to use, copy, modify, merge, publish, distribute, sublicense, and/or + * sell copies of the Software, and to permit persons to whom the Software is + * furnished to do so, subject to the following conditions: + * + * The above copyright notice and this permission notice shall be included in + * all copies or substantial portions of the Software. + * + * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR + * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, + * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE + * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER + * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING + * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS + * IN THE SOFTWARE. 
+ */ + +#include "uv.h" +#include "task.h" + +#include <stdio.h> +#include <stdlib.h> +#include <string.h> + +#define CHECK_HANDLE(handle) \ + ASSERT((uv_udp_t*)(handle) == &client) + +static uv_udp_t client; +static uv_timer_t timer; + +static int send_cb_called; +static int recv_cb_called; +static int close_cb_called; +static int alloc_cb_called; +static int timer_cb_called; + + +static void alloc_cb(uv_handle_t* handle, + size_t suggested_size, + uv_buf_t* buf) { + static char slab[65536]; + CHECK_HANDLE(handle); + ASSERT(suggested_size <= sizeof(slab)); + buf->base = slab; + buf->len = sizeof(slab); + alloc_cb_called++; +} + + +static void close_cb(uv_handle_t* handle) { + ASSERT(1 == uv_is_closing(handle)); + close_cb_called++; +} + + +static void send_cb(uv_udp_send_t* req, int status) { + ASSERT(req != NULL); + ASSERT(status == 0); + CHECK_HANDLE(req->handle); + send_cb_called++; +} + + +static void recv_cb(uv_udp_t* handle, + ssize_t nread, + const uv_buf_t* rcvbuf, + const struct sockaddr* addr, + unsigned flags) { + CHECK_HANDLE(handle); + recv_cb_called++; + + if (nread < 0) { + ASSERT(0 && "unexpected error"); + } else if (nread == 0) { + /* Returning unused buffer */ + ASSERT(addr == NULL); + } else { + ASSERT(addr != NULL); + } +} + + +static void timer_cb(uv_timer_t* h) { + ASSERT(h == &timer); + timer_cb_called++; + uv_close((uv_handle_t*) &client, close_cb); + uv_close((uv_handle_t*) h, close_cb); +} + + +TEST_IMPL(udp_send_unreachable) { + struct sockaddr_in addr; + struct sockaddr_in addr2; + uv_udp_send_t req1, req2; + uv_buf_t buf; + int r; + + ASSERT(0 == uv_ip4_addr("127.0.0.1", TEST_PORT, &addr)); + ASSERT(0 == uv_ip4_addr("127.0.0.1", TEST_PORT_2, &addr2)); + + r = uv_timer_init( uv_default_loop(), &timer ); + ASSERT(r == 0); + + r = uv_timer_start( &timer, timer_cb, 1000, 0 ); + ASSERT(r == 0); + + r = uv_udp_init(uv_default_loop(), &client); + ASSERT(r == 0); + + r = uv_udp_bind(&client, (const struct sockaddr*) &addr2, 0); + ASSERT(r == 0); + + r = uv_udp_recv_start(&client, alloc_cb, recv_cb); + ASSERT(r == 0); + + /* client sends "PING", then "PANG" */ + buf = uv_buf_init("PING", 4); + + r = uv_udp_send(&req1, + &client, + &buf, + 1, + (const struct sockaddr*) &addr, + send_cb); + ASSERT(r == 0); + + buf = uv_buf_init("PANG", 4); + + r = uv_udp_send(&req2, + &client, + &buf, + 1, + (const struct sockaddr*) &addr, + send_cb); + ASSERT(r == 0); + + uv_run(uv_default_loop(), UV_RUN_DEFAULT); + + ASSERT(send_cb_called == 2); + ASSERT(recv_cb_called == alloc_cb_called); + ASSERT(timer_cb_called == 1); + ASSERT(close_cb_called == 2); + + MAKE_VALGRIND_HAPPY(); + return 0; +} diff --git a/deps/uv/test/test-watcher-cross-stop.c b/deps/uv/test/test-watcher-cross-stop.c index bf765cb00cd..910ed0fb613 100644 --- a/deps/uv/test/test-watcher-cross-stop.c +++ b/deps/uv/test/test-watcher-cross-stop.c @@ -92,10 +92,10 @@ TEST_IMPL(watcher_cross_stop) { uv_close((uv_handle_t*) &sockets[i], close_cb); ASSERT(recv_cb_called > 0); - ASSERT(ARRAY_SIZE(sockets) == send_cb_called); uv_run(loop, UV_RUN_DEFAULT); + ASSERT(ARRAY_SIZE(sockets) == send_cb_called); ASSERT(ARRAY_SIZE(sockets) == close_cb_called); MAKE_VALGRIND_HAPPY(); diff --git a/deps/uv/uv.gyp b/deps/uv/uv.gyp index 5b4d69a9241..a5ba14c315a 100644 --- a/deps/uv/uv.gyp +++ b/deps/uv/uv.gyp @@ -1,14 +1,4 @@ { - 'variables': { - 'uv_use_dtrace%': 'false', - # uv_parent_path is the relative path to libuv in the parent project - # this is only relevant when dtrace is enabled and libuv is a child project - # as it's necessary 
to correctly locate the object files for post - # processing. - # XXX gyp is quite sensitive about paths with double / they don't normalize - 'uv_parent_path': '/', - }, - 'target_defaults': { 'conditions': [ ['OS != "win"', { @@ -26,6 +16,30 @@ ], }], ], + 'xcode_settings': { + 'conditions': [ + [ 'clang==1', { + 'WARNING_CFLAGS': [ + '-Wall', + '-Wextra', + '-Wno-unused-parameter', + '-Wno-dollar-in-identifier-extension' + ]}, { + 'WARNING_CFLAGS': [ + '-Wall', + '-Wextra', + '-Wno-unused-parameter' + ]} + ] + ], + 'OTHER_LDFLAGS': [ + ], + 'OTHER_CFLAGS': [ + '-g', + '--std=gnu89', + '-pedantic' + ], + } }, 'targets': [ @@ -53,9 +67,6 @@ }], ], }, - 'defines': [ - 'HAVE_CONFIG_H' - ], 'sources': [ 'common.gypi', 'include/uv.h', @@ -178,8 +189,8 @@ ['uv_library=="shared_library" and OS!="mac"', { 'link_settings': { # Must correspond with UV_VERSION_MAJOR and UV_VERSION_MINOR - # in src/version.c - 'libraries': [ '-Wl,-soname,libuv.so.0.11' ], + # in include/uv-version.h + 'libraries': [ '-Wl,-soname,libuv.so.1.0' ], }, }], ], @@ -195,6 +206,7 @@ ], 'defines': [ '_DARWIN_USE_64_BIT_INODE=1', + '_DARWIN_UNLIMITED_SELECT=1', ] }], [ 'OS!="mac"', { @@ -274,20 +286,6 @@ ['uv_library=="shared_library"', { 'defines': [ 'BUILDING_UV_SHARED=1' ] }], - # FIXME(bnoordhuis or tjfontaine) Unify this, it's extremely ugly. - ['uv_use_dtrace=="true"', { - 'defines': [ 'HAVE_DTRACE=1' ], - 'dependencies': [ 'uv_dtrace_header' ], - 'include_dirs': [ '<(SHARED_INTERMEDIATE_DIR)' ], - 'conditions': [ - [ 'OS not in "mac linux"', { - 'sources': [ 'src/unix/dtrace.c' ], - }], - [ 'OS=="linux"', { - 'sources': [ '<(SHARED_INTERMEDIATE_DIR)/dtrace.o' ] - }], - ], - }], ] }, @@ -312,6 +310,7 @@ 'test/test-close-order.c', 'test/test-connection-fail.c', 'test/test-cwd-and-chdir.c', + 'test/test-default-loop-close.c', 'test/test-delayed-accept.c', 'test/test-error.c', 'test/test-embed.c', @@ -324,6 +323,7 @@ 'test/test-getaddrinfo.c', 'test/test-getnameinfo.c', 'test/test-getsockname.c', + 'test/test-handle-fileno.c', 'test/test-hrtime.c', 'test/test-idle.c', 'test/test-ip6-addr.c', @@ -346,6 +346,7 @@ 'test/test-pipe-getsockname.c', 'test/test-pipe-sendmsg.c', 'test/test-pipe-server-close.c', + 'test/test-pipe-close-stdout-read-stdin.c', 'test/test-platform-output.c', 'test/test-poll.c', 'test/test-poll-close.c', @@ -360,6 +361,7 @@ 'test/test-shutdown-twice.c', 'test/test-signal.c', 'test/test-signal-multiple-loops.c', + 'test/test-socket-buffer-size.c', 'test/test-spawn.c', 'test/test-fs-poll.c', 'test/test-stdio-over-pipes.c', @@ -376,6 +378,7 @@ 'test/test-tcp-connect6-error.c', 'test/test-tcp-open.c', 'test/test-tcp-write-to-half-open-connection.c', + 'test/test-tcp-write-after-connect.c', 'test/test-tcp-writealot.c', 'test/test-tcp-try-write.c', 'test/test-tcp-unexpected-read.c', @@ -383,6 +386,7 @@ 'test/test-tcp-write-queue-order.c', 'test/test-threadpool.c', 'test/test-threadpool-cancel.c', + 'test/test-thread-equal.c', 'test/test-mutexes.c', 'test/test-thread.c', 'test/test-barrier.c', @@ -398,6 +402,7 @@ 'test/test-udp-options.c', 'test/test-udp-send-and-recv.c', 'test/test-udp-send-immediate.c', + 'test/test-udp-send-unreachable.c', 'test/test-udp-multicast-join.c', 'test/test-udp-multicast-join6.c', 'test/test-dlerror.c', @@ -493,60 +498,5 @@ }, }, }, - - { - 'target_name': 'uv_dtrace_header', - 'type': 'none', - 'conditions': [ - [ 'uv_use_dtrace=="true"', { - 'actions': [ - { - 'action_name': 'uv_dtrace_header', - 'inputs': [ 'src/unix/uv-dtrace.d' ], - 'outputs': [ 
'<(SHARED_INTERMEDIATE_DIR)/uv-dtrace.h' ], - 'action': [ 'dtrace', '-h', '-xnolibs', '-s', '<@(_inputs)', - '-o', '<@(_outputs)' ], - }, - ], - }], - ], - }, - - # FIXME(bnoordhuis or tjfontaine) Unify this, it's extremely ugly. - { - 'target_name': 'uv_dtrace_provider', - 'type': 'none', - 'conditions': [ - [ 'uv_use_dtrace=="true" and OS not in "mac linux"', { - 'actions': [ - { - 'action_name': 'uv_dtrace_o', - 'inputs': [ - 'src/unix/uv-dtrace.d', - '<(PRODUCT_DIR)/obj.target/libuv<(uv_parent_path)src/unix/core.o', - ], - 'outputs': [ - '<(PRODUCT_DIR)/obj.target/libuv<(uv_parent_path)src/unix/dtrace.o', - ], - 'action': [ 'dtrace', '-G', '-xnolibs', '-s', '<@(_inputs)', - '-o', '<@(_outputs)' ] - } - ] - }], - [ 'uv_use_dtrace=="true" and OS=="linux"', { - 'actions': [ - { - 'action_name': 'uv_dtrace_o', - 'inputs': [ 'src/unix/uv-dtrace.d' ], - 'outputs': [ '<(SHARED_INTERMEDIATE_DIR)/dtrace.o' ], - 'action': [ - 'dtrace', '-C', '-G', '-s', '<@(_inputs)', '-o', '<@(_outputs)' - ], - } - ] - }], - ] - }, - ] } diff --git a/deps/v8/.DEPS.git b/deps/v8/.DEPS.git index e1e6982c05e..7775744953a 100644 --- a/deps/v8/.DEPS.git +++ b/deps/v8/.DEPS.git @@ -13,8 +13,14 @@ vars = { deps = { 'v8/build/gyp': Var('git_url') + '/external/gyp.git@a3e2a5caf24a1e0a45401e09ad131210bf16b852', + 'v8/buildtools': + Var('git_url') + '/chromium/buildtools.git@fb782d4369d5ae04f17a2fceef7de5a63e50f07b', + 'v8/testing/gmock': + Var('git_url') + '/external/googlemock.git@896ba0e03f520fb9b6ed582bde2bd00847e3c3f2', + 'v8/testing/gtest': + Var('git_url') + '/external/googletest.git@4650552ff637bb44ecf7784060091cbed3252211', 'v8/third_party/icu': - Var('git_url') + '/chromium/deps/icu46.git@7a1ec88f69e25b3efcf76196d07f7815255db025', + Var('git_url') + '/chromium/deps/icu52.git@26d8859357ac0bfb86b939bf21c087b8eae22494', } deps_os = { @@ -28,14 +34,68 @@ deps_os = { } include_rules = [ - + '+include', + '+unicode', + '+third_party/fdlibm' ] skip_child_includes = [ - + 'build', + 'third_party' ] hooks = [ + { + 'action': + [ + 'download_from_google_storage', + '--no_resume', + '--platform=win32', + '--no_auth', + '--bucket', + 'chromium-clang-format', + '-s', + 'v8/buildtools/win/clang-format.exe.sha1' +], + 'pattern': + '.', + 'name': + 'clang_format_win' +}, + { + 'action': + [ + 'download_from_google_storage', + '--no_resume', + '--platform=darwin', + '--no_auth', + '--bucket', + 'chromium-clang-format', + '-s', + 'v8/buildtools/mac/clang-format.sha1' +], + 'pattern': + '.', + 'name': + 'clang_format_mac' +}, + { + 'action': + [ + 'download_from_google_storage', + '--no_resume', + '--platform=linux*', + '--no_auth', + '--bucket', + 'chromium-clang-format', + '-s', + 'v8/buildtools/linux64/clang-format.sha1' +], + 'pattern': + '.', + 'name': + 'clang_format_linux' +}, { 'action': [ diff --git a/deps/v8/.gitignore b/deps/v8/.gitignore index ebcb5816b7d..d0d4b436df1 100644 --- a/deps/v8/.gitignore +++ b/deps/v8/.gitignore @@ -21,11 +21,18 @@ #*# *~ .cpplint-cache +.cproject .d8_history +.gclient_entries +.project +.pydevproject +.settings .*.sw? 
bsuite d8 d8_g +gccauses +gcsuspects shell shell_g /_* @@ -33,6 +40,7 @@ shell_g /build/gyp /build/ipch/ /build/Release +/buildtools /hydrogen.cfg /obj /out @@ -45,13 +53,18 @@ shell_g /test/benchmarks/sunspider /test/mozilla/CHECKED_OUT_VERSION /test/mozilla/data +/test/mozilla/data.old /test/mozilla/downloaded_* /test/promises-aplus/promises-tests /test/promises-aplus/promises-tests.tar.gz /test/promises-aplus/sinon /test/test262/data +/test/test262/data.old /test/test262/tc39-test262-* -/third_party +/testing/gmock +/testing/gtest +/third_party/icu +/third_party/llvm /tools/jsfunfuzz /tools/jsfunfuzz.zip /tools/oom_dump/oom_dump diff --git a/deps/v8/AUTHORS b/deps/v8/AUTHORS index 4ef2bcca339..7ac08156994 100644 --- a/deps/v8/AUTHORS +++ b/deps/v8/AUTHORS @@ -13,6 +13,7 @@ Bloomberg Finance L.P. NVIDIA Corporation BlackBerry Limited Opera Software ASA +Intel Corporation Akinori MUSHA <knu@FreeBSD.org> Alexander Botero-Lowry <alexbl@FreeBSD.org> @@ -24,6 +25,7 @@ Baptiste Afsa <baptiste.afsa@arm.com> Bert Belder <bertbelder@gmail.com> Burcu Dogan <burcujdogan@gmail.com> Craig Schlenter <craig.schlenter@gmail.com> +Chunyang Dai <chunyang.dai@intel.com> Daniel Andersson <kodandersson@gmail.com> Daniel James <dnljms@gmail.com> Derek J Conrod <dconrod@codeaurora.org> @@ -64,6 +66,7 @@ Subrato K De <subratokde@codeaurora.org> Tobias Burnus <burnus@net-b.de> Vincent Belliard <vincent.belliard@arm.com> Vlad Burlik <vladbph@gmail.com> +Weiliang Lin<weiliang.lin@intel.com> Xi Qian <xi.qian@intel.com> Yuqiang Xian <yuqiang.xian@intel.com> Zaheer Ahmad <zahmad@codeaurora.org> diff --git a/deps/v8/BUILD.gn b/deps/v8/BUILD.gn index 2a6178eab07..efa4b717c9a 100644 --- a/deps/v8/BUILD.gn +++ b/deps/v8/BUILD.gn @@ -14,17 +14,11 @@ v8_enable_verify_heap = false v8_interpreted_regexp = false v8_object_print = false v8_postmortem_support = false -v8_use_default_platform = true v8_use_snapshot = true - -if (is_debug) { - v8_enable_extra_checks = true -} else { - v8_enable_extra_checks = false -} - -# TODO(jochen): Add support for want_seperate_host_toolset. -# TODO(jochen): Add toolchain.gypi support. +v8_use_external_startup_data = false +v8_enable_extra_checks = is_debug +v8_target_arch = cpu_arch +v8_random_seed = "314159265" ############################################################################### @@ -33,14 +27,32 @@ if (is_debug) { config("internal_config") { visibility = ":*" # Only targets in this file can depend on this. - include_dirs = [ "src" ] + include_dirs = [ "." ] if (component_mode == "shared_library") { defines = [ + "V8_SHARED", "BUILDING_V8_SHARED", + ] + } +} + +config("internal_config_base") { + visibility = ":*" # Only targets in this file can depend on this. + + include_dirs = [ "." ] +} + +# This config should only be applied to code using V8 and not any V8 code +# itself. 
+config("external_config") { + if (is_component_build) { + defines = [ "V8_SHARED", + "USING_V8_SHARED", ] } + include_dirs = [ "include" ] } config("features") { @@ -83,11 +95,6 @@ config("features") { "V8_I18N_SUPPORT", ] } - if (v8_use_default_platform == true) { - defines += [ - "V8_USE_DEFAULT_PLATFORM", - ] - } if (v8_compress_startup_data == "bz2") { defines += [ "COMPRESS_STARTUP_DATA_BZ2", @@ -103,25 +110,62 @@ config("features") { "ENABLE_HANDLE_ZAPPING", ] } + if (v8_use_external_startup_data == true) { + defines += [ + "V8_USE_EXTERNAL_STARTUP_DATA", + ] + } } -############################################################################### -# Actions -# - -# TODO(jochen): Do actions need visibility settings as well? -action("generate_trig_table") { +config("toolchain") { visibility = ":*" # Only targets in this file can depend on this. - script = "tools/generate-trig-table.py" + defines = [] + cflags = [] - outputs = [ - "$target_gen_dir/trig-table.cc" - ] + # TODO(jochen): Add support for arm, mips, mipsel. - args = rebase_path(outputs, root_build_dir) + if (v8_target_arch == "arm64") { + defines += [ + "V8_TARGET_ARCH_ARM64", + ] + } + if (v8_target_arch == "x86") { + defines += [ + "V8_TARGET_ARCH_IA32", + ] + } + if (v8_target_arch == "x64") { + defines += [ + "V8_TARGET_ARCH_X64", + ] + } + if (is_win) { + defines += [ + "WIN32", + ] + # TODO(jochen): Support v8_enable_prof. + } + + # TODO(jochen): Add support for compiling with simulators. + + if (is_debug) { + # TODO(jochen): Add support for different debug optimization levels. + defines += [ + "ENABLE_DISASSEMBLER", + "V8_ENABLE_CHECKS", + "OBJECT_PRINT", + "VERIFY_HEAP", + "DEBUG", + "OPTIMIZED_DEBUG", + ] + } } +############################################################################### +# Actions +# + action("js2c") { visibility = ":*" # Only targets in this file can depend on this. 
@@ -134,9 +178,11 @@ action("js2c") { sources = [ "src/runtime.js", "src/v8natives.js", + "src/symbol.js", "src/array.js", "src/string.js", "src/uri.js", + "third_party/fdlibm/fdlibm.js", "src/math.js", "src/messages.js", "src/apinatives.js", @@ -148,8 +194,14 @@ action("js2c") { "src/regexp.js", "src/arraybuffer.js", "src/typedarray.js", + "src/collection.js", + "src/collection-iterator.js", + "src/weak_collection.js", + "src/promise.js", "src/object-observe.js", "src/macros.py", + "src/array-iterator.js", + "src/string-iterator.js", ] outputs = [ @@ -160,10 +212,19 @@ action("js2c") { sources += [ "src/i18n.js" ] } - args = - rebase_path(outputs, root_build_dir) + - [ "EXPERIMENTAL", v8_compress_startup_data ] + - rebase_path(sources, root_build_dir) + args = [ + rebase_path("$target_gen_dir/libraries.cc", root_build_dir), + "CORE", + v8_compress_startup_data + ] + rebase_path(sources, root_build_dir) + + if (v8_use_external_startup_data) { + outputs += [ "$target_gen_dir/libraries.bin" ] + args += [ + "--startup_blob", + rebase_path("$target_gen_dir/libraries.bin", root_build_dir) + ] + } } action("js2c_experimental") { @@ -177,26 +238,53 @@ action("js2c_experimental") { sources = [ "src/macros.py", - "src/symbol.js", "src/proxy.js", - "src/collection.js", - "src/weak_collection.js", - "src/promise.js", "src/generator.js", - "src/array-iterator.js", "src/harmony-string.js", "src/harmony-array.js", - "src/harmony-math.js", ] outputs = [ "$target_gen_dir/experimental-libraries.cc" ] - args = - rebase_path(outputs, root_build_dir) + - [ "CORE", v8_compress_startup_data ] + - rebase_path(sources, root_build_dir) + args = [ + rebase_path("$target_gen_dir/experimental-libraries.cc", root_build_dir), + "EXPERIMENTAL", + v8_compress_startup_data + ] + rebase_path(sources, root_build_dir) + + if (v8_use_external_startup_data) { + outputs += [ "$target_gen_dir/libraries_experimental.bin" ] + args += [ + "--startup_blob", + rebase_path("$target_gen_dir/libraries_experimental.bin", root_build_dir) + ] + } +} + +if (v8_use_external_startup_data) { + action("natives_blob") { + visibility = ":*" # Only targets in this file can depend on this. + + deps = [ + ":js2c", + ":js2c_experimental" + ] + + sources = [ + "$target_gen_dir/libraries.bin", + "$target_gen_dir/libraries_experimental.bin" + ] + + outputs = [ + "$root_gen_dir/natives_blob.bin" + ] + + script = "tools/concatenate-files.py" + + args = rebase_path(sources + outputs, root_build_dir) + } } action("postmortem-metadata") { @@ -218,6 +306,40 @@ action("postmortem-metadata") { rebase_path(sources, root_build_dir) } +action("run_mksnapshot") { + visibility = ":*" # Only targets in this file can depend on this. 
+ + deps = [ ":mksnapshot($host_toolchain)" ] + + script = "tools/run.py" + + outputs = [ + "$target_gen_dir/snapshot.cc" + ] + + args = [ + "./" + rebase_path(get_label_info(":mksnapshot($host_toolchain)", + "root_out_dir") + "/mksnapshot", + root_build_dir), + "--log-snapshot-positions", + "--logfile", rebase_path("$target_gen_dir/snapshot.log", root_build_dir), + rebase_path("$target_gen_dir/snapshot.cc", root_build_dir) + ] + + if (v8_random_seed != "0") { + args += [ "--random-seed", v8_random_seed ] + } + + if (v8_use_external_startup_data) { + outputs += [ "$root_gen_dir/snapshot_blob.bin" ] + args += [ + "--startup_blob", + rebase_path("$root_gen_dir/snapshot_blob.bin", root_build_dir) + ] + } +} + + ############################################################################### # Source Sets (aka static libraries) # @@ -228,18 +350,64 @@ source_set("v8_nosnapshot") { deps = [ ":js2c", ":js2c_experimental", - ":generate_trig_table", ":v8_base", ] sources = [ "$target_gen_dir/libraries.cc", "$target_gen_dir/experimental-libraries.cc", - "$target_gen_dir/trig-table.cc", "src/snapshot-empty.cc", + "src/snapshot-common.cc", + ] + + configs -= [ "//build/config/compiler:chromium_code" ] + configs += [ "//build/config/compiler:no_chromium_code" ] + configs += [ ":internal_config", ":features", ":toolchain" ] +} + +source_set("v8_snapshot") { + visibility = ":*" # Only targets in this file can depend on this. + + deps = [ + ":js2c", + ":js2c_experimental", + ":run_mksnapshot", + ":v8_base", + ] + + sources = [ + "$target_gen_dir/libraries.cc", + "$target_gen_dir/experimental-libraries.cc", + "$target_gen_dir/snapshot.cc", + "src/snapshot-common.cc", ] - configs += [ ":internal_config", ":features" ] + configs -= [ "//build/config/compiler:chromium_code" ] + configs += [ "//build/config/compiler:no_chromium_code" ] + configs += [ ":internal_config", ":features", ":toolchain" ] +} + +if (v8_use_external_startup_data) { + source_set("v8_external_snapshot") { + visibility = ":*" # Only targets in this file can depend on this. 
+ + deps = [ + ":js2c", + ":js2c_experimental", + ":run_mksnapshot", + ":v8_base", + ":natives_blob", + ] + + sources = [ + "src/natives-external.cc", + "src/snapshot-external.cc", + ] + + configs -= [ "//build/config/compiler:chromium_code" ] + configs += [ "//build/config/compiler:no_chromium_code" ] + configs += [ ":internal_config", ":features", ":toolchain" ] + } } source_set("v8_base") { @@ -262,10 +430,10 @@ source_set("v8_base") { "src/assembler.h", "src/assert-scope.h", "src/assert-scope.cc", + "src/ast-value-factory.cc", + "src/ast-value-factory.h", "src/ast.cc", "src/ast.h", - "src/atomicops.h", - "src/atomicops_internals_x86_gcc.cc", "src/bignum-dtoa.cc", "src/bignum-dtoa.h", "src/bignum.cc", @@ -291,6 +459,95 @@ source_set("v8_base") { "src/codegen.h", "src/compilation-cache.cc", "src/compilation-cache.h", + "src/compiler/ast-graph-builder.cc", + "src/compiler/ast-graph-builder.h", + "src/compiler/code-generator-impl.h", + "src/compiler/code-generator.cc", + "src/compiler/code-generator.h", + "src/compiler/common-node-cache.h", + "src/compiler/common-operator.h", + "src/compiler/control-builders.cc", + "src/compiler/control-builders.h", + "src/compiler/frame.h", + "src/compiler/gap-resolver.cc", + "src/compiler/gap-resolver.h", + "src/compiler/generic-algorithm-inl.h", + "src/compiler/generic-algorithm.h", + "src/compiler/generic-graph.h", + "src/compiler/generic-node-inl.h", + "src/compiler/generic-node.h", + "src/compiler/graph-builder.cc", + "src/compiler/graph-builder.h", + "src/compiler/graph-inl.h", + "src/compiler/graph-reducer.cc", + "src/compiler/graph-reducer.h", + "src/compiler/graph-replay.cc", + "src/compiler/graph-replay.h", + "src/compiler/graph-visualizer.cc", + "src/compiler/graph-visualizer.h", + "src/compiler/graph.cc", + "src/compiler/graph.h", + "src/compiler/instruction-codes.h", + "src/compiler/instruction-selector-impl.h", + "src/compiler/instruction-selector.cc", + "src/compiler/instruction-selector.h", + "src/compiler/instruction.cc", + "src/compiler/instruction.h", + "src/compiler/js-context-specialization.cc", + "src/compiler/js-context-specialization.h", + "src/compiler/js-generic-lowering.cc", + "src/compiler/js-generic-lowering.h", + "src/compiler/js-graph.cc", + "src/compiler/js-graph.h", + "src/compiler/js-operator.h", + "src/compiler/js-typed-lowering.cc", + "src/compiler/js-typed-lowering.h", + "src/compiler/linkage-impl.h", + "src/compiler/linkage.cc", + "src/compiler/linkage.h", + "src/compiler/lowering-builder.cc", + "src/compiler/lowering-builder.h", + "src/compiler/machine-node-factory.h", + "src/compiler/machine-operator-reducer.cc", + "src/compiler/machine-operator-reducer.h", + "src/compiler/machine-operator.h", + "src/compiler/node-aux-data-inl.h", + "src/compiler/node-aux-data.h", + "src/compiler/node-cache.cc", + "src/compiler/node-cache.h", + "src/compiler/node-matchers.h", + "src/compiler/node-properties-inl.h", + "src/compiler/node-properties.h", + "src/compiler/node.cc", + "src/compiler/node.h", + "src/compiler/opcodes.h", + "src/compiler/operator-properties-inl.h", + "src/compiler/operator-properties.h", + "src/compiler/operator.h", + "src/compiler/phi-reducer.h", + "src/compiler/pipeline.cc", + "src/compiler/pipeline.h", + "src/compiler/raw-machine-assembler.cc", + "src/compiler/raw-machine-assembler.h", + "src/compiler/register-allocator.cc", + "src/compiler/register-allocator.h", + "src/compiler/representation-change.h", + "src/compiler/schedule.cc", + "src/compiler/schedule.h", + "src/compiler/scheduler.cc", + 
"src/compiler/scheduler.h", + "src/compiler/simplified-lowering.cc", + "src/compiler/simplified-lowering.h", + "src/compiler/simplified-node-factory.h", + "src/compiler/simplified-operator.h", + "src/compiler/source-position.cc", + "src/compiler/source-position.h", + "src/compiler/structured-machine-assembler.cc", + "src/compiler/structured-machine-assembler.h", + "src/compiler/typer.cc", + "src/compiler/typer.h", + "src/compiler/verifier.cc", + "src/compiler/verifier.h", "src/compiler.cc", "src/compiler.h", "src/contexts.cc", @@ -303,8 +560,6 @@ source_set("v8_base") { "src/cpu-profiler-inl.h", "src/cpu-profiler.cc", "src/cpu-profiler.h", - "src/cpu.cc", - "src/cpu.h", "src/data-flow.cc", "src/data-flow.h", "src/date.cc", @@ -312,8 +567,6 @@ source_set("v8_base") { "src/dateparser-inl.h", "src/dateparser.cc", "src/dateparser.h", - "src/debug-agent.cc", - "src/debug-agent.h", "src/debug.cc", "src/debug.h", "src/deoptimizer.cc", @@ -348,6 +601,9 @@ source_set("v8_base") { "src/fast-dtoa.cc", "src/fast-dtoa.h", "src/feedback-slots.h", + "src/field-index.cc", + "src/field-index.h", + "src/field-index-inl.h", "src/fixed-dtoa.cc", "src/fixed-dtoa.h", "src/flag-definitions.h", @@ -369,14 +625,32 @@ source_set("v8_base") { "src/handles.cc", "src/handles.h", "src/hashmap.h", - "src/heap-inl.h", "src/heap-profiler.cc", "src/heap-profiler.h", "src/heap-snapshot-generator-inl.h", "src/heap-snapshot-generator.cc", "src/heap-snapshot-generator.h", - "src/heap.cc", - "src/heap.h", + "src/heap/gc-tracer.cc", + "src/heap/gc-tracer.h", + "src/heap/heap-inl.h", + "src/heap/heap.cc", + "src/heap/heap.h", + "src/heap/incremental-marking.cc", + "src/heap/incremental-marking.h", + "src/heap/mark-compact-inl.h", + "src/heap/mark-compact.cc", + "src/heap/mark-compact.h", + "src/heap/objects-visiting-inl.h", + "src/heap/objects-visiting.cc", + "src/heap/objects-visiting.h", + "src/heap/spaces-inl.h", + "src/heap/spaces.cc", + "src/heap/spaces.h", + "src/heap/store-buffer-inl.h", + "src/heap/store-buffer.cc", + "src/heap/store-buffer.h", + "src/heap/sweeper-thread.h", + "src/heap/sweeper-thread.cc", "src/hydrogen-alias-analysis.h", "src/hydrogen-bce.cc", "src/hydrogen-bce.h", @@ -425,6 +699,8 @@ source_set("v8_base") { "src/hydrogen-sce.h", "src/hydrogen-store-elimination.cc", "src/hydrogen-store-elimination.h", + "src/hydrogen-types.cc", + "src/hydrogen-types.h", "src/hydrogen-uint32-analysis.cc", "src/hydrogen-uint32-analysis.h", "src/i18n.cc", @@ -434,8 +710,6 @@ source_set("v8_base") { "src/ic-inl.h", "src/ic.cc", "src/ic.h", - "src/incremental-marking.cc", - "src/incremental-marking.h", "src/interface.cc", "src/interface.h", "src/interpreter-irregexp.cc", @@ -447,14 +721,6 @@ source_set("v8_base") { "src/jsregexp-inl.h", "src/jsregexp.cc", "src/jsregexp.h", - "src/lazy-instance.h", - # TODO(jochen): move libplatform/ files to their own target. 
- "src/libplatform/default-platform.cc", - "src/libplatform/default-platform.h", - "src/libplatform/task-queue.cc", - "src/libplatform/task-queue.h", - "src/libplatform/worker-thread.cc", - "src/libplatform/worker-thread.h", "src/list-inl.h", "src/list.h", "src/lithium-allocator-inl.h", @@ -471,9 +737,10 @@ source_set("v8_base") { "src/log-utils.h", "src/log.cc", "src/log.h", + "src/lookup-inl.h", + "src/lookup.cc", + "src/lookup.h", "src/macro-assembler.h", - "src/mark-compact.cc", - "src/mark-compact.h", "src/messages.cc", "src/messages.h", "src/msan.h", @@ -481,28 +748,16 @@ source_set("v8_base") { "src/objects-debug.cc", "src/objects-inl.h", "src/objects-printer.cc", - "src/objects-visiting.cc", - "src/objects-visiting.h", "src/objects.cc", "src/objects.h", - "src/once.cc", - "src/once.h", - "src/optimizing-compiler-thread.h", "src/optimizing-compiler-thread.cc", + "src/optimizing-compiler-thread.h", + "src/ostreams.cc", + "src/ostreams.h", "src/parser.cc", "src/parser.h", - "src/platform/elapsed-timer.h", - "src/platform/time.cc", - "src/platform/time.h", - "src/platform.h", - "src/platform/condition-variable.cc", - "src/platform/condition-variable.h", - "src/platform/mutex.cc", - "src/platform/mutex.h", - "src/platform/semaphore.cc", - "src/platform/semaphore.h", - "src/platform/socket.cc", - "src/platform/socket.h", + "src/perf-jit.cc", + "src/perf-jit.h", "src/preparse-data-format.h", "src/preparse-data.cc", "src/preparse-data.h", @@ -516,6 +771,7 @@ source_set("v8_base") { "src/property-details.h", "src/property.cc", "src/property.h", + "src/prototype.h", "src/regexp-macro-assembler-irregexp-inl.h", "src/regexp-macro-assembler-irregexp.cc", "src/regexp-macro-assembler-irregexp.h", @@ -547,14 +803,9 @@ source_set("v8_base") { "src/serialize.h", "src/small-pointer-list.h", "src/smart-pointers.h", - "src/snapshot-common.cc", + "src/snapshot-source-sink.cc", + "src/snapshot-source-sink.h", "src/snapshot.h", - "src/spaces-inl.h", - "src/spaces.cc", - "src/spaces.h", - "src/store-buffer-inl.h", - "src/store-buffer.cc", - "src/store-buffer.h", "src/string-search.cc", "src/string-search.h", "src/string-stream.cc", @@ -563,8 +814,6 @@ source_set("v8_base") { "src/strtod.h", "src/stub-cache.cc", "src/stub-cache.h", - "src/sweeper-thread.h", - "src/sweeper-thread.cc", "src/token.cc", "src/token.h", "src/transitions-inl.h", @@ -587,12 +836,8 @@ source_set("v8_base") { "src/utils-inl.h", "src/utils.cc", "src/utils.h", - "src/utils/random-number-generator.cc", - "src/utils/random-number-generator.h", "src/v8.cc", "src/v8.h", - "src/v8checks.h", - "src/v8globals.h", "src/v8memory.h", "src/v8threads.cc", "src/v8threads.h", @@ -605,9 +850,11 @@ source_set("v8_base") { "src/zone-inl.h", "src/zone.cc", "src/zone.h", + "third_party/fdlibm/fdlibm.cc", + "third_party/fdlibm/fdlibm.h", ] - if (cpu_arch == "x86") { + if (v8_target_arch == "x86") { sources += [ "src/ia32/assembler-ia32-inl.h", "src/ia32/assembler-ia32.cc", @@ -636,8 +883,12 @@ source_set("v8_base") { "src/ia32/regexp-macro-assembler-ia32.cc", "src/ia32/regexp-macro-assembler-ia32.h", "src/ia32/stub-cache-ia32.cc", + "src/compiler/ia32/code-generator-ia32.cc", + "src/compiler/ia32/instruction-codes-ia32.h", + "src/compiler/ia32/instruction-selector-ia32.cc", + "src/compiler/ia32/linkage-ia32.cc", ] - } else if (cpu_arch == "x64") { + } else if (v8_target_arch == "x64") { sources += [ "src/x64/assembler-x64-inl.h", "src/x64/assembler-x64.cc", @@ -666,8 +917,12 @@ source_set("v8_base") { "src/x64/regexp-macro-assembler-x64.cc", 
"src/x64/regexp-macro-assembler-x64.h", "src/x64/stub-cache-x64.cc", + "src/compiler/x64/code-generator-x64.cc", + "src/compiler/x64/instruction-codes-x64.h", + "src/compiler/x64/instruction-selector-x64.cc", + "src/compiler/x64/linkage-x64.cc", ] - } else if (cpu_arch == "arm") { + } else if (v8_target_arch == "arm") { sources += [ "src/arm/assembler-arm-inl.h", "src/arm/assembler-arm.cc", @@ -699,8 +954,12 @@ source_set("v8_base") { "src/arm/regexp-macro-assembler-arm.h", "src/arm/simulator-arm.cc", "src/arm/stub-cache-arm.cc", + "src/compiler/arm/code-generator-arm.cc", + "src/compiler/arm/instruction-codes-arm.h", + "src/compiler/arm/instruction-selector-arm.cc", + "src/compiler/arm/linkage-arm.cc", ] - } else if (cpu_arch == "arm64") { + } else if (v8_target_arch == "arm64") { sources += [ "src/arm64/assembler-arm64.cc", "src/arm64/assembler-arm64.h", @@ -712,7 +971,6 @@ source_set("v8_base") { "src/arm64/code-stubs-arm64.h", "src/arm64/constants-arm64.h", "src/arm64/cpu-arm64.cc", - "src/arm64/cpu-arm64.h", "src/arm64/debug-arm64.cc", "src/arm64/decoder-arm64.cc", "src/arm64/decoder-arm64.h", @@ -744,8 +1002,12 @@ source_set("v8_base") { "src/arm64/stub-cache-arm64.cc", "src/arm64/utils-arm64.cc", "src/arm64/utils-arm64.h", + "src/compiler/arm64/code-generator-arm64.cc", + "src/compiler/arm64/instruction-codes-arm64.h", + "src/compiler/arm64/instruction-selector-arm64.cc", + "src/compiler/arm64/linkage-arm64.cc", ] - } else if (cpu_arch == "mipsel") { + } else if (v8_target_arch == "mipsel") { sources += [ "src/mips/assembler-mips.cc", "src/mips/assembler-mips.h", @@ -780,41 +1042,122 @@ source_set("v8_base") { ] } - configs += [ ":internal_config", ":features" ] + configs -= [ "//build/config/compiler:chromium_code" ] + configs += [ "//build/config/compiler:no_chromium_code" ] + configs += [ ":internal_config", ":features", ":toolchain" ] + + defines = [] + deps = [ ":v8_libbase" ] + + if (is_linux) { + if (v8_compress_startup_data == "bz2") { + libs += [ "bz2" ] + } + } + + if (v8_enable_i18n_support) { + deps += [ "//third_party/icu" ] + if (is_win) { + deps += [ "//third_party/icu:icudata" ] + } + # TODO(jochen): Add support for icu_use_data_file_flag + defines += [ "ICU_UTIL_DATA_IMPL=ICU_UTIL_DATA_FILE" ] + } else { + sources -= [ + "src/i18n.cc", + "src/i18n.h", + ] + } + + if (v8_postmortem_support) { + sources += [ "$target_gen_dir/debug-support.cc" ] + deps += [ ":postmortem-metadata" ] + } +} + +source_set("v8_libbase") { + visibility = ":*" # Only targets in this file can depend on this. 
+ + sources = [ + "src/base/atomicops.h", + "src/base/atomicops_internals_arm64_gcc.h", + "src/base/atomicops_internals_arm_gcc.h", + "src/base/atomicops_internals_atomicword_compat.h", + "src/base/atomicops_internals_mac.h", + "src/base/atomicops_internals_mips_gcc.h", + "src/base/atomicops_internals_tsan.h", + "src/base/atomicops_internals_x86_gcc.cc", + "src/base/atomicops_internals_x86_gcc.h", + "src/base/atomicops_internals_x86_msvc.h", + "src/base/build_config.h", + "src/base/cpu.cc", + "src/base/cpu.h", + "src/base/lazy-instance.h", + "src/base/logging.cc", + "src/base/logging.h", + "src/base/macros.h", + "src/base/once.cc", + "src/base/once.h", + "src/base/platform/elapsed-timer.h", + "src/base/platform/time.cc", + "src/base/platform/time.h", + "src/base/platform/condition-variable.cc", + "src/base/platform/condition-variable.h", + "src/base/platform/mutex.cc", + "src/base/platform/mutex.h", + "src/base/platform/platform.h", + "src/base/platform/semaphore.cc", + "src/base/platform/semaphore.h", + "src/base/safe_conversions.h", + "src/base/safe_conversions_impl.h", + "src/base/safe_math.h", + "src/base/safe_math_impl.h", + "src/base/utils/random-number-generator.cc", + "src/base/utils/random-number-generator.h", + ] + + configs -= [ "//build/config/compiler:chromium_code" ] + configs += [ "//build/config/compiler:no_chromium_code" ] + configs += [ ":internal_config_base", ":features", ":toolchain" ] defines = [] - deps = [] if (is_posix) { sources += [ - "src/platform-posix.cc" + "src/base/platform/platform-posix.cc" ] } if (is_linux) { sources += [ - "src/platform-linux.cc" + "src/base/platform/platform-linux.cc" ] - # TODO(brettw) - # 'conditions': [ - # ['v8_compress_startup_data=="bz2"', { - # 'libraries': [ - # '-lbz2', - # ] - # }], - # ], - libs = [ "rt" ] } else if (is_android) { - # TODO(brettw) OS=="android" condition from tools/gyp/v8.gyp + defines += [ "CAN_USE_VFP_INSTRUCTIONS" ] + + if (build_os == "mac") { + if (current_toolchain == host_toolchain) { + sources += [ "src/base/platform/platform-macos.cc" ] + } else { + sources += [ "src/base/platform/platform-linux.cc" ] + } + } else { + sources += [ "src/base/platform/platform-linux.cc" ] + if (current_toolchain == host_toolchain) { + defines += [ "V8_LIBRT_NOT_AVAILABLE" ] + } + } } else if (is_mac) { - sources += [ "src/platform-macos,cc" ] + sources += [ "src/base/platform/platform-macos.cc" ] } else if (is_win) { + # TODO(jochen): Add support for cygwin. sources += [ - "src/platform-win32.cc", - "src/win32-math.cc", - "src/win32-math.h", + "src/base/platform/platform-win32.cc", + "src/base/win32-headers.h", + "src/base/win32-math.cc", + "src/base/win32-math.h", ] defines += [ "_CRT_RAND_S" ] # for rand_s() @@ -822,52 +1165,117 @@ source_set("v8_base") { libs = [ "winmm.lib", "ws2_32.lib" ] } + # TODO(jochen): Add support for qnx, freebsd, openbsd, netbsd, and solaris. 
+} + +source_set("v8_libplatform") { + sources = [ + "include/libplatform/libplatform.h", + "src/libplatform/default-platform.cc", + "src/libplatform/default-platform.h", + "src/libplatform/task-queue.cc", + "src/libplatform/task-queue.h", + "src/libplatform/worker-thread.cc", + "src/libplatform/worker-thread.h", + ] - if (v8_enable_i18n_support) { - deps += [ "//third_party/icu" ] - if (is_win) { - deps += [ "//third_party/icu:icudata" ] - } - } else { - sources -= [ - "src/i18n.cc", - "src/i18n.h", - ] - } + configs -= [ "//build/config/compiler:chromium_code" ] + configs += [ "//build/config/compiler:no_chromium_code" ] + configs += [ ":internal_config_base", ":features", ":toolchain" ] - # TODO(brettw) other conditions from v8.gyp - # TODO(brettw) icu_use_data_file_flag + deps = [ + ":v8_libbase", + ] } ############################################################################### # Executables # -# TODO(jochen): Remove this as soon as toolchain.gypi is integrated. -if (build_cpu_arch != cpu_arch) { +if (current_toolchain == host_toolchain) { + executable("mksnapshot") { + visibility = ":*" # Only targets in this file can depend on this. -executable("mksnapshot") { - sources = [ - ] + sources = [ + "src/mksnapshot.cc", + ] + + configs -= [ "//build/config/compiler:chromium_code" ] + configs += [ "//build/config/compiler:no_chromium_code" ] + configs += [ ":internal_config", ":features", ":toolchain" ] + + deps = [ + ":v8_base", + ":v8_libplatform", + ":v8_nosnapshot", + ] + + if (v8_compress_startup_data == "bz2") { + libs = [ "bz2" ] + } + } } -} else { +############################################################################### +# Public targets +# + +if (component_mode == "shared_library") { -executable("mksnapshot") { +component("v8") { sources = [ - "src/mksnapshot.cc", + "src/v8dll-main.cc", ] - configs += [ ":internal_config", ":features" ] + if (v8_use_external_startup_data) { + deps = [ + ":v8_base", + ":v8_external_snapshot", + ] + } else if (v8_use_snapshot) { + deps = [ + ":v8_base", + ":v8_snapshot", + ] + } else { + deps = [ + ":v8_base", + ":v8_nosnapshot", + ] + } - deps = [ - ":v8_base", - ":v8_nosnapshot", - ] + configs -= [ "//build/config/compiler:chromium_code" ] + configs += [ "//build/config/compiler:no_chromium_code" ] + configs += [ ":internal_config", ":features", ":toolchain" ] - if (v8_compress_startup_data == "bz2") { - libs = [ "bz2" ] + direct_dependent_configs = [ ":external_config" ] + + if (is_android && current_toolchain != host_toolchain) { + libs += [ "log" ] } } +} else { + +group("v8") { + if (v8_use_external_startup_data) { + deps = [ + ":v8_base", + ":v8_external_snapshot", + ] + } else if (v8_use_snapshot) { + deps = [ + ":v8_base", + ":v8_snapshot", + ] + } else { + deps = [ + ":v8_base", + ":v8_nosnapshot", + ] + } + + direct_dependent_configs = [ ":external_config" ] +} + } diff --git a/deps/v8/ChangeLog b/deps/v8/ChangeLog index 8f1d2563859..0b2872a7c21 100644 --- a/deps/v8/ChangeLog +++ b/deps/v8/ChangeLog @@ -1,3 +1,726 @@ +2014-08-13: Version 3.28.73 + + Performance and stability improvements on all platforms. + + +2014-08-12: Version 3.28.71 + + ToNumber(Symbol) should throw TypeError (issue 3499). + + Performance and stability improvements on all platforms. + + +2014-08-11: Version 3.28.69 + + Performance and stability improvements on all platforms. + + +2014-08-09: Version 3.28.65 + + Performance and stability improvements on all platforms. 
+ + +2014-08-08: Version 3.28.64 + + ES6: Implement WeakMap and WeakSet constructor logic (issue 3399). + + Enable ES6 unscopables (issue 3401). + + Turn on harmony_unscopables for es_staging (issue 3401). + + Remove proxies from --harmony switch for M38, because problems. + + Reland "Add initial support for compiler unit tests using GTest/GMock." + (issue 3489). + + Enable ES6 iteration by default (issue 2214). + + Performance and stability improvements on all platforms. + + +2014-08-07: Version 3.28.62 + + Only escape U+0022 in argument values of `String.prototype` HTML methods + (issue 2217). + + Update webkit test for expected own properties. + + This implements unscopables (issue 3401). + + Add `CheckObjectCoercible` for the `String.prototype` HTML methods + (issue 2218). + + Add initial support for compiler unit tests using GTest/GMock (issue + 3489). + + Trigger exception debug events on Promise reject (Chromium issue + 393913). + + Refactor unit tests for the base library to use GTest (issue 3489). + + Performance and stability improvements on all platforms. + + +2014-08-06: Version 3.28.60 + + Enable ES6 Map and Set by default (issue 1622). + + Performance and stability improvements on all platforms. + + +2014-08-06: Version 3.28.59 + + Removed GetConstructor from the API. Instead either get the + "constructor" property stored in the prototype, or keep a side-table. + + Enable ES6 Symbols by default (issue 2158). + + Performance and stability improvements on all platforms. + + +2014-08-05: Version 3.28.57 + + Add dependencies on gtest and gmock. + + Performance and stability improvements on all platforms. + + +2014-08-04: Version 3.28.54 + + Performance and stability improvements on all platforms. + + +2014-08-01: Version 3.28.53 + + Performance and stability improvements on all platforms. + + +2014-07-31: Version 3.28.52 + + Performance and stability improvements on all platforms. + + +2014-07-31: Version 3.28.51 + + Drop deprecated memory related notification API (Chromium issue 397026). + + Performance and stability improvements on all platforms. + + +2014-07-31: Version 3.28.50 + + Use emergency memory in the case of out of memory during evacuation + (Chromium issue 395314). + + Performance and stability improvements on all platforms. + + +2014-07-30: Version 3.28.48 + + Fix Object.freeze with field type tracking. Keep the descriptor properly + intact while update the field type (issue 3458). + + Performance and stability improvements on all platforms. + + +2014-07-29: Version 3.28.45 + + Performance and stability improvements on all platforms. + + +2014-07-28: Version 3.28.43 + + Performance and stability improvements on all platforms. + + +2014-07-25: Version 3.28.38 + + Fix issue with setters and their holders in accessors.cc (Chromium issue + 3462). + + Introduce more debug events for promises (issue 3093). + + Move gc notifications from V8 to Isolate and make idle hint mandatory + (Chromium issue 397026). + + The accessors should get the value from the holder and not from this + (issue 3461). + + Performance and stability improvements on all platforms. + + +2014-07-24: Version 3.28.35 + + Rebaseline/update the intl tests with ICU 52 (issue 3454). + + Expose the content of Sets and WeakSets through SetMirror (issue 3093). + + Performance and stability improvements on all platforms. + + +2014-07-23: Version 3.28.32 + + Update ICU to 5.2 (matching chromium) (issue 3452). + + Performance and stability improvements on all platforms. 
+ + +2014-07-22: Version 3.28.31 + + Remove harmony-typeof. + + Implement String.prototype.codePointAt and String.fromCodePoint (issue + 2840). + + Performance and stability improvements on all platforms. + + +2014-07-21: Version 3.28.30 + + Performance and stability improvements on all platforms. + + +2014-07-21: Version 3.28.29 + + Performance and stability improvements on all platforms. + + +2014-07-18: Version 3.28.28 + + Performance and stability improvements on all platforms. + + +2014-07-17: Version 3.28.26 + + Ship ES6 Math functions (issue 2938). + + Make ToPrimitive throw on symbol wrappers (issue 3442). + + Performance and stability improvements on all platforms. + + +2014-07-16: Version 3.28.25 + + Performance and stability improvements on all platforms. + + +2014-07-16: Version 3.28.24 + + Removed some copy-n-paste from StackFrame::Foo API entries (issue 3436). + + Performance and stability improvements on all platforms. + + +2014-07-15: Version 3.28.23 + + Fix error message about read-only symbol properties (issue 3441). + + Include symbol properties in Object.{create,defineProperties} (issue + 3440). + + Performance and stability improvements on all platforms. + + +2014-07-14: Version 3.28.22 + + Performance and stability improvements on all platforms. + + +2014-07-11: Version 3.28.21 + + Make `let` usable as an identifier in ES6 sloppy mode (issue 2198). + + Support ES6 Map and Set in heap profiler (issue 3368). + + Performance and stability improvements on all platforms. + + +2014-07-10: Version 3.28.20 + + Remove deprecate counter/histogram methods. + + Fixed printing of external references (Chromium issue 392068). + + Fix several issues with ES6 redeclaration checks (issue 3426). + + Performance and stability improvements on all platforms. + + +2014-07-09: Version 3.28.19 + + Performance and stability improvements on all platforms. + + +2014-07-09: Version 3.28.18 + + Reland "Postpone termination exceptions in debug scope." (issue 3408). + + Performance and stability improvements on all platforms. + + +2014-07-08: Version 3.28.17 + + MIPS: Fix computed properties on object literals with a double as + propertyname (Chromium issue 390732). + + Performance and stability improvements on all platforms. + + +2014-07-08: Version 3.28.16 + + Fix computed properties on object literals with a double as propertyname + (Chromium issue 390732). + + Avoid brittle use of .bind in Promise.all (issue 3420). + + Performance and stability improvements on all platforms. + + +2014-07-07: Version 3.28.15 + + Remove a bunch of Isolate::UncheckedCurrent calls. + + Performance and stability improvements on all platforms. + + +2014-07-07: Version 3.28.14 + + Use the HeapObjectIterator to scan-on-scavenge map pages (Chromium issue + 390732). + + Introduce debug events for Microtask queue (Chromium issue 272416). + + Split out libplatform into a separate libary. + + Add clang-format to presubmit checks. + + Stack traces exposed to Javascript should omit extensions (issue 311). + + Remove deprecated v8::Context::HasOutOfMemoryException. + + Postpone termination exceptions in debug scope (issue 3408). + + Performance and stability improvements on all platforms. + + +2014-07-04: Version 3.28.13 + + Rollback to r22134. + + +2014-07-04: Version 3.28.12 + + Use the HeapObjectIterator to scan-on-scavenge map pages (Chromium issue + 390732). + + Introduce debug events for Microtask queue (Chromium issue 272416). + + Performance and stability improvements on all platforms. 
+
+
+2014-07-03: Version 3.28.11
+
+        Split out libplatform into a separate library.
+
+        Performance and stability improvements on all platforms.
+
+
+2014-07-03: Version 3.28.10
+
+        Add clang-format to presubmit checks.
+
+        Stack traces exposed to JavaScript should omit extensions (issue 311).
+
+        Remove deprecated v8::Context::HasOutOfMemoryException.
+
+        Postpone termination exceptions in debug scope (issue 3408).
+
+        Performance and stability improvements on all platforms.
+
+
+2014-07-02: Version 3.28.9
+
+        Make freeze & friends ignore private properties (issue 3419).
+
+        Introduce a builddeps make target (issue 3418).
+
+        Performance and stability improvements on all platforms.
+
+
+2014-07-01: Version 3.28.8
+
+        Remove static initializer from isolate.
+
+        ES6: Add missing Set.prototype.keys function (issue 3411).
+
+        Introduce debug events for promises (issue 3093).
+
+        Performance and stability improvements on all platforms.
+
+
+2014-06-30: Version 3.28.7
+
+        Performance and stability improvements on all platforms.
+
+
+2014-06-30: Version 3.28.6
+
+        Unbreak "os" stuff in shared d8 builds (issue 3407).
+
+        Performance and stability improvements on all platforms.
+
+
+2014-06-26: Version 3.28.4
+
+        Compile optimized code with active debugger but no break points
+        (Chromium issue 386492).
+
+        Optimize Map/Set.prototype.forEach.
+
+        Collect garbage with kReduceMemoryFootprintMask in IdleNotification
+        (Chromium issue 350720).
+
+        Performance and stability improvements on all platforms.
+
+
+2014-06-26: Version 3.28.3
+
+        Grow heap slower if GC freed many global handles (Chromium issue
+        263503).
+
+        Performance and stability improvements on all platforms.
+
+
+2014-06-25: Version 3.28.2
+
+        Remove bogus assertions in HCompareObjectEqAndBranch (Chromium issue
+        387636).
+
+        Do not eagerly update allow_osr_at_loop_nesting_level (Chromium issue
+        387599).
+
+        Set host_arch to ia32 on machines with a 32-bit userland but a 64-bit
+        kernel (Chromium issue 368384).
+
+        Map/Set: Implement constructor parameter handling (issue 3398).
+
+        Performance and stability improvements on all platforms.
+
+
+2014-06-24: Version 3.28.1
+
+        Support LiveEdit on Arm64 (Chromium issue 368580).
+
+        Run JS micro tasks in the appropriate context (Chromium issue 385349).
+
+        Add a use counter API.
+
+        Set host_arch to ia32 on machines with a 32-bit userland but a 64-bit
+        kernel.
+
+        Performance and stability improvements on all platforms.
+
+
+2014-06-23: Version 3.28.0
+
+        MIPS: Support LiveEdit (Chromium issue 368580).
+
+        Array.concat: properly go to dictionary mode when required (Chromium
+        issue 387031).
+
+        Support LiveEdit on ARM (Chromium issue 368580).
+
+        Performance and stability improvements on all platforms.
+
+
+2014-06-18: Version 3.27.34
+
+        Reduce number of writes to DependentCode array when inserting dependent
+        IC (Chromium issue 305878).
+
+        Performance and stability improvements on all platforms.
+
+
+2014-06-17: Version 3.27.33
+
+        Do GC if CodeRange fails to allocate a block (Chromium issue 305878).
+
+        Throw syntax error when a getter/setter has the wrong number of params
+        (issue 3371).
+
+        Performance and stability improvements on all platforms.
+
+
+2014-06-17: Version 3.27.32
+
+        Performance and stability improvements on all platforms.
+
+
+2014-06-16: Version 3.27.31
+
+        Version fix.
+
+
+2014-06-16: Version 3.27.30
+
+        Fix representation of Phis for mutable-heapnumber-in-object-literal
+        properties (issue 3392).
+
+        Performance and stability improvements on all platforms.
+ + +2014-06-16: Version 3.27.29 + + Emulate MLS on pre-ARMv6T2. Cleaned up thumbee vs. thumb2 confusion. + + X87: Fixed flooring division by a power of 2, once again.. (issue 3259). + + Fixed undefined behavior in RNG (Chromium issue 377790). + + Performance and stability improvements on all platforms. + + +2014-06-13: Version 3.27.28 + + Add v8::Promise::Then (Chromium issue 371288). + + Performance and stability improvements on all platforms. + + +2014-06-12: Version 3.27.27 + + Fix detection of VFP3D16 on Galaxy Tab 10.1 (issue 3387). + + Performance and stability improvements on all platforms. + + +2014-06-12: Version 3.27.26 + + MIPS: Fixed flooring division by a power of 2, once again.. (issue + 3259). + + Fixed flooring division by a power of 2, once again.. (issue 3259). + + Fix unsigned comparisons (issue 3380). + + Performance and stability improvements on all platforms. + + +2014-06-11: Version 3.27.25 + + Performance and stability improvements on all platforms. + + +2014-06-11: Version 3.27.24 + + Fix invalid attributes when generalizing because of incompatible map + change (Chromium issue 382143). + + Fix missing smi check in inlined indexOf/lastIndexOf (Chromium issue + 382513). + + Performance and stability improvements on all platforms. + + +2014-06-06: Version 3.27.23 + + Performance and stability improvements on all platforms. + + +2014-06-06: Version 3.27.22 + + Performance and stability improvements on all platforms. + + +2014-06-06: Version 3.27.21 + + Turn on harmony_collections for es_staging (issue 1622). + + Do not make heap iterable eagerly (Chromium issue 379740). + + Performance and stability improvements on all platforms. + + +2014-06-05: Version 3.27.20 + + Fix invalid loop condition for Array.lastIndexOf() (Chromium issue + 380512). + + Add API support for passing a C++ function as a microtask callback. + + Performance and stability improvements on all platforms. + + +2014-06-04: Version 3.27.19 + + Split Put into Put and Remove. + + ES6: Add support for values/keys/entries for Map and Set (issue 1793). + + Performance and stability improvements on all platforms. + + +2014-06-03: Version 3.27.18 + + Remove PROHIBITS_OVERWRITING as it is subsumed by non-configurable + properties. + + Performance and stability improvements on all platforms. + + +2014-06-02: Version 3.27.17 + + BuildNumberToString: Check for undefined keys in the cache (Chromium + issue 368114). + + HRor and HSar can deoptimize (issue 3359). + + Simplify, speed-up correct-context ObjectObserve calls. + + Performance and stability improvements on all platforms. + + +2014-05-29: Version 3.27.16 + + Allow microtasks to throw exceptions and handle them gracefully + (Chromium issue 371566). + + Performance and stability improvements on all platforms. + + +2014-05-28: Version 3.27.15 + + Performance and stability improvements on all platforms. + + +2014-05-27: Version 3.27.14 + + Reland "Customized support for feedback on calls to Array." and follow- + up fixes (Chromium issues 377198, 377290). + + Performance and stability improvements on all platforms. + + +2014-05-26: Version 3.27.13 + + Performance and stability improvements on all platforms. + + +2014-05-26: Version 3.27.12 + + Check for cached transition to ExternalArray elements kind (issue 3337). + + Support ES6 weak collections in heap profiler (Chromium issue 376196). + + Performance and stability improvements on all platforms. + + +2014-05-23: Version 3.27.11 + + Add support for ES6 Symbol in heap profiler (Chromium issue 376194). 
+ + Performance and stability improvements on all platforms. + + +2014-05-22: Version 3.27.10 + + Implement Mirror object for Symbols (issue 3290). + + Allow debugger to step into Map and Set forEach callbacks (issue 3341). + + Fix ArrayShift hydrogen support (Chromium issue 374838). + + Use SameValueZero for Map and Set (issue 1622). + + Array Iterator next should check for own property. + + Performance and stability improvements on all platforms. + + +2014-05-21: Version 3.27.9 + + Disable ArrayShift hydrogen support (Chromium issue 374838). + + ES6 Map/Set iterators/forEach improvements (issue 1793). + + Performance and stability improvements on all platforms. + + +2014-05-20: Version 3.27.8 + + Move microtask queueing logic from JavaScript to C++. + + Partial revert of "Next bunch of fixes for check elimination" (Chromium + issue 372173). + + Performance and stability improvements on all platforms. + + +2014-05-19: Version 3.27.7 + + Performance and stability improvements on all platforms. + + +2014-05-19: Version 3.27.6 + + Performance and stability improvements on all platforms. + + +2014-05-16: Version 3.27.5 + + Performance and stability improvements on all platforms. + + +2014-05-15: Version 3.27.4 + + Drop thenable coercion cache (Chromium issue 372788). + + Skip write barriers when updating the weak hash table (Chromium issue + 359401). + + Performance and stability improvements on all platforms. + + +2014-05-14: Version 3.27.3 + + Performance and stability improvements on all platforms. + + +2014-05-13: Version 3.27.2 + + Harden %SetIsObserved with RUNTIME_ASSERTs (Chromium issue 371782). + + Drop unused static microtask API. + + Introduce an api to query the microtask autorun state of an isolate. + + Performance and stability improvements on all platforms. + + +2014-05-12: Version 3.27.1 + + Object.observe: avoid accessing acceptList properties more than once + (issue 3315). + + Array Iterator prototype should not have a constructor (issue 3293). + + Fix typos in unit test for Array.prototype.fill(). + + Shorten autogenerated error message for functions only (issue 3019, + Chromium issue 331971). + + Reland "Removed default Isolate." (Chromium issue 359977). + + Performance and stability improvements on all platforms. + + +2014-05-09: Version 3.27.0 + + Unbreak samples and tools. + + Performance and stability improvements on all platforms. + + 2014-05-08: Version 3.26.33 Removed default Isolate (Chromium issue 359977). diff --git a/deps/v8/DEPS b/deps/v8/DEPS index 24b7841584f..9459204f2cb 100644 --- a/deps/v8/DEPS +++ b/deps/v8/DEPS @@ -2,26 +2,89 @@ # directory and assume that the root of the checkout is in ./v8/, so # all paths in here must match this assumption. +vars = { + "chromium_trunk": "https://src.chromium.org/svn/trunk", + + "buildtools_revision": "fb782d4369d5ae04f17a2fceef7de5a63e50f07b", +} + deps = { # Remember to keep the revision in sync with the Makefile. 
"v8/build/gyp": "http://gyp.googlecode.com/svn/trunk@1831", "v8/third_party/icu": - "https://src.chromium.org/svn/trunk/deps/third_party/icu46@258359", + Var("chromium_trunk") + "/deps/third_party/icu52@277999", + + "v8/buildtools": + "https://chromium.googlesource.com/chromium/buildtools.git@" + + Var("buildtools_revision"), + + "v8/testing/gtest": + "http://googletest.googlecode.com/svn/trunk@692", + + "v8/testing/gmock": + "http://googlemock.googlecode.com/svn/trunk@485", } deps_os = { "win": { "v8/third_party/cygwin": - "http://src.chromium.org/svn/trunk/deps/third_party/cygwin@66844", + Var("chromium_trunk") + "/deps/third_party/cygwin@66844", "v8/third_party/python_26": - "http://src.chromium.org/svn/trunk/tools/third_party/python_26@89111", + Var("chromium_trunk") + "/tools/third_party/python_26@89111", } } +include_rules = [ + # Everybody can use some things. + "+include", + "+unicode", + "+third_party/fdlibm", +] + +# checkdeps.py shouldn't check for includes in these directories: +skip_child_includes = [ + "build", + "third_party", +] + hooks = [ + # Pull clang-format binaries using checked-in hashes. + { + "name": "clang_format_win", + "pattern": ".", + "action": [ "download_from_google_storage", + "--no_resume", + "--platform=win32", + "--no_auth", + "--bucket", "chromium-clang-format", + "-s", "v8/buildtools/win/clang-format.exe.sha1", + ], + }, + { + "name": "clang_format_mac", + "pattern": ".", + "action": [ "download_from_google_storage", + "--no_resume", + "--platform=darwin", + "--no_auth", + "--bucket", "chromium-clang-format", + "-s", "v8/buildtools/mac/clang-format.sha1", + ], + }, + { + "name": "clang_format_linux", + "pattern": ".", + "action": [ "download_from_google_storage", + "--no_resume", + "--platform=linux*", + "--no_auth", + "--bucket", "chromium-clang-format", + "-s", "v8/buildtools/linux64/clang-format.sha1", + ], + }, { # A change to a .gyp, .gypi, or to GYP itself should run the generator. "pattern": ".", diff --git a/deps/v8/Makefile b/deps/v8/Makefile index a99b09c0705..96d7a7ae4d3 100644 --- a/deps/v8/Makefile +++ b/deps/v8/Makefile @@ -70,6 +70,10 @@ ifeq ($(backtrace), off) else GYPFLAGS += -Dv8_enable_backtrace=1 endif +# verifypredictable=on +ifeq ($(verifypredictable), on) + GYPFLAGS += -Dv8_enable_verify_predictable=1 +endif # snapshot=off ifeq ($(snapshot), off) GYPFLAGS += -Dv8_use_snapshot='false' @@ -156,11 +160,6 @@ ifeq ($(armv7), true) endif endif endif -# vfp2=off. Deprecated, use armfpu= -# vfp3=off. Deprecated, use armfpu= -ifeq ($(vfp3), off) - GYPFLAGS += -Darm_fpu=vfp -endif # hardfp=on/off. Deprecated, use armfloatabi ifeq ($(hardfp),on) GYPFLAGS += -Darm_float_abi=hard @@ -169,16 +168,10 @@ ifeq ($(hardfp),off) GYPFLAGS += -Darm_float_abi=softfp endif endif -# armneon=on/off -ifeq ($(armneon), on) - GYPFLAGS += -Darm_neon=1 -endif # fpu: armfpu=xxx # xxx: vfp, vfpv3-d16, vfpv3, neon. ifeq ($(armfpu),) -ifneq ($(vfp3), off) GYPFLAGS += -Darm_fpu=default -endif else GYPFLAGS += -Darm_fpu=$(armfpu) endif @@ -198,19 +191,19 @@ ifeq ($(armthumb), on) GYPFLAGS += -Darm_thumb=1 endif endif -# armtest=on +# arm_test_noprobe=on # With this flag set, by default v8 will only use features implied # by the compiler (no probe). This is done by modifying the default -# values of enable_armv7, enable_vfp2, enable_vfp3 and enable_32dregs. +# values of enable_armv7, enable_vfp3, enable_32dregs and enable_neon. # Modifying these flags when launching v8 will enable the probing for # the specified values. 
-# When using the simulator, this flag is implied. -ifeq ($(armtest), on) - GYPFLAGS += -Darm_test=on +ifeq ($(arm_test_noprobe), on) + GYPFLAGS += -Darm_test_noprobe=on endif # ----------------- available targets: -------------------- -# - "dependencies": pulls in external dependencies (currently: GYP) +# - "builddeps": pulls in external dependencies for building +# - "dependencies": pulls in all external dependencies # - "grokdump": rebuilds heap constants lists used by grokdump # - any arch listed in ARCHES (see below) # - any mode listed in MODES @@ -228,11 +221,11 @@ endif # Architectures and modes to be compiled. Consider these to be internal # variables, don't override them (use the targets instead). -ARCHES = ia32 x64 arm arm64 mips mipsel +ARCHES = ia32 x64 x32 arm arm64 mips mipsel mips64el x87 DEFAULT_ARCHES = ia32 x64 arm MODES = release debug optdebug DEFAULT_MODES = release debug -ANDROID_ARCHES = android_ia32 android_arm android_arm64 android_mipsel +ANDROID_ARCHES = android_ia32 android_arm android_arm64 android_mipsel android_x87 NACL_ARCHES = nacl_ia32 nacl_x64 # List of files that trigger Makefile regeneration: @@ -258,7 +251,7 @@ NACL_CHECKS = $(addsuffix .check,$(NACL_BUILDS)) # File where previously used GYPFLAGS are stored. ENVFILE = $(OUTDIR)/environment -.PHONY: all check clean dependencies $(ENVFILE).new native \ +.PHONY: all check clean builddeps dependencies $(ENVFILE).new native \ qc quickcheck $(QUICKCHECKS) \ $(addsuffix .quickcheck,$(MODES)) $(addsuffix .quickcheck,$(ARCHES)) \ $(ARCHES) $(MODES) $(BUILDS) $(CHECKS) $(addsuffix .clean,$(ARCHES)) \ @@ -406,18 +399,22 @@ clean: $(addsuffix .clean, $(ARCHES) $(ANDROID_ARCHES) $(NACL_ARCHES)) native.cl # GYP file generation targets. OUT_MAKEFILES = $(addprefix $(OUTDIR)/Makefile.,$(BUILDS)) $(OUT_MAKEFILES): $(GYPFILES) $(ENVFILE) - PYTHONPATH="$(shell pwd)/tools/generate_shim_headers:$(PYTHONPATH)" \ - PYTHONPATH="$(shell pwd)/build/gyp/pylib:$(PYTHONPATH)" \ + $(eval CXX_TARGET_ARCH:=$(shell $(CXX) -v 2>&1 | grep ^Target: | \ + cut -f 2 -d " " | cut -f 1 -d "-" )) + $(eval CXX_TARGET_ARCH:=$(subst aarch64,arm64,$(CXX_TARGET_ARCH))) + $(eval V8_TARGET_ARCH:=$(subst .,,$(suffix $(basename $@)))) + PYTHONPATH="$(shell pwd)/tools/generate_shim_headers:$(shell pwd)/build:$(PYTHONPATH):$(shell pwd)/build/gyp/pylib:$(PYTHONPATH)" \ GYP_GENERATORS=make \ build/gyp/gyp --generator-output="$(OUTDIR)" build/all.gyp \ -Ibuild/standalone.gypi --depth=. \ - -Dv8_target_arch=$(subst .,,$(suffix $(basename $@))) \ + -Dv8_target_arch=$(V8_TARGET_ARCH) \ + $(if $(findstring $(CXX_TARGET_ARCH),$(V8_TARGET_ARCH)), \ + -Dtarget_arch=$(V8_TARGET_ARCH),) \ $(if $(findstring optdebug,$@),-Dv8_optimized_debug=2,) \ -S$(suffix $(basename $@))$(suffix $@) $(GYPFLAGS) $(OUTDIR)/Makefile.native: $(GYPFILES) $(ENVFILE) - PYTHONPATH="$(shell pwd)/tools/generate_shim_headers:$(PYTHONPATH)" \ - PYTHONPATH="$(shell pwd)/build/gyp/pylib:$(PYTHONPATH)" \ + PYTHONPATH="$(shell pwd)/tools/generate_shim_headers:$(shell pwd)/build:$(PYTHONPATH):$(shell pwd)/build/gyp/pylib:$(PYTHONPATH)" \ GYP_GENERATORS=make \ build/gyp/gyp --generator-output="$(OUTDIR)" build/all.gyp \ -Ibuild/standalone.gypi --depth=. -S.native $(GYPFLAGS) @@ -471,11 +468,26 @@ GPATH GRTAGS GSYMS GTAGS: gtags.files $(shell cat gtags.files 2> /dev/null) gtags.clean: rm -f gtags.files GPATH GRTAGS GSYMS GTAGS -# Dependencies. +# Dependencies. "builddeps" are dependencies required solely for building, +# "dependencies" includes also dependencies required for development. 
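+# Typical usage: run "make builddeps" before building an arch.mode target
+# such as "make x64.release"; run "make dependencies" to also fetch the
+# development-only tools (buildtools and clang-format, synced via gclient).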
# Remember to keep these in sync with the DEPS file. -dependencies: +builddeps: svn checkout --force http://gyp.googlecode.com/svn/trunk build/gyp \ --revision 1831 - svn checkout --force \ - https://src.chromium.org/chrome/trunk/deps/third_party/icu46 \ - third_party/icu --revision 258359 + if svn info third_party/icu 2>&1 | grep -q icu46 ; then \ + svn switch --force \ + https://src.chromium.org/chrome/trunk/deps/third_party/icu52 \ + third_party/icu --revision 277999 ; \ + else \ + svn checkout --force \ + https://src.chromium.org/chrome/trunk/deps/third_party/icu52 \ + third_party/icu --revision 277999 ; \ + fi + svn checkout --force http://googletest.googlecode.com/svn/trunk \ + testing/gtest --revision 692 + svn checkout --force http://googlemock.googlecode.com/svn/trunk \ + testing/gmock --revision 485 + +dependencies: builddeps + # The spec is a copy of the hooks in v8's DEPS file. + gclient sync -r fb782d4369d5ae04f17a2fceef7de5a63e50f07b --spec="solutions = [{u'managed': False, u'name': u'buildtools', u'url': u'https://chromium.googlesource.com/chromium/buildtools.git', u'custom_deps': {}, u'custom_hooks': [{u'name': u'clang_format_win',u'pattern': u'.',u'action': [u'download_from_google_storage',u'--no_resume',u'--platform=win32',u'--no_auth',u'--bucket',u'chromium-clang-format',u'-s',u'buildtools/win/clang-format.exe.sha1']},{u'name': u'clang_format_mac',u'pattern': u'.',u'action': [u'download_from_google_storage',u'--no_resume',u'--platform=darwin',u'--no_auth',u'--bucket',u'chromium-clang-format',u'-s',u'buildtools/mac/clang-format.sha1']},{u'name': u'clang_format_linux',u'pattern': u'.',u'action': [u'download_from_google_storage',u'--no_resume',u'--platform=linux*',u'--no_auth',u'--bucket',u'chromium-clang-format',u'-s',u'buildtools/linux64/clang-format.sha1']}],u'deps_file': u'.DEPS.git', u'safesync_url': u''}]" diff --git a/deps/v8/Makefile.android b/deps/v8/Makefile.android index 396b58d7444..d46af31fdb7 100644 --- a/deps/v8/Makefile.android +++ b/deps/v8/Makefile.android @@ -26,7 +26,7 @@ # OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. 
# Those definitions should be consistent with the main Makefile -ANDROID_ARCHES = android_ia32 android_arm android_arm64 android_mipsel +ANDROID_ARCHES = android_ia32 android_arm android_arm64 android_mipsel android_x87 MODES = release debug # Generates all combinations of ANDROID ARCHES and MODES, @@ -51,13 +51,13 @@ ifeq ($(ARCH), android_arm) DEFINES += arm_neon=0 arm_version=7 TOOLCHAIN_ARCH = arm-linux-androideabi TOOLCHAIN_PREFIX = $(TOOLCHAIN_ARCH) - TOOLCHAIN_VER = 4.6 + TOOLCHAIN_VER = 4.8 else ifeq ($(ARCH), android_arm64) - DEFINES = target_arch=arm64 v8_target_arch=arm64 android_target_arch=arm64 android_target_platform=20 + DEFINES = target_arch=arm64 v8_target_arch=arm64 android_target_arch=arm64 android_target_platform=L TOOLCHAIN_ARCH = aarch64-linux-android TOOLCHAIN_PREFIX = $(TOOLCHAIN_ARCH) - TOOLCHAIN_VER = 4.8 + TOOLCHAIN_VER = 4.9 else ifeq ($(ARCH), android_mipsel) DEFINES = target_arch=mipsel v8_target_arch=mipsel android_target_platform=14 @@ -73,7 +73,14 @@ else TOOLCHAIN_PREFIX = i686-linux-android TOOLCHAIN_VER = 4.6 else - $(error Target architecture "${ARCH}" is not supported) + ifeq ($(ARCH), android_x87) + DEFINES = target_arch=x87 v8_target_arch=x87 android_target_arch=x86 android_target_platform=14 + TOOLCHAIN_ARCH = x86 + TOOLCHAIN_PREFIX = i686-linux-android + TOOLCHAIN_VER = 4.6 + else + $(error Target architecture "${ARCH}" is not supported) + endif endif endif endif @@ -91,6 +98,7 @@ endif # For mksnapshot host generation. DEFINES += host_os=${HOST_OS} +DEFINES += OS=android .SECONDEXPANSION: $(ANDROID_BUILDS): $(OUTDIR)/Makefile.$$@ @@ -112,7 +120,7 @@ $(ANDROID_MAKEFILES): GYP_DEFINES="${DEFINES}" \ CC="${ANDROID_TOOLCHAIN}/bin/${TOOLCHAIN_PREFIX}-gcc" \ CXX="${ANDROID_TOOLCHAIN}/bin/${TOOLCHAIN_PREFIX}-g++" \ - PYTHONPATH="$(shell pwd)/tools/generate_shim_headers:$(PYTHONPATH)" \ + PYTHONPATH="$(shell pwd)/tools/generate_shim_headers:$(shell pwd)/build:$(PYTHONPATH)" \ build/gyp/gyp --generator-output="${OUTDIR}" build/all.gyp \ -Ibuild/standalone.gypi --depth=. -Ibuild/android.gypi \ -S$(suffix $(basename $@))$(suffix $@) ${GYPFLAGS} diff --git a/deps/v8/Makefile.nacl b/deps/v8/Makefile.nacl index 1d34a3b30aa..34bd960fed1 100644 --- a/deps/v8/Makefile.nacl +++ b/deps/v8/Makefile.nacl @@ -97,7 +97,7 @@ $(NACL_MAKEFILES): GYP_DEFINES="${GYPENV}" \ CC=${NACL_CC} \ CXX=${NACL_CXX} \ - PYTHONPATH="$(shell pwd)/tools/generate_shim_headers:$(PYTHONPATH)" \ + PYTHONPATH="$(shell pwd)/tools/generate_shim_headers:$(shell pwd)/build:$(PYTHONPATH)" \ build/gyp/gyp --generator-output="${OUTDIR}" build/all.gyp \ -Ibuild/standalone.gypi --depth=. \ -S$(suffix $(basename $@))$(suffix $@) $(GYPFLAGS) \ diff --git a/deps/v8/OWNERS b/deps/v8/OWNERS index 2fbb3ef2ac1..f67b3ec5c62 100644 --- a/deps/v8/OWNERS +++ b/deps/v8/OWNERS @@ -18,4 +18,5 @@ titzer@chromium.org ulan@chromium.org vegorov@chromium.org verwaest@chromium.org +vogelheim@chromium.org yangguo@chromium.org diff --git a/deps/v8/PRESUBMIT.py b/deps/v8/PRESUBMIT.py index 41d79eb5300..55bb99ab8ac 100644 --- a/deps/v8/PRESUBMIT.py +++ b/deps/v8/PRESUBMIT.py @@ -31,6 +31,9 @@ for more details about the presubmit API built into gcl. 
""" +import sys + + def _V8PresubmitChecks(input_api, output_api): """Runs the V8 presubmit checks.""" import sys @@ -38,6 +41,8 @@ def _V8PresubmitChecks(input_api, output_api): input_api.PresubmitLocalPath(), 'tools')) from presubmit import CppLintProcessor from presubmit import SourceProcessor + from presubmit import CheckGeneratedRuntimeTests + from presubmit import CheckExternalReferenceRegistration results = [] if not CppLintProcessor().Run(input_api.PresubmitLocalPath()): @@ -46,6 +51,65 @@ def _V8PresubmitChecks(input_api, output_api): results.append(output_api.PresubmitError( "Copyright header, trailing whitespaces and two empty lines " \ "between declarations check failed")) + if not CheckGeneratedRuntimeTests(input_api.PresubmitLocalPath()): + results.append(output_api.PresubmitError( + "Generated runtime tests check failed")) + if not CheckExternalReferenceRegistration(input_api.PresubmitLocalPath()): + results.append(output_api.PresubmitError( + "External references registration check failed")) + return results + + +def _CheckUnwantedDependencies(input_api, output_api): + """Runs checkdeps on #include statements added in this + change. Breaking - rules is an error, breaking ! rules is a + warning. + """ + # We need to wait until we have an input_api object and use this + # roundabout construct to import checkdeps because this file is + # eval-ed and thus doesn't have __file__. + original_sys_path = sys.path + try: + sys.path = sys.path + [input_api.os_path.join( + input_api.PresubmitLocalPath(), 'buildtools', 'checkdeps')] + import checkdeps + from cpp_checker import CppChecker + from rules import Rule + finally: + # Restore sys.path to what it was before. + sys.path = original_sys_path + + added_includes = [] + for f in input_api.AffectedFiles(): + if not CppChecker.IsCppFile(f.LocalPath()): + continue + + changed_lines = [line for line_num, line in f.ChangedContents()] + added_includes.append([f.LocalPath(), changed_lines]) + + deps_checker = checkdeps.DepsChecker(input_api.PresubmitLocalPath()) + + error_descriptions = [] + warning_descriptions = [] + for path, rule_type, rule_description in deps_checker.CheckAddedCppIncludes( + added_includes): + description_with_path = '%s\n %s' % (path, rule_description) + if rule_type == Rule.DISALLOW: + error_descriptions.append(description_with_path) + else: + warning_descriptions.append(description_with_path) + + results = [] + if error_descriptions: + results.append(output_api.PresubmitError( + 'You added one or more #includes that violate checkdeps rules.', + error_descriptions)) + if warning_descriptions: + results.append(output_api.PresubmitPromptOrNotify( + 'You added one or more #includes of files that are temporarily\n' + 'allowed but being removed. Can you avoid introducing the\n' + '#include? 
See relevant DEPS file(s) for details and contacts.', + warning_descriptions)) return results @@ -54,7 +118,10 @@ def _CommonChecks(input_api, output_api): results = [] results.extend(input_api.canned_checks.CheckOwners( input_api, output_api, source_file_filter=None)) + results.extend(input_api.canned_checks.CheckPatchFormatted( + input_api, output_api)) results.extend(_V8PresubmitChecks(input_api, output_api)) + results.extend(_CheckUnwantedDependencies(input_api, output_api)) return results @@ -110,7 +177,9 @@ def GetPreferredTryMasters(project, change): 'v8_linux64_rel': set(['defaulttests']), 'v8_linux_arm_dbg': set(['defaulttests']), 'v8_linux_arm64_rel': set(['defaulttests']), + 'v8_linux_layout_dbg': set(['defaulttests']), 'v8_mac_rel': set(['defaulttests']), 'v8_win_rel': set(['defaulttests']), + 'v8_win64_rel': set(['defaulttests']), }, } diff --git a/deps/v8/benchmarks/v8.json b/deps/v8/benchmarks/v8.json new file mode 100644 index 00000000000..f4210d9d406 --- /dev/null +++ b/deps/v8/benchmarks/v8.json @@ -0,0 +1,16 @@ +{ + "path": ["."], + "main": "run.js", + "run_count": 2, + "results_regexp": "^%s: (.+)$", + "benchmarks": [ + {"name": "Richards"}, + {"name": "DeltaBlue"}, + {"name": "Crypto"}, + {"name": "RayTrace"}, + {"name": "EarleyBoyer"}, + {"name": "RegExp"}, + {"name": "Splay"}, + {"name": "NavierStokes"} + ] +} diff --git a/deps/v8/build/all.gyp b/deps/v8/build/all.gyp index 3860379ea9c..5e410a3d0f2 100644 --- a/deps/v8/build/all.gyp +++ b/deps/v8/build/all.gyp @@ -10,7 +10,9 @@ 'dependencies': [ '../samples/samples.gyp:*', '../src/d8.gyp:d8', + '../test/base-unittests/base-unittests.gyp:*', '../test/cctest/cctest.gyp:*', + '../test/compiler-unittests/compiler-unittests.gyp:*', ], 'conditions': [ ['component!="shared_library"', { diff --git a/deps/v8/build/android.gypi b/deps/v8/build/android.gypi index 73ac93a434a..46ece08524e 100644 --- a/deps/v8/build/android.gypi +++ b/deps/v8/build/android.gypi @@ -35,9 +35,6 @@ 'variables': { 'android_ndk_root%': '<!(/bin/echo -n $ANDROID_NDK_ROOT)', 'android_toolchain%': '<!(/bin/echo -n $ANDROID_TOOLCHAIN)', - # This is set when building the Android WebView inside the Android build - # system, using the 'android' gyp backend. - 'android_webview_build%': 0, }, 'conditions': [ ['android_ndk_root==""', { @@ -64,9 +61,6 @@ # link the NDK one? 'use_system_stlport%': '<(android_webview_build)', 'android_stlport_library': 'stlport_static', - # Copy it out one scope. - 'android_webview_build%': '<(android_webview_build)', - 'OS': 'android', }, # variables 'target_defaults': { 'defines': [ @@ -81,7 +75,12 @@ }, # Release }, # configurations 'cflags': [ '-Wno-abi', '-Wall', '-W', '-Wno-unused-parameter', - '-Wnon-virtual-dtor', '-fno-rtti', '-fno-exceptions', ], + '-Wnon-virtual-dtor', '-fno-rtti', '-fno-exceptions', + # Note: Using -std=c++0x will define __STRICT_ANSI__, which in + # turn will leave out some template stuff for 'long long'. What + # we want is -std=c++11, but this is not supported by GCC 4.6 or + # Xcode 4.2 + '-std=gnu++0x' ], 'target_conditions': [ ['_toolset=="target"', { 'cflags!': [ @@ -179,7 +178,7 @@ '-L<(android_stlport_libs)/mips', ], }], - ['target_arch=="ia32"', { + ['target_arch=="ia32" or target_arch=="x87"', { 'ldflags': [ '-L<(android_stlport_libs)/x86', ], @@ -196,7 +195,7 @@ }], ], }], - ['target_arch=="ia32"', { + ['target_arch=="ia32" or target_arch=="x87"', { # The x86 toolchain currently has problems with stack-protector. 
'cflags!': [ '-fstack-protector', @@ -215,6 +214,15 @@ '-fno-stack-protector', ], }], + ['target_arch=="arm64" or target_arch=="x64"', { + # TODO(ulan): Enable PIE for other architectures (crbug.com/373219). + 'cflags': [ + '-fPIE', + ], + 'ldflags': [ + '-pie', + ], + }], ], 'target_conditions': [ ['_type=="executable"', { @@ -257,15 +265,8 @@ }], # _toolset=="target" # Settings for building host targets using the system toolchain. ['_toolset=="host"', { - 'conditions': [ - ['target_arch=="x64"', { - 'cflags': [ '-m64', '-pthread' ], - 'ldflags': [ '-m64', '-pthread' ], - }, { - 'cflags': [ '-m32', '-pthread' ], - 'ldflags': [ '-m32', '-pthread' ], - }], - ], + 'cflags': [ '-pthread' ], + 'ldflags': [ '-pthread' ], 'ldflags!': [ '-Wl,-z,noexecstack', '-Wl,--gc-sections', diff --git a/deps/v8/build/detect_v8_host_arch.py b/deps/v8/build/detect_v8_host_arch.py new file mode 100644 index 00000000000..3460a9a404f --- /dev/null +++ b/deps/v8/build/detect_v8_host_arch.py @@ -0,0 +1,69 @@ +#!/usr/bin/env python +# Copyright 2014 the V8 project authors. All rights reserved. +# Redistribution and use in source and binary forms, with or without +# modification, are permitted provided that the following conditions are +# met: +# +# * Redistributions of source code must retain the above copyright +# notice, this list of conditions and the following disclaimer. +# * Redistributions in binary form must reproduce the above +# copyright notice, this list of conditions and the following +# disclaimer in the documentation and/or other materials provided +# with the distribution. +# * Neither the name of Google Inc. nor the names of its +# contributors may be used to endorse or promote products derived +# from this software without specific prior written permission. +# +# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS +# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT +# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR +# A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT +# OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, +# SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT +# LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, +# DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY +# THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT +# (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE +# OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. + +"""Outputs host CPU architecture in format recognized by gyp.""" + +import platform +import re +import sys + + +def main(): + print DoMain([]) + return 0 + +def DoMain(_): + """Hook to be called from gyp without starting a separate python + interpreter.""" + host_arch = platform.machine() + + # Convert machine type to format recognized by gyp. + if re.match(r'i.86', host_arch) or host_arch == 'i86pc': + host_arch = 'ia32' + elif host_arch in ['x86_64', 'amd64']: + host_arch = 'x64' + elif host_arch.startswith('arm'): + host_arch = 'arm' + elif host_arch == 'aarch64': + host_arch = 'arm64' + elif host_arch == 'mips64': + host_arch = 'mips64el' + elif host_arch.startswith('mips'): + host_arch = 'mipsel' + + # platform.machine is based on running kernel. It's possible to use 64-bit + # kernel with 32-bit userland, e.g. to give linker slightly more memory. + # Distinguish between different userland bitness by querying + # the python binary. 
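+  # For example, a 32-bit python running on a 64-bit kernel reports
+  # platform.machine() == 'x86_64' but platform.architecture()[0] == '32bit',
+  # so 'ia32' is returned here rather than 'x64'.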
+ if host_arch == 'x64' and platform.architecture()[0] == '32bit': + host_arch = 'ia32' + + return host_arch + +if __name__ == '__main__': + sys.exit(main()) diff --git a/deps/v8/build/features.gypi b/deps/v8/build/features.gypi index e8f5b2f08f3..7ce66e4c98e 100644 --- a/deps/v8/build/features.gypi +++ b/deps/v8/build/features.gypi @@ -41,6 +41,8 @@ 'v8_use_snapshot%': 'true', + 'v8_enable_verify_predictable%': 0, + # With post mortem support enabled, metadata is embedded into libv8 that # describes various parameters of the VM for use by debuggers. See # tools/gen-postmortem-metadata.py for details. @@ -57,8 +59,9 @@ # Enable compiler warnings when using V8_DEPRECATED apis. 'v8_deprecation_warnings%': 0, - # Use the v8 provided v8::Platform implementation. - 'v8_use_default_platform%': 1, + # Use external files for startup data blobs: + # the JS builtins sources and the start snapshot. + 'v8_use_external_startup_data%': 0, }, 'target_defaults': { 'conditions': [ @@ -74,6 +77,9 @@ ['v8_enable_verify_heap==1', { 'defines': ['VERIFY_HEAP',], }], + ['v8_enable_verify_predictable==1', { + 'defines': ['VERIFY_PREDICTABLE',], + }], ['v8_interpreted_regexp==1', { 'defines': ['V8_INTERPRETED_REGEXP',], }], @@ -83,13 +89,11 @@ ['v8_enable_i18n_support==1', { 'defines': ['V8_I18N_SUPPORT',], }], - ['v8_use_default_platform==1', { - 'defines': ['V8_USE_DEFAULT_PLATFORM',], - }], ['v8_compress_startup_data=="bz2"', { - 'defines': [ - 'COMPRESS_STARTUP_DATA_BZ2', - ], + 'defines': ['COMPRESS_STARTUP_DATA_BZ2',], + }], + ['v8_use_external_startup_data==1', { + 'defines': ['V8_USE_EXTERNAL_STARTUP_DATA',], }], ], # conditions 'configurations': { diff --git a/deps/v8/build/get_landmines.py b/deps/v8/build/get_landmines.py new file mode 100755 index 00000000000..c6ff8165f93 --- /dev/null +++ b/deps/v8/build/get_landmines.py @@ -0,0 +1,26 @@ +#!/usr/bin/env python +# Copyright 2014 the V8 project authors. All rights reserved. +# Use of this source code is governed by a BSD-style license that can be +# found in the LICENSE file. + +""" +This file emits the list of reasons why a particular build needs to be clobbered +(or a list of 'landmines'). +""" + +import sys + + +def main(): + """ + ALL LANDMINES ARE EMITTED FROM HERE. + """ + print 'Need to clobber after ICU52 roll.' + print 'Landmines test.' + print 'Activating MSVS 2013.' + print 'Revert activation of MSVS 2013.' + return 0 + + +if __name__ == '__main__': + sys.exit(main()) diff --git a/deps/v8/build/gyp_v8 b/deps/v8/build/gyp_v8 index bc733dfca6f..14467eccaad 100755 --- a/deps/v8/build/gyp_v8 +++ b/deps/v8/build/gyp_v8 @@ -34,6 +34,7 @@ import glob import os import platform import shlex +import subprocess import sys script_dir = os.path.dirname(os.path.realpath(__file__)) @@ -107,6 +108,14 @@ def additional_include_files(args=[]): def run_gyp(args): rc = gyp.main(args) + + # Check for landmines (reasons to clobber the build). This must be run here, + # rather than a separate runhooks step so that any environment modifications + # from above are picked up. + print 'Running build/landmines.py...' + subprocess.check_call( + [sys.executable, os.path.join(script_dir, 'landmines.py')]) + if rc != 0: print 'Error running GYP' sys.exit(rc) diff --git a/deps/v8/build/landmine_utils.py b/deps/v8/build/landmine_utils.py new file mode 100644 index 00000000000..e8b7c98d5fc --- /dev/null +++ b/deps/v8/build/landmine_utils.py @@ -0,0 +1,114 @@ +# Copyright 2014 the V8 project authors. All rights reserved. 
+# Use of this source code is governed by a BSD-style license that can be
+# found in the LICENSE file.
+
+
+import functools
+import logging
+import os
+import shlex
+import sys
+
+
+def memoize(default=None):
+  """This decorator caches the return value of a parameterless pure function"""
+  def memoizer(func):
+    val = []
+    @functools.wraps(func)
+    def inner():
+      if not val:
+        ret = func()
+        val.append(ret if ret is not None else default)
+        if logging.getLogger().isEnabledFor(logging.INFO):
+          print '%s -> %r' % (func.__name__, val[0])
+      return val[0]
+    return inner
+  return memoizer
+
+
+@memoize()
+def IsWindows():
+  return sys.platform in ['win32', 'cygwin']
+
+
+@memoize()
+def IsLinux():
+  return sys.platform.startswith(('linux', 'freebsd'))
+
+
+@memoize()
+def IsMac():
+  return sys.platform == 'darwin'
+
+
+@memoize()
+def gyp_defines():
+  """Parses and returns GYP_DEFINES env var as a dictionary."""
+  return dict(arg.split('=', 1)
+      for arg in shlex.split(os.environ.get('GYP_DEFINES', '')))
+
+@memoize()
+def gyp_msvs_version():
+  return os.environ.get('GYP_MSVS_VERSION', '')
+
+@memoize()
+def distributor():
+  """
+  Returns a string which is the distributed build engine in use (if any).
+  Possible values: 'goma', 'ib', ''
+  """
+  if 'goma' in gyp_defines():
+    return 'goma'
+  elif IsWindows():
+    if 'CHROME_HEADLESS' in os.environ:
+      return 'ib' # use (win and !goma and headless) as approximation of ib
+
+
+@memoize()
+def platform():
+  """
+  Returns a string representing the platform this build is targeted for.
+  Possible values: 'win', 'mac', 'linux', 'ios', 'android'
+  """
+  if 'OS' in gyp_defines():
+    if 'android' in gyp_defines()['OS']:
+      return 'android'
+    else:
+      return gyp_defines()['OS']
+  elif IsWindows():
+    return 'win'
+  elif IsLinux():
+    return 'linux'
+  else:
+    return 'mac'
+
+
+@memoize()
+def builder():
+  """
+  Returns a string representing the build engine (not compiler) to use.
+  Possible values: 'make', 'ninja', 'xcode', 'msvs', 'scons'
+  """
+  if 'GYP_GENERATORS' in os.environ:
+    # for simplicity, only support the first explicit generator
+    generator = os.environ['GYP_GENERATORS'].split(',')[0]
+    if generator.endswith('-android'):
+      return generator.split('-')[0]
+    elif generator.endswith('-ninja'):
+      return 'ninja'
+    else:
+      return generator
+  else:
+    if platform() == 'android':
+      # Good enough for now? Do any android bots use make?
+      return 'make'
+    elif platform() == 'ios':
+      return 'xcode'
+    elif IsWindows():
+      return 'msvs'
+    elif IsLinux():
+      return 'make'
+    elif IsMac():
+      return 'xcode'
+    else:
+      assert False, 'Don\'t know what builder we\'re using!'
diff --git a/deps/v8/build/landmines.py b/deps/v8/build/landmines.py
new file mode 100755
index 00000000000..bd1fb28f719
--- /dev/null
+++ b/deps/v8/build/landmines.py
@@ -0,0 +1,139 @@
+#!/usr/bin/env python
+# Copyright 2014 the V8 project authors. All rights reserved.
+# Use of this source code is governed by a BSD-style license that can be
+# found in the LICENSE file.
+
+"""
+This script runs every build as a hook. If it detects that the build should
+be clobbered, it will touch the file <build_dir>/.landmines_triggered. The
+various build scripts will then check for the presence of this file and clobber
+accordingly. The script will also emit the reasons for the clobber to stdout.
+
+A landmine is tripped when a builder checks out a different revision, and the
+diff between the new landmines and the old ones is non-null. At this point, the
+build is clobbered.
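+
+For example: if the stored .landmines file contains only 'Need to clobber
+after ICU52 roll.' and get_landmines.py (the default emitter below) now also
+prints 'Activating MSVS 2013.', the unified diff of the two lists is
+non-empty, so .landmines_triggered is written and the next build clobbers.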
+""" + +import difflib +import logging +import optparse +import os +import sys +import subprocess +import time + +import landmine_utils + + +SRC_DIR = os.path.dirname(os.path.dirname(os.path.realpath(__file__))) + + +def get_target_build_dir(build_tool, target): + """ + Returns output directory absolute path dependent on build and targets. + Examples: + r'c:\b\build\slave\win\build\src\out\Release' + '/mnt/data/b/build/slave/linux/build/src/out/Debug' + '/b/build/slave/ios_rel_device/build/src/xcodebuild/Release-iphoneos' + + Keep this function in sync with tools/build/scripts/slave/compile.py + """ + ret = None + if build_tool == 'xcode': + ret = os.path.join(SRC_DIR, 'xcodebuild', target) + elif build_tool in ['make', 'ninja', 'ninja-ios']: # TODO: Remove ninja-ios. + ret = os.path.join(SRC_DIR, 'out', target) + elif build_tool in ['msvs', 'vs', 'ib']: + ret = os.path.join(SRC_DIR, 'build', target) + else: + raise NotImplementedError('Unexpected GYP_GENERATORS (%s)' % build_tool) + return os.path.abspath(ret) + + +def set_up_landmines(target, new_landmines): + """Does the work of setting, planting, and triggering landmines.""" + out_dir = get_target_build_dir(landmine_utils.builder(), target) + + landmines_path = os.path.join(out_dir, '.landmines') + if not os.path.exists(out_dir): + return + + if not os.path.exists(landmines_path): + print "Landmines tracker didn't exists." + + # FIXME(machenbach): Clobber deletes the .landmines tracker. Difficult + # to know if we are right after a clobber or if it is first-time landmines + # deployment. Also, a landmine-triggered clobber right after a clobber is + # not possible. Different clobber methods for msvs, xcode and make all + # have different blacklists of files that are not deleted. + if os.path.exists(landmines_path): + triggered = os.path.join(out_dir, '.landmines_triggered') + with open(landmines_path, 'r') as f: + old_landmines = f.readlines() + if old_landmines != new_landmines: + old_date = time.ctime(os.stat(landmines_path).st_ctime) + diff = difflib.unified_diff(old_landmines, new_landmines, + fromfile='old_landmines', tofile='new_landmines', + fromfiledate=old_date, tofiledate=time.ctime(), n=0) + + with open(triggered, 'w') as f: + f.writelines(diff) + print "Setting landmine: %s" % triggered + elif os.path.exists(triggered): + # Remove false triggered landmines. + os.remove(triggered) + print "Removing landmine: %s" % triggered + with open(landmines_path, 'w') as f: + f.writelines(new_landmines) + + +def process_options(): + """Returns a list of landmine emitting scripts.""" + parser = optparse.OptionParser() + parser.add_option( + '-s', '--landmine-scripts', action='append', + default=[os.path.join(SRC_DIR, 'build', 'get_landmines.py')], + help='Path to the script which emits landmines to stdout. The target ' + 'is passed to this script via option -t. Note that an extra ' + 'script can be specified via an env var EXTRA_LANDMINES_SCRIPT.') + parser.add_option('-v', '--verbose', action='store_true', + default=('LANDMINES_VERBOSE' in os.environ), + help=('Emit some extra debugging information (default off). 
This option ' + 'is also enabled by the presence of a LANDMINES_VERBOSE environment ' + 'variable.')) + + options, args = parser.parse_args() + + if args: + parser.error('Unknown arguments %s' % args) + + logging.basicConfig( + level=logging.DEBUG if options.verbose else logging.ERROR) + + extra_script = os.environ.get('EXTRA_LANDMINES_SCRIPT') + if extra_script: + return options.landmine_scripts + [extra_script] + else: + return options.landmine_scripts + + +def main(): + landmine_scripts = process_options() + + if landmine_utils.builder() in ('dump_dependency_json', 'eclipse'): + return 0 + + landmines = [] + for s in landmine_scripts: + proc = subprocess.Popen([sys.executable, s], stdout=subprocess.PIPE) + output, _ = proc.communicate() + landmines.extend([('%s\n' % l.strip()) for l in output.splitlines()]) + + for target in ('Debug', 'Release'): + set_up_landmines(target, landmines) + + return 0 + + +if __name__ == '__main__': + sys.exit(main()) diff --git a/deps/v8/build/standalone.gypi b/deps/v8/build/standalone.gypi index befa73851e3..2ed19f65eac 100644 --- a/deps/v8/build/standalone.gypi +++ b/deps/v8/build/standalone.gypi @@ -33,8 +33,8 @@ 'includes': ['toolchain.gypi'], 'variables': { 'component%': 'static_library', - 'clang%': 0, 'asan%': 0, + 'tsan%': 0, 'visibility%': 'hidden', 'v8_enable_backtrace%': 0, 'v8_enable_i18n_support%': 1, @@ -51,13 +51,7 @@ # Anything else gets passed through, which probably won't work # very well; such hosts should pass an explicit target_arch # to gyp. - 'host_arch%': - '<!(uname -m | sed -e "s/i.86/ia32/;\ - s/x86_64/x64/;\ - s/amd64/x64/;\ - s/arm.*/arm/;\ - s/aarch64/arm64/;\ - s/mips.*/mipsel/")', + 'host_arch%': '<!pymod_do_main(detect_v8_host_arch)', }, { # OS!="linux" and OS!="freebsd" and OS!="openbsd" and # OS!="netbsd" and OS!="mac" @@ -104,6 +98,7 @@ ['(v8_target_arch=="arm" and host_arch!="arm") or \ (v8_target_arch=="arm64" and host_arch!="arm64") or \ (v8_target_arch=="mipsel" and host_arch!="mipsel") or \ + (v8_target_arch=="mips64el" and host_arch!="mips64el") or \ (v8_target_arch=="x64" and host_arch!="x64") or \ (OS=="android" or OS=="qnx")', { 'want_separate_host_toolset': 1, @@ -115,16 +110,20 @@ }, { 'os_posix%': 1, }], - ['(v8_target_arch=="ia32" or v8_target_arch=="x64") and \ + ['(v8_target_arch=="ia32" or v8_target_arch=="x64" or v8_target_arch=="x87") and \ (OS=="linux" or OS=="mac")', { 'v8_enable_gdbjit%': 1, }, { 'v8_enable_gdbjit%': 0, }], + ['OS=="mac"', { + 'clang%': 1, + }, { + 'clang%': 0, + }], ], # Default ARM variable settings. 
'arm_version%': 'default', - 'arm_neon%': 0, 'arm_fpu%': 'vfpv3', 'arm_float_abi%': 'default', 'arm_thumb': 'default', @@ -192,17 +191,36 @@ ], }, }], + ['tsan==1', { + 'target_defaults': { + 'cflags+': [ + '-fno-omit-frame-pointer', + '-gline-tables-only', + '-fsanitize=thread', + '-fPIC', + '-Wno-c++11-extensions', + ], + 'cflags!': [ + '-fomit-frame-pointer', + ], + 'ldflags': [ + '-fsanitize=thread', + '-pie', + ], + 'defines': [ + 'THREAD_SANITIZER', + ], + }, + }], ['OS=="linux" or OS=="freebsd" or OS=="openbsd" or OS=="solaris" \ or OS=="netbsd"', { 'target_defaults': { 'cflags': [ '-Wall', '<(werror)', '-W', '-Wno-unused-parameter', - '-pthread', '-fno-exceptions', '-pedantic' ], - 'cflags_cc': [ '-Wnon-virtual-dtor', '-fno-rtti' ], + '-Wno-long-long', '-pthread', '-fno-exceptions', + '-pedantic' ], + 'cflags_cc': [ '-Wnon-virtual-dtor', '-fno-rtti', '-std=gnu++0x' ], 'ldflags': [ '-pthread', ], 'conditions': [ - [ 'OS=="linux"', { - 'cflags': [ '-ansi' ], - }], [ 'visibility=="hidden" and v8_enable_backtrace==0', { 'cflags': [ '-fvisibility=hidden' ], }], @@ -218,7 +236,7 @@ 'target_defaults': { 'cflags': [ '-Wall', '<(werror)', '-W', '-Wno-unused-parameter', '-fno-exceptions' ], - 'cflags_cc': [ '-Wnon-virtual-dtor', '-fno-rtti' ], + 'cflags_cc': [ '-Wnon-virtual-dtor', '-fno-rtti', '-std=gnu++0x' ], 'conditions': [ [ 'visibility=="hidden"', { 'cflags': [ '-fvisibility=hidden' ], @@ -316,7 +334,7 @@ 'target_defaults': { 'xcode_settings': { 'ALWAYS_SEARCH_USER_PATHS': 'NO', - 'GCC_C_LANGUAGE_STANDARD': 'ansi', # -ansi + 'GCC_C_LANGUAGE_STANDARD': 'c99', # -std=c99 'GCC_CW_ASM_SYNTAX': 'NO', # No -fasm-blocks 'GCC_DYNAMIC_NO_PIC': 'NO', # No -mdynamic-no-pic # (Equivalent to -fPIC) @@ -352,7 +370,7 @@ ['clang==1', { 'xcode_settings': { 'GCC_VERSION': 'com.apple.compilers.llvm.clang.1_0', - 'CLANG_CXX_LANGUAGE_STANDARD': 'gnu++11', # -std=gnu++11 + 'CLANG_CXX_LANGUAGE_STANDARD': 'gnu++0x', # -std=gnu++0x }, }], ], diff --git a/deps/v8/build/toolchain.gypi b/deps/v8/build/toolchain.gypi index a9958ce8d66..1d47360d2a7 100644 --- a/deps/v8/build/toolchain.gypi +++ b/deps/v8/build/toolchain.gypi @@ -31,7 +31,7 @@ 'variables': { 'msvs_use_common_release': 0, 'gcc_version%': 'unknown', - 'CXX%': '${CXX:-$(which g++)}', # Used to assemble a shell command. + 'clang%': 0, 'v8_target_arch%': '<(target_arch)', # Native Client builds currently use the V8 ARM JIT and # arm/simulator-arm.cc to defer the significant effort required @@ -47,7 +47,7 @@ # these registers in the snapshot and use CPU feature probing when running # on the target. 'v8_can_use_vfp32dregs%': 'false', - 'arm_test%': 'off', + 'arm_test_noprobe%': 'off', # Similar to vfp but on MIPS. 'v8_can_use_fpu_instructions%': 'true', @@ -56,7 +56,7 @@ 'v8_use_mips_abi_hardfloat%': 'true', # Default arch variant for MIPS. - 'mips_arch_variant%': 'mips32r2', + 'mips_arch_variant%': 'r2', 'v8_enable_backtrace%': 0, @@ -82,35 +82,85 @@ # Allow to suppress the array bounds warning (default is no suppression). 'wno_array_bounds%': '', + + 'variables': { + # This is set when building the Android WebView inside the Android build + # system, using the 'android' gyp backend. + 'android_webview_build%': 0, + }, + # Copy it out one scope. 
+ 'android_webview_build%': '<(android_webview_build)', }, + 'conditions': [ + ['host_arch=="ia32" or host_arch=="x64" or clang==1', { + 'variables': { + 'host_cxx_is_biarch%': 1, + }, + }, { + 'variables': { + 'host_cxx_is_biarch%': 0, + }, + }], + ['target_arch=="ia32" or target_arch=="x64" or target_arch=="x87" or \ + clang==1', { + 'variables': { + 'target_cxx_is_biarch%': 1, + }, + }, { + 'variables': { + 'target_cxx_is_biarch%': 0, + }, + }], + ], 'target_defaults': { 'conditions': [ ['v8_target_arch=="arm"', { 'defines': [ 'V8_TARGET_ARCH_ARM', ], + 'conditions': [ + [ 'arm_version==7 or arm_version=="default"', { + 'defines': [ + 'CAN_USE_ARMV7_INSTRUCTIONS', + ], + }], + [ 'arm_fpu=="vfpv3-d16" or arm_fpu=="default"', { + 'defines': [ + 'CAN_USE_VFP3_INSTRUCTIONS', + ], + }], + [ 'arm_fpu=="vfpv3"', { + 'defines': [ + 'CAN_USE_VFP3_INSTRUCTIONS', + 'CAN_USE_VFP32DREGS', + ], + }], + [ 'arm_fpu=="neon"', { + 'defines': [ + 'CAN_USE_VFP3_INSTRUCTIONS', + 'CAN_USE_VFP32DREGS', + 'CAN_USE_NEON', + ], + }], + [ 'arm_test_noprobe=="on"', { + 'defines': [ + 'ARM_TEST_NO_FEATURE_PROBE', + ], + }], + ], 'target_conditions': [ ['_toolset=="host"', { - 'variables': { - 'armcompiler': '<!($(echo ${CXX_host:-$(which g++)}) -v 2>&1 | grep -q "^Target: arm" && echo "yes" || echo "no")', - }, 'conditions': [ - ['armcompiler=="yes"', { + ['v8_target_arch==host_arch and android_webview_build==0', { + # Host built with an Arm CXX compiler. 'conditions': [ [ 'arm_version==7', { 'cflags': ['-march=armv7-a',], }], [ 'arm_version==7 or arm_version=="default"', { 'conditions': [ - [ 'arm_neon==1', { - 'cflags': ['-mfpu=neon',], - }, - { - 'conditions': [ - [ 'arm_fpu!="default"', { - 'cflags': ['-mfpu=<(arm_fpu)',], - }], - ], + [ 'arm_fpu!="default"', { + 'cflags': ['-mfpu=<(arm_fpu)',], }], ], }], @@ -123,44 +173,11 @@ [ 'arm_thumb==0', { 'cflags': ['-marm',], }], - [ 'arm_test=="on"', { - 'defines': [ - 'ARM_TEST', - ], - }], ], }, { - # armcompiler=="no" + # 'v8_target_arch!=host_arch' + # Host not built with an Arm CXX compiler (simulator build). 'conditions': [ - [ 'arm_version==7 or arm_version=="default"', { - 'defines': [ - 'CAN_USE_ARMV7_INSTRUCTIONS=1', - ], - 'conditions': [ - [ 'arm_fpu=="default"', { - 'defines': [ - 'CAN_USE_VFP3_INSTRUCTIONS', - ], - }], - [ 'arm_fpu=="vfpv3-d16"', { - 'defines': [ - 'CAN_USE_VFP3_INSTRUCTIONS', - ], - }], - [ 'arm_fpu=="vfpv3"', { - 'defines': [ - 'CAN_USE_VFP3_INSTRUCTIONS', - 'CAN_USE_VFP32DREGS', - ], - }], - [ 'arm_fpu=="neon" or arm_neon==1', { - 'defines': [ - 'CAN_USE_VFP3_INSTRUCTIONS', - 'CAN_USE_VFP32DREGS', - ], - }], - ], - }], [ 'arm_float_abi=="hard"', { 'defines': [ 'USE_EABI_HARDFLOAT=1', @@ -172,33 +189,21 @@ ], }], ], - 'defines': [ - 'ARM_TEST', - ], }], ], }], # _toolset=="host" ['_toolset=="target"', { - 'variables': { - 'armcompiler': '<!($(echo ${CXX_target:-<(CXX)}) -v 2>&1 | grep -q "^Target: arm" && echo "yes" || echo "no")', - }, 'conditions': [ - ['armcompiler=="yes"', { + ['v8_target_arch==target_arch and android_webview_build==0', { + # Target built with an Arm CXX compiler. 
'conditions': [ [ 'arm_version==7', { 'cflags': ['-march=armv7-a',], }], [ 'arm_version==7 or arm_version=="default"', { 'conditions': [ - [ 'arm_neon==1', { - 'cflags': ['-mfpu=neon',], - }, - { - 'conditions': [ - [ 'arm_fpu!="default"', { - 'cflags': ['-mfpu=<(arm_fpu)',], - }], - ], + [ 'arm_fpu!="default"', { + 'cflags': ['-mfpu=<(arm_fpu)',], }], ], }], @@ -211,44 +216,11 @@ [ 'arm_thumb==0', { 'cflags': ['-marm',], }], - [ 'arm_test=="on"', { - 'defines': [ - 'ARM_TEST', - ], - }], ], }, { - # armcompiler=="no" + # 'v8_target_arch!=target_arch' + # Target not built with an Arm CXX compiler (simulator build). 'conditions': [ - [ 'arm_version==7 or arm_version=="default"', { - 'defines': [ - 'CAN_USE_ARMV7_INSTRUCTIONS=1', - ], - 'conditions': [ - [ 'arm_fpu=="default"', { - 'defines': [ - 'CAN_USE_VFP3_INSTRUCTIONS', - ], - }], - [ 'arm_fpu=="vfpv3-d16"', { - 'defines': [ - 'CAN_USE_VFP3_INSTRUCTIONS', - ], - }], - [ 'arm_fpu=="vfpv3"', { - 'defines': [ - 'CAN_USE_VFP3_INSTRUCTIONS', - 'CAN_USE_VFP32DREGS', - ], - }], - [ 'arm_fpu=="neon" or arm_neon==1', { - 'defines': [ - 'CAN_USE_VFP3_INSTRUCTIONS', - 'CAN_USE_VFP32DREGS', - ], - }], - ], - }], [ 'arm_float_abi=="hard"', { 'defines': [ 'USE_EABI_HARDFLOAT=1', @@ -260,9 +232,6 @@ ], }], ], - 'defines': [ - 'ARM_TEST', - ], }], ], }], # _toolset=="target" @@ -278,15 +247,19 @@ 'V8_TARGET_ARCH_IA32', ], }], # v8_target_arch=="ia32" + ['v8_target_arch=="x87"', { + 'defines': [ + 'V8_TARGET_ARCH_X87', + ], + 'cflags': ['-march=i586'], + }], # v8_target_arch=="x87" ['v8_target_arch=="mips"', { 'defines': [ 'V8_TARGET_ARCH_MIPS', ], - 'variables': { - 'mipscompiler': '<!($(echo <(CXX)) -v 2>&1 | grep -q "^Target: mips" && echo "yes" || echo "no")', - }, 'conditions': [ - ['mipscompiler=="yes"', { + ['v8_target_arch==target_arch and android_webview_build==0', { + # Target built with a Mips CXX compiler. 'target_conditions': [ ['_toolset=="target"', { 'cflags': ['-EB'], @@ -299,10 +272,10 @@ 'cflags': ['-msoft-float'], 'ldflags': ['-msoft-float'], }], - ['mips_arch_variant=="mips32r2"', { + ['mips_arch_variant=="r2"', { 'cflags': ['-mips32r2', '-Wa,-mips32r2'], }], - ['mips_arch_variant=="mips32r1"', { + ['mips_arch_variant=="r1"', { 'cflags': ['-mips32', '-Wa,-mips32'], }], ], @@ -324,7 +297,7 @@ '__mips_soft_float=1' ], }], - ['mips_arch_variant=="mips32r2"', { + ['mips_arch_variant=="r2"', { 'defines': ['_MIPS_ARCH_MIPS32R2',], }], ], @@ -333,11 +306,9 @@ 'defines': [ 'V8_TARGET_ARCH_MIPS', ], - 'variables': { - 'mipscompiler': '<!($(echo <(CXX)) -v 2>&1 | grep -q "^Target: mips" && echo "yes" || echo "no")', - }, 'conditions': [ - ['mipscompiler=="yes"', { + ['v8_target_arch==target_arch and android_webview_build==0', { + # Target built with a Mips CXX compiler. 
'target_conditions': [ ['_toolset=="target"', { 'cflags': ['-EL'], @@ -350,10 +321,10 @@ 'cflags': ['-msoft-float'], 'ldflags': ['-msoft-float'], }], - ['mips_arch_variant=="mips32r2"', { + ['mips_arch_variant=="r2"', { 'cflags': ['-mips32r2', '-Wa,-mips32r2'], }], - ['mips_arch_variant=="mips32r1"', { + ['mips_arch_variant=="r1"', { 'cflags': ['-mips32', '-Wa,-mips32'], }], ['mips_arch_variant=="loongson"', { @@ -378,7 +349,7 @@ '__mips_soft_float=1' ], }], - ['mips_arch_variant=="mips32r2"', { + ['mips_arch_variant=="r2"', { 'defines': ['_MIPS_ARCH_MIPS32R2',], }], ['mips_arch_variant=="loongson"', { @@ -386,6 +357,68 @@ }], ], }], # v8_target_arch=="mipsel" + ['v8_target_arch=="mips64el"', { + 'defines': [ + 'V8_TARGET_ARCH_MIPS64', + ], + 'conditions': [ + ['v8_target_arch==target_arch and android_webview_build==0', { + # Target built with a Mips CXX compiler. + 'target_conditions': [ + ['_toolset=="target"', { + 'cflags': ['-EL'], + 'ldflags': ['-EL'], + 'conditions': [ + [ 'v8_use_mips_abi_hardfloat=="true"', { + 'cflags': ['-mhard-float'], + 'ldflags': ['-mhard-float'], + }, { + 'cflags': ['-msoft-float'], + 'ldflags': ['-msoft-float'], + }], + ['mips_arch_variant=="r6"', { + 'cflags': ['-mips64r6', '-mabi=64', '-Wa,-mips64r6'], + 'ldflags': [ + '-mips64r6', '-mabi=64', + '-Wl,--dynamic-linker=$(LDSO_PATH)', + '-Wl,--rpath=$(LD_R_PATH)', + ], + }], + ['mips_arch_variant=="r2"', { + 'cflags': ['-mips64r2', '-mabi=64', '-Wa,-mips64r2'], + 'ldflags': [ + '-mips64r2', '-mabi=64', + '-Wl,--dynamic-linker=$(LDSO_PATH)', + '-Wl,--rpath=$(LD_R_PATH)', + ], + }], + ], + }], + ], + }], + [ 'v8_can_use_fpu_instructions=="true"', { + 'defines': [ + 'CAN_USE_FPU_INSTRUCTIONS', + ], + }], + [ 'v8_use_mips_abi_hardfloat=="true"', { + 'defines': [ + '__mips_hard_float=1', + 'CAN_USE_FPU_INSTRUCTIONS', + ], + }, { + 'defines': [ + '__mips_soft_float=1' + ], + }], + ['mips_arch_variant=="r6"', { + 'defines': ['_MIPS_ARCH_MIPS64R6',], + }], + ['mips_arch_variant=="r2"', { + 'defines': ['_MIPS_ARCH_MIPS64R2',], + }], + ], + }], # v8_target_arch=="mips64el" ['v8_target_arch=="x64"', { 'defines': [ 'V8_TARGET_ARCH_X64', @@ -400,16 +433,42 @@ }, 'msvs_configuration_platform': 'x64', }], # v8_target_arch=="x64" + ['v8_target_arch=="x32"', { + 'defines': [ + # x32 port shares the source code with x64 port. + 'V8_TARGET_ARCH_X64', + 'V8_TARGET_ARCH_32_BIT', + ], + 'cflags': [ + '-mx32', + # Inhibit warning if long long type is used. + '-Wno-long-long', + ], + 'ldflags': [ + '-mx32', + ], + }], # v8_target_arch=="x32" ['OS=="win"', { 'defines': [ 'WIN32', ], + # 4351: VS 2005 and later are warning us that they've fixed a bug + # present in VS 2003 and earlier. + 'msvs_disabled_warnings': [4351], 'msvs_configuration_attributes': { 'OutputDirectory': '<(DEPTH)\\build\\$(ConfigurationName)', 'IntermediateDirectory': '$(OutDir)\\obj\\$(ProjectName)', 'CharacterSet': '1', }, }], + ['OS=="win" and v8_target_arch=="ia32"', { + 'msvs_settings': { + 'VCCLCompilerTool': { + # Ensure no surprising artifacts from 80bit double math with x86. + 'AdditionalOptions': ['/arch:SSE2'], + }, + }, + }], ['OS=="win" and v8_enable_prof==1', { 'msvs_settings': { 'VCLinkerTool': { @@ -417,44 +476,28 @@ }, }, }], - ['OS=="linux" or OS=="freebsd" or OS=="openbsd" or OS=="solaris" \ - or OS=="netbsd" or OS=="qnx"', { - 'conditions': [ - [ 'v8_no_strict_aliasing==1', { - 'cflags': [ '-fno-strict-aliasing' ], - }], - ], # conditions - }], - ['OS=="solaris"', { - 'defines': [ '__C99FEATURES__=1' ], # isinf() etc. 
- }], - ['(OS=="linux" or OS=="freebsd" or OS=="openbsd" or OS=="solaris" \ + ['(OS=="linux" or OS=="freebsd" or OS=="openbsd" or OS=="solaris" \ or OS=="netbsd" or OS=="mac" or OS=="android" or OS=="qnx") and \ (v8_target_arch=="arm" or v8_target_arch=="ia32" or \ - v8_target_arch=="mips" or v8_target_arch=="mipsel")', { - # Check whether the host compiler and target compiler support the - # '-m32' option and set it if so. + v8_target_arch=="x87" or v8_target_arch=="mips" or \ + v8_target_arch=="mipsel")', { 'target_conditions': [ ['_toolset=="host"', { - 'variables': { - 'm32flag': '<!(($(echo ${CXX_host:-$(which g++)}) -m32 -E - > /dev/null 2>&1 < /dev/null) && echo "-m32" || true)', - }, - 'cflags': [ '<(m32flag)' ], - 'ldflags': [ '<(m32flag)' ], + 'conditions': [ + ['host_cxx_is_biarch==1', { + 'cflags': [ '-m32' ], + 'ldflags': [ '-m32' ] + }], + ], 'xcode_settings': { 'ARCHS': [ 'i386' ], }, }], ['_toolset=="target"', { - 'variables': { - 'm32flag': '<!(($(echo ${CXX_target:-<(CXX)}) -m32 -E - > /dev/null 2>&1 < /dev/null) && echo "-m32" || true)', - 'clang%': 0, - }, 'conditions': [ - ['((OS!="android" and OS!="qnx") or clang==1) and \ - nacl_target_arch!="nacl_x64"', { - 'cflags': [ '<(m32flag)' ], - 'ldflags': [ '<(m32flag)' ], + ['target_cxx_is_biarch==1 and nacl_target_arch!="nacl_x64"', { + 'cflags': [ '-m32' ], + 'ldflags': [ '-m32' ], }], ], 'xcode_settings': { @@ -465,28 +508,35 @@ }], ['(OS=="linux" or OS=="android") and \ (v8_target_arch=="x64" or v8_target_arch=="arm64")', { - # Check whether the host compiler and target compiler support the - # '-m64' option and set it if so. 'target_conditions': [ ['_toolset=="host"', { - 'variables': { - 'm64flag': '<!(($(echo ${CXX_host:-$(which g++)}) -m64 -E - > /dev/null 2>&1 < /dev/null) && echo "-m64" || true)', - }, - 'cflags': [ '<(m64flag)' ], - 'ldflags': [ '<(m64flag)' ], - }], - ['_toolset=="target"', { - 'variables': { - 'm64flag': '<!(($(echo ${CXX_target:-<(CXX)}) -m64 -E - > /dev/null 2>&1 < /dev/null) && echo "-m64" || true)', - }, 'conditions': [ - ['((OS!="android" and OS!="qnx") or clang==1)', { - 'cflags': [ '<(m64flag)' ], - 'ldflags': [ '<(m64flag)' ], + ['host_cxx_is_biarch==1', { + 'cflags': [ '-m64' ], + 'ldflags': [ '-m64' ] }], - ], - }] - ], + ], + }], + ['_toolset=="target"', { + 'conditions': [ + ['target_cxx_is_biarch==1', { + 'cflags': [ '-m64' ], + 'ldflags': [ '-m64' ], + }], + ] + }], + ], + }], + ['OS=="linux" or OS=="freebsd" or OS=="openbsd" or OS=="solaris" \ + or OS=="netbsd" or OS=="qnx"', { + 'conditions': [ + [ 'v8_no_strict_aliasing==1', { + 'cflags': [ '-fno-strict-aliasing' ], + }], + ], # conditions + }], + ['OS=="solaris"', { + 'defines': [ '__C99FEATURES__=1' ], # isinf() etc. }], ['OS=="freebsd" or OS=="openbsd"', { 'cflags': [ '-I/usr/local/include' ], diff --git a/deps/v8/codereview.settings b/deps/v8/codereview.settings index 3f642f13bd9..b7f853cd594 100644 --- a/deps/v8/codereview.settings +++ b/deps/v8/codereview.settings @@ -5,3 +5,4 @@ STATUS: http://v8-status.appspot.com/status TRY_ON_UPLOAD: False TRYSERVER_SVN_URL: svn://svn.chromium.org/chrome-try-v8 TRYSERVER_ROOT: v8 +PROJECT: v8 diff --git a/deps/v8/include/libplatform/libplatform.h b/deps/v8/include/libplatform/libplatform.h new file mode 100644 index 00000000000..2125e9746b9 --- /dev/null +++ b/deps/v8/include/libplatform/libplatform.h @@ -0,0 +1,38 @@ +// Copyright 2014 the V8 project authors. All rights reserved. 
+// Use of this source code is governed by a BSD-style license that can be +// found in the LICENSE file. + +#ifndef V8_LIBPLATFORM_LIBPLATFORM_H_ +#define V8_LIBPLATFORM_LIBPLATFORM_H_ + +#include "include/v8-platform.h" + +namespace v8 { +namespace platform { + +/** + * Returns a new instance of the default v8::Platform implementation. + * + * The caller will take ownership of the returned pointer. |thread_pool_size| + * is the number of worker threads to allocate for background jobs. If a value + * of zero is passed, a suitable default based on the current number of + * processors online will be chosen. + */ +v8::Platform* CreateDefaultPlatform(int thread_pool_size = 0); + + +/** + * Pumps the message loop for the given isolate. + * + * The caller has to make sure that this is called from the right thread. + * Returns true if a task was executed, and false otherwise. This call does + * not block if no task is pending. The |platform| has to be created using + * |CreateDefaultPlatform|. + */ +bool PumpMessageLoop(v8::Platform* platform, v8::Isolate* isolate); + + +} // namespace platform +} // namespace v8 + +#endif // V8_LIBPLATFORM_LIBPLATFORM_H_ diff --git a/deps/v8/include/v8-debug.h b/deps/v8/include/v8-debug.h index bd3eb77c4d7..e72415952d9 100644 --- a/deps/v8/include/v8-debug.h +++ b/deps/v8/include/v8-debug.h @@ -19,8 +19,10 @@ enum DebugEvent { NewFunction = 3, BeforeCompile = 4, AfterCompile = 5, - ScriptCollected = 6, - BreakForCommand = 7 + CompileError = 6, + PromiseEvent = 7, + AsyncTaskEvent = 8, + BreakForCommand = 9 }; @@ -137,7 +139,7 @@ class V8_EXPORT Debug { * A EventCallback2 does not take possession of the event data, * and must not rely on the data persisting after the handler returns. */ - typedef void (*EventCallback2)(const EventDetails& event_details); + typedef void (*EventCallback)(const EventDetails& event_details); /** * Debug message callback function. @@ -147,23 +149,14 @@ class V8_EXPORT Debug { * A MessageHandler2 does not take possession of the message data, * and must not rely on the data persisting after the handler returns. */ - typedef void (*MessageHandler2)(const Message& message); - - /** - * Debug host dispatch callback function. - */ - typedef void (*HostDispatchHandler)(); + typedef void (*MessageHandler)(const Message& message); /** * Callback function for the host to ensure debug messages are processed. */ typedef void (*DebugMessageDispatchHandler)(); - static bool SetDebugEventListener2(EventCallback2 that, - Handle<Value> data = Handle<Value>()); - - // Set a JavaScript debug event listener. - static bool SetDebugEventListener(v8::Handle<v8::Object> that, + static bool SetDebugEventListener(EventCallback that, Handle<Value> data = Handle<Value>()); // Schedule a debugger break to happen when JavaScript code is run @@ -181,36 +174,13 @@ class V8_EXPORT Debug { // stops. static void DebugBreakForCommand(Isolate* isolate, ClientData* data); - // TODO(svenpanne) Remove this when Chrome is updated. - static void DebugBreakForCommand(ClientData* data, Isolate* isolate) { - DebugBreakForCommand(isolate, data); - } - // Message based interface. The message protocol is JSON. - static void SetMessageHandler2(MessageHandler2 handler); + static void SetMessageHandler(MessageHandler handler); static void SendCommand(Isolate* isolate, const uint16_t* command, int length, ClientData* client_data = NULL); - // Dispatch interface. 
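The new include/libplatform/libplatform.h above is the embedder-facing half of the platform work: `CreateDefaultPlatform()` hands the caller an owned default platform, and `PumpMessageLoop()` runs at most one pending task per call without blocking. A minimal boot/teardown sketch combining these with the `InitializePlatform()`/`ShutdownPlatform()` calls that the updated samples use later in this patch; error handling omitted:

#include "include/v8.h"
#include "include/libplatform/libplatform.h"

int main() {
  v8::V8::InitializeICU();
  v8::Platform* platform = v8::platform::CreateDefaultPlatform();  // 0 => pick pool size
  v8::V8::InitializePlatform(platform);
  v8::V8::Initialize();

  v8::Isolate* isolate = v8::Isolate::New();
  {
    v8::Isolate::Scope isolate_scope(isolate);
    v8::HandleScope handle_scope(isolate);
    // ... create a context, compile and run scripts ...

    // Drain queued foreground tasks until the loop reports it is idle.
    while (v8::platform::PumpMessageLoop(platform, isolate)) {
    }
  }
  isolate->Dispose();

  v8::V8::Dispose();
  v8::V8::ShutdownPlatform();
  delete platform;  // CreateDefaultPlatform transfers ownership to the caller
  return 0;
}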
- static void SetHostDispatchHandler(HostDispatchHandler handler, - int period = 100); - - /** - * Register a callback function to be called when a debug message has been - * received and is ready to be processed. For the debug messages to be - * processed V8 needs to be entered, and in certain embedding scenarios this - * callback can be used to make sure V8 is entered for the debug message to - * be processed. Note that debug messages will only be processed if there is - * a V8 break. This can happen automatically by using the option - * --debugger-auto-break. - * \param provide_locker requires that V8 acquires v8::Locker for you before - * calling handler - */ - static void SetDebugMessageDispatchHandler( - DebugMessageDispatchHandler handler, bool provide_locker = false); - /** * Run a JavaScript function in the debugger. * \param fun the function to call @@ -237,22 +207,6 @@ class V8_EXPORT Debug { */ static Local<Value> GetMirror(v8::Handle<v8::Value> obj); - /** - * Enable the V8 builtin debug agent. The debugger agent will listen on the - * supplied TCP/IP port for remote debugger connection. - * \param name the name of the embedding application - * \param port the TCP/IP port to listen on - * \param wait_for_connection whether V8 should pause on a first statement - * allowing remote debugger to connect before anything interesting happened - */ - static bool EnableAgent(const char* name, int port, - bool wait_for_connection = false); - - /** - * Disable the V8 builtin debug agent. The TCP/IP connection will be closed. - */ - static void DisableAgent(); - /** * Makes V8 process all pending debug messages. * @@ -271,10 +225,6 @@ class V8_EXPORT Debug { * until V8 gets control again; however, embedding application may improve * this by manually calling this method. * - * It makes sense to call this method whenever a new debug message arrived and - * V8 is not already running. Method v8::Debug::SetDebugMessageDispatchHandler - * should help with the former condition. - * * Technically this method in many senses is equivalent to executing empty * script: * 1. It does nothing except for processing all pending debug messages. @@ -305,11 +255,6 @@ class V8_EXPORT Debug { * unexpectedly used. LiveEdit is enabled by default. */ static void SetLiveEditEnabled(Isolate* isolate, bool enable); - - // TODO(svenpanne) Remove this when Chrome is updated. - static void SetLiveEditEnabled(bool enable, Isolate* isolate) { - SetLiveEditEnabled(isolate, enable); - } }; diff --git a/deps/v8/include/v8-platform.h b/deps/v8/include/v8-platform.h index 5667211c321..1f1679f0e0b 100644 --- a/deps/v8/include/v8-platform.h +++ b/deps/v8/include/v8-platform.h @@ -5,10 +5,10 @@ #ifndef V8_V8_PLATFORM_H_ #define V8_V8_PLATFORM_H_ -#include "v8.h" - namespace v8 { +class Isolate; + /** * A Task represents a unit of work. */ @@ -37,6 +37,8 @@ class Platform { kLongRunningTask }; + virtual ~Platform() {} + /** * Schedules a task to be invoked on a background thread. |expected_runtime| * indicates that the task will run a long time. The Platform implementation @@ -53,9 +55,6 @@ class Platform { * scheduling. The definition of "foreground" is opaque to V8. 
*/ virtual void CallOnForegroundThread(Isolate* isolate, Task* task) = 0; - - protected: - virtual ~Platform() {} }; } // namespace v8 diff --git a/deps/v8/include/v8-profiler.h b/deps/v8/include/v8-profiler.h index 19d143e01bb..7fc193db58e 100644 --- a/deps/v8/include/v8-profiler.h +++ b/deps/v8/include/v8-profiler.h @@ -219,19 +219,20 @@ class V8_EXPORT HeapGraphEdge { class V8_EXPORT HeapGraphNode { public: enum Type { - kHidden = 0, // Hidden node, may be filtered when shown to user. - kArray = 1, // An array of elements. - kString = 2, // A string. - kObject = 3, // A JS object (except for arrays and strings). - kCode = 4, // Compiled code. - kClosure = 5, // Function closure. - kRegExp = 6, // RegExp. - kHeapNumber = 7, // Number stored in the heap. - kNative = 8, // Native object (not from V8 heap). - kSynthetic = 9, // Synthetic object, usualy used for grouping - // snapshot items together. - kConsString = 10, // Concatenated string. A pair of pointers to strings. - kSlicedString = 11 // Sliced string. A fragment of another string. + kHidden = 0, // Hidden node, may be filtered when shown to user. + kArray = 1, // An array of elements. + kString = 2, // A string. + kObject = 3, // A JS object (except for arrays and strings). + kCode = 4, // Compiled code. + kClosure = 5, // Function closure. + kRegExp = 6, // RegExp. + kHeapNumber = 7, // Number stored in the heap. + kNative = 8, // Native object (not from V8 heap). + kSynthetic = 9, // Synthetic object, usualy used for grouping + // snapshot items together. + kConsString = 10, // Concatenated string. A pair of pointers to strings. + kSlicedString = 11, // Sliced string. A fragment of another string. + kSymbol = 12 // A Symbol (ES6). }; /** Returns node type (see HeapGraphNode::Type). */ @@ -292,7 +293,7 @@ class V8_EXPORT OutputStream { // NOLINT */ virtual WriteResult WriteHeapStatsChunk(HeapStatsUpdate* data, int count) { return kAbort; - }; + } }; diff --git a/deps/v8/include/v8-util.h b/deps/v8/include/v8-util.h index 60feff549d9..1eaf1ab68f6 100644 --- a/deps/v8/include/v8-util.h +++ b/deps/v8/include/v8-util.h @@ -154,7 +154,7 @@ class PersistentValueMap { */ bool SetReturnValue(const K& key, ReturnValue<Value> returnValue) { - return SetReturnValueFromVal(returnValue, Traits::Get(&impl_, key)); + return SetReturnValueFromVal(&returnValue, Traits::Get(&impl_, key)); } /** @@ -227,7 +227,7 @@ class PersistentValueMap { } template<typename T> bool SetReturnValue(ReturnValue<T> returnValue) { - return SetReturnValueFromVal(returnValue, value_); + return SetReturnValueFromVal(&returnValue, value_); } void Reset() { value_ = kPersistentContainerNotFound; @@ -300,6 +300,7 @@ class PersistentValueMap { K key = Traits::KeyFromWeakCallbackData(data); Traits::Dispose(data.GetIsolate(), persistentValueMap->Remove(key).Pass(), key); + Traits::DisposeCallbackData(data.GetParameter()); } } @@ -308,10 +309,10 @@ class PersistentValueMap { } static bool SetReturnValueFromVal( - ReturnValue<Value>& returnValue, PersistentContainerValue value) { + ReturnValue<Value>* returnValue, PersistentContainerValue value) { bool hasValue = value != kPersistentContainerNotFound; if (hasValue) { - returnValue.SetInternal( + returnValue->SetInternal( *reinterpret_cast<internal::Object**>(FromVal(value))); } return hasValue; @@ -337,7 +338,7 @@ class PersistentValueMap { static UniquePersistent<V> Release(PersistentContainerValue v) { UniquePersistent<V> p; p.val_ = FromVal(v); - if (Traits::kCallbackType != kNotWeak && !p.IsEmpty()) { + if 
(Traits::kCallbackType != kNotWeak && p.IsWeak()) { Traits::DisposeCallbackData( p.template ClearWeak<typename Traits::WeakCallbackDataType>()); } @@ -422,7 +423,7 @@ class PersistentValueVector { */ void Append(UniquePersistent<V> persistent) { Traits::Append(&impl_, ClearAndLeak(&persistent)); - }; + } /** * Are there any values in the vector? diff --git a/deps/v8/include/v8.h b/deps/v8/include/v8.h index 538b6581f1d..ef0bda63f43 100644 --- a/deps/v8/include/v8.h +++ b/deps/v8/include/v8.h @@ -895,6 +895,13 @@ struct Maybe { }; +// Convenience wrapper. +template <class T> +inline Maybe<T> maybe(T t) { + return Maybe<T>(t); +} + + // --- Special objects --- @@ -916,20 +923,24 @@ class ScriptOrigin { Handle<Value> resource_name, Handle<Integer> resource_line_offset = Handle<Integer>(), Handle<Integer> resource_column_offset = Handle<Integer>(), - Handle<Boolean> resource_is_shared_cross_origin = Handle<Boolean>()) + Handle<Boolean> resource_is_shared_cross_origin = Handle<Boolean>(), + Handle<Integer> script_id = Handle<Integer>()) : resource_name_(resource_name), resource_line_offset_(resource_line_offset), resource_column_offset_(resource_column_offset), - resource_is_shared_cross_origin_(resource_is_shared_cross_origin) { } + resource_is_shared_cross_origin_(resource_is_shared_cross_origin), + script_id_(script_id) { } V8_INLINE Handle<Value> ResourceName() const; V8_INLINE Handle<Integer> ResourceLineOffset() const; V8_INLINE Handle<Integer> ResourceColumnOffset() const; V8_INLINE Handle<Boolean> ResourceIsSharedCrossOrigin() const; + V8_INLINE Handle<Integer> ScriptID() const; private: Handle<Value> resource_name_; Handle<Integer> resource_line_offset_; Handle<Integer> resource_column_offset_; Handle<Boolean> resource_is_shared_cross_origin_; + Handle<Integer> script_id_; }; @@ -946,6 +957,15 @@ class V8_EXPORT UnboundScript { int GetId(); Handle<Value> GetScriptName(); + /** + * Data read from magic sourceURL comments. + */ + Handle<Value> GetSourceURL(); + /** + * Data read from magic sourceMappingURL comments. + */ + Handle<Value> GetSourceMappingURL(); + /** * Returns zero based line number of the code_pos location in the script. * -1 will be returned if no information available. @@ -984,24 +1004,9 @@ class V8_EXPORT Script { */ Local<UnboundScript> GetUnboundScript(); - // To be deprecated; use GetUnboundScript()->GetId(); - int GetId() { - return GetUnboundScript()->GetId(); - } - - // Use GetUnboundScript()->GetId(); V8_DEPRECATED("Use GetUnboundScript()->GetId()", - Handle<Value> GetScriptName()) { - return GetUnboundScript()->GetScriptName(); - } - - /** - * Returns zero based line number of the code_pos location in the script. - * -1 will be returned if no information available. - */ - V8_DEPRECATED("Use GetUnboundScript()->GetLineNumber()", - int GetLineNumber(int code_pos)) { - return GetUnboundScript()->GetLineNumber(code_pos); + int GetId()) { + return GetUnboundScript()->GetId(); } }; @@ -1039,15 +1044,14 @@ class V8_EXPORT ScriptCompiler { int length; BufferPolicy buffer_policy; - private: - // Prevent copying. Not implemented. - CachedData(const CachedData&); - CachedData& operator=(const CachedData&); + private: + // Prevent copying. Not implemented. + CachedData(const CachedData&); + CachedData& operator=(const CachedData&); }; /** - * Source code which can be then compiled to a UnboundScript or - * BoundScript. + * Source code which can be then compiled to a UnboundScript or Script. 
*/ class Source { public: @@ -1065,7 +1069,7 @@ class V8_EXPORT ScriptCompiler { private: friend class ScriptCompiler; - // Prevent copying. Not implemented. + // Prevent copying. Not implemented. Source(const Source&); Source& operator=(const Source&); @@ -1077,19 +1081,31 @@ class V8_EXPORT ScriptCompiler { Handle<Integer> resource_column_offset; Handle<Boolean> resource_is_shared_cross_origin; - // Cached data from previous compilation (if any), or generated during - // compilation (if the generate_cached_data flag is passed to - // ScriptCompiler). + // Cached data from previous compilation (if a kConsume*Cache flag is + // set), or hold newly generated cache data (kProduce*Cache flags) are + // set when calling a compile method. CachedData* cached_data; }; enum CompileOptions { - kNoCompileOptions, - kProduceDataToCache = 1 << 0 + kNoCompileOptions = 0, + kProduceParserCache, + kConsumeParserCache, + kProduceCodeCache, + kConsumeCodeCache, + + // Support the previous API for a transition period. + kProduceDataToCache }; /** * Compiles the specified script (context-independent). + * Cached data as part of the source object can be optionally produced to be + * consumed later to speed up compilation of identical source scripts. + * + * Note that when producing cached data, the source must point to NULL for + * cached data. When consuming cached data, the cached data must have been + * produced by the same version of V8. * * \param source Script source code. * \return Compiled script object (context independent; for running it must be @@ -1124,6 +1140,12 @@ class V8_EXPORT Message { Local<String> Get() const; Local<String> GetSourceLine() const; + /** + * Returns the origin for the script from where the function causing the + * error originates. + */ + ScriptOrigin GetScriptOrigin() const; + /** * Returns the resource name for the script from where the function causing * the error originates. @@ -1201,6 +1223,7 @@ class V8_EXPORT StackTrace { kIsConstructor = 1 << 5, kScriptNameOrSourceURL = 1 << 6, kScriptId = 1 << 7, + kExposeFramesAcrossSecurityOrigins = 1 << 8, kOverview = kLineNumber | kColumnOffset | kScriptName | kFunctionName, kDetailed = kOverview | kIsEval | kIsConstructor | kScriptNameOrSourceURL }; @@ -2071,11 +2094,7 @@ typedef void (*AccessorSetterCallback)( * accessors have an explicit access control parameter which specifies * the kind of cross-context access that should be allowed. * - * Additionally, for security, accessors can prohibit overwriting by - * accessors defined in JavaScript. For objects that have such - * accessors either locally or in their prototype chain it is not - * possible to overwrite the accessor by using __defineGetter__ or - * __defineSetter__ from JavaScript code. + * TODO(dcarney): Remove PROHIBITS_OVERWRITING as it is now unused. */ enum AccessControl { DEFAULT = 0, @@ -2090,13 +2109,11 @@ enum AccessControl { */ class V8_EXPORT Object : public Value { public: - bool Set(Handle<Value> key, - Handle<Value> value, - PropertyAttribute attribs = None); + bool Set(Handle<Value> key, Handle<Value> value); bool Set(uint32_t index, Handle<Value> value); - // Sets a local property on this object bypassing interceptors and + // Sets an own property on this object bypassing interceptors and // overriding accessors or read-only properties. 
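The reworked `CompileOptions` above split the old `kProduceDataToCache` into explicit produce/consume pairs for the parser and code caches. A sketch of the round trip under those options, assuming an entered isolate and handle scope; `Source::GetCachedData()` and the `CachedData(data, length)` constructor are taken on trust from this header revision, and whether a cache is actually produced can also depend on V8 flags:

v8::Isolate* isolate = v8::Isolate::GetCurrent();
v8::Local<v8::String> code =
    v8::String::NewFromUtf8(isolate, "function f(x) { return x * 2; }");

// First compile: the Source carries no cached data, so ask V8 to produce it.
v8::ScriptCompiler::Source produce(code);
v8::ScriptCompiler::CompileUnbound(isolate, &produce,
                                   v8::ScriptCompiler::kProduceParserCache);
const v8::ScriptCompiler::CachedData* cache = produce.GetCachedData();

if (cache != NULL) {
  // Later compile of identical source: hand the blob back and consume it.
  // The copy uses the default BufferNotOwned policy, so the original
  // buffer (owned by `produce`) must stay alive for the call.
  v8::ScriptCompiler::CachedData* blob =
      new v8::ScriptCompiler::CachedData(cache->data, cache->length);
  v8::ScriptCompiler::Source consume(code, blob);
  v8::ScriptCompiler::CompileUnbound(isolate, &consume,
                                     v8::ScriptCompiler::kConsumeParserCache);
}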
// // Note that if the object has an interceptor the property will be set @@ -2119,6 +2136,11 @@ class V8_EXPORT Object : public Value { */ PropertyAttribute GetPropertyAttributes(Handle<Value> key); + /** + * Returns Object.getOwnPropertyDescriptor as per ES5 section 15.2.3.3. + */ + Local<Value> GetOwnPropertyDescriptor(Local<String> key); + bool Has(Handle<Value> key); bool Delete(Handle<Value> key); @@ -2203,12 +2225,6 @@ class V8_EXPORT Object : public Value { */ Local<String> ObjectProtoToString(); - /** - * Returns the function invoked as a constructor for this object. - * May be the null value. - */ - Local<Value> GetConstructor(); - /** * Returns the name of the function invoked as a constructor for this object. */ @@ -2429,6 +2445,10 @@ class ReturnValue { // Convenience getter for Isolate V8_INLINE Isolate* GetIsolate(); + // Pointer setter: Uncompilable to prevent inadvertent misuse. + template <typename S> + V8_INLINE void Set(S* whatever); + private: template<class F> friend class ReturnValue; template<class F> friend class FunctionCallbackInfo; @@ -2629,6 +2649,7 @@ class V8_EXPORT Promise : public Object { */ Local<Promise> Chain(Handle<Function> handler); Local<Promise> Catch(Handle<Function> handler); + Local<Promise> Then(Handle<Function> handler); V8_INLINE static Promise* Cast(Value* obj); @@ -3865,8 +3886,8 @@ class V8_EXPORT ResourceConstraints { uint64_t virtual_memory_limit, uint32_t number_of_processors); - int max_new_space_size() const { return max_new_space_size_; } - void set_max_new_space_size(int value) { max_new_space_size_ = value; } + int max_semi_space_size() const { return max_semi_space_size_; } + void set_max_semi_space_size(int value) { max_semi_space_size_ = value; } int max_old_space_size() const { return max_old_space_size_; } void set_max_old_space_size(int value) { max_old_space_size_ = value; } int max_executable_size() const { return max_executable_size_; } @@ -3879,18 +3900,18 @@ class V8_EXPORT ResourceConstraints { void set_max_available_threads(int value) { max_available_threads_ = value; } - int code_range_size() const { return code_range_size_; } - void set_code_range_size(int value) { + size_t code_range_size() const { return code_range_size_; } + void set_code_range_size(size_t value) { code_range_size_ = value; } private: - int max_new_space_size_; + int max_semi_space_size_; int max_old_space_size_; int max_executable_size_; uint32_t* stack_limit_; int max_available_threads_; - int code_range_size_; + size_t code_range_size_; }; @@ -3965,6 +3986,9 @@ typedef void (*MemoryAllocationCallback)(ObjectSpace space, // --- Leave Script Callback --- typedef void (*CallCompletedCallback)(); +// --- Microtask Callback --- +typedef void (*MicrotaskCallback)(void* data); + // --- Failed Access Check Callback --- typedef void (*FailedAccessCheckCallback)(Local<Object> target, AccessType type, @@ -4133,6 +4157,20 @@ class V8_EXPORT Isolate { kMinorGarbageCollection }; + /** + * Features reported via the SetUseCounterCallback callback. Do not chang + * assigned numbers of existing items; add new features to the end of this + * list. + */ + enum UseCounterFeature { + kUseAsm = 0, + kUseCounterFeatureCount // This enum value must be last. + }; + + typedef void (*UseCounterCallback)(Isolate* isolate, + UseCounterFeature feature); + + /** * Creates a new isolate. Does not change the currently entered * isolate. @@ -4211,7 +4249,8 @@ class V8_EXPORT Isolate { * kept alive by JavaScript objects. * \returns the adjusted value. 
*/ - int64_t AdjustAmountOfExternalAllocatedMemory(int64_t change_in_bytes); + V8_INLINE int64_t + AdjustAmountOfExternalAllocatedMemory(int64_t change_in_bytes); /** * Returns heap profiler for this isolate. Will return NULL until the isolate @@ -4375,6 +4414,7 @@ class V8_EXPORT Isolate { /** * Experimental: Runs the Microtask Work Queue until empty + * Any exceptions thrown by microtask callbacks are swallowed. */ void RunMicrotasks(); @@ -4383,12 +4423,71 @@ class V8_EXPORT Isolate { */ void EnqueueMicrotask(Handle<Function> microtask); + /** + * Experimental: Enqueues the callback to the Microtask Work Queue + */ + void EnqueueMicrotask(MicrotaskCallback microtask, void* data = NULL); + /** * Experimental: Controls whether the Microtask Work Queue is automatically * run when the script call depth decrements to zero. */ void SetAutorunMicrotasks(bool autorun); + /** + * Experimental: Returns whether the Microtask Work Queue is automatically + * run when the script call depth decrements to zero. + */ + bool WillAutorunMicrotasks() const; + + /** + * Sets a callback for counting the number of times a feature of V8 is used. + */ + void SetUseCounterCallback(UseCounterCallback callback); + + /** + * Enables the host application to provide a mechanism for recording + * statistics counters. + */ + void SetCounterFunction(CounterLookupCallback); + + /** + * Enables the host application to provide a mechanism for recording + * histograms. The CreateHistogram function returns a + * histogram which will later be passed to the AddHistogramSample + * function. + */ + void SetCreateHistogramFunction(CreateHistogramCallback); + void SetAddHistogramSampleFunction(AddHistogramSampleCallback); + + /** + * Optional notification that the embedder is idle. + * V8 uses the notification to reduce memory footprint. + * This call can be used repeatedly if the embedder remains idle. + * Returns true if the embedder should stop calling IdleNotification + * until real work has been done. This indicates that V8 has done + * as much cleanup as it will be able to do. + * + * The idle_time_in_ms argument specifies the time V8 has to do reduce + * the memory footprint. There is no guarantee that the actual work will be + * done within the time limit. + */ + bool IdleNotification(int idle_time_in_ms); + + /** + * Optional notification that the system is running low on memory. + * V8 uses these notifications to attempt to free memory. + */ + void LowMemoryNotification(); + + /** + * Optional notification that a context has been disposed. V8 uses + * these notifications to guide the GC heuristic. Returns the number + * of context disposals - including this one - since the last time + * V8 had a chance to clean up. + */ + int ContextDisposedNotification(); + private: template<class K, class V, class Traits> friend class PersistentValueMap; @@ -4402,6 +4501,7 @@ class V8_EXPORT Isolate { void SetObjectGroupId(internal::Object** object, UniqueId id); void SetReferenceFromGroup(UniqueId id, internal::Object** object); void SetReference(internal::Object** parent, internal::Object** child); + void CollectAllGarbage(const char* gc_reason); }; class V8_EXPORT StartupData { @@ -4512,7 +4612,7 @@ struct JitCodeEvent { // Size of the instructions. size_t code_len; // Script info for CODE_ADDED event. - Handle<Script> script; + Handle<UnboundScript> script; // User-defined data for *_LINE_INFO_* event. It's used to hold the source // code line information which is returned from the // CODE_START_LINE_INFO_RECORDING event. 
And it's passed to subsequent @@ -4640,6 +4740,24 @@ class V8_EXPORT V8 { static void GetCompressedStartupData(StartupData* compressed_data); static void SetDecompressedStartupData(StartupData* decompressed_data); + /** + * Hand startup data to V8, in case the embedder has chosen to build + * V8 with external startup data. + * + * Note: + * - By default the startup data is linked into the V8 library, in which + * case this function is not meaningful. + * - If this needs to be called, it needs to be called before V8 + * tries to make use of its built-ins. + * - To avoid unnecessary copies of data, V8 will point directly into the + * given data blob, so pretty please keep it around until V8 exit. + * - Compression of the startup blob might be useful, but needs to + * handled entirely on the embedders' side. + * - The call will abort if the data is invalid. + */ + static void SetNativesDataBlob(StartupData* startup_blob); + static void SetSnapshotDataBlob(StartupData* startup_blob); + /** * Adds a message listener. * @@ -4681,21 +4799,6 @@ class V8_EXPORT V8 { /** Get the version string. */ static const char* GetVersion(); - /** - * Enables the host application to provide a mechanism for recording - * statistics counters. - */ - static void SetCounterFunction(CounterLookupCallback); - - /** - * Enables the host application to provide a mechanism for recording - * histograms. The CreateHistogram function returns a - * histogram which will later be passed to the AddHistogramSample - * function. - */ - static void SetCreateHistogramFunction(CreateHistogramCallback); - static void SetAddHistogramSampleFunction(AddHistogramSampleCallback); - /** Callback function for reporting failed access checks.*/ static void SetFailedAccessCheckCallbackFunction(FailedAccessCheckCallback); @@ -4750,28 +4853,6 @@ class V8_EXPORT V8 { */ static void RemoveMemoryAllocationCallback(MemoryAllocationCallback callback); - /** - * Experimental: Runs the Microtask Work Queue until empty - * - * Deprecated: Use methods on Isolate instead. - */ - static void RunMicrotasks(Isolate* isolate); - - /** - * Experimental: Enqueues the callback to the Microtask Work Queue - * - * Deprecated: Use methods on Isolate instead. - */ - static void EnqueueMicrotask(Isolate* isolate, Handle<Function> microtask); - - /** - * Experimental: Controls whether the Microtask Work Queue is automatically - * run when the script call depth decrements to zero. - * - * Deprecated: Use methods on Isolate instead. - */ - static void SetAutorunMicrotasks(Isolate *source, bool autorun); - /** * Initializes from snapshot if possible. Otherwise, attempts to * initialize from scratch. This function is called implicitly if @@ -4906,34 +4987,6 @@ class V8_EXPORT V8 { static void VisitHandlesForPartialDependence( Isolate* isolate, PersistentHandleVisitor* visitor); - /** - * Optional notification that the embedder is idle. - * V8 uses the notification to reduce memory footprint. - * This call can be used repeatedly if the embedder remains idle. - * Returns true if the embedder should stop calling IdleNotification - * until real work has been done. This indicates that V8 has done - * as much cleanup as it will be able to do. - * - * The hint argument specifies the amount of work to be done in the function - * on scale from 1 to 1000. There is no guarantee that the actual work will - * match the hint. - */ - static bool IdleNotification(int hint = 1000); - - /** - * Optional notification that the system is running low on memory. 
- * V8 uses these notifications to attempt to free memory. - */ - static void LowMemoryNotification(); - - /** - * Optional notification that a context has been disposed. V8 uses - * these notifications to guide the GC heuristic. Returns the number - * of context disposals - including this one - since the last time - * V8 had a chance to clean up. - */ - static int ContextDisposedNotification(); - /** * Initialize the ICU library bundled with V8. The embedder should only * invoke this method when using the bundled ICU. Returns true on success. @@ -5061,7 +5114,8 @@ class V8_EXPORT TryCatch { /** * Clears any exceptions that may have been caught by this try/catch block. - * After this method has been called, HasCaught() will return false. + * After this method has been called, HasCaught() will return false. Cancels + * the scheduled exception if it is caught and ReThrow() is not called before. * * It is not necessary to clear a try/catch block before using it again; if * another exception is thrown the previously caught exception will just be @@ -5087,7 +5141,25 @@ class V8_EXPORT TryCatch { */ void SetCaptureMessage(bool value); + /** + * There are cases when the raw address of C++ TryCatch object cannot be + * used for comparisons with addresses into the JS stack. The cases are: + * 1) ARM, ARM64 and MIPS simulators which have separate JS stack. + * 2) Address sanitizer allocates local C++ object in the heap when + * UseAfterReturn mode is enabled. + * This method returns address that can be used for comparisons with + * addresses into the JS stack. When neither simulator nor ASAN's + * UseAfterReturn is enabled, then the address returned will be the address + * of the C++ try catch handler itself. + */ + static void* JSStackComparableAddress(v8::TryCatch* handler) { + if (handler == NULL) return NULL; + return handler->js_stack_comparable_address_; + } + private: + void ResetInternal(); + // Make it hard to create heap-allocated TryCatch blocks. TryCatch(const TryCatch&); void operator=(const TryCatch&); @@ -5095,10 +5167,11 @@ class V8_EXPORT TryCatch { void operator delete(void*, size_t); v8::internal::Isolate* isolate_; - void* next_; + v8::TryCatch* next_; void* exception_; void* message_obj_; void* message_script_; + void* js_stack_comparable_address_; int message_start_pos_; int message_end_pos_; bool is_verbose_ : 1; @@ -5208,14 +5281,6 @@ class V8_EXPORT Context { */ void Exit(); - /** - * Returns true if the context has experienced an out of memory situation. - * Since V8 always treats OOM as fatal error, this can no longer return true. - * Therefore this is now deprecated. - * */ - V8_DEPRECATED("This can no longer happen. OOM is a fatal error.", - bool HasOutOfMemoryException()) { return false; } - /** Returns an isolate associated with a current context. */ v8::Isolate* GetIsolate(); @@ -5435,6 +5500,7 @@ namespace internal { const int kApiPointerSize = sizeof(void*); // NOLINT const int kApiIntSize = sizeof(int); // NOLINT +const int kApiInt64Size = sizeof(int64_t); // NOLINT // Tag information for HeapObject. const int kHeapObjectTag = 1; @@ -5460,7 +5526,7 @@ V8_INLINE internal::Object* IntToSmi(int value) { template <> struct SmiTagging<4> { static const int kSmiShiftSize = 0; static const int kSmiValueSize = 31; - V8_INLINE static int SmiToInt(internal::Object* value) { + V8_INLINE static int SmiToInt(const internal::Object* value) { int shift_bits = kSmiTagSize + kSmiShiftSize; // Throw away top 32 bits and shift down (requires >> to be sign extending). 
return static_cast<int>(reinterpret_cast<intptr_t>(value)) >> shift_bits; @@ -5488,7 +5554,7 @@ template <> struct SmiTagging<4> { template <> struct SmiTagging<8> { static const int kSmiShiftSize = 31; static const int kSmiValueSize = 32; - V8_INLINE static int SmiToInt(internal::Object* value) { + V8_INLINE static int SmiToInt(const internal::Object* value) { int shift_bits = kSmiTagSize + kSmiShiftSize; // Shift down and throw away top 32 bits. return static_cast<int>(reinterpret_cast<intptr_t>(value) >> shift_bits); @@ -5518,7 +5584,8 @@ class Internals { // These values match non-compiler-dependent values defined within // the implementation of v8. static const int kHeapObjectMapOffset = 0; - static const int kMapInstanceTypeOffset = 1 * kApiPointerSize + kApiIntSize; + static const int kMapInstanceTypeAndBitFieldOffset = + 1 * kApiPointerSize + kApiIntSize; static const int kStringResourceOffset = 3 * kApiPointerSize; static const int kOddballKindOffset = 3 * kApiPointerSize; @@ -5526,19 +5593,29 @@ class Internals { static const int kJSObjectHeaderSize = 3 * kApiPointerSize; static const int kFixedArrayHeaderSize = 2 * kApiPointerSize; static const int kContextHeaderSize = 2 * kApiPointerSize; - static const int kContextEmbedderDataIndex = 74; + static const int kContextEmbedderDataIndex = 95; static const int kFullStringRepresentationMask = 0x07; static const int kStringEncodingMask = 0x4; static const int kExternalTwoByteRepresentationTag = 0x02; static const int kExternalAsciiRepresentationTag = 0x06; static const int kIsolateEmbedderDataOffset = 0 * kApiPointerSize; - static const int kIsolateRootsOffset = 5 * kApiPointerSize; + static const int kAmountOfExternalAllocatedMemoryOffset = + 4 * kApiPointerSize; + static const int kAmountOfExternalAllocatedMemoryAtLastGlobalGCOffset = + kAmountOfExternalAllocatedMemoryOffset + kApiInt64Size; + static const int kIsolateRootsOffset = + kAmountOfExternalAllocatedMemoryAtLastGlobalGCOffset + kApiInt64Size + + kApiPointerSize; static const int kUndefinedValueRootIndex = 5; static const int kNullValueRootIndex = 7; static const int kTrueValueRootIndex = 8; static const int kFalseValueRootIndex = 9; - static const int kEmptyStringRootIndex = 162; + static const int kEmptyStringRootIndex = 164; + + // The external allocation limit should be below 256 MB on all architectures + // to avoid that resource-constrained embedders run low on memory. 
+ static const int kExternalAllocationLimit = 192 * 1024 * 1024; static const int kNodeClassIdOffset = 1 * kApiPointerSize; static const int kNodeFlagsOffset = 1 * kApiPointerSize + 3; @@ -5549,10 +5626,10 @@ class Internals { static const int kNodeIsIndependentShift = 4; static const int kNodeIsPartiallyDependentShift = 5; - static const int kJSObjectType = 0xbb; + static const int kJSObjectType = 0xbc; static const int kFirstNonstringType = 0x80; static const int kOddballType = 0x83; - static const int kForeignType = 0x87; + static const int kForeignType = 0x88; static const int kUndefinedOddballKind = 5; static const int kNullOddballKind = 3; @@ -5566,12 +5643,12 @@ class Internals { #endif } - V8_INLINE static bool HasHeapObjectTag(internal::Object* value) { + V8_INLINE static bool HasHeapObjectTag(const internal::Object* value) { return ((reinterpret_cast<intptr_t>(value) & kHeapObjectTagMask) == kHeapObjectTag); } - V8_INLINE static int SmiValue(internal::Object* value) { + V8_INLINE static int SmiValue(const internal::Object* value) { return PlatformSmiTagging::SmiToInt(value); } @@ -5583,13 +5660,15 @@ class Internals { return PlatformSmiTagging::IsValidSmi(value); } - V8_INLINE static int GetInstanceType(internal::Object* obj) { + V8_INLINE static int GetInstanceType(const internal::Object* obj) { typedef internal::Object O; O* map = ReadField<O*>(obj, kHeapObjectMapOffset); - return ReadField<uint8_t>(map, kMapInstanceTypeOffset); + // Map::InstanceType is defined so that it will always be loaded into + // the LS 8 bits of one 16-bit word, regardless of endianess. + return ReadField<uint16_t>(map, kMapInstanceTypeAndBitFieldOffset) & 0xff; } - V8_INLINE static int GetOddballKind(internal::Object* obj) { + V8_INLINE static int GetOddballKind(const internal::Object* obj) { typedef internal::Object O; return SmiValue(ReadField<O*>(obj, kOddballKindOffset)); } @@ -5622,18 +5701,19 @@ class Internals { *addr = static_cast<uint8_t>((*addr & ~kNodeStateMask) | value); } - V8_INLINE static void SetEmbedderData(v8::Isolate *isolate, + V8_INLINE static void SetEmbedderData(v8::Isolate* isolate, uint32_t slot, - void *data) { + void* data) { uint8_t *addr = reinterpret_cast<uint8_t *>(isolate) + kIsolateEmbedderDataOffset + slot * kApiPointerSize; *reinterpret_cast<void**>(addr) = data; } - V8_INLINE static void* GetEmbedderData(v8::Isolate* isolate, uint32_t slot) { - uint8_t* addr = reinterpret_cast<uint8_t*>(isolate) + + V8_INLINE static void* GetEmbedderData(const v8::Isolate* isolate, + uint32_t slot) { + const uint8_t* addr = reinterpret_cast<const uint8_t*>(isolate) + kIsolateEmbedderDataOffset + slot * kApiPointerSize; - return *reinterpret_cast<void**>(addr); + return *reinterpret_cast<void* const*>(addr); } V8_INLINE static internal::Object** GetRoot(v8::Isolate* isolate, @@ -5642,16 +5722,18 @@ class Internals { return reinterpret_cast<internal::Object**>(addr + index * kApiPointerSize); } - template <typename T> V8_INLINE static T ReadField(Object* ptr, int offset) { - uint8_t* addr = reinterpret_cast<uint8_t*>(ptr) + offset - kHeapObjectTag; - return *reinterpret_cast<T*>(addr); + template <typename T> + V8_INLINE static T ReadField(const internal::Object* ptr, int offset) { + const uint8_t* addr = + reinterpret_cast<const uint8_t*>(ptr) + offset - kHeapObjectTag; + return *reinterpret_cast<const T*>(addr); } template <typename T> - V8_INLINE static T ReadEmbedderData(Context* context, int index) { + V8_INLINE static T ReadEmbedderData(const v8::Context* context, int index) { 
typedef internal::Object O; typedef internal::Internals I; - O* ctx = *reinterpret_cast<O**>(context); + O* ctx = *reinterpret_cast<O* const*>(context); int embedder_data_offset = I::kContextHeaderSize + (internal::kApiPointerSize * I::kContextEmbedderDataIndex); O* embedder_data = I::ReadField<O*>(ctx, embedder_data_offset); @@ -5659,14 +5741,6 @@ class Internals { I::kFixedArrayHeaderSize + (internal::kApiPointerSize * index); return I::ReadField<T>(embedder_data, value_offset); } - - V8_INLINE static bool CanCastToHeapObject(void* o) { return false; } - V8_INLINE static bool CanCastToHeapObject(Context* o) { return true; } - V8_INLINE static bool CanCastToHeapObject(String* o) { return true; } - V8_INLINE static bool CanCastToHeapObject(Object* o) { return true; } - V8_INLINE static bool CanCastToHeapObject(Message* o) { return true; } - V8_INLINE static bool CanCastToHeapObject(StackTrace* o) { return true; } - V8_INLINE static bool CanCastToHeapObject(StackFrame* o) { return true; } }; } // namespace internal @@ -5973,6 +6047,13 @@ Isolate* ReturnValue<T>::GetIsolate() { return *reinterpret_cast<Isolate**>(&value_[-2]); } +template<typename T> +template<typename S> +void ReturnValue<T>::Set(S* whatever) { + // Uncompilable to prevent inadvertent misuse. + TYPE_CHECK(S*, Primitive); +} + template<typename T> internal::Object* ReturnValue<T>::GetDefaultValue() { // Default value is always the pointer below value_ on the stack. @@ -6062,11 +6143,17 @@ Handle<Integer> ScriptOrigin::ResourceColumnOffset() const { return resource_column_offset_; } + Handle<Boolean> ScriptOrigin::ResourceIsSharedCrossOrigin() const { return resource_is_shared_cross_origin_; } +Handle<Integer> ScriptOrigin::ScriptID() const { + return script_id_; +} + + ScriptCompiler::Source::Source(Local<String> string, const ScriptOrigin& origin, CachedData* data) : source_string(string), @@ -6158,7 +6245,7 @@ Local<String> String::Empty(Isolate* isolate) { String::ExternalStringResource* String::GetExternalStringResource() const { typedef internal::Object O; typedef internal::Internals I; - O* obj = *reinterpret_cast<O**>(const_cast<String*>(this)); + O* obj = *reinterpret_cast<O* const*>(this); String::ExternalStringResource* result; if (I::IsExternalTwoByteString(I::GetInstanceType(obj))) { void* value = I::ReadField<void*>(obj, I::kStringResourceOffset); @@ -6177,7 +6264,7 @@ String::ExternalStringResourceBase* String::GetExternalStringResourceBase( String::Encoding* encoding_out) const { typedef internal::Object O; typedef internal::Internals I; - O* obj = *reinterpret_cast<O**>(const_cast<String*>(this)); + O* obj = *reinterpret_cast<O* const*>(this); int type = I::GetInstanceType(obj) & I::kFullStringRepresentationMask; *encoding_out = static_cast<Encoding>(type & I::kStringEncodingMask); ExternalStringResourceBase* resource = NULL; @@ -6204,7 +6291,7 @@ bool Value::IsUndefined() const { bool Value::QuickIsUndefined() const { typedef internal::Object O; typedef internal::Internals I; - O* obj = *reinterpret_cast<O**>(const_cast<Value*>(this)); + O* obj = *reinterpret_cast<O* const*>(this); if (!I::HasHeapObjectTag(obj)) return false; if (I::GetInstanceType(obj) != I::kOddballType) return false; return (I::GetOddballKind(obj) == I::kUndefinedOddballKind); @@ -6222,7 +6309,7 @@ bool Value::IsNull() const { bool Value::QuickIsNull() const { typedef internal::Object O; typedef internal::Internals I; - O* obj = *reinterpret_cast<O**>(const_cast<Value*>(this)); + O* obj = *reinterpret_cast<O* const*>(this); if 
(!I::HasHeapObjectTag(obj)) return false; if (I::GetInstanceType(obj) != I::kOddballType) return false; return (I::GetOddballKind(obj) == I::kNullOddballKind); @@ -6240,7 +6327,7 @@ bool Value::IsString() const { bool Value::QuickIsString() const { typedef internal::Object O; typedef internal::Internals I; - O* obj = *reinterpret_cast<O**>(const_cast<Value*>(this)); + O* obj = *reinterpret_cast<O* const*>(this); if (!I::HasHeapObjectTag(obj)) return false; return (I::GetInstanceType(obj) < I::kFirstNonstringType); } @@ -6559,6 +6646,28 @@ uint32_t Isolate::GetNumberOfDataSlots() { } +int64_t Isolate::AdjustAmountOfExternalAllocatedMemory( + int64_t change_in_bytes) { + typedef internal::Internals I; + int64_t* amount_of_external_allocated_memory = + reinterpret_cast<int64_t*>(reinterpret_cast<uint8_t*>(this) + + I::kAmountOfExternalAllocatedMemoryOffset); + int64_t* amount_of_external_allocated_memory_at_last_global_gc = + reinterpret_cast<int64_t*>( + reinterpret_cast<uint8_t*>(this) + + I::kAmountOfExternalAllocatedMemoryAtLastGlobalGCOffset); + int64_t amount = *amount_of_external_allocated_memory + change_in_bytes; + if (change_in_bytes > 0 && + amount - *amount_of_external_allocated_memory_at_last_global_gc > + I::kExternalAllocationLimit) { + CollectAllGarbage("external memory allocation limit reached."); + } else { + *amount_of_external_allocated_memory = amount; + } + return *amount_of_external_allocated_memory; +} + + template<typename T> void Isolate::SetObjectGroupId(const Persistent<T>& object, UniqueId id) { diff --git a/deps/v8/samples/lineprocessor.cc b/deps/v8/samples/lineprocessor.cc index f259ea4e943..9b627f3019c 100644 --- a/deps/v8/samples/lineprocessor.cc +++ b/deps/v8/samples/lineprocessor.cc @@ -25,14 +25,15 @@ // (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE // OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. -#include <v8.h> +#include <include/v8.h> -#include <v8-debug.h> +#include <include/libplatform/libplatform.h> +#include <include/v8-debug.h> #include <fcntl.h> -#include <string.h> #include <stdio.h> #include <stdlib.h> +#include <string.h> /** * This sample program should demonstrate certain aspects of debugging @@ -69,25 +70,6 @@ while (true) { var res = line + " | " + line; print(res); } - - * - * When run with "-p" argument, the program starts V8 Debugger Agent and - * allows remote debugger to attach and debug JavaScript code. - * - * Interesting aspects: - * 1. Wait for remote debugger to attach - * Normally the program compiles custom script and immediately runs it. - * If programmer needs to debug script from the very beginning, he should - * run this sample program with "--wait-for-connection" command line parameter. - * This way V8 will suspend on the first statement and wait for - * debugger to attach. - * - * 2. Unresponsive V8 - * V8 Debugger Agent holds a connection with remote debugger, but it does - * respond only when V8 is running some script. In particular, when this program - * is waiting for input, all requests from debugger get deferred until V8 - * is called again. See how "--callback" command-line parameter in this sample - * fixes this issue. */ enum MainCycleType { @@ -109,41 +91,16 @@ bool RunCppCycle(v8::Handle<v8::Script> script, v8::Persistent<v8::Context> debug_message_context; -void DispatchDebugMessages() { - // We are in some random thread. We should already have v8::Locker acquired - // (we requested this when registered this callback). 
We was called - // because new debug messages arrived; they may have already been processed, - // but we shouldn't worry about this. - // - // All we have to do is to set context and call ProcessDebugMessages. - // - // We should decide which V8 context to use here. This is important for - // "evaluate" command, because it must be executed some context. - // In our sample we have only one context, so there is nothing really to - // think about. - v8::Isolate* isolate = v8::Isolate::GetCurrent(); - v8::HandleScope handle_scope(isolate); - v8::Local<v8::Context> context = - v8::Local<v8::Context>::New(isolate, debug_message_context); - v8::Context::Scope scope(context); - - v8::Debug::ProcessDebugMessages(); -} - - int RunMain(int argc, char* argv[]) { v8::V8::SetFlagsFromCommandLine(&argc, argv, true); - v8::Isolate* isolate = v8::Isolate::GetCurrent(); + v8::Isolate* isolate = v8::Isolate::New(); + v8::Isolate::Scope isolate_scope(isolate); v8::HandleScope handle_scope(isolate); v8::Handle<v8::String> script_source; v8::Handle<v8::Value> script_name; int script_param_counter = 0; - int port_number = -1; - bool wait_for_connection = false; - bool support_callback = false; - MainCycleType cycle_type = CycleInCpp; for (int i = 1; i < argc; i++) { @@ -156,13 +113,6 @@ int RunMain(int argc, char* argv[]) { cycle_type = CycleInCpp; } else if (strcmp(str, "--main-cycle-in-js") == 0) { cycle_type = CycleInJs; - } else if (strcmp(str, "--callback") == 0) { - support_callback = true; - } else if (strcmp(str, "--wait-for-connection") == 0) { - wait_for_connection = true; - } else if (strcmp(str, "-p") == 0 && i + 1 < argc) { - port_number = atoi(argv[i + 1]); // NOLINT - i++; } else if (strncmp(str, "--", 2) == 0) { printf("Warning: unknown flag %s.\nTry --help for options\n", str); } else if (strcmp(str, "-e") == 0 && i + 1 < argc) { @@ -212,16 +162,6 @@ int RunMain(int argc, char* argv[]) { debug_message_context.Reset(isolate, context); - v8::Locker locker(isolate); - - if (support_callback) { - v8::Debug::SetDebugMessageDispatchHandler(DispatchDebugMessages, true); - } - - if (port_number != -1) { - v8::Debug::EnableAgent("lineprocessor", port_number, wait_for_connection); - } - bool report_exceptions = true; v8::Handle<v8::Script> script; @@ -265,7 +205,6 @@ bool RunCppCycle(v8::Handle<v8::Script> script, v8::Local<v8::Context> context, bool report_exceptions) { v8::Isolate* isolate = context->GetIsolate(); - v8::Locker lock(isolate); v8::Handle<v8::String> fun_name = v8::String::NewFromUtf8(isolate, "ProcessLine"); @@ -316,8 +255,12 @@ bool RunCppCycle(v8::Handle<v8::Script> script, int main(int argc, char* argv[]) { v8::V8::InitializeICU(); + v8::Platform* platform = v8::platform::CreateDefaultPlatform(); + v8::V8::InitializePlatform(platform); int result = RunMain(argc, argv); v8::V8::Dispose(); + v8::V8::ShutdownPlatform(); + delete platform; return result; } @@ -362,7 +305,7 @@ void ReportException(v8::Isolate* isolate, v8::TryCatch* try_catch) { printf("%s\n", exception_string); } else { // Print (filename):(line number): (message). 
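Both updated samples now recover the failing script's name through the new `Message::GetScriptOrigin()` accessor instead of the removed `GetScriptResourceName()`. A compact sketch of that reporting pattern, lifted from the sample hunks around this point:

#include <cstdio>

#include "include/v8.h"

void PrintErrorLocation(v8::Handle<v8::Message> message) {
  v8::String::Utf8Value filename(message->GetScriptOrigin().ResourceName());
  const char* filename_string = *filename ? *filename : "<unknown>";
  std::printf("%s:%i\n", filename_string, message->GetLineNumber());
}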
- v8::String::Utf8Value filename(message->GetScriptResourceName()); + v8::String::Utf8Value filename(message->GetScriptOrigin().ResourceName()); const char* filename_string = ToCString(filename); int linenum = message->GetLineNumber(); printf("%s:%i: %s\n", filename_string, linenum, exception_string); @@ -423,7 +366,6 @@ v8::Handle<v8::String> ReadLine() { char* res; { - v8::Unlocker unlocker(v8::Isolate::GetCurrent()); res = fgets(buffer, kBufferSize, stdin); } v8::Isolate* isolate = v8::Isolate::GetCurrent(); diff --git a/deps/v8/samples/process.cc b/deps/v8/samples/process.cc index 37b4d392089..4db7eeb7b87 100644 --- a/deps/v8/samples/process.cc +++ b/deps/v8/samples/process.cc @@ -25,10 +25,12 @@ // (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE // OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. -#include <v8.h> +#include <include/v8.h> + +#include <include/libplatform/libplatform.h> -#include <string> #include <map> +#include <string> #ifdef COMPRESS_STARTUP_DATA_BZ2 #error Using compressed startup data is not supported for this sample @@ -574,7 +576,7 @@ StringHttpRequest::StringHttpRequest(const string& path, void ParseOptions(int argc, char* argv[], - map<string, string>& options, + map<string, string>* options, string* file) { for (int i = 1; i < argc; i++) { string arg = argv[i]; @@ -584,7 +586,7 @@ void ParseOptions(int argc, } else { string key = arg.substr(0, index); string value = arg.substr(index+1); - options[key] = value; + (*options)[key] = value; } } } @@ -644,14 +646,17 @@ void PrintMap(map<string, string>* m) { int main(int argc, char* argv[]) { v8::V8::InitializeICU(); + v8::Platform* platform = v8::platform::CreateDefaultPlatform(); + v8::V8::InitializePlatform(platform); map<string, string> options; string file; - ParseOptions(argc, argv, options, &file); + ParseOptions(argc, argv, &options, &file); if (file.empty()) { fprintf(stderr, "No script was specified.\n"); return 1; } - Isolate* isolate = Isolate::GetCurrent(); + Isolate* isolate = Isolate::New(); + Isolate::Scope isolate_scope(isolate); HandleScope scope(isolate); Handle<String> source = ReadFile(isolate, file); if (source.IsEmpty()) { diff --git a/deps/v8/samples/samples.gyp b/deps/v8/samples/samples.gyp index dfc7410070b..0c4c705609a 100644 --- a/deps/v8/samples/samples.gyp +++ b/deps/v8/samples/samples.gyp @@ -35,9 +35,10 @@ 'type': 'executable', 'dependencies': [ '../tools/gyp/v8.gyp:v8', + '../tools/gyp/v8.gyp:v8_libplatform', ], 'include_dirs': [ - '../include', + '..', ], 'conditions': [ ['v8_enable_i18n_support==1', { diff --git a/deps/v8/samples/shell.cc b/deps/v8/samples/shell.cc index f8d2c84594b..ef61426a0ab 100644 --- a/deps/v8/samples/shell.cc +++ b/deps/v8/samples/shell.cc @@ -25,12 +25,15 @@ // (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE // OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. -#include <v8.h> +#include <include/v8.h> + +#include <include/libplatform/libplatform.h> + #include <assert.h> #include <fcntl.h> -#include <string.h> #include <stdio.h> #include <stdlib.h> +#include <string.h> #ifdef COMPRESS_STARTUP_DATA_BZ2 #error Using compressed startup data is not supported for this sample @@ -65,25 +68,42 @@ void ReportException(v8::Isolate* isolate, v8::TryCatch* handler); static bool run_shell; +class ShellArrayBufferAllocator : public v8::ArrayBuffer::Allocator { + public: + virtual void* Allocate(size_t length) { + void* data = AllocateUninitialized(length); + return data == NULL ? 
data : memset(data, 0, length); + } + virtual void* AllocateUninitialized(size_t length) { return malloc(length); } + virtual void Free(void* data, size_t) { free(data); } +}; + + int main(int argc, char* argv[]) { v8::V8::InitializeICU(); + v8::Platform* platform = v8::platform::CreateDefaultPlatform(); + v8::V8::InitializePlatform(platform); v8::V8::SetFlagsFromCommandLine(&argc, argv, true); - v8::Isolate* isolate = v8::Isolate::GetCurrent(); + ShellArrayBufferAllocator array_buffer_allocator; + v8::V8::SetArrayBufferAllocator(&array_buffer_allocator); + v8::Isolate* isolate = v8::Isolate::New(); run_shell = (argc == 1); int result; { + v8::Isolate::Scope isolate_scope(isolate); v8::HandleScope handle_scope(isolate); v8::Handle<v8::Context> context = CreateShellContext(isolate); if (context.IsEmpty()) { fprintf(stderr, "Error creating context\n"); return 1; } - context->Enter(); + v8::Context::Scope context_scope(context); result = RunMain(isolate, argc, argv); if (run_shell) RunShell(context); - context->Exit(); } v8::V8::Dispose(); + v8::V8::ShutdownPlatform(); + delete platform; return result; } @@ -345,7 +365,7 @@ void ReportException(v8::Isolate* isolate, v8::TryCatch* try_catch) { fprintf(stderr, "%s\n", exception_string); } else { // Print (filename):(line number): (message). - v8::String::Utf8Value filename(message->GetScriptResourceName()); + v8::String::Utf8Value filename(message->GetScriptOrigin().ResourceName()); const char* filename_string = ToCString(filename); int linenum = message->GetLineNumber(); fprintf(stderr, "%s:%i: %s\n", filename_string, linenum, exception_string); diff --git a/deps/v8/src/DEPS b/deps/v8/src/DEPS new file mode 100644 index 00000000000..f38a902bdf0 --- /dev/null +++ b/deps/v8/src/DEPS @@ -0,0 +1,13 @@ +include_rules = [ + "+src", + "-src/compiler", + "+src/compiler/pipeline.h", + "-src/libplatform", + "-include/libplatform", +] + +specific_include_rules = { + "(mksnapshot|d8)\.cc": [ + "+include/libplatform/libplatform.h", + ], +} diff --git a/deps/v8/src/accessors.cc b/deps/v8/src/accessors.cc index f219bed3b34..3875c4fdf48 100644 --- a/deps/v8/src/accessors.cc +++ b/deps/v8/src/accessors.cc @@ -2,34 +2,25 @@ // Use of this source code is governed by a BSD-style license that can be // found in the LICENSE file. -#include "v8.h" -#include "accessors.h" - -#include "compiler.h" -#include "contexts.h" -#include "deoptimizer.h" -#include "execution.h" -#include "factory.h" -#include "frames-inl.h" -#include "isolate.h" -#include "list-inl.h" -#include "property-details.h" -#include "api.h" +#include "src/v8.h" + +#include "src/accessors.h" +#include "src/api.h" +#include "src/compiler.h" +#include "src/contexts.h" +#include "src/deoptimizer.h" +#include "src/execution.h" +#include "src/factory.h" +#include "src/frames-inl.h" +#include "src/isolate.h" +#include "src/list-inl.h" +#include "src/property-details.h" +#include "src/prototype.h" namespace v8 { namespace internal { -// We have a slight impedance mismatch between the external API and the way we -// use callbacks internally: Externally, callbacks can only be used with -// v8::Object, but internally we even have callbacks on entities which are -// higher in the hierarchy, so we can only return i::Object here, not -// i::JSObject. 
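Note the ordering in the updated shell.cc above: the allocator is registered through the process-wide `V8::SetArrayBufferAllocator()` before the isolate is created, and it must outlive every ArrayBuffer allocated against it. A condensed restatement of that sequence, reusing the `ShellArrayBufferAllocator` from the hunk above:

ShellArrayBufferAllocator array_buffer_allocator;       // zero-initializing malloc wrapper
v8::V8::SetArrayBufferAllocator(&array_buffer_allocator);
v8::Isolate* isolate = v8::Isolate::New();              // ArrayBuffers are safe to use now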
-Handle<Object> GetThisFrom(const v8::PropertyCallbackInfo<v8::Value>& info) { - return Utils::OpenHandle(*v8::Local<v8::Value>(info.This())); -} - - Handle<AccessorInfo> Accessors::MakeAccessor( Isolate* isolate, Handle<String> name, @@ -41,7 +32,6 @@ Handle<AccessorInfo> Accessors::MakeAccessor( info->set_property_attributes(attributes); info->set_all_can_read(false); info->set_all_can_write(false); - info->set_prohibits_overwriting(false); info->set_name(*name); Handle<Object> get = v8::FromCData(isolate, getter); Handle<Object> set = v8::FromCData(isolate, setter); @@ -51,20 +41,37 @@ Handle<AccessorInfo> Accessors::MakeAccessor( } +Handle<ExecutableAccessorInfo> Accessors::CloneAccessor( + Isolate* isolate, + Handle<ExecutableAccessorInfo> accessor) { + Factory* factory = isolate->factory(); + Handle<ExecutableAccessorInfo> info = factory->NewExecutableAccessorInfo(); + info->set_name(accessor->name()); + info->set_flag(accessor->flag()); + info->set_expected_receiver_type(accessor->expected_receiver_type()); + info->set_getter(accessor->getter()); + info->set_setter(accessor->setter()); + info->set_data(accessor->data()); + return info; +} + + template <class C> static C* FindInstanceOf(Isolate* isolate, Object* obj) { - for (Object* cur = obj; !cur->IsNull(); cur = cur->GetPrototype(isolate)) { - if (Is<C>(cur)) return C::cast(cur); + for (PrototypeIterator iter(isolate, obj, + PrototypeIterator::START_AT_RECEIVER); + !iter.IsAtEnd(); iter.Advance()) { + if (Is<C>(iter.GetCurrent())) return C::cast(iter.GetCurrent()); } return NULL; } -static V8_INLINE bool CheckForName(Handle<String> name, +static V8_INLINE bool CheckForName(Handle<Name> name, Handle<String> property_name, int offset, int* object_offset) { - if (String::Equals(name, property_name)) { + if (Name::Equals(name, property_name)) { *object_offset = offset; return true; } @@ -76,7 +83,7 @@ static V8_INLINE bool CheckForName(Handle<String> name, // If true, *object_offset contains offset of object field. template <class T> bool Accessors::IsJSObjectFieldAccessor(typename T::TypeHandle type, - Handle<String> name, + Handle<Name> name, int* object_offset) { Isolate* isolate = name->GetIsolate(); @@ -119,16 +126,35 @@ bool Accessors::IsJSObjectFieldAccessor(typename T::TypeHandle type, template bool Accessors::IsJSObjectFieldAccessor<Type>(Type* type, - Handle<String> name, + Handle<Name> name, int* object_offset); template bool Accessors::IsJSObjectFieldAccessor<HeapType>(Handle<HeapType> type, - Handle<String> name, + Handle<Name> name, int* object_offset); +bool SetPropertyOnInstanceIfInherited( + Isolate* isolate, const v8::PropertyCallbackInfo<void>& info, + v8::Local<v8::String> name, Handle<Object> value) { + Handle<Object> holder = Utils::OpenHandle(*info.Holder()); + Handle<Object> receiver = Utils::OpenHandle(*info.This()); + if (*holder == *receiver) return false; + if (receiver->IsJSObject()) { + Handle<JSObject> object = Handle<JSObject>::cast(receiver); + // This behaves sloppily since we lost the actual strict mode. + // TODO(verwaest): Fix by making ExecutableAccessorInfo behave like data + // properties. 
+ if (!object->map()->is_extensible()) return true; + JSObject::SetOwnPropertyIgnoreAttributes(object, Utils::OpenHandle(*name), + value, NONE).Check(); + } + return true; +} + + // // Accessors::ArrayLength // @@ -139,10 +165,9 @@ Handle<Object> Accessors::FlattenNumber(Isolate* isolate, Handle<Object> value) { if (value->IsNumber() || !value->IsJSValue()) return value; Handle<JSValue> wrapper = Handle<JSValue>::cast(value); - ASSERT(wrapper->GetIsolate()->context()->native_context()->number_function()-> + DCHECK(wrapper->GetIsolate()->native_context()->number_function()-> has_initial_map()); - if (wrapper->map() == - isolate->context()->native_context()->number_function()->initial_map()) { + if (wrapper->map() == isolate->number_function()->initial_map()) { return handle(wrapper->value(), isolate); } @@ -156,15 +181,8 @@ void Accessors::ArrayLengthGetter( i::Isolate* isolate = reinterpret_cast<i::Isolate*>(info.GetIsolate()); DisallowHeapAllocation no_allocation; HandleScope scope(isolate); - Object* object = *GetThisFrom(info); - // Traverse the prototype chain until we reach an array. - JSArray* holder = FindInstanceOf<JSArray>(isolate, object); - Object* result; - if (holder != NULL) { - result = holder->length(); - } else { - result = Smi::FromInt(0); - } + JSArray* holder = JSArray::cast(*Utils::OpenHandle(*info.Holder())); + Object* result = holder->length(); info.GetReturnValue().Set(Utils::ToLocal(Handle<Object>(result, isolate))); } @@ -175,17 +193,9 @@ void Accessors::ArrayLengthSetter( const v8::PropertyCallbackInfo<void>& info) { i::Isolate* isolate = reinterpret_cast<i::Isolate*>(info.GetIsolate()); HandleScope scope(isolate); - Handle<JSObject> object = Handle<JSObject>::cast( - Utils::OpenHandle(*info.This())); + Handle<JSObject> object = Utils::OpenHandle(*info.This()); Handle<Object> value = Utils::OpenHandle(*val); - // This means one of the object's prototypes is a JSArray and the - // object does not have a 'length' property. Calling SetProperty - // causes an infinite loop. - if (!object->IsJSArray()) { - MaybeHandle<Object> maybe_result = - JSObject::SetLocalPropertyIgnoreAttributes( - object, isolate->factory()->length_string(), value, NONE); - maybe_result.Check(); + if (SetPropertyOnInstanceIfInherited(isolate, info, name, value)) { return; } @@ -239,16 +249,19 @@ void Accessors::StringLengthGetter( i::Isolate* isolate = reinterpret_cast<i::Isolate*>(info.GetIsolate()); DisallowHeapAllocation no_allocation; HandleScope scope(isolate); - Object* value = *GetThisFrom(info); - Object* result; - if (value->IsJSValue()) value = JSValue::cast(value)->value(); - if (value->IsString()) { - result = Smi::FromInt(String::cast(value)->length()); - } else { - // If object is not a string we return 0 to be compatible with WebKit. - // Note: Firefox returns the length of ToString(object). - result = Smi::FromInt(0); + + // We have a slight impedance mismatch between the external API and the way we + // use callbacks internally: Externally, callbacks can only be used with + // v8::Object, but internally we have callbacks on entities which are higher + // in the hierarchy, in this case for String values. + + Object* value = *Utils::OpenHandle(*v8::Local<v8::Value>(info.This())); + if (!value->IsString()) { + // Not a string value. That means that we either got a String wrapper or + // a Value with a String wrapper in its prototype chain. 
+ value = JSValue::cast(*Utils::OpenHandle(*info.Holder()))->value(); } + Object* result = Smi::FromInt(String::cast(value)->length()); info.GetReturnValue().Set(Utils::ToLocal(Handle<Object>(result, isolate))); } @@ -541,10 +554,10 @@ void Accessors::ScriptLineEndsGetter( Handle<Script> script( Script::cast(Handle<JSValue>::cast(object)->value()), isolate); Script::InitLineEnds(script); - ASSERT(script->line_ends()->IsFixedArray()); + DCHECK(script->line_ends()->IsFixedArray()); Handle<FixedArray> line_ends(FixedArray::cast(script->line_ends())); // We do not want anyone to modify this array from JS. - ASSERT(*line_ends == isolate->heap()->empty_fixed_array() || + DCHECK(*line_ends == isolate->heap()->empty_fixed_array() || line_ends->map() == isolate->heap()->fixed_cow_array_map()); Handle<JSArray> js_array = isolate->factory()->NewJSArrayWithElements(line_ends); @@ -572,6 +585,77 @@ Handle<AccessorInfo> Accessors::ScriptLineEndsInfo( } +// +// Accessors::ScriptSourceUrl +// + + +void Accessors::ScriptSourceUrlGetter( + v8::Local<v8::String> name, + const v8::PropertyCallbackInfo<v8::Value>& info) { + i::Isolate* isolate = reinterpret_cast<i::Isolate*>(info.GetIsolate()); + DisallowHeapAllocation no_allocation; + HandleScope scope(isolate); + Object* object = *Utils::OpenHandle(*info.This()); + Object* url = Script::cast(JSValue::cast(object)->value())->source_url(); + info.GetReturnValue().Set(Utils::ToLocal(Handle<Object>(url, isolate))); +} + + +void Accessors::ScriptSourceUrlSetter( + v8::Local<v8::String> name, + v8::Local<v8::Value> value, + const v8::PropertyCallbackInfo<void>& info) { + UNREACHABLE(); +} + + +Handle<AccessorInfo> Accessors::ScriptSourceUrlInfo( + Isolate* isolate, PropertyAttributes attributes) { + return MakeAccessor(isolate, + isolate->factory()->source_url_string(), + &ScriptSourceUrlGetter, + &ScriptSourceUrlSetter, + attributes); +} + + +// +// Accessors::ScriptSourceMappingUrl +// + + +void Accessors::ScriptSourceMappingUrlGetter( + v8::Local<v8::String> name, + const v8::PropertyCallbackInfo<v8::Value>& info) { + i::Isolate* isolate = reinterpret_cast<i::Isolate*>(info.GetIsolate()); + DisallowHeapAllocation no_allocation; + HandleScope scope(isolate); + Object* object = *Utils::OpenHandle(*info.This()); + Object* url = + Script::cast(JSValue::cast(object)->value())->source_mapping_url(); + info.GetReturnValue().Set(Utils::ToLocal(Handle<Object>(url, isolate))); +} + + +void Accessors::ScriptSourceMappingUrlSetter( + v8::Local<v8::String> name, + v8::Local<v8::Value> value, + const v8::PropertyCallbackInfo<void>& info) { + UNREACHABLE(); +} + + +Handle<AccessorInfo> Accessors::ScriptSourceMappingUrlInfo( + Isolate* isolate, PropertyAttributes attributes) { + return MakeAccessor(isolate, + isolate->factory()->source_mapping_url_string(), + &ScriptSourceMappingUrlGetter, + &ScriptSourceMappingUrlSetter, + attributes); +} + + // // Accessors::ScriptGetContextData // @@ -753,21 +837,7 @@ Handle<AccessorInfo> Accessors::ScriptEvalFromFunctionNameInfo( // static Handle<Object> GetFunctionPrototype(Isolate* isolate, - Handle<Object> receiver) { - Handle<JSFunction> function; - { - DisallowHeapAllocation no_allocation; - JSFunction* function_raw = FindInstanceOf<JSFunction>(isolate, *receiver); - if (function_raw == NULL) return isolate->factory()->undefined_value(); - while (!function_raw->should_have_prototype()) { - function_raw = FindInstanceOf<JSFunction>(isolate, - function_raw->GetPrototype()); - // There has to be one because we hit the getter. 
- ASSERT(function_raw != NULL); - } - function = Handle<JSFunction>(function_raw, isolate); - } - + Handle<JSFunction> function) { if (!function->has_prototype()) { Handle<Object> proto = isolate->factory()->NewFunctionPrototype(function); JSFunction::SetPrototype(function, proto); @@ -777,26 +847,10 @@ static Handle<Object> GetFunctionPrototype(Isolate* isolate, static Handle<Object> SetFunctionPrototype(Isolate* isolate, - Handle<JSObject> receiver, + Handle<JSFunction> function, Handle<Object> value) { - Handle<JSFunction> function; - { - DisallowHeapAllocation no_allocation; - JSFunction* function_raw = FindInstanceOf<JSFunction>(isolate, *receiver); - if (function_raw == NULL) return isolate->factory()->undefined_value(); - function = Handle<JSFunction>(function_raw, isolate); - } - - if (!function->should_have_prototype()) { - // Since we hit this accessor, object will have no prototype property. - MaybeHandle<Object> maybe_result = - JSObject::SetLocalPropertyIgnoreAttributes( - receiver, isolate->factory()->prototype_string(), value, NONE); - return maybe_result.ToHandleChecked(); - } - Handle<Object> old_value; - bool is_observed = *function == *receiver && function->map()->is_observed(); + bool is_observed = function->map()->is_observed(); if (is_observed) { if (function->has_prototype()) old_value = handle(function->prototype(), isolate); @@ -805,7 +859,7 @@ static Handle<Object> SetFunctionPrototype(Isolate* isolate, } JSFunction::SetPrototype(function, value); - ASSERT(function->prototype() == *value); + DCHECK(function->prototype() == *value); if (is_observed && !old_value->SameValue(*value)) { JSObject::EnqueueChangeRecord( @@ -823,7 +877,7 @@ Handle<Object> Accessors::FunctionGetPrototype(Handle<JSFunction> function) { Handle<Object> Accessors::FunctionSetPrototype(Handle<JSFunction> function, Handle<Object> prototype) { - ASSERT(function->should_have_prototype()); + DCHECK(function->should_have_prototype()); Isolate* isolate = function->GetIsolate(); return SetFunctionPrototype(isolate, function, prototype); } @@ -834,8 +888,9 @@ void Accessors::FunctionPrototypeGetter( const v8::PropertyCallbackInfo<v8::Value>& info) { i::Isolate* isolate = reinterpret_cast<i::Isolate*>(info.GetIsolate()); HandleScope scope(isolate); - Handle<Object> object = GetThisFrom(info); - Handle<Object> result = GetFunctionPrototype(isolate, object); + Handle<JSFunction> function = + Handle<JSFunction>::cast(Utils::OpenHandle(*info.Holder())); + Handle<Object> result = GetFunctionPrototype(isolate, function); info.GetReturnValue().Set(Utils::ToLocal(result)); } @@ -846,10 +901,12 @@ void Accessors::FunctionPrototypeSetter( const v8::PropertyCallbackInfo<void>& info) { i::Isolate* isolate = reinterpret_cast<i::Isolate*>(info.GetIsolate()); HandleScope scope(isolate); - Handle<JSObject> object = - Handle<JSObject>::cast(Utils::OpenHandle(*info.This())); Handle<Object> value = Utils::OpenHandle(*val); - + if (SetPropertyOnInstanceIfInherited(isolate, info, name, value)) { + return; + } + Handle<JSFunction> object = + Handle<JSFunction>::cast(Utils::OpenHandle(*info.Holder())); SetFunctionPrototype(isolate, object, value); } @@ -874,29 +931,20 @@ void Accessors::FunctionLengthGetter( const v8::PropertyCallbackInfo<v8::Value>& info) { i::Isolate* isolate = reinterpret_cast<i::Isolate*>(info.GetIsolate()); HandleScope scope(isolate); - Handle<Object> object = GetThisFrom(info); - MaybeHandle<JSFunction> maybe_function; - - { - DisallowHeapAllocation no_allocation; - JSFunction* function = 
FindInstanceOf<JSFunction>(isolate, *object); - if (function != NULL) maybe_function = Handle<JSFunction>(function); - } + Handle<JSFunction> function = + Handle<JSFunction>::cast(Utils::OpenHandle(*info.Holder())); int length = 0; - Handle<JSFunction> function; - if (maybe_function.ToHandle(&function)) { - if (function->shared()->is_compiled()) { + if (function->shared()->is_compiled()) { + length = function->shared()->length(); + } else { + // If the function isn't compiled yet, the length is not computed + // correctly yet. Compile it now and return the right length. + if (Compiler::EnsureCompiled(function, KEEP_EXCEPTION)) { length = function->shared()->length(); - } else { - // If the function isn't compiled yet, the length is not computed - // correctly yet. Compile it now and return the right length. - if (Compiler::EnsureCompiled(function, KEEP_EXCEPTION)) { - length = function->shared()->length(); - } - if (isolate->has_pending_exception()) { - isolate->OptionalRescheduleException(false); - } + } + if (isolate->has_pending_exception()) { + isolate->OptionalRescheduleException(false); } } Handle<Object> result(Smi::FromInt(length), isolate); @@ -908,7 +956,8 @@ void Accessors::FunctionLengthSetter( v8::Local<v8::String> name, v8::Local<v8::Value> val, const v8::PropertyCallbackInfo<void>& info) { - // Do nothing. + // Function length is non writable, non configurable. + UNREACHABLE(); } @@ -932,22 +981,9 @@ void Accessors::FunctionNameGetter( const v8::PropertyCallbackInfo<v8::Value>& info) { i::Isolate* isolate = reinterpret_cast<i::Isolate*>(info.GetIsolate()); HandleScope scope(isolate); - Handle<Object> object = GetThisFrom(info); - MaybeHandle<JSFunction> maybe_function; - - { - DisallowHeapAllocation no_allocation; - JSFunction* function = FindInstanceOf<JSFunction>(isolate, *object); - if (function != NULL) maybe_function = Handle<JSFunction>(function); - } - - Handle<JSFunction> function; - Handle<Object> result; - if (maybe_function.ToHandle(&function)) { - result = Handle<Object>(function->shared()->name(), isolate); - } else { - result = isolate->factory()->undefined_value(); - } + Handle<JSFunction> function = + Handle<JSFunction>::cast(Utils::OpenHandle(*info.Holder())); + Handle<Object> result(function->shared()->name(), isolate); info.GetReturnValue().Set(Utils::ToLocal(result)); } @@ -956,7 +992,8 @@ void Accessors::FunctionNameSetter( v8::Local<v8::String> name, v8::Local<v8::Value> val, const v8::PropertyCallbackInfo<void>& info) { - // Do nothing. + // Function name is non writable, non configurable. + UNREACHABLE(); } @@ -1058,7 +1095,7 @@ Handle<Object> GetFunctionArguments(Isolate* isolate, Handle<FixedArray> array = isolate->factory()->NewFixedArray(length); // Copy the parameters to the arguments object. 
- ASSERT(array->length() == length); + DCHECK(array->length() == length); for (int i = 0; i < length; i++) array->set(i, frame->GetParameter(i)); arguments->set_elements(*array); @@ -1081,22 +1118,9 @@ void Accessors::FunctionArgumentsGetter( const v8::PropertyCallbackInfo<v8::Value>& info) { i::Isolate* isolate = reinterpret_cast<i::Isolate*>(info.GetIsolate()); HandleScope scope(isolate); - Handle<Object> object = GetThisFrom(info); - MaybeHandle<JSFunction> maybe_function; - - { - DisallowHeapAllocation no_allocation; - JSFunction* function = FindInstanceOf<JSFunction>(isolate, *object); - if (function != NULL) maybe_function = Handle<JSFunction>(function); - } - - Handle<JSFunction> function; - Handle<Object> result; - if (maybe_function.ToHandle(&function)) { - result = GetFunctionArguments(isolate, function); - } else { - result = isolate->factory()->undefined_value(); - } + Handle<JSFunction> function = + Handle<JSFunction>::cast(Utils::OpenHandle(*info.Holder())); + Handle<Object> result = GetFunctionArguments(isolate, function); info.GetReturnValue().Set(Utils::ToLocal(result)); } @@ -1105,7 +1129,8 @@ void Accessors::FunctionArgumentsSetter( v8::Local<v8::String> name, v8::Local<v8::Value> val, const v8::PropertyCallbackInfo<void>& info) { - // Do nothing. + // Function arguments is non writable, non configurable. + UNREACHABLE(); } @@ -1124,22 +1149,33 @@ Handle<AccessorInfo> Accessors::FunctionArgumentsInfo( // +static inline bool AllowAccessToFunction(Context* current_context, + JSFunction* function) { + return current_context->HasSameSecurityTokenAs(function->context()); +} + + class FrameFunctionIterator { public: FrameFunctionIterator(Isolate* isolate, const DisallowHeapAllocation& promise) - : frame_iterator_(isolate), + : isolate_(isolate), + frame_iterator_(isolate), functions_(2), index_(0) { GetFunctions(); } JSFunction* next() { - if (functions_.length() == 0) return NULL; - JSFunction* next_function = functions_[index_]; - index_--; - if (index_ < 0) { - GetFunctions(); + while (true) { + if (functions_.length() == 0) return NULL; + JSFunction* next_function = functions_[index_]; + index_--; + if (index_ < 0) { + GetFunctions(); + } + // Skip functions from other origins. + if (!AllowAccessToFunction(isolate_->context(), next_function)) continue; + return next_function; } - return next_function; } // Iterate through functions until the first occurrence of 'function'. @@ -1160,10 +1196,11 @@ class FrameFunctionIterator { if (frame_iterator_.done()) return; JavaScriptFrame* frame = frame_iterator_.frame(); frame->GetFunctions(&functions_); - ASSERT(functions_.length() > 0); + DCHECK(functions_.length() > 0); frame_iterator_.Advance(); index_ = functions_.length() - 1; } + Isolate* isolate_; JavaScriptFrameIterator frame_iterator_; List<JSFunction*> functions_; int index_; @@ -1211,6 +1248,10 @@ MaybeHandle<JSFunction> FindCaller(Isolate* isolate, if (caller->shared()->strict_mode() == STRICT) { return MaybeHandle<JSFunction>(); } + // Don't return caller from another security context. 
+ if (!AllowAccessToFunction(isolate->context(), caller)) { + return MaybeHandle<JSFunction>(); + } return Handle<JSFunction>(caller); } @@ -1220,26 +1261,16 @@ void Accessors::FunctionCallerGetter( const v8::PropertyCallbackInfo<v8::Value>& info) { i::Isolate* isolate = reinterpret_cast<i::Isolate*>(info.GetIsolate()); HandleScope scope(isolate); - Handle<Object> object = GetThisFrom(info); - MaybeHandle<JSFunction> maybe_function; - { - DisallowHeapAllocation no_allocation; - JSFunction* function = FindInstanceOf<JSFunction>(isolate, *object); - if (function != NULL) maybe_function = Handle<JSFunction>(function); - } - Handle<JSFunction> function; + Handle<JSFunction> function = + Handle<JSFunction>::cast(Utils::OpenHandle(*info.Holder())); Handle<Object> result; - if (maybe_function.ToHandle(&function)) { - MaybeHandle<JSFunction> maybe_caller; - maybe_caller = FindCaller(isolate, function); - Handle<JSFunction> caller; - if (maybe_caller.ToHandle(&caller)) { - result = caller; - } else { - result = isolate->factory()->null_value(); - } + MaybeHandle<JSFunction> maybe_caller; + maybe_caller = FindCaller(isolate, function); + Handle<JSFunction> caller; + if (maybe_caller.ToHandle(&caller)) { + result = caller; } else { - result = isolate->factory()->undefined_value(); + result = isolate->factory()->null_value(); } info.GetReturnValue().Set(Utils::ToLocal(result)); } @@ -1249,7 +1280,8 @@ void Accessors::FunctionCallerSetter( v8::Local<v8::String> name, v8::Local<v8::Value> val, const v8::PropertyCallbackInfo<void>& info) { - // Do nothing. + // Function caller is non writable, non configurable. + UNREACHABLE(); } @@ -1272,7 +1304,7 @@ static void ModuleGetExport( const v8::PropertyCallbackInfo<v8::Value>& info) { JSModule* instance = JSModule::cast(*v8::Utils::OpenHandle(*info.Holder())); Context* context = Context::cast(instance->context()); - ASSERT(context->IsModuleContext()); + DCHECK(context->IsModuleContext()); int slot = info.Data()->Int32Value(); Object* value = context->get(slot); Isolate* isolate = instance->GetIsolate(); @@ -1293,7 +1325,7 @@ static void ModuleSetExport( const v8::PropertyCallbackInfo<v8::Value>& info) { JSModule* instance = JSModule::cast(*v8::Utils::OpenHandle(*info.Holder())); Context* context = Context::cast(instance->context()); - ASSERT(context->IsModuleContext()); + DCHECK(context->IsModuleContext()); int slot = info.Data()->Int32Value(); Object* old_value = context->get(slot); if (old_value->IsTheHole()) { diff --git a/deps/v8/src/accessors.h b/deps/v8/src/accessors.h index 8c006e93ac3..17b7510adef 100644 --- a/deps/v8/src/accessors.h +++ b/deps/v8/src/accessors.h @@ -5,8 +5,8 @@ #ifndef V8_ACCESSORS_H_ #define V8_ACCESSORS_H_ -#include "allocation.h" -#include "v8globals.h" +#include "src/allocation.h" +#include "src/globals.h" namespace v8 { namespace internal { @@ -32,6 +32,8 @@ namespace internal { V(ScriptName) \ V(ScriptSource) \ V(ScriptType) \ + V(ScriptSourceUrl) \ + V(ScriptSourceMappingUrl) \ V(StringLength) // Accessors contains all predefined proxy accessors. @@ -76,7 +78,7 @@ class Accessors : public AllStatic { // If true, *object_offset contains offset of object field. 
template <class T> static bool IsJSObjectFieldAccessor(typename T::TypeHandle type, - Handle<String> name, + Handle<Name> name, int* object_offset); static Handle<AccessorInfo> MakeAccessor( @@ -86,6 +88,11 @@ class Accessors : public AllStatic { AccessorSetterCallback setter, PropertyAttributes attributes); + static Handle<ExecutableAccessorInfo> CloneAccessor( + Isolate* isolate, + Handle<ExecutableAccessorInfo> accessor); + + private: // Helper functions. static Handle<Object> FlattenNumber(Isolate* isolate, Handle<Object> value); diff --git a/deps/v8/src/allocation-site-scopes.cc b/deps/v8/src/allocation-site-scopes.cc index 51392fac8e5..5b513f6fef5 100644 --- a/deps/v8/src/allocation-site-scopes.cc +++ b/deps/v8/src/allocation-site-scopes.cc @@ -2,7 +2,7 @@ // Use of this source code is governed by a BSD-style license that can be // found in the LICENSE file. -#include "allocation-site-scopes.h" +#include "src/allocation-site-scopes.h" namespace v8 { namespace internal { @@ -20,7 +20,7 @@ Handle<AllocationSite> AllocationSiteCreationContext::EnterNewScope() { static_cast<void*>(*scope_site)); } } else { - ASSERT(!current().is_null()); + DCHECK(!current().is_null()); scope_site = isolate()->factory()->NewAllocationSite(); if (FLAG_trace_creation_allocation_sites) { PrintF("Creating nested site (top, current, new) (%p, %p, %p)\n", @@ -31,7 +31,7 @@ Handle<AllocationSite> AllocationSiteCreationContext::EnterNewScope() { current()->set_nested_site(*scope_site); update_current_site(*scope_site); } - ASSERT(!scope_site.is_null()); + DCHECK(!scope_site.is_null()); return scope_site; } diff --git a/deps/v8/src/allocation-site-scopes.h b/deps/v8/src/allocation-site-scopes.h index 1ffe004e1ba..836da43bef8 100644 --- a/deps/v8/src/allocation-site-scopes.h +++ b/deps/v8/src/allocation-site-scopes.h @@ -5,10 +5,10 @@ #ifndef V8_ALLOCATION_SITE_SCOPES_H_ #define V8_ALLOCATION_SITE_SCOPES_H_ -#include "ast.h" -#include "handles.h" -#include "objects.h" -#include "zone.h" +#include "src/ast.h" +#include "src/handles.h" +#include "src/objects.h" +#include "src/zone.h" namespace v8 { namespace internal { @@ -20,7 +20,7 @@ class AllocationSiteContext { public: explicit AllocationSiteContext(Isolate* isolate) { isolate_ = isolate; - }; + } Handle<AllocationSite> top() { return top_; } Handle<AllocationSite> current() { return current_; } @@ -75,7 +75,7 @@ class AllocationSiteUsageContext : public AllocationSiteContext { // Advance current site Object* nested_site = current()->nested_site(); // Something is wrong if we advance to the end of the list here. - ASSERT(nested_site->IsAllocationSite()); + DCHECK(nested_site->IsAllocationSite()); update_current_site(AllocationSite::cast(nested_site)); } return Handle<AllocationSite>(*current(), isolate()); @@ -85,7 +85,7 @@ class AllocationSiteUsageContext : public AllocationSiteContext { Handle<JSObject> object) { // This assert ensures that we are pointing at the right sub-object in a // recursive walk of a nested literal. - ASSERT(object.is_null() || *object == scope_site->transition_info()); + DCHECK(object.is_null() || *object == scope_site->transition_info()); } bool ShouldCreateMemento(Handle<JSObject> object); diff --git a/deps/v8/src/allocation-tracker.cc b/deps/v8/src/allocation-tracker.cc index f5d7e0c9d4f..7534ffb82fd 100644 --- a/deps/v8/src/allocation-tracker.cc +++ b/deps/v8/src/allocation-tracker.cc @@ -2,12 +2,11 @@ // Use of this source code is governed by a BSD-style license that can be // found in the LICENSE file. 
-#include "v8.h" +#include "src/v8.h" -#include "allocation-tracker.h" - -#include "heap-snapshot-generator.h" -#include "frames-inl.h" +#include "src/allocation-tracker.h" +#include "src/frames-inl.h" +#include "src/heap-snapshot-generator.h" namespace v8 { namespace internal { @@ -55,15 +54,15 @@ void AllocationTraceNode::AddAllocation(unsigned size) { void AllocationTraceNode::Print(int indent, AllocationTracker* tracker) { - OS::Print("%10u %10u %*c", total_size_, allocation_count_, indent, ' '); + base::OS::Print("%10u %10u %*c", total_size_, allocation_count_, indent, ' '); if (tracker != NULL) { AllocationTracker::FunctionInfo* info = tracker->function_info_list()[function_info_index_]; - OS::Print("%s #%u", info->name, id_); + base::OS::Print("%s #%u", info->name, id_); } else { - OS::Print("%u #%u", function_info_index_, id_); + base::OS::Print("%u #%u", function_info_index_, id_); } - OS::Print("\n"); + base::OS::Print("\n"); indent += 2; for (int i = 0; i < children_.length(); i++) { children_[i]->Print(indent, tracker); @@ -94,8 +93,8 @@ AllocationTraceNode* AllocationTraceTree::AddPathFromEnd( void AllocationTraceTree::Print(AllocationTracker* tracker) { - OS::Print("[AllocationTraceTree:]\n"); - OS::Print("Total size | Allocation count | Function id | id\n"); + base::OS::Print("[AllocationTraceTree:]\n"); + base::OS::Print("Total size | Allocation count | Function id | id\n"); root()->Print(0, tracker); } @@ -229,8 +228,8 @@ void AllocationTracker::AllocationEvent(Address addr, int size) { // Mark the new block as FreeSpace to make sure the heap is iterable // while we are capturing stack trace. FreeListNode::FromAddress(addr)->set_size(heap, size); - ASSERT_EQ(HeapObject::FromAddress(addr)->Size(), size); - ASSERT(FreeListNode::IsFreeListNode(HeapObject::FromAddress(addr))); + DCHECK_EQ(HeapObject::FromAddress(addr)->Size(), size); + DCHECK(FreeListNode::IsFreeListNode(HeapObject::FromAddress(addr))); Isolate* isolate = heap->isolate(); int length = 0; diff --git a/deps/v8/src/allocation.cc b/deps/v8/src/allocation.cc index 0549a199ff2..b5aa98416b6 100644 --- a/deps/v8/src/allocation.cc +++ b/deps/v8/src/allocation.cc @@ -2,12 +2,12 @@ // Use of this source code is governed by a BSD-style license that can be // found in the LICENSE file. -#include "allocation.h" +#include "src/allocation.h" #include <stdlib.h> // For free, malloc. 
-#include "checks.h" -#include "platform.h" -#include "utils.h" +#include "src/base/logging.h" +#include "src/base/platform/platform.h" +#include "src/utils.h" #if V8_LIBC_BIONIC #include <malloc.h> // NOLINT @@ -66,7 +66,7 @@ void AllStatic::operator delete(void* p) { char* StrDup(const char* str) { int length = StrLength(str); char* result = NewArray<char>(length + 1); - OS::MemCopy(result, str, length); + MemCopy(result, str, length); result[length] = '\0'; return result; } @@ -76,14 +76,14 @@ char* StrNDup(const char* str, int n) { int length = StrLength(str); if (n < length) length = n; char* result = NewArray<char>(length + 1); - OS::MemCopy(result, str, length); + MemCopy(result, str, length); result[length] = '\0'; return result; } void* AlignedAlloc(size_t size, size_t alignment) { - ASSERT(IsPowerOf2(alignment) && alignment >= V8_ALIGNOF(void*)); // NOLINT + DCHECK(IsPowerOf2(alignment) && alignment >= V8_ALIGNOF(void*)); // NOLINT void* ptr; #if V8_OS_WIN ptr = _aligned_malloc(size, alignment); diff --git a/deps/v8/src/allocation.h b/deps/v8/src/allocation.h index 13d08a8169f..2fea7b2826e 100644 --- a/deps/v8/src/allocation.h +++ b/deps/v8/src/allocation.h @@ -5,7 +5,7 @@ #ifndef V8_ALLOCATION_H_ #define V8_ALLOCATION_H_ -#include "globals.h" +#include "src/globals.h" namespace v8 { namespace internal { diff --git a/deps/v8/src/api.cc b/deps/v8/src/api.cc index 8a99c278cb1..4a6345910f9 100644 --- a/deps/v8/src/api.cc +++ b/deps/v8/src/api.cc @@ -2,53 +2,56 @@ // Use of this source code is governed by a BSD-style license that can be // found in the LICENSE file. -#include "api.h" +#include "src/api.h" #include <string.h> // For memcpy, strlen. +#ifdef V8_USE_ADDRESS_SANITIZER +#include <sanitizer/asan_interface.h> +#endif // V8_USE_ADDRESS_SANITIZER #include <cmath> // For isnan. 
-#include "../include/v8-debug.h" -#include "../include/v8-profiler.h" -#include "../include/v8-testing.h" -#include "assert-scope.h" -#include "bootstrapper.h" -#include "code-stubs.h" -#include "compiler.h" -#include "conversions-inl.h" -#include "counters.h" -#include "cpu-profiler.h" -#include "debug.h" -#include "deoptimizer.h" -#include "execution.h" -#include "global-handles.h" -#include "heap-profiler.h" -#include "heap-snapshot-generator-inl.h" -#include "icu_util.h" -#include "json-parser.h" -#include "messages.h" -#ifdef COMPRESS_STARTUP_DATA_BZ2 -#include "natives.h" -#endif -#include "parser.h" -#include "platform.h" -#include "platform/time.h" -#include "profile-generator-inl.h" -#include "property-details.h" -#include "property.h" -#include "runtime.h" -#include "runtime-profiler.h" -#include "scanner-character-streams.h" -#include "snapshot.h" -#include "unicode-inl.h" -#include "utils/random-number-generator.h" -#include "v8threads.h" -#include "version.h" -#include "vm-state-inl.h" +#include "include/v8-debug.h" +#include "include/v8-profiler.h" +#include "include/v8-testing.h" +#include "src/assert-scope.h" +#include "src/base/platform/platform.h" +#include "src/base/platform/time.h" +#include "src/base/utils/random-number-generator.h" +#include "src/bootstrapper.h" +#include "src/code-stubs.h" +#include "src/compiler.h" +#include "src/conversions-inl.h" +#include "src/counters.h" +#include "src/cpu-profiler.h" +#include "src/debug.h" +#include "src/deoptimizer.h" +#include "src/execution.h" +#include "src/global-handles.h" +#include "src/heap-profiler.h" +#include "src/heap-snapshot-generator-inl.h" +#include "src/icu_util.h" +#include "src/json-parser.h" +#include "src/messages.h" +#include "src/natives.h" +#include "src/parser.h" +#include "src/profile-generator-inl.h" +#include "src/property.h" +#include "src/property-details.h" +#include "src/prototype.h" +#include "src/runtime.h" +#include "src/runtime-profiler.h" +#include "src/scanner-character-streams.h" +#include "src/simulator.h" +#include "src/snapshot.h" +#include "src/unicode-inl.h" +#include "src/v8threads.h" +#include "src/version.h" +#include "src/vm-state-inl.h" #define LOG_API(isolate, expr) LOG(isolate, ApiEntryCall(expr)) #define ENTER_V8(isolate) \ - ASSERT((isolate)->IsInitialized()); \ + DCHECK((isolate)->IsInitialized()); \ i::VMState<i::OTHER> __state__((isolate)) namespace v8 { @@ -62,7 +65,7 @@ namespace v8 { #define EXCEPTION_PREAMBLE(isolate) \ (isolate)->handle_scope_implementer()->IncrementCallDepth(); \ - ASSERT(!(isolate)->external_caught_exception()); \ + DCHECK(!(isolate)->external_caught_exception()); \ bool has_pending_exception = false @@ -172,9 +175,9 @@ void Utils::ReportApiFailure(const char* location, const char* message) { i::Isolate* isolate = i::Isolate::Current(); FatalErrorCallback callback = isolate->exception_behavior(); if (callback == NULL) { - i::OS::PrintError("\n#\n# Fatal error in %s\n# %s\n#\n\n", - location, message); - i::OS::Abort(); + base::OS::PrintError("\n#\n# Fatal error in %s\n# %s\n#\n\n", location, + message); + base::OS::Abort(); } else { callback(location, message); } @@ -252,7 +255,7 @@ int StartupDataDecompressor::Decompress() { compressed_data[i].compressed_size); if (result != 0) return result; } else { - ASSERT_EQ(0, compressed_data[i].raw_size); + DCHECK_EQ(0, compressed_data[i].raw_size); } compressed_data[i].data = decompressed; } @@ -322,24 +325,24 @@ void V8::GetCompressedStartupData(StartupData* compressed_data) { void 
V8::SetDecompressedStartupData(StartupData* decompressed_data) { #ifdef COMPRESS_STARTUP_DATA_BZ2 - ASSERT_EQ(i::Snapshot::raw_size(), decompressed_data[kSnapshot].raw_size); + DCHECK_EQ(i::Snapshot::raw_size(), decompressed_data[kSnapshot].raw_size); i::Snapshot::set_raw_data( reinterpret_cast<const i::byte*>(decompressed_data[kSnapshot].data)); - ASSERT_EQ(i::Snapshot::context_raw_size(), + DCHECK_EQ(i::Snapshot::context_raw_size(), decompressed_data[kSnapshotContext].raw_size); i::Snapshot::set_context_raw_data( reinterpret_cast<const i::byte*>( decompressed_data[kSnapshotContext].data)); - ASSERT_EQ(i::Natives::GetRawScriptsSize(), + DCHECK_EQ(i::Natives::GetRawScriptsSize(), decompressed_data[kLibraries].raw_size); i::Vector<const char> libraries_source( decompressed_data[kLibraries].data, decompressed_data[kLibraries].raw_size); i::Natives::SetRawScriptsSource(libraries_source); - ASSERT_EQ(i::ExperimentalNatives::GetRawScriptsSize(), + DCHECK_EQ(i::ExperimentalNatives::GetRawScriptsSize(), decompressed_data[kExperimentalLibraries].raw_size); i::Vector<const char> exp_libraries_source( decompressed_data[kExperimentalLibraries].data, @@ -349,15 +352,33 @@ void V8::SetDecompressedStartupData(StartupData* decompressed_data) { } +void V8::SetNativesDataBlob(StartupData* natives_blob) { +#ifdef V8_USE_EXTERNAL_STARTUP_DATA + i::SetNativesFromFile(natives_blob); +#else + CHECK(false); +#endif +} + + +void V8::SetSnapshotDataBlob(StartupData* snapshot_blob) { +#ifdef V8_USE_EXTERNAL_STARTUP_DATA + i::SetSnapshotFromFile(snapshot_blob); +#else + CHECK(false); +#endif +} + + void V8::SetFatalErrorHandler(FatalErrorCallback that) { - i::Isolate* isolate = i::Isolate::UncheckedCurrent(); + i::Isolate* isolate = i::Isolate::Current(); isolate->set_exception_behavior(that); } void V8::SetAllowCodeGenerationFromStringsCallback( AllowCodeGenerationFromStringsCallback callback) { - i::Isolate* isolate = i::Isolate::UncheckedCurrent(); + i::Isolate* isolate = i::Isolate::Current(); isolate->set_allow_code_gen_callback(callback); } @@ -419,7 +440,7 @@ Extension::Extension(const char* name, ResourceConstraints::ResourceConstraints() - : max_new_space_size_(0), + : max_semi_space_size_(0), max_old_space_size_(0), max_executable_size_(0), stack_limit_(NULL), @@ -442,30 +463,31 @@ void ResourceConstraints::ConfigureDefaults(uint64_t physical_memory, #endif if (physical_memory <= low_limit) { - set_max_new_space_size(i::Heap::kMaxNewSpaceSizeLowMemoryDevice); + set_max_semi_space_size(i::Heap::kMaxSemiSpaceSizeLowMemoryDevice); set_max_old_space_size(i::Heap::kMaxOldSpaceSizeLowMemoryDevice); set_max_executable_size(i::Heap::kMaxExecutableSizeLowMemoryDevice); } else if (physical_memory <= medium_limit) { - set_max_new_space_size(i::Heap::kMaxNewSpaceSizeMediumMemoryDevice); + set_max_semi_space_size(i::Heap::kMaxSemiSpaceSizeMediumMemoryDevice); set_max_old_space_size(i::Heap::kMaxOldSpaceSizeMediumMemoryDevice); set_max_executable_size(i::Heap::kMaxExecutableSizeMediumMemoryDevice); } else if (physical_memory <= high_limit) { - set_max_new_space_size(i::Heap::kMaxNewSpaceSizeHighMemoryDevice); + set_max_semi_space_size(i::Heap::kMaxSemiSpaceSizeHighMemoryDevice); set_max_old_space_size(i::Heap::kMaxOldSpaceSizeHighMemoryDevice); set_max_executable_size(i::Heap::kMaxExecutableSizeHighMemoryDevice); } else { - set_max_new_space_size(i::Heap::kMaxNewSpaceSizeHugeMemoryDevice); + set_max_semi_space_size(i::Heap::kMaxSemiSpaceSizeHugeMemoryDevice); 
set_max_old_space_size(i::Heap::kMaxOldSpaceSizeHugeMemoryDevice); set_max_executable_size(i::Heap::kMaxExecutableSizeHugeMemoryDevice); } set_max_available_threads(i::Max(i::Min(number_of_processors, 4u), 1u)); - if (virtual_memory_limit > 0 && i::kIs64BitArch) { + if (virtual_memory_limit > 0 && i::kRequiresCodeRange) { // Reserve no more than 1/8 of the memory for the code range, but at most - // 512 MB. + // kMaximalCodeRangeSize. set_code_range_size( - i::Min(512 * i::MB, static_cast<int>(virtual_memory_limit >> 3))); + i::Min(i::kMaximalCodeRangeSize / i::MB, + static_cast<size_t>((virtual_memory_limit >> 3) / i::MB))); } } @@ -473,15 +495,15 @@ void ResourceConstraints::ConfigureDefaults(uint64_t physical_memory, bool SetResourceConstraints(Isolate* v8_isolate, ResourceConstraints* constraints) { i::Isolate* isolate = reinterpret_cast<i::Isolate*>(v8_isolate); - int new_space_size = constraints->max_new_space_size(); + int semi_space_size = constraints->max_semi_space_size(); int old_space_size = constraints->max_old_space_size(); int max_executable_size = constraints->max_executable_size(); - int code_range_size = constraints->code_range_size(); - if (new_space_size != 0 || old_space_size != 0 || max_executable_size != 0 || - code_range_size != 0) { + size_t code_range_size = constraints->code_range_size(); + if (semi_space_size != 0 || old_space_size != 0 || + max_executable_size != 0 || code_range_size != 0) { // After initialization it's too late to change Heap constraints. - ASSERT(!isolate->IsInitialized()); - bool result = isolate->heap()->ConfigureHeap(new_space_size / 2, + DCHECK(!isolate->IsInitialized()); + bool result = isolate->heap()->ConfigureHeap(semi_space_size, old_space_size, max_executable_size, code_range_size); @@ -589,7 +611,7 @@ i::Object** HandleScope::CreateHandle(i::Isolate* isolate, i::Object* value) { i::Object** HandleScope::CreateHandle(i::HeapObject* heap_object, i::Object* value) { - ASSERT(heap_object->IsHeapObject()); + DCHECK(heap_object->IsHeapObject()); return i::HandleScope::CreateHandle(heap_object->GetIsolate(), value); } @@ -692,7 +714,7 @@ void Context::SetEmbedderData(int index, v8::Handle<Value> value) { if (data.is_null()) return; i::Handle<i::Object> val = Utils::OpenHandle(*value); data->set(index, *val); - ASSERT_EQ(*Utils::OpenHandle(*value), + DCHECK_EQ(*Utils::OpenHandle(*value), *Utils::OpenHandle(*GetEmbedderData(index))); } @@ -709,7 +731,7 @@ void Context::SetAlignedPointerInEmbedderData(int index, void* value) { const char* location = "v8::Context::SetAlignedPointerInEmbedderData()"; i::Handle<i::FixedArray> data = EmbedderDataFor(this, index, true, location); data->set(index, EncodeAlignedAsSmi(value, location)); - ASSERT_EQ(value, GetAlignedPointerFromEmbedderData(index)); + DCHECK_EQ(value, GetAlignedPointerFromEmbedderData(index)); } @@ -746,8 +768,8 @@ int NeanderArray::length() { i::Object* NeanderArray::get(int offset) { - ASSERT(0 <= offset); - ASSERT(offset < length()); + DCHECK(0 <= offset); + DCHECK(offset < length()); return obj_.get(offset + 1); } @@ -828,10 +850,12 @@ void Template::SetAccessorProperty( v8::Local<FunctionTemplate> setter, v8::PropertyAttribute attribute, v8::AccessControl access_control) { + // TODO(verwaest): Remove |access_control|. 
+ DCHECK_EQ(v8::DEFAULT, access_control); i::Isolate* isolate = Utils::OpenHandle(this)->GetIsolate(); ENTER_V8(isolate); - ASSERT(!name.IsEmpty()); - ASSERT(!getter.IsEmpty() || !setter.IsEmpty()); + DCHECK(!name.IsEmpty()); + DCHECK(!getter.IsEmpty() || !setter.IsEmpty()); i::HandleScope scope(isolate); const int kSize = 5; v8::Isolate* v8_isolate = reinterpret_cast<v8::Isolate*>(isolate); @@ -839,8 +863,7 @@ void Template::SetAccessorProperty( name, getter, setter, - v8::Integer::New(v8_isolate, attribute), - v8::Integer::New(v8_isolate, access_control)}; + v8::Integer::New(v8_isolate, attribute)}; TemplateSet(isolate, this, kSize, data); } @@ -1140,7 +1163,6 @@ static i::Handle<i::AccessorInfo> SetAccessorInfoProperties( obj->set_name(*Utils::OpenHandle(*name)); if (settings & ALL_CAN_READ) obj->set_all_can_read(true); if (settings & ALL_CAN_WRITE) obj->set_all_can_write(true); - if (settings & PROHIBITS_OVERWRITING) obj->set_prohibits_overwriting(true); obj->set_property_attributes(static_cast<PropertyAttributes>(attributes)); if (!signature.IsEmpty()) { obj->set_expected_receiver_type(*Utils::OpenHandle(*signature)); @@ -1189,14 +1211,14 @@ static i::Handle<i::AccessorInfo> MakeAccessorInfo( Local<ObjectTemplate> FunctionTemplate::InstanceTemplate() { - i::Isolate* isolate = Utils::OpenHandle(this)->GetIsolate(); - if (!Utils::ApiCheck(this != NULL, + i::Handle<i::FunctionTemplateInfo> handle = Utils::OpenHandle(this, true); + if (!Utils::ApiCheck(!handle.is_null(), "v8::FunctionTemplate::InstanceTemplate()", "Reading from empty handle")) { return Local<ObjectTemplate>(); } + i::Isolate* isolate = handle->GetIsolate(); ENTER_V8(isolate); - i::Handle<i::FunctionTemplateInfo> handle = Utils::OpenHandle(this); if (handle->instance_template()->IsUndefined()) { Local<ObjectTemplate> templ = ObjectTemplate::New(isolate, ToApiHandle<FunctionTemplate>(handle)); @@ -1582,13 +1604,13 @@ int UnboundScript::GetId() { int UnboundScript::GetLineNumber(int code_pos) { - i::Handle<i::HeapObject> obj = - i::Handle<i::HeapObject>::cast(Utils::OpenHandle(this)); + i::Handle<i::SharedFunctionInfo> obj = + i::Handle<i::SharedFunctionInfo>::cast(Utils::OpenHandle(this)); i::Isolate* isolate = obj->GetIsolate(); ON_BAILOUT(isolate, "v8::UnboundScript::GetLineNumber()", return -1); LOG_API(isolate, "UnboundScript::GetLineNumber"); - if (obj->IsScript()) { - i::Handle<i::Script> script(i::Script::cast(*obj)); + if (obj->script()->IsScript()) { + i::Handle<i::Script> script(i::Script::cast(obj->script())); return i::Script::GetLineNumber(script, code_pos); } else { return -1; @@ -1597,14 +1619,14 @@ int UnboundScript::GetLineNumber(int code_pos) { Handle<Value> UnboundScript::GetScriptName() { - i::Handle<i::HeapObject> obj = - i::Handle<i::HeapObject>::cast(Utils::OpenHandle(this)); + i::Handle<i::SharedFunctionInfo> obj = + i::Handle<i::SharedFunctionInfo>::cast(Utils::OpenHandle(this)); i::Isolate* isolate = obj->GetIsolate(); ON_BAILOUT(isolate, "v8::UnboundScript::GetName()", return Handle<String>()); LOG_API(isolate, "UnboundScript::GetName"); - if (obj->IsScript()) { - i::Object* name = i::Script::cast(*obj)->name(); + if (obj->script()->IsScript()) { + i::Object* name = i::Script::cast(obj->script())->name(); return Utils::ToLocal(i::Handle<i::Object>(name, isolate)); } else { return Handle<String>(); @@ -1612,23 +1634,52 @@ Handle<Value> UnboundScript::GetScriptName() { } +Handle<Value> UnboundScript::GetSourceURL() { + i::Handle<i::SharedFunctionInfo> obj = + 
i::Handle<i::SharedFunctionInfo>::cast(Utils::OpenHandle(this)); + i::Isolate* isolate = obj->GetIsolate(); + ON_BAILOUT(isolate, "v8::UnboundScript::GetSourceURL()", + return Handle<String>()); + LOG_API(isolate, "UnboundScript::GetSourceURL"); + if (obj->script()->IsScript()) { + i::Object* url = i::Script::cast(obj->script())->source_url(); + return Utils::ToLocal(i::Handle<i::Object>(url, isolate)); + } else { + return Handle<String>(); + } +} + + +Handle<Value> UnboundScript::GetSourceMappingURL() { + i::Handle<i::SharedFunctionInfo> obj = + i::Handle<i::SharedFunctionInfo>::cast(Utils::OpenHandle(this)); + i::Isolate* isolate = obj->GetIsolate(); + ON_BAILOUT(isolate, "v8::UnboundScript::GetSourceMappingURL()", + return Handle<String>()); + LOG_API(isolate, "UnboundScript::GetSourceMappingURL"); + if (obj->script()->IsScript()) { + i::Object* url = i::Script::cast(obj->script())->source_mapping_url(); + return Utils::ToLocal(i::Handle<i::Object>(url, isolate)); + } else { + return Handle<String>(); + } +} + + Local<Value> Script::Run() { + i::Handle<i::Object> obj = Utils::OpenHandle(this, true); // If execution is terminating, Compile(..)->Run() requires this // check. - if (this == NULL) return Local<Value>(); - i::Handle<i::HeapObject> obj = - i::Handle<i::HeapObject>::cast(Utils::OpenHandle(this)); - i::Isolate* isolate = obj->GetIsolate(); + if (obj.is_null()) return Local<Value>(); + i::Isolate* isolate = i::Handle<i::HeapObject>::cast(obj)->GetIsolate(); ON_BAILOUT(isolate, "v8::Script::Run()", return Local<Value>()); LOG_API(isolate, "Script::Run"); ENTER_V8(isolate); - i::Logger::TimerEventScope timer_scope( - isolate, i::Logger::TimerEventScope::v8_execute); + i::TimerEventScope<i::TimerEventExecute> timer_scope(isolate); i::HandleScope scope(isolate); i::Handle<i::JSFunction> fun = i::Handle<i::JSFunction>::cast(obj); EXCEPTION_PREAMBLE(isolate); - i::Handle<i::Object> receiver( - isolate->context()->global_proxy(), isolate); + i::Handle<i::Object> receiver(isolate->global_proxy(), isolate); i::Handle<i::Object> result; has_pending_exception = !i::Execution::Call( isolate, fun, receiver, 0, NULL).ToHandle(&result); @@ -1648,43 +1699,25 @@ Local<UnboundScript> ScriptCompiler::CompileUnbound( Isolate* v8_isolate, Source* source, CompileOptions options) { - i::ScriptData* script_data_impl = NULL; - i::CachedDataMode cached_data_mode = i::NO_CACHED_DATA; i::Isolate* isolate = reinterpret_cast<i::Isolate*>(v8_isolate); ON_BAILOUT(isolate, "v8::ScriptCompiler::CompileUnbound()", return Local<UnboundScript>()); - if (options & kProduceDataToCache) { - cached_data_mode = i::PRODUCE_CACHED_DATA; - ASSERT(source->cached_data == NULL); - if (source->cached_data) { - // Asked to produce cached data even though there is some already -> not - // good. Fail the compilation. - EXCEPTION_PREAMBLE(isolate); - i::Handle<i::Object> result = isolate->factory()->NewSyntaxError( - "invalid_cached_data", isolate->factory()->NewJSArray(0)); - isolate->Throw(*result); - isolate->ReportPendingMessages(); - has_pending_exception = true; - EXCEPTION_BAILOUT_CHECK(isolate, Local<UnboundScript>()); - } - } else if (source->cached_data) { - cached_data_mode = i::CONSUME_CACHED_DATA; - // ScriptData takes care of aligning, in case the data is not aligned - // correctly. - script_data_impl = i::ScriptData::New( - reinterpret_cast<const char*>(source->cached_data->data), - source->cached_data->length); - // If the cached data is not valid, fail the compilation. 
- if (script_data_impl == NULL || !script_data_impl->SanityCheck()) { - EXCEPTION_PREAMBLE(isolate); - i::Handle<i::Object> result = isolate->factory()->NewSyntaxError( - "invalid_cached_data", isolate->factory()->NewJSArray(0)); - isolate->Throw(*result); - isolate->ReportPendingMessages(); - delete script_data_impl; - has_pending_exception = true; - EXCEPTION_BAILOUT_CHECK(isolate, Local<UnboundScript>()); - } + + // Support the old API for a transition period: + // - kProduceDataToCache -> kProduceParserCache + // - kNoCompileOptions + cached_data != NULL -> kConsumeParserCache + if (options == kProduceDataToCache) { + options = kProduceParserCache; + } else if (options == kNoCompileOptions && source->cached_data) { + options = kConsumeParserCache; + } + + i::ScriptData* script_data = NULL; + if (options == kConsumeParserCache || options == kConsumeCodeCache) { + DCHECK(source->cached_data); + // ScriptData takes care of pointer-aligning the data. + script_data = new i::ScriptData(source->cached_data->data, + source->cached_data->length); } i::Handle<i::String> str = Utils::OpenHandle(*(source->source_string)); @@ -1712,29 +1745,30 @@ Local<UnboundScript> ScriptCompiler::CompileUnbound( source->resource_is_shared_cross_origin == v8::True(v8_isolate); } EXCEPTION_PREAMBLE(isolate); - i::Handle<i::SharedFunctionInfo> result = - i::Compiler::CompileScript(str, - name_obj, - line_offset, - column_offset, - is_shared_cross_origin, - isolate->global_context(), - NULL, - &script_data_impl, - cached_data_mode, - i::NOT_NATIVES_CODE); + i::Handle<i::SharedFunctionInfo> result = i::Compiler::CompileScript( + str, name_obj, line_offset, column_offset, is_shared_cross_origin, + isolate->global_context(), NULL, &script_data, options, + i::NOT_NATIVES_CODE); has_pending_exception = result.is_null(); + if (has_pending_exception && script_data != NULL) { + // This case won't happen during normal operation; we have compiled + // successfully and produced cached data, but the second compilation + // of the same source code fails. + delete script_data; + script_data = NULL; + } EXCEPTION_BAILOUT_CHECK(isolate, Local<UnboundScript>()); raw_result = *result; + + if ((options == kProduceParserCache || options == kProduceCodeCache) && + script_data != NULL) { - if ((options & kProduceDataToCache) && script_data_impl != NULL) { - // script_data_impl now contains the data that was generated. source will + // script_data now contains the data that was generated. source will // take the ownership. 
source->cached_data = new CachedData( - reinterpret_cast<const uint8_t*>(script_data_impl->Data()), - script_data_impl->Length(), CachedData::BufferOwned); - script_data_impl->owns_store_ = false; + script_data->data(), script_data->length(), CachedData::BufferOwned); + script_data->ReleaseDataOwnership(); } - delete script_data_impl; + delete script_data; } i::Handle<i::SharedFunctionInfo> result(raw_result, isolate); return ToApiHandle<UnboundScript>(result); @@ -1746,12 +1780,10 @@ Local<Script> ScriptCompiler::Compile( Source* source, CompileOptions options) { i::Isolate* isolate = reinterpret_cast<i::Isolate*>(v8_isolate); - ON_BAILOUT(isolate, "v8::ScriptCompiler::Compile()", - return Local<Script>()); + ON_BAILOUT(isolate, "v8::ScriptCompiler::Compile()", return Local<Script>()); LOG_API(isolate, "ScriptCompiler::CompileBound()"); ENTER_V8(isolate); - Local<UnboundScript> generic = - CompileUnbound(v8_isolate, source, options); + Local<UnboundScript> generic = CompileUnbound(v8_isolate, source, options); if (generic.IsEmpty()) return Local<Script>(); return generic->BindToCurrentContext(); } @@ -1785,19 +1817,23 @@ Local<Script> Script::Compile(v8::Handle<String> source, v8::TryCatch::TryCatch() : isolate_(i::Isolate::Current()), - next_(isolate_->try_catch_handler_address()), + next_(isolate_->try_catch_handler()), is_verbose_(false), can_continue_(true), capture_message_(true), rethrow_(false), has_terminated_(false) { - Reset(); + ResetInternal(); + // Special handling for simulators which have a separate JS stack. + js_stack_comparable_address_ = + reinterpret_cast<void*>(v8::internal::SimulatorStack::RegisterCTryCatch( + GetCurrentStackPosition())); isolate_->RegisterTryCatchHandler(this); } v8::TryCatch::~TryCatch() { - ASSERT(isolate_ == i::Isolate::Current()); + DCHECK(isolate_ == i::Isolate::Current()); if (rethrow_) { v8::Isolate* isolate = reinterpret_cast<Isolate*>(isolate_); v8::HandleScope scope(isolate); @@ -1811,10 +1847,18 @@ v8::TryCatch::~TryCatch() { isolate_->RestorePendingMessageFromTryCatch(this); } isolate_->UnregisterTryCatchHandler(this); + v8::internal::SimulatorStack::UnregisterCTryCatch(); reinterpret_cast<Isolate*>(isolate_)->ThrowException(exc); - ASSERT(!isolate_->thread_local_top()->rethrowing_message_); + DCHECK(!isolate_->thread_local_top()->rethrowing_message_); } else { + if (HasCaught() && isolate_->has_scheduled_exception()) { + // If an exception was caught but is still scheduled because no API call + // promoted it, then it is canceled to prevent it from being propagated. + // Note that this will not cancel termination exceptions. + isolate_->CancelScheduledExceptionFromTryCatch(this); + } isolate_->UnregisterTryCatchHandler(this); + v8::internal::SimulatorStack::UnregisterCTryCatch(); } } @@ -1842,7 +1886,7 @@ v8::Handle<v8::Value> v8::TryCatch::ReThrow() { v8::Local<Value> v8::TryCatch::Exception() const { - ASSERT(isolate_ == i::Isolate::Current()); + DCHECK(isolate_ == i::Isolate::Current()); if (HasCaught()) { // Check for out of memory exception. 
i::Object* exception = reinterpret_cast<i::Object*>(exception_); @@ -1854,14 +1898,18 @@ v8::Local<Value> v8::TryCatch::Exception() const { v8::Local<Value> v8::TryCatch::StackTrace() const { - ASSERT(isolate_ == i::Isolate::Current()); + DCHECK(isolate_ == i::Isolate::Current()); if (HasCaught()) { i::Object* raw_obj = reinterpret_cast<i::Object*>(exception_); if (!raw_obj->IsJSObject()) return v8::Local<Value>(); i::HandleScope scope(isolate_); i::Handle<i::JSObject> obj(i::JSObject::cast(raw_obj), isolate_); i::Handle<i::String> name = isolate_->factory()->stack_string(); - if (!i::JSReceiver::HasProperty(obj, name)) return v8::Local<Value>(); + EXCEPTION_PREAMBLE(isolate_); + Maybe<bool> maybe = i::JSReceiver::HasProperty(obj, name); + has_pending_exception = !maybe.has_value; + EXCEPTION_BAILOUT_CHECK(isolate_, v8::Local<Value>()); + if (!maybe.value) return v8::Local<Value>(); i::Handle<i::Object> value; if (!i::Object::GetProperty(obj, name).ToHandle(&value)) { return v8::Local<Value>(); @@ -1874,9 +1922,9 @@ v8::Local<Value> v8::TryCatch::StackTrace() const { v8::Local<v8::Message> v8::TryCatch::Message() const { - ASSERT(isolate_ == i::Isolate::Current()); + DCHECK(isolate_ == i::Isolate::Current()); i::Object* message = reinterpret_cast<i::Object*>(message_obj_); - ASSERT(message->IsJSMessageObject() || message->IsTheHole()); + DCHECK(message->IsJSMessageObject() || message->IsTheHole()); if (HasCaught() && !message->IsTheHole()) { return v8::Utils::MessageToLocal(i::Handle<i::Object>(message, isolate_)); } else { @@ -1886,7 +1934,18 @@ v8::Local<v8::Message> v8::TryCatch::Message() const { void v8::TryCatch::Reset() { - ASSERT(isolate_ == i::Isolate::Current()); + DCHECK(isolate_ == i::Isolate::Current()); + if (!rethrow_ && HasCaught() && isolate_->has_scheduled_exception()) { + // If an exception was caught but is still scheduled because no API call + // promoted it, then it is canceled to prevent it from being propagated. + // Note that this will not cancel termination exceptions. + isolate_->CancelScheduledExceptionFromTryCatch(this); + } + ResetInternal(); +} + + +void v8::TryCatch::ResetInternal() { i::Object* the_hole = isolate_->heap()->the_hole_value(); exception_ = the_hole; message_obj_ = the_hole; @@ -1921,19 +1980,30 @@ Local<String> Message::Get() const { } -v8::Handle<Value> Message::GetScriptResourceName() const { +ScriptOrigin Message::GetScriptOrigin() const { i::Isolate* isolate = Utils::OpenHandle(this)->GetIsolate(); - ENTER_V8(isolate); - EscapableHandleScope scope(reinterpret_cast<Isolate*>(isolate)); i::Handle<i::JSMessageObject> message = i::Handle<i::JSMessageObject>::cast(Utils::OpenHandle(this)); - // Return this.script.name. 
- i::Handle<i::JSValue> script = - i::Handle<i::JSValue>::cast(i::Handle<i::Object>(message->script(), - isolate)); - i::Handle<i::Object> resource_name(i::Script::cast(script->value())->name(), - isolate); - return scope.Escape(Utils::ToLocal(resource_name)); + i::Handle<i::Object> script_wrapper = + i::Handle<i::Object>(message->script(), isolate); + i::Handle<i::JSValue> script_value = + i::Handle<i::JSValue>::cast(script_wrapper); + i::Handle<i::Script> script(i::Script::cast(script_value->value())); + i::Handle<i::Object> scriptName(i::Script::GetNameOrSourceURL(script)); + v8::Isolate* v8_isolate = + reinterpret_cast<v8::Isolate*>(script->GetIsolate()); + v8::ScriptOrigin origin( + Utils::ToLocal(scriptName), + v8::Integer::New(v8_isolate, script->line_offset()->value()), + v8::Integer::New(v8_isolate, script->column_offset()->value()), + Handle<Boolean>(), + v8::Integer::New(v8_isolate, script->id()->value())); + return origin; +} + + +v8::Handle<Value> Message::GetScriptResourceName() const { + return GetScriptOrigin().ResourceName(); } @@ -2013,6 +2083,7 @@ int Message::GetEndPosition() const { int Message::GetStartColumn() const { i::Isolate* isolate = Utils::OpenHandle(this)->GetIsolate(); + ON_BAILOUT(isolate, "v8::Message::GetStartColumn()", return kNoColumnInfo); ENTER_V8(isolate); i::HandleScope scope(isolate); i::Handle<i::JSObject> data_obj = Utils::OpenHandle(this); @@ -2027,6 +2098,7 @@ int Message::GetStartColumn() const { int Message::GetEndColumn() const { i::Isolate* isolate = Utils::OpenHandle(this)->GetIsolate(); + ON_BAILOUT(isolate, "v8::Message::GetEndColumn()", return kNoColumnInfo); ENTER_V8(isolate); i::HandleScope scope(isolate); i::Handle<i::JSObject> data_obj = Utils::OpenHandle(this); @@ -2115,6 +2187,9 @@ Local<StackTrace> StackTrace::CurrentStackTrace( StackTraceOptions options) { i::Isolate* i_isolate = reinterpret_cast<i::Isolate*>(isolate); ENTER_V8(i_isolate); + // TODO(dcarney): remove when ScriptDebugServer is fixed. + options = static_cast<StackTraceOptions>( + static_cast<int>(options) | kExposeFramesAcrossSecurityOrigins); i::Handle<i::JSArray> stackTrace = i_isolate->CaptureCurrentStackTrace(frame_limit, options); return Utils::StackTraceToLocal(stackTrace); @@ -2123,109 +2198,77 @@ Local<StackTrace> StackTrace::CurrentStackTrace( // --- S t a c k F r a m e --- -int StackFrame::GetLineNumber() const { - i::Isolate* isolate = Utils::OpenHandle(this)->GetIsolate(); +static int getIntProperty(const StackFrame* f, const char* propertyName, + int defaultValue) { + i::Isolate* isolate = Utils::OpenHandle(f)->GetIsolate(); ENTER_V8(isolate); i::HandleScope scope(isolate); - i::Handle<i::JSObject> self = Utils::OpenHandle(this); - i::Handle<i::Object> line = i::Object::GetProperty( - isolate, self, "lineNumber").ToHandleChecked(); - if (!line->IsSmi()) { - return Message::kNoLineNumberInfo; - } - return i::Smi::cast(*line)->value(); + i::Handle<i::JSObject> self = Utils::OpenHandle(f); + i::Handle<i::Object> obj = + i::Object::GetProperty(isolate, self, propertyName).ToHandleChecked(); + return obj->IsSmi() ? 
i::Smi::cast(*obj)->value() : defaultValue; +} + + +int StackFrame::GetLineNumber() const { + return getIntProperty(this, "lineNumber", Message::kNoLineNumberInfo); } int StackFrame::GetColumn() const { - i::Isolate* isolate = Utils::OpenHandle(this)->GetIsolate(); - ENTER_V8(isolate); - i::HandleScope scope(isolate); - i::Handle<i::JSObject> self = Utils::OpenHandle(this); - i::Handle<i::Object> column = i::Object::GetProperty( - isolate, self, "column").ToHandleChecked(); - if (!column->IsSmi()) { - return Message::kNoColumnInfo; - } - return i::Smi::cast(*column)->value(); + return getIntProperty(this, "column", Message::kNoColumnInfo); } int StackFrame::GetScriptId() const { - i::Isolate* isolate = Utils::OpenHandle(this)->GetIsolate(); - ENTER_V8(isolate); - i::HandleScope scope(isolate); - i::Handle<i::JSObject> self = Utils::OpenHandle(this); - i::Handle<i::Object> scriptId = i::Object::GetProperty( - isolate, self, "scriptId").ToHandleChecked(); - if (!scriptId->IsSmi()) { - return Message::kNoScriptIdInfo; - } - return i::Smi::cast(*scriptId)->value(); + return getIntProperty(this, "scriptId", Message::kNoScriptIdInfo); } -Local<String> StackFrame::GetScriptName() const { - i::Isolate* isolate = Utils::OpenHandle(this)->GetIsolate(); +static Local<String> getStringProperty(const StackFrame* f, + const char* propertyName) { + i::Isolate* isolate = Utils::OpenHandle(f)->GetIsolate(); ENTER_V8(isolate); EscapableHandleScope scope(reinterpret_cast<Isolate*>(isolate)); - i::Handle<i::JSObject> self = Utils::OpenHandle(this); - i::Handle<i::Object> name = i::Object::GetProperty( - isolate, self, "scriptName").ToHandleChecked(); - if (!name->IsString()) { - return Local<String>(); - } - return scope.Escape(Local<String>::Cast(Utils::ToLocal(name))); + i::Handle<i::JSObject> self = Utils::OpenHandle(f); + i::Handle<i::Object> obj = + i::Object::GetProperty(isolate, self, propertyName).ToHandleChecked(); + return obj->IsString() + ? 
scope.Escape(Local<String>::Cast(Utils::ToLocal(obj))) + : Local<String>(); +} + + +Local<String> StackFrame::GetScriptName() const { + return getStringProperty(this, "scriptName"); } Local<String> StackFrame::GetScriptNameOrSourceURL() const { - i::Isolate* isolate = Utils::OpenHandle(this)->GetIsolate(); - ENTER_V8(isolate); - EscapableHandleScope scope(reinterpret_cast<Isolate*>(isolate)); - i::Handle<i::JSObject> self = Utils::OpenHandle(this); - i::Handle<i::Object> name = i::Object::GetProperty( - isolate, self, "scriptNameOrSourceURL").ToHandleChecked(); - if (!name->IsString()) { - return Local<String>(); - } - return scope.Escape(Local<String>::Cast(Utils::ToLocal(name))); + return getStringProperty(this, "scriptNameOrSourceURL"); } Local<String> StackFrame::GetFunctionName() const { - i::Isolate* isolate = Utils::OpenHandle(this)->GetIsolate(); - ENTER_V8(isolate); - EscapableHandleScope scope(reinterpret_cast<Isolate*>(isolate)); - i::Handle<i::JSObject> self = Utils::OpenHandle(this); - i::Handle<i::Object> name = i::Object::GetProperty( - isolate, self, "functionName").ToHandleChecked(); - if (!name->IsString()) { - return Local<String>(); - } - return scope.Escape(Local<String>::Cast(Utils::ToLocal(name))); + return getStringProperty(this, "functionName"); } -bool StackFrame::IsEval() const { - i::Isolate* isolate = Utils::OpenHandle(this)->GetIsolate(); +static bool getBoolProperty(const StackFrame* f, const char* propertyName) { + i::Isolate* isolate = Utils::OpenHandle(f)->GetIsolate(); ENTER_V8(isolate); i::HandleScope scope(isolate); - i::Handle<i::JSObject> self = Utils::OpenHandle(this); - i::Handle<i::Object> is_eval = i::Object::GetProperty( - isolate, self, "isEval").ToHandleChecked(); - return is_eval->IsTrue(); + i::Handle<i::JSObject> self = Utils::OpenHandle(f); + i::Handle<i::Object> obj = + i::Object::GetProperty(isolate, self, propertyName).ToHandleChecked(); + return obj->IsTrue(); } +bool StackFrame::IsEval() const { return getBoolProperty(this, "isEval"); } + bool StackFrame::IsConstructor() const { - i::Isolate* isolate = Utils::OpenHandle(this)->GetIsolate(); - ENTER_V8(isolate); - i::HandleScope scope(isolate); - i::Handle<i::JSObject> self = Utils::OpenHandle(this); - i::Handle<i::Object> is_constructor = i::Object::GetProperty( - isolate, self, "isConstructor").ToHandleChecked(); - return is_constructor->IsTrue(); + return getBoolProperty(this, "isConstructor"); } @@ -2254,14 +2297,14 @@ Local<Value> JSON::Parse(Local<String> json_string) { bool Value::FullIsUndefined() const { bool result = Utils::OpenHandle(this)->IsUndefined(); - ASSERT_EQ(result, QuickIsUndefined()); + DCHECK_EQ(result, QuickIsUndefined()); return result; } bool Value::FullIsNull() const { bool result = Utils::OpenHandle(this)->IsNull(); - ASSERT_EQ(result, QuickIsNull()); + DCHECK_EQ(result, QuickIsNull()); return result; } @@ -2283,7 +2326,7 @@ bool Value::IsFunction() const { bool Value::FullIsString() const { bool result = Utils::OpenHandle(this)->IsString(); - ASSERT_EQ(result, QuickIsString()); + DCHECK_EQ(result, QuickIsString()); return result; } @@ -2771,7 +2814,7 @@ double Value::NumberValue() const { EXCEPTION_PREAMBLE(isolate); has_pending_exception = !i::Execution::ToNumber( isolate, obj).ToHandle(&num); - EXCEPTION_BAILOUT_CHECK(isolate, i::OS::nan_value()); + EXCEPTION_BAILOUT_CHECK(isolate, base::OS::nan_value()); } return num->Number(); } @@ -2886,14 +2929,14 @@ int32_t Value::Int32Value() const { bool Value::Equals(Handle<Value> that) const { i::Isolate* isolate 
= i::Isolate::Current(); - if (!Utils::ApiCheck(this != NULL && !that.IsEmpty(), + i::Handle<i::Object> obj = Utils::OpenHandle(this, true); + if (!Utils::ApiCheck(!obj.is_null() && !that.IsEmpty(), "v8::Value::Equals()", "Reading from empty handle")) { return false; } LOG_API(isolate, "Equals"); ENTER_V8(isolate); - i::Handle<i::Object> obj = Utils::OpenHandle(this); i::Handle<i::Object> other = Utils::OpenHandle(*that); // If both obj and other are JSObjects, we'd better compare by identity // immediately when going into JS builtin. The reason is Invoke @@ -2913,13 +2956,13 @@ bool Value::Equals(Handle<Value> that) const { bool Value::StrictEquals(Handle<Value> that) const { i::Isolate* isolate = i::Isolate::Current(); - if (!Utils::ApiCheck(this != NULL && !that.IsEmpty(), + i::Handle<i::Object> obj = Utils::OpenHandle(this, true); + if (!Utils::ApiCheck(!obj.is_null() && !that.IsEmpty(), "v8::Value::StrictEquals()", "Reading from empty handle")) { return false; } LOG_API(isolate, "StrictEquals"); - i::Handle<i::Object> obj = Utils::OpenHandle(this); i::Handle<i::Object> other = Utils::OpenHandle(*that); // Must check HeapNumber first, since NaN !== NaN. if (obj->IsHeapNumber()) { @@ -2945,12 +2988,12 @@ bool Value::StrictEquals(Handle<Value> that) const { bool Value::SameValue(Handle<Value> that) const { - if (!Utils::ApiCheck(this != NULL && !that.IsEmpty(), + i::Handle<i::Object> obj = Utils::OpenHandle(this, true); + if (!Utils::ApiCheck(!obj.is_null() && !that.IsEmpty(), "v8::Value::SameValue()", "Reading from empty handle")) { return false; } - i::Handle<i::Object> obj = Utils::OpenHandle(this); i::Handle<i::Object> other = Utils::OpenHandle(*that); return obj->SameValue(*other); } @@ -2978,8 +3021,7 @@ uint32_t Value::Uint32Value() const { } -bool v8::Object::Set(v8::Handle<Value> key, v8::Handle<Value> value, - v8::PropertyAttribute attribs) { +bool v8::Object::Set(v8::Handle<Value> key, v8::Handle<Value> value) { i::Isolate* isolate = Utils::OpenHandle(this)->GetIsolate(); ON_BAILOUT(isolate, "v8::Object::Set()", return false); ENTER_V8(isolate); @@ -2988,13 +3030,9 @@ bool v8::Object::Set(v8::Handle<Value> key, v8::Handle<Value> value, i::Handle<i::Object> key_obj = Utils::OpenHandle(*key); i::Handle<i::Object> value_obj = Utils::OpenHandle(*value); EXCEPTION_PREAMBLE(isolate); - has_pending_exception = i::Runtime::SetObjectProperty( - isolate, - self, - key_obj, - value_obj, - static_cast<PropertyAttributes>(attribs), - i::SLOPPY).is_null(); + has_pending_exception = + i::Runtime::SetObjectProperty(isolate, self, key_obj, value_obj, + i::SLOPPY).is_null(); EXCEPTION_BAILOUT_CHECK(isolate, false); return true; } @@ -3026,7 +3064,7 @@ bool v8::Object::ForceSet(v8::Handle<Value> key, i::Handle<i::Object> key_obj = Utils::OpenHandle(*key); i::Handle<i::Object> value_obj = Utils::OpenHandle(*value); EXCEPTION_PREAMBLE(isolate); - has_pending_exception = i::Runtime::ForceSetObjectProperty( + has_pending_exception = i::Runtime::DefineObjectProperty( self, key_obj, value_obj, @@ -3037,8 +3075,8 @@ bool v8::Object::ForceSet(v8::Handle<Value> key, bool v8::Object::SetPrivate(v8::Handle<Private> key, v8::Handle<Value> value) { - return Set(v8::Handle<Value>(reinterpret_cast<Value*>(*key)), - value, DontEnum); + return ForceSet(v8::Handle<Value>(reinterpret_cast<Value*>(*key)), + value, DontEnum); } @@ -3103,7 +3141,7 @@ Local<Value> v8::Object::GetPrivate(v8::Handle<Private> key) { PropertyAttribute v8::Object::GetPropertyAttributes(v8::Handle<Value> key) { i::Isolate* isolate = 
Utils::OpenHandle(this)->GetIsolate(); - ON_BAILOUT(isolate, "v8::Object::GetPropertyAttribute()", + ON_BAILOUT(isolate, "v8::Object::GetPropertyAttributes()", return static_cast<PropertyAttribute>(NONE)); ENTER_V8(isolate); i::HandleScope scope(isolate); @@ -3116,21 +3154,43 @@ PropertyAttribute v8::Object::GetPropertyAttributes(v8::Handle<Value> key) { EXCEPTION_BAILOUT_CHECK(isolate, static_cast<PropertyAttribute>(NONE)); } i::Handle<i::Name> key_name = i::Handle<i::Name>::cast(key_obj); - PropertyAttributes result = - i::JSReceiver::GetPropertyAttribute(self, key_name); - if (result == ABSENT) return static_cast<PropertyAttribute>(NONE); - return static_cast<PropertyAttribute>(result); + EXCEPTION_PREAMBLE(isolate); + Maybe<PropertyAttributes> result = + i::JSReceiver::GetPropertyAttributes(self, key_name); + has_pending_exception = !result.has_value; + EXCEPTION_BAILOUT_CHECK(isolate, static_cast<PropertyAttribute>(NONE)); + if (result.value == ABSENT) return static_cast<PropertyAttribute>(NONE); + return static_cast<PropertyAttribute>(result.value); +} + + +Local<Value> v8::Object::GetOwnPropertyDescriptor(Local<String> key) { + i::Isolate* isolate = Utils::OpenHandle(this)->GetIsolate(); + ON_BAILOUT(isolate, "v8::Object::GetOwnPropertyDescriptor()", + return Local<Value>()); + ENTER_V8(isolate); + i::Handle<i::JSObject> obj = Utils::OpenHandle(this); + i::Handle<i::Name> key_name = Utils::OpenHandle(*key); + i::Handle<i::Object> args[] = { obj, key_name }; + EXCEPTION_PREAMBLE(isolate); + i::Handle<i::Object> result; + has_pending_exception = !CallV8HeapFunction( + "ObjectGetOwnPropertyDescriptor", + isolate->factory()->undefined_value(), + ARRAY_SIZE(args), + args).ToHandle(&result); + EXCEPTION_BAILOUT_CHECK(isolate, Local<Value>()); + return Utils::ToLocal(result); } Local<Value> v8::Object::GetPrototype() { i::Isolate* isolate = Utils::OpenHandle(this)->GetIsolate(); - ON_BAILOUT(isolate, "v8::Object::GetPrototype()", - return Local<v8::Value>()); + ON_BAILOUT(isolate, "v8::Object::GetPrototype()", return Local<v8::Value>()); ENTER_V8(isolate); i::Handle<i::Object> self = Utils::OpenHandle(this); - i::Handle<i::Object> result(self->GetPrototype(isolate), isolate); - return Utils::ToLocal(result); + i::PrototypeIterator iter(isolate, self); + return Utils::ToLocal(i::PrototypeIterator::GetCurrent(iter)); } @@ -3144,8 +3204,8 @@ bool v8::Object::SetPrototype(Handle<Value> value) { // to propagate outside. 
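// Illustrative sketch (not part of this diff): the hunks above drop the
// PropertyAttribute parameter from Object::Set(), reroute ForceSet() through
// Runtime::DefineObjectProperty(), and add GetOwnPropertyDescriptor(). The
// object, key, and function name below are hypothetical.
#include <v8.h>
using namespace v8;

static void DescribeProperty(Isolate* isolate, Local<Object> obj) {
  HandleScope scope(isolate);
  Local<String> key = String::NewFromUtf8(isolate, "answer");
  obj->Set(key, Integer::New(isolate, 42));  // plain [[Set]], no attributes
  obj->ForceSet(key, Integer::New(isolate, 42),
                static_cast<PropertyAttribute>(ReadOnly | DontEnum));
  // New in this change: returns a JS-style property descriptor object,
  // i.e. {value, writable, enumerable, configurable} or the accessor form.
  Local<Value> descriptor = obj->GetOwnPropertyDescriptor(key);
  (void)descriptor;
}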
TryCatch try_catch; EXCEPTION_PREAMBLE(isolate); - i::MaybeHandle<i::Object> result = i::JSObject::SetPrototype( - self, value_obj); + i::MaybeHandle<i::Object> result = + i::JSObject::SetPrototype(self, value_obj, false); has_pending_exception = result.is_null(); EXCEPTION_BAILOUT_CHECK(isolate, false); return true; @@ -3159,14 +3219,17 @@ Local<Object> v8::Object::FindInstanceInPrototypeChain( "v8::Object::FindInstanceInPrototypeChain()", return Local<v8::Object>()); ENTER_V8(isolate); - i::JSObject* object = *Utils::OpenHandle(this); + i::PrototypeIterator iter(isolate, *Utils::OpenHandle(this), + i::PrototypeIterator::START_AT_RECEIVER); i::FunctionTemplateInfo* tmpl_info = *Utils::OpenHandle(*tmpl); - while (!tmpl_info->IsTemplateFor(object)) { - i::Object* prototype = object->GetPrototype(); - if (!prototype->IsJSObject()) return Local<Object>(); - object = i::JSObject::cast(prototype); + while (!tmpl_info->IsTemplateFor(iter.GetCurrent())) { + iter.Advance(); + if (iter.IsAtEnd()) { + return Local<Object>(); + } } - return Utils::ToLocal(i::Handle<i::JSObject>(object)); + return Utils::ToLocal( + i::handle(i::JSObject::cast(iter.GetCurrent()), isolate)); } @@ -3202,7 +3265,7 @@ Local<Array> v8::Object::GetOwnPropertyNames() { EXCEPTION_PREAMBLE(isolate); i::Handle<i::FixedArray> value; has_pending_exception = !i::JSReceiver::GetKeys( - self, i::JSReceiver::LOCAL_ONLY).ToHandle(&value); + self, i::JSReceiver::OWN_ONLY).ToHandle(&value); EXCEPTION_BAILOUT_CHECK(isolate, Local<v8::Array>()); // Because we use caching to speed up enumeration it is important // to never change the result of the basic enumeration function so @@ -3249,7 +3312,7 @@ Local<String> v8::Object::ObjectProtoToString() { // Write prefix. char* ptr = buf.start(); - i::OS::MemCopy(ptr, prefix, prefix_len * v8::internal::kCharSize); + i::MemCopy(ptr, prefix, prefix_len * v8::internal::kCharSize); ptr += prefix_len; // Write real content. @@ -3257,7 +3320,7 @@ Local<String> v8::Object::ObjectProtoToString() { ptr += str_len; // Write postfix. - i::OS::MemCopy(ptr, postfix, postfix_len * v8::internal::kCharSize); + i::MemCopy(ptr, postfix, postfix_len * v8::internal::kCharSize); // Copy the buffer into a heap-allocated string and return it. Local<String> result = v8::String::NewFromUtf8( @@ -3268,17 +3331,6 @@ } -Local<Value> v8::Object::GetConstructor() { - i::Isolate* isolate = Utils::OpenHandle(this)->GetIsolate(); - ON_BAILOUT(isolate, "v8::Object::GetConstructor()", - return Local<v8::Function>()); - ENTER_V8(isolate); - i::Handle<i::JSObject> self = Utils::OpenHandle(this); - i::Handle<i::Object> constructor(self->GetConstructor(), isolate); - return Utils::ToLocal(constructor); -} - - Local<String> v8::Object::GetConstructorName() { i::Isolate* isolate = Utils::OpenHandle(this)->GetIsolate(); ON_BAILOUT(isolate, "v8::Object::GetConstructorName()", @@ -3327,6 +3379,8 @@ bool v8::Object::Has(v8::Handle<Value> key) { bool v8::Object::HasPrivate(v8::Handle<Private> key) { + // TODO(rossberg): this should use HasOwnProperty, but we'd need to + // generalise that to a (not yet existent) Name argument first.
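// Illustrative sketch (not part of this diff): FindInstanceInPrototypeChain()
// above now walks the chain with i::PrototypeIterator, starting at the
// receiver, instead of raw GetPrototype() calls; the embedder-visible
// contract is unchanged. A typical call site, names hypothetical:
#include <v8.h>
using namespace v8;

static Local<Object> UnwrapToTemplateInstance(Local<Object> obj,
                                              Handle<FunctionTemplate> tmpl) {
  // First object in obj's prototype chain (the receiver included) that was
  // instantiated from tmpl, or an empty handle if none matches.
  return obj->FindInstanceInPrototypeChain(tmpl);
}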
return Has(v8::Handle<Value>(reinterpret_cast<Value*>(*key))); } @@ -3352,7 +3406,11 @@ bool v8::Object::Has(uint32_t index) { i::Isolate* isolate = Utils::OpenHandle(this)->GetIsolate(); ON_BAILOUT(isolate, "v8::Object::HasProperty()", return false); i::Handle<i::JSObject> self = Utils::OpenHandle(this); - return i::JSReceiver::HasElement(self, index); + EXCEPTION_PREAMBLE(isolate); + Maybe<bool> maybe = i::JSReceiver::HasElement(self, index); + has_pending_exception = !maybe.has_value; + EXCEPTION_BAILOUT_CHECK(isolate, false); + return maybe.value; } @@ -3379,7 +3437,7 @@ static inline bool ObjectSetAccessor(Object* obj, i::JSObject::SetAccessor(Utils::OpenHandle(obj), info), false); if (result->IsUndefined()) return false; - if (fast) i::JSObject::TransformToFastProperties(Utils::OpenHandle(obj), 0); + if (fast) i::JSObject::MigrateSlowToFast(Utils::OpenHandle(obj), 0); return true; } @@ -3410,6 +3468,8 @@ void Object::SetAccessorProperty(Local<String> name, Handle<Function> setter, PropertyAttribute attribute, AccessControl settings) { + // TODO(verwaest): Remove |settings|. + DCHECK_EQ(v8::DEFAULT, settings); i::Isolate* isolate = Utils::OpenHandle(this)->GetIsolate(); ON_BAILOUT(isolate, "v8::Object::SetAccessorProperty()", return); ENTER_V8(isolate); @@ -3421,8 +3481,7 @@ void Object::SetAccessorProperty(Local<String> name, v8::Utils::OpenHandle(*name), getter_i, setter_i, - static_cast<PropertyAttributes>(attribute), - settings); + static_cast<PropertyAttributes>(attribute)); } @@ -3430,8 +3489,12 @@ bool v8::Object::HasOwnProperty(Handle<String> key) { i::Isolate* isolate = Utils::OpenHandle(this)->GetIsolate(); ON_BAILOUT(isolate, "v8::Object::HasOwnProperty()", return false); - return i::JSReceiver::HasLocalProperty( - Utils::OpenHandle(this), Utils::OpenHandle(*key)); + EXCEPTION_PREAMBLE(isolate); + Maybe<bool> maybe = i::JSReceiver::HasOwnProperty(Utils::OpenHandle(this), + Utils::OpenHandle(*key)); + has_pending_exception = !maybe.has_value; + EXCEPTION_BAILOUT_CHECK(isolate, false); + return maybe.value; } @@ -3439,8 +3502,12 @@ bool v8::Object::HasRealNamedProperty(Handle<String> key) { i::Isolate* isolate = Utils::OpenHandle(this)->GetIsolate(); ON_BAILOUT(isolate, "v8::Object::HasRealNamedProperty()", return false); - return i::JSObject::HasRealNamedProperty(Utils::OpenHandle(this), - Utils::OpenHandle(*key)); + EXCEPTION_PREAMBLE(isolate); + Maybe<bool> maybe = i::JSObject::HasRealNamedProperty( + Utils::OpenHandle(this), Utils::OpenHandle(*key)); + has_pending_exception = !maybe.has_value; + EXCEPTION_BAILOUT_CHECK(isolate, false); + return maybe.value; } @@ -3448,7 +3515,12 @@ bool v8::Object::HasRealIndexedProperty(uint32_t index) { i::Isolate* isolate = Utils::OpenHandle(this)->GetIsolate(); ON_BAILOUT(isolate, "v8::Object::HasRealIndexedProperty()", return false); - return i::JSObject::HasRealElementProperty(Utils::OpenHandle(this), index); + EXCEPTION_PREAMBLE(isolate); + Maybe<bool> maybe = + i::JSObject::HasRealElementProperty(Utils::OpenHandle(this), index); + has_pending_exception = !maybe.has_value; + EXCEPTION_BAILOUT_CHECK(isolate, false); + return maybe.value; } @@ -3458,8 +3530,12 @@ bool v8::Object::HasRealNamedCallbackProperty(Handle<String> key) { "v8::Object::HasRealNamedCallbackProperty()", return false); ENTER_V8(isolate); - return i::JSObject::HasRealNamedCallbackProperty(Utils::OpenHandle(this), - Utils::OpenHandle(*key)); + EXCEPTION_PREAMBLE(isolate); + Maybe<bool> maybe = i::JSObject::HasRealNamedCallbackProperty( + Utils::OpenHandle(this), 
Utils::OpenHandle(*key)); + has_pending_exception = !maybe.has_value; + EXCEPTION_BAILOUT_CHECK(isolate, false); + return maybe.value; } @@ -3491,10 +3567,11 @@ static Local<Value> GetPropertyByLookup(i::Isolate* isolate, // If the property being looked up is a callback, it can throw // an exception. EXCEPTION_PREAMBLE(isolate); - PropertyAttributes ignored; + i::LookupIterator it( + receiver, name, i::Handle<i::JSReceiver>(lookup->holder(), isolate), + i::LookupIterator::SKIP_INTERCEPTOR); i::Handle<i::Object> result; - has_pending_exception = !i::Object::GetProperty( - receiver, receiver, lookup, name, &ignored).ToHandle(&result); + has_pending_exception = !i::Object::GetProperty(&it).ToHandle(&result); EXCEPTION_BAILOUT_CHECK(isolate, Local<Value>()); return Utils::ToLocal(result); @@ -3545,7 +3622,7 @@ void v8::Object::TurnOnAccessCheck() { i::Handle<i::Map> new_map = i::Map::Copy(i::Handle<i::Map>(obj->map())); new_map->set_is_access_check_needed(true); - obj->set_map(*new_map); + i::JSObject::MigrateToMap(obj, new_map); } @@ -3584,8 +3661,7 @@ int v8::Object::GetIdentityHash() { ENTER_V8(isolate); i::HandleScope scope(isolate); i::Handle<i::JSObject> self = Utils::OpenHandle(this); - return i::Handle<i::Smi>::cast( - i::JSReceiver::GetOrCreateIdentityHash(self))->value(); + return i::JSReceiver::GetOrCreateIdentityHash(self)->value(); } @@ -3819,8 +3895,7 @@ Local<v8::Value> Object::CallAsFunction(v8::Handle<v8::Value> recv, return Local<v8::Value>()); LOG_API(isolate, "Object::CallAsFunction"); ENTER_V8(isolate); - i::Logger::TimerEventScope timer_scope( - isolate, i::Logger::TimerEventScope::v8_execute); + i::TimerEventScope<i::TimerEventExecute> timer_scope(isolate); i::HandleScope scope(isolate); i::Handle<i::JSObject> obj = Utils::OpenHandle(this); i::Handle<i::Object> recv_obj = Utils::OpenHandle(*recv); @@ -3854,8 +3929,7 @@ Local<v8::Value> Object::CallAsConstructor(int argc, return Local<v8::Object>()); LOG_API(isolate, "Object::CallAsConstructor"); ENTER_V8(isolate); - i::Logger::TimerEventScope timer_scope( - isolate, i::Logger::TimerEventScope::v8_execute); + i::TimerEventScope<i::TimerEventExecute> timer_scope(isolate); i::HandleScope scope(isolate); i::Handle<i::JSObject> obj = Utils::OpenHandle(this); STATIC_ASSERT(sizeof(v8::Handle<v8::Value>) == sizeof(i::Object**)); @@ -3882,7 +3956,7 @@ Local<v8::Value> Object::CallAsConstructor(int argc, has_pending_exception = !i::Execution::Call( isolate, fun, obj, argc, args).ToHandle(&returned); EXCEPTION_BAILOUT_CHECK_DO_CALLBACK(isolate, Local<v8::Object>()); - ASSERT(!delegate->IsUndefined()); + DCHECK(!delegate->IsUndefined()); return Utils::ToLocal(scope.CloseAndEscape(returned)); } return Local<v8::Object>(); @@ -3914,8 +3988,7 @@ Local<v8::Object> Function::NewInstance(int argc, return Local<v8::Object>()); LOG_API(isolate, "Function::NewInstance"); ENTER_V8(isolate); - i::Logger::TimerEventScope timer_scope( - isolate, i::Logger::TimerEventScope::v8_execute); + i::TimerEventScope<i::TimerEventExecute> timer_scope(isolate); EscapableHandleScope scope(reinterpret_cast<Isolate*>(isolate)); i::Handle<i::JSFunction> function = Utils::OpenHandle(this); STATIC_ASSERT(sizeof(v8::Handle<v8::Value>) == sizeof(i::Object**)); @@ -3935,8 +4008,7 @@ Local<v8::Value> Function::Call(v8::Handle<v8::Value> recv, int argc, ON_BAILOUT(isolate, "v8::Function::Call()", return Local<v8::Value>()); LOG_API(isolate, "Function::Call"); ENTER_V8(isolate); - i::Logger::TimerEventScope timer_scope( - isolate, 
i::Logger::TimerEventScope::v8_execute); + i::TimerEventScope<i::TimerEventExecute> timer_scope(isolate); i::HandleScope scope(isolate); i::Handle<i::JSFunction> fun = Utils::OpenHandle(this); i::Handle<i::Object> recv_obj = Utils::OpenHandle(*recv); @@ -4218,9 +4290,7 @@ class Utf8LengthHelper : public i::AllStatic { class Visitor { public: - inline explicit Visitor() - : utf8_length_(0), - state_(kInitialState) {} + Visitor() : utf8_length_(0), state_(kInitialState) {} void VisitOneByteString(const uint8_t* chars, int length) { int utf8_length = 0; @@ -4273,7 +4343,7 @@ class Utf8LengthHelper : public i::AllStatic { uint8_t leaf_state) { bool edge_surrogate = StartsWithSurrogate(leaf_state); if (!(*state & kLeftmostEdgeIsCalculated)) { - ASSERT(!(*state & kLeftmostEdgeIsSurrogate)); + DCHECK(!(*state & kLeftmostEdgeIsSurrogate)); *state |= kLeftmostEdgeIsCalculated | (edge_surrogate ? kLeftmostEdgeIsSurrogate : 0); } else if (EndsWithSurrogate(*state) && edge_surrogate) { @@ -4291,7 +4361,7 @@ class Utf8LengthHelper : public i::AllStatic { uint8_t leaf_state) { bool edge_surrogate = EndsWithSurrogate(leaf_state); if (!(*state & kRightmostEdgeIsCalculated)) { - ASSERT(!(*state & kRightmostEdgeIsSurrogate)); + DCHECK(!(*state & kRightmostEdgeIsSurrogate)); *state |= (kRightmostEdgeIsCalculated | (edge_surrogate ? kRightmostEdgeIsSurrogate : 0)); } else if (edge_surrogate && StartsWithSurrogate(*state)) { @@ -4307,7 +4377,7 @@ class Utf8LengthHelper : public i::AllStatic { static inline void MergeTerminal(int* length, uint8_t state, uint8_t* state_out) { - ASSERT((state & kLeftmostEdgeIsCalculated) && + DCHECK((state & kLeftmostEdgeIsCalculated) && (state & kRightmostEdgeIsCalculated)); if (EndsWithSurrogate(state) && StartsWithSurrogate(state)) { *length -= unibrow::Utf8::kBytesSavedByCombiningSurrogates; @@ -4419,7 +4489,7 @@ class Utf8WriterVisitor { char* const buffer, bool replace_invalid_utf8) { using namespace unibrow; - ASSERT(remaining > 0); + DCHECK(remaining > 0); // We can't use a local buffer here because Encode needs to modify // previous characters in the stream. We know, however, that // exactly one character will be advanced. @@ -4428,7 +4498,7 @@ class Utf8WriterVisitor { character, last_character, replace_invalid_utf8); - ASSERT(written == 1); + DCHECK(written == 1); return written; } // Use a scratch buffer to check the required characters. @@ -4460,7 +4530,7 @@ class Utf8WriterVisitor { template<typename Char> void Visit(const Char* chars, const int length) { using namespace unibrow; - ASSERT(!early_termination_); + DCHECK(!early_termination_); if (length == 0) return; // Copy state to stack. char* buffer = buffer_; @@ -4489,7 +4559,7 @@ class Utf8WriterVisitor { for (; i < fast_length; i++) { buffer += Utf8::EncodeOneByte(buffer, static_cast<uint8_t>(*chars++)); - ASSERT(capacity_ == -1 || (buffer - start_) <= capacity_); + DCHECK(capacity_ == -1 || (buffer - start_) <= capacity_); } } else { for (; i < fast_length; i++) { @@ -4499,7 +4569,7 @@ class Utf8WriterVisitor { last_character, replace_invalid_utf8_); last_character = character; - ASSERT(capacity_ == -1 || (buffer - start_) <= capacity_); + DCHECK(capacity_ == -1 || (buffer - start_) <= capacity_); } } // Array is fully written. Exit. @@ -4511,10 +4581,10 @@ class Utf8WriterVisitor { return; } } - ASSERT(!skip_capacity_check_); + DCHECK(!skip_capacity_check_); // Slow loop. Must check capacity on each iteration. 
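// Illustrative sketch (not part of this diff): Utf8WriterVisitor above is the
// engine behind String::WriteUtf8(); the replace_invalid_utf8 flag it threads
// through corresponds to the REPLACE_INVALID_UTF8 write option, which emits
// U+FFFD for unpaired surrogates instead of invalid bytes. The helper name
// and buffer sizing are hypothetical.
#include <v8.h>
#include <vector>
using namespace v8;

static std::vector<char> ToUtf8(Local<String> str) {
  std::vector<char> buf(str->Utf8Length() + 1);  // +1 for the terminator
  int nchars = 0;
  str->WriteUtf8(&buf[0], static_cast<int>(buf.size()), &nchars,
                 String::REPLACE_INVALID_UTF8);
  return buf;
}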
int remaining_capacity = capacity_ - static_cast<int>(buffer - start_); - ASSERT(remaining_capacity >= 0); + DCHECK(remaining_capacity >= 0); for (; i < length && remaining_capacity > 0; i++) { uint16_t character = *chars++; // remaining_capacity is <= 3 bytes at this point, so we do not write out @@ -4661,7 +4731,7 @@ static inline int WriteHelper(const String* string, i::Isolate* isolate = Utils::OpenHandle(string)->GetIsolate(); LOG_API(isolate, "String::Write"); ENTER_V8(isolate); - ASSERT(start >= 0 && length >= -1); + DCHECK(start >= 0 && length >= -1); i::Handle<i::String> str = Utils::OpenHandle(string); isolate->string_tracker()->RecordWrite(str); if (options & String::HINT_MANY_WRITES_EXPECTED) { @@ -4846,7 +4916,7 @@ void v8::Object::SetInternalField(int index, v8::Handle<Value> value) { if (!InternalFieldOK(obj, index, location)) return; i::Handle<i::Object> val = Utils::OpenHandle(*value); obj->SetInternalField(index, *val); - ASSERT_EQ(value, GetInternalField(index)); + DCHECK_EQ(value, GetInternalField(index)); } @@ -4863,7 +4933,7 @@ void v8::Object::SetAlignedPointerInInternalField(int index, void* value) { const char* location = "v8::Object::SetAlignedPointerInInternalField()"; if (!InternalFieldOK(obj, index, location)) return; obj->SetInternalField(index, EncodeAlignedAsSmi(value, location)); - ASSERT_EQ(value, GetAlignedPointerFromInternalField(index)); + DCHECK_EQ(value, GetAlignedPointerFromInternalField(index)); } @@ -4879,20 +4949,12 @@ static void* ExternalValue(i::Object* obj) { void v8::V8::InitializePlatform(Platform* platform) { -#ifdef V8_USE_DEFAULT_PLATFORM - FATAL("Can't override v8::Platform when using default implementation"); -#else i::V8::InitializePlatform(platform); -#endif } void v8::V8::ShutdownPlatform() { -#ifdef V8_USE_DEFAULT_PLATFORM - FATAL("Can't override v8::Platform when using default implementation"); -#else i::V8::ShutdownPlatform(); -#endif } @@ -4906,7 +4968,7 @@ bool v8::V8::Initialize() { void v8::V8::SetEntropySource(EntropySource entropy_source) { - i::RandomNumberGenerator::SetEntropySource(entropy_source); + base::RandomNumberGenerator::SetEntropySource(entropy_source); } @@ -4918,8 +4980,8 @@ void v8::V8::SetReturnAddressLocationResolver( bool v8::V8::SetFunctionEntryHook(Isolate* ext_isolate, FunctionEntryHook entry_hook) { - ASSERT(ext_isolate != NULL); - ASSERT(entry_hook != NULL); + DCHECK(ext_isolate != NULL); + DCHECK(entry_hook != NULL); i::Isolate* isolate = reinterpret_cast<i::Isolate*>(ext_isolate); @@ -4958,12 +5020,6 @@ void v8::V8::SetArrayBufferAllocator( bool v8::V8::Dispose() { - i::Isolate* isolate = i::Isolate::Current(); - if (!Utils::ApiCheck(isolate != NULL && isolate->IsDefaultIsolate(), - "v8::V8::Dispose()", - "Use v8::Isolate::Dispose() for non-default isolate.")) { - return false; - } i::V8::TearDown(); return true; } @@ -5011,7 +5067,7 @@ void v8::V8::VisitHandlesWithClassIds(PersistentHandleVisitor* visitor) { void v8::V8::VisitHandlesForPartialDependence( Isolate* exported_isolate, PersistentHandleVisitor* visitor) { i::Isolate* isolate = reinterpret_cast<i::Isolate*>(exported_isolate); - ASSERT(isolate == i::Isolate::Current()); + DCHECK(isolate == i::Isolate::Current()); i::DisallowHeapAllocation no_allocation; VisitorAdapter visitor_adapter(visitor); @@ -5020,30 +5076,6 @@ void v8::V8::VisitHandlesForPartialDependence( } -bool v8::V8::IdleNotification(int hint) { - // Returning true tells the caller that it need not - // continue to call IdleNotification. 
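// Illustrative sketch (not part of this diff): with the
// V8_USE_DEFAULT_PLATFORM guards removed above, InitializePlatform() and
// ShutdownPlatform() always delegate to i::V8, and Dispose() no longer
// insists on running in the default isolate. The usual embedder lifecycle,
// assuming some v8::Platform implementation `platform` (hypothetical here):
#include <v8.h>
#include <v8-platform.h>

static void RunEmbedder(v8::Platform* platform) {
  v8::V8::InitializePlatform(platform);
  v8::V8::Initialize();
  // ... create isolates and contexts, run scripts ...
  v8::V8::Dispose();
  v8::V8::ShutdownPlatform();
}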
- i::Isolate* isolate = i::Isolate::Current(); - if (isolate == NULL || !isolate->IsInitialized()) return true; - if (!i::FLAG_use_idle_notification) return true; - return isolate->heap()->IdleNotification(hint); -} - - -void v8::V8::LowMemoryNotification() { - i::Isolate* isolate = i::Isolate::Current(); - if (isolate == NULL || !isolate->IsInitialized()) return; - isolate->heap()->CollectAllAvailableGarbage("low memory notification"); -} - - -int v8::V8::ContextDisposedNotification() { - i::Isolate* isolate = i::Isolate::Current(); - if (!isolate->IsInitialized()) return 0; - return isolate->heap()->NotifyContextDisposed(); -} - - bool v8::V8::InitializeICU(const char* icu_data_file) { return i::InitializeICU(icu_data_file); } @@ -5058,7 +5090,7 @@ static i::Handle<i::Context> CreateEnvironment( i::Isolate* isolate, v8::ExtensionConfiguration* extensions, v8::Handle<ObjectTemplate> global_template, - v8::Handle<Value> global_object) { + v8::Handle<Value> maybe_global_proxy) { i::Handle<i::Context> env; // Enter V8 via an ENTER_V8 scope. @@ -5096,16 +5128,19 @@ static i::Handle<i::Context> CreateEnvironment( } } + i::Handle<i::Object> proxy = Utils::OpenHandle(*maybe_global_proxy, true); + i::MaybeHandle<i::JSGlobalProxy> maybe_proxy; + if (!proxy.is_null()) { + maybe_proxy = i::Handle<i::JSGlobalProxy>::cast(proxy); + } // Create the environment. env = isolate->bootstrapper()->CreateEnvironment( - Utils::OpenHandle(*global_object, true), - proxy_template, - extensions); + maybe_proxy, proxy_template, extensions); // Restore the access check info on the global template. if (!global_template.IsEmpty()) { - ASSERT(!global_constructor.is_null()); - ASSERT(!proxy_constructor.is_null()); + DCHECK(!global_constructor.is_null()); + DCHECK(!proxy_constructor.is_null()); global_constructor->set_access_check_info( proxy_constructor->access_check_info()); global_constructor->set_needs_access_check( @@ -5438,7 +5473,7 @@ Local<String> v8::String::NewExternal( bool v8::String::MakeExternal(v8::String::ExternalStringResource* resource) { i::Handle<i::String> obj = Utils::OpenHandle(this); i::Isolate* isolate = obj->GetIsolate(); - if (i::StringShape(*obj).IsExternalTwoByte()) { + if (i::StringShape(*obj).IsExternal()) { return false; // Already an external string. } ENTER_V8(isolate); @@ -5451,8 +5486,10 @@ bool v8::String::MakeExternal(v8::String::ExternalStringResource* resource) { CHECK(resource && resource->data()); bool result = obj->MakeExternal(resource); + // Assert that if CanMakeExternal(), then externalizing actually succeeds. + DCHECK(!CanMakeExternal() || result); if (result) { - ASSERT(obj->IsExternalString()); + DCHECK(obj->IsExternalString()); isolate->heap()->external_string_table()->AddString(*obj); } return result; @@ -5478,7 +5515,7 @@ bool v8::String::MakeExternal( v8::String::ExternalAsciiStringResource* resource) { i::Handle<i::String> obj = Utils::OpenHandle(this); i::Isolate* isolate = obj->GetIsolate(); - if (i::StringShape(*obj).IsExternalTwoByte()) { + if (i::StringShape(*obj).IsExternal()) { return false; // Already an external string. } ENTER_V8(isolate); @@ -5491,8 +5528,10 @@ bool v8::String::MakeExternal( CHECK(resource && resource->data()); bool result = obj->MakeExternal(resource); + // Assert that if CanMakeExternal(), then externalizing actually succeeds. 
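// Illustrative sketch (not part of this diff): the MakeExternal() hunks above
// now refuse any already-external string (one- or two-byte) and assert that
// CanMakeExternal() implies success. A typical externalization, with a
// hypothetical resource class; V8 keeps the resource alive for the string's
// lifetime and calls Dispose() on it when the string dies.
#include <v8.h>
#include <stdint.h>

class BufferResource : public v8::String::ExternalStringResource {
 public:
  BufferResource(const uint16_t* data, size_t length)
      : data_(data), length_(length) {}
  virtual const uint16_t* data() const { return data_; }
  virtual size_t length() const { return length_; }
 private:
  const uint16_t* data_;
  size_t length_;
};

static bool Externalize(v8::Local<v8::String> str,
                        const uint16_t* data, size_t length) {
  if (!str->CanMakeExternal()) return false;
  // Per the new DCHECK above, success is now expected on this path.
  return str->MakeExternal(new BufferResource(data, length));
}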
+ DCHECK(!CanMakeExternal() || result); if (result) { - ASSERT(obj->IsExternalString()); + DCHECK(obj->IsExternalString()); isolate->heap()->external_string_table()->AddString(*obj); } return result; @@ -5504,11 +5543,6 @@ bool v8::String::CanMakeExternal() { i::Handle<i::String> obj = Utils::OpenHandle(this); i::Isolate* isolate = obj->GetIsolate(); - // TODO(yangguo): Externalizing sliced/cons strings allocates. - // This rule can be removed when all code that can - // trigger an access check is handlified and therefore GC safe. - if (isolate->heap()->old_pointer_space()->Contains(*obj)) return false; - if (isolate->string_tracker()->IsFreshUnusedString(obj)) return false; int size = obj->Size(); // Byte size of the original string. if (size < i::ExternalString::kShortSize) return false; @@ -5622,7 +5656,7 @@ Local<v8::Value> v8::Date::New(Isolate* isolate, double time) { LOG_API(i_isolate, "Date::New"); if (std::isnan(time)) { // Introduce only canonical NaN value into the VM, to avoid signaling NaNs. - time = i::OS::nan_value(); + time = base::OS::nan_value(); } ENTER_V8(i_isolate); EXCEPTION_PREAMBLE(i_isolate); @@ -5660,7 +5694,7 @@ void v8::Date::DateTimeConfigurationChangeNotification(Isolate* isolate) { i::Handle<i::FixedArray> date_cache_version = i::Handle<i::FixedArray>::cast(i_isolate->eternal_handles()->GetSingleton( i::EternalHandles::DATE_CACHE_VERSION)); - ASSERT_EQ(1, date_cache_version->length()); + DCHECK_EQ(1, date_cache_version->length()); CHECK(date_cache_version->get(0)->IsSmi()); date_cache_version->set( 0, @@ -5675,7 +5709,7 @@ static i::Handle<i::String> RegExpFlagsToString(RegExp::Flags flags) { if ((flags & RegExp::kGlobal) != 0) flags_buf[num_flags++] = 'g'; if ((flags & RegExp::kMultiline) != 0) flags_buf[num_flags++] = 'm'; if ((flags & RegExp::kIgnoreCase) != 0) flags_buf[num_flags++] = 'i'; - ASSERT(num_flags <= static_cast<int>(ARRAY_SIZE(flags_buf))); + DCHECK(num_flags <= static_cast<int>(ARRAY_SIZE(flags_buf))); return isolate->factory()->InternalizeOneByteString( i::Vector<const uint8_t>(flags_buf, num_flags)); } @@ -5779,8 +5813,7 @@ bool Value::IsPromise() const { i::Handle<i::Object> b; has_pending_exception = !i::Execution::Call( isolate, - handle( - isolate->context()->global_object()->native_context()->is_promise()), + isolate->is_promise(), isolate->factory()->undefined_value(), ARRAY_SIZE(argv), argv, false).ToHandle(&b); @@ -5797,8 +5830,7 @@ Local<Promise::Resolver> Promise::Resolver::New(Isolate* v8_isolate) { i::Handle<i::Object> result; has_pending_exception = !i::Execution::Call( isolate, - handle(isolate->context()->global_object()->native_context()-> - promise_create()), + isolate->promise_create(), isolate->factory()->undefined_value(), 0, NULL, false).ToHandle(&result); @@ -5822,8 +5854,7 @@ void Promise::Resolver::Resolve(Handle<Value> value) { i::Handle<i::Object> argv[] = { promise, Utils::OpenHandle(*value) }; has_pending_exception = i::Execution::Call( isolate, - handle(isolate->context()->global_object()->native_context()-> - promise_resolve()), + isolate->promise_resolve(), isolate->factory()->undefined_value(), ARRAY_SIZE(argv), argv, false).is_null(); @@ -5840,8 +5871,7 @@ void Promise::Resolver::Reject(Handle<Value> value) { i::Handle<i::Object> argv[] = { promise, Utils::OpenHandle(*value) }; has_pending_exception = i::Execution::Call( isolate, - handle(isolate->context()->global_object()->native_context()-> - promise_reject()), + isolate->promise_reject(), isolate->factory()->undefined_value(), ARRAY_SIZE(argv), argv, 
false).is_null(); @@ -5859,8 +5889,7 @@ Local<Promise> Promise::Chain(Handle<Function> handler) { i::Handle<i::Object> result; has_pending_exception = !i::Execution::Call( isolate, - handle(isolate->context()->global_object()->native_context()-> - promise_chain()), + isolate->promise_chain(), promise, ARRAY_SIZE(argv), argv, false).ToHandle(&result); @@ -5879,8 +5908,26 @@ Local<Promise> Promise::Catch(Handle<Function> handler) { i::Handle<i::Object> result; has_pending_exception = !i::Execution::Call( isolate, - handle(isolate->context()->global_object()->native_context()-> - promise_catch()), + isolate->promise_catch(), + promise, + ARRAY_SIZE(argv), argv, + false).ToHandle(&result); + EXCEPTION_BAILOUT_CHECK(isolate, Local<Promise>()); + return Local<Promise>::Cast(Utils::ToLocal(result)); +} + + +Local<Promise> Promise::Then(Handle<Function> handler) { + i::Handle<i::JSObject> promise = Utils::OpenHandle(this); + i::Isolate* isolate = promise->GetIsolate(); + LOG_API(isolate, "Promise::Then"); + ENTER_V8(isolate); + EXCEPTION_PREAMBLE(isolate); + i::Handle<i::Object> argv[] = { Utils::OpenHandle(*handler) }; + i::Handle<i::Object> result; + has_pending_exception = !i::Execution::Call( + isolate, + isolate->promise_then(), promise, ARRAY_SIZE(argv), argv, false).ToHandle(&result); @@ -5956,10 +6003,10 @@ Local<ArrayBuffer> v8::ArrayBufferView::Buffer() { i::Handle<i::JSArrayBuffer> buffer; if (obj->IsJSDataView()) { i::Handle<i::JSDataView> data_view(i::JSDataView::cast(*obj)); - ASSERT(data_view->buffer()->IsJSArrayBuffer()); + DCHECK(data_view->buffer()->IsJSArrayBuffer()); buffer = i::handle(i::JSArrayBuffer::cast(data_view->buffer())); } else { - ASSERT(obj->IsJSTypedArray()); + DCHECK(obj->IsJSTypedArray()); buffer = i::JSTypedArray::cast(*obj)->GetBuffer(); } return Utils::ToLocal(buffer); @@ -5990,7 +6037,7 @@ static inline void SetupArrayBufferView( i::Handle<i::JSArrayBuffer> buffer, size_t byte_offset, size_t byte_length) { - ASSERT(byte_offset + byte_length <= + DCHECK(byte_offset + byte_length <= static_cast<size_t>(buffer->byte_length()->Number())); obj->set_buffer(*buffer); @@ -6017,7 +6064,7 @@ i::Handle<i::JSTypedArray> NewTypedArray( isolate->factory()->NewJSTypedArray(array_type); i::Handle<i::JSArrayBuffer> buffer = Utils::OpenHandle(*array_buffer); - ASSERT(byte_offset % sizeof(ElementType) == 0); + DCHECK(byte_offset % sizeof(ElementType) == 0); CHECK(length <= (std::numeric_limits<size_t>::max() / sizeof(ElementType))); CHECK(length <= static_cast<size_t>(i::Smi::kMaxValue)); @@ -6102,11 +6149,10 @@ Local<Symbol> v8::Symbol::For(Isolate* isolate, Local<String> name) { i::Handle<i::Object> symbol = i::Object::GetPropertyOrElement(symbols, i_name).ToHandleChecked(); if (!symbol->IsSymbol()) { - ASSERT(symbol->IsUndefined()); + DCHECK(symbol->IsUndefined()); symbol = i_isolate->factory()->NewSymbol(); i::Handle<i::Symbol>::cast(symbol)->set_name(*i_name); - i::JSObject::SetProperty( - symbols, i_name, symbol, NONE, i::STRICT).Assert(); + i::JSObject::SetProperty(symbols, i_name, symbol, i::STRICT).Assert(); } return Utils::ToLocal(i::Handle<i::Symbol>::cast(symbol)); } @@ -6123,11 +6169,10 @@ Local<Symbol> v8::Symbol::ForApi(Isolate* isolate, Local<String> name) { i::Handle<i::Object> symbol = i::Object::GetPropertyOrElement(symbols, i_name).ToHandleChecked(); if (!symbol->IsSymbol()) { - ASSERT(symbol->IsUndefined()); + DCHECK(symbol->IsUndefined()); symbol = i_isolate->factory()->NewSymbol(); i::Handle<i::Symbol>::cast(symbol)->set_name(*i_name); - 
i::JSObject::SetProperty( - symbols, i_name, symbol, NONE, i::STRICT).Assert(); + i::JSObject::SetProperty(symbols, i_name, symbol, i::STRICT).Assert(); } return Utils::ToLocal(i::Handle<i::Symbol>::cast(symbol)); } @@ -6156,11 +6201,10 @@ Local<Private> v8::Private::ForApi(Isolate* isolate, Local<String> name) { i::Handle<i::Object> symbol = i::Object::GetPropertyOrElement(privates, i_name).ToHandleChecked(); if (!symbol->IsSymbol()) { - ASSERT(symbol->IsUndefined()); + DCHECK(symbol->IsUndefined()); symbol = i_isolate->factory()->NewPrivateSymbol(); i::Handle<i::Symbol>::cast(symbol)->set_name(*i_name); - i::JSObject::SetProperty( - privates, i_name, symbol, NONE, i::STRICT).Assert(); + i::JSObject::SetProperty(privates, i_name, symbol, i::STRICT).Assert(); } Local<Symbol> result = Utils::ToLocal(i::Handle<i::Symbol>::cast(symbol)); return v8::Handle<Private>(reinterpret_cast<Private*>(*result)); @@ -6169,10 +6213,10 @@ Local<Private> v8::Private::ForApi(Isolate* isolate, Local<String> name) { Local<Number> v8::Number::New(Isolate* isolate, double value) { i::Isolate* internal_isolate = reinterpret_cast<i::Isolate*>(isolate); - ASSERT(internal_isolate->IsInitialized()); + DCHECK(internal_isolate->IsInitialized()); if (std::isnan(value)) { // Introduce only canonical NaN value into the VM, to avoid signaling NaNs. - value = i::OS::nan_value(); + value = base::OS::nan_value(); } ENTER_V8(internal_isolate); i::Handle<i::Object> result = internal_isolate->factory()->NewNumber(value); @@ -6182,7 +6226,7 @@ Local<Number> v8::Number::New(Isolate* isolate, double value) { Local<Integer> v8::Integer::New(Isolate* isolate, int32_t value) { i::Isolate* internal_isolate = reinterpret_cast<i::Isolate*>(isolate); - ASSERT(internal_isolate->IsInitialized()); + DCHECK(internal_isolate->IsInitialized()); if (i::Smi::IsValid(value)) { return Utils::IntegerToLocal(i::Handle<i::Object>(i::Smi::FromInt(value), internal_isolate)); @@ -6195,7 +6239,7 @@ Local<Integer> v8::Integer::New(Isolate* isolate, int32_t value) { Local<Integer> v8::Integer::NewFromUnsigned(Isolate* isolate, uint32_t value) { i::Isolate* internal_isolate = reinterpret_cast<i::Isolate*>(isolate); - ASSERT(internal_isolate->IsInitialized()); + DCHECK(internal_isolate->IsInitialized()); bool fits_into_int32_t = (value & (1 << 31)) == 0; if (fits_into_int32_t) { return Integer::New(isolate, static_cast<int32_t>(value)); @@ -6252,32 +6296,6 @@ void V8::SetCaptureStackTraceForUncaughtExceptions( } -void V8::SetCounterFunction(CounterLookupCallback callback) { - i::Isolate* isolate = i::Isolate::UncheckedCurrent(); - // TODO(svenpanne) The Isolate should really be a parameter. - if (isolate == NULL) return; - isolate->stats_table()->SetCounterFunction(callback); -} - - -void V8::SetCreateHistogramFunction(CreateHistogramCallback callback) { - i::Isolate* isolate = i::Isolate::UncheckedCurrent(); - // TODO(svenpanne) The Isolate should really be a parameter. - if (isolate == NULL) return; - isolate->stats_table()->SetCreateHistogramFunction(callback); - isolate->InitializeLoggingAndCounters(); - isolate->counters()->ResetHistograms(); -} - - -void V8::SetAddHistogramSampleFunction(AddHistogramSampleCallback callback) { - i::Isolate* isolate = i::Isolate::UncheckedCurrent(); - // TODO(svenpanne) The Isolate should really be a parameter. 
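// Illustrative sketch (not part of this diff): the Promise hunks above swap
// the per-call native-context lookups for isolate accessors
// (isolate->promise_resolve() and friends) and add Promise::Then(). The
// embedder-facing surface looks like this, with hypothetical handlers:
#include <v8.h>
using namespace v8;

static Local<Promise> MakeResolved(Isolate* isolate, Local<Value> value,
                                   Handle<Function> on_fulfilled,
                                   Handle<Function> on_rejected) {
  EscapableHandleScope scope(isolate);
  Local<Promise::Resolver> resolver = Promise::Resolver::New(isolate);
  resolver->Resolve(value);
  Local<Promise> promise = resolver->GetPromise();
  promise = promise->Then(on_fulfilled);  // added in this change
  promise = promise->Catch(on_rejected);
  return scope.Escape(promise);
}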
- if (isolate == NULL) return; - isolate->stats_table()-> - SetAddHistogramSampleFunction(callback); -} - void V8::SetFailedAccessCheckCallbackFunction( FailedAccessCheckCallback callback) { i::Isolate* isolate = i::Isolate::Current(); @@ -6285,10 +6303,9 @@ void V8::SetFailedAccessCheckCallbackFunction( } -int64_t Isolate::AdjustAmountOfExternalAllocatedMemory( - int64_t change_in_bytes) { - i::Heap* heap = reinterpret_cast<i::Isolate*>(this)->heap(); - return heap->AdjustAmountOfExternalAllocatedMemory(change_in_bytes); +void Isolate::CollectAllGarbage(const char* gc_reason) { + reinterpret_cast<i::Isolate*>(this)->heap()->CollectAllGarbage( + i::Heap::kNoGCFlags, gc_reason); } @@ -6316,7 +6333,7 @@ v8::Local<v8::Context> Isolate::GetCurrentContext() { i::Isolate* isolate = reinterpret_cast<i::Isolate*>(this); i::Context* context = isolate->context(); if (context == NULL) return Local<Context>(); - i::Context* native_context = context->global_object()->native_context(); + i::Context* native_context = context->native_context(); if (native_context == NULL) return Local<Context>(); return Utils::ToLocal(i::Handle<i::Context>(native_context)); } @@ -6454,23 +6471,9 @@ void V8::RemoveMemoryAllocationCallback(MemoryAllocationCallback callback) { } -void V8::RunMicrotasks(Isolate* isolate) { - isolate->RunMicrotasks(); -} - - -void V8::EnqueueMicrotask(Isolate* isolate, Handle<Function> microtask) { - isolate->EnqueueMicrotask(microtask); -} - - -void V8::SetAutorunMicrotasks(Isolate* isolate, bool autorun) { - isolate->SetAutorunMicrotasks(autorun); -} - - void V8::TerminateExecution(Isolate* isolate) { - reinterpret_cast<i::Isolate*>(isolate)->stack_guard()->TerminateExecution(); + i::Isolate* i_isolate = reinterpret_cast<i::Isolate*>(isolate); + i_isolate->stack_guard()->RequestTerminateExecution(); } @@ -6483,18 +6486,24 @@ bool V8::IsExecutionTerminating(Isolate* isolate) { void V8::CancelTerminateExecution(Isolate* isolate) { i::Isolate* i_isolate = reinterpret_cast<i::Isolate*>(isolate); - i_isolate->stack_guard()->CancelTerminateExecution(); + i_isolate->stack_guard()->ClearTerminateExecution(); + i_isolate->CancelTerminateExecution(); } void Isolate::RequestInterrupt(InterruptCallback callback, void* data) { - reinterpret_cast<i::Isolate*>(this)->stack_guard()->RequestInterrupt( - callback, data); + i::Isolate* i_isolate = reinterpret_cast<i::Isolate*>(this); + i_isolate->set_api_interrupt_callback(callback); + i_isolate->set_api_interrupt_callback_data(data); + i_isolate->stack_guard()->RequestApiInterrupt(); } void Isolate::ClearInterrupt() { - reinterpret_cast<i::Isolate*>(this)->stack_guard()->ClearInterrupt(); + i::Isolate* i_isolate = reinterpret_cast<i::Isolate*>(this); + i_isolate->stack_guard()->ClearApiInterrupt(); + i_isolate->set_api_interrupt_callback(NULL); + i_isolate->set_api_interrupt_callback_data(NULL); } @@ -6505,7 +6514,7 @@ void Isolate::RequestGarbageCollectionForTesting(GarbageCollectionType type) { i::NEW_SPACE, "Isolate::RequestGarbageCollection", kGCCallbackFlagForced); } else { - ASSERT_EQ(kFullGarbageCollection, type); + DCHECK_EQ(kFullGarbageCollection, type); reinterpret_cast<i::Isolate*>(this)->heap()->CollectAllGarbage( i::Heap::kAbortIncrementalMarkingMask, "Isolate::RequestGarbageCollection", kGCCallbackFlagForced); @@ -6514,7 +6523,7 @@ void Isolate::RequestGarbageCollectionForTesting(GarbageCollectionType type) { Isolate* Isolate::GetCurrent() { - i::Isolate* isolate = i::Isolate::UncheckedCurrent(); + i::Isolate* isolate = i::Isolate::Current(); 
return reinterpret_cast<Isolate*>(isolate); } @@ -6557,7 +6566,7 @@ Isolate::DisallowJavascriptExecutionScope::DisallowJavascriptExecutionScope( internal_ = reinterpret_cast<void*>( new i::DisallowJavascriptExecution(i_isolate)); } else { - ASSERT_EQ(THROW_ON_FAILURE, on_failure); + DCHECK_EQ(THROW_ON_FAILURE, on_failure); internal_ = reinterpret_cast<void*>( new i::ThrowOnJavascriptExecution(i_isolate)); } @@ -6622,6 +6631,8 @@ void Isolate::GetHeapStatistics(HeapStatistics* heap_statistics) { void Isolate::SetEventLogger(LogEventCallback that) { + // Do not overwrite the event logger if we want to log explicitly. + if (i::FLAG_log_timer_events) return; i::Isolate* isolate = reinterpret_cast<i::Isolate*>(this); isolate->set_event_logger(that); } @@ -6641,16 +6652,25 @@ void Isolate::RemoveCallCompletedCallback(CallCompletedCallback callback) { void Isolate::RunMicrotasks() { - i::Isolate* i_isolate = reinterpret_cast<i::Isolate*>(this); - i::HandleScope scope(i_isolate); - i_isolate->RunMicrotasks(); + reinterpret_cast<i::Isolate*>(this)->RunMicrotasks(); } void Isolate::EnqueueMicrotask(Handle<Function> microtask) { - i::Isolate* i_isolate = reinterpret_cast<i::Isolate*>(this); - ENTER_V8(i_isolate); - i::Execution::EnqueueMicrotask(i_isolate, Utils::OpenHandle(*microtask)); + i::Isolate* isolate = reinterpret_cast<i::Isolate*>(this); + isolate->EnqueueMicrotask(Utils::OpenHandle(*microtask)); +} + + +void Isolate::EnqueueMicrotask(MicrotaskCallback microtask, void* data) { + i::Isolate* isolate = reinterpret_cast<i::Isolate*>(this); + i::HandleScope scope(isolate); + i::Handle<i::CallHandlerInfo> callback_info = + i::Handle<i::CallHandlerInfo>::cast( + isolate->factory()->NewStruct(i::CALL_HANDLER_INFO_TYPE)); + SET_FIELD_WRAPPED(callback_info, set_callback, microtask); + SET_FIELD_WRAPPED(callback_info, set_data, data); + isolate->EnqueueMicrotask(callback_info); } @@ -6659,6 +6679,65 @@ void Isolate::SetAutorunMicrotasks(bool autorun) { } +bool Isolate::WillAutorunMicrotasks() const { + return reinterpret_cast<const i::Isolate*>(this)->autorun_microtasks(); +} + + +void Isolate::SetUseCounterCallback(UseCounterCallback callback) { + reinterpret_cast<i::Isolate*>(this)->SetUseCounterCallback(callback); +} + + +void Isolate::SetCounterFunction(CounterLookupCallback callback) { + i::Isolate* isolate = reinterpret_cast<i::Isolate*>(this); + isolate->stats_table()->SetCounterFunction(callback); + isolate->InitializeLoggingAndCounters(); + isolate->counters()->ResetCounters(); +} + + +void Isolate::SetCreateHistogramFunction(CreateHistogramCallback callback) { + i::Isolate* isolate = reinterpret_cast<i::Isolate*>(this); + isolate->stats_table()->SetCreateHistogramFunction(callback); + isolate->InitializeLoggingAndCounters(); + isolate->counters()->ResetHistograms(); +} + + +void Isolate::SetAddHistogramSampleFunction( + AddHistogramSampleCallback callback) { + reinterpret_cast<i::Isolate*>(this) + ->stats_table() + ->SetAddHistogramSampleFunction(callback); +} + + +bool v8::Isolate::IdleNotification(int idle_time_in_ms) { + // Returning true tells the caller that it need not + // continue to call IdleNotification. 
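// Illustrative sketch (not part of this diff): microtask control now lives on
// v8::Isolate, including the new EnqueueMicrotask(MicrotaskCallback, void*)
// overload above, which wraps a C callback in a CallHandlerInfo struct.
// Hypothetical usage:
#include <v8.h>
#include <stddef.h>
using namespace v8;

static void OnMicrotask(void* data) {
  (void)data;  // runs when the isolate pumps its microtask queue
}

static void QueueWork(Isolate* isolate) {
  isolate->SetAutorunMicrotasks(false);        // take manual control
  isolate->EnqueueMicrotask(OnMicrotask, NULL);
  if (!isolate->WillAutorunMicrotasks()) {     // new query, added above
    isolate->RunMicrotasks();
  }
}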
+ i::Isolate* isolate = reinterpret_cast<i::Isolate*>(this); + if (!i::FLAG_use_idle_notification) return true; + return isolate->heap()->IdleNotification(idle_time_in_ms); +} + + +void v8::Isolate::LowMemoryNotification() { + i::Isolate* isolate = reinterpret_cast<i::Isolate*>(this); + { + i::HistogramTimerScope idle_notification_scope( + isolate->counters()->gc_low_memory_notification()); + isolate->heap()->CollectAllAvailableGarbage("low memory notification"); + } +} + + +int v8::Isolate::ContextDisposedNotification() { + i::Isolate* isolate = reinterpret_cast<i::Isolate*>(this); + return isolate->heap()->NotifyContextDisposed(); +} + + String::Utf8Value::Utf8Value(v8::Handle<v8::Value> obj) : str_(NULL), length_(0) { i::Isolate* isolate = i::Isolate::Current(); @@ -6788,55 +6867,44 @@ Local<Value> Exception::Error(v8::Handle<v8::String> raw_message) { // --- D e b u g S u p p o r t --- -bool Debug::SetDebugEventListener2(EventCallback2 that, Handle<Value> data) { +bool Debug::SetDebugEventListener(EventCallback that, Handle<Value> data) { i::Isolate* isolate = i::Isolate::Current(); - EnsureInitializedForIsolate(isolate, "v8::Debug::SetDebugEventListener2()"); - ON_BAILOUT(isolate, "v8::Debug::SetDebugEventListener2()", return false); + EnsureInitializedForIsolate(isolate, "v8::Debug::SetDebugEventListener()"); + ON_BAILOUT(isolate, "v8::Debug::SetDebugEventListener()", return false); ENTER_V8(isolate); i::HandleScope scope(isolate); i::Handle<i::Object> foreign = isolate->factory()->undefined_value(); if (that != NULL) { foreign = isolate->factory()->NewForeign(FUNCTION_ADDR(that)); } - isolate->debugger()->SetEventListener(foreign, - Utils::OpenHandle(*data, true)); - return true; -} - - -bool Debug::SetDebugEventListener(v8::Handle<v8::Object> that, - Handle<Value> data) { - i::Isolate* isolate = i::Isolate::Current(); - ON_BAILOUT(isolate, "v8::Debug::SetDebugEventListener()", return false); - ENTER_V8(isolate); - isolate->debugger()->SetEventListener(Utils::OpenHandle(*that), - Utils::OpenHandle(*data, true)); + isolate->debug()->SetEventListener(foreign, + Utils::OpenHandle(*data, true)); return true; } void Debug::DebugBreak(Isolate* isolate) { - reinterpret_cast<i::Isolate*>(isolate)->stack_guard()->DebugBreak(); + reinterpret_cast<i::Isolate*>(isolate)->stack_guard()->RequestDebugBreak(); } void Debug::CancelDebugBreak(Isolate* isolate) { i::Isolate* internal_isolate = reinterpret_cast<i::Isolate*>(isolate); - internal_isolate->stack_guard()->Continue(i::DEBUGBREAK); + internal_isolate->stack_guard()->ClearDebugBreak(); } void Debug::DebugBreakForCommand(Isolate* isolate, ClientData* data) { i::Isolate* internal_isolate = reinterpret_cast<i::Isolate*>(isolate); - internal_isolate->debugger()->EnqueueDebugCommand(data); + internal_isolate->debug()->EnqueueDebugCommand(data); } -void Debug::SetMessageHandler2(v8::Debug::MessageHandler2 handler) { +void Debug::SetMessageHandler(v8::Debug::MessageHandler handler) { i::Isolate* isolate = i::Isolate::Current(); EnsureInitializedForIsolate(isolate, "v8::Debug::SetMessageHandler"); ENTER_V8(isolate); - isolate->debugger()->SetMessageHandler(handler); + isolate->debug()->SetMessageHandler(handler); } @@ -6845,32 +6913,11 @@ void Debug::SendCommand(Isolate* isolate, int length, ClientData* client_data) { i::Isolate* internal_isolate = reinterpret_cast<i::Isolate*>(isolate); - internal_isolate->debugger()->ProcessCommand( + internal_isolate->debug()->EnqueueCommandMessage( i::Vector<const uint16_t>(command, length), client_data); } -void 
Debug::SetHostDispatchHandler(HostDispatchHandler handler, - int period) { - i::Isolate* isolate = i::Isolate::Current(); - EnsureInitializedForIsolate(isolate, "v8::Debug::SetHostDispatchHandler"); - ENTER_V8(isolate); - isolate->debugger()->SetHostDispatchHandler( - handler, i::TimeDelta::FromMilliseconds(period)); -} - - -void Debug::SetDebugMessageDispatchHandler( - DebugMessageDispatchHandler handler, bool provide_locker) { - i::Isolate* isolate = i::Isolate::Current(); - EnsureInitializedForIsolate(isolate, - "v8::Debug::SetDebugMessageDispatchHandler"); - ENTER_V8(isolate); - isolate->debugger()->SetDebugMessageDispatchHandler( - handler, provide_locker); -} - - Local<Value> Debug::Call(v8::Handle<v8::Function> fun, v8::Handle<v8::Value> data) { i::Isolate* isolate = i::Isolate::Current(); @@ -6880,10 +6927,10 @@ Local<Value> Debug::Call(v8::Handle<v8::Function> fun, i::MaybeHandle<i::Object> maybe_result; EXCEPTION_PREAMBLE(isolate); if (data.IsEmpty()) { - maybe_result = isolate->debugger()->Call( + maybe_result = isolate->debug()->Call( Utils::OpenHandle(*fun), isolate->factory()->undefined_value()); } else { - maybe_result = isolate->debugger()->Call( + maybe_result = isolate->debug()->Call( Utils::OpenHandle(*fun), Utils::OpenHandle(*data)); } i::Handle<i::Object> result; @@ -6922,19 +6969,8 @@ Local<Value> Debug::GetMirror(v8::Handle<v8::Value> obj) { } -bool Debug::EnableAgent(const char* name, int port, bool wait_for_connection) { - return i::Isolate::Current()->debugger()->StartAgent(name, port, - wait_for_connection); -} - - -void Debug::DisableAgent() { - return i::Isolate::Current()->debugger()->StopAgent(); -} - - void Debug::ProcessDebugMessages() { - i::Execution::ProcessDebugMessages(i::Isolate::Current(), true); + i::Isolate::Current()->debug()->ProcessDebugMessages(true); } @@ -6942,13 +6978,13 @@ Local<Context> Debug::GetDebugContext() { i::Isolate* isolate = i::Isolate::Current(); EnsureInitializedForIsolate(isolate, "v8::Debug::GetDebugContext()"); ENTER_V8(isolate); - return Utils::ToLocal(i::Isolate::Current()->debugger()->GetDebugContext()); + return Utils::ToLocal(i::Isolate::Current()->debug()->GetDebugContext()); } void Debug::SetLiveEditEnabled(Isolate* isolate, bool enable) { i::Isolate* internal_isolate = reinterpret_cast<i::Isolate*>(isolate); - internal_isolate->debugger()->set_live_edit_enabled(enable); + internal_isolate->debug()->set_live_edit_enabled(enable); } @@ -7032,7 +7068,7 @@ const CpuProfileNode* CpuProfileNode::GetChild(int index) const { void CpuProfile::Delete() { i::Isolate* isolate = i::Isolate::Current(); i::CpuProfiler* profiler = isolate->cpu_profiler(); - ASSERT(profiler != NULL); + DCHECK(profiler != NULL); profiler->DeleteProfile(reinterpret_cast<i::CpuProfile*>(this)); } @@ -7059,19 +7095,20 @@ const CpuProfileNode* CpuProfile::GetSample(int index) const { int64_t CpuProfile::GetSampleTimestamp(int index) const { const i::CpuProfile* profile = reinterpret_cast<const i::CpuProfile*>(this); - return (profile->sample_timestamp(index) - i::TimeTicks()).InMicroseconds(); + return (profile->sample_timestamp(index) - base::TimeTicks()) + .InMicroseconds(); } int64_t CpuProfile::GetStartTime() const { const i::CpuProfile* profile = reinterpret_cast<const i::CpuProfile*>(this); - return (profile->start_time() - i::TimeTicks()).InMicroseconds(); + return (profile->start_time() - base::TimeTicks()).InMicroseconds(); } int64_t CpuProfile::GetEndTime() const { const i::CpuProfile* profile = reinterpret_cast<const i::CpuProfile*>(this); - 
return (profile->end_time() - i::TimeTicks()).InMicroseconds(); + return (profile->end_time() - base::TimeTicks()).InMicroseconds(); } @@ -7081,9 +7118,9 @@ int CpuProfile::GetSamplesCount() const { void CpuProfiler::SetSamplingInterval(int us) { - ASSERT(us >= 0); + DCHECK(us >= 0); return reinterpret_cast<i::CpuProfiler*>(this)->set_sampling_interval( - i::TimeDelta::FromMicroseconds(us)); + base::TimeDelta::FromMicroseconds(us)); } @@ -7113,7 +7150,7 @@ const CpuProfile* CpuProfiler::StopCpuProfiling(Handle<String> title) { void CpuProfiler::SetIdle(bool is_idle) { i::Isolate* isolate = reinterpret_cast<i::CpuProfiler*>(this)->isolate(); i::StateTag state = isolate->current_vm_state(); - ASSERT(state == i::EXTERNAL || state == i::IDLE); + DCHECK(state == i::EXTERNAL || state == i::IDLE); if (isolate->js_entry_sp() != NULL) return; if (is_idle) { isolate->set_current_vm_state(i::IDLE); @@ -7442,7 +7479,7 @@ void HandleScopeImplementer::FreeThreadResources() { char* HandleScopeImplementer::ArchiveThread(char* storage) { HandleScopeData* current = isolate_->handle_scope_data(); handle_scope_data_ = *current; - OS::MemCopy(storage, this, sizeof(*this)); + MemCopy(storage, this, sizeof(*this)); ResetAfterArchive(); current->Initialize(); @@ -7457,7 +7494,7 @@ int HandleScopeImplementer::ArchiveSpacePerThread() { char* HandleScopeImplementer::RestoreThread(char* storage) { - OS::MemCopy(this, storage, sizeof(*this)); + MemCopy(this, storage, sizeof(*this)); *isolate_->handle_scope_data() = handle_scope_data_; return storage + ArchiveSpacePerThread(); } @@ -7474,7 +7511,7 @@ void HandleScopeImplementer::IterateThis(ObjectVisitor* v) { (last_handle_before_deferred_block_ <= &block[kHandleBlockSize]) && (last_handle_before_deferred_block_ >= block)) { v->VisitPointers(block, last_handle_before_deferred_block_); - ASSERT(!found_block_before_deferred); + DCHECK(!found_block_before_deferred); #ifdef DEBUG found_block_before_deferred = true; #endif @@ -7483,7 +7520,7 @@ void HandleScopeImplementer::IterateThis(ObjectVisitor* v) { } } - ASSERT(last_handle_before_deferred_block_ == NULL || + DCHECK(last_handle_before_deferred_block_ == NULL || found_block_before_deferred); // Iterate over live handles in the last block (if any). @@ -7523,7 +7560,7 @@ DeferredHandles* HandleScopeImplementer::Detach(Object** prev_limit) { Object** block_start = blocks_.last(); Object** block_limit = &block_start[kHandleBlockSize]; // We should not need to check for SealHandleScope here. Assert this. - ASSERT(prev_limit == block_limit || + DCHECK(prev_limit == block_limit || !(block_start <= prev_limit && prev_limit <= block_limit)); if (prev_limit == block_limit) break; deferred->blocks_.Add(blocks_.last()); @@ -7534,17 +7571,17 @@ DeferredHandles* HandleScopeImplementer::Detach(Object** prev_limit) { // HandleScope stack since BeginDeferredScope was called, but in // reverse order. 
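// Illustrative sketch (not part of this diff): the profiler hunks above move
// timekeeping from i::TimeTicks to base::TimeTicks, but the public contract
// is unchanged: start, end, and sample times come back as microsecond
// offsets. Hypothetical usage:
#include <v8.h>
#include <stdint.h>
using namespace v8;

static void ProfileOnce(Isolate* isolate) {
  HandleScope scope(isolate);
  CpuProfiler* profiler = isolate->GetCpuProfiler();
  profiler->SetSamplingInterval(100);  // microseconds; DCHECKed >= 0 above
  Local<String> title = String::NewFromUtf8(isolate, "tick");
  profiler->StartCpuProfiling(title, true /* record_samples */);
  // ... run the code under measurement ...
  const CpuProfile* profile = profiler->StopCpuProfiling(title);
  int64_t duration_us = profile->GetEndTime() - profile->GetStartTime();
  (void)duration_us;
  const_cast<CpuProfile*>(profile)->Delete();
}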
- ASSERT(prev_limit == NULL || !blocks_.is_empty()); + DCHECK(prev_limit == NULL || !blocks_.is_empty()); - ASSERT(!blocks_.is_empty() && prev_limit != NULL); - ASSERT(last_handle_before_deferred_block_ != NULL); + DCHECK(!blocks_.is_empty() && prev_limit != NULL); + DCHECK(last_handle_before_deferred_block_ != NULL); last_handle_before_deferred_block_ = NULL; return deferred; } void HandleScopeImplementer::BeginDeferredScope() { - ASSERT(last_handle_before_deferred_block_ == NULL); + DCHECK(last_handle_before_deferred_block_ == NULL); last_handle_before_deferred_block_ = isolate()->handle_scope_data()->next; } @@ -7562,9 +7599,9 @@ DeferredHandles::~DeferredHandles() { void DeferredHandles::Iterate(ObjectVisitor* v) { - ASSERT(!blocks_.is_empty()); + DCHECK(!blocks_.is_empty()); - ASSERT((first_block_limit_ >= blocks_.first()) && + DCHECK((first_block_limit_ >= blocks_.first()) && (first_block_limit_ <= &(blocks_.first())[kHandleBlockSize])); v->VisitPointers(blocks_.first(), first_block_limit_); diff --git a/deps/v8/src/api.h b/deps/v8/src/api.h index f530e56f94b..c87bd712efe 100644 --- a/deps/v8/src/api.h +++ b/deps/v8/src/api.h @@ -5,13 +5,13 @@ #ifndef V8_API_H_ #define V8_API_H_ -#include "v8.h" +#include "src/v8.h" -#include "../include/v8-testing.h" -#include "contexts.h" -#include "factory.h" -#include "isolate.h" -#include "list-inl.h" +#include "include/v8-testing.h" +#include "src/contexts.h" +#include "src/factory.h" +#include "src/isolate.h" +#include "src/list-inl.h" namespace v8 { @@ -81,13 +81,13 @@ NeanderArray::NeanderArray(v8::internal::Handle<v8::internal::Object> obj) v8::internal::Object* NeanderObject::get(int offset) { - ASSERT(value()->HasFastObjectElements()); + DCHECK(value()->HasFastObjectElements()); return v8::internal::FixedArray::cast(value()->elements())->get(offset); } void NeanderObject::set(int offset, v8::internal::Object* value) { - ASSERT(value_->HasFastObjectElements()); + DCHECK(value_->HasFastObjectElements()); v8::internal::FixedArray::cast(value_->elements())->set(offset, value); } @@ -264,7 +264,7 @@ OPEN_HANDLE_LIST(DECLARE_OPEN_HANDLE) template<class From, class To> static inline Local<To> Convert(v8::internal::Handle<From> obj) { - ASSERT(obj.is_null() || !obj->IsTheHole()); + DCHECK(obj.is_null() || !obj->IsTheHole()); return Local<To>(reinterpret_cast<To*>(obj.location())); } @@ -325,7 +325,7 @@ inline v8::Local<T> ToApiHandle( #define MAKE_TO_LOCAL_TYPED_ARRAY(Type, typeName, TYPE, ctype, size) \ Local<v8::Type##Array> Utils::ToLocal##Type##Array( \ v8::internal::Handle<v8::internal::JSTypedArray> obj) { \ - ASSERT(obj->type() == kExternal##Type##Array); \ + DCHECK(obj->type() == kExternal##Type##Array); \ return Convert<v8::internal::JSTypedArray, v8::Type##Array>(obj); \ } @@ -370,8 +370,7 @@ MAKE_TO_LOCAL(ToLocal, DeclaredAccessorDescriptor, DeclaredAccessorDescriptor) const v8::From* that, bool allow_empty_handle) { \ EXTRA_CHECK(allow_empty_handle || that != NULL); \ EXTRA_CHECK(that == NULL || \ - (*reinterpret_cast<v8::internal::Object**>( \ - const_cast<v8::From*>(that)))->Is##To()); \ + (*reinterpret_cast<v8::internal::Object* const*>(that))->Is##To()); \ return v8::internal::Handle<v8::internal::To>( \ reinterpret_cast<v8::internal::To**>(const_cast<v8::From*>(that))); \ } @@ -535,7 +534,7 @@ class HandleScopeImplementer { Isolate* isolate() const { return isolate_; } void ReturnBlock(Object** block) { - ASSERT(block != NULL); + DCHECK(block != NULL); if (spare_ != NULL) DeleteArray(spare_); spare_ = block; } @@ -551,9 +550,9 @@ 
class HandleScopeImplementer { } void Free() { - ASSERT(blocks_.length() == 0); - ASSERT(entered_contexts_.length() == 0); - ASSERT(saved_contexts_.length() == 0); + DCHECK(blocks_.length() == 0); + DCHECK(entered_contexts_.length() == 0); + DCHECK(saved_contexts_.length() == 0); blocks_.Free(); entered_contexts_.Free(); saved_contexts_.Free(); @@ -561,7 +560,7 @@ class HandleScopeImplementer { DeleteArray(spare_); spare_ = NULL; } - ASSERT(call_depth_ == 0); + DCHECK(call_depth_ == 0); } void BeginDeferredScope(); @@ -664,7 +663,7 @@ void HandleScopeImplementer::DeleteExtensions(internal::Object** prev_limit) { } spare_ = block_start; } - ASSERT((blocks_.is_empty() && prev_limit == NULL) || + DCHECK((blocks_.is_empty() && prev_limit == NULL) || (!blocks_.is_empty() && prev_limit != NULL)); } diff --git a/deps/v8/src/apinatives.js b/deps/v8/src/apinatives.js index 0579caf544b..dda1d24bab2 100644 --- a/deps/v8/src/apinatives.js +++ b/deps/v8/src/apinatives.js @@ -2,6 +2,8 @@ // Use of this source code is governed by a BSD-style license that can be // found in the LICENSE file. +"use strict"; + // This file contains infrastructure used by the API. See // v8natives.js for an explanation of these files are processed and // loaded. @@ -28,10 +30,16 @@ function Instantiate(data, name) { var Constructor = %GetTemplateField(data, kApiConstructorOffset); // Note: Do not directly use a function template as a condition, our // internal ToBoolean doesn't handle that! - var result = typeof Constructor === 'undefined' ? - {} : new (Instantiate(Constructor))(); - ConfigureTemplateInstance(result, data); - result = %ToFastProperties(result); + var result; + if (typeof Constructor === 'undefined') { + result = {}; + ConfigureTemplateInstance(result, data); + } else { + // ConfigureTemplateInstance is implicitly called before calling the API + // constructor in HandleApiCall. + result = new (Instantiate(Constructor))(); + result = %ToFastProperties(result); + } return result; default: throw 'Unknown API tag <' + tag + '>'; @@ -49,9 +57,8 @@ function InstantiateFunction(data, name) { if (!isFunctionCached) { try { var flags = %GetTemplateField(data, kApiFlagOffset); - var has_proto = !(flags & (1 << kRemovePrototypeBit)); var prototype; - if (has_proto) { + if (!(flags & (1 << kRemovePrototypeBit))) { var template = %GetTemplateField(data, kApiPrototypeTemplateOffset); prototype = typeof template === 'undefined' ? {} : Instantiate(template); @@ -61,16 +68,13 @@ function InstantiateFunction(data, name) { // internal ToBoolean doesn't handle that! if (typeof parent !== 'undefined') { var parent_fun = Instantiate(parent); - %SetPrototype(prototype, parent_fun.prototype); + %InternalSetPrototype(prototype, parent_fun.prototype); } } var fun = %CreateApiFunction(data, prototype); if (name) %FunctionSetName(fun, name); var doNotCache = flags & (1 << kDoNotCacheBit); if (!doNotCache) cache[serialNumber] = fun; - if (has_proto && flags & (1 << kReadOnlyPrototypeBit)) { - %FunctionSetReadOnlyPrototype(fun); - } ConfigureTemplateInstance(fun, data); if (doNotCache) return fun; } catch (e) { @@ -95,15 +99,15 @@ function ConfigureTemplateInstance(obj, data) { var prop_data = properties[i + 2]; var attributes = properties[i + 3]; var value = Instantiate(prop_data, name); - %SetProperty(obj, name, value, attributes); - } else if (length == 5) { + %AddPropertyForTemplate(obj, name, value, attributes); + } else if (length == 4 || length == 5) { + // TODO(verwaest): The 5th value used to be access_control. 
Remove once + // the bindings are updated. var name = properties[i + 1]; var getter = properties[i + 2]; var setter = properties[i + 3]; var attribute = properties[i + 4]; - var access_control = properties[i + 5]; - %SetAccessorProperty( - obj, name, getter, setter, attribute, access_control); + %DefineApiAccessorProperty(obj, name, getter, setter, attribute); } else { throw "Bad properties array"; } diff --git a/deps/v8/src/arguments.cc b/deps/v8/src/arguments.cc index 72420eea8fd..d31c479fc1c 100644 --- a/deps/v8/src/arguments.cc +++ b/deps/v8/src/arguments.cc @@ -2,10 +2,10 @@ // Use of this source code is governed by a BSD-style license that can be // found in the LICENSE file. -#include "v8.h" -#include "arguments.h" +#include "src/v8.h" -#include "vm-state-inl.h" +#include "src/arguments.h" +#include "src/vm-state-inl.h" namespace v8 { namespace internal { diff --git a/deps/v8/src/arguments.h b/deps/v8/src/arguments.h index eb75724f2ba..bbd2262fd78 100644 --- a/deps/v8/src/arguments.h +++ b/deps/v8/src/arguments.h @@ -5,7 +5,8 @@ #ifndef V8_ARGUMENTS_H_ #define V8_ARGUMENTS_H_ -#include "allocation.h" +#include "src/allocation.h" +#include "src/isolate.h" namespace v8 { namespace internal { @@ -21,6 +22,9 @@ namespace internal { // Object* Runtime_function(Arguments args) { // ... use args[i] here ... // } +// +// Note that length_ (whose value is in the integer range) is defined +// as intptr_t to provide endian-neutrality on 64-bit archs. class Arguments BASE_EMBEDDED { public: @@ -28,7 +32,7 @@ class Arguments BASE_EMBEDDED { : length_(length), arguments_(arguments) { } Object*& operator[] (int index) { - ASSERT(0 <= index && index < length_); + DCHECK(0 <= index && index < length_); return *(reinterpret_cast<Object**>(reinterpret_cast<intptr_t>(arguments_) - index * kPointerSize)); } @@ -50,12 +54,12 @@ class Arguments BASE_EMBEDDED { } // Get the total number of arguments including the receiver. 
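// Illustrative aside, not part of the upstream diff: why the switch from
// `int length_` to `intptr_t length_` below is endian-related. Generated code
// hands the argument count over as a full machine word; overlaying a 4-byte
// int on that word reads the low half on little-endian but the high half on
// 64-bit big-endian targets. A minimal sketch (standard C++ only):
#include <cassert>
#include <cstdint>
#include <cstring>

void LengthSlotDemo() {
  intptr_t word = 42;  // count stored as a full word, as the runtime does
  int first_half;
  std::memcpy(&first_half, &word, sizeof(first_half));
  // first_half is 42 on little-endian but 0 on 64-bit big-endian; casting
  // the whole word, as the new length() accessor does, works on both.
  assert(static_cast<int>(word) == 42);
  (void)first_half;
}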
- int length() const { return length_; } + int length() const { return static_cast<int>(length_); } Object** arguments() { return arguments_; } private: - int length_; + intptr_t length_; Object** arguments_; }; @@ -172,8 +176,8 @@ class PropertyCallbackArguments values[T::kReturnValueDefaultValueIndex] = isolate->heap()->the_hole_value(); values[T::kReturnValueIndex] = isolate->heap()->the_hole_value(); - ASSERT(values[T::kHolderIndex]->IsHeapObject()); - ASSERT(values[T::kIsolateIndex]->IsSmi()); + DCHECK(values[T::kHolderIndex]->IsHeapObject()); + DCHECK(values[T::kIsolateIndex]->IsSmi()); } /* @@ -244,9 +248,9 @@ class FunctionCallbackArguments values[T::kReturnValueDefaultValueIndex] = isolate->heap()->the_hole_value(); values[T::kReturnValueIndex] = isolate->heap()->the_hole_value(); - ASSERT(values[T::kCalleeIndex]->IsJSFunction()); - ASSERT(values[T::kHolderIndex]->IsHeapObject()); - ASSERT(values[T::kIsolateIndex]->IsSmi()); + DCHECK(values[T::kCalleeIndex]->IsJSFunction()); + DCHECK(values[T::kHolderIndex]->IsHeapObject()); + DCHECK(values[T::kIsolateIndex]->IsSmi()); } /* @@ -279,13 +283,13 @@ double ClobberDoubleRegisters(double x1, double x2, double x3, double x4); #define DECLARE_RUNTIME_FUNCTION(Name) \ Object* Name(int args_length, Object** args_object, Isolate* isolate) -#define RUNTIME_FUNCTION_RETURNS_TYPE(Type, Name) \ -static Type __RT_impl_##Name(Arguments args, Isolate* isolate); \ -Type Name(int args_length, Object** args_object, Isolate* isolate) { \ - CLOBBER_DOUBLE_REGISTERS(); \ - Arguments args(args_length, args_object); \ - return __RT_impl_##Name(args, isolate); \ -} \ +#define RUNTIME_FUNCTION_RETURNS_TYPE(Type, Name) \ +static INLINE(Type __RT_impl_##Name(Arguments args, Isolate* isolate)); \ +Type Name(int args_length, Object** args_object, Isolate* isolate) { \ + CLOBBER_DOUBLE_REGISTERS(); \ + Arguments args(args_length, args_object); \ + return __RT_impl_##Name(args, isolate); \ +} \ static Type __RT_impl_##Name(Arguments args, Isolate* isolate) diff --git a/deps/v8/src/arm/assembler-arm-inl.h b/deps/v8/src/arm/assembler-arm-inl.h index f5612e463cf..1cfe34b241f 100644 --- a/deps/v8/src/arm/assembler-arm-inl.h +++ b/deps/v8/src/arm/assembler-arm-inl.h @@ -37,16 +37,19 @@ #ifndef V8_ARM_ASSEMBLER_ARM_INL_H_ #define V8_ARM_ASSEMBLER_ARM_INL_H_ -#include "arm/assembler-arm.h" +#include "src/arm/assembler-arm.h" -#include "cpu.h" -#include "debug.h" +#include "src/assembler.h" +#include "src/debug.h" namespace v8 { namespace internal { +bool CpuFeatures::SupportsCrankshaft() { return IsSupported(VFP3); } + + int Register::NumAllocatableRegisters() { return kMaxNumAllocatableRegisters; } @@ -68,8 +71,8 @@ int DwVfpRegister::NumAllocatableRegisters() { int DwVfpRegister::ToAllocationIndex(DwVfpRegister reg) { - ASSERT(!reg.is(kDoubleRegZero)); - ASSERT(!reg.is(kScratchDoubleReg)); + DCHECK(!reg.is(kDoubleRegZero)); + DCHECK(!reg.is(kScratchDoubleReg)); if (reg.code() > kDoubleRegZero.code()) { return reg.code() - kNumReservedRegisters; } @@ -78,8 +81,8 @@ int DwVfpRegister::ToAllocationIndex(DwVfpRegister reg) { DwVfpRegister DwVfpRegister::FromAllocationIndex(int index) { - ASSERT(index >= 0 && index < NumAllocatableRegisters()); - ASSERT(kScratchDoubleReg.code() - kDoubleRegZero.code() == + DCHECK(index >= 0 && index < NumAllocatableRegisters()); + DCHECK(kScratchDoubleReg.code() - kDoubleRegZero.code() == kNumReservedRegisters - 1); if (index >= kDoubleRegZero.code()) { return from_code(index + kNumReservedRegisters); @@ -88,7 +91,7 @@ DwVfpRegister 
DwVfpRegister::FromAllocationIndex(int index) {
}


-void RelocInfo::apply(intptr_t delta) {
+void RelocInfo::apply(intptr_t delta, ICacheFlushMode icache_flush_mode) {
  if (RelocInfo::IsInternalReference(rmode_)) {
    // absolute code pointer inside code object moves with the code object.
    int32_t* p = reinterpret_cast<int32_t*>(pc_);
@@ -100,13 +103,13 @@ void RelocInfo::apply(intptr_t delta) {


Address RelocInfo::target_address() {
-  ASSERT(IsCodeTarget(rmode_) || IsRuntimeEntry(rmode_));
+  DCHECK(IsCodeTarget(rmode_) || IsRuntimeEntry(rmode_));
  return Assembler::target_address_at(pc_, host_);
}


Address RelocInfo::target_address_address() {
-  ASSERT(IsCodeTarget(rmode_) || IsRuntimeEntry(rmode_)
+  DCHECK(IsCodeTarget(rmode_) || IsRuntimeEntry(rmode_)
                              || rmode_ == EMBEDDED_OBJECT
                              || rmode_ == EXTERNAL_REFERENCE);
  if (FLAG_enable_ool_constant_pool ||
@@ -115,22 +118,15 @@ Address RelocInfo::target_address_address() {
    // serializer and expects the address to reside within the code object.
    return reinterpret_cast<Address>(pc_);
  } else {
-    ASSERT(Assembler::IsLdrPcImmediateOffset(Memory::int32_at(pc_)));
-    return Assembler::target_pointer_address_at(pc_);
+    DCHECK(Assembler::IsLdrPcImmediateOffset(Memory::int32_at(pc_)));
+    return constant_pool_entry_address();
  }
}


Address RelocInfo::constant_pool_entry_address() {
-  ASSERT(IsInConstantPool());
-  if (FLAG_enable_ool_constant_pool) {
-    ASSERT(Assembler::IsLdrPpImmediateOffset(Memory::int32_at(pc_)));
-    return Assembler::target_constant_pool_address_at(pc_,
-                                                      host_->constant_pool());
-  } else {
-    ASSERT(Assembler::IsLdrPcImmediateOffset(Memory::int32_at(pc_)));
-    return Assembler::target_pointer_address_at(pc_);
-  }
+  DCHECK(IsInConstantPool());
+  return Assembler::constant_pool_entry_address(pc_, host_->constant_pool());
}


@@ -139,10 +135,13 @@ int RelocInfo::target_address_size() {
}


-void RelocInfo::set_target_address(Address target, WriteBarrierMode mode) {
-  ASSERT(IsCodeTarget(rmode_) || IsRuntimeEntry(rmode_));
-  Assembler::set_target_address_at(pc_, host_, target);
-  if (mode == UPDATE_WRITE_BARRIER && host() != NULL && IsCodeTarget(rmode_)) {
+void RelocInfo::set_target_address(Address target,
+                                   WriteBarrierMode write_barrier_mode,
+                                   ICacheFlushMode icache_flush_mode) {
+  DCHECK(IsCodeTarget(rmode_) || IsRuntimeEntry(rmode_));
+  Assembler::set_target_address_at(pc_, host_, target, icache_flush_mode);
+  if (write_barrier_mode == UPDATE_WRITE_BARRIER &&
+      host() != NULL && IsCodeTarget(rmode_)) {
    Object* target_code = Code::GetCodeFromTargetAddress(target);
    host()->GetHeap()->incremental_marking()->RecordWriteIntoCode(
        host(), this, HeapObject::cast(target_code));
@@ -151,24 +150,26 @@ void RelocInfo::set_target_address(Address target, WriteBarrierMode mode) {


Object* RelocInfo::target_object() {
-  ASSERT(IsCodeTarget(rmode_) || rmode_ == EMBEDDED_OBJECT);
+  DCHECK(IsCodeTarget(rmode_) || rmode_ == EMBEDDED_OBJECT);
  return reinterpret_cast<Object*>(Assembler::target_address_at(pc_, host_));
}


Handle<Object> RelocInfo::target_object_handle(Assembler* origin) {
-  ASSERT(IsCodeTarget(rmode_) || rmode_ == EMBEDDED_OBJECT);
+  DCHECK(IsCodeTarget(rmode_) || rmode_ == EMBEDDED_OBJECT);
  return Handle<Object>(reinterpret_cast<Object**>(
      Assembler::target_address_at(pc_, host_)));
}


-void RelocInfo::set_target_object(Object* target, WriteBarrierMode mode) {
-  ASSERT(IsCodeTarget(rmode_) || rmode_ == EMBEDDED_OBJECT);
-  ASSERT(!target->IsConsString());
+void RelocInfo::set_target_object(Object* target,
+                                  WriteBarrierMode write_barrier_mode,
+
ICacheFlushMode icache_flush_mode) { + DCHECK(IsCodeTarget(rmode_) || rmode_ == EMBEDDED_OBJECT); Assembler::set_target_address_at(pc_, host_, - reinterpret_cast<Address>(target)); - if (mode == UPDATE_WRITE_BARRIER && + reinterpret_cast<Address>(target), + icache_flush_mode); + if (write_barrier_mode == UPDATE_WRITE_BARRIER && host() != NULL && target->IsHeapObject()) { host()->GetHeap()->incremental_marking()->RecordWrite( @@ -178,42 +179,46 @@ void RelocInfo::set_target_object(Object* target, WriteBarrierMode mode) { Address RelocInfo::target_reference() { - ASSERT(rmode_ == EXTERNAL_REFERENCE); + DCHECK(rmode_ == EXTERNAL_REFERENCE); return Assembler::target_address_at(pc_, host_); } Address RelocInfo::target_runtime_entry(Assembler* origin) { - ASSERT(IsRuntimeEntry(rmode_)); + DCHECK(IsRuntimeEntry(rmode_)); return target_address(); } void RelocInfo::set_target_runtime_entry(Address target, - WriteBarrierMode mode) { - ASSERT(IsRuntimeEntry(rmode_)); - if (target_address() != target) set_target_address(target, mode); + WriteBarrierMode write_barrier_mode, + ICacheFlushMode icache_flush_mode) { + DCHECK(IsRuntimeEntry(rmode_)); + if (target_address() != target) + set_target_address(target, write_barrier_mode, icache_flush_mode); } Handle<Cell> RelocInfo::target_cell_handle() { - ASSERT(rmode_ == RelocInfo::CELL); + DCHECK(rmode_ == RelocInfo::CELL); Address address = Memory::Address_at(pc_); return Handle<Cell>(reinterpret_cast<Cell**>(address)); } Cell* RelocInfo::target_cell() { - ASSERT(rmode_ == RelocInfo::CELL); + DCHECK(rmode_ == RelocInfo::CELL); return Cell::FromValueAddress(Memory::Address_at(pc_)); } -void RelocInfo::set_target_cell(Cell* cell, WriteBarrierMode mode) { - ASSERT(rmode_ == RelocInfo::CELL); +void RelocInfo::set_target_cell(Cell* cell, + WriteBarrierMode write_barrier_mode, + ICacheFlushMode icache_flush_mode) { + DCHECK(rmode_ == RelocInfo::CELL); Address address = cell->address() + Cell::kValueOffset; Memory::Address_at(pc_) = address; - if (mode == UPDATE_WRITE_BARRIER && host() != NULL) { + if (write_barrier_mode == UPDATE_WRITE_BARRIER && host() != NULL) { // TODO(1550) We are passing NULL as a slot because cell can never be on // evacuation candidate. host()->GetHeap()->incremental_marking()->RecordWrite( @@ -232,15 +237,16 @@ Handle<Object> RelocInfo::code_age_stub_handle(Assembler* origin) { Code* RelocInfo::code_age_stub() { - ASSERT(rmode_ == RelocInfo::CODE_AGE_SEQUENCE); + DCHECK(rmode_ == RelocInfo::CODE_AGE_SEQUENCE); return Code::GetCodeFromTargetAddress( Memory::Address_at(pc_ + (kNoCodeAgeSequenceLength - Assembler::kInstrSize))); } -void RelocInfo::set_code_age_stub(Code* stub) { - ASSERT(rmode_ == RelocInfo::CODE_AGE_SEQUENCE); +void RelocInfo::set_code_age_stub(Code* stub, + ICacheFlushMode icache_flush_mode) { + DCHECK(rmode_ == RelocInfo::CODE_AGE_SEQUENCE); Memory::Address_at(pc_ + (kNoCodeAgeSequenceLength - Assembler::kInstrSize)) = stub->instruction_start(); @@ -250,14 +256,14 @@ void RelocInfo::set_code_age_stub(Code* stub) { Address RelocInfo::call_address() { // The 2 instructions offset assumes patched debug break slot or return // sequence. 
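// Illustrative aside, not part of the upstream diff and not V8's real API:
// the setters above now thread an explicit WriteBarrierMode through every
// patching helper, so callers that will visit the slot anyway can skip the
// GC barrier. A hedged sketch of that pattern, with RecordWrite standing in
// for the collector's hook:
inline void RecordWrite(void** slot) { (void)slot; }  // stand-in GC hook

enum class BarrierMode { SKIP_WRITE_BARRIER, UPDATE_WRITE_BARRIER };

void StoreWithMode(void** slot, void* value, BarrierMode mode) {
  *slot = value;  // the raw store performed by the patching helper
  if (mode == BarrierMode::UPDATE_WRITE_BARRIER) {
    RecordWrite(slot);  // let incremental marking see the new pointer
  }
}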
- ASSERT((IsJSReturn(rmode()) && IsPatchedReturnSequence()) || + DCHECK((IsJSReturn(rmode()) && IsPatchedReturnSequence()) || (IsDebugBreakSlot(rmode()) && IsPatchedDebugBreakSlotSequence())); return Memory::Address_at(pc_ + 2 * Assembler::kInstrSize); } void RelocInfo::set_call_address(Address target) { - ASSERT((IsJSReturn(rmode()) && IsPatchedReturnSequence()) || + DCHECK((IsJSReturn(rmode()) && IsPatchedReturnSequence()) || (IsDebugBreakSlot(rmode()) && IsPatchedDebugBreakSlotSequence())); Memory::Address_at(pc_ + 2 * Assembler::kInstrSize) = target; if (host() != NULL) { @@ -279,14 +285,14 @@ void RelocInfo::set_call_object(Object* target) { Object** RelocInfo::call_object_address() { - ASSERT((IsJSReturn(rmode()) && IsPatchedReturnSequence()) || + DCHECK((IsJSReturn(rmode()) && IsPatchedReturnSequence()) || (IsDebugBreakSlot(rmode()) && IsPatchedDebugBreakSlotSequence())); return reinterpret_cast<Object**>(pc_ + 2 * Assembler::kInstrSize); } void RelocInfo::WipeOut() { - ASSERT(IsEmbeddedObject(rmode_) || + DCHECK(IsEmbeddedObject(rmode_) || IsCodeTarget(rmode_) || IsRuntimeEntry(rmode_) || IsExternalReference(rmode_)); @@ -300,8 +306,8 @@ bool RelocInfo::IsPatchedReturnSequence() { // A patched return sequence is: // ldr ip, [pc, #0] // blx ip - return ((current_instr & kLdrPCMask) == kLdrPCPattern) - && ((next_instr & kBlxRegMask) == kBlxRegPattern); + return Assembler::IsLdrPcImmediateOffset(current_instr) && + Assembler::IsBlxReg(next_instr); } @@ -414,42 +420,6 @@ void Assembler::emit(Instr x) { } -Address Assembler::target_pointer_address_at(Address pc) { - Instr instr = Memory::int32_at(pc); - return pc + GetLdrRegisterImmediateOffset(instr) + kPcLoadDelta; -} - - -Address Assembler::target_constant_pool_address_at( - Address pc, ConstantPoolArray* constant_pool) { - ASSERT(constant_pool != NULL); - ASSERT(IsLdrPpImmediateOffset(Memory::int32_at(pc))); - Instr instr = Memory::int32_at(pc); - return reinterpret_cast<Address>(constant_pool) + - GetLdrRegisterImmediateOffset(instr); -} - - -Address Assembler::target_address_at(Address pc, - ConstantPoolArray* constant_pool) { - if (IsMovW(Memory::int32_at(pc))) { - ASSERT(IsMovT(Memory::int32_at(pc + kInstrSize))); - Instruction* instr = Instruction::At(pc); - Instruction* next_instr = Instruction::At(pc + kInstrSize); - return reinterpret_cast<Address>( - (next_instr->ImmedMovwMovtValue() << 16) | - instr->ImmedMovwMovtValue()); - } else if (FLAG_enable_ool_constant_pool) { - ASSERT(IsLdrPpImmediateOffset(Memory::int32_at(pc))); - return Memory::Address_at( - target_constant_pool_address_at(pc, constant_pool)); - } else { - ASSERT(IsLdrPcImmediateOffset(Memory::int32_at(pc))); - return Memory::Address_at(target_pointer_address_at(pc)); - } -} - - Address Assembler::target_address_from_return_address(Address pc) { // Returns the address of the call target from the return address that will // be returned to after a call. @@ -458,8 +428,15 @@ Address Assembler::target_address_from_return_address(Address pc) { // movt ip, #... @ call address high 16 // blx ip // @ return address - // Or pre-V7 or cases that need frequent patching: - // ldr ip, [pc, #...] @ call address + // Or pre-V7 or cases that need frequent patching, the address is in the + // constant pool. It could be a small constant pool load: + // ldr ip, [pc / pp, #...] @ call address + // blx ip + // @ return address + // Or an extended constant pool load: + // movw ip, #... + // movt ip, #... 
+  //  ldr   ip, [pc, ip]  @ call address
  //  blx   ip
  //                      @ return address
  Address candidate = pc - 2 * Assembler::kInstrSize;
  Instr candidate_instr(Memory::int32_at(candidate));
  if (IsLdrPcImmediateOffset(candidate_instr) |
      IsLdrPpImmediateOffset(candidate_instr)) {
    return candidate;
+  } else if (IsLdrPpRegOffset(candidate_instr)) {
+    candidate = pc - 4 * Assembler::kInstrSize;
+    DCHECK(IsMovW(Memory::int32_at(candidate)) &&
+           IsMovT(Memory::int32_at(candidate + Assembler::kInstrSize)));
+    return candidate;
+  } else {
+    candidate = pc - 3 * Assembler::kInstrSize;
+    DCHECK(IsMovW(Memory::int32_at(candidate)) &&
+           IsMovT(Memory::int32_at(candidate + kInstrSize)));
+    return candidate;
  }
-  candidate = pc - 3 * Assembler::kInstrSize;
-  ASSERT(IsMovW(Memory::int32_at(candidate)) &&
-         IsMovT(Memory::int32_at(candidate + kInstrSize)));
-  return candidate;
+}
+
+
+Address Assembler::break_address_from_return_address(Address pc) {
+  return pc - Assembler::kPatchDebugBreakSlotReturnOffset;
}


Address Assembler::return_address_from_call_start(Address pc) {
  if (IsLdrPcImmediateOffset(Memory::int32_at(pc)) |
      IsLdrPpImmediateOffset(Memory::int32_at(pc))) {
+    // Load from constant pool, small section.
    return pc + kInstrSize * 2;
  } else {
-    ASSERT(IsMovW(Memory::int32_at(pc)));
-    ASSERT(IsMovT(Memory::int32_at(pc + kInstrSize)));
-    return pc + kInstrSize * 3;
+    DCHECK(IsMovW(Memory::int32_at(pc)));
+    DCHECK(IsMovT(Memory::int32_at(pc + kInstrSize)));
+    if (IsLdrPpRegOffset(Memory::int32_at(pc + kInstrSize))) {
+      // Load from constant pool, extended section.
+      return pc + kInstrSize * 4;
+    } else {
+      // A movw / movt load immediate.
+      return pc + kInstrSize * 3;
+    }
  }
}


@@ -497,45 +492,88 @@ void Assembler::deserialization_set_special_target_at(
}


-static Instr EncodeMovwImmediate(uint32_t immediate) {
-  ASSERT(immediate < 0x10000);
-  return ((immediate & 0xf000) << 4) | (immediate & 0xfff);
+bool Assembler::is_constant_pool_load(Address pc) {
+  return !Assembler::IsMovW(Memory::int32_at(pc)) ||
+         (FLAG_enable_ool_constant_pool &&
+          Assembler::IsLdrPpRegOffset(
+              Memory::int32_at(pc + 2 * Assembler::kInstrSize)));
+}
+
+
+Address Assembler::constant_pool_entry_address(
+    Address pc, ConstantPoolArray* constant_pool) {
+  if (FLAG_enable_ool_constant_pool) {
+    DCHECK(constant_pool != NULL);
+    int cp_offset;
+    if (IsMovW(Memory::int32_at(pc))) {
+      DCHECK(IsMovT(Memory::int32_at(pc + kInstrSize)) &&
+             IsLdrPpRegOffset(Memory::int32_at(pc + 2 * kInstrSize)));
+      // This is an extended constant pool lookup.
+      Instruction* movw_instr = Instruction::At(pc);
+      Instruction* movt_instr = Instruction::At(pc + kInstrSize);
+      cp_offset = (movt_instr->ImmedMovwMovtValue() << 16) |
+                  movw_instr->ImmedMovwMovtValue();
+    } else {
+      // This is a small constant pool lookup.
+      DCHECK(Assembler::IsLdrPpImmediateOffset(Memory::int32_at(pc)));
+      cp_offset = GetLdrRegisterImmediateOffset(Memory::int32_at(pc));
+    }
+    return reinterpret_cast<Address>(constant_pool) + cp_offset;
+  } else {
+    DCHECK(Assembler::IsLdrPcImmediateOffset(Memory::int32_at(pc)));
+    Instr instr = Memory::int32_at(pc);
+    return pc + GetLdrRegisterImmediateOffset(instr) + kPcLoadDelta;
+  }
+}
+
+
+Address Assembler::target_address_at(Address pc,
+                                     ConstantPoolArray* constant_pool) {
+  if (is_constant_pool_load(pc)) {
+    // This is a constant pool lookup. Return the value in the constant pool.
+    return Memory::Address_at(constant_pool_entry_address(pc, constant_pool));
+  } else {
+    // This is a movw/movt immediate load. Return the immediate.
+    DCHECK(IsMovW(Memory::int32_at(pc)) &&
+           IsMovT(Memory::int32_at(pc + kInstrSize)));
+    Instruction* movw_instr = Instruction::At(pc);
+    Instruction* movt_instr = Instruction::At(pc + kInstrSize);
+    return reinterpret_cast<Address>(
+        (movt_instr->ImmedMovwMovtValue() << 16) |
+         movw_instr->ImmedMovwMovtValue());
+  }
}


void Assembler::set_target_address_at(Address pc,
                                      ConstantPoolArray* constant_pool,
-                                      Address target) {
-  if (IsMovW(Memory::int32_at(pc))) {
-    ASSERT(IsMovT(Memory::int32_at(pc + kInstrSize)));
-    uint32_t* instr_ptr = reinterpret_cast<uint32_t*>(pc);
-    uint32_t immediate = reinterpret_cast<uint32_t>(target);
-    uint32_t intermediate = instr_ptr[0];
-    intermediate &= ~EncodeMovwImmediate(0xFFFF);
-    intermediate |= EncodeMovwImmediate(immediate & 0xFFFF);
-    instr_ptr[0] = intermediate;
-    intermediate = instr_ptr[1];
-    intermediate &= ~EncodeMovwImmediate(0xFFFF);
-    intermediate |= EncodeMovwImmediate(immediate >> 16);
-    instr_ptr[1] = intermediate;
-    ASSERT(IsMovW(Memory::int32_at(pc)));
-    ASSERT(IsMovT(Memory::int32_at(pc + kInstrSize)));
-    CPU::FlushICache(pc, 2 * kInstrSize);
-  } else if (FLAG_enable_ool_constant_pool) {
-    ASSERT(IsLdrPpImmediateOffset(Memory::int32_at(pc)));
-    Memory::Address_at(
-        target_constant_pool_address_at(pc, constant_pool)) = target;
-  } else {
-    ASSERT(IsLdrPcImmediateOffset(Memory::int32_at(pc)));
-    Memory::Address_at(target_pointer_address_at(pc)) = target;
+                                      Address target,
+                                      ICacheFlushMode icache_flush_mode) {
+  if (is_constant_pool_load(pc)) {
+    // This is a constant pool lookup. Update the entry in the constant pool.
+    Memory::Address_at(constant_pool_entry_address(pc, constant_pool)) = target;
    // Intuitively, we would think it is necessary to always flush the
    // instruction cache after patching a target address in the code as follows:
-    //   CPU::FlushICache(pc, sizeof(target));
+    //   CpuFeatures::FlushICache(pc, sizeof(target));
    // However, on ARM, no instruction is actually patched in the case
    // of embedded constants of the form:
-    // ldr   ip, [pc, #...]
+    // ldr   ip, [pp, #...]
    // since the instruction accessing this address in the constant pool remains
    // unchanged.
+  } else {
+    // This is a movw/movt immediate load. Patch the immediate embedded in the
+    // instructions.
+    DCHECK(IsMovW(Memory::int32_at(pc)));
+    DCHECK(IsMovT(Memory::int32_at(pc + kInstrSize)));
+    uint32_t* instr_ptr = reinterpret_cast<uint32_t*>(pc);
+    uint32_t immediate = reinterpret_cast<uint32_t>(target);
+    instr_ptr[0] = PatchMovwImmediate(instr_ptr[0], immediate & 0xFFFF);
+    instr_ptr[1] = PatchMovwImmediate(instr_ptr[1], immediate >> 16);
+    DCHECK(IsMovW(Memory::int32_at(pc)));
+    DCHECK(IsMovT(Memory::int32_at(pc + kInstrSize)));
+    if (icache_flush_mode != SKIP_ICACHE_FLUSH) {
+      CpuFeatures::FlushICache(pc, 2 * kInstrSize);
+    }
  }
}

diff --git a/deps/v8/src/arm/assembler-arm.cc b/deps/v8/src/arm/assembler-arm.cc
index 74fd61979b0..1a2f5d6e5dd 100644
--- a/deps/v8/src/arm/assembler-arm.cc
+++ b/deps/v8/src/arm/assembler-arm.cc
@@ -34,32 +34,18 @@
// modified significantly by Google Inc.
// Copyright 2012 the V8 project authors. All rights reserved.
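// Illustrative aside, not part of the upstream diff: target_address_at above
// reassembles a 32-bit value from a movw/movt pair. A simplified sketch of
// that decode step (the real code first extracts the 16-bit halves from the
// two instruction encodings):
#include <cstdint>

uint32_t DecodeMovwMovt(uint16_t movw_imm16, uint16_t movt_imm16) {
  // movw materializes the low half (zero-extended); movt replaces the high
  // half, so the original constant is just the two halves recombined.
  return (static_cast<uint32_t>(movt_imm16) << 16) | movw_imm16;
}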
-#include "v8.h" +#include "src/v8.h" #if V8_TARGET_ARCH_ARM -#include "arm/assembler-arm-inl.h" -#include "macro-assembler.h" -#include "serialize.h" +#include "src/arm/assembler-arm-inl.h" +#include "src/base/cpu.h" +#include "src/macro-assembler.h" +#include "src/serialize.h" namespace v8 { namespace internal { -#ifdef DEBUG -bool CpuFeatures::initialized_ = false; -#endif -unsigned CpuFeatures::supported_ = 0; -unsigned CpuFeatures::found_by_runtime_probing_only_ = 0; -unsigned CpuFeatures::cross_compile_ = 0; -unsigned CpuFeatures::cache_line_size_ = 64; - - -ExternalReference ExternalReference::cpu_features() { - ASSERT(CpuFeatures::initialized_); - return ExternalReference(&CpuFeatures::supported_); -} - - // Get the CPU features enabled by the build. For cross compilation the // preprocessor symbols CAN_USE_ARMV7_INSTRUCTIONS and CAN_USE_VFP3_INSTRUCTIONS // can be defined to enable ARMv7 and VFPv3 instructions when building the @@ -67,19 +53,16 @@ ExternalReference ExternalReference::cpu_features() { static unsigned CpuFeaturesImpliedByCompiler() { unsigned answer = 0; #ifdef CAN_USE_ARMV7_INSTRUCTIONS - if (FLAG_enable_armv7) { - answer |= 1u << ARMv7; - } + if (FLAG_enable_armv7) answer |= 1u << ARMv7; #endif // CAN_USE_ARMV7_INSTRUCTIONS #ifdef CAN_USE_VFP3_INSTRUCTIONS - if (FLAG_enable_vfp3) { - answer |= 1u << VFP3 | 1u << ARMv7; - } + if (FLAG_enable_vfp3) answer |= 1u << VFP3 | 1u << ARMv7; #endif // CAN_USE_VFP3_INSTRUCTIONS #ifdef CAN_USE_VFP32DREGS - if (FLAG_enable_32dregs) { - answer |= 1u << VFP32DREGS; - } + if (FLAG_enable_32dregs) answer |= 1u << VFP32DREGS; +#endif // CAN_USE_VFP32DREGS +#ifdef CAN_USE_NEON + if (FLAG_enable_neon) answer |= 1u << NEON; #endif // CAN_USE_VFP32DREGS if ((answer & (1u << ARMv7)) && FLAG_enable_unaligned_accesses) { answer |= 1u << UNALIGNED_ACCESSES; @@ -89,177 +72,112 @@ static unsigned CpuFeaturesImpliedByCompiler() { } -const char* DwVfpRegister::AllocationIndexToString(int index) { - ASSERT(index >= 0 && index < NumAllocatableRegisters()); - ASSERT(kScratchDoubleReg.code() - kDoubleRegZero.code() == - kNumReservedRegisters - 1); - if (index >= kDoubleRegZero.code()) - index += kNumReservedRegisters; - - return VFPRegisters::Name(index, true); -} - +void CpuFeatures::ProbeImpl(bool cross_compile) { + supported_ |= CpuFeaturesImpliedByCompiler(); + cache_line_size_ = 64; -void CpuFeatures::Probe(bool serializer_enabled) { - uint64_t standard_features = static_cast<unsigned>( - OS::CpuFeaturesImpliedByPlatform()) | CpuFeaturesImpliedByCompiler(); - ASSERT(supported_ == 0 || - (supported_ & standard_features) == standard_features); -#ifdef DEBUG - initialized_ = true; -#endif - - // Get the features implied by the OS and the compiler settings. This is the - // minimal set of features which is also alowed for generated code in the - // snapshot. - supported_ |= standard_features; - - if (serializer_enabled) { - // No probing for features if we might serialize (generate snapshot). - return; - } + // Only use statically determined features for cross compile (snapshot). + if (cross_compile) return; #ifndef __arm__ - // For the simulator=arm build, use VFP when FLAG_enable_vfp3 is - // enabled. VFPv3 implies ARMv7, see ARM DDI 0406B, page A1-6. 
- if (FLAG_enable_vfp3) { - supported_ |= - static_cast<uint64_t>(1) << VFP3 | - static_cast<uint64_t>(1) << ARMv7; - } - if (FLAG_enable_neon) { - supported_ |= 1u << NEON; - } - // For the simulator=arm build, use ARMv7 when FLAG_enable_armv7 is enabled + // For the simulator build, use whatever the flags specify. if (FLAG_enable_armv7) { - supported_ |= static_cast<uint64_t>(1) << ARMv7; - } - - if (FLAG_enable_sudiv) { - supported_ |= static_cast<uint64_t>(1) << SUDIV; - } - - if (FLAG_enable_movw_movt) { - supported_ |= static_cast<uint64_t>(1) << MOVW_MOVT_IMMEDIATE_LOADS; - } - - if (FLAG_enable_32dregs) { - supported_ |= static_cast<uint64_t>(1) << VFP32DREGS; - } - - if (FLAG_enable_unaligned_accesses) { - supported_ |= static_cast<uint64_t>(1) << UNALIGNED_ACCESSES; + supported_ |= 1u << ARMv7; + if (FLAG_enable_vfp3) supported_ |= 1u << VFP3; + if (FLAG_enable_neon) supported_ |= 1u << NEON | 1u << VFP32DREGS; + if (FLAG_enable_sudiv) supported_ |= 1u << SUDIV; + if (FLAG_enable_movw_movt) supported_ |= 1u << MOVW_MOVT_IMMEDIATE_LOADS; + if (FLAG_enable_32dregs) supported_ |= 1u << VFP32DREGS; } + if (FLAG_enable_mls) supported_ |= 1u << MLS; + if (FLAG_enable_unaligned_accesses) supported_ |= 1u << UNALIGNED_ACCESSES; #else // __arm__ - // Probe for additional features not already known to be available. - CPU cpu; - if (!IsSupported(VFP3) && FLAG_enable_vfp3 && cpu.has_vfp3()) { + // Probe for additional features at runtime. + base::CPU cpu; + if (FLAG_enable_vfp3 && cpu.has_vfp3()) { // This implementation also sets the VFP flags if runtime // detection of VFP returns true. VFPv3 implies ARMv7, see ARM DDI // 0406B, page A1-6. - found_by_runtime_probing_only_ |= - static_cast<uint64_t>(1) << VFP3 | - static_cast<uint64_t>(1) << ARMv7; + supported_ |= 1u << VFP3 | 1u << ARMv7; } - if (!IsSupported(NEON) && FLAG_enable_neon && cpu.has_neon()) { - found_by_runtime_probing_only_ |= 1u << NEON; - } - - if (!IsSupported(ARMv7) && FLAG_enable_armv7 && cpu.architecture() >= 7) { - found_by_runtime_probing_only_ |= static_cast<uint64_t>(1) << ARMv7; - } - - if (!IsSupported(SUDIV) && FLAG_enable_sudiv && cpu.has_idiva()) { - found_by_runtime_probing_only_ |= static_cast<uint64_t>(1) << SUDIV; - } + if (FLAG_enable_neon && cpu.has_neon()) supported_ |= 1u << NEON; + if (FLAG_enable_sudiv && cpu.has_idiva()) supported_ |= 1u << SUDIV; + if (FLAG_enable_mls && cpu.has_thumb2()) supported_ |= 1u << MLS; - if (!IsSupported(UNALIGNED_ACCESSES) && FLAG_enable_unaligned_accesses - && cpu.architecture() >= 7) { - found_by_runtime_probing_only_ |= - static_cast<uint64_t>(1) << UNALIGNED_ACCESSES; - } - - // Use movw/movt for QUALCOMM ARMv7 cores. - if (cpu.implementer() == CPU::QUALCOMM && - cpu.architecture() >= 7 && - FLAG_enable_movw_movt) { - found_by_runtime_probing_only_ |= - static_cast<uint64_t>(1) << MOVW_MOVT_IMMEDIATE_LOADS; + if (cpu.architecture() >= 7) { + if (FLAG_enable_armv7) supported_ |= 1u << ARMv7; + if (FLAG_enable_unaligned_accesses) supported_ |= 1u << UNALIGNED_ACCESSES; + // Use movw/movt for QUALCOMM ARMv7 cores. + if (FLAG_enable_movw_movt && cpu.implementer() == base::CPU::QUALCOMM) { + supported_ |= 1u << MOVW_MOVT_IMMEDIATE_LOADS; + } } // ARM Cortex-A9 and Cortex-A5 have 32 byte cachelines. 
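// Illustrative aside, not part of the upstream diff: the probe narrows
// cache_line_size_ to 32 bytes for cores known to use 32-byte lines. A
// hedged sketch of that dispatch; the MIDR-style ID values here are
// assumptions for illustration, not read from base::CPU:
#include <cstddef>

size_t CacheLineSizeFor(int implementer, int part) {
  const int kArm = 0x41, kCortexA5 = 0xc05, kCortexA9 = 0xc09;  // assumed IDs
  if (implementer == kArm && (part == kCortexA5 || part == kCortexA9)) {
    return 32;  // conservative: flush loops must step by the smaller line
  }
  return 64;  // default assumed by the probe
}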
- if (cpu.implementer() == CPU::ARM && - (cpu.part() == CPU::ARM_CORTEX_A5 || - cpu.part() == CPU::ARM_CORTEX_A9)) { + if (cpu.implementer() == base::CPU::ARM && + (cpu.part() == base::CPU::ARM_CORTEX_A5 || + cpu.part() == base::CPU::ARM_CORTEX_A9)) { cache_line_size_ = 32; } - if (!IsSupported(VFP32DREGS) && FLAG_enable_32dregs && cpu.has_vfp3_d32()) { - found_by_runtime_probing_only_ |= static_cast<uint64_t>(1) << VFP32DREGS; - } - - supported_ |= found_by_runtime_probing_only_; + if (FLAG_enable_32dregs && cpu.has_vfp3_d32()) supported_ |= 1u << VFP32DREGS; #endif - // Assert that VFP3 implies ARMv7. - ASSERT(!IsSupported(VFP3) || IsSupported(ARMv7)); + DCHECK(!IsSupported(VFP3) || IsSupported(ARMv7)); } void CpuFeatures::PrintTarget() { const char* arm_arch = NULL; - const char* arm_test = ""; + const char* arm_target_type = ""; + const char* arm_no_probe = ""; const char* arm_fpu = ""; const char* arm_thumb = ""; const char* arm_float_abi = NULL; +#if !defined __arm__ + arm_target_type = " simulator"; +#endif + +#if defined ARM_TEST_NO_FEATURE_PROBE + arm_no_probe = " noprobe"; +#endif + #if defined CAN_USE_ARMV7_INSTRUCTIONS arm_arch = "arm v7"; #else arm_arch = "arm v6"; #endif -#ifdef __arm__ - -# ifdef ARM_TEST - arm_test = " test"; -# endif -# if defined __ARM_NEON__ +#if defined CAN_USE_NEON arm_fpu = " neon"; -# elif defined CAN_USE_VFP3_INSTRUCTIONS - arm_fpu = " vfp3"; -# else - arm_fpu = " vfp2"; -# endif -# if (defined __thumb__) || (defined __thumb2__) - arm_thumb = " thumb"; -# endif - arm_float_abi = OS::ArmUsingHardFloat() ? "hard" : "softfp"; - -#else // __arm__ - - arm_test = " simulator"; -# if defined CAN_USE_VFP3_INSTRUCTIONS +#elif defined CAN_USE_VFP3_INSTRUCTIONS # if defined CAN_USE_VFP32DREGS arm_fpu = " vfp3"; # else arm_fpu = " vfp3-d16"; # endif -# else +#else arm_fpu = " vfp2"; -# endif -# if USE_EABI_HARDFLOAT == 1 +#endif + +#ifdef __arm__ + arm_float_abi = base::OS::ArmUsingHardFloat() ? 
"hard" : "softfp"; +#elif USE_EABI_HARDFLOAT arm_float_abi = "hard"; -# else +#else arm_float_abi = "softfp"; -# endif +#endif -#endif // __arm__ +#if defined __arm__ && (defined __thumb__) || (defined __thumb2__) + arm_thumb = " thumb"; +#endif - printf("target%s %s%s%s %s\n", - arm_test, arm_arch, arm_fpu, arm_thumb, arm_float_abi); + printf("target%s%s %s%s%s %s\n", + arm_target_type, arm_no_probe, arm_arch, arm_fpu, arm_thumb, + arm_float_abi); } @@ -275,7 +193,7 @@ void CpuFeatures::PrintFeatures() { CpuFeatures::IsSupported(UNALIGNED_ACCESSES), CpuFeatures::IsSupported(MOVW_MOVT_IMMEDIATE_LOADS)); #ifdef __arm__ - bool eabi_hardfloat = OS::ArmUsingHardFloat(); + bool eabi_hardfloat = base::OS::ArmUsingHardFloat(); #elif USE_EABI_HARDFLOAT bool eabi_hardfloat = true; #else @@ -285,6 +203,18 @@ void CpuFeatures::PrintFeatures() { } +// ----------------------------------------------------------------------------- +// Implementation of DwVfpRegister + +const char* DwVfpRegister::AllocationIndexToString(int index) { + DCHECK(index >= 0 && index < NumAllocatableRegisters()); + DCHECK(kScratchDoubleReg.code() - kDoubleRegZero.code() == + kNumReservedRegisters - 1); + if (index >= kDoubleRegZero.code()) index += kNumReservedRegisters; + return VFPRegisters::Name(index, true); +} + + // ----------------------------------------------------------------------------- // Implementation of RelocInfo @@ -301,11 +231,7 @@ bool RelocInfo::IsCodedSpecially() { bool RelocInfo::IsInConstantPool() { - if (FLAG_enable_ool_constant_pool) { - return Assembler::IsLdrPpImmediateOffset(Memory::int32_at(pc_)); - } else { - return Assembler::IsLdrPcImmediateOffset(Memory::int32_at(pc_)); - } + return Assembler::is_constant_pool_load(pc_); } @@ -318,7 +244,7 @@ void RelocInfo::PatchCode(byte* instructions, int instruction_count) { } // Indicate that code has changed. - CPU::FlushICache(pc_, instruction_count * Assembler::kInstrSize); + CpuFeatures::FlushICache(pc_, instruction_count * Assembler::kInstrSize); } @@ -340,7 +266,7 @@ Operand::Operand(Handle<Object> handle) { // Verify all Objects referred by code are NOT in new space. 
Object* obj = *handle; if (obj->IsHeapObject()) { - ASSERT(!HeapObject::cast(obj)->GetHeap()->InNewSpace(obj)); + DCHECK(!HeapObject::cast(obj)->GetHeap()->InNewSpace(obj)); imm32_ = reinterpret_cast<intptr_t>(handle.location()); rmode_ = RelocInfo::EMBEDDED_OBJECT; } else { @@ -352,7 +278,7 @@ Operand::Operand(Handle<Object> handle) { Operand::Operand(Register rm, ShiftOp shift_op, int shift_imm) { - ASSERT(is_uint5(shift_imm)); + DCHECK(is_uint5(shift_imm)); rm_ = rm; rs_ = no_reg; @@ -365,7 +291,7 @@ Operand::Operand(Register rm, ShiftOp shift_op, int shift_imm) { shift_op = LSL; } else if (shift_op == RRX) { // encoded as ROR with shift_imm == 0 - ASSERT(shift_imm == 0); + DCHECK(shift_imm == 0); shift_op_ = ROR; shift_imm_ = 0; } @@ -373,7 +299,7 @@ Operand::Operand(Register rm, ShiftOp shift_op, int shift_imm) { Operand::Operand(Register rm, ShiftOp shift_op, Register rs) { - ASSERT(shift_op != RRX); + DCHECK(shift_op != RRX); rm_ = rm; rs_ = no_reg; shift_op_ = shift_op; @@ -400,7 +326,7 @@ MemOperand::MemOperand(Register rn, Register rm, AddrMode am) { MemOperand::MemOperand(Register rn, Register rm, ShiftOp shift_op, int shift_imm, AddrMode am) { - ASSERT(is_uint5(shift_imm)); + DCHECK(is_uint5(shift_imm)); rn_ = rn; rm_ = rm; shift_op_ = shift_op; @@ -410,7 +336,7 @@ MemOperand::MemOperand(Register rn, Register rm, NeonMemOperand::NeonMemOperand(Register rn, AddrMode am, int align) { - ASSERT((am == Offset) || (am == PostIndex)); + DCHECK((am == Offset) || (am == PostIndex)); rn_ = rn; rm_ = (am == Offset) ? pc : sp; SetAlignment(align); @@ -472,10 +398,6 @@ NeonListOperand::NeonListOperand(DoubleRegister base, int registers_count) { // ----------------------------------------------------------------------------- // Specific instructions, constants, and masks. -// add(sp, sp, 4) instruction (aka Pop()) -const Instr kPopInstruction = - al | PostIndex | 4 | LeaveCC | I | kRegister_sp_Code * B16 | - kRegister_sp_Code * B12; // str(r, MemOperand(sp, 4, NegPreIndex), al) instruction (aka push(r)) // register r is not encoded. const Instr kPushRegPattern = @@ -484,14 +406,15 @@ const Instr kPushRegPattern = // register r is not encoded. 
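// Illustrative aside, not part of the upstream diff: the k*Mask/k*Pattern
// constants below classify instructions by masking off the variable fields
// (condition, Rd, offset) and comparing the rest against a fixed pattern,
// e.g. IsPush: (instr & ~kRdMask) == kPushRegPattern. The general shape:
#include <cstdint>

typedef int32_t Instr;  // matches the assembler's instruction word type

bool MatchesPattern(Instr instr, Instr mask, Instr pattern) {
  return (instr & mask) == pattern;  // true iff the fixed bits line up
}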
const Instr kPopRegPattern = al | B26 | L | 4 | PostIndex | kRegister_sp_Code * B16; -// mov lr, pc -const Instr kMovLrPc = al | MOV | kRegister_pc_Code | kRegister_lr_Code * B12; // ldr rd, [pc, #offset] -const Instr kLdrPCMask = 15 * B24 | 7 * B20 | 15 * B16; -const Instr kLdrPCPattern = 5 * B24 | L | kRegister_pc_Code * B16; +const Instr kLdrPCImmedMask = 15 * B24 | 7 * B20 | 15 * B16; +const Instr kLdrPCImmedPattern = 5 * B24 | L | kRegister_pc_Code * B16; // ldr rd, [pp, #offset] -const Instr kLdrPpMask = 15 * B24 | 7 * B20 | 15 * B16; -const Instr kLdrPpPattern = 5 * B24 | L | kRegister_r8_Code * B16; +const Instr kLdrPpImmedMask = 15 * B24 | 7 * B20 | 15 * B16; +const Instr kLdrPpImmedPattern = 5 * B24 | L | kRegister_r8_Code * B16; +// ldr rd, [pp, rn] +const Instr kLdrPpRegMask = 15 * B24 | 7 * B20 | 15 * B16; +const Instr kLdrPpRegPattern = 7 * B24 | L | kRegister_r8_Code * B16; // vldr dd, [pc, #offset] const Instr kVldrDPCMask = 15 * B24 | 3 * B20 | 15 * B16 | 15 * B8; const Instr kVldrDPCPattern = 13 * B24 | L | kRegister_pc_Code * B16 | 11 * B8; @@ -509,8 +432,8 @@ const Instr kMovMvnPattern = 0xd * B21; const Instr kMovMvnFlip = B22; const Instr kMovLeaveCCMask = 0xdff * B16; const Instr kMovLeaveCCPattern = 0x1a0 * B16; -const Instr kMovwMask = 0xff * B20; const Instr kMovwPattern = 0x30 * B20; +const Instr kMovtPattern = 0x34 * B20; const Instr kMovwLeaveCCFlip = 0x5 * B21; const Instr kCmpCmnMask = 0xdd * B20 | 0xf * B12; const Instr kCmpCmnPattern = 0x15 * B20; @@ -528,8 +451,6 @@ const Instr kLdrRegFpNegOffsetPattern = const Instr kStrRegFpNegOffsetPattern = al | B26 | NegOffset | kRegister_fp_Code * B16; const Instr kLdrStrInstrTypeMask = 0xffff0000; -const Instr kLdrStrInstrArgumentMask = 0x0000ffff; -const Instr kLdrStrOffsetMask = 0x00000fff; Assembler::Assembler(Isolate* isolate, void* buffer, int buffer_size) @@ -547,13 +468,12 @@ Assembler::Assembler(Isolate* isolate, void* buffer, int buffer_size) first_const_pool_64_use_ = -1; last_bound_pos_ = 0; constant_pool_available_ = !FLAG_enable_ool_constant_pool; - constant_pool_full_ = false; ClearRecordedAstId(); } Assembler::~Assembler() { - ASSERT(const_pool_blocked_nesting_ == 0); + DCHECK(const_pool_blocked_nesting_ == 0); } @@ -561,8 +481,8 @@ void Assembler::GetCode(CodeDesc* desc) { if (!FLAG_enable_ool_constant_pool) { // Emit constant pool if necessary. CheckConstPool(true, false); - ASSERT(num_pending_32_bit_reloc_info_ == 0); - ASSERT(num_pending_64_bit_reloc_info_ == 0); + DCHECK(num_pending_32_bit_reloc_info_ == 0); + DCHECK(num_pending_64_bit_reloc_info_ == 0); } // Set up code descriptor. desc->buffer = buffer_; @@ -574,7 +494,7 @@ void Assembler::GetCode(CodeDesc* desc) { void Assembler::Align(int m) { - ASSERT(m >= 4 && IsPowerOf2(m)); + DCHECK(m >= 4 && IsPowerOf2(m)); while ((pc_offset() & (m - 1)) != 0) { nop(); } @@ -598,7 +518,7 @@ bool Assembler::IsBranch(Instr instr) { int Assembler::GetBranchOffset(Instr instr) { - ASSERT(IsBranch(instr)); + DCHECK(IsBranch(instr)); // Take the jump offset in the lower 24 bits, sign extend it and multiply it // with 4 to get the offset in bytes. return ((instr & kImm24Mask) << 8) >> 6; @@ -616,7 +536,7 @@ bool Assembler::IsVldrDRegisterImmediate(Instr instr) { int Assembler::GetLdrRegisterImmediateOffset(Instr instr) { - ASSERT(IsLdrRegisterImmediate(instr)); + DCHECK(IsLdrRegisterImmediate(instr)); bool positive = (instr & B23) == B23; int offset = instr & kOff12Mask; // Zero extended offset. return positive ? 
offset : -offset; @@ -624,7 +544,7 @@ int Assembler::GetLdrRegisterImmediateOffset(Instr instr) { int Assembler::GetVldrDRegisterImmediateOffset(Instr instr) { - ASSERT(IsVldrDRegisterImmediate(instr)); + DCHECK(IsVldrDRegisterImmediate(instr)); bool positive = (instr & B23) == B23; int offset = instr & kOff8Mask; // Zero extended offset. offset <<= 2; @@ -633,10 +553,10 @@ int Assembler::GetVldrDRegisterImmediateOffset(Instr instr) { Instr Assembler::SetLdrRegisterImmediateOffset(Instr instr, int offset) { - ASSERT(IsLdrRegisterImmediate(instr)); + DCHECK(IsLdrRegisterImmediate(instr)); bool positive = offset >= 0; if (!positive) offset = -offset; - ASSERT(is_uint12(offset)); + DCHECK(is_uint12(offset)); // Set bit indicating whether the offset should be added. instr = (instr & ~B23) | (positive ? B23 : 0); // Set the actual offset. @@ -645,11 +565,11 @@ Instr Assembler::SetLdrRegisterImmediateOffset(Instr instr, int offset) { Instr Assembler::SetVldrDRegisterImmediateOffset(Instr instr, int offset) { - ASSERT(IsVldrDRegisterImmediate(instr)); - ASSERT((offset & ~3) == offset); // Must be 64-bit aligned. + DCHECK(IsVldrDRegisterImmediate(instr)); + DCHECK((offset & ~3) == offset); // Must be 64-bit aligned. bool positive = offset >= 0; if (!positive) offset = -offset; - ASSERT(is_uint10(offset)); + DCHECK(is_uint10(offset)); // Set bit indicating whether the offset should be added. instr = (instr & ~B23) | (positive ? B23 : 0); // Set the actual offset. Its bottom 2 bits are zero. @@ -663,10 +583,10 @@ bool Assembler::IsStrRegisterImmediate(Instr instr) { Instr Assembler::SetStrRegisterImmediateOffset(Instr instr, int offset) { - ASSERT(IsStrRegisterImmediate(instr)); + DCHECK(IsStrRegisterImmediate(instr)); bool positive = offset >= 0; if (!positive) offset = -offset; - ASSERT(is_uint12(offset)); + DCHECK(is_uint12(offset)); // Set bit indicating whether the offset should be added. instr = (instr & ~B23) | (positive ? B23 : 0); // Set the actual offset. @@ -680,9 +600,9 @@ bool Assembler::IsAddRegisterImmediate(Instr instr) { Instr Assembler::SetAddRegisterImmediateOffset(Instr instr, int offset) { - ASSERT(IsAddRegisterImmediate(instr)); - ASSERT(offset >= 0); - ASSERT(is_uint12(offset)); + DCHECK(IsAddRegisterImmediate(instr)); + DCHECK(offset >= 0); + DCHECK(is_uint12(offset)); // Set the offset. return (instr & ~kOff12Mask) | offset; } @@ -709,6 +629,24 @@ Register Assembler::GetRm(Instr instr) { } +Instr Assembler::GetConsantPoolLoadPattern() { + if (FLAG_enable_ool_constant_pool) { + return kLdrPpImmedPattern; + } else { + return kLdrPCImmedPattern; + } +} + + +Instr Assembler::GetConsantPoolLoadMask() { + if (FLAG_enable_ool_constant_pool) { + return kLdrPpImmedMask; + } else { + return kLdrPCImmedMask; + } +} + + bool Assembler::IsPush(Instr instr) { return ((instr & ~kRdMask) == kPushRegPattern); } @@ -742,17 +680,27 @@ bool Assembler::IsLdrRegFpNegOffset(Instr instr) { bool Assembler::IsLdrPcImmediateOffset(Instr instr) { // Check the instruction is indeed a // ldr<cond> <Rd>, [pc +/- offset_12]. - return (instr & kLdrPCMask) == kLdrPCPattern; + return (instr & kLdrPCImmedMask) == kLdrPCImmedPattern; } bool Assembler::IsLdrPpImmediateOffset(Instr instr) { // Check the instruction is indeed a // ldr<cond> <Rd>, [pp +/- offset_12]. - return (instr & kLdrPpMask) == kLdrPpPattern; + return (instr & kLdrPpImmedMask) == kLdrPpImmedPattern; +} + + +bool Assembler::IsLdrPpRegOffset(Instr instr) { + // Check the instruction is indeed a + // ldr<cond> <Rd>, [pp, +/- <Rm>]. 
+ return (instr & kLdrPpRegMask) == kLdrPpRegPattern; } +Instr Assembler::GetLdrPpRegOffsetPattern() { return kLdrPpRegPattern; } + + bool Assembler::IsVldrDPcImmediateOffset(Instr instr) { // Check the instruction is indeed a // vldr<cond> <Dd>, [pc +/- offset_10]. @@ -767,6 +715,20 @@ bool Assembler::IsVldrDPpImmediateOffset(Instr instr) { } +bool Assembler::IsBlxReg(Instr instr) { + // Check the instruction is indeed a + // blxcc <Rm> + return (instr & kBlxRegMask) == kBlxRegPattern; +} + + +bool Assembler::IsBlxIp(Instr instr) { + // Check the instruction is indeed a + // blx ip + return instr == kBlxIp; +} + + bool Assembler::IsTstImmediate(Instr instr) { return (instr & (B27 | B26 | I | kOpCodeMask | S | kRdMask)) == (I | TST | S); @@ -786,13 +748,13 @@ bool Assembler::IsCmpImmediate(Instr instr) { Register Assembler::GetCmpImmediateRegister(Instr instr) { - ASSERT(IsCmpImmediate(instr)); + DCHECK(IsCmpImmediate(instr)); return GetRn(instr); } int Assembler::GetCmpImmediateRawImmediate(Instr instr) { - ASSERT(IsCmpImmediate(instr)); + DCHECK(IsCmpImmediate(instr)); return instr & kOff12Mask; } @@ -815,13 +777,13 @@ int Assembler::GetCmpImmediateRawImmediate(Instr instr) { // same position. -int Assembler::target_at(int pos) { +int Assembler::target_at(int pos) { Instr instr = instr_at(pos); if (is_uint24(instr)) { // Emitted link to a label, not part of a branch. return instr; } - ASSERT((instr & 7*B25) == 5*B25); // b, bl, or blx imm24 + DCHECK((instr & 7*B25) == 5*B25); // b, bl, or blx imm24 int imm26 = ((instr & kImm24Mask) << 8) >> 6; if ((Instruction::ConditionField(instr) == kSpecialCondition) && ((instr & B24) != 0)) { @@ -835,7 +797,7 @@ int Assembler::target_at(int pos) { void Assembler::target_at_put(int pos, int target_pos) { Instr instr = instr_at(pos); if (is_uint24(instr)) { - ASSERT(target_pos == pos || target_pos >= 0); + DCHECK(target_pos == pos || target_pos >= 0); // Emitted link to a label, not part of a branch. // Load the position of the label relative to the generated code object // pointer in a register. @@ -852,9 +814,9 @@ void Assembler::target_at_put(int pos, int target_pos) { // We extract the destination register from the emitted nop instruction. Register dst = Register::from_code( Instruction::RmValue(instr_at(pos + kInstrSize))); - ASSERT(IsNop(instr_at(pos + kInstrSize), dst.code())); + DCHECK(IsNop(instr_at(pos + kInstrSize), dst.code())); uint32_t target24 = target_pos + (Code::kHeaderSize - kHeapObjectTag); - ASSERT(is_uint24(target24)); + DCHECK(is_uint24(target24)); if (is_uint8(target24)) { // If the target fits in a byte then only patch with a mov // instruction. 
@@ -903,17 +865,17 @@ void Assembler::target_at_put(int pos, int target_pos) {
    return;
  }
  int imm26 = target_pos - (pos + kPcLoadDelta);
-  ASSERT((instr & 7*B25) == 5*B25);  // b, bl, or blx imm24
+  DCHECK((instr & 7*B25) == 5*B25);  // b, bl, or blx imm24
  if (Instruction::ConditionField(instr) == kSpecialCondition) {
    // blx uses bit 24 to encode bit 2 of imm26
-    ASSERT((imm26 & 1) == 0);
+    DCHECK((imm26 & 1) == 0);
    instr = (instr & ~(B24 | kImm24Mask)) | ((imm26 & 2) >> 1)*B24;
  } else {
-    ASSERT((imm26 & 3) == 0);
+    DCHECK((imm26 & 3) == 0);
    instr &= ~kImm24Mask;
  }
  int imm24 = imm26 >> 2;
-  ASSERT(is_int24(imm24));
+  DCHECK(is_int24(imm24));
  instr_at_put(pos, instr | (imm24 & kImm24Mask));
}


@@ -932,7 +894,7 @@ void Assembler::print(Label* L) {
      if ((instr & ~kImm24Mask) == 0) {
        PrintF("value\n");
      } else {
-        ASSERT((instr & 7*B25) == 5*B25);  // b, bl, or blx
+        DCHECK((instr & 7*B25) == 5*B25);  // b, bl, or blx
        Condition cond = Instruction::ConditionField(instr);
        const char* b;
        const char* c;
@@ -977,7 +939,7 @@ void Assembler::print(Label* L) {


void Assembler::bind_to(Label* L, int pos) {
-  ASSERT(0 <= pos && pos <= pc_offset());  // must have a valid binding position
+  DCHECK(0 <= pos && pos <= pc_offset());  // must have a valid binding position
  while (L->is_linked()) {
    int fixup_pos = L->pos();
    next(L);  // call next before overwriting link with target at fixup_pos
@@ -993,20 +955,20 @@ void Assembler::bind_to(Label* L, int pos) {


void Assembler::bind(Label* L) {
-  ASSERT(!L->is_bound());  // label can only be bound once
+  DCHECK(!L->is_bound());  // label can only be bound once
  bind_to(L, pc_offset());
}


void Assembler::next(Label* L) {
-  ASSERT(L->is_linked());
+  DCHECK(L->is_linked());
  int link = target_at(L->pos());
  if (link == L->pos()) {
    // Branch target points to the same instruction. This is the end of the link
    // chain.
    L->Unuse();
  } else {
-    ASSERT(link >= 0);
+    DCHECK(link >= 0);
    L->link_to(link);
  }
}
@@ -1040,7 +1002,7 @@ static bool fits_shifter(uint32_t imm32,
    if (CpuFeatures::IsSupported(ARMv7)) {
      if (imm32 < 0x10000) {
        *instr ^= kMovwLeaveCCFlip;
-        *instr |= EncodeMovwImmediate(imm32);
+        *instr |= Assembler::EncodeMovwImmediate(imm32);
        *rotate_imm = *immed_8 = 0;  // Not used for movw.
        return true;
      }
@@ -1076,11 +1038,10 @@
// if they can be encoded in the ARM's 12 bits of immediate-offset instruction
// space. There is no guarantee that the relocated location can be similarly
// encoded.
-bool Operand::must_output_reloc_info(Isolate* isolate,
-                                     const Assembler* assembler) const {
+bool Operand::must_output_reloc_info(const Assembler* assembler) const {
  if (rmode_ == RelocInfo::EXTERNAL_REFERENCE) {
    if (assembler != NULL && assembler->predictable_code_size()) return true;
-    return Serializer::enabled(isolate);
+    return assembler->serializer_enabled();
  } else if (RelocInfo::IsNone(rmode_)) {
    return false;
  }
@@ -1088,19 +1049,18 @@ bool Operand::must_output_reloc_info(Isolate* isolate,


-static bool use_mov_immediate_load(Isolate* isolate,
-                                   const Operand& x,
+static bool use_mov_immediate_load(const Operand& x,
                                   const Assembler* assembler) {
-  if (assembler != NULL && !assembler->can_use_constant_pool()) {
+  if (assembler != NULL && !assembler->is_constant_pool_available()) {
    // If there is no constant pool available, we must use a mov immediate.
    // TODO(rmcilroy): enable ARMv6 support.
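// Illustrative aside, not part of the upstream diff: fits_shifter above
// relies on the classic ARM rule that a data-processing immediate is an
// 8-bit value rotated right by an even amount, so encodability is a 16-way
// search, sketched here in simplified form:
#include <cstdint>

bool FitsRotatedImmediate(uint32_t imm, uint32_t* rot_field, uint32_t* imm8) {
  for (uint32_t rot = 0; rot < 32; rot += 2) {
    // Rotating left by `rot` undoes a rotate-right-by-`rot` encoding.
    uint32_t v = (rot == 0) ? imm : ((imm << rot) | (imm >> (32 - rot)));
    if (v <= 0xff) {
      *rot_field = rot / 2;  // the 4-bit rotate field stores half the amount
      *imm8 = v;
      return true;
    }
  }
  return false;
}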
- ASSERT(CpuFeatures::IsSupported(ARMv7)); + DCHECK(CpuFeatures::IsSupported(ARMv7)); return true; } else if (CpuFeatures::IsSupported(MOVW_MOVT_IMMEDIATE_LOADS) && (assembler == NULL || !assembler->predictable_code_size())) { // Prefer movw / movt to constant pool if it is more efficient on the CPU. return true; - } else if (x.must_output_reloc_info(isolate, assembler)) { + } else if (x.must_output_reloc_info(assembler)) { // Prefer constant pool if data is likely to be patched. return false; } else { @@ -1110,29 +1070,35 @@ static bool use_mov_immediate_load(Isolate* isolate, } -bool Operand::is_single_instruction(Isolate* isolate, - const Assembler* assembler, - Instr instr) const { - if (rm_.is_valid()) return true; +int Operand::instructions_required(const Assembler* assembler, + Instr instr) const { + if (rm_.is_valid()) return 1; uint32_t dummy1, dummy2; - if (must_output_reloc_info(isolate, assembler) || + if (must_output_reloc_info(assembler) || !fits_shifter(imm32_, &dummy1, &dummy2, &instr)) { // The immediate operand cannot be encoded as a shifter operand, or use of - // constant pool is required. For a mov instruction not setting the - // condition code additional instruction conventions can be used. - if ((instr & ~kCondMask) == 13*B21) { // mov, S not set - return !use_mov_immediate_load(isolate, *this, assembler); + // constant pool is required. First account for the instructions required + // for the constant pool or immediate load + int instructions; + if (use_mov_immediate_load(*this, assembler)) { + instructions = 2; // A movw, movt immediate load. + } else if (assembler != NULL && assembler->use_extended_constant_pool()) { + instructions = 3; // An extended constant pool load. } else { - // If this is not a mov or mvn instruction there will always an additional - // instructions - either mov or ldr. The mov might actually be two - // instructions mov or movw followed by movt so including the actual - // instruction two or three instructions will be generated. - return false; + instructions = 1; // A small constant pool load. + } + + if ((instr & ~kCondMask) != 13 * B21) { // mov, S not set + // For a mov or mvn instruction which doesn't set the condition + // code, the constant pool or immediate load is enough, otherwise we need + // to account for the actual instruction being requested. + instructions += 1; } + return instructions; } else { // No use of constant pool and the immediate operand can be encoded as a // shifter operand. - return true; + return 1; } } @@ -1141,29 +1107,39 @@ void Assembler::move_32_bit_immediate(Register rd, const Operand& x, Condition cond) { RelocInfo rinfo(pc_, x.rmode_, x.imm32_, NULL); - if (x.must_output_reloc_info(isolate(), this)) { + if (x.must_output_reloc_info(this)) { RecordRelocInfo(rinfo); } - if (use_mov_immediate_load(isolate(), x, this)) { + if (use_mov_immediate_load(x, this)) { Register target = rd.code() == pc.code() ? ip : rd; // TODO(rmcilroy): add ARMv6 support for immediate loads. - ASSERT(CpuFeatures::IsSupported(ARMv7)); + DCHECK(CpuFeatures::IsSupported(ARMv7)); if (!FLAG_enable_ool_constant_pool && - x.must_output_reloc_info(isolate(), this)) { + x.must_output_reloc_info(this)) { // Make sure the movw/movt doesn't get separated. 
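// Illustrative aside, not part of the upstream diff: the movw/movt pair used
// by move_32_bit_immediate splits each 16-bit immediate across a 4-bit imm4
// field (bits 19:16) and a 12-bit imm12 field (bits 11:0). This mirrors the
// EncodeMovwImmediate helper the diff moves onto Assembler:
#include <cstdint>

uint32_t EncodeMovw16(uint32_t imm16) {
  // Assumes imm16 < 0x10000: the top nibble goes to bits 19:16, the low
  // 12 bits stay at bits 11:0, matching the A32 movw/movt encodings.
  return ((imm16 & 0xf000) << 4) | (imm16 & 0x0fff);
}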
BlockConstPoolFor(2); } - emit(cond | 0x30*B20 | target.code()*B12 | - EncodeMovwImmediate(x.imm32_ & 0xffff)); + movw(target, static_cast<uint32_t>(x.imm32_ & 0xffff), cond); movt(target, static_cast<uint32_t>(x.imm32_) >> 16, cond); if (target.code() != rd.code()) { mov(rd, target, LeaveCC, cond); } } else { - ASSERT(can_use_constant_pool()); - ConstantPoolAddEntry(rinfo); - ldr(rd, MemOperand(FLAG_enable_ool_constant_pool ? pp : pc, 0), cond); + DCHECK(is_constant_pool_available()); + ConstantPoolArray::LayoutSection section = ConstantPoolAddEntry(rinfo); + if (section == ConstantPoolArray::EXTENDED_SECTION) { + DCHECK(FLAG_enable_ool_constant_pool); + Register target = rd.code() == pc.code() ? ip : rd; + // Emit instructions to load constant pool offset. + movw(target, 0, cond); + movt(target, 0, cond); + // Load from constant pool at offset. + ldr(rd, MemOperand(pp, target), cond); + } else { + DCHECK(section == ConstantPoolArray::SMALL_SECTION); + ldr(rd, MemOperand(FLAG_enable_ool_constant_pool ? pp : pc, 0), cond); + } } } @@ -1173,12 +1149,12 @@ void Assembler::addrmod1(Instr instr, Register rd, const Operand& x) { CheckBuffer(); - ASSERT((instr & ~(kCondMask | kOpCodeMask | S)) == 0); + DCHECK((instr & ~(kCondMask | kOpCodeMask | S)) == 0); if (!x.rm_.is_valid()) { // Immediate. uint32_t rotate_imm; uint32_t immed_8; - if (x.must_output_reloc_info(isolate(), this) || + if (x.must_output_reloc_info(this) || !fits_shifter(x.imm32_, &rotate_imm, &immed_8, &instr)) { // The immediate operand cannot be encoded as a shifter operand, so load // it first to register ip and change the original instruction to use ip. @@ -1200,7 +1176,7 @@ void Assembler::addrmod1(Instr instr, instr |= x.shift_imm_*B7 | x.shift_op_ | x.rm_.code(); } else { // Register shift. - ASSERT(!rn.is(pc) && !rd.is(pc) && !x.rm_.is(pc) && !x.rs_.is(pc)); + DCHECK(!rn.is(pc) && !rd.is(pc) && !x.rm_.is(pc) && !x.rs_.is(pc)); instr |= x.rs_.code()*B8 | x.shift_op_ | B4 | x.rm_.code(); } emit(instr | rn.code()*B16 | rd.code()*B12); @@ -1212,7 +1188,7 @@ void Assembler::addrmod1(Instr instr, void Assembler::addrmod2(Instr instr, Register rd, const MemOperand& x) { - ASSERT((instr & ~(kCondMask | B | L)) == B26); + DCHECK((instr & ~(kCondMask | B | L)) == B26); int am = x.am_; if (!x.rm_.is_valid()) { // Immediate offset. @@ -1224,28 +1200,28 @@ void Assembler::addrmod2(Instr instr, Register rd, const MemOperand& x) { if (!is_uint12(offset_12)) { // Immediate offset cannot be encoded, load it first to register ip // rn (and rd in a load) should never be ip, or will be trashed. - ASSERT(!x.rn_.is(ip) && ((instr & L) == L || !rd.is(ip))); + DCHECK(!x.rn_.is(ip) && ((instr & L) == L || !rd.is(ip))); mov(ip, Operand(x.offset_), LeaveCC, Instruction::ConditionField(instr)); addrmod2(instr, rd, MemOperand(x.rn_, ip, x.am_)); return; } - ASSERT(offset_12 >= 0); // no masking needed + DCHECK(offset_12 >= 0); // no masking needed instr |= offset_12; } else { // Register offset (shift_imm_ and shift_op_ are 0) or scaled // register offset the constructors make sure than both shift_imm_ // and shift_op_ are initialized. 
- ASSERT(!x.rm_.is(pc)); + DCHECK(!x.rm_.is(pc)); instr |= B25 | x.shift_imm_*B7 | x.shift_op_ | x.rm_.code(); } - ASSERT((am & (P|W)) == P || !x.rn_.is(pc)); // no pc base with writeback + DCHECK((am & (P|W)) == P || !x.rn_.is(pc)); // no pc base with writeback emit(instr | am | x.rn_.code()*B16 | rd.code()*B12); } void Assembler::addrmod3(Instr instr, Register rd, const MemOperand& x) { - ASSERT((instr & ~(kCondMask | L | S6 | H)) == (B4 | B7)); - ASSERT(x.rn_.is_valid()); + DCHECK((instr & ~(kCondMask | L | S6 | H)) == (B4 | B7)); + DCHECK(x.rn_.is_valid()); int am = x.am_; if (!x.rm_.is_valid()) { // Immediate offset. @@ -1257,60 +1233,60 @@ void Assembler::addrmod3(Instr instr, Register rd, const MemOperand& x) { if (!is_uint8(offset_8)) { // Immediate offset cannot be encoded, load it first to register ip // rn (and rd in a load) should never be ip, or will be trashed. - ASSERT(!x.rn_.is(ip) && ((instr & L) == L || !rd.is(ip))); + DCHECK(!x.rn_.is(ip) && ((instr & L) == L || !rd.is(ip))); mov(ip, Operand(x.offset_), LeaveCC, Instruction::ConditionField(instr)); addrmod3(instr, rd, MemOperand(x.rn_, ip, x.am_)); return; } - ASSERT(offset_8 >= 0); // no masking needed + DCHECK(offset_8 >= 0); // no masking needed instr |= B | (offset_8 >> 4)*B8 | (offset_8 & 0xf); } else if (x.shift_imm_ != 0) { // Scaled register offset not supported, load index first // rn (and rd in a load) should never be ip, or will be trashed. - ASSERT(!x.rn_.is(ip) && ((instr & L) == L || !rd.is(ip))); + DCHECK(!x.rn_.is(ip) && ((instr & L) == L || !rd.is(ip))); mov(ip, Operand(x.rm_, x.shift_op_, x.shift_imm_), LeaveCC, Instruction::ConditionField(instr)); addrmod3(instr, rd, MemOperand(x.rn_, ip, x.am_)); return; } else { // Register offset. - ASSERT((am & (P|W)) == P || !x.rm_.is(pc)); // no pc index with writeback + DCHECK((am & (P|W)) == P || !x.rm_.is(pc)); // no pc index with writeback instr |= x.rm_.code(); } - ASSERT((am & (P|W)) == P || !x.rn_.is(pc)); // no pc base with writeback + DCHECK((am & (P|W)) == P || !x.rn_.is(pc)); // no pc base with writeback emit(instr | am | x.rn_.code()*B16 | rd.code()*B12); } void Assembler::addrmod4(Instr instr, Register rn, RegList rl) { - ASSERT((instr & ~(kCondMask | P | U | W | L)) == B27); - ASSERT(rl != 0); - ASSERT(!rn.is(pc)); + DCHECK((instr & ~(kCondMask | P | U | W | L)) == B27); + DCHECK(rl != 0); + DCHECK(!rn.is(pc)); emit(instr | rn.code()*B16 | rl); } void Assembler::addrmod5(Instr instr, CRegister crd, const MemOperand& x) { // Unindexed addressing is not encoded by this function. - ASSERT_EQ((B27 | B26), + DCHECK_EQ((B27 | B26), (instr & ~(kCondMask | kCoprocessorMask | P | U | N | W | L))); - ASSERT(x.rn_.is_valid() && !x.rm_.is_valid()); + DCHECK(x.rn_.is_valid() && !x.rm_.is_valid()); int am = x.am_; int offset_8 = x.offset_; - ASSERT((offset_8 & 3) == 0); // offset must be an aligned word offset + DCHECK((offset_8 & 3) == 0); // offset must be an aligned word offset offset_8 >>= 2; if (offset_8 < 0) { offset_8 = -offset_8; am ^= U; } - ASSERT(is_uint8(offset_8)); // unsigned word offset must fit in a byte - ASSERT((am & (P|W)) == P || !x.rn_.is(pc)); // no pc base with writeback + DCHECK(is_uint8(offset_8)); // unsigned word offset must fit in a byte + DCHECK((am & (P|W)) == P || !x.rn_.is(pc)); // no pc base with writeback // Post-indexed addressing requires W == 1; different than in addrmod2/3. 
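The addrmod3 path above splits its 8-bit immediate offset across two nibble fields, imm4H in bits 11-8 and imm4L in bits 3-0, which is what (offset_8 >> 4)*B8 | (offset_8 & 0xf) computes. A worked sketch with a hypothetical helper:

#include <cassert>
#include <cstdint>

uint32_t split_offset8(uint32_t offset_8) {
  assert(offset_8 < 256);
  return ((offset_8 >> 4) << 8) | (offset_8 & 0xf);  // imm4H:imm4L
}
// e.g. split_offset8(0xAB) == 0xA0B: 0xA in bits 11-8, 0xB in bits 3-0.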
if ((am & P) == 0) am |= W; - ASSERT(offset_8 >= 0); // no masking needed + DCHECK(offset_8 >= 0); // no masking needed emit(instr | am | x.rn_.code()*B16 | crd.code()*B12 | offset_8); } @@ -1339,9 +1315,9 @@ int Assembler::branch_offset(Label* L, bool jump_elimination_allowed) { // Branch instructions. void Assembler::b(int branch_offset, Condition cond) { - ASSERT((branch_offset & 3) == 0); + DCHECK((branch_offset & 3) == 0); int imm24 = branch_offset >> 2; - ASSERT(is_int24(imm24)); + DCHECK(is_int24(imm24)); emit(cond | B27 | B25 | (imm24 & kImm24Mask)); if (cond == al) { @@ -1353,33 +1329,33 @@ void Assembler::b(int branch_offset, Condition cond) { void Assembler::bl(int branch_offset, Condition cond) { positions_recorder()->WriteRecordedPositions(); - ASSERT((branch_offset & 3) == 0); + DCHECK((branch_offset & 3) == 0); int imm24 = branch_offset >> 2; - ASSERT(is_int24(imm24)); + DCHECK(is_int24(imm24)); emit(cond | B27 | B25 | B24 | (imm24 & kImm24Mask)); } void Assembler::blx(int branch_offset) { // v5 and above positions_recorder()->WriteRecordedPositions(); - ASSERT((branch_offset & 1) == 0); + DCHECK((branch_offset & 1) == 0); int h = ((branch_offset & 2) >> 1)*B24; int imm24 = branch_offset >> 2; - ASSERT(is_int24(imm24)); + DCHECK(is_int24(imm24)); emit(kSpecialCondition | B27 | B25 | h | (imm24 & kImm24Mask)); } void Assembler::blx(Register target, Condition cond) { // v5 and above positions_recorder()->WriteRecordedPositions(); - ASSERT(!target.is(pc)); + DCHECK(!target.is(pc)); emit(cond | B24 | B21 | 15*B16 | 15*B12 | 15*B8 | BLX | target.code()); } void Assembler::bx(Register target, Condition cond) { // v5 and above, plus v4t positions_recorder()->WriteRecordedPositions(); - ASSERT(!target.is(pc)); // use of pc is actually allowed, but discouraged + DCHECK(!target.is(pc)); // use of pc is actually allowed, but discouraged emit(cond | B24 | B21 | 15*B16 | 15*B12 | 15*B8 | BX | target.code()); } @@ -1451,7 +1427,7 @@ void Assembler::cmp(Register src1, const Operand& src2, Condition cond) { void Assembler::cmp_raw_immediate( Register src, int raw_immediate, Condition cond) { - ASSERT(is_uint12(raw_immediate)); + DCHECK(is_uint12(raw_immediate)); emit(cond | I | CMP | S | src.code() << 16 | raw_immediate); } @@ -1474,7 +1450,7 @@ void Assembler::mov(Register dst, const Operand& src, SBit s, Condition cond) { // Don't allow nop instructions in the form mov rn, rn to be generated using // the mov instruction. They must be generated using nop(int/NopMarkerTypes) // or MarkCode(int/NopMarkerTypes) pseudo instructions. - ASSERT(!(src.is_reg() && src.rm().is(dst) && s == LeaveCC && cond == al)); + DCHECK(!(src.is_reg() && src.rm().is(dst) && s == LeaveCC && cond == al)); addrmod1(cond | MOV | s, r0, dst, src); } @@ -1507,7 +1483,7 @@ void Assembler::mov_label_offset(Register dst, Label* label) { // // When the label gets bound: target_at extracts the link and target_at_put // patches the instructions. - ASSERT(is_uint24(link)); + DCHECK(is_uint24(link)); BlockConstPoolScope block_const_pool(this); emit(link); nop(dst.code()); @@ -1519,15 +1495,13 @@ void Assembler::mov_label_offset(Register dst, Label* label) { void Assembler::movw(Register reg, uint32_t immediate, Condition cond) { - ASSERT(immediate < 0x10000); - // May use movw if supported, but on unsupported platforms will try to use - // equivalent rotated immed_8 value and other tricks before falling back to a - // constant pool load. 
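The branch hunks above preserve the encoding contract: b and bl store a word-aligned offset divided by four in imm24, while blx only requires halfword alignment and carries bit 1 of the offset in the H bit (B24). Hedged standalone helpers mirroring those checks:

#include <cassert>
#include <cstdint>

uint32_t b_imm24(int32_t offset) {     // b/bl: word-aligned targets only
  assert((offset & 3) == 0);
  return (offset >> 2) & 0x00ffffff;   // masked as with kImm24Mask
}

uint32_t blx_h_bit(int32_t offset) {   // blx: halfword alignment allowed
  assert((offset & 1) == 0);
  return ((offset & 2) >> 1) << 24;    // offset bit 1 rides in B24
}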
- mov(reg, Operand(immediate), LeaveCC, cond); + DCHECK(CpuFeatures::IsSupported(ARMv7)); + emit(cond | 0x30*B20 | reg.code()*B12 | EncodeMovwImmediate(immediate)); } void Assembler::movt(Register reg, uint32_t immediate, Condition cond) { + DCHECK(CpuFeatures::IsSupported(ARMv7)); emit(cond | 0x34*B20 | reg.code()*B12 | EncodeMovwImmediate(immediate)); } @@ -1546,7 +1520,7 @@ void Assembler::mvn(Register dst, const Operand& src, SBit s, Condition cond) { // Multiply instructions. void Assembler::mla(Register dst, Register src1, Register src2, Register srcA, SBit s, Condition cond) { - ASSERT(!dst.is(pc) && !src1.is(pc) && !src2.is(pc) && !srcA.is(pc)); + DCHECK(!dst.is(pc) && !src1.is(pc) && !src2.is(pc) && !srcA.is(pc)); emit(cond | A | s | dst.code()*B16 | srcA.code()*B12 | src2.code()*B8 | B7 | B4 | src1.code()); } @@ -1554,7 +1528,8 @@ void Assembler::mla(Register dst, Register src1, Register src2, Register srcA, void Assembler::mls(Register dst, Register src1, Register src2, Register srcA, Condition cond) { - ASSERT(!dst.is(pc) && !src1.is(pc) && !src2.is(pc) && !srcA.is(pc)); + DCHECK(!dst.is(pc) && !src1.is(pc) && !src2.is(pc) && !srcA.is(pc)); + DCHECK(IsEnabled(MLS)); emit(cond | B22 | B21 | dst.code()*B16 | srcA.code()*B12 | src2.code()*B8 | B7 | B4 | src1.code()); } @@ -1562,16 +1537,25 @@ void Assembler::mls(Register dst, Register src1, Register src2, Register srcA, void Assembler::sdiv(Register dst, Register src1, Register src2, Condition cond) { - ASSERT(!dst.is(pc) && !src1.is(pc) && !src2.is(pc)); - ASSERT(IsEnabled(SUDIV)); + DCHECK(!dst.is(pc) && !src1.is(pc) && !src2.is(pc)); + DCHECK(IsEnabled(SUDIV)); emit(cond | B26 | B25| B24 | B20 | dst.code()*B16 | 0xf * B12 | src2.code()*B8 | B4 | src1.code()); } +void Assembler::udiv(Register dst, Register src1, Register src2, + Condition cond) { + DCHECK(!dst.is(pc) && !src1.is(pc) && !src2.is(pc)); + DCHECK(IsEnabled(SUDIV)); + emit(cond | B26 | B25 | B24 | B21 | B20 | dst.code() * B16 | 0xf * B12 | + src2.code() * B8 | B4 | src1.code()); +} + + void Assembler::mul(Register dst, Register src1, Register src2, SBit s, Condition cond) { - ASSERT(!dst.is(pc) && !src1.is(pc) && !src2.is(pc)); + DCHECK(!dst.is(pc) && !src1.is(pc) && !src2.is(pc)); // dst goes in bits 16-19 for this instruction! 
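With the change above, movw/movt are thin ARMv7 emitters instead of detouring through the generic mov() path. Together the pair materializes any 32-bit constant: movw zero-extends the low half, movt overwrites only the high half. A register-state model:

#include <cassert>
#include <cstdint>

uint32_t movw_movt_model(uint32_t imm32) {
  uint32_t reg = imm32 & 0xffff;                    // movw reg, #lo16
  reg = (reg & 0x0000ffff) | (imm32 & 0xffff0000);  // movt reg, #hi16
  assert(reg == imm32);
  return reg;
}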
emit(cond | s | dst.code()*B16 | src2.code()*B8 | B7 | B4 | src1.code()); } @@ -1583,8 +1567,8 @@ void Assembler::smlal(Register dstL, Register src2, SBit s, Condition cond) { - ASSERT(!dstL.is(pc) && !dstH.is(pc) && !src1.is(pc) && !src2.is(pc)); - ASSERT(!dstL.is(dstH)); + DCHECK(!dstL.is(pc) && !dstH.is(pc) && !src1.is(pc) && !src2.is(pc)); + DCHECK(!dstL.is(dstH)); emit(cond | B23 | B22 | A | s | dstH.code()*B16 | dstL.code()*B12 | src2.code()*B8 | B7 | B4 | src1.code()); } @@ -1596,8 +1580,8 @@ void Assembler::smull(Register dstL, Register src2, SBit s, Condition cond) { - ASSERT(!dstL.is(pc) && !dstH.is(pc) && !src1.is(pc) && !src2.is(pc)); - ASSERT(!dstL.is(dstH)); + DCHECK(!dstL.is(pc) && !dstH.is(pc) && !src1.is(pc) && !src2.is(pc)); + DCHECK(!dstL.is(dstH)); emit(cond | B23 | B22 | s | dstH.code()*B16 | dstL.code()*B12 | src2.code()*B8 | B7 | B4 | src1.code()); } @@ -1609,8 +1593,8 @@ void Assembler::umlal(Register dstL, Register src2, SBit s, Condition cond) { - ASSERT(!dstL.is(pc) && !dstH.is(pc) && !src1.is(pc) && !src2.is(pc)); - ASSERT(!dstL.is(dstH)); + DCHECK(!dstL.is(pc) && !dstH.is(pc) && !src1.is(pc) && !src2.is(pc)); + DCHECK(!dstL.is(dstH)); emit(cond | B23 | A | s | dstH.code()*B16 | dstL.code()*B12 | src2.code()*B8 | B7 | B4 | src1.code()); } @@ -1622,8 +1606,8 @@ void Assembler::umull(Register dstL, Register src2, SBit s, Condition cond) { - ASSERT(!dstL.is(pc) && !dstH.is(pc) && !src1.is(pc) && !src2.is(pc)); - ASSERT(!dstL.is(dstH)); + DCHECK(!dstL.is(pc) && !dstH.is(pc) && !src1.is(pc) && !src2.is(pc)); + DCHECK(!dstL.is(dstH)); emit(cond | B23 | s | dstH.code()*B16 | dstL.code()*B12 | src2.code()*B8 | B7 | B4 | src1.code()); } @@ -1632,7 +1616,7 @@ void Assembler::umull(Register dstL, // Miscellaneous arithmetic instructions. void Assembler::clz(Register dst, Register src, Condition cond) { // v5 and above. - ASSERT(!dst.is(pc) && !src.is(pc)); + DCHECK(!dst.is(pc) && !src.is(pc)); emit(cond | B24 | B22 | B21 | 15*B16 | dst.code()*B12 | 15*B8 | CLZ | src.code()); } @@ -1646,11 +1630,11 @@ void Assembler::usat(Register dst, const Operand& src, Condition cond) { // v6 and above. - ASSERT(CpuFeatures::IsSupported(ARMv7)); - ASSERT(!dst.is(pc) && !src.rm_.is(pc)); - ASSERT((satpos >= 0) && (satpos <= 31)); - ASSERT((src.shift_op_ == ASR) || (src.shift_op_ == LSL)); - ASSERT(src.rs_.is(no_reg)); + DCHECK(CpuFeatures::IsSupported(ARMv7)); + DCHECK(!dst.is(pc) && !src.rm_.is(pc)); + DCHECK((satpos >= 0) && (satpos <= 31)); + DCHECK((src.shift_op_ == ASR) || (src.shift_op_ == LSL)); + DCHECK(src.rs_.is(no_reg)); int sh = 0; if (src.shift_op_ == ASR) { @@ -1674,10 +1658,10 @@ void Assembler::ubfx(Register dst, int width, Condition cond) { // v7 and above. - ASSERT(CpuFeatures::IsSupported(ARMv7)); - ASSERT(!dst.is(pc) && !src.is(pc)); - ASSERT((lsb >= 0) && (lsb <= 31)); - ASSERT((width >= 1) && (width <= (32 - lsb))); + DCHECK(CpuFeatures::IsSupported(ARMv7)); + DCHECK(!dst.is(pc) && !src.is(pc)); + DCHECK((lsb >= 0) && (lsb <= 31)); + DCHECK((width >= 1) && (width <= (32 - lsb))); emit(cond | 0xf*B23 | B22 | B21 | (width - 1)*B16 | dst.code()*B12 | lsb*B7 | B6 | B4 | src.code()); } @@ -1694,10 +1678,10 @@ void Assembler::sbfx(Register dst, int width, Condition cond) { // v7 and above. 
- ASSERT(CpuFeatures::IsSupported(ARMv7)); - ASSERT(!dst.is(pc) && !src.is(pc)); - ASSERT((lsb >= 0) && (lsb <= 31)); - ASSERT((width >= 1) && (width <= (32 - lsb))); + DCHECK(CpuFeatures::IsSupported(ARMv7)); + DCHECK(!dst.is(pc) && !src.is(pc)); + DCHECK((lsb >= 0) && (lsb <= 31)); + DCHECK((width >= 1) && (width <= (32 - lsb))); emit(cond | 0xf*B23 | B21 | (width - 1)*B16 | dst.code()*B12 | lsb*B7 | B6 | B4 | src.code()); } @@ -1709,10 +1693,10 @@ void Assembler::sbfx(Register dst, // bfc dst, #lsb, #width void Assembler::bfc(Register dst, int lsb, int width, Condition cond) { // v7 and above. - ASSERT(CpuFeatures::IsSupported(ARMv7)); - ASSERT(!dst.is(pc)); - ASSERT((lsb >= 0) && (lsb <= 31)); - ASSERT((width >= 1) && (width <= (32 - lsb))); + DCHECK(CpuFeatures::IsSupported(ARMv7)); + DCHECK(!dst.is(pc)); + DCHECK((lsb >= 0) && (lsb <= 31)); + DCHECK((width >= 1) && (width <= (32 - lsb))); int msb = lsb + width - 1; emit(cond | 0x1f*B22 | msb*B16 | dst.code()*B12 | lsb*B7 | B4 | 0xf); } @@ -1728,10 +1712,10 @@ void Assembler::bfi(Register dst, int width, Condition cond) { // v7 and above. - ASSERT(CpuFeatures::IsSupported(ARMv7)); - ASSERT(!dst.is(pc) && !src.is(pc)); - ASSERT((lsb >= 0) && (lsb <= 31)); - ASSERT((width >= 1) && (width <= (32 - lsb))); + DCHECK(CpuFeatures::IsSupported(ARMv7)); + DCHECK(!dst.is(pc) && !src.is(pc)); + DCHECK((lsb >= 0) && (lsb <= 31)); + DCHECK((width >= 1) && (width <= (32 - lsb))); int msb = lsb + width - 1; emit(cond | 0x1f*B22 | msb*B16 | dst.code()*B12 | lsb*B7 | B4 | src.code()); @@ -1745,13 +1729,13 @@ void Assembler::pkhbt(Register dst, // Instruction details available in ARM DDI 0406C.b, A8.8.125. // cond(31-28) | 01101000(27-20) | Rn(19-16) | // Rd(15-12) | imm5(11-7) | 0(6) | 01(5-4) | Rm(3-0) - ASSERT(!dst.is(pc)); - ASSERT(!src1.is(pc)); - ASSERT(!src2.rm().is(pc)); - ASSERT(!src2.rm().is(no_reg)); - ASSERT(src2.rs().is(no_reg)); - ASSERT((src2.shift_imm_ >= 0) && (src2.shift_imm_ <= 31)); - ASSERT(src2.shift_op() == LSL); + DCHECK(!dst.is(pc)); + DCHECK(!src1.is(pc)); + DCHECK(!src2.rm().is(pc)); + DCHECK(!src2.rm().is(no_reg)); + DCHECK(src2.rs().is(no_reg)); + DCHECK((src2.shift_imm_ >= 0) && (src2.shift_imm_ <= 31)); + DCHECK(src2.shift_op() == LSL); emit(cond | 0x68*B20 | src1.code()*B16 | dst.code()*B12 | src2.shift_imm_*B7 | B4 | src2.rm().code()); } @@ -1764,13 +1748,13 @@ void Assembler::pkhtb(Register dst, // Instruction details available in ARM DDI 0406C.b, A8.8.125. // cond(31-28) | 01101000(27-20) | Rn(19-16) | // Rd(15-12) | imm5(11-7) | 1(6) | 01(5-4) | Rm(3-0) - ASSERT(!dst.is(pc)); - ASSERT(!src1.is(pc)); - ASSERT(!src2.rm().is(pc)); - ASSERT(!src2.rm().is(no_reg)); - ASSERT(src2.rs().is(no_reg)); - ASSERT((src2.shift_imm_ >= 1) && (src2.shift_imm_ <= 32)); - ASSERT(src2.shift_op() == ASR); + DCHECK(!dst.is(pc)); + DCHECK(!src1.is(pc)); + DCHECK(!src2.rm().is(pc)); + DCHECK(!src2.rm().is(no_reg)); + DCHECK(src2.rs().is(no_reg)); + DCHECK((src2.shift_imm_ >= 1) && (src2.shift_imm_ <= 32)); + DCHECK(src2.shift_op() == ASR); int asr = (src2.shift_imm_ == 32) ? 0 : src2.shift_imm_; emit(cond | 0x68*B20 | src1.code()*B16 | dst.code()*B12 | asr*B7 | B6 | B4 | src2.rm().code()); @@ -1783,16 +1767,16 @@ void Assembler::uxtb(Register dst, // Instruction details available in ARM DDI 0406C.b, A8.8.274. 
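The bit-field hunks above (ubfx, sbfx, bfc, bfi) all check the same lsb/width contract: 0 <= lsb <= 31 and 1 <= width <= 32 - lsb. Their architectural semantics, written as hypothetical C++ models of the register result:

#include <cstdint>

uint32_t ubfx_model(uint32_t src, int lsb, int width) {
  uint32_t mask = (width == 32) ? 0xffffffffu : ((1u << width) - 1);
  return (src >> lsb) & mask;                    // extract and zero-extend
}

uint32_t bfi_model(uint32_t dst, uint32_t src, int lsb, int width) {
  uint32_t mask =
      ((width == 32) ? 0xffffffffu : ((1u << width) - 1)) << lsb;
  return (dst & ~mask) | ((src << lsb) & mask);  // insert, keep other bits
}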
// cond(31-28) | 01101110(27-20) | 1111(19-16) | // Rd(15-12) | rotate(11-10) | 00(9-8)| 0111(7-4) | Rm(3-0) - ASSERT(!dst.is(pc)); - ASSERT(!src.rm().is(pc)); - ASSERT(!src.rm().is(no_reg)); - ASSERT(src.rs().is(no_reg)); - ASSERT((src.shift_imm_ == 0) || + DCHECK(!dst.is(pc)); + DCHECK(!src.rm().is(pc)); + DCHECK(!src.rm().is(no_reg)); + DCHECK(src.rs().is(no_reg)); + DCHECK((src.shift_imm_ == 0) || (src.shift_imm_ == 8) || (src.shift_imm_ == 16) || (src.shift_imm_ == 24)); // Operand maps ROR #0 to LSL #0. - ASSERT((src.shift_op() == ROR) || + DCHECK((src.shift_op() == ROR) || ((src.shift_op() == LSL) && (src.shift_imm_ == 0))); emit(cond | 0x6E*B20 | 0xF*B16 | dst.code()*B12 | ((src.shift_imm_ >> 1)&0xC)*B8 | 7*B4 | src.rm().code()); @@ -1806,17 +1790,17 @@ void Assembler::uxtab(Register dst, // Instruction details available in ARM DDI 0406C.b, A8.8.271. // cond(31-28) | 01101110(27-20) | Rn(19-16) | // Rd(15-12) | rotate(11-10) | 00(9-8)| 0111(7-4) | Rm(3-0) - ASSERT(!dst.is(pc)); - ASSERT(!src1.is(pc)); - ASSERT(!src2.rm().is(pc)); - ASSERT(!src2.rm().is(no_reg)); - ASSERT(src2.rs().is(no_reg)); - ASSERT((src2.shift_imm_ == 0) || + DCHECK(!dst.is(pc)); + DCHECK(!src1.is(pc)); + DCHECK(!src2.rm().is(pc)); + DCHECK(!src2.rm().is(no_reg)); + DCHECK(src2.rs().is(no_reg)); + DCHECK((src2.shift_imm_ == 0) || (src2.shift_imm_ == 8) || (src2.shift_imm_ == 16) || (src2.shift_imm_ == 24)); // Operand maps ROR #0 to LSL #0. - ASSERT((src2.shift_op() == ROR) || + DCHECK((src2.shift_op() == ROR) || ((src2.shift_op() == LSL) && (src2.shift_imm_ == 0))); emit(cond | 0x6E*B20 | src1.code()*B16 | dst.code()*B12 | ((src2.shift_imm_ >> 1) &0xC)*B8 | 7*B4 | src2.rm().code()); @@ -1829,16 +1813,16 @@ void Assembler::uxtb16(Register dst, // Instruction details available in ARM DDI 0406C.b, A8.8.275. // cond(31-28) | 01101100(27-20) | 1111(19-16) | // Rd(15-12) | rotate(11-10) | 00(9-8)| 0111(7-4) | Rm(3-0) - ASSERT(!dst.is(pc)); - ASSERT(!src.rm().is(pc)); - ASSERT(!src.rm().is(no_reg)); - ASSERT(src.rs().is(no_reg)); - ASSERT((src.shift_imm_ == 0) || + DCHECK(!dst.is(pc)); + DCHECK(!src.rm().is(pc)); + DCHECK(!src.rm().is(no_reg)); + DCHECK(src.rs().is(no_reg)); + DCHECK((src.shift_imm_ == 0) || (src.shift_imm_ == 8) || (src.shift_imm_ == 16) || (src.shift_imm_ == 24)); // Operand maps ROR #0 to LSL #0. - ASSERT((src.shift_op() == ROR) || + DCHECK((src.shift_op() == ROR) || ((src.shift_op() == LSL) && (src.shift_imm_ == 0))); emit(cond | 0x6C*B20 | 0xF*B16 | dst.code()*B12 | ((src.shift_imm_ >> 1)&0xC)*B8 | 7*B4 | src.rm().code()); @@ -1847,20 +1831,20 @@ void Assembler::uxtb16(Register dst, // Status register access instructions. void Assembler::mrs(Register dst, SRegister s, Condition cond) { - ASSERT(!dst.is(pc)); + DCHECK(!dst.is(pc)); emit(cond | B24 | s | 15*B16 | dst.code()*B12); } void Assembler::msr(SRegisterFieldMask fields, const Operand& src, Condition cond) { - ASSERT(fields >= B16 && fields < B20); // at least one field set + DCHECK(fields >= B16 && fields < B20); // at least one field set Instr instr; if (!src.rm_.is_valid()) { // Immediate. uint32_t rotate_imm; uint32_t immed_8; - if (src.must_output_reloc_info(isolate(), this) || + if (src.must_output_reloc_info(this) || !fits_shifter(src.imm32_, &rotate_imm, &immed_8, NULL)) { // Immediate operand cannot be encoded, load it first to register ip. 
move_32_bit_immediate(ip, src); @@ -1869,7 +1853,7 @@ void Assembler::msr(SRegisterFieldMask fields, const Operand& src, } instr = I | rotate_imm*B8 | immed_8; } else { - ASSERT(!src.rs_.is_valid() && src.shift_imm_ == 0); // only rm allowed + DCHECK(!src.rs_.is_valid() && src.shift_imm_ == 0); // only rm allowed instr = src.rm_.code(); } emit(cond | instr | B24 | B21 | fields | 15*B12); @@ -1922,22 +1906,22 @@ void Assembler::ldrsh(Register dst, const MemOperand& src, Condition cond) { void Assembler::ldrd(Register dst1, Register dst2, const MemOperand& src, Condition cond) { - ASSERT(IsEnabled(ARMv7)); - ASSERT(src.rm().is(no_reg)); - ASSERT(!dst1.is(lr)); // r14. - ASSERT_EQ(0, dst1.code() % 2); - ASSERT_EQ(dst1.code() + 1, dst2.code()); + DCHECK(IsEnabled(ARMv7)); + DCHECK(src.rm().is(no_reg)); + DCHECK(!dst1.is(lr)); // r14. + DCHECK_EQ(0, dst1.code() % 2); + DCHECK_EQ(dst1.code() + 1, dst2.code()); addrmod3(cond | B7 | B6 | B4, dst1, src); } void Assembler::strd(Register src1, Register src2, const MemOperand& dst, Condition cond) { - ASSERT(dst.rm().is(no_reg)); - ASSERT(!src1.is(lr)); // r14. - ASSERT_EQ(0, src1.code() % 2); - ASSERT_EQ(src1.code() + 1, src2.code()); - ASSERT(IsEnabled(ARMv7)); + DCHECK(dst.rm().is(no_reg)); + DCHECK(!src1.is(lr)); // r14. + DCHECK_EQ(0, src1.code() % 2); + DCHECK_EQ(src1.code() + 1, src2.code()); + DCHECK(IsEnabled(ARMv7)); addrmod3(cond | B7 | B6 | B5 | B4, src1, dst); } @@ -1947,15 +1931,15 @@ void Assembler::pld(const MemOperand& address) { // Instruction details available in ARM DDI 0406C.b, A8.8.128. // 1111(31-28) | 0111(27-24) | U(23) | R(22) | 01(21-20) | Rn(19-16) | // 1111(15-12) | imm5(11-07) | type(6-5) | 0(4)| Rm(3-0) | - ASSERT(address.rm().is(no_reg)); - ASSERT(address.am() == Offset); + DCHECK(address.rm().is(no_reg)); + DCHECK(address.am() == Offset); int U = B23; int offset = address.offset(); if (offset < 0) { offset = -offset; U = 0; } - ASSERT(offset < 4096); + DCHECK(offset < 4096); emit(kSpecialCondition | B26 | B24 | U | B22 | B20 | address.rn().code()*B16 | 0xf*B12 | offset); } @@ -1967,7 +1951,7 @@ void Assembler::ldm(BlockAddrMode am, RegList dst, Condition cond) { // ABI stack constraint: ldmxx base, {..sp..} base != sp is not restartable. - ASSERT(base.is(sp) || (dst & sp.bit()) == 0); + DCHECK(base.is(sp) || (dst & sp.bit()) == 0); addrmod4(cond | B27 | am | L, base, dst); @@ -1996,7 +1980,7 @@ void Assembler::stm(BlockAddrMode am, // enabling/disabling and a counter feature. See simulator-arm.h . void Assembler::stop(const char* msg, Condition cond, int32_t code) { #ifndef __arm__ - ASSERT(code >= kDefaultStopCode); + DCHECK(code >= kDefaultStopCode); { // The Simulator will handle the stop instruction and get the message // address. It expects to find the address just after the svc instruction. 
@@ -2022,13 +2006,13 @@ void Assembler::stop(const char* msg, Condition cond, int32_t code) { void Assembler::bkpt(uint32_t imm16) { // v5 and above - ASSERT(is_uint16(imm16)); + DCHECK(is_uint16(imm16)); emit(al | B24 | B21 | (imm16 >> 4)*B8 | BKPT | (imm16 & 0xf)); } void Assembler::svc(uint32_t imm24, Condition cond) { - ASSERT(is_uint24(imm24)); + DCHECK(is_uint24(imm24)); emit(cond | 15*B24 | imm24); } @@ -2041,7 +2025,7 @@ void Assembler::cdp(Coprocessor coproc, CRegister crm, int opcode_2, Condition cond) { - ASSERT(is_uint4(opcode_1) && is_uint3(opcode_2)); + DCHECK(is_uint4(opcode_1) && is_uint3(opcode_2)); emit(cond | B27 | B26 | B25 | (opcode_1 & 15)*B20 | crn.code()*B16 | crd.code()*B12 | coproc*B8 | (opcode_2 & 7)*B5 | crm.code()); } @@ -2064,7 +2048,7 @@ void Assembler::mcr(Coprocessor coproc, CRegister crm, int opcode_2, Condition cond) { - ASSERT(is_uint3(opcode_1) && is_uint3(opcode_2)); + DCHECK(is_uint3(opcode_1) && is_uint3(opcode_2)); emit(cond | B27 | B26 | B25 | (opcode_1 & 7)*B21 | crn.code()*B16 | rd.code()*B12 | coproc*B8 | (opcode_2 & 7)*B5 | B4 | crm.code()); } @@ -2087,7 +2071,7 @@ void Assembler::mrc(Coprocessor coproc, CRegister crm, int opcode_2, Condition cond) { - ASSERT(is_uint3(opcode_1) && is_uint3(opcode_2)); + DCHECK(is_uint3(opcode_1) && is_uint3(opcode_2)); emit(cond | B27 | B26 | B25 | (opcode_1 & 7)*B21 | L | crn.code()*B16 | rd.code()*B12 | coproc*B8 | (opcode_2 & 7)*B5 | B4 | crm.code()); } @@ -2119,7 +2103,7 @@ void Assembler::ldc(Coprocessor coproc, LFlag l, Condition cond) { // Unindexed addressing. - ASSERT(is_uint8(option)); + DCHECK(is_uint8(option)); emit(cond | B27 | B26 | U | l | L | rn.code()*B16 | crd.code()*B12 | coproc*B8 | (option & 255)); } @@ -2160,14 +2144,14 @@ void Assembler::vldr(const DwVfpRegister dst, int vd, d; dst.split_code(&vd, &d); - ASSERT(offset >= 0); + DCHECK(offset >= 0); if ((offset % 4) == 0 && (offset / 4) < 256) { emit(cond | 0xD*B24 | u*B23 | d*B22 | B20 | base.code()*B16 | vd*B12 | 0xB*B8 | ((offset / 4) & 255)); } else { // Larger offsets must be handled by computing the correct address // in the ip register. - ASSERT(!base.is(ip)); + DCHECK(!base.is(ip)); if (u == 1) { add(ip, base, Operand(offset)); } else { @@ -2181,9 +2165,14 @@ void Assembler::vldr(const DwVfpRegister dst, void Assembler::vldr(const DwVfpRegister dst, const MemOperand& operand, const Condition cond) { - ASSERT(!operand.rm().is_valid()); - ASSERT(operand.am_ == Offset); - vldr(dst, operand.rn(), operand.offset(), cond); + DCHECK(operand.am_ == Offset); + if (operand.rm().is_valid()) { + add(ip, operand.rn(), + Operand(operand.rm(), operand.shift_op_, operand.shift_imm_)); + vldr(dst, ip, 0, cond); + } else { + vldr(dst, operand.rn(), operand.offset(), cond); + } } @@ -2202,7 +2191,7 @@ void Assembler::vldr(const SwVfpRegister dst, } int sd, d; dst.split_code(&sd, &d); - ASSERT(offset >= 0); + DCHECK(offset >= 0); if ((offset % 4) == 0 && (offset / 4) < 256) { emit(cond | u*B23 | d*B22 | 0xD1*B20 | base.code()*B16 | sd*B12 | @@ -2210,7 +2199,7 @@ void Assembler::vldr(const SwVfpRegister dst, } else { // Larger offsets must be handled by computing the correct address // in the ip register. 
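The vldr hunks above fall back to computing the address in ip whenever the immediate form cannot encode the offset; the immediate form only reaches word-aligned offsets below 1024 bytes, with the sign folded into the U bit. A predicate mirroring that check, under those assumptions:

#include <cstdint>

bool vldr_offset_encodable(int32_t offset) {
  if (offset < 0) offset = -offset;        // sign is carried by the U bit
  return (offset % 4) == 0 && (offset / 4) < 256;
}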
- ASSERT(!base.is(ip)); + DCHECK(!base.is(ip)); if (u == 1) { add(ip, base, Operand(offset)); } else { @@ -2224,9 +2213,14 @@ void Assembler::vldr(const SwVfpRegister dst, void Assembler::vldr(const SwVfpRegister dst, const MemOperand& operand, const Condition cond) { - ASSERT(!operand.rm().is_valid()); - ASSERT(operand.am_ == Offset); - vldr(dst, operand.rn(), operand.offset(), cond); + DCHECK(operand.am_ == Offset); + if (operand.rm().is_valid()) { + add(ip, operand.rn(), + Operand(operand.rm(), operand.shift_op_, operand.shift_imm_)); + vldr(dst, ip, 0, cond); + } else { + vldr(dst, operand.rn(), operand.offset(), cond); + } } @@ -2243,7 +2237,7 @@ void Assembler::vstr(const DwVfpRegister src, offset = -offset; u = 0; } - ASSERT(offset >= 0); + DCHECK(offset >= 0); int vd, d; src.split_code(&vd, &d); @@ -2253,7 +2247,7 @@ void Assembler::vstr(const DwVfpRegister src, } else { // Larger offsets must be handled by computing the correct address // in the ip register. - ASSERT(!base.is(ip)); + DCHECK(!base.is(ip)); if (u == 1) { add(ip, base, Operand(offset)); } else { @@ -2267,9 +2261,14 @@ void Assembler::vstr(const DwVfpRegister src, void Assembler::vstr(const DwVfpRegister src, const MemOperand& operand, const Condition cond) { - ASSERT(!operand.rm().is_valid()); - ASSERT(operand.am_ == Offset); - vstr(src, operand.rn(), operand.offset(), cond); + DCHECK(operand.am_ == Offset); + if (operand.rm().is_valid()) { + add(ip, operand.rn(), + Operand(operand.rm(), operand.shift_op_, operand.shift_imm_)); + vstr(src, ip, 0, cond); + } else { + vstr(src, operand.rn(), operand.offset(), cond); + } } @@ -2288,14 +2287,14 @@ void Assembler::vstr(const SwVfpRegister src, } int sd, d; src.split_code(&sd, &d); - ASSERT(offset >= 0); + DCHECK(offset >= 0); if ((offset % 4) == 0 && (offset / 4) < 256) { emit(cond | u*B23 | d*B22 | 0xD0*B20 | base.code()*B16 | sd*B12 | 0xA*B8 | ((offset / 4) & 255)); } else { // Larger offsets must be handled by computing the correct address // in the ip register. - ASSERT(!base.is(ip)); + DCHECK(!base.is(ip)); if (u == 1) { add(ip, base, Operand(offset)); } else { @@ -2309,9 +2308,14 @@ void Assembler::vstr(const SwVfpRegister src, void Assembler::vstr(const SwVfpRegister src, const MemOperand& operand, const Condition cond) { - ASSERT(!operand.rm().is_valid()); - ASSERT(operand.am_ == Offset); - vstr(src, operand.rn(), operand.offset(), cond); + DCHECK(operand.am_ == Offset); + if (operand.rm().is_valid()) { + add(ip, operand.rn(), + Operand(operand.rm(), operand.shift_op_, operand.shift_imm_)); + vstr(src, ip, 0, cond); + } else { + vstr(src, operand.rn(), operand.offset(), cond); + } } @@ -2323,14 +2327,14 @@ void Assembler::vldm(BlockAddrMode am, // Instruction details available in ARM DDI 0406C.b, A8-922. // cond(31-28) | 110(27-25)| PUDW1(24-20) | Rbase(19-16) | // first(15-12) | 1011(11-8) | (count * 2) - ASSERT_LE(first.code(), last.code()); - ASSERT(am == ia || am == ia_w || am == db_w); - ASSERT(!base.is(pc)); + DCHECK_LE(first.code(), last.code()); + DCHECK(am == ia || am == ia_w || am == db_w); + DCHECK(!base.is(pc)); int sd, d; first.split_code(&sd, &d); int count = last.code() - first.code() + 1; - ASSERT(count <= 16); + DCHECK(count <= 16); emit(cond | B27 | B26 | am | d*B22 | B20 | base.code()*B16 | sd*B12 | 0xB*B8 | count*2); } @@ -2344,14 +2348,14 @@ void Assembler::vstm(BlockAddrMode am, // Instruction details available in ARM DDI 0406C.b, A8-1080. 
// cond(31-28) | 110(27-25)| PUDW0(24-20) | Rbase(19-16) | // first(15-12) | 1011(11-8) | (count * 2) - ASSERT_LE(first.code(), last.code()); - ASSERT(am == ia || am == ia_w || am == db_w); - ASSERT(!base.is(pc)); + DCHECK_LE(first.code(), last.code()); + DCHECK(am == ia || am == ia_w || am == db_w); + DCHECK(!base.is(pc)); int sd, d; first.split_code(&sd, &d); int count = last.code() - first.code() + 1; - ASSERT(count <= 16); + DCHECK(count <= 16); emit(cond | B27 | B26 | am | d*B22 | base.code()*B16 | sd*B12 | 0xB*B8 | count*2); } @@ -2364,9 +2368,9 @@ void Assembler::vldm(BlockAddrMode am, // Instruction details available in ARM DDI 0406A, A8-626. // cond(31-28) | 110(27-25)| PUDW1(24-20) | Rbase(19-16) | // first(15-12) | 1010(11-8) | (count/2) - ASSERT_LE(first.code(), last.code()); - ASSERT(am == ia || am == ia_w || am == db_w); - ASSERT(!base.is(pc)); + DCHECK_LE(first.code(), last.code()); + DCHECK(am == ia || am == ia_w || am == db_w); + DCHECK(!base.is(pc)); int sd, d; first.split_code(&sd, &d); @@ -2384,9 +2388,9 @@ void Assembler::vstm(BlockAddrMode am, // Instruction details available in ARM DDI 0406A, A8-784. // cond(31-28) | 110(27-25)| PUDW0(24-20) | Rbase(19-16) | // first(15-12) | 1011(11-8) | (count/2) - ASSERT_LE(first.code(), last.code()); - ASSERT(am == ia || am == ia_w || am == db_w); - ASSERT(!base.is(pc)); + DCHECK_LE(first.code(), last.code()); + DCHECK(am == ia || am == ia_w || am == db_w); + DCHECK(!base.is(pc)); int sd, d; first.split_code(&sd, &d); @@ -2398,7 +2402,7 @@ void Assembler::vstm(BlockAddrMode am, static void DoubleAsTwoUInt32(double d, uint32_t* lo, uint32_t* hi) { uint64_t i; - OS::MemCopy(&i, &d, 8); + memcpy(&i, &d, 8); *lo = i & 0xffffffff; *hi = i >> 32; @@ -2408,7 +2412,7 @@ static void DoubleAsTwoUInt32(double d, uint32_t* lo, uint32_t* hi) { // Only works for little endian floating point formats. // We don't support VFP on the mixed endian floating point platform. static bool FitsVMOVDoubleImmediate(double d, uint32_t *encoding) { - ASSERT(CpuFeatures::IsSupported(VFP3)); + DCHECK(CpuFeatures::IsSupported(VFP3)); // VMOV can accept an immediate of the form: // @@ -2470,7 +2474,7 @@ void Assembler::vmov(const DwVfpRegister dst, int vd, d; dst.split_code(&vd, &d); emit(al | 0x1D*B23 | d*B22 | 0x3*B20 | vd*B12 | 0x5*B9 | B8 | enc); - } else if (FLAG_enable_vldr_imm && can_use_constant_pool()) { + } else if (FLAG_enable_vldr_imm && is_constant_pool_available()) { // TODO(jfb) Temporarily turned off until we have constant blinding or // some equivalent mitigation: an attacker can otherwise control // generated data which also happens to be executable, a Very Bad @@ -2487,8 +2491,18 @@ void Assembler::vmov(const DwVfpRegister dst, // that's tricky because vldr has a limited reach. Furthermore // it breaks load locality. RelocInfo rinfo(pc_, imm); - ConstantPoolAddEntry(rinfo); - vldr(dst, MemOperand(FLAG_enable_ool_constant_pool ? pp : pc, 0)); + ConstantPoolArray::LayoutSection section = ConstantPoolAddEntry(rinfo); + if (section == ConstantPoolArray::EXTENDED_SECTION) { + DCHECK(FLAG_enable_ool_constant_pool); + // Emit instructions to load constant pool offset. + movw(ip, 0); + movt(ip, 0); + // Load from constant pool at offset. + vldr(dst, MemOperand(pp, ip)); + } else { + DCHECK(section == ConstantPoolArray::SMALL_SECTION); + vldr(dst, MemOperand(FLAG_enable_ool_constant_pool ? pp : pc, 0)); + } } else { // Synthesise the double from ARM immediates. 
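DoubleAsTwoUInt32 above now uses plain memcpy rather than OS::MemCopy to split a double's bit pattern, part of this patch's platform-abstraction cleanup. A standalone usage sketch of the same technique:

#include <cstdint>
#include <cstdio>
#include <cstring>

int main() {
  double d = 1.0;
  uint64_t bits;
  std::memcpy(&bits, &d, 8);  // well-defined type pun, as in the hunk above
  std::printf("hi=%08x lo=%08x\n",
              static_cast<unsigned>(bits >> 32),
              static_cast<unsigned>(bits & 0xffffffff));
  return 0;  // prints hi=3ff00000 lo=00000000 for IEEE-754 1.0
}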
uint32_t lo, hi; @@ -2562,7 +2576,7 @@ void Assembler::vmov(const DwVfpRegister dst, // Instruction details available in ARM DDI 0406C.b, A8-940. // cond(31-28) | 1110(27-24) | 0(23) | opc1=0index(22-21) | 0(20) | // Vd(19-16) | Rt(15-12) | 1011(11-8) | D(7) | opc2=00(6-5) | 1(4) | 0000(3-0) - ASSERT(index.index == 0 || index.index == 1); + DCHECK(index.index == 0 || index.index == 1); int vd, d; dst.split_code(&vd, &d); emit(cond | 0xE*B24 | index.index*B21 | vd*B16 | src.code()*B12 | 0xB*B8 | @@ -2578,7 +2592,7 @@ void Assembler::vmov(const Register dst, // Instruction details available in ARM DDI 0406C.b, A8.8.342. // cond(31-28) | 1110(27-24) | U=0(23) | opc1=0index(22-21) | 1(20) | // Vn(19-16) | Rt(15-12) | 1011(11-8) | N(7) | opc2=00(6-5) | 1(4) | 0000(3-0) - ASSERT(index.index == 0 || index.index == 1); + DCHECK(index.index == 0 || index.index == 1); int vn, n; src.split_code(&vn, &n); emit(cond | 0xE*B24 | index.index*B21 | B20 | vn*B16 | dst.code()*B12 | @@ -2594,7 +2608,7 @@ void Assembler::vmov(const DwVfpRegister dst, // Instruction details available in ARM DDI 0406C.b, A8-948. // cond(31-28) | 1100(27-24)| 010(23-21) | op=0(20) | Rt2(19-16) | // Rt(15-12) | 1011(11-8) | 00(7-6) | M(5) | 1(4) | Vm - ASSERT(!src1.is(pc) && !src2.is(pc)); + DCHECK(!src1.is(pc) && !src2.is(pc)); int vm, m; dst.split_code(&vm, &m); emit(cond | 0xC*B24 | B22 | src2.code()*B16 | @@ -2610,7 +2624,7 @@ void Assembler::vmov(const Register dst1, // Instruction details available in ARM DDI 0406C.b, A8-948. // cond(31-28) | 1100(27-24)| 010(23-21) | op=1(20) | Rt2(19-16) | // Rt(15-12) | 1011(11-8) | 00(7-6) | M(5) | 1(4) | Vm - ASSERT(!dst1.is(pc) && !dst2.is(pc)); + DCHECK(!dst1.is(pc) && !dst2.is(pc)); int vm, m; src.split_code(&vm, &m); emit(cond | 0xC*B24 | B22 | B20 | dst2.code()*B16 | @@ -2625,7 +2639,7 @@ void Assembler::vmov(const SwVfpRegister dst, // Instruction details available in ARM DDI 0406A, A8-642. // cond(31-28) | 1110(27-24)| 000(23-21) | op=0(20) | Vn(19-16) | // Rt(15-12) | 1010(11-8) | N(7)=0 | 00(6-5) | 1(4) | 0000(3-0) - ASSERT(!src.is(pc)); + DCHECK(!src.is(pc)); int sn, n; dst.split_code(&sn, &n); emit(cond | 0xE*B24 | sn*B16 | src.code()*B12 | 0xA*B8 | n*B7 | B4); @@ -2639,7 +2653,7 @@ void Assembler::vmov(const Register dst, // Instruction details available in ARM DDI 0406A, A8-642. // cond(31-28) | 1110(27-24)| 000(23-21) | op=1(20) | Vn(19-16) | // Rt(15-12) | 1010(11-8) | N(7)=0 | 00(6-5) | 1(4) | 0000(3-0) - ASSERT(!dst.is(pc)); + DCHECK(!dst.is(pc)); int sn, n; src.split_code(&sn, &n); emit(cond | 0xE*B24 | B20 | sn*B16 | dst.code()*B12 | 0xA*B8 | n*B7 | B4); @@ -2700,7 +2714,7 @@ static void SplitRegCode(VFPType reg_type, int reg_code, int* vm, int* m) { - ASSERT((reg_code >= 0) && (reg_code <= 31)); + DCHECK((reg_code >= 0) && (reg_code <= 31)); if (IsIntegerVFPType(reg_type) || !IsDoubleVFPType(reg_type)) { // 32 bit type. *m = reg_code & 0x1; @@ -2720,7 +2734,7 @@ static Instr EncodeVCVT(const VFPType dst_type, const int src_code, VFPConversionMode mode, const Condition cond) { - ASSERT(src_type != dst_type); + DCHECK(src_type != dst_type); int D, Vd, M, Vm; SplitRegCode(src_type, src_code, &Vm, &M); SplitRegCode(dst_type, dst_code, &Vd, &D); @@ -2730,7 +2744,7 @@ static Instr EncodeVCVT(const VFPType dst_type, // Instruction details available in ARM DDI 0406B, A8.6.295. 
// cond(31-28) | 11101(27-23)| D(22) | 11(21-20) | 1(19) | opc2(18-16) | // Vd(15-12) | 101(11-9) | sz(8) | op(7) | 1(6) | M(5) | 0(4) | Vm(3-0) - ASSERT(!IsIntegerVFPType(dst_type) || !IsIntegerVFPType(src_type)); + DCHECK(!IsIntegerVFPType(dst_type) || !IsIntegerVFPType(src_type)); int sz, opc2, op; @@ -2739,7 +2753,7 @@ static Instr EncodeVCVT(const VFPType dst_type, sz = IsDoubleVFPType(src_type) ? 0x1 : 0x0; op = mode; } else { - ASSERT(IsIntegerVFPType(src_type)); + DCHECK(IsIntegerVFPType(src_type)); opc2 = 0x0; sz = IsDoubleVFPType(dst_type) ? 0x1 : 0x0; op = IsSignedVFPType(src_type) ? 0x1 : 0x0; @@ -2821,8 +2835,8 @@ void Assembler::vcvt_f64_s32(const DwVfpRegister dst, // Instruction details available in ARM DDI 0406C.b, A8-874. // cond(31-28) | 11101(27-23) | D(22) | 11(21-20) | 1010(19-16) | Vd(15-12) | // 101(11-9) | sf=1(8) | sx=1(7) | 1(6) | i(5) | 0(4) | imm4(3-0) - ASSERT(fraction_bits > 0 && fraction_bits <= 32); - ASSERT(CpuFeatures::IsSupported(VFP3)); + DCHECK(fraction_bits > 0 && fraction_bits <= 32); + DCHECK(CpuFeatures::IsSupported(VFP3)); int vd, d; dst.split_code(&vd, &d); int imm5 = 32 - fraction_bits; @@ -3003,7 +3017,7 @@ void Assembler::vcmp(const DwVfpRegister src1, // Instruction details available in ARM DDI 0406C.b, A8-864. // cond(31-28) | 11101(27-23)| D(22) | 11(21-20) | 0101(19-16) | // Vd(15-12) | 101(11-9) | sz=1(8) | E=0(7) | 1(6) | 0(5) | 0(4) | 0000(3-0) - ASSERT(src2 == 0.0); + DCHECK(src2 == 0.0); int vd, d; src1.split_code(&vd, &d); emit(cond | 0x1D*B23 | d*B22 | 0x3*B20 | 0x5*B16 | vd*B12 | 0x5*B9 | B8 | B6); @@ -3051,7 +3065,7 @@ void Assembler::vld1(NeonSize size, // Instruction details available in ARM DDI 0406C.b, A8.8.320. // 1111(31-28) | 01000(27-23) | D(22) | 10(21-20) | Rn(19-16) | // Vd(15-12) | type(11-8) | size(7-6) | align(5-4) | Rm(3-0) - ASSERT(CpuFeatures::IsSupported(NEON)); + DCHECK(CpuFeatures::IsSupported(NEON)); int vd, d; dst.base().split_code(&vd, &d); emit(0xFU*B28 | 4*B24 | d*B22 | 2*B20 | src.rn().code()*B16 | vd*B12 | @@ -3065,7 +3079,7 @@ void Assembler::vst1(NeonSize size, // Instruction details available in ARM DDI 0406C.b, A8.8.404. // 1111(31-28) | 01000(27-23) | D(22) | 00(21-20) | Rn(19-16) | // Vd(15-12) | type(11-8) | size(7-6) | align(5-4) | Rm(3-0) - ASSERT(CpuFeatures::IsSupported(NEON)); + DCHECK(CpuFeatures::IsSupported(NEON)); int vd, d; src.base().split_code(&vd, &d); emit(0xFU*B28 | 4*B24 | d*B22 | dst.rn().code()*B16 | vd*B12 | src.type()*B8 | @@ -3077,7 +3091,7 @@ void Assembler::vmovl(NeonDataType dt, QwNeonRegister dst, DwVfpRegister src) { // Instruction details available in ARM DDI 0406C.b, A8.8.346. // 1111(31-28) | 001(27-25) | U(24) | 1(23) | D(22) | imm3(21-19) | // 000(18-16) | Vd(15-12) | 101000(11-6) | M(5) | 1(4) | Vm(3-0) - ASSERT(CpuFeatures::IsSupported(NEON)); + DCHECK(CpuFeatures::IsSupported(NEON)); int vd, d; dst.split_code(&vd, &d); int vm, m; @@ -3094,7 +3108,7 @@ void Assembler::nop(int type) { // MOV Rx, Rx as NOP and it performs better even in newer CPUs. // We therefore use MOV Rx, Rx, even on newer CPUs, and use Rx to encode // a type. - ASSERT(0 <= type && type <= 14); // mov pc, pc isn't a nop. + DCHECK(0 <= type && type <= 14); // mov pc, pc isn't a nop. 
emit(al | 13*B21 | type*B12 | type); } @@ -3103,7 +3117,7 @@ bool Assembler::IsMovT(Instr instr) { instr &= ~(((kNumberOfConditions - 1) << 28) | // Mask off conditions ((kNumRegisters-1)*B12) | // mask out register EncodeMovwImmediate(0xFFFF)); // mask out immediate value - return instr == 0x34*B20; + return instr == kMovtPattern; } @@ -3111,17 +3125,36 @@ bool Assembler::IsMovW(Instr instr) { instr &= ~(((kNumberOfConditions - 1) << 28) | // Mask off conditions ((kNumRegisters-1)*B12) | // mask out destination EncodeMovwImmediate(0xFFFF)); // mask out immediate value - return instr == 0x30*B20; + return instr == kMovwPattern; +} + + +Instr Assembler::GetMovTPattern() { return kMovtPattern; } + + +Instr Assembler::GetMovWPattern() { return kMovwPattern; } + + +Instr Assembler::EncodeMovwImmediate(uint32_t immediate) { + DCHECK(immediate < 0x10000); + return ((immediate & 0xf000) << 4) | (immediate & 0xfff); +} + + +Instr Assembler::PatchMovwImmediate(Instr instruction, uint32_t immediate) { + instruction &= ~EncodeMovwImmediate(0xffff); + return instruction | EncodeMovwImmediate(immediate); } bool Assembler::IsNop(Instr instr, int type) { - ASSERT(0 <= type && type <= 14); // mov pc, pc isn't a nop. + DCHECK(0 <= type && type <= 14); // mov pc, pc isn't a nop. // Check for mov rx, rx where x = type. return instr == (al | 13*B21 | type*B12 | type); } +// static bool Assembler::ImmediateFitsAddrMode1Instruction(int32_t imm32) { uint32_t dummy1; uint32_t dummy2; @@ -3169,9 +3202,7 @@ void Assembler::GrowBuffer() { // Compute new buffer size. CodeDesc desc; // the new buffer - if (buffer_size_ < 4*KB) { - desc.buffer_size = 4*KB; - } else if (buffer_size_ < 1*MB) { + if (buffer_size_ < 1 * MB) { desc.buffer_size = 2*buffer_size_; } else { desc.buffer_size = buffer_size_ + 1*MB; @@ -3187,9 +3218,9 @@ void Assembler::GrowBuffer() { // Copy the data. int pc_delta = desc.buffer - buffer_; int rc_delta = (desc.buffer + desc.buffer_size) - (buffer_ + buffer_size_); - OS::MemMove(desc.buffer, buffer_, desc.instr_size); - OS::MemMove(reloc_info_writer.pos() + rc_delta, - reloc_info_writer.pos(), desc.reloc_size); + MemMove(desc.buffer, buffer_, desc.instr_size); + MemMove(reloc_info_writer.pos() + rc_delta, reloc_info_writer.pos(), + desc.reloc_size); // Switch buffers. DeleteArray(buffer_); @@ -3206,7 +3237,7 @@ void Assembler::GrowBuffer() { // Relocate pending relocation entries. for (int i = 0; i < num_pending_32_bit_reloc_info_; i++) { RelocInfo& rinfo = pending_32_bit_reloc_info_[i]; - ASSERT(rinfo.rmode() != RelocInfo::COMMENT && + DCHECK(rinfo.rmode() != RelocInfo::COMMENT && rinfo.rmode() != RelocInfo::POSITION); if (rinfo.rmode() != RelocInfo::JS_RETURN) { rinfo.set_pc(rinfo.pc() + pc_delta); @@ -3214,7 +3245,7 @@ void Assembler::GrowBuffer() { } for (int i = 0; i < num_pending_64_bit_reloc_info_; i++) { RelocInfo& rinfo = pending_64_bit_reloc_info_[i]; - ASSERT(rinfo.rmode() == RelocInfo::NONE64); + DCHECK(rinfo.rmode() == RelocInfo::NONE64); rinfo.set_pc(rinfo.pc() + pc_delta); } constant_pool_builder_.Relocate(pc_delta); @@ -3225,8 +3256,8 @@ void Assembler::db(uint8_t data) { // No relocation info should be pending while using db. db is used // to write pure data with no pointers and the constant pool should // be emitted before using db. 
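The GrowBuffer() change above drops the old 4KB floor: the buffer now doubles until it reaches 1MB and then grows linearly by 1MB per step. The policy as a standalone function:

int next_buffer_size(int current) {
  const int MB = 1024 * 1024;
  return (current < 1 * MB) ? 2 * current : current + 1 * MB;
}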
- ASSERT(num_pending_32_bit_reloc_info_ == 0); - ASSERT(num_pending_64_bit_reloc_info_ == 0); + DCHECK(num_pending_32_bit_reloc_info_ == 0); + DCHECK(num_pending_64_bit_reloc_info_ == 0); CheckBuffer(); *reinterpret_cast<uint8_t*>(pc_) = data; pc_ += sizeof(uint8_t); @@ -3237,8 +3268,8 @@ void Assembler::dd(uint32_t data) { // No relocation info should be pending while using dd. dd is used // to write pure data with no pointers and the constant pool should // be emitted before using dd. - ASSERT(num_pending_32_bit_reloc_info_ == 0); - ASSERT(num_pending_64_bit_reloc_info_ == 0); + DCHECK(num_pending_32_bit_reloc_info_ == 0); + DCHECK(num_pending_64_bit_reloc_info_ == 0); CheckBuffer(); *reinterpret_cast<uint32_t*>(pc_) = data; pc_ += sizeof(uint32_t); @@ -3262,12 +3293,11 @@ void Assembler::RecordRelocInfo(RelocInfo::Mode rmode, intptr_t data) { void Assembler::RecordRelocInfo(const RelocInfo& rinfo) { if (!RelocInfo::IsNone(rinfo.rmode())) { // Don't record external references unless the heap will be serialized. - if (rinfo.rmode() == RelocInfo::EXTERNAL_REFERENCE) { - if (!Serializer::enabled(isolate()) && !emit_debug_code()) { - return; - } + if (rinfo.rmode() == RelocInfo::EXTERNAL_REFERENCE && + !serializer_enabled() && !emit_debug_code()) { + return; } - ASSERT(buffer_space() >= kMaxRelocSize); // too late to grow buffer here + DCHECK(buffer_space() >= kMaxRelocSize); // too late to grow buffer here if (rinfo.rmode() == RelocInfo::CODE_TARGET_WITH_ID) { RelocInfo reloc_info_with_ast_id(rinfo.pc(), rinfo.rmode(), @@ -3282,18 +3312,19 @@ void Assembler::RecordRelocInfo(const RelocInfo& rinfo) { } -void Assembler::ConstantPoolAddEntry(const RelocInfo& rinfo) { +ConstantPoolArray::LayoutSection Assembler::ConstantPoolAddEntry( + const RelocInfo& rinfo) { if (FLAG_enable_ool_constant_pool) { - constant_pool_builder_.AddEntry(this, rinfo); + return constant_pool_builder_.AddEntry(this, rinfo); } else { if (rinfo.rmode() == RelocInfo::NONE64) { - ASSERT(num_pending_64_bit_reloc_info_ < kMaxNumPending64RelocInfo); + DCHECK(num_pending_64_bit_reloc_info_ < kMaxNumPending64RelocInfo); if (num_pending_64_bit_reloc_info_ == 0) { first_const_pool_64_use_ = pc_offset(); } pending_64_bit_reloc_info_[num_pending_64_bit_reloc_info_++] = rinfo; } else { - ASSERT(num_pending_32_bit_reloc_info_ < kMaxNumPending32RelocInfo); + DCHECK(num_pending_32_bit_reloc_info_ < kMaxNumPending32RelocInfo); if (num_pending_32_bit_reloc_info_ == 0) { first_const_pool_32_use_ = pc_offset(); } @@ -3302,6 +3333,7 @@ void Assembler::ConstantPoolAddEntry(const RelocInfo& rinfo) { // Make sure the constant pool is not emitted in place of the next // instruction for which we just recorded relocation info. BlockConstPoolFor(1); + return ConstantPoolArray::SMALL_SECTION; } } @@ -3309,8 +3341,8 @@ void Assembler::ConstantPoolAddEntry(const RelocInfo& rinfo) { void Assembler::BlockConstPoolFor(int instructions) { if (FLAG_enable_ool_constant_pool) { // Should be a no-op if using an out-of-line constant pool. - ASSERT(num_pending_32_bit_reloc_info_ == 0); - ASSERT(num_pending_64_bit_reloc_info_ == 0); + DCHECK(num_pending_32_bit_reloc_info_ == 0); + DCHECK(num_pending_64_bit_reloc_info_ == 0); return; } @@ -3319,10 +3351,10 @@ void Assembler::BlockConstPoolFor(int instructions) { // Max pool start (if we need a jump and an alignment). 
#ifdef DEBUG int start = pc_limit + kInstrSize + 2 * kPointerSize; - ASSERT((num_pending_32_bit_reloc_info_ == 0) || + DCHECK((num_pending_32_bit_reloc_info_ == 0) || (start - first_const_pool_32_use_ + num_pending_64_bit_reloc_info_ * kDoubleSize < kMaxDistToIntPool)); - ASSERT((num_pending_64_bit_reloc_info_ == 0) || + DCHECK((num_pending_64_bit_reloc_info_ == 0) || (start - first_const_pool_64_use_ < kMaxDistToFPPool)); #endif no_const_pool_before_ = pc_limit; @@ -3337,8 +3369,8 @@ void Assembler::BlockConstPoolFor(int instructions) { void Assembler::CheckConstPool(bool force_emit, bool require_jump) { if (FLAG_enable_ool_constant_pool) { // Should be a no-op if using an out-of-line constant pool. - ASSERT(num_pending_32_bit_reloc_info_ == 0); - ASSERT(num_pending_64_bit_reloc_info_ == 0); + DCHECK(num_pending_32_bit_reloc_info_ == 0); + DCHECK(num_pending_64_bit_reloc_info_ == 0); return; } @@ -3347,7 +3379,7 @@ void Assembler::CheckConstPool(bool force_emit, bool require_jump) { // BlockConstPoolScope. if (is_const_pool_blocked()) { // Something is wrong if emission is forced and blocked at the same time. - ASSERT(!force_emit); + DCHECK(!force_emit); return; } @@ -3386,7 +3418,7 @@ void Assembler::CheckConstPool(bool force_emit, bool require_jump) { // * the instruction doesn't require a jump after itself to jump over the // constant pool, and we're getting close to running out of range. if (!force_emit) { - ASSERT((first_const_pool_32_use_ >= 0) || (first_const_pool_64_use_ >= 0)); + DCHECK((first_const_pool_32_use_ >= 0) || (first_const_pool_64_use_ >= 0)); bool need_emit = false; if (has_fp_values) { int dist64 = pc_offset() + @@ -3436,15 +3468,15 @@ void Assembler::CheckConstPool(bool force_emit, bool require_jump) { for (int i = 0; i < num_pending_64_bit_reloc_info_; i++) { RelocInfo& rinfo = pending_64_bit_reloc_info_[i]; - ASSERT(!((uintptr_t)pc_ & 0x7)); // Check 64-bit alignment. + DCHECK(!((uintptr_t)pc_ & 0x7)); // Check 64-bit alignment. Instr instr = instr_at(rinfo.pc()); // Instruction to patch must be 'vldr rd, [pc, #offset]' with offset == 0. - ASSERT((IsVldrDPcImmediateOffset(instr) && + DCHECK((IsVldrDPcImmediateOffset(instr) && GetVldrDRegisterImmediateOffset(instr) == 0)); int delta = pc_ - rinfo.pc() - kPcLoadDelta; - ASSERT(is_uint10(delta)); + DCHECK(is_uint10(delta)); bool found = false; uint64_t value = rinfo.raw_data64(); @@ -3452,9 +3484,9 @@ void Assembler::CheckConstPool(bool force_emit, bool require_jump) { RelocInfo& rinfo2 = pending_64_bit_reloc_info_[j]; if (value == rinfo2.raw_data64()) { found = true; - ASSERT(rinfo2.rmode() == RelocInfo::NONE64); + DCHECK(rinfo2.rmode() == RelocInfo::NONE64); Instr instr2 = instr_at(rinfo2.pc()); - ASSERT(IsVldrDPcImmediateOffset(instr2)); + DCHECK(IsVldrDPcImmediateOffset(instr2)); delta = GetVldrDRegisterImmediateOffset(instr2); delta += rinfo2.pc() - rinfo.pc(); break; @@ -3473,7 +3505,7 @@ void Assembler::CheckConstPool(bool force_emit, bool require_jump) { // Emit 32-bit constant pool entries. for (int i = 0; i < num_pending_32_bit_reloc_info_; i++) { RelocInfo& rinfo = pending_32_bit_reloc_info_[i]; - ASSERT(rinfo.rmode() != RelocInfo::COMMENT && + DCHECK(rinfo.rmode() != RelocInfo::COMMENT && rinfo.rmode() != RelocInfo::POSITION && rinfo.rmode() != RelocInfo::STATEMENT_POSITION && rinfo.rmode() != RelocInfo::CONST_POOL && @@ -3482,20 +3514,19 @@ void Assembler::CheckConstPool(bool force_emit, bool require_jump) { Instr instr = instr_at(rinfo.pc()); // 64-bit loads shouldn't get here. 
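The 64-bit pool emission above de-duplicates by value: before emitting a constant, it scans earlier pending entries for the same raw_data64() bits and, on a hit, patches the load to reuse that slot's offset. A sketch of the reuse scan, with hypothetical types:

#include <cstdint>
#include <vector>

// Index of an earlier entry holding the same bit pattern, or -1 if none.
int find_shared_slot(const std::vector<uint64_t>& emitted, uint64_t value) {
  for (size_t j = 0; j < emitted.size(); j++) {
    if (emitted[j] == value) return static_cast<int>(j);
  }
  return -1;
}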
- ASSERT(!IsVldrDPcImmediateOffset(instr)); + DCHECK(!IsVldrDPcImmediateOffset(instr)); if (IsLdrPcImmediateOffset(instr) && GetLdrRegisterImmediateOffset(instr) == 0) { int delta = pc_ - rinfo.pc() - kPcLoadDelta; - ASSERT(is_uint12(delta)); + DCHECK(is_uint12(delta)); // 0 is the smallest delta: // ldr rd, [pc, #0] // constant pool marker // data bool found = false; - if (!Serializer::enabled(isolate()) && - (rinfo.rmode() >= RelocInfo::CELL)) { + if (!serializer_enabled() && rinfo.rmode() >= RelocInfo::CELL) { for (int j = 0; j < i; j++) { RelocInfo& rinfo2 = pending_32_bit_reloc_info_[j]; @@ -3518,7 +3549,7 @@ void Assembler::CheckConstPool(bool force_emit, bool require_jump) { emit(rinfo.data()); } } else { - ASSERT(IsMovW(instr)); + DCHECK(IsMovW(instr)); } } @@ -3554,12 +3585,7 @@ void Assembler::PopulateConstantPool(ConstantPoolArray* constant_pool) { ConstantPoolBuilder::ConstantPoolBuilder() - : entries_(), - merged_indexes_(), - count_of_64bit_(0), - count_of_code_ptr_(0), - count_of_heap_ptr_(0), - count_of_32bit_(0) { } + : entries_(), current_section_(ConstantPoolArray::SMALL_SECTION) {} bool ConstantPoolBuilder::IsEmpty() { @@ -3567,83 +3593,70 @@ bool ConstantPoolBuilder::IsEmpty() { } -bool ConstantPoolBuilder::Is64BitEntry(RelocInfo::Mode rmode) { - return rmode == RelocInfo::NONE64; -} - - -bool ConstantPoolBuilder::Is32BitEntry(RelocInfo::Mode rmode) { - return !RelocInfo::IsGCRelocMode(rmode) && rmode != RelocInfo::NONE64; -} - - -bool ConstantPoolBuilder::IsCodePtrEntry(RelocInfo::Mode rmode) { - return RelocInfo::IsCodeTarget(rmode); -} - - -bool ConstantPoolBuilder::IsHeapPtrEntry(RelocInfo::Mode rmode) { - return RelocInfo::IsGCRelocMode(rmode) && !RelocInfo::IsCodeTarget(rmode); +ConstantPoolArray::Type ConstantPoolBuilder::GetConstantPoolType( + RelocInfo::Mode rmode) { + if (rmode == RelocInfo::NONE64) { + return ConstantPoolArray::INT64; + } else if (!RelocInfo::IsGCRelocMode(rmode)) { + return ConstantPoolArray::INT32; + } else if (RelocInfo::IsCodeTarget(rmode)) { + return ConstantPoolArray::CODE_PTR; + } else { + DCHECK(RelocInfo::IsGCRelocMode(rmode) && !RelocInfo::IsCodeTarget(rmode)); + return ConstantPoolArray::HEAP_PTR; + } } -void ConstantPoolBuilder::AddEntry(Assembler* assm, - const RelocInfo& rinfo) { +ConstantPoolArray::LayoutSection ConstantPoolBuilder::AddEntry( + Assembler* assm, const RelocInfo& rinfo) { RelocInfo::Mode rmode = rinfo.rmode(); - ASSERT(rmode != RelocInfo::COMMENT && + DCHECK(rmode != RelocInfo::COMMENT && rmode != RelocInfo::POSITION && rmode != RelocInfo::STATEMENT_POSITION && rmode != RelocInfo::CONST_POOL); - // Try to merge entries which won't be patched. int merged_index = -1; + ConstantPoolArray::LayoutSection entry_section = current_section_; if (RelocInfo::IsNone(rmode) || - (!Serializer::enabled(assm->isolate()) && (rmode >= RelocInfo::CELL))) { + (!assm->serializer_enabled() && (rmode >= RelocInfo::CELL))) { size_t i; - std::vector<RelocInfo>::const_iterator it; + std::vector<ConstantPoolEntry>::const_iterator it; for (it = entries_.begin(), i = 0; it != entries_.end(); it++, i++) { - if (RelocInfo::IsEqual(rinfo, *it)) { + if (RelocInfo::IsEqual(rinfo, it->rinfo_)) { + // Merge with found entry. 
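GetConstantPoolType above folds the four old Is*Entry predicates into a single rmode classifier, which the merge logic that follows reuses per layout section. The same decision ladder, condensed with a hypothetical standalone enum:

enum PoolEntryType { INT64, INT32, CODE_PTR, HEAP_PTR };

PoolEntryType classify(bool is_none64, bool is_gc_reloc, bool is_code_target) {
  if (is_none64) return INT64;                   // 64-bit data entry
  if (!is_gc_reloc) return INT32;                // plain 32-bit data
  return is_code_target ? CODE_PTR : HEAP_PTR;   // GC-visited pointers
}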
merged_index = i; + entry_section = entries_[i].section_; break; } } } - - entries_.push_back(rinfo); - merged_indexes_.push_back(merged_index); + DCHECK(entry_section <= current_section_); + entries_.push_back(ConstantPoolEntry(rinfo, entry_section, merged_index)); if (merged_index == -1) { // Not merged, so update the appropriate count. - if (Is64BitEntry(rmode)) { - count_of_64bit_++; - } else if (Is32BitEntry(rmode)) { - count_of_32bit_++; - } else if (IsCodePtrEntry(rmode)) { - count_of_code_ptr_++; - } else { - ASSERT(IsHeapPtrEntry(rmode)); - count_of_heap_ptr_++; - } + number_of_entries_[entry_section].increment(GetConstantPoolType(rmode)); } - // Check if we still have room for another entry given Arm's ldr and vldr - // immediate offset range. - if (!(is_uint12(ConstantPoolArray::SizeFor(count_of_64bit_, - count_of_code_ptr_, - count_of_heap_ptr_, - count_of_32bit_))) && - is_uint10(ConstantPoolArray::SizeFor(count_of_64bit_, 0, 0, 0))) { - assm->set_constant_pool_full(); + // Check if we still have room for another entry in the small section + // given Arm's ldr and vldr immediate offset range. + if (current_section_ == ConstantPoolArray::SMALL_SECTION && + !(is_uint12(ConstantPoolArray::SizeFor(*small_entries())) && + is_uint10(ConstantPoolArray::MaxInt64Offset( + small_entries()->count_of(ConstantPoolArray::INT64))))) { + current_section_ = ConstantPoolArray::EXTENDED_SECTION; } + return entry_section; } void ConstantPoolBuilder::Relocate(int pc_delta) { - for (std::vector<RelocInfo>::iterator rinfo = entries_.begin(); - rinfo != entries_.end(); rinfo++) { - ASSERT(rinfo->rmode() != RelocInfo::JS_RETURN); - rinfo->set_pc(rinfo->pc() + pc_delta); + for (std::vector<ConstantPoolEntry>::iterator entry = entries_.begin(); + entry != entries_.end(); entry++) { + DCHECK(entry->rinfo_.rmode() != RelocInfo::JS_RETURN); + entry->rinfo_.set_pc(entry->rinfo_.pc() + pc_delta); } } @@ -3651,84 +3664,104 @@ void ConstantPoolBuilder::Relocate(int pc_delta) { Handle<ConstantPoolArray> ConstantPoolBuilder::New(Isolate* isolate) { if (IsEmpty()) { return isolate->factory()->empty_constant_pool_array(); + } else if (extended_entries()->is_empty()) { + return isolate->factory()->NewConstantPoolArray(*small_entries()); } else { - return isolate->factory()->NewConstantPoolArray(count_of_64bit_, - count_of_code_ptr_, - count_of_heap_ptr_, - count_of_32bit_); + DCHECK(current_section_ == ConstantPoolArray::EXTENDED_SECTION); + return isolate->factory()->NewExtendedConstantPoolArray( + *small_entries(), *extended_entries()); } } void ConstantPoolBuilder::Populate(Assembler* assm, ConstantPoolArray* constant_pool) { - ASSERT(constant_pool->count_of_int64_entries() == count_of_64bit_); - ASSERT(constant_pool->count_of_code_ptr_entries() == count_of_code_ptr_); - ASSERT(constant_pool->count_of_heap_ptr_entries() == count_of_heap_ptr_); - ASSERT(constant_pool->count_of_int32_entries() == count_of_32bit_); - ASSERT(entries_.size() == merged_indexes_.size()); - - int index_64bit = 0; - int index_code_ptr = count_of_64bit_; - int index_heap_ptr = count_of_64bit_ + count_of_code_ptr_; - int index_32bit = count_of_64bit_ + count_of_code_ptr_ + count_of_heap_ptr_; - - size_t i; - std::vector<RelocInfo>::const_iterator rinfo; - for (rinfo = entries_.begin(), i = 0; rinfo != entries_.end(); rinfo++, i++) { - RelocInfo::Mode rmode = rinfo->rmode(); + DCHECK_EQ(extended_entries()->is_empty(), + !constant_pool->is_extended_layout()); + DCHECK(small_entries()->equals(ConstantPoolArray::NumberOfEntries( + 
constant_pool, ConstantPoolArray::SMALL_SECTION)));
+  if (constant_pool->is_extended_layout()) {
+    DCHECK(extended_entries()->equals(ConstantPoolArray::NumberOfEntries(
+        constant_pool, ConstantPoolArray::EXTENDED_SECTION)));
+  }
+
+  // Set up initial offsets.
+  int offsets[ConstantPoolArray::NUMBER_OF_LAYOUT_SECTIONS]
+             [ConstantPoolArray::NUMBER_OF_TYPES];
+  for (int section = 0; section <= constant_pool->final_section(); section++) {
+    int section_start = (section == ConstantPoolArray::EXTENDED_SECTION)
+                            ? small_entries()->total_count()
+                            : 0;
+    for (int i = 0; i < ConstantPoolArray::NUMBER_OF_TYPES; i++) {
+      ConstantPoolArray::Type type = static_cast<ConstantPoolArray::Type>(i);
+      if (number_of_entries_[section].count_of(type) != 0) {
+        offsets[section][type] = constant_pool->OffsetOfElementAt(
+            number_of_entries_[section].base_of(type) + section_start);
+      }
+    }
+  }
+
+  for (std::vector<ConstantPoolEntry>::iterator entry = entries_.begin();
+       entry != entries_.end(); entry++) {
+    RelocInfo rinfo = entry->rinfo_;
+    RelocInfo::Mode rmode = entry->rinfo_.rmode();
+    ConstantPoolArray::Type type = GetConstantPoolType(rmode);
     // Update constant pool if necessary and get the entry's offset.
     int offset;
-    if (merged_indexes_[i] == -1) {
-      if (Is64BitEntry(rmode)) {
-        offset = constant_pool->OffsetOfElementAt(index_64bit) - kHeapObjectTag;
-        constant_pool->set(index_64bit++, rinfo->data64());
-      } else if (Is32BitEntry(rmode)) {
-        offset = constant_pool->OffsetOfElementAt(index_32bit) - kHeapObjectTag;
-        constant_pool->set(index_32bit++, static_cast<int32_t>(rinfo->data()));
-      } else if (IsCodePtrEntry(rmode)) {
-        offset = constant_pool->OffsetOfElementAt(index_code_ptr) -
-            kHeapObjectTag;
-        constant_pool->set(index_code_ptr++,
-                           reinterpret_cast<Object *>(rinfo->data()));
+    if (entry->merged_index_ == -1) {
+      offset = offsets[entry->section_][type];
+      offsets[entry->section_][type] += ConstantPoolArray::entry_size(type);
+      if (type == ConstantPoolArray::INT64) {
+        constant_pool->set_at_offset(offset, rinfo.data64());
+      } else if (type == ConstantPoolArray::INT32) {
+        constant_pool->set_at_offset(offset,
+                                     static_cast<int32_t>(rinfo.data()));
+      } else if (type == ConstantPoolArray::CODE_PTR) {
+        constant_pool->set_at_offset(offset,
+                                     reinterpret_cast<Address>(rinfo.data()));
       } else {
-        ASSERT(IsHeapPtrEntry(rmode));
-        offset = constant_pool->OffsetOfElementAt(index_heap_ptr) -
-            kHeapObjectTag;
-        constant_pool->set(index_heap_ptr++,
-                           reinterpret_cast<Object *>(rinfo->data()));
+        DCHECK(type == ConstantPoolArray::HEAP_PTR);
+        constant_pool->set_at_offset(offset,
+                                     reinterpret_cast<Object*>(rinfo.data()));
       }
-      merged_indexes_[i] = offset;  // Stash offset for merged entries.
+      offset -= kHeapObjectTag;
+      entry->merged_index_ = offset;  // Stash offset for merged entries.
     } else {
-      size_t merged_index = static_cast<size_t>(merged_indexes_[i]);
-      ASSERT(merged_index < merged_indexes_.size() && merged_index < i);
-      offset = merged_indexes_[merged_index];
+      DCHECK(entry->merged_index_ < (entry - entries_.begin()));
+      offset = entries_[entry->merged_index_].merged_index_;
     }

     // Patch vldr/ldr instruction with correct offset.
-    Instr instr = assm->instr_at(rinfo->pc());
-    if (Is64BitEntry(rmode)) {
+    Instr instr = assm->instr_at(rinfo.pc());
+    if (entry->section_ == ConstantPoolArray::EXTENDED_SECTION) {
+      // Instructions to patch must be 'movw rd, [#0]' and 'movt rd, [#0]'.
+ Instr next_instr = assm->instr_at(rinfo.pc() + Assembler::kInstrSize); + DCHECK((Assembler::IsMovW(instr) && + Instruction::ImmedMovwMovtValue(instr) == 0)); + DCHECK((Assembler::IsMovT(next_instr) && + Instruction::ImmedMovwMovtValue(next_instr) == 0)); + assm->instr_at_put(rinfo.pc(), + Assembler::PatchMovwImmediate(instr, offset & 0xffff)); + assm->instr_at_put( + rinfo.pc() + Assembler::kInstrSize, + Assembler::PatchMovwImmediate(next_instr, offset >> 16)); + } else if (type == ConstantPoolArray::INT64) { // Instruction to patch must be 'vldr rd, [pp, #0]'. - ASSERT((Assembler::IsVldrDPpImmediateOffset(instr) && + DCHECK((Assembler::IsVldrDPpImmediateOffset(instr) && Assembler::GetVldrDRegisterImmediateOffset(instr) == 0)); - ASSERT(is_uint10(offset)); - assm->instr_at_put(rinfo->pc(), - Assembler::SetVldrDRegisterImmediateOffset(instr, offset)); + DCHECK(is_uint10(offset)); + assm->instr_at_put(rinfo.pc(), Assembler::SetVldrDRegisterImmediateOffset( + instr, offset)); } else { // Instruction to patch must be 'ldr rd, [pp, #0]'. - ASSERT((Assembler::IsLdrPpImmediateOffset(instr) && + DCHECK((Assembler::IsLdrPpImmediateOffset(instr) && Assembler::GetLdrRegisterImmediateOffset(instr) == 0)); - ASSERT(is_uint12(offset)); - assm->instr_at_put(rinfo->pc(), - Assembler::SetLdrRegisterImmediateOffset(instr, offset)); + DCHECK(is_uint12(offset)); + assm->instr_at_put( + rinfo.pc(), Assembler::SetLdrRegisterImmediateOffset(instr, offset)); } } - - ASSERT((index_64bit == count_of_64bit_) && - (index_code_ptr == (index_64bit + count_of_code_ptr_)) && - (index_heap_ptr == (index_code_ptr + count_of_heap_ptr_)) && - (index_32bit == (index_heap_ptr + count_of_32bit_))); } diff --git a/deps/v8/src/arm/assembler-arm.h b/deps/v8/src/arm/assembler-arm.h index 1c6a7f04f86..e33f48a05b8 100644 --- a/deps/v8/src/arm/assembler-arm.h +++ b/deps/v8/src/arm/assembler-arm.h @@ -43,78 +43,13 @@ #include <stdio.h> #include <vector> -#include "assembler.h" -#include "constants-arm.h" -#include "serialize.h" +#include "src/arm/constants-arm.h" +#include "src/assembler.h" +#include "src/serialize.h" namespace v8 { namespace internal { -// CpuFeatures keeps track of which features are supported by the target CPU. -// Supported features must be enabled by a CpuFeatureScope before use. -class CpuFeatures : public AllStatic { - public: - // Detect features of the target CPU. Set safe defaults if the serializer - // is enabled (snapshots must be portable). - static void Probe(bool serializer_enabled); - - // Display target use when compiling. - static void PrintTarget(); - - // Display features. - static void PrintFeatures(); - - // Check whether a feature is supported by the target CPU. 
- static bool IsSupported(CpuFeature f) { - ASSERT(initialized_); - return Check(f, supported_); - } - - static bool IsSafeForSnapshot(Isolate* isolate, CpuFeature f) { - return Check(f, cross_compile_) || - (IsSupported(f) && - !(Serializer::enabled(isolate) && - Check(f, found_by_runtime_probing_only_))); - } - - static unsigned cache_line_size() { return cache_line_size_; } - - static bool VerifyCrossCompiling() { - return cross_compile_ == 0; - } - - static bool VerifyCrossCompiling(CpuFeature f) { - unsigned mask = flag2set(f); - return cross_compile_ == 0 || - (cross_compile_ & mask) == mask; - } - - static bool SupportsCrankshaft() { return CpuFeatures::IsSupported(VFP3); } - - private: - static bool Check(CpuFeature f, unsigned set) { - return (set & flag2set(f)) != 0; - } - - static unsigned flag2set(CpuFeature f) { - return 1u << f; - } - -#ifdef DEBUG - static bool initialized_; -#endif - static unsigned supported_; - static unsigned found_by_runtime_probing_only_; - static unsigned cache_line_size_; - - static unsigned cross_compile_; - - friend class ExternalReference; - friend class PlatformFeatureScope; - DISALLOW_COPY_AND_ASSIGN(CpuFeatures); -}; - - // CPU Registers. // // 1) We would prefer to use an enum, but enum values are assignment- @@ -165,17 +100,17 @@ struct Register { inline static int NumAllocatableRegisters(); static int ToAllocationIndex(Register reg) { - ASSERT(reg.code() < kMaxNumAllocatableRegisters); + DCHECK(reg.code() < kMaxNumAllocatableRegisters); return reg.code(); } static Register FromAllocationIndex(int index) { - ASSERT(index >= 0 && index < kMaxNumAllocatableRegisters); + DCHECK(index >= 0 && index < kMaxNumAllocatableRegisters); return from_code(index); } static const char* AllocationIndexToString(int index) { - ASSERT(index >= 0 && index < kMaxNumAllocatableRegisters); + DCHECK(index >= 0 && index < kMaxNumAllocatableRegisters); const char* const names[] = { "r0", "r1", @@ -201,17 +136,17 @@ struct Register { bool is_valid() const { return 0 <= code_ && code_ < kNumRegisters; } bool is(Register reg) const { return code_ == reg.code_; } int code() const { - ASSERT(is_valid()); + DCHECK(is_valid()); return code_; } int bit() const { - ASSERT(is_valid()); + DCHECK(is_valid()); return 1 << code_; } void set_code(int code) { code_ = code; - ASSERT(is_valid()); + DCHECK(is_valid()); } // Unfortunately we can't make this private in a struct. 
@@ -247,15 +182,15 @@ struct SwVfpRegister { bool is_valid() const { return 0 <= code_ && code_ < 32; } bool is(SwVfpRegister reg) const { return code_ == reg.code_; } int code() const { - ASSERT(is_valid()); + DCHECK(is_valid()); return code_; } int bit() const { - ASSERT(is_valid()); + DCHECK(is_valid()); return 1 << code_; } void split_code(int* vm, int* m) const { - ASSERT(is_valid()); + DCHECK(is_valid()); *m = code_ & 0x1; *vm = code_ >> 1; } @@ -297,15 +232,15 @@ struct DwVfpRegister { } bool is(DwVfpRegister reg) const { return code_ == reg.code_; } int code() const { - ASSERT(is_valid()); + DCHECK(is_valid()); return code_; } int bit() const { - ASSERT(is_valid()); + DCHECK(is_valid()); return 1 << code_; } void split_code(int* vm, int* m) const { - ASSERT(is_valid()); + DCHECK(is_valid()); *m = (code_ & 0x10) >> 4; *vm = code_ & 0x0F; } @@ -336,21 +271,21 @@ struct LowDwVfpRegister { bool is(DwVfpRegister reg) const { return code_ == reg.code_; } bool is(LowDwVfpRegister reg) const { return code_ == reg.code_; } int code() const { - ASSERT(is_valid()); + DCHECK(is_valid()); return code_; } SwVfpRegister low() const { SwVfpRegister reg; reg.code_ = code_ * 2; - ASSERT(reg.is_valid()); + DCHECK(reg.is_valid()); return reg; } SwVfpRegister high() const { SwVfpRegister reg; reg.code_ = (code_ * 2) + 1; - ASSERT(reg.is_valid()); + DCHECK(reg.is_valid()); return reg; } @@ -372,11 +307,11 @@ struct QwNeonRegister { } bool is(QwNeonRegister reg) const { return code_ == reg.code_; } int code() const { - ASSERT(is_valid()); + DCHECK(is_valid()); return code_; } void split_code(int* vm, int* m) const { - ASSERT(is_valid()); + DCHECK(is_valid()); int encoded_code = code_ << 1; *m = (encoded_code & 0x10) >> 4; *vm = encoded_code & 0x0F; @@ -490,11 +425,11 @@ struct CRegister { bool is_valid() const { return 0 <= code_ && code_ < 16; } bool is(CRegister creg) const { return code_ == creg.code_; } int code() const { - ASSERT(is_valid()); + DCHECK(is_valid()); return code_; } int bit() const { - ASSERT(is_valid()); + DCHECK(is_valid()); return 1 << code_; } @@ -583,19 +518,22 @@ class Operand BASE_EMBEDDED { // Return true if this is a register operand. INLINE(bool is_reg() const); - // Return true if this operand fits in one instruction so that no - // 2-instruction solution with a load into the ip register is necessary. If + // Return the number of actual instructions required to implement the given + // instruction for this particular operand. This can be a single instruction, + // if no load into the ip register is necessary, or anything between 2 and 4 + // instructions when we need to load from the constant pool (depending upon + // whether the constant pool entry is in the small or extended section). If // the instruction this operand is used for is a MOV or MVN instruction the // actual instruction to use is required for this calculation. For other // instructions instr is ignored. - bool is_single_instruction(Isolate* isolate, - const Assembler* assembler, - Instr instr = 0) const; - bool must_output_reloc_info(Isolate* isolate, - const Assembler* assembler) const; + // + // The value returned is only valid as long as no entries are added to the + // constant pool between this call and the actual instruction being emitted. 
+ int instructions_required(const Assembler* assembler, Instr instr = 0) const; + bool must_output_reloc_info(const Assembler* assembler) const; inline int32_t immediate() const { - ASSERT(!rm_.is_valid()); + DCHECK(!rm_.is_valid()); return imm32_; } @@ -643,12 +581,12 @@ class MemOperand BASE_EMBEDDED { } void set_offset(int32_t offset) { - ASSERT(rm_.is(no_reg)); + DCHECK(rm_.is(no_reg)); offset_ = offset; } uint32_t offset() const { - ASSERT(rm_.is(no_reg)); + DCHECK(rm_.is(no_reg)); return offset_; } @@ -711,59 +649,48 @@ class NeonListOperand BASE_EMBEDDED { // Class used to build a constant pool. class ConstantPoolBuilder BASE_EMBEDDED { public: - explicit ConstantPoolBuilder(); - void AddEntry(Assembler* assm, const RelocInfo& rinfo); + ConstantPoolBuilder(); + ConstantPoolArray::LayoutSection AddEntry(Assembler* assm, + const RelocInfo& rinfo); void Relocate(int pc_delta); bool IsEmpty(); Handle<ConstantPoolArray> New(Isolate* isolate); void Populate(Assembler* assm, ConstantPoolArray* constant_pool); - inline int count_of_64bit() const { return count_of_64bit_; } - inline int count_of_code_ptr() const { return count_of_code_ptr_; } - inline int count_of_heap_ptr() const { return count_of_heap_ptr_; } - inline int count_of_32bit() const { return count_of_32bit_; } + inline ConstantPoolArray::LayoutSection current_section() const { + return current_section_; + } - private: - bool Is64BitEntry(RelocInfo::Mode rmode); - bool Is32BitEntry(RelocInfo::Mode rmode); - bool IsCodePtrEntry(RelocInfo::Mode rmode); - bool IsHeapPtrEntry(RelocInfo::Mode rmode); - - // TODO(rmcilroy): This should ideally be a ZoneList, however that would mean - // RelocInfo would need to subclass ZoneObject which it currently doesn't. - std::vector<RelocInfo> entries_; - std::vector<int> merged_indexes_; - int count_of_64bit_; - int count_of_code_ptr_; - int count_of_heap_ptr_; - int count_of_32bit_; -}; + inline ConstantPoolArray::NumberOfEntries* number_of_entries( + ConstantPoolArray::LayoutSection section) { + return &number_of_entries_[section]; + } + inline ConstantPoolArray::NumberOfEntries* small_entries() { + return number_of_entries(ConstantPoolArray::SMALL_SECTION); + } -extern const Instr kMovLrPc; -extern const Instr kLdrPCMask; -extern const Instr kLdrPCPattern; -extern const Instr kLdrPpMask; -extern const Instr kLdrPpPattern; -extern const Instr kBlxRegMask; -extern const Instr kBlxRegPattern; -extern const Instr kBlxIp; + inline ConstantPoolArray::NumberOfEntries* extended_entries() { + return number_of_entries(ConstantPoolArray::EXTENDED_SECTION); + } -extern const Instr kMovMvnMask; -extern const Instr kMovMvnPattern; -extern const Instr kMovMvnFlip; + private: + struct ConstantPoolEntry { + ConstantPoolEntry(RelocInfo rinfo, ConstantPoolArray::LayoutSection section, + int merged_index) + : rinfo_(rinfo), section_(section), merged_index_(merged_index) {} + + RelocInfo rinfo_; + ConstantPoolArray::LayoutSection section_; + int merged_index_; + }; -extern const Instr kMovLeaveCCMask; -extern const Instr kMovLeaveCCPattern; -extern const Instr kMovwMask; -extern const Instr kMovwPattern; -extern const Instr kMovwLeaveCCFlip; + ConstantPoolArray::Type GetConstantPoolType(RelocInfo::Mode rmode); -extern const Instr kCmpCmnMask; -extern const Instr kCmpCmnPattern; -extern const Instr kCmpCmnFlip; -extern const Instr kAddSubFlip; -extern const Instr kAndBicFlip; + std::vector<ConstantPoolEntry> entries_; + ConstantPoolArray::LayoutSection current_section_; + ConstantPoolArray::NumberOfEntries 
number_of_entries_[2]; +}; struct VmovIndex { unsigned char index; @@ -816,13 +743,13 @@ class Assembler : public AssemblerBase { // Manages the jump elimination optimization if the second parameter is true. int branch_offset(Label* L, bool jump_elimination_allowed); - // Return the address in the constant pool of the code target address used by - // the branch/call instruction at pc, or the object in a mov. - INLINE(static Address target_pointer_address_at(Address pc)); + // Returns true if the given pc address is the start of a constant pool load + // instruction sequence. + INLINE(static bool is_constant_pool_load(Address pc)); // Return the address in the constant pool of the code target address used by // the branch/call instruction at pc, or the object in a mov. - INLINE(static Address target_constant_pool_address_at( + INLINE(static Address constant_pool_entry_address( Address pc, ConstantPoolArray* constant_pool)); // Read/Modify the code target address in the branch/call instruction at pc. @@ -830,16 +757,20 @@ class Assembler : public AssemblerBase { ConstantPoolArray* constant_pool)); INLINE(static void set_target_address_at(Address pc, ConstantPoolArray* constant_pool, - Address target)); + Address target, + ICacheFlushMode icache_flush_mode = + FLUSH_ICACHE_IF_NEEDED)); INLINE(static Address target_address_at(Address pc, Code* code)) { ConstantPoolArray* constant_pool = code ? code->constant_pool() : NULL; return target_address_at(pc, constant_pool); } INLINE(static void set_target_address_at(Address pc, Code* code, - Address target)) { + Address target, + ICacheFlushMode icache_flush_mode = + FLUSH_ICACHE_IF_NEEDED)) { ConstantPoolArray* constant_pool = code ? code->constant_pool() : NULL; - set_target_address_at(pc, constant_pool, target); + set_target_address_at(pc, constant_pool, target, icache_flush_mode); } // Return the code target address at a call site from the return address @@ -850,6 +781,9 @@ class Assembler : public AssemblerBase { // in the instruction stream that the call will return from. INLINE(static Address return_address_from_call_start(Address pc)); + // Return the code target address of the patch debug break slot + INLINE(static Address break_address_from_return_address(Address pc)); + // This sets the branch destination (which is in the constant pool on ARM). // This is for calls and branches within generated code. inline static void deserialization_set_special_target_at( @@ -981,10 +915,8 @@ class Assembler : public AssemblerBase { void mov_label_offset(Register dst, Label* label); // ARMv7 instructions for loading a 32 bit immediate in two instructions. - // This may actually emit a different mov instruction, but on an ARMv7 it - // is guaranteed to only emit one instruction. + // The constant for movw and movt should be in the range 0-0xffff. void movw(Register reg, uint32_t immediate, Condition cond = al); - // The constant for movt should be in the range 0-0xffff. 
void movt(Register reg, uint32_t immediate, Condition cond = al); void bic(Register dst, Register src1, const Operand& src2, @@ -993,6 +925,35 @@ class Assembler : public AssemblerBase { void mvn(Register dst, const Operand& src, SBit s = LeaveCC, Condition cond = al); + // Shift instructions + + void asr(Register dst, Register src1, const Operand& src2, SBit s = LeaveCC, + Condition cond = al) { + if (src2.is_reg()) { + mov(dst, Operand(src1, ASR, src2.rm()), s, cond); + } else { + mov(dst, Operand(src1, ASR, src2.immediate()), s, cond); + } + } + + void lsl(Register dst, Register src1, const Operand& src2, SBit s = LeaveCC, + Condition cond = al) { + if (src2.is_reg()) { + mov(dst, Operand(src1, LSL, src2.rm()), s, cond); + } else { + mov(dst, Operand(src1, LSL, src2.immediate()), s, cond); + } + } + + void lsr(Register dst, Register src1, const Operand& src2, SBit s = LeaveCC, + Condition cond = al) { + if (src2.is_reg()) { + mov(dst, Operand(src1, LSR, src2.rm()), s, cond); + } else { + mov(dst, Operand(src1, LSR, src2.immediate()), s, cond); + } + } + // Multiply instructions void mla(Register dst, Register src1, Register src2, Register srcA, @@ -1004,6 +965,8 @@ class Assembler : public AssemblerBase { void sdiv(Register dst, Register src1, Register src2, Condition cond = al); + void udiv(Register dst, Register src1, Register src2, Condition cond = al); + void mul(Register dst, Register src1, Register src2, SBit s = LeaveCC, Condition cond = al); @@ -1361,7 +1324,7 @@ class Assembler : public AssemblerBase { } // Check whether an immediate fits an addressing mode 1 instruction. - bool ImmediateFitsAddrMode1Instruction(int32_t imm32); + static bool ImmediateFitsAddrMode1Instruction(int32_t imm32); // Check whether an immediate fits an addressing mode 2 instruction. bool ImmediateFitsAddrMode2Instruction(int32_t imm32); @@ -1393,12 +1356,12 @@ class Assembler : public AssemblerBase { // Record the AST id of the CallIC being compiled, so that it can be placed // in the relocation information. void SetRecordedAstId(TypeFeedbackId ast_id) { - ASSERT(recorded_ast_id_.IsNone()); + DCHECK(recorded_ast_id_.IsNone()); recorded_ast_id_ = ast_id; } TypeFeedbackId RecordedAstId() { - ASSERT(!recorded_ast_id_.IsNone()); + DCHECK(!recorded_ast_id_.IsNone()); return recorded_ast_id_; } @@ -1416,9 +1379,9 @@ class Assembler : public AssemblerBase { // function, compiled with and without debugger support (see for example // Debug::PrepareForBreakPoints()). // Compiling functions with debugger support generates additional code - // (Debug::GenerateSlot()). This may affect the emission of the constant - // pools and cause the version of the code with debugger support to have - // constant pools generated in different places. + // (DebugCodegen::GenerateSlot()). This may affect the emission of the + // constant pools and cause the version of the code with debugger support to + // have constant pools generated in different places. // Recording the position and size of emitted constant pools allows to // correctly compute the offset mappings between the different versions of a // function in all situations. 
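[Illustrative aside, not part of the patch.] The asr/lsl/lsr helpers added in the hunk above emit no dedicated shift opcodes: ARM folds shifts into the flexible second operand of mov, so each helper simply forwards to mov() with a shifted Operand. A minimal C++ sketch of the register semantics those encodings produce (it assumes >> on a negative int32_t is sign-propagating, which matches ARM asr and every mainstream compiler):

    #include <cassert>
    #include <cstdint>

    int main() {
      int32_t v = -64;  // 0xFFFFFFC0
      assert((v >> 4) == -4);                                  // asr: sign fill
      assert((static_cast<uint32_t>(v) >> 4) == 0x0FFFFFFCu);  // lsr: zero fill
      assert((static_cast<uint32_t>(v) << 2) == 0xFFFFFF00u);  // lsl
      return 0;
    }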
@@ -1453,6 +1416,10 @@ class Assembler : public AssemblerBase { static int GetBranchOffset(Instr instr); static bool IsLdrRegisterImmediate(Instr instr); static bool IsVldrDRegisterImmediate(Instr instr); + static Instr GetConsantPoolLoadPattern(); + static Instr GetConsantPoolLoadMask(); + static bool IsLdrPpRegOffset(Instr instr); + static Instr GetLdrPpRegOffsetPattern(); static bool IsLdrPpImmediateOffset(Instr instr); static bool IsVldrDPpImmediateOffset(Instr instr); static int GetLdrRegisterImmediateOffset(Instr instr); @@ -1474,6 +1441,8 @@ class Assembler : public AssemblerBase { static bool IsLdrRegFpNegOffset(Instr instr); static bool IsLdrPcImmediateOffset(Instr instr); static bool IsVldrDPcImmediateOffset(Instr instr); + static bool IsBlxReg(Instr instr); + static bool IsBlxIp(Instr instr); static bool IsTstImmediate(Instr instr); static bool IsCmpRegister(Instr instr); static bool IsCmpImmediate(Instr instr); @@ -1481,7 +1450,11 @@ class Assembler : public AssemblerBase { static int GetCmpImmediateRawImmediate(Instr instr); static bool IsNop(Instr instr, int type = NON_MARKING_NOP); static bool IsMovT(Instr instr); + static Instr GetMovTPattern(); static bool IsMovW(Instr instr); + static Instr GetMovWPattern(); + static Instr EncodeMovwImmediate(uint32_t immediate); + static Instr PatchMovwImmediate(Instr instruction, uint32_t immediate); // Constants in pools are accessed via pc relative addressing, which can // reach +/-4KB for integer PC-relative loads and +/-1KB for floating-point @@ -1506,14 +1479,14 @@ class Assembler : public AssemblerBase { // Generate the constant pool for the generated code. void PopulateConstantPool(ConstantPoolArray* constant_pool); - bool can_use_constant_pool() const { - return is_constant_pool_available() && !constant_pool_full_; - } + bool is_constant_pool_available() const { return constant_pool_available_; } - void set_constant_pool_full() { - constant_pool_full_ = true; + bool use_extended_constant_pool() const { + return constant_pool_builder_.current_section() == + ConstantPoolArray::EXTENDED_SECTION; } + protected: // Relocation for a type-recording IC has the AST id added to it. This // member variable is a way to pass the information from the call site to @@ -1547,10 +1520,10 @@ class Assembler : public AssemblerBase { // Max pool start (if we need a jump and an alignment). int start = pc_offset() + kInstrSize + 2 * kPointerSize; // Check the constant pool hasn't been blocked for too long. - ASSERT((num_pending_32_bit_reloc_info_ == 0) || + DCHECK((num_pending_32_bit_reloc_info_ == 0) || (start + num_pending_64_bit_reloc_info_ * kDoubleSize < (first_const_pool_32_use_ + kMaxDistToIntPool))); - ASSERT((num_pending_64_bit_reloc_info_ == 0) || + DCHECK((num_pending_64_bit_reloc_info_ == 0) || (start < (first_const_pool_64_use_ + kMaxDistToFPPool))); #endif // Two cases: @@ -1567,10 +1540,6 @@ class Assembler : public AssemblerBase { (pc_offset() < no_const_pool_before_); } - bool is_constant_pool_available() const { - return constant_pool_available_; - } - void set_constant_pool_available(bool available) { constant_pool_available_ = available; } @@ -1640,9 +1609,6 @@ class Assembler : public AssemblerBase { // Indicates whether the constant pool can be accessed, which is only possible // if the pp register points to the current code object's constant pool. bool constant_pool_available_; - // Indicates whether the constant pool is too full to accept new entries due - // to the ldr instruction's limitted immediate offset range. 
- bool constant_pool_full_; // Code emission inline void CheckBuffer(); @@ -1674,7 +1640,7 @@ class Assembler : public AssemblerBase { // Record reloc info for current pc_ void RecordRelocInfo(RelocInfo::Mode rmode, intptr_t data = 0); void RecordRelocInfo(const RelocInfo& rinfo); - void ConstantPoolAddEntry(const RelocInfo& rinfo); + ConstantPoolArray::LayoutSection ConstantPoolAddEntry(const RelocInfo& rinfo); friend class RelocInfo; friend class CodePatcher; diff --git a/deps/v8/src/arm/builtins-arm.cc b/deps/v8/src/arm/builtins-arm.cc index 2e5cc7398c7..60055a6cd3c 100644 --- a/deps/v8/src/arm/builtins-arm.cc +++ b/deps/v8/src/arm/builtins-arm.cc @@ -2,16 +2,16 @@ // Use of this source code is governed by a BSD-style license that can be // found in the LICENSE file. -#include "v8.h" +#include "src/v8.h" #if V8_TARGET_ARCH_ARM -#include "codegen.h" -#include "debug.h" -#include "deoptimizer.h" -#include "full-codegen.h" -#include "runtime.h" -#include "stub-cache.h" +#include "src/codegen.h" +#include "src/debug.h" +#include "src/deoptimizer.h" +#include "src/full-codegen.h" +#include "src/runtime.h" +#include "src/stub-cache.h" namespace v8 { namespace internal { @@ -40,7 +40,7 @@ void Builtins::Generate_Adaptor(MacroAssembler* masm, num_extra_args = 1; __ push(r1); } else { - ASSERT(extra_args == NO_EXTRA_ARGUMENTS); + DCHECK(extra_args == NO_EXTRA_ARGUMENTS); } // JumpToExternalReference expects r0 to contain the number of arguments @@ -303,7 +303,7 @@ void Builtins::Generate_InOptimizationQueue(MacroAssembler* masm) { __ cmp(sp, Operand(ip)); __ b(hs, &ok); - CallRuntimePassFunction(masm, Runtime::kHiddenTryInstallOptimizedCode); + CallRuntimePassFunction(masm, Runtime::kTryInstallOptimizedCode); GenerateTailCallToReturnedCode(masm); __ bind(&ok); @@ -313,7 +313,6 @@ void Builtins::Generate_InOptimizationQueue(MacroAssembler* masm) { static void Generate_JSConstructStubHelper(MacroAssembler* masm, bool is_api_function, - bool count_constructions, bool create_memento) { // ----------- S t a t e ------------- // -- r0 : number of arguments @@ -323,14 +322,8 @@ static void Generate_JSConstructStubHelper(MacroAssembler* masm, // -- sp[...]: constructor arguments // ----------------------------------- - // Should never count constructions for api objects. - ASSERT(!is_api_function || !count_constructions); - // Should never create mementos for api functions. - ASSERT(!is_api_function || !create_memento); - - // Should never create mementos before slack tracking is finished. - ASSERT(!count_constructions || !create_memento); + DCHECK(!is_api_function || !create_memento); Isolate* isolate = masm->isolate(); @@ -375,22 +368,24 @@ static void Generate_JSConstructStubHelper(MacroAssembler* masm, __ CompareInstanceType(r2, r3, JS_FUNCTION_TYPE); __ b(eq, &rt_call); - if (count_constructions) { + if (!is_api_function) { Label allocate; + MemOperand bit_field3 = FieldMemOperand(r2, Map::kBitField3Offset); + // Check if slack tracking is enabled. + __ ldr(r4, bit_field3); + __ DecodeField<Map::ConstructionCount>(r3, r4); + __ cmp(r3, Operand(JSFunction::kNoSlackTracking)); + __ b(eq, &allocate); // Decrease generous allocation count. 
- __ ldr(r3, FieldMemOperand(r1, JSFunction::kSharedFunctionInfoOffset)); - MemOperand constructor_count = - FieldMemOperand(r3, SharedFunctionInfo::kConstructionCountOffset); - __ ldrb(r4, constructor_count); - __ sub(r4, r4, Operand(1), SetCC); - __ strb(r4, constructor_count); + __ sub(r4, r4, Operand(1 << Map::ConstructionCount::kShift)); + __ str(r4, bit_field3); + __ cmp(r3, Operand(JSFunction::kFinishSlackTracking)); __ b(ne, &allocate); __ push(r1); __ Push(r2, r1); // r1 = constructor - // The call will replace the stub, so the countdown is only done once. - __ CallRuntime(Runtime::kHiddenFinalizeInstanceSize, 1); + __ CallRuntime(Runtime::kFinalizeInstanceSize, 1); __ pop(r2); __ pop(r1); @@ -416,11 +411,11 @@ static void Generate_JSConstructStubHelper(MacroAssembler* masm, // r4: JSObject (not tagged) __ LoadRoot(r6, Heap::kEmptyFixedArrayRootIndex); __ mov(r5, r4); - ASSERT_EQ(0 * kPointerSize, JSObject::kMapOffset); + DCHECK_EQ(0 * kPointerSize, JSObject::kMapOffset); __ str(r2, MemOperand(r5, kPointerSize, PostIndex)); - ASSERT_EQ(1 * kPointerSize, JSObject::kPropertiesOffset); + DCHECK_EQ(1 * kPointerSize, JSObject::kPropertiesOffset); __ str(r6, MemOperand(r5, kPointerSize, PostIndex)); - ASSERT_EQ(2 * kPointerSize, JSObject::kElementsOffset); + DCHECK_EQ(2 * kPointerSize, JSObject::kElementsOffset); __ str(r6, MemOperand(r5, kPointerSize, PostIndex)); // Fill all the in-object properties with the appropriate filler. @@ -429,10 +424,19 @@ static void Generate_JSConstructStubHelper(MacroAssembler* masm, // r3: object size (in words, including memento if create_memento) // r4: JSObject (not tagged) // r5: First in-object property of JSObject (not tagged) - ASSERT_EQ(3 * kPointerSize, JSObject::kHeaderSize); + DCHECK_EQ(3 * kPointerSize, JSObject::kHeaderSize); + __ LoadRoot(r6, Heap::kUndefinedValueRootIndex); + + if (!is_api_function) { + Label no_inobject_slack_tracking; + + // Check if slack tracking is enabled. + __ ldr(ip, FieldMemOperand(r2, Map::kBitField3Offset)); + __ DecodeField<Map::ConstructionCount>(ip); + __ cmp(ip, Operand(JSFunction::kNoSlackTracking)); + __ b(eq, &no_inobject_slack_tracking); - if (count_constructions) { - __ LoadRoot(r6, Heap::kUndefinedValueRootIndex); + // Allocate object with a slack. __ ldr(r0, FieldMemOperand(r2, Map::kInstanceSizesOffset)); __ Ubfx(r0, r0, Map::kPreAllocatedPropertyFieldsByte * kBitsPerByte, kBitsPerByte); @@ -446,25 +450,26 @@ static void Generate_JSConstructStubHelper(MacroAssembler* masm, __ InitializeFieldsWithFiller(r5, r0, r6); // To allow for truncation. __ LoadRoot(r6, Heap::kOnePointerFillerMapRootIndex); - __ add(r0, r4, Operand(r3, LSL, kPointerSizeLog2)); // End of object. - __ InitializeFieldsWithFiller(r5, r0, r6); - } else if (create_memento) { - __ sub(r6, r3, Operand(AllocationMemento::kSize / kPointerSize)); - __ add(r0, r4, Operand(r6, LSL, kPointerSizeLog2)); // End of object. - __ LoadRoot(r6, Heap::kUndefinedValueRootIndex); + // Fill the remaining fields with one pointer filler map. + + __ bind(&no_inobject_slack_tracking); + } + + if (create_memento) { + __ sub(ip, r3, Operand(AllocationMemento::kSize / kPointerSize)); + __ add(r0, r4, Operand(ip, LSL, kPointerSizeLog2)); // End of object. __ InitializeFieldsWithFiller(r5, r0, r6); // Fill in memento fields. // r5: points to the allocated but uninitialized memento. 
__ LoadRoot(r6, Heap::kAllocationMementoMapRootIndex); - ASSERT_EQ(0 * kPointerSize, AllocationMemento::kMapOffset); + DCHECK_EQ(0 * kPointerSize, AllocationMemento::kMapOffset); __ str(r6, MemOperand(r5, kPointerSize, PostIndex)); // Load the AllocationSite __ ldr(r6, MemOperand(sp, 2 * kPointerSize)); - ASSERT_EQ(1 * kPointerSize, AllocationMemento::kAllocationSiteOffset); + DCHECK_EQ(1 * kPointerSize, AllocationMemento::kAllocationSiteOffset); __ str(r6, MemOperand(r5, kPointerSize, PostIndex)); } else { - __ LoadRoot(r6, Heap::kUndefinedValueRootIndex); __ add(r0, r4, Operand(r3, LSL, kPointerSizeLog2)); // End of object. __ InitializeFieldsWithFiller(r5, r0, r6); } @@ -517,9 +522,9 @@ static void Generate_JSConstructStubHelper(MacroAssembler* masm, // r5: FixedArray (not tagged) __ LoadRoot(r6, Heap::kFixedArrayMapRootIndex); __ mov(r2, r5); - ASSERT_EQ(0 * kPointerSize, JSObject::kMapOffset); + DCHECK_EQ(0 * kPointerSize, JSObject::kMapOffset); __ str(r6, MemOperand(r2, kPointerSize, PostIndex)); - ASSERT_EQ(1 * kPointerSize, FixedArray::kLengthOffset); + DCHECK_EQ(1 * kPointerSize, FixedArray::kLengthOffset); __ SmiTag(r0, r3); __ str(r0, MemOperand(r2, kPointerSize, PostIndex)); @@ -530,7 +535,7 @@ static void Generate_JSConstructStubHelper(MacroAssembler* masm, // r4: JSObject // r5: FixedArray (not tagged) __ add(r6, r2, Operand(r3, LSL, kPointerSizeLog2)); // End of object. - ASSERT_EQ(2 * kPointerSize, FixedArray::kHeaderSize); + DCHECK_EQ(2 * kPointerSize, FixedArray::kHeaderSize); { Label loop, entry; __ LoadRoot(r0, Heap::kUndefinedValueRootIndex); __ b(&entry); @@ -573,9 +578,9 @@ static void Generate_JSConstructStubHelper(MacroAssembler* masm, __ push(r1); // argument for Runtime_NewObject if (create_memento) { - __ CallRuntime(Runtime::kHiddenNewObjectWithAllocationSite, 2); + __ CallRuntime(Runtime::kNewObjectWithAllocationSite, 2); } else { - __ CallRuntime(Runtime::kHiddenNewObject, 1); + __ CallRuntime(Runtime::kNewObject, 1); } __ mov(r4, r0); @@ -655,7 +660,7 @@ static void Generate_JSConstructStubHelper(MacroAssembler* masm, } // Store offset of return address for deoptimizer. - if (!is_api_function && !count_constructions) { + if (!is_api_function) { masm->isolate()->heap()->SetConstructStubDeoptPCOffset(masm->pc_offset()); } @@ -707,18 +712,13 @@ static void Generate_JSConstructStubHelper(MacroAssembler* masm, } -void Builtins::Generate_JSConstructStubCountdown(MacroAssembler* masm) { - Generate_JSConstructStubHelper(masm, false, true, false); -} - - void Builtins::Generate_JSConstructStubGeneric(MacroAssembler* masm) { - Generate_JSConstructStubHelper(masm, false, false, FLAG_pretenuring_call_new); + Generate_JSConstructStubHelper(masm, false, FLAG_pretenuring_call_new); } void Builtins::Generate_JSConstructStubApi(MacroAssembler* masm) { - Generate_JSConstructStubHelper(masm, true, false, false); + Generate_JSConstructStubHelper(masm, true, false); } @@ -809,7 +809,7 @@ void Builtins::Generate_JSConstructEntryTrampoline(MacroAssembler* masm) { void Builtins::Generate_CompileUnoptimized(MacroAssembler* masm) { - CallRuntimePassFunction(masm, Runtime::kHiddenCompileUnoptimized); + CallRuntimePassFunction(masm, Runtime::kCompileUnoptimized); GenerateTailCallToReturnedCode(masm); } @@ -823,7 +823,7 @@ static void CallCompileOptimized(MacroAssembler* masm, bool concurrent) { // Whether to compile in a background thread. 
__ Push(masm->isolate()->factory()->ToBoolean(concurrent)); - __ CallRuntime(Runtime::kHiddenCompileOptimized, 2); + __ CallRuntime(Runtime::kCompileOptimized, 2); // Restore receiver. __ pop(r1); } @@ -918,7 +918,7 @@ static void Generate_NotifyStubFailureHelper(MacroAssembler* masm, // registers. __ stm(db_w, sp, kJSCallerSaved | kCalleeSaved); // Pass the function and deoptimization type to the runtime system. - __ CallRuntime(Runtime::kHiddenNotifyStubFailure, 0, save_doubles); + __ CallRuntime(Runtime::kNotifyStubFailure, 0, save_doubles); __ ldm(ia_w, sp, kJSCallerSaved | kCalleeSaved); } @@ -944,7 +944,7 @@ static void Generate_NotifyDeoptimizedHelper(MacroAssembler* masm, // Pass the function and deoptimization type to the runtime system. __ mov(r0, Operand(Smi::FromInt(static_cast<int>(type)))); __ push(r0); - __ CallRuntime(Runtime::kHiddenNotifyDeoptimized, 1); + __ CallRuntime(Runtime::kNotifyDeoptimized, 1); } // Get the full codegen state from the stack and untag it -> r6. @@ -1035,7 +1035,7 @@ void Builtins::Generate_OsrAfterStackCheck(MacroAssembler* masm) { __ b(hs, &ok); { FrameAndConstantPoolScope scope(masm, StackFrame::INTERNAL); - __ CallRuntime(Runtime::kHiddenStackGuard, 0); + __ CallRuntime(Runtime::kStackGuard, 0); } __ Jump(masm->isolate()->builtins()->OnStackReplacement(), RelocInfo::CODE_TARGET); @@ -1071,7 +1071,7 @@ void Builtins::Generate_FunctionCall(MacroAssembler* masm) { // r1: function Label shift_arguments; __ mov(r4, Operand::Zero()); // indicate regular JS_FUNCTION - { Label convert_to_object, use_global_receiver, patch_receiver; + { Label convert_to_object, use_global_proxy, patch_receiver; // Change context eagerly in case we need the global receiver. __ ldr(cp, FieldMemOperand(r1, JSFunction::kContextOffset)); @@ -1096,10 +1096,10 @@ void Builtins::Generate_FunctionCall(MacroAssembler* masm) { __ LoadRoot(r3, Heap::kUndefinedValueRootIndex); __ cmp(r2, r3); - __ b(eq, &use_global_receiver); + __ b(eq, &use_global_proxy); __ LoadRoot(r3, Heap::kNullValueRootIndex); __ cmp(r2, r3); - __ b(eq, &use_global_receiver); + __ b(eq, &use_global_proxy); STATIC_ASSERT(LAST_SPEC_OBJECT_TYPE == LAST_TYPE); __ CompareObjectType(r2, r3, r3, FIRST_SPEC_OBJECT_TYPE); @@ -1128,9 +1128,9 @@ void Builtins::Generate_FunctionCall(MacroAssembler* masm) { __ mov(r4, Operand::Zero()); __ jmp(&patch_receiver); - __ bind(&use_global_receiver); + __ bind(&use_global_proxy); __ ldr(r2, ContextOperand(cp, Context::GLOBAL_OBJECT_INDEX)); - __ ldr(r2, FieldMemOperand(r2, GlobalObject::kGlobalReceiverOffset)); + __ ldr(r2, FieldMemOperand(r2, GlobalObject::kGlobalProxyOffset)); __ bind(&patch_receiver); __ add(r3, sp, Operand(r0, LSL, kPointerSizeLog2)); @@ -1284,7 +1284,7 @@ void Builtins::Generate_FunctionApply(MacroAssembler* masm) { // Compute the receiver. // Do not transform the receiver for strict mode functions. 
- Label call_to_object, use_global_receiver; + Label call_to_object, use_global_proxy; __ ldr(r2, FieldMemOperand(r2, SharedFunctionInfo::kCompilerHintsOffset)); __ tst(r2, Operand(1 << (SharedFunctionInfo::kStrictModeFunction + kSmiTagSize))); @@ -1298,10 +1298,10 @@ void Builtins::Generate_FunctionApply(MacroAssembler* masm) { __ JumpIfSmi(r0, &call_to_object); __ LoadRoot(r1, Heap::kNullValueRootIndex); __ cmp(r0, r1); - __ b(eq, &use_global_receiver); + __ b(eq, &use_global_proxy); __ LoadRoot(r1, Heap::kUndefinedValueRootIndex); __ cmp(r0, r1); - __ b(eq, &use_global_receiver); + __ b(eq, &use_global_proxy); // Check if the receiver is already a JavaScript object. // r0: receiver @@ -1316,9 +1316,9 @@ void Builtins::Generate_FunctionApply(MacroAssembler* masm) { __ InvokeBuiltin(Builtins::TO_OBJECT, CALL_FUNCTION); __ b(&push_receiver); - __ bind(&use_global_receiver); + __ bind(&use_global_proxy); __ ldr(r0, ContextOperand(cp, Context::GLOBAL_OBJECT_INDEX)); - __ ldr(r0, FieldMemOperand(r0, GlobalObject::kGlobalReceiverOffset)); + __ ldr(r0, FieldMemOperand(r0, GlobalObject::kGlobalProxyOffset)); // Push the receiver. // r0: receiver diff --git a/deps/v8/src/arm/code-stubs-arm.cc b/deps/v8/src/arm/code-stubs-arm.cc index 7b2935106ff..a728d58fbfb 100644 --- a/deps/v8/src/arm/code-stubs-arm.cc +++ b/deps/v8/src/arm/code-stubs-arm.cc @@ -2,14 +2,14 @@ // Use of this source code is governed by a BSD-style license that can be // found in the LICENSE file. -#include "v8.h" +#include "src/v8.h" #if V8_TARGET_ARCH_ARM -#include "bootstrapper.h" -#include "code-stubs.h" -#include "regexp-macro-assembler.h" -#include "stub-cache.h" +#include "src/bootstrapper.h" +#include "src/code-stubs.h" +#include "src/regexp-macro-assembler.h" +#include "src/stub-cache.h" namespace v8 { namespace internal { @@ -17,251 +17,206 @@ namespace internal { void FastNewClosureStub::InitializeInterfaceDescriptor( CodeStubInterfaceDescriptor* descriptor) { - static Register registers[] = { r2 }; - descriptor->register_param_count_ = 1; - descriptor->register_params_ = registers; - descriptor->deoptimization_handler_ = - Runtime::FunctionForId(Runtime::kHiddenNewClosureFromStubFailure)->entry; + Register registers[] = { cp, r2 }; + descriptor->Initialize( + MajorKey(), ARRAY_SIZE(registers), registers, + Runtime::FunctionForId(Runtime::kNewClosureFromStubFailure)->entry); } void FastNewContextStub::InitializeInterfaceDescriptor( CodeStubInterfaceDescriptor* descriptor) { - static Register registers[] = { r1 }; - descriptor->register_param_count_ = 1; - descriptor->register_params_ = registers; - descriptor->deoptimization_handler_ = NULL; + Register registers[] = { cp, r1 }; + descriptor->Initialize(MajorKey(), ARRAY_SIZE(registers), registers); } void ToNumberStub::InitializeInterfaceDescriptor( CodeStubInterfaceDescriptor* descriptor) { - static Register registers[] = { r0 }; - descriptor->register_param_count_ = 1; - descriptor->register_params_ = registers; - descriptor->deoptimization_handler_ = NULL; + Register registers[] = { cp, r0 }; + descriptor->Initialize(MajorKey(), ARRAY_SIZE(registers), registers); } void NumberToStringStub::InitializeInterfaceDescriptor( CodeStubInterfaceDescriptor* descriptor) { - static Register registers[] = { r0 }; - descriptor->register_param_count_ = 1; - descriptor->register_params_ = registers; - descriptor->deoptimization_handler_ = - Runtime::FunctionForId(Runtime::kHiddenNumberToString)->entry; + Register registers[] = { cp, r0 }; + descriptor->Initialize( + MajorKey(), 
ARRAY_SIZE(registers), registers, + Runtime::FunctionForId(Runtime::kNumberToStringRT)->entry); } void FastCloneShallowArrayStub::InitializeInterfaceDescriptor( CodeStubInterfaceDescriptor* descriptor) { - static Register registers[] = { r3, r2, r1 }; - descriptor->register_param_count_ = 3; - descriptor->register_params_ = registers; - descriptor->deoptimization_handler_ = - Runtime::FunctionForId( - Runtime::kHiddenCreateArrayLiteralStubBailout)->entry; + Register registers[] = { cp, r3, r2, r1 }; + Representation representations[] = { + Representation::Tagged(), + Representation::Tagged(), + Representation::Smi(), + Representation::Tagged() }; + descriptor->Initialize( + MajorKey(), ARRAY_SIZE(registers), registers, + Runtime::FunctionForId(Runtime::kCreateArrayLiteralStubBailout)->entry, + representations); } void FastCloneShallowObjectStub::InitializeInterfaceDescriptor( CodeStubInterfaceDescriptor* descriptor) { - static Register registers[] = { r3, r2, r1, r0 }; - descriptor->register_param_count_ = 4; - descriptor->register_params_ = registers; - descriptor->deoptimization_handler_ = - Runtime::FunctionForId(Runtime::kHiddenCreateObjectLiteral)->entry; + Register registers[] = { cp, r3, r2, r1, r0 }; + descriptor->Initialize( + MajorKey(), ARRAY_SIZE(registers), registers, + Runtime::FunctionForId(Runtime::kCreateObjectLiteral)->entry); } void CreateAllocationSiteStub::InitializeInterfaceDescriptor( CodeStubInterfaceDescriptor* descriptor) { - static Register registers[] = { r2, r3 }; - descriptor->register_param_count_ = 2; - descriptor->register_params_ = registers; - descriptor->deoptimization_handler_ = NULL; + Register registers[] = { cp, r2, r3 }; + descriptor->Initialize(MajorKey(), ARRAY_SIZE(registers), registers); } -void KeyedLoadFastElementStub::InitializeInterfaceDescriptor( +void CallFunctionStub::InitializeInterfaceDescriptor( CodeStubInterfaceDescriptor* descriptor) { - static Register registers[] = { r1, r0 }; - descriptor->register_param_count_ = 2; - descriptor->register_params_ = registers; - descriptor->deoptimization_handler_ = - FUNCTION_ADDR(KeyedLoadIC_MissFromStubFailure); + // r1 function the function to call + Register registers[] = {cp, r1}; + descriptor->Initialize(MajorKey(), ARRAY_SIZE(registers), registers); } -void KeyedLoadDictionaryElementStub::InitializeInterfaceDescriptor( +void CallConstructStub::InitializeInterfaceDescriptor( CodeStubInterfaceDescriptor* descriptor) { - static Register registers[] = { r1, r0 }; - descriptor->register_param_count_ = 2; - descriptor->register_params_ = registers; - descriptor->deoptimization_handler_ = - FUNCTION_ADDR(KeyedLoadIC_MissFromStubFailure); + // r0 : number of arguments + // r1 : the function to call + // r2 : feedback vector + // r3 : (only if r2 is not the megamorphic symbol) slot in feedback + // vector (Smi) + // TODO(turbofan): So far we don't gather type feedback and hence skip the + // slot parameter, but ArrayConstructStub needs the vector to be undefined. 
+ Register registers[] = {cp, r0, r1, r2}; + descriptor->Initialize(MajorKey(), ARRAY_SIZE(registers), registers); } void RegExpConstructResultStub::InitializeInterfaceDescriptor( CodeStubInterfaceDescriptor* descriptor) { - static Register registers[] = { r2, r1, r0 }; - descriptor->register_param_count_ = 3; - descriptor->register_params_ = registers; - descriptor->deoptimization_handler_ = - Runtime::FunctionForId(Runtime::kHiddenRegExpConstructResult)->entry; -} - - -void LoadFieldStub::InitializeInterfaceDescriptor( - CodeStubInterfaceDescriptor* descriptor) { - static Register registers[] = { r0 }; - descriptor->register_param_count_ = 1; - descriptor->register_params_ = registers; - descriptor->deoptimization_handler_ = NULL; -} - - -void KeyedLoadFieldStub::InitializeInterfaceDescriptor( - CodeStubInterfaceDescriptor* descriptor) { - static Register registers[] = { r1 }; - descriptor->register_param_count_ = 1; - descriptor->register_params_ = registers; - descriptor->deoptimization_handler_ = NULL; -} - - -void StringLengthStub::InitializeInterfaceDescriptor( - CodeStubInterfaceDescriptor* descriptor) { - static Register registers[] = { r0, r2 }; - descriptor->register_param_count_ = 2; - descriptor->register_params_ = registers; - descriptor->deoptimization_handler_ = NULL; -} - - -void KeyedStringLengthStub::InitializeInterfaceDescriptor( - CodeStubInterfaceDescriptor* descriptor) { - static Register registers[] = { r1, r0 }; - descriptor->register_param_count_ = 2; - descriptor->register_params_ = registers; - descriptor->deoptimization_handler_ = NULL; -} - - -void KeyedStoreFastElementStub::InitializeInterfaceDescriptor( - CodeStubInterfaceDescriptor* descriptor) { - static Register registers[] = { r2, r1, r0 }; - descriptor->register_param_count_ = 3; - descriptor->register_params_ = registers; - descriptor->deoptimization_handler_ = - FUNCTION_ADDR(KeyedStoreIC_MissFromStubFailure); + Register registers[] = { cp, r2, r1, r0 }; + descriptor->Initialize( + MajorKey(), ARRAY_SIZE(registers), registers, + Runtime::FunctionForId(Runtime::kRegExpConstructResult)->entry); } void TransitionElementsKindStub::InitializeInterfaceDescriptor( CodeStubInterfaceDescriptor* descriptor) { - static Register registers[] = { r0, r1 }; - descriptor->register_param_count_ = 2; - descriptor->register_params_ = registers; + Register registers[] = { cp, r0, r1 }; Address entry = Runtime::FunctionForId(Runtime::kTransitionElementsKind)->entry; - descriptor->deoptimization_handler_ = FUNCTION_ADDR(entry); + descriptor->Initialize(MajorKey(), ARRAY_SIZE(registers), registers, + FUNCTION_ADDR(entry)); } void CompareNilICStub::InitializeInterfaceDescriptor( CodeStubInterfaceDescriptor* descriptor) { - static Register registers[] = { r0 }; - descriptor->register_param_count_ = 1; - descriptor->register_params_ = registers; - descriptor->deoptimization_handler_ = - FUNCTION_ADDR(CompareNilIC_Miss); + Register registers[] = { cp, r0 }; + descriptor->Initialize(MajorKey(), ARRAY_SIZE(registers), registers, + FUNCTION_ADDR(CompareNilIC_Miss)); descriptor->SetMissHandler( ExternalReference(IC_Utility(IC::kCompareNilIC_Miss), isolate())); } +const Register InterfaceDescriptor::ContextRegister() { return cp; } + + static void InitializeArrayConstructorDescriptor( - CodeStubInterfaceDescriptor* descriptor, + CodeStub::Major major, CodeStubInterfaceDescriptor* descriptor, int constant_stack_parameter_count) { // register state + // cp -- context // r0 -- number of arguments // r1 -- function // r2 -- allocation 
site with elements kind - static Register registers_variable_args[] = { r1, r2, r0 }; - static Register registers_no_args[] = { r1, r2 }; + Address deopt_handler = Runtime::FunctionForId( + Runtime::kArrayConstructor)->entry; if (constant_stack_parameter_count == 0) { - descriptor->register_param_count_ = 2; - descriptor->register_params_ = registers_no_args; + Register registers[] = { cp, r1, r2 }; + descriptor->Initialize(major, ARRAY_SIZE(registers), registers, + deopt_handler, NULL, constant_stack_parameter_count, + JS_FUNCTION_STUB_MODE); } else { // stack param count needs (constructor pointer, and single argument) - descriptor->handler_arguments_mode_ = PASS_ARGUMENTS; - descriptor->stack_parameter_count_ = r0; - descriptor->register_param_count_ = 3; - descriptor->register_params_ = registers_variable_args; + Register registers[] = { cp, r1, r2, r0 }; + Representation representations[] = { + Representation::Tagged(), + Representation::Tagged(), + Representation::Tagged(), + Representation::Integer32() }; + descriptor->Initialize(major, ARRAY_SIZE(registers), registers, r0, + deopt_handler, representations, + constant_stack_parameter_count, + JS_FUNCTION_STUB_MODE, PASS_ARGUMENTS); } - - descriptor->hint_stack_parameter_count_ = constant_stack_parameter_count; - descriptor->function_mode_ = JS_FUNCTION_STUB_MODE; - descriptor->deoptimization_handler_ = - Runtime::FunctionForId(Runtime::kHiddenArrayConstructor)->entry; } static void InitializeInternalArrayConstructorDescriptor( - CodeStubInterfaceDescriptor* descriptor, + CodeStub::Major major, CodeStubInterfaceDescriptor* descriptor, int constant_stack_parameter_count) { // register state + // cp -- context // r0 -- number of arguments // r1 -- constructor function - static Register registers_variable_args[] = { r1, r0 }; - static Register registers_no_args[] = { r1 }; + Address deopt_handler = Runtime::FunctionForId( + Runtime::kInternalArrayConstructor)->entry; if (constant_stack_parameter_count == 0) { - descriptor->register_param_count_ = 1; - descriptor->register_params_ = registers_no_args; + Register registers[] = { cp, r1 }; + descriptor->Initialize(major, ARRAY_SIZE(registers), registers, + deopt_handler, NULL, constant_stack_parameter_count, + JS_FUNCTION_STUB_MODE); } else { // stack param count needs (constructor pointer, and single argument) - descriptor->handler_arguments_mode_ = PASS_ARGUMENTS; - descriptor->stack_parameter_count_ = r0; - descriptor->register_param_count_ = 2; - descriptor->register_params_ = registers_variable_args; + Register registers[] = { cp, r1, r0 }; + Representation representations[] = { + Representation::Tagged(), + Representation::Tagged(), + Representation::Integer32() }; + descriptor->Initialize(major, ARRAY_SIZE(registers), registers, r0, + deopt_handler, representations, + constant_stack_parameter_count, + JS_FUNCTION_STUB_MODE, PASS_ARGUMENTS); } - - descriptor->hint_stack_parameter_count_ = constant_stack_parameter_count; - descriptor->function_mode_ = JS_FUNCTION_STUB_MODE; - descriptor->deoptimization_handler_ = - Runtime::FunctionForId(Runtime::kHiddenInternalArrayConstructor)->entry; } void ArrayNoArgumentConstructorStub::InitializeInterfaceDescriptor( CodeStubInterfaceDescriptor* descriptor) { - InitializeArrayConstructorDescriptor(descriptor, 0); + InitializeArrayConstructorDescriptor(MajorKey(), descriptor, 0); } void ArraySingleArgumentConstructorStub::InitializeInterfaceDescriptor( CodeStubInterfaceDescriptor* descriptor) { - InitializeArrayConstructorDescriptor(descriptor, 
1); + InitializeArrayConstructorDescriptor(MajorKey(), descriptor, 1); } void ArrayNArgumentsConstructorStub::InitializeInterfaceDescriptor( CodeStubInterfaceDescriptor* descriptor) { - InitializeArrayConstructorDescriptor(descriptor, -1); + InitializeArrayConstructorDescriptor(MajorKey(), descriptor, -1); } void ToBooleanStub::InitializeInterfaceDescriptor( CodeStubInterfaceDescriptor* descriptor) { - static Register registers[] = { r0 }; - descriptor->register_param_count_ = 1; - descriptor->register_params_ = registers; - descriptor->deoptimization_handler_ = - FUNCTION_ADDR(ToBooleanIC_Miss); + Register registers[] = { cp, r0 }; + descriptor->Initialize(MajorKey(), ARRAY_SIZE(registers), registers, + FUNCTION_ADDR(ToBooleanIC_Miss)); descriptor->SetMissHandler( ExternalReference(IC_Utility(IC::kToBooleanIC_Miss), isolate())); } @@ -269,48 +224,27 @@ void ToBooleanStub::InitializeInterfaceDescriptor( void InternalArrayNoArgumentConstructorStub::InitializeInterfaceDescriptor( CodeStubInterfaceDescriptor* descriptor) { - InitializeInternalArrayConstructorDescriptor(descriptor, 0); + InitializeInternalArrayConstructorDescriptor(MajorKey(), descriptor, 0); } void InternalArraySingleArgumentConstructorStub::InitializeInterfaceDescriptor( CodeStubInterfaceDescriptor* descriptor) { - InitializeInternalArrayConstructorDescriptor(descriptor, 1); + InitializeInternalArrayConstructorDescriptor(MajorKey(), descriptor, 1); } void InternalArrayNArgumentsConstructorStub::InitializeInterfaceDescriptor( CodeStubInterfaceDescriptor* descriptor) { - InitializeInternalArrayConstructorDescriptor(descriptor, -1); -} - - -void StoreGlobalStub::InitializeInterfaceDescriptor( - CodeStubInterfaceDescriptor* descriptor) { - static Register registers[] = { r1, r2, r0 }; - descriptor->register_param_count_ = 3; - descriptor->register_params_ = registers; - descriptor->deoptimization_handler_ = - FUNCTION_ADDR(StoreIC_MissFromStubFailure); -} - - -void ElementsTransitionAndStoreStub::InitializeInterfaceDescriptor( - CodeStubInterfaceDescriptor* descriptor) { - static Register registers[] = { r0, r3, r1, r2 }; - descriptor->register_param_count_ = 4; - descriptor->register_params_ = registers; - descriptor->deoptimization_handler_ = - FUNCTION_ADDR(ElementsTransitionAndStoreIC_Miss); + InitializeInternalArrayConstructorDescriptor(MajorKey(), descriptor, -1); } void BinaryOpICStub::InitializeInterfaceDescriptor( CodeStubInterfaceDescriptor* descriptor) { - static Register registers[] = { r1, r0 }; - descriptor->register_param_count_ = 2; - descriptor->register_params_ = registers; - descriptor->deoptimization_handler_ = FUNCTION_ADDR(BinaryOpIC_Miss); + Register registers[] = { cp, r1, r0 }; + descriptor->Initialize(MajorKey(), ARRAY_SIZE(registers), registers, + FUNCTION_ADDR(BinaryOpIC_Miss)); descriptor->SetMissHandler( ExternalReference(IC_Utility(IC::kBinaryOpIC_Miss), isolate())); } @@ -318,115 +252,101 @@ void BinaryOpICStub::InitializeInterfaceDescriptor( void BinaryOpWithAllocationSiteStub::InitializeInterfaceDescriptor( CodeStubInterfaceDescriptor* descriptor) { - static Register registers[] = { r2, r1, r0 }; - descriptor->register_param_count_ = 3; - descriptor->register_params_ = registers; - descriptor->deoptimization_handler_ = - FUNCTION_ADDR(BinaryOpIC_MissWithAllocationSite); + Register registers[] = { cp, r2, r1, r0 }; + descriptor->Initialize(MajorKey(), ARRAY_SIZE(registers), registers, + FUNCTION_ADDR(BinaryOpIC_MissWithAllocationSite)); } void StringAddStub::InitializeInterfaceDescriptor( 
CodeStubInterfaceDescriptor* descriptor) { - static Register registers[] = { r1, r0 }; - descriptor->register_param_count_ = 2; - descriptor->register_params_ = registers; - descriptor->deoptimization_handler_ = - Runtime::FunctionForId(Runtime::kHiddenStringAdd)->entry; + Register registers[] = { cp, r1, r0 }; + descriptor->Initialize(MajorKey(), ARRAY_SIZE(registers), registers, + Runtime::FunctionForId(Runtime::kStringAdd)->entry); } void CallDescriptors::InitializeForIsolate(Isolate* isolate) { - static PlatformCallInterfaceDescriptor default_descriptor = - PlatformCallInterfaceDescriptor(CAN_INLINE_TARGET_ADDRESS); + static PlatformInterfaceDescriptor default_descriptor = + PlatformInterfaceDescriptor(CAN_INLINE_TARGET_ADDRESS); - static PlatformCallInterfaceDescriptor noInlineDescriptor = - PlatformCallInterfaceDescriptor(NEVER_INLINE_TARGET_ADDRESS); + static PlatformInterfaceDescriptor noInlineDescriptor = + PlatformInterfaceDescriptor(NEVER_INLINE_TARGET_ADDRESS); { CallInterfaceDescriptor* descriptor = isolate->call_descriptor(Isolate::ArgumentAdaptorCall); - static Register registers[] = { r1, // JSFunction - cp, // context - r0, // actual number of arguments - r2, // expected number of arguments + Register registers[] = { cp, // context + r1, // JSFunction + r0, // actual number of arguments + r2, // expected number of arguments }; - static Representation representations[] = { - Representation::Tagged(), // JSFunction + Representation representations[] = { Representation::Tagged(), // context + Representation::Tagged(), // JSFunction Representation::Integer32(), // actual number of arguments Representation::Integer32(), // expected number of arguments }; - descriptor->register_param_count_ = 4; - descriptor->register_params_ = registers; - descriptor->param_representations_ = representations; - descriptor->platform_specific_descriptor_ = &default_descriptor; + descriptor->Initialize(ARRAY_SIZE(registers), registers, + representations, &default_descriptor); } { CallInterfaceDescriptor* descriptor = isolate->call_descriptor(Isolate::KeyedCall); - static Register registers[] = { cp, // context - r2, // key + Register registers[] = { cp, // context + r2, // key }; - static Representation representations[] = { + Representation representations[] = { Representation::Tagged(), // context Representation::Tagged(), // key }; - descriptor->register_param_count_ = 2; - descriptor->register_params_ = registers; - descriptor->param_representations_ = representations; - descriptor->platform_specific_descriptor_ = &noInlineDescriptor; + descriptor->Initialize(ARRAY_SIZE(registers), registers, + representations, &noInlineDescriptor); } { CallInterfaceDescriptor* descriptor = isolate->call_descriptor(Isolate::NamedCall); - static Register registers[] = { cp, // context - r2, // name + Register registers[] = { cp, // context + r2, // name }; - static Representation representations[] = { + Representation representations[] = { Representation::Tagged(), // context Representation::Tagged(), // name }; - descriptor->register_param_count_ = 2; - descriptor->register_params_ = registers; - descriptor->param_representations_ = representations; - descriptor->platform_specific_descriptor_ = &noInlineDescriptor; + descriptor->Initialize(ARRAY_SIZE(registers), registers, + representations, &noInlineDescriptor); } { CallInterfaceDescriptor* descriptor = isolate->call_descriptor(Isolate::CallHandler); - static Register registers[] = { cp, // context - r0, // receiver + Register registers[] = { cp, // context + 
r0, // receiver }; - static Representation representations[] = { + Representation representations[] = { Representation::Tagged(), // context Representation::Tagged(), // receiver }; - descriptor->register_param_count_ = 2; - descriptor->register_params_ = registers; - descriptor->param_representations_ = representations; - descriptor->platform_specific_descriptor_ = &default_descriptor; + descriptor->Initialize(ARRAY_SIZE(registers), registers, + representations, &default_descriptor); } { CallInterfaceDescriptor* descriptor = isolate->call_descriptor(Isolate::ApiFunctionCall); - static Register registers[] = { r0, // callee - r4, // call_data - r2, // holder - r1, // api_function_address - cp, // context + Register registers[] = { cp, // context + r0, // callee + r4, // call_data + r2, // holder + r1, // api_function_address }; - static Representation representations[] = { + Representation representations[] = { + Representation::Tagged(), // context Representation::Tagged(), // callee Representation::Tagged(), // call_data Representation::Tagged(), // holder Representation::External(), // api_function_address - Representation::Tagged(), // context }; - descriptor->register_param_count_ = 5; - descriptor->register_params_ = registers; - descriptor->param_representations_ = representations; - descriptor->platform_specific_descriptor_ = &default_descriptor; + descriptor->Initialize(ARRAY_SIZE(registers), registers, + representations, &default_descriptor); } } @@ -453,18 +373,19 @@ void HydrogenCodeStub::GenerateLightweightMiss(MacroAssembler* masm) { isolate()->counters()->code_stubs()->Increment(); CodeStubInterfaceDescriptor* descriptor = GetInterfaceDescriptor(); - int param_count = descriptor->register_param_count_; + int param_count = descriptor->GetEnvironmentParameterCount(); { // Call the runtime system in a fresh internal frame. FrameAndConstantPoolScope scope(masm, StackFrame::INTERNAL); - ASSERT(descriptor->register_param_count_ == 0 || - r0.is(descriptor->register_params_[param_count - 1])); + DCHECK(param_count == 0 || + r0.is(descriptor->GetEnvironmentParameterRegister( + param_count - 1))); // Push arguments for (int i = 0; i < param_count; ++i) { - __ push(descriptor->register_params_[i]); + __ push(descriptor->GetEnvironmentParameterRegister(i)); } ExternalReference miss = descriptor->miss_handler(); - __ CallExternalReference(miss, descriptor->register_param_count_); + __ CallExternalReference(miss, param_count); } __ Ret(); @@ -499,8 +420,8 @@ class ConvertToDoubleStub : public PlatformCodeStub { class ModeBits: public BitField<OverwriteMode, 0, 2> {}; class OpBits: public BitField<Token::Value, 2, 14> {}; - Major MajorKey() { return ConvertToDouble; } - int MinorKey() { + Major MajorKey() const { return ConvertToDouble; } + int MinorKey() const { // Encode the parameters in a unique 16 bit value. return result1_.code() + (result2_.code() << 4) + @@ -570,7 +491,7 @@ void DoubleToIStub::Generate(MacroAssembler* masm) { Label out_of_range, only_low, negate, done; Register input_reg = source(); Register result_reg = destination(); - ASSERT(is_truncating()); + DCHECK(is_truncating()); int double_offset = offset(); // Account for saved regs if input is sp. @@ -702,7 +623,7 @@ void WriteInt32ToHeapNumberStub::Generate(MacroAssembler* masm) { // but it just ends up combining harmlessly with the last digit of the // exponent that happens to be 1. The sign bit is 0 so we shift 10 to get // the most significant 1 to hit the last bit of the 12 bit sign and exponent. 
- ASSERT(((1 << HeapNumber::kExponentShift) & non_smi_exponent) != 0); + DCHECK(((1 << HeapNumber::kExponentShift) & non_smi_exponent) != 0); const int shift_distance = HeapNumber::kNonMantissaBitsInTopWord - 2; __ orr(scratch_, scratch_, Operand(the_int_, LSR, shift_distance)); __ str(scratch_, FieldMemOperand(the_heap_number_, @@ -833,7 +754,7 @@ static void EmitSmiNonsmiComparison(MacroAssembler* masm, Label* lhs_not_nan, Label* slow, bool strict) { - ASSERT((lhs.is(r0) && rhs.is(r1)) || + DCHECK((lhs.is(r0) && rhs.is(r1)) || (lhs.is(r1) && rhs.is(r0))); Label rhs_is_smi; @@ -895,7 +816,7 @@ static void EmitSmiNonsmiComparison(MacroAssembler* masm, static void EmitStrictTwoHeapObjectCompare(MacroAssembler* masm, Register lhs, Register rhs) { - ASSERT((lhs.is(r0) && rhs.is(r1)) || + DCHECK((lhs.is(r0) && rhs.is(r1)) || (lhs.is(r1) && rhs.is(r0))); // If either operand is a JS object or an oddball value, then they are @@ -941,7 +862,7 @@ static void EmitCheckForTwoHeapNumbers(MacroAssembler* masm, Label* both_loaded_as_doubles, Label* not_heap_numbers, Label* slow) { - ASSERT((lhs.is(r0) && rhs.is(r1)) || + DCHECK((lhs.is(r0) && rhs.is(r1)) || (lhs.is(r1) && rhs.is(r0))); __ CompareObjectType(rhs, r3, r2, HEAP_NUMBER_TYPE); @@ -964,7 +885,7 @@ static void EmitCheckForInternalizedStringsOrObjects(MacroAssembler* masm, Register rhs, Label* possible_strings, Label* not_both_strings) { - ASSERT((lhs.is(r0) && rhs.is(r1)) || + DCHECK((lhs.is(r0) && rhs.is(r1)) || (lhs.is(r1) && rhs.is(r0))); // r2 is object type of rhs. @@ -1054,7 +975,7 @@ void ICCompareStub::GenerateGeneric(MacroAssembler* masm) { // If either is a Smi (we know that not both are), then they can only // be strictly equal if the other is a HeapNumber. STATIC_ASSERT(kSmiTag == 0); - ASSERT_EQ(0, Smi::FromInt(0)); + DCHECK_EQ(0, Smi::FromInt(0)); __ and_(r2, lhs, Operand(rhs)); __ JumpIfNotSmi(r2, &not_smis); // One operand is a smi. EmitSmiNonsmiComparison generates code that can: @@ -1166,7 +1087,7 @@ void ICCompareStub::GenerateGeneric(MacroAssembler* masm) { if (cc == lt || cc == le) { ncr = GREATER; } else { - ASSERT(cc == gt || cc == ge); // remaining cases + DCHECK(cc == gt || cc == ge); // remaining cases ncr = LESS; } __ mov(r0, Operand(Smi::FromInt(ncr))); @@ -1375,7 +1296,7 @@ void MathPowStub::Generate(MacroAssembler* masm) { if (exponent_type_ == ON_STACK) { // The arguments are still on the stack. __ bind(&call_runtime); - __ TailCallRuntime(Runtime::kHiddenMathPow, 2, 1); + __ TailCallRuntime(Runtime::kMathPowRT, 2, 1); // The stub is called from non-optimized code, which expects the result // as heap number in exponent. @@ -1384,7 +1305,7 @@ void MathPowStub::Generate(MacroAssembler* masm) { heapnumber, scratch, scratch2, heapnumbermap, &call_runtime); __ vstr(double_result, FieldMemOperand(heapnumber, HeapNumber::kValueOffset)); - ASSERT(heapnumber.is(r0)); + DCHECK(heapnumber.is(r0)); __ IncrementCounter(counters->math_pow(), 1, scratch, scratch2); __ Ret(2); } else { @@ -1484,7 +1405,7 @@ void CEntryStub::Generate(MacroAssembler* masm) { if (FLAG_debug_code) { if (frame_alignment > kPointerSize) { Label alignment_as_expected; - ASSERT(IsPowerOf2(frame_alignment)); + DCHECK(IsPowerOf2(frame_alignment)); __ tst(sp, Operand(frame_alignment_mask)); __ b(eq, &alignment_as_expected); // Don't use Check here, as it will call Runtime_Abort re-entering here. @@ -1756,24 +1677,19 @@ void JSEntryStub::GenerateBody(MacroAssembler* masm, bool is_construct) { // * function: r1 or at sp. 
// // An inlined call site may have been generated before calling this stub. -// In this case the offset to the inline site to patch is passed in r5. +// In this case the offsets to the inline sites to patch are passed in r5 and r6. // (See LCodeGen::DoInstanceOfKnownGlobal) void InstanceofStub::Generate(MacroAssembler* masm) { // Call site inlining and patching implies arguments in registers. - ASSERT(HasArgsInRegisters() || !HasCallSiteInlineCheck()); - // ReturnTrueFalse is only implemented for inlined call sites. - ASSERT(!ReturnTrueFalseObject() || HasCallSiteInlineCheck()); + DCHECK(HasArgsInRegisters() || !HasCallSiteInlineCheck()); // Fixed register usage throughout the stub: const Register object = r0; // Object (lhs). Register map = r3; // Map of the object. const Register function = r1; // Function (rhs). const Register prototype = r4; // Prototype of the function. - const Register inline_site = r9; const Register scratch = r2; - const int32_t kDeltaToLoadBoolResult = 4 * kPointerSize; - Label slow, loop, is_instance, is_not_instance, not_js_object; if (!HasArgsInRegisters()) { @@ -1787,7 +1703,7 @@ void InstanceofStub::Generate(MacroAssembler* masm) { // If there is a call site cache don't look in the global cache, but do the // real lookup and update the call site cache. - if (!HasCallSiteInlineCheck()) { + if (!HasCallSiteInlineCheck() && !ReturnTrueFalseObject()) { Label miss; __ CompareRoot(function, Heap::kInstanceofCacheFunctionRootIndex); __ b(ne, &miss); @@ -1812,17 +1728,17 @@ void InstanceofStub::Generate(MacroAssembler* masm) { __ StoreRoot(function, Heap::kInstanceofCacheFunctionRootIndex); __ StoreRoot(map, Heap::kInstanceofCacheMapRootIndex); } else { - ASSERT(HasArgsInRegisters()); + DCHECK(HasArgsInRegisters()); // Patch the (relocated) inlined map check. - // The offset was stored in r5 + // The map_load_offset was stored in r5 // (See LCodeGen::DoDeferredLInstanceOfKnownGlobal). - const Register offset = r5; - __ sub(inline_site, lr, offset); + const Register map_load_offset = r5; + __ sub(r9, lr, map_load_offset); // Get the map location in r5 and patch it. - __ GetRelocatedValueLocation(inline_site, offset); - __ ldr(offset, MemOperand(offset)); - __ str(map, FieldMemOperand(offset, Cell::kValueOffset)); + __ GetRelocatedValueLocation(r9, map_load_offset, scratch); + __ ldr(map_load_offset, MemOperand(map_load_offset)); + __ str(map, FieldMemOperand(map_load_offset, Cell::kValueOffset)); } // Register mapping: r3 is object map and r4 is function prototype. @@ -1843,17 +1759,24 @@ void InstanceofStub::Generate(MacroAssembler* masm) { __ ldr(scratch, FieldMemOperand(scratch, HeapObject::kMapOffset)); __ ldr(scratch, FieldMemOperand(scratch, Map::kPrototypeOffset)); __ jmp(&loop); + Factory* factory = isolate()->factory(); __ bind(&is_instance); if (!HasCallSiteInlineCheck()) { __ mov(r0, Operand(Smi::FromInt(0))); __ StoreRoot(r0, Heap::kInstanceofCacheAnswerRootIndex); + if (ReturnTrueFalseObject()) { + __ Move(r0, factory->true_value()); + } } else { // Patch the call site to return true. __ LoadRoot(r0, Heap::kTrueValueRootIndex); - __ add(inline_site, inline_site, Operand(kDeltaToLoadBoolResult)); + // The bool_load_offset was stored in r6 + // (See LCodeGen::DoDeferredLInstanceOfKnownGlobal). + const Register bool_load_offset = r6; + __ sub(r9, lr, bool_load_offset); // Get the boolean result location in scratch and patch it. 
- __ GetRelocatedValueLocation(inline_site, scratch); + __ GetRelocatedValueLocation(r9, scratch, scratch2); __ str(r0, MemOperand(scratch)); if (!ReturnTrueFalseObject()) { @@ -1866,12 +1789,19 @@ void InstanceofStub::Generate(MacroAssembler* masm) { if (!HasCallSiteInlineCheck()) { __ mov(r0, Operand(Smi::FromInt(1))); __ StoreRoot(r0, Heap::kInstanceofCacheAnswerRootIndex); + if (ReturnTrueFalseObject()) { + __ Move(r0, factory->false_value()); + } } else { // Patch the call site to return false. __ LoadRoot(r0, Heap::kFalseValueRootIndex); - __ add(inline_site, inline_site, Operand(kDeltaToLoadBoolResult)); + // The bool_load_offset was stored in r6 + // (See LCodeGen::DoDeferredLInstanceOfKnownGlobal). + const Register bool_load_offset = r6; + __ sub(r9, lr, bool_load_offset); + ; // Get the boolean result location in scratch and patch it. - __ GetRelocatedValueLocation(inline_site, scratch); + __ GetRelocatedValueLocation(r9, scratch, scratch2); __ str(r0, MemOperand(scratch)); if (!ReturnTrueFalseObject()) { @@ -1891,19 +1821,31 @@ void InstanceofStub::Generate(MacroAssembler* masm) { // Null is not instance of anything. __ cmp(scratch, Operand(isolate()->factory()->null_value())); __ b(ne, &object_not_null); - __ mov(r0, Operand(Smi::FromInt(1))); + if (ReturnTrueFalseObject()) { + __ Move(r0, factory->false_value()); + } else { + __ mov(r0, Operand(Smi::FromInt(1))); + } __ Ret(HasArgsInRegisters() ? 0 : 2); __ bind(&object_not_null); // Smi values are not instances of anything. __ JumpIfNotSmi(object, &object_not_null_or_smi); - __ mov(r0, Operand(Smi::FromInt(1))); + if (ReturnTrueFalseObject()) { + __ Move(r0, factory->false_value()); + } else { + __ mov(r0, Operand(Smi::FromInt(1))); + } __ Ret(HasArgsInRegisters() ? 0 : 2); __ bind(&object_not_null_or_smi); // String values are not instances of anything. __ IsObjectJSStringType(object, scratch, &slow); - __ mov(r0, Operand(Smi::FromInt(1))); + if (ReturnTrueFalseObject()) { + __ Move(r0, factory->false_value()); + } else { + __ mov(r0, Operand(Smi::FromInt(1))); + } __ Ret(HasArgsInRegisters() ? 0 : 2); // Slow-case. Tail call builtin. 
@@ -1929,31 +1871,13 @@ void InstanceofStub::Generate(MacroAssembler* masm) { void FunctionPrototypeStub::Generate(MacroAssembler* masm) { Label miss; - Register receiver; - if (kind() == Code::KEYED_LOAD_IC) { - // ----------- S t a t e ------------- - // -- lr : return address - // -- r0 : key - // -- r1 : receiver - // ----------------------------------- - __ cmp(r0, Operand(isolate()->factory()->prototype_string())); - __ b(ne, &miss); - receiver = r1; - } else { - ASSERT(kind() == Code::LOAD_IC); - // ----------- S t a t e ------------- - // -- r2 : name - // -- lr : return address - // -- r0 : receiver - // -- sp[0] : receiver - // ----------------------------------- - receiver = r0; - } + Register receiver = LoadIC::ReceiverRegister(); - StubCompiler::GenerateLoadFunctionPrototype(masm, receiver, r3, r4, &miss); + NamedLoadHandlerCompiler::GenerateLoadFunctionPrototype(masm, receiver, r3, + r4, &miss); __ bind(&miss); - StubCompiler::TailCallBuiltin( - masm, BaseLoadStoreStubCompiler::MissBuiltin(kind())); + PropertyAccessCompiler::TailCallBuiltin( + masm, PropertyAccessCompiler::MissBuiltin(Code::LOAD_IC)); } @@ -2034,7 +1958,7 @@ void ArgumentsAccessStub::GenerateNewSloppySlow(MacroAssembler* masm) { __ str(r3, MemOperand(sp, 1 * kPointerSize)); __ bind(&runtime); - __ TailCallRuntime(Runtime::kHiddenNewArgumentsFast, 3, 1); + __ TailCallRuntime(Runtime::kNewSloppyArguments, 3, 1); } @@ -2098,12 +2022,12 @@ void ArgumentsAccessStub::GenerateNewSloppyFast(MacroAssembler* masm) { __ Allocate(r9, r0, r3, r4, &runtime, TAG_OBJECT); // r0 = address of new object(s) (tagged) - // r2 = argument count (tagged) + // r2 = argument count (smi-tagged) // Get the arguments boilerplate from the current native context into r4. const int kNormalOffset = - Context::SlotOffset(Context::SLOPPY_ARGUMENTS_BOILERPLATE_INDEX); + Context::SlotOffset(Context::SLOPPY_ARGUMENTS_MAP_INDEX); const int kAliasedOffset = - Context::SlotOffset(Context::ALIASED_ARGUMENTS_BOILERPLATE_INDEX); + Context::SlotOffset(Context::ALIASED_ARGUMENTS_MAP_INDEX); __ ldr(r4, MemOperand(cp, Context::SlotOffset(Context::GLOBAL_OBJECT_INDEX))); __ ldr(r4, FieldMemOperand(r4, GlobalObject::kNativeContextOffset)); @@ -2113,22 +2037,23 @@ void ArgumentsAccessStub::GenerateNewSloppyFast(MacroAssembler* masm) { // r0 = address of new object (tagged) // r1 = mapped parameter count (tagged) - // r2 = argument count (tagged) - // r4 = address of boilerplate object (tagged) - // Copy the JS object part. - for (int i = 0; i < JSObject::kHeaderSize; i += kPointerSize) { - __ ldr(r3, FieldMemOperand(r4, i)); - __ str(r3, FieldMemOperand(r0, i)); - } + // r2 = argument count (smi-tagged) + // r4 = address of arguments map (tagged) + __ str(r4, FieldMemOperand(r0, JSObject::kMapOffset)); + __ LoadRoot(r3, Heap::kEmptyFixedArrayRootIndex); + __ str(r3, FieldMemOperand(r0, JSObject::kPropertiesOffset)); + __ str(r3, FieldMemOperand(r0, JSObject::kElementsOffset)); // Set up the callee in-object property. STATIC_ASSERT(Heap::kArgumentsCalleeIndex == 1); __ ldr(r3, MemOperand(sp, 2 * kPointerSize)); + __ AssertNotSmi(r3); const int kCalleeOffset = JSObject::kHeaderSize + Heap::kArgumentsCalleeIndex * kPointerSize; __ str(r3, FieldMemOperand(r0, kCalleeOffset)); // Use the length (smi tagged) and set that as an in-object property too. 
+ __ AssertSmi(r2); STATIC_ASSERT(Heap::kArgumentsLengthIndex == 0); const int kLengthOffset = JSObject::kHeaderSize + Heap::kArgumentsLengthIndex * kPointerSize; @@ -2238,7 +2163,7 @@ void ArgumentsAccessStub::GenerateNewSloppyFast(MacroAssembler* masm) { // r2 = argument count (tagged) __ bind(&runtime); __ str(r2, MemOperand(sp, 0 * kPointerSize)); // Patch argument count. - __ TailCallRuntime(Runtime::kHiddenNewArgumentsFast, 3, 1); + __ TailCallRuntime(Runtime::kNewSloppyArguments, 3, 1); } @@ -2282,15 +2207,18 @@ void ArgumentsAccessStub::GenerateNewStrict(MacroAssembler* masm) { // Get the arguments boilerplate from the current native context. __ ldr(r4, MemOperand(cp, Context::SlotOffset(Context::GLOBAL_OBJECT_INDEX))); __ ldr(r4, FieldMemOperand(r4, GlobalObject::kNativeContextOffset)); - __ ldr(r4, MemOperand(r4, Context::SlotOffset( - Context::STRICT_ARGUMENTS_BOILERPLATE_INDEX))); + __ ldr(r4, MemOperand( + r4, Context::SlotOffset(Context::STRICT_ARGUMENTS_MAP_INDEX))); - // Copy the JS object part. - __ CopyFields(r0, r4, d0, JSObject::kHeaderSize / kPointerSize); + __ str(r4, FieldMemOperand(r0, JSObject::kMapOffset)); + __ LoadRoot(r3, Heap::kEmptyFixedArrayRootIndex); + __ str(r3, FieldMemOperand(r0, JSObject::kPropertiesOffset)); + __ str(r3, FieldMemOperand(r0, JSObject::kElementsOffset)); // Get the length (smi tagged) and set that as an in-object property too. STATIC_ASSERT(Heap::kArgumentsLengthIndex == 0); __ ldr(r1, MemOperand(sp, 0 * kPointerSize)); + __ AssertSmi(r1); __ str(r1, FieldMemOperand(r0, JSObject::kHeaderSize + Heap::kArgumentsLengthIndex * kPointerSize)); @@ -2332,7 +2260,7 @@ void ArgumentsAccessStub::GenerateNewStrict(MacroAssembler* masm) { // Do the runtime call to allocate the arguments object. __ bind(&runtime); - __ TailCallRuntime(Runtime::kHiddenNewStrictArgumentsFast, 3, 1); + __ TailCallRuntime(Runtime::kNewStrictArguments, 3, 1); } @@ -2341,7 +2269,7 @@ void RegExpExecStub::Generate(MacroAssembler* masm) { // time or if regexp entry in generated code is turned off runtime switch or // at compilation. #ifdef V8_INTERPRETED_REGEXP - __ TailCallRuntime(Runtime::kHiddenRegExpExec, 4, 1); + __ TailCallRuntime(Runtime::kRegExpExecRT, 4, 1); #else // V8_INTERPRETED_REGEXP // Stack frame on entry. @@ -2473,8 +2401,8 @@ void RegExpExecStub::Generate(MacroAssembler* masm) { STATIC_ASSERT(kSeqStringTag == 0); __ tst(r0, Operand(kStringRepresentationMask)); // The underlying external string is never a short external string. - STATIC_CHECK(ExternalString::kMaxShortLength < ConsString::kMinLength); - STATIC_CHECK(ExternalString::kMaxShortLength < SlicedString::kMinLength); + STATIC_ASSERT(ExternalString::kMaxShortLength < ConsString::kMinLength); + STATIC_ASSERT(ExternalString::kMaxShortLength < SlicedString::kMinLength); __ b(ne, &external_string); // Go to (7). // (5) Sequential string. Load regexp code according to encoding. @@ -2716,7 +2644,7 @@ void RegExpExecStub::Generate(MacroAssembler* masm) { // Do the runtime call to execute the regexp. __ bind(&runtime); - __ TailCallRuntime(Runtime::kHiddenRegExpExec, 4, 1); + __ TailCallRuntime(Runtime::kRegExpExecRT, 4, 1); // Deferred code for string handling. // (6) Not a long external string? If yes, go to (8). 
@@ -2769,9 +2697,9 @@ static void GenerateRecordCallTarget(MacroAssembler* masm) { // r3 : slot in feedback vector (Smi) Label initialize, done, miss, megamorphic, not_array_function; - ASSERT_EQ(*TypeFeedbackInfo::MegamorphicSentinel(masm->isolate()), + DCHECK_EQ(*TypeFeedbackInfo::MegamorphicSentinel(masm->isolate()), masm->isolate()->heap()->megamorphic_symbol()); - ASSERT_EQ(*TypeFeedbackInfo::UninitializedSentinel(masm->isolate()), + DCHECK_EQ(*TypeFeedbackInfo::UninitializedSentinel(masm->isolate()), masm->isolate()->heap()->uninitialized_symbol()); // Load the cache state into r4. @@ -2910,11 +2838,13 @@ static void EmitWrapCase(MacroAssembler* masm, int argc, Label* cont) { } -void CallFunctionStub::Generate(MacroAssembler* masm) { +static void CallFunctionNoFeedback(MacroAssembler* masm, + int argc, bool needs_checks, + bool call_as_method) { // r1 : the function to call Label slow, non_function, wrap, cont; - if (NeedsChecks()) { + if (needs_checks) { // Check that the function is really a JavaScript function. // r1: pushed function (to be verified) __ JumpIfSmi(r1, &non_function); @@ -2926,18 +2856,17 @@ void CallFunctionStub::Generate(MacroAssembler* masm) { // Fast-case: Invoke the function now. // r1: pushed function - int argc = argc_; ParameterCount actual(argc); - if (CallAsMethod()) { - if (NeedsChecks()) { + if (call_as_method) { + if (needs_checks) { EmitContinueIfStrictOrNative(masm, &cont); } // Compute the receiver in sloppy mode. __ ldr(r3, MemOperand(sp, argc * kPointerSize)); - if (NeedsChecks()) { + if (needs_checks) { __ JumpIfSmi(r3, &wrap); __ CompareObjectType(r3, r4, r4, FIRST_SPEC_OBJECT_TYPE); __ b(lt, &wrap); @@ -2950,19 +2879,24 @@ void CallFunctionStub::Generate(MacroAssembler* masm) { __ InvokeFunction(r1, actual, JUMP_FUNCTION, NullCallWrapper()); - if (NeedsChecks()) { + if (needs_checks) { // Slow-case: Non-function called. __ bind(&slow); EmitSlowCase(masm, argc, &non_function); } - if (CallAsMethod()) { + if (call_as_method) { __ bind(&wrap); EmitWrapCase(masm, argc, &cont); } } +void CallFunctionStub::Generate(MacroAssembler* masm) { + CallFunctionNoFeedback(masm, argc_, NeedsChecks(), CallAsMethod()); +} + + void CallConstructStub::Generate(MacroAssembler* masm) { // r0 : number of arguments // r1 : the function to call @@ -3022,7 +2956,7 @@ void CallConstructStub::Generate(MacroAssembler* masm) { __ bind(&do_call); // Set expected number of arguments to zero (not changing r0). 
__ mov(r2, Operand::Zero()); - __ Jump(isolate()->builtins()->ArgumentsAdaptorTrampoline(), + __ Jump(masm->isolate()->builtins()->ArgumentsAdaptorTrampoline(), RelocInfo::CODE_TARGET); } @@ -3036,6 +2970,46 @@ static void EmitLoadTypeFeedbackVector(MacroAssembler* masm, Register vector) { } +void CallIC_ArrayStub::Generate(MacroAssembler* masm) { + // r1 - function + // r3 - slot id + Label miss; + int argc = state_.arg_count(); + ParameterCount actual(argc); + + EmitLoadTypeFeedbackVector(masm, r2); + + __ LoadGlobalFunction(Context::ARRAY_FUNCTION_INDEX, r4); + __ cmp(r1, r4); + __ b(ne, &miss); + + __ mov(r0, Operand(arg_count())); + __ add(r4, r2, Operand::PointerOffsetFromSmiKey(r3)); + __ ldr(r4, FieldMemOperand(r4, FixedArray::kHeaderSize)); + + // Verify that r4 contains an AllocationSite + __ ldr(r5, FieldMemOperand(r4, HeapObject::kMapOffset)); + __ CompareRoot(r5, Heap::kAllocationSiteMapRootIndex); + __ b(ne, &miss); + + __ mov(r2, r4); + ArrayConstructorStub stub(masm->isolate(), arg_count()); + __ TailCallStub(&stub); + + __ bind(&miss); + GenerateMiss(masm, IC::kCallIC_Customization_Miss); + + // The slow case, we need this no matter what to complete a call after a miss. + CallFunctionNoFeedback(masm, + arg_count(), + true, + CallAsMethod()); + + // Unreachable. + __ stop("Unexpected code address"); +} + + void CallICStub::Generate(MacroAssembler* masm) { // r1 - function // r3 - slot id (Smi) @@ -3085,7 +3059,11 @@ void CallICStub::Generate(MacroAssembler* masm) { __ b(eq, &miss); if (!FLAG_trace_ic) { - // We are going megamorphic, and we don't want to visit the runtime. + // We are going megamorphic. If the feedback is a JSFunction, it is fine + // to handle it here. More complex cases are dealt with in the runtime. + __ AssertNotSmi(r4); + __ CompareObjectType(r4, r5, r5, JS_FUNCTION_TYPE); + __ b(ne, &miss); __ add(r4, r2, Operand::PointerOffsetFromSmiKey(r3)); __ LoadRoot(ip, Heap::kMegamorphicSymbolRootIndex); __ str(ip, FieldMemOperand(r4, FixedArray::kHeaderSize)); @@ -3094,7 +3072,7 @@ void CallICStub::Generate(MacroAssembler* masm) { // We are here because tracing is on or we are going monomorphic. __ bind(&miss); - GenerateMiss(masm); + GenerateMiss(masm, IC::kCallIC_Miss); // the slow case __ bind(&slow_start); @@ -3109,7 +3087,7 @@ void CallICStub::Generate(MacroAssembler* masm) { } -void CallICStub::GenerateMiss(MacroAssembler* masm) { +void CallICStub::GenerateMiss(MacroAssembler* masm, IC::UtilityId id) { // Get the receiver of the function from the stack; 1 ~ return address. __ ldr(r4, MemOperand(sp, (state_.arg_count() + 1) * kPointerSize)); @@ -3120,7 +3098,7 @@ void CallICStub::GenerateMiss(MacroAssembler* masm) { __ Push(r4, r1, r2, r3); // Call the entry. - ExternalReference miss = ExternalReference(IC_Utility(IC::kCallIC_Miss), + ExternalReference miss = ExternalReference(IC_Utility(id), masm->isolate()); __ CallExternalReference(miss, 4); @@ -3188,9 +3166,9 @@ void StringCharCodeAtGenerator::GenerateSlow( if (index_flags_ == STRING_INDEX_IS_NUMBER) { __ CallRuntime(Runtime::kNumberToIntegerMapMinusZero, 1); } else { - ASSERT(index_flags_ == STRING_INDEX_IS_ARRAY_INDEX); + DCHECK(index_flags_ == STRING_INDEX_IS_ARRAY_INDEX); // NumberToSmi discards numbers that are not exact integers. - __ CallRuntime(Runtime::kHiddenNumberToSmi, 1); + __ CallRuntime(Runtime::kNumberToSmi, 1); } // Save the conversion result before the pop instructions below // have a chance to overwrite it. 
@@ -3212,7 +3190,7 @@ void StringCharCodeAtGenerator::GenerateSlow( call_helper.BeforeCall(masm); __ SmiTag(index_); __ Push(object_, index_); - __ CallRuntime(Runtime::kHiddenStringCharCodeAt, 2); + __ CallRuntime(Runtime::kStringCharCodeAtRT, 2); __ Move(result_, r0); call_helper.AfterCall(masm); __ jmp(&exit_); @@ -3228,7 +3206,7 @@ void StringCharFromCodeGenerator::GenerateFast(MacroAssembler* masm) { // Fast case of Heap::LookupSingleCharacterStringFromCode. STATIC_ASSERT(kSmiTag == 0); STATIC_ASSERT(kSmiShiftSize == 0); - ASSERT(IsPowerOf2(String::kMaxOneByteCharCode + 1)); + DCHECK(IsPowerOf2(String::kMaxOneByteCharCode + 1)); __ tst(code_, Operand(kSmiTagMask | ((~String::kMaxOneByteCharCode) << kSmiTagSize))); @@ -3267,142 +3245,37 @@ enum CopyCharactersFlags { }; -void StringHelper::GenerateCopyCharactersLong(MacroAssembler* masm, - Register dest, - Register src, - Register count, - Register scratch1, - Register scratch2, - Register scratch3, - Register scratch4, - int flags) { - bool ascii = (flags & COPY_ASCII) != 0; - bool dest_always_aligned = (flags & DEST_ALWAYS_ALIGNED) != 0; - - if (dest_always_aligned && FLAG_debug_code) { - // Check that destination is actually word aligned if the flag says - // that it is. +void StringHelper::GenerateCopyCharacters(MacroAssembler* masm, + Register dest, + Register src, + Register count, + Register scratch, + String::Encoding encoding) { + if (FLAG_debug_code) { + // Check that destination is word aligned. __ tst(dest, Operand(kPointerAlignmentMask)); __ Check(eq, kDestinationOfCopyNotAligned); } - const int kReadAlignment = 4; - const int kReadAlignmentMask = kReadAlignment - 1; - // Ensure that reading an entire aligned word containing the last character - // of a string will not read outside the allocated area (because we pad up - // to kObjectAlignment). - STATIC_ASSERT(kObjectAlignment >= kReadAlignment); // Assumes word reads and writes are little endian. // Nothing to do for zero characters. Label done; - if (!ascii) { + if (encoding == String::TWO_BYTE_ENCODING) { __ add(count, count, Operand(count), SetCC); - } else { - __ cmp(count, Operand::Zero()); } - __ b(eq, &done); - // Assume that you cannot read (or write) unaligned. - Label byte_loop; - // Must copy at least eight bytes, otherwise just do it one byte at a time. - __ cmp(count, Operand(8)); - __ add(count, dest, Operand(count)); - Register limit = count; // Read until src equals this. - __ b(lt, &byte_loop); - - if (!dest_always_aligned) { - // Align dest by byte copying. Copies between zero and three bytes. - __ and_(scratch4, dest, Operand(kReadAlignmentMask), SetCC); - Label dest_aligned; - __ b(eq, &dest_aligned); - __ cmp(scratch4, Operand(2)); - __ ldrb(scratch1, MemOperand(src, 1, PostIndex)); - __ ldrb(scratch2, MemOperand(src, 1, PostIndex), le); - __ ldrb(scratch3, MemOperand(src, 1, PostIndex), lt); - __ strb(scratch1, MemOperand(dest, 1, PostIndex)); - __ strb(scratch2, MemOperand(dest, 1, PostIndex), le); - __ strb(scratch3, MemOperand(dest, 1, PostIndex), lt); - __ bind(&dest_aligned); - } + Register limit = count; // Read until dest equals this. + __ add(limit, dest, Operand(count)); - Label simple_loop; - - __ sub(scratch4, dest, Operand(src)); - __ and_(scratch4, scratch4, Operand(0x03), SetCC); - __ b(eq, &simple_loop); - // Shift register is number of bits in a source word that - // must be combined with bits in the next source word in order - // to create a destination word. - - // Complex loop for src/dst that are not aligned the same way. 
- { - Label loop; - __ mov(scratch4, Operand(scratch4, LSL, 3)); - Register left_shift = scratch4; - __ and_(src, src, Operand(~3)); // Round down to load previous word. - __ ldr(scratch1, MemOperand(src, 4, PostIndex)); - // Store the "shift" most significant bits of scratch in the least - // signficant bits (i.e., shift down by (32-shift)). - __ rsb(scratch2, left_shift, Operand(32)); - Register right_shift = scratch2; - __ mov(scratch1, Operand(scratch1, LSR, right_shift)); - - __ bind(&loop); - __ ldr(scratch3, MemOperand(src, 4, PostIndex)); - __ orr(scratch1, scratch1, Operand(scratch3, LSL, left_shift)); - __ str(scratch1, MemOperand(dest, 4, PostIndex)); - __ mov(scratch1, Operand(scratch3, LSR, right_shift)); - // Loop if four or more bytes left to copy. - __ sub(scratch3, limit, Operand(dest)); - __ sub(scratch3, scratch3, Operand(4), SetCC); - __ b(ge, &loop); - } - // There is now between zero and three bytes left to copy (negative that - // number is in scratch3), and between one and three bytes already read into - // scratch1 (eight times that number in scratch4). We may have read past - // the end of the string, but because objects are aligned, we have not read - // past the end of the object. - // Find the minimum of remaining characters to move and preloaded characters - // and write those as bytes. - __ add(scratch3, scratch3, Operand(4), SetCC); - __ b(eq, &done); - __ cmp(scratch4, Operand(scratch3, LSL, 3), ne); - // Move minimum of bytes read and bytes left to copy to scratch4. - __ mov(scratch3, Operand(scratch4, LSR, 3), LeaveCC, lt); - // Between one and three (value in scratch3) characters already read into - // scratch ready to write. - __ cmp(scratch3, Operand(2)); - __ strb(scratch1, MemOperand(dest, 1, PostIndex)); - __ mov(scratch1, Operand(scratch1, LSR, 8), LeaveCC, ge); - __ strb(scratch1, MemOperand(dest, 1, PostIndex), ge); - __ mov(scratch1, Operand(scratch1, LSR, 8), LeaveCC, gt); - __ strb(scratch1, MemOperand(dest, 1, PostIndex), gt); - // Copy any remaining bytes. - __ b(&byte_loop); - - // Simple loop. - // Copy words from src to dst, until less than four bytes left. - // Both src and dest are word aligned. - __ bind(&simple_loop); - { - Label loop; - __ bind(&loop); - __ ldr(scratch1, MemOperand(src, 4, PostIndex)); - __ sub(scratch3, limit, Operand(dest)); - __ str(scratch1, MemOperand(dest, 4, PostIndex)); - // Compare to 8, not 4, because we do the substraction before increasing - // dest. - __ cmp(scratch3, Operand(8)); - __ b(ge, &loop); - } - - // Copy bytes from src to dst until dst hits limit. - __ bind(&byte_loop); + Label loop_entry, loop; + // Copy bytes from src to dest until dest hits limit. + __ b(&loop_entry); + __ bind(&loop); + __ ldrb(scratch, MemOperand(src, 1, PostIndex), lt); + __ strb(scratch, MemOperand(dest, 1, PostIndex)); + __ bind(&loop_entry); __ cmp(dest, Operand(limit)); - __ ldrb(scratch1, MemOperand(src, 1, PostIndex), lt); - __ b(ge, &done); - __ strb(scratch1, MemOperand(dest, 1, PostIndex)); - __ b(&byte_loop); + __ b(lt, &loop); __ bind(&done); } @@ -3597,7 +3470,7 @@ void SubStringStub::Generate(MacroAssembler* masm) { // Handle external string. // Rule out short external strings. 
- STATIC_CHECK(kShortExternalStringTag != 0); + STATIC_ASSERT(kShortExternalStringTag != 0); __ tst(r1, Operand(kShortExternalStringTag)); __ b(ne, &runtime); __ ldr(r5, FieldMemOperand(r5, ExternalString::kResourceDataOffset)); @@ -3628,8 +3501,8 @@ void SubStringStub::Generate(MacroAssembler* masm) { // r2: result string length // r5: first character of substring to copy STATIC_ASSERT((SeqOneByteString::kHeaderSize & kObjectAlignmentMask) == 0); - StringHelper::GenerateCopyCharactersLong(masm, r1, r5, r2, r3, r4, r6, r9, - COPY_ASCII | DEST_ALWAYS_ALIGNED); + StringHelper::GenerateCopyCharacters( + masm, r1, r5, r2, r3, String::ONE_BYTE_ENCODING); __ jmp(&return_r0); // Allocate and copy the resulting two-byte string. @@ -3647,8 +3520,8 @@ void SubStringStub::Generate(MacroAssembler* masm) { // r2: result length. // r5: first character of substring to copy. STATIC_ASSERT((SeqTwoByteString::kHeaderSize & kObjectAlignmentMask) == 0); - StringHelper::GenerateCopyCharactersLong( - masm, r1, r5, r2, r3, r4, r6, r9, DEST_ALWAYS_ALIGNED); + StringHelper::GenerateCopyCharacters( + masm, r1, r5, r2, r3, String::TWO_BYTE_ENCODING); __ bind(&return_r0); Counters* counters = isolate()->counters(); @@ -3658,7 +3531,7 @@ void SubStringStub::Generate(MacroAssembler* masm) { // Just jump to runtime to create the sub string. __ bind(&runtime); - __ TailCallRuntime(Runtime::kHiddenSubString, 3, 1); + __ TailCallRuntime(Runtime::kSubString, 3, 1); __ bind(&single_char); // r0: original string @@ -3740,7 +3613,7 @@ void StringCompareStub::GenerateCompareFlatAsciiStrings(MacroAssembler* masm, // Compare lengths - strings up to min-length are equal. __ bind(&compare_lengths); - ASSERT(Smi::FromInt(EQUAL) == static_cast<Smi*>(0)); + DCHECK(Smi::FromInt(EQUAL) == static_cast<Smi*>(0)); // Use length_delta as result if it's zero. __ mov(r0, Operand(length_delta), SetCC); __ bind(&result_not_equal); @@ -3816,7 +3689,7 @@ void StringCompareStub::Generate(MacroAssembler* masm) { // Call the runtime; it returns -1 (less), 0 (equal), or 1 (greater) // tagged as a small integer. __ bind(&runtime); - __ TailCallRuntime(Runtime::kHiddenStringCompare, 2, 1); + __ TailCallRuntime(Runtime::kStringCompare, 2, 1); } @@ -3852,7 +3725,7 @@ void BinaryOpICWithAllocationSiteStub::Generate(MacroAssembler* masm) { void ICCompareStub::GenerateSmis(MacroAssembler* masm) { - ASSERT(state_ == CompareIC::SMI); + DCHECK(state_ == CompareIC::SMI); Label miss; __ orr(r2, r1, r0); __ JumpIfNotSmi(r2, &miss); @@ -3873,7 +3746,7 @@ void ICCompareStub::GenerateSmis(MacroAssembler* masm) { void ICCompareStub::GenerateNumbers(MacroAssembler* masm) { - ASSERT(state_ == CompareIC::NUMBER); + DCHECK(state_ == CompareIC::NUMBER); Label generic_stub; Label unordered, maybe_undefined1, maybe_undefined2; @@ -3950,7 +3823,7 @@ void ICCompareStub::GenerateNumbers(MacroAssembler* masm) { void ICCompareStub::GenerateInternalizedStrings(MacroAssembler* masm) { - ASSERT(state_ == CompareIC::INTERNALIZED_STRING); + DCHECK(state_ == CompareIC::INTERNALIZED_STRING); Label miss; // Registers containing left and right operands respectively. @@ -3976,7 +3849,7 @@ void ICCompareStub::GenerateInternalizedStrings(MacroAssembler* masm) { __ cmp(left, right); // Make sure r0 is non-zero. At this point input operands are // guaranteed to be non-zero. 
- ASSERT(right.is(r0)); + DCHECK(right.is(r0)); STATIC_ASSERT(EQUAL == 0); STATIC_ASSERT(kSmiTag == 0); __ mov(r0, Operand(Smi::FromInt(EQUAL)), LeaveCC, eq); @@ -3988,8 +3861,8 @@ void ICCompareStub::GenerateInternalizedStrings(MacroAssembler* masm) { void ICCompareStub::GenerateUniqueNames(MacroAssembler* masm) { - ASSERT(state_ == CompareIC::UNIQUE_NAME); - ASSERT(GetCondition() == eq); + DCHECK(state_ == CompareIC::UNIQUE_NAME); + DCHECK(GetCondition() == eq); Label miss; // Registers containing left and right operands respectively. @@ -4015,7 +3888,7 @@ void ICCompareStub::GenerateUniqueNames(MacroAssembler* masm) { __ cmp(left, right); // Make sure r0 is non-zero. At this point input operands are // guaranteed to be non-zero. - ASSERT(right.is(r0)); + DCHECK(right.is(r0)); STATIC_ASSERT(EQUAL == 0); STATIC_ASSERT(kSmiTag == 0); __ mov(r0, Operand(Smi::FromInt(EQUAL)), LeaveCC, eq); @@ -4027,7 +3900,7 @@ void ICCompareStub::GenerateUniqueNames(MacroAssembler* masm) { void ICCompareStub::GenerateStrings(MacroAssembler* masm) { - ASSERT(state_ == CompareIC::STRING); + DCHECK(state_ == CompareIC::STRING); Label miss; bool equality = Token::IsEqualityOp(op_); @@ -4067,13 +3940,13 @@ void ICCompareStub::GenerateStrings(MacroAssembler* masm) { // because we already know they are not identical. We know they are both // strings. if (equality) { - ASSERT(GetCondition() == eq); + DCHECK(GetCondition() == eq); STATIC_ASSERT(kInternalizedTag == 0); __ orr(tmp3, tmp1, Operand(tmp2)); __ tst(tmp3, Operand(kIsNotInternalizedMask)); // Make sure r0 is non-zero. At this point input operands are // guaranteed to be non-zero. - ASSERT(right.is(r0)); + DCHECK(right.is(r0)); __ Ret(eq); } @@ -4097,7 +3970,7 @@ void ICCompareStub::GenerateStrings(MacroAssembler* masm) { if (equality) { __ TailCallRuntime(Runtime::kStringEquals, 2, 1); } else { - __ TailCallRuntime(Runtime::kHiddenStringCompare, 2, 1); + __ TailCallRuntime(Runtime::kStringCompare, 2, 1); } __ bind(&miss); @@ -4106,7 +3979,7 @@ void ICCompareStub::GenerateStrings(MacroAssembler* masm) { void ICCompareStub::GenerateObjects(MacroAssembler* masm) { - ASSERT(state_ == CompareIC::OBJECT); + DCHECK(state_ == CompareIC::OBJECT); Label miss; __ and_(r2, r1, Operand(r0)); __ JumpIfSmi(r2, &miss); @@ -4116,7 +3989,7 @@ void ICCompareStub::GenerateObjects(MacroAssembler* masm) { __ CompareObjectType(r1, r2, r2, JS_OBJECT_TYPE); __ b(ne, &miss); - ASSERT(GetCondition() == eq); + DCHECK(GetCondition() == eq); __ sub(r0, r0, Operand(r1)); __ Ret(); @@ -4195,7 +4068,7 @@ void NameDictionaryLookupStub::GenerateNegativeLookup(MacroAssembler* masm, Register properties, Handle<Name> name, Register scratch0) { - ASSERT(name->IsUniqueName()); + DCHECK(name->IsUniqueName()); // If names of slots in range from 1 to kProbes - 1 for the hash value are // not equal to the name and kProbes-th slot is not used (its name is the // undefined value), it guarantees the hash table doesn't contain the @@ -4212,17 +4085,17 @@ void NameDictionaryLookupStub::GenerateNegativeLookup(MacroAssembler* masm, Smi::FromInt(name->Hash() + NameDictionary::GetProbeOffset(i)))); // Scale the index by multiplying by the entry size. - ASSERT(NameDictionary::kEntrySize == 3); + DCHECK(NameDictionary::kEntrySize == 3); __ add(index, index, Operand(index, LSL, 1)); // index *= 3. Register entity_name = scratch0; // Having undefined at this place means the name is not contained. 
- ASSERT_EQ(kSmiTagSize, 1); + DCHECK_EQ(kSmiTagSize, 1); Register tmp = properties; __ add(tmp, properties, Operand(index, LSL, 1)); __ ldr(entity_name, FieldMemOperand(tmp, kElementsStartOffset)); - ASSERT(!tmp.is(entity_name)); + DCHECK(!tmp.is(entity_name)); __ LoadRoot(tmp, Heap::kUndefinedValueRootIndex); __ cmp(entity_name, tmp); __ b(eq, done); @@ -4278,10 +4151,10 @@ void NameDictionaryLookupStub::GeneratePositiveLookup(MacroAssembler* masm, Register name, Register scratch1, Register scratch2) { - ASSERT(!elements.is(scratch1)); - ASSERT(!elements.is(scratch2)); - ASSERT(!name.is(scratch1)); - ASSERT(!name.is(scratch2)); + DCHECK(!elements.is(scratch1)); + DCHECK(!elements.is(scratch2)); + DCHECK(!name.is(scratch1)); + DCHECK(!name.is(scratch2)); __ AssertName(name); @@ -4300,7 +4173,7 @@ void NameDictionaryLookupStub::GeneratePositiveLookup(MacroAssembler* masm, // Add the probe offset (i + i * i) left shifted to avoid right shifting // the hash in a separate instruction. The value hash + i + i * i is right // shifted in the following and instruction. - ASSERT(NameDictionary::GetProbeOffset(i) < + DCHECK(NameDictionary::GetProbeOffset(i) < 1 << (32 - Name::kHashFieldOffset)); __ add(scratch2, scratch2, Operand( NameDictionary::GetProbeOffset(i) << Name::kHashShift)); @@ -4308,7 +4181,7 @@ void NameDictionaryLookupStub::GeneratePositiveLookup(MacroAssembler* masm, __ and_(scratch2, scratch1, Operand(scratch2, LSR, Name::kHashShift)); // Scale the index by multiplying by the element size. - ASSERT(NameDictionary::kEntrySize == 3); + DCHECK(NameDictionary::kEntrySize == 3); // scratch2 = scratch2 * 3. __ add(scratch2, scratch2, Operand(scratch2, LSL, 1)); @@ -4326,7 +4199,7 @@ void NameDictionaryLookupStub::GeneratePositiveLookup(MacroAssembler* masm, __ stm(db_w, sp, spill_mask); if (name.is(r0)) { - ASSERT(!elements.is(r1)); + DCHECK(!elements.is(r1)); __ Move(r1, name); __ Move(r0, elements); } else { @@ -4382,7 +4255,7 @@ void NameDictionaryLookupStub::Generate(MacroAssembler* masm) { // Add the probe offset (i + i * i) left shifted to avoid right shifting // the hash in a separate instruction. The value hash + i + i * i is right // shifted in the following and instruction. - ASSERT(NameDictionary::GetProbeOffset(i) < + DCHECK(NameDictionary::GetProbeOffset(i) < 1 << (32 - Name::kHashFieldOffset)); __ add(index, hash, Operand( NameDictionary::GetProbeOffset(i) << Name::kHashShift)); @@ -4392,10 +4265,10 @@ void NameDictionaryLookupStub::Generate(MacroAssembler* masm) { __ and_(index, mask, Operand(index, LSR, Name::kHashShift)); // Scale the index by multiplying by the entry size. - ASSERT(NameDictionary::kEntrySize == 3); + DCHECK(NameDictionary::kEntrySize == 3); __ add(index, index, Operand(index, LSL, 1)); // index *= 3. - ASSERT_EQ(kSmiTagSize, 1); + DCHECK_EQ(kSmiTagSize, 1); __ add(index, dictionary, Operand(index, LSL, 2)); __ ldr(entry_key, FieldMemOperand(index, kElementsStartOffset)); @@ -4445,11 +4318,6 @@ void StoreBufferOverflowStub::GenerateFixedRegStubsAheadOfTime( } -bool CodeStub::CanUseFPRegisters() { - return true; // VFP2 is a base requirement for V8 -} - - // Takes the input in 3 registers: address_ value_ and object_. A pointer to // the value has just been written into the object, now this stub makes sure // we keep the GC informed. The word in the object where the value has been @@ -4488,8 +4356,8 @@ void RecordWriteStub::Generate(MacroAssembler* masm) { // Initial mode of the stub is expected to be STORE_BUFFER_ONLY. 
// Will be checked in IncrementalMarking::ActivateGeneratedStub. - ASSERT(Assembler::GetBranchOffset(masm->instr_at(0)) < (1 << 12)); - ASSERT(Assembler::GetBranchOffset(masm->instr_at(4)) < (1 << 12)); + DCHECK(Assembler::GetBranchOffset(masm->instr_at(0)) < (1 << 12)); + DCHECK(Assembler::GetBranchOffset(masm->instr_at(4)) < (1 << 12)); PatchBranchIntoNop(masm, 0); PatchBranchIntoNop(masm, Assembler::kInstrSize); } @@ -4541,8 +4409,8 @@ void RecordWriteStub::InformIncrementalMarker(MacroAssembler* masm) { __ PrepareCallCFunction(argument_count, regs_.scratch0()); Register address = r0.is(regs_.address()) ? regs_.scratch0() : regs_.address(); - ASSERT(!address.is(regs_.object())); - ASSERT(!address.is(r0)); + DCHECK(!address.is(regs_.object())); + DCHECK(!address.is(r0)); __ Move(address, regs_.address()); __ Move(r0, regs_.object()); __ Move(r1, address); @@ -4705,7 +4573,7 @@ void StoreArrayLiteralElementStub::Generate(MacroAssembler* masm) { void StubFailureTrampolineStub::Generate(MacroAssembler* masm) { - CEntryStub ces(isolate(), 1, fp_registers_ ? kSaveFPRegs : kDontSaveFPRegs); + CEntryStub ces(isolate(), 1, kSaveFPRegs); __ Call(ces.GetCode(), RelocInfo::CODE_TARGET); int parameter_count_offset = StubFailureTrampolineFrame::kCallerStackParameterCountFrameOffset; @@ -4748,7 +4616,7 @@ void ProfileEntryHookStub::Generate(MacroAssembler* masm) { // We also save lr, so the count here is one higher than the mask indicates. const int32_t kNumSavedRegs = 7; - ASSERT((kCallerSaved & kSavedRegs) == kCallerSaved); + DCHECK((kCallerSaved & kSavedRegs) == kCallerSaved); // Save all caller-save registers as this may be called from anywhere. __ stm(db_w, sp, kSavedRegs | lr.bit()); @@ -4764,7 +4632,7 @@ void ProfileEntryHookStub::Generate(MacroAssembler* masm) { int frame_alignment = masm->ActivationFrameAlignment(); if (frame_alignment > kPointerSize) { __ mov(r5, sp); - ASSERT(IsPowerOf2(frame_alignment)); + DCHECK(IsPowerOf2(frame_alignment)); __ and_(sp, sp, Operand(-frame_alignment)); } @@ -4828,12 +4696,12 @@ static void CreateArrayDispatchOneArgument(MacroAssembler* masm, // sp[0] - last argument Label normal_sequence; if (mode == DONT_OVERRIDE) { - ASSERT(FAST_SMI_ELEMENTS == 0); - ASSERT(FAST_HOLEY_SMI_ELEMENTS == 1); - ASSERT(FAST_ELEMENTS == 2); - ASSERT(FAST_HOLEY_ELEMENTS == 3); - ASSERT(FAST_DOUBLE_ELEMENTS == 4); - ASSERT(FAST_HOLEY_DOUBLE_ELEMENTS == 5); + DCHECK(FAST_SMI_ELEMENTS == 0); + DCHECK(FAST_HOLEY_SMI_ELEMENTS == 1); + DCHECK(FAST_ELEMENTS == 2); + DCHECK(FAST_HOLEY_ELEMENTS == 3); + DCHECK(FAST_DOUBLE_ELEMENTS == 4); + DCHECK(FAST_HOLEY_DOUBLE_ELEMENTS == 5); // is the low bit set? If so, we are holey and that is good. __ tst(r3, Operand(1)); @@ -5059,7 +4927,7 @@ void InternalArrayConstructorStub::Generate(MacroAssembler* masm) { // but the following bit field extraction takes care of that anyway. __ ldr(r3, FieldMemOperand(r3, Map::kBitField2Offset)); // Retrieve elements_kind from bit field 2. - __ Ubfx(r3, r3, Map::kElementsKindShift, Map::kElementsKindBitCount); + __ DecodeField<Map::ElementsKindBits>(r3); if (FLAG_debug_code) { Label done; @@ -5152,7 +5020,7 @@ void CallApiFunctionStub::Generate(MacroAssembler* masm) { FrameScope frame_scope(masm, StackFrame::MANUAL); __ EnterExitFrame(false, kApiStackSpace); - ASSERT(!api_function_address.is(r0) && !scratch.is(r0)); + DCHECK(!api_function_address.is(r0) && !scratch.is(r0)); // r0 = FunctionCallbackInfo& // Arguments is after the return address. 
__ add(r0, sp, Operand(1 * kPointerSize)); diff --git a/deps/v8/src/arm/code-stubs-arm.h b/deps/v8/src/arm/code-stubs-arm.h index 3237b3af411..ff2a80e676f 100644 --- a/deps/v8/src/arm/code-stubs-arm.h +++ b/deps/v8/src/arm/code-stubs-arm.h @@ -5,7 +5,7 @@ #ifndef V8_ARM_CODE_STUBS_ARM_H_ #define V8_ARM_CODE_STUBS_ARM_H_ -#include "ic-inl.h" +#include "src/ic-inl.h" namespace v8 { namespace internal { @@ -27,8 +27,8 @@ class StoreBufferOverflowStub: public PlatformCodeStub { private: SaveFPRegsMode save_doubles_; - Major MajorKey() { return StoreBufferOverflow; } - int MinorKey() { return (save_doubles_ == kSaveFPRegs) ? 1 : 0; } + Major MajorKey() const { return StoreBufferOverflow; } + int MinorKey() const { return (save_doubles_ == kSaveFPRegs) ? 1 : 0; } }; @@ -38,15 +38,12 @@ class StringHelper : public AllStatic { // is allowed to spend extra time setting up conditions to make copying // faster. Copying of overlapping regions is not supported. // Dest register ends at the position after the last character written. - static void GenerateCopyCharactersLong(MacroAssembler* masm, - Register dest, - Register src, - Register count, - Register scratch1, - Register scratch2, - Register scratch3, - Register scratch4, - int flags); + static void GenerateCopyCharacters(MacroAssembler* masm, + Register dest, + Register src, + Register count, + Register scratch, + String::Encoding encoding); // Generate string hash. @@ -71,8 +68,8 @@ class SubStringStub: public PlatformCodeStub { explicit SubStringStub(Isolate* isolate) : PlatformCodeStub(isolate) {} private: - Major MajorKey() { return SubString; } - int MinorKey() { return 0; } + Major MajorKey() const { return SubString; } + int MinorKey() const { return 0; } void Generate(MacroAssembler* masm); }; @@ -102,8 +99,8 @@ class StringCompareStub: public PlatformCodeStub { Register scratch3); private: - virtual Major MajorKey() { return StringCompare; } - virtual int MinorKey() { return 0; } + virtual Major MajorKey() const { return StringCompare; } + virtual int MinorKey() const { return 0; } virtual void Generate(MacroAssembler* masm); static void GenerateAsciiCharsCompareLoop(MacroAssembler* masm, @@ -142,8 +139,8 @@ class WriteInt32ToHeapNumberStub : public PlatformCodeStub { class HeapNumberRegisterBits: public BitField<int, 4, 4> {}; class ScratchRegisterBits: public BitField<int, 8, 4> {}; - Major MajorKey() { return WriteInt32ToHeapNumber; } - int MinorKey() { + Major MajorKey() const { return WriteInt32ToHeapNumber; } + int MinorKey() const { // Encode the parameters in a unique 16 bit value. 
return IntRegisterBits::encode(the_int_.code()) | HeapNumberRegisterBits::encode(the_heap_number_.code()) @@ -183,12 +180,12 @@ class RecordWriteStub: public PlatformCodeStub { static void PatchBranchIntoNop(MacroAssembler* masm, int pos) { masm->instr_at_put(pos, (masm->instr_at(pos) & ~B27) | (B24 | B20)); - ASSERT(Assembler::IsTstImmediate(masm->instr_at(pos))); + DCHECK(Assembler::IsTstImmediate(masm->instr_at(pos))); } static void PatchNopIntoBranch(MacroAssembler* masm, int pos) { masm->instr_at_put(pos, (masm->instr_at(pos) & ~(B24 | B20)) | B27); - ASSERT(Assembler::IsBranch(masm->instr_at(pos))); + DCHECK(Assembler::IsBranch(masm->instr_at(pos))); } static Mode GetMode(Code* stub) { @@ -200,13 +197,13 @@ class RecordWriteStub: public PlatformCodeStub { return INCREMENTAL; } - ASSERT(Assembler::IsTstImmediate(first_instruction)); + DCHECK(Assembler::IsTstImmediate(first_instruction)); if (Assembler::IsBranch(second_instruction)) { return INCREMENTAL_COMPACTION; } - ASSERT(Assembler::IsTstImmediate(second_instruction)); + DCHECK(Assembler::IsTstImmediate(second_instruction)); return STORE_BUFFER_ONLY; } @@ -217,22 +214,23 @@ class RecordWriteStub: public PlatformCodeStub { stub->instruction_size()); switch (mode) { case STORE_BUFFER_ONLY: - ASSERT(GetMode(stub) == INCREMENTAL || + DCHECK(GetMode(stub) == INCREMENTAL || GetMode(stub) == INCREMENTAL_COMPACTION); PatchBranchIntoNop(&masm, 0); PatchBranchIntoNop(&masm, Assembler::kInstrSize); break; case INCREMENTAL: - ASSERT(GetMode(stub) == STORE_BUFFER_ONLY); + DCHECK(GetMode(stub) == STORE_BUFFER_ONLY); PatchNopIntoBranch(&masm, 0); break; case INCREMENTAL_COMPACTION: - ASSERT(GetMode(stub) == STORE_BUFFER_ONLY); + DCHECK(GetMode(stub) == STORE_BUFFER_ONLY); PatchNopIntoBranch(&masm, Assembler::kInstrSize); break; } - ASSERT(GetMode(stub) == mode); - CPU::FlushICache(stub->instruction_start(), 2 * Assembler::kInstrSize); + DCHECK(GetMode(stub) == mode); + CpuFeatures::FlushICache(stub->instruction_start(), + 2 * Assembler::kInstrSize); } private: @@ -247,12 +245,12 @@ class RecordWriteStub: public PlatformCodeStub { : object_(object), address_(address), scratch0_(scratch0) { - ASSERT(!AreAliased(scratch0, object, address, no_reg)); + DCHECK(!AreAliased(scratch0, object, address, no_reg)); scratch1_ = GetRegisterThatIsNotOneOf(object_, address_, scratch0_); } void Save(MacroAssembler* masm) { - ASSERT(!AreAliased(object_, address_, scratch1_, scratch0_)); + DCHECK(!AreAliased(object_, address_, scratch1_, scratch0_)); // We don't have to save scratch0_ because it was given to us as // a scratch register. 
masm->push(scratch1_); @@ -307,9 +305,9 @@ class RecordWriteStub: public PlatformCodeStub { Mode mode); void InformIncrementalMarker(MacroAssembler* masm); - Major MajorKey() { return RecordWrite; } + Major MajorKey() const { return RecordWrite; } - int MinorKey() { + int MinorKey() const { return ObjectBits::encode(object_.code()) | ValueBits::encode(value_.code()) | AddressBits::encode(address_.code()) | @@ -349,8 +347,8 @@ class DirectCEntryStub: public PlatformCodeStub { void GenerateCall(MacroAssembler* masm, Register target); private: - Major MajorKey() { return DirectCEntry; } - int MinorKey() { return 0; } + Major MajorKey() const { return DirectCEntry; } + int MinorKey() const { return 0; } bool NeedsImmovableCode() { return true; } }; @@ -395,11 +393,9 @@ class NameDictionaryLookupStub: public PlatformCodeStub { NameDictionary::kHeaderSize + NameDictionary::kElementsStartIndex * kPointerSize; - Major MajorKey() { return NameDictionaryLookup; } + Major MajorKey() const { return NameDictionaryLookup; } - int MinorKey() { - return LookupModeBits::encode(mode_); - } + int MinorKey() const { return LookupModeBits::encode(mode_); } class LookupModeBits: public BitField<LookupMode, 0, 1> {}; @@ -407,8 +403,9 @@ class NameDictionaryLookupStub: public PlatformCodeStub { }; -struct PlatformCallInterfaceDescriptor { - explicit PlatformCallInterfaceDescriptor( +class PlatformInterfaceDescriptor { + public: + explicit PlatformInterfaceDescriptor( TargetAddressStorageMode storage_mode) : storage_mode_(storage_mode) { } diff --git a/deps/v8/src/arm/codegen-arm.cc b/deps/v8/src/arm/codegen-arm.cc index 8a46006eb9b..cdc40a48cf8 100644 --- a/deps/v8/src/arm/codegen-arm.cc +++ b/deps/v8/src/arm/codegen-arm.cc @@ -2,13 +2,13 @@ // Use of this source code is governed by a BSD-style license that can be // found in the LICENSE file. 
-#include "v8.h" +#include "src/v8.h" #if V8_TARGET_ARCH_ARM -#include "codegen.h" -#include "macro-assembler.h" -#include "simulator-arm.h" +#include "src/arm/simulator-arm.h" +#include "src/codegen.h" +#include "src/macro-assembler.h" namespace v8 { namespace internal { @@ -29,7 +29,8 @@ double fast_exp_simulator(double x) { UnaryMathFunction CreateExpFunction() { if (!FLAG_fast_math) return &std::exp; size_t actual_size; - byte* buffer = static_cast<byte*>(OS::Allocate(1 * KB, &actual_size, true)); + byte* buffer = + static_cast<byte*>(base::OS::Allocate(1 * KB, &actual_size, true)); if (buffer == NULL) return &std::exp; ExternalReference::InitializeMathExpData(); @@ -64,10 +65,10 @@ UnaryMathFunction CreateExpFunction() { CodeDesc desc; masm.GetCode(&desc); - ASSERT(!RelocInfo::RequiresRelocation(desc)); + DCHECK(!RelocInfo::RequiresRelocation(desc)); - CPU::FlushICache(buffer, actual_size); - OS::ProtectCode(buffer, actual_size); + CpuFeatures::FlushICache(buffer, actual_size); + base::OS::ProtectCode(buffer, actual_size); #if !defined(USE_SIMULATOR) return FUNCTION_CAST<UnaryMathFunction>(buffer); @@ -78,14 +79,14 @@ UnaryMathFunction CreateExpFunction() { } #if defined(V8_HOST_ARCH_ARM) -OS::MemCopyUint8Function CreateMemCopyUint8Function( - OS::MemCopyUint8Function stub) { +MemCopyUint8Function CreateMemCopyUint8Function(MemCopyUint8Function stub) { #if defined(USE_SIMULATOR) return stub; #else if (!CpuFeatures::IsSupported(UNALIGNED_ACCESSES)) return stub; size_t actual_size; - byte* buffer = static_cast<byte*>(OS::Allocate(1 * KB, &actual_size, true)); + byte* buffer = + static_cast<byte*>(base::OS::Allocate(1 * KB, &actual_size, true)); if (buffer == NULL) return stub; MacroAssembler masm(NULL, buffer, static_cast<int>(actual_size)); @@ -224,24 +225,25 @@ OS::MemCopyUint8Function CreateMemCopyUint8Function( CodeDesc desc; masm.GetCode(&desc); - ASSERT(!RelocInfo::RequiresRelocation(desc)); + DCHECK(!RelocInfo::RequiresRelocation(desc)); - CPU::FlushICache(buffer, actual_size); - OS::ProtectCode(buffer, actual_size); - return FUNCTION_CAST<OS::MemCopyUint8Function>(buffer); + CpuFeatures::FlushICache(buffer, actual_size); + base::OS::ProtectCode(buffer, actual_size); + return FUNCTION_CAST<MemCopyUint8Function>(buffer); #endif } // Convert 8 to 16. The number of character to copy must be at least 8. 
-OS::MemCopyUint16Uint8Function CreateMemCopyUint16Uint8Function( - OS::MemCopyUint16Uint8Function stub) { +MemCopyUint16Uint8Function CreateMemCopyUint16Uint8Function( + MemCopyUint16Uint8Function stub) { #if defined(USE_SIMULATOR) return stub; #else if (!CpuFeatures::IsSupported(UNALIGNED_ACCESSES)) return stub; size_t actual_size; - byte* buffer = static_cast<byte*>(OS::Allocate(1 * KB, &actual_size, true)); + byte* buffer = + static_cast<byte*>(base::OS::Allocate(1 * KB, &actual_size, true)); if (buffer == NULL) return stub; MacroAssembler masm(NULL, buffer, static_cast<int>(actual_size)); @@ -312,10 +314,10 @@ OS::MemCopyUint16Uint8Function CreateMemCopyUint16Uint8Function( CodeDesc desc; masm.GetCode(&desc); - CPU::FlushICache(buffer, actual_size); - OS::ProtectCode(buffer, actual_size); + CpuFeatures::FlushICache(buffer, actual_size); + base::OS::ProtectCode(buffer, actual_size); - return FUNCTION_CAST<OS::MemCopyUint16Uint8Function>(buffer); + return FUNCTION_CAST<MemCopyUint16Uint8Function>(buffer); #endif } #endif @@ -325,7 +327,8 @@ UnaryMathFunction CreateSqrtFunction() { return &std::sqrt; #else size_t actual_size; - byte* buffer = static_cast<byte*>(OS::Allocate(1 * KB, &actual_size, true)); + byte* buffer = + static_cast<byte*>(base::OS::Allocate(1 * KB, &actual_size, true)); if (buffer == NULL) return &std::sqrt; MacroAssembler masm(NULL, buffer, static_cast<int>(actual_size)); @@ -337,10 +340,10 @@ UnaryMathFunction CreateSqrtFunction() { CodeDesc desc; masm.GetCode(&desc); - ASSERT(!RelocInfo::RequiresRelocation(desc)); + DCHECK(!RelocInfo::RequiresRelocation(desc)); - CPU::FlushICache(buffer, actual_size); - OS::ProtectCode(buffer, actual_size); + CpuFeatures::FlushICache(buffer, actual_size); + base::OS::ProtectCode(buffer, actual_size); return FUNCTION_CAST<UnaryMathFunction>(buffer); #endif } @@ -353,14 +356,14 @@ UnaryMathFunction CreateSqrtFunction() { void StubRuntimeCallHelper::BeforeCall(MacroAssembler* masm) const { masm->EnterFrame(StackFrame::INTERNAL); - ASSERT(!masm->has_frame()); + DCHECK(!masm->has_frame()); masm->set_has_frame(true); } void StubRuntimeCallHelper::AfterCall(MacroAssembler* masm) const { masm->LeaveFrame(StackFrame::INTERNAL); - ASSERT(masm->has_frame()); + DCHECK(masm->has_frame()); masm->set_has_frame(false); } @@ -371,26 +374,28 @@ void StubRuntimeCallHelper::AfterCall(MacroAssembler* masm) const { #define __ ACCESS_MASM(masm) void ElementsTransitionGenerator::GenerateMapChangeElementsTransition( - MacroAssembler* masm, AllocationSiteMode mode, + MacroAssembler* masm, + Register receiver, + Register key, + Register value, + Register target_map, + AllocationSiteMode mode, Label* allocation_memento_found) { - // ----------- S t a t e ------------- - // -- r0 : value - // -- r1 : key - // -- r2 : receiver - // -- lr : return address - // -- r3 : target map, scratch for subsequent call - // -- r4 : scratch (elements) - // ----------------------------------- + Register scratch_elements = r4; + DCHECK(!AreAliased(receiver, key, value, target_map, + scratch_elements)); + if (mode == TRACK_ALLOCATION_SITE) { - ASSERT(allocation_memento_found != NULL); - __ JumpIfJSArrayHasAllocationMemento(r2, r4, allocation_memento_found); + DCHECK(allocation_memento_found != NULL); + __ JumpIfJSArrayHasAllocationMemento( + receiver, scratch_elements, allocation_memento_found); } // Set transitioned map. 
- __ str(r3, FieldMemOperand(r2, HeapObject::kMapOffset)); - __ RecordWriteField(r2, + __ str(target_map, FieldMemOperand(receiver, HeapObject::kMapOffset)); + __ RecordWriteField(receiver, HeapObject::kMapOffset, - r3, + target_map, r9, kLRHasNotBeenSaved, kDontSaveFPRegs, @@ -400,87 +405,103 @@ void ElementsTransitionGenerator::GenerateMapChangeElementsTransition( void ElementsTransitionGenerator::GenerateSmiToDouble( - MacroAssembler* masm, AllocationSiteMode mode, Label* fail) { - // ----------- S t a t e ------------- - // -- r0 : value - // -- r1 : key - // -- r2 : receiver - // -- lr : return address - // -- r3 : target map, scratch for subsequent call - // -- r4 : scratch (elements) - // ----------------------------------- + MacroAssembler* masm, + Register receiver, + Register key, + Register value, + Register target_map, + AllocationSiteMode mode, + Label* fail) { + // Register lr contains the return address. Label loop, entry, convert_hole, gc_required, only_change_map, done; + Register elements = r4; + Register length = r5; + Register array = r6; + Register array_end = array; + + // target_map parameter can be clobbered. + Register scratch1 = target_map; + Register scratch2 = r9; + + // Verify input registers don't conflict with locals. + DCHECK(!AreAliased(receiver, key, value, target_map, + elements, length, array, scratch2)); if (mode == TRACK_ALLOCATION_SITE) { - __ JumpIfJSArrayHasAllocationMemento(r2, r4, fail); + __ JumpIfJSArrayHasAllocationMemento(receiver, elements, fail); } // Check for empty arrays, which only require a map transition and no changes // to the backing store. - __ ldr(r4, FieldMemOperand(r2, JSObject::kElementsOffset)); - __ CompareRoot(r4, Heap::kEmptyFixedArrayRootIndex); + __ ldr(elements, FieldMemOperand(receiver, JSObject::kElementsOffset)); + __ CompareRoot(elements, Heap::kEmptyFixedArrayRootIndex); __ b(eq, &only_change_map); __ push(lr); - __ ldr(r5, FieldMemOperand(r4, FixedArray::kLengthOffset)); - // r5: number of elements (smi-tagged) + __ ldr(length, FieldMemOperand(elements, FixedArray::kLengthOffset)); + // length: number of elements (smi-tagged) // Allocate new FixedDoubleArray. // Use lr as a temporary register. - __ mov(lr, Operand(r5, LSL, 2)); + __ mov(lr, Operand(length, LSL, 2)); __ add(lr, lr, Operand(FixedDoubleArray::kHeaderSize)); - __ Allocate(lr, r6, r4, r9, &gc_required, DOUBLE_ALIGNMENT); - // r6: destination FixedDoubleArray, not tagged as heap object. - __ ldr(r4, FieldMemOperand(r2, JSObject::kElementsOffset)); + __ Allocate(lr, array, elements, scratch2, &gc_required, DOUBLE_ALIGNMENT); + // array: destination FixedDoubleArray, not tagged as heap object. + __ ldr(elements, FieldMemOperand(receiver, JSObject::kElementsOffset)); // r4: source FixedArray. // Set destination FixedDoubleArray's length and map. - __ LoadRoot(r9, Heap::kFixedDoubleArrayMapRootIndex); - __ str(r5, MemOperand(r6, FixedDoubleArray::kLengthOffset)); + __ LoadRoot(scratch2, Heap::kFixedDoubleArrayMapRootIndex); + __ str(length, MemOperand(array, FixedDoubleArray::kLengthOffset)); // Update receiver's map. 
- __ str(r9, MemOperand(r6, HeapObject::kMapOffset)); + __ str(scratch2, MemOperand(array, HeapObject::kMapOffset)); - __ str(r3, FieldMemOperand(r2, HeapObject::kMapOffset)); - __ RecordWriteField(r2, + __ str(target_map, FieldMemOperand(receiver, HeapObject::kMapOffset)); + __ RecordWriteField(receiver, HeapObject::kMapOffset, - r3, - r9, + target_map, + scratch2, kLRHasBeenSaved, kDontSaveFPRegs, OMIT_REMEMBERED_SET, OMIT_SMI_CHECK); // Replace receiver's backing store with newly created FixedDoubleArray. - __ add(r3, r6, Operand(kHeapObjectTag)); - __ str(r3, FieldMemOperand(r2, JSObject::kElementsOffset)); - __ RecordWriteField(r2, + __ add(scratch1, array, Operand(kHeapObjectTag)); + __ str(scratch1, FieldMemOperand(receiver, JSObject::kElementsOffset)); + __ RecordWriteField(receiver, JSObject::kElementsOffset, - r3, - r9, + scratch1, + scratch2, kLRHasBeenSaved, kDontSaveFPRegs, EMIT_REMEMBERED_SET, OMIT_SMI_CHECK); // Prepare for conversion loop. - __ add(r3, r4, Operand(FixedArray::kHeaderSize - kHeapObjectTag)); - __ add(r9, r6, Operand(FixedDoubleArray::kHeaderSize)); - __ add(r6, r9, Operand(r5, LSL, 2)); - __ mov(r4, Operand(kHoleNanLower32)); - __ mov(r5, Operand(kHoleNanUpper32)); - // r3: begin of source FixedArray element fields, not tagged - // r4: kHoleNanLower32 - // r5: kHoleNanUpper32 - // r6: end of destination FixedDoubleArray, not tagged - // r9: begin of FixedDoubleArray element fields, not tagged + __ add(scratch1, elements, Operand(FixedArray::kHeaderSize - kHeapObjectTag)); + __ add(scratch2, array, Operand(FixedDoubleArray::kHeaderSize)); + __ add(array_end, scratch2, Operand(length, LSL, 2)); + + // Repurpose registers no longer in use. + Register hole_lower = elements; + Register hole_upper = length; + + __ mov(hole_lower, Operand(kHoleNanLower32)); + __ mov(hole_upper, Operand(kHoleNanUpper32)); + // scratch1: begin of source FixedArray element fields, not tagged + // hole_lower: kHoleNanLower32 + // hole_upper: kHoleNanUpper32 + // array_end: end of destination FixedDoubleArray, not tagged + // scratch2: begin of FixedDoubleArray element fields, not tagged __ b(&entry); __ bind(&only_change_map); - __ str(r3, FieldMemOperand(r2, HeapObject::kMapOffset)); - __ RecordWriteField(r2, + __ str(target_map, FieldMemOperand(receiver, HeapObject::kMapOffset)); + __ RecordWriteField(receiver, HeapObject::kMapOffset, - r3, - r9, + target_map, + scratch2, kLRHasNotBeenSaved, kDontSaveFPRegs, OMIT_REMEMBERED_SET, @@ -494,15 +515,15 @@ void ElementsTransitionGenerator::GenerateSmiToDouble( // Convert and copy elements. __ bind(&loop); - __ ldr(lr, MemOperand(r3, 4, PostIndex)); + __ ldr(lr, MemOperand(scratch1, 4, PostIndex)); // lr: current element __ UntagAndJumpIfNotSmi(lr, lr, &convert_hole); // Normal smi, convert to double and store. __ vmov(s0, lr); __ vcvt_f64_s32(d0, s0); - __ vstr(d0, r9, 0); - __ add(r9, r9, Operand(8)); + __ vstr(d0, scratch2, 0); + __ add(scratch2, scratch2, Operand(8)); __ b(&entry); // Hole found, store the-hole NaN. 
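The hunks above rewrite GenerateSmiToDouble against named registers and stage
the hole sentinel's two 32-bit halves in hole_lower/hole_upper; the hunk that
follows stores the pair with Strd. As a minimal sketch of how such a sentinel
behaves, the C++ below builds a NaN from two halves and tests it the way
GenerateDoubleToObject later does, by comparing only the upper word. The two
constants here are stand-ins, not necessarily V8's actual
kHoleNanUpper32/kHoleNanLower32 values, and little-endian layout is assumed,
as on the ARM targets this code supports.

#include <cstdint>
#include <cstdio>
#include <cstring>

// Stand-in values for illustration; V8 defines the real constants itself.
static const uint32_t kHoleNanUpper32 = 0x7FFFFFFF;
static const uint32_t kHoleNanLower32 = 0xFFFFFFFF;

int main() {
  // Assemble the sentinel from its two halves, as the generated Strd does.
  uint64_t bits =
      (static_cast<uint64_t>(kHoleNanUpper32) << 32) | kHoleNanLower32;
  double hole;
  std::memcpy(&hole, &bits, sizeof hole);

  // Exponent bits are all ones and the mantissa is non-zero, so the
  // sentinel is a NaN and can never collide with a real element value.
  std::printf("is NaN: %d\n", hole != hole);

  // Holes are detected from the upper word alone (little-endian layout).
  uint32_t upper;
  std::memcpy(&upper, reinterpret_cast<const char*>(&hole) + 4, sizeof upper);
  std::printf("is hole: %d\n", upper == kHoleNanUpper32);
  return 0;
}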
@@ -514,10 +535,10 @@ void ElementsTransitionGenerator::GenerateSmiToDouble( __ CompareRoot(lr, Heap::kTheHoleValueRootIndex); __ Assert(eq, kObjectFoundInSmiOnlyArray); } - __ Strd(r4, r5, MemOperand(r9, 8, PostIndex)); + __ Strd(hole_lower, hole_upper, MemOperand(scratch2, 8, PostIndex)); __ bind(&entry); - __ cmp(r9, r6); + __ cmp(scratch2, array_end); __ b(lt, &loop); __ pop(lr); @@ -526,80 +547,104 @@ void ElementsTransitionGenerator::GenerateSmiToDouble( void ElementsTransitionGenerator::GenerateDoubleToObject( - MacroAssembler* masm, AllocationSiteMode mode, Label* fail) { - // ----------- S t a t e ------------- - // -- r0 : value - // -- r1 : key - // -- r2 : receiver - // -- lr : return address - // -- r3 : target map, scratch for subsequent call - // -- r4 : scratch (elements) - // ----------------------------------- + MacroAssembler* masm, + Register receiver, + Register key, + Register value, + Register target_map, + AllocationSiteMode mode, + Label* fail) { + // Register lr contains the return address. Label entry, loop, convert_hole, gc_required, only_change_map; + Register elements = r4; + Register array = r6; + Register length = r5; + Register scratch = r9; + + // Verify input registers don't conflict with locals. + DCHECK(!AreAliased(receiver, key, value, target_map, + elements, array, length, scratch)); if (mode == TRACK_ALLOCATION_SITE) { - __ JumpIfJSArrayHasAllocationMemento(r2, r4, fail); + __ JumpIfJSArrayHasAllocationMemento(receiver, elements, fail); } // Check for empty arrays, which only require a map transition and no changes // to the backing store. - __ ldr(r4, FieldMemOperand(r2, JSObject::kElementsOffset)); - __ CompareRoot(r4, Heap::kEmptyFixedArrayRootIndex); + __ ldr(elements, FieldMemOperand(receiver, JSObject::kElementsOffset)); + __ CompareRoot(elements, Heap::kEmptyFixedArrayRootIndex); __ b(eq, &only_change_map); __ push(lr); - __ Push(r3, r2, r1, r0); - __ ldr(r5, FieldMemOperand(r4, FixedArray::kLengthOffset)); - // r4: source FixedDoubleArray - // r5: number of elements (smi-tagged) + __ Push(target_map, receiver, key, value); + __ ldr(length, FieldMemOperand(elements, FixedArray::kLengthOffset)); + // elements: source FixedDoubleArray + // length: number of elements (smi-tagged) // Allocate new FixedArray. - __ mov(r0, Operand(FixedDoubleArray::kHeaderSize)); - __ add(r0, r0, Operand(r5, LSL, 1)); - __ Allocate(r0, r6, r3, r9, &gc_required, NO_ALLOCATION_FLAGS); - // r6: destination FixedArray, not tagged as heap object + // Re-use value and target_map registers, as they have been saved on the + // stack. + Register array_size = value; + Register allocate_scratch = target_map; + __ mov(array_size, Operand(FixedDoubleArray::kHeaderSize)); + __ add(array_size, array_size, Operand(length, LSL, 1)); + __ Allocate(array_size, array, allocate_scratch, scratch, &gc_required, + NO_ALLOCATION_FLAGS); + // array: destination FixedArray, not tagged as heap object // Set destination FixedDoubleArray's length and map. - __ LoadRoot(r9, Heap::kFixedArrayMapRootIndex); - __ str(r5, MemOperand(r6, FixedDoubleArray::kLengthOffset)); - __ str(r9, MemOperand(r6, HeapObject::kMapOffset)); + __ LoadRoot(scratch, Heap::kFixedArrayMapRootIndex); + __ str(length, MemOperand(array, FixedDoubleArray::kLengthOffset)); + __ str(scratch, MemOperand(array, HeapObject::kMapOffset)); // Prepare for conversion loop. 
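// (The conversion loop that follows reads each double's upper 32 bits to
// test for the hole and boxes every non-hole value into a freshly
// allocated HeapNumber.)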
- __ add(r4, r4, Operand(FixedDoubleArray::kHeaderSize - kHeapObjectTag + 4)); - __ add(r3, r6, Operand(FixedArray::kHeaderSize)); - __ add(r6, r6, Operand(kHeapObjectTag)); - __ add(r5, r3, Operand(r5, LSL, 1)); - __ LoadRoot(r9, Heap::kHeapNumberMapRootIndex); - // Using offsetted addresses in r4 to fully take advantage of post-indexing. - // r3: begin of destination FixedArray element fields, not tagged - // r4: begin of source FixedDoubleArray element fields, not tagged, +4 - // r5: end of destination FixedArray, not tagged - // r6: destination FixedArray - // r9: heap number map + Register src_elements = elements; + Register dst_elements = target_map; + Register dst_end = length; + Register heap_number_map = scratch; + __ add(src_elements, elements, + Operand(FixedDoubleArray::kHeaderSize - kHeapObjectTag + 4)); + __ add(dst_elements, array, Operand(FixedArray::kHeaderSize)); + __ add(array, array, Operand(kHeapObjectTag)); + __ add(dst_end, dst_elements, Operand(length, LSL, 1)); + __ LoadRoot(heap_number_map, Heap::kHeapNumberMapRootIndex); + // Using offsetted addresses in src_elements to fully take advantage of + // post-indexing. + // dst_elements: begin of destination FixedArray element fields, not tagged + // src_elements: begin of source FixedDoubleArray element fields, + // not tagged, +4 + // dst_end: end of destination FixedArray, not tagged + // array: destination FixedArray + // heap_number_map: heap number map __ b(&entry); // Call into runtime if GC is required. __ bind(&gc_required); - __ Pop(r3, r2, r1, r0); + __ Pop(target_map, receiver, key, value); __ pop(lr); __ b(fail); __ bind(&loop); - __ ldr(r1, MemOperand(r4, 8, PostIndex)); - // r1: current element's upper 32 bit - // r4: address of next element's upper 32 bit - __ cmp(r1, Operand(kHoleNanUpper32)); + Register upper_bits = key; + __ ldr(upper_bits, MemOperand(src_elements, 8, PostIndex)); + // upper_bits: current element's upper 32 bit + // src_elements: address of next element's upper 32 bit + __ cmp(upper_bits, Operand(kHoleNanUpper32)); __ b(eq, &convert_hole); // Non-hole double, copy value into a heap number. - __ AllocateHeapNumber(r2, r0, lr, r9, &gc_required); - // r2: new heap number - __ ldr(r0, MemOperand(r4, 12, NegOffset)); - __ Strd(r0, r1, FieldMemOperand(r2, HeapNumber::kValueOffset)); - __ mov(r0, r3); - __ str(r2, MemOperand(r3, 4, PostIndex)); - __ RecordWrite(r6, - r0, - r2, + Register heap_number = receiver; + Register scratch2 = value; + __ AllocateHeapNumber(heap_number, scratch2, lr, heap_number_map, + &gc_required); + // heap_number: new heap number + __ ldr(scratch2, MemOperand(src_elements, 12, NegOffset)); + __ Strd(scratch2, upper_bits, + FieldMemOperand(heap_number, HeapNumber::kValueOffset)); + __ mov(scratch2, dst_elements); + __ str(heap_number, MemOperand(dst_elements, 4, PostIndex)); + __ RecordWrite(array, + scratch2, + heap_number, kLRHasBeenSaved, kDontSaveFPRegs, EMIT_REMEMBERED_SET, @@ -608,20 +653,20 @@ void ElementsTransitionGenerator::GenerateDoubleToObject( // Replace the-hole NaN with the-hole pointer. __ bind(&convert_hole); - __ LoadRoot(r0, Heap::kTheHoleValueRootIndex); - __ str(r0, MemOperand(r3, 4, PostIndex)); + __ LoadRoot(scratch2, Heap::kTheHoleValueRootIndex); + __ str(scratch2, MemOperand(dst_elements, 4, PostIndex)); __ bind(&entry); - __ cmp(r3, r5); + __ cmp(dst_elements, dst_end); __ b(lt, &loop); - __ Pop(r3, r2, r1, r0); + __ Pop(target_map, receiver, key, value); // Replace receiver's backing store with newly created and filled FixedArray. 
- __ str(r6, FieldMemOperand(r2, JSObject::kElementsOffset)); - __ RecordWriteField(r2, + __ str(array, FieldMemOperand(receiver, JSObject::kElementsOffset)); + __ RecordWriteField(receiver, JSObject::kElementsOffset, - r6, - r9, + array, + scratch, kLRHasBeenSaved, kDontSaveFPRegs, EMIT_REMEMBERED_SET, @@ -630,11 +675,11 @@ void ElementsTransitionGenerator::GenerateDoubleToObject( __ bind(&only_change_map); // Update receiver's map. - __ str(r3, FieldMemOperand(r2, HeapObject::kMapOffset)); - __ RecordWriteField(r2, + __ str(target_map, FieldMemOperand(receiver, HeapObject::kMapOffset)); + __ RecordWriteField(receiver, HeapObject::kMapOffset, - r3, - r9, + target_map, + scratch, kLRHasNotBeenSaved, kDontSaveFPRegs, OMIT_REMEMBERED_SET, @@ -709,7 +754,7 @@ void StringCharLoadGenerator::Generate(MacroAssembler* masm, __ Assert(eq, kExternalStringExpectedButNotFound); } // Rule out short external strings. - STATIC_CHECK(kShortExternalStringTag != 0); + STATIC_ASSERT(kShortExternalStringTag != 0); __ tst(result, Operand(kShortExternalStringMask)); __ b(ne, call_runtime); __ ldr(string, FieldMemOperand(string, ExternalString::kResourceDataOffset)); @@ -742,16 +787,17 @@ void MathExpGenerator::EmitMathExp(MacroAssembler* masm, Register temp1, Register temp2, Register temp3) { - ASSERT(!input.is(result)); - ASSERT(!input.is(double_scratch1)); - ASSERT(!input.is(double_scratch2)); - ASSERT(!result.is(double_scratch1)); - ASSERT(!result.is(double_scratch2)); - ASSERT(!double_scratch1.is(double_scratch2)); - ASSERT(!temp1.is(temp2)); - ASSERT(!temp1.is(temp3)); - ASSERT(!temp2.is(temp3)); - ASSERT(ExternalReference::math_exp_constants(0).address() != NULL); + DCHECK(!input.is(result)); + DCHECK(!input.is(double_scratch1)); + DCHECK(!input.is(double_scratch2)); + DCHECK(!result.is(double_scratch1)); + DCHECK(!result.is(double_scratch2)); + DCHECK(!double_scratch1.is(double_scratch2)); + DCHECK(!temp1.is(temp2)); + DCHECK(!temp1.is(temp3)); + DCHECK(!temp2.is(temp3)); + DCHECK(ExternalReference::math_exp_constants(0).address() != NULL); + DCHECK(!masm->serializer_enabled()); // External references not serializable. Label zero, infinity, done; @@ -782,7 +828,7 @@ void MathExpGenerator::EmitMathExp(MacroAssembler* masm, __ vmul(result, result, double_scratch2); __ vsub(result, result, double_scratch1); // Mov 1 in double_scratch2 as math_exp_constants_array[8] == 1. - ASSERT(*reinterpret_cast<double*> + DCHECK(*reinterpret_cast<double*> (ExternalReference::math_exp_constants(8).address()) == 1); __ vmov(double_scratch2, 1); __ vadd(result, result, double_scratch2); @@ -823,7 +869,7 @@ static const uint32_t kCodeAgePatchFirstInstruction = 0xe24f0008; #endif CodeAgingHelper::CodeAgingHelper() { - ASSERT(young_sequence_.length() == kNoCodeAgeSequenceLength); + DCHECK(young_sequence_.length() == kNoCodeAgeSequenceLength); // Since patcher is a large object, allocate it dynamically when needed, // to avoid overloading the stack in stress conditions. 
// DONT_FLUSH is used because the CodeAgingHelper is initialized early in @@ -849,7 +895,7 @@ bool CodeAgingHelper::IsOld(byte* candidate) const { bool Code::IsYoungSequence(Isolate* isolate, byte* sequence) { bool result = isolate->code_aging_helper()->IsYoung(sequence); - ASSERT(result || isolate->code_aging_helper()->IsOld(sequence)); + DCHECK(result || isolate->code_aging_helper()->IsOld(sequence)); return result; } @@ -875,7 +921,7 @@ void Code::PatchPlatformCodeAge(Isolate* isolate, uint32_t young_length = isolate->code_aging_helper()->young_sequence_length(); if (age == kNoAgeCodeAge) { isolate->code_aging_helper()->CopyYoungSequenceTo(sequence); - CPU::FlushICache(sequence, young_length); + CpuFeatures::FlushICache(sequence, young_length); } else { Code* stub = GetCodeAgeStub(isolate, age, parity); CodePatcher patcher(sequence, young_length / Assembler::kInstrSize); diff --git a/deps/v8/src/arm/codegen-arm.h b/deps/v8/src/arm/codegen-arm.h index 2fc8eb3f0a2..9ec09583d97 100644 --- a/deps/v8/src/arm/codegen-arm.h +++ b/deps/v8/src/arm/codegen-arm.h @@ -5,8 +5,8 @@ #ifndef V8_ARM_CODEGEN_ARM_H_ #define V8_ARM_CODEGEN_ARM_H_ -#include "ast.h" -#include "ic-inl.h" +#include "src/ast.h" +#include "src/ic-inl.h" namespace v8 { namespace internal { diff --git a/deps/v8/src/arm/constants-arm.cc b/deps/v8/src/arm/constants-arm.cc index 676239f829f..3f3c5ed7739 100644 --- a/deps/v8/src/arm/constants-arm.cc +++ b/deps/v8/src/arm/constants-arm.cc @@ -2,11 +2,11 @@ // Use of this source code is governed by a BSD-style license that can be // found in the LICENSE file. -#include "v8.h" +#include "src/v8.h" #if V8_TARGET_ARCH_ARM -#include "constants-arm.h" +#include "src/arm/constants-arm.h" namespace v8 { @@ -28,7 +28,7 @@ double Instruction::DoubleImmedVmov() const { uint64_t imm = high16 << 48; double d; - OS::MemCopy(&d, &imm, 8); + memcpy(&d, &imm, 8); return d; } @@ -81,7 +81,7 @@ const char* VFPRegisters::names_[kNumVFPRegisters] = { const char* VFPRegisters::Name(int reg, bool is_double) { - ASSERT((0 <= reg) && (reg < kNumVFPRegisters)); + DCHECK((0 <= reg) && (reg < kNumVFPRegisters)); return names_[reg + (is_double ? kNumVFPSingleRegisters : 0)]; } diff --git a/deps/v8/src/arm/constants-arm.h b/deps/v8/src/arm/constants-arm.h index 5bace505fa4..c4e559d6c13 100644 --- a/deps/v8/src/arm/constants-arm.h +++ b/deps/v8/src/arm/constants-arm.h @@ -19,11 +19,11 @@ const int kConstantPoolMarkerMask = 0xfff000f0; const int kConstantPoolMarker = 0xe7f000f0; const int kConstantPoolLengthMaxMask = 0xffff; inline int EncodeConstantPoolLength(int length) { - ASSERT((length & kConstantPoolLengthMaxMask) == length); + DCHECK((length & kConstantPoolLengthMaxMask) == length); return ((length & 0xfff0) << 4) | (length & 0xf); } inline int DecodeConstantPoolLength(int instr) { - ASSERT((instr & kConstantPoolMarkerMask) == kConstantPoolMarker); + DCHECK((instr & kConstantPoolMarkerMask) == kConstantPoolMarker); return ((instr >> 4) & 0xfff0) | (instr & 0xf); } @@ -84,13 +84,13 @@ enum Condition { inline Condition NegateCondition(Condition cond) { - ASSERT(cond != al); + DCHECK(cond != al); return static_cast<Condition>(cond ^ ne); } -// Corresponds to transposing the operands of a comparison. -inline Condition ReverseCondition(Condition cond) { +// Commute a condition such that {a cond b == b cond' a}. 
+inline Condition CommuteCondition(Condition cond) { switch (cond) { case lo: return hi; @@ -110,7 +110,7 @@ inline Condition ReverseCondition(Condition cond) { return ge; default: return cond; - }; + } } @@ -405,64 +405,6 @@ enum Hint { no_hint }; inline Hint NegateHint(Hint ignored) { return no_hint; } -// ----------------------------------------------------------------------------- -// Specific instructions, constants, and masks. -// These constants are declared in assembler-arm.cc, as they use named registers -// and other constants. - - -// add(sp, sp, 4) instruction (aka Pop()) -extern const Instr kPopInstruction; - -// str(r, MemOperand(sp, 4, NegPreIndex), al) instruction (aka push(r)) -// register r is not encoded. -extern const Instr kPushRegPattern; - -// ldr(r, MemOperand(sp, 4, PostIndex), al) instruction (aka pop(r)) -// register r is not encoded. -extern const Instr kPopRegPattern; - -// mov lr, pc -extern const Instr kMovLrPc; -// ldr rd, [pc, #offset] -extern const Instr kLdrPCMask; -extern const Instr kLdrPCPattern; -// vldr dd, [pc, #offset] -extern const Instr kVldrDPCMask; -extern const Instr kVldrDPCPattern; -// blxcc rm -extern const Instr kBlxRegMask; - -extern const Instr kBlxRegPattern; - -extern const Instr kMovMvnMask; -extern const Instr kMovMvnPattern; -extern const Instr kMovMvnFlip; -extern const Instr kMovLeaveCCMask; -extern const Instr kMovLeaveCCPattern; -extern const Instr kMovwMask; -extern const Instr kMovwPattern; -extern const Instr kMovwLeaveCCFlip; -extern const Instr kCmpCmnMask; -extern const Instr kCmpCmnPattern; -extern const Instr kCmpCmnFlip; -extern const Instr kAddSubFlip; -extern const Instr kAndBicFlip; - -// A mask for the Rd register for push, pop, ldr, str instructions. -extern const Instr kLdrRegFpOffsetPattern; - -extern const Instr kStrRegFpOffsetPattern; - -extern const Instr kLdrRegFpNegOffsetPattern; - -extern const Instr kStrRegFpNegOffsetPattern; - -extern const Instr kLdrStrInstrTypeMask; -extern const Instr kLdrStrInstrArgumentMask; -extern const Instr kLdrStrOffsetMask; - - // ----------------------------------------------------------------------------- // Instruction abstraction. @@ -626,6 +568,7 @@ class Instruction { inline int Immed4Value() const { return Bits(19, 16); } inline int ImmedMovwMovtValue() const { return Immed4Value() << 12 | Offset12Value(); } + DECLARE_STATIC_ACCESSOR(ImmedMovwMovtValue); // Fields used in Load/Store instructions inline int PUValue() const { return Bits(24, 23); } diff --git a/deps/v8/src/arm/cpu-arm.cc b/deps/v8/src/arm/cpu-arm.cc index 083d9b39eba..9c7104eb95a 100644 --- a/deps/v8/src/arm/cpu-arm.cc +++ b/deps/v8/src/arm/cpu-arm.cc @@ -12,22 +12,20 @@ #endif #endif -#include "v8.h" +#include "src/v8.h" #if V8_TARGET_ARCH_ARM -#include "cpu.h" -#include "macro-assembler.h" -#include "simulator.h" // for cache flushing. +#include "src/assembler.h" +#include "src/macro-assembler.h" +#include "src/simulator.h" // for cache flushing. namespace v8 { namespace internal { -void CPU::FlushICache(void* start, size_t size) { - // Nothing to do flushing no instructions. - if (size == 0) { - return; - } + +void CpuFeatures::FlushICache(void* start, size_t size) { + if (size == 0) return; #if defined(USE_SIMULATOR) // Not generating ARM instructions for C-code. This means that we are @@ -36,47 +34,31 @@ void CPU::FlushICache(void* start, size_t size) { // None of this code ends up in the snapshot so there are no issues // around whether or not to generate the code when building snapshots. 
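// (Under USE_SIMULATOR the flush is forwarded to the simulator's software
// instruction-cache model; on QNX it is an msync(); elsewhere the rewritten
// inline assembly below invokes the __ARM_NR_cacheflush kernel service
// directly, in a form that now assembles for both ARM and Thumb targets.)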
Simulator::FlushICache(Isolate::Current()->simulator_i_cache(), start, size); + #elif V8_OS_QNX msync(start, size, MS_SYNC | MS_INVALIDATE_ICACHE); + #else - // Ideally, we would call - // syscall(__ARM_NR_cacheflush, start, - // reinterpret_cast<intptr_t>(start) + size, 0); - // however, syscall(int, ...) is not supported on all platforms, especially - // not when using EABI, so we call the __ARM_NR_cacheflush syscall directly. + register uint32_t beg asm("r0") = reinterpret_cast<uint32_t>(start); + register uint32_t end asm("r1") = beg + size; + register uint32_t flg asm("r2") = 0; + + asm volatile( + // This assembly works for both ARM and Thumb targets. + + // Preserve r7; it is callee-saved, and GCC uses it as a frame pointer for + // Thumb targets. + " push {r7}\n" + // r0 = beg + // r1 = end + // r2 = flags (0) + " ldr r7, =%c[scno]\n" // r7 = syscall number + " svc 0\n" - register uint32_t beg asm("a1") = reinterpret_cast<uint32_t>(start); - register uint32_t end asm("a2") = - reinterpret_cast<uint32_t>(start) + size; - register uint32_t flg asm("a3") = 0; - #if defined (__arm__) && !defined(__thumb__) - // __arm__ may be defined in thumb mode. - register uint32_t scno asm("r7") = __ARM_NR_cacheflush; - asm volatile( - "svc 0x0" - : "=r" (beg) - : "0" (beg), "r" (end), "r" (flg), "r" (scno)); - #else - // r7 is reserved by the EABI in thumb mode. - asm volatile( - "@ Enter ARM Mode \n\t" - "adr r3, 1f \n\t" - "bx r3 \n\t" - ".ALIGN 4 \n\t" - ".ARM \n" - "1: push {r7} \n\t" - "mov r7, %4 \n\t" - "svc 0x0 \n\t" - "pop {r7} \n\t" - "@ Enter THUMB Mode\n\t" - "adr r3, 2f+1 \n\t" - "bx r3 \n\t" - ".THUMB \n" - "2: \n\t" - : "=r" (beg) - : "0" (beg), "r" (end), "r" (flg), "r" (__ARM_NR_cacheflush) - : "r3"); - #endif + " pop {r7}\n" + : + : "r" (beg), "r" (end), "r" (flg), [scno] "i" (__ARM_NR_cacheflush) + : "memory"); #endif } diff --git a/deps/v8/src/arm/debug-arm.cc b/deps/v8/src/arm/debug-arm.cc index c3270f0bcd2..ec98a7ed362 100644 --- a/deps/v8/src/arm/debug-arm.cc +++ b/deps/v8/src/arm/debug-arm.cc @@ -2,12 +2,12 @@ // Use of this source code is governed by a BSD-style license that can be // found in the LICENSE file. -#include "v8.h" +#include "src/v8.h" #if V8_TARGET_ARCH_ARM -#include "codegen.h" -#include "debug.h" +#include "src/codegen.h" +#include "src/debug.h" namespace v8 { namespace internal { @@ -27,7 +27,7 @@ void BreakLocationIterator::SetDebugBreakAtReturn() { // ldr ip, [pc, #0] // blx ip // <debug break return code entry point address> - // bktp 0 + // bkpt 0 CodePatcher patcher(rinfo()->pc(), Assembler::kJSReturnSequenceInstructions); patcher.masm()->ldr(v8::internal::ip, MemOperand(v8::internal::pc, 0)); patcher.masm()->blx(v8::internal::ip); @@ -47,20 +47,20 @@ void BreakLocationIterator::ClearDebugBreakAtReturn() { // A debug break in the frame exit code is identified by the JS frame exit code // having been patched with a call instruction. bool Debug::IsDebugBreakAtReturn(RelocInfo* rinfo) { - ASSERT(RelocInfo::IsJSReturn(rinfo->rmode())); + DCHECK(RelocInfo::IsJSReturn(rinfo->rmode())); return rinfo->IsPatchedReturnSequence(); } bool BreakLocationIterator::IsDebugBreakAtSlot() { - ASSERT(IsDebugBreakSlot()); + DCHECK(IsDebugBreakSlot()); // Check whether the debug break slot instructions have been patched. 
return rinfo()->IsPatchedDebugBreakSlotSequence(); } void BreakLocationIterator::SetDebugBreakAtSlot() { - ASSERT(IsDebugBreakSlot()); + DCHECK(IsDebugBreakSlot()); // Patch the code changing the debug break slot code from // mov r2, r2 // mov r2, r2 @@ -78,13 +78,11 @@ void BreakLocationIterator::SetDebugBreakAtSlot() { void BreakLocationIterator::ClearDebugBreakAtSlot() { - ASSERT(IsDebugBreakSlot()); + DCHECK(IsDebugBreakSlot()); rinfo()->PatchCode(original_rinfo()->pc(), Assembler::kDebugBreakSlotInstructions); } -const bool Debug::FramePaddingLayout::kIsSupported = false; - #define __ ACCESS_MASM(masm) @@ -95,12 +93,20 @@ static void Generate_DebugBreakCallHelper(MacroAssembler* masm, { FrameAndConstantPoolScope scope(masm, StackFrame::INTERNAL); + // Load padding words on stack. + __ mov(ip, Operand(Smi::FromInt(LiveEdit::kFramePaddingValue))); + for (int i = 0; i < LiveEdit::kFramePaddingInitialSize; i++) { + __ push(ip); + } + __ mov(ip, Operand(Smi::FromInt(LiveEdit::kFramePaddingInitialSize))); + __ push(ip); + // Store the registers containing live values on the expression stack to // make sure that these are correctly updated during GC. Non object values // are stored as a smi causing it to be untouched by GC. - ASSERT((object_regs & ~kJSCallerSaved) == 0); - ASSERT((non_object_regs & ~kJSCallerSaved) == 0); - ASSERT((object_regs & non_object_regs) == 0); + DCHECK((object_regs & ~kJSCallerSaved) == 0); + DCHECK((non_object_regs & ~kJSCallerSaved) == 0); + DCHECK((object_regs & non_object_regs) == 0); if ((object_regs | non_object_regs) != 0) { for (int i = 0; i < kNumJSCallerSaved; i++) { int r = JSCallerSavedCode(i); @@ -141,6 +147,9 @@ static void Generate_DebugBreakCallHelper(MacroAssembler* masm, } } + // Don't bother removing padding bytes pushed on the stack + // as the frame is going to be restored right away. + // Leave the internal frame. } @@ -148,14 +157,14 @@ static void Generate_DebugBreakCallHelper(MacroAssembler* masm, // jumping to the target address intended by the caller and that was // overwritten by the address of DebugBreakXXX. ExternalReference after_break_target = - ExternalReference(Debug_Address::AfterBreakTarget(), masm->isolate()); + ExternalReference::debug_after_break_target_address(masm->isolate()); __ mov(ip, Operand(after_break_target)); __ ldr(ip, MemOperand(ip)); __ Jump(ip); } -void Debug::GenerateCallICStubDebugBreak(MacroAssembler* masm) { +void DebugCodegen::GenerateCallICStubDebugBreak(MacroAssembler* masm) { // Register state for CallICStub // ----------- S t a t e ------------- // -- r1 : function @@ -165,54 +174,41 @@ void Debug::GenerateCallICStubDebugBreak(MacroAssembler* masm) { } -void Debug::GenerateLoadICDebugBreak(MacroAssembler* masm) { +void DebugCodegen::GenerateLoadICDebugBreak(MacroAssembler* masm) { // Calling convention for IC load (from ic-arm.cc). - // ----------- S t a t e ------------- - // -- r2 : name - // -- lr : return address - // -- r0 : receiver - // -- [sp] : receiver - // ----------------------------------- - // Registers r0 and r2 contain objects that need to be pushed on the - // expression stack of the fake JS frame. 
- Generate_DebugBreakCallHelper(masm, r0.bit() | r2.bit(), 0); + Register receiver = LoadIC::ReceiverRegister(); + Register name = LoadIC::NameRegister(); + Generate_DebugBreakCallHelper(masm, receiver.bit() | name.bit(), 0); } -void Debug::GenerateStoreICDebugBreak(MacroAssembler* masm) { +void DebugCodegen::GenerateStoreICDebugBreak(MacroAssembler* masm) { // Calling convention for IC store (from ic-arm.cc). - // ----------- S t a t e ------------- - // -- r0 : value - // -- r1 : receiver - // -- r2 : name - // -- lr : return address - // ----------------------------------- - // Registers r0, r1, and r2 contain objects that need to be pushed on the - // expression stack of the fake JS frame. - Generate_DebugBreakCallHelper(masm, r0.bit() | r1.bit() | r2.bit(), 0); + Register receiver = StoreIC::ReceiverRegister(); + Register name = StoreIC::NameRegister(); + Register value = StoreIC::ValueRegister(); + Generate_DebugBreakCallHelper( + masm, receiver.bit() | name.bit() | value.bit(), 0); } -void Debug::GenerateKeyedLoadICDebugBreak(MacroAssembler* masm) { - // ---------- S t a t e -------------- - // -- lr : return address - // -- r0 : key - // -- r1 : receiver - Generate_DebugBreakCallHelper(masm, r0.bit() | r1.bit(), 0); +void DebugCodegen::GenerateKeyedLoadICDebugBreak(MacroAssembler* masm) { + // Calling convention for keyed IC load (from ic-arm.cc). + GenerateLoadICDebugBreak(masm); } -void Debug::GenerateKeyedStoreICDebugBreak(MacroAssembler* masm) { - // ---------- S t a t e -------------- - // -- r0 : value - // -- r1 : key - // -- r2 : receiver - // -- lr : return address - Generate_DebugBreakCallHelper(masm, r0.bit() | r1.bit() | r2.bit(), 0); +void DebugCodegen::GenerateKeyedStoreICDebugBreak(MacroAssembler* masm) { + // Calling convention for IC keyed store call (from ic-arm.cc). + Register receiver = KeyedStoreIC::ReceiverRegister(); + Register name = KeyedStoreIC::NameRegister(); + Register value = KeyedStoreIC::ValueRegister(); + Generate_DebugBreakCallHelper( + masm, receiver.bit() | name.bit() | value.bit(), 0); } -void Debug::GenerateCompareNilICDebugBreak(MacroAssembler* masm) { +void DebugCodegen::GenerateCompareNilICDebugBreak(MacroAssembler* masm) { // Register state for CompareNil IC // ----------- S t a t e ------------- // -- r0 : value @@ -221,7 +217,7 @@ void Debug::GenerateCompareNilICDebugBreak(MacroAssembler* masm) { } -void Debug::GenerateReturnDebugBreak(MacroAssembler* masm) { +void DebugCodegen::GenerateReturnDebugBreak(MacroAssembler* masm) { // In places other than IC call sites it is expected that r0 is TOS which // is an object - this is not generally the case so this should be used with // care. @@ -229,7 +225,7 @@ void Debug::GenerateReturnDebugBreak(MacroAssembler* masm) { } -void Debug::GenerateCallFunctionStubDebugBreak(MacroAssembler* masm) { +void DebugCodegen::GenerateCallFunctionStubDebugBreak(MacroAssembler* masm) { // Register state for CallFunctionStub (from code-stubs-arm.cc). 
// ----------- S t a t e ------------- // -- r1 : function @@ -238,7 +234,7 @@ void Debug::GenerateCallFunctionStubDebugBreak(MacroAssembler* masm) { } -void Debug::GenerateCallConstructStubDebugBreak(MacroAssembler* masm) { +void DebugCodegen::GenerateCallConstructStubDebugBreak(MacroAssembler* masm) { // Calling convention for CallConstructStub (from code-stubs-arm.cc) // ----------- S t a t e ------------- // -- r0 : number of arguments (not smi) @@ -248,7 +244,8 @@ void Debug::GenerateCallConstructStubDebugBreak(MacroAssembler* masm) { } -void Debug::GenerateCallConstructStubRecordDebugBreak(MacroAssembler* masm) { +void DebugCodegen::GenerateCallConstructStubRecordDebugBreak( + MacroAssembler* masm) { // Calling convention for CallConstructStub (from code-stubs-arm.cc) // ----------- S t a t e ------------- // -- r0 : number of arguments (not smi) @@ -260,7 +257,7 @@ void Debug::GenerateCallConstructStubRecordDebugBreak(MacroAssembler* masm) { } -void Debug::GenerateSlot(MacroAssembler* masm) { +void DebugCodegen::GenerateSlot(MacroAssembler* masm) { // Generate enough nop's to make space for a call instruction. Avoid emitting // the constant pool in the debug break slot code. Assembler::BlockConstPoolScope block_const_pool(masm); @@ -270,28 +267,55 @@ void Debug::GenerateSlot(MacroAssembler* masm) { for (int i = 0; i < Assembler::kDebugBreakSlotInstructions; i++) { __ nop(MacroAssembler::DEBUG_BREAK_NOP); } - ASSERT_EQ(Assembler::kDebugBreakSlotInstructions, + DCHECK_EQ(Assembler::kDebugBreakSlotInstructions, masm->InstructionsGeneratedSince(&check_codesize)); } -void Debug::GenerateSlotDebugBreak(MacroAssembler* masm) { +void DebugCodegen::GenerateSlotDebugBreak(MacroAssembler* masm) { // In the places where a debug break slot is inserted no registers can contain // object pointers. Generate_DebugBreakCallHelper(masm, 0, 0); } -void Debug::GeneratePlainReturnLiveEdit(MacroAssembler* masm) { - masm->Abort(kLiveEditFrameDroppingIsNotSupportedOnArm); +void DebugCodegen::GeneratePlainReturnLiveEdit(MacroAssembler* masm) { + __ Ret(); } -void Debug::GenerateFrameDropperLiveEdit(MacroAssembler* masm) { - masm->Abort(kLiveEditFrameDroppingIsNotSupportedOnArm); +void DebugCodegen::GenerateFrameDropperLiveEdit(MacroAssembler* masm) { + ExternalReference restarter_frame_function_slot = + ExternalReference::debug_restarter_frame_function_pointer_address( + masm->isolate()); + __ mov(ip, Operand(restarter_frame_function_slot)); + __ mov(r1, Operand::Zero()); + __ str(r1, MemOperand(ip, 0)); + + // Load the function pointer off of our current stack frame. + __ ldr(r1, MemOperand(fp, + StandardFrameConstants::kConstantPoolOffset - kPointerSize)); + + // Pop return address, frame and constant pool pointer (if + // FLAG_enable_ool_constant_pool). + __ LeaveFrame(StackFrame::INTERNAL); + + { ConstantPoolUnavailableScope constant_pool_unavailable(masm); + // Load context from the function. + __ ldr(cp, FieldMemOperand(r1, JSFunction::kContextOffset)); + + // Get function code. + __ ldr(ip, FieldMemOperand(r1, JSFunction::kSharedFunctionInfoOffset)); + __ ldr(ip, FieldMemOperand(ip, SharedFunctionInfo::kCodeOffset)); + __ add(ip, ip, Operand(Code::kHeaderSize - kHeapObjectTag)); + + // Re-run JSFunction, r1 is function, cp is context. 
+ __ Jump(ip); + } } -const bool Debug::kFrameDropperSupported = false; + +const bool LiveEdit::kFrameDropperSupported = true; #undef __ diff --git a/deps/v8/src/arm/deoptimizer-arm.cc b/deps/v8/src/arm/deoptimizer-arm.cc index aa98c8b75f7..df2c0984563 100644 --- a/deps/v8/src/arm/deoptimizer-arm.cc +++ b/deps/v8/src/arm/deoptimizer-arm.cc @@ -2,17 +2,17 @@ // Use of this source code is governed by a BSD-style license that can be // found in the LICENSE file. -#include "v8.h" +#include "src/v8.h" -#include "codegen.h" -#include "deoptimizer.h" -#include "full-codegen.h" -#include "safepoint-table.h" +#include "src/codegen.h" +#include "src/deoptimizer.h" +#include "src/full-codegen.h" +#include "src/safepoint-table.h" namespace v8 { namespace internal { -const int Deoptimizer::table_entry_size_ = 12; +const int Deoptimizer::table_entry_size_ = 8; int Deoptimizer::patch_size() { @@ -49,9 +49,6 @@ void Deoptimizer::PatchCodeForDeoptimization(Isolate* isolate, Code* code) { DeoptimizationInputData* deopt_data = DeoptimizationInputData::cast(code->deoptimization_data()); - SharedFunctionInfo* shared = - SharedFunctionInfo::cast(deopt_data->SharedFunctionInfo()); - shared->EvictFromOptimizedCodeMap(code, "deoptimized code"); #ifdef DEBUG Address prev_call_address = NULL; #endif @@ -68,13 +65,13 @@ void Deoptimizer::PatchCodeForDeoptimization(Isolate* isolate, Code* code) { deopt_entry, RelocInfo::NONE32); int call_size_in_words = call_size_in_bytes / Assembler::kInstrSize; - ASSERT(call_size_in_bytes % Assembler::kInstrSize == 0); - ASSERT(call_size_in_bytes <= patch_size()); + DCHECK(call_size_in_bytes % Assembler::kInstrSize == 0); + DCHECK(call_size_in_bytes <= patch_size()); CodePatcher patcher(call_address, call_size_in_words); patcher.masm()->Call(deopt_entry, RelocInfo::NONE32); - ASSERT(prev_call_address == NULL || + DCHECK(prev_call_address == NULL || call_address >= prev_call_address + patch_size()); - ASSERT(call_address + patch_size() <= code->instruction_end()); + DCHECK(call_address + patch_size() <= code->instruction_end()); #ifdef DEBUG prev_call_address = call_address; #endif @@ -105,7 +102,7 @@ void Deoptimizer::FillInputFrame(Address tos, JavaScriptFrame* frame) { void Deoptimizer::SetPlatformCompiledStubRegisters( FrameDescription* output_frame, CodeStubInterfaceDescriptor* descriptor) { - ApiFunction function(descriptor->deoptimization_handler_); + ApiFunction function(descriptor->deoptimization_handler()); ExternalReference xref(&function, ExternalReference::BUILTIN_CALL, isolate_); intptr_t handler = reinterpret_cast<intptr_t>(xref.address()); int params = descriptor->GetHandlerParameterCount(); @@ -128,11 +125,6 @@ bool Deoptimizer::HasAlignmentPadding(JSFunction* function) { } -Code* Deoptimizer::NotifyStubFailureBuiltin() { - return isolate_->builtins()->builtin(Builtins::kNotifyStubFailureSaveDoubles); -} - - #define __ masm()-> // This code tries to be close to ia32 code so that any changes can be @@ -150,8 +142,8 @@ void Deoptimizer::EntryGenerator::Generate() { kDoubleSize * DwVfpRegister::kMaxNumAllocatableRegisters; // Save all allocatable VFP registers before messing with them. - ASSERT(kDoubleRegZero.code() == 14); - ASSERT(kScratchDoubleReg.code() == 15); + DCHECK(kDoubleRegZero.code() == 14); + DCHECK(kScratchDoubleReg.code() == 15); // Check CPU flags for number of registers, setting the Z condition flag. 
__ CheckFor32DRegs(ip); @@ -202,7 +194,7 @@ void Deoptimizer::EntryGenerator::Generate() { __ ldr(r1, MemOperand(r0, Deoptimizer::input_offset())); // Copy core registers into FrameDescription::registers_[kNumRegisters]. - ASSERT(Register::kNumRegisters == kNumberOfRegisters); + DCHECK(Register::kNumRegisters == kNumberOfRegisters); for (int i = 0; i < kNumberOfRegisters; i++) { int offset = (i * kPointerSize) + FrameDescription::registers_offset(); __ ldr(r2, MemOperand(sp, i * kPointerSize)); @@ -333,11 +325,11 @@ void Deoptimizer::TableEntryGenerator::GeneratePrologue() { int start = masm()->pc_offset(); USE(start); __ mov(ip, Operand(i)); - __ push(ip); __ b(&done); - ASSERT(masm()->pc_offset() - start == table_entry_size_); + DCHECK(masm()->pc_offset() - start == table_entry_size_); } __ bind(&done); + __ push(ip); } @@ -352,7 +344,7 @@ void FrameDescription::SetCallerFp(unsigned offset, intptr_t value) { void FrameDescription::SetCallerConstantPool(unsigned offset, intptr_t value) { - ASSERT(FLAG_enable_ool_constant_pool); + DCHECK(FLAG_enable_ool_constant_pool); SetFrameSlot(offset, value); } diff --git a/deps/v8/src/arm/disasm-arm.cc b/deps/v8/src/arm/disasm-arm.cc index 0a5d5b0d396..85977b186e8 100644 --- a/deps/v8/src/arm/disasm-arm.cc +++ b/deps/v8/src/arm/disasm-arm.cc @@ -24,18 +24,18 @@ #include <assert.h> -#include <stdio.h> #include <stdarg.h> +#include <stdio.h> #include <string.h> -#include "v8.h" +#include "src/v8.h" #if V8_TARGET_ARCH_ARM -#include "constants-arm.h" -#include "disasm.h" -#include "macro-assembler.h" -#include "platform.h" +#include "src/arm/constants-arm.h" +#include "src/base/platform/platform.h" +#include "src/disasm.h" +#include "src/macro-assembler.h" namespace v8 { @@ -207,15 +207,15 @@ void Decoder::PrintShiftRm(Instruction* instr) { } else if (((shift == LSR) || (shift == ASR)) && (shift_amount == 0)) { shift_amount = 32; } - out_buffer_pos_ += OS::SNPrintF(out_buffer_ + out_buffer_pos_, - ", %s #%d", - shift_names[shift_index], - shift_amount); + out_buffer_pos_ += SNPrintF(out_buffer_ + out_buffer_pos_, + ", %s #%d", + shift_names[shift_index], + shift_amount); } else { // by register int rs = instr->RsValue(); - out_buffer_pos_ += OS::SNPrintF(out_buffer_ + out_buffer_pos_, - ", %s ", shift_names[shift_index]); + out_buffer_pos_ += SNPrintF(out_buffer_ + out_buffer_pos_, + ", %s ", shift_names[shift_index]); PrintRegister(rs); } } @@ -227,8 +227,7 @@ void Decoder::PrintShiftImm(Instruction* instr) { int rotate = instr->RotateValue() * 2; int immed8 = instr->Immed8Value(); int imm = (immed8 >> rotate) | (immed8 << (32 - rotate)); - out_buffer_pos_ += OS::SNPrintF(out_buffer_ + out_buffer_pos_, - "#%d", imm); + out_buffer_pos_ += SNPrintF(out_buffer_ + out_buffer_pos_, "#%d", imm); } @@ -236,10 +235,10 @@ void Decoder::PrintShiftImm(Instruction* instr) { void Decoder::PrintShiftSat(Instruction* instr) { int shift = instr->Bits(11, 7); if (shift > 0) { - out_buffer_pos_ += OS::SNPrintF(out_buffer_ + out_buffer_pos_, - ", %s #%d", - shift_names[instr->Bit(6) * 2], - instr->Bits(11, 7)); + out_buffer_pos_ += SNPrintF(out_buffer_ + out_buffer_pos_, + ", %s #%d", + shift_names[instr->Bit(6) * 2], + instr->Bits(11, 7)); } } @@ -283,14 +282,14 @@ void Decoder::PrintSoftwareInterrupt(SoftwareInterruptCodes svc) { return; default: if (svc >= kStopCode) { - out_buffer_pos_ += OS::SNPrintF(out_buffer_ + out_buffer_pos_, - "%d - 0x%x", - svc & kStopCodeMask, - svc & kStopCodeMask); + out_buffer_pos_ += SNPrintF(out_buffer_ + out_buffer_pos_, + "%d - 0x%x", 
+ svc & kStopCodeMask, + svc & kStopCodeMask); } else { - out_buffer_pos_ += OS::SNPrintF(out_buffer_ + out_buffer_pos_, - "%d", - svc); + out_buffer_pos_ += SNPrintF(out_buffer_ + out_buffer_pos_, + "%d", + svc); } return; } @@ -300,7 +299,7 @@ void Decoder::PrintSoftwareInterrupt(SoftwareInterruptCodes svc) { // Handle all register based formatting in this function to reduce the // complexity of FormatOption. int Decoder::FormatRegister(Instruction* instr, const char* format) { - ASSERT(format[0] == 'r'); + DCHECK(format[0] == 'r'); if (format[1] == 'n') { // 'rn: Rn register int reg = instr->RnValue(); PrintRegister(reg); @@ -323,7 +322,7 @@ int Decoder::FormatRegister(Instruction* instr, const char* format) { return 2; } else if (format[1] == 'l') { // 'rlist: register list for load and store multiple instructions - ASSERT(STRING_STARTS_WITH(format, "rlist")); + DCHECK(STRING_STARTS_WITH(format, "rlist")); int rlist = instr->RlistValue(); int reg = 0; Print("{"); @@ -349,7 +348,7 @@ int Decoder::FormatRegister(Instruction* instr, const char* format) { // Handle all VFP register based formatting in this function to reduce the // complexity of FormatOption. int Decoder::FormatVFPRegister(Instruction* instr, const char* format) { - ASSERT((format[0] == 'S') || (format[0] == 'D')); + DCHECK((format[0] == 'S') || (format[0] == 'D')); VFPRegPrecision precision = format[0] == 'D' ? kDoublePrecision : kSinglePrecision; @@ -399,35 +398,35 @@ int Decoder::FormatVFPinstruction(Instruction* instr, const char* format) { void Decoder::FormatNeonList(int Vd, int type) { if (type == nlt_1) { - out_buffer_pos_ += OS::SNPrintF(out_buffer_ + out_buffer_pos_, - "{d%d}", Vd); + out_buffer_pos_ += SNPrintF(out_buffer_ + out_buffer_pos_, + "{d%d}", Vd); } else if (type == nlt_2) { - out_buffer_pos_ += OS::SNPrintF(out_buffer_ + out_buffer_pos_, - "{d%d, d%d}", Vd, Vd + 1); + out_buffer_pos_ += SNPrintF(out_buffer_ + out_buffer_pos_, + "{d%d, d%d}", Vd, Vd + 1); } else if (type == nlt_3) { - out_buffer_pos_ += OS::SNPrintF(out_buffer_ + out_buffer_pos_, - "{d%d, d%d, d%d}", Vd, Vd + 1, Vd + 2); + out_buffer_pos_ += SNPrintF(out_buffer_ + out_buffer_pos_, + "{d%d, d%d, d%d}", Vd, Vd + 1, Vd + 2); } else if (type == nlt_4) { - out_buffer_pos_ += OS::SNPrintF(out_buffer_ + out_buffer_pos_, - "{d%d, d%d, d%d, d%d}", Vd, Vd + 1, Vd + 2, Vd + 3); + out_buffer_pos_ += SNPrintF(out_buffer_ + out_buffer_pos_, + "{d%d, d%d, d%d, d%d}", Vd, Vd + 1, Vd + 2, Vd + 3); } } void Decoder::FormatNeonMemory(int Rn, int align, int Rm) { - out_buffer_pos_ += OS::SNPrintF(out_buffer_ + out_buffer_pos_, - "[r%d", Rn); + out_buffer_pos_ += SNPrintF(out_buffer_ + out_buffer_pos_, + "[r%d", Rn); if (align != 0) { - out_buffer_pos_ += OS::SNPrintF(out_buffer_ + out_buffer_pos_, - ":%d", (1 << align) << 6); + out_buffer_pos_ += SNPrintF(out_buffer_ + out_buffer_pos_, + ":%d", (1 << align) << 6); } if (Rm == 15) { Print("]"); } else if (Rm == 13) { Print("]!"); } else { - out_buffer_pos_ += OS::SNPrintF(out_buffer_ + out_buffer_pos_, - "], r%d", Rm); + out_buffer_pos_ += SNPrintF(out_buffer_ + out_buffer_pos_, + "], r%d", Rm); } } @@ -437,8 +436,7 @@ void Decoder::PrintMovwMovt(Instruction* instr) { int imm = instr->ImmedMovwMovtValue(); int rd = instr->RdValue(); PrintRegister(rd); - out_buffer_pos_ += OS::SNPrintF(out_buffer_ + out_buffer_pos_, - ", #%d", imm); + out_buffer_pos_ += SNPrintF(out_buffer_ + out_buffer_pos_, ", #%d", imm); } @@ -464,14 +462,13 @@ int Decoder::FormatOption(Instruction* instr, const char* format) { return 1; 
} case 'c': { // 'cond: conditional execution - ASSERT(STRING_STARTS_WITH(format, "cond")); + DCHECK(STRING_STARTS_WITH(format, "cond")); PrintCondition(instr); return 4; } case 'd': { // 'd: vmov double immediate. double d = instr->DoubleImmedVmov(); - out_buffer_pos_ += OS::SNPrintF(out_buffer_ + out_buffer_pos_, - "#%g", d); + out_buffer_pos_ += SNPrintF(out_buffer_ + out_buffer_pos_, "#%g", d); return 1; } case 'f': { // 'f: bitfield instructions - v7 and above. @@ -481,11 +478,11 @@ int Decoder::FormatOption(Instruction* instr, const char* format) { // BFC/BFI: // Bits 20-16 represent most-significant bit. Covert to width. width -= lsbit; - ASSERT(width > 0); + DCHECK(width > 0); } - ASSERT((width + lsbit) <= 32); - out_buffer_pos_ += OS::SNPrintF(out_buffer_ + out_buffer_pos_, - "#%d, #%d", lsbit, width); + DCHECK((width + lsbit) <= 32); + out_buffer_pos_ += SNPrintF(out_buffer_ + out_buffer_pos_, + "#%d, #%d", lsbit, width); return 1; } case 'h': { // 'h: halfword operation for extra loads and stores @@ -501,13 +498,13 @@ int Decoder::FormatOption(Instruction* instr, const char* format) { int width = (format[3] - '0') * 10 + (format[4] - '0'); int lsb = (format[6] - '0') * 10 + (format[7] - '0'); - ASSERT((width >= 1) && (width <= 32)); - ASSERT((lsb >= 0) && (lsb <= 31)); - ASSERT((width + lsb) <= 32); + DCHECK((width >= 1) && (width <= 32)); + DCHECK((lsb >= 0) && (lsb <= 31)); + DCHECK((width + lsb) <= 32); - out_buffer_pos_ += OS::SNPrintF(out_buffer_ + out_buffer_pos_, - "%d", - instr->Bits(width + lsb - 1, lsb)); + out_buffer_pos_ += SNPrintF(out_buffer_ + out_buffer_pos_, + "%d", + instr->Bits(width + lsb - 1, lsb)); return 8; } case 'l': { // 'l: branch and link @@ -523,7 +520,7 @@ int Decoder::FormatOption(Instruction* instr, const char* format) { return 2; } if (format[1] == 'e') { // 'memop: load/store instructions. - ASSERT(STRING_STARTS_WITH(format, "memop")); + DCHECK(STRING_STARTS_WITH(format, "memop")); if (instr->HasL()) { Print("ldr"); } else { @@ -541,38 +538,37 @@ int Decoder::FormatOption(Instruction* instr, const char* format) { return 5; } // 'msg: for simulator break instructions - ASSERT(STRING_STARTS_WITH(format, "msg")); + DCHECK(STRING_STARTS_WITH(format, "msg")); byte* str = reinterpret_cast<byte*>(instr->InstructionBits() & 0x0fffffff); - out_buffer_pos_ += OS::SNPrintF(out_buffer_ + out_buffer_pos_, - "%s", converter_.NameInCode(str)); + out_buffer_pos_ += SNPrintF(out_buffer_ + out_buffer_pos_, + "%s", converter_.NameInCode(str)); return 3; } case 'o': { if ((format[3] == '1') && (format[4] == '2')) { // 'off12: 12-bit offset for load and store instructions - ASSERT(STRING_STARTS_WITH(format, "off12")); - out_buffer_pos_ += OS::SNPrintF(out_buffer_ + out_buffer_pos_, - "%d", instr->Offset12Value()); + DCHECK(STRING_STARTS_WITH(format, "off12")); + out_buffer_pos_ += SNPrintF(out_buffer_ + out_buffer_pos_, + "%d", instr->Offset12Value()); return 5; } else if (format[3] == '0') { // 'off0to3and8to19 16-bit immediate encoded in bits 19-8 and 3-0. 
- ASSERT(STRING_STARTS_WITH(format, "off0to3and8to19")); - out_buffer_pos_ += OS::SNPrintF(out_buffer_ + out_buffer_pos_, - "%d", - (instr->Bits(19, 8) << 4) + - instr->Bits(3, 0)); + DCHECK(STRING_STARTS_WITH(format, "off0to3and8to19")); + out_buffer_pos_ += SNPrintF(out_buffer_ + out_buffer_pos_, + "%d", + (instr->Bits(19, 8) << 4) + + instr->Bits(3, 0)); return 15; } // 'off8: 8-bit offset for extra load and store instructions - ASSERT(STRING_STARTS_WITH(format, "off8")); + DCHECK(STRING_STARTS_WITH(format, "off8")); int offs8 = (instr->ImmedHValue() << 4) | instr->ImmedLValue(); - out_buffer_pos_ += OS::SNPrintF(out_buffer_ + out_buffer_pos_, - "%d", offs8); + out_buffer_pos_ += SNPrintF(out_buffer_ + out_buffer_pos_, "%d", offs8); return 4; } case 'p': { // 'pu: P and U bits for load and store instructions - ASSERT(STRING_STARTS_WITH(format, "pu")); + DCHECK(STRING_STARTS_WITH(format, "pu")); PrintPU(instr); return 2; } @@ -582,29 +578,29 @@ int Decoder::FormatOption(Instruction* instr, const char* format) { case 's': { if (format[1] == 'h') { // 'shift_op or 'shift_rm or 'shift_sat. if (format[6] == 'o') { // 'shift_op - ASSERT(STRING_STARTS_WITH(format, "shift_op")); + DCHECK(STRING_STARTS_WITH(format, "shift_op")); if (instr->TypeValue() == 0) { PrintShiftRm(instr); } else { - ASSERT(instr->TypeValue() == 1); + DCHECK(instr->TypeValue() == 1); PrintShiftImm(instr); } return 8; } else if (format[6] == 's') { // 'shift_sat. - ASSERT(STRING_STARTS_WITH(format, "shift_sat")); + DCHECK(STRING_STARTS_WITH(format, "shift_sat")); PrintShiftSat(instr); return 9; } else { // 'shift_rm - ASSERT(STRING_STARTS_WITH(format, "shift_rm")); + DCHECK(STRING_STARTS_WITH(format, "shift_rm")); PrintShiftRm(instr); return 8; } } else if (format[1] == 'v') { // 'svc - ASSERT(STRING_STARTS_WITH(format, "svc")); + DCHECK(STRING_STARTS_WITH(format, "svc")); PrintSoftwareInterrupt(instr->SvcValue()); return 3; } else if (format[1] == 'i') { // 'sign: signed extra loads and stores - ASSERT(STRING_STARTS_WITH(format, "sign")); + DCHECK(STRING_STARTS_WITH(format, "sign")); if (instr->HasSign()) { Print("s"); } @@ -617,13 +613,13 @@ int Decoder::FormatOption(Instruction* instr, const char* format) { return 1; } case 't': { // 'target: target of branch instructions - ASSERT(STRING_STARTS_WITH(format, "target")); + DCHECK(STRING_STARTS_WITH(format, "target")); int off = (instr->SImmed24Value() << 2) + 8; - out_buffer_pos_ += OS::SNPrintF(out_buffer_ + out_buffer_pos_, - "%+d -> %s", - off, - converter_.NameOfAddress( - reinterpret_cast<byte*>(instr) + off)); + out_buffer_pos_ += SNPrintF(out_buffer_ + out_buffer_pos_, + "%+d -> %s", + off, + converter_.NameOfAddress( + reinterpret_cast<byte*>(instr) + off)); return 6; } case 'u': { // 'u: signed or unsigned multiplies @@ -1101,13 +1097,16 @@ void Decoder::DecodeType3(Instruction* instr) { } case db_x: { if (FLAG_enable_sudiv) { - if (!instr->HasW()) { - if (instr->Bits(5, 4) == 0x1) { - if ((instr->Bit(22) == 0x0) && (instr->Bit(20) == 0x1)) { + if (instr->Bits(5, 4) == 0x1) { + if ((instr->Bit(22) == 0x0) && (instr->Bit(20) == 0x1)) { + if (instr->Bit(21) == 0x1) { + // UDIV (in V8 notation matching ARM ISA format) rn = rm/rs + Format(instr, "udiv'cond'b 'rn, 'rm, 'rs"); + } else { // SDIV (in V8 notation matching ARM ISA format) rn = rm/rs Format(instr, "sdiv'cond'b 'rn, 'rm, 'rs"); - break; } + break; } } } @@ -1184,14 +1183,14 @@ int Decoder::DecodeType7(Instruction* instr) { Format(instr, "stop'cond 'svc"); // Also print the stop message. 
Its address is encoded // in the following 4 bytes. - out_buffer_pos_ += OS::SNPrintF(out_buffer_ + out_buffer_pos_, - "\n %p %08x stop message: %s", - reinterpret_cast<int32_t*>(instr - + Instruction::kInstrSize), - *reinterpret_cast<char**>(instr - + Instruction::kInstrSize), - *reinterpret_cast<char**>(instr - + Instruction::kInstrSize)); + out_buffer_pos_ += SNPrintF(out_buffer_ + out_buffer_pos_, + "\n %p %08x stop message: %s", + reinterpret_cast<void*>(instr + + Instruction::kInstrSize), + *reinterpret_cast<uint32_t*>(instr + + Instruction::kInstrSize), + *reinterpret_cast<char**>(instr + + Instruction::kInstrSize)); // We have decoded 2 * Instruction::kInstrSize bytes. return 2 * Instruction::kInstrSize; } else { @@ -1251,8 +1250,8 @@ void Decoder::DecodeTypeVFP(Instruction* instr) { // vcvt.f64.s32 Dd, Dd, #<fbits> int fraction_bits = 32 - ((instr->Bits(3, 0) << 1) | instr->Bit(5)); Format(instr, "vcvt'cond.f64.s32 'Dd, 'Dd"); - out_buffer_pos_ += OS::SNPrintF(out_buffer_ + out_buffer_pos_, - ", #%d", fraction_bits); + out_buffer_pos_ += SNPrintF(out_buffer_ + out_buffer_pos_, + ", #%d", fraction_bits); } else if (((instr->Opc2Value() >> 1) == 0x6) && (instr->Opc3Value() & 0x1)) { DecodeVCVTBetweenFloatingPointAndInteger(instr); @@ -1547,8 +1546,8 @@ void Decoder::DecodeSpecialCondition(Instruction* instr) { int Vd = (instr->Bit(22) << 3) | (instr->VdValue() >> 1); int Vm = (instr->Bit(5) << 4) | instr->VmValue(); int imm3 = instr->Bits(21, 19); - out_buffer_pos_ += OS::SNPrintF(out_buffer_ + out_buffer_pos_, - "vmovl.s%d q%d, d%d", imm3*8, Vd, Vm); + out_buffer_pos_ += SNPrintF(out_buffer_ + out_buffer_pos_, + "vmovl.s%d q%d, d%d", imm3*8, Vd, Vm); } else { Unknown(instr); } @@ -1561,8 +1560,8 @@ void Decoder::DecodeSpecialCondition(Instruction* instr) { int Vd = (instr->Bit(22) << 3) | (instr->VdValue() >> 1); int Vm = (instr->Bit(5) << 4) | instr->VmValue(); int imm3 = instr->Bits(21, 19); - out_buffer_pos_ += OS::SNPrintF(out_buffer_ + out_buffer_pos_, - "vmovl.u%d q%d, d%d", imm3*8, Vd, Vm); + out_buffer_pos_ += SNPrintF(out_buffer_ + out_buffer_pos_, + "vmovl.u%d q%d, d%d", imm3*8, Vd, Vm); } else { Unknown(instr); } @@ -1576,8 +1575,8 @@ void Decoder::DecodeSpecialCondition(Instruction* instr) { int size = instr->Bits(7, 6); int align = instr->Bits(5, 4); int Rm = instr->VmValue(); - out_buffer_pos_ += OS::SNPrintF(out_buffer_ + out_buffer_pos_, - "vst1.%d ", (1 << size) << 3); + out_buffer_pos_ += SNPrintF(out_buffer_ + out_buffer_pos_, + "vst1.%d ", (1 << size) << 3); FormatNeonList(Vd, type); Print(", "); FormatNeonMemory(Rn, align, Rm); @@ -1589,8 +1588,8 @@ void Decoder::DecodeSpecialCondition(Instruction* instr) { int size = instr->Bits(7, 6); int align = instr->Bits(5, 4); int Rm = instr->VmValue(); - out_buffer_pos_ += OS::SNPrintF(out_buffer_ + out_buffer_pos_, - "vld1.%d ", (1 << size) << 3); + out_buffer_pos_ += SNPrintF(out_buffer_ + out_buffer_pos_, + "vld1.%d ", (1 << size) << 3); FormatNeonList(Vd, type); Print(", "); FormatNeonMemory(Rn, align, Rm); @@ -1604,14 +1603,14 @@ void Decoder::DecodeSpecialCondition(Instruction* instr) { int Rn = instr->Bits(19, 16); int offset = instr->Bits(11, 0); if (offset == 0) { - out_buffer_pos_ += OS::SNPrintF(out_buffer_ + out_buffer_pos_, - "pld [r%d]", Rn); + out_buffer_pos_ += SNPrintF(out_buffer_ + out_buffer_pos_, + "pld [r%d]", Rn); } else if (instr->Bit(23) == 0) { - out_buffer_pos_ += OS::SNPrintF(out_buffer_ + out_buffer_pos_, - "pld [r%d, #-%d]", Rn, offset); + out_buffer_pos_ += SNPrintF(out_buffer_ + 
out_buffer_pos_, + "pld [r%d, #-%d]", Rn, offset); } else { - out_buffer_pos_ += OS::SNPrintF(out_buffer_ + out_buffer_pos_, - "pld [r%d, #+%d]", Rn, offset); + out_buffer_pos_ += SNPrintF(out_buffer_ + out_buffer_pos_, + "pld [r%d, #+%d]", Rn, offset); } } else { Unknown(instr); @@ -1645,26 +1644,26 @@ int Decoder::ConstantPoolSizeAt(byte* instr_ptr) { int Decoder::InstructionDecode(byte* instr_ptr) { Instruction* instr = Instruction::At(instr_ptr); // Print raw instruction bytes. - out_buffer_pos_ += OS::SNPrintF(out_buffer_ + out_buffer_pos_, - "%08x ", - instr->InstructionBits()); + out_buffer_pos_ += SNPrintF(out_buffer_ + out_buffer_pos_, + "%08x ", + instr->InstructionBits()); if (instr->ConditionField() == kSpecialCondition) { DecodeSpecialCondition(instr); return Instruction::kInstrSize; } int instruction_bits = *(reinterpret_cast<int*>(instr_ptr)); if ((instruction_bits & kConstantPoolMarkerMask) == kConstantPoolMarker) { - out_buffer_pos_ += OS::SNPrintF(out_buffer_ + out_buffer_pos_, - "constant pool begin (length %d)", - DecodeConstantPoolLength(instruction_bits)); + out_buffer_pos_ += SNPrintF(out_buffer_ + out_buffer_pos_, + "constant pool begin (length %d)", + DecodeConstantPoolLength(instruction_bits)); return Instruction::kInstrSize; } else if (instruction_bits == kCodeAgeJumpInstruction) { // The code age prologue has a constant immediatly following the jump // instruction. Instruction* target = Instruction::At(instr_ptr + Instruction::kInstrSize); DecodeType2(instr); - OS::SNPrintF(out_buffer_ + out_buffer_pos_, - " (0x%08x)", target->InstructionBits()); + SNPrintF(out_buffer_ + out_buffer_pos_, + " (0x%08x)", target->InstructionBits()); return 2 * Instruction::kInstrSize; } switch (instr->TypeValue()) { @@ -1716,7 +1715,7 @@ namespace disasm { const char* NameConverter::NameOfAddress(byte* addr) const { - v8::internal::OS::SNPrintF(tmp_buffer_, "%p", addr); + v8::internal::SNPrintF(tmp_buffer_, "%p", addr); return tmp_buffer_.start(); } diff --git a/deps/v8/src/arm/frames-arm.cc b/deps/v8/src/arm/frames-arm.cc index 605f9f42233..fde4a177492 100644 --- a/deps/v8/src/arm/frames-arm.cc +++ b/deps/v8/src/arm/frames-arm.cc @@ -2,16 +2,17 @@ // Use of this source code is governed by a BSD-style license that can be // found in the LICENSE file. 
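// (As elsewhere in this upgrade, includes become src/-rooted, with the
// platform-independent headers grouped before the arm/-specific ones.)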
-#include "v8.h" +#include "src/v8.h" #if V8_TARGET_ARCH_ARM -#include "assembler.h" -#include "assembler-arm.h" -#include "assembler-arm-inl.h" -#include "frames.h" -#include "macro-assembler.h" -#include "macro-assembler-arm.h" +#include "src/assembler.h" +#include "src/frames.h" +#include "src/macro-assembler.h" + +#include "src/arm/assembler-arm-inl.h" +#include "src/arm/assembler-arm.h" +#include "src/arm/macro-assembler-arm.h" namespace v8 { namespace internal { @@ -20,7 +21,7 @@ namespace internal { Register JavaScriptFrame::fp_register() { return v8::internal::fp; } Register JavaScriptFrame::context_register() { return cp; } Register JavaScriptFrame::constant_pool_pointer_register() { - ASSERT(FLAG_enable_ool_constant_pool); + DCHECK(FLAG_enable_ool_constant_pool); return pp; } @@ -28,13 +29,13 @@ Register JavaScriptFrame::constant_pool_pointer_register() { Register StubFailureTrampolineFrame::fp_register() { return v8::internal::fp; } Register StubFailureTrampolineFrame::context_register() { return cp; } Register StubFailureTrampolineFrame::constant_pool_pointer_register() { - ASSERT(FLAG_enable_ool_constant_pool); + DCHECK(FLAG_enable_ool_constant_pool); return pp; } Object*& ExitFrame::constant_pool_slot() const { - ASSERT(FLAG_enable_ool_constant_pool); + DCHECK(FLAG_enable_ool_constant_pool); const int offset = ExitFrameConstants::kConstantPoolOffset; return Memory::Object_at(fp() + offset); } diff --git a/deps/v8/src/arm/frames-arm.h b/deps/v8/src/arm/frames-arm.h index 6dd5186404e..ce65e887f84 100644 --- a/deps/v8/src/arm/frames-arm.h +++ b/deps/v8/src/arm/frames-arm.h @@ -29,8 +29,6 @@ const RegList kJSCallerSaved = const int kNumJSCallerSaved = 4; -typedef Object* JSCallerSavedBuffer[kNumJSCallerSaved]; - // Return the code of the n-th caller-saved register available to JavaScript // e.g. JSCallerSavedReg(0) returns r0.code() == 0 int JSCallerSavedCode(int n); diff --git a/deps/v8/src/arm/full-codegen-arm.cc b/deps/v8/src/arm/full-codegen-arm.cc index c22caa4a816..09459e4e35b 100644 --- a/deps/v8/src/arm/full-codegen-arm.cc +++ b/deps/v8/src/arm/full-codegen-arm.cc @@ -2,22 +2,22 @@ // Use of this source code is governed by a BSD-style license that can be // found in the LICENSE file. -#include "v8.h" +#include "src/v8.h" #if V8_TARGET_ARCH_ARM -#include "code-stubs.h" -#include "codegen.h" -#include "compiler.h" -#include "debug.h" -#include "full-codegen.h" -#include "isolate-inl.h" -#include "parser.h" -#include "scopes.h" -#include "stub-cache.h" +#include "src/code-stubs.h" +#include "src/codegen.h" +#include "src/compiler.h" +#include "src/debug.h" +#include "src/full-codegen.h" +#include "src/isolate-inl.h" +#include "src/parser.h" +#include "src/scopes.h" +#include "src/stub-cache.h" -#include "arm/code-stubs-arm.h" -#include "arm/macro-assembler-arm.h" +#include "src/arm/code-stubs-arm.h" +#include "src/arm/macro-assembler-arm.h" namespace v8 { namespace internal { @@ -40,13 +40,13 @@ class JumpPatchSite BASE_EMBEDDED { } ~JumpPatchSite() { - ASSERT(patch_site_.is_bound() == info_emitted_); + DCHECK(patch_site_.is_bound() == info_emitted_); } // When initially emitting this ensure that a jump is always generated to skip // the inlined smi code. 
void EmitJumpIfNotSmi(Register reg, Label* target) { - ASSERT(!patch_site_.is_bound() && !info_emitted_); + DCHECK(!patch_site_.is_bound() && !info_emitted_); Assembler::BlockConstPoolScope block_const_pool(masm_); __ bind(&patch_site_); __ cmp(reg, Operand(reg)); @@ -56,7 +56,7 @@ class JumpPatchSite BASE_EMBEDDED { // When initially emitting this ensure that a jump is never generated to skip // the inlined smi code. void EmitJumpIfSmi(Register reg, Label* target) { - ASSERT(!patch_site_.is_bound() && !info_emitted_); + DCHECK(!patch_site_.is_bound() && !info_emitted_); Assembler::BlockConstPoolScope block_const_pool(masm_); __ bind(&patch_site_); __ cmp(reg, Operand(reg)); @@ -88,31 +88,6 @@ class JumpPatchSite BASE_EMBEDDED { }; -static void EmitStackCheck(MacroAssembler* masm_, - Register stack_limit_scratch, - int pointers = 0, - Register scratch = sp) { - Isolate* isolate = masm_->isolate(); - Label ok; - ASSERT(scratch.is(sp) == (pointers == 0)); - Heap::RootListIndex index; - if (pointers != 0) { - __ sub(scratch, sp, Operand(pointers * kPointerSize)); - index = Heap::kRealStackLimitRootIndex; - } else { - index = Heap::kStackLimitRootIndex; - } - __ LoadRoot(stack_limit_scratch, index); - __ cmp(scratch, Operand(stack_limit_scratch)); - __ b(hs, &ok); - Handle<Code> stack_check = isolate->builtins()->StackCheck(); - PredictableCodeSizeScope predictable(masm_, - masm_->CallSize(stack_check, RelocInfo::CODE_TARGET)); - __ Call(stack_check, RelocInfo::CODE_TARGET); - __ bind(&ok); -} - - // Generate code for a JS function. On entry to the function the receiver // and arguments have been pushed on the stack left to right. The actual // argument count matches the formal parameter count expected by the @@ -158,7 +133,7 @@ void FullCodeGenerator::Generate() { __ b(ne, &ok); __ ldr(r2, GlobalObjectOperand()); - __ ldr(r2, FieldMemOperand(r2, GlobalObject::kGlobalReceiverOffset)); + __ ldr(r2, FieldMemOperand(r2, GlobalObject::kGlobalProxyOffset)); __ str(r2, MemOperand(sp, receiver_offset)); @@ -171,16 +146,22 @@ void FullCodeGenerator::Generate() { FrameScope frame_scope(masm_, StackFrame::MANUAL); info->set_prologue_offset(masm_->pc_offset()); - __ Prologue(BUILD_FUNCTION_FRAME); + __ Prologue(info->IsCodePreAgingActive()); info->AddNoFrameRange(0, masm_->pc_offset()); { Comment cmnt(masm_, "[ Allocate locals"); int locals_count = info->scope()->num_stack_slots(); // Generators allocate locals, if any, in context slots. - ASSERT(!info->function()->is_generator() || locals_count == 0); + DCHECK(!info->function()->is_generator() || locals_count == 0); if (locals_count > 0) { if (locals_count >= 128) { - EmitStackCheck(masm_, r2, locals_count, r9); + Label ok; + __ sub(r9, sp, Operand(locals_count * kPointerSize)); + __ LoadRoot(r2, Heap::kRealStackLimitRootIndex); + __ cmp(r9, Operand(r2)); + __ b(hs, &ok); + __ InvokeBuiltin(Builtins::STACK_OVERFLOW, CALL_FUNCTION); + __ bind(&ok); } __ LoadRoot(r9, Heap::kUndefinedValueRootIndex); int kMaxPushes = FLAG_optimize_for_size ? 4 : 32; @@ -212,16 +193,19 @@ void FullCodeGenerator::Generate() { if (heap_slots > 0) { // Argument to NewContext is the function, which is still in r1. 
Comment cmnt(masm_, "[ Allocate context"); + bool need_write_barrier = true; if (FLAG_harmony_scoping && info->scope()->is_global_scope()) { __ push(r1); __ Push(info->scope()->GetScopeInfo()); - __ CallRuntime(Runtime::kHiddenNewGlobalContext, 2); + __ CallRuntime(Runtime::kNewGlobalContext, 2); } else if (heap_slots <= FastNewContextStub::kMaximumSlots) { FastNewContextStub stub(isolate(), heap_slots); __ CallStub(&stub); + // Result of FastNewContextStub is always in new space. + need_write_barrier = false; } else { __ push(r1); - __ CallRuntime(Runtime::kHiddenNewFunctionContext, 1); + __ CallRuntime(Runtime::kNewFunctionContext, 1); } function_in_register = false; // Context is returned in r0. It replaces the context passed to us. @@ -242,8 +226,15 @@ void FullCodeGenerator::Generate() { __ str(r0, target); // Update the write barrier. - __ RecordWriteContextSlot( - cp, target.offset(), r0, r3, kLRHasBeenSaved, kDontSaveFPRegs); + if (need_write_barrier) { + __ RecordWriteContextSlot( + cp, target.offset(), r0, r3, kLRHasBeenSaved, kDontSaveFPRegs); + } else if (FLAG_debug_code) { + Label done; + __ JumpIfInNewSpace(cp, r0, &done); + __ Abort(kExpectedNewSpaceObject); + __ bind(&done); + } } } } @@ -301,9 +292,9 @@ void FullCodeGenerator::Generate() { // constant. if (scope()->is_function_scope() && scope()->function() != NULL) { VariableDeclaration* function = scope()->function(); - ASSERT(function->proxy()->var()->mode() == CONST || + DCHECK(function->proxy()->var()->mode() == CONST || function->proxy()->var()->mode() == CONST_LEGACY); - ASSERT(function->proxy()->var()->location() != Variable::UNALLOCATED); + DCHECK(function->proxy()->var()->location() != Variable::UNALLOCATED); VisitVariableDeclaration(function); } VisitDeclarations(scope()->declarations()); @@ -311,13 +302,21 @@ void FullCodeGenerator::Generate() { { Comment cmnt(masm_, "[ Stack check"); PrepareForBailoutForId(BailoutId::Declarations(), NO_REGISTERS); - EmitStackCheck(masm_, ip); + Label ok; + __ LoadRoot(ip, Heap::kStackLimitRootIndex); + __ cmp(sp, Operand(ip)); + __ b(hs, &ok); + Handle<Code> stack_check = isolate()->builtins()->StackCheck(); + PredictableCodeSizeScope predictable(masm_, + masm_->CallSize(stack_check, RelocInfo::CODE_TARGET)); + __ Call(stack_check, RelocInfo::CODE_TARGET); + __ bind(&ok); } { Comment cmnt(masm_, "[ Body"); - ASSERT(loop_depth() == 0); + DCHECK(loop_depth() == 0); VisitStatements(function()->body()); - ASSERT(loop_depth() == 0); + DCHECK(loop_depth() == 0); } } @@ -347,13 +346,27 @@ void FullCodeGenerator::EmitProfilingCounterDecrement(int delta) { } +static const int kProfileCounterResetSequenceLength = 5 * Assembler::kInstrSize; + + void FullCodeGenerator::EmitProfilingCounterReset() { + Assembler::BlockConstPoolScope block_const_pool(masm_); + PredictableCodeSizeScope predictable_code_size_scope( + masm_, kProfileCounterResetSequenceLength); + Label start; + __ bind(&start); int reset_value = FLAG_interrupt_budget; - if (isolate()->IsDebuggerActive()) { + if (info_->is_debug()) { // Detect debug break requests as soon as possible. reset_value = FLAG_interrupt_budget >> 4; } __ mov(r2, Operand(profiling_counter_)); + // The mov instruction above can be either 1, 2 or 3 instructions depending + // upon whether it is an extended constant pool - insert nop to compensate. 
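The comment just above is the crux of this hunk, and the DCHECK/while pair that follows enforces it: materializing the counter cell address on ARM can take one, two, or three instructions depending on whether it comes through an (extended) constant pool, yet the reset sequence must have a fixed length so it can later be located and patched. The new code pads with nops until exactly three instructions have been emitted. A toy model of the idiom (hypothetical ToyAssembler, not V8's MacroAssembler):

    #include <cstddef>
    #include <cstdint>
    #include <vector>

    struct ToyAssembler {
      std::vector<uint32_t> code;
      static constexpr uint32_t kNop = 0xE320F000;  // ARMv7 NOP encoding
      void Emit(uint32_t instr) { code.push_back(instr); }
      std::size_t InstructionsSince(std::size_t mark) const {
        return code.size() - mark;
      }
    };

    // Emit a value load that may need 1, 2, or 3 instructions (mov vs.
    // movw/movt vs. a constant-pool ldr), then pad with nops so the
    // sequence always occupies exactly 3 slots and stays patchable.
    void EmitFixedLengthLoad(ToyAssembler& masm, int instrs_needed) {
      std::size_t start = masm.code.size();
      for (int i = 0; i < instrs_needed; ++i) {
        masm.Emit(0xE3A02000);  // stand-in for the variable-length load
      }
      while (masm.InstructionsSince(start) < 3) {
        masm.Emit(ToyAssembler::kNop);
      }
    }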
+ DCHECK(masm_->InstructionsGeneratedSince(&start) <= 3); + while (masm_->InstructionsGeneratedSince(&start) != 3) { + __ nop(); + } __ mov(r3, Operand(Smi::FromInt(reset_value))); __ str(r3, FieldMemOperand(r2, Cell::kValueOffset)); } @@ -366,7 +379,7 @@ void FullCodeGenerator::EmitBackEdgeBookkeeping(IterationStatement* stmt, Assembler::BlockConstPoolScope block_const_pool(masm_); Label ok; - ASSERT(back_edge_target->is_bound()); + DCHECK(back_edge_target->is_bound()); int distance = masm_->SizeOfCodeGeneratedSince(back_edge_target); int weight = Min(kMaxBackEdgeWeight, Max(1, distance / kCodeSizeMultiplier)); @@ -443,7 +456,7 @@ void FullCodeGenerator::EmitReturnSequence() { #ifdef DEBUG // Check that the size of the code used for returning is large enough // for the debugger's requirements. - ASSERT(Assembler::kJSReturnSequenceInstructions <= + DCHECK(Assembler::kJSReturnSequenceInstructions <= masm_->InstructionsGeneratedSince(&check_exit_codesize)); #endif } @@ -451,25 +464,25 @@ void FullCodeGenerator::EmitReturnSequence() { void FullCodeGenerator::EffectContext::Plug(Variable* var) const { - ASSERT(var->IsStackAllocated() || var->IsContextSlot()); + DCHECK(var->IsStackAllocated() || var->IsContextSlot()); } void FullCodeGenerator::AccumulatorValueContext::Plug(Variable* var) const { - ASSERT(var->IsStackAllocated() || var->IsContextSlot()); + DCHECK(var->IsStackAllocated() || var->IsContextSlot()); codegen()->GetVar(result_register(), var); } void FullCodeGenerator::StackValueContext::Plug(Variable* var) const { - ASSERT(var->IsStackAllocated() || var->IsContextSlot()); + DCHECK(var->IsStackAllocated() || var->IsContextSlot()); codegen()->GetVar(result_register(), var); __ push(result_register()); } void FullCodeGenerator::TestContext::Plug(Variable* var) const { - ASSERT(var->IsStackAllocated() || var->IsContextSlot()); + DCHECK(var->IsStackAllocated() || var->IsContextSlot()); // For simplicity we always test the accumulator register. codegen()->GetVar(result_register(), var); codegen()->PrepareForBailoutBeforeSplit(condition(), false, NULL, NULL); @@ -534,7 +547,7 @@ void FullCodeGenerator::TestContext::Plug(Handle<Object> lit) const { true, true_label_, false_label_); - ASSERT(!lit->IsUndetectableObject()); // There are no undetectable literals. + DCHECK(!lit->IsUndetectableObject()); // There are no undetectable literals. 
if (lit->IsUndefined() || lit->IsNull() || lit->IsFalse()) { if (false_label_ != fall_through_) __ b(false_label_); } else if (lit->IsTrue() || lit->IsJSObject()) { @@ -561,7 +574,7 @@ void FullCodeGenerator::TestContext::Plug(Handle<Object> lit) const { void FullCodeGenerator::EffectContext::DropAndPlug(int count, Register reg) const { - ASSERT(count > 0); + DCHECK(count > 0); __ Drop(count); } @@ -569,7 +582,7 @@ void FullCodeGenerator::EffectContext::DropAndPlug(int count, void FullCodeGenerator::AccumulatorValueContext::DropAndPlug( int count, Register reg) const { - ASSERT(count > 0); + DCHECK(count > 0); __ Drop(count); __ Move(result_register(), reg); } @@ -577,7 +590,7 @@ void FullCodeGenerator::AccumulatorValueContext::DropAndPlug( void FullCodeGenerator::StackValueContext::DropAndPlug(int count, Register reg) const { - ASSERT(count > 0); + DCHECK(count > 0); if (count > 1) __ Drop(count - 1); __ str(reg, MemOperand(sp, 0)); } @@ -585,7 +598,7 @@ void FullCodeGenerator::StackValueContext::DropAndPlug(int count, void FullCodeGenerator::TestContext::DropAndPlug(int count, Register reg) const { - ASSERT(count > 0); + DCHECK(count > 0); // For simplicity we always test the accumulator register. __ Drop(count); __ Move(result_register(), reg); @@ -596,7 +609,7 @@ void FullCodeGenerator::TestContext::DropAndPlug(int count, void FullCodeGenerator::EffectContext::Plug(Label* materialize_true, Label* materialize_false) const { - ASSERT(materialize_true == materialize_false); + DCHECK(materialize_true == materialize_false); __ bind(materialize_true); } @@ -630,8 +643,8 @@ void FullCodeGenerator::StackValueContext::Plug( void FullCodeGenerator::TestContext::Plug(Label* materialize_true, Label* materialize_false) const { - ASSERT(materialize_true == true_label_); - ASSERT(materialize_false == false_label_); + DCHECK(materialize_true == true_label_); + DCHECK(materialize_false == false_label_); } @@ -694,7 +707,7 @@ void FullCodeGenerator::Split(Condition cond, MemOperand FullCodeGenerator::StackOperand(Variable* var) { - ASSERT(var->IsStackAllocated()); + DCHECK(var->IsStackAllocated()); // Offset is negative because higher indexes are at lower addresses. int offset = -var->index() * kPointerSize; // Adjust by a (parameter or local) base offset. @@ -708,7 +721,7 @@ MemOperand FullCodeGenerator::StackOperand(Variable* var) { MemOperand FullCodeGenerator::VarOperand(Variable* var, Register scratch) { - ASSERT(var->IsContextSlot() || var->IsStackAllocated()); + DCHECK(var->IsContextSlot() || var->IsStackAllocated()); if (var->IsContextSlot()) { int context_chain_length = scope()->ContextChainLength(var->scope()); __ LoadContext(scratch, context_chain_length); @@ -730,10 +743,10 @@ void FullCodeGenerator::SetVar(Variable* var, Register src, Register scratch0, Register scratch1) { - ASSERT(var->IsContextSlot() || var->IsStackAllocated()); - ASSERT(!scratch0.is(src)); - ASSERT(!scratch0.is(scratch1)); - ASSERT(!scratch1.is(src)); + DCHECK(var->IsContextSlot() || var->IsStackAllocated()); + DCHECK(!scratch0.is(src)); + DCHECK(!scratch0.is(scratch1)); + DCHECK(!scratch1.is(src)); MemOperand location = VarOperand(var, scratch0); __ str(src, location); @@ -773,7 +786,7 @@ void FullCodeGenerator::PrepareForBailoutBeforeSplit(Expression* expr, void FullCodeGenerator::EmitDebugCheckDeclarationContext(Variable* variable) { // The variable in the declaration always resides in the current function // context. 
- ASSERT_EQ(0, scope()->ContextChainLength(variable->scope())); + DCHECK_EQ(0, scope()->ContextChainLength(variable->scope())); if (generate_debug_code_) { // Check that we're not inside a with or catch context. __ ldr(r1, FieldMemOperand(cp, HeapObject::kMapOffset)); @@ -827,7 +840,7 @@ void FullCodeGenerator::VisitVariableDeclaration( Comment cmnt(masm_, "[ VariableDeclaration"); __ mov(r2, Operand(variable->name())); // Declaration nodes are always introduced in one of four modes. - ASSERT(IsDeclaredVariableMode(mode)); + DCHECK(IsDeclaredVariableMode(mode)); PropertyAttributes attr = IsImmutableVariableMode(mode) ? READ_ONLY : NONE; __ mov(r1, Operand(Smi::FromInt(attr))); @@ -842,7 +855,7 @@ void FullCodeGenerator::VisitVariableDeclaration( __ mov(r0, Operand(Smi::FromInt(0))); // Indicates no initial value. __ Push(cp, r2, r1, r0); } - __ CallRuntime(Runtime::kHiddenDeclareContextSlot, 4); + __ CallRuntime(Runtime::kDeclareLookupSlot, 4); break; } } @@ -857,7 +870,7 @@ void FullCodeGenerator::VisitFunctionDeclaration( case Variable::UNALLOCATED: { globals_->Add(variable->name(), zone()); Handle<SharedFunctionInfo> function = - Compiler::BuildFunctionInfo(declaration->fun(), script()); + Compiler::BuildFunctionInfo(declaration->fun(), script(), info_); // Check for stack-overflow exception. if (function.is_null()) return SetStackOverflow(); globals_->Add(function, zone()); @@ -898,7 +911,7 @@ void FullCodeGenerator::VisitFunctionDeclaration( __ Push(cp, r2, r1); // Push initial value for function declaration. VisitForStackValue(declaration->fun()); - __ CallRuntime(Runtime::kHiddenDeclareContextSlot, 4); + __ CallRuntime(Runtime::kDeclareLookupSlot, 4); break; } } @@ -907,8 +920,8 @@ void FullCodeGenerator::VisitFunctionDeclaration( void FullCodeGenerator::VisitModuleDeclaration(ModuleDeclaration* declaration) { Variable* variable = declaration->proxy()->var(); - ASSERT(variable->location() == Variable::CONTEXT); - ASSERT(variable->interface()->IsFrozen()); + DCHECK(variable->location() == Variable::CONTEXT); + DCHECK(variable->interface()->IsFrozen()); Comment cmnt(masm_, "[ ModuleDeclaration"); EmitDebugCheckDeclarationContext(variable); @@ -970,7 +983,7 @@ void FullCodeGenerator::DeclareGlobals(Handle<FixedArray> pairs) { __ mov(r1, Operand(pairs)); __ mov(r0, Operand(Smi::FromInt(DeclareGlobalsFlags()))); __ Push(cp, r1, r0); - __ CallRuntime(Runtime::kHiddenDeclareGlobals, 3); + __ CallRuntime(Runtime::kDeclareGlobals, 3); // Return value is ignored. } @@ -978,7 +991,7 @@ void FullCodeGenerator::DeclareGlobals(Handle<FixedArray> pairs) { void FullCodeGenerator::DeclareModules(Handle<FixedArray> descriptions) { // Call the runtime to declare the modules. __ Push(descriptions); - __ CallRuntime(Runtime::kHiddenDeclareModules, 1); + __ CallRuntime(Runtime::kDeclareModules, 1); // Return value is ignored. } @@ -1263,25 +1276,8 @@ void FullCodeGenerator::VisitForOfStatement(ForOfStatement* stmt) { Iteration loop_statement(this, stmt); increment_loop_depth(); - // var iterator = iterable[@@iterator]() - VisitForAccumulatorValue(stmt->assign_iterator()); - - // As with for-in, skip the loop if the iterator is null or undefined. - __ CompareRoot(r0, Heap::kUndefinedValueRootIndex); - __ b(eq, loop_statement.break_label()); - __ CompareRoot(r0, Heap::kNullValueRootIndex); - __ b(eq, loop_statement.break_label()); - - // Convert the iterator to a JS object. 
- Label convert, done_convert; - __ JumpIfSmi(r0, &convert); - __ CompareObjectType(r0, r1, r1, FIRST_SPEC_OBJECT_TYPE); - __ b(ge, &done_convert); - __ bind(&convert); - __ push(r0); - __ InvokeBuiltin(Builtins::TO_OBJECT, CALL_FUNCTION); - __ bind(&done_convert); - __ push(r0); + // var iterator = iterable[Symbol.iterator](); + VisitForEffect(stmt->assign_iterator()); // Loop entry. __ bind(loop_statement.continue_label()); @@ -1338,7 +1334,7 @@ void FullCodeGenerator::EmitNewClosure(Handle<SharedFunctionInfo> info, __ LoadRoot(r1, pretenure ? Heap::kTrueValueRootIndex : Heap::kFalseValueRootIndex); __ Push(cp, r0, r1); - __ CallRuntime(Runtime::kHiddenNewClosure, 3); + __ CallRuntime(Runtime::kNewClosure, 3); } context()->Plug(r0); } @@ -1350,7 +1346,7 @@ void FullCodeGenerator::VisitVariableProxy(VariableProxy* expr) { } -void FullCodeGenerator::EmitLoadGlobalCheckExtensions(Variable* var, +void FullCodeGenerator::EmitLoadGlobalCheckExtensions(VariableProxy* proxy, TypeofState typeof_state, Label* slow) { Register current = cp; @@ -1398,8 +1394,13 @@ void FullCodeGenerator::EmitLoadGlobalCheckExtensions(Variable* var, __ bind(&fast); } - __ ldr(r0, GlobalObjectOperand()); - __ mov(r2, Operand(var->name())); + __ ldr(LoadIC::ReceiverRegister(), GlobalObjectOperand()); + __ mov(LoadIC::NameRegister(), Operand(proxy->var()->name())); + if (FLAG_vector_ics) { + __ mov(LoadIC::SlotRegister(), + Operand(Smi::FromInt(proxy->VariableFeedbackSlot()))); + } + ContextualMode mode = (typeof_state == INSIDE_TYPEOF) ? NOT_CONTEXTUAL : CONTEXTUAL; @@ -1409,7 +1410,7 @@ void FullCodeGenerator::EmitLoadGlobalCheckExtensions(Variable* var, MemOperand FullCodeGenerator::ContextSlotOperandCheckExtensions(Variable* var, Label* slow) { - ASSERT(var->IsContextSlot()); + DCHECK(var->IsContextSlot()); Register context = cp; Register next = r3; Register temp = r4; @@ -1439,7 +1440,7 @@ MemOperand FullCodeGenerator::ContextSlotOperandCheckExtensions(Variable* var, } -void FullCodeGenerator::EmitDynamicLookupFastCase(Variable* var, +void FullCodeGenerator::EmitDynamicLookupFastCase(VariableProxy* proxy, TypeofState typeof_state, Label* slow, Label* done) { @@ -1448,8 +1449,9 @@ void FullCodeGenerator::EmitDynamicLookupFastCase(Variable* var, // introducing variables. In those cases, we do not want to // perform a runtime call for all variables in the scope // containing the eval. + Variable* var = proxy->var(); if (var->mode() == DYNAMIC_GLOBAL) { - EmitLoadGlobalCheckExtensions(var, typeof_state, slow); + EmitLoadGlobalCheckExtensions(proxy, typeof_state, slow); __ jmp(done); } else if (var->mode() == DYNAMIC_LOCAL) { Variable* local = var->local_if_not_shadowed(); @@ -1463,7 +1465,7 @@ void FullCodeGenerator::EmitDynamicLookupFastCase(Variable* var, __ b(ne, done); __ mov(r0, Operand(var->name())); __ push(r0); - __ CallRuntime(Runtime::kHiddenThrowReferenceError, 1); + __ CallRuntime(Runtime::kThrowReferenceError, 1); } } __ jmp(done); @@ -1481,10 +1483,12 @@ void FullCodeGenerator::EmitVariableLoad(VariableProxy* proxy) { switch (var->location()) { case Variable::UNALLOCATED: { Comment cmnt(masm_, "[ Global variable"); - // Use inline caching. Variable name is passed in r2 and the global - // object (receiver) in r0. 
- __ ldr(r0, GlobalObjectOperand()); - __ mov(r2, Operand(var->name())); + __ ldr(LoadIC::ReceiverRegister(), GlobalObjectOperand()); + __ mov(LoadIC::NameRegister(), Operand(var->name())); + if (FLAG_vector_ics) { + __ mov(LoadIC::SlotRegister(), + Operand(Smi::FromInt(proxy->VariableFeedbackSlot()))); + } CallLoadIC(CONTEXTUAL); context()->Plug(r0); break; @@ -1501,7 +1505,7 @@ void FullCodeGenerator::EmitVariableLoad(VariableProxy* proxy) { // always looked up dynamically, i.e. in that case // var->location() == LOOKUP. // always holds. - ASSERT(var->scope() != NULL); + DCHECK(var->scope() != NULL); // Check if the binding really needs an initialization check. The check // can be skipped in the following situation: we have a LET or CONST @@ -1524,8 +1528,8 @@ void FullCodeGenerator::EmitVariableLoad(VariableProxy* proxy) { skip_init_check = false; } else { // Check that we always have valid source position. - ASSERT(var->initializer_position() != RelocInfo::kNoPosition); - ASSERT(proxy->position() != RelocInfo::kNoPosition); + DCHECK(var->initializer_position() != RelocInfo::kNoPosition); + DCHECK(proxy->position() != RelocInfo::kNoPosition); skip_init_check = var->mode() != CONST_LEGACY && var->initializer_position() < proxy->position(); } @@ -1541,11 +1545,11 @@ __ b(ne, &done); __ mov(r0, Operand(var->name())); __ push(r0); - __ CallRuntime(Runtime::kHiddenThrowReferenceError, 1); + __ CallRuntime(Runtime::kThrowReferenceError, 1); __ bind(&done); } else { // Uninitialized const bindings outside of harmony mode are unholed. - ASSERT(var->mode() == CONST_LEGACY); + DCHECK(var->mode() == CONST_LEGACY); __ LoadRoot(r0, Heap::kUndefinedValueRootIndex, eq); } context()->Plug(r0); @@ -1561,11 +1565,11 @@ void FullCodeGenerator::EmitVariableLoad(VariableProxy* proxy) { Label done, slow; // Generate code for loading from variables potentially shadowed // by eval-introduced variables. - EmitDynamicLookupFastCase(var, NOT_INSIDE_TYPEOF, &slow, &done); + EmitDynamicLookupFastCase(proxy, NOT_INSIDE_TYPEOF, &slow, &done); __ bind(&slow); __ mov(r1, Operand(var->name())); __ Push(cp, r1); // Context and name.
- __ CallRuntime(Runtime::kHiddenLoadContextSlot, 2); + __ CallRuntime(Runtime::kLoadLookupSlot, 2); __ bind(&done); context()->Plug(r0); } @@ -1598,7 +1602,7 @@ void FullCodeGenerator::VisitRegExpLiteral(RegExpLiteral* expr) { __ mov(r2, Operand(expr->pattern())); __ mov(r1, Operand(expr->flags())); __ Push(r4, r3, r2, r1); - __ CallRuntime(Runtime::kHiddenMaterializeRegExpLiteral, 4); + __ CallRuntime(Runtime::kMaterializeRegExpLiteral, 4); __ mov(r5, r0); __ bind(&materialized); @@ -1610,7 +1614,7 @@ void FullCodeGenerator::VisitRegExpLiteral(RegExpLiteral* expr) { __ bind(&runtime_allocate); __ mov(r0, Operand(Smi::FromInt(size))); __ Push(r5, r0); - __ CallRuntime(Runtime::kHiddenAllocateInNewSpace, 1); + __ CallRuntime(Runtime::kAllocateInNewSpace, 1); __ pop(r5); __ bind(&allocated); @@ -1651,10 +1655,10 @@ void FullCodeGenerator::VisitObjectLiteral(ObjectLiteral* expr) { __ mov(r0, Operand(Smi::FromInt(flags))); int properties_count = constant_properties->length() / 2; if (expr->may_store_doubles() || expr->depth() > 1 || - Serializer::enabled(isolate()) || flags != ObjectLiteral::kFastElements || + masm()->serializer_enabled() || flags != ObjectLiteral::kFastElements || properties_count > FastCloneShallowObjectStub::kMaximumClonedProperties) { __ Push(r3, r2, r1, r0); - __ CallRuntime(Runtime::kHiddenCreateObjectLiteral, 4); + __ CallRuntime(Runtime::kCreateObjectLiteral, 4); } else { FastCloneShallowObjectStub stub(isolate(), properties_count); __ CallStub(&stub); @@ -1684,14 +1688,15 @@ void FullCodeGenerator::VisitObjectLiteral(ObjectLiteral* expr) { case ObjectLiteral::Property::CONSTANT: UNREACHABLE(); case ObjectLiteral::Property::MATERIALIZED_LITERAL: - ASSERT(!CompileTimeValue::IsCompileTimeValue(property->value())); + DCHECK(!CompileTimeValue::IsCompileTimeValue(property->value())); // Fall through. 
case ObjectLiteral::Property::COMPUTED: if (key->value()->IsInternalizedString()) { if (property->emit_store()) { VisitForAccumulatorValue(value); - __ mov(r2, Operand(key->value())); - __ ldr(r1, MemOperand(sp)); + DCHECK(StoreIC::ValueRegister().is(r0)); + __ mov(StoreIC::NameRegister(), Operand(key->value())); + __ ldr(StoreIC::ReceiverRegister(), MemOperand(sp)); CallStoreIC(key->LiteralFeedbackId()); PrepareForBailoutForId(key->id(), NO_REGISTERS); } else { @@ -1705,7 +1710,7 @@ void FullCodeGenerator::VisitObjectLiteral(ObjectLiteral* expr) { VisitForStackValue(key); VisitForStackValue(value); if (property->emit_store()) { - __ mov(r0, Operand(Smi::FromInt(NONE))); // PropertyAttributes + __ mov(r0, Operand(Smi::FromInt(SLOPPY))); // PropertyAttributes __ push(r0); __ CallRuntime(Runtime::kSetProperty, 4); } else { @@ -1745,11 +1750,11 @@ void FullCodeGenerator::VisitObjectLiteral(ObjectLiteral* expr) { EmitAccessor(it->second->setter); __ mov(r0, Operand(Smi::FromInt(NONE))); __ push(r0); - __ CallRuntime(Runtime::kDefineOrRedefineAccessorProperty, 5); + __ CallRuntime(Runtime::kDefineAccessorPropertyUnchecked, 5); } if (expr->has_function()) { - ASSERT(result_saved); + DCHECK(result_saved); __ ldr(r0, MemOperand(sp)); __ push(r0); __ CallRuntime(Runtime::kToFastProperties, 1); @@ -1774,7 +1779,7 @@ void FullCodeGenerator::VisitArrayLiteral(ArrayLiteral* expr) { ZoneList<Expression*>* subexprs = expr->values(); int length = subexprs->length(); Handle<FixedArray> constant_elements = expr->constant_elements(); - ASSERT_EQ(2, constant_elements->length()); + DCHECK_EQ(2, constant_elements->length()); ElementsKind constant_elements_kind = static_cast<ElementsKind>(Smi::cast(constant_elements->get(0))->value()); bool has_fast_elements = IsFastObjectElementsKind(constant_elements_kind); @@ -1792,33 +1797,12 @@ void FullCodeGenerator::VisitArrayLiteral(ArrayLiteral* expr) { __ ldr(r3, FieldMemOperand(r3, JSFunction::kLiteralsOffset)); __ mov(r2, Operand(Smi::FromInt(expr->literal_index()))); __ mov(r1, Operand(constant_elements)); - if (has_fast_elements && constant_elements_values->map() == - isolate()->heap()->fixed_cow_array_map()) { - FastCloneShallowArrayStub stub( - isolate(), - FastCloneShallowArrayStub::COPY_ON_WRITE_ELEMENTS, - allocation_site_mode, - length); - __ CallStub(&stub); - __ IncrementCounter( - isolate()->counters()->cow_arrays_created_stub(), 1, r1, r2); - } else if (expr->depth() > 1 || Serializer::enabled(isolate()) || - length > FastCloneShallowArrayStub::kMaximumClonedLength) { + if (expr->depth() > 1 || length > JSObject::kInitialMaxFastElementArray) { __ mov(r0, Operand(Smi::FromInt(flags))); __ Push(r3, r2, r1, r0); - __ CallRuntime(Runtime::kHiddenCreateArrayLiteral, 4); + __ CallRuntime(Runtime::kCreateArrayLiteral, 4); } else { - ASSERT(IsFastSmiOrObjectElementsKind(constant_elements_kind) || - FLAG_smi_only_arrays); - FastCloneShallowArrayStub::Mode mode = - FastCloneShallowArrayStub::CLONE_ANY_ELEMENTS; - - if (has_fast_elements) { - mode = FastCloneShallowArrayStub::CLONE_ELEMENTS; - } - - FastCloneShallowArrayStub stub(isolate(), mode, allocation_site_mode, - length); + FastCloneShallowArrayStub stub(isolate(), allocation_site_mode); __ CallStub(&stub); } @@ -1867,7 +1851,7 @@ void FullCodeGenerator::VisitArrayLiteral(ArrayLiteral* expr) { void FullCodeGenerator::VisitAssignment(Assignment* expr) { - ASSERT(expr->target()->IsValidReferenceExpression()); + DCHECK(expr->target()->IsValidReferenceExpression()); Comment cmnt(masm_, "[ Assignment"); @@ -1889,9 
+1873,9 @@ void FullCodeGenerator::VisitAssignment(Assignment* expr) { break; case NAMED_PROPERTY: if (expr->is_compound()) { - // We need the receiver both on the stack and in the accumulator. - VisitForAccumulatorValue(property->obj()); - __ push(result_register()); + // We need the receiver both on the stack and in the register. + VisitForStackValue(property->obj()); + __ ldr(LoadIC::ReceiverRegister(), MemOperand(sp, 0)); } else { VisitForStackValue(property->obj()); } @@ -1899,9 +1883,9 @@ void FullCodeGenerator::VisitAssignment(Assignment* expr) { case KEYED_PROPERTY: if (expr->is_compound()) { VisitForStackValue(property->obj()); - VisitForAccumulatorValue(property->key()); - __ ldr(r1, MemOperand(sp, 0)); - __ push(r0); + VisitForStackValue(property->key()); + __ ldr(LoadIC::ReceiverRegister(), MemOperand(sp, 1 * kPointerSize)); + __ ldr(LoadIC::NameRegister(), MemOperand(sp, 0)); } else { VisitForStackValue(property->obj()); VisitForStackValue(property->key()); @@ -1997,7 +1981,7 @@ void FullCodeGenerator::VisitYield(Yield* expr) { __ bind(&suspend); VisitForAccumulatorValue(expr->generator_object()); - ASSERT(continuation.pos() > 0 && Smi::IsValid(continuation.pos())); + DCHECK(continuation.pos() > 0 && Smi::IsValid(continuation.pos())); __ mov(r1, Operand(Smi::FromInt(continuation.pos()))); __ str(r1, FieldMemOperand(r0, JSGeneratorObject::kContinuationOffset)); __ str(cp, FieldMemOperand(r0, JSGeneratorObject::kContextOffset)); @@ -2008,7 +1992,7 @@ void FullCodeGenerator::VisitYield(Yield* expr) { __ cmp(sp, r1); __ b(eq, &post_runtime); __ push(r0); // generator object - __ CallRuntime(Runtime::kHiddenSuspendJSGeneratorObject, 1); + __ CallRuntime(Runtime::kSuspendJSGeneratorObject, 1); __ ldr(cp, MemOperand(fp, StandardFrameConstants::kContextOffset)); __ bind(&post_runtime); __ pop(result_register()); @@ -2040,6 +2024,9 @@ void FullCodeGenerator::VisitYield(Yield* expr) { Label l_catch, l_try, l_suspend, l_continuation, l_resume; Label l_next, l_call, l_loop; + Register load_receiver = LoadIC::ReceiverRegister(); + Register load_name = LoadIC::NameRegister(); + // Initial send value is undefined. 
__ LoadRoot(r0, Heap::kUndefinedValueRootIndex); __ b(&l_next); @@ -2047,9 +2034,9 @@ void FullCodeGenerator::VisitYield(Yield* expr) { // catch (e) { receiver = iter; f = 'throw'; arg = e; goto l_call; } __ bind(&l_catch); handler_table()->set(expr->index(), Smi::FromInt(l_catch.pos())); - __ LoadRoot(r2, Heap::kthrow_stringRootIndex); // "throw" - __ ldr(r3, MemOperand(sp, 1 * kPointerSize)); // iter - __ Push(r2, r3, r0); // "throw", iter, except + __ LoadRoot(load_name, Heap::kthrow_stringRootIndex); // "throw" + __ ldr(r3, MemOperand(sp, 1 * kPointerSize)); // iter + __ Push(load_name, r3, r0); // "throw", iter, except __ jmp(&l_call); // try { received = %yield result } @@ -2067,14 +2054,14 @@ void FullCodeGenerator::VisitYield(Yield* expr) { const int generator_object_depth = kPointerSize + handler_size; __ ldr(r0, MemOperand(sp, generator_object_depth)); __ push(r0); // g - ASSERT(l_continuation.pos() > 0 && Smi::IsValid(l_continuation.pos())); + DCHECK(l_continuation.pos() > 0 && Smi::IsValid(l_continuation.pos())); __ mov(r1, Operand(Smi::FromInt(l_continuation.pos()))); __ str(r1, FieldMemOperand(r0, JSGeneratorObject::kContinuationOffset)); __ str(cp, FieldMemOperand(r0, JSGeneratorObject::kContextOffset)); __ mov(r1, cp); __ RecordWriteField(r0, JSGeneratorObject::kContextOffset, r1, r2, kLRHasBeenSaved, kDontSaveFPRegs); - __ CallRuntime(Runtime::kHiddenSuspendJSGeneratorObject, 1); + __ CallRuntime(Runtime::kSuspendJSGeneratorObject, 1); __ ldr(cp, MemOperand(fp, StandardFrameConstants::kContextOffset)); __ pop(r0); // result EmitReturnSequence(); @@ -2083,14 +2070,19 @@ void FullCodeGenerator::VisitYield(Yield* expr) { // receiver = iter; f = 'next'; arg = received; __ bind(&l_next); - __ LoadRoot(r2, Heap::knext_stringRootIndex); // "next" - __ ldr(r3, MemOperand(sp, 1 * kPointerSize)); // iter - __ Push(r2, r3, r0); // "next", iter, received + + __ LoadRoot(load_name, Heap::knext_stringRootIndex); // "next" + __ ldr(r3, MemOperand(sp, 1 * kPointerSize)); // iter + __ Push(load_name, r3, r0); // "next", iter, received // result = receiver[f](arg); __ bind(&l_call); - __ ldr(r1, MemOperand(sp, kPointerSize)); - __ ldr(r0, MemOperand(sp, 2 * kPointerSize)); + __ ldr(load_receiver, MemOperand(sp, kPointerSize)); + __ ldr(load_name, MemOperand(sp, 2 * kPointerSize)); + if (FLAG_vector_ics) { + __ mov(LoadIC::SlotRegister(), + Operand(Smi::FromInt(expr->KeyedLoadFeedbackSlot()))); + } Handle<Code> ic = isolate()->builtins()->KeyedLoadIC_Initialize(); CallIC(ic, TypeFeedbackId::None()); __ mov(r1, r0); @@ -2103,19 +2095,29 @@ void FullCodeGenerator::VisitYield(Yield* expr) { // if (!result.done) goto l_try; __ bind(&l_loop); - __ push(r0); // save result - __ LoadRoot(r2, Heap::kdone_stringRootIndex); // "done" - CallLoadIC(NOT_CONTEXTUAL); // result.done in r0 + __ Move(load_receiver, r0); + + __ push(load_receiver); // save result + __ LoadRoot(load_name, Heap::kdone_stringRootIndex); // "done" + if (FLAG_vector_ics) { + __ mov(LoadIC::SlotRegister(), + Operand(Smi::FromInt(expr->DoneFeedbackSlot()))); + } + CallLoadIC(NOT_CONTEXTUAL); // r0=result.done Handle<Code> bool_ic = ToBooleanStub::GetUninitialized(isolate()); CallIC(bool_ic); __ cmp(r0, Operand(0)); __ b(eq, &l_try); // result.value - __ pop(r0); // result - __ LoadRoot(r2, Heap::kvalue_stringRootIndex); // "value" - CallLoadIC(NOT_CONTEXTUAL); // result.value in r0 - context()->DropAndPlug(2, r0); // drop iter and g + __ pop(load_receiver); // result + __ LoadRoot(load_name, Heap::kvalue_stringRootIndex); // "value" 
+ if (FLAG_vector_ics) { + __ mov(LoadIC::SlotRegister(), + Operand(Smi::FromInt(expr->ValueFeedbackSlot()))); + } + CallLoadIC(NOT_CONTEXTUAL); // r0=result.value + context()->DropAndPlug(2, r0); // drop iter and g break; } } @@ -2126,7 +2128,7 @@ void FullCodeGenerator::EmitGeneratorResume(Expression *generator, Expression *value, JSGeneratorObject::ResumeMode resume_mode) { // The value stays in r0, and is ultimately read by the resumed generator, as - // if CallRuntime(Runtime::kHiddenSuspendJSGeneratorObject) returned it. Or it + // if CallRuntime(Runtime::kSuspendJSGeneratorObject) returned it. Or it // is read to throw the value when the resumed generator is already closed. // r1 will hold the generator object until the activation has been resumed. VisitForStackValue(generator); @@ -2217,10 +2219,10 @@ void FullCodeGenerator::EmitGeneratorResume(Expression *generator, __ push(r2); __ b(&push_operand_holes); __ bind(&call_resume); - ASSERT(!result_register().is(r1)); + DCHECK(!result_register().is(r1)); __ Push(r1, result_register()); __ Push(Smi::FromInt(resume_mode)); - __ CallRuntime(Runtime::kHiddenResumeJSGeneratorObject, 3); + __ CallRuntime(Runtime::kResumeJSGeneratorObject, 3); // Not reached: the runtime call returns elsewhere. __ stop("not-reached"); @@ -2235,14 +2237,14 @@ void FullCodeGenerator::EmitGeneratorResume(Expression *generator, } else { // Throw the provided value. __ push(r0); - __ CallRuntime(Runtime::kHiddenThrow, 1); + __ CallRuntime(Runtime::kThrow, 1); } __ jmp(&done); // Throw error if we attempt to operate on a running generator. __ bind(&wrong_state); __ push(r1); - __ CallRuntime(Runtime::kHiddenThrowGeneratorStateError, 1); + __ CallRuntime(Runtime::kThrowGeneratorStateError, 1); __ bind(&done); context()->Plug(result_register()); @@ -2260,7 +2262,7 @@ void FullCodeGenerator::EmitCreateIteratorResult(bool done) { __ bind(&gc_required); __ Push(Smi::FromInt(map->instance_size())); - __ CallRuntime(Runtime::kHiddenAllocateInNewSpace, 1); + __ CallRuntime(Runtime::kAllocateInNewSpace, 1); __ ldr(context_register(), MemOperand(fp, StandardFrameConstants::kContextOffset)); @@ -2269,7 +2271,7 @@ void FullCodeGenerator::EmitCreateIteratorResult(bool done) { __ pop(r2); __ mov(r3, Operand(isolate()->factory()->ToBoolean(done))); __ mov(r4, Operand(isolate()->factory()->empty_fixed_array())); - ASSERT_EQ(map->instance_size(), 5 * kPointerSize); + DCHECK_EQ(map->instance_size(), 5 * kPointerSize); __ str(r1, FieldMemOperand(r0, HeapObject::kMapOffset)); __ str(r4, FieldMemOperand(r0, JSObject::kPropertiesOffset)); __ str(r4, FieldMemOperand(r0, JSObject::kElementsOffset)); @@ -2288,17 +2290,27 @@ void FullCodeGenerator::EmitCreateIteratorResult(bool done) { void FullCodeGenerator::EmitNamedPropertyLoad(Property* prop) { SetSourcePosition(prop->position()); Literal* key = prop->key()->AsLiteral(); - __ mov(r2, Operand(key->value())); - // Call load IC. It has arguments receiver and property name r0 and r2. - CallLoadIC(NOT_CONTEXTUAL, prop->PropertyFeedbackId()); + __ mov(LoadIC::NameRegister(), Operand(key->value())); + if (FLAG_vector_ics) { + __ mov(LoadIC::SlotRegister(), + Operand(Smi::FromInt(prop->PropertyFeedbackSlot()))); + CallLoadIC(NOT_CONTEXTUAL); + } else { + CallLoadIC(NOT_CONTEXTUAL, prop->PropertyFeedbackId()); + } } void FullCodeGenerator::EmitKeyedPropertyLoad(Property* prop) { SetSourcePosition(prop->position()); - // Call keyed load IC. It has arguments key and receiver in r0 and r1. 
Handle<Code> ic = isolate()->builtins()->KeyedLoadIC_Initialize(); - CallIC(ic, prop->PropertyFeedbackId()); + if (FLAG_vector_ics) { + __ mov(LoadIC::SlotRegister(), + Operand(Smi::FromInt(prop->PropertyFeedbackSlot()))); + CallIC(ic); + } else { + CallIC(ic, prop->PropertyFeedbackId()); + } } @@ -2409,7 +2421,7 @@ void FullCodeGenerator::EmitBinaryOp(BinaryOperation* expr, void FullCodeGenerator::EmitAssignment(Expression* expr) { - ASSERT(expr->IsValidReferenceExpression()); + DCHECK(expr->IsValidReferenceExpression()); // Left-hand side can only be a property, a global or a (parameter or local) // slot. @@ -2432,9 +2444,10 @@ void FullCodeGenerator::EmitAssignment(Expression* expr) { case NAMED_PROPERTY: { __ push(r0); // Preserve value. VisitForAccumulatorValue(prop->obj()); - __ mov(r1, r0); - __ pop(r0); // Restore value. - __ mov(r2, Operand(prop->key()->AsLiteral()->value())); + __ Move(StoreIC::ReceiverRegister(), r0); + __ pop(StoreIC::ValueRegister()); // Restore value. + __ mov(StoreIC::NameRegister(), + Operand(prop->key()->AsLiteral()->value())); CallStoreIC(); break; } @@ -2442,8 +2455,8 @@ void FullCodeGenerator::EmitAssignment(Expression* expr) { __ push(r0); // Preserve value. VisitForStackValue(prop->obj()); VisitForAccumulatorValue(prop->key()); - __ mov(r1, r0); - __ Pop(r0, r2); // r0 = restored value. + __ Move(KeyedStoreIC::NameRegister(), r0); + __ Pop(KeyedStoreIC::ValueRegister(), KeyedStoreIC::ReceiverRegister()); Handle<Code> ic = strict_mode() == SLOPPY ? isolate()->builtins()->KeyedStoreIC_Initialize() : isolate()->builtins()->KeyedStoreIC_Initialize_Strict(); @@ -2468,33 +2481,23 @@ void FullCodeGenerator::EmitStoreToStackLocalOrContextSlot( } -void FullCodeGenerator::EmitCallStoreContextSlot( - Handle<String> name, StrictMode strict_mode) { - __ push(r0); // Value. - __ mov(r1, Operand(name)); - __ mov(r0, Operand(Smi::FromInt(strict_mode))); - __ Push(cp, r1, r0); // Context, name, strict mode. - __ CallRuntime(Runtime::kHiddenStoreContextSlot, 4); -} - - void FullCodeGenerator::EmitVariableAssignment(Variable* var, Token::Value op) { if (var->IsUnallocated()) { // Global var, const, or let. - __ mov(r2, Operand(var->name())); - __ ldr(r1, GlobalObjectOperand()); + __ mov(StoreIC::NameRegister(), Operand(var->name())); + __ ldr(StoreIC::ReceiverRegister(), GlobalObjectOperand()); CallStoreIC(); } else if (op == Token::INIT_CONST_LEGACY) { // Const initializers need a write barrier. - ASSERT(!var->IsParameter()); // No const parameters. + DCHECK(!var->IsParameter()); // No const parameters. if (var->IsLookupSlot()) { __ push(r0); __ mov(r0, Operand(var->name())); __ Push(cp, r0); // Context and name. - __ CallRuntime(Runtime::kHiddenInitializeConstContextSlot, 3); + __ CallRuntime(Runtime::kInitializeLegacyConstLookupSlot, 3); } else { - ASSERT(var->IsStackAllocated() || var->IsContextSlot()); + DCHECK(var->IsStackAllocated() || var->IsContextSlot()); Label skip; MemOperand location = VarOperand(var, r1); __ ldr(r2, location); @@ -2506,30 +2509,32 @@ void FullCodeGenerator::EmitVariableAssignment(Variable* var, Token::Value op) { } else if (var->mode() == LET && op != Token::INIT_LET) { // Non-initializing assignment to let variable needs a write barrier. 
- if (var->IsLookupSlot()) { - EmitCallStoreContextSlot(var->name(), strict_mode()); - } else { - ASSERT(var->IsStackAllocated() || var->IsContextSlot()); - Label assign; - MemOperand location = VarOperand(var, r1); - __ ldr(r3, location); - __ CompareRoot(r3, Heap::kTheHoleValueRootIndex); - __ b(ne, &assign); - __ mov(r3, Operand(var->name())); - __ push(r3); - __ CallRuntime(Runtime::kHiddenThrowReferenceError, 1); - // Perform the assignment. - __ bind(&assign); - EmitStoreToStackLocalOrContextSlot(var, location); - } + DCHECK(!var->IsLookupSlot()); + DCHECK(var->IsStackAllocated() || var->IsContextSlot()); + Label assign; + MemOperand location = VarOperand(var, r1); + __ ldr(r3, location); + __ CompareRoot(r3, Heap::kTheHoleValueRootIndex); + __ b(ne, &assign); + __ mov(r3, Operand(var->name())); + __ push(r3); + __ CallRuntime(Runtime::kThrowReferenceError, 1); + // Perform the assignment. + __ bind(&assign); + EmitStoreToStackLocalOrContextSlot(var, location); } else if (!var->is_const_mode() || op == Token::INIT_CONST) { - // Assignment to var or initializing assignment to let/const - // in harmony mode. if (var->IsLookupSlot()) { - EmitCallStoreContextSlot(var->name(), strict_mode()); + // Assignment to var. + __ push(r0); // Value. + __ mov(r1, Operand(var->name())); + __ mov(r0, Operand(Smi::FromInt(strict_mode()))); + __ Push(cp, r1, r0); // Context, name, strict mode. + __ CallRuntime(Runtime::kStoreLookupSlot, 4); } else { - ASSERT((var->IsStackAllocated() || var->IsContextSlot())); + // Assignment to var or initializing assignment to let/const in harmony + // mode. + DCHECK((var->IsStackAllocated() || var->IsContextSlot())); MemOperand location = VarOperand(var, r1); if (generate_debug_code_ && op == Token::INIT_LET) { // Check for an uninitialized let binding. @@ -2547,14 +2552,13 @@ void FullCodeGenerator::EmitVariableAssignment(Variable* var, Token::Value op) { void FullCodeGenerator::EmitNamedPropertyAssignment(Assignment* expr) { // Assignment to a property, using a named store IC. Property* prop = expr->target()->AsProperty(); - ASSERT(prop != NULL); - ASSERT(prop->key()->AsLiteral() != NULL); + DCHECK(prop != NULL); + DCHECK(prop->key()->IsLiteral()); // Record source code position before IC call. SetSourcePosition(expr->position()); - __ mov(r2, Operand(prop->key()->AsLiteral()->value())); - __ pop(r1); - + __ mov(StoreIC::NameRegister(), Operand(prop->key()->AsLiteral()->value())); + __ pop(StoreIC::ReceiverRegister()); CallStoreIC(expr->AssignmentFeedbackId()); PrepareForBailoutForId(expr->AssignmentId(), TOS_REG); @@ -2567,7 +2571,8 @@ void FullCodeGenerator::EmitKeyedPropertyAssignment(Assignment* expr) { // Record source code position before IC call. SetSourcePosition(expr->position()); - __ Pop(r2, r1); // r1 = key. + __ Pop(KeyedStoreIC::ReceiverRegister(), KeyedStoreIC::NameRegister()); + DCHECK(KeyedStoreIC::ValueRegister().is(r0)); Handle<Code> ic = strict_mode() == SLOPPY ? 
isolate()->builtins()->KeyedStoreIC_Initialize() @@ -2585,13 +2590,15 @@ void FullCodeGenerator::VisitProperty(Property* expr) { if (key->IsPropertyName()) { VisitForAccumulatorValue(expr->obj()); + __ Move(LoadIC::ReceiverRegister(), r0); EmitNamedPropertyLoad(expr); PrepareForBailoutForId(expr->LoadId(), TOS_REG); context()->Plug(r0); } else { VisitForStackValue(expr->obj()); VisitForAccumulatorValue(expr->key()); - __ pop(r1); + __ Move(LoadIC::NameRegister(), r0); + __ pop(LoadIC::ReceiverRegister()); EmitKeyedPropertyLoad(expr); context()->Plug(r0); } @@ -2627,8 +2634,8 @@ void FullCodeGenerator::EmitCallWithLoadIC(Call* expr) { __ Push(isolate()->factory()->undefined_value()); } else { // Load the function from the receiver. - ASSERT(callee->IsProperty()); - __ ldr(r0, MemOperand(sp, 0)); + DCHECK(callee->IsProperty()); + __ ldr(LoadIC::ReceiverRegister(), MemOperand(sp, 0)); EmitNamedPropertyLoad(callee->AsProperty()); PrepareForBailoutForId(callee->AsProperty()->LoadId(), TOS_REG); // Push the target function under the receiver. @@ -2650,8 +2657,9 @@ void FullCodeGenerator::EmitKeyedCallWithLoadIC(Call* expr, Expression* callee = expr->expression(); // Load the function from the receiver. - ASSERT(callee->IsProperty()); - __ ldr(r1, MemOperand(sp, 0)); + DCHECK(callee->IsProperty()); + __ ldr(LoadIC::ReceiverRegister(), MemOperand(sp, 0)); + __ Move(LoadIC::NameRegister(), r0); EmitKeyedPropertyLoad(callee->AsProperty()); PrepareForBailoutForId(callee->AsProperty()->LoadId(), TOS_REG); @@ -2711,7 +2719,7 @@ void FullCodeGenerator::EmitResolvePossiblyDirectEval(int arg_count) { // Do the runtime call. __ Push(r4, r3, r2, r1); - __ CallRuntime(Runtime::kHiddenResolvePossiblyDirectEval, 5); + __ CallRuntime(Runtime::kResolvePossiblyDirectEval, 5); } @@ -2776,16 +2784,16 @@ void FullCodeGenerator::VisitCall(Call* expr) { { PreservePositionScope scope(masm()->positions_recorder()); // Generate code for loading from variables potentially shadowed // by eval-introduced variables. - EmitDynamicLookupFastCase(proxy->var(), NOT_INSIDE_TYPEOF, &slow, &done); + EmitDynamicLookupFastCase(proxy, NOT_INSIDE_TYPEOF, &slow, &done); } __ bind(&slow); // Call the runtime to find the function to call (returned in r0) // and the object holding it (returned in edx). - ASSERT(!context_register().is(r2)); + DCHECK(!context_register().is(r2)); __ mov(r2, Operand(proxy->name())); __ Push(context_register(), r2); - __ CallRuntime(Runtime::kHiddenLoadContextSlot, 2); + __ CallRuntime(Runtime::kLoadLookupSlot, 2); __ Push(r0, r1); // Function, receiver. // If fast case code has been generated, emit code to push the @@ -2818,7 +2826,7 @@ void FullCodeGenerator::VisitCall(Call* expr) { EmitKeyedCallWithLoadIC(expr, property->key()); } } else { - ASSERT(call_type == Call::OTHER_CALL); + DCHECK(call_type == Call::OTHER_CALL); // Call to an arbitrary expression not handled specially above. { PreservePositionScope scope(masm()->positions_recorder()); VisitForStackValue(callee); @@ -2831,7 +2839,7 @@ void FullCodeGenerator::VisitCall(Call* expr) { #ifdef DEBUG // RecordJSReturnSite should have been called. - ASSERT(expr->return_is_recorded_); + DCHECK(expr->return_is_recorded_); #endif } @@ -2865,7 +2873,7 @@ void FullCodeGenerator::VisitCallNew(CallNew* expr) { // Record call targets in unoptimized code. 
if (FLAG_pretenuring_call_new) { EnsureSlotContainsAllocationSite(expr->AllocationSiteFeedbackSlot()); - ASSERT(expr->AllocationSiteFeedbackSlot() == + DCHECK(expr->AllocationSiteFeedbackSlot() == expr->CallNewFeedbackSlot() + 1); } @@ -2881,7 +2889,7 @@ void FullCodeGenerator::VisitCallNew(CallNew* expr) { void FullCodeGenerator::EmitIsSmi(CallRuntime* expr) { ZoneList<Expression*>* args = expr->arguments(); - ASSERT(args->length() == 1); + DCHECK(args->length() == 1); VisitForAccumulatorValue(args->at(0)); @@ -2902,7 +2910,7 @@ void FullCodeGenerator::EmitIsSmi(CallRuntime* expr) { void FullCodeGenerator::EmitIsNonNegativeSmi(CallRuntime* expr) { ZoneList<Expression*>* args = expr->arguments(); - ASSERT(args->length() == 1); + DCHECK(args->length() == 1); VisitForAccumulatorValue(args->at(0)); @@ -2923,7 +2931,7 @@ void FullCodeGenerator::EmitIsNonNegativeSmi(CallRuntime* expr) { void FullCodeGenerator::EmitIsObject(CallRuntime* expr) { ZoneList<Expression*>* args = expr->arguments(); - ASSERT(args->length() == 1); + DCHECK(args->length() == 1); VisitForAccumulatorValue(args->at(0)); @@ -2956,7 +2964,7 @@ void FullCodeGenerator::EmitIsObject(CallRuntime* expr) { void FullCodeGenerator::EmitIsSpecObject(CallRuntime* expr) { ZoneList<Expression*>* args = expr->arguments(); - ASSERT(args->length() == 1); + DCHECK(args->length() == 1); VisitForAccumulatorValue(args->at(0)); @@ -2978,7 +2986,7 @@ void FullCodeGenerator::EmitIsSpecObject(CallRuntime* expr) { void FullCodeGenerator::EmitIsUndetectableObject(CallRuntime* expr) { ZoneList<Expression*>* args = expr->arguments(); - ASSERT(args->length() == 1); + DCHECK(args->length() == 1); VisitForAccumulatorValue(args->at(0)); @@ -3003,7 +3011,7 @@ void FullCodeGenerator::EmitIsUndetectableObject(CallRuntime* expr) { void FullCodeGenerator::EmitIsStringWrapperSafeForDefaultValueOf( CallRuntime* expr) { ZoneList<Expression*>* args = expr->arguments(); - ASSERT(args->length() == 1); + DCHECK(args->length() == 1); VisitForAccumulatorValue(args->at(0)); @@ -3047,7 +3055,7 @@ void FullCodeGenerator::EmitIsStringWrapperSafeForDefaultValueOf( __ add(r4, r4, Operand(DescriptorArray::kFirstOffset - kHeapObjectTag)); // Calculate the end of the descriptor array. __ mov(r2, r4); - __ add(r2, r2, Operand::PointerOffsetFromSmiKey(r3)); + __ add(r2, r2, Operand(r3, LSL, kPointerSizeLog2)); // Loop through all the keys in the descriptor array. If one of these is the // string "valueOf" the result is false. 
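Nearly every hunk in this file is the same mechanical change: V8's old ASSERT macro family becomes DCHECK. Both families vanish in release builds; the rename (taken from the Chromium base library) mainly distinguishes debug-only checks from the always-on CHECK. A rough sketch of the distinction, assuming a DEBUG define as in V8's build (simplified; the real macros print both operands and abort through V8's fatal-error handler):

    #include <cstdio>
    #include <cstdlib>

    // Always-on check: survives release builds.
    #define CHECK(cond)                                      \
      do {                                                   \
        if (!(cond)) {                                       \
          std::fprintf(stderr, "Check failed: %s\n", #cond); \
          std::abort();                                      \
        }                                                    \
      } while (false)

    #ifdef DEBUG
    // Debug-only check: identical to CHECK in debug builds...
    #define DCHECK(cond) CHECK(cond)
    #else
    // ...and compiled out entirely in release builds.
    #define DCHECK(cond) ((void)0)
    #endif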
@@ -3091,7 +3099,7 @@ void FullCodeGenerator::EmitIsStringWrapperSafeForDefaultValueOf( void FullCodeGenerator::EmitIsFunction(CallRuntime* expr) { ZoneList<Expression*>* args = expr->arguments(); - ASSERT(args->length() == 1); + DCHECK(args->length() == 1); VisitForAccumulatorValue(args->at(0)); @@ -3113,7 +3121,7 @@ void FullCodeGenerator::EmitIsFunction(CallRuntime* expr) { void FullCodeGenerator::EmitIsMinusZero(CallRuntime* expr) { ZoneList<Expression*>* args = expr->arguments(); - ASSERT(args->length() == 1); + DCHECK(args->length() == 1); VisitForAccumulatorValue(args->at(0)); @@ -3139,7 +3147,7 @@ void FullCodeGenerator::EmitIsMinusZero(CallRuntime* expr) { void FullCodeGenerator::EmitIsArray(CallRuntime* expr) { ZoneList<Expression*>* args = expr->arguments(); - ASSERT(args->length() == 1); + DCHECK(args->length() == 1); VisitForAccumulatorValue(args->at(0)); @@ -3161,7 +3169,7 @@ void FullCodeGenerator::EmitIsArray(CallRuntime* expr) { void FullCodeGenerator::EmitIsRegExp(CallRuntime* expr) { ZoneList<Expression*>* args = expr->arguments(); - ASSERT(args->length() == 1); + DCHECK(args->length() == 1); VisitForAccumulatorValue(args->at(0)); @@ -3183,7 +3191,7 @@ void FullCodeGenerator::EmitIsRegExp(CallRuntime* expr) { void FullCodeGenerator::EmitIsConstructCall(CallRuntime* expr) { - ASSERT(expr->arguments()->length() == 0); + DCHECK(expr->arguments()->length() == 0); Label materialize_true, materialize_false; Label* if_true = NULL; @@ -3212,7 +3220,7 @@ void FullCodeGenerator::EmitIsConstructCall(CallRuntime* expr) { void FullCodeGenerator::EmitObjectEquals(CallRuntime* expr) { ZoneList<Expression*>* args = expr->arguments(); - ASSERT(args->length() == 2); + DCHECK(args->length() == 2); // Load the two objects into registers and perform the comparison. VisitForStackValue(args->at(0)); @@ -3236,7 +3244,7 @@ void FullCodeGenerator::EmitObjectEquals(CallRuntime* expr) { void FullCodeGenerator::EmitArguments(CallRuntime* expr) { ZoneList<Expression*>* args = expr->arguments(); - ASSERT(args->length() == 1); + DCHECK(args->length() == 1); // ArgumentsAccessStub expects the key in edx and the formal // parameter count in r0. @@ -3250,7 +3258,7 @@ void FullCodeGenerator::EmitArguments(CallRuntime* expr) { void FullCodeGenerator::EmitArgumentsLength(CallRuntime* expr) { - ASSERT(expr->arguments()->length() == 0); + DCHECK(expr->arguments()->length() == 0); // Get the number of formal parameters. __ mov(r0, Operand(Smi::FromInt(info_->scope()->num_parameters()))); @@ -3270,7 +3278,7 @@ void FullCodeGenerator::EmitArgumentsLength(CallRuntime* expr) { void FullCodeGenerator::EmitClassOf(CallRuntime* expr) { ZoneList<Expression*>* args = expr->arguments(); - ASSERT(args->length() == 1); + DCHECK(args->length() == 1); Label done, null, function, non_function_constructor; VisitForAccumulatorValue(args->at(0)); @@ -3333,7 +3341,7 @@ void FullCodeGenerator::EmitSubString(CallRuntime* expr) { // Load the arguments on the stack and call the stub. SubStringStub stub(isolate()); ZoneList<Expression*>* args = expr->arguments(); - ASSERT(args->length() == 3); + DCHECK(args->length() == 3); VisitForStackValue(args->at(0)); VisitForStackValue(args->at(1)); VisitForStackValue(args->at(2)); @@ -3346,7 +3354,7 @@ void FullCodeGenerator::EmitRegExpExec(CallRuntime* expr) { // Load the arguments on the stack and call the stub. 
RegExpExecStub stub(isolate()); ZoneList<Expression*>* args = expr->arguments(); - ASSERT(args->length() == 4); + DCHECK(args->length() == 4); VisitForStackValue(args->at(0)); VisitForStackValue(args->at(1)); VisitForStackValue(args->at(2)); @@ -3358,7 +3366,7 @@ void FullCodeGenerator::EmitRegExpExec(CallRuntime* expr) { void FullCodeGenerator::EmitValueOf(CallRuntime* expr) { ZoneList<Expression*>* args = expr->arguments(); - ASSERT(args->length() == 1); + DCHECK(args->length() == 1); VisitForAccumulatorValue(args->at(0)); // Load the object. Label done; @@ -3375,8 +3383,8 @@ void FullCodeGenerator::EmitValueOf(CallRuntime* expr) { void FullCodeGenerator::EmitDateField(CallRuntime* expr) { ZoneList<Expression*>* args = expr->arguments(); - ASSERT(args->length() == 2); - ASSERT_NE(NULL, args->at(1)->AsLiteral()); + DCHECK(args->length() == 2); + DCHECK_NE(NULL, args->at(1)->AsLiteral()); Smi* index = Smi::cast(*(args->at(1)->AsLiteral()->value())); VisitForAccumulatorValue(args->at(0)); // Load the object. @@ -3414,7 +3422,7 @@ void FullCodeGenerator::EmitDateField(CallRuntime* expr) { } __ bind(¬_date_object); - __ CallRuntime(Runtime::kHiddenThrowNotDateError, 0); + __ CallRuntime(Runtime::kThrowNotDateError, 0); __ bind(&done); context()->Plug(r0); } @@ -3422,7 +3430,7 @@ void FullCodeGenerator::EmitDateField(CallRuntime* expr) { void FullCodeGenerator::EmitOneByteSeqStringSetChar(CallRuntime* expr) { ZoneList<Expression*>* args = expr->arguments(); - ASSERT_EQ(3, args->length()); + DCHECK_EQ(3, args->length()); Register string = r0; Register index = r1; @@ -3455,7 +3463,7 @@ void FullCodeGenerator::EmitOneByteSeqStringSetChar(CallRuntime* expr) { void FullCodeGenerator::EmitTwoByteSeqStringSetChar(CallRuntime* expr) { ZoneList<Expression*>* args = expr->arguments(); - ASSERT_EQ(3, args->length()); + DCHECK_EQ(3, args->length()); Register string = r0; Register index = r1; @@ -3491,7 +3499,7 @@ void FullCodeGenerator::EmitTwoByteSeqStringSetChar(CallRuntime* expr) { void FullCodeGenerator::EmitMathPow(CallRuntime* expr) { // Load the arguments on the stack and call the runtime function. ZoneList<Expression*>* args = expr->arguments(); - ASSERT(args->length() == 2); + DCHECK(args->length() == 2); VisitForStackValue(args->at(0)); VisitForStackValue(args->at(1)); MathPowStub stub(isolate(), MathPowStub::ON_STACK); @@ -3502,7 +3510,7 @@ void FullCodeGenerator::EmitMathPow(CallRuntime* expr) { void FullCodeGenerator::EmitSetValueOf(CallRuntime* expr) { ZoneList<Expression*>* args = expr->arguments(); - ASSERT(args->length() == 2); + DCHECK(args->length() == 2); VisitForStackValue(args->at(0)); // Load the object. VisitForAccumulatorValue(args->at(1)); // Load the value. __ pop(r1); // r0 = value. r1 = object. @@ -3530,7 +3538,7 @@ void FullCodeGenerator::EmitSetValueOf(CallRuntime* expr) { void FullCodeGenerator::EmitNumberToString(CallRuntime* expr) { ZoneList<Expression*>* args = expr->arguments(); - ASSERT_EQ(args->length(), 1); + DCHECK_EQ(args->length(), 1); // Load the argument into r0 and call the stub. 
VisitForAccumulatorValue(args->at(0)); @@ -3542,7 +3550,7 @@ void FullCodeGenerator::EmitNumberToString(CallRuntime* expr) { void FullCodeGenerator::EmitStringCharFromCode(CallRuntime* expr) { ZoneList<Expression*>* args = expr->arguments(); - ASSERT(args->length() == 1); + DCHECK(args->length() == 1); VisitForAccumulatorValue(args->at(0)); Label done; @@ -3560,7 +3568,7 @@ void FullCodeGenerator::EmitStringCharFromCode(CallRuntime* expr) { void FullCodeGenerator::EmitStringCharCodeAt(CallRuntime* expr) { ZoneList<Expression*>* args = expr->arguments(); - ASSERT(args->length() == 2); + DCHECK(args->length() == 2); VisitForStackValue(args->at(0)); VisitForAccumulatorValue(args->at(1)); @@ -3605,7 +3613,7 @@ void FullCodeGenerator::EmitStringCharCodeAt(CallRuntime* expr) { void FullCodeGenerator::EmitStringCharAt(CallRuntime* expr) { ZoneList<Expression*>* args = expr->arguments(); - ASSERT(args->length() == 2); + DCHECK(args->length() == 2); VisitForStackValue(args->at(0)); VisitForAccumulatorValue(args->at(1)); @@ -3652,7 +3660,7 @@ void FullCodeGenerator::EmitStringCharAt(CallRuntime* expr) { void FullCodeGenerator::EmitStringAdd(CallRuntime* expr) { ZoneList<Expression*>* args = expr->arguments(); - ASSERT_EQ(2, args->length()); + DCHECK_EQ(2, args->length()); VisitForStackValue(args->at(0)); VisitForAccumulatorValue(args->at(1)); @@ -3665,7 +3673,7 @@ void FullCodeGenerator::EmitStringAdd(CallRuntime* expr) { void FullCodeGenerator::EmitStringCompare(CallRuntime* expr) { ZoneList<Expression*>* args = expr->arguments(); - ASSERT_EQ(2, args->length()); + DCHECK_EQ(2, args->length()); VisitForStackValue(args->at(0)); VisitForStackValue(args->at(1)); @@ -3677,7 +3685,7 @@ void FullCodeGenerator::EmitStringCompare(CallRuntime* expr) { void FullCodeGenerator::EmitCallFunction(CallRuntime* expr) { ZoneList<Expression*>* args = expr->arguments(); - ASSERT(args->length() >= 2); + DCHECK(args->length() >= 2); int arg_count = args->length() - 2; // 2 ~ receiver and function. for (int i = 0; i < arg_count + 1; i++) { @@ -3710,7 +3718,7 @@ void FullCodeGenerator::EmitCallFunction(CallRuntime* expr) { void FullCodeGenerator::EmitRegExpConstructResult(CallRuntime* expr) { RegExpConstructResultStub stub(isolate()); ZoneList<Expression*>* args = expr->arguments(); - ASSERT(args->length() == 3); + DCHECK(args->length() == 3); VisitForStackValue(args->at(0)); VisitForStackValue(args->at(1)); VisitForAccumulatorValue(args->at(2)); @@ -3723,8 +3731,8 @@ void FullCodeGenerator::EmitRegExpConstructResult(CallRuntime* expr) { void FullCodeGenerator::EmitGetFromCache(CallRuntime* expr) { ZoneList<Expression*>* args = expr->arguments(); - ASSERT_EQ(2, args->length()); - ASSERT_NE(NULL, args->at(0)->AsLiteral()); + DCHECK_EQ(2, args->length()); + DCHECK_NE(NULL, args->at(0)->AsLiteral()); int cache_id = Smi::cast(*(args->at(0)->AsLiteral()->value()))->value(); Handle<FixedArray> jsfunction_result_caches( @@ -3763,7 +3771,7 @@ void FullCodeGenerator::EmitGetFromCache(CallRuntime* expr) { __ bind(&not_found); // Call runtime to perform the lookup.
__ Push(cache, key); - __ CallRuntime(Runtime::kHiddenGetFromCache, 2); + __ CallRuntime(Runtime::kGetFromCache, 2); __ bind(&done); context()->Plug(r0); @@ -3792,7 +3800,7 @@ void FullCodeGenerator::EmitHasCachedArrayIndex(CallRuntime* expr) { void FullCodeGenerator::EmitGetCachedArrayIndex(CallRuntime* expr) { ZoneList<Expression*>* args = expr->arguments(); - ASSERT(args->length() == 1); + DCHECK(args->length() == 1); VisitForAccumulatorValue(args->at(0)); __ AssertString(r0); @@ -3809,7 +3817,7 @@ void FullCodeGenerator::EmitFastAsciiArrayJoin(CallRuntime* expr) { not_size_one_array, loop, empty_separator_loop, one_char_separator_loop, one_char_separator_loop_entry, long_separator_loop; ZoneList<Expression*>* args = expr->arguments(); - ASSERT(args->length() == 2); + DCHECK(args->length() == 2); VisitForStackValue(args->at(1)); VisitForAccumulatorValue(args->at(0)); @@ -3967,7 +3975,7 @@ void FullCodeGenerator::EmitFastAsciiArrayJoin(CallRuntime* expr) { __ CopyBytes(string, result_pos, string_length, scratch); __ cmp(element, elements_end); __ b(lt, &empty_separator_loop); // End while (element < elements_end). - ASSERT(result.is(r0)); + DCHECK(result.is(r0)); __ b(&done); // One-character separator case @@ -3999,7 +4007,7 @@ void FullCodeGenerator::EmitFastAsciiArrayJoin(CallRuntime* expr) { __ CopyBytes(string, result_pos, string_length, scratch); __ cmp(element, elements_end); __ b(lt, &one_char_separator_loop); // End while (element < elements_end). - ASSERT(result.is(r0)); + DCHECK(result.is(r0)); __ b(&done); // Long separator case (separator is more than one character). Entry is at the @@ -4029,7 +4037,7 @@ void FullCodeGenerator::EmitFastAsciiArrayJoin(CallRuntime* expr) { __ CopyBytes(string, result_pos, string_length, scratch); __ cmp(element, elements_end); __ b(lt, &long_separator_loop); // End while (element < elements_end). - ASSERT(result.is(r0)); + DCHECK(result.is(r0)); __ b(&done); __ bind(&bailout); @@ -4039,6 +4047,17 @@ void FullCodeGenerator::EmitFastAsciiArrayJoin(CallRuntime* expr) { } +void FullCodeGenerator::EmitDebugIsActive(CallRuntime* expr) { + DCHECK(expr->arguments()->length() == 0); + ExternalReference debug_is_active = + ExternalReference::debug_is_active_address(isolate()); + __ mov(ip, Operand(debug_is_active)); + __ ldrb(r0, MemOperand(ip)); + __ SmiTag(r0); + context()->Plug(r0); +} + + void FullCodeGenerator::VisitCallRuntime(CallRuntime* expr) { if (expr->function() != NULL && expr->function()->intrinsic_type == Runtime::INLINE) { @@ -4053,13 +4072,20 @@ void FullCodeGenerator::VisitCallRuntime(CallRuntime* expr) { if (expr->is_jsruntime()) { // Push the builtins object as the receiver. - __ ldr(r0, GlobalObjectOperand()); - __ ldr(r0, FieldMemOperand(r0, GlobalObject::kBuiltinsOffset)); - __ push(r0); + Register receiver = LoadIC::ReceiverRegister(); + __ ldr(receiver, GlobalObjectOperand()); + __ ldr(receiver, FieldMemOperand(receiver, GlobalObject::kBuiltinsOffset)); + __ push(receiver); // Load the function from the receiver. - __ mov(r2, Operand(expr->name())); - CallLoadIC(NOT_CONTEXTUAL, expr->CallRuntimeFeedbackId()); + __ mov(LoadIC::NameRegister(), Operand(expr->name())); + if (FLAG_vector_ics) { + __ mov(LoadIC::SlotRegister(), + Operand(Smi::FromInt(expr->CallRuntimeFeedbackSlot()))); + CallLoadIC(NOT_CONTEXTUAL); + } else { + CallLoadIC(NOT_CONTEXTUAL, expr->CallRuntimeFeedbackId()); + } // Push the target function under the receiver. 
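The EmitDebugIsActive helper added above loads one byte from an external reference and Smi-tags it before plugging it into the context, and the FLAG_vector_ics path materializes the feedback slot with Smi::FromInt. A rough sketch of what Smi tagging means on a 32-bit target; the one-bit shift mirrors the common 32-bit layout and is an assumption of this sketch, not V8's actual constants:

    #include <cassert>
    #include <cstdint>

    // On 32-bit targets a Smi is the integer shifted left by one, leaving
    // tag bit 0 clear so it can be told apart from a heap pointer.
    constexpr int kSmiShiftSketch = 1;  // assumed

    int32_t SmiTag(int32_t value) { return value << kSmiShiftSketch; }
    int32_t SmiUntag(int32_t smi) { return smi >> kSmiShiftSketch; }
    bool LooksLikeSmi(int32_t word) { return (word & 1) == 0; }

    int main() {
      int32_t is_active = 1;               // the byte read from debug_is_active
      int32_t tagged = SmiTag(is_active);  // what `__ SmiTag(r0)` leaves in r0
      assert(LooksLikeSmi(tagged));
      assert(SmiUntag(tagged) == is_active);
      return 0;
    }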
__ ldr(ip, MemOperand(sp, 0)); @@ -4113,7 +4139,7 @@ void FullCodeGenerator::VisitUnaryOperation(UnaryOperation* expr) { Variable* var = proxy->var(); // Delete of an unqualified identifier is disallowed in strict mode // but "delete this" is allowed. - ASSERT(strict_mode() == SLOPPY || var->is_this()); + DCHECK(strict_mode() == SLOPPY || var->is_this()); if (var->IsUnallocated()) { __ ldr(r2, GlobalObjectOperand()); __ mov(r1, Operand(var->name())); @@ -4128,10 +4154,10 @@ void FullCodeGenerator::VisitUnaryOperation(UnaryOperation* expr) { } else { // Non-global variable. Call the runtime to try to delete from the // context where the variable was introduced. - ASSERT(!context_register().is(r2)); + DCHECK(!context_register().is(r2)); __ mov(r2, Operand(var->name())); __ Push(context_register(), r2); - __ CallRuntime(Runtime::kHiddenDeleteContextSlot, 2); + __ CallRuntime(Runtime::kDeleteLookupSlot, 2); context()->Plug(r0); } } else { @@ -4169,7 +4195,7 @@ void FullCodeGenerator::VisitUnaryOperation(UnaryOperation* expr) { // for control and plugging the control flow into the context, // because we need to prepare a pair of extra administrative AST ids // for the optimizing compiler. - ASSERT(context()->IsAccumulatorValue() || context()->IsStackValue()); + DCHECK(context()->IsAccumulatorValue() || context()->IsStackValue()); Label materialize_true, materialize_false, done; VisitForControl(expr->expression(), &materialize_false, @@ -4206,7 +4232,7 @@ void FullCodeGenerator::VisitUnaryOperation(UnaryOperation* expr) { void FullCodeGenerator::VisitCountOperation(CountOperation* expr) { - ASSERT(expr->expression()->IsValidReferenceExpression()); + DCHECK(expr->expression()->IsValidReferenceExpression()); Comment cmnt(masm_, "[ CountOperation"); SetSourcePosition(expr->position()); @@ -4225,7 +4251,7 @@ void FullCodeGenerator::VisitCountOperation(CountOperation* expr) { // Evaluate expression and get value. if (assign_type == VARIABLE) { - ASSERT(expr->expression()->AsVariableProxy()->var() != NULL); + DCHECK(expr->expression()->AsVariableProxy()->var() != NULL); AccumulatorValueContext context(this); EmitVariableLoad(expr->expression()->AsVariableProxy()); } else { @@ -4235,15 +4261,15 @@ void FullCodeGenerator::VisitCountOperation(CountOperation* expr) { __ push(ip); } if (assign_type == NAMED_PROPERTY) { - // Put the object both on the stack and in the accumulator. - VisitForAccumulatorValue(prop->obj()); - __ push(r0); + // Put the object both on the stack and in the register. 
+ VisitForStackValue(prop->obj()); + __ ldr(LoadIC::ReceiverRegister(), MemOperand(sp, 0)); EmitNamedPropertyLoad(prop); } else { VisitForStackValue(prop->obj()); - VisitForAccumulatorValue(prop->key()); - __ ldr(r1, MemOperand(sp, 0)); - __ push(r0); + VisitForStackValue(prop->key()); + __ ldr(LoadIC::ReceiverRegister(), MemOperand(sp, 1 * kPointerSize)); + __ ldr(LoadIC::NameRegister(), MemOperand(sp, 0)); EmitKeyedPropertyLoad(prop); } } @@ -4351,8 +4377,9 @@ void FullCodeGenerator::VisitCountOperation(CountOperation* expr) { } break; case NAMED_PROPERTY: { - __ mov(r2, Operand(prop->key()->AsLiteral()->value())); - __ pop(r1); + __ mov(StoreIC::NameRegister(), + Operand(prop->key()->AsLiteral()->value())); + __ pop(StoreIC::ReceiverRegister()); CallStoreIC(expr->CountStoreFeedbackId()); PrepareForBailoutForId(expr->AssignmentId(), TOS_REG); if (expr->is_postfix()) { @@ -4365,7 +4392,7 @@ void FullCodeGenerator::VisitCountOperation(CountOperation* expr) { break; } case KEYED_PROPERTY: { - __ Pop(r2, r1); // r1 = key. r2 = receiver. + __ Pop(KeyedStoreIC::ReceiverRegister(), KeyedStoreIC::NameRegister()); Handle<Code> ic = strict_mode() == SLOPPY ? isolate()->builtins()->KeyedStoreIC_Initialize() : isolate()->builtins()->KeyedStoreIC_Initialize_Strict(); @@ -4385,13 +4412,17 @@ void FullCodeGenerator::VisitCountOperation(CountOperation* expr) { void FullCodeGenerator::VisitForTypeofValue(Expression* expr) { - ASSERT(!context()->IsEffect()); - ASSERT(!context()->IsTest()); + DCHECK(!context()->IsEffect()); + DCHECK(!context()->IsTest()); VariableProxy* proxy = expr->AsVariableProxy(); if (proxy != NULL && proxy->var()->IsUnallocated()) { Comment cmnt(masm_, "[ Global variable"); - __ ldr(r0, GlobalObjectOperand()); - __ mov(r2, Operand(proxy->name())); + __ ldr(LoadIC::ReceiverRegister(), GlobalObjectOperand()); + __ mov(LoadIC::NameRegister(), Operand(proxy->name())); + if (FLAG_vector_ics) { + __ mov(LoadIC::SlotRegister(), + Operand(Smi::FromInt(proxy->VariableFeedbackSlot()))); + } // Use a regular load, not a contextual load, to avoid a reference // error. CallLoadIC(NOT_CONTEXTUAL); @@ -4403,12 +4434,12 @@ void FullCodeGenerator::VisitForTypeofValue(Expression* expr) { // Generate code for loading from variables potentially shadowed // by eval-introduced variables. 
- EmitDynamicLookupFastCase(proxy->var(), INSIDE_TYPEOF, &slow, &done); + EmitDynamicLookupFastCase(proxy, INSIDE_TYPEOF, &slow, &done); __ bind(&slow); __ mov(r0, Operand(proxy->name())); __ Push(cp, r0); - __ CallRuntime(Runtime::kHiddenLoadContextSlotNoReferenceError, 2); + __ CallRuntime(Runtime::kLoadLookupSlotNoReferenceError, 2); PrepareForBailout(expr, TOS_REG); __ bind(&done); @@ -4459,10 +4490,6 @@ void FullCodeGenerator::EmitLiteralCompareTypeof(Expression* expr, __ b(eq, if_true); __ CompareRoot(r0, Heap::kFalseValueRootIndex); Split(eq, if_true, if_false, fall_through); - } else if (FLAG_harmony_typeof && - String::Equals(check, factory->null_string())) { - __ CompareRoot(r0, Heap::kNullValueRootIndex); - Split(eq, if_true, if_false, fall_through); } else if (String::Equals(check, factory->undefined_string())) { __ CompareRoot(r0, Heap::kUndefinedValueRootIndex); __ b(eq, if_true); @@ -4482,10 +4509,8 @@ void FullCodeGenerator::EmitLiteralCompareTypeof(Expression* expr, Split(eq, if_true, if_false, fall_through); } else if (String::Equals(check, factory->object_string())) { __ JumpIfSmi(r0, if_false); - if (!FLAG_harmony_typeof) { - __ CompareRoot(r0, Heap::kNullValueRootIndex); - __ b(eq, if_true); - } + __ CompareRoot(r0, Heap::kNullValueRootIndex); + __ b(eq, if_true); // Check for JS objects => true. __ CompareObjectType(r0, r0, r1, FIRST_NONCALLABLE_SPEC_OBJECT_TYPE); __ b(lt, if_false); @@ -4621,7 +4646,7 @@ Register FullCodeGenerator::context_register() { void FullCodeGenerator::StoreToFrameField(int frame_offset, Register value) { - ASSERT_EQ(POINTER_SIZE_ALIGN(frame_offset), frame_offset); + DCHECK_EQ(POINTER_SIZE_ALIGN(frame_offset), frame_offset); __ str(value, MemOperand(fp, frame_offset)); } @@ -4646,7 +4671,7 @@ void FullCodeGenerator::PushFunctionArgumentForContextAllocation() { // code. Fetch it from the context. __ ldr(ip, ContextOperand(cp, Context::CLOSURE_INDEX)); } else { - ASSERT(declaration_scope->is_function_scope()); + DCHECK(declaration_scope->is_function_scope()); __ ldr(ip, MemOperand(fp, JavaScriptFrameConstants::kFunctionOffset)); } __ push(ip); @@ -4657,7 +4682,7 @@ void FullCodeGenerator::PushFunctionArgumentForContextAllocation() { // Non-local control flow support. void FullCodeGenerator::EnterFinallyBlock() { - ASSERT(!result_register().is(r1)); + DCHECK(!result_register().is(r1)); // Store result register while executing finally block. __ push(result_register()); // Cook return address in link register to stack (smi encoded Code* delta) @@ -4691,7 +4716,7 @@ void FullCodeGenerator::EnterFinallyBlock() { void FullCodeGenerator::ExitFinallyBlock() { - ASSERT(!result_register().is(r1)); + DCHECK(!result_register().is(r1)); // Restore pending message from stack. __ pop(r1); ExternalReference pending_message_script = @@ -4757,12 +4782,20 @@ FullCodeGenerator::NestedStatement* FullCodeGenerator::TryFinally::Exit( static Address GetInterruptImmediateLoadAddress(Address pc) { Address load_address = pc - 2 * Assembler::kInstrSize; if (!FLAG_enable_ool_constant_pool) { - ASSERT(Assembler::IsLdrPcImmediateOffset(Memory::int32_at(load_address))); + DCHECK(Assembler::IsLdrPcImmediateOffset(Memory::int32_at(load_address))); + } else if (Assembler::IsLdrPpRegOffset(Memory::int32_at(load_address))) { + // This is an extended constant pool lookup. 
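The EmitLiteralCompareTypeof hunk above deletes the FLAG_harmony_typeof branches: with that experimental flag removed, `typeof null` is unconditionally "object" again, so the null-value check folds back into the object case. A small host-side sketch of the resulting dispatch, with an illustrative tag enum standing in for the heap checks the assembly performs:

    #include <cassert>
    #include <string>

    enum class Tag { kUndefined, kNull, kBoolean, kNumber, kString, kFunction, kObject };

    std::string TypeofResult(Tag tag) {
      switch (tag) {
        case Tag::kUndefined: return "undefined";
        case Tag::kBoolean:   return "boolean";
        case Tag::kNumber:    return "number";
        case Tag::kString:    return "string";
        case Tag::kFunction:  return "function";
        case Tag::kNull:      // harmony_typeof gone: null reports "object"
        case Tag::kObject:    return "object";
      }
      return "object";  // unreachable; keeps the compiler satisfied
    }

    int main() {
      assert(TypeofResult(Tag::kNull) == "object");
      return 0;
    }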
+ load_address -= 2 * Assembler::kInstrSize; + DCHECK(Assembler::IsMovW(Memory::int32_at(load_address))); + DCHECK(Assembler::IsMovT( + Memory::int32_at(load_address + Assembler::kInstrSize))); } else if (Assembler::IsMovT(Memory::int32_at(load_address))) { + // This is a movw_movt immediate load. load_address -= Assembler::kInstrSize; - ASSERT(Assembler::IsMovW(Memory::int32_at(load_address))); + DCHECK(Assembler::IsMovW(Memory::int32_at(load_address))); } else { - ASSERT(Assembler::IsLdrPpImmediateOffset(Memory::int32_at(load_address))); + // This is a small constant pool lookup. + DCHECK(Assembler::IsLdrPpImmediateOffset(Memory::int32_at(load_address))); } return load_address; } @@ -4772,9 +4805,8 @@ void BackEdgeTable::PatchAt(Code* unoptimized_code, Address pc, BackEdgeState target_state, Code* replacement_code) { - static const int kInstrSize = Assembler::kInstrSize; Address pc_immediate_load_address = GetInterruptImmediateLoadAddress(pc); - Address branch_address = pc_immediate_load_address - kInstrSize; + Address branch_address = pc_immediate_load_address - Assembler::kInstrSize; CodePatcher patcher(branch_address, 1); switch (target_state) { case INTERRUPT: @@ -4782,14 +4814,19 @@ void BackEdgeTable::PatchAt(Code* unoptimized_code, // <decrement profiling counter> // bpl ok // ; load interrupt stub address into ip - either of: - // ldr ip, [pc/pp, <constant pool offset>] | movw ip, <immed low> - // | movt ip, <immed high> + // ; <small cp load> | <extended cp load> | <immediate load> + // ldr ip, [pc/pp, #imm] | movw ip, #imm | movw ip, #imm + // | movt ip, #imm> | movw ip, #imm + // | ldr ip, [pp, ip] // blx ip + // <reset profiling counter> // ok-label - // Calculate branch offet to the ok-label - this is the difference between - // the branch address and |pc| (which points at <blx ip>) plus one instr. 
- int branch_offset = pc + kInstrSize - branch_address; + // Calculate branch offset to the ok-label - this is the difference + // between the branch address and |pc| (which points at <blx ip>) plus + // kProfileCounterResetSequence instructions + int branch_offset = pc - Instruction::kPCReadOffset - branch_address + + kProfileCounterResetSequenceLength; patcher.masm()->b(branch_offset, pl); break; } @@ -4798,9 +4835,12 @@ void BackEdgeTable::PatchAt(Code* unoptimized_code, // <decrement profiling counter> // mov r0, r0 (NOP) // ; load on-stack replacement address into ip - either of: - // ldr ip, [pc/pp, <constant pool offset>] | movw ip, <immed low> - // | movt ip, <immed high> + // ; <small cp load> | <extended cp load> | <immediate load> + // ldr ip, [pc/pp, #imm] | movw ip, #imm | movw ip, #imm + // | movt ip, #imm> | movw ip, #imm + // | ldr ip, [pp, ip] // blx ip + // <reset profiling counter> // ok-label patcher.masm()->nop(); break; @@ -4819,28 +4859,27 @@ BackEdgeTable::BackEdgeState BackEdgeTable::GetBackEdgeState( Isolate* isolate, Code* unoptimized_code, Address pc) { - static const int kInstrSize = Assembler::kInstrSize; - ASSERT(Memory::int32_at(pc - kInstrSize) == kBlxIp); + DCHECK(Assembler::IsBlxIp(Memory::int32_at(pc - Assembler::kInstrSize))); Address pc_immediate_load_address = GetInterruptImmediateLoadAddress(pc); - Address branch_address = pc_immediate_load_address - kInstrSize; + Address branch_address = pc_immediate_load_address - Assembler::kInstrSize; Address interrupt_address = Assembler::target_address_at( pc_immediate_load_address, unoptimized_code); if (Assembler::IsBranch(Assembler::instr_at(branch_address))) { - ASSERT(interrupt_address == + DCHECK(interrupt_address == isolate->builtins()->InterruptCheck()->entry()); return INTERRUPT; } - ASSERT(Assembler::IsNop(Assembler::instr_at(branch_address))); + DCHECK(Assembler::IsNop(Assembler::instr_at(branch_address))); if (interrupt_address == isolate->builtins()->OnStackReplacement()->entry()) { return ON_STACK_REPLACEMENT; } - ASSERT(interrupt_address == + DCHECK(interrupt_address == isolate->builtins()->OsrAfterStackCheck()->entry()); return OSR_AFTER_STACK_CHECK; } diff --git a/deps/v8/src/arm/ic-arm.cc b/deps/v8/src/arm/ic-arm.cc index 4626e375165..d1add6d2ff2 100644 --- a/deps/v8/src/arm/ic-arm.cc +++ b/deps/v8/src/arm/ic-arm.cc @@ -2,17 +2,17 @@ // Use of this source code is governed by a BSD-style license that can be // found in the LICENSE file. -#include "v8.h" +#include "src/v8.h" #if V8_TARGET_ARCH_ARM -#include "assembler-arm.h" -#include "code-stubs.h" -#include "codegen.h" -#include "disasm.h" -#include "ic-inl.h" -#include "runtime.h" -#include "stub-cache.h" +#include "src/arm/assembler-arm.h" +#include "src/code-stubs.h" +#include "src/codegen.h" +#include "src/disasm.h" +#include "src/ic-inl.h" +#include "src/runtime.h" +#include "src/stub-cache.h" namespace v8 { namespace internal { @@ -39,48 +39,6 @@ static void GenerateGlobalInstanceTypeCheck(MacroAssembler* masm, } -// Generated code falls through if the receiver is a regular non-global -// JS object with slow properties and no interceptors. -static void GenerateNameDictionaryReceiverCheck(MacroAssembler* masm, - Register receiver, - Register elements, - Register t0, - Register t1, - Label* miss) { - // Register usage: - // receiver: holds the receiver on entry and is unchanged. - // elements: holds the property dictionary on fall through. - // Scratch registers: - // t0: used to holds the receiver map. 
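The BackEdgeTable::PatchAt hunk above reworks the branch-offset arithmetic so the patched `b` skips both the interrupt call and the profiling-counter reset sequence, using the ARM convention that reading PC observes the current instruction plus eight. A sketch of that computation under assumed constants (the real values live in Assembler and Instruction; these are stand-ins):

    #include <cstdint>
    #include <cstdio>

    constexpr intptr_t kInstrSizeSketch = 4;                        // ARM instructions are 4 bytes
    constexpr intptr_t kPCReadOffsetSketch = 2 * kInstrSizeSketch;  // reading PC sees pc + 8
    constexpr intptr_t kResetSeqLenSketch = 2 * kInstrSizeSketch;   // assumed reset-sequence length

    // Mirrors: pc - Instruction::kPCReadOffset - branch_address
    //          + kProfileCounterResetSequenceLength
    intptr_t BranchOffsetToOkLabel(intptr_t pc, intptr_t branch_address) {
      return pc - kPCReadOffsetSketch - branch_address + kResetSeqLenSketch;
    }

    int main() {
      intptr_t branch = 0x1000;                     // illustrative addresses only
      intptr_t pc = branch + 3 * kInstrSizeSketch;  // |pc| points at <blx ip>
      std::printf("offset = %ld\n", static_cast<long>(BranchOffsetToOkLabel(pc, branch)));
      return 0;
    }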
- // t1: used to holds the receiver instance type, receiver bit mask and - // elements map. - - // Check that the receiver isn't a smi. - __ JumpIfSmi(receiver, miss); - - // Check that the receiver is a valid JS object. - __ CompareObjectType(receiver, t0, t1, FIRST_SPEC_OBJECT_TYPE); - __ b(lt, miss); - - // If this assert fails, we have to check upper bound too. - STATIC_ASSERT(LAST_TYPE == LAST_SPEC_OBJECT_TYPE); - - GenerateGlobalInstanceTypeCheck(masm, t1, miss); - - // Check that the global object does not require access checks. - __ ldrb(t1, FieldMemOperand(t0, Map::kBitFieldOffset)); - __ tst(t1, Operand((1 << Map::kIsAccessCheckNeeded) | - (1 << Map::kHasNamedInterceptor))); - __ b(ne, miss); - - __ ldr(elements, FieldMemOperand(receiver, JSObject::kPropertiesOffset)); - __ ldr(t1, FieldMemOperand(elements, HeapObject::kMapOffset)); - __ LoadRoot(ip, Heap::kHashTableMapRootIndex); - __ cmp(t1, ip); - __ b(ne, miss); -} - - // Helper function used from LoadIC GenerateNormal. // // elements: Property dictionary. It is not clobbered if a jump to the miss @@ -211,7 +169,7 @@ static void GenerateKeyedLoadReceiverCheck(MacroAssembler* masm, // In the case that the object is a value-wrapper object, // we enter the runtime system to make sure that indexing into string // objects work as intended. - ASSERT(JS_OBJECT_TYPE > JS_VALUE_TYPE); + DCHECK(JS_OBJECT_TYPE > JS_VALUE_TYPE); __ ldrb(scratch, FieldMemOperand(map, Map::kInstanceTypeOffset)); __ cmp(scratch, Operand(JS_OBJECT_TYPE)); __ b(lt, slow); @@ -311,16 +269,17 @@ static void GenerateKeyNameCheck(MacroAssembler* masm, void LoadIC::GenerateMegamorphic(MacroAssembler* masm) { - // ----------- S t a t e ------------- - // -- r2 : name - // -- lr : return address - // -- r0 : receiver - // ----------------------------------- + // The return address is in lr. + Register receiver = ReceiverRegister(); + Register name = NameRegister(); + DCHECK(receiver.is(r1)); + DCHECK(name.is(r2)); // Probe the stub cache. - Code::Flags flags = Code::ComputeHandlerFlags(Code::LOAD_IC); + Code::Flags flags = Code::RemoveTypeAndHolderFromFlags( + Code::ComputeHandlerFlags(Code::LOAD_IC)); masm->isolate()->stub_cache()->GenerateProbe( - masm, flags, r0, r2, r3, r4, r5, r6); + masm, flags, receiver, name, r3, r4, r5, r6); // Cache miss: Jump to runtime. GenerateMiss(masm); @@ -328,37 +287,35 @@ void LoadIC::GenerateMegamorphic(MacroAssembler* masm) { void LoadIC::GenerateNormal(MacroAssembler* masm) { - // ----------- S t a t e ------------- - // -- r2 : name - // -- lr : return address - // -- r0 : receiver - // ----------------------------------- - Label miss; + Register dictionary = r0; + DCHECK(!dictionary.is(ReceiverRegister())); + DCHECK(!dictionary.is(NameRegister())); - GenerateNameDictionaryReceiverCheck(masm, r0, r1, r3, r4, &miss); + Label slow; - // r1: elements - GenerateDictionaryLoad(masm, &miss, r1, r2, r0, r3, r4); + __ ldr(dictionary, + FieldMemOperand(ReceiverRegister(), JSObject::kPropertiesOffset)); + GenerateDictionaryLoad(masm, &slow, dictionary, NameRegister(), r0, r3, r4); __ Ret(); - // Cache miss: Jump to runtime. - __ bind(&miss); - GenerateMiss(masm); + // Dictionary load failed, go slow (but don't miss). + __ bind(&slow); + GenerateRuntimeGetProperty(masm); } +// A register that isn't one of the parameters to the load ic. 
+static const Register LoadIC_TempRegister() { return r3; } + + void LoadIC::GenerateMiss(MacroAssembler* masm) { - // ----------- S t a t e ------------- - // -- r2 : name - // -- lr : return address - // -- r0 : receiver - // ----------------------------------- + // The return address is in lr. Isolate* isolate = masm->isolate(); __ IncrementCounter(isolate->counters()->load_miss(), 1, r3, r4); - __ mov(r3, r0); - __ Push(r3, r2); + __ mov(LoadIC_TempRegister(), ReceiverRegister()); + __ Push(LoadIC_TempRegister(), NameRegister()); // Perform tail call to the entry. ExternalReference ref = @@ -368,14 +325,10 @@ void LoadIC::GenerateMiss(MacroAssembler* masm) { void LoadIC::GenerateRuntimeGetProperty(MacroAssembler* masm) { - // ---------- S t a t e -------------- - // -- r2 : name - // -- lr : return address - // -- r0 : receiver - // ----------------------------------- + // The return address is in lr. - __ mov(r3, r0); - __ Push(r3, r2); + __ mov(LoadIC_TempRegister(), ReceiverRegister()); + __ Push(LoadIC_TempRegister(), NameRegister()); __ TailCallRuntime(Runtime::kGetProperty, 2, 1); } @@ -467,25 +420,26 @@ static MemOperand GenerateUnmappedArgumentsLookup(MacroAssembler* masm, void KeyedLoadIC::GenerateSloppyArguments(MacroAssembler* masm) { - // ---------- S t a t e -------------- - // -- lr : return address - // -- r0 : key - // -- r1 : receiver - // ----------------------------------- + // The return address is in lr. + Register receiver = ReceiverRegister(); + Register key = NameRegister(); + DCHECK(receiver.is(r1)); + DCHECK(key.is(r2)); + Label slow, notin; MemOperand mapped_location = - GenerateMappedArgumentsLookup(masm, r1, r0, r2, r3, r4, &notin, &slow); + GenerateMappedArgumentsLookup( + masm, receiver, key, r0, r3, r4, &notin, &slow); __ ldr(r0, mapped_location); __ Ret(); __ bind(&notin); - // The unmapped lookup expects that the parameter map is in r2. + // The unmapped lookup expects that the parameter map is in r0. MemOperand unmapped_location = - GenerateUnmappedArgumentsLookup(masm, r0, r2, r3, &slow); - __ ldr(r2, unmapped_location); + GenerateUnmappedArgumentsLookup(masm, key, r0, r3, &slow); + __ ldr(r0, unmapped_location); __ LoadRoot(r3, Heap::kTheHoleValueRootIndex); - __ cmp(r2, r3); + __ cmp(r0, r3); __ b(eq, &slow); - __ mov(r0, r2); __ Ret(); __ bind(&slow); GenerateMiss(masm); @@ -493,27 +447,28 @@ void KeyedLoadIC::GenerateSloppyArguments(MacroAssembler* masm) { void KeyedStoreIC::GenerateSloppyArguments(MacroAssembler* masm) { - // ---------- S t a t e -------------- - // -- r0 : value - // -- r1 : key - // -- r2 : receiver - // -- lr : return address - // ----------------------------------- + Register receiver = ReceiverRegister(); + Register key = NameRegister(); + Register value = ValueRegister(); + DCHECK(receiver.is(r1)); + DCHECK(key.is(r2)); + DCHECK(value.is(r0)); + Label slow, notin; - MemOperand mapped_location = - GenerateMappedArgumentsLookup(masm, r2, r1, r3, r4, r5, &notin, &slow); - __ str(r0, mapped_location); + MemOperand mapped_location = GenerateMappedArgumentsLookup( + masm, receiver, key, r3, r4, r5, &notin, &slow); + __ str(value, mapped_location); __ add(r6, r3, r5); - __ mov(r9, r0); + __ mov(r9, value); __ RecordWrite(r3, r6, r9, kLRHasNotBeenSaved, kDontSaveFPRegs); __ Ret(); __ bind(&notin); // The unmapped lookup expects that the parameter map is in r3.
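Both GenerateSloppyArguments rewrites above keep the same two-level scheme while moving to the named IC registers: a sloppy-mode arguments object first consults a parameter map that may alias an entry to a context slot, and falls back to the plain backing store when the map holds the hole. A data-structure sketch of that lookup with simplified types (V8 performs this on tagged heap objects in assembly):

    #include <cassert>
    #include <cstddef>
    #include <optional>
    #include <vector>

    struct SloppyArguments {
      // parameter_map[i] holds a context slot index, or nullopt for "the hole".
      std::vector<std::optional<std::size_t>> parameter_map;
      std::vector<int> context;        // slots aliased by mapped parameters
      std::vector<int> backing_store;  // unmapped elements

      int Load(std::size_t key) const {
        if (key < parameter_map.size() && parameter_map[key]) {
          return context[*parameter_map[key]];  // mapped location
        }
        return backing_store[key];              // unmapped location
      }

      void Store(std::size_t key, int value) {
        if (key < parameter_map.size() && parameter_map[key]) {
          context[*parameter_map[key]] = value;  // writes through to the context
        } else {
          backing_store[key] = value;
        }
      }
    };

    int main() {
      SloppyArguments args{{std::optional<std::size_t>(0), std::nullopt}, {42}, {0, 7}};
      assert(args.Load(0) == 42);  // aliased to the context slot
      assert(args.Load(1) == 7);   // falls through to the backing store
      args.Store(0, 99);
      assert(args.context[0] == 99);
      return 0;
    }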
MemOperand unmapped_location = - GenerateUnmappedArgumentsLookup(masm, r1, r3, r4, &slow); - __ str(r0, unmapped_location); + GenerateUnmappedArgumentsLookup(masm, key, r3, r4, &slow); + __ str(value, unmapped_location); __ add(r6, r3, r4); - __ mov(r9, r0); + __ mov(r9, value); __ RecordWrite(r3, r6, r9, kLRHasNotBeenSaved, kDontSaveFPRegs); __ Ret(); __ bind(&slow); @@ -522,16 +477,12 @@ void KeyedStoreIC::GenerateSloppyArguments(MacroAssembler* masm) { void KeyedLoadIC::GenerateMiss(MacroAssembler* masm) { - // ---------- S t a t e -------------- - // -- lr : return address - // -- r0 : key - // -- r1 : receiver - // ----------------------------------- + // The return address is in lr. Isolate* isolate = masm->isolate(); __ IncrementCounter(isolate->counters()->keyed_load_miss(), 1, r3, r4); - __ Push(r1, r0); + __ Push(ReceiverRegister(), NameRegister()); // Perform tail call to the entry. ExternalReference ref = @@ -541,30 +492,51 @@ void KeyedLoadIC::GenerateMiss(MacroAssembler* masm) { } +// IC register specifications +const Register LoadIC::ReceiverRegister() { return r1; } +const Register LoadIC::NameRegister() { return r2; } + + +const Register LoadIC::SlotRegister() { + DCHECK(FLAG_vector_ics); + return r0; +} + + +const Register LoadIC::VectorRegister() { + DCHECK(FLAG_vector_ics); + return r3; +} + + +const Register StoreIC::ReceiverRegister() { return r1; } +const Register StoreIC::NameRegister() { return r2; } +const Register StoreIC::ValueRegister() { return r0; } + + +const Register KeyedStoreIC::MapRegister() { + return r3; +} + + void KeyedLoadIC::GenerateRuntimeGetProperty(MacroAssembler* masm) { - // ---------- S t a t e -------------- - // -- lr : return address - // -- r0 : key - // -- r1 : receiver - // ----------------------------------- + // The return address is in lr. - __ Push(r1, r0); + __ Push(ReceiverRegister(), NameRegister()); __ TailCallRuntime(Runtime::kKeyedGetProperty, 2, 1); } void KeyedLoadIC::GenerateGeneric(MacroAssembler* masm) { - // ---------- S t a t e -------------- - // -- lr : return address - // -- r0 : key - // -- r1 : receiver - // ----------------------------------- + // The return address is in lr. Label slow, check_name, index_smi, index_name, property_array_property; Label probe_dictionary, check_number_dictionary; - Register key = r0; - Register receiver = r1; + Register key = NameRegister(); + Register receiver = ReceiverRegister(); + DCHECK(key.is(r2)); + DCHECK(receiver.is(r1)); Isolate* isolate = masm->isolate(); @@ -575,14 +547,14 @@ void KeyedLoadIC::GenerateGeneric(MacroAssembler* masm) { // where a numeric string is converted to a smi. GenerateKeyedLoadReceiverCheck( - masm, receiver, r2, r3, Map::kHasIndexedInterceptor, &slow); + masm, receiver, r0, r3, Map::kHasIndexedInterceptor, &slow); // Check the receiver's map to see if it has fast elements. - __ CheckFastElements(r2, r3, &check_number_dictionary); + __ CheckFastElements(r0, r3, &check_number_dictionary); GenerateFastArrayLoad( - masm, receiver, key, r4, r3, r2, r0, NULL, &slow); - __ IncrementCounter(isolate->counters()->keyed_load_generic_smi(), 1, r2, r3); + masm, receiver, key, r0, r3, r4, r0, NULL, &slow); + __ IncrementCounter(isolate->counters()->keyed_load_generic_smi(), 1, r4, r3); __ Ret(); __ bind(&check_number_dictionary); @@ -590,31 +562,30 @@ void KeyedLoadIC::GenerateGeneric(MacroAssembler* masm) { __ ldr(r3, FieldMemOperand(r4, JSObject::kMapOffset)); // Check whether the elements is a number dictionary. 
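The "IC register specifications" block above is the heart of this part of the patch: each IC now publishes its fixed input registers (receiver in r1, name in r2, value in r0 for stores) through accessors, and call sites query those instead of naming raw registers. A sketch of the idea behind the convention, with a stand-in Register type rather than V8's:

    #include <cassert>

    struct Register { int code; };
    constexpr bool operator==(Register a, Register b) { return a.code == b.code; }

    constexpr Register r1{1};
    constexpr Register r2{2};

    // Fixed calling convention for the load IC, per the block above:
    // receiver in r1, name in r2.
    namespace load_ic {
    inline Register ReceiverRegister() { return r1; }
    inline Register NameRegister() { return r2; }
    }

    int main() {
      // Call sites can assert their register assumptions instead of silently
      // relying on them, which is what the DCHECK(receiver.is(r1)) lines do.
      assert(load_ic::ReceiverRegister() == r1);
      assert(load_ic::NameRegister() == r2);
      return 0;
    }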
- // r0: key // r3: elements map // r4: elements __ LoadRoot(ip, Heap::kHashTableMapRootIndex); __ cmp(r3, ip); __ b(ne, &slow); - __ SmiUntag(r2, r0); - __ LoadFromNumberDictionary(&slow, r4, r0, r0, r2, r3, r5); + __ SmiUntag(r0, key); + __ LoadFromNumberDictionary(&slow, r4, key, r0, r0, r3, r5); __ Ret(); - // Slow case, key and receiver still in r0 and r1. + // Slow case, key and receiver still in r2 and r1. __ bind(&slow); __ IncrementCounter(isolate->counters()->keyed_load_generic_slow(), - 1, r2, r3); + 1, r4, r3); GenerateRuntimeGetProperty(masm); __ bind(&check_name); - GenerateKeyNameCheck(masm, key, r2, r3, &index_name, &slow); + GenerateKeyNameCheck(masm, key, r0, r3, &index_name, &slow); GenerateKeyedLoadReceiverCheck( - masm, receiver, r2, r3, Map::kHasNamedInterceptor, &slow); + masm, receiver, r0, r3, Map::kHasNamedInterceptor, &slow); // If the receiver is a fast-case object, check the keyed lookup // cache. Otherwise probe the dictionary. - __ ldr(r3, FieldMemOperand(r1, JSObject::kPropertiesOffset)); + __ ldr(r3, FieldMemOperand(receiver, JSObject::kPropertiesOffset)); __ ldr(r4, FieldMemOperand(r3, HeapObject::kMapOffset)); __ LoadRoot(ip, Heap::kHashTableMapRootIndex); __ cmp(r4, ip); @@ -622,9 +593,9 @@ void KeyedLoadIC::GenerateGeneric(MacroAssembler* masm) { // Load the map of the receiver, compute the keyed lookup cache hash // based on 32 bits of the map pointer and the name hash. - __ ldr(r2, FieldMemOperand(r1, HeapObject::kMapOffset)); - __ mov(r3, Operand(r2, ASR, KeyedLookupCache::kMapHashShift)); - __ ldr(r4, FieldMemOperand(r0, Name::kHashFieldOffset)); + __ ldr(r0, FieldMemOperand(receiver, HeapObject::kMapOffset)); + __ mov(r3, Operand(r0, ASR, KeyedLookupCache::kMapHashShift)); + __ ldr(r4, FieldMemOperand(key, Name::kHashFieldOffset)); __ eor(r3, r3, Operand(r4, ASR, Name::kHashShift)); int mask = KeyedLookupCache::kCapacityMask & KeyedLookupCache::kHashMask; __ And(r3, r3, Operand(mask)); @@ -644,26 +615,24 @@ void KeyedLoadIC::GenerateGeneric(MacroAssembler* masm) { Label try_next_entry; // Load map and move r4 to next entry. __ ldr(r5, MemOperand(r4, kPointerSize * 2, PostIndex)); - __ cmp(r2, r5); + __ cmp(r0, r5); __ b(ne, &try_next_entry); __ ldr(r5, MemOperand(r4, -kPointerSize)); // Load name - __ cmp(r0, r5); + __ cmp(key, r5); __ b(eq, &hit_on_nth_entry[i]); __ bind(&try_next_entry); } // Last entry: Load map and move r4 to name. __ ldr(r5, MemOperand(r4, kPointerSize, PostIndex)); - __ cmp(r2, r5); + __ cmp(r0, r5); __ b(ne, &slow); __ ldr(r5, MemOperand(r4)); - __ cmp(r0, r5); + __ cmp(key, r5); __ b(ne, &slow); // Get field offset. - // r0 : key - // r1 : receiver - // r2 : receiver's map + // r0 : receiver's map // r3 : lookup cache index ExternalReference cache_field_offsets = ExternalReference::keyed_lookup_cache_field_offsets(isolate); @@ -676,7 +645,7 @@ void KeyedLoadIC::GenerateGeneric(MacroAssembler* masm) { __ add(r3, r3, Operand(i)); } __ ldr(r5, MemOperand(r4, r3, LSL, kPointerSizeLog2)); - __ ldrb(r6, FieldMemOperand(r2, Map::kInObjectPropertiesOffset)); + __ ldrb(r6, FieldMemOperand(r0, Map::kInObjectPropertiesOffset)); __ sub(r5, r5, r6, SetCC); __ b(ge, &property_array_property); if (i != 0) { @@ -686,36 +655,34 @@ void KeyedLoadIC::GenerateGeneric(MacroAssembler* masm) { // Load in-object property. __ bind(&load_in_object_property); - __ ldrb(r6, FieldMemOperand(r2, Map::kInstanceSizeOffset)); + __ ldrb(r6, FieldMemOperand(r0, Map::kInstanceSizeOffset)); __ add(r6, r6, r5); // Index from start of object. 
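The re-registered probe sequence above still derives the keyed lookup cache index the same way: mix bits of the receiver's map pointer with the name's hash field, then mask to the cache capacity. In scalar form, with assumed shift and mask constants standing in for KeyedLookupCache's and Name's real values:

    #include <cstdint>
    #include <cstdio>

    constexpr uint32_t kMapHashShiftSketch = 5;        // assumed
    constexpr uint32_t kNameHashShiftSketch = 2;       // assumed
    constexpr uint32_t kCapacityMaskSketch = 128 - 1;  // assumed power-of-two capacity

    // Mirrors: mov r3, (map >> kMapHashShift); eor r3, r3, (hash >> kHashShift);
    //          and r3, r3, #mask
    uint32_t KeyedLookupCacheIndex(uint32_t map_word, uint32_t name_hash_field) {
      return ((map_word >> kMapHashShiftSketch) ^
              (name_hash_field >> kNameHashShiftSketch)) & kCapacityMaskSketch;
    }

    int main() {
      std::printf("index = %u\n", KeyedLookupCacheIndex(0xdeadbeefu, 0x1234u));
      return 0;
    }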
- __ sub(r1, r1, Operand(kHeapObjectTag)); // Remove the heap tag. - __ ldr(r0, MemOperand(r1, r6, LSL, kPointerSizeLog2)); + __ sub(receiver, receiver, Operand(kHeapObjectTag)); // Remove the heap tag. + __ ldr(r0, MemOperand(receiver, r6, LSL, kPointerSizeLog2)); __ IncrementCounter(isolate->counters()->keyed_load_generic_lookup_cache(), - 1, r2, r3); + 1, r4, r3); __ Ret(); // Load property array property. __ bind(&property_array_property); - __ ldr(r1, FieldMemOperand(r1, JSObject::kPropertiesOffset)); - __ add(r1, r1, Operand(FixedArray::kHeaderSize - kHeapObjectTag)); - __ ldr(r0, MemOperand(r1, r5, LSL, kPointerSizeLog2)); + __ ldr(receiver, FieldMemOperand(receiver, JSObject::kPropertiesOffset)); + __ add(receiver, receiver, Operand(FixedArray::kHeaderSize - kHeapObjectTag)); + __ ldr(r0, MemOperand(receiver, r5, LSL, kPointerSizeLog2)); __ IncrementCounter(isolate->counters()->keyed_load_generic_lookup_cache(), - 1, r2, r3); + 1, r4, r3); __ Ret(); // Do a quick inline probe of the receiver's dictionary, if it // exists. __ bind(&probe_dictionary); - // r1: receiver - // r0: key // r3: elements - __ ldr(r2, FieldMemOperand(r1, HeapObject::kMapOffset)); - __ ldrb(r2, FieldMemOperand(r2, Map::kInstanceTypeOffset)); - GenerateGlobalInstanceTypeCheck(masm, r2, &slow); + __ ldr(r0, FieldMemOperand(receiver, HeapObject::kMapOffset)); + __ ldrb(r0, FieldMemOperand(r0, Map::kInstanceTypeOffset)); + GenerateGlobalInstanceTypeCheck(masm, r0, &slow); // Load the property to r0. - GenerateDictionaryLoad(masm, &slow, r3, r0, r0, r2, r4); + GenerateDictionaryLoad(masm, &slow, r3, key, r0, r5, r4); __ IncrementCounter( - isolate->counters()->keyed_load_generic_symbol(), 1, r2, r3); + isolate->counters()->keyed_load_generic_symbol(), 1, r4, r3); __ Ret(); __ bind(&index_name); @@ -726,17 +693,14 @@ void KeyedLoadIC::GenerateGeneric(MacroAssembler* masm) { void KeyedLoadIC::GenerateString(MacroAssembler* masm) { - // ---------- S t a t e -------------- - // -- lr : return address - // -- r0 : key (index) - // -- r1 : receiver - // ----------------------------------- + // Return address is in lr. Label miss; - Register receiver = r1; - Register index = r0; + Register receiver = ReceiverRegister(); + Register index = NameRegister(); Register scratch = r3; Register result = r0; + DCHECK(!scratch.is(receiver) && !scratch.is(index)); StringCharAtGenerator char_at_generator(receiver, index, @@ -758,39 +722,41 @@ void KeyedLoadIC::GenerateString(MacroAssembler* masm) { void KeyedLoadIC::GenerateIndexedInterceptor(MacroAssembler* masm) { - // ---------- S t a t e -------------- - // -- lr : return address - // -- r0 : key - // -- r1 : receiver - // ----------------------------------- + // Return address is in lr. Label slow; + Register receiver = ReceiverRegister(); + Register key = NameRegister(); + Register scratch1 = r3; + Register scratch2 = r4; + DCHECK(!scratch1.is(receiver) && !scratch1.is(key)); + DCHECK(!scratch2.is(receiver) && !scratch2.is(key)); + // Check that the receiver isn't a smi. - __ JumpIfSmi(r1, &slow); + __ JumpIfSmi(receiver, &slow); // Check that the key is an array index, that is Uint32. - __ NonNegativeSmiTst(r0); + __ NonNegativeSmiTst(key); __ b(ne, &slow); // Get the map of the receiver. - __ ldr(r2, FieldMemOperand(r1, HeapObject::kMapOffset)); + __ ldr(scratch1, FieldMemOperand(receiver, HeapObject::kMapOffset)); // Check that it has indexed interceptor and access checks // are not enabled for this object. 
- __ ldrb(r3, FieldMemOperand(r2, Map::kBitFieldOffset)); - __ and_(r3, r3, Operand(kSlowCaseBitFieldMask)); - __ cmp(r3, Operand(1 << Map::kHasIndexedInterceptor)); + __ ldrb(scratch2, FieldMemOperand(scratch1, Map::kBitFieldOffset)); + __ and_(scratch2, scratch2, Operand(kSlowCaseBitFieldMask)); + __ cmp(scratch2, Operand(1 << Map::kHasIndexedInterceptor)); __ b(ne, &slow); // Everything is fine, call runtime. - __ Push(r1, r0); // Receiver, key. + __ Push(receiver, key); // Receiver, key. // Perform tail call to the entry. __ TailCallExternalReference( - ExternalReference(IC_Utility(kKeyedLoadPropertyWithInterceptor), + ExternalReference(IC_Utility(kLoadElementWithInterceptor), masm->isolate()), - 2, - 1); + 2, 1); __ bind(&slow); GenerateMiss(masm); @@ -798,15 +764,8 @@ void KeyedLoadIC::GenerateIndexedInterceptor(MacroAssembler* masm) { void KeyedStoreIC::GenerateMiss(MacroAssembler* masm) { - // ---------- S t a t e -------------- - // -- r0 : value - // -- r1 : key - // -- r2 : receiver - // -- lr : return address - // ----------------------------------- - // Push receiver, key and value for runtime call. - __ Push(r2, r1, r0); + __ Push(ReceiverRegister(), NameRegister(), ValueRegister()); ExternalReference ref = ExternalReference(IC_Utility(kKeyedStoreIC_Miss), masm->isolate()); @@ -815,15 +774,8 @@ void KeyedStoreIC::GenerateMiss(MacroAssembler* masm) { void StoreIC::GenerateSlow(MacroAssembler* masm) { - // ---------- S t a t e -------------- - // -- r0 : value - // -- r2 : key - // -- r1 : receiver - // -- lr : return address - // ----------------------------------- - // Push receiver, key and value for runtime call. - __ Push(r1, r2, r0); + __ Push(ReceiverRegister(), NameRegister(), ValueRegister()); // The slow case calls into the runtime to complete the store without causing // an IC miss that would otherwise cause a transition to the generic stub. @@ -834,15 +786,8 @@ void StoreIC::GenerateSlow(MacroAssembler* masm) { void KeyedStoreIC::GenerateSlow(MacroAssembler* masm) { - // ---------- S t a t e -------------- - // -- r0 : value - // -- r1 : key - // -- r2 : receiver - // -- lr : return address - // ----------------------------------- - // Push receiver, key and value for runtime call. - __ Push(r2, r1, r0); + __ Push(ReceiverRegister(), NameRegister(), ValueRegister()); // The slow case calls into the runtime to complete the store without causing // an IC miss that would otherwise cause a transition to the generic stub. @@ -854,21 +799,13 @@ void KeyedStoreIC::GenerateSlow(MacroAssembler* masm) { void KeyedStoreIC::GenerateRuntimeSetProperty(MacroAssembler* masm, StrictMode strict_mode) { - // ---------- S t a t e -------------- - // -- r0 : value - // -- r1 : key - // -- r2 : receiver - // -- lr : return address - // ----------------------------------- - // Push receiver, key and value for runtime call. - __ Push(r2, r1, r0); + __ Push(ReceiverRegister(), NameRegister(), ValueRegister()); - __ mov(r1, Operand(Smi::FromInt(NONE))); // PropertyAttributes __ mov(r0, Operand(Smi::FromInt(strict_mode))); // Strict mode. 
- __ Push(r1, r0); + __ Push(r0); - __ TailCallRuntime(Runtime::kSetProperty, 5, 1); + __ TailCallRuntime(Runtime::kSetProperty, 4, 1); } @@ -998,10 +935,10 @@ static void KeyedStoreGenerateGenericHelper( receiver_map, r4, slow); - ASSERT(receiver_map.is(r3)); // Transition code expects map in r3 AllocationSiteMode mode = AllocationSite::GetMode(FAST_SMI_ELEMENTS, FAST_DOUBLE_ELEMENTS); - ElementsTransitionGenerator::GenerateSmiToDouble(masm, mode, slow); + ElementsTransitionGenerator::GenerateSmiToDouble( + masm, receiver, key, value, receiver_map, mode, slow); __ ldr(elements, FieldMemOperand(receiver, JSObject::kElementsOffset)); __ jmp(&fast_double_without_map_check); @@ -1012,10 +949,9 @@ static void KeyedStoreGenerateGenericHelper( receiver_map, r4, slow); - ASSERT(receiver_map.is(r3)); // Transition code expects map in r3 mode = AllocationSite::GetMode(FAST_SMI_ELEMENTS, FAST_ELEMENTS); - ElementsTransitionGenerator::GenerateMapChangeElementsTransition(masm, mode, - slow); + ElementsTransitionGenerator::GenerateMapChangeElementsTransition( + masm, receiver, key, value, receiver_map, mode, slow); __ ldr(elements, FieldMemOperand(receiver, JSObject::kElementsOffset)); __ jmp(&finish_object_store); @@ -1028,9 +964,9 @@ static void KeyedStoreGenerateGenericHelper( receiver_map, r4, slow); - ASSERT(receiver_map.is(r3)); // Transition code expects map in r3 mode = AllocationSite::GetMode(FAST_DOUBLE_ELEMENTS, FAST_ELEMENTS); - ElementsTransitionGenerator::GenerateDoubleToObject(masm, mode, slow); + ElementsTransitionGenerator::GenerateDoubleToObject( + masm, receiver, key, value, receiver_map, mode, slow); __ ldr(elements, FieldMemOperand(receiver, JSObject::kElementsOffset)); __ jmp(&finish_object_store); } @@ -1049,9 +985,12 @@ void KeyedStoreIC::GenerateGeneric(MacroAssembler* masm, Label array, extra, check_if_double_array; // Register usage. - Register value = r0; - Register key = r1; - Register receiver = r2; + Register value = ValueRegister(); + Register key = NameRegister(); + Register receiver = ReceiverRegister(); + DCHECK(receiver.is(r1)); + DCHECK(key.is(r2)); + DCHECK(value.is(r0)); Register receiver_map = r3; Register elements_map = r6; Register elements = r9; // Elements array of the receiver. @@ -1137,18 +1076,18 @@ void KeyedStoreIC::GenerateGeneric(MacroAssembler* masm, void StoreIC::GenerateMegamorphic(MacroAssembler* masm) { - // ----------- S t a t e ------------- - // -- r0 : value - // -- r1 : receiver - // -- r2 : name - // -- lr : return address - // ----------------------------------- + Register receiver = ReceiverRegister(); + Register name = NameRegister(); + DCHECK(receiver.is(r1)); + DCHECK(name.is(r2)); + DCHECK(ValueRegister().is(r0)); // Get the receiver from the stack and probe the stub cache. - Code::Flags flags = Code::ComputeHandlerFlags(Code::STORE_IC); + Code::Flags flags = Code::RemoveTypeAndHolderFromFlags( + Code::ComputeHandlerFlags(Code::STORE_IC)); masm->isolate()->stub_cache()->GenerateProbe( - masm, flags, r1, r2, r3, r4, r5, r6); + masm, flags, receiver, name, r3, r4, r5, r6); // Cache miss: Jump to runtime. 
GenerateMiss(masm); @@ -1156,14 +1095,7 @@ void StoreIC::GenerateMegamorphic(MacroAssembler* masm) { void StoreIC::GenerateMiss(MacroAssembler* masm) { - // ----------- S t a t e ------------- - // -- r0 : value - // -- r1 : receiver - // -- r2 : name - // -- lr : return address - // ----------------------------------- - - __ Push(r1, r2, r0); + __ Push(ReceiverRegister(), NameRegister(), ValueRegister()); // Perform tail call to the entry. ExternalReference ref = @@ -1173,17 +1105,18 @@ void StoreIC::GenerateMiss(MacroAssembler* masm) { void StoreIC::GenerateNormal(MacroAssembler* masm) { - // ----------- S t a t e ------------- - // -- r0 : value - // -- r1 : receiver - // -- r2 : name - // -- lr : return address - // ----------------------------------- Label miss; + Register receiver = ReceiverRegister(); + Register name = NameRegister(); + Register value = ValueRegister(); + Register dictionary = r3; + DCHECK(receiver.is(r1)); + DCHECK(name.is(r2)); + DCHECK(value.is(r0)); - GenerateNameDictionaryReceiverCheck(masm, r1, r3, r4, r5, &miss); + __ ldr(dictionary, FieldMemOperand(receiver, JSObject::kPropertiesOffset)); - GenerateDictionaryStore(masm, &miss, r3, r2, r0, r4, r5); + GenerateDictionaryStore(masm, &miss, dictionary, name, value, r4, r5); Counters* counters = masm->isolate()->counters(); __ IncrementCounter(counters->store_normal_hit(), 1, r4, r5); @@ -1197,21 +1130,13 @@ void StoreIC::GenerateNormal(MacroAssembler* masm) { void StoreIC::GenerateRuntimeSetProperty(MacroAssembler* masm, StrictMode strict_mode) { - // ----------- S t a t e ------------- - // -- r0 : value - // -- r1 : receiver - // -- r2 : name - // -- lr : return address - // ----------------------------------- - - __ Push(r1, r2, r0); + __ Push(ReceiverRegister(), NameRegister(), ValueRegister()); - __ mov(r1, Operand(Smi::FromInt(NONE))); // PropertyAttributes __ mov(r0, Operand(Smi::FromInt(strict_mode))); - __ Push(r1, r0); + __ Push(r0); // Do tail-call to runtime routine. - __ TailCallRuntime(Runtime::kSetProperty, 5, 1); + __ TailCallRuntime(Runtime::kSetProperty, 4, 1); } @@ -1293,20 +1218,20 @@ void PatchInlinedSmiCode(Address address, InlinedSmiCheck check) { CodePatcher patcher(patch_address, 2); Register reg = Assembler::GetRn(instr_at_patch); if (check == ENABLE_INLINED_SMI_CHECK) { - ASSERT(Assembler::IsCmpRegister(instr_at_patch)); - ASSERT_EQ(Assembler::GetRn(instr_at_patch).code(), + DCHECK(Assembler::IsCmpRegister(instr_at_patch)); + DCHECK_EQ(Assembler::GetRn(instr_at_patch).code(), Assembler::GetRm(instr_at_patch).code()); patcher.masm()->tst(reg, Operand(kSmiTagMask)); } else { - ASSERT(check == DISABLE_INLINED_SMI_CHECK); - ASSERT(Assembler::IsTstImmediate(instr_at_patch)); + DCHECK(check == DISABLE_INLINED_SMI_CHECK); + DCHECK(Assembler::IsTstImmediate(instr_at_patch)); patcher.masm()->cmp(reg, reg); } - ASSERT(Assembler::IsBranch(branch_instr)); + DCHECK(Assembler::IsBranch(branch_instr)); if (Assembler::GetCondition(branch_instr) == eq) { patcher.EmitCondition(ne); } else { - ASSERT(Assembler::GetCondition(branch_instr) == ne); + DCHECK(Assembler::GetCondition(branch_instr) == ne); patcher.EmitCondition(eq); } } diff --git a/deps/v8/src/arm/lithium-arm.cc b/deps/v8/src/arm/lithium-arm.cc index b26a88217f3..6b86088ee7c 100644 --- a/deps/v8/src/arm/lithium-arm.cc +++ b/deps/v8/src/arm/lithium-arm.cc @@ -2,12 +2,11 @@ // Use of this source code is governed by a BSD-style license that can be // found in the LICENSE file. 
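PatchInlinedSmiCode above (only its assertions changed in this patch) toggles an inlined Smi check between two states: enabled is `tst reg, #kSmiTagMask` followed by a conditional branch, disabled is the always-equal `cmp reg, reg` with the branch condition inverted to compensate. A behavioral model of that toggle, ignoring real instruction encodings:

    #include <cassert>

    enum class Cond { eq, ne };

    struct InlinedSmiSite {
      bool check_enabled;  // true: tst reg, #kSmiTagMask; false: cmp reg, reg
      Cond branch_cond;
    };

    void ToggleInlinedSmiCheck(InlinedSmiSite* site) {
      site->check_enabled = !site->check_enabled;
      // Swapping the compare also requires flipping the branch, as the
      // patcher does with EmitCondition(ne) / EmitCondition(eq).
      site->branch_cond = (site->branch_cond == Cond::eq) ? Cond::ne : Cond::eq;
    }

    int main() {
      InlinedSmiSite site{false, Cond::eq};
      ToggleInlinedSmiCheck(&site);  // ENABLE_INLINED_SMI_CHECK analogue
      assert(site.check_enabled && site.branch_cond == Cond::ne);
      ToggleInlinedSmiCheck(&site);  // DISABLE_INLINED_SMI_CHECK analogue
      assert(!site.check_enabled && site.branch_cond == Cond::eq);
      return 0;
    }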
-#include "v8.h" +#include "src/v8.h" -#include "lithium-allocator-inl.h" -#include "arm/lithium-arm.h" -#include "arm/lithium-codegen-arm.h" -#include "hydrogen-osr.h" +#include "src/arm/lithium-codegen-arm.h" +#include "src/hydrogen-osr.h" +#include "src/lithium-inl.h" namespace v8 { namespace internal { @@ -25,17 +24,17 @@ void LInstruction::VerifyCall() { // outputs because all registers are blocked by the calling convention. // Inputs operands must use a fixed register or use-at-start policy or // a non-register policy. - ASSERT(Output() == NULL || + DCHECK(Output() == NULL || LUnallocated::cast(Output())->HasFixedPolicy() || !LUnallocated::cast(Output())->HasRegisterPolicy()); for (UseIterator it(this); !it.Done(); it.Advance()) { LUnallocated* operand = LUnallocated::cast(it.Current()); - ASSERT(operand->HasFixedPolicy() || + DCHECK(operand->HasFixedPolicy() || operand->IsUsedAtStart()); } for (TempIterator it(this); !it.Done(); it.Advance()) { LUnallocated* operand = LUnallocated::cast(it.Current()); - ASSERT(operand->HasFixedPolicy() ||!operand->HasRegisterPolicy()); + DCHECK(operand->HasFixedPolicy() ||!operand->HasRegisterPolicy()); } } #endif @@ -317,8 +316,9 @@ void LAccessArgumentsAt::PrintDataTo(StringStream* stream) { void LStoreNamedField::PrintDataTo(StringStream* stream) { object()->PrintTo(stream); - hydrogen()->access().PrintTo(stream); - stream->Add(" <- "); + OStringStream os; + os << hydrogen()->access() << " <- "; + stream->Add(os.c_str()); value()->PrintTo(stream); } @@ -337,7 +337,7 @@ void LLoadKeyed::PrintDataTo(StringStream* stream) { stream->Add("["); key()->PrintTo(stream); if (hydrogen()->IsDehoisted()) { - stream->Add(" + %d]", additional_index()); + stream->Add(" + %d]", base_offset()); } else { stream->Add("]"); } @@ -349,13 +349,13 @@ void LStoreKeyed::PrintDataTo(StringStream* stream) { stream->Add("["); key()->PrintTo(stream); if (hydrogen()->IsDehoisted()) { - stream->Add(" + %d] <-", additional_index()); + stream->Add(" + %d] <-", base_offset()); } else { stream->Add("] <- "); } if (value() == NULL) { - ASSERT(hydrogen()->IsConstantHoleStore() && + DCHECK(hydrogen()->IsConstantHoleStore() && hydrogen()->value()->representation().IsDouble()); stream->Add("<the hole(nan)>"); } else { @@ -391,14 +391,14 @@ LOperand* LPlatformChunk::GetNextSpillSlot(RegisterKind kind) { if (kind == DOUBLE_REGISTERS) { return LDoubleStackSlot::Create(index, zone()); } else { - ASSERT(kind == GENERAL_REGISTERS); + DCHECK(kind == GENERAL_REGISTERS); return LStackSlot::Create(index, zone()); } } LPlatformChunk* LChunkBuilder::Build() { - ASSERT(is_unused()); + DCHECK(is_unused()); chunk_ = new(zone()) LPlatformChunk(info(), graph()); LPhase phase("L_Building chunk", chunk_); status_ = BUILDING; @@ -609,7 +609,7 @@ LInstruction* LChunkBuilder::MarkAsCall(LInstruction* instr, LInstruction* LChunkBuilder::AssignPointerMap(LInstruction* instr) { - ASSERT(!instr->HasPointerMap()); + DCHECK(!instr->HasPointerMap()); instr->set_pointer_map(new(zone()) LPointerMap(zone())); return instr; } @@ -628,16 +628,29 @@ LUnallocated* LChunkBuilder::TempRegister() { } +LUnallocated* LChunkBuilder::TempDoubleRegister() { + LUnallocated* operand = + new(zone()) LUnallocated(LUnallocated::MUST_HAVE_DOUBLE_REGISTER); + int vreg = allocator_->GetVirtualRegister(); + if (!allocator_->AllocationOk()) { + Abort(kOutOfVirtualRegistersWhileTryingToAllocateTempRegister); + vreg = 0; + } + operand->set_virtual_register(vreg); + return operand; +} + + LOperand* LChunkBuilder::FixedTemp(Register reg) { 
LUnallocated* operand = ToUnallocated(reg); - ASSERT(operand->HasFixedPolicy()); + DCHECK(operand->HasFixedPolicy()); return operand; } LOperand* LChunkBuilder::FixedTemp(DoubleRegister reg) { LUnallocated* operand = ToUnallocated(reg); - ASSERT(operand->HasFixedPolicy()); + DCHECK(operand->HasFixedPolicy()); return operand; } @@ -666,8 +679,8 @@ LInstruction* LChunkBuilder::DoDeoptimize(HDeoptimize* instr) { LInstruction* LChunkBuilder::DoShift(Token::Value op, HBitwiseBinaryOperation* instr) { if (instr->representation().IsSmiOrInteger32()) { - ASSERT(instr->left()->representation().Equals(instr->representation())); - ASSERT(instr->right()->representation().Equals(instr->representation())); + DCHECK(instr->left()->representation().Equals(instr->representation())); + DCHECK(instr->right()->representation().Equals(instr->representation())); LOperand* left = UseRegisterAtStart(instr->left()); HValue* right_value = instr->right(); @@ -708,9 +721,9 @@ LInstruction* LChunkBuilder::DoShift(Token::Value op, LInstruction* LChunkBuilder::DoArithmeticD(Token::Value op, HArithmeticBinaryOperation* instr) { - ASSERT(instr->representation().IsDouble()); - ASSERT(instr->left()->representation().IsDouble()); - ASSERT(instr->right()->representation().IsDouble()); + DCHECK(instr->representation().IsDouble()); + DCHECK(instr->left()->representation().IsDouble()); + DCHECK(instr->right()->representation().IsDouble()); if (op == Token::MOD) { LOperand* left = UseFixedDouble(instr->left(), d0); LOperand* right = UseFixedDouble(instr->right(), d1); @@ -729,8 +742,8 @@ LInstruction* LChunkBuilder::DoArithmeticT(Token::Value op, HBinaryOperation* instr) { HValue* left = instr->left(); HValue* right = instr->right(); - ASSERT(left->representation().IsTagged()); - ASSERT(right->representation().IsTagged()); + DCHECK(left->representation().IsTagged()); + DCHECK(right->representation().IsTagged()); LOperand* context = UseFixed(instr->context(), cp); LOperand* left_operand = UseFixed(left, r1); LOperand* right_operand = UseFixed(right, r0); @@ -741,7 +754,7 @@ LInstruction* LChunkBuilder::DoArithmeticT(Token::Value op, void LChunkBuilder::DoBasicBlock(HBasicBlock* block, HBasicBlock* next_block) { - ASSERT(is_building()); + DCHECK(is_building()); current_block_ = block; next_block_ = next_block; if (block->IsStartBlock()) { @@ -750,13 +763,13 @@ void LChunkBuilder::DoBasicBlock(HBasicBlock* block, HBasicBlock* next_block) { } else if (block->predecessors()->length() == 1) { // We have a single predecessor => copy environment and outgoing // argument count from the predecessor. - ASSERT(block->phis()->length() == 0); + DCHECK(block->phis()->length() == 0); HBasicBlock* pred = block->predecessors()->at(0); HEnvironment* last_environment = pred->last_environment(); - ASSERT(last_environment != NULL); + DCHECK(last_environment != NULL); // Only copy the environment, if it is later used again. if (pred->end()->SecondSuccessor() == NULL) { - ASSERT(pred->end()->FirstSuccessor() == block); + DCHECK(pred->end()->FirstSuccessor() == block); } else { if (pred->end()->FirstSuccessor()->block_id() > block->block_id() || pred->end()->SecondSuccessor()->block_id() > block->block_id()) { @@ -764,7 +777,7 @@ void LChunkBuilder::DoBasicBlock(HBasicBlock* block, HBasicBlock* next_block) { } } block->UpdateEnvironment(last_environment); - ASSERT(pred->argument_count() >= 0); + DCHECK(pred->argument_count() >= 0); argument_count_ = pred->argument_count(); } else { // We are at a state join => process phis. 
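The TempDoubleRegister helper added in the lithium-arm.cc hunks above follows the established TempRegister pattern: ask the allocator for a fresh virtual register and degrade to vreg 0 after flagging an abort when the supply runs out; DoMathRound later in this file switches to it, replacing the hard-wired FixedTemp(d3). A sketch of that allocation pattern with simplified types:

    #include <cstdio>

    class AllocatorSketch {
     public:
      int GetVirtualRegister() {
        if (next_ > kMaxVirtualRegistersSketch) {
          ok_ = false;  // real code records an out-of-virtual-registers abort reason
          return 0;
        }
        return next_++;
      }
      bool AllocationOk() const { return ok_; }

     private:
      static constexpr int kMaxVirtualRegistersSketch = 1 << 10;  // assumed cap
      int next_ = 1;
      bool ok_ = true;
    };

    struct TempOperand { bool must_have_double_register; int vreg; };

    TempOperand TempDoubleRegisterSketch(AllocatorSketch* allocator) {
      TempOperand operand{true, allocator->GetVirtualRegister()};
      if (!allocator->AllocationOk()) {
        operand.vreg = 0;  // pin to 0 so compilation can bail out cleanly
      }
      return operand;
    }

    int main() {
      AllocatorSketch allocator;
      TempOperand t = TempDoubleRegisterSketch(&allocator);
      std::printf("vreg = %d\n", t.vreg);
      return 0;
    }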
@@ -816,7 +829,7 @@ void LChunkBuilder::VisitInstruction(HInstruction* current) { if (current->OperandCount() == 0) { instr = DefineAsRegister(new(zone()) LDummy()); } else { - ASSERT(!current->OperandAt(0)->IsControlInstruction()); + DCHECK(!current->OperandAt(0)->IsControlInstruction()); instr = DefineAsRegister(new(zone()) LDummyUse(UseAny(current->OperandAt(0)))); } @@ -828,76 +841,90 @@ void LChunkBuilder::VisitInstruction(HInstruction* current) { chunk_->AddInstruction(dummy, current_block_); } } else { - instr = current->CompileToLithium(this); + HBasicBlock* successor; + if (current->IsControlInstruction() && + HControlInstruction::cast(current)->KnownSuccessorBlock(&successor) && + successor != NULL) { + instr = new(zone()) LGoto(successor); + } else { + instr = current->CompileToLithium(this); + } } argument_count_ += current->argument_delta(); - ASSERT(argument_count_ >= 0); + DCHECK(argument_count_ >= 0); if (instr != NULL) { - // Associate the hydrogen instruction first, since we may need it for - // the ClobbersRegisters() or ClobbersDoubleRegisters() calls below. - instr->set_hydrogen_value(current); + AddInstruction(instr, current); + } + + current_instruction_ = old_current; +} + + +void LChunkBuilder::AddInstruction(LInstruction* instr, + HInstruction* hydrogen_val) { + // Associate the hydrogen instruction first, since we may need it for + // the ClobbersRegisters() or ClobbersDoubleRegisters() calls below. + instr->set_hydrogen_value(hydrogen_val); #if DEBUG - // Make sure that the lithium instruction has either no fixed register - // constraints in temps or the result OR no uses that are only used at - // start. If this invariant doesn't hold, the register allocator can decide - // to insert a split of a range immediately before the instruction due to an - // already allocated register needing to be used for the instruction's fixed - // register constraint. In this case, The register allocator won't see an - // interference between the split child and the use-at-start (it would if - // the it was just a plain use), so it is free to move the split child into - // the same register that is used for the use-at-start. - // See https://code.google.com/p/chromium/issues/detail?id=201590 - if (!(instr->ClobbersRegisters() && - instr->ClobbersDoubleRegisters(isolate()))) { - int fixed = 0; - int used_at_start = 0; - for (UseIterator it(instr); !it.Done(); it.Advance()) { - LUnallocated* operand = LUnallocated::cast(it.Current()); - if (operand->IsUsedAtStart()) ++used_at_start; - } - if (instr->Output() != NULL) { - if (LUnallocated::cast(instr->Output())->HasFixedPolicy()) ++fixed; - } - for (TempIterator it(instr); !it.Done(); it.Advance()) { - LUnallocated* operand = LUnallocated::cast(it.Current()); - if (operand->HasFixedPolicy()) ++fixed; - } - ASSERT(fixed == 0 || used_at_start == 0); + // Make sure that the lithium instruction has either no fixed register + // constraints in temps or the result OR no uses that are only used at + // start. If this invariant doesn't hold, the register allocator can decide + // to insert a split of a range immediately before the instruction due to an + // already allocated register needing to be used for the instruction's fixed + // register constraint. In this case, The register allocator won't see an + // interference between the split child and the use-at-start (it would if + // the it was just a plain use), so it is free to move the split child into + // the same register that is used for the use-at-start. 
+ // See https://code.google.com/p/chromium/issues/detail?id=201590 + if (!(instr->ClobbersRegisters() && + instr->ClobbersDoubleRegisters(isolate()))) { + int fixed = 0; + int used_at_start = 0; + for (UseIterator it(instr); !it.Done(); it.Advance()) { + LUnallocated* operand = LUnallocated::cast(it.Current()); + if (operand->IsUsedAtStart()) ++used_at_start; + } + if (instr->Output() != NULL) { + if (LUnallocated::cast(instr->Output())->HasFixedPolicy()) ++fixed; } + for (TempIterator it(instr); !it.Done(); it.Advance()) { + LUnallocated* operand = LUnallocated::cast(it.Current()); + if (operand->HasFixedPolicy()) ++fixed; + } + DCHECK(fixed == 0 || used_at_start == 0); + } #endif - if (FLAG_stress_pointer_maps && !instr->HasPointerMap()) { - instr = AssignPointerMap(instr); - } - if (FLAG_stress_environments && !instr->HasEnvironment()) { - instr = AssignEnvironment(instr); + if (FLAG_stress_pointer_maps && !instr->HasPointerMap()) { + instr = AssignPointerMap(instr); + } + if (FLAG_stress_environments && !instr->HasEnvironment()) { + instr = AssignEnvironment(instr); + } + chunk_->AddInstruction(instr, current_block_); + + if (instr->IsCall()) { + HValue* hydrogen_value_for_lazy_bailout = hydrogen_val; + LInstruction* instruction_needing_environment = NULL; + if (hydrogen_val->HasObservableSideEffects()) { + HSimulate* sim = HSimulate::cast(hydrogen_val->next()); + instruction_needing_environment = instr; + sim->ReplayEnvironment(current_block_->last_environment()); + hydrogen_value_for_lazy_bailout = sim; } - chunk_->AddInstruction(instr, current_block_); - - if (instr->IsCall()) { - HValue* hydrogen_value_for_lazy_bailout = current; - LInstruction* instruction_needing_environment = NULL; - if (current->HasObservableSideEffects()) { - HSimulate* sim = HSimulate::cast(current->next()); - instruction_needing_environment = instr; - sim->ReplayEnvironment(current_block_->last_environment()); - hydrogen_value_for_lazy_bailout = sim; - } - LInstruction* bailout = AssignEnvironment(new(zone()) LLazyBailout()); - bailout->set_hydrogen_value(hydrogen_value_for_lazy_bailout); - chunk_->AddInstruction(bailout, current_block_); - if (instruction_needing_environment != NULL) { - // Store the lazy deopt environment with the instruction if needed. - // Right now it is only used for LInstanceOfKnownGlobal. - instruction_needing_environment-> - SetDeferredLazyDeoptimizationEnvironment(bailout->environment()); - } + LInstruction* bailout = AssignEnvironment(new(zone()) LLazyBailout()); + bailout->set_hydrogen_value(hydrogen_value_for_lazy_bailout); + chunk_->AddInstruction(bailout, current_block_); + if (instruction_needing_environment != NULL) { + // Store the lazy deopt environment with the instruction if needed. + // Right now it is only used for LInstanceOfKnownGlobal. 
+ instruction_needing_environment-> + SetDeferredLazyDeoptimizationEnvironment(bailout->environment()); } } - current_instruction_ = old_current; } @@ -907,9 +934,6 @@ LInstruction* LChunkBuilder::DoGoto(HGoto* instr) { LInstruction* LChunkBuilder::DoBranch(HBranch* instr) { - LInstruction* goto_instr = CheckElideControlInstruction(instr); - if (goto_instr != NULL) return goto_instr; - HValue* value = instr->value(); Representation r = value->representation(); HType type = value->type(); @@ -934,10 +958,7 @@ LInstruction* LChunkBuilder::DoDebugBreak(HDebugBreak* instr) { LInstruction* LChunkBuilder::DoCompareMap(HCompareMap* instr) { - LInstruction* goto_instr = CheckElideControlInstruction(instr); - if (goto_instr != NULL) return goto_instr; - - ASSERT(instr->value()->representation().IsTagged()); + DCHECK(instr->value()->representation().IsTagged()); LOperand* value = UseRegisterAtStart(instr->value()); LOperand* temp = TempRegister(); return new(zone()) LCmpMapAndBranch(value, temp); @@ -998,9 +1019,13 @@ LInstruction* LChunkBuilder::DoApplyArguments(HApplyArguments* instr) { } -LInstruction* LChunkBuilder::DoPushArgument(HPushArgument* instr) { - LOperand* argument = Use(instr->argument()); - return new(zone()) LPushArgument(argument); +LInstruction* LChunkBuilder::DoPushArguments(HPushArguments* instr) { + int argc = instr->OperandCount(); + for (int i = 0; i < argc; ++i) { + LOperand* argument = Use(instr->argument(i)); + AddInstruction(new(zone()) LPushArgument(argument), instr); + } + return NULL; } @@ -1057,7 +1082,7 @@ LInstruction* LChunkBuilder::DoCallJSFunction( LInstruction* LChunkBuilder::DoCallWithDescriptor( HCallWithDescriptor* instr) { - const CallInterfaceDescriptor* descriptor = instr->descriptor(); + const InterfaceDescriptor* descriptor = instr->descriptor(); LOperand* target = UseRegisterOrConstantAtStart(instr->target()); ZoneList<LOperand*> ops(instr->OperandCount(), zone()); @@ -1084,14 +1109,24 @@ LInstruction* LChunkBuilder::DoInvokeFunction(HInvokeFunction* instr) { LInstruction* LChunkBuilder::DoUnaryMathOperation(HUnaryMathOperation* instr) { switch (instr->op()) { - case kMathFloor: return DoMathFloor(instr); - case kMathRound: return DoMathRound(instr); - case kMathAbs: return DoMathAbs(instr); - case kMathLog: return DoMathLog(instr); - case kMathExp: return DoMathExp(instr); - case kMathSqrt: return DoMathSqrt(instr); - case kMathPowHalf: return DoMathPowHalf(instr); - case kMathClz32: return DoMathClz32(instr); + case kMathFloor: + return DoMathFloor(instr); + case kMathRound: + return DoMathRound(instr); + case kMathFround: + return DoMathFround(instr); + case kMathAbs: + return DoMathAbs(instr); + case kMathLog: + return DoMathLog(instr); + case kMathExp: + return DoMathExp(instr); + case kMathSqrt: + return DoMathSqrt(instr); + case kMathPowHalf: + return DoMathPowHalf(instr); + case kMathClz32: + return DoMathClz32(instr); default: UNREACHABLE(); return NULL; @@ -1108,12 +1143,19 @@ LInstruction* LChunkBuilder::DoMathFloor(HUnaryMathOperation* instr) { LInstruction* LChunkBuilder::DoMathRound(HUnaryMathOperation* instr) { LOperand* input = UseRegister(instr->value()); - LOperand* temp = FixedTemp(d3); + LOperand* temp = TempDoubleRegister(); LMathRound* result = new(zone()) LMathRound(input, temp); return AssignEnvironment(DefineAsRegister(result)); } +LInstruction* LChunkBuilder::DoMathFround(HUnaryMathOperation* instr) { + LOperand* input = UseRegister(instr->value()); + LMathFround* result = new (zone()) LMathFround(input); + return 
DefineAsRegister(result); +} + + LInstruction* LChunkBuilder::DoMathAbs(HUnaryMathOperation* instr) { Representation r = instr->value()->representation(); LOperand* context = (r.IsDouble() || r.IsSmiOrInteger32()) @@ -1129,8 +1171,8 @@ LInstruction* LChunkBuilder::DoMathAbs(HUnaryMathOperation* instr) { LInstruction* LChunkBuilder::DoMathLog(HUnaryMathOperation* instr) { - ASSERT(instr->representation().IsDouble()); - ASSERT(instr->value()->representation().IsDouble()); + DCHECK(instr->representation().IsDouble()); + DCHECK(instr->value()->representation().IsDouble()); LOperand* input = UseFixedDouble(instr->value(), d0); return MarkAsCall(DefineFixedDouble(new(zone()) LMathLog(input), d0), instr); } @@ -1144,12 +1186,12 @@ LInstruction* LChunkBuilder::DoMathClz32(HUnaryMathOperation* instr) { LInstruction* LChunkBuilder::DoMathExp(HUnaryMathOperation* instr) { - ASSERT(instr->representation().IsDouble()); - ASSERT(instr->value()->representation().IsDouble()); + DCHECK(instr->representation().IsDouble()); + DCHECK(instr->value()->representation().IsDouble()); LOperand* input = UseRegister(instr->value()); LOperand* temp1 = TempRegister(); LOperand* temp2 = TempRegister(); - LOperand* double_temp = FixedTemp(d3); // Chosen by fair dice roll. + LOperand* double_temp = TempDoubleRegister(); LMathExp* result = new(zone()) LMathExp(input, double_temp, temp1, temp2); return DefineAsRegister(result); } @@ -1221,9 +1263,9 @@ LInstruction* LChunkBuilder::DoShl(HShl* instr) { LInstruction* LChunkBuilder::DoBitwise(HBitwise* instr) { if (instr->representation().IsSmiOrInteger32()) { - ASSERT(instr->left()->representation().Equals(instr->representation())); - ASSERT(instr->right()->representation().Equals(instr->representation())); - ASSERT(instr->CheckFlag(HValue::kTruncatingToInt32)); + DCHECK(instr->left()->representation().Equals(instr->representation())); + DCHECK(instr->right()->representation().Equals(instr->representation())); + DCHECK(instr->CheckFlag(HValue::kTruncatingToInt32)); LOperand* left = UseRegisterAtStart(instr->BetterLeftOperand()); LOperand* right = UseOrConstantAtStart(instr->BetterRightOperand()); @@ -1235,9 +1277,9 @@ LInstruction* LChunkBuilder::DoBitwise(HBitwise* instr) { LInstruction* LChunkBuilder::DoDivByPowerOf2I(HDiv* instr) { - ASSERT(instr->representation().IsSmiOrInteger32()); - ASSERT(instr->left()->representation().Equals(instr->representation())); - ASSERT(instr->right()->representation().Equals(instr->representation())); + DCHECK(instr->representation().IsSmiOrInteger32()); + DCHECK(instr->left()->representation().Equals(instr->representation())); + DCHECK(instr->right()->representation().Equals(instr->representation())); LOperand* dividend = UseRegister(instr->left()); int32_t divisor = instr->right()->GetInteger32Constant(); LInstruction* result = DefineAsRegister(new(zone()) LDivByPowerOf2I( @@ -1253,9 +1295,9 @@ LInstruction* LChunkBuilder::DoDivByPowerOf2I(HDiv* instr) { LInstruction* LChunkBuilder::DoDivByConstI(HDiv* instr) { - ASSERT(instr->representation().IsInteger32()); - ASSERT(instr->left()->representation().Equals(instr->representation())); - ASSERT(instr->right()->representation().Equals(instr->representation())); + DCHECK(instr->representation().IsInteger32()); + DCHECK(instr->left()->representation().Equals(instr->representation())); + DCHECK(instr->right()->representation().Equals(instr->representation())); LOperand* dividend = UseRegister(instr->left()); int32_t divisor = instr->right()->GetInteger32Constant(); LInstruction* result = 
DefineAsRegister(new(zone()) LDivByConstI( @@ -1270,12 +1312,13 @@ LInstruction* LChunkBuilder::DoDivByConstI(HDiv* instr) { LInstruction* LChunkBuilder::DoDivI(HDiv* instr) { - ASSERT(instr->representation().IsSmiOrInteger32()); - ASSERT(instr->left()->representation().Equals(instr->representation())); - ASSERT(instr->right()->representation().Equals(instr->representation())); + DCHECK(instr->representation().IsSmiOrInteger32()); + DCHECK(instr->left()->representation().Equals(instr->representation())); + DCHECK(instr->right()->representation().Equals(instr->representation())); LOperand* dividend = UseRegister(instr->left()); LOperand* divisor = UseRegister(instr->right()); - LOperand* temp = CpuFeatures::IsSupported(SUDIV) ? NULL : FixedTemp(d4); + LOperand* temp = + CpuFeatures::IsSupported(SUDIV) ? NULL : TempDoubleRegister(); LInstruction* result = DefineAsRegister(new(zone()) LDivI(dividend, divisor, temp)); if (instr->CheckFlag(HValue::kCanBeDivByZero) || @@ -1322,9 +1365,9 @@ LInstruction* LChunkBuilder::DoFlooringDivByPowerOf2I(HMathFloorOfDiv* instr) { LInstruction* LChunkBuilder::DoFlooringDivByConstI(HMathFloorOfDiv* instr) { - ASSERT(instr->representation().IsInteger32()); - ASSERT(instr->left()->representation().Equals(instr->representation())); - ASSERT(instr->right()->representation().Equals(instr->representation())); + DCHECK(instr->representation().IsInteger32()); + DCHECK(instr->left()->representation().Equals(instr->representation())); + DCHECK(instr->right()->representation().Equals(instr->representation())); LOperand* dividend = UseRegister(instr->left()); int32_t divisor = instr->right()->GetInteger32Constant(); LOperand* temp = @@ -1342,12 +1385,13 @@ LInstruction* LChunkBuilder::DoFlooringDivByConstI(HMathFloorOfDiv* instr) { LInstruction* LChunkBuilder::DoFlooringDivI(HMathFloorOfDiv* instr) { - ASSERT(instr->representation().IsSmiOrInteger32()); - ASSERT(instr->left()->representation().Equals(instr->representation())); - ASSERT(instr->right()->representation().Equals(instr->representation())); + DCHECK(instr->representation().IsSmiOrInteger32()); + DCHECK(instr->left()->representation().Equals(instr->representation())); + DCHECK(instr->right()->representation().Equals(instr->representation())); LOperand* dividend = UseRegister(instr->left()); LOperand* divisor = UseRegister(instr->right()); - LOperand* temp = CpuFeatures::IsSupported(SUDIV) ? NULL : FixedTemp(d4); + LOperand* temp = + CpuFeatures::IsSupported(SUDIV) ? 
NULL : TempDoubleRegister(); LFlooringDivI* div = new(zone()) LFlooringDivI(dividend, divisor, temp); return AssignEnvironment(DefineAsRegister(div)); } @@ -1365,14 +1409,15 @@ LInstruction* LChunkBuilder::DoMathFloorOfDiv(HMathFloorOfDiv* instr) { LInstruction* LChunkBuilder::DoModByPowerOf2I(HMod* instr) { - ASSERT(instr->representation().IsSmiOrInteger32()); - ASSERT(instr->left()->representation().Equals(instr->representation())); - ASSERT(instr->right()->representation().Equals(instr->representation())); + DCHECK(instr->representation().IsSmiOrInteger32()); + DCHECK(instr->left()->representation().Equals(instr->representation())); + DCHECK(instr->right()->representation().Equals(instr->representation())); LOperand* dividend = UseRegisterAtStart(instr->left()); int32_t divisor = instr->right()->GetInteger32Constant(); LInstruction* result = DefineSameAsFirst(new(zone()) LModByPowerOf2I( dividend, divisor)); - if (instr->CheckFlag(HValue::kBailoutOnMinusZero)) { + if (instr->CheckFlag(HValue::kLeftCanBeNegative) && + instr->CheckFlag(HValue::kBailoutOnMinusZero)) { result = AssignEnvironment(result); } return result; @@ -1380,9 +1425,9 @@ LInstruction* LChunkBuilder::DoModByPowerOf2I(HMod* instr) { LInstruction* LChunkBuilder::DoModByConstI(HMod* instr) { - ASSERT(instr->representation().IsSmiOrInteger32()); - ASSERT(instr->left()->representation().Equals(instr->representation())); - ASSERT(instr->right()->representation().Equals(instr->representation())); + DCHECK(instr->representation().IsSmiOrInteger32()); + DCHECK(instr->left()->representation().Equals(instr->representation())); + DCHECK(instr->right()->representation().Equals(instr->representation())); LOperand* dividend = UseRegister(instr->left()); int32_t divisor = instr->right()->GetInteger32Constant(); LInstruction* result = DefineAsRegister(new(zone()) LModByConstI( @@ -1395,13 +1440,15 @@ LInstruction* LChunkBuilder::DoModByConstI(HMod* instr) { LInstruction* LChunkBuilder::DoModI(HMod* instr) { - ASSERT(instr->representation().IsSmiOrInteger32()); - ASSERT(instr->left()->representation().Equals(instr->representation())); - ASSERT(instr->right()->representation().Equals(instr->representation())); + DCHECK(instr->representation().IsSmiOrInteger32()); + DCHECK(instr->left()->representation().Equals(instr->representation())); + DCHECK(instr->right()->representation().Equals(instr->representation())); LOperand* dividend = UseRegister(instr->left()); LOperand* divisor = UseRegister(instr->right()); - LOperand* temp = CpuFeatures::IsSupported(SUDIV) ? NULL : FixedTemp(d10); - LOperand* temp2 = CpuFeatures::IsSupported(SUDIV) ? NULL : FixedTemp(d11); + LOperand* temp = + CpuFeatures::IsSupported(SUDIV) ? NULL : TempDoubleRegister(); + LOperand* temp2 = + CpuFeatures::IsSupported(SUDIV) ? 
NULL : TempDoubleRegister(); LInstruction* result = DefineAsRegister(new(zone()) LModI( dividend, divisor, temp, temp2)); if (instr->CheckFlag(HValue::kCanBeDivByZero) || @@ -1431,8 +1478,8 @@ LInstruction* LChunkBuilder::DoMod(HMod* instr) { LInstruction* LChunkBuilder::DoMul(HMul* instr) { if (instr->representation().IsSmiOrInteger32()) { - ASSERT(instr->left()->representation().Equals(instr->representation())); - ASSERT(instr->right()->representation().Equals(instr->representation())); + DCHECK(instr->left()->representation().Equals(instr->representation())); + DCHECK(instr->right()->representation().Equals(instr->representation())); HValue* left = instr->BetterLeftOperand(); HValue* right = instr->BetterRightOperand(); LOperand* left_op; @@ -1471,8 +1518,8 @@ LInstruction* LChunkBuilder::DoMul(HMul* instr) { return DefineAsRegister(mul); } else if (instr->representation().IsDouble()) { - if (instr->UseCount() == 1 && (instr->uses().value()->IsAdd() || - instr->uses().value()->IsSub())) { + if (instr->HasOneUse() && (instr->uses().value()->IsAdd() || + instr->uses().value()->IsSub())) { HBinaryOperation* use = HBinaryOperation::cast(instr->uses().value()); if (use->IsAdd() && instr == use->left()) { @@ -1501,8 +1548,8 @@ LInstruction* LChunkBuilder::DoMul(HMul* instr) { LInstruction* LChunkBuilder::DoSub(HSub* instr) { if (instr->representation().IsSmiOrInteger32()) { - ASSERT(instr->left()->representation().Equals(instr->representation())); - ASSERT(instr->right()->representation().Equals(instr->representation())); + DCHECK(instr->left()->representation().Equals(instr->representation())); + DCHECK(instr->right()->representation().Equals(instr->representation())); if (instr->left()->IsConstant()) { // If lhs is constant, do reverse subtraction instead. @@ -1518,7 +1565,7 @@ LInstruction* LChunkBuilder::DoSub(HSub* instr) { } return result; } else if (instr->representation().IsDouble()) { - if (instr->right()->IsMul()) { + if (instr->right()->IsMul() && instr->right()->HasOneUse()) { return DoMultiplySub(instr->left(), HMul::cast(instr->right())); } @@ -1530,9 +1577,9 @@ LInstruction* LChunkBuilder::DoSub(HSub* instr) { LInstruction* LChunkBuilder::DoRSub(HSub* instr) { - ASSERT(instr->representation().IsSmiOrInteger32()); - ASSERT(instr->left()->representation().Equals(instr->representation())); - ASSERT(instr->right()->representation().Equals(instr->representation())); + DCHECK(instr->representation().IsSmiOrInteger32()); + DCHECK(instr->left()->representation().Equals(instr->representation())); + DCHECK(instr->right()->representation().Equals(instr->representation())); // Note: The lhs of the subtraction becomes the rhs of the // reverse-subtraction. 
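Editor's note: two behavioral refinements in the hunks above are easy to miss among the renames. First, the FixedTemp(dN) scratch registers become TempDoubleRegister(), letting the allocator pick any free double register instead of pinning d4/d10/d11, and multiply-accumulate fusion in DoAdd/DoSub/DoMul is now gated on HasOneUse(), since fusing a multiply that has other uses would force it to be computed twice. Second, DoModByPowerOf2I attaches a deopt environment only when the dividend can actually be negative: a non-negative dividend can never produce the -0 result that kBailoutOnMinusZero guards against. A sketch of the underlying arithmetic (host C++ for illustration, not the actual ARM lowering):

#include <cstdint>

// Truncating modulus by a power of two, as LModByPowerOf2I computes it.
// For divisor == 1 << k the operation is a mask. Only a negative dividend
// can hit the case where the integer result is 0 but JavaScript semantics
// require -0, which untagged integer code cannot represent and must
// deoptimize on.
int32_t ModByPowerOf2(int32_t dividend, int32_t divisor) {
  uint32_t mask = static_cast<uint32_t>(divisor) - 1;
  if (dividend >= 0) return static_cast<int32_t>(dividend & mask);
  // Negate through unsigned arithmetic so INT32_MIN stays well defined.
  uint32_t magnitude = 0u - static_cast<uint32_t>(dividend);
  return -static_cast<int32_t>(magnitude & mask);
}
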
@@ -1569,8 +1616,8 @@ LInstruction* LChunkBuilder::DoMultiplySub(HValue* minuend, HMul* mul) { LInstruction* LChunkBuilder::DoAdd(HAdd* instr) { if (instr->representation().IsSmiOrInteger32()) { - ASSERT(instr->left()->representation().Equals(instr->representation())); - ASSERT(instr->right()->representation().Equals(instr->representation())); + DCHECK(instr->left()->representation().Equals(instr->representation())); + DCHECK(instr->right()->representation().Equals(instr->representation())); LOperand* left = UseRegisterAtStart(instr->BetterLeftOperand()); LOperand* right = UseOrConstantAtStart(instr->BetterRightOperand()); LAddI* add = new(zone()) LAddI(left, right); @@ -1580,21 +1627,21 @@ LInstruction* LChunkBuilder::DoAdd(HAdd* instr) { } return result; } else if (instr->representation().IsExternal()) { - ASSERT(instr->left()->representation().IsExternal()); - ASSERT(instr->right()->representation().IsInteger32()); - ASSERT(!instr->CheckFlag(HValue::kCanOverflow)); + DCHECK(instr->left()->representation().IsExternal()); + DCHECK(instr->right()->representation().IsInteger32()); + DCHECK(!instr->CheckFlag(HValue::kCanOverflow)); LOperand* left = UseRegisterAtStart(instr->left()); LOperand* right = UseOrConstantAtStart(instr->right()); LAddI* add = new(zone()) LAddI(left, right); LInstruction* result = DefineAsRegister(add); return result; } else if (instr->representation().IsDouble()) { - if (instr->left()->IsMul()) { + if (instr->left()->IsMul() && instr->left()->HasOneUse()) { return DoMultiplyAdd(HMul::cast(instr->left()), instr->right()); } - if (instr->right()->IsMul()) { - ASSERT(!instr->left()->IsMul()); + if (instr->right()->IsMul() && instr->right()->HasOneUse()) { + DCHECK(!instr->left()->IsMul() || !instr->left()->HasOneUse()); return DoMultiplyAdd(HMul::cast(instr->right()), instr->left()); } @@ -1609,14 +1656,14 @@ LInstruction* LChunkBuilder::DoMathMinMax(HMathMinMax* instr) { LOperand* left = NULL; LOperand* right = NULL; if (instr->representation().IsSmiOrInteger32()) { - ASSERT(instr->left()->representation().Equals(instr->representation())); - ASSERT(instr->right()->representation().Equals(instr->representation())); + DCHECK(instr->left()->representation().Equals(instr->representation())); + DCHECK(instr->right()->representation().Equals(instr->representation())); left = UseRegisterAtStart(instr->BetterLeftOperand()); right = UseOrConstantAtStart(instr->BetterRightOperand()); } else { - ASSERT(instr->representation().IsDouble()); - ASSERT(instr->left()->representation().IsDouble()); - ASSERT(instr->right()->representation().IsDouble()); + DCHECK(instr->representation().IsDouble()); + DCHECK(instr->left()->representation().IsDouble()); + DCHECK(instr->right()->representation().IsDouble()); left = UseRegisterAtStart(instr->left()); right = UseRegisterAtStart(instr->right()); } @@ -1625,11 +1672,11 @@ LInstruction* LChunkBuilder::DoMathMinMax(HMathMinMax* instr) { LInstruction* LChunkBuilder::DoPower(HPower* instr) { - ASSERT(instr->representation().IsDouble()); + DCHECK(instr->representation().IsDouble()); // We call a C function for double power. It can't trigger a GC. // We need to use fixed result register for the call. Representation exponent_type = instr->right()->representation(); - ASSERT(instr->left()->representation().IsDouble()); + DCHECK(instr->left()->representation().IsDouble()); LOperand* left = UseFixedDouble(instr->left(), d0); LOperand* right = exponent_type.IsDouble() ? 
UseFixedDouble(instr->right(), d1) : @@ -1642,8 +1689,8 @@ LInstruction* LChunkBuilder::DoPower(HPower* instr) { LInstruction* LChunkBuilder::DoCompareGeneric(HCompareGeneric* instr) { - ASSERT(instr->left()->representation().IsTagged()); - ASSERT(instr->right()->representation().IsTagged()); + DCHECK(instr->left()->representation().IsTagged()); + DCHECK(instr->right()->representation().IsTagged()); LOperand* context = UseFixed(instr->context(), cp); LOperand* left = UseFixed(instr->left(), r1); LOperand* right = UseFixed(instr->right(), r0); @@ -1654,19 +1701,17 @@ LInstruction* LChunkBuilder::DoCompareGeneric(HCompareGeneric* instr) { LInstruction* LChunkBuilder::DoCompareNumericAndBranch( HCompareNumericAndBranch* instr) { - LInstruction* goto_instr = CheckElideControlInstruction(instr); - if (goto_instr != NULL) return goto_instr; Representation r = instr->representation(); if (r.IsSmiOrInteger32()) { - ASSERT(instr->left()->representation().Equals(r)); - ASSERT(instr->right()->representation().Equals(r)); + DCHECK(instr->left()->representation().Equals(r)); + DCHECK(instr->right()->representation().Equals(r)); LOperand* left = UseRegisterOrConstantAtStart(instr->left()); LOperand* right = UseRegisterOrConstantAtStart(instr->right()); return new(zone()) LCompareNumericAndBranch(left, right); } else { - ASSERT(r.IsDouble()); - ASSERT(instr->left()->representation().IsDouble()); - ASSERT(instr->right()->representation().IsDouble()); + DCHECK(r.IsDouble()); + DCHECK(instr->left()->representation().IsDouble()); + DCHECK(instr->right()->representation().IsDouble()); LOperand* left = UseRegisterAtStart(instr->left()); LOperand* right = UseRegisterAtStart(instr->right()); return new(zone()) LCompareNumericAndBranch(left, right); @@ -1676,8 +1721,6 @@ LInstruction* LChunkBuilder::DoCompareNumericAndBranch( LInstruction* LChunkBuilder::DoCompareObjectEqAndBranch( HCompareObjectEqAndBranch* instr) { - LInstruction* goto_instr = CheckElideControlInstruction(instr); - if (goto_instr != NULL) return goto_instr; LOperand* left = UseRegisterAtStart(instr->left()); LOperand* right = UseRegisterAtStart(instr->right()); return new(zone()) LCmpObjectEqAndBranch(left, right); @@ -1693,8 +1736,6 @@ LInstruction* LChunkBuilder::DoCompareHoleAndBranch( LInstruction* LChunkBuilder::DoCompareMinusZeroAndBranch( HCompareMinusZeroAndBranch* instr) { - LInstruction* goto_instr = CheckElideControlInstruction(instr); - if (goto_instr != NULL) return goto_instr; LOperand* value = UseRegister(instr->value()); LOperand* scratch = TempRegister(); return new(zone()) LCompareMinusZeroAndBranch(value, scratch); @@ -1702,7 +1743,7 @@ LInstruction* LChunkBuilder::DoCompareMinusZeroAndBranch( LInstruction* LChunkBuilder::DoIsObjectAndBranch(HIsObjectAndBranch* instr) { - ASSERT(instr->value()->representation().IsTagged()); + DCHECK(instr->value()->representation().IsTagged()); LOperand* value = UseRegisterAtStart(instr->value()); LOperand* temp = TempRegister(); return new(zone()) LIsObjectAndBranch(value, temp); @@ -1710,7 +1751,7 @@ LInstruction* LChunkBuilder::DoIsObjectAndBranch(HIsObjectAndBranch* instr) { LInstruction* LChunkBuilder::DoIsStringAndBranch(HIsStringAndBranch* instr) { - ASSERT(instr->value()->representation().IsTagged()); + DCHECK(instr->value()->representation().IsTagged()); LOperand* value = UseRegisterAtStart(instr->value()); LOperand* temp = TempRegister(); return new(zone()) LIsStringAndBranch(value, temp); @@ -1718,14 +1759,14 @@ LInstruction* LChunkBuilder::DoIsStringAndBranch(HIsStringAndBranch* 
instr) { LInstruction* LChunkBuilder::DoIsSmiAndBranch(HIsSmiAndBranch* instr) { - ASSERT(instr->value()->representation().IsTagged()); + DCHECK(instr->value()->representation().IsTagged()); return new(zone()) LIsSmiAndBranch(Use(instr->value())); } LInstruction* LChunkBuilder::DoIsUndetectableAndBranch( HIsUndetectableAndBranch* instr) { - ASSERT(instr->value()->representation().IsTagged()); + DCHECK(instr->value()->representation().IsTagged()); LOperand* value = UseRegisterAtStart(instr->value()); return new(zone()) LIsUndetectableAndBranch(value, TempRegister()); } @@ -1733,8 +1774,8 @@ LInstruction* LChunkBuilder::DoIsUndetectableAndBranch( LInstruction* LChunkBuilder::DoStringCompareAndBranch( HStringCompareAndBranch* instr) { - ASSERT(instr->left()->representation().IsTagged()); - ASSERT(instr->right()->representation().IsTagged()); + DCHECK(instr->left()->representation().IsTagged()); + DCHECK(instr->right()->representation().IsTagged()); LOperand* context = UseFixed(instr->context(), cp); LOperand* left = UseFixed(instr->left(), r1); LOperand* right = UseFixed(instr->right(), r0); @@ -1746,7 +1787,7 @@ LInstruction* LChunkBuilder::DoStringCompareAndBranch( LInstruction* LChunkBuilder::DoHasInstanceTypeAndBranch( HHasInstanceTypeAndBranch* instr) { - ASSERT(instr->value()->representation().IsTagged()); + DCHECK(instr->value()->representation().IsTagged()); LOperand* value = UseRegisterAtStart(instr->value()); return new(zone()) LHasInstanceTypeAndBranch(value); } @@ -1754,7 +1795,7 @@ LInstruction* LChunkBuilder::DoHasInstanceTypeAndBranch( LInstruction* LChunkBuilder::DoGetCachedArrayIndex( HGetCachedArrayIndex* instr) { - ASSERT(instr->value()->representation().IsTagged()); + DCHECK(instr->value()->representation().IsTagged()); LOperand* value = UseRegisterAtStart(instr->value()); return DefineAsRegister(new(zone()) LGetCachedArrayIndex(value)); @@ -1763,7 +1804,7 @@ LInstruction* LChunkBuilder::DoGetCachedArrayIndex( LInstruction* LChunkBuilder::DoHasCachedArrayIndexAndBranch( HHasCachedArrayIndexAndBranch* instr) { - ASSERT(instr->value()->representation().IsTagged()); + DCHECK(instr->value()->representation().IsTagged()); return new(zone()) LHasCachedArrayIndexAndBranch( UseRegisterAtStart(instr->value())); } @@ -1771,7 +1812,7 @@ LInstruction* LChunkBuilder::DoHasCachedArrayIndexAndBranch( LInstruction* LChunkBuilder::DoClassOfTestAndBranch( HClassOfTestAndBranch* instr) { - ASSERT(instr->value()->representation().IsTagged()); + DCHECK(instr->value()->representation().IsTagged()); LOperand* value = UseRegister(instr->value()); return new(zone()) LClassOfTestAndBranch(value, TempRegister()); } @@ -1874,14 +1915,14 @@ LInstruction* LChunkBuilder::DoChange(HChange* instr) { } return AssignEnvironment(DefineSameAsFirst(new(zone()) LCheckSmi(value))); } else { - ASSERT(to.IsInteger32()); + DCHECK(to.IsInteger32()); if (val->type().IsSmi() || val->representation().IsSmi()) { LOperand* value = UseRegisterAtStart(val); return DefineAsRegister(new(zone()) LSmiUntag(value, false)); } else { LOperand* value = UseRegister(val); LOperand* temp1 = TempRegister(); - LOperand* temp2 = FixedTemp(d11); + LOperand* temp2 = TempDoubleRegister(); LInstruction* result = DefineSameAsFirst(new(zone()) LTaggedToI(value, temp1, temp2)); if (!val->representation().IsSmi()) result = AssignEnvironment(result); @@ -1902,7 +1943,7 @@ LInstruction* LChunkBuilder::DoChange(HChange* instr) { return AssignEnvironment( DefineAsRegister(new(zone()) LDoubleToSmi(value))); } else { - ASSERT(to.IsInteger32()); + 
DCHECK(to.IsInteger32()); LOperand* value = UseRegister(val); LInstruction* result = DefineAsRegister(new(zone()) LDoubleToI(value)); if (!instr->CanTruncateToInt32()) result = AssignEnvironment(result); @@ -1935,7 +1976,7 @@ LInstruction* LChunkBuilder::DoChange(HChange* instr) { } return result; } else { - ASSERT(to.IsDouble()); + DCHECK(to.IsDouble()); if (val->CheckFlag(HInstruction::kUint32)) { return DefineAsRegister(new(zone()) LUint32ToDouble(UseRegister(val))); } else { @@ -1951,7 +1992,9 @@ LInstruction* LChunkBuilder::DoChange(HChange* instr) { LInstruction* LChunkBuilder::DoCheckHeapObject(HCheckHeapObject* instr) { LOperand* value = UseRegisterAtStart(instr->value()); LInstruction* result = new(zone()) LCheckNonSmi(value); - if (!instr->value()->IsHeapObject()) result = AssignEnvironment(result); + if (!instr->value()->type().IsHeapObject()) { + result = AssignEnvironment(result); + } return result; } @@ -1996,10 +2039,11 @@ LInstruction* LChunkBuilder::DoClampToUint8(HClampToUint8* instr) { } else if (input_rep.IsInteger32()) { return DefineAsRegister(new(zone()) LClampIToUint8(reg)); } else { - ASSERT(input_rep.IsSmiOrTagged()); + DCHECK(input_rep.IsSmiOrTagged()); // Register allocator doesn't (yet) support allocation of double // temps. Reserve d1 explicitly. - LClampTToUint8* result = new(zone()) LClampTToUint8(reg, FixedTemp(d11)); + LClampTToUint8* result = + new(zone()) LClampTToUint8(reg, TempDoubleRegister()); return AssignEnvironment(DefineAsRegister(result)); } } @@ -2007,7 +2051,7 @@ LInstruction* LChunkBuilder::DoClampToUint8(HClampToUint8* instr) { LInstruction* LChunkBuilder::DoDoubleBits(HDoubleBits* instr) { HValue* value = instr->value(); - ASSERT(value->representation().IsDouble()); + DCHECK(value->representation().IsDouble()); return DefineAsRegister(new(zone()) LDoubleBits(UseRegister(value))); } @@ -2058,9 +2102,14 @@ LInstruction* LChunkBuilder::DoLoadGlobalCell(HLoadGlobalCell* instr) { LInstruction* LChunkBuilder::DoLoadGlobalGeneric(HLoadGlobalGeneric* instr) { LOperand* context = UseFixed(instr->context(), cp); - LOperand* global_object = UseFixed(instr->global_object(), r0); + LOperand* global_object = UseFixed(instr->global_object(), + LoadIC::ReceiverRegister()); + LOperand* vector = NULL; + if (FLAG_vector_ics) { + vector = FixedTemp(LoadIC::VectorRegister()); + } LLoadGlobalGeneric* result = - new(zone()) LLoadGlobalGeneric(context, global_object); + new(zone()) LLoadGlobalGeneric(context, global_object, vector); return MarkAsCall(DefineFixed(result, r0), instr); } @@ -2112,9 +2161,14 @@ LInstruction* LChunkBuilder::DoLoadNamedField(HLoadNamedField* instr) { LInstruction* LChunkBuilder::DoLoadNamedGeneric(HLoadNamedGeneric* instr) { LOperand* context = UseFixed(instr->context(), cp); - LOperand* object = UseFixed(instr->object(), r0); + LOperand* object = UseFixed(instr->object(), LoadIC::ReceiverRegister()); + LOperand* vector = NULL; + if (FLAG_vector_ics) { + vector = FixedTemp(LoadIC::VectorRegister()); + } + LInstruction* result = - DefineFixed(new(zone()) LLoadNamedGeneric(context, object), r0); + DefineFixed(new(zone()) LLoadNamedGeneric(context, object, vector), r0); return MarkAsCall(result, instr); } @@ -2132,7 +2186,7 @@ LInstruction* LChunkBuilder::DoLoadRoot(HLoadRoot* instr) { LInstruction* LChunkBuilder::DoLoadKeyed(HLoadKeyed* instr) { - ASSERT(instr->key()->representation().IsSmiOrInteger32()); + DCHECK(instr->key()->representation().IsSmiOrInteger32()); ElementsKind elements_kind = instr->elements_kind(); LOperand* key = 
UseRegisterOrConstantAtStart(instr->key()); LInstruction* result = NULL; @@ -2142,12 +2196,12 @@ LInstruction* LChunkBuilder::DoLoadKeyed(HLoadKeyed* instr) { if (instr->representation().IsDouble()) { obj = UseRegister(instr->elements()); } else { - ASSERT(instr->representation().IsSmiOrTagged()); + DCHECK(instr->representation().IsSmiOrTagged()); obj = UseRegisterAtStart(instr->elements()); } result = DefineAsRegister(new(zone()) LLoadKeyed(obj, key)); } else { - ASSERT( + DCHECK( (instr->representation().IsInteger32() && !IsDoubleOrFloatElementsKind(elements_kind)) || (instr->representation().IsDouble() && @@ -2172,18 +2226,23 @@ LInstruction* LChunkBuilder::DoLoadKeyed(HLoadKeyed* instr) { LInstruction* LChunkBuilder::DoLoadKeyedGeneric(HLoadKeyedGeneric* instr) { LOperand* context = UseFixed(instr->context(), cp); - LOperand* object = UseFixed(instr->object(), r1); - LOperand* key = UseFixed(instr->key(), r0); + LOperand* object = UseFixed(instr->object(), LoadIC::ReceiverRegister()); + LOperand* key = UseFixed(instr->key(), LoadIC::NameRegister()); + LOperand* vector = NULL; + if (FLAG_vector_ics) { + vector = FixedTemp(LoadIC::VectorRegister()); + } LInstruction* result = - DefineFixed(new(zone()) LLoadKeyedGeneric(context, object, key), r0); + DefineFixed(new(zone()) LLoadKeyedGeneric(context, object, key, vector), + r0); return MarkAsCall(result, instr); } LInstruction* LChunkBuilder::DoStoreKeyed(HStoreKeyed* instr) { if (!instr->is_typed_elements()) { - ASSERT(instr->elements()->representation().IsTagged()); + DCHECK(instr->elements()->representation().IsTagged()); bool needs_write_barrier = instr->NeedsWriteBarrier(); LOperand* object = NULL; LOperand* key = NULL; @@ -2194,7 +2253,7 @@ LInstruction* LChunkBuilder::DoStoreKeyed(HStoreKeyed* instr) { val = UseRegister(instr->value()); key = UseRegisterOrConstantAtStart(instr->key()); } else { - ASSERT(instr->value()->representation().IsSmiOrTagged()); + DCHECK(instr->value()->representation().IsSmiOrTagged()); if (needs_write_barrier) { object = UseTempRegister(instr->elements()); val = UseTempRegister(instr->value()); @@ -2209,12 +2268,12 @@ LInstruction* LChunkBuilder::DoStoreKeyed(HStoreKeyed* instr) { return new(zone()) LStoreKeyed(object, key, val); } - ASSERT( + DCHECK( (instr->value()->representation().IsInteger32() && !IsDoubleOrFloatElementsKind(instr->elements_kind())) || (instr->value()->representation().IsDouble() && IsDoubleOrFloatElementsKind(instr->elements_kind()))); - ASSERT((instr->is_fixed_typed_array() && + DCHECK((instr->is_fixed_typed_array() && instr->elements()->representation().IsTagged()) || (instr->is_external() && instr->elements()->representation().IsExternal())); @@ -2227,13 +2286,13 @@ LInstruction* LChunkBuilder::DoStoreKeyed(HStoreKeyed* instr) { LInstruction* LChunkBuilder::DoStoreKeyedGeneric(HStoreKeyedGeneric* instr) { LOperand* context = UseFixed(instr->context(), cp); - LOperand* obj = UseFixed(instr->object(), r2); - LOperand* key = UseFixed(instr->key(), r1); - LOperand* val = UseFixed(instr->value(), r0); + LOperand* obj = UseFixed(instr->object(), KeyedStoreIC::ReceiverRegister()); + LOperand* key = UseFixed(instr->key(), KeyedStoreIC::NameRegister()); + LOperand* val = UseFixed(instr->value(), KeyedStoreIC::ValueRegister()); - ASSERT(instr->object()->representation().IsTagged()); - ASSERT(instr->key()->representation().IsTagged()); - ASSERT(instr->value()->representation().IsTagged()); + DCHECK(instr->object()->representation().IsTagged()); + 
DCHECK(instr->key()->representation().IsTagged()); + DCHECK(instr->value()->representation().IsTagged()); return MarkAsCall( new(zone()) LStoreKeyedGeneric(context, obj, key, val), instr); @@ -2303,8 +2362,8 @@ LInstruction* LChunkBuilder::DoStoreNamedField(HStoreNamedField* instr) { LInstruction* LChunkBuilder::DoStoreNamedGeneric(HStoreNamedGeneric* instr) { LOperand* context = UseFixed(instr->context(), cp); - LOperand* obj = UseFixed(instr->object(), r1); - LOperand* val = UseFixed(instr->value(), r0); + LOperand* obj = UseFixed(instr->object(), StoreIC::ReceiverRegister()); + LOperand* val = UseFixed(instr->value(), StoreIC::ValueRegister()); LInstruction* result = new(zone()) LStoreNamedGeneric(context, obj, val); return MarkAsCall(result, instr); @@ -2343,9 +2402,7 @@ LInstruction* LChunkBuilder::DoStringCharFromCode(HStringCharFromCode* instr) { LInstruction* LChunkBuilder::DoAllocate(HAllocate* instr) { info()->MarkAsDeferredCalling(); LOperand* context = UseAny(instr->context()); - LOperand* size = instr->size()->IsConstant() - ? UseConstant(instr->size()) - : UseTempRegister(instr->size()); + LOperand* size = UseRegisterOrConstant(instr->size()); LOperand* temp1 = TempRegister(); LOperand* temp2 = TempRegister(); LAllocate* result = new(zone()) LAllocate(context, size, temp1, temp2); @@ -2368,7 +2425,7 @@ LInstruction* LChunkBuilder::DoFunctionLiteral(HFunctionLiteral* instr) { LInstruction* LChunkBuilder::DoOsrEntry(HOsrEntry* instr) { - ASSERT(argument_count_ == 0); + DCHECK(argument_count_ == 0); allocator_->MarkAsOsrEntry(); current_block_->last_environment()->set_ast_id(instr->ast_id()); return AssignEnvironment(new(zone()) LOsrEntry); @@ -2381,11 +2438,11 @@ LInstruction* LChunkBuilder::DoParameter(HParameter* instr) { int spill_index = chunk()->GetParameterStackSlot(instr->index()); return DefineAsSpilled(result, spill_index); } else { - ASSERT(info()->IsStub()); + DCHECK(info()->IsStub()); CodeStubInterfaceDescriptor* descriptor = info()->code_stub()->GetInterfaceDescriptor(); int index = static_cast<int>(instr->index()); - Register reg = descriptor->GetParameterRegister(index); + Register reg = descriptor->GetEnvironmentParameterRegister(index); return DefineFixed(result, reg); } } @@ -2456,9 +2513,6 @@ LInstruction* LChunkBuilder::DoTypeof(HTypeof* instr) { LInstruction* LChunkBuilder::DoTypeofIsAndBranch(HTypeofIsAndBranch* instr) { - LInstruction* goto_instr = CheckElideControlInstruction(instr); - if (goto_instr != NULL) return goto_instr; - return new(zone()) LTypeofIsAndBranch(UseRegister(instr->value())); } @@ -2480,7 +2534,7 @@ LInstruction* LChunkBuilder::DoStackCheck(HStackCheck* instr) { LOperand* context = UseFixed(instr->context(), cp); return MarkAsCall(new(zone()) LStackCheck(context), instr); } else { - ASSERT(instr->is_backwards_branch()); + DCHECK(instr->is_backwards_branch()); LOperand* context = UseAny(instr->context()); return AssignEnvironment( AssignPointerMap(new(zone()) LStackCheck(context))); @@ -2516,7 +2570,7 @@ LInstruction* LChunkBuilder::DoLeaveInlined(HLeaveInlined* instr) { if (env->entry()->arguments_pushed()) { int argument_count = env->arguments_environment()->parameter_count(); pop = new(zone()) LDrop(argument_count); - ASSERT(instr->argument_delta() == -argument_count); + DCHECK(instr->argument_delta() == -argument_count); } HEnvironment* outer = current_block_->last_environment()-> @@ -2556,4 +2610,20 @@ LInstruction* LChunkBuilder::DoLoadFieldByIndex(HLoadFieldByIndex* instr) { return AssignPointerMap(result); } + +LInstruction* 
LChunkBuilder::DoStoreFrameContext(HStoreFrameContext* instr) { + LOperand* context = UseRegisterAtStart(instr->context()); + return new(zone()) LStoreFrameContext(context); +} + + +LInstruction* LChunkBuilder::DoAllocateBlockContext( + HAllocateBlockContext* instr) { + LOperand* context = UseFixed(instr->context(), cp); + LOperand* function = UseRegisterAtStart(instr->function()); + LAllocateBlockContext* result = + new(zone()) LAllocateBlockContext(context, function); + return MarkAsCall(DefineFixed(result, cp), instr); +} + } } // namespace v8::internal diff --git a/deps/v8/src/arm/lithium-arm.h b/deps/v8/src/arm/lithium-arm.h index 1a90eb638be..16f522e5b6a 100644 --- a/deps/v8/src/arm/lithium-arm.h +++ b/deps/v8/src/arm/lithium-arm.h @@ -5,11 +5,11 @@ #ifndef V8_ARM_LITHIUM_ARM_H_ #define V8_ARM_LITHIUM_ARM_H_ -#include "hydrogen.h" -#include "lithium-allocator.h" -#include "lithium.h" -#include "safepoint-table.h" -#include "utils.h" +#include "src/hydrogen.h" +#include "src/lithium.h" +#include "src/lithium-allocator.h" +#include "src/safepoint-table.h" +#include "src/utils.h" namespace v8 { namespace internal { @@ -17,148 +17,151 @@ namespace internal { // Forward declarations. class LCodeGen; -#define LITHIUM_CONCRETE_INSTRUCTION_LIST(V) \ - V(AccessArgumentsAt) \ - V(AddI) \ - V(Allocate) \ - V(ApplyArguments) \ - V(ArgumentsElements) \ - V(ArgumentsLength) \ - V(ArithmeticD) \ - V(ArithmeticT) \ - V(BitI) \ - V(BoundsCheck) \ - V(Branch) \ - V(CallJSFunction) \ - V(CallWithDescriptor) \ - V(CallFunction) \ - V(CallNew) \ - V(CallNewArray) \ - V(CallRuntime) \ - V(CallStub) \ - V(CheckInstanceType) \ - V(CheckNonSmi) \ - V(CheckMaps) \ - V(CheckMapValue) \ - V(CheckSmi) \ - V(CheckValue) \ - V(ClampDToUint8) \ - V(ClampIToUint8) \ - V(ClampTToUint8) \ - V(ClassOfTestAndBranch) \ - V(CompareMinusZeroAndBranch) \ - V(CompareNumericAndBranch) \ - V(CmpObjectEqAndBranch) \ - V(CmpHoleAndBranch) \ - V(CmpMapAndBranch) \ - V(CmpT) \ - V(ConstantD) \ - V(ConstantE) \ - V(ConstantI) \ - V(ConstantS) \ - V(ConstantT) \ - V(ConstructDouble) \ - V(Context) \ - V(DateField) \ - V(DebugBreak) \ - V(DeclareGlobals) \ - V(Deoptimize) \ - V(DivByConstI) \ - V(DivByPowerOf2I) \ - V(DivI) \ - V(DoubleBits) \ - V(DoubleToI) \ - V(DoubleToSmi) \ - V(Drop) \ - V(Dummy) \ - V(DummyUse) \ - V(FlooringDivByConstI) \ - V(FlooringDivByPowerOf2I) \ - V(FlooringDivI) \ - V(ForInCacheArray) \ - V(ForInPrepareMap) \ - V(FunctionLiteral) \ - V(GetCachedArrayIndex) \ - V(Goto) \ - V(HasCachedArrayIndexAndBranch) \ - V(HasInstanceTypeAndBranch) \ - V(InnerAllocatedObject) \ - V(InstanceOf) \ - V(InstanceOfKnownGlobal) \ - V(InstructionGap) \ - V(Integer32ToDouble) \ - V(InvokeFunction) \ - V(IsConstructCallAndBranch) \ - V(IsObjectAndBranch) \ - V(IsStringAndBranch) \ - V(IsSmiAndBranch) \ - V(IsUndetectableAndBranch) \ - V(Label) \ - V(LazyBailout) \ - V(LoadContextSlot) \ - V(LoadRoot) \ - V(LoadFieldByIndex) \ - V(LoadFunctionPrototype) \ - V(LoadGlobalCell) \ - V(LoadGlobalGeneric) \ - V(LoadKeyed) \ - V(LoadKeyedGeneric) \ - V(LoadNamedField) \ - V(LoadNamedGeneric) \ - V(MapEnumLength) \ - V(MathAbs) \ - V(MathClz32) \ - V(MathExp) \ - V(MathFloor) \ - V(MathLog) \ - V(MathMinMax) \ - V(MathPowHalf) \ - V(MathRound) \ - V(MathSqrt) \ - V(ModByConstI) \ - V(ModByPowerOf2I) \ - V(ModI) \ - V(MulI) \ - V(MultiplyAddD) \ - V(MultiplySubD) \ - V(NumberTagD) \ - V(NumberTagI) \ - V(NumberTagU) \ - V(NumberUntagD) \ - V(OsrEntry) \ - V(Parameter) \ - V(Power) \ - V(PushArgument) \ - V(RegExpLiteral) \ - V(Return) 
\ - V(SeqStringGetChar) \ - V(SeqStringSetChar) \ - V(ShiftI) \ - V(SmiTag) \ - V(SmiUntag) \ - V(StackCheck) \ - V(StoreCodeEntry) \ - V(StoreContextSlot) \ - V(StoreGlobalCell) \ - V(StoreKeyed) \ - V(StoreKeyedGeneric) \ - V(StoreNamedField) \ - V(StoreNamedGeneric) \ - V(StringAdd) \ - V(StringCharCodeAt) \ - V(StringCharFromCode) \ - V(StringCompareAndBranch) \ - V(SubI) \ - V(RSubI) \ - V(TaggedToI) \ - V(ThisFunction) \ - V(ToFastProperties) \ - V(TransitionElementsKind) \ - V(TrapAllocationMemento) \ - V(Typeof) \ - V(TypeofIsAndBranch) \ - V(Uint32ToDouble) \ - V(UnknownOSRValue) \ +#define LITHIUM_CONCRETE_INSTRUCTION_LIST(V) \ + V(AccessArgumentsAt) \ + V(AddI) \ + V(Allocate) \ + V(AllocateBlockContext) \ + V(ApplyArguments) \ + V(ArgumentsElements) \ + V(ArgumentsLength) \ + V(ArithmeticD) \ + V(ArithmeticT) \ + V(BitI) \ + V(BoundsCheck) \ + V(Branch) \ + V(CallJSFunction) \ + V(CallWithDescriptor) \ + V(CallFunction) \ + V(CallNew) \ + V(CallNewArray) \ + V(CallRuntime) \ + V(CallStub) \ + V(CheckInstanceType) \ + V(CheckNonSmi) \ + V(CheckMaps) \ + V(CheckMapValue) \ + V(CheckSmi) \ + V(CheckValue) \ + V(ClampDToUint8) \ + V(ClampIToUint8) \ + V(ClampTToUint8) \ + V(ClassOfTestAndBranch) \ + V(CompareMinusZeroAndBranch) \ + V(CompareNumericAndBranch) \ + V(CmpObjectEqAndBranch) \ + V(CmpHoleAndBranch) \ + V(CmpMapAndBranch) \ + V(CmpT) \ + V(ConstantD) \ + V(ConstantE) \ + V(ConstantI) \ + V(ConstantS) \ + V(ConstantT) \ + V(ConstructDouble) \ + V(Context) \ + V(DateField) \ + V(DebugBreak) \ + V(DeclareGlobals) \ + V(Deoptimize) \ + V(DivByConstI) \ + V(DivByPowerOf2I) \ + V(DivI) \ + V(DoubleBits) \ + V(DoubleToI) \ + V(DoubleToSmi) \ + V(Drop) \ + V(Dummy) \ + V(DummyUse) \ + V(FlooringDivByConstI) \ + V(FlooringDivByPowerOf2I) \ + V(FlooringDivI) \ + V(ForInCacheArray) \ + V(ForInPrepareMap) \ + V(FunctionLiteral) \ + V(GetCachedArrayIndex) \ + V(Goto) \ + V(HasCachedArrayIndexAndBranch) \ + V(HasInstanceTypeAndBranch) \ + V(InnerAllocatedObject) \ + V(InstanceOf) \ + V(InstanceOfKnownGlobal) \ + V(InstructionGap) \ + V(Integer32ToDouble) \ + V(InvokeFunction) \ + V(IsConstructCallAndBranch) \ + V(IsObjectAndBranch) \ + V(IsStringAndBranch) \ + V(IsSmiAndBranch) \ + V(IsUndetectableAndBranch) \ + V(Label) \ + V(LazyBailout) \ + V(LoadContextSlot) \ + V(LoadRoot) \ + V(LoadFieldByIndex) \ + V(LoadFunctionPrototype) \ + V(LoadGlobalCell) \ + V(LoadGlobalGeneric) \ + V(LoadKeyed) \ + V(LoadKeyedGeneric) \ + V(LoadNamedField) \ + V(LoadNamedGeneric) \ + V(MapEnumLength) \ + V(MathAbs) \ + V(MathClz32) \ + V(MathExp) \ + V(MathFloor) \ + V(MathFround) \ + V(MathLog) \ + V(MathMinMax) \ + V(MathPowHalf) \ + V(MathRound) \ + V(MathSqrt) \ + V(ModByConstI) \ + V(ModByPowerOf2I) \ + V(ModI) \ + V(MulI) \ + V(MultiplyAddD) \ + V(MultiplySubD) \ + V(NumberTagD) \ + V(NumberTagI) \ + V(NumberTagU) \ + V(NumberUntagD) \ + V(OsrEntry) \ + V(Parameter) \ + V(Power) \ + V(PushArgument) \ + V(RegExpLiteral) \ + V(Return) \ + V(SeqStringGetChar) \ + V(SeqStringSetChar) \ + V(ShiftI) \ + V(SmiTag) \ + V(SmiUntag) \ + V(StackCheck) \ + V(StoreCodeEntry) \ + V(StoreContextSlot) \ + V(StoreFrameContext) \ + V(StoreGlobalCell) \ + V(StoreKeyed) \ + V(StoreKeyedGeneric) \ + V(StoreNamedField) \ + V(StoreNamedGeneric) \ + V(StringAdd) \ + V(StringCharCodeAt) \ + V(StringCharFromCode) \ + V(StringCompareAndBranch) \ + V(SubI) \ + V(RSubI) \ + V(TaggedToI) \ + V(ThisFunction) \ + V(ToFastProperties) \ + V(TransitionElementsKind) \ + V(TrapAllocationMemento) \ + V(Typeof) \ + V(TypeofIsAndBranch) 
\ + V(Uint32ToDouble) \ + V(UnknownOSRValue) \ V(WrapReceiver) @@ -171,7 +174,7 @@ class LCodeGen; return mnemonic; \ } \ static L##type* cast(LInstruction* instr) { \ - ASSERT(instr->Is##type()); \ + DCHECK(instr->Is##type()); \ return reinterpret_cast<L##type*>(instr); \ } @@ -220,6 +223,9 @@ class LInstruction : public ZoneObject { virtual bool IsControl() const { return false; } + // Try deleting this instruction if possible. + virtual bool TryDelete() { return false; } + void set_environment(LEnvironment* env) { environment_ = env; } LEnvironment* environment() const { return environment_; } bool HasEnvironment() const { return environment_ != NULL; } @@ -258,11 +264,12 @@ class LInstruction : public ZoneObject { void VerifyCall(); #endif + virtual int InputCount() = 0; + virtual LOperand* InputAt(int i) = 0; + private: // Iterator support. friend class InputIterator; - virtual int InputCount() = 0; - virtual LOperand* InputAt(int i) = 0; friend class TempIterator; virtual int TempCount() = 0; @@ -327,7 +334,7 @@ class LGap : public LTemplateInstruction<0, 0, 0> { virtual bool IsGap() const V8_OVERRIDE { return true; } virtual void PrintDataTo(StringStream* stream) V8_OVERRIDE; static LGap* cast(LInstruction* instr) { - ASSERT(instr->IsGap()); + DCHECK(instr->IsGap()); return reinterpret_cast<LGap*>(instr); } @@ -407,7 +414,7 @@ class LLazyBailout V8_FINAL : public LTemplateInstruction<0, 0, 0> { class LDummy V8_FINAL : public LTemplateInstruction<1, 0, 0> { public: - explicit LDummy() { } + LDummy() {} DECLARE_CONCRETE_INSTRUCTION(Dummy, "dummy") }; @@ -423,6 +430,7 @@ class LDummyUse V8_FINAL : public LTemplateInstruction<1, 1, 0> { class LDeoptimize V8_FINAL : public LTemplateInstruction<0, 0, 0> { public: + virtual bool IsControl() const V8_OVERRIDE { return true; } DECLARE_CONCRETE_INSTRUCTION(Deoptimize, "deoptimize") DECLARE_HYDROGEN_ACCESSOR(Deoptimize) }; @@ -872,6 +880,16 @@ class LMathRound V8_FINAL : public LTemplateInstruction<1, 1, 1> { }; +class LMathFround V8_FINAL : public LTemplateInstruction<1, 1, 0> { + public: + explicit LMathFround(LOperand* value) { inputs_[0] = value; } + + LOperand* value() { return inputs_[0]; } + + DECLARE_CONCRETE_INSTRUCTION(MathFround, "math-fround") +}; + + class LMathAbs V8_FINAL : public LTemplateInstruction<1, 2, 0> { public: LMathAbs(LOperand* context, LOperand* value) { @@ -1560,7 +1578,7 @@ class LReturn V8_FINAL : public LTemplateInstruction<0, 3, 0> { return parameter_count()->IsConstantOperand(); } LConstantOperand* constant_parameter_count() { - ASSERT(has_constant_parameter_count()); + DCHECK(has_constant_parameter_count()); return LConstantOperand::cast(parameter_count()); } LOperand* parameter_count() { return inputs_[2]; } @@ -1582,15 +1600,17 @@ class LLoadNamedField V8_FINAL : public LTemplateInstruction<1, 1, 0> { }; -class LLoadNamedGeneric V8_FINAL : public LTemplateInstruction<1, 2, 0> { +class LLoadNamedGeneric V8_FINAL : public LTemplateInstruction<1, 2, 1> { public: - LLoadNamedGeneric(LOperand* context, LOperand* object) { + LLoadNamedGeneric(LOperand* context, LOperand* object, LOperand* vector) { inputs_[0] = context; inputs_[1] = object; + temps_[0] = vector; } LOperand* context() { return inputs_[0]; } LOperand* object() { return inputs_[1]; } + LOperand* temp_vector() { return temps_[0]; } DECLARE_CONCRETE_INSTRUCTION(LoadNamedGeneric, "load-named-generic") DECLARE_HYDROGEN_ACCESSOR(LoadNamedGeneric) @@ -1647,23 +1667,27 @@ class LLoadKeyed V8_FINAL : public LTemplateInstruction<1, 2, 0> { 
DECLARE_HYDROGEN_ACCESSOR(LoadKeyed) virtual void PrintDataTo(StringStream* stream) V8_OVERRIDE; - uint32_t additional_index() const { return hydrogen()->index_offset(); } + uint32_t base_offset() const { return hydrogen()->base_offset(); } }; -class LLoadKeyedGeneric V8_FINAL : public LTemplateInstruction<1, 3, 0> { +class LLoadKeyedGeneric V8_FINAL : public LTemplateInstruction<1, 3, 1> { public: - LLoadKeyedGeneric(LOperand* context, LOperand* object, LOperand* key) { + LLoadKeyedGeneric(LOperand* context, LOperand* object, LOperand* key, + LOperand* vector) { inputs_[0] = context; inputs_[1] = object; inputs_[2] = key; + temps_[0] = vector; } LOperand* context() { return inputs_[0]; } LOperand* object() { return inputs_[1]; } LOperand* key() { return inputs_[2]; } + LOperand* temp_vector() { return temps_[0]; } DECLARE_CONCRETE_INSTRUCTION(LoadKeyedGeneric, "load-keyed-generic") + DECLARE_HYDROGEN_ACCESSOR(LoadKeyedGeneric) }; @@ -1674,15 +1698,18 @@ class LLoadGlobalCell V8_FINAL : public LTemplateInstruction<1, 0, 0> { }; -class LLoadGlobalGeneric V8_FINAL : public LTemplateInstruction<1, 2, 0> { +class LLoadGlobalGeneric V8_FINAL : public LTemplateInstruction<1, 2, 1> { public: - LLoadGlobalGeneric(LOperand* context, LOperand* global_object) { + LLoadGlobalGeneric(LOperand* context, LOperand* global_object, + LOperand* vector) { inputs_[0] = context; inputs_[1] = global_object; + temps_[0] = vector; } LOperand* context() { return inputs_[0]; } LOperand* global_object() { return inputs_[1]; } + LOperand* temp_vector() { return temps_[0]; } DECLARE_CONCRETE_INSTRUCTION(LoadGlobalGeneric, "load-global-generic") DECLARE_HYDROGEN_ACCESSOR(LoadGlobalGeneric) @@ -1768,15 +1795,15 @@ class LDrop V8_FINAL : public LTemplateInstruction<0, 0, 0> { }; -class LStoreCodeEntry V8_FINAL: public LTemplateInstruction<0, 1, 1> { +class LStoreCodeEntry V8_FINAL: public LTemplateInstruction<0, 2, 0> { public: LStoreCodeEntry(LOperand* function, LOperand* code_object) { inputs_[0] = function; - temps_[0] = code_object; + inputs_[1] = code_object; } LOperand* function() { return inputs_[0]; } - LOperand* code_object() { return temps_[0]; } + LOperand* code_object() { return inputs_[1]; } virtual void PrintDataTo(StringStream* stream); @@ -1847,18 +1874,18 @@ class LCallJSFunction V8_FINAL : public LTemplateInstruction<1, 1, 0> { class LCallWithDescriptor V8_FINAL : public LTemplateResultInstruction<1> { public: - LCallWithDescriptor(const CallInterfaceDescriptor* descriptor, - ZoneList<LOperand*>& operands, + LCallWithDescriptor(const InterfaceDescriptor* descriptor, + const ZoneList<LOperand*>& operands, Zone* zone) : descriptor_(descriptor), - inputs_(descriptor->environment_length() + 1, zone) { - ASSERT(descriptor->environment_length() + 1 == operands.length()); + inputs_(descriptor->GetRegisterParameterCount() + 1, zone) { + DCHECK(descriptor->GetRegisterParameterCount() + 1 == operands.length()); inputs_.AddAll(operands, zone); } LOperand* target() const { return inputs_[0]; } - const CallInterfaceDescriptor* descriptor() { return descriptor_; } + const InterfaceDescriptor* descriptor() { return descriptor_; } private: DECLARE_CONCRETE_INSTRUCTION(CallWithDescriptor, "call-with-descriptor") @@ -1868,7 +1895,7 @@ class LCallWithDescriptor V8_FINAL : public LTemplateResultInstruction<1> { int arity() const { return hydrogen()->argument_count() - 1; } - const CallInterfaceDescriptor* descriptor_; + const InterfaceDescriptor* descriptor_; ZoneList<LOperand*> inputs_; // Iterator support. 
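Editor's note: the header changes above follow a single pattern: the generic load instructions grow a temp slot for the type-feedback vector used when FLAG_vector_ics is enabled, and the slot counts live in the LTemplateInstruction template arguments. A simplified sketch of that encoding (an assumed shape for illustration; the real class carries accessors, printing, and opcode machinery):

#include <array>

class LOperand;  // stand-in for v8::internal::LOperand

// R = results defined, I = value inputs consumed, T = scratch temps
// reserved for code generation.
template <int R, int I, int T>
class SketchTemplateInstruction {
 protected:
  std::array<LOperand*, R> results_;
  std::array<LOperand*, I> inputs_;
  std::array<LOperand*, T> temps_;
};

// Bumping LLoadNamedGeneric from <1, 2, 0> to <1, 2, 1> is what creates
// the temps_[0] slot that the diff fills with the feedback vector.
class SketchLoadNamedGeneric : public SketchTemplateInstruction<1, 2, 1> {
 public:
  SketchLoadNamedGeneric(LOperand* context, LOperand* object,
                         LOperand* vector) {
    inputs_[0] = context;
    inputs_[1] = object;
    temps_[0] = vector;
  }
};
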
@@ -2222,7 +2249,7 @@ class LStoreKeyed V8_FINAL : public LTemplateInstruction<0, 3, 0> { } return hydrogen()->NeedsCanonicalization(); } - uint32_t additional_index() const { return hydrogen()->index_offset(); } + uint32_t base_offset() const { return hydrogen()->base_offset(); } }; @@ -2668,6 +2695,35 @@ class LLoadFieldByIndex V8_FINAL : public LTemplateInstruction<1, 2, 0> { }; +class LStoreFrameContext: public LTemplateInstruction<0, 1, 0> { + public: + explicit LStoreFrameContext(LOperand* context) { + inputs_[0] = context; + } + + LOperand* context() { return inputs_[0]; } + + DECLARE_CONCRETE_INSTRUCTION(StoreFrameContext, "store-frame-context") +}; + + +class LAllocateBlockContext: public LTemplateInstruction<1, 2, 0> { + public: + LAllocateBlockContext(LOperand* context, LOperand* function) { + inputs_[0] = context; + inputs_[1] = function; + } + + LOperand* context() { return inputs_[0]; } + LOperand* function() { return inputs_[1]; } + + Handle<ScopeInfo> scope_info() { return hydrogen()->scope_info(); } + + DECLARE_CONCRETE_INSTRUCTION(AllocateBlockContext, "allocate-block-context") + DECLARE_HYDROGEN_ACCESSOR(AllocateBlockContext) +}; + + class LChunkBuilder; class LPlatformChunk V8_FINAL : public LChunk { public: @@ -2697,8 +2753,6 @@ class LChunkBuilder V8_FINAL : public LChunkBuilderBase { // Build the sequence for the graph. LPlatformChunk* Build(); - LInstruction* CheckElideControlInstruction(HControlInstruction* instr); - // Declare methods that deal with the individual node types. #define DECLARE_DO(type) LInstruction* Do##type(H##type* node); HYDROGEN_CONCRETE_INSTRUCTION_LIST(DECLARE_DO) @@ -2712,6 +2766,7 @@ class LChunkBuilder V8_FINAL : public LChunkBuilderBase { LInstruction* DoMathFloor(HUnaryMathOperation* instr); LInstruction* DoMathRound(HUnaryMathOperation* instr); + LInstruction* DoMathFround(HUnaryMathOperation* instr); LInstruction* DoMathAbs(HUnaryMathOperation* instr); LInstruction* DoMathLog(HUnaryMathOperation* instr); LInstruction* DoMathExp(HUnaryMathOperation* instr); @@ -2792,6 +2847,7 @@ class LChunkBuilder V8_FINAL : public LChunkBuilderBase { // Temporary operand that must be in a register. MUST_USE_RESULT LUnallocated* TempRegister(); + MUST_USE_RESULT LUnallocated* TempDoubleRegister(); MUST_USE_RESULT LOperand* FixedTemp(Register reg); MUST_USE_RESULT LOperand* FixedTemp(DoubleRegister reg); @@ -2821,6 +2877,7 @@ class LChunkBuilder V8_FINAL : public LChunkBuilderBase { CanDeoptimize can_deoptimize = CANNOT_DEOPTIMIZE_EAGERLY); void VisitInstruction(HInstruction* current); + void AddInstruction(LInstruction* instr, HInstruction* current); void DoBasicBlock(HBasicBlock* block, HBasicBlock* next_block); LInstruction* DoShift(Token::Value op, HBitwiseBinaryOperation* instr); diff --git a/deps/v8/src/arm/lithium-codegen-arm.cc b/deps/v8/src/arm/lithium-codegen-arm.cc index 25411fb9ac9..ff09e287f76 100644 --- a/deps/v8/src/arm/lithium-codegen-arm.cc +++ b/deps/v8/src/arm/lithium-codegen-arm.cc @@ -2,13 +2,13 @@ // Use of this source code is governed by a BSD-style license that can be // found in the LICENSE file. 
-#include "v8.h" +#include "src/v8.h" -#include "arm/lithium-codegen-arm.h" -#include "arm/lithium-gap-resolver-arm.h" -#include "code-stubs.h" -#include "stub-cache.h" -#include "hydrogen-osr.h" +#include "src/arm/lithium-codegen-arm.h" +#include "src/arm/lithium-gap-resolver-arm.h" +#include "src/code-stubs.h" +#include "src/hydrogen-osr.h" +#include "src/stub-cache.h" namespace v8 { namespace internal { @@ -41,7 +41,7 @@ class SafepointGenerator V8_FINAL : public CallWrapper { bool LCodeGen::GenerateCode() { LPhase phase("Z_Code generation", chunk()); - ASSERT(is_unused()); + DCHECK(is_unused()); status_ = GENERATING; // Open a frame scope to indicate that there is a frame on the stack. The @@ -58,7 +58,7 @@ bool LCodeGen::GenerateCode() { void LCodeGen::FinishCode(Handle<Code> code) { - ASSERT(is_done()); + DCHECK(is_done()); code->set_stack_slots(GetStackSlotCount()); code->set_safepoint_table_offset(safepoints_.GetCodeOffset()); if (code->is_optimized_code()) RegisterWeakObjectsInOptimizedCode(code); @@ -67,8 +67,8 @@ void LCodeGen::FinishCode(Handle<Code> code) { void LCodeGen::SaveCallerDoubles() { - ASSERT(info()->saves_caller_doubles()); - ASSERT(NeedsEagerFrame()); + DCHECK(info()->saves_caller_doubles()); + DCHECK(NeedsEagerFrame()); Comment(";;; Save clobbered callee double registers"); int count = 0; BitVector* doubles = chunk()->allocated_double_registers(); @@ -83,8 +83,8 @@ void LCodeGen::SaveCallerDoubles() { void LCodeGen::RestoreCallerDoubles() { - ASSERT(info()->saves_caller_doubles()); - ASSERT(NeedsEagerFrame()); + DCHECK(info()->saves_caller_doubles()); + DCHECK(NeedsEagerFrame()); Comment(";;; Restore clobbered callee double registers"); BitVector* doubles = chunk()->allocated_double_registers(); BitVector::Iterator save_iterator(doubles); @@ -99,7 +99,7 @@ void LCodeGen::RestoreCallerDoubles() { bool LCodeGen::GeneratePrologue() { - ASSERT(is_generating()); + DCHECK(is_generating()); if (info()->IsOptimizing()) { ProfileEntryHookStub::MaybeCallEntryHook(masm_); @@ -130,7 +130,7 @@ bool LCodeGen::GeneratePrologue() { __ b(ne, &ok); __ ldr(r2, GlobalObjectOperand()); - __ ldr(r2, FieldMemOperand(r2, GlobalObject::kGlobalReceiverOffset)); + __ ldr(r2, FieldMemOperand(r2, GlobalObject::kGlobalProxyOffset)); __ str(r2, MemOperand(sp, receiver_offset)); @@ -140,7 +140,11 @@ bool LCodeGen::GeneratePrologue() { info()->set_prologue_offset(masm_->pc_offset()); if (NeedsEagerFrame()) { - __ Prologue(info()->IsStub() ? BUILD_STUB_FRAME : BUILD_FUNCTION_FRAME); + if (info()->IsStub()) { + __ StubPrologue(); + } else { + __ Prologue(info()->IsCodePreAgingActive()); + } frame_is_built_ = true; info_->AddNoFrameRange(0, masm_->pc_offset()); } @@ -175,13 +179,16 @@ bool LCodeGen::GeneratePrologue() { int heap_slots = info()->num_heap_slots() - Context::MIN_CONTEXT_SLOTS; if (heap_slots > 0) { Comment(";;; Allocate local context"); + bool need_write_barrier = true; // Argument to NewContext is the function, which is in r1. if (heap_slots <= FastNewContextStub::kMaximumSlots) { FastNewContextStub stub(isolate(), heap_slots); __ CallStub(&stub); + // Result of FastNewContextStub is always in new space. + need_write_barrier = false; } else { __ push(r1); - __ CallRuntime(Runtime::kHiddenNewFunctionContext, 1); + __ CallRuntime(Runtime::kNewFunctionContext, 1); } RecordSafepoint(Safepoint::kNoLazyDeopt); // Context is returned in both r0 and cp. 
It replaces the context @@ -201,13 +208,20 @@ bool LCodeGen::GeneratePrologue() { MemOperand target = ContextOperand(cp, var->index()); __ str(r0, target); // Update the write barrier. This clobbers r3 and r0. - __ RecordWriteContextSlot( - cp, - target.offset(), - r0, - r3, - GetLinkRegisterState(), - kSaveFPRegs); + if (need_write_barrier) { + __ RecordWriteContextSlot( + cp, + target.offset(), + r0, + r3, + GetLinkRegisterState(), + kSaveFPRegs); + } else if (FLAG_debug_code) { + Label done; + __ JumpIfInNewSpace(cp, r0, &done); + __ Abort(kExpectedNewSpaceObject); + __ bind(&done); + } } } Comment(";;; End allocate local context"); @@ -233,7 +247,7 @@ void LCodeGen::GenerateOsrPrologue() { // Adjust the frame size, subsuming the unoptimized frame into the // optimized frame. int slots = GetStackSlotCount() - graph()->osr()->UnoptimizedFrameSlots(); - ASSERT(slots >= 0); + DCHECK(slots >= 0); __ sub(sp, sp, Operand(slots * kPointerSize)); } @@ -249,7 +263,7 @@ void LCodeGen::GenerateBodyInstructionPre(LInstruction* instr) { bool LCodeGen::GenerateDeferredCode() { - ASSERT(is_generating()); + DCHECK(is_generating()); if (deferred_.length() > 0) { for (int i = 0; !is_aborted() && i < deferred_.length(); i++) { LDeferredCode* code = deferred_[i]; @@ -267,8 +281,8 @@ bool LCodeGen::GenerateDeferredCode() { __ bind(code->entry()); if (NeedsDeferredFrame()) { Comment(";;; Build frame"); - ASSERT(!frame_is_built_); - ASSERT(info()->IsStub()); + DCHECK(!frame_is_built_); + DCHECK(info()->IsStub()); frame_is_built_ = true; __ PushFixedFrame(); __ mov(scratch0(), Operand(Smi::FromInt(StackFrame::STUB))); @@ -279,7 +293,7 @@ bool LCodeGen::GenerateDeferredCode() { code->Generate(); if (NeedsDeferredFrame()) { Comment(";;; Destroy frame"); - ASSERT(frame_is_built_); + DCHECK(frame_is_built_); __ pop(ip); __ PopFixedFrame(); frame_is_built_ = false; @@ -310,48 +324,79 @@ bool LCodeGen::GenerateDeoptJumpTable() { } if (deopt_jump_table_.length() > 0) { + Label needs_frame, call_deopt_entry; + Comment(";;; -------------------- Jump table --------------------"); - } - Label table_start; - __ bind(&table_start); - Label needs_frame; - for (int i = 0; i < deopt_jump_table_.length(); i++) { - __ bind(&deopt_jump_table_[i].label); - Address entry = deopt_jump_table_[i].address; - Deoptimizer::BailoutType type = deopt_jump_table_[i].bailout_type; - int id = Deoptimizer::GetDeoptimizationId(isolate(), entry, type); - if (id == Deoptimizer::kNotDeoptimizationEntry) { - Comment(";;; jump table entry %d.", i); - } else { + Address base = deopt_jump_table_[0].address; + + Register entry_offset = scratch0(); + + int length = deopt_jump_table_.length(); + for (int i = 0; i < length; i++) { + __ bind(&deopt_jump_table_[i].label); + + Deoptimizer::BailoutType type = deopt_jump_table_[i].bailout_type; + DCHECK(type == deopt_jump_table_[0].bailout_type); + Address entry = deopt_jump_table_[i].address; + int id = Deoptimizer::GetDeoptimizationId(isolate(), entry, type); + DCHECK(id != Deoptimizer::kNotDeoptimizationEntry); Comment(";;; jump table entry %d: deoptimization bailout %d.", i, id); - } - if (deopt_jump_table_[i].needs_frame) { - ASSERT(!info()->saves_caller_doubles()); - __ mov(ip, Operand(ExternalReference::ForDeoptEntry(entry))); - if (needs_frame.is_bound()) { - __ b(&needs_frame); + + // Second-level deopt table entries are contiguous and small, so instead + // of loading the full, absolute address of each one, load an immediate + // offset which will be added to the base address later. 
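That comment is the heart of the new jump-table layout: because the second-level deopt entries are contiguous and uniformly sized, each table slot can carry a small immediate instead of a full absolute address, and the base is added back once in a shared sequence. A standalone sketch of the addressing scheme under those assumptions (names are illustrative, not V8 APIs):

    #include <cstdint>
    #include <vector>

    struct DeoptJumpTable {
      uintptr_t base;                 // address of deopt entry 0
      std::vector<uint32_t> offsets;  // one small immediate per slot

      // Mirrors the generated code: mov entry_offset, #(entry - base)
      // at the slot, then add entry_offset, base; blx entry_offset.
      uintptr_t TargetOf(size_t slot) const {
        return base + offsets[slot];
      }
    };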
+ __ mov(entry_offset, Operand(entry - base)); + + if (deopt_jump_table_[i].needs_frame) { + DCHECK(!info()->saves_caller_doubles()); + if (needs_frame.is_bound()) { + __ b(&needs_frame); + } else { + __ bind(&needs_frame); + Comment(";;; call deopt with frame"); + __ PushFixedFrame(); + // This variant of deopt can only be used with stubs. Since we don't + // have a function pointer to install in the stack frame that we're + // building, install a special marker there instead. + DCHECK(info()->IsStub()); + __ mov(ip, Operand(Smi::FromInt(StackFrame::STUB))); + __ push(ip); + __ add(fp, sp, + Operand(StandardFrameConstants::kFixedFrameSizeFromFp)); + __ bind(&call_deopt_entry); + // Add the base address to the offset previously loaded in + // entry_offset. + __ add(entry_offset, entry_offset, + Operand(ExternalReference::ForDeoptEntry(base))); + __ blx(entry_offset); + } + + masm()->CheckConstPool(false, false); } else { - __ bind(&needs_frame); - __ PushFixedFrame(); - // This variant of deopt can only be used with stubs. Since we don't - // have a function pointer to install in the stack frame that we're - // building, install a special marker there instead. - ASSERT(info()->IsStub()); - __ mov(scratch0(), Operand(Smi::FromInt(StackFrame::STUB))); - __ push(scratch0()); - __ add(fp, sp, Operand(StandardFrameConstants::kFixedFrameSizeFromFp)); - __ mov(lr, Operand(pc), LeaveCC, al); - __ mov(pc, ip); + // The last entry can fall through into `call_deopt_entry`, avoiding a + // branch. + bool need_branch = ((i + 1) != length) || call_deopt_entry.is_bound(); + + if (need_branch) __ b(&call_deopt_entry); + + masm()->CheckConstPool(false, !need_branch); } - } else { + } + + if (!call_deopt_entry.is_bound()) { + Comment(";;; call deopt"); + __ bind(&call_deopt_entry); + if (info()->saves_caller_doubles()) { - ASSERT(info()->IsStub()); + DCHECK(info()->IsStub()); RestoreCallerDoubles(); } - __ mov(lr, Operand(pc), LeaveCC, al); - __ mov(pc, Operand(ExternalReference::ForDeoptEntry(entry))); + + // Add the base address to the offset previously loaded in entry_offset. 
+ __ add(entry_offset, entry_offset, + Operand(ExternalReference::ForDeoptEntry(base))); + __ blx(entry_offset); } - masm()->CheckConstPool(false, false); } // Force constant pool emission at the end of the deopt jump table to make @@ -366,7 +411,7 @@ bool LCodeGen::GenerateDeoptJumpTable() { bool LCodeGen::GenerateSafepointTable() { - ASSERT(is_done()); + DCHECK(is_done()); safepoints_.Emit(masm(), GetStackSlotCount()); return !is_aborted(); } @@ -383,7 +428,7 @@ DwVfpRegister LCodeGen::ToDoubleRegister(int index) const { Register LCodeGen::ToRegister(LOperand* op) const { - ASSERT(op->IsRegister()); + DCHECK(op->IsRegister()); return ToRegister(op->index()); } @@ -397,12 +442,12 @@ Register LCodeGen::EmitLoadRegister(LOperand* op, Register scratch) { Handle<Object> literal = constant->handle(isolate()); Representation r = chunk_->LookupLiteralRepresentation(const_op); if (r.IsInteger32()) { - ASSERT(literal->IsNumber()); + DCHECK(literal->IsNumber()); __ mov(scratch, Operand(static_cast<int32_t>(literal->Number()))); } else if (r.IsDouble()) { Abort(kEmitLoadRegisterUnsupportedDoubleImmediate); } else { - ASSERT(r.IsSmiOrTagged()); + DCHECK(r.IsSmiOrTagged()); __ Move(scratch, literal); } return scratch; @@ -416,7 +461,7 @@ Register LCodeGen::EmitLoadRegister(LOperand* op, Register scratch) { DwVfpRegister LCodeGen::ToDoubleRegister(LOperand* op) const { - ASSERT(op->IsDoubleRegister()); + DCHECK(op->IsDoubleRegister()); return ToDoubleRegister(op->index()); } @@ -432,7 +477,7 @@ DwVfpRegister LCodeGen::EmitLoadDoubleRegister(LOperand* op, Handle<Object> literal = constant->handle(isolate()); Representation r = chunk_->LookupLiteralRepresentation(const_op); if (r.IsInteger32()) { - ASSERT(literal->IsNumber()); + DCHECK(literal->IsNumber()); __ mov(ip, Operand(static_cast<int32_t>(literal->Number()))); __ vmov(flt_scratch, ip); __ vcvt_f64_s32(dbl_scratch, flt_scratch); @@ -456,7 +501,7 @@ DwVfpRegister LCodeGen::EmitLoadDoubleRegister(LOperand* op, Handle<Object> LCodeGen::ToHandle(LConstantOperand* op) const { HConstant* constant = chunk_->LookupConstant(op); - ASSERT(chunk_->LookupLiteralRepresentation(op).IsSmiOrTagged()); + DCHECK(chunk_->LookupLiteralRepresentation(op).IsSmiOrTagged()); return constant->handle(isolate()); } @@ -481,7 +526,7 @@ int32_t LCodeGen::ToRepresentation(LConstantOperand* op, HConstant* constant = chunk_->LookupConstant(op); int32_t value = constant->Integer32Value(); if (r.IsInteger32()) return value; - ASSERT(r.IsSmiOrTagged()); + DCHECK(r.IsSmiOrTagged()); return reinterpret_cast<int32_t>(Smi::FromInt(value)); } @@ -494,7 +539,7 @@ Smi* LCodeGen::ToSmi(LConstantOperand* op) const { double LCodeGen::ToDouble(LConstantOperand* op) const { HConstant* constant = chunk_->LookupConstant(op); - ASSERT(constant->HasDoubleValue()); + DCHECK(constant->HasDoubleValue()); return constant->DoubleValue(); } @@ -505,15 +550,15 @@ Operand LCodeGen::ToOperand(LOperand* op) { HConstant* constant = chunk()->LookupConstant(const_op); Representation r = chunk_->LookupLiteralRepresentation(const_op); if (r.IsSmi()) { - ASSERT(constant->HasSmiValue()); + DCHECK(constant->HasSmiValue()); return Operand(Smi::FromInt(constant->Integer32Value())); } else if (r.IsInteger32()) { - ASSERT(constant->HasInteger32Value()); + DCHECK(constant->HasInteger32Value()); return Operand(constant->Integer32Value()); } else if (r.IsDouble()) { Abort(kToOperandUnsupportedDoubleImmediate); } - ASSERT(r.IsTagged()); + DCHECK(r.IsTagged()); return Operand(constant->handle(isolate())); } else if 
(op->IsRegister()) { return Operand(ToRegister(op)); @@ -528,15 +573,15 @@ Operand LCodeGen::ToOperand(LOperand* op) { static int ArgumentsOffsetWithoutFrame(int index) { - ASSERT(index < 0); + DCHECK(index < 0); return -(index + 1) * kPointerSize; } MemOperand LCodeGen::ToMemOperand(LOperand* op) const { - ASSERT(!op->IsRegister()); - ASSERT(!op->IsDoubleRegister()); - ASSERT(op->IsStackSlot() || op->IsDoubleStackSlot()); + DCHECK(!op->IsRegister()); + DCHECK(!op->IsDoubleRegister()); + DCHECK(op->IsStackSlot() || op->IsDoubleStackSlot()); if (NeedsEagerFrame()) { return MemOperand(fp, StackSlotOffset(op->index())); } else { @@ -548,7 +593,7 @@ MemOperand LCodeGen::ToMemOperand(LOperand* op) const { MemOperand LCodeGen::ToHighMemOperand(LOperand* op) const { - ASSERT(op->IsDoubleStackSlot()); + DCHECK(op->IsDoubleStackSlot()); if (NeedsEagerFrame()) { return MemOperand(fp, StackSlotOffset(op->index()) + kPointerSize); } else { @@ -584,13 +629,13 @@ void LCodeGen::WriteTranslation(LEnvironment* environment, translation->BeginConstructStubFrame(closure_id, translation_size); break; case JS_GETTER: - ASSERT(translation_size == 1); - ASSERT(height == 0); + DCHECK(translation_size == 1); + DCHECK(height == 0); translation->BeginGetterStubFrame(closure_id); break; case JS_SETTER: - ASSERT(translation_size == 2); - ASSERT(height == 0); + DCHECK(translation_size == 2); + DCHECK(height == 0); translation->BeginSetterStubFrame(closure_id); break; case STUB: @@ -707,7 +752,7 @@ void LCodeGen::CallCodeGeneric(Handle<Code> code, LInstruction* instr, SafepointMode safepoint_mode, TargetAddressStorageMode storage_mode) { - ASSERT(instr != NULL); + DCHECK(instr != NULL); // Block literal pool emission to ensure nop indicating no inlined smi code // is in the correct position. Assembler::BlockConstPoolScope block_const_pool(masm()); @@ -727,7 +772,7 @@ void LCodeGen::CallRuntime(const Runtime::Function* function, int num_arguments, LInstruction* instr, SaveFPRegsMode save_doubles) { - ASSERT(instr != NULL); + DCHECK(instr != NULL); __ CallRuntime(function, num_arguments, save_doubles); @@ -802,9 +847,9 @@ void LCodeGen::DeoptimizeIf(Condition condition, LEnvironment* environment, Deoptimizer::BailoutType bailout_type) { RegisterEnvironmentForDeoptimization(environment, Safepoint::kNoLazyDeopt); - ASSERT(environment->HasBeenRegistered()); + DCHECK(environment->HasBeenRegistered()); int id = environment->deoptimization_index(); - ASSERT(info()->IsOptimizing() || info()->IsStub()); + DCHECK(info()->IsOptimizing() || info()->IsStub()); Address entry = Deoptimizer::GetDeoptimizationEntry(isolate(), id, bailout_type); if (entry == NULL) { @@ -827,7 +872,7 @@ void LCodeGen::DeoptimizeIf(Condition condition, __ mov(scratch, Operand(count)); __ ldr(r1, MemOperand(scratch)); __ sub(r1, r1, Operand(1), SetCC); - __ movw(r1, FLAG_deopt_every_n_times, eq); + __ mov(r1, Operand(FLAG_deopt_every_n_times), LeaveCC, eq); __ str(r1, MemOperand(scratch)); __ pop(r1); @@ -851,7 +896,7 @@ void LCodeGen::DeoptimizeIf(Condition condition, __ stop("trap_on_deopt", condition); } - ASSERT(info()->IsStub() || frame_is_built_); + DCHECK(info()->IsStub() || frame_is_built_); // Go through jump table if we need to handle condition, build frame, or // restore caller doubles. 
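The FLAG_deopt_every_n_times sequence a few lines up is a stress-testing counter: every potential bailout site decrements a shared count, and when it reaches zero the count is reloaded and the deopt is taken unconditionally. The same logic in plain C++ (a sketch; the generated code keeps the counter in memory and reloads it with a conditionally executed mov):

    #include <cstdint>

    // Returns true when this bailout site should deoptimize regardless
    // of its actual condition.
    bool ShouldForceDeopt(int32_t* counter, int32_t n) {
      if (--*counter == 0) {
        *counter = n;  // reload, as the 'eq'-predicated mov does
        return true;
      }
      return false;
    }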
if (condition == al && frame_is_built_ && @@ -887,7 +932,7 @@ void LCodeGen::PopulateDeoptimizationData(Handle<Code> code) { int length = deoptimizations_.length(); if (length == 0) return; Handle<DeoptimizationInputData> data = - DeoptimizationInputData::New(isolate(), length, TENURED); + DeoptimizationInputData::New(isolate(), length, 0, TENURED); Handle<ByteArray> translations = translations_.CreateByteArray(isolate()->factory()); @@ -938,7 +983,7 @@ int LCodeGen::DefineDeoptimizationLiteral(Handle<Object> literal) { void LCodeGen::PopulateDeoptimizationLiteralsWithInlinedFunctions() { - ASSERT(deoptimization_literals_.length() == 0); + DCHECK(deoptimization_literals_.length() == 0); const ZoneList<Handle<JSFunction> >* inlined_closures = chunk()->inlined_closures(); @@ -958,7 +1003,7 @@ void LCodeGen::RecordSafepointWithLazyDeopt( if (safepoint_mode == RECORD_SIMPLE_SAFEPOINT) { RecordSafepoint(instr->pointer_map(), Safepoint::kLazyDeopt); } else { - ASSERT(safepoint_mode == RECORD_SAFEPOINT_WITH_REGISTERS_AND_NO_ARGUMENTS); + DCHECK(safepoint_mode == RECORD_SAFEPOINT_WITH_REGISTERS_AND_NO_ARGUMENTS); RecordSafepointWithRegisters( instr->pointer_map(), 0, Safepoint::kLazyDeopt); } @@ -970,7 +1015,7 @@ void LCodeGen::RecordSafepoint( Safepoint::Kind kind, int arguments, Safepoint::DeoptMode deopt_mode) { - ASSERT(expected_safepoint_kind_ == kind); + DCHECK(expected_safepoint_kind_ == kind); const ZoneList<LOperand*>* operands = pointers->GetNormalizedOperands(); Safepoint safepoint = safepoints_.DefineSafepoint(masm(), @@ -1010,15 +1055,6 @@ void LCodeGen::RecordSafepointWithRegisters(LPointerMap* pointers, } -void LCodeGen::RecordSafepointWithRegistersAndDoubles( - LPointerMap* pointers, - int arguments, - Safepoint::DeoptMode deopt_mode) { - RecordSafepoint( - pointers, Safepoint::kWithRegistersAndDoubles, arguments, deopt_mode); -} - - void LCodeGen::RecordAndWritePosition(int position) { if (position == RelocInfo::kNoPosition) return; masm()->positions_recorder()->RecordPosition(position); @@ -1072,8 +1108,8 @@ void LCodeGen::DoParameter(LParameter* instr) { void LCodeGen::DoCallStub(LCallStub* instr) { - ASSERT(ToRegister(instr->context()).is(cp)); - ASSERT(ToRegister(instr->result()).is(r0)); + DCHECK(ToRegister(instr->context()).is(cp)); + DCHECK(ToRegister(instr->result()).is(r0)); switch (instr->hydrogen()->major_key()) { case CodeStub::RegExpExec: { RegExpExecStub stub(isolate()); @@ -1104,7 +1140,7 @@ void LCodeGen::DoUnknownOSRValue(LUnknownOSRValue* instr) { void LCodeGen::DoModByPowerOf2I(LModByPowerOf2I* instr) { Register dividend = ToRegister(instr->dividend()); int32_t divisor = instr->divisor(); - ASSERT(dividend.is(ToRegister(instr->result()))); + DCHECK(dividend.is(ToRegister(instr->result()))); // Theoretically, a variation of the branch-free code for integer division by // a power of 2 (calculating the remainder via an additional multiplication @@ -1138,7 +1174,7 @@ void LCodeGen::DoModByConstI(LModByConstI* instr) { Register dividend = ToRegister(instr->dividend()); int32_t divisor = instr->divisor(); Register result = ToRegister(instr->result()); - ASSERT(!dividend.is(result)); + DCHECK(!dividend.is(result)); if (divisor == 0) { DeoptimizeIf(al, instr->environment()); @@ -1201,7 +1237,7 @@ void LCodeGen::DoModI(LModI* instr) { // mls r3, r3, r2, r1 __ sdiv(result_reg, left_reg, right_reg); - __ mls(result_reg, result_reg, right_reg, left_reg); + __ Mls(result_reg, result_reg, right_reg, left_reg); // If we care about -0, test if the dividend is <0 and the result 
is 0. if (hmod->CheckFlag(HValue::kBailoutOnMinusZero)) { @@ -1218,15 +1254,15 @@ void LCodeGen::DoModI(LModI* instr) { Register right_reg = ToRegister(instr->right()); Register result_reg = ToRegister(instr->result()); Register scratch = scratch0(); - ASSERT(!scratch.is(left_reg)); - ASSERT(!scratch.is(right_reg)); - ASSERT(!scratch.is(result_reg)); + DCHECK(!scratch.is(left_reg)); + DCHECK(!scratch.is(right_reg)); + DCHECK(!scratch.is(result_reg)); DwVfpRegister dividend = ToDoubleRegister(instr->temp()); DwVfpRegister divisor = ToDoubleRegister(instr->temp2()); - ASSERT(!divisor.is(dividend)); + DCHECK(!divisor.is(dividend)); LowDwVfpRegister quotient = double_scratch0(); - ASSERT(!quotient.is(dividend)); - ASSERT(!quotient.is(divisor)); + DCHECK(!quotient.is(dividend)); + DCHECK(!quotient.is(divisor)); Label done; // Check for x % 0, we have to deopt in this case because we can't return a @@ -1274,8 +1310,8 @@ void LCodeGen::DoDivByPowerOf2I(LDivByPowerOf2I* instr) { Register dividend = ToRegister(instr->dividend()); int32_t divisor = instr->divisor(); Register result = ToRegister(instr->result()); - ASSERT(divisor == kMinInt || IsPowerOf2(Abs(divisor))); - ASSERT(!result.is(dividend)); + DCHECK(divisor == kMinInt || IsPowerOf2(Abs(divisor))); + DCHECK(!result.is(dividend)); // Check for (0 / -x) that will produce negative zero. HDiv* hdiv = instr->hydrogen(); @@ -1318,7 +1354,7 @@ void LCodeGen::DoDivByConstI(LDivByConstI* instr) { Register dividend = ToRegister(instr->dividend()); int32_t divisor = instr->divisor(); Register result = ToRegister(instr->result()); - ASSERT(!dividend.is(result)); + DCHECK(!dividend.is(result)); if (divisor == 0) { DeoptimizeIf(al, instr->environment()); @@ -1399,7 +1435,7 @@ void LCodeGen::DoDivI(LDivI* instr) { if (!hdiv->CheckFlag(HValue::kAllUsesTruncatingToInt32)) { // Compute remainder and deopt if it's not zero. Register remainder = scratch0(); - __ mls(remainder, result, divisor, dividend); + __ Mls(remainder, result, divisor, dividend); __ cmp(remainder, Operand::Zero()); DeoptimizeIf(ne, instr->environment()); } @@ -1412,7 +1448,7 @@ void LCodeGen::DoMultiplyAddD(LMultiplyAddD* instr) { DwVfpRegister multiplicand = ToDoubleRegister(instr->multiplicand()); // This is computed in-place. - ASSERT(addend.is(ToDoubleRegister(instr->result()))); + DCHECK(addend.is(ToDoubleRegister(instr->result()))); __ vmla(addend, multiplier, multiplicand); } @@ -1424,7 +1460,7 @@ void LCodeGen::DoMultiplySubD(LMultiplySubD* instr) { DwVfpRegister multiplicand = ToDoubleRegister(instr->multiplicand()); // This is computed in-place. - ASSERT(minuend.is(ToDoubleRegister(instr->result()))); + DCHECK(minuend.is(ToDoubleRegister(instr->result()))); __ vmls(minuend, multiplier, multiplicand); } @@ -1435,9 +1471,14 @@ void LCodeGen::DoFlooringDivByPowerOf2I(LFlooringDivByPowerOf2I* instr) { Register result = ToRegister(instr->result()); int32_t divisor = instr->divisor(); + // If the divisor is 1, return the dividend. + if (divisor == 1) { + __ Move(result, dividend); + return; + } + // If the divisor is positive, things are easy: There can be no deopts and we // can simply do an arithmetic right shift. - if (divisor == 1) return; int32_t shift = WhichPowerOf2Abs(divisor); if (divisor > 1) { __ mov(result, Operand(dividend, ASR, shift)); @@ -1450,20 +1491,22 @@ void LCodeGen::DoFlooringDivByPowerOf2I(LFlooringDivByPowerOf2I* instr) { DeoptimizeIf(eq, instr->environment()); } - // If the negation could not overflow, simply shifting is OK. 
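The reordering of DoFlooringDivByPowerOf2I that continues just below reduces to this: a positive divisor is a single arithmetic shift, while a negative divisor negates first, which is where the minus-zero and kMinInt-overflow deopts come from. A standalone sketch of the arithmetic, assuming divisor != kMinInt (handled separately in the lithium code) and an arithmetic right shift on negative values:

    #include <cstdint>
    #include <cstdlib>

    int32_t FlooringDivByPowerOf2(int32_t dividend, int32_t divisor) {
      int shift = 0;
      for (int32_t d = std::abs(divisor); d > 1; d >>= 1) ++shift;
      if (divisor > 0) return dividend >> shift;  // ASR floors toward -inf
      // Negative divisor: negate, then shift. Negating INT32_MIN would
      // overflow; that is the 'vs' deoptimization case in the generated
      // code above.
      return -dividend >> shift;
    }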
- if (!instr->hydrogen()->CheckFlag(HValue::kLeftCanBeMinInt)) { - __ mov(result, Operand(dividend, ASR, shift)); + // Dividing by -1 is basically negation, unless we overflow. + if (divisor == -1) { + if (instr->hydrogen()->CheckFlag(HValue::kLeftCanBeMinInt)) { + DeoptimizeIf(vs, instr->environment()); + } return; } - // Dividing by -1 is basically negation, unless we overflow. - if (divisor == -1) { - DeoptimizeIf(vs, instr->environment()); + // If the negation could not overflow, simply shifting is OK. + if (!instr->hydrogen()->CheckFlag(HValue::kLeftCanBeMinInt)) { + __ mov(result, Operand(result, ASR, shift)); return; } __ mov(result, Operand(kMinInt / divisor), LeaveCC, vs); - __ mov(result, Operand(dividend, ASR, shift), LeaveCC, vc); + __ mov(result, Operand(result, ASR, shift), LeaveCC, vc); } @@ -1471,7 +1514,7 @@ void LCodeGen::DoFlooringDivByConstI(LFlooringDivByConstI* instr) { Register dividend = ToRegister(instr->dividend()); int32_t divisor = instr->divisor(); Register result = ToRegister(instr->result()); - ASSERT(!dividend.is(result)); + DCHECK(!dividend.is(result)); if (divisor == 0) { DeoptimizeIf(al, instr->environment()); @@ -1497,7 +1540,7 @@ void LCodeGen::DoFlooringDivByConstI(LFlooringDivByConstI* instr) { // In the general case we may need to adjust before and after the truncating // division to get a flooring division. Register temp = ToRegister(instr->temp()); - ASSERT(!temp.is(dividend) && !temp.is(result)); + DCHECK(!temp.is(dividend) && !temp.is(result)); Label needs_adjustment, done; __ cmp(dividend, Operand::Zero()); __ b(divisor > 0 ? lt : gt, &needs_adjustment); @@ -1567,7 +1610,7 @@ void LCodeGen::DoFlooringDivI(LFlooringDivI* instr) { Label done; Register remainder = scratch0(); - __ mls(remainder, result, right, left); + __ Mls(remainder, result, right, left); __ cmp(remainder, Operand::Zero()); __ b(eq, &done); __ eor(remainder, remainder, Operand(right)); @@ -1647,7 +1690,7 @@ void LCodeGen::DoMulI(LMulI* instr) { } } else { - ASSERT(right_op->IsRegister()); + DCHECK(right_op->IsRegister()); Register right = ToRegister(right_op); if (overflow) { @@ -1686,7 +1729,7 @@ void LCodeGen::DoMulI(LMulI* instr) { void LCodeGen::DoBitI(LBitI* instr) { LOperand* left_op = instr->left(); LOperand* right_op = instr->right(); - ASSERT(left_op->IsRegister()); + DCHECK(left_op->IsRegister()); Register left = ToRegister(left_op); Register result = ToRegister(instr->result()); Operand right(no_reg); @@ -1694,7 +1737,7 @@ void LCodeGen::DoBitI(LBitI* instr) { if (right_op->IsStackSlot()) { right = Operand(EmitLoadRegister(right_op, ip)); } else { - ASSERT(right_op->IsRegister() || right_op->IsConstantOperand()); + DCHECK(right_op->IsRegister() || right_op->IsConstantOperand()); right = ToOperand(right_op); } @@ -1818,7 +1861,7 @@ void LCodeGen::DoSubI(LSubI* instr) { Register right_reg = EmitLoadRegister(right, ip); __ sub(ToRegister(result), ToRegister(left), Operand(right_reg), set_cond); } else { - ASSERT(right->IsRegister() || right->IsConstantOperand()); + DCHECK(right->IsRegister() || right->IsConstantOperand()); __ sub(ToRegister(result), ToRegister(left), ToOperand(right), set_cond); } @@ -1839,7 +1882,7 @@ void LCodeGen::DoRSubI(LRSubI* instr) { Register right_reg = EmitLoadRegister(right, ip); __ rsb(ToRegister(result), ToRegister(left), Operand(right_reg), set_cond); } else { - ASSERT(right->IsRegister() || right->IsConstantOperand()); + DCHECK(right->IsRegister() || right->IsConstantOperand()); __ rsb(ToRegister(result), ToRegister(left), 
ToOperand(right), set_cond); } @@ -1860,7 +1903,7 @@ void LCodeGen::DoConstantS(LConstantS* instr) { void LCodeGen::DoConstantD(LConstantD* instr) { - ASSERT(instr->result()->IsDoubleRegister()); + DCHECK(instr->result()->IsDoubleRegister()); DwVfpRegister result = ToDoubleRegister(instr->result()); double v = instr->value(); __ Vmov(result, v, scratch0()); @@ -1875,13 +1918,6 @@ void LCodeGen::DoConstantE(LConstantE* instr) { void LCodeGen::DoConstantT(LConstantT* instr) { Handle<Object> object = instr->value(isolate()); AllowDeferredHandleDereference smi_check; - if (instr->hydrogen()->HasObjectMap()) { - Handle<Map> object_map = instr->hydrogen()->ObjectMap().handle(); - ASSERT(object->IsHeapObject()); - ASSERT(!object_map->is_stable() || - *object_map == Handle<HeapObject>::cast(object)->map()); - USE(object_map); - } __ Move(ToRegister(instr->result()), object); } @@ -1899,10 +1935,10 @@ void LCodeGen::DoDateField(LDateField* instr) { Register scratch = ToRegister(instr->temp()); Smi* index = instr->index(); Label runtime, done; - ASSERT(object.is(result)); - ASSERT(object.is(r0)); - ASSERT(!scratch.is(scratch0())); - ASSERT(!scratch.is(object)); + DCHECK(object.is(result)); + DCHECK(object.is(r0)); + DCHECK(!scratch.is(scratch0())); + DCHECK(!scratch.is(object)); __ SmiTst(object); DeoptimizeIf(eq, instr->environment()); @@ -1944,8 +1980,8 @@ MemOperand LCodeGen::BuildSeqStringOperand(Register string, return FieldMemOperand(string, SeqString::kHeaderSize + offset); } Register scratch = scratch0(); - ASSERT(!scratch.is(string)); - ASSERT(!scratch.is(ToRegister(index))); + DCHECK(!scratch.is(string)); + DCHECK(!scratch.is(ToRegister(index))); if (encoding == String::ONE_BYTE_ENCODING) { __ add(scratch, string, Operand(ToRegister(index))); } else { @@ -2019,7 +2055,7 @@ void LCodeGen::DoAddI(LAddI* instr) { Register right_reg = EmitLoadRegister(right, ip); __ add(ToRegister(result), ToRegister(left), Operand(right_reg), set_cond); } else { - ASSERT(right->IsRegister() || right->IsConstantOperand()); + DCHECK(right->IsRegister() || right->IsConstantOperand()); __ add(ToRegister(result), ToRegister(left), ToOperand(right), set_cond); } @@ -2044,7 +2080,7 @@ void LCodeGen::DoMathMinMax(LMathMinMax* instr) { __ Move(result_reg, left_reg, condition); __ mov(result_reg, right_op, LeaveCC, NegateCondition(condition)); } else { - ASSERT(instr->hydrogen()->representation().IsDouble()); + DCHECK(instr->hydrogen()->representation().IsDouble()); DwVfpRegister left_reg = ToDoubleRegister(left); DwVfpRegister right_reg = ToDoubleRegister(right); DwVfpRegister result_reg = ToDoubleRegister(instr->result()); @@ -2131,10 +2167,10 @@ void LCodeGen::DoArithmeticD(LArithmeticD* instr) { void LCodeGen::DoArithmeticT(LArithmeticT* instr) { - ASSERT(ToRegister(instr->context()).is(cp)); - ASSERT(ToRegister(instr->left()).is(r1)); - ASSERT(ToRegister(instr->right()).is(r0)); - ASSERT(ToRegister(instr->result()).is(r0)); + DCHECK(ToRegister(instr->context()).is(cp)); + DCHECK(ToRegister(instr->left()).is(r1)); + DCHECK(ToRegister(instr->right()).is(r0)); + DCHECK(ToRegister(instr->result()).is(r0)); BinaryOpICStub stub(isolate(), instr->op(), NO_OVERWRITE); // Block literal pool emission to ensure nop indicating no inlined smi code @@ -2179,34 +2215,34 @@ void LCodeGen::DoDebugBreak(LDebugBreak* instr) { void LCodeGen::DoBranch(LBranch* instr) { Representation r = instr->hydrogen()->value()->representation(); if (r.IsInteger32() || r.IsSmi()) { - ASSERT(!info()->IsStub()); + DCHECK(!info()->IsStub()); Register 
reg = ToRegister(instr->value()); __ cmp(reg, Operand::Zero()); EmitBranch(instr, ne); } else if (r.IsDouble()) { - ASSERT(!info()->IsStub()); + DCHECK(!info()->IsStub()); DwVfpRegister reg = ToDoubleRegister(instr->value()); // Test the double value. Zero and NaN are false. __ VFPCompareAndSetFlags(reg, 0.0); __ cmp(r0, r0, vs); // If NaN, set the Z flag. (NaN -> false) EmitBranch(instr, ne); } else { - ASSERT(r.IsTagged()); + DCHECK(r.IsTagged()); Register reg = ToRegister(instr->value()); HType type = instr->hydrogen()->value()->type(); if (type.IsBoolean()) { - ASSERT(!info()->IsStub()); + DCHECK(!info()->IsStub()); __ CompareRoot(reg, Heap::kTrueValueRootIndex); EmitBranch(instr, eq); } else if (type.IsSmi()) { - ASSERT(!info()->IsStub()); + DCHECK(!info()->IsStub()); __ cmp(reg, Operand::Zero()); EmitBranch(instr, ne); } else if (type.IsJSArray()) { - ASSERT(!info()->IsStub()); + DCHECK(!info()->IsStub()); EmitBranch(instr, al); } else if (type.IsHeapNumber()) { - ASSERT(!info()->IsStub()); + DCHECK(!info()->IsStub()); DwVfpRegister dbl_scratch = double_scratch0(); __ vldr(dbl_scratch, FieldMemOperand(reg, HeapNumber::kValueOffset)); // Test the double value. Zero and NaN are false. @@ -2214,7 +2250,7 @@ void LCodeGen::DoBranch(LBranch* instr) { __ cmp(r0, r0, vs); // If NaN, set the Z flag. (NaN) EmitBranch(instr, ne); } else if (type.IsString()) { - ASSERT(!info()->IsStub()); + DCHECK(!info()->IsStub()); __ ldr(ip, FieldMemOperand(reg, String::kLengthOffset)); __ cmp(ip, Operand::Zero()); EmitBranch(instr, ne); @@ -2359,7 +2395,10 @@ Condition LCodeGen::TokenToCondition(Token::Value op, bool is_unsigned) { void LCodeGen::DoCompareNumericAndBranch(LCompareNumericAndBranch* instr) { LOperand* left = instr->left(); LOperand* right = instr->right(); - Condition cond = TokenToCondition(instr->op(), false); + bool is_unsigned = + instr->hydrogen()->left()->CheckFlag(HInstruction::kUint32) || + instr->hydrogen()->right()->CheckFlag(HInstruction::kUint32); + Condition cond = TokenToCondition(instr->op(), is_unsigned); if (left->IsConstantOperand() && right->IsConstantOperand()) { // We can statically evaluate the comparison. @@ -2391,8 +2430,8 @@ void LCodeGen::DoCompareNumericAndBranch(LCompareNumericAndBranch* instr) { } else { __ cmp(ToRegister(right), Operand(value)); } - // We transposed the operands. Reverse the condition. - cond = ReverseCondition(cond); + // We commuted the operands, so commute the condition. + cond = CommuteCondition(cond); } else { __ cmp(ToRegister(left), ToRegister(right)); } @@ -2433,7 +2472,7 @@ void LCodeGen::DoCmpHoleAndBranch(LCmpHoleAndBranch* instr) { void LCodeGen::DoCompareMinusZeroAndBranch(LCompareMinusZeroAndBranch* instr) { Representation rep = instr->hydrogen()->value()->representation(); - ASSERT(!rep.IsInteger32()); + DCHECK(!rep.IsInteger32()); Register scratch = ToRegister(instr->temp()); if (rep.IsDouble()) { @@ -2515,7 +2554,7 @@ void LCodeGen::DoIsStringAndBranch(LIsStringAndBranch* instr) { Register temp1 = ToRegister(instr->temp()); SmiCheck check_needed = - instr->hydrogen()->value()->IsHeapObject() + instr->hydrogen()->value()->type().IsHeapObject() ? 
OMIT_SMI_CHECK : INLINE_SMI_CHECK; Condition true_cond = EmitIsString(reg, temp1, instr->FalseLabel(chunk_), check_needed); @@ -2535,7 +2574,7 @@ void LCodeGen::DoIsUndetectableAndBranch(LIsUndetectableAndBranch* instr) { Register input = ToRegister(instr->value()); Register temp = ToRegister(instr->temp()); - if (!instr->hydrogen()->value()->IsHeapObject()) { + if (!instr->hydrogen()->value()->type().IsHeapObject()) { __ JumpIfSmi(input, instr->FalseLabel(chunk_)); } __ ldr(temp, FieldMemOperand(input, HeapObject::kMapOffset)); @@ -2566,7 +2605,7 @@ static Condition ComputeCompareCondition(Token::Value op) { void LCodeGen::DoStringCompareAndBranch(LStringCompareAndBranch* instr) { - ASSERT(ToRegister(instr->context()).is(cp)); + DCHECK(ToRegister(instr->context()).is(cp)); Token::Value op = instr->op(); Handle<Code> ic = CompareIC::GetUninitialized(isolate(), op); @@ -2584,7 +2623,7 @@ static InstanceType TestType(HHasInstanceTypeAndBranch* instr) { InstanceType from = instr->from(); InstanceType to = instr->to(); if (from == FIRST_TYPE) return to; - ASSERT(from == to || to == LAST_TYPE); + DCHECK(from == to || to == LAST_TYPE); return from; } @@ -2604,7 +2643,7 @@ void LCodeGen::DoHasInstanceTypeAndBranch(LHasInstanceTypeAndBranch* instr) { Register scratch = scratch0(); Register input = ToRegister(instr->value()); - if (!instr->hydrogen()->value()->IsHeapObject()) { + if (!instr->hydrogen()->value()->type().IsHeapObject()) { __ JumpIfSmi(input, instr->FalseLabel(chunk_)); } @@ -2644,9 +2683,9 @@ void LCodeGen::EmitClassOfTest(Label* is_true, Register input, Register temp, Register temp2) { - ASSERT(!input.is(temp)); - ASSERT(!input.is(temp2)); - ASSERT(!temp.is(temp2)); + DCHECK(!input.is(temp)); + DCHECK(!input.is(temp2)); + DCHECK(!temp.is(temp2)); __ JumpIfSmi(input, is_false); @@ -2727,9 +2766,9 @@ void LCodeGen::DoCmpMapAndBranch(LCmpMapAndBranch* instr) { void LCodeGen::DoInstanceOf(LInstanceOf* instr) { - ASSERT(ToRegister(instr->context()).is(cp)); - ASSERT(ToRegister(instr->left()).is(r0)); // Object is in r0. - ASSERT(ToRegister(instr->right()).is(r1)); // Function is in r1. + DCHECK(ToRegister(instr->context()).is(cp)); + DCHECK(ToRegister(instr->left()).is(r0)); // Object is in r0. + DCHECK(ToRegister(instr->right()).is(r1)); // Function is in r1. InstanceofStub stub(isolate(), InstanceofStub::kArgsInRegisters); CallCode(stub.GetCode(), RelocInfo::CODE_TARGET, instr); @@ -2747,13 +2786,17 @@ void LCodeGen::DoInstanceOfKnownGlobal(LInstanceOfKnownGlobal* instr) { LInstanceOfKnownGlobal* instr) : LDeferredCode(codegen), instr_(instr) { } virtual void Generate() V8_OVERRIDE { - codegen()->DoDeferredInstanceOfKnownGlobal(instr_, &map_check_); + codegen()->DoDeferredInstanceOfKnownGlobal(instr_, &map_check_, + &load_bool_); } virtual LInstruction* instr() V8_OVERRIDE { return instr_; } Label* map_check() { return &map_check_; } + Label* load_bool() { return &load_bool_; } + private: LInstanceOfKnownGlobal* instr_; Label map_check_; + Label load_bool_; }; DeferredInstanceOfKnownGlobal* deferred; @@ -2781,12 +2824,12 @@ void LCodeGen::DoInstanceOfKnownGlobal(LInstanceOfKnownGlobal* instr) { // We use Factory::the_hole_value() on purpose instead of loading from the // root array to force relocation to be able to later patch with // the cached map. 
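The comment above describes the patching trick: embedding the-hole through a Cell forces a relocation entry, so the deferred path can later overwrite that embedded value with the cached map (and, via the load_bool label bound just below, with the true/false answer). The deferred code identifies those sites by passing instruction deltas in registers; recovering a patch address from such a delta is plain pointer arithmetic, sketched here with illustrative names (the real code scales by kPointerSize and lets the InstanceofStub do the subtraction):

    #include <cstdint>

    const int kInstrSize = 4;  // ARM instructions are fixed width

    // Walk back 'delta' instructions from the stub's return address to
    // find the instruction that should be patched.
    uintptr_t PatchSiteFromDelta(uintptr_t return_address, int delta) {
      return return_address - delta * kInstrSize;
    }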
- PredictableCodeSizeScope predictable(masm_, 5 * Assembler::kInstrSize); Handle<Cell> cell = factory()->NewCell(factory()->the_hole_value()); __ mov(ip, Operand(Handle<Object>(cell))); __ ldr(ip, FieldMemOperand(ip, PropertyCell::kValueOffset)); __ cmp(map, Operand(ip)); __ b(ne, &cache_miss); + __ bind(deferred->load_bool()); // Label for calculating code patching. // We use Factory::the_hole_value() on purpose instead of loading from the // root array to force relocation to be able to later patch // with true or false. @@ -2820,7 +2863,8 @@ void LCodeGen::DoInstanceOfKnownGlobal(LInstanceOfKnownGlobal* instr) { void LCodeGen::DoDeferredInstanceOfKnownGlobal(LInstanceOfKnownGlobal* instr, - Label* map_check) { + Label* map_check, + Label* bool_load) { InstanceofStub::Flags flags = InstanceofStub::kNoFlags; flags = static_cast<InstanceofStub::Flags>( flags | InstanceofStub::kArgsInRegisters); @@ -2830,25 +2874,39 @@ void LCodeGen::DoDeferredInstanceOfKnownGlobal(LInstanceOfKnownGlobal* instr, flags | InstanceofStub::kReturnTrueFalseObject); InstanceofStub stub(isolate(), flags); - PushSafepointRegistersScope scope(this, Safepoint::kWithRegisters); + PushSafepointRegistersScope scope(this); LoadContextFromDeferred(instr->context()); __ Move(InstanceofStub::right(), instr->function()); - static const int kAdditionalDelta = 4; + + int call_size = CallCodeSize(stub.GetCode(), RelocInfo::CODE_TARGET); + int additional_delta = (call_size / Assembler::kInstrSize) + 4; // Make sure that code size is predictable, since we use specific constant // offsets in the code to find embedded values. - PredictableCodeSizeScope predictable(masm_, 5 * Assembler::kInstrSize); - int delta = masm_->InstructionsGeneratedSince(map_check) + kAdditionalDelta; - Label before_push_delta; - __ bind(&before_push_delta); - __ BlockConstPoolFor(kAdditionalDelta); - // r5 is used to communicate the offset to the location of the map check. - __ mov(r5, Operand(delta * kPointerSize)); - // The mov above can generate one or two instructions. The delta was computed - // for two instructions, so we need to pad here in case of one instruction. - if (masm_->InstructionsGeneratedSince(&before_push_delta) != 2) { - ASSERT_EQ(1, masm_->InstructionsGeneratedSince(&before_push_delta)); - __ nop(); + PredictableCodeSizeScope predictable( + masm_, (additional_delta + 1) * Assembler::kInstrSize); + // Make sure we don't emit any additional entries in the constant pool before + // the call to ensure that the CallCodeSize() calculated the correct number of + // instructions for the constant pool load. + { + ConstantPoolUnavailableScope constant_pool_unavailable(masm_); + int map_check_delta = + masm_->InstructionsGeneratedSince(map_check) + additional_delta; + int bool_load_delta = + masm_->InstructionsGeneratedSince(bool_load) + additional_delta; + Label before_push_delta; + __ bind(&before_push_delta); + __ BlockConstPoolFor(additional_delta); + // r5 is used to communicate the offset to the location of the map check. + __ mov(r5, Operand(map_check_delta * kPointerSize)); + // r6 is used to communicate the offset to the location of the bool load. + __ mov(r6, Operand(bool_load_delta * kPointerSize)); + // The mov above can generate one or two instructions. The delta was + // computed for two instructions, so we need to pad here in case of one + // instruction.
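The padding loop that follows exists because a macro-assembler mov may expand to either one or two instructions, while the deltas above were budgeted for a fixed count; emitting nops up to the budget keeps every offset valid. A standalone sketch of the same idea (the NOP encoding here is an assumption about ARMv7, not taken from the diff):

    #include <cstdint>
    #include <vector>

    // Pad an instruction sequence out to its budgeted length.
    void PadToBudget(std::vector<uint32_t>* code, size_t start_index,
                     size_t budget_instrs) {
      const uint32_t kArmNop = 0xE320F000;  // assumed ARMv7 NOP encoding
      while (code->size() - start_index < budget_instrs) {
        code->push_back(kArmNop);
      }
    }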
+ while (masm_->InstructionsGeneratedSince(&before_push_delta) != 4) { + __ nop(); + } } CallCodeGeneric(stub.GetCode(), RelocInfo::CODE_TARGET, @@ -2863,7 +2921,7 @@ void LCodeGen::DoDeferredInstanceOfKnownGlobal(LInstanceOfKnownGlobal* instr, void LCodeGen::DoCmpT(LCmpT* instr) { - ASSERT(ToRegister(instr->context()).is(cp)); + DCHECK(ToRegister(instr->context()).is(cp)); Token::Value op = instr->op(); Handle<Code> ic = CompareIC::GetUninitialized(isolate(), op); @@ -2932,11 +2990,20 @@ void LCodeGen::DoLoadGlobalCell(LLoadGlobalCell* instr) { void LCodeGen::DoLoadGlobalGeneric(LLoadGlobalGeneric* instr) { - ASSERT(ToRegister(instr->context()).is(cp)); - ASSERT(ToRegister(instr->global_object()).is(r0)); - ASSERT(ToRegister(instr->result()).is(r0)); - - __ mov(r2, Operand(instr->name())); + DCHECK(ToRegister(instr->context()).is(cp)); + DCHECK(ToRegister(instr->global_object()).is(LoadIC::ReceiverRegister())); + DCHECK(ToRegister(instr->result()).is(r0)); + + __ mov(LoadIC::NameRegister(), Operand(instr->name())); + if (FLAG_vector_ics) { + Register vector = ToRegister(instr->temp_vector()); + DCHECK(vector.is(LoadIC::VectorRegister())); + __ Move(vector, instr->hydrogen()->feedback_vector()); + // No need to allocate this register. + DCHECK(LoadIC::SlotRegister().is(r0)); + __ mov(LoadIC::SlotRegister(), + Operand(Smi::FromInt(instr->hydrogen()->slot()))); + } ContextualMode mode = instr->for_typeof() ? NOT_CONTEXTUAL : CONTEXTUAL; Handle<Code> ic = LoadIC::initialize_stub(isolate(), mode); CallCode(ic, RelocInfo::CODE_TARGET, instr); @@ -3006,7 +3073,7 @@ void LCodeGen::DoStoreContextSlot(LStoreContextSlot* instr) { __ str(value, target); if (instr->hydrogen()->NeedsWriteBarrier()) { SmiCheck check_needed = - instr->hydrogen()->value()->IsHeapObject() + instr->hydrogen()->value()->type().IsHeapObject() ? OMIT_SMI_CHECK : INLINE_SMI_CHECK; __ RecordWriteContextSlot(context, target.offset(), @@ -3051,12 +3118,21 @@ void LCodeGen::DoLoadNamedField(LLoadNamedField* instr) { void LCodeGen::DoLoadNamedGeneric(LLoadNamedGeneric* instr) { - ASSERT(ToRegister(instr->context()).is(cp)); - ASSERT(ToRegister(instr->object()).is(r0)); - ASSERT(ToRegister(instr->result()).is(r0)); + DCHECK(ToRegister(instr->context()).is(cp)); + DCHECK(ToRegister(instr->object()).is(LoadIC::ReceiverRegister())); + DCHECK(ToRegister(instr->result()).is(r0)); // Name is always in r2. - __ mov(r2, Operand(instr->name())); + __ mov(LoadIC::NameRegister(), Operand(instr->name())); + if (FLAG_vector_ics) { + Register vector = ToRegister(instr->temp_vector()); + DCHECK(vector.is(LoadIC::VectorRegister())); + __ Move(vector, instr->hydrogen()->feedback_vector()); + // No need to allocate this register. + DCHECK(LoadIC::SlotRegister().is(r0)); + __ mov(LoadIC::SlotRegister(), + Operand(Smi::FromInt(instr->hydrogen()->slot()))); + } Handle<Code> ic = LoadIC::initialize_stub(isolate(), NOT_CONTEXTUAL); CallCode(ic, RelocInfo::CODE_TARGET, instr, NEVER_INLINE_TARGET_ADDRESS); } @@ -3067,17 +3143,6 @@ void LCodeGen::DoLoadFunctionPrototype(LLoadFunctionPrototype* instr) { Register function = ToRegister(instr->function()); Register result = ToRegister(instr->result()); - // Check that the function really is a function. Load map into the - // result register. - __ CompareObjectType(function, result, scratch, JS_FUNCTION_TYPE); - DeoptimizeIf(ne, instr->environment()); - - // Make sure that the function has an instance prototype. 
- Label non_instance; - __ ldrb(scratch, FieldMemOperand(result, Map::kBitFieldOffset)); - __ tst(scratch, Operand(1 << Map::kHasNonInstancePrototype)); - __ b(ne, &non_instance); - // Get the prototype or initial map from the function. __ ldr(result, FieldMemOperand(function, JSFunction::kPrototypeOrInitialMapOffset)); @@ -3094,12 +3159,6 @@ void LCodeGen::DoLoadFunctionPrototype(LLoadFunctionPrototype* instr) { // Get the prototype from the initial map. __ ldr(result, FieldMemOperand(result, Map::kPrototypeOffset)); - __ jmp(&done); - - // Non-instance prototype: Fetch prototype from constructor field - // in initial map. - __ bind(&non_instance); - __ ldr(result, FieldMemOperand(result, Map::kConstructorOffset)); // All done. __ bind(&done); @@ -3165,17 +3224,13 @@ void LCodeGen::DoLoadKeyedExternalArray(LLoadKeyed* instr) { int element_size_shift = ElementsKindToShiftSize(elements_kind); int shift_size = (instr->hydrogen()->key()->representation().IsSmi()) ? (element_size_shift - kSmiTagSize) : element_size_shift; - int additional_offset = IsFixedTypedArrayElementsKind(elements_kind) - ? FixedTypedArrayBase::kDataOffset - kHeapObjectTag - : 0; - + int base_offset = instr->base_offset(); if (elements_kind == EXTERNAL_FLOAT32_ELEMENTS || elements_kind == FLOAT32_ELEMENTS || elements_kind == EXTERNAL_FLOAT64_ELEMENTS || elements_kind == FLOAT64_ELEMENTS) { - int base_offset = - (instr->additional_index() << element_size_shift) + additional_offset; + int base_offset = instr->base_offset(); DwVfpRegister result = ToDoubleRegister(instr->result()); Operand operand = key_is_constant ? Operand(constant_key << element_size_shift) @@ -3185,15 +3240,14 @@ void LCodeGen::DoLoadKeyedExternalArray(LLoadKeyed* instr) { elements_kind == FLOAT32_ELEMENTS) { __ vldr(double_scratch0().low(), scratch0(), base_offset); __ vcvt_f64_f32(result, double_scratch0().low()); - } else { // loading doubles, not floats. + } else { // i.e. 
elements_kind == EXTERNAL_DOUBLE_ELEMENTS __ vldr(result, scratch0(), base_offset); } } else { Register result = ToRegister(instr->result()); MemOperand mem_operand = PrepareKeyedOperand( key, external_pointer, key_is_constant, constant_key, - element_size_shift, shift_size, - instr->additional_index(), additional_offset); + element_size_shift, shift_size, base_offset); switch (elements_kind) { case EXTERNAL_INT8_ELEMENTS: case INT8_ELEMENTS: @@ -3253,15 +3307,13 @@ void LCodeGen::DoLoadKeyedFixedDoubleArray(LLoadKeyed* instr) { int element_size_shift = ElementsKindToShiftSize(FAST_DOUBLE_ELEMENTS); - int base_offset = - FixedDoubleArray::kHeaderSize - kHeapObjectTag + - (instr->additional_index() << element_size_shift); + int base_offset = instr->base_offset(); if (key_is_constant) { int constant_key = ToInteger32(LConstantOperand::cast(instr->key())); if (constant_key & 0xF0000000) { Abort(kArrayIndexConstantValueTooBig); } - base_offset += constant_key << element_size_shift; + base_offset += constant_key * kDoubleSize; } __ add(scratch, elements, Operand(base_offset)); @@ -3287,12 +3339,11 @@ void LCodeGen::DoLoadKeyedFixedArray(LLoadKeyed* instr) { Register result = ToRegister(instr->result()); Register scratch = scratch0(); Register store_base = scratch; - int offset = 0; + int offset = instr->base_offset(); if (instr->key()->IsConstantOperand()) { LConstantOperand* const_operand = LConstantOperand::cast(instr->key()); - offset = FixedArray::OffsetOfElementAt(ToInteger32(const_operand) + - instr->additional_index()); + offset += ToInteger32(const_operand) * kPointerSize; store_base = elements; } else { Register key = ToRegister(instr->key()); @@ -3305,9 +3356,8 @@ void LCodeGen::DoLoadKeyedFixedArray(LLoadKeyed* instr) { } else { __ add(scratch, elements, Operand(key, LSL, kPointerSizeLog2)); } - offset = FixedArray::OffsetOfElementAt(instr->additional_index()); } - __ ldr(result, FieldMemOperand(store_base, offset)); + __ ldr(result, MemOperand(store_base, offset)); // Check for the hole value. if (instr->hydrogen()->RequiresHoleCheck()) { @@ -3340,53 +3390,45 @@ MemOperand LCodeGen::PrepareKeyedOperand(Register key, int constant_key, int element_size, int shift_size, - int additional_index, - int additional_offset) { - int base_offset = (additional_index << element_size) + additional_offset; + int base_offset) { if (key_is_constant) { - return MemOperand(base, - base_offset + (constant_key << element_size)); - } - - if (additional_offset != 0) { - __ mov(scratch0(), Operand(base_offset)); - if (shift_size >= 0) { - __ add(scratch0(), scratch0(), Operand(key, LSL, shift_size)); - } else { - ASSERT_EQ(-1, shift_size); - // key can be negative, so using ASR here. 
- __ add(scratch0(), scratch0(), Operand(key, ASR, 1)); - } - return MemOperand(base, scratch0()); + return MemOperand(base, (constant_key << element_size) + base_offset); } - if (additional_index != 0) { - additional_index *= 1 << (element_size - shift_size); - __ add(scratch0(), key, Operand(additional_index)); - } - - if (additional_index == 0) { + if (base_offset == 0) { if (shift_size >= 0) { return MemOperand(base, key, LSL, shift_size); } else { - ASSERT_EQ(-1, shift_size); + DCHECK_EQ(-1, shift_size); return MemOperand(base, key, LSR, 1); } } if (shift_size >= 0) { - return MemOperand(base, scratch0(), LSL, shift_size); + __ add(scratch0(), base, Operand(key, LSL, shift_size)); + return MemOperand(scratch0(), base_offset); } else { - ASSERT_EQ(-1, shift_size); - return MemOperand(base, scratch0(), LSR, 1); + DCHECK_EQ(-1, shift_size); + __ add(scratch0(), base, Operand(key, ASR, 1)); + return MemOperand(scratch0(), base_offset); } } void LCodeGen::DoLoadKeyedGeneric(LLoadKeyedGeneric* instr) { - ASSERT(ToRegister(instr->context()).is(cp)); - ASSERT(ToRegister(instr->object()).is(r1)); - ASSERT(ToRegister(instr->key()).is(r0)); + DCHECK(ToRegister(instr->context()).is(cp)); + DCHECK(ToRegister(instr->object()).is(LoadIC::ReceiverRegister())); + DCHECK(ToRegister(instr->key()).is(LoadIC::NameRegister())); + + if (FLAG_vector_ics) { + Register vector = ToRegister(instr->temp_vector()); + DCHECK(vector.is(LoadIC::VectorRegister())); + __ Move(vector, instr->hydrogen()->feedback_vector()); + // No need to allocate this register. + DCHECK(LoadIC::SlotRegister().is(r0)); + __ mov(LoadIC::SlotRegister(), + Operand(Smi::FromInt(instr->hydrogen()->slot()))); + } Handle<Code> ic = isolate()->builtins()->KeyedLoadIC_Initialize(); CallCode(ic, RelocInfo::CODE_TARGET, instr, NEVER_INLINE_TARGET_ADDRESS); @@ -3482,8 +3524,7 @@ void LCodeGen::DoWrapReceiver(LWrapReceiver* instr) { __ ldr(result, FieldMemOperand(function, JSFunction::kContextOffset)); __ ldr(result, ContextOperand(result, Context::GLOBAL_OBJECT_INDEX)); - __ ldr(result, - FieldMemOperand(result, GlobalObject::kGlobalReceiverOffset)); + __ ldr(result, FieldMemOperand(result, GlobalObject::kGlobalProxyOffset)); if (result.is(receiver)) { __ bind(&result_in_receiver); @@ -3503,9 +3544,9 @@ void LCodeGen::DoApplyArguments(LApplyArguments* instr) { Register length = ToRegister(instr->length()); Register elements = ToRegister(instr->elements()); Register scratch = scratch0(); - ASSERT(receiver.is(r0)); // Used for parameter count. - ASSERT(function.is(r1)); // Required by InvokeFunction. - ASSERT(ToRegister(instr->result()).is(r0)); + DCHECK(receiver.is(r0)); // Used for parameter count. + DCHECK(function.is(r1)); // Required by InvokeFunction. + DCHECK(ToRegister(instr->result()).is(r0)); // Copy the arguments to this function possibly from the // adaptor frame below it. @@ -3533,7 +3574,7 @@ void LCodeGen::DoApplyArguments(LApplyArguments* instr) { __ b(ne, &loop); __ bind(&invoke); - ASSERT(instr->HasPointerMap()); + DCHECK(instr->HasPointerMap()); LPointerMap* pointers = instr->pointer_map(); SafepointGenerator safepoint_generator( this, pointers, Safepoint::kLazyDeopt); @@ -3573,19 +3614,19 @@ void LCodeGen::DoContext(LContext* instr) { __ ldr(result, MemOperand(fp, StandardFrameConstants::kContextOffset)); } else { // If there is no frame, the context must be in cp. 
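Stepping back to PrepareKeyedOperand, rewritten above: everything now folds into a single base_offset, so a constant key becomes a pure immediate displacement, while a register key is shifted and added to the base first so the displacement can still be applied. The effective address is the same in all paths; a sketch (shift_size == -1 is the smi-key case, an arithmetic untagging shift, which is why ASR is used for possibly negative keys):

    #include <cstdint>

    uintptr_t KeyedElementAddress(uintptr_t base, int32_t key,
                                  int shift_size, int32_t base_offset) {
      intptr_t scaled = shift_size >= 0
          ? static_cast<intptr_t>(key) << shift_size
          : static_cast<intptr_t>(key) >> 1;  // smi untag, arithmetic
      return base + scaled + base_offset;
    }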
- ASSERT(result.is(cp)); + DCHECK(result.is(cp)); } } void LCodeGen::DoDeclareGlobals(LDeclareGlobals* instr) { - ASSERT(ToRegister(instr->context()).is(cp)); + DCHECK(ToRegister(instr->context()).is(cp)); __ push(cp); // The context is the first argument. __ Move(scratch0(), instr->hydrogen()->pairs()); __ push(scratch0()); __ mov(scratch0(), Operand(Smi::FromInt(instr->hydrogen()->flags()))); __ push(scratch0()); - CallRuntime(Runtime::kHiddenDeclareGlobals, 3, instr); + CallRuntime(Runtime::kDeclareGlobals, 3, instr); } @@ -3631,8 +3672,8 @@ void LCodeGen::CallKnownFunction(Handle<JSFunction> function, void LCodeGen::DoDeferredMathAbsTaggedHeapNumber(LMathAbs* instr) { - ASSERT(instr->context() != NULL); - ASSERT(ToRegister(instr->context()).is(cp)); + DCHECK(instr->context() != NULL); + DCHECK(ToRegister(instr->context()).is(cp)); Register input = ToRegister(instr->value()); Register result = ToRegister(instr->result()); Register scratch = scratch0(); @@ -3657,7 +3698,7 @@ void LCodeGen::DoDeferredMathAbsTaggedHeapNumber(LMathAbs* instr) { // Input is negative. Reverse its sign. // Preserve the value of all registers. { - PushSafepointRegistersScope scope(this, Safepoint::kWithRegisters); + PushSafepointRegistersScope scope(this); // Registers were saved at the safepoint, so we can use // many scratch registers. @@ -3676,7 +3717,7 @@ void LCodeGen::DoDeferredMathAbsTaggedHeapNumber(LMathAbs* instr) { // Slow case: Call the runtime system to do the number allocation. __ bind(&slow); - CallRuntimeFromDeferred(Runtime::kHiddenAllocateHeapNumber, 0, instr, + CallRuntimeFromDeferred(Runtime::kAllocateHeapNumber, 0, instr, instr->context()); // Set the pointer to the new heap number in tmp. if (!tmp1.is(r0)) __ mov(tmp1, Operand(r0)); @@ -3807,6 +3848,15 @@ void LCodeGen::DoMathRound(LMathRound* instr) { } +void LCodeGen::DoMathFround(LMathFround* instr) { + DwVfpRegister input_reg = ToDoubleRegister(instr->value()); + DwVfpRegister output_reg = ToDoubleRegister(instr->result()); + LowDwVfpRegister scratch = double_scratch0(); + __ vcvt_f32_f64(scratch.low(), input_reg); + __ vcvt_f64_f32(output_reg, scratch.low()); +} + + void LCodeGen::DoMathSqrt(LMathSqrt* instr) { DwVfpRegister input = ToDoubleRegister(instr->value()); DwVfpRegister result = ToDoubleRegister(instr->result()); @@ -3839,12 +3889,12 @@ void LCodeGen::DoPower(LPower* instr) { Representation exponent_type = instr->hydrogen()->right()->representation(); // Having marked this as a call, we can use any registers. // Just make sure that the input/output registers are the expected ones. 
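DoMathFround, added above, is the entirety of Math.fround's numeric semantics: narrow to float precision and widen back, exactly what the vcvt_f32_f64 / vcvt_f64_f32 pair does through the scratch register. In portable C++:

    // Round a double to the nearest representable float, then widen back.
    double Fround(double value) {
      return static_cast<double>(static_cast<float>(value));
    }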
- ASSERT(!instr->right()->IsDoubleRegister() || + DCHECK(!instr->right()->IsDoubleRegister() || ToDoubleRegister(instr->right()).is(d1)); - ASSERT(!instr->right()->IsRegister() || + DCHECK(!instr->right()->IsRegister() || ToRegister(instr->right()).is(r2)); - ASSERT(ToDoubleRegister(instr->left()).is(d0)); - ASSERT(ToDoubleRegister(instr->result()).is(d2)); + DCHECK(ToDoubleRegister(instr->left()).is(d0)); + DCHECK(ToDoubleRegister(instr->result()).is(d2)); if (exponent_type.IsSmi()) { MathPowStub stub(isolate(), MathPowStub::TAGGED); @@ -3863,7 +3913,7 @@ void LCodeGen::DoPower(LPower* instr) { MathPowStub stub(isolate(), MathPowStub::INTEGER); __ CallStub(&stub); } else { - ASSERT(exponent_type.IsDouble()); + DCHECK(exponent_type.IsDouble()); MathPowStub stub(isolate(), MathPowStub::DOUBLE); __ CallStub(&stub); } @@ -3901,9 +3951,9 @@ void LCodeGen::DoMathClz32(LMathClz32* instr) { void LCodeGen::DoInvokeFunction(LInvokeFunction* instr) { - ASSERT(ToRegister(instr->context()).is(cp)); - ASSERT(ToRegister(instr->function()).is(r1)); - ASSERT(instr->HasPointerMap()); + DCHECK(ToRegister(instr->context()).is(cp)); + DCHECK(ToRegister(instr->function()).is(r1)); + DCHECK(instr->HasPointerMap()); Handle<JSFunction> known_function = instr->hydrogen()->known_function(); if (known_function.is_null()) { @@ -3922,7 +3972,7 @@ void LCodeGen::DoInvokeFunction(LInvokeFunction* instr) { void LCodeGen::DoCallWithDescriptor(LCallWithDescriptor* instr) { - ASSERT(ToRegister(instr->result()).is(r0)); + DCHECK(ToRegister(instr->result()).is(r0)); LPointerMap* pointers = instr->pointer_map(); SafepointGenerator generator(this, pointers, Safepoint::kLazyDeopt); @@ -3931,15 +3981,21 @@ void LCodeGen::DoCallWithDescriptor(LCallWithDescriptor* instr) { LConstantOperand* target = LConstantOperand::cast(instr->target()); Handle<Code> code = Handle<Code>::cast(ToHandle(target)); generator.BeforeCall(__ CallSize(code, RelocInfo::CODE_TARGET)); - PlatformCallInterfaceDescriptor* call_descriptor = + PlatformInterfaceDescriptor* call_descriptor = instr->descriptor()->platform_specific_descriptor(); __ Call(code, RelocInfo::CODE_TARGET, TypeFeedbackId::None(), al, call_descriptor->storage_mode()); } else { - ASSERT(instr->target()->IsRegister()); + DCHECK(instr->target()->IsRegister()); Register target = ToRegister(instr->target()); generator.BeforeCall(__ CallSize(target)); - __ add(target, target, Operand(Code::kHeaderSize - kHeapObjectTag)); + // Make sure we don't emit any additional entries in the constant pool + // before the call to ensure that the CallCodeSize() calculated the correct + // number of instructions for the constant pool load. 
+ { + ConstantPoolUnavailableScope constant_pool_unavailable(masm_); + __ add(target, target, Operand(Code::kHeaderSize - kHeapObjectTag)); + } __ Call(target); } generator.AfterCall(); @@ -3947,8 +4003,8 @@ void LCodeGen::DoCallWithDescriptor(LCallWithDescriptor* instr) { void LCodeGen::DoCallJSFunction(LCallJSFunction* instr) { - ASSERT(ToRegister(instr->function()).is(r1)); - ASSERT(ToRegister(instr->result()).is(r0)); + DCHECK(ToRegister(instr->function()).is(r1)); + DCHECK(ToRegister(instr->result()).is(r0)); if (instr->hydrogen()->pass_argument_count()) { __ mov(r0, Operand(instr->arity())); @@ -3966,9 +4022,9 @@ void LCodeGen::DoCallJSFunction(LCallJSFunction* instr) { void LCodeGen::DoCallFunction(LCallFunction* instr) { - ASSERT(ToRegister(instr->context()).is(cp)); - ASSERT(ToRegister(instr->function()).is(r1)); - ASSERT(ToRegister(instr->result()).is(r0)); + DCHECK(ToRegister(instr->context()).is(cp)); + DCHECK(ToRegister(instr->function()).is(r1)); + DCHECK(ToRegister(instr->result()).is(r0)); int arity = instr->arity(); CallFunctionStub stub(isolate(), arity, instr->hydrogen()->function_flags()); @@ -3977,9 +4033,9 @@ void LCodeGen::DoCallFunction(LCallFunction* instr) { void LCodeGen::DoCallNew(LCallNew* instr) { - ASSERT(ToRegister(instr->context()).is(cp)); - ASSERT(ToRegister(instr->constructor()).is(r1)); - ASSERT(ToRegister(instr->result()).is(r0)); + DCHECK(ToRegister(instr->context()).is(cp)); + DCHECK(ToRegister(instr->constructor()).is(r1)); + DCHECK(ToRegister(instr->result()).is(r0)); __ mov(r0, Operand(instr->arity())); // No cell in r2 for construct type feedback in optimized code @@ -3990,9 +4046,9 @@ void LCodeGen::DoCallNew(LCallNew* instr) { void LCodeGen::DoCallNewArray(LCallNewArray* instr) { - ASSERT(ToRegister(instr->context()).is(cp)); - ASSERT(ToRegister(instr->constructor()).is(r1)); - ASSERT(ToRegister(instr->result()).is(r0)); + DCHECK(ToRegister(instr->context()).is(cp)); + DCHECK(ToRegister(instr->constructor()).is(r1)); + DCHECK(ToRegister(instr->result()).is(r0)); __ mov(r0, Operand(instr->arity())); __ LoadRoot(r2, Heap::kUndefinedValueRootIndex); @@ -4078,13 +4134,13 @@ void LCodeGen::DoStoreNamedField(LStoreNamedField* instr) { __ AssertNotSmi(object); - ASSERT(!representation.IsSmi() || + DCHECK(!representation.IsSmi() || !instr->value()->IsConstantOperand() || IsSmi(LConstantOperand::cast(instr->value()))); if (representation.IsDouble()) { - ASSERT(access.IsInobject()); - ASSERT(!instr->hydrogen()->has_transition()); - ASSERT(!instr->hydrogen()->NeedsWriteBarrier()); + DCHECK(access.IsInobject()); + DCHECK(!instr->hydrogen()->has_transition()); + DCHECK(!instr->hydrogen()->NeedsWriteBarrier()); DwVfpRegister value = ToDoubleRegister(instr->value()); __ vstr(value, FieldMemOperand(object, offset)); return; @@ -4098,14 +4154,11 @@ void LCodeGen::DoStoreNamedField(LStoreNamedField* instr) { if (instr->hydrogen()->NeedsWriteBarrierForMap()) { Register temp = ToRegister(instr->temp()); // Update the write barrier for the map field. 
- __ RecordWriteField(object, - HeapObject::kMapOffset, - scratch, - temp, - GetLinkRegisterState(), - kSaveFPRegs, - OMIT_REMEMBERED_SET, - OMIT_SMI_CHECK); + __ RecordWriteForMap(object, + scratch, + temp, + GetLinkRegisterState(), + kSaveFPRegs); } } @@ -4123,7 +4176,8 @@ void LCodeGen::DoStoreNamedField(LStoreNamedField* instr) { GetLinkRegisterState(), kSaveFPRegs, EMIT_REMEMBERED_SET, - instr->hydrogen()->SmiCheckForWriteBarrier()); + instr->hydrogen()->SmiCheckForWriteBarrier(), + instr->hydrogen()->PointersToHereCheckForValue()); } } else { __ ldr(scratch, FieldMemOperand(object, JSObject::kPropertiesOffset)); @@ -4139,19 +4193,19 @@ void LCodeGen::DoStoreNamedField(LStoreNamedField* instr) { GetLinkRegisterState(), kSaveFPRegs, EMIT_REMEMBERED_SET, - instr->hydrogen()->SmiCheckForWriteBarrier()); + instr->hydrogen()->SmiCheckForWriteBarrier(), + instr->hydrogen()->PointersToHereCheckForValue()); } } } void LCodeGen::DoStoreNamedGeneric(LStoreNamedGeneric* instr) { - ASSERT(ToRegister(instr->context()).is(cp)); - ASSERT(ToRegister(instr->object()).is(r1)); - ASSERT(ToRegister(instr->value()).is(r0)); + DCHECK(ToRegister(instr->context()).is(cp)); + DCHECK(ToRegister(instr->object()).is(StoreIC::ReceiverRegister())); + DCHECK(ToRegister(instr->value()).is(StoreIC::ValueRegister())); - // Name is always in r2. - __ mov(r2, Operand(instr->name())); + __ mov(StoreIC::NameRegister(), Operand(instr->name())); Handle<Code> ic = StoreIC::initialize_stub(isolate(), instr->strict_mode()); CallCode(ic, RelocInfo::CODE_TARGET, instr, NEVER_INLINE_TARGET_ADDRESS); } @@ -4163,7 +4217,7 @@ void LCodeGen::DoBoundsCheck(LBoundsCheck* instr) { Operand index = ToOperand(instr->index()); Register length = ToRegister(instr->length()); __ cmp(length, index); - cc = ReverseCondition(cc); + cc = CommuteCondition(cc); } else { Register index = ToRegister(instr->index()); Operand length = ToOperand(instr->length()); @@ -4197,16 +4251,12 @@ void LCodeGen::DoStoreKeyedExternalArray(LStoreKeyed* instr) { int element_size_shift = ElementsKindToShiftSize(elements_kind); int shift_size = (instr->hydrogen()->key()->representation().IsSmi()) ? (element_size_shift - kSmiTagSize) : element_size_shift; - int additional_offset = IsFixedTypedArrayElementsKind(elements_kind) - ? FixedTypedArrayBase::kDataOffset - kHeapObjectTag - : 0; + int base_offset = instr->base_offset(); if (elements_kind == EXTERNAL_FLOAT32_ELEMENTS || elements_kind == FLOAT32_ELEMENTS || elements_kind == EXTERNAL_FLOAT64_ELEMENTS || elements_kind == FLOAT64_ELEMENTS) { - int base_offset = - (instr->additional_index() << element_size_shift) + additional_offset; Register address = scratch0(); DwVfpRegister value(ToDoubleRegister(instr->value())); if (key_is_constant) { @@ -4231,7 +4281,7 @@ void LCodeGen::DoStoreKeyedExternalArray(LStoreKeyed* instr) { MemOperand mem_operand = PrepareKeyedOperand( key, external_pointer, key_is_constant, constant_key, element_size_shift, shift_size, - instr->additional_index(), additional_offset); + base_offset); switch (elements_kind) { case EXTERNAL_UINT8_CLAMPED_ELEMENTS: case EXTERNAL_INT8_ELEMENTS: @@ -4278,6 +4328,7 @@ void LCodeGen::DoStoreKeyedFixedDoubleArray(LStoreKeyed* instr) { Register scratch = scratch0(); DwVfpRegister double_scratch = double_scratch0(); bool key_is_constant = instr->key()->IsConstantOperand(); + int base_offset = instr->base_offset(); // Calculate the effective address of the slot in the array to store the // double value. 
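The ReverseCondition to CommuteCondition rename visible in DoBoundsCheck above (and earlier in DoCompareNumericAndBranch) is about correctness of intent: when the cmp operands are swapped, the condition must be commuted, not logically negated. A sketch of the mapping over a few signed conditions:

    enum Condition { kEq, kNe, kLt, kGt, kLe, kGe };

    // After swapping cmp(a, b) into cmp(b, a), 'a < b' must be tested
    // as 'b > a'. Negating instead (lt -> ge) would be wrong.
    Condition CommuteCondition(Condition cond) {
      switch (cond) {
        case kLt: return kGt;
        case kGt: return kLt;
        case kLe: return kGe;
        case kGe: return kLe;
        default:  return cond;  // eq and ne are symmetric
      }
    }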
@@ -4288,13 +4339,11 @@ void LCodeGen::DoStoreKeyedFixedDoubleArray(LStoreKeyed* instr) { Abort(kArrayIndexConstantValueTooBig); } __ add(scratch, elements, - Operand((constant_key << element_size_shift) + - FixedDoubleArray::kHeaderSize - kHeapObjectTag)); + Operand((constant_key << element_size_shift) + base_offset)); } else { int shift_size = (instr->hydrogen()->key()->representation().IsSmi()) ? (element_size_shift - kSmiTagSize) : element_size_shift; - __ add(scratch, elements, - Operand(FixedDoubleArray::kHeaderSize - kHeapObjectTag)); + __ add(scratch, elements, Operand(base_offset)); __ add(scratch, scratch, Operand(ToRegister(instr->key()), LSL, shift_size)); } @@ -4307,10 +4356,9 @@ void LCodeGen::DoStoreKeyedFixedDoubleArray(LStoreKeyed* instr) { __ Assert(ne, kDefaultNaNModeNotSet); } __ VFPCanonicalizeNaN(double_scratch, value); - __ vstr(double_scratch, scratch, - instr->additional_index() << element_size_shift); + __ vstr(double_scratch, scratch, 0); } else { - __ vstr(value, scratch, instr->additional_index() << element_size_shift); + __ vstr(value, scratch, 0); } } @@ -4322,14 +4370,13 @@ void LCodeGen::DoStoreKeyedFixedArray(LStoreKeyed* instr) { : no_reg; Register scratch = scratch0(); Register store_base = scratch; - int offset = 0; + int offset = instr->base_offset(); // Do the store. if (instr->key()->IsConstantOperand()) { - ASSERT(!instr->hydrogen()->NeedsWriteBarrier()); + DCHECK(!instr->hydrogen()->NeedsWriteBarrier()); LConstantOperand* const_operand = LConstantOperand::cast(instr->key()); - offset = FixedArray::OffsetOfElementAt(ToInteger32(const_operand) + - instr->additional_index()); + offset += ToInteger32(const_operand) * kPointerSize; store_base = elements; } else { // Even though the HLoadKeyed instruction forces the input @@ -4341,23 +4388,23 @@ void LCodeGen::DoStoreKeyedFixedArray(LStoreKeyed* instr) { } else { __ add(scratch, elements, Operand(key, LSL, kPointerSizeLog2)); } - offset = FixedArray::OffsetOfElementAt(instr->additional_index()); } - __ str(value, FieldMemOperand(store_base, offset)); + __ str(value, MemOperand(store_base, offset)); if (instr->hydrogen()->NeedsWriteBarrier()) { SmiCheck check_needed = - instr->hydrogen()->value()->IsHeapObject() + instr->hydrogen()->value()->type().IsHeapObject() ? OMIT_SMI_CHECK : INLINE_SMI_CHECK; // Compute address of modified element and store it into key register. - __ add(key, store_base, Operand(offset - kHeapObjectTag)); + __ add(key, store_base, Operand(offset)); __ RecordWrite(elements, key, value, GetLinkRegisterState(), kSaveFPRegs, EMIT_REMEMBERED_SET, - check_needed); + check_needed, + instr->hydrogen()->PointersToHereCheckForValue()); } } @@ -4375,10 +4422,10 @@ void LCodeGen::DoStoreKeyed(LStoreKeyed* instr) { void LCodeGen::DoStoreKeyedGeneric(LStoreKeyedGeneric* instr) { - ASSERT(ToRegister(instr->context()).is(cp)); - ASSERT(ToRegister(instr->object()).is(r2)); - ASSERT(ToRegister(instr->key()).is(r1)); - ASSERT(ToRegister(instr->value()).is(r0)); + DCHECK(ToRegister(instr->context()).is(cp)); + DCHECK(ToRegister(instr->object()).is(KeyedStoreIC::ReceiverRegister())); + DCHECK(ToRegister(instr->key()).is(KeyedStoreIC::NameRegister())); + DCHECK(ToRegister(instr->value()).is(KeyedStoreIC::ValueRegister())); Handle<Code> ic = instr->strict_mode() == STRICT ? 
isolate()->builtins()->KeyedStoreIC_Initialize_Strict() @@ -4406,18 +4453,20 @@ void LCodeGen::DoTransitionElementsKind(LTransitionElementsKind* instr) { __ mov(new_map_reg, Operand(to_map)); __ str(new_map_reg, FieldMemOperand(object_reg, HeapObject::kMapOffset)); // Write barrier. - __ RecordWriteField(object_reg, HeapObject::kMapOffset, new_map_reg, - scratch, GetLinkRegisterState(), kDontSaveFPRegs); + __ RecordWriteForMap(object_reg, + new_map_reg, + scratch, + GetLinkRegisterState(), + kDontSaveFPRegs); } else { - ASSERT(ToRegister(instr->context()).is(cp)); - ASSERT(object_reg.is(r0)); - PushSafepointRegistersScope scope( - this, Safepoint::kWithRegistersAndDoubles); + DCHECK(ToRegister(instr->context()).is(cp)); + DCHECK(object_reg.is(r0)); + PushSafepointRegistersScope scope(this); __ Move(r1, to_map); bool is_js_array = from_map->instance_type() == JS_ARRAY_TYPE; TransitionElementsKindStub stub(isolate(), from_kind, to_kind, is_js_array); __ CallStub(&stub); - RecordSafepointWithRegistersAndDoubles( + RecordSafepointWithRegisters( instr->pointer_map(), 0, Safepoint::kLazyDeopt); } __ bind(&not_applicable); @@ -4435,9 +4484,9 @@ void LCodeGen::DoTrapAllocationMemento(LTrapAllocationMemento* instr) { void LCodeGen::DoStringAdd(LStringAdd* instr) { - ASSERT(ToRegister(instr->context()).is(cp)); - ASSERT(ToRegister(instr->left()).is(r1)); - ASSERT(ToRegister(instr->right()).is(r0)); + DCHECK(ToRegister(instr->context()).is(cp)); + DCHECK(ToRegister(instr->left()).is(r1)); + DCHECK(ToRegister(instr->right()).is(r0)); StringAddStub stub(isolate(), instr->hydrogen()->flags(), instr->hydrogen()->pretenure_flag()); @@ -4480,7 +4529,7 @@ void LCodeGen::DoDeferredStringCharCodeAt(LStringCharCodeAt* instr) { // contained in the register pointer map. __ mov(result, Operand::Zero()); - PushSafepointRegistersScope scope(this, Safepoint::kWithRegisters); + PushSafepointRegistersScope scope(this); __ push(string); // Push the index as a smi. This is safe because of the checks in // DoStringCharCodeAt above. @@ -4493,7 +4542,7 @@ void LCodeGen::DoDeferredStringCharCodeAt(LStringCharCodeAt* instr) { __ SmiTag(index); __ push(index); } - CallRuntimeFromDeferred(Runtime::kHiddenStringCharCodeAt, 2, instr, + CallRuntimeFromDeferred(Runtime::kStringCharCodeAtRT, 2, instr, instr->context()); __ AssertSmi(r0); __ SmiUntag(r0); @@ -4517,10 +4566,10 @@ void LCodeGen::DoStringCharFromCode(LStringCharFromCode* instr) { DeferredStringCharFromCode* deferred = new(zone()) DeferredStringCharFromCode(this, instr); - ASSERT(instr->hydrogen()->value()->representation().IsInteger32()); + DCHECK(instr->hydrogen()->value()->representation().IsInteger32()); Register char_code = ToRegister(instr->char_code()); Register result = ToRegister(instr->result()); - ASSERT(!char_code.is(result)); + DCHECK(!char_code.is(result)); __ cmp(char_code, Operand(String::kMaxOneByteCharCode)); __ b(hi, deferred->entry()); @@ -4543,7 +4592,7 @@ void LCodeGen::DoDeferredStringCharFromCode(LStringCharFromCode* instr) { // contained in the register pointer map.
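[Editor's note: with the kWithRegistersAndDoubles variant gone, PushSafepointRegistersScope reduces to a plain RAII guard, as in the deferred code paths here. A self-contained sketch of the shape; the Masm type is a stand-in, not V8's MacroAssembler:]

#include <cstdio>

struct Masm {  // stand-in for the macro assembler
  void PushSafepointRegisters() { std::puts("push safepoint registers"); }
  void PopSafepointRegisters() { std::puts("pop safepoint registers"); }
};

class PushSafepointRegistersScope {
 public:
  explicit PushSafepointRegistersScope(Masm* masm) : masm_(masm) {
    masm_->PushSafepointRegisters();  // emitted on scope entry
  }
  ~PushSafepointRegistersScope() {
    masm_->PopSafepointRegisters();   // emitted on every exit path
  }
 private:
  Masm* masm_;
};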
__ mov(result, Operand::Zero()); - PushSafepointRegistersScope scope(this, Safepoint::kWithRegisters); + PushSafepointRegistersScope scope(this); __ SmiTag(char_code); __ push(char_code); CallRuntimeFromDeferred(Runtime::kCharFromCode, 1, instr, instr->context()); @@ -4553,9 +4602,9 @@ void LCodeGen::DoDeferredStringCharFromCode(LStringCharFromCode* instr) { void LCodeGen::DoInteger32ToDouble(LInteger32ToDouble* instr) { LOperand* input = instr->value(); - ASSERT(input->IsRegister() || input->IsStackSlot()); + DCHECK(input->IsRegister() || input->IsStackSlot()); LOperand* output = instr->result(); - ASSERT(output->IsDoubleRegister()); + DCHECK(output->IsDoubleRegister()); SwVfpRegister single_scratch = double_scratch0().low(); if (input->IsStackSlot()) { Register scratch = scratch0(); @@ -4676,15 +4725,15 @@ void LCodeGen::DoDeferredNumberTagIU(LInstruction* instr, __ mov(dst, Operand::Zero()); // Preserve the value of all registers. - PushSafepointRegistersScope scope(this, Safepoint::kWithRegisters); + PushSafepointRegistersScope scope(this); // NumberTagI and NumberTagD use the context from the frame, rather than // the environment's HContext or HInlinedContext value. - // They only call Runtime::kHiddenAllocateHeapNumber. + // They only call Runtime::kAllocateHeapNumber. // The corresponding HChange instructions are added in a phase that does // not have easy access to the local context. __ ldr(cp, MemOperand(fp, StandardFrameConstants::kContextOffset)); - __ CallRuntimeSaveDoubles(Runtime::kHiddenAllocateHeapNumber); + __ CallRuntimeSaveDoubles(Runtime::kAllocateHeapNumber); RecordSafepointWithRegisters( instr->pointer_map(), 0, Safepoint::kNoLazyDeopt); __ sub(r0, r0, Operand(kHeapObjectTag)); @@ -4741,14 +4790,14 @@ void LCodeGen::DoDeferredNumberTagD(LNumberTagD* instr) { Register reg = ToRegister(instr->result()); __ mov(reg, Operand::Zero()); - PushSafepointRegistersScope scope(this, Safepoint::kWithRegisters); + PushSafepointRegistersScope scope(this); // NumberTagI and NumberTagD use the context from the frame, rather than // the environment's HContext or HInlinedContext value. - // They only call Runtime::kHiddenAllocateHeapNumber. + // They only call Runtime::kAllocateHeapNumber. // The corresponding HChange instructions are added in a phase that does // not have easy access to the local context. __ ldr(cp, MemOperand(fp, StandardFrameConstants::kContextOffset)); - __ CallRuntimeSaveDoubles(Runtime::kHiddenAllocateHeapNumber); + __ CallRuntimeSaveDoubles(Runtime::kAllocateHeapNumber); RecordSafepointWithRegisters( instr->pointer_map(), 0, Safepoint::kNoLazyDeopt); __ sub(r0, r0, Operand(kHeapObjectTag)); @@ -4797,7 +4846,7 @@ void LCodeGen::EmitNumberUntagD(Register input_reg, NumberUntagDMode mode) { Register scratch = scratch0(); SwVfpRegister flt_scratch = double_scratch0().low(); - ASSERT(!result_reg.is(double_scratch0())); + DCHECK(!result_reg.is(double_scratch0())); Label convert, load_smi, done; if (mode == NUMBER_CANDIDATE_IS_ANY_TAGGED) { // Smi check. 
@@ -4834,7 +4883,7 @@ void LCodeGen::EmitNumberUntagD(Register input_reg, } } else { __ SmiUntag(scratch, input_reg); - ASSERT(mode == NUMBER_CANDIDATE_IS_SMI); + DCHECK(mode == NUMBER_CANDIDATE_IS_SMI); } // Smi to double register conversion __ bind(&load_smi); @@ -4852,8 +4901,8 @@ void LCodeGen::DoDeferredTaggedToI(LTaggedToI* instr) { LowDwVfpRegister double_scratch = double_scratch0(); DwVfpRegister double_scratch2 = ToDoubleRegister(instr->temp2()); - ASSERT(!scratch1.is(input_reg) && !scratch1.is(scratch2)); - ASSERT(!scratch2.is(input_reg) && !scratch2.is(scratch1)); + DCHECK(!scratch1.is(input_reg) && !scratch1.is(scratch2)); + DCHECK(!scratch2.is(input_reg) && !scratch2.is(scratch1)); Label done; @@ -4933,8 +4982,8 @@ void LCodeGen::DoTaggedToI(LTaggedToI* instr) { }; LOperand* input = instr->value(); - ASSERT(input->IsRegister()); - ASSERT(input->Equals(instr->result())); + DCHECK(input->IsRegister()); + DCHECK(input->Equals(instr->result())); Register input_reg = ToRegister(input); @@ -4956,9 +5005,9 @@ void LCodeGen::DoTaggedToI(LTaggedToI* instr) { void LCodeGen::DoNumberUntagD(LNumberUntagD* instr) { LOperand* input = instr->value(); - ASSERT(input->IsRegister()); + DCHECK(input->IsRegister()); LOperand* result = instr->result(); - ASSERT(result->IsDoubleRegister()); + DCHECK(result->IsDoubleRegister()); Register input_reg = ToRegister(input); DwVfpRegister result_reg = ToDoubleRegister(result); @@ -5035,7 +5084,7 @@ void LCodeGen::DoCheckSmi(LCheckSmi* instr) { void LCodeGen::DoCheckNonSmi(LCheckNonSmi* instr) { - if (!instr->hydrogen()->value()->IsHeapObject()) { + if (!instr->hydrogen()->value()->type().IsHeapObject()) { LOperand* input = instr->value(); __ SmiTst(ToRegister(input)); DeoptimizeIf(eq, instr->environment()); @@ -5074,7 +5123,7 @@ void LCodeGen::DoCheckInstanceType(LCheckInstanceType* instr) { instr->hydrogen()->GetCheckMaskAndTag(&mask, &tag); if (IsPowerOf2(mask)) { - ASSERT(tag == 0 || IsPowerOf2(tag)); + DCHECK(tag == 0 || IsPowerOf2(tag)); __ tst(scratch, Operand(mask)); DeoptimizeIf(tag == 0 ? 
ne : eq, instr->environment()); } else { @@ -5105,7 +5154,7 @@ void LCodeGen::DoCheckValue(LCheckValue* instr) { void LCodeGen::DoDeferredInstanceMigration(LCheckMaps* instr, Register object) { { - PushSafepointRegistersScope scope(this, Safepoint::kWithRegisters); + PushSafepointRegistersScope scope(this); __ push(object); __ mov(cp, Operand::Zero()); __ CallRuntimeSaveDoubles(Runtime::kTryMigrateInstance); @@ -5147,7 +5196,7 @@ void LCodeGen::DoCheckMaps(LCheckMaps* instr) { Register map_reg = scratch0(); LOperand* input = instr->value(); - ASSERT(input->IsRegister()); + DCHECK(input->IsRegister()); Register reg = ToRegister(input); __ ldr(map_reg, FieldMemOperand(reg, HeapObject::kMapOffset)); @@ -5274,11 +5323,11 @@ void LCodeGen::DoAllocate(LAllocate* instr) { flags = static_cast<AllocationFlags>(flags | DOUBLE_ALIGNMENT); } if (instr->hydrogen()->IsOldPointerSpaceAllocation()) { - ASSERT(!instr->hydrogen()->IsOldDataSpaceAllocation()); - ASSERT(!instr->hydrogen()->IsNewSpaceAllocation()); + DCHECK(!instr->hydrogen()->IsOldDataSpaceAllocation()); + DCHECK(!instr->hydrogen()->IsNewSpaceAllocation()); flags = static_cast<AllocationFlags>(flags | PRETENURE_OLD_POINTER_SPACE); } else if (instr->hydrogen()->IsOldDataSpaceAllocation()) { - ASSERT(!instr->hydrogen()->IsNewSpaceAllocation()); + DCHECK(!instr->hydrogen()->IsNewSpaceAllocation()); flags = static_cast<AllocationFlags>(flags | PRETENURE_OLD_DATA_SPACE); } @@ -5291,33 +5340,25 @@ void LCodeGen::DoAllocate(LAllocate* instr) { } } else { Register size = ToRegister(instr->size()); - __ Allocate(size, - result, - scratch, - scratch2, - deferred->entry(), - flags); + __ Allocate(size, result, scratch, scratch2, deferred->entry(), flags); } __ bind(deferred->exit()); if (instr->hydrogen()->MustPrefillWithFiller()) { + STATIC_ASSERT(kHeapObjectTag == 1); if (instr->size()->IsConstantOperand()) { int32_t size = ToInteger32(LConstantOperand::cast(instr->size())); - __ mov(scratch, Operand(size)); + __ mov(scratch, Operand(size - kHeapObjectTag)); } else { - scratch = ToRegister(instr->size()); + __ sub(scratch, ToRegister(instr->size()), Operand(kHeapObjectTag)); } - __ sub(scratch, scratch, Operand(kPointerSize)); - __ sub(result, result, Operand(kHeapObjectTag)); + __ mov(scratch2, Operand(isolate()->factory()->one_pointer_filler_map())); Label loop; __ bind(&loop); - __ mov(scratch2, Operand(isolate()->factory()->one_pointer_filler_map())); + __ sub(scratch, scratch, Operand(kPointerSize), SetCC); __ str(scratch2, MemOperand(result, scratch)); - __ sub(scratch, scratch, Operand(kPointerSize)); - __ cmp(scratch, Operand(0)); __ b(ge, &loop); - __ add(result, result, Operand(kHeapObjectTag)); } } @@ -5330,10 +5371,10 @@ void LCodeGen::DoDeferredAllocate(LAllocate* instr) { // contained in the register pointer map. 
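[Editor's note: the rewritten prefill loop above folds the heap-object tag into the running offset and reuses the flags of the subtraction (SetCC plus b(ge)) instead of a separate cmp. Its effect, rendered as plain C++ over the untagged allocation; names are illustrative:]

#include <cstddef>
#include <cstdint>

constexpr int kPointerSize = 4;  // 32-bit ARM

// Fill every pointer-sized word of the allocation with the filler map,
// from offset size_in_bytes - kPointerSize down to offset 0.
void PrefillWithFiller(uint8_t* untagged_base, int size_in_bytes,
                       uint32_t one_pointer_filler_map) {
  for (int offset = size_in_bytes - kPointerSize; offset >= 0;
       offset -= kPointerSize) {
    *reinterpret_cast<uint32_t*>(untagged_base + offset) =
        one_pointer_filler_map;
  }
}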
__ mov(result, Operand(Smi::FromInt(0))); - PushSafepointRegistersScope scope(this, Safepoint::kWithRegisters); + PushSafepointRegistersScope scope(this); if (instr->size()->IsRegister()) { Register size = ToRegister(instr->size()); - ASSERT(!size.is(result)); + DCHECK(!size.is(result)); __ SmiTag(size); __ push(size); } else { @@ -5350,11 +5391,11 @@ void LCodeGen::DoDeferredAllocate(LAllocate* instr) { int flags = AllocateDoubleAlignFlag::encode( instr->hydrogen()->MustAllocateDoubleAligned()); if (instr->hydrogen()->IsOldPointerSpaceAllocation()) { - ASSERT(!instr->hydrogen()->IsOldDataSpaceAllocation()); - ASSERT(!instr->hydrogen()->IsNewSpaceAllocation()); + DCHECK(!instr->hydrogen()->IsOldDataSpaceAllocation()); + DCHECK(!instr->hydrogen()->IsNewSpaceAllocation()); flags = AllocateTargetSpace::update(flags, OLD_POINTER_SPACE); } else if (instr->hydrogen()->IsOldDataSpaceAllocation()) { - ASSERT(!instr->hydrogen()->IsNewSpaceAllocation()); + DCHECK(!instr->hydrogen()->IsNewSpaceAllocation()); flags = AllocateTargetSpace::update(flags, OLD_DATA_SPACE); } else { flags = AllocateTargetSpace::update(flags, NEW_SPACE); @@ -5362,20 +5403,20 @@ void LCodeGen::DoDeferredAllocate(LAllocate* instr) { __ Push(Smi::FromInt(flags)); CallRuntimeFromDeferred( - Runtime::kHiddenAllocateInTargetSpace, 2, instr, instr->context()); + Runtime::kAllocateInTargetSpace, 2, instr, instr->context()); __ StoreToSafepointRegisterSlot(r0, result); } void LCodeGen::DoToFastProperties(LToFastProperties* instr) { - ASSERT(ToRegister(instr->value()).is(r0)); + DCHECK(ToRegister(instr->value()).is(r0)); __ push(r0); CallRuntime(Runtime::kToFastProperties, 1, instr); } void LCodeGen::DoRegExpLiteral(LRegExpLiteral* instr) { - ASSERT(ToRegister(instr->context()).is(cp)); + DCHECK(ToRegister(instr->context()).is(cp)); Label materialized; // Registers will be used as follows: // r6 = literals array. @@ -5396,7 +5437,7 @@ void LCodeGen::DoRegExpLiteral(LRegExpLiteral* instr) { __ mov(r4, Operand(instr->hydrogen()->pattern())); __ mov(r3, Operand(instr->hydrogen()->flags())); __ Push(r6, r5, r4, r3); - CallRuntime(Runtime::kHiddenMaterializeRegExpLiteral, 4, instr); + CallRuntime(Runtime::kMaterializeRegExpLiteral, 4, instr); __ mov(r1, r0); __ bind(&materialized); @@ -5409,7 +5450,7 @@ void LCodeGen::DoRegExpLiteral(LRegExpLiteral* instr) { __ bind(&runtime_allocate); __ mov(r0, Operand(Smi::FromInt(size))); __ Push(r1, r0); - CallRuntime(Runtime::kHiddenAllocateInNewSpace, 1, instr); + CallRuntime(Runtime::kAllocateInNewSpace, 1, instr); __ pop(r1); __ bind(&allocated); @@ -5419,7 +5460,7 @@ void LCodeGen::DoRegExpLiteral(LRegExpLiteral* instr) { void LCodeGen::DoFunctionLiteral(LFunctionLiteral* instr) { - ASSERT(ToRegister(instr->context()).is(cp)); + DCHECK(ToRegister(instr->context()).is(cp)); // Use the fast case closure allocation code that allocates in new // space for nested functions that don't need literals cloning. bool pretenure = instr->hydrogen()->pretenure(); @@ -5434,7 +5475,7 @@ void LCodeGen::DoFunctionLiteral(LFunctionLiteral* instr) { __ mov(r1, Operand(pretenure ? 
factory()->true_value() : factory()->false_value())); __ Push(cp, r2, r1); - CallRuntime(Runtime::kHiddenNewClosure, 3, instr); + CallRuntime(Runtime::kNewClosure, 3, instr); } } @@ -5491,11 +5532,6 @@ Condition LCodeGen::EmitTypeofIs(Label* true_label, __ CompareRoot(input, Heap::kFalseValueRootIndex); final_branch_condition = eq; - } else if (FLAG_harmony_typeof && - String::Equals(type_name, factory->null_string())) { - __ CompareRoot(input, Heap::kNullValueRootIndex); - final_branch_condition = eq; - } else if (String::Equals(type_name, factory->undefined_string())) { __ CompareRoot(input, Heap::kUndefinedValueRootIndex); __ b(eq, true_label); @@ -5518,10 +5554,8 @@ Condition LCodeGen::EmitTypeofIs(Label* true_label, } else if (String::Equals(type_name, factory->object_string())) { Register map = scratch; __ JumpIfSmi(input, false_label); - if (!FLAG_harmony_typeof) { - __ CompareRoot(input, Heap::kNullValueRootIndex); - __ b(eq, true_label); - } + __ CompareRoot(input, Heap::kNullValueRootIndex); + __ b(eq, true_label); __ CheckObjectTypeRange(input, map, FIRST_NONCALLABLE_SPEC_OBJECT_TYPE, @@ -5549,7 +5583,7 @@ void LCodeGen::DoIsConstructCallAndBranch(LIsConstructCallAndBranch* instr) { void LCodeGen::EmitIsConstructCall(Register temp1, Register temp2) { - ASSERT(!temp1.is(temp2)); + DCHECK(!temp1.is(temp2)); // Get the frame pointer for the calling frame. __ ldr(temp1, MemOperand(fp, StandardFrameConstants::kCallerFPOffset)); @@ -5573,7 +5607,7 @@ void LCodeGen::EnsureSpaceForLazyDeopt(int space_needed) { // Block literal pool emission for duration of padding. Assembler::BlockConstPoolScope block_const_pool(masm()); int padding_size = last_lazy_deopt_pc_ + space_needed - current_pc; - ASSERT_EQ(0, padding_size % Assembler::kInstrSize); + DCHECK_EQ(0, padding_size % Assembler::kInstrSize); while (padding_size > 0) { __ nop(); padding_size -= Assembler::kInstrSize; @@ -5586,7 +5620,7 @@ void LCodeGen::EnsureSpaceForLazyDeopt(int space_needed) { void LCodeGen::DoLazyBailout(LLazyBailout* instr) { last_lazy_deopt_pc_ = masm()->pc_offset(); - ASSERT(instr->HasEnvironment()); + DCHECK(instr->HasEnvironment()); LEnvironment* env = instr->environment(); RegisterEnvironmentForDeoptimization(env, Safepoint::kLazyDeopt); safepoints_.RecordLazyDeoptimizationIndex(env->deoptimization_index()); @@ -5619,12 +5653,12 @@ void LCodeGen::DoDummyUse(LDummyUse* instr) { void LCodeGen::DoDeferredStackCheck(LStackCheck* instr) { - PushSafepointRegistersScope scope(this, Safepoint::kWithRegisters); + PushSafepointRegistersScope scope(this); LoadContextFromDeferred(instr->context()); - __ CallRuntimeSaveDoubles(Runtime::kHiddenStackGuard); + __ CallRuntimeSaveDoubles(Runtime::kStackGuard); RecordSafepointWithLazyDeopt( instr, RECORD_SAFEPOINT_WITH_REGISTERS_AND_NO_ARGUMENTS); - ASSERT(instr->HasEnvironment()); + DCHECK(instr->HasEnvironment()); LEnvironment* env = instr->environment(); safepoints_.RecordLazyDeoptimizationIndex(env->deoptimization_index()); } @@ -5643,7 +5677,7 @@ void LCodeGen::DoStackCheck(LStackCheck* instr) { LStackCheck* instr_; }; - ASSERT(instr->HasEnvironment()); + DCHECK(instr->HasEnvironment()); LEnvironment* env = instr->environment(); // There is no LLazyBailout instruction for stack-checks. We have to // prepare for lazy deoptimization explicitly here. 
@@ -5656,12 +5690,12 @@ void LCodeGen::DoStackCheck(LStackCheck* instr) { Handle<Code> stack_check = isolate()->builtins()->StackCheck(); PredictableCodeSizeScope predictable(masm(), CallCodeSize(stack_check, RelocInfo::CODE_TARGET)); - ASSERT(instr->context()->IsRegister()); - ASSERT(ToRegister(instr->context()).is(cp)); + DCHECK(instr->context()->IsRegister()); + DCHECK(ToRegister(instr->context()).is(cp)); CallCode(stack_check, RelocInfo::CODE_TARGET, instr); __ bind(&done); } else { - ASSERT(instr->hydrogen()->is_backwards_branch()); + DCHECK(instr->hydrogen()->is_backwards_branch()); // Perform stack overflow check if this goto needs it before jumping. DeferredStackCheck* deferred_stack_check = new(zone()) DeferredStackCheck(this, instr); @@ -5687,7 +5721,7 @@ void LCodeGen::DoOsrEntry(LOsrEntry* instr) { // If the environment were already registered, we would have no way of // backpatching it with the spill slot operands. - ASSERT(!environment->HasBeenRegistered()); + DCHECK(!environment->HasBeenRegistered()); RegisterEnvironmentForDeoptimization(environment, Safepoint::kNoLazyDeopt); GenerateOsrPrologue(); @@ -5766,7 +5800,7 @@ void LCodeGen::DoDeferredLoadMutableDouble(LLoadFieldByIndex* instr, Register result, Register object, Register index) { - PushSafepointRegistersScope scope(this, Safepoint::kWithRegisters); + PushSafepointRegistersScope scope(this); __ Push(object); __ Push(index); __ mov(cp, Operand::Zero()); @@ -5837,6 +5871,21 @@ void LCodeGen::DoLoadFieldByIndex(LLoadFieldByIndex* instr) { } +void LCodeGen::DoStoreFrameContext(LStoreFrameContext* instr) { + Register context = ToRegister(instr->context()); + __ str(context, MemOperand(fp, StandardFrameConstants::kContextOffset)); +} + + +void LCodeGen::DoAllocateBlockContext(LAllocateBlockContext* instr) { + Handle<ScopeInfo> scope_info = instr->scope_info(); + __ Push(scope_info); + __ push(ToRegister(instr->function())); + CallRuntime(Runtime::kPushBlockContext, 2, instr); + RecordSafepoint(Safepoint::kNoLazyDeopt); +} + + #undef __ } } // namespace v8::internal diff --git a/deps/v8/src/arm/lithium-codegen-arm.h b/deps/v8/src/arm/lithium-codegen-arm.h index 3e05c328cbf..ee5f4e90862 100644 --- a/deps/v8/src/arm/lithium-codegen-arm.h +++ b/deps/v8/src/arm/lithium-codegen-arm.h @@ -5,14 +5,14 @@ #ifndef V8_ARM_LITHIUM_CODEGEN_ARM_H_ #define V8_ARM_LITHIUM_CODEGEN_ARM_H_ -#include "arm/lithium-arm.h" +#include "src/arm/lithium-arm.h" -#include "arm/lithium-gap-resolver-arm.h" -#include "deoptimizer.h" -#include "lithium-codegen.h" -#include "safepoint-table.h" -#include "scopes.h" -#include "utils.h" +#include "src/arm/lithium-gap-resolver-arm.h" +#include "src/deoptimizer.h" +#include "src/lithium-codegen.h" +#include "src/safepoint-table.h" +#include "src/scopes.h" +#include "src/utils.h" namespace v8 { namespace internal { @@ -116,7 +116,7 @@ class LCodeGen: public LCodeGenBase { void DoDeferredStringCharFromCode(LStringCharFromCode* instr); void DoDeferredAllocate(LAllocate* instr); void DoDeferredInstanceOfKnownGlobal(LInstanceOfKnownGlobal* instr, - Label* map_check); + Label* map_check, Label* bool_load); void DoDeferredInstanceMigration(LCheckMaps* instr, Register object); void DoDeferredLoadMutableDouble(LLoadFieldByIndex* instr, Register result, @@ -133,8 +133,7 @@ class LCodeGen: public LCodeGenBase { int constant_key, int element_size, int shift_size, - int additional_index, - int additional_offset); + int base_offset); // Emit frame translation commands for an environment. 
void WriteTranslation(LEnvironment* environment, Translation* translation); @@ -271,9 +270,6 @@ class LCodeGen: public LCodeGenBase { void RecordSafepointWithRegisters(LPointerMap* pointers, int arguments, Safepoint::DeoptMode mode); - void RecordSafepointWithRegistersAndDoubles(LPointerMap* pointers, - int arguments, - Safepoint::DeoptMode mode); void RecordAndWritePosition(int position) V8_OVERRIDE; @@ -357,38 +353,17 @@ class LCodeGen: public LCodeGenBase { class PushSafepointRegistersScope V8_FINAL BASE_EMBEDDED { public: - PushSafepointRegistersScope(LCodeGen* codegen, - Safepoint::Kind kind) + explicit PushSafepointRegistersScope(LCodeGen* codegen) : codegen_(codegen) { - ASSERT(codegen_->info()->is_calling()); - ASSERT(codegen_->expected_safepoint_kind_ == Safepoint::kSimple); - codegen_->expected_safepoint_kind_ = kind; - - switch (codegen_->expected_safepoint_kind_) { - case Safepoint::kWithRegisters: - codegen_->masm_->PushSafepointRegisters(); - break; - case Safepoint::kWithRegistersAndDoubles: - codegen_->masm_->PushSafepointRegistersAndDoubles(); - break; - default: - UNREACHABLE(); - } + DCHECK(codegen_->info()->is_calling()); + DCHECK(codegen_->expected_safepoint_kind_ == Safepoint::kSimple); + codegen_->expected_safepoint_kind_ = Safepoint::kWithRegisters; + codegen_->masm_->PushSafepointRegisters(); } ~PushSafepointRegistersScope() { - Safepoint::Kind kind = codegen_->expected_safepoint_kind_; - ASSERT((kind & Safepoint::kWithRegisters) != 0); - switch (kind) { - case Safepoint::kWithRegisters: - codegen_->masm_->PopSafepointRegisters(); - break; - case Safepoint::kWithRegistersAndDoubles: - codegen_->masm_->PopSafepointRegistersAndDoubles(); - break; - default: - UNREACHABLE(); - } + DCHECK(codegen_->expected_safepoint_kind_ == Safepoint::kWithRegisters); + codegen_->masm_->PopSafepointRegisters(); codegen_->expected_safepoint_kind_ = Safepoint::kSimple; } diff --git a/deps/v8/src/arm/lithium-gap-resolver-arm.cc b/deps/v8/src/arm/lithium-gap-resolver-arm.cc index fe0ef144aba..2fceec9d21a 100644 --- a/deps/v8/src/arm/lithium-gap-resolver-arm.cc +++ b/deps/v8/src/arm/lithium-gap-resolver-arm.cc @@ -2,10 +2,10 @@ // Use of this source code is governed by a BSD-style license that can be // found in the LICENSE file. -#include "v8.h" +#include "src/v8.h" -#include "arm/lithium-gap-resolver-arm.h" -#include "arm/lithium-codegen-arm.h" +#include "src/arm/lithium-codegen-arm.h" +#include "src/arm/lithium-gap-resolver-arm.h" namespace v8 { namespace internal { @@ -29,7 +29,7 @@ LGapResolver::LGapResolver(LCodeGen* owner) void LGapResolver::Resolve(LParallelMove* parallel_move) { - ASSERT(moves_.is_empty()); + DCHECK(moves_.is_empty()); // Build up a worklist of moves. BuildInitialMoveList(parallel_move); @@ -50,13 +50,13 @@ void LGapResolver::Resolve(LParallelMove* parallel_move) { // Perform the moves with constant sources. for (int i = 0; i < moves_.length(); ++i) { if (!moves_[i].IsEliminated()) { - ASSERT(moves_[i].source()->IsConstantOperand()); + DCHECK(moves_[i].source()->IsConstantOperand()); EmitMove(i); } } if (need_to_restore_root_) { - ASSERT(kSavedValueRegister.is(kRootRegister)); + DCHECK(kSavedValueRegister.is(kRootRegister)); __ InitializeRootRegister(); need_to_restore_root_ = false; } @@ -94,13 +94,13 @@ void LGapResolver::PerformMove(int index) { // An additional complication is that moves to MemOperands with large // offsets (more than 1K or 4K) require us to spill this spilled value to // the stack, to free up the register. 
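[Editor's note: the comment above describes the gap resolver's recursive strategy; the cycle case it guards against needs exactly one scratch location. A miniature illustration of breaking a move cycle with a saved value; a toy over ints, not the operand-typed real resolver:]

// Rotate a <- b <- c <- a: save one endpoint, perform the chain, restore.
void ResolveThreeCycle(int& a, int& b, int& c) {
  int saved = a;  // BreakCycle: spill the blocked destination
  a = b;          // now the chain of non-blocked moves can run
  b = c;
  c = saved;      // RestoreValue: complete the cycle from the saved copy
}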
- ASSERT(!moves_[index].IsPending()); - ASSERT(!moves_[index].IsRedundant()); + DCHECK(!moves_[index].IsPending()); + DCHECK(!moves_[index].IsRedundant()); // Clear this move's destination to indicate a pending move. The actual // destination is saved in a stack allocated local. Multiple moves can // be pending because this function is recursive. - ASSERT(moves_[index].source() != NULL); // Or else it will look eliminated. + DCHECK(moves_[index].source() != NULL); // Or else it will look eliminated. LOperand* destination = moves_[index].destination(); moves_[index].set_destination(NULL); @@ -127,7 +127,7 @@ void LGapResolver::PerformMove(int index) { // a scratch register to break it. LMoveOperands other_move = moves_[root_index_]; if (other_move.Blocks(destination)) { - ASSERT(other_move.IsPending()); + DCHECK(other_move.IsPending()); BreakCycle(index); return; } @@ -138,12 +138,12 @@ void LGapResolver::PerformMove(int index) { void LGapResolver::Verify() { -#ifdef ENABLE_SLOW_ASSERTS +#ifdef ENABLE_SLOW_DCHECKS // No operand should be the destination for more than one move. for (int i = 0; i < moves_.length(); ++i) { LOperand* destination = moves_[i].destination(); for (int j = i + 1; j < moves_.length(); ++j) { - SLOW_ASSERT(!destination->Equals(moves_[j].destination())); + SLOW_DCHECK(!destination->Equals(moves_[j].destination())); } } #endif @@ -154,8 +154,8 @@ void LGapResolver::BreakCycle(int index) { // We save in a register the source of that move and we remember its // destination. Then we mark this move as resolved so the cycle is // broken and we can perform the other moves. - ASSERT(moves_[index].destination()->Equals(moves_[root_index_].source())); - ASSERT(!in_cycle_); + DCHECK(moves_[index].destination()->Equals(moves_[root_index_].source())); + DCHECK(!in_cycle_); in_cycle_ = true; LOperand* source = moves_[index].source(); saved_destination_ = moves_[index].destination(); @@ -178,8 +178,8 @@ void LGapResolver::BreakCycle(int index) { void LGapResolver::RestoreValue() { - ASSERT(in_cycle_); - ASSERT(saved_destination_ != NULL); + DCHECK(in_cycle_); + DCHECK(saved_destination_ != NULL); if (saved_destination_->IsRegister()) { __ mov(cgen_->ToRegister(saved_destination_), kSavedValueRegister); @@ -210,7 +210,7 @@ void LGapResolver::EmitMove(int index) { if (destination->IsRegister()) { __ mov(cgen_->ToRegister(destination), source_register); } else { - ASSERT(destination->IsStackSlot()); + DCHECK(destination->IsStackSlot()); __ str(source_register, cgen_->ToMemOperand(destination)); } } else if (source->IsStackSlot()) { @@ -218,7 +218,7 @@ void LGapResolver::EmitMove(int index) { if (destination->IsRegister()) { __ ldr(cgen_->ToRegister(destination), source_operand); } else { - ASSERT(destination->IsStackSlot()); + DCHECK(destination->IsStackSlot()); MemOperand destination_operand = cgen_->ToMemOperand(destination); if (!destination_operand.OffsetIsUint12Encodable()) { // ip is overwritten while saving the value to the destination. @@ -248,8 +248,8 @@ void LGapResolver::EmitMove(int index) { double v = cgen_->ToDouble(constant_source); __ Vmov(result, v, ip); } else { - ASSERT(destination->IsStackSlot()); - ASSERT(!in_cycle_); // Constant moves happen after all cycles are gone. + DCHECK(destination->IsStackSlot()); + DCHECK(!in_cycle_); // Constant moves happen after all cycles are gone. need_to_restore_root_ = true; Representation r = cgen_->IsSmi(constant_source) ? 
Representation::Smi() : Representation::Integer32(); @@ -267,7 +267,7 @@ void LGapResolver::EmitMove(int index) { if (destination->IsDoubleRegister()) { __ vmov(cgen_->ToDoubleRegister(destination), source_register); } else { - ASSERT(destination->IsDoubleStackSlot()); + DCHECK(destination->IsDoubleStackSlot()); __ vstr(source_register, cgen_->ToMemOperand(destination)); } @@ -276,7 +276,7 @@ void LGapResolver::EmitMove(int index) { if (destination->IsDoubleRegister()) { __ vldr(cgen_->ToDoubleRegister(destination), source_operand); } else { - ASSERT(destination->IsDoubleStackSlot()); + DCHECK(destination->IsDoubleStackSlot()); MemOperand destination_operand = cgen_->ToMemOperand(destination); if (in_cycle_) { // kScratchDoubleReg was used to break the cycle. diff --git a/deps/v8/src/arm/lithium-gap-resolver-arm.h b/deps/v8/src/arm/lithium-gap-resolver-arm.h index 73914e4daf7..909ea643980 100644 --- a/deps/v8/src/arm/lithium-gap-resolver-arm.h +++ b/deps/v8/src/arm/lithium-gap-resolver-arm.h @@ -5,9 +5,9 @@ #ifndef V8_ARM_LITHIUM_GAP_RESOLVER_ARM_H_ #define V8_ARM_LITHIUM_GAP_RESOLVER_ARM_H_ -#include "v8.h" +#include "src/v8.h" -#include "lithium.h" +#include "src/lithium.h" namespace v8 { namespace internal { diff --git a/deps/v8/src/arm/macro-assembler-arm.cc b/deps/v8/src/arm/macro-assembler-arm.cc index 0485161a6cb..4b3cb4e860a 100644 --- a/deps/v8/src/arm/macro-assembler-arm.cc +++ b/deps/v8/src/arm/macro-assembler-arm.cc @@ -4,16 +4,16 @@ #include <limits.h> // For LONG_MIN, LONG_MAX. -#include "v8.h" +#include "src/v8.h" #if V8_TARGET_ARCH_ARM -#include "bootstrapper.h" -#include "codegen.h" -#include "cpu-profiler.h" -#include "debug.h" -#include "isolate-inl.h" -#include "runtime.h" +#include "src/bootstrapper.h" +#include "src/codegen.h" +#include "src/cpu-profiler.h" +#include "src/debug.h" +#include "src/isolate-inl.h" +#include "src/runtime.h" namespace v8 { namespace internal { @@ -36,21 +36,21 @@ void MacroAssembler::Jump(Register target, Condition cond) { void MacroAssembler::Jump(intptr_t target, RelocInfo::Mode rmode, Condition cond) { - ASSERT(RelocInfo::IsCodeTarget(rmode)); + DCHECK(RelocInfo::IsCodeTarget(rmode)); mov(pc, Operand(target, rmode), LeaveCC, cond); } void MacroAssembler::Jump(Address target, RelocInfo::Mode rmode, Condition cond) { - ASSERT(!RelocInfo::IsCodeTarget(rmode)); + DCHECK(!RelocInfo::IsCodeTarget(rmode)); Jump(reinterpret_cast<intptr_t>(target), rmode, cond); } void MacroAssembler::Jump(Handle<Code> code, RelocInfo::Mode rmode, Condition cond) { - ASSERT(RelocInfo::IsCodeTarget(rmode)); + DCHECK(RelocInfo::IsCodeTarget(rmode)); // 'code' is always generated ARM code, never THUMB code AllowDeferredHandleDereference embedding_raw_address; Jump(reinterpret_cast<intptr_t>(code.location()), rmode, cond); @@ -68,21 +68,16 @@ void MacroAssembler::Call(Register target, Condition cond) { Label start; bind(&start); blx(target, cond); - ASSERT_EQ(CallSize(target, cond), SizeOfCodeGeneratedSince(&start)); + DCHECK_EQ(CallSize(target, cond), SizeOfCodeGeneratedSince(&start)); } int MacroAssembler::CallSize( Address target, RelocInfo::Mode rmode, Condition cond) { - int size = 2 * kInstrSize; Instr mov_instr = cond | MOV | LeaveCC; - intptr_t immediate = reinterpret_cast<intptr_t>(target); - if (!Operand(immediate, rmode).is_single_instruction(isolate(), - this, - mov_instr)) { - size += kInstrSize; - } - return size; + Operand mov_operand = Operand(reinterpret_cast<intptr_t>(target), rmode); + return kInstrSize + + mov_operand.instructions_required(this, 
mov_instr) * kInstrSize; } @@ -96,15 +91,10 @@ int MacroAssembler::CallSizeNotPredictableCodeSize(Isolate* isolate, Address target, RelocInfo::Mode rmode, Condition cond) { - int size = 2 * kInstrSize; Instr mov_instr = cond | MOV | LeaveCC; - intptr_t immediate = reinterpret_cast<intptr_t>(target); - if (!Operand(immediate, rmode).is_single_instruction(isolate, - NULL, - mov_instr)) { - size += kInstrSize; - } - return size; + Operand mov_operand = Operand(reinterpret_cast<intptr_t>(target), rmode); + return kInstrSize + + mov_operand.instructions_required(NULL, mov_instr) * kInstrSize; } @@ -148,7 +138,7 @@ void MacroAssembler::Call(Address target, mov(ip, Operand(reinterpret_cast<int32_t>(target), rmode)); blx(ip, cond); - ASSERT_EQ(expected_size, SizeOfCodeGeneratedSince(&start)); + DCHECK_EQ(expected_size, SizeOfCodeGeneratedSince(&start)); if (mode == NEVER_INLINE_TARGET_ADDRESS) { set_predictable_code_size(old_predictable_code_size); } @@ -171,7 +161,7 @@ void MacroAssembler::Call(Handle<Code> code, TargetAddressStorageMode mode) { Label start; bind(&start); - ASSERT(RelocInfo::IsCodeTarget(rmode)); + DCHECK(RelocInfo::IsCodeTarget(rmode)); if (rmode == RelocInfo::CODE_TARGET && !ast_id.IsNone()) { SetRecordedAstId(ast_id); rmode = RelocInfo::CODE_TARGET_WITH_ID; @@ -232,7 +222,7 @@ void MacroAssembler::Move(Register dst, Handle<Object> value) { if (value->IsSmi()) { mov(dst, Operand(value)); } else { - ASSERT(value->IsHeapObject()); + DCHECK(value->IsHeapObject()); if (isolate()->heap()->InNewSpace(*value)) { Handle<Cell> cell = isolate()->factory()->NewCell(value); mov(dst, Operand(cell)); @@ -258,14 +248,27 @@ void MacroAssembler::Move(DwVfpRegister dst, DwVfpRegister src) { } +void MacroAssembler::Mls(Register dst, Register src1, Register src2, + Register srcA, Condition cond) { + if (CpuFeatures::IsSupported(MLS)) { + CpuFeatureScope scope(this, MLS); + mls(dst, src1, src2, srcA, cond); + } else { + DCHECK(!srcA.is(ip)); + mul(ip, src1, src2, LeaveCC, cond); + sub(dst, srcA, ip, LeaveCC, cond); + } +} + + void MacroAssembler::And(Register dst, Register src1, const Operand& src2, Condition cond) { if (!src2.is_reg() && - !src2.must_output_reloc_info(isolate(), this) && + !src2.must_output_reloc_info(this) && src2.immediate() == 0) { mov(dst, Operand::Zero(), LeaveCC, cond); - } else if (!src2.is_single_instruction(isolate(), this) && - !src2.must_output_reloc_info(isolate(), this) && + } else if (!(src2.instructions_required(this) == 1) && + !src2.must_output_reloc_info(this) && CpuFeatures::IsSupported(ARMv7) && IsPowerOf2(src2.immediate() + 1)) { ubfx(dst, src1, 0, @@ -278,7 +281,7 @@ void MacroAssembler::And(Register dst, Register src1, const Operand& src2, void MacroAssembler::Ubfx(Register dst, Register src1, int lsb, int width, Condition cond) { - ASSERT(lsb < 32); + DCHECK(lsb < 32); if (!CpuFeatures::IsSupported(ARMv7) || predictable_code_size()) { int mask = (1 << (width + lsb)) - 1 - ((1 << lsb) - 1); and_(dst, src1, Operand(mask), LeaveCC, cond); @@ -293,7 +296,7 @@ void MacroAssembler::Ubfx(Register dst, Register src1, int lsb, int width, void MacroAssembler::Sbfx(Register dst, Register src1, int lsb, int width, Condition cond) { - ASSERT(lsb < 32); + DCHECK(lsb < 32); if (!CpuFeatures::IsSupported(ARMv7) || predictable_code_size()) { int mask = (1 << (width + lsb)) - 1 - ((1 << lsb) - 1); and_(dst, src1, Operand(mask), LeaveCC, cond); @@ -317,10 +320,10 @@ void MacroAssembler::Bfi(Register dst, int lsb, int width, Condition cond) { - ASSERT(0 <= lsb && lsb < 32); - 
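[Editor's note: the new MacroAssembler::Mls above emits ARM's multiply-and-subtract when the CPU supports it, and otherwise falls back to mul into ip followed by sub. Either way the computed value is the one below; a semantic sketch, not emitter code:]

#include <cstdint>

// MLS dst, src1, src2, srcA  computes  dst = srcA - src1 * src2.
inline int32_t MlsValue(int32_t src1, int32_t src2, int32_t srcA) {
  return srcA - src1 * src2;
}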
ASSERT(0 <= width && width < 32); - ASSERT(lsb + width < 32); - ASSERT(!scratch.is(dst)); + DCHECK(0 <= lsb && lsb < 32); + DCHECK(0 <= width && width < 32); + DCHECK(lsb + width < 32); + DCHECK(!scratch.is(dst)); if (width == 0) return; if (!CpuFeatures::IsSupported(ARMv7) || predictable_code_size()) { int mask = (1 << (width + lsb)) - 1 - ((1 << lsb) - 1); @@ -336,7 +339,7 @@ void MacroAssembler::Bfi(Register dst, void MacroAssembler::Bfc(Register dst, Register src, int lsb, int width, Condition cond) { - ASSERT(lsb < 32); + DCHECK(lsb < 32); if (!CpuFeatures::IsSupported(ARMv7) || predictable_code_size()) { int mask = (1 << (width + lsb)) - 1 - ((1 << lsb) - 1); bic(dst, src, Operand(mask)); @@ -350,13 +353,13 @@ void MacroAssembler::Bfc(Register dst, Register src, int lsb, int width, void MacroAssembler::Usat(Register dst, int satpos, const Operand& src, Condition cond) { if (!CpuFeatures::IsSupported(ARMv7) || predictable_code_size()) { - ASSERT(!dst.is(pc) && !src.rm().is(pc)); - ASSERT((satpos >= 0) && (satpos <= 31)); + DCHECK(!dst.is(pc) && !src.rm().is(pc)); + DCHECK((satpos >= 0) && (satpos <= 31)); // These asserts are required to ensure compatibility with the ARMv7 // implementation. - ASSERT((src.shift_op() == ASR) || (src.shift_op() == LSL)); - ASSERT(src.rs().is(no_reg)); + DCHECK((src.shift_op() == ASR) || (src.shift_op() == LSL)); + DCHECK(src.rs().is(no_reg)); Label done; int satval = (1 << satpos) - 1; @@ -381,7 +384,7 @@ void MacroAssembler::Usat(Register dst, int satpos, const Operand& src, void MacroAssembler::Load(Register dst, const MemOperand& src, Representation r) { - ASSERT(!r.IsDouble()); + DCHECK(!r.IsDouble()); if (r.IsInteger8()) { ldrsb(dst, src); } else if (r.IsUInteger8()) { @@ -399,7 +402,7 @@ void MacroAssembler::Load(Register dst, void MacroAssembler::Store(Register src, const MemOperand& dst, Representation r) { - ASSERT(!r.IsDouble()); + DCHECK(!r.IsDouble()); if (r.IsInteger8() || r.IsUInteger8()) { strb(src, dst); } else if (r.IsInteger16() || r.IsUInteger16()) { @@ -442,7 +445,7 @@ void MacroAssembler::InNewSpace(Register object, Register scratch, Condition cond, Label* branch) { - ASSERT(cond == eq || cond == ne); + DCHECK(cond == eq || cond == ne); and_(scratch, object, Operand(ExternalReference::new_space_mask(isolate()))); cmp(scratch, Operand(ExternalReference::new_space_start(isolate()))); b(cond, branch); @@ -457,7 +460,8 @@ void MacroAssembler::RecordWriteField( LinkRegisterStatus lr_status, SaveFPRegsMode save_fp, RememberedSetAction remembered_set_action, - SmiCheck smi_check) { + SmiCheck smi_check, + PointersToHereCheck pointers_to_here_check_for_value) { // First, check if a write barrier is even needed. The tests below // catch stores of Smis. Label done; @@ -469,7 +473,7 @@ void MacroAssembler::RecordWriteField( // Although the object register is tagged, the offset is relative to the start // of the object, so so offset must be a multiple of kPointerSize. - ASSERT(IsAligned(offset, kPointerSize)); + DCHECK(IsAligned(offset, kPointerSize)); add(dst, object, Operand(offset - kHeapObjectTag)); if (emit_debug_code()) { @@ -486,7 +490,8 @@ void MacroAssembler::RecordWriteField( lr_status, save_fp, remembered_set_action, - OMIT_SMI_CHECK); + OMIT_SMI_CHECK, + pointers_to_here_check_for_value); bind(&done); @@ -499,26 +504,99 @@ void MacroAssembler::RecordWriteField( } +// Will clobber 4 registers: object, map, dst, ip. The +// register 'object' contains a heap object pointer. 
+void MacroAssembler::RecordWriteForMap(Register object, + Register map, + Register dst, + LinkRegisterStatus lr_status, + SaveFPRegsMode fp_mode) { + if (emit_debug_code()) { + ldr(dst, FieldMemOperand(map, HeapObject::kMapOffset)); + cmp(dst, Operand(isolate()->factory()->meta_map())); + Check(eq, kWrongAddressOrValuePassedToRecordWrite); + } + + if (!FLAG_incremental_marking) { + return; + } + + if (emit_debug_code()) { + ldr(ip, FieldMemOperand(object, HeapObject::kMapOffset)); + cmp(ip, map); + Check(eq, kWrongAddressOrValuePassedToRecordWrite); + } + + Label done; + + // A single check of the map's pages interesting flag suffices, since it is + // only set during incremental collection, and then it's also guaranteed that + // the from object's page's interesting flag is also set. This optimization + // relies on the fact that maps can never be in new space. + CheckPageFlag(map, + map, // Used as scratch. + MemoryChunk::kPointersToHereAreInterestingMask, + eq, + &done); + + add(dst, object, Operand(HeapObject::kMapOffset - kHeapObjectTag)); + if (emit_debug_code()) { + Label ok; + tst(dst, Operand((1 << kPointerSizeLog2) - 1)); + b(eq, &ok); + stop("Unaligned cell in write barrier"); + bind(&ok); + } + + // Record the actual write. + if (lr_status == kLRHasNotBeenSaved) { + push(lr); + } + RecordWriteStub stub(isolate(), object, map, dst, OMIT_REMEMBERED_SET, + fp_mode); + CallStub(&stub); + if (lr_status == kLRHasNotBeenSaved) { + pop(lr); + } + + bind(&done); + + // Count number of write barriers in generated code. + isolate()->counters()->write_barriers_static()->Increment(); + IncrementCounter(isolate()->counters()->write_barriers_dynamic(), 1, ip, dst); + + // Clobber clobbered registers when running with the debug-code flag + // turned on to provoke errors. + if (emit_debug_code()) { + mov(dst, Operand(BitCast<int32_t>(kZapValue + 12))); + mov(map, Operand(BitCast<int32_t>(kZapValue + 16))); + } +} + + // Will clobber 4 registers: object, address, scratch, ip. The // register 'object' contains a heap object pointer. The heap object // tag is shifted away. -void MacroAssembler::RecordWrite(Register object, - Register address, - Register value, - LinkRegisterStatus lr_status, - SaveFPRegsMode fp_mode, - RememberedSetAction remembered_set_action, - SmiCheck smi_check) { - ASSERT(!object.is(value)); +void MacroAssembler::RecordWrite( + Register object, + Register address, + Register value, + LinkRegisterStatus lr_status, + SaveFPRegsMode fp_mode, + RememberedSetAction remembered_set_action, + SmiCheck smi_check, + PointersToHereCheck pointers_to_here_check_for_value) { + DCHECK(!object.is(value)); if (emit_debug_code()) { ldr(ip, MemOperand(address)); cmp(ip, value); Check(eq, kWrongAddressOrValuePassedToRecordWrite); } - // Count number of write barriers in generated code. - isolate()->counters()->write_barriers_static()->Increment(); - // TODO(mstarzinger): Dynamic counter missing. + if (remembered_set_action == OMIT_REMEMBERED_SET && + !FLAG_incremental_marking) { + return; + } // First, check if a write barrier is even needed. The tests below // catch stores of smis and stores into the young generation. @@ -528,11 +606,13 @@ void MacroAssembler::RecordWrite(Register object, JumpIfSmi(value, &done); } - CheckPageFlag(value, - value, // Used as scratch. - MemoryChunk::kPointersToHereAreInterestingMask, - eq, - &done); + if (pointers_to_here_check_for_value != kPointersToHereAreAlwaysInteresting) { + CheckPageFlag(value, + value, // Used as scratch. 
+ MemoryChunk::kPointersToHereAreInterestingMask, + eq, + &done); + } CheckPageFlag(object, value, // Used as scratch. MemoryChunk::kPointersFromHereAreInterestingMask, @@ -552,6 +632,11 @@ void MacroAssembler::RecordWrite(Register object, bind(&done); + // Count number of write barriers in generated code. + isolate()->counters()->write_barriers_static()->Increment(); + IncrementCounter(isolate()->counters()->write_barriers_dynamic(), 1, ip, + value); + // Clobber clobbered registers when running with the debug-code flag // turned on to provoke errors. if (emit_debug_code()) { @@ -588,7 +673,7 @@ void MacroAssembler::RememberedSetHelper(Register object, // For debug tests. if (and_then == kFallThroughAtEnd) { b(eq, &done); } else { - ASSERT(and_then == kReturnAtEnd); + DCHECK(and_then == kReturnAtEnd); Ret(eq); } push(lr); @@ -604,7 +689,7 @@ void MacroAssembler::RememberedSetHelper(Register object, // For debug tests. void MacroAssembler::PushFixedFrame(Register marker_reg) { - ASSERT(!marker_reg.is_valid() || marker_reg.code() < cp.code()); + DCHECK(!marker_reg.is_valid() || marker_reg.code() < cp.code()); stm(db_w, sp, (marker_reg.is_valid() ? marker_reg.bit() : 0) | cp.bit() | (FLAG_enable_ool_constant_pool ? pp.bit() : 0) | @@ -614,7 +699,7 @@ void MacroAssembler::PushFixedFrame(Register marker_reg) { void MacroAssembler::PopFixedFrame(Register marker_reg) { - ASSERT(!marker_reg.is_valid() || marker_reg.code() < cp.code()); + DCHECK(!marker_reg.is_valid() || marker_reg.code() < cp.code()); ldm(ia_w, sp, (marker_reg.is_valid() ? marker_reg.bit() : 0) | cp.bit() | (FLAG_enable_ool_constant_pool ? pp.bit() : 0) | @@ -626,11 +711,11 @@ void MacroAssembler::PopFixedFrame(Register marker_reg) { // Push and pop all registers that can hold pointers. void MacroAssembler::PushSafepointRegisters() { // Safepoints expect a block of contiguous register values starting with r0: - ASSERT(((1 << kNumSafepointSavedRegisters) - 1) == kSafepointSavedRegisters); + DCHECK(((1 << kNumSafepointSavedRegisters) - 1) == kSafepointSavedRegisters); // Safepoints expect a block of kNumSafepointRegisters values on the // stack, so adjust the stack for unsaved registers. const int num_unsaved = kNumSafepointRegisters - kNumSafepointSavedRegisters; - ASSERT(num_unsaved >= 0); + DCHECK(num_unsaved >= 0); sub(sp, sp, Operand(num_unsaved * kPointerSize)); stm(db_w, sp, kSafepointSavedRegisters); } @@ -643,39 +728,6 @@ void MacroAssembler::PopSafepointRegisters() { } -void MacroAssembler::PushSafepointRegistersAndDoubles() { - // Number of d-regs not known at snapshot time. - ASSERT(!Serializer::enabled(isolate())); - PushSafepointRegisters(); - // Only save allocatable registers. - ASSERT(kScratchDoubleReg.is(d15) && kDoubleRegZero.is(d14)); - ASSERT(DwVfpRegister::NumReservedRegisters() == 2); - if (CpuFeatures::IsSupported(VFP32DREGS)) { - vstm(db_w, sp, d16, d31); - } - vstm(db_w, sp, d0, d13); -} - - -void MacroAssembler::PopSafepointRegistersAndDoubles() { - // Number of d-regs not known at snapshot time. - ASSERT(!Serializer::enabled(isolate())); - // Only save allocatable registers. 
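[Editor's note: the CheckPageFlag fast paths above work because a page's header can be found by masking the low bits off any address inside it; RecordWrite and RecordWriteForMap bail out early when the relevant page is not "interesting". A sketch under assumed constants; the page size and flags offset are illustrative, not V8's real layout:]

#include <cstdint>

constexpr uintptr_t kPageSize = uintptr_t{1} << 20;  // assumed 1 MB pages
constexpr uintptr_t kFlagsOffset = 0;                // assumed flags slot in the page header

inline bool PageFlagSet(uintptr_t any_address_in_page, uintptr_t mask) {
  uintptr_t page_start = any_address_in_page & ~(kPageSize - 1);
  uintptr_t flags =
      *reinterpret_cast<const uintptr_t*>(page_start + kFlagsOffset);
  return (flags & mask) != 0;
}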
- ASSERT(kScratchDoubleReg.is(d15) && kDoubleRegZero.is(d14)); - ASSERT(DwVfpRegister::NumReservedRegisters() == 2); - vldm(ia_w, sp, d0, d13); - if (CpuFeatures::IsSupported(VFP32DREGS)) { - vldm(ia_w, sp, d16, d31); - } - PopSafepointRegisters(); -} - -void MacroAssembler::StoreToSafepointRegistersAndDoublesSlot(Register src, - Register dst) { - str(src, SafepointRegistersAndDoublesSlot(dst)); -} - - void MacroAssembler::StoreToSafepointRegisterSlot(Register src, Register dst) { str(src, SafepointRegisterSlot(dst)); } @@ -689,7 +741,7 @@ void MacroAssembler::LoadFromSafepointRegisterSlot(Register dst, Register src) { int MacroAssembler::SafepointRegisterStackIndex(int reg_code) { // The registers are pushed starting with the highest encoding, // which means that lowest encodings are closest to the stack pointer. - ASSERT(reg_code >= 0 && reg_code < kNumSafepointRegisters); + DCHECK(reg_code >= 0 && reg_code < kNumSafepointRegisters); return reg_code; } @@ -701,7 +753,7 @@ MemOperand MacroAssembler::SafepointRegisterSlot(Register reg) { MemOperand MacroAssembler::SafepointRegistersAndDoublesSlot(Register reg) { // Number of d-regs not known at snapshot time. - ASSERT(!Serializer::enabled(isolate())); + DCHECK(!serializer_enabled()); // General purpose registers are pushed last on the stack. int doubles_size = DwVfpRegister::NumAllocatableRegisters() * kDoubleSize; int register_offset = SafepointRegisterStackIndex(reg.code()) * kPointerSize; @@ -711,12 +763,12 @@ MemOperand MacroAssembler::SafepointRegistersAndDoublesSlot(Register reg) { void MacroAssembler::Ldrd(Register dst1, Register dst2, const MemOperand& src, Condition cond) { - ASSERT(src.rm().is(no_reg)); - ASSERT(!dst1.is(lr)); // r14. + DCHECK(src.rm().is(no_reg)); + DCHECK(!dst1.is(lr)); // r14. // V8 does not use this addressing mode, so the fallback code // below doesn't support it yet. - ASSERT((src.am() != PreIndex) && (src.am() != NegPreIndex)); + DCHECK((src.am() != PreIndex) && (src.am() != NegPreIndex)); // Generate two ldr instructions if ldrd is not available. if (CpuFeatures::IsSupported(ARMv7) && !predictable_code_size() && @@ -735,7 +787,7 @@ void MacroAssembler::Ldrd(Register dst1, Register dst2, ldr(dst2, src2, cond); } } else { // PostIndex or NegPostIndex. - ASSERT((src.am() == PostIndex) || (src.am() == NegPostIndex)); + DCHECK((src.am() == PostIndex) || (src.am() == NegPostIndex)); if (dst1.is(src.rn())) { ldr(dst2, MemOperand(src.rn(), 4, Offset), cond); ldr(dst1, src, cond); @@ -752,12 +804,12 @@ void MacroAssembler::Ldrd(Register dst1, Register dst2, void MacroAssembler::Strd(Register src1, Register src2, const MemOperand& dst, Condition cond) { - ASSERT(dst.rm().is(no_reg)); - ASSERT(!src1.is(lr)); // r14. + DCHECK(dst.rm().is(no_reg)); + DCHECK(!src1.is(lr)); // r14. // V8 does not use this addressing mode, so the fallback code // below doesn't support it yet. - ASSERT((dst.am() != PreIndex) && (dst.am() != NegPreIndex)); + DCHECK((dst.am() != PreIndex) && (dst.am() != NegPreIndex)); // Generate two str instructions if strd is not available. if (CpuFeatures::IsSupported(ARMv7) && !predictable_code_size() && @@ -771,7 +823,7 @@ void MacroAssembler::Strd(Register src1, Register src2, str(src1, dst, cond); str(src2, dst2, cond); } else { // PostIndex or NegPostIndex. 
- ASSERT((dst.am() == PostIndex) || (dst.am() == NegPostIndex)); + DCHECK((dst.am() == PostIndex) || (dst.am() == NegPostIndex)); dst2.set_offset(dst2.offset() - 4); str(src1, MemOperand(dst.rn(), 4, PostIndex), cond); str(src2, dst2, cond); @@ -901,24 +953,30 @@ void MacroAssembler::LoadConstantPoolPointerRegister() { if (FLAG_enable_ool_constant_pool) { int constant_pool_offset = Code::kConstantPoolOffset - Code::kHeaderSize - pc_offset() - Instruction::kPCReadOffset; - ASSERT(ImmediateFitsAddrMode2Instruction(constant_pool_offset)); + DCHECK(ImmediateFitsAddrMode2Instruction(constant_pool_offset)); ldr(pp, MemOperand(pc, constant_pool_offset)); } } -void MacroAssembler::Prologue(PrologueFrameMode frame_mode) { - if (frame_mode == BUILD_STUB_FRAME) { - PushFixedFrame(); - Push(Smi::FromInt(StackFrame::STUB)); - // Adjust FP to point to saved FP. - add(fp, sp, Operand(StandardFrameConstants::kFixedFrameSizeFromFp)); - } else { - PredictableCodeSizeScope predictible_code_size_scope( +void MacroAssembler::StubPrologue() { + PushFixedFrame(); + Push(Smi::FromInt(StackFrame::STUB)); + // Adjust FP to point to saved FP. + add(fp, sp, Operand(StandardFrameConstants::kFixedFrameSizeFromFp)); + if (FLAG_enable_ool_constant_pool) { + LoadConstantPoolPointerRegister(); + set_constant_pool_available(true); + } +} + + +void MacroAssembler::Prologue(bool code_pre_aging) { + { PredictableCodeSizeScope predictible_code_size_scope( this, kNoCodeAgeSequenceLength); // The following three instructions must remain together and unmodified // for code aging to work properly. - if (isolate()->IsCodePreAgingActive()) { + if (code_pre_aging) { // Pre-age the code. Code* stub = Code::GetPreAgedCodeAgeStub(isolate()); add(r0, pc, Operand(-8)); @@ -979,9 +1037,9 @@ int MacroAssembler::LeaveFrame(StackFrame::Type type) { void MacroAssembler::EnterExitFrame(bool save_doubles, int stack_space) { // Set up the frame structure on the stack. - ASSERT_EQ(2 * kPointerSize, ExitFrameConstants::kCallerSPDisplacement); - ASSERT_EQ(1 * kPointerSize, ExitFrameConstants::kCallerPCOffset); - ASSERT_EQ(0 * kPointerSize, ExitFrameConstants::kCallerFPOffset); + DCHECK_EQ(2 * kPointerSize, ExitFrameConstants::kCallerSPDisplacement); + DCHECK_EQ(1 * kPointerSize, ExitFrameConstants::kCallerPCOffset); + DCHECK_EQ(0 * kPointerSize, ExitFrameConstants::kCallerFPOffset); Push(lr, fp); mov(fp, Operand(sp)); // Set up new frame pointer. // Reserve room for saved entry sp and code object. @@ -1017,7 +1075,7 @@ void MacroAssembler::EnterExitFrame(bool save_doubles, int stack_space) { const int frame_alignment = MacroAssembler::ActivationFrameAlignment(); sub(sp, sp, Operand((stack_space + 1) * kPointerSize)); if (frame_alignment > 0) { - ASSERT(IsPowerOf2(frame_alignment)); + DCHECK(IsPowerOf2(frame_alignment)); and_(sp, sp, Operand(-frame_alignment)); } @@ -1048,7 +1106,7 @@ int MacroAssembler::ActivationFrameAlignment() { // environment. // Note: This will break if we ever start generating snapshots on one ARM // platform for another ARM platform with a different alignment. - return OS::ActivationFrameAlignment(); + return base::OS::ActivationFrameAlignment(); #else // V8_HOST_ARCH_ARM // If we are using the simulator then we should always align to the expected // alignment. 
As the simulator is used to generate snapshots we do not know @@ -1136,12 +1194,12 @@ void MacroAssembler::InvokePrologue(const ParameterCount& expected, // The code below is made a lot easier because the calling code already sets // up actual and expected registers according to the contract if values are // passed in registers. - ASSERT(actual.is_immediate() || actual.reg().is(r0)); - ASSERT(expected.is_immediate() || expected.reg().is(r2)); - ASSERT((!code_constant.is_null() && code_reg.is(no_reg)) || code_reg.is(r3)); + DCHECK(actual.is_immediate() || actual.reg().is(r0)); + DCHECK(expected.is_immediate() || expected.reg().is(r2)); + DCHECK((!code_constant.is_null() && code_reg.is(no_reg)) || code_reg.is(r3)); if (expected.is_immediate()) { - ASSERT(actual.is_immediate()); + DCHECK(actual.is_immediate()); if (expected.immediate() == actual.immediate()) { definitely_matches = true; } else { @@ -1198,7 +1256,7 @@ void MacroAssembler::InvokeCode(Register code, InvokeFlag flag, const CallWrapper& call_wrapper) { // You can't call a function without a valid frame. - ASSERT(flag == JUMP_FUNCTION || has_frame()); + DCHECK(flag == JUMP_FUNCTION || has_frame()); Label done; bool definitely_mismatches = false; @@ -1211,7 +1269,7 @@ void MacroAssembler::InvokeCode(Register code, Call(code); call_wrapper.AfterCall(); } else { - ASSERT(flag == JUMP_FUNCTION); + DCHECK(flag == JUMP_FUNCTION); Jump(code); } @@ -1227,10 +1285,10 @@ void MacroAssembler::InvokeFunction(Register fun, InvokeFlag flag, const CallWrapper& call_wrapper) { // You can't call a function without a valid frame. - ASSERT(flag == JUMP_FUNCTION || has_frame()); + DCHECK(flag == JUMP_FUNCTION || has_frame()); // Contract with called JS functions requires that function is passed in r1. - ASSERT(fun.is(r1)); + DCHECK(fun.is(r1)); Register expected_reg = r2; Register code_reg = r3; @@ -1255,10 +1313,10 @@ void MacroAssembler::InvokeFunction(Register function, InvokeFlag flag, const CallWrapper& call_wrapper) { // You can't call a function without a valid frame. - ASSERT(flag == JUMP_FUNCTION || has_frame()); + DCHECK(flag == JUMP_FUNCTION || has_frame()); // Contract with called JS functions requires that function is passed in r1. - ASSERT(function.is(r1)); + DCHECK(function.is(r1)); // Get the function and setup the context. ldr(cp, FieldMemOperand(r1, JSFunction::kContextOffset)); @@ -1304,7 +1362,7 @@ void MacroAssembler::IsInstanceJSObjectType(Register map, void MacroAssembler::IsObjectJSStringType(Register object, Register scratch, Label* fail) { - ASSERT(kNotStringTag != 0); + DCHECK(kNotStringTag != 0); ldr(scratch, FieldMemOperand(object, HeapObject::kMapOffset)); ldrb(scratch, FieldMemOperand(scratch, Map::kInstanceTypeOffset)); @@ -1327,7 +1385,7 @@ void MacroAssembler::DebugBreak() { mov(r0, Operand::Zero()); mov(r1, Operand(ExternalReference(Runtime::kDebugBreak, isolate()))); CEntryStub ces(isolate(), 1); - ASSERT(AllowThisStubCall(&ces)); + DCHECK(AllowThisStubCall(&ces)); Call(ces.GetCode(), RelocInfo::DEBUG_BREAK); } @@ -1475,9 +1533,9 @@ void MacroAssembler::CheckAccessGlobalProxy(Register holder_reg, Label* miss) { Label same_contexts; - ASSERT(!holder_reg.is(scratch)); - ASSERT(!holder_reg.is(ip)); - ASSERT(!scratch.is(ip)); + DCHECK(!holder_reg.is(scratch)); + DCHECK(!holder_reg.is(ip)); + DCHECK(!scratch.is(ip)); // Load current lexical context from the stack frame. 
ldr(scratch, MemOperand(fp, StandardFrameConstants::kContextOffset)); @@ -1547,7 +1605,7 @@ void MacroAssembler::CheckAccessGlobalProxy(Register holder_reg, // Compute the hash code from the untagged key. This must be kept in sync with -// ComputeIntegerHash in utils.h and KeyedLoadGenericElementStub in +// ComputeIntegerHash in utils.h and KeyedLoadGenericStub in // code-stub-hydrogen.cc void MacroAssembler::GetNumberHash(Register t0, Register scratch) { // First of all we assign the hash seed to scratch. @@ -1625,7 +1683,7 @@ void MacroAssembler::LoadFromNumberDictionary(Label* miss, and_(t2, t2, Operand(t1)); // Scale the index by multiplying by the element size. - ASSERT(SeededNumberDictionary::kEntrySize == 3); + DCHECK(SeededNumberDictionary::kEntrySize == 3); add(t2, t2, Operand(t2, LSL, 1)); // t2 = t2 * 3 // Check if the key is identical to the name. @@ -1661,7 +1719,7 @@ void MacroAssembler::Allocate(int object_size, Register scratch2, Label* gc_required, AllocationFlags flags) { - ASSERT(object_size <= Page::kMaxRegularHeapObjectSize); + DCHECK(object_size <= Page::kMaxRegularHeapObjectSize); if (!FLAG_inline_new) { if (emit_debug_code()) { // Trash the registers to simulate an allocation failure. @@ -1673,17 +1731,17 @@ void MacroAssembler::Allocate(int object_size, return; } - ASSERT(!result.is(scratch1)); - ASSERT(!result.is(scratch2)); - ASSERT(!scratch1.is(scratch2)); - ASSERT(!scratch1.is(ip)); - ASSERT(!scratch2.is(ip)); + DCHECK(!result.is(scratch1)); + DCHECK(!result.is(scratch2)); + DCHECK(!scratch1.is(scratch2)); + DCHECK(!scratch1.is(ip)); + DCHECK(!scratch2.is(ip)); // Make object size into bytes. if ((flags & SIZE_IN_WORDS) != 0) { object_size *= kPointerSize; } - ASSERT_EQ(0, object_size & kObjectAlignmentMask); + DCHECK_EQ(0, object_size & kObjectAlignmentMask); // Check relative positions of allocation top and limit addresses. // The values must be adjacent in memory to allow the use of LDM. @@ -1698,8 +1756,8 @@ void MacroAssembler::Allocate(int object_size, reinterpret_cast<intptr_t>(allocation_top.address()); intptr_t limit = reinterpret_cast<intptr_t>(allocation_limit.address()); - ASSERT((limit - top) == kPointerSize); - ASSERT(result.code() < ip.code()); + DCHECK((limit - top) == kPointerSize); + DCHECK(result.code() < ip.code()); // Set up allocation top address register. Register topaddr = scratch1; @@ -1726,7 +1784,7 @@ void MacroAssembler::Allocate(int object_size, if ((flags & DOUBLE_ALIGNMENT) != 0) { // Align the next allocation. Storing the filler map without checking top is // safe in new-space because the limit of the heap is aligned there. - ASSERT((flags & PRETENURE_OLD_POINTER_SPACE) == 0); + DCHECK((flags & PRETENURE_OLD_POINTER_SPACE) == 0); STATIC_ASSERT(kPointerAlignment * 2 == kDoubleAlignment); and_(scratch2, result, Operand(kDoubleAlignmentMask), SetCC); Label aligned; @@ -1743,7 +1801,7 @@ void MacroAssembler::Allocate(int object_size, // Calculate new top and bail out if new space is exhausted. Use result // to calculate the new top. We must preserve the ip register at this // point, so we cannot just use add(). 
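Nearly every hunk in this file is the mechanical ASSERT-to-DCHECK rename that accompanied V8's base/ library split. Both names spell a debug-only check that compiles away in release builds; only the macro name and its home header changed. A minimal sketch of the idiom, assuming the usual V8_Fatal-style reporting hook (the real macros live in V8's logging header and carry more machinery):

#ifdef DEBUG
// Abort with file/line context when the condition fails; debug builds only.
#define DCHECK(condition)                                                \
  do {                                                                   \
    if (!(condition)) {                                                  \
      V8_Fatal(__FILE__, __LINE__, "Check failed: %s.", #condition);     \
    }                                                                    \
  } while (false)
#else
// Release builds: the condition is not evaluated at all.
#define DCHECK(condition) ((void) 0)
#endif

Because the release expansion discards the expression entirely, none of the renamed call sites may rely on side effects inside the check, which is why the surrounding hunks never fold real work into a DCHECK.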
- ASSERT(object_size > 0); + DCHECK(object_size > 0); Register source = result; Condition cond = al; int shift = 0; @@ -1755,7 +1813,7 @@ void MacroAssembler::Allocate(int object_size, object_size -= bits; shift += 8; Operand bits_operand(bits); - ASSERT(bits_operand.is_single_instruction(isolate(), this)); + DCHECK(bits_operand.instructions_required(this) == 1); add(scratch2, source, bits_operand, SetCC, cond); source = scratch2; cond = cc; @@ -1792,13 +1850,13 @@ void MacroAssembler::Allocate(Register object_size, // Assert that the register arguments are different and that none of // them are ip. ip is used explicitly in the code generated below. - ASSERT(!result.is(scratch1)); - ASSERT(!result.is(scratch2)); - ASSERT(!scratch1.is(scratch2)); - ASSERT(!object_size.is(ip)); - ASSERT(!result.is(ip)); - ASSERT(!scratch1.is(ip)); - ASSERT(!scratch2.is(ip)); + DCHECK(!result.is(scratch1)); + DCHECK(!result.is(scratch2)); + DCHECK(!scratch1.is(scratch2)); + DCHECK(!object_size.is(ip)); + DCHECK(!result.is(ip)); + DCHECK(!scratch1.is(ip)); + DCHECK(!scratch2.is(ip)); // Check relative positions of allocation top and limit addresses. // The values must be adjacent in memory to allow the use of LDM. @@ -1812,8 +1870,8 @@ void MacroAssembler::Allocate(Register object_size, reinterpret_cast<intptr_t>(allocation_top.address()); intptr_t limit = reinterpret_cast<intptr_t>(allocation_limit.address()); - ASSERT((limit - top) == kPointerSize); - ASSERT(result.code() < ip.code()); + DCHECK((limit - top) == kPointerSize); + DCHECK(result.code() < ip.code()); // Set up allocation top address. Register topaddr = scratch1; @@ -1840,8 +1898,8 @@ void MacroAssembler::Allocate(Register object_size, if ((flags & DOUBLE_ALIGNMENT) != 0) { // Align the next allocation. Storing the filler map without checking top is // safe in new-space because the limit of the heap is aligned there. - ASSERT((flags & PRETENURE_OLD_POINTER_SPACE) == 0); - ASSERT(kPointerAlignment * 2 == kDoubleAlignment); + DCHECK((flags & PRETENURE_OLD_POINTER_SPACE) == 0); + DCHECK(kPointerAlignment * 2 == kDoubleAlignment); and_(scratch2, result, Operand(kDoubleAlignmentMask), SetCC); Label aligned; b(eq, &aligned); @@ -1908,7 +1966,7 @@ void MacroAssembler::AllocateTwoByteString(Register result, Label* gc_required) { // Calculate the number of bytes needed for the characters in the string while // observing object alignment. - ASSERT((SeqTwoByteString::kHeaderSize & kObjectAlignmentMask) == 0); + DCHECK((SeqTwoByteString::kHeaderSize & kObjectAlignmentMask) == 0); mov(scratch1, Operand(length, LSL, 1)); // Length in bytes, not chars. add(scratch1, scratch1, Operand(kObjectAlignmentMask + SeqTwoByteString::kHeaderSize)); @@ -1939,8 +1997,8 @@ void MacroAssembler::AllocateAsciiString(Register result, Label* gc_required) { // Calculate the number of bytes needed for the characters in the string while // observing object alignment. 
- ASSERT((SeqOneByteString::kHeaderSize & kObjectAlignmentMask) == 0); - ASSERT(kCharSize == 1); + DCHECK((SeqOneByteString::kHeaderSize & kObjectAlignmentMask) == 0); + DCHECK(kCharSize == 1); add(scratch1, length, Operand(kObjectAlignmentMask + SeqOneByteString::kHeaderSize)); and_(scratch1, scratch1, Operand(~kObjectAlignmentMask)); @@ -1983,34 +2041,12 @@ void MacroAssembler::AllocateAsciiConsString(Register result, Register scratch1, Register scratch2, Label* gc_required) { - Label allocate_new_space, install_map; - AllocationFlags flags = TAG_OBJECT; - - ExternalReference high_promotion_mode = ExternalReference:: - new_space_high_promotion_mode_active_address(isolate()); - mov(scratch1, Operand(high_promotion_mode)); - ldr(scratch1, MemOperand(scratch1, 0)); - cmp(scratch1, Operand::Zero()); - b(eq, &allocate_new_space); - - Allocate(ConsString::kSize, - result, - scratch1, - scratch2, - gc_required, - static_cast<AllocationFlags>(flags | PRETENURE_OLD_POINTER_SPACE)); - - jmp(&install_map); - - bind(&allocate_new_space); Allocate(ConsString::kSize, result, scratch1, scratch2, gc_required, - flags); - - bind(&install_map); + TAG_OBJECT); InitializeNewString(result, length, @@ -2093,7 +2129,7 @@ void MacroAssembler::CompareInstanceType(Register map, void MacroAssembler::CompareRoot(Register obj, Heap::RootListIndex index) { - ASSERT(!obj.is(ip)); + DCHECK(!obj.is(ip)); LoadRoot(ip, index); cmp(obj, ip); } @@ -2248,14 +2284,15 @@ void MacroAssembler::TryGetFunctionPrototype(Register function, Register scratch, Label* miss, bool miss_on_bound_function) { - // Check that the receiver isn't a smi. - JumpIfSmi(function, miss); + Label non_instance; + if (miss_on_bound_function) { + // Check that the receiver isn't a smi. + JumpIfSmi(function, miss); - // Check that the function really is a function. Load map into result reg. - CompareObjectType(function, result, scratch, JS_FUNCTION_TYPE); - b(ne, miss); + // Check that the function really is a function. Load map into result reg. + CompareObjectType(function, result, scratch, JS_FUNCTION_TYPE); + b(ne, miss); - if (miss_on_bound_function) { ldr(scratch, FieldMemOperand(function, JSFunction::kSharedFunctionInfoOffset)); ldr(scratch, @@ -2263,13 +2300,12 @@ void MacroAssembler::TryGetFunctionPrototype(Register function, tst(scratch, Operand(Smi::FromInt(1 << SharedFunctionInfo::kBoundFunction))); b(ne, miss); - } - // Make sure that the function has an instance prototype. - Label non_instance; - ldrb(scratch, FieldMemOperand(result, Map::kBitFieldOffset)); - tst(scratch, Operand(1 << Map::kHasNonInstancePrototype)); - b(ne, &non_instance); + // Make sure that the function has an instance prototype. + ldrb(scratch, FieldMemOperand(result, Map::kBitFieldOffset)); + tst(scratch, Operand(1 << Map::kHasNonInstancePrototype)); + b(ne, &non_instance); + } // Get the prototype or initial map from the function. ldr(result, @@ -2289,12 +2325,15 @@ void MacroAssembler::TryGetFunctionPrototype(Register function, // Get the prototype from the initial map. ldr(result, FieldMemOperand(result, Map::kPrototypeOffset)); - jmp(&done); - // Non-instance prototype: Fetch prototype from constructor field - // in initial map. - bind(&non_instance); - ldr(result, FieldMemOperand(result, Map::kConstructorOffset)); + if (miss_on_bound_function) { + jmp(&done); + + // Non-instance prototype: Fetch prototype from constructor field + // in initial map. + bind(&non_instance); + ldr(result, FieldMemOperand(result, Map::kConstructorOffset)); + } // All done. 
bind(&done); @@ -2304,7 +2343,7 @@ void MacroAssembler::TryGetFunctionPrototype(Register function, void MacroAssembler::CallStub(CodeStub* stub, TypeFeedbackId ast_id, Condition cond) { - ASSERT(AllowThisStubCall(stub)); // Stub calls are not allowed in some stubs. + DCHECK(AllowThisStubCall(stub)); // Stub calls are not allowed in some stubs. Call(stub->GetCode(), RelocInfo::CODE_TARGET, ast_id, cond); } @@ -2335,7 +2374,7 @@ void MacroAssembler::CallApiFunctionAndReturn( ExternalReference::handle_scope_level_address(isolate()), next_address); - ASSERT(function_address.is(r1) || function_address.is(r2)); + DCHECK(function_address.is(r1) || function_address.is(r2)); Label profiler_disabled; Label end_profiler_check; @@ -2429,7 +2468,7 @@ void MacroAssembler::CallApiFunctionAndReturn( { FrameScope frame(this, StackFrame::INTERNAL); CallExternalReference( - ExternalReference(Runtime::kHiddenPromoteScheduledException, isolate()), + ExternalReference(Runtime::kPromoteScheduledException, isolate()), 0); } jmp(&exception_handled); @@ -2457,12 +2496,9 @@ void MacroAssembler::IndexFromHash(Register hash, Register index) { // that the constants for the maximum number of digits for an array index // cached in the hash field and the number of bits reserved for it does not // conflict. - ASSERT(TenToThe(String::kMaxCachedArrayIndexLength) < + DCHECK(TenToThe(String::kMaxCachedArrayIndexLength) < (1 << String::kArrayIndexValueBits)); - // We want the smi-tagged index in key. kArrayIndexValueMask has zeros in - // the low kHashShift bits. - Ubfx(hash, hash, String::kHashShift, String::kArrayIndexValueBits); - SmiTag(index, hash); + DecodeFieldToSmi<String::ArrayIndexValueBits>(index, hash); } @@ -2480,7 +2516,7 @@ void MacroAssembler::SmiToDouble(LowDwVfpRegister value, Register smi) { void MacroAssembler::TestDoubleIsInt32(DwVfpRegister double_input, LowDwVfpRegister double_scratch) { - ASSERT(!double_input.is(double_scratch)); + DCHECK(!double_input.is(double_scratch)); vcvt_s32_f64(double_scratch.low(), double_input); vcvt_f64_s32(double_scratch, double_scratch.low()); VFPCompareAndSetFlags(double_input, double_scratch); @@ -2490,7 +2526,7 @@ void MacroAssembler::TestDoubleIsInt32(DwVfpRegister double_input, void MacroAssembler::TryDoubleToInt32Exact(Register result, DwVfpRegister double_input, LowDwVfpRegister double_scratch) { - ASSERT(!double_input.is(double_scratch)); + DCHECK(!double_input.is(double_scratch)); vcvt_s32_f64(double_scratch.low(), double_input); vmov(result, double_scratch.low()); vcvt_f64_s32(double_scratch, double_scratch.low()); @@ -2504,8 +2540,8 @@ void MacroAssembler::TryInt32Floor(Register result, LowDwVfpRegister double_scratch, Label* done, Label* exact) { - ASSERT(!result.is(input_high)); - ASSERT(!double_input.is(double_scratch)); + DCHECK(!result.is(input_high)); + DCHECK(!double_input.is(double_scratch)); Label negative, exception; VmovHigh(input_high, double_input); @@ -2583,7 +2619,7 @@ void MacroAssembler::TruncateHeapNumberToI(Register result, Register object) { Label done; LowDwVfpRegister double_scratch = kScratchDoubleReg; - ASSERT(!result.is(object)); + DCHECK(!result.is(object)); vldr(double_scratch, MemOperand(object, HeapNumber::kValueOffset - kHeapObjectTag)); @@ -2610,7 +2646,7 @@ void MacroAssembler::TruncateNumberToI(Register object, Register scratch1, Label* not_number) { Label done; - ASSERT(!result.is(object)); + DCHECK(!result.is(object)); UntagAndJumpIfSmi(result, object, &done); JumpIfNotHeapNumber(object, heap_number_map, scratch1, not_number); @@ 
-2694,7 +2730,7 @@ void MacroAssembler::TailCallRuntime(Runtime::FunctionId fid, void MacroAssembler::JumpToExternalReference(const ExternalReference& builtin) { #if defined(__thumb__) // Thumb mode builtin. - ASSERT((reinterpret_cast<intptr_t>(builtin.address()) & 1) == 1); + DCHECK((reinterpret_cast<intptr_t>(builtin.address()) & 1) == 1); #endif mov(r1, Operand(builtin)); CEntryStub stub(isolate(), 1); @@ -2706,7 +2742,7 @@ void MacroAssembler::InvokeBuiltin(Builtins::JavaScript id, InvokeFlag flag, const CallWrapper& call_wrapper) { // You can't call a builtin without a valid frame. - ASSERT(flag == JUMP_FUNCTION || has_frame()); + DCHECK(flag == JUMP_FUNCTION || has_frame()); GetBuiltinEntry(r2, id); if (flag == CALL_FUNCTION) { @@ -2714,7 +2750,7 @@ void MacroAssembler::InvokeBuiltin(Builtins::JavaScript id, Call(r2); call_wrapper.AfterCall(); } else { - ASSERT(flag == JUMP_FUNCTION); + DCHECK(flag == JUMP_FUNCTION); Jump(r2); } } @@ -2733,7 +2769,7 @@ void MacroAssembler::GetBuiltinFunction(Register target, void MacroAssembler::GetBuiltinEntry(Register target, Builtins::JavaScript id) { - ASSERT(!target.is(r1)); + DCHECK(!target.is(r1)); GetBuiltinFunction(r1, id); // Load the code entry point from the builtins object. ldr(target, FieldMemOperand(r1, JSFunction::kCodeEntryOffset)); @@ -2752,7 +2788,7 @@ void MacroAssembler::SetCounter(StatsCounter* counter, int value, void MacroAssembler::IncrementCounter(StatsCounter* counter, int value, Register scratch1, Register scratch2) { - ASSERT(value > 0); + DCHECK(value > 0); if (FLAG_native_code_counters && counter->Enabled()) { mov(scratch2, Operand(ExternalReference(counter))); ldr(scratch1, MemOperand(scratch2)); @@ -2764,7 +2800,7 @@ void MacroAssembler::IncrementCounter(StatsCounter* counter, int value, void MacroAssembler::DecrementCounter(StatsCounter* counter, int value, Register scratch1, Register scratch2) { - ASSERT(value > 0); + DCHECK(value > 0); if (FLAG_native_code_counters && counter->Enabled()) { mov(scratch2, Operand(ExternalReference(counter))); ldr(scratch1, MemOperand(scratch2)); @@ -2782,7 +2818,7 @@ void MacroAssembler::Assert(Condition cond, BailoutReason reason) { void MacroAssembler::AssertFastElements(Register elements) { if (emit_debug_code()) { - ASSERT(!elements.is(ip)); + DCHECK(!elements.is(ip)); Label ok; push(elements); ldr(elements, FieldMemOperand(elements, HeapObject::kMapOffset)); @@ -2846,7 +2882,7 @@ void MacroAssembler::Abort(BailoutReason reason) { // of the Abort macro constant. static const int kExpectedAbortInstructions = 7; int abort_instructions = InstructionsGeneratedSince(&abort_start); - ASSERT(abort_instructions <= kExpectedAbortInstructions); + DCHECK(abort_instructions <= kExpectedAbortInstructions); while (abort_instructions++ < kExpectedAbortInstructions) { nop(); } @@ -3203,14 +3239,19 @@ void MacroAssembler::AllocateHeapNumber(Register result, Register scratch2, Register heap_number_map, Label* gc_required, - TaggingMode tagging_mode) { + TaggingMode tagging_mode, + MutableMode mode) { // Allocate an object in the heap for the heap number and tag it as a heap // object. Allocate(HeapNumber::kSize, result, scratch1, scratch2, gc_required, tagging_mode == TAG_RESULT ? TAG_OBJECT : NO_ALLOCATION_FLAGS); + Heap::RootListIndex map_index = mode == MUTABLE + ? Heap::kMutableHeapNumberMapRootIndex + : Heap::kHeapNumberMapRootIndex; + AssertIsRoot(heap_number_map, map_index); + // Store heap number map in the allocated object. 
- AssertIsRoot(heap_number_map, Heap::kHeapNumberMapRootIndex); if (tagging_mode == TAG_RESULT) { str(heap_number_map, FieldMemOperand(result, HeapObject::kMapOffset)); } else { @@ -3448,7 +3489,7 @@ void MacroAssembler::PrepareCallCFunction(int num_reg_arguments, // and the original value of sp. mov(scratch, sp); sub(sp, sp, Operand((stack_passed_arguments + 1) * kPointerSize)); - ASSERT(IsPowerOf2(frame_alignment)); + DCHECK(IsPowerOf2(frame_alignment)); and_(sp, sp, Operand(-frame_alignment)); str(scratch, MemOperand(sp, stack_passed_arguments * kPointerSize)); } else { @@ -3464,7 +3505,7 @@ void MacroAssembler::PrepareCallCFunction(int num_reg_arguments, void MacroAssembler::MovToFloatParameter(DwVfpRegister src) { - ASSERT(src.is(d0)); + DCHECK(src.is(d0)); if (!use_eabi_hardfloat()) { vmov(r0, r1, src); } @@ -3479,8 +3520,8 @@ void MacroAssembler::MovToFloatResult(DwVfpRegister src) { void MacroAssembler::MovToFloatParameters(DwVfpRegister src1, DwVfpRegister src2) { - ASSERT(src1.is(d0)); - ASSERT(src2.is(d1)); + DCHECK(src1.is(d0)); + DCHECK(src2.is(d1)); if (!use_eabi_hardfloat()) { vmov(r0, r1, src1); vmov(r2, r3, src2); @@ -3518,16 +3559,16 @@ void MacroAssembler::CallCFunction(Register function, void MacroAssembler::CallCFunctionHelper(Register function, int num_reg_arguments, int num_double_arguments) { - ASSERT(has_frame()); + DCHECK(has_frame()); // Make sure that the stack is aligned before calling a C function unless // running in the simulator. The simulator has its own alignment check which // provides more information. #if V8_HOST_ARCH_ARM if (emit_debug_code()) { - int frame_alignment = OS::ActivationFrameAlignment(); + int frame_alignment = base::OS::ActivationFrameAlignment(); int frame_alignment_mask = frame_alignment - 1; if (frame_alignment > kPointerSize) { - ASSERT(IsPowerOf2(frame_alignment)); + DCHECK(IsPowerOf2(frame_alignment)); Label alignment_as_expected; tst(sp, Operand(frame_alignment_mask)); b(eq, &alignment_as_expected); @@ -3554,25 +3595,65 @@ void MacroAssembler::CallCFunctionHelper(Register function, void MacroAssembler::GetRelocatedValueLocation(Register ldr_location, - Register result) { - const uint32_t kLdrOffsetMask = (1 << 12) - 1; + Register result, + Register scratch) { + Label small_constant_pool_load, load_result; ldr(result, MemOperand(ldr_location)); + + if (FLAG_enable_ool_constant_pool) { + // Check if this is an extended constant pool load. + and_(scratch, result, Operand(GetConsantPoolLoadMask())); + teq(scratch, Operand(GetConsantPoolLoadPattern())); + b(eq, &small_constant_pool_load); + if (emit_debug_code()) { + // Check that the instruction sequence is: + // movw reg, #offset_low + // movt reg, #offset_high + // ldr reg, [pp, reg] + Instr patterns[] = {GetMovWPattern(), GetMovTPattern(), + GetLdrPpRegOffsetPattern()}; + for (int i = 0; i < 3; i++) { + ldr(result, MemOperand(ldr_location, i * kInstrSize)); + and_(result, result, Operand(patterns[i])); + cmp(result, Operand(patterns[i])); + Check(eq, kTheInstructionToPatchShouldBeALoadFromConstantPool); + } + // Result was clobbered. Restore it. + ldr(result, MemOperand(ldr_location)); + } + + // Get the offset into the constant pool. First extract movw immediate into + // result. + and_(scratch, result, Operand(0xfff)); + mov(ip, Operand(result, LSR, 4)); + and_(ip, ip, Operand(0xf000)); + orr(result, scratch, Operand(ip)); + // Then extract movt immediate and or into result. 
+ ldr(scratch, MemOperand(ldr_location, kInstrSize)); + and_(ip, scratch, Operand(0xf0000)); + orr(result, result, Operand(ip, LSL, 12)); + and_(scratch, scratch, Operand(0xfff)); + orr(result, result, Operand(scratch, LSL, 16)); + + b(&load_result); + } + + bind(&small_constant_pool_load); if (emit_debug_code()) { // Check that the instruction is a ldr reg, [<pc or pp> + offset] . - if (FLAG_enable_ool_constant_pool) { - and_(result, result, Operand(kLdrPpPattern)); - cmp(result, Operand(kLdrPpPattern)); - Check(eq, kTheInstructionToPatchShouldBeALoadFromPp); - } else { - and_(result, result, Operand(kLdrPCPattern)); - cmp(result, Operand(kLdrPCPattern)); - Check(eq, kTheInstructionToPatchShouldBeALoadFromPc); - } + and_(result, result, Operand(GetConsantPoolLoadPattern())); + cmp(result, Operand(GetConsantPoolLoadPattern())); + Check(eq, kTheInstructionToPatchShouldBeALoadFromConstantPool); // Result was clobbered. Restore it. ldr(result, MemOperand(ldr_location)); } - // Get the address of the constant. + + // Get the offset into the constant pool. + const uint32_t kLdrOffsetMask = (1 << 12) - 1; and_(result, result, Operand(kLdrOffsetMask)); + + bind(&load_result); + // Get the address of the constant. if (FLAG_enable_ool_constant_pool) { add(result, pp, Operand(result)); } else { @@ -3601,7 +3682,7 @@ void MacroAssembler::CheckMapDeprecated(Handle<Map> map, if (map->CanBeDeprecated()) { mov(scratch, Operand(map)); ldr(scratch, FieldMemOperand(scratch, Map::kBitField3Offset)); - tst(scratch, Operand(Smi::FromInt(Map::Deprecated::kMask))); + tst(scratch, Operand(Map::Deprecated::kMask)); b(ne, if_deprecated); } } @@ -3612,7 +3693,7 @@ void MacroAssembler::JumpIfBlack(Register object, Register scratch1, Label* on_black) { HasColor(object, scratch0, scratch1, on_black, 1, 0); // kBlackBitPattern. - ASSERT(strcmp(Marking::kBlackBitPattern, "10") == 0); + DCHECK(strcmp(Marking::kBlackBitPattern, "10") == 0); } @@ -3622,7 +3703,7 @@ void MacroAssembler::HasColor(Register object, Label* has_color, int first_bit, int second_bit) { - ASSERT(!AreAliased(object, bitmap_scratch, mask_scratch, no_reg)); + DCHECK(!AreAliased(object, bitmap_scratch, mask_scratch, no_reg)); GetMarkBits(object, bitmap_scratch, mask_scratch); @@ -3655,8 +3736,8 @@ void MacroAssembler::JumpIfDataObject(Register value, ldr(scratch, FieldMemOperand(value, HeapObject::kMapOffset)); CompareRoot(scratch, Heap::kHeapNumberMapRootIndex); b(eq, &is_data_object); - ASSERT(kIsIndirectStringTag == 1 && kIsIndirectStringMask == 1); - ASSERT(kNotStringTag == 0x80 && kIsNotStringMask == 0x80); + DCHECK(kIsIndirectStringTag == 1 && kIsIndirectStringMask == 1); + DCHECK(kNotStringTag == 0x80 && kIsNotStringMask == 0x80); // If it's a string and it's not a cons string then it's an object containing // no GC pointers. 
ldrb(scratch, FieldMemOperand(scratch, Map::kInstanceTypeOffset)); @@ -3669,7 +3750,7 @@ void MacroAssembler::JumpIfDataObject(Register value, void MacroAssembler::GetMarkBits(Register addr_reg, Register bitmap_reg, Register mask_reg) { - ASSERT(!AreAliased(addr_reg, bitmap_reg, mask_reg, no_reg)); + DCHECK(!AreAliased(addr_reg, bitmap_reg, mask_reg, no_reg)); and_(bitmap_reg, addr_reg, Operand(~Page::kPageAlignmentMask)); Ubfx(mask_reg, addr_reg, kPointerSizeLog2, Bitmap::kBitsPerCellLog2); const int kLowBits = kPointerSizeLog2 + Bitmap::kBitsPerCellLog2; @@ -3686,14 +3767,14 @@ void MacroAssembler::EnsureNotWhite( Register mask_scratch, Register load_scratch, Label* value_is_white_and_not_data) { - ASSERT(!AreAliased(value, bitmap_scratch, mask_scratch, ip)); + DCHECK(!AreAliased(value, bitmap_scratch, mask_scratch, ip)); GetMarkBits(value, bitmap_scratch, mask_scratch); // If the value is black or grey we don't need to do anything. - ASSERT(strcmp(Marking::kWhiteBitPattern, "00") == 0); - ASSERT(strcmp(Marking::kBlackBitPattern, "10") == 0); - ASSERT(strcmp(Marking::kGreyBitPattern, "11") == 0); - ASSERT(strcmp(Marking::kImpossibleBitPattern, "01") == 0); + DCHECK(strcmp(Marking::kWhiteBitPattern, "00") == 0); + DCHECK(strcmp(Marking::kBlackBitPattern, "10") == 0); + DCHECK(strcmp(Marking::kGreyBitPattern, "11") == 0); + DCHECK(strcmp(Marking::kImpossibleBitPattern, "01") == 0); Label done; @@ -3726,8 +3807,8 @@ void MacroAssembler::EnsureNotWhite( b(eq, &is_data_object); // Check for strings. - ASSERT(kIsIndirectStringTag == 1 && kIsIndirectStringMask == 1); - ASSERT(kNotStringTag == 0x80 && kIsNotStringMask == 0x80); + DCHECK(kIsIndirectStringTag == 1 && kIsIndirectStringMask == 1); + DCHECK(kNotStringTag == 0x80 && kIsNotStringMask == 0x80); // If it's a string and it's not a cons string then it's an object containing // no GC pointers. Register instance_type = load_scratch; @@ -3739,8 +3820,8 @@ void MacroAssembler::EnsureNotWhite( // Otherwise it's String::kHeaderSize + string->length() * (1 or 2). // External strings are the only ones with the kExternalStringTag bit // set. - ASSERT_EQ(0, kSeqStringTag & kExternalStringTag); - ASSERT_EQ(0, kConsStringTag & kExternalStringTag); + DCHECK_EQ(0, kSeqStringTag & kExternalStringTag); + DCHECK_EQ(0, kConsStringTag & kExternalStringTag); tst(instance_type, Operand(kExternalStringTag)); mov(length, Operand(ExternalString::kSize), LeaveCC, ne); b(ne, &is_data_object); @@ -3749,8 +3830,8 @@ void MacroAssembler::EnsureNotWhite( // For ASCII (char-size of 1) we shift the smi tag away to get the length. // For UC16 (char-size of 2) we just leave the smi tag in place, thereby // getting the length multiplied by 2. - ASSERT(kOneByteStringTag == 4 && kStringEncodingMask == 4); - ASSERT(kSmiTag == 0 && kSmiTagSize == 1); + DCHECK(kOneByteStringTag == 4 && kStringEncodingMask == 4); + DCHECK(kSmiTag == 0 && kSmiTagSize == 1); ldr(ip, FieldMemOperand(value, String::kLengthOffset)); tst(instance_type, Operand(kStringEncodingMask)); mov(ip, Operand(ip, LSR, 1), LeaveCC, ne); @@ -3798,52 +3879,6 @@ void MacroAssembler::ClampDoubleToUint8(Register result_reg, } -void MacroAssembler::Throw(BailoutReason reason) { - Label throw_start; - bind(&throw_start); -#ifdef DEBUG - const char* msg = GetBailoutReason(reason); - if (msg != NULL) { - RecordComment("Throw message: "); - RecordComment(msg); - } -#endif - - mov(r0, Operand(Smi::FromInt(reason))); - push(r0); - // Disable stub call restrictions to always allow calls to throw. 
- if (!has_frame_) { - // We don't actually want to generate a pile of code for this, so just - // claim there is a stack frame, without generating one. - FrameScope scope(this, StackFrame::NONE); - CallRuntime(Runtime::kHiddenThrowMessage, 1); - } else { - CallRuntime(Runtime::kHiddenThrowMessage, 1); - } - // will not return here - if (is_const_pool_blocked()) { - // If the calling code cares throw the exact number of - // instructions generated, we insert padding here to keep the size - // of the ThrowMessage macro constant. - static const int kExpectedThrowMessageInstructions = 10; - int throw_instructions = InstructionsGeneratedSince(&throw_start); - ASSERT(throw_instructions <= kExpectedThrowMessageInstructions); - while (throw_instructions++ < kExpectedThrowMessageInstructions) { - nop(); - } - } -} - - -void MacroAssembler::ThrowIf(Condition cc, BailoutReason reason) { - Label L; - b(NegateCondition(cc), &L); - Throw(reason); - // will not return here - bind(&L); -} - - void MacroAssembler::LoadInstanceDescriptors(Register map, Register descriptors) { ldr(descriptors, FieldMemOperand(map, Map::kDescriptorsOffset)); @@ -3859,7 +3894,8 @@ void MacroAssembler::NumberOfOwnDescriptors(Register dst, Register map) { void MacroAssembler::EnumLength(Register dst, Register map) { STATIC_ASSERT(Map::EnumLengthBits::kShift == 0); ldr(dst, FieldMemOperand(map, Map::kBitField3Offset)); - and_(dst, dst, Operand(Smi::FromInt(Map::EnumLengthBits::kMask))); + and_(dst, dst, Operand(Map::EnumLengthBits::kMask)); + SmiTag(dst); } @@ -3958,7 +3994,7 @@ void MacroAssembler::JumpIfDictionaryInPrototypeChain( Register scratch0, Register scratch1, Label* found) { - ASSERT(!scratch1.is(scratch0)); + DCHECK(!scratch1.is(scratch0)); Factory* factory = isolate()->factory(); Register current = scratch0; Label loop_again; @@ -3970,7 +4006,7 @@ void MacroAssembler::JumpIfDictionaryInPrototypeChain( bind(&loop_again); ldr(current, FieldMemOperand(current, HeapObject::kMapOffset)); ldr(scratch1, FieldMemOperand(current, Map::kBitField2Offset)); - Ubfx(scratch1, scratch1, Map::kElementsKindShift, Map::kElementsKindBitCount); + DecodeField<Map::ElementsKindBits>(scratch1); cmp(scratch1, Operand(DICTIONARY_ELEMENTS)); b(eq, found); ldr(current, FieldMemOperand(current, Map::kPrototypeOffset)); @@ -3985,9 +4021,12 @@ bool AreAliased(Register reg1, Register reg3, Register reg4, Register reg5, - Register reg6) { + Register reg6, + Register reg7, + Register reg8) { int n_of_valid_regs = reg1.is_valid() + reg2.is_valid() + - reg3.is_valid() + reg4.is_valid() + reg5.is_valid() + reg6.is_valid(); + reg3.is_valid() + reg4.is_valid() + reg5.is_valid() + reg6.is_valid() + + reg7.is_valid() + reg8.is_valid(); RegList regs = 0; if (reg1.is_valid()) regs |= reg1.bit(); @@ -3996,6 +4035,8 @@ bool AreAliased(Register reg1, if (reg4.is_valid()) regs |= reg4.bit(); if (reg5.is_valid()) regs |= reg5.bit(); if (reg6.is_valid()) regs |= reg6.bit(); + if (reg7.is_valid()) regs |= reg7.bit(); + if (reg8.is_valid()) regs |= reg8.bit(); int n_of_non_aliasing_regs = NumRegs(regs); return n_of_valid_regs != n_of_non_aliasing_regs; @@ -4013,19 +4054,19 @@ CodePatcher::CodePatcher(byte* address, // Create a new macro assembler pointing to the address of the code to patch. // The size is adjusted with kGap on order for the assembler to generate size // bytes of instructions without failing with buffer size constraints. 
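The GetRelocatedValueLocation hunks above teach the patcher a second load shape: with out-of-line constant pools the offset may arrive as a movw/movt pair followed by ldr reg, [pp, reg], so the code reassembles a 32-bit offset from the two instruction encodings. ARM scatters each 16-bit half across an imm4:imm12 pair inside the instruction word; a sketch of the same extraction in plain C++ (field positions per the ARM encoding; the helper name is illustrative):

#include <cstdint>

// Rebuild the 32-bit immediate carried by a movw (low half) / movt (high
// half) pair. In each instruction, imm16 = imm4 (bits 19:16) : imm12
// (bits 11:0).
uint32_t ImmediateFromMovwMovt(uint32_t movw, uint32_t movt) {
  uint32_t low  = ((movw >> 4) & 0xf000) | (movw & 0x0fff);
  uint32_t high = ((movt >> 4) & 0xf000) | (movt & 0x0fff);
  return (high << 16) | low;
}

This mirrors the and_/orr sequence in the hunk, which performs the same bit shuffling in registers before indexing the constant pool off pp.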
- ASSERT(masm_.reloc_info_writer.pos() == address_ + size_ + Assembler::kGap); + DCHECK(masm_.reloc_info_writer.pos() == address_ + size_ + Assembler::kGap); } CodePatcher::~CodePatcher() { // Indicate that code has changed. if (flush_cache_ == FLUSH) { - CPU::FlushICache(address_, size_); + CpuFeatures::FlushICache(address_, size_); } // Check that the code was patched as expected. - ASSERT(masm_.pc_ == address_ + size_); - ASSERT(masm_.reloc_info_writer.pos() == address_ + size_ + Assembler::kGap); + DCHECK(masm_.pc_ == address_ + size_); + DCHECK(masm_.reloc_info_writer.pos() == address_ + size_ + Assembler::kGap); } @@ -4049,9 +4090,9 @@ void CodePatcher::EmitCondition(Condition cond) { void MacroAssembler::TruncatingDiv(Register result, Register dividend, int32_t divisor) { - ASSERT(!dividend.is(result)); - ASSERT(!dividend.is(ip)); - ASSERT(!result.is(ip)); + DCHECK(!dividend.is(result)); + DCHECK(!dividend.is(ip)); + DCHECK(!result.is(ip)); MultiplierAndShift ms(divisor); mov(ip, Operand(ms.multiplier())); smull(ip, result, dividend, ip); diff --git a/deps/v8/src/arm/macro-assembler-arm.h b/deps/v8/src/arm/macro-assembler-arm.h index ba6f82571d6..d5ca12e4f45 100644 --- a/deps/v8/src/arm/macro-assembler-arm.h +++ b/deps/v8/src/arm/macro-assembler-arm.h @@ -5,9 +5,9 @@ #ifndef V8_ARM_MACRO_ASSEMBLER_ARM_H_ #define V8_ARM_MACRO_ASSEMBLER_ARM_H_ -#include "assembler.h" -#include "frames.h" -#include "v8globals.h" +#include "src/assembler.h" +#include "src/frames.h" +#include "src/globals.h" namespace v8 { namespace internal { @@ -37,6 +37,10 @@ enum TaggingMode { enum RememberedSetAction { EMIT_REMEMBERED_SET, OMIT_REMEMBERED_SET }; enum SmiCheck { INLINE_SMI_CHECK, OMIT_SMI_CHECK }; +enum PointersToHereCheck { + kPointersToHereMaybeInteresting, + kPointersToHereAreAlwaysInteresting +}; enum LinkRegisterStatus { kLRHasNotBeenSaved, kLRHasBeenSaved }; @@ -54,7 +58,9 @@ bool AreAliased(Register reg1, Register reg3 = no_reg, Register reg4 = no_reg, Register reg5 = no_reg, - Register reg6 = no_reg); + Register reg6 = no_reg, + Register reg7 = no_reg, + Register reg8 = no_reg); #endif @@ -72,12 +78,11 @@ class MacroAssembler: public Assembler { // macro assembler. MacroAssembler(Isolate* isolate, void* buffer, int size); - // Jump, Call, and Ret pseudo instructions implementing inter-working. - void Jump(Register target, Condition cond = al); - void Jump(Address target, RelocInfo::Mode rmode, Condition cond = al); - void Jump(Handle<Code> code, RelocInfo::Mode rmode, Condition cond = al); + + // Returns the size of a call in instructions. Note, the value returned is + // only valid as long as no entries are added to the constant pool between + // checking the call size and emitting the actual call. static int CallSize(Register target, Condition cond = al); - void Call(Register target, Condition cond = al); int CallSize(Address target, RelocInfo::Mode rmode, Condition cond = al); int CallStubSize(CodeStub* stub, TypeFeedbackId ast_id = TypeFeedbackId::None(), @@ -86,6 +91,12 @@ class MacroAssembler: public Assembler { Address target, RelocInfo::Mode rmode, Condition cond = al); + + // Jump, Call, and Ret pseudo instructions implementing inter-working. 
+ void Jump(Register target, Condition cond = al); + void Jump(Address target, RelocInfo::Mode rmode, Condition cond = al); + void Jump(Handle<Code> code, RelocInfo::Mode rmode, Condition cond = al); + void Call(Register target, Condition cond = al); void Call(Address target, RelocInfo::Mode rmode, Condition cond = al, TargetAddressStorageMode mode = CAN_INLINE_TARGET_ADDRESS); @@ -113,7 +124,8 @@ class MacroAssembler: public Assembler { Register scratch = no_reg, Condition cond = al); - + void Mls(Register dst, Register src1, Register src2, Register srcA, + Condition cond = al); void And(Register dst, Register src1, const Operand& src2, Condition cond = al); void Ubfx(Register dst, Register src, int lsb, int width, @@ -140,6 +152,9 @@ class MacroAssembler: public Assembler { // Register move. May do nothing if the registers are identical. void Move(Register dst, Handle<Object> value); void Move(Register dst, Register src, Condition cond = al); + void Move(Register dst, const Operand& src, Condition cond = al) { + if (!src.is_reg() || !src.rm().is(dst)) mov(dst, src, LeaveCC, cond); + } void Move(DwVfpRegister dst, DwVfpRegister src); void Load(Register dst, const MemOperand& src, Representation r); @@ -244,7 +259,9 @@ class MacroAssembler: public Assembler { LinkRegisterStatus lr_status, SaveFPRegsMode save_fp, RememberedSetAction remembered_set_action = EMIT_REMEMBERED_SET, - SmiCheck smi_check = INLINE_SMI_CHECK); + SmiCheck smi_check = INLINE_SMI_CHECK, + PointersToHereCheck pointers_to_here_check_for_value = + kPointersToHereMaybeInteresting); // As above, but the offset has the tag presubtracted. For use with // MemOperand(reg, off). @@ -256,7 +273,9 @@ class MacroAssembler: public Assembler { LinkRegisterStatus lr_status, SaveFPRegsMode save_fp, RememberedSetAction remembered_set_action = EMIT_REMEMBERED_SET, - SmiCheck smi_check = INLINE_SMI_CHECK) { + SmiCheck smi_check = INLINE_SMI_CHECK, + PointersToHereCheck pointers_to_here_check_for_value = + kPointersToHereMaybeInteresting) { RecordWriteField(context, offset + kHeapObjectTag, value, @@ -264,9 +283,17 @@ class MacroAssembler: public Assembler { lr_status, save_fp, remembered_set_action, - smi_check); + smi_check, + pointers_to_here_check_for_value); } + void RecordWriteForMap( + Register object, + Register map, + Register dst, + LinkRegisterStatus lr_status, + SaveFPRegsMode save_fp); + // For a given |object| notify the garbage collector that the slot |address| // has been written. |value| is the object being stored. The value and // address registers are clobbered by the operation. @@ -277,7 +304,9 @@ class MacroAssembler: public Assembler { LinkRegisterStatus lr_status, SaveFPRegsMode save_fp, RememberedSetAction remembered_set_action = EMIT_REMEMBERED_SET, - SmiCheck smi_check = INLINE_SMI_CHECK); + SmiCheck smi_check = INLINE_SMI_CHECK, + PointersToHereCheck pointers_to_here_check_for_value = + kPointersToHereMaybeInteresting); // Push a handle. void Push(Handle<Object> handle); @@ -285,7 +314,7 @@ class MacroAssembler: public Assembler { // Push two registers. Pushes leftmost register first (to highest address). void Push(Register src1, Register src2, Condition cond = al) { - ASSERT(!src1.is(src2)); + DCHECK(!src1.is(src2)); if (src1.code() > src2.code()) { stm(db_w, sp, src1.bit() | src2.bit(), cond); } else { @@ -296,9 +325,9 @@ class MacroAssembler: public Assembler { // Push three registers. Pushes leftmost register first (to highest address). 
void Push(Register src1, Register src2, Register src3, Condition cond = al) { - ASSERT(!src1.is(src2)); - ASSERT(!src2.is(src3)); - ASSERT(!src1.is(src3)); + DCHECK(!src1.is(src2)); + DCHECK(!src2.is(src3)); + DCHECK(!src1.is(src3)); if (src1.code() > src2.code()) { if (src2.code() > src3.code()) { stm(db_w, sp, src1.bit() | src2.bit() | src3.bit(), cond); @@ -318,12 +347,12 @@ class MacroAssembler: public Assembler { Register src3, Register src4, Condition cond = al) { - ASSERT(!src1.is(src2)); - ASSERT(!src2.is(src3)); - ASSERT(!src1.is(src3)); - ASSERT(!src1.is(src4)); - ASSERT(!src2.is(src4)); - ASSERT(!src3.is(src4)); + DCHECK(!src1.is(src2)); + DCHECK(!src2.is(src3)); + DCHECK(!src1.is(src3)); + DCHECK(!src1.is(src4)); + DCHECK(!src2.is(src4)); + DCHECK(!src3.is(src4)); if (src1.code() > src2.code()) { if (src2.code() > src3.code()) { if (src3.code() > src4.code()) { @@ -347,7 +376,7 @@ class MacroAssembler: public Assembler { // Pop two registers. Pops rightmost register first (from lower address). void Pop(Register src1, Register src2, Condition cond = al) { - ASSERT(!src1.is(src2)); + DCHECK(!src1.is(src2)); if (src1.code() > src2.code()) { ldm(ia_w, sp, src1.bit() | src2.bit(), cond); } else { @@ -358,9 +387,9 @@ class MacroAssembler: public Assembler { // Pop three registers. Pops rightmost register first (from lower address). void Pop(Register src1, Register src2, Register src3, Condition cond = al) { - ASSERT(!src1.is(src2)); - ASSERT(!src2.is(src3)); - ASSERT(!src1.is(src3)); + DCHECK(!src1.is(src2)); + DCHECK(!src2.is(src3)); + DCHECK(!src1.is(src3)); if (src1.code() > src2.code()) { if (src2.code() > src3.code()) { ldm(ia_w, sp, src1.bit() | src2.bit() | src3.bit(), cond); @@ -380,12 +409,12 @@ class MacroAssembler: public Assembler { Register src3, Register src4, Condition cond = al) { - ASSERT(!src1.is(src2)); - ASSERT(!src2.is(src3)); - ASSERT(!src1.is(src3)); - ASSERT(!src1.is(src4)); - ASSERT(!src2.is(src4)); - ASSERT(!src3.is(src4)); + DCHECK(!src1.is(src2)); + DCHECK(!src2.is(src3)); + DCHECK(!src1.is(src3)); + DCHECK(!src1.is(src4)); + DCHECK(!src2.is(src4)); + DCHECK(!src3.is(src4)); if (src1.code() > src2.code()) { if (src2.code() > src3.code()) { if (src3.code() > src4.code()) { @@ -417,12 +446,9 @@ class MacroAssembler: public Assembler { // RegList constant kSafepointSavedRegisters. void PushSafepointRegisters(); void PopSafepointRegisters(); - void PushSafepointRegistersAndDoubles(); - void PopSafepointRegistersAndDoubles(); // Store value in register src in the safepoint stack slot for // register dst. void StoreToSafepointRegisterSlot(Register src, Register dst); - void StoreToSafepointRegistersAndDoublesSlot(Register src, Register dst); // Load the value of the src register from its safepoint stack slot // into register dst. void LoadFromSafepointRegisterSlot(Register dst, Register src); @@ -519,7 +545,8 @@ class MacroAssembler: public Assembler { Label* not_int32); // Generates function and stub prologue code. - void Prologue(PrologueFrameMode frame_mode); + void StubPrologue(); + void Prologue(bool code_pre_aging); // Enter exit frame. // stack_space - extra stack space, used for alignment before call to C. @@ -630,12 +657,6 @@ class MacroAssembler: public Assembler { // handler chain. void ThrowUncatchable(Register value); - // Throw a message string as an exception. - void Throw(BailoutReason reason); - - // Throw a message string as an exception if a condition is not true. 
- void ThrowIf(Condition cc, BailoutReason reason); - // --------------------------------------------------------------------------- // Inline caching support @@ -666,7 +687,7 @@ class MacroAssembler: public Assembler { // These instructions are generated to mark special location in the code, // like some special IC code. static inline bool IsMarkedCode(Instr instr, int type) { - ASSERT((FIRST_IC_MARKER <= type) && (type < LAST_CODE_MARKER)); + DCHECK((FIRST_IC_MARKER <= type) && (type < LAST_CODE_MARKER)); return IsNop(instr, type); } @@ -686,7 +707,7 @@ class MacroAssembler: public Assembler { (FIRST_IC_MARKER <= dst_reg) && (dst_reg < LAST_CODE_MARKER) ? src_reg : -1; - ASSERT((type == -1) || + DCHECK((type == -1) || ((FIRST_IC_MARKER <= type) && (type < LAST_CODE_MARKER))); return type; } @@ -764,7 +785,8 @@ class MacroAssembler: public Assembler { Register scratch2, Register heap_number_map, Label* gc_required, - TaggingMode tagging_mode = TAG_RESULT); + TaggingMode tagging_mode = TAG_RESULT, + MutableMode mode = IMMUTABLE); void AllocateHeapNumberWithValue(Register result, DwVfpRegister value, Register scratch1, @@ -925,7 +947,7 @@ class MacroAssembler: public Assembler { ldr(type, FieldMemOperand(obj, HeapObject::kMapOffset), cond); ldrb(type, FieldMemOperand(type, Map::kInstanceTypeOffset), cond); tst(type, Operand(kIsNotStringMask), cond); - ASSERT_EQ(0, kStringTag); + DCHECK_EQ(0, kStringTag); return eq; } @@ -1122,7 +1144,7 @@ class MacroAssembler: public Assembler { void GetBuiltinFunction(Register target, Builtins::JavaScript id); Handle<Object> CodeObject() { - ASSERT(!code_object_.is_null()); + DCHECK(!code_object_.is_null()); return code_object_; } @@ -1166,7 +1188,7 @@ class MacroAssembler: public Assembler { // EABI variant for double arguments in use. bool use_eabi_hardfloat() { #ifdef __arm__ - return OS::ArmUsingHardFloat(); + return base::OS::ArmUsingHardFloat(); #elif USE_EABI_HARDFLOAT return true; #else @@ -1339,8 +1361,8 @@ class MacroAssembler: public Assembler { // Get the location of a relocated constant (its address in the constant pool) // from its load site. 
- void GetRelocatedValueLocation(Register ldr_location, - Register result); + void GetRelocatedValueLocation(Register ldr_location, Register result, + Register scratch); void ClampUint8(Register output_reg, Register input_reg); @@ -1354,12 +1376,36 @@ class MacroAssembler: public Assembler { void EnumLength(Register dst, Register map); void NumberOfOwnDescriptors(Register dst, Register map); + template<typename Field> + void DecodeField(Register dst, Register src) { + Ubfx(dst, src, Field::kShift, Field::kSize); + } + template<typename Field> void DecodeField(Register reg) { + DecodeField<Field>(reg, reg); + } + + template<typename Field> + void DecodeFieldToSmi(Register dst, Register src) { static const int shift = Field::kShift; - static const int mask = (Field::kMask >> shift) << kSmiTagSize; - mov(reg, Operand(reg, LSR, shift)); - and_(reg, reg, Operand(mask)); + static const int mask = Field::kMask >> shift << kSmiTagSize; + STATIC_ASSERT((mask & (0x80000000u >> (kSmiTagSize - 1))) == 0); + STATIC_ASSERT(kSmiTag == 0); + if (shift < kSmiTagSize) { + mov(dst, Operand(src, LSL, kSmiTagSize - shift)); + and_(dst, dst, Operand(mask)); + } else if (shift > kSmiTagSize) { + mov(dst, Operand(src, LSR, shift - kSmiTagSize)); + and_(dst, dst, Operand(mask)); + } else { + and_(dst, src, Operand(mask)); + } + } + + template<typename Field> + void DecodeFieldToSmi(Register reg) { + DecodeField<Field>(reg, reg); } // Activation support. @@ -1501,7 +1547,7 @@ class FrameAndConstantPoolScope { old_constant_pool_available_(masm->is_constant_pool_available()) { // We only want to enable constant pool access for non-manual frame scopes // to ensure the constant pool pointer is valid throughout the scope. - ASSERT(type_ != StackFrame::MANUAL && type_ != StackFrame::NONE); + DCHECK(type_ != StackFrame::MANUAL && type_ != StackFrame::NONE); masm->set_has_frame(true); masm->set_constant_pool_available(true); masm->EnterFrame(type, !old_constant_pool_available_); @@ -1519,7 +1565,7 @@ class FrameAndConstantPoolScope { // scope, the MacroAssembler is still marked as being in a frame scope, and // the code will be generated again when it goes out of scope. void GenerateLeaveFrame() { - ASSERT(type_ != StackFrame::MANUAL && type_ != StackFrame::NONE); + DCHECK(type_ != StackFrame::MANUAL && type_ != StackFrame::NONE); masm_->LeaveFrame(type_); } diff --git a/deps/v8/src/arm/regexp-macro-assembler-arm.cc b/deps/v8/src/arm/regexp-macro-assembler-arm.cc index e511554eff7..8480f4559b5 100644 --- a/deps/v8/src/arm/regexp-macro-assembler-arm.cc +++ b/deps/v8/src/arm/regexp-macro-assembler-arm.cc @@ -2,18 +2,19 @@ // Use of this source code is governed by a BSD-style license that can be // found in the LICENSE file. 
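The reworked DecodeFieldToSmi just above folds the usual decode-then-SmiTag pair into a single shift: instead of isolating the bit field and tagging afterwards (as the old IndexFromHash did with Ubfx plus SmiTag), it moves the field directly to one bit above the Smi tag and masks. The equivalent arithmetic in plain C++ (a sketch; kShift/kMask mirror V8's BitField, and kSmiTagSize is 1 with a zero tag value on 32-bit targets):

#include <cstdint>

const int kSmiTagSize = 1;  // the Smi tag occupies the low bit; tag value 0

// Decode a bit field from 'word' and return it already Smi-tagged.
uint32_t DecodeFieldToSmi(uint32_t word, int shift, uint32_t mask) {
  uint32_t smi_mask = (mask >> shift) << kSmiTagSize;
  if (shift < kSmiTagSize) return (word << (kSmiTagSize - shift)) & smi_mask;
  if (shift > kSmiTagSize) return (word >> (shift - kSmiTagSize)) & smi_mask;
  return word & smi_mask;  // field already sits exactly above the tag
}

The STATIC_ASSERT in the hunk guards the one case this trick cannot handle: a field whose tagged value would spill into the sign bit of the word.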
-#include "v8.h" +#include "src/v8.h" #if V8_TARGET_ARCH_ARM -#include "cpu-profiler.h" -#include "unicode.h" -#include "log.h" -#include "code-stubs.h" -#include "regexp-stack.h" -#include "macro-assembler.h" -#include "regexp-macro-assembler.h" -#include "arm/regexp-macro-assembler-arm.h" +#include "src/code-stubs.h" +#include "src/cpu-profiler.h" +#include "src/log.h" +#include "src/macro-assembler.h" +#include "src/regexp-macro-assembler.h" +#include "src/regexp-stack.h" +#include "src/unicode.h" + +#include "src/arm/regexp-macro-assembler-arm.h" namespace v8 { namespace internal { @@ -109,7 +110,7 @@ RegExpMacroAssemblerARM::RegExpMacroAssemblerARM( success_label_(), backtrack_label_(), exit_label_() { - ASSERT_EQ(0, registers_to_save % 2); + DCHECK_EQ(0, registers_to_save % 2); __ jmp(&entry_label_); // We'll write the entry code later. __ bind(&start_label_); // And then continue from here. } @@ -142,8 +143,8 @@ void RegExpMacroAssemblerARM::AdvanceCurrentPosition(int by) { void RegExpMacroAssemblerARM::AdvanceRegister(int reg, int by) { - ASSERT(reg >= 0); - ASSERT(reg < num_registers_); + DCHECK(reg >= 0); + DCHECK(reg < num_registers_); if (by != 0) { __ ldr(r0, register_location(reg)); __ add(r0, r0, Operand(by)); @@ -286,7 +287,7 @@ void RegExpMacroAssemblerARM::CheckNotBackReferenceIgnoreCase( // Compute new value of character position after the matched part. __ sub(current_input_offset(), r2, end_of_input_address()); } else { - ASSERT(mode_ == UC16); + DCHECK(mode_ == UC16); int argument_count = 4; __ PrepareCallCFunction(argument_count, r2); @@ -357,7 +358,7 @@ void RegExpMacroAssemblerARM::CheckNotBackReference( __ ldrb(r3, MemOperand(r0, char_size(), PostIndex)); __ ldrb(r4, MemOperand(r2, char_size(), PostIndex)); } else { - ASSERT(mode_ == UC16); + DCHECK(mode_ == UC16); __ ldrh(r3, MemOperand(r0, char_size(), PostIndex)); __ ldrh(r4, MemOperand(r2, char_size(), PostIndex)); } @@ -410,7 +411,7 @@ void RegExpMacroAssemblerARM::CheckNotCharacterAfterMinusAnd( uc16 minus, uc16 mask, Label* on_not_equal) { - ASSERT(minus < String::kMaxUtf16CodeUnit); + DCHECK(minus < String::kMaxUtf16CodeUnit); __ sub(r0, current_character(), Operand(minus)); __ and_(r0, r0, Operand(mask)); __ cmp(r0, Operand(c)); @@ -709,7 +710,7 @@ Handle<HeapObject> RegExpMacroAssemblerARM::GetCode(Handle<String> source) { __ add(r1, r1, Operand(r2)); // r1 is length of string in characters. - ASSERT_EQ(0, num_saved_registers_ % 2); + DCHECK_EQ(0, num_saved_registers_ % 2); // Always an even number of capture registers. This allows us to // unroll the loop once to add an operation between a load of a register // and the following use of that register. @@ -894,8 +895,8 @@ void RegExpMacroAssemblerARM::LoadCurrentCharacter(int cp_offset, Label* on_end_of_input, bool check_bounds, int characters) { - ASSERT(cp_offset >= -1); // ^ and \b can look behind one character. - ASSERT(cp_offset < (1<<30)); // Be sane! (And ensure negation works) + DCHECK(cp_offset >= -1); // ^ and \b can look behind one character. + DCHECK(cp_offset < (1<<30)); // Be sane! (And ensure negation works) if (check_bounds) { CheckPosition(cp_offset + characters - 1, on_end_of_input); } @@ -960,7 +961,7 @@ void RegExpMacroAssemblerARM::SetCurrentPositionFromEnd(int by) { void RegExpMacroAssemblerARM::SetRegister(int register_index, int to) { - ASSERT(register_index >= num_saved_registers_); // Reserved for positions! + DCHECK(register_index >= num_saved_registers_); // Reserved for positions! 
__ mov(r0, Operand(to)); __ str(r0, register_location(register_index)); } @@ -984,7 +985,7 @@ void RegExpMacroAssemblerARM::WriteCurrentPositionToRegister(int reg, void RegExpMacroAssemblerARM::ClearRegisters(int reg_from, int reg_to) { - ASSERT(reg_from <= reg_to); + DCHECK(reg_from <= reg_to); __ ldr(r0, MemOperand(frame_pointer(), kInputStartMinusOne)); for (int reg = reg_from; reg <= reg_to; reg++) { __ str(r0, register_location(reg)); @@ -1010,8 +1011,8 @@ void RegExpMacroAssemblerARM::CallCheckStackGuardState(Register scratch) { __ mov(r1, Operand(masm_->CodeObject())); // We need to make room for the return address on the stack. - int stack_alignment = OS::ActivationFrameAlignment(); - ASSERT(IsAligned(stack_alignment, kPointerSize)); + int stack_alignment = base::OS::ActivationFrameAlignment(); + DCHECK(IsAligned(stack_alignment, kPointerSize)); __ sub(sp, sp, Operand(stack_alignment)); // r0 will point to the return address, placed by DirectCEntry. @@ -1026,7 +1027,7 @@ void RegExpMacroAssemblerARM::CallCheckStackGuardState(Register scratch) { // Drop the return address from the stack. __ add(sp, sp, Operand(stack_alignment)); - ASSERT(stack_alignment != 0); + DCHECK(stack_alignment != 0); __ ldr(sp, MemOperand(sp, 0)); __ mov(code_pointer(), Operand(masm_->CodeObject())); @@ -1044,7 +1045,8 @@ int RegExpMacroAssemblerARM::CheckStackGuardState(Address* return_address, Code* re_code, Address re_frame) { Isolate* isolate = frame_entry<Isolate*>(re_frame, kIsolate); - if (isolate->stack_guard()->IsStackOverflow()) { + StackLimitCheck check(isolate); + if (check.JsHasOverflowed()) { isolate->StackOverflow(); return EXCEPTION; } @@ -1067,11 +1069,11 @@ int RegExpMacroAssemblerARM::CheckStackGuardState(Address* return_address, // Current string. bool is_ascii = subject->IsOneByteRepresentationUnderneath(); - ASSERT(re_code->instruction_start() <= *return_address); - ASSERT(*return_address <= + DCHECK(re_code->instruction_start() <= *return_address); + DCHECK(*return_address <= re_code->instruction_start() + re_code->instruction_size()); - Object* result = Execution::HandleStackGuardInterrupt(isolate); + Object* result = isolate->stack_guard()->HandleInterrupts(); if (*code_handle != re_code) { // Return address no longer valid int delta = code_handle->address() - re_code->address(); @@ -1107,7 +1109,7 @@ int RegExpMacroAssemblerARM::CheckStackGuardState(Address* return_address, // be a sequential or external string with the same content. // Update the start and end pointers in the stack frame to the current // location (whether it has actually moved or not). - ASSERT(StringShape(*subject_tmp).IsSequential() || + DCHECK(StringShape(*subject_tmp).IsSequential() || StringShape(*subject_tmp).IsExternal()); // The original start address of the characters to match. 
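CallCheckStackGuardState above makes room for the simulated return address by dropping sp a full base::OS::ActivationFrameAlignment() and asserts the alignment is a pointer-size multiple; PrepareCallCFunction, earlier in the patch, does the actual down-alignment with an AND of the negated alignment. That arithmetic relies on the alignment being a power of two; a standalone sketch:

#include <cassert>
#include <cstdint>

// Round a stack pointer down to an ABI alignment boundary.
uintptr_t AlignDown(uintptr_t sp, uintptr_t alignment) {
  assert((alignment & (alignment - 1)) == 0);  // must be a power of two
  return sp & ~(alignment - 1);                // same bits as sp & -alignment
}

On ARM EABI the alignment at public interfaces is 8 bytes, which is why both the DCHECK(IsAligned(stack_alignment, kPointerSize)) and the later reload of the saved sp can assume pointer-sized multiples.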
@@ -1139,7 +1141,7 @@ int RegExpMacroAssemblerARM::CheckStackGuardState(Address* return_address, MemOperand RegExpMacroAssemblerARM::register_location(int register_index) { - ASSERT(register_index < (1<<30)); + DCHECK(register_index < (1<<30)); if (num_registers_ <= register_index) { num_registers_ = register_index + 1; } @@ -1192,14 +1194,14 @@ void RegExpMacroAssemblerARM::SafeCallTarget(Label* name) { void RegExpMacroAssemblerARM::Push(Register source) { - ASSERT(!source.is(backtrack_stackpointer())); + DCHECK(!source.is(backtrack_stackpointer())); __ str(source, MemOperand(backtrack_stackpointer(), kPointerSize, NegPreIndex)); } void RegExpMacroAssemblerARM::Pop(Register target) { - ASSERT(!target.is(backtrack_stackpointer())); + DCHECK(!target.is(backtrack_stackpointer())); __ ldr(target, MemOperand(backtrack_stackpointer(), kPointerSize, PostIndex)); } @@ -1244,7 +1246,7 @@ void RegExpMacroAssemblerARM::LoadCurrentCharacterUnchecked(int cp_offset, // If unaligned load/stores are not supported then this function must only // be used to load a single character at a time. if (!CanReadUnaligned()) { - ASSERT(characters == 1); + DCHECK(characters == 1); } if (mode_ == ASCII) { @@ -1253,15 +1255,15 @@ void RegExpMacroAssemblerARM::LoadCurrentCharacterUnchecked(int cp_offset, } else if (characters == 2) { __ ldrh(current_character(), MemOperand(end_of_input_address(), offset)); } else { - ASSERT(characters == 1); + DCHECK(characters == 1); __ ldrb(current_character(), MemOperand(end_of_input_address(), offset)); } } else { - ASSERT(mode_ == UC16); + DCHECK(mode_ == UC16); if (characters == 2) { __ ldr(current_character(), MemOperand(end_of_input_address(), offset)); } else { - ASSERT(characters == 1); + DCHECK(characters == 1); __ ldrh(current_character(), MemOperand(end_of_input_address(), offset)); } } diff --git a/deps/v8/src/arm/regexp-macro-assembler-arm.h b/deps/v8/src/arm/regexp-macro-assembler-arm.h index 4b18b274d7a..fef8413411f 100644 --- a/deps/v8/src/arm/regexp-macro-assembler-arm.h +++ b/deps/v8/src/arm/regexp-macro-assembler-arm.h @@ -5,9 +5,9 @@ #ifndef V8_ARM_REGEXP_MACRO_ASSEMBLER_ARM_H_ #define V8_ARM_REGEXP_MACRO_ASSEMBLER_ARM_H_ -#include "arm/assembler-arm.h" -#include "arm/assembler-arm-inl.h" -#include "macro-assembler.h" +#include "src/arm/assembler-arm.h" +#include "src/arm/assembler-arm-inl.h" +#include "src/macro-assembler.h" namespace v8 { namespace internal { diff --git a/deps/v8/src/arm/simulator-arm.cc b/deps/v8/src/arm/simulator-arm.cc index 80b46e04df3..ab26e8af13e 100644 --- a/deps/v8/src/arm/simulator-arm.cc +++ b/deps/v8/src/arm/simulator-arm.cc @@ -6,15 +6,15 @@ #include <stdlib.h> #include <cmath> -#include "v8.h" +#include "src/v8.h" #if V8_TARGET_ARCH_ARM -#include "disasm.h" -#include "assembler.h" -#include "codegen.h" -#include "arm/constants-arm.h" -#include "arm/simulator-arm.h" +#include "src/arm/constants-arm.h" +#include "src/arm/simulator-arm.h" +#include "src/assembler.h" +#include "src/codegen.h" +#include "src/disasm.h" #if defined(USE_SIMULATOR) @@ -87,7 +87,7 @@ void ArmDebugger::Stop(Instruction* instr) { char** msg_address = reinterpret_cast<char**>(sim_->get_pc() + Instruction::kInstrSize); char* msg = *msg_address; - ASSERT(msg != NULL); + DCHECK(msg != NULL); // Update this stop description. 
if (isWatchedStop(code) && !watched_stops_[code].desc) { @@ -342,17 +342,18 @@ void ArmDebugger::Debug() { || (strcmp(cmd, "printobject") == 0)) { if (argc == 2) { int32_t value; + OFStream os(stdout); if (GetValue(arg1, &value)) { Object* obj = reinterpret_cast<Object*>(value); - PrintF("%s: \n", arg1); + os << arg1 << ": \n"; #ifdef DEBUG - obj->PrintLn(); + obj->Print(os); + os << "\n"; #else - obj->ShortPrint(); - PrintF("\n"); + os << Brief(obj) << "\n"; #endif } else { - PrintF("%s unrecognized\n", arg1); + os << arg1 << " unrecognized\n"; } } else { PrintF("printobject <value>\n"); @@ -451,7 +452,7 @@ void ArmDebugger::Debug() { } } else if (strcmp(cmd, "gdb") == 0) { PrintF("relinquishing control to gdb\n"); - v8::internal::OS::DebugBreak(); + v8::base::OS::DebugBreak(); PrintF("regaining control from gdb\n"); } else if (strcmp(cmd, "break") == 0) { if (argc == 2) { @@ -607,8 +608,8 @@ void ArmDebugger::Debug() { static bool ICacheMatch(void* one, void* two) { - ASSERT((reinterpret_cast<intptr_t>(one) & CachePage::kPageMask) == 0); - ASSERT((reinterpret_cast<intptr_t>(two) & CachePage::kPageMask) == 0); + DCHECK((reinterpret_cast<intptr_t>(one) & CachePage::kPageMask) == 0); + DCHECK((reinterpret_cast<intptr_t>(two) & CachePage::kPageMask) == 0); return one == two; } @@ -645,7 +646,7 @@ void Simulator::FlushICache(v8::internal::HashMap* i_cache, FlushOnePage(i_cache, start, bytes_to_flush); start += bytes_to_flush; size -= bytes_to_flush; - ASSERT_EQ(0, start & CachePage::kPageMask); + DCHECK_EQ(0, start & CachePage::kPageMask); offset = 0; } if (size != 0) { @@ -670,10 +671,10 @@ CachePage* Simulator::GetCachePage(v8::internal::HashMap* i_cache, void* page) { void Simulator::FlushOnePage(v8::internal::HashMap* i_cache, intptr_t start, int size) { - ASSERT(size <= CachePage::kPageSize); - ASSERT(AllOnOnePage(start, size - 1)); - ASSERT((start & CachePage::kLineMask) == 0); - ASSERT((size & CachePage::kLineMask) == 0); + DCHECK(size <= CachePage::kPageSize); + DCHECK(AllOnOnePage(start, size - 1)); + DCHECK((start & CachePage::kLineMask) == 0); + DCHECK((size & CachePage::kLineMask) == 0); void* page = reinterpret_cast<void*>(start & (~CachePage::kPageMask)); int offset = (start & CachePage::kPageMask); CachePage* cache_page = GetCachePage(i_cache, page); @@ -694,12 +695,12 @@ void Simulator::CheckICache(v8::internal::HashMap* i_cache, char* cached_line = cache_page->CachedData(offset & ~CachePage::kLineMask); if (cache_hit) { // Check that the data in memory matches the contents of the I-cache. - CHECK(memcmp(reinterpret_cast<void*>(instr), - cache_page->CachedData(offset), - Instruction::kInstrSize) == 0); + CHECK_EQ(0, + memcmp(reinterpret_cast<void*>(instr), + cache_page->CachedData(offset), Instruction::kInstrSize)); } else { // Cache miss. Load memory into the cache. 
- OS::MemCopy(cached_line, line, CachePage::kLineLength); + memcpy(cached_line, line, CachePage::kLineLength); *cache_valid_byte = CachePage::LINE_VALID; } } @@ -813,7 +814,7 @@ class Redirection { Redirection* current = isolate->simulator_redirection(); for (; current != NULL; current = current->next_) { if (current->external_function_ == external_function) { - ASSERT_EQ(current->type(), type); + DCHECK_EQ(current->type(), type); return current; } } @@ -852,7 +853,7 @@ void* Simulator::RedirectExternalReference(void* external_function, Simulator* Simulator::current(Isolate* isolate) { v8::internal::Isolate::PerIsolateThreadData* isolate_data = isolate->FindOrAllocatePerThreadDataForThisThread(); - ASSERT(isolate_data != NULL); + DCHECK(isolate_data != NULL); Simulator* sim = isolate_data->simulator(); if (sim == NULL) { @@ -867,7 +868,7 @@ Simulator* Simulator::current(Isolate* isolate) { // Sets the register in the architecture state. It will also deal with updating // Simulator internal state for special registers such as PC. void Simulator::set_register(int reg, int32_t value) { - ASSERT((reg >= 0) && (reg < num_registers)); + DCHECK((reg >= 0) && (reg < num_registers)); if (reg == pc) { pc_modified_ = true; } @@ -878,7 +879,7 @@ void Simulator::set_register(int reg, int32_t value) { // Get the register from the architecture state. This function does handle // the special case of accessing the PC register. int32_t Simulator::get_register(int reg) const { - ASSERT((reg >= 0) && (reg < num_registers)); + DCHECK((reg >= 0) && (reg < num_registers)); // Stupid code added to avoid bug in GCC. // See: http://gcc.gnu.org/bugzilla/show_bug.cgi?id=43949 if (reg >= num_registers) return 0; @@ -888,75 +889,75 @@ int32_t Simulator::get_register(int reg) const { double Simulator::get_double_from_register_pair(int reg) { - ASSERT((reg >= 0) && (reg < num_registers) && ((reg % 2) == 0)); + DCHECK((reg >= 0) && (reg < num_registers) && ((reg % 2) == 0)); double dm_val = 0.0; // Read the bits from the unsigned integer register_[] array // into the double precision floating point value and return it. 
char buffer[2 * sizeof(vfp_registers_[0])]; - OS::MemCopy(buffer, &registers_[reg], 2 * sizeof(registers_[0])); - OS::MemCopy(&dm_val, buffer, 2 * sizeof(registers_[0])); + memcpy(buffer, &registers_[reg], 2 * sizeof(registers_[0])); + memcpy(&dm_val, buffer, 2 * sizeof(registers_[0])); return(dm_val); } void Simulator::set_register_pair_from_double(int reg, double* value) { - ASSERT((reg >= 0) && (reg < num_registers) && ((reg % 2) == 0)); + DCHECK((reg >= 0) && (reg < num_registers) && ((reg % 2) == 0)); memcpy(registers_ + reg, value, sizeof(*value)); } void Simulator::set_dw_register(int dreg, const int* dbl) { - ASSERT((dreg >= 0) && (dreg < num_d_registers)); + DCHECK((dreg >= 0) && (dreg < num_d_registers)); registers_[dreg] = dbl[0]; registers_[dreg + 1] = dbl[1]; } void Simulator::get_d_register(int dreg, uint64_t* value) { - ASSERT((dreg >= 0) && (dreg < DwVfpRegister::NumRegisters())); + DCHECK((dreg >= 0) && (dreg < DwVfpRegister::NumRegisters())); memcpy(value, vfp_registers_ + dreg * 2, sizeof(*value)); } void Simulator::set_d_register(int dreg, const uint64_t* value) { - ASSERT((dreg >= 0) && (dreg < DwVfpRegister::NumRegisters())); + DCHECK((dreg >= 0) && (dreg < DwVfpRegister::NumRegisters())); memcpy(vfp_registers_ + dreg * 2, value, sizeof(*value)); } void Simulator::get_d_register(int dreg, uint32_t* value) { - ASSERT((dreg >= 0) && (dreg < DwVfpRegister::NumRegisters())); + DCHECK((dreg >= 0) && (dreg < DwVfpRegister::NumRegisters())); memcpy(value, vfp_registers_ + dreg * 2, sizeof(*value) * 2); } void Simulator::set_d_register(int dreg, const uint32_t* value) { - ASSERT((dreg >= 0) && (dreg < DwVfpRegister::NumRegisters())); + DCHECK((dreg >= 0) && (dreg < DwVfpRegister::NumRegisters())); memcpy(vfp_registers_ + dreg * 2, value, sizeof(*value) * 2); } void Simulator::get_q_register(int qreg, uint64_t* value) { - ASSERT((qreg >= 0) && (qreg < num_q_registers)); + DCHECK((qreg >= 0) && (qreg < num_q_registers)); memcpy(value, vfp_registers_ + qreg * 4, sizeof(*value) * 2); } void Simulator::set_q_register(int qreg, const uint64_t* value) { - ASSERT((qreg >= 0) && (qreg < num_q_registers)); + DCHECK((qreg >= 0) && (qreg < num_q_registers)); memcpy(vfp_registers_ + qreg * 4, value, sizeof(*value) * 2); } void Simulator::get_q_register(int qreg, uint32_t* value) { - ASSERT((qreg >= 0) && (qreg < num_q_registers)); + DCHECK((qreg >= 0) && (qreg < num_q_registers)); memcpy(value, vfp_registers_ + qreg * 4, sizeof(*value) * 4); } void Simulator::set_q_register(int qreg, const uint32_t* value) { - ASSERT((qreg >= 0) && (qreg < num_q_registers)); + DCHECK((qreg >= 0) && (qreg < num_q_registers)); memcpy(vfp_registers_ + qreg * 4, value, sizeof(*value) * 4); } @@ -981,41 +982,41 @@ int32_t Simulator::get_pc() const { // Getting from and setting into VFP registers.
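The OS::MemCopy to memcpy substitutions in this file are mechanical (the wrapper had amounted to memcpy on most platforms), but the byte-buffer round-trip they operate on is the portable type-punning idiom worth naming: copying raw bytes with memcpy reinterprets a register pair as a double without the undefined behavior of a pointer cast. A standalone sketch of the idiom, with a hypothetical helper name:

    #include <cstdint>
    #include <cstring>

    // Reinterpret the bit pattern of two 32-bit registers as one double.
    // Casting the array to double* would break strict aliasing; a memcpy
    // of the bytes is well defined and compiles down to a plain load.
    static double DoubleFromRegisterPair(const int32_t regs[2]) {
      double d;
      static_assert(sizeof(d) == 2 * sizeof(regs[0]), "need a 64-bit double");
      std::memcpy(&d, regs, sizeof(d));
      return d;
    }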
void Simulator::set_s_register(int sreg, unsigned int value) { - ASSERT((sreg >= 0) && (sreg < num_s_registers)); + DCHECK((sreg >= 0) && (sreg < num_s_registers)); vfp_registers_[sreg] = value; } unsigned int Simulator::get_s_register(int sreg) const { - ASSERT((sreg >= 0) && (sreg < num_s_registers)); + DCHECK((sreg >= 0) && (sreg < num_s_registers)); return vfp_registers_[sreg]; } template<class InputType, int register_size> void Simulator::SetVFPRegister(int reg_index, const InputType& value) { - ASSERT(reg_index >= 0); - if (register_size == 1) ASSERT(reg_index < num_s_registers); - if (register_size == 2) ASSERT(reg_index < DwVfpRegister::NumRegisters()); + DCHECK(reg_index >= 0); + if (register_size == 1) DCHECK(reg_index < num_s_registers); + if (register_size == 2) DCHECK(reg_index < DwVfpRegister::NumRegisters()); char buffer[register_size * sizeof(vfp_registers_[0])]; - OS::MemCopy(buffer, &value, register_size * sizeof(vfp_registers_[0])); - OS::MemCopy(&vfp_registers_[reg_index * register_size], buffer, - register_size * sizeof(vfp_registers_[0])); + memcpy(buffer, &value, register_size * sizeof(vfp_registers_[0])); + memcpy(&vfp_registers_[reg_index * register_size], buffer, + register_size * sizeof(vfp_registers_[0])); } template<class ReturnType, int register_size> ReturnType Simulator::GetFromVFPRegister(int reg_index) { - ASSERT(reg_index >= 0); - if (register_size == 1) ASSERT(reg_index < num_s_registers); - if (register_size == 2) ASSERT(reg_index < DwVfpRegister::NumRegisters()); + DCHECK(reg_index >= 0); + if (register_size == 1) DCHECK(reg_index < num_s_registers); + if (register_size == 2) DCHECK(reg_index < DwVfpRegister::NumRegisters()); ReturnType value = 0; char buffer[register_size * sizeof(vfp_registers_[0])]; - OS::MemCopy(buffer, &vfp_registers_[register_size * reg_index], - register_size * sizeof(vfp_registers_[0])); - OS::MemCopy(&value, buffer, register_size * sizeof(vfp_registers_[0])); + memcpy(buffer, &vfp_registers_[register_size * reg_index], + register_size * sizeof(vfp_registers_[0])); + memcpy(&value, buffer, register_size * sizeof(vfp_registers_[0])); return value; } @@ -1044,14 +1045,14 @@ void Simulator::GetFpArgs(double* x, double* y, int32_t* z) { void Simulator::SetFpResult(const double& result) { if (use_eabi_hardfloat()) { char buffer[2 * sizeof(vfp_registers_[0])]; - OS::MemCopy(buffer, &result, sizeof(buffer)); + memcpy(buffer, &result, sizeof(buffer)); // Copy result to d0. - OS::MemCopy(vfp_registers_, buffer, sizeof(buffer)); + memcpy(vfp_registers_, buffer, sizeof(buffer)); } else { char buffer[2 * sizeof(registers_[0])]; - OS::MemCopy(buffer, &result, sizeof(buffer)); + memcpy(buffer, &result, sizeof(buffer)); // Copy result to r0 and r1. - OS::MemCopy(registers_, buffer, sizeof(buffer)); + memcpy(registers_, buffer, sizeof(buffer)); } } @@ -1429,7 +1430,7 @@ int32_t Simulator::GetShiftRm(Instruction* instr, bool* carry_out) { *carry_out = (result & 1) == 1; result >>= 1; } else { - ASSERT(shift_amount >= 32); + DCHECK(shift_amount >= 32); if (result < 0) { *carry_out = true; result = 0xffffffff; @@ -1452,7 +1453,7 @@ int32_t Simulator::GetShiftRm(Instruction* instr, bool* carry_out) { *carry_out = (result & 1) == 1; result = 0; } else { - ASSERT(shift_amount > 32); + DCHECK(shift_amount > 32); *carry_out = false; result = 0; } @@ -1574,7 +1575,7 @@ void Simulator::HandleRList(Instruction* instr, bool load) { intptr_t* address = reinterpret_cast<intptr_t*>(start_address); // Catch null pointers a little earlier. 
- ASSERT(start_address > 8191 || start_address < 0); + DCHECK(start_address > 8191 || start_address < 0); int reg = 0; while (rlist != 0) { if ((rlist & 1) != 0) { @@ -1588,7 +1589,7 @@ void Simulator::HandleRList(Instruction* instr, bool load) { reg++; rlist >>= 1; } - ASSERT(end_address == ((intptr_t)address) - 4); + DCHECK(end_address == ((intptr_t)address) - 4); if (instr->HasW()) { set_register(instr->RnValue(), rn_val); } @@ -1635,19 +1636,19 @@ void Simulator::HandleVList(Instruction* instr) { ReadW(reinterpret_cast<int32_t>(address + 1), instr) }; double d; - OS::MemCopy(&d, data, 8); + memcpy(&d, data, 8); set_d_register_from_double(reg, d); } else { int32_t data[2]; double d = get_double_from_d_register(reg); - OS::MemCopy(data, &d, 8); + memcpy(data, &d, 8); WriteW(reinterpret_cast<int32_t>(address), data[0], instr); WriteW(reinterpret_cast<int32_t>(address + 1), data[1], instr); } address += 2; } } - ASSERT(reinterpret_cast<intptr_t>(address) - operand_size == end_address); + DCHECK(reinterpret_cast<intptr_t>(address) - operand_size == end_address); if (instr->HasW()) { set_register(instr->RnValue(), rn_val); } @@ -1852,7 +1853,7 @@ void Simulator::SoftwareInterrupt(Instruction* instr) { target(arg0, arg1, Redirection::ReverseRedirection(arg2)); } else { // builtin call. - ASSERT(redirection->type() == ExternalReference::BUILTIN_CALL); + DCHECK(redirection->type() == ExternalReference::BUILTIN_CALL); SimulatorRuntimeCall target = reinterpret_cast<SimulatorRuntimeCall>(external); if (::v8::internal::FLAG_trace_sim || !stack_aligned) { @@ -1928,13 +1929,13 @@ bool Simulator::isStopInstruction(Instruction* instr) { bool Simulator::isWatchedStop(uint32_t code) { - ASSERT(code <= kMaxStopCode); + DCHECK(code <= kMaxStopCode); return code < kNumOfWatchedStops; } bool Simulator::isEnabledStop(uint32_t code) { - ASSERT(code <= kMaxStopCode); + DCHECK(code <= kMaxStopCode); // Unwatched stops are always enabled. return !isWatchedStop(code) || !(watched_stops_[code].count & kStopDisabledBit); @@ -1942,7 +1943,7 @@ bool Simulator::isEnabledStop(uint32_t code) { void Simulator::EnableStop(uint32_t code) { - ASSERT(isWatchedStop(code)); + DCHECK(isWatchedStop(code)); if (!isEnabledStop(code)) { watched_stops_[code].count &= ~kStopDisabledBit; } @@ -1950,7 +1951,7 @@ void Simulator::EnableStop(uint32_t code) { void Simulator::DisableStop(uint32_t code) { - ASSERT(isWatchedStop(code)); + DCHECK(isWatchedStop(code)); if (isEnabledStop(code)) { watched_stops_[code].count |= kStopDisabledBit; } @@ -1958,8 +1959,8 @@ void Simulator::DisableStop(uint32_t code) { void Simulator::IncreaseStopCounter(uint32_t code) { - ASSERT(code <= kMaxStopCode); - ASSERT(isWatchedStop(code)); + DCHECK(code <= kMaxStopCode); + DCHECK(isWatchedStop(code)); if ((watched_stops_[code].count & ~(1 << 31)) == 0x7fffffff) { PrintF("Stop counter for code %i has overflowed.\n" "Enabling this code and reseting the counter to 0.\n", code); @@ -1973,7 +1974,7 @@ void Simulator::IncreaseStopCounter(uint32_t code) { // Print a stop status. 
void Simulator::PrintStopInfo(uint32_t code) { - ASSERT(code <= kMaxStopCode); + DCHECK(code <= kMaxStopCode); if (!isWatchedStop(code)) { PrintF("Stop not watched."); } else { @@ -2091,7 +2092,7 @@ void Simulator::DecodeType01(Instruction* instr) { switch (instr->PUField()) { case da_x: { // Format(instr, "'memop'cond'sign'h 'rd, ['rn], -'rm"); - ASSERT(!instr->HasW()); + DCHECK(!instr->HasW()); addr = rn_val; rn_val -= rm_val; set_register(rn, rn_val); @@ -2099,7 +2100,7 @@ void Simulator::DecodeType01(Instruction* instr) { } case ia_x: { // Format(instr, "'memop'cond'sign'h 'rd, ['rn], +'rm"); - ASSERT(!instr->HasW()); + DCHECK(!instr->HasW()); addr = rn_val; rn_val += rm_val; set_register(rn, rn_val); @@ -2134,7 +2135,7 @@ void Simulator::DecodeType01(Instruction* instr) { switch (instr->PUField()) { case da_x: { // Format(instr, "'memop'cond'sign'h 'rd, ['rn], #-'off8"); - ASSERT(!instr->HasW()); + DCHECK(!instr->HasW()); addr = rn_val; rn_val -= imm_val; set_register(rn, rn_val); @@ -2142,7 +2143,7 @@ void Simulator::DecodeType01(Instruction* instr) { } case ia_x: { // Format(instr, "'memop'cond'sign'h 'rd, ['rn], #+'off8"); - ASSERT(!instr->HasW()); + DCHECK(!instr->HasW()); addr = rn_val; rn_val += imm_val; set_register(rn, rn_val); @@ -2174,7 +2175,7 @@ void Simulator::DecodeType01(Instruction* instr) { } } if (((instr->Bits(7, 4) & 0xd) == 0xd) && (instr->Bit(20) == 0)) { - ASSERT((rd % 2) == 0); + DCHECK((rd % 2) == 0); if (instr->HasH()) { // The strd instruction. int32_t value1 = get_register(rd); @@ -2205,8 +2206,8 @@ void Simulator::DecodeType01(Instruction* instr) { } } else { // signed byte loads - ASSERT(instr->HasSign()); - ASSERT(instr->HasL()); + DCHECK(instr->HasSign()); + DCHECK(instr->HasL()); int8_t val = ReadB(addr); set_register(rd, val); } @@ -2270,7 +2271,7 @@ void Simulator::DecodeType01(Instruction* instr) { if (type == 0) { shifter_operand = GetShiftRm(instr, &shifter_carry_out); } else { - ASSERT(instr->TypeValue() == 1); + DCHECK(instr->TypeValue() == 1); shifter_operand = GetImm(instr, &shifter_carry_out); } int32_t alu_out; @@ -2493,7 +2494,7 @@ void Simulator::DecodeType2(Instruction* instr) { switch (instr->PUField()) { case da_x: { // Format(instr, "'memop'cond'b 'rd, ['rn], #-'off12"); - ASSERT(!instr->HasW()); + DCHECK(!instr->HasW()); addr = rn_val; rn_val -= im_val; set_register(rn, rn_val); @@ -2501,7 +2502,7 @@ void Simulator::DecodeType2(Instruction* instr) { } case ia_x: { // Format(instr, "'memop'cond'b 'rd, ['rn], #+'off12"); - ASSERT(!instr->HasW()); + DCHECK(!instr->HasW()); addr = rn_val; rn_val += im_val; set_register(rn, rn_val); @@ -2557,7 +2558,7 @@ void Simulator::DecodeType3(Instruction* instr) { int32_t addr = 0; switch (instr->PUField()) { case da_x: { - ASSERT(!instr->HasW()); + DCHECK(!instr->HasW()); Format(instr, "'memop'cond'b 'rd, ['rn], -'shift_rm"); UNIMPLEMENTED(); break; @@ -2710,28 +2711,30 @@ void Simulator::DecodeType3(Instruction* instr) { } case db_x: { if (FLAG_enable_sudiv) { - if (!instr->HasW()) { - if (instr->Bits(5, 4) == 0x1) { - if ((instr->Bit(22) == 0x0) && (instr->Bit(20) == 0x1)) { - // sdiv (in V8 notation matching ARM ISA format) rn = rm/rs - // Format(instr, "'sdiv'cond'b 'rn, 'rm, 'rs); - int rm = instr->RmValue(); - int32_t rm_val = get_register(rm); - int rs = instr->RsValue(); - int32_t rs_val = get_register(rs); - int32_t ret_val = 0; - ASSERT(rs_val != 0); - if ((rm_val == kMinInt) && (rs_val == -1)) { - ret_val = kMinInt; - } else { - ret_val = rm_val / rs_val; - } - set_register(rn, ret_val); 
- return; - } - } - } - } + if (instr->Bits(5, 4) == 0x1) { + if ((instr->Bit(22) == 0x0) && (instr->Bit(20) == 0x1)) { + // (s/u)div (in V8 notation matching ARM ISA format) rn = rm/rs + // Format(instr, "'(s/u)div'cond'b 'rn, 'rm, 'rs); + int rm = instr->RmValue(); + int32_t rm_val = get_register(rm); + int rs = instr->RsValue(); + int32_t rs_val = get_register(rs); + int32_t ret_val = 0; + DCHECK(rs_val != 0); + // udiv + if (instr->Bit(21) == 0x1) { + ret_val = static_cast<int32_t>(static_cast<uint32_t>(rm_val) / + static_cast<uint32_t>(rs_val)); + } else if ((rm_val == kMinInt) && (rs_val == -1)) { + ret_val = kMinInt; + } else { + ret_val = rm_val / rs_val; + } + set_register(rn, ret_val); + return; + } + } + } // Format(instr, "'memop'cond'b 'rd, ['rn, -'shift_rm]'w"); addr = rn_val - shifter_operand; if (instr->HasW()) { @@ -2771,7 +2774,7 @@ void Simulator::DecodeType3(Instruction* instr) { uint32_t rd_val = static_cast<uint32_t>(get_register(instr->RdValue())); uint32_t bitcount = msbit - lsbit + 1; - uint32_t mask = (1 << bitcount) - 1; + uint32_t mask = 0xffffffffu >> (32 - bitcount); rd_val &= ~(mask << lsbit); if (instr->RmValue() != 15) { // bfi - bitfield insert. @@ -2818,7 +2821,7 @@ void Simulator::DecodeType3(Instruction* instr) { void Simulator::DecodeType4(Instruction* instr) { - ASSERT(instr->Bit(22) == 0); // only allowed to be set in privileged mode + DCHECK(instr->Bit(22) == 0); // only allowed to be set in privileged mode if (instr->HasL()) { // Format(instr, "ldm'cond'pu 'rn'w, 'rlist"); HandleRList(instr, true); @@ -2872,8 +2875,8 @@ void Simulator::DecodeType7(Instruction* instr) { // vmrs // Dd = vsqrt(Dm) void Simulator::DecodeTypeVFP(Instruction* instr) { - ASSERT((instr->TypeValue() == 7) && (instr->Bit(24) == 0x0) ); - ASSERT(instr->Bits(11, 9) == 0x5); + DCHECK((instr->TypeValue() == 7) && (instr->Bit(24) == 0x0) ); + DCHECK(instr->Bits(11, 9) == 0x5); // Obtain double precision register codes. 
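Two of the rewritten DecodeType3 pieces above carry real semantics rather than renames. First, the division block now decodes udiv as well as sdiv: bit 21 selects the unsigned form, and the signed form keeps ARM's kMinInt / -1 == kMinInt saturation. Second, the bitfield mask for bfi/ubfx is now built as 0xffffffffu >> (32 - bitcount) because (1 << bitcount) - 1 is undefined behavior in C++ when bitcount reaches 32 (msbit 31, lsbit 0). A small self-contained sketch of both points, with hypothetical helper names:

    #include <cstdint>
    #include <cstdio>

    // sdiv/udiv result selection as the simulator implements it: udiv
    // reinterprets both operands as unsigned, sdiv saturates INT32_MIN / -1.
    static int32_t SimulatedDiv(int32_t rm_val, int32_t rs_val, bool is_udiv) {
      if (rs_val == 0) return 0;  // hardware yields 0; the simulator DCHECKs
      if (is_udiv) {
        return static_cast<int32_t>(static_cast<uint32_t>(rm_val) /
                                    static_cast<uint32_t>(rs_val));
      }
      if (rm_val == INT32_MIN && rs_val == -1) return INT32_MIN;
      return rm_val / rs_val;
    }

    // Low-bit mask that stays well defined for bitcount in 1..32;
    // (1 << 32) - 1 would shift an int by its full width, which is UB.
    static uint32_t LowBitMask(uint32_t bitcount) {
      return 0xffffffffu >> (32 - bitcount);
    }

    int main() {
      std::printf("%d\n", SimulatedDiv(INT32_MIN, -1, false));  // -2147483648
      std::printf("%08x\n", LowBitMask(32));                    // ffffffff
      return 0;
    }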
int vm = instr->VFPMRegValue(kDoublePrecision); @@ -3020,9 +3023,9 @@ void Simulator::DecodeTypeVFP(Instruction* instr) { int vd = instr->Bits(19, 16) | (instr->Bit(7) << 4); double dd_value = get_double_from_d_register(vd); int32_t data[2]; - OS::MemCopy(data, &dd_value, 8); + memcpy(data, &dd_value, 8); data[instr->Bit(21)] = get_register(instr->RtValue()); - OS::MemCopy(&dd_value, data, 8); + memcpy(&dd_value, data, 8); set_d_register_from_double(vd, dd_value); } else if ((instr->VLValue() == 0x1) && (instr->VCValue() == 0x1) && @@ -3031,7 +3034,7 @@ void Simulator::DecodeTypeVFP(Instruction* instr) { int vn = instr->Bits(19, 16) | (instr->Bit(7) << 4); double dn_value = get_double_from_d_register(vn); int32_t data[2]; - OS::MemCopy(data, &dn_value, 8); + memcpy(data, &dn_value, 8); set_register(instr->RtValue(), data[instr->Bit(21)]); } else if ((instr->VLValue() == 0x1) && (instr->VCValue() == 0x0) && @@ -3088,7 +3091,7 @@ void Simulator::DecodeTypeVFP(Instruction* instr) { void Simulator::DecodeVMOVBetweenCoreAndSinglePrecisionRegisters( Instruction* instr) { - ASSERT((instr->Bit(4) == 1) && (instr->VCValue() == 0x0) && + DCHECK((instr->Bit(4) == 1) && (instr->VCValue() == 0x0) && (instr->VAValue() == 0x0)); int t = instr->RtValue(); @@ -3106,8 +3109,8 @@ void Simulator::DecodeVMOVBetweenCoreAndSinglePrecisionRegisters( void Simulator::DecodeVCMP(Instruction* instr) { - ASSERT((instr->Bit(4) == 0) && (instr->Opc1Value() == 0x7)); - ASSERT(((instr->Opc2Value() == 0x4) || (instr->Opc2Value() == 0x5)) && + DCHECK((instr->Bit(4) == 0) && (instr->Opc1Value() == 0x7)); + DCHECK(((instr->Opc2Value() == 0x4) || (instr->Opc2Value() == 0x5)) && (instr->Opc3Value() & 0x1)); // Comparison. @@ -3144,8 +3147,8 @@ void Simulator::DecodeVCMP(Instruction* instr) { void Simulator::DecodeVCVTBetweenDoubleAndSingle(Instruction* instr) { - ASSERT((instr->Bit(4) == 0) && (instr->Opc1Value() == 0x7)); - ASSERT((instr->Opc2Value() == 0x7) && (instr->Opc3Value() == 0x3)); + DCHECK((instr->Bit(4) == 0) && (instr->Opc1Value() == 0x7)); + DCHECK((instr->Opc2Value() == 0x7) && (instr->Opc3Value() == 0x3)); VFPRegPrecision dst_precision = kDoublePrecision; VFPRegPrecision src_precision = kSinglePrecision; @@ -3169,7 +3172,7 @@ void Simulator::DecodeVCVTBetweenDoubleAndSingle(Instruction* instr) { bool get_inv_op_vfp_flag(VFPRoundingMode mode, double val, bool unsigned_) { - ASSERT((mode == RN) || (mode == RM) || (mode == RZ)); + DCHECK((mode == RN) || (mode == RM) || (mode == RZ)); double max_uint = static_cast<double>(0xffffffffu); double max_int = static_cast<double>(kMaxInt); double min_int = static_cast<double>(kMinInt); @@ -3222,9 +3225,9 @@ int VFPConversionSaturate(double val, bool unsigned_res) { void Simulator::DecodeVCVTBetweenFloatingPointAndInteger(Instruction* instr) { - ASSERT((instr->Bit(4) == 0) && (instr->Opc1Value() == 0x7) && + DCHECK((instr->Bit(4) == 0) && (instr->Opc1Value() == 0x7) && (instr->Bits(27, 23) == 0x1D)); - ASSERT(((instr->Opc2Value() == 0x8) && (instr->Opc3Value() & 0x1)) || + DCHECK(((instr->Opc2Value() == 0x8) && (instr->Opc3Value() & 0x1)) || (((instr->Opc2Value() >> 1) == 0x6) && (instr->Opc3Value() & 0x1))); // Conversion between floating-point and integer. @@ -3248,7 +3251,7 @@ void Simulator::DecodeVCVTBetweenFloatingPointAndInteger(Instruction* instr) { // mode or the default Round to Zero mode. VFPRoundingMode mode = (instr->Bit(7) != 1) ? 
FPSCR_rounding_mode_ : RZ; - ASSERT((mode == RM) || (mode == RZ) || (mode == RN)); + DCHECK((mode == RM) || (mode == RZ) || (mode == RN)); bool unsigned_integer = (instr->Bit(16) == 0); bool double_precision = (src_precision == kDoublePrecision); @@ -3332,7 +3335,7 @@ void Simulator::DecodeVCVTBetweenFloatingPointAndInteger(Instruction* instr) { // Ddst = MEM(Rbase + 4*offset). // MEM(Rbase + 4*offset) = Dsrc. void Simulator::DecodeType6CoprocessorIns(Instruction* instr) { - ASSERT((instr->TypeValue() == 6)); + DCHECK((instr->TypeValue() == 6)); if (instr->CoprocessorValue() == 0xA) { switch (instr->OpcodeValue()) { @@ -3382,13 +3385,13 @@ void Simulator::DecodeType6CoprocessorIns(Instruction* instr) { if (instr->HasL()) { int32_t data[2]; double d = get_double_from_d_register(vm); - OS::MemCopy(data, &d, 8); + memcpy(data, &d, 8); set_register(rt, data[0]); set_register(rn, data[1]); } else { int32_t data[] = { get_register(rt), get_register(rn) }; double d; - OS::MemCopy(&d, data, 8); + memcpy(&d, data, 8); set_d_register_from_double(vm, d); } } @@ -3411,13 +3414,13 @@ void Simulator::DecodeType6CoprocessorIns(Instruction* instr) { ReadW(address + 4, instr) }; double val; - OS::MemCopy(&val, data, 8); + memcpy(&val, data, 8); set_d_register_from_double(vd, val); } else { // Store double to memory: vstr. int32_t data[2]; double val = get_double_from_d_register(vd); - OS::MemCopy(data, &val, 8); + memcpy(data, &val, 8); WriteW(address, data[0], instr); WriteW(address + 4, data[1], instr); } @@ -3753,7 +3756,7 @@ int32_t Simulator::Call(byte* entry, int argument_count, ...) { // Set up arguments // First four arguments passed in registers. - ASSERT(argument_count >= 4); + DCHECK(argument_count >= 4); set_register(r0, va_arg(parameters, int32_t)); set_register(r1, va_arg(parameters, int32_t)); set_register(r2, va_arg(parameters, int32_t)); @@ -3763,8 +3766,8 @@ int32_t Simulator::Call(byte* entry, int argument_count, ...) { int original_stack = get_register(sp); // Compute position of stack on entry to generated code. int entry_stack = (original_stack - (argument_count - 4) * sizeof(int32_t)); - if (OS::ActivationFrameAlignment() != 0) { - entry_stack &= -OS::ActivationFrameAlignment(); + if (base::OS::ActivationFrameAlignment() != 0) { + entry_stack &= -base::OS::ActivationFrameAlignment(); } // Store remaining arguments on stack, from low to high memory. intptr_t* stack_argument = reinterpret_cast<intptr_t*>(entry_stack); diff --git a/deps/v8/src/arm/simulator-arm.h b/deps/v8/src/arm/simulator-arm.h index bbe87bcbe27..76865bcf2ac 100644 --- a/deps/v8/src/arm/simulator-arm.h +++ b/deps/v8/src/arm/simulator-arm.h @@ -13,7 +13,7 @@ #ifndef V8_ARM_SIMULATOR_ARM_H_ #define V8_ARM_SIMULATOR_ARM_H_ -#include "allocation.h" +#include "src/allocation.h" #if !defined(USE_SIMULATOR) // Running without a simulator on a native arm platform. @@ -37,9 +37,6 @@ typedef int (*arm_regexp_matcher)(String*, int, const byte*, const byte*, (FUNCTION_CAST<arm_regexp_matcher>(entry)( \ p0, p1, p2, p3, NULL, p4, p5, p6, p7, p8)) -#define TRY_CATCH_FROM_ADDRESS(try_catch_address) \ - reinterpret_cast<TryCatch*>(try_catch_address) - // The stack limit beyond which we will throw stack overflow errors in // generated code. Because generated code on arm uses the C stack, we // just use the C stack limit. @@ -63,9 +60,9 @@ class SimulatorStack : public v8::internal::AllStatic { #else // !defined(USE_SIMULATOR) // Running with a simulator. 
-#include "constants-arm.h" -#include "hashmap.h" -#include "assembler.h" +#include "src/arm/constants-arm.h" +#include "src/assembler.h" +#include "src/hashmap.h" namespace v8 { namespace internal { @@ -265,7 +262,7 @@ class Simulator { inline int GetCarry() { return c_flag_ ? 1 : 0; - }; + } // Support for VFP. void Compute_FPSCR_Flags(double val1, double val2); @@ -436,10 +433,6 @@ class Simulator { Simulator::current(Isolate::Current())->Call( \ entry, 10, p0, p1, p2, p3, NULL, p4, p5, p6, p7, p8) -#define TRY_CATCH_FROM_ADDRESS(try_catch_address) \ - try_catch_address == NULL ? \ - NULL : *(reinterpret_cast<TryCatch**>(try_catch_address)) - // The simulator has its own stack. Thus it has a different stack limit from // the C-based native code. Setting the c_limit to indicate a very small diff --git a/deps/v8/src/arm/stub-cache-arm.cc b/deps/v8/src/arm/stub-cache-arm.cc index fd53b9782d6..38f391a3364 100644 --- a/deps/v8/src/arm/stub-cache-arm.cc +++ b/deps/v8/src/arm/stub-cache-arm.cc @@ -2,13 +2,13 @@ // Use of this source code is governed by a BSD-style license that can be // found in the LICENSE file. -#include "v8.h" +#include "src/v8.h" #if V8_TARGET_ARCH_ARM -#include "ic-inl.h" -#include "codegen.h" -#include "stub-cache.h" +#include "src/codegen.h" +#include "src/ic-inl.h" +#include "src/stub-cache.h" namespace v8 { namespace internal { @@ -36,12 +36,12 @@ static void ProbeTable(Isolate* isolate, uint32_t map_off_addr = reinterpret_cast<uint32_t>(map_offset.address()); // Check the relative positions of the address fields. - ASSERT(value_off_addr > key_off_addr); - ASSERT((value_off_addr - key_off_addr) % 4 == 0); - ASSERT((value_off_addr - key_off_addr) < (256 * 4)); - ASSERT(map_off_addr > key_off_addr); - ASSERT((map_off_addr - key_off_addr) % 4 == 0); - ASSERT((map_off_addr - key_off_addr) < (256 * 4)); + DCHECK(value_off_addr > key_off_addr); + DCHECK((value_off_addr - key_off_addr) % 4 == 0); + DCHECK((value_off_addr - key_off_addr) < (256 * 4)); + DCHECK(map_off_addr > key_off_addr); + DCHECK((map_off_addr - key_off_addr) % 4 == 0); + DCHECK((map_off_addr - key_off_addr) < (256 * 4)); Label miss; Register base_addr = scratch; @@ -77,7 +77,7 @@ static void ProbeTable(Isolate* isolate, // It's a nice optimization if this constant is encodable in the bic insn. uint32_t mask = Code::kFlagsNotUsedInLookup; - ASSERT(__ ImmediateFitsAddrMode1Instruction(mask)); + DCHECK(__ ImmediateFitsAddrMode1Instruction(mask)); __ bic(flags_reg, flags_reg, Operand(mask)); __ cmp(flags_reg, Operand(flags)); __ b(ne, &miss); @@ -98,14 +98,11 @@ static void ProbeTable(Isolate* isolate, } -void StubCompiler::GenerateDictionaryNegativeLookup(MacroAssembler* masm, - Label* miss_label, - Register receiver, - Handle<Name> name, - Register scratch0, - Register scratch1) { - ASSERT(name->IsUniqueName()); - ASSERT(!receiver.is(scratch0)); +void PropertyHandlerCompiler::GenerateDictionaryNegativeLookup( + MacroAssembler* masm, Label* miss_label, Register receiver, + Handle<Name> name, Register scratch0, Register scratch1) { + DCHECK(name->IsUniqueName()); + DCHECK(!receiver.is(scratch0)); Counters* counters = masm->isolate()->counters(); __ IncrementCounter(counters->negative_lookups(), 1, scratch0, scratch1); __ IncrementCounter(counters->negative_lookups_miss(), 1, scratch0, scratch1); @@ -166,27 +163,27 @@ void StubCache::GenerateProbe(MacroAssembler* masm, // Make sure that code is valid. The multiplying code relies on the // entry size being 12. 
- ASSERT(sizeof(Entry) == 12); + DCHECK(sizeof(Entry) == 12); // Make sure the flags does not name a specific type. - ASSERT(Code::ExtractTypeFromFlags(flags) == 0); + DCHECK(Code::ExtractTypeFromFlags(flags) == 0); // Make sure that there are no register conflicts. - ASSERT(!scratch.is(receiver)); - ASSERT(!scratch.is(name)); - ASSERT(!extra.is(receiver)); - ASSERT(!extra.is(name)); - ASSERT(!extra.is(scratch)); - ASSERT(!extra2.is(receiver)); - ASSERT(!extra2.is(name)); - ASSERT(!extra2.is(scratch)); - ASSERT(!extra2.is(extra)); + DCHECK(!scratch.is(receiver)); + DCHECK(!scratch.is(name)); + DCHECK(!extra.is(receiver)); + DCHECK(!extra.is(name)); + DCHECK(!extra.is(scratch)); + DCHECK(!extra2.is(receiver)); + DCHECK(!extra2.is(name)); + DCHECK(!extra2.is(scratch)); + DCHECK(!extra2.is(extra)); // Check scratch, extra and extra2 registers are valid. - ASSERT(!scratch.is(no_reg)); - ASSERT(!extra.is(no_reg)); - ASSERT(!extra2.is(no_reg)); - ASSERT(!extra3.is(no_reg)); + DCHECK(!scratch.is(no_reg)); + DCHECK(!extra.is(no_reg)); + DCHECK(!extra2.is(no_reg)); + DCHECK(!extra3.is(no_reg)); Counters* counters = masm->isolate()->counters(); __ IncrementCounter(counters->megamorphic_stub_cache_probes(), 1, @@ -202,10 +199,10 @@ void StubCache::GenerateProbe(MacroAssembler* masm, uint32_t mask = kPrimaryTableSize - 1; // We shift out the last two bits because they are not part of the hash and // they are always 01 for maps. - __ mov(scratch, Operand(scratch, LSR, kHeapObjectTagSize)); + __ mov(scratch, Operand(scratch, LSR, kCacheIndexShift)); // Mask down the eor argument to the minimum to keep the immediate // ARM-encodable. - __ eor(scratch, scratch, Operand((flags >> kHeapObjectTagSize) & mask)); + __ eor(scratch, scratch, Operand((flags >> kCacheIndexShift) & mask)); // Prefer and_ to ubfx here because ubfx takes 2 cycles. __ and_(scratch, scratch, Operand(mask)); @@ -222,9 +219,9 @@ void StubCache::GenerateProbe(MacroAssembler* masm, extra3); // Primary miss: Compute hash for secondary probe. - __ sub(scratch, scratch, Operand(name, LSR, kHeapObjectTagSize)); + __ sub(scratch, scratch, Operand(name, LSR, kCacheIndexShift)); uint32_t mask2 = kSecondaryTableSize - 1; - __ add(scratch, scratch, Operand((flags >> kHeapObjectTagSize) & mask2)); + __ add(scratch, scratch, Operand((flags >> kCacheIndexShift) & mask2)); __ and_(scratch, scratch, Operand(mask2)); // Probe the secondary table. @@ -247,30 +244,8 @@ void StubCache::GenerateProbe(MacroAssembler* masm, } -void StubCompiler::GenerateLoadGlobalFunctionPrototype(MacroAssembler* masm, - int index, - Register prototype) { - // Load the global or builtins object from the current context. - __ ldr(prototype, - MemOperand(cp, Context::SlotOffset(Context::GLOBAL_OBJECT_INDEX))); - // Load the native context from the global or builtins object. - __ ldr(prototype, - FieldMemOperand(prototype, GlobalObject::kNativeContextOffset)); - // Load the function from the native context. - __ ldr(prototype, MemOperand(prototype, Context::SlotOffset(index))); - // Load the initial map. The global functions all have initial maps. - __ ldr(prototype, - FieldMemOperand(prototype, JSFunction::kPrototypeOrInitialMapOffset)); - // Load the prototype from the initial map. 
- __ ldr(prototype, FieldMemOperand(prototype, Map::kPrototypeOffset)); -} - - -void StubCompiler::GenerateDirectLoadGlobalFunctionPrototype( - MacroAssembler* masm, - int index, - Register prototype, - Label* miss) { +void NamedLoadHandlerCompiler::GenerateDirectLoadGlobalFunctionPrototype( + MacroAssembler* masm, int index, Register prototype, Label* miss) { Isolate* isolate = masm->isolate(); // Get the global function with the given index. Handle<JSFunction> function( @@ -293,46 +268,9 @@ void StubCompiler::GenerateDirectLoadGlobalFunctionPrototype( } -void StubCompiler::GenerateFastPropertyLoad(MacroAssembler* masm, - Register dst, - Register src, - bool inobject, - int index, - Representation representation) { - ASSERT(!representation.IsDouble()); - int offset = index * kPointerSize; - if (!inobject) { - // Calculate the offset into the properties array. - offset = offset + FixedArray::kHeaderSize; - __ ldr(dst, FieldMemOperand(src, JSObject::kPropertiesOffset)); - src = dst; - } - __ ldr(dst, FieldMemOperand(src, offset)); -} - - -void StubCompiler::GenerateLoadArrayLength(MacroAssembler* masm, - Register receiver, - Register scratch, - Label* miss_label) { - // Check that the receiver isn't a smi. - __ JumpIfSmi(receiver, miss_label); - - // Check that the object is a JS array. - __ CompareObjectType(receiver, scratch, scratch, JS_ARRAY_TYPE); - __ b(ne, miss_label); - - // Load length directly from the JS array. - __ ldr(r0, FieldMemOperand(receiver, JSArray::kLengthOffset)); - __ Ret(); -} - - -void StubCompiler::GenerateLoadFunctionPrototype(MacroAssembler* masm, - Register receiver, - Register scratch1, - Register scratch2, - Label* miss_label) { +void NamedLoadHandlerCompiler::GenerateLoadFunctionPrototype( + MacroAssembler* masm, Register receiver, Register scratch1, + Register scratch2, Label* miss_label) { __ TryGetFunctionPrototype(receiver, scratch1, scratch2, miss_label); __ mov(r0, scratch1); __ Ret(); @@ -342,13 +280,11 @@ void StubCompiler::GenerateLoadFunctionPrototype(MacroAssembler* masm, // Generate code to check that a global property cell is empty. Create // the property cell at compilation time if no cell exists for the // property. 
-void StubCompiler::GenerateCheckPropertyCell(MacroAssembler* masm, - Handle<JSGlobalObject> global, - Handle<Name> name, - Register scratch, - Label* miss) { +void PropertyHandlerCompiler::GenerateCheckPropertyCell( + MacroAssembler* masm, Handle<JSGlobalObject> global, Handle<Name> name, + Register scratch, Label* miss) { Handle<Cell> cell = JSGlobalObject::EnsurePropertyCell(global, name); - ASSERT(cell->value()->IsTheHole()); + DCHECK(cell->value()->IsTheHole()); __ mov(scratch, Operand(cell)); __ ldr(scratch, FieldMemOperand(scratch, Cell::kValueOffset)); __ LoadRoot(ip, Heap::kTheHoleValueRootIndex); @@ -357,18 +293,120 @@ void StubCompiler::GenerateCheckPropertyCell(MacroAssembler* masm, } -void StoreStubCompiler::GenerateNegativeHolderLookup( - MacroAssembler* masm, - Handle<JSObject> holder, - Register holder_reg, - Handle<Name> name, - Label* miss) { - if (holder->IsJSGlobalObject()) { - GenerateCheckPropertyCell( - masm, Handle<JSGlobalObject>::cast(holder), name, scratch1(), miss); - } else if (!holder->HasFastProperties() && !holder->IsJSGlobalProxy()) { - GenerateDictionaryNegativeLookup( - masm, miss, holder_reg, name, scratch1(), scratch2()); +static void PushInterceptorArguments(MacroAssembler* masm, Register receiver, + Register holder, Register name, + Handle<JSObject> holder_obj) { + STATIC_ASSERT(NamedLoadHandlerCompiler::kInterceptorArgsNameIndex == 0); + STATIC_ASSERT(NamedLoadHandlerCompiler::kInterceptorArgsInfoIndex == 1); + STATIC_ASSERT(NamedLoadHandlerCompiler::kInterceptorArgsThisIndex == 2); + STATIC_ASSERT(NamedLoadHandlerCompiler::kInterceptorArgsHolderIndex == 3); + STATIC_ASSERT(NamedLoadHandlerCompiler::kInterceptorArgsLength == 4); + __ push(name); + Handle<InterceptorInfo> interceptor(holder_obj->GetNamedInterceptor()); + DCHECK(!masm->isolate()->heap()->InNewSpace(*interceptor)); + Register scratch = name; + __ mov(scratch, Operand(interceptor)); + __ push(scratch); + __ push(receiver); + __ push(holder); +} + + +static void CompileCallLoadPropertyWithInterceptor( + MacroAssembler* masm, Register receiver, Register holder, Register name, + Handle<JSObject> holder_obj, IC::UtilityId id) { + PushInterceptorArguments(masm, receiver, holder, name, holder_obj); + __ CallExternalReference(ExternalReference(IC_Utility(id), masm->isolate()), + NamedLoadHandlerCompiler::kInterceptorArgsLength); +} + + +// Generate call to api function. +void PropertyHandlerCompiler::GenerateFastApiCall( + MacroAssembler* masm, const CallOptimization& optimization, + Handle<Map> receiver_map, Register receiver, Register scratch_in, + bool is_store, int argc, Register* values) { + DCHECK(!receiver.is(scratch_in)); + __ push(receiver); + // Write the arguments to stack frame. + for (int i = 0; i < argc; i++) { + Register arg = values[argc - 1 - i]; + DCHECK(!receiver.is(arg)); + DCHECK(!scratch_in.is(arg)); + __ push(arg); + } + DCHECK(optimization.is_simple_api_call()); + + // Abi for CallApiFunctionStub. + Register callee = r0; + Register call_data = r4; + Register holder = r2; + Register api_function_address = r1; + + // Put holder in place. 
+ CallOptimization::HolderLookup holder_lookup; + Handle<JSObject> api_holder = + optimization.LookupHolderOfExpectedType(receiver_map, &holder_lookup); + switch (holder_lookup) { + case CallOptimization::kHolderIsReceiver: + __ Move(holder, receiver); + break; + case CallOptimization::kHolderFound: + __ Move(holder, api_holder); + break; + case CallOptimization::kHolderNotFound: + UNREACHABLE(); + break; + } + + Isolate* isolate = masm->isolate(); + Handle<JSFunction> function = optimization.constant_function(); + Handle<CallHandlerInfo> api_call_info = optimization.api_call_info(); + Handle<Object> call_data_obj(api_call_info->data(), isolate); + + // Put callee in place. + __ Move(callee, function); + + bool call_data_undefined = false; + // Put call_data in place. + if (isolate->heap()->InNewSpace(*call_data_obj)) { + __ Move(call_data, api_call_info); + __ ldr(call_data, FieldMemOperand(call_data, CallHandlerInfo::kDataOffset)); + } else if (call_data_obj->IsUndefined()) { + call_data_undefined = true; + __ LoadRoot(call_data, Heap::kUndefinedValueRootIndex); + } else { + __ Move(call_data, call_data_obj); + } + + // Put api_function_address in place. + Address function_address = v8::ToCData<Address>(api_call_info->callback()); + ApiFunction fun(function_address); + ExternalReference::Type type = ExternalReference::DIRECT_API_CALL; + ExternalReference ref = ExternalReference(&fun, type, masm->isolate()); + __ mov(api_function_address, Operand(ref)); + + // Jump to stub. + CallApiFunctionStub stub(isolate, is_store, call_data_undefined, argc); + __ TailCallStub(&stub); +} + + +void PropertyAccessCompiler::GenerateTailCall(MacroAssembler* masm, + Handle<Code> code) { + __ Jump(code, RelocInfo::CODE_TARGET); +} + + +#undef __ +#define __ ACCESS_MASM(masm()) + + +void NamedStoreHandlerCompiler::GenerateRestoreName(Label* label, + Handle<Name> name) { + if (!label->is_unused()) { + __ bind(label); + __ mov(this->name(), Operand(name)); } } @@ -377,19 +415,10 @@ void StoreStubCompiler::GenerateNegativeHolderLookup( // When leaving generated code after success, the receiver_reg and name_reg // may be clobbered. Upon branch to miss_label, the receiver and name // registers have their original values. 
-void StoreStubCompiler::GenerateStoreTransition(MacroAssembler* masm, - Handle<JSObject> object, - LookupResult* lookup, - Handle<Map> transition, - Handle<Name> name, - Register receiver_reg, - Register storage_reg, - Register value_reg, - Register scratch1, - Register scratch2, - Register scratch3, - Label* miss_label, - Label* slow) { +void NamedStoreHandlerCompiler::GenerateStoreTransition( + Handle<Map> transition, Handle<Name> name, Register receiver_reg, + Register storage_reg, Register value_reg, Register scratch1, + Register scratch2, Register scratch3, Label* miss_label, Label* slow) { // r0 : value Label exit; @@ -397,10 +426,10 @@ void StoreStubCompiler::GenerateStoreTransition(MacroAssembler* masm, DescriptorArray* descriptors = transition->instance_descriptors(); PropertyDetails details = descriptors->GetDetails(descriptor); Representation representation = details.representation(); - ASSERT(!representation.IsNone()); + DCHECK(!representation.IsNone()); if (details.type() == CONSTANT) { - Handle<Object> constant(descriptors->GetValue(descriptor), masm->isolate()); + Handle<Object> constant(descriptors->GetValue(descriptor), isolate()); __ Move(scratch1, constant); __ cmp(value_reg, scratch1); __ b(ne, miss_label); @@ -426,8 +455,9 @@ void StoreStubCompiler::GenerateStoreTransition(MacroAssembler* masm, } } else if (representation.IsDouble()) { Label do_store, heap_number; - __ LoadRoot(scratch3, Heap::kHeapNumberMapRootIndex); - __ AllocateHeapNumber(storage_reg, scratch1, scratch2, scratch3, slow); + __ LoadRoot(scratch3, Heap::kMutableHeapNumberMapRootIndex); + __ AllocateHeapNumber(storage_reg, scratch1, scratch2, scratch3, slow, + TAG_RESULT, MUTABLE); __ JumpIfNotSmi(value_reg, &heap_number); __ SmiUntag(scratch1, value_reg); @@ -444,13 +474,12 @@ void StoreStubCompiler::GenerateStoreTransition(MacroAssembler* masm, __ vstr(d0, FieldMemOperand(storage_reg, HeapNumber::kValueOffset)); } - // Stub never generated for non-global objects that require access - // checks. - ASSERT(object->IsJSGlobalProxy() || !object->IsAccessCheckNeeded()); + // Stub never generated for objects that require access checks. + DCHECK(!transition->is_access_check_needed()); // Perform map transition for the receiver if necessary. if (details.type() == FIELD && - object->map()->unused_property_fields() == 0) { + Map::cast(transition->GetBackPointer())->unused_property_fields() == 0) { // The properties must be extended before we can store the value. // We jump to a runtime call that extends the properties array. __ push(receiver_reg); @@ -458,9 +487,8 @@ void StoreStubCompiler::GenerateStoreTransition(MacroAssembler* masm, __ Push(r2, r0); __ TailCallExternalReference( ExternalReference(IC_Utility(IC::kSharedStoreIC_ExtendStorage), - masm->isolate()), - 3, - 1); + isolate()), + 3, 1); return; } @@ -479,7 +507,7 @@ void StoreStubCompiler::GenerateStoreTransition(MacroAssembler* masm, OMIT_SMI_CHECK); if (details.type() == CONSTANT) { - ASSERT(value_reg.is(r0)); + DCHECK(value_reg.is(r0)); __ Ret(); return; } @@ -490,14 +518,14 @@ void StoreStubCompiler::GenerateStoreTransition(MacroAssembler* masm, // Adjust for the number of properties stored in the object. Even in the // face of a transition we can use the old map here because the size of the // object and the number of in-object properties is not going to change. - index -= object->map()->inobject_properties(); + index -= transition->inobject_properties(); // TODO(verwaest): Share this code as a code stub. 
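One substantive change hides in the transition-store hunk above: double-valued fields are now boxed with the mutable heap number map (note the MUTABLE allocation flag) rather than the ordinary one. The distinction matters because plain HeapNumbers are treated as shareable immutable values, while a field's box must be overwritable in place on later stores. A conceptual sketch of the ownership difference, not V8's actual object model:

    struct HeapNumber { double value; };         // immutable, may be shared
    struct MutableHeapNumber { double value; };  // owned by exactly one field

    struct FieldStorage {
      MutableHeapNumber* box;
      // Later stores overwrite the box in place; since no other holder can
      // reference this box, the update cannot leak to unrelated values.
      void StoreDouble(double v) { box->value = v; }
    };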
SmiCheck smi_check = representation.IsTagged() ? INLINE_SMI_CHECK : OMIT_SMI_CHECK; if (index < 0) { // Set the property straight into the object. - int offset = object->map()->instance_size() + (index * kPointerSize); + int offset = transition->instance_size() + (index * kPointerSize); if (representation.IsDouble()) { __ str(storage_reg, FieldMemOperand(receiver_reg, offset)); } else { @@ -547,297 +575,46 @@ void StoreStubCompiler::GenerateStoreTransition(MacroAssembler* masm, } // Return the value (register r0). - ASSERT(value_reg.is(r0)); + DCHECK(value_reg.is(r0)); __ bind(&exit); __ Ret(); } -// Generate StoreField code, value is passed in r0 register. -// When leaving generated code after success, the receiver_reg and name_reg -// may be clobbered. Upon branch to miss_label, the receiver and name -// registers have their original values. -void StoreStubCompiler::GenerateStoreField(MacroAssembler* masm, - Handle<JSObject> object, - LookupResult* lookup, - Register receiver_reg, - Register name_reg, - Register value_reg, - Register scratch1, - Register scratch2, - Label* miss_label) { - // r0 : value - Label exit; - - // Stub never generated for non-global objects that require access - // checks. - ASSERT(object->IsJSGlobalProxy() || !object->IsAccessCheckNeeded()); - - int index = lookup->GetFieldIndex().field_index(); - - // Adjust for the number of properties stored in the object. Even in the - // face of a transition we can use the old map here because the size of the - // object and the number of in-object properties is not going to change. - index -= object->map()->inobject_properties(); - - Representation representation = lookup->representation(); - ASSERT(!representation.IsNone()); - if (representation.IsSmi()) { - __ JumpIfNotSmi(value_reg, miss_label); - } else if (representation.IsHeapObject()) { - __ JumpIfSmi(value_reg, miss_label); - HeapType* field_type = lookup->GetFieldType(); - HeapType::Iterator<Map> it = field_type->Classes(); - if (!it.Done()) { - __ ldr(scratch1, FieldMemOperand(value_reg, HeapObject::kMapOffset)); - Label do_store; - while (true) { - __ CompareMap(scratch1, it.Current(), &do_store); - it.Advance(); - if (it.Done()) { - __ b(ne, miss_label); - break; - } - __ b(eq, &do_store); - } - __ bind(&do_store); - } - } else if (representation.IsDouble()) { - // Load the double storage. - if (index < 0) { - int offset = object->map()->instance_size() + (index * kPointerSize); - __ ldr(scratch1, FieldMemOperand(receiver_reg, offset)); - } else { - __ ldr(scratch1, - FieldMemOperand(receiver_reg, JSObject::kPropertiesOffset)); - int offset = index * kPointerSize + FixedArray::kHeaderSize; - __ ldr(scratch1, FieldMemOperand(scratch1, offset)); - } - - // Store the value into the storage. - Label do_store, heap_number; - __ JumpIfNotSmi(value_reg, &heap_number); - __ SmiUntag(scratch2, value_reg); - __ vmov(s0, scratch2); - __ vcvt_f64_s32(d0, s0); - __ jmp(&do_store); - - __ bind(&heap_number); - __ CheckMap(value_reg, scratch2, Heap::kHeapNumberMapRootIndex, - miss_label, DONT_DO_SMI_CHECK); - __ vldr(d0, FieldMemOperand(value_reg, HeapNumber::kValueOffset)); - - __ bind(&do_store); - __ vstr(d0, FieldMemOperand(scratch1, HeapNumber::kValueOffset)); - // Return the value (register r0). - ASSERT(value_reg.is(r0)); - __ Ret(); - return; - } - - // TODO(verwaest): Share this code as a code stub. - SmiCheck smi_check = representation.IsTagged() - ? INLINE_SMI_CHECK : OMIT_SMI_CHECK; - if (index < 0) { - // Set the property straight into the object. 
- int offset = object->map()->instance_size() + (index * kPointerSize); - __ str(value_reg, FieldMemOperand(receiver_reg, offset)); - - if (!representation.IsSmi()) { - // Skip updating write barrier if storing a smi. - __ JumpIfSmi(value_reg, &exit); - - // Update the write barrier for the array address. - // Pass the now unused name_reg as a scratch register. - __ mov(name_reg, value_reg); - __ RecordWriteField(receiver_reg, - offset, - name_reg, - scratch1, - kLRHasNotBeenSaved, - kDontSaveFPRegs, - EMIT_REMEMBERED_SET, - smi_check); - } - } else { - // Write to the properties array. - int offset = index * kPointerSize + FixedArray::kHeaderSize; - // Get the properties array - __ ldr(scratch1, - FieldMemOperand(receiver_reg, JSObject::kPropertiesOffset)); - __ str(value_reg, FieldMemOperand(scratch1, offset)); - - if (!representation.IsSmi()) { - // Skip updating write barrier if storing a smi. - __ JumpIfSmi(value_reg, &exit); - - // Update the write barrier for the array address. - // Ok to clobber receiver_reg and name_reg, since we return. - __ mov(name_reg, value_reg); - __ RecordWriteField(scratch1, - offset, - name_reg, - receiver_reg, - kLRHasNotBeenSaved, - kDontSaveFPRegs, - EMIT_REMEMBERED_SET, - smi_check); - } - } - - // Return the value (register r0). - ASSERT(value_reg.is(r0)); - __ bind(&exit); - __ Ret(); -} - - -void StoreStubCompiler::GenerateRestoreName(MacroAssembler* masm, - Label* label, - Handle<Name> name) { - if (!label->is_unused()) { - __ bind(label); - __ mov(this->name(), Operand(name)); - } -} - - -static void PushInterceptorArguments(MacroAssembler* masm, - Register receiver, - Register holder, - Register name, - Handle<JSObject> holder_obj) { - STATIC_ASSERT(StubCache::kInterceptorArgsNameIndex == 0); - STATIC_ASSERT(StubCache::kInterceptorArgsInfoIndex == 1); - STATIC_ASSERT(StubCache::kInterceptorArgsThisIndex == 2); - STATIC_ASSERT(StubCache::kInterceptorArgsHolderIndex == 3); - STATIC_ASSERT(StubCache::kInterceptorArgsLength == 4); - __ push(name); - Handle<InterceptorInfo> interceptor(holder_obj->GetNamedInterceptor()); - ASSERT(!masm->isolate()->heap()->InNewSpace(*interceptor)); - Register scratch = name; - __ mov(scratch, Operand(interceptor)); - __ push(scratch); - __ push(receiver); - __ push(holder); -} - - -static void CompileCallLoadPropertyWithInterceptor( - MacroAssembler* masm, - Register receiver, - Register holder, - Register name, - Handle<JSObject> holder_obj, - IC::UtilityId id) { - PushInterceptorArguments(masm, receiver, holder, name, holder_obj); - __ CallExternalReference( - ExternalReference(IC_Utility(id), masm->isolate()), - StubCache::kInterceptorArgsLength); -} - - -// Generate call to api function. -void StubCompiler::GenerateFastApiCall(MacroAssembler* masm, - const CallOptimization& optimization, - Handle<Map> receiver_map, - Register receiver, - Register scratch_in, - bool is_store, - int argc, - Register* values) { - ASSERT(!receiver.is(scratch_in)); - __ push(receiver); - // Write the arguments to stack frame. - for (int i = 0; i < argc; i++) { - Register arg = values[argc-1-i]; - ASSERT(!receiver.is(arg)); - ASSERT(!scratch_in.is(arg)); - __ push(arg); - } - ASSERT(optimization.is_simple_api_call()); - - // Abi for CallApiFunctionStub. - Register callee = r0; - Register call_data = r4; - Register holder = r2; - Register api_function_address = r1; - - // Put holder in place. 
- CallOptimization::HolderLookup holder_lookup; - Handle<JSObject> api_holder = optimization.LookupHolderOfExpectedType( - receiver_map, - &holder_lookup); - switch (holder_lookup) { - case CallOptimization::kHolderIsReceiver: - __ Move(holder, receiver); - break; - case CallOptimization::kHolderFound: - __ Move(holder, api_holder); - break; - case CallOptimization::kHolderNotFound: - UNREACHABLE(); +void NamedStoreHandlerCompiler::GenerateStoreField(LookupResult* lookup, + Register value_reg, + Label* miss_label) { + DCHECK(lookup->representation().IsHeapObject()); + __ JumpIfSmi(value_reg, miss_label); + HeapType::Iterator<Map> it = lookup->GetFieldType()->Classes(); + __ ldr(scratch1(), FieldMemOperand(value_reg, HeapObject::kMapOffset)); + Label do_store; + while (true) { + __ CompareMap(scratch1(), it.Current(), &do_store); + it.Advance(); + if (it.Done()) { + __ b(ne, miss_label); break; + } + __ b(eq, &do_store); } + __ bind(&do_store); - Isolate* isolate = masm->isolate(); - Handle<JSFunction> function = optimization.constant_function(); - Handle<CallHandlerInfo> api_call_info = optimization.api_call_info(); - Handle<Object> call_data_obj(api_call_info->data(), isolate); - - // Put callee in place. - __ Move(callee, function); - - bool call_data_undefined = false; - // Put call_data in place. - if (isolate->heap()->InNewSpace(*call_data_obj)) { - __ Move(call_data, api_call_info); - __ ldr(call_data, FieldMemOperand(call_data, CallHandlerInfo::kDataOffset)); - } else if (call_data_obj->IsUndefined()) { - call_data_undefined = true; - __ LoadRoot(call_data, Heap::kUndefinedValueRootIndex); - } else { - __ Move(call_data, call_data_obj); - } - - // Put api_function_address in place. - Address function_address = v8::ToCData<Address>(api_call_info->callback()); - ApiFunction fun(function_address); - ExternalReference::Type type = ExternalReference::DIRECT_API_CALL; - ExternalReference ref = ExternalReference(&fun, - type, - masm->isolate()); - __ mov(api_function_address, Operand(ref)); - - // Jump to stub. - CallApiFunctionStub stub(isolate, is_store, call_data_undefined, argc); - __ TailCallStub(&stub); -} - - -void StubCompiler::GenerateTailCall(MacroAssembler* masm, Handle<Code> code) { - __ Jump(code, RelocInfo::CODE_TARGET); + StoreFieldStub stub(isolate(), lookup->GetFieldIndex(), + lookup->representation()); + GenerateTailCall(masm(), stub.GetCode()); } -#undef __ -#define __ ACCESS_MASM(masm()) - - -Register StubCompiler::CheckPrototypes(Handle<HeapType> type, - Register object_reg, - Handle<JSObject> holder, - Register holder_reg, - Register scratch1, - Register scratch2, - Handle<Name> name, - Label* miss, - PrototypeCheckType check) { - Handle<Map> receiver_map(IC::TypeToMap(*type, isolate())); +Register PropertyHandlerCompiler::CheckPrototypes( + Register object_reg, Register holder_reg, Register scratch1, + Register scratch2, Handle<Name> name, Label* miss, + PrototypeCheckType check) { + Handle<Map> receiver_map(IC::TypeToMap(*type(), isolate())); // Make sure there's no overlap between holder and object registers. - ASSERT(!scratch1.is(object_reg) && !scratch1.is(holder_reg)); - ASSERT(!scratch2.is(object_reg) && !scratch2.is(holder_reg) + DCHECK(!scratch1.is(object_reg) && !scratch1.is(holder_reg)); + DCHECK(!scratch2.is(object_reg) && !scratch2.is(holder_reg) && !scratch2.is(scratch1)); // Keep track of the current object in register reg. 
@@ -845,12 +622,12 @@ Register StubCompiler::CheckPrototypes(Handle<HeapType> type, int depth = 0; Handle<JSObject> current = Handle<JSObject>::null(); - if (type->IsConstant()) { - current = Handle<JSObject>::cast(type->AsConstant()->Value()); + if (type()->IsConstant()) { + current = Handle<JSObject>::cast(type()->AsConstant()->Value()); } Handle<JSObject> prototype = Handle<JSObject>::null(); Handle<Map> current_map = receiver_map; - Handle<Map> holder_map(holder->map()); + Handle<Map> holder_map(holder()->map()); // Traverse the prototype chain and check the maps in the prototype chain for // fast and global objects or do negative lookup for normal objects. while (!current_map.is_identical_to(holder_map)) { @@ -858,18 +635,18 @@ Register StubCompiler::CheckPrototypes(Handle<HeapType> type, // Only global objects and objects that do not require access // checks are allowed in stubs. - ASSERT(current_map->IsJSGlobalProxyMap() || + DCHECK(current_map->IsJSGlobalProxyMap() || !current_map->is_access_check_needed()); prototype = handle(JSObject::cast(current_map->prototype())); if (current_map->is_dictionary_map() && - !current_map->IsJSGlobalObjectMap() && - !current_map->IsJSGlobalProxyMap()) { + !current_map->IsJSGlobalObjectMap()) { + DCHECK(!current_map->IsJSGlobalProxyMap()); // Proxy maps are fast. if (!name->IsUniqueName()) { - ASSERT(name->IsString()); + DCHECK(name->IsString()); name = factory()->InternalizeString(Handle<String>::cast(name)); } - ASSERT(current.is_null() || + DCHECK(current.is_null() || current->property_dictionary()->FindEntry(name) == NameDictionary::kNotFound); @@ -891,6 +668,9 @@ Register StubCompiler::CheckPrototypes(Handle<HeapType> type, // Check access rights to the global object. This has to happen after // the map check so that we know that the object is actually a global // object. + // This allows us to install generated handlers for accesses to the + // global proxy (as opposed to using slow ICs). See corresponding code + // in LookupForRead(). if (current_map->IsJSGlobalProxyMap()) { __ CheckAccessGlobalProxy(reg, scratch2, miss); } else if (current_map->IsJSGlobalObjectMap()) { @@ -901,12 +681,15 @@ Register StubCompiler::CheckPrototypes(Handle<HeapType> type, reg = holder_reg; // From now on the object will be in holder_reg. - if (heap()->InNewSpace(*prototype)) { - // The prototype is in new space; we cannot store a reference to it - // in the code. Load it from the map. + // Two possible reasons for loading the prototype from the map: + // (1) Can't store references to new space in code. + // (2) Handler is shared for all receivers with the same prototype + // map (but not necessarily the same prototype instance). + bool load_prototype_from_map = + heap()->InNewSpace(*prototype) || depth == 1; + if (load_prototype_from_map) { __ ldr(reg, FieldMemOperand(map_reg, Map::kPrototypeOffset)); } else { - // The prototype is in old space; load it directly. __ mov(reg, Operand(prototype)); } } @@ -925,7 +708,7 @@ Register StubCompiler::CheckPrototypes(Handle<HeapType> type, } // Perform security check for access to the global object. 
- ASSERT(current_map->IsJSGlobalProxyMap() || + DCHECK(current_map->IsJSGlobalProxyMap() || !current_map->is_access_check_needed()); if (current_map->IsJSGlobalProxyMap()) { __ CheckAccessGlobalProxy(reg, scratch1, miss); @@ -936,7 +719,7 @@ Register StubCompiler::CheckPrototypes(Handle<HeapType> type, } -void LoadStubCompiler::HandlerFrontendFooter(Handle<Name> name, Label* miss) { +void NamedLoadHandlerCompiler::FrontendFooter(Handle<Name> name, Label* miss) { if (!miss->is_unused()) { Label success; __ b(&success); @@ -947,94 +730,26 @@ void LoadStubCompiler::HandlerFrontendFooter(Handle<Name> name, Label* miss) { } -void StoreStubCompiler::HandlerFrontendFooter(Handle<Name> name, Label* miss) { +void NamedStoreHandlerCompiler::FrontendFooter(Handle<Name> name, Label* miss) { if (!miss->is_unused()) { Label success; __ b(&success); - GenerateRestoreName(masm(), miss, name); + GenerateRestoreName(miss, name); TailCallBuiltin(masm(), MissBuiltin(kind())); __ bind(&success); } } -Register LoadStubCompiler::CallbackHandlerFrontend( - Handle<HeapType> type, - Register object_reg, - Handle<JSObject> holder, - Handle<Name> name, - Handle<Object> callback) { - Label miss; - - Register reg = HandlerFrontendHeader(type, object_reg, holder, name, &miss); - - if (!holder->HasFastProperties() && !holder->IsJSGlobalObject()) { - ASSERT(!reg.is(scratch2())); - ASSERT(!reg.is(scratch3())); - ASSERT(!reg.is(scratch4())); - - // Load the properties dictionary. - Register dictionary = scratch4(); - __ ldr(dictionary, FieldMemOperand(reg, JSObject::kPropertiesOffset)); - - // Probe the dictionary. - Label probe_done; - NameDictionaryLookupStub::GeneratePositiveLookup(masm(), - &miss, - &probe_done, - dictionary, - this->name(), - scratch2(), - scratch3()); - __ bind(&probe_done); - - // If probing finds an entry in the dictionary, scratch3 contains the - // pointer into the dictionary. Check that the value is the callback. - Register pointer = scratch3(); - const int kElementsStartOffset = NameDictionary::kHeaderSize + - NameDictionary::kElementsStartIndex * kPointerSize; - const int kValueOffset = kElementsStartOffset + kPointerSize; - __ ldr(scratch2(), FieldMemOperand(pointer, kValueOffset)); - __ cmp(scratch2(), Operand(callback)); - __ b(ne, &miss); - } - - HandlerFrontendFooter(name, &miss); - return reg; -} - - -void LoadStubCompiler::GenerateLoadField(Register reg, - Handle<JSObject> holder, - PropertyIndex field, - Representation representation) { - if (!reg.is(receiver())) __ mov(receiver(), reg); - if (kind() == Code::LOAD_IC) { - LoadFieldStub stub(isolate(), - field.is_inobject(holder), - field.translate(holder), - representation); - GenerateTailCall(masm(), stub.GetCode()); - } else { - KeyedLoadFieldStub stub(isolate(), - field.is_inobject(holder), - field.translate(holder), - representation); - GenerateTailCall(masm(), stub.GetCode()); - } -} - - -void LoadStubCompiler::GenerateLoadConstant(Handle<Object> value) { +void NamedLoadHandlerCompiler::GenerateLoadConstant(Handle<Object> value) { // Return the constant value. __ Move(r0, value); __ Ret(); } -void LoadStubCompiler::GenerateLoadCallback( - Register reg, - Handle<ExecutableAccessorInfo> callback) { +void NamedLoadHandlerCompiler::GenerateLoadCallback( + Register reg, Handle<ExecutableAccessorInfo> callback) { // Build AccessorInfo::args_ list on the stack and push property name below // the exit frame to make GC aware of them and store pointers to them. 
STATIC_ASSERT(PropertyCallbackArguments::kHolderIndex == 0); @@ -1044,9 +759,9 @@ void LoadStubCompiler::GenerateLoadCallback( STATIC_ASSERT(PropertyCallbackArguments::kDataIndex == 4); STATIC_ASSERT(PropertyCallbackArguments::kThisIndex == 5); STATIC_ASSERT(PropertyCallbackArguments::kArgsLength == 6); - ASSERT(!scratch2().is(reg)); - ASSERT(!scratch3().is(reg)); - ASSERT(!scratch4().is(reg)); + DCHECK(!scratch2().is(reg)); + DCHECK(!scratch3().is(reg)); + DCHECK(!scratch4().is(reg)); __ push(receiver()); if (heap()->InNewSpace(callback->data())) { __ Move(scratch3(), callback); @@ -1079,14 +794,11 @@ void LoadStubCompiler::GenerateLoadCallback( } -void LoadStubCompiler::GenerateLoadInterceptor( - Register holder_reg, - Handle<Object> object, - Handle<JSObject> interceptor_holder, - LookupResult* lookup, - Handle<Name> name) { - ASSERT(interceptor_holder->HasNamedInterceptor()); - ASSERT(!interceptor_holder->GetNamedInterceptor()->getter()->IsUndefined()); +void NamedLoadHandlerCompiler::GenerateLoadInterceptor(Register holder_reg, + LookupResult* lookup, + Handle<Name> name) { + DCHECK(holder()->HasNamedInterceptor()); + DCHECK(!holder()->GetNamedInterceptor()->getter()->IsUndefined()); // So far the most popular follow ups for interceptor loads are FIELD // and CALLBACKS, so inline only them, other cases may be added @@ -1097,10 +809,12 @@ void LoadStubCompiler::GenerateLoadInterceptor( compile_followup_inline = true; } else if (lookup->type() == CALLBACKS && lookup->GetCallbackObject()->IsExecutableAccessorInfo()) { - ExecutableAccessorInfo* callback = - ExecutableAccessorInfo::cast(lookup->GetCallbackObject()); - compile_followup_inline = callback->getter() != NULL && - callback->IsCompatibleReceiver(*object); + Handle<ExecutableAccessorInfo> callback( + ExecutableAccessorInfo::cast(lookup->GetCallbackObject())); + compile_followup_inline = + callback->getter() != NULL && + ExecutableAccessorInfo::IsCompatibleReceiverType(isolate(), callback, + type()); } } @@ -1108,13 +822,13 @@ void LoadStubCompiler::GenerateLoadInterceptor( // Compile the interceptor call, followed by inline code to load the // property from further up the prototype chain if the call fails. // Check that the maps haven't changed. - ASSERT(holder_reg.is(receiver()) || holder_reg.is(scratch1())); + DCHECK(holder_reg.is(receiver()) || holder_reg.is(scratch1())); // Preserve the receiver register explicitly whenever it is different from // the holder and it is needed should the interceptor return without any // result. The CALLBACKS case needs the receiver to be passed into C++ code, // the FIELD case might cause a miss during the prototype check. - bool must_perfrom_prototype_check = *interceptor_holder != lookup->holder(); + bool must_perfrom_prototype_check = *holder() != lookup->holder(); bool must_preserve_receiver_reg = !receiver().is(holder_reg) && (lookup->type() == CALLBACKS || must_perfrom_prototype_check); @@ -1131,7 +845,7 @@ void LoadStubCompiler::GenerateLoadInterceptor( // interceptor's holder has been compiled before (see a caller // of this method.) CompileCallLoadPropertyWithInterceptor( - masm(), receiver(), holder_reg, this->name(), interceptor_holder, + masm(), receiver(), holder_reg, this->name(), holder(), IC::kLoadPropertyWithInterceptorOnly); // Check if interceptor provided a value for property. If it's @@ -1152,31 +866,26 @@ void LoadStubCompiler::GenerateLoadInterceptor( // Leave the internal frame. 
} - GenerateLoadPostInterceptor(holder_reg, interceptor_holder, name, lookup); + GenerateLoadPostInterceptor(holder_reg, name, lookup); } else { // !compile_followup_inline // Call the runtime system to load the interceptor. // Check that the maps haven't changed. - PushInterceptorArguments(masm(), receiver(), holder_reg, - this->name(), interceptor_holder); + PushInterceptorArguments(masm(), receiver(), holder_reg, this->name(), + holder()); ExternalReference ref = - ExternalReference(IC_Utility(IC::kLoadPropertyWithInterceptorForLoad), + ExternalReference(IC_Utility(IC::kLoadPropertyWithInterceptor), isolate()); - __ TailCallExternalReference(ref, StubCache::kInterceptorArgsLength, 1); + __ TailCallExternalReference( + ref, NamedLoadHandlerCompiler::kInterceptorArgsLength, 1); } } -Handle<Code> StoreStubCompiler::CompileStoreCallback( - Handle<JSObject> object, - Handle<JSObject> holder, - Handle<Name> name, +Handle<Code> NamedStoreHandlerCompiler::CompileStoreCallback( + Handle<JSObject> object, Handle<Name> name, Handle<ExecutableAccessorInfo> callback) { - Register holder_reg = HandlerFrontend( - IC::CurrentTypeOf(object, isolate()), receiver(), holder, name); - - // Stub never generated for non-global objects that require access checks. - ASSERT(holder->IsJSGlobalProxy() || !holder->IsAccessCheckNeeded()); + Register holder_reg = Frontend(receiver(), name); __ push(receiver()); // receiver __ push(holder_reg); @@ -1199,10 +908,8 @@ Handle<Code> StoreStubCompiler::CompileStoreCallback( #define __ ACCESS_MASM(masm) -void StoreStubCompiler::GenerateStoreViaSetter( - MacroAssembler* masm, - Handle<HeapType> type, - Register receiver, +void NamedStoreHandlerCompiler::GenerateStoreViaSetter( + MacroAssembler* masm, Handle<HeapType> type, Register receiver, Handle<JSFunction> setter) { // ----------- S t a t e ------------- // -- lr : return address @@ -1218,8 +925,7 @@ void StoreStubCompiler::GenerateStoreViaSetter( if (IC::TypeToMap(*type, masm->isolate())->IsJSGlobalObjectMap()) { // Swap in the global receiver. __ ldr(receiver, - FieldMemOperand( - receiver, JSGlobalObject::kGlobalReceiverOffset)); + FieldMemOperand(receiver, JSGlobalObject::kGlobalProxyOffset)); } __ Push(receiver, value()); ParameterCount actual(1); @@ -1246,14 +952,13 @@ void StoreStubCompiler::GenerateStoreViaSetter( #define __ ACCESS_MASM(masm()) -Handle<Code> StoreStubCompiler::CompileStoreInterceptor( - Handle<JSObject> object, +Handle<Code> NamedStoreHandlerCompiler::CompileStoreInterceptor( Handle<Name> name) { __ Push(receiver(), this->name(), value()); // Do tail-call to the runtime system. - ExternalReference store_ic_property = - ExternalReference(IC_Utility(IC::kStoreInterceptorProperty), isolate()); + ExternalReference store_ic_property = ExternalReference( + IC_Utility(IC::kStorePropertyWithInterceptor), isolate()); __ TailCallExternalReference(store_ic_property, 3, 1); // Return the generated code. @@ -1261,62 +966,35 @@ Handle<Code> StoreStubCompiler::CompileStoreInterceptor( } -Handle<Code> LoadStubCompiler::CompileLoadNonexistent(Handle<HeapType> type, - Handle<JSObject> last, - Handle<Name> name) { - NonexistentHandlerFrontend(type, last, name); - - // Return undefined if maps of the full prototype chain are still the - // same and no global property with this name contains a value. - __ LoadRoot(r0, Heap::kUndefinedValueRootIndex); - __ Ret(); - - // Return the generated code. 
- return GetCode(kind(), Code::FAST, name); -} - - -Register* LoadStubCompiler::registers() { - // receiver, name, scratch1, scratch2, scratch3, scratch4. - static Register registers[] = { r0, r2, r3, r1, r4, r5 }; - return registers; -} - - -Register* KeyedLoadStubCompiler::registers() { +Register* PropertyAccessCompiler::load_calling_convention() { // receiver, name, scratch1, scratch2, scratch3, scratch4. - static Register registers[] = { r1, r0, r2, r3, r4, r5 }; + Register receiver = LoadIC::ReceiverRegister(); + Register name = LoadIC::NameRegister(); + static Register registers[] = { receiver, name, r3, r0, r4, r5 }; return registers; } -Register StoreStubCompiler::value() { - return r0; -} - - -Register* StoreStubCompiler::registers() { +Register* PropertyAccessCompiler::store_calling_convention() { // receiver, name, scratch1, scratch2, scratch3. - static Register registers[] = { r1, r2, r3, r4, r5 }; + Register receiver = StoreIC::ReceiverRegister(); + Register name = StoreIC::NameRegister(); + DCHECK(r3.is(KeyedStoreIC::MapRegister())); + static Register registers[] = { receiver, name, r3, r4, r5 }; return registers; } -Register* KeyedStoreStubCompiler::registers() { - // receiver, name, scratch1, scratch2, scratch3. - static Register registers[] = { r2, r1, r3, r4, r5 }; - return registers; -} +Register NamedStoreHandlerCompiler::value() { return StoreIC::ValueRegister(); } #undef __ #define __ ACCESS_MASM(masm) -void LoadStubCompiler::GenerateLoadViaGetter(MacroAssembler* masm, - Handle<HeapType> type, - Register receiver, - Handle<JSFunction> getter) { +void NamedLoadHandlerCompiler::GenerateLoadViaGetter( + MacroAssembler* masm, Handle<HeapType> type, Register receiver, + Handle<JSFunction> getter) { // ----------- S t a t e ------------- // -- r0 : receiver // -- r2 : name @@ -1330,8 +1008,7 @@ void LoadStubCompiler::GenerateLoadViaGetter(MacroAssembler* masm, if (IC::TypeToMap(*type, masm->isolate())->IsJSGlobalObjectMap()) { // Swap in the global receiver. __ ldr(receiver, - FieldMemOperand( - receiver, JSGlobalObject::kGlobalReceiverOffset)); + FieldMemOperand(receiver, JSGlobalObject::kGlobalProxyOffset)); } __ push(receiver); ParameterCount actual(0); @@ -1355,57 +1032,61 @@ void LoadStubCompiler::GenerateLoadViaGetter(MacroAssembler* masm, #define __ ACCESS_MASM(masm()) -Handle<Code> LoadStubCompiler::CompileLoadGlobal( - Handle<HeapType> type, - Handle<GlobalObject> global, - Handle<PropertyCell> cell, - Handle<Name> name, - bool is_dont_delete) { +Handle<Code> NamedLoadHandlerCompiler::CompileLoadGlobal( + Handle<PropertyCell> cell, Handle<Name> name, bool is_configurable) { Label miss; - HandlerFrontendHeader(type, receiver(), global, name, &miss); + FrontendHeader(receiver(), name, &miss); // Get the value from the cell. - __ mov(r3, Operand(cell)); - __ ldr(r4, FieldMemOperand(r3, Cell::kValueOffset)); + Register result = StoreIC::ValueRegister(); + __ mov(result, Operand(cell)); + __ ldr(result, FieldMemOperand(result, Cell::kValueOffset)); // Check for deleted property if property can actually be deleted. - if (!is_dont_delete) { + if (is_configurable) { __ LoadRoot(ip, Heap::kTheHoleValueRootIndex); - __ cmp(r4, ip); + __ cmp(result, ip); __ b(eq, &miss); } Counters* counters = isolate()->counters(); __ IncrementCounter(counters->named_load_global_stub(), 1, r1, r3); - __ mov(r0, r4); __ Ret(); - HandlerFrontendFooter(name, &miss); + FrontendFooter(name, &miss); // Return the generated code. 
return GetCode(kind(), Code::NORMAL, name); } -Handle<Code> BaseLoadStoreStubCompiler::CompilePolymorphicIC( - TypeHandleList* types, - CodeHandleList* handlers, - Handle<Name> name, - Code::StubType type, - IcCheckType check) { +Handle<Code> PropertyICCompiler::CompilePolymorphic(TypeHandleList* types, + CodeHandleList* handlers, + Handle<Name> name, + Code::StubType type, + IcCheckType check) { Label miss; if (check == PROPERTY && (kind() == Code::KEYED_LOAD_IC || kind() == Code::KEYED_STORE_IC)) { - __ cmp(this->name(), Operand(name)); - __ b(ne, &miss); + // In case we are compiling an IC for dictionary loads and stores, just + // check whether the name is unique. + if (name.is_identical_to(isolate()->factory()->normal_ic_symbol())) { + __ JumpIfNotUniqueName(this->name(), &miss); + } else { + __ cmp(this->name(), Operand(name)); + __ b(ne, &miss); + } } Label number_case; Label* smi_target = IncludesNumberType(types) ? &number_case : &miss; __ JumpIfSmi(receiver(), smi_target); + // Polymorphic keyed stores may use the map register Register map_reg = scratch1(); + DCHECK(kind() != Code::KEYED_STORE_IC || + map_reg.is(KeyedStoreIC::MapRegister())); int receiver_count = types->length(); int number_of_handled_maps = 0; @@ -1418,13 +1099,13 @@ Handle<Code> BaseLoadStoreStubCompiler::CompilePolymorphicIC( __ mov(ip, Operand(map)); __ cmp(map_reg, ip); if (type->Is(HeapType::Number())) { - ASSERT(!number_case.is_unused()); + DCHECK(!number_case.is_unused()); __ bind(&number_case); } __ Jump(handlers->at(current), RelocInfo::CODE_TARGET, eq); } } - ASSERT(number_of_handled_maps != 0); + DCHECK(number_of_handled_maps != 0); __ bind(&miss); TailCallBuiltin(masm(), MissBuiltin(kind())); @@ -1432,24 +1113,12 @@ Handle<Code> BaseLoadStoreStubCompiler::CompilePolymorphicIC( // Return the generated code. InlineCacheState state = number_of_handled_maps > 1 ? POLYMORPHIC : MONOMORPHIC; - return GetICCode(kind(), type, name, state); -} - - -void StoreStubCompiler::GenerateStoreArrayLength() { - // Prepare tail call to StoreIC_ArrayLength. - __ Push(receiver(), value()); - - ExternalReference ref = - ExternalReference(IC_Utility(IC::kStoreIC_ArrayLength), - masm()->isolate()); - __ TailCallExternalReference(ref, 2, 1); + return GetCode(kind(), type, name, state); } -Handle<Code> KeyedStoreStubCompiler::CompileStorePolymorphic( - MapHandleList* receiver_maps, - CodeHandleList* handler_stubs, +Handle<Code> PropertyICCompiler::CompileKeyedStorePolymorphic( + MapHandleList* receiver_maps, CodeHandleList* handler_stubs, MapHandleList* transitioned_maps) { Label miss; __ JumpIfSmi(receiver(), &miss); @@ -1474,8 +1143,7 @@ Handle<Code> KeyedStoreStubCompiler::CompileStorePolymorphic( TailCallBuiltin(masm(), MissBuiltin(kind())); // Return the generated code. - return GetICCode( - kind(), Code::NORMAL, factory()->empty_string(), POLYMORPHIC); + return GetCode(kind(), Code::NORMAL, factory()->empty_string(), POLYMORPHIC); } @@ -1483,21 +1151,19 @@ Handle<Code> KeyedStoreStubCompiler::CompileStorePolymorphic( #define __ ACCESS_MASM(masm) -void KeyedLoadStubCompiler::GenerateLoadDictionaryElement( +void ElementHandlerCompiler::GenerateLoadDictionaryElement( MacroAssembler* masm) { - // ---------- S t a t e -------------- - // -- lr : return address - // -- r0 : key - // -- r1 : receiver - // ----------------------------------- + // The return address is in lr. 
Label slow, miss; - Register key = r0; - Register receiver = r1; + Register key = LoadIC::NameRegister(); + Register receiver = LoadIC::ReceiverRegister(); + DCHECK(receiver.is(r1)); + DCHECK(key.is(r2)); - __ UntagAndJumpIfNotSmi(r2, key, &miss); + __ UntagAndJumpIfNotSmi(r6, key, &miss); __ ldr(r4, FieldMemOperand(receiver, JSObject::kElementsOffset)); - __ LoadFromNumberDictionary(&slow, r4, key, r0, r2, r3, r5); + __ LoadFromNumberDictionary(&slow, r4, key, r0, r6, r3, r5); __ Ret(); __ bind(&slow); @@ -1505,21 +1171,11 @@ void KeyedLoadStubCompiler::GenerateLoadDictionaryElement( masm->isolate()->counters()->keyed_load_external_array_slow(), 1, r2, r3); - // ---------- S t a t e -------------- - // -- lr : return address - // -- r0 : key - // -- r1 : receiver - // ----------------------------------- TailCallBuiltin(masm, Builtins::kKeyedLoadIC_Slow); // Miss case, call the runtime. __ bind(&miss); - // ---------- S t a t e -------------- - // -- lr : return address - // -- r0 : key - // -- r1 : receiver - // ----------------------------------- TailCallBuiltin(masm, Builtins::kKeyedLoadIC_Miss); } diff --git a/deps/v8/src/arm64/assembler-arm64-inl.h b/deps/v8/src/arm64/assembler-arm64-inl.h index 3c17153f68e..3b24197ebf3 100644 --- a/deps/v8/src/arm64/assembler-arm64-inl.h +++ b/deps/v8/src/arm64/assembler-arm64-inl.h @@ -5,24 +5,30 @@ #ifndef V8_ARM64_ASSEMBLER_ARM64_INL_H_ #define V8_ARM64_ASSEMBLER_ARM64_INL_H_ -#include "arm64/assembler-arm64.h" -#include "cpu.h" -#include "debug.h" +#include "src/arm64/assembler-arm64.h" +#include "src/assembler.h" +#include "src/debug.h" namespace v8 { namespace internal { -void RelocInfo::apply(intptr_t delta) { +bool CpuFeatures::SupportsCrankshaft() { return true; } + + +void RelocInfo::apply(intptr_t delta, ICacheFlushMode icache_flush_mode) { UNIMPLEMENTED(); } -void RelocInfo::set_target_address(Address target, WriteBarrierMode mode) { - ASSERT(IsCodeTarget(rmode_) || IsRuntimeEntry(rmode_)); - Assembler::set_target_address_at(pc_, host_, target); - if (mode == UPDATE_WRITE_BARRIER && host() != NULL && IsCodeTarget(rmode_)) { +void RelocInfo::set_target_address(Address target, + WriteBarrierMode write_barrier_mode, + ICacheFlushMode icache_flush_mode) { + DCHECK(IsCodeTarget(rmode_) || IsRuntimeEntry(rmode_)); + Assembler::set_target_address_at(pc_, host_, target, icache_flush_mode); + if (write_barrier_mode == UPDATE_WRITE_BARRIER && host() != NULL && + IsCodeTarget(rmode_)) { Object* target_code = Code::GetCodeFromTargetAddress(target); host()->GetHeap()->incremental_marking()->RecordWriteIntoCode( host(), this, HeapObject::cast(target_code)); @@ -31,54 +37,54 @@ void RelocInfo::set_target_address(Address target, WriteBarrierMode mode) { inline unsigned CPURegister::code() const { - ASSERT(IsValid()); + DCHECK(IsValid()); return reg_code; } inline CPURegister::RegisterType CPURegister::type() const { - ASSERT(IsValidOrNone()); + DCHECK(IsValidOrNone()); return reg_type; } inline RegList CPURegister::Bit() const { - ASSERT(reg_code < (sizeof(RegList) * kBitsPerByte)); + DCHECK(reg_code < (sizeof(RegList) * kBitsPerByte)); return IsValid() ? 
1UL << reg_code : 0; } inline unsigned CPURegister::SizeInBits() const { - ASSERT(IsValid()); + DCHECK(IsValid()); return reg_size; } inline int CPURegister::SizeInBytes() const { - ASSERT(IsValid()); - ASSERT(SizeInBits() % 8 == 0); + DCHECK(IsValid()); + DCHECK(SizeInBits() % 8 == 0); return reg_size / 8; } inline bool CPURegister::Is32Bits() const { - ASSERT(IsValid()); + DCHECK(IsValid()); return reg_size == 32; } inline bool CPURegister::Is64Bits() const { - ASSERT(IsValid()); + DCHECK(IsValid()); return reg_size == 64; } inline bool CPURegister::IsValid() const { if (IsValidRegister() || IsValidFPRegister()) { - ASSERT(!IsNone()); + DCHECK(!IsNone()); return true; } else { - ASSERT(IsNone()); + DCHECK(IsNone()); return false; } } @@ -100,17 +106,22 @@ inline bool CPURegister::IsValidFPRegister() const { inline bool CPURegister::IsNone() const { // kNoRegister types should always have size 0 and code 0. - ASSERT((reg_type != kNoRegister) || (reg_code == 0)); - ASSERT((reg_type != kNoRegister) || (reg_size == 0)); + DCHECK((reg_type != kNoRegister) || (reg_code == 0)); + DCHECK((reg_type != kNoRegister) || (reg_size == 0)); return reg_type == kNoRegister; } inline bool CPURegister::Is(const CPURegister& other) const { - ASSERT(IsValidOrNone() && other.IsValidOrNone()); - return (reg_code == other.reg_code) && (reg_size == other.reg_size) && - (reg_type == other.reg_type); + DCHECK(IsValidOrNone() && other.IsValidOrNone()); + return Aliases(other) && (reg_size == other.reg_size); +} + + +inline bool CPURegister::Aliases(const CPURegister& other) const { + DCHECK(IsValidOrNone() && other.IsValidOrNone()); + return (reg_code == other.reg_code) && (reg_type == other.reg_type); } @@ -135,27 +146,27 @@ inline bool CPURegister::IsValidOrNone() const { inline bool CPURegister::IsZero() const { - ASSERT(IsValid()); + DCHECK(IsValid()); return IsRegister() && (reg_code == kZeroRegCode); } inline bool CPURegister::IsSP() const { - ASSERT(IsValid()); + DCHECK(IsValid()); return IsRegister() && (reg_code == kSPRegInternalCode); } inline void CPURegList::Combine(const CPURegList& other) { - ASSERT(IsValid()); - ASSERT(other.type() == type_); - ASSERT(other.RegisterSizeInBits() == size_); + DCHECK(IsValid()); + DCHECK(other.type() == type_); + DCHECK(other.RegisterSizeInBits() == size_); list_ |= other.list(); } inline void CPURegList::Remove(const CPURegList& other) { - ASSERT(IsValid()); + DCHECK(IsValid()); if (other.type() == type_) { list_ &= ~other.list(); } @@ -163,8 +174,8 @@ inline void CPURegList::Remove(const CPURegList& other) { inline void CPURegList::Combine(const CPURegister& other) { - ASSERT(other.type() == type_); - ASSERT(other.SizeInBits() == size_); + DCHECK(other.type() == type_); + DCHECK(other.SizeInBits() == size_); Combine(other.code()); } @@ -181,92 +192,92 @@ inline void CPURegList::Remove(const CPURegister& other1, inline void CPURegList::Combine(int code) { - ASSERT(IsValid()); - ASSERT(CPURegister::Create(code, size_, type_).IsValid()); + DCHECK(IsValid()); + DCHECK(CPURegister::Create(code, size_, type_).IsValid()); list_ |= (1UL << code); } inline void CPURegList::Remove(int code) { - ASSERT(IsValid()); - ASSERT(CPURegister::Create(code, size_, type_).IsValid()); + DCHECK(IsValid()); + DCHECK(CPURegister::Create(code, size_, type_).IsValid()); list_ &= ~(1UL << code); } inline Register Register::XRegFromCode(unsigned code) { - // This function returns the zero register when code = 31. The stack pointer - // can not be returned. 
- ASSERT(code < kNumberOfRegisters); - return Register::Create(code, kXRegSizeInBits); + if (code == kSPRegInternalCode) { + return csp; + } else { + DCHECK(code < kNumberOfRegisters); + return Register::Create(code, kXRegSizeInBits); + } } inline Register Register::WRegFromCode(unsigned code) { - ASSERT(code < kNumberOfRegisters); - return Register::Create(code, kWRegSizeInBits); + if (code == kSPRegInternalCode) { + return wcsp; + } else { + DCHECK(code < kNumberOfRegisters); + return Register::Create(code, kWRegSizeInBits); + } } inline FPRegister FPRegister::SRegFromCode(unsigned code) { - ASSERT(code < kNumberOfFPRegisters); + DCHECK(code < kNumberOfFPRegisters); return FPRegister::Create(code, kSRegSizeInBits); } inline FPRegister FPRegister::DRegFromCode(unsigned code) { - ASSERT(code < kNumberOfFPRegisters); + DCHECK(code < kNumberOfFPRegisters); return FPRegister::Create(code, kDRegSizeInBits); } inline Register CPURegister::W() const { - ASSERT(IsValidRegister()); + DCHECK(IsValidRegister()); return Register::WRegFromCode(reg_code); } inline Register CPURegister::X() const { - ASSERT(IsValidRegister()); + DCHECK(IsValidRegister()); return Register::XRegFromCode(reg_code); } inline FPRegister CPURegister::S() const { - ASSERT(IsValidFPRegister()); + DCHECK(IsValidFPRegister()); return FPRegister::SRegFromCode(reg_code); } inline FPRegister CPURegister::D() const { - ASSERT(IsValidFPRegister()); + DCHECK(IsValidFPRegister()); return FPRegister::DRegFromCode(reg_code); } -// Operand. -template<typename T> -Operand::Operand(Handle<T> value) : reg_(NoReg) { - initialize_handle(value); -} - - +// Immediate. // Default initializer is for int types -template<typename int_t> -struct OperandInitializer { +template<typename T> +struct ImmediateInitializer { static const bool kIsIntType = true; - static inline RelocInfo::Mode rmode_for(int_t) { - return sizeof(int_t) == 8 ? RelocInfo::NONE64 : RelocInfo::NONE32; + static inline RelocInfo::Mode rmode_for(T) { + return sizeof(T) == 8 ? 
RelocInfo::NONE64 : RelocInfo::NONE32; } - static inline int64_t immediate_for(int_t t) { - STATIC_ASSERT(sizeof(int_t) <= 8); + static inline int64_t immediate_for(T t) { + STATIC_ASSERT(sizeof(T) <= 8); return t; } }; template<> -struct OperandInitializer<Smi*> { +struct ImmediateInitializer<Smi*> { static const bool kIsIntType = false; static inline RelocInfo::Mode rmode_for(Smi* t) { return RelocInfo::NONE64; @@ -278,7 +289,7 @@ struct OperandInitializer<Smi*> { template<> -struct OperandInitializer<ExternalReference> { +struct ImmediateInitializer<ExternalReference> { static const bool kIsIntType = false; static inline RelocInfo::Mode rmode_for(ExternalReference t) { return RelocInfo::EXTERNAL_REFERENCE; @@ -290,45 +301,64 @@ struct OperandInitializer<ExternalReference> { template<typename T> -Operand::Operand(T t) - : immediate_(OperandInitializer<T>::immediate_for(t)), - reg_(NoReg), - rmode_(OperandInitializer<T>::rmode_for(t)) {} +Immediate::Immediate(Handle<T> value) { + InitializeHandle(value); +} template<typename T> -Operand::Operand(T t, RelocInfo::Mode rmode) - : immediate_(OperandInitializer<T>::immediate_for(t)), - reg_(NoReg), +Immediate::Immediate(T t) + : value_(ImmediateInitializer<T>::immediate_for(t)), + rmode_(ImmediateInitializer<T>::rmode_for(t)) {} + + +template<typename T> +Immediate::Immediate(T t, RelocInfo::Mode rmode) + : value_(ImmediateInitializer<T>::immediate_for(t)), rmode_(rmode) { - STATIC_ASSERT(OperandInitializer<T>::kIsIntType); + STATIC_ASSERT(ImmediateInitializer<T>::kIsIntType); } +// Operand. +template<typename T> +Operand::Operand(Handle<T> value) : immediate_(value), reg_(NoReg) {} + + +template<typename T> +Operand::Operand(T t) : immediate_(t), reg_(NoReg) {} + + +template<typename T> +Operand::Operand(T t, RelocInfo::Mode rmode) + : immediate_(t, rmode), + reg_(NoReg) {} + + Operand::Operand(Register reg, Shift shift, unsigned shift_amount) - : reg_(reg), + : immediate_(0), + reg_(reg), shift_(shift), extend_(NO_EXTEND), - shift_amount_(shift_amount), - rmode_(reg.Is64Bits() ? RelocInfo::NONE64 : RelocInfo::NONE32) { - ASSERT(reg.Is64Bits() || (shift_amount < kWRegSizeInBits)); - ASSERT(reg.Is32Bits() || (shift_amount < kXRegSizeInBits)); - ASSERT(!reg.IsSP()); + shift_amount_(shift_amount) { + DCHECK(reg.Is64Bits() || (shift_amount < kWRegSizeInBits)); + DCHECK(reg.Is32Bits() || (shift_amount < kXRegSizeInBits)); + DCHECK(!reg.IsSP()); } Operand::Operand(Register reg, Extend extend, unsigned shift_amount) - : reg_(reg), + : immediate_(0), + reg_(reg), shift_(NO_SHIFT), extend_(extend), - shift_amount_(shift_amount), - rmode_(reg.Is64Bits() ? RelocInfo::NONE64 : RelocInfo::NONE32) { - ASSERT(reg.IsValid()); - ASSERT(shift_amount <= 4); - ASSERT(!reg.IsSP()); + shift_amount_(shift_amount) { + DCHECK(reg.IsValid()); + DCHECK(shift_amount <= 4); + DCHECK(!reg.IsSP()); // Extend modes SXTX and UXTX require a 64-bit register. 
- ASSERT(reg.Is64Bits() || ((extend != SXTX) && (extend != UXTX))); + DCHECK(reg.Is64Bits() || ((extend != SXTX) && (extend != UXTX))); } @@ -349,7 +379,7 @@ bool Operand::IsExtendedRegister() const { bool Operand::IsZero() const { if (IsImmediate()) { - return immediate() == 0; + return ImmediateValue() == 0; } else { return reg().IsZero(); } @@ -357,51 +387,61 @@ bool Operand::IsZero() const { Operand Operand::ToExtendedRegister() const { - ASSERT(IsShiftedRegister()); - ASSERT((shift_ == LSL) && (shift_amount_ <= 4)); + DCHECK(IsShiftedRegister()); + DCHECK((shift_ == LSL) && (shift_amount_ <= 4)); return Operand(reg_, reg_.Is64Bits() ? UXTX : UXTW, shift_amount_); } -int64_t Operand::immediate() const { - ASSERT(IsImmediate()); +Immediate Operand::immediate() const { + DCHECK(IsImmediate()); return immediate_; } +int64_t Operand::ImmediateValue() const { + DCHECK(IsImmediate()); + return immediate_.value(); +} + + Register Operand::reg() const { - ASSERT(IsShiftedRegister() || IsExtendedRegister()); + DCHECK(IsShiftedRegister() || IsExtendedRegister()); return reg_; } Shift Operand::shift() const { - ASSERT(IsShiftedRegister()); + DCHECK(IsShiftedRegister()); return shift_; } Extend Operand::extend() const { - ASSERT(IsExtendedRegister()); + DCHECK(IsExtendedRegister()); return extend_; } unsigned Operand::shift_amount() const { - ASSERT(IsShiftedRegister() || IsExtendedRegister()); + DCHECK(IsShiftedRegister() || IsExtendedRegister()); return shift_amount_; } Operand Operand::UntagSmi(Register smi) { - ASSERT(smi.Is64Bits()); + STATIC_ASSERT(kXRegSizeInBits == static_cast<unsigned>(kSmiShift + + kSmiValueSize)); + DCHECK(smi.Is64Bits()); return Operand(smi, ASR, kSmiShift); } Operand Operand::UntagSmiAndScale(Register smi, int scale) { - ASSERT(smi.Is64Bits()); - ASSERT((scale >= 0) && (scale <= (64 - kSmiValueSize))); + STATIC_ASSERT(kXRegSizeInBits == static_cast<unsigned>(kSmiShift + + kSmiValueSize)); + DCHECK(smi.Is64Bits()); + DCHECK((scale >= 0) && (scale <= (64 - kSmiValueSize))); if (scale > kSmiShift) { return Operand(smi, LSL, scale - kSmiShift); } else if (scale < kSmiShift) { @@ -420,7 +460,7 @@ MemOperand::MemOperand() MemOperand::MemOperand(Register base, ptrdiff_t offset, AddrMode addrmode) : base_(base), regoffset_(NoReg), offset_(offset), addrmode_(addrmode), shift_(NO_SHIFT), extend_(NO_EXTEND), shift_amount_(0) { - ASSERT(base.Is64Bits() && !base.IsZero()); + DCHECK(base.Is64Bits() && !base.IsZero()); } @@ -430,12 +470,12 @@ MemOperand::MemOperand(Register base, unsigned shift_amount) : base_(base), regoffset_(regoffset), offset_(0), addrmode_(Offset), shift_(NO_SHIFT), extend_(extend), shift_amount_(shift_amount) { - ASSERT(base.Is64Bits() && !base.IsZero()); - ASSERT(!regoffset.IsSP()); - ASSERT((extend == UXTW) || (extend == SXTW) || (extend == SXTX)); + DCHECK(base.Is64Bits() && !base.IsZero()); + DCHECK(!regoffset.IsSP()); + DCHECK((extend == UXTW) || (extend == SXTW) || (extend == SXTX)); // SXTX extend mode requires a 64-bit offset register. 
- ASSERT(regoffset.Is64Bits() || (extend != SXTX)); + DCHECK(regoffset.Is64Bits() || (extend != SXTX)); } @@ -445,22 +485,22 @@ MemOperand::MemOperand(Register base, unsigned shift_amount) : base_(base), regoffset_(regoffset), offset_(0), addrmode_(Offset), shift_(shift), extend_(NO_EXTEND), shift_amount_(shift_amount) { - ASSERT(base.Is64Bits() && !base.IsZero()); - ASSERT(regoffset.Is64Bits() && !regoffset.IsSP()); - ASSERT(shift == LSL); + DCHECK(base.Is64Bits() && !base.IsZero()); + DCHECK(regoffset.Is64Bits() && !regoffset.IsSP()); + DCHECK(shift == LSL); } MemOperand::MemOperand(Register base, const Operand& offset, AddrMode addrmode) : base_(base), addrmode_(addrmode) { - ASSERT(base.Is64Bits() && !base.IsZero()); + DCHECK(base.Is64Bits() && !base.IsZero()); if (offset.IsImmediate()) { - offset_ = offset.immediate(); + offset_ = offset.ImmediateValue(); regoffset_ = NoReg; } else if (offset.IsShiftedRegister()) { - ASSERT(addrmode == Offset); + DCHECK(addrmode == Offset); regoffset_ = offset.reg(); shift_= offset.shift(); @@ -470,11 +510,11 @@ MemOperand::MemOperand(Register base, const Operand& offset, AddrMode addrmode) offset_ = 0; // These assertions match those in the shifted-register constructor. - ASSERT(regoffset_.Is64Bits() && !regoffset_.IsSP()); - ASSERT(shift_ == LSL); + DCHECK(regoffset_.Is64Bits() && !regoffset_.IsSP()); + DCHECK(shift_ == LSL); } else { - ASSERT(offset.IsExtendedRegister()); - ASSERT(addrmode == Offset); + DCHECK(offset.IsExtendedRegister()); + DCHECK(addrmode == Offset); regoffset_ = offset.reg(); extend_ = offset.extend(); @@ -484,9 +524,9 @@ MemOperand::MemOperand(Register base, const Operand& offset, AddrMode addrmode) offset_ = 0; // These assertions match those in the extended-register constructor. - ASSERT(!regoffset_.IsSP()); - ASSERT((extend_ == UXTW) || (extend_ == SXTW) || (extend_ == SXTX)); - ASSERT((regoffset_.Is64Bits() || (extend_ != SXTX))); + DCHECK(!regoffset_.IsSP()); + DCHECK((extend_ == UXTW) || (extend_ == SXTW) || (extend_ == SXTX)); + DCHECK((regoffset_.Is64Bits() || (extend_ != SXTX))); } } @@ -513,7 +553,7 @@ Operand MemOperand::OffsetAsOperand() const { if (IsImmediateOffset()) { return offset(); } else { - ASSERT(IsRegisterOffset()); + DCHECK(IsRegisterOffset()); if (extend() == NO_EXTEND) { return Operand(regoffset(), shift(), shift_amount()); } else { @@ -535,7 +575,7 @@ void Assembler::Unreachable() { Address Assembler::target_pointer_address_at(Address pc) { Instruction* instr = reinterpret_cast<Instruction*>(pc); - ASSERT(instr->IsLdrLiteralX()); + DCHECK(instr->IsLdrLiteralX()); return reinterpret_cast<Address>(instr->ImmPCOffsetTarget()); } @@ -562,11 +602,16 @@ Address Assembler::target_address_from_return_address(Address pc) { Address candidate = pc - 2 * kInstructionSize; Instruction* instr = reinterpret_cast<Instruction*>(candidate); USE(instr); - ASSERT(instr->IsLdrLiteralX()); + DCHECK(instr->IsLdrLiteralX()); return candidate; } +Address Assembler::break_address_from_return_address(Address pc) { + return pc - Assembler::kPatchDebugBreakSlotReturnOffset; +} + + Address Assembler::return_address_from_call_start(Address pc) { // The call, generated by MacroAssembler::Call, is one of two possible // sequences: @@ -590,14 +635,14 @@ Address Assembler::return_address_from_call_start(Address pc) { Instruction* instr = reinterpret_cast<Instruction*>(pc); if (instr->IsMovz()) { // Verify the instruction sequence. 
- ASSERT(instr->following(1)->IsMovk()); - ASSERT(instr->following(2)->IsMovk()); - ASSERT(instr->following(3)->IsBranchAndLinkToRegister()); + DCHECK(instr->following(1)->IsMovk()); + DCHECK(instr->following(2)->IsMovk()); + DCHECK(instr->following(3)->IsBranchAndLinkToRegister()); return pc + Assembler::kCallSizeWithoutRelocation; } else { // Verify the instruction sequence. - ASSERT(instr->IsLdrLiteralX()); - ASSERT(instr->following(1)->IsBranchAndLinkToRegister()); + DCHECK(instr->IsLdrLiteralX()); + DCHECK(instr->following(1)->IsBranchAndLinkToRegister()); return pc + Assembler::kCallSizeWithRelocation; } } @@ -611,11 +656,12 @@ void Assembler::deserialization_set_special_target_at( void Assembler::set_target_address_at(Address pc, ConstantPoolArray* constant_pool, - Address target) { + Address target, + ICacheFlushMode icache_flush_mode) { Memory::Address_at(target_pointer_address_at(pc)) = target; // Intuitively, we would think it is necessary to always flush the // instruction cache after patching a target address in the code as follows: - // CPU::FlushICache(pc, sizeof(target)); + // CpuFeatures::FlushICache(pc, sizeof(target)); // However, on ARM, an instruction is actually patched in the case of // embedded constants of the form: // ldr ip, [pc, #...] @@ -626,9 +672,10 @@ void Assembler::set_target_address_at(Address pc, void Assembler::set_target_address_at(Address pc, Code* code, - Address target) { + Address target, + ICacheFlushMode icache_flush_mode) { ConstantPoolArray* constant_pool = code ? code->constant_pool() : NULL; - set_target_address_at(pc, constant_pool, target); + set_target_address_at(pc, constant_pool, target, icache_flush_mode); } @@ -638,13 +685,13 @@ int RelocInfo::target_address_size() { Address RelocInfo::target_address() { - ASSERT(IsCodeTarget(rmode_) || IsRuntimeEntry(rmode_)); + DCHECK(IsCodeTarget(rmode_) || IsRuntimeEntry(rmode_)); return Assembler::target_address_at(pc_, host_); } Address RelocInfo::target_address_address() { - ASSERT(IsCodeTarget(rmode_) || IsRuntimeEntry(rmode_) + DCHECK(IsCodeTarget(rmode_) || IsRuntimeEntry(rmode_) || rmode_ == EMBEDDED_OBJECT || rmode_ == EXTERNAL_REFERENCE); return Assembler::target_pointer_address_at(pc_); @@ -652,30 +699,32 @@ Address RelocInfo::target_address_address() { Address RelocInfo::constant_pool_entry_address() { - ASSERT(IsInConstantPool()); + DCHECK(IsInConstantPool()); return Assembler::target_pointer_address_at(pc_); } Object* RelocInfo::target_object() { - ASSERT(IsCodeTarget(rmode_) || rmode_ == EMBEDDED_OBJECT); + DCHECK(IsCodeTarget(rmode_) || rmode_ == EMBEDDED_OBJECT); return reinterpret_cast<Object*>(Assembler::target_address_at(pc_, host_)); } Handle<Object> RelocInfo::target_object_handle(Assembler* origin) { - ASSERT(IsCodeTarget(rmode_) || rmode_ == EMBEDDED_OBJECT); + DCHECK(IsCodeTarget(rmode_) || rmode_ == EMBEDDED_OBJECT); return Handle<Object>(reinterpret_cast<Object**>( Assembler::target_address_at(pc_, host_))); } -void RelocInfo::set_target_object(Object* target, WriteBarrierMode mode) { - ASSERT(IsCodeTarget(rmode_) || rmode_ == EMBEDDED_OBJECT); - ASSERT(!target->IsConsString()); +void RelocInfo::set_target_object(Object* target, + WriteBarrierMode write_barrier_mode, + ICacheFlushMode icache_flush_mode) { + DCHECK(IsCodeTarget(rmode_) || rmode_ == EMBEDDED_OBJECT); Assembler::set_target_address_at(pc_, host_, - reinterpret_cast<Address>(target)); - if (mode == UPDATE_WRITE_BARRIER && + reinterpret_cast<Address>(target), + icache_flush_mode); + if (write_barrier_mode == 
UPDATE_WRITE_BARRIER && host() != NULL && target->IsHeapObject()) { host()->GetHeap()->incremental_marking()->RecordWrite( @@ -685,21 +734,24 @@ void RelocInfo::set_target_object(Object* target, WriteBarrierMode mode) { Address RelocInfo::target_reference() { - ASSERT(rmode_ == EXTERNAL_REFERENCE); + DCHECK(rmode_ == EXTERNAL_REFERENCE); return Assembler::target_address_at(pc_, host_); } Address RelocInfo::target_runtime_entry(Assembler* origin) { - ASSERT(IsRuntimeEntry(rmode_)); + DCHECK(IsRuntimeEntry(rmode_)); return target_address(); } void RelocInfo::set_target_runtime_entry(Address target, - WriteBarrierMode mode) { - ASSERT(IsRuntimeEntry(rmode_)); - if (target_address() != target) set_target_address(target, mode); + WriteBarrierMode write_barrier_mode, + ICacheFlushMode icache_flush_mode) { + DCHECK(IsRuntimeEntry(rmode_)); + if (target_address() != target) { + set_target_address(target, write_barrier_mode, icache_flush_mode); + } } @@ -711,12 +763,14 @@ Handle<Cell> RelocInfo::target_cell_handle() { Cell* RelocInfo::target_cell() { - ASSERT(rmode_ == RelocInfo::CELL); + DCHECK(rmode_ == RelocInfo::CELL); return Cell::FromValueAddress(Memory::Address_at(pc_)); } -void RelocInfo::set_target_cell(Cell* cell, WriteBarrierMode mode) { +void RelocInfo::set_target_cell(Cell* cell, + WriteBarrierMode write_barrier_mode, + ICacheFlushMode icache_flush_mode) { UNIMPLEMENTED(); } @@ -732,16 +786,17 @@ Handle<Object> RelocInfo::code_age_stub_handle(Assembler* origin) { Code* RelocInfo::code_age_stub() { - ASSERT(rmode_ == RelocInfo::CODE_AGE_SEQUENCE); + DCHECK(rmode_ == RelocInfo::CODE_AGE_SEQUENCE); // Read the stub entry point from the code age sequence. Address stub_entry_address = pc_ + kCodeAgeStubEntryOffset; return Code::GetCodeFromTargetAddress(Memory::Address_at(stub_entry_address)); } -void RelocInfo::set_code_age_stub(Code* stub) { - ASSERT(rmode_ == RelocInfo::CODE_AGE_SEQUENCE); - ASSERT(!Code::IsYoungSequence(stub->GetIsolate(), pc_)); +void RelocInfo::set_code_age_stub(Code* stub, + ICacheFlushMode icache_flush_mode) { + DCHECK(rmode_ == RelocInfo::CODE_AGE_SEQUENCE); + DCHECK(!Code::IsYoungSequence(stub->GetIsolate(), pc_)); // Overwrite the stub entry point in the code age sequence. This is loaded as // a literal so there is no need to call FlushICache here. Address stub_entry_address = pc_ + kCodeAgeStubEntryOffset; @@ -750,7 +805,7 @@ void RelocInfo::set_code_age_stub(Code* stub) { Address RelocInfo::call_address() { - ASSERT((IsJSReturn(rmode()) && IsPatchedReturnSequence()) || + DCHECK((IsJSReturn(rmode()) && IsPatchedReturnSequence()) || (IsDebugBreakSlot(rmode()) && IsPatchedDebugBreakSlotSequence())); // For the above sequences the Relocinfo points to the load literal loading // the call address. 
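The pattern behind the signature changes above: set_target_address, set_target_object and the other RelocInfo patching entry points now take explicit WriteBarrierMode and ICacheFlushMode arguments, so a caller patching several sites in a row can skip the per-site write barrier and instruction-cache flush and perform both once at the end. A minimal standalone sketch of that shape, under stated assumptions: UPDATE_WRITE_BARRIER and ICacheFlushMode appear in the hunks above, but the SKIP_*/FLUSH_ICACHE_IF_NEEDED enumerator names and the helper bodies here are illustrative stand-ins, not V8's real implementation.

#include <cstddef>
#include <cstdint>
#include <cstring>

typedef uint8_t* Address;  // stand-in for V8's Address

// Stubs for the expensive per-site operations the mode arguments let
// batch callers defer.
static void FlushICache(Address /*start*/, size_t /*size*/) {}
static void RecordWriteIntoCode(Address /*site*/, Address /*target*/) {}

enum WriteBarrierMode { UPDATE_WRITE_BARRIER, SKIP_WRITE_BARRIER };
enum ICacheFlushMode { FLUSH_ICACHE_IF_NEEDED, SKIP_ICACHE_FLUSH };

// Patch one code site. A caller rewriting many sites can pass the SKIP_*
// modes here and issue a single barrier plus a single flush afterwards.
static void PatchTargetAddress(Address pc, Address target,
                               WriteBarrierMode wb = UPDATE_WRITE_BARRIER,
                               ICacheFlushMode ic = FLUSH_ICACHE_IF_NEEDED) {
  std::memcpy(pc, &target, sizeof(target));
  if (ic == FLUSH_ICACHE_IF_NEEDED) FlushICache(pc, sizeof(target));
  if (wb == UPDATE_WRITE_BARRIER) RecordWriteIntoCode(pc, target);
}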
@@ -759,7 +814,7 @@ Address RelocInfo::call_address() { void RelocInfo::set_call_address(Address target) { - ASSERT((IsJSReturn(rmode()) && IsPatchedReturnSequence()) || + DCHECK((IsJSReturn(rmode()) && IsPatchedReturnSequence()) || (IsDebugBreakSlot(rmode()) && IsPatchedDebugBreakSlotSequence())); Assembler::set_target_address_at(pc_, host_, target); if (host() != NULL) { @@ -771,7 +826,7 @@ void RelocInfo::set_call_address(Address target) { void RelocInfo::WipeOut() { - ASSERT(IsEmbeddedObject(rmode_) || + DCHECK(IsEmbeddedObject(rmode_) || IsCodeTarget(rmode_) || IsRuntimeEntry(rmode_) || IsExternalReference(rmode_)); @@ -843,11 +898,11 @@ void RelocInfo::Visit(Heap* heap) { LoadStoreOp Assembler::LoadOpFor(const CPURegister& rt) { - ASSERT(rt.IsValid()); + DCHECK(rt.IsValid()); if (rt.IsRegister()) { return rt.Is64Bits() ? LDR_x : LDR_w; } else { - ASSERT(rt.IsFPRegister()); + DCHECK(rt.IsFPRegister()); return rt.Is64Bits() ? LDR_d : LDR_s; } } @@ -855,23 +910,23 @@ LoadStoreOp Assembler::LoadOpFor(const CPURegister& rt) { LoadStorePairOp Assembler::LoadPairOpFor(const CPURegister& rt, const CPURegister& rt2) { - ASSERT(AreSameSizeAndType(rt, rt2)); + DCHECK(AreSameSizeAndType(rt, rt2)); USE(rt2); if (rt.IsRegister()) { return rt.Is64Bits() ? LDP_x : LDP_w; } else { - ASSERT(rt.IsFPRegister()); + DCHECK(rt.IsFPRegister()); return rt.Is64Bits() ? LDP_d : LDP_s; } } LoadStoreOp Assembler::StoreOpFor(const CPURegister& rt) { - ASSERT(rt.IsValid()); + DCHECK(rt.IsValid()); if (rt.IsRegister()) { return rt.Is64Bits() ? STR_x : STR_w; } else { - ASSERT(rt.IsFPRegister()); + DCHECK(rt.IsFPRegister()); return rt.Is64Bits() ? STR_d : STR_s; } } @@ -879,12 +934,12 @@ LoadStoreOp Assembler::StoreOpFor(const CPURegister& rt) { LoadStorePairOp Assembler::StorePairOpFor(const CPURegister& rt, const CPURegister& rt2) { - ASSERT(AreSameSizeAndType(rt, rt2)); + DCHECK(AreSameSizeAndType(rt, rt2)); USE(rt2); if (rt.IsRegister()) { return rt.Is64Bits() ? STP_x : STP_w; } else { - ASSERT(rt.IsFPRegister()); + DCHECK(rt.IsFPRegister()); return rt.Is64Bits() ? STP_d : STP_s; } } @@ -892,12 +947,12 @@ LoadStorePairOp Assembler::StorePairOpFor(const CPURegister& rt, LoadStorePairNonTemporalOp Assembler::LoadPairNonTemporalOpFor( const CPURegister& rt, const CPURegister& rt2) { - ASSERT(AreSameSizeAndType(rt, rt2)); + DCHECK(AreSameSizeAndType(rt, rt2)); USE(rt2); if (rt.IsRegister()) { return rt.Is64Bits() ? LDNP_x : LDNP_w; } else { - ASSERT(rt.IsFPRegister()); + DCHECK(rt.IsFPRegister()); return rt.Is64Bits() ? LDNP_d : LDNP_s; } } @@ -905,21 +960,31 @@ LoadStorePairNonTemporalOp Assembler::LoadPairNonTemporalOpFor( LoadStorePairNonTemporalOp Assembler::StorePairNonTemporalOpFor( const CPURegister& rt, const CPURegister& rt2) { - ASSERT(AreSameSizeAndType(rt, rt2)); + DCHECK(AreSameSizeAndType(rt, rt2)); USE(rt2); if (rt.IsRegister()) { return rt.Is64Bits() ? STNP_x : STNP_w; } else { - ASSERT(rt.IsFPRegister()); + DCHECK(rt.IsFPRegister()); return rt.Is64Bits() ? STNP_d : STNP_s; } } +LoadLiteralOp Assembler::LoadLiteralOpFor(const CPURegister& rt) { + if (rt.IsRegister()) { + return rt.Is64Bits() ? LDR_x_lit : LDR_w_lit; + } else { + DCHECK(rt.IsFPRegister()); + return rt.Is64Bits() ? 
LDR_d_lit : LDR_s_lit; + } +} + + int Assembler::LinkAndGetInstructionOffsetTo(Label* label) { - ASSERT(kStartOfLabelLinkChain == 0); + DCHECK(kStartOfLabelLinkChain == 0); int offset = LinkAndGetByteOffsetTo(label); - ASSERT(IsAligned(offset, kInstructionSize)); + DCHECK(IsAligned(offset, kInstructionSize)); return offset >> kInstructionSizeLog2; } @@ -974,7 +1039,7 @@ Instr Assembler::ImmTestBranch(int imm14) { Instr Assembler::ImmTestBranchBit(unsigned bit_pos) { - ASSERT(is_uint6(bit_pos)); + DCHECK(is_uint6(bit_pos)); // Subtract five from the shift offset, as we need bit 5 from bit_pos. unsigned b5 = bit_pos << (ImmTestBranchBit5_offset - 5); unsigned b40 = bit_pos << ImmTestBranchBit40_offset; @@ -990,7 +1055,7 @@ Instr Assembler::SF(Register rd) { Instr Assembler::ImmAddSub(int64_t imm) { - ASSERT(IsImmAddSub(imm)); + DCHECK(IsImmAddSub(imm)); if (is_uint12(imm)) { // No shift required. return imm << ImmAddSub_offset; } else { @@ -1000,7 +1065,7 @@ Instr Assembler::ImmAddSub(int64_t imm) { Instr Assembler::ImmS(unsigned imms, unsigned reg_size) { - ASSERT(((reg_size == kXRegSizeInBits) && is_uint6(imms)) || + DCHECK(((reg_size == kXRegSizeInBits) && is_uint6(imms)) || ((reg_size == kWRegSizeInBits) && is_uint5(imms))); USE(reg_size); return imms << ImmS_offset; @@ -1008,26 +1073,26 @@ Instr Assembler::ImmS(unsigned imms, unsigned reg_size) { Instr Assembler::ImmR(unsigned immr, unsigned reg_size) { - ASSERT(((reg_size == kXRegSizeInBits) && is_uint6(immr)) || + DCHECK(((reg_size == kXRegSizeInBits) && is_uint6(immr)) || ((reg_size == kWRegSizeInBits) && is_uint5(immr))); USE(reg_size); - ASSERT(is_uint6(immr)); + DCHECK(is_uint6(immr)); return immr << ImmR_offset; } Instr Assembler::ImmSetBits(unsigned imms, unsigned reg_size) { - ASSERT((reg_size == kWRegSizeInBits) || (reg_size == kXRegSizeInBits)); - ASSERT(is_uint6(imms)); - ASSERT((reg_size == kXRegSizeInBits) || is_uint6(imms + 3)); + DCHECK((reg_size == kWRegSizeInBits) || (reg_size == kXRegSizeInBits)); + DCHECK(is_uint6(imms)); + DCHECK((reg_size == kXRegSizeInBits) || is_uint6(imms + 3)); USE(reg_size); return imms << ImmSetBits_offset; } Instr Assembler::ImmRotate(unsigned immr, unsigned reg_size) { - ASSERT((reg_size == kWRegSizeInBits) || (reg_size == kXRegSizeInBits)); - ASSERT(((reg_size == kXRegSizeInBits) && is_uint6(immr)) || + DCHECK((reg_size == kWRegSizeInBits) || (reg_size == kXRegSizeInBits)); + DCHECK(((reg_size == kXRegSizeInBits) && is_uint6(immr)) || ((reg_size == kWRegSizeInBits) && is_uint5(immr))); USE(reg_size); return immr << ImmRotate_offset; @@ -1041,21 +1106,21 @@ Instr Assembler::ImmLLiteral(int imm19) { Instr Assembler::BitN(unsigned bitn, unsigned reg_size) { - ASSERT((reg_size == kWRegSizeInBits) || (reg_size == kXRegSizeInBits)); - ASSERT((reg_size == kXRegSizeInBits) || (bitn == 0)); + DCHECK((reg_size == kWRegSizeInBits) || (reg_size == kXRegSizeInBits)); + DCHECK((reg_size == kXRegSizeInBits) || (bitn == 0)); USE(reg_size); return bitn << BitN_offset; } Instr Assembler::ShiftDP(Shift shift) { - ASSERT(shift == LSL || shift == LSR || shift == ASR || shift == ROR); + DCHECK(shift == LSL || shift == LSR || shift == ASR || shift == ROR); return shift << ShiftDP_offset; } Instr Assembler::ImmDPShift(unsigned amount) { - ASSERT(is_uint6(amount)); + DCHECK(is_uint6(amount)); return amount << ImmDPShift_offset; } @@ -1066,13 +1131,13 @@ Instr Assembler::ExtendMode(Extend extend) { Instr Assembler::ImmExtendShift(unsigned left_shift) { - ASSERT(left_shift <= 4); + DCHECK(left_shift <= 4); return 
left_shift << ImmExtendShift_offset; } Instr Assembler::ImmCondCmp(unsigned imm) { - ASSERT(is_uint5(imm)); + DCHECK(is_uint5(imm)); return imm << ImmCondCmp_offset; } @@ -1083,75 +1148,75 @@ Instr Assembler::Nzcv(StatusFlags nzcv) { Instr Assembler::ImmLSUnsigned(int imm12) { - ASSERT(is_uint12(imm12)); + DCHECK(is_uint12(imm12)); return imm12 << ImmLSUnsigned_offset; } Instr Assembler::ImmLS(int imm9) { - ASSERT(is_int9(imm9)); + DCHECK(is_int9(imm9)); return truncate_to_int9(imm9) << ImmLS_offset; } Instr Assembler::ImmLSPair(int imm7, LSDataSize size) { - ASSERT(((imm7 >> size) << size) == imm7); + DCHECK(((imm7 >> size) << size) == imm7); int scaled_imm7 = imm7 >> size; - ASSERT(is_int7(scaled_imm7)); + DCHECK(is_int7(scaled_imm7)); return truncate_to_int7(scaled_imm7) << ImmLSPair_offset; } Instr Assembler::ImmShiftLS(unsigned shift_amount) { - ASSERT(is_uint1(shift_amount)); + DCHECK(is_uint1(shift_amount)); return shift_amount << ImmShiftLS_offset; } Instr Assembler::ImmException(int imm16) { - ASSERT(is_uint16(imm16)); + DCHECK(is_uint16(imm16)); return imm16 << ImmException_offset; } Instr Assembler::ImmSystemRegister(int imm15) { - ASSERT(is_uint15(imm15)); + DCHECK(is_uint15(imm15)); return imm15 << ImmSystemRegister_offset; } Instr Assembler::ImmHint(int imm7) { - ASSERT(is_uint7(imm7)); + DCHECK(is_uint7(imm7)); return imm7 << ImmHint_offset; } Instr Assembler::ImmBarrierDomain(int imm2) { - ASSERT(is_uint2(imm2)); + DCHECK(is_uint2(imm2)); return imm2 << ImmBarrierDomain_offset; } Instr Assembler::ImmBarrierType(int imm2) { - ASSERT(is_uint2(imm2)); + DCHECK(is_uint2(imm2)); return imm2 << ImmBarrierType_offset; } LSDataSize Assembler::CalcLSDataSize(LoadStoreOp op) { - ASSERT((SizeLS_offset + SizeLS_width) == (kInstructionSize * 8)); + DCHECK((SizeLS_offset + SizeLS_width) == (kInstructionSize * 8)); return static_cast<LSDataSize>(op >> SizeLS_offset); } Instr Assembler::ImmMoveWide(uint64_t imm) { - ASSERT(is_uint16(imm)); + DCHECK(is_uint16(imm)); return imm << ImmMoveWide_offset; } Instr Assembler::ShiftMoveWide(int64_t shift) { - ASSERT(is_uint2(shift)); + DCHECK(is_uint2(shift)); return shift << ShiftMoveWide_offset; } @@ -1162,7 +1227,7 @@ Instr Assembler::FPType(FPRegister fd) { Instr Assembler::FPScale(unsigned scale) { - ASSERT(is_uint6(scale)); + DCHECK(is_uint6(scale)); return scale << FPScale_offset; } @@ -1172,13 +1237,8 @@ const Register& Assembler::AppropriateZeroRegFor(const CPURegister& reg) const { } -void Assembler::LoadRelocated(const CPURegister& rt, const Operand& operand) { - LoadRelocatedValue(rt, operand, LDR_x_lit); -} - - inline void Assembler::CheckBufferSpace() { - ASSERT(pc_ < (buffer_ + buffer_size_)); + DCHECK(pc_ < (buffer_ + buffer_size_)); if (buffer_space() < kGap) { GrowBuffer(); } @@ -1197,7 +1257,7 @@ inline void Assembler::CheckBuffer() { TypeFeedbackId Assembler::RecordedAstId() { - ASSERT(!recorded_ast_id_.IsNone()); + DCHECK(!recorded_ast_id_.IsNone()); return recorded_ast_id_; } diff --git a/deps/v8/src/arm64/assembler-arm64.cc b/deps/v8/src/arm64/assembler-arm64.cc index 14f4145578f..7f86e14a777 100644 --- a/deps/v8/src/arm64/assembler-arm64.cc +++ b/deps/v8/src/arm64/assembler-arm64.cc @@ -26,49 +26,64 @@ // (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE // OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. 
-#include "v8.h" +#include "src/v8.h" #if V8_TARGET_ARCH_ARM64 #define ARM64_DEFINE_REG_STATICS -#include "arm64/assembler-arm64-inl.h" +#include "src/arm64/assembler-arm64-inl.h" +#include "src/base/cpu.h" namespace v8 { namespace internal { // ----------------------------------------------------------------------------- -// CpuFeatures utilities (for V8 compatibility). +// CpuFeatures implementation. -ExternalReference ExternalReference::cpu_features() { - return ExternalReference(&CpuFeatures::supported_); +void CpuFeatures::ProbeImpl(bool cross_compile) { + if (cross_compile) { + // Always align csp in cross compiled code - this is safe and ensures that + // csp will always be aligned if it is enabled by probing at runtime. + if (FLAG_enable_always_align_csp) supported_ |= 1u << ALWAYS_ALIGN_CSP; + } else { + base::CPU cpu; + if (FLAG_enable_always_align_csp && + (cpu.implementer() == base::CPU::NVIDIA || FLAG_debug_code)) { + supported_ |= 1u << ALWAYS_ALIGN_CSP; + } + } } +void CpuFeatures::PrintTarget() { } +void CpuFeatures::PrintFeatures() { } + + // ----------------------------------------------------------------------------- // CPURegList utilities. CPURegister CPURegList::PopLowestIndex() { - ASSERT(IsValid()); + DCHECK(IsValid()); if (IsEmpty()) { return NoCPUReg; } int index = CountTrailingZeros(list_, kRegListSizeInBits); - ASSERT((1 << index) & list_); + DCHECK((1 << index) & list_); Remove(index); return CPURegister::Create(index, size_, type_); } CPURegister CPURegList::PopHighestIndex() { - ASSERT(IsValid()); + DCHECK(IsValid()); if (IsEmpty()) { return NoCPUReg; } int index = CountLeadingZeros(list_, kRegListSizeInBits); index = kRegListSizeInBits - 1 - index; - ASSERT((1 << index) & list_); + DCHECK((1 << index) & list_); Remove(index); return CPURegister::Create(index, size_, type_); } @@ -80,8 +95,8 @@ void CPURegList::RemoveCalleeSaved() { } else if (type() == CPURegister::kFPRegister) { Remove(GetCalleeSavedFP(RegisterSizeInBits())); } else { - ASSERT(type() == CPURegister::kNoRegister); - ASSERT(IsEmpty()); + DCHECK(type() == CPURegister::kNoRegister); + DCHECK(IsEmpty()); // The list must already be empty, so do nothing. } } @@ -176,7 +191,7 @@ void RelocInfo::PatchCode(byte* instructions, int instruction_count) { } // Indicate that code has changed. 
- CPU::FlushICache(pc_, instruction_count * kInstructionSize); + CpuFeatures::FlushICache(pc_, instruction_count * kInstructionSize); } @@ -212,7 +227,7 @@ bool AreAliased(const CPURegister& reg1, const CPURegister& reg2, const CPURegister regs[] = {reg1, reg2, reg3, reg4, reg5, reg6, reg7, reg8}; - for (unsigned i = 0; i < sizeof(regs) / sizeof(regs[0]); i++) { + for (unsigned i = 0; i < ARRAY_SIZE(regs); i++) { if (regs[i].IsRegister()) { number_of_valid_regs++; unique_regs |= regs[i].Bit(); @@ -220,7 +235,7 @@ bool AreAliased(const CPURegister& reg1, const CPURegister& reg2, number_of_valid_fpregs++; unique_fpregs |= regs[i].Bit(); } else { - ASSERT(!regs[i].IsValid()); + DCHECK(!regs[i].IsValid()); } } @@ -229,8 +244,8 @@ bool AreAliased(const CPURegister& reg1, const CPURegister& reg2, int number_of_unique_fpregs = CountSetBits(unique_fpregs, sizeof(unique_fpregs) * kBitsPerByte); - ASSERT(number_of_valid_regs >= number_of_unique_regs); - ASSERT(number_of_valid_fpregs >= number_of_unique_fpregs); + DCHECK(number_of_valid_regs >= number_of_unique_regs); + DCHECK(number_of_valid_fpregs >= number_of_unique_fpregs); return (number_of_valid_regs != number_of_unique_regs) || (number_of_valid_fpregs != number_of_unique_fpregs); @@ -241,7 +256,7 @@ bool AreSameSizeAndType(const CPURegister& reg1, const CPURegister& reg2, const CPURegister& reg3, const CPURegister& reg4, const CPURegister& reg5, const CPURegister& reg6, const CPURegister& reg7, const CPURegister& reg8) { - ASSERT(reg1.IsValid()); + DCHECK(reg1.IsValid()); bool match = true; match &= !reg2.IsValid() || reg2.IsSameSizeAndType(reg1); match &= !reg3.IsValid() || reg3.IsSameSizeAndType(reg1); @@ -254,36 +269,285 @@ bool AreSameSizeAndType(const CPURegister& reg1, const CPURegister& reg2, } -void Operand::initialize_handle(Handle<Object> handle) { +void Immediate::InitializeHandle(Handle<Object> handle) { AllowDeferredHandleDereference using_raw_address; // Verify all Objects referred by code are NOT in new space. Object* obj = *handle; if (obj->IsHeapObject()) { - ASSERT(!HeapObject::cast(obj)->GetHeap()->InNewSpace(obj)); - immediate_ = reinterpret_cast<intptr_t>(handle.location()); + DCHECK(!HeapObject::cast(obj)->GetHeap()->InNewSpace(obj)); + value_ = reinterpret_cast<intptr_t>(handle.location()); rmode_ = RelocInfo::EMBEDDED_OBJECT; } else { STATIC_ASSERT(sizeof(intptr_t) == sizeof(int64_t)); - immediate_ = reinterpret_cast<intptr_t>(obj); + value_ = reinterpret_cast<intptr_t>(obj); rmode_ = RelocInfo::NONE64; } } -bool Operand::NeedsRelocation(Isolate* isolate) const { - if (rmode_ == RelocInfo::EXTERNAL_REFERENCE) { - return Serializer::enabled(isolate); +bool Operand::NeedsRelocation(const Assembler* assembler) const { + RelocInfo::Mode rmode = immediate_.rmode(); + + if (rmode == RelocInfo::EXTERNAL_REFERENCE) { + return assembler->serializer_enabled(); } - return !RelocInfo::IsNone(rmode_); + return !RelocInfo::IsNone(rmode); } -// Assembler +// Constant Pool. 
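The ConstPool class introduced below centralizes what used to be ad-hoc pending-relocation bookkeeping in the Assembler: entries are recorded with their pc offset, shareable constants are deduplicated in a multimap, and the pool is emitted behind an optional branch with a marker (ldr xzr, #size), a crash guard (blr xzr) and, when needed, an alignment nop. As a back-of-the-envelope check of the size accounting, here is a sketch mirroring SizeIfEmittedAtCurrentPc() below; the constants match the code that follows, but the function itself is only an illustration.

#include <cstdio>

static const int kInstructionSize = 4;  // arm64 instructions are 32-bit
static const int kPointerSize = 8;      // all pool entries are 64-bit for now

// Sketch of ConstPool::SizeIfEmittedAtCurrentPc(): optional branch over the
// pool, marker + guard, an alignment nop if the entries would not start on a
// 64-bit boundary, then the entries themselves.
static int PoolSizeAt(int pc_offset, int entry_count, bool require_jump) {
  if (entry_count == 0) return 0;
  int prologue = (require_jump ? kInstructionSize : 0) + 2 * kInstructionSize;
  if ((pc_offset + prologue) % 8 != 0) prologue += kInstructionSize;  // nop
  return prologue + entry_count * kPointerSize;
}

int main() {
  // Three entries emitted at pc offset 0 with a jump over the pool:
  // 4 (b) + 8 (ldr xzr / blr xzr) + 4 (nop) + 3 * 8 = 40 bytes.
  std::printf("%d\n", PoolSizeAt(0, 3, true));
  return 0;
}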
+void ConstPool::RecordEntry(intptr_t data, + RelocInfo::Mode mode) { + DCHECK(mode != RelocInfo::COMMENT && + mode != RelocInfo::POSITION && + mode != RelocInfo::STATEMENT_POSITION && + mode != RelocInfo::CONST_POOL && + mode != RelocInfo::VENEER_POOL && + mode != RelocInfo::CODE_AGE_SEQUENCE); + + uint64_t raw_data = static_cast<uint64_t>(data); + int offset = assm_->pc_offset(); + if (IsEmpty()) { + first_use_ = offset; + } + + std::pair<uint64_t, int> entry = std::make_pair(raw_data, offset); + if (CanBeShared(mode)) { + shared_entries_.insert(entry); + if (shared_entries_.count(entry.first) == 1) { + shared_entries_count++; + } + } else { + unique_entries_.push_back(entry); + } + + if (EntryCount() > Assembler::kApproxMaxPoolEntryCount) { + // Request constant pool emission after the next instruction. + assm_->SetNextConstPoolCheckIn(1); + } +} + + +int ConstPool::DistanceToFirstUse() { + DCHECK(first_use_ >= 0); + return assm_->pc_offset() - first_use_; +} + + +int ConstPool::MaxPcOffset() { + // There are no pending entries in the pool so we can never get out of + // range. + if (IsEmpty()) return kMaxInt; + + // Entries are not necessarily emitted in the order they are added so in the + // worst case the first constant pool use will be accessing the last entry. + return first_use_ + kMaxLoadLiteralRange - WorstCaseSize(); +} + + +int ConstPool::WorstCaseSize() { + if (IsEmpty()) return 0; + + // Max size prologue: + // b over + // ldr xzr, #pool_size + // blr xzr + // nop + // All entries are 64-bit for now. + return 4 * kInstructionSize + EntryCount() * kPointerSize; +} + + +int ConstPool::SizeIfEmittedAtCurrentPc(bool require_jump) { + if (IsEmpty()) return 0; + + // Prologue is: + // b over ;; if require_jump + // ldr xzr, #pool_size + // blr xzr + // nop ;; if not 64-bit aligned + int prologue_size = require_jump ? kInstructionSize : 0; + prologue_size += 2 * kInstructionSize; + prologue_size += IsAligned(assm_->pc_offset() + prologue_size, 8) ? + 0 : kInstructionSize; + + // All entries are 64-bit for now. + return prologue_size + EntryCount() * kPointerSize; +} + + +void ConstPool::Emit(bool require_jump) { + DCHECK(!assm_->is_const_pool_blocked()); + // Prevent recursive pool emission and protect from veneer pools. + Assembler::BlockPoolsScope block_pools(assm_); + + int size = SizeIfEmittedAtCurrentPc(require_jump); + Label size_check; + assm_->bind(&size_check); + + assm_->RecordConstPool(size); + // Emit the constant pool. It is preceded by an optional branch if + // require_jump and a header which will: + // 1) Encode the size of the constant pool, for use by the disassembler. + // 2) Terminate the program, to try to prevent execution from accidentally + // flowing into the constant pool. + // 3) align the pool entries to 64-bit. + // The header is therefore made of up to three arm64 instructions: + // ldr xzr, #<size of the constant pool in 32-bit words> + // blr xzr + // nop + // + // If executed, the header will likely segfault and lr will point to the + // instruction following the offending blr. + // TODO(all): Make the alignment part less fragile. Currently code is + // allocated as a byte array so there are no guarantees the alignment will + // be preserved on compaction. Currently it works as allocation seems to be + // 64-bit aligned. + + // Emit branch if required + Label after_pool; + if (require_jump) { + assm_->b(&after_pool); + } + + // Emit the header. 
+ assm_->RecordComment("[ Constant Pool"); + EmitMarker(); + EmitGuard(); + assm_->Align(8); + + // Emit constant pool entries. + // TODO(all): currently each relocated constant is 64 bits, consider adding + // support for 32-bit entries. + EmitEntries(); + assm_->RecordComment("]"); + + if (after_pool.is_linked()) { + assm_->bind(&after_pool); + } + + DCHECK(assm_->SizeOfCodeGeneratedSince(&size_check) == + static_cast<unsigned>(size)); +} + + +void ConstPool::Clear() { + shared_entries_.clear(); + shared_entries_count = 0; + unique_entries_.clear(); + first_use_ = -1; +} + + +bool ConstPool::CanBeShared(RelocInfo::Mode mode) { + // Constant pool currently does not support 32-bit entries. + DCHECK(mode != RelocInfo::NONE32); + + return RelocInfo::IsNone(mode) || + (!assm_->serializer_enabled() && (mode >= RelocInfo::CELL)); +} + +void ConstPool::EmitMarker() { + // A constant pool size is expressed in number of 32-bits words. + // Currently all entries are 64-bit. + // + 1 is for the crash guard. + // + 0/1 for alignment. + int word_count = EntryCount() * 2 + 1 + + (IsAligned(assm_->pc_offset(), 8) ? 0 : 1); + assm_->Emit(LDR_x_lit | + Assembler::ImmLLiteral(word_count) | + Assembler::Rt(xzr)); +} + + +MemOperand::PairResult MemOperand::AreConsistentForPair( + const MemOperand& operandA, + const MemOperand& operandB, + int access_size_log2) { + DCHECK(access_size_log2 >= 0); + DCHECK(access_size_log2 <= 3); + // Step one: check that they share the same base, that the mode is Offset + // and that the offset is a multiple of access size. + if (!operandA.base().Is(operandB.base()) || + (operandA.addrmode() != Offset) || + (operandB.addrmode() != Offset) || + ((operandA.offset() & ((1 << access_size_log2) - 1)) != 0)) { + return kNotPair; + } + // Step two: check that the offsets are contiguous and that the range + // is OK for ldp/stp. + if ((operandB.offset() == operandA.offset() + (1 << access_size_log2)) && + is_int7(operandA.offset() >> access_size_log2)) { + return kPairAB; + } + if ((operandA.offset() == operandB.offset() + (1 << access_size_log2)) && + is_int7(operandB.offset() >> access_size_log2)) { + return kPairBA; + } + return kNotPair; +} + + +void ConstPool::EmitGuard() { +#ifdef DEBUG + Instruction* instr = reinterpret_cast<Instruction*>(assm_->pc()); + DCHECK(instr->preceding()->IsLdrLiteralX() && + instr->preceding()->Rt() == xzr.code()); +#endif + assm_->EmitPoolGuard(); +} + + +void ConstPool::EmitEntries() { + DCHECK(IsAligned(assm_->pc_offset(), 8)); + + typedef std::multimap<uint64_t, int>::const_iterator SharedEntriesIterator; + SharedEntriesIterator value_it; + // Iterate through the keys (constant pool values). + for (value_it = shared_entries_.begin(); + value_it != shared_entries_.end(); + value_it = shared_entries_.upper_bound(value_it->first)) { + std::pair<SharedEntriesIterator, SharedEntriesIterator> range; + uint64_t data = value_it->first; + range = shared_entries_.equal_range(data); + SharedEntriesIterator offset_it; + // Iterate through the offsets of a given key. + for (offset_it = range.first; offset_it != range.second; offset_it++) { + Instruction* instr = assm_->InstructionAt(offset_it->second); + + // Instruction to patch must be 'ldr rd, [pc, #offset]' with offset == 0. + DCHECK(instr->IsLdrLiteral() && instr->ImmLLiteral() == 0); + instr->SetImmPCOffsetTarget(assm_->pc()); + } + assm_->dc64(data); + } + shared_entries_.clear(); + shared_entries_count = 0; + + // Emit unique entries. 
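AreConsistentForPair() above encodes the ldp/stp fusion rules: same base register, plain Offset addressing, alignment to the access size, adjacency, and a scaled offset that fits the instructions' 7-bit signed immediate. Collapsed to a boolean and stripped of the register types, the test looks like this (IsInt7 reimplemented here for self-containment):

    static bool IsInt7(long long x) { return (x >= -64) && (x <= 63); }

    static bool CanPair(int base_a, long long off_a,
                        int base_b, long long off_b, int access_size_log2) {
      long long step = 1LL << access_size_log2;
      // Same base, and the first offset aligned to the access size.
      if ((base_a != base_b) || ((off_a & (step - 1)) != 0)) return false;
      // Contiguous in either order, with the scaled offset in ldp/stp range.
      if (off_b == off_a + step) return IsInt7(off_a >> access_size_log2);
      if (off_a == off_b + step) return IsInt7(off_b >> access_size_log2);
      return false;
    }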
+ std::vector<std::pair<uint64_t, int> >::const_iterator unique_it; + for (unique_it = unique_entries_.begin(); + unique_it != unique_entries_.end(); + unique_it++) { + Instruction* instr = assm_->InstructionAt(unique_it->second); + + // Instruction to patch must be 'ldr rd, [pc, #offset]' with offset == 0. + DCHECK(instr->IsLdrLiteral() && instr->ImmLLiteral() == 0); + instr->SetImmPCOffsetTarget(assm_->pc()); + assm_->dc64(unique_it->first); + } + unique_entries_.clear(); + first_use_ = -1; +} + + +// Assembler Assembler::Assembler(Isolate* isolate, void* buffer, int buffer_size) : AssemblerBase(isolate, buffer, buffer_size), + constpool_(this), recorded_ast_id_(TypeFeedbackId::None()), unresolved_branches_(), positions_recorder_(this) { @@ -294,28 +558,27 @@ Assembler::Assembler(Isolate* isolate, void* buffer, int buffer_size) Assembler::~Assembler() { - ASSERT(num_pending_reloc_info_ == 0); - ASSERT(const_pool_blocked_nesting_ == 0); - ASSERT(veneer_pool_blocked_nesting_ == 0); + DCHECK(constpool_.IsEmpty()); + DCHECK(const_pool_blocked_nesting_ == 0); + DCHECK(veneer_pool_blocked_nesting_ == 0); } void Assembler::Reset() { #ifdef DEBUG - ASSERT((pc_ >= buffer_) && (pc_ < buffer_ + buffer_size_)); - ASSERT(const_pool_blocked_nesting_ == 0); - ASSERT(veneer_pool_blocked_nesting_ == 0); - ASSERT(unresolved_branches_.empty()); + DCHECK((pc_ >= buffer_) && (pc_ < buffer_ + buffer_size_)); + DCHECK(const_pool_blocked_nesting_ == 0); + DCHECK(veneer_pool_blocked_nesting_ == 0); + DCHECK(unresolved_branches_.empty()); memset(buffer_, 0, pc_ - buffer_); #endif pc_ = buffer_; reloc_info_writer.Reposition(reinterpret_cast<byte*>(buffer_ + buffer_size_), reinterpret_cast<byte*>(pc_)); - num_pending_reloc_info_ = 0; + constpool_.Clear(); next_constant_pool_check_ = 0; next_veneer_pool_check_ = kMaxInt; no_const_pool_before_ = 0; - first_const_pool_use_ = -1; ClearRecordedAstId(); } @@ -323,7 +586,7 @@ void Assembler::Reset() { void Assembler::GetCode(CodeDesc* desc) { // Emit constant pool if necessary. CheckConstPool(true, false); - ASSERT(num_pending_reloc_info_ == 0); + DCHECK(constpool_.IsEmpty()); // Set up code descriptor. if (desc) { @@ -338,7 +601,7 @@ void Assembler::GetCode(CodeDesc* desc) { void Assembler::Align(int m) { - ASSERT(m >= 4 && IsPowerOf2(m)); + DCHECK(m >= 4 && IsPowerOf2(m)); while ((pc_offset() & (m - 1)) != 0) { nop(); } @@ -366,7 +629,7 @@ void Assembler::CheckLabelLinkChain(Label const * label) { void Assembler::RemoveBranchFromLabelLinkChain(Instruction* branch, Label* label, Instruction* label_veneer) { - ASSERT(label->is_linked()); + DCHECK(label->is_linked()); CheckLabelLinkChain(label); @@ -382,7 +645,7 @@ void Assembler::RemoveBranchFromLabelLinkChain(Instruction* branch, link = next_link; } - ASSERT(branch == link); + DCHECK(branch == link); next_link = branch->ImmPCOffsetTarget(); if (branch == prev_link) { @@ -448,8 +711,8 @@ void Assembler::bind(Label* label) { // that are linked to this label will be updated to point to the newly-bound // label. 
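In EmitEntries() above, each distinct shared constant is emitted exactly once: upper_bound() hops from one key to the next, while equal_range() visits every load site that must be patched to point at the single slot. A standalone model of that iteration pattern (printing instead of patching):

    #include <cstdint>
    #include <cstdio>
    #include <map>

    static void EmitShared(const std::multimap<uint64_t, int>& shared) {
      typedef std::multimap<uint64_t, int>::const_iterator It;
      for (It value_it = shared.begin(); value_it != shared.end();
           value_it = shared.upper_bound(value_it->first)) {
        std::pair<It, It> range = shared.equal_range(value_it->first);
        for (It site = range.first; site != range.second; ++site) {
          // Each pending 'ldr rt, #0' at this offset gets retargeted here.
          std::printf("patch load at offset %d\n", site->second);
        }
        // One 64-bit slot per distinct value.
        std::printf("emit 0x%016llx\n",
                    (unsigned long long)value_it->first);
      }
    }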
- ASSERT(!label->is_near_linked()); - ASSERT(!label->is_bound()); + DCHECK(!label->is_near_linked()); + DCHECK(!label->is_bound()); DeleteUnresolvedBranchInfoForLabel(label); @@ -472,11 +735,11 @@ void Assembler::bind(Label* label) { CheckLabelLinkChain(label); - ASSERT(linkoffset >= 0); - ASSERT(linkoffset < pc_offset()); - ASSERT((linkoffset > prevlinkoffset) || + DCHECK(linkoffset >= 0); + DCHECK(linkoffset < pc_offset()); + DCHECK((linkoffset > prevlinkoffset) || (linkoffset - prevlinkoffset == kStartOfLabelLinkChain)); - ASSERT(prevlinkoffset >= 0); + DCHECK(prevlinkoffset >= 0); // Update the link to point to the label. link->SetImmPCOffsetTarget(reinterpret_cast<Instruction*>(pc_)); @@ -492,13 +755,13 @@ void Assembler::bind(Label* label) { } label->bind_to(pc_offset()); - ASSERT(label->is_bound()); - ASSERT(!label->is_linked()); + DCHECK(label->is_bound()); + DCHECK(!label->is_linked()); } int Assembler::LinkAndGetByteOffsetTo(Label* label) { - ASSERT(sizeof(*pc_) == 1); + DCHECK(sizeof(*pc_) == 1); CheckLabelLinkChain(label); int offset; @@ -513,7 +776,7 @@ int Assembler::LinkAndGetByteOffsetTo(Label* label) { // Note that offset can be zero for self-referential instructions. (This // could be useful for ADR, for example.) offset = label->pos() - pc_offset(); - ASSERT(offset <= 0); + DCHECK(offset <= 0); } else { if (label->is_linked()) { // The label is linked, so the referring instruction should be added onto @@ -522,7 +785,7 @@ int Assembler::LinkAndGetByteOffsetTo(Label* label) { // In this case, label->pos() returns the offset of the last linked // instruction from the start of the buffer. offset = label->pos() - pc_offset(); - ASSERT(offset != kStartOfLabelLinkChain); + DCHECK(offset != kStartOfLabelLinkChain); // Note that the offset here needs to be PC-relative only so that the // first instruction in a buffer can link to an unbound label. Otherwise, // the offset would be 0 for this case, and 0 is reserved for @@ -541,7 +804,7 @@ int Assembler::LinkAndGetByteOffsetTo(Label* label) { void Assembler::DeleteUnresolvedBranchInfoForLabelTraverse(Label* label) { - ASSERT(label->is_linked()); + DCHECK(label->is_linked()); CheckLabelLinkChain(label); int link_offset = label->pos(); @@ -576,7 +839,7 @@ void Assembler::DeleteUnresolvedBranchInfoForLabelTraverse(Label* label) { void Assembler::DeleteUnresolvedBranchInfoForLabel(Label* label) { if (unresolved_branches_.empty()) { - ASSERT(next_veneer_pool_check_ == kMaxInt); + DCHECK(next_veneer_pool_check_ == kMaxInt); return; } @@ -606,8 +869,7 @@ void Assembler::StartBlockConstPool() { void Assembler::EndBlockConstPool() { if (--const_pool_blocked_nesting_ == 0) { // Check the constant pool hasn't been blocked for too long. - ASSERT((num_pending_reloc_info_ == 0) || - (pc_offset() < (first_const_pool_use_ + kMaxDistToConstPool))); + DCHECK(pc_offset() < constpool_.MaxPcOffset()); // Two cases: // * no_const_pool_before_ >= next_constant_pool_check_ and the emission is // still blocked @@ -632,7 +894,7 @@ bool Assembler::IsConstantPoolAt(Instruction* instr) { // It is still worth asserting the marker is complete. // 4: blr xzr - ASSERT(!result || (instr->following()->IsBranchAndLinkToRegister() && + DCHECK(!result || (instr->following()->IsBranchAndLinkToRegister() && instr->following()->Rn() == xzr.code())); return result; @@ -666,13 +928,6 @@ int Assembler::ConstantPoolSizeAt(Instruction* instr) { } -void Assembler::ConstantPoolMarker(uint32_t size) { - ASSERT(is_const_pool_blocked()); - // + 1 is for the crash guard. 
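The bind()/LinkAndGetByteOffsetTo() machinery above threads unresolved branches to a label into an implicit singly linked list: each branch's immediate holds the offset of the previously linked branch, the label stores the chain head, and binding walks the chain and retargets every entry. A toy model over an array of next-offsets (my own simplification, using -1 as the end-of-chain sentinel rather than kStartOfLabelLinkChain):

    #include <vector>

    static const int kEndOfChain = -1;

    // links[i] holds the position of the next branch in the chain (or the
    // sentinel); targets[i] receives the resolved branch target.
    static void BindLabel(std::vector<int>& links, std::vector<int>& targets,
                          int head, int bound_pos) {
      for (int at = head; at != kEndOfChain; ) {
        int next = links[at];
        targets[at] = bound_pos;  // Retarget this branch at the bound label.
        links[at] = kEndOfChain;  // The branch is no longer linked.
        at = next;
      }
    }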
- Emit(LDR_x_lit | ImmLLiteral(size + 1) | Rt(xzr)); -} - - void Assembler::EmitPoolGuard() { // We must generate only one instruction as this is used in scopes that // control the size of the code generated. @@ -680,18 +935,6 @@ void Assembler::EmitPoolGuard() { } -void Assembler::ConstantPoolGuard() { -#ifdef DEBUG - // Currently this is only used after a constant pool marker. - ASSERT(is_const_pool_blocked()); - Instruction* instr = reinterpret_cast<Instruction*>(pc_); - ASSERT(instr->preceding()->IsLdrLiteralX() && - instr->preceding()->Rt() == xzr.code()); -#endif - EmitPoolGuard(); -} - - void Assembler::StartBlockVeneerPool() { ++veneer_pool_blocked_nesting_; } @@ -700,7 +943,7 @@ void Assembler::StartBlockVeneerPool() { void Assembler::EndBlockVeneerPool() { if (--veneer_pool_blocked_nesting_ == 0) { // Check the veneer pool hasn't been blocked for too long. - ASSERT(unresolved_branches_.empty() || + DCHECK(unresolved_branches_.empty() || (pc_offset() < unresolved_branches_first_limit())); } } @@ -708,24 +951,24 @@ void Assembler::EndBlockVeneerPool() { void Assembler::br(const Register& xn) { positions_recorder()->WriteRecordedPositions(); - ASSERT(xn.Is64Bits()); + DCHECK(xn.Is64Bits()); Emit(BR | Rn(xn)); } void Assembler::blr(const Register& xn) { positions_recorder()->WriteRecordedPositions(); - ASSERT(xn.Is64Bits()); + DCHECK(xn.Is64Bits()); // The pattern 'blr xzr' is used as a guard to detect when execution falls // through the constant pool. It should not be emitted. - ASSERT(!xn.Is(xzr)); + DCHECK(!xn.Is(xzr)); Emit(BLR | Rn(xn)); } void Assembler::ret(const Register& xn) { positions_recorder()->WriteRecordedPositions(); - ASSERT(xn.Is64Bits()); + DCHECK(xn.Is64Bits()); Emit(RET | Rn(xn)); } @@ -796,7 +1039,7 @@ void Assembler::tbz(const Register& rt, unsigned bit_pos, int imm14) { positions_recorder()->WriteRecordedPositions(); - ASSERT(rt.Is64Bits() || (rt.Is32Bits() && (bit_pos < kWRegSizeInBits))); + DCHECK(rt.Is64Bits() || (rt.Is32Bits() && (bit_pos < kWRegSizeInBits))); Emit(TBZ | ImmTestBranchBit(bit_pos) | ImmTestBranch(imm14) | Rt(rt)); } @@ -813,7 +1056,7 @@ void Assembler::tbnz(const Register& rt, unsigned bit_pos, int imm14) { positions_recorder()->WriteRecordedPositions(); - ASSERT(rt.Is64Bits() || (rt.Is32Bits() && (bit_pos < kWRegSizeInBits))); + DCHECK(rt.Is64Bits() || (rt.Is32Bits() && (bit_pos < kWRegSizeInBits))); Emit(TBNZ | ImmTestBranchBit(bit_pos) | ImmTestBranch(imm14) | Rt(rt)); } @@ -827,7 +1070,7 @@ void Assembler::tbnz(const Register& rt, void Assembler::adr(const Register& rd, int imm21) { - ASSERT(rd.Is64Bits()); + DCHECK(rd.Is64Bits()); Emit(ADR | ImmPCRelAddress(imm21) | Rd(rd)); } @@ -996,8 +1239,8 @@ void Assembler::eon(const Register& rd, void Assembler::lslv(const Register& rd, const Register& rn, const Register& rm) { - ASSERT(rd.SizeInBits() == rn.SizeInBits()); - ASSERT(rd.SizeInBits() == rm.SizeInBits()); + DCHECK(rd.SizeInBits() == rn.SizeInBits()); + DCHECK(rd.SizeInBits() == rm.SizeInBits()); Emit(SF(rd) | LSLV | Rm(rm) | Rn(rn) | Rd(rd)); } @@ -1005,8 +1248,8 @@ void Assembler::lslv(const Register& rd, void Assembler::lsrv(const Register& rd, const Register& rn, const Register& rm) { - ASSERT(rd.SizeInBits() == rn.SizeInBits()); - ASSERT(rd.SizeInBits() == rm.SizeInBits()); + DCHECK(rd.SizeInBits() == rn.SizeInBits()); + DCHECK(rd.SizeInBits() == rm.SizeInBits()); Emit(SF(rd) | LSRV | Rm(rm) | Rn(rn) | Rd(rd)); } @@ -1014,8 +1257,8 @@ void Assembler::lsrv(const Register& rd, void Assembler::asrv(const Register& rd, const 
Register& rn, const Register& rm) { - ASSERT(rd.SizeInBits() == rn.SizeInBits()); - ASSERT(rd.SizeInBits() == rm.SizeInBits()); + DCHECK(rd.SizeInBits() == rn.SizeInBits()); + DCHECK(rd.SizeInBits() == rm.SizeInBits()); Emit(SF(rd) | ASRV | Rm(rm) | Rn(rn) | Rd(rd)); } @@ -1023,8 +1266,8 @@ void Assembler::asrv(const Register& rd, void Assembler::rorv(const Register& rd, const Register& rn, const Register& rm) { - ASSERT(rd.SizeInBits() == rn.SizeInBits()); - ASSERT(rd.SizeInBits() == rm.SizeInBits()); + DCHECK(rd.SizeInBits() == rn.SizeInBits()); + DCHECK(rd.SizeInBits() == rm.SizeInBits()); Emit(SF(rd) | RORV | Rm(rm) | Rn(rn) | Rd(rd)); } @@ -1034,7 +1277,7 @@ void Assembler::bfm(const Register& rd, const Register& rn, unsigned immr, unsigned imms) { - ASSERT(rd.SizeInBits() == rn.SizeInBits()); + DCHECK(rd.SizeInBits() == rn.SizeInBits()); Instr N = SF(rd) >> (kSFOffset - kBitfieldNOffset); Emit(SF(rd) | BFM | N | ImmR(immr, rd.SizeInBits()) | @@ -1047,7 +1290,7 @@ void Assembler::sbfm(const Register& rd, const Register& rn, unsigned immr, unsigned imms) { - ASSERT(rd.Is64Bits() || rn.Is32Bits()); + DCHECK(rd.Is64Bits() || rn.Is32Bits()); Instr N = SF(rd) >> (kSFOffset - kBitfieldNOffset); Emit(SF(rd) | SBFM | N | ImmR(immr, rd.SizeInBits()) | @@ -1060,7 +1303,7 @@ void Assembler::ubfm(const Register& rd, const Register& rn, unsigned immr, unsigned imms) { - ASSERT(rd.SizeInBits() == rn.SizeInBits()); + DCHECK(rd.SizeInBits() == rn.SizeInBits()); Instr N = SF(rd) >> (kSFOffset - kBitfieldNOffset); Emit(SF(rd) | UBFM | N | ImmR(immr, rd.SizeInBits()) | @@ -1073,8 +1316,8 @@ void Assembler::extr(const Register& rd, const Register& rn, const Register& rm, unsigned lsb) { - ASSERT(rd.SizeInBits() == rn.SizeInBits()); - ASSERT(rd.SizeInBits() == rm.SizeInBits()); + DCHECK(rd.SizeInBits() == rn.SizeInBits()); + DCHECK(rd.SizeInBits() == rm.SizeInBits()); Instr N = SF(rd) >> (kSFOffset - kBitfieldNOffset); Emit(SF(rd) | EXTR | N | Rm(rm) | ImmS(lsb, rn.SizeInBits()) | Rn(rn) | Rd(rd)); @@ -1114,34 +1357,34 @@ void Assembler::csneg(const Register& rd, void Assembler::cset(const Register &rd, Condition cond) { - ASSERT((cond != al) && (cond != nv)); + DCHECK((cond != al) && (cond != nv)); Register zr = AppropriateZeroRegFor(rd); - csinc(rd, zr, zr, InvertCondition(cond)); + csinc(rd, zr, zr, NegateCondition(cond)); } void Assembler::csetm(const Register &rd, Condition cond) { - ASSERT((cond != al) && (cond != nv)); + DCHECK((cond != al) && (cond != nv)); Register zr = AppropriateZeroRegFor(rd); - csinv(rd, zr, zr, InvertCondition(cond)); + csinv(rd, zr, zr, NegateCondition(cond)); } void Assembler::cinc(const Register &rd, const Register &rn, Condition cond) { - ASSERT((cond != al) && (cond != nv)); - csinc(rd, rn, rn, InvertCondition(cond)); + DCHECK((cond != al) && (cond != nv)); + csinc(rd, rn, rn, NegateCondition(cond)); } void Assembler::cinv(const Register &rd, const Register &rn, Condition cond) { - ASSERT((cond != al) && (cond != nv)); - csinv(rd, rn, rn, InvertCondition(cond)); + DCHECK((cond != al) && (cond != nv)); + csinv(rd, rn, rn, NegateCondition(cond)); } void Assembler::cneg(const Register &rd, const Register &rn, Condition cond) { - ASSERT((cond != al) && (cond != nv)); - csneg(rd, rn, rn, InvertCondition(cond)); + DCHECK((cond != al) && (cond != nv)); + csneg(rd, rn, rn, NegateCondition(cond)); } @@ -1150,8 +1393,8 @@ void Assembler::ConditionalSelect(const Register& rd, const Register& rm, Condition cond, ConditionalSelectOp op) { - ASSERT(rd.SizeInBits() == 
rn.SizeInBits()); - ASSERT(rd.SizeInBits() == rm.SizeInBits()); + DCHECK(rd.SizeInBits() == rn.SizeInBits()); + DCHECK(rd.SizeInBits() == rm.SizeInBits()); Emit(SF(rd) | op | Rm(rm) | Cond(cond) | Rn(rn) | Rd(rd)); } @@ -1184,7 +1427,7 @@ void Assembler::DataProcessing3Source(const Register& rd, void Assembler::mul(const Register& rd, const Register& rn, const Register& rm) { - ASSERT(AreSameSizeAndType(rd, rn, rm)); + DCHECK(AreSameSizeAndType(rd, rn, rm)); Register zr = AppropriateZeroRegFor(rn); DataProcessing3Source(rd, rn, rm, zr, MADD); } @@ -1194,7 +1437,7 @@ void Assembler::madd(const Register& rd, const Register& rn, const Register& rm, const Register& ra) { - ASSERT(AreSameSizeAndType(rd, rn, rm, ra)); + DCHECK(AreSameSizeAndType(rd, rn, rm, ra)); DataProcessing3Source(rd, rn, rm, ra, MADD); } @@ -1202,7 +1445,7 @@ void Assembler::madd(const Register& rd, void Assembler::mneg(const Register& rd, const Register& rn, const Register& rm) { - ASSERT(AreSameSizeAndType(rd, rn, rm)); + DCHECK(AreSameSizeAndType(rd, rn, rm)); Register zr = AppropriateZeroRegFor(rn); DataProcessing3Source(rd, rn, rm, zr, MSUB); } @@ -1212,7 +1455,7 @@ void Assembler::msub(const Register& rd, const Register& rn, const Register& rm, const Register& ra) { - ASSERT(AreSameSizeAndType(rd, rn, rm, ra)); + DCHECK(AreSameSizeAndType(rd, rn, rm, ra)); DataProcessing3Source(rd, rn, rm, ra, MSUB); } @@ -1221,8 +1464,8 @@ void Assembler::smaddl(const Register& rd, const Register& rn, const Register& rm, const Register& ra) { - ASSERT(rd.Is64Bits() && ra.Is64Bits()); - ASSERT(rn.Is32Bits() && rm.Is32Bits()); + DCHECK(rd.Is64Bits() && ra.Is64Bits()); + DCHECK(rn.Is32Bits() && rm.Is32Bits()); DataProcessing3Source(rd, rn, rm, ra, SMADDL_x); } @@ -1231,8 +1474,8 @@ void Assembler::smsubl(const Register& rd, const Register& rn, const Register& rm, const Register& ra) { - ASSERT(rd.Is64Bits() && ra.Is64Bits()); - ASSERT(rn.Is32Bits() && rm.Is32Bits()); + DCHECK(rd.Is64Bits() && ra.Is64Bits()); + DCHECK(rn.Is32Bits() && rm.Is32Bits()); DataProcessing3Source(rd, rn, rm, ra, SMSUBL_x); } @@ -1241,8 +1484,8 @@ void Assembler::umaddl(const Register& rd, const Register& rn, const Register& rm, const Register& ra) { - ASSERT(rd.Is64Bits() && ra.Is64Bits()); - ASSERT(rn.Is32Bits() && rm.Is32Bits()); + DCHECK(rd.Is64Bits() && ra.Is64Bits()); + DCHECK(rn.Is32Bits() && rm.Is32Bits()); DataProcessing3Source(rd, rn, rm, ra, UMADDL_x); } @@ -1251,8 +1494,8 @@ void Assembler::umsubl(const Register& rd, const Register& rn, const Register& rm, const Register& ra) { - ASSERT(rd.Is64Bits() && ra.Is64Bits()); - ASSERT(rn.Is32Bits() && rm.Is32Bits()); + DCHECK(rd.Is64Bits() && ra.Is64Bits()); + DCHECK(rn.Is32Bits() && rm.Is32Bits()); DataProcessing3Source(rd, rn, rm, ra, UMSUBL_x); } @@ -1260,8 +1503,8 @@ void Assembler::umsubl(const Register& rd, void Assembler::smull(const Register& rd, const Register& rn, const Register& rm) { - ASSERT(rd.Is64Bits()); - ASSERT(rn.Is32Bits() && rm.Is32Bits()); + DCHECK(rd.Is64Bits()); + DCHECK(rn.Is32Bits() && rm.Is32Bits()); DataProcessing3Source(rd, rn, rm, xzr, SMADDL_x); } @@ -1269,7 +1512,7 @@ void Assembler::smull(const Register& rd, void Assembler::smulh(const Register& rd, const Register& rn, const Register& rm) { - ASSERT(AreSameSizeAndType(rd, rn, rm)); + DCHECK(AreSameSizeAndType(rd, rn, rm)); DataProcessing3Source(rd, rn, rm, xzr, SMULH_x); } @@ -1277,8 +1520,8 @@ void Assembler::smulh(const Register& rd, void Assembler::sdiv(const Register& rd, const Register& rn, const Register& rm) { - 
ASSERT(rd.SizeInBits() == rn.SizeInBits()); - ASSERT(rd.SizeInBits() == rm.SizeInBits()); + DCHECK(rd.SizeInBits() == rn.SizeInBits()); + DCHECK(rd.SizeInBits() == rm.SizeInBits()); Emit(SF(rd) | SDIV | Rm(rm) | Rn(rn) | Rd(rd)); } @@ -1286,8 +1529,8 @@ void Assembler::sdiv(const Register& rd, void Assembler::udiv(const Register& rd, const Register& rn, const Register& rm) { - ASSERT(rd.SizeInBits() == rn.SizeInBits()); - ASSERT(rd.SizeInBits() == rm.SizeInBits()); + DCHECK(rd.SizeInBits() == rn.SizeInBits()); + DCHECK(rd.SizeInBits() == rm.SizeInBits()); Emit(SF(rd) | UDIV | Rm(rm) | Rn(rn) | Rd(rd)); } @@ -1306,7 +1549,7 @@ void Assembler::rev16(const Register& rd, void Assembler::rev32(const Register& rd, const Register& rn) { - ASSERT(rd.Is64Bits()); + DCHECK(rd.Is64Bits()); DataProcessing1Source(rd, rn, REV); } @@ -1346,7 +1589,7 @@ void Assembler::stp(const CPURegister& rt, void Assembler::ldpsw(const Register& rt, const Register& rt2, const MemOperand& src) { - ASSERT(rt.Is64Bits()); + DCHECK(rt.Is64Bits()); LoadStorePair(rt, rt2, src, LDPSW_x); } @@ -1356,8 +1599,8 @@ void Assembler::LoadStorePair(const CPURegister& rt, const MemOperand& addr, LoadStorePairOp op) { // 'rt' and 'rt2' can only be aliased for stores. - ASSERT(((op & LoadStorePairLBit) == 0) || !rt.Is(rt2)); - ASSERT(AreSameSizeAndType(rt, rt2)); + DCHECK(((op & LoadStorePairLBit) == 0) || !rt.Is(rt2)); + DCHECK(AreSameSizeAndType(rt, rt2)); Instr memop = op | Rt(rt) | Rt2(rt2) | RnSP(addr.base()) | ImmLSPair(addr.offset(), CalcLSPairDataSize(op)); @@ -1367,13 +1610,13 @@ void Assembler::LoadStorePair(const CPURegister& rt, addrmodeop = LoadStorePairOffsetFixed; } else { // Pre-index and post-index modes. - ASSERT(!rt.Is(addr.base())); - ASSERT(!rt2.Is(addr.base())); - ASSERT(addr.offset() != 0); + DCHECK(!rt.Is(addr.base())); + DCHECK(!rt2.Is(addr.base())); + DCHECK(addr.offset() != 0); if (addr.IsPreIndex()) { addrmodeop = LoadStorePairPreIndexFixed; } else { - ASSERT(addr.IsPostIndex()); + DCHECK(addr.IsPostIndex()); addrmodeop = LoadStorePairPostIndexFixed; } } @@ -1401,9 +1644,9 @@ void Assembler::LoadStorePairNonTemporal(const CPURegister& rt, const CPURegister& rt2, const MemOperand& addr, LoadStorePairNonTemporalOp op) { - ASSERT(!rt.Is(rt2)); - ASSERT(AreSameSizeAndType(rt, rt2)); - ASSERT(addr.IsImmediateOffset()); + DCHECK(!rt.Is(rt2)); + DCHECK(AreSameSizeAndType(rt, rt2)); + DCHECK(addr.IsImmediateOffset()); LSDataSize size = CalcLSPairDataSize( static_cast<LoadStorePairOp>(op & LoadStorePairMask)); @@ -1454,32 +1697,28 @@ void Assembler::str(const CPURegister& rt, const MemOperand& src) { void Assembler::ldrsw(const Register& rt, const MemOperand& src) { - ASSERT(rt.Is64Bits()); + DCHECK(rt.Is64Bits()); LoadStore(rt, src, LDRSW_x); } -void Assembler::ldr(const Register& rt, uint64_t imm) { - // TODO(all): Constant pool may be garbage collected. Hence we cannot store - // arbitrary values in them. Manually move it for now. Fix - // MacroAssembler::Fmov when this is implemented. - UNIMPLEMENTED(); +void Assembler::ldr_pcrel(const CPURegister& rt, int imm19) { + // The pattern 'ldr xzr, #offset' is used to indicate the beginning of a + // constant pool. It should not be emitted. + DCHECK(!rt.IsZero()); + Emit(LoadLiteralOpFor(rt) | ImmLLiteral(imm19) | Rt(rt)); } -void Assembler::ldr(const FPRegister& ft, double imm) { - // TODO(all): Constant pool may be garbage collected. Hence we cannot store - // arbitrary values in them. Manually move it for now. Fix - // MacroAssembler::Fmov when this is implemented. 
- UNIMPLEMENTED(); -} - +void Assembler::ldr(const CPURegister& rt, const Immediate& imm) { + // Currently we only support 64-bit literals. + DCHECK(rt.Is64Bits()); -void Assembler::ldr(const FPRegister& ft, float imm) { - // TODO(all): Constant pool may be garbage collected. Hence we cannot store - // arbitrary values in them. Manually move it for now. Fix - // MacroAssembler::Fmov when this is implemented. - UNIMPLEMENTED(); + RecordRelocInfo(imm.rmode(), imm.value()); + BlockConstPoolFor(1); + // The load will be patched when the constpool is emitted, patching code + // expect a load literal with offset 0. + ldr_pcrel(rt, 0); } @@ -1501,13 +1740,13 @@ void Assembler::mvn(const Register& rd, const Operand& operand) { void Assembler::mrs(const Register& rt, SystemRegister sysreg) { - ASSERT(rt.Is64Bits()); + DCHECK(rt.Is64Bits()); Emit(MRS | ImmSystemRegister(sysreg) | Rt(rt)); } void Assembler::msr(SystemRegister sysreg, const Register& rt) { - ASSERT(rt.Is64Bits()); + DCHECK(rt.Is64Bits()); Emit(MSR | Rt(rt) | ImmSystemRegister(sysreg)); } @@ -1533,35 +1772,35 @@ void Assembler::isb() { void Assembler::fmov(FPRegister fd, double imm) { - ASSERT(fd.Is64Bits()); - ASSERT(IsImmFP64(imm)); + DCHECK(fd.Is64Bits()); + DCHECK(IsImmFP64(imm)); Emit(FMOV_d_imm | Rd(fd) | ImmFP64(imm)); } void Assembler::fmov(FPRegister fd, float imm) { - ASSERT(fd.Is32Bits()); - ASSERT(IsImmFP32(imm)); + DCHECK(fd.Is32Bits()); + DCHECK(IsImmFP32(imm)); Emit(FMOV_s_imm | Rd(fd) | ImmFP32(imm)); } void Assembler::fmov(Register rd, FPRegister fn) { - ASSERT(rd.SizeInBits() == fn.SizeInBits()); + DCHECK(rd.SizeInBits() == fn.SizeInBits()); FPIntegerConvertOp op = rd.Is32Bits() ? FMOV_ws : FMOV_xd; Emit(op | Rd(rd) | Rn(fn)); } void Assembler::fmov(FPRegister fd, Register rn) { - ASSERT(fd.SizeInBits() == rn.SizeInBits()); + DCHECK(fd.SizeInBits() == rn.SizeInBits()); FPIntegerConvertOp op = fd.Is32Bits() ? 
FMOV_sw : FMOV_dx; Emit(op | Rd(fd) | Rn(rn)); } void Assembler::fmov(FPRegister fd, FPRegister fn) { - ASSERT(fd.SizeInBits() == fn.SizeInBits()); + DCHECK(fd.SizeInBits() == fn.SizeInBits()); Emit(FPType(fd) | FMOV | Rd(fd) | Rn(fn)); } @@ -1656,56 +1895,56 @@ void Assembler::fminnm(const FPRegister& fd, void Assembler::fabs(const FPRegister& fd, const FPRegister& fn) { - ASSERT(fd.SizeInBits() == fn.SizeInBits()); + DCHECK(fd.SizeInBits() == fn.SizeInBits()); FPDataProcessing1Source(fd, fn, FABS); } void Assembler::fneg(const FPRegister& fd, const FPRegister& fn) { - ASSERT(fd.SizeInBits() == fn.SizeInBits()); + DCHECK(fd.SizeInBits() == fn.SizeInBits()); FPDataProcessing1Source(fd, fn, FNEG); } void Assembler::fsqrt(const FPRegister& fd, const FPRegister& fn) { - ASSERT(fd.SizeInBits() == fn.SizeInBits()); + DCHECK(fd.SizeInBits() == fn.SizeInBits()); FPDataProcessing1Source(fd, fn, FSQRT); } void Assembler::frinta(const FPRegister& fd, const FPRegister& fn) { - ASSERT(fd.SizeInBits() == fn.SizeInBits()); + DCHECK(fd.SizeInBits() == fn.SizeInBits()); FPDataProcessing1Source(fd, fn, FRINTA); } void Assembler::frintm(const FPRegister& fd, const FPRegister& fn) { - ASSERT(fd.SizeInBits() == fn.SizeInBits()); + DCHECK(fd.SizeInBits() == fn.SizeInBits()); FPDataProcessing1Source(fd, fn, FRINTM); } void Assembler::frintn(const FPRegister& fd, const FPRegister& fn) { - ASSERT(fd.SizeInBits() == fn.SizeInBits()); + DCHECK(fd.SizeInBits() == fn.SizeInBits()); FPDataProcessing1Source(fd, fn, FRINTN); } void Assembler::frintz(const FPRegister& fd, const FPRegister& fn) { - ASSERT(fd.SizeInBits() == fn.SizeInBits()); + DCHECK(fd.SizeInBits() == fn.SizeInBits()); FPDataProcessing1Source(fd, fn, FRINTZ); } void Assembler::fcmp(const FPRegister& fn, const FPRegister& fm) { - ASSERT(fn.SizeInBits() == fm.SizeInBits()); + DCHECK(fn.SizeInBits() == fm.SizeInBits()); Emit(FPType(fn) | FCMP | Rm(fm) | Rn(fn)); } @@ -1716,7 +1955,7 @@ void Assembler::fcmp(const FPRegister& fn, // Although the fcmp instruction can strictly only take an immediate value of // +0.0, we don't need to check for -0.0 because the sign of 0.0 doesn't // affect the result of the comparison. - ASSERT(value == 0.0); + DCHECK(value == 0.0); Emit(FPType(fn) | FCMP_zero | Rn(fn)); } @@ -1725,7 +1964,7 @@ void Assembler::fccmp(const FPRegister& fn, const FPRegister& fm, StatusFlags nzcv, Condition cond) { - ASSERT(fn.SizeInBits() == fm.SizeInBits()); + DCHECK(fn.SizeInBits() == fm.SizeInBits()); Emit(FPType(fn) | FCCMP | Rm(fm) | Cond(cond) | Rn(fn) | Nzcv(nzcv)); } @@ -1734,8 +1973,8 @@ void Assembler::fcsel(const FPRegister& fd, const FPRegister& fn, const FPRegister& fm, Condition cond) { - ASSERT(fd.SizeInBits() == fn.SizeInBits()); - ASSERT(fd.SizeInBits() == fm.SizeInBits()); + DCHECK(fd.SizeInBits() == fn.SizeInBits()); + DCHECK(fd.SizeInBits() == fm.SizeInBits()); Emit(FPType(fd) | FCSEL | Rm(fm) | Cond(cond) | Rn(fn) | Rd(fd)); } @@ -1751,11 +1990,11 @@ void Assembler::fcvt(const FPRegister& fd, const FPRegister& fn) { if (fd.Is64Bits()) { // Convert float to double. - ASSERT(fn.Is32Bits()); + DCHECK(fn.Is32Bits()); FPDataProcessing1Source(fd, fn, FCVT_ds); } else { // Convert double to float. - ASSERT(fn.Is64Bits()); + DCHECK(fn.Is64Bits()); FPDataProcessing1Source(fd, fn, FCVT_sd); } } @@ -1830,7 +2069,7 @@ void Assembler::ucvtf(const FPRegister& fd, // negated bit. // If b is 1, then B is 0. 
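fmov's immediate forms below only accept the ARM 8-bit "FP8" family described by the comments: sign a, an exponent field where B is the complement of b, a replicated b run, and a 4-bit fraction, with everything below bit 19 zero. A hedged, standalone sketch of the acceptance test that DCHECK(IsImmFP32(imm)) relies on (my own rewrite of the usual raw-bits check, using memcpy in place of V8's float_to_rawbits):

    #include <cstdint>
    #include <cstring>

    static bool IsEncodableFP32(float imm) {
      uint32_t bits;
      std::memcpy(&bits, &imm, sizeof(bits));
      // Fraction bits below the encodable 'defgh' range must be zero.
      if ((bits & 0x7ffff) != 0) return false;
      // bits[29:25] (the replicated 'b') must be all zeros or all ones.
      uint32_t b_pattern = (bits >> 16) & 0x3e00;
      if ((b_pattern != 0) && (b_pattern != 0x3e00)) return false;
      // bit 30 ('B') must be the complement of bit 29 ('b').
      if (((bits ^ (bits << 1)) & 0x40000000) == 0) return false;
      return true;
    }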
Instr Assembler::ImmFP32(float imm) { - ASSERT(IsImmFP32(imm)); + DCHECK(IsImmFP32(imm)); // bits: aBbb.bbbc.defg.h000.0000.0000.0000.0000 uint32_t bits = float_to_rawbits(imm); // bit7: a000.0000 @@ -1845,7 +2084,7 @@ Instr Assembler::ImmFP32(float imm) { Instr Assembler::ImmFP64(double imm) { - ASSERT(IsImmFP64(imm)); + DCHECK(IsImmFP64(imm)); // bits: aBbb.bbbb.bbcd.efgh.0000.0000.0000.0000 // 0000.0000.0000.0000.0000.0000.0000.0000 uint64_t bits = double_to_rawbits(imm); @@ -1865,10 +2104,19 @@ void Assembler::MoveWide(const Register& rd, uint64_t imm, int shift, MoveWideImmediateOp mov_op) { + // Ignore the top 32 bits of an immediate if we're moving to a W register. + if (rd.Is32Bits()) { + // Check that the top 32 bits are zero (a positive 32-bit number) or top + // 33 bits are one (a negative 32-bit number, sign extended to 64 bits). + DCHECK(((imm >> kWRegSizeInBits) == 0) || + ((imm >> (kWRegSizeInBits - 1)) == 0x1ffffffff)); + imm &= kWRegMask; + } + if (shift >= 0) { // Explicit shift specified. - ASSERT((shift == 0) || (shift == 16) || (shift == 32) || (shift == 48)); - ASSERT(rd.Is64Bits() || (shift == 0) || (shift == 16)); + DCHECK((shift == 0) || (shift == 16) || (shift == 32) || (shift == 48)); + DCHECK(rd.Is64Bits() || (shift == 0) || (shift == 16)); shift /= 16; } else { // Calculate a new immediate and shift combination to encode the immediate @@ -1880,17 +2128,17 @@ void Assembler::MoveWide(const Register& rd, imm >>= 16; shift = 1; } else if ((imm & ~(0xffffUL << 32)) == 0) { - ASSERT(rd.Is64Bits()); + DCHECK(rd.Is64Bits()); imm >>= 32; shift = 2; } else if ((imm & ~(0xffffUL << 48)) == 0) { - ASSERT(rd.Is64Bits()); + DCHECK(rd.Is64Bits()); imm >>= 48; shift = 3; } } - ASSERT(is_uint16(imm)); + DCHECK(is_uint16(imm)); Emit(SF(rd) | MoveWideImmediateFixed | mov_op | Rd(rd) | ImmMoveWide(imm) | ShiftMoveWide(shift)); @@ -1902,17 +2150,17 @@ void Assembler::AddSub(const Register& rd, const Operand& operand, FlagsUpdate S, AddSubOp op) { - ASSERT(rd.SizeInBits() == rn.SizeInBits()); - ASSERT(!operand.NeedsRelocation(isolate())); + DCHECK(rd.SizeInBits() == rn.SizeInBits()); + DCHECK(!operand.NeedsRelocation(this)); if (operand.IsImmediate()) { - int64_t immediate = operand.immediate(); - ASSERT(IsImmAddSub(immediate)); + int64_t immediate = operand.ImmediateValue(); + DCHECK(IsImmAddSub(immediate)); Instr dest_reg = (S == SetFlags) ? Rd(rd) : RdSP(rd); Emit(SF(rd) | AddSubImmediateFixed | op | Flags(S) | ImmAddSub(immediate) | dest_reg | RnSP(rn)); } else if (operand.IsShiftedRegister()) { - ASSERT(operand.reg().SizeInBits() == rd.SizeInBits()); - ASSERT(operand.shift() != ROR); + DCHECK(operand.reg().SizeInBits() == rd.SizeInBits()); + DCHECK(operand.shift() != ROR); // For instructions of the form: // add/sub wsp, <Wn>, <Wm> [, LSL #0-3 ] @@ -1922,14 +2170,14 @@ void Assembler::AddSub(const Register& rd, // or their 64-bit register equivalents, convert the operand from shifted to // extended register mode, and emit an add/sub extended instruction. 
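MoveWide() above auto-selects which 16-bit halfword carries the payload when no explicit shift is given. The selection reduces to finding the one halfword outside of which the immediate is zero; sketched standalone (names are mine):

    #include <cstdint>

    // Picks the halfword index (0-3) and 16-bit payload for a movz/movn.
    // Returns false if the value spans more than one halfword and so needs
    // a multi-instruction sequence instead.
    static bool EncodeMoveWide(uint64_t imm, int* out_hw,
                               uint16_t* out_imm16) {
      for (int hw = 0; hw < 4; hw++) {
        uint64_t mask = ~(static_cast<uint64_t>(0xffff) << (16 * hw));
        if ((imm & mask) == 0) {  // Payload confined to this halfword.
          *out_hw = hw;
          *out_imm16 = static_cast<uint16_t>(imm >> (16 * hw));
          return true;
        }
      }
      return false;
    }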
if (rn.IsSP() || rd.IsSP()) { - ASSERT(!(rd.IsSP() && (S == SetFlags))); + DCHECK(!(rd.IsSP() && (S == SetFlags))); DataProcExtendedRegister(rd, rn, operand.ToExtendedRegister(), S, AddSubExtendedFixed | op); } else { DataProcShiftedRegister(rd, rn, operand, S, AddSubShiftedFixed | op); } } else { - ASSERT(operand.IsExtendedRegister()); + DCHECK(operand.IsExtendedRegister()); DataProcExtendedRegister(rd, rn, operand, S, AddSubExtendedFixed | op); } } @@ -1940,22 +2188,22 @@ void Assembler::AddSubWithCarry(const Register& rd, const Operand& operand, FlagsUpdate S, AddSubWithCarryOp op) { - ASSERT(rd.SizeInBits() == rn.SizeInBits()); - ASSERT(rd.SizeInBits() == operand.reg().SizeInBits()); - ASSERT(operand.IsShiftedRegister() && (operand.shift_amount() == 0)); - ASSERT(!operand.NeedsRelocation(isolate())); + DCHECK(rd.SizeInBits() == rn.SizeInBits()); + DCHECK(rd.SizeInBits() == operand.reg().SizeInBits()); + DCHECK(operand.IsShiftedRegister() && (operand.shift_amount() == 0)); + DCHECK(!operand.NeedsRelocation(this)); Emit(SF(rd) | op | Flags(S) | Rm(operand.reg()) | Rn(rn) | Rd(rd)); } void Assembler::hlt(int code) { - ASSERT(is_uint16(code)); + DCHECK(is_uint16(code)); Emit(HLT | ImmException(code)); } void Assembler::brk(int code) { - ASSERT(is_uint16(code)); + DCHECK(is_uint16(code)); Emit(BRK | ImmException(code)); } @@ -1964,7 +2212,7 @@ void Assembler::debug(const char* message, uint32_t code, Instr params) { #ifdef USE_SIMULATOR // Don't generate simulator specific code if we are building a snapshot, which // might be run on real hardware. - if (!Serializer::enabled(isolate())) { + if (!serializer_enabled()) { // The arguments to the debug marker need to be contiguous in memory, so // make sure we don't try to emit pools. BlockPoolsScope scope(this); @@ -1975,11 +2223,11 @@ void Assembler::debug(const char* message, uint32_t code, Instr params) { // Refer to instructions-arm64.h for a description of the marker and its // arguments. hlt(kImmExceptionIsDebug); - ASSERT(SizeOfCodeGeneratedSince(&start) == kDebugCodeOffset); + DCHECK(SizeOfCodeGeneratedSince(&start) == kDebugCodeOffset); dc32(code); - ASSERT(SizeOfCodeGeneratedSince(&start) == kDebugParamsOffset); + DCHECK(SizeOfCodeGeneratedSince(&start) == kDebugParamsOffset); dc32(params); - ASSERT(SizeOfCodeGeneratedSince(&start) == kDebugMessageOffset); + DCHECK(SizeOfCodeGeneratedSince(&start) == kDebugMessageOffset); EmitStringData(message); hlt(kImmExceptionIsUnreachable); @@ -1998,15 +2246,15 @@ void Assembler::Logical(const Register& rd, const Register& rn, const Operand& operand, LogicalOp op) { - ASSERT(rd.SizeInBits() == rn.SizeInBits()); - ASSERT(!operand.NeedsRelocation(isolate())); + DCHECK(rd.SizeInBits() == rn.SizeInBits()); + DCHECK(!operand.NeedsRelocation(this)); if (operand.IsImmediate()) { - int64_t immediate = operand.immediate(); + int64_t immediate = operand.ImmediateValue(); unsigned reg_size = rd.SizeInBits(); - ASSERT(immediate != 0); - ASSERT(immediate != -1); - ASSERT(rd.Is64Bits() || is_uint32(immediate)); + DCHECK(immediate != 0); + DCHECK(immediate != -1); + DCHECK(rd.Is64Bits() || is_uint32(immediate)); // If the operation is NOT, invert the operation and immediate. 
if ((op & NOT) == NOT) { @@ -2023,8 +2271,8 @@ void Assembler::Logical(const Register& rd, UNREACHABLE(); } } else { - ASSERT(operand.IsShiftedRegister()); - ASSERT(operand.reg().SizeInBits() == rd.SizeInBits()); + DCHECK(operand.IsShiftedRegister()); + DCHECK(operand.reg().SizeInBits() == rd.SizeInBits()); Instr dp_op = static_cast<Instr>(op | LogicalShiftedFixed); DataProcShiftedRegister(rd, rn, operand, LeaveFlags, dp_op); } @@ -2051,13 +2299,13 @@ void Assembler::ConditionalCompare(const Register& rn, Condition cond, ConditionalCompareOp op) { Instr ccmpop; - ASSERT(!operand.NeedsRelocation(isolate())); + DCHECK(!operand.NeedsRelocation(this)); if (operand.IsImmediate()) { - int64_t immediate = operand.immediate(); - ASSERT(IsImmConditionalCompare(immediate)); + int64_t immediate = operand.ImmediateValue(); + DCHECK(IsImmConditionalCompare(immediate)); ccmpop = ConditionalCompareImmediateFixed | op | ImmCondCmp(immediate); } else { - ASSERT(operand.IsShiftedRegister() && (operand.shift_amount() == 0)); + DCHECK(operand.IsShiftedRegister() && (operand.shift_amount() == 0)); ccmpop = ConditionalCompareRegisterFixed | op | Rm(operand.reg()); } Emit(SF(rn) | ccmpop | Cond(cond) | Rn(rn) | Nzcv(nzcv)); @@ -2067,7 +2315,7 @@ void Assembler::ConditionalCompare(const Register& rn, void Assembler::DataProcessing1Source(const Register& rd, const Register& rn, DataProcessing1SourceOp op) { - ASSERT(rd.SizeInBits() == rn.SizeInBits()); + DCHECK(rd.SizeInBits() == rn.SizeInBits()); Emit(SF(rn) | op | Rn(rn) | Rd(rd)); } @@ -2083,8 +2331,8 @@ void Assembler::FPDataProcessing2Source(const FPRegister& fd, const FPRegister& fn, const FPRegister& fm, FPDataProcessing2SourceOp op) { - ASSERT(fd.SizeInBits() == fn.SizeInBits()); - ASSERT(fd.SizeInBits() == fm.SizeInBits()); + DCHECK(fd.SizeInBits() == fn.SizeInBits()); + DCHECK(fd.SizeInBits() == fm.SizeInBits()); Emit(FPType(fd) | op | Rm(fm) | Rn(fn) | Rd(fd)); } @@ -2094,7 +2342,7 @@ void Assembler::FPDataProcessing3Source(const FPRegister& fd, const FPRegister& fm, const FPRegister& fa, FPDataProcessing3SourceOp op) { - ASSERT(AreSameSizeAndType(fd, fn, fm, fa)); + DCHECK(AreSameSizeAndType(fd, fn, fm, fa)); Emit(FPType(fd) | op | Rm(fm) | Rn(fn) | Rd(fd) | Ra(fa)); } @@ -2126,7 +2374,7 @@ void Assembler::EmitExtendShift(const Register& rd, const Register& rn, Extend extend, unsigned left_shift) { - ASSERT(rd.SizeInBits() >= rn.SizeInBits()); + DCHECK(rd.SizeInBits() >= rn.SizeInBits()); unsigned reg_size = rd.SizeInBits(); // Use the correct size of register. Register rn_ = Register::Create(rn.code(), rd.SizeInBits()); @@ -2145,7 +2393,7 @@ void Assembler::EmitExtendShift(const Register& rd, case SXTW: sbfm(rd, rn_, non_shift_bits, high_bit); break; case UXTX: case SXTX: { - ASSERT(rn.SizeInBits() == kXRegSizeInBits); + DCHECK(rn.SizeInBits() == kXRegSizeInBits); // Nothing to extend. Just shift. 
lsl(rd, rn_, left_shift); break; @@ -2164,9 +2412,9 @@ void Assembler::DataProcShiftedRegister(const Register& rd, const Operand& operand, FlagsUpdate S, Instr op) { - ASSERT(operand.IsShiftedRegister()); - ASSERT(rn.Is64Bits() || (rn.Is32Bits() && is_uint5(operand.shift_amount()))); - ASSERT(!operand.NeedsRelocation(isolate())); + DCHECK(operand.IsShiftedRegister()); + DCHECK(rn.Is64Bits() || (rn.Is32Bits() && is_uint5(operand.shift_amount()))); + DCHECK(!operand.NeedsRelocation(this)); Emit(SF(rd) | op | Flags(S) | ShiftDP(operand.shift()) | ImmDPShift(operand.shift_amount()) | Rm(operand.reg()) | Rn(rn) | Rd(rd)); @@ -2178,7 +2426,7 @@ void Assembler::DataProcExtendedRegister(const Register& rd, const Operand& operand, FlagsUpdate S, Instr op) { - ASSERT(!operand.NeedsRelocation(isolate())); + DCHECK(!operand.NeedsRelocation(this)); Instr dest_reg = (S == SetFlags) ? Rd(rd) : RdSP(rd); Emit(SF(rd) | op | Flags(S) | Rm(operand.reg()) | ExtendMode(operand.extend()) | ImmExtendShift(operand.shift_amount()) | @@ -2222,18 +2470,18 @@ void Assembler::LoadStore(const CPURegister& rt, // Shifts are encoded in one bit, indicating a left shift by the memory // access size. - ASSERT((shift_amount == 0) || + DCHECK((shift_amount == 0) || (shift_amount == static_cast<unsigned>(CalcLSDataSize(op)))); Emit(LoadStoreRegisterOffsetFixed | memop | Rm(addr.regoffset()) | ExtendMode(ext) | ImmShiftLS((shift_amount > 0) ? 1 : 0)); } else { // Pre-index and post-index modes. - ASSERT(!rt.Is(addr.base())); + DCHECK(!rt.Is(addr.base())); if (IsImmLSUnscaled(offset)) { if (addr.IsPreIndex()) { Emit(LoadStorePreIndexFixed | memop | ImmLS(offset)); } else { - ASSERT(addr.IsPostIndex()); + DCHECK(addr.IsPostIndex()); Emit(LoadStorePostIndexFixed | memop | ImmLS(offset)); } } else { @@ -2255,25 +2503,9 @@ bool Assembler::IsImmLSScaled(ptrdiff_t offset, LSDataSize size) { } -void Assembler::LoadLiteral(const CPURegister& rt, int offset_from_pc) { - ASSERT((offset_from_pc & ((1 << kLiteralEntrySizeLog2) - 1)) == 0); - // The pattern 'ldr xzr, #offset' is used to indicate the beginning of a - // constant pool. It should not be emitted. 
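The IsImmLS* predicates in this region mirror the two AArch64 LDR/STR immediate forms: a 9-bit signed unscaled byte offset, and a 12-bit unsigned offset implicitly scaled by the access size (the new IsImmLSPair below adds the 7-bit pair form). A standalone sketch of the first two, with my own helper names:

    static bool IsInt9(long long x) { return (x >= -256) && (x <= 255); }
    static bool IsUint12(long long x) { return (x >= 0) && (x <= 4095); }

    // 'ldur/stur'-style form: any byte offset in [-256, 255].
    static bool FitsUnscaled(long long offset) { return IsInt9(offset); }

    // 'ldr/str #imm12'-style form: a multiple of the access size whose
    // scaled value fits in 12 unsigned bits.
    static bool FitsScaled(long long offset, int size_log2) {
      bool multiple = ((offset >> size_log2) << size_log2) == offset;
      return multiple && IsUint12(offset >> size_log2);
    }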
- ASSERT(!rt.Is(xzr)); - Emit(LDR_x_lit | - ImmLLiteral(offset_from_pc >> kLiteralEntrySizeLog2) | - Rt(rt)); -} - - -void Assembler::LoadRelocatedValue(const CPURegister& rt, - const Operand& operand, - LoadLiteralOp op) { - int64_t imm = operand.immediate(); - ASSERT(is_int32(imm) || is_uint32(imm) || (rt.Is64Bits())); - RecordRelocInfo(operand.rmode(), imm); - BlockConstPoolFor(1); - Emit(op | ImmLLiteral(0) | Rt(rt)); +bool Assembler::IsImmLSPair(ptrdiff_t offset, LSDataSize size) { + bool offset_is_size_multiple = (((offset >> size) << size) == offset); + return offset_is_size_multiple && is_int7(offset >> size); } @@ -2289,94 +2521,200 @@ bool Assembler::IsImmLogical(uint64_t value, unsigned* n, unsigned* imm_s, unsigned* imm_r) { - ASSERT((n != NULL) && (imm_s != NULL) && (imm_r != NULL)); - ASSERT((width == kWRegSizeInBits) || (width == kXRegSizeInBits)); + DCHECK((n != NULL) && (imm_s != NULL) && (imm_r != NULL)); + DCHECK((width == kWRegSizeInBits) || (width == kXRegSizeInBits)); + + bool negate = false; // Logical immediates are encoded using parameters n, imm_s and imm_r using // the following table: // - // N imms immr size S R - // 1 ssssss rrrrrr 64 UInt(ssssss) UInt(rrrrrr) - // 0 0sssss xrrrrr 32 UInt(sssss) UInt(rrrrr) - // 0 10ssss xxrrrr 16 UInt(ssss) UInt(rrrr) - // 0 110sss xxxrrr 8 UInt(sss) UInt(rrr) - // 0 1110ss xxxxrr 4 UInt(ss) UInt(rr) - // 0 11110s xxxxxr 2 UInt(s) UInt(r) + // N imms immr size S R + // 1 ssssss rrrrrr 64 UInt(ssssss) UInt(rrrrrr) + // 0 0sssss xrrrrr 32 UInt(sssss) UInt(rrrrr) + // 0 10ssss xxrrrr 16 UInt(ssss) UInt(rrrr) + // 0 110sss xxxrrr 8 UInt(sss) UInt(rrr) + // 0 1110ss xxxxrr 4 UInt(ss) UInt(rr) + // 0 11110s xxxxxr 2 UInt(s) UInt(r) // (s bits must not be all set) // - // A pattern is constructed of size bits, where the least significant S+1 - // bits are set. The pattern is rotated right by R, and repeated across a - // 32 or 64-bit value, depending on destination register width. + // A pattern is constructed of size bits, where the least significant S+1 bits + // are set. The pattern is rotated right by R, and repeated across a 32 or + // 64-bit value, depending on destination register width. // - // To test if an arbitary immediate can be encoded using this scheme, an - // iterative algorithm is used. + // Put another way: the basic format of a logical immediate is a single + // contiguous stretch of 1 bits, repeated across the whole word at intervals + // given by a power of 2. To identify them quickly, we first locate the + // lowest stretch of 1 bits, then the next 1 bit above that; that combination + // is different for every logical immediate, so it gives us all the + // information we need to identify the only logical immediate that our input + // could be, and then we simply check if that's the value we actually have. // - // TODO(mcapewel) This code does not consider using X/W register overlap to - // support 64-bit immediates where the top 32-bits are zero, and the bottom - // 32-bits are an encodable logical immediate. + // (The rotation parameter does give the possibility of the stretch of 1 bits + // going 'round the end' of the word. To deal with that, we observe that in + // any situation where that happens the bitwise NOT of the value is also a + // valid logical immediate. So we simply invert the input whenever its low bit + // is set, and then we know that the rotated case can't arise.) - // 1. If the value has all set or all clear bits, it can't be encoded. 
- if ((value == 0) || (value == 0xffffffffffffffffUL) || - ((width == kWRegSizeInBits) && (value == 0xffffffff))) { - return false; + if (value & 1) { + // If the low bit is 1, negate the value, and set a flag to remember that we + // did (so that we can adjust the return values appropriately). + negate = true; + value = ~value; } - unsigned lead_zero = CountLeadingZeros(value, width); - unsigned lead_one = CountLeadingZeros(~value, width); - unsigned trail_zero = CountTrailingZeros(value, width); - unsigned trail_one = CountTrailingZeros(~value, width); - unsigned set_bits = CountSetBits(value, width); - - // The fixed bits in the immediate s field. - // If width == 64 (X reg), start at 0xFFFFFF80. - // If width == 32 (W reg), start at 0xFFFFFFC0, as the iteration for 64-bit - // widths won't be executed. - int imm_s_fixed = (width == kXRegSizeInBits) ? -128 : -64; - int imm_s_mask = 0x3F; - - for (;;) { - // 2. If the value is two bits wide, it can be encoded. - if (width == 2) { - *n = 0; - *imm_s = 0x3C; - *imm_r = (value & 3) - 1; - return true; - } + if (width == kWRegSizeInBits) { + // To handle 32-bit logical immediates, the very easiest thing is to repeat + // the input value twice to make a 64-bit word. The correct encoding of that + // as a logical immediate will also be the correct encoding of the 32-bit + // value. - *n = (width == 64) ? 1 : 0; - *imm_s = ((imm_s_fixed | (set_bits - 1)) & imm_s_mask); - if ((lead_zero + set_bits) == width) { - *imm_r = 0; - } else { - *imm_r = (lead_zero > 0) ? (width - trail_zero) : lead_one; - } + // The most-significant 32 bits may not be zero (ie. negate is true) so + // shift the value left before duplicating it. + value <<= kWRegSizeInBits; + value |= value >> kWRegSizeInBits; + } - // 3. If the sum of leading zeros, trailing zeros and set bits is equal to - // the bit width of the value, it can be encoded. - if (lead_zero + trail_zero + set_bits == width) { - return true; + // The basic analysis idea: imagine our input word looks like this. + // + // 0011111000111110001111100011111000111110001111100011111000111110 + // c b a + // |<--d-->| + // + // We find the lowest set bit (as an actual power-of-2 value, not its index) + // and call it a. Then we add a to our original number, which wipes out the + // bottommost stretch of set bits and replaces it with a 1 carried into the + // next zero bit. Then we look for the new lowest set bit, which is in + // position b, and subtract it, so now our number is just like the original + // but with the lowest stretch of set bits completely gone. Now we find the + // lowest set bit again, which is position c in the diagram above. Then we'll + // measure the distance d between bit positions a and c (using CLZ), and that + // tells us that the only valid logical immediate that could possibly be equal + // to this number is the one in which a stretch of bits running from a to just + // below b is replicated every d bits. + uint64_t a = LargestPowerOf2Divisor(value); + uint64_t value_plus_a = value + a; + uint64_t b = LargestPowerOf2Divisor(value_plus_a); + uint64_t value_plus_a_minus_b = value_plus_a - b; + uint64_t c = LargestPowerOf2Divisor(value_plus_a_minus_b); + + int d, clz_a, out_n; + uint64_t mask; + + if (c != 0) { + // The general case, in which there is more than one stretch of set bits. + // Compute the repeat distance d, and set up a bitmask covering the basic + // unit of repetition (i.e. a word with the bottom d bits set). 
Also, in all + // of these cases the N bit of the output will be zero. + clz_a = CountLeadingZeros(a, kXRegSizeInBits); + int clz_c = CountLeadingZeros(c, kXRegSizeInBits); + d = clz_a - clz_c; + mask = ((V8_UINT64_C(1) << d) - 1); + out_n = 0; + } else { + // Handle degenerate cases. + // + // If any of those 'find lowest set bit' operations didn't find a set bit at + // all, then the word will have been zero thereafter, so in particular the + // last lowest_set_bit operation will have returned zero. So we can test for + // all the special case conditions in one go by seeing if c is zero. + if (a == 0) { + // The input was zero (or all 1 bits, which will come to here too after we + // inverted it at the start of the function), for which we just return + // false. + return false; + } else { + // Otherwise, if c was zero but a was not, then there's just one stretch + // of set bits in our word, meaning that we have the trivial case of + // d == 64 and only one 'repetition'. Set up all the same variables as in + // the general case above, and set the N bit in the output. + clz_a = CountLeadingZeros(a, kXRegSizeInBits); + d = 64; + mask = ~V8_UINT64_C(0); + out_n = 1; } + } - // 4. If the sum of leading ones, trailing ones and unset bits in the - // value is equal to the bit width of the value, it can be encoded. - if (lead_one + trail_one + (width - set_bits) == width) { - return true; - } + // If the repeat period d is not a power of two, it can't be encoded. + if (!IS_POWER_OF_TWO(d)) { + return false; + } - // 5. If the most-significant half of the bitwise value is equal to the - // least-significant half, return to step 2 using the least-significant - // half of the value. - uint64_t mask = (1UL << (width >> 1)) - 1; - if ((value & mask) == ((value >> (width >> 1)) & mask)) { - width >>= 1; - set_bits >>= 1; - imm_s_fixed >>= 1; - continue; - } + if (((b - a) & ~mask) != 0) { + // If the bit stretch (b - a) does not fit within the mask derived from the + // repeat period, then fail. + return false; + } - // 6. Otherwise, the value can't be encoded. + // The only possible option is b - a repeated every d bits. Now we're going to + // actually construct the valid logical immediate derived from that + // specification, and see if it equals our original input. + // + // To repeat a value every d bits, we multiply it by a number of the form + // (1 + 2^d + 2^(2d) + ...), i.e. 0x0001000100010001 or similar. These can + // be derived using a table lookup on CLZ(d). + static const uint64_t multipliers[] = { + 0x0000000000000001UL, + 0x0000000100000001UL, + 0x0001000100010001UL, + 0x0101010101010101UL, + 0x1111111111111111UL, + 0x5555555555555555UL, + }; + int multiplier_idx = CountLeadingZeros(d, kXRegSizeInBits) - 57; + // Ensure that the index to the multipliers array is within bounds. + DCHECK((multiplier_idx >= 0) && + (static_cast<size_t>(multiplier_idx) < ARRAY_SIZE(multipliers))); + uint64_t multiplier = multipliers[multiplier_idx]; + uint64_t candidate = (b - a) * multiplier; + + if (value != candidate) { + // The candidate pattern doesn't match our input value, so fail. return false; } + + // We have a match! This is a valid logical immediate, so now we have to + // construct the bits and pieces of the instruction encoding that generates + // it. + + // Count the set bits in our basic stretch. The special case of clz(0) == -1 + // makes the answer come out right for stretches that reach the very top of + // the word (e.g. numbers like 0xffffc00000000000). + int clz_b = (b == 0) ? 
-1 : CountLeadingZeros(b, kXRegSizeInBits); + int s = clz_a - clz_b; + + // Decide how many bits to rotate right by, to put the low bit of that basic + // stretch in position a. + int r; + if (negate) { + // If we inverted the input right at the start of this function, here's + // where we compensate: the number of set bits becomes the number of clear + // bits, and the rotation count is based on position b rather than position + // a (since b is the location of the 'lowest' 1 bit after inversion). + s = d - s; + r = (clz_b + 1) & (d - 1); + } else { + r = (clz_a + 1) & (d - 1); + } + + // Now we're done, except for having to encode the S output in such a way that + // it gives both the number of set bits and the length of the repeated + // segment. The s field is encoded like this: + // + // imms size S + // ssssss 64 UInt(ssssss) + // 0sssss 32 UInt(sssss) + // 10ssss 16 UInt(ssss) + // 110sss 8 UInt(sss) + // 1110ss 4 UInt(ss) + // 11110s 2 UInt(s) + // + // So we 'or' (-d << 1) with our computed s to form imms. + *n = out_n; + *imm_s = ((-d << 1) | (s - 1)) & 0x3f; + *imm_r = r; + + return true; } @@ -2439,9 +2777,7 @@ void Assembler::GrowBuffer() { // Compute new buffer size. CodeDesc desc; // the new buffer - if (buffer_size_ < 4 * KB) { - desc.buffer_size = 4 * KB; - } else if (buffer_size_ < 1 * MB) { + if (buffer_size_ < 1 * MB) { desc.buffer_size = 2 * buffer_size_; } else { desc.buffer_size = buffer_size_ + 1 * MB; @@ -2476,15 +2812,7 @@ void Assembler::GrowBuffer() { // buffer nor pc absolute pointing inside the code buffer, so there is no need // to relocate any emitted relocation entries. - // Relocate pending relocation entries. - for (int i = 0; i < num_pending_reloc_info_; i++) { - RelocInfo& rinfo = pending_reloc_info_[i]; - ASSERT(rinfo.rmode() != RelocInfo::COMMENT && - rinfo.rmode() != RelocInfo::POSITION); - if (rinfo.rmode() != RelocInfo::JS_RETURN) { - rinfo.set_pc(rinfo.pc() + pc_delta); - } - } + // Pending relocation entries are also relative, no need to relocate. } @@ -2496,7 +2824,7 @@ void Assembler::RecordRelocInfo(RelocInfo::Mode rmode, intptr_t data) { (rmode == RelocInfo::CONST_POOL) || (rmode == RelocInfo::VENEER_POOL)) { // Adjust code for new modes. - ASSERT(RelocInfo::IsDebugBreakSlot(rmode) + DCHECK(RelocInfo::IsDebugBreakSlot(rmode) || RelocInfo::IsJSReturn(rmode) || RelocInfo::IsComment(rmode) || RelocInfo::IsPosition(rmode) @@ -2504,11 +2832,7 @@ void Assembler::RecordRelocInfo(RelocInfo::Mode rmode, intptr_t data) { || RelocInfo::IsVeneerPool(rmode)); // These modes do not need an entry in the constant pool. } else { - ASSERT(num_pending_reloc_info_ < kMaxNumPendingRelocInfo); - if (num_pending_reloc_info_ == 0) { - first_const_pool_use_ = pc_offset(); - } - pending_reloc_info_[num_pending_reloc_info_++] = rinfo; + constpool_.RecordEntry(data, rmode); // Make sure the constant pool is not emitted in place of the next // instruction for which we just recorded relocation info. BlockConstPoolFor(1); @@ -2516,12 +2840,11 @@ void Assembler::RecordRelocInfo(RelocInfo::Mode rmode, intptr_t data) { if (!RelocInfo::IsNone(rmode)) { // Don't record external references unless the heap will be serialized. 
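The multipliers table in the rewritten IsImmLogical() above exploits the fact that multiplying a d-bit unit by a 0x...0101-style constant replicates it across the 64-bit word. A self-contained sketch of that replication step (index derived by halving instead of the CLZ(d) - 57 trick, which is equivalent for power-of-two d):

    #include <cstdint>

    static uint64_t Replicate(uint64_t unit, int d) {  // d in {2,4,...,64}
      static const uint64_t kMultipliers[] = {
          0x0000000000000001ULL,  // d == 64
          0x0000000100000001ULL,  // d == 32
          0x0001000100010001ULL,  // d == 16
          0x0101010101010101ULL,  // d == 8
          0x1111111111111111ULL,  // d == 4
          0x5555555555555555ULL,  // d == 2
      };
      int index = 0;
      for (int width = 64; width != d; width >>= 1) index++;
      return unit * kMultipliers[index];
    }

    // Example: Replicate(0x3, 8) == 0x0303030303030303 -- the candidate
    // that IsImmLogical compares against the original input value.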
- if (rmode == RelocInfo::EXTERNAL_REFERENCE) { - if (!Serializer::enabled(isolate()) && !emit_debug_code()) { - return; - } + if (rmode == RelocInfo::EXTERNAL_REFERENCE && + !serializer_enabled() && !emit_debug_code()) { + return; } - ASSERT(buffer_space() >= kMaxRelocSize); // too late to grow buffer here + DCHECK(buffer_space() >= kMaxRelocSize); // too late to grow buffer here if (rmode == RelocInfo::CODE_TARGET_WITH_ID) { RelocInfo reloc_info_with_ast_id( reinterpret_cast<byte*>(pc_), rmode, RecordedAstId().ToInt(), NULL); @@ -2537,11 +2860,9 @@ void Assembler::RecordRelocInfo(RelocInfo::Mode rmode, intptr_t data) { void Assembler::BlockConstPoolFor(int instructions) { int pc_limit = pc_offset() + instructions * kInstructionSize; if (no_const_pool_before_ < pc_limit) { - // If there are some pending entries, the constant pool cannot be blocked - // further than first_const_pool_use_ + kMaxDistToConstPool - ASSERT((num_pending_reloc_info_ == 0) || - (pc_limit < (first_const_pool_use_ + kMaxDistToConstPool))); no_const_pool_before_ = pc_limit; + // Make sure the pool won't be blocked for too long. + DCHECK(pc_limit < constpool_.MaxPcOffset()); } if (next_constant_pool_check_ < no_const_pool_before_) { @@ -2556,111 +2877,53 @@ void Assembler::CheckConstPool(bool force_emit, bool require_jump) { // BlockConstPoolScope. if (is_const_pool_blocked()) { // Something is wrong if emission is forced and blocked at the same time. - ASSERT(!force_emit); + DCHECK(!force_emit); return; } // There is nothing to do if there are no pending constant pool entries. - if (num_pending_reloc_info_ == 0) { + if (constpool_.IsEmpty()) { // Calculate the offset of the next check. - next_constant_pool_check_ = pc_offset() + kCheckConstPoolInterval; + SetNextConstPoolCheckIn(kCheckConstPoolInterval); return; } // We emit a constant pool when: // * requested to do so by parameter force_emit (e.g. after each function). // * the distance to the first instruction accessing the constant pool is - // kAvgDistToConstPool or more. - // * no jump is required and the distance to the first instruction accessing - // the constant pool is at least kMaxDistToPConstool / 2. - ASSERT(first_const_pool_use_ >= 0); - int dist = pc_offset() - first_const_pool_use_; - if (!force_emit && dist < kAvgDistToConstPool && - (require_jump || (dist < (kMaxDistToConstPool / 2)))) { + // kApproxMaxDistToConstPool or more. + // * the number of entries in the pool is kApproxMaxPoolEntryCount or more. + int dist = constpool_.DistanceToFirstUse(); + int count = constpool_.EntryCount(); + if (!force_emit && + (dist < kApproxMaxDistToConstPool) && + (count < kApproxMaxPoolEntryCount)) { return; } - int jump_instr = require_jump ? kInstructionSize : 0; - int size_pool_marker = kInstructionSize; - int size_pool_guard = kInstructionSize; - int pool_size = jump_instr + size_pool_marker + size_pool_guard + - num_pending_reloc_info_ * kPointerSize; - int needed_space = pool_size + kGap; // Emit veneers for branches that would go out of range during emission of the // constant pool. - CheckVeneerPool(false, require_jump, kVeneerDistanceMargin + pool_size); - - Label size_check; - bind(&size_check); + int worst_case_size = constpool_.WorstCaseSize(); + CheckVeneerPool(false, require_jump, + kVeneerDistanceMargin + worst_case_size); // Check that the code buffer is large enough before emitting the constant - // pool (include the jump over the pool, the constant pool marker, the - // constant pool guard, and the gap to the relocation information). 
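The rewritten CheckConstPool() here now triggers on either of two pressures: distance from the first pool use, or sheer entry count. The decision reduces to a small predicate (threshold values below are placeholders of mine, not the real kApprox* constants):

    static bool ShouldEmitPool(bool force_emit, int dist_to_first_use,
                               int entry_count) {
      const int kMaxDist = 64 * 1024;  // Stand-in for kApproxMaxDistToConstPool.
      const int kMaxEntries = 512;     // Stand-in for kApproxMaxPoolEntryCount.
      if (force_emit) return true;
      return (dist_to_first_use >= kMaxDist) || (entry_count >= kMaxEntries);
    }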
+ // pool (this includes the gap to the relocation information). + int needed_space = worst_case_size + kGap + 1 * kInstructionSize; while (buffer_space() <= needed_space) { GrowBuffer(); } - { - // Block recursive calls to CheckConstPool and protect from veneer pools. - BlockPoolsScope block_pools(this); - RecordConstPool(pool_size); - - // Emit jump over constant pool if necessary. - Label after_pool; - if (require_jump) { - b(&after_pool); - } - - // Emit a constant pool header. The header has two goals: - // 1) Encode the size of the constant pool, for use by the disassembler. - // 2) Terminate the program, to try to prevent execution from accidentally - // flowing into the constant pool. - // The header is therefore made of two arm64 instructions: - // ldr xzr, #<size of the constant pool in 32-bit words> - // blr xzr - // If executed the code will likely segfault and lr will point to the - // beginning of the constant pool. - // TODO(all): currently each relocated constant is 64 bits, consider adding - // support for 32-bit entries. - RecordComment("[ Constant Pool"); - ConstantPoolMarker(2 * num_pending_reloc_info_); - ConstantPoolGuard(); - - // Emit constant pool entries. - for (int i = 0; i < num_pending_reloc_info_; i++) { - RelocInfo& rinfo = pending_reloc_info_[i]; - ASSERT(rinfo.rmode() != RelocInfo::COMMENT && - rinfo.rmode() != RelocInfo::POSITION && - rinfo.rmode() != RelocInfo::STATEMENT_POSITION && - rinfo.rmode() != RelocInfo::CONST_POOL && - rinfo.rmode() != RelocInfo::VENEER_POOL); - - Instruction* instr = reinterpret_cast<Instruction*>(rinfo.pc()); - // Instruction to patch must be 'ldr rd, [pc, #offset]' with offset == 0. - ASSERT(instr->IsLdrLiteral() && - instr->ImmLLiteral() == 0); - - instr->SetImmPCOffsetTarget(reinterpret_cast<Instruction*>(pc_)); - dc64(rinfo.data()); - } - - num_pending_reloc_info_ = 0; - first_const_pool_use_ = -1; - - RecordComment("]"); - - if (after_pool.is_linked()) { - bind(&after_pool); - } - } + Label size_check; + bind(&size_check); + constpool_.Emit(require_jump); + DCHECK(SizeOfCodeGeneratedSince(&size_check) <= + static_cast<unsigned>(worst_case_size)); // Since a constant pool was just emitted, move the check offset forward by // the standard interval. - next_constant_pool_check_ = pc_offset() + kCheckConstPoolInterval; - - ASSERT(SizeOfCodeGeneratedSince(&size_check) == - static_cast<unsigned>(pool_size)); + SetNextConstPoolCheckIn(kCheckConstPoolInterval); } @@ -2720,7 +2983,7 @@ void Assembler::EmitVeneers(bool force_emit, bool need_protection, int margin) { branch->SetImmPCOffsetTarget(veneer); b(label); #ifdef DEBUG - ASSERT(SizeOfCodeGeneratedSince(&veneer_size_check) <= + DCHECK(SizeOfCodeGeneratedSince(&veneer_size_check) <= static_cast<uint64_t>(kMaxVeneerCodeSize)); veneer_size_check.Unuse(); #endif @@ -2753,17 +3016,17 @@ void Assembler::CheckVeneerPool(bool force_emit, bool require_jump, int margin) { // There is nothing to do if there are no pending veneer pool entries. if (unresolved_branches_.empty()) { - ASSERT(next_veneer_pool_check_ == kMaxInt); + DCHECK(next_veneer_pool_check_ == kMaxInt); return; } - ASSERT(pc_offset() < unresolved_branches_first_limit()); + DCHECK(pc_offset() < unresolved_branches_first_limit()); // Some short sequence of instruction mustn't be broken up by veneer pool // emission, such sequences are protected by calls to BlockVeneerPoolFor and // BlockVeneerPoolScope. 
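// ---------------------------------------------------------------------------
// Hedged sketch of the emission policy above (the constants mirror the
// patch; the free function itself is illustrative, not V8 code): the pool is
// flushed when emission is forced, when the first pending entry is getting
// far away, or when the pool has grown large. Both thresholds are
// approximate because the checks only run at intervals.
bool ShouldEmitConstPool(bool force_emit, int dist_to_first_use,
                         int entry_count) {
  const int kApproxMaxDistToConstPool = 64 * 1024;  // 64 KB, as in the patch
  const int kApproxMaxPoolEntryCount = 512;         // as in the patch
  return force_emit ||
         (dist_to_first_use >= kApproxMaxDistToConstPool) ||
         (entry_count >= kApproxMaxPoolEntryCount);
}
// ---------------------------------------------------------------------------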
if (is_veneer_pool_blocked()) { - ASSERT(!force_emit); + DCHECK(!force_emit); return; } @@ -2816,43 +3079,24 @@ void Assembler::RecordConstPool(int size) { Handle<ConstantPoolArray> Assembler::NewConstantPool(Isolate* isolate) { // No out-of-line constant pool support. - ASSERT(!FLAG_enable_ool_constant_pool); + DCHECK(!FLAG_enable_ool_constant_pool); return isolate->factory()->empty_constant_pool_array(); } void Assembler::PopulateConstantPool(ConstantPoolArray* constant_pool) { // No out-of-line constant pool support. - ASSERT(!FLAG_enable_ool_constant_pool); + DCHECK(!FLAG_enable_ool_constant_pool); return; } -void PatchingAssembler::MovInt64(const Register& rd, int64_t imm) { - Label start; - bind(&start); - - ASSERT(rd.Is64Bits()); - ASSERT(!rd.IsSP()); - - for (unsigned i = 0; i < (rd.SizeInBits() / 16); i++) { - uint64_t imm16 = (imm >> (16 * i)) & 0xffffL; - movk(rd, imm16, 16 * i); - } - - ASSERT(SizeOfCodeGeneratedSince(&start) == - kMovInt64NInstrs * kInstructionSize); -} - - -void PatchingAssembler::PatchAdrFar(Instruction* target) { +void PatchingAssembler::PatchAdrFar(ptrdiff_t target_offset) { // The code at the current instruction should be: // adr rd, 0 // nop (adr_far) // nop (adr_far) - // nop (adr_far) // movz scratch, 0 - // add rd, rd, scratch // Verify the expected code. Instruction* expected_adr = InstructionAt(0); @@ -2862,39 +3106,21 @@ void PatchingAssembler::PatchAdrFar(Instruction* target) { CHECK(InstructionAt((i + 1) * kInstructionSize)->IsNop(ADR_FAR_NOP)); } Instruction* expected_movz = - InstructionAt((kAdrFarPatchableNInstrs - 2) * kInstructionSize); + InstructionAt((kAdrFarPatchableNInstrs - 1) * kInstructionSize); CHECK(expected_movz->IsMovz() && (expected_movz->ImmMoveWide() == 0) && (expected_movz->ShiftMoveWide() == 0)); int scratch_code = expected_movz->Rd(); - Instruction* expected_add = - InstructionAt((kAdrFarPatchableNInstrs - 1) * kInstructionSize); - CHECK(expected_add->IsAddSubShifted() && - (expected_add->Mask(AddSubOpMask) == ADD) && - expected_add->SixtyFourBits() && - (expected_add->Rd() == rd_code) && (expected_add->Rn() == rd_code) && - (expected_add->Rm() == scratch_code) && - (static_cast<Shift>(expected_add->ShiftDP()) == LSL) && - (expected_add->ImmDPShift() == 0)); // Patch to load the correct address. - Label start; - bind(&start); Register rd = Register::XRegFromCode(rd_code); - // If the target is in range, we only patch the adr. Otherwise we patch the - // nops with fixup instructions. - int target_offset = expected_adr->DistanceTo(target); - if (Instruction::IsValidPCRelOffset(target_offset)) { - adr(rd, target_offset); - for (int i = 0; i < kAdrFarPatchableNInstrs - 2; ++i) { - nop(ADR_FAR_NOP); - } - } else { - Register scratch = Register::XRegFromCode(scratch_code); - adr(rd, 0); - MovInt64(scratch, target_offset); - add(rd, rd, scratch); - } + Register scratch = Register::XRegFromCode(scratch_code); + // Addresses are only 48 bits. 
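// ---------------------------------------------------------------------------
// Numeric sanity check (standalone sketch, not V8 code) of the patch
// sequence emitted just below: a 48-bit target_offset is rebuilt from the
// 16-bit adr immediate plus a scratch value assembled with movz/movk, so
// adr followed by add yields exactly pc + target_offset.
#include <cassert>
#include <cstdint>
int main() {
  int64_t target_offset = 0x123456789ABCLL;        // any 48-bit offset
  int64_t adr_part = target_offset & 0xFFFF;       // adr(rd, ...)
  int64_t scratch = ((target_offset >> 16) & 0xFFFF) << 16    // movz ..., 16
                  | ((target_offset >> 32) & 0xFFFF) << 32;   // movk ..., 32
  assert((target_offset >> 48) == 0);              // mirrors the DCHECK
  assert(adr_part + scratch == target_offset);     // add(rd, rd, scratch)
  return 0;
}
// ---------------------------------------------------------------------------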
+ adr(rd, target_offset & 0xFFFF); + movz(scratch, (target_offset >> 16) & 0xFFFF, 16); + movk(scratch, (target_offset >> 32) & 0xFFFF, 32); + DCHECK((target_offset >> 48) == 0); + add(rd, rd, scratch); } diff --git a/deps/v8/src/arm64/assembler-arm64.h b/deps/v8/src/arm64/assembler-arm64.h index a3fbc98d975..1bafce84548 100644 --- a/deps/v8/src/arm64/assembler-arm64.h +++ b/deps/v8/src/arm64/assembler-arm64.h @@ -7,13 +7,13 @@ #include <list> #include <map> +#include <vector> -#include "globals.h" -#include "utils.h" -#include "assembler.h" -#include "serialize.h" -#include "arm64/instructions-arm64.h" -#include "arm64/cpu-arm64.h" +#include "src/arm64/instructions-arm64.h" +#include "src/assembler.h" +#include "src/globals.h" +#include "src/serialize.h" +#include "src/utils.h" namespace v8 { @@ -66,6 +66,7 @@ struct CPURegister { bool IsValidFPRegister() const; bool IsNone() const; bool Is(const CPURegister& other) const; + bool Aliases(const CPURegister& other) const; bool IsZero() const; bool IsSP() const; @@ -105,18 +106,18 @@ struct Register : public CPURegister { reg_code = r.reg_code; reg_size = r.reg_size; reg_type = r.reg_type; - ASSERT(IsValidOrNone()); + DCHECK(IsValidOrNone()); } Register(const Register& r) { // NOLINT(runtime/explicit) reg_code = r.reg_code; reg_size = r.reg_size; reg_type = r.reg_type; - ASSERT(IsValidOrNone()); + DCHECK(IsValidOrNone()); } bool IsValid() const { - ASSERT(IsRegister() || IsNone()); + DCHECK(IsRegister() || IsNone()); return IsValidRegister(); } @@ -168,7 +169,7 @@ struct Register : public CPURegister { } static Register FromAllocationIndex(unsigned index) { - ASSERT(index < static_cast<unsigned>(NumAllocatableRegisters())); + DCHECK(index < static_cast<unsigned>(NumAllocatableRegisters())); // cp is the last allocatable register. if (index == (static_cast<unsigned>(NumAllocatableRegisters() - 1))) { return from_code(kAllocatableContext); @@ -181,8 +182,8 @@ struct Register : public CPURegister { } static const char* AllocationIndexToString(int index) { - ASSERT((index >= 0) && (index < NumAllocatableRegisters())); - ASSERT((kAllocatableLowRangeBegin == 0) && + DCHECK((index >= 0) && (index < NumAllocatableRegisters())); + DCHECK((kAllocatableLowRangeBegin == 0) && (kAllocatableLowRangeEnd == 15) && (kAllocatableHighRangeBegin == 18) && (kAllocatableHighRangeEnd == 24) && @@ -198,7 +199,7 @@ struct Register : public CPURegister { } static int ToAllocationIndex(Register reg) { - ASSERT(reg.IsAllocatable()); + DCHECK(reg.IsAllocatable()); unsigned code = reg.code(); if (code == kAllocatableContext) { return NumAllocatableRegisters() - 1; @@ -234,18 +235,18 @@ struct FPRegister : public CPURegister { reg_code = r.reg_code; reg_size = r.reg_size; reg_type = r.reg_type; - ASSERT(IsValidOrNone()); + DCHECK(IsValidOrNone()); } FPRegister(const FPRegister& r) { // NOLINT(runtime/explicit) reg_code = r.reg_code; reg_size = r.reg_size; reg_type = r.reg_type; - ASSERT(IsValidOrNone()); + DCHECK(IsValidOrNone()); } bool IsValid() const { - ASSERT(IsFPRegister() || IsNone()); + DCHECK(IsFPRegister() || IsNone()); return IsValidFPRegister(); } @@ -281,7 +282,7 @@ struct FPRegister : public CPURegister { } static FPRegister FromAllocationIndex(unsigned int index) { - ASSERT(index < static_cast<unsigned>(NumAllocatableRegisters())); + DCHECK(index < static_cast<unsigned>(NumAllocatableRegisters())); return (index <= kAllocatableLowRangeEnd) ? 
from_code(index) @@ -289,8 +290,8 @@ struct FPRegister : public CPURegister { } static const char* AllocationIndexToString(int index) { - ASSERT((index >= 0) && (index < NumAllocatableRegisters())); - ASSERT((kAllocatableLowRangeBegin == 0) && + DCHECK((index >= 0) && (index < NumAllocatableRegisters())); + DCHECK((kAllocatableLowRangeBegin == 0) && (kAllocatableLowRangeEnd == 14) && (kAllocatableHighRangeBegin == 16) && (kAllocatableHighRangeEnd == 28)); @@ -304,7 +305,7 @@ struct FPRegister : public CPURegister { } static int ToAllocationIndex(FPRegister reg) { - ASSERT(reg.IsAllocatable()); + DCHECK(reg.IsAllocatable()); unsigned code = reg.code(); return (code <= kAllocatableLowRangeEnd) @@ -450,40 +451,40 @@ class CPURegList { CPURegister reg4 = NoCPUReg) : list_(reg1.Bit() | reg2.Bit() | reg3.Bit() | reg4.Bit()), size_(reg1.SizeInBits()), type_(reg1.type()) { - ASSERT(AreSameSizeAndType(reg1, reg2, reg3, reg4)); - ASSERT(IsValid()); + DCHECK(AreSameSizeAndType(reg1, reg2, reg3, reg4)); + DCHECK(IsValid()); } CPURegList(CPURegister::RegisterType type, unsigned size, RegList list) : list_(list), size_(size), type_(type) { - ASSERT(IsValid()); + DCHECK(IsValid()); } CPURegList(CPURegister::RegisterType type, unsigned size, unsigned first_reg, unsigned last_reg) : size_(size), type_(type) { - ASSERT(((type == CPURegister::kRegister) && + DCHECK(((type == CPURegister::kRegister) && (last_reg < kNumberOfRegisters)) || ((type == CPURegister::kFPRegister) && (last_reg < kNumberOfFPRegisters))); - ASSERT(last_reg >= first_reg); + DCHECK(last_reg >= first_reg); list_ = (1UL << (last_reg + 1)) - 1; list_ &= ~((1UL << first_reg) - 1); - ASSERT(IsValid()); + DCHECK(IsValid()); } CPURegister::RegisterType type() const { - ASSERT(IsValid()); + DCHECK(IsValid()); return type_; } RegList list() const { - ASSERT(IsValid()); + DCHECK(IsValid()); return list_; } inline void set_list(RegList new_list) { - ASSERT(IsValid()); + DCHECK(IsValid()); list_ = new_list; } @@ -528,7 +529,7 @@ class CPURegList { static CPURegList GetSafepointSavedRegisters(); bool IsEmpty() const { - ASSERT(IsValid()); + DCHECK(IsValid()); return list_ == 0; } @@ -536,7 +537,7 @@ class CPURegList { const CPURegister& other2 = NoCPUReg, const CPURegister& other3 = NoCPUReg, const CPURegister& other4 = NoCPUReg) const { - ASSERT(IsValid()); + DCHECK(IsValid()); RegList list = 0; if (!other1.IsNone() && (other1.type() == type_)) list |= other1.Bit(); if (!other2.IsNone() && (other2.type() == type_)) list |= other2.Bit(); @@ -546,21 +547,26 @@ class CPURegList { } int Count() const { - ASSERT(IsValid()); + DCHECK(IsValid()); return CountSetBits(list_, kRegListSizeInBits); } unsigned RegisterSizeInBits() const { - ASSERT(IsValid()); + DCHECK(IsValid()); return size_; } unsigned RegisterSizeInBytes() const { int size_in_bits = RegisterSizeInBits(); - ASSERT((size_in_bits % kBitsPerByte) == 0); + DCHECK((size_in_bits % kBitsPerByte) == 0); return size_in_bits / kBitsPerByte; } + unsigned TotalSizeInBytes() const { + DCHECK(IsValid()); + return RegisterSizeInBytes() * Count(); + } + private: RegList list_; unsigned size_; @@ -593,6 +599,31 @@ class CPURegList { #define kCallerSaved CPURegList::GetCallerSaved() #define kCallerSavedFP CPURegList::GetCallerSavedFP() +// ----------------------------------------------------------------------------- +// Immediates. 
+class Immediate { + public: + template<typename T> + inline explicit Immediate(Handle<T> handle); + + // This is allowed to be an implicit constructor because Immediate is + // a wrapper class that doesn't normally perform any type conversion. + template<typename T> + inline Immediate(T value); // NOLINT(runtime/explicit) + + template<typename T> + inline Immediate(T value, RelocInfo::Mode rmode); + + int64_t value() const { return value_; } + RelocInfo::Mode rmode() const { return rmode_; } + + private: + void InitializeHandle(Handle<Object> value); + + int64_t value_; + RelocInfo::Mode rmode_; +}; + // ----------------------------------------------------------------------------- // Operands. @@ -628,8 +659,8 @@ class Operand { inline Operand(T t); // NOLINT(runtime/explicit) // Implicit constructor for int types. - template<typename int_t> - inline Operand(int_t t, RelocInfo::Mode rmode); + template<typename T> + inline Operand(T t, RelocInfo::Mode rmode); inline bool IsImmediate() const; inline bool IsShiftedRegister() const; @@ -640,36 +671,33 @@ class Operand { // which helps in the encoding of instructions that use the stack pointer. inline Operand ToExtendedRegister() const; - inline int64_t immediate() const; + inline Immediate immediate() const; + inline int64_t ImmediateValue() const; inline Register reg() const; inline Shift shift() const; inline Extend extend() const; inline unsigned shift_amount() const; // Relocation information. - RelocInfo::Mode rmode() const { return rmode_; } - void set_rmode(RelocInfo::Mode rmode) { rmode_ = rmode; } - bool NeedsRelocation(Isolate* isolate) const; + bool NeedsRelocation(const Assembler* assembler) const; // Helpers inline static Operand UntagSmi(Register smi); inline static Operand UntagSmiAndScale(Register smi, int scale); private: - void initialize_handle(Handle<Object> value); - int64_t immediate_; + Immediate immediate_; Register reg_; Shift shift_; Extend extend_; unsigned shift_amount_; - RelocInfo::Mode rmode_; }; // MemOperand represents a memory operand in a load or store instruction. class MemOperand { public: - inline explicit MemOperand(); + inline MemOperand(); inline explicit MemOperand(Register base, ptrdiff_t offset = 0, AddrMode addrmode = Offset); @@ -701,6 +729,16 @@ class MemOperand { // handle indexed modes. inline Operand OffsetAsOperand() const; + enum PairResult { + kNotPair, // Can't use a pair instruction. + kPairAB, // Can use a pair instruction (operandA has lower address). + kPairBA // Can use a pair instruction (operandB has lower address). + }; + // Check if two MemOperand are consistent for stp/ldp use. + static PairResult AreConsistentForPair(const MemOperand& operandA, + const MemOperand& operandB, + int access_size_log2 = kXRegSizeLog2); + private: Register base_; Register regoffset_; @@ -712,6 +750,55 @@ class MemOperand { }; +class ConstPool { + public: + explicit ConstPool(Assembler* assm) + : assm_(assm), + first_use_(-1), + shared_entries_count(0) {} + void RecordEntry(intptr_t data, RelocInfo::Mode mode); + int EntryCount() const { + return shared_entries_count + unique_entries_.size(); + } + bool IsEmpty() const { + return shared_entries_.empty() && unique_entries_.empty(); + } + // Distance in bytes between the current pc and the first instruction + // using the pool. If there are no pending entries return kMaxInt. + int DistanceToFirstUse(); + // Offset after which instructions using the pool will be out of range. 
+ int MaxPcOffset(); + // Maximum size the constant pool can be with current entries. It always + // includes alignment padding and branch over. + int WorstCaseSize(); + // Size in bytes of the literal pool *if* it is emitted at the current + // pc. The size will include the branch over the pool if it was requested. + int SizeIfEmittedAtCurrentPc(bool require_jump); + // Emit the literal pool at the current pc with a branch over the pool if + // requested. + void Emit(bool require_jump); + // Discard any pending pool entries. + void Clear(); + + private: + bool CanBeShared(RelocInfo::Mode mode); + void EmitMarker(); + void EmitGuard(); + void EmitEntries(); + + Assembler* assm_; + // Keep track of the first instruction requiring a constant pool entry + // since the previous constant pool was emitted. + int first_use_; + // Values and pc offsets of entries which can be shared. + std::multimap<uint64_t, int> shared_entries_; + // Number of distinct literals in shared entries. + int shared_entries_count; + // Values and pc offsets of entries which cannot be shared. + std::vector<std::pair<uint64_t, int> > unique_entries_; +}; + + // ----------------------------------------------------------------------------- // Assembler. @@ -735,14 +822,14 @@ class Assembler : public AssemblerBase { virtual ~Assembler(); virtual void AbortedCodeGeneration() { - num_pending_reloc_info_ = 0; + constpool_.Clear(); } // System functions --------------------------------------------------------- // Start generating code from the beginning of the buffer, discarding any code // and data that has already been emitted into the buffer. // - // In order to avoid any accidental transfer of state, Reset ASSERTs that the + // In order to avoid any accidental transfer of state, Reset DCHECKs that the // constant pool is not blocked. void Reset(); @@ -782,11 +869,15 @@ class Assembler : public AssemblerBase { ConstantPoolArray* constant_pool); inline static void set_target_address_at(Address pc, ConstantPoolArray* constant_pool, - Address target); + Address target, + ICacheFlushMode icache_flush_mode = + FLUSH_ICACHE_IF_NEEDED); static inline Address target_address_at(Address pc, Code* code); static inline void set_target_address_at(Address pc, Code* code, - Address target); + Address target, + ICacheFlushMode icache_flush_mode = + FLUSH_ICACHE_IF_NEEDED); // Return the code target address at a call site from the return address of // that call in the instruction stream. @@ -796,6 +887,9 @@ class Assembler : public AssemblerBase { // instruction stream that call will return from. inline static Address return_address_from_call_start(Address pc); + // Return the code target address of the patch debug break slot. + inline static Address break_address_from_return_address(Address pc); + // This sets the branch destination (which is in the constant pool on ARM). // This is for calls and branches within generated code. inline static void deserialization_set_special_target_at( @@ -822,15 +916,15 @@ class Assembler : public AssemblerBase { // Size of the generated code in bytes uint64_t SizeOfGeneratedCode() const { - ASSERT((pc_ >= buffer_) && (pc_ < (buffer_ + buffer_size_))); + DCHECK((pc_ >= buffer_) && (pc_ < (buffer_ + buffer_size_))); return pc_ - buffer_; }
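// ---------------------------------------------------------------------------
// Simplified stand-in (an assumption-laden sketch, not the V8 class) for the
// sharing scheme the ConstPool above uses: shareable values are deduplicated
// through a multimap keyed by value, so EntryCount() counts one pool slot
// per distinct literal while every recorded pc offset can still be patched
// when the pool is emitted.
#include <cstdint>
#include <map>
#include <utility>
#include <vector>

struct MiniConstPool {
  std::multimap<uint64_t, int> shared_entries_;   // value -> each use site
  int shared_entries_count = 0;                   // distinct shared values
  std::vector<std::pair<uint64_t, int> > unique_entries_;

  void RecordEntry(uint64_t data, int pc_offset, bool can_be_shared) {
    if (can_be_shared) {
      if (shared_entries_.find(data) == shared_entries_.end()) {
        shared_entries_count++;  // first use of this literal
      }
      shared_entries_.insert(std::make_pair(data, pc_offset));
    } else {
      unique_entries_.push_back(std::make_pair(data, pc_offset));
    }
  }
  int EntryCount() const {
    return shared_entries_count + static_cast<int>(unique_entries_.size());
  }
};
// ---------------------------------------------------------------------------
 // Return the code size generated from label to the current position.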
uint64_t SizeOfCodeGeneratedSince(const Label* label) { - ASSERT(label->is_bound()); - ASSERT(pc_offset() >= label->pos()); - ASSERT(pc_offset() < buffer_size_); + DCHECK(label->is_bound()); + DCHECK(pc_offset() >= label->pos()); + DCHECK(pc_offset() < buffer_size_); return pc_offset() - label->pos(); } @@ -840,8 +934,8 @@ class Assembler : public AssemblerBase { // TODO(jbramley): Work out what sign to use for these things and if possible, // change things to be consistent. void AssertSizeOfCodeGeneratedSince(const Label* label, ptrdiff_t size) { - ASSERT(size >= 0); - ASSERT(static_cast<uint64_t>(size) == SizeOfCodeGeneratedSince(label)); + DCHECK(size >= 0); + DCHECK(static_cast<uint64_t>(size) == SizeOfCodeGeneratedSince(label)); } // Return the number of instructions generated from label to the @@ -859,7 +953,8 @@ class Assembler : public AssemblerBase { static const int kPatchDebugBreakSlotAddressOffset = 0; // Number of instructions necessary to be able to later patch it to a call. - // See Debug::GenerateSlot() and BreakLocationIterator::SetDebugBreakAtSlot(). + // See DebugCodegen::GenerateSlot() and + // BreakLocationIterator::SetDebugBreakAtSlot(). static const int kDebugBreakSlotInstructions = 4; static const int kDebugBreakSlotLength = kDebugBreakSlotInstructions * kInstructionSize; @@ -879,9 +974,7 @@ class Assembler : public AssemblerBase { static bool IsConstantPoolAt(Instruction* instr); static int ConstantPoolSizeAt(Instruction* instr); // See Assembler::CheckConstPool for more info. - void ConstantPoolMarker(uint32_t size); void EmitPoolGuard(); - void ConstantPoolGuard(); // Prevent veneer pool emission until EndBlockVeneerPool is called. // Call to this function can be nested but must be followed by an equal @@ -925,9 +1018,9 @@ class Assembler : public AssemblerBase { // function, compiled with and without debugger support (see for example // Debug::PrepareForBreakPoints()). // Compiling functions with debugger support generates additional code - // (Debug::GenerateSlot()). This may affect the emission of the pools and - // cause the version of the code with debugger support to have pools generated - // in different places. + // (DebugCodegen::GenerateSlot()). This may affect the emission of the pools + // and cause the version of the code with debugger support to have pools + // generated in different places. // Recording the position and size of emitted pools allows to correctly // compute the offset mappings between the different versions of a function in // all situations. @@ -1124,8 +1217,8 @@ class Assembler : public AssemblerBase { const Register& rn, unsigned lsb, unsigned width) { - ASSERT(width >= 1); - ASSERT(lsb + width <= rn.SizeInBits()); + DCHECK(width >= 1); + DCHECK(lsb + width <= rn.SizeInBits()); bfm(rd, rn, (rd.SizeInBits() - lsb) & (rd.SizeInBits() - 1), width - 1); } @@ -1134,15 +1227,15 @@ class Assembler : public AssemblerBase { const Register& rn, unsigned lsb, unsigned width) { - ASSERT(width >= 1); - ASSERT(lsb + width <= rn.SizeInBits()); + DCHECK(width >= 1); + DCHECK(lsb + width <= rn.SizeInBits()); bfm(rd, rn, lsb, lsb + width - 1); } // Sbfm aliases. // Arithmetic shift right. 
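// ---------------------------------------------------------------------------
// Worked bit-level illustration (standalone, not V8 code) of what the bfi
// alias above encodes via bfm: insert the low `width` bits of rn into rd
// starting at bit `lsb`, leaving the other bits of rd untouched.
#include <cassert>
#include <cstdint>
int main() {
  uint64_t rd = 0xFFFFFFFFFFFFFFFFULL;
  uint64_t rn = 0x5;                     // low 3 bits: 101
  unsigned lsb = 8, width = 3;
  uint64_t mask = ((1ULL << width) - 1) << lsb;
  rd = (rd & ~mask) | ((rn << lsb) & mask);
  assert(rd == 0xFFFFFFFFFFFFFDFFULL);   // bits 8..10 now hold 101
  return 0;
}
// ---------------------------------------------------------------------------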
void asr(const Register& rd, const Register& rn, unsigned shift) { - ASSERT(shift < rd.SizeInBits()); + DCHECK(shift < rd.SizeInBits()); sbfm(rd, rn, shift, rd.SizeInBits() - 1); } @@ -1151,8 +1244,8 @@ class Assembler : public AssemblerBase { const Register& rn, unsigned lsb, unsigned width) { - ASSERT(width >= 1); - ASSERT(lsb + width <= rn.SizeInBits()); + DCHECK(width >= 1); + DCHECK(lsb + width <= rn.SizeInBits()); sbfm(rd, rn, (rd.SizeInBits() - lsb) & (rd.SizeInBits() - 1), width - 1); } @@ -1161,8 +1254,8 @@ class Assembler : public AssemblerBase { const Register& rn, unsigned lsb, unsigned width) { - ASSERT(width >= 1); - ASSERT(lsb + width <= rn.SizeInBits()); + DCHECK(width >= 1); + DCHECK(lsb + width <= rn.SizeInBits()); sbfm(rd, rn, lsb, lsb + width - 1); } @@ -1185,13 +1278,13 @@ class Assembler : public AssemblerBase { // Logical shift left. void lsl(const Register& rd, const Register& rn, unsigned shift) { unsigned reg_size = rd.SizeInBits(); - ASSERT(shift < reg_size); + DCHECK(shift < reg_size); ubfm(rd, rn, (reg_size - shift) % reg_size, reg_size - shift - 1); } // Logical shift right. void lsr(const Register& rd, const Register& rn, unsigned shift) { - ASSERT(shift < rd.SizeInBits()); + DCHECK(shift < rd.SizeInBits()); ubfm(rd, rn, shift, rd.SizeInBits() - 1); } @@ -1200,8 +1293,8 @@ class Assembler : public AssemblerBase { const Register& rn, unsigned lsb, unsigned width) { - ASSERT(width >= 1); - ASSERT(lsb + width <= rn.SizeInBits()); + DCHECK(width >= 1); + DCHECK(lsb + width <= rn.SizeInBits()); ubfm(rd, rn, (rd.SizeInBits() - lsb) & (rd.SizeInBits() - 1), width - 1); } @@ -1210,8 +1303,8 @@ class Assembler : public AssemblerBase { const Register& rn, unsigned lsb, unsigned width) { - ASSERT(width >= 1); - ASSERT(lsb + width <= rn.SizeInBits()); + DCHECK(width >= 1); + DCHECK(lsb + width <= rn.SizeInBits()); ubfm(rd, rn, lsb, lsb + width - 1); } @@ -1358,9 +1451,6 @@ class Assembler : public AssemblerBase { // Memory instructions. - // Load literal from pc + offset_from_pc. - void LoadLiteral(const CPURegister& rt, int offset_from_pc); - // Load integer or FP register. void ldr(const CPURegister& rt, const MemOperand& src); @@ -1407,12 +1497,11 @@ class Assembler : public AssemblerBase { void stnp(const CPURegister& rt, const CPURegister& rt2, const MemOperand& dst); - // Load literal to register. - void ldr(const Register& rt, uint64_t imm); + // Load literal to register from a pc relative address. + void ldr_pcrel(const CPURegister& rt, int imm19); - // Load literal to FP register. - void ldr(const FPRegister& ft, double imm); - void ldr(const FPRegister& ft, float imm); + // Load literal to register. + void ldr(const CPURegister& rt, const Immediate& imm); // Move instructions. The default shift of -1 indicates that the move // instruction will calculate an appropriate 16-bit immediate and left shift @@ -1485,7 +1574,7 @@ class Assembler : public AssemblerBase { }; void nop(NopMarkerTypes n) { - ASSERT((FIRST_NOP_MARKER <= n) && (n <= LAST_NOP_MARKER)); + DCHECK((FIRST_NOP_MARKER <= n) && (n <= LAST_NOP_MARKER)); mov(Register::XRegFromCode(n), Register::XRegFromCode(n)); } @@ -1646,7 +1735,7 @@ class Assembler : public AssemblerBase { // subsequent instructions. void EmitStringData(const char * string) { size_t len = strlen(string) + 1; - ASSERT(RoundUp(len, kInstructionSize) <= static_cast<size_t>(kGap)); + DCHECK(RoundUp(len, kInstructionSize) <= static_cast<size_t>(kGap)); EmitData(string, len); // Pad with NULL characters until pc_ is aligned. 
const char pad[] = {'\0', '\0', '\0', '\0'}; @@ -1666,7 +1755,9 @@ class Assembler : public AssemblerBase { // Code generation helpers -------------------------------------------------- - unsigned num_pending_reloc_info() const { return num_pending_reloc_info_; } + bool IsConstPoolEmpty() const { return constpool_.IsEmpty(); } + + Instruction* pc() const { return Instruction::Cast(pc_); } Instruction* InstructionAt(int offset) const { return reinterpret_cast<Instruction*>(buffer_ + offset); @@ -1678,44 +1769,44 @@ class Assembler : public AssemblerBase { // Register encoding. static Instr Rd(CPURegister rd) { - ASSERT(rd.code() != kSPRegInternalCode); + DCHECK(rd.code() != kSPRegInternalCode); return rd.code() << Rd_offset; } static Instr Rn(CPURegister rn) { - ASSERT(rn.code() != kSPRegInternalCode); + DCHECK(rn.code() != kSPRegInternalCode); return rn.code() << Rn_offset; } static Instr Rm(CPURegister rm) { - ASSERT(rm.code() != kSPRegInternalCode); + DCHECK(rm.code() != kSPRegInternalCode); return rm.code() << Rm_offset; } static Instr Ra(CPURegister ra) { - ASSERT(ra.code() != kSPRegInternalCode); + DCHECK(ra.code() != kSPRegInternalCode); return ra.code() << Ra_offset; } static Instr Rt(CPURegister rt) { - ASSERT(rt.code() != kSPRegInternalCode); + DCHECK(rt.code() != kSPRegInternalCode); return rt.code() << Rt_offset; } static Instr Rt2(CPURegister rt2) { - ASSERT(rt2.code() != kSPRegInternalCode); + DCHECK(rt2.code() != kSPRegInternalCode); return rt2.code() << Rt2_offset; } // These encoding functions allow the stack pointer to be encoded, and // disallow the zero register. static Instr RdSP(Register rd) { - ASSERT(!rd.IsZero()); + DCHECK(!rd.IsZero()); return (rd.code() & kRegCodeMask) << Rd_offset; } static Instr RnSP(Register rn) { - ASSERT(!rn.IsZero()); + DCHECK(!rn.IsZero()); return (rn.code() & kRegCodeMask) << Rn_offset; } @@ -1830,7 +1921,6 @@ class Assembler : public AssemblerBase { void CheckVeneerPool(bool force_emit, bool require_jump, int margin = kVeneerDistanceMargin); - class BlockPoolsScope { public: explicit BlockPoolsScope(Assembler* assem) : assem_(assem) { @@ -1846,10 +1936,6 @@ class Assembler : public AssemblerBase { DISALLOW_IMPLICIT_CONSTRUCTORS(BlockPoolsScope); }; - // Available for constrained code generation scopes. Prefer - // MacroAssembler::Mov() when possible. - inline void LoadRelocated(const CPURegister& rt, const Operand& operand); - protected: inline const Register& AppropriateZeroRegFor(const CPURegister& reg) const; @@ -1859,6 +1945,10 @@ class Assembler : public AssemblerBase { static bool IsImmLSUnscaled(ptrdiff_t offset); static bool IsImmLSScaled(ptrdiff_t offset, LSDataSize size); + void LoadStorePair(const CPURegister& rt, const CPURegister& rt2, + const MemOperand& addr, LoadStorePairOp op); + static bool IsImmLSPair(ptrdiff_t offset, LSDataSize size); + void Logical(const Register& rd, const Register& rn, const Operand& operand, @@ -1916,6 +2006,7 @@ class Assembler : public AssemblerBase { const CPURegister& rt, const CPURegister& rt2); static inline LoadStorePairNonTemporalOp StorePairNonTemporalOpFor( const CPURegister& rt, const CPURegister& rt2); + static inline LoadLiteralOp LoadLiteralOpFor(const CPURegister& rt); // Remove the specified branch from the unbound label link chain. 
// If available, a veneer for this label can be used for other branches in the @@ -1940,19 +2031,10 @@ const Operand& operand, FlagsUpdate S, Instr op); - void LoadStorePair(const CPURegister& rt, - const CPURegister& rt2, - const MemOperand& addr, - LoadStorePairOp op); void LoadStorePairNonTemporal(const CPURegister& rt, const CPURegister& rt2, const MemOperand& addr, LoadStorePairNonTemporalOp op); - // Register the relocation information for the operand and load its value - // into rt. - void LoadRelocatedValue(const CPURegister& rt, - const Operand& operand, - LoadLiteralOp op); void ConditionalSelect(const Register& rd, const Register& rn, const Register& rm, @@ -1999,11 +2081,16 @@ // instructions. void BlockConstPoolFor(int instructions); + // Set how far from current pc the next constant pool check will be. + void SetNextConstPoolCheckIn(int instructions) { + next_constant_pool_check_ = pc_offset() + instructions * kInstructionSize; + } + // Emit the instruction at pc_. void Emit(Instr instruction) { STATIC_ASSERT(sizeof(*pc_) == 1); STATIC_ASSERT(sizeof(instruction) == kInstructionSize); - ASSERT((pc_ + sizeof(instruction)) <= (buffer_ + buffer_size_)); + DCHECK((pc_ + sizeof(instruction)) <= (buffer_ + buffer_size_)); memcpy(pc_, &instruction, sizeof(instruction)); pc_ += sizeof(instruction); @@ -2012,8 +2099,8 @@ // Emit data inline in the instruction stream. void EmitData(void const * data, unsigned size) { - ASSERT(sizeof(*pc_) == 1); - ASSERT((pc_ + size) <= (buffer_ + buffer_size_)); + DCHECK(sizeof(*pc_) == 1); + DCHECK((pc_ + size) <= (buffer_ + buffer_size_)); // TODO(all): Somehow register we have some data here. Then we can // disassemble it correctly. @@ -2030,12 +2117,13 @@ int next_constant_pool_check_; // Constant pool generation - // Pools are emitted in the instruction stream, preferably after unconditional - // jumps or after returns from functions (in dead code locations). - // If a long code sequence does not contain unconditional jumps, it is - // necessary to emit the constant pool before the pool gets too far from the - // location it is accessed from. In this case, we emit a jump over the emitted - // constant pool. + // Pools are emitted in the instruction stream. They are emitted when: + // * the distance to the first use is above a pre-defined distance or + // * the number of entries in the pool is above a pre-defined size or + // * code generation is finished + // If a pool needs to be emitted before code generation is finished, a branch + // over the emitted pool will be inserted. + + // Constants in the pool may be addresses of functions that get relocated; // if so, a relocation info entry is associated with the constant pool entry. @@ -2043,34 +2131,22 @@ // expensive. By default we only check again once a number of instructions // has been generated. That also means that the sizing of the buffers is not // an exact science, and that we rely on some slop to not overrun buffers. - static const int kCheckConstPoolIntervalInst = 128; - static const int kCheckConstPoolInterval = - kCheckConstPoolIntervalInst * kInstructionSize; - - // Constants in pools are accessed via pc relative addressing, which can - // reach +/-4KB thereby defining a maximum distance between the instruction - // and the accessed constant.
- static const int kMaxDistToConstPool = 4 * KB; - static const int kMaxNumPendingRelocInfo = - kMaxDistToConstPool / kInstructionSize; - - - // Average distance between a constant pool and the first instruction - // accessing the constant pool. Longer distance should result in less I-cache - // pollution. - // In practice the distance will be smaller since constant pool emission is - // forced after function return and sometimes after unconditional branches. - static const int kAvgDistToConstPool = - kMaxDistToConstPool - kCheckConstPoolInterval; + static const int kCheckConstPoolInterval = 128; + + // Distance to first use after which a pool will be emitted. Pool entries + // are accessed with a pc-relative load, therefore this cannot be more than + // 1 * MB. Since constant pool emission checks are interval-based, this value + // is an approximation. + static const int kApproxMaxDistToConstPool = 64 * KB; + + // Number of pool entries after which a pool will be emitted. Since constant + // pool emission checks are interval-based, this value is an approximation. + static const int kApproxMaxPoolEntryCount = 512; // Emission of the constant pool may be blocked in some code sequences. int const_pool_blocked_nesting_; // Block emission if this is not zero. int no_const_pool_before_; // Block emission before this pc offset. - // Keep track of the first instruction requiring a constant pool entry - // since the previous constant pool was emitted. - int first_const_pool_use_; - // Emission of the veneer pools may be blocked in some code sequences. int veneer_pool_blocked_nesting_; // Block emission if this is not zero. @@ -2086,10 +2162,8 @@ // If every instruction in a long sequence is accessing the pool, we need one // pending relocation entry per instruction. - // the buffer of pending relocation info - RelocInfo pending_reloc_info_[kMaxNumPendingRelocInfo]; - // number of pending reloc info entries in the buffer - int num_pending_reloc_info_; + // The pending constant pool. + ConstPool constpool_; // Relocation for a type-recording IC has the AST id added to it. This // member variable is a way to pass the information from the call site to @@ -2103,7 +2177,7 @@ // Record the AST id of the CallIC being compiled, so that it can be placed // in the relocation information. void SetRecordedAstId(TypeFeedbackId ast_id) { - ASSERT(recorded_ast_id_.IsNone()); + DCHECK(recorded_ast_id_.IsNone()); recorded_ast_id_ = ast_id; } @@ -2151,7 +2225,7 @@ static const int kVeneerDistanceCheckMargin = kVeneerNoProtectionFactor * kVeneerDistanceMargin; int unresolved_branches_first_limit() const { - ASSERT(!unresolved_branches_.empty()); + DCHECK(!unresolved_branches_.empty()); return unresolved_branches_.begin()->first; } // This is similar to next_constant_pool_check_ and helps reduce the overhead @@ -2176,6 +2250,7 @@ PositionsRecorder positions_recorder_; friend class PositionsRecorder; friend class EnsureSpace; + friend class ConstPool; }; class PatchingAssembler : public Assembler { @@ -2203,24 +2278,21 @@ ~PatchingAssembler() { // Const pool should still be blocked. - ASSERT(is_const_pool_blocked()); + DCHECK(is_const_pool_blocked()); EndBlockPools(); // Verify we have generated the number of instructions we expected.
- ASSERT((pc_offset() + kGap) == buffer_size_); + DCHECK((pc_offset() + kGap) == buffer_size_); // Verify no relocation information has been emitted. - ASSERT(num_pending_reloc_info() == 0); + DCHECK(IsConstPoolEmpty()); // Flush the Instruction cache. size_t length = buffer_size_ - kGap; - CPU::FlushICache(buffer_, length); + CpuFeatures::FlushICache(buffer_, length); } - static const int kMovInt64NInstrs = 4; - void MovInt64(const Register& rd, int64_t imm); - // See definition of PatchAdrFar() for details. - static const int kAdrFarPatchableNNops = kMovInt64NInstrs - 1; - static const int kAdrFarPatchableNInstrs = kAdrFarPatchableNNops + 3; - void PatchAdrFar(Instruction* target); + static const int kAdrFarPatchableNNops = 2; + static const int kAdrFarPatchableNInstrs = kAdrFarPatchableNNops + 2; + void PatchAdrFar(ptrdiff_t target_offset); }; diff --git a/deps/v8/src/arm64/builtins-arm64.cc b/deps/v8/src/arm64/builtins-arm64.cc index fec5fef99af..2e0aed77a59 100644 --- a/deps/v8/src/arm64/builtins-arm64.cc +++ b/deps/v8/src/arm64/builtins-arm64.cc @@ -2,16 +2,16 @@ // Use of this source code is governed by a BSD-style license that can be // found in the LICENSE file. -#include "v8.h" +#include "src/v8.h" #if V8_TARGET_ARCH_ARM64 -#include "codegen.h" -#include "debug.h" -#include "deoptimizer.h" -#include "full-codegen.h" -#include "runtime.h" -#include "stub-cache.h" +#include "src/codegen.h" +#include "src/debug.h" +#include "src/deoptimizer.h" +#include "src/full-codegen.h" +#include "src/runtime.h" +#include "src/stub-cache.h" namespace v8 { namespace internal { @@ -66,7 +66,7 @@ void Builtins::Generate_Adaptor(MacroAssembler* masm, num_extra_args = 1; __ Push(x1); } else { - ASSERT(extra_args == NO_EXTRA_ARGUMENTS); + DCHECK(extra_args == NO_EXTRA_ARGUMENTS); } // JumpToExternalReference expects x0 to contain the number of arguments @@ -294,7 +294,7 @@ void Builtins::Generate_InOptimizationQueue(MacroAssembler* masm) { __ CompareRoot(masm->StackPointer(), Heap::kStackLimitRootIndex); __ B(hs, &ok); - CallRuntimePassFunction(masm, Runtime::kHiddenTryInstallOptimizedCode); + CallRuntimePassFunction(masm, Runtime::kTryInstallOptimizedCode); GenerateTailCallToReturnedCode(masm); __ Bind(&ok); @@ -304,7 +304,6 @@ void Builtins::Generate_InOptimizationQueue(MacroAssembler* masm) { static void Generate_JSConstructStubHelper(MacroAssembler* masm, bool is_api_function, - bool count_constructions, bool create_memento) { // ----------- S t a t e ------------- // -- x0 : number of arguments @@ -315,12 +314,8 @@ static void Generate_JSConstructStubHelper(MacroAssembler* masm, // ----------------------------------- ASM_LOCATION("Builtins::Generate_JSConstructStubHelper"); - // Should never count constructions for api objects. - ASSERT(!is_api_function || !count_constructions); // Should never create mementos for api functions. - ASSERT(!is_api_function || !create_memento); - // Should never create mementos before slack tracking is finished. - ASSERT(!count_constructions || !create_memento); + DCHECK(!is_api_function || !create_memento); Isolate* isolate = masm->isolate(); @@ -366,24 +361,28 @@ static void Generate_JSConstructStubHelper(MacroAssembler* masm, __ CompareInstanceType(init_map, x10, JS_FUNCTION_TYPE); __ B(eq, &rt_call); - if (count_constructions) { + Register constructon_count = x14; + if (!is_api_function) { Label allocate; + MemOperand bit_field3 = + FieldMemOperand(init_map, Map::kBitField3Offset); + // Check if slack tracking is enabled. 
+ __ Ldr(x4, bit_field3); + __ DecodeField<Map::ConstructionCount>(constructon_count, x4); + __ Cmp(constructon_count, Operand(JSFunction::kNoSlackTracking)); + __ B(eq, &allocate); // Decrease generous allocation count. - __ Ldr(x3, FieldMemOperand(constructor, - JSFunction::kSharedFunctionInfoOffset)); - MemOperand constructor_count = - FieldMemOperand(x3, SharedFunctionInfo::kConstructionCountOffset); - __ Ldrb(x4, constructor_count); - __ Subs(x4, x4, 1); - __ Strb(x4, constructor_count); + __ Subs(x4, x4, Operand(1 << Map::ConstructionCount::kShift)); + __ Str(x4, bit_field3); + __ Cmp(constructon_count, Operand(JSFunction::kFinishSlackTracking)); __ B(ne, &allocate); // Push the constructor and map to the stack, and the constructor again // as argument to the runtime call. __ Push(constructor, init_map, constructor); - // The call will replace the stub, so the countdown is only done once. - __ CallRuntime(Runtime::kHiddenFinalizeInstanceSize, 1); + __ CallRuntime(Runtime::kFinalizeInstanceSize, 1); __ Pop(init_map, constructor); + __ Mov(constructon_count, Operand(JSFunction::kNoSlackTracking)); __ Bind(&allocate); } @@ -413,8 +412,8 @@ static void Generate_JSConstructStubHelper(MacroAssembler* masm, __ Add(first_prop, new_obj, JSObject::kHeaderSize); // Fill all of the in-object properties with the appropriate filler. - Register undef = x7; - __ LoadRoot(undef, Heap::kUndefinedValueRootIndex); + Register filler = x7; + __ LoadRoot(filler, Heap::kUndefinedValueRootIndex); // Obtain number of pre-allocated property fields and in-object // properties. @@ -432,48 +431,50 @@ static void Generate_JSConstructStubHelper(MacroAssembler* masm, Register prop_fields = x6; __ Sub(prop_fields, obj_size, JSObject::kHeaderSize / kPointerSize); - if (count_constructions) { + if (!is_api_function) { + Label no_inobject_slack_tracking; + + // Check if slack tracking is enabled. + __ Cmp(constructon_count, Operand(JSFunction::kNoSlackTracking)); + __ B(eq, &no_inobject_slack_tracking); + constructon_count = NoReg; + // Fill the pre-allocated fields with undef. - __ FillFields(first_prop, prealloc_fields, undef); + __ FillFields(first_prop, prealloc_fields, filler); - // Register first_non_prealloc is the offset of the first field after + // Update first_prop register to be the offset of the first field after // pre-allocated fields. - Register first_non_prealloc = x12; - __ Add(first_non_prealloc, first_prop, + __ Add(first_prop, first_prop, Operand(prealloc_fields, LSL, kPointerSizeLog2)); - first_prop = NoReg; - if (FLAG_debug_code) { - Register obj_end = x5; + Register obj_end = x14; __ Add(obj_end, new_obj, Operand(obj_size, LSL, kPointerSizeLog2)); - __ Cmp(first_non_prealloc, obj_end); + __ Cmp(first_prop, obj_end); __ Assert(le, kUnexpectedNumberOfPreAllocatedPropertyFields); } // Fill the remaining fields with one pointer filler map. - Register one_pointer_filler = x5; - Register non_prealloc_fields = x6; - __ LoadRoot(one_pointer_filler, Heap::kOnePointerFillerMapRootIndex); - __ Sub(non_prealloc_fields, prop_fields, prealloc_fields); - __ FillFields(first_non_prealloc, non_prealloc_fields, - one_pointer_filler); - prop_fields = NoReg; - } else if (create_memento) { + __ LoadRoot(filler, Heap::kOnePointerFillerMapRootIndex); + __ Sub(prop_fields, prop_fields, prealloc_fields); + + __ bind(&no_inobject_slack_tracking); + } + if (create_memento) { // Fill the pre-allocated fields with undef. 
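// ---------------------------------------------------------------------------
// Illustrative only -- the field position and width below are assumptions,
// not the real Map layout: how a countdown packed into bit_field3 can be
// read with a DecodeField-style shift/mask and decremented in place by
// subtracting 1 << kShift, as the generated code above does.
#include <cassert>
#include <cstdint>
const unsigned kShift = 14;                  // assumed field position
const uint32_t kMask = 0xFu << kShift;       // assumed 4-bit counter
uint32_t Decode(uint32_t bit_field3) { return (bit_field3 & kMask) >> kShift; }
uint32_t Decrement(uint32_t bit_field3) { return bit_field3 - (1u << kShift); }

int main() {
  uint32_t bf3 = (5u << kShift) | 0x3;       // counter == 5, low bits kept
  bf3 = Decrement(bf3);
  assert(Decode(bf3) == 4 && (bf3 & 0x3) == 0x3);
  return 0;
}
// ---------------------------------------------------------------------------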
- __ FillFields(first_prop, prop_fields, undef); + __ FillFields(first_prop, prop_fields, filler); __ Add(first_prop, new_obj, Operand(obj_size, LSL, kPointerSizeLog2)); __ LoadRoot(x14, Heap::kAllocationMementoMapRootIndex); - ASSERT_EQ(0 * kPointerSize, AllocationMemento::kMapOffset); + DCHECK_EQ(0 * kPointerSize, AllocationMemento::kMapOffset); __ Str(x14, MemOperand(first_prop, kPointerSize, PostIndex)); // Load the AllocationSite __ Peek(x14, 2 * kXRegSize); - ASSERT_EQ(1 * kPointerSize, AllocationMemento::kAllocationSiteOffset); + DCHECK_EQ(1 * kPointerSize, AllocationMemento::kAllocationSiteOffset); __ Str(x14, MemOperand(first_prop, kPointerSize, PostIndex)); first_prop = NoReg; } else { // Fill all of the property fields with undef. - __ FillFields(first_prop, prop_fields, undef); + __ FillFields(first_prop, prop_fields, filler); first_prop = NoReg; prop_fields = NoReg; } @@ -516,7 +517,7 @@ static void Generate_JSConstructStubHelper(MacroAssembler* masm, // Initialize the fields to undefined. Register elements = x10; __ Add(elements, new_array, FixedArray::kHeaderSize); - __ FillFields(elements, element_count, undef); + __ FillFields(elements, element_count, filler); // Store the initialized FixedArray into the properties field of the // JSObject. @@ -541,7 +542,7 @@ static void Generate_JSConstructStubHelper(MacroAssembler* masm, __ Peek(x4, 2 * kXRegSize); __ Push(x4); __ Push(constructor); // Argument for Runtime_NewObject. - __ CallRuntime(Runtime::kHiddenNewObjectWithAllocationSite, 2); + __ CallRuntime(Runtime::kNewObjectWithAllocationSite, 2); __ Mov(x4, x0); // If we ended up using the runtime, and we want a memento, then the // runtime call made it for us, and we shouldn't do create count @@ -549,7 +550,7 @@ static void Generate_JSConstructStubHelper(MacroAssembler* masm, __ jmp(&count_incremented); } else { __ Push(constructor); // Argument for Runtime_NewObject. - __ CallRuntime(Runtime::kHiddenNewObject, 1); + __ CallRuntime(Runtime::kNewObject, 1); __ Mov(x4, x0); } @@ -624,7 +625,7 @@ static void Generate_JSConstructStubHelper(MacroAssembler* masm, } // Store offset of return address for deoptimizer. - if (!is_api_function && !count_constructions) { + if (!is_api_function) { masm->isolate()->heap()->SetConstructStubDeoptPCOffset(masm->pc_offset()); } @@ -675,18 +676,13 @@ static void Generate_JSConstructStubHelper(MacroAssembler* masm, } -void Builtins::Generate_JSConstructStubCountdown(MacroAssembler* masm) { - Generate_JSConstructStubHelper(masm, false, true, false); -} - - void Builtins::Generate_JSConstructStubGeneric(MacroAssembler* masm) { - Generate_JSConstructStubHelper(masm, false, false, FLAG_pretenuring_call_new); + Generate_JSConstructStubHelper(masm, false, FLAG_pretenuring_call_new); } void Builtins::Generate_JSConstructStubApi(MacroAssembler* masm) { - Generate_JSConstructStubHelper(masm, true, false, false); + Generate_JSConstructStubHelper(masm, true, false); } @@ -786,7 +782,7 @@ void Builtins::Generate_JSConstructEntryTrampoline(MacroAssembler* masm) { void Builtins::Generate_CompileUnoptimized(MacroAssembler* masm) { - CallRuntimePassFunction(masm, Runtime::kHiddenCompileUnoptimized); + CallRuntimePassFunction(masm, Runtime::kCompileUnoptimized); GenerateTailCallToReturnedCode(masm); } @@ -796,11 +792,11 @@ static void CallCompileOptimized(MacroAssembler* masm, bool concurrent) { Register function = x1; // Preserve function. At the same time, push arguments for - // kHiddenCompileOptimized. + // kCompileOptimized. 
__ LoadObject(x10, masm->isolate()->factory()->ToBoolean(concurrent)); __ Push(function, function, x10); - __ CallRuntime(Runtime::kHiddenCompileOptimized, 2); + __ CallRuntime(Runtime::kCompileOptimized, 2); // Restore receiver. __ Pop(function); @@ -910,7 +906,7 @@ static void Generate_NotifyStubFailureHelper(MacroAssembler* masm, // preserve the registers with parameters. __ PushXRegList(kSafepointSavedRegisters); // Pass the function and deoptimization type to the runtime system. - __ CallRuntime(Runtime::kHiddenNotifyStubFailure, 0, save_doubles); + __ CallRuntime(Runtime::kNotifyStubFailure, 0, save_doubles); __ PopXRegList(kSafepointSavedRegisters); } @@ -940,7 +936,7 @@ static void Generate_NotifyDeoptimizedHelper(MacroAssembler* masm, // Pass the deoptimization type to the runtime system. __ Mov(x0, Smi::FromInt(static_cast<int>(type))); __ Push(x0); - __ CallRuntime(Runtime::kHiddenNotifyDeoptimized, 1); + __ CallRuntime(Runtime::kNotifyDeoptimized, 1); } // Get the full codegen state from the stack and untag it. @@ -1025,7 +1021,7 @@ void Builtins::Generate_OsrAfterStackCheck(MacroAssembler* masm) { __ B(hs, &ok); { FrameScope scope(masm, StackFrame::INTERNAL); - __ CallRuntime(Runtime::kHiddenStackGuard, 0); + __ CallRuntime(Runtime::kStackGuard, 0); } __ Jump(masm->isolate()->builtins()->OnStackReplacement(), RelocInfo::CODE_TARGET); @@ -1069,7 +1065,7 @@ void Builtins::Generate_FunctionCall(MacroAssembler* masm) { // 3a. Patch the first argument if necessary when calling a function. Label shift_arguments; __ Mov(call_type, static_cast<int>(call_type_JS_func)); - { Label convert_to_object, use_global_receiver, patch_receiver; + { Label convert_to_object, use_global_proxy, patch_receiver; // Change context eagerly in case we need the global receiver. __ Ldr(cp, FieldMemOperand(function, JSFunction::kContextOffset)); @@ -1093,8 +1089,8 @@ void Builtins::Generate_FunctionCall(MacroAssembler* masm) { __ JumpIfSmi(receiver, &convert_to_object); __ JumpIfRoot(receiver, Heap::kUndefinedValueRootIndex, - &use_global_receiver); - __ JumpIfRoot(receiver, Heap::kNullValueRootIndex, &use_global_receiver); + &use_global_proxy); + __ JumpIfRoot(receiver, Heap::kNullValueRootIndex, &use_global_proxy); STATIC_ASSERT(LAST_SPEC_OBJECT_TYPE == LAST_TYPE); __ JumpIfObjectType(receiver, scratch1, scratch2, @@ -1122,10 +1118,10 @@ void Builtins::Generate_FunctionCall(MacroAssembler* masm) { __ Mov(call_type, static_cast<int>(call_type_JS_func)); __ B(&patch_receiver); - __ Bind(&use_global_receiver); + __ Bind(&use_global_proxy); __ Ldr(receiver, GlobalObjectMemOperand()); __ Ldr(receiver, - FieldMemOperand(receiver, GlobalObject::kGlobalReceiverOffset)); + FieldMemOperand(receiver, GlobalObject::kGlobalProxyOffset)); __ Bind(&patch_receiver); @@ -1250,7 +1246,7 @@ void Builtins::Generate_FunctionApply(MacroAssembler* masm) { // TODO(jbramley): Check that the stack usage here is safe. __ Sub(x10, jssp, x10); // Check if the arguments will overflow the stack. - __ Cmp(x10, Operand(argc, LSR, kSmiShift - kPointerSizeLog2)); + __ Cmp(x10, Operand::UntagSmiAndScale(argc, kPointerSizeLog2)); __ B(gt, &enough_stack_space); // There is not enough stack space, so use a builtin to throw an appropriate // error. @@ -1282,7 +1278,7 @@ void Builtins::Generate_FunctionApply(MacroAssembler* masm) { // Compute and push the receiver. // Do not transform the receiver for strict mode functions. 
- Label convert_receiver_to_object, use_global_receiver; + Label convert_receiver_to_object, use_global_proxy; __ Ldr(w10, FieldMemOperand(x2, SharedFunctionInfo::kCompilerHintsOffset)); __ Tbnz(x10, SharedFunctionInfo::kStrictModeFunction, &push_receiver); // Do not transform the receiver for native functions. @@ -1290,9 +1286,9 @@ void Builtins::Generate_FunctionApply(MacroAssembler* masm) { // Compute the receiver in sloppy mode. __ JumpIfSmi(receiver, &convert_receiver_to_object); - __ JumpIfRoot(receiver, Heap::kNullValueRootIndex, &use_global_receiver); + __ JumpIfRoot(receiver, Heap::kNullValueRootIndex, &use_global_proxy); __ JumpIfRoot(receiver, Heap::kUndefinedValueRootIndex, - &use_global_receiver); + &use_global_proxy); // Check if the receiver is already a JavaScript object. STATIC_ASSERT(LAST_SPEC_OBJECT_TYPE == LAST_TYPE); @@ -1306,9 +1302,9 @@ void Builtins::Generate_FunctionApply(MacroAssembler* masm) { __ Mov(receiver, x0); __ B(&push_receiver); - __ Bind(&use_global_receiver); + __ Bind(&use_global_proxy); __ Ldr(x10, GlobalObjectMemOperand()); - __ Ldr(receiver, FieldMemOperand(x10, GlobalObject::kGlobalReceiverOffset)); + __ Ldr(receiver, FieldMemOperand(x10, GlobalObject::kGlobalProxyOffset)); // Push the receiver __ Bind(&push_receiver); diff --git a/deps/v8/src/arm64/code-stubs-arm64.cc b/deps/v8/src/arm64/code-stubs-arm64.cc index a2dd220586d..3ef118aae9e 100644 --- a/deps/v8/src/arm64/code-stubs-arm64.cc +++ b/deps/v8/src/arm64/code-stubs-arm64.cc @@ -2,354 +2,279 @@ // Use of this source code is governed by a BSD-style license that can be // found in the LICENSE file. -#include "v8.h" +#include "src/v8.h" #if V8_TARGET_ARCH_ARM64 -#include "bootstrapper.h" -#include "code-stubs.h" -#include "regexp-macro-assembler.h" -#include "stub-cache.h" +#include "src/bootstrapper.h" +#include "src/code-stubs.h" +#include "src/regexp-macro-assembler.h" +#include "src/stub-cache.h" namespace v8 { namespace internal { - void FastNewClosureStub::InitializeInterfaceDescriptor( CodeStubInterfaceDescriptor* descriptor) { + // cp: context // x2: function info - static Register registers[] = { x2 }; - descriptor->register_param_count_ = sizeof(registers) / sizeof(registers[0]); - descriptor->register_params_ = registers; - descriptor->deoptimization_handler_ = - Runtime::FunctionForId(Runtime::kHiddenNewClosureFromStubFailure)->entry; + Register registers[] = { cp, x2 }; + descriptor->Initialize( + MajorKey(), ARRAY_SIZE(registers), registers, + Runtime::FunctionForId(Runtime::kNewClosureFromStubFailure)->entry); } void FastNewContextStub::InitializeInterfaceDescriptor( CodeStubInterfaceDescriptor* descriptor) { + // cp: context // x1: function - static Register registers[] = { x1 }; - descriptor->register_param_count_ = sizeof(registers) / sizeof(registers[0]); - descriptor->register_params_ = registers; - descriptor->deoptimization_handler_ = NULL; + Register registers[] = { cp, x1 }; + descriptor->Initialize(MajorKey(), ARRAY_SIZE(registers), registers); } void ToNumberStub::InitializeInterfaceDescriptor( CodeStubInterfaceDescriptor* descriptor) { + // cp: context // x0: value - static Register registers[] = { x0 }; - descriptor->register_param_count_ = sizeof(registers) / sizeof(registers[0]); - descriptor->register_params_ = registers; - descriptor->deoptimization_handler_ = NULL; + Register registers[] = { cp, x0 }; + descriptor->Initialize(MajorKey(), ARRAY_SIZE(registers), registers); } void NumberToStringStub::InitializeInterfaceDescriptor( CodeStubInterfaceDescriptor* 
descriptor) { + // cp: context // x0: value - static Register registers[] = { x0 }; - descriptor->register_param_count_ = sizeof(registers) / sizeof(registers[0]); - descriptor->register_params_ = registers; - descriptor->deoptimization_handler_ = - Runtime::FunctionForId(Runtime::kHiddenNumberToString)->entry; + Register registers[] = { cp, x0 }; + descriptor->Initialize( + MajorKey(), ARRAY_SIZE(registers), registers, + Runtime::FunctionForId(Runtime::kNumberToStringRT)->entry); } void FastCloneShallowArrayStub::InitializeInterfaceDescriptor( CodeStubInterfaceDescriptor* descriptor) { + // cp: context // x3: array literals array // x2: array literal index // x1: constant elements - static Register registers[] = { x3, x2, x1 }; - descriptor->register_param_count_ = sizeof(registers) / sizeof(registers[0]); - descriptor->register_params_ = registers; - descriptor->deoptimization_handler_ = - Runtime::FunctionForId( - Runtime::kHiddenCreateArrayLiteralStubBailout)->entry; + Register registers[] = { cp, x3, x2, x1 }; + Representation representations[] = { + Representation::Tagged(), + Representation::Tagged(), + Representation::Smi(), + Representation::Tagged() }; + descriptor->Initialize( + MajorKey(), ARRAY_SIZE(registers), registers, + Runtime::FunctionForId(Runtime::kCreateArrayLiteralStubBailout)->entry, + representations); } void FastCloneShallowObjectStub::InitializeInterfaceDescriptor( CodeStubInterfaceDescriptor* descriptor) { + // cp: context // x3: object literals array // x2: object literal index // x1: constant properties // x0: object literal flags - static Register registers[] = { x3, x2, x1, x0 }; - descriptor->register_param_count_ = sizeof(registers) / sizeof(registers[0]); - descriptor->register_params_ = registers; - descriptor->deoptimization_handler_ = - Runtime::FunctionForId(Runtime::kHiddenCreateObjectLiteral)->entry; + Register registers[] = { cp, x3, x2, x1, x0 }; + descriptor->Initialize( + MajorKey(), ARRAY_SIZE(registers), registers, + Runtime::FunctionForId(Runtime::kCreateObjectLiteral)->entry); } void CreateAllocationSiteStub::InitializeInterfaceDescriptor( CodeStubInterfaceDescriptor* descriptor) { + // cp: context // x2: feedback vector // x3: call feedback slot - static Register registers[] = { x2, x3 }; - descriptor->register_param_count_ = sizeof(registers) / sizeof(registers[0]); - descriptor->register_params_ = registers; - descriptor->deoptimization_handler_ = NULL; + Register registers[] = { cp, x2, x3 }; + descriptor->Initialize(MajorKey(), ARRAY_SIZE(registers), registers); } -void KeyedLoadFastElementStub::InitializeInterfaceDescriptor( +void CallFunctionStub::InitializeInterfaceDescriptor( CodeStubInterfaceDescriptor* descriptor) { - // x1: receiver - // x0: key - static Register registers[] = { x1, x0 }; - descriptor->register_param_count_ = sizeof(registers) / sizeof(registers[0]); - descriptor->register_params_ = registers; - descriptor->deoptimization_handler_ = - FUNCTION_ADDR(KeyedLoadIC_MissFromStubFailure); + // x1 function the function to call + Register registers[] = {cp, x1}; + descriptor->Initialize(MajorKey(), ARRAY_SIZE(registers), registers); } -void KeyedLoadDictionaryElementStub::InitializeInterfaceDescriptor( +void CallConstructStub::InitializeInterfaceDescriptor( CodeStubInterfaceDescriptor* descriptor) { - // x1: receiver - // x0: key - static Register registers[] = { x1, x0 }; - descriptor->register_param_count_ = sizeof(registers) / sizeof(registers[0]); - descriptor->register_params_ = registers; - 
descriptor->deoptimization_handler_ = - FUNCTION_ADDR(KeyedLoadIC_MissFromStubFailure); + // x0 : number of arguments + // x1 : the function to call + // x2 : feedback vector + // x3 : slot in feedback vector (smi) (if x2 is not the megamorphic symbol) + // TODO(turbofan): So far we don't gather type feedback and hence skip the + // slot parameter, but ArrayConstructStub needs the vector to be undefined. + Register registers[] = {cp, x0, x1, x2}; + descriptor->Initialize(MajorKey(), ARRAY_SIZE(registers), registers); } void RegExpConstructResultStub::InitializeInterfaceDescriptor( CodeStubInterfaceDescriptor* descriptor) { + // cp: context // x2: length // x1: index (of last match) // x0: string - static Register registers[] = { x2, x1, x0 }; - descriptor->register_param_count_ = sizeof(registers) / sizeof(registers[0]); - descriptor->register_params_ = registers; - descriptor->deoptimization_handler_ = - Runtime::FunctionForId(Runtime::kHiddenRegExpConstructResult)->entry; -} - - -void LoadFieldStub::InitializeInterfaceDescriptor( - CodeStubInterfaceDescriptor* descriptor) { - // x0: receiver - static Register registers[] = { x0 }; - descriptor->register_param_count_ = sizeof(registers) / sizeof(registers[0]); - descriptor->register_params_ = registers; - descriptor->deoptimization_handler_ = NULL; -} - - -void KeyedLoadFieldStub::InitializeInterfaceDescriptor( - CodeStubInterfaceDescriptor* descriptor) { - // x1: receiver - static Register registers[] = { x1 }; - descriptor->register_param_count_ = sizeof(registers) / sizeof(registers[0]); - descriptor->register_params_ = registers; - descriptor->deoptimization_handler_ = NULL; -} - - -void StringLengthStub::InitializeInterfaceDescriptor( - CodeStubInterfaceDescriptor* descriptor) { - static Register registers[] = { x0, x2 }; - descriptor->register_param_count_ = 2; - descriptor->register_params_ = registers; - descriptor->deoptimization_handler_ = NULL; -} - - -void KeyedStringLengthStub::InitializeInterfaceDescriptor( - CodeStubInterfaceDescriptor* descriptor) { - static Register registers[] = { x1, x0 }; - descriptor->register_param_count_ = 2; - descriptor->register_params_ = registers; - descriptor->deoptimization_handler_ = NULL; -} - - -void KeyedStoreFastElementStub::InitializeInterfaceDescriptor( - CodeStubInterfaceDescriptor* descriptor) { - // x2: receiver - // x1: key - // x0: value - static Register registers[] = { x2, x1, x0 }; - descriptor->register_param_count_ = sizeof(registers) / sizeof(registers[0]); - descriptor->register_params_ = registers; - descriptor->deoptimization_handler_ = - FUNCTION_ADDR(KeyedStoreIC_MissFromStubFailure); + Register registers[] = { cp, x2, x1, x0 }; + descriptor->Initialize( + MajorKey(), ARRAY_SIZE(registers), registers, + Runtime::FunctionForId(Runtime::kRegExpConstructResult)->entry); } void TransitionElementsKindStub::InitializeInterfaceDescriptor( CodeStubInterfaceDescriptor* descriptor) { + // cp: context // x0: value (js_array) // x1: to_map - static Register registers[] = { x0, x1 }; - descriptor->register_param_count_ = sizeof(registers) / sizeof(registers[0]); - descriptor->register_params_ = registers; + Register registers[] = { cp, x0, x1 }; Address entry = Runtime::FunctionForId(Runtime::kTransitionElementsKind)->entry; - descriptor->deoptimization_handler_ = FUNCTION_ADDR(entry); + descriptor->Initialize(MajorKey(), ARRAY_SIZE(registers), registers, + FUNCTION_ADDR(entry)); } void CompareNilICStub::InitializeInterfaceDescriptor( CodeStubInterfaceDescriptor* descriptor) { + //
cp: context // x0: value to compare - static Register registers[] = { x0 }; - descriptor->register_param_count_ = sizeof(registers) / sizeof(registers[0]); - descriptor->register_params_ = registers; - descriptor->deoptimization_handler_ = - FUNCTION_ADDR(CompareNilIC_Miss); + Register registers[] = { cp, x0 }; + descriptor->Initialize(MajorKey(), ARRAY_SIZE(registers), registers, + FUNCTION_ADDR(CompareNilIC_Miss)); descriptor->SetMissHandler( ExternalReference(IC_Utility(IC::kCompareNilIC_Miss), isolate())); } +const Register InterfaceDescriptor::ContextRegister() { return cp; } + + static void InitializeArrayConstructorDescriptor( - CodeStubInterfaceDescriptor* descriptor, + CodeStub::Major major, CodeStubInterfaceDescriptor* descriptor, int constant_stack_parameter_count) { + // cp: context // x1: function // x2: allocation site with elements kind // x0: number of arguments to the constructor function - static Register registers_variable_args[] = { x1, x2, x0 }; - static Register registers_no_args[] = { x1, x2 }; + Address deopt_handler = Runtime::FunctionForId( + Runtime::kArrayConstructor)->entry; if (constant_stack_parameter_count == 0) { - descriptor->register_param_count_ = - sizeof(registers_no_args) / sizeof(registers_no_args[0]); - descriptor->register_params_ = registers_no_args; + Register registers[] = { cp, x1, x2 }; + descriptor->Initialize(major, ARRAY_SIZE(registers), registers, + deopt_handler, NULL, constant_stack_parameter_count, + JS_FUNCTION_STUB_MODE); } else { // stack param count needs (constructor pointer, and single argument) - descriptor->handler_arguments_mode_ = PASS_ARGUMENTS; - descriptor->stack_parameter_count_ = x0; - descriptor->register_param_count_ = - sizeof(registers_variable_args) / sizeof(registers_variable_args[0]); - descriptor->register_params_ = registers_variable_args; + Register registers[] = { cp, x1, x2, x0 }; + Representation representations[] = { + Representation::Tagged(), + Representation::Tagged(), + Representation::Tagged(), + Representation::Integer32() }; + descriptor->Initialize(major, ARRAY_SIZE(registers), registers, x0, + deopt_handler, representations, + constant_stack_parameter_count, + JS_FUNCTION_STUB_MODE, PASS_ARGUMENTS); } - - descriptor->hint_stack_parameter_count_ = constant_stack_parameter_count; - descriptor->function_mode_ = JS_FUNCTION_STUB_MODE; - descriptor->deoptimization_handler_ = - Runtime::FunctionForId(Runtime::kHiddenArrayConstructor)->entry; } void ArrayNoArgumentConstructorStub::InitializeInterfaceDescriptor( CodeStubInterfaceDescriptor* descriptor) { - InitializeArrayConstructorDescriptor(descriptor, 0); + InitializeArrayConstructorDescriptor(MajorKey(), descriptor, 0); } void ArraySingleArgumentConstructorStub::InitializeInterfaceDescriptor( CodeStubInterfaceDescriptor* descriptor) { - InitializeArrayConstructorDescriptor(descriptor, 1); + InitializeArrayConstructorDescriptor(MajorKey(), descriptor, 1); } void ArrayNArgumentsConstructorStub::InitializeInterfaceDescriptor( CodeStubInterfaceDescriptor* descriptor) { - InitializeArrayConstructorDescriptor(descriptor, -1); + InitializeArrayConstructorDescriptor(MajorKey(), descriptor, -1); } static void InitializeInternalArrayConstructorDescriptor( - CodeStubInterfaceDescriptor* descriptor, + CodeStub::Major major, CodeStubInterfaceDescriptor* descriptor, int constant_stack_parameter_count) { + // cp: context // x1: constructor function // x0: number of arguments to the constructor function - static Register registers_variable_args[] = { x1, x0 }; - static 
Register registers_no_args[] = { x1 }; + Address deopt_handler = Runtime::FunctionForId( + Runtime::kInternalArrayConstructor)->entry; if (constant_stack_parameter_count == 0) { - descriptor->register_param_count_ = - sizeof(registers_no_args) / sizeof(registers_no_args[0]); - descriptor->register_params_ = registers_no_args; + Register registers[] = { cp, x1 }; + descriptor->Initialize(major, ARRAY_SIZE(registers), registers, + deopt_handler, NULL, constant_stack_parameter_count, + JS_FUNCTION_STUB_MODE); } else { // stack param count needs (constructor pointer, and single argument) - descriptor->handler_arguments_mode_ = PASS_ARGUMENTS; - descriptor->stack_parameter_count_ = x0; - descriptor->register_param_count_ = - sizeof(registers_variable_args) / sizeof(registers_variable_args[0]); - descriptor->register_params_ = registers_variable_args; + Register registers[] = { cp, x1, x0 }; + Representation representations[] = { + Representation::Tagged(), + Representation::Tagged(), + Representation::Integer32() }; + descriptor->Initialize(major, ARRAY_SIZE(registers), registers, x0, + deopt_handler, representations, + constant_stack_parameter_count, + JS_FUNCTION_STUB_MODE, PASS_ARGUMENTS); } - - descriptor->hint_stack_parameter_count_ = constant_stack_parameter_count; - descriptor->function_mode_ = JS_FUNCTION_STUB_MODE; - descriptor->deoptimization_handler_ = - Runtime::FunctionForId(Runtime::kHiddenInternalArrayConstructor)->entry; } void InternalArrayNoArgumentConstructorStub::InitializeInterfaceDescriptor( CodeStubInterfaceDescriptor* descriptor) { - InitializeInternalArrayConstructorDescriptor(descriptor, 0); + InitializeInternalArrayConstructorDescriptor(MajorKey(), descriptor, 0); } void InternalArraySingleArgumentConstructorStub::InitializeInterfaceDescriptor( CodeStubInterfaceDescriptor* descriptor) { - InitializeInternalArrayConstructorDescriptor(descriptor, 1); + InitializeInternalArrayConstructorDescriptor(MajorKey(), descriptor, 1); } void InternalArrayNArgumentsConstructorStub::InitializeInterfaceDescriptor( CodeStubInterfaceDescriptor* descriptor) { - InitializeInternalArrayConstructorDescriptor(descriptor, -1); + InitializeInternalArrayConstructorDescriptor(MajorKey(), descriptor, -1); } void ToBooleanStub::InitializeInterfaceDescriptor( CodeStubInterfaceDescriptor* descriptor) { + // cp: context // x0: value - static Register registers[] = { x0 }; - descriptor->register_param_count_ = sizeof(registers) / sizeof(registers[0]); - descriptor->register_params_ = registers; - descriptor->deoptimization_handler_ = FUNCTION_ADDR(ToBooleanIC_Miss); + Register registers[] = { cp, x0 }; + descriptor->Initialize(MajorKey(), ARRAY_SIZE(registers), registers, + FUNCTION_ADDR(ToBooleanIC_Miss)); descriptor->SetMissHandler( ExternalReference(IC_Utility(IC::kToBooleanIC_Miss), isolate())); } -void StoreGlobalStub::InitializeInterfaceDescriptor( - CodeStubInterfaceDescriptor* descriptor) { - // x1: receiver - // x2: key (unused) - // x0: value - static Register registers[] = { x1, x2, x0 }; - descriptor->register_param_count_ = sizeof(registers) / sizeof(registers[0]); - descriptor->register_params_ = registers; - descriptor->deoptimization_handler_ = - FUNCTION_ADDR(StoreIC_MissFromStubFailure); -} - - -void ElementsTransitionAndStoreStub::InitializeInterfaceDescriptor( - CodeStubInterfaceDescriptor* descriptor) { - // x0: value - // x3: target map - // x1: key - // x2: receiver - static Register registers[] = { x0, x3, x1, x2 }; - descriptor->register_param_count_ = sizeof(registers) / 
sizeof(registers[0]); - descriptor->register_params_ = registers; - descriptor->deoptimization_handler_ = - FUNCTION_ADDR(ElementsTransitionAndStoreIC_Miss); -} - - void BinaryOpICStub::InitializeInterfaceDescriptor( CodeStubInterfaceDescriptor* descriptor) { + // cp: context // x1: left operand // x0: right operand - static Register registers[] = { x1, x0 }; - descriptor->register_param_count_ = sizeof(registers) / sizeof(registers[0]); - descriptor->register_params_ = registers; - descriptor->deoptimization_handler_ = FUNCTION_ADDR(BinaryOpIC_Miss); + Register registers[] = { cp, x1, x0 }; + descriptor->Initialize(MajorKey(), ARRAY_SIZE(registers), registers, + FUNCTION_ADDR(BinaryOpIC_Miss)); descriptor->SetMissHandler( ExternalReference(IC_Utility(IC::kBinaryOpIC_Miss), isolate())); } @@ -357,120 +282,108 @@ void BinaryOpICStub::InitializeInterfaceDescriptor( void BinaryOpWithAllocationSiteStub::InitializeInterfaceDescriptor( CodeStubInterfaceDescriptor* descriptor) { + // cp: context // x2: allocation site // x1: left operand // x0: right operand - static Register registers[] = { x2, x1, x0 }; - descriptor->register_param_count_ = sizeof(registers) / sizeof(registers[0]); - descriptor->register_params_ = registers; - descriptor->deoptimization_handler_ = - FUNCTION_ADDR(BinaryOpIC_MissWithAllocationSite); + Register registers[] = { cp, x2, x1, x0 }; + descriptor->Initialize(MajorKey(), ARRAY_SIZE(registers), registers, + FUNCTION_ADDR(BinaryOpIC_MissWithAllocationSite)); } void StringAddStub::InitializeInterfaceDescriptor( CodeStubInterfaceDescriptor* descriptor) { + // cp: context // x1: left operand // x0: right operand - static Register registers[] = { x1, x0 }; - descriptor->register_param_count_ = sizeof(registers) / sizeof(registers[0]); - descriptor->register_params_ = registers; - descriptor->deoptimization_handler_ = - Runtime::FunctionForId(Runtime::kHiddenStringAdd)->entry; + Register registers[] = { cp, x1, x0 }; + descriptor->Initialize(MajorKey(), ARRAY_SIZE(registers), registers, + Runtime::FunctionForId(Runtime::kStringAdd)->entry); } void CallDescriptors::InitializeForIsolate(Isolate* isolate) { - static PlatformCallInterfaceDescriptor default_descriptor = - PlatformCallInterfaceDescriptor(CAN_INLINE_TARGET_ADDRESS); + static PlatformInterfaceDescriptor default_descriptor = + PlatformInterfaceDescriptor(CAN_INLINE_TARGET_ADDRESS); - static PlatformCallInterfaceDescriptor noInlineDescriptor = - PlatformCallInterfaceDescriptor(NEVER_INLINE_TARGET_ADDRESS); + static PlatformInterfaceDescriptor noInlineDescriptor = + PlatformInterfaceDescriptor(NEVER_INLINE_TARGET_ADDRESS); { CallInterfaceDescriptor* descriptor = isolate->call_descriptor(Isolate::ArgumentAdaptorCall); - static Register registers[] = { x1, // JSFunction - cp, // context - x0, // actual number of arguments - x2, // expected number of arguments + Register registers[] = { cp, // context + x1, // JSFunction + x0, // actual number of arguments + x2, // expected number of arguments }; - static Representation representations[] = { - Representation::Tagged(), // JSFunction + Representation representations[] = { Representation::Tagged(), // context + Representation::Tagged(), // JSFunction Representation::Integer32(), // actual number of arguments Representation::Integer32(), // expected number of arguments }; - descriptor->register_param_count_ = 4; - descriptor->register_params_ = registers; - descriptor->param_representations_ = representations; - descriptor->platform_specific_descriptor_ = &default_descriptor; 
+ descriptor->Initialize(ARRAY_SIZE(registers), registers, + representations, &default_descriptor); } { CallInterfaceDescriptor* descriptor = isolate->call_descriptor(Isolate::KeyedCall); - static Register registers[] = { cp, // context - x2, // key + Register registers[] = { cp, // context + x2, // key }; - static Representation representations[] = { + Representation representations[] = { Representation::Tagged(), // context Representation::Tagged(), // key }; - descriptor->register_param_count_ = 2; - descriptor->register_params_ = registers; - descriptor->param_representations_ = representations; - descriptor->platform_specific_descriptor_ = &noInlineDescriptor; + descriptor->Initialize(ARRAY_SIZE(registers), registers, + representations, &noInlineDescriptor); } { CallInterfaceDescriptor* descriptor = isolate->call_descriptor(Isolate::NamedCall); - static Register registers[] = { cp, // context - x2, // name + Register registers[] = { cp, // context + x2, // name }; - static Representation representations[] = { + Representation representations[] = { Representation::Tagged(), // context Representation::Tagged(), // name }; - descriptor->register_param_count_ = 2; - descriptor->register_params_ = registers; - descriptor->param_representations_ = representations; - descriptor->platform_specific_descriptor_ = &noInlineDescriptor; + descriptor->Initialize(ARRAY_SIZE(registers), registers, + representations, &noInlineDescriptor); } { CallInterfaceDescriptor* descriptor = isolate->call_descriptor(Isolate::CallHandler); - static Register registers[] = { cp, // context - x0, // receiver + Register registers[] = { cp, // context + x0, // receiver }; - static Representation representations[] = { + Representation representations[] = { Representation::Tagged(), // context Representation::Tagged(), // receiver }; - descriptor->register_param_count_ = 2; - descriptor->register_params_ = registers; - descriptor->param_representations_ = representations; - descriptor->platform_specific_descriptor_ = &default_descriptor; + descriptor->Initialize(ARRAY_SIZE(registers), registers, + representations, &default_descriptor); } { CallInterfaceDescriptor* descriptor = isolate->call_descriptor(Isolate::ApiFunctionCall); - static Register registers[] = { x0, // callee - x4, // call_data - x2, // holder - x1, // api_function_address - cp, // context + Register registers[] = { cp, // context + x0, // callee + x4, // call_data + x2, // holder + x1, // api_function_address }; - static Representation representations[] = { + Representation representations[] = { + Representation::Tagged(), // context Representation::Tagged(), // callee Representation::Tagged(), // call_data Representation::Tagged(), // holder Representation::External(), // api_function_address - Representation::Tagged(), // context }; - descriptor->register_param_count_ = 5; - descriptor->register_params_ = registers; - descriptor->param_representations_ = representations; - descriptor->platform_specific_descriptor_ = &default_descriptor; + descriptor->Initialize(ARRAY_SIZE(registers), registers, + representations, &default_descriptor); } } @@ -483,22 +396,22 @@ void HydrogenCodeStub::GenerateLightweightMiss(MacroAssembler* masm) { isolate()->counters()->code_stubs()->Increment(); CodeStubInterfaceDescriptor* descriptor = GetInterfaceDescriptor(); - int param_count = descriptor->register_param_count_; + int param_count = descriptor->GetEnvironmentParameterCount(); { // Call the runtime system in a fresh internal frame. 
FrameScope scope(masm, StackFrame::INTERNAL); - ASSERT((descriptor->register_param_count_ == 0) || - x0.Is(descriptor->register_params_[param_count - 1])); + DCHECK((param_count == 0) || + x0.Is(descriptor->GetEnvironmentParameterRegister(param_count - 1))); // Push arguments MacroAssembler::PushPopQueue queue(masm); for (int i = 0; i < param_count; ++i) { - queue.Queue(descriptor->register_params_[i]); + queue.Queue(descriptor->GetEnvironmentParameterRegister(i)); } queue.PushQueued(); ExternalReference miss = descriptor->miss_handler(); - __ CallExternalReference(miss, descriptor->register_param_count_); + __ CallExternalReference(miss, param_count); } __ Ret(); @@ -509,10 +422,10 @@ void DoubleToIStub::Generate(MacroAssembler* masm) { Label done; Register input = source(); Register result = destination(); - ASSERT(is_truncating()); + DCHECK(is_truncating()); - ASSERT(result.Is64Bits()); - ASSERT(jssp.Is(masm->StackPointer())); + DCHECK(result.Is64Bits()); + DCHECK(jssp.Is(masm->StackPointer())); int double_offset = offset(); @@ -592,7 +505,7 @@ static void EmitIdenticalObjectComparison(MacroAssembler* masm, FPRegister double_scratch, Label* slow, Condition cond) { - ASSERT(!AreAliased(left, right, scratch)); + DCHECK(!AreAliased(left, right, scratch)); Label not_identical, return_equal, heap_number; Register result = x0; @@ -647,7 +560,7 @@ static void EmitIdenticalObjectComparison(MacroAssembler* masm, // it is handled in the parser (see Parser::ParseBinaryExpression). We are // only concerned with cases ge, le and eq here. if ((cond != lt) && (cond != gt)) { - ASSERT((cond == ge) || (cond == le) || (cond == eq)); + DCHECK((cond == ge) || (cond == le) || (cond == eq)); __ Bind(&heap_number); // Left and right are identical pointers to a heap number object. Return // non-equal if the heap number is a NaN, and equal otherwise. Comparing @@ -680,7 +593,7 @@ static void EmitStrictTwoHeapObjectCompare(MacroAssembler* masm, Register left_type, Register right_type, Register scratch) { - ASSERT(!AreAliased(left, right, left_type, right_type, scratch)); + DCHECK(!AreAliased(left, right, left_type, right_type, scratch)); if (masm->emit_debug_code()) { // We assume that the arguments are not identical. @@ -698,7 +611,7 @@ static void EmitStrictTwoHeapObjectCompare(MacroAssembler* masm, __ B(lt, &right_non_object); // Return non-zero - x0 already contains a non-zero pointer. - ASSERT(left.is(x0) || right.is(x0)); + DCHECK(left.is(x0) || right.is(x0)); Label return_not_equal; __ Bind(&return_not_equal); __ Ret(); @@ -736,9 +649,9 @@ static void EmitSmiNonsmiComparison(MacroAssembler* masm, Register scratch, Label* slow, bool strict) { - ASSERT(!AreAliased(left, right, scratch)); - ASSERT(!AreAliased(left_d, right_d)); - ASSERT((left.is(x0) && right.is(x1)) || + DCHECK(!AreAliased(left, right, scratch)); + DCHECK(!AreAliased(left_d, right_d)); + DCHECK((left.is(x0) && right.is(x1)) || (right.is(x0) && left.is(x1))); Register result = x0; @@ -811,7 +724,7 @@ static void EmitCheckForInternalizedStringsOrObjects(MacroAssembler* masm, Register right_type, Label* possible_strings, Label* not_both_strings) { - ASSERT(!AreAliased(left, right, left_map, right_map, left_type, right_type)); + DCHECK(!AreAliased(left, right, left_map, right_map, left_type, right_type)); Register result = x0; Label object_test; @@ -931,7 +844,7 @@ void ICCompareStub::GenerateGeneric(MacroAssembler* masm) { // Left and/or right is a NaN. 
Load the result register with whatever makes // the comparison fail, since comparisons with NaN always fail (except ne, // which is filtered out at a higher level.) - ASSERT(cond != ne); + DCHECK(cond != ne); if ((cond == lt) || (cond == le)) { __ Mov(result, GREATER); } else { @@ -1022,7 +935,7 @@ void ICCompareStub::GenerateGeneric(MacroAssembler* masm) { if ((cond == lt) || (cond == le)) { ncr = GREATER; } else { - ASSERT((cond == gt) || (cond == ge)); // remaining cases + DCHECK((cond == gt) || (cond == ge)); // remaining cases ncr = LESS; } __ Mov(x10, Smi::FromInt(ncr)); @@ -1086,11 +999,7 @@ void StoreRegistersStateStub::Generate(MacroAssembler* masm) { // Restore lr with the value it had before the call to this stub (the value // which must be pushed). __ Mov(lr, saved_lr); - if (save_doubles_ == kSaveFPRegs) { - __ PushSafepointRegistersAndDoubles(); - } else { - __ PushSafepointRegisters(); - } + __ PushSafepointRegisters(); __ Ret(return_address); } @@ -1101,11 +1010,7 @@ void RestoreRegistersStateStub::Generate(MacroAssembler* masm) { Register return_address = temps.AcquireX(); // Preserve the return address (lr will be clobbered by the pop). __ Mov(return_address, lr); - if (save_doubles_ == kSaveFPRegs) { - __ PopSafepointRegistersAndDoubles(); - } else { - __ PopSafepointRegisters(); - } + __ PopSafepointRegisters(); __ Ret(return_address); } @@ -1332,13 +1237,13 @@ void MathPowStub::Generate(MacroAssembler* masm) { __ Bind(&call_runtime); // Put the arguments back on the stack. __ Push(base_tagged, exponent_tagged); - __ TailCallRuntime(Runtime::kHiddenMathPow, 2, 1); + __ TailCallRuntime(Runtime::kMathPowRT, 2, 1); // Return. __ Bind(&done); __ AllocateHeapNumber(result_tagged, &call_runtime, scratch0, scratch1, result_double); - ASSERT(result_tagged.is(x0)); + DCHECK(result_tagged.is(x0)); __ IncrementCounter( isolate()->counters()->math_pow(), 1, scratch0, scratch1); __ Ret(); @@ -1377,18 +1282,14 @@ void CodeStub::GenerateStubsAheadOfTime(Isolate* isolate) { void StoreRegistersStateStub::GenerateAheadOfTime(Isolate* isolate) { - StoreRegistersStateStub stub1(isolate, kDontSaveFPRegs); - stub1.GetCode(); - StoreRegistersStateStub stub2(isolate, kSaveFPRegs); - stub2.GetCode(); + StoreRegistersStateStub stub(isolate); + stub.GetCode(); } void RestoreRegistersStateStub::GenerateAheadOfTime(Isolate* isolate) { - RestoreRegistersStateStub stub1(isolate, kDontSaveFPRegs); - stub1.GetCode(); - RestoreRegistersStateStub stub2(isolate, kSaveFPRegs); - stub2.GetCode(); + RestoreRegistersStateStub stub(isolate); + stub.GetCode(); } @@ -1446,7 +1347,7 @@ void CEntryStub::Generate(MacroAssembler* masm) { // // The arguments are in reverse order, so that arg[argc-2] is actually the // first argument to the target function and arg[0] is the last. - ASSERT(jssp.Is(__ StackPointer())); + DCHECK(jssp.Is(__ StackPointer())); const Register& argc_input = x0; const Register& target_input = x1; @@ -1473,7 +1374,7 @@ void CEntryStub::Generate(MacroAssembler* masm) { // registers. FrameScope scope(masm, StackFrame::MANUAL); __ EnterExitFrame(save_doubles_, x10, 3); - ASSERT(csp.Is(__ StackPointer())); + DCHECK(csp.Is(__ StackPointer())); // Poke callee-saved registers into reserved space. __ Poke(argv, 1 * kPointerSize); @@ -1523,7 +1424,7 @@ void CEntryStub::Generate(MacroAssembler* masm) { // untouched, and the stub either throws an exception by jumping to one of // the exception_returned label. 
- ASSERT(csp.Is(__ StackPointer())); + DCHECK(csp.Is(__ StackPointer())); // Prepare AAPCS64 arguments to pass to the builtin. __ Mov(x0, argc); @@ -1570,7 +1471,7 @@ void CEntryStub::Generate(MacroAssembler* masm) { __ Peek(target, 3 * kPointerSize); __ LeaveExitFrame(save_doubles_, x10, true); - ASSERT(jssp.Is(__ StackPointer())); + DCHECK(jssp.Is(__ StackPointer())); // Pop or drop the remaining stack slots and return from the stub. // jssp[24]: Arguments array (of size argc), including receiver. // jssp[16]: Preserved x23 (used for target). @@ -1642,7 +1543,7 @@ void CEntryStub::Generate(MacroAssembler* masm) { // Output: // x0: result. void JSEntryStub::GenerateBody(MacroAssembler* masm, bool is_construct) { - ASSERT(jssp.Is(__ StackPointer())); + DCHECK(jssp.Is(__ StackPointer())); Register code_entry = x0; // Enable instruction instrumentation. This only works on the simulator, and @@ -1696,7 +1597,7 @@ void JSEntryStub::GenerateBody(MacroAssembler* masm, bool is_construct) { __ B(&done); __ Bind(&non_outermost_js); // We spare one instruction by pushing xzr since the marker is 0. - ASSERT(Smi::FromInt(StackFrame::INNER_JSENTRY_FRAME) == NULL); + DCHECK(Smi::FromInt(StackFrame::INNER_JSENTRY_FRAME) == NULL); __ Push(xzr); __ Bind(&done); @@ -1798,7 +1699,7 @@ void JSEntryStub::GenerateBody(MacroAssembler* masm, bool is_construct) { // Reset the stack to the callee saved registers. __ Drop(-EntryFrameConstants::kCallerFPOffset, kByteSizeInBytes); // Restore the callee-saved registers and return. - ASSERT(jssp.Is(__ StackPointer())); + DCHECK(jssp.Is(__ StackPointer())); __ Mov(csp, jssp); __ SetStackPointer(csp); __ PopCalleeSavedRegisters(); @@ -1810,33 +1711,14 @@ void JSEntryStub::GenerateBody(MacroAssembler* masm, bool is_construct) { void FunctionPrototypeStub::Generate(MacroAssembler* masm) { Label miss; - Register receiver; - if (kind() == Code::KEYED_LOAD_IC) { - // ----------- S t a t e ------------- - // -- lr : return address - // -- x1 : receiver - // -- x0 : key - // ----------------------------------- - Register key = x0; - receiver = x1; - __ Cmp(key, Operand(isolate()->factory()->prototype_string())); - __ B(ne, &miss); - } else { - ASSERT(kind() == Code::LOAD_IC); - // ----------- S t a t e ------------- - // -- lr : return address - // -- x2 : name - // -- x0 : receiver - // -- sp[0] : receiver - // ----------------------------------- - receiver = x0; - } + Register receiver = LoadIC::ReceiverRegister(); - StubCompiler::GenerateLoadFunctionPrototype(masm, receiver, x10, x11, &miss); + NamedLoadHandlerCompiler::GenerateLoadFunctionPrototype(masm, receiver, x10, + x11, &miss); __ Bind(&miss); - StubCompiler::TailCallBuiltin(masm, - BaseLoadStoreStubCompiler::MissBuiltin(kind())); + PropertyAccessCompiler::TailCallBuiltin( + masm, PropertyAccessCompiler::MissBuiltin(Code::LOAD_IC)); } @@ -1884,7 +1766,7 @@ void InstanceofStub::Generate(MacroAssembler* masm) { // If there is a call site cache, don't look in the global cache, but do the // real lookup and update the call site cache. - if (!HasCallSiteInlineCheck()) { + if (!HasCallSiteInlineCheck() && !ReturnTrueFalseObject()) { Label miss; __ JumpIfNotRoot(function, Heap::kInstanceofCacheFunctionRootIndex, &miss); __ JumpIfNotRoot(map, Heap::kInstanceofCacheMapRootIndex, &miss); @@ -1916,6 +1798,7 @@ void InstanceofStub::Generate(MacroAssembler* masm) { } Label return_true, return_result; + Register smi_value = scratch1; { // Loop through the prototype chain looking for the function prototype. 
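// A sketch (illustrative only, not part of the patch) of what the prototype-
// chain loop below computes: InstanceofStub walks the receiver's prototype
// chain and compares each link against the function's prototype, with the
// result speculatively set to false before the loop. The types and names
// here are hypothetical stand-ins, not V8's object model.
struct ObjSketch { const ObjSketch* prototype; };
static bool IsInstanceOfSketch(const ObjSketch* object,
                               const ObjSketch* function_prototype,
                               const ObjSketch* null_value) {
  for (const ObjSketch* p = object->prototype; p != null_value;
       p = p->prototype) {
    if (p == function_prototype) return true;  // branch to &return_true
  }
  return false;  // fell off the chain with res_false still in result
}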
Register chain_map = x1; @@ -1926,6 +1809,10 @@ void InstanceofStub::Generate(MacroAssembler* masm) { __ LoadRoot(null_value, Heap::kNullValueRootIndex); // Speculatively set a result. __ Mov(result, res_false); + if (!HasCallSiteInlineCheck() && ReturnTrueFalseObject()) { + // Value to store in the cache cannot be an object. + __ Mov(smi_value, Smi::FromInt(1)); + } __ Bind(&loop); @@ -1948,14 +1835,19 @@ void InstanceofStub::Generate(MacroAssembler* masm) { // We cannot fall through to here. __ Bind(&return_true); __ Mov(result, res_true); + if (!HasCallSiteInlineCheck() && ReturnTrueFalseObject()) { + // Value to store in the cache cannot be an object. + __ Mov(smi_value, Smi::FromInt(0)); + } __ Bind(&return_result); if (HasCallSiteInlineCheck()) { - ASSERT(ReturnTrueFalseObject()); + DCHECK(ReturnTrueFalseObject()); __ Add(map_check_site, map_check_site, kDeltaToLoadBoolResult); __ GetRelocatedValueLocation(map_check_site, scratch2); __ Str(result, MemOperand(scratch2)); } else { - __ StoreRoot(result, Heap::kInstanceofCacheAnswerRootIndex); + Register cached_value = ReturnTrueFalseObject() ? smi_value : result; + __ StoreRoot(cached_value, Heap::kInstanceofCacheAnswerRootIndex); } __ Ret(); @@ -2082,9 +1974,8 @@ void ArgumentsAccessStub::GenerateNewSloppySlow(MacroAssembler* masm) { Register caller_fp = x10; __ Ldr(caller_fp, MemOperand(fp, StandardFrameConstants::kCallerFPOffset)); // Load and untag the context. - STATIC_ASSERT((kSmiShift / kBitsPerByte) == 4); - __ Ldr(w11, MemOperand(caller_fp, StandardFrameConstants::kContextOffset + - (kSmiShift / kBitsPerByte))); + __ Ldr(w11, UntagSmiMemOperand(caller_fp, + StandardFrameConstants::kContextOffset)); __ Cmp(w11, StackFrame::ARGUMENTS_ADAPTOR); __ B(ne, &runtime); @@ -2097,7 +1988,7 @@ void ArgumentsAccessStub::GenerateNewSloppySlow(MacroAssembler* masm) { __ Poke(x10, 1 * kXRegSize); __ Bind(&runtime); - __ TailCallRuntime(Runtime::kHiddenNewArgumentsFast, 3, 1); + __ TailCallRuntime(Runtime::kNewSloppyArguments, 3, 1); } @@ -2197,41 +2088,42 @@ void ArgumentsAccessStub::GenerateNewSloppyFast(MacroAssembler* masm) { // Get the arguments boilerplate from the current (global) context. 
- // x0 alloc_obj pointer to allocated objects (param map, backing - // store, arguments) - // x1 mapped_params number of mapped parameters, min(params, args) - // x2 arg_count number of function arguments - // x3 arg_count_smi number of function arguments (smi) - // x4 function function pointer - // x7 param_count number of function parameters - // x11 args_offset offset to args (or aliased args) boilerplate (uninit) - // x14 recv_arg pointer to receiver arguments + // x0 alloc_obj pointer to allocated objects (param map, backing + // store, arguments) + // x1 mapped_params number of mapped parameters, min(params, args) + // x2 arg_count number of function arguments + // x3 arg_count_smi number of function arguments (smi) + // x4 function function pointer + // x7 param_count number of function parameters + // x11 sloppy_args_map offset to args (or aliased args) map (uninit) + // x14 recv_arg pointer to receiver arguments Register global_object = x10; Register global_ctx = x10; - Register args_offset = x11; - Register aliased_args_offset = x10; + Register sloppy_args_map = x11; + Register aliased_args_map = x10; __ Ldr(global_object, GlobalObjectMemOperand()); __ Ldr(global_ctx, FieldMemOperand(global_object, GlobalObject::kNativeContextOffset)); - __ Ldr(args_offset, - ContextMemOperand(global_ctx, - Context::SLOPPY_ARGUMENTS_BOILERPLATE_INDEX)); - __ Ldr(aliased_args_offset, - ContextMemOperand(global_ctx, - Context::ALIASED_ARGUMENTS_BOILERPLATE_INDEX)); + __ Ldr(sloppy_args_map, + ContextMemOperand(global_ctx, Context::SLOPPY_ARGUMENTS_MAP_INDEX)); + __ Ldr(aliased_args_map, + ContextMemOperand(global_ctx, Context::ALIASED_ARGUMENTS_MAP_INDEX)); __ Cmp(mapped_params, 0); - __ CmovX(args_offset, aliased_args_offset, ne); + __ CmovX(sloppy_args_map, aliased_args_map, ne); // Copy the JS object part. - __ CopyFields(alloc_obj, args_offset, CPURegList(x10, x12, x13), - JSObject::kHeaderSize / kPointerSize); + __ Str(sloppy_args_map, FieldMemOperand(alloc_obj, JSObject::kMapOffset)); + __ LoadRoot(x10, Heap::kEmptyFixedArrayRootIndex); + __ Str(x10, FieldMemOperand(alloc_obj, JSObject::kPropertiesOffset)); + __ Str(x10, FieldMemOperand(alloc_obj, JSObject::kElementsOffset)); // Set up the callee in-object property. STATIC_ASSERT(Heap::kArgumentsCalleeIndex == 1); const int kCalleeOffset = JSObject::kHeaderSize + Heap::kArgumentsCalleeIndex * kPointerSize; + __ AssertNotSmi(function); __ Str(function, FieldMemOperand(alloc_obj, kCalleeOffset)); // Use the length and set that as an in-object property. @@ -2369,7 +2261,7 @@ void ArgumentsAccessStub::GenerateNewSloppyFast(MacroAssembler* masm) { // Do the runtime call to allocate the arguments object. __ Bind(&runtime); __ Push(function, recv_arg, arg_count_smi); - __ TailCallRuntime(Runtime::kHiddenNewArgumentsFast, 3, 1); + __ TailCallRuntime(Runtime::kNewSloppyArguments, 3, 1); } @@ -2432,25 +2324,24 @@ void ArgumentsAccessStub::GenerateNewStrict(MacroAssembler* masm) { // Get the arguments boilerplate from the current (native) context. 
Register global_object = x10; Register global_ctx = x10; - Register args_offset = x4; + Register strict_args_map = x4; __ Ldr(global_object, GlobalObjectMemOperand()); __ Ldr(global_ctx, FieldMemOperand(global_object, GlobalObject::kNativeContextOffset)); - __ Ldr(args_offset, - ContextMemOperand(global_ctx, - Context::STRICT_ARGUMENTS_BOILERPLATE_INDEX)); + __ Ldr(strict_args_map, + ContextMemOperand(global_ctx, Context::STRICT_ARGUMENTS_MAP_INDEX)); // x0 alloc_obj pointer to allocated objects: parameter array and // arguments object // x1 param_count_smi number of parameters passed to function (smi) // x2 params pointer to parameters // x3 function function pointer - // x4 args_offset offset to arguments boilerplate + // x4 strict_args_map offset to arguments map // x13 param_count number of parameters passed to function - - // Copy the JS object part. - __ CopyFields(alloc_obj, args_offset, CPURegList(x5, x6, x7), - JSObject::kHeaderSize / kPointerSize); + __ Str(strict_args_map, FieldMemOperand(alloc_obj, JSObject::kMapOffset)); + __ LoadRoot(x5, Heap::kEmptyFixedArrayRootIndex); + __ Str(x5, FieldMemOperand(alloc_obj, JSObject::kPropertiesOffset)); + __ Str(x5, FieldMemOperand(alloc_obj, JSObject::kElementsOffset)); // Set the smi-tagged length as an in-object property. STATIC_ASSERT(Heap::kArgumentsLengthIndex == 0); @@ -2502,13 +2393,13 @@ void ArgumentsAccessStub::GenerateNewStrict(MacroAssembler* masm) { // Do the runtime call to allocate the arguments object. __ Bind(&runtime); __ Push(function, params, param_count_smi); - __ TailCallRuntime(Runtime::kHiddenNewStrictArgumentsFast, 3, 1); + __ TailCallRuntime(Runtime::kNewStrictArguments, 3, 1); } void RegExpExecStub::Generate(MacroAssembler* masm) { #ifdef V8_INTERPRETED_REGEXP - __ TailCallRuntime(Runtime::kHiddenRegExpExec, 4, 1); + __ TailCallRuntime(Runtime::kRegExpExecRT, 4, 1); #else // V8_INTERPRETED_REGEXP // Stack frame on entry. @@ -2587,7 +2478,7 @@ void RegExpExecStub::Generate(MacroAssembler* masm) { __ Cbz(x10, &runtime); // Check that the first argument is a JSRegExp object. - ASSERT(jssp.Is(__ StackPointer())); + DCHECK(jssp.Is(__ StackPointer())); __ Peek(jsregexp_object, kJSRegExpOffset); __ JumpIfSmi(jsregexp_object, &runtime); __ JumpIfNotObjectType(jsregexp_object, x10, x10, JS_REGEXP_TYPE, &runtime); @@ -2624,7 +2515,7 @@ void RegExpExecStub::Generate(MacroAssembler* masm) { // Initialize offset for possibly sliced string. __ Mov(sliced_string_offset, 0); - ASSERT(jssp.Is(__ StackPointer())); + DCHECK(jssp.Is(__ StackPointer())); __ Peek(subject, kSubjectOffset); __ JumpIfSmi(subject, &runtime); @@ -2696,8 +2587,8 @@ void RegExpExecStub::Generate(MacroAssembler* masm) { __ Ldrb(string_type, FieldMemOperand(x10, Map::kInstanceTypeOffset)); STATIC_ASSERT(kSeqStringTag == 0); // The underlying external string is never a short external string. - STATIC_CHECK(ExternalString::kMaxShortLength < ConsString::kMinLength); - STATIC_CHECK(ExternalString::kMaxShortLength < SlicedString::kMinLength); + STATIC_ASSERT(ExternalString::kMaxShortLength < ConsString::kMinLength); + STATIC_ASSERT(ExternalString::kMaxShortLength < SlicedString::kMinLength); __ TestAndBranchIfAnySet(string_type.X(), kStringRepresentationMask, &external_string); // Go to (7). @@ -2707,7 +2598,7 @@ void RegExpExecStub::Generate(MacroAssembler* masm) { // Check that the third argument is a positive smi less than the subject // string length. A negative value will be greater (unsigned comparison). 
- ASSERT(jssp.Is(__ StackPointer())); + DCHECK(jssp.Is(__ StackPointer())); __ Peek(x10, kPreviousIndexOffset); __ JumpIfNotSmi(x10, &runtime); __ Cmp(jsstring_length, x10); @@ -2725,7 +2616,7 @@ void RegExpExecStub::Generate(MacroAssembler* masm) { // Find the code object based on the assumptions above. // kDataAsciiCodeOffset and kDataUC16CodeOffset are adjacent, adds an offset // of kPointerSize to reach the latter. - ASSERT_EQ(JSRegExp::kDataAsciiCodeOffset + kPointerSize, + DCHECK_EQ(JSRegExp::kDataAsciiCodeOffset + kPointerSize, JSRegExp::kDataUC16CodeOffset); __ Mov(x10, kPointerSize); // We will need the encoding later: ASCII = 0x04 @@ -2749,7 +2640,7 @@ void RegExpExecStub::Generate(MacroAssembler* masm) { // Isolates: note we add an additional parameter here (isolate pointer). __ EnterExitFrame(false, x10, 1); - ASSERT(csp.Is(__ StackPointer())); + DCHECK(csp.Is(__ StackPointer())); // We have 9 arguments to pass to the regexp code, therefore we have to pass // one on the stack and the rest as registers. @@ -2853,7 +2744,7 @@ void RegExpExecStub::Generate(MacroAssembler* masm) { __ Add(number_of_capture_registers, x10, 2); // Check that the fourth object is a JSArray object. - ASSERT(jssp.Is(__ StackPointer())); + DCHECK(jssp.Is(__ StackPointer())); __ Peek(x10, kLastMatchInfoOffset); __ JumpIfSmi(x10, &runtime); __ JumpIfNotObjectType(x10, x11, x11, JS_ARRAY_TYPE, &runtime); @@ -2932,8 +2823,8 @@ void RegExpExecStub::Generate(MacroAssembler* masm) { // Store the smi values in the last match info. __ SmiTag(x10, current_offset); // Clearing the 32 bottom bits gives us a Smi. - STATIC_ASSERT(kSmiShift == 32); - __ And(x11, current_offset, ~kWRegMask); + STATIC_ASSERT(kSmiTag == 0); + __ Bic(x11, current_offset, kSmiShiftMask); __ Stp(x10, x11, MemOperand(last_match_offsets, kXRegSize * 2, PostIndex)); @@ -2982,7 +2873,7 @@ void RegExpExecStub::Generate(MacroAssembler* masm) { __ Bind(&runtime); __ PopCPURegList(used_callee_saved_registers); - __ TailCallRuntime(Runtime::kHiddenRegExpExec, 4, 1); + __ TailCallRuntime(Runtime::kRegExpExecRT, 4, 1); // Deferred code for string handling. // (6) Not a long external string? If yes, go to (8). @@ -3035,7 +2926,7 @@ static void GenerateRecordCallTarget(MacroAssembler* masm, Register scratch1, Register scratch2) { ASM_LOCATION("GenerateRecordCallTarget"); - ASSERT(!AreAliased(scratch1, scratch2, + DCHECK(!AreAliased(scratch1, scratch2, argc, function, feedback_vector, index)); // Cache the called function in a feedback vector slot. Cache states are // uninitialized, monomorphic (indicated by a JSFunction), and megamorphic. @@ -3045,9 +2936,9 @@ static void GenerateRecordCallTarget(MacroAssembler* masm, // index : slot in feedback vector (smi) Label initialize, done, miss, megamorphic, not_array_function; - ASSERT_EQ(*TypeFeedbackInfo::MegamorphicSentinel(masm->isolate()), + DCHECK_EQ(*TypeFeedbackInfo::MegamorphicSentinel(masm->isolate()), masm->isolate()->heap()->megamorphic_symbol()); - ASSERT_EQ(*TypeFeedbackInfo::UninitializedSentinel(masm->isolate()), + DCHECK_EQ(*TypeFeedbackInfo::UninitializedSentinel(masm->isolate()), masm->isolate()->heap()->uninitialized_symbol()); // Load the cache state. @@ -3112,7 +3003,7 @@ static void GenerateRecordCallTarget(MacroAssembler* masm, // CreateAllocationSiteStub expect the feedback vector in x2 and the slot // index in x3. 
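// A rough sketch of the feedback-slot state machine GenerateRecordCallTarget
// implements above, in hypothetical types (not V8's): a slot starts
// uninitialized, caches the first call target on a hit (a JSFunction, or an
// AllocationSite when the target is the Array function), and degrades to
// megamorphic once a second, different target is observed.
enum class FeedbackState { kUninitialized, kMonomorphic, kMegamorphic };
struct FeedbackSlotSketch {
  FeedbackState state = FeedbackState::kUninitialized;
  const void* target = nullptr;
};
static void RecordCallTargetSketch(FeedbackSlotSketch* slot,
                                   const void* function) {
  switch (slot->state) {
    case FeedbackState::kUninitialized:
      slot->state = FeedbackState::kMonomorphic;  // remember first target
      slot->target = function;
      break;
    case FeedbackState::kMonomorphic:
      if (slot->target != function) {
        slot->state = FeedbackState::kMegamorphic;  // give up on feedback
        slot->target = nullptr;
      }
      break;
    case FeedbackState::kMegamorphic:
      break;  // terminal state; stays megamorphic
  }
}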
- ASSERT(feedback_vector.Is(x2) && index.Is(x3)); + DCHECK(feedback_vector.Is(x2) && index.Is(x3)); __ CallStub(&create_stub); __ Pop(index, feedback_vector, function, argc); @@ -3192,10 +3083,10 @@ static void EmitWrapCase(MacroAssembler* masm, int argc, Label* cont) { } -void CallFunctionStub::Generate(MacroAssembler* masm) { - ASM_LOCATION("CallFunctionStub::Generate"); +static void CallFunctionNoFeedback(MacroAssembler* masm, + int argc, bool needs_checks, + bool call_as_method) { // x1 function the function to call - Register function = x1; Register type = x4; Label slow, non_function, wrap, cont; @@ -3203,7 +3094,7 @@ void CallFunctionStub::Generate(MacroAssembler* masm) { // TODO(jbramley): This function has a lot of unnamed registers. Name them, // and tidy things up a bit. - if (NeedsChecks()) { + if (needs_checks) { // Check that the function is really a JavaScript function. __ JumpIfSmi(function, &non_function); @@ -3213,18 +3104,17 @@ void CallFunctionStub::Generate(MacroAssembler* masm) { // Fast-case: Invoke the function now. // x1 function pushed function - int argc = argc_; ParameterCount actual(argc); - if (CallAsMethod()) { - if (NeedsChecks()) { + if (call_as_method) { + if (needs_checks) { EmitContinueIfStrictOrNative(masm, &cont); } // Compute the receiver in sloppy mode. __ Peek(x3, argc * kPointerSize); - if (NeedsChecks()) { + if (needs_checks) { __ JumpIfSmi(x3, &wrap); __ JumpIfObjectType(x3, x10, type, FIRST_SPEC_OBJECT_TYPE, &wrap, lt); } else { @@ -3238,20 +3128,25 @@ void CallFunctionStub::Generate(MacroAssembler* masm) { actual, JUMP_FUNCTION, NullCallWrapper()); - - if (NeedsChecks()) { + if (needs_checks) { // Slow-case: Non-function called. __ Bind(&slow); EmitSlowCase(masm, argc, function, type, &non_function); } - if (CallAsMethod()) { + if (call_as_method) { __ Bind(&wrap); EmitWrapCase(masm, argc, &cont); } } +void CallFunctionStub::Generate(MacroAssembler* masm) { + ASM_LOCATION("CallFunctionStub::Generate"); + CallFunctionNoFeedback(masm, argc_, NeedsChecks(), CallAsMethod()); +} + + void CallConstructStub::Generate(MacroAssembler* masm) { ASM_LOCATION("CallConstructStub::Generate"); // x0 : number of arguments @@ -3331,6 +3226,50 @@ static void EmitLoadTypeFeedbackVector(MacroAssembler* masm, Register vector) { } +void CallIC_ArrayStub::Generate(MacroAssembler* masm) { + // x1 - function + // x3 - slot id + Label miss; + Register function = x1; + Register feedback_vector = x2; + Register index = x3; + Register scratch = x4; + + EmitLoadTypeFeedbackVector(masm, feedback_vector); + + __ LoadGlobalFunction(Context::ARRAY_FUNCTION_INDEX, scratch); + __ Cmp(function, scratch); + __ B(ne, &miss); + + __ Mov(x0, Operand(arg_count())); + + __ Add(scratch, feedback_vector, + Operand::UntagSmiAndScale(index, kPointerSizeLog2)); + __ Ldr(scratch, FieldMemOperand(scratch, FixedArray::kHeaderSize)); + + // Verify that scratch contains an AllocationSite + Register map = x5; + __ Ldr(map, FieldMemOperand(scratch, HeapObject::kMapOffset)); + __ JumpIfNotRoot(map, Heap::kAllocationSiteMapRootIndex, &miss); + + Register allocation_site = feedback_vector; + __ Mov(allocation_site, scratch); + ArrayConstructorStub stub(masm->isolate(), arg_count()); + __ TailCallStub(&stub); + + __ bind(&miss); + GenerateMiss(masm, IC::kCallIC_Customization_Miss); + + // The slow case, we need this no matter what to complete a call after a miss. 
+ CallFunctionNoFeedback(masm, + arg_count(), + true, + CallAsMethod()); + + __ Unreachable(); +} + + void CallICStub::Generate(MacroAssembler* masm) { ASM_LOCATION("CallICStub"); @@ -3390,7 +3329,10 @@ void CallICStub::Generate(MacroAssembler* masm) { __ JumpIfRoot(x4, Heap::kUninitializedSymbolRootIndex, &miss); if (!FLAG_trace_ic) { - // We are going megamorphic, and we don't want to visit the runtime. + // We are going megamorphic. If the feedback is a JSFunction, it is fine + // to handle it here. More complex cases are dealt with in the runtime. + __ AssertNotSmi(x4); + __ JumpIfNotObjectType(x4, x5, x5, JS_FUNCTION_TYPE, &miss); __ Add(x4, feedback_vector, Operand::UntagSmiAndScale(index, kPointerSizeLog2)); __ LoadRoot(x5, Heap::kMegamorphicSymbolRootIndex); @@ -3400,7 +3342,7 @@ void CallICStub::Generate(MacroAssembler* masm) { // We are here because tracing is on or we are going monomorphic. __ bind(&miss); - GenerateMiss(masm); + GenerateMiss(masm, IC::kCallIC_Miss); // the slow case __ bind(&slow_start); @@ -3414,7 +3356,7 @@ void CallICStub::Generate(MacroAssembler* masm) { } -void CallICStub::GenerateMiss(MacroAssembler* masm) { +void CallICStub::GenerateMiss(MacroAssembler* masm, IC::UtilityId id) { ASM_LOCATION("CallICStub[Miss]"); // Get the receiver of the function from the stack; 1 ~ return address. @@ -3427,7 +3369,7 @@ void CallICStub::GenerateMiss(MacroAssembler* masm) { __ Push(x4, x1, x2, x3); // Call the entry. - ExternalReference miss = ExternalReference(IC_Utility(IC::kCallIC_Miss), + ExternalReference miss = ExternalReference(IC_Utility(id), masm->isolate()); __ CallExternalReference(miss, 4); @@ -3487,9 +3429,9 @@ void StringCharCodeAtGenerator::GenerateSlow( if (index_flags_ == STRING_INDEX_IS_NUMBER) { __ CallRuntime(Runtime::kNumberToIntegerMapMinusZero, 1); } else { - ASSERT(index_flags_ == STRING_INDEX_IS_ARRAY_INDEX); + DCHECK(index_flags_ == STRING_INDEX_IS_ARRAY_INDEX); // NumberToSmi discards numbers that are not exact integers. - __ CallRuntime(Runtime::kHiddenNumberToSmi, 1); + __ CallRuntime(Runtime::kNumberToSmi, 1); } // Save the conversion result before the pop instructions below // have a chance to overwrite it. @@ -3512,7 +3454,7 @@ void StringCharCodeAtGenerator::GenerateSlow( call_helper.BeforeCall(masm); __ SmiTag(index_); __ Push(object_, index_); - __ CallRuntime(Runtime::kHiddenStringCharCodeAt, 2); + __ CallRuntime(Runtime::kStringCharCodeAtRT, 2); __ Mov(result_, x0); call_helper.AfterCall(masm); __ B(&exit_); @@ -3528,8 +3470,7 @@ void StringCharFromCodeGenerator::GenerateFast(MacroAssembler* masm) { __ LoadRoot(result_, Heap::kSingleCharacterStringCacheRootIndex); // At this point code register contains smi tagged ASCII char code. - STATIC_ASSERT(kSmiShift > kPointerSizeLog2); - __ Add(result_, result_, Operand(code_, LSR, kSmiShift - kPointerSizeLog2)); + __ Add(result_, result_, Operand::UntagSmiAndScale(code_, kPointerSizeLog2)); __ Ldr(result_, FieldMemOperand(result_, FixedArray::kHeaderSize)); __ JumpIfRoot(result_, Heap::kUndefinedValueRootIndex, &slow_case_); __ Bind(&exit_); @@ -3555,7 +3496,7 @@ void StringCharFromCodeGenerator::GenerateSlow( void ICCompareStub::GenerateSmis(MacroAssembler* masm) { // Inputs are in x0 (lhs) and x1 (rhs). - ASSERT(state_ == CompareIC::SMI); + DCHECK(state_ == CompareIC::SMI); ASM_LOCATION("ICCompareStub[Smis]"); Label miss; // Bail out (to 'miss') unless both x0 and x1 are smis. 
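// A sketch of the smi fast path guarded by the bail-out above, under the
// arm64 smi encoding this file relies on (kSmiTag == 0 in the low bit,
// 32-bit payload in the upper word). Both tags can be checked with a single
// OR, and, overflow aside, the tagged difference already orders two smis;
// the stub untags first where an ordering rather than pure equality is
// needed. Helper names below are illustrative, not V8 internals.
#include <cstdint>
static bool BothSmisSketch(uint64_t a, uint64_t b) {
  return ((a | b) & 1u) == 0;  // low tag bit clear on both => both smis
}
static int64_t CompareSmisSketch(int64_t lhs_tagged, int64_t rhs_tagged) {
  return lhs_tagged - rhs_tagged;  // sign gives <, ==, > (ignoring overflow)
}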
@@ -3577,7 +3518,7 @@ void ICCompareStub::GenerateSmis(MacroAssembler* masm) { void ICCompareStub::GenerateNumbers(MacroAssembler* masm) { - ASSERT(state_ == CompareIC::NUMBER); + DCHECK(state_ == CompareIC::NUMBER); ASM_LOCATION("ICCompareStub[HeapNumbers]"); Label unordered, maybe_undefined1, maybe_undefined2; @@ -3645,7 +3586,7 @@ void ICCompareStub::GenerateNumbers(MacroAssembler* masm) { void ICCompareStub::GenerateInternalizedStrings(MacroAssembler* masm) { - ASSERT(state_ == CompareIC::INTERNALIZED_STRING); + DCHECK(state_ == CompareIC::INTERNALIZED_STRING); ASM_LOCATION("ICCompareStub[InternalizedStrings]"); Label miss; @@ -3683,9 +3624,9 @@ void ICCompareStub::GenerateInternalizedStrings(MacroAssembler* masm) { void ICCompareStub::GenerateUniqueNames(MacroAssembler* masm) { - ASSERT(state_ == CompareIC::UNIQUE_NAME); + DCHECK(state_ == CompareIC::UNIQUE_NAME); ASM_LOCATION("ICCompareStub[UniqueNames]"); - ASSERT(GetCondition() == eq); + DCHECK(GetCondition() == eq); Label miss; Register result = x0; @@ -3722,7 +3663,7 @@ void ICCompareStub::GenerateUniqueNames(MacroAssembler* masm) { void ICCompareStub::GenerateStrings(MacroAssembler* masm) { - ASSERT(state_ == CompareIC::STRING); + DCHECK(state_ == CompareIC::STRING); ASM_LOCATION("ICCompareStub[Strings]"); Label miss; @@ -3763,7 +3704,7 @@ void ICCompareStub::GenerateStrings(MacroAssembler* masm) { // because we already know they are not identical. We know they are both // strings. if (equality) { - ASSERT(GetCondition() == eq); + DCHECK(GetCondition() == eq); STATIC_ASSERT(kInternalizedTag == 0); Label not_internalized_strings; __ Orr(x12, lhs_type, rhs_type); @@ -3794,7 +3735,7 @@ void ICCompareStub::GenerateStrings(MacroAssembler* masm) { if (equality) { __ TailCallRuntime(Runtime::kStringEquals, 2, 1); } else { - __ TailCallRuntime(Runtime::kHiddenStringCompare, 2, 1); + __ TailCallRuntime(Runtime::kStringCompare, 2, 1); } __ Bind(&miss); @@ -3803,7 +3744,7 @@ void ICCompareStub::GenerateStrings(MacroAssembler* masm) { void ICCompareStub::GenerateObjects(MacroAssembler* masm) { - ASSERT(state_ == CompareIC::OBJECT); + DCHECK(state_ == CompareIC::OBJECT); ASM_LOCATION("ICCompareStub[Objects]"); Label miss; @@ -3817,7 +3758,7 @@ void ICCompareStub::GenerateObjects(MacroAssembler* masm) { __ JumpIfNotObjectType(rhs, x10, x10, JS_OBJECT_TYPE, &miss); __ JumpIfNotObjectType(lhs, x10, x10, JS_OBJECT_TYPE, &miss); - ASSERT(GetCondition() == eq); + DCHECK(GetCondition() == eq); __ Sub(result, rhs, lhs); __ Ret(); @@ -3893,12 +3834,12 @@ void ICCompareStub::GenerateMiss(MacroAssembler* masm) { void StringHelper::GenerateHashInit(MacroAssembler* masm, Register hash, Register character) { - ASSERT(!AreAliased(hash, character)); + DCHECK(!AreAliased(hash, character)); // hash = character + (character << 10); __ LoadRoot(hash, Heap::kHashSeedRootIndex); // Untag smi seed and add the character. - __ Add(hash, character, Operand(hash, LSR, kSmiShift)); + __ Add(hash, character, Operand::UntagSmi(hash)); // Compute hashes modulo 2^32 using a 32-bit W register. Register hash_w = hash.W(); @@ -3913,7 +3854,7 @@ void StringHelper::GenerateHashInit(MacroAssembler* masm, void StringHelper::GenerateHashAddCharacter(MacroAssembler* masm, Register hash, Register character) { - ASSERT(!AreAliased(hash, character)); + DCHECK(!AreAliased(hash, character)); // hash += character; __ Add(hash, hash, character); @@ -3934,7 +3875,7 @@ void StringHelper::GenerateHashGetHash(MacroAssembler* masm, // Compute hashes modulo 2^32 using a 32-bit W register. 
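// The StringHelper hash helpers around here compute a seeded one-at-a-time
// string hash in 32-bit W registers, i.e. modulo 2^32. A sketch of the whole
// pipeline (GenerateHashInit, GenerateHashAddCharacter per character, then
// the GenerateHashGetHash finalisation); the function name and signature are
// illustrative, and V8 additionally masks the final value, which is omitted.
#include <cstddef>
#include <cstdint>
static uint32_t StringHashSketch(const char* chars, size_t length,
                                 uint32_t seed) {
  uint32_t hash = seed;
  for (size_t i = 0; i < length; ++i) {
    hash += static_cast<uint8_t>(chars[i]);  // GenerateHashAddCharacter
    hash += hash << 10;
    hash ^= hash >> 6;
  }
  hash += hash << 3;  // GenerateHashGetHash finalisation
  hash ^= hash >> 11;
  hash += hash << 15;
  return hash;
}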
Register hash_w = hash.W(); Register scratch_w = scratch.W(); - ASSERT(!AreAliased(hash_w, scratch_w)); + DCHECK(!AreAliased(hash_w, scratch_w)); // hash += hash << 3; __ Add(hash_w, hash_w, Operand(hash_w, LSL, 3)); @@ -4184,7 +4125,7 @@ void SubStringStub::Generate(MacroAssembler* masm) { __ Ret(); __ Bind(&runtime); - __ TailCallRuntime(Runtime::kHiddenSubString, 3, 1); + __ TailCallRuntime(Runtime::kSubString, 3, 1); __ bind(&single_char); // x1: result_length @@ -4208,7 +4149,7 @@ void StringCompareStub::GenerateFlatAsciiStringEquals(MacroAssembler* masm, Register scratch1, Register scratch2, Register scratch3) { - ASSERT(!AreAliased(left, right, scratch1, scratch2, scratch3)); + DCHECK(!AreAliased(left, right, scratch1, scratch2, scratch3)); Register result = x0; Register left_length = scratch1; Register right_length = scratch2; @@ -4251,7 +4192,7 @@ void StringCompareStub::GenerateCompareFlatAsciiStrings(MacroAssembler* masm, Register scratch2, Register scratch3, Register scratch4) { - ASSERT(!AreAliased(left, right, scratch1, scratch2, scratch3, scratch4)); + DCHECK(!AreAliased(left, right, scratch1, scratch2, scratch3, scratch4)); Label result_not_equal, compare_lengths; // Find minimum length and length difference. @@ -4272,7 +4213,7 @@ void StringCompareStub::GenerateCompareFlatAsciiStrings(MacroAssembler* masm, // Compare lengths - strings up to min-length are equal. __ Bind(&compare_lengths); - ASSERT(Smi::FromInt(EQUAL) == static_cast<Smi*>(0)); + DCHECK(Smi::FromInt(EQUAL) == static_cast<Smi*>(0)); // Use length_delta as result if it's zero. Register result = x0; @@ -4297,7 +4238,7 @@ void StringCompareStub::GenerateAsciiCharsCompareLoop( Register scratch1, Register scratch2, Label* chars_not_equal) { - ASSERT(!AreAliased(left, right, length, scratch1, scratch2)); + DCHECK(!AreAliased(left, right, length, scratch1, scratch2)); // Change index to run from -length to -1 by adding length to string // start. This means that loop ends when index reaches zero, which @@ -4361,7 +4302,7 @@ void StringCompareStub::Generate(MacroAssembler* masm) { // Call the runtime. // Returns -1 (less), 0 (equal), or 1 (greater) tagged as a small integer. - __ TailCallRuntime(Runtime::kHiddenStringCompare, 2, 1); + __ TailCallRuntime(Runtime::kStringCompare, 2, 1); } @@ -4392,12 +4333,6 @@ void BinaryOpICWithAllocationSiteStub::Generate(MacroAssembler* masm) { } -bool CodeStub::CanUseFPRegisters() { - // FP registers always available on ARM64. - return true; -} - - void RecordWriteStub::GenerateIncremental(MacroAssembler* masm, Mode mode) { // We need some extra registers for this stub, they have been allocated // but we need to save them before using them. @@ -4443,8 +4378,8 @@ void RecordWriteStub::InformIncrementalMarker(MacroAssembler* masm) { regs_.SaveCallerSaveRegisters(masm, save_fp_regs_mode_); Register address = x0.Is(regs_.address()) ? regs_.scratch0() : regs_.address(); - ASSERT(!address.Is(regs_.object())); - ASSERT(!address.Is(x0)); + DCHECK(!address.Is(regs_.object())); + DCHECK(!address.Is(x0)); __ Mov(address, regs_.address()); __ Mov(x0, regs_.object()); __ Mov(x1, address); @@ -4606,7 +4541,7 @@ void StoreArrayLiteralElementStub::Generate(MacroAssembler* masm) { __ JumpIfSmi(value, &smi_element); // Jump if array's ElementsKind is not FAST_ELEMENTS or FAST_HOLEY_ELEMENTS. 
- __ Tbnz(bitfield2, MaskToBit(FAST_ELEMENTS << Map::kElementsKindShift), + __ Tbnz(bitfield2, MaskToBit(FAST_ELEMENTS << Map::ElementsKindBits::kShift), &fast_elements); // Store into the array literal requires an elements transition. Call into @@ -4646,7 +4581,7 @@ void StoreArrayLiteralElementStub::Generate(MacroAssembler* masm) { void StubFailureTrampolineStub::Generate(MacroAssembler* masm) { - CEntryStub ces(isolate(), 1, fp_registers_ ? kSaveFPRegs : kDontSaveFPRegs); + CEntryStub ces(isolate(), 1, kSaveFPRegs); __ Call(ces.GetCode(), RelocInfo::CODE_TARGET); int parameter_count_offset = StubFailureTrampolineFrame::kCallerStackParameterCountFrameOffset; @@ -4661,22 +4596,31 @@ void StubFailureTrampolineStub::Generate(MacroAssembler* masm) { } -// The entry hook is a "BumpSystemStackPointer" instruction (sub), followed by -// a "Push lr" instruction, followed by a call. -static const unsigned int kProfileEntryHookCallSize = - Assembler::kCallSizeWithRelocation + (2 * kInstructionSize); +static unsigned int GetProfileEntryHookCallSize(MacroAssembler* masm) { + // The entry hook is a "BumpSystemStackPointer" instruction (sub), + // followed by a "Push lr" instruction, followed by a call. + unsigned int size = + Assembler::kCallSizeWithRelocation + (2 * kInstructionSize); + if (CpuFeatures::IsSupported(ALWAYS_ALIGN_CSP)) { + // If ALWAYS_ALIGN_CSP then there will be an extra bic instruction in + // "BumpSystemStackPointer". + size += kInstructionSize; + } + return size; +} void ProfileEntryHookStub::MaybeCallEntryHook(MacroAssembler* masm) { if (masm->isolate()->function_entry_hook() != NULL) { ProfileEntryHookStub stub(masm->isolate()); Assembler::BlockConstPoolScope no_const_pools(masm); + DontEmitDebugCodeScope no_debug_code(masm); Label entry_hook_call_start; __ Bind(&entry_hook_call_start); __ Push(lr); __ CallStub(&stub); - ASSERT(masm->SizeOfCodeGeneratedSince(&entry_hook_call_start) == - kProfileEntryHookCallSize); + DCHECK(masm->SizeOfCodeGeneratedSince(&entry_hook_call_start) == + GetProfileEntryHookCallSize(masm)); __ Pop(lr); } @@ -4690,11 +4634,11 @@ void ProfileEntryHookStub::Generate(MacroAssembler* masm) { // from anywhere. // TODO(jbramley): What about FP registers? __ PushCPURegList(kCallerSaved); - ASSERT(kCallerSaved.IncludesAliasOf(lr)); + DCHECK(kCallerSaved.IncludesAliasOf(lr)); const int kNumSavedRegs = kCallerSaved.Count(); // Compute the function's address as the first argument. - __ Sub(x0, lr, kProfileEntryHookCallSize); + __ Sub(x0, lr, GetProfileEntryHookCallSize(masm)); #if V8_HOST_ARCH_ARM64 uintptr_t entry_hook = @@ -4751,7 +4695,7 @@ void DirectCEntryStub::GenerateCall(MacroAssembler* masm, Register target) { // Make sure the caller configured the stack pointer (see comment in // DirectCEntryStub::Generate). - ASSERT(csp.Is(__ StackPointer())); + DCHECK(csp.Is(__ StackPointer())); intptr_t code = reinterpret_cast<intptr_t>(GetCode().location()); @@ -4776,7 +4720,7 @@ void NameDictionaryLookupStub::GeneratePositiveLookup( Register name, Register scratch1, Register scratch2) { - ASSERT(!AreAliased(elements, name, scratch1, scratch2)); + DCHECK(!AreAliased(elements, name, scratch1, scratch2)); // Assert that name contains a string. __ AssertName(name); @@ -4793,7 +4737,7 @@ void NameDictionaryLookupStub::GeneratePositiveLookup( // Add the probe offset (i + i * i) left shifted to avoid right shifting // the hash in a separate instruction. The value hash + i + i * i is right // shifted in the following and instruction. 
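// The comment above describes quadratic probing: on attempt i the probe
// offset grows as i + i*i, the sum is wrapped to the power-of-two capacity,
// and the slot index is then scaled by kEntrySize == 3 words (the
// index += index << 1 below). A sketch with illustrative names, taking the
// offset formula from the comment:
#include <cstdint>
static uint32_t ProbeEntryIndexSketch(uint32_t hash, uint32_t attempt,
                                      uint32_t capacity_mask) {
  uint32_t probe = (hash + attempt + attempt * attempt) & capacity_mask;
  return probe * 3;  // three words per dictionary entry
}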
- ASSERT(NameDictionary::GetProbeOffset(i) < + DCHECK(NameDictionary::GetProbeOffset(i) < 1 << (32 - Name::kHashFieldOffset)); __ Add(scratch2, scratch2, Operand( NameDictionary::GetProbeOffset(i) << Name::kHashShift)); @@ -4801,7 +4745,7 @@ void NameDictionaryLookupStub::GeneratePositiveLookup( __ And(scratch2, scratch1, Operand(scratch2, LSR, Name::kHashShift)); // Scale the index by multiplying by the element size. - ASSERT(NameDictionary::kEntrySize == 3); + DCHECK(NameDictionary::kEntrySize == 3); __ Add(scratch2, scratch2, Operand(scratch2, LSL, 1)); // Check if the key is identical to the name. @@ -4824,7 +4768,7 @@ void NameDictionaryLookupStub::GeneratePositiveLookup( __ PushCPURegList(spill_list); if (name.is(x0)) { - ASSERT(!elements.is(x1)); + DCHECK(!elements.is(x1)); __ Mov(x1, name); __ Mov(x0, elements); } else { @@ -4853,8 +4797,8 @@ void NameDictionaryLookupStub::GenerateNegativeLookup(MacroAssembler* masm, Register properties, Handle<Name> name, Register scratch0) { - ASSERT(!AreAliased(receiver, properties, scratch0)); - ASSERT(name->IsUniqueName()); + DCHECK(!AreAliased(receiver, properties, scratch0)); + DCHECK(name->IsUniqueName()); // If names of slots in range from 1 to kProbes - 1 for the hash value are // not equal to the name and kProbes-th slot is not used (its name is the // undefined value), it guarantees the hash table doesn't contain the @@ -4870,7 +4814,7 @@ void NameDictionaryLookupStub::GenerateNegativeLookup(MacroAssembler* masm, __ And(index, index, name->Hash() + NameDictionary::GetProbeOffset(i)); // Scale the index by multiplying by the entry size. - ASSERT(NameDictionary::kEntrySize == 3); + DCHECK(NameDictionary::kEntrySize == 3); __ Add(index, index, Operand(index, LSL, 1)); // index *= 3. Register entity_name = scratch0; @@ -4951,7 +4895,7 @@ void NameDictionaryLookupStub::Generate(MacroAssembler* masm) { // Add the probe offset (i + i * i) left shifted to avoid right shifting // the hash in a separate instruction. The value hash + i + i * i is right // shifted in the following and instruction. - ASSERT(NameDictionary::GetProbeOffset(i) < + DCHECK(NameDictionary::GetProbeOffset(i) < 1 << (32 - Name::kHashFieldOffset)); __ Add(index, hash, NameDictionary::GetProbeOffset(i) << Name::kHashShift); @@ -4961,7 +4905,7 @@ void NameDictionaryLookupStub::Generate(MacroAssembler* masm) { __ And(index, mask, Operand(index, LSR, Name::kHashShift)); // Scale the index by multiplying by the entry size. - ASSERT(NameDictionary::kEntrySize == 3); + DCHECK(NameDictionary::kEntrySize == 3); __ Add(index, index, Operand(index, LSL, 1)); // index *= 3. __ Add(index, dictionary, Operand(index, LSL, kPointerSizeLog2)); @@ -5397,7 +5341,7 @@ void CallApiFunctionStub::Generate(MacroAssembler* masm) { FrameScope frame_scope(masm, StackFrame::MANUAL); __ EnterExitFrame(false, x10, kApiStackSpace + kCallApiFunctionSpillSpace); - ASSERT(!AreAliased(x0, api_function_address)); + DCHECK(!AreAliased(x0, api_function_address)); // x0 = FunctionCallbackInfo& // Arguments is after the return address. 
__ Add(x0, masm->StackPointer(), 1 * kPointerSize); diff --git a/deps/v8/src/arm64/code-stubs-arm64.h b/deps/v8/src/arm64/code-stubs-arm64.h index a92445c47a4..75a945299fb 100644 --- a/deps/v8/src/arm64/code-stubs-arm64.h +++ b/deps/v8/src/arm64/code-stubs-arm64.h @@ -5,7 +5,7 @@ #ifndef V8_ARM64_CODE_STUBS_ARM64_H_ #define V8_ARM64_CODE_STUBS_ARM64_H_ -#include "ic-inl.h" +#include "src/ic-inl.h" namespace v8 { namespace internal { @@ -27,8 +27,8 @@ class StoreBufferOverflowStub: public PlatformCodeStub { private: SaveFPRegsMode save_doubles_; - Major MajorKey() { return StoreBufferOverflow; } - int MinorKey() { return (save_doubles_ == kSaveFPRegs) ? 1 : 0; } + Major MajorKey() const { return StoreBufferOverflow; } + int MinorKey() const { return (save_doubles_ == kSaveFPRegs) ? 1 : 0; } }; @@ -56,15 +56,14 @@ class StringHelper : public AllStatic { class StoreRegistersStateStub: public PlatformCodeStub { public: - StoreRegistersStateStub(Isolate* isolate, SaveFPRegsMode with_fp) - : PlatformCodeStub(isolate), save_doubles_(with_fp) {} + explicit StoreRegistersStateStub(Isolate* isolate) + : PlatformCodeStub(isolate) {} static Register to_be_pushed_lr() { return ip0; } static void GenerateAheadOfTime(Isolate* isolate); private: - Major MajorKey() { return StoreRegistersState; } - int MinorKey() { return (save_doubles_ == kSaveFPRegs) ? 1 : 0; } - SaveFPRegsMode save_doubles_; + Major MajorKey() const { return StoreRegistersState; } + int MinorKey() const { return 0; } void Generate(MacroAssembler* masm); }; @@ -72,14 +71,13 @@ class StoreRegistersStateStub: public PlatformCodeStub { class RestoreRegistersStateStub: public PlatformCodeStub { public: - RestoreRegistersStateStub(Isolate* isolate, SaveFPRegsMode with_fp) - : PlatformCodeStub(isolate), save_doubles_(with_fp) {} + explicit RestoreRegistersStateStub(Isolate* isolate) + : PlatformCodeStub(isolate) {} static void GenerateAheadOfTime(Isolate* isolate); private: - Major MajorKey() { return RestoreRegistersState; } - int MinorKey() { return (save_doubles_ == kSaveFPRegs) ? 1 : 0; } - SaveFPRegsMode save_doubles_; + Major MajorKey() const { return RestoreRegistersState; } + int MinorKey() const { return 0; } void Generate(MacroAssembler* masm); }; @@ -122,17 +120,17 @@ class RecordWriteStub: public PlatformCodeStub { Instruction* instr2 = instr1->following(); if (instr1->IsUncondBranchImm()) { - ASSERT(instr2->IsPCRelAddressing() && (instr2->Rd() == xzr.code())); + DCHECK(instr2->IsPCRelAddressing() && (instr2->Rd() == xzr.code())); return INCREMENTAL; } - ASSERT(instr1->IsPCRelAddressing() && (instr1->Rd() == xzr.code())); + DCHECK(instr1->IsPCRelAddressing() && (instr1->Rd() == xzr.code())); if (instr2->IsUncondBranchImm()) { return INCREMENTAL_COMPACTION; } - ASSERT(instr2->IsPCRelAddressing()); + DCHECK(instr2->IsPCRelAddressing()); return STORE_BUFFER_ONLY; } @@ -151,31 +149,31 @@ class RecordWriteStub: public PlatformCodeStub { Instruction* instr1 = patcher.InstructionAt(0); Instruction* instr2 = patcher.InstructionAt(kInstructionSize); // Instructions must be either 'adr' or 'b'. - ASSERT(instr1->IsPCRelAddressing() || instr1->IsUncondBranchImm()); - ASSERT(instr2->IsPCRelAddressing() || instr2->IsUncondBranchImm()); + DCHECK(instr1->IsPCRelAddressing() || instr1->IsUncondBranchImm()); + DCHECK(instr2->IsPCRelAddressing() || instr2->IsUncondBranchImm()); // Retrieve the offsets to the labels. 
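RecordWriteStub::GetMode() above reads the stub's mode back out of its first two instructions, and Patch() rewrites them: adr+adr means STORE_BUFFER_ONLY, b+adr means INCREMENTAL, and adr+b means INCREMENTAL_COMPACTION. A compact sketch of that decoding, with a toy instruction type standing in for Instruction*:

    #include <cassert>

    enum Mode { STORE_BUFFER_ONLY, INCREMENTAL, INCREMENTAL_COMPACTION };

    // Toy stand-in for the two-instruction marker at the start of the stub.
    struct Insn { bool is_branch; };  // true = b, false = adr xzr, ...

    Mode GetMode(Insn first, Insn second) {
      if (first.is_branch) {
        assert(!second.is_branch);      // b + adr
        return INCREMENTAL;
      }
      if (second.is_branch) {
        return INCREMENTAL_COMPACTION;  // adr + b
      }
      return STORE_BUFFER_ONLY;         // adr + adr
    }

    int main() {
      assert(GetMode({false}, {false}) == STORE_BUFFER_ONLY);
      assert(GetMode({true}, {false})  == INCREMENTAL);
      assert(GetMode({false}, {true})  == INCREMENTAL_COMPACTION);
    }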
int32_t offset_to_incremental_noncompacting = instr1->ImmPCOffset(); int32_t offset_to_incremental_compacting = instr2->ImmPCOffset(); switch (mode) { case STORE_BUFFER_ONLY: - ASSERT(GetMode(stub) == INCREMENTAL || + DCHECK(GetMode(stub) == INCREMENTAL || GetMode(stub) == INCREMENTAL_COMPACTION); patcher.adr(xzr, offset_to_incremental_noncompacting); patcher.adr(xzr, offset_to_incremental_compacting); break; case INCREMENTAL: - ASSERT(GetMode(stub) == STORE_BUFFER_ONLY); + DCHECK(GetMode(stub) == STORE_BUFFER_ONLY); patcher.b(offset_to_incremental_noncompacting >> kInstructionSizeLog2); patcher.adr(xzr, offset_to_incremental_compacting); break; case INCREMENTAL_COMPACTION: - ASSERT(GetMode(stub) == STORE_BUFFER_ONLY); + DCHECK(GetMode(stub) == STORE_BUFFER_ONLY); patcher.adr(xzr, offset_to_incremental_noncompacting); patcher.b(offset_to_incremental_compacting >> kInstructionSizeLog2); break; } - ASSERT(GetMode(stub) == mode); + DCHECK(GetMode(stub) == mode); } private: @@ -191,7 +189,7 @@ class RecordWriteStub: public PlatformCodeStub { scratch0_(scratch), saved_regs_(kCallerSaved), saved_fp_regs_(kCallerSavedFP) { - ASSERT(!AreAliased(scratch, object, address)); + DCHECK(!AreAliased(scratch, object, address)); // The SaveCallerSaveRegisters method needs to save caller-saved // registers, but we don't bother saving MacroAssembler scratch registers. @@ -303,9 +301,9 @@ class RecordWriteStub: public PlatformCodeStub { Mode mode); void InformIncrementalMarker(MacroAssembler* masm); - Major MajorKey() { return RecordWrite; } + Major MajorKey() const { return RecordWrite; } - int MinorKey() { + int MinorKey() const { return MinorKeyFor(object_, value_, address_, remembered_set_action_, save_fp_regs_mode_); } @@ -315,9 +313,9 @@ class RecordWriteStub: public PlatformCodeStub { Register address, RememberedSetAction action, SaveFPRegsMode fp_mode) { - ASSERT(object.Is64Bits()); - ASSERT(value.Is64Bits()); - ASSERT(address.Is64Bits()); + DCHECK(object.Is64Bits()); + DCHECK(value.Is64Bits()); + DCHECK(address.Is64Bits()); return ObjectBits::encode(object.code()) | ValueBits::encode(value.code()) | AddressBits::encode(address.code()) | @@ -354,8 +352,8 @@ class DirectCEntryStub: public PlatformCodeStub { void GenerateCall(MacroAssembler* masm, Register target); private: - Major MajorKey() { return DirectCEntry; } - int MinorKey() { return 0; } + Major MajorKey() const { return DirectCEntry; } + int MinorKey() const { return 0; } bool NeedsImmovableCode() { return true; } }; @@ -400,11 +398,9 @@ class NameDictionaryLookupStub: public PlatformCodeStub { NameDictionary::kHeaderSize + NameDictionary::kElementsStartIndex * kPointerSize; - Major MajorKey() { return NameDictionaryLookup; } + Major MajorKey() const { return NameDictionaryLookup; } - int MinorKey() { - return LookupModeBits::encode(mode_); - } + int MinorKey() const { return LookupModeBits::encode(mode_); } class LookupModeBits: public BitField<LookupMode, 0, 1> {}; @@ -417,8 +413,8 @@ class SubStringStub: public PlatformCodeStub { explicit SubStringStub(Isolate* isolate) : PlatformCodeStub(isolate) {} private: - Major MajorKey() { return SubString; } - int MinorKey() { return 0; } + Major MajorKey() const { return SubString; } + int MinorKey() const { return 0; } void Generate(MacroAssembler* masm); }; @@ -447,8 +443,8 @@ class StringCompareStub: public PlatformCodeStub { Register scratch3); private: - virtual Major MajorKey() { return StringCompare; } - virtual int MinorKey() { return 0; } + virtual Major MajorKey() const { return 
StringCompare; } + virtual int MinorKey() const { return 0; } virtual void Generate(MacroAssembler* masm); static void GenerateAsciiCharsCompareLoop(MacroAssembler* masm, @@ -461,8 +457,9 @@ class StringCompareStub: public PlatformCodeStub { }; -struct PlatformCallInterfaceDescriptor { - explicit PlatformCallInterfaceDescriptor( +class PlatformInterfaceDescriptor { + public: + explicit PlatformInterfaceDescriptor( TargetAddressStorageMode storage_mode) : storage_mode_(storage_mode) { } diff --git a/deps/v8/src/arm64/codegen-arm64.cc b/deps/v8/src/arm64/codegen-arm64.cc index ff06eda86ab..16b6d3b1882 100644 --- a/deps/v8/src/arm64/codegen-arm64.cc +++ b/deps/v8/src/arm64/codegen-arm64.cc @@ -2,13 +2,13 @@ // Use of this source code is governed by a BSD-style license that can be // found in the LICENSE file. -#include "v8.h" +#include "src/v8.h" #if V8_TARGET_ARCH_ARM64 -#include "codegen.h" -#include "macro-assembler.h" -#include "simulator-arm64.h" +#include "src/arm64/simulator-arm64.h" +#include "src/codegen.h" +#include "src/macro-assembler.h" namespace v8 { namespace internal { @@ -35,7 +35,8 @@ UnaryMathFunction CreateExpFunction() { // an AAPCS64-compliant exp() function. This will be faster than the C // library's exp() function, but probably less accurate. size_t actual_size; - byte* buffer = static_cast<byte*>(OS::Allocate(1 * KB, &actual_size, true)); + byte* buffer = + static_cast<byte*>(base::OS::Allocate(1 * KB, &actual_size, true)); if (buffer == NULL) return &std::exp; ExternalReference::InitializeMathExpData(); @@ -61,10 +62,10 @@ UnaryMathFunction CreateExpFunction() { CodeDesc desc; masm.GetCode(&desc); - ASSERT(!RelocInfo::RequiresRelocation(desc)); + DCHECK(!RelocInfo::RequiresRelocation(desc)); - CPU::FlushICache(buffer, actual_size); - OS::ProtectCode(buffer, actual_size); + CpuFeatures::FlushICache(buffer, actual_size); + base::OS::ProtectCode(buffer, actual_size); #if !defined(USE_SIMULATOR) return FUNCTION_CAST<UnaryMathFunction>(buffer); @@ -85,14 +86,14 @@ UnaryMathFunction CreateSqrtFunction() { void StubRuntimeCallHelper::BeforeCall(MacroAssembler* masm) const { masm->EnterFrame(StackFrame::INTERNAL); - ASSERT(!masm->has_frame()); + DCHECK(!masm->has_frame()); masm->set_has_frame(true); } void StubRuntimeCallHelper::AfterCall(MacroAssembler* masm) const { masm->LeaveFrame(StackFrame::INTERNAL); - ASSERT(masm->has_frame()); + DCHECK(masm->has_frame()); masm->set_has_frame(false); } @@ -101,26 +102,28 @@ void StubRuntimeCallHelper::AfterCall(MacroAssembler* masm) const { // Code generators void ElementsTransitionGenerator::GenerateMapChangeElementsTransition( - MacroAssembler* masm, AllocationSiteMode mode, + MacroAssembler* masm, + Register receiver, + Register key, + Register value, + Register target_map, + AllocationSiteMode mode, Label* allocation_memento_found) { - // ----------- S t a t e ------------- - // -- x2 : receiver - // -- x3 : target map - // ----------------------------------- - Register receiver = x2; - Register map = x3; + ASM_LOCATION( + "ElementsTransitionGenerator::GenerateMapChangeElementsTransition"); + DCHECK(!AreAliased(receiver, key, value, target_map)); if (mode == TRACK_ALLOCATION_SITE) { - ASSERT(allocation_memento_found != NULL); + DCHECK(allocation_memento_found != NULL); __ JumpIfJSArrayHasAllocationMemento(receiver, x10, x11, allocation_memento_found); } // Set transitioned map. 
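The rewritten ElementsTransitionGenerator entry points take receiver, key, value and target_map as explicit parameters instead of assuming x0-x3, and immediately guard with DCHECK(!AreAliased(...)) so no two of them (or the locals) share a register. A toy model of such an aliasing check over register codes; the bitset trick is the obvious implementation, not necessarily V8's:

    #include <cassert>
    #include <initializer_list>

    // Minimal stand-in: registers identified by their code (x0 == 0, ...).
    bool AreAliased(std::initializer_list<int> regs) {
      unsigned long long seen = 0;
      for (int code : regs) {
        if (seen & (1ull << code)) return true;  // same register twice
        seen |= 1ull << code;
      }
      return false;
    }

    int main() {
      assert(!AreAliased({2, 1, 0, 3}));  // receiver, key, value, target_map
      assert(AreAliased({2, 1, 2, 3}));   // receiver reused -> aliased
    }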
- __ Str(map, FieldMemOperand(receiver, HeapObject::kMapOffset)); + __ Str(target_map, FieldMemOperand(receiver, HeapObject::kMapOffset)); __ RecordWriteField(receiver, HeapObject::kMapOffset, - map, + target_map, x10, kLRHasNotBeenSaved, kDontSaveFPRegs, @@ -130,19 +133,25 @@ void ElementsTransitionGenerator::GenerateMapChangeElementsTransition( void ElementsTransitionGenerator::GenerateSmiToDouble( - MacroAssembler* masm, AllocationSiteMode mode, Label* fail) { + MacroAssembler* masm, + Register receiver, + Register key, + Register value, + Register target_map, + AllocationSiteMode mode, + Label* fail) { ASM_LOCATION("ElementsTransitionGenerator::GenerateSmiToDouble"); - // ----------- S t a t e ------------- - // -- lr : return address - // -- x0 : value - // -- x1 : key - // -- x2 : receiver - // -- x3 : target map, scratch for subsequent call - // ----------------------------------- - Register receiver = x2; - Register target_map = x3; - Label gc_required, only_change_map; + Register elements = x4; + Register length = x5; + Register array_size = x6; + Register array = x7; + + Register scratch = x6; + + // Verify input registers don't conflict with locals. + DCHECK(!AreAliased(receiver, key, value, target_map, + elements, length, array_size, array)); if (mode == TRACK_ALLOCATION_SITE) { __ JumpIfJSArrayHasAllocationMemento(receiver, x10, x11, fail); @@ -150,32 +159,28 @@ void ElementsTransitionGenerator::GenerateSmiToDouble( // Check for empty arrays, which only require a map transition and no changes // to the backing store. - Register elements = x4; __ Ldr(elements, FieldMemOperand(receiver, JSObject::kElementsOffset)); __ JumpIfRoot(elements, Heap::kEmptyFixedArrayRootIndex, &only_change_map); __ Push(lr); - Register length = x5; __ Ldrsw(length, UntagSmiFieldMemOperand(elements, FixedArray::kLengthOffset)); // Allocate new FixedDoubleArray. - Register array_size = x6; - Register array = x7; __ Lsl(array_size, length, kDoubleSizeLog2); __ Add(array_size, array_size, FixedDoubleArray::kHeaderSize); __ Allocate(array_size, array, x10, x11, &gc_required, DOUBLE_ALIGNMENT); // Register array is non-tagged heap object. // Set the destination FixedDoubleArray's length and map. - Register map_root = x6; + Register map_root = array_size; __ LoadRoot(map_root, Heap::kFixedDoubleArrayMapRootIndex); __ SmiTag(x11, length); __ Str(x11, MemOperand(array, FixedDoubleArray::kLengthOffset)); __ Str(map_root, MemOperand(array, HeapObject::kMapOffset)); __ Str(target_map, FieldMemOperand(receiver, HeapObject::kMapOffset)); - __ RecordWriteField(receiver, HeapObject::kMapOffset, target_map, x6, + __ RecordWriteField(receiver, HeapObject::kMapOffset, target_map, scratch, kLRHasBeenSaved, kDontSaveFPRegs, OMIT_REMEMBERED_SET, OMIT_SMI_CHECK); @@ -183,7 +188,7 @@ void ElementsTransitionGenerator::GenerateSmiToDouble( __ Add(x10, array, kHeapObjectTag); __ Str(x10, FieldMemOperand(receiver, JSObject::kElementsOffset)); __ RecordWriteField(receiver, JSObject::kElementsOffset, x10, - x6, kLRHasBeenSaved, kDontSaveFPRegs, + scratch, kLRHasBeenSaved, kDontSaveFPRegs, EMIT_REMEMBERED_SET, OMIT_SMI_CHECK); // Prepare for conversion loop. 
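GenerateSmiToDouble sizes the new backing store as (length << kDoubleSizeLog2) + FixedDoubleArray::kHeaderSize before calling Allocate with DOUBLE_ALIGNMENT. The same arithmetic in plain C++; the 16-byte header is an assumed stand-in for the real heap-layout constant:

    #include <cstdio>

    const int kDoubleSizeLog2 = 3;  // sizeof(double) == 8
    const int kHeaderSize = 16;     // assumed map + length header

    int FixedDoubleArraySize(int length) {
      return (length << kDoubleSizeLog2) + kHeaderSize;
    }

    int main() {
      // e.g. 10 elements -> 10 * 8 + 16 = 96 bytes, before double alignment.
      std::printf("%d bytes\n", FixedDoubleArraySize(10));
    }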
@@ -202,7 +207,7 @@ void ElementsTransitionGenerator::GenerateSmiToDouble( __ Bind(&only_change_map); __ Str(target_map, FieldMemOperand(receiver, HeapObject::kMapOffset)); - __ RecordWriteField(receiver, HeapObject::kMapOffset, target_map, x6, + __ RecordWriteField(receiver, HeapObject::kMapOffset, target_map, scratch, kLRHasNotBeenSaved, kDontSaveFPRegs, OMIT_REMEMBERED_SET, OMIT_SMI_CHECK); __ B(&done); @@ -234,20 +239,22 @@ void ElementsTransitionGenerator::GenerateSmiToDouble( void ElementsTransitionGenerator::GenerateDoubleToObject( - MacroAssembler* masm, AllocationSiteMode mode, Label* fail) { + MacroAssembler* masm, + Register receiver, + Register key, + Register value, + Register target_map, + AllocationSiteMode mode, + Label* fail) { ASM_LOCATION("ElementsTransitionGenerator::GenerateDoubleToObject"); - // ----------- S t a t e ------------- - // -- x0 : value - // -- x1 : key - // -- x2 : receiver - // -- lr : return address - // -- x3 : target map, scratch for subsequent call - // -- x4 : scratch (elements) - // ----------------------------------- - Register value = x0; - Register key = x1; - Register receiver = x2; - Register target_map = x3; + Register elements = x4; + Register array_size = x6; + Register array = x7; + Register length = x5; + + // Verify input registers don't conflict with locals. + DCHECK(!AreAliased(receiver, key, value, target_map, + elements, array_size, array, length)); if (mode == TRACK_ALLOCATION_SITE) { __ JumpIfJSArrayHasAllocationMemento(receiver, x10, x11, fail); @@ -256,7 +263,7 @@ void ElementsTransitionGenerator::GenerateDoubleToObject( // Check for empty arrays, which only require a map transition and no changes // to the backing store. Label only_change_map; - Register elements = x4; + __ Ldr(elements, FieldMemOperand(receiver, JSObject::kElementsOffset)); __ JumpIfRoot(elements, Heap::kEmptyFixedArrayRootIndex, &only_change_map); @@ -264,20 +271,16 @@ void ElementsTransitionGenerator::GenerateDoubleToObject( // TODO(all): These registers may not need to be pushed. Examine // RecordWriteStub and check whether it's needed. __ Push(target_map, receiver, key, value); - Register length = x5; __ Ldrsw(length, UntagSmiFieldMemOperand(elements, FixedArray::kLengthOffset)); - // Allocate new FixedArray. - Register array_size = x6; - Register array = x7; Label gc_required; __ Mov(array_size, FixedDoubleArray::kHeaderSize); __ Add(array_size, array_size, Operand(length, LSL, kPointerSizeLog2)); __ Allocate(array_size, array, x10, x11, &gc_required, NO_ALLOCATION_FLAGS); // Set destination FixedDoubleArray's length and map. - Register map_root = x6; + Register map_root = array_size; __ LoadRoot(map_root, Heap::kFixedArrayMapRootIndex); __ SmiTag(x11, length); __ Str(x11, MemOperand(array, FixedDoubleArray::kLengthOffset)); @@ -315,8 +318,10 @@ void ElementsTransitionGenerator::GenerateDoubleToObject( __ B(eq, &convert_hole); // Non-hole double, copy value into a heap number. 
- Register heap_num = x5; - __ AllocateHeapNumber(heap_num, &gc_required, x6, x4, + Register heap_num = length; + Register scratch = array_size; + Register scratch2 = elements; + __ AllocateHeapNumber(heap_num, &gc_required, scratch, scratch2, x13, heap_num_map); __ Mov(x13, dst_elements); __ Str(heap_num, MemOperand(dst_elements, kPointerSize, PostIndex)); @@ -351,7 +356,7 @@ void ElementsTransitionGenerator::GenerateDoubleToObject( CodeAgingHelper::CodeAgingHelper() { - ASSERT(young_sequence_.length() == kNoCodeAgeSequenceLength); + DCHECK(young_sequence_.length() == kNoCodeAgeSequenceLength); // The sequence of instructions that is patched out for aging code is the // following boilerplate stack-building prologue that is found both in // FUNCTION and OPTIMIZED_FUNCTION code: @@ -363,7 +368,7 @@ CodeAgingHelper::CodeAgingHelper() { #ifdef DEBUG const int length = kCodeAgeStubEntryOffset / kInstructionSize; - ASSERT(old_sequence_.length() >= kCodeAgeStubEntryOffset); + DCHECK(old_sequence_.length() >= kCodeAgeStubEntryOffset); PatchingAssembler patcher_old(old_sequence_.start(), length); MacroAssembler::EmitCodeAgeSequence(&patcher_old, NULL); #endif @@ -415,7 +420,7 @@ void StringCharLoadGenerator::Generate(MacroAssembler* masm, Register index, Register result, Label* call_runtime) { - ASSERT(string.Is64Bits() && index.Is32Bits() && result.Is64Bits()); + DCHECK(string.Is64Bits() && index.Is32Bits() && result.Is64Bits()); // Fetch the instance type of the receiver into result register. __ Ldr(result, FieldMemOperand(string, HeapObject::kMapOffset)); __ Ldrb(result, FieldMemOperand(result, Map::kInstanceTypeOffset)); @@ -473,7 +478,7 @@ void StringCharLoadGenerator::Generate(MacroAssembler* masm, __ Assert(eq, kExternalStringExpectedButNotFound); } // Rule out short external strings. - STATIC_CHECK(kShortExternalStringTag != 0); + STATIC_ASSERT(kShortExternalStringTag != 0); // TestAndBranchIfAnySet can emit Tbnz. Do not use it because call_runtime // can be bound far away in deferred code. __ Tst(result, kShortExternalStringMask); @@ -511,10 +516,11 @@ void MathExpGenerator::EmitMathExp(MacroAssembler* masm, // instead of fmul and fsub. Doing this changes the result, but since this is // an estimation anyway, does it matter? - ASSERT(!AreAliased(input, result, + DCHECK(!AreAliased(input, result, double_temp1, double_temp2, temp1, temp2, temp3)); - ASSERT(ExternalReference::math_exp_constants(0).address() != NULL); + DCHECK(ExternalReference::math_exp_constants(0).address() != NULL); + DCHECK(!masm->serializer_enabled()); // External references not serializable. Label done; DoubleRegister double_temp3 = result; @@ -534,7 +540,7 @@ void MathExpGenerator::EmitMathExp(MacroAssembler* masm, Label result_is_finite_non_zero; // Assert that we can load offset 0 (the small input threshold) and offset 1 // (the large input threshold) with a single ldp. - ASSERT(kDRegSize == (ExpConstant(constants, 1).offset() - + DCHECK(kDRegSize == (ExpConstant(constants, 1).offset() - ExpConstant(constants, 0).offset())); __ Ldp(double_temp1, double_temp2, ExpConstant(constants, 0)); @@ -564,7 +570,7 @@ void MathExpGenerator::EmitMathExp(MacroAssembler* masm, __ Bind(&result_is_finite_non_zero); // Assert that we can load offset 3 and offset 4 with a single ldp. 
- ASSERT(kDRegSize == (ExpConstant(constants, 4).offset() - + DCHECK(kDRegSize == (ExpConstant(constants, 4).offset() - ExpConstant(constants, 3).offset())); __ Ldp(double_temp1, double_temp3, ExpConstant(constants, 3)); __ Fmadd(double_temp1, double_temp1, input, double_temp3); @@ -572,7 +578,7 @@ void MathExpGenerator::EmitMathExp(MacroAssembler* masm, __ Fsub(double_temp1, double_temp1, double_temp3); // Assert that we can load offset 5 and offset 6 with a single ldp. - ASSERT(kDRegSize == (ExpConstant(constants, 6).offset() - + DCHECK(kDRegSize == (ExpConstant(constants, 6).offset() - ExpConstant(constants, 5).offset())); __ Ldp(double_temp2, double_temp3, ExpConstant(constants, 5)); // TODO(jbramley): Consider using Fnmsub here. diff --git a/deps/v8/src/arm64/codegen-arm64.h b/deps/v8/src/arm64/codegen-arm64.h index bb42bf8d317..9ef148cc409 100644 --- a/deps/v8/src/arm64/codegen-arm64.h +++ b/deps/v8/src/arm64/codegen-arm64.h @@ -5,8 +5,8 @@ #ifndef V8_ARM64_CODEGEN_ARM64_H_ #define V8_ARM64_CODEGEN_ARM64_H_ -#include "ast.h" -#include "ic-inl.h" +#include "src/ast.h" +#include "src/ic-inl.h" namespace v8 { namespace internal { diff --git a/deps/v8/src/arm64/constants-arm64.h b/deps/v8/src/arm64/constants-arm64.h index 7ee22760d82..8db120ba459 100644 --- a/deps/v8/src/arm64/constants-arm64.h +++ b/deps/v8/src/arm64/constants-arm64.h @@ -15,7 +15,9 @@ STATIC_ASSERT(sizeof(1L) == sizeof(int64_t)); // NOLINT(runtime/sizeof) // Get the standard printf format macros for C99 stdint types. +#ifndef __STDC_FORMAT_MACROS #define __STDC_FORMAT_MACROS +#endif #include <inttypes.h> @@ -25,8 +27,7 @@ namespace internal { const unsigned kInstructionSize = 4; const unsigned kInstructionSizeLog2 = 2; -const unsigned kLiteralEntrySize = 4; -const unsigned kLiteralEntrySizeLog2 = 2; +const unsigned kLoadLiteralScaleLog2 = 2; const unsigned kMaxLoadLiteralRange = 1 * MB; const unsigned kNumberOfRegisters = 32; @@ -258,15 +259,15 @@ enum Condition { nv = 15 // Behaves as always/al. }; -inline Condition InvertCondition(Condition cond) { +inline Condition NegateCondition(Condition cond) { // Conditions al and nv behave identically, as "always true". They can't be // inverted, because there is no never condition. - ASSERT((cond != al) && (cond != nv)); + DCHECK((cond != al) && (cond != nv)); return static_cast<Condition>(cond ^ 1); } -// Corresponds to transposing the operands of a comparison. -inline Condition ReverseConditionForCmp(Condition cond) { +// Commute a condition such that {a cond b == b cond' a}. +inline Condition CommuteCondition(Condition cond) { switch (cond) { case lo: return hi; @@ -293,7 +294,7 @@ inline Condition ReverseConditionForCmp(Condition cond) { // 'mi' for instance). UNREACHABLE(); return nv; - }; + } } enum FlagsUpdate { @@ -399,7 +400,7 @@ enum SystemRegister { // // The enumerations can be used like this: // -// ASSERT(instr->Mask(PCRelAddressingFMask) == PCRelAddressingFixed); +// DCHECK(instr->Mask(PCRelAddressingFMask) == PCRelAddressingFixed); // switch(instr->Mask(PCRelAddressingMask)) { // case ADR: Format("adr 'Xd, 'AddrPCRelByte"); break; // case ADRP: Format("adrp 'Xd, 'AddrPCRelPage"); break; diff --git a/deps/v8/src/arm64/cpu-arm64.cc b/deps/v8/src/arm64/cpu-arm64.cc index 0e1ed91be40..39beb6d9ef8 100644 --- a/deps/v8/src/arm64/cpu-arm64.cc +++ b/deps/v8/src/arm64/cpu-arm64.cc @@ -4,23 +4,16 @@ // CPU specific code for arm independent of OS goes here. 
-#include "v8.h" +#include "src/v8.h" #if V8_TARGET_ARCH_ARM64 -#include "arm64/cpu-arm64.h" -#include "arm64/utils-arm64.h" +#include "src/arm64/utils-arm64.h" +#include "src/assembler.h" namespace v8 { namespace internal { -#ifdef DEBUG -bool CpuFeatures::initialized_ = false; -#endif -unsigned CpuFeatures::supported_ = 0; -unsigned CpuFeatures::cross_compile_ = 0; - - class CacheLineSizes { public: CacheLineSizes() { @@ -31,22 +24,23 @@ class CacheLineSizes { __asm__ __volatile__ ("mrs %[ctr], ctr_el0" // NOLINT : [ctr] "=r" (cache_type_register_)); #endif - }; + } uint32_t icache_line_size() const { return ExtractCacheLineSize(0); } uint32_t dcache_line_size() const { return ExtractCacheLineSize(16); } private: uint32_t ExtractCacheLineSize(int cache_line_size_shift) const { - // The cache type register holds the size of the caches as a power of two. - return 1 << ((cache_type_register_ >> cache_line_size_shift) & 0xf); + // The cache type register holds the size of cache lines in words as a + // power of two. + return 4 << ((cache_type_register_ >> cache_line_size_shift) & 0xf); } uint32_t cache_type_register_; }; -void CPU::FlushICache(void* address, size_t length) { +void CpuFeatures::FlushICache(void* address, size_t length) { if (length == 0) return; #ifdef USE_SIMULATOR @@ -65,8 +59,8 @@ void CPU::FlushICache(void* address, size_t length) { uintptr_t dsize = sizes.dcache_line_size(); uintptr_t isize = sizes.icache_line_size(); // Cache line sizes are always a power of 2. - ASSERT(CountSetBits(dsize, 64) == 1); - ASSERT(CountSetBits(isize, 64) == 1); + DCHECK(CountSetBits(dsize, 64) == 1); + DCHECK(CountSetBits(isize, 64) == 1); uintptr_t dstart = start & ~(dsize - 1); uintptr_t istart = start & ~(isize - 1); uintptr_t end = start + length; @@ -124,17 +118,6 @@ void CPU::FlushICache(void* address, size_t length) { #endif } - -void CpuFeatures::Probe(bool serializer_enabled) { - // AArch64 has no configuration options, no further probing is required. - supported_ = 0; - -#ifdef DEBUG - initialized_ = true; -#endif -} - - } } // namespace v8::internal #endif // V8_TARGET_ARCH_ARM64 diff --git a/deps/v8/src/arm64/cpu-arm64.h b/deps/v8/src/arm64/cpu-arm64.h deleted file mode 100644 index 0b7a7d7f1fe..00000000000 --- a/deps/v8/src/arm64/cpu-arm64.h +++ /dev/null @@ -1,71 +0,0 @@ -// Copyright 2013 the V8 project authors. All rights reserved. -// Use of this source code is governed by a BSD-style license that can be -// found in the LICENSE file. - -#ifndef V8_ARM64_CPU_ARM64_H_ -#define V8_ARM64_CPU_ARM64_H_ - -#include <stdio.h> -#include "serialize.h" -#include "cpu.h" - -namespace v8 { -namespace internal { - - -// CpuFeatures keeps track of which features are supported by the target CPU. -// Supported features must be enabled by a CpuFeatureScope before use. -class CpuFeatures : public AllStatic { - public: - // Detect features of the target CPU. Set safe defaults if the serializer - // is enabled (snapshots must be portable). - static void Probe(bool serializer_enabled); - - // Check whether a feature is supported by the target CPU. - static bool IsSupported(CpuFeature f) { - ASSERT(initialized_); - // There are no optional features for ARM64. - return false; - }; - - // There are no optional features for ARM64. - static bool IsSafeForSnapshot(Isolate* isolate, CpuFeature f) { - return IsSupported(f); - } - - // I and D cache line size in bytes. 
- static unsigned dcache_line_size(); - static unsigned icache_line_size(); - - static unsigned supported_; - - static bool VerifyCrossCompiling() { - // There are no optional features for ARM64. - ASSERT(cross_compile_ == 0); - return true; - } - - static bool VerifyCrossCompiling(CpuFeature f) { - // There are no optional features for ARM64. - USE(f); - ASSERT(cross_compile_ == 0); - return true; - } - - static bool SupportsCrankshaft() { return true; } - - private: -#ifdef DEBUG - static bool initialized_; -#endif - - // This isn't used (and is always 0), but it is required by V8. - static unsigned cross_compile_; - - friend class PlatformFeatureScope; - DISALLOW_COPY_AND_ASSIGN(CpuFeatures); -}; - -} } // namespace v8::internal - -#endif // V8_ARM64_CPU_ARM64_H_ diff --git a/deps/v8/src/arm64/debug-arm64.cc b/deps/v8/src/arm64/debug-arm64.cc index 6b189678212..746d9a8b470 100644 --- a/deps/v8/src/arm64/debug-arm64.cc +++ b/deps/v8/src/arm64/debug-arm64.cc @@ -2,12 +2,12 @@ // Use of this source code is governed by a BSD-style license that can be // found in the LICENSE file. -#include "v8.h" +#include "src/v8.h" #if V8_TARGET_ARCH_ARM64 -#include "codegen.h" -#include "debug.h" +#include "src/codegen.h" +#include "src/debug.h" namespace v8 { namespace internal { @@ -46,7 +46,7 @@ void BreakLocationIterator::SetDebugBreakAtReturn() { // The first instruction of a patched return sequence must be a load literal // loading the address of the debug break return code. - patcher.LoadLiteral(ip0, 3 * kInstructionSize); + patcher.ldr_pcrel(ip0, (3 * kInstructionSize) >> kLoadLiteralScaleLog2); // TODO(all): check the following is correct. // The debug break return code will push a frame and call statically compiled // code. By using blr, even though control will not return after the branch, @@ -67,21 +67,21 @@ void BreakLocationIterator::ClearDebugBreakAtReturn() { bool Debug::IsDebugBreakAtReturn(RelocInfo* rinfo) { - ASSERT(RelocInfo::IsJSReturn(rinfo->rmode())); + DCHECK(RelocInfo::IsJSReturn(rinfo->rmode())); return rinfo->IsPatchedReturnSequence(); } bool BreakLocationIterator::IsDebugBreakAtSlot() { - ASSERT(IsDebugBreakSlot()); + DCHECK(IsDebugBreakSlot()); // Check whether the debug break slot instructions have been patched. return rinfo()->IsPatchedDebugBreakSlotSequence(); } void BreakLocationIterator::SetDebugBreakAtSlot() { - // Patch the code emitted by Debug::GenerateSlots, changing the debug break - // slot code from + // Patch the code emitted by DebugCodegen::GenerateSlots, changing the debug + // break slot code from // mov x0, x0 @ nop DEBUG_BREAK_NOP // mov x0, x0 @ nop DEBUG_BREAK_NOP // mov x0, x0 @ nop DEBUG_BREAK_NOP @@ -105,7 +105,7 @@ void BreakLocationIterator::SetDebugBreakAtSlot() { // The first instruction of a patched debug break slot must be a load literal // loading the address of the debug break slot code. - patcher.LoadLiteral(ip0, 2 * kInstructionSize); + patcher.ldr_pcrel(ip0, (2 * kInstructionSize) >> kLoadLiteralScaleLog2); // TODO(all): check the following is correct. // The debug break slot code will push a frame and call statically compiled // code. 
By using blr, even though control will not return after the branch, @@ -118,12 +118,11 @@ void BreakLocationIterator::SetDebugBreakAtSlot() { void BreakLocationIterator::ClearDebugBreakAtSlot() { - ASSERT(IsDebugBreakSlot()); + DCHECK(IsDebugBreakSlot()); rinfo()->PatchCode(original_rinfo()->pc(), Assembler::kDebugBreakSlotInstructions); } -const bool Debug::FramePaddingLayout::kIsSupported = false; static void Generate_DebugBreakCallHelper(MacroAssembler* masm, RegList object_regs, @@ -132,6 +131,12 @@ static void Generate_DebugBreakCallHelper(MacroAssembler* masm, { FrameScope scope(masm, StackFrame::INTERNAL); + // Load padding words on stack. + __ Mov(scratch, Smi::FromInt(LiveEdit::kFramePaddingValue)); + __ PushMultipleTimes(scratch, LiveEdit::kFramePaddingInitialSize); + __ Mov(scratch, Smi::FromInt(LiveEdit::kFramePaddingInitialSize)); + __ Push(scratch); + // Any live values (object_regs and non_object_regs) in caller-saved // registers (or lr) need to be stored on the stack so that their values are // safely preserved for a call into C code. @@ -145,12 +150,12 @@ static void Generate_DebugBreakCallHelper(MacroAssembler* masm, // collector doesn't try to interpret them as pointers. // // TODO(jbramley): Why can't this handle callee-saved registers? - ASSERT((~kCallerSaved.list() & object_regs) == 0); - ASSERT((~kCallerSaved.list() & non_object_regs) == 0); - ASSERT((object_regs & non_object_regs) == 0); - ASSERT((scratch.Bit() & object_regs) == 0); - ASSERT((scratch.Bit() & non_object_regs) == 0); - ASSERT((masm->TmpList()->list() & (object_regs | non_object_regs)) == 0); + DCHECK((~kCallerSaved.list() & object_regs) == 0); + DCHECK((~kCallerSaved.list() & non_object_regs) == 0); + DCHECK((object_regs & non_object_regs) == 0); + DCHECK((scratch.Bit() & object_regs) == 0); + DCHECK((scratch.Bit() & non_object_regs) == 0); + DCHECK((masm->TmpList()->list() & (object_regs | non_object_regs)) == 0); STATIC_ASSERT(kSmiValueSize == 32); CPURegList non_object_list = @@ -158,15 +163,16 @@ static void Generate_DebugBreakCallHelper(MacroAssembler* masm, while (!non_object_list.IsEmpty()) { // Store each non-object register as two SMIs. Register reg = Register(non_object_list.PopLowestIndex()); - __ Push(reg); - __ Poke(wzr, 0); - __ Push(reg.W(), wzr); + __ Lsr(scratch, reg, 32); + __ SmiTagAndPush(scratch, reg); + // Stack: // jssp[12]: reg[63:32] // jssp[8]: 0x00000000 (SMI tag & padding) // jssp[4]: reg[31:0] // jssp[0]: 0x00000000 (SMI tag & padding) - STATIC_ASSERT((kSmiTag == 0) && (kSmiShift == 32)); + STATIC_ASSERT(kSmiTag == 0); + STATIC_ASSERT(static_cast<unsigned>(kSmiShift) == kWRegSizeInBits); } if (object_regs != 0) { @@ -201,21 +207,24 @@ static void Generate_DebugBreakCallHelper(MacroAssembler* masm, __ Bfxil(reg, scratch, 32, 32); } + // Don't bother removing padding bytes pushed on the stack + // as the frame is going to be restored right away. + // Leave the internal frame. } // Now that the break point has been handled, resume normal execution by // jumping to the target address intended by the caller and that was // overwritten by the address of DebugBreakXXX.
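The new save loop above never leaves a raw 64-bit register value on the stack where the GC could misread it as a pointer: each register is split into two 32-bit halves and each half is pushed as a smi (tag in the low word, payload in the high word, matching the STATIC_ASSERT that kSmiShift equals the W-register width of 32). The round trip in plain C++, mirroring what the Lsr/SmiTagAndPush save and the Bfxil restore achieve:

    #include <cstdint>
    #include <cstdio>

    // With kSmiTag == 0 and kSmiShift == 32, a smi is payload << 32.
    uint64_t SmiTag(uint32_t payload) {
      return static_cast<uint64_t>(payload) << 32;
    }

    int main() {
      uint64_t reg = 0xdeadbeefcafef00dull;  // arbitrary live register value
      uint64_t hi = SmiTag(static_cast<uint32_t>(reg >> 32));
      uint64_t lo = SmiTag(static_cast<uint32_t>(reg));
      // Restore: untag each half, then stitch them back together.
      uint64_t restored = ((hi >> 32) << 32) | (lo >> 32);
      std::printf("%d\n", restored == reg);  // prints 1
    }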
- ExternalReference after_break_target(Debug_Address::AfterBreakTarget(), - masm->isolate()); + ExternalReference after_break_target = + ExternalReference::debug_after_break_target_address(masm->isolate()); __ Mov(scratch, after_break_target); __ Ldr(scratch, MemOperand(scratch)); __ Br(scratch); } -void Debug::GenerateCallICStubDebugBreak(MacroAssembler* masm) { +void DebugCodegen::GenerateCallICStubDebugBreak(MacroAssembler* masm) { // Register state for CallICStub // ----------- S t a t e ------------- // -- x1 : function @@ -225,54 +234,41 @@ void Debug::GenerateCallICStubDebugBreak(MacroAssembler* masm) { } -void Debug::GenerateLoadICDebugBreak(MacroAssembler* masm) { +void DebugCodegen::GenerateLoadICDebugBreak(MacroAssembler* masm) { // Calling convention for IC load (from ic-arm.cc). - // ----------- S t a t e ------------- - // -- x2 : name - // -- lr : return address - // -- x0 : receiver - // -- [sp] : receiver - // ----------------------------------- - // Registers x0 and x2 contain objects that need to be pushed on the - // expression stack of the fake JS frame. - Generate_DebugBreakCallHelper(masm, x0.Bit() | x2.Bit(), 0, x10); + Register receiver = LoadIC::ReceiverRegister(); + Register name = LoadIC::NameRegister(); + Generate_DebugBreakCallHelper(masm, receiver.Bit() | name.Bit(), 0, x10); } -void Debug::GenerateStoreICDebugBreak(MacroAssembler* masm) { - // Calling convention for IC store (from ic-arm.cc). - // ----------- S t a t e ------------- - // -- x0 : value - // -- x1 : receiver - // -- x2 : name - // -- lr : return address - // ----------------------------------- - // Registers x0, x1, and x2 contain objects that need to be pushed on the - // expression stack of the fake JS frame. - Generate_DebugBreakCallHelper(masm, x0.Bit() | x1.Bit() | x2.Bit(), 0, x10); +void DebugCodegen::GenerateStoreICDebugBreak(MacroAssembler* masm) { + // Calling convention for IC store (from ic-arm64.cc). + Register receiver = StoreIC::ReceiverRegister(); + Register name = StoreIC::NameRegister(); + Register value = StoreIC::ValueRegister(); + Generate_DebugBreakCallHelper( + masm, receiver.Bit() | name.Bit() | value.Bit(), 0, x10); } -void Debug::GenerateKeyedLoadICDebugBreak(MacroAssembler* masm) { - // ---------- S t a t e -------------- - // -- lr : return address - // -- x0 : key - // -- x1 : receiver - Generate_DebugBreakCallHelper(masm, x0.Bit() | x1.Bit(), 0, x10); +void DebugCodegen::GenerateKeyedLoadICDebugBreak(MacroAssembler* masm) { + // Calling convention for keyed IC load (from ic-arm.cc). + GenerateLoadICDebugBreak(masm); } -void Debug::GenerateKeyedStoreICDebugBreak(MacroAssembler* masm) { - // ---------- S t a t e -------------- - // -- x0 : value - // -- x1 : key - // -- x2 : receiver - // -- lr : return address - Generate_DebugBreakCallHelper(masm, x0.Bit() | x1.Bit() | x2.Bit(), 0, x10); +void DebugCodegen::GenerateKeyedStoreICDebugBreak(MacroAssembler* masm) { + // Calling convention for IC keyed store call (from ic-arm64.cc). 
+ Register receiver = KeyedStoreIC::ReceiverRegister(); + Register name = KeyedStoreIC::NameRegister(); + Register value = KeyedStoreIC::ValueRegister(); + Generate_DebugBreakCallHelper( + masm, receiver.Bit() | name.Bit() | value.Bit(), 0, x10); } -void Debug::GenerateCompareNilICDebugBreak(MacroAssembler* masm) { +void DebugCodegen::GenerateCompareNilICDebugBreak(MacroAssembler* masm) { // Register state for CompareNil IC // ----------- S t a t e ------------- // -- r0 : value @@ -281,7 +277,7 @@ void Debug::GenerateCompareNilICDebugBreak(MacroAssembler* masm) { } -void Debug::GenerateReturnDebugBreak(MacroAssembler* masm) { +void DebugCodegen::GenerateReturnDebugBreak(MacroAssembler* masm) { // In places other than IC call sites it is expected that r0 is TOS which // is an object - this is not generally the case so this should be used with // care. @@ -289,7 +285,7 @@ void Debug::GenerateReturnDebugBreak(MacroAssembler* masm) { } -void Debug::GenerateCallFunctionStubDebugBreak(MacroAssembler* masm) { +void DebugCodegen::GenerateCallFunctionStubDebugBreak(MacroAssembler* masm) { // Register state for CallFunctionStub (from code-stubs-arm64.cc). // ----------- S t a t e ------------- // -- x1 : function @@ -298,7 +294,7 @@ void Debug::GenerateCallFunctionStubDebugBreak(MacroAssembler* masm) { } -void Debug::GenerateCallConstructStubDebugBreak(MacroAssembler* masm) { +void DebugCodegen::GenerateCallConstructStubDebugBreak(MacroAssembler* masm) { // Calling convention for CallConstructStub (from code-stubs-arm64.cc). // ----------- S t a t e ------------- // -- x0 : number of arguments (not smi) @@ -308,7 +304,8 @@ void Debug::GenerateCallConstructStubDebugBreak(MacroAssembler* masm) { } -void Debug::GenerateCallConstructStubRecordDebugBreak(MacroAssembler* masm) { +void DebugCodegen::GenerateCallConstructStubRecordDebugBreak( + MacroAssembler* masm) { // Calling convention for CallConstructStub (from code-stubs-arm64.cc). // ----------- S t a t e ------------- // -- x0 : number of arguments (not smi) @@ -321,7 +318,7 @@ void Debug::GenerateCallConstructStubRecordDebugBreak(MacroAssembler* masm) { } -void Debug::GenerateSlot(MacroAssembler* masm) { +void DebugCodegen::GenerateSlot(MacroAssembler* masm) { // Generate enough nop's to make space for a call instruction. Avoid emitting // the constant pool in the debug break slot code. InstructionAccurateScope scope(masm, Assembler::kDebugBreakSlotInstructions); @@ -333,23 +330,48 @@ void Debug::GenerateSlot(MacroAssembler* masm) { } -void Debug::GenerateSlotDebugBreak(MacroAssembler* masm) { +void DebugCodegen::GenerateSlotDebugBreak(MacroAssembler* masm) { // In the places where a debug break slot is inserted no registers can contain // object pointers. 
Generate_DebugBreakCallHelper(masm, 0, 0, x10); } -void Debug::GeneratePlainReturnLiveEdit(MacroAssembler* masm) { - masm->Abort(kLiveEditFrameDroppingIsNotSupportedOnARM64); +void DebugCodegen::GeneratePlainReturnLiveEdit(MacroAssembler* masm) { + __ Ret(); } -void Debug::GenerateFrameDropperLiveEdit(MacroAssembler* masm) { - masm->Abort(kLiveEditFrameDroppingIsNotSupportedOnARM64); +void DebugCodegen::GenerateFrameDropperLiveEdit(MacroAssembler* masm) { + ExternalReference restarter_frame_function_slot = + ExternalReference::debug_restarter_frame_function_pointer_address( + masm->isolate()); + UseScratchRegisterScope temps(masm); + Register scratch = temps.AcquireX(); + + __ Mov(scratch, restarter_frame_function_slot); + __ Str(xzr, MemOperand(scratch)); + + // We do not know our frame height, but set sp based on fp. + __ Sub(masm->StackPointer(), fp, kPointerSize); + __ AssertStackConsistency(); + + __ Pop(x1, fp, lr); // Function, Frame, Return address. + + // Load context from the function. + __ Ldr(cp, FieldMemOperand(x1, JSFunction::kContextOffset)); + + // Get function code. + __ Ldr(scratch, FieldMemOperand(x1, JSFunction::kSharedFunctionInfoOffset)); + __ Ldr(scratch, FieldMemOperand(scratch, SharedFunctionInfo::kCodeOffset)); + __ Add(scratch, scratch, Code::kHeaderSize - kHeapObjectTag); + + // Re-run JSFunction, x1 is function, cp is context. + __ Br(scratch); } -const bool Debug::kFrameDropperSupported = false; + +const bool LiveEdit::kFrameDropperSupported = true; } } // namespace v8::internal diff --git a/deps/v8/src/arm64/decoder-arm64-inl.h b/deps/v8/src/arm64/decoder-arm64-inl.h index eb791336da7..5dd2fd9cc04 100644 --- a/deps/v8/src/arm64/decoder-arm64-inl.h +++ b/deps/v8/src/arm64/decoder-arm64-inl.h @@ -5,9 +5,9 @@ #ifndef V8_ARM64_DECODER_ARM64_INL_H_ #define V8_ARM64_DECODER_ARM64_INL_H_ -#include "arm64/decoder-arm64.h" -#include "globals.h" -#include "utils.h" +#include "src/arm64/decoder-arm64.h" +#include "src/globals.h" +#include "src/utils.h" namespace v8 { @@ -96,17 +96,17 @@ void Decoder<V>::Decode(Instruction *instr) { template<typename V> void Decoder<V>::DecodePCRelAddressing(Instruction* instr) { - ASSERT(instr->Bits(27, 24) == 0x0); + DCHECK(instr->Bits(27, 24) == 0x0); // We know bit 28 is set, as <b28:b27> = 0 is filtered out at the top level // decode. 
- ASSERT(instr->Bit(28) == 0x1); + DCHECK(instr->Bit(28) == 0x1); V::VisitPCRelAddressing(instr); } template<typename V> void Decoder<V>::DecodeBranchSystemException(Instruction* instr) { - ASSERT((instr->Bits(27, 24) == 0x4) || + DCHECK((instr->Bits(27, 24) == 0x4) || (instr->Bits(27, 24) == 0x5) || (instr->Bits(27, 24) == 0x6) || (instr->Bits(27, 24) == 0x7) ); @@ -208,7 +208,7 @@ void Decoder<V>::DecodeBranchSystemException(Instruction* instr) { template<typename V> void Decoder<V>::DecodeLoadStore(Instruction* instr) { - ASSERT((instr->Bits(27, 24) == 0x8) || + DCHECK((instr->Bits(27, 24) == 0x8) || (instr->Bits(27, 24) == 0x9) || (instr->Bits(27, 24) == 0xC) || (instr->Bits(27, 24) == 0xD) ); @@ -328,7 +328,7 @@ void Decoder<V>::DecodeLoadStore(Instruction* instr) { template<typename V> void Decoder<V>::DecodeLogical(Instruction* instr) { - ASSERT(instr->Bits(27, 24) == 0x2); + DCHECK(instr->Bits(27, 24) == 0x2); if (instr->Mask(0x80400000) == 0x00400000) { V::VisitUnallocated(instr); @@ -348,7 +348,7 @@ void Decoder<V>::DecodeLogical(Instruction* instr) { template<typename V> void Decoder<V>::DecodeBitfieldExtract(Instruction* instr) { - ASSERT(instr->Bits(27, 24) == 0x3); + DCHECK(instr->Bits(27, 24) == 0x3); if ((instr->Mask(0x80400000) == 0x80000000) || (instr->Mask(0x80400000) == 0x00400000) || @@ -374,7 +374,7 @@ void Decoder<V>::DecodeBitfieldExtract(Instruction* instr) { template<typename V> void Decoder<V>::DecodeAddSubImmediate(Instruction* instr) { - ASSERT(instr->Bits(27, 24) == 0x1); + DCHECK(instr->Bits(27, 24) == 0x1); if (instr->Bit(23) == 1) { V::VisitUnallocated(instr); } else { @@ -385,7 +385,7 @@ void Decoder<V>::DecodeAddSubImmediate(Instruction* instr) { template<typename V> void Decoder<V>::DecodeDataProcessing(Instruction* instr) { - ASSERT((instr->Bits(27, 24) == 0xA) || + DCHECK((instr->Bits(27, 24) == 0xA) || (instr->Bits(27, 24) == 0xB) ); if (instr->Bit(24) == 0) { @@ -501,7 +501,7 @@ void Decoder<V>::DecodeDataProcessing(Instruction* instr) { template<typename V> void Decoder<V>::DecodeFP(Instruction* instr) { - ASSERT((instr->Bits(27, 24) == 0xE) || + DCHECK((instr->Bits(27, 24) == 0xE) || (instr->Bits(27, 24) == 0xF) ); if (instr->Bit(28) == 0) { @@ -614,7 +614,7 @@ void Decoder<V>::DecodeFP(Instruction* instr) { } } else { // Bit 30 == 1 has been handled earlier. - ASSERT(instr->Bit(30) == 0); + DCHECK(instr->Bit(30) == 0); if (instr->Mask(0xA0800000) != 0) { V::VisitUnallocated(instr); } else { @@ -630,7 +630,7 @@ void Decoder<V>::DecodeFP(Instruction* instr) { template<typename V> void Decoder<V>::DecodeAdvSIMDLoadStore(Instruction* instr) { // TODO(all): Implement Advanced SIMD load/store instruction decode. - ASSERT(instr->Bits(29, 25) == 0x6); + DCHECK(instr->Bits(29, 25) == 0x6); V::VisitUnimplemented(instr); } @@ -638,7 +638,7 @@ void Decoder<V>::DecodeAdvSIMDLoadStore(Instruction* instr) { template<typename V> void Decoder<V>::DecodeAdvSIMDDataProcessing(Instruction* instr) { // TODO(all): Implement Advanced SIMD data processing instruction decode. - ASSERT(instr->Bits(27, 25) == 0x7); + DCHECK(instr->Bits(27, 25) == 0x7); V::VisitUnimplemented(instr); } diff --git a/deps/v8/src/arm64/decoder-arm64.cc b/deps/v8/src/arm64/decoder-arm64.cc index 13962387dd1..cf7dc34c581 100644 --- a/deps/v8/src/arm64/decoder-arm64.cc +++ b/deps/v8/src/arm64/decoder-arm64.cc @@ -2,13 +2,13 @@ // Use of this source code is governed by a BSD-style license that can be // found in the LICENSE file. 
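The DCHECKs sprinkled through the decoder double as documentation of the A64 top-level opcode map: bits 27:24 select the instruction class. A toy classifier over that field (the class names paraphrase the Decode* methods above; this is a reading aid, not a faithful decoder):

    #include <cstdint>
    #include <cstdio>

    const char* ClassifyA64(uint32_t instr) {
      switch ((instr >> 24) & 0xf) {  // bits 27:24
        case 0x0: return "PC-relative addressing (when bit 28 is set)";
        case 0x1: return "add/sub immediate";
        case 0x2: return "logical / move wide";
        case 0x3: return "bitfield / extract";
        case 0x4: case 0x5: case 0x6: case 0x7:
          return "branch / system / exception";
        case 0x8: case 0x9: case 0xC: case 0xD:
          return "load/store";
        case 0xA: case 0xB: return "data processing";
        default: return "FP / Advanced SIMD";  // 0xE, 0xF
      }
    }

    int main() {
      std::printf("%s\n", ClassifyA64(0x91000000u));  // add x0, x0, #0 -> 0x1
    }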
-#include "v8.h" +#include "src/v8.h" #if V8_TARGET_ARCH_ARM64 -#include "globals.h" -#include "utils.h" -#include "arm64/decoder-arm64.h" +#include "src/arm64/decoder-arm64.h" +#include "src/globals.h" +#include "src/utils.h" namespace v8 { @@ -39,7 +39,7 @@ void DispatchingDecoderVisitor::InsertVisitorBefore( } // We reached the end of the list. The last element must be // registered_visitor. - ASSERT(*it == registered_visitor); + DCHECK(*it == registered_visitor); visitors_.insert(it, new_visitor); } @@ -57,7 +57,7 @@ void DispatchingDecoderVisitor::InsertVisitorAfter( } // We reached the end of the list. The last element must be // registered_visitor. - ASSERT(*it == registered_visitor); + DCHECK(*it == registered_visitor); visitors_.push_back(new_visitor); } @@ -70,7 +70,7 @@ void DispatchingDecoderVisitor::RemoveVisitor(DecoderVisitor* visitor) { #define DEFINE_VISITOR_CALLERS(A) \ void DispatchingDecoderVisitor::Visit##A(Instruction* instr) { \ if (!(instr->Mask(A##FMask) == A##Fixed)) { \ - ASSERT(instr->Mask(A##FMask) == A##Fixed); \ + DCHECK(instr->Mask(A##FMask) == A##Fixed); \ } \ std::list<DecoderVisitor*>::iterator it; \ for (it = visitors_.begin(); it != visitors_.end(); it++) { \ diff --git a/deps/v8/src/arm64/decoder-arm64.h b/deps/v8/src/arm64/decoder-arm64.h index 4409421bdcc..af6bcc6f4f5 100644 --- a/deps/v8/src/arm64/decoder-arm64.h +++ b/deps/v8/src/arm64/decoder-arm64.h @@ -7,8 +7,8 @@ #include <list> -#include "globals.h" -#include "arm64/instructions-arm64.h" +#include "src/arm64/instructions-arm64.h" +#include "src/globals.h" namespace v8 { namespace internal { diff --git a/deps/v8/src/arm64/delayed-masm-arm64-inl.h b/deps/v8/src/arm64/delayed-masm-arm64-inl.h new file mode 100644 index 00000000000..2c446303716 --- /dev/null +++ b/deps/v8/src/arm64/delayed-masm-arm64-inl.h @@ -0,0 +1,55 @@ +// Copyright 2013 the V8 project authors. All rights reserved. +// Use of this source code is governed by a BSD-style license that can be +// found in the LICENSE file. + +#ifndef V8_ARM64_DELAYED_MASM_ARM64_INL_H_ +#define V8_ARM64_DELAYED_MASM_ARM64_INL_H_ + +#include "src/arm64/delayed-masm-arm64.h" + +namespace v8 { +namespace internal { + +#define __ ACCESS_MASM(masm_) + + +void DelayedMasm::EndDelayedUse() { + EmitPending(); + DCHECK(!scratch_register_acquired_); + ResetSavedValue(); +} + + +void DelayedMasm::Mov(const Register& rd, + const Operand& operand, + DiscardMoveMode discard_mode) { + EmitPending(); + DCHECK(!IsScratchRegister(rd) || scratch_register_acquired_); + __ Mov(rd, operand, discard_mode); +} + + +void DelayedMasm::Fmov(FPRegister fd, FPRegister fn) { + EmitPending(); + __ Fmov(fd, fn); +} + + +void DelayedMasm::Fmov(FPRegister fd, double imm) { + EmitPending(); + __ Fmov(fd, imm); +} + + +void DelayedMasm::LoadObject(Register result, Handle<Object> object) { + EmitPending(); + DCHECK(!IsScratchRegister(result) || scratch_register_acquired_); + __ LoadObject(result, object); +} + + +#undef __ + +} } // namespace v8::internal + +#endif // V8_ARM64_DELAYED_MASM_ARM64_INL_H_ diff --git a/deps/v8/src/arm64/delayed-masm-arm64.cc b/deps/v8/src/arm64/delayed-masm-arm64.cc new file mode 100644 index 00000000000..c3bda915e8c --- /dev/null +++ b/deps/v8/src/arm64/delayed-masm-arm64.cc @@ -0,0 +1,198 @@ +// Copyright 2013 the V8 project authors. All rights reserved. +// Use of this source code is governed by a BSD-style license that can be +// found in the LICENSE file. 
+ +#include "src/v8.h" + +#if V8_TARGET_ARCH_ARM64 + +#include "src/arm64/delayed-masm-arm64.h" +#include "src/arm64/lithium-codegen-arm64.h" + +namespace v8 { +namespace internal { + +#define __ ACCESS_MASM(masm_) + + +void DelayedMasm::StackSlotMove(LOperand* src, LOperand* dst) { + DCHECK(src->IsStackSlot()); + DCHECK(dst->IsStackSlot()); + MemOperand src_operand = cgen_->ToMemOperand(src); + MemOperand dst_operand = cgen_->ToMemOperand(dst); + if (pending_ == kStackSlotMove) { + DCHECK(pending_pc_ == masm_->pc_offset()); + UseScratchRegisterScope scope(masm_); + DoubleRegister temp1 = scope.AcquireD(); + DoubleRegister temp2 = scope.AcquireD(); + switch (MemOperand::AreConsistentForPair(pending_address_src_, + src_operand)) { + case MemOperand::kNotPair: + __ Ldr(temp1, pending_address_src_); + __ Ldr(temp2, src_operand); + break; + case MemOperand::kPairAB: + __ Ldp(temp1, temp2, pending_address_src_); + break; + case MemOperand::kPairBA: + __ Ldp(temp2, temp1, src_operand); + break; + } + switch (MemOperand::AreConsistentForPair(pending_address_dst_, + dst_operand)) { + case MemOperand::kNotPair: + __ Str(temp1, pending_address_dst_); + __ Str(temp2, dst_operand); + break; + case MemOperand::kPairAB: + __ Stp(temp1, temp2, pending_address_dst_); + break; + case MemOperand::kPairBA: + __ Stp(temp2, temp1, dst_operand); + break; + } + ResetPending(); + return; + } + + EmitPending(); + pending_ = kStackSlotMove; + pending_address_src_ = src_operand; + pending_address_dst_ = dst_operand; +#ifdef DEBUG + pending_pc_ = masm_->pc_offset(); +#endif +} + + +void DelayedMasm::StoreConstant(uint64_t value, const MemOperand& operand) { + DCHECK(!scratch_register_acquired_); + if ((pending_ == kStoreConstant) && (value == pending_value_)) { + MemOperand::PairResult result = + MemOperand::AreConsistentForPair(pending_address_dst_, operand); + if (result != MemOperand::kNotPair) { + const MemOperand& dst = + (result == MemOperand::kPairAB) ? 
+ pending_address_dst_ : + operand; + DCHECK(pending_pc_ == masm_->pc_offset()); + if (pending_value_ == 0) { + __ Stp(xzr, xzr, dst); + } else { + SetSavedValue(pending_value_); + __ Stp(ScratchRegister(), ScratchRegister(), dst); + } + ResetPending(); + return; + } + } + + EmitPending(); + pending_ = kStoreConstant; + pending_address_dst_ = operand; + pending_value_ = value; +#ifdef DEBUG + pending_pc_ = masm_->pc_offset(); +#endif +} + + +void DelayedMasm::Load(const CPURegister& rd, const MemOperand& operand) { + if ((pending_ == kLoad) && + pending_register_.IsSameSizeAndType(rd)) { + switch (MemOperand::AreConsistentForPair(pending_address_src_, operand)) { + case MemOperand::kNotPair: + break; + case MemOperand::kPairAB: + DCHECK(pending_pc_ == masm_->pc_offset()); + DCHECK(!IsScratchRegister(pending_register_) || + scratch_register_acquired_); + DCHECK(!IsScratchRegister(rd) || scratch_register_acquired_); + __ Ldp(pending_register_, rd, pending_address_src_); + ResetPending(); + return; + case MemOperand::kPairBA: + DCHECK(pending_pc_ == masm_->pc_offset()); + DCHECK(!IsScratchRegister(pending_register_) || + scratch_register_acquired_); + DCHECK(!IsScratchRegister(rd) || scratch_register_acquired_); + __ Ldp(rd, pending_register_, operand); + ResetPending(); + return; + } + } + + EmitPending(); + pending_ = kLoad; + pending_register_ = rd; + pending_address_src_ = operand; +#ifdef DEBUG + pending_pc_ = masm_->pc_offset(); +#endif +} + + +void DelayedMasm::Store(const CPURegister& rd, const MemOperand& operand) { + if ((pending_ == kStore) && + pending_register_.IsSameSizeAndType(rd)) { + switch (MemOperand::AreConsistentForPair(pending_address_dst_, operand)) { + case MemOperand::kNotPair: + break; + case MemOperand::kPairAB: + DCHECK(pending_pc_ == masm_->pc_offset()); + __ Stp(pending_register_, rd, pending_address_dst_); + ResetPending(); + return; + case MemOperand::kPairBA: + DCHECK(pending_pc_ == masm_->pc_offset()); + __ Stp(rd, pending_register_, operand); + ResetPending(); + return; + } + } + + EmitPending(); + pending_ = kStore; + pending_register_ = rd; + pending_address_dst_ = operand; +#ifdef DEBUG + pending_pc_ = masm_->pc_offset(); +#endif +} + + +void DelayedMasm::EmitPending() { + DCHECK((pending_ == kNone) || (pending_pc_ == masm_->pc_offset())); + switch (pending_) { + case kNone: + return; + case kStoreConstant: + if (pending_value_ == 0) { + __ Str(xzr, pending_address_dst_); + } else { + SetSavedValue(pending_value_); + __ Str(ScratchRegister(), pending_address_dst_); + } + break; + case kLoad: + DCHECK(!IsScratchRegister(pending_register_) || + scratch_register_acquired_); + __ Ldr(pending_register_, pending_address_src_); + break; + case kStore: + __ Str(pending_register_, pending_address_dst_); + break; + case kStackSlotMove: { + UseScratchRegisterScope scope(masm_); + DoubleRegister temp = scope.AcquireD(); + __ Ldr(temp, pending_address_src_); + __ Str(temp, pending_address_dst_); + break; + } + } + ResetPending(); +} + +} } // namespace v8::internal + +#endif // V8_TARGET_ARCH_ARM64 diff --git a/deps/v8/src/arm64/delayed-masm-arm64.h b/deps/v8/src/arm64/delayed-masm-arm64.h new file mode 100644 index 00000000000..76227a38983 --- /dev/null +++ b/deps/v8/src/arm64/delayed-masm-arm64.h @@ -0,0 +1,164 @@ +// Copyright 2013 the V8 project authors. All rights reserved. +// Use of this source code is governed by a BSD-style license that can be +// found in the LICENSE file. 
+ +#ifndef V8_ARM64_DELAYED_MASM_ARM64_H_ +#define V8_ARM64_DELAYED_MASM_ARM64_H_ + +#include "src/lithium.h" + +namespace v8 { +namespace internal { + +class LCodeGen; + +// This class delays the generation of some instructions. This way, we have a +// chance to merge two instructions in one (with load/store pair). +// Each instruction must either: +// - merge with the pending instruction and generate just one instruction. +// - emit the pending instruction and then generate the instruction (or set the +// pending instruction). +class DelayedMasm BASE_EMBEDDED { + public: + DelayedMasm(LCodeGen* owner, + MacroAssembler* masm, + const Register& scratch_register) + : cgen_(owner), masm_(masm), scratch_register_(scratch_register), + scratch_register_used_(false), pending_(kNone), saved_value_(0) { +#ifdef DEBUG + pending_register_ = no_reg; + pending_value_ = 0; + pending_pc_ = 0; + scratch_register_acquired_ = false; +#endif + } + ~DelayedMasm() { + DCHECK(!scratch_register_acquired_); + DCHECK(!scratch_register_used_); + DCHECK(!pending()); + } + inline void EndDelayedUse(); + + const Register& ScratchRegister() { + scratch_register_used_ = true; + return scratch_register_; + } + bool IsScratchRegister(const CPURegister& reg) { + return reg.Is(scratch_register_); + } + bool scratch_register_used() const { return scratch_register_used_; } + void reset_scratch_register_used() { scratch_register_used_ = false; } + // Acquire/Release scratch register for use outside this class. + void AcquireScratchRegister() { + EmitPending(); + ResetSavedValue(); +#ifdef DEBUG + DCHECK(!scratch_register_acquired_); + scratch_register_acquired_ = true; +#endif + } + void ReleaseScratchRegister() { +#ifdef DEBUG + DCHECK(scratch_register_acquired_); + scratch_register_acquired_ = false; +#endif + } + bool pending() { return pending_ != kNone; } + + // Extra layer over the macro-assembler instructions (which emits the + // potential pending instruction). + inline void Mov(const Register& rd, + const Operand& operand, + DiscardMoveMode discard_mode = kDontDiscardForSameWReg); + inline void Fmov(FPRegister fd, FPRegister fn); + inline void Fmov(FPRegister fd, double imm); + inline void LoadObject(Register result, Handle<Object> object); + // Instructions which try to merge with the pending instruction. + void StackSlotMove(LOperand* src, LOperand* dst); + // StoreConstant can only be used if the scratch register is not acquired. + void StoreConstant(uint64_t value, const MemOperand& operand); + void Load(const CPURegister& rd, const MemOperand& operand); + void Store(const CPURegister& rd, const MemOperand& operand); + // Emit the potential pending instruction. + void EmitPending(); + // Reset the pending state. + void ResetPending() { + pending_ = kNone; +#ifdef DEBUG + pending_register_ = no_reg; + MemOperand tmp; + pending_address_src_ = tmp; + pending_address_dst_ = tmp; + pending_value_ = 0; + pending_pc_ = 0; +#endif + } + void InitializeRootRegister() { + masm_->InitializeRootRegister(); + } + + private: + // Set the saved value and load the ScratchRegister with it. + void SetSavedValue(uint64_t saved_value) { + DCHECK(saved_value != 0); + if (saved_value_ != saved_value) { + masm_->Mov(ScratchRegister(), saved_value); + saved_value_ = saved_value; + } + } + // Reset the saved value (i.e. the value of ScratchRegister is no longer + // known). + void ResetSavedValue() { + saved_value_ = 0; + } + + LCodeGen* cgen_; + MacroAssembler* masm_; + + // Register used to store a constant.
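The class comment above is the whole design: buffer at most one load/store, and when the next request touches the adjacent stack slot (checked via MemOperand::AreConsistentForPair in the .cc file), emit a single ldp/stp instead of two instructions; anything that cannot merge flushes the pending op first, and zero constants are stored straight from xzr. A stripped-down model of that pending-instruction merging, using slot indices instead of real MemOperands:

    #include <cstdio>
    #include <string>

    class TinyDelayedMasm {
     public:
      void Store(const std::string& reg, int slot) {
        if (pending_ && slot == pending_slot_ + 1) {
          // Adjacent slots: merge the two stores into one stp.
          std::printf("stp %s, %s, [fp, #%d]\n",
                      pending_reg_.c_str(), reg.c_str(), pending_slot_);
          pending_ = false;
          return;
        }
        EmitPending();  // can't merge: flush, then record the new store
        pending_ = true; pending_reg_ = reg; pending_slot_ = slot;
      }
      void EmitPending() {
        if (!pending_) return;
        std::printf("str %s, [fp, #%d]\n", pending_reg_.c_str(), pending_slot_);
        pending_ = false;
      }
     private:
      bool pending_ = false;
      std::string pending_reg_;
      int pending_slot_ = 0;
    };

    int main() {
      TinyDelayedMasm masm;
      masm.Store("x0", 0);
      masm.Store("x1", 1);  // merges with the previous store -> stp
      masm.Store("x2", 5);  // not adjacent -> new pending store
      masm.EmitPending();   // end of delayed use
    }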
+  Register scratch_register_;
+  bool scratch_register_used_;
+
+  // Sometimes we store or load two values in two contiguous stack slots.
+  // In this case, we try to use the ldp/stp instructions to reduce code size.
+  // To do that, instead of generating an instruction directly, we record in
+  // the following fields that an instruction needs to be generated. Then, if
+  // the next instruction is consistent with the pending one for an ldp/stp,
+  // we generate the pair. Otherwise, we emit the pending instruction and
+  // record the new instruction (which becomes pending).
+
+  // Enumeration of instructions which can be pending.
+  enum Pending {
+    kNone,
+    kStoreConstant,
+    kLoad, kStore,
+    kStackSlotMove
+  };
+  // The pending instruction.
+  Pending pending_;
+  // For kLoad, kStore: register which must be loaded/stored.
+  CPURegister pending_register_;
+  // For kLoad, kStackSlotMove: address of the load.
+  MemOperand pending_address_src_;
+  // For kStoreConstant, kStore, kStackSlotMove: address of the store.
+  MemOperand pending_address_dst_;
+  // For kStoreConstant: value to be stored.
+  uint64_t pending_value_;
+  // Value held in the ScratchRegister if saved_value_ is not 0.
+  // For 0, we use xzr.
+  uint64_t saved_value_;
+#ifdef DEBUG
+  // Address where the pending instruction must be generated. It's only used
+  // to check that nothing else has been generated since we set the pending
+  // instruction.
+  int pending_pc_;
+  // If true, the scratch register has been acquired outside this class. The
+  // scratch register can no longer be used for constants.
+  bool scratch_register_acquired_;
+#endif
+};
+
+} }  // namespace v8::internal
+
+#endif  // V8_ARM64_DELAYED_MASM_ARM64_H_
diff --git a/deps/v8/src/arm64/deoptimizer-arm64.cc b/deps/v8/src/arm64/deoptimizer-arm64.cc
index a19e2fc9fdb..d40468029f5 100644
--- a/deps/v8/src/arm64/deoptimizer-arm64.cc
+++ b/deps/v8/src/arm64/deoptimizer-arm64.cc
@@ -2,12 +2,12 @@
 // Use of this source code is governed by a BSD-style license that can be
 // found in the LICENSE file.
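SetSavedValue/ResetSavedValue above implement a one-entry constant cache: the scratch register keeps the last non-zero constant it was loaded with, so repeated StoreConstant calls with the same value skip the Mov, and zero is always stored straight from xzr. A rough plain-C++ equivalent, with invented names standing in for the macro-assembler (the printf lines show the instructions that would be emitted):

    #include <cstdint>
    #include <cstdio>

    // One-entry constant cache modelled on DelayedMasm::SetSavedValue:
    // reload the scratch register only when the requested constant changes;
    // zero never touches the scratch register because xzr already holds it.
    class ScratchConstantCache {
     public:
      void StoreConstant(uint64_t value, int64_t slot_offset) {
        if (value == 0) {
          std::printf("str xzr, [fp, #%lld]\n", (long long)slot_offset);
          return;
        }
        if (saved_value_ != value) {
          std::printf("mov scratch, #%llu\n", (unsigned long long)value);
          saved_value_ = value;
        }
        std::printf("str scratch, [fp, #%lld]\n", (long long)slot_offset);
      }

      // Once someone else may have clobbered the scratch register, the
      // cached value is untrustworthy (cf. AcquireScratchRegister).
      void Invalidate() { saved_value_ = 0; }

     private:
      uint64_t saved_value_ = 0;  // 0 means "unknown", as in the header
    };

    int main() {
      ScratchConstantCache cache;
      cache.StoreConstant(42, 0);   // mov + str
      cache.StoreConstant(42, 8);   // str only: constant already in scratch
      cache.StoreConstant(0, 16);   // str xzr: no scratch use at all
      return 0;
    }

Using 0 as the "unknown" marker is safe precisely because 0 is never materialized through the scratch register.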
-#include "v8.h" +#include "src/v8.h" -#include "codegen.h" -#include "deoptimizer.h" -#include "full-codegen.h" -#include "safepoint-table.h" +#include "src/codegen.h" +#include "src/deoptimizer.h" +#include "src/full-codegen.h" +#include "src/safepoint-table.h" namespace v8 { @@ -32,9 +32,6 @@ void Deoptimizer::PatchCodeForDeoptimization(Isolate* isolate, Code* code) { DeoptimizationInputData* deopt_data = DeoptimizationInputData::cast(code->deoptimization_data()); - SharedFunctionInfo* shared = - SharedFunctionInfo::cast(deopt_data->SharedFunctionInfo()); - shared->EvictFromOptimizedCodeMap(code, "deoptimized code"); Address code_start_address = code->instruction_start(); #ifdef DEBUG Address prev_call_address = NULL; @@ -48,13 +45,13 @@ void Deoptimizer::PatchCodeForDeoptimization(Isolate* isolate, Code* code) { Address deopt_entry = GetDeoptimizationEntry(isolate, i, LAZY); PatchingAssembler patcher(call_address, patch_size() / kInstructionSize); - patcher.LoadLiteral(ip0, 2 * kInstructionSize); + patcher.ldr_pcrel(ip0, (2 * kInstructionSize) >> kLoadLiteralScaleLog2); patcher.blr(ip0); patcher.dc64(reinterpret_cast<intptr_t>(deopt_entry)); - ASSERT((prev_call_address == NULL) || + DCHECK((prev_call_address == NULL) || (call_address >= prev_call_address + patch_size())); - ASSERT(call_address + patch_size() <= code->instruction_end()); + DCHECK(call_address + patch_size() <= code->instruction_end()); #ifdef DEBUG prev_call_address = call_address; #endif @@ -93,7 +90,7 @@ bool Deoptimizer::HasAlignmentPadding(JSFunction* function) { void Deoptimizer::SetPlatformCompiledStubRegisters( FrameDescription* output_frame, CodeStubInterfaceDescriptor* descriptor) { - ApiFunction function(descriptor->deoptimization_handler_); + ApiFunction function(descriptor->deoptimization_handler()); ExternalReference xref(&function, ExternalReference::BUILTIN_CALL, isolate_); intptr_t handler = reinterpret_cast<intptr_t>(xref.address()); int params = descriptor->GetHandlerParameterCount(); @@ -110,47 +107,6 @@ void Deoptimizer::CopyDoubleRegisters(FrameDescription* output_frame) { } -Code* Deoptimizer::NotifyStubFailureBuiltin() { - return isolate_->builtins()->builtin(Builtins::kNotifyStubFailureSaveDoubles); -} - - -#define __ masm-> - -static void CopyRegisterDumpToFrame(MacroAssembler* masm, - Register frame, - CPURegList reg_list, - Register scratch1, - Register scratch2, - int src_offset, - int dst_offset) { - int offset0, offset1; - CPURegList copy_to_input = reg_list; - int reg_count = reg_list.Count(); - int reg_size = reg_list.RegisterSizeInBytes(); - for (int i = 0; i < (reg_count / 2); i++) { - __ PeekPair(scratch1, scratch2, src_offset + (i * reg_size * 2)); - - offset0 = (copy_to_input.PopLowestIndex().code() * reg_size) + dst_offset; - offset1 = (copy_to_input.PopLowestIndex().code() * reg_size) + dst_offset; - - if ((offset0 + reg_size) == offset1) { - // Registers are adjacent: store in pairs. - __ Stp(scratch1, scratch2, MemOperand(frame, offset0)); - } else { - // Registers are not adjacent: store individually. 
- __ Str(scratch1, MemOperand(frame, offset0)); - __ Str(scratch2, MemOperand(frame, offset1)); - } - } - if ((reg_count & 1) != 0) { - __ Peek(scratch1, src_offset + (reg_count - 1) * reg_size); - offset0 = (copy_to_input.PopLowestIndex().code() * reg_size) + dst_offset; - __ Str(scratch1, MemOperand(frame, offset0)); - } -} - -#undef __ #define __ masm()-> @@ -214,13 +170,23 @@ void Deoptimizer::EntryGenerator::Generate() { __ Ldr(x1, MemOperand(deoptimizer, Deoptimizer::input_offset())); // Copy core registers into the input frame. - CopyRegisterDumpToFrame(masm(), x1, saved_registers, x2, x4, 0, - FrameDescription::registers_offset()); + CPURegList copy_to_input = saved_registers; + for (int i = 0; i < saved_registers.Count(); i++) { + __ Peek(x2, i * kPointerSize); + CPURegister current_reg = copy_to_input.PopLowestIndex(); + int offset = (current_reg.code() * kPointerSize) + + FrameDescription::registers_offset(); + __ Str(x2, MemOperand(x1, offset)); + } // Copy FP registers to the input frame. - CopyRegisterDumpToFrame(masm(), x1, saved_fp_registers, x2, x4, - kFPRegistersOffset, - FrameDescription::double_registers_offset()); + for (int i = 0; i < saved_fp_registers.Count(); i++) { + int dst_offset = FrameDescription::double_registers_offset() + + (i * kDoubleSize); + int src_offset = kFPRegistersOffset + (i * kDoubleSize); + __ Peek(x2, src_offset); + __ Str(x2, MemOperand(x1, dst_offset)); + } // Remove the bailout id and the saved registers from the stack. __ Drop(1 + (kSavedRegistersAreaSize / kXRegSize)); @@ -284,7 +250,7 @@ void Deoptimizer::EntryGenerator::Generate() { __ B(lt, &outer_push_loop); __ Ldr(x1, MemOperand(x4, Deoptimizer::input_offset())); - ASSERT(!saved_fp_registers.IncludesAliasOf(crankshaft_fp_scratch) && + DCHECK(!saved_fp_registers.IncludesAliasOf(crankshaft_fp_scratch) && !saved_fp_registers.IncludesAliasOf(fp_zero) && !saved_fp_registers.IncludesAliasOf(fp_scratch)); int src_offset = FrameDescription::double_registers_offset(); @@ -311,7 +277,7 @@ void Deoptimizer::EntryGenerator::Generate() { // Note that lr is not in the list of saved_registers and will be restored // later. We can use it to hold the address of last output frame while // reloading the other registers. - ASSERT(!saved_registers.IncludesAliasOf(lr)); + DCHECK(!saved_registers.IncludesAliasOf(lr)); Register last_output_frame = lr; __ Mov(last_output_frame, current_frame); @@ -354,14 +320,14 @@ void Deoptimizer::TableEntryGenerator::GeneratePrologue() { // The number of entry will never exceed kMaxNumberOfEntries. // As long as kMaxNumberOfEntries is a valid 16 bits immediate you can use // a movz instruction to load the entry id. - ASSERT(is_uint16(Deoptimizer::kMaxNumberOfEntries)); + DCHECK(is_uint16(Deoptimizer::kMaxNumberOfEntries)); for (int i = 0; i < count(); i++) { int start = masm()->pc_offset(); USE(start); __ movz(entry_id, i); __ b(&done); - ASSERT(masm()->pc_offset() - start == table_entry_size_); + DCHECK(masm()->pc_offset() - start == table_entry_size_); } } __ Bind(&done); diff --git a/deps/v8/src/arm64/disasm-arm64.cc b/deps/v8/src/arm64/disasm-arm64.cc index e9e1decadfe..b8b1d5d2561 100644 --- a/deps/v8/src/arm64/disasm-arm64.cc +++ b/deps/v8/src/arm64/disasm-arm64.cc @@ -3,19 +3,19 @@ // found in the LICENSE file. 
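The rewritten deoptimizer entry code above replaces the pair-optimized CopyRegisterDumpToFrame helper with two plain loops: entry i of the on-stack register dump is stored into the FrameDescription slot selected by that register's code, not by push order. A small sketch of the indexing, with layout constants invented for illustration (V8 additionally adds registers_offset()/double_registers_offset() to each slot):

    #include <cstdint>
    #include <cstdio>

    const int kNumRegisters = 32;

    // Illustrative stand-in for FrameDescription's register area.
    struct FrameDescription {
      uint64_t registers_[kNumRegisters];
    };

    // Mirrors the loop in Deoptimizer::EntryGenerator::Generate(): the i-th
    // dumped value belongs to the i-th register of the saved list, whose
    // register code picks the destination slot.
    void CopyRegisterDump(const uint64_t* stack_dump, const int* saved_codes,
                          int saved_count, FrameDescription* frame) {
      for (int i = 0; i < saved_count; i++) {
        frame->registers_[saved_codes[i]] = stack_dump[i];
      }
    }

    int main() {
      uint64_t dump[3] = {111, 222, 333};
      int codes[3] = {0, 1, 19};  // e.g. x0, x1, x19 were saved
      FrameDescription frame = {};
      CopyRegisterDump(dump, codes, 3, &frame);
      std::printf("x19 = %llu\n", (unsigned long long)frame.registers_[19]);
      return 0;
    }

The old helper paired adjacent stores with Stp; the new loops give that up for simplicity, since this path only runs when a function deoptimizes.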
#include <assert.h> -#include <stdio.h> #include <stdarg.h> +#include <stdio.h> #include <string.h> -#include "v8.h" +#include "src/v8.h" #if V8_TARGET_ARCH_ARM64 -#include "disasm.h" -#include "arm64/decoder-arm64-inl.h" -#include "arm64/disasm-arm64.h" -#include "macro-assembler.h" -#include "platform.h" +#include "src/arm64/decoder-arm64-inl.h" +#include "src/arm64/disasm-arm64.h" +#include "src/base/platform/platform.h" +#include "src/disasm.h" +#include "src/macro-assembler.h" namespace v8 { namespace internal { @@ -258,7 +258,7 @@ void Disassembler::VisitLogicalImmediate(Instruction* instr) { bool Disassembler::IsMovzMovnImm(unsigned reg_size, uint64_t value) { - ASSERT((reg_size == kXRegSizeInBits) || + DCHECK((reg_size == kXRegSizeInBits) || ((reg_size == kWRegSizeInBits) && (value <= 0xffffffff))); // Test for movz: 16-bits set at positions 0, 16, 32 or 48. @@ -1176,7 +1176,7 @@ void Disassembler::VisitSystem(Instruction* instr) { } } } else if (instr->Mask(SystemHintFMask) == SystemHintFixed) { - ASSERT(instr->Mask(SystemHintMask) == HINT); + DCHECK(instr->Mask(SystemHintMask) == HINT); switch (instr->ImmHint()) { case NOP: { mnemonic = "nop"; @@ -1246,7 +1246,7 @@ void Disassembler::Format(Instruction* instr, const char* mnemonic, const char* format) { // TODO(mcapewel) don't think I can use the instr address here - there needs // to be a base address too - ASSERT(mnemonic != NULL); + DCHECK(mnemonic != NULL); ResetOutput(); Substitute(instr, mnemonic); if (format != NULL) { @@ -1364,7 +1364,7 @@ int Disassembler::SubstituteRegisterField(Instruction* instr, int Disassembler::SubstituteImmediateField(Instruction* instr, const char* format) { - ASSERT(format[0] == 'I'); + DCHECK(format[0] == 'I'); switch (format[1]) { case 'M': { // IMoveImm or IMoveLSL. @@ -1372,7 +1372,7 @@ int Disassembler::SubstituteImmediateField(Instruction* instr, uint64_t imm = instr->ImmMoveWide() << (16 * instr->ShiftMoveWide()); AppendToOutput("#0x%" PRIx64, imm); } else { - ASSERT(format[5] == 'L'); + DCHECK(format[5] == 'L'); AppendToOutput("#0x%" PRIx64, instr->ImmMoveWide()); if (instr->ShiftMoveWide() > 0) { AppendToOutput(", lsl #%d", 16 * instr->ShiftMoveWide()); @@ -1384,7 +1384,7 @@ int Disassembler::SubstituteImmediateField(Instruction* instr, switch (format[2]) { case 'L': { // ILLiteral - Immediate Load Literal. AppendToOutput("pc%+" PRId64, - instr->ImmLLiteral() << kLiteralEntrySizeLog2); + instr->ImmLLiteral() << kLoadLiteralScaleLog2); return 9; } case 'S': { // ILS - Immediate Load/Store. @@ -1417,7 +1417,7 @@ int Disassembler::SubstituteImmediateField(Instruction* instr, return 6; } case 'A': { // IAddSub. - ASSERT(instr->ShiftAddSub() <= 1); + DCHECK(instr->ShiftAddSub() <= 1); int64_t imm = instr->ImmAddSub() << (12 * instr->ShiftAddSub()); AppendToOutput("#0x%" PRIx64 " (%" PRId64 ")", imm, imm); return 7; @@ -1474,7 +1474,7 @@ int Disassembler::SubstituteImmediateField(Instruction* instr, int Disassembler::SubstituteBitfieldImmediateField(Instruction* instr, const char* format) { - ASSERT((format[0] == 'I') && (format[1] == 'B')); + DCHECK((format[0] == 'I') && (format[1] == 'B')); unsigned r = instr->ImmR(); unsigned s = instr->ImmS(); @@ -1488,13 +1488,13 @@ int Disassembler::SubstituteBitfieldImmediateField(Instruction* instr, AppendToOutput("#%d", s + 1); return 5; } else { - ASSERT(format[3] == '-'); + DCHECK(format[3] == '-'); AppendToOutput("#%d", s - r + 1); return 7; } } case 'Z': { // IBZ-r. 
- ASSERT((format[3] == '-') && (format[4] == 'r')); + DCHECK((format[3] == '-') && (format[4] == 'r')); unsigned reg_size = (instr->SixtyFourBits() == 1) ? kXRegSizeInBits : kWRegSizeInBits; AppendToOutput("#%d", reg_size - r); @@ -1510,7 +1510,7 @@ int Disassembler::SubstituteBitfieldImmediateField(Instruction* instr, int Disassembler::SubstituteLiteralField(Instruction* instr, const char* format) { - ASSERT(strncmp(format, "LValue", 6) == 0); + DCHECK(strncmp(format, "LValue", 6) == 0); USE(format); switch (instr->Mask(LoadLiteralMask)) { @@ -1526,12 +1526,12 @@ int Disassembler::SubstituteLiteralField(Instruction* instr, int Disassembler::SubstituteShiftField(Instruction* instr, const char* format) { - ASSERT(format[0] == 'H'); - ASSERT(instr->ShiftDP() <= 0x3); + DCHECK(format[0] == 'H'); + DCHECK(instr->ShiftDP() <= 0x3); switch (format[1]) { case 'D': { // HDP. - ASSERT(instr->ShiftDP() != ROR); + DCHECK(instr->ShiftDP() != ROR); } // Fall through. case 'L': { // HLo. if (instr->ImmDPShift() != 0) { @@ -1550,7 +1550,7 @@ int Disassembler::SubstituteShiftField(Instruction* instr, const char* format) { int Disassembler::SubstituteConditionField(Instruction* instr, const char* format) { - ASSERT(format[0] == 'C'); + DCHECK(format[0] == 'C'); const char* condition_code[] = { "eq", "ne", "hs", "lo", "mi", "pl", "vs", "vc", "hi", "ls", "ge", "lt", @@ -1559,7 +1559,7 @@ int Disassembler::SubstituteConditionField(Instruction* instr, switch (format[1]) { case 'B': cond = instr->ConditionBranch(); break; case 'I': { - cond = InvertCondition(static_cast<Condition>(instr->Condition())); + cond = NegateCondition(static_cast<Condition>(instr->Condition())); break; } default: cond = instr->Condition(); @@ -1572,12 +1572,12 @@ int Disassembler::SubstituteConditionField(Instruction* instr, int Disassembler::SubstitutePCRelAddressField(Instruction* instr, const char* format) { USE(format); - ASSERT(strncmp(format, "AddrPCRel", 9) == 0); + DCHECK(strncmp(format, "AddrPCRel", 9) == 0); int offset = instr->ImmPCRel(); // Only ADR (AddrPCRelByte) is supported. 
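The IsMovzMovnImm check exercised above rests on a simple encodability rule: movz can materialize a 64-bit value in one instruction only if all of its set bits fit in a single 16-bit chunk at bit position 0, 16, 32 or 48, and movn is the same test on the bitwise complement. A self-contained version of the 64-bit case (ignoring the W-register variant the real code also handles; the function names are illustrative):

    #include <cstdint>

    // True if value is encodable as a single movz: one aligned 16-bit chunk
    // holds every set bit.
    bool IsMovzImm(uint64_t value) {
      for (int shift = 0; shift < 64; shift += 16) {
        if ((value & ~(UINT64_C(0xffff) << shift)) == 0) return true;
      }
      return false;
    }

    // movn writes the complement of its immediate, so test the inversion.
    bool IsMovnImm(uint64_t value) { return IsMovzImm(~value); }

    int main() {
      // 0xabcd0000 sits entirely in the 16 bits at position 16.
      bool ok = IsMovzImm(UINT64_C(0xabcd0000)) &&
                IsMovnImm(~UINT64_C(0xabcd0000)) &&
                !IsMovzImm(UINT64_C(0x10001));  // spans two chunks
      return ok ? 0 : 1;
    }

The disassembler uses this to decide whether a literal was plausibly a deliberate immediate (worth printing in hex) rather than an address fragment.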
- ASSERT(strcmp(format, "AddrPCRelByte") == 0); + DCHECK(strcmp(format, "AddrPCRelByte") == 0); char sign = '+'; if (offset < 0) { @@ -1592,7 +1592,7 @@ int Disassembler::SubstitutePCRelAddressField(Instruction* instr, int Disassembler::SubstituteBranchTargetField(Instruction* instr, const char* format) { - ASSERT(strncmp(format, "BImm", 4) == 0); + DCHECK(strncmp(format, "BImm", 4) == 0); int64_t offset = 0; switch (format[5]) { @@ -1619,8 +1619,8 @@ int Disassembler::SubstituteBranchTargetField(Instruction* instr, int Disassembler::SubstituteExtendField(Instruction* instr, const char* format) { - ASSERT(strncmp(format, "Ext", 3) == 0); - ASSERT(instr->ExtendMode() <= 7); + DCHECK(strncmp(format, "Ext", 3) == 0); + DCHECK(instr->ExtendMode() <= 7); USE(format); const char* extend_mode[] = { "uxtb", "uxth", "uxtw", "uxtx", @@ -1646,7 +1646,7 @@ int Disassembler::SubstituteExtendField(Instruction* instr, int Disassembler::SubstituteLSRegOffsetField(Instruction* instr, const char* format) { - ASSERT(strncmp(format, "Offsetreg", 9) == 0); + DCHECK(strncmp(format, "Offsetreg", 9) == 0); const char* extend_mode[] = { "undefined", "undefined", "uxtw", "lsl", "undefined", "undefined", "sxtw", "sxtx" }; USE(format); @@ -1675,7 +1675,7 @@ int Disassembler::SubstituteLSRegOffsetField(Instruction* instr, int Disassembler::SubstitutePrefetchField(Instruction* instr, const char* format) { - ASSERT(format[0] == 'P'); + DCHECK(format[0] == 'P'); USE(format); int prefetch_mode = instr->PrefetchMode(); @@ -1690,7 +1690,7 @@ int Disassembler::SubstitutePrefetchField(Instruction* instr, int Disassembler::SubstituteBarrierField(Instruction* instr, const char* format) { - ASSERT(format[0] == 'M'); + DCHECK(format[0] == 'M'); USE(format); static const char* options[4][4] = { @@ -1734,7 +1734,7 @@ namespace disasm { const char* NameConverter::NameOfAddress(byte* addr) const { - v8::internal::OS::SNPrintF(tmp_buffer_, "%p", addr); + v8::internal::SNPrintF(tmp_buffer_, "%p", addr); return tmp_buffer_.start(); } @@ -1752,7 +1752,7 @@ const char* NameConverter::NameOfCPURegister(int reg) const { if (ureg == v8::internal::kZeroRegCode) { return "xzr"; } - v8::internal::OS::SNPrintF(tmp_buffer_, "x%u", ureg); + v8::internal::SNPrintF(tmp_buffer_, "x%u", ureg); return tmp_buffer_.start(); } @@ -1786,7 +1786,7 @@ class BufferDisassembler : public v8::internal::Disassembler { ~BufferDisassembler() { } virtual void ProcessOutput(v8::internal::Instruction* instr) { - v8::internal::OS::SNPrintF(out_buffer_, "%s", GetOutput()); + v8::internal::SNPrintF(out_buffer_, "%s", GetOutput()); } private: @@ -1797,7 +1797,7 @@ Disassembler::Disassembler(const NameConverter& converter) : converter_(converter) {} -Disassembler::~Disassembler() {} +Disassembler::~Disassembler() { USE(converter_); } int Disassembler::InstructionDecode(v8::internal::Vector<char> buffer, diff --git a/deps/v8/src/arm64/disasm-arm64.h b/deps/v8/src/arm64/disasm-arm64.h index 42552a2d8b2..8cd3b80dbe1 100644 --- a/deps/v8/src/arm64/disasm-arm64.h +++ b/deps/v8/src/arm64/disasm-arm64.h @@ -5,12 +5,12 @@ #ifndef V8_ARM64_DISASM_ARM64_H #define V8_ARM64_DISASM_ARM64_H -#include "v8.h" +#include "src/v8.h" -#include "globals.h" -#include "utils.h" -#include "instructions-arm64.h" -#include "decoder-arm64.h" +#include "src/arm64/decoder-arm64.h" +#include "src/arm64/instructions-arm64.h" +#include "src/globals.h" +#include "src/utils.h" namespace v8 { namespace internal { diff --git a/deps/v8/src/arm64/frames-arm64.cc b/deps/v8/src/arm64/frames-arm64.cc index 
da638ad6e33..b3633e07baf 100644 --- a/deps/v8/src/arm64/frames-arm64.cc +++ b/deps/v8/src/arm64/frames-arm64.cc @@ -2,14 +2,14 @@ // Use of this source code is governed by a BSD-style license that can be // found in the LICENSE file. -#include "v8.h" +#include "src/v8.h" #if V8_TARGET_ARCH_ARM64 -#include "assembler.h" -#include "assembler-arm64.h" -#include "assembler-arm64-inl.h" -#include "frames.h" +#include "src/arm64/assembler-arm64-inl.h" +#include "src/arm64/assembler-arm64.h" +#include "src/assembler.h" +#include "src/frames.h" namespace v8 { namespace internal { diff --git a/deps/v8/src/arm64/frames-arm64.h b/deps/v8/src/arm64/frames-arm64.h index 3996bd75dc9..8d4ce861978 100644 --- a/deps/v8/src/arm64/frames-arm64.h +++ b/deps/v8/src/arm64/frames-arm64.h @@ -2,8 +2,8 @@ // Use of this source code is governed by a BSD-style license that can be // found in the LICENSE file. -#include "arm64/constants-arm64.h" -#include "arm64/assembler-arm64.h" +#include "src/arm64/assembler-arm64.h" +#include "src/arm64/constants-arm64.h" #ifndef V8_ARM64_FRAMES_ARM64_H_ #define V8_ARM64_FRAMES_ARM64_H_ @@ -15,7 +15,6 @@ const int kNumRegs = kNumberOfRegisters; // Registers x0-x17 are caller-saved. const int kNumJSCallerSaved = 18; const RegList kJSCallerSaved = 0x3ffff; -typedef Object* JSCallerSavedBuffer[kNumJSCallerSaved]; // Number of registers for which space is reserved in safepoints. Must be a // multiple of eight. diff --git a/deps/v8/src/arm64/full-codegen-arm64.cc b/deps/v8/src/arm64/full-codegen-arm64.cc index 0196e69e454..2e63814bd39 100644 --- a/deps/v8/src/arm64/full-codegen-arm64.cc +++ b/deps/v8/src/arm64/full-codegen-arm64.cc @@ -2,22 +2,22 @@ // Use of this source code is governed by a BSD-style license that can be // found in the LICENSE file. -#include "v8.h" +#include "src/v8.h" #if V8_TARGET_ARCH_ARM64 -#include "code-stubs.h" -#include "codegen.h" -#include "compiler.h" -#include "debug.h" -#include "full-codegen.h" -#include "isolate-inl.h" -#include "parser.h" -#include "scopes.h" -#include "stub-cache.h" +#include "src/code-stubs.h" +#include "src/codegen.h" +#include "src/compiler.h" +#include "src/debug.h" +#include "src/full-codegen.h" +#include "src/isolate-inl.h" +#include "src/parser.h" +#include "src/scopes.h" +#include "src/stub-cache.h" -#include "arm64/code-stubs-arm64.h" -#include "arm64/macro-assembler-arm64.h" +#include "src/arm64/code-stubs-arm64.h" +#include "src/arm64/macro-assembler-arm64.h" namespace v8 { namespace internal { @@ -34,18 +34,18 @@ class JumpPatchSite BASE_EMBEDDED { ~JumpPatchSite() { if (patch_site_.is_bound()) { - ASSERT(info_emitted_); + DCHECK(info_emitted_); } else { - ASSERT(reg_.IsNone()); + DCHECK(reg_.IsNone()); } } void EmitJumpIfNotSmi(Register reg, Label* target) { // This code will be patched by PatchInlinedSmiCode, in ic-arm64.cc. InstructionAccurateScope scope(masm_, 1); - ASSERT(!info_emitted_); - ASSERT(reg.Is64Bits()); - ASSERT(!reg.Is(csp)); + DCHECK(!info_emitted_); + DCHECK(reg.Is64Bits()); + DCHECK(!reg.Is(csp)); reg_ = reg; __ bind(&patch_site_); __ tbz(xzr, 0, target); // Always taken before patched. @@ -54,9 +54,9 @@ class JumpPatchSite BASE_EMBEDDED { void EmitJumpIfSmi(Register reg, Label* target) { // This code will be patched by PatchInlinedSmiCode, in ic-arm64.cc. 
InstructionAccurateScope scope(masm_, 1); - ASSERT(!info_emitted_); - ASSERT(reg.Is64Bits()); - ASSERT(!reg.Is(csp)); + DCHECK(!info_emitted_); + DCHECK(reg.Is64Bits()); + DCHECK(!reg.Is(csp)); reg_ = reg; __ bind(&patch_site_); __ tbnz(xzr, 0, target); // Never taken before patched. @@ -87,29 +87,6 @@ class JumpPatchSite BASE_EMBEDDED { }; -static void EmitStackCheck(MacroAssembler* masm_, - int pointers = 0, - Register scratch = jssp) { - Isolate* isolate = masm_->isolate(); - Label ok; - ASSERT(jssp.Is(__ StackPointer())); - ASSERT(scratch.Is(jssp) == (pointers == 0)); - Heap::RootListIndex index; - if (pointers != 0) { - __ Sub(scratch, jssp, pointers * kPointerSize); - index = Heap::kRealStackLimitRootIndex; - } else { - index = Heap::kStackLimitRootIndex; - } - __ CompareRoot(scratch, index); - __ B(hs, &ok); - PredictableCodeSizeScope predictable(masm_, - Assembler::kCallSizeWithRelocation); - __ Call(isolate->builtins()->StackCheck(), RelocInfo::CODE_TARGET); - __ Bind(&ok); -} - - // Generate code for a JS function. On entry to the function the receiver // and arguments have been pushed on the stack left to right. The actual // argument count matches the formal parameter count expected by the @@ -153,7 +130,7 @@ void FullCodeGenerator::Generate() { __ JumpIfNotRoot(x10, Heap::kUndefinedValueRootIndex, &ok); __ Ldr(x10, GlobalObjectMemOperand()); - __ Ldr(x10, FieldMemOperand(x10, GlobalObject::kGlobalReceiverOffset)); + __ Ldr(x10, FieldMemOperand(x10, GlobalObject::kGlobalProxyOffset)); __ Poke(x10, receiver_offset); __ Bind(&ok); @@ -170,18 +147,24 @@ void FullCodeGenerator::Generate() { // Push(lr, fp, cp, x1); // Add(fp, jssp, 2 * kPointerSize); info->set_prologue_offset(masm_->pc_offset()); - __ Prologue(BUILD_FUNCTION_FRAME); + __ Prologue(info->IsCodePreAgingActive()); info->AddNoFrameRange(0, masm_->pc_offset()); // Reserve space on the stack for locals. { Comment cmnt(masm_, "[ Allocate locals"); int locals_count = info->scope()->num_stack_slots(); // Generators allocate locals, if any, in context slots. - ASSERT(!info->function()->is_generator() || locals_count == 0); + DCHECK(!info->function()->is_generator() || locals_count == 0); if (locals_count > 0) { if (locals_count >= 128) { - EmitStackCheck(masm_, locals_count, x10); + Label ok; + DCHECK(jssp.Is(__ StackPointer())); + __ Sub(x10, jssp, locals_count * kPointerSize); + __ CompareRoot(x10, Heap::kRealStackLimitRootIndex); + __ B(hs, &ok); + __ InvokeBuiltin(Builtins::STACK_OVERFLOW, CALL_FUNCTION); + __ Bind(&ok); } __ LoadRoot(x10, Heap::kUndefinedValueRootIndex); if (FLAG_optimize_for_size) { @@ -211,16 +194,19 @@ void FullCodeGenerator::Generate() { if (heap_slots > 0) { // Argument to NewContext is the function, which is still in x1. Comment cmnt(masm_, "[ Allocate context"); + bool need_write_barrier = true; if (FLAG_harmony_scoping && info->scope()->is_global_scope()) { __ Mov(x10, Operand(info->scope()->GetScopeInfo())); __ Push(x1, x10); - __ CallRuntime(Runtime::kHiddenNewGlobalContext, 2); + __ CallRuntime(Runtime::kNewGlobalContext, 2); } else if (heap_slots <= FastNewContextStub::kMaximumSlots) { FastNewContextStub stub(isolate(), heap_slots); __ CallStub(&stub); + // Result of FastNewContextStub is always in new space. + need_write_barrier = false; } else { __ Push(x1); - __ CallRuntime(Runtime::kHiddenNewFunctionContext, 1); + __ CallRuntime(Runtime::kNewFunctionContext, 1); } function_in_register_x1 = false; // Context is returned in x0. It replaces the context passed to us. 
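The need_write_barrier flag introduced above exploits a generational-GC invariant: a pointer store into an object that is itself in new space never needs to be recorded, because every minor GC scans all of new space anyway; only old-to-new pointers must go into the remembered set. Since FastNewContextStub always allocates in new space, the RecordWriteContextSlot call can be skipped. A minimal model of that rule, with hypothetical helper types rather than V8's heap API:

    #include <cstdio>

    struct Object { bool in_new_space; };

    struct RememberedSet {
      int recorded = 0;
      void Record() { recorded++; }
    };

    // Generational write barrier in miniature: only old -> new stores are
    // recorded; stores into new-space holders are always safe to skip.
    void WriteSlot(Object* holder, Object** slot, Object* value,
                   RememberedSet* set) {
      *slot = value;
      if (holder->in_new_space) return;  // scanned by every minor GC anyway
      if (value != nullptr && value->in_new_space) set->Record();
    }

    int main() {
      RememberedSet set;
      Object young_ctx = {true}, old_ctx = {false}, young_val = {true};
      Object* slot1;
      Object* slot2;
      WriteSlot(&young_ctx, &slot1, &young_val, &set);  // barrier elided
      WriteSlot(&old_ctx, &slot2, &young_val, &set);    // must be recorded
      std::printf("recorded: %d\n", set.recorded);      // prints 1
      return 0;
    }

This is also why the debug-only path Aborts with kExpectedNewSpaceObject: the elision is only correct while the allocation really lands in new space.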
@@ -241,8 +227,15 @@ void FullCodeGenerator::Generate() { __ Str(x10, target); // Update the write barrier. - __ RecordWriteContextSlot( - cp, target.offset(), x10, x11, kLRHasBeenSaved, kDontSaveFPRegs); + if (need_write_barrier) { + __ RecordWriteContextSlot( + cp, target.offset(), x10, x11, kLRHasBeenSaved, kDontSaveFPRegs); + } else if (FLAG_debug_code) { + Label done; + __ JumpIfInNewSpace(cp, &done); + __ Abort(kExpectedNewSpaceObject); + __ bind(&done); + } } } } @@ -298,9 +291,9 @@ void FullCodeGenerator::Generate() { { Comment cmnt(masm_, "[ Declarations"); if (scope()->is_function_scope() && scope()->function() != NULL) { VariableDeclaration* function = scope()->function(); - ASSERT(function->proxy()->var()->mode() == CONST || + DCHECK(function->proxy()->var()->mode() == CONST || function->proxy()->var()->mode() == CONST_LEGACY); - ASSERT(function->proxy()->var()->location() != Variable::UNALLOCATED); + DCHECK(function->proxy()->var()->location() != Variable::UNALLOCATED); VisitVariableDeclaration(function); } VisitDeclarations(scope()->declarations()); @@ -309,13 +302,20 @@ void FullCodeGenerator::Generate() { { Comment cmnt(masm_, "[ Stack check"); PrepareForBailoutForId(BailoutId::Declarations(), NO_REGISTERS); - EmitStackCheck(masm_); + Label ok; + DCHECK(jssp.Is(__ StackPointer())); + __ CompareRoot(jssp, Heap::kStackLimitRootIndex); + __ B(hs, &ok); + PredictableCodeSizeScope predictable(masm_, + Assembler::kCallSizeWithRelocation); + __ Call(isolate()->builtins()->StackCheck(), RelocInfo::CODE_TARGET); + __ Bind(&ok); } { Comment cmnt(masm_, "[ Body"); - ASSERT(loop_depth() == 0); + DCHECK(loop_depth() == 0); VisitStatements(function()->body()); - ASSERT(loop_depth() == 0); + DCHECK(loop_depth() == 0); } // Always emit a 'return undefined' in case control fell off the end of @@ -347,7 +347,7 @@ void FullCodeGenerator::EmitProfilingCounterDecrement(int delta) { void FullCodeGenerator::EmitProfilingCounterReset() { int reset_value = FLAG_interrupt_budget; - if (isolate()->IsDebuggerActive()) { + if (info_->is_debug()) { // Detect debug break requests as soon as possible. reset_value = FLAG_interrupt_budget >> 4; } @@ -359,13 +359,13 @@ void FullCodeGenerator::EmitProfilingCounterReset() { void FullCodeGenerator::EmitBackEdgeBookkeeping(IterationStatement* stmt, Label* back_edge_target) { - ASSERT(jssp.Is(__ StackPointer())); + DCHECK(jssp.Is(__ StackPointer())); Comment cmnt(masm_, "[ Back edge bookkeeping"); // Block literal pools whilst emitting back edge code. Assembler::BlockPoolsScope block_const_pool(masm_); Label ok; - ASSERT(back_edge_target->is_bound()); + DCHECK(back_edge_target->is_bound()); // We want to do a round rather than a floor of distance/kCodeSizeMultiplier // to reduce the absolute error due to the integer division. To do that, // we add kCodeSizeMultiplier/2 to the distance (equivalent to adding 0.5 to @@ -407,7 +407,7 @@ void FullCodeGenerator::EmitReturnSequence() { // Runtime::TraceExit returns its parameter in x0. __ Push(result_register()); __ CallRuntime(Runtime::kTraceExit, 1); - ASSERT(x0.Is(result_register())); + DCHECK(x0.Is(result_register())); } // Pretend that the exit is a backwards jump to the entry. int weight = 1; @@ -441,7 +441,7 @@ void FullCodeGenerator::EmitReturnSequence() { // of the generated code must be consistent. const Register& current_sp = __ StackPointer(); // Nothing ensures 16 bytes alignment here. 
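Both stack checks inlined above (replacing the removed EmitStackCheck helper) follow one shape: compute the lowest address the frame will touch and compare it against a limit the runtime can lower to force a slow-path call. For large frames (locals_count >= 128) the comparison uses the real stack limit before reserving space; the per-function check compares jssp itself against the interrupt limit. Roughly, with stand-in names:

    #include <cstdint>
    #include <cstdio>

    const uint64_t kPointerSize = 8;

    // Models the "Sub(x10, jssp, locals * kPointerSize); CompareRoot(...);
    // B(hs, &ok)" sequence: true means the branch is taken and the
    // STACK_OVERFLOW builtin is skipped.
    bool FrameFits(uint64_t jssp, uint64_t real_stack_limit,
                   int locals_count) {
      uint64_t lowest = jssp - locals_count * kPointerSize;
      return lowest >= real_stack_limit;  // the "hs" (unsigned >=) branch
    }

    int main() {
      uint64_t limit = 0x1000;
      std::printf("%d\n", FrameFits(0x1500, limit, 128));  // 1: fits
      std::printf("%d\n", FrameFits(0x1100, limit, 128));  // 0: overflow
      return 0;
    }

Keeping the checks inline (rather than in a shared helper) lets each call site pick the right limit root and slow path, which is what this hunk restructures.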
- ASSERT(!current_sp.Is(csp)); + DCHECK(!current_sp.Is(csp)); __ mov(current_sp, fp); int no_frame_start = masm_->pc_offset(); __ ldp(fp, lr, MemOperand(current_sp, 2 * kXRegSize, PostIndex)); @@ -449,7 +449,7 @@ void FullCodeGenerator::EmitReturnSequence() { // TODO(all): This implementation is overkill as it supports 2**31+1 // arguments, consider how to improve it without creating a security // hole. - __ LoadLiteral(ip0, 3 * kInstructionSize); + __ ldr_pcrel(ip0, (3 * kInstructionSize) >> kLoadLiteralScaleLog2); __ add(current_sp, current_sp, ip0); __ ret(); __ dc64(kXRegSize * (info_->scope()->num_parameters() + 1)); @@ -460,25 +460,25 @@ void FullCodeGenerator::EmitReturnSequence() { void FullCodeGenerator::EffectContext::Plug(Variable* var) const { - ASSERT(var->IsStackAllocated() || var->IsContextSlot()); + DCHECK(var->IsStackAllocated() || var->IsContextSlot()); } void FullCodeGenerator::AccumulatorValueContext::Plug(Variable* var) const { - ASSERT(var->IsStackAllocated() || var->IsContextSlot()); + DCHECK(var->IsStackAllocated() || var->IsContextSlot()); codegen()->GetVar(result_register(), var); } void FullCodeGenerator::StackValueContext::Plug(Variable* var) const { - ASSERT(var->IsStackAllocated() || var->IsContextSlot()); + DCHECK(var->IsStackAllocated() || var->IsContextSlot()); codegen()->GetVar(result_register(), var); __ Push(result_register()); } void FullCodeGenerator::TestContext::Plug(Variable* var) const { - ASSERT(var->IsStackAllocated() || var->IsContextSlot()); + DCHECK(var->IsStackAllocated() || var->IsContextSlot()); // For simplicity we always test the accumulator register. codegen()->GetVar(result_register(), var); codegen()->PrepareForBailoutBeforeSplit(condition(), false, NULL, NULL); @@ -542,7 +542,7 @@ void FullCodeGenerator::TestContext::Plug(Handle<Object> lit) const { true, true_label_, false_label_); - ASSERT(!lit->IsUndetectableObject()); // There are no undetectable literals. + DCHECK(!lit->IsUndetectableObject()); // There are no undetectable literals. if (lit->IsUndefined() || lit->IsNull() || lit->IsFalse()) { if (false_label_ != fall_through_) __ B(false_label_); } else if (lit->IsTrue() || lit->IsJSObject()) { @@ -569,7 +569,7 @@ void FullCodeGenerator::TestContext::Plug(Handle<Object> lit) const { void FullCodeGenerator::EffectContext::DropAndPlug(int count, Register reg) const { - ASSERT(count > 0); + DCHECK(count > 0); __ Drop(count); } @@ -577,7 +577,7 @@ void FullCodeGenerator::EffectContext::DropAndPlug(int count, void FullCodeGenerator::AccumulatorValueContext::DropAndPlug( int count, Register reg) const { - ASSERT(count > 0); + DCHECK(count > 0); __ Drop(count); __ Move(result_register(), reg); } @@ -585,7 +585,7 @@ void FullCodeGenerator::AccumulatorValueContext::DropAndPlug( void FullCodeGenerator::StackValueContext::DropAndPlug(int count, Register reg) const { - ASSERT(count > 0); + DCHECK(count > 0); if (count > 1) __ Drop(count - 1); __ Poke(reg, 0); } @@ -593,7 +593,7 @@ void FullCodeGenerator::StackValueContext::DropAndPlug(int count, void FullCodeGenerator::TestContext::DropAndPlug(int count, Register reg) const { - ASSERT(count > 0); + DCHECK(count > 0); // For simplicity we always test the accumulator register. 
__ Drop(count); __ Mov(result_register(), reg); @@ -604,7 +604,7 @@ void FullCodeGenerator::TestContext::DropAndPlug(int count, void FullCodeGenerator::EffectContext::Plug(Label* materialize_true, Label* materialize_false) const { - ASSERT(materialize_true == materialize_false); + DCHECK(materialize_true == materialize_false); __ Bind(materialize_true); } @@ -638,8 +638,8 @@ void FullCodeGenerator::StackValueContext::Plug( void FullCodeGenerator::TestContext::Plug(Label* materialize_true, Label* materialize_false) const { - ASSERT(materialize_true == true_label_); - ASSERT(materialize_false == false_label_); + DCHECK(materialize_true == true_label_); + DCHECK(materialize_false == false_label_); } @@ -700,8 +700,8 @@ void FullCodeGenerator::Split(Condition cond, if (if_false == fall_through) { __ B(cond, if_true); } else if (if_true == fall_through) { - ASSERT(if_false != fall_through); - __ B(InvertCondition(cond), if_false); + DCHECK(if_false != fall_through); + __ B(NegateCondition(cond), if_false); } else { __ B(cond, if_true); __ B(if_false); @@ -723,7 +723,7 @@ MemOperand FullCodeGenerator::StackOperand(Variable* var) { MemOperand FullCodeGenerator::VarOperand(Variable* var, Register scratch) { - ASSERT(var->IsContextSlot() || var->IsStackAllocated()); + DCHECK(var->IsContextSlot() || var->IsStackAllocated()); if (var->IsContextSlot()) { int context_chain_length = scope()->ContextChainLength(var->scope()); __ LoadContext(scratch, context_chain_length); @@ -745,8 +745,8 @@ void FullCodeGenerator::SetVar(Variable* var, Register src, Register scratch0, Register scratch1) { - ASSERT(var->IsContextSlot() || var->IsStackAllocated()); - ASSERT(!AreAliased(src, scratch0, scratch1)); + DCHECK(var->IsContextSlot() || var->IsStackAllocated()); + DCHECK(!AreAliased(src, scratch0, scratch1)); MemOperand location = VarOperand(var, scratch0); __ Str(src, location); @@ -789,7 +789,7 @@ void FullCodeGenerator::PrepareForBailoutBeforeSplit(Expression* expr, void FullCodeGenerator::EmitDebugCheckDeclarationContext(Variable* variable) { // The variable in the declaration always resides in the current function // context. - ASSERT_EQ(0, scope()->ContextChainLength(variable->scope())); + DCHECK_EQ(0, scope()->ContextChainLength(variable->scope())); if (generate_debug_code_) { // Check that we're not inside a with or catch context. __ Ldr(x1, FieldMemOperand(cp, HeapObject::kMapOffset)); @@ -844,7 +844,7 @@ void FullCodeGenerator::VisitVariableDeclaration( Comment cmnt(masm_, "[ VariableDeclaration"); __ Mov(x2, Operand(variable->name())); // Declaration nodes are always introduced in one of four modes. - ASSERT(IsDeclaredVariableMode(mode)); + DCHECK(IsDeclaredVariableMode(mode)); PropertyAttributes attr = IsImmutableVariableMode(mode) ? READ_ONLY : NONE; __ Mov(x1, Smi::FromInt(attr)); @@ -859,7 +859,7 @@ void FullCodeGenerator::VisitVariableDeclaration( // Pushing 0 (xzr) indicates no initial value. __ Push(cp, x2, x1, xzr); } - __ CallRuntime(Runtime::kHiddenDeclareContextSlot, 4); + __ CallRuntime(Runtime::kDeclareLookupSlot, 4); break; } } @@ -874,7 +874,7 @@ void FullCodeGenerator::VisitFunctionDeclaration( case Variable::UNALLOCATED: { globals_->Add(variable->name(), zone()); Handle<SharedFunctionInfo> function = - Compiler::BuildFunctionInfo(declaration->fun(), script()); + Compiler::BuildFunctionInfo(declaration->fun(), script(), info_); // Check for stack overflow exception. 
if (function.is_null()) return SetStackOverflow(); globals_->Add(function, zone()); @@ -915,7 +915,7 @@ void FullCodeGenerator::VisitFunctionDeclaration( __ Push(cp, x2, x1); // Push initial value for function declaration. VisitForStackValue(declaration->fun()); - __ CallRuntime(Runtime::kHiddenDeclareContextSlot, 4); + __ CallRuntime(Runtime::kDeclareLookupSlot, 4); break; } } @@ -924,8 +924,8 @@ void FullCodeGenerator::VisitFunctionDeclaration( void FullCodeGenerator::VisitModuleDeclaration(ModuleDeclaration* declaration) { Variable* variable = declaration->proxy()->var(); - ASSERT(variable->location() == Variable::CONTEXT); - ASSERT(variable->interface()->IsFrozen()); + DCHECK(variable->location() == Variable::CONTEXT); + DCHECK(variable->interface()->IsFrozen()); Comment cmnt(masm_, "[ ModuleDeclaration"); EmitDebugCheckDeclarationContext(variable); @@ -990,7 +990,7 @@ void FullCodeGenerator::DeclareGlobals(Handle<FixedArray> pairs) { __ Mov(flags, Smi::FromInt(DeclareGlobalsFlags())); } __ Push(cp, x11, flags); - __ CallRuntime(Runtime::kHiddenDeclareGlobals, 3); + __ CallRuntime(Runtime::kDeclareGlobals, 3); // Return value is ignored. } @@ -998,7 +998,7 @@ void FullCodeGenerator::DeclareGlobals(Handle<FixedArray> pairs) { void FullCodeGenerator::DeclareModules(Handle<FixedArray> descriptions) { // Call the runtime to declare the modules. __ Push(descriptions); - __ CallRuntime(Runtime::kHiddenDeclareModules, 1); + __ CallRuntime(Runtime::kDeclareModules, 1); // Return value is ignored. } @@ -1165,11 +1165,9 @@ void FullCodeGenerator::VisitForInStatement(ForInStatement* stmt) { FieldMemOperand(x2, DescriptorArray::kEnumCacheBridgeCacheOffset)); // Set up the four remaining stack slots. - __ Push(x0); // Map. - __ Mov(x0, Smi::FromInt(0)); - // Push enumeration cache, enumeration cache length (as smi) and zero. __ SmiTag(x1); - __ Push(x2, x1, x0); + // Map, enumeration cache, enum cache length, zero (both last as smis). + __ Push(x0, x2, x1, xzr); __ B(&loop); __ Bind(&no_descriptors); @@ -1188,11 +1186,11 @@ void FullCodeGenerator::VisitForInStatement(ForInStatement* stmt) { STATIC_ASSERT(FIRST_JS_PROXY_TYPE == FIRST_SPEC_OBJECT_TYPE); // TODO(all): similar check was done already. Can we avoid it here? __ CompareObjectType(x10, x11, x12, LAST_JS_PROXY_TYPE); - ASSERT(Smi::FromInt(0) == 0); + DCHECK(Smi::FromInt(0) == 0); __ CzeroX(x1, le); // Zero indicates proxy. - __ Push(x1, x0); // Smi and array - __ Ldr(x1, FieldMemOperand(x0, FixedArray::kLengthOffset)); - __ Push(x1, xzr); // Fixed array length (as smi) and initial index. + __ Ldr(x2, FieldMemOperand(x0, FixedArray::kLengthOffset)); + // Smi and array, fixed array length (as smi) and initial index. + __ Push(x1, x0, x2, xzr); // Generate code for doing the condition check. PrepareForBailoutForId(stmt->BodyId(), NO_REGISTERS); @@ -1273,26 +1271,8 @@ void FullCodeGenerator::VisitForOfStatement(ForOfStatement* stmt) { Iteration loop_statement(this, stmt); increment_loop_depth(); - // var iterator = iterable[@@iterator]() - VisitForAccumulatorValue(stmt->assign_iterator()); - - // As with for-in, skip the loop if the iterator is null or undefined. - Register iterator = x0; - __ JumpIfRoot(iterator, Heap::kUndefinedValueRootIndex, - loop_statement.break_label()); - __ JumpIfRoot(iterator, Heap::kNullValueRootIndex, - loop_statement.break_label()); - - // Convert the iterator to a JS object. 
- Label convert, done_convert; - __ JumpIfSmi(iterator, &convert); - __ CompareObjectType(iterator, x1, x1, FIRST_SPEC_OBJECT_TYPE); - __ B(ge, &done_convert); - __ Bind(&convert); - __ Push(iterator); - __ InvokeBuiltin(Builtins::TO_OBJECT, CALL_FUNCTION); - __ Bind(&done_convert); - __ Push(iterator); + // var iterator = iterable[Symbol.iterator](); + VisitForEffect(stmt->assign_iterator()); // Loop entry. __ Bind(loop_statement.continue_label()); @@ -1349,7 +1329,7 @@ void FullCodeGenerator::EmitNewClosure(Handle<SharedFunctionInfo> info, __ LoadRoot(x10, pretenure ? Heap::kTrueValueRootIndex : Heap::kFalseValueRootIndex); __ Push(cp, x11, x10); - __ CallRuntime(Runtime::kHiddenNewClosure, 3); + __ CallRuntime(Runtime::kNewClosure, 3); } context()->Plug(x0); } @@ -1361,7 +1341,7 @@ void FullCodeGenerator::VisitVariableProxy(VariableProxy* expr) { } -void FullCodeGenerator::EmitLoadGlobalCheckExtensions(Variable* var, +void FullCodeGenerator::EmitLoadGlobalCheckExtensions(VariableProxy* proxy, TypeofState typeof_state, Label* slow) { Register current = cp; @@ -1404,8 +1384,13 @@ void FullCodeGenerator::EmitLoadGlobalCheckExtensions(Variable* var, __ Bind(&fast); } - __ Ldr(x0, GlobalObjectMemOperand()); - __ Mov(x2, Operand(var->name())); + __ Ldr(LoadIC::ReceiverRegister(), GlobalObjectMemOperand()); + __ Mov(LoadIC::NameRegister(), Operand(proxy->var()->name())); + if (FLAG_vector_ics) { + __ Mov(LoadIC::SlotRegister(), + Smi::FromInt(proxy->VariableFeedbackSlot())); + } + ContextualMode mode = (typeof_state == INSIDE_TYPEOF) ? NOT_CONTEXTUAL : CONTEXTUAL; CallLoadIC(mode); @@ -1414,7 +1399,7 @@ void FullCodeGenerator::EmitLoadGlobalCheckExtensions(Variable* var, MemOperand FullCodeGenerator::ContextSlotOperandCheckExtensions(Variable* var, Label* slow) { - ASSERT(var->IsContextSlot()); + DCHECK(var->IsContextSlot()); Register context = cp; Register next = x10; Register temp = x11; @@ -1442,7 +1427,7 @@ MemOperand FullCodeGenerator::ContextSlotOperandCheckExtensions(Variable* var, } -void FullCodeGenerator::EmitDynamicLookupFastCase(Variable* var, +void FullCodeGenerator::EmitDynamicLookupFastCase(VariableProxy* proxy, TypeofState typeof_state, Label* slow, Label* done) { @@ -1451,8 +1436,9 @@ void FullCodeGenerator::EmitDynamicLookupFastCase(Variable* var, // introducing variables. In those cases, we do not want to // perform a runtime call for all variables in the scope // containing the eval. + Variable* var = proxy->var(); if (var->mode() == DYNAMIC_GLOBAL) { - EmitLoadGlobalCheckExtensions(var, typeof_state, slow); + EmitLoadGlobalCheckExtensions(proxy, typeof_state, slow); __ B(done); } else if (var->mode() == DYNAMIC_LOCAL) { Variable* local = var->local_if_not_shadowed(); @@ -1465,7 +1451,7 @@ void FullCodeGenerator::EmitDynamicLookupFastCase(Variable* var, } else { // LET || CONST __ Mov(x0, Operand(var->name())); __ Push(x0); - __ CallRuntime(Runtime::kHiddenThrowReferenceError, 1); + __ CallRuntime(Runtime::kThrowReferenceError, 1); } } __ B(done); @@ -1483,10 +1469,12 @@ void FullCodeGenerator::EmitVariableLoad(VariableProxy* proxy) { switch (var->location()) { case Variable::UNALLOCATED: { Comment cmnt(masm_, "Global variable"); - // Use inline caching. Variable name is passed in x2 and the global - // object (receiver) in x0. 
- __ Ldr(x0, GlobalObjectMemOperand()); - __ Mov(x2, Operand(var->name())); + __ Ldr(LoadIC::ReceiverRegister(), GlobalObjectMemOperand()); + __ Mov(LoadIC::NameRegister(), Operand(var->name())); + if (FLAG_vector_ics) { + __ Mov(LoadIC::SlotRegister(), + Smi::FromInt(proxy->VariableFeedbackSlot())); + } CallLoadIC(CONTEXTUAL); context()->Plug(x0); break; @@ -1504,7 +1492,7 @@ void FullCodeGenerator::EmitVariableLoad(VariableProxy* proxy) { // always looked up dynamically, i.e. in that case // var->location() == LOOKUP. // always holds. - ASSERT(var->scope() != NULL); + DCHECK(var->scope() != NULL); // Check if the binding really needs an initialization check. The check // can be skipped in the following situation: we have a LET or CONST @@ -1527,8 +1515,8 @@ void FullCodeGenerator::EmitVariableLoad(VariableProxy* proxy) { skip_init_check = false; } else { // Check that we always have valid source position. - ASSERT(var->initializer_position() != RelocInfo::kNoPosition); - ASSERT(proxy->position() != RelocInfo::kNoPosition); + DCHECK(var->initializer_position() != RelocInfo::kNoPosition); + DCHECK(proxy->position() != RelocInfo::kNoPosition); skip_init_check = var->mode() != CONST_LEGACY && var->initializer_position() < proxy->position(); } @@ -1543,11 +1531,11 @@ void FullCodeGenerator::EmitVariableLoad(VariableProxy* proxy) { // binding in harmony mode. __ Mov(x0, Operand(var->name())); __ Push(x0); - __ CallRuntime(Runtime::kHiddenThrowReferenceError, 1); + __ CallRuntime(Runtime::kThrowReferenceError, 1); __ Bind(&done); } else { // Uninitalized const bindings outside of harmony mode are unholed. - ASSERT(var->mode() == CONST_LEGACY); + DCHECK(var->mode() == CONST_LEGACY); __ LoadRoot(x0, Heap::kUndefinedValueRootIndex); __ Bind(&done); } @@ -1563,12 +1551,12 @@ void FullCodeGenerator::EmitVariableLoad(VariableProxy* proxy) { Label done, slow; // Generate code for loading from variables potentially shadowed by // eval-introduced variables. - EmitDynamicLookupFastCase(var, NOT_INSIDE_TYPEOF, &slow, &done); + EmitDynamicLookupFastCase(proxy, NOT_INSIDE_TYPEOF, &slow, &done); __ Bind(&slow); Comment cmnt(masm_, "Lookup variable"); __ Mov(x1, Operand(var->name())); __ Push(cp, x1); // Context and name. 
- __ CallRuntime(Runtime::kHiddenLoadContextSlot, 2); + __ CallRuntime(Runtime::kLoadLookupSlot, 2); __ Bind(&done); context()->Plug(x0); break; @@ -1600,7 +1588,7 @@ void FullCodeGenerator::VisitRegExpLiteral(RegExpLiteral* expr) { __ Mov(x2, Operand(expr->pattern())); __ Mov(x1, Operand(expr->flags())); __ Push(x4, x3, x2, x1); - __ CallRuntime(Runtime::kHiddenMaterializeRegExpLiteral, 4); + __ CallRuntime(Runtime::kMaterializeRegExpLiteral, 4); __ Mov(x5, x0); __ Bind(&materialized); @@ -1612,7 +1600,7 @@ void FullCodeGenerator::VisitRegExpLiteral(RegExpLiteral* expr) { __ Bind(&runtime_allocate); __ Mov(x10, Smi::FromInt(size)); __ Push(x5, x10); - __ CallRuntime(Runtime::kHiddenAllocateInNewSpace, 1); + __ CallRuntime(Runtime::kAllocateInNewSpace, 1); __ Pop(x5); __ Bind(&allocated); @@ -1655,10 +1643,10 @@ void FullCodeGenerator::VisitObjectLiteral(ObjectLiteral* expr) { const int max_cloned_properties = FastCloneShallowObjectStub::kMaximumClonedProperties; if (expr->may_store_doubles() || expr->depth() > 1 || - Serializer::enabled(isolate()) || flags != ObjectLiteral::kFastElements || + masm()->serializer_enabled() || flags != ObjectLiteral::kFastElements || properties_count > max_cloned_properties) { __ Push(x3, x2, x1, x0); - __ CallRuntime(Runtime::kHiddenCreateObjectLiteral, 4); + __ CallRuntime(Runtime::kCreateObjectLiteral, 4); } else { FastCloneShallowObjectStub stub(isolate(), properties_count); __ CallStub(&stub); @@ -1688,14 +1676,15 @@ void FullCodeGenerator::VisitObjectLiteral(ObjectLiteral* expr) { case ObjectLiteral::Property::CONSTANT: UNREACHABLE(); case ObjectLiteral::Property::MATERIALIZED_LITERAL: - ASSERT(!CompileTimeValue::IsCompileTimeValue(property->value())); + DCHECK(!CompileTimeValue::IsCompileTimeValue(property->value())); // Fall through. 
case ObjectLiteral::Property::COMPUTED: if (key->value()->IsInternalizedString()) { if (property->emit_store()) { VisitForAccumulatorValue(value); - __ Mov(x2, Operand(key->value())); - __ Peek(x1, 0); + DCHECK(StoreIC::ValueRegister().is(x0)); + __ Mov(StoreIC::NameRegister(), Operand(key->value())); + __ Peek(StoreIC::ReceiverRegister(), 0); CallStoreIC(key->LiteralFeedbackId()); PrepareForBailoutForId(key->id(), NO_REGISTERS); } else { @@ -1709,7 +1698,7 @@ void FullCodeGenerator::VisitObjectLiteral(ObjectLiteral* expr) { __ Push(x0); VisitForStackValue(key); VisitForStackValue(value); - __ Mov(x0, Smi::FromInt(NONE)); // PropertyAttributes + __ Mov(x0, Smi::FromInt(SLOPPY)); // Strict mode __ Push(x0); __ CallRuntime(Runtime::kSetProperty, 4); } else { @@ -1749,11 +1738,11 @@ void FullCodeGenerator::VisitObjectLiteral(ObjectLiteral* expr) { EmitAccessor(it->second->setter); __ Mov(x10, Smi::FromInt(NONE)); __ Push(x10); - __ CallRuntime(Runtime::kDefineOrRedefineAccessorProperty, 5); + __ CallRuntime(Runtime::kDefineAccessorPropertyUnchecked, 5); } if (expr->has_function()) { - ASSERT(result_saved); + DCHECK(result_saved); __ Peek(x0, 0); __ Push(x0); __ CallRuntime(Runtime::kToFastProperties, 1); @@ -1777,7 +1766,7 @@ void FullCodeGenerator::VisitArrayLiteral(ArrayLiteral* expr) { ZoneList<Expression*>* subexprs = expr->values(); int length = subexprs->length(); Handle<FixedArray> constant_elements = expr->constant_elements(); - ASSERT_EQ(2, constant_elements->length()); + DCHECK_EQ(2, constant_elements->length()); ElementsKind constant_elements_kind = static_cast<ElementsKind>(Smi::cast(constant_elements->get(0))->value()); bool has_fast_elements = IsFastObjectElementsKind(constant_elements_kind); @@ -1795,35 +1784,12 @@ void FullCodeGenerator::VisitArrayLiteral(ArrayLiteral* expr) { __ Ldr(x3, FieldMemOperand(x3, JSFunction::kLiteralsOffset)); __ Mov(x2, Smi::FromInt(expr->literal_index())); __ Mov(x1, Operand(constant_elements)); - if (has_fast_elements && constant_elements_values->map() == - isolate()->heap()->fixed_cow_array_map()) { - FastCloneShallowArrayStub stub( - isolate(), - FastCloneShallowArrayStub::COPY_ON_WRITE_ELEMENTS, - allocation_site_mode, - length); - __ CallStub(&stub); - __ IncrementCounter( - isolate()->counters()->cow_arrays_created_stub(), 1, x10, x11); - } else if ((expr->depth() > 1) || Serializer::enabled(isolate()) || - length > FastCloneShallowArrayStub::kMaximumClonedLength) { + if (expr->depth() > 1 || length > JSObject::kInitialMaxFastElementArray) { __ Mov(x0, Smi::FromInt(flags)); __ Push(x3, x2, x1, x0); - __ CallRuntime(Runtime::kHiddenCreateArrayLiteral, 4); + __ CallRuntime(Runtime::kCreateArrayLiteral, 4); } else { - ASSERT(IsFastSmiOrObjectElementsKind(constant_elements_kind) || - FLAG_smi_only_arrays); - FastCloneShallowArrayStub::Mode mode = - FastCloneShallowArrayStub::CLONE_ANY_ELEMENTS; - - if (has_fast_elements) { - mode = FastCloneShallowArrayStub::CLONE_ELEMENTS; - } - - FastCloneShallowArrayStub stub(isolate(), - mode, - allocation_site_mode, - length); + FastCloneShallowArrayStub stub(isolate(), allocation_site_mode); __ CallStub(&stub); } @@ -1838,8 +1804,8 @@ void FullCodeGenerator::VisitArrayLiteral(ArrayLiteral* expr) { if (CompileTimeValue::IsCompileTimeValue(subexpr)) continue; if (!result_saved) { - __ Push(x0); - __ Push(Smi::FromInt(expr->literal_index())); + __ Mov(x1, Smi::FromInt(expr->literal_index())); + __ Push(x0, x1); result_saved = true; } VisitForAccumulatorValue(subexpr); @@ -1872,7 +1838,7 @@ void 
FullCodeGenerator::VisitArrayLiteral(ArrayLiteral* expr) { void FullCodeGenerator::VisitAssignment(Assignment* expr) { - ASSERT(expr->target()->IsValidReferenceExpression()); + DCHECK(expr->target()->IsValidReferenceExpression()); Comment cmnt(masm_, "[ Assignment"); @@ -1894,9 +1860,9 @@ void FullCodeGenerator::VisitAssignment(Assignment* expr) { break; case NAMED_PROPERTY: if (expr->is_compound()) { - // We need the receiver both on the stack and in the accumulator. - VisitForAccumulatorValue(property->obj()); - __ Push(result_register()); + // We need the receiver both on the stack and in the register. + VisitForStackValue(property->obj()); + __ Peek(LoadIC::ReceiverRegister(), 0); } else { VisitForStackValue(property->obj()); } @@ -1904,9 +1870,9 @@ void FullCodeGenerator::VisitAssignment(Assignment* expr) { case KEYED_PROPERTY: if (expr->is_compound()) { VisitForStackValue(property->obj()); - VisitForAccumulatorValue(property->key()); - __ Peek(x1, 0); - __ Push(x0); + VisitForStackValue(property->key()); + __ Peek(LoadIC::ReceiverRegister(), 1 * kPointerSize); + __ Peek(LoadIC::NameRegister(), 0); } else { VisitForStackValue(property->obj()); VisitForStackValue(property->key()); @@ -1983,9 +1949,14 @@ void FullCodeGenerator::VisitAssignment(Assignment* expr) { void FullCodeGenerator::EmitNamedPropertyLoad(Property* prop) { SetSourcePosition(prop->position()); Literal* key = prop->key()->AsLiteral(); - __ Mov(x2, Operand(key->value())); - // Call load IC. It has arguments receiver and property name x0 and x2. - CallLoadIC(NOT_CONTEXTUAL, prop->PropertyFeedbackId()); + __ Mov(LoadIC::NameRegister(), Operand(key->value())); + if (FLAG_vector_ics) { + __ Mov(LoadIC::SlotRegister(), + Smi::FromInt(prop->PropertyFeedbackSlot())); + CallLoadIC(NOT_CONTEXTUAL); + } else { + CallLoadIC(NOT_CONTEXTUAL, prop->PropertyFeedbackId()); + } } @@ -1993,7 +1964,13 @@ void FullCodeGenerator::EmitKeyedPropertyLoad(Property* prop) { SetSourcePosition(prop->position()); // Call keyed load IC. It has arguments key and receiver in r0 and r1. Handle<Code> ic = isolate()->builtins()->KeyedLoadIC_Initialize(); - CallIC(ic, prop->PropertyFeedbackId()); + if (FLAG_vector_ics) { + __ Mov(LoadIC::SlotRegister(), + Smi::FromInt(prop->PropertyFeedbackSlot())); + CallIC(ic); + } else { + CallIC(ic, prop->PropertyFeedbackId()); + } } @@ -2064,11 +2041,12 @@ void FullCodeGenerator::EmitInlineSmiBinaryOp(BinaryOperation* expr, break; case Token::MUL: { Label not_minus_zero, done; + STATIC_ASSERT(static_cast<unsigned>(kSmiShift) == (kXRegSizeInBits / 2)); + STATIC_ASSERT(kSmiTag == 0); __ Smulh(x10, left, right); __ Cbnz(x10, ¬_minus_zero); __ Eor(x11, left, right); __ Tbnz(x11, kXSignBit, &stub_call); - STATIC_ASSERT(kSmiTag == 0); __ Mov(result, x10); __ B(&done); __ Bind(¬_minus_zero); @@ -2113,7 +2091,7 @@ void FullCodeGenerator::EmitBinaryOp(BinaryOperation* expr, void FullCodeGenerator::EmitAssignment(Expression* expr) { - ASSERT(expr->IsValidReferenceExpression()); + DCHECK(expr->IsValidReferenceExpression()); // Left-hand side can only be a property, a global or a (parameter or local) // slot. @@ -2138,9 +2116,10 @@ void FullCodeGenerator::EmitAssignment(Expression* expr) { VisitForAccumulatorValue(prop->obj()); // TODO(all): We could introduce a VisitForRegValue(reg, expr) to avoid // this copy. - __ Mov(x1, x0); - __ Pop(x0); // Restore value. - __ Mov(x2, Operand(prop->key()->AsLiteral()->value())); + __ Mov(StoreIC::ReceiverRegister(), x0); + __ Pop(StoreIC::ValueRegister()); // Restore value. 
+ __ Mov(StoreIC::NameRegister(), + Operand(prop->key()->AsLiteral()->value())); CallStoreIC(); break; } @@ -2148,8 +2127,8 @@ void FullCodeGenerator::EmitAssignment(Expression* expr) { __ Push(x0); // Preserve value. VisitForStackValue(prop->obj()); VisitForAccumulatorValue(prop->key()); - __ Mov(x1, x0); - __ Pop(x2, x0); + __ Mov(KeyedStoreIC::NameRegister(), x0); + __ Pop(KeyedStoreIC::ReceiverRegister(), KeyedStoreIC::ValueRegister()); Handle<Code> ic = strict_mode() == SLOPPY ? isolate()->builtins()->KeyedStoreIC_Initialize() : isolate()->builtins()->KeyedStoreIC_Initialize_Strict(); @@ -2174,38 +2153,24 @@ void FullCodeGenerator::EmitStoreToStackLocalOrContextSlot( } -void FullCodeGenerator::EmitCallStoreContextSlot( - Handle<String> name, StrictMode strict_mode) { - __ Mov(x11, Operand(name)); - __ Mov(x10, Smi::FromInt(strict_mode)); - // jssp[0] : mode. - // jssp[8] : name. - // jssp[16] : context. - // jssp[24] : value. - __ Push(x0, cp, x11, x10); - __ CallRuntime(Runtime::kHiddenStoreContextSlot, 4); -} - - void FullCodeGenerator::EmitVariableAssignment(Variable* var, Token::Value op) { ASM_LOCATION("FullCodeGenerator::EmitVariableAssignment"); if (var->IsUnallocated()) { // Global var, const, or let. - __ Mov(x2, Operand(var->name())); - __ Ldr(x1, GlobalObjectMemOperand()); + __ Mov(StoreIC::NameRegister(), Operand(var->name())); + __ Ldr(StoreIC::ReceiverRegister(), GlobalObjectMemOperand()); CallStoreIC(); } else if (op == Token::INIT_CONST_LEGACY) { // Const initializers need a write barrier. - ASSERT(!var->IsParameter()); // No const parameters. + DCHECK(!var->IsParameter()); // No const parameters. if (var->IsLookupSlot()) { - __ Push(x0); - __ Mov(x0, Operand(var->name())); - __ Push(cp, x0); // Context and name. - __ CallRuntime(Runtime::kHiddenInitializeConstContextSlot, 3); + __ Mov(x1, Operand(var->name())); + __ Push(x0, cp, x1); + __ CallRuntime(Runtime::kInitializeLegacyConstLookupSlot, 3); } else { - ASSERT(var->IsStackLocal() || var->IsContextSlot()); + DCHECK(var->IsStackLocal() || var->IsContextSlot()); Label skip; MemOperand location = VarOperand(var, x1); __ Ldr(x10, location); @@ -2216,29 +2181,34 @@ void FullCodeGenerator::EmitVariableAssignment(Variable* var, } else if (var->mode() == LET && op != Token::INIT_LET) { // Non-initializing assignment to let variable needs a write barrier. - if (var->IsLookupSlot()) { - EmitCallStoreContextSlot(var->name(), strict_mode()); - } else { - ASSERT(var->IsStackAllocated() || var->IsContextSlot()); - Label assign; - MemOperand location = VarOperand(var, x1); - __ Ldr(x10, location); - __ JumpIfNotRoot(x10, Heap::kTheHoleValueRootIndex, &assign); - __ Mov(x10, Operand(var->name())); - __ Push(x10); - __ CallRuntime(Runtime::kHiddenThrowReferenceError, 1); - // Perform the assignment. - __ Bind(&assign); - EmitStoreToStackLocalOrContextSlot(var, location); - } + DCHECK(!var->IsLookupSlot()); + DCHECK(var->IsStackAllocated() || var->IsContextSlot()); + Label assign; + MemOperand location = VarOperand(var, x1); + __ Ldr(x10, location); + __ JumpIfNotRoot(x10, Heap::kTheHoleValueRootIndex, &assign); + __ Mov(x10, Operand(var->name())); + __ Push(x10); + __ CallRuntime(Runtime::kThrowReferenceError, 1); + // Perform the assignment. + __ Bind(&assign); + EmitStoreToStackLocalOrContextSlot(var, location); } else if (!var->is_const_mode() || op == Token::INIT_CONST) { - // Assignment to var or initializing assignment to let/const - // in harmony mode. 
if (var->IsLookupSlot()) { - EmitCallStoreContextSlot(var->name(), strict_mode()); + // Assignment to var. + __ Mov(x11, Operand(var->name())); + __ Mov(x10, Smi::FromInt(strict_mode())); + // jssp[0] : mode. + // jssp[8] : name. + // jssp[16] : context. + // jssp[24] : value. + __ Push(x0, cp, x11, x10); + __ CallRuntime(Runtime::kStoreLookupSlot, 4); } else { - ASSERT(var->IsStackAllocated() || var->IsContextSlot()); + // Assignment to var or initializing assignment to let/const in harmony + // mode. + DCHECK(var->IsStackAllocated() || var->IsContextSlot()); MemOperand location = VarOperand(var, x1); if (FLAG_debug_code && op == Token::INIT_LET) { __ Ldr(x10, location); @@ -2256,14 +2226,13 @@ void FullCodeGenerator::EmitNamedPropertyAssignment(Assignment* expr) { ASM_LOCATION("FullCodeGenerator::EmitNamedPropertyAssignment"); // Assignment to a property, using a named store IC. Property* prop = expr->target()->AsProperty(); - ASSERT(prop != NULL); - ASSERT(prop->key()->AsLiteral() != NULL); + DCHECK(prop != NULL); + DCHECK(prop->key()->IsLiteral()); // Record source code position before IC call. SetSourcePosition(expr->position()); - __ Mov(x2, Operand(prop->key()->AsLiteral()->value())); - __ Pop(x1); - + __ Mov(StoreIC::NameRegister(), Operand(prop->key()->AsLiteral()->value())); + __ Pop(StoreIC::ReceiverRegister()); CallStoreIC(expr->AssignmentFeedbackId()); PrepareForBailoutForId(expr->AssignmentId(), TOS_REG); @@ -2278,7 +2247,8 @@ void FullCodeGenerator::EmitKeyedPropertyAssignment(Assignment* expr) { // Record source code position before IC call. SetSourcePosition(expr->position()); // TODO(all): Could we pass this in registers rather than on the stack? - __ Pop(x1, x2); // Key and object holding the property. + __ Pop(KeyedStoreIC::NameRegister(), KeyedStoreIC::ReceiverRegister()); + DCHECK(KeyedStoreIC::ValueRegister().is(x0)); Handle<Code> ic = strict_mode() == SLOPPY ? isolate()->builtins()->KeyedStoreIC_Initialize() @@ -2296,13 +2266,15 @@ void FullCodeGenerator::VisitProperty(Property* expr) { if (key->IsPropertyName()) { VisitForAccumulatorValue(expr->obj()); + __ Move(LoadIC::ReceiverRegister(), x0); EmitNamedPropertyLoad(expr); PrepareForBailoutForId(expr->LoadId(), TOS_REG); context()->Plug(x0); } else { VisitForStackValue(expr->obj()); VisitForAccumulatorValue(expr->key()); - __ Pop(x1); + __ Move(LoadIC::NameRegister(), x0); + __ Pop(LoadIC::ReceiverRegister()); EmitKeyedPropertyLoad(expr); context()->Plug(x0); } @@ -2337,8 +2309,8 @@ void FullCodeGenerator::EmitCallWithLoadIC(Call* expr) { __ Push(isolate()->factory()->undefined_value()); } else { // Load the function from the receiver. - ASSERT(callee->IsProperty()); - __ Peek(x0, 0); + DCHECK(callee->IsProperty()); + __ Peek(LoadIC::ReceiverRegister(), 0); EmitNamedPropertyLoad(callee->AsProperty()); PrepareForBailoutForId(callee->AsProperty()->LoadId(), TOS_REG); // Push the target function under the receiver. @@ -2359,8 +2331,9 @@ void FullCodeGenerator::EmitKeyedCallWithLoadIC(Call* expr, Expression* callee = expr->expression(); // Load the function from the receiver. 
- ASSERT(callee->IsProperty()); - __ Peek(x1, 0); + DCHECK(callee->IsProperty()); + __ Peek(LoadIC::ReceiverRegister(), 0); + __ Move(LoadIC::NameRegister(), x0); EmitKeyedPropertyLoad(callee->AsProperty()); PrepareForBailoutForId(callee->AsProperty()->LoadId(), TOS_REG); @@ -2413,19 +2386,16 @@ void FullCodeGenerator::EmitResolvePossiblyDirectEval(int arg_count) { int receiver_offset = 2 + info_->scope()->num_parameters(); __ Ldr(x11, MemOperand(fp, receiver_offset * kPointerSize)); - // Push. - __ Push(x10, x11); - // Prepare to push the language mode. - __ Mov(x10, Smi::FromInt(strict_mode())); + __ Mov(x12, Smi::FromInt(strict_mode())); // Prepare to push the start position of the scope the calls resides in. - __ Mov(x11, Smi::FromInt(scope()->start_position())); + __ Mov(x13, Smi::FromInt(scope()->start_position())); // Push. - __ Push(x10, x11); + __ Push(x10, x11, x12, x13); // Do the runtime call. - __ CallRuntime(Runtime::kHiddenResolvePossiblyDirectEval, 5); + __ CallRuntime(Runtime::kResolvePossiblyDirectEval, 5); } @@ -2493,16 +2463,15 @@ void FullCodeGenerator::VisitCall(Call* expr) { { PreservePositionScope scope(masm()->positions_recorder()); // Generate code for loading from variables potentially shadowed // by eval-introduced variables. - EmitDynamicLookupFastCase(proxy->var(), NOT_INSIDE_TYPEOF, &slow, &done); + EmitDynamicLookupFastCase(proxy, NOT_INSIDE_TYPEOF, &slow, &done); } __ Bind(&slow); // Call the runtime to find the function to call (returned in x0) // and the object holding it (returned in x1). - __ Push(context_register()); __ Mov(x10, Operand(proxy->name())); - __ Push(x10); - __ CallRuntime(Runtime::kHiddenLoadContextSlot, 2); + __ Push(context_register(), x10); + __ CallRuntime(Runtime::kLoadLookupSlot, 2); __ Push(x0, x1); // Receiver, function. // If fast case code has been generated, emit code to push the @@ -2513,11 +2482,10 @@ void FullCodeGenerator::VisitCall(Call* expr) { __ B(&call); __ Bind(&done); // Push function. - __ Push(x0); // The receiver is implicitly the global receiver. Indicate this // by passing the undefined to the call function stub. __ LoadRoot(x1, Heap::kUndefinedValueRootIndex); - __ Push(x1); + __ Push(x0, x1); __ Bind(&call); } @@ -2536,7 +2504,7 @@ void FullCodeGenerator::VisitCall(Call* expr) { } } else { - ASSERT(call_type == Call::OTHER_CALL); + DCHECK(call_type == Call::OTHER_CALL); // Call to an arbitrary expression not handled specially above. { PreservePositionScope scope(masm()->positions_recorder()); VisitForStackValue(callee); @@ -2549,7 +2517,7 @@ void FullCodeGenerator::VisitCall(Call* expr) { #ifdef DEBUG // RecordJSReturnSite should have been called. - ASSERT(expr->return_is_recorded_); + DCHECK(expr->return_is_recorded_); #endif } @@ -2583,7 +2551,7 @@ void FullCodeGenerator::VisitCallNew(CallNew* expr) { // Record call targets in unoptimized code. 
if (FLAG_pretenuring_call_new) { EnsureSlotContainsAllocationSite(expr->AllocationSiteFeedbackSlot()); - ASSERT(expr->AllocationSiteFeedbackSlot() == + DCHECK(expr->AllocationSiteFeedbackSlot() == expr->CallNewFeedbackSlot() + 1); } @@ -2599,7 +2567,7 @@ void FullCodeGenerator::VisitCallNew(CallNew* expr) { void FullCodeGenerator::EmitIsSmi(CallRuntime* expr) { ZoneList<Expression*>* args = expr->arguments(); - ASSERT(args->length() == 1); + DCHECK(args->length() == 1); VisitForAccumulatorValue(args->at(0)); @@ -2619,7 +2587,7 @@ void FullCodeGenerator::EmitIsSmi(CallRuntime* expr) { void FullCodeGenerator::EmitIsNonNegativeSmi(CallRuntime* expr) { ZoneList<Expression*>* args = expr->arguments(); - ASSERT(args->length() == 1); + DCHECK(args->length() == 1); VisitForAccumulatorValue(args->at(0)); @@ -2630,9 +2598,10 @@ void FullCodeGenerator::EmitIsNonNegativeSmi(CallRuntime* expr) { context()->PrepareTest(&materialize_true, &materialize_false, &if_true, &if_false, &fall_through); + uint64_t sign_mask = V8_UINT64_C(1) << (kSmiShift + kSmiValueSize - 1); + PrepareForBailoutBeforeSplit(expr, true, if_true, if_false); - __ TestAndSplit(x0, kSmiTagMask | (0x80000000UL << kSmiShift), if_true, - if_false, fall_through); + __ TestAndSplit(x0, kSmiTagMask | sign_mask, if_true, if_false, fall_through); context()->Plug(if_true, if_false); } @@ -2640,7 +2609,7 @@ void FullCodeGenerator::EmitIsNonNegativeSmi(CallRuntime* expr) { void FullCodeGenerator::EmitIsObject(CallRuntime* expr) { ZoneList<Expression*>* args = expr->arguments(); - ASSERT(args->length() == 1); + DCHECK(args->length() == 1); VisitForAccumulatorValue(args->at(0)); @@ -2670,7 +2639,7 @@ void FullCodeGenerator::EmitIsObject(CallRuntime* expr) { void FullCodeGenerator::EmitIsSpecObject(CallRuntime* expr) { ZoneList<Expression*>* args = expr->arguments(); - ASSERT(args->length() == 1); + DCHECK(args->length() == 1); VisitForAccumulatorValue(args->at(0)); @@ -2693,7 +2662,7 @@ void FullCodeGenerator::EmitIsSpecObject(CallRuntime* expr) { void FullCodeGenerator::EmitIsUndetectableObject(CallRuntime* expr) { ASM_LOCATION("FullCodeGenerator::EmitIsUndetectableObject"); ZoneList<Expression*>* args = expr->arguments(); - ASSERT(args->length() == 1); + DCHECK(args->length() == 1); VisitForAccumulatorValue(args->at(0)); @@ -2718,7 +2687,7 @@ void FullCodeGenerator::EmitIsUndetectableObject(CallRuntime* expr) { void FullCodeGenerator::EmitIsStringWrapperSafeForDefaultValueOf( CallRuntime* expr) { ZoneList<Expression*>* args = expr->arguments(); - ASSERT(args->length() == 1); + DCHECK(args->length() == 1); VisitForAccumulatorValue(args->at(0)); Label materialize_true, materialize_false, skip_lookup; @@ -2819,7 +2788,7 @@ void FullCodeGenerator::EmitIsStringWrapperSafeForDefaultValueOf( void FullCodeGenerator::EmitIsFunction(CallRuntime* expr) { ZoneList<Expression*>* args = expr->arguments(); - ASSERT(args->length() == 1); + DCHECK(args->length() == 1); VisitForAccumulatorValue(args->at(0)); @@ -2841,7 +2810,7 @@ void FullCodeGenerator::EmitIsFunction(CallRuntime* expr) { void FullCodeGenerator::EmitIsMinusZero(CallRuntime* expr) { ZoneList<Expression*>* args = expr->arguments(); - ASSERT(args->length() == 1); + DCHECK(args->length() == 1); VisitForAccumulatorValue(args->at(0)); @@ -2868,7 +2837,7 @@ void FullCodeGenerator::EmitIsMinusZero(CallRuntime* expr) { void FullCodeGenerator::EmitIsArray(CallRuntime* expr) { ZoneList<Expression*>* args = expr->arguments(); - ASSERT(args->length() == 1); + DCHECK(args->length() == 1); 
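// Note: the sign_mask rewrite in EmitIsNonNegativeSmi above derives the smi
// sign bit from the layout constants instead of hard-coding it. Assuming the
// ARM64 defaults (kSmiShift == 32, kSmiValueSize == 32, i.e. a 32-bit payload
// in the upper word), the old and new expressions both select bit 63. A quick
// stand-alone check, with the constants restated here for illustration:

#include <cassert>
#include <cstdint>

int main() {
  const unsigned kSmiShift = 32;
  const unsigned kSmiValueSize = 32;
  // New form: sign bit of the smi payload, computed from the layout.
  uint64_t sign_mask = UINT64_C(1) << (kSmiShift + kSmiValueSize - 1);
  // Old form: the same bit, hard-coded against that layout.
  uint64_t old_mask = UINT64_C(0x80000000) << kSmiShift;
  assert(sign_mask == old_mask);
  assert(sign_mask == UINT64_C(1) << 63);
}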
VisitForAccumulatorValue(args->at(0)); @@ -2890,7 +2859,7 @@ void FullCodeGenerator::EmitIsArray(CallRuntime* expr) { void FullCodeGenerator::EmitIsRegExp(CallRuntime* expr) { ZoneList<Expression*>* args = expr->arguments(); - ASSERT(args->length() == 1); + DCHECK(args->length() == 1); VisitForAccumulatorValue(args->at(0)); @@ -2912,7 +2881,7 @@ void FullCodeGenerator::EmitIsRegExp(CallRuntime* expr) { void FullCodeGenerator::EmitIsConstructCall(CallRuntime* expr) { - ASSERT(expr->arguments()->length() == 0); + DCHECK(expr->arguments()->length() == 0); Label materialize_true, materialize_false; Label* if_true = NULL; @@ -2944,7 +2913,7 @@ void FullCodeGenerator::EmitIsConstructCall(CallRuntime* expr) { void FullCodeGenerator::EmitObjectEquals(CallRuntime* expr) { ZoneList<Expression*>* args = expr->arguments(); - ASSERT(args->length() == 2); + DCHECK(args->length() == 2); // Load the two objects into registers and perform the comparison. VisitForStackValue(args->at(0)); @@ -2968,7 +2937,7 @@ void FullCodeGenerator::EmitObjectEquals(CallRuntime* expr) { void FullCodeGenerator::EmitArguments(CallRuntime* expr) { ZoneList<Expression*>* args = expr->arguments(); - ASSERT(args->length() == 1); + DCHECK(args->length() == 1); // ArgumentsAccessStub expects the key in x1. VisitForAccumulatorValue(args->at(0)); @@ -2981,7 +2950,7 @@ void FullCodeGenerator::EmitArguments(CallRuntime* expr) { void FullCodeGenerator::EmitArgumentsLength(CallRuntime* expr) { - ASSERT(expr->arguments()->length() == 0); + DCHECK(expr->arguments()->length() == 0); Label exit; // Get the number of formal parameters. __ Mov(x0, Smi::FromInt(info_->scope()->num_parameters())); @@ -3004,7 +2973,7 @@ void FullCodeGenerator::EmitArgumentsLength(CallRuntime* expr) { void FullCodeGenerator::EmitClassOf(CallRuntime* expr) { ASM_LOCATION("FullCodeGenerator::EmitClassOf"); ZoneList<Expression*>* args = expr->arguments(); - ASSERT(args->length() == 1); + DCHECK(args->length() == 1); Label done, null, function, non_function_constructor; VisitForAccumulatorValue(args->at(0)); @@ -3069,7 +3038,7 @@ void FullCodeGenerator::EmitSubString(CallRuntime* expr) { // Load the arguments on the stack and call the stub. SubStringStub stub(isolate()); ZoneList<Expression*>* args = expr->arguments(); - ASSERT(args->length() == 3); + DCHECK(args->length() == 3); VisitForStackValue(args->at(0)); VisitForStackValue(args->at(1)); VisitForStackValue(args->at(2)); @@ -3082,7 +3051,7 @@ void FullCodeGenerator::EmitRegExpExec(CallRuntime* expr) { // Load the arguments on the stack and call the stub. RegExpExecStub stub(isolate()); ZoneList<Expression*>* args = expr->arguments(); - ASSERT(args->length() == 4); + DCHECK(args->length() == 4); VisitForStackValue(args->at(0)); VisitForStackValue(args->at(1)); VisitForStackValue(args->at(2)); @@ -3095,7 +3064,7 @@ void FullCodeGenerator::EmitRegExpExec(CallRuntime* expr) { void FullCodeGenerator::EmitValueOf(CallRuntime* expr) { ASM_LOCATION("FullCodeGenerator::EmitValueOf"); ZoneList<Expression*>* args = expr->arguments(); - ASSERT(args->length() == 1); + DCHECK(args->length() == 1); VisitForAccumulatorValue(args->at(0)); // Load the object. 
Label done; @@ -3112,8 +3081,8 @@ void FullCodeGenerator::EmitValueOf(CallRuntime* expr) { void FullCodeGenerator::EmitDateField(CallRuntime* expr) { ZoneList<Expression*>* args = expr->arguments(); - ASSERT(args->length() == 2); - ASSERT_NE(NULL, args->at(1)->AsLiteral()); + DCHECK(args->length() == 2); + DCHECK_NE(NULL, args->at(1)->AsLiteral()); Smi* index = Smi::cast(*(args->at(1)->AsLiteral()->value())); VisitForAccumulatorValue(args->at(0)); // Load the object. @@ -3150,7 +3119,7 @@ void FullCodeGenerator::EmitDateField(CallRuntime* expr) { } __ Bind(&not_date_object); - __ CallRuntime(Runtime::kHiddenThrowNotDateError, 0); + __ CallRuntime(Runtime::kThrowNotDateError, 0); __ Bind(&done); context()->Plug(x0); } @@ -3158,7 +3127,7 @@ void FullCodeGenerator::EmitDateField(CallRuntime* expr) { void FullCodeGenerator::EmitOneByteSeqStringSetChar(CallRuntime* expr) { ZoneList<Expression*>* args = expr->arguments(); - ASSERT_EQ(3, args->length()); + DCHECK_EQ(3, args->length()); Register string = x0; Register index = x1; @@ -3188,7 +3157,7 @@ void FullCodeGenerator::EmitOneByteSeqStringSetChar(CallRuntime* expr) { void FullCodeGenerator::EmitTwoByteSeqStringSetChar(CallRuntime* expr) { ZoneList<Expression*>* args = expr->arguments(); - ASSERT_EQ(3, args->length()); + DCHECK_EQ(3, args->length()); Register string = x0; Register index = x1; @@ -3219,7 +3188,7 @@ void FullCodeGenerator::EmitTwoByteSeqStringSetChar(CallRuntime* expr) { void FullCodeGenerator::EmitMathPow(CallRuntime* expr) { // Load the arguments on the stack and call the MathPow stub. ZoneList<Expression*>* args = expr->arguments(); - ASSERT(args->length() == 2); + DCHECK(args->length() == 2); VisitForStackValue(args->at(0)); VisitForStackValue(args->at(1)); MathPowStub stub(isolate(), MathPowStub::ON_STACK); @@ -3230,7 +3199,7 @@ void FullCodeGenerator::EmitMathPow(CallRuntime* expr) { void FullCodeGenerator::EmitSetValueOf(CallRuntime* expr) { ZoneList<Expression*>* args = expr->arguments(); - ASSERT(args->length() == 2); + DCHECK(args->length() == 2); VisitForStackValue(args->at(0)); // Load the object. VisitForAccumulatorValue(args->at(1)); // Load the value. __ Pop(x1); @@ -3259,7 +3228,7 @@ void FullCodeGenerator::EmitSetValueOf(CallRuntime* expr) { void FullCodeGenerator::EmitNumberToString(CallRuntime* expr) { ZoneList<Expression*>* args = expr->arguments(); - ASSERT_EQ(args->length(), 1); + DCHECK_EQ(args->length(), 1); // Load the argument into x0 and call the stub.
VisitForAccumulatorValue(args->at(0)); @@ -3272,7 +3241,7 @@ void FullCodeGenerator::EmitNumberToString(CallRuntime* expr) { void FullCodeGenerator::EmitStringCharFromCode(CallRuntime* expr) { ZoneList<Expression*>* args = expr->arguments(); - ASSERT(args->length() == 1); + DCHECK(args->length() == 1); VisitForAccumulatorValue(args->at(0)); @@ -3294,7 +3263,7 @@ void FullCodeGenerator::EmitStringCharFromCode(CallRuntime* expr) { void FullCodeGenerator::EmitStringCharCodeAt(CallRuntime* expr) { ZoneList<Expression*>* args = expr->arguments(); - ASSERT(args->length() == 2); + DCHECK(args->length() == 2); VisitForStackValue(args->at(0)); VisitForAccumulatorValue(args->at(1)); @@ -3339,7 +3308,7 @@ void FullCodeGenerator::EmitStringCharCodeAt(CallRuntime* expr) { void FullCodeGenerator::EmitStringCharAt(CallRuntime* expr) { ZoneList<Expression*>* args = expr->arguments(); - ASSERT(args->length() == 2); + DCHECK(args->length() == 2); VisitForStackValue(args->at(0)); VisitForAccumulatorValue(args->at(1)); @@ -3386,7 +3355,7 @@ void FullCodeGenerator::EmitStringCharAt(CallRuntime* expr) { void FullCodeGenerator::EmitStringAdd(CallRuntime* expr) { ASM_LOCATION("FullCodeGenerator::EmitStringAdd"); ZoneList<Expression*>* args = expr->arguments(); - ASSERT_EQ(2, args->length()); + DCHECK_EQ(2, args->length()); VisitForStackValue(args->at(0)); VisitForAccumulatorValue(args->at(1)); @@ -3401,7 +3370,7 @@ void FullCodeGenerator::EmitStringAdd(CallRuntime* expr) { void FullCodeGenerator::EmitStringCompare(CallRuntime* expr) { ZoneList<Expression*>* args = expr->arguments(); - ASSERT_EQ(2, args->length()); + DCHECK_EQ(2, args->length()); VisitForStackValue(args->at(0)); VisitForStackValue(args->at(1)); @@ -3414,7 +3383,7 @@ void FullCodeGenerator::EmitStringCompare(CallRuntime* expr) { void FullCodeGenerator::EmitCallFunction(CallRuntime* expr) { ASM_LOCATION("FullCodeGenerator::EmitCallFunction"); ZoneList<Expression*>* args = expr->arguments(); - ASSERT(args->length() >= 2); + DCHECK(args->length() >= 2); int arg_count = args->length() - 2; // 2 ~ receiver and function. for (int i = 0; i < arg_count + 1; i++) { @@ -3446,7 +3415,7 @@ void FullCodeGenerator::EmitCallFunction(CallRuntime* expr) { void FullCodeGenerator::EmitRegExpConstructResult(CallRuntime* expr) { RegExpConstructResultStub stub(isolate()); ZoneList<Expression*>* args = expr->arguments(); - ASSERT(args->length() == 3); + DCHECK(args->length() == 3); VisitForStackValue(args->at(0)); VisitForStackValue(args->at(1)); VisitForAccumulatorValue(args->at(2)); @@ -3458,8 +3427,8 @@ void FullCodeGenerator::EmitRegExpConstructResult(CallRuntime* expr) { void FullCodeGenerator::EmitGetFromCache(CallRuntime* expr) { ZoneList<Expression*>* args = expr->arguments(); - ASSERT_EQ(2, args->length()); - ASSERT_NE(NULL, args->at(0)->AsLiteral()); + DCHECK_EQ(2, args->length()); + DCHECK_NE(NULL, args->at(0)->AsLiteral()); int cache_id = Smi::cast(*(args->at(0)->AsLiteral()->value()))->value(); Handle<FixedArray> jsfunction_result_caches( @@ -3497,7 +3466,7 @@ void FullCodeGenerator::EmitGetFromCache(CallRuntime* expr) { // Call runtime to perform the lookup. 
__ Push(cache, key); - __ CallRuntime(Runtime::kHiddenGetFromCache, 2); + __ CallRuntime(Runtime::kGetFromCache, 2); __ Bind(&done); context()->Plug(x0); @@ -3526,7 +3495,7 @@ void FullCodeGenerator::EmitHasCachedArrayIndex(CallRuntime* expr) { void FullCodeGenerator::EmitGetCachedArrayIndex(CallRuntime* expr) { ZoneList<Expression*>* args = expr->arguments(); - ASSERT(args->length() == 1); + DCHECK(args->length() == 1); VisitForAccumulatorValue(args->at(0)); __ AssertString(x0); @@ -3542,7 +3511,7 @@ void FullCodeGenerator::EmitFastAsciiArrayJoin(CallRuntime* expr) { ASM_LOCATION("FullCodeGenerator::EmitFastAsciiArrayJoin"); ZoneList<Expression*>* args = expr->arguments(); - ASSERT(args->length() == 2); + DCHECK(args->length() == 2); VisitForStackValue(args->at(1)); VisitForAccumulatorValue(args->at(0)); @@ -3754,6 +3723,17 @@ void FullCodeGenerator::EmitFastAsciiArrayJoin(CallRuntime* expr) { } +void FullCodeGenerator::EmitDebugIsActive(CallRuntime* expr) { + DCHECK(expr->arguments()->length() == 0); + ExternalReference debug_is_active = + ExternalReference::debug_is_active_address(isolate()); + __ Mov(x10, debug_is_active); + __ Ldrb(x0, MemOperand(x10)); + __ SmiTag(x0); + context()->Plug(x0); +} + + void FullCodeGenerator::VisitCallRuntime(CallRuntime* expr) { if (expr->function() != NULL && expr->function()->intrinsic_type == Runtime::INLINE) { @@ -3769,13 +3749,20 @@ void FullCodeGenerator::VisitCallRuntime(CallRuntime* expr) { if (expr->is_jsruntime()) { // Push the builtins object as the receiver. __ Ldr(x10, GlobalObjectMemOperand()); - __ Ldr(x0, FieldMemOperand(x10, GlobalObject::kBuiltinsOffset)); - __ Push(x0); + __ Ldr(LoadIC::ReceiverRegister(), + FieldMemOperand(x10, GlobalObject::kBuiltinsOffset)); + __ Push(LoadIC::ReceiverRegister()); // Load the function from the receiver. Handle<String> name = expr->name(); - __ Mov(x2, Operand(name)); - CallLoadIC(NOT_CONTEXTUAL, expr->CallRuntimeFeedbackId()); + __ Mov(LoadIC::NameRegister(), Operand(name)); + if (FLAG_vector_ics) { + __ Mov(LoadIC::SlotRegister(), + Smi::FromInt(expr->CallRuntimeFeedbackSlot())); + CallLoadIC(NOT_CONTEXTUAL); + } else { + CallLoadIC(NOT_CONTEXTUAL, expr->CallRuntimeFeedbackId()); + } // Push the target function under the receiver. __ Pop(x10); @@ -3827,7 +3814,7 @@ void FullCodeGenerator::VisitUnaryOperation(UnaryOperation* expr) { Variable* var = proxy->var(); // Delete of an unqualified identifier is disallowed in strict mode // but "delete this" is allowed. - ASSERT(strict_mode() == SLOPPY || var->is_this()); + DCHECK(strict_mode() == SLOPPY || var->is_this()); if (var->IsUnallocated()) { __ Ldr(x12, GlobalObjectMemOperand()); __ Mov(x11, Operand(var->name())); @@ -3844,7 +3831,7 @@ void FullCodeGenerator::VisitUnaryOperation(UnaryOperation* expr) { // context where the variable was introduced. __ Mov(x2, Operand(var->name())); __ Push(context_register(), x2); - __ CallRuntime(Runtime::kHiddenDeleteContextSlot, 2); + __ CallRuntime(Runtime::kDeleteLookupSlot, 2); context()->Plug(x0); } } else { @@ -3877,7 +3864,7 @@ void FullCodeGenerator::VisitUnaryOperation(UnaryOperation* expr) { test->fall_through()); context()->Plug(test->true_label(), test->false_label()); } else { - ASSERT(context()->IsAccumulatorValue() || context()->IsStackValue()); + DCHECK(context()->IsAccumulatorValue() || context()->IsStackValue()); // TODO(jbramley): This could be much more efficient using (for // example) the CSEL instruction. 
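// Note: the new EmitDebugIsActive above compiles down to "load one byte from
// a fixed external address, then smi-tag it". A stand-alone sketch of that
// shape (the global flag and the kSmiShift value are stand-ins, not V8 API):

#include <cassert>
#include <cstdint>

static uint8_t debug_is_active = 1;  // stand-in for the isolate's flag byte

uint64_t DebugIsActiveAsSmi() {
  uint8_t raw = debug_is_active;  // __ Ldrb(x0, MemOperand(x10))
  return uint64_t{raw} << 32;     // __ SmiTag(x0), assuming kSmiShift == 32
}

int main() {
  assert(DebugIsActiveAsSmi() == UINT64_C(1) << 32);
  debug_is_active = 0;
  assert(DebugIsActiveAsSmi() == 0);
}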
Label materialize_true, materialize_false, done; @@ -3920,7 +3907,7 @@ void FullCodeGenerator::VisitUnaryOperation(UnaryOperation* expr) { void FullCodeGenerator::VisitCountOperation(CountOperation* expr) { - ASSERT(expr->expression()->IsValidReferenceExpression()); + DCHECK(expr->expression()->IsValidReferenceExpression()); Comment cmnt(masm_, "[ CountOperation"); SetSourcePosition(expr->position()); @@ -3939,7 +3926,7 @@ void FullCodeGenerator::VisitCountOperation(CountOperation* expr) { // Evaluate expression and get value. if (assign_type == VARIABLE) { - ASSERT(expr->expression()->AsVariableProxy()->var() != NULL); + DCHECK(expr->expression()->AsVariableProxy()->var() != NULL); AccumulatorValueContext context(this); EmitVariableLoad(expr->expression()->AsVariableProxy()); } else { @@ -3948,16 +3935,16 @@ void FullCodeGenerator::VisitCountOperation(CountOperation* expr) { __ Push(xzr); } if (assign_type == NAMED_PROPERTY) { - // Put the object both on the stack and in the accumulator. - VisitForAccumulatorValue(prop->obj()); - __ Push(x0); + // Put the object both on the stack and in the register. + VisitForStackValue(prop->obj()); + __ Peek(LoadIC::ReceiverRegister(), 0); EmitNamedPropertyLoad(prop); } else { // KEYED_PROPERTY VisitForStackValue(prop->obj()); - VisitForAccumulatorValue(prop->key()); - __ Peek(x1, 0); - __ Push(x0); + VisitForStackValue(prop->key()); + __ Peek(LoadIC::ReceiverRegister(), 1 * kPointerSize); + __ Peek(LoadIC::NameRegister(), 0); EmitKeyedPropertyLoad(prop); } } @@ -4067,8 +4054,9 @@ void FullCodeGenerator::VisitCountOperation(CountOperation* expr) { } break; case NAMED_PROPERTY: { - __ Mov(x2, Operand(prop->key()->AsLiteral()->value())); - __ Pop(x1); + __ Mov(StoreIC::NameRegister(), + Operand(prop->key()->AsLiteral()->value())); + __ Pop(StoreIC::ReceiverRegister()); CallStoreIC(expr->CountStoreFeedbackId()); PrepareForBailoutForId(expr->AssignmentId(), TOS_REG); if (expr->is_postfix()) { @@ -4081,8 +4069,8 @@ void FullCodeGenerator::VisitCountOperation(CountOperation* expr) { break; } case KEYED_PROPERTY: { - __ Pop(x1); // Key. - __ Pop(x2); // Receiver. + __ Pop(KeyedStoreIC::NameRegister()); + __ Pop(KeyedStoreIC::ReceiverRegister()); Handle<Code> ic = strict_mode() == SLOPPY ? isolate()->builtins()->KeyedStoreIC_Initialize() : isolate()->builtins()->KeyedStoreIC_Initialize_Strict(); @@ -4102,13 +4090,17 @@ void FullCodeGenerator::VisitCountOperation(CountOperation* expr) { void FullCodeGenerator::VisitForTypeofValue(Expression* expr) { - ASSERT(!context()->IsEffect()); - ASSERT(!context()->IsTest()); + DCHECK(!context()->IsEffect()); + DCHECK(!context()->IsTest()); VariableProxy* proxy = expr->AsVariableProxy(); if (proxy != NULL && proxy->var()->IsUnallocated()) { Comment cmnt(masm_, "Global variable"); - __ Ldr(x0, GlobalObjectMemOperand()); - __ Mov(x2, Operand(proxy->name())); + __ Ldr(LoadIC::ReceiverRegister(), GlobalObjectMemOperand()); + __ Mov(LoadIC::NameRegister(), Operand(proxy->name())); + if (FLAG_vector_ics) { + __ Mov(LoadIC::SlotRegister(), + Smi::FromInt(proxy->VariableFeedbackSlot())); + } // Use a regular load, not a contextual load, to avoid a reference // error. CallLoadIC(NOT_CONTEXTUAL); @@ -4119,12 +4111,12 @@ void FullCodeGenerator::VisitForTypeofValue(Expression* expr) { // Generate code for loading from variables potentially shadowed // by eval-introduced variables. 
- EmitDynamicLookupFastCase(proxy->var(), INSIDE_TYPEOF, &slow, &done); + EmitDynamicLookupFastCase(proxy, INSIDE_TYPEOF, &slow, &done); __ Bind(&slow); __ Mov(x0, Operand(proxy->name())); __ Push(cp, x0); - __ CallRuntime(Runtime::kHiddenLoadContextSlotNoReferenceError, 2); + __ CallRuntime(Runtime::kLoadLookupSlotNoReferenceError, 2); PrepareForBailout(expr, TOS_REG); __ Bind(&done); @@ -4178,11 +4170,6 @@ void FullCodeGenerator::EmitLiteralCompareTypeof(Expression* expr, __ JumpIfRoot(x0, Heap::kTrueValueRootIndex, if_true); __ CompareRoot(x0, Heap::kFalseValueRootIndex); Split(eq, if_true, if_false, fall_through); - } else if (FLAG_harmony_typeof && - String::Equals(check, factory->null_string())) { - ASM_LOCATION("FullCodeGenerator::EmitLiteralCompareTypeof null_string"); - __ CompareRoot(x0, Heap::kNullValueRootIndex); - Split(eq, if_true, if_false, fall_through); } else if (String::Equals(check, factory->undefined_string())) { ASM_LOCATION( "FullCodeGenerator::EmitLiteralCompareTypeof undefined_string"); @@ -4204,9 +4191,7 @@ void FullCodeGenerator::EmitLiteralCompareTypeof(Expression* expr, } else if (String::Equals(check, factory->object_string())) { ASM_LOCATION("FullCodeGenerator::EmitLiteralCompareTypeof object_string"); __ JumpIfSmi(x0, if_false); - if (!FLAG_harmony_typeof) { - __ JumpIfRoot(x0, Heap::kNullValueRootIndex, if_true); - } + __ JumpIfRoot(x0, Heap::kNullValueRootIndex, if_true); // Check for JS objects => true. Register map = x10; __ JumpIfObjectType(x0, map, x11, FIRST_NONCALLABLE_SPEC_OBJECT_TYPE, @@ -4365,7 +4350,7 @@ void FullCodeGenerator::VisitYield(Yield* expr) { __ Bind(&suspend); VisitForAccumulatorValue(expr->generator_object()); - ASSERT((continuation.pos() > 0) && Smi::IsValid(continuation.pos())); + DCHECK((continuation.pos() > 0) && Smi::IsValid(continuation.pos())); __ Mov(x1, Smi::FromInt(continuation.pos())); __ Str(x1, FieldMemOperand(x0, JSGeneratorObject::kContinuationOffset)); __ Str(cp, FieldMemOperand(x0, JSGeneratorObject::kContextOffset)); @@ -4376,7 +4361,7 @@ void FullCodeGenerator::VisitYield(Yield* expr) { __ Cmp(__ StackPointer(), x1); __ B(eq, &post_runtime); __ Push(x0); // generator object - __ CallRuntime(Runtime::kHiddenSuspendJSGeneratorObject, 1); + __ CallRuntime(Runtime::kSuspendJSGeneratorObject, 1); __ Ldr(cp, MemOperand(fp, StandardFrameConstants::kContextOffset)); __ Bind(&post_runtime); __ Pop(result_register()); @@ -4408,6 +4393,9 @@ void FullCodeGenerator::VisitYield(Yield* expr) { Label l_catch, l_try, l_suspend, l_continuation, l_resume; Label l_next, l_call, l_loop; + Register load_receiver = LoadIC::ReceiverRegister(); + Register load_name = LoadIC::NameRegister(); + // Initial send value is undefined. 
__ LoadRoot(x0, Heap::kUndefinedValueRootIndex); __ B(&l_next); @@ -4415,9 +4403,9 @@ void FullCodeGenerator::VisitYield(Yield* expr) { // catch (e) { receiver = iter; f = 'throw'; arg = e; goto l_call; } __ Bind(&l_catch); handler_table()->set(expr->index(), Smi::FromInt(l_catch.pos())); - __ LoadRoot(x2, Heap::kthrow_stringRootIndex); // "throw" - __ Peek(x3, 1 * kPointerSize); // iter - __ Push(x2, x3, x0); // "throw", iter, except + __ LoadRoot(load_name, Heap::kthrow_stringRootIndex); // "throw" + __ Peek(x3, 1 * kPointerSize); // iter + __ Push(load_name, x3, x0); // "throw", iter, except __ B(&l_call); // try { received = %yield result } @@ -4440,14 +4428,14 @@ void FullCodeGenerator::VisitYield(Yield* expr) { const int generator_object_depth = kPointerSize + handler_size; __ Peek(x0, generator_object_depth); __ Push(x0); // g - ASSERT((l_continuation.pos() > 0) && Smi::IsValid(l_continuation.pos())); + DCHECK((l_continuation.pos() > 0) && Smi::IsValid(l_continuation.pos())); __ Mov(x1, Smi::FromInt(l_continuation.pos())); __ Str(x1, FieldMemOperand(x0, JSGeneratorObject::kContinuationOffset)); __ Str(cp, FieldMemOperand(x0, JSGeneratorObject::kContextOffset)); __ Mov(x1, cp); __ RecordWriteField(x0, JSGeneratorObject::kContextOffset, x1, x2, kLRHasBeenSaved, kDontSaveFPRegs); - __ CallRuntime(Runtime::kHiddenSuspendJSGeneratorObject, 1); + __ CallRuntime(Runtime::kSuspendJSGeneratorObject, 1); __ Ldr(cp, MemOperand(fp, StandardFrameConstants::kContextOffset)); __ Pop(x0); // result EmitReturnSequence(); @@ -4456,14 +4444,19 @@ void FullCodeGenerator::VisitYield(Yield* expr) { // receiver = iter; f = 'next'; arg = received; __ Bind(&l_next); - __ LoadRoot(x2, Heap::knext_stringRootIndex); // "next" - __ Peek(x3, 1 * kPointerSize); // iter - __ Push(x2, x3, x0); // "next", iter, received + + __ LoadRoot(load_name, Heap::knext_stringRootIndex); // "next" + __ Peek(x3, 1 * kPointerSize); // iter + __ Push(load_name, x3, x0); // "next", iter, received // result = receiver[f](arg); __ Bind(&l_call); - __ Peek(x1, 1 * kPointerSize); - __ Peek(x0, 2 * kPointerSize); + __ Peek(load_receiver, 1 * kPointerSize); + __ Peek(load_name, 2 * kPointerSize); + if (FLAG_vector_ics) { + __ Mov(LoadIC::SlotRegister(), + Smi::FromInt(expr->KeyedLoadFeedbackSlot())); + } Handle<Code> ic = isolate()->builtins()->KeyedLoadIC_Initialize(); CallIC(ic, TypeFeedbackId::None()); __ Mov(x1, x0); @@ -4476,19 +4469,29 @@ void FullCodeGenerator::VisitYield(Yield* expr) { // if (!result.done) goto l_try; __ Bind(&l_loop); - __ Push(x0); // save result - __ LoadRoot(x2, Heap::kdone_stringRootIndex); // "done" - CallLoadIC(NOT_CONTEXTUAL); // result.done in x0 + __ Move(load_receiver, x0); + + __ Push(load_receiver); // save result + __ LoadRoot(load_name, Heap::kdone_stringRootIndex); // "done" + if (FLAG_vector_ics) { + __ Mov(LoadIC::SlotRegister(), + Smi::FromInt(expr->DoneFeedbackSlot())); + } + CallLoadIC(NOT_CONTEXTUAL); // x0=result.done // The ToBooleanStub argument (result.done) is in x0. 
Handle<Code> bool_ic = ToBooleanStub::GetUninitialized(isolate()); CallIC(bool_ic); __ Cbz(x0, &l_try); // result.value - __ Pop(x0); // result - __ LoadRoot(x2, Heap::kvalue_stringRootIndex); // "value" - CallLoadIC(NOT_CONTEXTUAL); // result.value in x0 - context()->DropAndPlug(2, x0); // drop iter and g + __ Pop(load_receiver); // result + __ LoadRoot(load_name, Heap::kvalue_stringRootIndex); // "value" + if (FLAG_vector_ics) { + __ Mov(LoadIC::SlotRegister(), + Smi::FromInt(expr->ValueFeedbackSlot())); + } + CallLoadIC(NOT_CONTEXTUAL); // x0=result.value + context()->DropAndPlug(2, x0); // drop iter and g break; } } @@ -4506,7 +4509,7 @@ void FullCodeGenerator::EmitGeneratorResume(Expression *generator, Register function = x4; // The value stays in x0, and is ultimately read by the resumed generator, as - // if CallRuntime(Runtime::kHiddenSuspendJSGeneratorObject) returned it. Or it + // if CallRuntime(Runtime::kSuspendJSGeneratorObject) returned it. Or it // is read to throw the value when the resumed generator is already closed. r1 // will hold the generator object until the activation has been resumed. VisitForStackValue(generator); @@ -4588,7 +4591,7 @@ void FullCodeGenerator::EmitGeneratorResume(Expression *generator, __ Mov(x10, Smi::FromInt(resume_mode)); __ Push(generator_object, result_register(), x10); - __ CallRuntime(Runtime::kHiddenResumeJSGeneratorObject, 3); + __ CallRuntime(Runtime::kResumeJSGeneratorObject, 3); // Not reached: the runtime call returns elsewhere. __ Unreachable(); @@ -4603,14 +4606,14 @@ void FullCodeGenerator::EmitGeneratorResume(Expression *generator, } else { // Throw the provided value. __ Push(value_reg); - __ CallRuntime(Runtime::kHiddenThrow, 1); + __ CallRuntime(Runtime::kThrow, 1); } __ B(&done); // Throw error if we attempt to operate on a running generator. __ Bind(&wrong_state); __ Push(generator_object); - __ CallRuntime(Runtime::kHiddenThrowGeneratorStateError, 1); + __ CallRuntime(Runtime::kThrowGeneratorStateError, 1); __ Bind(&done); context()->Plug(result_register()); @@ -4631,7 +4634,7 @@ void FullCodeGenerator::EmitCreateIteratorResult(bool done) { __ Bind(&gc_required); __ Push(Smi::FromInt(map->instance_size())); - __ CallRuntime(Runtime::kHiddenAllocateInNewSpace, 1); + __ CallRuntime(Runtime::kAllocateInNewSpace, 1); __ Ldr(context_register(), MemOperand(fp, StandardFrameConstants::kContextOffset)); @@ -4645,7 +4648,7 @@ void FullCodeGenerator::EmitCreateIteratorResult(bool done) { __ Pop(result_value); __ Mov(boolean_done, Operand(isolate()->factory()->ToBoolean(done))); __ Mov(empty_fixed_array, Operand(isolate()->factory()->empty_fixed_array())); - ASSERT_EQ(map->instance_size(), 5 * kPointerSize); + DCHECK_EQ(map->instance_size(), 5 * kPointerSize); STATIC_ASSERT(JSObject::kPropertiesOffset + kPointerSize == JSObject::kElementsOffset); STATIC_ASSERT(JSGeneratorObject::kResultValuePropertyOffset + kPointerSize == @@ -4685,7 +4688,7 @@ Register FullCodeGenerator::context_register() { void FullCodeGenerator::StoreToFrameField(int frame_offset, Register value) { - ASSERT(POINTER_SIZE_ALIGN(frame_offset) == frame_offset); + DCHECK(POINTER_SIZE_ALIGN(frame_offset) == frame_offset); __ Str(value, MemOperand(fp, frame_offset)); } @@ -4703,7 +4706,7 @@ void FullCodeGenerator::PushFunctionArgumentForContextAllocation() { // as their closure, not the anonymous closure containing the global // code. Pass a smi sentinel and let the runtime look up the empty // function. 
- ASSERT(kSmiTag == 0); + DCHECK(kSmiTag == 0); __ Push(xzr); } else if (declaration_scope->is_eval_scope()) { // Contexts created by a call to eval have the same closure as the @@ -4712,7 +4715,7 @@ void FullCodeGenerator::PushFunctionArgumentForContextAllocation() { __ Ldr(x10, ContextMemOperand(cp, Context::CLOSURE_INDEX)); __ Push(x10); } else { - ASSERT(declaration_scope->is_function_scope()); + DCHECK(declaration_scope->is_function_scope()); __ Ldr(x10, MemOperand(fp, JavaScriptFrameConstants::kFunctionOffset)); __ Push(x10); } @@ -4721,7 +4724,7 @@ void FullCodeGenerator::PushFunctionArgumentForContextAllocation() { void FullCodeGenerator::EnterFinallyBlock() { ASM_LOCATION("FullCodeGenerator::EnterFinallyBlock"); - ASSERT(!result_register().is(x10)); + DCHECK(!result_register().is(x10)); // Preserve the result register while executing finally block. // Also cook the return address in lr to the stack (smi encoded Code* delta). __ Sub(x10, lr, Operand(masm_->CodeObject())); @@ -4753,7 +4756,7 @@ void FullCodeGenerator::EnterFinallyBlock() { void FullCodeGenerator::ExitFinallyBlock() { ASM_LOCATION("FullCodeGenerator::ExitFinallyBlock"); - ASSERT(!result_register().is(x10)); + DCHECK(!result_register().is(x10)); // Restore pending message from stack. __ Pop(x10, x11, x12); @@ -4795,7 +4798,7 @@ void BackEdgeTable::PatchAt(Code* unoptimized_code, Address branch_address = pc - 3 * kInstructionSize; PatchingAssembler patcher(branch_address, 1); - ASSERT(Instruction::Cast(branch_address) + DCHECK(Instruction::Cast(branch_address) ->IsNop(Assembler::INTERRUPT_CODE_NOP) || (Instruction::Cast(branch_address)->IsCondBranchImm() && Instruction::Cast(branch_address)->ImmPCOffset() == @@ -4826,7 +4829,7 @@ void BackEdgeTable::PatchAt(Code* unoptimized_code, Instruction* load = Instruction::Cast(pc)->preceding(2); Address interrupt_address_pointer = reinterpret_cast<Address>(load) + load->ImmPCOffset(); - ASSERT((Memory::uint64_at(interrupt_address_pointer) == + DCHECK((Memory::uint64_at(interrupt_address_pointer) == reinterpret_cast<uint64_t>(unoptimized_code->GetIsolate() ->builtins() ->OnStackReplacement() diff --git a/deps/v8/src/arm64/ic-arm64.cc b/deps/v8/src/arm64/ic-arm64.cc index c09b847ba5d..e08fcfd884d 100644 --- a/deps/v8/src/arm64/ic-arm64.cc +++ b/deps/v8/src/arm64/ic-arm64.cc @@ -2,17 +2,17 @@ // Use of this source code is governed by a BSD-style license that can be // found in the LICENSE file. -#include "v8.h" +#include "src/v8.h" #if V8_TARGET_ARCH_ARM64 -#include "arm64/assembler-arm64.h" -#include "code-stubs.h" -#include "codegen.h" -#include "disasm.h" -#include "ic-inl.h" -#include "runtime.h" -#include "stub-cache.h" +#include "src/arm64/assembler-arm64.h" +#include "src/code-stubs.h" +#include "src/codegen.h" +#include "src/disasm.h" +#include "src/ic-inl.h" +#include "src/runtime.h" +#include "src/stub-cache.h" namespace v8 { namespace internal { @@ -34,51 +34,6 @@ static void GenerateGlobalInstanceTypeCheck(MacroAssembler* masm, } -// Generated code falls through if the receiver is a regular non-global -// JS object with slow properties and no interceptors. -// -// "receiver" holds the receiver on entry and is unchanged. -// "elements" holds the property dictionary on fall through. -static void GenerateNameDictionaryReceiverCheck(MacroAssembler* masm, - Register receiver, - Register elements, - Register scratch0, - Register scratch1, - Label* miss) { - ASSERT(!AreAliased(receiver, elements, scratch0, scratch1)); - - // Check that the receiver isn't a smi. 
- __ JumpIfSmi(receiver, miss); - - // Check that the receiver is a valid JS object. - // Let t be the object instance type, we want: - // FIRST_SPEC_OBJECT_TYPE <= t <= LAST_SPEC_OBJECT_TYPE. - // Since LAST_SPEC_OBJECT_TYPE is the last possible instance type we only - // check the lower bound. - STATIC_ASSERT(LAST_TYPE == LAST_SPEC_OBJECT_TYPE); - - __ JumpIfObjectType(receiver, scratch0, scratch1, FIRST_SPEC_OBJECT_TYPE, - miss, lt); - - // scratch0 now contains the map of the receiver and scratch1 the object type. - Register map = scratch0; - Register type = scratch1; - - // Check if the receiver is a global JS object. - GenerateGlobalInstanceTypeCheck(masm, type, miss); - - // Check that the object does not require access checks. - __ Ldrb(scratch1, FieldMemOperand(map, Map::kBitFieldOffset)); - __ Tbnz(scratch1, Map::kIsAccessCheckNeeded, miss); - __ Tbnz(scratch1, Map::kHasNamedInterceptor, miss); - - // Check that the properties dictionary is valid. - __ Ldr(elements, FieldMemOperand(receiver, JSObject::kPropertiesOffset)); - __ Ldr(scratch1, FieldMemOperand(elements, HeapObject::kMapOffset)); - __ JumpIfNotRoot(scratch1, Heap::kHashTableMapRootIndex, miss); -} - - // Helper function used from LoadIC GenerateNormal. // // elements: Property dictionary. It is not clobbered if a jump to the miss @@ -97,8 +52,8 @@ static void GenerateDictionaryLoad(MacroAssembler* masm, Register result, Register scratch1, Register scratch2) { - ASSERT(!AreAliased(elements, name, scratch1, scratch2)); - ASSERT(!AreAliased(result, scratch1, scratch2)); + DCHECK(!AreAliased(elements, name, scratch1, scratch2)); + DCHECK(!AreAliased(result, scratch1, scratch2)); Label done; @@ -144,7 +99,7 @@ static void GenerateDictionaryStore(MacroAssembler* masm, Register value, Register scratch1, Register scratch2) { - ASSERT(!AreAliased(elements, name, value, scratch1, scratch2)); + DCHECK(!AreAliased(elements, name, value, scratch1, scratch2)); Label done; @@ -192,7 +147,7 @@ static void GenerateKeyedLoadReceiverCheck(MacroAssembler* masm, Register scratch, int interceptor_bit, Label* slow) { - ASSERT(!AreAliased(map_scratch, scratch)); + DCHECK(!AreAliased(map_scratch, scratch)); // Check that the object isn't a smi. __ JumpIfSmi(receiver, slow); @@ -241,7 +196,7 @@ static void GenerateFastArrayLoad(MacroAssembler* masm, Register result, Label* not_fast_array, Label* slow) { - ASSERT(!AreAliased(receiver, key, elements, elements_map, scratch2)); + DCHECK(!AreAliased(receiver, key, elements, elements_map, scratch2)); // Check for fast array. __ Ldr(elements, FieldMemOperand(receiver, JSObject::kElementsOffset)); @@ -290,7 +245,7 @@ static void GenerateKeyNameCheck(MacroAssembler* masm, Register hash_scratch, Label* index_string, Label* not_unique) { - ASSERT(!AreAliased(key, map_scratch, hash_scratch)); + DCHECK(!AreAliased(key, map_scratch, hash_scratch)); // Is the key a name? 
Label unique; @@ -329,7 +284,7 @@ static MemOperand GenerateMappedArgumentsLookup(MacroAssembler* masm, Register scratch2, Label* unmapped_case, Label* slow_case) { - ASSERT(!AreAliased(object, key, map, scratch1, scratch2)); + DCHECK(!AreAliased(object, key, map, scratch1, scratch2)); Heap* heap = masm->isolate()->heap(); @@ -384,7 +339,7 @@ static MemOperand GenerateUnmappedArgumentsLookup(MacroAssembler* masm, Register parameter_map, Register scratch, Label* slow_case) { - ASSERT(!AreAliased(key, parameter_map, scratch)); + DCHECK(!AreAliased(key, parameter_map, scratch)); // Element is in arguments backing store, which is referenced by the // second element of the parameter_map. @@ -407,16 +362,17 @@ static MemOperand GenerateUnmappedArgumentsLookup(MacroAssembler* masm, void LoadIC::GenerateMegamorphic(MacroAssembler* masm) { - // ----------- S t a t e ------------- - // -- x2 : name - // -- lr : return address - // -- x0 : receiver - // ----------------------------------- + // The return address is in lr. + Register receiver = ReceiverRegister(); + Register name = NameRegister(); + DCHECK(receiver.is(x1)); + DCHECK(name.is(x2)); // Probe the stub cache. - Code::Flags flags = Code::ComputeHandlerFlags(Code::LOAD_IC); + Code::Flags flags = Code::RemoveTypeAndHolderFromFlags( + Code::ComputeHandlerFlags(Code::LOAD_IC)); masm->isolate()->stub_cache()->GenerateProbe( - masm, flags, x0, x2, x3, x4, x5, x6); + masm, flags, receiver, name, x3, x4, x5, x6); // Cache miss: Jump to runtime. GenerateMiss(masm); @@ -424,38 +380,31 @@ void LoadIC::GenerateMegamorphic(MacroAssembler* masm) { void LoadIC::GenerateNormal(MacroAssembler* masm) { - // ----------- S t a t e ------------- - // -- x2 : name - // -- lr : return address - // -- x0 : receiver - // ----------------------------------- - Label miss; - - GenerateNameDictionaryReceiverCheck(masm, x0, x1, x3, x4, &miss); + Register dictionary = x0; + DCHECK(!dictionary.is(ReceiverRegister())); + DCHECK(!dictionary.is(NameRegister())); + Label slow; - // x1 now holds the property dictionary. - GenerateDictionaryLoad(masm, &miss, x1, x2, x0, x3, x4); + __ Ldr(dictionary, + FieldMemOperand(ReceiverRegister(), JSObject::kPropertiesOffset)); + GenerateDictionaryLoad(masm, &slow, dictionary, NameRegister(), x0, x3, x4); __ Ret(); - // Cache miss: Jump to runtime. - __ Bind(&miss); - GenerateMiss(masm); + // Dictionary load failed, go slow (but don't miss). + __ Bind(&slow); + GenerateRuntimeGetProperty(masm); } void LoadIC::GenerateMiss(MacroAssembler* masm) { - // ----------- S t a t e ------------- - // -- x2 : name - // -- lr : return address - // -- x0 : receiver - // ----------------------------------- + // The return address is in lr. Isolate* isolate = masm->isolate(); ASM_LOCATION("LoadIC::GenerateMiss"); __ IncrementCounter(isolate->counters()->load_miss(), 1, x3, x4); // Perform tail call to the entry. - __ Push(x0, x2); + __ Push(ReceiverRegister(), NameRegister()); ExternalReference ref = ExternalReference(IC_Utility(kLoadIC_Miss), isolate); __ TailCallExternalReference(ref, 2, 1); @@ -463,29 +412,23 @@ void LoadIC::GenerateMiss(MacroAssembler* masm) { void LoadIC::GenerateRuntimeGetProperty(MacroAssembler* masm) { - // ---------- S t a t e -------------- - // -- x2 : name - // -- lr : return address - // -- x0 : receiver - // ----------------------------------- - - __ Push(x0, x2); + // The return address is in lr. 
+ __ Push(ReceiverRegister(), NameRegister()); __ TailCallRuntime(Runtime::kGetProperty, 2, 1); } void KeyedLoadIC::GenerateSloppyArguments(MacroAssembler* masm) { - // ---------- S t a t e -------------- - // -- lr : return address - // -- x0 : key - // -- x1 : receiver - // ----------------------------------- + // The return address is in lr. Register result = x0; - Register key = x0; - Register receiver = x1; + Register receiver = ReceiverRegister(); + Register key = NameRegister(); + DCHECK(receiver.is(x1)); + DCHECK(key.is(x2)); + Label miss, unmapped; - Register map_scratch = x2; + Register map_scratch = x0; MemOperand mapped_location = GenerateMappedArgumentsLookup( masm, receiver, key, map_scratch, x3, x4, &unmapped, &miss); __ Ldr(result, mapped_location); @@ -495,10 +438,8 @@ void KeyedLoadIC::GenerateSloppyArguments(MacroAssembler* masm) { // Parameter map is left in map_scratch when a jump on unmapped is done. MemOperand unmapped_location = GenerateUnmappedArgumentsLookup(masm, key, map_scratch, x3, &miss); - __ Ldr(x2, unmapped_location); - __ JumpIfRoot(x2, Heap::kTheHoleValueRootIndex, &miss); - // Move the result in x0. x0 must be preserved on miss. - __ Mov(result, x2); + __ Ldr(result, unmapped_location); + __ JumpIfRoot(result, Heap::kTheHoleValueRootIndex, &miss); __ Ret(); __ Bind(&miss); @@ -508,18 +449,14 @@ void KeyedLoadIC::GenerateSloppyArguments(MacroAssembler* masm) { void KeyedStoreIC::GenerateSloppyArguments(MacroAssembler* masm) { ASM_LOCATION("KeyedStoreIC::GenerateSloppyArguments"); - // ---------- S t a t e -------------- - // -- lr : return address - // -- x0 : value - // -- x1 : key - // -- x2 : receiver - // ----------------------------------- - Label slow, notin; + Register value = ValueRegister(); + Register key = NameRegister(); + Register receiver = ReceiverRegister(); + DCHECK(receiver.is(x1)); + DCHECK(key.is(x2)); + DCHECK(value.is(x0)); - Register value = x0; - Register key = x1; - Register receiver = x2; Register map = x3; // These registers are used by GenerateMappedArgumentsLookup to build a @@ -559,16 +496,12 @@ void KeyedStoreIC::GenerateSloppyArguments(MacroAssembler* masm) { void KeyedLoadIC::GenerateMiss(MacroAssembler* masm) { - // ---------- S t a t e -------------- - // -- lr : return address - // -- x0 : key - // -- x1 : receiver - // ----------------------------------- + // The return address is in lr. Isolate* isolate = masm->isolate(); __ IncrementCounter(isolate->counters()->keyed_load_miss(), 1, x10, x11); - __ Push(x1, x0); + __ Push(ReceiverRegister(), NameRegister()); // Perform tail call to the entry. 
ExternalReference ref = @@ -578,16 +511,35 @@ void KeyedLoadIC::GenerateMiss(MacroAssembler* masm) { } -void KeyedLoadIC::GenerateRuntimeGetProperty(MacroAssembler* masm) { - // ---------- S t a t e -------------- - // -- lr : return address - // -- x0 : key - // -- x1 : receiver - // ----------------------------------- - Register key = x0; - Register receiver = x1; +// IC register specifications +const Register LoadIC::ReceiverRegister() { return x1; } +const Register LoadIC::NameRegister() { return x2; } - __ Push(receiver, key); +const Register LoadIC::SlotRegister() { + DCHECK(FLAG_vector_ics); + return x0; +} + + +const Register LoadIC::VectorRegister() { + DCHECK(FLAG_vector_ics); + return x3; +} + + +const Register StoreIC::ReceiverRegister() { return x1; } +const Register StoreIC::NameRegister() { return x2; } +const Register StoreIC::ValueRegister() { return x0; } + + +const Register KeyedStoreIC::MapRegister() { + return x3; +} + + +void KeyedLoadIC::GenerateRuntimeGetProperty(MacroAssembler* masm) { + // The return address is in lr. + __ Push(ReceiverRegister(), NameRegister()); __ TailCallRuntime(Runtime::kKeyedGetProperty, 2, 1); } @@ -601,7 +553,7 @@ static void GenerateKeyedLoadWithSmiKey(MacroAssembler* masm, Register scratch4, Register scratch5, Label *slow) { - ASSERT(!AreAliased( + DCHECK(!AreAliased( key, receiver, scratch1, scratch2, scratch3, scratch4, scratch5)); Isolate* isolate = masm->isolate(); @@ -642,7 +594,7 @@ static void GenerateKeyedLoadWithNameKey(MacroAssembler* masm, Register scratch4, Register scratch5, Label *slow) { - ASSERT(!AreAliased( + DCHECK(!AreAliased( key, receiver, scratch1, scratch2, scratch3, scratch4, scratch5)); Isolate* isolate = masm->isolate(); @@ -756,32 +708,30 @@ static void GenerateKeyedLoadWithNameKey(MacroAssembler* masm, void KeyedLoadIC::GenerateGeneric(MacroAssembler* masm) { - // ---------- S t a t e -------------- - // -- lr : return address - // -- x0 : key - // -- x1 : receiver - // ----------------------------------- + // The return address is in lr. Label slow, check_name, index_smi, index_name; - Register key = x0; - Register receiver = x1; + Register key = NameRegister(); + Register receiver = ReceiverRegister(); + DCHECK(key.is(x2)); + DCHECK(receiver.is(x1)); __ JumpIfNotSmi(key, &check_name); __ Bind(&index_smi); // Now the key is known to be a smi. This place is also jumped to from below // where a numeric string is converted to a smi. - GenerateKeyedLoadWithSmiKey(masm, key, receiver, x2, x3, x4, x5, x6, &slow); + GenerateKeyedLoadWithSmiKey(masm, key, receiver, x7, x3, x4, x5, x6, &slow); - // Slow case, key and receiver still in x0 and x1. + // Slow case. 
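// Note: the accessors introduced above pin down a single calling convention
// for the ARM64 ICs, which is what allows the old "----- S t a t e -----"
// register comments to be deleted throughout this file. A small runnable
// summary of the convention, collected from the definitions in this hunk
// (reference only; the real accessors return Register objects):

#include <cstdio>

int main() {
  struct Row { const char* ic; const char* receiver; const char* name; const char* value; };
  const Row table[] = {
      {"LoadIC / KeyedLoadIC",   "x1", "x2", "-"},
      {"StoreIC / KeyedStoreIC", "x1", "x2", "x0"},
  };
  for (const Row& r : table) {
    std::printf("%-22s receiver=%s  name=%s  value=%s\n",
                r.ic, r.receiver, r.name, r.value);
  }
  // With FLAG_vector_ics, LoadIC additionally takes slot = x0 and
  // vector = x3; KeyedStoreIC's element-transition paths use map = x3.
}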
__ Bind(&slow); __ IncrementCounter( - masm->isolate()->counters()->keyed_load_generic_slow(), 1, x2, x3); + masm->isolate()->counters()->keyed_load_generic_slow(), 1, x4, x3); GenerateRuntimeGetProperty(masm); __ Bind(&check_name); - GenerateKeyNameCheck(masm, key, x2, x3, &index_name, &slow); + GenerateKeyNameCheck(masm, key, x0, x3, &index_name, &slow); - GenerateKeyedLoadWithNameKey(masm, key, receiver, x2, x3, x4, x5, x6, &slow); + GenerateKeyedLoadWithNameKey(masm, key, receiver, x7, x3, x4, x5, x6, &slow); __ Bind(&index_name); __ IndexFromHash(x3, key); @@ -791,17 +741,14 @@ void KeyedLoadIC::GenerateGeneric(MacroAssembler* masm) { void KeyedLoadIC::GenerateString(MacroAssembler* masm) { - // ---------- S t a t e -------------- - // -- lr : return address - // -- x0 : key (index) - // -- x1 : receiver - // ----------------------------------- + // Return address is in lr. Label miss; - Register index = x0; - Register receiver = x1; + Register receiver = ReceiverRegister(); + Register index = NameRegister(); Register result = x0; Register scratch = x3; + DCHECK(!scratch.is(receiver) && !scratch.is(index)); StringCharAtGenerator char_at_generator(receiver, index, @@ -823,14 +770,14 @@ void KeyedLoadIC::GenerateString(MacroAssembler* masm) { void KeyedLoadIC::GenerateIndexedInterceptor(MacroAssembler* masm) { - // ---------- S t a t e -------------- - // -- lr : return address - // -- x0 : key - // -- x1 : receiver - // ----------------------------------- + // Return address is in lr. Label slow; - Register key = x0; - Register receiver = x1; + + Register receiver = ReceiverRegister(); + Register key = NameRegister(); + Register scratch1 = x3; + Register scratch2 = x4; + DCHECK(!AreAliased(scratch1, scratch2, receiver, key)); // Check that the receiver isn't a smi. __ JumpIfSmi(receiver, &slow); @@ -839,24 +786,23 @@ void KeyedLoadIC::GenerateIndexedInterceptor(MacroAssembler* masm) { __ TestAndBranchIfAnySet(key, kSmiTagMask | kSmiSignMask, &slow); // Get the map of the receiver. - Register map = x2; + Register map = scratch1; __ Ldr(map, FieldMemOperand(receiver, HeapObject::kMapOffset)); // Check that it has indexed interceptor and access checks // are not enabled for this object. - __ Ldrb(x3, FieldMemOperand(map, Map::kBitFieldOffset)); - ASSERT(kSlowCaseBitFieldMask == + __ Ldrb(scratch2, FieldMemOperand(map, Map::kBitFieldOffset)); + DCHECK(kSlowCaseBitFieldMask == ((1 << Map::kIsAccessCheckNeeded) | (1 << Map::kHasIndexedInterceptor))); - __ Tbnz(x3, Map::kIsAccessCheckNeeded, &slow); - __ Tbz(x3, Map::kHasIndexedInterceptor, &slow); + __ Tbnz(scratch2, Map::kIsAccessCheckNeeded, &slow); + __ Tbz(scratch2, Map::kHasIndexedInterceptor, &slow); // Everything is fine, call runtime. __ Push(receiver, key); __ TailCallExternalReference( - ExternalReference(IC_Utility(kKeyedLoadPropertyWithInterceptor), + ExternalReference(IC_Utility(kLoadElementWithInterceptor), masm->isolate()), - 2, - 1); + 2, 1); __ Bind(&slow); GenerateMiss(masm); @@ -865,15 +811,9 @@ void KeyedLoadIC::GenerateIndexedInterceptor(MacroAssembler* masm) { void KeyedStoreIC::GenerateMiss(MacroAssembler* masm) { ASM_LOCATION("KeyedStoreIC::GenerateMiss"); - // ---------- S t a t e -------------- - // -- x0 : value - // -- x1 : key - // -- x2 : receiver - // -- lr : return address - // ----------------------------------- // Push receiver, key and value for runtime call. 
- __ Push(x2, x1, x0); + __ Push(ReceiverRegister(), NameRegister(), ValueRegister()); ExternalReference ref = ExternalReference(IC_Utility(kKeyedStoreIC_Miss), masm->isolate()); @@ -883,15 +823,9 @@ void KeyedStoreIC::GenerateMiss(MacroAssembler* masm) { void KeyedStoreIC::GenerateSlow(MacroAssembler* masm) { ASM_LOCATION("KeyedStoreIC::GenerateSlow"); - // ---------- S t a t e -------------- - // -- lr : return address - // -- x0 : value - // -- x1 : key - // -- x2 : receiver - // ----------------------------------- // Push receiver, key and value for runtime call. - __ Push(x2, x1, x0); + __ Push(ReceiverRegister(), NameRegister(), ValueRegister()); // The slow case calls into the runtime to complete the store without causing // an IC miss that would otherwise cause a transition to the generic stub. @@ -904,22 +838,15 @@ void KeyedStoreIC::GenerateSlow(MacroAssembler* masm) { void KeyedStoreIC::GenerateRuntimeSetProperty(MacroAssembler* masm, StrictMode strict_mode) { ASM_LOCATION("KeyedStoreIC::GenerateRuntimeSetProperty"); - // ---------- S t a t e -------------- - // -- x0 : value - // -- x1 : key - // -- x2 : receiver - // -- lr : return address - // ----------------------------------- // Push receiver, key and value for runtime call. - __ Push(x2, x1, x0); + __ Push(ReceiverRegister(), NameRegister(), ValueRegister()); - // Push PropertyAttributes(NONE) and strict_mode for runtime call. - STATIC_ASSERT(NONE == 0); + // Push strict_mode for runtime call. __ Mov(x10, Smi::FromInt(strict_mode)); - __ Push(xzr, x10); + __ Push(x10); - __ TailCallRuntime(Runtime::kSetProperty, 5, 1); + __ TailCallRuntime(Runtime::kSetProperty, 4, 1); } @@ -936,7 +863,7 @@ static void KeyedStoreGenerateGenericHelper( Register receiver_map, Register elements_map, Register elements) { - ASSERT(!AreAliased( + DCHECK(!AreAliased( value, key, receiver, receiver_map, elements_map, elements, x10, x11)); Label transition_smi_elements; @@ -1043,10 +970,10 @@ static void KeyedStoreGenerateGenericHelper( x10, x11, slow); - ASSERT(receiver_map.Is(x3)); // Transition code expects map in x3. AllocationSiteMode mode = AllocationSite::GetMode(FAST_SMI_ELEMENTS, FAST_DOUBLE_ELEMENTS); - ElementsTransitionGenerator::GenerateSmiToDouble(masm, mode, slow); + ElementsTransitionGenerator::GenerateSmiToDouble( + masm, receiver, key, value, receiver_map, mode, slow); __ Ldr(elements, FieldMemOperand(receiver, JSObject::kElementsOffset)); __ B(&fast_double_without_map_check); @@ -1058,10 +985,11 @@ static void KeyedStoreGenerateGenericHelper( x10, x11, slow); - ASSERT(receiver_map.Is(x3)); // Transition code expects map in x3. + mode = AllocationSite::GetMode(FAST_SMI_ELEMENTS, FAST_ELEMENTS); - ElementsTransitionGenerator::GenerateMapChangeElementsTransition(masm, mode, - slow); + ElementsTransitionGenerator::GenerateMapChangeElementsTransition( + masm, receiver, key, value, receiver_map, mode, slow); + __ Ldr(elements, FieldMemOperand(receiver, JSObject::kElementsOffset)); __ B(&finish_store); @@ -1075,9 +1003,9 @@ static void KeyedStoreGenerateGenericHelper( x10, x11, slow); - ASSERT(receiver_map.Is(x3)); // Transition code expects map in x3. 
mode = AllocationSite::GetMode(FAST_DOUBLE_ELEMENTS, FAST_ELEMENTS); - ElementsTransitionGenerator::GenerateDoubleToObject(masm, mode, slow); + ElementsTransitionGenerator::GenerateDoubleToObject( + masm, receiver, key, value, receiver_map, mode, slow); __ Ldr(elements, FieldMemOperand(receiver, JSObject::kElementsOffset)); __ B(&finish_store); } @@ -1086,12 +1014,6 @@ static void KeyedStoreGenerateGenericHelper( void KeyedStoreIC::GenerateGeneric(MacroAssembler* masm, StrictMode strict_mode) { ASM_LOCATION("KeyedStoreIC::GenerateGeneric"); - // ---------- S t a t e -------------- - // -- x0 : value - // -- x1 : key - // -- x2 : receiver - // -- lr : return address - // ----------------------------------- Label slow; Label array; Label fast_object; @@ -1100,9 +1022,13 @@ void KeyedStoreIC::GenerateGeneric(MacroAssembler* masm, Label fast_double_grow; Label fast_double; - Register value = x0; - Register key = x1; - Register receiver = x2; + Register value = ValueRegister(); + Register key = NameRegister(); + Register receiver = ReceiverRegister(); + DCHECK(receiver.is(x1)); + DCHECK(key.is(x2)); + DCHECK(value.is(x0)); + Register receiver_map = x3; Register elements = x4; Register elements_map = x5; @@ -1187,17 +1113,15 @@ void KeyedStoreIC::GenerateGeneric(MacroAssembler* masm, void StoreIC::GenerateMegamorphic(MacroAssembler* masm) { - // ----------- S t a t e ------------- - // -- x0 : value - // -- x1 : receiver - // -- x2 : name - // -- lr : return address - // ----------------------------------- + Register receiver = ReceiverRegister(); + Register name = NameRegister(); + DCHECK(!AreAliased(receiver, name, ValueRegister(), x3, x4, x5, x6)); // Probe the stub cache. - Code::Flags flags = Code::ComputeHandlerFlags(Code::STORE_IC); + Code::Flags flags = Code::RemoveTypeAndHolderFromFlags( + Code::ComputeHandlerFlags(Code::STORE_IC)); masm->isolate()->stub_cache()->GenerateProbe( - masm, flags, x1, x2, x3, x4, x5, x6); + masm, flags, receiver, name, x3, x4, x5, x6); // Cache miss: Jump to runtime. GenerateMiss(masm); @@ -1205,14 +1129,7 @@ void StoreIC::GenerateMegamorphic(MacroAssembler* masm) { void StoreIC::GenerateMiss(MacroAssembler* masm) { - // ----------- S t a t e ------------- - // -- x0 : value - // -- x1 : receiver - // -- x2 : name - // -- lr : return address - // ----------------------------------- - - __ Push(x1, x2, x0); + __ Push(ReceiverRegister(), NameRegister(), ValueRegister()); // Tail call to the entry. 
ExternalReference ref = @@ -1222,20 +1139,14 @@ void StoreIC::GenerateMiss(MacroAssembler* masm) { void StoreIC::GenerateNormal(MacroAssembler* masm) { - // ----------- S t a t e ------------- - // -- x0 : value - // -- x1 : receiver - // -- x2 : name - // -- lr : return address - // ----------------------------------- Label miss; - Register value = x0; - Register receiver = x1; - Register name = x2; + Register value = ValueRegister(); + Register receiver = ReceiverRegister(); + Register name = NameRegister(); Register dictionary = x3; + DCHECK(!AreAliased(value, receiver, name, x3, x4, x5)); - GenerateNameDictionaryReceiverCheck( - masm, receiver, dictionary, x4, x5, &miss); + __ Ldr(dictionary, FieldMemOperand(receiver, JSObject::kPropertiesOffset)); GenerateDictionaryStore(masm, &miss, dictionary, name, value, x4, x5); Counters* counters = masm->isolate()->counters(); @@ -1252,21 +1163,14 @@ void StoreIC::GenerateNormal(MacroAssembler* masm) { void StoreIC::GenerateRuntimeSetProperty(MacroAssembler* masm, StrictMode strict_mode) { ASM_LOCATION("StoreIC::GenerateRuntimeSetProperty"); - // ----------- S t a t e ------------- - // -- x0 : value - // -- x1 : receiver - // -- x2 : name - // -- lr : return address - // ----------------------------------- - __ Push(x1, x2, x0); + __ Push(ReceiverRegister(), NameRegister(), ValueRegister()); - __ Mov(x11, Smi::FromInt(NONE)); // PropertyAttributes __ Mov(x10, Smi::FromInt(strict_mode)); - __ Push(x11, x10); + __ Push(x10); // Do tail-call to runtime routine. - __ TailCallRuntime(Runtime::kSetProperty, 5, 1); + __ TailCallRuntime(Runtime::kSetProperty, 4, 1); } @@ -1279,7 +1183,7 @@ void StoreIC::GenerateSlow(MacroAssembler* masm) { // ----------------------------------- // Push receiver, name and value for runtime call. - __ Push(x1, x2, x0); + __ Push(ReceiverRegister(), NameRegister(), ValueRegister()); // The slow case calls into the runtime to complete the store without causing // an IC miss that would otherwise cause a transition to the generic stub. @@ -1349,9 +1253,9 @@ void PatchInlinedSmiCode(Address address, InlinedSmiCheck check) { // tb(!n)z test_reg, #0, <target> Instruction* to_patch = info.SmiCheck(); PatchingAssembler patcher(to_patch, 1); - ASSERT(to_patch->IsTestBranch()); - ASSERT(to_patch->ImmTestBranchBit5() == 0); - ASSERT(to_patch->ImmTestBranchBit40() == 0); + DCHECK(to_patch->IsTestBranch()); + DCHECK(to_patch->ImmTestBranchBit5() == 0); + DCHECK(to_patch->ImmTestBranchBit40() == 0); STATIC_ASSERT(kSmiTag == 0); STATIC_ASSERT(kSmiTagMask == 1); @@ -1359,11 +1263,11 @@ void PatchInlinedSmiCode(Address address, InlinedSmiCheck check) { int branch_imm = to_patch->ImmTestBranch(); Register smi_reg; if (check == ENABLE_INLINED_SMI_CHECK) { - ASSERT(to_patch->Rt() == xzr.code()); + DCHECK(to_patch->Rt() == xzr.code()); smi_reg = info.SmiRegister(); } else { - ASSERT(check == DISABLE_INLINED_SMI_CHECK); - ASSERT(to_patch->Rt() != xzr.code()); + DCHECK(check == DISABLE_INLINED_SMI_CHECK); + DCHECK(to_patch->Rt() != xzr.code()); smi_reg = xzr; } @@ -1371,7 +1275,7 @@ void PatchInlinedSmiCode(Address address, InlinedSmiCheck check) { // This is JumpIfNotSmi(smi_reg, branch_imm). patcher.tbnz(smi_reg, 0, branch_imm); } else { - ASSERT(to_patch->Mask(TestBranchMask) == TBNZ); + DCHECK(to_patch->Mask(TestBranchMask) == TBNZ); // This is JumpIfSmi(smi_reg, branch_imm). 
patcher.tbz(smi_reg, 0, branch_imm); } diff --git a/deps/v8/src/arm64/instructions-arm64.cc b/deps/v8/src/arm64/instructions-arm64.cc index 2996fc94c7a..a6ca6affae9 100644 --- a/deps/v8/src/arm64/instructions-arm64.cc +++ b/deps/v8/src/arm64/instructions-arm64.cc @@ -2,14 +2,14 @@ // Use of this source code is governed by a BSD-style license that can be // found in the LICENSE file. -#include "v8.h" +#include "src/v8.h" #if V8_TARGET_ARCH_ARM64 #define ARM64_DEFINE_FP_STATICS -#include "arm64/instructions-arm64.h" -#include "arm64/assembler-arm64-inl.h" +#include "src/arm64/assembler-arm64-inl.h" +#include "src/arm64/instructions-arm64.h" namespace v8 { namespace internal { @@ -67,7 +67,7 @@ bool Instruction::IsStore() const { static uint64_t RotateRight(uint64_t value, unsigned int rotate, unsigned int width) { - ASSERT(width <= 64); + DCHECK(width <= 64); rotate &= 63; return ((value & ((1UL << rotate) - 1UL)) << (width - rotate)) | (value >> rotate); @@ -77,9 +77,9 @@ static uint64_t RotateRight(uint64_t value, static uint64_t RepeatBitsAcrossReg(unsigned reg_size, uint64_t value, unsigned width) { - ASSERT((width == 2) || (width == 4) || (width == 8) || (width == 16) || + DCHECK((width == 2) || (width == 4) || (width == 8) || (width == 16) || (width == 32)); - ASSERT((reg_size == kWRegSizeInBits) || (reg_size == kXRegSizeInBits)); + DCHECK((reg_size == kWRegSizeInBits) || (reg_size == kXRegSizeInBits)); uint64_t result = value & ((1UL << width) - 1UL); for (unsigned i = width; i < reg_size; i *= 2) { result |= (result << i); @@ -193,7 +193,7 @@ ptrdiff_t Instruction::ImmPCOffset() { offset = ImmBranch() << kInstructionSizeLog2; } else { // Load literal (offset from PC). - ASSERT(IsLdrLiteral()); + DCHECK(IsLdrLiteral()); // The offset is always shifted by 2 bits, even for loads to 64-bits // registers. offset = ImmLLiteral() << kInstructionSizeLog2; @@ -231,9 +231,9 @@ void Instruction::SetImmPCOffsetTarget(Instruction* target) { void Instruction::SetPCRelImmTarget(Instruction* target) { // ADRP is not supported, so 'this' must point to an ADR instruction. - ASSERT(IsAdr()); + DCHECK(IsAdr()); - int target_offset = DistanceTo(target); + ptrdiff_t target_offset = DistanceTo(target); Instr imm; if (Instruction::IsValidPCRelOffset(target_offset)) { imm = Assembler::ImmPCRelAddress(target_offset); @@ -241,13 +241,13 @@ void Instruction::SetPCRelImmTarget(Instruction* target) { } else { PatchingAssembler patcher(this, PatchingAssembler::kAdrFarPatchableNInstrs); - patcher.PatchAdrFar(target); + patcher.PatchAdrFar(target_offset); } } void Instruction::SetBranchImmTarget(Instruction* target) { - ASSERT(IsAligned(DistanceTo(target), kInstructionSize)); + DCHECK(IsAligned(DistanceTo(target), kInstructionSize)); Instr branch_imm = 0; uint32_t imm_mask = 0; ptrdiff_t offset = DistanceTo(target) >> kInstructionSizeLog2; @@ -279,8 +279,8 @@ void Instruction::SetBranchImmTarget(Instruction* target) { void Instruction::SetImmLLiteral(Instruction* source) { - ASSERT(IsAligned(DistanceTo(source), kInstructionSize)); - ptrdiff_t offset = DistanceTo(source) >> kLiteralEntrySizeLog2; + DCHECK(IsAligned(DistanceTo(source), kInstructionSize)); + ptrdiff_t offset = DistanceTo(source) >> kLoadLiteralScaleLog2; Instr imm = Assembler::ImmLLiteral(offset); Instr mask = ImmLLiteral_mask; @@ -304,7 +304,7 @@ bool InstructionSequence::IsInlineData() const { // xzr and Register are not defined in that header. Consider adding // instructions-arm64-inl.h to work around this. 
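// ---------------------------------------------------------------------------
// A standalone sketch (not part of this patch) of the bit-replication trick
// in RepeatBitsAcrossReg() above: a pattern of `width` bits (a power of two
// between 2 and 32, per the DCHECK) is OR-ed into itself at doubling offsets
// until it fills the register.
#include <stdint.h>
static uint64_t SketchRepeatBits(unsigned reg_size, uint64_t value,
                                 unsigned width) {
  uint64_t result = value & ((1ULL << width) - 1ULL);
  for (unsigned i = width; i < reg_size; i *= 2) {
    result |= (result << i);
  }
  return result;
}
// For example, SketchRepeatBits(32, 0x6, 4) == 0x66666666: the 4-bit pattern
// 0110 repeated eight times across a 32-bit register.
// ---------------------------------------------------------------------------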
uint64_t InstructionSequence::InlineData() const { - ASSERT(IsInlineData()); + DCHECK(IsInlineData()); uint64_t payload = ImmMoveWide(); // TODO(all): If we extend ::InlineData() to support bigger data, we need // to update this method too. diff --git a/deps/v8/src/arm64/instructions-arm64.h b/deps/v8/src/arm64/instructions-arm64.h index 968ddace046..bd4e7537794 100644 --- a/deps/v8/src/arm64/instructions-arm64.h +++ b/deps/v8/src/arm64/instructions-arm64.h @@ -5,10 +5,10 @@ #ifndef V8_ARM64_INSTRUCTIONS_ARM64_H_ #define V8_ARM64_INSTRUCTIONS_ARM64_H_ -#include "globals.h" -#include "utils.h" -#include "arm64/constants-arm64.h" -#include "arm64/utils-arm64.h" +#include "src/arm64/constants-arm64.h" +#include "src/arm64/utils-arm64.h" +#include "src/globals.h" +#include "src/utils.h" namespace v8 { namespace internal { @@ -137,7 +137,7 @@ class Instruction { // ImmPCRel is a compound field (not present in INSTRUCTION_FIELDS_LIST), // formed from ImmPCRelLo and ImmPCRelHi. int ImmPCRel() const { - ASSERT(IsPCRelAddressing()); + DCHECK(IsPCRelAddressing()); int const offset = ((ImmPCRelHi() << ImmPCRelLo_width) | ImmPCRelLo()); int const width = ImmPCRelLo_width + ImmPCRelHi_width; return signed_bitextract_32(width - 1, 0, offset); @@ -353,7 +353,7 @@ class Instruction { void SetImmLLiteral(Instruction* source); uint8_t* LiteralAddress() { - int offset = ImmLLiteral() << kLiteralEntrySizeLog2; + int offset = ImmLLiteral() << kLoadLiteralScaleLog2; return reinterpret_cast<uint8_t*>(this) + offset; } @@ -364,7 +364,7 @@ class Instruction { CheckAlignment check = CHECK_ALIGNMENT) { Address addr = reinterpret_cast<Address>(this) + offset; // The FUZZ_disasm test relies on no check being done. - ASSERT(check == NO_CHECK || IsAddressAligned(addr, kInstructionSize)); + DCHECK(check == NO_CHECK || IsAddressAligned(addr, kInstructionSize)); return Cast(addr); } @@ -416,24 +416,38 @@ const Instr kImmExceptionIsUnreachable = 0xdebf; // A pseudo 'printf' instruction. The arguments will be passed to the platform // printf method. const Instr kImmExceptionIsPrintf = 0xdeb1; -// Parameters are stored in ARM64 registers as if the printf pseudo-instruction -// was a call to the real printf method: -// -// x0: The format string, then either of: +// Most parameters are stored in ARM64 registers as if the printf +// pseudo-instruction was a call to the real printf method: +// x0: The format string. // x1-x7: Optional arguments. // d0-d7: Optional arguments. // -// Floating-point and integer arguments are passed in separate sets of -// registers in AAPCS64 (even for varargs functions), so it is not possible to -// determine the type of location of each arguments without some information -// about the values that were passed in. This information could be retrieved -// from the printf format string, but the format string is not trivial to -// parse so we encode the relevant information with the HLT instruction. -// - Type -// Either kRegister or kFPRegister, but stored as a uint32_t because there's -// no way to guarantee the size of the CPURegister::RegisterType enum. -const unsigned kPrintfTypeOffset = 1 * kInstructionSize; -const unsigned kPrintfLength = 2 * kInstructionSize; +// Also, the argument layout is described inline in the instructions: +// - arg_count: The number of arguments. +// - arg_pattern: A set of PrintfArgPattern values, packed into two-bit fields. 
+// +// Floating-point and integer arguments are passed in separate sets of registers +// in AAPCS64 (even for varargs functions), so it is not possible to determine +// the type of each argument without some information about the values that were +// passed in. This information could be retrieved from the printf format string, +// but the format string is not trivial to parse so we encode the relevant +// information with the HLT instruction. +const unsigned kPrintfArgCountOffset = 1 * kInstructionSize; +const unsigned kPrintfArgPatternListOffset = 2 * kInstructionSize; +const unsigned kPrintfLength = 3 * kInstructionSize; + +const unsigned kPrintfMaxArgCount = 4; + +// The argument pattern is a set of two-bit-fields, each with one of the +// following values: +enum PrintfArgPattern { + kPrintfArgW = 1, + kPrintfArgX = 2, + // There is no kPrintfArgS because floats are always converted to doubles in C + // varargs calls. + kPrintfArgD = 3 +}; +static const unsigned kPrintfArgPatternBits = 2; // A pseudo 'debug' instruction. const Instr kImmExceptionIsDebug = 0xdeb0; diff --git a/deps/v8/src/arm64/instrument-arm64.cc b/deps/v8/src/arm64/instrument-arm64.cc index a6fe1234b0c..59982d975bc 100644 --- a/deps/v8/src/arm64/instrument-arm64.cc +++ b/deps/v8/src/arm64/instrument-arm64.cc @@ -2,14 +2,14 @@ // Use of this source code is governed by a BSD-style license that can be // found in the LICENSE file. -#include "arm64/instrument-arm64.h" +#include "src/arm64/instrument-arm64.h" namespace v8 { namespace internal { Counter::Counter(const char* name, CounterType type) : count_(0), enabled_(false), type_(type) { - ASSERT(name != NULL); + DCHECK(name != NULL); strncpy(name_, name, kCounterNameMaxLength); } @@ -107,8 +107,7 @@ Instrument::Instrument(const char* datafile, uint64_t sample_period) } } - static const int num_counters = - sizeof(kCounterList) / sizeof(CounterDescriptor); + static const int num_counters = ARRAY_SIZE(kCounterList); // Dump an instrumentation description comment at the top of the file. fprintf(output_stream_, "# counters=%d\n", num_counters); @@ -144,7 +143,7 @@ void Instrument::Update() { // Increment the instruction counter, and dump all counters if a sample period // has elapsed. 
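// ---------------------------------------------------------------------------
// A sketch (not part of this patch) of the PrintfArgPattern encoding added to
// instructions-arm64.h above: each argument type occupies one two-bit field
// (kPrintfArgPatternBits == 2), so the patterns for up to
// kPrintfMaxArgCount == 4 arguments fit in a single word. The helpers below
// are hypothetical and assume argument 0 sits in the least-significant field.
#include <stdint.h>
enum SketchArgPattern { kSketchArgW = 1, kSketchArgX = 2, kSketchArgD = 3 };
static const unsigned kSketchArgPatternBits = 2;

static uint32_t SketchPackArgPatterns(const SketchArgPattern* args,
                                      unsigned count) {
  uint32_t list = 0;
  for (unsigned i = 0; i < count; i++) {
    list |= static_cast<uint32_t>(args[i]) << (i * kSketchArgPatternBits);
  }
  return list;
}

static SketchArgPattern SketchUnpackArgPattern(uint32_t list, unsigned i) {
  uint32_t mask = (1u << kSketchArgPatternBits) - 1u;
  return static_cast<SketchArgPattern>(
      (list >> (i * kSketchArgPatternBits)) & mask);
}
// e.g. packing {kSketchArgX, kSketchArgD} gives 0xE: X (0b10) in bits 1:0,
// D (0b11) in bits 3:2.
// ---------------------------------------------------------------------------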
static Counter* counter = GetCounter("Instruction"); - ASSERT(counter->type() == Cumulative); + DCHECK(counter->type() == Cumulative); counter->Increment(); if (counter->IsEnabled() && (counter->count() % sample_period_) == 0) { diff --git a/deps/v8/src/arm64/instrument-arm64.h b/deps/v8/src/arm64/instrument-arm64.h index 2d41b585748..86ddfcbbc1e 100644 --- a/deps/v8/src/arm64/instrument-arm64.h +++ b/deps/v8/src/arm64/instrument-arm64.h @@ -5,10 +5,11 @@ #ifndef V8_ARM64_INSTRUMENT_ARM64_H_ #define V8_ARM64_INSTRUMENT_ARM64_H_ -#include "globals.h" -#include "utils.h" -#include "arm64/decoder-arm64.h" -#include "arm64/constants-arm64.h" +#include "src/globals.h" +#include "src/utils.h" + +#include "src/arm64/constants-arm64.h" +#include "src/arm64/decoder-arm64.h" namespace v8 { namespace internal { @@ -31,7 +32,7 @@ enum CounterType { class Counter { public: - Counter(const char* name, CounterType type = Gauge); + explicit Counter(const char* name, CounterType type = Gauge); void Increment(); void Enable(); diff --git a/deps/v8/src/arm64/lithium-arm64.cc b/deps/v8/src/arm64/lithium-arm64.cc index a0d3c298f1e..7bb66dbd701 100644 --- a/deps/v8/src/arm64/lithium-arm64.cc +++ b/deps/v8/src/arm64/lithium-arm64.cc @@ -2,17 +2,15 @@ // Use of this source code is governed by a BSD-style license that can be // found in the LICENSE file. -#include "v8.h" +#include "src/v8.h" -#include "lithium-allocator-inl.h" -#include "arm64/lithium-arm64.h" -#include "arm64/lithium-codegen-arm64.h" -#include "hydrogen-osr.h" +#include "src/arm64/lithium-codegen-arm64.h" +#include "src/hydrogen-osr.h" +#include "src/lithium-inl.h" namespace v8 { namespace internal { - #define DEFINE_COMPILE(type) \ void L##type::CompileToNative(LCodeGen* generator) { \ generator->Do##type(this); \ @@ -26,17 +24,17 @@ void LInstruction::VerifyCall() { // outputs because all registers are blocked by the calling convention. // Inputs operands must use a fixed register or use-at-start policy or // a non-register policy. 
- ASSERT(Output() == NULL || + DCHECK(Output() == NULL || LUnallocated::cast(Output())->HasFixedPolicy() || !LUnallocated::cast(Output())->HasRegisterPolicy()); for (UseIterator it(this); !it.Done(); it.Advance()) { LUnallocated* operand = LUnallocated::cast(it.Current()); - ASSERT(operand->HasFixedPolicy() || + DCHECK(operand->HasFixedPolicy() || operand->IsUsedAtStart()); } for (TempIterator it(this); !it.Done(); it.Advance()) { LUnallocated* operand = LUnallocated::cast(it.Current()); - ASSERT(operand->HasFixedPolicy() ||!operand->HasRegisterPolicy()); + DCHECK(operand->HasFixedPolicy() ||!operand->HasRegisterPolicy()); } } #endif @@ -284,7 +282,9 @@ void LStoreKeyedGeneric::PrintDataTo(StringStream* stream) { void LStoreNamedField::PrintDataTo(StringStream* stream) { object()->PrintTo(stream); - hydrogen()->access().PrintTo(stream); + OStringStream os; + os << hydrogen()->access(); + stream->Add(os.c_str()); stream->Add(" <- "); value()->PrintTo(stream); } @@ -501,7 +501,7 @@ LInstruction* LChunkBuilder::MarkAsCall(LInstruction* instr, LInstruction* LChunkBuilder::AssignPointerMap(LInstruction* instr) { - ASSERT(!instr->HasPointerMap()); + DCHECK(!instr->HasPointerMap()); instr->set_pointer_map(new(zone()) LPointerMap(zone())); return instr; } @@ -543,21 +543,28 @@ LOperand* LPlatformChunk::GetNextSpillSlot(RegisterKind kind) { if (kind == DOUBLE_REGISTERS) { return LDoubleStackSlot::Create(index, zone()); } else { - ASSERT(kind == GENERAL_REGISTERS); + DCHECK(kind == GENERAL_REGISTERS); return LStackSlot::Create(index, zone()); } } +LOperand* LChunkBuilder::FixedTemp(Register reg) { + LUnallocated* operand = ToUnallocated(reg); + DCHECK(operand->HasFixedPolicy()); + return operand; +} + + LOperand* LChunkBuilder::FixedTemp(DoubleRegister reg) { LUnallocated* operand = ToUnallocated(reg); - ASSERT(operand->HasFixedPolicy()); + DCHECK(operand->HasFixedPolicy()); return operand; } LPlatformChunk* LChunkBuilder::Build() { - ASSERT(is_unused()); + DCHECK(is_unused()); chunk_ = new(zone()) LPlatformChunk(info_, graph_); LPhase phase("L_Building chunk", chunk_); status_ = BUILDING; @@ -583,7 +590,7 @@ LPlatformChunk* LChunkBuilder::Build() { void LChunkBuilder::DoBasicBlock(HBasicBlock* block) { - ASSERT(is_building()); + DCHECK(is_building()); current_block_ = block; if (block->IsStartBlock()) { @@ -592,14 +599,14 @@ void LChunkBuilder::DoBasicBlock(HBasicBlock* block) { } else if (block->predecessors()->length() == 1) { // We have a single predecessor => copy environment and outgoing // argument count from the predecessor. - ASSERT(block->phis()->length() == 0); + DCHECK(block->phis()->length() == 0); HBasicBlock* pred = block->predecessors()->at(0); HEnvironment* last_environment = pred->last_environment(); - ASSERT(last_environment != NULL); + DCHECK(last_environment != NULL); // Only copy the environment, if it is later used again. if (pred->end()->SecondSuccessor() == NULL) { - ASSERT(pred->end()->FirstSuccessor() == block); + DCHECK(pred->end()->FirstSuccessor() == block); } else { if ((pred->end()->FirstSuccessor()->block_id() > block->block_id()) || (pred->end()->SecondSuccessor()->block_id() > block->block_id())) { @@ -607,7 +614,7 @@ void LChunkBuilder::DoBasicBlock(HBasicBlock* block) { } } block->UpdateEnvironment(last_environment); - ASSERT(pred->argument_count() >= 0); + DCHECK(pred->argument_count() >= 0); argument_count_ = pred->argument_count(); } else { // We are at a state join => process phis. 
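// ---------------------------------------------------------------------------
// A sketch (not part of this patch) of the branch elision introduced in the
// VisitInstruction() hunk that follows: when a control instruction can
// already name its sole successor at compile time (KnownSuccessorBlock()
// succeeds), it is lowered to an unconditional LGoto instead of a real
// branch, which is why the per-instruction CheckElideControlInstruction()
// calls disappear later in this file. Types here are hypothetical stand-ins.
#include <stddef.h>
struct SketchBlock;
struct SketchControl {
  bool outcome_known;         // e.g. a comparison of two constants
  SketchBlock* known_target;  // the single successor when outcome_known
};
// Returns the goto target when the branch folds away, or NULL otherwise.
static SketchBlock* SketchKnownSuccessor(const SketchControl& c) {
  return c.outcome_known ? c.known_target : NULL;
}
// ---------------------------------------------------------------------------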
@@ -660,7 +667,7 @@ void LChunkBuilder::VisitInstruction(HInstruction* current) { if (current->OperandCount() == 0) { instr = DefineAsRegister(new(zone()) LDummy()); } else { - ASSERT(!current->OperandAt(0)->IsControlInstruction()); + DCHECK(!current->OperandAt(0)->IsControlInstruction()); instr = DefineAsRegister(new(zone()) LDummyUse(UseAny(current->OperandAt(0)))); } @@ -672,76 +679,90 @@ chunk_->AddInstruction(dummy, current_block_); } } else { - instr = current->CompileToLithium(this); + HBasicBlock* successor; + if (current->IsControlInstruction() && + HControlInstruction::cast(current)->KnownSuccessorBlock(&successor) && + successor != NULL) { + instr = new(zone()) LGoto(successor); + } else { + instr = current->CompileToLithium(this); + } } argument_count_ += current->argument_delta(); - ASSERT(argument_count_ >= 0); + DCHECK(argument_count_ >= 0); if (instr != NULL) { - // Associate the hydrogen instruction first, since we may need it for - // the ClobbersRegisters() or ClobbersDoubleRegisters() calls below. - instr->set_hydrogen_value(current); + AddInstruction(instr, current); + } + + current_instruction_ = old_current; +} + + +void LChunkBuilder::AddInstruction(LInstruction* instr, + HInstruction* hydrogen_val) { + // Associate the hydrogen instruction first, since we may need it for + // the ClobbersRegisters() or ClobbersDoubleRegisters() calls below. + instr->set_hydrogen_value(hydrogen_val); #if DEBUG - // Make sure that the lithium instruction has either no fixed register - // constraints in temps or the result OR no uses that are only used at - // start. If this invariant doesn't hold, the register allocator can decide - // to insert a split of a range immediately before the instruction due to an - // already allocated register needing to be used for the instruction's fixed - // register constraint. In this case, the register allocator won't see an - // interference between the split child and the use-at-start (it would if - // the it was just a plain use), so it is free to move the split child into - // the same register that is used for the use-at-start. - // See https://code.google.com/p/chromium/issues/detail?id=201590 - if (!(instr->ClobbersRegisters() && - instr->ClobbersDoubleRegisters(isolate()))) { - int fixed = 0; - int used_at_start = 0; - for (UseIterator it(instr); !it.Done(); it.Advance()) { - LUnallocated* operand = LUnallocated::cast(it.Current()); - if (operand->IsUsedAtStart()) ++used_at_start; - } - if (instr->Output() != NULL) { - if (LUnallocated::cast(instr->Output())->HasFixedPolicy()) ++fixed; - } - for (TempIterator it(instr); !it.Done(); it.Advance()) { - LUnallocated* operand = LUnallocated::cast(it.Current()); - if (operand->HasFixedPolicy()) ++fixed; - } - ASSERT(fixed == 0 || used_at_start == 0); + // Make sure that the lithium instruction has either no fixed register + // constraints in temps or the result OR no uses that are only used at + // start. If this invariant doesn't hold, the register allocator can decide + // to insert a split of a range immediately before the instruction due to an + // already allocated register needing to be used for the instruction's fixed + // register constraint. In this case, the register allocator won't see an + // interference between the split child and the use-at-start (it would if + // it was just a plain use), so it is free to move the split child into + // the same register that is used for the use-at-start.
+ // See https://code.google.com/p/chromium/issues/detail?id=201590 + if (!(instr->ClobbersRegisters() && + instr->ClobbersDoubleRegisters(isolate()))) { + int fixed = 0; + int used_at_start = 0; + for (UseIterator it(instr); !it.Done(); it.Advance()) { + LUnallocated* operand = LUnallocated::cast(it.Current()); + if (operand->IsUsedAtStart()) ++used_at_start; + } + if (instr->Output() != NULL) { + if (LUnallocated::cast(instr->Output())->HasFixedPolicy()) ++fixed; } + for (TempIterator it(instr); !it.Done(); it.Advance()) { + LUnallocated* operand = LUnallocated::cast(it.Current()); + if (operand->HasFixedPolicy()) ++fixed; + } + DCHECK(fixed == 0 || used_at_start == 0); + } #endif - if (FLAG_stress_pointer_maps && !instr->HasPointerMap()) { - instr = AssignPointerMap(instr); - } - if (FLAG_stress_environments && !instr->HasEnvironment()) { - instr = AssignEnvironment(instr); + if (FLAG_stress_pointer_maps && !instr->HasPointerMap()) { + instr = AssignPointerMap(instr); + } + if (FLAG_stress_environments && !instr->HasEnvironment()) { + instr = AssignEnvironment(instr); + } + chunk_->AddInstruction(instr, current_block_); + + if (instr->IsCall()) { + HValue* hydrogen_value_for_lazy_bailout = hydrogen_val; + LInstruction* instruction_needing_environment = NULL; + if (hydrogen_val->HasObservableSideEffects()) { + HSimulate* sim = HSimulate::cast(hydrogen_val->next()); + instruction_needing_environment = instr; + sim->ReplayEnvironment(current_block_->last_environment()); + hydrogen_value_for_lazy_bailout = sim; } - chunk_->AddInstruction(instr, current_block_); - - if (instr->IsCall()) { - HValue* hydrogen_value_for_lazy_bailout = current; - LInstruction* instruction_needing_environment = NULL; - if (current->HasObservableSideEffects()) { - HSimulate* sim = HSimulate::cast(current->next()); - instruction_needing_environment = instr; - sim->ReplayEnvironment(current_block_->last_environment()); - hydrogen_value_for_lazy_bailout = sim; - } - LInstruction* bailout = AssignEnvironment(new(zone()) LLazyBailout()); - bailout->set_hydrogen_value(hydrogen_value_for_lazy_bailout); - chunk_->AddInstruction(bailout, current_block_); - if (instruction_needing_environment != NULL) { - // Store the lazy deopt environment with the instruction if needed. - // Right now it is only used for LInstanceOfKnownGlobal. - instruction_needing_environment-> - SetDeferredLazyDeoptimizationEnvironment(bailout->environment()); - } + LInstruction* bailout = AssignEnvironment(new(zone()) LLazyBailout()); + bailout->set_hydrogen_value(hydrogen_value_for_lazy_bailout); + chunk_->AddInstruction(bailout, current_block_); + if (instruction_needing_environment != NULL) { + // Store the lazy deopt environment with the instruction if needed. + // Right now it is only used for LInstanceOfKnownGlobal. 
+ instruction_needing_environment-> + SetDeferredLazyDeoptimizationEnvironment(bailout->environment()); } } - current_instruction_ = old_current; } @@ -765,9 +786,9 @@ LInstruction* LChunkBuilder::DoAbnormalExit(HAbnormalExit* instr) { LInstruction* LChunkBuilder::DoArithmeticD(Token::Value op, HArithmeticBinaryOperation* instr) { - ASSERT(instr->representation().IsDouble()); - ASSERT(instr->left()->representation().IsDouble()); - ASSERT(instr->right()->representation().IsDouble()); + DCHECK(instr->representation().IsDouble()); + DCHECK(instr->left()->representation().IsDouble()); + DCHECK(instr->right()->representation().IsDouble()); if (op == Token::MOD) { LOperand* left = UseFixedDouble(instr->left(), d0); @@ -785,7 +806,7 @@ LInstruction* LChunkBuilder::DoArithmeticD(Token::Value op, LInstruction* LChunkBuilder::DoArithmeticT(Token::Value op, HBinaryOperation* instr) { - ASSERT((op == Token::ADD) || (op == Token::SUB) || (op == Token::MUL) || + DCHECK((op == Token::ADD) || (op == Token::SUB) || (op == Token::MUL) || (op == Token::DIV) || (op == Token::MOD) || (op == Token::SHR) || (op == Token::SHL) || (op == Token::SAR) || (op == Token::ROR) || (op == Token::BIT_OR) || (op == Token::BIT_AND) || @@ -795,9 +816,9 @@ LInstruction* LChunkBuilder::DoArithmeticT(Token::Value op, // TODO(jbramley): Once we've implemented smi support for all arithmetic // operations, these assertions should check IsTagged(). - ASSERT(instr->representation().IsSmiOrTagged()); - ASSERT(left->representation().IsSmiOrTagged()); - ASSERT(right->representation().IsSmiOrTagged()); + DCHECK(instr->representation().IsSmiOrTagged()); + DCHECK(left->representation().IsSmiOrTagged()); + DCHECK(right->representation().IsSmiOrTagged()); LOperand* context = UseFixed(instr->context(), cp); LOperand* left_operand = UseFixed(left, x1); @@ -837,8 +858,8 @@ LInstruction* LChunkBuilder::DoAccessArgumentsAt(HAccessArgumentsAt* instr) { LInstruction* LChunkBuilder::DoAdd(HAdd* instr) { if (instr->representation().IsSmiOrInteger32()) { - ASSERT(instr->left()->representation().Equals(instr->representation())); - ASSERT(instr->right()->representation().Equals(instr->representation())); + DCHECK(instr->left()->representation().Equals(instr->representation())); + DCHECK(instr->right()->representation().Equals(instr->representation())); LInstruction* shifted_operation = TryDoOpWithShiftedRightOperand(instr); if (shifted_operation != NULL) { @@ -856,16 +877,16 @@ LInstruction* LChunkBuilder::DoAdd(HAdd* instr) { } return result; } else if (instr->representation().IsExternal()) { - ASSERT(instr->left()->representation().IsExternal()); - ASSERT(instr->right()->representation().IsInteger32()); - ASSERT(!instr->CheckFlag(HValue::kCanOverflow)); + DCHECK(instr->left()->representation().IsExternal()); + DCHECK(instr->right()->representation().IsInteger32()); + DCHECK(!instr->CheckFlag(HValue::kCanOverflow)); LOperand* left = UseRegisterAtStart(instr->left()); LOperand* right = UseRegisterOrConstantAtStart(instr->right()); return DefineAsRegister(new(zone()) LAddE(left, right)); } else if (instr->representation().IsDouble()) { return DoArithmeticD(Token::ADD, instr); } else { - ASSERT(instr->representation().IsTagged()); + DCHECK(instr->representation().IsTagged()); return DoArithmeticT(Token::ADD, instr); } } @@ -921,9 +942,9 @@ LInstruction* LChunkBuilder::DoArgumentsObject(HArgumentsObject* instr) { LInstruction* LChunkBuilder::DoBitwise(HBitwise* instr) { if (instr->representation().IsSmiOrInteger32()) { - 
ASSERT(instr->left()->representation().Equals(instr->representation())); - ASSERT(instr->right()->representation().Equals(instr->representation())); - ASSERT(instr->CheckFlag(HValue::kTruncatingToInt32)); + DCHECK(instr->left()->representation().Equals(instr->representation())); + DCHECK(instr->right()->representation().Equals(instr->representation())); + DCHECK(instr->CheckFlag(HValue::kTruncatingToInt32)); LInstruction* shifted_operation = TryDoOpWithShiftedRightOperand(instr); if (shifted_operation != NULL) { @@ -965,9 +986,6 @@ LInstruction* LChunkBuilder::DoBoundsCheck(HBoundsCheck* instr) { LInstruction* LChunkBuilder::DoBranch(HBranch* instr) { - LInstruction* goto_instr = CheckElideControlInstruction(instr); - if (goto_instr != NULL) return goto_instr; - HValue* value = instr->value(); Representation r = value->representation(); HType type = value->type(); @@ -976,7 +994,7 @@ LInstruction* LChunkBuilder::DoBranch(HBranch* instr) { // These representations have simple checks that cannot deoptimize. return new(zone()) LBranch(UseRegister(value), NULL, NULL); } else { - ASSERT(r.IsTagged()); + DCHECK(r.IsTagged()); if (type.IsBoolean() || type.IsSmi() || type.IsJSArray() || type.IsHeapNumber()) { // These types have simple checks that cannot deoptimize. @@ -996,7 +1014,7 @@ LInstruction* LChunkBuilder::DoBranch(HBranch* instr) { if (expected.IsGeneric() || expected.IsEmpty()) { // The generic case cannot deoptimize because it already supports every // possible input type. - ASSERT(needs_temps); + DCHECK(needs_temps); return new(zone()) LBranch(UseRegister(value), temp1, temp2); } else { return AssignEnvironment( @@ -1018,7 +1036,7 @@ LInstruction* LChunkBuilder::DoCallJSFunction( LInstruction* LChunkBuilder::DoCallWithDescriptor( HCallWithDescriptor* instr) { - const CallInterfaceDescriptor* descriptor = instr->descriptor(); + const InterfaceDescriptor* descriptor = instr->descriptor(); LOperand* target = UseRegisterOrConstantAtStart(instr->target()); ZoneList<LOperand*> ops(instr->OperandCount(), zone()); @@ -1108,7 +1126,7 @@ LInstruction* LChunkBuilder::DoChange(HChange* instr) { } return AssignEnvironment(DefineSameAsFirst(new(zone()) LCheckSmi(value))); } else { - ASSERT(to.IsInteger32()); + DCHECK(to.IsInteger32()); if (val->type().IsSmi() || val->representation().IsSmi()) { LOperand* value = UseRegisterAtStart(val); return DefineAsRegister(new(zone()) LSmiUntag(value, false)); @@ -1132,7 +1150,7 @@ LInstruction* LChunkBuilder::DoChange(HChange* instr) { LNumberTagD* result = new(zone()) LNumberTagD(value, temp1, temp2); return AssignPointerMap(DefineAsRegister(result)); } else { - ASSERT(to.IsSmi() || to.IsInteger32()); + DCHECK(to.IsSmi() || to.IsInteger32()); if (instr->CanTruncateToInt32()) { LOperand* value = UseRegister(val); return DefineAsRegister(new(zone()) LTruncateDoubleToIntOrSmi(value)); @@ -1164,7 +1182,7 @@ LInstruction* LChunkBuilder::DoChange(HChange* instr) { } return result; } else { - ASSERT(to.IsDouble()); + DCHECK(to.IsDouble()); if (val->CheckFlag(HInstruction::kUint32)) { return DefineAsRegister( new(zone()) LUint32ToDouble(UseRegisterAtStart(val))); @@ -1209,7 +1227,9 @@ LInstruction* LChunkBuilder::DoCheckMaps(HCheckMaps* instr) { LInstruction* LChunkBuilder::DoCheckHeapObject(HCheckHeapObject* instr) { LOperand* value = UseRegisterAtStart(instr->value()); LInstruction* result = new(zone()) LCheckNonSmi(value); - if (!instr->value()->IsHeapObject()) result = AssignEnvironment(result); + if (!instr->value()->type().IsHeapObject()) { + result = 
AssignEnvironment(result); + } return result; } @@ -1229,7 +1249,7 @@ LInstruction* LChunkBuilder::DoClampToUint8(HClampToUint8* instr) { } else if (input_rep.IsInteger32()) { return DefineAsRegister(new(zone()) LClampIToUint8(reg)); } else { - ASSERT(input_rep.IsSmiOrTagged()); + DCHECK(input_rep.IsSmiOrTagged()); return AssignEnvironment( DefineAsRegister(new(zone()) LClampTToUint8(reg, TempRegister(), @@ -1240,7 +1260,7 @@ LInstruction* LChunkBuilder::DoClampToUint8(HClampToUint8* instr) { LInstruction* LChunkBuilder::DoClassOfTestAndBranch( HClassOfTestAndBranch* instr) { - ASSERT(instr->value()->representation().IsTagged()); + DCHECK(instr->value()->representation().IsTagged()); LOperand* value = UseRegisterAtStart(instr->value()); return new(zone()) LClassOfTestAndBranch(value, TempRegister(), @@ -1250,36 +1270,32 @@ LInstruction* LChunkBuilder::DoClassOfTestAndBranch( LInstruction* LChunkBuilder::DoCompareNumericAndBranch( HCompareNumericAndBranch* instr) { - LInstruction* goto_instr = CheckElideControlInstruction(instr); - if (goto_instr != NULL) return goto_instr; Representation r = instr->representation(); if (r.IsSmiOrInteger32()) { - ASSERT(instr->left()->representation().Equals(r)); - ASSERT(instr->right()->representation().Equals(r)); + DCHECK(instr->left()->representation().Equals(r)); + DCHECK(instr->right()->representation().Equals(r)); LOperand* left = UseRegisterOrConstantAtStart(instr->left()); LOperand* right = UseRegisterOrConstantAtStart(instr->right()); return new(zone()) LCompareNumericAndBranch(left, right); } else { - ASSERT(r.IsDouble()); - ASSERT(instr->left()->representation().IsDouble()); - ASSERT(instr->right()->representation().IsDouble()); - // TODO(all): In fact the only case that we can handle more efficiently is - // when one of the operand is the constant 0. Currently the MacroAssembler - // will be able to cope with any constant by loading it into an internal - // scratch register. This means that if the constant is used more that once, - // it will be loaded multiple times. Unfortunatly crankshaft already - // duplicates constant loads, but we should modify the code below once this - // issue has been addressed in crankshaft. 
- LOperand* left = UseRegisterOrConstantAtStart(instr->left()); - LOperand* right = UseRegisterOrConstantAtStart(instr->right()); + DCHECK(r.IsDouble()); + DCHECK(instr->left()->representation().IsDouble()); + DCHECK(instr->right()->representation().IsDouble()); + if (instr->left()->IsConstant() && instr->right()->IsConstant()) { + LOperand* left = UseConstant(instr->left()); + LOperand* right = UseConstant(instr->right()); + return new(zone()) LCompareNumericAndBranch(left, right); + } + LOperand* left = UseRegisterAtStart(instr->left()); + LOperand* right = UseRegisterAtStart(instr->right()); return new(zone()) LCompareNumericAndBranch(left, right); } } LInstruction* LChunkBuilder::DoCompareGeneric(HCompareGeneric* instr) { - ASSERT(instr->left()->representation().IsTagged()); - ASSERT(instr->right()->representation().IsTagged()); + DCHECK(instr->left()->representation().IsTagged()); + DCHECK(instr->right()->representation().IsTagged()); LOperand* context = UseFixed(instr->context(), cp); LOperand* left = UseFixed(instr->left(), x1); LOperand* right = UseFixed(instr->right(), x0); @@ -1302,9 +1318,6 @@ LInstruction* LChunkBuilder::DoCompareHoleAndBranch( LInstruction* LChunkBuilder::DoCompareObjectEqAndBranch( HCompareObjectEqAndBranch* instr) { - LInstruction* goto_instr = CheckElideControlInstruction(instr); - if (goto_instr != NULL) return goto_instr; - LOperand* left = UseRegisterAtStart(instr->left()); LOperand* right = UseRegisterAtStart(instr->right()); return new(zone()) LCmpObjectEqAndBranch(left, right); @@ -1312,10 +1325,7 @@ LInstruction* LChunkBuilder::DoCompareObjectEqAndBranch( LInstruction* LChunkBuilder::DoCompareMap(HCompareMap* instr) { - LInstruction* goto_instr = CheckElideControlInstruction(instr); - if (goto_instr != NULL) return goto_instr; - - ASSERT(instr->value()->representation().IsTagged()); + DCHECK(instr->value()->representation().IsTagged()); LOperand* value = UseRegisterAtStart(instr->value()); LOperand* temp = TempRegister(); return new(zone()) LCmpMapAndBranch(value, temp); @@ -1376,9 +1386,9 @@ LInstruction* LChunkBuilder::DoDeoptimize(HDeoptimize* instr) { LInstruction* LChunkBuilder::DoDivByPowerOf2I(HDiv* instr) { - ASSERT(instr->representation().IsInteger32()); - ASSERT(instr->left()->representation().Equals(instr->representation())); - ASSERT(instr->right()->representation().Equals(instr->representation())); + DCHECK(instr->representation().IsInteger32()); + DCHECK(instr->left()->representation().Equals(instr->representation())); + DCHECK(instr->right()->representation().Equals(instr->representation())); LOperand* dividend = UseRegister(instr->left()); int32_t divisor = instr->right()->GetInteger32Constant(); LInstruction* result = DefineAsRegister(new(zone()) LDivByPowerOf2I( @@ -1394,9 +1404,9 @@ LInstruction* LChunkBuilder::DoDivByPowerOf2I(HDiv* instr) { LInstruction* LChunkBuilder::DoDivByConstI(HDiv* instr) { - ASSERT(instr->representation().IsInteger32()); - ASSERT(instr->left()->representation().Equals(instr->representation())); - ASSERT(instr->right()->representation().Equals(instr->representation())); + DCHECK(instr->representation().IsInteger32()); + DCHECK(instr->left()->representation().Equals(instr->representation())); + DCHECK(instr->right()->representation().Equals(instr->representation())); LOperand* dividend = UseRegister(instr->left()); int32_t divisor = instr->right()->GetInteger32Constant(); LOperand* temp = instr->CheckFlag(HInstruction::kAllUsesTruncatingToInt32) @@ -1413,9 +1423,9 @@ LInstruction* 
LChunkBuilder::DoDivByConstI(HDiv* instr) { LInstruction* LChunkBuilder::DoDivI(HBinaryOperation* instr) { - ASSERT(instr->representation().IsSmiOrInteger32()); - ASSERT(instr->left()->representation().Equals(instr->representation())); - ASSERT(instr->right()->representation().Equals(instr->representation())); + DCHECK(instr->representation().IsSmiOrInteger32()); + DCHECK(instr->left()->representation().Equals(instr->representation())); + DCHECK(instr->right()->representation().Equals(instr->representation())); LOperand* dividend = UseRegister(instr->left()); LOperand* divisor = UseRegister(instr->right()); LOperand* temp = instr->CheckFlag(HInstruction::kAllUsesTruncatingToInt32) @@ -1496,7 +1506,7 @@ LInstruction* LChunkBuilder::DoFunctionLiteral(HFunctionLiteral* instr) { LInstruction* LChunkBuilder::DoGetCachedArrayIndex( HGetCachedArrayIndex* instr) { - ASSERT(instr->value()->representation().IsTagged()); + DCHECK(instr->value()->representation().IsTagged()); LOperand* value = UseRegisterAtStart(instr->value()); return DefineAsRegister(new(zone()) LGetCachedArrayIndex(value)); } @@ -1509,7 +1519,7 @@ LInstruction* LChunkBuilder::DoGoto(HGoto* instr) { LInstruction* LChunkBuilder::DoHasCachedArrayIndexAndBranch( HHasCachedArrayIndexAndBranch* instr) { - ASSERT(instr->value()->representation().IsTagged()); + DCHECK(instr->value()->representation().IsTagged()); return new(zone()) LHasCachedArrayIndexAndBranch( UseRegisterAtStart(instr->value()), TempRegister()); } @@ -1517,7 +1527,7 @@ LInstruction* LChunkBuilder::DoHasCachedArrayIndexAndBranch( LInstruction* LChunkBuilder::DoHasInstanceTypeAndBranch( HHasInstanceTypeAndBranch* instr) { - ASSERT(instr->value()->representation().IsTagged()); + DCHECK(instr->value()->representation().IsTagged()); LOperand* value = UseRegisterAtStart(instr->value()); return new(zone()) LHasInstanceTypeAndBranch(value, TempRegister()); } @@ -1568,8 +1578,6 @@ LInstruction* LChunkBuilder::DoIsConstructCallAndBranch( LInstruction* LChunkBuilder::DoCompareMinusZeroAndBranch( HCompareMinusZeroAndBranch* instr) { - LInstruction* goto_instr = CheckElideControlInstruction(instr); - if (goto_instr != NULL) return goto_instr; LOperand* value = UseRegister(instr->value()); LOperand* scratch = TempRegister(); return new(zone()) LCompareMinusZeroAndBranch(value, scratch); @@ -1577,7 +1585,7 @@ LInstruction* LChunkBuilder::DoCompareMinusZeroAndBranch( LInstruction* LChunkBuilder::DoIsObjectAndBranch(HIsObjectAndBranch* instr) { - ASSERT(instr->value()->representation().IsTagged()); + DCHECK(instr->value()->representation().IsTagged()); LOperand* value = UseRegisterAtStart(instr->value()); LOperand* temp1 = TempRegister(); LOperand* temp2 = TempRegister(); @@ -1586,7 +1594,7 @@ LInstruction* LChunkBuilder::DoIsObjectAndBranch(HIsObjectAndBranch* instr) { LInstruction* LChunkBuilder::DoIsStringAndBranch(HIsStringAndBranch* instr) { - ASSERT(instr->value()->representation().IsTagged()); + DCHECK(instr->value()->representation().IsTagged()); LOperand* value = UseRegisterAtStart(instr->value()); LOperand* temp = TempRegister(); return new(zone()) LIsStringAndBranch(value, temp); @@ -1594,14 +1602,14 @@ LInstruction* LChunkBuilder::DoIsStringAndBranch(HIsStringAndBranch* instr) { LInstruction* LChunkBuilder::DoIsSmiAndBranch(HIsSmiAndBranch* instr) { - ASSERT(instr->value()->representation().IsTagged()); + DCHECK(instr->value()->representation().IsTagged()); return new(zone()) LIsSmiAndBranch(UseRegisterAtStart(instr->value())); } LInstruction* 
LChunkBuilder::DoIsUndetectableAndBranch( HIsUndetectableAndBranch* instr) { - ASSERT(instr->value()->representation().IsTagged()); + DCHECK(instr->value()->representation().IsTagged()); LOperand* value = UseRegisterAtStart(instr->value()); return new(zone()) LIsUndetectableAndBranch(value, TempRegister()); } @@ -1614,7 +1622,7 @@ LInstruction* LChunkBuilder::DoLeaveInlined(HLeaveInlined* instr) { if (env->entry()->arguments_pushed()) { int argument_count = env->arguments_environment()->parameter_count(); pop = new(zone()) LDrop(argument_count); - ASSERT(instr->argument_delta() == -argument_count); + DCHECK(instr->argument_delta() == -argument_count); } HEnvironment* outer = @@ -1655,15 +1663,21 @@ LInstruction* LChunkBuilder::DoLoadGlobalCell(HLoadGlobalCell* instr) { LInstruction* LChunkBuilder::DoLoadGlobalGeneric(HLoadGlobalGeneric* instr) { LOperand* context = UseFixed(instr->context(), cp); - LOperand* global_object = UseFixed(instr->global_object(), x0); + LOperand* global_object = UseFixed(instr->global_object(), + LoadIC::ReceiverRegister()); + LOperand* vector = NULL; + if (FLAG_vector_ics) { + vector = FixedTemp(LoadIC::VectorRegister()); + } + LLoadGlobalGeneric* result = - new(zone()) LLoadGlobalGeneric(context, global_object); + new(zone()) LLoadGlobalGeneric(context, global_object, vector); return MarkAsCall(DefineFixed(result, x0), instr); } LInstruction* LChunkBuilder::DoLoadKeyed(HLoadKeyed* instr) { - ASSERT(instr->key()->representation().IsSmiOrInteger32()); + DCHECK(instr->key()->representation().IsSmiOrInteger32()); ElementsKind elements_kind = instr->elements_kind(); LOperand* elements = UseRegister(instr->elements()); LOperand* key = UseRegisterOrConstant(instr->key()); @@ -1681,7 +1695,7 @@ LInstruction* LChunkBuilder::DoLoadKeyed(HLoadKeyed* instr) { ? AssignEnvironment(DefineAsRegister(result)) : DefineAsRegister(result); } else { - ASSERT(instr->representation().IsSmiOrTagged() || + DCHECK(instr->representation().IsSmiOrTagged() || instr->representation().IsInteger32()); LOperand* temp = instr->key()->IsConstant() ? 
NULL : TempRegister(); LLoadKeyedFixed* result = @@ -1691,7 +1705,7 @@ LInstruction* LChunkBuilder::DoLoadKeyed(HLoadKeyed* instr) { : DefineAsRegister(result); } } else { - ASSERT((instr->representation().IsInteger32() && + DCHECK((instr->representation().IsInteger32() && !IsDoubleOrFloatElementsKind(instr->elements_kind())) || (instr->representation().IsDouble() && IsDoubleOrFloatElementsKind(instr->elements_kind()))); @@ -1711,11 +1725,16 @@ LInstruction* LChunkBuilder::DoLoadKeyed(HLoadKeyed* instr) { LInstruction* LChunkBuilder::DoLoadKeyedGeneric(HLoadKeyedGeneric* instr) { LOperand* context = UseFixed(instr->context(), cp); - LOperand* object = UseFixed(instr->object(), x1); - LOperand* key = UseFixed(instr->key(), x0); + LOperand* object = UseFixed(instr->object(), LoadIC::ReceiverRegister()); + LOperand* key = UseFixed(instr->key(), LoadIC::NameRegister()); + LOperand* vector = NULL; + if (FLAG_vector_ics) { + vector = FixedTemp(LoadIC::VectorRegister()); + } LInstruction* result = - DefineFixed(new(zone()) LLoadKeyedGeneric(context, object, key), x0); + DefineFixed(new(zone()) LLoadKeyedGeneric(context, object, key, vector), + x0); return MarkAsCall(result, instr); } @@ -1728,9 +1747,14 @@ LInstruction* LChunkBuilder::DoLoadNamedField(HLoadNamedField* instr) { LInstruction* LChunkBuilder::DoLoadNamedGeneric(HLoadNamedGeneric* instr) { LOperand* context = UseFixed(instr->context(), cp); - LOperand* object = UseFixed(instr->object(), x0); + LOperand* object = UseFixed(instr->object(), LoadIC::ReceiverRegister()); + LOperand* vector = NULL; + if (FLAG_vector_ics) { + vector = FixedTemp(LoadIC::VectorRegister()); + } + LInstruction* result = - DefineFixed(new(zone()) LLoadNamedGeneric(context, object), x0); + DefineFixed(new(zone()) LLoadNamedGeneric(context, object, vector), x0); return MarkAsCall(result, instr); } @@ -1747,9 +1771,9 @@ LInstruction* LChunkBuilder::DoMapEnumLength(HMapEnumLength* instr) { LInstruction* LChunkBuilder::DoFlooringDivByPowerOf2I(HMathFloorOfDiv* instr) { - ASSERT(instr->representation().IsInteger32()); - ASSERT(instr->left()->representation().Equals(instr->representation())); - ASSERT(instr->right()->representation().Equals(instr->representation())); + DCHECK(instr->representation().IsInteger32()); + DCHECK(instr->left()->representation().Equals(instr->representation())); + DCHECK(instr->right()->representation().Equals(instr->representation())); LOperand* dividend = UseRegisterAtStart(instr->left()); int32_t divisor = instr->right()->GetInteger32Constant(); LInstruction* result = DefineAsRegister(new(zone()) LFlooringDivByPowerOf2I( @@ -1763,9 +1787,9 @@ LInstruction* LChunkBuilder::DoFlooringDivByPowerOf2I(HMathFloorOfDiv* instr) { LInstruction* LChunkBuilder::DoFlooringDivByConstI(HMathFloorOfDiv* instr) { - ASSERT(instr->representation().IsInteger32()); - ASSERT(instr->left()->representation().Equals(instr->representation())); - ASSERT(instr->right()->representation().Equals(instr->representation())); + DCHECK(instr->representation().IsInteger32()); + DCHECK(instr->left()->representation().Equals(instr->representation())); + DCHECK(instr->right()->representation().Equals(instr->representation())); LOperand* dividend = UseRegister(instr->left()); int32_t divisor = instr->right()->GetInteger32Constant(); LOperand* temp = @@ -1807,14 +1831,14 @@ LInstruction* LChunkBuilder::DoMathMinMax(HMathMinMax* instr) { LOperand* left = NULL; LOperand* right = NULL; if (instr->representation().IsSmiOrInteger32()) { - 
ASSERT(instr->left()->representation().Equals(instr->representation())); - ASSERT(instr->right()->representation().Equals(instr->representation())); + DCHECK(instr->left()->representation().Equals(instr->representation())); + DCHECK(instr->right()->representation().Equals(instr->representation())); left = UseRegisterAtStart(instr->BetterLeftOperand()); right = UseRegisterOrConstantAtStart(instr->BetterRightOperand()); } else { - ASSERT(instr->representation().IsDouble()); - ASSERT(instr->left()->representation().IsDouble()); - ASSERT(instr->right()->representation().IsDouble()); + DCHECK(instr->representation().IsDouble()); + DCHECK(instr->left()->representation().IsDouble()); + DCHECK(instr->right()->representation().IsDouble()); left = UseRegisterAtStart(instr->left()); right = UseRegisterAtStart(instr->right()); } @@ -1823,14 +1847,15 @@ LInstruction* LChunkBuilder::DoMathMinMax(HMathMinMax* instr) { LInstruction* LChunkBuilder::DoModByPowerOf2I(HMod* instr) { - ASSERT(instr->representation().IsInteger32()); - ASSERT(instr->left()->representation().Equals(instr->representation())); - ASSERT(instr->right()->representation().Equals(instr->representation())); + DCHECK(instr->representation().IsInteger32()); + DCHECK(instr->left()->representation().Equals(instr->representation())); + DCHECK(instr->right()->representation().Equals(instr->representation())); LOperand* dividend = UseRegisterAtStart(instr->left()); int32_t divisor = instr->right()->GetInteger32Constant(); LInstruction* result = DefineSameAsFirst(new(zone()) LModByPowerOf2I( dividend, divisor)); - if (instr->CheckFlag(HValue::kBailoutOnMinusZero)) { + if (instr->CheckFlag(HValue::kLeftCanBeNegative) && + instr->CheckFlag(HValue::kBailoutOnMinusZero)) { result = AssignEnvironment(result); } return result; @@ -1838,9 +1863,9 @@ LInstruction* LChunkBuilder::DoModByPowerOf2I(HMod* instr) { LInstruction* LChunkBuilder::DoModByConstI(HMod* instr) { - ASSERT(instr->representation().IsInteger32()); - ASSERT(instr->left()->representation().Equals(instr->representation())); - ASSERT(instr->right()->representation().Equals(instr->representation())); + DCHECK(instr->representation().IsInteger32()); + DCHECK(instr->left()->representation().Equals(instr->representation())); + DCHECK(instr->right()->representation().Equals(instr->representation())); LOperand* dividend = UseRegister(instr->left()); int32_t divisor = instr->right()->GetInteger32Constant(); LOperand* temp = TempRegister(); @@ -1854,9 +1879,9 @@ LInstruction* LChunkBuilder::DoModByConstI(HMod* instr) { LInstruction* LChunkBuilder::DoModI(HMod* instr) { - ASSERT(instr->representation().IsSmiOrInteger32()); - ASSERT(instr->left()->representation().Equals(instr->representation())); - ASSERT(instr->right()->representation().Equals(instr->representation())); + DCHECK(instr->representation().IsSmiOrInteger32()); + DCHECK(instr->left()->representation().Equals(instr->representation())); + DCHECK(instr->right()->representation().Equals(instr->representation())); LOperand* dividend = UseRegister(instr->left()); LOperand* divisor = UseRegister(instr->right()); LInstruction* result = DefineAsRegister(new(zone()) LModI(dividend, divisor)); @@ -1887,8 +1912,8 @@ LInstruction* LChunkBuilder::DoMod(HMod* instr) { LInstruction* LChunkBuilder::DoMul(HMul* instr) { if (instr->representation().IsSmiOrInteger32()) { - ASSERT(instr->left()->representation().Equals(instr->representation())); - ASSERT(instr->right()->representation().Equals(instr->representation())); + 
DCHECK(instr->left()->representation().Equals(instr->representation())); + DCHECK(instr->right()->representation().Equals(instr->representation())); bool can_overflow = instr->CheckFlag(HValue::kCanOverflow); bool bailout_on_minus_zero = instr->CheckFlag(HValue::kBailoutOnMinusZero); @@ -1946,7 +1971,7 @@ LInstruction* LChunkBuilder::DoMul(HMul* instr) { LInstruction* LChunkBuilder::DoOsrEntry(HOsrEntry* instr) { - ASSERT(argument_count_ == 0); + DCHECK(argument_count_ == 0); allocator_->MarkAsOsrEntry(); current_block_->last_environment()->set_ast_id(instr->ast_id()); return AssignEnvironment(new(zone()) LOsrEntry); @@ -1959,22 +1984,22 @@ LInstruction* LChunkBuilder::DoParameter(HParameter* instr) { int spill_index = chunk_->GetParameterStackSlot(instr->index()); return DefineAsSpilled(result, spill_index); } else { - ASSERT(info()->IsStub()); + DCHECK(info()->IsStub()); CodeStubInterfaceDescriptor* descriptor = info()->code_stub()->GetInterfaceDescriptor(); int index = static_cast<int>(instr->index()); - Register reg = descriptor->GetParameterRegister(index); + Register reg = descriptor->GetEnvironmentParameterRegister(index); return DefineFixed(result, reg); } } LInstruction* LChunkBuilder::DoPower(HPower* instr) { - ASSERT(instr->representation().IsDouble()); + DCHECK(instr->representation().IsDouble()); // We call a C function for double power. It can't trigger a GC. // We need to use fixed result register for the call. Representation exponent_type = instr->right()->representation(); - ASSERT(instr->left()->representation().IsDouble()); + DCHECK(instr->left()->representation().IsDouble()); LOperand* left = UseFixedDouble(instr->left(), d0); LOperand* right = exponent_type.IsInteger32() ? UseFixed(instr->right(), x12) @@ -1988,9 +2013,21 @@ LInstruction* LChunkBuilder::DoPower(HPower* instr) { } -LInstruction* LChunkBuilder::DoPushArgument(HPushArgument* instr) { - LOperand* argument = UseRegister(instr->argument()); - return new(zone()) LPushArgument(argument); +LInstruction* LChunkBuilder::DoPushArguments(HPushArguments* instr) { + int argc = instr->OperandCount(); + AddInstruction(new(zone()) LPreparePushArguments(argc), instr); + + LPushArguments* push_args = new(zone()) LPushArguments(zone()); + + for (int i = 0; i < argc; ++i) { + if (push_args->ShouldSplitPush()) { + AddInstruction(push_args, instr); + push_args = new(zone()) LPushArguments(zone()); + } + push_args->AddArgument(UseRegister(instr->argument(i))); + } + + return push_args; } @@ -2003,16 +2040,15 @@ LInstruction* LChunkBuilder::DoRegExpLiteral(HRegExpLiteral* instr) { LInstruction* LChunkBuilder::DoDoubleBits(HDoubleBits* instr) { HValue* value = instr->value(); - ASSERT(value->representation().IsDouble()); + DCHECK(value->representation().IsDouble()); return DefineAsRegister(new(zone()) LDoubleBits(UseRegister(value))); } LInstruction* LChunkBuilder::DoConstructDouble(HConstructDouble* instr) { - LOperand* lo = UseRegister(instr->lo()); + LOperand* lo = UseRegisterAndClobber(instr->lo()); LOperand* hi = UseRegister(instr->hi()); - LOperand* temp = TempRegister(); - return DefineAsRegister(new(zone()) LConstructDouble(hi, lo, temp)); + return DefineAsRegister(new(zone()) LConstructDouble(hi, lo)); } @@ -2058,8 +2094,8 @@ HBitwiseBinaryOperation* LChunkBuilder::CanTransformToShiftedOp(HValue* val, HBinaryOperation* hinstr = HBinaryOperation::cast(val); HValue* hleft = hinstr->left(); HValue* hright = hinstr->right(); - ASSERT(hleft->representation().Equals(hinstr->representation())); - 
ASSERT(hright->representation().Equals(hinstr->representation())); + DCHECK(hleft->representation().Equals(hinstr->representation())); + DCHECK(hright->representation().Equals(hinstr->representation())); if ((hright->IsConstant() && LikelyFitsImmField(hinstr, HConstant::cast(hright)->Integer32Value())) || @@ -2131,8 +2167,8 @@ LInstruction* LChunkBuilder::TryDoOpWithShiftedRightOperand( LInstruction* LChunkBuilder::DoShiftedBinaryOp( HBinaryOperation* hinstr, HValue* hleft, HBitwiseBinaryOperation* hshift) { - ASSERT(hshift->IsBitwiseBinaryShift()); - ASSERT(!hshift->IsShr() || (JSShiftAmountFromHConstant(hshift->right()) > 0)); + DCHECK(hshift->IsBitwiseBinaryShift()); + DCHECK(!hshift->IsShr() || (JSShiftAmountFromHConstant(hshift->right()) > 0)); LTemplateResultInstruction<1>* res; LOperand* left = UseRegisterAtStart(hleft); @@ -2151,7 +2187,7 @@ LInstruction* LChunkBuilder::DoShiftedBinaryOp( } else if (hinstr->IsAdd()) { res = new(zone()) LAddI(left, right, shift_op, shift_amount); } else { - ASSERT(hinstr->IsSub()); + DCHECK(hinstr->IsSub()); res = new(zone()) LSubI(left, right, shift_op, shift_amount); } if (hinstr->CheckFlag(HValue::kCanOverflow)) { @@ -2167,10 +2203,10 @@ LInstruction* LChunkBuilder::DoShift(Token::Value op, return DoArithmeticT(op, instr); } - ASSERT(instr->representation().IsInteger32() || + DCHECK(instr->representation().IsInteger32() || instr->representation().IsSmi()); - ASSERT(instr->left()->representation().Equals(instr->representation())); - ASSERT(instr->right()->representation().Equals(instr->representation())); + DCHECK(instr->left()->representation().Equals(instr->representation())); + DCHECK(instr->right()->representation().Equals(instr->representation())); if (ShiftCanBeOptimizedAway(instr)) { return NULL; @@ -2209,7 +2245,7 @@ LInstruction* LChunkBuilder::DoShift(Token::Value op, if (instr->representation().IsInteger32()) { result = DefineAsRegister(new(zone()) LShiftI(op, left, right, does_deopt)); } else { - ASSERT(instr->representation().IsSmi()); + DCHECK(instr->representation().IsSmi()); result = DefineAsRegister( new(zone()) LShiftS(op, left, right, temp, does_deopt)); } @@ -2249,7 +2285,7 @@ LInstruction* LChunkBuilder::DoStackCheck(HStackCheck* instr) { LOperand* context = UseFixed(instr->context(), cp); return MarkAsCall(new(zone()) LStackCheck(context), instr); } else { - ASSERT(instr->is_backwards_branch()); + DCHECK(instr->is_backwards_branch()); LOperand* context = UseAny(instr->context()); return AssignEnvironment( AssignPointerMap(new(zone()) LStackCheck(context))); @@ -2318,23 +2354,23 @@ LInstruction* LChunkBuilder::DoStoreKeyed(HStoreKeyed* instr) { } if (instr->is_typed_elements()) { - ASSERT((instr->value()->representation().IsInteger32() && + DCHECK((instr->value()->representation().IsInteger32() && !IsDoubleOrFloatElementsKind(instr->elements_kind())) || (instr->value()->representation().IsDouble() && IsDoubleOrFloatElementsKind(instr->elements_kind()))); - ASSERT((instr->is_fixed_typed_array() && + DCHECK((instr->is_fixed_typed_array() && instr->elements()->representation().IsTagged()) || (instr->is_external() && instr->elements()->representation().IsExternal())); return new(zone()) LStoreKeyedExternal(elements, key, val, temp); } else if (instr->value()->representation().IsDouble()) { - ASSERT(instr->elements()->representation().IsTagged()); + DCHECK(instr->elements()->representation().IsTagged()); return new(zone()) LStoreKeyedFixedDouble(elements, key, val, temp); } else { - 
ASSERT(instr->elements()->representation().IsTagged()); - ASSERT(instr->value()->representation().IsSmiOrTagged() || + DCHECK(instr->elements()->representation().IsTagged()); + DCHECK(instr->value()->representation().IsSmiOrTagged() || instr->value()->representation().IsInteger32()); return new(zone()) LStoreKeyedFixed(elements, key, val, temp); } @@ -2343,13 +2379,14 @@ LInstruction* LChunkBuilder::DoStoreKeyed(HStoreKeyed* instr) { LInstruction* LChunkBuilder::DoStoreKeyedGeneric(HStoreKeyedGeneric* instr) { LOperand* context = UseFixed(instr->context(), cp); - LOperand* object = UseFixed(instr->object(), x2); - LOperand* key = UseFixed(instr->key(), x1); - LOperand* value = UseFixed(instr->value(), x0); + LOperand* object = UseFixed(instr->object(), + KeyedStoreIC::ReceiverRegister()); + LOperand* key = UseFixed(instr->key(), KeyedStoreIC::NameRegister()); + LOperand* value = UseFixed(instr->value(), KeyedStoreIC::ValueRegister()); - ASSERT(instr->object()->representation().IsTagged()); - ASSERT(instr->key()->representation().IsTagged()); - ASSERT(instr->value()->representation().IsTagged()); + DCHECK(instr->object()->representation().IsTagged()); + DCHECK(instr->key()->representation().IsTagged()); + DCHECK(instr->value()->representation().IsTagged()); return MarkAsCall( new(zone()) LStoreKeyedGeneric(context, object, key, value), instr); @@ -2387,8 +2424,9 @@ LInstruction* LChunkBuilder::DoStoreNamedField(HStoreNamedField* instr) { LInstruction* LChunkBuilder::DoStoreNamedGeneric(HStoreNamedGeneric* instr) { LOperand* context = UseFixed(instr->context(), cp); - LOperand* object = UseFixed(instr->object(), x1); - LOperand* value = UseFixed(instr->value(), x0); + LOperand* object = UseFixed(instr->object(), StoreIC::ReceiverRegister()); + LOperand* value = UseFixed(instr->value(), StoreIC::ValueRegister()); + LInstruction* result = new(zone()) LStoreNamedGeneric(context, object, value); return MarkAsCall(result, instr); } @@ -2425,8 +2463,8 @@ LInstruction* LChunkBuilder::DoStringCharFromCode(HStringCharFromCode* instr) { LInstruction* LChunkBuilder::DoStringCompareAndBranch( HStringCompareAndBranch* instr) { - ASSERT(instr->left()->representation().IsTagged()); - ASSERT(instr->right()->representation().IsTagged()); + DCHECK(instr->left()->representation().IsTagged()); + DCHECK(instr->right()->representation().IsTagged()); LOperand* context = UseFixed(instr->context(), cp); LOperand* left = UseFixed(instr->left(), x1); LOperand* right = UseFixed(instr->right(), x0); @@ -2438,8 +2476,8 @@ LInstruction* LChunkBuilder::DoStringCompareAndBranch( LInstruction* LChunkBuilder::DoSub(HSub* instr) { if (instr->representation().IsSmiOrInteger32()) { - ASSERT(instr->left()->representation().Equals(instr->representation())); - ASSERT(instr->right()->representation().Equals(instr->representation())); + DCHECK(instr->left()->representation().Equals(instr->representation())); + DCHECK(instr->right()->representation().Equals(instr->representation())); LInstruction* shifted_operation = TryDoOpWithShiftedRightOperand(instr); if (shifted_operation != NULL) { @@ -2527,9 +2565,6 @@ LInstruction* LChunkBuilder::DoTypeof(HTypeof* instr) { LInstruction* LChunkBuilder::DoTypeofIsAndBranch(HTypeofIsAndBranch* instr) { - LInstruction* goto_instr = CheckElideControlInstruction(instr); - if (goto_instr != NULL) return goto_instr; - // We only need temp registers in some cases, but we can't dereference the // instr->type_literal() handle to test that here. 
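// ---------------------------------------------------------------------------
// A sketch (not part of this patch) of what the new kMathFround case in the
// DoUnaryMathOperation() hunk below computes: Math.fround rounds a double to
// the nearest float32 value, i.e. a round-trip through float.
static double SketchFround(double value) {
  return static_cast<double>(static_cast<float>(value));
}
// ---------------------------------------------------------------------------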
LOperand* temp1 = TempRegister(); @@ -2563,8 +2598,8 @@ LInstruction* LChunkBuilder::DoUnaryMathOperation(HUnaryMathOperation* instr) { } } case kMathExp: { - ASSERT(instr->representation().IsDouble()); - ASSERT(instr->value()->representation().IsDouble()); + DCHECK(instr->representation().IsDouble()); + DCHECK(instr->value()->representation().IsDouble()); LOperand* input = UseRegister(instr->value()); LOperand* double_temp1 = TempDoubleRegister(); LOperand* temp1 = TempRegister(); @@ -2575,52 +2610,58 @@ LInstruction* LChunkBuilder::DoUnaryMathOperation(HUnaryMathOperation* instr) { return DefineAsRegister(result); } case kMathFloor: { - ASSERT(instr->value()->representation().IsDouble()); + DCHECK(instr->value()->representation().IsDouble()); LOperand* input = UseRegisterAtStart(instr->value()); if (instr->representation().IsInteger32()) { LMathFloorI* result = new(zone()) LMathFloorI(input); return AssignEnvironment(AssignPointerMap(DefineAsRegister(result))); } else { - ASSERT(instr->representation().IsDouble()); + DCHECK(instr->representation().IsDouble()); LMathFloorD* result = new(zone()) LMathFloorD(input); return DefineAsRegister(result); } } case kMathLog: { - ASSERT(instr->representation().IsDouble()); - ASSERT(instr->value()->representation().IsDouble()); + DCHECK(instr->representation().IsDouble()); + DCHECK(instr->value()->representation().IsDouble()); LOperand* input = UseFixedDouble(instr->value(), d0); LMathLog* result = new(zone()) LMathLog(input); return MarkAsCall(DefineFixedDouble(result, d0), instr); } case kMathPowHalf: { - ASSERT(instr->representation().IsDouble()); - ASSERT(instr->value()->representation().IsDouble()); + DCHECK(instr->representation().IsDouble()); + DCHECK(instr->value()->representation().IsDouble()); LOperand* input = UseRegister(instr->value()); return DefineAsRegister(new(zone()) LMathPowHalf(input)); } case kMathRound: { - ASSERT(instr->value()->representation().IsDouble()); + DCHECK(instr->value()->representation().IsDouble()); LOperand* input = UseRegister(instr->value()); if (instr->representation().IsInteger32()) { LOperand* temp = TempDoubleRegister(); LMathRoundI* result = new(zone()) LMathRoundI(input, temp); return AssignEnvironment(DefineAsRegister(result)); } else { - ASSERT(instr->representation().IsDouble()); + DCHECK(instr->representation().IsDouble()); LMathRoundD* result = new(zone()) LMathRoundD(input); return DefineAsRegister(result); } } + case kMathFround: { + DCHECK(instr->value()->representation().IsDouble()); + LOperand* input = UseRegister(instr->value()); + LMathFround* result = new (zone()) LMathFround(input); + return DefineAsRegister(result); + } case kMathSqrt: { - ASSERT(instr->representation().IsDouble()); - ASSERT(instr->value()->representation().IsDouble()); + DCHECK(instr->representation().IsDouble()); + DCHECK(instr->value()->representation().IsDouble()); LOperand* input = UseRegisterAtStart(instr->value()); return DefineAsRegister(new(zone()) LMathSqrt(input)); } case kMathClz32: { - ASSERT(instr->representation().IsInteger32()); - ASSERT(instr->value()->representation().IsInteger32()); + DCHECK(instr->representation().IsInteger32()); + DCHECK(instr->value()->representation().IsInteger32()); LOperand* input = UseRegisterAtStart(instr->value()); return DefineAsRegister(new(zone()) LMathClz32(input)); } @@ -2695,4 +2736,20 @@ LInstruction* LChunkBuilder::DoWrapReceiver(HWrapReceiver* instr) { } +LInstruction* LChunkBuilder::DoStoreFrameContext(HStoreFrameContext* instr) { + LOperand* context = 
UseRegisterAtStart(instr->context()); + return new(zone()) LStoreFrameContext(context); +} + + +LInstruction* LChunkBuilder::DoAllocateBlockContext( + HAllocateBlockContext* instr) { + LOperand* context = UseFixed(instr->context(), cp); + LOperand* function = UseRegisterAtStart(instr->function()); + LAllocateBlockContext* result = + new(zone()) LAllocateBlockContext(context, function); + return MarkAsCall(DefineFixed(result, cp), instr); +} + + } } // namespace v8::internal diff --git a/deps/v8/src/arm64/lithium-arm64.h b/deps/v8/src/arm64/lithium-arm64.h index 3abc388fe1d..21a5f741417 100644 --- a/deps/v8/src/arm64/lithium-arm64.h +++ b/deps/v8/src/arm64/lithium-arm64.h @@ -5,11 +5,11 @@ #ifndef V8_ARM64_LITHIUM_ARM64_H_ #define V8_ARM64_LITHIUM_ARM64_H_ -#include "hydrogen.h" -#include "lithium-allocator.h" -#include "lithium.h" -#include "safepoint-table.h" -#include "utils.h" +#include "src/hydrogen.h" +#include "src/lithium.h" +#include "src/lithium-allocator.h" +#include "src/safepoint-table.h" +#include "src/utils.h" namespace v8 { namespace internal { @@ -17,159 +17,163 @@ namespace internal { // Forward declarations. class LCodeGen; -#define LITHIUM_CONCRETE_INSTRUCTION_LIST(V) \ - V(AccessArgumentsAt) \ - V(AddE) \ - V(AddI) \ - V(AddS) \ - V(Allocate) \ - V(ApplyArguments) \ - V(ArgumentsElements) \ - V(ArgumentsLength) \ - V(ArithmeticD) \ - V(ArithmeticT) \ - V(BitI) \ - V(BitS) \ - V(BoundsCheck) \ - V(Branch) \ - V(CallFunction) \ - V(CallJSFunction) \ - V(CallNew) \ - V(CallNewArray) \ - V(CallRuntime) \ - V(CallStub) \ - V(CallWithDescriptor) \ - V(CheckInstanceType) \ - V(CheckMapValue) \ - V(CheckMaps) \ - V(CheckNonSmi) \ - V(CheckSmi) \ - V(CheckValue) \ - V(ClampDToUint8) \ - V(ClampIToUint8) \ - V(ClampTToUint8) \ - V(ClassOfTestAndBranch) \ - V(CmpHoleAndBranchD) \ - V(CmpHoleAndBranchT) \ - V(CmpMapAndBranch) \ - V(CmpObjectEqAndBranch) \ - V(CmpT) \ - V(CompareMinusZeroAndBranch) \ - V(CompareNumericAndBranch) \ - V(ConstantD) \ - V(ConstantE) \ - V(ConstantI) \ - V(ConstantS) \ - V(ConstantT) \ - V(ConstructDouble) \ - V(Context) \ - V(DateField) \ - V(DebugBreak) \ - V(DeclareGlobals) \ - V(Deoptimize) \ - V(DivByConstI) \ - V(DivByPowerOf2I) \ - V(DivI) \ - V(DoubleBits) \ - V(DoubleToIntOrSmi) \ - V(Drop) \ - V(Dummy) \ - V(DummyUse) \ - V(FlooringDivByConstI) \ - V(FlooringDivByPowerOf2I) \ - V(FlooringDivI) \ - V(ForInCacheArray) \ - V(ForInPrepareMap) \ - V(FunctionLiteral) \ - V(GetCachedArrayIndex) \ - V(Goto) \ - V(HasCachedArrayIndexAndBranch) \ - V(HasInstanceTypeAndBranch) \ - V(InnerAllocatedObject) \ - V(InstanceOf) \ - V(InstanceOfKnownGlobal) \ - V(InstructionGap) \ - V(Integer32ToDouble) \ - V(InvokeFunction) \ - V(IsConstructCallAndBranch) \ - V(IsObjectAndBranch) \ - V(IsSmiAndBranch) \ - V(IsStringAndBranch) \ - V(IsUndetectableAndBranch) \ - V(Label) \ - V(LazyBailout) \ - V(LoadContextSlot) \ - V(LoadFieldByIndex) \ - V(LoadFunctionPrototype) \ - V(LoadGlobalCell) \ - V(LoadGlobalGeneric) \ - V(LoadKeyedExternal) \ - V(LoadKeyedFixed) \ - V(LoadKeyedFixedDouble) \ - V(LoadKeyedGeneric) \ - V(LoadNamedField) \ - V(LoadNamedGeneric) \ - V(LoadRoot) \ - V(MapEnumLength) \ - V(MathAbs) \ - V(MathAbsTagged) \ - V(MathClz32) \ - V(MathExp) \ - V(MathFloorD) \ - V(MathFloorI) \ - V(MathLog) \ - V(MathMinMax) \ - V(MathPowHalf) \ - V(MathRoundD) \ - V(MathRoundI) \ - V(MathSqrt) \ - V(ModByConstI) \ - V(ModByPowerOf2I) \ - V(ModI) \ - V(MulConstIS) \ - V(MulI) \ - V(MulS) \ - V(NumberTagD) \ - V(NumberTagU) \ - V(NumberUntagD) \ - V(OsrEntry) \ - 
V(Parameter) \ - V(Power) \ - V(PushArgument) \ - V(RegExpLiteral) \ - V(Return) \ - V(SeqStringGetChar) \ - V(SeqStringSetChar) \ - V(ShiftI) \ - V(ShiftS) \ - V(SmiTag) \ - V(SmiUntag) \ - V(StackCheck) \ - V(StoreCodeEntry) \ - V(StoreContextSlot) \ - V(StoreGlobalCell) \ - V(StoreKeyedExternal) \ - V(StoreKeyedFixed) \ - V(StoreKeyedFixedDouble) \ - V(StoreKeyedGeneric) \ - V(StoreNamedField) \ - V(StoreNamedGeneric) \ - V(StringAdd) \ - V(StringCharCodeAt) \ - V(StringCharFromCode) \ - V(StringCompareAndBranch) \ - V(SubI) \ - V(SubS) \ - V(TaggedToI) \ - V(ThisFunction) \ - V(ToFastProperties) \ - V(TransitionElementsKind) \ - V(TrapAllocationMemento) \ - V(TruncateDoubleToIntOrSmi) \ - V(Typeof) \ - V(TypeofIsAndBranch) \ - V(Uint32ToDouble) \ - V(UnknownOSRValue) \ +#define LITHIUM_CONCRETE_INSTRUCTION_LIST(V) \ + V(AccessArgumentsAt) \ + V(AddE) \ + V(AddI) \ + V(AddS) \ + V(Allocate) \ + V(AllocateBlockContext) \ + V(ApplyArguments) \ + V(ArgumentsElements) \ + V(ArgumentsLength) \ + V(ArithmeticD) \ + V(ArithmeticT) \ + V(BitI) \ + V(BitS) \ + V(BoundsCheck) \ + V(Branch) \ + V(CallFunction) \ + V(CallJSFunction) \ + V(CallNew) \ + V(CallNewArray) \ + V(CallRuntime) \ + V(CallStub) \ + V(CallWithDescriptor) \ + V(CheckInstanceType) \ + V(CheckMapValue) \ + V(CheckMaps) \ + V(CheckNonSmi) \ + V(CheckSmi) \ + V(CheckValue) \ + V(ClampDToUint8) \ + V(ClampIToUint8) \ + V(ClampTToUint8) \ + V(ClassOfTestAndBranch) \ + V(CmpHoleAndBranchD) \ + V(CmpHoleAndBranchT) \ + V(CmpMapAndBranch) \ + V(CmpObjectEqAndBranch) \ + V(CmpT) \ + V(CompareMinusZeroAndBranch) \ + V(CompareNumericAndBranch) \ + V(ConstantD) \ + V(ConstantE) \ + V(ConstantI) \ + V(ConstantS) \ + V(ConstantT) \ + V(ConstructDouble) \ + V(Context) \ + V(DateField) \ + V(DebugBreak) \ + V(DeclareGlobals) \ + V(Deoptimize) \ + V(DivByConstI) \ + V(DivByPowerOf2I) \ + V(DivI) \ + V(DoubleBits) \ + V(DoubleToIntOrSmi) \ + V(Drop) \ + V(Dummy) \ + V(DummyUse) \ + V(FlooringDivByConstI) \ + V(FlooringDivByPowerOf2I) \ + V(FlooringDivI) \ + V(ForInCacheArray) \ + V(ForInPrepareMap) \ + V(FunctionLiteral) \ + V(GetCachedArrayIndex) \ + V(Goto) \ + V(HasCachedArrayIndexAndBranch) \ + V(HasInstanceTypeAndBranch) \ + V(InnerAllocatedObject) \ + V(InstanceOf) \ + V(InstanceOfKnownGlobal) \ + V(InstructionGap) \ + V(Integer32ToDouble) \ + V(InvokeFunction) \ + V(IsConstructCallAndBranch) \ + V(IsObjectAndBranch) \ + V(IsSmiAndBranch) \ + V(IsStringAndBranch) \ + V(IsUndetectableAndBranch) \ + V(Label) \ + V(LazyBailout) \ + V(LoadContextSlot) \ + V(LoadFieldByIndex) \ + V(LoadFunctionPrototype) \ + V(LoadGlobalCell) \ + V(LoadGlobalGeneric) \ + V(LoadKeyedExternal) \ + V(LoadKeyedFixed) \ + V(LoadKeyedFixedDouble) \ + V(LoadKeyedGeneric) \ + V(LoadNamedField) \ + V(LoadNamedGeneric) \ + V(LoadRoot) \ + V(MapEnumLength) \ + V(MathAbs) \ + V(MathAbsTagged) \ + V(MathClz32) \ + V(MathExp) \ + V(MathFloorD) \ + V(MathFloorI) \ + V(MathFround) \ + V(MathLog) \ + V(MathMinMax) \ + V(MathPowHalf) \ + V(MathRoundD) \ + V(MathRoundI) \ + V(MathSqrt) \ + V(ModByConstI) \ + V(ModByPowerOf2I) \ + V(ModI) \ + V(MulConstIS) \ + V(MulI) \ + V(MulS) \ + V(NumberTagD) \ + V(NumberTagU) \ + V(NumberUntagD) \ + V(OsrEntry) \ + V(Parameter) \ + V(Power) \ + V(PreparePushArguments) \ + V(PushArguments) \ + V(RegExpLiteral) \ + V(Return) \ + V(SeqStringGetChar) \ + V(SeqStringSetChar) \ + V(ShiftI) \ + V(ShiftS) \ + V(SmiTag) \ + V(SmiUntag) \ + V(StackCheck) \ + V(StoreCodeEntry) \ + V(StoreContextSlot) \ + V(StoreFrameContext) \ + V(StoreGlobalCell) \ + 
V(StoreKeyedExternal) \ + V(StoreKeyedFixed) \ + V(StoreKeyedFixedDouble) \ + V(StoreKeyedGeneric) \ + V(StoreNamedField) \ + V(StoreNamedGeneric) \ + V(StringAdd) \ + V(StringCharCodeAt) \ + V(StringCharFromCode) \ + V(StringCompareAndBranch) \ + V(SubI) \ + V(SubS) \ + V(TaggedToI) \ + V(ThisFunction) \ + V(ToFastProperties) \ + V(TransitionElementsKind) \ + V(TrapAllocationMemento) \ + V(TruncateDoubleToIntOrSmi) \ + V(Typeof) \ + V(TypeofIsAndBranch) \ + V(Uint32ToDouble) \ + V(UnknownOSRValue) \ V(WrapReceiver) @@ -182,7 +186,7 @@ class LCodeGen; return mnemonic; \ } \ static L##type* cast(LInstruction* instr) { \ - ASSERT(instr->Is##type()); \ + DCHECK(instr->Is##type()); \ return reinterpret_cast<L##type*>(instr); \ } @@ -230,6 +234,9 @@ class LInstruction : public ZoneObject { virtual bool IsControl() const { return false; } + // Try deleting this instruction if possible. + virtual bool TryDelete() { return false; } + void set_environment(LEnvironment* env) { environment_ = env; } LEnvironment* environment() const { return environment_; } bool HasEnvironment() const { return environment_ != NULL; } @@ -384,7 +391,7 @@ class LGap : public LTemplateInstruction<0, 0, 0> { virtual bool IsGap() const V8_OVERRIDE { return true; } virtual void PrintDataTo(StringStream* stream) V8_OVERRIDE; static LGap* cast(LInstruction* instr) { - ASSERT(instr->IsGap()); + DCHECK(instr->IsGap()); return reinterpret_cast<LGap*>(instr); } @@ -445,7 +452,7 @@ class LDrop V8_FINAL : public LTemplateInstruction<0, 0, 0> { class LDummy V8_FINAL : public LTemplateInstruction<1, 0, 0> { public: - explicit LDummy() { } + LDummy() {} DECLARE_CONCRETE_INSTRUCTION(Dummy, "dummy") }; @@ -936,7 +943,7 @@ class LCheckInstanceType V8_FINAL : public LTemplateInstruction<0, 1, 1> { class LCheckMaps V8_FINAL : public LTemplateInstruction<0, 1, 1> { public: - LCheckMaps(LOperand* value = NULL, LOperand* temp = NULL) { + explicit LCheckMaps(LOperand* value = NULL, LOperand* temp = NULL) { inputs_[0] = value; temps_[0] = temp; } @@ -1040,17 +1047,15 @@ class LDoubleBits V8_FINAL : public LTemplateInstruction<1, 1, 0> { }; -class LConstructDouble V8_FINAL : public LTemplateInstruction<1, 2, 1> { +class LConstructDouble V8_FINAL : public LTemplateInstruction<1, 2, 0> { public: - LConstructDouble(LOperand* hi, LOperand* lo, LOperand* temp) { + LConstructDouble(LOperand* hi, LOperand* lo) { inputs_[0] = hi; inputs_[1] = lo; - temps_[0] = temp; } LOperand* hi() { return inputs_[0]; } LOperand* lo() { return inputs_[1]; } - LOperand* temp() { return temps_[0]; } DECLARE_CONCRETE_INSTRUCTION(ConstructDouble, "construct-double") }; @@ -1288,6 +1293,7 @@ class LDeclareGlobals V8_FINAL : public LTemplateInstruction<0, 1, 0> { class LDeoptimize V8_FINAL : public LTemplateInstruction<0, 0, 0> { public: + virtual bool IsControl() const V8_OVERRIDE { return true; } DECLARE_CONCRETE_INSTRUCTION(Deoptimize, "deoptimize") DECLARE_HYDROGEN_ACCESSOR(Deoptimize) }; @@ -1517,18 +1523,18 @@ class LInteger32ToDouble V8_FINAL : public LTemplateInstruction<1, 1, 0> { class LCallWithDescriptor V8_FINAL : public LTemplateResultInstruction<1> { public: - LCallWithDescriptor(const CallInterfaceDescriptor* descriptor, - ZoneList<LOperand*>& operands, + LCallWithDescriptor(const InterfaceDescriptor* descriptor, + const ZoneList<LOperand*>& operands, Zone* zone) : descriptor_(descriptor), - inputs_(descriptor->environment_length() + 1, zone) { - ASSERT(descriptor->environment_length() + 1 == operands.length()); + 
inputs_(descriptor->GetRegisterParameterCount() + 1, zone) { + DCHECK(descriptor->GetRegisterParameterCount() + 1 == operands.length()); inputs_.AddAll(operands, zone); } LOperand* target() const { return inputs_[0]; } - const CallInterfaceDescriptor* descriptor() { return descriptor_; } + const InterfaceDescriptor* descriptor() { return descriptor_; } private: DECLARE_CONCRETE_INSTRUCTION(CallWithDescriptor, "call-with-descriptor") @@ -1538,7 +1544,7 @@ class LCallWithDescriptor V8_FINAL : public LTemplateResultInstruction<1> { int arity() const { return hydrogen()->argument_count() - 1; } - const CallInterfaceDescriptor* descriptor_; + const InterfaceDescriptor* descriptor_; ZoneList<LOperand*> inputs_; // Iterator support. @@ -1718,15 +1724,18 @@ class LLoadGlobalCell V8_FINAL : public LTemplateInstruction<1, 0, 0> { }; -class LLoadGlobalGeneric V8_FINAL : public LTemplateInstruction<1, 2, 0> { +class LLoadGlobalGeneric V8_FINAL : public LTemplateInstruction<1, 2, 1> { public: - LLoadGlobalGeneric(LOperand* context, LOperand* global_object) { + LLoadGlobalGeneric(LOperand* context, LOperand* global_object, + LOperand* vector) { inputs_[0] = context; inputs_[1] = global_object; + temps_[0] = vector; } LOperand* context() { return inputs_[0]; } LOperand* global_object() { return inputs_[1]; } + LOperand* temp_vector() { return temps_[0]; } DECLARE_CONCRETE_INSTRUCTION(LoadGlobalGeneric, "load-global-generic") DECLARE_HYDROGEN_ACCESSOR(LoadGlobalGeneric) @@ -1758,15 +1767,15 @@ class LLoadKeyed : public LTemplateInstruction<1, 2, T> { bool is_typed_elements() const { return is_external() || is_fixed_typed_array(); } - uint32_t additional_index() const { - return this->hydrogen()->index_offset(); + uint32_t base_offset() const { + return this->hydrogen()->base_offset(); } void PrintDataTo(StringStream* stream) V8_OVERRIDE { this->elements()->PrintTo(stream); stream->Add("["); this->key()->PrintTo(stream); - if (this->hydrogen()->IsDehoisted()) { - stream->Add(" + %d]", this->additional_index()); + if (this->base_offset() != 0) { + stream->Add(" + %d]", this->base_offset()); } else { stream->Add("]"); } @@ -1815,31 +1824,37 @@ class LLoadKeyedFixedDouble: public LLoadKeyed<1> { }; -class LLoadKeyedGeneric V8_FINAL : public LTemplateInstruction<1, 3, 0> { +class LLoadKeyedGeneric V8_FINAL : public LTemplateInstruction<1, 3, 1> { public: - LLoadKeyedGeneric(LOperand* context, LOperand* object, LOperand* key) { + LLoadKeyedGeneric(LOperand* context, LOperand* object, LOperand* key, + LOperand* vector) { inputs_[0] = context; inputs_[1] = object; inputs_[2] = key; + temps_[0] = vector; } LOperand* context() { return inputs_[0]; } LOperand* object() { return inputs_[1]; } LOperand* key() { return inputs_[2]; } + LOperand* temp_vector() { return temps_[0]; } DECLARE_CONCRETE_INSTRUCTION(LoadKeyedGeneric, "load-keyed-generic") + DECLARE_HYDROGEN_ACCESSOR(LoadKeyedGeneric) }; -class LLoadNamedGeneric V8_FINAL : public LTemplateInstruction<1, 2, 0> { +class LLoadNamedGeneric V8_FINAL : public LTemplateInstruction<1, 2, 1> { public: - LLoadNamedGeneric(LOperand* context, LOperand* object) { + LLoadNamedGeneric(LOperand* context, LOperand* object, LOperand* vector) { inputs_[0] = context; inputs_[1] = object; + temps_[0] = vector; } LOperand* context() { return inputs_[0]; } LOperand* object() { return inputs_[1]; } + LOperand* temp_vector() { return temps_[0]; } DECLARE_CONCRETE_INSTRUCTION(LoadNamedGeneric, "load-named-generic") DECLARE_HYDROGEN_ACCESSOR(LoadNamedGeneric) @@ -2072,6 +2087,14 @@ class 
LMathRoundI V8_FINAL : public LUnaryMathOperation<1> { }; +class LMathFround V8_FINAL : public LUnaryMathOperation<0> { + public: + explicit LMathFround(LOperand* value) : LUnaryMathOperation<0>(value) {} + + DECLARE_CONCRETE_INSTRUCTION(MathFround, "math-fround") +}; + + class LMathSqrt V8_FINAL : public LUnaryMathOperation<0> { public: explicit LMathSqrt(LOperand* value) : LUnaryMathOperation<0>(value) { } @@ -2250,15 +2273,50 @@ class LPower V8_FINAL : public LTemplateInstruction<1, 2, 0> { }; -class LPushArgument V8_FINAL : public LTemplateInstruction<0, 1, 0> { +class LPreparePushArguments V8_FINAL : public LTemplateInstruction<0, 0, 0> { public: - explicit LPushArgument(LOperand* value) { - inputs_[0] = value; + explicit LPreparePushArguments(int argc) : argc_(argc) {} + + inline int argc() const { return argc_; } + + DECLARE_CONCRETE_INSTRUCTION(PreparePushArguments, "prepare-push-arguments") + + protected: + int argc_; +}; + + +class LPushArguments V8_FINAL : public LTemplateResultInstruction<0> { + public: + explicit LPushArguments(Zone* zone, + int capacity = kRecommendedMaxPushedArgs) + : zone_(zone), inputs_(capacity, zone) {} + + LOperand* argument(int i) { return inputs_[i]; } + int ArgumentCount() const { return inputs_.length(); } + + void AddArgument(LOperand* arg) { inputs_.Add(arg, zone_); } + + DECLARE_CONCRETE_INSTRUCTION(PushArguments, "push-arguments") + + // It is better to limit the number of arguments pushed simultaneously to + // avoid pressure on the register allocator. + static const int kRecommendedMaxPushedArgs = 4; + bool ShouldSplitPush() const { + return inputs_.length() >= kRecommendedMaxPushedArgs; } - LOperand* value() { return inputs_[0]; } + protected: + Zone* zone_; + ZoneList<LOperand*> inputs_; - DECLARE_CONCRETE_INSTRUCTION(PushArgument, "push-argument") + private: + // Iterator support. 
+ virtual int InputCount() V8_FINAL V8_OVERRIDE { return inputs_.length(); } + virtual LOperand* InputAt(int i) V8_FINAL V8_OVERRIDE { return inputs_[i]; } + + virtual int TempCount() V8_FINAL V8_OVERRIDE { return 0; } + virtual LOperand* TempAt(int i) V8_FINAL V8_OVERRIDE { return NULL; } }; @@ -2290,7 +2348,7 @@ class LReturn V8_FINAL : public LTemplateInstruction<0, 3, 0> { return parameter_count()->IsConstantOperand(); } LConstantOperand* constant_parameter_count() { - ASSERT(has_constant_parameter_count()); + DCHECK(has_constant_parameter_count()); return LConstantOperand::cast(parameter_count()); } @@ -2420,20 +2478,20 @@ class LStoreKeyed : public LTemplateInstruction<0, 3, T> { } return this->hydrogen()->NeedsCanonicalization(); } - uint32_t additional_index() const { return this->hydrogen()->index_offset(); } + uint32_t base_offset() const { return this->hydrogen()->base_offset(); } void PrintDataTo(StringStream* stream) V8_OVERRIDE { this->elements()->PrintTo(stream); stream->Add("["); this->key()->PrintTo(stream); - if (this->hydrogen()->IsDehoisted()) { - stream->Add(" + %d] <-", this->additional_index()); + if (this->base_offset() != 0) { + stream->Add(" + %d] <-", this->base_offset()); } else { stream->Add("] <- "); } if (this->value() == NULL) { - ASSERT(hydrogen()->IsConstantHoleStore() && + DCHECK(hydrogen()->IsConstantHoleStore() && hydrogen()->value()->representation().IsDouble()); stream->Add("<the hole(nan)>"); } else { @@ -2451,7 +2509,7 @@ class LStoreKeyedExternal V8_FINAL : public LStoreKeyed<1> { LOperand* temp) : LStoreKeyed<1>(elements, key, value) { temps_[0] = temp; - }; + } LOperand* temp() { return temps_[0]; } @@ -2465,7 +2523,7 @@ class LStoreKeyedFixed V8_FINAL : public LStoreKeyed<1> { LOperand* temp) : LStoreKeyed<1>(elements, key, value) { temps_[0] = temp; - }; + } LOperand* temp() { return temps_[0]; } @@ -2479,7 +2537,7 @@ class LStoreKeyedFixedDouble V8_FINAL : public LStoreKeyed<1> { LOperand* temp) : LStoreKeyed<1>(elements, key, value) { temps_[0] = temp; - }; + } LOperand* temp() { return temps_[0]; } @@ -2962,6 +3020,35 @@ class LLoadFieldByIndex V8_FINAL : public LTemplateInstruction<1, 2, 0> { }; +class LStoreFrameContext: public LTemplateInstruction<0, 1, 0> { + public: + explicit LStoreFrameContext(LOperand* context) { + inputs_[0] = context; + } + + LOperand* context() { return inputs_[0]; } + + DECLARE_CONCRETE_INSTRUCTION(StoreFrameContext, "store-frame-context") +}; + + +class LAllocateBlockContext: public LTemplateInstruction<1, 2, 0> { + public: + LAllocateBlockContext(LOperand* context, LOperand* function) { + inputs_[0] = context; + inputs_[1] = function; + } + + LOperand* context() { return inputs_[0]; } + LOperand* function() { return inputs_[1]; } + + Handle<ScopeInfo> scope_info() { return hydrogen()->scope_info(); } + + DECLARE_CONCRETE_INSTRUCTION(AllocateBlockContext, "allocate-block-context") + DECLARE_HYDROGEN_ACCESSOR(AllocateBlockContext) +}; + + class LWrapReceiver V8_FINAL : public LTemplateInstruction<1, 2, 0> { public: LWrapReceiver(LOperand* receiver, LOperand* function) { @@ -3003,8 +3090,6 @@ class LChunkBuilder V8_FINAL : public LChunkBuilderBase { // Build the sequence for the graph. LPlatformChunk* Build(); - LInstruction* CheckElideControlInstruction(HControlInstruction* instr); - // Declare methods that deal with the individual node types. 
#define DECLARE_DO(type) LInstruction* Do##type(H##type* node); HYDROGEN_CONCRETE_INSTRUCTION_LIST(DECLARE_DO) @@ -3092,6 +3177,8 @@ class LChunkBuilder V8_FINAL : public LChunkBuilderBase { // Temporary operand that must be in a double register. MUST_USE_RESULT LUnallocated* TempDoubleRegister(); + MUST_USE_RESULT LOperand* FixedTemp(Register reg); + // Temporary operand that must be in a fixed double register. MUST_USE_RESULT LOperand* FixedTemp(DoubleRegister reg); @@ -3123,6 +3210,7 @@ class LChunkBuilder V8_FINAL : public LChunkBuilderBase { LInstruction* AssignEnvironment(LInstruction* instr); void VisitInstruction(HInstruction* current); + void AddInstruction(LInstruction* instr, HInstruction* current); void DoBasicBlock(HBasicBlock* block); int JSShiftAmountFromHConstant(HValue* constant) { @@ -3132,7 +3220,7 @@ class LChunkBuilder V8_FINAL : public LChunkBuilderBase { if (instr->IsAdd() || instr->IsSub()) { return Assembler::IsImmAddSub(imm) || Assembler::IsImmAddSub(-imm); } else { - ASSERT(instr->IsBitwise()); + DCHECK(instr->IsBitwise()); unsigned unused_n, unused_imm_s, unused_imm_r; return Assembler::IsImmLogical(imm, kWRegSizeInBits, &unused_n, &unused_imm_s, &unused_imm_r); diff --git a/deps/v8/src/arm64/lithium-codegen-arm64.cc b/deps/v8/src/arm64/lithium-codegen-arm64.cc index 610502a7fd2..53a1cfac42a 100644 --- a/deps/v8/src/arm64/lithium-codegen-arm64.cc +++ b/deps/v8/src/arm64/lithium-codegen-arm64.cc @@ -2,13 +2,13 @@ // Use of this source code is governed by a BSD-style license that can be // found in the LICENSE file. -#include "v8.h" +#include "src/v8.h" -#include "arm64/lithium-codegen-arm64.h" -#include "arm64/lithium-gap-resolver-arm64.h" -#include "code-stubs.h" -#include "stub-cache.h" -#include "hydrogen-osr.h" +#include "src/arm64/lithium-codegen-arm64.h" +#include "src/arm64/lithium-gap-resolver-arm64.h" +#include "src/code-stubs.h" +#include "src/hydrogen-osr.h" +#include "src/stub-cache.h" namespace v8 { namespace internal { @@ -56,7 +56,7 @@ class BranchOnCondition : public BranchGenerator { virtual void EmitInverted(Label* label) const { if (cond_ != al) { - __ B(InvertCondition(cond_), label); + __ B(NegateCondition(cond_), label); } } @@ -86,7 +86,7 @@ class CompareAndBranch : public BranchGenerator { } virtual void EmitInverted(Label* label) const { - __ CompareAndBranch(lhs_, rhs_, InvertCondition(cond_), label); + __ CompareAndBranch(lhs_, rhs_, NegateCondition(cond_), label); } private: @@ -136,7 +136,7 @@ class TestAndBranch : public BranchGenerator { break; default: __ Tst(value_, mask_); - __ B(InvertCondition(cond_), label); + __ B(NegateCondition(cond_), label); } } @@ -238,13 +238,13 @@ void LCodeGen::WriteTranslation(LEnvironment* environment, translation->BeginConstructStubFrame(closure_id, translation_size); break; case JS_GETTER: - ASSERT(translation_size == 1); - ASSERT(height == 0); + DCHECK(translation_size == 1); + DCHECK(height == 0); translation->BeginGetterStubFrame(closure_id); break; case JS_SETTER: - ASSERT(translation_size == 2); - ASSERT(height == 0); + DCHECK(translation_size == 2); + DCHECK(height == 0); translation->BeginSetterStubFrame(closure_id); break; case STUB: @@ -386,7 +386,7 @@ void LCodeGen::CallCodeGeneric(Handle<Code> code, RelocInfo::Mode mode, LInstruction* instr, SafepointMode safepoint_mode) { - ASSERT(instr != NULL); + DCHECK(instr != NULL); Assembler::BlockPoolsScope scope(masm_); __ Call(code, mode); @@ -402,9 +402,9 @@ void LCodeGen::CallCodeGeneric(Handle<Code> code, void 
LCodeGen::DoCallFunction(LCallFunction* instr) { - ASSERT(ToRegister(instr->context()).is(cp)); - ASSERT(ToRegister(instr->function()).Is(x1)); - ASSERT(ToRegister(instr->result()).Is(x0)); + DCHECK(ToRegister(instr->context()).is(cp)); + DCHECK(ToRegister(instr->function()).Is(x1)); + DCHECK(ToRegister(instr->result()).Is(x0)); int arity = instr->arity(); CallFunctionStub stub(isolate(), arity, instr->hydrogen()->function_flags()); @@ -414,9 +414,9 @@ void LCodeGen::DoCallFunction(LCallFunction* instr) { void LCodeGen::DoCallNew(LCallNew* instr) { - ASSERT(ToRegister(instr->context()).is(cp)); - ASSERT(instr->IsMarkedAsCall()); - ASSERT(ToRegister(instr->constructor()).is(x1)); + DCHECK(ToRegister(instr->context()).is(cp)); + DCHECK(instr->IsMarkedAsCall()); + DCHECK(ToRegister(instr->constructor()).is(x1)); __ Mov(x0, instr->arity()); // No cell in x2 for construct type feedback in optimized code. @@ -426,14 +426,14 @@ void LCodeGen::DoCallNew(LCallNew* instr) { CallCode(stub.GetCode(), RelocInfo::CONSTRUCT_CALL, instr); after_push_argument_ = false; - ASSERT(ToRegister(instr->result()).is(x0)); + DCHECK(ToRegister(instr->result()).is(x0)); } void LCodeGen::DoCallNewArray(LCallNewArray* instr) { - ASSERT(instr->IsMarkedAsCall()); - ASSERT(ToRegister(instr->context()).is(cp)); - ASSERT(ToRegister(instr->constructor()).is(x1)); + DCHECK(instr->IsMarkedAsCall()); + DCHECK(ToRegister(instr->context()).is(cp)); + DCHECK(ToRegister(instr->constructor()).is(x1)); __ Mov(x0, Operand(instr->arity())); __ LoadRoot(x2, Heap::kUndefinedValueRootIndex); @@ -474,7 +474,7 @@ void LCodeGen::DoCallNewArray(LCallNewArray* instr) { } after_push_argument_ = false; - ASSERT(ToRegister(instr->result()).is(x0)); + DCHECK(ToRegister(instr->result()).is(x0)); } @@ -482,7 +482,7 @@ void LCodeGen::CallRuntime(const Runtime::Function* function, int num_arguments, LInstruction* instr, SaveFPRegsMode save_doubles) { - ASSERT(instr != NULL); + DCHECK(instr != NULL); __ CallRuntime(function, num_arguments, save_doubles); @@ -529,7 +529,7 @@ void LCodeGen::RecordSafepointWithLazyDeopt(LInstruction* instr, if (safepoint_mode == RECORD_SIMPLE_SAFEPOINT) { RecordSafepoint(instr->pointer_map(), Safepoint::kLazyDeopt); } else { - ASSERT(safepoint_mode == RECORD_SAFEPOINT_WITH_REGISTERS_AND_NO_ARGUMENTS); + DCHECK(safepoint_mode == RECORD_SAFEPOINT_WITH_REGISTERS_AND_NO_ARGUMENTS); RecordSafepointWithRegisters( instr->pointer_map(), 0, Safepoint::kLazyDeopt); } @@ -540,7 +540,7 @@ void LCodeGen::RecordSafepoint(LPointerMap* pointers, Safepoint::Kind kind, int arguments, Safepoint::DeoptMode deopt_mode) { - ASSERT(expected_safepoint_kind_ == kind); + DCHECK(expected_safepoint_kind_ == kind); const ZoneList<LOperand*>* operands = pointers->GetNormalizedOperands(); Safepoint safepoint = safepoints_.DefineSafepoint( @@ -580,16 +580,9 @@ void LCodeGen::RecordSafepointWithRegisters(LPointerMap* pointers, } -void LCodeGen::RecordSafepointWithRegistersAndDoubles( - LPointerMap* pointers, int arguments, Safepoint::DeoptMode deopt_mode) { - RecordSafepoint( - pointers, Safepoint::kWithRegistersAndDoubles, arguments, deopt_mode); -} - - bool LCodeGen::GenerateCode() { LPhase phase("Z_Code generation", chunk()); - ASSERT(is_unused()); + DCHECK(is_unused()); status_ = GENERATING; // Open a frame scope to indicate that there is a frame on the stack. 
The @@ -606,8 +599,8 @@ bool LCodeGen::GenerateCode() { void LCodeGen::SaveCallerDoubles() { - ASSERT(info()->saves_caller_doubles()); - ASSERT(NeedsEagerFrame()); + DCHECK(info()->saves_caller_doubles()); + DCHECK(NeedsEagerFrame()); Comment(";;; Save clobbered callee double registers"); BitVector* doubles = chunk()->allocated_double_registers(); BitVector::Iterator iterator(doubles); @@ -624,8 +617,8 @@ void LCodeGen::SaveCallerDoubles() { void LCodeGen::RestoreCallerDoubles() { - ASSERT(info()->saves_caller_doubles()); - ASSERT(NeedsEagerFrame()); + DCHECK(info()->saves_caller_doubles()); + DCHECK(NeedsEagerFrame()); Comment(";;; Restore clobbered callee double registers"); BitVector* doubles = chunk()->allocated_double_registers(); BitVector::Iterator iterator(doubles); @@ -642,7 +635,7 @@ void LCodeGen::RestoreCallerDoubles() { bool LCodeGen::GeneratePrologue() { - ASSERT(is_generating()); + DCHECK(is_generating()); if (info()->IsOptimizing()) { ProfileEntryHookStub::MaybeCallEntryHook(masm_); @@ -661,17 +654,21 @@ bool LCodeGen::GeneratePrologue() { __ JumpIfNotRoot(x10, Heap::kUndefinedValueRootIndex, &ok); __ Ldr(x10, GlobalObjectMemOperand()); - __ Ldr(x10, FieldMemOperand(x10, GlobalObject::kGlobalReceiverOffset)); + __ Ldr(x10, FieldMemOperand(x10, GlobalObject::kGlobalProxyOffset)); __ Poke(x10, receiver_offset); __ Bind(&ok); } } - ASSERT(__ StackPointer().Is(jssp)); + DCHECK(__ StackPointer().Is(jssp)); info()->set_prologue_offset(masm_->pc_offset()); if (NeedsEagerFrame()) { - __ Prologue(info()->IsStub() ? BUILD_STUB_FRAME : BUILD_FUNCTION_FRAME); + if (info()->IsStub()) { + __ StubPrologue(); + } else { + __ Prologue(info()->IsCodePreAgingActive()); + } frame_is_built_ = true; info_->AddNoFrameRange(0, masm_->pc_offset()); } @@ -690,13 +687,16 @@ bool LCodeGen::GeneratePrologue() { int heap_slots = info()->num_heap_slots() - Context::MIN_CONTEXT_SLOTS; if (heap_slots > 0) { Comment(";;; Allocate local context"); + bool need_write_barrier = true; // Argument to NewContext is the function, which is in x1. if (heap_slots <= FastNewContextStub::kMaximumSlots) { FastNewContextStub stub(isolate(), heap_slots); __ CallStub(&stub); + // Result of FastNewContextStub is always in new space. + need_write_barrier = false; } else { __ Push(x1); - __ CallRuntime(Runtime::kHiddenNewFunctionContext, 1); + __ CallRuntime(Runtime::kNewFunctionContext, 1); } RecordSafepoint(Safepoint::kNoLazyDeopt); // Context is returned in x0. It replaces the context passed to us. It's @@ -719,8 +719,15 @@ bool LCodeGen::GeneratePrologue() { MemOperand target = ContextMemOperand(cp, var->index()); __ Str(value, target); // Update the write barrier. This clobbers value and scratch. - __ RecordWriteContextSlot(cp, target.offset(), value, scratch, - GetLinkRegisterState(), kSaveFPRegs); + if (need_write_barrier) { + __ RecordWriteContextSlot(cp, target.offset(), value, scratch, + GetLinkRegisterState(), kSaveFPRegs); + } else if (FLAG_debug_code) { + Label done; + __ JumpIfInNewSpace(cp, &done); + __ Abort(kExpectedNewSpaceObject); + __ bind(&done); + } } } Comment(";;; End allocate local context"); @@ -747,7 +754,7 @@ void LCodeGen::GenerateOsrPrologue() { // Adjust the frame size, subsuming the unoptimized frame into the // optimized frame. 
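// (A worked example of the adjustment below, with illustrative numbers:
// if the optimized code needs 10 stack slots and the unoptimized frame
// already occupies 4 of them, slots == 6 and Claim(6) reserves only the
// remainder; the DCHECK guards against the unoptimized frame being the
// larger of the two.)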
int slots = GetStackSlotCount() - graph()->osr()->UnoptimizedFrameSlots(); - ASSERT(slots >= 0); + DCHECK(slots >= 0); __ Claim(slots); } @@ -763,7 +770,7 @@ void LCodeGen::GenerateBodyInstructionPre(LInstruction* instr) { bool LCodeGen::GenerateDeferredCode() { - ASSERT(is_generating()); + DCHECK(is_generating()); if (deferred_.length() > 0) { for (int i = 0; !is_aborted() && (i < deferred_.length()); i++) { LDeferredCode* code = deferred_[i]; @@ -783,8 +790,8 @@ bool LCodeGen::GenerateDeferredCode() { if (NeedsDeferredFrame()) { Comment(";;; Build frame"); - ASSERT(!frame_is_built_); - ASSERT(info()->IsStub()); + DCHECK(!frame_is_built_); + DCHECK(info()->IsStub()); frame_is_built_ = true; __ Push(lr, fp, cp); __ Mov(fp, Smi::FromInt(StackFrame::STUB)); @@ -798,7 +805,7 @@ bool LCodeGen::GenerateDeferredCode() { if (NeedsDeferredFrame()) { Comment(";;; Destroy frame"); - ASSERT(frame_is_built_); + DCHECK(frame_is_built_); __ Pop(xzr, cp, fp, lr); frame_is_built_ = false; } @@ -818,51 +825,82 @@ bool LCodeGen::GenerateDeferredCode() { bool LCodeGen::GenerateDeoptJumpTable() { + Label needs_frame, restore_caller_doubles, call_deopt_entry; + if (deopt_jump_table_.length() > 0) { Comment(";;; -------------------- Jump table --------------------"); - } - Label table_start; - __ bind(&table_start); - Label needs_frame; - for (int i = 0; i < deopt_jump_table_.length(); i++) { - __ Bind(&deopt_jump_table_[i]->label); - Address entry = deopt_jump_table_[i]->address; - Deoptimizer::BailoutType type = deopt_jump_table_[i]->bailout_type; - int id = Deoptimizer::GetDeoptimizationId(isolate(), entry, type); - if (id == Deoptimizer::kNotDeoptimizationEntry) { - Comment(";;; jump table entry %d.", i); - } else { - Comment(";;; jump table entry %d: deoptimization bailout %d.", i, id); - } - if (deopt_jump_table_[i]->needs_frame) { - ASSERT(!info()->saves_caller_doubles()); + Address base = deopt_jump_table_[0]->address; - UseScratchRegisterScope temps(masm()); - Register stub_deopt_entry = temps.AcquireX(); - Register stub_marker = temps.AcquireX(); + UseScratchRegisterScope temps(masm()); + Register entry_offset = temps.AcquireX(); + + int length = deopt_jump_table_.length(); + for (int i = 0; i < length; i++) { + __ Bind(&deopt_jump_table_[i]->label); - __ Mov(stub_deopt_entry, ExternalReference::ForDeoptEntry(entry)); - if (needs_frame.is_bound()) { - __ B(&needs_frame); + Deoptimizer::BailoutType type = deopt_jump_table_[i]->bailout_type; + Address entry = deopt_jump_table_[i]->address; + int id = Deoptimizer::GetDeoptimizationId(isolate(), entry, type); + if (id == Deoptimizer::kNotDeoptimizationEntry) { + Comment(";;; jump table entry %d.", i); } else { - __ Bind(&needs_frame); - // This variant of deopt can only be used with stubs. Since we don't - // have a function pointer to install in the stack frame that we're - // building, install a special marker there instead. - ASSERT(info()->IsStub()); - __ Mov(stub_marker, Smi::FromInt(StackFrame::STUB)); - __ Push(lr, fp, cp, stub_marker); - __ Add(fp, __ StackPointer(), 2 * kPointerSize); - __ Call(stub_deopt_entry); + Comment(";;; jump table entry %d: deoptimization bailout %d.", i, id); } - } else { - if (info()->saves_caller_doubles()) { - ASSERT(info()->IsStub()); - RestoreCallerDoubles(); + + // Second-level deopt table entries are contiguous and small, so instead + // of loading the full, absolute address of each one, load the base + // address and add an immediate offset. 
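// (A sketch of why this pays off on ARM64, using an illustrative scratch
// register: materializing a full 64-bit absolute address can take up to
// four instructions, e.g.
//     movz x16, #0xcdef
//     movk x16, #0x89ab, lsl #16
//     movk x16, #0x4567, lsl #32
//     movk x16, #0x0123, lsl #48
// while a small offset from the shared base fits a single
//     movz x16, #0x30
// so each jump table entry shrinks to roughly one move plus a branch.)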
+ __ Mov(entry_offset, entry - base); + + // The last entry can fall through into `call_deopt_entry`, avoiding a + // branch. + bool last_entry = (i + 1) == length; + + if (deopt_jump_table_[i]->needs_frame) { + DCHECK(!info()->saves_caller_doubles()); + if (!needs_frame.is_bound()) { + // This variant of deopt can only be used with stubs. Since we don't + // have a function pointer to install in the stack frame that we're + // building, install a special marker there instead. + DCHECK(info()->IsStub()); + + UseScratchRegisterScope temps(masm()); + Register stub_marker = temps.AcquireX(); + __ Bind(&needs_frame); + __ Mov(stub_marker, Smi::FromInt(StackFrame::STUB)); + __ Push(lr, fp, cp, stub_marker); + __ Add(fp, __ StackPointer(), 2 * kPointerSize); + if (!last_entry) __ B(&call_deopt_entry); + } else { + // Reuse the existing needs_frame code. + __ B(&needs_frame); + } + } else if (info()->saves_caller_doubles()) { + DCHECK(info()->IsStub()); + if (!restore_caller_doubles.is_bound()) { + __ Bind(&restore_caller_doubles); + RestoreCallerDoubles(); + if (!last_entry) __ B(&call_deopt_entry); + } else { + // Reuse the existing restore_caller_doubles code. + __ B(&restore_caller_doubles); + } + } else { + // There is nothing special to do, so just continue to the second-level + // table. + if (!last_entry) __ B(&call_deopt_entry); } - __ Call(entry, RelocInfo::RUNTIME_ENTRY); + + masm()->CheckConstPool(false, last_entry); } - masm()->CheckConstPool(false, false); + + // Generate common code for calling the second-level deopt table. + Register deopt_entry = temps.AcquireX(); + __ Bind(&call_deopt_entry); + __ Mov(deopt_entry, Operand(reinterpret_cast<uint64_t>(base), + RelocInfo::RUNTIME_ENTRY)); + __ Add(deopt_entry, deopt_entry, entry_offset); + __ Call(deopt_entry); } // Force constant pool emission at the end of the deopt jump table to make @@ -877,7 +915,7 @@ bool LCodeGen::GenerateDeoptJumpTable() { bool LCodeGen::GenerateSafepointTable() { - ASSERT(is_done()); + DCHECK(is_done()); // We do not know how much data will be emitted for the safepoint table, so // force emission of the veneer pool. 
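// (Background, not specific to this patch: a veneer is a short branch
// trampoline the ARM64 assembler inserts when a range-limited branch,
// such as B.cond with a +/-1MB reach or TBZ/TBNZ with +/-32KB, might no
// longer reach its target; emitting the pool here keeps the safepoint
// data from pushing any pending branch past its limit.)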
masm()->CheckVeneerPool(true, true); @@ -887,7 +925,7 @@ bool LCodeGen::GenerateSafepointTable() { void LCodeGen::FinishCode(Handle<Code> code) { - ASSERT(is_done()); + DCHECK(is_done()); code->set_stack_slots(GetStackSlotCount()); code->set_safepoint_table_offset(safepoints_.GetCodeOffset()); if (code->is_optimized_code()) RegisterWeakObjectsInOptimizedCode(code); @@ -900,7 +938,7 @@ void LCodeGen::PopulateDeoptimizationData(Handle<Code> code) { if (length == 0) return; Handle<DeoptimizationInputData> data = - DeoptimizationInputData::New(isolate(), length, TENURED); + DeoptimizationInputData::New(isolate(), length, 0, TENURED); Handle<ByteArray> translations = translations_.CreateByteArray(isolate()->factory()); @@ -942,7 +980,7 @@ void LCodeGen::PopulateDeoptimizationData(Handle<Code> code) { void LCodeGen::PopulateDeoptimizationLiteralsWithInlinedFunctions() { - ASSERT(deoptimization_literals_.length() == 0); + DCHECK(deoptimization_literals_.length() == 0); const ZoneList<Handle<JSFunction> >* inlined_closures = chunk()->inlined_closures(); @@ -967,8 +1005,8 @@ void LCodeGen::DeoptimizeBranch( bailout_type = *override_bailout_type; } - ASSERT(environment->HasBeenRegistered()); - ASSERT(info()->IsOptimizing() || info()->IsStub()); + DCHECK(environment->HasBeenRegistered()); + DCHECK(info()->IsOptimizing() || info()->IsStub()); int id = environment->deoptimization_index(); Address entry = Deoptimizer::GetDeoptimizationEntry(isolate(), id, bailout_type); @@ -990,7 +1028,7 @@ void LCodeGen::DeoptimizeBranch( __ Mov(w1, FLAG_deopt_every_n_times); __ Str(w1, MemOperand(x0)); __ Pop(x2, x1, x0); - ASSERT(frame_is_built_); + DCHECK(frame_is_built_); __ Call(entry, RelocInfo::RUNTIME_ENTRY); __ Unreachable(); @@ -1007,7 +1045,7 @@ void LCodeGen::DeoptimizeBranch( __ Bind(&dont_trap); } - ASSERT(info()->IsStub() || frame_is_built_); + DCHECK(info()->IsStub() || frame_is_built_); // Go through jump table if we need to build frame, or restore caller doubles. if (branch_type == always && frame_is_built_ && !info()->saves_caller_doubles()) { @@ -1114,7 +1152,7 @@ void LCodeGen::EnsureSpaceForLazyDeopt(int space_needed) { if (current_pc < (last_lazy_deopt_pc_ + space_needed)) { ptrdiff_t padding_size = last_lazy_deopt_pc_ + space_needed - current_pc; - ASSERT((padding_size % kInstructionSize) == 0); + DCHECK((padding_size % kInstructionSize) == 0); InstructionAccurateScope instruction_accurate( masm(), padding_size / kInstructionSize); @@ -1130,16 +1168,16 @@ void LCodeGen::EnsureSpaceForLazyDeopt(int space_needed) { Register LCodeGen::ToRegister(LOperand* op) const { // TODO(all): support zero register results, as ToRegister32. - ASSERT((op != NULL) && op->IsRegister()); + DCHECK((op != NULL) && op->IsRegister()); return Register::FromAllocationIndex(op->index()); } Register LCodeGen::ToRegister32(LOperand* op) const { - ASSERT(op != NULL); + DCHECK(op != NULL); if (op->IsConstantOperand()) { // If this is a constant operand, the result must be the zero register. 
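// (ARM64 has architectural zero registers, wzr and xzr, that always read
// as zero, so a constant 0 operand needs no materializing mov; for
// example, Cmp(w0, wzr) compares against zero directly.)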
- ASSERT(ToInteger32(LConstantOperand::cast(op)) == 0); + DCHECK(ToInteger32(LConstantOperand::cast(op)) == 0); return wzr; } else { return ToRegister(op).W(); @@ -1154,27 +1192,27 @@ Smi* LCodeGen::ToSmi(LConstantOperand* op) const { DoubleRegister LCodeGen::ToDoubleRegister(LOperand* op) const { - ASSERT((op != NULL) && op->IsDoubleRegister()); + DCHECK((op != NULL) && op->IsDoubleRegister()); return DoubleRegister::FromAllocationIndex(op->index()); } Operand LCodeGen::ToOperand(LOperand* op) { - ASSERT(op != NULL); + DCHECK(op != NULL); if (op->IsConstantOperand()) { LConstantOperand* const_op = LConstantOperand::cast(op); HConstant* constant = chunk()->LookupConstant(const_op); Representation r = chunk_->LookupLiteralRepresentation(const_op); if (r.IsSmi()) { - ASSERT(constant->HasSmiValue()); + DCHECK(constant->HasSmiValue()); return Operand(Smi::FromInt(constant->Integer32Value())); } else if (r.IsInteger32()) { - ASSERT(constant->HasInteger32Value()); + DCHECK(constant->HasInteger32Value()); return Operand(constant->Integer32Value()); } else if (r.IsDouble()) { Abort(kToOperandUnsupportedDoubleImmediate); } - ASSERT(r.IsTagged()); + DCHECK(r.IsTagged()); return Operand(constant->handle(isolate())); } else if (op->IsRegister()) { return Operand(ToRegister(op)); @@ -1199,7 +1237,7 @@ Operand LCodeGen::ToOperand32U(LOperand* op) { Operand LCodeGen::ToOperand32(LOperand* op, IntegerSignedness signedness) { - ASSERT(op != NULL); + DCHECK(op != NULL); if (op->IsRegister()) { return Operand(ToRegister32(op)); } else if (op->IsConstantOperand()) { @@ -1207,7 +1245,7 @@ Operand LCodeGen::ToOperand32(LOperand* op, IntegerSignedness signedness) { HConstant* constant = chunk()->LookupConstant(const_op); Representation r = chunk_->LookupLiteralRepresentation(const_op); if (r.IsInteger32()) { - ASSERT(constant->HasInteger32Value()); + DCHECK(constant->HasInteger32Value()); return (signedness == SIGNED_INT32) ? 
Operand(constant->Integer32Value()) : Operand(static_cast<uint32_t>(constant->Integer32Value())); @@ -1223,16 +1261,16 @@ Operand LCodeGen::ToOperand32(LOperand* op, IntegerSignedness signedness) { static ptrdiff_t ArgumentsOffsetWithoutFrame(ptrdiff_t index) { - ASSERT(index < 0); + DCHECK(index < 0); return -(index + 1) * kPointerSize; } MemOperand LCodeGen::ToMemOperand(LOperand* op, StackMode stack_mode) const { - ASSERT(op != NULL); - ASSERT(!op->IsRegister()); - ASSERT(!op->IsDoubleRegister()); - ASSERT(op->IsStackSlot() || op->IsDoubleStackSlot()); + DCHECK(op != NULL); + DCHECK(!op->IsRegister()); + DCHECK(!op->IsDoubleRegister()); + DCHECK(op->IsStackSlot() || op->IsDoubleStackSlot()); if (NeedsEagerFrame()) { int fp_offset = StackSlotOffset(op->index()); if (op->index() >= 0) { @@ -1271,7 +1309,7 @@ MemOperand LCodeGen::ToMemOperand(LOperand* op, StackMode stack_mode) const { Handle<Object> LCodeGen::ToHandle(LConstantOperand* op) const { HConstant* constant = chunk_->LookupConstant(op); - ASSERT(chunk_->LookupLiteralRepresentation(op).IsSmiOrTagged()); + DCHECK(chunk_->LookupLiteralRepresentation(op).IsSmiOrTagged()); return constant->handle(isolate()); } @@ -1309,7 +1347,7 @@ int32_t LCodeGen::ToInteger32(LConstantOperand* op) const { double LCodeGen::ToDouble(LConstantOperand* op) const { HConstant* constant = chunk_->LookupConstant(op); - ASSERT(constant->HasDoubleValue()); + DCHECK(constant->HasDoubleValue()); return constant->DoubleValue(); } @@ -1369,7 +1407,7 @@ void LCodeGen::EmitBranchGeneric(InstrType instr, template<class InstrType> void LCodeGen::EmitBranch(InstrType instr, Condition condition) { - ASSERT((condition != al) && (condition != nv)); + DCHECK((condition != al) && (condition != nv)); BranchOnCondition branch(this, condition); EmitBranchGeneric(instr, branch); } @@ -1380,7 +1418,7 @@ void LCodeGen::EmitCompareAndBranch(InstrType instr, Condition condition, const Register& lhs, const Operand& rhs) { - ASSERT((condition != al) && (condition != nv)); + DCHECK((condition != al) && (condition != nv)); CompareAndBranch branch(this, condition, lhs, rhs); EmitBranchGeneric(instr, branch); } @@ -1391,7 +1429,7 @@ void LCodeGen::EmitTestAndBranch(InstrType instr, Condition condition, const Register& value, uint64_t mask) { - ASSERT((condition != al) && (condition != nv)); + DCHECK((condition != al) && (condition != nv)); TestAndBranch branch(this, condition, value, mask); EmitBranchGeneric(instr, branch); } @@ -1478,7 +1516,7 @@ void LCodeGen::DoAddE(LAddE* instr) { ? 
ToInteger32(LConstantOperand::cast(instr->right())) : Operand(ToRegister32(instr->right()), SXTW); - ASSERT(!instr->hydrogen()->CheckFlag(HValue::kCanOverflow)); + DCHECK(!instr->hydrogen()->CheckFlag(HValue::kCanOverflow)); __ Add(result, left, right); } @@ -1536,11 +1574,11 @@ void LCodeGen::DoAllocate(LAllocate* instr) { } if (instr->hydrogen()->IsOldPointerSpaceAllocation()) { - ASSERT(!instr->hydrogen()->IsOldDataSpaceAllocation()); - ASSERT(!instr->hydrogen()->IsNewSpaceAllocation()); + DCHECK(!instr->hydrogen()->IsOldDataSpaceAllocation()); + DCHECK(!instr->hydrogen()->IsNewSpaceAllocation()); flags = static_cast<AllocationFlags>(flags | PRETENURE_OLD_POINTER_SPACE); } else if (instr->hydrogen()->IsOldDataSpaceAllocation()) { - ASSERT(!instr->hydrogen()->IsNewSpaceAllocation()); + DCHECK(!instr->hydrogen()->IsNewSpaceAllocation()); flags = static_cast<AllocationFlags>(flags | PRETENURE_OLD_DATA_SPACE); } @@ -1575,7 +1613,7 @@ void LCodeGen::DoAllocate(LAllocate* instr) { __ Mov(filler, Operand(isolate()->factory()->one_pointer_filler_map())); __ FillFields(untagged_result, filler_count, filler); } else { - ASSERT(instr->temp3() == NULL); + DCHECK(instr->temp3() == NULL); } } @@ -1586,7 +1624,7 @@ void LCodeGen::DoDeferredAllocate(LAllocate* instr) { // contained in the register pointer map. __ Mov(ToRegister(instr->result()), Smi::FromInt(0)); - PushSafepointRegistersScope scope(this, Safepoint::kWithRegisters); + PushSafepointRegistersScope scope(this); // We're in a SafepointRegistersScope so we can use any scratch registers. Register size = x0; if (instr->size()->IsConstantOperand()) { @@ -1597,11 +1635,11 @@ void LCodeGen::DoDeferredAllocate(LAllocate* instr) { int flags = AllocateDoubleAlignFlag::encode( instr->hydrogen()->MustAllocateDoubleAligned()); if (instr->hydrogen()->IsOldPointerSpaceAllocation()) { - ASSERT(!instr->hydrogen()->IsOldDataSpaceAllocation()); - ASSERT(!instr->hydrogen()->IsNewSpaceAllocation()); + DCHECK(!instr->hydrogen()->IsOldDataSpaceAllocation()); + DCHECK(!instr->hydrogen()->IsNewSpaceAllocation()); flags = AllocateTargetSpace::update(flags, OLD_POINTER_SPACE); } else if (instr->hydrogen()->IsOldDataSpaceAllocation()) { - ASSERT(!instr->hydrogen()->IsNewSpaceAllocation()); + DCHECK(!instr->hydrogen()->IsNewSpaceAllocation()); flags = AllocateTargetSpace::update(flags, OLD_DATA_SPACE); } else { flags = AllocateTargetSpace::update(flags, NEW_SPACE); @@ -1610,7 +1648,7 @@ void LCodeGen::DoDeferredAllocate(LAllocate* instr) { __ Push(size, x10); CallRuntimeFromDeferred( - Runtime::kHiddenAllocateInTargetSpace, 2, instr, instr->context()); + Runtime::kAllocateInTargetSpace, 2, instr, instr->context()); __ StoreToSafepointRegisterSlot(x0, ToRegister(instr->result())); } @@ -1622,10 +1660,10 @@ void LCodeGen::DoApplyArguments(LApplyArguments* instr) { Register elements = ToRegister(instr->elements()); Register scratch = x5; - ASSERT(receiver.Is(x0)); // Used for parameter count. - ASSERT(function.Is(x1)); // Required by InvokeFunction. - ASSERT(ToRegister(instr->result()).Is(x0)); - ASSERT(instr->IsMarkedAsCall()); + DCHECK(receiver.Is(x0)); // Used for parameter count. + DCHECK(function.Is(x1)); // Required by InvokeFunction. + DCHECK(ToRegister(instr->result()).Is(x0)); + DCHECK(instr->IsMarkedAsCall()); // Copy the arguments to this function possibly from the // adaptor frame below it. 
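// (An arguments adaptor frame is the frame V8 interposes when a call
// site's actual argument count differs from the callee's formal parameter
// count; the copy loop that follows walks whichever frame actually holds
// the arguments.)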
@@ -1654,7 +1692,7 @@ void LCodeGen::DoApplyArguments(LApplyArguments* instr) { __ B(ne, &loop); __ Bind(&invoke); - ASSERT(instr->HasPointerMap()); + DCHECK(instr->HasPointerMap()); LPointerMap* pointers = instr->pointer_map(); SafepointGenerator safepoint_generator(this, pointers, Safepoint::kLazyDeopt); // The number of arguments is stored in argc (receiver) which is x0, as @@ -1680,10 +1718,10 @@ void LCodeGen::DoArgumentsElements(LArgumentsElements* instr) { // LAccessArgumentsAt implementation take that into account. // In the inlined case we need to subtract the size of 2 words to jssp to // get a pointer which will work well with LAccessArgumentsAt. - ASSERT(masm()->StackPointer().Is(jssp)); + DCHECK(masm()->StackPointer().Is(jssp)); __ Sub(result, jssp, 2 * kPointerSize); } else { - ASSERT(instr->temp() != NULL); + DCHECK(instr->temp() != NULL); Register previous_fp = ToRegister(instr->temp()); __ Ldr(previous_fp, @@ -1737,12 +1775,12 @@ void LCodeGen::DoArithmeticD(LArithmeticD* instr) { // precision), it should be possible. However, we would need support for // fdiv in round-towards-zero mode, and the ARM64 simulator doesn't // support that yet. - ASSERT(left.Is(d0)); - ASSERT(right.Is(d1)); + DCHECK(left.Is(d0)); + DCHECK(right.Is(d1)); __ CallCFunction( ExternalReference::mod_two_doubles_operation(isolate()), 0, 2); - ASSERT(result.Is(d0)); + DCHECK(result.Is(d0)); break; } default: @@ -1753,10 +1791,10 @@ void LCodeGen::DoArithmeticD(LArithmeticD* instr) { void LCodeGen::DoArithmeticT(LArithmeticT* instr) { - ASSERT(ToRegister(instr->context()).is(cp)); - ASSERT(ToRegister(instr->left()).is(x1)); - ASSERT(ToRegister(instr->right()).is(x0)); - ASSERT(ToRegister(instr->result()).is(x0)); + DCHECK(ToRegister(instr->context()).is(cp)); + DCHECK(ToRegister(instr->left()).is(x1)); + DCHECK(ToRegister(instr->right()).is(x0)); + DCHECK(ToRegister(instr->result()).is(x0)); BinaryOpICStub stub(isolate(), instr->op(), NO_OVERWRITE); CallCode(stub.GetCode(), RelocInfo::CODE_TARGET, instr); @@ -1797,20 +1835,20 @@ void LCodeGen::DoBitS(LBitS* instr) { void LCodeGen::DoBoundsCheck(LBoundsCheck *instr) { Condition cond = instr->hydrogen()->allow_equality() ? 
hi : hs; - ASSERT(instr->hydrogen()->index()->representation().IsInteger32()); - ASSERT(instr->hydrogen()->length()->representation().IsInteger32()); + DCHECK(instr->hydrogen()->index()->representation().IsInteger32()); + DCHECK(instr->hydrogen()->length()->representation().IsInteger32()); if (instr->index()->IsConstantOperand()) { Operand index = ToOperand32I(instr->index()); Register length = ToRegister32(instr->length()); __ Cmp(length, index); - cond = ReverseConditionForCmp(cond); + cond = CommuteCondition(cond); } else { Register index = ToRegister32(instr->index()); Operand length = ToOperand32I(instr->length()); __ Cmp(index, length); } if (FLAG_debug_code && instr->hydrogen()->skip_check()) { - __ Assert(InvertCondition(cond), kEliminatedBoundsCheckFailed); + __ Assert(NegateCondition(cond), kEliminatedBoundsCheckFailed); } else { DeoptimizeIf(cond, instr->environment()); } @@ -1823,10 +1861,10 @@ void LCodeGen::DoBranch(LBranch* instr) { Label* false_label = instr->FalseLabel(chunk_); if (r.IsInteger32()) { - ASSERT(!info()->IsStub()); + DCHECK(!info()->IsStub()); EmitCompareAndBranch(instr, ne, ToRegister32(instr->value()), 0); } else if (r.IsSmi()) { - ASSERT(!info()->IsStub()); + DCHECK(!info()->IsStub()); STATIC_ASSERT(kSmiTag == 0); EmitCompareAndBranch(instr, ne, ToRegister(instr->value()), 0); } else if (r.IsDouble()) { @@ -1834,28 +1872,28 @@ void LCodeGen::DoBranch(LBranch* instr) { // Test the double value. Zero and NaN are false. EmitBranchIfNonZeroNumber(instr, value, double_scratch()); } else { - ASSERT(r.IsTagged()); + DCHECK(r.IsTagged()); Register value = ToRegister(instr->value()); HType type = instr->hydrogen()->value()->type(); if (type.IsBoolean()) { - ASSERT(!info()->IsStub()); + DCHECK(!info()->IsStub()); __ CompareRoot(value, Heap::kTrueValueRootIndex); EmitBranch(instr, eq); } else if (type.IsSmi()) { - ASSERT(!info()->IsStub()); + DCHECK(!info()->IsStub()); EmitCompareAndBranch(instr, ne, value, Smi::FromInt(0)); } else if (type.IsJSArray()) { - ASSERT(!info()->IsStub()); + DCHECK(!info()->IsStub()); EmitGoto(instr->TrueDestination(chunk())); } else if (type.IsHeapNumber()) { - ASSERT(!info()->IsStub()); + DCHECK(!info()->IsStub()); __ Ldr(double_scratch(), FieldMemOperand(value, HeapNumber::kValueOffset)); // Test the double value. Zero and NaN are false. EmitBranchIfNonZeroNumber(instr, double_scratch(), double_scratch()); } else if (type.IsString()) { - ASSERT(!info()->IsStub()); + DCHECK(!info()->IsStub()); Register temp = ToRegister(instr->temp1()); __ Ldr(temp, FieldMemOperand(value, String::kLengthOffset)); EmitCompareAndBranch(instr, ne, temp, 0); @@ -1886,7 +1924,7 @@ void LCodeGen::DoBranch(LBranch* instr) { if (expected.Contains(ToBooleanStub::SMI)) { // Smis: 0 -> false, all other -> true. - ASSERT(Smi::FromInt(0) == 0); + DCHECK(Smi::FromInt(0) == 0); __ Cbz(value, false_label); __ JumpIfSmi(value, true_label); } else if (expected.NeedsMap()) { @@ -1898,7 +1936,7 @@ void LCodeGen::DoBranch(LBranch* instr) { Register scratch = NoReg; if (expected.NeedsMap()) { - ASSERT((instr->temp1() != NULL) && (instr->temp2() != NULL)); + DCHECK((instr->temp1() != NULL) && (instr->temp2() != NULL)); map = ToRegister(instr->temp1()); scratch = ToRegister(instr->temp2()); @@ -1970,7 +2008,7 @@ void LCodeGen::CallKnownFunction(Handle<JSFunction> function, dont_adapt_arguments || formal_parameter_count == arity; // The function interface relies on the following register assignments. 
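// (As the surrounding DCHECKs encode for this port: x1 carries the
// function, x0 the arity and later the result, and cp the context.)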
- ASSERT(function_reg.Is(x1) || function_reg.IsNone()); + DCHECK(function_reg.Is(x1) || function_reg.IsNone()); Register arity_reg = x0; LPointerMap* pointers = instr->pointer_map(); @@ -2015,8 +2053,8 @@ void LCodeGen::CallKnownFunction(Handle<JSFunction> function, void LCodeGen::DoCallWithDescriptor(LCallWithDescriptor* instr) { - ASSERT(instr->IsMarkedAsCall()); - ASSERT(ToRegister(instr->result()).Is(x0)); + DCHECK(instr->IsMarkedAsCall()); + DCHECK(ToRegister(instr->result()).Is(x0)); LPointerMap* pointers = instr->pointer_map(); SafepointGenerator generator(this, pointers, Safepoint::kLazyDeopt); @@ -2030,7 +2068,7 @@ void LCodeGen::DoCallWithDescriptor(LCallWithDescriptor* instr) { // this understanding is correct. __ Call(code, RelocInfo::CODE_TARGET, TypeFeedbackId::None()); } else { - ASSERT(instr->target()->IsRegister()); + DCHECK(instr->target()->IsRegister()); Register target = ToRegister(instr->target()); generator.BeforeCall(__ CallSize(target)); __ Add(target, target, Code::kHeaderSize - kHeapObjectTag); @@ -2042,8 +2080,8 @@ void LCodeGen::DoCallWithDescriptor(LCallWithDescriptor* instr) { void LCodeGen::DoCallJSFunction(LCallJSFunction* instr) { - ASSERT(instr->IsMarkedAsCall()); - ASSERT(ToRegister(instr->function()).is(x1)); + DCHECK(instr->IsMarkedAsCall()); + DCHECK(ToRegister(instr->function()).is(x1)); if (instr->hydrogen()->pass_argument_count()) { __ Mov(x0, Operand(instr->arity())); @@ -2068,8 +2106,8 @@ void LCodeGen::DoCallRuntime(LCallRuntime* instr) { void LCodeGen::DoCallStub(LCallStub* instr) { - ASSERT(ToRegister(instr->context()).is(cp)); - ASSERT(ToRegister(instr->result()).is(x0)); + DCHECK(ToRegister(instr->context()).is(cp)); + DCHECK(ToRegister(instr->result()).is(x0)); switch (instr->hydrogen()->major_key()) { case CodeStub::RegExpExec: { RegExpExecStub stub(isolate()); @@ -2101,7 +2139,7 @@ void LCodeGen::DoUnknownOSRValue(LUnknownOSRValue* instr) { void LCodeGen::DoDeferredInstanceMigration(LCheckMaps* instr, Register object) { Register temp = ToRegister(instr->temp()); { - PushSafepointRegistersScope scope(this, Safepoint::kWithRegisters); + PushSafepointRegistersScope scope(this); __ Push(object); __ Mov(cp, 0); __ CallRuntimeSaveDoubles(Runtime::kTryMigrateInstance); @@ -2172,7 +2210,7 @@ void LCodeGen::DoCheckMaps(LCheckMaps* instr) { void LCodeGen::DoCheckNonSmi(LCheckNonSmi* instr) { - if (!instr->hydrogen()->value()->IsHeapObject()) { + if (!instr->hydrogen()->value()->type().IsHeapObject()) { DeoptimizeIfSmi(ToRegister(instr->value()), instr->environment()); } } @@ -2180,7 +2218,7 @@ void LCodeGen::DoCheckNonSmi(LCheckNonSmi* instr) { void LCodeGen::DoCheckSmi(LCheckSmi* instr) { Register value = ToRegister(instr->value()); - ASSERT(!instr->result() || ToRegister(instr->result()).Is(value)); + DCHECK(!instr->result() || ToRegister(instr->result()).Is(value)); DeoptimizeIfNotSmi(value, instr->environment()); } @@ -2215,7 +2253,7 @@ void LCodeGen::DoCheckInstanceType(LCheckInstanceType* instr) { instr->hydrogen()->GetCheckMaskAndTag(&mask, &tag); if (IsPowerOf2(mask)) { - ASSERT((tag == 0) || (tag == mask)); + DCHECK((tag == 0) || (tag == mask)); if (tag == 0) { DeoptimizeIfBitSet(scratch, MaskToBit(mask), instr->environment()); } else { @@ -2290,7 +2328,7 @@ void LCodeGen::DoDoubleBits(LDoubleBits* instr) { Register result_reg = ToRegister(instr->result()); if (instr->hydrogen()->bits() == HDoubleBits::HIGH) { __ Fmov(result_reg, value_reg); - __ Mov(result_reg, Operand(result_reg, LSR, 32)); + __ Lsr(result_reg, result_reg, 32); } else { 
__ Fmov(result_reg.W(), value_reg.S()); } @@ -2300,12 +2338,12 @@ void LCodeGen::DoDoubleBits(LDoubleBits* instr) { void LCodeGen::DoConstructDouble(LConstructDouble* instr) { Register hi_reg = ToRegister(instr->hi()); Register lo_reg = ToRegister(instr->lo()); - Register temp = ToRegister(instr->temp()); DoubleRegister result_reg = ToDoubleRegister(instr->result()); - __ And(temp, lo_reg, Operand(0xffffffff)); - __ Orr(temp, temp, Operand(hi_reg, LSL, 32)); - __ Fmov(result_reg, temp); + // Insert the least significant 32 bits of hi_reg into the most significant + // 32 bits of lo_reg, and move to a floating point register. + __ Bfi(lo_reg, hi_reg, 32, 32); + __ Fmov(result_reg, lo_reg); } @@ -2371,7 +2409,7 @@ void LCodeGen::DoClassOfTestAndBranch(LClassOfTestAndBranch* instr) { void LCodeGen::DoCmpHoleAndBranchD(LCmpHoleAndBranchD* instr) { - ASSERT(instr->hydrogen()->representation().IsDouble()); + DCHECK(instr->hydrogen()->representation().IsDouble()); FPRegister object = ToDoubleRegister(instr->object()); Register temp = ToRegister(instr->temp()); @@ -2387,7 +2425,7 @@ void LCodeGen::DoCmpHoleAndBranchD(LCmpHoleAndBranchD* instr) { void LCodeGen::DoCmpHoleAndBranchT(LCmpHoleAndBranchT* instr) { - ASSERT(instr->hydrogen()->representation().IsTagged()); + DCHECK(instr->hydrogen()->representation().IsTagged()); Register object = ToRegister(instr->object()); EmitBranchIfRoot(instr, object, Heap::kTheHoleValueRootIndex); @@ -2405,7 +2443,7 @@ void LCodeGen::DoCmpMapAndBranch(LCmpMapAndBranch* instr) { void LCodeGen::DoCompareMinusZeroAndBranch(LCompareMinusZeroAndBranch* instr) { Representation rep = instr->hydrogen()->value()->representation(); - ASSERT(!rep.IsInteger32()); + DCHECK(!rep.IsInteger32()); Register scratch = ToRegister(instr->temp()); if (rep.IsDouble()) { @@ -2415,8 +2453,8 @@ void LCodeGen::DoCompareMinusZeroAndBranch(LCompareMinusZeroAndBranch* instr) { Register value = ToRegister(instr->value()); __ CheckMap(value, scratch, Heap::kHeapNumberMapRootIndex, instr->FalseLabel(chunk()), DO_SMI_CHECK); - __ Ldr(double_scratch(), FieldMemOperand(value, HeapNumber::kValueOffset)); - __ JumpIfMinusZero(double_scratch(), instr->TrueLabel(chunk())); + __ Ldr(scratch, FieldMemOperand(value, HeapNumber::kValueOffset)); + __ JumpIfMinusZero(scratch, instr->TrueLabel(chunk())); } EmitGoto(instr->FalseDestination(chunk())); } @@ -2425,7 +2463,10 @@ void LCodeGen::DoCompareMinusZeroAndBranch(LCompareMinusZeroAndBranch* instr) { void LCodeGen::DoCompareNumericAndBranch(LCompareNumericAndBranch* instr) { LOperand* left = instr->left(); LOperand* right = instr->right(); - Condition cond = TokenToCondition(instr->op(), false); + bool is_unsigned = + instr->hydrogen()->left()->CheckFlag(HInstruction::kUint32) || + instr->hydrogen()->right()->CheckFlag(HInstruction::kUint32); + Condition cond = TokenToCondition(instr->op(), is_unsigned); if (left->IsConstantOperand() && right->IsConstantOperand()) { // We can statically evaluate the comparison. @@ -2436,17 +2477,7 @@ void LCodeGen::DoCompareNumericAndBranch(LCompareNumericAndBranch* instr) { EmitGoto(next_block); } else { if (instr->is_double()) { - if (right->IsConstantOperand()) { - __ Fcmp(ToDoubleRegister(left), - ToDouble(LConstantOperand::cast(right))); - } else if (left->IsConstantOperand()) { - // Transpose the operands and reverse the condition. 
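// Illustrative aside (editorial; not part of the patch): in the
// DoConstructDouble hunk above, Bfi(lo_reg, hi_reg, 32, 32) inserts the low
// 32 bits of hi_reg at bit position 32 of lo_reg, building the raw 64-bit
// double image in one instruction (and without the temp register) before the
// Fmov. A portable sketch of the same combine:
#include <cstdint>
uint64_t ConstructDoubleBits(uint32_t hi, uint32_t lo) {
  return (static_cast<uint64_t>(hi) << 32) | lo;  // what the Bfi achieves
}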
- __ Fcmp(ToDoubleRegister(right), - ToDouble(LConstantOperand::cast(left))); - cond = ReverseConditionForCmp(cond); - } else { - __ Fcmp(ToDoubleRegister(left), ToDoubleRegister(right)); - } + __ Fcmp(ToDoubleRegister(left), ToDoubleRegister(right)); // If a NaN is involved, i.e. the result is unordered (V set), // jump to false block label. @@ -2460,14 +2491,14 @@ void LCodeGen::DoCompareNumericAndBranch(LCompareNumericAndBranch* instr) { ToRegister32(left), ToOperand32I(right)); } else { - // Transpose the operands and reverse the condition. + // Commute the operands and the condition. EmitCompareAndBranch(instr, - ReverseConditionForCmp(cond), + CommuteCondition(cond), ToRegister32(right), ToOperand32I(left)); } } else { - ASSERT(instr->hydrogen_value()->representation().IsSmi()); + DCHECK(instr->hydrogen_value()->representation().IsSmi()); if (right->IsConstantOperand()) { int32_t value = ToInteger32(LConstantOperand::cast(right)); EmitCompareAndBranch(instr, @@ -2475,10 +2506,10 @@ void LCodeGen::DoCompareNumericAndBranch(LCompareNumericAndBranch* instr) { ToRegister(left), Operand(Smi::FromInt(value))); } else if (left->IsConstantOperand()) { - // Transpose the operands and reverse the condition. + // Commute the operands and the condition. int32_t value = ToInteger32(LConstantOperand::cast(left)); EmitCompareAndBranch(instr, - ReverseConditionForCmp(cond), + CommuteCondition(cond), ToRegister(right), Operand(Smi::FromInt(value))); } else { @@ -2501,12 +2532,12 @@ void LCodeGen::DoCmpObjectEqAndBranch(LCmpObjectEqAndBranch* instr) { void LCodeGen::DoCmpT(LCmpT* instr) { - ASSERT(ToRegister(instr->context()).is(cp)); + DCHECK(ToRegister(instr->context()).is(cp)); Token::Value op = instr->op(); Condition cond = TokenToCondition(op, false); - ASSERT(ToRegister(instr->left()).Is(x1)); - ASSERT(ToRegister(instr->right()).Is(x0)); + DCHECK(ToRegister(instr->left()).Is(x1)); + DCHECK(ToRegister(instr->right()).Is(x0)); Handle<Code> ic = CompareIC::GetUninitialized(isolate(), op); CallCode(ic, RelocInfo::CODE_TARGET, instr); // Signal that we don't inline smi code before this stub. @@ -2514,7 +2545,7 @@ void LCodeGen::DoCmpT(LCmpT* instr) { // Return true or false depending on CompareIC result. // This instruction is marked as call. We can clobber any register. - ASSERT(instr->IsMarkedAsCall()); + DCHECK(instr->IsMarkedAsCall()); __ LoadTrueFalseRoots(x1, x2); __ Cmp(x0, 0); __ Csel(ToRegister(instr->result()), x1, x2, cond); @@ -2522,9 +2553,17 @@ void LCodeGen::DoCmpT(LCmpT* instr) { void LCodeGen::DoConstantD(LConstantD* instr) { - ASSERT(instr->result()->IsDoubleRegister()); + DCHECK(instr->result()->IsDoubleRegister()); DoubleRegister result = ToDoubleRegister(instr->result()); - __ Fmov(result, instr->value()); + if (instr->value() == 0) { + if (copysign(1.0, instr->value()) == 1.0) { + __ Fmov(result, fp_zero); + } else { + __ Fneg(result, fp_zero); + } + } else { + __ Fmov(result, instr->value()); + } } @@ -2534,7 +2573,7 @@ void LCodeGen::DoConstantE(LConstantE* instr) { void LCodeGen::DoConstantI(LConstantI* instr) { - ASSERT(is_int32(instr->value())); + DCHECK(is_int32(instr->value())); // Cast the value here to ensure that the value isn't sign extended by the // implicit Operand constructor. 
__ Mov(ToRegister32(instr->result()), static_cast<uint32_t>(instr->value())); @@ -2549,13 +2588,6 @@ void LCodeGen::DoConstantS(LConstantS* instr) { void LCodeGen::DoConstantT(LConstantT* instr) { Handle<Object> object = instr->value(isolate()); AllowDeferredHandleDereference smi_check; - if (instr->hydrogen()->HasObjectMap()) { - Handle<Map> object_map = instr->hydrogen()->ObjectMap().handle(); - ASSERT(object->IsHeapObject()); - ASSERT(!object_map->is_stable() || - *object_map == Handle<HeapObject>::cast(object)->map()); - USE(object_map); - } __ LoadObject(ToRegister(instr->result()), object); } @@ -2567,7 +2599,7 @@ void LCodeGen::DoContext(LContext* instr) { __ Ldr(result, MemOperand(fp, StandardFrameConstants::kContextOffset)); } else { // If there is no frame, the context must be in cp. - ASSERT(result.is(cp)); + DCHECK(result.is(cp)); } } @@ -2592,7 +2624,7 @@ void LCodeGen::DoCheckValue(LCheckValue* instr) { void LCodeGen::DoLazyBailout(LLazyBailout* instr) { last_lazy_deopt_pc_ = masm()->pc_offset(); - ASSERT(instr->HasEnvironment()); + DCHECK(instr->HasEnvironment()); LEnvironment* env = instr->environment(); RegisterEnvironmentForDeoptimization(env, Safepoint::kLazyDeopt); safepoints_.RecordLazyDeoptimizationIndex(env->deoptimization_index()); @@ -2607,8 +2639,8 @@ void LCodeGen::DoDateField(LDateField* instr) { Smi* index = instr->index(); Label runtime, done; - ASSERT(object.is(result) && object.Is(x0)); - ASSERT(instr->IsMarkedAsCall()); + DCHECK(object.is(result) && object.Is(x0)); + DCHECK(instr->IsMarkedAsCall()); DeoptimizeIfSmi(object, instr->environment()); __ CompareObjectType(object, temp1, temp1, JS_DATE_TYPE); @@ -2657,19 +2689,20 @@ void LCodeGen::DoDivByPowerOf2I(LDivByPowerOf2I* instr) { Register dividend = ToRegister32(instr->dividend()); int32_t divisor = instr->divisor(); Register result = ToRegister32(instr->result()); - ASSERT(divisor == kMinInt || IsPowerOf2(Abs(divisor))); - ASSERT(!result.is(dividend)); + DCHECK(divisor == kMinInt || IsPowerOf2(Abs(divisor))); + DCHECK(!result.is(dividend)); // Check for (0 / -x) that will produce negative zero. HDiv* hdiv = instr->hydrogen(); if (hdiv->CheckFlag(HValue::kBailoutOnMinusZero) && divisor < 0) { - __ Cmp(dividend, 0); - DeoptimizeIf(eq, instr->environment()); + DeoptimizeIfZero(dividend, instr->environment()); } // Check for (kMinInt / -1). if (hdiv->CheckFlag(HValue::kCanOverflow) && divisor == -1) { - __ Cmp(dividend, kMinInt); - DeoptimizeIf(eq, instr->environment()); + // Test dividend for kMinInt by subtracting one (cmp) and checking for + // overflow. + __ Cmp(dividend, 1); + DeoptimizeIf(vs, instr->environment()); } // Deoptimize if remainder will not be 0. 
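// Illustrative aside (editorial; not part of the patch): the
// Cmp(dividend, 1) / DeoptimizeIf(vs, ...) pair above works because signed
// (x - 1) overflows exactly when x == kMinInt, so checking the V flag
// singles out that one value without materializing the kMinInt constant.
// Checked portably:
#include <cstdint>
#include <limits>
bool SubOneOverflows(int32_t x) {
  return static_cast<int64_t>(x) - 1 < std::numeric_limits<int32_t>::min();
}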
if (!hdiv->CheckFlag(HInstruction::kAllUsesTruncatingToInt32) && @@ -2701,7 +2734,7 @@ void LCodeGen::DoDivByConstI(LDivByConstI* instr) { Register dividend = ToRegister32(instr->dividend()); int32_t divisor = instr->divisor(); Register result = ToRegister32(instr->result()); - ASSERT(!AreAliased(dividend, result)); + DCHECK(!AreAliased(dividend, result)); if (divisor == 0) { Deoptimize(instr->environment()); @@ -2719,7 +2752,7 @@ void LCodeGen::DoDivByConstI(LDivByConstI* instr) { if (!hdiv->CheckFlag(HInstruction::kAllUsesTruncatingToInt32)) { Register temp = ToRegister32(instr->temp()); - ASSERT(!AreAliased(dividend, result, temp)); + DCHECK(!AreAliased(dividend, result, temp)); __ Sxtw(dividend.X(), dividend); __ Mov(temp, divisor); __ Smsubl(temp.X(), result, temp, dividend.X()); @@ -2740,7 +2773,7 @@ void LCodeGen::DoDivI(LDivI* instr) { __ Sdiv(result, dividend, divisor); if (hdiv->CheckFlag(HValue::kAllUsesTruncatingToInt32)) { - ASSERT_EQ(NULL, instr->temp()); + DCHECK_EQ(NULL, instr->temp()); return; } @@ -2813,9 +2846,9 @@ void LCodeGen::DoDummyUse(LDummyUse* instr) { void LCodeGen::DoFunctionLiteral(LFunctionLiteral* instr) { - ASSERT(ToRegister(instr->context()).is(cp)); + DCHECK(ToRegister(instr->context()).is(cp)); // FunctionLiteral instruction is marked as call, we can trash any register. - ASSERT(instr->IsMarkedAsCall()); + DCHECK(instr->IsMarkedAsCall()); // Use the fast case closure allocation code that allocates in new // space for nested functions that don't need literals cloning. @@ -2831,7 +2864,7 @@ void LCodeGen::DoFunctionLiteral(LFunctionLiteral* instr) { __ Mov(x1, Operand(pretenure ? factory()->true_value() : factory()->false_value())); __ Push(cp, x2, x1); - CallRuntime(Runtime::kHiddenNewClosure, 3, instr); + CallRuntime(Runtime::kNewClosure, 3, instr); } } @@ -2861,8 +2894,8 @@ void LCodeGen::DoForInPrepareMap(LForInPrepareMap* instr) { Register object = ToRegister(instr->object()); Register null_value = x5; - ASSERT(instr->IsMarkedAsCall()); - ASSERT(object.Is(x0)); + DCHECK(instr->IsMarkedAsCall()); + DCHECK(object.Is(x0)); DeoptimizeIfRoot(object, Heap::kUndefinedValueRootIndex, instr->environment()); @@ -2902,7 +2935,7 @@ void LCodeGen::DoGetCachedArrayIndex(LGetCachedArrayIndex* instr) { __ AssertString(input); // Assert that we can use a W register load to get the hash. - ASSERT((String::kHashShift + String::kArrayIndexValueBits) < kWRegSizeInBits); + DCHECK((String::kHashShift + String::kArrayIndexValueBits) < kWRegSizeInBits); __ Ldr(result.W(), FieldMemOperand(input, String::kHashFieldOffset)); __ IndexFromHash(result, result); } @@ -2927,7 +2960,7 @@ void LCodeGen::DoHasCachedArrayIndexAndBranch( Register temp = ToRegister32(instr->temp()); // Assert that the cache status bits fit in a W register. 
- ASSERT(is_uint32(String::kContainsCachedArrayIndexMask)); + DCHECK(is_uint32(String::kContainsCachedArrayIndexMask)); __ Ldr(temp, FieldMemOperand(input, String::kHashFieldOffset)); __ Tst(temp, String::kContainsCachedArrayIndexMask); EmitBranch(instr, eq); @@ -2951,7 +2984,7 @@ static InstanceType TestType(HHasInstanceTypeAndBranch* instr) { InstanceType from = instr->from(); InstanceType to = instr->to(); if (from == FIRST_TYPE) return to; - ASSERT((from == to) || (to == LAST_TYPE)); + DCHECK((from == to) || (to == LAST_TYPE)); return from; } @@ -2972,7 +3005,7 @@ void LCodeGen::DoHasInstanceTypeAndBranch(LHasInstanceTypeAndBranch* instr) { Register input = ToRegister(instr->value()); Register scratch = ToRegister(instr->temp()); - if (!instr->hydrogen()->value()->IsHeapObject()) { + if (!instr->hydrogen()->value()->type().IsHeapObject()) { __ JumpIfSmi(input, instr->FalseLabel(chunk_)); } __ CompareObjectType(input, scratch, scratch, TestType(instr->hydrogen())); @@ -2992,10 +3025,10 @@ void LCodeGen::DoInnerAllocatedObject(LInnerAllocatedObject* instr) { void LCodeGen::DoInstanceOf(LInstanceOf* instr) { - ASSERT(ToRegister(instr->context()).is(cp)); + DCHECK(ToRegister(instr->context()).is(cp)); // Assert that the arguments are in the registers expected by InstanceofStub. - ASSERT(ToRegister(instr->left()).Is(InstanceofStub::left())); - ASSERT(ToRegister(instr->right()).Is(InstanceofStub::right())); + DCHECK(ToRegister(instr->left()).Is(InstanceofStub::left())); + DCHECK(ToRegister(instr->right()).Is(InstanceofStub::right())); InstanceofStub stub(isolate(), InstanceofStub::kArgsInRegisters); CallCode(stub.GetCode(), RelocInfo::CODE_TARGET, instr); @@ -3034,10 +3067,10 @@ void LCodeGen::DoInstanceOfKnownGlobal(LInstanceOfKnownGlobal* instr) { Register map = x5; // This instruction is marked as call. We can clobber any register. - ASSERT(instr->IsMarkedAsCall()); + DCHECK(instr->IsMarkedAsCall()); // We must take into account that object is in x11. - ASSERT(object.Is(x11)); + DCHECK(object.Is(x11)); Register scratch = x10; // A Smi is not instance of anything. @@ -3055,15 +3088,15 @@ void LCodeGen::DoInstanceOfKnownGlobal(LInstanceOfKnownGlobal* instr) { __ bind(&map_check); // Will be patched with the cached map. Handle<Cell> cell = factory()->NewCell(factory()->the_hole_value()); - __ LoadRelocated(scratch, Operand(Handle<Object>(cell))); + __ ldr(scratch, Immediate(Handle<Object>(cell))); __ ldr(scratch, FieldMemOperand(scratch, PropertyCell::kValueOffset)); __ cmp(map, scratch); __ b(&cache_miss, ne); // The address of this instruction is computed relative to the map check // above, so check the size of the code generated. - ASSERT(masm()->InstructionsGeneratedSince(&map_check) == 4); + DCHECK(masm()->InstructionsGeneratedSince(&map_check) == 4); // Will be patched with the cached result. - __ LoadRelocated(result, Operand(factory()->the_hole_value())); + __ ldr(result, Immediate(factory()->the_hole_value())); } __ B(&done); @@ -3096,7 +3129,7 @@ void LCodeGen::DoInstanceOfKnownGlobal(LInstanceOfKnownGlobal* instr) { void LCodeGen::DoDeferredInstanceOfKnownGlobal(LInstanceOfKnownGlobal* instr) { Register result = ToRegister(instr->result()); - ASSERT(result.Is(x0)); // InstanceofStub returns its result in x0. + DCHECK(result.Is(x0)); // InstanceofStub returns its result in x0. 
InstanceofStub::Flags flags = InstanceofStub::kNoFlags; flags = static_cast<InstanceofStub::Flags>( flags | InstanceofStub::kArgsInRegisters); @@ -3105,11 +3138,11 @@ void LCodeGen::DoDeferredInstanceOfKnownGlobal(LInstanceOfKnownGlobal* instr) { flags = static_cast<InstanceofStub::Flags>( flags | InstanceofStub::kCallSiteInlineCheck); - PushSafepointRegistersScope scope(this, Safepoint::kWithRegisters); + PushSafepointRegistersScope scope(this); LoadContextFromDeferred(instr->context()); // Prepare InstanceofStub arguments. - ASSERT(ToRegister(instr->value()).Is(InstanceofStub::left())); + DCHECK(ToRegister(instr->value()).Is(InstanceofStub::left())); __ LoadObject(InstanceofStub::right(), instr->function()); InstanceofStub stub(isolate(), flags); @@ -3138,10 +3171,10 @@ void LCodeGen::DoInteger32ToDouble(LInteger32ToDouble* instr) { void LCodeGen::DoInvokeFunction(LInvokeFunction* instr) { - ASSERT(ToRegister(instr->context()).is(cp)); + DCHECK(ToRegister(instr->context()).is(cp)); // The function is required to be in x1. - ASSERT(ToRegister(instr->function()).is(x1)); - ASSERT(instr->HasPointerMap()); + DCHECK(ToRegister(instr->function()).is(x1)); + DCHECK(instr->HasPointerMap()); Handle<JSFunction> known_function = instr->hydrogen()->known_function(); if (known_function.is_null()) { @@ -3226,7 +3259,7 @@ void LCodeGen::DoIsStringAndBranch(LIsStringAndBranch* instr) { Register scratch = ToRegister(instr->temp()); SmiCheck check_needed = - instr->hydrogen()->value()->IsHeapObject() + instr->hydrogen()->value()->type().IsHeapObject() ? OMIT_SMI_CHECK : INLINE_SMI_CHECK; Condition true_cond = EmitIsString(val, scratch, instr->FalseLabel(chunk_), check_needed); @@ -3246,7 +3279,7 @@ void LCodeGen::DoIsUndetectableAndBranch(LIsUndetectableAndBranch* instr) { Register input = ToRegister(instr->value()); Register temp = ToRegister(instr->temp()); - if (!instr->hydrogen()->value()->IsHeapObject()) { + if (!instr->hydrogen()->value()->type().IsHeapObject()) { __ JumpIfSmi(input, instr->FalseLabel(chunk_)); } __ Ldr(temp, FieldMemOperand(input, HeapObject::kMapOffset)); @@ -3299,16 +3332,6 @@ void LCodeGen::DoLoadFunctionPrototype(LLoadFunctionPrototype* instr) { Register result = ToRegister(instr->result()); Register temp = ToRegister(instr->temp()); - // Check that the function really is a function. Leaves map in the result - // register. - __ CompareObjectType(function, result, temp, JS_FUNCTION_TYPE); - DeoptimizeIf(ne, instr->environment()); - - // Make sure that the function has an instance prototype. - Label non_instance; - __ Ldrb(temp, FieldMemOperand(result, Map::kBitFieldOffset)); - __ Tbnz(temp, Map::kHasNonInstancePrototype, &non_instance); - // Get the prototype or initial map from the function. __ Ldr(result, FieldMemOperand(function, JSFunction::kPrototypeOrInitialMapOffset)); @@ -3324,12 +3347,6 @@ void LCodeGen::DoLoadFunctionPrototype(LLoadFunctionPrototype* instr) { // Get the prototype from the initial map. __ Ldr(result, FieldMemOperand(result, Map::kPrototypeOffset)); - __ B(&done); - - // Non-instance prototype: fetch prototype from constructor field in initial - // map. - __ Bind(&non_instance); - __ Ldr(result, FieldMemOperand(result, Map::kConstructorOffset)); // All done. 
__ Bind(&done); @@ -3348,10 +3365,19 @@ void LCodeGen::DoLoadGlobalCell(LLoadGlobalCell* instr) { void LCodeGen::DoLoadGlobalGeneric(LLoadGlobalGeneric* instr) { - ASSERT(ToRegister(instr->context()).is(cp)); - ASSERT(ToRegister(instr->global_object()).Is(x0)); - ASSERT(ToRegister(instr->result()).Is(x0)); - __ Mov(x2, Operand(instr->name())); + DCHECK(ToRegister(instr->context()).is(cp)); + DCHECK(ToRegister(instr->global_object()).is(LoadIC::ReceiverRegister())); + DCHECK(ToRegister(instr->result()).Is(x0)); + __ Mov(LoadIC::NameRegister(), Operand(instr->name())); + if (FLAG_vector_ics) { + Register vector = ToRegister(instr->temp_vector()); + DCHECK(vector.is(LoadIC::VectorRegister())); + __ Mov(vector, instr->hydrogen()->feedback_vector()); + // No need to allocate this register. + DCHECK(LoadIC::SlotRegister().is(x0)); + __ Mov(LoadIC::SlotRegister(), + Smi::FromInt(instr->hydrogen()->slot())); + } ContextualMode mode = instr->for_typeof() ? NOT_CONTEXTUAL : CONTEXTUAL; Handle<Code> ic = LoadIC::initialize_stub(isolate(), mode); CallCode(ic, RelocInfo::CODE_TARGET, instr); @@ -3366,29 +3392,25 @@ MemOperand LCodeGen::PrepareKeyedExternalArrayOperand( bool key_is_constant, int constant_key, ElementsKind elements_kind, - int additional_index) { + int base_offset) { int element_size_shift = ElementsKindToShiftSize(elements_kind); - int additional_offset = additional_index << element_size_shift; - if (IsFixedTypedArrayElementsKind(elements_kind)) { - additional_offset += FixedTypedArrayBase::kDataOffset - kHeapObjectTag; - } if (key_is_constant) { int key_offset = constant_key << element_size_shift; - return MemOperand(base, key_offset + additional_offset); + return MemOperand(base, key_offset + base_offset); } if (key_is_smi) { __ Add(scratch, base, Operand::UntagSmiAndScale(key, element_size_shift)); - return MemOperand(scratch, additional_offset); + return MemOperand(scratch, base_offset); } - if (additional_offset == 0) { + if (base_offset == 0) { return MemOperand(base, key, SXTW, element_size_shift); } - ASSERT(!AreAliased(scratch, key)); - __ Add(scratch, base, additional_offset); + DCHECK(!AreAliased(scratch, key)); + __ Add(scratch, base, base_offset); return MemOperand(scratch, key, SXTW, element_size_shift); } @@ -3403,7 +3425,7 @@ void LCodeGen::DoLoadKeyedExternal(LLoadKeyedExternal* instr) { Register key = no_reg; int constant_key = 0; if (key_is_constant) { - ASSERT(instr->temp() == NULL); + DCHECK(instr->temp() == NULL); constant_key = ToInteger32(LConstantOperand::cast(instr->key())); if (constant_key & 0xf0000000) { Abort(kArrayIndexConstantValueTooBig); @@ -3417,7 +3439,7 @@ void LCodeGen::DoLoadKeyedExternal(LLoadKeyedExternal* instr) { PrepareKeyedExternalArrayOperand(key, ext_ptr, scratch, key_is_smi, key_is_constant, constant_key, elements_kind, - instr->additional_index()); + instr->base_offset()); if ((elements_kind == EXTERNAL_FLOAT32_ELEMENTS) || (elements_kind == FLOAT32_ELEMENTS)) { @@ -3488,8 +3510,9 @@ MemOperand LCodeGen::PrepareKeyedArrayOperand(Register base, bool key_is_tagged, ElementsKind elements_kind, Representation representation, - int additional_index) { - STATIC_ASSERT((kSmiValueSize == 32) && (kSmiShift == 32) && (kSmiTag == 0)); + int base_offset) { + STATIC_ASSERT(static_cast<unsigned>(kSmiValueSize) == kWRegSizeInBits); + STATIC_ASSERT(kSmiTag == 0); int element_size_shift = ElementsKindToShiftSize(elements_kind); // Even though the HLoad/StoreKeyed instructions force the input @@ -3499,25 +3522,23 @@ MemOperand 
LCodeGen::PrepareKeyedArrayOperand(Register base, if (key_is_tagged) { __ Add(base, elements, Operand::UntagSmiAndScale(key, element_size_shift)); if (representation.IsInteger32()) { - ASSERT(elements_kind == FAST_SMI_ELEMENTS); - // Read or write only the most-significant 32 bits in the case of fast smi - // arrays. - return UntagSmiFieldMemOperand(base, additional_index); + DCHECK(elements_kind == FAST_SMI_ELEMENTS); + // Read or write only the smi payload in the case of fast smi arrays. + return UntagSmiMemOperand(base, base_offset); } else { - return FieldMemOperand(base, additional_index); + return MemOperand(base, base_offset); } } else { // Sign extend key because it could be a 32-bit negative value or contain // garbage in the top 32-bits. The address computation happens in 64-bit. - ASSERT((element_size_shift >= 0) && (element_size_shift <= 4)); + DCHECK((element_size_shift >= 0) && (element_size_shift <= 4)); if (representation.IsInteger32()) { - ASSERT(elements_kind == FAST_SMI_ELEMENTS); - // Read or write only the most-significant 32 bits in the case of fast smi - // arrays. + DCHECK(elements_kind == FAST_SMI_ELEMENTS); + // Read or write only the smi payload in the case of fast smi arrays. __ Add(base, elements, Operand(key, SXTW, element_size_shift)); - return UntagSmiFieldMemOperand(base, additional_index); + return UntagSmiMemOperand(base, base_offset); } else { - __ Add(base, elements, additional_index - kHeapObjectTag); + __ Add(base, elements, base_offset); return MemOperand(base, key, SXTW, element_size_shift); } } @@ -3530,25 +3551,23 @@ void LCodeGen::DoLoadKeyedFixedDouble(LLoadKeyedFixedDouble* instr) { MemOperand mem_op; if (instr->key()->IsConstantOperand()) { - ASSERT(instr->hydrogen()->RequiresHoleCheck() || + DCHECK(instr->hydrogen()->RequiresHoleCheck() || (instr->temp() == NULL)); int constant_key = ToInteger32(LConstantOperand::cast(instr->key())); if (constant_key & 0xf0000000) { Abort(kArrayIndexConstantValueTooBig); } - int offset = FixedDoubleArray::OffsetOfElementAt(constant_key + - instr->additional_index()); - mem_op = FieldMemOperand(elements, offset); + int offset = instr->base_offset() + constant_key * kDoubleSize; + mem_op = MemOperand(elements, offset); } else { Register load_base = ToRegister(instr->temp()); Register key = ToRegister(instr->key()); bool key_is_tagged = instr->hydrogen()->key()->representation().IsSmi(); - int offset = FixedDoubleArray::OffsetOfElementAt(instr->additional_index()); mem_op = PrepareKeyedArrayOperand(load_base, elements, key, key_is_tagged, instr->hydrogen()->elements_kind(), instr->hydrogen()->representation(), - offset); + instr->base_offset()); } __ Ldr(result, mem_op); @@ -3572,27 +3591,26 @@ void LCodeGen::DoLoadKeyedFixed(LLoadKeyedFixed* instr) { Representation representation = instr->hydrogen()->representation(); if (instr->key()->IsConstantOperand()) { - ASSERT(instr->temp() == NULL); + DCHECK(instr->temp() == NULL); LConstantOperand* const_operand = LConstantOperand::cast(instr->key()); - int offset = FixedArray::OffsetOfElementAt(ToInteger32(const_operand) + - instr->additional_index()); + int offset = instr->base_offset() + + ToInteger32(const_operand) * kPointerSize; if (representation.IsInteger32()) { - ASSERT(instr->hydrogen()->elements_kind() == FAST_SMI_ELEMENTS); - STATIC_ASSERT((kSmiValueSize == 32) && (kSmiShift == 32) && - (kSmiTag == 0)); - mem_op = UntagSmiFieldMemOperand(elements, offset); + DCHECK(instr->hydrogen()->elements_kind() == FAST_SMI_ELEMENTS); + 
STATIC_ASSERT(static_cast<unsigned>(kSmiValueSize) == kWRegSizeInBits); + STATIC_ASSERT(kSmiTag == 0); + mem_op = UntagSmiMemOperand(elements, offset); } else { - mem_op = FieldMemOperand(elements, offset); + mem_op = MemOperand(elements, offset); } } else { Register load_base = ToRegister(instr->temp()); Register key = ToRegister(instr->key()); bool key_is_tagged = instr->hydrogen()->key()->representation().IsSmi(); - int offset = FixedArray::OffsetOfElementAt(instr->additional_index()); mem_op = PrepareKeyedArrayOperand(load_base, elements, key, key_is_tagged, instr->hydrogen()->elements_kind(), - representation, offset); + representation, instr->base_offset()); } __ Load(result, mem_op, representation); @@ -3609,14 +3627,23 @@ void LCodeGen::DoLoadKeyedFixed(LLoadKeyedFixed* instr) { void LCodeGen::DoLoadKeyedGeneric(LLoadKeyedGeneric* instr) { - ASSERT(ToRegister(instr->context()).is(cp)); - ASSERT(ToRegister(instr->object()).Is(x1)); - ASSERT(ToRegister(instr->key()).Is(x0)); + DCHECK(ToRegister(instr->context()).is(cp)); + DCHECK(ToRegister(instr->object()).is(LoadIC::ReceiverRegister())); + DCHECK(ToRegister(instr->key()).is(LoadIC::NameRegister())); + if (FLAG_vector_ics) { + Register vector = ToRegister(instr->temp_vector()); + DCHECK(vector.is(LoadIC::VectorRegister())); + __ Mov(vector, instr->hydrogen()->feedback_vector()); + // No need to allocate this register. + DCHECK(LoadIC::SlotRegister().is(x0)); + __ Mov(LoadIC::SlotRegister(), + Smi::FromInt(instr->hydrogen()->slot())); + } Handle<Code> ic = isolate()->builtins()->KeyedLoadIC_Initialize(); CallCode(ic, RelocInfo::CODE_TARGET, instr); - ASSERT(ToRegister(instr->result()).Is(x0)); + DCHECK(ToRegister(instr->result()).Is(x0)); } @@ -3650,7 +3677,8 @@ void LCodeGen::DoLoadNamedField(LLoadNamedField* instr) { if (access.representation().IsSmi() && instr->hydrogen()->representation().IsInteger32()) { // Read int value directly from upper half of the smi. - STATIC_ASSERT(kSmiValueSize == 32 && kSmiShift == 32 && kSmiTag == 0); + STATIC_ASSERT(static_cast<unsigned>(kSmiValueSize) == kWRegSizeInBits); + STATIC_ASSERT(kSmiTag == 0); __ Load(result, UntagSmiFieldMemOperand(source, offset), Representation::Integer32()); } else { @@ -3660,15 +3688,24 @@ void LCodeGen::DoLoadNamedField(LLoadNamedField* instr) { void LCodeGen::DoLoadNamedGeneric(LLoadNamedGeneric* instr) { - ASSERT(ToRegister(instr->context()).is(cp)); - // LoadIC expects x2 to hold the name, and x0 to hold the receiver. - ASSERT(ToRegister(instr->object()).is(x0)); - __ Mov(x2, Operand(instr->name())); + DCHECK(ToRegister(instr->context()).is(cp)); + // LoadIC expects name and receiver in registers. + DCHECK(ToRegister(instr->object()).is(LoadIC::ReceiverRegister())); + __ Mov(LoadIC::NameRegister(), Operand(instr->name())); + if (FLAG_vector_ics) { + Register vector = ToRegister(instr->temp_vector()); + DCHECK(vector.is(LoadIC::VectorRegister())); + __ Mov(vector, instr->hydrogen()->feedback_vector()); + // No need to allocate this register. + DCHECK(LoadIC::SlotRegister().is(x0)); + __ Mov(LoadIC::SlotRegister(), + Smi::FromInt(instr->hydrogen()->slot())); + } Handle<Code> ic = LoadIC::initialize_stub(isolate(), NOT_CONTEXTUAL); CallCode(ic, RelocInfo::CODE_TARGET, instr); - ASSERT(ToRegister(instr->result()).is(x0)); + DCHECK(ToRegister(instr->result()).is(x0)); } @@ -3714,8 +3751,8 @@ void LCodeGen::DoDeferredMathAbsTagged(LMathAbsTagged* instr, // - The (smi) input -0x80000000, produces +0x80000000, which does not fit // a smi. 
In this case, the inline code sets the result and jumps directly // to the allocation_entry label. - ASSERT(instr->context() != NULL); - ASSERT(ToRegister(instr->context()).is(cp)); + DCHECK(instr->context() != NULL); + DCHECK(ToRegister(instr->context()).is(cp)); Register input = ToRegister(instr->value()); Register temp1 = ToRegister(instr->temp1()); Register temp2 = ToRegister(instr->temp2()); @@ -3761,8 +3798,8 @@ void LCodeGen::DoDeferredMathAbsTagged(LMathAbsTagged* instr, __ Bind(&result_ok); } - { PushSafepointRegistersScope scope(this, Safepoint::kWithRegisters); - CallRuntimeFromDeferred(Runtime::kHiddenAllocateHeapNumber, 0, instr, + { PushSafepointRegistersScope scope(this); + CallRuntimeFromDeferred(Runtime::kAllocateHeapNumber, 0, instr, instr->context()); __ StoreToSafepointRegisterSlot(x0, result); } @@ -3789,12 +3826,12 @@ void LCodeGen::DoMathAbsTagged(LMathAbsTagged* instr) { // TODO(jbramley): The early-exit mechanism would skip the new frame handling // in GenerateDeferredCode. Tidy this up. - ASSERT(!NeedsDeferredFrame()); + DCHECK(!NeedsDeferredFrame()); DeferredMathAbsTagged* deferred = new(zone()) DeferredMathAbsTagged(this, instr); - ASSERT(instr->hydrogen()->value()->representation().IsTagged() || + DCHECK(instr->hydrogen()->value()->representation().IsTagged() || instr->hydrogen()->value()->representation().IsSmi()); Register input = ToRegister(instr->value()); Register result_bits = ToRegister(instr->temp3()); @@ -3870,9 +3907,14 @@ void LCodeGen::DoFlooringDivByPowerOf2I(LFlooringDivByPowerOf2I* instr) { Register result = ToRegister32(instr->result()); int32_t divisor = instr->divisor(); + // If the divisor is 1, return the dividend. + if (divisor == 1) { + __ Mov(result, dividend, kDiscardForSameWReg); + return; + } + // If the divisor is positive, things are easy: There can be no deopts and we // can simply do an arithmetic right shift. - if (divisor == 1) return; int32_t shift = WhichPowerOf2Abs(divisor); if (divisor > 1) { __ Mov(result, Operand(dividend, ASR, shift)); @@ -3885,26 +3927,22 @@ void LCodeGen::DoFlooringDivByPowerOf2I(LFlooringDivByPowerOf2I* instr) { DeoptimizeIf(eq, instr->environment()); } - // If the negation could not overflow, simply shifting is OK. - if (!instr->hydrogen()->CheckFlag(HValue::kLeftCanBeMinInt)) { - __ Mov(result, Operand(dividend, ASR, shift)); + // Dividing by -1 is basically negation, unless we overflow. + if (divisor == -1) { + if (instr->hydrogen()->CheckFlag(HValue::kLeftCanBeMinInt)) { + DeoptimizeIf(vs, instr->environment()); + } return; } - // Dividing by -1 is basically negation, unless we overflow. - if (divisor == -1) { - DeoptimizeIf(vs, instr->environment()); + // If the negation could not overflow, simply shifting is OK. + if (!instr->hydrogen()->CheckFlag(HValue::kLeftCanBeMinInt)) { + __ Mov(result, Operand(dividend, ASR, shift)); return; } - // Using a conditional data processing instruction would need 1 more register. 
- Label not_kmin_int, done; - __ B(vc, &not_kmin_int); - __ Mov(result, kMinInt / divisor); - __ B(&done); - __ bind(&not_kmin_int); - __ Mov(result, Operand(dividend, ASR, shift)); - __ bind(&done); + __ Asr(result, result, shift); + __ Csel(result, result, kMinInt / divisor, vc); } @@ -3912,7 +3950,7 @@ void LCodeGen::DoFlooringDivByConstI(LFlooringDivByConstI* instr) { Register dividend = ToRegister32(instr->dividend()); int32_t divisor = instr->divisor(); Register result = ToRegister32(instr->result()); - ASSERT(!AreAliased(dividend, result)); + DCHECK(!AreAliased(dividend, result)); if (divisor == 0) { Deoptimize(instr->environment()); @@ -3922,8 +3960,7 @@ void LCodeGen::DoFlooringDivByConstI(LFlooringDivByConstI* instr) { // Check for (0 / -x) that will produce negative zero. HMathFloorOfDiv* hdiv = instr->hydrogen(); if (hdiv->CheckFlag(HValue::kBailoutOnMinusZero) && divisor < 0) { - __ Cmp(dividend, 0); - DeoptimizeIf(eq, instr->environment()); + DeoptimizeIfZero(dividend, instr->environment()); } // Easy case: We need no dynamic check for the dividend and the flooring @@ -3938,19 +3975,19 @@ // In the general case we may need to adjust before and after the truncating // division to get a flooring division. Register temp = ToRegister32(instr->temp()); - ASSERT(!AreAliased(temp, dividend, result)); + DCHECK(!AreAliased(temp, dividend, result)); Label needs_adjustment, done; __ Cmp(dividend, 0); __ B(divisor > 0 ? lt : gt, &needs_adjustment); __ TruncatingDiv(result, dividend, Abs(divisor)); if (divisor < 0) __ Neg(result, result); __ B(&done); - __ bind(&needs_adjustment); + __ Bind(&needs_adjustment); __ Add(temp, dividend, Operand(divisor > 0 ? 1 : -1)); __ TruncatingDiv(result, temp, Abs(divisor)); if (divisor < 0) __ Neg(result, result); __ Sub(result, result, Operand(1)); - __ bind(&done); + __ Bind(&done); } @@ -4001,11 +4038,11 @@ void LCodeGen::DoMathLog(LMathLog* instr) { - ASSERT(instr->IsMarkedAsCall()); - ASSERT(ToDoubleRegister(instr->value()).is(d0)); + DCHECK(instr->IsMarkedAsCall()); + DCHECK(ToDoubleRegister(instr->value()).is(d0)); __ CallCFunction(ExternalReference::math_log_double_function(isolate()), 0, 1); - ASSERT(ToDoubleRegister(instr->result()).Is(d0)); + DCHECK(ToDoubleRegister(instr->result()).Is(d0)); } @@ -4044,13 +4081,13 @@ void LCodeGen::DoPower(LPower* instr) { Representation exponent_type = instr->hydrogen()->right()->representation(); // Having marked this as a call, we can use any registers. // Just make sure that the input/output registers are the expected ones.
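// Illustrative aside (editorial; not part of the patch): the
// adjust/divide/subtract sequence in DoFlooringDivByConstI above builds
// floor division on top of truncating division. When dividend and divisor
// have opposite signs, nudge the dividend one unit toward the divisor's
// sign, truncate, then subtract one. A sketch using plain / in place of
// TruncatingDiv (assumes divisor != 0 and no kMinInt overflow):
#include <cstdint>
int32_t FlooringDiv(int32_t dividend, int32_t divisor) {
  bool needs_adjustment = (divisor > 0) ? (dividend < 0) : (dividend > 0);
  if (!needs_adjustment) return dividend / divisor;  // truncation == floor
  return (dividend + (divisor > 0 ? 1 : -1)) / divisor - 1;
}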
- ASSERT(!instr->right()->IsDoubleRegister() || + DCHECK(!instr->right()->IsDoubleRegister() || ToDoubleRegister(instr->right()).is(d1)); - ASSERT(exponent_type.IsInteger32() || !instr->right()->IsRegister() || + DCHECK(exponent_type.IsInteger32() || !instr->right()->IsRegister() || ToRegister(instr->right()).is(x11)); - ASSERT(!exponent_type.IsInteger32() || ToRegister(instr->right()).is(x12)); - ASSERT(ToDoubleRegister(instr->left()).is(d0)); - ASSERT(ToDoubleRegister(instr->result()).is(d0)); + DCHECK(!exponent_type.IsInteger32() || ToRegister(instr->right()).is(x12)); + DCHECK(ToDoubleRegister(instr->left()).is(d0)); + DCHECK(ToDoubleRegister(instr->result()).is(d0)); if (exponent_type.IsSmi()) { MathPowStub stub(isolate(), MathPowStub::TAGGED); @@ -4072,7 +4109,7 @@ void LCodeGen::DoPower(LPower* instr) { MathPowStub stub(isolate(), MathPowStub::INTEGER); __ CallStub(&stub); } else { - ASSERT(exponent_type.IsDouble()); + DCHECK(exponent_type.IsDouble()); MathPowStub stub(isolate(), MathPowStub::DOUBLE); __ CallStub(&stub); } @@ -4084,7 +4121,7 @@ void LCodeGen::DoMathRoundD(LMathRoundD* instr) { DoubleRegister result = ToDoubleRegister(instr->result()); DoubleRegister scratch_d = double_scratch(); - ASSERT(!AreAliased(input, result, scratch_d)); + DCHECK(!AreAliased(input, result, scratch_d)); Label done; @@ -4111,9 +4148,9 @@ void LCodeGen::DoMathRoundD(LMathRoundD* instr) { void LCodeGen::DoMathRoundI(LMathRoundI* instr) { DoubleRegister input = ToDoubleRegister(instr->value()); - DoubleRegister temp1 = ToDoubleRegister(instr->temp1()); + DoubleRegister temp = ToDoubleRegister(instr->temp1()); + DoubleRegister dot_five = double_scratch(); Register result = ToRegister(instr->result()); - Label try_rounding; Label done; // Math.round() rounds to the nearest integer, with ties going towards @@ -4124,46 +4161,53 @@ void LCodeGen::DoMathRoundI(LMathRoundI* instr) { // that -0.0 rounds to itself, and values -0.5 <= input < 0 also produce a // result of -0.0. - DoubleRegister dot_five = double_scratch(); + // Add 0.5 and round towards -infinity. __ Fmov(dot_five, 0.5); - __ Fabs(temp1, input); - __ Fcmp(temp1, dot_five); - // If input is in [-0.5, -0], the result is -0. - // If input is in [+0, +0.5[, the result is +0. - // If the input is +0.5, the result is 1. - __ B(hi, &try_rounding); // hi so NaN will also branch. + __ Fadd(temp, input, dot_five); + __ Fcvtms(result, temp); + + // The result is correct if: + // result is not 0, as the input could be NaN or [-0.5, -0.0]. + // result is not 1, as 0.499...94 will wrongly map to 1. + // result fits in 32 bits. + __ Cmp(result, Operand(result.W(), SXTW)); + __ Ccmp(result, 1, ZFlag, eq); + __ B(hi, &done); + + // At this point, we have to handle possible inputs of NaN or numbers in the + // range [-0.5, 1.5[, or numbers larger than 32 bits. + + // Deoptimize if the result > 1, as it must be larger than 32 bits. + __ Cmp(result, 1); + DeoptimizeIf(hi, instr->environment()); + // Deoptimize for negative inputs, which at this point are only numbers in + // the range [-0.5, -0.0] if (instr->hydrogen()->CheckFlag(HValue::kBailoutOnMinusZero)) { __ Fmov(result, input); - DeoptimizeIfNegative(result, instr->environment()); // [-0.5, -0.0]. + DeoptimizeIfNegative(result, instr->environment()); } - __ Fcmp(input, dot_five); - __ Mov(result, 1); // +0.5. - // Remaining cases: [+0, +0.5[ or [-0.5, +0.5[, depending on - // flag kBailoutOnMinusZero, will return 0 (xzr). 
- __ Csel(result, result, xzr, eq); - __ B(&done); - __ Bind(&try_rounding); - // Since we're providing a 32-bit result, we can implement ties-to-infinity by - // adding 0.5 to the input, then taking the floor of the result. This does not - // work for very large positive doubles because adding 0.5 would cause an - // intermediate rounding stage, so a different approach is necessary when a - // double result is needed. - __ Fadd(temp1, input, dot_five); - __ Fcvtms(result, temp1); - - // Deopt if - // * the input was NaN - // * the result is not representable using a 32-bit integer. - __ Fcmp(input, 0.0); - __ Ccmp(result, Operand(result.W(), SXTW), NoFlag, vc); - DeoptimizeIf(ne, instr->environment()); + // Deoptimize if the input was NaN. + __ Fcmp(input, dot_five); + DeoptimizeIf(vs, instr->environment()); + // Now, the only unhandled inputs are in the range [0.0, 1.5[ (or [-0.5, 1.5[ + // if we didn't generate a -0.0 bailout). If input >= 0.5 then return 1, + // else 0; we avoid dealing with 0.499...94 directly. + __ Cset(result, ge); __ Bind(&done); } +void LCodeGen::DoMathFround(LMathFround* instr) { + DoubleRegister input = ToDoubleRegister(instr->value()); + DoubleRegister result = ToDoubleRegister(instr->result()); + __ Fcvt(result.S(), input); + __ Fcvt(result, result.S()); +} + + void LCodeGen::DoMathSqrt(LMathSqrt* instr) { DoubleRegister input = ToDoubleRegister(instr->value()); DoubleRegister result = ToDoubleRegister(instr->result()); @@ -4188,7 +4232,7 @@ void LCodeGen::DoMathMinMax(LMathMinMax* instr) { __ Cmp(left, right); __ Csel(result, left, right, (op == HMathMinMax::kMathMax) ? ge : le); } else { - ASSERT(instr->hydrogen()->representation().IsDouble()); + DCHECK(instr->hydrogen()->representation().IsDouble()); DoubleRegister result = ToDoubleRegister(instr->result()); DoubleRegister left = ToDoubleRegister(instr->left()); DoubleRegister right = ToDoubleRegister(instr->right()); @@ -4196,7 +4240,7 @@ void LCodeGen::DoMathMinMax(LMathMinMax* instr) { if (op == HMathMinMax::kMathMax) { __ Fmax(result, left, right); } else { - ASSERT(op == HMathMinMax::kMathMin); + DCHECK(op == HMathMinMax::kMathMin); __ Fmin(result, left, right); } } @@ -4206,7 +4250,7 @@ void LCodeGen::DoModByPowerOf2I(LModByPowerOf2I* instr) { Register dividend = ToRegister32(instr->dividend()); int32_t divisor = instr->divisor(); - ASSERT(dividend.is(ToRegister32(instr->result()))); + DCHECK(dividend.is(ToRegister32(instr->result()))); // Theoretically, a variation of the branch-free code for integer division by // a power of 2 (calculating the remainder via an additional multiplication @@ -4218,8 +4262,7 @@ void LCodeGen::DoModByPowerOf2I(LModByPowerOf2I* instr) { int32_t mask = divisor < 0 ? -(divisor + 1) : (divisor - 1); Label dividend_is_not_negative, done; if (hmod->CheckFlag(HValue::kLeftCanBeNegative)) { - __ Cmp(dividend, 0); - __ B(pl, &dividend_is_not_negative); + __ Tbz(dividend, kWSignBit, &dividend_is_not_negative); // Note that this is correct even for kMinInt operands.
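// Illustrative aside (editorial; not part of the patch): the
// negate/mask/negate sequence that follows gives the remainder the sign of
// the dividend (as JS % requires), and doing the negation in unsigned
// arithmetic keeps the kMinInt case well defined, since -kMinInt is not
// representable in int32. Sketch:
#include <cstdint>
int32_t ModByPowerOf2(int32_t dividend, uint32_t mask) {  // mask = |divisor|-1
  if (dividend >= 0) return static_cast<int32_t>(dividend & mask);
  uint32_t magnitude = 0u - static_cast<uint32_t>(dividend);  // 2's complement
  return -static_cast<int32_t>(magnitude & mask);
}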
__ Neg(dividend, dividend); __ And(dividend, dividend, mask); @@ -4241,7 +4284,7 @@ void LCodeGen::DoModByConstI(LModByConstI* instr) { int32_t divisor = instr->divisor(); Register result = ToRegister32(instr->result()); Register temp = ToRegister32(instr->temp()); - ASSERT(!AreAliased(dividend, result, temp)); + DCHECK(!AreAliased(dividend, result, temp)); if (divisor == 0) { Deoptimize(instr->environment()); @@ -4285,14 +4328,14 @@ void LCodeGen::DoModI(LModI* instr) { void LCodeGen::DoMulConstIS(LMulConstIS* instr) { - ASSERT(instr->hydrogen()->representation().IsSmiOrInteger32()); + DCHECK(instr->hydrogen()->representation().IsSmiOrInteger32()); bool is_smi = instr->hydrogen()->representation().IsSmi(); Register result = is_smi ? ToRegister(instr->result()) : ToRegister32(instr->result()); Register left = is_smi ? ToRegister(instr->left()) : ToRegister32(instr->left()) ; int32_t right = ToInteger32(instr->right()); - ASSERT((right > -kMaxInt) || (right < kMaxInt)); + DCHECK((right > -kMaxInt) || (right < kMaxInt)); bool can_overflow = instr->hydrogen()->CheckFlag(HValue::kCanOverflow); bool bailout_on_minus_zero = @@ -4346,7 +4389,7 @@ void LCodeGen::DoMulConstIS(LMulConstIS* instr) { if (can_overflow) { Register scratch = result; - ASSERT(!AreAliased(scratch, left)); + DCHECK(!AreAliased(scratch, left)); __ Cls(scratch, left); __ Cmp(scratch, right_log2); DeoptimizeIf(lt, instr->environment()); @@ -4371,7 +4414,7 @@ void LCodeGen::DoMulConstIS(LMulConstIS* instr) { // For the following cases, we could perform a conservative overflow check // with CLS as above. However the few cycles saved are likely not worth // the risk of deoptimizing more often than required. - ASSERT(!can_overflow); + DCHECK(!can_overflow); if (right >= 0) { if (IsPowerOf2(right - 1)) { @@ -4469,7 +4512,7 @@ void LCodeGen::DoMulS(LMulS* instr) { __ SmiUntag(result, left); __ Mul(result, result, right); } else { - ASSERT(!left.Is(result)); + DCHECK(!left.Is(result)); // Registers result and right alias, left is distinct, or all registers // are distinct: untag right into result, and then multiply by left, // giving a tagged result. @@ -4487,14 +4530,14 @@ void LCodeGen::DoDeferredNumberTagD(LNumberTagD* instr) { Register result = ToRegister(instr->result()); __ Mov(result, 0); - PushSafepointRegistersScope scope(this, Safepoint::kWithRegisters); + PushSafepointRegistersScope scope(this); // NumberTagU and NumberTagD use the context from the frame, rather than // the environment's HContext or HInlinedContext value. - // They only call Runtime::kHiddenAllocateHeapNumber. + // They only call Runtime::kAllocateHeapNumber. // The corresponding HChange instructions are added in a phase that does // not have easy access to the local context. __ Ldr(cp, MemOperand(fp, StandardFrameConstants::kContextOffset)); - __ CallRuntimeSaveDoubles(Runtime::kHiddenAllocateHeapNumber); + __ CallRuntimeSaveDoubles(Runtime::kAllocateHeapNumber); RecordSafepointWithRegisters( instr->pointer_map(), 0, Safepoint::kNoLazyDeopt); __ StoreToSafepointRegisterSlot(x0, result); @@ -4552,15 +4595,15 @@ void LCodeGen::DoDeferredNumberTagU(LInstruction* instr, __ Mov(dst, 0); { // Preserve the value of all registers. - PushSafepointRegistersScope scope(this, Safepoint::kWithRegisters); + PushSafepointRegistersScope scope(this); // NumberTagU and NumberTagD use the context from the frame, rather than // the environment's HContext or HInlinedContext value. - // They only call Runtime::kHiddenAllocateHeapNumber. 
+ // They only call Runtime::kAllocateHeapNumber. // The corresponding HChange instructions are added in a phase that does // not have easy access to the local context. __ Ldr(cp, MemOperand(fp, StandardFrameConstants::kContextOffset)); - __ CallRuntimeSaveDoubles(Runtime::kHiddenAllocateHeapNumber); + __ CallRuntimeSaveDoubles(Runtime::kAllocateHeapNumber); RecordSafepointWithRegisters( instr->pointer_map(), 0, Safepoint::kNoLazyDeopt); __ StoreToSafepointRegisterSlot(x0, dst); @@ -4649,7 +4692,7 @@ void LCodeGen::DoNumberUntagD(LNumberUntagD* instr) { } } else { - ASSERT(mode == NUMBER_CANDIDATE_IS_SMI); + DCHECK(mode == NUMBER_CANDIDATE_IS_SMI); // Fall through to load_smi. } @@ -4669,7 +4712,7 @@ void LCodeGen::DoOsrEntry(LOsrEntry* instr) { // If the environment were already registered, we would have no way of // backpatching it with the spill slot operands. - ASSERT(!environment->HasBeenRegistered()); + DCHECK(!environment->HasBeenRegistered()); RegisterEnvironmentForDeoptimization(environment, Safepoint::kNoLazyDeopt); GenerateOsrPrologue(); @@ -4681,14 +4724,27 @@ void LCodeGen::DoParameter(LParameter* instr) { } -void LCodeGen::DoPushArgument(LPushArgument* instr) { - LOperand* argument = instr->value(); - if (argument->IsDoubleRegister() || argument->IsDoubleStackSlot()) { - Abort(kDoPushArgumentNotImplementedForDoubleType); - } else { - __ Push(ToRegister(argument)); - after_push_argument_ = true; +void LCodeGen::DoPreparePushArguments(LPreparePushArguments* instr) { + __ PushPreamble(instr->argc(), kPointerSize); +} + + +void LCodeGen::DoPushArguments(LPushArguments* instr) { + MacroAssembler::PushPopQueue args(masm()); + + for (int i = 0; i < instr->ArgumentCount(); ++i) { + LOperand* arg = instr->argument(i); + if (arg->IsDoubleRegister() || arg->IsDoubleStackSlot()) { + Abort(kDoPushArgumentNotImplementedForDoubleType); + return; + } + args.Queue(ToRegister(arg)); } + + // The preamble was done by LPreparePushArguments. + args.PushQueued(MacroAssembler::PushPopQueue::SKIP_PREAMBLE); + + after_push_argument_ = true; } @@ -4795,7 +4851,7 @@ void LCodeGen::DoSeqStringSetChar(LSeqStringSetChar* instr) { Register temp = ToRegister(instr->temp()); if (FLAG_debug_code) { - ASSERT(ToRegister(instr->context()).is(cp)); + DCHECK(ToRegister(instr->context()).is(cp)); Register index = ToRegister(instr->index()); static const uint32_t one_byte_seq_type = kSeqStringTag | kOneByteStringTag; static const uint32_t two_byte_seq_type = kSeqStringTag | kTwoByteStringTag; @@ -4865,7 +4921,7 @@ void LCodeGen::DoShiftI(LShiftI* instr) { default: UNREACHABLE(); } } else { - ASSERT(right_op->IsConstantOperand()); + DCHECK(right_op->IsConstantOperand()); int shift_count = JSShiftAmountFromLConstant(right_op); if (shift_count == 0) { if ((instr->op() == Token::SHR) && instr->can_deopt()) { @@ -4891,7 +4947,7 @@ void LCodeGen::DoShiftS(LShiftS* instr) { Register result = ToRegister(instr->result()); // Only ROR by register needs a temp. 
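// Illustrative aside (editorial; not part of the patch): the special-cased
// zero shift for Token::SHR in the shift handlers here exists because JS
// ">>>" reinterprets its operand as uint32 even when shifting by 0 bits, so
// a negative int32 input yields a value above INT32_MAX that no longer fits
// the untagged int32 representation; hence the can_deopt() path. Example:
#include <cstdint>
uint32_t JsShrByZero(int32_t v) {
  return static_cast<uint32_t>(v);  // (-1 >>> 0) === 4294967295 in JS
}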
- ASSERT(((instr->op() == Token::ROR) && right_op->IsRegister()) || + DCHECK(((instr->op() == Token::ROR) && right_op->IsRegister()) || (instr->temp() == NULL)); if (right_op->IsRegister()) { @@ -4928,7 +4984,7 @@ void LCodeGen::DoShiftS(LShiftS* instr) { default: UNREACHABLE(); } } else { - ASSERT(right_op->IsConstantOperand()); + DCHECK(right_op->IsConstantOperand()); int shift_count = JSShiftAmountFromLConstant(right_op); if (shift_count == 0) { if ((instr->op() == Token::SHR) && instr->can_deopt()) { @@ -4966,10 +5022,10 @@ void LCodeGen::DoDebugBreak(LDebugBreak* instr) { void LCodeGen::DoDeclareGlobals(LDeclareGlobals* instr) { - ASSERT(ToRegister(instr->context()).is(cp)); + DCHECK(ToRegister(instr->context()).is(cp)); Register scratch1 = x5; Register scratch2 = x6; - ASSERT(instr->IsMarkedAsCall()); + DCHECK(instr->IsMarkedAsCall()); ASM_UNIMPLEMENTED_BREAK("DoDeclareGlobals"); // TODO(all): if Mov could handle object in new space then it could be used @@ -4977,17 +5033,17 @@ void LCodeGen::DoDeclareGlobals(LDeclareGlobals* instr) { __ LoadHeapObject(scratch1, instr->hydrogen()->pairs()); __ Mov(scratch2, Smi::FromInt(instr->hydrogen()->flags())); __ Push(cp, scratch1, scratch2); // The context is the first argument. - CallRuntime(Runtime::kHiddenDeclareGlobals, 3, instr); + CallRuntime(Runtime::kDeclareGlobals, 3, instr); } void LCodeGen::DoDeferredStackCheck(LStackCheck* instr) { - PushSafepointRegistersScope scope(this, Safepoint::kWithRegisters); + PushSafepointRegistersScope scope(this); LoadContextFromDeferred(instr->context()); - __ CallRuntimeSaveDoubles(Runtime::kHiddenStackGuard); + __ CallRuntimeSaveDoubles(Runtime::kStackGuard); RecordSafepointWithLazyDeopt( instr, RECORD_SAFEPOINT_WITH_REGISTERS_AND_NO_ARGUMENTS); - ASSERT(instr->HasEnvironment()); + DCHECK(instr->HasEnvironment()); LEnvironment* env = instr->environment(); safepoints_.RecordLazyDeoptimizationIndex(env->deoptimization_index()); } @@ -5004,7 +5060,7 @@ void LCodeGen::DoStackCheck(LStackCheck* instr) { LStackCheck* instr_; }; - ASSERT(instr->HasEnvironment()); + DCHECK(instr->HasEnvironment()); LEnvironment* env = instr->environment(); // There is no LLazyBailout instruction for stack-checks. We have to // prepare for lazy deoptimization explicitly here. @@ -5016,14 +5072,14 @@ void LCodeGen::DoStackCheck(LStackCheck* instr) { PredictableCodeSizeScope predictable(masm_, Assembler::kCallSizeWithRelocation); - ASSERT(instr->context()->IsRegister()); - ASSERT(ToRegister(instr->context()).is(cp)); + DCHECK(instr->context()->IsRegister()); + DCHECK(ToRegister(instr->context()).is(cp)); CallCode(isolate()->builtins()->StackCheck(), RelocInfo::CODE_TARGET, instr); __ Bind(&done); } else { - ASSERT(instr->hydrogen()->is_backwards_branch()); + DCHECK(instr->hydrogen()->is_backwards_branch()); // Perform stack overflow check if this goto needs it before jumping. DeferredStackCheck* deferred_stack_check = new(zone()) DeferredStackCheck(this, instr); @@ -5071,7 +5127,7 @@ void LCodeGen::DoStoreContextSlot(LStoreContextSlot* instr) { __ Str(value, target); if (instr->hydrogen()->NeedsWriteBarrier()) { SmiCheck check_needed = - instr->hydrogen()->value()->IsHeapObject() + instr->hydrogen()->value()->type().IsHeapObject() ? 
OMIT_SMI_CHECK : INLINE_SMI_CHECK; __ RecordWriteContextSlot(context, target.offset(), @@ -5120,7 +5176,7 @@ void LCodeGen::DoStoreKeyedExternal(LStoreKeyedExternal* instr) { bool key_is_constant = instr->key()->IsConstantOperand(); int constant_key = 0; if (key_is_constant) { - ASSERT(instr->temp() == NULL); + DCHECK(instr->temp() == NULL); constant_key = ToInteger32(LConstantOperand::cast(instr->key())); if (constant_key & 0xf0000000) { Abort(kArrayIndexConstantValueTooBig); @@ -5134,7 +5190,7 @@ void LCodeGen::DoStoreKeyedExternal(LStoreKeyedExternal* instr) { PrepareKeyedExternalArrayOperand(key, ext_ptr, scratch, key_is_smi, key_is_constant, constant_key, elements_kind, - instr->additional_index()); + instr->base_offset()); if ((elements_kind == EXTERNAL_FLOAT32_ELEMENTS) || (elements_kind == FLOAT32_ELEMENTS)) { @@ -5199,18 +5255,16 @@ void LCodeGen::DoStoreKeyedFixedDouble(LStoreKeyedFixedDouble* instr) { if (constant_key & 0xf0000000) { Abort(kArrayIndexConstantValueTooBig); } - int offset = FixedDoubleArray::OffsetOfElementAt(constant_key + - instr->additional_index()); - mem_op = FieldMemOperand(elements, offset); + int offset = instr->base_offset() + constant_key * kDoubleSize; + mem_op = MemOperand(elements, offset); } else { Register store_base = ToRegister(instr->temp()); Register key = ToRegister(instr->key()); bool key_is_tagged = instr->hydrogen()->key()->representation().IsSmi(); - int offset = FixedDoubleArray::OffsetOfElementAt(instr->additional_index()); mem_op = PrepareKeyedArrayOperand(store_base, elements, key, key_is_tagged, instr->hydrogen()->elements_kind(), instr->hydrogen()->representation(), - offset); + instr->base_offset()); } if (instr->NeedsCanonicalization()) { @@ -5238,51 +5292,51 @@ void LCodeGen::DoStoreKeyedFixed(LStoreKeyedFixed* instr) { Representation representation = instr->hydrogen()->value()->representation(); if (instr->key()->IsConstantOperand()) { LConstantOperand* const_operand = LConstantOperand::cast(instr->key()); - int offset = FixedArray::OffsetOfElementAt(ToInteger32(const_operand) + - instr->additional_index()); + int offset = instr->base_offset() + + ToInteger32(const_operand) * kPointerSize; store_base = elements; if (representation.IsInteger32()) { - ASSERT(instr->hydrogen()->store_mode() == STORE_TO_INITIALIZED_ENTRY); - ASSERT(instr->hydrogen()->elements_kind() == FAST_SMI_ELEMENTS); - STATIC_ASSERT((kSmiValueSize == 32) && (kSmiShift == 32) && - (kSmiTag == 0)); - mem_op = UntagSmiFieldMemOperand(store_base, offset); + DCHECK(instr->hydrogen()->store_mode() == STORE_TO_INITIALIZED_ENTRY); + DCHECK(instr->hydrogen()->elements_kind() == FAST_SMI_ELEMENTS); + STATIC_ASSERT(static_cast<unsigned>(kSmiValueSize) == kWRegSizeInBits); + STATIC_ASSERT(kSmiTag == 0); + mem_op = UntagSmiMemOperand(store_base, offset); } else { - mem_op = FieldMemOperand(store_base, offset); + mem_op = MemOperand(store_base, offset); } } else { store_base = scratch; key = ToRegister(instr->key()); bool key_is_tagged = instr->hydrogen()->key()->representation().IsSmi(); - int offset = FixedArray::OffsetOfElementAt(instr->additional_index()); mem_op = PrepareKeyedArrayOperand(store_base, elements, key, key_is_tagged, instr->hydrogen()->elements_kind(), - representation, offset); + representation, instr->base_offset()); } __ Store(value, mem_op, representation); if (instr->hydrogen()->NeedsWriteBarrier()) { - ASSERT(representation.IsTagged()); + DCHECK(representation.IsTagged()); // This assignment may cause element_addr to alias store_base. 
Register element_addr = scratch; SmiCheck check_needed = - instr->hydrogen()->value()->IsHeapObject() + instr->hydrogen()->value()->type().IsHeapObject() ? OMIT_SMI_CHECK : INLINE_SMI_CHECK; // Compute address of modified element and store it into key register. __ Add(element_addr, mem_op.base(), mem_op.OffsetAsOperand()); __ RecordWrite(elements, element_addr, value, GetLinkRegisterState(), - kSaveFPRegs, EMIT_REMEMBERED_SET, check_needed); + kSaveFPRegs, EMIT_REMEMBERED_SET, check_needed, + instr->hydrogen()->PointersToHereCheckForValue()); } } void LCodeGen::DoStoreKeyedGeneric(LStoreKeyedGeneric* instr) { - ASSERT(ToRegister(instr->context()).is(cp)); - ASSERT(ToRegister(instr->object()).Is(x2)); - ASSERT(ToRegister(instr->key()).Is(x1)); - ASSERT(ToRegister(instr->value()).Is(x0)); + DCHECK(ToRegister(instr->context()).is(cp)); + DCHECK(ToRegister(instr->object()).is(KeyedStoreIC::ReceiverRegister())); + DCHECK(ToRegister(instr->key()).is(KeyedStoreIC::NameRegister())); + DCHECK(ToRegister(instr->value()).is(KeyedStoreIC::ValueRegister())); Handle<Code> ic = instr->strict_mode() == STRICT ? isolate()->builtins()->KeyedStoreIC_Initialize_Strict() @@ -5299,8 +5353,8 @@ void LCodeGen::DoStoreNamedField(LStoreNamedField* instr) { int offset = access.offset(); if (access.IsExternalMemory()) { - ASSERT(!instr->hydrogen()->has_transition()); - ASSERT(!instr->hydrogen()->NeedsWriteBarrier()); + DCHECK(!instr->hydrogen()->has_transition()); + DCHECK(!instr->hydrogen()->NeedsWriteBarrier()); Register value = ToRegister(instr->value()); __ Store(value, MemOperand(object, offset), representation); return; @@ -5309,9 +5363,9 @@ void LCodeGen::DoStoreNamedField(LStoreNamedField* instr) { __ AssertNotSmi(object); if (representation.IsDouble()) { - ASSERT(access.IsInobject()); - ASSERT(!instr->hydrogen()->has_transition()); - ASSERT(!instr->hydrogen()->NeedsWriteBarrier()); + DCHECK(access.IsInobject()); + DCHECK(!instr->hydrogen()->has_transition()); + DCHECK(!instr->hydrogen()->NeedsWriteBarrier()); FPRegister value = ToDoubleRegister(instr->value()); __ Str(value, FieldMemOperand(object, offset)); return; @@ -5319,7 +5373,7 @@ void LCodeGen::DoStoreNamedField(LStoreNamedField* instr) { Register value = ToRegister(instr->value()); - ASSERT(!representation.IsSmi() || + DCHECK(!representation.IsSmi() || !instr->value()->IsConstantOperand() || IsInteger32Constant(LConstantOperand::cast(instr->value()))); @@ -5332,14 +5386,11 @@ void LCodeGen::DoStoreNamedField(LStoreNamedField* instr) { __ Str(new_map_value, FieldMemOperand(object, HeapObject::kMapOffset)); if (instr->hydrogen()->NeedsWriteBarrierForMap()) { // Update the write barrier for the map field. 
- __ RecordWriteField(object, - HeapObject::kMapOffset, - new_map_value, - ToRegister(instr->temp1()), - GetLinkRegisterState(), - kSaveFPRegs, - OMIT_REMEMBERED_SET, - OMIT_SMI_CHECK); + __ RecordWriteForMap(object, + new_map_value, + ToRegister(instr->temp1()), + GetLinkRegisterState(), + kSaveFPRegs); } } @@ -5355,7 +5406,7 @@ void LCodeGen::DoStoreNamedField(LStoreNamedField* instr) { if (representation.IsSmi() && instr->hydrogen()->value()->representation().IsInteger32()) { - ASSERT(instr->hydrogen()->store_mode() == STORE_TO_INITIALIZED_ENTRY); + DCHECK(instr->hydrogen()->store_mode() == STORE_TO_INITIALIZED_ENTRY); #ifdef DEBUG Register temp0 = ToRegister(instr->temp0()); __ Ldr(temp0, FieldMemOperand(destination, offset)); @@ -5363,11 +5414,12 @@ void LCodeGen::DoStoreNamedField(LStoreNamedField* instr) { // If destination aliased temp0, restore it to the address calculated // earlier. if (destination.Is(temp0)) { - ASSERT(!access.IsInobject()); + DCHECK(!access.IsInobject()); __ Ldr(destination, FieldMemOperand(object, JSObject::kPropertiesOffset)); } #endif - STATIC_ASSERT(kSmiValueSize == 32 && kSmiShift == 32 && kSmiTag == 0); + STATIC_ASSERT(static_cast<unsigned>(kSmiValueSize) == kWRegSizeInBits); + STATIC_ASSERT(kSmiTag == 0); __ Store(value, UntagSmiFieldMemOperand(destination, offset), Representation::Integer32()); } else { @@ -5381,27 +5433,27 @@ void LCodeGen::DoStoreNamedField(LStoreNamedField* instr) { GetLinkRegisterState(), kSaveFPRegs, EMIT_REMEMBERED_SET, - instr->hydrogen()->SmiCheckForWriteBarrier()); + instr->hydrogen()->SmiCheckForWriteBarrier(), + instr->hydrogen()->PointersToHereCheckForValue()); } } void LCodeGen::DoStoreNamedGeneric(LStoreNamedGeneric* instr) { - ASSERT(ToRegister(instr->context()).is(cp)); - ASSERT(ToRegister(instr->value()).is(x0)); - ASSERT(ToRegister(instr->object()).is(x1)); + DCHECK(ToRegister(instr->context()).is(cp)); + DCHECK(ToRegister(instr->object()).is(StoreIC::ReceiverRegister())); + DCHECK(ToRegister(instr->value()).is(StoreIC::ValueRegister())); - // Name must be in x2. - __ Mov(x2, Operand(instr->name())); + __ Mov(StoreIC::NameRegister(), Operand(instr->name())); Handle<Code> ic = StoreIC::initialize_stub(isolate(), instr->strict_mode()); CallCode(ic, RelocInfo::CODE_TARGET, instr); } void LCodeGen::DoStringAdd(LStringAdd* instr) { - ASSERT(ToRegister(instr->context()).is(cp)); - ASSERT(ToRegister(instr->left()).Is(x1)); - ASSERT(ToRegister(instr->right()).Is(x0)); + DCHECK(ToRegister(instr->context()).is(cp)); + DCHECK(ToRegister(instr->left()).Is(x1)); + DCHECK(ToRegister(instr->right()).Is(x0)); StringAddStub stub(isolate(), instr->hydrogen()->flags(), instr->hydrogen()->pretenure_flag()); @@ -5441,15 +5493,14 @@ void LCodeGen::DoDeferredStringCharCodeAt(LStringCharCodeAt* instr) { // contained in the register pointer map. __ Mov(result, 0); - PushSafepointRegistersScope scope(this, Safepoint::kWithRegisters); + PushSafepointRegistersScope scope(this); __ Push(string); // Push the index as a smi. This is safe because of the checks in // DoStringCharCodeAt above. 
Register index = ToRegister(instr->index()); - __ SmiTag(index); - __ Push(index); + __ SmiTagAndPush(index); - CallRuntimeFromDeferred(Runtime::kHiddenStringCharCodeAt, 2, instr, + CallRuntimeFromDeferred(Runtime::kStringCharCodeAtRT, 2, instr, instr->context()); __ AssertSmi(x0); __ SmiUntag(x0); @@ -5471,7 +5522,7 @@ void LCodeGen::DoStringCharFromCode(LStringCharFromCode* instr) { DeferredStringCharFromCode* deferred = new(zone()) DeferredStringCharFromCode(this, instr); - ASSERT(instr->hydrogen()->value()->representation().IsInteger32()); + DCHECK(instr->hydrogen()->value()->representation().IsInteger32()); Register char_code = ToRegister32(instr->char_code()); Register result = ToRegister(instr->result()); @@ -5495,16 +5546,15 @@ void LCodeGen::DoDeferredStringCharFromCode(LStringCharFromCode* instr) { // contained in the register pointer map. __ Mov(result, 0); - PushSafepointRegistersScope scope(this, Safepoint::kWithRegisters); - __ SmiTag(char_code); - __ Push(char_code); + PushSafepointRegistersScope scope(this); + __ SmiTagAndPush(char_code); CallRuntimeFromDeferred(Runtime::kCharFromCode, 1, instr, instr->context()); __ StoreToSafepointRegisterSlot(x0, result); } void LCodeGen::DoStringCompareAndBranch(LStringCompareAndBranch* instr) { - ASSERT(ToRegister(instr->context()).is(cp)); + DCHECK(ToRegister(instr->context()).is(cp)); Token::Value op = instr->op(); Handle<Code> ic = CompareIC::GetUninitialized(isolate(), op); @@ -5647,15 +5697,15 @@ void LCodeGen::DoThisFunction(LThisFunction* instr) { void LCodeGen::DoToFastProperties(LToFastProperties* instr) { - ASSERT(ToRegister(instr->value()).Is(x0)); - ASSERT(ToRegister(instr->result()).Is(x0)); + DCHECK(ToRegister(instr->value()).Is(x0)); + DCHECK(ToRegister(instr->result()).Is(x0)); __ Push(x0); CallRuntime(Runtime::kToFastProperties, 1, instr); } void LCodeGen::DoRegExpLiteral(LRegExpLiteral* instr) { - ASSERT(ToRegister(instr->context()).is(cp)); + DCHECK(ToRegister(instr->context()).is(cp)); Label materialized; // Registers will be used as follows: // x7 = literals array. @@ -5674,7 +5724,7 @@ void LCodeGen::DoRegExpLiteral(LRegExpLiteral* instr) { __ Mov(x11, Operand(instr->hydrogen()->pattern())); __ Mov(x10, Operand(instr->hydrogen()->flags())); __ Push(x7, x12, x11, x10); - CallRuntime(Runtime::kHiddenMaterializeRegExpLiteral, 4, instr); + CallRuntime(Runtime::kMaterializeRegExpLiteral, 4, instr); __ Mov(x1, x0); __ Bind(&materialized); @@ -5687,7 +5737,7 @@ void LCodeGen::DoRegExpLiteral(LRegExpLiteral* instr) { __ Bind(&runtime_allocate); __ Mov(x0, Smi::FromInt(size)); __ Push(x1, x0); - CallRuntime(Runtime::kHiddenAllocateInNewSpace, 1, instr); + CallRuntime(Runtime::kAllocateInNewSpace, 1, instr); __ Pop(x1); __ Bind(&allocated); @@ -5713,8 +5763,8 @@ void LCodeGen::DoTransitionElementsKind(LTransitionElementsKind* instr) { __ Mov(new_map, Operand(to_map)); __ Str(new_map, FieldMemOperand(object, HeapObject::kMapOffset)); // Write barrier. 
- __ RecordWriteField(object, HeapObject::kMapOffset, new_map, temp1, - GetLinkRegisterState(), kDontSaveFPRegs); + __ RecordWriteForMap(object, new_map, temp1, GetLinkRegisterState(), + kDontSaveFPRegs); } else { { UseScratchRegisterScope temps(masm()); @@ -5723,15 +5773,14 @@ void LCodeGen::DoTransitionElementsKind(LTransitionElementsKind* instr) { __ CheckMap(object, temps.AcquireX(), from_map, &not_applicable, DONT_DO_SMI_CHECK); } - ASSERT(object.is(x0)); - ASSERT(ToRegister(instr->context()).is(cp)); - PushSafepointRegistersScope scope( - this, Safepoint::kWithRegistersAndDoubles); + DCHECK(object.is(x0)); + DCHECK(ToRegister(instr->context()).is(cp)); + PushSafepointRegistersScope scope(this); __ Mov(x1, Operand(to_map)); bool is_js_array = from_map->instance_type() == JS_ARRAY_TYPE; TransitionElementsKindStub stub(isolate(), from_kind, to_kind, is_js_array); __ CallStub(&stub); - RecordSafepointWithRegistersAndDoubles( + RecordSafepointWithRegisters( instr->pointer_map(), 0, Safepoint::kLazyDeopt); } __ Bind(&not_applicable); @@ -5775,7 +5824,7 @@ void LCodeGen::DoTypeofIsAndBranch(LTypeofIsAndBranch* instr) { Factory* factory = isolate()->factory(); if (String::Equals(type_name, factory->number_string())) { - ASSERT(instr->temp1() != NULL); + DCHECK(instr->temp1() != NULL); Register map = ToRegister(instr->temp1()); __ JumpIfSmi(value, true_label); @@ -5784,7 +5833,7 @@ void LCodeGen::DoTypeofIsAndBranch(LTypeofIsAndBranch* instr) { EmitBranch(instr, eq); } else if (String::Equals(type_name, factory->string_string())) { - ASSERT((instr->temp1() != NULL) && (instr->temp2() != NULL)); + DCHECK((instr->temp1() != NULL) && (instr->temp2() != NULL)); Register map = ToRegister(instr->temp1()); Register scratch = ToRegister(instr->temp2()); @@ -5795,7 +5844,7 @@ void LCodeGen::DoTypeofIsAndBranch(LTypeofIsAndBranch* instr) { EmitTestAndBranch(instr, eq, scratch, 1 << Map::kIsUndetectable); } else if (String::Equals(type_name, factory->symbol_string())) { - ASSERT((instr->temp1() != NULL) && (instr->temp2() != NULL)); + DCHECK((instr->temp1() != NULL) && (instr->temp2() != NULL)); Register map = ToRegister(instr->temp1()); Register scratch = ToRegister(instr->temp2()); @@ -5808,13 +5857,8 @@ void LCodeGen::DoTypeofIsAndBranch(LTypeofIsAndBranch* instr) { __ CompareRoot(value, Heap::kFalseValueRootIndex); EmitBranch(instr, eq); - } else if (FLAG_harmony_typeof && - String::Equals(type_name, factory->null_string())) { - __ CompareRoot(value, Heap::kNullValueRootIndex); - EmitBranch(instr, eq); - } else if (String::Equals(type_name, factory->undefined_string())) { - ASSERT(instr->temp1() != NULL); + DCHECK(instr->temp1() != NULL); Register scratch = ToRegister(instr->temp1()); __ JumpIfRoot(value, Heap::kUndefinedValueRootIndex, true_label); @@ -5826,7 +5870,7 @@ void LCodeGen::DoTypeofIsAndBranch(LTypeofIsAndBranch* instr) { } else if (String::Equals(type_name, factory->function_string())) { STATIC_ASSERT(NUM_OF_CALLABLE_SPEC_OBJECT_TYPES == 2); - ASSERT(instr->temp1() != NULL); + DCHECK(instr->temp1() != NULL); Register type = ToRegister(instr->temp1()); __ JumpIfSmi(value, false_label); @@ -5835,20 +5879,18 @@ void LCodeGen::DoTypeofIsAndBranch(LTypeofIsAndBranch* instr) { EmitCompareAndBranch(instr, eq, type, JS_FUNCTION_PROXY_TYPE); } else if (String::Equals(type_name, factory->object_string())) { - ASSERT((instr->temp1() != NULL) && (instr->temp2() != NULL)); + DCHECK((instr->temp1() != NULL) && (instr->temp2() != NULL)); Register map = ToRegister(instr->temp1()); Register scratch =
ToRegister(instr->temp2()); __ JumpIfSmi(value, false_label); - if (!FLAG_harmony_typeof) { - __ JumpIfRoot(value, Heap::kNullValueRootIndex, true_label); - } + __ JumpIfRoot(value, Heap::kNullValueRootIndex, true_label); __ JumpIfObjectType(value, map, scratch, FIRST_NONCALLABLE_SPEC_OBJECT_TYPE, false_label, lt); __ CompareInstanceType(map, scratch, LAST_NONCALLABLE_SPEC_OBJECT_TYPE); __ B(gt, false_label); // Check for undetectable objects => false. - __ Ldrb(scratch, FieldMemOperand(value, Map::kBitFieldOffset)); + __ Ldrb(scratch, FieldMemOperand(map, Map::kBitFieldOffset)); EmitTestAndBranch(instr, eq, scratch, 1 << Map::kIsUndetectable); } else { @@ -5910,7 +5952,7 @@ void LCodeGen::DoWrapReceiver(LWrapReceiver* instr) { __ Bind(&global_object); __ Ldr(result, FieldMemOperand(function, JSFunction::kContextOffset)); __ Ldr(result, ContextMemOperand(result, Context::GLOBAL_OBJECT_INDEX)); - __ Ldr(result, FieldMemOperand(result, GlobalObject::kGlobalReceiverOffset)); + __ Ldr(result, FieldMemOperand(result, GlobalObject::kGlobalProxyOffset)); __ B(&done); __ Bind(&copy_receiver); @@ -5923,7 +5965,7 @@ void LCodeGen::DoDeferredLoadMutableDouble(LLoadFieldByIndex* instr, Register result, Register object, Register index) { - PushSafepointRegistersScope scope(this, Safepoint::kWithRegisters); + PushSafepointRegistersScope scope(this); __ Push(object); __ Push(index); __ Mov(cp, 0); @@ -5993,4 +6035,21 @@ void LCodeGen::DoLoadFieldByIndex(LLoadFieldByIndex* instr) { __ Bind(&done); } + +void LCodeGen::DoStoreFrameContext(LStoreFrameContext* instr) { + Register context = ToRegister(instr->context()); + __ Str(context, MemOperand(fp, StandardFrameConstants::kContextOffset)); +} + + +void LCodeGen::DoAllocateBlockContext(LAllocateBlockContext* instr) { + Handle<ScopeInfo> scope_info = instr->scope_info(); + __ Push(scope_info); + __ Push(ToRegister(instr->function())); + CallRuntime(Runtime::kPushBlockContext, 2, instr); + RecordSafepoint(Safepoint::kNoLazyDeopt); +} + + + } } // namespace v8::internal diff --git a/deps/v8/src/arm64/lithium-codegen-arm64.h b/deps/v8/src/arm64/lithium-codegen-arm64.h index 8c25e634050..bb06f483afe 100644 --- a/deps/v8/src/arm64/lithium-codegen-arm64.h +++ b/deps/v8/src/arm64/lithium-codegen-arm64.h @@ -5,14 +5,14 @@ #ifndef V8_ARM64_LITHIUM_CODEGEN_ARM64_H_ #define V8_ARM64_LITHIUM_CODEGEN_ARM64_H_ -#include "arm64/lithium-arm64.h" +#include "src/arm64/lithium-arm64.h" -#include "arm64/lithium-gap-resolver-arm64.h" -#include "deoptimizer.h" -#include "lithium-codegen.h" -#include "safepoint-table.h" -#include "scopes.h" -#include "utils.h" +#include "src/arm64/lithium-gap-resolver-arm64.h" +#include "src/deoptimizer.h" +#include "src/lithium-codegen.h" +#include "src/safepoint-table.h" +#include "src/scopes.h" +#include "src/utils.h" namespace v8 { namespace internal { @@ -44,7 +44,7 @@ class LCodeGen: public LCodeGenBase { } ~LCodeGen() { - ASSERT(!after_push_argument_ || inlined_arguments_); + DCHECK(!after_push_argument_ || inlined_arguments_); } // Simple accessors.
@@ -255,14 +255,14 @@ class LCodeGen: public LCodeGenBase { bool key_is_constant, int constant_key, ElementsKind elements_kind, - int additional_index); + int base_offset); MemOperand PrepareKeyedArrayOperand(Register base, Register elements, Register key, bool key_is_tagged, ElementsKind elements_kind, Representation representation, - int additional_index); + int base_offset); void RegisterEnvironmentForDeoptimization(LEnvironment* environment, Safepoint::DeoptMode mode); @@ -348,9 +348,6 @@ class LCodeGen: public LCodeGenBase { void RecordSafepointWithRegisters(LPointerMap* pointers, int arguments, Safepoint::DeoptMode mode); - void RecordSafepointWithRegistersAndDoubles(LPointerMap* pointers, - int arguments, - Safepoint::DeoptMode mode); void RecordSafepointWithLazyDeopt(LInstruction* instr, SafepointMode safepoint_mode); @@ -388,12 +385,11 @@ class LCodeGen: public LCodeGenBase { class PushSafepointRegistersScope BASE_EMBEDDED { public: - PushSafepointRegistersScope(LCodeGen* codegen, - Safepoint::Kind kind) + explicit PushSafepointRegistersScope(LCodeGen* codegen) : codegen_(codegen) { - ASSERT(codegen_->info()->is_calling()); - ASSERT(codegen_->expected_safepoint_kind_ == Safepoint::kSimple); - codegen_->expected_safepoint_kind_ = kind; + DCHECK(codegen_->info()->is_calling()); + DCHECK(codegen_->expected_safepoint_kind_ == Safepoint::kSimple); + codegen_->expected_safepoint_kind_ = Safepoint::kWithRegisters; UseScratchRegisterScope temps(codegen_->masm_); // Preserve the value of lr which must be saved on the stack (the call to @@ -401,39 +397,14 @@ class LCodeGen: public LCodeGenBase { Register to_be_pushed_lr = temps.UnsafeAcquire(StoreRegistersStateStub::to_be_pushed_lr()); codegen_->masm_->Mov(to_be_pushed_lr, lr); - switch (codegen_->expected_safepoint_kind_) { - case Safepoint::kWithRegisters: { - StoreRegistersStateStub stub(codegen_->isolate(), kDontSaveFPRegs); - codegen_->masm_->CallStub(&stub); - break; - } - case Safepoint::kWithRegistersAndDoubles: { - StoreRegistersStateStub stub(codegen_->isolate(), kSaveFPRegs); - codegen_->masm_->CallStub(&stub); - break; - } - default: - UNREACHABLE(); - } + StoreRegistersStateStub stub(codegen_->isolate()); + codegen_->masm_->CallStub(&stub); } ~PushSafepointRegistersScope() { - Safepoint::Kind kind = codegen_->expected_safepoint_kind_; - ASSERT((kind & Safepoint::kWithRegisters) != 0); - switch (kind) { - case Safepoint::kWithRegisters: { - RestoreRegistersStateStub stub(codegen_->isolate(), kDontSaveFPRegs); - codegen_->masm_->CallStub(&stub); - break; - } - case Safepoint::kWithRegistersAndDoubles: { - RestoreRegistersStateStub stub(codegen_->isolate(), kSaveFPRegs); - codegen_->masm_->CallStub(&stub); - break; - } - default: - UNREACHABLE(); - } + DCHECK(codegen_->expected_safepoint_kind_ == Safepoint::kWithRegisters); + RestoreRegistersStateStub stub(codegen_->isolate()); + codegen_->masm_->CallStub(&stub); codegen_->expected_safepoint_kind_ = Safepoint::kSimple; } diff --git a/deps/v8/src/arm64/lithium-gap-resolver-arm64.cc b/deps/v8/src/arm64/lithium-gap-resolver-arm64.cc index c721cb48a8c..d06a37bc4e4 100644 --- a/deps/v8/src/arm64/lithium-gap-resolver-arm64.cc +++ b/deps/v8/src/arm64/lithium-gap-resolver-arm64.cc @@ -2,38 +2,38 @@ // Use of this source code is governed by a BSD-style license that can be // found in the LICENSE file. 
-#include "v8.h" +#include "src/v8.h" -#include "arm64/lithium-gap-resolver-arm64.h" -#include "arm64/lithium-codegen-arm64.h" +#include "src/arm64/delayed-masm-arm64-inl.h" +#include "src/arm64/lithium-codegen-arm64.h" +#include "src/arm64/lithium-gap-resolver-arm64.h" namespace v8 { namespace internal { -// We use the root register to spill a value while breaking a cycle in parallel -// moves. We don't need access to roots while resolving the move list and using -// the root register has two advantages: -// - It is not in crankshaft allocatable registers list, so it can't interfere -// with any of the moves we are resolving. -// - We don't need to push it on the stack, as we can reload it with its value -// once we have resolved a cycle. -#define kSavedValue root +#define __ ACCESS_MASM((&masm_)) -// We use the MacroAssembler floating-point scratch register to break a cycle -// involving double values as the MacroAssembler will not need it for the -// operations performed by the gap resolver. -#define kSavedDoubleValue fp_scratch +void DelayedGapMasm::EndDelayedUse() { + DelayedMasm::EndDelayedUse(); + if (scratch_register_used()) { + DCHECK(ScratchRegister().Is(root)); + DCHECK(!pending()); + InitializeRootRegister(); + reset_scratch_register_used(); + } +} -LGapResolver::LGapResolver(LCodeGen* owner) - : cgen_(owner), moves_(32, owner->zone()), root_index_(0), in_cycle_(false), - saved_destination_(NULL), need_to_restore_root_(false) { } +LGapResolver::LGapResolver(LCodeGen* owner) + : cgen_(owner), masm_(owner, owner->masm()), moves_(32, owner->zone()), + root_index_(0), in_cycle_(false), saved_destination_(NULL) { +} -#define __ ACCESS_MASM(cgen_->masm()) void LGapResolver::Resolve(LParallelMove* parallel_move) { - ASSERT(moves_.is_empty()); + DCHECK(moves_.is_empty()); + DCHECK(!masm_.pending()); // Build up a worklist of moves. BuildInitialMoveList(parallel_move); @@ -56,16 +56,12 @@ void LGapResolver::Resolve(LParallelMove* parallel_move) { LMoveOperands move = moves_[i]; if (!move.IsEliminated()) { - ASSERT(move.source()->IsConstantOperand()); + DCHECK(move.source()->IsConstantOperand()); EmitMove(i); } } - if (need_to_restore_root_) { - ASSERT(kSavedValue.Is(root)); - __ InitializeRootRegister(); - need_to_restore_root_ = false; - } + __ EndDelayedUse(); moves_.Rewind(0); } @@ -92,13 +88,13 @@ void LGapResolver::PerformMove(int index) { // cycles in the move graph. LMoveOperands& current_move = moves_[index]; - ASSERT(!current_move.IsPending()); - ASSERT(!current_move.IsRedundant()); + DCHECK(!current_move.IsPending()); + DCHECK(!current_move.IsRedundant()); // Clear this move's destination to indicate a pending move. The actual // destination is saved in a stack allocated local. Multiple moves can // be pending because this function is recursive. - ASSERT(current_move.source() != NULL); // Otherwise it will look eliminated. + DCHECK(current_move.source() != NULL); // Otherwise it will look eliminated. LOperand* destination = current_move.destination(); current_move.set_destination(NULL); @@ -125,7 +121,7 @@ void LGapResolver::PerformMove(int index) { // a scratch register to break it. LMoveOperands other_move = moves_[root_index_]; if (other_move.Blocks(destination)) { - ASSERT(other_move.IsPending()); + DCHECK(other_move.IsPending()); BreakCycle(index); return; } @@ -136,12 +132,12 @@ void LGapResolver::PerformMove(int index) { void LGapResolver::Verify() { -#ifdef ENABLE_SLOW_ASSERTS +#ifdef ENABLE_SLOW_DCHECKS // No operand should be the destination for more than one move. 
for (int i = 0; i < moves_.length(); ++i) { LOperand* destination = moves_[i].destination(); for (int j = i + 1; j < moves_.length(); ++j) { - SLOW_ASSERT(!destination->Equals(moves_[j].destination())); + SLOW_DCHECK(!destination->Equals(moves_[j].destination())); } } #endif @@ -149,13 +145,8 @@ void LGapResolver::Verify() { void LGapResolver::BreakCycle(int index) { - ASSERT(moves_[index].destination()->Equals(moves_[root_index_].source())); - ASSERT(!in_cycle_); - - // We use registers which are not allocatable by crankshaft to break the cycle - // to be sure they don't interfere with the moves we are resolving. - ASSERT(!kSavedValue.IsAllocatable()); - ASSERT(!kSavedDoubleValue.IsAllocatable()); + DCHECK(moves_[index].destination()->Equals(moves_[root_index_].source())); + DCHECK(!in_cycle_); // We save in a register the source of that move and we remember its // destination. Then we mark this move as resolved so the cycle is @@ -165,19 +156,15 @@ void LGapResolver::BreakCycle(int index) { saved_destination_ = moves_[index].destination(); if (source->IsRegister()) { - need_to_restore_root_ = true; - __ Mov(kSavedValue, cgen_->ToRegister(source)); + AcquireSavedValueRegister(); + __ Mov(SavedValueRegister(), cgen_->ToRegister(source)); } else if (source->IsStackSlot()) { - need_to_restore_root_ = true; - __ Ldr(kSavedValue, cgen_->ToMemOperand(source)); + AcquireSavedValueRegister(); + __ Load(SavedValueRegister(), cgen_->ToMemOperand(source)); } else if (source->IsDoubleRegister()) { - ASSERT(cgen_->masm()->FPTmpList()->IncludesAliasOf(kSavedDoubleValue)); - cgen_->masm()->FPTmpList()->Remove(kSavedDoubleValue); - __ Fmov(kSavedDoubleValue, cgen_->ToDoubleRegister(source)); + __ Fmov(SavedFPValueRegister(), cgen_->ToDoubleRegister(source)); } else if (source->IsDoubleStackSlot()) { - ASSERT(cgen_->masm()->FPTmpList()->IncludesAliasOf(kSavedDoubleValue)); - cgen_->masm()->FPTmpList()->Remove(kSavedDoubleValue); - __ Ldr(kSavedDoubleValue, cgen_->ToMemOperand(source)); + __ Load(SavedFPValueRegister(), cgen_->ToMemOperand(source)); } else { UNREACHABLE(); } @@ -190,19 +177,20 @@ void LGapResolver::BreakCycle(int index) { void LGapResolver::RestoreValue() { - ASSERT(in_cycle_); - ASSERT(saved_destination_ != NULL); + DCHECK(in_cycle_); + DCHECK(saved_destination_ != NULL); if (saved_destination_->IsRegister()) { - __ Mov(cgen_->ToRegister(saved_destination_), kSavedValue); + __ Mov(cgen_->ToRegister(saved_destination_), SavedValueRegister()); + ReleaseSavedValueRegister(); } else if (saved_destination_->IsStackSlot()) { - __ Str(kSavedValue, cgen_->ToMemOperand(saved_destination_)); + __ Store(SavedValueRegister(), cgen_->ToMemOperand(saved_destination_)); + ReleaseSavedValueRegister(); } else if (saved_destination_->IsDoubleRegister()) { - __ Fmov(cgen_->ToDoubleRegister(saved_destination_), kSavedDoubleValue); - cgen_->masm()->FPTmpList()->Combine(kSavedDoubleValue); + __ Fmov(cgen_->ToDoubleRegister(saved_destination_), + SavedFPValueRegister()); } else if (saved_destination_->IsDoubleStackSlot()) { - __ Str(kSavedDoubleValue, cgen_->ToMemOperand(saved_destination_)); - cgen_->masm()->FPTmpList()->Combine(kSavedDoubleValue); + __ Store(SavedFPValueRegister(), cgen_->ToMemOperand(saved_destination_)); } else { UNREACHABLE(); } @@ -224,16 +212,16 @@ void LGapResolver::EmitMove(int index) { if (destination->IsRegister()) { __ Mov(cgen_->ToRegister(destination), source_register); } else { - ASSERT(destination->IsStackSlot()); - __ Str(source_register, cgen_->ToMemOperand(destination)); + 
DCHECK(destination->IsStackSlot()); + __ Store(source_register, cgen_->ToMemOperand(destination)); } } else if (source->IsStackSlot()) { MemOperand source_operand = cgen_->ToMemOperand(source); if (destination->IsRegister()) { - __ Ldr(cgen_->ToRegister(destination), source_operand); + __ Load(cgen_->ToRegister(destination), source_operand); } else { - ASSERT(destination->IsStackSlot()); + DCHECK(destination->IsStackSlot()); EmitStackSlotMove(index); } @@ -252,17 +240,30 @@ void LGapResolver::EmitMove(int index) { DoubleRegister result = cgen_->ToDoubleRegister(destination); __ Fmov(result, cgen_->ToDouble(constant_source)); } else { - ASSERT(destination->IsStackSlot()); - ASSERT(!in_cycle_); // Constant moves happen after all cycles are gone. - need_to_restore_root_ = true; + DCHECK(destination->IsStackSlot()); + DCHECK(!in_cycle_); // Constant moves happen after all cycles are gone. if (cgen_->IsSmi(constant_source)) { - __ Mov(kSavedValue, cgen_->ToSmi(constant_source)); + Smi* smi = cgen_->ToSmi(constant_source); + __ StoreConstant(reinterpret_cast<intptr_t>(smi), + cgen_->ToMemOperand(destination)); } else if (cgen_->IsInteger32Constant(constant_source)) { - __ Mov(kSavedValue, cgen_->ToInteger32(constant_source)); + __ StoreConstant(cgen_->ToInteger32(constant_source), + cgen_->ToMemOperand(destination)); } else { - __ LoadObject(kSavedValue, cgen_->ToHandle(constant_source)); + Handle<Object> handle = cgen_->ToHandle(constant_source); + AllowDeferredHandleDereference smi_object_check; + if (handle->IsSmi()) { + Object* obj = *handle; + DCHECK(!obj->IsHeapObject()); + __ StoreConstant(reinterpret_cast<intptr_t>(obj), + cgen_->ToMemOperand(destination)); + } else { + AcquireSavedValueRegister(); + __ LoadObject(SavedValueRegister(), handle); + __ Store(SavedValueRegister(), cgen_->ToMemOperand(destination)); + ReleaseSavedValueRegister(); + } } - __ Str(kSavedValue, cgen_->ToMemOperand(destination)); } } else if (source->IsDoubleRegister()) { @@ -270,16 +271,16 @@ void LGapResolver::EmitMove(int index) { if (destination->IsDoubleRegister()) { __ Fmov(cgen_->ToDoubleRegister(destination), src); } else { - ASSERT(destination->IsDoubleStackSlot()); - __ Str(src, cgen_->ToMemOperand(destination)); + DCHECK(destination->IsDoubleStackSlot()); + __ Store(src, cgen_->ToMemOperand(destination)); } } else if (source->IsDoubleStackSlot()) { MemOperand src = cgen_->ToMemOperand(source); if (destination->IsDoubleRegister()) { - __ Ldr(cgen_->ToDoubleRegister(destination), src); + __ Load(cgen_->ToDoubleRegister(destination), src); } else { - ASSERT(destination->IsDoubleStackSlot()); + DCHECK(destination->IsDoubleStackSlot()); EmitStackSlotMove(index); } @@ -291,21 +292,4 @@ void LGapResolver::EmitMove(int index) { moves_[index].Eliminate(); } - -void LGapResolver::EmitStackSlotMove(int index) { - // We need a temp register to perform a stack slot to stack slot move, and - // the register must not be involved in breaking cycles. - - // Use the Crankshaft double scratch register as the temporary. 
- DoubleRegister temp = crankshaft_fp_scratch; - - LOperand* src = moves_[index].source(); - LOperand* dst = moves_[index].destination(); - - ASSERT(src->IsStackSlot()); - ASSERT(dst->IsStackSlot()); - __ Ldr(temp, cgen_->ToMemOperand(src)); - __ Str(temp, cgen_->ToMemOperand(dst)); -} - } } // namespace v8::internal diff --git a/deps/v8/src/arm64/lithium-gap-resolver-arm64.h b/deps/v8/src/arm64/lithium-gap-resolver-arm64.h index ae67190734b..2eb651b9241 100644 --- a/deps/v8/src/arm64/lithium-gap-resolver-arm64.h +++ b/deps/v8/src/arm64/lithium-gap-resolver-arm64.h @@ -5,9 +5,10 @@ #ifndef V8_ARM64_LITHIUM_GAP_RESOLVER_ARM64_H_ #define V8_ARM64_LITHIUM_GAP_RESOLVER_ARM64_H_ -#include "v8.h" +#include "src/v8.h" -#include "lithium.h" +#include "src/arm64/delayed-masm-arm64.h" +#include "src/lithium.h" namespace v8 { namespace internal { @@ -15,6 +16,21 @@ namespace internal { class LCodeGen; class LGapResolver; +class DelayedGapMasm : public DelayedMasm { + public: + DelayedGapMasm(LCodeGen* owner, MacroAssembler* masm) + : DelayedMasm(owner, masm, root) { + // We use the root register as an extra scratch register. + // The root register has two advantages: + // - It is not in crankshaft allocatable registers list, so it can't + // interfere with the allocatable registers. + // - We don't need to push it on the stack, as we can reload it with its + // value once we have finished. + } + void EndDelayedUse(); +}; + + class LGapResolver BASE_EMBEDDED { public: explicit LGapResolver(LCodeGen* owner); @@ -43,12 +59,32 @@ class LGapResolver BASE_EMBEDDED { void EmitMove(int index); // Emit a move from one stack slot to another. - void EmitStackSlotMove(int index); + void EmitStackSlotMove(int index) { + masm_.StackSlotMove(moves_[index].source(), moves_[index].destination()); + } // Verify the move list before performing moves. void Verify(); + // Registers used to resolve cycles. + const Register& SavedValueRegister() { + DCHECK(!masm_.ScratchRegister().IsAllocatable()); + return masm_.ScratchRegister(); + } + // The scratch register is used to break cycles and to store constants. + // These two methods switch from one mode to the other. + void AcquireSavedValueRegister() { masm_.AcquireScratchRegister(); } + void ReleaseSavedValueRegister() { masm_.ReleaseScratchRegister(); } + const FPRegister& SavedFPValueRegister() { + // We use the Crankshaft floating-point scratch register to break a cycle + // involving double values as the MacroAssembler will not need it for the + // operations performed by the gap resolver. + DCHECK(!crankshaft_fp_scratch.IsAllocatable()); + return crankshaft_fp_scratch; + } + LCodeGen* cgen_; + DelayedGapMasm masm_; // List of moves not yet resolved. ZoneList<LMoveOperands> moves_; @@ -56,10 +92,6 @@ class LGapResolver BASE_EMBEDDED { int root_index_; bool in_cycle_; LOperand* saved_destination_; - - // We use the root register as a scratch in a few places. When that happens, - // this flag is set to indicate that it needs to be restored.
- bool need_to_restore_root_; }; } } // namespace v8::internal diff --git a/deps/v8/src/arm64/macro-assembler-arm64-inl.h b/deps/v8/src/arm64/macro-assembler-arm64-inl.h index 7c9258a9ca3..f7c724842ac 100644 --- a/deps/v8/src/arm64/macro-assembler-arm64-inl.h +++ b/deps/v8/src/arm64/macro-assembler-arm64-inl.h @@ -7,13 +7,12 @@ #include <ctype.h> -#include "v8globals.h" -#include "globals.h" +#include "src/globals.h" -#include "arm64/assembler-arm64.h" -#include "arm64/assembler-arm64-inl.h" -#include "arm64/macro-assembler-arm64.h" -#include "arm64/instrument-arm64.h" +#include "src/arm64/assembler-arm64-inl.h" +#include "src/arm64/assembler-arm64.h" +#include "src/arm64/instrument-arm64.h" +#include "src/arm64/macro-assembler-arm64.h" namespace v8 { @@ -38,7 +37,7 @@ MemOperand UntagSmiMemOperand(Register object, int offset) { Handle<Object> MacroAssembler::CodeObject() { - ASSERT(!code_object_.is_null()); + DCHECK(!code_object_.is_null()); return code_object_; } @@ -46,8 +45,8 @@ Handle<Object> MacroAssembler::CodeObject() { void MacroAssembler::And(const Register& rd, const Register& rn, const Operand& operand) { - ASSERT(allow_macro_instructions_); - ASSERT(!rd.IsZero()); + DCHECK(allow_macro_instructions_); + DCHECK(!rd.IsZero()); LogicalMacro(rd, rn, operand, AND); } @@ -55,15 +54,15 @@ void MacroAssembler::And(const Register& rd, void MacroAssembler::Ands(const Register& rd, const Register& rn, const Operand& operand) { - ASSERT(allow_macro_instructions_); - ASSERT(!rd.IsZero()); + DCHECK(allow_macro_instructions_); + DCHECK(!rd.IsZero()); LogicalMacro(rd, rn, operand, ANDS); } void MacroAssembler::Tst(const Register& rn, const Operand& operand) { - ASSERT(allow_macro_instructions_); + DCHECK(allow_macro_instructions_); LogicalMacro(AppropriateZeroRegFor(rn), rn, operand, ANDS); } @@ -71,8 +70,8 @@ void MacroAssembler::Tst(const Register& rn, void MacroAssembler::Bic(const Register& rd, const Register& rn, const Operand& operand) { - ASSERT(allow_macro_instructions_); - ASSERT(!rd.IsZero()); + DCHECK(allow_macro_instructions_); + DCHECK(!rd.IsZero()); LogicalMacro(rd, rn, operand, BIC); } @@ -80,8 +79,8 @@ void MacroAssembler::Bic(const Register& rd, void MacroAssembler::Bics(const Register& rd, const Register& rn, const Operand& operand) { - ASSERT(allow_macro_instructions_); - ASSERT(!rd.IsZero()); + DCHECK(allow_macro_instructions_); + DCHECK(!rd.IsZero()); LogicalMacro(rd, rn, operand, BICS); } @@ -89,8 +88,8 @@ void MacroAssembler::Bics(const Register& rd, void MacroAssembler::Orr(const Register& rd, const Register& rn, const Operand& operand) { - ASSERT(allow_macro_instructions_); - ASSERT(!rd.IsZero()); + DCHECK(allow_macro_instructions_); + DCHECK(!rd.IsZero()); LogicalMacro(rd, rn, operand, ORR); } @@ -98,8 +97,8 @@ void MacroAssembler::Orr(const Register& rd, void MacroAssembler::Orn(const Register& rd, const Register& rn, const Operand& operand) { - ASSERT(allow_macro_instructions_); - ASSERT(!rd.IsZero()); + DCHECK(allow_macro_instructions_); + DCHECK(!rd.IsZero()); LogicalMacro(rd, rn, operand, ORN); } @@ -107,8 +106,8 @@ void MacroAssembler::Orn(const Register& rd, void MacroAssembler::Eor(const Register& rd, const Register& rn, const Operand& operand) { - ASSERT(allow_macro_instructions_); - ASSERT(!rd.IsZero()); + DCHECK(allow_macro_instructions_); + DCHECK(!rd.IsZero()); LogicalMacro(rd, rn, operand, EOR); } @@ -116,8 +115,8 @@ void MacroAssembler::Eor(const Register& rd, void MacroAssembler::Eon(const Register& rd, const Register& rn, const Operand& operand) { - 
ASSERT(allow_macro_instructions_); - ASSERT(!rd.IsZero()); + DCHECK(allow_macro_instructions_); + DCHECK(!rd.IsZero()); LogicalMacro(rd, rn, operand, EON); } @@ -126,9 +125,9 @@ void MacroAssembler::Ccmp(const Register& rn, const Operand& operand, StatusFlags nzcv, Condition cond) { - ASSERT(allow_macro_instructions_); - if (operand.IsImmediate() && (operand.immediate() < 0)) { - ConditionalCompareMacro(rn, -operand.immediate(), nzcv, cond, CCMN); + DCHECK(allow_macro_instructions_); + if (operand.IsImmediate() && (operand.ImmediateValue() < 0)) { + ConditionalCompareMacro(rn, -operand.ImmediateValue(), nzcv, cond, CCMN); } else { ConditionalCompareMacro(rn, operand, nzcv, cond, CCMP); } @@ -139,9 +138,9 @@ void MacroAssembler::Ccmn(const Register& rn, const Operand& operand, StatusFlags nzcv, Condition cond) { - ASSERT(allow_macro_instructions_); - if (operand.IsImmediate() && (operand.immediate() < 0)) { - ConditionalCompareMacro(rn, -operand.immediate(), nzcv, cond, CCMP); + DCHECK(allow_macro_instructions_); + if (operand.IsImmediate() && (operand.ImmediateValue() < 0)) { + ConditionalCompareMacro(rn, -operand.ImmediateValue(), nzcv, cond, CCMP); } else { ConditionalCompareMacro(rn, operand, nzcv, cond, CCMN); } @@ -151,9 +150,10 @@ void MacroAssembler::Ccmn(const Register& rn, void MacroAssembler::Add(const Register& rd, const Register& rn, const Operand& operand) { - ASSERT(allow_macro_instructions_); - if (operand.IsImmediate() && (operand.immediate() < 0)) { - AddSubMacro(rd, rn, -operand.immediate(), LeaveFlags, SUB); + DCHECK(allow_macro_instructions_); + if (operand.IsImmediate() && (operand.ImmediateValue() < 0) && + IsImmAddSub(-operand.ImmediateValue())) { + AddSubMacro(rd, rn, -operand.ImmediateValue(), LeaveFlags, SUB); } else { AddSubMacro(rd, rn, operand, LeaveFlags, ADD); } @@ -162,9 +162,10 @@ void MacroAssembler::Add(const Register& rd, void MacroAssembler::Adds(const Register& rd, const Register& rn, const Operand& operand) { - ASSERT(allow_macro_instructions_); - if (operand.IsImmediate() && (operand.immediate() < 0)) { - AddSubMacro(rd, rn, -operand.immediate(), SetFlags, SUB); + DCHECK(allow_macro_instructions_); + if (operand.IsImmediate() && (operand.ImmediateValue() < 0) && + IsImmAddSub(-operand.ImmediateValue())) { + AddSubMacro(rd, rn, -operand.ImmediateValue(), SetFlags, SUB); } else { AddSubMacro(rd, rn, operand, SetFlags, ADD); } @@ -174,9 +175,10 @@ void MacroAssembler::Adds(const Register& rd, void MacroAssembler::Sub(const Register& rd, const Register& rn, const Operand& operand) { - ASSERT(allow_macro_instructions_); - if (operand.IsImmediate() && (operand.immediate() < 0)) { - AddSubMacro(rd, rn, -operand.immediate(), LeaveFlags, ADD); + DCHECK(allow_macro_instructions_); + if (operand.IsImmediate() && (operand.ImmediateValue() < 0) && + IsImmAddSub(-operand.ImmediateValue())) { + AddSubMacro(rd, rn, -operand.ImmediateValue(), LeaveFlags, ADD); } else { AddSubMacro(rd, rn, operand, LeaveFlags, SUB); } @@ -186,9 +188,10 @@ void MacroAssembler::Sub(const Register& rd, void MacroAssembler::Subs(const Register& rd, const Register& rn, const Operand& operand) { - ASSERT(allow_macro_instructions_); - if (operand.IsImmediate() && (operand.immediate() < 0)) { - AddSubMacro(rd, rn, -operand.immediate(), SetFlags, ADD); + DCHECK(allow_macro_instructions_); + if (operand.IsImmediate() && (operand.ImmediateValue() < 0) && + IsImmAddSub(-operand.ImmediateValue())) { + AddSubMacro(rd, rn, -operand.ImmediateValue(), SetFlags, ADD); } else { AddSubMacro(rd, rn, 
operand, SetFlags, SUB); } @@ -196,23 +199,23 @@ void MacroAssembler::Subs(const Register& rd, void MacroAssembler::Cmn(const Register& rn, const Operand& operand) { - ASSERT(allow_macro_instructions_); + DCHECK(allow_macro_instructions_); Adds(AppropriateZeroRegFor(rn), rn, operand); } void MacroAssembler::Cmp(const Register& rn, const Operand& operand) { - ASSERT(allow_macro_instructions_); + DCHECK(allow_macro_instructions_); Subs(AppropriateZeroRegFor(rn), rn, operand); } void MacroAssembler::Neg(const Register& rd, const Operand& operand) { - ASSERT(allow_macro_instructions_); - ASSERT(!rd.IsZero()); + DCHECK(allow_macro_instructions_); + DCHECK(!rd.IsZero()); if (operand.IsImmediate()) { - Mov(rd, -operand.immediate()); + Mov(rd, -operand.ImmediateValue()); } else { Sub(rd, AppropriateZeroRegFor(rd), operand); } @@ -221,7 +224,7 @@ void MacroAssembler::Neg(const Register& rd, void MacroAssembler::Negs(const Register& rd, const Operand& operand) { - ASSERT(allow_macro_instructions_); + DCHECK(allow_macro_instructions_); Subs(rd, AppropriateZeroRegFor(rd), operand); } @@ -229,8 +232,8 @@ void MacroAssembler::Negs(const Register& rd, void MacroAssembler::Adc(const Register& rd, const Register& rn, const Operand& operand) { - ASSERT(allow_macro_instructions_); - ASSERT(!rd.IsZero()); + DCHECK(allow_macro_instructions_); + DCHECK(!rd.IsZero()); AddSubWithCarryMacro(rd, rn, operand, LeaveFlags, ADC); } @@ -238,8 +241,8 @@ void MacroAssembler::Adc(const Register& rd, void MacroAssembler::Adcs(const Register& rd, const Register& rn, const Operand& operand) { - ASSERT(allow_macro_instructions_); - ASSERT(!rd.IsZero()); + DCHECK(allow_macro_instructions_); + DCHECK(!rd.IsZero()); AddSubWithCarryMacro(rd, rn, operand, SetFlags, ADC); } @@ -247,8 +250,8 @@ void MacroAssembler::Adcs(const Register& rd, void MacroAssembler::Sbc(const Register& rd, const Register& rn, const Operand& operand) { - ASSERT(allow_macro_instructions_); - ASSERT(!rd.IsZero()); + DCHECK(allow_macro_instructions_); + DCHECK(!rd.IsZero()); AddSubWithCarryMacro(rd, rn, operand, LeaveFlags, SBC); } @@ -256,16 +259,16 @@ void MacroAssembler::Sbc(const Register& rd, void MacroAssembler::Sbcs(const Register& rd, const Register& rn, const Operand& operand) { - ASSERT(allow_macro_instructions_); - ASSERT(!rd.IsZero()); + DCHECK(allow_macro_instructions_); + DCHECK(!rd.IsZero()); AddSubWithCarryMacro(rd, rn, operand, SetFlags, SBC); } void MacroAssembler::Ngc(const Register& rd, const Operand& operand) { - ASSERT(allow_macro_instructions_); - ASSERT(!rd.IsZero()); + DCHECK(allow_macro_instructions_); + DCHECK(!rd.IsZero()); Register zr = AppropriateZeroRegFor(rd); Sbc(rd, zr, operand); } @@ -273,34 +276,44 @@ void MacroAssembler::Ngc(const Register& rd, void MacroAssembler::Ngcs(const Register& rd, const Operand& operand) { - ASSERT(allow_macro_instructions_); - ASSERT(!rd.IsZero()); + DCHECK(allow_macro_instructions_); + DCHECK(!rd.IsZero()); Register zr = AppropriateZeroRegFor(rd); Sbcs(rd, zr, operand); } void MacroAssembler::Mvn(const Register& rd, uint64_t imm) { - ASSERT(allow_macro_instructions_); - ASSERT(!rd.IsZero()); + DCHECK(allow_macro_instructions_); + DCHECK(!rd.IsZero()); Mov(rd, ~imm); } #define DEFINE_FUNCTION(FN, REGTYPE, REG, OP) \ void MacroAssembler::FN(const REGTYPE REG, const MemOperand& addr) { \ - ASSERT(allow_macro_instructions_); \ + DCHECK(allow_macro_instructions_); \ LoadStoreMacro(REG, addr, OP); \ } LS_MACRO_LIST(DEFINE_FUNCTION) #undef DEFINE_FUNCTION +#define DEFINE_FUNCTION(FN, REGTYPE, REG, REG2, 
OP) \ + void MacroAssembler::FN(const REGTYPE REG, const REGTYPE REG2, \ + const MemOperand& addr) { \ + DCHECK(allow_macro_instructions_); \ + LoadStorePairMacro(REG, REG2, addr, OP); \ + } +LSPAIR_MACRO_LIST(DEFINE_FUNCTION) +#undef DEFINE_FUNCTION + + void MacroAssembler::Asr(const Register& rd, const Register& rn, unsigned shift) { - ASSERT(allow_macro_instructions_); - ASSERT(!rd.IsZero()); + DCHECK(allow_macro_instructions_); + DCHECK(!rd.IsZero()); asr(rd, rn, shift); } @@ -308,8 +321,8 @@ void MacroAssembler::Asr(const Register& rd, void MacroAssembler::Asr(const Register& rd, const Register& rn, const Register& rm) { - ASSERT(allow_macro_instructions_); - ASSERT(!rd.IsZero()); + DCHECK(allow_macro_instructions_); + DCHECK(!rd.IsZero()); asrv(rd, rn, rm); } @@ -321,7 +334,7 @@ void MacroAssembler::B(Label* label) { void MacroAssembler::B(Condition cond, Label* label) { - ASSERT(allow_macro_instructions_); + DCHECK(allow_macro_instructions_); B(label, cond); } @@ -330,8 +343,8 @@ void MacroAssembler::Bfi(const Register& rd, const Register& rn, unsigned lsb, unsigned width) { - ASSERT(allow_macro_instructions_); - ASSERT(!rd.IsZero()); + DCHECK(allow_macro_instructions_); + DCHECK(!rd.IsZero()); bfi(rd, rn, lsb, width); } @@ -340,40 +353,40 @@ void MacroAssembler::Bfxil(const Register& rd, const Register& rn, unsigned lsb, unsigned width) { - ASSERT(allow_macro_instructions_); - ASSERT(!rd.IsZero()); + DCHECK(allow_macro_instructions_); + DCHECK(!rd.IsZero()); bfxil(rd, rn, lsb, width); } void MacroAssembler::Bind(Label* label) { - ASSERT(allow_macro_instructions_); + DCHECK(allow_macro_instructions_); bind(label); } void MacroAssembler::Bl(Label* label) { - ASSERT(allow_macro_instructions_); + DCHECK(allow_macro_instructions_); bl(label); } void MacroAssembler::Blr(const Register& xn) { - ASSERT(allow_macro_instructions_); - ASSERT(!xn.IsZero()); + DCHECK(allow_macro_instructions_); + DCHECK(!xn.IsZero()); blr(xn); } void MacroAssembler::Br(const Register& xn) { - ASSERT(allow_macro_instructions_); - ASSERT(!xn.IsZero()); + DCHECK(allow_macro_instructions_); + DCHECK(!xn.IsZero()); br(xn); } void MacroAssembler::Brk(int code) { - ASSERT(allow_macro_instructions_); + DCHECK(allow_macro_instructions_); brk(code); } @@ -381,9 +394,9 @@ void MacroAssembler::Brk(int code) { void MacroAssembler::Cinc(const Register& rd, const Register& rn, Condition cond) { - ASSERT(allow_macro_instructions_); - ASSERT(!rd.IsZero()); - ASSERT((cond != al) && (cond != nv)); + DCHECK(allow_macro_instructions_); + DCHECK(!rd.IsZero()); + DCHECK((cond != al) && (cond != nv)); cinc(rd, rn, cond); } @@ -391,23 +404,23 @@ void MacroAssembler::Cinc(const Register& rd, void MacroAssembler::Cinv(const Register& rd, const Register& rn, Condition cond) { - ASSERT(allow_macro_instructions_); - ASSERT(!rd.IsZero()); - ASSERT((cond != al) && (cond != nv)); + DCHECK(allow_macro_instructions_); + DCHECK(!rd.IsZero()); + DCHECK((cond != al) && (cond != nv)); cinv(rd, rn, cond); } void MacroAssembler::Cls(const Register& rd, const Register& rn) { - ASSERT(allow_macro_instructions_); - ASSERT(!rd.IsZero()); + DCHECK(allow_macro_instructions_); + DCHECK(!rd.IsZero()); cls(rd, rn); } void MacroAssembler::Clz(const Register& rd, const Register& rn) { - ASSERT(allow_macro_instructions_); - ASSERT(!rd.IsZero()); + DCHECK(allow_macro_instructions_); + DCHECK(!rd.IsZero()); clz(rd, rn); } @@ -415,9 +428,9 @@ void MacroAssembler::Clz(const Register& rd, const Register& rn) { void MacroAssembler::Cneg(const Register& rd, const 
Register& rn, Condition cond) { - ASSERT(allow_macro_instructions_); - ASSERT(!rd.IsZero()); - ASSERT((cond != al) && (cond != nv)); + DCHECK(allow_macro_instructions_); + DCHECK(!rd.IsZero()); + DCHECK((cond != al) && (cond != nv)); cneg(rd, rn, cond); } @@ -426,9 +439,9 @@ void MacroAssembler::Cneg(const Register& rd, // due to the truncation side-effect when used on W registers. void MacroAssembler::CzeroX(const Register& rd, Condition cond) { - ASSERT(allow_macro_instructions_); - ASSERT(!rd.IsSP() && rd.Is64Bits()); - ASSERT((cond != al) && (cond != nv)); + DCHECK(allow_macro_instructions_); + DCHECK(!rd.IsSP() && rd.Is64Bits()); + DCHECK((cond != al) && (cond != nv)); csel(rd, xzr, rd, cond); } @@ -438,10 +451,10 @@ void MacroAssembler::CzeroX(const Register& rd, void MacroAssembler::CmovX(const Register& rd, const Register& rn, Condition cond) { - ASSERT(allow_macro_instructions_); - ASSERT(!rd.IsSP()); - ASSERT(rd.Is64Bits() && rn.Is64Bits()); - ASSERT((cond != al) && (cond != nv)); + DCHECK(allow_macro_instructions_); + DCHECK(!rd.IsSP()); + DCHECK(rd.Is64Bits() && rn.Is64Bits()); + DCHECK((cond != al) && (cond != nv)); if (!rd.is(rn)) { csel(rd, rn, rd, cond); } @@ -449,17 +462,17 @@ void MacroAssembler::CmovX(const Register& rd, void MacroAssembler::Cset(const Register& rd, Condition cond) { - ASSERT(allow_macro_instructions_); - ASSERT(!rd.IsZero()); - ASSERT((cond != al) && (cond != nv)); + DCHECK(allow_macro_instructions_); + DCHECK(!rd.IsZero()); + DCHECK((cond != al) && (cond != nv)); cset(rd, cond); } void MacroAssembler::Csetm(const Register& rd, Condition cond) { - ASSERT(allow_macro_instructions_); - ASSERT(!rd.IsZero()); - ASSERT((cond != al) && (cond != nv)); + DCHECK(allow_macro_instructions_); + DCHECK(!rd.IsZero()); + DCHECK((cond != al) && (cond != nv)); csetm(rd, cond); } @@ -468,9 +481,9 @@ void MacroAssembler::Csinc(const Register& rd, const Register& rn, const Register& rm, Condition cond) { - ASSERT(allow_macro_instructions_); - ASSERT(!rd.IsZero()); - ASSERT((cond != al) && (cond != nv)); + DCHECK(allow_macro_instructions_); + DCHECK(!rd.IsZero()); + DCHECK((cond != al) && (cond != nv)); csinc(rd, rn, rm, cond); } @@ -479,9 +492,9 @@ void MacroAssembler::Csinv(const Register& rd, const Register& rn, const Register& rm, Condition cond) { - ASSERT(allow_macro_instructions_); - ASSERT(!rd.IsZero()); - ASSERT((cond != al) && (cond != nv)); + DCHECK(allow_macro_instructions_); + DCHECK(!rd.IsZero()); + DCHECK((cond != al) && (cond != nv)); csinv(rd, rn, rm, cond); } @@ -490,27 +503,27 @@ void MacroAssembler::Csneg(const Register& rd, const Register& rn, const Register& rm, Condition cond) { - ASSERT(allow_macro_instructions_); - ASSERT(!rd.IsZero()); - ASSERT((cond != al) && (cond != nv)); + DCHECK(allow_macro_instructions_); + DCHECK(!rd.IsZero()); + DCHECK((cond != al) && (cond != nv)); csneg(rd, rn, rm, cond); } void MacroAssembler::Dmb(BarrierDomain domain, BarrierType type) { - ASSERT(allow_macro_instructions_); + DCHECK(allow_macro_instructions_); dmb(domain, type); } void MacroAssembler::Dsb(BarrierDomain domain, BarrierType type) { - ASSERT(allow_macro_instructions_); + DCHECK(allow_macro_instructions_); dsb(domain, type); } void MacroAssembler::Debug(const char* message, uint32_t code, Instr params) { - ASSERT(allow_macro_instructions_); + DCHECK(allow_macro_instructions_); debug(message, code, params); } @@ -519,14 +532,14 @@ void MacroAssembler::Extr(const Register& rd, const Register& rn, const Register& rm, unsigned lsb) { - 
ASSERT(allow_macro_instructions_); - ASSERT(!rd.IsZero()); + DCHECK(allow_macro_instructions_); + DCHECK(!rd.IsZero()); extr(rd, rn, rm, lsb); } void MacroAssembler::Fabs(const FPRegister& fd, const FPRegister& fn) { - ASSERT(allow_macro_instructions_); + DCHECK(allow_macro_instructions_); fabs(fd, fn); } @@ -534,7 +547,7 @@ void MacroAssembler::Fabs(const FPRegister& fd, const FPRegister& fn) { void MacroAssembler::Fadd(const FPRegister& fd, const FPRegister& fn, const FPRegister& fm) { - ASSERT(allow_macro_instructions_); + DCHECK(allow_macro_instructions_); fadd(fd, fn, fm); } @@ -543,20 +556,20 @@ void MacroAssembler::Fccmp(const FPRegister& fn, const FPRegister& fm, StatusFlags nzcv, Condition cond) { - ASSERT(allow_macro_instructions_); - ASSERT((cond != al) && (cond != nv)); + DCHECK(allow_macro_instructions_); + DCHECK((cond != al) && (cond != nv)); fccmp(fn, fm, nzcv, cond); } void MacroAssembler::Fcmp(const FPRegister& fn, const FPRegister& fm) { - ASSERT(allow_macro_instructions_); + DCHECK(allow_macro_instructions_); fcmp(fn, fm); } void MacroAssembler::Fcmp(const FPRegister& fn, double value) { - ASSERT(allow_macro_instructions_); + DCHECK(allow_macro_instructions_); if (value != 0.0) { UseScratchRegisterScope temps(this); FPRegister tmp = temps.AcquireSameSizeAs(fn); @@ -572,68 +585,68 @@ void MacroAssembler::Fcsel(const FPRegister& fd, const FPRegister& fn, const FPRegister& fm, Condition cond) { - ASSERT(allow_macro_instructions_); - ASSERT((cond != al) && (cond != nv)); + DCHECK(allow_macro_instructions_); + DCHECK((cond != al) && (cond != nv)); fcsel(fd, fn, fm, cond); } void MacroAssembler::Fcvt(const FPRegister& fd, const FPRegister& fn) { - ASSERT(allow_macro_instructions_); + DCHECK(allow_macro_instructions_); fcvt(fd, fn); } void MacroAssembler::Fcvtas(const Register& rd, const FPRegister& fn) { - ASSERT(allow_macro_instructions_); - ASSERT(!rd.IsZero()); + DCHECK(allow_macro_instructions_); + DCHECK(!rd.IsZero()); fcvtas(rd, fn); } void MacroAssembler::Fcvtau(const Register& rd, const FPRegister& fn) { - ASSERT(allow_macro_instructions_); - ASSERT(!rd.IsZero()); + DCHECK(allow_macro_instructions_); + DCHECK(!rd.IsZero()); fcvtau(rd, fn); } void MacroAssembler::Fcvtms(const Register& rd, const FPRegister& fn) { - ASSERT(allow_macro_instructions_); - ASSERT(!rd.IsZero()); + DCHECK(allow_macro_instructions_); + DCHECK(!rd.IsZero()); fcvtms(rd, fn); } void MacroAssembler::Fcvtmu(const Register& rd, const FPRegister& fn) { - ASSERT(allow_macro_instructions_); - ASSERT(!rd.IsZero()); + DCHECK(allow_macro_instructions_); + DCHECK(!rd.IsZero()); fcvtmu(rd, fn); } void MacroAssembler::Fcvtns(const Register& rd, const FPRegister& fn) { - ASSERT(allow_macro_instructions_); - ASSERT(!rd.IsZero()); + DCHECK(allow_macro_instructions_); + DCHECK(!rd.IsZero()); fcvtns(rd, fn); } void MacroAssembler::Fcvtnu(const Register& rd, const FPRegister& fn) { - ASSERT(allow_macro_instructions_); - ASSERT(!rd.IsZero()); + DCHECK(allow_macro_instructions_); + DCHECK(!rd.IsZero()); fcvtnu(rd, fn); } void MacroAssembler::Fcvtzs(const Register& rd, const FPRegister& fn) { - ASSERT(allow_macro_instructions_); - ASSERT(!rd.IsZero()); + DCHECK(allow_macro_instructions_); + DCHECK(!rd.IsZero()); fcvtzs(rd, fn); } void MacroAssembler::Fcvtzu(const Register& rd, const FPRegister& fn) { - ASSERT(allow_macro_instructions_); - ASSERT(!rd.IsZero()); + DCHECK(allow_macro_instructions_); + DCHECK(!rd.IsZero()); fcvtzu(rd, fn); } @@ -641,7 +654,7 @@ void MacroAssembler::Fcvtzu(const Register& rd, const 
FPRegister& fn) { void MacroAssembler::Fdiv(const FPRegister& fd, const FPRegister& fn, const FPRegister& fm) { - ASSERT(allow_macro_instructions_); + DCHECK(allow_macro_instructions_); fdiv(fd, fn, fm); } @@ -650,7 +663,7 @@ void MacroAssembler::Fmadd(const FPRegister& fd, const FPRegister& fn, const FPRegister& fm, const FPRegister& fa) { - ASSERT(allow_macro_instructions_); + DCHECK(allow_macro_instructions_); fmadd(fd, fn, fm, fa); } @@ -658,7 +671,7 @@ void MacroAssembler::Fmadd(const FPRegister& fd, void MacroAssembler::Fmax(const FPRegister& fd, const FPRegister& fn, const FPRegister& fm) { - ASSERT(allow_macro_instructions_); + DCHECK(allow_macro_instructions_); fmax(fd, fn, fm); } @@ -666,7 +679,7 @@ void MacroAssembler::Fmax(const FPRegister& fd, void MacroAssembler::Fmaxnm(const FPRegister& fd, const FPRegister& fn, const FPRegister& fm) { - ASSERT(allow_macro_instructions_); + DCHECK(allow_macro_instructions_); fmaxnm(fd, fn, fm); } @@ -674,7 +687,7 @@ void MacroAssembler::Fmaxnm(const FPRegister& fd, void MacroAssembler::Fmin(const FPRegister& fd, const FPRegister& fn, const FPRegister& fm) { - ASSERT(allow_macro_instructions_); + DCHECK(allow_macro_instructions_); fmin(fd, fn, fm); } @@ -682,13 +695,13 @@ void MacroAssembler::Fmin(const FPRegister& fd, void MacroAssembler::Fminnm(const FPRegister& fd, const FPRegister& fn, const FPRegister& fm) { - ASSERT(allow_macro_instructions_); + DCHECK(allow_macro_instructions_); fminnm(fd, fn, fm); } void MacroAssembler::Fmov(FPRegister fd, FPRegister fn) { - ASSERT(allow_macro_instructions_); + DCHECK(allow_macro_instructions_); // Only emit an instruction if fd and fn are different, and they are both D // registers. fmov(s0, s0) is not a no-op because it clears the top word of // d0. Technically, fmov(d0, d0) is not a no-op either because it clears the @@ -700,41 +713,37 @@ void MacroAssembler::Fmov(FPRegister fd, FPRegister fn) { void MacroAssembler::Fmov(FPRegister fd, Register rn) { - ASSERT(allow_macro_instructions_); + DCHECK(allow_macro_instructions_); fmov(fd, rn); } void MacroAssembler::Fmov(FPRegister fd, double imm) { - ASSERT(allow_macro_instructions_); + DCHECK(allow_macro_instructions_); if (fd.Is32Bits()) { Fmov(fd, static_cast<float>(imm)); return; } - ASSERT(fd.Is64Bits()); + DCHECK(fd.Is64Bits()); if (IsImmFP64(imm)) { fmov(fd, imm); } else if ((imm == 0.0) && (copysign(1.0, imm) == 1.0)) { fmov(fd, xzr); } else { - UseScratchRegisterScope temps(this); - Register tmp = temps.AcquireX(); - // TODO(all): Use Assembler::ldr(const FPRegister& ft, double imm). 
- Mov(tmp, double_to_rawbits(imm)); - Fmov(fd, tmp); + Ldr(fd, imm); } } void MacroAssembler::Fmov(FPRegister fd, float imm) { - ASSERT(allow_macro_instructions_); + DCHECK(allow_macro_instructions_); if (fd.Is64Bits()) { Fmov(fd, static_cast<double>(imm)); return; } - ASSERT(fd.Is32Bits()); + DCHECK(fd.Is32Bits()); if (IsImmFP32(imm)) { fmov(fd, imm); } else if ((imm == 0.0) && (copysign(1.0, imm) == 1.0)) { @@ -750,8 +759,8 @@ void MacroAssembler::Fmov(FPRegister fd, float imm) { void MacroAssembler::Fmov(Register rd, FPRegister fn) { - ASSERT(allow_macro_instructions_); - ASSERT(!rd.IsZero()); + DCHECK(allow_macro_instructions_); + DCHECK(!rd.IsZero()); fmov(rd, fn); } @@ -760,7 +769,7 @@ void MacroAssembler::Fmsub(const FPRegister& fd, const FPRegister& fn, const FPRegister& fm, const FPRegister& fa) { - ASSERT(allow_macro_instructions_); + DCHECK(allow_macro_instructions_); fmsub(fd, fn, fm, fa); } @@ -768,13 +777,13 @@ void MacroAssembler::Fmsub(const FPRegister& fd, void MacroAssembler::Fmul(const FPRegister& fd, const FPRegister& fn, const FPRegister& fm) { - ASSERT(allow_macro_instructions_); + DCHECK(allow_macro_instructions_); fmul(fd, fn, fm); } void MacroAssembler::Fneg(const FPRegister& fd, const FPRegister& fn) { - ASSERT(allow_macro_instructions_); + DCHECK(allow_macro_instructions_); fneg(fd, fn); } @@ -783,7 +792,7 @@ void MacroAssembler::Fnmadd(const FPRegister& fd, const FPRegister& fn, const FPRegister& fm, const FPRegister& fa) { - ASSERT(allow_macro_instructions_); + DCHECK(allow_macro_instructions_); fnmadd(fd, fn, fm, fa); } @@ -792,37 +801,37 @@ void MacroAssembler::Fnmsub(const FPRegister& fd, const FPRegister& fn, const FPRegister& fm, const FPRegister& fa) { - ASSERT(allow_macro_instructions_); + DCHECK(allow_macro_instructions_); fnmsub(fd, fn, fm, fa); } void MacroAssembler::Frinta(const FPRegister& fd, const FPRegister& fn) { - ASSERT(allow_macro_instructions_); + DCHECK(allow_macro_instructions_); frinta(fd, fn); } void MacroAssembler::Frintm(const FPRegister& fd, const FPRegister& fn) { - ASSERT(allow_macro_instructions_); + DCHECK(allow_macro_instructions_); frintm(fd, fn); } void MacroAssembler::Frintn(const FPRegister& fd, const FPRegister& fn) { - ASSERT(allow_macro_instructions_); + DCHECK(allow_macro_instructions_); frintn(fd, fn); } void MacroAssembler::Frintz(const FPRegister& fd, const FPRegister& fn) { - ASSERT(allow_macro_instructions_); + DCHECK(allow_macro_instructions_); frintz(fd, fn); } void MacroAssembler::Fsqrt(const FPRegister& fd, const FPRegister& fn) { - ASSERT(allow_macro_instructions_); + DCHECK(allow_macro_instructions_); fsqrt(fd, fn); } @@ -830,25 +839,25 @@ void MacroAssembler::Fsqrt(const FPRegister& fd, const FPRegister& fn) { void MacroAssembler::Fsub(const FPRegister& fd, const FPRegister& fn, const FPRegister& fm) { - ASSERT(allow_macro_instructions_); + DCHECK(allow_macro_instructions_); fsub(fd, fn, fm); } void MacroAssembler::Hint(SystemHint code) { - ASSERT(allow_macro_instructions_); + DCHECK(allow_macro_instructions_); hint(code); } void MacroAssembler::Hlt(int code) { - ASSERT(allow_macro_instructions_); + DCHECK(allow_macro_instructions_); hlt(code); } void MacroAssembler::Isb() { - ASSERT(allow_macro_instructions_); + DCHECK(allow_macro_instructions_); isb(); } @@ -856,49 +865,30 @@ void MacroAssembler::Isb() { void MacroAssembler::Ldnp(const CPURegister& rt, const CPURegister& rt2, const MemOperand& src) { - ASSERT(allow_macro_instructions_); - ASSERT(!AreAliased(rt, rt2)); + DCHECK(allow_macro_instructions_); + 
DCHECK(!AreAliased(rt, rt2)); ldnp(rt, rt2, src); } -void MacroAssembler::Ldp(const CPURegister& rt, - const CPURegister& rt2, - const MemOperand& src) { - ASSERT(allow_macro_instructions_); - ASSERT(!AreAliased(rt, rt2)); - ldp(rt, rt2, src); -} - - -void MacroAssembler::Ldpsw(const Register& rt, - const Register& rt2, - const MemOperand& src) { - ASSERT(allow_macro_instructions_); - ASSERT(!rt.IsZero()); - ASSERT(!rt2.IsZero()); - ldpsw(rt, rt2, src); -} - - -void MacroAssembler::Ldr(const FPRegister& ft, double imm) { - ASSERT(allow_macro_instructions_); - ldr(ft, imm); +void MacroAssembler::Ldr(const CPURegister& rt, const Immediate& imm) { + DCHECK(allow_macro_instructions_); + ldr(rt, imm); } -void MacroAssembler::Ldr(const Register& rt, uint64_t imm) { - ASSERT(allow_macro_instructions_); - ASSERT(!rt.IsZero()); - ldr(rt, imm); +void MacroAssembler::Ldr(const CPURegister& rt, double imm) { + DCHECK(allow_macro_instructions_); + DCHECK(rt.Is64Bits()); + ldr(rt, Immediate(double_to_rawbits(imm))); } void MacroAssembler::Lsl(const Register& rd, const Register& rn, unsigned shift) { - ASSERT(allow_macro_instructions_); - ASSERT(!rd.IsZero()); + DCHECK(allow_macro_instructions_); + DCHECK(!rd.IsZero()); lsl(rd, rn, shift); } @@ -906,8 +896,8 @@ void MacroAssembler::Lsl(const Register& rd, void MacroAssembler::Lsl(const Register& rd, const Register& rn, const Register& rm) { - ASSERT(allow_macro_instructions_); - ASSERT(!rd.IsZero()); + DCHECK(allow_macro_instructions_); + DCHECK(!rd.IsZero()); lslv(rd, rn, rm); } @@ -915,8 +905,8 @@ void MacroAssembler::Lsl(const Register& rd, void MacroAssembler::Lsr(const Register& rd, const Register& rn, unsigned shift) { - ASSERT(allow_macro_instructions_); - ASSERT(!rd.IsZero()); + DCHECK(allow_macro_instructions_); + DCHECK(!rd.IsZero()); lsr(rd, rn, shift); } @@ -924,8 +914,8 @@ void MacroAssembler::Lsr(const Register& rd, void MacroAssembler::Lsr(const Register& rd, const Register& rn, const Register& rm) { - ASSERT(allow_macro_instructions_); - ASSERT(!rd.IsZero()); + DCHECK(allow_macro_instructions_); + DCHECK(!rd.IsZero()); lsrv(rd, rn, rm); } @@ -934,8 +924,8 @@ void MacroAssembler::Madd(const Register& rd, const Register& rn, const Register& rm, const Register& ra) { - ASSERT(allow_macro_instructions_); - ASSERT(!rd.IsZero()); + DCHECK(allow_macro_instructions_); + DCHECK(!rd.IsZero()); madd(rd, rn, rm, ra); } @@ -943,15 +933,15 @@ void MacroAssembler::Madd(const Register& rd, void MacroAssembler::Mneg(const Register& rd, const Register& rn, const Register& rm) { - ASSERT(allow_macro_instructions_); - ASSERT(!rd.IsZero()); + DCHECK(allow_macro_instructions_); + DCHECK(!rd.IsZero()); mneg(rd, rn, rm); } void MacroAssembler::Mov(const Register& rd, const Register& rn) { - ASSERT(allow_macro_instructions_); - ASSERT(!rd.IsZero()); + DCHECK(allow_macro_instructions_); + DCHECK(!rd.IsZero()); // Emit a register move only if the registers are distinct, or if they are // not X registers. Note that mov(w0, w0) is not a no-op because it clears // the top word of x0. 
@@ -962,21 +952,21 @@ void MacroAssembler::Mov(const Register& rd, const Register& rn) { void MacroAssembler::Movk(const Register& rd, uint64_t imm, int shift) { - ASSERT(allow_macro_instructions_); - ASSERT(!rd.IsZero()); + DCHECK(allow_macro_instructions_); + DCHECK(!rd.IsZero()); movk(rd, imm, shift); } void MacroAssembler::Mrs(const Register& rt, SystemRegister sysreg) { - ASSERT(allow_macro_instructions_); - ASSERT(!rt.IsZero()); + DCHECK(allow_macro_instructions_); + DCHECK(!rt.IsZero()); mrs(rt, sysreg); } void MacroAssembler::Msr(SystemRegister sysreg, const Register& rt) { - ASSERT(allow_macro_instructions_); + DCHECK(allow_macro_instructions_); msr(sysreg, rt); } @@ -985,8 +975,8 @@ void MacroAssembler::Msub(const Register& rd, const Register& rn, const Register& rm, const Register& ra) { - ASSERT(allow_macro_instructions_); - ASSERT(!rd.IsZero()); + DCHECK(allow_macro_instructions_); + DCHECK(!rd.IsZero()); msub(rd, rn, rm, ra); } @@ -994,44 +984,44 @@ void MacroAssembler::Msub(const Register& rd, void MacroAssembler::Mul(const Register& rd, const Register& rn, const Register& rm) { - ASSERT(allow_macro_instructions_); - ASSERT(!rd.IsZero()); + DCHECK(allow_macro_instructions_); + DCHECK(!rd.IsZero()); mul(rd, rn, rm); } void MacroAssembler::Rbit(const Register& rd, const Register& rn) { - ASSERT(allow_macro_instructions_); - ASSERT(!rd.IsZero()); + DCHECK(allow_macro_instructions_); + DCHECK(!rd.IsZero()); rbit(rd, rn); } void MacroAssembler::Ret(const Register& xn) { - ASSERT(allow_macro_instructions_); - ASSERT(!xn.IsZero()); + DCHECK(allow_macro_instructions_); + DCHECK(!xn.IsZero()); ret(xn); CheckVeneerPool(false, false); } void MacroAssembler::Rev(const Register& rd, const Register& rn) { - ASSERT(allow_macro_instructions_); - ASSERT(!rd.IsZero()); + DCHECK(allow_macro_instructions_); + DCHECK(!rd.IsZero()); rev(rd, rn); } void MacroAssembler::Rev16(const Register& rd, const Register& rn) { - ASSERT(allow_macro_instructions_); - ASSERT(!rd.IsZero()); + DCHECK(allow_macro_instructions_); + DCHECK(!rd.IsZero()); rev16(rd, rn); } void MacroAssembler::Rev32(const Register& rd, const Register& rn) { - ASSERT(allow_macro_instructions_); - ASSERT(!rd.IsZero()); + DCHECK(allow_macro_instructions_); + DCHECK(!rd.IsZero()); rev32(rd, rn); } @@ -1039,8 +1029,8 @@ void MacroAssembler::Rev32(const Register& rd, const Register& rn) { void MacroAssembler::Ror(const Register& rd, const Register& rs, unsigned shift) { - ASSERT(allow_macro_instructions_); - ASSERT(!rd.IsZero()); + DCHECK(allow_macro_instructions_); + DCHECK(!rd.IsZero()); ror(rd, rs, shift); } @@ -1048,8 +1038,8 @@ void MacroAssembler::Ror(const Register& rd, void MacroAssembler::Ror(const Register& rd, const Register& rn, const Register& rm) { - ASSERT(allow_macro_instructions_); - ASSERT(!rd.IsZero()); + DCHECK(allow_macro_instructions_); + DCHECK(!rd.IsZero()); rorv(rd, rn, rm); } @@ -1058,8 +1048,8 @@ void MacroAssembler::Sbfiz(const Register& rd, const Register& rn, unsigned lsb, unsigned width) { - ASSERT(allow_macro_instructions_); - ASSERT(!rd.IsZero()); + DCHECK(allow_macro_instructions_); + DCHECK(!rd.IsZero()); sbfiz(rd, rn, lsb, width); } @@ -1068,8 +1058,8 @@ void MacroAssembler::Sbfx(const Register& rd, const Register& rn, unsigned lsb, unsigned width) { - ASSERT(allow_macro_instructions_); - ASSERT(!rd.IsZero()); + DCHECK(allow_macro_instructions_); + DCHECK(!rd.IsZero()); sbfx(rd, rn, lsb, width); } @@ -1077,7 +1067,7 @@ void MacroAssembler::Sbfx(const Register& rd, void MacroAssembler::Scvtf(const 
FPRegister& fd, const Register& rn, unsigned fbits) { - ASSERT(allow_macro_instructions_); + DCHECK(allow_macro_instructions_); scvtf(fd, rn, fbits); } @@ -1085,8 +1075,8 @@ void MacroAssembler::Scvtf(const FPRegister& fd, void MacroAssembler::Sdiv(const Register& rd, const Register& rn, const Register& rm) { - ASSERT(allow_macro_instructions_); - ASSERT(!rd.IsZero()); + DCHECK(allow_macro_instructions_); + DCHECK(!rd.IsZero()); sdiv(rd, rn, rm); } @@ -1095,8 +1085,8 @@ void MacroAssembler::Smaddl(const Register& rd, const Register& rn, const Register& rm, const Register& ra) { - ASSERT(allow_macro_instructions_); - ASSERT(!rd.IsZero()); + DCHECK(allow_macro_instructions_); + DCHECK(!rd.IsZero()); smaddl(rd, rn, rm, ra); } @@ -1105,8 +1095,8 @@ void MacroAssembler::Smsubl(const Register& rd, const Register& rn, const Register& rm, const Register& ra) { - ASSERT(allow_macro_instructions_); - ASSERT(!rd.IsZero()); + DCHECK(allow_macro_instructions_); + DCHECK(!rd.IsZero()); smsubl(rd, rn, rm, ra); } @@ -1114,8 +1104,8 @@ void MacroAssembler::Smsubl(const Register& rd, void MacroAssembler::Smull(const Register& rd, const Register& rn, const Register& rm) { - ASSERT(allow_macro_instructions_); - ASSERT(!rd.IsZero()); + DCHECK(allow_macro_instructions_); + DCHECK(!rd.IsZero()); smull(rd, rn, rm); } @@ -1123,8 +1113,8 @@ void MacroAssembler::Smull(const Register& rd, void MacroAssembler::Smulh(const Register& rd, const Register& rn, const Register& rm) { - ASSERT(allow_macro_instructions_); - ASSERT(!rd.IsZero()); + DCHECK(allow_macro_instructions_); + DCHECK(!rd.IsZero()); smulh(rd, rn, rm); } @@ -1132,36 +1122,28 @@ void MacroAssembler::Smulh(const Register& rd, void MacroAssembler::Stnp(const CPURegister& rt, const CPURegister& rt2, const MemOperand& dst) { - ASSERT(allow_macro_instructions_); + DCHECK(allow_macro_instructions_); stnp(rt, rt2, dst); } -void MacroAssembler::Stp(const CPURegister& rt, - const CPURegister& rt2, - const MemOperand& dst) { - ASSERT(allow_macro_instructions_); - stp(rt, rt2, dst); -} - - void MacroAssembler::Sxtb(const Register& rd, const Register& rn) { - ASSERT(allow_macro_instructions_); - ASSERT(!rd.IsZero()); + DCHECK(allow_macro_instructions_); + DCHECK(!rd.IsZero()); sxtb(rd, rn); } void MacroAssembler::Sxth(const Register& rd, const Register& rn) { - ASSERT(allow_macro_instructions_); - ASSERT(!rd.IsZero()); + DCHECK(allow_macro_instructions_); + DCHECK(!rd.IsZero()); sxth(rd, rn); } void MacroAssembler::Sxtw(const Register& rd, const Register& rn) { - ASSERT(allow_macro_instructions_); - ASSERT(!rd.IsZero()); + DCHECK(allow_macro_instructions_); + DCHECK(!rd.IsZero()); sxtw(rd, rn); } @@ -1170,8 +1152,8 @@ void MacroAssembler::Ubfiz(const Register& rd, const Register& rn, unsigned lsb, unsigned width) { - ASSERT(allow_macro_instructions_); - ASSERT(!rd.IsZero()); + DCHECK(allow_macro_instructions_); + DCHECK(!rd.IsZero()); ubfiz(rd, rn, lsb, width); } @@ -1180,8 +1162,8 @@ void MacroAssembler::Ubfx(const Register& rd, const Register& rn, unsigned lsb, unsigned width) { - ASSERT(allow_macro_instructions_); - ASSERT(!rd.IsZero()); + DCHECK(allow_macro_instructions_); + DCHECK(!rd.IsZero()); ubfx(rd, rn, lsb, width); } @@ -1189,7 +1171,7 @@ void MacroAssembler::Ubfx(const Register& rd, void MacroAssembler::Ucvtf(const FPRegister& fd, const Register& rn, unsigned fbits) { - ASSERT(allow_macro_instructions_); + DCHECK(allow_macro_instructions_); ucvtf(fd, rn, fbits); } @@ -1197,8 +1179,8 @@ void MacroAssembler::Ucvtf(const FPRegister& fd, void 
MacroAssembler::Udiv(const Register& rd, const Register& rn, const Register& rm) { - ASSERT(allow_macro_instructions_); - ASSERT(!rd.IsZero()); + DCHECK(allow_macro_instructions_); + DCHECK(!rd.IsZero()); udiv(rd, rn, rm); } @@ -1207,8 +1189,8 @@ void MacroAssembler::Umaddl(const Register& rd, const Register& rn, const Register& rm, const Register& ra) { - ASSERT(allow_macro_instructions_); - ASSERT(!rd.IsZero()); + DCHECK(allow_macro_instructions_); + DCHECK(!rd.IsZero()); umaddl(rd, rn, rm, ra); } @@ -1217,58 +1199,87 @@ void MacroAssembler::Umsubl(const Register& rd, const Register& rn, const Register& rm, const Register& ra) { - ASSERT(allow_macro_instructions_); - ASSERT(!rd.IsZero()); + DCHECK(allow_macro_instructions_); + DCHECK(!rd.IsZero()); umsubl(rd, rn, rm, ra); } void MacroAssembler::Uxtb(const Register& rd, const Register& rn) { - ASSERT(allow_macro_instructions_); - ASSERT(!rd.IsZero()); + DCHECK(allow_macro_instructions_); + DCHECK(!rd.IsZero()); uxtb(rd, rn); } void MacroAssembler::Uxth(const Register& rd, const Register& rn) { - ASSERT(allow_macro_instructions_); - ASSERT(!rd.IsZero()); + DCHECK(allow_macro_instructions_); + DCHECK(!rd.IsZero()); uxth(rd, rn); } void MacroAssembler::Uxtw(const Register& rd, const Register& rn) { - ASSERT(allow_macro_instructions_); - ASSERT(!rd.IsZero()); + DCHECK(allow_macro_instructions_); + DCHECK(!rd.IsZero()); uxtw(rd, rn); } void MacroAssembler::BumpSystemStackPointer(const Operand& space) { - ASSERT(!csp.Is(sp_)); - // TODO(jbramley): Several callers rely on this not using scratch registers, - // so we use the assembler directly here. However, this means that large - // immediate values of 'space' cannot be handled cleanly. (Only 24-bits - // immediates or values of 'space' that can be encoded in one instruction are - // accepted.) Once we implement our flexible scratch register idea, we could - // greatly simplify this function. - InstructionAccurateScope scope(this); - if ((space.IsImmediate()) && !is_uint12(space.immediate())) { - // The subtract instruction supports a 12-bit immediate, shifted left by - // zero or 12 bits. So, in two instructions, we can subtract any immediate - // between zero and (1 << 24) - 1. - int64_t imm = space.immediate(); - ASSERT(is_uint24(imm)); - - int64_t imm_top_12_bits = imm >> 12; - sub(csp, StackPointer(), imm_top_12_bits << 12); - imm -= imm_top_12_bits << 12; - if (imm > 0) { - sub(csp, csp, imm); + DCHECK(!csp.Is(sp_)); + if (!TmpList()->IsEmpty()) { + if (CpuFeatures::IsSupported(ALWAYS_ALIGN_CSP)) { + UseScratchRegisterScope temps(this); + Register temp = temps.AcquireX(); + Sub(temp, StackPointer(), space); + Bic(csp, temp, 0xf); + } else { + Sub(csp, StackPointer(), space); } } else { - sub(csp, StackPointer(), space); + // TODO(jbramley): Several callers rely on this not using scratch + // registers, so we use the assembler directly here. However, this means + // that large immediate values of 'space' cannot be handled cleanly. (Only + // 24-bits immediates or values of 'space' that can be encoded in one + // instruction are accepted.) Once we implement our flexible scratch + // register idea, we could greatly simplify this function. + InstructionAccurateScope scope(this); + DCHECK(space.IsImmediate()); + // Align to 16 bytes. 
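// (Illustration with a hypothetical size, not from the patch: a request
// for 40 bytes is rounded up to 48, so the amount claimed below always
// stays a multiple of 16.)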
+ uint64_t imm = RoundUp(space.ImmediateValue(), 0x10); + DCHECK(is_uint24(imm)); + + Register source = StackPointer(); + if (CpuFeatures::IsSupported(ALWAYS_ALIGN_CSP)) { + bic(csp, source, 0xf); + source = csp; + } + if (!is_uint12(imm)) { + int64_t imm_top_12_bits = imm >> 12; + sub(csp, source, imm_top_12_bits << 12); + source = csp; + imm -= imm_top_12_bits << 12; + } + if (imm > 0) { + sub(csp, source, imm); + } } + AssertStackConsistency(); +} + + +void MacroAssembler::SyncSystemStackPointer() { + DCHECK(emit_debug_code()); + DCHECK(!csp.Is(sp_)); + { InstructionAccurateScope scope(this); + if (CpuFeatures::IsSupported(ALWAYS_ALIGN_CSP)) { + bic(csp, StackPointer(), 0xf); + } else { + mov(csp, StackPointer()); + } + } + AssertStackConsistency(); } @@ -1280,7 +1291,9 @@ void MacroAssembler::InitializeRootRegister() { void MacroAssembler::SmiTag(Register dst, Register src) { - ASSERT(dst.Is64Bits() && src.Is64Bits()); + STATIC_ASSERT(kXRegSizeInBits == + static_cast<unsigned>(kSmiShift + kSmiValueSize)); + DCHECK(dst.Is64Bits() && src.Is64Bits()); Lsl(dst, src, kSmiShift); } @@ -1289,7 +1302,9 @@ void MacroAssembler::SmiTag(Register smi) { SmiTag(smi, smi); } void MacroAssembler::SmiUntag(Register dst, Register src) { - ASSERT(dst.Is64Bits() && src.Is64Bits()); + STATIC_ASSERT(kXRegSizeInBits == + static_cast<unsigned>(kSmiShift + kSmiValueSize)); + DCHECK(dst.Is64Bits() && src.Is64Bits()); if (FLAG_enable_slow_asserts) { AssertSmi(src); } @@ -1303,7 +1318,7 @@ void MacroAssembler::SmiUntag(Register smi) { SmiUntag(smi, smi); } void MacroAssembler::SmiUntagToDouble(FPRegister dst, Register src, UntagMode mode) { - ASSERT(dst.Is64Bits() && src.Is64Bits()); + DCHECK(dst.Is64Bits() && src.Is64Bits()); if (FLAG_enable_slow_asserts && (mode == kNotSpeculativeUntag)) { AssertSmi(src); } @@ -1314,7 +1329,7 @@ void MacroAssembler::SmiUntagToDouble(FPRegister dst, void MacroAssembler::SmiUntagToFloat(FPRegister dst, Register src, UntagMode mode) { - ASSERT(dst.Is32Bits() && src.Is64Bits()); + DCHECK(dst.Is32Bits() && src.Is64Bits()); if (FLAG_enable_slow_asserts && (mode == kNotSpeculativeUntag)) { AssertSmi(src); } @@ -1322,6 +1337,22 @@ void MacroAssembler::SmiUntagToFloat(FPRegister dst, } +void MacroAssembler::SmiTagAndPush(Register src) { + STATIC_ASSERT((static_cast<unsigned>(kSmiShift) == kWRegSizeInBits) && + (static_cast<unsigned>(kSmiValueSize) == kWRegSizeInBits) && + (kSmiTag == 0)); + Push(src.W(), wzr); +} + + +void MacroAssembler::SmiTagAndPush(Register src1, Register src2) { + STATIC_ASSERT((static_cast<unsigned>(kSmiShift) == kWRegSizeInBits) && + (static_cast<unsigned>(kSmiValueSize) == kWRegSizeInBits) && + (kSmiTag == 0)); + Push(src1.W(), wzr, src2.W(), wzr); +} + + void MacroAssembler::JumpIfSmi(Register value, Label* smi_label, Label* not_smi_label) { @@ -1333,7 +1364,7 @@ void MacroAssembler::JumpIfSmi(Register value, B(not_smi_label); } } else { - ASSERT(not_smi_label); + DCHECK(not_smi_label); Tbnz(value, 0, not_smi_label); } } @@ -1450,7 +1481,7 @@ void MacroAssembler::IsObjectJSStringType(Register object, Ldrb(type.W(), FieldMemOperand(type, Map::kInstanceTypeOffset)); STATIC_ASSERT(kStringTag == 0); - ASSERT((string != NULL) || (not_string != NULL)); + DCHECK((string != NULL) || (not_string != NULL)); if (string == NULL) { TestAndBranchIfAnySet(type.W(), kIsNotStringMask, not_string); } else if (not_string == NULL) { @@ -1478,7 +1509,7 @@ void MacroAssembler::Claim(uint64_t count, uint64_t unit_size) { } if (csp.Is(StackPointer())) { - ASSERT(size % 16 == 0); + 
DCHECK(size % 16 == 0); } else { BumpSystemStackPointer(size); } @@ -1489,7 +1520,7 @@ void MacroAssembler::Claim(uint64_t count, uint64_t unit_size) { void MacroAssembler::Claim(const Register& count, uint64_t unit_size) { if (unit_size == 0) return; - ASSERT(IsPowerOf2(unit_size)); + DCHECK(IsPowerOf2(unit_size)); const int shift = CountTrailingZeros(unit_size, kXRegSizeInBits); const Operand size(count, LSL, shift); @@ -1507,7 +1538,7 @@ void MacroAssembler::Claim(const Register& count, uint64_t unit_size) { void MacroAssembler::ClaimBySMI(const Register& count_smi, uint64_t unit_size) { - ASSERT(unit_size == 0 || IsPowerOf2(unit_size)); + DCHECK(unit_size == 0 || IsPowerOf2(unit_size)); const int shift = CountTrailingZeros(unit_size, kXRegSizeInBits) - kSmiShift; const Operand size(count_smi, (shift >= 0) ? (LSL) : (LSR), @@ -1535,19 +1566,19 @@ void MacroAssembler::Drop(uint64_t count, uint64_t unit_size) { Add(StackPointer(), StackPointer(), size); if (csp.Is(StackPointer())) { - ASSERT(size % 16 == 0); + DCHECK(size % 16 == 0); } else if (emit_debug_code()) { // It is safe to leave csp where it is when unwinding the JavaScript stack, // but if we keep it matching StackPointer, the simulator can detect memory // accesses in the now-free part of the stack. - Mov(csp, StackPointer()); + SyncSystemStackPointer(); } } void MacroAssembler::Drop(const Register& count, uint64_t unit_size) { if (unit_size == 0) return; - ASSERT(IsPowerOf2(unit_size)); + DCHECK(IsPowerOf2(unit_size)); const int shift = CountTrailingZeros(unit_size, kXRegSizeInBits); const Operand size(count, LSL, shift); @@ -1562,13 +1593,13 @@ void MacroAssembler::Drop(const Register& count, uint64_t unit_size) { // It is safe to leave csp where it is when unwinding the JavaScript stack, // but if we keep it matching StackPointer, the simulator can detect memory // accesses in the now-free part of the stack. - Mov(csp, StackPointer()); + SyncSystemStackPointer(); } } void MacroAssembler::DropBySMI(const Register& count_smi, uint64_t unit_size) { - ASSERT(unit_size == 0 || IsPowerOf2(unit_size)); + DCHECK(unit_size == 0 || IsPowerOf2(unit_size)); const int shift = CountTrailingZeros(unit_size, kXRegSizeInBits) - kSmiShift; const Operand size(count_smi, (shift >= 0) ? (LSL) : (LSR), @@ -1584,7 +1615,7 @@ void MacroAssembler::DropBySMI(const Register& count_smi, uint64_t unit_size) { // It is safe to leave csp where it is when unwinding the JavaScript stack, // but if we keep it matching StackPointer, the simulator can detect memory // accesses in the now-free part of the stack. 
- Mov(csp, StackPointer()); + SyncSystemStackPointer(); } } @@ -1593,7 +1624,7 @@ void MacroAssembler::CompareAndBranch(const Register& lhs, const Operand& rhs, Condition cond, Label* label) { - if (rhs.IsImmediate() && (rhs.immediate() == 0) && + if (rhs.IsImmediate() && (rhs.ImmediateValue() == 0) && ((cond == eq) || (cond == ne))) { if (cond == eq) { Cbz(lhs, label); @@ -1611,7 +1642,7 @@ void MacroAssembler::TestAndBranchIfAnySet(const Register& reg, const uint64_t bit_pattern, Label* label) { int bits = reg.SizeInBits(); - ASSERT(CountSetBits(bit_pattern, bits) > 0); + DCHECK(CountSetBits(bit_pattern, bits) > 0); if (CountSetBits(bit_pattern, bits) == 1) { Tbnz(reg, MaskToBit(bit_pattern), label); } else { @@ -1625,7 +1656,7 @@ void MacroAssembler::TestAndBranchIfAllClear(const Register& reg, const uint64_t bit_pattern, Label* label) { int bits = reg.SizeInBits(); - ASSERT(CountSetBits(bit_pattern, bits) > 0); + DCHECK(CountSetBits(bit_pattern, bits) > 0); if (CountSetBits(bit_pattern, bits) == 1) { Tbz(reg, MaskToBit(bit_pattern), label); } else { @@ -1636,7 +1667,7 @@ void MacroAssembler::TestAndBranchIfAllClear(const Register& reg, void MacroAssembler::InlineData(uint64_t data) { - ASSERT(is_uint16(data)); + DCHECK(is_uint16(data)); InstructionAccurateScope scope(this, 1); movz(xzr, data); } @@ -1655,11 +1686,11 @@ void MacroAssembler::DisableInstrumentation() { void MacroAssembler::AnnotateInstrumentation(const char* marker_name) { - ASSERT(strlen(marker_name) == 2); + DCHECK(strlen(marker_name) == 2); // We allow only printable characters in the marker names. Unprintable // characters are reserved for controlling features of the instrumentation. - ASSERT(isprint(marker_name[0]) && isprint(marker_name[1])); + DCHECK(isprint(marker_name[0]) && isprint(marker_name[1])); InstructionAccurateScope scope(this, 1); movn(xzr, (marker_name[1] << 8) | marker_name[0]); diff --git a/deps/v8/src/arm64/macro-assembler-arm64.cc b/deps/v8/src/arm64/macro-assembler-arm64.cc index 352f3c2ac70..658497b9f76 100644 --- a/deps/v8/src/arm64/macro-assembler-arm64.cc +++ b/deps/v8/src/arm64/macro-assembler-arm64.cc @@ -2,16 +2,16 @@ // Use of this source code is governed by a BSD-style license that can be // found in the LICENSE file. -#include "v8.h" +#include "src/v8.h" #if V8_TARGET_ARCH_ARM64 -#include "bootstrapper.h" -#include "codegen.h" -#include "cpu-profiler.h" -#include "debug.h" -#include "isolate-inl.h" -#include "runtime.h" +#include "src/bootstrapper.h" +#include "src/codegen.h" +#include "src/cpu-profiler.h" +#include "src/debug.h" +#include "src/isolate-inl.h" +#include "src/runtime.h" namespace v8 { namespace internal { @@ -56,25 +56,31 @@ void MacroAssembler::LogicalMacro(const Register& rd, LogicalOp op) { UseScratchRegisterScope temps(this); - if (operand.NeedsRelocation(isolate())) { + if (operand.NeedsRelocation(this)) { Register temp = temps.AcquireX(); - LoadRelocated(temp, operand); + Ldr(temp, operand.immediate()); Logical(rd, rn, temp, op); } else if (operand.IsImmediate()) { - int64_t immediate = operand.immediate(); + int64_t immediate = operand.ImmediateValue(); unsigned reg_size = rd.SizeInBits(); - ASSERT(rd.Is64Bits() || is_uint32(immediate)); // If the operation is NOT, invert the operation and immediate. if ((op & NOT) == NOT) { op = static_cast<LogicalOp>(op & ~NOT); immediate = ~immediate; - if (rd.Is32Bits()) { - immediate &= kWRegMask; - } } + // Ignore the top 32 bits of an immediate if we're moving to a W register. 
+ if (rd.Is32Bits()) { + // Check that the top 32 bits are consistent. + DCHECK(((immediate >> kWRegSizeInBits) == 0) || + ((immediate >> kWRegSizeInBits) == -1)); + immediate &= kWRegMask; + } + + DCHECK(rd.Is64Bits() || is_uint32(immediate)); + // Special cases for all set or all clear immediates. if (immediate == 0) { switch (op) { @@ -118,23 +124,24 @@ void MacroAssembler::LogicalMacro(const Register& rd, } else { // Immediate can't be encoded: synthesize using move immediate. Register temp = temps.AcquireSameSizeAs(rn); - Mov(temp, immediate); + Operand imm_operand = MoveImmediateForShiftedOp(temp, immediate); if (rd.Is(csp)) { // If rd is the stack pointer we cannot use it as the destination // register so we use the temp register as an intermediate again. - Logical(temp, rn, temp, op); + Logical(temp, rn, imm_operand, op); Mov(csp, temp); + AssertStackConsistency(); } else { - Logical(rd, rn, temp, op); + Logical(rd, rn, imm_operand, op); } } } else if (operand.IsExtendedRegister()) { - ASSERT(operand.reg().SizeInBits() <= rd.SizeInBits()); + DCHECK(operand.reg().SizeInBits() <= rd.SizeInBits()); // Add/sub extended supports shift <= 4. We want to support exactly the // same modes here. - ASSERT(operand.shift_amount() <= 4); - ASSERT(operand.reg().Is64Bits() || + DCHECK(operand.shift_amount() <= 4); + DCHECK(operand.reg().Is64Bits() || ((operand.extend() != UXTX) && (operand.extend() != SXTX))); Register temp = temps.AcquireSameSizeAs(rn); EmitExtendShift(temp, operand.reg(), operand.extend(), @@ -143,16 +150,16 @@ void MacroAssembler::LogicalMacro(const Register& rd, } else { // The operand can be encoded in the instruction. - ASSERT(operand.IsShiftedRegister()); + DCHECK(operand.IsShiftedRegister()); Logical(rd, rn, operand, op); } } void MacroAssembler::Mov(const Register& rd, uint64_t imm) { - ASSERT(allow_macro_instructions_); - ASSERT(is_uint32(imm) || is_int32(imm) || rd.Is64Bits()); - ASSERT(!rd.IsZero()); + DCHECK(allow_macro_instructions_); + DCHECK(is_uint32(imm) || is_int32(imm) || rd.Is64Bits()); + DCHECK(!rd.IsZero()); // TODO(all) extend to support more immediates. // @@ -171,20 +178,11 @@ void MacroAssembler::Mov(const Register& rd, uint64_t imm) { // applying move-keep operations to move-zero and move-inverted initial // values. - unsigned reg_size = rd.SizeInBits(); - unsigned n, imm_s, imm_r; - if (IsImmMovz(imm, reg_size) && !rd.IsSP()) { - // Immediate can be represented in a move zero instruction. Movz can't - // write to the stack pointer. - movz(rd, imm); - } else if (IsImmMovn(imm, reg_size) && !rd.IsSP()) { - // Immediate can be represented in a move inverted instruction. Movn can't - // write to the stack pointer. - movn(rd, rd.Is64Bits() ? ~imm : (~imm & kWRegMask)); - } else if (IsImmLogical(imm, reg_size, &n, &imm_s, &imm_r)) { - // Immediate can be represented in a logical orr instruction. - LogicalImmediate(rd, AppropriateZeroRegFor(rd), n, imm_s, imm_r, ORR); - } else { + // Try to move the immediate in one instruction, and if that fails, switch to + // using multiple instructions. + if (!TryOneInstrMoveImmediate(rd, imm)) { + unsigned reg_size = rd.SizeInBits(); + // Generic immediate case. Imm will be represented by // [imm3, imm2, imm1, imm0], where each imm is 16 bits. // A move-zero or move-inverted is generated for the first non-zero or @@ -207,7 +205,7 @@ void MacroAssembler::Mov(const Register& rd, uint64_t imm) { // Iterate through the halfwords. Use movn/movz for the first non-ignored // halfword, and movk for subsequent halfwords. 
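// Worked example with a hypothetical value (not from the patch): for
// imm = 0x0000cafe00001234, no single movz, movn or logical orr can
// encode the constant, so the loop below emits
//   movz x0, #0x1234            // first non-zero halfword; rest cleared
//   movk x0, #0xcafe, lsl #32   // patch halfword 2, keep everything else
// The two zero halfwords are skipped outright, so typical 64-bit
// constants cost fewer than four instructions.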
- ASSERT((reg_size % 16) == 0); + DCHECK((reg_size % 16) == 0); bool first_mov_done = false; for (unsigned i = 0; i < (rd.SizeInBits() / 16); i++) { uint64_t imm16 = (imm >> (16 * i)) & 0xffffL; @@ -225,12 +223,13 @@ void MacroAssembler::Mov(const Register& rd, uint64_t imm) { } } } - ASSERT(first_mov_done); + DCHECK(first_mov_done); // Move the temporary if the original destination register was the stack // pointer. if (rd.IsSP()) { mov(rd, temp); + AssertStackConsistency(); } } } @@ -239,20 +238,20 @@ void MacroAssembler::Mov(const Register& rd, uint64_t imm) { void MacroAssembler::Mov(const Register& rd, const Operand& operand, DiscardMoveMode discard_mode) { - ASSERT(allow_macro_instructions_); - ASSERT(!rd.IsZero()); + DCHECK(allow_macro_instructions_); + DCHECK(!rd.IsZero()); // Provide a swap register for instructions that need to write into the // system stack pointer (and can't do this inherently). UseScratchRegisterScope temps(this); Register dst = (rd.IsSP()) ? temps.AcquireSameSizeAs(rd) : rd; - if (operand.NeedsRelocation(isolate())) { - LoadRelocated(dst, operand); + if (operand.NeedsRelocation(this)) { + Ldr(dst, operand.immediate()); } else if (operand.IsImmediate()) { // Call the macro assembler for generic immediates. - Mov(dst, operand.immediate()); + Mov(dst, operand.ImmediateValue()); } else if (operand.IsShiftedRegister() && (operand.shift_amount() != 0)) { // Emit a shift instruction if moving a shifted register. This operation @@ -286,22 +285,22 @@ void MacroAssembler::Mov(const Register& rd, // Copy the result to the system stack pointer. if (!dst.Is(rd)) { - ASSERT(rd.IsSP()); + DCHECK(rd.IsSP()); Assembler::mov(rd, dst); } } void MacroAssembler::Mvn(const Register& rd, const Operand& operand) { - ASSERT(allow_macro_instructions_); + DCHECK(allow_macro_instructions_); - if (operand.NeedsRelocation(isolate())) { - LoadRelocated(rd, operand); + if (operand.NeedsRelocation(this)) { + Ldr(rd, operand.immediate()); mvn(rd, rd); } else if (operand.IsImmediate()) { // Call the macro assembler for generic immediates. - Mov(rd, ~operand.immediate()); + Mov(rd, ~operand.ImmediateValue()); } else if (operand.IsExtendedRegister()) { // Emit two instructions for the extend case. This differs from Mov, as @@ -317,7 +316,7 @@ void MacroAssembler::Mvn(const Register& rd, const Operand& operand) { unsigned MacroAssembler::CountClearHalfWords(uint64_t imm, unsigned reg_size) { - ASSERT((reg_size % 8) == 0); + DCHECK((reg_size % 8) == 0); int count = 0; for (unsigned i = 0; i < (reg_size / 16); i++) { if ((imm & 0xffff) == 0) { @@ -332,7 +331,7 @@ unsigned MacroAssembler::CountClearHalfWords(uint64_t imm, unsigned reg_size) { // The movz instruction can generate immediates containing an arbitrary 16-bit // half-word, with remaining bits clear, eg. 0x00001234, 0x0000123400000000. 
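// Equivalently: IsImmMovz below accepts a value when all but at most one
// of its halfwords are clear, i.e. at least three of an X register's four
// 16-bit halfwords must be zero (0x12340000 qualifies,
// 0x0000123400005678 does not).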
bool MacroAssembler::IsImmMovz(uint64_t imm, unsigned reg_size) { - ASSERT((reg_size == kXRegSizeInBits) || (reg_size == kWRegSizeInBits)); + DCHECK((reg_size == kXRegSizeInBits) || (reg_size == kWRegSizeInBits)); return CountClearHalfWords(imm, reg_size) >= ((reg_size / 16) - 1); } @@ -349,15 +348,16 @@ void MacroAssembler::ConditionalCompareMacro(const Register& rn, StatusFlags nzcv, Condition cond, ConditionalCompareOp op) { - ASSERT((cond != al) && (cond != nv)); - if (operand.NeedsRelocation(isolate())) { + DCHECK((cond != al) && (cond != nv)); + if (operand.NeedsRelocation(this)) { UseScratchRegisterScope temps(this); Register temp = temps.AcquireX(); - LoadRelocated(temp, operand); + Ldr(temp, operand.immediate()); ConditionalCompareMacro(rn, temp, nzcv, cond, op); } else if ((operand.IsShiftedRegister() && (operand.shift_amount() == 0)) || - (operand.IsImmediate() && IsImmConditionalCompare(operand.immediate()))) { + (operand.IsImmediate() && + IsImmConditionalCompare(operand.ImmediateValue()))) { // The immediate can be encoded in the instruction, or the operand is an // unshifted register: call the assembler. ConditionalCompare(rn, operand, nzcv, cond, op); @@ -377,13 +377,13 @@ void MacroAssembler::Csel(const Register& rd, const Register& rn, const Operand& operand, Condition cond) { - ASSERT(allow_macro_instructions_); - ASSERT(!rd.IsZero()); - ASSERT((cond != al) && (cond != nv)); + DCHECK(allow_macro_instructions_); + DCHECK(!rd.IsZero()); + DCHECK((cond != al) && (cond != nv)); if (operand.IsImmediate()) { // Immediate argument. Handle special cases of 0, 1 and -1 using zero // register. - int64_t imm = operand.immediate(); + int64_t imm = operand.ImmediateValue(); Register zr = AppropriateZeroRegFor(rn); if (imm == 0) { csel(rd, rn, zr, cond); @@ -394,7 +394,7 @@ void MacroAssembler::Csel(const Register& rd, } else { UseScratchRegisterScope temps(this); Register temp = temps.AcquireSameSizeAs(rn); - Mov(temp, operand.immediate()); + Mov(temp, imm); csel(rd, rn, temp, cond); } } else if (operand.IsShiftedRegister() && (operand.shift_amount() == 0)) { @@ -410,29 +410,96 @@ void MacroAssembler::Csel(const Register& rd, } +bool MacroAssembler::TryOneInstrMoveImmediate(const Register& dst, + int64_t imm) { + unsigned n, imm_s, imm_r; + int reg_size = dst.SizeInBits(); + if (IsImmMovz(imm, reg_size) && !dst.IsSP()) { + // Immediate can be represented in a move zero instruction. Movz can't write + // to the stack pointer. + movz(dst, imm); + return true; + } else if (IsImmMovn(imm, reg_size) && !dst.IsSP()) { + // Immediate can be represented in a move not instruction. Movn can't write + // to the stack pointer. + movn(dst, dst.Is64Bits() ? ~imm : (~imm & kWRegMask)); + return true; + } else if (IsImmLogical(imm, reg_size, &n, &imm_s, &imm_r)) { + // Immediate can be represented in a logical orr instruction. + LogicalImmediate(dst, AppropriateZeroRegFor(dst), n, imm_s, imm_r, ORR); + return true; + } + return false; +} + + +Operand MacroAssembler::MoveImmediateForShiftedOp(const Register& dst, + int64_t imm) { + int reg_size = dst.SizeInBits(); + + // Encode the immediate in a single move instruction, if possible. + if (TryOneInstrMoveImmediate(dst, imm)) { + // The move was successful; nothing to do here. + } else { + // Pre-shift the immediate to the least-significant bits of the register. + int shift_low = CountTrailingZeros(imm, reg_size); + int64_t imm_low = imm >> shift_low; + + // Pre-shift the immediate to the most-significant bits of the register. 
We + // insert set bits in the least-significant bits, as this creates a + // different immediate that may be encodable using movn or orr-immediate. + // If this new immediate is encodable, the set bits will be eliminated by + // the post shift on the following instruction. + int shift_high = CountLeadingZeros(imm, reg_size); + int64_t imm_high = (imm << shift_high) | ((1 << shift_high) - 1); + + if (TryOneInstrMoveImmediate(dst, imm_low)) { + // The new immediate has been moved into the destination's low bits: + // return a new leftward-shifting operand. + return Operand(dst, LSL, shift_low); + } else if (TryOneInstrMoveImmediate(dst, imm_high)) { + // The new immediate has been moved into the destination's high bits: + // return a new rightward-shifting operand. + return Operand(dst, LSR, shift_high); + } else { + // Use the generic move operation to set up the immediate. + Mov(dst, imm); + } + } + return Operand(dst); +} + + void MacroAssembler::AddSubMacro(const Register& rd, const Register& rn, const Operand& operand, FlagsUpdate S, AddSubOp op) { if (operand.IsZero() && rd.Is(rn) && rd.Is64Bits() && rn.Is64Bits() && - !operand.NeedsRelocation(isolate()) && (S == LeaveFlags)) { + !operand.NeedsRelocation(this) && (S == LeaveFlags)) { // The instruction would be a nop. Avoid generating useless code. return; } - if (operand.NeedsRelocation(isolate())) { + if (operand.NeedsRelocation(this)) { UseScratchRegisterScope temps(this); Register temp = temps.AcquireX(); - LoadRelocated(temp, operand); + Ldr(temp, operand.immediate()); AddSubMacro(rd, rn, temp, S, op); - } else if ((operand.IsImmediate() && !IsImmAddSub(operand.immediate())) || - (rn.IsZero() && !operand.IsShiftedRegister()) || + } else if ((operand.IsImmediate() && + !IsImmAddSub(operand.ImmediateValue())) || + (rn.IsZero() && !operand.IsShiftedRegister()) || (operand.IsShiftedRegister() && (operand.shift() == ROR))) { UseScratchRegisterScope temps(this); Register temp = temps.AcquireSameSizeAs(rn); - Mov(temp, operand); - AddSub(rd, rn, temp, S, op); + if (operand.IsImmediate()) { + Operand imm_operand = + MoveImmediateForShiftedOp(temp, operand.ImmediateValue()); + AddSub(rd, rn, imm_operand, S, op); + } else { + Mov(temp, operand); + AddSub(rd, rn, temp, S, op); + } } else { AddSub(rd, rn, operand, S, op); } @@ -444,12 +511,12 @@ void MacroAssembler::AddSubWithCarryMacro(const Register& rd, const Operand& operand, FlagsUpdate S, AddSubWithCarryOp op) { - ASSERT(rd.SizeInBits() == rn.SizeInBits()); + DCHECK(rd.SizeInBits() == rn.SizeInBits()); UseScratchRegisterScope temps(this); - if (operand.NeedsRelocation(isolate())) { + if (operand.NeedsRelocation(this)) { Register temp = temps.AcquireX(); - LoadRelocated(temp, operand); + Ldr(temp, operand.immediate()); AddSubWithCarryMacro(rd, rn, temp, S, op); } else if (operand.IsImmediate() || @@ -461,9 +528,9 @@ void MacroAssembler::AddSubWithCarryMacro(const Register& rd, } else if (operand.IsShiftedRegister() && (operand.shift_amount() != 0)) { // Add/sub with carry (shifted register). - ASSERT(operand.reg().SizeInBits() == rd.SizeInBits()); - ASSERT(operand.shift() != ROR); - ASSERT(is_uintn(operand.shift_amount(), + DCHECK(operand.reg().SizeInBits() == rd.SizeInBits()); + DCHECK(operand.shift() != ROR); + DCHECK(is_uintn(operand.shift_amount(), rd.SizeInBits() == kXRegSizeInBits ? 
kXRegSizeInBitsLog2 : kWRegSizeInBitsLog2)); Register temp = temps.AcquireSameSizeAs(rn); @@ -472,11 +539,11 @@ void MacroAssembler::AddSubWithCarryMacro(const Register& rd, } else if (operand.IsExtendedRegister()) { // Add/sub with carry (extended register). - ASSERT(operand.reg().SizeInBits() <= rd.SizeInBits()); + DCHECK(operand.reg().SizeInBits() <= rd.SizeInBits()); // Add/sub extended supports a shift <= 4. We want to support exactly the // same modes. - ASSERT(operand.shift_amount() <= 4); - ASSERT(operand.reg().Is64Bits() || + DCHECK(operand.shift_amount() <= 4); + DCHECK(operand.reg().Is64Bits() || ((operand.extend() != UXTX) && (operand.extend() != SXTX))); Register temp = temps.AcquireSameSizeAs(rn); EmitExtendShift(temp, operand.reg(), operand.extend(), @@ -521,11 +588,44 @@ void MacroAssembler::LoadStoreMacro(const CPURegister& rt, } } +void MacroAssembler::LoadStorePairMacro(const CPURegister& rt, + const CPURegister& rt2, + const MemOperand& addr, + LoadStorePairOp op) { + // TODO(all): Should we support register offset for load-store-pair? + DCHECK(!addr.IsRegisterOffset()); + + int64_t offset = addr.offset(); + LSDataSize size = CalcLSPairDataSize(op); + + // Check if the offset fits in the immediate field of the appropriate + // instruction. If not, emit two instructions to perform the operation. + if (IsImmLSPair(offset, size)) { + // Encodable in one load/store pair instruction. + LoadStorePair(rt, rt2, addr, op); + } else { + Register base = addr.base(); + if (addr.IsImmediateOffset()) { + UseScratchRegisterScope temps(this); + Register temp = temps.AcquireSameSizeAs(base); + Add(temp, base, offset); + LoadStorePair(rt, rt2, MemOperand(temp), op); + } else if (addr.IsPostIndex()) { + LoadStorePair(rt, rt2, MemOperand(base), op); + Add(base, base, offset); + } else { + DCHECK(addr.IsPreIndex()); + Add(base, base, offset); + LoadStorePair(rt, rt2, MemOperand(base), op); + } + } +} + void MacroAssembler::Load(const Register& rt, const MemOperand& addr, Representation r) { - ASSERT(!r.IsDouble()); + DCHECK(!r.IsDouble()); if (r.IsInteger8()) { Ldrsb(rt, addr); @@ -538,7 +638,7 @@ void MacroAssembler::Load(const Register& rt, } else if (r.IsInteger32()) { Ldr(rt.W(), addr); } else { - ASSERT(rt.Is64Bits()); + DCHECK(rt.Is64Bits()); Ldr(rt, addr); } } @@ -547,7 +647,7 @@ void MacroAssembler::Load(const Register& rt, void MacroAssembler::Store(const Register& rt, const MemOperand& addr, Representation r) { - ASSERT(!r.IsDouble()); + DCHECK(!r.IsDouble()); if (r.IsInteger8() || r.IsUInteger8()) { Strb(rt, addr); @@ -556,7 +656,7 @@ void MacroAssembler::Store(const Register& rt, } else if (r.IsInteger32()) { Str(rt.W(), addr); } else { - ASSERT(rt.Is64Bits()); + DCHECK(rt.Is64Bits()); if (r.IsHeapObject()) { AssertNotSmi(rt); } else if (r.IsSmi()) { @@ -594,30 +694,29 @@ bool MacroAssembler::NeedExtraInstructionsOrRegisterBranch( void MacroAssembler::Adr(const Register& rd, Label* label, AdrHint hint) { - ASSERT(allow_macro_instructions_); - ASSERT(!rd.IsZero()); + DCHECK(allow_macro_instructions_); + DCHECK(!rd.IsZero()); if (hint == kAdrNear) { adr(rd, label); return; } - ASSERT(hint == kAdrFar); - UseScratchRegisterScope temps(this); - Register scratch = temps.AcquireX(); - ASSERT(!AreAliased(rd, scratch)); - + DCHECK(hint == kAdrFar); if (label->is_bound()) { int label_offset = label->pos() - pc_offset(); if (Instruction::IsValidPCRelOffset(label_offset)) { adr(rd, label); } else { - ASSERT(label_offset <= 0); + DCHECK(label_offset <= 0); int min_adr_offset = -(1 << 
(Instruction::ImmPCRelRangeBitwidth - 1)); adr(rd, min_adr_offset); Add(rd, rd, label_offset - min_adr_offset); } } else { + UseScratchRegisterScope temps(this); + Register scratch = temps.AcquireX(); + InstructionAccurateScope scope( this, PatchingAssembler::kAdrFarPatchableNInstrs); adr(rd, label); @@ -625,13 +724,12 @@ void MacroAssembler::Adr(const Register& rd, Label* label, AdrHint hint) { nop(ADR_FAR_NOP); } movz(scratch, 0); - add(rd, rd, scratch); } } void MacroAssembler::B(Label* label, BranchType type, Register reg, int bit) { - ASSERT((reg.Is(NoReg) || type >= kBranchTypeFirstUsingReg) && + DCHECK((reg.Is(NoReg) || type >= kBranchTypeFirstUsingReg) && (bit == -1 || type >= kBranchTypeFirstUsingBit)); if (kBranchTypeFirstCondition <= type && type <= kBranchTypeLastCondition) { B(static_cast<Condition>(type), label); @@ -651,15 +749,15 @@ void MacroAssembler::B(Label* label, BranchType type, Register reg, int bit) { void MacroAssembler::B(Label* label, Condition cond) { - ASSERT(allow_macro_instructions_); - ASSERT((cond != al) && (cond != nv)); + DCHECK(allow_macro_instructions_); + DCHECK((cond != al) && (cond != nv)); Label done; bool need_extra_instructions = NeedExtraInstructionsOrRegisterBranch(label, CondBranchType); if (need_extra_instructions) { - b(&done, InvertCondition(cond)); + b(&done, NegateCondition(cond)); B(label); } else { b(label, cond); @@ -669,7 +767,7 @@ void MacroAssembler::B(Label* label, Condition cond) { void MacroAssembler::Tbnz(const Register& rt, unsigned bit_pos, Label* label) { - ASSERT(allow_macro_instructions_); + DCHECK(allow_macro_instructions_); Label done; bool need_extra_instructions = @@ -686,7 +784,7 @@ void MacroAssembler::Tbnz(const Register& rt, unsigned bit_pos, Label* label) { void MacroAssembler::Tbz(const Register& rt, unsigned bit_pos, Label* label) { - ASSERT(allow_macro_instructions_); + DCHECK(allow_macro_instructions_); Label done; bool need_extra_instructions = @@ -703,7 +801,7 @@ void MacroAssembler::Tbz(const Register& rt, unsigned bit_pos, Label* label) { void MacroAssembler::Cbnz(const Register& rt, Label* label) { - ASSERT(allow_macro_instructions_); + DCHECK(allow_macro_instructions_); Label done; bool need_extra_instructions = @@ -720,7 +818,7 @@ void MacroAssembler::Cbnz(const Register& rt, Label* label) { void MacroAssembler::Cbz(const Register& rt, Label* label) { - ASSERT(allow_macro_instructions_); + DCHECK(allow_macro_instructions_); Label done; bool need_extra_instructions = @@ -742,8 +840,8 @@ void MacroAssembler::Cbz(const Register& rt, Label* label) { void MacroAssembler::Abs(const Register& rd, const Register& rm, Label* is_not_representable, Label* is_representable) { - ASSERT(allow_macro_instructions_); - ASSERT(AreSameSizeAndType(rd, rm)); + DCHECK(allow_macro_instructions_); + DCHECK(AreSameSizeAndType(rd, rm)); Cmp(rm, 1); Cneg(rd, rm, lt); @@ -767,12 +865,12 @@ void MacroAssembler::Abs(const Register& rd, const Register& rm, void MacroAssembler::Push(const CPURegister& src0, const CPURegister& src1, const CPURegister& src2, const CPURegister& src3) { - ASSERT(AreSameSizeAndType(src0, src1, src2, src3)); + DCHECK(AreSameSizeAndType(src0, src1, src2, src3)); int count = 1 + src1.IsValid() + src2.IsValid() + src3.IsValid(); int size = src0.SizeInBytes(); - PrepareForPush(count, size); + PushPreamble(count, size); PushHelper(count, size, src0, src1, src2, src3); } @@ -781,12 +879,12 @@ void MacroAssembler::Push(const CPURegister& src0, const CPURegister& src1, const CPURegister& src2, const CPURegister& 
src3, const CPURegister& src4, const CPURegister& src5, const CPURegister& src6, const CPURegister& src7) { - ASSERT(AreSameSizeAndType(src0, src1, src2, src3, src4, src5, src6, src7)); + DCHECK(AreSameSizeAndType(src0, src1, src2, src3, src4, src5, src6, src7)); int count = 5 + src5.IsValid() + src6.IsValid() + src6.IsValid(); int size = src0.SizeInBytes(); - PrepareForPush(count, size); + PushPreamble(count, size); PushHelper(4, size, src0, src1, src2, src3); PushHelper(count - 4, size, src4, src5, src6, src7); } @@ -796,29 +894,36 @@ void MacroAssembler::Pop(const CPURegister& dst0, const CPURegister& dst1, const CPURegister& dst2, const CPURegister& dst3) { // It is not valid to pop into the same register more than once in one // instruction, not even into the zero register. - ASSERT(!AreAliased(dst0, dst1, dst2, dst3)); - ASSERT(AreSameSizeAndType(dst0, dst1, dst2, dst3)); - ASSERT(dst0.IsValid()); + DCHECK(!AreAliased(dst0, dst1, dst2, dst3)); + DCHECK(AreSameSizeAndType(dst0, dst1, dst2, dst3)); + DCHECK(dst0.IsValid()); int count = 1 + dst1.IsValid() + dst2.IsValid() + dst3.IsValid(); int size = dst0.SizeInBytes(); - PrepareForPop(count, size); PopHelper(count, size, dst0, dst1, dst2, dst3); + PopPostamble(count, size); +} - if (!csp.Is(StackPointer()) && emit_debug_code()) { - // It is safe to leave csp where it is when unwinding the JavaScript stack, - // but if we keep it matching StackPointer, the simulator can detect memory - // accesses in the now-free part of the stack. - Mov(csp, StackPointer()); - } + +void MacroAssembler::Push(const Register& src0, const FPRegister& src1) { + int size = src0.SizeInBytes() + src1.SizeInBytes(); + + PushPreamble(size); + // Reserve room for src0 and push src1. + str(src1, MemOperand(StackPointer(), -size, PreIndex)); + // Fill the gap with src0. + str(src0, MemOperand(StackPointer(), src1.SizeInBytes())); } -void MacroAssembler::PushPopQueue::PushQueued() { +void MacroAssembler::PushPopQueue::PushQueued( + PreambleDirective preamble_directive) { if (queued_.empty()) return; - masm_->PrepareForPush(size_); + if (preamble_directive == WITH_PREAMBLE) { + masm_->PushPreamble(size_); + } int count = queued_.size(); int index = 0; @@ -843,8 +948,6 @@ void MacroAssembler::PushPopQueue::PushQueued() { void MacroAssembler::PushPopQueue::PopQueued() { if (queued_.empty()) return; - masm_->PrepareForPop(size_); - int count = queued_.size(); int index = 0; while (index < count) { @@ -861,6 +964,7 @@ void MacroAssembler::PushPopQueue::PopQueued() { batch[0], batch[1], batch[2], batch[3]); } + masm_->PopPostamble(size_); queued_.clear(); } @@ -868,7 +972,7 @@ void MacroAssembler::PushPopQueue::PopQueued() { void MacroAssembler::PushCPURegList(CPURegList registers) { int size = registers.RegisterSizeInBytes(); - PrepareForPush(registers.Count(), size); + PushPreamble(registers.Count(), size); // Push up to four registers at a time because if the current stack pointer is // csp and reg_size is 32, registers must be pushed in blocks of four in order // to maintain the 16-byte alignment for csp. @@ -887,7 +991,6 @@ void MacroAssembler::PushCPURegList(CPURegList registers) { void MacroAssembler::PopCPURegList(CPURegList registers) { int size = registers.RegisterSizeInBytes(); - PrepareForPop(registers.Count(), size); // Pop up to four registers at a time because if the current stack pointer is // csp and reg_size is 32, registers must be pushed in blocks of four in // order to maintain the 16-byte alignment for csp. 
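Pushing in blocks of four works because four W registers occupy exactly 4 x 4 = 16 bytes, so csp never passes through a value the hardware (or simulator) would reject as misaligned. A sketch of one such block, with hypothetical registers and assuming the four-register case pairs its stores the same way as the two- and three-register cases shown below in PushHelper:

    stp w3, w2, [csp, #-16]!   // claim one aligned 16-byte slot
    stp w1, w0, [csp, #8]      // fill the upper half; csp stays aligned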
@@ -900,20 +1003,14 @@ void MacroAssembler::PopCPURegList(CPURegList registers) { int count = count_before - registers.Count(); PopHelper(count, size, dst0, dst1, dst2, dst3); } - - if (!csp.Is(StackPointer()) && emit_debug_code()) { - // It is safe to leave csp where it is when unwinding the JavaScript stack, - // but if we keep it matching StackPointer, the simulator can detect memory - // accesses in the now-free part of the stack. - Mov(csp, StackPointer()); - } + PopPostamble(registers.Count(), size); } void MacroAssembler::PushMultipleTimes(CPURegister src, int count) { int size = src.SizeInBytes(); - PrepareForPush(count, size); + PushPreamble(count, size); if (FLAG_optimize_for_size && count > 8) { UseScratchRegisterScope temps(this); @@ -944,12 +1041,12 @@ void MacroAssembler::PushMultipleTimes(CPURegister src, int count) { PushHelper(1, size, src, NoReg, NoReg, NoReg); count -= 1; } - ASSERT(count == 0); + DCHECK(count == 0); } void MacroAssembler::PushMultipleTimes(CPURegister src, Register count) { - PrepareForPush(Operand(count, UXTW, WhichPowerOf2(src.SizeInBytes()))); + PushPreamble(Operand(count, UXTW, WhichPowerOf2(src.SizeInBytes()))); UseScratchRegisterScope temps(this); Register temp = temps.AcquireSameSizeAs(count); @@ -1002,22 +1099,22 @@ void MacroAssembler::PushHelper(int count, int size, // Ensure that we don't unintentially modify scratch or debug registers. InstructionAccurateScope scope(this); - ASSERT(AreSameSizeAndType(src0, src1, src2, src3)); - ASSERT(size == src0.SizeInBytes()); + DCHECK(AreSameSizeAndType(src0, src1, src2, src3)); + DCHECK(size == src0.SizeInBytes()); // When pushing multiple registers, the store order is chosen such that // Push(a, b) is equivalent to Push(a) followed by Push(b). switch (count) { case 1: - ASSERT(src1.IsNone() && src2.IsNone() && src3.IsNone()); + DCHECK(src1.IsNone() && src2.IsNone() && src3.IsNone()); str(src0, MemOperand(StackPointer(), -1 * size, PreIndex)); break; case 2: - ASSERT(src2.IsNone() && src3.IsNone()); + DCHECK(src2.IsNone() && src3.IsNone()); stp(src1, src0, MemOperand(StackPointer(), -2 * size, PreIndex)); break; case 3: - ASSERT(src3.IsNone()); + DCHECK(src3.IsNone()); stp(src2, src1, MemOperand(StackPointer(), -3 * size, PreIndex)); str(src0, MemOperand(StackPointer(), 2 * size)); break; @@ -1042,22 +1139,22 @@ void MacroAssembler::PopHelper(int count, int size, // Ensure that we don't unintentially modify scratch or debug registers. InstructionAccurateScope scope(this); - ASSERT(AreSameSizeAndType(dst0, dst1, dst2, dst3)); - ASSERT(size == dst0.SizeInBytes()); + DCHECK(AreSameSizeAndType(dst0, dst1, dst2, dst3)); + DCHECK(size == dst0.SizeInBytes()); // When popping multiple registers, the load order is chosen such that // Pop(a, b) is equivalent to Pop(a) followed by Pop(b). 
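// Illustration with hypothetical registers (not from the patch):
//   Push(x0, x1);   // x0 is stored at the higher address
//   Pop(x2, x3);    // x2 <- old x1 (top of stack), x3 <- old x0
// Restoring pushed registers therefore means popping in reverse order,
// e.g. Push(x0, x1); ...; Pop(x1, x0).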
switch (count) { case 1: - ASSERT(dst1.IsNone() && dst2.IsNone() && dst3.IsNone()); + DCHECK(dst1.IsNone() && dst2.IsNone() && dst3.IsNone()); ldr(dst0, MemOperand(StackPointer(), 1 * size, PostIndex)); break; case 2: - ASSERT(dst2.IsNone() && dst3.IsNone()); + DCHECK(dst2.IsNone() && dst3.IsNone()); ldp(dst0, dst1, MemOperand(StackPointer(), 2 * size, PostIndex)); break; case 3: - ASSERT(dst3.IsNone()); + DCHECK(dst3.IsNone()); ldr(dst2, MemOperand(StackPointer(), 2 * size)); ldp(dst0, dst1, MemOperand(StackPointer(), 3 * size, PostIndex)); break; @@ -1075,15 +1172,13 @@ void MacroAssembler::PopHelper(int count, int size, } -void MacroAssembler::PrepareForPush(Operand total_size) { - // TODO(jbramley): This assertion generates too much code in some debug tests. - // AssertStackConsistency(); +void MacroAssembler::PushPreamble(Operand total_size) { if (csp.Is(StackPointer())) { // If the current stack pointer is csp, then it must be aligned to 16 bytes // on entry and the total size of the specified registers must also be a // multiple of 16 bytes. if (total_size.IsImmediate()) { - ASSERT((total_size.immediate() % 16) == 0); + DCHECK((total_size.ImmediateValue() % 16) == 0); } // Don't check access size for non-immediate sizes. It's difficult to do @@ -1097,25 +1192,29 @@ void MacroAssembler::PrepareForPush(Operand total_size) { } -void MacroAssembler::PrepareForPop(Operand total_size) { - AssertStackConsistency(); +void MacroAssembler::PopPostamble(Operand total_size) { if (csp.Is(StackPointer())) { // If the current stack pointer is csp, then it must be aligned to 16 bytes // on entry and the total size of the specified registers must also be a // multiple of 16 bytes. if (total_size.IsImmediate()) { - ASSERT((total_size.immediate() % 16) == 0); + DCHECK((total_size.ImmediateValue() % 16) == 0); } // Don't check access size for non-immediate sizes. It's difficult to do // well, and it will be caught by hardware (or the simulator) anyway. + } else if (emit_debug_code()) { + // It is safe to leave csp where it is when unwinding the JavaScript stack, + // but if we keep it matching StackPointer, the simulator can detect memory + // accesses in the now-free part of the stack. 
+ SyncSystemStackPointer(); } } void MacroAssembler::Poke(const CPURegister& src, const Operand& offset) { if (offset.IsImmediate()) { - ASSERT(offset.immediate() >= 0); + DCHECK(offset.ImmediateValue() >= 0); } else if (emit_debug_code()) { Cmp(xzr, offset); Check(le, kStackAccessBelowStackPointer); @@ -1127,7 +1226,7 @@ void MacroAssembler::Poke(const CPURegister& src, const Operand& offset) { void MacroAssembler::Peek(const CPURegister& dst, const Operand& offset) { if (offset.IsImmediate()) { - ASSERT(offset.immediate() >= 0); + DCHECK(offset.ImmediateValue() >= 0); } else if (emit_debug_code()) { Cmp(xzr, offset); Check(le, kStackAccessBelowStackPointer); @@ -1140,8 +1239,8 @@ void MacroAssembler::Peek(const CPURegister& dst, const Operand& offset) { void MacroAssembler::PokePair(const CPURegister& src1, const CPURegister& src2, int offset) { - ASSERT(AreSameSizeAndType(src1, src2)); - ASSERT((offset >= 0) && ((offset % src1.SizeInBytes()) == 0)); + DCHECK(AreSameSizeAndType(src1, src2)); + DCHECK((offset >= 0) && ((offset % src1.SizeInBytes()) == 0)); Stp(src1, src2, MemOperand(StackPointer(), offset)); } @@ -1149,8 +1248,8 @@ void MacroAssembler::PokePair(const CPURegister& src1, void MacroAssembler::PeekPair(const CPURegister& dst1, const CPURegister& dst2, int offset) { - ASSERT(AreSameSizeAndType(dst1, dst2)); - ASSERT((offset >= 0) && ((offset % dst1.SizeInBytes()) == 0)); + DCHECK(AreSameSizeAndType(dst1, dst2)); + DCHECK((offset >= 0) && ((offset % dst1.SizeInBytes()) == 0)); Ldp(dst1, dst2, MemOperand(StackPointer(), offset)); } @@ -1161,7 +1260,7 @@ void MacroAssembler::PushCalleeSavedRegisters() { // This method must not be called unless the current stack pointer is the // system stack pointer (csp). - ASSERT(csp.Is(StackPointer())); + DCHECK(csp.Is(StackPointer())); MemOperand tos(csp, -2 * kXRegSize, PreIndex); @@ -1185,7 +1284,7 @@ void MacroAssembler::PopCalleeSavedRegisters() { // This method must not be called unless the current stack pointer is the // system stack pointer (csp). - ASSERT(csp.Is(StackPointer())); + DCHECK(csp.Is(StackPointer())); MemOperand tos(csp, 2 * kXRegSize, PostIndex); @@ -1204,20 +1303,27 @@ void MacroAssembler::PopCalleeSavedRegisters() { void MacroAssembler::AssertStackConsistency() { - if (emit_debug_code()) { - if (csp.Is(StackPointer())) { - // We can't check the alignment of csp without using a scratch register - // (or clobbering the flags), but the processor (or simulator) will abort - // if it is not properly aligned during a load. + // Avoid emitting code when !use_real_abort() since non-real aborts cause too + // much code to be generated. + if (emit_debug_code() && use_real_aborts()) { + if (csp.Is(StackPointer()) || CpuFeatures::IsSupported(ALWAYS_ALIGN_CSP)) { + // Always check the alignment of csp if ALWAYS_ALIGN_CSP is true. We + // can't check the alignment of csp without using a scratch register (or + // clobbering the flags), but the processor (or simulator) will abort if + // it is not properly aligned during a load. ldr(xzr, MemOperand(csp, 0)); - } else if (FLAG_enable_slow_asserts) { + } + if (FLAG_enable_slow_asserts && !csp.Is(StackPointer())) { Label ok; // Check that csp <= StackPointer(), preserving all registers and NZCV. sub(StackPointer(), csp, StackPointer()); cbz(StackPointer(), &ok); // Ok if csp == StackPointer(). tbnz(StackPointer(), kXSignBit, &ok); // Ok if csp < StackPointer(). - Abort(kTheCurrentStackPointerIsBelowCsp); + // Avoid generating AssertStackConsistency checks for the Push in Abort. 
+ { DontEmitDebugCodeScope dont_emit_debug_code_scope(this); + Abort(kTheCurrentStackPointerIsBelowCsp); + } bind(&ok); // Restore StackPointer(). @@ -1334,15 +1440,14 @@ void MacroAssembler::NumberOfOwnDescriptors(Register dst, Register map) { void MacroAssembler::EnumLengthUntagged(Register dst, Register map) { STATIC_ASSERT(Map::EnumLengthBits::kShift == 0); - Ldrsw(dst, UntagSmiFieldMemOperand(map, Map::kBitField3Offset)); + Ldrsw(dst, FieldMemOperand(map, Map::kBitField3Offset)); And(dst, dst, Map::EnumLengthBits::kMask); } void MacroAssembler::EnumLengthSmi(Register dst, Register map) { - STATIC_ASSERT(Map::EnumLengthBits::kShift == 0); - Ldr(dst, FieldMemOperand(map, Map::kBitField3Offset)); - And(dst, dst, Smi::FromInt(Map::EnumLengthBits::kMask)); + EnumLengthUntagged(dst, map); + SmiTag(dst, dst); } @@ -1353,7 +1458,7 @@ void MacroAssembler::CheckEnumCache(Register object, Register scratch2, Register scratch3, Label* call_runtime) { - ASSERT(!AreAliased(object, null_value, scratch0, scratch1, scratch2, + DCHECK(!AreAliased(object, null_value, scratch0, scratch1, scratch2, scratch3)); Register empty_fixed_array_value = scratch0; @@ -1435,7 +1540,7 @@ void MacroAssembler::JumpToHandlerEntry(Register exception, Register scratch1, Register scratch2) { // Handler expects argument in x0. - ASSERT(exception.Is(x0)); + DCHECK(exception.Is(x0)); // Compute the handler entry address and jump to it. The handler table is // a fixed array of (smi-tagged) code offsets. @@ -1453,7 +1558,7 @@ void MacroAssembler::JumpToHandlerEntry(Register exception, void MacroAssembler::InNewSpace(Register object, Condition cond, Label* branch) { - ASSERT(cond == eq || cond == ne); + DCHECK(cond == eq || cond == ne); UseScratchRegisterScope temps(this); Register temp = temps.AcquireX(); And(temp, object, ExternalReference::new_space_mask(isolate())); @@ -1476,10 +1581,10 @@ void MacroAssembler::Throw(Register value, STATIC_ASSERT(StackHandlerConstants::kFPOffset == 4 * kPointerSize); // The handler expects the exception in x0. - ASSERT(value.Is(x0)); + DCHECK(value.Is(x0)); // Drop the stack pointer to the top of the top handler. - ASSERT(jssp.Is(StackPointer())); + DCHECK(jssp.Is(StackPointer())); Mov(scratch1, Operand(ExternalReference(Isolate::kHandlerAddress, isolate()))); Ldr(jssp, MemOperand(scratch1)); @@ -1518,10 +1623,10 @@ void MacroAssembler::ThrowUncatchable(Register value, STATIC_ASSERT(StackHandlerConstants::kFPOffset == 4 * kPointerSize); // The handler expects the exception in x0. - ASSERT(value.Is(x0)); + DCHECK(value.Is(x0)); // Drop the stack pointer to the top of the top stack handler. - ASSERT(jssp.Is(StackPointer())); + DCHECK(jssp.Is(StackPointer())); Mov(scratch1, Operand(ExternalReference(Isolate::kHandlerAddress, isolate()))); Ldr(jssp, MemOperand(scratch1)); @@ -1551,50 +1656,8 @@ void MacroAssembler::ThrowUncatchable(Register value, } -void MacroAssembler::Throw(BailoutReason reason) { - Label throw_start; - Bind(&throw_start); -#ifdef DEBUG - const char* msg = GetBailoutReason(reason); - RecordComment("Throw message: "); - RecordComment((msg != NULL) ? msg : "UNKNOWN"); -#endif - - Mov(x0, Smi::FromInt(reason)); - Push(x0); - - // Disable stub call restrictions to always allow calls to throw. - if (!has_frame_) { - // We don't actually want to generate a pile of code for this, so just - // claim there is a stack frame, without generating one. 
- FrameScope scope(this, StackFrame::NONE); - CallRuntime(Runtime::kHiddenThrowMessage, 1); - } else { - CallRuntime(Runtime::kHiddenThrowMessage, 1); - } - // ThrowMessage should not return here. - Unreachable(); -} - - -void MacroAssembler::ThrowIf(Condition cond, BailoutReason reason) { - Label ok; - B(InvertCondition(cond), &ok); - Throw(reason); - Bind(&ok); -} - - -void MacroAssembler::ThrowIfSmi(const Register& value, BailoutReason reason) { - Label ok; - JumpIfNotSmi(value, &ok); - Throw(reason); - Bind(&ok); -} - - void MacroAssembler::SmiAbs(const Register& smi, Label* slow) { - ASSERT(smi.Is64Bits()); + DCHECK(smi.Is64Bits()); Abs(smi, smi, slow); } @@ -1660,7 +1723,7 @@ void MacroAssembler::AssertString(Register object) { void MacroAssembler::CallStub(CodeStub* stub, TypeFeedbackId ast_id) { - ASSERT(AllowThisStubCall(stub)); // Stub calls are not allowed in some stubs. + DCHECK(AllowThisStubCall(stub)); // Stub calls are not allowed in some stubs. Call(stub->GetCode(), RelocInfo::CODE_TARGET, ast_id); } @@ -1712,7 +1775,7 @@ void MacroAssembler::CallApiFunctionAndReturn( ExternalReference::handle_scope_level_address(isolate()), next_address); - ASSERT(function_address.is(x1) || function_address.is(x2)); + DCHECK(function_address.is(x1) || function_address.is(x2)); Label profiler_disabled; Label end_profiler_check; @@ -1821,7 +1884,7 @@ void MacroAssembler::CallApiFunctionAndReturn( FrameScope frame(this, StackFrame::INTERNAL); CallExternalReference( ExternalReference( - Runtime::kHiddenPromoteScheduledException, isolate()), 0); + Runtime::kPromoteScheduledException, isolate()), 0); } B(&exception_handled); @@ -1870,7 +1933,7 @@ void MacroAssembler::GetBuiltinFunction(Register target, void MacroAssembler::GetBuiltinEntry(Register target, Register function, Builtins::JavaScript id) { - ASSERT(!AreAliased(target, function)); + DCHECK(!AreAliased(target, function)); GetBuiltinFunction(function, id); // Load the code entry point from the builtins object. Ldr(target, FieldMemOperand(function, JSFunction::kCodeEntryOffset)); @@ -1882,7 +1945,7 @@ void MacroAssembler::InvokeBuiltin(Builtins::JavaScript id, const CallWrapper& call_wrapper) { ASM_LOCATION("MacroAssembler::InvokeBuiltin"); // You can't call a builtin without a valid frame. - ASSERT(flag == JUMP_FUNCTION || has_frame()); + DCHECK(flag == JUMP_FUNCTION || has_frame()); // Get the builtin entry in x2 and setup the function object in x1. GetBuiltinEntry(x2, x1, id); @@ -1891,7 +1954,7 @@ void MacroAssembler::InvokeBuiltin(Builtins::JavaScript id, Call(x2); call_wrapper.AfterCall(); } else { - ASSERT(flag == JUMP_FUNCTION); + DCHECK(flag == JUMP_FUNCTION); Jump(x2); } } @@ -1923,7 +1986,7 @@ void MacroAssembler::InitializeNewString(Register string, Heap::RootListIndex map_index, Register scratch1, Register scratch2) { - ASSERT(!AreAliased(string, length, scratch1, scratch2)); + DCHECK(!AreAliased(string, length, scratch1, scratch2)); LoadRoot(scratch2, map_index); SmiTag(scratch1, length); Str(scratch2, FieldMemOperand(string, HeapObject::kMapOffset)); @@ -1940,7 +2003,7 @@ int MacroAssembler::ActivationFrameAlignment() { // environment. // Note: This will break if we ever start generating snapshots on one ARM // platform for another ARM platform with a different alignment. - return OS::ActivationFrameAlignment(); + return base::OS::ActivationFrameAlignment(); #else // V8_HOST_ARCH_ARM64 // If we are using the simulator then we should always align to the expected // alignment. 
As the simulator is used to generate snapshots we do not know @@ -1970,10 +2033,10 @@ void MacroAssembler::CallCFunction(ExternalReference function, void MacroAssembler::CallCFunction(Register function, int num_of_reg_args, int num_of_double_args) { - ASSERT(has_frame()); + DCHECK(has_frame()); // We can pass 8 integer arguments in registers. If we need to pass more than // that, we'll need to implement support for passing them on the stack. - ASSERT(num_of_reg_args <= 8); + DCHECK(num_of_reg_args <= 8); // If we're passing doubles, we're limited to the following prototypes // (defined by ExternalReference::Type): @@ -1982,8 +2045,8 @@ void MacroAssembler::CallCFunction(Register function, // BUILTIN_FP_CALL: double f(double) // BUILTIN_FP_INT_CALL: double f(double, int) if (num_of_double_args > 0) { - ASSERT(num_of_reg_args <= 1); - ASSERT((num_of_double_args + num_of_reg_args) <= 2); + DCHECK(num_of_reg_args <= 1); + DCHECK((num_of_double_args + num_of_reg_args) <= 2); } @@ -1995,12 +2058,12 @@ void MacroAssembler::CallCFunction(Register function, int sp_alignment = ActivationFrameAlignment(); // The ABI mandates at least 16-byte alignment. - ASSERT(sp_alignment >= 16); - ASSERT(IsPowerOf2(sp_alignment)); + DCHECK(sp_alignment >= 16); + DCHECK(IsPowerOf2(sp_alignment)); // The current stack pointer is a callee saved register, and is preserved // across the call. - ASSERT(kCalleeSaved.IncludesAliasOf(old_stack_pointer)); + DCHECK(kCalleeSaved.IncludesAliasOf(old_stack_pointer)); // Align and synchronize the system stack pointer with jssp. Bic(csp, old_stack_pointer, sp_alignment - 1); @@ -2018,7 +2081,7 @@ void MacroAssembler::CallCFunction(Register function, // where we only pushed one W register on top of an aligned jssp. UseScratchRegisterScope temps(this); Register temp = temps.AcquireX(); - ASSERT(ActivationFrameAlignment() == 16); + DCHECK(ActivationFrameAlignment() == 16); Sub(temp, csp, old_stack_pointer); // We want temp <= 0 && temp >= -12. Cmp(temp, 0); @@ -2044,13 +2107,13 @@ void MacroAssembler::Jump(intptr_t target, RelocInfo::Mode rmode) { void MacroAssembler::Jump(Address target, RelocInfo::Mode rmode) { - ASSERT(!RelocInfo::IsCodeTarget(rmode)); + DCHECK(!RelocInfo::IsCodeTarget(rmode)); Jump(reinterpret_cast<intptr_t>(target), rmode); } void MacroAssembler::Jump(Handle<Code> code, RelocInfo::Mode rmode) { - ASSERT(RelocInfo::IsCodeTarget(rmode)); + DCHECK(RelocInfo::IsCodeTarget(rmode)); AllowDeferredHandleDereference embedding_raw_address; Jump(reinterpret_cast<intptr_t>(code.location()), rmode); } @@ -2099,7 +2162,7 @@ void MacroAssembler::Call(Address target, RelocInfo::Mode rmode) { positions_recorder()->WriteRecordedPositions(); // Addresses always have 64 bits, so we shouldn't encounter NONE32. - ASSERT(rmode != RelocInfo::NONE32); + DCHECK(rmode != RelocInfo::NONE32); UseScratchRegisterScope temps(this); Register temp = temps.AcquireX(); @@ -2108,12 +2171,12 @@ void MacroAssembler::Call(Address target, RelocInfo::Mode rmode) { // Addresses are 48 bits so we never need to load the upper 16 bits. uint64_t imm = reinterpret_cast<uint64_t>(target); // If we don't use ARM tagged addresses, the 16 higher bits must be 0. 
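// Illustrative sketch, not part of the patch: the movz/movk sequence just
// below loads a 48-bit address as three 16-bit halfwords. The same
// decomposition in plain C++, with the round-trip checked:
#include <cassert>
#include <cstdint>
inline void SplitAddressSketch(uint64_t imm) {
  assert(((imm >> 48) & 0xffff) == 0);   // top 16 bits must already be clear
  uint64_t hw0 = (imm >> 0) & 0xffff;    // movz temp, #hw0, lsl #0
  uint64_t hw1 = (imm >> 16) & 0xffff;   // movk temp, #hw1, lsl #16
  uint64_t hw2 = (imm >> 32) & 0xffff;   // movk temp, #hw2, lsl #32
  assert(imm == (hw0 | (hw1 << 16) | (hw2 << 32)));
}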
- ASSERT(((imm >> 48) & 0xffff) == 0); + DCHECK(((imm >> 48) & 0xffff) == 0); movz(temp, (imm >> 0) & 0xffff, 0); movk(temp, (imm >> 16) & 0xffff, 16); movk(temp, (imm >> 32) & 0xffff, 32); } else { - LoadRelocated(temp, Operand(reinterpret_cast<intptr_t>(target), rmode)); + Ldr(temp, Immediate(reinterpret_cast<intptr_t>(target), rmode)); } Blr(temp); #ifdef DEBUG @@ -2161,7 +2224,7 @@ int MacroAssembler::CallSize(Address target, RelocInfo::Mode rmode) { USE(target); // Addresses always have 64 bits, so we shouldn't encounter NONE32. - ASSERT(rmode != RelocInfo::NONE32); + DCHECK(rmode != RelocInfo::NONE32); if (rmode == RelocInfo::NONE64) { return kCallSizeWithoutRelocation; @@ -2178,7 +2241,7 @@ int MacroAssembler::CallSize(Handle<Code> code, USE(ast_id); // Addresses always have 64 bits, so we shouldn't encounter NONE32. - ASSERT(rmode != RelocInfo::NONE32); + DCHECK(rmode != RelocInfo::NONE32); if (rmode == RelocInfo::NONE64) { return kCallSizeWithoutRelocation; @@ -2195,7 +2258,7 @@ void MacroAssembler::JumpForHeapNumber(Register object, Register heap_number_map, Label* on_heap_number, Label* on_not_heap_number) { - ASSERT(on_heap_number || on_not_heap_number); + DCHECK(on_heap_number || on_not_heap_number); AssertNotSmi(object); UseScratchRegisterScope temps(this); @@ -2209,7 +2272,7 @@ void MacroAssembler::JumpForHeapNumber(Register object, AssertRegisterIsRoot(heap_number_map, Heap::kHeapNumberMapRootIndex); } - ASSERT(!AreAliased(temp, heap_number_map)); + DCHECK(!AreAliased(temp, heap_number_map)); Ldr(temp, FieldMemOperand(object, HeapObject::kMapOffset)); Cmp(temp, heap_number_map); @@ -2249,7 +2312,7 @@ void MacroAssembler::LookupNumberStringCache(Register object, Register scratch2, Register scratch3, Label* not_found) { - ASSERT(!AreAliased(object, result, scratch1, scratch2, scratch3)); + DCHECK(!AreAliased(object, result, scratch1, scratch2, scratch3)); // Use of registers. Register result is used as a temporary. Register number_string_cache = result; @@ -2353,6 +2416,16 @@ void MacroAssembler::JumpIfMinusZero(DoubleRegister input, } +void MacroAssembler::JumpIfMinusZero(Register input, + Label* on_negative_zero) { + DCHECK(input.Is64Bits()); + // Floating point value is in an integer register. Detect -0.0 by subtracting + // 1 (cmp), which will cause overflow. + Cmp(input, 1); + B(vs, on_negative_zero); +} + + void MacroAssembler::ClampInt32ToUint8(Register output, Register input) { // Clamp the value to [0..255]. Cmp(input.W(), Operand(input.W(), UXTB)); @@ -2398,9 +2471,9 @@ void MacroAssembler::CopyFieldsLoopPairsHelper(Register dst, Register scratch5) { // Untag src and dst into scratch registers. // Copy src->dst in a tight loop. - ASSERT(!AreAliased(dst, src, + DCHECK(!AreAliased(dst, src, scratch1, scratch2, scratch3, scratch4, scratch5)); - ASSERT(count >= 2); + DCHECK(count >= 2); const Register& remaining = scratch3; Mov(remaining, count / 2); @@ -2437,7 +2510,7 @@ void MacroAssembler::CopyFieldsUnrolledPairsHelper(Register dst, Register scratch4) { // Untag src and dst into scratch registers. // Copy src->dst in an unrolled loop. - ASSERT(!AreAliased(dst, src, scratch1, scratch2, scratch3, scratch4)); + DCHECK(!AreAliased(dst, src, scratch1, scratch2, scratch3, scratch4)); const Register& dst_untagged = scratch1; const Register& src_untagged = scratch2; @@ -2466,7 +2539,7 @@ void MacroAssembler::CopyFieldsUnrolledHelper(Register dst, Register scratch3) { // Untag src and dst into scratch registers. // Copy src->dst in an unrolled loop. 
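// Illustrative aside, not part of the patch, on JumpIfMinusZero(Register)
// earlier in this hunk: the bit pattern of -0.0 is 0x8000000000000000,
// which read as a signed integer is INT64_MIN -- the only 64-bit value for
// which subtracting 1 overflows. So "Cmp(input, 1)" sets the V flag, and
// "B(vs, ...)" branches, exactly when the register holds -0.0.
#include <cstdint>
#include <cstring>
inline bool IsMinusZeroSketch(double d) {
  uint64_t bits;
  std::memcpy(&bits, &d, sizeof bits);          // reinterpret the double
  return bits == UINT64_C(0x8000000000000000);  // INT64_MIN bit pattern
}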
- ASSERT(!AreAliased(dst, src, scratch1, scratch2, scratch3)); + DCHECK(!AreAliased(dst, src, scratch1, scratch2, scratch3)); const Register& dst_untagged = scratch1; const Register& src_untagged = scratch2; @@ -2495,10 +2568,10 @@ void MacroAssembler::CopyFields(Register dst, Register src, CPURegList temps, // // In both cases, fields are copied in pairs if possible, and left-overs are // handled separately. - ASSERT(!AreAliased(dst, src)); - ASSERT(!temps.IncludesAliasOf(dst)); - ASSERT(!temps.IncludesAliasOf(src)); - ASSERT(!temps.IncludesAliasOf(xzr)); + DCHECK(!AreAliased(dst, src)); + DCHECK(!temps.IncludesAliasOf(dst)); + DCHECK(!temps.IncludesAliasOf(src)); + DCHECK(!temps.IncludesAliasOf(xzr)); if (emit_debug_code()) { Cmp(dst, src); @@ -2542,8 +2615,8 @@ void MacroAssembler::CopyBytes(Register dst, UseScratchRegisterScope temps(this); Register tmp1 = temps.AcquireX(); Register tmp2 = temps.AcquireX(); - ASSERT(!AreAliased(src, dst, length, scratch, tmp1, tmp2)); - ASSERT(!AreAliased(src, dst, csp)); + DCHECK(!AreAliased(src, dst, length, scratch, tmp1, tmp2)); + DCHECK(!AreAliased(src, dst, csp)); if (emit_debug_code()) { // Check copy length. @@ -2592,7 +2665,7 @@ void MacroAssembler::CopyBytes(Register dst, void MacroAssembler::FillFields(Register dst, Register field_count, Register filler) { - ASSERT(!dst.Is(csp)); + DCHECK(!dst.Is(csp)); UseScratchRegisterScope temps(this); Register field_ptr = temps.AcquireX(); Register counter = temps.AcquireX(); @@ -2637,7 +2710,7 @@ void MacroAssembler::JumpIfEitherIsNotSequentialAsciiStrings( if (smi_check == DO_SMI_CHECK) { JumpIfEitherSmi(first, second, failure); } else if (emit_debug_code()) { - ASSERT(smi_check == DONT_DO_SMI_CHECK); + DCHECK(smi_check == DONT_DO_SMI_CHECK); Label not_smi; JumpIfEitherSmi(first, second, NULL, ¬_smi); @@ -2668,8 +2741,8 @@ void MacroAssembler::JumpIfEitherInstanceTypeIsNotSequentialAscii( Register scratch1, Register scratch2, Label* failure) { - ASSERT(!AreAliased(scratch1, second)); - ASSERT(!AreAliased(scratch1, scratch2)); + DCHECK(!AreAliased(scratch1, second)); + DCHECK(!AreAliased(scratch1, scratch2)); static const int kFlatAsciiStringMask = kIsNotStringMask | kStringEncodingMask | kStringRepresentationMask; static const int kFlatAsciiStringTag = ASCII_STRING_TYPE; @@ -2700,7 +2773,7 @@ void MacroAssembler::JumpIfBothInstanceTypesAreNotSequentialAscii( Register scratch1, Register scratch2, Label* failure) { - ASSERT(!AreAliased(first, second, scratch1, scratch2)); + DCHECK(!AreAliased(first, second, scratch1, scratch2)); const int kFlatAsciiStringMask = kIsNotStringMask | kStringEncodingMask | kStringRepresentationMask; const int kFlatAsciiStringTag = @@ -2748,12 +2821,12 @@ void MacroAssembler::InvokePrologue(const ParameterCount& expected, // The code below is made a lot easier because the calling code already sets // up actual and expected registers according to the contract if values are // passed in registers. 
- ASSERT(actual.is_immediate() || actual.reg().is(x0)); - ASSERT(expected.is_immediate() || expected.reg().is(x2)); - ASSERT((!code_constant.is_null() && code_reg.is(no_reg)) || code_reg.is(x3)); + DCHECK(actual.is_immediate() || actual.reg().is(x0)); + DCHECK(expected.is_immediate() || expected.reg().is(x2)); + DCHECK((!code_constant.is_null() && code_reg.is(no_reg)) || code_reg.is(x3)); if (expected.is_immediate()) { - ASSERT(actual.is_immediate()); + DCHECK(actual.is_immediate()); if (expected.immediate() == actual.immediate()) { definitely_matches = true; @@ -2816,7 +2889,7 @@ void MacroAssembler::InvokeCode(Register code, InvokeFlag flag, const CallWrapper& call_wrapper) { // You can't call a function without a valid frame. - ASSERT(flag == JUMP_FUNCTION || has_frame()); + DCHECK(flag == JUMP_FUNCTION || has_frame()); Label done; @@ -2833,7 +2906,7 @@ void MacroAssembler::InvokeCode(Register code, Call(code); call_wrapper.AfterCall(); } else { - ASSERT(flag == JUMP_FUNCTION); + DCHECK(flag == JUMP_FUNCTION); Jump(code); } } @@ -2849,11 +2922,11 @@ void MacroAssembler::InvokeFunction(Register function, InvokeFlag flag, const CallWrapper& call_wrapper) { // You can't call a function without a valid frame. - ASSERT(flag == JUMP_FUNCTION || has_frame()); + DCHECK(flag == JUMP_FUNCTION || has_frame()); // Contract with called JS functions requires that function is passed in x1. // (See FullCodeGenerator::Generate().) - ASSERT(function.is(x1)); + DCHECK(function.is(x1)); Register expected_reg = x2; Register code_reg = x3; @@ -2881,11 +2954,11 @@ void MacroAssembler::InvokeFunction(Register function, InvokeFlag flag, const CallWrapper& call_wrapper) { // You can't call a function without a valid frame. - ASSERT(flag == JUMP_FUNCTION || has_frame()); + DCHECK(flag == JUMP_FUNCTION || has_frame()); // Contract with called JS functions requires that function is passed in x1. // (See FullCodeGenerator::Generate().) - ASSERT(function.Is(x1)); + DCHECK(function.Is(x1)); Register code_reg = x3; @@ -2940,15 +3013,14 @@ void MacroAssembler::TryConvertDoubleToInt64(Register result, void MacroAssembler::TruncateDoubleToI(Register result, DoubleRegister double_input) { Label done; - ASSERT(jssp.Is(StackPointer())); + DCHECK(jssp.Is(StackPointer())); // Try to convert the double to an int64. If successful, the bottom 32 bits // contain our truncated int32 result. TryConvertDoubleToInt64(result, double_input, &done); // If we fell through then inline version didn't succeed - call stub instead. - Push(lr); - Push(double_input); // Put input on stack. + Push(lr, double_input); DoubleToIStub stub(isolate(), jssp, @@ -2968,8 +3040,8 @@ void MacroAssembler::TruncateDoubleToI(Register result, void MacroAssembler::TruncateHeapNumberToI(Register result, Register object) { Label done; - ASSERT(!result.is(object)); - ASSERT(jssp.Is(StackPointer())); + DCHECK(!result.is(object)); + DCHECK(jssp.Is(StackPointer())); Ldr(fp_scratch, FieldMemOperand(object, HeapNumber::kValueOffset)); @@ -2992,29 +3064,30 @@ void MacroAssembler::TruncateHeapNumberToI(Register result, } -void MacroAssembler::Prologue(PrologueFrameMode frame_mode) { - if (frame_mode == BUILD_STUB_FRAME) { - ASSERT(StackPointer().Is(jssp)); - UseScratchRegisterScope temps(this); - Register temp = temps.AcquireX(); - __ Mov(temp, Smi::FromInt(StackFrame::STUB)); - // Compiled stubs don't age, and so they don't need the predictable code - // ageing sequence. 
- __ Push(lr, fp, cp, temp); - __ Add(fp, jssp, StandardFrameConstants::kFixedFrameSizeFromFp); +void MacroAssembler::StubPrologue() { + DCHECK(StackPointer().Is(jssp)); + UseScratchRegisterScope temps(this); + Register temp = temps.AcquireX(); + __ Mov(temp, Smi::FromInt(StackFrame::STUB)); + // Compiled stubs don't age, and so they don't need the predictable code + // ageing sequence. + __ Push(lr, fp, cp, temp); + __ Add(fp, jssp, StandardFrameConstants::kFixedFrameSizeFromFp); +} + + +void MacroAssembler::Prologue(bool code_pre_aging) { + if (code_pre_aging) { + Code* stub = Code::GetPreAgedCodeAgeStub(isolate()); + __ EmitCodeAgeSequence(stub); } else { - if (isolate()->IsCodePreAgingActive()) { - Code* stub = Code::GetPreAgedCodeAgeStub(isolate()); - __ EmitCodeAgeSequence(stub); - } else { - __ EmitFrameSetupForCodeAgePatching(); - } + __ EmitFrameSetupForCodeAgePatching(); } } void MacroAssembler::EnterFrame(StackFrame::Type type) { - ASSERT(jssp.Is(StackPointer())); + DCHECK(jssp.Is(StackPointer())); UseScratchRegisterScope temps(this); Register type_reg = temps.AcquireX(); Register code_reg = temps.AcquireX(); @@ -3035,7 +3108,7 @@ void MacroAssembler::EnterFrame(StackFrame::Type type) { void MacroAssembler::LeaveFrame(StackFrame::Type type) { - ASSERT(jssp.Is(StackPointer())); + DCHECK(jssp.Is(StackPointer())); // Drop the execution stack down to the frame pointer and restore // the caller frame pointer and return address. Mov(jssp, fp); @@ -3053,7 +3126,7 @@ void MacroAssembler::ExitFrameRestoreFPRegs() { // Read the registers from the stack without popping them. The stack pointer // will be reset as part of the unwinding process. CPURegList saved_fp_regs = kCallerSavedFP; - ASSERT(saved_fp_regs.Count() % 2 == 0); + DCHECK(saved_fp_regs.Count() % 2 == 0); int offset = ExitFrameConstants::kLastExitFrameField; while (!saved_fp_regs.IsEmpty()) { @@ -3068,7 +3141,7 @@ void MacroAssembler::ExitFrameRestoreFPRegs() { void MacroAssembler::EnterExitFrame(bool save_doubles, const Register& scratch, int extra_space) { - ASSERT(jssp.Is(StackPointer())); + DCHECK(jssp.Is(StackPointer())); // Set up the new stack frame. Mov(scratch, Operand(CodeObject())); @@ -3114,7 +3187,7 @@ void MacroAssembler::EnterExitFrame(bool save_doubles, // Align and synchronize the system stack pointer with jssp. 
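// Sketch, not part of the patch, of the stack alignment done by the Bic in
// CallCFunction above and by AlignAndSetCSPForFrame() just below: round the
// stack pointer down to a power-of-two boundary, at least 16 bytes.
#include <cstdint>
inline uintptr_t AlignDownSketch(uintptr_t sp, uintptr_t alignment) {
  // alignment is a power of two >= 16, as the DCHECKs above require.
  return sp & ~(alignment - 1);  // what Bic(csp, sp, alignment - 1) computes
}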
AlignAndSetCSPForFrame(); - ASSERT(csp.Is(StackPointer())); + DCHECK(csp.Is(StackPointer())); // fp[8]: CallerPC (lr) // fp -> fp[0]: CallerFP (old fp) @@ -3138,7 +3211,7 @@ void MacroAssembler::EnterExitFrame(bool save_doubles, void MacroAssembler::LeaveExitFrame(bool restore_doubles, const Register& scratch, bool restore_context) { - ASSERT(csp.Is(StackPointer())); + DCHECK(csp.Is(StackPointer())); if (restore_doubles) { ExitFrameRestoreFPRegs(); @@ -3185,7 +3258,7 @@ void MacroAssembler::SetCounter(StatsCounter* counter, int value, void MacroAssembler::IncrementCounter(StatsCounter* counter, int value, Register scratch1, Register scratch2) { - ASSERT(value != 0); + DCHECK(value != 0); if (FLAG_native_code_counters && counter->Enabled()) { Mov(scratch2, ExternalReference(counter)); Ldr(scratch1, MemOperand(scratch2)); @@ -3221,14 +3294,14 @@ void MacroAssembler::DebugBreak() { Mov(x0, 0); Mov(x1, ExternalReference(Runtime::kDebugBreak, isolate())); CEntryStub ces(isolate(), 1); - ASSERT(AllowThisStubCall(&ces)); + DCHECK(AllowThisStubCall(&ces)); Call(ces.GetCode(), RelocInfo::DEBUG_BREAK); } void MacroAssembler::PushTryHandler(StackHandler::Kind kind, int handler_index) { - ASSERT(jssp.Is(StackPointer())); + DCHECK(jssp.Is(StackPointer())); // Adjust this code if the asserts don't hold. STATIC_ASSERT(StackHandlerConstants::kSize == 5 * kPointerSize); STATIC_ASSERT(StackHandlerConstants::kNextOffset == 0 * kPointerSize); @@ -3250,7 +3323,7 @@ void MacroAssembler::PushTryHandler(StackHandler::Kind kind, // Push the frame pointer, context, state, and code object. if (kind == StackHandler::JS_ENTRY) { - ASSERT(Smi::FromInt(0) == 0); + DCHECK(Smi::FromInt(0) == 0); Push(xzr, xzr, x11, x10); } else { Push(fp, cp, x11, x10); @@ -3280,7 +3353,7 @@ void MacroAssembler::Allocate(int object_size, Register scratch2, Label* gc_required, AllocationFlags flags) { - ASSERT(object_size <= Page::kMaxRegularHeapObjectSize); + DCHECK(object_size <= Page::kMaxRegularHeapObjectSize); if (!FLAG_inline_new) { if (emit_debug_code()) { // Trash the registers to simulate an allocation failure. @@ -3296,14 +3369,14 @@ void MacroAssembler::Allocate(int object_size, UseScratchRegisterScope temps(this); Register scratch3 = temps.AcquireX(); - ASSERT(!AreAliased(result, scratch1, scratch2, scratch3)); - ASSERT(result.Is64Bits() && scratch1.Is64Bits() && scratch2.Is64Bits()); + DCHECK(!AreAliased(result, scratch1, scratch2, scratch3)); + DCHECK(result.Is64Bits() && scratch1.Is64Bits() && scratch2.Is64Bits()); // Make object size into bytes. if ((flags & SIZE_IN_WORDS) != 0) { object_size *= kPointerSize; } - ASSERT(0 == (object_size & kObjectAlignmentMask)); + DCHECK(0 == (object_size & kObjectAlignmentMask)); // Check relative positions of allocation top and limit addresses. // The values must be adjacent in memory to allow the use of LDP. @@ -3313,7 +3386,7 @@ void MacroAssembler::Allocate(int object_size, AllocationUtils::GetAllocationLimitReference(isolate(), flags); intptr_t top = reinterpret_cast<intptr_t>(heap_allocation_top.address()); intptr_t limit = reinterpret_cast<intptr_t>(heap_allocation_limit.address()); - ASSERT((limit - top) == kPointerSize); + DCHECK((limit - top) == kPointerSize); // Set up allocation top address and object size registers. 
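// Sketch, not part of the patch, of the invariant behind the DCHECK above:
// allocation top and limit occupy adjacent words, so a single LDP fetches
// both. A hypothetical C++ rendering of that layout:
#include <cstddef>
#include <cstdint>
struct AllocationInfoSketch {
  uintptr_t top;    // next free address in the space
  uintptr_t limit;  // first address past the usable area
};
static_assert(offsetof(AllocationInfoSketch, limit) ==
                  offsetof(AllocationInfoSketch, top) + sizeof(uintptr_t),
              "top and limit must be adjacent for the LDP fast path");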
Register top_address = scratch1; @@ -3372,8 +3445,8 @@ void MacroAssembler::Allocate(Register object_size, UseScratchRegisterScope temps(this); Register scratch3 = temps.AcquireX(); - ASSERT(!AreAliased(object_size, result, scratch1, scratch2, scratch3)); - ASSERT(object_size.Is64Bits() && result.Is64Bits() && + DCHECK(!AreAliased(object_size, result, scratch1, scratch2, scratch3)); + DCHECK(object_size.Is64Bits() && result.Is64Bits() && scratch1.Is64Bits() && scratch2.Is64Bits()); // Check relative positions of allocation top and limit addresses. @@ -3384,7 +3457,7 @@ void MacroAssembler::Allocate(Register object_size, AllocationUtils::GetAllocationLimitReference(isolate(), flags); intptr_t top = reinterpret_cast<intptr_t>(heap_allocation_top.address()); intptr_t limit = reinterpret_cast<intptr_t>(heap_allocation_limit.address()); - ASSERT((limit - top) == kPointerSize); + DCHECK((limit - top) == kPointerSize); // Set up allocation top address and object size registers. Register top_address = scratch1; @@ -3458,7 +3531,7 @@ void MacroAssembler::AllocateTwoByteString(Register result, Register scratch2, Register scratch3, Label* gc_required) { - ASSERT(!AreAliased(result, length, scratch1, scratch2, scratch3)); + DCHECK(!AreAliased(result, length, scratch1, scratch2, scratch3)); // Calculate the number of bytes needed for the characters in the string while // observing object alignment. STATIC_ASSERT((SeqTwoByteString::kHeaderSize & kObjectAlignmentMask) == 0); @@ -3489,7 +3562,7 @@ void MacroAssembler::AllocateAsciiString(Register result, Register scratch2, Register scratch3, Label* gc_required) { - ASSERT(!AreAliased(result, length, scratch1, scratch2, scratch3)); + DCHECK(!AreAliased(result, length, scratch1, scratch2, scratch3)); // Calculate the number of bytes needed for the characters in the string while // observing object alignment. 
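// Sketch, not part of the patch, of the byte-count computation described
// above: round header plus payload up to the object alignment (assumed to
// be 8 bytes here; the real constant is kObjectAlignment).
#include <cstddef>
inline size_t AlignedStringSizeSketch(size_t header, size_t length,
                                      size_t char_size) {
  const size_t kAlignMask = 7;  // assumed kObjectAlignment == 8
  return (header + length * char_size + kAlignMask) & ~kAlignMask;
}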
STATIC_ASSERT((SeqOneByteString::kHeaderSize & kObjectAlignmentMask) == 0); @@ -3535,33 +3608,12 @@ void MacroAssembler::AllocateAsciiConsString(Register result, Register scratch1, Register scratch2, Label* gc_required) { - Label allocate_new_space, install_map; - AllocationFlags flags = TAG_OBJECT; - - ExternalReference high_promotion_mode = ExternalReference:: - new_space_high_promotion_mode_active_address(isolate()); - Mov(scratch1, high_promotion_mode); - Ldr(scratch1, MemOperand(scratch1)); - Cbz(scratch1, &allocate_new_space); - - Allocate(ConsString::kSize, - result, - scratch1, - scratch2, - gc_required, - static_cast<AllocationFlags>(flags | PRETENURE_OLD_POINTER_SPACE)); - - B(&install_map); - - Bind(&allocate_new_space); Allocate(ConsString::kSize, result, scratch1, scratch2, gc_required, - flags); - - Bind(&install_map); + TAG_OBJECT); InitializeNewString(result, length, @@ -3576,7 +3628,7 @@ void MacroAssembler::AllocateTwoByteSlicedString(Register result, Register scratch1, Register scratch2, Label* gc_required) { - ASSERT(!AreAliased(result, length, scratch1, scratch2)); + DCHECK(!AreAliased(result, length, scratch1, scratch2)); Allocate(SlicedString::kSize, result, scratch1, scratch2, gc_required, TAG_OBJECT); @@ -3593,7 +3645,7 @@ void MacroAssembler::AllocateAsciiSlicedString(Register result, Register scratch1, Register scratch2, Label* gc_required) { - ASSERT(!AreAliased(result, length, scratch1, scratch2)); + DCHECK(!AreAliased(result, length, scratch1, scratch2)); Allocate(SlicedString::kSize, result, scratch1, scratch2, gc_required, TAG_OBJECT); @@ -3612,8 +3664,9 @@ void MacroAssembler::AllocateHeapNumber(Register result, Register scratch1, Register scratch2, CPURegister value, - CPURegister heap_number_map) { - ASSERT(!value.IsValid() || value.Is64Bits()); + CPURegister heap_number_map, + MutableMode mode) { + DCHECK(!value.IsValid() || value.Is64Bits()); UseScratchRegisterScope temps(this); // Allocate an object in the heap for the heap number and tag it as a heap @@ -3621,6 +3674,10 @@ void MacroAssembler::AllocateHeapNumber(Register result, Allocate(HeapNumber::kSize, result, scratch1, scratch2, gc_required, NO_ALLOCATION_FLAGS); + Heap::RootListIndex map_index = mode == MUTABLE + ? Heap::kMutableHeapNumberMapRootIndex + : Heap::kHeapNumberMapRootIndex; + // Prepare the heap number map. if (!heap_number_map.IsValid()) { // If we have a valid value register, use the same type of register to store @@ -3630,7 +3687,7 @@ void MacroAssembler::AllocateHeapNumber(Register result, } else { heap_number_map = scratch1; } - LoadRoot(heap_number_map, Heap::kHeapNumberMapRootIndex); + LoadRoot(heap_number_map, map_index); } if (emit_debug_code()) { Register map; @@ -3640,7 +3697,7 @@ void MacroAssembler::AllocateHeapNumber(Register result, } else { map = Register(heap_number_map); } - AssertRegisterIsRoot(map, Heap::kHeapNumberMapRootIndex); + AssertRegisterIsRoot(map, map_index); } // Store the heap number map and the value in the allocated object. @@ -3781,7 +3838,7 @@ void MacroAssembler::LoadElementsKindFromMap(Register result, Register map) { // Load the map's "bit field 2". __ Ldrb(result, FieldMemOperand(map, Map::kBitField2Offset)); // Retrieve elements_kind from bit field 2. 
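// Sketch, not part of the patch, of the BitField pattern behind the new
// DecodeField<Map::ElementsKindBits>() call just below. The template shape
// is an assumption; the decode itself is the same shift-and-mask that the
// replaced Ubfx performed in a single instruction.
#include <cstdint>
template <class T, int kShift, int kSize>  // assumes kShift + kSize < 32
struct BitFieldSketch {
  static const uint32_t kMask = ((1u << kSize) - 1) << kShift;
  static T decode(uint32_t value) {
    return static_cast<T>((value & kMask) >> kShift);
  }
};
// e.g. BitFieldSketch<int, 3, 5>::decode(bits) extracts bits [7:3] of bits.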
- __ Ubfx(result, result, Map::kElementsKindShift, Map::kElementsKindBitCount); + DecodeField<Map::ElementsKindBits>(result); } @@ -3790,15 +3847,16 @@ void MacroAssembler::TryGetFunctionPrototype(Register function, Register scratch, Label* miss, BoundFunctionAction action) { - ASSERT(!AreAliased(function, result, scratch)); + DCHECK(!AreAliased(function, result, scratch)); - // Check that the receiver isn't a smi. - JumpIfSmi(function, miss); + Label non_instance; + if (action == kMissOnBoundFunction) { + // Check that the receiver isn't a smi. + JumpIfSmi(function, miss); - // Check that the function really is a function. Load map into result reg. - JumpIfNotObjectType(function, result, scratch, JS_FUNCTION_TYPE, miss); + // Check that the function really is a function. Load map into result reg. + JumpIfNotObjectType(function, result, scratch, JS_FUNCTION_TYPE, miss); - if (action == kMissOnBoundFunction) { Register scratch_w = scratch.W(); Ldr(scratch, FieldMemOperand(function, JSFunction::kSharedFunctionInfoOffset)); @@ -3807,12 +3865,11 @@ void MacroAssembler::TryGetFunctionPrototype(Register function, Ldr(scratch_w, FieldMemOperand(scratch, SharedFunctionInfo::kCompilerHintsOffset)); Tbnz(scratch, SharedFunctionInfo::kBoundFunction, miss); - } - // Make sure that the function has an instance prototype. - Label non_instance; - Ldrb(scratch, FieldMemOperand(result, Map::kBitFieldOffset)); - Tbnz(scratch, Map::kHasNonInstancePrototype, &non_instance); + // Make sure that the function has an instance prototype. + Ldrb(scratch, FieldMemOperand(result, Map::kBitFieldOffset)); + Tbnz(scratch, Map::kHasNonInstancePrototype, &non_instance); + } // Get the prototype or initial map from the function. Ldr(result, @@ -3829,12 +3886,15 @@ void MacroAssembler::TryGetFunctionPrototype(Register function, // Get the prototype from the initial map. Ldr(result, FieldMemOperand(result, Map::kPrototypeOffset)); - B(&done); - // Non-instance prototype: fetch prototype from constructor field in initial - // map. - Bind(&non_instance); - Ldr(result, FieldMemOperand(result, Map::kConstructorOffset)); + if (action == kMissOnBoundFunction) { + B(&done); + + // Non-instance prototype: fetch prototype from constructor field in initial + // map. + Bind(&non_instance); + Ldr(result, FieldMemOperand(result, Map::kConstructorOffset)); + } // All done. 
Bind(&done); @@ -3845,7 +3905,7 @@ void MacroAssembler::CompareRoot(const Register& obj, Heap::RootListIndex index) { UseScratchRegisterScope temps(this); Register temp = temps.AcquireX(); - ASSERT(!AreAliased(obj, temp)); + DCHECK(!AreAliased(obj, temp)); LoadRoot(temp, index); Cmp(obj, temp); } @@ -3880,7 +3940,7 @@ void MacroAssembler::CompareAndSplit(const Register& lhs, } else if (if_false == fall_through) { CompareAndBranch(lhs, rhs, cond, if_true); } else if (if_true == fall_through) { - CompareAndBranch(lhs, rhs, InvertCondition(cond), if_false); + CompareAndBranch(lhs, rhs, NegateCondition(cond), if_false); } else { CompareAndBranch(lhs, rhs, cond, if_true); B(if_false); @@ -3946,7 +4006,7 @@ void MacroAssembler::StoreNumberToDoubleElements(Register value_reg, FPRegister fpscratch1, Label* fail, int elements_offset) { - ASSERT(!AreAliased(value_reg, key_reg, elements_reg, scratch1)); + DCHECK(!AreAliased(value_reg, key_reg, elements_reg, scratch1)); Label store_num; // Speculatively convert the smi to a double - all smis can be exactly @@ -3985,13 +4045,10 @@ void MacroAssembler::IndexFromHash(Register hash, Register index) { // that the constants for the maximum number of digits for an array index // cached in the hash field and the number of bits reserved for it does not // conflict. - ASSERT(TenToThe(String::kMaxCachedArrayIndexLength) < + DCHECK(TenToThe(String::kMaxCachedArrayIndexLength) < (1 << String::kArrayIndexValueBits)); - // We want the smi-tagged index in key. kArrayIndexValueMask has zeros in - // the low kHashShift bits. - STATIC_ASSERT(kSmiTag == 0); - Ubfx(hash, hash, String::kHashShift, String::kArrayIndexValueBits); - SmiTag(index, hash); + DecodeField<String::ArrayIndexValueBits>(index, hash); + SmiTag(index, index); } @@ -4001,7 +4058,7 @@ void MacroAssembler::EmitSeqStringSetCharCheck( SeqStringSetCharCheckIndexType index_type, Register scratch, uint32_t encoding_mask) { - ASSERT(!AreAliased(string, index, scratch)); + DCHECK(!AreAliased(string, index, scratch)); if (index_type == kIndexIsSmi) { AssertSmi(index); @@ -4022,7 +4079,7 @@ void MacroAssembler::EmitSeqStringSetCharCheck( Cmp(index, index_type == kIndexIsSmi ? scratch : Operand::UntagSmi(scratch)); Check(lt, kIndexIsTooLarge); - ASSERT_EQ(0, Smi::FromInt(0)); + DCHECK_EQ(0, Smi::FromInt(0)); Cmp(index, 0); Check(ge, kIndexIsNegative); } @@ -4032,7 +4089,7 @@ void MacroAssembler::CheckAccessGlobalProxy(Register holder_reg, Register scratch1, Register scratch2, Label* miss) { - ASSERT(!AreAliased(holder_reg, scratch1, scratch2)); + DCHECK(!AreAliased(holder_reg, scratch1, scratch2)); Label same_contexts; // Load current lexical context from the stack frame. @@ -4094,10 +4151,10 @@ void MacroAssembler::CheckAccessGlobalProxy(Register holder_reg, // Compute the hash code from the untagged key. This must be kept in sync with -// ComputeIntegerHash in utils.h and KeyedLoadGenericElementStub in +// ComputeIntegerHash in utils.h and KeyedLoadGenericStub in // code-stub-hydrogen.cc void MacroAssembler::GetNumberHash(Register key, Register scratch) { - ASSERT(!AreAliased(key, scratch)); + DCHECK(!AreAliased(key, scratch)); // Xor original key with a seed. 
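// For reference, a sketch, not part of the patch, of the shift-xor integer
// hash that GetNumberHash mirrors. The authoritative version is
// ComputeIntegerHash in utils.h; treat these constants as illustrative.
#include <cstdint>
inline uint32_t IntegerHashSketch(uint32_t key, uint32_t seed) {
  uint32_t hash = key ^ seed;  // xor original key with a seed, as below
  hash = ~hash + (hash << 15);
  hash ^= hash >> 12;
  hash += hash << 2;
  hash ^= hash >> 4;
  hash *= 2057;
  hash ^= hash >> 16;
  return hash;
}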
LoadRoot(scratch, Heap::kHashSeedRootIndex); @@ -4136,7 +4193,7 @@ void MacroAssembler::LoadFromNumberDictionary(Label* miss, Register scratch1, Register scratch2, Register scratch3) { - ASSERT(!AreAliased(elements, key, scratch0, scratch1, scratch2, scratch3)); + DCHECK(!AreAliased(elements, key, scratch0, scratch1, scratch2, scratch3)); Label done; @@ -4160,7 +4217,7 @@ void MacroAssembler::LoadFromNumberDictionary(Label* miss, And(scratch2, scratch2, scratch1); // Scale the index by multiplying by the element size. - ASSERT(SeededNumberDictionary::kEntrySize == 3); + DCHECK(SeededNumberDictionary::kEntrySize == 3); Add(scratch2, scratch2, Operand(scratch2, LSL, 1)); // Check if the key is identical to the name. @@ -4195,7 +4252,7 @@ void MacroAssembler::RememberedSetHelper(Register object, // For debug tests. Register scratch1, SaveFPRegsMode fp_mode, RememberedSetFinalAction and_then) { - ASSERT(!AreAliased(object, address, scratch1)); + DCHECK(!AreAliased(object, address, scratch1)); Label done, store_buffer_overflow; if (emit_debug_code()) { Label ok; @@ -4215,12 +4272,12 @@ void MacroAssembler::RememberedSetHelper(Register object, // For debug tests. Str(scratch1, MemOperand(scratch2)); // Call stub on end of buffer. // Check for end of buffer. - ASSERT(StoreBuffer::kStoreBufferOverflowBit == + DCHECK(StoreBuffer::kStoreBufferOverflowBit == (1 << (14 + kPointerSizeLog2))); if (and_then == kFallThroughAtEnd) { Tbz(scratch1, (14 + kPointerSizeLog2), &done); } else { - ASSERT(and_then == kReturnAtEnd); + DCHECK(and_then == kReturnAtEnd); Tbnz(scratch1, (14 + kPointerSizeLog2), &store_buffer_overflow); Ret(); } @@ -4250,7 +4307,7 @@ void MacroAssembler::PushSafepointRegisters() { // Safepoints expect a block of kNumSafepointRegisters values on the stack, so // adjust the stack for unsaved registers. const int num_unsaved = kNumSafepointRegisters - kNumSafepointSavedRegisters; - ASSERT(num_unsaved >= 0); + DCHECK(num_unsaved >= 0); Claim(num_unsaved); PushXRegList(kSafepointSavedRegisters); } @@ -4272,7 +4329,7 @@ void MacroAssembler::PopSafepointRegistersAndDoubles() { int MacroAssembler::SafepointRegisterStackIndex(int reg_code) { // Make sure the safepoint registers list is what we expect. - ASSERT(CPURegList::GetSafepointSavedRegisters().list() == 0x6ffcffff); + DCHECK(CPURegList::GetSafepointSavedRegisters().list() == 0x6ffcffff); // Safepoint registers are stored contiguously on the stack, but not all the // registers are saved. The following registers are excluded: @@ -4329,7 +4386,8 @@ void MacroAssembler::RecordWriteField( LinkRegisterStatus lr_status, SaveFPRegsMode save_fp, RememberedSetAction remembered_set_action, - SmiCheck smi_check) { + SmiCheck smi_check, + PointersToHereCheck pointers_to_here_check_for_value) { // First, check if a write barrier is even needed. The tests below // catch stores of Smis. Label done; @@ -4341,7 +4399,7 @@ void MacroAssembler::RecordWriteField( // Although the object register is tagged, the offset is relative to the start // of the object, so offset must be a multiple of kPointerSize. - ASSERT(IsAligned(offset, kPointerSize)); + DCHECK(IsAligned(offset, kPointerSize)); Add(scratch, object, offset - kHeapObjectTag); if (emit_debug_code()) { @@ -4358,7 +4416,8 @@ void MacroAssembler::RecordWriteField( lr_status, save_fp, remembered_set_action, - OMIT_SMI_CHECK); + OMIT_SMI_CHECK, + pointers_to_here_check_for_value); Bind(&done); @@ -4371,20 +4430,94 @@ void MacroAssembler::RecordWriteField( } +// Will clobber: object, map, dst. 
+// If lr_status is kLRHasBeenSaved, lr will also be clobbered. +void MacroAssembler::RecordWriteForMap(Register object, + Register map, + Register dst, + LinkRegisterStatus lr_status, + SaveFPRegsMode fp_mode) { + ASM_LOCATION("MacroAssembler::RecordWrite"); + DCHECK(!AreAliased(object, map)); + + if (emit_debug_code()) { + UseScratchRegisterScope temps(this); + Register temp = temps.AcquireX(); + + CompareMap(map, temp, isolate()->factory()->meta_map()); + Check(eq, kWrongAddressOrValuePassedToRecordWrite); + } + + if (!FLAG_incremental_marking) { + return; + } + + if (emit_debug_code()) { + UseScratchRegisterScope temps(this); + Register temp = temps.AcquireX(); + + Ldr(temp, FieldMemOperand(object, HeapObject::kMapOffset)); + Cmp(temp, map); + Check(eq, kWrongAddressOrValuePassedToRecordWrite); + } + + // First, check if a write barrier is even needed. The tests below + // catch stores of smis and stores into the young generation. + Label done; + + // A single check of the map's pages interesting flag suffices, since it is + // only set during incremental collection, and then it's also guaranteed that + // the from object's page's interesting flag is also set. This optimization + // relies on the fact that maps can never be in new space. + CheckPageFlagClear(map, + map, // Used as scratch. + MemoryChunk::kPointersToHereAreInterestingMask, + &done); + + // Record the actual write. + if (lr_status == kLRHasNotBeenSaved) { + Push(lr); + } + Add(dst, object, HeapObject::kMapOffset - kHeapObjectTag); + RecordWriteStub stub(isolate(), object, map, dst, OMIT_REMEMBERED_SET, + fp_mode); + CallStub(&stub); + if (lr_status == kLRHasNotBeenSaved) { + Pop(lr); + } + + Bind(&done); + + // Count number of write barriers in generated code. + isolate()->counters()->write_barriers_static()->Increment(); + IncrementCounter(isolate()->counters()->write_barriers_dynamic(), 1, map, + dst); + + // Clobber clobbered registers when running with the debug-code flag + // turned on to provoke errors. + if (emit_debug_code()) { + Mov(dst, Operand(BitCast<int64_t>(kZapValue + 12))); + Mov(map, Operand(BitCast<int64_t>(kZapValue + 16))); + } +} + + // Will clobber: object, address, value. // If lr_status is kLRHasBeenSaved, lr will also be clobbered. // // The register 'object' contains a heap object pointer. The heap object tag is // shifted away. -void MacroAssembler::RecordWrite(Register object, - Register address, - Register value, - LinkRegisterStatus lr_status, - SaveFPRegsMode fp_mode, - RememberedSetAction remembered_set_action, - SmiCheck smi_check) { +void MacroAssembler::RecordWrite( + Register object, + Register address, + Register value, + LinkRegisterStatus lr_status, + SaveFPRegsMode fp_mode, + RememberedSetAction remembered_set_action, + SmiCheck smi_check, + PointersToHereCheck pointers_to_here_check_for_value) { ASM_LOCATION("MacroAssembler::RecordWrite"); - ASSERT(!AreAliased(object, value)); + DCHECK(!AreAliased(object, value)); if (emit_debug_code()) { UseScratchRegisterScope temps(this); @@ -4395,23 +4528,21 @@ void MacroAssembler::RecordWrite(Register object, Check(eq, kWrongAddressOrValuePassedToRecordWrite); } - // Count number of write barriers in generated code. - isolate()->counters()->write_barriers_static()->Increment(); - // TODO(mstarzinger): Dynamic counter missing. - // First, check if a write barrier is even needed. The tests below // catch stores of smis and stores into the young generation. 
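// Sketch, not part of the patch, of what the CheckPageFlagClear calls in
// RecordWriteForMap above and RecordWrite below rely on: heap pages are
// power-of-two aligned, so masking an object address down to its page base
// reaches the MemoryChunk header and its flags word. kPageBits and
// kFlagsOffset here are hypothetical stand-ins for the real constants.
#include <cstdint>
inline uintptr_t PageFlagsSketch(uintptr_t object_addr) {
  const uintptr_t kPageBits = 20;                    // hypothetical page size
  const uintptr_t kFlagsOffset = sizeof(uintptr_t);  // hypothetical offset
  uintptr_t chunk = object_addr & ~((uintptr_t{1} << kPageBits) - 1);
  return *reinterpret_cast<const uintptr_t*>(chunk + kFlagsOffset);
}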
Label done; if (smi_check == INLINE_SMI_CHECK) { - ASSERT_EQ(0, kSmiTag); + DCHECK_EQ(0, kSmiTag); JumpIfSmi(value, &done); } - CheckPageFlagClear(value, - value, // Used as scratch. - MemoryChunk::kPointersToHereAreInterestingMask, - &done); + if (pointers_to_here_check_for_value != kPointersToHereAreAlwaysInteresting) { + CheckPageFlagClear(value, + value, // Used as scratch. + MemoryChunk::kPointersToHereAreInterestingMask, + &done); + } CheckPageFlagClear(object, value, // Used as scratch. MemoryChunk::kPointersFromHereAreInterestingMask, @@ -4430,6 +4561,11 @@ void MacroAssembler::RecordWrite(Register object, Bind(&done); + // Count number of write barriers in generated code. + isolate()->counters()->write_barriers_static()->Increment(); + IncrementCounter(isolate()->counters()->write_barriers_dynamic(), 1, address, + value); + // Clobber clobbered registers when running with the debug-code flag // turned on to provoke errors. if (emit_debug_code()) { @@ -4443,7 +4579,7 @@ void MacroAssembler::AssertHasValidColor(const Register& reg) { if (emit_debug_code()) { // The bit sequence is backward. The first character in the string // represents the least significant bit. - ASSERT(strcmp(Marking::kImpossibleBitPattern, "01") == 0); + DCHECK(strcmp(Marking::kImpossibleBitPattern, "01") == 0); Label color_is_valid; Tbnz(reg, 0, &color_is_valid); @@ -4457,8 +4593,8 @@ void MacroAssembler::AssertHasValidColor(const Register& reg) { void MacroAssembler::GetMarkBits(Register addr_reg, Register bitmap_reg, Register shift_reg) { - ASSERT(!AreAliased(addr_reg, bitmap_reg, shift_reg)); - ASSERT(addr_reg.Is64Bits() && bitmap_reg.Is64Bits() && shift_reg.Is64Bits()); + DCHECK(!AreAliased(addr_reg, bitmap_reg, shift_reg)); + DCHECK(addr_reg.Is64Bits() && bitmap_reg.Is64Bits() && shift_reg.Is64Bits()); // addr_reg is divided into fields: // |63 page base 20|19 high 8|7 shift 3|2 0| // 'high' gives the index of the cell holding color bits for the object. @@ -4482,7 +4618,7 @@ void MacroAssembler::HasColor(Register object, int first_bit, int second_bit) { // See mark-compact.h for color definitions. - ASSERT(!AreAliased(object, bitmap_scratch, shift_scratch)); + DCHECK(!AreAliased(object, bitmap_scratch, shift_scratch)); GetMarkBits(object, bitmap_scratch, shift_scratch); Ldr(bitmap_scratch, MemOperand(bitmap_scratch, MemoryChunk::kHeaderSize)); @@ -4493,14 +4629,14 @@ void MacroAssembler::HasColor(Register object, // These bit sequences are backwards. The first character in the string // represents the least significant bit. - ASSERT(strcmp(Marking::kWhiteBitPattern, "00") == 0); - ASSERT(strcmp(Marking::kBlackBitPattern, "10") == 0); - ASSERT(strcmp(Marking::kGreyBitPattern, "11") == 0); + DCHECK(strcmp(Marking::kWhiteBitPattern, "00") == 0); + DCHECK(strcmp(Marking::kBlackBitPattern, "10") == 0); + DCHECK(strcmp(Marking::kGreyBitPattern, "11") == 0); // Check for the color. if (first_bit == 0) { // Checking for white. - ASSERT(second_bit == 0); + DCHECK(second_bit == 0); // We only need to test the first bit. 
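// Sketch, not part of the patch, of the two-bit mark colors tested here.
// The bit strings in the DCHECKs are written LSB first, so white = "00" is
// 0, black = "10" is 1, and grey = "11" is 3. Black and grey both set bit 0,
// which is why checking for white needs only the single-bit Tbz just below.
enum MarkColorSketch : unsigned { kWhiteBits = 0, kBlackBits = 1, kGreyBits = 3 };
inline bool IsWhiteSketch(unsigned color_bits) { return (color_bits & 1u) == 0; }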
Tbz(bitmap_scratch, 0, has_color); } else { @@ -4524,7 +4660,7 @@ void MacroAssembler::CheckMapDeprecated(Handle<Map> map, Label* if_deprecated) { if (map->CanBeDeprecated()) { Mov(scratch, Operand(map)); - Ldrsw(scratch, UntagSmiFieldMemOperand(scratch, Map::kBitField3Offset)); + Ldrsw(scratch, FieldMemOperand(scratch, Map::kBitField3Offset)); TestAndBranchIfAnySet(scratch, Map::Deprecated::kMask, if_deprecated); } } @@ -4534,7 +4670,7 @@ void MacroAssembler::JumpIfBlack(Register object, Register scratch0, Register scratch1, Label* on_black) { - ASSERT(strcmp(Marking::kBlackBitPattern, "10") == 0); + DCHECK(strcmp(Marking::kBlackBitPattern, "10") == 0); HasColor(object, scratch0, scratch1, on_black, 1, 0); // kBlackBitPattern. } @@ -4544,7 +4680,7 @@ void MacroAssembler::JumpIfDictionaryInPrototypeChain( Register scratch0, Register scratch1, Label* found) { - ASSERT(!AreAliased(object, scratch0, scratch1)); + DCHECK(!AreAliased(object, scratch0, scratch1)); Factory* factory = isolate()->factory(); Register current = scratch0; Label loop_again; @@ -4556,7 +4692,7 @@ void MacroAssembler::JumpIfDictionaryInPrototypeChain( Bind(&loop_again); Ldr(current, FieldMemOperand(current, HeapObject::kMapOffset)); Ldrb(scratch1, FieldMemOperand(current, Map::kBitField2Offset)); - Ubfx(scratch1, scratch1, Map::kElementsKindShift, Map::kElementsKindBitCount); + DecodeField<Map::ElementsKindBits>(scratch1); CompareAndBranch(scratch1, DICTIONARY_ELEMENTS, eq, found); Ldr(current, FieldMemOperand(current, Map::kPrototypeOffset)); CompareAndBranch(current, Operand(factory->null_value()), ne, &loop_again); @@ -4565,7 +4701,7 @@ void MacroAssembler::JumpIfDictionaryInPrototypeChain( void MacroAssembler::GetRelocatedValueLocation(Register ldr_location, Register result) { - ASSERT(!result.Is(ldr_location)); + DCHECK(!result.Is(ldr_location)); const uint32_t kLdrLitOffset_lsb = 5; const uint32_t kLdrLitOffset_width = 19; Ldr(result, MemOperand(ldr_location)); @@ -4588,14 +4724,14 @@ void MacroAssembler::EnsureNotWhite( Register load_scratch, Register length_scratch, Label* value_is_white_and_not_data) { - ASSERT(!AreAliased( + DCHECK(!AreAliased( value, bitmap_scratch, shift_scratch, load_scratch, length_scratch)); // These bit sequences are backwards. The first character in the string // represents the least significant bit. - ASSERT(strcmp(Marking::kWhiteBitPattern, "00") == 0); - ASSERT(strcmp(Marking::kBlackBitPattern, "10") == 0); - ASSERT(strcmp(Marking::kGreyBitPattern, "11") == 0); + DCHECK(strcmp(Marking::kWhiteBitPattern, "00") == 0); + DCHECK(strcmp(Marking::kBlackBitPattern, "10") == 0); + DCHECK(strcmp(Marking::kGreyBitPattern, "11") == 0); GetMarkBits(value, bitmap_scratch, shift_scratch); Ldr(load_scratch, MemOperand(bitmap_scratch, MemoryChunk::kHeaderSize)); @@ -4619,8 +4755,8 @@ void MacroAssembler::EnsureNotWhite( JumpIfRoot(map, Heap::kHeapNumberMapRootIndex, &is_data_object); // Check for strings. - ASSERT(kIsIndirectStringTag == 1 && kIsIndirectStringMask == 1); - ASSERT(kNotStringTag == 0x80 && kIsNotStringMask == 0x80); + DCHECK(kIsIndirectStringTag == 1 && kIsIndirectStringMask == 1); + DCHECK(kNotStringTag == 0x80 && kIsNotStringMask == 0x80); // If it's a string and it's not a cons string then it's an object containing // no GC pointers. Register instance_type = load_scratch; @@ -4634,8 +4770,8 @@ void MacroAssembler::EnsureNotWhite( // Otherwise it's String::kHeaderSize + string->length() * (1 or 2). // External strings are the only ones with the kExternalStringTag bit // set. 
- ASSERT_EQ(0, kSeqStringTag & kExternalStringTag); - ASSERT_EQ(0, kConsStringTag & kExternalStringTag); + DCHECK_EQ(0, kSeqStringTag & kExternalStringTag); + DCHECK_EQ(0, kConsStringTag & kExternalStringTag); Mov(length_scratch, ExternalString::kSize); TestAndBranchIfAnySet(instance_type, kExternalStringTag, &is_data_object); @@ -4643,7 +4779,7 @@ void MacroAssembler::EnsureNotWhite( // For ASCII (char-size of 1) we shift the smi tag away to get the length. // For UC16 (char-size of 2) we just leave the smi tag in place, thereby // getting the length multiplied by 2. - ASSERT(kOneByteStringTag == 4 && kStringEncodingMask == 4); + DCHECK(kOneByteStringTag == 4 && kStringEncodingMask == 4); Ldrsw(length_scratch, UntagSmiFieldMemOperand(value, String::kLengthOffset)); Tst(instance_type, kStringEncodingMask); @@ -4869,110 +5005,97 @@ void MacroAssembler::PrintfNoPreserve(const char * format, const CPURegister& arg3) { // We cannot handle a caller-saved stack pointer. It doesn't make much sense // in most cases anyway, so this restriction shouldn't be too serious. - ASSERT(!kCallerSaved.IncludesAliasOf(__ StackPointer())); - - // Make sure that the macro assembler doesn't try to use any of our arguments - // as scratch registers. - ASSERT(!TmpList()->IncludesAliasOf(arg0, arg1, arg2, arg3)); - ASSERT(!FPTmpList()->IncludesAliasOf(arg0, arg1, arg2, arg3)); - - // We cannot print the stack pointer because it is typically used to preserve - // caller-saved registers (using other Printf variants which depend on this - // helper). - ASSERT(!AreAliased(arg0, StackPointer())); - ASSERT(!AreAliased(arg1, StackPointer())); - ASSERT(!AreAliased(arg2, StackPointer())); - ASSERT(!AreAliased(arg3, StackPointer())); - - static const int kMaxArgCount = 4; - // Assume that we have the maximum number of arguments until we know - // otherwise. - int arg_count = kMaxArgCount; - - // The provided arguments. - CPURegister args[kMaxArgCount] = {arg0, arg1, arg2, arg3}; - - // The PCS registers where the arguments need to end up. - CPURegister pcs[kMaxArgCount] = {NoCPUReg, NoCPUReg, NoCPUReg, NoCPUReg}; - - // Promote FP arguments to doubles, and integer arguments to X registers. - // Note that FP and integer arguments cannot be mixed, but we'll check - // AreSameSizeAndType once we've processed these promotions. - for (int i = 0; i < kMaxArgCount; i++) { + DCHECK(!kCallerSaved.IncludesAliasOf(__ StackPointer())); + + // The provided arguments, and their proper procedure-call standard registers. + CPURegister args[kPrintfMaxArgCount] = {arg0, arg1, arg2, arg3}; + CPURegister pcs[kPrintfMaxArgCount] = {NoReg, NoReg, NoReg, NoReg}; + + int arg_count = kPrintfMaxArgCount; + + // The PCS varargs registers for printf. Note that x0 is used for the printf + // format string. + static const CPURegList kPCSVarargs = + CPURegList(CPURegister::kRegister, kXRegSizeInBits, 1, arg_count); + static const CPURegList kPCSVarargsFP = + CPURegList(CPURegister::kFPRegister, kDRegSizeInBits, 0, arg_count - 1); + + // We can use caller-saved registers as scratch values, except for the + // arguments and the PCS registers where they might need to go. + CPURegList tmp_list = kCallerSaved; + tmp_list.Remove(x0); // Used to pass the format string. + tmp_list.Remove(kPCSVarargs); + tmp_list.Remove(arg0, arg1, arg2, arg3); + + CPURegList fp_tmp_list = kCallerSavedFP; + fp_tmp_list.Remove(kPCSVarargsFP); + fp_tmp_list.Remove(arg0, arg1, arg2, arg3); + + // Override the MacroAssembler's scratch register list. 
The lists will be + // reset automatically at the end of the UseScratchRegisterScope. + UseScratchRegisterScope temps(this); + TmpList()->set_list(tmp_list.list()); + FPTmpList()->set_list(fp_tmp_list.list()); + + // Copies of the printf vararg registers that we can pop from. + CPURegList pcs_varargs = kPCSVarargs; + CPURegList pcs_varargs_fp = kPCSVarargsFP; + + // Place the arguments. There are lots of clever tricks and optimizations we + // could use here, but Printf is a debug tool so instead we just try to keep + // it simple: Move each input that isn't already in the right place to a + // scratch register, then move everything back. + for (unsigned i = 0; i < kPrintfMaxArgCount; i++) { + // Work out the proper PCS register for this argument. if (args[i].IsRegister()) { - // Note that we use x1 onwards, because x0 will hold the format string. - pcs[i] = Register::XRegFromCode(i + 1); - // For simplicity, we handle all integer arguments as X registers. An X - // register argument takes the same space as a W register argument in the - // PCS anyway. The only limitation is that we must explicitly clear the - // top word for W register arguments as the callee will expect it to be - // clear. - if (!args[i].Is64Bits()) { - const Register& as_x = args[i].X(); - And(as_x, as_x, 0x00000000ffffffff); - args[i] = as_x; - } + pcs[i] = pcs_varargs.PopLowestIndex().X(); + // We might only need a W register here. We need to know the size of the + // argument so we can properly encode it for the simulator call. + if (args[i].Is32Bits()) pcs[i] = pcs[i].W(); } else if (args[i].IsFPRegister()) { - pcs[i] = FPRegister::DRegFromCode(i); - // C and C++ varargs functions (such as printf) implicitly promote float - // arguments to doubles. - if (!args[i].Is64Bits()) { - FPRegister s(args[i]); - const FPRegister& as_d = args[i].D(); - Fcvt(as_d, s); - args[i] = as_d; - } + // In C, floats are always cast to doubles for varargs calls. + pcs[i] = pcs_varargs_fp.PopLowestIndex().D(); } else { - // This is the first empty (NoCPUReg) argument, so use it to set the - // argument count and bail out. + DCHECK(args[i].IsNone()); arg_count = i; break; } - } - ASSERT((arg_count >= 0) && (arg_count <= kMaxArgCount)); - // Check that every remaining argument is NoCPUReg. - for (int i = arg_count; i < kMaxArgCount; i++) { - ASSERT(args[i].IsNone()); - } - ASSERT((arg_count == 0) || AreSameSizeAndType(args[0], args[1], - args[2], args[3], - pcs[0], pcs[1], - pcs[2], pcs[3])); - // Move the arguments into the appropriate PCS registers. - // - // Arranging an arbitrary list of registers into x1-x4 (or d0-d3) is - // surprisingly complicated. - // - // * For even numbers of registers, we push the arguments and then pop them - // into their final registers. This maintains 16-byte stack alignment in - // case csp is the stack pointer, since we're only handling X or D - // registers at this point. - // - // * For odd numbers of registers, we push and pop all but one register in - // the same way, but the left-over register is moved directly, since we - // can always safely move one register without clobbering any source. - if (arg_count >= 4) { - Push(args[3], args[2], args[1], args[0]); - } else if (arg_count >= 2) { - Push(args[1], args[0]); - } - - if ((arg_count % 2) != 0) { - // Move the left-over register directly. 
- const CPURegister& leftover_arg = args[arg_count - 1]; - const CPURegister& leftover_pcs = pcs[arg_count - 1]; - if (leftover_arg.IsRegister()) { - Mov(Register(leftover_pcs), Register(leftover_arg)); - } else { - Fmov(FPRegister(leftover_pcs), FPRegister(leftover_arg)); + // If the argument is already in the right place, leave it where it is. + if (args[i].Aliases(pcs[i])) continue; + + // Otherwise, if the argument is in a PCS argument register, allocate an + // appropriate scratch register and then move it out of the way. + if (kPCSVarargs.IncludesAliasOf(args[i]) || + kPCSVarargsFP.IncludesAliasOf(args[i])) { + if (args[i].IsRegister()) { + Register old_arg = Register(args[i]); + Register new_arg = temps.AcquireSameSizeAs(old_arg); + Mov(new_arg, old_arg); + args[i] = new_arg; + } else { + FPRegister old_arg = FPRegister(args[i]); + FPRegister new_arg = temps.AcquireSameSizeAs(old_arg); + Fmov(new_arg, old_arg); + args[i] = new_arg; + } } } - if (arg_count >= 4) { - Pop(pcs[0], pcs[1], pcs[2], pcs[3]); - } else if (arg_count >= 2) { - Pop(pcs[0], pcs[1]); + // Do a second pass to move values into their final positions and perform any + // conversions that may be required. + for (int i = 0; i < arg_count; i++) { + DCHECK(pcs[i].type() == args[i].type()); + if (pcs[i].IsRegister()) { + Mov(Register(pcs[i]), Register(args[i]), kDiscardForSameWReg); + } else { + DCHECK(pcs[i].IsFPRegister()); + if (pcs[i].SizeInBytes() == args[i].SizeInBytes()) { + Fmov(FPRegister(pcs[i]), FPRegister(args[i])); + } else { + Fcvt(FPRegister(pcs[i]), FPRegister(args[i])); + } + } } // Load the format string into x0, as per the procedure-call standard. @@ -5000,18 +5123,33 @@ void MacroAssembler::PrintfNoPreserve(const char * format, Bic(csp, StackPointer(), 0xf); } - CallPrintf(pcs[0].type()); + CallPrintf(arg_count, pcs); } -void MacroAssembler::CallPrintf(CPURegister::RegisterType type) { +void MacroAssembler::CallPrintf(int arg_count, const CPURegister * args) { // A call to printf needs special handling for the simulator, since the system // printf function will use a different instruction set and the procedure-call // standard will not be compatible. #ifdef USE_SIMULATOR { InstructionAccurateScope scope(this, kPrintfLength / kInstructionSize); hlt(kImmExceptionIsPrintf); - dc32(type); + dc32(arg_count); // kPrintfArgCountOffset + + // Determine the argument pattern. + uint32_t arg_pattern_list = 0; + for (int i = 0; i < arg_count; i++) { + uint32_t arg_pattern; + if (args[i].IsRegister()) { + arg_pattern = args[i].Is32Bits() ? kPrintfArgW : kPrintfArgX; + } else { + DCHECK(args[i].Is64Bits()); + arg_pattern = kPrintfArgD; + } + DCHECK(arg_pattern < (1 << kPrintfArgPatternBits)); + arg_pattern_list |= (arg_pattern << (kPrintfArgPatternBits * i)); + } + dc32(arg_pattern_list); // kPrintfArgPatternListOffset } #else Call(FUNCTION_ADDR(printf), RelocInfo::EXTERNAL_REFERENCE); @@ -5020,10 +5158,18 @@ void MacroAssembler::CallPrintf(CPURegister::RegisterType type) { void MacroAssembler::Printf(const char * format, - const CPURegister& arg0, - const CPURegister& arg1, - const CPURegister& arg2, - const CPURegister& arg3) { + CPURegister arg0, + CPURegister arg1, + CPURegister arg2, + CPURegister arg3) { + // We can only print sp if it is the current stack pointer. 
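// Sketch, not part of the patch, of the argument-pattern word that the new
// CallPrintf above emits for the simulator: each argument contributes
// kPrintfArgPatternBits bits (assumed 2) naming its register class. The
// enum values below are assumptions modelled on kPrintfArgW/X/D.
#include <cstdint>
enum PrintfArgSketch : uint32_t { kArgW = 1, kArgX = 2, kArgD = 3 };
inline uint32_t EncodeArgPatternsSketch(const uint32_t* patterns,
                                        int arg_count) {
  uint32_t list = 0;
  for (int i = 0; i < arg_count; i++) {
    list |= patterns[i] << (2 * i);  // kPrintfArgPatternBits == 2 (assumed)
  }
  return list;
}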
+ if (!csp.Is(StackPointer())) { + DCHECK(!csp.Aliases(arg0)); + DCHECK(!csp.Aliases(arg1)); + DCHECK(!csp.Aliases(arg2)); + DCHECK(!csp.Aliases(arg3)); + } + // Printf is expected to preserve all registers, so make sure that none are // available as scratch registers until we've preserved them. RegList old_tmp_list = TmpList()->list(); @@ -5045,19 +5191,41 @@ void MacroAssembler::Printf(const char * format, TmpList()->set_list(tmp_list.list()); FPTmpList()->set_list(fp_tmp_list.list()); - // Preserve NZCV. { UseScratchRegisterScope temps(this); - Register tmp = temps.AcquireX(); - Mrs(tmp, NZCV); - Push(tmp, xzr); - } + // If any of the arguments are the current stack pointer, allocate a new + // register for them, and adjust the value to compensate for pushing the + // caller-saved registers. + bool arg0_sp = StackPointer().Aliases(arg0); + bool arg1_sp = StackPointer().Aliases(arg1); + bool arg2_sp = StackPointer().Aliases(arg2); + bool arg3_sp = StackPointer().Aliases(arg3); + if (arg0_sp || arg1_sp || arg2_sp || arg3_sp) { + // Allocate a register to hold the original stack pointer value, to pass + // to PrintfNoPreserve as an argument. + Register arg_sp = temps.AcquireX(); + Add(arg_sp, StackPointer(), + kCallerSaved.TotalSizeInBytes() + kCallerSavedFP.TotalSizeInBytes()); + if (arg0_sp) arg0 = Register::Create(arg_sp.code(), arg0.SizeInBits()); + if (arg1_sp) arg1 = Register::Create(arg_sp.code(), arg1.SizeInBits()); + if (arg2_sp) arg2 = Register::Create(arg_sp.code(), arg2.SizeInBits()); + if (arg3_sp) arg3 = Register::Create(arg_sp.code(), arg3.SizeInBits()); + } - PrintfNoPreserve(format, arg0, arg1, arg2, arg3); + // Preserve NZCV. + { UseScratchRegisterScope temps(this); + Register tmp = temps.AcquireX(); + Mrs(tmp, NZCV); + Push(tmp, xzr); + } - { UseScratchRegisterScope temps(this); - Register tmp = temps.AcquireX(); - Pop(xzr, tmp); - Msr(NZCV, tmp); + PrintfNoPreserve(format, arg0, arg1, arg2, arg3); + + // Restore NZCV. + { UseScratchRegisterScope temps(this); + Register tmp = temps.AcquireX(); + Pop(xzr, tmp); + Msr(NZCV, tmp); + } } PopCPURegList(kCallerSavedFP); @@ -5074,7 +5242,7 @@ void MacroAssembler::EmitFrameSetupForCodeAgePatching() { // the sequence and copying it in the same way. InstructionAccurateScope scope(this, kNoCodeAgeSequenceLength / kInstructionSize); - ASSERT(jssp.Is(StackPointer())); + DCHECK(jssp.Is(StackPointer())); EmitFrameSetupForCodeAgePatching(this); } @@ -5083,7 +5251,7 @@ void MacroAssembler::EmitFrameSetupForCodeAgePatching() { void MacroAssembler::EmitCodeAgeSequence(Code* stub) { InstructionAccurateScope scope(this, kNoCodeAgeSequenceLength / kInstructionSize); - ASSERT(jssp.Is(StackPointer())); + DCHECK(jssp.Is(StackPointer())); EmitCodeAgeSequence(this, stub); } @@ -5121,7 +5289,7 @@ void MacroAssembler::EmitCodeAgeSequence(Assembler * assm, // // A branch (br) is used rather than a call (blr) because this code replaces // the frame setup code that would normally preserve lr. 
- __ LoadLiteral(ip0, kCodeAgeStubEntryOffset); + __ ldr_pcrel(ip0, kCodeAgeStubEntryOffset >> kLoadLiteralScaleLog2); __ adr(x0, &start); __ br(ip0); // IsCodeAgeSequence in codegen-arm64.cc assumes that the code generated up @@ -5136,7 +5304,7 @@ void MacroAssembler::EmitCodeAgeSequence(Assembler * assm, bool MacroAssembler::IsYoungSequence(Isolate* isolate, byte* sequence) { bool is_young = isolate->code_aging_helper()->IsYoung(sequence); - ASSERT(is_young || + DCHECK(is_young || isolate->code_aging_helper()->IsOld(sequence)); return is_young; } @@ -5145,8 +5313,8 @@ bool MacroAssembler::IsYoungSequence(Isolate* isolate, byte* sequence) { void MacroAssembler::TruncatingDiv(Register result, Register dividend, int32_t divisor) { - ASSERT(!AreAliased(result, dividend)); - ASSERT(result.Is32Bits() && dividend.Is32Bits()); + DCHECK(!AreAliased(result, dividend)); + DCHECK(result.Is32Bits() && dividend.Is32Bits()); MultiplierAndShift ms(divisor); Mov(result, ms.multiplier()); Smull(result.X(), dividend, result); @@ -5183,14 +5351,14 @@ CPURegister UseScratchRegisterScope::AcquireNextAvailable( CPURegList* available) { CHECK(!available->IsEmpty()); CPURegister result = available->PopLowestIndex(); - ASSERT(!AreAliased(result, xzr, csp)); + DCHECK(!AreAliased(result, xzr, csp)); return result; } CPURegister UseScratchRegisterScope::UnsafeAcquire(CPURegList* available, const CPURegister& reg) { - ASSERT(available->IncludesAliasOf(reg)); + DCHECK(available->IncludesAliasOf(reg)); available->Remove(reg); return reg; } @@ -5203,8 +5371,8 @@ void InlineSmiCheckInfo::Emit(MacroAssembler* masm, const Register& reg, const Label* smi_check) { Assembler::BlockPoolsScope scope(masm); if (reg.IsValid()) { - ASSERT(smi_check->is_bound()); - ASSERT(reg.Is64Bits()); + DCHECK(smi_check->is_bound()); + DCHECK(reg.Is64Bits()); // Encode the register (x0-x30) in the lowest 5 bits, then the offset to // 'check' in the other bits. The possible offset is limited in that we @@ -5213,7 +5381,7 @@ void InlineSmiCheckInfo::Emit(MacroAssembler* masm, const Register& reg, uint32_t delta = __ InstructionsGeneratedSince(smi_check); __ InlineData(RegisterBits::encode(reg.code()) | DeltaBits::encode(delta)); } else { - ASSERT(!smi_check->is_bound()); + DCHECK(!smi_check->is_bound()); // An offset of 0 indicates that there is no patch site. __ InlineData(0); @@ -5224,17 +5392,17 @@ void InlineSmiCheckInfo::Emit(MacroAssembler* masm, const Register& reg, InlineSmiCheckInfo::InlineSmiCheckInfo(Address info) : reg_(NoReg), smi_check_(NULL) { InstructionSequence* inline_data = InstructionSequence::At(info); - ASSERT(inline_data->IsInlineData()); + DCHECK(inline_data->IsInlineData()); if (inline_data->IsInlineData()) { uint64_t payload = inline_data->InlineData(); // We use BitField to decode the payload, and BitField can only handle // 32-bit values. 
- ASSERT(is_uint32(payload)); + DCHECK(is_uint32(payload)); if (payload != 0) { int reg_code = RegisterBits::decode(payload); reg_ = Register::XRegFromCode(reg_code); uint64_t smi_check_delta = DeltaBits::decode(payload); - ASSERT(smi_check_delta != 0); + DCHECK(smi_check_delta != 0); smi_check_ = inline_data->preceding(smi_check_delta); } } diff --git a/deps/v8/src/arm64/macro-assembler-arm64.h b/deps/v8/src/arm64/macro-assembler-arm64.h index 7d267a2cb0c..aa83c7040fb 100644 --- a/deps/v8/src/arm64/macro-assembler-arm64.h +++ b/deps/v8/src/arm64/macro-assembler-arm64.h @@ -7,10 +7,27 @@ #include <vector> -#include "v8globals.h" -#include "globals.h" +#include "src/globals.h" + +#include "src/arm64/assembler-arm64-inl.h" + +// Simulator specific helpers. +#if USE_SIMULATOR + // TODO(all): If possible automatically prepend an indicator like + // UNIMPLEMENTED or LOCATION. + #define ASM_UNIMPLEMENTED(message) \ + __ Debug(message, __LINE__, NO_PARAM) + #define ASM_UNIMPLEMENTED_BREAK(message) \ + __ Debug(message, __LINE__, \ + FLAG_ignore_asm_unimplemented_break ? NO_PARAM : BREAK) + #define ASM_LOCATION(message) \ + __ Debug("LOCATION: " message, __LINE__, NO_PARAM) +#else + #define ASM_UNIMPLEMENTED(message) + #define ASM_UNIMPLEMENTED_BREAK(message) + #define ASM_LOCATION(message) +#endif -#include "arm64/assembler-arm64-inl.h" namespace v8 { namespace internal { @@ -26,6 +43,11 @@ namespace internal { V(Str, CPURegister&, rt, StoreOpFor(rt)) \ V(Ldrsw, Register&, rt, LDRSW_x) +#define LSPAIR_MACRO_LIST(V) \ + V(Ldp, CPURegister&, rt, rt2, LoadPairOpFor(rt, rt2)) \ + V(Stp, CPURegister&, rt, rt2, StorePairOpFor(rt, rt2)) \ + V(Ldpsw, CPURegister&, rt, rt2, LDPSW_x) + // ---------------------------------------------------------------------------- // Static helper functions @@ -82,7 +104,7 @@ enum BranchType { inline BranchType InvertBranchType(BranchType type) { if (kBranchTypeFirstCondition <= type && type <= kBranchTypeLastCondition) { return static_cast<BranchType>( - InvertCondition(static_cast<Condition>(type))); + NegateCondition(static_cast<Condition>(type))); } else { return static_cast<BranchType>(type ^ 1); } @@ -90,6 +112,10 @@ inline BranchType InvertBranchType(BranchType type) { enum RememberedSetAction { EMIT_REMEMBERED_SET, OMIT_REMEMBERED_SET }; enum SmiCheck { INLINE_SMI_CHECK, OMIT_SMI_CHECK }; +enum PointersToHereCheck { + kPointersToHereMaybeInteresting, + kPointersToHereAreAlwaysInteresting +}; enum LinkRegisterStatus { kLRHasNotBeenSaved, kLRHasBeenSaved }; enum TargetAddressStorageMode { CAN_INLINE_TARGET_ADDRESS, @@ -199,6 +225,18 @@ class MacroAssembler : public Assembler { static bool IsImmMovz(uint64_t imm, unsigned reg_size); static unsigned CountClearHalfWords(uint64_t imm, unsigned reg_size); + // Try to move an immediate into the destination register in a single + // instruction. Returns true on success, and updates the contents of dst. + // Returns false otherwise. + bool TryOneInstrMoveImmediate(const Register& dst, int64_t imm); + + // Move an immediate into register dst, and return an Operand object for use + // with a subsequent instruction that accepts a shift. The value moved into + // dst is not necessarily equal to imm; it may have had a shifting operation + // applied to it that will be subsequently undone by the shift applied in the + // Operand. + Operand MoveImmediateForShiftedOp(const Register& dst, int64_t imm); + // Conditional macros.
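MoveImmediateForShiftedOp, declared above, is allowed to leave dst holding something other than imm as long as the returned Operand's shift undoes the difference. A standalone sketch of one legal split; the 16-bit single-move limit is an assumption for illustration, not the helper's actual heuristic:

    #include <cassert>
    #include <cstdint>

    // Split imm into (base, shift) with (base << shift) == imm, where base is
    // small enough for a hypothetical one-instruction move (16 bits here).
    void SplitForShiftedOp(uint64_t imm, uint64_t* base, unsigned* shift) {
      *shift = 0;
      while (imm != 0 && (imm & 1) == 0 && (imm >> 16) != 0) {
        imm >>= 1;
        ++*shift;
      }
      *base = imm;
    }

    int main() {
      const uint64_t imm = uint64_t{0x1234} << 20;
      uint64_t base;
      unsigned shift;
      SplitForShiftedOp(imm, &base, &shift);
      assert(base <= 0xffff);          // fits the assumed single move
      assert((base << shift) == imm);  // the Operand's shift restores imm
      return 0;
    }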
inline void Ccmp(const Register& rn, const Operand& operand, @@ -228,6 +266,14 @@ class MacroAssembler : public Assembler { const MemOperand& addr, LoadStoreOp op); +#define DECLARE_FUNCTION(FN, REGTYPE, REG, REG2, OP) \ + inline void FN(const REGTYPE REG, const REGTYPE REG2, const MemOperand& addr); + LSPAIR_MACRO_LIST(DECLARE_FUNCTION) +#undef DECLARE_FUNCTION + + void LoadStorePairMacro(const CPURegister& rt, const CPURegister& rt2, + const MemOperand& addr, LoadStorePairOp op); + // V8-specific load/store helpers. void Load(const Register& rt, const MemOperand& addr, Representation r); void Store(const Register& rt, const MemOperand& addr, Representation r); @@ -351,7 +397,7 @@ class MacroAssembler : public Assembler { // Provide a template to allow other types to be converted automatically. template<typename T> void Fmov(FPRegister fd, T imm) { - ASSERT(allow_macro_instructions_); + DCHECK(allow_macro_instructions_); Fmov(fd, static_cast<double>(imm)); } inline void Fmov(Register rd, FPRegister fn); @@ -385,19 +431,10 @@ class MacroAssembler : public Assembler { inline void Ldnp(const CPURegister& rt, const CPURegister& rt2, const MemOperand& src); - inline void Ldp(const CPURegister& rt, - const CPURegister& rt2, - const MemOperand& src); - inline void Ldpsw(const Register& rt, - const Register& rt2, - const MemOperand& src); - // Provide both double and float interfaces for FP immediate loads, rather - // than relying on implicit C++ casts. This allows signalling NaNs to be - // preserved when the immediate matches the format of fd. Most systems convert - // signalling NaNs to quiet NaNs when converting between float and double. - inline void Ldr(const FPRegister& ft, double imm); - inline void Ldr(const FPRegister& ft, float imm); - inline void Ldr(const Register& rt, uint64_t imm); + // Load a literal from the inline constant pool. + inline void Ldr(const CPURegister& rt, const Immediate& imm); + // Helper function for double immediate. + inline void Ldr(const CPURegister& rt, double imm); inline void Lsl(const Register& rd, const Register& rn, unsigned shift); inline void Lsl(const Register& rd, const Register& rn, const Register& rm); inline void Lsr(const Register& rd, const Register& rn, unsigned shift); @@ -453,9 +490,6 @@ class MacroAssembler : public Assembler { inline void Stnp(const CPURegister& rt, const CPURegister& rt2, const MemOperand& dst); - inline void Stp(const CPURegister& rt, - const CPURegister& rt2, - const MemOperand& dst); inline void Sxtb(const Register& rd, const Register& rn); inline void Sxth(const Register& rd, const Register& rn); inline void Sxtw(const Register& rd, const Register& rn); @@ -531,6 +565,7 @@ class MacroAssembler : public Assembler { const CPURegister& src6 = NoReg, const CPURegister& src7 = NoReg); void Pop(const CPURegister& dst0, const CPURegister& dst1 = NoReg, const CPURegister& dst2 = NoReg, const CPURegister& dst3 = NoReg); + void Push(const Register& src0, const FPRegister& src1); // Alternative forms of Push and Pop, taking a RegList or CPURegList that // specifies the registers that are to be pushed or popped. 
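LSPAIR_MACRO_LIST with DECLARE_FUNCTION, added above, stamps out one inline load/store-pair wrapper per table row. A toy, self-contained model of the same X-macro pattern; the types and bodies here are stand-ins, while the real expansion forwards to LoadStorePairMacro with the opcode from the table:

    #include <cstdio>

    // Toy stand-ins so the macro pattern from the header can run on its own.
    struct CPURegister { int code; };
    struct MemOperand { int offset; };

    #define LSPAIR_MACRO_LIST(V)          \
      V(Ldp, "load pair")                 \
      V(Stp, "store pair")                \
      V(Ldpsw, "load pair sign-extend")

    // One function is stamped out per row, mirroring how DECLARE_FUNCTION
    // expands in macro-assembler-arm64.h.
    #define DECLARE_FUNCTION(FN, DESC)                                    \
      void FN(const CPURegister& rt, const CPURegister& rt2,              \
              const MemOperand& addr) {                                   \
        printf(#FN " (%s): r%d, r%d, [sp+%d]\n", DESC, rt.code, rt2.code, \
               addr.offset);                                              \
      }
    LSPAIR_MACRO_LIST(DECLARE_FUNCTION)
    #undef DECLARE_FUNCTION

    int main() {
      Ldp(CPURegister{0}, CPURegister{1}, MemOperand{16});
      Stp(CPURegister{2}, CPURegister{3}, MemOperand{32});
      return 0;
    }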
Higher-numbered @@ -606,7 +641,7 @@ class MacroAssembler : public Assembler { explicit PushPopQueue(MacroAssembler* masm) : masm_(masm), size_(0) { } ~PushPopQueue() { - ASSERT(queued_.empty()); + DCHECK(queued_.empty()); } void Queue(const CPURegister& rt) { @@ -614,7 +649,11 @@ queued_.push_back(rt); } - void PushQueued(); + enum PreambleDirective { + WITH_PREAMBLE, + SKIP_PREAMBLE + }; + void PushQueued(PreambleDirective preamble_directive = WITH_PREAMBLE); void PopQueued(); private: @@ -718,9 +757,11 @@ // it can be evidence of a potential bug because the ABI forbids accesses // below csp. // - // If emit_debug_code() is false, this emits no code. + // If StackPointer() is the system stack pointer (csp) or ALWAYS_ALIGN_CSP is + // enabled, then csp will be dereferenced to cause the processor + // (or simulator) to abort if it is not properly aligned. + // - // If StackPointer() is the system stack pointer, this emits no code. + // If emit_debug_code() is false, this emits no code. void AssertStackConsistency(); // Preserve the callee-saved registers (as defined by AAPCS64). @@ -752,7 +793,7 @@ // Set the current stack pointer, but don't generate any code. inline void SetStackPointer(const Register& stack_pointer) { - ASSERT(!TmpList()->IncludesAliasOf(stack_pointer)); + DCHECK(!TmpList()->IncludesAliasOf(stack_pointer)); sp_ = stack_pointer; } @@ -766,8 +807,8 @@ inline void AlignAndSetCSPForFrame() { int sp_alignment = ActivationFrameAlignment(); // AAPCS64 mandates at least 16-byte alignment. - ASSERT(sp_alignment >= 16); - ASSERT(IsPowerOf2(sp_alignment)); + DCHECK(sp_alignment >= 16); + DCHECK(IsPowerOf2(sp_alignment)); Bic(csp, StackPointer(), sp_alignment - 1); SetStackPointer(csp); } @@ -778,12 +819,22 @@ // // This is necessary when pushing or otherwise adding things to the stack, to // satisfy the AAPCS64 constraint that the memory below the system stack - // pointer is not accessed. + // pointer is not accessed. The amount pushed will be increased as necessary + // to ensure csp remains aligned to 16 bytes. // // This method asserts that StackPointer() is not csp, since the call does // not make sense in that context. inline void BumpSystemStackPointer(const Operand& space); + // Re-synchronizes the system stack pointer (csp) with the current stack + // pointer (according to StackPointer()). This function will ensure the + // new value of the system stack pointer remains aligned to 16 bytes, and + // is lower than or equal to the value of the current stack pointer. + // + // This method asserts that StackPointer() is not csp, since the call does + // not make sense in that context. + inline void SyncSystemStackPointer(); + // Helpers ------------------------------------------------------------------ // Root register.
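The SyncSystemStackPointer contract documented above, keeping csp 16-byte aligned and never above the current stack pointer, reduces to rounding down to a 16-byte boundary, the same bit-clear that AlignAndSetCSPForFrame performs with Bic. A minimal sketch:

    #include <cassert>
    #include <cstdint>

    // Round sp down to a 16-byte boundary; for power-of-two alignments this is
    // exactly the Bic(csp, StackPointer(), alignment - 1) bit-clear.
    uint64_t AlignDown16(uint64_t sp) { return sp & ~uint64_t{15}; }

    int main() {
      const uint64_t current_sp = 0x7ffffff5;
      uint64_t csp = AlignDown16(current_sp);
      assert(csp % 16 == 0);       // stays 16-byte aligned
      assert(csp <= current_sp);   // never above the current stack pointer
      return 0;
    }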
inline void InitializeRootRegister(); @@ -812,7 +863,7 @@ class MacroAssembler : public Assembler { if (object->IsHeapObject()) { LoadHeapObject(result, Handle<HeapObject>::cast(object)); } else { - ASSERT(object->IsSmi()); + DCHECK(object->IsSmi()); Mov(result, Operand(object)); } } @@ -830,10 +881,15 @@ class MacroAssembler : public Assembler { void NumberOfOwnDescriptors(Register dst, Register map); template<typename Field> - void DecodeField(Register reg) { - static const uint64_t shift = Field::kShift + kSmiShift; + void DecodeField(Register dst, Register src) { + static const uint64_t shift = Field::kShift; static const uint64_t setbits = CountSetBits(Field::kMask, 32); - Ubfx(reg, reg, shift, setbits); + Ubfx(dst, src, shift, setbits); + } + + template<typename Field> + void DecodeField(Register reg) { + DecodeField<Field>(reg, reg); } // ---- SMI and Number Utilities ---- @@ -849,6 +905,10 @@ class MacroAssembler : public Assembler { Register src, UntagMode mode = kNotSpeculativeUntag); + // Tag and push in one step. + inline void SmiTagAndPush(Register src); + inline void SmiTagAndPush(Register src1, Register src2); + // Compute the absolute value of 'smi' and leave the result in 'smi' // register. If 'smi' is the most negative SMI, the absolute value cannot // be represented as a SMI and a jump to 'slow' is done. @@ -907,6 +967,10 @@ class MacroAssembler : public Assembler { // Jump to label if the input double register contains -0.0. void JumpIfMinusZero(DoubleRegister input, Label* on_negative_zero); + // Jump to label if the input integer register contains the double precision + // floating point representation of -0.0. + void JumpIfMinusZero(Register input, Label* on_negative_zero); + // Generate code to do a lookup in the number string cache. If the number in // the register object is found in the cache the generated code falls through // with the result in the result register. The object and the result register @@ -939,7 +1003,7 @@ class MacroAssembler : public Assembler { FPRegister scratch_d, Label* on_successful_conversion = NULL, Label* on_failed_conversion = NULL) { - ASSERT(as_int.Is32Bits()); + DCHECK(as_int.Is32Bits()); TryRepresentDoubleAsInt(as_int, value, scratch_d, on_successful_conversion, on_failed_conversion); } @@ -954,7 +1018,7 @@ class MacroAssembler : public Assembler { FPRegister scratch_d, Label* on_successful_conversion = NULL, Label* on_failed_conversion = NULL) { - ASSERT(as_int.Is64Bits()); + DCHECK(as_int.Is64Bits()); TryRepresentDoubleAsInt(as_int, value, scratch_d, on_successful_conversion, on_failed_conversion); } @@ -1048,15 +1112,6 @@ class MacroAssembler : public Assembler { Register scratch3, Register scratch4); - // Throw a message string as an exception. - void Throw(BailoutReason reason); - - // Throw a message string as an exception if a condition is not true. - void ThrowIf(Condition cond, BailoutReason reason); - - // Throw a message string as an exception if the value is a smi. - void ThrowIfSmi(const Register& value, BailoutReason reason); - void CallStub(CodeStub* stub, TypeFeedbackId ast_id = TypeFeedbackId::None()); void TailCallStub(CodeStub* stub); @@ -1351,7 +1406,8 @@ class MacroAssembler : public Assembler { Register scratch1, Register scratch2, CPURegister value = NoFPReg, - CPURegister heap_number_map = NoReg); + CPURegister heap_number_map = NoReg, + MutableMode mode = IMMUTABLE); // --------------------------------------------------------------------------- // Support functions. 
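The two-register DecodeField above compiles to a single unsigned bitfield extract: the width comes from the set bits of Field::kMask and the position from Field::kShift. A portable model of the Ubfx it emits; the field parameters are made up for the example:

    #include <cassert>
    #include <cstdint>

    // Unsigned bitfield extract: the C equivalent of Ubfx(dst, src, shift, width).
    uint64_t Ubfx(uint64_t src, unsigned shift, unsigned width) {
      return (src >> shift) & ((uint64_t{1} << width) - 1);
    }

    int main() {
      // A hypothetical field occupying bits [10:3] (shift 3, 8 set mask bits).
      const unsigned kShift = 3;
      const unsigned kWidth = 8;  // CountSetBits(kMask) in the header's terms
      uint64_t encoded = (0xABull << kShift) | 0x7;  // field 0xAB plus low noise
      assert(Ubfx(encoded, kShift, kWidth) == 0xAB);
      return 0;
    }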
@@ -1636,7 +1692,8 @@ class MacroAssembler : public Assembler { void ExitFrameRestoreFPRegs(); // Generates function and stub prologue code. - void Prologue(PrologueFrameMode frame_mode); + void StubPrologue(); + void Prologue(bool code_pre_aging); // Enter exit frame. Exit frames are used when calling C code from generated // (JavaScript) code. @@ -1771,7 +1828,9 @@ class MacroAssembler : public Assembler { LinkRegisterStatus lr_status, SaveFPRegsMode save_fp, RememberedSetAction remembered_set_action = EMIT_REMEMBERED_SET, - SmiCheck smi_check = INLINE_SMI_CHECK); + SmiCheck smi_check = INLINE_SMI_CHECK, + PointersToHereCheck pointers_to_here_check_for_value = + kPointersToHereMaybeInteresting); // As above, but the offset has the tag presubtracted. For use with // MemOperand(reg, off). @@ -1783,7 +1842,9 @@ class MacroAssembler : public Assembler { LinkRegisterStatus lr_status, SaveFPRegsMode save_fp, RememberedSetAction remembered_set_action = EMIT_REMEMBERED_SET, - SmiCheck smi_check = INLINE_SMI_CHECK) { + SmiCheck smi_check = INLINE_SMI_CHECK, + PointersToHereCheck pointers_to_here_check_for_value = + kPointersToHereMaybeInteresting) { RecordWriteField(context, offset + kHeapObjectTag, value, @@ -1791,9 +1852,17 @@ class MacroAssembler : public Assembler { lr_status, save_fp, remembered_set_action, - smi_check); + smi_check, + pointers_to_here_check_for_value); } + void RecordWriteForMap( + Register object, + Register map, + Register dst, + LinkRegisterStatus lr_status, + SaveFPRegsMode save_fp); + // For a given |object| notify the garbage collector that the slot |address| // has been written. |value| is the object being stored. The value and // address registers are clobbered by the operation. @@ -1804,7 +1873,9 @@ class MacroAssembler : public Assembler { LinkRegisterStatus lr_status, SaveFPRegsMode save_fp, RememberedSetAction remembered_set_action = EMIT_REMEMBERED_SET, - SmiCheck smi_check = INLINE_SMI_CHECK); + SmiCheck smi_check = INLINE_SMI_CHECK, + PointersToHereCheck pointers_to_here_check_for_value = + kPointersToHereMaybeInteresting); // Checks the color of an object. If the object is already grey or black // then we just fall through, since it is already live. If it is white and @@ -1918,12 +1989,13 @@ class MacroAssembler : public Assembler { // (such as %e, %f or %g) are FPRegisters, and that arguments for integer // placeholders are Registers. // - // A maximum of four arguments may be given to any single Printf call. The - // arguments must be of the same type, but they do not need to have the same - // size. + // At the moment it is only possible to print the value of csp if it is the + // current stack pointer. Otherwise, the MacroAssembler will automatically + // update csp on every push (using BumpSystemStackPointer), so determining its + // value is difficult. // - // The following registers cannot be printed: - // StackPointer(), csp. + // Format placeholders that refer to more than one argument, or to a specific + // argument, are not supported. This includes formats like "%1$d" or "%.*d". // // This function automatically preserves caller-saved registers so that // calling code can use Printf at any point without having to worry about @@ -1931,15 +2003,11 @@ class MacroAssembler : public Assembler { // a problem, preserve the important registers manually and then call // PrintfNoPreserve. Callee-saved registers are not used by Printf, and are // implicitly preserved. 
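Putting the placeholder rules above together, a call site pairs integer placeholders with Registers and floating-point ones with FPRegisters, up to four arguments in total. A hedged usage sketch, assuming the arm64 macro-assembler header and <cinttypes> are included and that x0, w1 and d0 hold the values of interest; EmitTrace is a hypothetical helper:

    // Usage sketch only, not part of the patch.
    void EmitTrace(v8::internal::MacroAssembler* masm) {
      // Integer placeholders consume Registers, %f consumes an FPRegister.
      // Positional forms such as "%1$d" or "%.*d" would be rejected.
      masm->Printf("ptr=0x%" PRIx64 " count=%d ratio=%f\n",
                   v8::internal::x0, v8::internal::w1, v8::internal::d0);
    }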
- // - // This function assumes (and asserts) that the current stack pointer is - // callee-saved, not caller-saved. This is most likely the case anyway, as a - // caller-saved stack pointer doesn't make a lot of sense. void Printf(const char * format, - const CPURegister& arg0 = NoCPUReg, - const CPURegister& arg1 = NoCPUReg, - const CPURegister& arg2 = NoCPUReg, - const CPURegister& arg3 = NoCPUReg); + CPURegister arg0 = NoCPUReg, + CPURegister arg1 = NoCPUReg, + CPURegister arg2 = NoCPUReg, + CPURegister arg3 = NoCPUReg); // Like Printf, but don't preserve any caller-saved registers, not even 'lr'. // @@ -1993,6 +2061,15 @@ class MacroAssembler : public Assembler { void JumpIfDictionaryInPrototypeChain(Register object, Register scratch0, Register scratch1, Label* found); + // Perform necessary maintenance operations before a push or after a pop. + // + // Note that size is specified in bytes. + void PushPreamble(Operand total_size); + void PopPostamble(Operand total_size); + + void PushPreamble(int count, int size) { PushPreamble(count * size); } + void PopPostamble(int count, int size) { PopPostamble(count * size); } + private: // Helpers for CopyFields. // These each implement CopyFields in a different way. @@ -2020,22 +2097,15 @@ class MacroAssembler : public Assembler { const CPURegister& dst0, const CPURegister& dst1, const CPURegister& dst2, const CPURegister& dst3); - // Perform necessary maintenance operations before a push or pop. - // - // Note that size is specified in bytes. - void PrepareForPush(Operand total_size); - void PrepareForPop(Operand total_size); - - void PrepareForPush(int count, int size) { PrepareForPush(count * size); } - void PrepareForPop(int count, int size) { PrepareForPop(count * size); } - // Call Printf. On a native build, a simple call will be generated, but if the // simulator is being used then a suitable pseudo-instruction is used. The // arguments and stack (csp) must be prepared by the caller as for a normal // AAPCS64 call to 'printf'. // - // The 'type' argument specifies the type of the optional arguments. - void CallPrintf(CPURegister::RegisterType type = CPURegister::kNoRegister); + // The 'args' argument should point to an array of variable arguments in their + // proper PCS registers (and in calling order). The argument registers can + // have mixed types. The format string (x0) should not be included. + void CallPrintf(int arg_count = 0, const CPURegister * args = NULL); // Helper for throwing exceptions. Compute a handler address and jump to // it. See the implementation for register usage. @@ -2131,7 +2201,7 @@ class MacroAssembler : public Assembler { // emitted is what you specified when creating the scope. 
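CallPrintf's new shape, described by the comment above, takes a count and an array describing arguments already sitting in their PCS registers; the format string itself travels in x0 and stays out of the array. A toy model of that hand-off (Reg and CallPrintfModel are invented stand-ins):

    #include <cstdio>

    // Toy register descriptor standing in for CPURegister: a kind plus a code.
    struct Reg { enum Kind { kNone, kX, kD } kind; int code; };

    // Mirrors the new CallPrintf(int arg_count, const CPURegister* args) shape.
    void CallPrintfModel(int arg_count, const Reg* args) {
      for (int i = 0; i < arg_count; i++) {
        printf("arg %d in %c%d\n", i, args[i].kind == Reg::kD ? 'd' : 'x',
               args[i].code);
      }
    }

    int main() {
      // Two integer args and one FP arg, already placed in their PCS slots;
      // mixed types are allowed, unlike the old single-type interface.
      Reg args[] = {{Reg::kX, 1}, {Reg::kX, 2}, {Reg::kD, 0}};
      CallPrintfModel(3, args);
      return 0;
    }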
class InstructionAccurateScope BASE_EMBEDDED { public: - InstructionAccurateScope(MacroAssembler* masm, size_t count = 0) + explicit InstructionAccurateScope(MacroAssembler* masm, size_t count = 0) : masm_(masm) #ifdef DEBUG , @@ -2156,7 +2226,7 @@ class InstructionAccurateScope BASE_EMBEDDED { masm_->EndBlockPools(); #ifdef DEBUG if (start_.is_bound()) { - ASSERT(masm_->SizeOfCodeGeneratedSince(&start_) == size_); + DCHECK(masm_->SizeOfCodeGeneratedSince(&start_) == size_); } masm_->set_allow_macro_instructions(previous_allow_macro_instructions_); #endif @@ -2186,8 +2256,8 @@ class UseScratchRegisterScope { availablefp_(masm->FPTmpList()), old_available_(available_->list()), old_availablefp_(availablefp_->list()) { - ASSERT(available_->type() == CPURegister::kRegister); - ASSERT(availablefp_->type() == CPURegister::kFPRegister); + DCHECK(available_->type() == CPURegister::kRegister); + DCHECK(availablefp_->type() == CPURegister::kFPRegister); } ~UseScratchRegisterScope(); diff --git a/deps/v8/src/arm64/regexp-macro-assembler-arm64.cc b/deps/v8/src/arm64/regexp-macro-assembler-arm64.cc index 97040cf75eb..432d9568bdc 100644 --- a/deps/v8/src/arm64/regexp-macro-assembler-arm64.cc +++ b/deps/v8/src/arm64/regexp-macro-assembler-arm64.cc @@ -2,18 +2,19 @@ // Use of this source code is governed by a BSD-style license that can be // found in the LICENSE file. -#include "v8.h" +#include "src/v8.h" #if V8_TARGET_ARCH_ARM64 -#include "cpu-profiler.h" -#include "unicode.h" -#include "log.h" -#include "code-stubs.h" -#include "regexp-stack.h" -#include "macro-assembler.h" -#include "regexp-macro-assembler.h" -#include "arm64/regexp-macro-assembler-arm64.h" +#include "src/code-stubs.h" +#include "src/cpu-profiler.h" +#include "src/log.h" +#include "src/macro-assembler.h" +#include "src/regexp-macro-assembler.h" +#include "src/regexp-stack.h" +#include "src/unicode.h" + +#include "src/arm64/regexp-macro-assembler-arm64.h" namespace v8 { namespace internal { @@ -125,7 +126,7 @@ RegExpMacroAssemblerARM64::RegExpMacroAssemblerARM64( backtrack_label_(), exit_label_() { __ SetStackPointer(csp); - ASSERT_EQ(0, registers_to_save % 2); + DCHECK_EQ(0, registers_to_save % 2); // We can cache at most 16 W registers in x0-x7. STATIC_ASSERT(kNumCachedRegisters <= 16); STATIC_ASSERT((kNumCachedRegisters % 2) == 0); @@ -160,7 +161,7 @@ void RegExpMacroAssemblerARM64::AdvanceCurrentPosition(int by) { void RegExpMacroAssemblerARM64::AdvanceRegister(int reg, int by) { - ASSERT((reg >= 0) && (reg < num_registers_)); + DCHECK((reg >= 0) && (reg < num_registers_)); if (by != 0) { Register to_advance; RegisterState register_state = GetRegisterState(reg); @@ -261,7 +262,7 @@ void RegExpMacroAssemblerARM64::CheckCharacters(Vector<const uc16> str, for (int i = 0; i < str.length(); i++) { if (mode_ == ASCII) { __ Ldrb(w10, MemOperand(characters_address, 1, PostIndex)); - ASSERT(str[i] <= String::kMaxOneByteCharCode); + DCHECK(str[i] <= String::kMaxOneByteCharCode); } else { __ Ldrh(w10, MemOperand(characters_address, 2, PostIndex)); } @@ -288,10 +289,10 @@ void RegExpMacroAssemblerARM64::CheckNotBackReferenceIgnoreCase( // Save the capture length in a callee-saved register so it will // be preserved if we call a C helper. Register capture_length = w19; - ASSERT(kCalleeSaved.IncludesAliasOf(capture_length)); + DCHECK(kCalleeSaved.IncludesAliasOf(capture_length)); // Find length of back-referenced capture. 
- ASSERT((start_reg % 2) == 0); + DCHECK((start_reg % 2) == 0); if (start_reg < kNumCachedRegisters) { __ Mov(capture_start_offset.X(), GetCachedRegister(start_reg)); __ Lsr(x11, GetCachedRegister(start_reg), kWRegSizeInBits); @@ -364,12 +365,12 @@ void RegExpMacroAssemblerARM64::CheckNotBackReferenceIgnoreCase( __ Check(le, kOffsetOutOfRange); } } else { - ASSERT(mode_ == UC16); + DCHECK(mode_ == UC16); int argument_count = 4; // The cached registers need to be retained. CPURegList cached_registers(CPURegister::kRegister, kXRegSizeInBits, 0, 7); - ASSERT((cached_registers.Count() * 2) == kNumCachedRegisters); + DCHECK((cached_registers.Count() * 2) == kNumCachedRegisters); __ PushCPURegList(cached_registers); // Put arguments into arguments registers. @@ -396,11 +397,14 @@ void RegExpMacroAssemblerARM64::CheckNotBackReferenceIgnoreCase( } // Check if function returned non-zero for success or zero for failure. - CompareAndBranchOrBacktrack(x0, 0, eq, on_no_match); + // x0 is one of the registers used as a cache so it must be tested before + // the cache is restored. + __ Cmp(x0, 0); + __ PopCPURegList(cached_registers); + BranchOrBacktrack(eq, on_no_match); + // On success, increment position by length of capture. __ Add(current_input_offset(), current_input_offset(), capture_length); - // Reset the cached registers. - __ PopCPURegList(cached_registers); } __ Bind(&fallthrough); @@ -417,7 +421,7 @@ void RegExpMacroAssemblerARM64::CheckNotBackReference( Register capture_length = w15; // Find length of back-referenced capture. - ASSERT((start_reg % 2) == 0); + DCHECK((start_reg % 2) == 0); if (start_reg < kNumCachedRegisters) { __ Mov(x10, GetCachedRegister(start_reg)); __ Lsr(x11, GetCachedRegister(start_reg), kWRegSizeInBits); @@ -447,7 +451,7 @@ void RegExpMacroAssemblerARM64::CheckNotBackReference( __ Ldrb(w10, MemOperand(capture_start_address, 1, PostIndex)); __ Ldrb(w11, MemOperand(current_position_address, 1, PostIndex)); } else { - ASSERT(mode_ == UC16); + DCHECK(mode_ == UC16); __ Ldrh(w10, MemOperand(capture_start_address, 2, PostIndex)); __ Ldrh(w11, MemOperand(current_position_address, 2, PostIndex)); } @@ -495,7 +499,7 @@ void RegExpMacroAssemblerARM64::CheckNotCharacterAfterMinusAnd( uc16 minus, uc16 mask, Label* on_not_equal) { - ASSERT(minus < String::kMaxUtf16CodeUnit); + DCHECK(minus < String::kMaxUtf16CodeUnit); __ Sub(w10, current_character(), minus); __ And(w10, w10, mask); CompareAndBranchOrBacktrack(w10, c, ne, on_not_equal); @@ -677,10 +681,10 @@ Handle<HeapObject> RegExpMacroAssemblerARM64::GetCode(Handle<String> source) { CPURegList argument_registers(x0, x5, x6, x7); CPURegList registers_to_retain = kCalleeSaved; - ASSERT(kCalleeSaved.Count() == 11); + DCHECK(kCalleeSaved.Count() == 11); registers_to_retain.Combine(lr); - ASSERT(csp.Is(__ StackPointer())); + DCHECK(csp.Is(__ StackPointer())); __ PushCPURegList(registers_to_retain); __ PushCPURegList(argument_registers); @@ -704,7 +708,7 @@ Handle<HeapObject> RegExpMacroAssemblerARM64::GetCode(Handle<String> source) { // Make sure the stack alignment will be respected. int alignment = masm_->ActivationFrameAlignment(); - ASSERT_EQ(alignment % 16, 0); + DCHECK_EQ(alignment % 16, 0); int align_mask = (alignment / kWRegSize) - 1; num_wreg_to_allocate = (num_wreg_to_allocate + align_mask) & ~align_mask; @@ -857,7 +861,7 @@ Handle<HeapObject> RegExpMacroAssemblerARM64::GetCode(Handle<String> source) { Register base = x10; // There are always an even number of capture registers. 
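The capture-register cache this code leans on packs two 32-bit capture values into each of x0-x7: GetCachedRegister(i) selects X(i/2), the even-indexed capture occupies the low word, and the Lsr by kWRegSizeInBits (32) recovers its partner. A sketch of the packing:

    #include <cassert>
    #include <cstdint>

    // Pack captures 2k (start) and 2k+1 (end) into one 64-bit cached register.
    uint64_t PackPair(uint32_t start, uint32_t end) {
      return (uint64_t{end} << 32) | start;
    }

    int main() {
      uint64_t cached = PackPair(/*start=*/0x10, /*end=*/0x24);
      uint32_t start = static_cast<uint32_t>(cached);        // low W view
      uint32_t end = static_cast<uint32_t>(cached >> 32);    // Lsr(x, 32)
      assert(start == 0x10 && end == 0x24);
      return 0;
    }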
A couple of // registers determine one match with two offsets. - ASSERT_EQ(0, num_registers_left_on_stack % 2); + DCHECK_EQ(0, num_registers_left_on_stack % 2); __ Add(base, frame_pointer(), kFirstCaptureOnStack); // We can unroll the loop here, we should not unroll for less than 2 @@ -974,8 +978,9 @@ Handle<HeapObject> RegExpMacroAssemblerARM64::GetCode(Handle<String> source) { __ Bind(&return_w0); // Set stack pointer back to first register to retain - ASSERT(csp.Is(__ StackPointer())); + DCHECK(csp.Is(__ StackPointer())); __ Mov(csp, fp); + __ AssertStackConsistency(); // Restore registers. __ PopCPURegList(registers_to_retain); @@ -986,7 +991,7 @@ Handle<HeapObject> RegExpMacroAssemblerARM64::GetCode(Handle<String> source) { // Registers x0 to x7 are used to store the first captures, they need to be // retained over calls to C++ code. CPURegList cached_registers(CPURegister::kRegister, kXRegSizeInBits, 0, 7); - ASSERT((cached_registers.Count() * 2) == kNumCachedRegisters); + DCHECK((cached_registers.Count() * 2) == kNumCachedRegisters); if (check_preempt_label_.is_linked()) { __ Bind(&check_preempt_label_); @@ -1079,9 +1084,9 @@ void RegExpMacroAssemblerARM64::LoadCurrentCharacter(int cp_offset, int characters) { // TODO(pielan): Make sure long strings are caught before this, and not // just asserted in debug mode. - ASSERT(cp_offset >= -1); // ^ and \b can look behind one character. + DCHECK(cp_offset >= -1); // ^ and \b can look behind one character. // Be sane! (And ensure that an int32_t can be used to index the string) - ASSERT(cp_offset < (1<<30)); + DCHECK(cp_offset < (1<<30)); if (check_bounds) { CheckPosition(cp_offset + characters - 1, on_end_of_input); } @@ -1174,7 +1179,7 @@ void RegExpMacroAssemblerARM64::SetCurrentPositionFromEnd(int by) { void RegExpMacroAssemblerARM64::SetRegister(int register_index, int to) { - ASSERT(register_index >= num_saved_registers_); // Reserved for positions! + DCHECK(register_index >= num_saved_registers_); // Reserved for positions! Register set_to = wzr; if (to != 0) { set_to = w10; @@ -1202,7 +1207,7 @@ void RegExpMacroAssemblerARM64::WriteCurrentPositionToRegister(int reg, void RegExpMacroAssemblerARM64::ClearRegisters(int reg_from, int reg_to) { - ASSERT(reg_from <= reg_to); + DCHECK(reg_from <= reg_to); int num_registers = reg_to - reg_from + 1; // If the first capture register is cached in a hardware register but not @@ -1215,7 +1220,7 @@ void RegExpMacroAssemblerARM64::ClearRegisters(int reg_from, int reg_to) { // Clear cached registers in pairs as far as possible. while ((num_registers >= 2) && (reg_from < kNumCachedRegisters)) { - ASSERT(GetRegisterState(reg_from) == CACHED_LSW); + DCHECK(GetRegisterState(reg_from) == CACHED_LSW); __ Mov(GetCachedRegister(reg_from), twice_non_position_value()); reg_from += 2; num_registers -= 2; @@ -1229,7 +1234,7 @@ void RegExpMacroAssemblerARM64::ClearRegisters(int reg_from, int reg_to) { if (num_registers > 0) { // If there are some remaining registers, they are stored on the stack. - ASSERT(reg_from >= kNumCachedRegisters); + DCHECK(reg_from >= kNumCachedRegisters); // Move down the indexes of the registers on stack to get the correct offset // in memory. 
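ClearRegisters above wipes cached captures two at a time: one 64-bit Mov of twice_non_position_value(), the 32-bit non-position sentinel replicated into both halves of the X register. A sketch, using -1 as the sentinel purely for illustration:

    #include <cassert>
    #include <cstdint>

    // Replicate a 32-bit sentinel into both halves of a 64-bit register, so a
    // single 64-bit move clears a pair of cached 32-bit capture registers.
    uint64_t TwiceNonPositionValue(int32_t non_position) {
      uint64_t w = static_cast<uint32_t>(non_position);
      return (w << 32) | w;
    }

    int main() {
      uint64_t cleared = TwiceNonPositionValue(-1);
      assert(static_cast<uint32_t>(cleared) ==
             static_cast<uint32_t>(cleared >> 32));  // both halves match
      return 0;
    }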
@@ -1288,7 +1293,8 @@ int RegExpMacroAssemblerARM64::CheckStackGuardState(Address* return_address, const byte** input_start, const byte** input_end) { Isolate* isolate = frame_entry<Isolate*>(re_frame, kIsolate); - if (isolate->stack_guard()->IsStackOverflow()) { + StackLimitCheck check(isolate); + if (check.JsHasOverflowed()) { isolate->StackOverflow(); return EXCEPTION; } @@ -1311,11 +1317,11 @@ int RegExpMacroAssemblerARM64::CheckStackGuardState(Address* return_address, // Current string. bool is_ascii = subject->IsOneByteRepresentationUnderneath(); - ASSERT(re_code->instruction_start() <= *return_address); - ASSERT(*return_address <= + DCHECK(re_code->instruction_start() <= *return_address); + DCHECK(*return_address <= re_code->instruction_start() + re_code->instruction_size()); - Object* result = Execution::HandleStackGuardInterrupt(isolate); + Object* result = isolate->stack_guard()->HandleInterrupts(); if (*code_handle != re_code) { // Return address no longer valid int delta = code_handle->address() - re_code->address(); @@ -1351,7 +1357,7 @@ int RegExpMacroAssemblerARM64::CheckStackGuardState(Address* return_address, // be a sequential or external string with the same content. // Update the start and end pointers in the stack frame to the current // location (whether it has actually moved or not). - ASSERT(StringShape(*subject_tmp).IsSequential() || + DCHECK(StringShape(*subject_tmp).IsSequential() || StringShape(*subject_tmp).IsExternal()); // The original start address of the characters to match. @@ -1404,11 +1410,11 @@ void RegExpMacroAssemblerARM64::CallCheckStackGuardState(Register scratch) { // moved. Allocate extra space for 2 arguments passed by pointers. // AAPCS64 requires the stack to be 16 byte aligned. int alignment = masm_->ActivationFrameAlignment(); - ASSERT_EQ(alignment % 16, 0); + DCHECK_EQ(alignment % 16, 0); int align_mask = (alignment / kXRegSize) - 1; int xreg_to_claim = (3 + align_mask) & ~align_mask; - ASSERT(csp.Is(__ StackPointer())); + DCHECK(csp.Is(__ StackPointer())); __ Claim(xreg_to_claim); // CheckStackGuardState needs the end and start addresses of the input string. @@ -1438,7 +1444,7 @@ void RegExpMacroAssemblerARM64::CallCheckStackGuardState(Register scratch) { __ Peek(input_start(), kPointerSize); __ Peek(input_end(), 2 * kPointerSize); - ASSERT(csp.Is(__ StackPointer())); + DCHECK(csp.Is(__ StackPointer())); __ Drop(xreg_to_claim); // Reload the Code pointer. @@ -1458,12 +1464,7 @@ void RegExpMacroAssemblerARM64::BranchOrBacktrack(Condition condition, if (to == NULL) { to = &backtrack_label_; } - // TODO(ulan): do direct jump when jump distance is known and fits in imm19. - Condition inverted_condition = InvertCondition(condition); - Label no_branch; - __ B(inverted_condition, &no_branch); - __ B(to); - __ Bind(&no_branch); + __ B(condition, to); } void RegExpMacroAssemblerARM64::CompareAndBranchOrBacktrack(Register reg, @@ -1474,15 +1475,11 @@ void RegExpMacroAssemblerARM64::CompareAndBranchOrBacktrack(Register reg, if (to == NULL) { to = &backtrack_label_; } - // TODO(ulan): do direct jump when jump distance is known and fits in imm19. 
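The BranchOrBacktrack and CompareAndBranchOrBacktrack hunks above replace the old two-instruction trampoline, which branched over an unconditional B on the negated condition so targets beyond the roughly ±1 MB imm19 reach stayed usable, with a single direct conditional branch or Cbz/Cbnz, on the assumption that the targets are now in range. A control-flow model showing the two shapes agree:

    #include <cstdio>

    // Old shape: branch over an unconditional jump on the negated condition.
    void OldShape(long reg) {
      if (!(reg == 0)) goto no_branch;  // B(ne, &no_branch) guarding a Cbz
      goto backtrack;                   // unconditional B(to)
    no_branch:
      printf("old: fall through\n");
      return;
    backtrack:
      printf("old: backtrack\n");
    }

    // New shape: a single direct conditional branch (Cbz reg, to).
    void NewShape(long reg) {
      if (reg == 0) goto backtrack;
      printf("new: fall through\n");
      return;
    backtrack:
      printf("new: backtrack\n");
    }

    int main() {
      OldShape(0); NewShape(0);  // both backtrack
      OldShape(7); NewShape(7);  // both fall through
      return 0;
    }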
- Label no_branch; if (condition == eq) { - __ Cbnz(reg, &no_branch); + __ Cbz(reg, to); } else { - __ Cbz(reg, &no_branch); + __ Cbnz(reg, to); } - __ B(to); - __ Bind(&no_branch); } else { __ Cmp(reg, immediate); BranchOrBacktrack(condition, to); @@ -1496,7 +1493,7 @@ void RegExpMacroAssemblerARM64::CheckPreemption() { ExternalReference::address_of_stack_limit(isolate()); __ Mov(x10, stack_limit); __ Ldr(x10, MemOperand(x10)); - ASSERT(csp.Is(__ StackPointer())); + DCHECK(csp.Is(__ StackPointer())); __ Cmp(csp, x10); CallIf(&check_preempt_label_, ls); } @@ -1513,8 +1510,8 @@ void RegExpMacroAssemblerARM64::CheckStackLimit() { void RegExpMacroAssemblerARM64::Push(Register source) { - ASSERT(source.Is32Bits()); - ASSERT(!source.is(backtrack_stackpointer())); + DCHECK(source.Is32Bits()); + DCHECK(!source.is(backtrack_stackpointer())); __ Str(source, MemOperand(backtrack_stackpointer(), -static_cast<int>(kWRegSize), @@ -1523,23 +1520,23 @@ void RegExpMacroAssemblerARM64::Push(Register source) { void RegExpMacroAssemblerARM64::Pop(Register target) { - ASSERT(target.Is32Bits()); - ASSERT(!target.is(backtrack_stackpointer())); + DCHECK(target.Is32Bits()); + DCHECK(!target.is(backtrack_stackpointer())); __ Ldr(target, MemOperand(backtrack_stackpointer(), kWRegSize, PostIndex)); } Register RegExpMacroAssemblerARM64::GetCachedRegister(int register_index) { - ASSERT(register_index < kNumCachedRegisters); + DCHECK(register_index < kNumCachedRegisters); return Register::Create(register_index / 2, kXRegSizeInBits); } Register RegExpMacroAssemblerARM64::GetRegister(int register_index, Register maybe_result) { - ASSERT(maybe_result.Is32Bits()); - ASSERT(register_index >= 0); + DCHECK(maybe_result.Is32Bits()); + DCHECK(register_index >= 0); if (num_registers_ <= register_index) { num_registers_ = register_index + 1; } @@ -1562,15 +1559,15 @@ Register RegExpMacroAssemblerARM64::GetRegister(int register_index, UNREACHABLE(); break; } - ASSERT(result.Is32Bits()); + DCHECK(result.Is32Bits()); return result; } void RegExpMacroAssemblerARM64::StoreRegister(int register_index, Register source) { - ASSERT(source.Is32Bits()); - ASSERT(register_index >= 0); + DCHECK(source.Is32Bits()); + DCHECK(register_index >= 0); if (num_registers_ <= register_index) { num_registers_ = register_index + 1; } @@ -1600,29 +1597,29 @@ void RegExpMacroAssemblerARM64::StoreRegister(int register_index, void RegExpMacroAssemblerARM64::CallIf(Label* to, Condition condition) { Label skip_call; - if (condition != al) __ B(&skip_call, InvertCondition(condition)); + if (condition != al) __ B(&skip_call, NegateCondition(condition)); __ Bl(to); __ Bind(&skip_call); } void RegExpMacroAssemblerARM64::RestoreLinkRegister() { - ASSERT(csp.Is(__ StackPointer())); + DCHECK(csp.Is(__ StackPointer())); __ Pop(lr, xzr); __ Add(lr, lr, Operand(masm_->CodeObject())); } void RegExpMacroAssemblerARM64::SaveLinkRegister() { - ASSERT(csp.Is(__ StackPointer())); + DCHECK(csp.Is(__ StackPointer())); __ Sub(lr, lr, Operand(masm_->CodeObject())); __ Push(xzr, lr); } MemOperand RegExpMacroAssemblerARM64::register_location(int register_index) { - ASSERT(register_index < (1<<30)); - ASSERT(register_index >= kNumCachedRegisters); + DCHECK(register_index < (1<<30)); + DCHECK(register_index >= kNumCachedRegisters); if (num_registers_ <= register_index) { num_registers_ = register_index + 1; } @@ -1633,10 +1630,10 @@ MemOperand RegExpMacroAssemblerARM64::register_location(int register_index) { MemOperand RegExpMacroAssemblerARM64::capture_location(int register_index, 
Register scratch) { - ASSERT(register_index < (1<<30)); - ASSERT(register_index < num_saved_registers_); - ASSERT(register_index >= kNumCachedRegisters); - ASSERT_EQ(register_index % 2, 0); + DCHECK(register_index < (1<<30)); + DCHECK(register_index < num_saved_registers_); + DCHECK(register_index >= kNumCachedRegisters); + DCHECK_EQ(register_index % 2, 0); register_index -= kNumCachedRegisters; int offset = kFirstCaptureOnStack - register_index * kWRegSize; // capture_location is used with Stp instructions to load/store 2 registers. @@ -1662,7 +1659,7 @@ void RegExpMacroAssemblerARM64::LoadCurrentCharacterUnchecked(int cp_offset, // disable it. // TODO(pielan): See whether or not we should disable unaligned accesses. if (!CanReadUnaligned()) { - ASSERT(characters == 1); + DCHECK(characters == 1); } if (cp_offset != 0) { @@ -1684,15 +1681,15 @@ void RegExpMacroAssemblerARM64::LoadCurrentCharacterUnchecked(int cp_offset, } else if (characters == 2) { __ Ldrh(current_character(), MemOperand(input_end(), offset, SXTW)); } else { - ASSERT(characters == 1); + DCHECK(characters == 1); __ Ldrb(current_character(), MemOperand(input_end(), offset, SXTW)); } } else { - ASSERT(mode_ == UC16); + DCHECK(mode_ == UC16); if (characters == 2) { __ Ldr(current_character(), MemOperand(input_end(), offset, SXTW)); } else { - ASSERT(characters == 1); + DCHECK(characters == 1); __ Ldrh(current_character(), MemOperand(input_end(), offset, SXTW)); } } diff --git a/deps/v8/src/arm64/regexp-macro-assembler-arm64.h b/deps/v8/src/arm64/regexp-macro-assembler-arm64.h index 5d0d925ecc6..a27cff0566d 100644 --- a/deps/v8/src/arm64/regexp-macro-assembler-arm64.h +++ b/deps/v8/src/arm64/regexp-macro-assembler-arm64.h @@ -5,9 +5,10 @@ #ifndef V8_ARM64_REGEXP_MACRO_ASSEMBLER_ARM64_H_ #define V8_ARM64_REGEXP_MACRO_ASSEMBLER_ARM64_H_ -#include "arm64/assembler-arm64.h" -#include "arm64/assembler-arm64-inl.h" -#include "macro-assembler.h" +#include "src/macro-assembler.h" + +#include "src/arm64/assembler-arm64.h" +#include "src/arm64/assembler-arm64-inl.h" namespace v8 { namespace internal { @@ -230,7 +231,7 @@ class RegExpMacroAssemblerARM64: public NativeRegExpMacroAssembler { }; RegisterState GetRegisterState(int register_index) { - ASSERT(register_index >= 0); + DCHECK(register_index >= 0); if (register_index >= kNumCachedRegisters) { return STACKED; } else { diff --git a/deps/v8/src/arm64/simulator-arm64.cc b/deps/v8/src/arm64/simulator-arm64.cc index 3c970f854a0..cde93db98e0 100644 --- a/deps/v8/src/arm64/simulator-arm64.cc +++ b/deps/v8/src/arm64/simulator-arm64.cc @@ -5,15 +5,16 @@ #include <stdlib.h> #include <cmath> #include <cstdarg> -#include "v8.h" +#include "src/v8.h" #if V8_TARGET_ARCH_ARM64 -#include "disasm.h" -#include "assembler.h" -#include "arm64/decoder-arm64-inl.h" -#include "arm64/simulator-arm64.h" -#include "macro-assembler.h" +#include "src/arm64/decoder-arm64-inl.h" +#include "src/arm64/simulator-arm64.h" +#include "src/assembler.h" +#include "src/disasm.h" +#include "src/macro-assembler.h" +#include "src/ostreams.h" namespace v8 { namespace internal { @@ -61,7 +62,7 @@ void Simulator::TraceSim(const char* format, ...) 
{ if (FLAG_trace_sim) { va_list arguments; va_start(arguments, format); - OS::VFPrint(stream_, format, arguments); + base::OS::VFPrint(stream_, format, arguments); va_end(arguments); } } @@ -72,11 +73,11 @@ const Instruction* Simulator::kEndOfSimAddress = NULL; void SimSystemRegister::SetBits(int msb, int lsb, uint32_t bits) { int width = msb - lsb + 1; - ASSERT(is_uintn(bits, width) || is_intn(bits, width)); + DCHECK(is_uintn(bits, width) || is_intn(bits, width)); bits <<= lsb; uint32_t mask = ((1 << width) - 1) << lsb; - ASSERT((mask & write_ignore_mask_) == 0); + DCHECK((mask & write_ignore_mask_) == 0); value_ = (value_ & ~mask) | (bits & mask); } @@ -106,7 +107,7 @@ void Simulator::Initialize(Isolate* isolate) { Simulator* Simulator::current(Isolate* isolate) { Isolate::PerIsolateThreadData* isolate_data = isolate->FindOrAllocatePerThreadDataForThisThread(); - ASSERT(isolate_data != NULL); + DCHECK(isolate_data != NULL); Simulator* sim = isolate_data->simulator(); if (sim == NULL) { @@ -134,7 +135,7 @@ void Simulator::CallVoid(byte* entry, CallArgument* args) { } else if (arg.IsD() && (index_d < 8)) { set_dreg_bits(index_d++, arg.bits()); } else { - ASSERT(arg.IsD() || arg.IsX()); + DCHECK(arg.IsD() || arg.IsX()); stack_args.push_back(arg.bits()); } } @@ -143,8 +144,8 @@ void Simulator::CallVoid(byte* entry, CallArgument* args) { uintptr_t original_stack = sp(); uintptr_t entry_stack = original_stack - stack_args.size() * sizeof(stack_args[0]); - if (OS::ActivationFrameAlignment() != 0) { - entry_stack &= -OS::ActivationFrameAlignment(); + if (base::OS::ActivationFrameAlignment() != 0) { + entry_stack &= -base::OS::ActivationFrameAlignment(); } char * stack = reinterpret_cast<char*>(entry_stack); std::vector<int64_t>::const_iterator it; @@ -153,7 +154,7 @@ void Simulator::CallVoid(byte* entry, CallArgument* args) { stack += sizeof(*it); } - ASSERT(reinterpret_cast<uintptr_t>(stack) <= original_stack); + DCHECK(reinterpret_cast<uintptr_t>(stack) <= original_stack); set_sp(entry_stack); // Call the generated code. @@ -255,7 +256,7 @@ void Simulator::CheckPCSComplianceAndRun() { CHECK_EQ(saved_registers[i], xreg(register_list.PopLowestIndex().code())); } for (int i = 0; i < kNumberOfCalleeSavedFPRegisters; i++) { - ASSERT(saved_fpregisters[i] == + DCHECK(saved_fpregisters[i] == dreg_bits(fpregister_list.PopLowestIndex().code())); } @@ -288,7 +289,7 @@ void Simulator::CorruptRegisters(CPURegList* list, uint64_t value) { set_xreg(code, value | code); } } else { - ASSERT(list->type() == CPURegister::kFPRegister); + DCHECK(list->type() == CPURegister::kFPRegister); while (!list->IsEmpty()) { unsigned code = list->PopLowestIndex().code(); set_dreg_bits(code, value | code); @@ -310,7 +311,7 @@ void Simulator::CorruptAllCallerSavedCPURegisters() { // Extending the stack by 2 * 64 bits is required for stack alignment purposes. 
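CallVoid above aligns the simulated entry stack with entry_stack &= -ActivationFrameAlignment(); for a power-of-two alignment, the negated value is a mask of ones above the low log2(alignment) bits, so the AND rounds the stack down and keeps the spilled arguments below it. A sketch:

    #include <cassert>
    #include <cstdint>

    int main() {
      const uintptr_t kAlign = 16;  // a power-of-two frame alignment
      uintptr_t entry_stack = 0x7fff1237;

      // -kAlign == ~(kAlign - 1) for powers of two, so this clears the low
      // bits, rounding the stack pointer down to an aligned address.
      entry_stack &= -kAlign;

      assert(entry_stack % kAlign == 0);
      assert(entry_stack <= 0x7fff1237u);  // rounded down, never up
      return 0;
    }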
uintptr_t Simulator::PushAddress(uintptr_t address) { - ASSERT(sizeof(uintptr_t) < 2 * kXRegSize); + DCHECK(sizeof(uintptr_t) < 2 * kXRegSize); intptr_t new_sp = sp() - 2 * kXRegSize; uintptr_t* alignment_slot = reinterpret_cast<uintptr_t*>(new_sp + kXRegSize); @@ -326,7 +327,7 @@ uintptr_t Simulator::PopAddress() { intptr_t current_sp = sp(); uintptr_t* stack_slot = reinterpret_cast<uintptr_t*>(current_sp); uintptr_t address = *stack_slot; - ASSERT(sizeof(uintptr_t) < 2 * kXRegSize); + DCHECK(sizeof(uintptr_t) < 2 * kXRegSize); set_sp(current_sp + 2 * kXRegSize); return address; } @@ -480,7 +481,7 @@ class Redirection { Redirection* current = isolate->simulator_redirection(); for (; current != NULL; current = current->next_) { if (current->external_function_ == external_function) { - ASSERT_EQ(current->type(), type); + DCHECK_EQ(current->type(), type); return current; } } @@ -764,7 +765,7 @@ const char* Simulator::vreg_names[] = { const char* Simulator::WRegNameForCode(unsigned code, Reg31Mode mode) { - ASSERT(code < kNumberOfRegisters); + DCHECK(code < kNumberOfRegisters); // If the code represents the stack pointer, index the name after zr. if ((code == kZeroRegCode) && (mode == Reg31IsStackPointer)) { code = kZeroRegCode + 1; @@ -774,7 +775,7 @@ const char* Simulator::WRegNameForCode(unsigned code, Reg31Mode mode) { const char* Simulator::XRegNameForCode(unsigned code, Reg31Mode mode) { - ASSERT(code < kNumberOfRegisters); + DCHECK(code < kNumberOfRegisters); // If the code represents the stack pointer, index the name after zr. if ((code == kZeroRegCode) && (mode == Reg31IsStackPointer)) { code = kZeroRegCode + 1; @@ -784,19 +785,19 @@ const char* Simulator::XRegNameForCode(unsigned code, Reg31Mode mode) { const char* Simulator::SRegNameForCode(unsigned code) { - ASSERT(code < kNumberOfFPRegisters); + DCHECK(code < kNumberOfFPRegisters); return sreg_names[code]; } const char* Simulator::DRegNameForCode(unsigned code) { - ASSERT(code < kNumberOfFPRegisters); + DCHECK(code < kNumberOfFPRegisters); return dreg_names[code]; } const char* Simulator::VRegNameForCode(unsigned code) { - ASSERT(code < kNumberOfFPRegisters); + DCHECK(code < kNumberOfFPRegisters); return vreg_names[code]; } @@ -823,49 +824,30 @@ int Simulator::CodeFromName(const char* name) { // Helpers --------------------------------------------------------------------- -int64_t Simulator::AddWithCarry(unsigned reg_size, - bool set_flags, - int64_t src1, - int64_t src2, - int64_t carry_in) { - ASSERT((carry_in == 0) || (carry_in == 1)); - ASSERT((reg_size == kXRegSizeInBits) || (reg_size == kWRegSizeInBits)); - - uint64_t u1, u2; - int64_t result; - int64_t signed_sum = src1 + src2 + carry_in; +template <typename T> +T Simulator::AddWithCarry(bool set_flags, + T src1, + T src2, + T carry_in) { + typedef typename make_unsigned<T>::type unsignedT; + DCHECK((carry_in == 0) || (carry_in == 1)); + + T signed_sum = src1 + src2 + carry_in; + T result = signed_sum; bool N, Z, C, V; - if (reg_size == kWRegSizeInBits) { - u1 = static_cast<uint64_t>(src1) & kWRegMask; - u2 = static_cast<uint64_t>(src2) & kWRegMask; - - result = signed_sum & kWRegMask; - // Compute the C flag by comparing the sum to the max unsigned integer. - C = ((kWMaxUInt - u1) < (u2 + carry_in)) || - ((kWMaxUInt - u1 - carry_in) < u2); - // Overflow iff the sign bit is the same for the two inputs and different - // for the result. 
- int64_t s_src1 = src1 << (kXRegSizeInBits - kWRegSizeInBits); - int64_t s_src2 = src2 << (kXRegSizeInBits - kWRegSizeInBits); - int64_t s_result = result << (kXRegSizeInBits - kWRegSizeInBits); - V = ((s_src1 ^ s_src2) >= 0) && ((s_src1 ^ s_result) < 0); + // Compute the C flag + unsignedT u1 = static_cast<unsignedT>(src1); + unsignedT u2 = static_cast<unsignedT>(src2); + unsignedT urest = std::numeric_limits<unsignedT>::max() - u1; + C = (u2 > urest) || (carry_in && (((u2 + 1) > urest) || (u2 > (urest - 1)))); - } else { - u1 = static_cast<uint64_t>(src1); - u2 = static_cast<uint64_t>(src2); - - result = signed_sum; - // Compute the C flag by comparing the sum to the max unsigned integer. - C = ((kXMaxUInt - u1) < (u2 + carry_in)) || - ((kXMaxUInt - u1 - carry_in) < u2); - // Overflow iff the sign bit is the same for the two inputs and different - // for the result. - V = ((src1 ^ src2) >= 0) && ((src1 ^ result) < 0); - } + // Overflow iff the sign bit is the same for the two inputs and different + // for the result. + V = ((src1 ^ src2) >= 0) && ((src1 ^ result) < 0); - N = CalcNFlag(result, reg_size); + N = CalcNFlag(result); Z = CalcZFlag(result); if (set_flags) { @@ -878,33 +860,42 @@ int64_t Simulator::AddWithCarry(unsigned reg_size, } -int64_t Simulator::ShiftOperand(unsigned reg_size, - int64_t value, - Shift shift_type, - unsigned amount) { +template<typename T> +void Simulator::AddSubWithCarry(Instruction* instr) { + T op2 = reg<T>(instr->Rm()); + T new_val; + + if ((instr->Mask(AddSubOpMask) == SUB) || instr->Mask(AddSubOpMask) == SUBS) { + op2 = ~op2; + } + + new_val = AddWithCarry<T>(instr->FlagsUpdate(), + reg<T>(instr->Rn()), + op2, + nzcv().C()); + + set_reg<T>(instr->Rd(), new_val); +} + +template <typename T> +T Simulator::ShiftOperand(T value, Shift shift_type, unsigned amount) { + typedef typename make_unsigned<T>::type unsignedT; + if (amount == 0) { return value; } - int64_t mask = reg_size == kXRegSizeInBits ? kXRegMask : kWRegMask; + switch (shift_type) { case LSL: - return (value << amount) & mask; + return value << amount; case LSR: - return static_cast<uint64_t>(value) >> amount; - case ASR: { - // Shift used to restore the sign. - unsigned s_shift = kXRegSizeInBits - reg_size; - // Value with its sign restored. 
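The templated carry computation above asks whether src2 plus the carry exceeds the headroom left in src1: urest is the largest value that can still be added to u1 without wrapping, and the extra carry_in clauses avoid overflowing u2 + 1 itself. A 32-bit check of that expression against a widened reference sum:

    #include <cassert>
    #include <cstdint>
    #include <limits>

    // Mirror the hunk's carry computation for 32-bit operands.
    bool CarryOut(uint32_t u1, uint32_t u2, uint32_t carry_in) {
      uint32_t urest = std::numeric_limits<uint32_t>::max() - u1;
      return (u2 > urest) ||
             (carry_in && (((u2 + 1) > urest) || (u2 > (urest - 1))));
    }

    int main() {
      const uint32_t cases[] = {0u, 1u, 0x7fffffffu, 0x80000000u, 0xffffffffu};
      for (uint32_t a : cases) {
        for (uint32_t b : cases) {
          for (uint32_t c = 0; c <= 1; c++) {
            // Reference: the sum carries iff it does not fit in 32 bits.
            bool ref = ((uint64_t)a + b + c) > 0xffffffffull;
            assert(CarryOut(a, b, c) == ref);
          }
        }
      }
      return 0;
    }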
- int64_t s_value = (value << s_shift) >> s_shift; - return (s_value >> amount) & mask; - } - case ROR: { - if (reg_size == kWRegSizeInBits) { - value &= kWRegMask; - } - return (static_cast<uint64_t>(value) >> amount) | - ((value & ((1L << amount) - 1L)) << (reg_size - amount)); - } + return static_cast<unsignedT>(value) >> amount; + case ASR: + return value >> amount; + case ROR: + return (static_cast<unsignedT>(value) >> amount) | + ((value & ((1L << amount) - 1L)) << + (sizeof(unsignedT) * 8 - amount)); default: UNIMPLEMENTED(); return 0; @@ -912,10 +903,12 @@ int64_t Simulator::ShiftOperand(unsigned reg_size, } -int64_t Simulator::ExtendValue(unsigned reg_size, - int64_t value, - Extend extend_type, - unsigned left_shift) { +template <typename T> +T Simulator::ExtendValue(T value, Extend extend_type, unsigned left_shift) { + const unsigned kSignExtendBShift = (sizeof(T) - 1) * 8; + const unsigned kSignExtendHShift = (sizeof(T) - 2) * 8; + const unsigned kSignExtendWShift = (sizeof(T) - 4) * 8; + switch (extend_type) { case UXTB: value &= kByteMask; @@ -927,13 +920,13 @@ int64_t Simulator::ExtendValue(unsigned reg_size, value &= kWordMask; break; case SXTB: - value = (value << 56) >> 56; + value = (value << kSignExtendBShift) >> kSignExtendBShift; break; case SXTH: - value = (value << 48) >> 48; + value = (value << kSignExtendHShift) >> kSignExtendHShift; break; case SXTW: - value = (value << 32) >> 32; + value = (value << kSignExtendWShift) >> kSignExtendWShift; break; case UXTX: case SXTX: @@ -941,8 +934,21 @@ int64_t Simulator::ExtendValue(unsigned reg_size, default: UNREACHABLE(); } - int64_t mask = (reg_size == kXRegSizeInBits) ? kXRegMask : kWRegMask; - return (value << left_shift) & mask; + return value << left_shift; +} + + +template <typename T> +void Simulator::Extract(Instruction* instr) { + unsigned lsb = instr->ImmS(); + T op2 = reg<T>(instr->Rm()); + T result = op2; + + if (lsb) { + T op1 = reg<T>(instr->Rn()); + result = op2 >> lsb | (op1 << ((sizeof(T) * 8) - lsb)); + } + set_reg<T>(instr->Rd(), result); } @@ -1059,7 +1065,7 @@ void Simulator::PrintSystemRegisters(bool print_all) { "0b10 (Round towards Minus Infinity)", "0b11 (Round towards Zero)" }; - ASSERT(fpcr().RMode() <= (sizeof(rmode) / sizeof(rmode[0]))); + DCHECK(fpcr().RMode() < ARRAY_SIZE(rmode)); fprintf(stream_, "# %sFPCR: %sAHP:%d DN:%d FZ:%d RMode:%s%s\n", clr_flag_name, clr_flag_value, @@ -1199,7 +1205,7 @@ void Simulator::VisitUnconditionalBranch(Instruction* instr) { void Simulator::VisitConditionalBranch(Instruction* instr) { - ASSERT(instr->Mask(ConditionalBranchMask) == B_cond); + DCHECK(instr->Mask(ConditionalBranchMask) == B_cond); if (ConditionPassed(static_cast<Condition>(instr->ConditionBranch()))) { set_pc(instr->ImmPCOffsetTarget()); } @@ -1256,110 +1262,110 @@ void Simulator::VisitCompareBranch(Instruction* instr) { } -void Simulator::AddSubHelper(Instruction* instr, int64_t op2) { - unsigned reg_size = instr->SixtyFourBits() ? 
kXRegSizeInBits - : kWRegSizeInBits; +template<typename T> +void Simulator::AddSubHelper(Instruction* instr, T op2) { bool set_flags = instr->FlagsUpdate(); - int64_t new_val = 0; + T new_val = 0; Instr operation = instr->Mask(AddSubOpMask); switch (operation) { case ADD: case ADDS: { - new_val = AddWithCarry(reg_size, - set_flags, - reg(reg_size, instr->Rn(), instr->RnMode()), - op2); + new_val = AddWithCarry<T>(set_flags, + reg<T>(instr->Rn(), instr->RnMode()), + op2); break; } case SUB: case SUBS: { - new_val = AddWithCarry(reg_size, - set_flags, - reg(reg_size, instr->Rn(), instr->RnMode()), - ~op2, - 1); + new_val = AddWithCarry<T>(set_flags, + reg<T>(instr->Rn(), instr->RnMode()), + ~op2, + 1); break; } default: UNREACHABLE(); } - set_reg(reg_size, instr->Rd(), new_val, instr->RdMode()); + set_reg<T>(instr->Rd(), new_val, instr->RdMode()); } void Simulator::VisitAddSubShifted(Instruction* instr) { - unsigned reg_size = instr->SixtyFourBits() ? kXRegSizeInBits - : kWRegSizeInBits; - int64_t op2 = ShiftOperand(reg_size, - reg(reg_size, instr->Rm()), - static_cast<Shift>(instr->ShiftDP()), - instr->ImmDPShift()); - AddSubHelper(instr, op2); + Shift shift_type = static_cast<Shift>(instr->ShiftDP()); + unsigned shift_amount = instr->ImmDPShift(); + + if (instr->SixtyFourBits()) { + int64_t op2 = ShiftOperand(xreg(instr->Rm()), shift_type, shift_amount); + AddSubHelper(instr, op2); + } else { + int32_t op2 = ShiftOperand(wreg(instr->Rm()), shift_type, shift_amount); + AddSubHelper(instr, op2); + } } void Simulator::VisitAddSubImmediate(Instruction* instr) { int64_t op2 = instr->ImmAddSub() << ((instr->ShiftAddSub() == 1) ? 12 : 0); - AddSubHelper(instr, op2); + if (instr->SixtyFourBits()) { + AddSubHelper<int64_t>(instr, op2); + } else { + AddSubHelper<int32_t>(instr, op2); + } } void Simulator::VisitAddSubExtended(Instruction* instr) { - unsigned reg_size = instr->SixtyFourBits() ? kXRegSizeInBits - : kWRegSizeInBits; - int64_t op2 = ExtendValue(reg_size, - reg(reg_size, instr->Rm()), - static_cast<Extend>(instr->ExtendMode()), - instr->ImmExtendShift()); - AddSubHelper(instr, op2); + Extend ext = static_cast<Extend>(instr->ExtendMode()); + unsigned left_shift = instr->ImmExtendShift(); + if (instr->SixtyFourBits()) { + int64_t op2 = ExtendValue(xreg(instr->Rm()), ext, left_shift); + AddSubHelper(instr, op2); + } else { + int32_t op2 = ExtendValue(wreg(instr->Rm()), ext, left_shift); + AddSubHelper(instr, op2); + } } void Simulator::VisitAddSubWithCarry(Instruction* instr) { - unsigned reg_size = instr->SixtyFourBits() ? kXRegSizeInBits - : kWRegSizeInBits; - int64_t op2 = reg(reg_size, instr->Rm()); - int64_t new_val; - - if ((instr->Mask(AddSubOpMask) == SUB) || instr->Mask(AddSubOpMask) == SUBS) { - op2 = ~op2; + if (instr->SixtyFourBits()) { + AddSubWithCarry<int64_t>(instr); + } else { + AddSubWithCarry<int32_t>(instr); } - - new_val = AddWithCarry(reg_size, - instr->FlagsUpdate(), - reg(reg_size, instr->Rn()), - op2, - nzcv().C()); - - set_reg(reg_size, instr->Rd(), new_val); } void Simulator::VisitLogicalShifted(Instruction* instr) { - unsigned reg_size = instr->SixtyFourBits() ? 
kXRegSizeInBits - : kWRegSizeInBits; Shift shift_type = static_cast<Shift>(instr->ShiftDP()); unsigned shift_amount = instr->ImmDPShift(); - int64_t op2 = ShiftOperand(reg_size, reg(reg_size, instr->Rm()), shift_type, - shift_amount); - if (instr->Mask(NOT) == NOT) { - op2 = ~op2; + + if (instr->SixtyFourBits()) { + int64_t op2 = ShiftOperand(xreg(instr->Rm()), shift_type, shift_amount); + op2 = (instr->Mask(NOT) == NOT) ? ~op2 : op2; + LogicalHelper<int64_t>(instr, op2); + } else { + int32_t op2 = ShiftOperand(wreg(instr->Rm()), shift_type, shift_amount); + op2 = (instr->Mask(NOT) == NOT) ? ~op2 : op2; + LogicalHelper<int32_t>(instr, op2); } - LogicalHelper(instr, op2); } void Simulator::VisitLogicalImmediate(Instruction* instr) { - LogicalHelper(instr, instr->ImmLogical()); + if (instr->SixtyFourBits()) { + LogicalHelper<int64_t>(instr, instr->ImmLogical()); + } else { + LogicalHelper<int32_t>(instr, instr->ImmLogical()); + } } -void Simulator::LogicalHelper(Instruction* instr, int64_t op2) { - unsigned reg_size = instr->SixtyFourBits() ? kXRegSizeInBits - : kWRegSizeInBits; - int64_t op1 = reg(reg_size, instr->Rn()); - int64_t result = 0; +template<typename T> +void Simulator::LogicalHelper(Instruction* instr, T op2) { + T op1 = reg<T>(instr->Rn()); + T result = 0; bool update_flags = false; // Switch on the logical operation, stripping out the NOT bit, as it has a @@ -1374,41 +1380,46 @@ void Simulator::LogicalHelper(Instruction* instr, int64_t op2) { } if (update_flags) { - nzcv().SetN(CalcNFlag(result, reg_size)); + nzcv().SetN(CalcNFlag(result)); nzcv().SetZ(CalcZFlag(result)); nzcv().SetC(0); nzcv().SetV(0); } - set_reg(reg_size, instr->Rd(), result, instr->RdMode()); + set_reg<T>(instr->Rd(), result, instr->RdMode()); } void Simulator::VisitConditionalCompareRegister(Instruction* instr) { - unsigned reg_size = instr->SixtyFourBits() ? kXRegSizeInBits - : kWRegSizeInBits; - ConditionalCompareHelper(instr, reg(reg_size, instr->Rm())); + if (instr->SixtyFourBits()) { + ConditionalCompareHelper(instr, xreg(instr->Rm())); + } else { + ConditionalCompareHelper(instr, wreg(instr->Rm())); + } } void Simulator::VisitConditionalCompareImmediate(Instruction* instr) { - ConditionalCompareHelper(instr, instr->ImmCondCmp()); + if (instr->SixtyFourBits()) { + ConditionalCompareHelper<int64_t>(instr, instr->ImmCondCmp()); + } else { + ConditionalCompareHelper<int32_t>(instr, instr->ImmCondCmp()); + } } -void Simulator::ConditionalCompareHelper(Instruction* instr, int64_t op2) { - unsigned reg_size = instr->SixtyFourBits() ? kXRegSizeInBits - : kWRegSizeInBits; - int64_t op1 = reg(reg_size, instr->Rn()); +template<typename T> +void Simulator::ConditionalCompareHelper(Instruction* instr, T op2) { + T op1 = reg<T>(instr->Rn()); if (ConditionPassed(static_cast<Condition>(instr->Condition()))) { // If the condition passes, set the status flags to the result of comparing // the operands. if (instr->Mask(ConditionalCompareMask) == CCMP) { - AddWithCarry(reg_size, true, op1, ~op2, 1); + AddWithCarry<T>(true, op1, ~op2, 1); } else { - ASSERT(instr->Mask(ConditionalCompareMask) == CCMN); - AddWithCarry(reg_size, true, op1, op2, 0); + DCHECK(instr->Mask(ConditionalCompareMask) == CCMN); + AddWithCarry<T>(true, op1, op2, 0); } } else { // If the condition fails, set the status flags to the nzcv immediate. 
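ConditionalCompareHelper above reduces CCMP to AddWithCarry(true, op1, ~op2, 1): in two's complement, op1 - op2 equals op1 + ~op2 + 1, so the compare sets NZCV exactly like a subtraction (and CCMN uses op2 with carry 0 for a plain addition). A quick identity check:

    #include <cassert>
    #include <cstdint>

    int main() {
      const int32_t cases[] = {0, 1, -1, 42, -42, INT32_MAX, INT32_MIN};
      for (int32_t op1 : cases) {
        for (int32_t op2 : cases) {
          // op1 - op2 computed the way CCMP feeds AddWithCarry.
          uint32_t via_carry = (uint32_t)op1 + (uint32_t)~op2 + 1u;
          uint32_t direct = (uint32_t)op1 - (uint32_t)op2;
          assert(via_carry == direct);
        }
      }
      return 0;
    }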
@@ -1440,11 +1451,10 @@ void Simulator::VisitLoadStorePostIndex(Instruction* instr) { void Simulator::VisitLoadStoreRegisterOffset(Instruction* instr) { Extend ext = static_cast<Extend>(instr->ExtendMode()); - ASSERT((ext == UXTW) || (ext == UXTX) || (ext == SXTW) || (ext == SXTX)); + DCHECK((ext == UXTW) || (ext == UXTX) || (ext == SXTW) || (ext == SXTX)); unsigned shift_amount = instr->ImmShiftLS() * instr->SizeLS(); - int64_t offset = ExtendValue(kXRegSizeInBits, xreg(instr->Rm()), ext, - shift_amount); + int64_t offset = ExtendValue(xreg(instr->Rm()), ext, shift_amount); LoadStoreHelper(instr, offset, Offset); } @@ -1484,28 +1494,23 @@ void Simulator::LoadStoreHelper(Instruction* instr, case STR_w: case STR_x: MemoryWrite(address, xreg(srcdst), num_bytes); break; case LDRSB_w: { - set_wreg(srcdst, - ExtendValue(kWRegSizeInBits, MemoryRead8(address), SXTB)); + set_wreg(srcdst, ExtendValue<int32_t>(MemoryRead8(address), SXTB)); break; } case LDRSB_x: { - set_xreg(srcdst, - ExtendValue(kXRegSizeInBits, MemoryRead8(address), SXTB)); + set_xreg(srcdst, ExtendValue<int64_t>(MemoryRead8(address), SXTB)); break; } case LDRSH_w: { - set_wreg(srcdst, - ExtendValue(kWRegSizeInBits, MemoryRead16(address), SXTH)); + set_wreg(srcdst, ExtendValue<int32_t>(MemoryRead16(address), SXTH)); break; } case LDRSH_x: { - set_xreg(srcdst, - ExtendValue(kXRegSizeInBits, MemoryRead16(address), SXTH)); + set_xreg(srcdst, ExtendValue<int64_t>(MemoryRead16(address), SXTH)); break; } case LDRSW_x: { - set_xreg(srcdst, - ExtendValue(kXRegSizeInBits, MemoryRead32(address), SXTW)); + set_xreg(srcdst, ExtendValue<int64_t>(MemoryRead32(address), SXTW)); break; } case LDR_s: set_sreg(srcdst, MemoryReadFP32(address)); break; @@ -1581,7 +1586,7 @@ void Simulator::LoadStorePairHelper(Instruction* instr, static_cast<LoadStorePairOp>(instr->Mask(LoadStorePairMask)); // 'rt' and 'rt2' can only be aliased for stores. 
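// Editor's note: a minimal sketch of what the ExtendValue<int32_t> /
// ExtendValue<int64_t> calls above do for SXTB/SXTH/SXTW in the LDRS*
// cases: sign-extend the low `bits` of a loaded value into the
// destination width. The function name is illustrative.
#include <cassert>
#include <cstdint>

template <typename T>
T SignExtendSketch(uint64_t value, unsigned bits) {
  assert((bits >= 1) && (bits <= sizeof(T) * 8));
  uint64_t mask = (bits == 64) ? static_cast<uint64_t>(-1)
                               : ((static_cast<uint64_t>(1) << bits) - 1);
  uint64_t sign_bit = static_cast<uint64_t>(1) << (bits - 1);
  uint64_t v = value & mask;
  // (v ^ sign_bit) - sign_bit propagates the sign bit into the high bits.
  return static_cast<T>((v ^ sign_bit) - sign_bit);
}

// E.g. SignExtendSketch<int32_t>(0x80, 8) == -128 (the LDRSB_w case), and
// SignExtendSketch<int64_t>(0xFFFFFFFF, 32) == -1 (the LDRSW_x case).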
- ASSERT(((op & LoadStorePairLBit) == 0) || (rt != rt2)); + DCHECK(((op & LoadStorePairLBit) == 0) || (rt != rt2)); switch (op) { case LDP_w: { @@ -1605,8 +1610,8 @@ void Simulator::LoadStorePairHelper(Instruction* instr, break; } case LDPSW_x: { - set_xreg(rt, ExtendValue(kXRegSizeInBits, MemoryRead32(address), SXTW)); - set_xreg(rt2, ExtendValue(kXRegSizeInBits, + set_xreg(rt, ExtendValue<int64_t>(MemoryRead32(address), SXTW)); + set_xreg(rt2, ExtendValue<int64_t>( MemoryRead32(address + kWRegSize), SXTW)); break; } @@ -1689,7 +1694,7 @@ void Simulator::LoadStoreWriteBack(unsigned addr_reg, int64_t offset, AddrMode addrmode) { if ((addrmode == PreIndex) || (addrmode == PostIndex)) { - ASSERT(offset != 0); + DCHECK(offset != 0); uint64_t address = xreg(addr_reg, Reg31IsStackPointer); set_reg(addr_reg, address + offset, Reg31IsStackPointer); } @@ -1709,8 +1714,8 @@ void Simulator::CheckMemoryAccess(uint8_t* address, uint8_t* stack) { uint64_t Simulator::MemoryRead(uint8_t* address, unsigned num_bytes) { - ASSERT(address != NULL); - ASSERT((num_bytes > 0) && (num_bytes <= sizeof(uint64_t))); + DCHECK(address != NULL); + DCHECK((num_bytes > 0) && (num_bytes <= sizeof(uint64_t))); uint64_t read = 0; memcpy(&read, address, num_bytes); return read; @@ -1750,8 +1755,8 @@ double Simulator::MemoryReadFP64(uint8_t* address) { void Simulator::MemoryWrite(uint8_t* address, uint64_t value, unsigned num_bytes) { - ASSERT(address != NULL); - ASSERT((num_bytes > 0) && (num_bytes <= sizeof(uint64_t))); + DCHECK(address != NULL); + DCHECK((num_bytes > 0) && (num_bytes <= sizeof(uint64_t))); LogWrite(address, value, num_bytes); memcpy(address, &value, num_bytes); @@ -1785,7 +1790,7 @@ void Simulator::VisitMoveWideImmediate(Instruction* instr) { bool is_64_bits = instr->SixtyFourBits() == 1; // Shift is limited for W operations. - ASSERT(is_64_bits || (instr->ShiftMoveWide() < 2)); + DCHECK(is_64_bits || (instr->ShiftMoveWide() < 2)); // Get the shifted immediate. int64_t shift = instr->ShiftMoveWide() * 16; @@ -1822,25 +1827,26 @@ void Simulator::VisitMoveWideImmediate(Instruction* instr) { void Simulator::VisitConditionalSelect(Instruction* instr) { - uint64_t new_val = xreg(instr->Rn()); - if (ConditionFailed(static_cast<Condition>(instr->Condition()))) { - new_val = xreg(instr->Rm()); + uint64_t new_val = xreg(instr->Rm()); switch (instr->Mask(ConditionalSelectMask)) { - case CSEL_w: - case CSEL_x: break; - case CSINC_w: - case CSINC_x: new_val++; break; - case CSINV_w: - case CSINV_x: new_val = ~new_val; break; - case CSNEG_w: - case CSNEG_x: new_val = -new_val; break; + case CSEL_w: set_wreg(instr->Rd(), new_val); break; + case CSEL_x: set_xreg(instr->Rd(), new_val); break; + case CSINC_w: set_wreg(instr->Rd(), new_val + 1); break; + case CSINC_x: set_xreg(instr->Rd(), new_val + 1); break; + case CSINV_w: set_wreg(instr->Rd(), ~new_val); break; + case CSINV_x: set_xreg(instr->Rd(), ~new_val); break; + case CSNEG_w: set_wreg(instr->Rd(), -new_val); break; + case CSNEG_x: set_xreg(instr->Rd(), -new_val); break; default: UNIMPLEMENTED(); } + } else { + if (instr->SixtyFourBits()) { + set_xreg(instr->Rd(), xreg(instr->Rn())); + } else { + set_wreg(instr->Rd(), wreg(instr->Rn())); + } } - unsigned reg_size = instr->SixtyFourBits() ? 
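// Editor's note: the VisitConditionalSelect rewrite above picks the W or
// X accessor per case instead of threading a runtime reg_size. A compact
// sketch of the underlying CSEL/CSINC/CSINV/CSNEG semantics (names are
// illustrative, not the simulator's):
#include <cstdint>

enum CondSelectOpSketch { kCselSketch, kCsincSketch, kCsinvSketch,
                          kCsnegSketch };

template <typename U>  // U is uint32_t (W form) or uint64_t (X form).
U ConditionalSelectSketch(bool condition_passed, U rn, U rm,
                          CondSelectOpSketch op) {
  if (condition_passed) return rn;  // All four forms return Rn here.
  switch (op) {
    case kCselSketch:  return rm;
    case kCsincSketch: return rm + 1;
    case kCsinvSketch: return ~rm;
    case kCsnegSketch: return static_cast<U>(0) - rm;
  }
  return 0;  // Unreachable for valid encodings.
}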
kXRegSizeInBits - : kWRegSizeInBits; - set_reg(reg_size, instr->Rd(), new_val); } @@ -1874,7 +1880,7 @@ void Simulator::VisitDataProcessing1Source(Instruction* instr) { uint64_t Simulator::ReverseBits(uint64_t value, unsigned num_bits) { - ASSERT((num_bits == kWRegSizeInBits) || (num_bits == kXRegSizeInBits)); + DCHECK((num_bits == kWRegSizeInBits) || (num_bits == kXRegSizeInBits)); uint64_t result = 0; for (unsigned i = 0; i < num_bits; i++) { result = (result << 1) | (value & 1); @@ -1898,7 +1904,7 @@ uint64_t Simulator::ReverseBytes(uint64_t value, ReverseByteMode mode) { // permute_table[Reverse16] is used by REV16_x, REV16_w // permute_table[Reverse32] is used by REV32_x, REV_w // permute_table[Reverse64] is used by REV_x - ASSERT((Reverse16 == 0) && (Reverse32 == 1) && (Reverse64 == 2)); + DCHECK((Reverse16 == 0) && (Reverse32 == 1) && (Reverse64 == 2)); static const uint8_t permute_table[3][8] = { {6, 7, 4, 5, 2, 3, 0, 1}, {4, 5, 6, 7, 0, 1, 2, 3}, {0, 1, 2, 3, 4, 5, 6, 7} }; @@ -1911,28 +1917,17 @@ uint64_t Simulator::ReverseBytes(uint64_t value, ReverseByteMode mode) { } -void Simulator::VisitDataProcessing2Source(Instruction* instr) { +template <typename T> +void Simulator::DataProcessing2Source(Instruction* instr) { Shift shift_op = NO_SHIFT; - int64_t result = 0; + T result = 0; switch (instr->Mask(DataProcessing2SourceMask)) { - case SDIV_w: { - int32_t rn = wreg(instr->Rn()); - int32_t rm = wreg(instr->Rm()); - if ((rn == kWMinInt) && (rm == -1)) { - result = kWMinInt; - } else if (rm == 0) { - // Division by zero can be trapped, but not on A-class processors. - result = 0; - } else { - result = rn / rm; - } - break; - } + case SDIV_w: case SDIV_x: { - int64_t rn = xreg(instr->Rn()); - int64_t rm = xreg(instr->Rm()); - if ((rn == kXMinInt) && (rm == -1)) { - result = kXMinInt; + T rn = reg<T>(instr->Rn()); + T rm = reg<T>(instr->Rm()); + if ((rn == std::numeric_limits<T>::min()) && (rm == -1)) { + result = std::numeric_limits<T>::min(); } else if (rm == 0) { // Division by zero can be trapped, but not on A-class processors. result = 0; @@ -1941,20 +1936,11 @@ void Simulator::VisitDataProcessing2Source(Instruction* instr) { } break; } - case UDIV_w: { - uint32_t rn = static_cast<uint32_t>(wreg(instr->Rn())); - uint32_t rm = static_cast<uint32_t>(wreg(instr->Rm())); - if (rm == 0) { - // Division by zero can be trapped, but not on A-class processors. - result = 0; - } else { - result = rn / rm; - } - break; - } + case UDIV_w: case UDIV_x: { - uint64_t rn = static_cast<uint64_t>(xreg(instr->Rn())); - uint64_t rm = static_cast<uint64_t>(xreg(instr->Rm())); + typedef typename make_unsigned<T>::type unsignedT; + unsignedT rn = static_cast<unsignedT>(reg<T>(instr->Rn())); + unsignedT rm = static_cast<unsignedT>(reg<T>(instr->Rm())); if (rm == 0) { // Division by zero can be trapped, but not on A-class processors. result = 0; @@ -1974,18 +1960,27 @@ void Simulator::VisitDataProcessing2Source(Instruction* instr) { default: UNIMPLEMENTED(); } - unsigned reg_size = instr->SixtyFourBits() ? kXRegSizeInBits - : kWRegSizeInBits; if (shift_op != NO_SHIFT) { // Shift distance encoded in the least-significant five/six bits of the // register. - int mask = (instr->SixtyFourBits() == 1) ? 
kShiftAmountXRegMask - : kShiftAmountWRegMask; - unsigned shift = wreg(instr->Rm()) & mask; - result = ShiftOperand(reg_size, reg(reg_size, instr->Rn()), shift_op, - shift); + unsigned shift = wreg(instr->Rm()); + if (sizeof(T) == kWRegSize) { + shift &= kShiftAmountWRegMask; + } else { + shift &= kShiftAmountXRegMask; + } + result = ShiftOperand(reg<T>(instr->Rn()), shift_op, shift); + } + set_reg<T>(instr->Rd(), result); +} + + +void Simulator::VisitDataProcessing2Source(Instruction* instr) { + if (instr->SixtyFourBits()) { + DataProcessing2Source<int64_t>(instr); + } else { + DataProcessing2Source<int32_t>(instr); } - set_reg(reg_size, instr->Rd(), result); } @@ -2012,9 +2007,6 @@ static int64_t MultiplyHighSigned(int64_t u, int64_t v) { void Simulator::VisitDataProcessing3Source(Instruction* instr) { - unsigned reg_size = instr->SixtyFourBits() ? kXRegSizeInBits - : kWRegSizeInBits; - int64_t result = 0; // Extract and sign- or zero-extend 32-bit arguments for widening operations. uint64_t rn_u32 = reg<uint32_t>(instr->Rn()); @@ -2035,26 +2027,31 @@ void Simulator::VisitDataProcessing3Source(Instruction* instr) { case UMADDL_x: result = xreg(instr->Ra()) + (rn_u32 * rm_u32); break; case UMSUBL_x: result = xreg(instr->Ra()) - (rn_u32 * rm_u32); break; case SMULH_x: - ASSERT(instr->Ra() == kZeroRegCode); + DCHECK(instr->Ra() == kZeroRegCode); result = MultiplyHighSigned(xreg(instr->Rn()), xreg(instr->Rm())); break; default: UNIMPLEMENTED(); } - set_reg(reg_size, instr->Rd(), result); + + if (instr->SixtyFourBits()) { + set_xreg(instr->Rd(), result); + } else { + set_wreg(instr->Rd(), result); + } } -void Simulator::VisitBitfield(Instruction* instr) { - unsigned reg_size = instr->SixtyFourBits() ? kXRegSizeInBits - : kWRegSizeInBits; - int64_t reg_mask = instr->SixtyFourBits() ? kXRegMask : kWRegMask; - int64_t R = instr->ImmR(); - int64_t S = instr->ImmS(); - int64_t diff = S - R; - int64_t mask; +template <typename T> +void Simulator::BitfieldHelper(Instruction* instr) { + typedef typename make_unsigned<T>::type unsignedT; + T reg_size = sizeof(T) * 8; + T R = instr->ImmR(); + T S = instr->ImmS(); + T diff = S - R; + T mask; if (diff >= 0) { - mask = diff < reg_size - 1 ? (1L << (diff + 1)) - 1 - : reg_mask; + mask = diff < reg_size - 1 ? (static_cast<T>(1) << (diff + 1)) - 1 + : static_cast<T>(-1); } else { mask = ((1L << (S + 1)) - 1); mask = (static_cast<uint64_t>(mask) >> R) | (mask << (reg_size - R)); @@ -2083,30 +2080,37 @@ void Simulator::VisitBitfield(Instruction* instr) { UNIMPLEMENTED(); } - int64_t dst = inzero ? 0 : reg(reg_size, instr->Rd()); - int64_t src = reg(reg_size, instr->Rn()); + T dst = inzero ? 0 : reg<T>(instr->Rd()); + T src = reg<T>(instr->Rn()); // Rotate source bitfield into place. - int64_t result = (static_cast<uint64_t>(src) >> R) | (src << (reg_size - R)); + T result = (static_cast<unsignedT>(src) >> R) | (src << (reg_size - R)); // Determine the sign extension. - int64_t topbits_preshift = (1L << (reg_size - diff - 1)) - 1; - int64_t signbits = (extend && ((src >> S) & 1) ? topbits_preshift : 0) - << (diff + 1); + T topbits_preshift = (static_cast<T>(1) << (reg_size - diff - 1)) - 1; + T signbits = (extend && ((src >> S) & 1) ? topbits_preshift : 0) + << (diff + 1); // Merge sign extension, dest/zero and bitfield. 
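// Editor's note: a standalone restatement of the division guards the
// merged SDIV_w/SDIV_x case in DataProcessing2Source above relies on.
// INT_MIN / -1 would overflow (and traps on x86 hosts), so it is pinned
// to INT_MIN, and division by zero yields 0, matching A-class AArch64
// behaviour. The function name is illustrative.
#include <limits>

template <typename T>
T SignedDivSketch(T rn, T rm) {
  if ((rn == std::numeric_limits<T>::min()) && (rm == -1)) {
    return std::numeric_limits<T>::min();  // Overflow case is pinned.
  }
  if (rm == 0) return 0;  // Division by zero is not trapped; returns 0.
  return rn / rm;
}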
result = signbits | (result & mask) | (dst & ~mask); - set_reg(reg_size, instr->Rd(), result); + set_reg<T>(instr->Rd(), result); +} + + +void Simulator::VisitBitfield(Instruction* instr) { + if (instr->SixtyFourBits()) { + BitfieldHelper<int64_t>(instr); + } else { + BitfieldHelper<int32_t>(instr); + } } void Simulator::VisitExtract(Instruction* instr) { - unsigned lsb = instr->ImmS(); - unsigned reg_size = (instr->SixtyFourBits() == 1) ? kXRegSizeInBits - : kWRegSizeInBits; - set_reg(reg_size, - instr->Rd(), - (static_cast<uint64_t>(reg(reg_size, instr->Rm())) >> lsb) | - (reg(reg_size, instr->Rn()) << (reg_size - lsb))); + if (instr->SixtyFourBits()) { + Extract<uint64_t>(instr); + } else { + Extract<uint32_t>(instr); + } } @@ -2403,10 +2407,10 @@ void Simulator::VisitFPDataProcessing1Source(Instruction* instr) { template <class T, int ebits, int mbits> static T FPRound(int64_t sign, int64_t exponent, uint64_t mantissa, FPRounding round_mode) { - ASSERT((sign == 0) || (sign == 1)); + DCHECK((sign == 0) || (sign == 1)); // Only the FPTieEven rounding mode is implemented. - ASSERT(round_mode == FPTieEven); + DCHECK(round_mode == FPTieEven); USE(round_mode); // Rounding can promote subnormals to normals, and normals to infinities. For @@ -2723,7 +2727,7 @@ double Simulator::FPToDouble(float value) { float Simulator::FPToFloat(double value, FPRounding round_mode) { // Only the FPTieEven rounding mode is implemented. - ASSERT(round_mode == FPTieEven); + DCHECK(round_mode == FPTieEven); USE(round_mode); switch (std::fpclassify(value)) { @@ -2852,7 +2856,7 @@ void Simulator::VisitFPDataProcessing3Source(Instruction* instr) { template <typename T> T Simulator::FPAdd(T op1, T op2) { // NaNs should be handled elsewhere. - ASSERT(!std::isnan(op1) && !std::isnan(op2)); + DCHECK(!std::isnan(op1) && !std::isnan(op2)); if (std::isinf(op1) && std::isinf(op2) && (op1 != op2)) { // inf + -inf returns the default NaN. @@ -2867,7 +2871,7 @@ T Simulator::FPAdd(T op1, T op2) { template <typename T> T Simulator::FPDiv(T op1, T op2) { // NaNs should be handled elsewhere. - ASSERT(!std::isnan(op1) && !std::isnan(op2)); + DCHECK(!std::isnan(op1) && !std::isnan(op2)); if ((std::isinf(op1) && std::isinf(op2)) || ((op1 == 0.0) && (op2 == 0.0))) { // inf / inf and 0.0 / 0.0 return the default NaN. @@ -2882,7 +2886,7 @@ T Simulator::FPDiv(T op1, T op2) { template <typename T> T Simulator::FPMax(T a, T b) { // NaNs should be handled elsewhere. - ASSERT(!std::isnan(a) && !std::isnan(b)); + DCHECK(!std::isnan(a) && !std::isnan(b)); if ((a == 0.0) && (b == 0.0) && (copysign(1.0, a) != copysign(1.0, b))) { @@ -2909,7 +2913,7 @@ T Simulator::FPMaxNM(T a, T b) { template <typename T> T Simulator::FPMin(T a, T b) { // NaNs should be handled elsewhere. - ASSERT(!isnan(a) && !isnan(b)); + DCHECK(!std::isnan(a) && !std::isnan(b)); if ((a == 0.0) && (b == 0.0) && (copysign(1.0, a) != copysign(1.0, b))) { @@ -2937,7 +2941,7 @@ T Simulator::FPMinNM(T a, T b) { template <typename T> T Simulator::FPMul(T op1, T op2) { // NaNs should be handled elsewhere. - ASSERT(!std::isnan(op1) && !std::isnan(op2)); + DCHECK(!std::isnan(op1) && !std::isnan(op2)); if ((std::isinf(op1) && (op2 == 0.0)) || (std::isinf(op2) && (op1 == 0.0))) { // inf * 0.0 returns the default NaN. 
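// Editor's note: two standalone sketches of the floating-point rules the
// helpers in this region implement. The operands reaching FPAdd and
// friends are assumed NaN-free (hence the DCHECKs above), because NaNs
// are resolved first by FPProcessNaNs, with signalling NaNs taking
// priority over quiet ones and op1 over op2. Names are illustrative.
#include <cmath>
#include <cstdint>
#include <cstring>
#include <limits>

static bool IsSignallingNaNSketch(double v) {
  uint64_t bits;
  memcpy(&bits, &v, sizeof(bits));
  const uint64_t kQuietBit = static_cast<uint64_t>(1) << 51;
  return std::isnan(v) && ((bits & kQuietBit) == 0);
}

// NaN priority: signalling before quiet, op1 before op2; 0.0 if neither
// operand is a NaN. (The real code also quietens the NaN it returns.)
static double ProcessNaNsSketch(double op1, double op2) {
  if (IsSignallingNaNSketch(op1)) return op1;
  if (IsSignallingNaNSketch(op2)) return op2;
  if (std::isnan(op1)) return op1;
  if (std::isnan(op2)) return op2;
  return 0.0;
}

// With NaNs out of the way, FPAdd only has to special-case inf + -inf,
// which produces the default quiet NaN instead of a host-dependent value.
static double FPAddSketch(double op1, double op2) {
  if (std::isinf(op1) && std::isinf(op2) && (op1 != op2)) {
    return std::numeric_limits<double>::quiet_NaN();
  }
  return op1 + op2;
}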
@@ -2982,7 +2986,7 @@ T Simulator::FPMulAdd(T a, T op1, T op2) { } result = FusedMultiplyAdd(op1, op2, a); - ASSERT(!std::isnan(result)); + DCHECK(!std::isnan(result)); // Work around broken fma implementations for rounded zero results: If a is // 0.0, the sign of the result is the sign of op1 * op2 before rounding. @@ -3009,7 +3013,7 @@ T Simulator::FPSqrt(T op) { template <typename T> T Simulator::FPSub(T op1, T op2) { // NaNs should be handled elsewhere. - ASSERT(!std::isnan(op1) && !std::isnan(op2)); + DCHECK(!std::isnan(op1) && !std::isnan(op2)); if (std::isinf(op1) && std::isinf(op2) && (op1 == op2)) { // inf - inf returns the default NaN. @@ -3023,7 +3027,7 @@ T Simulator::FPSub(T op1, T op2) { template <typename T> T Simulator::FPProcessNaN(T op) { - ASSERT(std::isnan(op)); + DCHECK(std::isnan(op)); return fpcr().DN() ? FPDefaultNaN<T>() : ToQuietNaN(op); } @@ -3035,10 +3039,10 @@ T Simulator::FPProcessNaNs(T op1, T op2) { } else if (IsSignallingNaN(op2)) { return FPProcessNaN(op2); } else if (std::isnan(op1)) { - ASSERT(IsQuietNaN(op1)); + DCHECK(IsQuietNaN(op1)); return FPProcessNaN(op1); } else if (std::isnan(op2)) { - ASSERT(IsQuietNaN(op2)); + DCHECK(IsQuietNaN(op2)); return FPProcessNaN(op2); } else { return 0.0; @@ -3055,13 +3059,13 @@ T Simulator::FPProcessNaNs3(T op1, T op2, T op3) { } else if (IsSignallingNaN(op3)) { return FPProcessNaN(op3); } else if (std::isnan(op1)) { - ASSERT(IsQuietNaN(op1)); + DCHECK(IsQuietNaN(op1)); return FPProcessNaN(op1); } else if (std::isnan(op2)) { - ASSERT(IsQuietNaN(op2)); + DCHECK(IsQuietNaN(op2)); return FPProcessNaN(op2); } else if (std::isnan(op3)) { - ASSERT(IsQuietNaN(op3)); + DCHECK(IsQuietNaN(op3)); return FPProcessNaN(op3); } else { return 0.0; @@ -3117,7 +3121,7 @@ void Simulator::VisitSystem(Instruction* instr) { } } } else if (instr->Mask(SystemHintFMask) == SystemHintFixed) { - ASSERT(instr->Mask(SystemHintMask) == HINT); + DCHECK(instr->Mask(SystemHintMask) == HINT); switch (instr->ImmHint()) { case NOP: break; default: UNIMPLEMENTED(); @@ -3160,12 +3164,12 @@ bool Simulator::GetValue(const char* desc, int64_t* value) { bool Simulator::PrintValue(const char* desc) { if (strcmp(desc, "csp") == 0) { - ASSERT(CodeFromName(desc) == static_cast<int>(kSPRegInternalCode)); + DCHECK(CodeFromName(desc) == static_cast<int>(kSPRegInternalCode)); PrintF(stream_, "%s csp:%s 0x%016" PRIx64 "%s\n", clr_reg_name, clr_reg_value, xreg(31, Reg31IsStackPointer), clr_normal); return true; } else if (strcmp(desc, "wcsp") == 0) { - ASSERT(CodeFromName(desc) == static_cast<int>(kSPRegInternalCode)); + DCHECK(CodeFromName(desc) == static_cast<int>(kSPRegInternalCode)); PrintF(stream_, "%s wcsp:%s 0x%08" PRIx32 "%s\n", clr_reg_name, clr_reg_value, wreg(31, Reg31IsStackPointer), clr_normal); return true; @@ -3341,17 +3345,18 @@ void Simulator::Debug() { (strcmp(cmd, "po") == 0)) { if (argc == 2) { int64_t value; + OFStream os(stdout); if (GetValue(arg1, &value)) { Object* obj = reinterpret_cast<Object*>(value); - PrintF("%s: \n", arg1); + os << arg1 << ": \n"; #ifdef DEBUG - obj->PrintLn(); + obj->Print(os); + os << "\n"; #else - obj->ShortPrint(); - PrintF("\n"); + os << Brief(obj) << "\n"; #endif } else { - PrintF("%s unrecognized\n", arg1); + os << arg1 << " unrecognized\n"; } } else { PrintF("printobject <value>\n" @@ -3441,7 +3446,7 @@ void Simulator::Debug() { // gdb ------------------------------------------------------------------- } else if (strcmp(cmd, "gdb") == 0) { PrintF("Relinquishing control to gdb.\n"); - OS::DebugBreak(); + 
base::OS::DebugBreak(); PrintF("Regaining control from gdb.\n"); // sysregs --------------------------------------------------------------- @@ -3556,7 +3561,7 @@ void Simulator::VisitException(Instruction* instr) { break; default: // We don't support a one-shot LOG_DISASM. - ASSERT((parameters & LOG_DISASM) == 0); + DCHECK((parameters & LOG_DISASM) == 0); // Don't print information that is already being traced. parameters &= ~log_parameters(); // Print the requested information. @@ -3570,8 +3575,8 @@ void Simulator::VisitException(Instruction* instr) { size_t size = kDebugMessageOffset + strlen(message) + 1; pc_ = pc_->InstructionAtOffset(RoundUp(size, kInstructionSize)); // - Verify that the unreachable marker is present. - ASSERT(pc_->Mask(ExceptionMask) == HLT); - ASSERT(pc_->ImmException() == kImmExceptionIsUnreachable); + DCHECK(pc_->Mask(ExceptionMask) == HLT); + DCHECK(pc_->ImmException() == kImmExceptionIsUnreachable); // - Skip past the unreachable marker. set_pc(pc_->following()); @@ -3581,43 +3586,7 @@ void Simulator::VisitException(Instruction* instr) { } else if (instr->ImmException() == kImmExceptionIsRedirectedCall) { DoRuntimeCall(instr); } else if (instr->ImmException() == kImmExceptionIsPrintf) { - // Read the argument encoded inline in the instruction stream. - uint32_t type; - memcpy(&type, - pc_->InstructionAtOffset(kPrintfTypeOffset), - sizeof(type)); - - const char* format = reg<const char*>(0); - - // Pass all of the relevant PCS registers onto printf. It doesn't - // matter if we pass too many as the extra ones won't be read. - int result; - fputs(clr_printf, stream_); - if (type == CPURegister::kRegister) { - result = fprintf(stream_, format, - xreg(1), xreg(2), xreg(3), xreg(4), - xreg(5), xreg(6), xreg(7)); - } else if (type == CPURegister::kFPRegister) { - result = fprintf(stream_, format, - dreg(0), dreg(1), dreg(2), dreg(3), - dreg(4), dreg(5), dreg(6), dreg(7)); - } else { - ASSERT(type == CPURegister::kNoRegister); - result = fprintf(stream_, "%s", format); - } - fputs(clr_normal, stream_); - -#ifdef DEBUG - CorruptAllCallerSavedCPURegisters(); -#endif - - set_xreg(0, result); - - // The printf parameters are inlined in the code, so skip them. - set_pc(pc_->InstructionAtOffset(kPrintfLength)); - - // Set LR as if we'd just called a native printf function. - set_lr(pc()); + DoPrintf(instr); } else if (instr->ImmException() == kImmExceptionIsUnreachable) { fprintf(stream_, "Hit UNREACHABLE marker at PC=%p.\n", @@ -3625,7 +3594,7 @@ void Simulator::VisitException(Instruction* instr) { abort(); } else { - OS::DebugBreak(); + base::OS::DebugBreak(); } break; } @@ -3635,6 +3604,133 @@ void Simulator::VisitException(Instruction* instr) { } } + +void Simulator::DoPrintf(Instruction* instr) { + DCHECK((instr->Mask(ExceptionMask) == HLT) && + (instr->ImmException() == kImmExceptionIsPrintf)); + + // Read the arguments encoded inline in the instruction stream. + uint32_t arg_count; + uint32_t arg_pattern_list; + STATIC_ASSERT(sizeof(*instr) == 1); + memcpy(&arg_count, + instr + kPrintfArgCountOffset, + sizeof(arg_count)); + memcpy(&arg_pattern_list, + instr + kPrintfArgPatternListOffset, + sizeof(arg_pattern_list)); + + DCHECK(arg_count <= kPrintfMaxArgCount); + DCHECK((arg_pattern_list >> (kPrintfArgPatternBits * arg_count)) == 0); + + // We need to call the host printf function with a set of arguments defined by + // arg_pattern_list. Because we don't know the types and sizes of the + // arguments, this is very difficult to do in a robust and portable way. 
To
+  // work around the problem, we pick apart the format string, and print one
+  // format placeholder at a time.
+
+  // Allocate space for the format string. We take a copy, so we can modify it.
+  // Leave enough space for one extra character per expected argument (plus the
+  // '\0' termination).
+  const char * format_base = reg<const char *>(0);
+  DCHECK(format_base != NULL);
+  size_t length = strlen(format_base) + 1;
+  char * const format = new char[length + arg_count];
+
+  // A list of chunks, each with exactly one format placeholder.
+  const char * chunks[kPrintfMaxArgCount];
+
+  // Copy the format string and search for format placeholders.
+  uint32_t placeholder_count = 0;
+  char * format_scratch = format;
+  for (size_t i = 0; i < length; i++) {
+    if (format_base[i] != '%') {
+      *format_scratch++ = format_base[i];
+    } else {
+      if (format_base[i + 1] == '%') {
+        // Ignore explicit "%%" sequences.
+        *format_scratch++ = format_base[i];
+
+        if (placeholder_count == 0) {
+          // The first chunk is passed to printf using "%s", so we need to
+          // unescape "%%" sequences in this chunk. (Just skip the next '%'.)
+          i++;
+        } else {
+          // Otherwise, pass through "%%" unchanged.
+          *format_scratch++ = format_base[++i];
+        }
+      } else {
+        CHECK(placeholder_count < arg_count);
+        // Insert '\0' before placeholders, and store their locations.
+        *format_scratch++ = '\0';
+        chunks[placeholder_count++] = format_scratch;
+        *format_scratch++ = format_base[i];
+      }
+    }
+  }
+  DCHECK(format_scratch <= (format + length + arg_count));
+  CHECK(placeholder_count == arg_count);
+
+  // Finally, call printf with each chunk, passing the appropriate register
+  // argument. Normally, printf returns the number of bytes transmitted, so we
+  // can emulate a single printf call by adding the result from each chunk. If
+  // any call returns a negative (error) value, though, just return that value.
+
+  fprintf(stream_, "%s", clr_printf);
+
+  // Because '\0' is inserted before each placeholder, the first string in
+  // 'format' contains no format placeholders and should be printed literally.
+  int result = fprintf(stream_, "%s", format);
+  int pcs_r = 1;      // Start at x1. x0 holds the format string.
+  int pcs_f = 0;      // Start at d0.
+  if (result >= 0) {
+    for (uint32_t i = 0; i < placeholder_count; i++) {
+      int part_result = -1;
+
+      uint32_t arg_pattern = arg_pattern_list >> (i * kPrintfArgPatternBits);
+      arg_pattern &= (1 << kPrintfArgPatternBits) - 1;
+      switch (arg_pattern) {
+        case kPrintfArgW:
+          part_result = fprintf(stream_, chunks[i], wreg(pcs_r++));
+          break;
+        case kPrintfArgX:
+          part_result = fprintf(stream_, chunks[i], xreg(pcs_r++));
+          break;
+        case kPrintfArgD:
+          part_result = fprintf(stream_, chunks[i], dreg(pcs_f++));
+          break;
+        default: UNREACHABLE();
+      }
+
+      if (part_result < 0) {
+        // Handle error values.
+        result = part_result;
+        break;
+      }
+
+      result += part_result;
+    }
+  }
+
+  fprintf(stream_, "%s", clr_normal);
+
+#ifdef DEBUG
+  CorruptAllCallerSavedCPURegisters();
+#endif
+
+  // Printf returns its result in x0 (just like the C library's printf).
+  set_xreg(0, result);
+
+  // The printf parameters are inlined in the code, so skip them.
+  set_pc(instr->InstructionAtOffset(kPrintfLength));
+
+  // Set LR as if we'd just called a native printf function.
+ set_lr(pc()); + + delete[] format; +} + + #endif // USE_SIMULATOR } } // namespace v8::internal diff --git a/deps/v8/src/arm64/simulator-arm64.h b/deps/v8/src/arm64/simulator-arm64.h index 543385b3761..6b0211816cb 100644 --- a/deps/v8/src/arm64/simulator-arm64.h +++ b/deps/v8/src/arm64/simulator-arm64.h @@ -8,16 +8,16 @@ #include <stdarg.h> #include <vector> -#include "v8.h" +#include "src/v8.h" -#include "globals.h" -#include "utils.h" -#include "allocation.h" -#include "assembler.h" -#include "arm64/assembler-arm64.h" -#include "arm64/decoder-arm64.h" -#include "arm64/disasm-arm64.h" -#include "arm64/instrument-arm64.h" +#include "src/allocation.h" +#include "src/arm64/assembler-arm64.h" +#include "src/arm64/decoder-arm64.h" +#include "src/arm64/disasm-arm64.h" +#include "src/arm64/instrument-arm64.h" +#include "src/assembler.h" +#include "src/globals.h" +#include "src/utils.h" #define REGISTER_CODE_LIST(R) \ R(0) R(1) R(2) R(3) R(4) R(5) R(6) R(7) \ @@ -54,9 +54,6 @@ typedef int (*arm64_regexp_matcher)(String* input, (FUNCTION_CAST<arm64_regexp_matcher>(entry)( \ p0, p1, p2, p3, p4, p5, p6, p7, NULL, p8)) -#define TRY_CATCH_FROM_ADDRESS(try_catch_address) \ - reinterpret_cast<TryCatch*>(try_catch_address) - // Running without a simulator there is nothing to do. class SimulatorStack : public v8::internal::AllStatic { public: @@ -136,35 +133,28 @@ class SimSystemRegister { // Represent a register (r0-r31, v0-v31). -template<int kSizeInBytes> class SimRegisterBase { public: template<typename T> - void Set(T new_value, unsigned size = sizeof(T)) { - ASSERT(size <= kSizeInBytes); - ASSERT(size <= sizeof(new_value)); - // All AArch64 registers are zero-extending; Writing a W register clears the - // top bits of the corresponding X register. - memset(value_, 0, kSizeInBytes); - memcpy(value_, &new_value, size); + void Set(T new_value) { + value_ = 0; + memcpy(&value_, &new_value, sizeof(T)); } - // Copy 'size' bytes of the register to the result, and zero-extend to fill - // the result. template<typename T> - T Get(unsigned size = sizeof(T)) const { - ASSERT(size <= kSizeInBytes); + T Get() const { T result; - memset(&result, 0, sizeof(result)); - memcpy(&result, value_, size); + memcpy(&result, &value_, sizeof(T)); return result; } protected: - uint8_t value_[kSizeInBytes]; + int64_t value_; }; -typedef SimRegisterBase<kXRegSize> SimRegister; // r0-r31 -typedef SimRegisterBase<kDRegSize> SimFPRegister; // v0-v31 + + +typedef SimRegisterBase SimRegister; // r0-r31 +typedef SimRegisterBase SimFPRegister; // v0-v31 class Simulator : public DecoderVisitor { @@ -221,13 +211,14 @@ class Simulator : public DecoderVisitor { public: template<typename T> explicit CallArgument(T argument) { - ASSERT(sizeof(argument) <= sizeof(bits_)); + bits_ = 0; + DCHECK(sizeof(argument) <= sizeof(bits_)); memcpy(&bits_, &argument, sizeof(argument)); type_ = X_ARG; } explicit CallArgument(double argument) { - ASSERT(sizeof(argument) == sizeof(bits_)); + DCHECK(sizeof(argument) == sizeof(bits_)); memcpy(&bits_, &argument, sizeof(argument)); type_ = D_ARG; } @@ -238,10 +229,10 @@ class Simulator : public DecoderVisitor { UNIMPLEMENTED(); // Make the D register a NaN to try to trap errors if the callee expects a // double. If it expects a float, the callee should ignore the top word. - ASSERT(sizeof(kFP64SignallingNaN) == sizeof(bits_)); + DCHECK(sizeof(kFP64SignallingNaN) == sizeof(bits_)); memcpy(&bits_, &kFP64SignallingNaN, sizeof(kFP64SignallingNaN)); // Write the float payload to the S register. 
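// Editor's note: a host-side sketch of the chunking idea DoPrintf above
// uses. The format string is split into a placeholder-free head plus one
// chunk per placeholder, so each chunk can be handed to fprintf together
// with exactly one simulated register value. "%%" is unescaped in the
// head (printed literally) and preserved in later chunks (used as format
// strings). The function name and use of std::string are illustrative.
#include <string>
#include <vector>

static std::vector<std::string> SplitFormatSketch(const std::string& fmt) {
  std::vector<std::string> chunks(1, "");
  for (size_t i = 0; i < fmt.size(); i++) {
    if ((fmt[i] == '%') && (i + 1 < fmt.size()) && (fmt[i + 1] == '%')) {
      // Explicit "%%": literal '%' in the head, kept escaped elsewhere.
      chunks.back() += (chunks.size() == 1) ? "%" : "%%";
      i++;
    } else if (fmt[i] == '%') {
      chunks.push_back("%");  // Each placeholder starts a new chunk.
    } else {
      chunks.back() += fmt[i];
    }
  }
  return chunks;
}

// SplitFormatSketch("x=%x, d=%g") yields {"x=", "%x, d=", "%g"}: the head
// is printed as-is, then each remaining chunk consumes one argument.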
- ASSERT(sizeof(argument) <= sizeof(bits_)); + DCHECK(sizeof(argument) <= sizeof(bits_)); memcpy(&bits_, &argument, sizeof(argument)); type_ = D_ARG; } @@ -299,7 +290,7 @@ class Simulator : public DecoderVisitor { // Simulation helpers. template <typename T> void set_pc(T new_pc) { - ASSERT(sizeof(T) == sizeof(pc_)); + DCHECK(sizeof(T) == sizeof(pc_)); memcpy(&pc_, &new_pc, sizeof(T)); pc_modified_ = true; } @@ -318,7 +309,7 @@ class Simulator : public DecoderVisitor { } void ExecuteInstruction() { - ASSERT(IsAligned(reinterpret_cast<uintptr_t>(pc_), kInstructionSize)); + DCHECK(IsAligned(reinterpret_cast<uintptr_t>(pc_), kInstructionSize)); CheckBreakNext(); Decode(pc_); LogProcessorState(); @@ -331,98 +322,65 @@ class Simulator : public DecoderVisitor { VISITOR_LIST(DECLARE) #undef DECLARE - // Register accessors. + bool IsZeroRegister(unsigned code, Reg31Mode r31mode) const { + return ((code == 31) && (r31mode == Reg31IsZeroRegister)); + } + // Register accessors. // Return 'size' bits of the value of an integer register, as the specified // type. The value is zero-extended to fill the result. // - // The only supported values of 'size' are kXRegSizeInBits and - // kWRegSizeInBits. - template<typename T> - T reg(unsigned size, unsigned code, - Reg31Mode r31mode = Reg31IsZeroRegister) const { - unsigned size_in_bytes = size / 8; - ASSERT(size_in_bytes <= sizeof(T)); - ASSERT((size == kXRegSizeInBits) || (size == kWRegSizeInBits)); - ASSERT(code < kNumberOfRegisters); - - if ((code == 31) && (r31mode == Reg31IsZeroRegister)) { - T result; - memset(&result, 0, sizeof(result)); - return result; - } - return registers_[code].Get<T>(size_in_bytes); - } - - // Like reg(), but infer the access size from the template type. template<typename T> T reg(unsigned code, Reg31Mode r31mode = Reg31IsZeroRegister) const { - return reg<T>(sizeof(T) * 8, code, r31mode); + DCHECK(code < kNumberOfRegisters); + if (IsZeroRegister(code, r31mode)) { + return 0; + } + return registers_[code].Get<T>(); } // Common specialized accessors for the reg() template. - int32_t wreg(unsigned code, - Reg31Mode r31mode = Reg31IsZeroRegister) const { + int32_t wreg(unsigned code, Reg31Mode r31mode = Reg31IsZeroRegister) const { return reg<int32_t>(code, r31mode); } - int64_t xreg(unsigned code, - Reg31Mode r31mode = Reg31IsZeroRegister) const { + int64_t xreg(unsigned code, Reg31Mode r31mode = Reg31IsZeroRegister) const { return reg<int64_t>(code, r31mode); } - int64_t reg(unsigned size, unsigned code, - Reg31Mode r31mode = Reg31IsZeroRegister) const { - return reg<int64_t>(size, code, r31mode); - } - // Write 'size' bits of 'value' into an integer register. The value is // zero-extended. This behaviour matches AArch64 register writes. - // - // The only supported values of 'size' are kXRegSizeInBits and - // kWRegSizeInBits. - template<typename T> - void set_reg(unsigned size, unsigned code, T value, - Reg31Mode r31mode = Reg31IsZeroRegister) { - unsigned size_in_bytes = size / 8; - ASSERT(size_in_bytes <= sizeof(T)); - ASSERT((size == kXRegSizeInBits) || (size == kWRegSizeInBits)); - ASSERT(code < kNumberOfRegisters); - - if ((code == 31) && (r31mode == Reg31IsZeroRegister)) { - return; - } - return registers_[code].Set(value, size_in_bytes); - } // Like set_reg(), but infer the access size from the template type. 
template<typename T> void set_reg(unsigned code, T value, Reg31Mode r31mode = Reg31IsZeroRegister) { - set_reg(sizeof(value) * 8, code, value, r31mode); + DCHECK(code < kNumberOfRegisters); + if (!IsZeroRegister(code, r31mode)) + registers_[code].Set(value); } // Common specialized accessors for the set_reg() template. void set_wreg(unsigned code, int32_t value, Reg31Mode r31mode = Reg31IsZeroRegister) { - set_reg(kWRegSizeInBits, code, value, r31mode); + set_reg(code, value, r31mode); } void set_xreg(unsigned code, int64_t value, Reg31Mode r31mode = Reg31IsZeroRegister) { - set_reg(kXRegSizeInBits, code, value, r31mode); + set_reg(code, value, r31mode); } // Commonly-used special cases. template<typename T> void set_lr(T value) { - ASSERT(sizeof(T) == kPointerSize); + DCHECK(sizeof(T) == kPointerSize); set_reg(kLinkRegCode, value); } template<typename T> void set_sp(T value) { - ASSERT(sizeof(T) == kPointerSize); + DCHECK(sizeof(T) == kPointerSize); set_reg(31, value, Reg31IsStackPointer); } @@ -435,24 +393,10 @@ class Simulator : public DecoderVisitor { Address get_sp() { return reg<Address>(31, Reg31IsStackPointer); } - // Return 'size' bits of the value of a floating-point register, as the - // specified type. The value is zero-extended to fill the result. - // - // The only supported values of 'size' are kDRegSizeInBits and - // kSRegSizeInBits. - template<typename T> - T fpreg(unsigned size, unsigned code) const { - unsigned size_in_bytes = size / 8; - ASSERT(size_in_bytes <= sizeof(T)); - ASSERT((size == kDRegSizeInBits) || (size == kSRegSizeInBits)); - ASSERT(code < kNumberOfFPRegisters); - return fpregisters_[code].Get<T>(size_in_bytes); - } - - // Like fpreg(), but infer the access size from the template type. template<typename T> T fpreg(unsigned code) const { - return fpreg<T>(sizeof(T) * 8, code); + DCHECK(code < kNumberOfRegisters); + return fpregisters_[code].Get<T>(); } // Common specialized accessors for the fpreg() template. @@ -486,9 +430,9 @@ class Simulator : public DecoderVisitor { // This behaviour matches AArch64 register writes. template<typename T> void set_fpreg(unsigned code, T value) { - ASSERT((sizeof(value) == kDRegSize) || (sizeof(value) == kSRegSize)); - ASSERT(code < kNumberOfFPRegisters); - fpregisters_[code].Set(value, sizeof(value)); + DCHECK((sizeof(value) == kDRegSize) || (sizeof(value) == kSRegSize)); + DCHECK(code < kNumberOfFPRegisters); + fpregisters_[code].Set(value); } // Common specialized accessors for the set_fpreg() template. 
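// Editor's note: a condensed model of the two behaviours the simplified
// accessors above combine. Writes zero the whole 64-bit cell before the
// memcpy, so a W write clears the high half of the X register (AArch64
// zero-extension), and register 31 reads as zero / drops writes unless
// the instruction treats it as the stack pointer. Narrow Get<T>() reads
// assume a little-endian host, as the simulator itself does. Names are
// illustrative.
#include <cstdint>
#include <cstring>

class RegisterCellSketch {
 public:
  template <typename T>
  void Set(T new_value) {
    value_ = 0;  // Zero-extend: a 32-bit write clears the top 32 bits.
    memcpy(&value_, &new_value, sizeof(T));
  }
  template <typename T>
  T Get() const {
    T result;
    memcpy(&result, &value_, sizeof(T));  // Low-order bytes on LE hosts.
    return result;
  }
 private:
  int64_t value_;
};

struct RegisterFileSketch {
  RegisterCellSketch cells[32];

  int64_t Read(unsigned code, bool r31_is_zero_reg) const {
    if ((code == 31) && r31_is_zero_reg) return 0;  // xzr reads as zero.
    return cells[code].Get<int64_t>();
  }
  void Write(unsigned code, int64_t value, bool r31_is_zero_reg) {
    if ((code == 31) && r31_is_zero_reg) return;    // Writes are dropped.
    cells[code].Set(value);
  }
};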
@@ -628,14 +572,19 @@ class Simulator : public DecoderVisitor { return !ConditionPassed(cond); } - void AddSubHelper(Instruction* instr, int64_t op2); - int64_t AddWithCarry(unsigned reg_size, - bool set_flags, - int64_t src1, - int64_t src2, - int64_t carry_in = 0); - void LogicalHelper(Instruction* instr, int64_t op2); - void ConditionalCompareHelper(Instruction* instr, int64_t op2); + template<typename T> + void AddSubHelper(Instruction* instr, T op2); + template<typename T> + T AddWithCarry(bool set_flags, + T src1, + T src2, + T carry_in = 0); + template<typename T> + void AddSubWithCarry(Instruction* instr); + template<typename T> + void LogicalHelper(Instruction* instr, T op2); + template<typename T> + void ConditionalCompareHelper(Instruction* instr, T op2); void LoadStoreHelper(Instruction* instr, int64_t offset, AddrMode addrmode); @@ -662,18 +611,21 @@ class Simulator : public DecoderVisitor { void MemoryWrite64(uint8_t* address, uint64_t value); void MemoryWriteFP64(uint8_t* address, double value); - int64_t ShiftOperand(unsigned reg_size, - int64_t value, - Shift shift_type, - unsigned amount); - int64_t Rotate(unsigned reg_width, - int64_t value, + + template <typename T> + T ShiftOperand(T value, Shift shift_type, unsigned amount); - int64_t ExtendValue(unsigned reg_width, - int64_t value, - Extend extend_type, - unsigned left_shift = 0); + template <typename T> + T ExtendValue(T value, + Extend extend_type, + unsigned left_shift = 0); + template <typename T> + void Extract(Instruction* instr); + template <typename T> + void DataProcessing2Source(Instruction* instr); + template <typename T> + void BitfieldHelper(Instruction* instr); uint64_t ReverseBits(uint64_t value, unsigned num_bits); uint64_t ReverseBytes(uint64_t value, ReverseByteMode mode); @@ -757,6 +709,9 @@ class Simulator : public DecoderVisitor { void CorruptAllCallerSavedCPURegisters(); #endif + // Pseudo Printf instruction + void DoPrintf(Instruction* instr); + // Processor state --------------------------------------- // Output stream. @@ -789,15 +744,16 @@ class Simulator : public DecoderVisitor { // functions, or to save and restore it when entering and leaving generated // code. void AssertSupportedFPCR() { - ASSERT(fpcr().FZ() == 0); // No flush-to-zero support. - ASSERT(fpcr().RMode() == FPTieEven); // Ties-to-even rounding only. + DCHECK(fpcr().FZ() == 0); // No flush-to-zero support. + DCHECK(fpcr().RMode() == FPTieEven); // Ties-to-even rounding only. // The simulator does not support half-precision operations so fpcr().AHP() // is irrelevant, and is not checked here. } - static int CalcNFlag(uint64_t result, unsigned reg_size) { - return (result >> (reg_size - 1)) & 1; + template <typename T> + static int CalcNFlag(T result) { + return (result >> (sizeof(T) * 8 - 1)) & 1; } static int CalcZFlag(uint64_t result) { @@ -854,10 +810,6 @@ class Simulator : public DecoderVisitor { entry, \ p0, p1, p2, p3, p4, p5, p6, p7, NULL, p8) -#define TRY_CATCH_FROM_ADDRESS(try_catch_address) \ - try_catch_address == NULL ? \ - NULL : *(reinterpret_cast<TryCatch**>(try_catch_address)) - // The simulator has its own stack. Thus it has a different stack limit from // the C-based native code. 
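// Editor's note: the header above now declares ShiftOperand and
// ExtendValue as templates over the operand type instead of taking a
// runtime reg_size. A self-contained sketch of what such a width-generic
// shift looks like; the ROR amount of 0 is handled up front to avoid a
// shift by the full register width, and ASR is written portably. Names
// are illustrative.
#include <cstdint>
#include <type_traits>

enum ShiftTypeSketch { kLslSketch, kLsrSketch, kAsrSketch, kRorSketch };

template <typename T>
T ShiftOperandSketch(T value, ShiftTypeSketch shift_type, unsigned amount) {
  typedef typename std::make_unsigned<T>::type U;
  const unsigned bits = sizeof(T) * 8;
  if (amount == 0) return value;
  U u = static_cast<U>(value);
  switch (shift_type) {
    case kLslSketch: return static_cast<T>(u << amount);
    case kLsrSketch: return static_cast<T>(u >> amount);
    case kAsrSketch: {
      // Arithmetic shift: logical shift, then replicate the sign bit.
      U shifted = u >> amount;
      if (value < 0) shifted |= ~(static_cast<U>(-1) >> amount);
      return static_cast<T>(shifted);
    }
    case kRorSketch:
      return static_cast<T>((u >> amount) | (u << (bits - amount)));
  }
  return value;  // Unreachable for valid encodings.
}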
diff --git a/deps/v8/src/arm64/stub-cache-arm64.cc b/deps/v8/src/arm64/stub-cache-arm64.cc index 760fbb35409..b7d43a4771a 100644 --- a/deps/v8/src/arm64/stub-cache-arm64.cc +++ b/deps/v8/src/arm64/stub-cache-arm64.cc @@ -2,13 +2,13 @@ // Use of this source code is governed by a BSD-style license that can be // found in the LICENSE file. -#include "v8.h" +#include "src/v8.h" #if V8_TARGET_ARCH_ARM64 -#include "ic-inl.h" -#include "codegen.h" -#include "stub-cache.h" +#include "src/codegen.h" +#include "src/ic-inl.h" +#include "src/stub-cache.h" namespace v8 { namespace internal { @@ -17,14 +17,11 @@ namespace internal { #define __ ACCESS_MASM(masm) -void StubCompiler::GenerateDictionaryNegativeLookup(MacroAssembler* masm, - Label* miss_label, - Register receiver, - Handle<Name> name, - Register scratch0, - Register scratch1) { - ASSERT(!AreAliased(receiver, scratch0, scratch1)); - ASSERT(name->IsUniqueName()); +void PropertyHandlerCompiler::GenerateDictionaryNegativeLookup( + MacroAssembler* masm, Label* miss_label, Register receiver, + Handle<Name> name, Register scratch0, Register scratch1) { + DCHECK(!AreAliased(receiver, scratch0, scratch1)); + DCHECK(name->IsUniqueName()); Counters* counters = masm->isolate()->counters(); __ IncrementCounter(counters->negative_lookups(), 1, scratch0, scratch1); __ IncrementCounter(counters->negative_lookups_miss(), 1, scratch0, scratch1); @@ -96,7 +93,7 @@ static void ProbeTable(Isolate* isolate, Label miss; - ASSERT(!AreAliased(name, offset, scratch, scratch2, scratch3)); + DCHECK(!AreAliased(name, offset, scratch, scratch2, scratch3)); // Multiply by 3 because there are 3 fields per entry. __ Add(scratch3, offset, Operand(offset, LSL, 1)); @@ -154,15 +151,15 @@ void StubCache::GenerateProbe(MacroAssembler* masm, Label miss; // Make sure the flags does not name a specific type. - ASSERT(Code::ExtractTypeFromFlags(flags) == 0); + DCHECK(Code::ExtractTypeFromFlags(flags) == 0); // Make sure that there are no register conflicts. - ASSERT(!AreAliased(receiver, name, scratch, extra, extra2, extra3)); + DCHECK(!AreAliased(receiver, name, scratch, extra, extra2, extra3)); // Make sure extra and extra2 registers are valid. - ASSERT(!extra.is(no_reg)); - ASSERT(!extra2.is(no_reg)); - ASSERT(!extra3.is(no_reg)); + DCHECK(!extra.is(no_reg)); + DCHECK(!extra2.is(no_reg)); + DCHECK(!extra3.is(no_reg)); Counters* counters = masm->isolate()->counters(); __ IncrementCounter(counters->megamorphic_stub_cache_probes(), 1, @@ -177,7 +174,7 @@ void StubCache::GenerateProbe(MacroAssembler* masm, __ Add(scratch, scratch, extra); __ Eor(scratch, scratch, flags); // We shift out the last two bits because they are not part of the hash. - __ Ubfx(scratch, scratch, kHeapObjectTagSize, + __ Ubfx(scratch, scratch, kCacheIndexShift, CountTrailingZeros(kPrimaryTableSize, 64)); // Probe the primary table. @@ -185,8 +182,8 @@ void StubCache::GenerateProbe(MacroAssembler* masm, scratch, extra, extra2, extra3); // Primary miss: Compute hash for secondary table. - __ Sub(scratch, scratch, Operand(name, LSR, kHeapObjectTagSize)); - __ Add(scratch, scratch, flags >> kHeapObjectTagSize); + __ Sub(scratch, scratch, Operand(name, LSR, kCacheIndexShift)); + __ Add(scratch, scratch, flags >> kCacheIndexShift); __ And(scratch, scratch, kSecondaryTableSize - 1); // Probe the secondary table. 
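// Editor's note: the probe code above was also switched from
// kHeapObjectTagSize to kCacheIndexShift. A host-side sketch of the
// two-level hashing it implements: a primary index from (name hash, map,
// flags), and on a miss a secondary index derived from the pre-masked
// primary hash. All constants here are illustrative, not V8's.
#include <cstdint>

static const uint32_t kPrimarySizeSketch = 2048;    // Power of two.
static const uint32_t kSecondarySizeSketch = 512;   // Power of two.
static const unsigned kIndexShiftSketch = 2;

uint32_t PrimaryIndexSketch(uint32_t name_hash, uint32_t map_bits,
                            uint32_t flags) {
  uint32_t hash = (name_hash + map_bits) ^ flags;
  return (hash >> kIndexShiftSketch) & (kPrimarySizeSketch - 1);
}

uint32_t SecondaryIndexSketch(uint32_t primary_hash, uint32_t name_bits,
                              uint32_t flags) {
  uint32_t hash = primary_hash - (name_bits >> kIndexShiftSketch) +
                  (flags >> kIndexShiftSketch);
  return hash & (kSecondarySizeSketch - 1);
}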
@@ -201,29 +198,8 @@ void StubCache::GenerateProbe(MacroAssembler* masm, } -void StubCompiler::GenerateLoadGlobalFunctionPrototype(MacroAssembler* masm, - int index, - Register prototype) { - // Load the global or builtins object from the current context. - __ Ldr(prototype, GlobalObjectMemOperand()); - // Load the native context from the global or builtins object. - __ Ldr(prototype, - FieldMemOperand(prototype, GlobalObject::kNativeContextOffset)); - // Load the function from the native context. - __ Ldr(prototype, ContextMemOperand(prototype, index)); - // Load the initial map. The global functions all have initial maps. - __ Ldr(prototype, - FieldMemOperand(prototype, JSFunction::kPrototypeOrInitialMapOffset)); - // Load the prototype from the initial map. - __ Ldr(prototype, FieldMemOperand(prototype, Map::kPrototypeOffset)); -} - - -void StubCompiler::GenerateDirectLoadGlobalFunctionPrototype( - MacroAssembler* masm, - int index, - Register prototype, - Label* miss) { +void NamedLoadHandlerCompiler::GenerateDirectLoadGlobalFunctionPrototype( + MacroAssembler* masm, int index, Register prototype, Label* miss) { Isolate* isolate = masm->isolate(); // Get the global function with the given index. Handle<JSFunction> function( @@ -244,50 +220,9 @@ void StubCompiler::GenerateDirectLoadGlobalFunctionPrototype( } -void StubCompiler::GenerateFastPropertyLoad(MacroAssembler* masm, - Register dst, - Register src, - bool inobject, - int index, - Representation representation) { - ASSERT(!representation.IsDouble()); - USE(representation); - if (inobject) { - int offset = index * kPointerSize; - __ Ldr(dst, FieldMemOperand(src, offset)); - } else { - // Calculate the offset into the properties array. - int offset = index * kPointerSize + FixedArray::kHeaderSize; - __ Ldr(dst, FieldMemOperand(src, JSObject::kPropertiesOffset)); - __ Ldr(dst, FieldMemOperand(dst, offset)); - } -} - - -void StubCompiler::GenerateLoadArrayLength(MacroAssembler* masm, - Register receiver, - Register scratch, - Label* miss_label) { - ASSERT(!AreAliased(receiver, scratch)); - - // Check that the receiver isn't a smi. - __ JumpIfSmi(receiver, miss_label); - - // Check that the object is a JS array. - __ JumpIfNotObjectType(receiver, scratch, scratch, JS_ARRAY_TYPE, - miss_label); - - // Load length directly from the JS array. - __ Ldr(x0, FieldMemOperand(receiver, JSArray::kLengthOffset)); - __ Ret(); -} - - -void StubCompiler::GenerateLoadFunctionPrototype(MacroAssembler* masm, - Register receiver, - Register scratch1, - Register scratch2, - Label* miss_label) { +void NamedLoadHandlerCompiler::GenerateLoadFunctionPrototype( + MacroAssembler* masm, Register receiver, Register scratch1, + Register scratch2, Label* miss_label) { __ TryGetFunctionPrototype(receiver, scratch1, scratch2, miss_label); // TryGetFunctionPrototype can't put the result directly in x0 because the // 3 inputs registers can't alias and we call this function from @@ -301,31 +236,134 @@ void StubCompiler::GenerateLoadFunctionPrototype(MacroAssembler* masm, // Generate code to check that a global property cell is empty. Create // the property cell at compilation time if no cell exists for the // property. 
-void StubCompiler::GenerateCheckPropertyCell(MacroAssembler* masm, - Handle<JSGlobalObject> global, - Handle<Name> name, - Register scratch, - Label* miss) { +void PropertyHandlerCompiler::GenerateCheckPropertyCell( + MacroAssembler* masm, Handle<JSGlobalObject> global, Handle<Name> name, + Register scratch, Label* miss) { Handle<Cell> cell = JSGlobalObject::EnsurePropertyCell(global, name); - ASSERT(cell->value()->IsTheHole()); + DCHECK(cell->value()->IsTheHole()); __ Mov(scratch, Operand(cell)); __ Ldr(scratch, FieldMemOperand(scratch, Cell::kValueOffset)); __ JumpIfNotRoot(scratch, Heap::kTheHoleValueRootIndex, miss); } -void StoreStubCompiler::GenerateNegativeHolderLookup( - MacroAssembler* masm, - Handle<JSObject> holder, - Register holder_reg, - Handle<Name> name, - Label* miss) { - if (holder->IsJSGlobalObject()) { - GenerateCheckPropertyCell( - masm, Handle<JSGlobalObject>::cast(holder), name, scratch1(), miss); - } else if (!holder->HasFastProperties() && !holder->IsJSGlobalProxy()) { - GenerateDictionaryNegativeLookup( - masm, miss, holder_reg, name, scratch1(), scratch2()); +static void PushInterceptorArguments(MacroAssembler* masm, Register receiver, + Register holder, Register name, + Handle<JSObject> holder_obj) { + STATIC_ASSERT(NamedLoadHandlerCompiler::kInterceptorArgsNameIndex == 0); + STATIC_ASSERT(NamedLoadHandlerCompiler::kInterceptorArgsInfoIndex == 1); + STATIC_ASSERT(NamedLoadHandlerCompiler::kInterceptorArgsThisIndex == 2); + STATIC_ASSERT(NamedLoadHandlerCompiler::kInterceptorArgsHolderIndex == 3); + STATIC_ASSERT(NamedLoadHandlerCompiler::kInterceptorArgsLength == 4); + + __ Push(name); + Handle<InterceptorInfo> interceptor(holder_obj->GetNamedInterceptor()); + DCHECK(!masm->isolate()->heap()->InNewSpace(*interceptor)); + Register scratch = name; + __ Mov(scratch, Operand(interceptor)); + __ Push(scratch, receiver, holder); +} + + +static void CompileCallLoadPropertyWithInterceptor( + MacroAssembler* masm, Register receiver, Register holder, Register name, + Handle<JSObject> holder_obj, IC::UtilityId id) { + PushInterceptorArguments(masm, receiver, holder, name, holder_obj); + + __ CallExternalReference(ExternalReference(IC_Utility(id), masm->isolate()), + NamedLoadHandlerCompiler::kInterceptorArgsLength); +} + + +// Generate call to api function. +void PropertyHandlerCompiler::GenerateFastApiCall( + MacroAssembler* masm, const CallOptimization& optimization, + Handle<Map> receiver_map, Register receiver, Register scratch, + bool is_store, int argc, Register* values) { + DCHECK(!AreAliased(receiver, scratch)); + + MacroAssembler::PushPopQueue queue(masm); + queue.Queue(receiver); + // Write the arguments to the stack frame. + for (int i = 0; i < argc; i++) { + Register arg = values[argc - 1 - i]; + DCHECK(!AreAliased(receiver, scratch, arg)); + queue.Queue(arg); + } + queue.PushQueued(); + + DCHECK(optimization.is_simple_api_call()); + + // Abi for CallApiFunctionStub. + Register callee = x0; + Register call_data = x4; + Register holder = x2; + Register api_function_address = x1; + + // Put holder in place. 
+ CallOptimization::HolderLookup holder_lookup; + Handle<JSObject> api_holder = + optimization.LookupHolderOfExpectedType(receiver_map, &holder_lookup); + switch (holder_lookup) { + case CallOptimization::kHolderIsReceiver: + __ Mov(holder, receiver); + break; + case CallOptimization::kHolderFound: + __ LoadObject(holder, api_holder); + break; + case CallOptimization::kHolderNotFound: + UNREACHABLE(); + break; + } + + Isolate* isolate = masm->isolate(); + Handle<JSFunction> function = optimization.constant_function(); + Handle<CallHandlerInfo> api_call_info = optimization.api_call_info(); + Handle<Object> call_data_obj(api_call_info->data(), isolate); + + // Put callee in place. + __ LoadObject(callee, function); + + bool call_data_undefined = false; + // Put call_data in place. + if (isolate->heap()->InNewSpace(*call_data_obj)) { + __ LoadObject(call_data, api_call_info); + __ Ldr(call_data, FieldMemOperand(call_data, CallHandlerInfo::kDataOffset)); + } else if (call_data_obj->IsUndefined()) { + call_data_undefined = true; + __ LoadRoot(call_data, Heap::kUndefinedValueRootIndex); + } else { + __ LoadObject(call_data, call_data_obj); + } + + // Put api_function_address in place. + Address function_address = v8::ToCData<Address>(api_call_info->callback()); + ApiFunction fun(function_address); + ExternalReference ref = ExternalReference( + &fun, ExternalReference::DIRECT_API_CALL, masm->isolate()); + __ Mov(api_function_address, ref); + + // Jump to stub. + CallApiFunctionStub stub(isolate, is_store, call_data_undefined, argc); + __ TailCallStub(&stub); +} + + +void PropertyAccessCompiler::GenerateTailCall(MacroAssembler* masm, + Handle<Code> code) { + __ Jump(code, RelocInfo::CODE_TARGET); +} + + +#undef __ +#define __ ACCESS_MASM(masm()) + + +void NamedStoreHandlerCompiler::GenerateRestoreName(Label* label, + Handle<Name> name) { + if (!label->is_unused()) { + __ Bind(label); + __ Mov(this->name(), Operand(name)); } } @@ -334,22 +372,13 @@ void StoreStubCompiler::GenerateNegativeHolderLookup( // When leaving generated code after success, the receiver_reg and storage_reg // may be clobbered. Upon branch to miss_label, the receiver and name registers // have their original values. -void StoreStubCompiler::GenerateStoreTransition(MacroAssembler* masm, - Handle<JSObject> object, - LookupResult* lookup, - Handle<Map> transition, - Handle<Name> name, - Register receiver_reg, - Register storage_reg, - Register value_reg, - Register scratch1, - Register scratch2, - Register scratch3, - Label* miss_label, - Label* slow) { +void NamedStoreHandlerCompiler::GenerateStoreTransition( + Handle<Map> transition, Handle<Name> name, Register receiver_reg, + Register storage_reg, Register value_reg, Register scratch1, + Register scratch2, Register scratch3, Label* miss_label, Label* slow) { Label exit; - ASSERT(!AreAliased(receiver_reg, storage_reg, value_reg, + DCHECK(!AreAliased(receiver_reg, storage_reg, value_reg, scratch1, scratch2, scratch3)); // We don't need scratch3. 
@@ -359,10 +388,10 @@ void StoreStubCompiler::GenerateStoreTransition(MacroAssembler* masm, DescriptorArray* descriptors = transition->instance_descriptors(); PropertyDetails details = descriptors->GetDetails(descriptor); Representation representation = details.representation(); - ASSERT(!representation.IsNone()); + DCHECK(!representation.IsNone()); if (details.type() == CONSTANT) { - Handle<Object> constant(descriptors->GetValue(descriptor), masm->isolate()); + Handle<Object> constant(descriptors->GetValue(descriptor), isolate()); __ LoadObject(scratch1, constant); __ Cmp(value_reg, scratch1); __ B(ne, miss_label); @@ -387,7 +416,7 @@ void StoreStubCompiler::GenerateStoreTransition(MacroAssembler* masm, __ Bind(&do_store); } } else if (representation.IsDouble()) { - UseScratchRegisterScope temps(masm); + UseScratchRegisterScope temps(masm()); DoubleRegister temp_double = temps.AcquireD(); __ SmiUntagToDouble(temp_double, value_reg, kSpeculativeUntag); @@ -399,24 +428,24 @@ void StoreStubCompiler::GenerateStoreTransition(MacroAssembler* masm, __ Ldr(temp_double, FieldMemOperand(value_reg, HeapNumber::kValueOffset)); __ Bind(&do_store); - __ AllocateHeapNumber(storage_reg, slow, scratch1, scratch2, temp_double); + __ AllocateHeapNumber(storage_reg, slow, scratch1, scratch2, temp_double, + NoReg, MUTABLE); } - // Stub never generated for non-global objects that require access checks. - ASSERT(object->IsJSGlobalProxy() || !object->IsAccessCheckNeeded()); + // Stub never generated for objects that require access checks. + DCHECK(!transition->is_access_check_needed()); // Perform map transition for the receiver if necessary. - if ((details.type() == FIELD) && - (object->map()->unused_property_fields() == 0)) { + if (details.type() == FIELD && + Map::cast(transition->GetBackPointer())->unused_property_fields() == 0) { // The properties must be extended before we can store the value. // We jump to a runtime call that extends the properties array. __ Mov(scratch1, Operand(transition)); __ Push(receiver_reg, scratch1, value_reg); __ TailCallExternalReference( ExternalReference(IC_Utility(IC::kSharedStoreIC_ExtendStorage), - masm->isolate()), - 3, - 1); + isolate()), + 3, 1); return; } @@ -435,7 +464,7 @@ void StoreStubCompiler::GenerateStoreTransition(MacroAssembler* masm, OMIT_SMI_CHECK); if (details.type() == CONSTANT) { - ASSERT(value_reg.is(x0)); + DCHECK(value_reg.is(x0)); __ Ret(); return; } @@ -446,7 +475,7 @@ void StoreStubCompiler::GenerateStoreTransition(MacroAssembler* masm, // Adjust for the number of properties stored in the object. Even in the // face of a transition we can use the old map here because the size of the // object and the number of in-object properties is not going to change. - index -= object->map()->inobject_properties(); + index -= transition->inobject_properties(); // TODO(verwaest): Share this code as a code stub. SmiCheck smi_check = representation.IsTagged() @@ -454,7 +483,7 @@ void StoreStubCompiler::GenerateStoreTransition(MacroAssembler* masm, Register prop_reg = representation.IsDouble() ? storage_reg : value_reg; if (index < 0) { // Set the property straight into the object. - int offset = object->map()->instance_size() + (index * kPointerSize); + int offset = transition->instance_size() + (index * kPointerSize); __ Str(prop_reg, FieldMemOperand(receiver_reg, offset)); if (!representation.IsSmi()) { @@ -497,311 +526,57 @@ void StoreStubCompiler::GenerateStoreTransition(MacroAssembler* masm, __ Bind(&exit); // Return the value (register x0). 
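// Editor's note: a sketch of the field-offset arithmetic that
// GenerateStoreTransition above performs after subtracting
// inobject_properties(): indices below the in-object count live inside
// the object itself, the rest in the out-of-line properties array. The
// layout constants here are illustrative, not V8's real values.
#include <cstdint>

static const int kPointerSizeSketch = 8;
static const int kFixedArrayHeaderSketch = 16;

struct FieldLocationSketch {
  bool inobject;
  int offset;  // Byte offset from the object or the properties array.
};

FieldLocationSketch FieldOffsetSketch(int field_index,
                                      int inobject_properties,
                                      int instance_size) {
  int index = field_index - inobject_properties;
  FieldLocationSketch loc;
  if (index < 0) {
    loc.inobject = true;   // Stored straight into the object.
    loc.offset = instance_size + index * kPointerSizeSketch;
  } else {
    loc.inobject = false;  // Stored in the properties FixedArray.
    loc.offset = index * kPointerSizeSketch + kFixedArrayHeaderSketch;
  }
  return loc;
}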
- ASSERT(value_reg.is(x0)); + DCHECK(value_reg.is(x0)); __ Ret(); } -// Generate StoreField code, value is passed in x0 register. -// When leaving generated code after success, the receiver_reg and name_reg may -// be clobbered. Upon branch to miss_label, the receiver and name registers have -// their original values. -void StoreStubCompiler::GenerateStoreField(MacroAssembler* masm, - Handle<JSObject> object, - LookupResult* lookup, - Register receiver_reg, - Register name_reg, - Register value_reg, - Register scratch1, - Register scratch2, - Label* miss_label) { - // x0 : value - Label exit; - - // Stub never generated for non-global objects that require access - // checks. - ASSERT(object->IsJSGlobalProxy() || !object->IsAccessCheckNeeded()); - - int index = lookup->GetFieldIndex().field_index(); - - // Adjust for the number of properties stored in the object. Even in the - // face of a transition we can use the old map here because the size of the - // object and the number of in-object properties is not going to change. - index -= object->map()->inobject_properties(); - - Representation representation = lookup->representation(); - ASSERT(!representation.IsNone()); - if (representation.IsSmi()) { - __ JumpIfNotSmi(value_reg, miss_label); - } else if (representation.IsHeapObject()) { - __ JumpIfSmi(value_reg, miss_label); - HeapType* field_type = lookup->GetFieldType(); - HeapType::Iterator<Map> it = field_type->Classes(); - if (!it.Done()) { - __ Ldr(scratch1, FieldMemOperand(value_reg, HeapObject::kMapOffset)); - Label do_store; - while (true) { - __ CompareMap(scratch1, it.Current()); - it.Advance(); - if (it.Done()) { - __ B(ne, miss_label); - break; - } - __ B(eq, &do_store); - } - __ Bind(&do_store); - } - } else if (representation.IsDouble()) { - UseScratchRegisterScope temps(masm); - DoubleRegister temp_double = temps.AcquireD(); - - __ SmiUntagToDouble(temp_double, value_reg, kSpeculativeUntag); - - // Load the double storage. - if (index < 0) { - int offset = (index * kPointerSize) + object->map()->instance_size(); - __ Ldr(scratch1, FieldMemOperand(receiver_reg, offset)); - } else { - int offset = (index * kPointerSize) + FixedArray::kHeaderSize; - __ Ldr(scratch1, - FieldMemOperand(receiver_reg, JSObject::kPropertiesOffset)); - __ Ldr(scratch1, FieldMemOperand(scratch1, offset)); - } - - // Store the value into the storage. - Label do_store, heap_number; - - __ JumpIfSmi(value_reg, &do_store); - - __ CheckMap(value_reg, scratch2, Heap::kHeapNumberMapRootIndex, - miss_label, DONT_DO_SMI_CHECK); - __ Ldr(temp_double, FieldMemOperand(value_reg, HeapNumber::kValueOffset)); - - __ Bind(&do_store); - __ Str(temp_double, FieldMemOperand(scratch1, HeapNumber::kValueOffset)); - - // Return the value (register x0). - ASSERT(value_reg.is(x0)); - __ Ret(); - return; - } - - // TODO(verwaest): Share this code as a code stub. - SmiCheck smi_check = representation.IsTagged() - ? INLINE_SMI_CHECK : OMIT_SMI_CHECK; - if (index < 0) { - // Set the property straight into the object. - int offset = object->map()->instance_size() + (index * kPointerSize); - __ Str(value_reg, FieldMemOperand(receiver_reg, offset)); - - if (!representation.IsSmi()) { - // Skip updating write barrier if storing a smi. - __ JumpIfSmi(value_reg, &exit); - - // Update the write barrier for the array address. - // Pass the now unused name_reg as a scratch register. 
- __ Mov(name_reg, value_reg); - __ RecordWriteField(receiver_reg, - offset, - name_reg, - scratch1, - kLRHasNotBeenSaved, - kDontSaveFPRegs, - EMIT_REMEMBERED_SET, - smi_check); - } - } else { - // Write to the properties array. - int offset = index * kPointerSize + FixedArray::kHeaderSize; - // Get the properties array - __ Ldr(scratch1, - FieldMemOperand(receiver_reg, JSObject::kPropertiesOffset)); - __ Str(value_reg, FieldMemOperand(scratch1, offset)); - - if (!representation.IsSmi()) { - // Skip updating write barrier if storing a smi. - __ JumpIfSmi(value_reg, &exit); - - // Update the write barrier for the array address. - // Ok to clobber receiver_reg and name_reg, since we return. - __ Mov(name_reg, value_reg); - __ RecordWriteField(scratch1, - offset, - name_reg, - receiver_reg, - kLRHasNotBeenSaved, - kDontSaveFPRegs, - EMIT_REMEMBERED_SET, - smi_check); - } - } - - __ Bind(&exit); - // Return the value (register x0). - ASSERT(value_reg.is(x0)); - __ Ret(); -} - - -void StoreStubCompiler::GenerateRestoreName(MacroAssembler* masm, - Label* label, - Handle<Name> name) { - if (!label->is_unused()) { - __ Bind(label); - __ Mov(this->name(), Operand(name)); - } -} - - -static void PushInterceptorArguments(MacroAssembler* masm, - Register receiver, - Register holder, - Register name, - Handle<JSObject> holder_obj) { - STATIC_ASSERT(StubCache::kInterceptorArgsNameIndex == 0); - STATIC_ASSERT(StubCache::kInterceptorArgsInfoIndex == 1); - STATIC_ASSERT(StubCache::kInterceptorArgsThisIndex == 2); - STATIC_ASSERT(StubCache::kInterceptorArgsHolderIndex == 3); - STATIC_ASSERT(StubCache::kInterceptorArgsLength == 4); - - __ Push(name); - Handle<InterceptorInfo> interceptor(holder_obj->GetNamedInterceptor()); - ASSERT(!masm->isolate()->heap()->InNewSpace(*interceptor)); - Register scratch = name; - __ Mov(scratch, Operand(interceptor)); - __ Push(scratch, receiver, holder); -} - - -static void CompileCallLoadPropertyWithInterceptor( - MacroAssembler* masm, - Register receiver, - Register holder, - Register name, - Handle<JSObject> holder_obj, - IC::UtilityId id) { - PushInterceptorArguments(masm, receiver, holder, name, holder_obj); - - __ CallExternalReference( - ExternalReference(IC_Utility(id), masm->isolate()), - StubCache::kInterceptorArgsLength); -} - - -// Generate call to api function. -void StubCompiler::GenerateFastApiCall(MacroAssembler* masm, - const CallOptimization& optimization, - Handle<Map> receiver_map, - Register receiver, - Register scratch, - bool is_store, - int argc, - Register* values) { - ASSERT(!AreAliased(receiver, scratch)); - - MacroAssembler::PushPopQueue queue(masm); - queue.Queue(receiver); - // Write the arguments to the stack frame. - for (int i = 0; i < argc; i++) { - Register arg = values[argc-1-i]; - ASSERT(!AreAliased(receiver, scratch, arg)); - queue.Queue(arg); - } - queue.PushQueued(); - - ASSERT(optimization.is_simple_api_call()); - - // Abi for CallApiFunctionStub. - Register callee = x0; - Register call_data = x4; - Register holder = x2; - Register api_function_address = x1; - - // Put holder in place. 
- CallOptimization::HolderLookup holder_lookup; - Handle<JSObject> api_holder = - optimization.LookupHolderOfExpectedType(receiver_map, &holder_lookup); - switch (holder_lookup) { - case CallOptimization::kHolderIsReceiver: - __ Mov(holder, receiver); - break; - case CallOptimization::kHolderFound: - __ LoadObject(holder, api_holder); - break; - case CallOptimization::kHolderNotFound: - UNREACHABLE(); +void NamedStoreHandlerCompiler::GenerateStoreField(LookupResult* lookup, + Register value_reg, + Label* miss_label) { + DCHECK(lookup->representation().IsHeapObject()); + __ JumpIfSmi(value_reg, miss_label); + HeapType::Iterator<Map> it = lookup->GetFieldType()->Classes(); + __ Ldr(scratch1(), FieldMemOperand(value_reg, HeapObject::kMapOffset)); + Label do_store; + while (true) { + __ CompareMap(scratch1(), it.Current()); + it.Advance(); + if (it.Done()) { + __ B(ne, miss_label); break; + } + __ B(eq, &do_store); } + __ Bind(&do_store); - Isolate* isolate = masm->isolate(); - Handle<JSFunction> function = optimization.constant_function(); - Handle<CallHandlerInfo> api_call_info = optimization.api_call_info(); - Handle<Object> call_data_obj(api_call_info->data(), isolate); - - // Put callee in place. - __ LoadObject(callee, function); - - bool call_data_undefined = false; - // Put call_data in place. - if (isolate->heap()->InNewSpace(*call_data_obj)) { - __ LoadObject(call_data, api_call_info); - __ Ldr(call_data, FieldMemOperand(call_data, CallHandlerInfo::kDataOffset)); - } else if (call_data_obj->IsUndefined()) { - call_data_undefined = true; - __ LoadRoot(call_data, Heap::kUndefinedValueRootIndex); - } else { - __ LoadObject(call_data, call_data_obj); - } - - // Put api_function_address in place. - Address function_address = v8::ToCData<Address>(api_call_info->callback()); - ApiFunction fun(function_address); - ExternalReference ref = ExternalReference(&fun, - ExternalReference::DIRECT_API_CALL, - masm->isolate()); - __ Mov(api_function_address, ref); - - // Jump to stub. - CallApiFunctionStub stub(isolate, is_store, call_data_undefined, argc); - __ TailCallStub(&stub); -} - - -void StubCompiler::GenerateTailCall(MacroAssembler* masm, Handle<Code> code) { - __ Jump(code, RelocInfo::CODE_TARGET); + StoreFieldStub stub(isolate(), lookup->GetFieldIndex(), + lookup->representation()); + GenerateTailCall(masm(), stub.GetCode()); } -#undef __ -#define __ ACCESS_MASM(masm()) - - -Register StubCompiler::CheckPrototypes(Handle<HeapType> type, - Register object_reg, - Handle<JSObject> holder, - Register holder_reg, - Register scratch1, - Register scratch2, - Handle<Name> name, - Label* miss, - PrototypeCheckType check) { - Handle<Map> receiver_map(IC::TypeToMap(*type, isolate())); +Register PropertyHandlerCompiler::CheckPrototypes( + Register object_reg, Register holder_reg, Register scratch1, + Register scratch2, Handle<Name> name, Label* miss, + PrototypeCheckType check) { + Handle<Map> receiver_map(IC::TypeToMap(*type(), isolate())); // object_reg and holder_reg registers can alias. - ASSERT(!AreAliased(object_reg, scratch1, scratch2)); - ASSERT(!AreAliased(holder_reg, scratch1, scratch2)); + DCHECK(!AreAliased(object_reg, scratch1, scratch2)); + DCHECK(!AreAliased(holder_reg, scratch1, scratch2)); // Keep track of the current object in register reg. 
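The while (true) comparison chain that GenerateStoreField emits above is easy to misread: every candidate map except the last branches to do_store on a match, and only the last candidate branches to the miss label on a mismatch. The same control shape in plain C++, with a hypothetical MapId standing in for heap Map handles:

#include <cstddef>
#include <vector>

using MapId = int;

// True corresponds to reaching do_store, false to jumping to miss_label.
bool MapWhitelistCheck(MapId value_map, const std::vector<MapId>& allowed) {
  for (std::size_t i = 0; i < allowed.size(); ++i) {
    if (value_map == allowed[i]) return true;   // __ B(eq, &do_store)
    if (i + 1 == allowed.size()) return false;  // __ B(ne, miss_label)
  }
  return false;  // empty whitelist: nothing can match
}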
Register reg = object_reg; int depth = 0; Handle<JSObject> current = Handle<JSObject>::null(); - if (type->IsConstant()) { - current = Handle<JSObject>::cast(type->AsConstant()->Value()); + if (type()->IsConstant()) { + current = Handle<JSObject>::cast(type()->AsConstant()->Value()); } Handle<JSObject> prototype = Handle<JSObject>::null(); Handle<Map> current_map = receiver_map; - Handle<Map> holder_map(holder->map()); + Handle<Map> holder_map(holder()->map()); // Traverse the prototype chain and check the maps in the prototype chain for // fast and global objects or do negative lookup for normal objects. while (!current_map.is_identical_to(holder_map)) { @@ -809,18 +584,18 @@ Register StubCompiler::CheckPrototypes(Handle<HeapType> type, // Only global objects and objects that do not require access // checks are allowed in stubs. - ASSERT(current_map->IsJSGlobalProxyMap() || + DCHECK(current_map->IsJSGlobalProxyMap() || !current_map->is_access_check_needed()); prototype = handle(JSObject::cast(current_map->prototype())); if (current_map->is_dictionary_map() && - !current_map->IsJSGlobalObjectMap() && - !current_map->IsJSGlobalProxyMap()) { + !current_map->IsJSGlobalObjectMap()) { + DCHECK(!current_map->IsJSGlobalProxyMap()); // Proxy maps are fast. if (!name->IsUniqueName()) { - ASSERT(name->IsString()); + DCHECK(name->IsString()); name = factory()->InternalizeString(Handle<String>::cast(name)); } - ASSERT(current.is_null() || + DCHECK(current.is_null() || (current->property_dictionary()->FindEntry(name) == NameDictionary::kNotFound)); @@ -831,13 +606,14 @@ Register StubCompiler::CheckPrototypes(Handle<HeapType> type, reg = holder_reg; // From now on the object will be in holder_reg. __ Ldr(reg, FieldMemOperand(scratch1, Map::kPrototypeOffset)); } else { - bool need_map = (depth != 1 || check == CHECK_ALL_MAPS) || - heap()->InNewSpace(*prototype); - Register map_reg = NoReg; - if (need_map) { - map_reg = scratch1; - __ Ldr(map_reg, FieldMemOperand(reg, HeapObject::kMapOffset)); - } + // Two possible reasons for loading the prototype from the map: + // (1) Can't store references to new space in code. + // (2) Handler is shared for all receivers with the same prototype + // map (but not necessarily the same prototype instance). + bool load_prototype_from_map = + heap()->InNewSpace(*prototype) || depth == 1; + Register map_reg = scratch1; + __ Ldr(map_reg, FieldMemOperand(reg, HeapObject::kMapOffset)); if (depth != 1 || check == CHECK_ALL_MAPS) { __ CheckMap(map_reg, current_map, miss, DONT_DO_SMI_CHECK); @@ -846,6 +622,9 @@ Register StubCompiler::CheckPrototypes(Handle<HeapType> type, // Check access rights to the global object. This has to happen after // the map check so that we know that the object is actually a global // object. + // This allows us to install generated handlers for accesses to the + // global proxy (as opposed to using slow ICs). See corresponding code + // in LookupForRead(). if (current_map->IsJSGlobalProxyMap()) { UseScratchRegisterScope temps(masm()); __ CheckAccessGlobalProxy(reg, scratch2, temps.AcquireX(), miss); @@ -857,12 +636,9 @@ Register StubCompiler::CheckPrototypes(Handle<HeapType> type, reg = holder_reg; // From now on the object will be in holder_reg. - if (heap()->InNewSpace(*prototype)) { - // The prototype is in new space; we cannot store a reference to it - // in the code. Load it from the map. + if (load_prototype_from_map) { __ Ldr(reg, FieldMemOperand(map_reg, Map::kPrototypeOffset)); } else { - // The prototype is in old space; load it directly. 
__ Mov(reg, Operand(prototype)); } } @@ -882,7 +658,7 @@ Register StubCompiler::CheckPrototypes(Handle<HeapType> type, } // Perform security check for access to the global object. - ASSERT(current_map->IsJSGlobalProxyMap() || + DCHECK(current_map->IsJSGlobalProxyMap() || !current_map->is_access_check_needed()); if (current_map->IsJSGlobalProxyMap()) { __ CheckAccessGlobalProxy(reg, scratch1, scratch2, miss); @@ -893,7 +669,7 @@ Register StubCompiler::CheckPrototypes(Handle<HeapType> type, } -void LoadStubCompiler::HandlerFrontendFooter(Handle<Name> name, Label* miss) { +void NamedLoadHandlerCompiler::FrontendFooter(Handle<Name> name, Label* miss) { if (!miss->is_unused()) { Label success; __ B(&success); @@ -906,12 +682,12 @@ void LoadStubCompiler::HandlerFrontendFooter(Handle<Name> name, Label* miss) { } -void StoreStubCompiler::HandlerFrontendFooter(Handle<Name> name, Label* miss) { +void NamedStoreHandlerCompiler::FrontendFooter(Handle<Name> name, Label* miss) { if (!miss->is_unused()) { Label success; __ B(&success); - GenerateRestoreName(masm(), miss, name); + GenerateRestoreName(miss, name); TailCallBuiltin(masm(), MissBuiltin(kind())); __ Bind(&success); @@ -919,84 +695,16 @@ void StoreStubCompiler::HandlerFrontendFooter(Handle<Name> name, Label* miss) { } -Register LoadStubCompiler::CallbackHandlerFrontend(Handle<HeapType> type, - Register object_reg, - Handle<JSObject> holder, - Handle<Name> name, - Handle<Object> callback) { - Label miss; - - Register reg = HandlerFrontendHeader(type, object_reg, holder, name, &miss); - // HandlerFrontendHeader can return its result into scratch1() so do not - // use it. - Register scratch2 = this->scratch2(); - Register scratch3 = this->scratch3(); - Register dictionary = this->scratch4(); - ASSERT(!AreAliased(reg, scratch2, scratch3, dictionary)); - - if (!holder->HasFastProperties() && !holder->IsJSGlobalObject()) { - // Load the properties dictionary. - __ Ldr(dictionary, FieldMemOperand(reg, JSObject::kPropertiesOffset)); - - // Probe the dictionary. - Label probe_done; - NameDictionaryLookupStub::GeneratePositiveLookup(masm(), - &miss, - &probe_done, - dictionary, - this->name(), - scratch2, - scratch3); - __ Bind(&probe_done); - - // If probing finds an entry in the dictionary, scratch3 contains the - // pointer into the dictionary. Check that the value is the callback. - Register pointer = scratch3; - const int kElementsStartOffset = NameDictionary::kHeaderSize + - NameDictionary::kElementsStartIndex * kPointerSize; - const int kValueOffset = kElementsStartOffset + kPointerSize; - __ Ldr(scratch2, FieldMemOperand(pointer, kValueOffset)); - __ Cmp(scratch2, Operand(callback)); - __ B(ne, &miss); - } - - HandlerFrontendFooter(name, &miss); - return reg; -} - - -void LoadStubCompiler::GenerateLoadField(Register reg, - Handle<JSObject> holder, - PropertyIndex field, - Representation representation) { - __ Mov(receiver(), reg); - if (kind() == Code::LOAD_IC) { - LoadFieldStub stub(isolate(), - field.is_inobject(holder), - field.translate(holder), - representation); - GenerateTailCall(masm(), stub.GetCode()); - } else { - KeyedLoadFieldStub stub(isolate(), - field.is_inobject(holder), - field.translate(holder), - representation); - GenerateTailCall(masm(), stub.GetCode()); - } -} - - -void LoadStubCompiler::GenerateLoadConstant(Handle<Object> value) { +void NamedLoadHandlerCompiler::GenerateLoadConstant(Handle<Object> value) { // Return the constant value. 
__ LoadObject(x0, value); __ Ret(); } -void LoadStubCompiler::GenerateLoadCallback( - Register reg, - Handle<ExecutableAccessorInfo> callback) { - ASSERT(!AreAliased(scratch2(), scratch3(), scratch4(), reg)); +void NamedLoadHandlerCompiler::GenerateLoadCallback( + Register reg, Handle<ExecutableAccessorInfo> callback) { + DCHECK(!AreAliased(scratch2(), scratch3(), scratch4(), reg)); // Build ExecutableAccessorInfo::args_ list on the stack and push property // name below the exit frame to make GC aware of them and store pointers to @@ -1048,16 +756,13 @@ void LoadStubCompiler::GenerateLoadCallback( } -void LoadStubCompiler::GenerateLoadInterceptor( - Register holder_reg, - Handle<Object> object, - Handle<JSObject> interceptor_holder, - LookupResult* lookup, - Handle<Name> name) { - ASSERT(!AreAliased(receiver(), this->name(), +void NamedLoadHandlerCompiler::GenerateLoadInterceptor(Register holder_reg, + LookupResult* lookup, + Handle<Name> name) { + DCHECK(!AreAliased(receiver(), this->name(), scratch1(), scratch2(), scratch3())); - ASSERT(interceptor_holder->HasNamedInterceptor()); - ASSERT(!interceptor_holder->GetNamedInterceptor()->getter()->IsUndefined()); + DCHECK(holder()->HasNamedInterceptor()); + DCHECK(!holder()->GetNamedInterceptor()->getter()->IsUndefined()); // So far the most popular follow ups for interceptor loads are FIELD // and CALLBACKS, so inline only them, other cases may be added later. @@ -1067,10 +772,12 @@ void LoadStubCompiler::GenerateLoadInterceptor( compile_followup_inline = true; } else if (lookup->type() == CALLBACKS && lookup->GetCallbackObject()->IsExecutableAccessorInfo()) { - ExecutableAccessorInfo* callback = - ExecutableAccessorInfo::cast(lookup->GetCallbackObject()); - compile_followup_inline = callback->getter() != NULL && - callback->IsCompatibleReceiver(*object); + Handle<ExecutableAccessorInfo> callback( + ExecutableAccessorInfo::cast(lookup->GetCallbackObject())); + compile_followup_inline = + callback->getter() != NULL && + ExecutableAccessorInfo::IsCompatibleReceiverType(isolate(), callback, + type()); } } @@ -1078,13 +785,13 @@ void LoadStubCompiler::GenerateLoadInterceptor( // Compile the interceptor call, followed by inline code to load the // property from further up the prototype chain if the call fails. // Check that the maps haven't changed. - ASSERT(holder_reg.is(receiver()) || holder_reg.is(scratch1())); + DCHECK(holder_reg.is(receiver()) || holder_reg.is(scratch1())); // Preserve the receiver register explicitly whenever it is different from // the holder and it is needed should the interceptor return without any // result. The CALLBACKS case needs the receiver to be passed into C++ code, // the FIELD case might cause a miss during the prototype check. - bool must_perfrom_prototype_check = *interceptor_holder != lookup->holder(); + bool must_perfrom_prototype_check = *holder() != lookup->holder(); bool must_preserve_receiver_reg = !receiver().Is(holder_reg) && (lookup->type() == CALLBACKS || must_perfrom_prototype_check); @@ -1101,7 +808,7 @@ void LoadStubCompiler::GenerateLoadInterceptor( // interceptor's holder has been compiled before (see a caller // of this method.) CompileCallLoadPropertyWithInterceptor( - masm(), receiver(), holder_reg, this->name(), interceptor_holder, + masm(), receiver(), holder_reg, this->name(), holder(), IC::kLoadPropertyWithInterceptorOnly); // Check if interceptor provided a value for property. If it's @@ -1121,36 +828,34 @@ void LoadStubCompiler::GenerateLoadInterceptor( } // Leave the internal frame. 
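A conceptual rendering of the interceptor fast path compiled above: the interceptor callback gets the first chance at the property, and only when it declines does the handler fall through to the regular post-interceptor lookup. The types here are simplified stand-ins, not V8's embedder API:

#include <functional>
#include <optional>
#include <string>

using Value = int;
using NamedGetter = std::function<std::optional<Value>(const std::string&)>;

std::optional<Value> LoadWithInterceptor(
    const NamedGetter& interceptor, const std::string& name,
    const std::function<std::optional<Value>()>& post_interceptor_lookup) {
  if (auto result = interceptor(name)) {
    return result;  // interceptor produced a value: return immediately
  }
  return post_interceptor_lookup();  // otherwise continue the normal lookup
}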
} - GenerateLoadPostInterceptor(holder_reg, interceptor_holder, name, lookup); + GenerateLoadPostInterceptor(holder_reg, name, lookup); } else { // !compile_followup_inline // Call the runtime system to load the interceptor. // Check that the maps haven't changed. - PushInterceptorArguments( - masm(), receiver(), holder_reg, this->name(), interceptor_holder); + PushInterceptorArguments(masm(), receiver(), holder_reg, this->name(), + holder()); ExternalReference ref = - ExternalReference(IC_Utility(IC::kLoadPropertyWithInterceptorForLoad), + ExternalReference(IC_Utility(IC::kLoadPropertyWithInterceptor), isolate()); - __ TailCallExternalReference(ref, StubCache::kInterceptorArgsLength, 1); + __ TailCallExternalReference( + ref, NamedLoadHandlerCompiler::kInterceptorArgsLength, 1); } } -Handle<Code> StoreStubCompiler::CompileStoreCallback( - Handle<JSObject> object, - Handle<JSObject> holder, - Handle<Name> name, +Handle<Code> NamedStoreHandlerCompiler::CompileStoreCallback( + Handle<JSObject> object, Handle<Name> name, Handle<ExecutableAccessorInfo> callback) { - ASM_LOCATION("StoreStubCompiler::CompileStoreCallback"); - Register holder_reg = HandlerFrontend( - IC::CurrentTypeOf(object, isolate()), receiver(), holder, name); + ASM_LOCATION("NamedStoreHandlerCompiler::CompileStoreCallback"); + Register holder_reg = Frontend(receiver(), name); // Stub never generated for non-global objects that require access checks. - ASSERT(holder->IsJSGlobalProxy() || !holder->IsAccessCheckNeeded()); + DCHECK(holder()->IsJSGlobalProxy() || !holder()->IsAccessCheckNeeded()); // receiver() and holder_reg can alias. - ASSERT(!AreAliased(receiver(), scratch1(), scratch2(), value())); - ASSERT(!AreAliased(holder_reg, scratch1(), scratch2(), value())); + DCHECK(!AreAliased(receiver(), scratch1(), scratch2(), value())); + DCHECK(!AreAliased(holder_reg, scratch1(), scratch2(), value())); __ Mov(scratch1(), Operand(callback)); __ Mov(scratch2(), Operand(name)); __ Push(receiver(), holder_reg, scratch1(), scratch2(), value()); @@ -1169,10 +874,8 @@ Handle<Code> StoreStubCompiler::CompileStoreCallback( #define __ ACCESS_MASM(masm) -void StoreStubCompiler::GenerateStoreViaSetter( - MacroAssembler* masm, - Handle<HeapType> type, - Register receiver, +void NamedStoreHandlerCompiler::GenerateStoreViaSetter( + MacroAssembler* masm, Handle<HeapType> type, Register receiver, Handle<JSFunction> setter) { // ----------- S t a t e ------------- // -- lr : return address @@ -1190,8 +893,7 @@ void StoreStubCompiler::GenerateStoreViaSetter( if (IC::TypeToMap(*type, masm->isolate())->IsJSGlobalObjectMap()) { // Swap in the global receiver. __ Ldr(receiver, - FieldMemOperand( - receiver, JSGlobalObject::kGlobalReceiverOffset)); + FieldMemOperand(receiver, JSGlobalObject::kGlobalProxyOffset)); } __ Push(receiver, value()); ParameterCount actual(1); @@ -1218,18 +920,17 @@ void StoreStubCompiler::GenerateStoreViaSetter( #define __ ACCESS_MASM(masm()) -Handle<Code> StoreStubCompiler::CompileStoreInterceptor( - Handle<JSObject> object, +Handle<Code> NamedStoreHandlerCompiler::CompileStoreInterceptor( Handle<Name> name) { Label miss; - ASM_LOCATION("StoreStubCompiler::CompileStoreInterceptor"); + ASM_LOCATION("NamedStoreHandlerCompiler::CompileStoreInterceptor"); __ Push(receiver(), this->name(), value()); // Do tail-call to the runtime system. 
- ExternalReference store_ic_property = - ExternalReference(IC_Utility(IC::kStoreInterceptorProperty), isolate()); + ExternalReference store_ic_property = ExternalReference( + IC_Utility(IC::kStorePropertyWithInterceptor), isolate()); __ TailCallExternalReference(store_ic_property, 3, 1); // Return the generated code. @@ -1237,67 +938,41 @@ Handle<Code> StoreStubCompiler::CompileStoreInterceptor( } -Handle<Code> LoadStubCompiler::CompileLoadNonexistent(Handle<HeapType> type, - Handle<JSObject> last, - Handle<Name> name) { - NonexistentHandlerFrontend(type, last, name); - - // Return undefined if maps of the full prototype chain are still the - // same and no global property with this name contains a value. - __ LoadRoot(x0, Heap::kUndefinedValueRootIndex); - __ Ret(); - - // Return the generated code. - return GetCode(kind(), Code::FAST, name); -} - - // TODO(all): The so-called scratch registers are significant in some cases. For -// example, KeyedStoreStubCompiler::registers()[3] (x3) is actually used for -// KeyedStoreCompiler::transition_map(). We should verify which registers are -// actually scratch registers, and which are important. For now, we use the same -// assignments as ARM to remain on the safe side. +// example, PropertyAccessCompiler::keyed_store_calling_convention()[3] (x3) is +// actually used for KeyedStoreCompiler::transition_map(). We should verify which +// registers are actually scratch registers, and which are important. For now, +// we use the same assignments as ARM to remain on the safe side. -Register* LoadStubCompiler::registers() { +Register* PropertyAccessCompiler::load_calling_convention() { // receiver, name, scratch1, scratch2, scratch3, scratch4. - static Register registers[] = { x0, x2, x3, x1, x4, x5 }; - return registers; -} - - -Register* KeyedLoadStubCompiler::registers() { - // receiver, name/key, scratch1, scratch2, scratch3, scratch4. - static Register registers[] = { x1, x0, x2, x3, x4, x5 }; + Register receiver = LoadIC::ReceiverRegister(); + Register name = LoadIC::NameRegister(); + static Register registers[] = { receiver, name, x3, x0, x4, x5 }; return registers; } -Register StoreStubCompiler::value() { - return x0; -} - - -Register* StoreStubCompiler::registers() { +Register* PropertyAccessCompiler::store_calling_convention() { // receiver, value, scratch1, scratch2, scratch3. - static Register registers[] = { x1, x2, x3, x4, x5 }; + Register receiver = StoreIC::ReceiverRegister(); + Register name = StoreIC::NameRegister(); + DCHECK(x3.is(KeyedStoreIC::MapRegister())); + static Register registers[] = { receiver, name, x3, x4, x5 }; return registers; } -Register* KeyedStoreStubCompiler::registers() { - // receiver, name, scratch1, scratch2, scratch3. - static Register registers[] = { x2, x1, x3, x4, x5 }; - return registers; -} +Register NamedStoreHandlerCompiler::value() { return StoreIC::ValueRegister(); } #undef __ #define __ ACCESS_MASM(masm) -void LoadStubCompiler::GenerateLoadViaGetter(MacroAssembler* masm, - Handle<HeapType> type, - Register receiver, - Handle<JSFunction> getter) { +void NamedLoadHandlerCompiler::GenerateLoadViaGetter( + MacroAssembler* masm, Handle<HeapType> type, Register receiver, + Handle<JSFunction> getter) { { FrameScope scope(masm, StackFrame::INTERNAL); @@ -1306,8 +981,7 @@ void LoadStubCompiler::GenerateLoadViaGetter(MacroAssembler* masm, if (IC::TypeToMap(*type, masm->isolate())->IsJSGlobalObjectMap()) { // Swap in the global receiver.
__ Ldr(receiver, - FieldMemOperand( - receiver, JSGlobalObject::kGlobalReceiverOffset)); + FieldMemOperand(receiver, JSGlobalObject::kGlobalProxyOffset)); } __ Push(receiver); ParameterCount actual(0); @@ -1331,54 +1005,58 @@ void LoadStubCompiler::GenerateLoadViaGetter(MacroAssembler* masm, #define __ ACCESS_MASM(masm()) -Handle<Code> LoadStubCompiler::CompileLoadGlobal( - Handle<HeapType> type, - Handle<GlobalObject> global, - Handle<PropertyCell> cell, - Handle<Name> name, - bool is_dont_delete) { +Handle<Code> NamedLoadHandlerCompiler::CompileLoadGlobal( + Handle<PropertyCell> cell, Handle<Name> name, bool is_configurable) { Label miss; - HandlerFrontendHeader(type, receiver(), global, name, &miss); + FrontendHeader(receiver(), name, &miss); // Get the value from the cell. - __ Mov(x3, Operand(cell)); - __ Ldr(x4, FieldMemOperand(x3, Cell::kValueOffset)); + Register result = StoreIC::ValueRegister(); + __ Mov(result, Operand(cell)); + __ Ldr(result, FieldMemOperand(result, Cell::kValueOffset)); // Check for deleted property if property can actually be deleted. - if (!is_dont_delete) { - __ JumpIfRoot(x4, Heap::kTheHoleValueRootIndex, &miss); + if (is_configurable) { + __ JumpIfRoot(result, Heap::kTheHoleValueRootIndex, &miss); } Counters* counters = isolate()->counters(); __ IncrementCounter(counters->named_load_global_stub(), 1, x1, x3); - __ Mov(x0, x4); __ Ret(); - HandlerFrontendFooter(name, &miss); + FrontendFooter(name, &miss); // Return the generated code. return GetCode(kind(), Code::NORMAL, name); } -Handle<Code> BaseLoadStoreStubCompiler::CompilePolymorphicIC( - TypeHandleList* types, - CodeHandleList* handlers, - Handle<Name> name, - Code::StubType type, - IcCheckType check) { +Handle<Code> PropertyICCompiler::CompilePolymorphic(TypeHandleList* types, + CodeHandleList* handlers, + Handle<Name> name, + Code::StubType type, + IcCheckType check) { Label miss; if (check == PROPERTY && (kind() == Code::KEYED_LOAD_IC || kind() == Code::KEYED_STORE_IC)) { - __ CompareAndBranch(this->name(), Operand(name), ne, &miss); + // In case we are compiling an IC for dictionary loads and stores, just + // check whether the name is unique. + if (name.is_identical_to(isolate()->factory()->normal_ic_symbol())) { + __ JumpIfNotUniqueName(this->name(), &miss); + } else { + __ CompareAndBranch(this->name(), Operand(name), ne, &miss); + } } Label number_case; Label* smi_target = IncludesNumberType(types) ? &number_case : &miss; __ JumpIfSmi(receiver(), smi_target); + // Polymorphic keyed stores may use the map register Register map_reg = scratch1(); + DCHECK(kind() != Code::KEYED_STORE_IC || + map_reg.is(KeyedStoreIC::MapRegister())); __ Ldr(map_reg, FieldMemOperand(receiver(), HeapObject::kMapOffset)); int receiver_count = types->length(); int number_of_handled_maps = 0; @@ -1391,14 +1069,14 @@ Handle<Code> BaseLoadStoreStubCompiler::CompilePolymorphicIC( __ Cmp(map_reg, Operand(map)); __ B(ne, &try_next); if (type->Is(HeapType::Number())) { - ASSERT(!number_case.is_unused()); + DCHECK(!number_case.is_unused()); __ Bind(&number_case); } __ Jump(handlers->at(current), RelocInfo::CODE_TARGET); __ Bind(&try_next); } } - ASSERT(number_of_handled_maps != 0); + DCHECK(number_of_handled_maps != 0); __ Bind(&miss); TailCallBuiltin(masm(), MissBuiltin(kind())); @@ -1406,28 +1084,16 @@ Handle<Code> BaseLoadStoreStubCompiler::CompilePolymorphicIC( // Return the generated code. InlineCacheState state = (number_of_handled_maps > 1) ? 
POLYMORPHIC : MONOMORPHIC; - return GetICCode(kind(), type, name, state); + return GetCode(kind(), type, name, state); } -void StoreStubCompiler::GenerateStoreArrayLength() { - // Prepare tail call to StoreIC_ArrayLength. - __ Push(receiver(), value()); - - ExternalReference ref = - ExternalReference(IC_Utility(IC::kStoreIC_ArrayLength), - masm()->isolate()); - __ TailCallExternalReference(ref, 2, 1); -} - - -Handle<Code> KeyedStoreStubCompiler::CompileStorePolymorphic( - MapHandleList* receiver_maps, - CodeHandleList* handler_stubs, +Handle<Code> PropertyICCompiler::CompileKeyedStorePolymorphic( + MapHandleList* receiver_maps, CodeHandleList* handler_stubs, MapHandleList* transitioned_maps) { Label miss; - ASM_LOCATION("KeyedStoreStubCompiler::CompileStorePolymorphic"); + ASM_LOCATION("PropertyICCompiler::CompileStorePolymorphic"); __ JumpIfSmi(receiver(), &miss); @@ -1450,35 +1116,32 @@ Handle<Code> KeyedStoreStubCompiler::CompileStorePolymorphic( __ Bind(&miss); TailCallBuiltin(masm(), MissBuiltin(kind())); - return GetICCode( - kind(), Code::NORMAL, factory()->empty_string(), POLYMORPHIC); + return GetCode(kind(), Code::NORMAL, factory()->empty_string(), POLYMORPHIC); } #undef __ #define __ ACCESS_MASM(masm) -void KeyedLoadStubCompiler::GenerateLoadDictionaryElement( +void ElementHandlerCompiler::GenerateLoadDictionaryElement( MacroAssembler* masm) { - // ---------- S t a t e -------------- - // -- lr : return address - // -- x0 : key - // -- x1 : receiver - // ----------------------------------- + // The return address is in lr. Label slow, miss; Register result = x0; - Register key = x0; - Register receiver = x1; + Register key = LoadIC::NameRegister(); + Register receiver = LoadIC::ReceiverRegister(); + DCHECK(receiver.is(x1)); + DCHECK(key.is(x2)); __ JumpIfNotSmi(key, &miss); __ Ldr(x4, FieldMemOperand(receiver, JSObject::kElementsOffset)); - __ LoadFromNumberDictionary(&slow, x4, key, result, x2, x3, x5, x6); + __ LoadFromNumberDictionary(&slow, x4, key, result, x7, x3, x5, x6); __ Ret(); __ Bind(&slow); __ IncrementCounter( - masm->isolate()->counters()->keyed_load_external_array_slow(), 1, x2, x3); + masm->isolate()->counters()->keyed_load_external_array_slow(), 1, x4, x3); TailCallBuiltin(masm, Builtins::kKeyedLoadIC_Slow); // Miss case, call the runtime. diff --git a/deps/v8/src/arm64/utils-arm64.cc b/deps/v8/src/arm64/utils-arm64.cc index 53a2957e916..dbfb87638bc 100644 --- a/deps/v8/src/arm64/utils-arm64.cc +++ b/deps/v8/src/arm64/utils-arm64.cc @@ -4,7 +4,7 @@ #if V8_TARGET_ARCH_ARM64 -#include "arm64/utils-arm64.h" +#include "src/arm64/utils-arm64.h" namespace v8 { @@ -15,7 +15,7 @@ namespace internal { int CountLeadingZeros(uint64_t value, int width) { // TODO(jbramley): Optimize this for ARM64 hosts. - ASSERT((width == 32) || (width == 64)); + DCHECK((width == 32) || (width == 64)); int count = 0; uint64_t bit_test = 1UL << (width - 1); while ((count < width) && ((bit_test & value) == 0)) { @@ -28,7 +28,7 @@ int CountLeadingZeros(uint64_t value, int width) { int CountLeadingSignBits(int64_t value, int width) { // TODO(jbramley): Optimize this for ARM64 hosts. - ASSERT((width == 32) || (width == 64)); + DCHECK((width == 32) || (width == 64)); if (value >= 0) { return CountLeadingZeros(value, width) - 1; } else { @@ -39,7 +39,7 @@ int CountLeadingSignBits(int64_t value, int width) { int CountTrailingZeros(uint64_t value, int width) { // TODO(jbramley): Optimize this for ARM64 hosts. 
- ASSERT((width == 32) || (width == 64)); + DCHECK((width == 32) || (width == 64)); int count = 0; while ((count < width) && (((value >> count) & 1) == 0)) { count++; @@ -51,7 +51,7 @@ int CountTrailingZeros(uint64_t value, int width) { int CountSetBits(uint64_t value, int width) { // TODO(jbramley): Would it be useful to allow other widths? The // implementation already supports them. - ASSERT((width == 32) || (width == 64)); + DCHECK((width == 32) || (width == 64)); // Mask out unused bits to ensure that they are not counted. value &= (0xffffffffffffffffUL >> (64-width)); @@ -78,8 +78,13 @@ int CountSetBits(uint64_t value, int width) { } +uint64_t LargestPowerOf2Divisor(uint64_t value) { + return value & -value; +} + + int MaskToBit(uint64_t mask) { - ASSERT(CountSetBits(mask, 64) == 1); + DCHECK(CountSetBits(mask, 64) == 1); return CountTrailingZeros(mask, 64); } diff --git a/deps/v8/src/arm64/utils-arm64.h b/deps/v8/src/arm64/utils-arm64.h index c739e50f269..c22ed9aed79 100644 --- a/deps/v8/src/arm64/utils-arm64.h +++ b/deps/v8/src/arm64/utils-arm64.h @@ -6,8 +6,9 @@ #define V8_ARM64_UTILS_ARM64_H_ #include <cmath> -#include "v8.h" -#include "arm64/constants-arm64.h" +#include "src/v8.h" + +#include "src/arm64/constants-arm64.h" #define REGISTER_CODE_LIST(R) \ R(0) R(1) R(2) R(3) R(4) R(5) R(6) R(7) \ @@ -56,6 +57,7 @@ int CountLeadingZeros(uint64_t value, int width); int CountLeadingSignBits(int64_t value, int width); int CountTrailingZeros(uint64_t value, int width); int CountSetBits(uint64_t value, int width); +uint64_t LargestPowerOf2Divisor(uint64_t value); int MaskToBit(uint64_t mask); @@ -86,13 +88,13 @@ inline bool IsQuietNaN(T num) { // Convert the NaN in 'num' to a quiet NaN. inline double ToQuietNaN(double num) { - ASSERT(isnan(num)); + DCHECK(std::isnan(num)); return rawbits_to_double(double_to_rawbits(num) | kDQuietNanMask); } inline float ToQuietNaN(float num) { - ASSERT(isnan(num)); + DCHECK(std::isnan(num)); return rawbits_to_float(float_to_rawbits(num) | kSQuietNanMask); } diff --git a/deps/v8/src/array-iterator.js b/deps/v8/src/array-iterator.js index 10116b1d10b..f04d6c974a4 100644 --- a/deps/v8/src/array-iterator.js +++ b/deps/v8/src/array-iterator.js @@ -1,42 +1,28 @@ // Copyright 2013 the V8 project authors. All rights reserved. -// Redistribution and use in source and binary forms, with or without -// modification, are permitted provided that the following conditions are -// met: -// -// * Redistributions of source code must retain the above copyright -// notice, this list of conditions and the following disclaimer. -// * Redistributions in binary form must reproduce the above -// copyright notice, this list of conditions and the following -// disclaimer in the documentation and/or other materials provided -// with the distribution. -// * Neither the name of Google Inc. nor the names of its -// contributors may be used to endorse or promote products derived -// from this software without specific prior written permission. -// -// THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS -// "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT -// LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR -// A PARTICULAR PURPOSE ARE DISCLAIMED.
IN NO EVENT SHALL THE COPYRIGHT -// OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, -// SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT -// LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, -// DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY -// THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT -// (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE -// OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. +// Use of this source code is governed by a BSD-style license that can be +// found in the LICENSE file. 'use strict'; + // This file relies on the fact that the following declaration has been made // in runtime.js: // var $Array = global.Array; + var arrayIteratorObjectSymbol = GLOBAL_PRIVATE("ArrayIterator#object"); var arrayIteratorNextIndexSymbol = GLOBAL_PRIVATE("ArrayIterator#next"); var arrayIterationKindSymbol = GLOBAL_PRIVATE("ArrayIterator#kind"); + function ArrayIterator() {} + +// TODO(wingo): Update section numbers when ES6 has stabilized. The +// section numbers below are already out of date as of the May 2014 +// draft. + + // 15.4.5.1 CreateArrayIterator Abstract Operation function CreateArrayIterator(array, kind) { var object = ToObject(array); @@ -47,20 +33,33 @@ function CreateArrayIterator(array, kind) { return iterator; } + // 15.19.4.3.4 CreateItrResultObject function CreateIteratorResultObject(value, done) { return {value: value, done: done}; } + +// 22.1.5.2.2 %ArrayIteratorPrototype%[@@iterator] +function ArrayIteratorIterator() { + return this; +} + + // 15.4.5.2.2 ArrayIterator.prototype.next( ) function ArrayIteratorNext() { var iterator = ToObject(this); - var array = GET_PRIVATE(iterator, arrayIteratorObjectSymbol); - if (!array) { + + if (!HAS_PRIVATE(iterator, arrayIteratorObjectSymbol)) { throw MakeTypeError('incompatible_method_receiver', ['Array Iterator.prototype.next']); } + var array = GET_PRIVATE(iterator, arrayIteratorObjectSymbol); + if (IS_UNDEFINED(array)) { + return CreateIteratorResultObject(UNDEFINED, true); + } + var index = GET_PRIVATE(iterator, arrayIteratorNextIndexSymbol); var itemKind = GET_PRIVATE(iterator, arrayIterationKindSymbol); var length = TO_UINT32(array.length); @@ -68,46 +67,55 @@ function ArrayIteratorNext() { // "sparse" is never used. 
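Returning briefly to the utils-arm64.cc hunk above: the new LargestPowerOf2Divisor helper is the classic two's-complement lowest-set-bit trick, since negating a value flips every bit above its lowest set bit. A self-contained sketch (the asserted values are just illustrations):

#include <cassert>
#include <cstdint>

uint64_t LargestPowerOf2Divisor(uint64_t value) {
  // 0 - value is the well-defined unsigned spelling of -value.
  return value & (0 - value);
}

int main() {
  assert(LargestPowerOf2Divisor(12) == 4);   // 12 = 0b1100
  assert(LargestPowerOf2Divisor(80) == 16);  // 80 = 0b1010000
  assert(LargestPowerOf2Divisor(1) == 1);
  return 0;
}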
if (index >= length) { - SET_PRIVATE(iterator, arrayIteratorNextIndexSymbol, INFINITY); + SET_PRIVATE(iterator, arrayIteratorObjectSymbol, UNDEFINED); return CreateIteratorResultObject(UNDEFINED, true); } SET_PRIVATE(iterator, arrayIteratorNextIndexSymbol, index + 1); - if (itemKind == ITERATOR_KIND_VALUES) + if (itemKind == ITERATOR_KIND_VALUES) { return CreateIteratorResultObject(array[index], false); + } - if (itemKind == ITERATOR_KIND_ENTRIES) + if (itemKind == ITERATOR_KIND_ENTRIES) { return CreateIteratorResultObject([index, array[index]], false); + } return CreateIteratorResultObject(index, false); } + function ArrayEntries() { return CreateArrayIterator(this, ITERATOR_KIND_ENTRIES); } + function ArrayValues() { return CreateArrayIterator(this, ITERATOR_KIND_VALUES); } + function ArrayKeys() { return CreateArrayIterator(this, ITERATOR_KIND_KEYS); } + function SetUpArrayIterator() { %CheckIsBootstrapping(); + %FunctionSetPrototype(ArrayIterator, new $Object()); %FunctionSetInstanceClassName(ArrayIterator, 'Array Iterator'); - %FunctionSetReadOnlyPrototype(ArrayIterator); InstallFunctions(ArrayIterator.prototype, DONT_ENUM, $Array( 'next', ArrayIteratorNext )); + %FunctionSetName(ArrayIteratorIterator, '[Symbol.iterator]'); + %AddNamedProperty(ArrayIterator.prototype, symbolIterator, + ArrayIteratorIterator, DONT_ENUM); } - SetUpArrayIterator(); + function ExtendArrayPrototype() { %CheckIsBootstrapping(); @@ -116,6 +124,34 @@ function ExtendArrayPrototype() { 'values', ArrayValues, 'keys', ArrayKeys )); -} + %AddNamedProperty($Array.prototype, symbolIterator, ArrayValues, DONT_ENUM); +} ExtendArrayPrototype(); + + +function ExtendTypedArrayPrototypes() { + %CheckIsBootstrapping(); + +macro TYPED_ARRAYS(FUNCTION) + FUNCTION(Uint8Array) + FUNCTION(Int8Array) + FUNCTION(Uint16Array) + FUNCTION(Int16Array) + FUNCTION(Uint32Array) + FUNCTION(Int32Array) + FUNCTION(Float32Array) + FUNCTION(Float64Array) + FUNCTION(Uint8ClampedArray) +endmacro + +macro EXTEND_TYPED_ARRAY(NAME) + %AddNamedProperty($NAME.prototype, 'entries', ArrayEntries, DONT_ENUM); + %AddNamedProperty($NAME.prototype, 'values', ArrayValues, DONT_ENUM); + %AddNamedProperty($NAME.prototype, 'keys', ArrayKeys, DONT_ENUM); + %AddNamedProperty($NAME.prototype, symbolIterator, ArrayValues, DONT_ENUM); +endmacro + + TYPED_ARRAYS(EXTEND_TYPED_ARRAY) +} +ExtendTypedArrayPrototypes(); diff --git a/deps/v8/src/array.js b/deps/v8/src/array.js index dcaf0f40080..cf99aceb699 100644 --- a/deps/v8/src/array.js +++ b/deps/v8/src/array.js @@ -45,7 +45,7 @@ function GetSortedArrayKeys(array, indices) { } -function SparseJoinWithSeparator(array, len, convert, separator) { +function SparseJoinWithSeparatorJS(array, len, convert, separator) { var keys = GetSortedArrayKeys(array, %GetArrayKeys(array, len)); var totalLength = 0; var elements = new InternalArray(keys.length * 2); @@ -86,11 +86,20 @@ function SparseJoin(array, len, convert) { } -function UseSparseVariant(object, length, is_array) { - return is_array && - length > 1000 && - (!%_IsSmi(length) || - %EstimateNumberOfElements(object) < (length >> 2)); +function UseSparseVariant(array, length, is_array, touched) { + // Only use the sparse variant on arrays that are likely to be sparse and the + // number of elements touched in the operation is relatively small compared to + // the overall size of the array. 
+ if (!is_array || length < 1000 || %IsObserved(array)) { + return false; + } + if (!%_IsSmi(length)) { + return true; + } + var elements_threshold = length >> 2; // At least 75% holes + var estimated_elements = %EstimateNumberOfElements(array); + return (estimated_elements < elements_threshold) && + (touched > estimated_elements * 4); } @@ -107,11 +116,12 @@ function Join(array, length, separator, convert) { // Attempt to convert the elements. try { - if (UseSparseVariant(array, length, is_array)) { + if (UseSparseVariant(array, length, is_array, length)) { + %NormalizeElements(array); if (separator.length == 0) { return SparseJoin(array, length, convert); } else { - return SparseJoinWithSeparator(array, length, convert, separator); + return SparseJoinWithSeparatorJS(array, length, convert, separator); } } @@ -271,7 +281,7 @@ function SmartMove(array, start_i, del_count, len, num_additional_args) { function SimpleSlice(array, start_i, del_count, len, deleted_elements) { for (var i = 0; i < del_count; i++) { var index = start_i + i; - // The spec could also be interpreted such that %HasLocalProperty + // The spec could also be interpreted such that %HasOwnProperty // would be the appropriate test. We follow KJS in consulting the // prototype. var current = array[index]; @@ -291,7 +301,7 @@ function SimpleMove(array, start_i, del_count, len, num_additional_args) { var from_index = i + del_count - 1; var to_index = i + num_additional_args - 1; // The spec could also be interpreted such that - // %HasLocalProperty would be the appropriate test. We follow + // %HasOwnProperty would be the appropriate test. We follow // KJS in consulting the prototype. var current = array[from_index]; if (!IS_UNDEFINED(current) || from_index in array) { @@ -305,7 +315,7 @@ function SimpleMove(array, start_i, del_count, len, num_additional_args) { var from_index = i + del_count; var to_index = i + num_additional_args; // The spec could also be interpreted such that - // %HasLocalProperty would be the appropriate test. We follow + // %HasOwnProperty would be the appropriate test. We follow // KJS in consulting the prototype. var current = array[from_index]; if (!IS_UNDEFINED(current) || from_index in array) { @@ -443,9 +453,7 @@ function ArrayPush() { var m = %_ArgumentsLength(); for (var i = 0; i < m; i++) { - // Use SetProperty rather than a direct keyed store to ensure that the store - // site doesn't become poisened with an elements transition KeyedStoreIC. - %SetProperty(array, i+n, %_Arguments(i), 0, kStrictMode); + array[i+n] = %_Arguments(i); } var new_length = n + m; @@ -457,7 +465,7 @@ function ArrayPush() { // Returns an array containing the array elements of the object followed // by the array elements of each argument in order. See ECMA-262, // section 15.4.4.7.
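The rewritten UseSparseVariant above concentrates the sparse-path policy into one predicate: big enough, unobserved, mostly holes, and touching far more slots than the array actually populates. A parameterized C++ mirror of the arithmetic (the %-prefixed runtime probes become plain arguments here, so this illustrates the logic only, not the real API):

#include <cstdint>

bool UseSparseVariantSketch(uint32_t length, bool is_array, bool is_observed,
                            bool length_is_smi, uint32_t estimated_elements,
                            uint32_t touched) {
  // Small or observed arrays never pay for the sparse machinery.
  if (!is_array || length < 1000 || is_observed) return false;
  // A length too large for a Smi is itself strong evidence of sparseness.
  if (!length_is_smi) return true;
  uint32_t elements_threshold = length >> 2;  // at least 75% holes
  return estimated_elements < elements_threshold &&
         touched > estimated_elements * 4;
}

The touched argument is what the ArraySplice hunk further down estimates: deleting 5 elements at start_i = 9990 of a 10,000-element array while inserting 2 yields changed_elements = 5 + (10000 - 9990 - 5) = 10, so the Smart* path is taken only for an overwhelmingly sparse array, whereas the same deletion at start_i = 10 yields 5 + 9985 = 9990 touched slots and favors it far more often.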
-function ArrayConcat(arg1) { // length == 1 +function ArrayConcatJS(arg1) { // length == 1 CHECK_OBJECT_COERCIBLE(this, "Array.prototype.concat"); var array = ToObject(this); @@ -520,13 +528,15 @@ function ArrayReverse() { CHECK_OBJECT_COERCIBLE(this, "Array.prototype.reverse"); var array = TO_OBJECT_INLINE(this); - var j = TO_UINT32(array.length) - 1; + var len = TO_UINT32(array.length); - if (UseSparseVariant(array, j, IS_ARRAY(array))) { - SparseReverse(array, j+1); + if (UseSparseVariant(array, len, IS_ARRAY(array), len)) { + %NormalizeElements(array); + SparseReverse(array, len); return array; } + var j = len - 1; for (var i = 0; i < j; i++, j--) { var current_i = array[i]; if (!IS_UNDEFINED(current_i) || i in array) { @@ -628,7 +638,7 @@ function ArrayUnshift(arg1) { // length == 1 var num_arguments = %_ArgumentsLength(); var is_sealed = ObjectIsSealed(array); - if (IS_ARRAY(array) && !is_sealed) { + if (IS_ARRAY(array) && !is_sealed && len > 0) { SmartMove(array, 0, 0, len, num_arguments); } else { SimpleMove(array, 0, 0, len, num_arguments); @@ -672,10 +682,9 @@ function ArraySlice(start, end) { if (end_i < start_i) return result; - if (IS_ARRAY(array) && - !%IsObserved(array) && - (end_i > 1000) && - (%EstimateNumberOfElements(array) < end_i)) { + if (UseSparseVariant(array, len, IS_ARRAY(array), end_i - start_i)) { + %NormalizeElements(array); + %NormalizeElements(result); SmartSlice(array, start_i, end_i - start_i, len, result); } else { SimpleSlice(array, start_i, end_i - start_i, len, result); @@ -783,24 +792,20 @@ function ArraySplice(start, delete_count) { ["Array.prototype.splice"]); } - var use_simple_splice = true; - if (IS_ARRAY(array) && - num_elements_to_add !== del_count) { - // If we are only deleting/moving a few things near the end of the - // array then the simple version is going to be faster, because it - // doesn't touch most of the array. - var estimated_non_hole_elements = %EstimateNumberOfElements(array); - if (len > 20 && (estimated_non_hole_elements >> 2) < (len - start_i)) { - use_simple_splice = false; - } + var changed_elements = del_count; + if (num_elements_to_add != del_count) { + // If the splice actually needs to move elements after the insertion + // point, then include those in the estimate of changed elements. + changed_elements += len - start_i - del_count; } - - if (use_simple_splice) { - SimpleSlice(array, start_i, del_count, len, deleted_elements); - SimpleMove(array, start_i, del_count, len, num_elements_to_add); - } else { + if (UseSparseVariant(array, len, IS_ARRAY(array), changed_elements)) { + %NormalizeElements(array); + %NormalizeElements(deleted_elements); SmartSlice(array, start_i, del_count, len, deleted_elements); SmartMove(array, start_i, del_count, len, num_elements_to_add); + } else { + SimpleSlice(array, start_i, del_count, len, deleted_elements); + SimpleMove(array, start_i, del_count, len, num_elements_to_add); } // Insert the arguments into the resulting array in @@ -1075,7 +1080,7 @@ function ArraySort(comparefn) { // For compatibility with JSC, we also sort elements inherited from // the prototype chain on non-Array objects. // We do this by copying them to this object and sorting only - // local elements. This is not very efficient, but sorting with + // own elements. This is not very efficient, but sorting with // inherited elements happens very, very rarely, if at all.
// The specification allows "implementation dependent" behavior // if an element on the prototype chain has an element that @@ -1128,7 +1133,7 @@ function ArrayFilter(f, receiver) { var result = new $Array(); var accumulator = new InternalArray(); var accumulator_length = 0; - var stepping = %_DebugCallbackSupportsStepping(f); + var stepping = DEBUG_IS_ACTIVE && %DebugCallbackSupportsStepping(f); for (var i = 0; i < length; i++) { if (i in array) { var element = array[i]; @@ -1161,7 +1166,7 @@ function ArrayForEach(f, receiver) { receiver = ToObject(receiver); } - var stepping = %_DebugCallbackSupportsStepping(f); + var stepping = DEBUG_IS_ACTIVE && %DebugCallbackSupportsStepping(f); for (var i = 0; i < length; i++) { if (i in array) { var element = array[i]; @@ -1192,7 +1197,7 @@ function ArraySome(f, receiver) { receiver = ToObject(receiver); } - var stepping = %_DebugCallbackSupportsStepping(f); + var stepping = DEBUG_IS_ACTIVE && %DebugCallbackSupportsStepping(f); for (var i = 0; i < length; i++) { if (i in array) { var element = array[i]; @@ -1222,7 +1227,7 @@ function ArrayEvery(f, receiver) { receiver = ToObject(receiver); } - var stepping = %_DebugCallbackSupportsStepping(f); + var stepping = DEBUG_IS_ACTIVE && %DebugCallbackSupportsStepping(f); for (var i = 0; i < length; i++) { if (i in array) { var element = array[i]; @@ -1253,7 +1258,7 @@ function ArrayMap(f, receiver) { var result = new $Array(); var accumulator = new InternalArray(length); - var stepping = %_DebugCallbackSupportsStepping(f); + var stepping = DEBUG_IS_ACTIVE && %DebugCallbackSupportsStepping(f); for (var i = 0; i < length; i++) { if (i in array) { var element = array[i]; @@ -1285,7 +1290,8 @@ function ArrayIndexOf(element, index) { } var min = index; var max = length; - if (UseSparseVariant(this, length, IS_ARRAY(this))) { + if (UseSparseVariant(this, length, IS_ARRAY(this), max - min)) { + %NormalizeElements(this); var indices = %GetArrayKeys(this, length); if (IS_NUMBER(indices)) { // It's an interval. @@ -1340,7 +1346,8 @@ function ArrayLastIndexOf(element, index) { } var min = 0; var max = index; - if (UseSparseVariant(this, length, IS_ARRAY(this))) { + if (UseSparseVariant(this, length, IS_ARRAY(this), index)) { + %NormalizeElements(this); var indices = %GetArrayKeys(this, index + 1); if (IS_NUMBER(indices)) { // It's an interval. @@ -1400,7 +1407,7 @@ function ArrayReduce(callback, current) { } var receiver = %GetDefaultReceiver(callback); - var stepping = %_DebugCallbackSupportsStepping(callback); + var stepping = DEBUG_IS_ACTIVE && %DebugCallbackSupportsStepping(callback); for (; i < length; i++) { if (i in array) { var element = array[i]; @@ -1437,7 +1444,7 @@ function ArrayReduceRight(callback, current) { } var receiver = %GetDefaultReceiver(callback); - var stepping = %_DebugCallbackSupportsStepping(callback); + var stepping = DEBUG_IS_ACTIVE && %DebugCallbackSupportsStepping(callback); for (; i >= 0; i--) { if (i in array) { var element = array[i]; @@ -1462,14 +1469,28 @@ function SetUpArray() { // Set up non-enumerable constructor property on the Array.prototype // object. - %SetProperty($Array.prototype, "constructor", $Array, DONT_ENUM); + %AddNamedProperty($Array.prototype, "constructor", $Array, DONT_ENUM); + + // Set up unscopable properties on the Array.prototype object. 
+ var unscopables = { + __proto__: null, + copyWithin: true, + entries: true, + fill: true, + find: true, + findIndex: true, + keys: true, + values: true, + }; + %AddNamedProperty($Array.prototype, symbolUnscopables, unscopables, + DONT_ENUM | READ_ONLY); // Set up non-enumerable functions on the Array object. InstallFunctions($Array, DONT_ENUM, $Array( "isArray", ArrayIsArray )); - var specialFunctions = %SpecialArrayFunctions({}); + var specialFunctions = %SpecialArrayFunctions(); var getFunction = function(name, jsBuiltin, len) { var f = jsBuiltin; @@ -1492,7 +1513,7 @@ function SetUpArray() { "join", getFunction("join", ArrayJoin), "pop", getFunction("pop", ArrayPop), "push", getFunction("push", ArrayPush, 1), - "concat", getFunction("concat", ArrayConcat, 1), + "concat", getFunction("concat", ArrayConcatJS, 1), "reverse", getFunction("reverse", ArrayReverse), "shift", getFunction("shift", ArrayShift), "unshift", getFunction("unshift", ArrayUnshift, 1), @@ -1516,7 +1537,7 @@ function SetUpArray() { // exposed to user code. // Adding only the functions that are actually used. SetUpLockedPrototype(InternalArray, $Array(), $Array( - "concat", getFunction("concat", ArrayConcat), + "concat", getFunction("concat", ArrayConcatJS), "indexOf", getFunction("indexOf", ArrayIndexOf), "join", getFunction("join", ArrayJoin), "pop", getFunction("pop", ArrayPop), diff --git a/deps/v8/src/arraybuffer.js b/deps/v8/src/arraybuffer.js index 44989f5dbf8..e1c887fdb8b 100644 --- a/deps/v8/src/arraybuffer.js +++ b/deps/v8/src/arraybuffer.js @@ -62,7 +62,7 @@ function ArrayBufferSlice(start, end) { return result; } -function ArrayBufferIsView(obj) { +function ArrayBufferIsViewJS(obj) { return %ArrayBufferIsView(obj); } @@ -74,12 +74,13 @@ function SetUpArrayBuffer() { %FunctionSetPrototype($ArrayBuffer, new $Object()); // Set up the constructor property on the ArrayBuffer prototype object. - %SetProperty($ArrayBuffer.prototype, "constructor", $ArrayBuffer, DONT_ENUM); + %AddNamedProperty( + $ArrayBuffer.prototype, "constructor", $ArrayBuffer, DONT_ENUM); InstallGetter($ArrayBuffer.prototype, "byteLength", ArrayBufferGetByteLen); InstallFunctions($ArrayBuffer, DONT_ENUM, $Array( - "isView", ArrayBufferIsView + "isView", ArrayBufferIsViewJS )); InstallFunctions($ArrayBuffer.prototype, DONT_ENUM, $Array( diff --git a/deps/v8/src/assembler.cc b/deps/v8/src/assembler.cc index 38604538b58..e6dc4eb14ea 100644 --- a/deps/v8/src/assembler.cc +++ b/deps/v8/src/assembler.cc @@ -32,40 +32,43 @@ // modified significantly by Google Inc. // Copyright 2012 the V8 project authors. All rights reserved. 
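One behavioral note on the Array.prototype[@@unscopables] hunk above: listing the new iteration helpers there keeps legacy with (someArray) { ... } blocks working, because a bare identifier such as values inside the with statement now bypasses the freshly added Array.prototype.values and resolves to the outer binding instead.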
-#include "assembler.h" +#include "src/assembler.h" #include <cmath> -#include "api.h" -#include "builtins.h" -#include "counters.h" -#include "cpu.h" -#include "cpu-profiler.h" -#include "debug.h" -#include "deoptimizer.h" -#include "execution.h" -#include "ic.h" -#include "isolate-inl.h" -#include "jsregexp.h" -#include "lazy-instance.h" -#include "platform.h" -#include "regexp-macro-assembler.h" -#include "regexp-stack.h" -#include "runtime.h" -#include "serialize.h" -#include "store-buffer-inl.h" -#include "stub-cache.h" -#include "token.h" +#include "src/api.h" +#include "src/base/cpu.h" +#include "src/base/lazy-instance.h" +#include "src/base/platform/platform.h" +#include "src/builtins.h" +#include "src/counters.h" +#include "src/cpu-profiler.h" +#include "src/debug.h" +#include "src/deoptimizer.h" +#include "src/execution.h" +#include "src/ic.h" +#include "src/isolate-inl.h" +#include "src/jsregexp.h" +#include "src/regexp-macro-assembler.h" +#include "src/regexp-stack.h" +#include "src/runtime.h" +#include "src/serialize.h" +#include "src/stub-cache.h" +#include "src/token.h" #if V8_TARGET_ARCH_IA32 -#include "ia32/assembler-ia32-inl.h" +#include "src/ia32/assembler-ia32-inl.h" // NOLINT #elif V8_TARGET_ARCH_X64 -#include "x64/assembler-x64-inl.h" +#include "src/x64/assembler-x64-inl.h" // NOLINT #elif V8_TARGET_ARCH_ARM64 -#include "arm64/assembler-arm64-inl.h" +#include "src/arm64/assembler-arm64-inl.h" // NOLINT #elif V8_TARGET_ARCH_ARM -#include "arm/assembler-arm-inl.h" +#include "src/arm/assembler-arm-inl.h" // NOLINT #elif V8_TARGET_ARCH_MIPS -#include "mips/assembler-mips-inl.h" +#include "src/mips/assembler-mips-inl.h" // NOLINT +#elif V8_TARGET_ARCH_MIPS64 +#include "src/mips64/assembler-mips64-inl.h" // NOLINT +#elif V8_TARGET_ARCH_X87 +#include "src/x87/assembler-x87-inl.h" // NOLINT #else #error "Unknown architecture." #endif @@ -73,15 +76,19 @@ // Include native regexp-macro-assembler. #ifndef V8_INTERPRETED_REGEXP #if V8_TARGET_ARCH_IA32 -#include "ia32/regexp-macro-assembler-ia32.h" +#include "src/ia32/regexp-macro-assembler-ia32.h" // NOLINT #elif V8_TARGET_ARCH_X64 -#include "x64/regexp-macro-assembler-x64.h" +#include "src/x64/regexp-macro-assembler-x64.h" // NOLINT #elif V8_TARGET_ARCH_ARM64 -#include "arm64/regexp-macro-assembler-arm64.h" +#include "src/arm64/regexp-macro-assembler-arm64.h" // NOLINT #elif V8_TARGET_ARCH_ARM -#include "arm/regexp-macro-assembler-arm.h" +#include "src/arm/regexp-macro-assembler-arm.h" // NOLINT #elif V8_TARGET_ARCH_MIPS -#include "mips/regexp-macro-assembler-mips.h" +#include "src/mips/regexp-macro-assembler-mips.h" // NOLINT +#elif V8_TARGET_ARCH_MIPS64 +#include "src/mips64/regexp-macro-assembler-mips64.h" // NOLINT +#elif V8_TARGET_ARCH_X87 +#include "src/x87/regexp-macro-assembler-x87.h" // NOLINT #else // Unknown architecture. #error "Unknown architecture." #endif // Target architecture. @@ -94,16 +101,13 @@ namespace internal { // Common double constants. 
struct DoubleConstant BASE_EMBEDDED { - double min_int; - double one_half; - double minus_one_half; - double minus_zero; - double zero; - double uint8_max_value; - double negative_infinity; - double canonical_non_hole_nan; - double the_hole_nan; - double uint32_bias; +double min_int; +double one_half; +double minus_one_half; +double negative_infinity; +double canonical_non_hole_nan; +double the_hole_nan; +double uint32_bias; }; static DoubleConstant double_constants; @@ -111,7 +115,7 @@ static DoubleConstant double_constants; const char* const RelocInfo::kFillerCommentString = "DEOPTIMIZATION PADDING"; static bool math_exp_data_initialized = false; -static Mutex* math_exp_data_mutex = NULL; +static base::Mutex* math_exp_data_mutex = NULL; static double* math_exp_constants_array = NULL; static double* math_exp_log_table_array = NULL; @@ -123,26 +127,16 @@ AssemblerBase::AssemblerBase(Isolate* isolate, void* buffer, int buffer_size) jit_cookie_(0), enabled_cpu_features_(0), emit_debug_code_(FLAG_debug_code), - predictable_code_size_(false) { + predictable_code_size_(false), + // We may use the assembler without an isolate. + serializer_enabled_(isolate && isolate->serializer_enabled()) { if (FLAG_mask_constants_with_cookie && isolate != NULL) { jit_cookie_ = isolate->random_number_generator()->NextInt(); } - if (buffer == NULL) { - // Do our own buffer management. - if (buffer_size <= kMinimalBufferSize) { - buffer_size = kMinimalBufferSize; - if (isolate->assembler_spare_buffer() != NULL) { - buffer = isolate->assembler_spare_buffer(); - isolate->set_assembler_spare_buffer(NULL); - } - } - if (buffer == NULL) buffer = NewArray<byte>(buffer_size); - own_buffer_ = true; - } else { - // Use externally provided buffer instead. - ASSERT(buffer_size > 0); - own_buffer_ = false; - } + own_buffer_ = buffer == NULL; + if (buffer_size == 0) buffer_size = kMinimalBufferSize; + DCHECK(buffer_size > 0); + if (own_buffer_) buffer = NewArray<byte>(buffer_size); buffer_ = static_cast<byte*>(buffer); buffer_size_ = buffer_size; @@ -151,15 +145,7 @@ AssemblerBase::AssemblerBase(Isolate* isolate, void* buffer, int buffer_size) AssemblerBase::~AssemblerBase() { - if (own_buffer_) { - if (isolate() != NULL && - isolate()->assembler_spare_buffer() == NULL && - buffer_size_ == kMinimalBufferSize) { - isolate()->set_assembler_spare_buffer(buffer_); - } else { - DeleteArray(buffer_); - } - } + if (own_buffer_) DeleteArray(buffer_); } @@ -191,7 +177,7 @@ PredictableCodeSizeScope::~PredictableCodeSizeScope() { #ifdef DEBUG CpuFeatureScope::CpuFeatureScope(AssemblerBase* assembler, CpuFeature f) : assembler_(assembler) { - ASSERT(CpuFeatures::IsSafeForSnapshot(assembler_->isolate(), f)); + DCHECK(CpuFeatures::IsSupported(f)); old_enabled_ = assembler_->enabled_cpu_features(); uint64_t mask = static_cast<uint64_t>(1) << f; // TODO(svenpanne) This special case below doesn't belong here! @@ -211,23 +197,9 @@ CpuFeatureScope::~CpuFeatureScope() { #endif -// ----------------------------------------------------------------------------- -// Implementation of PlatformFeatureScope - -PlatformFeatureScope::PlatformFeatureScope(Isolate* isolate, CpuFeature f) - : isolate_(isolate), old_cross_compile_(CpuFeatures::cross_compile_) { - // CpuFeatures is a global singleton, therefore this is only safe in - // single threaded code. 
-  ASSERT(Serializer::enabled(isolate));
-  uint64_t mask = static_cast<uint64_t>(1) << f;
-  CpuFeatures::cross_compile_ |= mask;
-  USE(isolate_);
-}
-
-
-PlatformFeatureScope::~PlatformFeatureScope() {
-  CpuFeatures::cross_compile_ = old_cross_compile_;
-}
+bool CpuFeatures::initialized_ = false;
+unsigned CpuFeatures::supported_ = 0;
+unsigned CpuFeatures::cache_line_size_ = 0;


 // -----------------------------------------------------------------------------
@@ -362,7 +334,7 @@ uint32_t RelocInfoWriter::WriteVariableLengthPCJump(uint32_t pc_delta) {
   if (is_uintn(pc_delta, kSmallPCDeltaBits)) return pc_delta;
   WriteExtraTag(kPCJumpExtraTag, kVariableLengthPCJumpTopTag);
   uint32_t pc_jump = pc_delta >> kSmallPCDeltaBits;
-  ASSERT(pc_jump > 0);
+  DCHECK(pc_jump > 0);
   // Write kChunkBits size chunks of the pc_jump.
   for (; pc_jump > 0; pc_jump = pc_jump >> kChunkBits) {
     byte b = pc_jump & kChunkMask;
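The WriteVariableLengthPCJump hunk above keeps RelocInfo's compact encoding intact: a pc delta that fits in kSmallPCDeltaBits travels inline in the tag byte, and anything larger is split into kChunkBits-sized chunks, low bits first. A minimal standalone sketch of that chunking idea (the widths and the continuation-bit convention here are illustrative, not V8's exact format):

    #include <cstdint>
    #include <vector>

    constexpr int kChunkBits = 7;  // illustrative chunk width
    constexpr uint8_t kChunkMask = (1 << kChunkBits) - 1;

    // Emit `value` as 7-bit chunks, least significant first.
    // The high bit of each byte marks "more chunks follow".
    void WriteChunked(std::vector<uint8_t>* out, uint32_t value) {
      do {
        uint8_t chunk = value & kChunkMask;
        value >>= kChunkBits;
        out->push_back(chunk | (value != 0 ? 0x80 : 0));
      } while (value != 0);
    }

    // Decoding reverses the loop: accumulate until the marker bit clears.
    uint32_t ReadChunked(const uint8_t* p) {
      uint32_t value = 0;
      int shift = 0;
      uint8_t b;
      do {
        b = *p++;
        value |= static_cast<uint32_t>(b & kChunkMask) << shift;
        shift += kChunkBits;
      } while (b & 0x80);
      return value;
    }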
@@ -436,9 +408,9 @@ void RelocInfoWriter::Write(const RelocInfo* rinfo) {
 #ifdef DEBUG
   byte* begin_pos = pos_;
 #endif
-  ASSERT(rinfo->rmode() < RelocInfo::NUMBER_OF_MODES);
-  ASSERT(rinfo->pc() - last_pc_ >= 0);
-  ASSERT(RelocInfo::LAST_STANDARD_NONCOMPACT_ENUM - RelocInfo::LAST_COMPACT_ENUM
+  DCHECK(rinfo->rmode() < RelocInfo::NUMBER_OF_MODES);
+  DCHECK(rinfo->pc() - last_pc_ >= 0);
+  DCHECK(RelocInfo::LAST_STANDARD_NONCOMPACT_ENUM - RelocInfo::LAST_COMPACT_ENUM
          <= kMaxStandardNonCompactModes);
   // Use unsigned delta-encoding for pc.
   uint32_t pc_delta = static_cast<uint32_t>(rinfo->pc() - last_pc_);
@@ -449,10 +421,10 @@ void RelocInfoWriter::Write(const RelocInfo* rinfo) {
     WriteTaggedPC(pc_delta, kEmbeddedObjectTag);
   } else if (rmode == RelocInfo::CODE_TARGET) {
     WriteTaggedPC(pc_delta, kCodeTargetTag);
-    ASSERT(begin_pos - pos_ <= RelocInfo::kMaxCallSize);
+    DCHECK(begin_pos - pos_ <= RelocInfo::kMaxCallSize);
   } else if (rmode == RelocInfo::CODE_TARGET_WITH_ID) {
     // Use signed delta-encoding for id.
-    ASSERT(static_cast<int>(rinfo->data()) == rinfo->data());
+    DCHECK(static_cast<int>(rinfo->data()) == rinfo->data());
     int id_delta = static_cast<int>(rinfo->data()) - last_id_;
     // Check if delta is small enough to fit in a tagged byte.
     if (is_intn(id_delta, kSmallDataBits)) {
@@ -466,7 +438,7 @@ void RelocInfoWriter::Write(const RelocInfo* rinfo) {
     last_id_ = static_cast<int>(rinfo->data());
   } else if (RelocInfo::IsPosition(rmode)) {
     // Use signed delta-encoding for position.
-    ASSERT(static_cast<int>(rinfo->data()) == rinfo->data());
+    DCHECK(static_cast<int>(rinfo->data()) == rinfo->data());
     int pos_delta = static_cast<int>(rinfo->data()) - last_position_;
     int pos_type_tag = (rmode == RelocInfo::POSITION) ? kNonstatementPositionTag
                                                       : kStatementPositionTag;
@@ -484,23 +456,23 @@ void RelocInfoWriter::Write(const RelocInfo* rinfo) {
     // Comments are normally not generated, so we use the costly encoding.
     WriteExtraTaggedPC(pc_delta, kPCJumpExtraTag);
     WriteExtraTaggedData(rinfo->data(), kCommentTag);
-    ASSERT(begin_pos - pos_ >= RelocInfo::kMinRelocCommentSize);
+    DCHECK(begin_pos - pos_ >= RelocInfo::kMinRelocCommentSize);
   } else if (RelocInfo::IsConstPool(rmode) || RelocInfo::IsVeneerPool(rmode)) {
     WriteExtraTaggedPC(pc_delta, kPCJumpExtraTag);
     WriteExtraTaggedPoolData(static_cast<int>(rinfo->data()),
                              RelocInfo::IsConstPool(rmode) ? kConstPoolTag
                                                            : kVeneerPoolTag);
   } else {
-    ASSERT(rmode > RelocInfo::LAST_COMPACT_ENUM);
+    DCHECK(rmode > RelocInfo::LAST_COMPACT_ENUM);
     int saved_mode = rmode - RelocInfo::LAST_COMPACT_ENUM;
     // For all other modes we simply use the mode as the extra tag.
     // None of these modes need a data component.
-    ASSERT(saved_mode < kPCJumpExtraTag && saved_mode < kDataJumpExtraTag);
+    DCHECK(saved_mode < kPCJumpExtraTag && saved_mode < kDataJumpExtraTag);
     WriteExtraTaggedPC(pc_delta, saved_mode);
   }
   last_pc_ = rinfo->pc();
 #ifdef DEBUG
-  ASSERT(begin_pos - pos_ <= kMaxSize);
+  DCHECK(begin_pos - pos_ <= kMaxSize);
 #endif
 }

@@ -606,7 +578,7 @@ inline void RelocIterator::ReadTaggedPosition() {

 static inline RelocInfo::Mode GetPositionModeFromTag(int tag) {
-  ASSERT(tag == kNonstatementPositionTag ||
+  DCHECK(tag == kNonstatementPositionTag ||
          tag == kStatementPositionTag);
   return (tag == kNonstatementPositionTag) ?
          RelocInfo::POSITION :
@@ -615,7 +587,7 @@ static inline RelocInfo::Mode GetPositionModeFromTag(int tag) {

 void RelocIterator::next() {
-  ASSERT(!done());
+  DCHECK(!done());
   // Basically, do the opposite of RelocInfoWriter::Write.
   // Reading of data is as far as possible avoided for unwanted modes,
   // but we must always update the pc.
@@ -641,7 +613,7 @@ void RelocIterator::next() {
         } else {
           // Compact encoding is never used for comments,
           // so it must be a position.
-          ASSERT(locatable_tag == kNonstatementPositionTag ||
+          DCHECK(locatable_tag == kNonstatementPositionTag ||
                  locatable_tag == kStatementPositionTag);
           if (mode_mask_ & RelocInfo::kPositionMask) {
             ReadTaggedPosition();
@@ -649,7 +621,7 @@ void RelocIterator::next() {
         }
       }
     } else {
-      ASSERT(tag == kDefaultTag);
+      DCHECK(tag == kDefaultTag);
       int extra_tag = GetExtraTag();
       if (extra_tag == kPCJumpExtraTag) {
         if (GetTopTag() == kVariableLengthPCJumpTopTag) {
@@ -666,7 +638,7 @@ void RelocIterator::next() {
         }
         Advance(kIntSize);
       } else if (locatable_tag != kCommentTag) {
-        ASSERT(locatable_tag == kNonstatementPositionTag ||
+        DCHECK(locatable_tag == kNonstatementPositionTag ||
               locatable_tag == kStatementPositionTag);
         if (mode_mask_ & RelocInfo::kPositionMask) {
           AdvanceReadPosition();
@@ -675,7 +647,7 @@ void RelocIterator::next() {
         Advance(kIntSize);
       }
     } else {
-      ASSERT(locatable_tag == kCommentTag);
+      DCHECK(locatable_tag == kCommentTag);
       if (SetMode(RelocInfo::COMMENT)) {
         AdvanceReadData();
         return;
@@ -684,7 +656,7 @@ void RelocIterator::next() {
     }
   } else if (extra_tag == kPoolExtraTag) {
     int pool_type = GetTopTag();
-    ASSERT(pool_type == kConstPoolTag || pool_type == kVeneerPoolTag);
+    DCHECK(pool_type == kConstPoolTag || pool_type == kVeneerPoolTag);
     RelocInfo::Mode rmode = (pool_type == kConstPoolTag) ?
         RelocInfo::CONST_POOL : RelocInfo::VENEER_POOL;
     if (SetMode(rmode)) {
@@ -821,39 +793,36 @@ const char* RelocInfo::RelocModeName(RelocInfo::Mode rmode) {
 }


-void RelocInfo::Print(Isolate* isolate, FILE* out) {
-  PrintF(out, "%p %s", pc_, RelocModeName(rmode_));
+void RelocInfo::Print(Isolate* isolate, OStream& os) {  // NOLINT
+  os << pc_ << " " << RelocModeName(rmode_);
   if (IsComment(rmode_)) {
-    PrintF(out, " (%s)", reinterpret_cast<char*>(data_));
+    os << " (" << reinterpret_cast<char*>(data_) << ")";
   } else if (rmode_ == EMBEDDED_OBJECT) {
-    PrintF(out, " (");
-    target_object()->ShortPrint(out);
-    PrintF(out, ")");
+    os << " (" << Brief(target_object()) << ")";
   } else if (rmode_ == EXTERNAL_REFERENCE) {
     ExternalReferenceEncoder ref_encoder(isolate);
-    PrintF(out, " (%s) (%p)",
-           ref_encoder.NameOfAddress(target_reference()),
-           target_reference());
+    os << " (" << ref_encoder.NameOfAddress(target_reference()) << ") ("
+       << target_reference() << ")";
   } else if (IsCodeTarget(rmode_)) {
     Code* code = Code::GetCodeFromTargetAddress(target_address());
-    PrintF(out, " (%s) (%p)", Code::Kind2String(code->kind()),
-           target_address());
+    os << " (" << Code::Kind2String(code->kind()) << ") (" << target_address()
+       << ")";
     if (rmode_ == CODE_TARGET_WITH_ID) {
-      PrintF(out, " (id=%d)", static_cast<int>(data_));
+      os << " (id=" << static_cast<int>(data_) << ")";
    }
   } else if (IsPosition(rmode_)) {
-    PrintF(out, " (%" V8_PTR_PREFIX "d)", data());
+    os << " (" << data() << ")";
   } else if (IsRuntimeEntry(rmode_) &&
             isolate->deoptimizer_data() != NULL) {
     // Deoptimization bailouts are stored as runtime entries.
     int id = Deoptimizer::GetDeoptimizationId(
         isolate, target_address(), Deoptimizer::EAGER);
     if (id != Deoptimizer::kNotDeoptimizationEntry) {
-      PrintF(out, " (deoptimization bailout %d)", id);
+      os << " (deoptimization bailout " << id << ")";
     }
   }
-  PrintF(out, "\n");
+  os << "\n";
 }
 #endif  // ENABLE_DISASSEMBLER

@@ -898,7 +867,7 @@ void RelocInfo::Verify(Isolate* isolate) {
       UNREACHABLE();
       break;
     case CODE_AGE_SEQUENCE:
-      ASSERT(Code::IsYoungSequence(isolate, pc_) || code_age_stub()->IsCode());
+      DCHECK(Code::IsYoungSequence(isolate, pc_) || code_age_stub()->IsCode());
       break;
   }
 }

@@ -912,16 +881,13 @@ void ExternalReference::SetUp() {
   double_constants.min_int = kMinInt;
   double_constants.one_half = 0.5;
   double_constants.minus_one_half = -0.5;
-  double_constants.minus_zero = -0.0;
-  double_constants.uint8_max_value = 255;
-  double_constants.zero = 0.0;
-  double_constants.canonical_non_hole_nan = OS::nan_value();
+  double_constants.canonical_non_hole_nan = base::OS::nan_value();
   double_constants.the_hole_nan = BitCast<double>(kHoleNanInt64);
   double_constants.negative_infinity = -V8_INFINITY;
   double_constants.uint32_bias =
     static_cast<double>(static_cast<uint32_t>(0xFFFFFFFF)) + 1;

-  math_exp_data_mutex = new Mutex();
+  math_exp_data_mutex = new base::Mutex();
 }

@@ -929,7 +895,7 @@ void ExternalReference::InitializeMathExpData() {
   // Early return?
   if (math_exp_data_initialized) return;

-  LockGuard<Mutex> lock_guard(math_exp_data_mutex);
+  base::LockGuard<base::Mutex> lock_guard(math_exp_data_mutex);
   if (!math_exp_data_initialized) {
     // If this is changed, generated code must be adapted too.
     const int kTableSizeBits = 11;
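InitializeMathExpData above keeps a lock-free fast path and re-checks the flag after taking the (now base::) mutex, i.e. classic double-checked initialization. The shape of the pattern, sketched with the standard library rather than V8's base::LockGuard:

    #include <mutex>

    static bool table_initialized = false;  // V8 uses a plain bool like this
    static std::mutex table_mutex;
    static double* table = nullptr;

    void EnsureTable() {
      // Fast path, taken on every call after the first.
      if (table_initialized) return;
      std::lock_guard<std::mutex> guard(table_mutex);
      if (!table_initialized) {  // re-check: another thread may have won
        table = new double[1 << 11];
        // ... fill in the precomputed constants ...
        table_initialized = true;  // publish only after the data is ready
      }
    }

A strictly portable version would make the flag a std::atomic<bool> or use std::call_once; the plain-bool variant leans on the platform memory model, which is worth keeping in mind when reading this code.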
@@ -1009,9 +975,6 @@ ExternalReference::ExternalReference(const IC_Utility& ic_utility,
                                      Isolate* isolate)
   : address_(Redirect(isolate, ic_utility.address())) {}

-ExternalReference::ExternalReference(const Debug_Address& debug_address,
-                                     Isolate* isolate)
-  : address_(debug_address.address(isolate)) {}

 ExternalReference::ExternalReference(StatsCounter* counter)
   : address_(reinterpret_cast<Address>(counter->GetInternalPointer())) {}

@@ -1042,7 +1005,8 @@ ExternalReference ExternalReference::

 ExternalReference ExternalReference::flush_icache_function(Isolate* isolate) {
-  return ExternalReference(Redirect(isolate, FUNCTION_ADDR(CPU::FlushICache)));
+  return ExternalReference(
+      Redirect(isolate, FUNCTION_ADDR(CpuFeatures::FlushICache)));
 }

@@ -1174,13 +1138,6 @@ ExternalReference ExternalReference::new_space_allocation_top_address(
 }


-ExternalReference ExternalReference::heap_always_allocate_scope_depth(
-    Isolate* isolate) {
-  Heap* heap = isolate->heap();
-  return ExternalReference(heap->always_allocate_scope_depth_address());
-}
-
-
 ExternalReference ExternalReference::new_space_allocation_limit_address(
     Isolate* isolate) {
   return ExternalReference(isolate->heap()->NewSpaceAllocationLimitAddress());
@@ -1215,13 +1172,6 @@ ExternalReference ExternalReference::old_data_space_allocation_limit_address(
 }


-ExternalReference ExternalReference::
-    new_space_high_promotion_mode_active_address(Isolate* isolate) {
-  return ExternalReference(
-      isolate->heap()->NewSpaceHighPromotionModeActiveAddress());
-}
-
-
 ExternalReference ExternalReference::handle_scope_level_address(
     Isolate* isolate) {
   return ExternalReference(HandleScope::current_level_address(isolate));
@@ -1280,23 +1230,6 @@ ExternalReference ExternalReference::address_of_minus_one_half() {
 }


-ExternalReference ExternalReference::address_of_minus_zero() {
-  return ExternalReference(
-      reinterpret_cast<void*>(&double_constants.minus_zero));
-}
-
-
-ExternalReference ExternalReference::address_of_zero() {
-  return ExternalReference(reinterpret_cast<void*>(&double_constants.zero));
-}
-
-
-ExternalReference ExternalReference::address_of_uint8_max_value() {
-  return ExternalReference(
-      reinterpret_cast<void*>(&double_constants.uint8_max_value));
-}
-
-
 ExternalReference ExternalReference::address_of_negative_infinity() {
   return ExternalReference(
       reinterpret_cast<void*>(&double_constants.negative_infinity));
@@ -1360,6 +1293,10 @@ ExternalReference ExternalReference::re_check_stack_guard_state(
   function = FUNCTION_ADDR(RegExpMacroAssemblerARM::CheckStackGuardState);
 #elif V8_TARGET_ARCH_MIPS
   function = FUNCTION_ADDR(RegExpMacroAssemblerMIPS::CheckStackGuardState);
+#elif V8_TARGET_ARCH_MIPS64
+  function = FUNCTION_ADDR(RegExpMacroAssemblerMIPS::CheckStackGuardState);
+#elif V8_TARGET_ARCH_X87
+  function = FUNCTION_ADDR(RegExpMacroAssemblerX87::CheckStackGuardState);
 #else
   UNREACHABLE();
 #endif
@@ -1415,14 +1352,14 @@ ExternalReference ExternalReference::math_log_double_function(

 ExternalReference ExternalReference::math_exp_constants(int constant_index) {
-  ASSERT(math_exp_data_initialized);
+  DCHECK(math_exp_data_initialized);
   return ExternalReference(
       reinterpret_cast<void*>(math_exp_constants_array + constant_index));
 }


 ExternalReference ExternalReference::math_exp_log_table() {
-  ASSERT(math_exp_data_initialized);
+  DCHECK(math_exp_data_initialized);
   return ExternalReference(reinterpret_cast<void*>(math_exp_log_table_array));
 }

@@ -1438,6 +1375,32 @@ ExternalReference ExternalReference::ForDeoptEntry(Address entry) {
 }


+ExternalReference ExternalReference::cpu_features() {
+  DCHECK(CpuFeatures::initialized_);
+  return ExternalReference(&CpuFeatures::supported_);
+}
+
+
+ExternalReference ExternalReference::debug_is_active_address(
+    Isolate* isolate) {
+  return ExternalReference(isolate->debug()->is_active_address());
+}
+
+
+ExternalReference ExternalReference::debug_after_break_target_address(
+    Isolate* isolate) {
+  return ExternalReference(isolate->debug()->after_break_target_address());
+}
+
+
+ExternalReference
+    ExternalReference::debug_restarter_frame_function_pointer_address(
+        Isolate* isolate) {
+  return ExternalReference(
+      isolate->debug()->restarter_frame_function_pointer_address());
+}
+
+
 double power_helper(double x, double y) {
   int y_int = static_cast<int>(y);
   if (y == y_int) {
@@ -1496,7 +1459,7 @@ double power_double_double(double x, double y) {
   // The checks for special cases can be dropped in ia32 because it has already
   // been done in generated code before bailing out here.
   if (std::isnan(y) || ((x == 1 || x == -1) && std::isinf(y))) {
-    return OS::nan_value();
+    return base::OS::nan_value();
   }
   return std::pow(x, y);
 }
@@ -1519,7 +1482,7 @@ ExternalReference ExternalReference::power_double_int_function(

 bool EvalComparison(Token::Value op, double op1, double op2) {
-  ASSERT(Token::IsCompareOp(op));
+  DCHECK(Token::IsCompareOp(op));
   switch (op) {
     case Token::EQ:
     case Token::EQ_STRICT: return (op1 == op2);
@@ -1555,14 +1518,9 @@ ExternalReference ExternalReference::debug_step_in_fp_address(

 void PositionsRecorder::RecordPosition(int pos) {
-  ASSERT(pos != RelocInfo::kNoPosition);
-  ASSERT(pos >= 0);
+  DCHECK(pos != RelocInfo::kNoPosition);
+  DCHECK(pos >= 0);
   state_.current_position = pos;
-#ifdef ENABLE_GDB_JIT_INTERFACE
-  if (gdbjit_lineinfo_ != NULL) {
-    gdbjit_lineinfo_->SetPosition(assembler_->pc_offset(), pos, false);
-  }
-#endif
   LOG_CODE_EVENT(assembler_->isolate(),
                  CodeLinePosInfoAddPositionEvent(jit_handler_data_,
                                                  assembler_->pc_offset(),
@@ -1571,14 +1529,9 @@ void PositionsRecorder::RecordPosition(int pos) {

 void PositionsRecorder::RecordStatementPosition(int pos) {
-  ASSERT(pos != RelocInfo::kNoPosition);
-  ASSERT(pos >= 0);
+  DCHECK(pos != RelocInfo::kNoPosition);
+  DCHECK(pos >= 0);
   state_.current_statement_position = pos;
-#ifdef ENABLE_GDB_JIT_INTERFACE
-  if (gdbjit_lineinfo_ != NULL) {
-    gdbjit_lineinfo_->SetPosition(assembler_->pc_offset(), pos, true);
-  }
-#endif
   LOG_CODE_EVENT(assembler_->isolate(),
                  CodeLinePosInfoAddStatementPositionEvent(
                      jit_handler_data_,
@@ -1616,7 +1569,7 @@ bool PositionsRecorder::WriteRecordedPositions() {

 MultiplierAndShift::MultiplierAndShift(int32_t d) {
-  ASSERT(d <= -2 || 2 <= d);
+  DCHECK(d <= -2 || 2 <= d);
   const uint32_t two31 = 0x80000000;
   uint32_t ad = Abs(d);
   uint32_t t = two31 + (uint32_t(d) >> 31);
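MultiplierAndShift computes a (multiplier, shift) pair so optimized code can replace signed division by a constant d with a multiply and shifts (the Granlund-Montgomery technique). How generated code consumes such a pair, in the straightforward case where no extra add/sub fixup is needed; a sketch only, with DivByThree purely illustrative:

    #include <cstdint>

    // Divide by 3 via multiplication: multiplier 0x55555556, shift 0.
    int32_t DivByThree(int32_t x) {
      int64_t product = static_cast<int64_t>(x) * 0x55555556;
      int32_t quotient = static_cast<int32_t>(product >> 32);  // high half
      quotient >>= 0;                              // the "shift" of the pair
      quotient += static_cast<uint32_t>(x) >> 31;  // +1 when x is negative
      return quotient;  // truncates toward zero, matching hardware idiv
    }

For example, DivByThree(7) yields 2 and DivByThree(-7) yields -2, exactly what a truncating division instruction would produce, but without the division latency.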
diff --git a/deps/v8/src/assembler.h b/deps/v8/src/assembler.h
index c67253c4897..a128c09d095 100644
--- a/deps/v8/src/assembler.h
+++ b/deps/v8/src/assembler.h
@@ -35,14 +35,14 @@
 #ifndef V8_ASSEMBLER_H_
 #define V8_ASSEMBLER_H_

-#include "v8.h"
+#include "src/v8.h"

-#include "allocation.h"
-#include "builtins.h"
-#include "gdb-jit.h"
-#include "isolate.h"
-#include "runtime.h"
-#include "token.h"
+#include "src/allocation.h"
+#include "src/builtins.h"
+#include "src/gdb-jit.h"
+#include "src/isolate.h"
+#include "src/runtime.h"
+#include "src/token.h"

 namespace v8 {
@@ -65,6 +65,9 @@ class AssemblerBase: public Malloced {
   bool emit_debug_code() const { return emit_debug_code_; }
   void set_emit_debug_code(bool value) { emit_debug_code_ = value; }

+  bool serializer_enabled() const { return serializer_enabled_; }
+  void enable_serializer() { serializer_enabled_ = true; }
+
   bool predictable_code_size() const { return predictable_code_size_; }
   void set_predictable_code_size(bool value) { predictable_code_size_ = value; }

@@ -104,6 +107,7 @@ class AssemblerBase: public Malloced {
   uint64_t enabled_cpu_features_;
   bool emit_debug_code_;
   bool predictable_code_size_;
+  bool serializer_enabled_;
 };

@@ -116,7 +120,7 @@ class DontEmitDebugCodeScope BASE_EMBEDDED {
   }
   ~DontEmitDebugCodeScope() {
     assembler_->set_emit_debug_code(old_value_);
-  };
+  }
  private:
   AssemblerBase* assembler_;
   bool old_value_;
@@ -154,16 +158,55 @@ class CpuFeatureScope BASE_EMBEDDED {
 };


-// Enable a unsupported feature within a scope for cross-compiling for a
-// different CPU.
-class PlatformFeatureScope BASE_EMBEDDED {
+// CpuFeatures keeps track of which features are supported by the target CPU.
+// Supported features must be enabled by a CpuFeatureScope before use.
+// Example:
+//   if (assembler->IsSupported(SSE3)) {
+//     CpuFeatureScope fscope(assembler, SSE3);
+//     // Generate code containing SSE3 instructions.
+//   } else {
+//     // Generate alternative code.
+//   }
+class CpuFeatures : public AllStatic {
  public:
-  PlatformFeatureScope(Isolate* isolate, CpuFeature f);
-  ~PlatformFeatureScope();
+  static void Probe(bool cross_compile) {
+    STATIC_ASSERT(NUMBER_OF_CPU_FEATURES <= kBitsPerInt);
+    if (initialized_) return;
+    initialized_ = true;
+    ProbeImpl(cross_compile);
+  }
+
+  static unsigned SupportedFeatures() {
+    Probe(false);
+    return supported_;
+  }
+
+  static bool IsSupported(CpuFeature f) {
+    return (supported_ & (1u << f)) != 0;
+  }
+
+  static inline bool SupportsCrankshaft();
+
+  static inline unsigned cache_line_size() {
+    DCHECK(cache_line_size_ != 0);
+    return cache_line_size_;
+  }
+
+  static void PrintTarget();
+  static void PrintFeatures();
+
+  // Flush instruction cache.
+  static void FlushICache(void* start, size_t size);

  private:
-  Isolate* isolate_;
-  uint64_t old_cross_compile_;
+  // Platform-dependent implementation.
+  static void ProbeImpl(bool cross_compile);
+
+  static unsigned supported_;
+  static unsigned cache_line_size_;
+  static bool initialized_;
+  friend class ExternalReference;
+  DISALLOW_COPY_AND_ASSIGN(CpuFeatures);
 };

@@ -185,8 +228,8 @@ class Label BASE_EMBEDDED {
   }

   INLINE(~Label()) {
-    ASSERT(!is_linked());
-    ASSERT(!is_near_linked());
+    DCHECK(!is_linked());
+    DCHECK(!is_near_linked());
   }

   INLINE(void Unuse()) { pos_ = 0; }
@@ -216,15 +259,15 @@ class Label BASE_EMBEDDED {

   void bind_to(int pos) {
     pos_ = -pos - 1;
-    ASSERT(is_bound());
+    DCHECK(is_bound());
   }
   void link_to(int pos, Distance distance = kFar) {
     if (distance == kNear) {
       near_link_pos_ = pos + 1;
-      ASSERT(is_near_linked());
+      DCHECK(is_near_linked());
     } else {
       pos_ = pos + 1;
-      ASSERT(is_linked());
+      DCHECK(is_linked());
     }
   }

@@ -242,6 +285,12 @@ class Label BASE_EMBEDDED {

 enum SaveFPRegsMode { kDontSaveFPRegs, kSaveFPRegs };

+// Specifies whether to perform icache flush operations on RelocInfo updates.
+// If FLUSH_ICACHE_IF_NEEDED, the icache will always be flushed if an
+// instruction was modified. If SKIP_ICACHE_FLUSH the flush will always be
+// skipped (only use this if you will flush the icache manually before it is
+// executed).
+enum ICacheFlushMode { FLUSH_ICACHE_IF_NEEDED, SKIP_ICACHE_FLUSH };
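The point of SKIP_ICACHE_FLUSH, per the comment above, is batching: when a whole code object is patched in one pass, flushing per RelocInfo write is wasted work, so the writer skips the per-site flush and issues one FlushICache over the full range. A hypothetical patching loop showing the intended call shape (PatchAllTargets and the loop are illustrative, not code from this patch):

    // Hypothetical: retarget every code-target site in `code`, flush once.
    void PatchAllTargets(Code* code, Address new_target) {
      int mask = RelocInfo::ModeMask(RelocInfo::CODE_TARGET);
      for (RelocIterator it(code, mask); !it.done(); it.next()) {
        it.rinfo()->set_target_address(new_target, UPDATE_WRITE_BARRIER,
                                       SKIP_ICACHE_FLUSH);
      }
      // One flush covers all the instructions modified above.
      CpuFeatures::FlushICache(code->instruction_start(),
                               code->instruction_size());
    }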
 // -----------------------------------------------------------------------------
 // Relocation information

@@ -326,7 +375,6 @@ class RelocInfo {
     LAST_STANDARD_NONCOMPACT_ENUM = INTERNAL_REFERENCE
   };

-  RelocInfo() {}

   RelocInfo(byte* pc, Mode rmode, intptr_t data, Code* host)
@@ -341,7 +389,7 @@ class RelocInfo {
     mode <= LAST_REAL_RELOC_MODE;
   }
   static inline bool IsPseudoRelocMode(Mode mode) {
-    ASSERT(!IsRealRelocMode(mode));
+    DCHECK(!IsRealRelocMode(mode));
     return mode >= FIRST_PSEUDO_RELOC_MODE &&
            mode <= LAST_PSEUDO_RELOC_MODE;
   }
@@ -418,7 +466,9 @@ class RelocInfo {
   void set_host(Code* host) { host_ = host; }

   // Apply a relocation by delta bytes
-  INLINE(void apply(intptr_t delta));
+  INLINE(void apply(intptr_t delta,
+                    ICacheFlushMode icache_flush_mode =
+                        FLUSH_ICACHE_IF_NEEDED));

   // Is the pointer this relocation info refers to coded like a plain pointer
   // or is it strange in some way (e.g. relative or patched into a series of
@@ -434,22 +484,35 @@ class RelocInfo {
   // can only be called if IsCodeTarget(rmode_) || IsRuntimeEntry(rmode_)
   INLINE(Address target_address());
   INLINE(void set_target_address(Address target,
-                                 WriteBarrierMode mode = UPDATE_WRITE_BARRIER));
+                                 WriteBarrierMode write_barrier_mode =
+                                     UPDATE_WRITE_BARRIER,
+                                 ICacheFlushMode icache_flush_mode =
+                                     FLUSH_ICACHE_IF_NEEDED));
   INLINE(Object* target_object());
   INLINE(Handle<Object> target_object_handle(Assembler* origin));
   INLINE(void set_target_object(Object* target,
-                                WriteBarrierMode mode = UPDATE_WRITE_BARRIER));
+                                WriteBarrierMode write_barrier_mode =
+                                    UPDATE_WRITE_BARRIER,
+                                ICacheFlushMode icache_flush_mode =
+                                    FLUSH_ICACHE_IF_NEEDED));
   INLINE(Address target_runtime_entry(Assembler* origin));
   INLINE(void set_target_runtime_entry(Address target,
-                                       WriteBarrierMode mode =
-                                           UPDATE_WRITE_BARRIER));
+                                       WriteBarrierMode write_barrier_mode =
+                                           UPDATE_WRITE_BARRIER,
+                                       ICacheFlushMode icache_flush_mode =
+                                           FLUSH_ICACHE_IF_NEEDED));
   INLINE(Cell* target_cell());
   INLINE(Handle<Cell> target_cell_handle());
   INLINE(void set_target_cell(Cell* cell,
-                              WriteBarrierMode mode = UPDATE_WRITE_BARRIER));
+                              WriteBarrierMode write_barrier_mode =
+                                  UPDATE_WRITE_BARRIER,
+                              ICacheFlushMode icache_flush_mode =
+                                  FLUSH_ICACHE_IF_NEEDED));
   INLINE(Handle<Object> code_age_stub_handle(Assembler* origin));
   INLINE(Code* code_age_stub());
-  INLINE(void set_code_age_stub(Code* stub));
+  INLINE(void set_code_age_stub(Code* stub,
+                                ICacheFlushMode icache_flush_mode =
+                                    FLUSH_ICACHE_IF_NEEDED));

   // Returns the address of the constant pool entry where the target address
   // is held.  This should only be called if IsInConstantPool returns true.
@@ -517,7 +580,7 @@ class RelocInfo {
 #ifdef ENABLE_DISASSEMBLER
   // Printing
   static const char* RelocModeName(Mode rmode);
-  void Print(Isolate* isolate, FILE* out);
+  void Print(Isolate* isolate, OStream& os);  // NOLINT
 #endif  // ENABLE_DISASSEMBLER
 #ifdef VERIFY_HEAP
   void Verify(Isolate* isolate);
@@ -623,7 +686,7 @@ class RelocIterator: public Malloced {

   // Return pointer valid until next next().
   RelocInfo* rinfo() {
-    ASSERT(!done());
+    DCHECK(!done());
     return &rinfo_;
   }

@@ -740,8 +803,6 @@ class ExternalReference BASE_EMBEDDED {

   ExternalReference(const IC_Utility& ic_utility, Isolate* isolate);

-  ExternalReference(const Debug_Address& debug_address, Isolate* isolate);
-
   explicit ExternalReference(StatsCounter* counter);

   ExternalReference(Isolate::AddressId id, Isolate* isolate);
@@ -805,8 +866,6 @@ class ExternalReference BASE_EMBEDDED {
   // Static variable Heap::NewSpaceStart()
   static ExternalReference new_space_start(Isolate* isolate);
   static ExternalReference new_space_mask(Isolate* isolate);
-  static ExternalReference heap_always_allocate_scope_depth(Isolate* isolate);
-
   static ExternalReference new_space_mark_bits(Isolate* isolate);

   // Write barrier.
   static ExternalReference store_buffer_top(Isolate* isolate);
@@ -822,8 +881,6 @@ class ExternalReference BASE_EMBEDDED {
                                       Isolate* isolate);
   static ExternalReference old_data_space_allocation_limit_address(
       Isolate* isolate);
-  static ExternalReference new_space_high_promotion_mode_active_address(
-      Isolate* isolate);

   static ExternalReference mod_two_doubles_operation(Isolate* isolate);
   static ExternalReference power_double_double_function(Isolate* isolate);
@@ -842,9 +899,6 @@ class ExternalReference BASE_EMBEDDED {
   static ExternalReference address_of_min_int();
   static ExternalReference address_of_one_half();
   static ExternalReference address_of_minus_one_half();
-  static ExternalReference address_of_minus_zero();
-  static ExternalReference address_of_zero();
-  static ExternalReference address_of_uint8_max_value();
   static ExternalReference address_of_negative_infinity();
   static ExternalReference address_of_canonical_non_hole_nan();
   static ExternalReference address_of_the_hole_nan();
@@ -861,6 +915,11 @@ class ExternalReference BASE_EMBEDDED {

   static ExternalReference cpu_features();

+  static ExternalReference debug_is_active_address(Isolate* isolate);
+  static ExternalReference debug_after_break_target_address(Isolate* isolate);
+  static ExternalReference debug_restarter_frame_function_pointer_address(
+      Isolate* isolate);
+
   static ExternalReference is_profiling_address(Isolate* isolate);
   static ExternalReference invoke_function_callback(Isolate* isolate);
   static ExternalReference invoke_accessor_getter_callback(Isolate* isolate);
@@ -895,7 +954,7 @@ class ExternalReference BASE_EMBEDDED {
   static void set_redirector(Isolate* isolate,
                              ExternalReferenceRedirector* redirector) {
     // We can't stack them.
-    ASSERT(isolate->external_reference_redirector() == NULL);
+    DCHECK(isolate->external_reference_redirector() == NULL);
     isolate->set_external_reference_redirector(
         reinterpret_cast<ExternalReferenceRedirectorPointer*>(redirector));
   }
@@ -914,17 +973,6 @@ class ExternalReference BASE_EMBEDDED {

   explicit ExternalReference(void* address)
       : address_(address) {}

-  static void* Redirect(Isolate* isolate,
-                        void* address,
-                        Type type = ExternalReference::BUILTIN_CALL) {
-    ExternalReferenceRedirector* redirector =
-        reinterpret_cast<ExternalReferenceRedirector*>(
-            isolate->external_reference_redirector());
-    if (redirector == NULL) return address;
-    void* answer = (*redirector)(address, type);
-    return answer;
-  }
-
   static void* Redirect(Isolate* isolate,
                         Address address_arg,
                         Type type = ExternalReference::BUILTIN_CALL) {
@@ -963,29 +1011,9 @@ class PositionsRecorder BASE_EMBEDDED {
  public:
   explicit PositionsRecorder(Assembler* assembler)
       : assembler_(assembler) {
-#ifdef ENABLE_GDB_JIT_INTERFACE
-    gdbjit_lineinfo_ = NULL;
-#endif
     jit_handler_data_ = NULL;
   }

-#ifdef ENABLE_GDB_JIT_INTERFACE
-  ~PositionsRecorder() {
-    delete gdbjit_lineinfo_;
-  }
-
-  void StartGDBJITLineInfoRecording() {
-    if (FLAG_gdbjit) {
-      gdbjit_lineinfo_ = new GDBJITLineInfo();
-    }
-  }
-
-  GDBJITLineInfo* DetachGDBJITLineInfo() {
-    GDBJITLineInfo* lineinfo = gdbjit_lineinfo_;
-    gdbjit_lineinfo_ = NULL;  // To prevent deallocation in destructor.
-    return lineinfo;
-  }
-#endif
   void AttachJITHandlerData(void* user_data) {
     jit_handler_data_ = user_data;
   }
@@ -1013,9 +1041,6 @@ class PositionsRecorder BASE_EMBEDDED {
  private:
   Assembler* assembler_;
   PositionState state_;
-#ifdef ENABLE_GDB_JIT_INTERFACE
-  GDBJITLineInfo* gdbjit_lineinfo_;
-#endif

   // Currently jit_handler_data_ is used to store JITHandler-specific data
   // over the lifetime of a PositionsRecorder
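Only the Address overload of Redirect survives the cleanup above; it funnels every external address through an optional per-isolate ExternalReferenceRedirector, which simulator builds install so calls to C++ functions can be intercepted. A condensed sketch of that indirection (simplified from the class above, not a verbatim quote):

    // Conceptually, ExternalReference construction does this:
    void* Redirect(Isolate* isolate, void* address,
                   ExternalReference::Type type) {
      auto* redirector = reinterpret_cast<ExternalReferenceRedirector*>(
          isolate->external_reference_redirector());
      if (redirector == nullptr) return address;  // native build: identity
      return (*redirector)(address, type);        // simulator: trampoline
    }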
-#include "assert-scope.h" -#include "v8.h" +#include "src/assert-scope.h" +#include "src/v8.h" namespace v8 { namespace internal { diff --git a/deps/v8/src/assert-scope.h b/deps/v8/src/assert-scope.h index 5086eaa4269..7cfec567ba6 100644 --- a/deps/v8/src/assert-scope.h +++ b/deps/v8/src/assert-scope.h @@ -5,9 +5,9 @@ #ifndef V8_ASSERT_SCOPE_H_ #define V8_ASSERT_SCOPE_H_ -#include "allocation.h" -#include "platform.h" -#include "utils.h" +#include "src/allocation.h" +#include "src/base/platform/platform.h" +#include "src/utils.h" namespace v8 { namespace internal { @@ -28,7 +28,8 @@ enum PerIsolateAssertType { JAVASCRIPT_EXECUTION_ASSERT, JAVASCRIPT_EXECUTION_THROWS, ALLOCATION_FAILURE_ASSERT, - DEOPTIMIZATION_ASSERT + DEOPTIMIZATION_ASSERT, + COMPILATION_ASSERT }; @@ -73,7 +74,7 @@ class PerThreadAssertScopeBase { ~PerThreadAssertScopeBase() { if (!data_->decrement_level()) return; for (int i = 0; i < LAST_PER_THREAD_ASSERT_TYPE; i++) { - ASSERT(data_->get(static_cast<PerThreadAssertType>(i))); + DCHECK(data_->get(static_cast<PerThreadAssertType>(i))); } delete data_; SetThreadLocalData(NULL); @@ -81,16 +82,16 @@ class PerThreadAssertScopeBase { static PerThreadAssertData* GetAssertData() { return reinterpret_cast<PerThreadAssertData*>( - Thread::GetThreadLocal(thread_local_key)); + base::Thread::GetThreadLocal(thread_local_key)); } - static Thread::LocalStorageKey thread_local_key; + static base::Thread::LocalStorageKey thread_local_key; PerThreadAssertData* data_; friend class Isolate; private: static void SetThreadLocalData(PerThreadAssertData* data) { - Thread::SetThreadLocal(thread_local_key, data); + base::Thread::SetThreadLocal(thread_local_key, data); } }; @@ -254,6 +255,13 @@ typedef PerIsolateAssertScopeDebugOnly<DEOPTIMIZATION_ASSERT, false> typedef PerIsolateAssertScopeDebugOnly<DEOPTIMIZATION_ASSERT, true> AllowDeoptimization; +// Scope to document where we do not expect deoptimization. +typedef PerIsolateAssertScopeDebugOnly<COMPILATION_ASSERT, false> + DisallowCompilation; + +// Scope to introduce an exception to DisallowDeoptimization. +typedef PerIsolateAssertScopeDebugOnly<COMPILATION_ASSERT, true> + AllowCompilation; } } // namespace v8::internal #endif // V8_ASSERT_SCOPE_H_ diff --git a/deps/v8/src/ast-value-factory.cc b/deps/v8/src/ast-value-factory.cc new file mode 100644 index 00000000000..dcfd289091c --- /dev/null +++ b/deps/v8/src/ast-value-factory.cc @@ -0,0 +1,409 @@ +// Copyright 2014 the V8 project authors. All rights reserved. +// Redistribution and use in source and binary forms, with or without +// modification, are permitted provided that the following conditions are +// met: +// +// * Redistributions of source code must retain the above copyright +// notice, this list of conditions and the following disclaimer. +// * Redistributions in binary form must reproduce the above +// copyright notice, this list of conditions and the following +// disclaimer in the documentation and/or other materials provided +// with the distribution. +// * Neither the name of Google Inc. nor the names of its +// contributors may be used to endorse or promote products derived +// from this software without specific prior written permission. +// +// THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS +// "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT +// LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR +// A PARTICULAR PURPOSE ARE DISCLAIMED. 
diff --git a/deps/v8/src/ast-value-factory.cc b/deps/v8/src/ast-value-factory.cc
new file mode 100644
index 00000000000..dcfd289091c
--- /dev/null
+++ b/deps/v8/src/ast-value-factory.cc
@@ -0,0 +1,409 @@
+// Copyright 2014 the V8 project authors. All rights reserved.
+// Redistribution and use in source and binary forms, with or without
+// modification, are permitted provided that the following conditions are
+// met:
+//
+//     * Redistributions of source code must retain the above copyright
+//       notice, this list of conditions and the following disclaimer.
+//     * Redistributions in binary form must reproduce the above
+//       copyright notice, this list of conditions and the following
+//       disclaimer in the documentation and/or other materials provided
+//       with the distribution.
+//     * Neither the name of Google Inc. nor the names of its
+//       contributors may be used to endorse or promote products derived
+//       from this software without specific prior written permission.
+//
+// THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+// "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+// LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+// A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+// OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+// SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+// LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+// DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+// THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+// (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+// OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+
+#include "src/ast-value-factory.h"
+
+#include "src/api.h"
+#include "src/objects.h"
+
+namespace v8 {
+namespace internal {
+
+namespace {
+
+// For using StringToArrayIndex.
+class OneByteStringStream {
+ public:
+  explicit OneByteStringStream(Vector<const byte> lb) :
+      literal_bytes_(lb), pos_(0) {}
+
+  bool HasMore() { return pos_ < literal_bytes_.length(); }
+  uint16_t GetNext() { return literal_bytes_[pos_++]; }
+
+ private:
+  Vector<const byte> literal_bytes_;
+  int pos_;
+};
+
+}
+
+class AstRawStringInternalizationKey : public HashTableKey {
+ public:
+  explicit AstRawStringInternalizationKey(const AstRawString* string)
+      : string_(string) {}
+
+  virtual bool IsMatch(Object* other) V8_OVERRIDE {
+    if (string_->is_one_byte_)
+      return String::cast(other)->IsOneByteEqualTo(string_->literal_bytes_);
+    return String::cast(other)->IsTwoByteEqualTo(
+        Vector<const uint16_t>::cast(string_->literal_bytes_));
+  }
+
+  virtual uint32_t Hash() V8_OVERRIDE {
+    return string_->hash() >> Name::kHashShift;
+  }
+
+  virtual uint32_t HashForObject(Object* key) V8_OVERRIDE {
+    return String::cast(key)->Hash();
+  }
+
+  virtual Handle<Object> AsHandle(Isolate* isolate) V8_OVERRIDE {
+    if (string_->is_one_byte_)
+      return isolate->factory()->NewOneByteInternalizedString(
+          string_->literal_bytes_, string_->hash());
+    return isolate->factory()->NewTwoByteInternalizedString(
+        Vector<const uint16_t>::cast(string_->literal_bytes_), string_->hash());
+  }
+
+ private:
+  const AstRawString* string_;
+};
+
+
+void AstRawString::Internalize(Isolate* isolate) {
+  if (!string_.is_null()) return;
+  if (literal_bytes_.length() == 0) {
+    string_ = isolate->factory()->empty_string();
+  } else {
+    AstRawStringInternalizationKey key(this);
+    string_ = StringTable::LookupKey(isolate, &key);
+  }
+}
+
+
+bool AstRawString::AsArrayIndex(uint32_t* index) const {
+  if (!string_.is_null())
+    return string_->AsArrayIndex(index);
+  if (!is_one_byte_ || literal_bytes_.length() == 0 ||
+      literal_bytes_.length() > String::kMaxArrayIndexSize)
+    return false;
+  OneByteStringStream stream(literal_bytes_);
+  return StringToArrayIndex(&stream, index);
+}
+
+
+bool AstRawString::IsOneByteEqualTo(const char* data) const {
+  int length = static_cast<int>(strlen(data));
+  if (is_one_byte_ && literal_bytes_.length() == length) {
+    const char* token = reinterpret_cast<const char*>(literal_bytes_.start());
+    return !strncmp(token, data, length);
+  }
+  return false;
+}
+
+
+bool AstRawString::Compare(void* a, void* b) {
+  AstRawString* string1 = reinterpret_cast<AstRawString*>(a);
+  AstRawString* string2 = reinterpret_cast<AstRawString*>(b);
+  if (string1->is_one_byte_ != string2->is_one_byte_) return false;
+  if (string1->hash_ != string2->hash_) return false;
+  int length = string1->literal_bytes_.length();
+  if (string2->literal_bytes_.length() != length) return false;
+  return memcmp(string1->literal_bytes_.start(),
+                string2->literal_bytes_.start(), length) == 0;
+}
+
+
+void AstConsString::Internalize(Isolate* isolate) {
+  // AstRawStrings are internalized before AstConsStrings so left and right are
+  // already internalized.
+  string_ = isolate->factory()
+                ->NewConsString(left_->string(), right_->string())
+                .ToHandleChecked();
+}
+
+
+bool AstValue::IsPropertyName() const {
+  if (type_ == STRING) {
+    uint32_t index;
+    return !string_->AsArrayIndex(&index);
+  }
+  return false;
+}
+
+
+bool AstValue::BooleanValue() const {
+  switch (type_) {
+    case STRING:
+      DCHECK(string_ != NULL);
+      return !string_->IsEmpty();
+    case SYMBOL:
+      UNREACHABLE();
+      break;
+    case NUMBER:
+      return DoubleToBoolean(number_);
+    case SMI:
+      return smi_ != 0;
+    case STRING_ARRAY:
+      UNREACHABLE();
+      break;
+    case BOOLEAN:
+      return bool_;
+    case NULL_TYPE:
+      return false;
+    case THE_HOLE:
+      UNREACHABLE();
+      break;
+    case UNDEFINED:
+      return false;
+  }
+  UNREACHABLE();
+  return false;
+}
+
+
+void AstValue::Internalize(Isolate* isolate) {
+  switch (type_) {
+    case STRING:
+      DCHECK(string_ != NULL);
+      // Strings are already internalized.
+      DCHECK(!string_->string().is_null());
+      break;
+    case SYMBOL:
+      value_ = Object::GetProperty(
+                   isolate, handle(isolate->native_context()->builtins()),
+                   symbol_name_).ToHandleChecked();
+      break;
+    case NUMBER:
+      value_ = isolate->factory()->NewNumber(number_, TENURED);
+      break;
+    case SMI:
+      value_ = handle(Smi::FromInt(smi_), isolate);
+      break;
+    case BOOLEAN:
+      if (bool_) {
+        value_ = isolate->factory()->true_value();
+      } else {
+        value_ = isolate->factory()->false_value();
+      }
+      break;
+    case STRING_ARRAY: {
+      DCHECK(strings_ != NULL);
+      Factory* factory = isolate->factory();
+      int len = strings_->length();
+      Handle<FixedArray> elements = factory->NewFixedArray(len, TENURED);
+      for (int i = 0; i < len; i++) {
+        const AstRawString* string = (*strings_)[i];
+        Handle<Object> element = string->string();
+        // Strings are already internalized.
+        DCHECK(!element.is_null());
+        elements->set(i, *element);
+      }
+      value_ =
+          factory->NewJSArrayWithElements(elements, FAST_ELEMENTS, TENURED);
+      break;
+    }
+    case NULL_TYPE:
+      value_ = isolate->factory()->null_value();
+      break;
+    case THE_HOLE:
+      value_ = isolate->factory()->the_hole_value();
+      break;
+    case UNDEFINED:
+      value_ = isolate->factory()->undefined_value();
+      break;
+  }
+}
+
+
+const AstRawString* AstValueFactory::GetOneByteString(
+    Vector<const uint8_t> literal) {
+  uint32_t hash = StringHasher::HashSequentialString<uint8_t>(
+      literal.start(), literal.length(), hash_seed_);
+  return GetString(hash, true, literal);
+}
+
+
+const AstRawString* AstValueFactory::GetTwoByteString(
+    Vector<const uint16_t> literal) {
+  uint32_t hash = StringHasher::HashSequentialString<uint16_t>(
+      literal.start(), literal.length(), hash_seed_);
+  return GetString(hash, false, Vector<const byte>::cast(literal));
+}
+
+
+const AstRawString* AstValueFactory::GetString(Handle<String> literal) {
+  DisallowHeapAllocation no_gc;
+  String::FlatContent content = literal->GetFlatContent();
+  if (content.IsAscii()) {
+    return GetOneByteString(content.ToOneByteVector());
+  }
+  DCHECK(content.IsTwoByte());
+  return GetTwoByteString(content.ToUC16Vector());
+}
+
+
+const AstConsString* AstValueFactory::NewConsString(
+    const AstString* left, const AstString* right) {
+  // This Vector will be valid as long as the Collector is alive (meaning that
+  // the AstRawString will not be moved).
+  AstConsString* new_string = new (zone_) AstConsString(left, right);
+  strings_.Add(new_string);
+  if (isolate_) {
+    new_string->Internalize(isolate_);
+  }
+  return new_string;
+}
+
+
+void AstValueFactory::Internalize(Isolate* isolate) {
+  if (isolate_) {
+    // Everything is already internalized.
+    return;
+  }
+  // Strings need to be internalized before values, because values refer to
+  // strings.
+  for (int i = 0; i < strings_.length(); ++i) {
+    strings_[i]->Internalize(isolate);
+  }
+  for (int i = 0; i < values_.length(); ++i) {
+    values_[i]->Internalize(isolate);
+  }
+  isolate_ = isolate;
+}
+
+
+const AstValue* AstValueFactory::NewString(const AstRawString* string) {
+  AstValue* value = new (zone_) AstValue(string);
+  DCHECK(string != NULL);
+  if (isolate_) {
+    value->Internalize(isolate_);
+  }
+  values_.Add(value);
+  return value;
+}
+
+
+const AstValue* AstValueFactory::NewSymbol(const char* name) {
+  AstValue* value = new (zone_) AstValue(name);
+  if (isolate_) {
+    value->Internalize(isolate_);
+  }
+  values_.Add(value);
+  return value;
+}
+
+
+const AstValue* AstValueFactory::NewNumber(double number) {
+  AstValue* value = new (zone_) AstValue(number);
+  if (isolate_) {
+    value->Internalize(isolate_);
+  }
+  values_.Add(value);
+  return value;
+}
+
+
+const AstValue* AstValueFactory::NewSmi(int number) {
+  AstValue* value =
+      new (zone_) AstValue(AstValue::SMI, number);
+  if (isolate_) {
+    value->Internalize(isolate_);
+  }
+  values_.Add(value);
+  return value;
+}
+
+
+const AstValue* AstValueFactory::NewBoolean(bool b) {
+  AstValue* value = new (zone_) AstValue(b);
+  if (isolate_) {
+    value->Internalize(isolate_);
+  }
+  values_.Add(value);
+  return value;
+}
+
+
+const AstValue* AstValueFactory::NewStringList(
+    ZoneList<const AstRawString*>* strings) {
+  AstValue* value = new (zone_) AstValue(strings);
+  if (isolate_) {
+    value->Internalize(isolate_);
+  }
+  values_.Add(value);
+  return value;
+}
+
+
+const AstValue* AstValueFactory::NewNull() {
+  AstValue* value = new (zone_) AstValue(AstValue::NULL_TYPE);
+  if (isolate_) {
+    value->Internalize(isolate_);
+  }
+  values_.Add(value);
+  return value;
+}
+
+
+const AstValue* AstValueFactory::NewUndefined() {
+  AstValue* value = new (zone_) AstValue(AstValue::UNDEFINED);
+  if (isolate_) {
+    value->Internalize(isolate_);
+  }
+  values_.Add(value);
+  return value;
+}
+
+
+const AstValue* AstValueFactory::NewTheHole() {
+  AstValue* value = new (zone_) AstValue(AstValue::THE_HOLE);
+  if (isolate_) {
+    value->Internalize(isolate_);
+  }
+  values_.Add(value);
+  return value;
+}
+
+
+const AstRawString* AstValueFactory::GetString(
+    uint32_t hash, bool is_one_byte, Vector<const byte> literal_bytes) {
+  // literal_bytes here points to whatever the user passed, and this is OK
+  // because we use vector_compare (which checks the contents) to compare
+  // against the AstRawStrings which are in the string_table_. We should not
+  // return this AstRawString.
+  AstRawString key(is_one_byte, literal_bytes, hash);
+  HashMap::Entry* entry = string_table_.Lookup(&key, hash, true);
+  if (entry->value == NULL) {
+    // Copy literal contents for later comparison.
+    int length = literal_bytes.length();
+    byte* new_literal_bytes = zone_->NewArray<byte>(length);
+    memcpy(new_literal_bytes, literal_bytes.start(), length);
+    AstRawString* new_string = new (zone_) AstRawString(
+        is_one_byte, Vector<const byte>(new_literal_bytes, length), hash);
+    entry->key = new_string;
+    strings_.Add(new_string);
+    if (isolate_) {
+      new_string->Internalize(isolate_);
+    }
+    entry->value = reinterpret_cast<void*>(1);
+  }
+  return reinterpret_cast<AstRawString*>(entry->key);
+}
+
+
+} }  // namespace v8::internal
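GetString above is a classic intern table: the lookup key wraps caller-owned bytes, and only on a miss are the bytes copied into the Zone and a permanent AstRawString created, so each distinct literal is stored exactly once. The same lookup-or-copy shape in miniature, with std::unordered_set standing in for V8's HashMap (illustrative only):

    #include <string>
    #include <unordered_set>

    class Interner {
     public:
      // Returns a stable pointer; copies `bytes` only on first sight.
      const std::string* Intern(const char* bytes, size_t len) {
        auto result = table_.emplace(bytes, len);  // copies iff not present
        return &*result.first;  // node-based set: stable across rehashing
      }

     private:
      std::unordered_set<std::string> table_;
    };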
diff --git a/deps/v8/src/ast-value-factory.h b/deps/v8/src/ast-value-factory.h
new file mode 100644
index 00000000000..c3bf24c2d67
--- /dev/null
+++ b/deps/v8/src/ast-value-factory.h
@@ -0,0 +1,344 @@
+// Copyright 2014 the V8 project authors. All rights reserved.
+// Redistribution and use in source and binary forms, with or without
+// modification, are permitted provided that the following conditions are
+// met:
+//
+//     * Redistributions of source code must retain the above copyright
+//       notice, this list of conditions and the following disclaimer.
+//     * Redistributions in binary form must reproduce the above
+//       copyright notice, this list of conditions and the following
+//       disclaimer in the documentation and/or other materials provided
+//       with the distribution.
+//     * Neither the name of Google Inc. nor the names of its
+//       contributors may be used to endorse or promote products derived
+//       from this software without specific prior written permission.
+//
+// THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+// "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+// LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+// A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+// OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+// SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+// LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+// DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+// THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+// (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+// OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+
+#ifndef V8_AST_VALUE_FACTORY_H_
+#define V8_AST_VALUE_FACTORY_H_
+
+#include "src/api.h"
+#include "src/hashmap.h"
+#include "src/utils.h"
+
+// AstString, AstValue and AstValueFactory are for storing strings and values
+// independent of the V8 heap and internalizing them later. During parsing,
+// AstStrings and AstValues are created and stored outside the heap, in
+// AstValueFactory. After parsing, the strings and values are internalized
+// (moved into the V8 heap).
+namespace v8 {
+namespace internal {
+
+class AstString : public ZoneObject {
+ public:
+  virtual ~AstString() {}
+
+  virtual int length() const = 0;
+  bool IsEmpty() const { return length() == 0; }
+
+  // Puts the string into the V8 heap.
+  virtual void Internalize(Isolate* isolate) = 0;
+
+  // This function can be called after internalizing.
+  V8_INLINE Handle<String> string() const {
+    DCHECK(!string_.is_null());
+    return string_;
+  }
+
+ protected:
+  // This is null until the string is internalized.
+  Handle<String> string_;
+};
+
+
+class AstRawString : public AstString {
+ public:
+  virtual int length() const V8_OVERRIDE {
+    if (is_one_byte_)
+      return literal_bytes_.length();
+    return literal_bytes_.length() / 2;
+  }
+
+  virtual void Internalize(Isolate* isolate) V8_OVERRIDE;
+
+  bool AsArrayIndex(uint32_t* index) const;
+
+  // The string is not null-terminated, use length() to find out the length.
+  const unsigned char* raw_data() const {
+    return literal_bytes_.start();
+  }
+  bool is_one_byte() const { return is_one_byte_; }
+  bool IsOneByteEqualTo(const char* data) const;
+  uint16_t FirstCharacter() const {
+    if (is_one_byte_)
+      return literal_bytes_[0];
+    const uint16_t* c =
+        reinterpret_cast<const uint16_t*>(literal_bytes_.start());
+    return *c;
+  }
+
+  // For storing AstRawStrings in a hash map.
+  uint32_t hash() const {
+    return hash_;
+  }
+  static bool Compare(void* a, void* b);
+
+ private:
+  friend class AstValueFactory;
+  friend class AstRawStringInternalizationKey;
+
+  AstRawString(bool is_one_byte, const Vector<const byte>& literal_bytes,
+               uint32_t hash)
+      : is_one_byte_(is_one_byte), literal_bytes_(literal_bytes), hash_(hash) {}
+
+  AstRawString()
+      : is_one_byte_(true),
+        hash_(0) {}
+
+  bool is_one_byte_;
+
+  // Points to memory owned by Zone.
+  Vector<const byte> literal_bytes_;
+  uint32_t hash_;
+};
+
+
+class AstConsString : public AstString {
+ public:
+  AstConsString(const AstString* left, const AstString* right)
+      : left_(left),
+        right_(right) {}
+
+  virtual int length() const V8_OVERRIDE {
+    return left_->length() + right_->length();
+  }
+
+  virtual void Internalize(Isolate* isolate) V8_OVERRIDE;
+
+ private:
+  friend class AstValueFactory;
+
+  const AstString* left_;
+  const AstString* right_;
+};
+
+
+// AstValue is either a string, a number, a string array, a boolean, or a
+// special value (null, undefined, the hole).
+class AstValue : public ZoneObject {
+ public:
+  bool IsString() const {
+    return type_ == STRING;
+  }
+
+  bool IsNumber() const {
+    return type_ == NUMBER || type_ == SMI;
+  }
+
+  const AstRawString* AsString() const {
+    if (type_ == STRING)
+      return string_;
+    UNREACHABLE();
+    return 0;
+  }
+
+  double AsNumber() const {
+    if (type_ == NUMBER)
+      return number_;
+    if (type_ == SMI)
+      return smi_;
+    UNREACHABLE();
+    return 0;
+  }
+
+  bool EqualsString(const AstRawString* string) const {
+    return type_ == STRING && string_ == string;
+  }
+
+  bool IsPropertyName() const;
+
+  bool BooleanValue() const;
+
+  void Internalize(Isolate* isolate);
+
+  // Can be called after Internalize has been called.
+  V8_INLINE Handle<Object> value() const {
+    if (type_ == STRING) {
+      return string_->string();
+    }
+    DCHECK(!value_.is_null());
+    return value_;
+  }
+
+ private:
+  friend class AstValueFactory;
+
+  enum Type {
+    STRING,
+    SYMBOL,
+    NUMBER,
+    SMI,
+    BOOLEAN,
+    STRING_ARRAY,
+    NULL_TYPE,
+    UNDEFINED,
+    THE_HOLE
+  };
+
+  explicit AstValue(const AstRawString* s) : type_(STRING) { string_ = s; }
+
+  explicit AstValue(const char* name) : type_(SYMBOL) { symbol_name_ = name; }
+
+  explicit AstValue(double n) : type_(NUMBER) { number_ = n; }
+
+  AstValue(Type t, int i) : type_(t) {
+    DCHECK(type_ == SMI);
+    smi_ = i;
+  }
+
+  explicit AstValue(bool b) : type_(BOOLEAN) { bool_ = b; }
+
+  explicit AstValue(ZoneList<const AstRawString*>* s) : type_(STRING_ARRAY) {
+    strings_ = s;
+  }
+
+  explicit AstValue(Type t) : type_(t) {
+    DCHECK(t == NULL_TYPE || t == UNDEFINED || t == THE_HOLE);
+  }
+
+  Type type_;
+
+  // Uninternalized value.
+  union {
+    const AstRawString* string_;
+    double number_;
+    int smi_;
+    bool bool_;
+    ZoneList<const AstRawString*>* strings_;
+    const char* symbol_name_;
+  };
+
+  // Internalized value (empty before internalized).
+  Handle<Object> value_;
+};
+
+
+// For generating string constants.
+#define STRING_CONSTANTS(F) \
+  F(anonymous_function, "(anonymous function)") \
+  F(arguments, "arguments") \
+  F(done, "done") \
+  F(dot, ".") \
+  F(dot_for, ".for") \
+  F(dot_generator, ".generator") \
+  F(dot_generator_object, ".generator_object") \
+  F(dot_iterator, ".iterator") \
+  F(dot_module, ".module") \
+  F(dot_result, ".result") \
+  F(empty, "") \
+  F(eval, "eval") \
+  F(initialize_const_global, "initializeConstGlobal") \
+  F(initialize_var_global, "initializeVarGlobal") \
+  F(make_reference_error, "MakeReferenceError") \
+  F(make_syntax_error, "MakeSyntaxError") \
+  F(make_type_error, "MakeTypeError") \
+  F(module, "module") \
+  F(native, "native") \
+  F(next, "next") \
+  F(proto, "__proto__") \
+  F(prototype, "prototype") \
+  F(this, "this") \
+  F(use_asm, "use asm") \
+  F(use_strict, "use strict") \
+  F(value, "value")
+
+
+class AstValueFactory {
+ public:
+  AstValueFactory(Zone* zone, uint32_t hash_seed)
+      : string_table_(AstRawString::Compare),
+        zone_(zone),
+        isolate_(NULL),
+        hash_seed_(hash_seed) {
+#define F(name, str) \
+    name##_string_ = NULL;
+    STRING_CONSTANTS(F)
+#undef F
+  }
+
+  const AstRawString* GetOneByteString(Vector<const uint8_t> literal);
+  const AstRawString* GetOneByteString(const char* string) {
+    return GetOneByteString(Vector<const uint8_t>(
+        reinterpret_cast<const uint8_t*>(string), StrLength(string)));
+  }
+  const AstRawString* GetTwoByteString(Vector<const uint16_t> literal);
+  const AstRawString* GetString(Handle<String> literal);
+  const AstConsString* NewConsString(const AstString* left,
+                                     const AstString* right);
+
+  void Internalize(Isolate* isolate);
+  bool IsInternalized() {
+    return isolate_ != NULL;
+  }
+
+#define F(name, str) \
+  const AstRawString* name##_string() { \
+    if (name##_string_ == NULL) { \
+      const char* data = str; \
+      name##_string_ = GetOneByteString( \
+          Vector<const uint8_t>(reinterpret_cast<const uint8_t*>(data), \
+                                static_cast<int>(strlen(data)))); \
+    } \
+    return name##_string_; \
+  }
+  STRING_CONSTANTS(F)
+#undef F
+
+  const AstValue* NewString(const AstRawString* string);
+  // A JavaScript symbol (ECMA-262 edition 6).
+  const AstValue* NewSymbol(const char* name);
+  const AstValue* NewNumber(double number);
+  const AstValue* NewSmi(int number);
+  const AstValue* NewBoolean(bool b);
+  const AstValue* NewStringList(ZoneList<const AstRawString*>* strings);
+  const AstValue* NewNull();
+  const AstValue* NewUndefined();
+  const AstValue* NewTheHole();
+
+ private:
+  const AstRawString* GetString(uint32_t hash, bool is_one_byte,
+                                Vector<const byte> literal_bytes);
+
+  // All strings are copied here, one after another (no NULLs in between).
+  HashMap string_table_;
+  // For keeping track of all AstValues and AstRawStrings we've created (so that
+  // they can be internalized later).
+  List<AstValue*> values_;
+  List<AstString*> strings_;
+  Zone* zone_;
+  Isolate* isolate_;
+
+  uint32_t hash_seed_;
+
+#define F(name, str) \
+  const AstRawString* name##_string_;
+  STRING_CONSTANTS(F)
+#undef F
+};
+
+} }  // namespace v8::internal
+
+#undef STRING_CONSTANTS
+
+#endif  // V8_AST_VALUE_FACTORY_H_
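Taken together, the factory's contract is: the parser allocates AstRawStrings and AstValues off-heap while it runs, and a single Internalize(isolate) call afterwards migrates everything into the heap, strings first because values refer to them. Roughly how a client drives it (AstValueFactory and its methods are from the header above; the surrounding driver and its names are illustrative):

    void ParseAndInternalize(Isolate* isolate, Zone* zone, uint32_t seed) {
      AstValueFactory factory(zone, seed);
      // During parsing: no heap allocation, everything lives in the Zone.
      const AstRawString* name = factory.GetOneByteString("answer");
      const AstValue* forty_two = factory.NewSmi(42);
      // After parsing: one pass moves strings, then values, into the heap.
      factory.Internalize(isolate);
      Handle<String> heap_name = name->string();   // valid only from here on
      Handle<Object> heap_value = forty_two->value();
      USE(heap_name);
      USE(heap_value);
    }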
diff --git a/deps/v8/src/ast.cc b/deps/v8/src/ast.cc
index 303c442f843..9f431604b77 100644
--- a/deps/v8/src/ast.cc
+++ b/deps/v8/src/ast.cc
@@ -2,20 +2,20 @@
 // Use of this source code is governed by a BSD-style license that can be
 // found in the LICENSE file.

-#include "ast.h"
+#include "src/ast.h"

 #include <cmath>  // For isfinite.
-#include "builtins.h"
-#include "code-stubs.h"
-#include "contexts.h"
-#include "conversions.h"
-#include "hashmap.h"
-#include "parser.h"
-#include "property-details.h"
-#include "property.h"
-#include "scopes.h"
-#include "string-stream.h"
-#include "type-info.h"
+#include "src/builtins.h"
+#include "src/code-stubs.h"
+#include "src/contexts.h"
+#include "src/conversions.h"
+#include "src/hashmap.h"
+#include "src/parser.h"
+#include "src/property.h"
+#include "src/property-details.h"
+#include "src/scopes.h"
+#include "src/string-stream.h"
+#include "src/type-info.h"

 namespace v8 {
 namespace internal {
@@ -34,17 +34,17 @@ AST_NODE_LIST(DECL_ACCEPT)

 bool Expression::IsSmiLiteral() const {
-  return AsLiteral() != NULL && AsLiteral()->value()->IsSmi();
+  return IsLiteral() && AsLiteral()->value()->IsSmi();
 }


 bool Expression::IsStringLiteral() const {
-  return AsLiteral() != NULL && AsLiteral()->value()->IsString();
+  return IsLiteral() && AsLiteral()->value()->IsString();
 }


 bool Expression::IsNullLiteral() const {
-  return AsLiteral() != NULL && AsLiteral()->value()->IsNull();
+  return IsLiteral() && AsLiteral()->value()->IsNull();
 }

@@ -55,25 +55,24 @@ bool Expression::IsUndefinedLiteral(Isolate* isolate) const {
   // The global identifier "undefined" is immutable. Everything
   // else could be reassigned.
   return var != NULL && var->location() == Variable::UNALLOCATED &&
-         String::Equals(var_proxy->name(),
-                        isolate->factory()->undefined_string());
+         var_proxy->raw_name()->IsOneByteEqualTo("undefined");
 }


 VariableProxy::VariableProxy(Zone* zone, Variable* var, int position)
     : Expression(zone, position),
-      name_(var->name()),
+      name_(var->raw_name()),
       var_(NULL),  // Will be set by the call to BindTo.
      is_this_(var->is_this()),
-      is_trivial_(false),
-      is_lvalue_(false),
-      interface_(var->interface()) {
+      is_assigned_(false),
+      interface_(var->interface()),
+      variable_feedback_slot_(kInvalidFeedbackSlot) {
   BindTo(var);
 }


 VariableProxy::VariableProxy(Zone* zone,
-                             Handle<String> name,
+                             const AstRawString* name,
                              bool is_this,
                              Interface* interface,
                              int position)
@@ -81,26 +80,24 @@ VariableProxy::VariableProxy(Zone* zone,
       name_(name),
       var_(NULL),
       is_this_(is_this),
-      is_trivial_(false),
-      is_lvalue_(false),
-      interface_(interface) {
-  // Names must be canonicalized for fast equality checks.
-  ASSERT(name->IsInternalizedString());
+      is_assigned_(false),
+      interface_(interface),
+      variable_feedback_slot_(kInvalidFeedbackSlot) {
 }


 void VariableProxy::BindTo(Variable* var) {
-  ASSERT(var_ == NULL);  // must be bound only once
-  ASSERT(var != NULL);  // must bind
-  ASSERT(!FLAG_harmony_modules || interface_->IsUnified(var->interface()));
-  ASSERT((is_this() && var->is_this()) || name_.is_identical_to(var->name()));
+  DCHECK(var_ == NULL);  // must be bound only once
+  DCHECK(var != NULL);  // must bind
+  DCHECK(!FLAG_harmony_modules || interface_->IsUnified(var->interface()));
+  DCHECK((is_this() && var->is_this()) || name_ == var->raw_name());
   // Ideally CONST-ness should match. However, this is very hard to achieve
   // because we don't know the exact semantics of conflicting (const and
   // non-const) multiple variable declarations, const vars introduced via
   // eval() etc. Const-ness and variable declarations are a complete mess
   // in JS. Sigh...
   var_ = var;
-  var->set_is_used(true);
+  var->set_is_used();
 }

@@ -180,19 +177,17 @@ void FunctionLiteral::InitializeSharedInfo(
 }


-ObjectLiteralProperty::ObjectLiteralProperty(
-    Zone* zone, Literal* key, Expression* value) {
+ObjectLiteralProperty::ObjectLiteralProperty(Zone* zone,
+                                             AstValueFactory* ast_value_factory,
+                                             Literal* key, Expression* value) {
   emit_store_ = true;
   key_ = key;
   value_ = value;
-  Handle<Object> k = key->value();
-  if (k->IsInternalizedString() &&
-      String::Equals(Handle<String>::cast(k),
-                     zone->isolate()->factory()->proto_string())) {
+  if (key->raw_value()->EqualsString(ast_value_factory->proto_string())) {
     kind_ = PROTOTYPE;
   } else if (value_->AsMaterializedLiteral() != NULL) {
     kind_ = MATERIALIZED_LITERAL;
-  } else if (value_->AsLiteral() != NULL) {
+  } else if (value_->IsLiteral()) {
     kind_ = CONSTANT;
   } else {
     kind_ = COMPUTED;
@@ -390,7 +385,7 @@ void ArrayLiteral::BuildConstantElements(Isolate* isolate) {

 Handle<Object> MaterializedLiteral::GetBoilerplateValue(Expression* expression,
                                                         Isolate* isolate) {
-  if (expression->AsLiteral() != NULL) {
+  if (expression->IsLiteral()) {
     return expression->AsLiteral()->value();
   }
   if (CompileTimeValue::IsCompileTimeValue(expression)) {
@@ -407,8 +402,8 @@ void MaterializedLiteral::BuildConstants(Isolate* isolate) {
   if (IsObjectLiteral()) {
     return AsObjectLiteral()->BuildConstantProperties(isolate);
   }
-  ASSERT(IsRegExpLiteral());
-  ASSERT(depth() >= 1);  // Depth should be initialized.
+  DCHECK(IsRegExpLiteral());
+  DCHECK(depth() >= 1);  // Depth should be initialized.
 }

@@ -499,7 +494,7 @@ static bool IsVoidOfLiteral(Expression* expr) {
   UnaryOperation* maybe_unary = expr->AsUnaryOperation();
   return maybe_unary != NULL &&
          maybe_unary->op() == Token::VOID &&
-         maybe_unary->expression()->AsLiteral() != NULL;
+         maybe_unary->expression()->IsLiteral();
 }

@@ -598,7 +593,7 @@ bool Call::ComputeGlobalTarget(Handle<GlobalObject> global,
                                LookupResult* lookup) {
   target_ = Handle<JSFunction>::null();
   cell_ = Handle<Cell>::null();
-  ASSERT(lookup->IsFound() &&
+  DCHECK(lookup->IsFound() &&
          lookup->type() == NORMAL &&
          lookup->holder() == *global);
   cell_ = Handle<Cell>(global->GetPropertyCell(lookup));
@@ -806,51 +801,44 @@
 // output formats are alike.
class RegExpUnparser V8_FINAL : public RegExpVisitor { public: - explicit RegExpUnparser(Zone* zone); + RegExpUnparser(OStream& os, Zone* zone) : os_(os), zone_(zone) {} void VisitCharacterRange(CharacterRange that); - SmartArrayPointer<const char> ToString() { return stream_.ToCString(); } #define MAKE_CASE(Name) virtual void* Visit##Name(RegExp##Name*, \ void* data) V8_OVERRIDE; FOR_EACH_REG_EXP_TREE_TYPE(MAKE_CASE) #undef MAKE_CASE private: - StringStream* stream() { return &stream_; } - HeapStringAllocator alloc_; - StringStream stream_; + OStream& os_; Zone* zone_; }; -RegExpUnparser::RegExpUnparser(Zone* zone) : stream_(&alloc_), zone_(zone) { -} - - void* RegExpUnparser::VisitDisjunction(RegExpDisjunction* that, void* data) { - stream()->Add("(|"); + os_ << "(|"; for (int i = 0; i < that->alternatives()->length(); i++) { - stream()->Add(" "); + os_ << " "; that->alternatives()->at(i)->Accept(this, data); } - stream()->Add(")"); + os_ << ")"; return NULL; } void* RegExpUnparser::VisitAlternative(RegExpAlternative* that, void* data) { - stream()->Add("(:"); + os_ << "(:"; for (int i = 0; i < that->nodes()->length(); i++) { - stream()->Add(" "); + os_ << " "; that->nodes()->at(i)->Accept(this, data); } - stream()->Add(")"); + os_ << ")"; return NULL; } void RegExpUnparser::VisitCharacterRange(CharacterRange that) { - stream()->Add("%k", that.from()); + os_ << AsUC16(that.from()); if (!that.IsSingleton()) { - stream()->Add("-%k", that.to()); + os_ << "-" << AsUC16(that.to()); } } @@ -858,14 +846,13 @@ void RegExpUnparser::VisitCharacterRange(CharacterRange that) { void* RegExpUnparser::VisitCharacterClass(RegExpCharacterClass* that, void* data) { - if (that->is_negated()) - stream()->Add("^"); - stream()->Add("["); + if (that->is_negated()) os_ << "^"; + os_ << "["; for (int i = 0; i < that->ranges(zone_)->length(); i++) { - if (i > 0) stream()->Add(" "); + if (i > 0) os_ << " "; VisitCharacterRange(that->ranges(zone_)->at(i)); } - stream()->Add("]"); + os_ << "]"; return NULL; } @@ -873,22 +860,22 @@ void* RegExpUnparser::VisitCharacterClass(RegExpCharacterClass* that, void* RegExpUnparser::VisitAssertion(RegExpAssertion* that, void* data) { switch (that->assertion_type()) { case RegExpAssertion::START_OF_INPUT: - stream()->Add("@^i"); + os_ << "@^i"; break; case RegExpAssertion::END_OF_INPUT: - stream()->Add("@$i"); + os_ << "@$i"; break; case RegExpAssertion::START_OF_LINE: - stream()->Add("@^l"); + os_ << "@^l"; break; case RegExpAssertion::END_OF_LINE: - stream()->Add("@$l"); + os_ << "@$l"; break; case RegExpAssertion::BOUNDARY: - stream()->Add("@b"); + os_ << "@b"; break; case RegExpAssertion::NON_BOUNDARY: - stream()->Add("@B"); + os_ << "@B"; break; } return NULL; @@ -896,12 +883,12 @@ void* RegExpUnparser::VisitAssertion(RegExpAssertion* that, void* data) { void* RegExpUnparser::VisitAtom(RegExpAtom* that, void* data) { - stream()->Add("'"); + os_ << "'"; Vector<const uc16> chardata = that->data(); for (int i = 0; i < chardata.length(); i++) { - stream()->Add("%k", chardata[i]); + os_ << AsUC16(chardata[i]); } - stream()->Add("'"); + os_ << "'"; return NULL; } @@ -910,71 +897,70 @@ void* RegExpUnparser::VisitText(RegExpText* that, void* data) { if (that->elements()->length() == 1) { that->elements()->at(0).tree()->Accept(this, data); } else { - stream()->Add("(!"); + os_ << "(!"; for (int i = 0; i < that->elements()->length(); i++) { - stream()->Add(" "); + os_ << " "; that->elements()->at(i).tree()->Accept(this, data); } - stream()->Add(")"); + os_ << ")"; } return NULL; } 
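The hunks above rework RegExpUnparser from accumulating output in an internal StringStream (drained via ToString()) to streaming through an OStream reference handed in at construction. A minimal standalone sketch of the same accumulator-to-stream refactoring, assuming nothing from V8 — std::ostream and the Node/Leaf/Pair types below are stand-ins for OStream and the RegExp tree classes:

// Hedged sketch: visitor output streamed through an ostream& instead of
// an owned string buffer, mirroring the RegExpUnparser change above.
#include <iostream>
#include <sstream>

struct Node {
  virtual ~Node() {}
  virtual void Print(std::ostream& os) const = 0;  // was ToString()-style
};

struct Leaf : Node {
  explicit Leaf(char c) : c_(c) {}
  virtual void Print(std::ostream& os) const { os << '\'' << c_ << '\''; }
  char c_;
};

struct Pair : Node {
  Pair(const Node* a, const Node* b) : a_(a), b_(b) {}
  virtual void Print(std::ostream& os) const {
    os << "(: ";       // write directly; no intermediate buffer to drain
    a_->Print(os);
    os << ' ';
    b_->Print(os);
    os << ')';
  }
  const Node* a_;
  const Node* b_;
};

int main() {
  Leaf a('a'), b('b');
  Pair p(&a, &b);
  p.Print(std::cout);            // (: 'a' 'b') goes straight to the stream
  std::cout << '\n';
  std::ostringstream oss;        // a string is still recoverable on demand
  p.Print(oss);
  return oss.str() == "(: 'a' 'b')" ? 0 : 1;
}

The caller now chooses the sink, so Print composes with any stream and the unparser no longer owns allocation — which is what lets the SmartArrayPointer-returning RegExpTree::ToString become RegExpTree::Print.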
void* RegExpUnparser::VisitQuantifier(RegExpQuantifier* that, void* data) { - stream()->Add("(# %i ", that->min()); + os_ << "(# " << that->min() << " "; if (that->max() == RegExpTree::kInfinity) { - stream()->Add("- "); + os_ << "- "; } else { - stream()->Add("%i ", that->max()); + os_ << that->max() << " "; } - stream()->Add(that->is_greedy() ? "g " : that->is_possessive() ? "p " : "n "); + os_ << (that->is_greedy() ? "g " : that->is_possessive() ? "p " : "n "); that->body()->Accept(this, data); - stream()->Add(")"); + os_ << ")"; return NULL; } void* RegExpUnparser::VisitCapture(RegExpCapture* that, void* data) { - stream()->Add("(^ "); + os_ << "(^ "; that->body()->Accept(this, data); - stream()->Add(")"); + os_ << ")"; return NULL; } void* RegExpUnparser::VisitLookahead(RegExpLookahead* that, void* data) { - stream()->Add("(-> "); - stream()->Add(that->is_positive() ? "+ " : "- "); + os_ << "(-> " << (that->is_positive() ? "+ " : "- "); that->body()->Accept(this, data); - stream()->Add(")"); + os_ << ")"; return NULL; } void* RegExpUnparser::VisitBackReference(RegExpBackReference* that, void* data) { - stream()->Add("(<- %i)", that->index()); + os_ << "(<- " << that->index() << ")"; return NULL; } void* RegExpUnparser::VisitEmpty(RegExpEmpty* that, void* data) { - stream()->Put('%'); + os_ << '%'; return NULL; } -SmartArrayPointer<const char> RegExpTree::ToString(Zone* zone) { - RegExpUnparser unparser(zone); +OStream& RegExpTree::Print(OStream& os, Zone* zone) { // NOLINT + RegExpUnparser unparser(os, zone); Accept(&unparser, NULL); - return unparser.ToString(); + return os; } RegExpDisjunction::RegExpDisjunction(ZoneList<RegExpTree*>* alternatives) : alternatives_(alternatives) { - ASSERT(alternatives->length() > 1); + DCHECK(alternatives->length() > 1); RegExpTree* first_alternative = alternatives->at(0); min_match_ = first_alternative->min_match(); max_match_ = first_alternative->max_match(); @@ -996,7 +982,7 @@ static int IncreaseBy(int previous, int increase) { RegExpAlternative::RegExpAlternative(ZoneList<RegExpTree*>* nodes) : nodes_(nodes) { - ASSERT(nodes->length() > 1); + DCHECK(nodes->length() > 1); min_match_ = 0; max_match_ = 0; for (int i = 0; i < nodes->length(); i++) { @@ -1035,10 +1021,16 @@ CaseClause::CaseClause(Zone* zone, void AstConstructionVisitor::Visit##NodeType(NodeType* node) { \ increase_node_count(); \ set_dont_optimize_reason(k##NodeType); \ - add_flag(kDontInline); \ add_flag(kDontSelfOptimize); \ } -#define DONT_SELFOPTIMIZE_NODE(NodeType) \ +#define DONT_OPTIMIZE_NODE_WITH_FEEDBACK_SLOTS(NodeType) \ + void AstConstructionVisitor::Visit##NodeType(NodeType* node) { \ + increase_node_count(); \ + add_slot_node(node); \ + set_dont_optimize_reason(k##NodeType); \ + add_flag(kDontSelfOptimize); \ + } +#define DONT_SELFOPTIMIZE_NODE(NodeType) \ void AstConstructionVisitor::Visit##NodeType(NodeType* node) { \ increase_node_count(); \ add_flag(kDontSelfOptimize); \ @@ -1053,7 +1045,6 @@ CaseClause::CaseClause(Zone* zone, void AstConstructionVisitor::Visit##NodeType(NodeType* node) { \ increase_node_count(); \ set_dont_optimize_reason(k##NodeType); \ - add_flag(kDontInline); \ add_flag(kDontSelfOptimize); \ add_flag(kDontCache); \ } @@ -1077,19 +1068,21 @@ REGULAR_NODE(RegExpLiteral) REGULAR_NODE(FunctionLiteral) REGULAR_NODE(Assignment) REGULAR_NODE(Throw) -REGULAR_NODE(Property) REGULAR_NODE(UnaryOperation) REGULAR_NODE(CountOperation) REGULAR_NODE(BinaryOperation) REGULAR_NODE(CompareOperation) REGULAR_NODE(ThisFunction) + 
REGULAR_NODE_WITH_FEEDBACK_SLOTS(Call) REGULAR_NODE_WITH_FEEDBACK_SLOTS(CallNew) +REGULAR_NODE_WITH_FEEDBACK_SLOTS(Property) // In theory, for VariableProxy we'd have to add: -// if (node->var()->IsLookupSlot()) add_flag(kDontInline); +// if (node->var()->IsLookupSlot()) +// set_dont_optimize_reason(kReferenceToAVariableWhichRequiresDynamicLookup); // But node->var() is usually not bound yet at VariableProxy creation time, and // LOOKUP variables only result from constructs that cannot be inlined anyway. -REGULAR_NODE(VariableProxy) +REGULAR_NODE_WITH_FEEDBACK_SLOTS(VariableProxy) // We currently do not optimize any modules. DONT_OPTIMIZE_NODE(ModuleDeclaration) @@ -1099,36 +1092,30 @@ DONT_OPTIMIZE_NODE(ModuleVariable) DONT_OPTIMIZE_NODE(ModulePath) DONT_OPTIMIZE_NODE(ModuleUrl) DONT_OPTIMIZE_NODE(ModuleStatement) -DONT_OPTIMIZE_NODE(Yield) DONT_OPTIMIZE_NODE(WithStatement) DONT_OPTIMIZE_NODE(TryCatchStatement) DONT_OPTIMIZE_NODE(TryFinallyStatement) DONT_OPTIMIZE_NODE(DebuggerStatement) DONT_OPTIMIZE_NODE(NativeFunctionLiteral) +DONT_OPTIMIZE_NODE_WITH_FEEDBACK_SLOTS(Yield) + DONT_SELFOPTIMIZE_NODE(DoWhileStatement) DONT_SELFOPTIMIZE_NODE(WhileStatement) DONT_SELFOPTIMIZE_NODE(ForStatement) -DONT_SELFOPTIMIZE_NODE_WITH_FEEDBACK_SLOTS(ForInStatement) DONT_SELFOPTIMIZE_NODE(ForOfStatement) +DONT_SELFOPTIMIZE_NODE_WITH_FEEDBACK_SLOTS(ForInStatement) + DONT_CACHE_NODE(ModuleLiteral) void AstConstructionVisitor::VisitCallRuntime(CallRuntime* node) { increase_node_count(); + add_slot_node(node); if (node->is_jsruntime()) { - // Don't try to inline JS runtime calls because we don't (currently) even - // optimize them. - add_flag(kDontInline); - } else if (node->function()->intrinsic_type == Runtime::INLINE && - (node->name()->IsOneByteEqualTo( - STATIC_ASCII_VECTOR("_ArgumentsLength")) || - node->name()->IsOneByteEqualTo(STATIC_ASCII_VECTOR("_Arguments")))) { - // Don't inline the %_ArgumentsLength or %_Arguments because their - // implementation will not work. There is no stack frame to get them - // from. - add_flag(kDontInline); + // Don't try to optimize JS runtime calls because we bailout on them. + set_dont_optimize_reason(kCallToAJavaScriptRuntimeFunction); } } @@ -1139,17 +1126,17 @@ void AstConstructionVisitor::VisitCallRuntime(CallRuntime* node) { Handle<String> Literal::ToString() { - if (value_->IsString()) return Handle<String>::cast(value_); - ASSERT(value_->IsNumber()); + if (value_->IsString()) return value_->AsString()->string(); + DCHECK(value_->IsNumber()); char arr[100]; Vector<char> buffer(arr, ARRAY_SIZE(arr)); const char* str; - if (value_->IsSmi()) { + if (value()->IsSmi()) { // Optimization only, the heap number case would subsume this. 
- OS::SNPrintF(buffer, "%d", Smi::cast(*value_)->value()); + SNPrintF(buffer, "%d", Smi::cast(*value())->value()); str = arr; } else { - str = DoubleToCString(value_->Number(), buffer); + str = DoubleToCString(value()->Number(), buffer); } return isolate_->factory()->NewStringFromAsciiChecked(str); } diff --git a/deps/v8/src/ast.h b/deps/v8/src/ast.h index 0115d988272..e18fdc79c71 100644 --- a/deps/v8/src/ast.h +++ b/deps/v8/src/ast.h @@ -5,23 +5,24 @@ #ifndef V8_AST_H_ #define V8_AST_H_ -#include "v8.h" - -#include "assembler.h" -#include "factory.h" -#include "feedback-slots.h" -#include "isolate.h" -#include "jsregexp.h" -#include "list-inl.h" -#include "runtime.h" -#include "small-pointer-list.h" -#include "smart-pointers.h" -#include "token.h" -#include "types.h" -#include "utils.h" -#include "variables.h" -#include "interface.h" -#include "zone-inl.h" +#include "src/v8.h" + +#include "src/assembler.h" +#include "src/ast-value-factory.h" +#include "src/factory.h" +#include "src/feedback-slots.h" +#include "src/interface.h" +#include "src/isolate.h" +#include "src/jsregexp.h" +#include "src/list-inl.h" +#include "src/runtime.h" +#include "src/small-pointer-list.h" +#include "src/smart-pointers.h" +#include "src/token.h" +#include "src/types.h" +#include "src/utils.h" +#include "src/variables.h" +#include "src/zone-inl.h" namespace v8 { namespace internal { @@ -111,6 +112,7 @@ class BreakableStatement; class Expression; class IterationStatement; class MaterializedLiteral; +class OStream; class Statement; class TargetCollector; class TypeFeedbackOracle; @@ -148,7 +150,6 @@ typedef ZoneList<Handle<Object> > ZoneObjectList; enum AstPropertiesFlag { - kDontInline, kDontSelfOptimize, kDontSoftInline, kDontCache @@ -264,7 +265,7 @@ class SmallMapList V8_FINAL { int length() const { return list_.length(); } void AddMapIfMissing(Handle<Map> map, Zone* zone) { - if (!Map::CurrentMapForDeprecated(map).ToHandle(&map)) return; + if (!Map::TryUpdate(map).ToHandle(&map)) return; for (int i = 0; i < length(); ++i) { if (at(i).is_identical_to(map)) return; } @@ -343,6 +344,11 @@ class Expression : public AstNode { Bounds bounds() const { return bounds_; } void set_bounds(Bounds bounds) { bounds_ = bounds; } + // Whether the expression is parenthesized + unsigned parenthesization_level() const { return parenthesization_level_; } + bool is_parenthesized() const { return parenthesization_level_ > 0; } + void increase_parenthesization_level() { ++parenthesization_level_; } + // Type feedback information for assignments and properties. virtual bool IsMonomorphic() { UNREACHABLE(); @@ -367,14 +373,19 @@ class Expression : public AstNode { protected: Expression(Zone* zone, int pos) : AstNode(pos), + zone_(zone), bounds_(Bounds::Unbounded(zone)), + parenthesization_level_(0), id_(GetNextId(zone)), test_id_(GetNextId(zone)) {} void set_to_boolean_types(byte types) { to_boolean_types_ = types; } + Zone* zone_; + private: Bounds bounds_; byte to_boolean_types_; + unsigned parenthesization_level_; const BailoutId id_; const TypeFeedbackId test_id_; @@ -390,7 +401,7 @@ class BreakableStatement : public Statement { // The labels associated with this statement. May be NULL; // if it is != NULL, guaranteed to contain at least one entry. - ZoneStringList* labels() const { return labels_; } + ZoneList<const AstRawString*>* labels() const { return labels_; } // Type testing & conversion. 
virtual BreakableStatement* AsBreakableStatement() V8_FINAL V8_OVERRIDE { @@ -410,19 +421,19 @@ class BreakableStatement : public Statement { protected: BreakableStatement( - Zone* zone, ZoneStringList* labels, + Zone* zone, ZoneList<const AstRawString*>* labels, BreakableType breakable_type, int position) : Statement(zone, position), labels_(labels), breakable_type_(breakable_type), entry_id_(GetNextId(zone)), exit_id_(GetNextId(zone)) { - ASSERT(labels == NULL || labels->length() > 0); + DCHECK(labels == NULL || labels->length() > 0); } private: - ZoneStringList* labels_; + ZoneList<const AstRawString*>* labels_; BreakableType breakable_type_; Label break_target_; const BailoutId entry_id_; @@ -441,6 +452,8 @@ class Block V8_FINAL : public BreakableStatement { ZoneList<Statement*>* statements() { return &statements_; } bool is_initializer_block() const { return is_initializer_block_; } + BailoutId DeclsId() const { return decls_id_; } + virtual bool IsJump() const V8_OVERRIDE { return !statements_.is_empty() && statements_.last()->IsJump() && labels() == NULL; // Good enough as an approximation... @@ -451,19 +464,21 @@ class Block V8_FINAL : public BreakableStatement { protected: Block(Zone* zone, - ZoneStringList* labels, + ZoneList<const AstRawString*>* labels, int capacity, bool is_initializer_block, int pos) : BreakableStatement(zone, labels, TARGET_FOR_NAMED_ONLY, pos), statements_(capacity, zone), is_initializer_block_(is_initializer_block), + decls_id_(GetNextId(zone)), scope_(NULL) { } private: ZoneList<Statement*> statements_; bool is_initializer_block_; + const BailoutId decls_id_; Scope* scope_; }; @@ -486,7 +501,7 @@ class Declaration : public AstNode { proxy_(proxy), mode_(mode), scope_(scope) { - ASSERT(IsDeclaredVariableMode(mode)); + DCHECK(IsDeclaredVariableMode(mode)); } private: @@ -537,8 +552,8 @@ class FunctionDeclaration V8_FINAL : public Declaration { : Declaration(zone, proxy, mode, scope, pos), fun_(fun) { // At the moment there are no "const functions" in JavaScript... 
- ASSERT(mode == VAR || mode == LET); - ASSERT(fun != NULL); + DCHECK(mode == VAR || mode == LET); + DCHECK(fun != NULL); } private: @@ -658,18 +673,15 @@ class ModulePath V8_FINAL : public Module { DECLARE_NODE_TYPE(ModulePath) Module* module() const { return module_; } - Handle<String> name() const { return name_; } + Handle<String> name() const { return name_->string(); } protected: - ModulePath(Zone* zone, Module* module, Handle<String> name, int pos) - : Module(zone, pos), - module_(module), - name_(name) { - } + ModulePath(Zone* zone, Module* module, const AstRawString* name, int pos) + : Module(zone, pos), module_(module), name_(name) {} private: Module* module_; - Handle<String> name_; + const AstRawString* name_; }; @@ -726,7 +738,7 @@ class IterationStatement : public BreakableStatement { Label* continue_target() { return &continue_target_; } protected: - IterationStatement(Zone* zone, ZoneStringList* labels, int pos) + IterationStatement(Zone* zone, ZoneList<const AstRawString*>* labels, int pos) : BreakableStatement(zone, labels, TARGET_FOR_ANONYMOUS, pos), body_(NULL), osr_entry_id_(GetNextId(zone)) { @@ -760,7 +772,7 @@ class DoWhileStatement V8_FINAL : public IterationStatement { BailoutId BackEdgeId() const { return back_edge_id_; } protected: - DoWhileStatement(Zone* zone, ZoneStringList* labels, int pos) + DoWhileStatement(Zone* zone, ZoneList<const AstRawString*>* labels, int pos) : IterationStatement(zone, labels, pos), cond_(NULL), continue_id_(GetNextId(zone)), @@ -797,7 +809,7 @@ class WhileStatement V8_FINAL : public IterationStatement { BailoutId BodyId() const { return body_id_; } protected: - WhileStatement(Zone* zone, ZoneStringList* labels, int pos) + WhileStatement(Zone* zone, ZoneList<const AstRawString*>* labels, int pos) : IterationStatement(zone, labels, pos), cond_(NULL), may_have_function_literal_(true), @@ -848,7 +860,7 @@ class ForStatement V8_FINAL : public IterationStatement { void set_loop_variable(Variable* var) { loop_variable_ = var; } protected: - ForStatement(Zone* zone, ZoneStringList* labels, int pos) + ForStatement(Zone* zone, ZoneList<const AstRawString*>* labels, int pos) : IterationStatement(zone, labels, pos), init_(NULL), cond_(NULL), @@ -890,11 +902,8 @@ class ForEachStatement : public IterationStatement { Expression* subject() const { return subject_; } protected: - ForEachStatement(Zone* zone, ZoneStringList* labels, int pos) - : IterationStatement(zone, labels, pos), - each_(NULL), - subject_(NULL) { - } + ForEachStatement(Zone* zone, ZoneList<const AstRawString*>* labels, int pos) + : IterationStatement(zone, labels, pos), each_(NULL), subject_(NULL) {} private: Expression* each_; @@ -916,7 +925,7 @@ class ForInStatement V8_FINAL : public ForEachStatement, virtual void SetFirstFeedbackSlot(int slot) { for_in_feedback_slot_ = slot; } int ForInFeedbackSlot() { - ASSERT(for_in_feedback_slot_ != kInvalidFeedbackSlot); + DCHECK(for_in_feedback_slot_ != kInvalidFeedbackSlot); return for_in_feedback_slot_; } @@ -930,7 +939,7 @@ class ForInStatement V8_FINAL : public ForEachStatement, virtual BailoutId StackCheckId() const V8_OVERRIDE { return body_id_; } protected: - ForInStatement(Zone* zone, ZoneStringList* labels, int pos) + ForInStatement(Zone* zone, ZoneList<const AstRawString*>* labels, int pos) : ForEachStatement(zone, labels, pos), for_in_type_(SLOW_FOR_IN), for_in_feedback_slot_(kInvalidFeedbackSlot), @@ -967,7 +976,7 @@ class ForOfStatement V8_FINAL : public ForEachStatement { return subject(); } - // var iterator = iterable; + 
// var iterator = subject[Symbol.iterator](); Expression* assign_iterator() const { return assign_iterator_; } @@ -993,7 +1002,7 @@ class ForOfStatement V8_FINAL : public ForEachStatement { BailoutId BackEdgeId() const { return back_edge_id_; } protected: - ForOfStatement(Zone* zone, ZoneStringList* labels, int pos) + ForOfStatement(Zone* zone, ZoneList<const AstRawString*>* labels, int pos) : ForEachStatement(zone, labels, pos), assign_iterator_(NULL), next_result_(NULL), @@ -1153,7 +1162,7 @@ class SwitchStatement V8_FINAL : public BreakableStatement { ZoneList<CaseClause*>* cases() const { return cases_; } protected: - SwitchStatement(Zone* zone, ZoneStringList* labels, int pos) + SwitchStatement(Zone* zone, ZoneList<const AstRawString*>* labels, int pos) : BreakableStatement(zone, labels, TARGET_FOR_ANONYMOUS, pos), tag_(NULL), cases_(NULL) { } @@ -1333,40 +1342,28 @@ class Literal V8_FINAL : public Expression { DECLARE_NODE_TYPE(Literal) virtual bool IsPropertyName() const V8_OVERRIDE { - if (value_->IsInternalizedString()) { - uint32_t ignored; - return !String::cast(*value_)->AsArrayIndex(&ignored); - } - return false; + return value_->IsPropertyName(); } Handle<String> AsPropertyName() { - ASSERT(IsPropertyName()); - return Handle<String>::cast(value_); + DCHECK(IsPropertyName()); + return Handle<String>::cast(value()); } - virtual bool ToBooleanIsTrue() const V8_OVERRIDE { - return value_->BooleanValue(); - } - virtual bool ToBooleanIsFalse() const V8_OVERRIDE { - return !value_->BooleanValue(); + const AstRawString* AsRawPropertyName() { + DCHECK(IsPropertyName()); + return value_->AsString(); } - // Identity testers. - bool IsNull() const { - ASSERT(!value_.is_null()); - return value_->IsNull(); - } - bool IsTrue() const { - ASSERT(!value_.is_null()); - return value_->IsTrue(); + virtual bool ToBooleanIsTrue() const V8_OVERRIDE { + return value()->BooleanValue(); } - bool IsFalse() const { - ASSERT(!value_.is_null()); - return value_->IsFalse(); + virtual bool ToBooleanIsFalse() const V8_OVERRIDE { + return !value()->BooleanValue(); } - Handle<Object> value() const { return value_; } + Handle<Object> value() const { return value_->value(); } + const AstValue* raw_value() const { return value_; } // Support for using Literal as a HashMap key. NOTE: Currently, this works // only for string and number literals! @@ -1381,7 +1378,7 @@ class Literal V8_FINAL : public Expression { TypeFeedbackId LiteralFeedbackId() const { return reuse(id()); } protected: - Literal(Zone* zone, Handle<Object> value, int position) + Literal(Zone* zone, const AstValue* value, int position) : Expression(zone, position), value_(value), isolate_(zone->isolate()) { } @@ -1389,7 +1386,7 @@ class Literal V8_FINAL : public Expression { private: Handle<String> ToString(); - Handle<Object> value_; + const AstValue* value_; // TODO(dcarney): remove. this is only needed for Match and Hash. Isolate* isolate_; }; @@ -1404,7 +1401,7 @@ class MaterializedLiteral : public Expression { int depth() const { // only callable after initialization. - ASSERT(depth_ >= 1); + DCHECK(depth_ >= 1); return depth_; } @@ -1424,7 +1421,7 @@ class MaterializedLiteral : public Expression { friend class CompileTimeValue; void set_depth(int depth) { - ASSERT(depth >= 1); + DCHECK(depth >= 1); depth_ = depth; } @@ -1460,7 +1457,8 @@ class ObjectLiteralProperty V8_FINAL : public ZoneObject { PROTOTYPE // Property is __proto__. 
}; - ObjectLiteralProperty(Zone* zone, Literal* key, Expression* value); + ObjectLiteralProperty(Zone* zone, AstValueFactory* ast_value_factory, + Literal* key, Expression* value); Literal* key() { return key_; } Expression* value() { return value_; } @@ -1518,6 +1516,13 @@ class ObjectLiteral V8_FINAL : public MaterializedLiteral { // marked expressions, no store code is emitted. void CalculateEmitStore(Zone* zone); + // Assemble bitfield of flags for the CreateObjectLiteral helper. + int ComputeFlags() const { + int flags = fast_elements() ? kFastElements : kNoFlags; + flags |= has_function() ? kHasFunction : kNoFlags; + return flags; + } + enum Flags { kNoFlags = 0, kFastElements = 1, @@ -1559,13 +1564,13 @@ class RegExpLiteral V8_FINAL : public MaterializedLiteral { public: DECLARE_NODE_TYPE(RegExpLiteral) - Handle<String> pattern() const { return pattern_; } - Handle<String> flags() const { return flags_; } + Handle<String> pattern() const { return pattern_->string(); } + Handle<String> flags() const { return flags_->string(); } protected: RegExpLiteral(Zone* zone, - Handle<String> pattern, - Handle<String> flags, + const AstRawString* pattern, + const AstRawString* flags, int literal_index, int pos) : MaterializedLiteral(zone, literal_index, pos), @@ -1575,8 +1580,8 @@ class RegExpLiteral V8_FINAL : public MaterializedLiteral { } private: - Handle<String> pattern_; - Handle<String> flags_; + const AstRawString* pattern_; + const AstRawString* flags_; }; @@ -1597,6 +1602,13 @@ class ArrayLiteral V8_FINAL : public MaterializedLiteral { // Populate the constant elements fixed array. void BuildConstantElements(Isolate* isolate); + // Assemble bitfield of flags for the CreateArrayLiteral helper. + int ComputeFlags() const { + int flags = depth() == 1 ? kShallowElements : kNoFlags; + flags |= ArrayLiteral::kDisableMementos; + return flags; + } + enum Flags { kNoFlags = 0, kShallowElements = 1, @@ -1619,7 +1631,7 @@ class ArrayLiteral V8_FINAL : public MaterializedLiteral { }; -class VariableProxy V8_FINAL : public Expression { +class VariableProxy V8_FINAL : public Expression, public FeedbackSlotInterface { public: DECLARE_NODE_TYPE(VariableProxy) @@ -1627,47 +1639,46 @@ class VariableProxy V8_FINAL : public Expression { return var_ == NULL ? true : var_->IsValidReference(); } - bool IsVariable(Handle<String> n) const { - return !is_this() && name().is_identical_to(n); - } - bool IsArguments() const { return var_ != NULL && var_->is_arguments(); } - bool IsLValue() const { return is_lvalue_; } - - Handle<String> name() const { return name_; } + Handle<String> name() const { return name_->string(); } + const AstRawString* raw_name() const { return name_; } Variable* var() const { return var_; } bool is_this() const { return is_this_; } Interface* interface() const { return interface_; } - - void MarkAsTrivial() { is_trivial_ = true; } - void MarkAsLValue() { is_lvalue_ = true; } + bool is_assigned() const { return is_assigned_; } + void set_is_assigned() { is_assigned_ = true; } // Bind this proxy to the variable var. Interfaces must match. void BindTo(Variable* var); + virtual int ComputeFeedbackSlotCount() { return FLAG_vector_ics ? 
1 : 0; } + virtual void SetFirstFeedbackSlot(int slot) { + variable_feedback_slot_ = slot; + } + + int VariableFeedbackSlot() { return variable_feedback_slot_; } + protected: VariableProxy(Zone* zone, Variable* var, int position); VariableProxy(Zone* zone, - Handle<String> name, + const AstRawString* name, bool is_this, Interface* interface, int position); - Handle<String> name_; + const AstRawString* name_; Variable* var_; // resolved variable, or NULL bool is_this_; - bool is_trivial_; - // True if this variable proxy is being used in an assignment - // or with a increment/decrement operator. - bool is_lvalue_; + bool is_assigned_; Interface* interface_; + int variable_feedback_slot_; }; -class Property V8_FINAL : public Expression { +class Property V8_FINAL : public Expression, public FeedbackSlotInterface { public: DECLARE_NODE_TYPE(Property) @@ -1679,7 +1690,6 @@ class Property V8_FINAL : public Expression { BailoutId LoadId() const { return load_id_; } bool IsStringAccess() const { return is_string_access_; } - bool IsFunctionPrototype() const { return is_function_prototype_; } // Type feedback information. virtual bool IsMonomorphic() V8_OVERRIDE { @@ -1697,36 +1707,39 @@ class Property V8_FINAL : public Expression { } void set_is_uninitialized(bool b) { is_uninitialized_ = b; } void set_is_string_access(bool b) { is_string_access_ = b; } - void set_is_function_prototype(bool b) { is_function_prototype_ = b; } void mark_for_call() { is_for_call_ = true; } bool IsForCall() { return is_for_call_; } TypeFeedbackId PropertyFeedbackId() { return reuse(id()); } + virtual int ComputeFeedbackSlotCount() { return FLAG_vector_ics ? 1 : 0; } + virtual void SetFirstFeedbackSlot(int slot) { + property_feedback_slot_ = slot; + } + + int PropertyFeedbackSlot() const { return property_feedback_slot_; } + protected: - Property(Zone* zone, - Expression* obj, - Expression* key, - int pos) + Property(Zone* zone, Expression* obj, Expression* key, int pos) : Expression(zone, pos), obj_(obj), key_(key), load_id_(GetNextId(zone)), + property_feedback_slot_(kInvalidFeedbackSlot), is_for_call_(false), is_uninitialized_(false), - is_string_access_(false), - is_function_prototype_(false) { } + is_string_access_(false) {} private: Expression* obj_; Expression* key_; const BailoutId load_id_; + int property_feedback_slot_; SmallMapList receiver_types_; bool is_for_call_ : 1; bool is_uninitialized_ : 1; bool is_string_access_ : 1; - bool is_function_prototype_ : 1; }; @@ -1762,11 +1775,25 @@ class Call V8_FINAL : public Expression, public FeedbackSlotInterface { return !target_.is_null(); } + bool global_call() const { + VariableProxy* proxy = expression_->AsVariableProxy(); + return proxy != NULL && proxy->var()->IsUnallocated(); + } + + bool known_global_function() const { + return global_call() && !target_.is_null(); + } + Handle<JSFunction> target() { return target_; } Handle<Cell> cell() { return cell_; } + Handle<AllocationSite> allocation_site() { return allocation_site_; } + void set_target(Handle<JSFunction> target) { target_ = target; } + void set_allocation_site(Handle<AllocationSite> site) { + allocation_site_ = site; + } bool ComputeGlobalTarget(Handle<GlobalObject> global, LookupResult* lookup); BailoutId ReturnId() const { return return_id_; } @@ -1809,6 +1836,7 @@ class Call V8_FINAL : public Expression, public FeedbackSlotInterface { Handle<JSFunction> target_; Handle<Cell> cell_; + Handle<AllocationSite> allocation_site_; int call_feedback_slot_; const BailoutId return_id_; @@ -1831,12 +1859,12 
@@ class CallNew V8_FINAL : public Expression, public FeedbackSlotInterface { } int CallNewFeedbackSlot() { - ASSERT(callnew_feedback_slot_ != kInvalidFeedbackSlot); + DCHECK(callnew_feedback_slot_ != kInvalidFeedbackSlot); return callnew_feedback_slot_; } int AllocationSiteFeedbackSlot() { - ASSERT(callnew_feedback_slot_ != kInvalidFeedbackSlot); - ASSERT(FLAG_pretenuring_call_new); + DCHECK(callnew_feedback_slot_ != kInvalidFeedbackSlot); + DCHECK(FLAG_pretenuring_call_new); return callnew_feedback_slot_ + 1; } @@ -1883,32 +1911,48 @@ class CallNew V8_FINAL : public Expression, public FeedbackSlotInterface { // language construct. Instead it is used to call a C or JS function // with a set of arguments. This is used from the builtins that are // implemented in JavaScript (see "v8natives.js"). -class CallRuntime V8_FINAL : public Expression { +class CallRuntime V8_FINAL : public Expression, public FeedbackSlotInterface { public: DECLARE_NODE_TYPE(CallRuntime) - Handle<String> name() const { return name_; } + Handle<String> name() const { return raw_name_->string(); } + const AstRawString* raw_name() const { return raw_name_; } const Runtime::Function* function() const { return function_; } ZoneList<Expression*>* arguments() const { return arguments_; } bool is_jsruntime() const { return function_ == NULL; } + // Type feedback information. + virtual int ComputeFeedbackSlotCount() { + return (FLAG_vector_ics && is_jsruntime()) ? 1 : 0; + } + virtual void SetFirstFeedbackSlot(int slot) { + callruntime_feedback_slot_ = slot; + } + + int CallRuntimeFeedbackSlot() { + DCHECK(!is_jsruntime() || + callruntime_feedback_slot_ != kInvalidFeedbackSlot); + return callruntime_feedback_slot_; + } + TypeFeedbackId CallRuntimeFeedbackId() const { return reuse(id()); } protected: CallRuntime(Zone* zone, - Handle<String> name, + const AstRawString* name, const Runtime::Function* function, ZoneList<Expression*>* arguments, int pos) : Expression(zone, pos), - name_(name), + raw_name_(name), function_(function), arguments_(arguments) { } private: - Handle<String> name_; + const AstRawString* raw_name_; const Runtime::Function* function_; ZoneList<Expression*>* arguments_; + int callruntime_feedback_slot_; }; @@ -1935,7 +1979,7 @@ class UnaryOperation V8_FINAL : public Expression { expression_(expression), materialize_true_id_(GetNextId(zone)), materialize_false_id_(GetNextId(zone)) { - ASSERT(Token::IsUnaryOp(op)); + DCHECK(Token::IsUnaryOp(op)); } private: @@ -1983,7 +2027,7 @@ class BinaryOperation V8_FINAL : public Expression { left_(left), right_(right), right_id_(GetNextId(zone)) { - ASSERT(Token::IsBinaryOp(op)); + DCHECK(Token::IsBinaryOp(op)); } private: @@ -2091,7 +2135,7 @@ class CompareOperation V8_FINAL : public Expression { left_(left), right_(right), combined_type_(Type::None(zone)) { - ASSERT(Token::IsCompareOp(op)); + DCHECK(Token::IsCompareOp(op)); } private: @@ -2181,7 +2225,7 @@ class Assignment V8_FINAL : public Expression { template<class Visitor> void Init(Zone* zone, AstNodeFactory<Visitor>* factory) { - ASSERT(Token::IsAssignmentOp(op_)); + DCHECK(Token::IsAssignmentOp(op_)); if (is_compound()) { binary_operation_ = factory->NewBinaryOperation( binary_op(), target_, value_, position() + 1); @@ -2202,7 +2246,7 @@ class Assignment V8_FINAL : public Expression { }; -class Yield V8_FINAL : public Expression { +class Yield V8_FINAL : public Expression, public FeedbackSlotInterface { public: DECLARE_NODE_TYPE(Yield) @@ -2221,14 +2265,37 @@ class Yield V8_FINAL : public Expression { // locates 
the catch handler in the handler table, and is equivalent to // TryCatchStatement::index(). int index() const { - ASSERT(yield_kind() == DELEGATING); + DCHECK(yield_kind() == DELEGATING); return index_; } void set_index(int index) { - ASSERT(yield_kind() == DELEGATING); + DCHECK(yield_kind() == DELEGATING); index_ = index; } + // Type feedback information. + virtual int ComputeFeedbackSlotCount() { + return (FLAG_vector_ics && yield_kind() == DELEGATING) ? 3 : 0; + } + virtual void SetFirstFeedbackSlot(int slot) { + yield_first_feedback_slot_ = slot; + } + + int KeyedLoadFeedbackSlot() { + DCHECK(yield_first_feedback_slot_ != kInvalidFeedbackSlot); + return yield_first_feedback_slot_; + } + + int DoneFeedbackSlot() { + DCHECK(yield_first_feedback_slot_ != kInvalidFeedbackSlot); + return yield_first_feedback_slot_ + 1; + } + + int ValueFeedbackSlot() { + DCHECK(yield_first_feedback_slot_ != kInvalidFeedbackSlot); + return yield_first_feedback_slot_ + 2; + } + protected: Yield(Zone* zone, Expression* generator_object, @@ -2239,13 +2306,15 @@ class Yield V8_FINAL : public Expression { generator_object_(generator_object), expression_(expression), yield_kind_(yield_kind), - index_(-1) { } + index_(-1), + yield_first_feedback_slot_(kInvalidFeedbackSlot) { } private: Expression* generator_object_; Expression* expression_; Kind yield_kind_; int index_; + int yield_first_feedback_slot_; }; @@ -2287,14 +2356,22 @@ class FunctionLiteral V8_FINAL : public Expression { kNotParenthesized }; - enum IsGeneratorFlag { - kIsGenerator, - kNotGenerator + enum KindFlag { + kNormalFunction, + kArrowFunction, + kGeneratorFunction + }; + + enum ArityRestriction { + NORMAL_ARITY, + GETTER_ARITY, + SETTER_ARITY }; DECLARE_NODE_TYPE(FunctionLiteral) - Handle<String> name() const { return name_; } + Handle<String> name() const { return raw_name_->string(); } + const AstRawString* raw_name() const { return raw_name_; } Scope* scope() const { return scope_; } ZoneList<Statement*>* body() const { return body_; } void set_function_token_position(int pos) { function_token_position_ = pos; } @@ -2317,13 +2394,37 @@ class FunctionLiteral V8_FINAL : public Expression { void InitializeSharedInfo(Handle<Code> code); Handle<String> debug_name() const { - if (name_->length() > 0) return name_; + if (raw_name_ != NULL && !raw_name_->IsEmpty()) { + return raw_name_->string(); + } return inferred_name(); } - Handle<String> inferred_name() const { return inferred_name_; } + Handle<String> inferred_name() const { + if (!inferred_name_.is_null()) { + DCHECK(raw_inferred_name_ == NULL); + return inferred_name_; + } + if (raw_inferred_name_ != NULL) { + return raw_inferred_name_->string(); + } + UNREACHABLE(); + return Handle<String>(); + } + + // Only one of {set_inferred_name, set_raw_inferred_name} should be called. void set_inferred_name(Handle<String> inferred_name) { + DCHECK(!inferred_name.is_null()); inferred_name_ = inferred_name; + DCHECK(raw_inferred_name_== NULL || raw_inferred_name_->IsEmpty()); + raw_inferred_name_ = NULL; + } + + void set_raw_inferred_name(const AstString* raw_inferred_name) { + DCHECK(raw_inferred_name != NULL); + raw_inferred_name_ = raw_inferred_name; + DCHECK(inferred_name_.is_null()); + inferred_name_ = Handle<String>(); } // shared_info may be null if it's not cached in full code. 
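The FunctionLiteral hunks above keep a function's inferred name in one of two representations — a heap Handle&lt;String&gt; (inferred_name_) or a parser-zone string (raw_inferred_name_) — and the DCHECKs insist that only one is populated at a time, with inferred_name() resolving whichever is set. A simplified sketch of that invariant, assuming nothing from V8; NameHolder and the plain pointers below stand in for the real handle types:

// Hedged sketch of the one-of-two-representations invariant guarded by
// the DCHECKs in set_inferred_name / set_raw_inferred_name above.
#include <cassert>
#include <string>

class NameHolder {
 public:
  NameHolder() : heap_name_(NULL), raw_name_(NULL) {}

  // Exactly one setter should ever be called per object.
  void set_heap_name(const std::string* name) {
    assert(name != NULL);
    assert(raw_name_ == NULL);   // the other representation must stay unset
    heap_name_ = name;
  }
  void set_raw_name(const char* name) {
    assert(name != NULL);
    assert(heap_name_ == NULL);
    raw_name_ = name;
  }

  // Resolution mirrors inferred_name(): return whichever form exists.
  std::string name() const {
    if (heap_name_ != NULL) return *heap_name_;
    if (raw_name_ != NULL) return std::string(raw_name_);
    assert(false && "no name was ever set");
    return std::string();
  }

 private:
  const std::string* heap_name_;  // stands in for Handle<String>
  const char* raw_name_;          // stands in for const AstString*
};

The point of the split is lifetime: the parser runs before heap strings exist, so it records zone-allocated AstStrings; later phases that do have an isolate can materialize or replace them with real handles.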
@@ -2350,9 +2451,8 @@ class FunctionLiteral V8_FINAL : public Expression { bitfield_ = IsParenthesized::update(bitfield_, kIsParenthesized); } - bool is_generator() { - return IsGenerator::decode(bitfield_) == kIsGenerator; - } + bool is_generator() { return IsGenerator::decode(bitfield_); } + bool is_arrow() { return IsArrow::decode(bitfield_); } int ast_node_count() { return ast_properties_.node_count(); } AstProperties::Flags* flags() { return ast_properties_.flags(); } @@ -2369,46 +2469,43 @@ class FunctionLiteral V8_FINAL : public Expression { } protected: - FunctionLiteral(Zone* zone, - Handle<String> name, - Scope* scope, - ZoneList<Statement*>* body, - int materialized_literal_count, - int expected_property_count, - int handler_count, - int parameter_count, - FunctionType function_type, + FunctionLiteral(Zone* zone, const AstRawString* name, + AstValueFactory* ast_value_factory, Scope* scope, + ZoneList<Statement*>* body, int materialized_literal_count, + int expected_property_count, int handler_count, + int parameter_count, FunctionType function_type, ParameterFlag has_duplicate_parameters, IsFunctionFlag is_function, - IsParenthesizedFlag is_parenthesized, - IsGeneratorFlag is_generator, + IsParenthesizedFlag is_parenthesized, KindFlag kind, int position) : Expression(zone, position), - name_(name), + raw_name_(name), scope_(scope), body_(body), - inferred_name_(zone->isolate()->factory()->empty_string()), + raw_inferred_name_(ast_value_factory->empty_string()), dont_optimize_reason_(kNoReason), materialized_literal_count_(materialized_literal_count), expected_property_count_(expected_property_count), handler_count_(handler_count), parameter_count_(parameter_count), function_token_position_(RelocInfo::kNoPosition) { - bitfield_ = - IsExpression::encode(function_type != DECLARATION) | - IsAnonymous::encode(function_type == ANONYMOUS_EXPRESSION) | - Pretenure::encode(false) | - HasDuplicateParameters::encode(has_duplicate_parameters) | - IsFunction::encode(is_function) | - IsParenthesized::encode(is_parenthesized) | - IsGenerator::encode(is_generator); + bitfield_ = IsExpression::encode(function_type != DECLARATION) | + IsAnonymous::encode(function_type == ANONYMOUS_EXPRESSION) | + Pretenure::encode(false) | + HasDuplicateParameters::encode(has_duplicate_parameters) | + IsFunction::encode(is_function) | + IsParenthesized::encode(is_parenthesized) | + IsGenerator::encode(kind == kGeneratorFunction) | + IsArrow::encode(kind == kArrowFunction); } private: + const AstRawString* raw_name_; Handle<String> name_; Handle<SharedFunctionInfo> shared_info_; Scope* scope_; ZoneList<Statement*>* body_; + const AstString* raw_inferred_name_; Handle<String> inferred_name_; AstProperties ast_properties_; BailoutReason dont_optimize_reason_; @@ -2426,7 +2523,8 @@ class FunctionLiteral V8_FINAL : public Expression { class HasDuplicateParameters: public BitField<ParameterFlag, 3, 1> {}; class IsFunction: public BitField<IsFunctionFlag, 4, 1> {}; class IsParenthesized: public BitField<IsParenthesizedFlag, 5, 1> {}; - class IsGenerator: public BitField<IsGeneratorFlag, 6, 1> {}; + class IsGenerator : public BitField<bool, 6, 1> {}; + class IsArrow : public BitField<bool, 7, 1> {}; }; @@ -2434,16 +2532,16 @@ class NativeFunctionLiteral V8_FINAL : public Expression { public: DECLARE_NODE_TYPE(NativeFunctionLiteral) - Handle<String> name() const { return name_; } + Handle<String> name() const { return name_->string(); } v8::Extension* extension() const { return extension_; } protected: - 
NativeFunctionLiteral( - Zone* zone, Handle<String> name, v8::Extension* extension, int pos) + NativeFunctionLiteral(Zone* zone, const AstRawString* name, + v8::Extension* extension, int pos) : Expression(zone, pos), name_(name), extension_(extension) {} private: - Handle<String> name_; + const AstRawString* name_; v8::Extension* extension_; }; @@ -2489,7 +2587,7 @@ class RegExpTree : public ZoneObject { // expression. virtual Interval CaptureRegisters() { return Interval::Empty(); } virtual void AppendToText(RegExpText* text, Zone* zone); - SmartArrayPointer<const char> ToString(Zone* zone); + OStream& Print(OStream& os, Zone* zone); // NOLINT #define MAKE_ASTYPE(Name) \ virtual RegExp##Name* As##Name(); \ virtual bool Is##Name(); @@ -2935,7 +3033,8 @@ class AstNullVisitor BASE_EMBEDDED { template<class Visitor> class AstNodeFactory V8_FINAL BASE_EMBEDDED { public: - explicit AstNodeFactory(Zone* zone) : zone_(zone) { } + explicit AstNodeFactory(Zone* zone, AstValueFactory* ast_value_factory) + : zone_(zone), ast_value_factory_(ast_value_factory) {} Visitor* visitor() { return &visitor_; } @@ -2999,8 +3098,8 @@ class AstNodeFactory V8_FINAL BASE_EMBEDDED { VISIT_AND_RETURN(ModuleVariable, module) } - ModulePath* NewModulePath(Module* origin, Handle<String> name, int pos) { - ModulePath* module = new(zone_) ModulePath(zone_, origin, name, pos); + ModulePath* NewModulePath(Module* origin, const AstRawString* name, int pos) { + ModulePath* module = new (zone_) ModulePath(zone_, origin, name, pos); VISIT_AND_RETURN(ModulePath, module) } @@ -3009,7 +3108,7 @@ class AstNodeFactory V8_FINAL BASE_EMBEDDED { VISIT_AND_RETURN(ModuleUrl, module) } - Block* NewBlock(ZoneStringList* labels, + Block* NewBlock(ZoneList<const AstRawString*>* labels, int capacity, bool is_initializer_block, int pos) { @@ -3019,7 +3118,7 @@ class AstNodeFactory V8_FINAL BASE_EMBEDDED { } #define STATEMENT_WITH_LABELS(NodeType) \ - NodeType* New##NodeType(ZoneStringList* labels, int pos) { \ + NodeType* New##NodeType(ZoneList<const AstRawString*>* labels, int pos) { \ NodeType* stmt = new(zone_) NodeType(zone_, labels, pos); \ VISIT_AND_RETURN(NodeType, stmt); \ } @@ -3030,7 +3129,7 @@ class AstNodeFactory V8_FINAL BASE_EMBEDDED { #undef STATEMENT_WITH_LABELS ForEachStatement* NewForEachStatement(ForEachStatement::VisitMode visit_mode, - ZoneStringList* labels, + ZoneList<const AstRawString*>* labels, int pos) { switch (visit_mode) { case ForEachStatement::ENUMERATE: { @@ -3127,14 +3226,60 @@ class AstNodeFactory V8_FINAL BASE_EMBEDDED { VISIT_AND_RETURN(CaseClause, clause) } - Literal* NewLiteral(Handle<Object> handle, int pos) { - Literal* lit = new(zone_) Literal(zone_, handle, pos); + Literal* NewStringLiteral(const AstRawString* string, int pos) { + Literal* lit = + new (zone_) Literal(zone_, ast_value_factory_->NewString(string), pos); + VISIT_AND_RETURN(Literal, lit) + } + + // A JavaScript symbol (ECMA-262 edition 6). 
+ Literal* NewSymbolLiteral(const char* name, int pos) { + Literal* lit = + new (zone_) Literal(zone_, ast_value_factory_->NewSymbol(name), pos); VISIT_AND_RETURN(Literal, lit) } Literal* NewNumberLiteral(double number, int pos) { - return NewLiteral( - zone_->isolate()->factory()->NewNumber(number, TENURED), pos); + Literal* lit = new (zone_) + Literal(zone_, ast_value_factory_->NewNumber(number), pos); + VISIT_AND_RETURN(Literal, lit) + } + + Literal* NewSmiLiteral(int number, int pos) { + Literal* lit = + new (zone_) Literal(zone_, ast_value_factory_->NewSmi(number), pos); + VISIT_AND_RETURN(Literal, lit) + } + + Literal* NewBooleanLiteral(bool b, int pos) { + Literal* lit = + new (zone_) Literal(zone_, ast_value_factory_->NewBoolean(b), pos); + VISIT_AND_RETURN(Literal, lit) + } + + Literal* NewStringListLiteral(ZoneList<const AstRawString*>* strings, + int pos) { + Literal* lit = new (zone_) + Literal(zone_, ast_value_factory_->NewStringList(strings), pos); + VISIT_AND_RETURN(Literal, lit) + } + + Literal* NewNullLiteral(int pos) { + Literal* lit = + new (zone_) Literal(zone_, ast_value_factory_->NewNull(), pos); + VISIT_AND_RETURN(Literal, lit) + } + + Literal* NewUndefinedLiteral(int pos) { + Literal* lit = + new (zone_) Literal(zone_, ast_value_factory_->NewUndefined(), pos); + VISIT_AND_RETURN(Literal, lit) + } + + Literal* NewTheHoleLiteral(int pos) { + Literal* lit = + new (zone_) Literal(zone_, ast_value_factory_->NewTheHole(), pos); + VISIT_AND_RETURN(Literal, lit) } ObjectLiteral* NewObjectLiteral( @@ -3151,7 +3296,8 @@ class AstNodeFactory V8_FINAL BASE_EMBEDDED { ObjectLiteral::Property* NewObjectLiteralProperty(Literal* key, Expression* value) { - return new(zone_) ObjectLiteral::Property(zone_, key, value); + return new (zone_) + ObjectLiteral::Property(zone_, ast_value_factory_, key, value); } ObjectLiteral::Property* NewObjectLiteralProperty(bool is_getter, @@ -3159,12 +3305,12 @@ class AstNodeFactory V8_FINAL BASE_EMBEDDED { int pos) { ObjectLiteral::Property* prop = new(zone_) ObjectLiteral::Property(zone_, is_getter, value); - prop->set_key(NewLiteral(value->name(), pos)); + prop->set_key(NewStringLiteral(value->raw_name(), pos)); return prop; // Not an AST node, will not be visited. 
} - RegExpLiteral* NewRegExpLiteral(Handle<String> pattern, - Handle<String> flags, + RegExpLiteral* NewRegExpLiteral(const AstRawString* pattern, + const AstRawString* flags, int literal_index, int pos) { RegExpLiteral* lit = @@ -3186,7 +3332,7 @@ class AstNodeFactory V8_FINAL BASE_EMBEDDED { VISIT_AND_RETURN(VariableProxy, proxy) } - VariableProxy* NewVariableProxy(Handle<String> name, + VariableProxy* NewVariableProxy(const AstRawString* name, bool is_this, Interface* interface = Interface::NewValue(), int position = RelocInfo::kNoPosition) { @@ -3214,7 +3360,7 @@ class AstNodeFactory V8_FINAL BASE_EMBEDDED { VISIT_AND_RETURN(CallNew, call) } - CallRuntime* NewCallRuntime(Handle<String> name, + CallRuntime* NewCallRuntime(const AstRawString* name, const Runtime::Function* function, ZoneList<Expression*>* arguments, int pos) { @@ -3281,6 +3427,7 @@ class AstNodeFactory V8_FINAL BASE_EMBEDDED { Expression* expression, Yield::Kind yield_kind, int pos) { + if (!expression) expression = NewUndefinedLiteral(pos); Yield* yield = new(zone_) Yield( zone_, generator_object, expression, yield_kind, pos); VISIT_AND_RETURN(Yield, yield) @@ -3292,24 +3439,19 @@ class AstNodeFactory V8_FINAL BASE_EMBEDDED { } FunctionLiteral* NewFunctionLiteral( - Handle<String> name, - Scope* scope, - ZoneList<Statement*>* body, - int materialized_literal_count, - int expected_property_count, - int handler_count, - int parameter_count, + const AstRawString* name, AstValueFactory* ast_value_factory, + Scope* scope, ZoneList<Statement*>* body, int materialized_literal_count, + int expected_property_count, int handler_count, int parameter_count, FunctionLiteral::ParameterFlag has_duplicate_parameters, FunctionLiteral::FunctionType function_type, FunctionLiteral::IsFunctionFlag is_function, FunctionLiteral::IsParenthesizedFlag is_parenthesized, - FunctionLiteral::IsGeneratorFlag is_generator, - int position) { - FunctionLiteral* lit = new(zone_) FunctionLiteral( - zone_, name, scope, body, - materialized_literal_count, expected_property_count, handler_count, - parameter_count, function_type, has_duplicate_parameters, is_function, - is_parenthesized, is_generator, position); + FunctionLiteral::KindFlag kind, int position) { + FunctionLiteral* lit = new (zone_) FunctionLiteral( + zone_, name, ast_value_factory, scope, body, materialized_literal_count, + expected_property_count, handler_count, parameter_count, function_type, + has_duplicate_parameters, is_function, is_parenthesized, kind, + position); // Top-level literal doesn't count for the AST's properties. 
if (is_function == FunctionLiteral::kIsFunction) { visitor_.VisitFunctionLiteral(lit); @@ -3318,7 +3460,8 @@ class AstNodeFactory V8_FINAL BASE_EMBEDDED { } NativeFunctionLiteral* NewNativeFunctionLiteral( - Handle<String> name, v8::Extension* extension, int pos) { + const AstRawString* name, v8::Extension* extension, + int pos) { NativeFunctionLiteral* lit = new(zone_) NativeFunctionLiteral(zone_, name, extension, pos); VISIT_AND_RETURN(NativeFunctionLiteral, lit) @@ -3334,6 +3477,7 @@ class AstNodeFactory V8_FINAL BASE_EMBEDDED { private: Zone* zone_; Visitor visitor_; + AstValueFactory* ast_value_factory_; }; diff --git a/deps/v8/src/base/DEPS b/deps/v8/src/base/DEPS new file mode 100644 index 00000000000..e53cadff484 --- /dev/null +++ b/deps/v8/src/base/DEPS @@ -0,0 +1,7 @@ +include_rules = [ + "-include", + "+include/v8config.h", + "+include/v8stdint.h", + "-src", + "+src/base", +] diff --git a/deps/v8/src/atomicops.h b/deps/v8/src/base/atomicops.h similarity index 90% rename from deps/v8/src/atomicops.h rename to deps/v8/src/base/atomicops.h index 9289d171b8a..eba172f0be0 100644 --- a/deps/v8/src/atomicops.h +++ b/deps/v8/src/base/atomicops.h @@ -22,11 +22,11 @@ // to use these. // -#ifndef V8_ATOMICOPS_H_ -#define V8_ATOMICOPS_H_ +#ifndef V8_BASE_ATOMICOPS_H_ +#define V8_BASE_ATOMICOPS_H_ -#include "../include/v8.h" -#include "globals.h" +#include "include/v8stdint.h" +#include "src/base/build_config.h" #if defined(_WIN32) && defined(V8_HOST_ARCH_64_BIT) // windows.h #defines this (only on x64). This causes problems because the @@ -38,7 +38,7 @@ #endif namespace v8 { -namespace internal { +namespace base { typedef char Atomic8; typedef int32_t Atomic32; @@ -131,23 +131,25 @@ Atomic64 Acquire_Load(volatile const Atomic64* ptr); Atomic64 Release_Load(volatile const Atomic64* ptr); #endif // V8_HOST_ARCH_64_BIT -} } // namespace v8::internal +} } // namespace v8::base // Include our platform specific implementation. #if defined(THREAD_SANITIZER) -#include "atomicops_internals_tsan.h" +#include "src/base/atomicops_internals_tsan.h" #elif defined(_MSC_VER) && (V8_HOST_ARCH_IA32 || V8_HOST_ARCH_X64) -#include "atomicops_internals_x86_msvc.h" +#include "src/base/atomicops_internals_x86_msvc.h" #elif defined(__APPLE__) -#include "atomicops_internals_mac.h" +#include "src/base/atomicops_internals_mac.h" #elif defined(__GNUC__) && V8_HOST_ARCH_ARM64 -#include "atomicops_internals_arm64_gcc.h" +#include "src/base/atomicops_internals_arm64_gcc.h" #elif defined(__GNUC__) && V8_HOST_ARCH_ARM -#include "atomicops_internals_arm_gcc.h" +#include "src/base/atomicops_internals_arm_gcc.h" #elif defined(__GNUC__) && (V8_HOST_ARCH_IA32 || V8_HOST_ARCH_X64) -#include "atomicops_internals_x86_gcc.h" +#include "src/base/atomicops_internals_x86_gcc.h" #elif defined(__GNUC__) && V8_HOST_ARCH_MIPS -#include "atomicops_internals_mips_gcc.h" +#include "src/base/atomicops_internals_mips_gcc.h" +#elif defined(__GNUC__) && V8_HOST_ARCH_MIPS64 +#include "src/base/atomicops_internals_mips64_gcc.h" #else #error "Atomic operations are not supported on your platform" #endif @@ -155,7 +157,7 @@ Atomic64 Release_Load(volatile const Atomic64* ptr); // On some platforms we need additional declarations to make // AtomicWord compatible with our other Atomic* types. 
#if defined(__APPLE__) || defined(__OpenBSD__) -#include "atomicops_internals_atomicword_compat.h" +#include "src/base/atomicops_internals_atomicword_compat.h" #endif -#endif // V8_ATOMICOPS_H_ +#endif // V8_BASE_ATOMICOPS_H_ diff --git a/deps/v8/src/atomicops_internals_arm64_gcc.h b/deps/v8/src/base/atomicops_internals_arm64_gcc.h similarity index 97% rename from deps/v8/src/atomicops_internals_arm64_gcc.h rename to deps/v8/src/base/atomicops_internals_arm64_gcc.h index 36e30a90c1e..b01783e6a7e 100644 --- a/deps/v8/src/atomicops_internals_arm64_gcc.h +++ b/deps/v8/src/base/atomicops_internals_arm64_gcc.h @@ -4,11 +4,11 @@ // This file is an internal atomic implementation, use atomicops.h instead. -#ifndef V8_ATOMICOPS_INTERNALS_ARM_GCC_H_ -#define V8_ATOMICOPS_INTERNALS_ARM_GCC_H_ +#ifndef V8_BASE_ATOMICOPS_INTERNALS_ARM_GCC_H_ +#define V8_BASE_ATOMICOPS_INTERNALS_ARM_GCC_H_ namespace v8 { -namespace internal { +namespace base { inline void MemoryBarrier() { __asm__ __volatile__ ("dmb ish" ::: "memory"); // NOLINT @@ -311,6 +311,6 @@ inline Atomic64 Release_Load(volatile const Atomic64* ptr) { return *ptr; } -} } // namespace v8::internal +} } // namespace v8::base -#endif // V8_ATOMICOPS_INTERNALS_ARM_GCC_H_ +#endif // V8_BASE_ATOMICOPS_INTERNALS_ARM_GCC_H_ diff --git a/deps/v8/src/atomicops_internals_arm_gcc.h b/deps/v8/src/base/atomicops_internals_arm_gcc.h similarity index 98% rename from deps/v8/src/atomicops_internals_arm_gcc.h rename to deps/v8/src/base/atomicops_internals_arm_gcc.h index b72ffb6a6dd..069b1ffa883 100644 --- a/deps/v8/src/atomicops_internals_arm_gcc.h +++ b/deps/v8/src/base/atomicops_internals_arm_gcc.h @@ -6,15 +6,15 @@ // // LinuxKernelCmpxchg and Barrier_AtomicIncrement are from Google Gears. -#ifndef V8_ATOMICOPS_INTERNALS_ARM_GCC_H_ -#define V8_ATOMICOPS_INTERNALS_ARM_GCC_H_ +#ifndef V8_BASE_ATOMICOPS_INTERNALS_ARM_GCC_H_ +#define V8_BASE_ATOMICOPS_INTERNALS_ARM_GCC_H_ #if defined(__QNXNTO__) #include <sys/cpuinline.h> #endif namespace v8 { -namespace internal { +namespace base { // Memory barriers on ARM are funky, but the kernel is here to help: // @@ -296,6 +296,6 @@ inline void NoBarrier_Store(volatile Atomic8* ptr, Atomic8 value) { inline Atomic8 NoBarrier_Load(volatile const Atomic8* ptr) { return *ptr; } -} } // namespace v8::internal +} } // namespace v8::base -#endif // V8_ATOMICOPS_INTERNALS_ARM_GCC_H_ +#endif // V8_BASE_ATOMICOPS_INTERNALS_ARM_GCC_H_ diff --git a/deps/v8/src/atomicops_internals_atomicword_compat.h b/deps/v8/src/base/atomicops_internals_atomicword_compat.h similarity index 87% rename from deps/v8/src/atomicops_internals_atomicword_compat.h rename to deps/v8/src/base/atomicops_internals_atomicword_compat.h index 617aa73b539..0530ced2a44 100644 --- a/deps/v8/src/atomicops_internals_atomicword_compat.h +++ b/deps/v8/src/base/atomicops_internals_atomicword_compat.h @@ -4,8 +4,8 @@ // This file is an internal atomic implementation, use atomicops.h instead. -#ifndef V8_ATOMICOPS_INTERNALS_ATOMICWORD_COMPAT_H_ -#define V8_ATOMICOPS_INTERNALS_ATOMICWORD_COMPAT_H_ +#ifndef V8_BASE_ATOMICOPS_INTERNALS_ATOMICWORD_COMPAT_H_ +#define V8_BASE_ATOMICOPS_INTERNALS_ATOMICWORD_COMPAT_H_ // AtomicWord is a synonym for intptr_t, and Atomic32 is a synonym for int32, // which in turn means int. 
On some LP32 platforms, intptr_t is an int, but @@ -21,7 +21,7 @@ #if !defined(V8_HOST_ARCH_64_BIT) namespace v8 { -namespace internal { +namespace base { inline AtomicWord NoBarrier_CompareAndSwap(volatile AtomicWord* ptr, AtomicWord old_value, @@ -51,14 +51,14 @@ inline AtomicWord Barrier_AtomicIncrement(volatile AtomicWord* ptr, inline AtomicWord Acquire_CompareAndSwap(volatile AtomicWord* ptr, AtomicWord old_value, AtomicWord new_value) { - return v8::internal::Acquire_CompareAndSwap( + return v8::base::Acquire_CompareAndSwap( reinterpret_cast<volatile Atomic32*>(ptr), old_value, new_value); } inline AtomicWord Release_CompareAndSwap(volatile AtomicWord* ptr, AtomicWord old_value, AtomicWord new_value) { - return v8::internal::Release_CompareAndSwap( + return v8::base::Release_CompareAndSwap( reinterpret_cast<volatile Atomic32*>(ptr), old_value, new_value); } @@ -68,12 +68,12 @@ inline void NoBarrier_Store(volatile AtomicWord *ptr, AtomicWord value) { } inline void Acquire_Store(volatile AtomicWord* ptr, AtomicWord value) { - return v8::internal::Acquire_Store( + return v8::base::Acquire_Store( reinterpret_cast<volatile Atomic32*>(ptr), value); } inline void Release_Store(volatile AtomicWord* ptr, AtomicWord value) { - return v8::internal::Release_Store( + return v8::base::Release_Store( reinterpret_cast<volatile Atomic32*>(ptr), value); } @@ -83,17 +83,17 @@ inline AtomicWord NoBarrier_Load(volatile const AtomicWord *ptr) { } inline AtomicWord Acquire_Load(volatile const AtomicWord* ptr) { - return v8::internal::Acquire_Load( + return v8::base::Acquire_Load( reinterpret_cast<volatile const Atomic32*>(ptr)); } inline AtomicWord Release_Load(volatile const AtomicWord* ptr) { - return v8::internal::Release_Load( + return v8::base::Release_Load( reinterpret_cast<volatile const Atomic32*>(ptr)); } -} } // namespace v8::internal +} } // namespace v8::base #endif // !defined(V8_HOST_ARCH_64_BIT) -#endif // V8_ATOMICOPS_INTERNALS_ATOMICWORD_COMPAT_H_ +#endif // V8_BASE_ATOMICOPS_INTERNALS_ATOMICWORD_COMPAT_H_ diff --git a/deps/v8/src/atomicops_internals_mac.h b/deps/v8/src/base/atomicops_internals_mac.h similarity index 97% rename from deps/v8/src/atomicops_internals_mac.h rename to deps/v8/src/base/atomicops_internals_mac.h index 5e6abe4a46d..a046872e4d0 100644 --- a/deps/v8/src/atomicops_internals_mac.h +++ b/deps/v8/src/base/atomicops_internals_mac.h @@ -4,13 +4,13 @@ // This file is an internal atomic implementation, use atomicops.h instead. -#ifndef V8_ATOMICOPS_INTERNALS_MAC_H_ -#define V8_ATOMICOPS_INTERNALS_MAC_H_ +#ifndef V8_BASE_ATOMICOPS_INTERNALS_MAC_H_ +#define V8_BASE_ATOMICOPS_INTERNALS_MAC_H_ #include <libkern/OSAtomic.h> namespace v8 { -namespace internal { +namespace base { inline Atomic32 NoBarrier_CompareAndSwap(volatile Atomic32* ptr, Atomic32 old_value, @@ -199,6 +199,6 @@ inline Atomic64 Release_Load(volatile const Atomic64* ptr) { #endif // defined(__LP64__) -} } // namespace v8::internal +} } // namespace v8::base -#endif // V8_ATOMICOPS_INTERNALS_MAC_H_ +#endif // V8_BASE_ATOMICOPS_INTERNALS_MAC_H_ diff --git a/deps/v8/src/base/atomicops_internals_mips64_gcc.h b/deps/v8/src/base/atomicops_internals_mips64_gcc.h new file mode 100644 index 00000000000..1f629b6ea17 --- /dev/null +++ b/deps/v8/src/base/atomicops_internals_mips64_gcc.h @@ -0,0 +1,307 @@ +// Copyright 2010 the V8 project authors. All rights reserved. 
+// Redistribution and use in source and binary forms, with or without +// modification, are permitted provided that the following conditions are +// met: +// +// * Redistributions of source code must retain the above copyright +// notice, this list of conditions and the following disclaimer. +// * Redistributions in binary form must reproduce the above +// copyright notice, this list of conditions and the following +// disclaimer in the documentation and/or other materials provided +// with the distribution. +// * Neither the name of Google Inc. nor the names of its +// contributors may be used to endorse or promote products derived +// from this software without specific prior written permission. +// +// THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS +// "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT +// LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR +// A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT +// OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, +// SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT +// LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, +// DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY +// THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT +// (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE +// OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. + +// This file is an internal atomic implementation, use atomicops.h instead. + +#ifndef V8_BASE_ATOMICOPS_INTERNALS_MIPS_GCC_H_ +#define V8_BASE_ATOMICOPS_INTERNALS_MIPS_GCC_H_ + +namespace v8 { +namespace base { + +// Atomically execute: +// result = *ptr; +// if (*ptr == old_value) +// *ptr = new_value; +// return result; +// +// I.e., replace "*ptr" with "new_value" if "*ptr" used to be "old_value". +// Always return the old value of "*ptr" +// +// This routine implies no memory barriers. +inline Atomic32 NoBarrier_CompareAndSwap(volatile Atomic32* ptr, + Atomic32 old_value, + Atomic32 new_value) { + Atomic32 prev, tmp; + __asm__ __volatile__(".set push\n" + ".set noreorder\n" + "1:\n" + "ll %0, %5\n" // prev = *ptr + "bne %0, %3, 2f\n" // if (prev != old_value) goto 2 + "move %2, %4\n" // tmp = new_value + "sc %2, %1\n" // *ptr = tmp (with atomic check) + "beqz %2, 1b\n" // start again on atomic error + "nop\n" // delay slot nop + "2:\n" + ".set pop\n" + : "=&r" (prev), "=m" (*ptr), "=&r" (tmp) + : "Ir" (old_value), "r" (new_value), "m" (*ptr) + : "memory"); + return prev; +} + +// Atomically store new_value into *ptr, returning the previous value held in +// *ptr. This routine implies no memory barriers. +inline Atomic32 NoBarrier_AtomicExchange(volatile Atomic32* ptr, + Atomic32 new_value) { + Atomic32 temp, old; + __asm__ __volatile__(".set push\n" + ".set noreorder\n" + "1:\n" + "ll %1, %2\n" // old = *ptr + "move %0, %3\n" // temp = new_value + "sc %0, %2\n" // *ptr = temp (with atomic check) + "beqz %0, 1b\n" // start again on atomic error + "nop\n" // delay slot nop + ".set pop\n" + : "=&r" (temp), "=&r" (old), "=m" (*ptr) + : "r" (new_value), "m" (*ptr) + : "memory"); + + return old; +} + +// Atomically increment *ptr by "increment". Returns the new value of +// *ptr with the increment applied. This routine implies no memory barriers. 
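
// Aside: an illustrative sketch, not part of the V8 sources, of the contract
// the LL/SC compare-and-swap loop above implements. GCC's __sync builtins
// provide the same semantics portably:
#include <stdint.h>
#include <assert.h>

static void CompareAndSwapContractDemo() {
  int32_t cell = 5;
  // Succeeds: the cell holds the expected value, so 7 is stored and the
  // previous value (5) is returned, exactly what "prev" carries above.
  int32_t prev = __sync_val_compare_and_swap(&cell, 5, 7);
  assert(prev == 5 && cell == 7);
  // Fails: the expected value is stale; the cell is left untouched and its
  // current value (7) is returned.
  prev = __sync_val_compare_and_swap(&cell, 5, 9);
  assert(prev == 7 && cell == 7);
}
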
+inline Atomic32 NoBarrier_AtomicIncrement(volatile Atomic32* ptr, + Atomic32 increment) { + Atomic32 temp, temp2; + + __asm__ __volatile__(".set push\n" + ".set noreorder\n" + "1:\n" + "ll %0, %2\n" // temp = *ptr + "addu %1, %0, %3\n" // temp2 = temp + increment + "sc %1, %2\n" // *ptr = temp2 (with atomic check) + "beqz %1, 1b\n" // start again on atomic error + "addu %1, %0, %3\n" // temp2 = temp + increment + ".set pop\n" + : "=&r" (temp), "=&r" (temp2), "=m" (*ptr) + : "Ir" (increment), "m" (*ptr) + : "memory"); + // temp2 now holds the final value. + return temp2; +} + +inline Atomic32 Barrier_AtomicIncrement(volatile Atomic32* ptr, + Atomic32 increment) { + MemoryBarrier(); + Atomic32 res = NoBarrier_AtomicIncrement(ptr, increment); + MemoryBarrier(); + return res; +} + +// "Acquire" operations +// ensure that no later memory access can be reordered ahead of the operation. +// "Release" operations ensure that no previous memory access can be reordered +// after the operation. "Barrier" operations have both "Acquire" and "Release" +// semantics. A MemoryBarrier() has "Barrier" semantics, but does no memory +// access. +inline Atomic32 Acquire_CompareAndSwap(volatile Atomic32* ptr, + Atomic32 old_value, + Atomic32 new_value) { + Atomic32 res = NoBarrier_CompareAndSwap(ptr, old_value, new_value); + MemoryBarrier(); + return res; +} + +inline Atomic32 Release_CompareAndSwap(volatile Atomic32* ptr, + Atomic32 old_value, + Atomic32 new_value) { + MemoryBarrier(); + return NoBarrier_CompareAndSwap(ptr, old_value, new_value); +} + +inline void NoBarrier_Store(volatile Atomic8* ptr, Atomic8 value) { + *ptr = value; +} + +inline void NoBarrier_Store(volatile Atomic32* ptr, Atomic32 value) { + *ptr = value; +} + +inline void MemoryBarrier() { + __asm__ __volatile__("sync" : : : "memory"); +} + +inline void Acquire_Store(volatile Atomic32* ptr, Atomic32 value) { + *ptr = value; + MemoryBarrier(); +} + +inline void Release_Store(volatile Atomic32* ptr, Atomic32 value) { + MemoryBarrier(); + *ptr = value; +} + +inline Atomic8 NoBarrier_Load(volatile const Atomic8* ptr) { + return *ptr; +} + +inline Atomic32 NoBarrier_Load(volatile const Atomic32* ptr) { + return *ptr; +} + +inline Atomic32 Acquire_Load(volatile const Atomic32* ptr) { + Atomic32 value = *ptr; + MemoryBarrier(); + return value; +} + +inline Atomic32 Release_Load(volatile const Atomic32* ptr) { + MemoryBarrier(); + return *ptr; +} + + +// 64-bit versions of the atomic ops. + +inline Atomic64 NoBarrier_CompareAndSwap(volatile Atomic64* ptr, + Atomic64 old_value, + Atomic64 new_value) { + Atomic64 prev, tmp; + __asm__ __volatile__(".set push\n" + ".set noreorder\n" + "1:\n" + "lld %0, %5\n" // prev = *ptr + "bne %0, %3, 2f\n" // if (prev != old_value) goto 2 + "move %2, %4\n" // tmp = new_value + "scd %2, %1\n" // *ptr = tmp (with atomic check) + "beqz %2, 1b\n" // start again on atomic error + "nop\n" // delay slot nop + "2:\n" + ".set pop\n" + : "=&r" (prev), "=m" (*ptr), "=&r" (tmp) + : "Ir" (old_value), "r" (new_value), "m" (*ptr) + : "memory"); + return prev; +} + +// Atomically store new_value into *ptr, returning the previous value held in +// *ptr. This routine implies no memory barriers. 
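
// Aside: an illustrative sketch, not part of the V8 sources, of the
// message-passing pattern the Acquire/Release comment above describes. A
// producer publishes a payload and then release-stores a flag; a consumer
// that acquire-loads the flag is guaranteed to observe the payload.
#include <assert.h>

static int g_demo_payload = 0;
static Atomic32 g_demo_ready = 0;

static void ProducerDemo() {
  g_demo_payload = 42;              // plain store of the payload
  Release_Store(&g_demo_ready, 1);  // the payload store cannot sink below this
}

static void ConsumerDemo() {
  if (Acquire_Load(&g_demo_ready) == 1) {  // later reads cannot hoist above
    assert(g_demo_payload == 42);          // guaranteed by the pairing
  }
}
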
+inline Atomic64 NoBarrier_AtomicExchange(volatile Atomic64* ptr, + Atomic64 new_value) { + Atomic64 temp, old; + __asm__ __volatile__(".set push\n" + ".set noreorder\n" + "1:\n" + "lld %1, %2\n" // old = *ptr + "move %0, %3\n" // temp = new_value + "scd %0, %2\n" // *ptr = temp (with atomic check) + "beqz %0, 1b\n" // start again on atomic error + "nop\n" // delay slot nop + ".set pop\n" + : "=&r" (temp), "=&r" (old), "=m" (*ptr) + : "r" (new_value), "m" (*ptr) + : "memory"); + + return old; +} + +// Atomically increment *ptr by "increment". Returns the new value of +// *ptr with the increment applied. This routine implies no memory barriers. +inline Atomic64 NoBarrier_AtomicIncrement(volatile Atomic64* ptr, + Atomic64 increment) { + Atomic64 temp, temp2; + + __asm__ __volatile__(".set push\n" + ".set noreorder\n" + "1:\n" + "lld %0, %2\n" // temp = *ptr + "daddu %1, %0, %3\n" // temp2 = temp + increment + "scd %1, %2\n" // *ptr = temp2 (with atomic check) + "beqz %1, 1b\n" // start again on atomic error + "daddu %1, %0, %3\n" // temp2 = temp + increment + ".set pop\n" + : "=&r" (temp), "=&r" (temp2), "=m" (*ptr) + : "Ir" (increment), "m" (*ptr) + : "memory"); + // temp2 now holds the final value. + return temp2; +} + +inline Atomic64 Barrier_AtomicIncrement(volatile Atomic64* ptr, + Atomic64 increment) { + MemoryBarrier(); + Atomic64 res = NoBarrier_AtomicIncrement(ptr, increment); + MemoryBarrier(); + return res; +} + +// "Acquire" operations +// ensure that no later memory access can be reordered ahead of the operation. +// "Release" operations ensure that no previous memory access can be reordered +// after the operation. "Barrier" operations have both "Acquire" and "Release" +// semantics. A MemoryBarrier() has "Barrier" semantics, but does no memory +// access. +inline Atomic64 Acquire_CompareAndSwap(volatile Atomic64* ptr, + Atomic64 old_value, + Atomic64 new_value) { + Atomic64 res = NoBarrier_CompareAndSwap(ptr, old_value, new_value); + MemoryBarrier(); + return res; +} + +inline Atomic64 Release_CompareAndSwap(volatile Atomic64* ptr, + Atomic64 old_value, + Atomic64 new_value) { + MemoryBarrier(); + return NoBarrier_CompareAndSwap(ptr, old_value, new_value); +} + +inline void NoBarrier_Store(volatile Atomic64* ptr, Atomic64 value) { + *ptr = value; +} + +inline void Acquire_Store(volatile Atomic64* ptr, Atomic64 value) { + *ptr = value; + MemoryBarrier(); +} + +inline void Release_Store(volatile Atomic64* ptr, Atomic64 value) { + MemoryBarrier(); + *ptr = value; +} + +inline Atomic64 NoBarrier_Load(volatile const Atomic64* ptr) { + return *ptr; +} + +inline Atomic64 Acquire_Load(volatile const Atomic64* ptr) { + Atomic64 value = *ptr; + MemoryBarrier(); + return value; +} + +inline Atomic64 Release_Load(volatile const Atomic64* ptr) { + MemoryBarrier(); + return *ptr; +} + +} } // namespace v8::base + +#endif // V8_BASE_ATOMICOPS_INTERNALS_MIPS_GCC_H_ diff --git a/deps/v8/src/atomicops_internals_mips_gcc.h b/deps/v8/src/base/atomicops_internals_mips_gcc.h similarity index 96% rename from deps/v8/src/atomicops_internals_mips_gcc.h rename to deps/v8/src/base/atomicops_internals_mips_gcc.h index da9f6e99364..0d3a0e38c13 100644 --- a/deps/v8/src/atomicops_internals_mips_gcc.h +++ b/deps/v8/src/base/atomicops_internals_mips_gcc.h @@ -4,11 +4,11 @@ // This file is an internal atomic implementation, use atomicops.h instead. 
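
// Aside: an illustrative sketch, not part of the V8 sources, of how the
// increment primitives defined above are typically consumed, e.g. for a
// thread-safe reference count. Barrier_AtomicIncrement brackets the update
// with full fences, so the returned count can safely gate a teardown
// decision; NoBarrier_AtomicIncrement is cheaper when only the count itself
// matters.
static Atomic32 g_demo_refcount = 1;

static void DemoAddRef() {
  NoBarrier_AtomicIncrement(&g_demo_refcount, 1);  // count only, no ordering
}

static void DemoRelease() {
  if (Barrier_AtomicIncrement(&g_demo_refcount, -1) == 0) {
    // Last reference dropped; the fences make all prior writes to the
    // object visible to this thread before it is destroyed.
  }
}
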
-#ifndef V8_ATOMICOPS_INTERNALS_MIPS_GCC_H_ -#define V8_ATOMICOPS_INTERNALS_MIPS_GCC_H_ +#ifndef V8_BASE_ATOMICOPS_INTERNALS_MIPS_GCC_H_ +#define V8_BASE_ATOMICOPS_INTERNALS_MIPS_GCC_H_ namespace v8 { -namespace internal { +namespace base { // Atomically execute: // result = *ptr; @@ -154,6 +154,6 @@ inline Atomic32 Release_Load(volatile const Atomic32* ptr) { return *ptr; } -} } // namespace v8::internal +} } // namespace v8::base -#endif // V8_ATOMICOPS_INTERNALS_MIPS_GCC_H_ +#endif // V8_BASE_ATOMICOPS_INTERNALS_MIPS_GCC_H_ diff --git a/deps/v8/src/atomicops_internals_tsan.h b/deps/v8/src/base/atomicops_internals_tsan.h similarity index 94% rename from deps/v8/src/atomicops_internals_tsan.h rename to deps/v8/src/base/atomicops_internals_tsan.h index e100812499c..646e5bd4b74 100644 --- a/deps/v8/src/atomicops_internals_tsan.h +++ b/deps/v8/src/base/atomicops_internals_tsan.h @@ -6,29 +6,15 @@ // This file is an internal atomic implementation for compiler-based // ThreadSanitizer. Use base/atomicops.h instead. -#ifndef V8_ATOMICOPS_INTERNALS_TSAN_H_ -#define V8_ATOMICOPS_INTERNALS_TSAN_H_ +#ifndef V8_BASE_ATOMICOPS_INTERNALS_TSAN_H_ +#define V8_BASE_ATOMICOPS_INTERNALS_TSAN_H_ namespace v8 { -namespace internal { +namespace base { #ifndef TSAN_INTERFACE_ATOMIC_H #define TSAN_INTERFACE_ATOMIC_H -// This struct is not part of the public API of this module; clients may not -// use it. (However, it's exported via BASE_EXPORT because clients implicitly -// do use it at link time by inlining these functions.) -// Features of this x86. Values may not be correct before main() is run, -// but are set conservatively. -struct AtomicOps_x86CPUFeatureStruct { - bool has_amd_lock_mb_bug; // Processor has AMD memory-barrier bug; do lfence - // after acquire compare-and-swap. - bool has_sse2; // Processor has SSE2. -}; -extern struct AtomicOps_x86CPUFeatureStruct - AtomicOps_Internalx86CPUFeatures; - -#define ATOMICOPS_COMPILER_BARRIER() __asm__ __volatile__("" : : : "memory") extern "C" { typedef char __tsan_atomic8; @@ -371,9 +357,7 @@ inline void MemoryBarrier() { __tsan_atomic_thread_fence(__tsan_memory_order_seq_cst); } -} // namespace internal +} // namespace base } // namespace v8 -#undef ATOMICOPS_COMPILER_BARRIER - -#endif // V8_ATOMICOPS_INTERNALS_TSAN_H_ +#endif // V8_BASE_ATOMICOPS_INTERNALS_TSAN_H_ diff --git a/deps/v8/src/atomicops_internals_x86_gcc.cc b/deps/v8/src/base/atomicops_internals_x86_gcc.cc similarity index 89% rename from deps/v8/src/atomicops_internals_x86_gcc.cc rename to deps/v8/src/base/atomicops_internals_x86_gcc.cc index 0b0e04c8157..969f2371b0b 100644 --- a/deps/v8/src/atomicops_internals_x86_gcc.cc +++ b/deps/v8/src/base/atomicops_internals_x86_gcc.cc @@ -7,14 +7,13 @@ #include <string.h> -#include "atomicops.h" -#include "platform.h" +#include "src/base/atomicops.h" // This file only makes sense with atomicops_internals_x86_gcc.h -- it // depends on structs that are defined in that file. If atomicops.h // doesn't sub-include that file, then we aren't needed, and shouldn't // try to do anything. -#ifdef V8_ATOMICOPS_INTERNALS_X86_GCC_H_ +#ifdef V8_BASE_ATOMICOPS_INTERNALS_X86_GCC_H_ // Inline cpuid instruction. In PIC compilations, %ebx contains the address // of the global offset table. 
To avoid breaking such executables, this code @@ -36,23 +35,25 @@ #if defined(cpuid) // initialize the struct only on x86 namespace v8 { -namespace internal { +namespace base { // Set the flags so that code will run correctly and conservatively, so even // if we haven't been initialized yet, we're probably single threaded, and our // default values should hopefully be pretty safe. struct AtomicOps_x86CPUFeatureStruct AtomicOps_Internalx86CPUFeatures = { false, // bug can't exist before process spawns multiple threads +#if !defined(__SSE2__) false, // no SSE2 +#endif }; -} } // namespace v8::internal +} } // namespace v8::base namespace { // Initialize the AtomicOps_Internalx86CPUFeatures struct. void AtomicOps_Internalx86CPUFeaturesInit() { - using v8::internal::AtomicOps_Internalx86CPUFeatures; + using v8::base::AtomicOps_Internalx86CPUFeatures; uint32_t eax = 0; uint32_t ebx = 0; @@ -62,9 +63,9 @@ void AtomicOps_Internalx86CPUFeaturesInit() { // Get vendor string (issue CPUID with eax = 0) cpuid(eax, ebx, ecx, edx, 0); char vendor[13]; - v8::internal::OS::MemCopy(vendor, &ebx, 4); - v8::internal::OS::MemCopy(vendor + 4, &edx, 4); - v8::internal::OS::MemCopy(vendor + 8, &ecx, 4); + memcpy(vendor, &ebx, 4); + memcpy(vendor + 4, &edx, 4); + memcpy(vendor + 8, &ecx, 4); vendor[12] = 0; // get feature flags in ecx/edx, and family/model in eax @@ -90,8 +91,10 @@ void AtomicOps_Internalx86CPUFeaturesInit() { AtomicOps_Internalx86CPUFeatures.has_amd_lock_mb_bug = false; } +#if !defined(__SSE2__) // edx bit 26 is SSE2 which we use to tell us whether we can use mfence AtomicOps_Internalx86CPUFeatures.has_sse2 = ((edx >> 26) & 1); +#endif } class AtomicOpsx86Initializer { @@ -109,4 +112,4 @@ AtomicOpsx86Initializer g_initer; #endif // if x86 -#endif // ifdef V8_ATOMICOPS_INTERNALS_X86_GCC_H_ +#endif // ifdef V8_BASE_ATOMICOPS_INTERNALS_X86_GCC_H_ diff --git a/deps/v8/src/atomicops_internals_x86_gcc.h b/deps/v8/src/base/atomicops_internals_x86_gcc.h similarity index 97% rename from deps/v8/src/atomicops_internals_x86_gcc.h rename to deps/v8/src/base/atomicops_internals_x86_gcc.h index c8950676b45..ec87c421212 100644 --- a/deps/v8/src/atomicops_internals_x86_gcc.h +++ b/deps/v8/src/base/atomicops_internals_x86_gcc.h @@ -4,11 +4,11 @@ // This file is an internal atomic implementation, use atomicops.h instead. -#ifndef V8_ATOMICOPS_INTERNALS_X86_GCC_H_ -#define V8_ATOMICOPS_INTERNALS_X86_GCC_H_ +#ifndef V8_BASE_ATOMICOPS_INTERNALS_X86_GCC_H_ +#define V8_BASE_ATOMICOPS_INTERNALS_X86_GCC_H_ namespace v8 { -namespace internal { +namespace base { // This struct is not part of the public API of this module; clients may not // use it. @@ -17,7 +17,9 @@ namespace internal { struct AtomicOps_x86CPUFeatureStruct { bool has_amd_lock_mb_bug; // Processor has AMD memory-barrier bug; do lfence // after acquire compare-and-swap. +#if !defined(__SSE2__) bool has_sse2; // Processor has SSE2. +#endif }; extern struct AtomicOps_x86CPUFeatureStruct AtomicOps_Internalx86CPUFeatures; @@ -92,7 +94,7 @@ inline void NoBarrier_Store(volatile Atomic32* ptr, Atomic32 value) { *ptr = value; } -#if defined(__x86_64__) +#if defined(__x86_64__) || defined(__SSE2__) // 64-bit implementations of memory barrier can be simpler, because the // "mfence" instruction is guaranteed to exist.
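
// Aside: an illustrative sketch, not part of the V8 sources, of the same
// probe written with GCC's <cpuid.h> helper, which hides the PIC/%ebx
// save-and-restore that the inline assembly above performs by hand:
#include <cpuid.h>
#include <stdio.h>

static void PrintSse2Demo() {
  unsigned int eax, ebx, ecx, edx;
  if (!__get_cpuid(1, &eax, &ebx, &ecx, &edx)) return;  // leaf 1: features
  // edx bit 26 is the SSE2 flag, the same bit consulted above.
  printf("sse2: %u\n", (edx >> 26) & 1u);
}
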
@@ -265,8 +267,8 @@ inline Atomic64 Release_CompareAndSwap(volatile Atomic64* ptr, #endif // defined(__x86_64__) -} } // namespace v8::internal +} } // namespace v8::base #undef ATOMICOPS_COMPILER_BARRIER -#endif // V8_ATOMICOPS_INTERNALS_X86_GCC_H_ +#endif // V8_BASE_ATOMICOPS_INTERNALS_X86_GCC_H_ diff --git a/deps/v8/src/atomicops_internals_x86_msvc.h b/deps/v8/src/base/atomicops_internals_x86_msvc.h similarity index 96% rename from deps/v8/src/atomicops_internals_x86_msvc.h rename to deps/v8/src/base/atomicops_internals_x86_msvc.h index 6376666ae76..adc40318e92 100644 --- a/deps/v8/src/atomicops_internals_x86_msvc.h +++ b/deps/v8/src/base/atomicops_internals_x86_msvc.h @@ -4,11 +4,11 @@ // This file is an internal atomic implementation, use atomicops.h instead. -#ifndef V8_ATOMICOPS_INTERNALS_X86_MSVC_H_ -#define V8_ATOMICOPS_INTERNALS_X86_MSVC_H_ +#ifndef V8_BASE_ATOMICOPS_INTERNALS_X86_MSVC_H_ +#define V8_BASE_ATOMICOPS_INTERNALS_X86_MSVC_H_ -#include "checks.h" -#include "win32-headers.h" +#include "src/base/macros.h" +#include "src/base/win32-headers.h" #if defined(V8_HOST_ARCH_64_BIT) // windows.h #defines this (only on x64). This causes problems because the @@ -20,7 +20,7 @@ #endif namespace v8 { -namespace internal { +namespace base { inline Atomic32 NoBarrier_CompareAndSwap(volatile Atomic32* ptr, Atomic32 old_value, @@ -197,6 +197,6 @@ inline Atomic64 Release_CompareAndSwap(volatile Atomic64* ptr, #endif // defined(_WIN64) -} } // namespace v8::internal +} } // namespace v8::base -#endif // V8_ATOMICOPS_INTERNALS_X86_MSVC_H_ +#endif // V8_BASE_ATOMICOPS_INTERNALS_X86_MSVC_H_ diff --git a/deps/v8/src/base/build_config.h b/deps/v8/src/base/build_config.h new file mode 100644 index 00000000000..c66d6a5cc4c --- /dev/null +++ b/deps/v8/src/base/build_config.h @@ -0,0 +1,169 @@ +// Copyright 2014 the V8 project authors. All rights reserved. +// Use of this source code is governed by a BSD-style license that can be +// found in the LICENSE file. + +#ifndef V8_BASE_BUILD_CONFIG_H_ +#define V8_BASE_BUILD_CONFIG_H_ + +#include "include/v8config.h" + +// Processor architecture detection. For more info on what's defined, see: +// http://msdn.microsoft.com/en-us/library/b0084kay.aspx +// http://www.agner.org/optimize/calling_conventions.pdf +// or with gcc, run: "echo | gcc -E -dM -" +#if defined(_M_X64) || defined(__x86_64__) +#if defined(__native_client__) +// For Native Client builds of V8, use V8_TARGET_ARCH_ARM, so that V8 +// generates ARM machine code, together with a portable ARM simulator +// compiled for the host architecture in question. +// +// Since Native Client is ILP-32 on all architectures we use +// V8_HOST_ARCH_IA32 on both 32- and 64-bit x86. 
+#define V8_HOST_ARCH_IA32 1 +#define V8_HOST_ARCH_32_BIT 1 +#define V8_HOST_CAN_READ_UNALIGNED 1 +#else +#define V8_HOST_ARCH_X64 1 +#if defined(__x86_64__) && !defined(__LP64__) +#define V8_HOST_ARCH_32_BIT 1 +#else +#define V8_HOST_ARCH_64_BIT 1 +#endif +#define V8_HOST_CAN_READ_UNALIGNED 1 +#endif // __native_client__ +#elif defined(_M_IX86) || defined(__i386__) +#define V8_HOST_ARCH_IA32 1 +#define V8_HOST_ARCH_32_BIT 1 +#define V8_HOST_CAN_READ_UNALIGNED 1 +#elif defined(__AARCH64EL__) +#define V8_HOST_ARCH_ARM64 1 +#define V8_HOST_ARCH_64_BIT 1 +#define V8_HOST_CAN_READ_UNALIGNED 1 +#elif defined(__ARMEL__) +#define V8_HOST_ARCH_ARM 1 +#define V8_HOST_ARCH_32_BIT 1 +#elif defined(__mips64) +#define V8_HOST_ARCH_MIPS64 1 +#define V8_HOST_ARCH_64_BIT 1 +#elif defined(__MIPSEB__) || defined(__MIPSEL__) +#define V8_HOST_ARCH_MIPS 1 +#define V8_HOST_ARCH_32_BIT 1 +#else +#error "Host architecture was not detected as supported by v8" +#endif + +#if defined(__ARM_ARCH_7A__) || \ + defined(__ARM_ARCH_7R__) || \ + defined(__ARM_ARCH_7__) +# define CAN_USE_ARMV7_INSTRUCTIONS 1 +# ifndef CAN_USE_VFP3_INSTRUCTIONS +# define CAN_USE_VFP3_INSTRUCTIONS +# endif +#endif + + +// Target architecture detection. This may be set externally. If not, detect +// in the same way as the host architecture, that is, target the native +// environment as presented by the compiler. +#if !V8_TARGET_ARCH_X64 && !V8_TARGET_ARCH_IA32 && !V8_TARGET_ARCH_X87 && \ + !V8_TARGET_ARCH_ARM && !V8_TARGET_ARCH_ARM64 && !V8_TARGET_ARCH_MIPS && \ + !V8_TARGET_ARCH_MIPS64 +#if defined(_M_X64) || defined(__x86_64__) +#define V8_TARGET_ARCH_X64 1 +#elif defined(_M_IX86) || defined(__i386__) +#define V8_TARGET_ARCH_IA32 1 +#elif defined(__AARCH64EL__) +#define V8_TARGET_ARCH_ARM64 1 +#elif defined(__ARMEL__) +#define V8_TARGET_ARCH_ARM 1 +#elif defined(__mips64) +#define V8_TARGET_ARCH_MIPS64 1 +#elif defined(__MIPSEB__) || defined(__MIPSEL__) +#define V8_TARGET_ARCH_MIPS 1 +#else +#error Target architecture was not detected as supported by v8 +#endif +#endif + +// Determine architecture pointer size. +#if V8_TARGET_ARCH_IA32 +#define V8_TARGET_ARCH_32_BIT 1 +#elif V8_TARGET_ARCH_X64 +#if !V8_TARGET_ARCH_32_BIT && !V8_TARGET_ARCH_64_BIT +#if defined(__x86_64__) && !defined(__LP64__) +#define V8_TARGET_ARCH_32_BIT 1 +#else +#define V8_TARGET_ARCH_64_BIT 1 +#endif +#endif +#elif V8_TARGET_ARCH_ARM +#define V8_TARGET_ARCH_32_BIT 1 +#elif V8_TARGET_ARCH_ARM64 +#define V8_TARGET_ARCH_64_BIT 1 +#elif V8_TARGET_ARCH_MIPS +#define V8_TARGET_ARCH_32_BIT 1 +#elif V8_TARGET_ARCH_MIPS64 +#define V8_TARGET_ARCH_64_BIT 1 +#elif V8_TARGET_ARCH_X87 +#define V8_TARGET_ARCH_32_BIT 1 +#else +#error Unknown target architecture pointer size +#endif + +// Check for supported combinations of host and target architectures. 
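
// Aside: an illustrative sketch, not part of the V8 sources, of how code
// downstream consumes the macros chosen above. Exactly one of
// V8_TARGET_ARCH_32_BIT / V8_TARGET_ARCH_64_BIT is defined, so pointer-size
// constants can be selected at preprocessing time (the constant name here is
// hypothetical):
#if V8_TARGET_ARCH_64_BIT
static const int kDemoPointerSizeLog2 = 3;  // 8-byte pointers
#else
static const int kDemoPointerSizeLog2 = 2;  // 4-byte pointers
#endif
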
+#if V8_TARGET_ARCH_IA32 && !V8_HOST_ARCH_IA32 +#error Target architecture ia32 is only supported on ia32 host +#endif +#if (V8_TARGET_ARCH_X64 && V8_TARGET_ARCH_64_BIT && \ + !(V8_HOST_ARCH_X64 && V8_HOST_ARCH_64_BIT)) +#error Target architecture x64 is only supported on x64 host +#endif +#if (V8_TARGET_ARCH_X64 && V8_TARGET_ARCH_32_BIT && \ + !(V8_HOST_ARCH_X64 && V8_HOST_ARCH_32_BIT)) +#error Target architecture x32 is only supported on x64 host with x32 support +#endif +#if (V8_TARGET_ARCH_ARM && !(V8_HOST_ARCH_IA32 || V8_HOST_ARCH_ARM)) +#error Target architecture arm is only supported on arm and ia32 host +#endif +#if (V8_TARGET_ARCH_ARM64 && !(V8_HOST_ARCH_X64 || V8_HOST_ARCH_ARM64)) +#error Target architecture arm64 is only supported on arm64 and x64 host +#endif +#if (V8_TARGET_ARCH_MIPS && !(V8_HOST_ARCH_IA32 || V8_HOST_ARCH_MIPS)) +#error Target architecture mips is only supported on mips and ia32 host +#endif +#if (V8_TARGET_ARCH_MIPS64 && !(V8_HOST_ARCH_X64 || V8_HOST_ARCH_MIPS64)) +#error Target architecture mips64 is only supported on mips64 and x64 host +#endif + +// Determine architecture endianness. +#if V8_TARGET_ARCH_IA32 +#define V8_TARGET_LITTLE_ENDIAN 1 +#elif V8_TARGET_ARCH_X64 +#define V8_TARGET_LITTLE_ENDIAN 1 +#elif V8_TARGET_ARCH_ARM +#define V8_TARGET_LITTLE_ENDIAN 1 +#elif V8_TARGET_ARCH_ARM64 +#define V8_TARGET_LITTLE_ENDIAN 1 +#elif V8_TARGET_ARCH_MIPS +#if defined(__MIPSEB__) +#define V8_TARGET_BIG_ENDIAN 1 +#else +#define V8_TARGET_LITTLE_ENDIAN 1 +#endif +#elif V8_TARGET_ARCH_MIPS64 +#define V8_TARGET_LITTLE_ENDIAN 1 +#elif V8_TARGET_ARCH_X87 +#define V8_TARGET_LITTLE_ENDIAN 1 +#else +#error Unknown target architecture endianness +#endif + +#if V8_OS_MACOSX || defined(__FreeBSD__) || defined(__OpenBSD__) +#define USING_BSD_ABI +#endif + +// Number of bits to represent the page size for paged spaces. The value of 20 +// gives 1MB per page. +const int kPageSizeBits = 20; + +#endif // V8_BASE_BUILD_CONFIG_H_ diff --git a/deps/v8/src/cpu.cc b/deps/v8/src/base/cpu.cc similarity index 88% rename from deps/v8/src/cpu.cc rename to deps/v8/src/base/cpu.cc index c600eda10b8..adce69d4579 100644 --- a/deps/v8/src/cpu.cc +++ b/deps/v8/src/base/cpu.cc @@ -2,7 +2,7 @@ // Use of this source code is governed by a BSD-style license that can be // found in the LICENSE file. -#include "cpu.h" +#include "src/base/cpu.h" #if V8_LIBC_MSVCRT #include <intrin.h> // __cpuid() @@ -21,13 +21,13 @@ #include <string.h> #include <algorithm> -#include "checks.h" +#include "src/base/logging.h" #if V8_OS_WIN -#include "win32-headers.h" +#include "src/base/win32-headers.h" // NOLINT #endif namespace v8 { -namespace internal { +namespace base { #if V8_HOST_ARCH_IA32 || V8_HOST_ARCH_X64 @@ -56,7 +56,8 @@ static V8_INLINE void __cpuid(int cpu_info[4], int info_type) { #endif // !V8_LIBC_MSVCRT -#elif V8_HOST_ARCH_ARM || V8_HOST_ARCH_MIPS +#elif V8_HOST_ARCH_ARM || V8_HOST_ARCH_ARM64 \ + || V8_HOST_ARCH_MIPS || V8_HOST_ARCH_MIPS64 #if V8_OS_LINUX @@ -115,7 +116,7 @@ static uint32_t ReadELFHWCaps() { #endif // V8_HOST_ARCH_ARM // Extract the information exposed by the kernel via /proc/cpuinfo. -class CPUInfo V8_FINAL BASE_EMBEDDED { +class CPUInfo V8_FINAL { public: CPUInfo() : datalen_(0) { // Get the size of the cpuinfo file by reading it until the end. This is @@ -162,7 +163,7 @@ class CPUInfo V8_FINAL BASE_EMBEDDED { // string that must be freed by the caller using delete[]. // Return NULL if not found.
char* ExtractField(const char* field) const { - ASSERT(field != NULL); + DCHECK(field != NULL); // Look for first field occurrence, and ensure it starts the line. size_t fieldlen = strlen(field); @@ -206,6 +207,7 @@ class CPUInfo V8_FINAL BASE_EMBEDDED { size_t datalen_; }; +#if V8_HOST_ARCH_ARM || V8_HOST_ARCH_MIPS || V8_HOST_ARCH_MIPS64 // Checks that a space-separated list of items contains one given 'item'. static bool HasListItem(const char* list, const char* item) { @@ -231,6 +233,8 @@ static bool HasListItem(const char* list, const char* item) { return false; } +#endif // V8_HOST_ARCH_ARM || V8_HOST_ARCH_MIPS || V8_HOST_ARCH_MIPS64 + #endif // V8_OS_LINUX #endif // V8_HOST_ARCH_IA32 || V8_HOST_ARCH_X64 @@ -256,7 +260,7 @@ CPU::CPU() : stepping_(0), has_sse42_(false), has_idiva_(false), has_neon_(false), - has_thumbee_(false), + has_thumb2_(false), has_vfp_(false), has_vfp3_(false), has_vfp3_d32_(false) { @@ -297,6 +301,10 @@ CPU::CPU() : stepping_(0), has_sse42_ = (cpu_info[2] & 0x00100000) != 0; } +#if V8_HOST_ARCH_IA32 + // SAHF is always available in compat/legacy mode. + has_sahf_ = true; +#else // Query extended IDs. __cpuid(cpu_info, 0x80000000); unsigned num_ext_ids = cpu_info[0]; @@ -304,14 +312,10 @@ CPU::CPU() : stepping_(0), // Interpret extended CPU feature information. if (num_ext_ids > 0x80000000) { __cpuid(cpu_info, 0x80000001); - // SAHF is always available in compat/legacy mode, - // but must be probed in long mode. -#if V8_HOST_ARCH_IA32 - has_sahf_ = true; -#else + // SAHF must be probed in long mode. has_sahf_ = (cpu_info[2] & 0x00000001) != 0; -#endif } +#endif #elif V8_HOST_ARCH_ARM @@ -380,7 +384,6 @@ CPU::CPU() : stepping_(0), if (hwcaps != 0) { has_idiva_ = (hwcaps & HWCAP_IDIVA) != 0; has_neon_ = (hwcaps & HWCAP_NEON) != 0; - has_thumbee_ = (hwcaps & HWCAP_THUMBEE) != 0; has_vfp_ = (hwcaps & HWCAP_VFP) != 0; has_vfp3_ = (hwcaps & (HWCAP_VFPv3 | HWCAP_VFPv3D16 | HWCAP_VFPv4)) != 0; has_vfp3_d32_ = (has_vfp3_ && ((hwcaps & HWCAP_VFPv3D16) == 0 || @@ -390,13 +393,13 @@ CPU::CPU() : stepping_(0), char* features = cpu_info.ExtractField("Features"); has_idiva_ = HasListItem(features, "idiva"); has_neon_ = HasListItem(features, "neon"); - has_thumbee_ = HasListItem(features, "thumbee"); + has_thumb2_ = HasListItem(features, "thumb2"); has_vfp_ = HasListItem(features, "vfp"); - if (HasListItem(features, "vfpv3")) { + if (HasListItem(features, "vfpv3d16")) { has_vfp3_ = true; - has_vfp3_d32_ = true; - } else if (HasListItem(features, "vfpv3d16")) { + } else if (HasListItem(features, "vfpv3")) { has_vfp3_ = true; + has_vfp3_d32_ = true; } delete[] features; } @@ -414,13 +417,13 @@ CPU::CPU() : stepping_(0), architecture_ = 7; } - // ARMv7 implies ThumbEE. + // ARMv7 implies Thumb2. if (architecture_ >= 7) { - has_thumbee_ = true; + has_thumb2_ = true; } - // The earliest architecture with ThumbEE is ARMv6T2. - if (has_thumbee_ && architecture_ < 6) { + // The earliest architecture with Thumb2 is ARMv6T2. + if (has_thumb2_ && architecture_ < 6) { architecture_ = 6; } @@ -432,13 +435,13 @@ CPU::CPU() : stepping_(0), uint32_t cpu_flags = SYSPAGE_ENTRY(cpuinfo)->flags; if (cpu_flags & ARM_CPU_FLAG_V7) { architecture_ = 7; - has_thumbee_ = true; + has_thumb2_ = true; } else if (cpu_flags & ARM_CPU_FLAG_V6) { architecture_ = 6; - // QNX doesn't say if ThumbEE is available. + // QNX doesn't say if Thumb2 is available. // Assume false for the architectures older than ARMv7.
} - ASSERT(architecture_ >= 6); + DCHECK(architecture_ >= 6); has_fpu_ = (cpu_flags & CPU_FLAG_FPU) != 0; has_vfp_ = has_fpu_; if (cpu_flags & ARM_CPU_FLAG_NEON) { @@ -452,7 +455,7 @@ #endif // V8_OS_LINUX -#elif V8_HOST_ARCH_MIPS +#elif V8_HOST_ARCH_MIPS || V8_HOST_ARCH_MIPS64 // Simple detection of FPU at runtime for Linux. // It is based on /proc/cpuinfo, which reveals hardware configuration @@ -464,19 +467,33 @@ has_fpu_ = HasListItem(cpu_model, "FPU"); delete[] cpu_model; -#endif -} +#elif V8_HOST_ARCH_ARM64 + CPUInfo cpu_info; + + // Extract implementer from the "CPU implementer" field. + char* implementer = cpu_info.ExtractField("CPU implementer"); + if (implementer != NULL) { + char* end; + implementer_ = strtol(implementer, &end, 0); + if (end == implementer) { + implementer_ = 0; + } + delete[] implementer; + } + + // Extract part number from the "CPU part" field. + char* part = cpu_info.ExtractField("CPU part"); + if (part != NULL) { + char* end; + part_ = strtol(part, &end, 0); + if (end == part) { + part_ = 0; + } + delete[] part; + } -// static -int CPU::NumberOfProcessorsOnline() { -#if V8_OS_WIN - SYSTEM_INFO info; - GetSystemInfo(&info); - return info.dwNumberOfProcessors; -#else - return static_cast<int>(sysconf(_SC_NPROCESSORS_ONLN)); #endif } -} } // namespace v8::internal +} } // namespace v8::base diff --git a/deps/v8/src/cpu.h b/deps/v8/src/base/cpu.h similarity index 88% rename from deps/v8/src/cpu.h rename to deps/v8/src/base/cpu.h index 0315435b201..eb51674df53 100644 --- a/deps/v8/src/cpu.h +++ b/deps/v8/src/base/cpu.h @@ -10,13 +10,13 @@ // The build system then uses the implementation for the target architecture. // -#ifndef V8_CPU_H_ -#define V8_CPU_H_ +#ifndef V8_BASE_CPU_H_ +#define V8_BASE_CPU_H_ -#include "allocation.h" +#include "src/base/macros.h" namespace v8 { -namespace internal { +namespace base { // ---------------------------------------------------------------------------- // CPU @@ -28,7 +28,7 @@ namespace internal { // architectures. For each architecture the file cpu_<arch>.cc contains the // implementation of these static functions. -class CPU V8_FINAL BASE_EMBEDDED { +class CPU V8_FINAL { public: CPU(); @@ -44,6 +44,7 @@ class CPU V8_FINAL BASE_EMBEDDED { // arm implementer/part information int implementer() const { return implementer_; } static const int ARM = 0x41; + static const int NVIDIA = 0x4e; static const int QUALCOMM = 0x51; int architecture() const { return architecture_; } int part() const { return part_; } @@ -71,17 +72,11 @@ class CPU V8_FINAL BASE_EMBEDDED { // arm features bool has_idiva() const { return has_idiva_; } bool has_neon() const { return has_neon_; } - bool has_thumbee() const { return has_thumbee_; } + bool has_thumb2() const { return has_thumb2_; } bool has_vfp() const { return has_vfp_; } bool has_vfp3() const { return has_vfp3_; } bool has_vfp3_d32() const { return has_vfp3_d32_; } - // Returns the number of processors online. - static int NumberOfProcessorsOnline(); - - // Flush instruction cache.
- static void FlushICache(void* start, size_t size); - private: char vendor_[13]; int stepping_; @@ -105,12 +100,12 @@ class CPU V8_FINAL BASE_EMBEDDED { bool has_sse42_; bool has_idiva_; bool has_neon_; - bool has_thumbee_; + bool has_thumb2_; bool has_vfp_; bool has_vfp3_; bool has_vfp3_d32_; }; -} } // namespace v8::internal +} } // namespace v8::base -#endif // V8_CPU_H_ +#endif // V8_BASE_CPU_H_ diff --git a/deps/v8/src/lazy-instance.h b/deps/v8/src/base/lazy-instance.h similarity index 97% rename from deps/v8/src/lazy-instance.h rename to deps/v8/src/base/lazy-instance.h index b760f1fb67f..a20689a16c4 100644 --- a/deps/v8/src/lazy-instance.h +++ b/deps/v8/src/base/lazy-instance.h @@ -65,14 +65,14 @@ // The macro LAZY_DYNAMIC_INSTANCE_INITIALIZER must be used to initialize // dynamic lazy instances. -#ifndef V8_LAZY_INSTANCE_H_ -#define V8_LAZY_INSTANCE_H_ +#ifndef V8_BASE_LAZY_INSTANCE_H_ +#define V8_BASE_LAZY_INSTANCE_H_ -#include "checks.h" -#include "once.h" +#include "src/base/macros.h" +#include "src/base/once.h" namespace v8 { -namespace internal { +namespace base { #define LAZY_STATIC_INSTANCE_INITIALIZER { V8_ONCE_INIT, { {} } } #define LAZY_DYNAMIC_INSTANCE_INITIALIZER { V8_ONCE_INIT, 0 } @@ -232,6 +232,6 @@ struct LazyDynamicInstance { CreateTrait, InitOnceTrait, DestroyTrait> type; }; -} } // namespace v8::internal +} } // namespace v8::base -#endif // V8_LAZY_INSTANCE_H_ +#endif // V8_BASE_LAZY_INSTANCE_H_ diff --git a/deps/v8/src/base/logging.cc b/deps/v8/src/base/logging.cc new file mode 100644 index 00000000000..4f62ac48dde --- /dev/null +++ b/deps/v8/src/base/logging.cc @@ -0,0 +1,88 @@ +// Copyright 2006-2008 the V8 project authors. All rights reserved. +// Use of this source code is governed by a BSD-style license that can be +// found in the LICENSE file. + +#include "src/base/logging.h" + +#if V8_LIBC_GLIBC || V8_OS_BSD +# include <cxxabi.h> +# include <execinfo.h> +#elif V8_OS_QNX +# include <backtrace.h> +#endif // V8_LIBC_GLIBC || V8_OS_BSD +#include <stdio.h> +#include <stdlib.h> + +#include "src/base/platform/platform.h" + +namespace v8 { +namespace base { + +// Attempts to dump a backtrace (if supported). +void DumpBacktrace() { +#if V8_LIBC_GLIBC || V8_OS_BSD + void* trace[100]; + int size = backtrace(trace, ARRAY_SIZE(trace)); + char** symbols = backtrace_symbols(trace, size); + OS::PrintError("\n==== C stack trace ===============================\n\n"); + if (size == 0) { + OS::PrintError("(empty)\n"); + } else if (symbols == NULL) { + OS::PrintError("(no symbols)\n"); + } else { + for (int i = 1; i < size; ++i) { + OS::PrintError("%2d: ", i); + char mangled[201]; + if (sscanf(symbols[i], "%*[^(]%*[(]%200[^)+]", mangled) == 1) { // NOLINT + int status; + size_t length; + char* demangled = abi::__cxa_demangle(mangled, NULL, &length, &status); + OS::PrintError("%s\n", demangled != NULL ? 
demangled : mangled); + free(demangled); + } else { + OS::PrintError("??\n"); + } + } + } + free(symbols); +#elif V8_OS_QNX + char out[1024]; + bt_accessor_t acc; + bt_memmap_t memmap; + bt_init_accessor(&acc, BT_SELF); + bt_load_memmap(&acc, &memmap); + bt_sprn_memmap(&memmap, out, sizeof(out)); + OS::PrintError(out); + bt_addr_t trace[100]; + int size = bt_get_backtrace(&acc, trace, ARRAY_SIZE(trace)); + OS::PrintError("\n==== C stack trace ===============================\n\n"); + if (size == 0) { + OS::PrintError("(empty)\n"); + } else { + bt_sprnf_addrs(&memmap, trace, size, const_cast<char*>("%a\n"), + out, sizeof(out), NULL); + OS::PrintError(out); + } + bt_unload_memmap(&memmap); + bt_release_accessor(&acc); +#endif // V8_LIBC_GLIBC || V8_OS_BSD +} + +} } // namespace v8::base + + +// Contains protection against recursive calls (faults while handling faults). +extern "C" void V8_Fatal(const char* file, int line, const char* format, ...) { + fflush(stdout); + fflush(stderr); + v8::base::OS::PrintError("\n\n#\n# Fatal error in %s, line %d\n# ", file, + line); + va_list arguments; + va_start(arguments, format); + v8::base::OS::VPrintError(format, arguments); + va_end(arguments); + v8::base::OS::PrintError("\n#\n"); + v8::base::DumpBacktrace(); + fflush(stderr); + v8::base::OS::Abort(); +} diff --git a/deps/v8/src/base/logging.h b/deps/v8/src/base/logging.h new file mode 100644 index 00000000000..8e24bb0864e --- /dev/null +++ b/deps/v8/src/base/logging.h @@ -0,0 +1,223 @@ +// Copyright 2012 the V8 project authors. All rights reserved. +// Use of this source code is governed by a BSD-style license that can be +// found in the LICENSE file. + +#ifndef V8_BASE_LOGGING_H_ +#define V8_BASE_LOGGING_H_ + +#include <string.h> + +#include "include/v8stdint.h" +#include "src/base/build_config.h" + +extern "C" void V8_Fatal(const char* file, int line, const char* format, ...); + + +// The FATAL, UNREACHABLE and UNIMPLEMENTED macros are useful during +// development, but they should not be relied on in the final product. +#ifdef DEBUG +#define FATAL(msg) \ + V8_Fatal(__FILE__, __LINE__, "%s", (msg)) +#define UNIMPLEMENTED() \ + V8_Fatal(__FILE__, __LINE__, "unimplemented code") +#define UNREACHABLE() \ + V8_Fatal(__FILE__, __LINE__, "unreachable code") +#else +#define FATAL(msg) \ + V8_Fatal("", 0, "%s", (msg)) +#define UNIMPLEMENTED() \ + V8_Fatal("", 0, "unimplemented code") +#define UNREACHABLE() ((void) 0) +#endif + + +// The CHECK macro checks that the given condition is true; if not, it +// prints a message to stderr and aborts. +#define CHECK(condition) do { \ + if (!(condition)) { \ + V8_Fatal(__FILE__, __LINE__, "CHECK(%s) failed", #condition); \ + } \ + } while (0) + + +// Helper function used by the CHECK_EQ function when given int +// arguments. Should not be called directly. +inline void CheckEqualsHelper(const char* file, int line, + const char* expected_source, int expected, + const char* value_source, int value) { + if (expected != value) { + V8_Fatal(file, line, + "CHECK_EQ(%s, %s) failed\n# Expected: %i\n# Found: %i", + expected_source, value_source, expected, value); + } +} + + +// Helper function used by the CHECK_EQ function when given int64_t +// arguments. Should not be called directly. +inline void CheckEqualsHelper(const char* file, int line, + const char* expected_source, + int64_t expected, + const char* value_source, + int64_t value) { + if (expected != value) { + // Print int64_t values in hex, as two int32s, + // to avoid platform-dependencies. 
+    V8_Fatal(file, line, + "CHECK_EQ(%s, %s) failed\n#" + " Expected: 0x%08x%08x\n# Found: 0x%08x%08x", + expected_source, value_source, + static_cast<uint32_t>(expected >> 32), + static_cast<uint32_t>(expected), + static_cast<uint32_t>(value >> 32), + static_cast<uint32_t>(value)); + } +} + + +// Helper function used by the CHECK_NE function when given int +// arguments. Should not be called directly. +inline void CheckNonEqualsHelper(const char* file, + int line, + const char* unexpected_source, + int unexpected, + const char* value_source, + int value) { + if (unexpected == value) { + V8_Fatal(file, line, "CHECK_NE(%s, %s) failed\n# Value: %i", + unexpected_source, value_source, value); + } +} + + +// Helper function used by the CHECK function when given string +// arguments. Should not be called directly. +inline void CheckEqualsHelper(const char* file, + int line, + const char* expected_source, + const char* expected, + const char* value_source, + const char* value) { + if ((expected == NULL && value != NULL) || + (expected != NULL && value == NULL) || + (expected != NULL && value != NULL && strcmp(expected, value) != 0)) { + V8_Fatal(file, line, + "CHECK_EQ(%s, %s) failed\n# Expected: %s\n# Found: %s", + expected_source, value_source, expected, value); + } +} + + +inline void CheckNonEqualsHelper(const char* file, + int line, + const char* expected_source, + const char* expected, + const char* value_source, + const char* value) { + if (expected == value || + (expected != NULL && value != NULL && strcmp(expected, value) == 0)) { + V8_Fatal(file, line, "CHECK_NE(%s, %s) failed\n# Value: %s", + expected_source, value_source, value); + } +} + + +// Helper function used by the CHECK function when given pointer +// arguments. Should not be called directly. +inline void CheckEqualsHelper(const char* file, + int line, + const char* expected_source, + const void* expected, + const char* value_source, + const void* value) { + if (expected != value) { + V8_Fatal(file, line, + "CHECK_EQ(%s, %s) failed\n# Expected: %p\n# Found: %p", + expected_source, value_source, + expected, value); + } +} + + +inline void CheckNonEqualsHelper(const char* file, + int line, + const char* expected_source, + const void* expected, + const char* value_source, + const void* value) { + if (expected == value) { + V8_Fatal(file, line, "CHECK_NE(%s, %s) failed\n# Value: %p", + expected_source, value_source, value); + } +} + + +inline void CheckNonEqualsHelper(const char* file, + int line, + const char* expected_source, + int64_t expected, + const char* value_source, + int64_t value) { + if (expected == value) { + // Print the int64_t value in hex, as two int32s, to avoid + // platform-dependencies. + V8_Fatal(file, line, + "CHECK_NE(%s, %s) failed\n# Value: 0x%08x%08x", + expected_source, value_source, + static_cast<uint32_t>(value >> 32), + static_cast<uint32_t>(value)); + } +} + + +#define CHECK_EQ(expected, value) CheckEqualsHelper(__FILE__, __LINE__, \ + #expected, expected, #value, value) + + +#define CHECK_NE(unexpected, value) CheckNonEqualsHelper(__FILE__, __LINE__, \ + #unexpected, unexpected, #value, value) + + +#define CHECK_GT(a, b) CHECK((a) > (b)) +#define CHECK_GE(a, b) CHECK((a) >= (b)) +#define CHECK_LT(a, b) CHECK((a) < (b)) +#define CHECK_LE(a, b) CHECK((a) <= (b)) + + +namespace v8 { +namespace base { + +// Exposed for making debugging easier (to see where your function is being +// called, just add a call to DumpBacktrace). +void DumpBacktrace(); + +} } // namespace v8::base + + +// The DCHECK macro is equivalent to CHECK except that it only +// generates code in debug builds.
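
// Aside: an illustrative sketch, not part of the V8 sources, of the intended
// division of labour between the two macro families in this header: CHECK is
// compiled into every build, while the DCHECK variants defined just below
// vanish outside of DEBUG builds.
static int DemoDiv(int numerator, int denominator) {
  CHECK_NE(0, denominator);  // guards release builds against bad input
  DCHECK_GE(numerator, 0);   // internal invariant, debug builds only
  return numerator / denominator;
}
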
+#ifdef DEBUG +#define DCHECK_RESULT(expr) CHECK(expr) +#define DCHECK(condition) CHECK(condition) +#define DCHECK_EQ(v1, v2) CHECK_EQ(v1, v2) +#define DCHECK_NE(v1, v2) CHECK_NE(v1, v2) +#define DCHECK_GE(v1, v2) CHECK_GE(v1, v2) +#define DCHECK_LT(v1, v2) CHECK_LT(v1, v2) +#define DCHECK_LE(v1, v2) CHECK_LE(v1, v2) +#else +#define DCHECK_RESULT(expr) (expr) +#define DCHECK(condition) ((void) 0) +#define DCHECK_EQ(v1, v2) ((void) 0) +#define DCHECK_NE(v1, v2) ((void) 0) +#define DCHECK_GE(v1, v2) ((void) 0) +#define DCHECK_LT(v1, v2) ((void) 0) +#define DCHECK_LE(v1, v2) ((void) 0) +#endif + +#define DCHECK_NOT_NULL(p) DCHECK_NE(NULL, p) + +// "Extra checks" are lightweight checks that are enabled in some release +// builds. +#ifdef ENABLE_EXTRA_CHECKS +#define EXTRA_CHECK(condition) CHECK(condition) +#else +#define EXTRA_CHECK(condition) ((void) 0) +#endif + +#endif // V8_BASE_LOGGING_H_ diff --git a/deps/v8/src/base/macros.h b/deps/v8/src/base/macros.h index b99f01b230c..50828db57ff 100644 --- a/deps/v8/src/base/macros.h +++ b/deps/v8/src/base/macros.h @@ -5,7 +5,9 @@ #ifndef V8_BASE_MACROS_H_ #define V8_BASE_MACROS_H_ -#include "../../include/v8stdint.h" +#include "include/v8stdint.h" +#include "src/base/build_config.h" +#include "src/base/logging.h" // The expression OFFSET_OF(type, field) computes the byte-offset @@ -54,15 +56,17 @@ #define MUST_USE_RESULT V8_WARN_UNUSED_RESULT -// Define DISABLE_ASAN macros. +// Define V8_USE_ADDRESS_SANITIZER macros. #if defined(__has_feature) #if __has_feature(address_sanitizer) -#define DISABLE_ASAN __attribute__((no_sanitize_address)) +#define V8_USE_ADDRESS_SANITIZER 1 #endif #endif - -#ifndef DISABLE_ASAN +// Define DISABLE_ASAN macros. +#ifdef V8_USE_ADDRESS_SANITIZER +#define DISABLE_ASAN __attribute__((no_sanitize_address)) +#else #define DISABLE_ASAN #endif @@ -73,4 +77,183 @@ #define V8_IMMEDIATE_CRASH() ((void(*)())0)() #endif + +// Use C++11 static_assert if possible, which gives error +// messages that are easier to understand on first sight. +#if V8_HAS_CXX11_STATIC_ASSERT +#define STATIC_ASSERT(test) static_assert(test, #test) +#else +// This is inspired by the static assertion facility in boost. This +// is pretty magical. If it causes you trouble on a platform you may +// find a fix in the boost code. +template <bool> class StaticAssertion; +template <> class StaticAssertion<true> { }; +// This macro joins two tokens. If one of the tokens is a macro the +// helper call causes it to be resolved before joining. +#define SEMI_STATIC_JOIN(a, b) SEMI_STATIC_JOIN_HELPER(a, b) +#define SEMI_STATIC_JOIN_HELPER(a, b) a##b +// Causes an error during compilation if the condition is not +// statically known to be true. It is formulated as a typedef so that +// it can be used wherever a typedef can be used. Beware that this +// actually causes each use to introduce a new defined type with a +// name depending on the source line. +template <int> class StaticAssertionHelper { }; +#define STATIC_ASSERT(test) \ + typedef \ + StaticAssertionHelper<sizeof(StaticAssertion<static_cast<bool>((test))>)> \ + SEMI_STATIC_JOIN(__StaticAssertTypedef__, __LINE__) V8_UNUSED + +#endif + + +// The USE(x) template is used to silence C++ compiler warnings +// issued for (yet) unused variables (typically parameters). +template <typename T> +inline void USE(T) { } + + +#define IS_POWER_OF_TWO(x) ((x) != 0 && (((x) & ((x) - 1)) == 0)) + + +// Returns true iff x is a power of 2. Cannot be used with the maximally // negative value of the type T (the -1 overflows).
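
// Aside (illustrative, not part of the V8 sources): why the x & (x - 1)
// trick behind IS_POWER_OF_TWO works. Subtracting 1 from a power of two
// clears its single set bit and sets every bit below it, so the AND is zero;
// any other nonzero value shares at least its top bit with its predecessor:
//   x = 0b01000000,  x - 1 = 0b00111111,  x & (x - 1) = 0          (power)
//   y = 0b01001000,  y - 1 = 0b01000111,  y & (y - 1) = 0b01000000 (not)
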
+template <typename T> +inline bool IsPowerOf2(T x) { + return IS_POWER_OF_TWO(x); +} + + +// Define our own macros for writing 64-bit constants. This is less fragile +// than defining __STDC_CONSTANT_MACROS before including <stdint.h>, and it +// works on compilers that don't have it (like MSVC). +#if V8_CC_MSVC +# define V8_UINT64_C(x) (x ## UI64) +# define V8_INT64_C(x) (x ## I64) +# if V8_HOST_ARCH_64_BIT +# define V8_INTPTR_C(x) (x ## I64) +# define V8_PTR_PREFIX "ll" +# else +# define V8_INTPTR_C(x) (x) +# define V8_PTR_PREFIX "" +# endif // V8_HOST_ARCH_64_BIT +#elif V8_CC_MINGW64 +# define V8_UINT64_C(x) (x ## ULL) +# define V8_INT64_C(x) (x ## LL) +# define V8_INTPTR_C(x) (x ## LL) +# define V8_PTR_PREFIX "I64" +#elif V8_HOST_ARCH_64_BIT +# if V8_OS_MACOSX +# define V8_UINT64_C(x) (x ## ULL) +# define V8_INT64_C(x) (x ## LL) +# else +# define V8_UINT64_C(x) (x ## UL) +# define V8_INT64_C(x) (x ## L) +# endif +# define V8_INTPTR_C(x) (x ## L) +# define V8_PTR_PREFIX "l" +#else +# define V8_UINT64_C(x) (x ## ULL) +# define V8_INT64_C(x) (x ## LL) +# define V8_INTPTR_C(x) (x) +# define V8_PTR_PREFIX "" +#endif + +#define V8PRIxPTR V8_PTR_PREFIX "x" +#define V8PRIdPTR V8_PTR_PREFIX "d" +#define V8PRIuPTR V8_PTR_PREFIX "u" + +// Fix for Mac OS X defining uintptr_t as "unsigned long": +#if V8_OS_MACOSX +#undef V8PRIxPTR +#define V8PRIxPTR "lx" +#endif + +// The following macro works on both 32 and 64-bit platforms. +// Usage: instead of writing 0x1234567890123456 +// write V8_2PART_UINT64_C(0x12345678,90123456); +#define V8_2PART_UINT64_C(a, b) (((static_cast<uint64_t>(a) << 32) + 0x##b##u)) + + +// Compute the 0-relative offset of some absolute value x of type T. +// This allows conversion of Addresses and integral types into +// 0-relative int offsets. +template <typename T> +inline intptr_t OffsetFrom(T x) { + return x - static_cast<T>(0); +} + + +// Compute the absolute value of type T for some 0-relative offset x. +// This allows conversion of 0-relative int offsets into Addresses and +// integral types. +template <typename T> +inline T AddressFrom(intptr_t x) { + return static_cast<T>(static_cast<T>(0) + x); +} + + +// Return the largest multiple of m which is <= x. +template <typename T> +inline T RoundDown(T x, intptr_t m) { + DCHECK(IsPowerOf2(m)); + return AddressFrom<T>(OffsetFrom(x) & -m); +} + + +// Return the smallest multiple of m which is >= x. +template <typename T> +inline T RoundUp(T x, intptr_t m) { + return RoundDown<T>(static_cast<T>(x + m - 1), m); +} + + +// Increment a pointer until it has the specified alignment. +// This works like RoundUp, but it works correctly on pointer types where +// sizeof(*pointer) might not be 1. +template<class T> +T AlignUp(T pointer, size_t alignment) { + DCHECK(sizeof(pointer) == sizeof(uintptr_t)); + uintptr_t pointer_raw = reinterpret_cast<uintptr_t>(pointer); + return reinterpret_cast<T>(RoundUp(pointer_raw, alignment)); +} + + +template <typename T, typename U> +inline bool IsAligned(T value, U alignment) { + return (value & (alignment - 1)) == 0; +} + + +// Returns the smallest power of two which is >= x. If you pass in a +// number that is already a power of two, it is returned as is. +// Implementation is from "Hacker's Delight" by Henry S. Warren, Jr., +// figure 3-3, page 48, where the function is called clp2. 
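
// Aside (illustrative, not part of the V8 sources): a worked trace of the
// bit-smearing below for x = 0x2001. Each shift-or doubles the run of ones
// under the leading bit until every lower bit is set; adding 1 then carries
// into the next power of two:
//   x - 1            = 0x2000
//   after x |= x>>1  = 0x3000
//   after x |= x>>2  = 0x3C00
//   after x |= x>>4  = 0x3FC0
//   after x |= x>>8  = 0x3FFF
//   after x |= x>>16 = 0x3FFF
//   x + 1            = 0x4000  ==  RoundUpToPowerOf2(0x2001)
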
+inline uint32_t RoundUpToPowerOf2(uint32_t x) { + DCHECK(x <= 0x80000000u); + x = x - 1; + x = x | (x >> 1); + x = x | (x >> 2); + x = x | (x >> 4); + x = x | (x >> 8); + x = x | (x >> 16); + return x + 1; +} + + +inline uint32_t RoundDownToPowerOf2(uint32_t x) { + uint32_t rounded_up = RoundUpToPowerOf2(x); + if (rounded_up > x) return rounded_up >> 1; + return rounded_up; +} + + +// Returns current value of top of the stack. Works correctly with ASAN. +DISABLE_ASAN +inline uintptr_t GetCurrentStackPosition() { + // Takes the address of the limit variable in order to find out where + // the top of stack is right now. + uintptr_t limit = reinterpret_cast<uintptr_t>(&limit); + return limit; +} + #endif // V8_BASE_MACROS_H_ diff --git a/deps/v8/src/once.cc b/deps/v8/src/base/once.cc similarity index 92% rename from deps/v8/src/once.cc rename to deps/v8/src/base/once.cc index 412c66a1db6..eaabf40d9a5 100644 --- a/deps/v8/src/once.cc +++ b/deps/v8/src/base/once.cc @@ -2,7 +2,7 @@ // Use of this source code is governed by a BSD-style license that can be // found in the LICENSE file. -#include "once.h" +#include "src/base/once.h" #ifdef _WIN32 #include <windows.h> @@ -10,11 +10,10 @@ #include <sched.h> #endif -#include "atomicops.h" -#include "checks.h" +#include "src/base/atomicops.h" namespace v8 { -namespace internal { +namespace base { void CallOnceImpl(OnceType* once, PointerArgFunction init_func, void* arg) { AtomicWord state = Acquire_Load(once); @@ -51,4 +50,4 @@ void CallOnceImpl(OnceType* once, PointerArgFunction init_func, void* arg) { } } -} } // namespace v8::internal +} } // namespace v8::base diff --git a/deps/v8/src/once.h b/deps/v8/src/base/once.h similarity index 93% rename from deps/v8/src/once.h rename to deps/v8/src/base/once.h index 938af281ca1..a8e8437afaa 100644 --- a/deps/v8/src/once.h +++ b/deps/v8/src/base/once.h @@ -49,19 +49,19 @@ // whatsoever to statically-initialize its synchronization primitives, so our // only choice is to assume that dynamic initialization is single-threaded. -#ifndef V8_ONCE_H_ -#define V8_ONCE_H_ +#ifndef V8_BASE_ONCE_H_ +#define V8_BASE_ONCE_H_ -#include "atomicops.h" +#include "src/base/atomicops.h" namespace v8 { -namespace internal { +namespace base { typedef AtomicWord OnceType; #define V8_ONCE_INIT 0 -#define V8_DECLARE_ONCE(NAME) ::v8::internal::OnceType NAME +#define V8_DECLARE_ONCE(NAME) ::v8::base::OnceType NAME enum { ONCE_STATE_UNINITIALIZED = 0, @@ -95,6 +95,6 @@ inline void CallOnce(OnceType* once, } } -} } // namespace v8::internal +} } // namespace v8::base -#endif // V8_ONCE_H_ +#endif // V8_BASE_ONCE_H_ diff --git a/deps/v8/src/platform/condition-variable.cc b/deps/v8/src/base/platform/condition-variable.cc similarity index 89% rename from deps/v8/src/platform/condition-variable.cc rename to deps/v8/src/base/platform/condition-variable.cc index 8e4d16a29ed..4547b66f7a6 100644 --- a/deps/v8/src/platform/condition-variable.cc +++ b/deps/v8/src/base/platform/condition-variable.cc @@ -2,15 +2,15 @@ // Use of this source code is governed by a BSD-style license that can be // found in the LICENSE file. -#include "platform/condition-variable.h" +#include "src/base/platform/condition-variable.h" #include <errno.h> #include <time.h> -#include "platform/time.h" +#include "src/base/platform/time.h" namespace v8 { -namespace internal { +namespace base { #if V8_OS_POSIX @@ -24,37 +24,37 @@ ConditionVariable::ConditionVariable() { // source for pthread_cond_timedwait() to use the monotonic clock. 
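
// Aside: an illustrative sketch, not part of the V8 sources, of the wait
// pattern these primitives are designed for. The predicate is re-checked in
// a loop because condition variables may wake spuriously, and the monotonic
// clock configured above keeps timed waits immune to wall-clock jumps.
static bool g_demo_work_ready = false;

static void WaitForWorkDemo(Mutex* mutex, ConditionVariable* cv) {
  LockGuard<Mutex> guard(mutex);  // scoped lock, released on return
  while (!g_demo_work_ready) {    // guard against spurious wakeups
    cv->Wait(mutex);              // atomically unlocks, sleeps, relocks
  }
}
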
pthread_condattr_t attr; int result = pthread_condattr_init(&attr); - ASSERT_EQ(0, result); + DCHECK_EQ(0, result); result = pthread_condattr_setclock(&attr, CLOCK_MONOTONIC); - ASSERT_EQ(0, result); + DCHECK_EQ(0, result); result = pthread_cond_init(&native_handle_, &attr); - ASSERT_EQ(0, result); + DCHECK_EQ(0, result); result = pthread_condattr_destroy(&attr); #else int result = pthread_cond_init(&native_handle_, NULL); #endif - ASSERT_EQ(0, result); + DCHECK_EQ(0, result); USE(result); } ConditionVariable::~ConditionVariable() { int result = pthread_cond_destroy(&native_handle_); - ASSERT_EQ(0, result); + DCHECK_EQ(0, result); USE(result); } void ConditionVariable::NotifyOne() { int result = pthread_cond_signal(&native_handle_); - ASSERT_EQ(0, result); + DCHECK_EQ(0, result); USE(result); } void ConditionVariable::NotifyAll() { int result = pthread_cond_broadcast(&native_handle_); - ASSERT_EQ(0, result); + DCHECK_EQ(0, result); USE(result); } @@ -62,7 +62,7 @@ void ConditionVariable::NotifyAll() { void ConditionVariable::Wait(Mutex* mutex) { mutex->AssertHeldAndUnmark(); int result = pthread_cond_wait(&native_handle_, &mutex->native_handle()); - ASSERT_EQ(0, result); + DCHECK_EQ(0, result); USE(result); mutex->AssertUnheldAndMark(); } @@ -76,8 +76,8 @@ bool ConditionVariable::WaitFor(Mutex* mutex, const TimeDelta& rel_time) { // Mac OS X provides pthread_cond_timedwait_relative_np(), which does // not depend on the real time clock, which is what you really WANT here! ts = rel_time.ToTimespec(); - ASSERT_GE(ts.tv_sec, 0); - ASSERT_GE(ts.tv_nsec, 0); + DCHECK_GE(ts.tv_sec, 0); + DCHECK_GE(ts.tv_nsec, 0); result = pthread_cond_timedwait_relative_np( &native_handle_, &mutex->native_handle(), &ts); #else @@ -89,14 +89,14 @@ bool ConditionVariable::WaitFor(Mutex* mutex, const TimeDelta& rel_time) { // On Free/Net/OpenBSD and Linux with glibc we can change the time // source for pthread_cond_timedwait() to use the monotonic clock. result = clock_gettime(CLOCK_MONOTONIC, &ts); - ASSERT_EQ(0, result); + DCHECK_EQ(0, result); Time now = Time::FromTimespec(ts); #else // The timeout argument to pthread_cond_timedwait() is in absolute time. Time now = Time::NowFromSystemTime(); #endif Time end_time = now + rel_time; - ASSERT_GE(end_time, now); + DCHECK_GE(end_time, now); ts = end_time.ToTimespec(); result = pthread_cond_timedwait( &native_handle_, &mutex->native_handle(), &ts); @@ -105,7 +105,7 @@ bool ConditionVariable::WaitFor(Mutex* mutex, const TimeDelta& rel_time) { if (result == ETIMEDOUT) { return false; } - ASSERT_EQ(0, result); + DCHECK_EQ(0, result); return true; } @@ -113,12 +113,12 @@ bool ConditionVariable::WaitFor(Mutex* mutex, const TimeDelta& rel_time) { struct ConditionVariable::Event { Event() : handle_(::CreateEventA(NULL, true, false, NULL)) { - ASSERT(handle_ != NULL); + DCHECK(handle_ != NULL); } ~Event() { BOOL ok = ::CloseHandle(handle_); - ASSERT(ok); + DCHECK(ok); USE(ok); } @@ -127,7 +127,7 @@ struct ConditionVariable::Event { if (result == WAIT_OBJECT_0) { return true; } - ASSERT(result == WAIT_TIMEOUT); + DCHECK(result == WAIT_TIMEOUT); return false; } @@ -139,7 +139,7 @@ struct ConditionVariable::Event { ConditionVariable::NativeHandle::~NativeHandle() { - ASSERT(waitlist_ == NULL); + DCHECK(waitlist_ == NULL); while (freelist_ != NULL) { Event* event = freelist_; @@ -165,7 +165,7 @@ ConditionVariable::Event* ConditionVariable::NativeHandle::Pre() { #ifdef DEBUG // The event must not be on the wait list. 
for (Event* we = waitlist_; we != NULL; we = we->next_) { - ASSERT_NE(event, we); + DCHECK_NE(event, we); } #endif @@ -182,7 +182,7 @@ void ConditionVariable::NativeHandle::Post(Event* event, bool result) { // Remove the event from the wait list. for (Event** wep = &waitlist_;; wep = &(*wep)->next_) { - ASSERT_NE(NULL, *wep); + DCHECK_NE(NULL, *wep); if (*wep == event) { *wep = event->next_; break; @@ -192,13 +192,13 @@ void ConditionVariable::NativeHandle::Post(Event* event, bool result) { #ifdef DEBUG // The event must not be on the free list. for (Event* fe = freelist_; fe != NULL; fe = fe->next_) { - ASSERT_NE(event, fe); + DCHECK_NE(event, fe); } #endif // Reset the event. BOOL ok = ::ResetEvent(event->handle_); - ASSERT(ok); + DCHECK(ok); USE(ok); // Insert the event into the free list. @@ -208,7 +208,7 @@ void ConditionVariable::NativeHandle::Post(Event* event, bool result) { // Forward signals delivered after the timeout to the next waiting event. if (!result && event->notified_ && waitlist_ != NULL) { ok = ::SetEvent(waitlist_->handle_); - ASSERT(ok); + DCHECK(ok); USE(ok); waitlist_->notified_ = true; } @@ -234,14 +234,14 @@ void ConditionVariable::NotifyOne() { continue; } int priority = GetThreadPriority(event->thread_); - ASSERT_NE(THREAD_PRIORITY_ERROR_RETURN, priority); + DCHECK_NE(THREAD_PRIORITY_ERROR_RETURN, priority); if (priority >= highest_priority) { highest_priority = priority; highest_event = event; } } if (highest_event != NULL) { - ASSERT(!highest_event->notified_); + DCHECK(!highest_event->notified_); ::SetEvent(highest_event->handle_); highest_event->notified_ = true; } @@ -277,7 +277,7 @@ void ConditionVariable::Wait(Mutex* mutex) { mutex->Lock(); // Release the wait event (we must have been notified). - ASSERT(event->notified_); + DCHECK(event->notified_); native_handle_.Post(event, true); } @@ -311,7 +311,7 @@ bool ConditionVariable::WaitFor(Mutex* mutex, const TimeDelta& rel_time) { mutex->Lock(); // Release the wait event. - ASSERT(!result || event->notified_); + DCHECK(!result || event->notified_); native_handle_.Post(event, result); return result; @@ -319,4 +319,4 @@ bool ConditionVariable::WaitFor(Mutex* mutex, const TimeDelta& rel_time) { #endif // V8_OS_POSIX -} } // namespace v8::internal +} } // namespace v8::base diff --git a/deps/v8/src/platform/condition-variable.h b/deps/v8/src/base/platform/condition-variable.h similarity index 89% rename from deps/v8/src/platform/condition-variable.h rename to deps/v8/src/base/platform/condition-variable.h index eb357beb3ff..9855970eba6 100644 --- a/deps/v8/src/platform/condition-variable.h +++ b/deps/v8/src/base/platform/condition-variable.h @@ -2,13 +2,14 @@ // Use of this source code is governed by a BSD-style license that can be // found in the LICENSE file. -#ifndef V8_PLATFORM_CONDITION_VARIABLE_H_ -#define V8_PLATFORM_CONDITION_VARIABLE_H_ +#ifndef V8_BASE_PLATFORM_CONDITION_VARIABLE_H_ +#define V8_BASE_PLATFORM_CONDITION_VARIABLE_H_ -#include "platform/mutex.h" +#include "src/base/lazy-instance.h" +#include "src/base/platform/mutex.h" namespace v8 { -namespace internal { +namespace base { // Forward declarations. 
class ConditionVariableEvent; @@ -106,12 +107,12 @@ class ConditionVariable V8_FINAL { // LockGuard<Mutex> lock_guard(&my_mutex); // my_condvar.Pointer()->Wait(&my_mutex); // } -typedef LazyStaticInstance<ConditionVariable, - DefaultConstructTrait<ConditionVariable>, - ThreadSafeInitOnceTrait>::type LazyConditionVariable; +typedef LazyStaticInstance< + ConditionVariable, DefaultConstructTrait<ConditionVariable>, + ThreadSafeInitOnceTrait>::type LazyConditionVariable; #define LAZY_CONDITION_VARIABLE_INITIALIZER LAZY_STATIC_INSTANCE_INITIALIZER -} } // namespace v8::internal +} } // namespace v8::base -#endif // V8_PLATFORM_CONDITION_VARIABLE_H_ +#endif // V8_BASE_PLATFORM_CONDITION_VARIABLE_H_ diff --git a/deps/v8/src/platform/elapsed-timer.h b/deps/v8/src/base/platform/elapsed-timer.h similarity index 74% rename from deps/v8/src/platform/elapsed-timer.h rename to deps/v8/src/base/platform/elapsed-timer.h index b25ff20401e..3f456efdf36 100644 --- a/deps/v8/src/platform/elapsed-timer.h +++ b/deps/v8/src/base/platform/elapsed-timer.h @@ -2,16 +2,16 @@ // Use of this source code is governed by a BSD-style license that can be // found in the LICENSE file. -#ifndef V8_PLATFORM_ELAPSED_TIMER_H_ -#define V8_PLATFORM_ELAPSED_TIMER_H_ +#ifndef V8_BASE_PLATFORM_ELAPSED_TIMER_H_ +#define V8_BASE_PLATFORM_ELAPSED_TIMER_H_ -#include "../checks.h" -#include "time.h" +#include "src/base/logging.h" +#include "src/base/platform/time.h" namespace v8 { -namespace internal { +namespace base { -class ElapsedTimer V8_FINAL BASE_EMBEDDED { +class ElapsedTimer V8_FINAL { public: #ifdef DEBUG ElapsedTimer() : started_(false) {} @@ -21,29 +21,29 @@ class ElapsedTimer V8_FINAL BASE_EMBEDDED { // |Elapsed()| or |HasExpired()|, and may be restarted using |Restart()|. // This method must not be called on an already started timer. void Start() { - ASSERT(!IsStarted()); + DCHECK(!IsStarted()); start_ticks_ = Now(); #ifdef DEBUG started_ = true; #endif - ASSERT(IsStarted()); + DCHECK(IsStarted()); } // Stops this timer. Must not be called on a timer that was not // started before. void Stop() { - ASSERT(IsStarted()); + DCHECK(IsStarted()); start_ticks_ = TimeTicks(); #ifdef DEBUG started_ = false; #endif - ASSERT(!IsStarted()); + DCHECK(!IsStarted()); } // Returns |true| if this timer was started previously. bool IsStarted() const { - ASSERT(started_ || start_ticks_.IsNull()); - ASSERT(!started_ || !start_ticks_.IsNull()); + DCHECK(started_ || start_ticks_.IsNull()); + DCHECK(!started_ || !start_ticks_.IsNull()); return !start_ticks_.IsNull(); } @@ -53,21 +53,21 @@ class ElapsedTimer V8_FINAL BASE_EMBEDDED { // avoiding the need to obtain the clock value twice. It may only be called // on a previously started timer. TimeDelta Restart() { - ASSERT(IsStarted()); + DCHECK(IsStarted()); TimeTicks ticks = Now(); TimeDelta elapsed = ticks - start_ticks_; - ASSERT(elapsed.InMicroseconds() >= 0); + DCHECK(elapsed.InMicroseconds() >= 0); start_ticks_ = ticks; - ASSERT(IsStarted()); + DCHECK(IsStarted()); return elapsed; } // Returns the time elapsed since the previous start. This method may only // be called on a previously started timer. TimeDelta Elapsed() const { - ASSERT(IsStarted()); + DCHECK(IsStarted()); TimeDelta elapsed = Now() - start_ticks_; - ASSERT(elapsed.InMicroseconds() >= 0); + DCHECK(elapsed.InMicroseconds() >= 0); return elapsed; } @@ -75,14 +75,14 @@ class ElapsedTimer V8_FINAL BASE_EMBEDDED { // previous start, or |false| if not. This method may only be called on // a previously started timer. 
bool HasExpired(TimeDelta time_delta) const { - ASSERT(IsStarted()); + DCHECK(IsStarted()); return Elapsed() >= time_delta; } private: static V8_INLINE TimeTicks Now() { TimeTicks now = TimeTicks::HighResolutionNow(); - ASSERT(!now.IsNull()); + DCHECK(!now.IsNull()); return now; } @@ -92,6 +92,6 @@ class ElapsedTimer V8_FINAL BASE_EMBEDDED { #endif }; -} } // namespace v8::internal +} } // namespace v8::base -#endif // V8_PLATFORM_ELAPSED_TIMER_H_ +#endif // V8_BASE_PLATFORM_ELAPSED_TIMER_H_ diff --git a/deps/v8/src/platform/mutex.cc b/deps/v8/src/base/platform/mutex.cc similarity index 87% rename from deps/v8/src/platform/mutex.cc rename to deps/v8/src/base/platform/mutex.cc index 4e9fb989b60..8b1e305701f 100644 --- a/deps/v8/src/platform/mutex.cc +++ b/deps/v8/src/base/platform/mutex.cc @@ -2,12 +2,12 @@ // Use of this source code is governed by a BSD-style license that can be // found in the LICENSE file. -#include "platform/mutex.h" +#include "src/base/platform/mutex.h" #include <errno.h> namespace v8 { -namespace internal { +namespace base { #if V8_OS_POSIX @@ -17,17 +17,17 @@ static V8_INLINE void InitializeNativeHandle(pthread_mutex_t* mutex) { // Use an error checking mutex in debug mode. pthread_mutexattr_t attr; result = pthread_mutexattr_init(&attr); - ASSERT_EQ(0, result); + DCHECK_EQ(0, result); result = pthread_mutexattr_settype(&attr, PTHREAD_MUTEX_ERRORCHECK); - ASSERT_EQ(0, result); + DCHECK_EQ(0, result); result = pthread_mutex_init(mutex, &attr); - ASSERT_EQ(0, result); + DCHECK_EQ(0, result); result = pthread_mutexattr_destroy(&attr); #else // Use a fast mutex (default attributes). result = pthread_mutex_init(mutex, NULL); #endif // defined(DEBUG) - ASSERT_EQ(0, result); + DCHECK_EQ(0, result); USE(result); } @@ -35,34 +35,34 @@ static V8_INLINE void InitializeNativeHandle(pthread_mutex_t* mutex) { static V8_INLINE void InitializeRecursiveNativeHandle(pthread_mutex_t* mutex) { pthread_mutexattr_t attr; int result = pthread_mutexattr_init(&attr); - ASSERT_EQ(0, result); + DCHECK_EQ(0, result); result = pthread_mutexattr_settype(&attr, PTHREAD_MUTEX_RECURSIVE); - ASSERT_EQ(0, result); + DCHECK_EQ(0, result); result = pthread_mutex_init(mutex, &attr); - ASSERT_EQ(0, result); + DCHECK_EQ(0, result); result = pthread_mutexattr_destroy(&attr); - ASSERT_EQ(0, result); + DCHECK_EQ(0, result); USE(result); } static V8_INLINE void DestroyNativeHandle(pthread_mutex_t* mutex) { int result = pthread_mutex_destroy(mutex); - ASSERT_EQ(0, result); + DCHECK_EQ(0, result); USE(result); } static V8_INLINE void LockNativeHandle(pthread_mutex_t* mutex) { int result = pthread_mutex_lock(mutex); - ASSERT_EQ(0, result); + DCHECK_EQ(0, result); USE(result); } static V8_INLINE void UnlockNativeHandle(pthread_mutex_t* mutex) { int result = pthread_mutex_unlock(mutex); - ASSERT_EQ(0, result); + DCHECK_EQ(0, result); USE(result); } @@ -72,7 +72,7 @@ static V8_INLINE bool TryLockNativeHandle(pthread_mutex_t* mutex) { if (result == EBUSY) { return false; } - ASSERT_EQ(0, result); + DCHECK_EQ(0, result); return true; } @@ -120,7 +120,7 @@ Mutex::Mutex() { Mutex::~Mutex() { DestroyNativeHandle(&native_handle_); - ASSERT_EQ(0, level_); + DCHECK_EQ(0, level_); } @@ -155,14 +155,14 @@ RecursiveMutex::RecursiveMutex() { RecursiveMutex::~RecursiveMutex() { DestroyNativeHandle(&native_handle_); - ASSERT_EQ(0, level_); + DCHECK_EQ(0, level_); } void RecursiveMutex::Lock() { LockNativeHandle(&native_handle_); #ifdef DEBUG - ASSERT_LE(0, level_); + DCHECK_LE(0, level_); level_++; #endif } @@ -170,7 +170,7 @@ 
void RecursiveMutex::Lock() { void RecursiveMutex::Unlock() { #ifdef DEBUG - ASSERT_LT(0, level_); + DCHECK_LT(0, level_); level_--; #endif UnlockNativeHandle(&native_handle_); @@ -182,10 +182,10 @@ bool RecursiveMutex::TryLock() { return false; } #ifdef DEBUG - ASSERT_LE(0, level_); + DCHECK_LE(0, level_); level_++; #endif return true; } -} } // namespace v8::internal +} } // namespace v8::base diff --git a/deps/v8/src/platform/mutex.h b/deps/v8/src/base/platform/mutex.h similarity index 94% rename from deps/v8/src/platform/mutex.h rename to deps/v8/src/base/platform/mutex.h index 4abf7f71cd1..2f8c07d89e5 100644 --- a/deps/v8/src/platform/mutex.h +++ b/deps/v8/src/base/platform/mutex.h @@ -2,20 +2,21 @@ // Use of this source code is governed by a BSD-style license that can be // found in the LICENSE file. -#ifndef V8_PLATFORM_MUTEX_H_ -#define V8_PLATFORM_MUTEX_H_ +#ifndef V8_BASE_PLATFORM_MUTEX_H_ +#define V8_BASE_PLATFORM_MUTEX_H_ -#include "../lazy-instance.h" +#include "src/base/lazy-instance.h" #if V8_OS_WIN -#include "../win32-headers.h" +#include "src/base/win32-headers.h" #endif +#include "src/base/logging.h" #if V8_OS_POSIX #include <pthread.h> // NOLINT #endif namespace v8 { -namespace internal { +namespace base { // ---------------------------------------------------------------------------- // Mutex @@ -73,14 +74,14 @@ class Mutex V8_FINAL { V8_INLINE void AssertHeldAndUnmark() { #ifdef DEBUG - ASSERT_EQ(1, level_); + DCHECK_EQ(1, level_); level_--; #endif } V8_INLINE void AssertUnheldAndMark() { #ifdef DEBUG - ASSERT_EQ(0, level_); + DCHECK_EQ(0, level_); level_++; #endif } @@ -100,8 +101,7 @@ class Mutex V8_FINAL { // // Do something. // } // -typedef LazyStaticInstance<Mutex, - DefaultConstructTrait<Mutex>, +typedef LazyStaticInstance<Mutex, DefaultConstructTrait<Mutex>, ThreadSafeInitOnceTrait>::type LazyMutex; #define LAZY_MUTEX_INITIALIZER LAZY_STATIC_INSTANCE_INITIALIZER @@ -210,6 +210,6 @@ class LockGuard V8_FINAL { DISALLOW_COPY_AND_ASSIGN(LockGuard); }; -} } // namespace v8::internal +} } // namespace v8::base -#endif // V8_PLATFORM_MUTEX_H_ +#endif // V8_BASE_PLATFORM_MUTEX_H_ diff --git a/deps/v8/src/platform-cygwin.cc b/deps/v8/src/base/platform/platform-cygwin.cc similarity index 80% rename from deps/v8/src/platform-cygwin.cc rename to deps/v8/src/base/platform/platform-cygwin.cc index 05ce578270f..d93439bf144 100644 --- a/deps/v8/src/platform-cygwin.cc +++ b/deps/v8/src/base/platform/platform-cygwin.cc @@ -10,20 +10,20 @@ #include <semaphore.h> #include <stdarg.h> #include <strings.h> // index -#include <sys/time.h> #include <sys/mman.h> // mmap & munmap +#include <sys/time.h> #include <unistd.h> // sysconf -#undef MAP_TYPE +#include <cmath> -#include "v8.h" +#undef MAP_TYPE -#include "platform.h" -#include "v8threads.h" -#include "win32-headers.h" +#include "src/base/macros.h" +#include "src/base/platform/platform.h" +#include "src/base/win32-headers.h" namespace v8 { -namespace internal { +namespace base { const char* OS::LocalTimezone(double time, TimezoneCache* cache) { @@ -38,9 +38,9 @@ const char* OS::LocalTimezone(double time, TimezoneCache* cache) { double OS::LocalTimeOffset(TimezoneCache* cache) { // On Cygwin, struct tm does not contain a tm_gmtoff field. time_t utc = time(NULL); - ASSERT(utc != -1); + DCHECK(utc != -1); struct tm* loc = localtime(&utc); - ASSERT(loc != NULL); + DCHECK(loc != NULL); // time - localtime includes any daylight savings offset, so subtract it. 
return static_cast<double>((mktime(loc) - utc) * msPerSecond - (loc->tm_isdst > 0 ? 3600 * msPerSecond : 0)); @@ -53,10 +53,7 @@ void* OS::Allocate(const size_t requested, const size_t msize = RoundUp(requested, sysconf(_SC_PAGESIZE)); int prot = PROT_READ | PROT_WRITE | (is_executable ? PROT_EXEC : 0); void* mbase = mmap(NULL, msize, prot, MAP_PRIVATE | MAP_ANONYMOUS, -1, 0); - if (mbase == MAP_FAILED) { - LOG(Isolate::Current(), StringEvent("OS::Allocate", "mmap failed")); - return NULL; - } + if (mbase == MAP_FAILED) return NULL; *allocated = msize; return mbase; } @@ -110,12 +107,13 @@ PosixMemoryMappedFile::~PosixMemoryMappedFile() { } -void OS::LogSharedLibraryAddresses(Isolate* isolate) { +std::vector<OS::SharedLibraryAddress> OS::GetSharedLibraryAddresses() { + std::vector<SharedLibraryAddress> result; // This function assumes that the layout of the file is as follows: // hex_start_addr-hex_end_addr rwxp <unused data> [binary_file_name] // If we encounter an unexpected situation we abort scanning further entries. FILE* fp = fopen("/proc/self/maps", "r"); - if (fp == NULL) return; + if (fp == NULL) return result; // Allocate enough room to be able to store a full file name. const int kLibNameLen = FILENAME_MAX + 1; @@ -154,7 +152,7 @@ void OS::LogSharedLibraryAddresses(Isolate* isolate) { snprintf(lib_name, kLibNameLen, "%08" V8PRIxPTR "-%08" V8PRIxPTR, start, end); } - LOG(isolate, SharedLibraryEvent(lib_name, start, end)); + result.push_back(SharedLibraryAddress(lib_name, start, end)); } else { // Entry not describing executable data. Skip to end of line to set up // reading the next entry. @@ -166,6 +164,7 @@ void OS::LogSharedLibraryAddresses(Isolate* isolate) { } free(lib_name); fclose(fp); + return result; } @@ -180,41 +179,13 @@ void OS::SignalCodeMovingGC() { // This causes VirtualMemory::Commit to not always commit the memory region // specified. -static void* GetRandomAddr() { - Isolate* isolate = Isolate::UncheckedCurrent(); - // Note that the current isolate isn't set up in a call path via - // CpuFeatures::Probe. We don't care about randomization in this case because - // the code page is immediately freed. - if (isolate != NULL) { - // The address range used to randomize RWX allocations in OS::Allocate - // Try not to map pages into the default range that windows loads DLLs - // Use a multiple of 64k to prevent committing unused memory.
- // Note: This does not guarantee RWX regions will be within the - // range kAllocationRandomAddressMin to kAllocationRandomAddressMax -#ifdef V8_HOST_ARCH_64_BIT - static const intptr_t kAllocationRandomAddressMin = 0x0000000080000000; - static const intptr_t kAllocationRandomAddressMax = 0x000003FFFFFF0000; -#else - static const intptr_t kAllocationRandomAddressMin = 0x04000000; - static const intptr_t kAllocationRandomAddressMax = 0x3FFF0000; -#endif - uintptr_t address = - (isolate->random_number_generator()->NextInt() << kPageSizeBits) | - kAllocationRandomAddressMin; - address &= kAllocationRandomAddressMax; - return reinterpret_cast<void *>(address); - } - return NULL; -} - - static void* RandomizedVirtualAlloc(size_t size, int action, int protection) { LPVOID base = NULL; if (protection == PAGE_EXECUTE_READWRITE || protection == PAGE_NOACCESS) { // For executable pages try to randomize the allocation address for (size_t attempts = 0; base == NULL && attempts < 3; ++attempts) { - base = VirtualAlloc(GetRandomAddr(), size, action, protection); + base = VirtualAlloc(OS::GetRandomMmapAddr(), size, action, protection); } } @@ -234,20 +205,20 @@ VirtualMemory::VirtualMemory(size_t size) VirtualMemory::VirtualMemory(size_t size, size_t alignment) : address_(NULL), size_(0) { - ASSERT(IsAligned(alignment, static_cast<intptr_t>(OS::AllocateAlignment()))); + DCHECK(IsAligned(alignment, static_cast<intptr_t>(OS::AllocateAlignment()))); size_t request_size = RoundUp(size + alignment, static_cast<intptr_t>(OS::AllocateAlignment())); void* address = ReserveRegion(request_size); if (address == NULL) return; - Address base = RoundUp(static_cast<Address>(address), alignment); + uint8_t* base = RoundUp(static_cast<uint8_t*>(address), alignment); // Try reducing the size by freeing and then reallocating a specific area. bool result = ReleaseRegion(address, request_size); USE(result); - ASSERT(result); + DCHECK(result); address = VirtualAlloc(base, size, MEM_RESERVE, PAGE_NOACCESS); if (address != NULL) { request_size = size; - ASSERT(base == static_cast<Address>(address)); + DCHECK(base == static_cast<uint8_t*>(address)); } else { // Resizing failed, just go with a bigger area.
address = ReserveRegion(request_size); @@ -261,7 +232,7 @@ VirtualMemory::VirtualMemory(size_t size, size_t alignment) VirtualMemory::~VirtualMemory() { if (IsReserved()) { bool result = ReleaseRegion(address_, size_); - ASSERT(result); + DCHECK(result); USE(result); } } @@ -284,7 +255,7 @@ bool VirtualMemory::Commit(void* address, size_t size, bool is_executable) { bool VirtualMemory::Uncommit(void* address, size_t size) { - ASSERT(IsReserved()); + DCHECK(IsReserved()); return UncommitRegion(address, size); } @@ -329,4 +300,4 @@ bool VirtualMemory::HasLazyCommits() { return false; } -} } // namespace v8::internal +} } // namespace v8::base diff --git a/deps/v8/src/platform-freebsd.cc b/deps/v8/src/base/platform/platform-freebsd.cc similarity index 91% rename from deps/v8/src/platform-freebsd.cc rename to deps/v8/src/base/platform/platform-freebsd.cc index 7e5bb8a9f93..09d7ca77d3f 100644 --- a/deps/v8/src/platform-freebsd.cc +++ b/deps/v8/src/base/platform/platform-freebsd.cc @@ -8,33 +8,33 @@ #include <pthread.h> #include <semaphore.h> #include <signal.h> -#include <sys/time.h> +#include <stdlib.h> #include <sys/resource.h> +#include <sys/time.h> #include <sys/types.h> #include <sys/ucontext.h> -#include <stdlib.h> -#include <sys/types.h> // mmap & munmap +#include <sys/fcntl.h> // open #include <sys/mman.h> // mmap & munmap #include <sys/stat.h> // open -#include <sys/fcntl.h> // open +#include <sys/types.h> // mmap & munmap #include <unistd.h> // getpagesize // If you don't have execinfo.h then you need devel/libexecinfo from ports. -#include <strings.h> // index #include <errno.h> -#include <stdarg.h> #include <limits.h> +#include <stdarg.h> +#include <strings.h> // index -#undef MAP_TYPE +#include <cmath> -#include "v8.h" -#include "v8threads.h" +#undef MAP_TYPE -#include "platform.h" +#include "src/base/macros.h" +#include "src/base/platform/platform.h" namespace v8 { -namespace internal { +namespace base { const char* OS::LocalTimezone(double time, TimezoneCache* cache) { @@ -62,10 +62,7 @@ void* OS::Allocate(const size_t requested, int prot = PROT_READ | PROT_WRITE | (executable ? PROT_EXEC : 0); void* mbase = mmap(NULL, msize, prot, MAP_PRIVATE | MAP_ANON, -1, 0); - if (mbase == MAP_FAILED) { - LOG(Isolate::Current(), StringEvent("OS::Allocate", "mmap failed")); - return NULL; - } + if (mbase == MAP_FAILED) return NULL; *allocated = msize; return mbase; } @@ -124,10 +121,11 @@ static unsigned StringToLong(char* buffer) { } -void OS::LogSharedLibraryAddresses(Isolate* isolate) { +std::vector<OS::SharedLibraryAddress> OS::GetSharedLibraryAddresses() { + std::vector<SharedLibraryAddress> result; static const int MAP_LENGTH = 1024; int fd = open("/proc/self/maps", O_RDONLY); - if (fd < 0) return; + if (fd < 0) return result; while (true) { char addr_buffer[11]; addr_buffer[0] = '0'; @@ -158,9 +156,10 @@ void OS::LogSharedLibraryAddresses(Isolate* isolate) { // There may be no filename in this line. Skip to next. 
if (start_of_path == NULL) continue; buffer[bytes_read] = 0; - LOG(isolate, SharedLibraryEvent(start_of_path, start, end)); + result.push_back(SharedLibraryAddress(start_of_path, start, end)); } close(fd); + return result; } @@ -183,7 +182,7 @@ VirtualMemory::VirtualMemory(size_t size) VirtualMemory::VirtualMemory(size_t size, size_t alignment) : address_(NULL), size_(0) { - ASSERT(IsAligned(alignment, static_cast<intptr_t>(OS::AllocateAlignment()))); + DCHECK(IsAligned(alignment, static_cast<intptr_t>(OS::AllocateAlignment()))); size_t request_size = RoundUp(size + alignment, static_cast<intptr_t>(OS::AllocateAlignment())); void* reservation = mmap(OS::GetRandomMmapAddr(), @@ -194,9 +193,9 @@ VirtualMemory::VirtualMemory(size_t size, size_t alignment) kMmapFdOffset); if (reservation == MAP_FAILED) return; - Address base = static_cast<Address>(reservation); - Address aligned_base = RoundUp(base, alignment); - ASSERT_LE(base, aligned_base); + uint8_t* base = static_cast<uint8_t*>(reservation); + uint8_t* aligned_base = RoundUp(base, alignment); + DCHECK_LE(base, aligned_base); // Unmap extra memory reserved before and after the desired block. if (aligned_base != base) { @@ -206,7 +205,7 @@ VirtualMemory::VirtualMemory(size_t size, size_t alignment) } size_t aligned_size = RoundUp(size, OS::AllocateAlignment()); - ASSERT_LE(aligned_size, request_size); + DCHECK_LE(aligned_size, request_size); if (aligned_size != request_size) { size_t suffix_size = request_size - aligned_size; @@ -214,7 +213,7 @@ VirtualMemory::VirtualMemory(size_t size, size_t alignment) request_size -= suffix_size; } - ASSERT(aligned_size == request_size); + DCHECK(aligned_size == request_size); address_ = static_cast<void*>(aligned_base); size_ = aligned_size; @@ -224,7 +223,7 @@ VirtualMemory::VirtualMemory(size_t size, size_t alignment) VirtualMemory::~VirtualMemory() { if (IsReserved()) { bool result = ReleaseRegion(address(), size()); - ASSERT(result); + DCHECK(result); USE(result); } } @@ -305,4 +304,4 @@ bool VirtualMemory::HasLazyCommits() { return false; } -} } // namespace v8::internal +} } // namespace v8::base diff --git a/deps/v8/src/platform-linux.cc b/deps/v8/src/base/platform/platform-linux.cc similarity index 92% rename from deps/v8/src/platform-linux.cc rename to deps/v8/src/base/platform/platform-linux.cc index 34e8c551efa..fca170916c4 100644 --- a/deps/v8/src/platform-linux.cc +++ b/deps/v8/src/base/platform/platform-linux.cc @@ -8,47 +8,47 @@ #include <pthread.h> #include <semaphore.h> #include <signal.h> +#include <stdlib.h> #include <sys/prctl.h> -#include <sys/time.h> #include <sys/resource.h> #include <sys/syscall.h> +#include <sys/time.h> #include <sys/types.h> -#include <stdlib.h> // Ubuntu Dapper requires memory pages to be marked as // executable. Otherwise, OS raises an exception when executing code // in that page. -#include <sys/types.h> // mmap & munmap +#include <errno.h> +#include <fcntl.h> // open +#include <stdarg.h> +#include <strings.h> // index #include <sys/mman.h> // mmap & munmap #include <sys/stat.h> // open -#include <fcntl.h> // open +#include <sys/types.h> // mmap & munmap #include <unistd.h> // sysconf -#include <strings.h> // index -#include <errno.h> -#include <stdarg.h> // GLibc on ARM defines mcontext_t as a typedef for 'struct sigcontext'. // Old versions of the C library <signal.h> didn't define the type.
#if defined(__ANDROID__) && !defined(__BIONIC_HAVE_UCONTEXT_T) && \ (defined(__arm__) || defined(__aarch64__)) && \ !defined(__BIONIC_HAVE_STRUCT_SIGCONTEXT) -#include <asm/sigcontext.h> +#include <asm/sigcontext.h> // NOLINT #endif #if defined(LEAK_SANITIZER) #include <sanitizer/lsan_interface.h> #endif -#undef MAP_TYPE +#include <cmath> -#include "v8.h" +#undef MAP_TYPE -#include "platform.h" -#include "v8threads.h" +#include "src/base/macros.h" +#include "src/base/platform/platform.h" namespace v8 { -namespace internal { +namespace base { #ifdef __arm__ @@ -119,11 +119,7 @@ void* OS::Allocate(const size_t requested, int prot = PROT_READ | PROT_WRITE | (is_executable ? PROT_EXEC : 0); void* addr = OS::GetRandomMmapAddr(); void* mbase = mmap(addr, msize, prot, MAP_PRIVATE | MAP_ANONYMOUS, -1, 0); - if (mbase == MAP_FAILED) { - LOG(i::Isolate::Current(), - StringEvent("OS::Allocate", "mmap failed")); - return NULL; - } + if (mbase == MAP_FAILED) return NULL; *allocated = msize; return mbase; } @@ -187,12 +183,13 @@ PosixMemoryMappedFile::~PosixMemoryMappedFile() { } -void OS::LogSharedLibraryAddresses(Isolate* isolate) { +std::vector<OS::SharedLibraryAddress> OS::GetSharedLibraryAddresses() { + std::vector<SharedLibraryAddress> result; // This function assumes that the layout of the file is as follows: // hex_start_addr-hex_end_addr rwxp <unused data> [binary_file_name] // If we encounter an unexpected situation we abort scanning further entries. FILE* fp = fopen("/proc/self/maps", "r"); - if (fp == NULL) return; + if (fp == NULL) return result; // Allocate enough room to be able to store a full file name. const int kLibNameLen = FILENAME_MAX + 1; @@ -232,7 +229,7 @@ void OS::LogSharedLibraryAddresses(Isolate* isolate) { snprintf(lib_name, kLibNameLen, "%08" V8PRIxPTR "-%08" V8PRIxPTR, start, end); } - LOG(isolate, SharedLibraryEvent(lib_name, start, end)); + result.push_back(SharedLibraryAddress(lib_name, start, end)); } else { // Entry not describing executable data. Skip to end of line to set up // reading the next entry. @@ -244,6 +241,7 @@ void OS::LogSharedLibraryAddresses(Isolate* isolate) { } free(lib_name); fclose(fp); + return result; } @@ -257,9 +255,9 @@ void OS::SignalCodeMovingGC() { // by the kernel and allows us to synchronize V8 code log and the // kernel log. 
int size = sysconf(_SC_PAGESIZE); - FILE* f = fopen(FLAG_gc_fake_mmap, "w+"); + FILE* f = fopen(OS::GetGCFakeMMapFile(), "w+"); if (f == NULL) { - OS::PrintError("Failed to open %s\n", FLAG_gc_fake_mmap); + OS::PrintError("Failed to open %s\n", OS::GetGCFakeMMapFile()); OS::Abort(); } void* addr = mmap(OS::GetRandomMmapAddr(), @@ -274,7 +272,7 @@ void OS::SignalCodeMovingGC() { MAP_PRIVATE, fileno(f), 0); - ASSERT(addr != MAP_FAILED); + DCHECK(addr != MAP_FAILED); OS::Free(addr, size); fclose(f); } @@ -294,7 +292,7 @@ VirtualMemory::VirtualMemory(size_t size) VirtualMemory::VirtualMemory(size_t size, size_t alignment) : address_(NULL), size_(0) { - ASSERT(IsAligned(alignment, static_cast<intptr_t>(OS::AllocateAlignment()))); + DCHECK(IsAligned(alignment, static_cast<intptr_t>(OS::AllocateAlignment()))); size_t request_size = RoundUp(size + alignment, static_cast<intptr_t>(OS::AllocateAlignment())); void* reservation = mmap(OS::GetRandomMmapAddr(), @@ -305,9 +303,9 @@ VirtualMemory::VirtualMemory(size_t size, size_t alignment) kMmapFdOffset); if (reservation == MAP_FAILED) return; - Address base = static_cast<Address>(reservation); - Address aligned_base = RoundUp(base, alignment); - ASSERT_LE(base, aligned_base); + uint8_t* base = static_cast<uint8_t*>(reservation); + uint8_t* aligned_base = RoundUp(base, alignment); + DCHECK_LE(base, aligned_base); // Unmap extra memory reserved before and after the desired block. if (aligned_base != base) { @@ -317,7 +315,7 @@ VirtualMemory::VirtualMemory(size_t size, size_t alignment) } size_t aligned_size = RoundUp(size, OS::AllocateAlignment()); - ASSERT_LE(aligned_size, request_size); + DCHECK_LE(aligned_size, request_size); if (aligned_size != request_size) { size_t suffix_size = request_size - aligned_size; @@ -325,7 +323,7 @@ VirtualMemory::VirtualMemory(size_t size, size_t alignment) request_size -= suffix_size; } - ASSERT(aligned_size == request_size); + DCHECK(aligned_size == request_size); address_ = static_cast<void*>(aligned_base); size_ = aligned_size; @@ -338,7 +336,7 @@ VirtualMemory::VirtualMemory(size_t size, size_t alignment) VirtualMemory::~VirtualMemory() { if (IsReserved()) { bool result = ReleaseRegion(address(), size()); - ASSERT(result); + DCHECK(result); USE(result); } } @@ -431,4 +429,4 @@ bool VirtualMemory::HasLazyCommits() { return true; } -} } // namespace v8::internal +} } // namespace v8::base diff --git a/deps/v8/src/platform-macos.cc b/deps/v8/src/base/platform/platform-macos.cc similarity index 91% rename from deps/v8/src/platform-macos.cc rename to deps/v8/src/base/platform/platform-macos.cc index facb6bd6179..77771f46c1c 100644 --- a/deps/v8/src/platform-macos.cc +++ b/deps/v8/src/base/platform/platform-macos.cc @@ -6,40 +6,41 @@ // parts, the implementation is in platform-posix.cc. 
#include <dlfcn.h> -#include <unistd.h> -#include <sys/mman.h> #include <mach/mach_init.h> #include <mach-o/dyld.h> #include <mach-o/getsect.h> +#include <sys/mman.h> +#include <unistd.h> #include <AvailabilityMacros.h> -#include <pthread.h> -#include <semaphore.h> -#include <signal.h> +#include <errno.h> #include <libkern/OSAtomic.h> #include <mach/mach.h> #include <mach/semaphore.h> #include <mach/task.h> #include <mach/vm_statistics.h> -#include <sys/time.h> -#include <sys/resource.h> -#include <sys/types.h> -#include <sys/sysctl.h> +#include <pthread.h> +#include <semaphore.h> +#include <signal.h> #include <stdarg.h> #include <stdlib.h> #include <string.h> -#include <errno.h> +#include <sys/resource.h> +#include <sys/sysctl.h> +#include <sys/time.h> +#include <sys/types.h> -#undef MAP_TYPE +#include <cmath> -#include "v8.h" +#undef MAP_TYPE -#include "platform.h" +#include "src/base/macros.h" +#include "src/base/platform/platform.h" namespace v8 { -namespace internal { +namespace base { // Constants used for mmap. @@ -61,10 +62,7 @@ void* OS::Allocate(const size_t requested, MAP_PRIVATE | MAP_ANON, kMmapFd, kMmapFdOffset); - if (mbase == MAP_FAILED) { - LOG(Isolate::Current(), StringEvent("OS::Allocate", "mmap failed")); - return NULL; - } + if (mbase == MAP_FAILED) return NULL; *allocated = msize; return mbase; } @@ -128,7 +126,8 @@ PosixMemoryMappedFile::~PosixMemoryMappedFile() { } -void OS::LogSharedLibraryAddresses(Isolate* isolate) { +std::vector<OS::SharedLibraryAddress> OS::GetSharedLibraryAddresses() { + std::vector<SharedLibraryAddress> result; unsigned int images_count = _dyld_image_count(); for (unsigned int i = 0; i < images_count; ++i) { const mach_header* header = _dyld_get_image_header(i); @@ -147,9 +146,10 @@ void OS::LogSharedLibraryAddresses(Isolate* isolate) { if (code_ptr == NULL) continue; const uintptr_t slide = _dyld_get_image_vmaddr_slide(i); const uintptr_t start = reinterpret_cast<uintptr_t>(code_ptr) + slide; - LOG(isolate, - SharedLibraryEvent(_dyld_get_image_name(i), start, start + size)); + result.push_back( + SharedLibraryAddress(_dyld_get_image_name(i), start, start + size)); } + return result; } @@ -184,7 +184,7 @@ VirtualMemory::VirtualMemory(size_t size) VirtualMemory::VirtualMemory(size_t size, size_t alignment) : address_(NULL), size_(0) { - ASSERT(IsAligned(alignment, static_cast<intptr_t>(OS::AllocateAlignment()))); + DCHECK(IsAligned(alignment, static_cast<intptr_t>(OS::AllocateAlignment()))); size_t request_size = RoundUp(size + alignment, static_cast<intptr_t>(OS::AllocateAlignment())); void* reservation = mmap(OS::GetRandomMmapAddr(), @@ -195,9 +195,9 @@ VirtualMemory::VirtualMemory(size_t size, size_t alignment) kMmapFdOffset); if (reservation == MAP_FAILED) return; - Address base = static_cast<Address>(reservation); - Address aligned_base = RoundUp(base, alignment); - ASSERT_LE(base, aligned_base); + uint8_t* base = static_cast<uint8_t*>(reservation); + uint8_t* aligned_base = RoundUp(base, alignment); + DCHECK_LE(base, aligned_base); // Unmap extra memory reserved before and after the desired block. 
if (aligned_base != base) { @@ -207,7 +207,7 @@ VirtualMemory::VirtualMemory(size_t size, size_t alignment) } size_t aligned_size = RoundUp(size, OS::AllocateAlignment()); - ASSERT_LE(aligned_size, request_size); + DCHECK_LE(aligned_size, request_size); if (aligned_size != request_size) { size_t suffix_size = request_size - aligned_size; @@ -215,7 +215,7 @@ VirtualMemory::VirtualMemory(size_t size, size_t alignment) request_size -= suffix_size; } - ASSERT(aligned_size == request_size); + DCHECK(aligned_size == request_size); address_ = static_cast<void*>(aligned_base); size_ = aligned_size; @@ -225,7 +225,7 @@ VirtualMemory::VirtualMemory(size_t size, size_t alignment) VirtualMemory::~VirtualMemory() { if (IsReserved()) { bool result = ReleaseRegion(address(), size()); - ASSERT(result); + DCHECK(result); USE(result); } } @@ -307,4 +307,4 @@ bool VirtualMemory::HasLazyCommits() { return false; } -} } // namespace v8::internal +} } // namespace v8::base diff --git a/deps/v8/src/platform-openbsd.cc b/deps/v8/src/base/platform/platform-openbsd.cc similarity index 91% rename from deps/v8/src/platform-openbsd.cc rename to deps/v8/src/base/platform/platform-openbsd.cc index 21fe2b4d998..a3f39e2dd78 100644 --- a/deps/v8/src/platform-openbsd.cc +++ b/deps/v8/src/base/platform/platform-openbsd.cc @@ -8,31 +8,31 @@ #include <pthread.h> #include <semaphore.h> #include <signal.h> -#include <sys/time.h> +#include <stdlib.h> #include <sys/resource.h> #include <sys/syscall.h> +#include <sys/time.h> #include <sys/types.h> -#include <stdlib.h> -#include <sys/types.h> // mmap & munmap +#include <errno.h> +#include <fcntl.h> // open +#include <stdarg.h> +#include <strings.h> // index #include <sys/mman.h> // mmap & munmap #include <sys/stat.h> // open -#include <fcntl.h> // open +#include <sys/types.h> // mmap & munmap #include <unistd.h> // sysconf -#include <strings.h> // index -#include <errno.h> -#include <stdarg.h> -#undef MAP_TYPE +#include <cmath> -#include "v8.h" +#undef MAP_TYPE -#include "platform.h" -#include "v8threads.h" +#include "src/base/macros.h" +#include "src/base/platform/platform.h" namespace v8 { -namespace internal { +namespace base { const char* OS::LocalTimezone(double time, TimezoneCache* cache) { @@ -60,11 +60,7 @@ void* OS::Allocate(const size_t requested, int prot = PROT_READ | PROT_WRITE | (is_executable ? PROT_EXEC : 0); void* addr = OS::GetRandomMmapAddr(); void* mbase = mmap(addr, msize, prot, MAP_PRIVATE | MAP_ANON, -1, 0); - if (mbase == MAP_FAILED) { - LOG(i::Isolate::Current(), - StringEvent("OS::Allocate", "mmap failed")); - return NULL; - } + if (mbase == MAP_FAILED) return NULL; *allocated = msize; return mbase; } @@ -118,12 +114,13 @@ PosixMemoryMappedFile::~PosixMemoryMappedFile() { } -void OS::LogSharedLibraryAddresses(Isolate* isolate) { +std::vector<OS::SharedLibraryAddress> OS::GetSharedLibraryAddresses() { + std::vector<SharedLibraryAddress> result; // This function assumes that the layout of the file is as follows: // hex_start_addr-hex_end_addr rwxp <unused data> [binary_file_name] // If we encounter an unexpected situation we abort scanning further entries. FILE* fp = fopen("/proc/self/maps", "r"); - if (fp == NULL) return; + if (fp == NULL) return result; // Allocate enough room to be able to store a full file name. 
const int kLibNameLen = FILENAME_MAX + 1; @@ -162,7 +159,7 @@ void OS::LogSharedLibraryAddresses(Isolate* isolate) { snprintf(lib_name, kLibNameLen, "%08" V8PRIxPTR "-%08" V8PRIxPTR, start, end); } - LOG(isolate, SharedLibraryEvent(lib_name, start, end)); + result.push_back(SharedLibraryAddress(lib_name, start, end)); } else { // Entry not describing executable data. Skip to end of line to set up // reading the next entry. @@ -174,6 +171,7 @@ void OS::LogSharedLibraryAddresses(Isolate* isolate) { } free(lib_name); fclose(fp); + return result; } @@ -187,14 +185,14 @@ void OS::SignalCodeMovingGC() { // by the kernel and allows us to synchronize V8 code log and the // kernel log. int size = sysconf(_SC_PAGESIZE); - FILE* f = fopen(FLAG_gc_fake_mmap, "w+"); + FILE* f = fopen(OS::GetGCFakeMMapFile(), "w+"); if (f == NULL) { - OS::PrintError("Failed to open %s\n", FLAG_gc_fake_mmap); + OS::PrintError("Failed to open %s\n", OS::GetGCFakeMMapFile()); OS::Abort(); } void* addr = mmap(NULL, size, PROT_READ | PROT_EXEC, MAP_PRIVATE, fileno(f), 0); - ASSERT(addr != MAP_FAILED); + DCHECK(addr != MAP_FAILED); OS::Free(addr, size); fclose(f); } @@ -215,7 +213,7 @@ VirtualMemory::VirtualMemory(size_t size) VirtualMemory::VirtualMemory(size_t size, size_t alignment) : address_(NULL), size_(0) { - ASSERT(IsAligned(alignment, static_cast<intptr_t>(OS::AllocateAlignment()))); + DCHECK(IsAligned(alignment, static_cast<intptr_t>(OS::AllocateAlignment()))); size_t request_size = RoundUp(size + alignment, static_cast<intptr_t>(OS::AllocateAlignment())); void* reservation = mmap(OS::GetRandomMmapAddr(), @@ -226,9 +224,9 @@ VirtualMemory::VirtualMemory(size_t size, size_t alignment) kMmapFdOffset); if (reservation == MAP_FAILED) return; - Address base = static_cast<Address>(reservation); - Address aligned_base = RoundUp(base, alignment); - ASSERT_LE(base, aligned_base); + uint8_t* base = static_cast<uint8_t*>(reservation); + uint8_t* aligned_base = RoundUp(base, alignment); + DCHECK_LE(base, aligned_base); // Unmap extra memory reserved before and after the desired block. if (aligned_base != base) { @@ -238,7 +236,7 @@ VirtualMemory::VirtualMemory(size_t size, size_t alignment) } size_t aligned_size = RoundUp(size, OS::AllocateAlignment()); - ASSERT_LE(aligned_size, request_size); + DCHECK_LE(aligned_size, request_size); if (aligned_size != request_size) { size_t suffix_size = request_size - aligned_size; @@ -246,7 +244,7 @@ VirtualMemory::VirtualMemory(size_t size, size_t alignment) request_size -= suffix_size; } - ASSERT(aligned_size == request_size); + DCHECK(aligned_size == request_size); address_ = static_cast<void*>(aligned_base); size_ = aligned_size; @@ -256,7 +254,7 @@ VirtualMemory::VirtualMemory(size_t size, size_t alignment) VirtualMemory::~VirtualMemory() { if (IsReserved()) { bool result = ReleaseRegion(address(), size()); - ASSERT(result); + DCHECK(result); USE(result); } } @@ -337,4 +335,4 @@ bool VirtualMemory::HasLazyCommits() { return false; } -} } // namespace v8::internal +} } // namespace v8::base diff --git a/deps/v8/src/platform-posix.cc b/deps/v8/src/base/platform/platform-posix.cc similarity index 74% rename from deps/v8/src/platform-posix.cc rename to deps/v8/src/base/platform/platform-posix.cc index 143bf3ca105..771634e24e9 100644 --- a/deps/v8/src/platform-posix.cc +++ b/deps/v8/src/base/platform/platform-posix.cc @@ -7,61 +7,71 @@ // Linux, MacOS, FreeBSD, OpenBSD, NetBSD and QNX. 
#include <dlfcn.h> +#include <errno.h> +#include <limits.h> #include <pthread.h> #if defined(__DragonFly__) || defined(__FreeBSD__) || defined(__OpenBSD__) #include <pthread_np.h> // for pthread_set_name_np #endif #include <sched.h> // for sched_yield -#include <unistd.h> -#include <errno.h> #include <time.h> +#include <unistd.h> #include <sys/mman.h> -#include <sys/socket.h> #include <sys/resource.h> +#include <sys/stat.h> +#include <sys/syscall.h> #include <sys/time.h> #include <sys/types.h> -#include <sys/stat.h> #if defined(__linux__) -#include <sys/prctl.h> // for prctl +#include <sys/prctl.h> // NOLINT, for prctl #endif #if defined(__APPLE__) || defined(__DragonFly__) || defined(__FreeBSD__) || \ defined(__NetBSD__) || defined(__OpenBSD__) -#include <sys/sysctl.h> // for sysctl +#include <sys/sysctl.h> // NOLINT, for sysctl #endif #include <arpa/inet.h> -#include <netinet/in.h> #include <netdb.h> +#include <netinet/in.h> #undef MAP_TYPE #if defined(ANDROID) && !defined(V8_ANDROID_LOG_STDOUT) #define LOG_TAG "v8" -#include <android/log.h> +#include <android/log.h> // NOLINT #endif -#include "v8.h" +#include <cmath> +#include <cstdlib> + +#include "src/base/lazy-instance.h" +#include "src/base/macros.h" +#include "src/base/platform/platform.h" +#include "src/base/platform/time.h" +#include "src/base/utils/random-number-generator.h" -#include "isolate-inl.h" -#include "platform.h" +#ifdef V8_FAST_TLS_SUPPORTED +#include "src/base/atomicops.h" +#endif namespace v8 { -namespace internal { +namespace base { + +namespace { // 0 is never a valid thread id. -static const pthread_t kNoThread = (pthread_t) 0; +const pthread_t kNoThread = (pthread_t) 0; +bool g_hard_abort = false; -uint64_t OS::CpuFeaturesImpliedByPlatform() { -#if V8_OS_MACOSX - // Mac OS X requires all these to install so we can assume they are present. - // These constants are defined by the CPUid instructions. - const uint64_t one = 1; - return (one << SSE2) | (one << CMOV); -#else - return 0; // Nothing special about the other systems. -#endif +const char* g_gc_fake_mmap = NULL; + +} // namespace + + +int OS::NumberOfProcessorsOnline() { + return static_cast<int>(sysconf(_SC_NPROCESSORS_ONLN)); } @@ -159,7 +169,7 @@ void OS::Free(void* address, const size_t size) { // TODO(1240712): munmap has a return value which is ignored here. int result = munmap(address, size); USE(result); - ASSERT(result == 0); + DCHECK(result == 0); } @@ -189,6 +199,25 @@ void OS::Guard(void* address, const size_t size) { } +static LazyInstance<RandomNumberGenerator>::type + platform_random_number_generator = LAZY_INSTANCE_INITIALIZER; + + +void OS::Initialize(int64_t random_seed, bool hard_abort, + const char* const gc_fake_mmap) { + if (random_seed) { + platform_random_number_generator.Pointer()->SetSeed(random_seed); + } + g_hard_abort = hard_abort; + g_gc_fake_mmap = gc_fake_mmap; +} + + +const char* OS::GetGCFakeMMapFile() { + return g_gc_fake_mmap; +} + + void* OS::GetRandomMmapAddr() { #if V8_OS_NACL // TODO(bradchen): restore randomization once Native Client gets @@ -201,42 +230,36 @@ void* OS::GetRandomMmapAddr() { // Dynamic tools do not support custom mmap addresses. return NULL; #endif - Isolate* isolate = Isolate::UncheckedCurrent(); - // Note that the current isolate isn't set up in a call path via - // CpuFeatures::Probe. We don't care about randomization in this case because - // the code page is immediately freed. 
- if (isolate != NULL) { - uintptr_t raw_addr; - isolate->random_number_generator()->NextBytes(&raw_addr, sizeof(raw_addr)); + uintptr_t raw_addr; + platform_random_number_generator.Pointer()->NextBytes(&raw_addr, + sizeof(raw_addr)); #if V8_TARGET_ARCH_X64 - // Currently available CPUs have 48 bits of virtual addressing. Truncate - // the hint address to 46 bits to give the kernel a fighting chance of - // fulfilling our placement request. - raw_addr &= V8_UINT64_C(0x3ffffffff000); + // Currently available CPUs have 48 bits of virtual addressing. Truncate + // the hint address to 46 bits to give the kernel a fighting chance of + // fulfilling our placement request. + raw_addr &= V8_UINT64_C(0x3ffffffff000); #else - raw_addr &= 0x3ffff000; + raw_addr &= 0x3ffff000; # ifdef __sun - // For our Solaris/illumos mmap hint, we pick a random address in the bottom - // half of the top half of the address space (that is, the third quarter). - // Because we do not MAP_FIXED, this will be treated only as a hint -- the - // system will not fail to mmap() because something else happens to already - // be mapped at our random address. We deliberately set the hint high enough - // to get well above the system's break (that is, the heap); Solaris and - // illumos will try the hint and if that fails allocate as if there were - // no hint at all. The high hint prevents the break from getting hemmed in - // at low values, ceding half of the address space to the system heap. - raw_addr += 0x80000000; + // For our Solaris/illumos mmap hint, we pick a random address in the bottom + // half of the top half of the address space (that is, the third quarter). + // Because we do not MAP_FIXED, this will be treated only as a hint -- the + // system will not fail to mmap() because something else happens to already + // be mapped at our random address. We deliberately set the hint high enough + // to get well above the system's break (that is, the heap); Solaris and + // illumos will try the hint and if that fails allocate as if there were + // no hint at all. The high hint prevents the break from getting hemmed in + // at low values, ceding half of the address space to the system heap. + raw_addr += 0x80000000; # else - // The range 0x20000000 - 0x60000000 is relatively unpopulated across a - // variety of ASLR modes (PAE kernel, NX compat mode, etc) and on macos - // 10.6 and 10.7. - raw_addr += 0x20000000; + // The range 0x20000000 - 0x60000000 is relatively unpopulated across a + // variety of ASLR modes (PAE kernel, NX compat mode, etc) and on macos + // 10.6 and 10.7. + raw_addr += 0x20000000; # endif #endif - return reinterpret_cast<void*>(raw_addr); - } - return NULL; + return reinterpret_cast<void*>(raw_addr); } @@ -252,7 +275,7 @@ void OS::Sleep(int milliseconds) { void OS::Abort() { - if (FLAG_hard_abort) { + if (g_hard_abort) { V8_IMMEDIATE_CRASH(); } // Redirect to std abort to signal abnormal program termination. 
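// ---------------------------------------------------------------------------
// Illustrative sketch only (not part of this patch): how an embedder drives
// the new flag-free platform layer. OS::Initialize() replaces the direct
// FLAG_hard_abort / FLAG_gc_fake_mmap reads removed above; the concrete
// argument values below are assumptions chosen for the example.
//
//   // Seed the process-wide RandomNumberGenerator behind GetRandomMmapAddr(),
//   // opt in to V8_IMMEDIATE_CRASH() in OS::Abort(), and name the file that
//   // OS::SignalCodeMovingGC() maps (the path here is hypothetical).
//   v8::base::OS::Initialize(0xC0FFEE /* random_seed, nonzero to take effect */,
//                            true /* hard_abort */,
//                            "/tmp/__v8_gc__" /* gc_fake_mmap */);
//   void* hint = v8::base::OS::GetRandomMmapAddr();  // no Isolate needed now
// ---------------------------------------------------------------------------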
@@ -267,6 +290,8 @@ void OS::DebugBreak() { asm("brk 0"); #elif V8_HOST_ARCH_MIPS asm("break"); +#elif V8_HOST_ARCH_MIPS64 + asm("break"); #elif V8_HOST_ARCH_IA32 #if defined(__native_client__) asm("hlt"); @@ -295,6 +320,19 @@ int OS::GetCurrentProcessId() { } +int OS::GetCurrentThreadId() { +#if V8_OS_MACOSX + return static_cast<int>(pthread_mach_thread_np(pthread_self())); +#elif V8_OS_LINUX + return static_cast<int>(syscall(__NR_gettid)); +#elif V8_OS_ANDROID + return static_cast<int>(gettid()); +#else + return static_cast<int>(pthread_self()); +#endif +} + + // ---------------------------------------------------------------------------- // POSIX date/time support. // @@ -323,12 +361,12 @@ TimezoneCache* OS::CreateTimezoneCache() { void OS::DisposeTimezoneCache(TimezoneCache* cache) { - ASSERT(cache == NULL); + DCHECK(cache == NULL); } void OS::ClearTimezoneCache(TimezoneCache* cache) { - ASSERT(cache == NULL); + DCHECK(cache == NULL); } @@ -426,23 +464,24 @@ void OS::VPrintError(const char* format, va_list args) { } -int OS::SNPrintF(Vector<char> str, const char* format, ...) { +int OS::SNPrintF(char* str, int length, const char* format, ...) { va_list args; va_start(args, format); - int result = VSNPrintF(str, format, args); + int result = VSNPrintF(str, length, format, args); va_end(args); return result; } -int OS::VSNPrintF(Vector<char> str, +int OS::VSNPrintF(char* str, + int length, const char* format, va_list args) { - int n = vsnprintf(str.start(), str.length(), format, args); - if (n < 0 || n >= str.length()) { + int n = vsnprintf(str, length, format, args); + if (n < 0 || n >= length) { // If the length is zero, the assignment fails. - if (str.length() > 0) - str[str.length() - 1] = '\0'; + if (length > 0) + str[length - 1] = '\0'; return -1; } else { return n; @@ -450,72 +489,6 @@ int OS::VSNPrintF(Vector<char> str, } -#if V8_TARGET_ARCH_IA32 -static void MemMoveWrapper(void* dest, const void* src, size_t size) { - memmove(dest, src, size); -} - - -// Initialize to library version so we can call this at any time during startup. -static OS::MemMoveFunction memmove_function = &MemMoveWrapper; - -// Defined in codegen-ia32.cc. -OS::MemMoveFunction CreateMemMoveFunction(); - -// Copy memory area. No restrictions. -void OS::MemMove(void* dest, const void* src, size_t size) { - if (size == 0) return; - // Note: here we rely on dependent reads being ordered. This is true - // on all architectures we currently support. - (*memmove_function)(dest, src, size); -} - -#elif defined(V8_HOST_ARCH_ARM) -void OS::MemCopyUint16Uint8Wrapper(uint16_t* dest, - const uint8_t* src, - size_t chars) { - uint16_t *limit = dest + chars; - while (dest < limit) { - *dest++ = static_cast<uint16_t>(*src++); - } -} - - -OS::MemCopyUint8Function OS::memcopy_uint8_function = &OS::MemCopyUint8Wrapper; -OS::MemCopyUint16Uint8Function OS::memcopy_uint16_uint8_function = - &OS::MemCopyUint16Uint8Wrapper; -// Defined in codegen-arm.cc. -OS::MemCopyUint8Function CreateMemCopyUint8Function( - OS::MemCopyUint8Function stub); -OS::MemCopyUint16Uint8Function CreateMemCopyUint16Uint8Function( - OS::MemCopyUint16Uint8Function stub); - -#elif defined(V8_HOST_ARCH_MIPS) -OS::MemCopyUint8Function OS::memcopy_uint8_function = &OS::MemCopyUint8Wrapper; -// Defined in codegen-mips.cc. 
-OS::MemCopyUint8Function CreateMemCopyUint8Function( - OS::MemCopyUint8Function stub); -#endif - - -void OS::PostSetUp() { -#if V8_TARGET_ARCH_IA32 - OS::MemMoveFunction generated_memmove = CreateMemMoveFunction(); - if (generated_memmove != NULL) { - memmove_function = generated_memmove; - } -#elif defined(V8_HOST_ARCH_ARM) - OS::memcopy_uint8_function = - CreateMemCopyUint8Function(&OS::MemCopyUint8Wrapper); - OS::memcopy_uint16_uint8_function = - CreateMemCopyUint16Uint8Function(&OS::MemCopyUint16Uint8Wrapper); -#elif defined(V8_HOST_ARCH_MIPS) - OS::memcopy_uint8_function = - CreateMemCopyUint8Function(&OS::MemCopyUint8Wrapper); -#endif -} - - // ---------------------------------------------------------------------------- // POSIX string support. // @@ -525,8 +498,8 @@ char* OS::StrChr(char* str, int c) { } -void OS::StrNCpy(Vector<char> dest, const char* src, size_t n) { - strncpy(dest.start(), src, n); +void OS::StrNCpy(char* dest, int length, const char* src, size_t n) { + strncpy(dest, src, n); } @@ -534,7 +507,7 @@ void OS::StrNCpy(Vector<char> dest, const char* src, size_t n) { // POSIX thread support. // -class Thread::PlatformData : public Malloced { +class Thread::PlatformData { public: PlatformData() : thread_(kNoThread) {} pthread_t thread_; // Thread handle for pthread. @@ -546,7 +519,7 @@ Thread::Thread(const Options& options) : data_(new PlatformData), stack_size_(options.stack_size()), start_semaphore_(NULL) { - if (stack_size_ > 0 && stack_size_ < PTHREAD_STACK_MIN) { + if (stack_size_ > 0 && static_cast<size_t>(stack_size_) < PTHREAD_STACK_MIN) { stack_size_ = PTHREAD_STACK_MIN; } set_name(options.name()); @@ -592,7 +565,7 @@ static void* ThreadEntry(void* arg) { // one). { LockGuard<Mutex> lock_guard(&thread->data()->thread_creation_mutex_); } SetThreadName(thread->name()); - ASSERT(thread->data()->thread_ != kNoThread); + DCHECK(thread->data()->thread_ != kNoThread); thread->NotifyStartedAndRun(); return NULL; } @@ -609,22 +582,22 @@ void Thread::Start() { pthread_attr_t attr; memset(&attr, 0, sizeof(attr)); result = pthread_attr_init(&attr); - ASSERT_EQ(0, result); + DCHECK_EQ(0, result); // Native client uses default stack size. 
#if !V8_OS_NACL if (stack_size_ > 0) { result = pthread_attr_setstacksize(&attr, static_cast<size_t>(stack_size_)); - ASSERT_EQ(0, result); + DCHECK_EQ(0, result); } #endif { LockGuard<Mutex> lock_guard(&data_->thread_creation_mutex_); result = pthread_create(&data_->thread_, &attr, ThreadEntry, this); } - ASSERT_EQ(0, result); + DCHECK_EQ(0, result); result = pthread_attr_destroy(&attr); - ASSERT_EQ(0, result); - ASSERT(data_->thread_ != kNoThread); + DCHECK_EQ(0, result); + DCHECK(data_->thread_ != kNoThread); USE(result); } @@ -636,7 +609,7 @@ void Thread::Join() { void Thread::YieldCPU() { int result = sched_yield(); - ASSERT_EQ(0, result); + DCHECK_EQ(0, result); USE(result); } @@ -732,7 +705,7 @@ Thread::LocalStorageKey Thread::CreateThreadLocalKey() { #endif pthread_key_t key; int result = pthread_key_create(&key, NULL); - ASSERT_EQ(0, result); + DCHECK_EQ(0, result); USE(result); LocalStorageKey local_key = PthreadKeyToLocalKey(key); #ifdef V8_FAST_TLS_SUPPORTED @@ -746,7 +719,7 @@ Thread::LocalStorageKey Thread::CreateThreadLocalKey() { void Thread::DeleteThreadLocalKey(LocalStorageKey key) { pthread_key_t pthread_key = LocalKeyToPthreadKey(key); int result = pthread_key_delete(pthread_key); - ASSERT_EQ(0, result); + DCHECK_EQ(0, result); USE(result); } @@ -760,9 +733,9 @@ void* Thread::GetThreadLocal(LocalStorageKey key) { void Thread::SetThreadLocal(LocalStorageKey key, void* value) { pthread_key_t pthread_key = LocalKeyToPthreadKey(key); int result = pthread_setspecific(pthread_key, value); - ASSERT_EQ(0, result); + DCHECK_EQ(0, result); USE(result); } -} } // namespace v8::internal +} } // namespace v8::base diff --git a/deps/v8/src/platform-qnx.cc b/deps/v8/src/base/platform/platform-qnx.cc similarity index 91% rename from deps/v8/src/platform-qnx.cc rename to deps/v8/src/base/platform/platform-qnx.cc index 587a1d3554f..6f2f9897236 100644 --- a/deps/v8/src/platform-qnx.cc +++ b/deps/v8/src/base/platform/platform-qnx.cc @@ -5,38 +5,38 @@ // Platform-specific code for QNX goes here. For the POSIX-compatible // parts the implementation is in platform-posix.cc. +#include <backtrace.h> #include <pthread.h> #include <semaphore.h> #include <signal.h> -#include <sys/time.h> +#include <stdlib.h> #include <sys/resource.h> +#include <sys/time.h> #include <sys/types.h> -#include <stdlib.h> #include <ucontext.h> -#include <backtrace.h> // QNX requires memory pages to be marked as executable. // Otherwise, the OS raises an exception when executing code in that page. -#include <sys/types.h> // mmap & munmap -#include <sys/mman.h> // mmap & munmap -#include <sys/stat.h> // open -#include <fcntl.h> // open -#include <unistd.h> // sysconf -#include <strings.h> // index #include <errno.h> +#include <fcntl.h> // open #include <stdarg.h> +#include <strings.h> // index +#include <sys/mman.h> // mmap & munmap #include <sys/procfs.h> +#include <sys/stat.h> // open +#include <sys/types.h> // mmap & munmap +#include <unistd.h> // sysconf -#undef MAP_TYPE +#include <cmath> -#include "v8.h" +#undef MAP_TYPE -#include "platform.h" -#include "v8threads.h" +#include "src/base/macros.h" +#include "src/base/platform/platform.h" namespace v8 { -namespace internal { +namespace base { // 0 is never a valid thread id on Qnx since tids and pids share a // name space and pid 0 is reserved (see man 2 kill). @@ -111,11 +111,7 @@ void* OS::Allocate(const size_t requested, int prot = PROT_READ | PROT_WRITE | (is_executable ? 
PROT_EXEC : 0); void* addr = OS::GetRandomMmapAddr(); void* mbase = mmap(addr, msize, prot, MAP_PRIVATE | MAP_ANONYMOUS, -1, 0); - if (mbase == MAP_FAILED) { - LOG(i::Isolate::Current(), - StringEvent("OS::Allocate", "mmap failed")); - return NULL; - } + if (mbase == MAP_FAILED) return NULL; *allocated = msize; return mbase; } @@ -179,7 +175,8 @@ PosixMemoryMappedFile::~PosixMemoryMappedFile() { } -void OS::LogSharedLibraryAddresses(Isolate* isolate) { +std::vector<OS::SharedLibraryAddress> OS::GetSharedLibraryAddresses() { + std::vector<SharedLibraryAddress> result; procfs_mapinfo *mapinfos = NULL, *mapinfo; int proc_fd, num, i; @@ -193,20 +190,20 @@ void OS::LogSharedLibraryAddresses(Isolate* isolate) { if ((proc_fd = open(buf, O_RDONLY)) == -1) { close(proc_fd); - return; + return result; } /* Get the number of map entries. */ if (devctl(proc_fd, DCMD_PROC_MAPINFO, NULL, 0, &num) != EOK) { close(proc_fd); - return; + return result; } mapinfos = reinterpret_cast<procfs_mapinfo *>( malloc(num * sizeof(procfs_mapinfo))); if (mapinfos == NULL) { close(proc_fd); - return; + return result; } /* Fill the map entries. */ @@ -214,7 +211,7 @@ void OS::LogSharedLibraryAddresses(Isolate* isolate) { mapinfos, num * sizeof(procfs_mapinfo), &num) != EOK) { free(mapinfos); close(proc_fd); - return; + return result; } for (i = 0; i < num; i++) { @@ -224,13 +221,13 @@ void OS::LogSharedLibraryAddresses(Isolate* isolate) { if (devctl(proc_fd, DCMD_PROC_MAPDEBUG, &map, sizeof(map), 0) != EOK) { continue; } - LOG(isolate, SharedLibraryEvent(map.info.path, - mapinfo->vaddr, - mapinfo->vaddr + mapinfo->size)); + result.push_back(SharedLibraryAddress( + map.info.path, mapinfo->vaddr, mapinfo->vaddr + mapinfo->size)); } } free(mapinfos); close(proc_fd); + return result; } @@ -252,7 +249,7 @@ VirtualMemory::VirtualMemory(size_t size) VirtualMemory::VirtualMemory(size_t size, size_t alignment) : address_(NULL), size_(0) { - ASSERT(IsAligned(alignment, static_cast<intptr_t>(OS::AllocateAlignment()))); + DCHECK(IsAligned(alignment, static_cast<intptr_t>(OS::AllocateAlignment()))); size_t request_size = RoundUp(size + alignment, static_cast<intptr_t>(OS::AllocateAlignment())); void* reservation = mmap(OS::GetRandomMmapAddr(), @@ -263,9 +260,9 @@ VirtualMemory::VirtualMemory(size_t size, size_t alignment) kMmapFdOffset); if (reservation == MAP_FAILED) return; - Address base = static_cast<Address>(reservation); - Address aligned_base = RoundUp(base, alignment); - ASSERT_LE(base, aligned_base); + uint8_t* base = static_cast<uint8_t*>(reservation); + uint8_t* aligned_base = RoundUp(base, alignment); + DCHECK_LE(base, aligned_base); // Unmap extra memory reserved before and after the desired block. 
if (aligned_base != base) { @@ -275,7 +272,7 @@ VirtualMemory::VirtualMemory(size_t size, size_t alignment) } size_t aligned_size = RoundUp(size, OS::AllocateAlignment()); - ASSERT_LE(aligned_size, request_size); + DCHECK_LE(aligned_size, request_size); if (aligned_size != request_size) { size_t suffix_size = request_size - aligned_size; @@ -283,7 +280,7 @@ VirtualMemory::VirtualMemory(size_t size, size_t alignment) request_size -= suffix_size; } - ASSERT(aligned_size == request_size); + DCHECK(aligned_size == request_size); address_ = static_cast<void*>(aligned_base); size_ = aligned_size; @@ -293,7 +290,7 @@ VirtualMemory::VirtualMemory(size_t size, size_t alignment) VirtualMemory::~VirtualMemory() { if (IsReserved()) { bool result = ReleaseRegion(address(), size()); - ASSERT(result); + DCHECK(result); USE(result); } } @@ -374,4 +371,4 @@ bool VirtualMemory::HasLazyCommits() { return false; } -} } // namespace v8::internal +} } // namespace v8::base diff --git a/deps/v8/src/platform-solaris.cc b/deps/v8/src/base/platform/platform-solaris.cc similarity index 79% rename from deps/v8/src/platform-solaris.cc rename to deps/v8/src/base/platform/platform-solaris.cc index a2226f61356..7a54f7c4861 100644 --- a/deps/v8/src/platform-solaris.cc +++ b/deps/v8/src/base/platform/platform-solaris.cc @@ -9,27 +9,26 @@ # error "V8 does not support the SPARC CPU architecture." #endif -#include <sys/stack.h> // for stack alignment -#include <unistd.h> // getpagesize(), usleep() -#include <sys/mman.h> // mmap() -#include <ucontext.h> // walkstack(), getcontext() -#include <dlfcn.h> // dladdr -#include <pthread.h> -#include <semaphore.h> -#include <time.h> -#include <sys/time.h> // gettimeofday(), timeradd() +#include <dlfcn.h> // dladdr #include <errno.h> #include <ieeefp.h> // finite() +#include <pthread.h> +#include <semaphore.h> #include <signal.h> // sigemptyset(), etc +#include <sys/mman.h> // mmap() #include <sys/regset.h> +#include <sys/stack.h> // for stack alignment +#include <sys/time.h> // gettimeofday(), timeradd() +#include <time.h> +#include <ucontext.h> // walkstack(), getcontext() +#include <unistd.h> // getpagesize(), usleep() +#include <cmath> #undef MAP_TYPE -#include "v8.h" - -#include "platform.h" -#include "v8threads.h" +#include "src/base/macros.h" +#include "src/base/platform/platform.h" // It seems there is a bug in some Solaris distributions (experienced in @@ -53,7 +52,7 @@ int signbit(double x) { #endif // signbit namespace v8 { -namespace internal { +namespace base { const char* OS::LocalTimezone(double time, TimezoneCache* cache) { @@ -78,10 +77,7 @@ void* OS::Allocate(const size_t requested, int prot = PROT_READ | PROT_WRITE | (is_executable ? 
PROT_EXEC : 0); void* mbase = mmap(NULL, msize, prot, MAP_PRIVATE | MAP_ANON, -1, 0); - if (mbase == MAP_FAILED) { - LOG(Isolate::Current(), StringEvent("OS::Allocate", "mmap failed")); - return NULL; - } + if (mbase == MAP_FAILED) return NULL; *allocated = msize; return mbase; } @@ -135,7 +131,8 @@ PosixMemoryMappedFile::~PosixMemoryMappedFile() { } -void OS::LogSharedLibraryAddresses(Isolate* isolate) { +std::vector<OS::SharedLibraryAddress> OS::GetSharedLibraryAddresses() { + return std::vector<SharedLibraryAddress>(); } @@ -143,44 +140,6 @@ void OS::SignalCodeMovingGC() { } -struct StackWalker { - Vector<OS::StackFrame>& frames; - int index; -}; - - -static int StackWalkCallback(uintptr_t pc, int signo, void* data) { - struct StackWalker* walker = static_cast<struct StackWalker*>(data); - Dl_info info; - - int i = walker->index; - - walker->frames[i].address = reinterpret_cast<void*>(pc); - - // Make sure line termination is in place. - walker->frames[i].text[OS::kStackWalkMaxTextLen - 1] = '\0'; - - Vector<char> text = MutableCStrVector(walker->frames[i].text, - OS::kStackWalkMaxTextLen); - - if (dladdr(reinterpret_cast<void*>(pc), &info) == 0) { - OS::SNPrintF(text, "[0x%p]", pc); - } else if ((info.dli_fname != NULL && info.dli_sname != NULL)) { - // We have symbol info. - OS::SNPrintF(text, "%s'%s+0x%x", info.dli_fname, info.dli_sname, pc); - } else { - // No local symbol info. - OS::SNPrintF(text, - "%s'0x%p [0x%p]", - info.dli_fname, - pc - reinterpret_cast<uintptr_t>(info.dli_fbase), - pc); - } - walker->index++; - return 0; -} - - // Constants used for mmap. static const int kMmapFd = -1; static const int kMmapFdOffset = 0; @@ -195,7 +154,7 @@ VirtualMemory::VirtualMemory(size_t size) VirtualMemory::VirtualMemory(size_t size, size_t alignment) : address_(NULL), size_(0) { - ASSERT(IsAligned(alignment, static_cast<intptr_t>(OS::AllocateAlignment()))); + DCHECK(IsAligned(alignment, static_cast<intptr_t>(OS::AllocateAlignment()))); size_t request_size = RoundUp(size + alignment, static_cast<intptr_t>(OS::AllocateAlignment())); void* reservation = mmap(OS::GetRandomMmapAddr(), @@ -206,9 +165,9 @@ VirtualMemory::VirtualMemory(size_t size, size_t alignment) kMmapFdOffset); if (reservation == MAP_FAILED) return; - Address base = static_cast<Address>(reservation); - Address aligned_base = RoundUp(base, alignment); - ASSERT_LE(base, aligned_base); + uint8_t* base = static_cast<uint8_t*>(reservation); + uint8_t* aligned_base = RoundUp(base, alignment); + DCHECK_LE(base, aligned_base); // Unmap extra memory reserved before and after the desired block. 
if (aligned_base != base) { @@ -218,7 +177,7 @@ VirtualMemory::VirtualMemory(size_t size, size_t alignment) } size_t aligned_size = RoundUp(size, OS::AllocateAlignment()); - ASSERT_LE(aligned_size, request_size); + DCHECK_LE(aligned_size, request_size); if (aligned_size != request_size) { size_t suffix_size = request_size - aligned_size; @@ -226,7 +185,7 @@ VirtualMemory::VirtualMemory(size_t size, size_t alignment) request_size -= suffix_size; } - ASSERT(aligned_size == request_size); + DCHECK(aligned_size == request_size); address_ = static_cast<void*>(aligned_base); size_ = aligned_size; @@ -236,7 +195,7 @@ VirtualMemory::VirtualMemory(size_t size, size_t alignment) VirtualMemory::~VirtualMemory() { if (IsReserved()) { bool result = ReleaseRegion(address(), size()); - ASSERT(result); + DCHECK(result); USE(result); } } @@ -317,4 +276,4 @@ bool VirtualMemory::HasLazyCommits() { return false; } -} } // namespace v8::internal +} } // namespace v8::base diff --git a/deps/v8/src/platform-win32.cc b/deps/v8/src/base/platform/platform-win32.cc similarity index 88% rename from deps/v8/src/platform-win32.cc rename to deps/v8/src/base/platform/platform-win32.cc index 08d03e14906..9f106785eb1 100644 --- a/deps/v8/src/platform-win32.cc +++ b/deps/v8/src/base/platform/platform-win32.cc @@ -15,12 +15,17 @@ #endif // MINGW_HAS_SECURE_API #endif // __MINGW32__ -#include "win32-headers.h" +#ifdef _MSC_VER +#include <limits> +#endif -#include "v8.h" +#include "src/base/win32-headers.h" -#include "isolate-inl.h" -#include "platform.h" +#include "src/base/lazy-instance.h" +#include "src/base/macros.h" +#include "src/base/platform/platform.h" +#include "src/base/platform/time.h" +#include "src/base/utils/random-number-generator.h" #ifdef _MSC_VER @@ -66,7 +71,7 @@ int fopen_s(FILE** pFile, const char* filename, const char* mode) { int _vsnprintf_s(char* buffer, size_t sizeOfBuffer, size_t count, const char* format, va_list argptr) { - ASSERT(count == _TRUNCATE); + DCHECK(count == _TRUNCATE); return _vsnprintf(buffer, sizeOfBuffer, format, argptr); } @@ -100,35 +105,18 @@ int strncpy_s(char* dest, size_t dest_size, const char* source, size_t count) { #endif // __MINGW32__ namespace v8 { -namespace internal { - -intptr_t OS::MaxVirtualMemory() { - return 0; -} - +namespace base { -#if V8_TARGET_ARCH_IA32 -static void MemMoveWrapper(void* dest, const void* src, size_t size) { - memmove(dest, src, size); -} +namespace { +bool g_hard_abort = false; -// Initialize to library version so we can call this at any time during startup. -static OS::MemMoveFunction memmove_function = &MemMoveWrapper; +} // namespace -// Defined in codegen-ia32.cc. -OS::MemMoveFunction CreateMemMoveFunction(); - -// Copy memory area to disjoint memory area. -void OS::MemMove(void* dest, const void* src, size_t size) { - if (size == 0) return; - // Note: here we rely on dependent reads being ordered. This is true - // on all architectures we currently support. - (*memmove_function)(dest, src, size); +intptr_t OS::MaxVirtualMemory() { + return 0; } -#endif // V8_TARGET_ARCH_IA32 - class TimezoneCache { public: @@ -174,12 +162,12 @@ class TimezoneCache { // To properly resolve the resource identifier requires a library load, // which is not possible in a sandbox. 
if (std_tz_name_[0] == '\0' || std_tz_name_[0] == '@') { - OS::SNPrintF(Vector<char>(std_tz_name_, kTzNameSize - 1), + OS::SNPrintF(std_tz_name_, kTzNameSize - 1, "%s Standard Time", GuessTimezoneNameFromBias(tzinfo_.Bias)); } if (dst_tz_name_[0] == '\0' || dst_tz_name_[0] == '@') { - OS::SNPrintF(Vector<char>(dst_tz_name_, kTzNameSize - 1), + OS::SNPrintF(dst_tz_name_, kTzNameSize - 1, "%s Daylight Time", GuessTimezoneNameFromBias(tzinfo_.Bias)); } @@ -452,16 +440,6 @@ char* Win32Time::LocalTimezone(TimezoneCache* cache) { } -void OS::PostSetUp() { -#if V8_TARGET_ARCH_IA32 - OS::MemMoveFunction generated_memmove = CreateMemMoveFunction(); - if (generated_memmove != NULL) { - memmove_function = generated_memmove; - } -#endif -} - - // Returns the accumulated user time for thread. int OS::GetUserTime(uint32_t* secs, uint32_t* usecs) { FILETIME dummy; @@ -539,6 +517,11 @@ int OS::GetCurrentProcessId() { } +int OS::GetCurrentThreadId() { + return static_cast<int>(::GetCurrentThreadId()); +} + + // ---------------------------------------------------------------------------- // Win32 console output. // @@ -584,9 +567,9 @@ static void VPrintHelper(FILE* stream, const char* format, va_list args) { // It is important to use safe print here in order to avoid // overflowing the buffer. We might truncate the output, but this // does not crash. - EmbeddedVector<char, 4096> buffer; - OS::VSNPrintF(buffer, format, args); - OutputDebugStringA(buffer.start()); + char buffer[4096]; + OS::VSNPrintF(buffer, sizeof(buffer), format, args); + OutputDebugStringA(buffer); } else { vfprintf(stream, format, args); } @@ -671,22 +654,22 @@ void OS::VPrintError(const char* format, va_list args) { } -int OS::SNPrintF(Vector<char> str, const char* format, ...) { +int OS::SNPrintF(char* str, int length, const char* format, ...) { va_list args; va_start(args, format); - int result = VSNPrintF(str, format, args); + int result = VSNPrintF(str, length, format, args); va_end(args); return result; } -int OS::VSNPrintF(Vector<char> str, const char* format, va_list args) { - int n = _vsnprintf_s(str.start(), str.length(), _TRUNCATE, format, args); +int OS::VSNPrintF(char* str, int length, const char* format, va_list args) { + int n = _vsnprintf_s(str, length, _TRUNCATE, format, args); // Make sure to zero-terminate the string if the output was // truncated or if there was an error. - if (n < 0 || n >= str.length()) { - if (str.length() > 0) - str[str.length() - 1] = '\0'; + if (n < 0 || n >= length) { + if (length > 0) + str[length - 1] = '\0'; return -1; } else { return n; @@ -699,14 +682,14 @@ char* OS::StrChr(char* str, int c) { } -void OS::StrNCpy(Vector<char> dest, const char* src, size_t n) { +void OS::StrNCpy(char* dest, int length, const char* src, size_t n) { // Use _TRUNCATE or strncpy_s crashes (by design) if buffer is too small. 
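// (With _TRUNCATE, strncpy_s copies as many characters as fit, always
// null-terminates the destination, and reports STRUNCATE instead of invoking
// the invalid parameter handler.)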
- size_t buffer_size = static_cast<size_t>(dest.length()); + size_t buffer_size = static_cast<size_t>(length); if (n + 1 > buffer_size) // count for trailing '\0' n = _TRUNCATE; - int result = strncpy_s(dest.start(), dest.length(), src, n); + int result = strncpy_s(dest, length, src, n); USE(result); - ASSERT(result == 0 || (n == _TRUNCATE && result == STRUNCATE)); + DCHECK(result == 0 || (n == _TRUNCATE && result == STRUNCATE)); } @@ -741,31 +724,37 @@ size_t OS::AllocateAlignment() { } +static LazyInstance<RandomNumberGenerator>::type + platform_random_number_generator = LAZY_INSTANCE_INITIALIZER; + + +void OS::Initialize(int64_t random_seed, bool hard_abort, + const char* const gc_fake_mmap) { + if (random_seed) { + platform_random_number_generator.Pointer()->SetSeed(random_seed); + } + g_hard_abort = hard_abort; +} + + void* OS::GetRandomMmapAddr() { - Isolate* isolate = Isolate::UncheckedCurrent(); - // Note that the current isolate isn't set up in a call path via - // CpuFeatures::Probe. We don't care about randomization in this case because - // the code page is immediately freed. - if (isolate != NULL) { - // The address range used to randomize RWX allocations in OS::Allocate - // Try not to map pages into the default range that windows loads DLLs - // Use a multiple of 64k to prevent committing unused memory. - // Note: This does not guarantee RWX regions will be within the - // range kAllocationRandomAddressMin to kAllocationRandomAddressMax + // The address range used to randomize RWX allocations in OS::Allocate + // Try not to map pages into the default range that windows loads DLLs + // Use a multiple of 64k to prevent committing unused memory. + // Note: This does not guarantee RWX regions will be within the + // range kAllocationRandomAddressMin to kAllocationRandomAddressMax #ifdef V8_HOST_ARCH_64_BIT - static const intptr_t kAllocationRandomAddressMin = 0x0000000080000000; - static const intptr_t kAllocationRandomAddressMax = 0x000003FFFFFF0000; + static const intptr_t kAllocationRandomAddressMin = 0x0000000080000000; + static const intptr_t kAllocationRandomAddressMax = 0x000003FFFFFF0000; #else - static const intptr_t kAllocationRandomAddressMin = 0x04000000; - static const intptr_t kAllocationRandomAddressMax = 0x3FFF0000; + static const intptr_t kAllocationRandomAddressMin = 0x04000000; + static const intptr_t kAllocationRandomAddressMax = 0x3FFF0000; #endif - uintptr_t address = - (isolate->random_number_generator()->NextInt() << kPageSizeBits) | - kAllocationRandomAddressMin; - address &= kAllocationRandomAddressMax; - return reinterpret_cast<void *>(address); - } - return NULL; + uintptr_t address = + (platform_random_number_generator.Pointer()->NextInt() << kPageSizeBits) | + kAllocationRandomAddressMin; + address &= kAllocationRandomAddressMax; + return reinterpret_cast<void *>(address); } @@ -799,12 +788,9 @@ void* OS::Allocate(const size_t requested, MEM_COMMIT | MEM_RESERVE, prot); - if (mbase == NULL) { - LOG(Isolate::Current(), StringEvent("OS::Allocate", "VirtualAlloc failed")); - return NULL; - } + if (mbase == NULL) return NULL; - ASSERT(IsAligned(reinterpret_cast<size_t>(mbase), OS::AllocateAlignment())); + DCHECK(IsAligned(reinterpret_cast<size_t>(mbase), OS::AllocateAlignment())); *allocated = msize; return mbase; @@ -841,7 +827,7 @@ void OS::Sleep(int milliseconds) { void OS::Abort() { - if (FLAG_hard_abort) { + if (g_hard_abort) { V8_IMMEDIATE_CRASH(); } // Make the MSVCRT do a silent abort. 
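Taken together, the hunks above replace per-Isolate platform state with process-wide state installed once by the embedder: OS::Initialize() seeds the lazily created RandomNumberGenerator consulted by GetRandomMmapAddr() and latches the hard-abort flag that OS::Abort() reads. A minimal sketch of the same pattern (editor's illustration; the immediate crash is approximated by a volatile null write rather than V8_IMMEDIATE_CRASH()):

    #include <cstdlib>

    namespace {
    bool g_hard_abort = false;  // written once at startup, read on every abort
    }  // namespace

    void Initialize(bool hard_abort) { g_hard_abort = hard_abort; }

    void Abort() {
      if (g_hard_abort) {
        // Crash immediately so debuggers and crash reporters see the fault.
        *reinterpret_cast<volatile int*>(0) = 0;
      }
      std::abort();  // otherwise fall back to a regular abort
    }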
@@ -913,7 +899,7 @@ OS::MemoryMappedFile* OS::MemoryMappedFile::create(const char* name, int size, if (file_mapping == NULL) return NULL; // Map a view of the file into memory void* memory = MapViewOfFile(file_mapping, FILE_MAP_ALL_ACCESS, 0, 0, size); - if (memory) OS::MemMove(memory, initial, size); + if (memory) memmove(memory, initial, size); return new Win32MemoryMappedFile(file, file_mapping, memory, size); } @@ -1094,10 +1080,13 @@ TLHELP32_FUNCTION_LIST(DLL_FUNC_LOADED) // Load the symbols for generating stack traces. -static bool LoadSymbols(Isolate* isolate, HANDLE process_handle) { +static std::vector<OS::SharedLibraryAddress> LoadSymbols( + HANDLE process_handle) { + static std::vector<OS::SharedLibraryAddress> result; + static bool symbols_loaded = false; - if (symbols_loaded) return true; + if (symbols_loaded) return result; BOOL ok; @@ -1105,7 +1094,7 @@ static bool LoadSymbols(Isolate* isolate, HANDLE process_handle) { ok = _SymInitialize(process_handle, // hProcess NULL, // UserSearchPath false); // fInvadeProcess - if (!ok) return false; + if (!ok) return result; DWORD options = _SymGetOptions(); options |= SYMOPT_LOAD_LINES; @@ -1116,14 +1105,14 @@ static bool LoadSymbols(Isolate* isolate, HANDLE process_handle) { ok = _SymGetSearchPath(process_handle, buf, OS::kStackWalkMaxNameLen); if (!ok) { int err = GetLastError(); - PrintF("%d\n", err); - return false; + OS::Print("%d\n", err); + return result; } HANDLE snapshot = _CreateToolhelp32Snapshot( TH32CS_SNAPMODULE, // dwFlags GetCurrentProcessId()); // th32ProcessId - if (snapshot == INVALID_HANDLE_VALUE) return false; + if (snapshot == INVALID_HANDLE_VALUE) return result; MODULEENTRY32W module_entry; module_entry.dwSize = sizeof(module_entry); // Set the size of the structure. BOOL cont = _Module32FirstW(snapshot, &module_entry); @@ -1141,31 +1130,37 @@ static bool LoadSymbols(Isolate* isolate, HANDLE process_handle) { if (base == 0) { int err = GetLastError(); if (err != ERROR_MOD_NOT_FOUND && - err != ERROR_INVALID_HANDLE) return false; + err != ERROR_INVALID_HANDLE) { + result.clear(); + return result; + } } - LOG(isolate, - SharedLibraryEvent( - module_entry.szExePath, - reinterpret_cast<unsigned int>(module_entry.modBaseAddr), - reinterpret_cast<unsigned int>(module_entry.modBaseAddr + - module_entry.modBaseSize))); + int lib_name_length = WideCharToMultiByte( + CP_UTF8, 0, module_entry.szExePath, -1, NULL, 0, NULL, NULL); + std::string lib_name(lib_name_length, 0); + WideCharToMultiByte(CP_UTF8, 0, module_entry.szExePath, -1, &lib_name[0], + lib_name_length, NULL, NULL); + result.push_back(OS::SharedLibraryAddress( + lib_name, reinterpret_cast<unsigned int>(module_entry.modBaseAddr), + reinterpret_cast<unsigned int>(module_entry.modBaseAddr + + module_entry.modBaseSize))); cont = _Module32NextW(snapshot, &module_entry); } CloseHandle(snapshot); symbols_loaded = true; - return true; + return result; } -void OS::LogSharedLibraryAddresses(Isolate* isolate) { +std::vector<OS::SharedLibraryAddress> OS::GetSharedLibraryAddresses() { // SharedLibraryEvents are logged when loading symbol information. // Only the shared libraries loaded at the time of the call to - // LogSharedLibraryAddresses are logged. DLLs loaded after + // GetSharedLibraryAddresses are logged. DLLs loaded after // initialization are not accounted for. 
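// For illustration, a typical consumer of the new interface (editor's sketch,
// not part of this file):
//
//   std::vector<OS::SharedLibraryAddress> libs = OS::GetSharedLibraryAddresses();
//   for (size_t i = 0; i < libs.size(); i++) {
//     // libs[i].library_path is mapped at [libs[i].start, libs[i].end).
//   }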
- if (!LoadDbgHelpAndTlHelp32()) return; + if (!LoadDbgHelpAndTlHelp32()) return std::vector<OS::SharedLibraryAddress>(); HANDLE process_handle = GetCurrentProcess(); - LoadSymbols(isolate, process_handle); + return LoadSymbols(process_handle); } @@ -1186,22 +1181,25 @@ uint64_t OS::TotalPhysicalMemory() { #else // __MINGW32__ -void OS::LogSharedLibraryAddresses(Isolate* isolate) { } +std::vector<OS::SharedLibraryAddress> OS::GetSharedLibraryAddresses() { + return std::vector<OS::SharedLibraryAddress>(); +} + + void OS::SignalCodeMovingGC() { } #endif // __MINGW32__ -uint64_t OS::CpuFeaturesImpliedByPlatform() { - return 0; // Windows runs on anything. +int OS::NumberOfProcessorsOnline() { + SYSTEM_INFO info; + GetSystemInfo(&info); + return info.dwNumberOfProcessors; } double OS::nan_value() { #ifdef _MSC_VER - // Positive Quiet NaN with no payload (aka. Indeterminate) has all bits - // in mask set, so value equals mask. - static const __int64 nanval = kQuietNaNMask; - return *reinterpret_cast<const double*>(&nanval); + return std::numeric_limits<double>::quiet_NaN(); #else // _MSC_VER return NAN; #endif // _MSC_VER @@ -1230,20 +1228,20 @@ VirtualMemory::VirtualMemory(size_t size) VirtualMemory::VirtualMemory(size_t size, size_t alignment) : address_(NULL), size_(0) { - ASSERT(IsAligned(alignment, static_cast<intptr_t>(OS::AllocateAlignment()))); + DCHECK(IsAligned(alignment, static_cast<intptr_t>(OS::AllocateAlignment()))); size_t request_size = RoundUp(size + alignment, static_cast<intptr_t>(OS::AllocateAlignment())); void* address = ReserveRegion(request_size); if (address == NULL) return; - Address base = RoundUp(static_cast<Address>(address), alignment); + uint8_t* base = RoundUp(static_cast<uint8_t*>(address), alignment); // Try reducing the size by freeing and then reallocating a specific area. bool result = ReleaseRegion(address, request_size); USE(result); - ASSERT(result); + DCHECK(result); address = VirtualAlloc(base, size, MEM_RESERVE, PAGE_NOACCESS); if (address != NULL) { request_size = size; - ASSERT(base == static_cast<Address>(address)); + DCHECK(base == static_cast<uint8_t*>(address)); } else { // Resizing failed, just go with a bigger area. 
address = ReserveRegion(request_size); @@ -1257,7 +1255,7 @@ VirtualMemory::VirtualMemory(size_t size, size_t alignment) VirtualMemory::~VirtualMemory() { if (IsReserved()) { bool result = ReleaseRegion(address(), size()); - ASSERT(result); + DCHECK(result); USE(result); } } @@ -1280,7 +1278,7 @@ bool VirtualMemory::Commit(void* address, size_t size, bool is_executable) { bool VirtualMemory::Uncommit(void* address, size_t size) { - ASSERT(IsReserved()); + DCHECK(IsReserved()); return UncommitRegion(address, size); } @@ -1343,7 +1341,7 @@ static unsigned int __stdcall ThreadEntry(void* arg) { } -class Thread::PlatformData : public Malloced { +class Thread::PlatformData { public: explicit PlatformData(HANDLE thread) : thread_(thread) {} HANDLE thread_; @@ -1363,7 +1361,7 @@ Thread::Thread(const Options& options) void Thread::set_name(const char* name) { - OS::StrNCpy(Vector<char>(name_, sizeof(name_)), name, strlen(name)); + OS::StrNCpy(name_, sizeof(name_), name, strlen(name)); name_[sizeof(name_) - 1] = '\0'; } @@ -1399,7 +1397,7 @@ void Thread::Join() { Thread::LocalStorageKey Thread::CreateThreadLocalKey() { DWORD result = TlsAlloc(); - ASSERT(result != TLS_OUT_OF_INDEXES); + DCHECK(result != TLS_OUT_OF_INDEXES); return static_cast<LocalStorageKey>(result); } @@ -1407,7 +1405,7 @@ Thread::LocalStorageKey Thread::CreateThreadLocalKey() { void Thread::DeleteThreadLocalKey(LocalStorageKey key) { BOOL result = TlsFree(static_cast<DWORD>(key)); USE(result); - ASSERT(result); + DCHECK(result); } @@ -1419,7 +1417,7 @@ void* Thread::GetThreadLocal(LocalStorageKey key) { void Thread::SetThreadLocal(LocalStorageKey key, void* value) { BOOL result = TlsSetValue(static_cast<DWORD>(key), value); USE(result); - ASSERT(result); + DCHECK(result); } @@ -1428,4 +1426,4 @@ void Thread::YieldCPU() { Sleep(0); } -} } // namespace v8::internal +} } // namespace v8::base diff --git a/deps/v8/src/platform.h b/deps/v8/src/base/platform/platform.h similarity index 73% rename from deps/v8/src/platform.h rename to deps/v8/src/base/platform/platform.h index 764bd5408f3..9567572d800 100644 --- a/deps/v8/src/platform.h +++ b/deps/v8/src/base/platform/platform.h @@ -18,15 +18,16 @@ // implementation and the overhead of virtual methods for performance // sensitive like mutex locking/unlocking. -#ifndef V8_PLATFORM_H_ -#define V8_PLATFORM_H_ +#ifndef V8_BASE_PLATFORM_PLATFORM_H_ +#define V8_BASE_PLATFORM_PLATFORM_H_ #include <stdarg.h> +#include <string> +#include <vector> -#include "platform/mutex.h" -#include "platform/semaphore.h" -#include "vector.h" -#include "v8globals.h" +#include "src/base/build_config.h" +#include "src/base/platform/mutex.h" +#include "src/base/platform/semaphore.h" #ifdef __sun # ifndef signbit @@ -34,17 +35,18 @@ namespace std { int signbit(double x); } # endif +#include <alloca.h> #endif #if V8_OS_QNX -#include "qnx-math.h" +#include "src/base/qnx-math.h" #endif // Microsoft Visual C++ specific stuff. 
#if V8_LIBC_MSVCRT -#include "win32-headers.h" -#include "win32-math.h" +#include "src/base/win32-headers.h" +#include "src/base/win32-math.h" int strncasecmp(const char* s1, const char* s2, int n); @@ -52,7 +54,7 @@ int strncasecmp(const char* s1, const char* s2, int n); #if (_MSC_VER < 1800) inline int lrint(double flt) { int intgr; -#if V8_TARGET_ARCH_IA32 +#if V8_TARGET_ARCH_IA32 || V8_TARGET_ARCH_X87 __asm { fld flt fistp intgr @@ -71,14 +73,14 @@ inline int lrint(double flt) { #endif // V8_LIBC_MSVCRT namespace v8 { -namespace internal { +namespace base { // ---------------------------------------------------------------------------- // Fast TLS support #ifndef V8_NO_FAST_TLS -#if defined(_MSC_VER) && V8_HOST_ARCH_IA32 +#if defined(_MSC_VER) && (V8_HOST_ARCH_IA32) #define V8_FAST_TLS_SUPPORTED 1 @@ -89,13 +91,14 @@ inline intptr_t InternalGetExistingThreadLocal(intptr_t index) { const intptr_t kTibExtraTlsOffset = 0xF94; const intptr_t kMaxInlineSlots = 64; const intptr_t kMaxSlots = kMaxInlineSlots + 1024; - ASSERT(0 <= index && index < kMaxSlots); + const intptr_t kPointerSize = sizeof(void*); + DCHECK(0 <= index && index < kMaxSlots); if (index < kMaxInlineSlots) { return static_cast<intptr_t>(__readfsdword(kTibInlineTlsOffset + kPointerSize * index)); } intptr_t extra = static_cast<intptr_t>(__readfsdword(kTibExtraTlsOffset)); - ASSERT(extra != 0); + DCHECK(extra != 0); return *reinterpret_cast<intptr_t*>(extra + kPointerSize * (index - kMaxInlineSlots)); } @@ -139,9 +142,13 @@ class TimezoneCache; class OS { public: - // Initializes the platform OS support that depend on CPU features. This is - // called after CPU initialization. - static void PostSetUp(); + // Initialize the OS class. + // - random_seed: Used for the GetRandomMmapAddress() if non-zero. + // - hard_abort: If true, OS::Abort() will crash instead of aborting. + // - gc_fake_mmap: Name of the file for fake gc mmap used in ll_prof. + static void Initialize(int64_t random_seed, + bool hard_abort, + const char* const gc_fake_mmap); // Returns the accumulated user time for thread. This routine // can be used for profiling. The implementation should @@ -249,17 +256,28 @@ class OS { // Safe formatting print. Ensures that str is always null-terminated. // Returns the number of chars written, or -1 if output was truncated. - static int SNPrintF(Vector<char> str, const char* format, ...); - static int VSNPrintF(Vector<char> str, + static int SNPrintF(char* str, int length, const char* format, ...); + static int VSNPrintF(char* str, + int length, const char* format, va_list args); static char* StrChr(char* str, int c); - static void StrNCpy(Vector<char> dest, const char* src, size_t n); + static void StrNCpy(char* dest, int length, const char* src, size_t n); // Support for the profiler. Can do nothing, in which case ticks // occuring in shared libraries will not be properly accounted for. - static void LogSharedLibraryAddresses(Isolate* isolate); + struct SharedLibraryAddress { + SharedLibraryAddress( + const std::string& library_path, uintptr_t start, uintptr_t end) + : library_path(library_path), start(start), end(end) {} + + std::string library_path; + uintptr_t start; + uintptr_t end; + }; + + static std::vector<SharedLibraryAddress> GetSharedLibraryAddresses(); // Support for the profiler. Notifies the external profiling // process that a code moving garbage collection starts. Can do @@ -267,13 +285,8 @@ class OS { // using --never-compact) if accurate profiling is desired. 
static void SignalCodeMovingGC(); - // The return value indicates the CPU features we are sure of because of the - // OS. For example MacOSX doesn't run on any x86 CPUs that don't have SSE2 - // instructions. - // This is a little messy because the interpretation is subject to the cross - // of the CPU and the OS. The bits in the answer correspond to the bit - // positions indicated by the members of the CpuFeature enum from globals.h - static uint64_t CpuFeaturesImpliedByPlatform(); + // Returns the number of processors online. + static int NumberOfProcessorsOnline(); // The total amount of physical memory available on the current system. static uint64_t TotalPhysicalMemory(); @@ -293,91 +306,17 @@ class OS { // the platform doesn't care. Guaranteed to be a power of two. static int ActivationFrameAlignment(); -#if defined(V8_TARGET_ARCH_IA32) - // Limit below which the extra overhead of the MemCopy function is likely - // to outweigh the benefits of faster copying. - static const int kMinComplexMemCopy = 64; - - // Copy memory area. No restrictions. - static void MemMove(void* dest, const void* src, size_t size); - typedef void (*MemMoveFunction)(void* dest, const void* src, size_t size); - - // Keep the distinction of "move" vs. "copy" for the benefit of other - // architectures. - static void MemCopy(void* dest, const void* src, size_t size) { - MemMove(dest, src, size); - } -#elif defined(V8_HOST_ARCH_ARM) - typedef void (*MemCopyUint8Function)(uint8_t* dest, - const uint8_t* src, - size_t size); - static MemCopyUint8Function memcopy_uint8_function; - static void MemCopyUint8Wrapper(uint8_t* dest, - const uint8_t* src, - size_t chars) { - memcpy(dest, src, chars); - } - // For values < 16, the assembler function is slower than the inlined C code. - static const int kMinComplexMemCopy = 16; - static void MemCopy(void* dest, const void* src, size_t size) { - (*memcopy_uint8_function)(reinterpret_cast<uint8_t*>(dest), - reinterpret_cast<const uint8_t*>(src), - size); - } - static void MemMove(void* dest, const void* src, size_t size) { - memmove(dest, src, size); - } - - typedef void (*MemCopyUint16Uint8Function)(uint16_t* dest, - const uint8_t* src, - size_t size); - static MemCopyUint16Uint8Function memcopy_uint16_uint8_function; - static void MemCopyUint16Uint8Wrapper(uint16_t* dest, - const uint8_t* src, - size_t chars); - // For values < 12, the assembler function is slower than the inlined C code. - static const int kMinComplexConvertMemCopy = 12; - static void MemCopyUint16Uint8(uint16_t* dest, - const uint8_t* src, - size_t size) { - (*memcopy_uint16_uint8_function)(dest, src, size); - } -#elif defined(V8_HOST_ARCH_MIPS) - typedef void (*MemCopyUint8Function)(uint8_t* dest, - const uint8_t* src, - size_t size); - static MemCopyUint8Function memcopy_uint8_function; - static void MemCopyUint8Wrapper(uint8_t* dest, - const uint8_t* src, - size_t chars) { - memcpy(dest, src, chars); - } - // For values < 16, the assembler function is slower than the inlined C code. - static const int kMinComplexMemCopy = 16; - static void MemCopy(void* dest, const void* src, size_t size) { - (*memcopy_uint8_function)(reinterpret_cast<uint8_t*>(dest), - reinterpret_cast<const uint8_t*>(src), - size); - } - static void MemMove(void* dest, const void* src, size_t size) { - memmove(dest, src, size); - } -#else - // Copy memory area to disjoint memory area. 
- static void MemCopy(void* dest, const void* src, size_t size) { - memcpy(dest, src, size); - } - static void MemMove(void* dest, const void* src, size_t size) { - memmove(dest, src, size); - } - static const int kMinComplexMemCopy = 16 * kPointerSize; -#endif // V8_TARGET_ARCH_IA32 - static int GetCurrentProcessId(); + static int GetCurrentThreadId(); + private: static const int msPerSecond = 1000; +#if V8_OS_POSIX + static const char* GetGCFakeMMapFile(); +#endif + DISALLOW_IMPLICIT_CONSTRUCTORS(OS); }; @@ -413,7 +352,7 @@ class VirtualMemory { // necessarily aligned. The user might need to round it up to a multiple of // the alignment to get the start of the aligned block. void* address() { - ASSERT(IsReserved()); + DCHECK(IsReserved()); return address_; } @@ -433,7 +372,7 @@ class VirtualMemory { bool Guard(void* address); void Release() { - ASSERT(IsReserved()); + DCHECK(IsReserved()); // Notice: Order is important here. The VirtualMemory object might live // inside the allocated region. void* address = address_; @@ -441,13 +380,13 @@ class VirtualMemory { Reset(); bool result = ReleaseRegion(address, size); USE(result); - ASSERT(result); + DCHECK(result); } // Assign control of the reserved region to a different VirtualMemory object. // The old object is no longer functional (IsReserved() returns false). void TakeControl(VirtualMemory* from) { - ASSERT(!IsReserved()); + DCHECK(!IsReserved()); address_ = from->address_; size_ = from->size_; from->Reset(); @@ -485,18 +424,12 @@ class VirtualMemory { class Thread { public: // Opaque data type for thread-local storage keys. - // LOCAL_STORAGE_KEY_MIN_VALUE and LOCAL_STORAGE_KEY_MAX_VALUE are specified - // to ensure that enumeration type has correct value range (see Issue 830 for - // more details). - enum LocalStorageKey { - LOCAL_STORAGE_KEY_MIN_VALUE = kMinInt, - LOCAL_STORAGE_KEY_MAX_VALUE = kMaxInt - }; + typedef int32_t LocalStorageKey; class Options { public: Options() : name_("v8:<unknown>"), stack_size_(0) {} - Options(const char* name, int stack_size = 0) + explicit Options(const char* name, int stack_size = 0) : name_(name), stack_size_(stack_size) {} const char* name() const { return name_; } @@ -552,7 +485,7 @@ class Thread { static inline void* GetExistingThreadLocal(LocalStorageKey key) { void* result = reinterpret_cast<void*>( InternalGetExistingThreadLocal(static_cast<intptr_t>(key))); - ASSERT(result == GetThreadLocal(key)); + DCHECK(result == GetThreadLocal(key)); return result; } #else @@ -589,6 +522,6 @@ class Thread { DISALLOW_COPY_AND_ASSIGN(Thread); }; -} } // namespace v8::internal +} } // namespace v8::base -#endif // V8_PLATFORM_H_ +#endif // V8_BASE_PLATFORM_PLATFORM_H_ diff --git a/deps/v8/src/platform/semaphore.cc b/deps/v8/src/base/platform/semaphore.cc similarity index 83% rename from deps/v8/src/platform/semaphore.cc rename to deps/v8/src/base/platform/semaphore.cc index eae47cb367c..e11338fd55a 100644 --- a/deps/v8/src/platform/semaphore.cc +++ b/deps/v8/src/base/platform/semaphore.cc @@ -2,7 +2,7 @@ // Use of this source code is governed by a BSD-style license that can be // found in the LICENSE file. 
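// For illustration, typical usage of the Semaphore class implemented below
// (editor's sketch; assumes the usual TimeDelta::FromMilliseconds() factory):
//
//   Semaphore sem(0);   // starts unavailable
//   sem.Signal();       // count becomes 1
//   sem.Wait();         // consumes the count, returns immediately here
//   if (!sem.WaitFor(TimeDelta::FromMilliseconds(50))) {
//     // timed out: no Signal() arrived within 50 ms
//   }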
-#include "platform/semaphore.h" +#include "src/base/platform/semaphore.h" #if V8_OS_MACOSX #include <mach/mach_init.h> @@ -11,32 +11,32 @@ #include <errno.h> -#include "checks.h" -#include "platform/time.h" +#include "src/base/logging.h" +#include "src/base/platform/time.h" namespace v8 { -namespace internal { +namespace base { #if V8_OS_MACOSX Semaphore::Semaphore(int count) { kern_return_t result = semaphore_create( mach_task_self(), &native_handle_, SYNC_POLICY_FIFO, count); - ASSERT_EQ(KERN_SUCCESS, result); + DCHECK_EQ(KERN_SUCCESS, result); USE(result); } Semaphore::~Semaphore() { kern_return_t result = semaphore_destroy(mach_task_self(), native_handle_); - ASSERT_EQ(KERN_SUCCESS, result); + DCHECK_EQ(KERN_SUCCESS, result); USE(result); } void Semaphore::Signal() { kern_return_t result = semaphore_signal(native_handle_); - ASSERT_EQ(KERN_SUCCESS, result); + DCHECK_EQ(KERN_SUCCESS, result); USE(result); } @@ -45,7 +45,7 @@ void Semaphore::Wait() { while (true) { kern_return_t result = semaphore_wait(native_handle_); if (result == KERN_SUCCESS) return; // Semaphore was signalled. - ASSERT_EQ(KERN_ABORTED, result); + DCHECK_EQ(KERN_ABORTED, result); } } @@ -65,7 +65,7 @@ bool Semaphore::WaitFor(const TimeDelta& rel_time) { kern_return_t result = semaphore_timedwait(native_handle_, ts); if (result == KERN_SUCCESS) return true; // Semaphore was signalled. if (result == KERN_OPERATION_TIMED_OUT) return false; // Timeout. - ASSERT_EQ(KERN_ABORTED, result); + DCHECK_EQ(KERN_ABORTED, result); now = TimeTicks::Now(); } } @@ -73,23 +73,23 @@ bool Semaphore::WaitFor(const TimeDelta& rel_time) { #elif V8_OS_POSIX Semaphore::Semaphore(int count) { - ASSERT(count >= 0); + DCHECK(count >= 0); int result = sem_init(&native_handle_, 0, count); - ASSERT_EQ(0, result); + DCHECK_EQ(0, result); USE(result); } Semaphore::~Semaphore() { int result = sem_destroy(&native_handle_); - ASSERT_EQ(0, result); + DCHECK_EQ(0, result); USE(result); } void Semaphore::Signal() { int result = sem_post(&native_handle_); - ASSERT_EQ(0, result); + DCHECK_EQ(0, result); USE(result); } @@ -99,8 +99,8 @@ void Semaphore::Wait() { int result = sem_wait(&native_handle_); if (result == 0) return; // Semaphore was signalled. // Signal caused spurious wakeup. - ASSERT_EQ(-1, result); - ASSERT_EQ(EINTR, errno); + DCHECK_EQ(-1, result); + DCHECK_EQ(EINTR, errno); } } @@ -126,23 +126,23 @@ bool Semaphore::WaitFor(const TimeDelta& rel_time) { return false; } // Signal caused spurious wakeup. 
- ASSERT_EQ(-1, result); - ASSERT_EQ(EINTR, errno); + DCHECK_EQ(-1, result); + DCHECK_EQ(EINTR, errno); } } #elif V8_OS_WIN Semaphore::Semaphore(int count) { - ASSERT(count >= 0); + DCHECK(count >= 0); native_handle_ = ::CreateSemaphoreA(NULL, count, 0x7fffffff, NULL); - ASSERT(native_handle_ != NULL); + DCHECK(native_handle_ != NULL); } Semaphore::~Semaphore() { BOOL result = CloseHandle(native_handle_); - ASSERT(result); + DCHECK(result); USE(result); } @@ -150,14 +150,14 @@ Semaphore::~Semaphore() { void Semaphore::Signal() { LONG dummy; BOOL result = ReleaseSemaphore(native_handle_, 1, &dummy); - ASSERT(result); + DCHECK(result); USE(result); } void Semaphore::Wait() { DWORD result = WaitForSingleObject(native_handle_, INFINITE); - ASSERT(result == WAIT_OBJECT_0); + DCHECK(result == WAIT_OBJECT_0); USE(result); } @@ -172,7 +172,7 @@ bool Semaphore::WaitFor(const TimeDelta& rel_time) { if (result == WAIT_OBJECT_0) { return true; } - ASSERT(result == WAIT_TIMEOUT); + DCHECK(result == WAIT_TIMEOUT); now = TimeTicks::Now(); } else { DWORD result = WaitForSingleObject( @@ -180,7 +180,7 @@ bool Semaphore::WaitFor(const TimeDelta& rel_time) { if (result == WAIT_TIMEOUT) { return false; } - ASSERT(result == WAIT_OBJECT_0); + DCHECK(result == WAIT_OBJECT_0); return true; } } @@ -188,4 +188,4 @@ bool Semaphore::WaitFor(const TimeDelta& rel_time) { #endif // V8_OS_MACOSX -} } // namespace v8::internal +} } // namespace v8::base diff --git a/deps/v8/src/platform/semaphore.h b/deps/v8/src/base/platform/semaphore.h similarity index 85% rename from deps/v8/src/platform/semaphore.h rename to deps/v8/src/base/platform/semaphore.h index 43441920475..b3105e36f0c 100644 --- a/deps/v8/src/platform/semaphore.h +++ b/deps/v8/src/base/platform/semaphore.h @@ -2,12 +2,12 @@ // Use of this source code is governed by a BSD-style license that can be // found in the LICENSE file. -#ifndef V8_PLATFORM_SEMAPHORE_H_ -#define V8_PLATFORM_SEMAPHORE_H_ +#ifndef V8_BASE_PLATFORM_SEMAPHORE_H_ +#define V8_BASE_PLATFORM_SEMAPHORE_H_ -#include "../lazy-instance.h" +#include "src/base/lazy-instance.h" #if V8_OS_WIN -#include "../win32-headers.h" +#include "src/base/win32-headers.h" #endif #if V8_OS_MACOSX @@ -17,7 +17,7 @@ #endif namespace v8 { -namespace internal { +namespace base { // Forward declarations. class TimeDelta; @@ -90,14 +90,12 @@ struct CreateSemaphoreTrait { template <int N> struct LazySemaphore { - typedef typename LazyDynamicInstance< - Semaphore, - CreateSemaphoreTrait<N>, - ThreadSafeInitOnceTrait>::type type; + typedef typename LazyDynamicInstance<Semaphore, CreateSemaphoreTrait<N>, + ThreadSafeInitOnceTrait>::type type; }; #define LAZY_SEMAPHORE_INITIALIZER LAZY_DYNAMIC_INSTANCE_INITIALIZER -} } // namespace v8::internal +} } // namespace v8::base -#endif // V8_PLATFORM_SEMAPHORE_H_ +#endif // V8_BASE_PLATFORM_SEMAPHORE_H_ diff --git a/deps/v8/src/platform/time.cc b/deps/v8/src/base/platform/time.cc similarity index 82% rename from deps/v8/src/platform/time.cc rename to deps/v8/src/base/platform/time.cc index c6b5786a6fe..4d1bec2b25d 100644 --- a/deps/v8/src/platform/time.cc +++ b/deps/v8/src/base/platform/time.cc @@ -2,10 +2,12 @@ // Use of this source code is governed by a BSD-style license that can be // found in the LICENSE file. 
-#include "platform/time.h" +#include "src/base/platform/time.h" #if V8_OS_POSIX +#include <fcntl.h> // for O_RDONLY #include <sys/time.h> +#include <unistd.h> #endif #if V8_OS_MACOSX #include <mach/mach_time.h> @@ -13,15 +15,16 @@ #include <string.h> -#include "checks.h" -#include "cpu.h" -#include "platform.h" #if V8_OS_WIN -#include "win32-headers.h" +#include "src/base/lazy-instance.h" +#include "src/base/win32-headers.h" #endif +#include "src/base/cpu.h" +#include "src/base/logging.h" +#include "src/base/platform/platform.h" namespace v8 { -namespace internal { +namespace base { TimeDelta TimeDelta::FromDays(int days) { return TimeDelta(days * Time::kMicrosecondsPerDay); @@ -96,8 +99,8 @@ int64_t TimeDelta::InNanoseconds() const { #if V8_OS_MACOSX TimeDelta TimeDelta::FromMachTimespec(struct mach_timespec ts) { - ASSERT_GE(ts.tv_nsec, 0); - ASSERT_LT(ts.tv_nsec, + DCHECK_GE(ts.tv_nsec, 0); + DCHECK_LT(ts.tv_nsec, static_cast<long>(Time::kNanosecondsPerSecond)); // NOLINT return TimeDelta(ts.tv_sec * Time::kMicrosecondsPerSecond + ts.tv_nsec / Time::kNanosecondsPerMicrosecond); @@ -106,7 +109,7 @@ TimeDelta TimeDelta::FromMachTimespec(struct mach_timespec ts) { struct mach_timespec TimeDelta::ToMachTimespec() const { struct mach_timespec ts; - ASSERT(delta_ >= 0); + DCHECK(delta_ >= 0); ts.tv_sec = delta_ / Time::kMicrosecondsPerSecond; ts.tv_nsec = (delta_ % Time::kMicrosecondsPerSecond) * Time::kNanosecondsPerMicrosecond; @@ -119,8 +122,8 @@ struct mach_timespec TimeDelta::ToMachTimespec() const { #if V8_OS_POSIX TimeDelta TimeDelta::FromTimespec(struct timespec ts) { - ASSERT_GE(ts.tv_nsec, 0); - ASSERT_LT(ts.tv_nsec, + DCHECK_GE(ts.tv_nsec, 0); + DCHECK_LT(ts.tv_nsec, static_cast<long>(Time::kNanosecondsPerSecond)); // NOLINT return TimeDelta(ts.tv_sec * Time::kMicrosecondsPerSecond + ts.tv_nsec / Time::kNanosecondsPerMicrosecond); @@ -193,9 +196,9 @@ class Clock V8_FINAL { }; -static LazyStaticInstance<Clock, - DefaultConstructTrait<Clock>, - ThreadSafeInitOnceTrait>::type clock = LAZY_STATIC_INSTANCE_INITIALIZER; +static LazyStaticInstance<Clock, DefaultConstructTrait<Clock>, + ThreadSafeInitOnceTrait>::type clock = + LAZY_STATIC_INSTANCE_INITIALIZER; Time Time::Now() { @@ -227,7 +230,7 @@ Time Time::FromFiletime(FILETIME ft) { FILETIME Time::ToFiletime() const { - ASSERT(us_ >= 0); + DCHECK(us_ >= 0); FILETIME ft; if (IsNull()) { ft.dwLowDateTime = 0; @@ -250,7 +253,7 @@ FILETIME Time::ToFiletime() const { Time Time::Now() { struct timeval tv; int result = gettimeofday(&tv, NULL); - ASSERT_EQ(0, result); + DCHECK_EQ(0, result); USE(result); return FromTimeval(tv); } @@ -262,8 +265,8 @@ Time Time::NowFromSystemTime() { Time Time::FromTimespec(struct timespec ts) { - ASSERT(ts.tv_nsec >= 0); - ASSERT(ts.tv_nsec < static_cast<long>(kNanosecondsPerSecond)); // NOLINT + DCHECK(ts.tv_nsec >= 0); + DCHECK(ts.tv_nsec < static_cast<long>(kNanosecondsPerSecond)); // NOLINT if (ts.tv_nsec == 0 && ts.tv_sec == 0) { return Time(); } @@ -295,8 +298,8 @@ struct timespec Time::ToTimespec() const { Time Time::FromTimeval(struct timeval tv) { - ASSERT(tv.tv_usec >= 0); - ASSERT(tv.tv_usec < static_cast<suseconds_t>(kMicrosecondsPerSecond)); + DCHECK(tv.tv_usec >= 0); + DCHECK(tv.tv_usec < static_cast<suseconds_t>(kMicrosecondsPerSecond)); if (tv.tv_usec == 0 && tv.tv_sec == 0) { return Time(); } @@ -394,14 +397,14 @@ class HighResolutionTickClock V8_FINAL : public TickClock { public: explicit HighResolutionTickClock(int64_t ticks_per_second) : ticks_per_second_(ticks_per_second) { - ASSERT_LT(0, 
ticks_per_second); + DCHECK_LT(0, ticks_per_second); } virtual ~HighResolutionTickClock() {} virtual int64_t Now() V8_OVERRIDE { LARGE_INTEGER now; BOOL result = QueryPerformanceCounter(&now); - ASSERT(result); + DCHECK(result); USE(result); // Intentionally calculate microseconds in a round about manner to avoid @@ -463,9 +466,9 @@ class RolloverProtectedTickClock V8_FINAL : public TickClock { static LazyStaticInstance<RolloverProtectedTickClock, - DefaultConstructTrait<RolloverProtectedTickClock>, - ThreadSafeInitOnceTrait>::type tick_clock = - LAZY_STATIC_INSTANCE_INITIALIZER; + DefaultConstructTrait<RolloverProtectedTickClock>, + ThreadSafeInitOnceTrait>::type tick_clock = + LAZY_STATIC_INSTANCE_INITIALIZER; struct CreateHighResTickClockTrait { @@ -489,16 +492,15 @@ struct CreateHighResTickClockTrait { }; -static LazyDynamicInstance<TickClock, - CreateHighResTickClockTrait, - ThreadSafeInitOnceTrait>::type high_res_tick_clock = - LAZY_DYNAMIC_INSTANCE_INITIALIZER; +static LazyDynamicInstance<TickClock, CreateHighResTickClockTrait, + ThreadSafeInitOnceTrait>::type high_res_tick_clock = + LAZY_DYNAMIC_INSTANCE_INITIALIZER; TimeTicks TimeTicks::Now() { // Make sure we never return 0 here. TimeTicks ticks(tick_clock.Pointer()->Now()); - ASSERT(!ticks.IsNull()); + DCHECK(!ticks.IsNull()); return ticks; } @@ -506,7 +508,7 @@ TimeTicks TimeTicks::Now() { TimeTicks TimeTicks::HighResolutionNow() { // Make sure we never return 0 here. TimeTicks ticks(high_res_tick_clock.Pointer()->Now()); - ASSERT(!ticks.IsNull()); + DCHECK(!ticks.IsNull()); return ticks; } @@ -516,6 +518,14 @@ bool TimeTicks::IsHighResolutionClockWorking() { return high_res_tick_clock.Pointer()->IsHighResolution(); } + +// static +TimeTicks TimeTicks::KernelTimestampNow() { return TimeTicks(0); } + + +// static +bool TimeTicks::KernelTimestampAvailable() { return false; } + #else // V8_OS_WIN TimeTicks TimeTicks::Now() { @@ -529,7 +539,7 @@ TimeTicks TimeTicks::HighResolutionNow() { static struct mach_timebase_info info; if (info.denom == 0) { kern_return_t result = mach_timebase_info(&info); - ASSERT_EQ(KERN_SUCCESS, result); + DCHECK_EQ(KERN_SUCCESS, result); USE(result); } ticks = (mach_absolute_time() / Time::kNanosecondsPerMicrosecond * @@ -542,13 +552,13 @@ TimeTicks TimeTicks::HighResolutionNow() { // cleanup the tools/gyp/v8.gyp file. 
struct timeval tv; int result = gettimeofday(&tv, NULL); - ASSERT_EQ(0, result); + DCHECK_EQ(0, result); USE(result); ticks = (tv.tv_sec * Time::kMicrosecondsPerSecond + tv.tv_usec); #elif V8_OS_POSIX struct timespec ts; int result = clock_gettime(CLOCK_MONOTONIC, &ts); - ASSERT_EQ(0, result); + DCHECK_EQ(0, result); USE(result); ticks = (ts.tv_sec * Time::kMicrosecondsPerSecond + ts.tv_nsec / Time::kNanosecondsPerMicrosecond); @@ -563,6 +573,82 @@ bool TimeTicks::IsHighResolutionClockWorking() { return true; } + +#if V8_OS_LINUX && !V8_LIBRT_NOT_AVAILABLE + +class KernelTimestampClock { + public: + KernelTimestampClock() : clock_fd_(-1), clock_id_(kClockInvalid) { + clock_fd_ = open(kTraceClockDevice, O_RDONLY); + if (clock_fd_ == -1) { + return; + } + clock_id_ = get_clockid(clock_fd_); + } + + virtual ~KernelTimestampClock() { + if (clock_fd_ != -1) { + close(clock_fd_); + } + } + + int64_t Now() { + if (clock_id_ == kClockInvalid) { + return 0; + } + + struct timespec ts; + + clock_gettime(clock_id_, &ts); + return ((int64_t)ts.tv_sec * kNsecPerSec) + ts.tv_nsec; + } + + bool Available() { return clock_id_ != kClockInvalid; } + + private: + static const clockid_t kClockInvalid = -1; + static const char kTraceClockDevice[]; + static const uint64_t kNsecPerSec = 1000000000; + + int clock_fd_; + clockid_t clock_id_; + + static int get_clockid(int fd) { return ((~(clockid_t)(fd) << 3) | 3); } +}; + + +// Timestamp module name +const char KernelTimestampClock::kTraceClockDevice[] = "/dev/trace_clock"; + +#else + +class KernelTimestampClock { + public: + KernelTimestampClock() {} + + int64_t Now() { return 0; } + bool Available() { return false; } +}; + +#endif  // V8_OS_LINUX && !V8_LIBRT_NOT_AVAILABLE + +static LazyStaticInstance<KernelTimestampClock, + DefaultConstructTrait<KernelTimestampClock>, + ThreadSafeInitOnceTrait>::type kernel_tick_clock = + LAZY_STATIC_INSTANCE_INITIALIZER; + + +// static +TimeTicks TimeTicks::KernelTimestampNow() { + return TimeTicks(kernel_tick_clock.Pointer()->Now()); +} + + +// static +bool TimeTicks::KernelTimestampAvailable() { + return kernel_tick_clock.Pointer()->Available(); +} + #endif // V8_OS_WIN -} } // namespace v8::internal +} } // namespace v8::base diff --git a/deps/v8/src/platform/time.h b/deps/v8/src/base/platform/time.h similarity index 95% rename from deps/v8/src/platform/time.h rename to deps/v8/src/base/platform/time.h index 8e35b879177..b348236ff1c 100644 --- a/deps/v8/src/platform/time.h +++ b/deps/v8/src/base/platform/time.h @@ -2,13 +2,13 @@ // Use of this source code is governed by a BSD-style license that can be // found in the LICENSE file. -#ifndef V8_PLATFORM_TIME_H_ -#define V8_PLATFORM_TIME_H_ +#ifndef V8_BASE_PLATFORM_TIME_H_ +#define V8_BASE_PLATFORM_TIME_H_ #include <time.h> #include <limits> -#include "../allocation.h" +#include "src/base/macros.h" // Forward declarations. extern "C" { @@ -19,7 +19,7 @@ struct timeval; } namespace v8 { -namespace internal { +namespace base { class Time; class TimeTicks; @@ -30,7 +30,7 @@ class TimeTicks; // This class represents a duration of time, internally represented in // microseconds. -class TimeDelta V8_FINAL BASE_EMBEDDED { +class TimeDelta V8_FINAL { public: TimeDelta() : delta_(0) {} @@ -158,7 +158,7 @@ class TimeDelta V8_FINAL BASE_EMBEDDED { // This class represents an absolute point in time, internally represented as // microseconds (s/1,000,000) since 00:00:00 UTC, January 1, 1970.
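// For example (editor's illustration): struct timeval tv = { 1411516800, 250000 },
// i.e. 2014-09-24 00:00:00.25 UTC, corresponds to the internal value
// 1411516800 * 1000000 + 250000 microseconds; see Time::FromTimeval() in
// time.cc above.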
-class Time V8_FINAL BASE_EMBEDDED { +class Time V8_FINAL { public: static const int64_t kMillisecondsPerSecond = 1000; static const int64_t kMicrosecondsPerMillisecond = 1000; @@ -295,7 +295,7 @@ inline Time operator+(const TimeDelta& delta, const Time& time) { // Time::Now() may actually decrease or jump). But note that TimeTicks may // "stand still", for example if the computer suspended. -class TimeTicks V8_FINAL BASE_EMBEDDED { +class TimeTicks V8_FINAL { public: TimeTicks() : ticks_(0) {} @@ -315,6 +315,13 @@ class TimeTicks V8_FINAL BASE_EMBEDDED { // Returns true if the high-resolution clock is working on this system. static bool IsHighResolutionClockWorking(); + // Returns Linux kernel timestamp for generating profiler events. This method + // returns null TimeTicks if the kernel cannot provide the timestamps (e.g., + // on non-Linux OS or if the kernel module for timestamps is not loaded). + + static TimeTicks KernelTimestampNow(); + static bool KernelTimestampAvailable(); + // Returns true if this object has not been initialized. bool IsNull() const { return ticks_ == 0; } @@ -388,6 +395,6 @@ inline TimeTicks operator+(const TimeDelta& delta, const TimeTicks& ticks) { return ticks + delta; } -} } // namespace v8::internal +} } // namespace v8::base -#endif // V8_PLATFORM_TIME_H_ +#endif // V8_BASE_PLATFORM_TIME_H_ diff --git a/deps/v8/src/qnx-math.h b/deps/v8/src/base/qnx-math.h similarity index 77% rename from deps/v8/src/qnx-math.h rename to deps/v8/src/base/qnx-math.h index 8cf65d208cb..6ff18f8d123 100644 --- a/deps/v8/src/qnx-math.h +++ b/deps/v8/src/base/qnx-math.h @@ -2,8 +2,8 @@ // Use of this source code is governed by a BSD-style license that can be // found in the LICENSE file. -#ifndef V8_QNX_MATH_H_ -#define V8_QNX_MATH_H_ +#ifndef V8_BASE_QNX_MATH_H_ +#define V8_BASE_QNX_MATH_H_ #include <cmath> @@ -16,4 +16,4 @@ using std::lrint; -#endif // V8_QNX_MATH_H_ +#endif // V8_BASE_QNX_MATH_H_ diff --git a/deps/v8/src/base/safe_conversions.h b/deps/v8/src/base/safe_conversions.h new file mode 100644 index 00000000000..0a1bd696469 --- /dev/null +++ b/deps/v8/src/base/safe_conversions.h @@ -0,0 +1,67 @@ +// Copyright 2014 The Chromium Authors. All rights reserved. +// Use of this source code is governed by a BSD-style license that can be +// found in the LICENSE file. + +// Slightly adapted for inclusion in V8. +// Copyright 2014 the V8 project authors. All rights reserved. + +#ifndef V8_BASE_SAFE_CONVERSIONS_H_ +#define V8_BASE_SAFE_CONVERSIONS_H_ + +#include <limits> + +#include "src/base/safe_conversions_impl.h" + +namespace v8 { +namespace base { + +// Convenience function that returns true if the supplied value is in range +// for the destination type. +template <typename Dst, typename Src> +inline bool IsValueInRangeForNumericType(Src value) { + return internal::DstRangeRelationToSrcRange<Dst>(value) == + internal::RANGE_VALID; +} + +// checked_cast<> is analogous to static_cast<> for numeric types, +// except that it CHECKs that the specified numeric conversion will not +// overflow or underflow. NaN source will always trigger a CHECK. +template <typename Dst, typename Src> +inline Dst checked_cast(Src value) { + CHECK(IsValueInRangeForNumericType<Dst>(value)); + return static_cast<Dst>(value); +} + +// saturated_cast<> is analogous to static_cast<> for numeric types, except +// that the specified numeric conversion will saturate rather than overflow or +// underflow. NaN assignment to an integral will trigger a CHECK condition.
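+// For illustration (editor's sketch, not part of the original header):
+//
+//   int64_t big = int64_t(1) << 40;
+//   int32_t a = saturated_cast<int32_t>(big);  // clamps to INT32_MAX
+//   uint8_t b = saturated_cast<uint8_t>(-5);   // clamps to 0
+//   int32_t c = checked_cast<int32_t>(big);    // fails a CHECK (overflow)
+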
+template <typename Dst, typename Src> +inline Dst saturated_cast(Src value) { + // Optimization for floating point values, which already saturate. + if (std::numeric_limits<Dst>::is_iec559) + return static_cast<Dst>(value); + + switch (internal::DstRangeRelationToSrcRange<Dst>(value)) { + case internal::RANGE_VALID: + return static_cast<Dst>(value); + + case internal::RANGE_UNDERFLOW: + return std::numeric_limits<Dst>::min(); + + case internal::RANGE_OVERFLOW: + return std::numeric_limits<Dst>::max(); + + // Should fail only on attempting to assign NaN to a saturated integer. + case internal::RANGE_INVALID: + CHECK(false); + return std::numeric_limits<Dst>::max(); + } + + UNREACHABLE(); + return static_cast<Dst>(value); +} + +} // namespace base +} // namespace v8 + +#endif // V8_BASE_SAFE_CONVERSIONS_H_ diff --git a/deps/v8/src/base/safe_conversions_impl.h b/deps/v8/src/base/safe_conversions_impl.h new file mode 100644 index 00000000000..90c8e19353d --- /dev/null +++ b/deps/v8/src/base/safe_conversions_impl.h @@ -0,0 +1,220 @@ +// Copyright 2014 The Chromium Authors. All rights reserved. +// Use of this source code is governed by a BSD-style license that can be +// found in the LICENSE file. + +// Slightly adapted for inclusion in V8. +// Copyright 2014 the V8 project authors. All rights reserved. + +#ifndef V8_BASE_SAFE_CONVERSIONS_IMPL_H_ +#define V8_BASE_SAFE_CONVERSIONS_IMPL_H_ + +#include <limits> + +#include "src/base/logging.h" +#include "src/base/macros.h" + +namespace v8 { +namespace base { +namespace internal { + +// The std library doesn't provide a binary max_exponent for integers, however +// we can compute one by adding one to the number of non-sign bits. This allows +// for accurate range comparisons between floating point and integer types. +template <typename NumericType> +struct MaxExponent { + static const int value = std::numeric_limits<NumericType>::is_iec559 + ? std::numeric_limits<NumericType>::max_exponent + : (sizeof(NumericType) * 8 + 1 - + std::numeric_limits<NumericType>::is_signed); +}; + +enum IntegerRepresentation { + INTEGER_REPRESENTATION_UNSIGNED, + INTEGER_REPRESENTATION_SIGNED +}; + +// A range for a given numeric Src type is contained for a given numeric Dst +// type if both numeric_limits<Src>::max() <= numeric_limits<Dst>::max() and +// numeric_limits<Src>::min() >= numeric_limits<Dst>::min() are true. +// We implement this as template specializations rather than simple static +// comparisons to ensure type correctness in our comparisons. +enum NumericRangeRepresentation { + NUMERIC_RANGE_NOT_CONTAINED, + NUMERIC_RANGE_CONTAINED +}; + +// Helper templates to statically determine if our destination type can contain +// maximum and minimum values represented by the source type. + +template < + typename Dst, + typename Src, + IntegerRepresentation DstSign = std::numeric_limits<Dst>::is_signed + ? INTEGER_REPRESENTATION_SIGNED + : INTEGER_REPRESENTATION_UNSIGNED, + IntegerRepresentation SrcSign = + std::numeric_limits<Src>::is_signed + ? INTEGER_REPRESENTATION_SIGNED + : INTEGER_REPRESENTATION_UNSIGNED > +struct StaticDstRangeRelationToSrcRange; + +// Same sign: Dst is guaranteed to contain Src only if its range is equal or +// larger. +template <typename Dst, typename Src, IntegerRepresentation Sign> +struct StaticDstRangeRelationToSrcRange<Dst, Src, Sign, Sign> { + static const NumericRangeRepresentation value = + MaxExponent<Dst>::value >= MaxExponent<Src>::value + ?
NUMERIC_RANGE_CONTAINED + : NUMERIC_RANGE_NOT_CONTAINED; +}; + +// Unsigned to signed: Dst is guaranteed to contain source only if its range is +// larger. +template <typename Dst, typename Src> +struct StaticDstRangeRelationToSrcRange<Dst, + Src, + INTEGER_REPRESENTATION_SIGNED, + INTEGER_REPRESENTATION_UNSIGNED> { + static const NumericRangeRepresentation value = + MaxExponent<Dst>::value > MaxExponent<Src>::value + ? NUMERIC_RANGE_CONTAINED + : NUMERIC_RANGE_NOT_CONTAINED; +}; + +// Signed to unsigned: Dst cannot be statically determined to contain Src. +template <typename Dst, typename Src> +struct StaticDstRangeRelationToSrcRange<Dst, + Src, + INTEGER_REPRESENTATION_UNSIGNED, + INTEGER_REPRESENTATION_SIGNED> { + static const NumericRangeRepresentation value = NUMERIC_RANGE_NOT_CONTAINED; +}; + +enum RangeConstraint { + RANGE_VALID = 0x0, // Value can be represented by the destination type. + RANGE_UNDERFLOW = 0x1, // Value would underflow. + RANGE_OVERFLOW = 0x2, // Value would overflow. + RANGE_INVALID = RANGE_UNDERFLOW | RANGE_OVERFLOW // Invalid (i.e. NaN). +}; + +// Helper function for coercing an int back to a RangeConstraint. +inline RangeConstraint GetRangeConstraint(int integer_range_constraint) { + DCHECK(integer_range_constraint >= RANGE_VALID && + integer_range_constraint <= RANGE_INVALID); + return static_cast<RangeConstraint>(integer_range_constraint); +} + +// This function creates a RangeConstraint from an upper and lower bound +// check by taking advantage of the fact that only NaN can be out of range in +// both directions at once. +inline RangeConstraint GetRangeConstraint(bool is_in_upper_bound, + bool is_in_lower_bound) { + return GetRangeConstraint((is_in_upper_bound ? 0 : RANGE_OVERFLOW) | + (is_in_lower_bound ? 0 : RANGE_UNDERFLOW)); +} + +template < + typename Dst, + typename Src, + IntegerRepresentation DstSign = std::numeric_limits<Dst>::is_signed + ? INTEGER_REPRESENTATION_SIGNED + : INTEGER_REPRESENTATION_UNSIGNED, + IntegerRepresentation SrcSign = std::numeric_limits<Src>::is_signed + ? INTEGER_REPRESENTATION_SIGNED + : INTEGER_REPRESENTATION_UNSIGNED, + NumericRangeRepresentation DstRange = + StaticDstRangeRelationToSrcRange<Dst, Src>::value > +struct DstRangeRelationToSrcRangeImpl; + +// The following templates are for ranges that must be verified at runtime. We +// split them into checks based on signedness to avoid confusing casts and +// compiler warnings on signed and unsigned comparisons. + +// Dst range is statically determined to contain Src: Nothing to check. +template <typename Dst, + typename Src, + IntegerRepresentation DstSign, + IntegerRepresentation SrcSign> +struct DstRangeRelationToSrcRangeImpl<Dst, + Src, + DstSign, + SrcSign, + NUMERIC_RANGE_CONTAINED> { + static RangeConstraint Check(Src value) { return RANGE_VALID; } +}; + +// Signed to signed narrowing: Both the upper and lower boundaries may be +// exceeded. +template <typename Dst, typename Src> +struct DstRangeRelationToSrcRangeImpl<Dst, + Src, + INTEGER_REPRESENTATION_SIGNED, + INTEGER_REPRESENTATION_SIGNED, + NUMERIC_RANGE_NOT_CONTAINED> { + static RangeConstraint Check(Src value) { + return std::numeric_limits<Dst>::is_iec559 + ? GetRangeConstraint(value <= std::numeric_limits<Dst>::max(), + value >= -std::numeric_limits<Dst>::max()) + : GetRangeConstraint(value <= std::numeric_limits<Dst>::max(), + value >= std::numeric_limits<Dst>::min()); + } +}; + +// Unsigned to unsigned narrowing: Only the upper boundary can be exceeded.
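+// For example (editor's note): with Dst = uint8_t and Src = uint32_t, a value
+// of 300 yields RANGE_OVERFLOW and 42 yields RANGE_VALID; underflow is
+// impossible here, hence the unconditional `true` lower-bound argument below.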
+template <typename Dst, typename Src> +struct DstRangeRelationToSrcRangeImpl<Dst, + Src, + INTEGER_REPRESENTATION_UNSIGNED, + INTEGER_REPRESENTATION_UNSIGNED, + NUMERIC_RANGE_NOT_CONTAINED> { + static RangeConstraint Check(Src value) { + return GetRangeConstraint(value <= std::numeric_limits<Dst>::max(), true); + } +}; + +// Unsigned to signed: The upper boundary may be exceeded. +template <typename Dst, typename Src> +struct DstRangeRelationToSrcRangeImpl<Dst, + Src, + INTEGER_REPRESENTATION_SIGNED, + INTEGER_REPRESENTATION_UNSIGNED, + NUMERIC_RANGE_NOT_CONTAINED> { + static RangeConstraint Check(Src value) { + return sizeof(Dst) > sizeof(Src) + ? RANGE_VALID + : GetRangeConstraint( + value <= static_cast<Src>(std::numeric_limits<Dst>::max()), + true); + } +}; + +// Signed to unsigned: The upper boundary may be exceeded for a narrower Dst, +// and any negative value exceeds the lower boundary. +template <typename Dst, typename Src> +struct DstRangeRelationToSrcRangeImpl<Dst, + Src, + INTEGER_REPRESENTATION_UNSIGNED, + INTEGER_REPRESENTATION_SIGNED, + NUMERIC_RANGE_NOT_CONTAINED> { + static RangeConstraint Check(Src value) { + return (MaxExponent<Dst>::value >= MaxExponent<Src>::value) + ? GetRangeConstraint(true, value >= static_cast<Src>(0)) + : GetRangeConstraint( + value <= static_cast<Src>(std::numeric_limits<Dst>::max()), + value >= static_cast<Src>(0)); + } +}; + +template <typename Dst, typename Src> +inline RangeConstraint DstRangeRelationToSrcRange(Src value) { + // Both source and destination must be numeric. + STATIC_ASSERT(std::numeric_limits<Src>::is_specialized); + STATIC_ASSERT(std::numeric_limits<Dst>::is_specialized); + return DstRangeRelationToSrcRangeImpl<Dst, Src>::Check(value); +} + +} // namespace internal +} // namespace base +} // namespace v8 + +#endif // V8_BASE_SAFE_CONVERSIONS_IMPL_H_ diff --git a/deps/v8/src/base/safe_math.h b/deps/v8/src/base/safe_math.h new file mode 100644 index 00000000000..62a2f723f2b --- /dev/null +++ b/deps/v8/src/base/safe_math.h @@ -0,0 +1,276 @@ +// Copyright 2014 The Chromium Authors. All rights reserved. +// Use of this source code is governed by a BSD-style license that can be +// found in the LICENSE file. + +// Slightly adapted for inclusion in V8. +// Copyright 2014 the V8 project authors. All rights reserved. + +#ifndef V8_BASE_SAFE_MATH_H_ +#define V8_BASE_SAFE_MATH_H_ + +#include "src/base/safe_math_impl.h" + +namespace v8 { +namespace base { +namespace internal { + +// CheckedNumeric implements all the logic and operators for detecting integer +// boundary conditions such as overflow, underflow, and invalid conversions. +// The CheckedNumeric type implicitly converts from floating point and integer +// data types, and contains overloads for basic arithmetic operations (i.e.: +, +// -, *, /, %). +// +// The following methods convert from CheckedNumeric to standard numeric values: +// IsValid() - Returns true if the underlying numeric value is valid (i.e. has +// not wrapped and is not the result of an invalid conversion). +// ValueOrDie() - Returns the underlying value. If the state is not valid this +// call will crash on a CHECK. +// ValueOrDefault() - Returns the current value, or the supplied default if the +// state is not valid. +// ValueFloating() - Returns the underlying floating point value (valid only +// for floating point CheckedNumeric types). +// +// Bitwise operations are explicitly not supported, because correct +// handling of some cases (e.g. sign manipulation) is ambiguous.
+// operations are explicitly not supported because they could result in a crash
+// on a CHECK condition. You should use patterns like the following for these
+// operations:
+// Bitwise operation:
+//   CheckedNumeric<int> checked_int = untrusted_input_value;
+//   int x = checked_int.ValueOrDefault(0) | kFlagValues;
+// Comparison:
+//   CheckedNumeric<size_t> checked_size = untrusted_input_value;
+//   checked_size += HEADER_LENGTH;
+//   if (checked_size.IsValid() && checked_size.ValueOrDie() < buffer_size)
+//     Do stuff...
+template <typename T>
+class CheckedNumeric {
+ public:
+  typedef T type;
+
+  CheckedNumeric() {}
+
+  // Copy constructor.
+  template <typename Src>
+  CheckedNumeric(const CheckedNumeric<Src>& rhs)
+      : state_(rhs.ValueUnsafe(), rhs.validity()) {}
+
+  template <typename Src>
+  CheckedNumeric(Src value, RangeConstraint validity)
+      : state_(value, validity) {}
+
+  // This is not an explicit constructor because we implicitly upgrade regular
+  // numerics to CheckedNumerics to make them easier to use.
+  template <typename Src>
+  CheckedNumeric(Src value)  // NOLINT
+      : state_(value) {
+    // Argument must be numeric.
+    STATIC_ASSERT(std::numeric_limits<Src>::is_specialized);
+  }
+
+  // IsValid() is the public API to test if a CheckedNumeric is currently valid.
+  bool IsValid() const { return validity() == RANGE_VALID; }
+
+  // ValueOrDie() The primary accessor for the underlying value. If the current
+  // state is not valid it will CHECK and crash.
+  T ValueOrDie() const {
+    CHECK(IsValid());
+    return state_.value();
+  }
+
+  // ValueOrDefault(T default_value) A convenience method that returns the
+  // current value if the state is valid, and the supplied default_value for
+  // any other state.
+  T ValueOrDefault(T default_value) const {
+    return IsValid() ? state_.value() : default_value;
+  }
+
+  // ValueFloating() - Since floating point values include their validity state,
+  // we provide an easy method for extracting them directly, without a risk of
+  // crashing on a CHECK.
+  T ValueFloating() const {
+    // Argument must be a floating-point value.
+    STATIC_ASSERT(std::numeric_limits<T>::is_iec559);
+    return CheckedNumeric<T>::cast(*this).ValueUnsafe();
+  }
+
+  // validity() - DO NOT USE THIS IN EXTERNAL CODE - It is public right now for
+  // tests and to avoid a big matrix of friend operator overloads. But the
+  // values it returns are likely to change in the future.
+  // Returns: current validity state (i.e. valid, overflow, underflow, nan).
+  // TODO(jschuh): crbug.com/332611 Figure out and implement semantics for
+  // saturation/wrapping so we can expose this state consistently and implement
+  // saturated arithmetic.
+  RangeConstraint validity() const { return state_.validity(); }
+
+  // ValueUnsafe() - DO NOT USE THIS IN EXTERNAL CODE - It is public right now
+  // for tests and to avoid a big matrix of friend operator overloads. But the
+  // values it returns are likely to change in the future.
+  // Returns: the raw numeric value, regardless of the current state.
+  // TODO(jschuh): crbug.com/332611 Figure out and implement semantics for
+  // saturation/wrapping so we can expose this state consistently and implement
+  // saturated arithmetic.
+  T ValueUnsafe() const { return state_.value(); }
+
+  // Prototypes for the supported arithmetic operator overloads.
+  template <typename Src> CheckedNumeric& operator+=(Src rhs);
+  template <typename Src> CheckedNumeric& operator-=(Src rhs);
+  template <typename Src> CheckedNumeric& operator*=(Src rhs);
+  template <typename Src> CheckedNumeric& operator/=(Src rhs);
+  template <typename Src> CheckedNumeric& operator%=(Src rhs);
+
+  CheckedNumeric operator-() const {
+    RangeConstraint validity;
+    T value = CheckedNeg(state_.value(), &validity);
+    // Negation is always valid for floating point.
+    if (std::numeric_limits<T>::is_iec559)
+      return CheckedNumeric<T>(value);
+
+    validity = GetRangeConstraint(state_.validity() | validity);
+    return CheckedNumeric<T>(value, validity);
+  }
+
+  CheckedNumeric Abs() const {
+    RangeConstraint validity;
+    T value = CheckedAbs(state_.value(), &validity);
+    // Absolute value is always valid for floating point.
+    if (std::numeric_limits<T>::is_iec559)
+      return CheckedNumeric<T>(value);
+
+    validity = GetRangeConstraint(state_.validity() | validity);
+    return CheckedNumeric<T>(value, validity);
+  }
+
+  CheckedNumeric& operator++() {
+    *this += 1;
+    return *this;
+  }
+
+  CheckedNumeric operator++(int) {
+    CheckedNumeric value = *this;
+    *this += 1;
+    return value;
+  }
+
+  CheckedNumeric& operator--() {
+    *this -= 1;
+    return *this;
+  }
+
+  CheckedNumeric operator--(int) {
+    CheckedNumeric value = *this;
+    *this -= 1;
+    return value;
+  }
+
+  // These static methods behave like a convenience cast operator targeting
+  // the desired CheckedNumeric type. As an optimization, a reference is
+  // returned when Src is the same type as T.
+  template <typename Src>
+  static CheckedNumeric<T> cast(
+      Src u,
+      typename enable_if<std::numeric_limits<Src>::is_specialized, int>::type =
+          0) {
+    return u;
+  }
+
+  template <typename Src>
+  static CheckedNumeric<T> cast(
+      const CheckedNumeric<Src>& u,
+      typename enable_if<!is_same<Src, T>::value, int>::type = 0) {
+    return u;
+  }
+
+  static const CheckedNumeric<T>& cast(const CheckedNumeric<T>& u) { return u; }
+
+ private:
+  CheckedNumericState<T> state_;
+};
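With the class complete, a short usage sketch may help before the operator boilerplate below (editorial illustration, not part of the patch; the helper name and the 16-byte header size are hypothetical, and the arithmetic operators are the ones defined by the macro that follows):

// A clamped allocation-size computation: any overflow, or a negative count,
// leaves the CheckedNumeric invalid and we fall back to a safe default.
static size_t ComputeAllocSize(int untrusted_count) {
  CheckedNumeric<size_t> size = untrusted_count;  // Implicit upgrade; a
                                                  // negative count is already
                                                  // flagged RANGE_UNDERFLOW.
  size *= sizeof(uint64_t);  // Checked multiply.
  size += 16;                // Checked add for a (hypothetical) header.
  return size.ValueOrDefault(0);
}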
+// This is the boilerplate for the standard arithmetic operator overloads. A
+// macro isn't the prettiest solution, but it beats rewriting these five times.
+// Some details worth noting are:
+// * We apply the standard arithmetic promotions.
+// * We skip range checks for floating points.
+// * We skip range checks for destination integers with sufficient range.
+// TODO(jschuh): extract these out into templates.
+#define BASE_NUMERIC_ARITHMETIC_OPERATORS(NAME, OP, COMPOUND_OP)              \
+  /* Binary arithmetic operator for CheckedNumerics of the same type. */      \
+  template <typename T>                                                       \
+  CheckedNumeric<typename ArithmeticPromotion<T>::type> operator OP(          \
+      const CheckedNumeric<T>& lhs, const CheckedNumeric<T>& rhs) {           \
+    typedef typename ArithmeticPromotion<T>::type Promotion;                  \
+    /* Floating point always takes the fast path */                           \
+    if (std::numeric_limits<T>::is_iec559)                                    \
+      return CheckedNumeric<T>(lhs.ValueUnsafe() OP rhs.ValueUnsafe());       \
+    if (IsIntegerArithmeticSafe<Promotion, T, T>::value)                      \
+      return CheckedNumeric<Promotion>(                                       \
+          lhs.ValueUnsafe() OP rhs.ValueUnsafe(),                             \
+          GetRangeConstraint(rhs.validity() | lhs.validity()));               \
+    RangeConstraint validity = RANGE_VALID;                                   \
+    T result = Checked##NAME(static_cast<Promotion>(lhs.ValueUnsafe()),       \
+                             static_cast<Promotion>(rhs.ValueUnsafe()),       \
+                             &validity);                                      \
+    return CheckedNumeric<Promotion>(                                         \
+        result,                                                               \
+        GetRangeConstraint(validity | lhs.validity() | rhs.validity()));      \
+  }                                                                           \
+  /* Assignment arithmetic operator implementation from CheckedNumeric. */    \
+  template <typename T>                                                       \
+  template <typename Src>                                                     \
+  CheckedNumeric<T>& CheckedNumeric<T>::operator COMPOUND_OP(Src rhs) {       \
+    *this = CheckedNumeric<T>::cast(*this) OP CheckedNumeric<Src>::cast(rhs); \
+    return *this;                                                             \
+  }                                                                           \
+  /* Binary arithmetic operator for CheckedNumeric of different type. */      \
+  template <typename T, typename Src>                                         \
+  CheckedNumeric<typename ArithmeticPromotion<T, Src>::type> operator OP(     \
+      const CheckedNumeric<Src>& lhs, const CheckedNumeric<T>& rhs) {         \
+    typedef typename ArithmeticPromotion<T, Src>::type Promotion;             \
+    if (IsIntegerArithmeticSafe<Promotion, T, Src>::value)                    \
+      return CheckedNumeric<Promotion>(                                       \
+          lhs.ValueUnsafe() OP rhs.ValueUnsafe(),                             \
+          GetRangeConstraint(rhs.validity() | lhs.validity()));               \
+    return CheckedNumeric<Promotion>::cast(lhs)                               \
+        OP CheckedNumeric<Promotion>::cast(rhs);                              \
+  }                                                                           \
+  /* Binary arithmetic operator for left CheckedNumeric and right numeric. */ \
+  template <typename T, typename Src>                                         \
+  CheckedNumeric<typename ArithmeticPromotion<T, Src>::type> operator OP(     \
+      const CheckedNumeric<T>& lhs, Src rhs) {                                \
+    typedef typename ArithmeticPromotion<T, Src>::type Promotion;             \
+    if (IsIntegerArithmeticSafe<Promotion, T, Src>::value)                    \
+      return CheckedNumeric<Promotion>(lhs.ValueUnsafe() OP rhs,              \
+                                       lhs.validity());                       \
+    return CheckedNumeric<Promotion>::cast(lhs)                               \
+        OP CheckedNumeric<Promotion>::cast(rhs);                              \
+  }                                                                           \
+  /* Binary arithmetic operator for right numeric and left CheckedNumeric.
*/ \ + template <typename T, typename Src> \ + CheckedNumeric<typename ArithmeticPromotion<T, Src>::type> operator OP( \ + Src lhs, const CheckedNumeric<T>& rhs) { \ + typedef typename ArithmeticPromotion<T, Src>::type Promotion; \ + if (IsIntegerArithmeticSafe<Promotion, T, Src>::value) \ + return CheckedNumeric<Promotion>(lhs OP rhs.ValueUnsafe(), \ + rhs.validity()); \ + return CheckedNumeric<Promotion>::cast(lhs) \ + OP CheckedNumeric<Promotion>::cast(rhs); \ + } + +BASE_NUMERIC_ARITHMETIC_OPERATORS(Add, +, += ) +BASE_NUMERIC_ARITHMETIC_OPERATORS(Sub, -, -= ) +BASE_NUMERIC_ARITHMETIC_OPERATORS(Mul, *, *= ) +BASE_NUMERIC_ARITHMETIC_OPERATORS(Div, /, /= ) +BASE_NUMERIC_ARITHMETIC_OPERATORS(Mod, %, %= ) + +#undef BASE_NUMERIC_ARITHMETIC_OPERATORS + +} // namespace internal + +using internal::CheckedNumeric; + +} // namespace base +} // namespace v8 + +#endif // V8_BASE_SAFE_MATH_H_ diff --git a/deps/v8/src/base/safe_math_impl.h b/deps/v8/src/base/safe_math_impl.h new file mode 100644 index 00000000000..055e2a02750 --- /dev/null +++ b/deps/v8/src/base/safe_math_impl.h @@ -0,0 +1,531 @@ +// Copyright 2014 The Chromium Authors. All rights reserved. +// Use of this source code is governed by a BSD-style license that can be +// found in the LICENSE file. + +// Slightly adapted for inclusion in V8. +// Copyright 2014 the V8 project authors. All rights reserved. + +#ifndef V8_BASE_SAFE_MATH_IMPL_H_ +#define V8_BASE_SAFE_MATH_IMPL_H_ + +#include <stdint.h> + +#include <cmath> +#include <cstdlib> +#include <limits> + +#include "src/base/macros.h" +#include "src/base/safe_conversions.h" + +namespace v8 { +namespace base { +namespace internal { + + +// From Chromium's base/template_util.h: + +template<class T, T v> +struct integral_constant { + static const T value = v; + typedef T value_type; + typedef integral_constant<T, v> type; +}; + +template <class T, T v> const T integral_constant<T, v>::value; + +typedef integral_constant<bool, true> true_type; +typedef integral_constant<bool, false> false_type; + +template <class T, class U> struct is_same : public false_type {}; +template <class T> struct is_same<T, T> : true_type {}; + +template<bool B, class T = void> +struct enable_if {}; + +template<class T> +struct enable_if<true, T> { typedef T type; }; + +// </template_util.h> + + +// Everything from here up to the floating point operations is portable C++, +// but it may not be fast. This code could be split based on +// platform/architecture and replaced with potentially faster implementations. + +// Integer promotion templates used by the portable checked integer arithmetic. +template <size_t Size, bool IsSigned> +struct IntegerForSizeAndSign; +template <> +struct IntegerForSizeAndSign<1, true> { + typedef int8_t type; +}; +template <> +struct IntegerForSizeAndSign<1, false> { + typedef uint8_t type; +}; +template <> +struct IntegerForSizeAndSign<2, true> { + typedef int16_t type; +}; +template <> +struct IntegerForSizeAndSign<2, false> { + typedef uint16_t type; +}; +template <> +struct IntegerForSizeAndSign<4, true> { + typedef int32_t type; +}; +template <> +struct IntegerForSizeAndSign<4, false> { + typedef uint32_t type; +}; +template <> +struct IntegerForSizeAndSign<8, true> { + typedef int64_t type; +}; +template <> +struct IntegerForSizeAndSign<8, false> { + typedef uint64_t type; +}; + +// WARNING: We have no IntegerForSizeAndSign<16, *>. 
If we ever add one to
+// support 128-bit math, then the ArithmeticPromotion template below will need
+// to be updated (or more likely replaced with a decltype expression).
+
+template <typename Integer>
+struct UnsignedIntegerForSize {
+  typedef typename enable_if<
+      std::numeric_limits<Integer>::is_integer,
+      typename IntegerForSizeAndSign<sizeof(Integer), false>::type>::type type;
+};
+
+template <typename Integer>
+struct SignedIntegerForSize {
+  typedef typename enable_if<
+      std::numeric_limits<Integer>::is_integer,
+      typename IntegerForSizeAndSign<sizeof(Integer), true>::type>::type type;
+};
+
+template <typename Integer>
+struct TwiceWiderInteger {
+  typedef typename enable_if<
+      std::numeric_limits<Integer>::is_integer,
+      typename IntegerForSizeAndSign<
+          sizeof(Integer) * 2,
+          std::numeric_limits<Integer>::is_signed>::type>::type type;
+};
+
+template <typename Integer>
+struct PositionOfSignBit {
+  static const typename enable_if<std::numeric_limits<Integer>::is_integer,
+                                  size_t>::type value = 8 * sizeof(Integer) - 1;
+};
+
+// Helper templates for integer manipulations.
+
+template <typename T>
+bool HasSignBit(T x) {
+  // Cast to unsigned since right shift on signed is undefined.
+  return !!(static_cast<typename UnsignedIntegerForSize<T>::type>(x) >>
+            PositionOfSignBit<T>::value);
+}
+
+// This wrapper undoes the standard integer promotions.
+template <typename T>
+T BinaryComplement(T x) {
+  return ~x;
+}
+
+// Here are the actual portable checked integer math implementations.
+// TODO(jschuh): Break this code out from the enable_if pattern and find a clean
+// way to coalesce things into the CheckedNumericState specializations below.
+
+template <typename T>
+typename enable_if<std::numeric_limits<T>::is_integer, T>::type
+CheckedAdd(T x, T y, RangeConstraint* validity) {
+  // Since the value of x+y is undefined on overflow if we have a signed type,
+  // we compute it using the unsigned type of the same size.
+  typedef typename UnsignedIntegerForSize<T>::type UnsignedDst;
+  UnsignedDst ux = static_cast<UnsignedDst>(x);
+  UnsignedDst uy = static_cast<UnsignedDst>(y);
+  UnsignedDst uresult = ux + uy;
+  // Addition is valid if the sign of (x + y) is equal to either that of x or
+  // that of y.
+  if (std::numeric_limits<T>::is_signed) {
+    if (HasSignBit(BinaryComplement((uresult ^ ux) & (uresult ^ uy))))
+      *validity = RANGE_VALID;
+    else  // Direction of wrap is inverse of result sign.
+      *validity = HasSignBit(uresult) ? RANGE_OVERFLOW : RANGE_UNDERFLOW;
+
+  } else {  // Unsigned is either valid or overflow.
+    *validity = BinaryComplement(x) >= y ? RANGE_VALID : RANGE_OVERFLOW;
+  }
+  return static_cast<T>(uresult);
+}
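The sign trick above is compact enough to deserve a worked instance (editorial sketch, not part of the patch; it assumes V8's DCHECK_EQ):

static int8_t CheckedAddExample() {
  RangeConstraint validity = RANGE_VALID;
  // 0x64 + 0x64 = 0xC8: the true sum 200 wraps to -56 in int8_t.
  int8_t sum = CheckedAdd(static_cast<int8_t>(100), static_cast<int8_t>(100),
                          &validity);
  // The wrapped sum's sign bit differs from both operands' sign bits, so the
  // BinaryComplement test fails; the wrapped result is negative, so the
  // reported wrap direction is overflow.
  DCHECK_EQ(RANGE_OVERFLOW, validity);
  return sum;  // -56.
}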
+template <typename T>
+typename enable_if<std::numeric_limits<T>::is_integer, T>::type
+CheckedSub(T x, T y, RangeConstraint* validity) {
+  // Since the value of x - y is undefined on overflow if we have a signed
+  // type, we compute it using the unsigned type of the same size.
+  typedef typename UnsignedIntegerForSize<T>::type UnsignedDst;
+  UnsignedDst ux = static_cast<UnsignedDst>(x);
+  UnsignedDst uy = static_cast<UnsignedDst>(y);
+  UnsignedDst uresult = ux - uy;
+  // Subtraction is valid if either x and y have same sign, or (x-y) and x have
+  // the same sign.
+  if (std::numeric_limits<T>::is_signed) {
+    if (HasSignBit(BinaryComplement((uresult ^ ux) & (ux ^ uy))))
+      *validity = RANGE_VALID;
+    else  // Direction of wrap is inverse of result sign.
+      *validity = HasSignBit(uresult) ? RANGE_OVERFLOW : RANGE_UNDERFLOW;
+
+  } else {  // Unsigned is either valid or underflow.
+    *validity = x >= y ? RANGE_VALID : RANGE_UNDERFLOW;
+  }
+  return static_cast<T>(uresult);
+}
+
+// Integer multiplication is a bit complicated. In the fast case we promote to
+// a twice wider type, and range check the result. In the slow case we need to
+// manually check that the result won't be truncated by checking with division
+// against the appropriate bound.
+template <typename T>
+typename enable_if<
+    std::numeric_limits<T>::is_integer && sizeof(T) * 2 <= sizeof(uintmax_t),
+    T>::type
+CheckedMul(T x, T y, RangeConstraint* validity) {
+  typedef typename TwiceWiderInteger<T>::type IntermediateType;
+  IntermediateType tmp =
+      static_cast<IntermediateType>(x) * static_cast<IntermediateType>(y);
+  *validity = DstRangeRelationToSrcRange<T>(tmp);
+  return static_cast<T>(tmp);
+}
+
+template <typename T>
+typename enable_if<std::numeric_limits<T>::is_integer &&
+                       std::numeric_limits<T>::is_signed &&
+                       (sizeof(T) * 2 > sizeof(uintmax_t)),
+                   T>::type
+CheckedMul(T x, T y, RangeConstraint* validity) {
+  // If either side is zero then the result will be zero.
+  if (!(x || y)) {
+    *validity = RANGE_VALID;
+    return static_cast<T>(0);
+
+  } else if (x > 0) {
+    if (y > 0)
+      *validity =
+          x <= std::numeric_limits<T>::max() / y ? RANGE_VALID : RANGE_OVERFLOW;
+    else
+      *validity = y >= std::numeric_limits<T>::min() / x ? RANGE_VALID
+                                                         : RANGE_UNDERFLOW;
+
+  } else {
+    if (y > 0)
+      *validity = x >= std::numeric_limits<T>::min() / y ? RANGE_VALID
+                                                         : RANGE_UNDERFLOW;
+    else
+      *validity =
+          y >= std::numeric_limits<T>::max() / x ? RANGE_VALID : RANGE_OVERFLOW;
+  }
+
+  return x * y;
+}
+
+template <typename T>
+typename enable_if<std::numeric_limits<T>::is_integer &&
+                       !std::numeric_limits<T>::is_signed &&
+                       (sizeof(T) * 2 > sizeof(uintmax_t)),
+                   T>::type
+CheckedMul(T x, T y, RangeConstraint* validity) {
+  *validity = (y == 0 || x <= std::numeric_limits<T>::max() / y)
+                  ? RANGE_VALID
+                  : RANGE_OVERFLOW;
+  return x * y;
+}
+
+// Division just requires a check for an invalid negation on signed min/-1.
+template <typename T>
+T CheckedDiv(
+    T x,
+    T y,
+    RangeConstraint* validity,
+    typename enable_if<std::numeric_limits<T>::is_integer, int>::type = 0) {
+  if (std::numeric_limits<T>::is_signed && x == std::numeric_limits<T>::min() &&
+      y == static_cast<T>(-1)) {
+    *validity = RANGE_OVERFLOW;
+    return std::numeric_limits<T>::min();
+  }
+
+  *validity = RANGE_VALID;
+  return x / y;
+}
+
+template <typename T>
+typename enable_if<
+    std::numeric_limits<T>::is_integer && std::numeric_limits<T>::is_signed,
+    T>::type
+CheckedMod(T x, T y, RangeConstraint* validity) {
+  *validity = y > 0 ? RANGE_VALID : RANGE_INVALID;
+  return x % y;
+}
+
+template <typename T>
+typename enable_if<
+    std::numeric_limits<T>::is_integer && !std::numeric_limits<T>::is_signed,
+    T>::type
+CheckedMod(T x, T y, RangeConstraint* validity) {
+  *validity = RANGE_VALID;
+  return x % y;
+}
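Two of the checks above are easy to get wrong, so a pair of concrete probes may help (editorial sketch, not part of the patch; it assumes V8's DCHECK_EQ):

static void CheckedDivModExamples() {
  RangeConstraint validity = RANGE_VALID;
  // INT32_MIN / -1 is the one overflowing division: the true quotient 2^31
  // is one past INT32_MAX, so overflow is reported without executing the
  // division itself.
  CheckedDiv(std::numeric_limits<int32_t>::min(), static_cast<int32_t>(-1),
             &validity);
  DCHECK_EQ(RANGE_OVERFLOW, validity);
  // Signed modulus is only treated as valid for a positive divisor; for
  // y <= 0 the result is flagged RANGE_INVALID rather than guessing at the
  // sign of the remainder.
  CheckedMod(static_cast<int32_t>(7), static_cast<int32_t>(-2), &validity);
  DCHECK_EQ(RANGE_INVALID, validity);
}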
+template <typename T>
+typename enable_if<
+    std::numeric_limits<T>::is_integer && std::numeric_limits<T>::is_signed,
+    T>::type
+CheckedNeg(T value, RangeConstraint* validity) {
+  // The negation of signed min is min, so catch that one.
+  *validity =
+      value != std::numeric_limits<T>::min() ? RANGE_VALID : RANGE_OVERFLOW;
+  return -value;
+}
+
+template <typename T>
+typename enable_if<
+    std::numeric_limits<T>::is_integer && !std::numeric_limits<T>::is_signed,
+    T>::type
+CheckedNeg(T value, RangeConstraint* validity) {
+  // The only legal unsigned negation is zero.
+  *validity = value ? RANGE_UNDERFLOW : RANGE_VALID;
+  return static_cast<T>(
+      -static_cast<typename SignedIntegerForSize<T>::type>(value));
+}
+
+template <typename T>
+typename enable_if<
+    std::numeric_limits<T>::is_integer && std::numeric_limits<T>::is_signed,
+    T>::type
+CheckedAbs(T value, RangeConstraint* validity) {
+  *validity =
+      value != std::numeric_limits<T>::min() ? RANGE_VALID : RANGE_OVERFLOW;
+  return std::abs(value);
+}
+
+template <typename T>
+typename enable_if<
+    std::numeric_limits<T>::is_integer && !std::numeric_limits<T>::is_signed,
+    T>::type
+CheckedAbs(T value, RangeConstraint* validity) {
+  // The absolute value of an unsigned value is just its identity.
+  *validity = RANGE_VALID;
+  return value;
+}
+
+// These are the floating point stubs that the compiler needs to see. Only the
+// negation operation is ever called.
+#define BASE_FLOAT_ARITHMETIC_STUBS(NAME)                        \
+  template <typename T>                                          \
+  typename enable_if<std::numeric_limits<T>::is_iec559, T>::type \
+  Checked##NAME(T, T, RangeConstraint*) {                        \
+    UNREACHABLE();                                               \
+    return 0;                                                    \
+  }
+
+BASE_FLOAT_ARITHMETIC_STUBS(Add)
+BASE_FLOAT_ARITHMETIC_STUBS(Sub)
+BASE_FLOAT_ARITHMETIC_STUBS(Mul)
+BASE_FLOAT_ARITHMETIC_STUBS(Div)
+BASE_FLOAT_ARITHMETIC_STUBS(Mod)
+
+#undef BASE_FLOAT_ARITHMETIC_STUBS
+
+template <typename T>
+typename enable_if<std::numeric_limits<T>::is_iec559, T>::type CheckedNeg(
+    T value,
+    RangeConstraint*) {
+  return -value;
+}
+
+template <typename T>
+typename enable_if<std::numeric_limits<T>::is_iec559, T>::type CheckedAbs(
+    T value,
+    RangeConstraint*) {
+  return std::abs(value);
+}
+
+// Floats carry around their validity state with them, but integers do not. So,
+// we wrap the underlying value in a specialization in order to hide that
+// detail and expose an interface via accessors.
+enum NumericRepresentation {
+  NUMERIC_INTEGER,
+  NUMERIC_FLOATING,
+  NUMERIC_UNKNOWN
+};
+
+template <typename NumericType>
+struct GetNumericRepresentation {
+  static const NumericRepresentation value =
+      std::numeric_limits<NumericType>::is_integer
+          ? NUMERIC_INTEGER
+          : (std::numeric_limits<NumericType>::is_iec559 ? NUMERIC_FLOATING
+                                                         : NUMERIC_UNKNOWN);
+};
+
+template <typename T, NumericRepresentation type =
+                          GetNumericRepresentation<T>::value>
+class CheckedNumericState {};
+
+// Integrals require quite a bit of additional housekeeping to manage state.
+template <typename T>
+class CheckedNumericState<T, NUMERIC_INTEGER> {
+ private:
+  T value_;
+  RangeConstraint validity_;
+
+ public:
+  template <typename Src, NumericRepresentation type>
+  friend class CheckedNumericState;
+
+  CheckedNumericState() : value_(0), validity_(RANGE_VALID) {}
+
+  template <typename Src>
+  CheckedNumericState(Src value, RangeConstraint validity)
+      : value_(value),
+        validity_(GetRangeConstraint(validity |
+                                     DstRangeRelationToSrcRange<T>(value))) {
+    // Argument must be numeric.
+    STATIC_ASSERT(std::numeric_limits<Src>::is_specialized);
+  }
+
+  // Copy constructor.
+ template <typename Src> + CheckedNumericState(const CheckedNumericState<Src>& rhs) + : value_(static_cast<T>(rhs.value())), + validity_(GetRangeConstraint( + rhs.validity() | DstRangeRelationToSrcRange<T>(rhs.value()))) {} + + template <typename Src> + explicit CheckedNumericState( + Src value, + typename enable_if<std::numeric_limits<Src>::is_specialized, int>::type = + 0) + : value_(static_cast<T>(value)), + validity_(DstRangeRelationToSrcRange<T>(value)) {} + + RangeConstraint validity() const { return validity_; } + T value() const { return value_; } +}; + +// Floating points maintain their own validity, but need translation wrappers. +template <typename T> +class CheckedNumericState<T, NUMERIC_FLOATING> { + private: + T value_; + + public: + template <typename Src, NumericRepresentation type> + friend class CheckedNumericState; + + CheckedNumericState() : value_(0.0) {} + + template <typename Src> + CheckedNumericState( + Src value, + RangeConstraint validity, + typename enable_if<std::numeric_limits<Src>::is_integer, int>::type = 0) { + switch (DstRangeRelationToSrcRange<T>(value)) { + case RANGE_VALID: + value_ = static_cast<T>(value); + break; + + case RANGE_UNDERFLOW: + value_ = -std::numeric_limits<T>::infinity(); + break; + + case RANGE_OVERFLOW: + value_ = std::numeric_limits<T>::infinity(); + break; + + case RANGE_INVALID: + value_ = std::numeric_limits<T>::quiet_NaN(); + break; + } + } + + template <typename Src> + explicit CheckedNumericState( + Src value, + typename enable_if<std::numeric_limits<Src>::is_specialized, int>::type = + 0) + : value_(static_cast<T>(value)) {} + + // Copy constructor. + template <typename Src> + CheckedNumericState(const CheckedNumericState<Src>& rhs) + : value_(static_cast<T>(rhs.value())) {} + + RangeConstraint validity() const { + return GetRangeConstraint(value_ <= std::numeric_limits<T>::max(), + value_ >= -std::numeric_limits<T>::max()); + } + T value() const { return value_; } +}; + +// For integers less than 128-bit and floats 32-bit or larger, we can distil +// C/C++ arithmetic promotions down to two simple rules: +// 1. The type with the larger maximum exponent always takes precedence. +// 2. The resulting type must be promoted to at least an int. +// The following template specializations implement that promotion logic. +enum ArithmeticPromotionCategory { + LEFT_PROMOTION, + RIGHT_PROMOTION, + DEFAULT_PROMOTION +}; + +template <typename Lhs, + typename Rhs = Lhs, + ArithmeticPromotionCategory Promotion = + (MaxExponent<Lhs>::value > MaxExponent<Rhs>::value) + ? (MaxExponent<Lhs>::value > MaxExponent<int>::value + ? LEFT_PROMOTION + : DEFAULT_PROMOTION) + : (MaxExponent<Rhs>::value > MaxExponent<int>::value + ? RIGHT_PROMOTION + : DEFAULT_PROMOTION) > +struct ArithmeticPromotion; + +template <typename Lhs, typename Rhs> +struct ArithmeticPromotion<Lhs, Rhs, LEFT_PROMOTION> { + typedef Lhs type; +}; + +template <typename Lhs, typename Rhs> +struct ArithmeticPromotion<Lhs, Rhs, RIGHT_PROMOTION> { + typedef Rhs type; +}; + +template <typename Lhs, typename Rhs> +struct ArithmeticPromotion<Lhs, Rhs, DEFAULT_PROMOTION> { + typedef int type; +}; + +// We can statically check if operations on the provided types can wrap, so we +// can skip the checked operations if they're not needed. So, for an integer we +// care if the destination type preserves the sign and is twice the width of +// the source. 
+template <typename T, typename Lhs, typename Rhs>
+struct IsIntegerArithmeticSafe {
+  static const bool value = !std::numeric_limits<T>::is_iec559 &&
+                            StaticDstRangeRelationToSrcRange<T, Lhs>::value ==
+                                NUMERIC_RANGE_CONTAINED &&
+                            sizeof(T) >= (2 * sizeof(Lhs)) &&
+                            StaticDstRangeRelationToSrcRange<T, Rhs>::value ==
+                                NUMERIC_RANGE_CONTAINED &&
+                            sizeof(T) >= (2 * sizeof(Rhs));
+};
+
+}  // namespace internal
+}  // namespace base
+}  // namespace v8
+
+#endif  // V8_BASE_SAFE_MATH_IMPL_H_
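A quick illustration of what the promotion machinery and this predicate conclude for a few operand pairs (editorial sketch, not part of the patch; it assumes the == comparison for Rhs above, which the original != would have made unsatisfiable, plus V8's STATIC_ASSERT):

// int16_t + int16_t promotes to int, and int can represent any sum of two
// int16_t values, so the wrap-checked path is skipped entirely.
STATIC_ASSERT((is_same<ArithmeticPromotion<int16_t, int16_t>::type, int>::value));
STATIC_ASSERT((IsIntegerArithmeticSafe<int, int16_t, int16_t>::value));
// int + int stays int, which can overflow, so the checked path is taken.
STATIC_ASSERT(!(IsIntegerArithmeticSafe<int, int, int>::value));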
diff --git a/deps/v8/src/utils/random-number-generator.cc b/deps/v8/src/base/utils/random-number-generator.cc
similarity index 78%
rename from deps/v8/src/utils/random-number-generator.cc
rename to deps/v8/src/base/utils/random-number-generator.cc
index 92114b0e25e..be798111712 100644
--- a/deps/v8/src/utils/random-number-generator.cc
+++ b/deps/v8/src/base/utils/random-number-generator.cc
@@ -2,18 +2,19 @@
 // Use of this source code is governed by a BSD-style license that can be
 // found in the LICENSE file.
 
-#include "utils/random-number-generator.h"
+#include "src/base/utils/random-number-generator.h"
 
 #include <stdio.h>
 #include <stdlib.h>
 
-#include "flags.h"
-#include "platform/mutex.h"
-#include "platform/time.h"
-#include "utils.h"
+#include <new>
+
+#include "src/base/macros.h"
+#include "src/base/platform/mutex.h"
+#include "src/base/platform/time.h"
 
 namespace v8 {
-namespace internal {
+namespace base {
 
 static LazyMutex entropy_mutex = LAZY_MUTEX_INITIALIZER;
 static RandomNumberGenerator::EntropySource entropy_source = NULL;
@@ -27,12 +28,6 @@ void RandomNumberGenerator::SetEntropySource(EntropySource source) {
 
 
 RandomNumberGenerator::RandomNumberGenerator() {
-  // Check --random-seed flag first.
-  if (FLAG_random_seed != 0) {
-    SetSeed(FLAG_random_seed);
-    return;
-  }
-
   // Check if embedder supplied an entropy source.
   {
     LockGuard<Mutex> lock_guard(entropy_mutex.Pointer());
     if (entropy_source != NULL) {
@@ -50,9 +45,9 @@ RandomNumberGenerator::RandomNumberGenerator() {
   // https://code.google.com/p/v8/issues/detail?id=2905
   unsigned first_half, second_half;
   errno_t result = rand_s(&first_half);
-  ASSERT_EQ(0, result);
+  DCHECK_EQ(0, result);
   result = rand_s(&second_half);
-  ASSERT_EQ(0, result);
+  DCHECK_EQ(0, result);
   SetSeed((static_cast<int64_t>(first_half) << 32) + second_half);
 #else
   // Gather entropy from /dev/urandom if available.
@@ -84,10 +79,10 @@ RandomNumberGenerator::RandomNumberGenerator() {
 
 
 int RandomNumberGenerator::NextInt(int max) {
-  ASSERT_LE(0, max);
+  DCHECK_LE(0, max);
 
   // Fast path if max is a power of 2.
-  if (IsPowerOf2(max)) {
+  if (IS_POWER_OF_TWO(max)) {
     return static_cast<int>((max * static_cast<int64_t>(Next(31))) >> 31);
   }
@@ -115,9 +110,15 @@ void RandomNumberGenerator::NextBytes(void* buffer, size_t buflen) {
 
 
 int RandomNumberGenerator::Next(int bits) {
-  ASSERT_LT(0, bits);
-  ASSERT_GE(32, bits);
-  int64_t seed = (seed_ * kMultiplier + kAddend) & kMask;
+  DCHECK_LT(0, bits);
+  DCHECK_GE(32, bits);
+  // Do unsigned multiplication, which has the intended modulo semantics, while
+  // signed multiplication would expose undefined behavior.
+  uint64_t product = static_cast<uint64_t>(seed_) * kMultiplier;
+  // Assigning a uint64_t to an int64_t is implementation defined, but this
+  // should be OK. Use a static_cast to explicitly state that we know what we're
+  // doing. (Famous last words...)
+  int64_t seed = static_cast<int64_t>((product + kAddend) & kMask);
   seed_ = seed;
   return static_cast<int>(seed >> (48 - bits));
 }
@@ -127,4 +128,4 @@ void RandomNumberGenerator::SetSeed(int64_t seed) {
   seed_ = (seed ^ kMultiplier) & kMask;
 }
 
-} }  // namespace v8::internal
+} }  // namespace v8::base
diff --git a/deps/v8/src/utils/random-number-generator.h b/deps/v8/src/base/utils/random-number-generator.h
similarity index 93%
rename from deps/v8/src/utils/random-number-generator.h
rename to deps/v8/src/base/utils/random-number-generator.h
index 331cffae77d..5955d665976 100644
--- a/deps/v8/src/utils/random-number-generator.h
+++ b/deps/v8/src/base/utils/random-number-generator.h
@@ -2,13 +2,13 @@
 // Use of this source code is governed by a BSD-style license that can be
 // found in the LICENSE file.
 
-#ifndef V8_UTILS_RANDOM_NUMBER_GENERATOR_H_
-#define V8_UTILS_RANDOM_NUMBER_GENERATOR_H_
+#ifndef V8_BASE_UTILS_RANDOM_NUMBER_GENERATOR_H_
+#define V8_BASE_UTILS_RANDOM_NUMBER_GENERATOR_H_
 
-#include "globals.h"
+#include "src/base/macros.h"
 
 namespace v8 {
-namespace internal {
+namespace base {
 
 // -----------------------------------------------------------------------------
 // RandomNumberGenerator
@@ -71,17 +71,19 @@ class RandomNumberGenerator V8_FINAL {
   // Fills the elements of a specified array of bytes with random numbers.
   void NextBytes(void* buffer, size_t buflen);
 
+  // Override the current seed.
+  void SetSeed(int64_t seed);
+
  private:
   static const int64_t kMultiplier = V8_2PART_UINT64_C(0x5, deece66d);
   static const int64_t kAddend = 0xb;
   static const int64_t kMask = V8_2PART_UINT64_C(0xffff, ffffffff);
 
   int Next(int bits) V8_WARN_UNUSED_RESULT;
-  void SetSeed(int64_t seed);
 
   int64_t seed_;
 };
 
-} }  // namespace v8::internal
+} }  // namespace v8::base
 
-#endif  // V8_UTILS_RANDOM_NUMBER_GENERATOR_H_
+#endif  // V8_BASE_UTILS_RANDOM_NUMBER_GENERATOR_H_
diff --git a/deps/v8/src/win32-headers.h b/deps/v8/src/base/win32-headers.h
similarity index 94%
rename from deps/v8/src/win32-headers.h
rename to deps/v8/src/base/win32-headers.h
index 5ede3b313d7..e6b88bb2ff1 100644
--- a/deps/v8/src/win32-headers.h
+++ b/deps/v8/src/base/win32-headers.h
@@ -2,8 +2,8 @@
 // Use of this source code is governed by a BSD-style license that can be
 // found in the LICENSE file.
 
-#ifndef V8_WIN32_HEADERS_H_
-#define V8_WIN32_HEADERS_H_
+#ifndef V8_BASE_WIN32_HEADERS_H_
+#define V8_BASE_WIN32_HEADERS_H_
 
 #ifndef WIN32_LEAN_AND_MEAN
 // WIN32_LEAN_AND_MEAN implies NOCRYPT and NOGDI.
@@ -35,9 +35,9 @@
 
 #include <windows.h>
 
+#include <mmsystem.h>  // For timeGetTime().
 #include <signal.h>  // For raise().
 #include <time.h>  // For LocalOffset() implementation.
-#include <mmsystem.h>  // For timeGetTime().
 #ifdef __MINGW32__
 // Require Windows XP or higher when compiling with MinGW. This is for MinGW
 // header files to expose getaddrinfo.
@@ -76,4 +76,4 @@
 #undef CreateSemaphore
 #undef Yield
 
-#endif  // V8_WIN32_HEADERS_H_
+#endif  // V8_BASE_WIN32_HEADERS_H_
diff --git a/deps/v8/src/win32-math.cc b/deps/v8/src/base/win32-math.cc
similarity index 93%
rename from deps/v8/src/win32-math.cc
rename to deps/v8/src/base/win32-math.cc
index fb42383de9d..d6fc78bc82a 100644
--- a/deps/v8/src/win32-math.cc
+++ b/deps/v8/src/base/win32-math.cc
@@ -8,13 +8,13 @@
 // (http://www.opengroup.org/onlinepubs/000095399/)
 
 #if defined(_MSC_VER) && (_MSC_VER < 1800)
 
-#include "win32-headers.h"
-#include <limits.h>  // Required for INT_MAX etc.
+#include "src/base/win32-headers.h" #include <float.h> // Required for DBL_MAX and on Win32 for finite() +#include <limits.h> // Required for INT_MAX etc. #include <cmath> -#include "win32-math.h" +#include "src/base/win32-math.h" -#include "checks.h" +#include "src/base/logging.h" namespace std { @@ -62,7 +62,7 @@ int fpclassify(double x) { if (flags & (_FPCLASS_PINF | _FPCLASS_NINF)) return FP_INFINITE; // All cases should be covered by the code above. - ASSERT(flags & (_FPCLASS_SNAN | _FPCLASS_QNAN)); + DCHECK(flags & (_FPCLASS_SNAN | _FPCLASS_QNAN)); return FP_NAN; } diff --git a/deps/v8/src/win32-math.h b/deps/v8/src/base/win32-math.h similarity index 90% rename from deps/v8/src/win32-math.h rename to deps/v8/src/base/win32-math.h index 7b7cbc9256a..e1c03506b2d 100644 --- a/deps/v8/src/win32-math.h +++ b/deps/v8/src/base/win32-math.h @@ -7,8 +7,8 @@ // semantics for these functions. // (http://www.opengroup.org/onlinepubs/000095399/) -#ifndef V8_WIN32_MATH_H_ -#define V8_WIN32_MATH_H_ +#ifndef V8_BASE_WIN32_MATH_H_ +#define V8_BASE_WIN32_MATH_H_ #ifndef _MSC_VER #error Wrong environment, expected MSVC. @@ -39,4 +39,4 @@ int signbit(double x); #endif // _MSC_VER < 1800 -#endif // V8_WIN32_MATH_H_ +#endif // V8_BASE_WIN32_MATH_H_ diff --git a/deps/v8/src/bignum-dtoa.cc b/deps/v8/src/bignum-dtoa.cc index fa80aad4eac..53bf418f936 100644 --- a/deps/v8/src/bignum-dtoa.cc +++ b/deps/v8/src/bignum-dtoa.cc @@ -4,20 +4,20 @@ #include <cmath> -#include "../include/v8stdint.h" -#include "checks.h" -#include "utils.h" +#include "include/v8stdint.h" +#include "src/base/logging.h" +#include "src/utils.h" -#include "bignum-dtoa.h" +#include "src/bignum-dtoa.h" -#include "bignum.h" -#include "double.h" +#include "src/bignum.h" +#include "src/double.h" namespace v8 { namespace internal { static int NormalizedExponent(uint64_t significand, int exponent) { - ASSERT(significand != 0); + DCHECK(significand != 0); while ((significand & Double::kHiddenBit) == 0) { significand = significand << 1; exponent = exponent - 1; @@ -68,8 +68,8 @@ static void GenerateCountedDigits(int count, int* decimal_point, void BignumDtoa(double v, BignumDtoaMode mode, int requested_digits, Vector<char> buffer, int* length, int* decimal_point) { - ASSERT(v > 0); - ASSERT(!Double(v).IsSpecial()); + DCHECK(v > 0); + DCHECK(!Double(v).IsSpecial()); uint64_t significand = Double(v).Significand(); bool is_even = (significand & 1) == 0; int exponent = Double(v).Exponent(); @@ -99,7 +99,7 @@ void BignumDtoa(double v, BignumDtoaMode mode, int requested_digits, // 4e-324. In this case the denominator needs fewer than 324*4 binary digits. // The maximum double is 1.7976931348623157e308 which needs fewer than // 308*4 binary digits. - ASSERT(Bignum::kMaxSignificantBits >= 324*4); + DCHECK(Bignum::kMaxSignificantBits >= 324*4); bool need_boundary_deltas = (mode == BIGNUM_DTOA_SHORTEST); InitialScaledStartValues(v, estimated_power, need_boundary_deltas, &numerator, &denominator, @@ -159,7 +159,7 @@ static void GenerateShortestDigits(Bignum* numerator, Bignum* denominator, while (true) { uint16_t digit; digit = numerator->DivideModuloIntBignum(*denominator); - ASSERT(digit <= 9); // digit is a uint16_t and therefore always positive. + DCHECK(digit <= 9); // digit is a uint16_t and therefore always positive. // digit = numerator / denominator (integer division). // numerator = numerator % denominator. 
buffer[(*length)++] = digit + '0'; @@ -205,7 +205,7 @@ static void GenerateShortestDigits(Bignum* numerator, Bignum* denominator, // loop would have stopped earlier. // We still have an assert here in case the preconditions were not // satisfied. - ASSERT(buffer[(*length) - 1] != '9'); + DCHECK(buffer[(*length) - 1] != '9'); buffer[(*length) - 1]++; } else { // Halfway case. @@ -216,7 +216,7 @@ static void GenerateShortestDigits(Bignum* numerator, Bignum* denominator, if ((buffer[(*length) - 1] - '0') % 2 == 0) { // Round down => Do nothing. } else { - ASSERT(buffer[(*length) - 1] != '9'); + DCHECK(buffer[(*length) - 1] != '9'); buffer[(*length) - 1]++; } } @@ -228,9 +228,9 @@ static void GenerateShortestDigits(Bignum* numerator, Bignum* denominator, // Round up. // Note again that the last digit could not be '9' since this would have // stopped the loop earlier. - // We still have an ASSERT here, in case the preconditions were not + // We still have an DCHECK here, in case the preconditions were not // satisfied. - ASSERT(buffer[(*length) -1] != '9'); + DCHECK(buffer[(*length) -1] != '9'); buffer[(*length) - 1]++; return; } @@ -247,11 +247,11 @@ static void GenerateShortestDigits(Bignum* numerator, Bignum* denominator, static void GenerateCountedDigits(int count, int* decimal_point, Bignum* numerator, Bignum* denominator, Vector<char>(buffer), int* length) { - ASSERT(count >= 0); + DCHECK(count >= 0); for (int i = 0; i < count - 1; ++i) { uint16_t digit; digit = numerator->DivideModuloIntBignum(*denominator); - ASSERT(digit <= 9); // digit is a uint16_t and therefore always positive. + DCHECK(digit <= 9); // digit is a uint16_t and therefore always positive. // digit = numerator / denominator (integer division). // numerator = numerator % denominator. buffer[i] = digit + '0'; @@ -304,7 +304,7 @@ static void BignumToFixed(int requested_digits, int* decimal_point, } else if (-(*decimal_point) == requested_digits) { // We only need to verify if the number rounds down or up. // Ex: 0.04 and 0.06 with requested_digits == 1. - ASSERT(*decimal_point == -requested_digits); + DCHECK(*decimal_point == -requested_digits); // Initially the fraction lies in range (1, 10]. Multiply the denominator // by 10 so that we can compare more easily. denominator->Times10(); @@ -383,7 +383,7 @@ static void InitialScaledStartValuesPositiveExponent( Bignum* numerator, Bignum* denominator, Bignum* delta_minus, Bignum* delta_plus) { // A positive exponent implies a positive power. - ASSERT(estimated_power >= 0); + DCHECK(estimated_power >= 0); // Since the estimated_power is positive we simply multiply the denominator // by 10^estimated_power. @@ -502,7 +502,7 @@ static void InitialScaledStartValuesNegativeExponentNegativePower( // numerator = v * 10^-estimated_power * 2 * 2^-exponent. // Remember: numerator has been abused as power_ten. So no need to assign it // to itself. - ASSERT(numerator == power_ten); + DCHECK(numerator == power_ten); numerator->MultiplyByUInt64(significand); // denominator = 2 * 2^-exponent with exponent < 0. diff --git a/deps/v8/src/bignum.cc b/deps/v8/src/bignum.cc index e47f355301b..254cb012b66 100644 --- a/deps/v8/src/bignum.cc +++ b/deps/v8/src/bignum.cc @@ -2,9 +2,10 @@ // Use of this source code is governed by a BSD-style license that can be // found in the LICENSE file. 
-#include "../include/v8stdint.h" -#include "utils.h" -#include "bignum.h" +#include "src/v8.h" + +#include "src/bignum.h" +#include "src/utils.h" namespace v8 { namespace internal { @@ -25,7 +26,7 @@ static int BitSize(S value) { // Guaranteed to lie in one Bigit. void Bignum::AssignUInt16(uint16_t value) { - ASSERT(kBigitSize >= BitSize(value)); + DCHECK(kBigitSize >= BitSize(value)); Zero(); if (value == 0) return; @@ -71,7 +72,7 @@ static uint64_t ReadUInt64(Vector<const char> buffer, uint64_t result = 0; for (int i = from; i < from + digits_to_read; ++i) { int digit = buffer[i] - '0'; - ASSERT(0 <= digit && digit <= 9); + DCHECK(0 <= digit && digit <= 9); result = result * 10 + digit; } return result; @@ -147,8 +148,8 @@ void Bignum::AddUInt64(uint64_t operand) { void Bignum::AddBignum(const Bignum& other) { - ASSERT(IsClamped()); - ASSERT(other.IsClamped()); + DCHECK(IsClamped()); + DCHECK(other.IsClamped()); // If this has a greater exponent than other append zero-bigits to this. // After this call exponent_ <= other.exponent_. @@ -169,7 +170,7 @@ void Bignum::AddBignum(const Bignum& other) { EnsureCapacity(1 + Max(BigitLength(), other.BigitLength()) - exponent_); Chunk carry = 0; int bigit_pos = other.exponent_ - exponent_; - ASSERT(bigit_pos >= 0); + DCHECK(bigit_pos >= 0); for (int i = 0; i < other.used_digits_; ++i) { Chunk sum = bigits_[bigit_pos] + other.bigits_[i] + carry; bigits_[bigit_pos] = sum & kBigitMask; @@ -184,15 +185,15 @@ void Bignum::AddBignum(const Bignum& other) { bigit_pos++; } used_digits_ = Max(bigit_pos, used_digits_); - ASSERT(IsClamped()); + DCHECK(IsClamped()); } void Bignum::SubtractBignum(const Bignum& other) { - ASSERT(IsClamped()); - ASSERT(other.IsClamped()); + DCHECK(IsClamped()); + DCHECK(other.IsClamped()); // We require this to be bigger than other. - ASSERT(LessEqual(other, *this)); + DCHECK(LessEqual(other, *this)); Align(other); @@ -200,7 +201,7 @@ void Bignum::SubtractBignum(const Bignum& other) { Chunk borrow = 0; int i; for (i = 0; i < other.used_digits_; ++i) { - ASSERT((borrow == 0) || (borrow == 1)); + DCHECK((borrow == 0) || (borrow == 1)); Chunk difference = bigits_[i + offset] - other.bigits_[i] - borrow; bigits_[i + offset] = difference & kBigitMask; borrow = difference >> (kChunkSize - 1); @@ -234,7 +235,7 @@ void Bignum::MultiplyByUInt32(uint32_t factor) { // The product of a bigit with the factor is of size kBigitSize + 32. // Assert that this number + 1 (for the carry) fits into double chunk. 
- ASSERT(kDoubleChunkSize >= kBigitSize + 32 + 1); + DCHECK(kDoubleChunkSize >= kBigitSize + 32 + 1); DoubleChunk carry = 0; for (int i = 0; i < used_digits_; ++i) { DoubleChunk product = static_cast<DoubleChunk>(factor) * bigits_[i] + carry; @@ -256,7 +257,7 @@ void Bignum::MultiplyByUInt64(uint64_t factor) { Zero(); return; } - ASSERT(kBigitSize < 32); + DCHECK(kBigitSize < 32); uint64_t carry = 0; uint64_t low = factor & 0xFFFFFFFF; uint64_t high = factor >> 32; @@ -296,7 +297,7 @@ void Bignum::MultiplyByPowerOfTen(int exponent) { { kFive1, kFive2, kFive3, kFive4, kFive5, kFive6, kFive7, kFive8, kFive9, kFive10, kFive11, kFive12 }; - ASSERT(exponent >= 0); + DCHECK(exponent >= 0); if (exponent == 0) return; if (used_digits_ == 0) return; @@ -318,7 +319,7 @@ void Bignum::MultiplyByPowerOfTen(int exponent) { void Bignum::Square() { - ASSERT(IsClamped()); + DCHECK(IsClamped()); int product_length = 2 * used_digits_; EnsureCapacity(product_length); @@ -380,7 +381,7 @@ void Bignum::Square() { } // Since the result was guaranteed to lie inside the number the // accumulator must be 0 now. - ASSERT(accumulator == 0); + DCHECK(accumulator == 0); // Don't forget to update the used_digits and the exponent. used_digits_ = product_length; @@ -390,8 +391,8 @@ void Bignum::Square() { void Bignum::AssignPowerUInt16(uint16_t base, int power_exponent) { - ASSERT(base != 0); - ASSERT(power_exponent >= 0); + DCHECK(base != 0); + DCHECK(power_exponent >= 0); if (power_exponent == 0) { AssignUInt16(1); return; @@ -464,9 +465,9 @@ void Bignum::AssignPowerUInt16(uint16_t base, int power_exponent) { // Precondition: this/other < 16bit. uint16_t Bignum::DivideModuloIntBignum(const Bignum& other) { - ASSERT(IsClamped()); - ASSERT(other.IsClamped()); - ASSERT(other.used_digits_ > 0); + DCHECK(IsClamped()); + DCHECK(other.IsClamped()); + DCHECK(other.used_digits_ > 0); // Easy case: if we have less digits than the divisor than the result is 0. // Note: this handles the case where this == 0, too. @@ -484,14 +485,14 @@ uint16_t Bignum::DivideModuloIntBignum(const Bignum& other) { // This naive approach is extremely inefficient if the this divided other // might be big. This function is implemented for doubleToString where // the result should be small (less than 10). - ASSERT(other.bigits_[other.used_digits_ - 1] >= ((1 << kBigitSize) / 16)); + DCHECK(other.bigits_[other.used_digits_ - 1] >= ((1 << kBigitSize) / 16)); // Remove the multiples of the first digit. // Example this = 23 and other equals 9. -> Remove 2 multiples. result += bigits_[used_digits_ - 1]; SubtractTimes(other, bigits_[used_digits_ - 1]); } - ASSERT(BigitLength() == other.BigitLength()); + DCHECK(BigitLength() == other.BigitLength()); // Both bignums are at the same length now. // Since other has more than 0 digits we know that the access to @@ -528,7 +529,7 @@ uint16_t Bignum::DivideModuloIntBignum(const Bignum& other) { template<typename S> static int SizeInHexChars(S number) { - ASSERT(number > 0); + DCHECK(number > 0); int result = 0; while (number != 0) { number >>= 4; @@ -539,16 +540,16 @@ static int SizeInHexChars(S number) { static char HexCharOfValue(int value) { - ASSERT(0 <= value && value <= 16); + DCHECK(0 <= value && value <= 16); if (value < 10) return value + '0'; return value - 10 + 'A'; } bool Bignum::ToHexString(char* buffer, int buffer_size) const { - ASSERT(IsClamped()); + DCHECK(IsClamped()); // Each bigit must be printable as separate hex-character. 
- ASSERT(kBigitSize % 4 == 0); + DCHECK(kBigitSize % 4 == 0); const int kHexCharsPerBigit = kBigitSize / 4; if (used_digits_ == 0) { @@ -593,8 +594,8 @@ Bignum::Chunk Bignum::BigitAt(int index) const { int Bignum::Compare(const Bignum& a, const Bignum& b) { - ASSERT(a.IsClamped()); - ASSERT(b.IsClamped()); + DCHECK(a.IsClamped()); + DCHECK(b.IsClamped()); int bigit_length_a = a.BigitLength(); int bigit_length_b = b.BigitLength(); if (bigit_length_a < bigit_length_b) return -1; @@ -611,9 +612,9 @@ int Bignum::Compare(const Bignum& a, const Bignum& b) { int Bignum::PlusCompare(const Bignum& a, const Bignum& b, const Bignum& c) { - ASSERT(a.IsClamped()); - ASSERT(b.IsClamped()); - ASSERT(c.IsClamped()); + DCHECK(a.IsClamped()); + DCHECK(b.IsClamped()); + DCHECK(c.IsClamped()); if (a.BigitLength() < b.BigitLength()) { return PlusCompare(b, a, c); } @@ -690,15 +691,15 @@ void Bignum::Align(const Bignum& other) { } used_digits_ += zero_digits; exponent_ -= zero_digits; - ASSERT(used_digits_ >= 0); - ASSERT(exponent_ >= 0); + DCHECK(used_digits_ >= 0); + DCHECK(exponent_ >= 0); } } void Bignum::BigitsShiftLeft(int shift_amount) { - ASSERT(shift_amount < kBigitSize); - ASSERT(shift_amount >= 0); + DCHECK(shift_amount < kBigitSize); + DCHECK(shift_amount >= 0); Chunk carry = 0; for (int i = 0; i < used_digits_; ++i) { Chunk new_carry = bigits_[i] >> (kBigitSize - shift_amount); @@ -720,7 +721,7 @@ void Bignum::SubtractTimes(const Bignum& other, int factor) { b.MultiplyByUInt32(factor); a.SubtractBignum(b); #endif - ASSERT(exponent_ <= other.exponent_); + DCHECK(exponent_ <= other.exponent_); if (factor < 3) { for (int i = 0; i < factor; ++i) { SubtractBignum(other); @@ -745,7 +746,7 @@ void Bignum::SubtractTimes(const Bignum& other, int factor) { borrow = difference >> (kChunkSize - 1); } Clamp(); - ASSERT(Bignum::Equal(a, *this)); + DCHECK(Bignum::Equal(a, *this)); } diff --git a/deps/v8/src/bootstrapper.cc b/deps/v8/src/bootstrapper.cc index 1e59f725c97..240be719187 100644 --- a/deps/v8/src/bootstrapper.cc +++ b/deps/v8/src/bootstrapper.cc @@ -2,19 +2,19 @@ // Use of this source code is governed by a BSD-style license that can be // found in the LICENSE file. -#include "bootstrapper.h" - -#include "accessors.h" -#include "isolate-inl.h" -#include "natives.h" -#include "snapshot.h" -#include "trig-table.h" -#include "extensions/externalize-string-extension.h" -#include "extensions/free-buffer-extension.h" -#include "extensions/gc-extension.h" -#include "extensions/statistics-extension.h" -#include "extensions/trigger-failure-extension.h" -#include "code-stubs.h" +#include "src/bootstrapper.h" + +#include "src/accessors.h" +#include "src/code-stubs.h" +#include "src/extensions/externalize-string-extension.h" +#include "src/extensions/free-buffer-extension.h" +#include "src/extensions/gc-extension.h" +#include "src/extensions/statistics-extension.h" +#include "src/extensions/trigger-failure-extension.h" +#include "src/isolate-inl.h" +#include "src/natives.h" +#include "src/snapshot.h" +#include "third_party/fdlibm/fdlibm.h" namespace v8 { namespace internal { @@ -44,7 +44,7 @@ Bootstrapper::Bootstrapper(Isolate* isolate) Handle<String> Bootstrapper::NativesSourceLookup(int index) { - ASSERT(0 <= index && index < Natives::GetBuiltinsCount()); + DCHECK(0 <= index && index < Natives::GetBuiltinsCount()); Heap* heap = isolate_->heap(); if (heap->natives_source_cache()->get(index)->IsUndefined()) { // We can use external strings for the natives. 
@@ -121,7 +121,7 @@ char* Bootstrapper::AllocateAutoDeletedArray(int bytes) { void Bootstrapper::TearDown() { if (delete_these_non_arrays_on_tear_down_ != NULL) { int len = delete_these_non_arrays_on_tear_down_->length(); - ASSERT(len < 24); // Don't use this mechanism for unbounded allocations. + DCHECK(len < 27); // Don't use this mechanism for unbounded allocations. for (int i = 0; i < len; i++) { delete delete_these_non_arrays_on_tear_down_->at(i); delete_these_non_arrays_on_tear_down_->at(i) = NULL; @@ -132,7 +132,7 @@ void Bootstrapper::TearDown() { if (delete_these_arrays_on_tear_down_ != NULL) { int len = delete_these_arrays_on_tear_down_->length(); - ASSERT(len < 1000); // Don't use this mechanism for unbounded allocations. + DCHECK(len < 1000); // Don't use this mechanism for unbounded allocations. for (int i = 0; i < len; i++) { delete[] delete_these_arrays_on_tear_down_->at(i); delete_these_arrays_on_tear_down_->at(i) = NULL; @@ -148,8 +148,8 @@ void Bootstrapper::TearDown() { class Genesis BASE_EMBEDDED { public: Genesis(Isolate* isolate, - Handle<Object> global_object, - v8::Handle<v8::ObjectTemplate> global_template, + MaybeHandle<JSGlobalProxy> maybe_global_proxy, + v8::Handle<v8::ObjectTemplate> global_proxy_template, v8::ExtensionConfiguration* extensions); ~Genesis() { } @@ -167,7 +167,9 @@ class Genesis BASE_EMBEDDED { // Creates the empty function. Used for creating a context from scratch. Handle<JSFunction> CreateEmptyFunction(Isolate* isolate); // Creates the ThrowTypeError function. ECMA 5th Ed. 13.2.3 - Handle<JSFunction> GetThrowTypeErrorFunction(); + Handle<JSFunction> GetStrictPoisonFunction(); + // Poison for sloppy generator function arguments/callee. + Handle<JSFunction> GetGeneratorPoisonFunction(); void CreateStrictModeFunctionMaps(Handle<JSFunction> empty); @@ -181,26 +183,25 @@ class Genesis BASE_EMBEDDED { // we have to used the deserialized ones that are linked together with the // rest of the context snapshot. Handle<JSGlobalProxy> CreateNewGlobals( - v8::Handle<v8::ObjectTemplate> global_template, - Handle<Object> global_object, - Handle<GlobalObject>* global_proxy_out); + v8::Handle<v8::ObjectTemplate> global_proxy_template, + MaybeHandle<JSGlobalProxy> maybe_global_proxy, + Handle<GlobalObject>* global_object_out); // Hooks the given global proxy into the context. If the context was created // by deserialization then this will unhook the global proxy that was // deserialized, leaving the GC to pick it up. - void HookUpGlobalProxy(Handle<GlobalObject> inner_global, + void HookUpGlobalProxy(Handle<GlobalObject> global_object, Handle<JSGlobalProxy> global_proxy); - // Similarly, we want to use the inner global that has been created by the - // templates passed through the API. The inner global from the snapshot is - // detached from the other objects in the snapshot. - void HookUpInnerGlobal(Handle<GlobalObject> inner_global); + // Similarly, we want to use the global that has been created by the templates + // passed through the API. The global from the snapshot is detached from the + // other objects in the snapshot. + void HookUpGlobalObject(Handle<GlobalObject> global_object); // New context initialization. Used for creating a context from scratch. - void InitializeGlobal(Handle<GlobalObject> inner_global, + void InitializeGlobal(Handle<GlobalObject> global_object, Handle<JSFunction> empty_function); void InitializeExperimentalGlobal(); // Installs the contents of the native .js files on the global objects. 
// Used for creating a context from scratch. void InstallNativeFunctions(); - void InstallExperimentalBuiltinFunctionIds(); void InstallExperimentalNativeFunctions(); Handle<JSFunction> InstallInternalArray(Handle<JSBuiltinsObject> builtins, const char* name, @@ -251,7 +252,8 @@ class Genesis BASE_EMBEDDED { bool InstallJSBuiltins(Handle<JSBuiltinsObject> builtins); bool ConfigureApiObject(Handle<JSObject> object, Handle<ObjectTemplateInfo> object_template); - bool ConfigureGlobalObjects(v8::Handle<v8::ObjectTemplate> global_template); + bool ConfigureGlobalObjects( + v8::Handle<v8::ObjectTemplate> global_proxy_template); // Migrates all properties from the 'from' object to the 'to' // object and overrides the prototype in 'to' with the one from @@ -260,24 +262,32 @@ class Genesis BASE_EMBEDDED { void TransferNamedProperties(Handle<JSObject> from, Handle<JSObject> to); void TransferIndexedProperties(Handle<JSObject> from, Handle<JSObject> to); - enum PrototypePropertyMode { - DONT_ADD_PROTOTYPE, - ADD_READONLY_PROTOTYPE, - ADD_WRITEABLE_PROTOTYPE + enum FunctionMode { + // With prototype. + FUNCTION_WITH_WRITEABLE_PROTOTYPE, + FUNCTION_WITH_READONLY_PROTOTYPE, + // Without prototype. + FUNCTION_WITHOUT_PROTOTYPE, + BOUND_FUNCTION }; - Handle<Map> CreateFunctionMap(PrototypePropertyMode prototype_mode); + static bool IsFunctionModeWithPrototype(FunctionMode function_mode) { + return (function_mode == FUNCTION_WITH_WRITEABLE_PROTOTYPE || + function_mode == FUNCTION_WITH_READONLY_PROTOTYPE); + } + + Handle<Map> CreateFunctionMap(FunctionMode function_mode); void SetFunctionInstanceDescriptor(Handle<Map> map, - PrototypePropertyMode prototypeMode); + FunctionMode function_mode); void MakeFunctionInstancePrototypeWritable(); Handle<Map> CreateStrictFunctionMap( - PrototypePropertyMode prototype_mode, + FunctionMode function_mode, Handle<JSFunction> empty_function); void SetStrictFunctionInstanceDescriptor(Handle<Map> map, - PrototypePropertyMode propertyMode); + FunctionMode function_mode); static bool CompileBuiltin(Isolate* isolate, int index); static bool CompileExperimentalBuiltin(Isolate* isolate, int index); @@ -302,7 +312,8 @@ class Genesis BASE_EMBEDDED { // prototype, maps. 
Handle<Map> sloppy_function_map_writable_prototype_; Handle<Map> strict_function_map_writable_prototype_; - Handle<JSFunction> throw_type_error_function; + Handle<JSFunction> strict_poison_function; + Handle<JSFunction> generator_poison_function; BootstrapperActive active_; friend class Bootstrapper; @@ -316,11 +327,12 @@ void Bootstrapper::Iterate(ObjectVisitor* v) { Handle<Context> Bootstrapper::CreateEnvironment( - Handle<Object> global_object, - v8::Handle<v8::ObjectTemplate> global_template, + MaybeHandle<JSGlobalProxy> maybe_global_proxy, + v8::Handle<v8::ObjectTemplate> global_proxy_template, v8::ExtensionConfiguration* extensions) { HandleScope scope(isolate_); - Genesis genesis(isolate_, global_object, global_template, extensions); + Genesis genesis( + isolate_, maybe_global_proxy, global_proxy_template, extensions); Handle<Context> env = genesis.result(); if (env.is_null() || !InstallExtensions(env, extensions)) { return Handle<Context>(); @@ -331,10 +343,10 @@ Handle<Context> Bootstrapper::CreateEnvironment( static void SetObjectPrototype(Handle<JSObject> object, Handle<Object> proto) { // object.__proto__ = proto; - Handle<Map> old_to_map = Handle<Map>(object->map()); - Handle<Map> new_to_map = Map::Copy(old_to_map); - new_to_map->set_prototype(*proto); - object->set_map(*new_to_map); + Handle<Map> old_map = Handle<Map>(object->map()); + Handle<Map> new_map = Map::Copy(old_map); + new_map->set_prototype(*proto); + JSObject::MigrateToMap(object, new_map); } @@ -343,6 +355,7 @@ void Bootstrapper::DetachGlobal(Handle<Context> env) { Handle<JSGlobalProxy> global_proxy(JSGlobalProxy::cast(env->global_proxy())); global_proxy->set_native_context(*factory->null_value()); SetObjectPrototype(global_proxy, factory->null_value()); + global_proxy->map()->set_constructor(*factory->null_value()); } @@ -350,22 +363,17 @@ static Handle<JSFunction> InstallFunction(Handle<JSObject> target, const char* name, InstanceType type, int instance_size, - Handle<JSObject> prototype, - Builtins::Name call, - bool install_initial_map, - bool set_instance_class_name) { + MaybeHandle<JSObject> maybe_prototype, + Builtins::Name call) { Isolate* isolate = target->GetIsolate(); Factory* factory = isolate->factory(); Handle<String> internalized_name = factory->InternalizeUtf8String(name); Handle<Code> call_code = Handle<Code>(isolate->builtins()->builtin(call)); - Handle<JSFunction> function = prototype.is_null() - ? factory->NewFunction(internalized_name, call_code) - : factory->NewFunctionWithPrototype(internalized_name, - type, - instance_size, - prototype, - call_code, - install_initial_map); + Handle<JSObject> prototype; + Handle<JSFunction> function = maybe_prototype.ToHandle(&prototype) + ? 
factory->NewFunction(internalized_name, call_code, prototype, + type, instance_size) + : factory->NewFunctionWithoutPrototype(internalized_name, call_code); PropertyAttributes attributes; if (target->IsJSBuiltinsObject()) { attributes = @@ -373,9 +381,8 @@ static Handle<JSFunction> InstallFunction(Handle<JSObject> target, } else { attributes = DONT_ENUM; } - JSObject::SetLocalPropertyIgnoreAttributes( - target, internalized_name, function, attributes).Check(); - if (set_instance_class_name) { + JSObject::AddProperty(target, internalized_name, function, attributes); + if (target->IsJSGlobalObject()) { function->shared()->set_instance_class_name(*internalized_name); } function->shared()->set_native(true); @@ -384,8 +391,8 @@ static Handle<JSFunction> InstallFunction(Handle<JSObject> target, void Genesis::SetFunctionInstanceDescriptor( - Handle<Map> map, PrototypePropertyMode prototypeMode) { - int size = (prototypeMode == DONT_ADD_PROTOTYPE) ? 4 : 5; + Handle<Map> map, FunctionMode function_mode) { + int size = IsFunctionModeWithPrototype(function_mode) ? 5 : 4; Map::EnsureDescriptorSlack(map, size); PropertyAttributes attribs = static_cast<PropertyAttributes>( @@ -419,8 +426,8 @@ void Genesis::SetFunctionInstanceDescriptor( caller, attribs); map->AppendDescriptor(&d); } - if (prototypeMode != DONT_ADD_PROTOTYPE) { - if (prototypeMode == ADD_WRITEABLE_PROTOTYPE) { + if (IsFunctionModeWithPrototype(function_mode)) { + if (function_mode == FUNCTION_WITH_WRITEABLE_PROTOTYPE) { attribs = static_cast<PropertyAttributes>(attribs & ~READ_ONLY); } Handle<AccessorInfo> prototype = @@ -432,10 +439,10 @@ void Genesis::SetFunctionInstanceDescriptor( } -Handle<Map> Genesis::CreateFunctionMap(PrototypePropertyMode prototype_mode) { +Handle<Map> Genesis::CreateFunctionMap(FunctionMode function_mode) { Handle<Map> map = factory()->NewMap(JS_FUNCTION_TYPE, JSFunction::kSize); - SetFunctionInstanceDescriptor(map, prototype_mode); - map->set_function_with_prototype(prototype_mode != DONT_ADD_PROTOTYPE); + SetFunctionInstanceDescriptor(map, function_mode); + map->set_function_with_prototype(IsFunctionModeWithPrototype(function_mode)); return map; } @@ -447,32 +454,36 @@ Handle<JSFunction> Genesis::CreateEmptyFunction(Isolate* isolate) { // Functions with this map will not have a 'prototype' property, and // can not be used as constructors. Handle<Map> function_without_prototype_map = - CreateFunctionMap(DONT_ADD_PROTOTYPE); + CreateFunctionMap(FUNCTION_WITHOUT_PROTOTYPE); native_context()->set_sloppy_function_without_prototype_map( *function_without_prototype_map); // Allocate the function map. This map is temporary, used only for processing // of builtins. // Later the map is replaced with writable prototype map, allocated below. - Handle<Map> function_map = CreateFunctionMap(ADD_READONLY_PROTOTYPE); + Handle<Map> function_map = + CreateFunctionMap(FUNCTION_WITH_READONLY_PROTOTYPE); native_context()->set_sloppy_function_map(*function_map); + native_context()->set_sloppy_function_with_readonly_prototype_map( + *function_map); // The final map for functions. Writeable prototype. // This map is installed in MakeFunctionInstancePrototypeWritable. 
   sloppy_function_map_writable_prototype_ =
-      CreateFunctionMap(ADD_WRITEABLE_PROTOTYPE);
+      CreateFunctionMap(FUNCTION_WITH_WRITEABLE_PROTOTYPE);

   Factory* factory = isolate->factory();

   Handle<String> object_name = factory->Object_string();

   {  // --- O b j e c t ---
-    Handle<JSFunction> object_fun = factory->NewFunctionWithPrototype(
-        object_name, factory->null_value());
+    Handle<JSFunction> object_fun = factory->NewFunction(object_name);
     Handle<Map> object_function_map =
         factory->NewMap(JS_OBJECT_TYPE, JSObject::kHeaderSize);
-    object_fun->set_initial_map(*object_function_map);
-    object_function_map->set_constructor(*object_fun);
+    JSFunction::SetInitialMap(object_fun, object_function_map,
+                              isolate->factory()->null_value());
+    object_function_map->set_unused_property_fields(
+        JSObject::kInitialGlobalObjectUnusedPropertiesCount);

     native_context()->set_object_function(*object_fun);

@@ -480,6 +491,9 @@ Handle<JSFunction> Genesis::CreateEmptyFunction(Isolate* isolate) {
     Handle<JSObject> prototype = factory->NewJSObject(
         isolate->object_function(),
         TENURED);
+    Handle<Map> map = Map::Copy(handle(prototype->map()));
+    map->set_is_prototype_map(true);
+    prototype->set_map(*map);

     native_context()->set_initial_object_prototype(*prototype);
     // For bootstrapping set the array prototype to be the same as the object
@@ -494,7 +508,17 @@ Handle<JSFunction> Genesis::CreateEmptyFunction(Isolate* isolate) {
   Handle<String> empty_string =
       factory->InternalizeOneByteString(STATIC_ASCII_VECTOR("Empty"));
   Handle<Code> code(isolate->builtins()->builtin(Builtins::kEmptyFunction));
-  Handle<JSFunction> empty_function = factory->NewFunction(empty_string, code);
+  Handle<JSFunction> empty_function = factory->NewFunctionWithoutPrototype(
+      empty_string, code);
+
+  // Allocate the function map first and then patch the prototype later
+  Handle<Map> empty_function_map =
+      CreateFunctionMap(FUNCTION_WITHOUT_PROTOTYPE);
+  DCHECK(!empty_function_map->is_dictionary_map());
+  empty_function_map->set_prototype(
+      native_context()->object_function()->prototype());
+  empty_function_map->set_is_prototype_map(true);
+  empty_function->set_map(*empty_function_map);

   // --- E m p t y ---
   Handle<String> source = factory->NewStringFromStaticAscii("() {}");
@@ -510,19 +534,13 @@ Handle<JSFunction> Genesis::CreateEmptyFunction(Isolate* isolate) {
   native_context()->sloppy_function_without_prototype_map()->
       set_prototype(*empty_function);
   sloppy_function_map_writable_prototype_->set_prototype(*empty_function);
-
-  // Allocate the function map first and then patch the prototype later
-  Handle<Map> empty_function_map = CreateFunctionMap(DONT_ADD_PROTOTYPE);
-  empty_function_map->set_prototype(
-      native_context()->object_function()->prototype());
-  empty_function->set_map(*empty_function_map);
   return empty_function;
 }


 void Genesis::SetStrictFunctionInstanceDescriptor(
-    Handle<Map> map, PrototypePropertyMode prototypeMode) {
-  int size = (prototypeMode == DONT_ADD_PROTOTYPE) ? 4 : 5;
+    Handle<Map> map, FunctionMode function_mode) {
+  int size = IsFunctionModeWithPrototype(function_mode) ? 5 : 4;
   Map::EnsureDescriptorSlack(map, size);

   Handle<AccessorPair> arguments(factory()->NewAccessorPair());
@@ -532,9 +550,17 @@ void Genesis::SetStrictFunctionInstanceDescriptor(
   PropertyAttributes ro_attribs =
       static_cast<PropertyAttributes>(DONT_ENUM | DONT_DELETE | READ_ONLY);

-  Handle<AccessorInfo> length =
-      Accessors::FunctionLengthInfo(isolate(), ro_attribs);
-  {  // Add length.
+  // Add length.
+  if (function_mode == BOUND_FUNCTION) {
+    Handle<String> length_string = isolate()->factory()->length_string();
+    FieldDescriptor d(length_string, 0, ro_attribs, Representation::Tagged());
+    map->AppendDescriptor(&d);
+  } else {
+    DCHECK(function_mode == FUNCTION_WITH_WRITEABLE_PROTOTYPE ||
+           function_mode == FUNCTION_WITH_READONLY_PROTOTYPE ||
+           function_mode == FUNCTION_WITHOUT_PROTOTYPE);
+    Handle<AccessorInfo> length =
+        Accessors::FunctionLengthInfo(isolate(), ro_attribs);
     CallbacksDescriptor d(Handle<Name>(Name::cast(length->name())),
                           length, ro_attribs);
     map->AppendDescriptor(&d);
@@ -555,10 +581,11 @@ void Genesis::SetStrictFunctionInstanceDescriptor(
     CallbacksDescriptor d(factory()->caller_string(), caller, rw_attribs);
     map->AppendDescriptor(&d);
   }
-  if (prototypeMode != DONT_ADD_PROTOTYPE) {
+  if (IsFunctionModeWithPrototype(function_mode)) {
     // Add prototype.
     PropertyAttributes attribs =
-        prototypeMode == ADD_WRITEABLE_PROTOTYPE ? rw_attribs : ro_attribs;
+        function_mode == FUNCTION_WITH_WRITEABLE_PROTOTYPE ? rw_attribs
                                                            : ro_attribs;
     Handle<AccessorInfo> prototype =
         Accessors::FunctionPrototypeInfo(isolate(), attribs);
     CallbacksDescriptor d(Handle<Name>(Name::cast(prototype->name())),
@@ -569,28 +596,45 @@ void Genesis::SetStrictFunctionInstanceDescriptor(


 // ECMAScript 5th Edition, 13.2.3
-Handle<JSFunction> Genesis::GetThrowTypeErrorFunction() {
-  if (throw_type_error_function.is_null()) {
+Handle<JSFunction> Genesis::GetStrictPoisonFunction() {
+  if (strict_poison_function.is_null()) {
     Handle<String> name = factory()->InternalizeOneByteString(
         STATIC_ASCII_VECTOR("ThrowTypeError"));
     Handle<Code> code(isolate()->builtins()->builtin(
         Builtins::kStrictModePoisonPill));
-    throw_type_error_function = factory()->NewFunction(name, code);
-    throw_type_error_function->set_map(native_context()->sloppy_function_map());
-    throw_type_error_function->shared()->DontAdaptArguments();
+    strict_poison_function = factory()->NewFunctionWithoutPrototype(name, code);
+    strict_poison_function->set_map(native_context()->sloppy_function_map());
+    strict_poison_function->shared()->DontAdaptArguments();

-    JSObject::PreventExtensions(throw_type_error_function).Assert();
+    JSObject::PreventExtensions(strict_poison_function).Assert();
   }
-  return throw_type_error_function;
+  return strict_poison_function;
+}
+
+
+Handle<JSFunction> Genesis::GetGeneratorPoisonFunction() {
+  if (generator_poison_function.is_null()) {
+    Handle<String> name = factory()->InternalizeOneByteString(
+        STATIC_ASCII_VECTOR("ThrowTypeError"));
+    Handle<Code> code(isolate()->builtins()->builtin(
+        Builtins::kGeneratorPoisonPill));
+    generator_poison_function = factory()->NewFunctionWithoutPrototype(
+        name, code);
+    generator_poison_function->set_map(native_context()->sloppy_function_map());
+    generator_poison_function->shared()->DontAdaptArguments();
+
+    JSObject::PreventExtensions(generator_poison_function).Assert();
+  }
+  return generator_poison_function;
 }


 Handle<Map> Genesis::CreateStrictFunctionMap(
-    PrototypePropertyMode prototype_mode,
+    FunctionMode function_mode,
     Handle<JSFunction> empty_function) {
   Handle<Map> map = factory()->NewMap(JS_FUNCTION_TYPE, JSFunction::kSize);
-  SetStrictFunctionInstanceDescriptor(map, prototype_mode);
-  map->set_function_with_prototype(prototype_mode != DONT_ADD_PROTOTYPE);
+  SetStrictFunctionInstanceDescriptor(map, function_mode);
+  map->set_function_with_prototype(IsFunctionModeWithPrototype(function_mode));
   map->set_prototype(*empty_function);
   return map;
 }
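(Editor's sketch, not part of the upstream patch: the single
throw_type_error_function becomes two lazily created poison functions, built
from Builtins::kStrictModePoisonPill and Builtins::kGeneratorPoisonPill
respectively, so strict functions and generators can throw distinct TypeError
messages. Wiring one of them into an accessor pair follows the pattern used
further down in this diff:)

    // Both getter and setter funnel into the same throwing builtin.
    Handle<JSFunction> poison = GetStrictPoisonFunction();
    callee->set_getter(*poison);
    callee->set_setter(*poison);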
@@ -599,7 +643,7 @@ Handle<Map> Genesis::CreateStrictFunctionMap(
 void Genesis::CreateStrictModeFunctionMaps(Handle<JSFunction> empty) {
   // Allocate map for the prototype-less strict mode instances.
   Handle<Map> strict_function_without_prototype_map =
-      CreateStrictFunctionMap(DONT_ADD_PROTOTYPE, empty);
+      CreateStrictFunctionMap(FUNCTION_WITHOUT_PROTOTYPE, empty);
   native_context()->set_strict_function_without_prototype_map(
       *strict_function_without_prototype_map);

@@ -607,18 +651,23 @@ void Genesis::CreateStrictModeFunctionMaps(Handle<JSFunction> empty) {
   // only for processing of builtins.
   // Later the map is replaced with writable prototype map, allocated below.
   Handle<Map> strict_function_map =
-      CreateStrictFunctionMap(ADD_READONLY_PROTOTYPE, empty);
+      CreateStrictFunctionMap(FUNCTION_WITH_READONLY_PROTOTYPE, empty);
   native_context()->set_strict_function_map(*strict_function_map);

   // The final map for the strict mode functions. Writeable prototype.
   // This map is installed in MakeFunctionInstancePrototypeWritable.
   strict_function_map_writable_prototype_ =
-      CreateStrictFunctionMap(ADD_WRITEABLE_PROTOTYPE, empty);
+      CreateStrictFunctionMap(FUNCTION_WITH_WRITEABLE_PROTOTYPE, empty);
+  // Special map for bound functions.
+  Handle<Map> bound_function_map =
+      CreateStrictFunctionMap(BOUND_FUNCTION, empty);
+  native_context()->set_bound_function_map(*bound_function_map);

   // Complete the callbacks.
   PoisonArgumentsAndCaller(strict_function_without_prototype_map);
   PoisonArgumentsAndCaller(strict_function_map);
   PoisonArgumentsAndCaller(strict_function_map_writable_prototype_);
+  PoisonArgumentsAndCaller(bound_function_map);
 }


@@ -633,23 +682,34 @@ static void SetAccessors(Handle<Map> map,
 }


+static void ReplaceAccessors(Handle<Map> map,
+                             Handle<String> name,
+                             PropertyAttributes attributes,
+                             Handle<AccessorPair> accessor_pair) {
+  DescriptorArray* descriptors = map->instance_descriptors();
+  int idx = descriptors->SearchWithCache(*name, *map);
+  CallbacksDescriptor descriptor(name, accessor_pair, attributes);
+  descriptors->Replace(idx, &descriptor);
+}
+
+
 void Genesis::PoisonArgumentsAndCaller(Handle<Map> map) {
-  SetAccessors(map, factory()->arguments_string(), GetThrowTypeErrorFunction());
-  SetAccessors(map, factory()->caller_string(), GetThrowTypeErrorFunction());
+  SetAccessors(map, factory()->arguments_string(), GetStrictPoisonFunction());
+  SetAccessors(map, factory()->caller_string(), GetStrictPoisonFunction());
 }


 static void AddToWeakNativeContextList(Context* context) {
-  ASSERT(context->IsNativeContext());
+  DCHECK(context->IsNativeContext());
   Heap* heap = context->GetIsolate()->heap();
 #ifdef DEBUG
   { // NOLINT
-    ASSERT(context->get(Context::NEXT_CONTEXT_LINK)->IsUndefined());
+    DCHECK(context->get(Context::NEXT_CONTEXT_LINK)->IsUndefined());
     // Check that context is not in the list yet.
     for (Object* current = heap->native_contexts_list();
         !current->IsUndefined();
        current = Context::cast(current)->get(Context::NEXT_CONTEXT_LINK)) {
-      ASSERT(current != context);
+      DCHECK(current != context);
    }
  }
 #endif
@@ -676,89 +736,89 @@ void Genesis::CreateRoots() {


 Handle<JSGlobalProxy> Genesis::CreateNewGlobals(
-    v8::Handle<v8::ObjectTemplate> global_template,
-    Handle<Object> global_object,
-    Handle<GlobalObject>* inner_global_out) {
-  // The argument global_template aka data is an ObjectTemplateInfo.
+    v8::Handle<v8::ObjectTemplate> global_proxy_template,
+    MaybeHandle<JSGlobalProxy> maybe_global_proxy,
+    Handle<GlobalObject>* global_object_out) {
+  // The argument global_proxy_template aka data is an ObjectTemplateInfo.
   // It has a constructor pointer that points at global_constructor which is a
   // FunctionTemplateInfo.
-  // The global_constructor is used to create or reinitialize the global_proxy.
-  // The global_constructor also has a prototype_template pointer that points at
-  // js_global_template which is an ObjectTemplateInfo.
+  // The global_proxy_constructor is used to create or reinitialize the
+  // global_proxy. The global_proxy_constructor also has a prototype_template
+  // pointer that points at js_global_object_template which is an
+  // ObjectTemplateInfo.
   // That in turn has a constructor pointer that points at
-  // js_global_constructor which is a FunctionTemplateInfo.
-  // js_global_constructor is used to make js_global_function
-  // js_global_function is used to make the new inner_global.
+  // js_global_object_constructor which is a FunctionTemplateInfo.
+  // js_global_object_constructor is used to make js_global_object_function
+  // js_global_object_function is used to make the new global_object.
   //
   // --- G l o b a l ---
-  // Step 1: Create a fresh inner JSGlobalObject.
-  Handle<JSFunction> js_global_function;
-  Handle<ObjectTemplateInfo> js_global_template;
-  if (!global_template.IsEmpty()) {
-    // Get prototype template of the global_template.
+  // Step 1: Create a fresh JSGlobalObject.
+  Handle<JSFunction> js_global_object_function;
+  Handle<ObjectTemplateInfo> js_global_object_template;
+  if (!global_proxy_template.IsEmpty()) {
+    // Get prototype template of the global_proxy_template.
     Handle<ObjectTemplateInfo> data =
-        v8::Utils::OpenHandle(*global_template);
+        v8::Utils::OpenHandle(*global_proxy_template);
     Handle<FunctionTemplateInfo> global_constructor =
         Handle<FunctionTemplateInfo>(
            FunctionTemplateInfo::cast(data->constructor()));
     Handle<Object> proto_template(global_constructor->prototype_template(),
                                   isolate());
     if (!proto_template->IsUndefined()) {
-      js_global_template =
+      js_global_object_template =
          Handle<ObjectTemplateInfo>::cast(proto_template);
     }
   }

-  if (js_global_template.is_null()) {
+  if (js_global_object_template.is_null()) {
     Handle<String> name = Handle<String>(heap()->empty_string());
     Handle<Code> code = Handle<Code>(isolate()->builtins()->builtin(
        Builtins::kIllegal));
-    js_global_function =
-        factory()->NewFunction(name, JS_GLOBAL_OBJECT_TYPE,
-                               JSGlobalObject::kSize, code, true);
-    // Change the constructor property of the prototype of the
-    // hidden global function to refer to the Object function.
     Handle<JSObject> prototype =
-        Handle<JSObject>(
-            JSObject::cast(js_global_function->instance_prototype()));
-    JSObject::SetLocalPropertyIgnoreAttributes(
-        prototype, factory()->constructor_string(),
-        isolate()->object_function(), NONE).Check();
+        factory()->NewFunctionPrototype(isolate()->object_function());
+    js_global_object_function = factory()->NewFunction(
+        name, code, prototype, JS_GLOBAL_OBJECT_TYPE, JSGlobalObject::kSize);
+#ifdef DEBUG
+    LookupIterator it(prototype, factory()->constructor_string(),
+                      LookupIterator::CHECK_OWN_REAL);
+    Handle<Object> value = JSReceiver::GetProperty(&it).ToHandleChecked();
+    DCHECK(it.IsFound());
+    DCHECK_EQ(*isolate()->object_function(), *value);
+#endif
   } else {
-    Handle<FunctionTemplateInfo> js_global_constructor(
-        FunctionTemplateInfo::cast(js_global_template->constructor()));
-    js_global_function =
-        factory()->CreateApiFunction(js_global_constructor,
+    Handle<FunctionTemplateInfo> js_global_object_constructor(
+        FunctionTemplateInfo::cast(js_global_object_template->constructor()));
+    js_global_object_function =
+        factory()->CreateApiFunction(js_global_object_constructor,
                                      factory()->the_hole_value(),
-                                     factory()->InnerGlobalObject);
+                                     factory()->GlobalObjectType);
   }

-  js_global_function->initial_map()->set_is_hidden_prototype();
-  js_global_function->initial_map()->set_dictionary_map(true);
-  Handle<GlobalObject> inner_global =
-      factory()->NewGlobalObject(js_global_function);
-  if (inner_global_out != NULL) {
-    *inner_global_out = inner_global;
+  js_global_object_function->initial_map()->set_is_hidden_prototype();
+  js_global_object_function->initial_map()->set_dictionary_map(true);
+  Handle<GlobalObject> global_object =
+      factory()->NewGlobalObject(js_global_object_function);
+  if (global_object_out != NULL) {
+    *global_object_out = global_object;
   }

   // Step 2: create or re-initialize the global proxy object.
   Handle<JSFunction> global_proxy_function;
-  if (global_template.IsEmpty()) {
+  if (global_proxy_template.IsEmpty()) {
     Handle<String> name = Handle<String>(heap()->empty_string());
     Handle<Code> code = Handle<Code>(isolate()->builtins()->builtin(
         Builtins::kIllegal));
-    global_proxy_function =
-        factory()->NewFunction(name, JS_GLOBAL_PROXY_TYPE,
-                               JSGlobalProxy::kSize, code, true);
+    global_proxy_function = factory()->NewFunction(
+        name, code, JS_GLOBAL_PROXY_TYPE, JSGlobalProxy::kSize);
   } else {
     Handle<ObjectTemplateInfo> data =
-        v8::Utils::OpenHandle(*global_template);
+        v8::Utils::OpenHandle(*global_proxy_template);
     Handle<FunctionTemplateInfo> global_constructor(
            FunctionTemplateInfo::cast(data->constructor()));
     global_proxy_function =
         factory()->CreateApiFunction(global_constructor,
                                      factory()->the_hole_value(),
-                                     factory()->OuterGlobalObject);
+                                     factory()->GlobalProxyType);
   }

   Handle<String> global_name = factory()->InternalizeOneByteString(
@@ -770,9 +830,7 @@ Handle<JSGlobalProxy> Genesis::CreateNewGlobals(

   // Return the global proxy.
   Handle<JSGlobalProxy> global_proxy;
-  if (global_object.location() != NULL) {
-    ASSERT(global_object->IsJSGlobalProxy());
-    global_proxy = Handle<JSGlobalProxy>::cast(global_object);
+  if (maybe_global_proxy.ToHandle(&global_proxy)) {
     factory()->ReinitializeJSGlobalProxy(global_proxy, global_proxy_function);
   } else {
     global_proxy = Handle<JSGlobalProxy>::cast(
@@ -783,75 +841,73 @@ Handle<JSGlobalProxy> Genesis::CreateNewGlobals(
 }


-void Genesis::HookUpGlobalProxy(Handle<GlobalObject> inner_global,
+void Genesis::HookUpGlobalProxy(Handle<GlobalObject> global_object,
                                 Handle<JSGlobalProxy> global_proxy) {
   // Set the native context for the global object.
-  inner_global->set_native_context(*native_context());
-  inner_global->set_global_context(*native_context());
-  inner_global->set_global_receiver(*global_proxy);
+  global_object->set_native_context(*native_context());
+  global_object->set_global_context(*native_context());
+  global_object->set_global_proxy(*global_proxy);
   global_proxy->set_native_context(*native_context());
   native_context()->set_global_proxy(*global_proxy);
 }


-void Genesis::HookUpInnerGlobal(Handle<GlobalObject> inner_global) {
-  Handle<GlobalObject> inner_global_from_snapshot(
+void Genesis::HookUpGlobalObject(Handle<GlobalObject> global_object) {
+  Handle<GlobalObject> global_object_from_snapshot(
       GlobalObject::cast(native_context()->extension()));
   Handle<JSBuiltinsObject> builtins_global(native_context()->builtins());
-  native_context()->set_extension(*inner_global);
-  native_context()->set_global_object(*inner_global);
-  native_context()->set_security_token(*inner_global);
+  native_context()->set_extension(*global_object);
+  native_context()->set_global_object(*global_object);
+  native_context()->set_security_token(*global_object);
   static const PropertyAttributes attributes =
       static_cast<PropertyAttributes>(READ_ONLY | DONT_DELETE);
-  Runtime::ForceSetObjectProperty(builtins_global,
-                                  factory()->InternalizeOneByteString(
-                                      STATIC_ASCII_VECTOR("global")),
-                                  inner_global,
-                                  attributes).Assert();
+  Runtime::DefineObjectProperty(builtins_global,
+                                factory()->InternalizeOneByteString(
+                                    STATIC_ASCII_VECTOR("global")),
+                                global_object,
+                                attributes).Assert();
   // Set up the reference from the global object to the builtins object.
-  JSGlobalObject::cast(*inner_global)->set_builtins(*builtins_global);
-  TransferNamedProperties(inner_global_from_snapshot, inner_global);
-  TransferIndexedProperties(inner_global_from_snapshot, inner_global);
+  JSGlobalObject::cast(*global_object)->set_builtins(*builtins_global);
+  TransferNamedProperties(global_object_from_snapshot, global_object);
+  TransferIndexedProperties(global_object_from_snapshot, global_object);
 }


 // This is only called if we are not using snapshots.  The equivalent
-// work in the snapshot case is done in HookUpInnerGlobal.
-void Genesis::InitializeGlobal(Handle<GlobalObject> inner_global,
+// work in the snapshot case is done in HookUpGlobalObject.
+void Genesis::InitializeGlobal(Handle<GlobalObject> global_object,
                                Handle<JSFunction> empty_function) {
   // --- N a t i v e   C o n t e x t ---
   // Use the empty function as closure (no scope info).
   native_context()->set_closure(*empty_function);
   native_context()->set_previous(NULL);
   // Set extension and global object.
-  native_context()->set_extension(*inner_global);
-  native_context()->set_global_object(*inner_global);
-  // Security setup: Set the security token of the global object to
-  // its the inner global. This makes the security check between two
-  // different contexts fail by default even in case of global
-  // object reinitialization.
-  native_context()->set_security_token(*inner_global);
-
-  Isolate* isolate = inner_global->GetIsolate();
+  native_context()->set_extension(*global_object);
+  native_context()->set_global_object(*global_object);
+  // Security setup: Set the security token of the native context to the global
+  // object. This makes the security check between two different contexts fail
+  // by default even in case of global object reinitialization.
+  native_context()->set_security_token(*global_object);
+
+  Isolate* isolate = global_object->GetIsolate();
   Factory* factory = isolate->factory();
   Heap* heap = isolate->heap();

   Handle<String> object_name = factory->Object_string();
-  JSObject::SetLocalPropertyIgnoreAttributes(
-      inner_global, object_name,
-      isolate->object_function(), DONT_ENUM).Check();
+  JSObject::AddProperty(
+      global_object, object_name, isolate->object_function(), DONT_ENUM);

-  Handle<JSObject> global = Handle<JSObject>(native_context()->global_object());
+  Handle<JSObject> global(native_context()->global_object());

   // Install global Function object
   InstallFunction(global, "Function", JS_FUNCTION_TYPE, JSFunction::kSize,
-                  empty_function, Builtins::kIllegal, true, true);
+                  empty_function, Builtins::kIllegal);

   {  // --- A r r a y ---
     Handle<JSFunction> array_function =
         InstallFunction(global, "Array", JS_ARRAY_TYPE, JSArray::kSize,
                         isolate->initial_object_prototype(),
-                        Builtins::kArrayCode, true, true);
+                        Builtins::kArrayCode);
     array_function->shared()->DontAdaptArguments();
     array_function->shared()->set_function_data(Smi::FromInt(kArrayCode));

@@ -863,7 +919,7 @@ void Genesis::InitializeGlobal(Handle<GlobalObject> inner_global,

     // This assert protects an optimization in
     // HGraphBuilder::JSArrayBuilder::EmitMapCode()
-    ASSERT(initial_map->elements_kind() == GetInitialFastElementsKind());
+    DCHECK(initial_map->elements_kind() == GetInitialFastElementsKind());

     Map::EnsureDescriptorSlack(initial_map, 1);

     PropertyAttributes attribs = static_cast<PropertyAttributes>(
@@ -895,7 +951,7 @@ void Genesis::InitializeGlobal(Handle<GlobalObject> inner_global,
     Handle<JSFunction> number_fun =
         InstallFunction(global, "Number", JS_VALUE_TYPE, JSValue::kSize,
                         isolate->initial_object_prototype(),
-                        Builtins::kIllegal, true, true);
+                        Builtins::kIllegal);
     native_context()->set_number_function(*number_fun);
   }

@@ -903,7 +959,7 @@ void Genesis::InitializeGlobal(Handle<GlobalObject> inner_global,
     Handle<JSFunction> boolean_fun =
         InstallFunction(global, "Boolean", JS_VALUE_TYPE, JSValue::kSize,
                         isolate->initial_object_prototype(),
-                        Builtins::kIllegal, true, true);
+                        Builtins::kIllegal);
     native_context()->set_boolean_function(*boolean_fun);
   }

@@ -911,7 +967,7 @@ void Genesis::InitializeGlobal(Handle<GlobalObject> inner_global,
     Handle<JSFunction> string_fun =
         InstallFunction(global, "String", JS_VALUE_TYPE, JSValue::kSize,
                         isolate->initial_object_prototype(),
-                        Builtins::kIllegal, true, true);
+                        Builtins::kIllegal);
     string_fun->shared()->set_construct_stub(
         isolate->builtins()->builtin(Builtins::kStringConstructCode));
     native_context()->set_string_function(*string_fun);
@@ -931,12 +987,20 @@ void Genesis::InitializeGlobal(Handle<GlobalObject> inner_global,
     }
   }

+  {
+    // --- S y m b o l ---
+    Handle<JSFunction> symbol_fun = InstallFunction(
+        global, "Symbol", JS_VALUE_TYPE, JSValue::kSize,
+        isolate->initial_object_prototype(), Builtins::kIllegal);
+    native_context()->set_symbol_function(*symbol_fun);
+  }
+
   {  // --- D a t e ---
     // Builtin functions for Date.prototype.
     Handle<JSFunction> date_fun =
         InstallFunction(global, "Date", JS_DATE_TYPE, JSDate::kSize,
                         isolate->initial_object_prototype(),
-                        Builtins::kIllegal, true, true);
+                        Builtins::kIllegal);

     native_context()->set_date_function(*date_fun);
   }

@@ -947,13 +1011,13 @@ void Genesis::InitializeGlobal(Handle<GlobalObject> inner_global,
     Handle<JSFunction> regexp_fun =
         InstallFunction(global, "RegExp", JS_REGEXP_TYPE, JSRegExp::kSize,
                         isolate->initial_object_prototype(),
-                        Builtins::kIllegal, true, true);
+                        Builtins::kIllegal);
     native_context()->set_regexp_function(*regexp_fun);

-    ASSERT(regexp_fun->has_initial_map());
+    DCHECK(regexp_fun->has_initial_map());
     Handle<Map> initial_map(regexp_fun->initial_map());

-    ASSERT_EQ(0, initial_map->inobject_properties());
+    DCHECK_EQ(0, initial_map->inobject_properties());

     PropertyAttributes final =
         static_cast<PropertyAttributes>(DONT_ENUM | DONT_DELETE | READ_ONLY);
@@ -1024,6 +1088,7 @@ void Genesis::InitializeGlobal(Handle<GlobalObject> inner_global,
     proto->InObjectPropertyAtPut(JSRegExp::kLastIndexFieldIndex,
                                  Smi::FromInt(0),
                                  SKIP_WRITE_BARRIER);  // It's a Smi.
+    proto_map->set_is_prototype_map(true);
     initial_map->set_prototype(*proto);
     factory->SetRegExpIrregexpData(Handle<JSRegExp>::cast(proto),
                                    JSRegExp::IRREGEXP, factory->empty_string(),
@@ -1032,29 +1097,27 @@ void Genesis::InitializeGlobal(Handle<GlobalObject> inner_global,

   {  // -- J S O N
     Handle<String> name = factory->InternalizeUtf8String("JSON");
-    Handle<JSFunction> cons = factory->NewFunctionWithPrototype(
-        name, factory->the_hole_value());
+    Handle<JSFunction> cons = factory->NewFunction(name);
     JSFunction::SetInstancePrototype(cons,
         Handle<Object>(native_context()->initial_object_prototype(), isolate));
     cons->SetInstanceClassName(*name);
     Handle<JSObject> json_object = factory->NewJSObject(cons, TENURED);
-    ASSERT(json_object->IsJSObject());
-    JSObject::SetLocalPropertyIgnoreAttributes(
-        global, name, json_object, DONT_ENUM).Check();
+    DCHECK(json_object->IsJSObject());
+    JSObject::AddProperty(global, name, json_object, DONT_ENUM);
     native_context()->set_json_object(*json_object);
   }

-  { // -- A r r a y B u f f e r
+  {  // -- A r r a y B u f f e r
     Handle<JSFunction> array_buffer_fun =
         InstallFunction(
             global, "ArrayBuffer", JS_ARRAY_BUFFER_TYPE,
             JSArrayBuffer::kSizeWithInternalFields,
             isolate->initial_object_prototype(),
-            Builtins::kIllegal, true, true);
+            Builtins::kIllegal);
     native_context()->set_array_buffer_fun(*array_buffer_fun);
   }

-  { // -- T y p e d A r r a y s
+  {  // -- T y p e d A r r a y s
 #define INSTALL_TYPED_ARRAY(Type, type, TYPE, ctype, size)                    \
     {                                                                         \
       Handle<JSFunction> fun;                                                 \
@@ -1074,99 +1137,102 @@ void Genesis::InitializeGlobal(Handle<GlobalObject> inner_global,
         global, "DataView", JS_DATA_VIEW_TYPE,
         JSDataView::kSizeWithInternalFields,
         isolate->initial_object_prototype(),
-        Builtins::kIllegal, true, true);
+        Builtins::kIllegal);
     native_context()->set_data_view_fun(*data_view_fun);
   }

-  { // -- W e a k M a p
-    InstallFunction(global, "WeakMap", JS_WEAK_MAP_TYPE, JSWeakMap::kSize,
-                    isolate->initial_object_prototype(),
-                    Builtins::kIllegal, true, true);
-  }
+  // -- M a p
+  InstallFunction(global, "Map", JS_MAP_TYPE, JSMap::kSize,
+                  isolate->initial_object_prototype(), Builtins::kIllegal);

-  { // -- W e a k S e t
-    InstallFunction(global, "WeakSet", JS_WEAK_SET_TYPE, JSWeakSet::kSize,
-                    isolate->initial_object_prototype(),
-                    Builtins::kIllegal, true, true);
+  // -- S e t
+  InstallFunction(global, "Set", JS_SET_TYPE, JSSet::kSize,
+                  isolate->initial_object_prototype(), Builtins::kIllegal);
+
+  { // Set up the iterator result object
+    STATIC_ASSERT(JSGeneratorObject::kResultPropertyCount == 2);
+    Handle<JSFunction> object_function(native_context()->object_function());
+    DCHECK(object_function->initial_map()->inobject_properties() == 0);
+    Handle<Map> iterator_result_map =
+        Map::Create(object_function, JSGeneratorObject::kResultPropertyCount);
+    DCHECK(iterator_result_map->inobject_properties() ==
+           JSGeneratorObject::kResultPropertyCount);
+    Map::EnsureDescriptorSlack(iterator_result_map,
+                               JSGeneratorObject::kResultPropertyCount);
+
+    FieldDescriptor value_descr(factory->value_string(),
+                                JSGeneratorObject::kResultValuePropertyIndex,
+                                NONE, Representation::Tagged());
+    iterator_result_map->AppendDescriptor(&value_descr);
+
+    FieldDescriptor done_descr(factory->done_string(),
+                               JSGeneratorObject::kResultDonePropertyIndex,
+                               NONE, Representation::Tagged());
+    iterator_result_map->AppendDescriptor(&done_descr);
+
+    iterator_result_map->set_unused_property_fields(0);
+    DCHECK_EQ(JSGeneratorObject::kResultSize,
+              iterator_result_map->instance_size());
+    native_context()->set_iterator_result_map(*iterator_result_map);
+  }
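(Editor's note, not part of the upstream patch: this block moves out of the
experimental section because Map/Set and generators are now installed
unconditionally. The map pre-allocates exactly two tagged in-object fields, so
every iterator result object is born with a fixed shape and never needs a map
transition afterwards.)

    // In JS terms, objects allocated from iterator_result_map look like:
    //   { value: ..., done: ... }   // the pair returned by generator next()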

-  { // --- arguments_boilerplate_
+  // -- W e a k M a p
+  InstallFunction(global, "WeakMap", JS_WEAK_MAP_TYPE, JSWeakMap::kSize,
+                  isolate->initial_object_prototype(), Builtins::kIllegal);
+  // -- W e a k S e t
+  InstallFunction(global, "WeakSet", JS_WEAK_SET_TYPE, JSWeakSet::kSize,
+                  isolate->initial_object_prototype(), Builtins::kIllegal);
+
+  { // --- sloppy arguments map
     // Make sure we can recognize argument objects at runtime.
     // This is done by introducing an anonymous function with
     // class_name equals 'Arguments'.
     Handle<String> arguments_string = factory->InternalizeOneByteString(
         STATIC_ASCII_VECTOR("Arguments"));
     Handle<Code> code(isolate->builtins()->builtin(Builtins::kIllegal));
-    Handle<JSObject> prototype(
-        JSObject::cast(native_context()->object_function()->prototype()));
-
-    Handle<JSFunction> function =
-        factory->NewFunctionWithPrototype(arguments_string,
-                                          JS_OBJECT_TYPE,
-                                          JSObject::kHeaderSize,
-                                          prototype,
-                                          code,
-                                          false);
-    ASSERT(!function->has_initial_map());
+    Handle<JSFunction> function = factory->NewFunctionWithoutPrototype(
+        arguments_string, code);
     function->shared()->set_instance_class_name(*arguments_string);
-    function->shared()->set_expected_nof_properties(2);
-    Handle<JSObject> result = factory->NewJSObject(function);

-    native_context()->set_sloppy_arguments_boilerplate(*result);
-    // Note: length must be added as the first property and
-    // callee must be added as the second property.
-    JSObject::SetLocalPropertyIgnoreAttributes(
-        result, factory->length_string(),
-        factory->undefined_value(), DONT_ENUM,
-        Object::FORCE_TAGGED, FORCE_FIELD).Check();
-    JSObject::SetLocalPropertyIgnoreAttributes(
-        result, factory->callee_string(),
-        factory->undefined_value(), DONT_ENUM,
-        Object::FORCE_TAGGED, FORCE_FIELD).Check();
-#ifdef DEBUG
-    LookupResult lookup(isolate);
-    result->LocalLookup(factory->callee_string(), &lookup);
-    ASSERT(lookup.IsField());
-    ASSERT(lookup.GetFieldIndex().field_index() == Heap::kArgumentsCalleeIndex);
+    Handle<Map> map =
+        factory->NewMap(JS_OBJECT_TYPE, Heap::kSloppyArgumentsObjectSize);
+    // Create the descriptor array for the arguments object.
+    Map::EnsureDescriptorSlack(map, 2);
+
+    { // length
+      FieldDescriptor d(factory->length_string(), Heap::kArgumentsLengthIndex,
+                        DONT_ENUM, Representation::Tagged());
+      map->AppendDescriptor(&d);
+    }
+    { // callee
+      FieldDescriptor d(factory->callee_string(), Heap::kArgumentsCalleeIndex,
+                        DONT_ENUM, Representation::Tagged());
+      map->AppendDescriptor(&d);
+    }

-    result->LocalLookup(factory->length_string(), &lookup);
-    ASSERT(lookup.IsField());
-    ASSERT(lookup.GetFieldIndex().field_index() == Heap::kArgumentsLengthIndex);
+    map->set_function_with_prototype(true);
+    map->set_pre_allocated_property_fields(2);
+    map->set_inobject_properties(2);
+    native_context()->set_sloppy_arguments_map(*map);

-    ASSERT(result->map()->inobject_properties() > Heap::kArgumentsCalleeIndex);
-    ASSERT(result->map()->inobject_properties() > Heap::kArgumentsLengthIndex);
+    DCHECK(!function->has_initial_map());
+    JSFunction::SetInitialMap(function, map,
+                              isolate->initial_object_prototype());

-    // Check the state of the object.
-    ASSERT(result->HasFastProperties());
-    ASSERT(result->HasFastObjectElements());
-#endif
+    DCHECK(map->inobject_properties() > Heap::kArgumentsCalleeIndex);
+    DCHECK(map->inobject_properties() > Heap::kArgumentsLengthIndex);
+    DCHECK(!map->is_dictionary_map());
+    DCHECK(IsFastObjectElementsKind(map->elements_kind()));
   }

-  { // --- aliased_arguments_boilerplate_
-    // Set up a well-formed parameter map to make assertions happy.
-    Handle<FixedArray> elements = factory->NewFixedArray(2);
-    elements->set_map(heap->sloppy_arguments_elements_map());
-    Handle<FixedArray> array;
-    array = factory->NewFixedArray(0);
-    elements->set(0, *array);
-    array = factory->NewFixedArray(0);
-    elements->set(1, *array);
-
-    Handle<Map> old_map(
-        native_context()->sloppy_arguments_boilerplate()->map());
-    Handle<Map> new_map = Map::Copy(old_map);
-    new_map->set_pre_allocated_property_fields(2);
-    Handle<JSObject> result = factory->NewJSObjectFromMap(new_map);
-    // Set elements kind after allocating the object because
-    // NewJSObjectFromMap assumes a fast elements map.
-    new_map->set_elements_kind(SLOPPY_ARGUMENTS_ELEMENTS);
-    result->set_elements(*elements);
-    ASSERT(result->HasSloppyArgumentsElements());
-    native_context()->set_aliased_arguments_boilerplate(*result);
-  }
-
-  { // --- strict mode arguments boilerplate
+  { // --- aliased arguments map
+    Handle<Map> map = Map::Copy(isolate->sloppy_arguments_map());
+    map->set_elements_kind(SLOPPY_ARGUMENTS_ELEMENTS);
+    DCHECK_EQ(2, map->pre_allocated_property_fields());
+    native_context()->set_aliased_arguments_map(*map);
+  }
+
+  { // --- strict mode arguments map
     const PropertyAttributes attributes =
       static_cast<PropertyAttributes>(DONT_ENUM | DONT_DELETE | READ_ONLY);

@@ -1174,14 +1240,13 @@ void Genesis::InitializeGlobal(Handle<GlobalObject> inner_global,
     Handle<AccessorPair> callee = factory->NewAccessorPair();
     Handle<AccessorPair> caller = factory->NewAccessorPair();

-    Handle<JSFunction> throw_function =
-        GetThrowTypeErrorFunction();
+    Handle<JSFunction> poison = GetStrictPoisonFunction();

     // Install the ThrowTypeError functions.
-    callee->set_getter(*throw_function);
-    callee->set_setter(*throw_function);
-    caller->set_getter(*throw_function);
-    caller->set_setter(*throw_function);
+    callee->set_getter(*poison);
+    callee->set_setter(*poison);
+    caller->set_getter(*poison);
+    caller->set_setter(*poison);

     // Create the map. Allocate one in-object field for length.
     Handle<Map> map = factory->NewMap(JS_OBJECT_TYPE,
@@ -1190,20 +1255,16 @@
     Map::EnsureDescriptorSlack(map, 3);

     { // length
-      FieldDescriptor d(
-          factory->length_string(), 0, DONT_ENUM, Representation::Tagged());
+      FieldDescriptor d(factory->length_string(), Heap::kArgumentsLengthIndex,
+                        DONT_ENUM, Representation::Tagged());
       map->AppendDescriptor(&d);
     }
     { // callee
-      CallbacksDescriptor d(factory->callee_string(),
-                            callee,
-                            attributes);
+      CallbacksDescriptor d(factory->callee_string(), callee, attributes);
       map->AppendDescriptor(&d);
     }
     { // caller
-      CallbacksDescriptor d(factory->caller_string(),
-                            caller,
-                            attributes);
+      CallbacksDescriptor d(factory->caller_string(), caller, attributes);
       map->AppendDescriptor(&d);
     }

@@ -1214,41 +1275,22 @@ void Genesis::InitializeGlobal(Handle<GlobalObject> inner_global,

     // Copy constructor from the sloppy arguments boilerplate.
     map->set_constructor(
-      native_context()->sloppy_arguments_boilerplate()->map()->constructor());
-
-    // Allocate the arguments boilerplate object.
-    Handle<JSObject> result = factory->NewJSObjectFromMap(map);
-    native_context()->set_strict_arguments_boilerplate(*result);
+      native_context()->sloppy_arguments_map()->constructor());

-    // Add length property only for strict mode boilerplate.
-    JSObject::SetLocalPropertyIgnoreAttributes(
-        result, factory->length_string(),
-        factory->undefined_value(), DONT_ENUM).Check();
+    native_context()->set_strict_arguments_map(*map);

-#ifdef DEBUG
-    LookupResult lookup(isolate);
-    result->LocalLookup(factory->length_string(), &lookup);
-    ASSERT(lookup.IsField());
-    ASSERT(lookup.GetFieldIndex().field_index() == Heap::kArgumentsLengthIndex);
-
-    ASSERT(result->map()->inobject_properties() > Heap::kArgumentsLengthIndex);
-
-    // Check the state of the object.
-    ASSERT(result->HasFastProperties());
-    ASSERT(result->HasFastObjectElements());
-#endif
+    DCHECK(map->inobject_properties() > Heap::kArgumentsLengthIndex);
+    DCHECK(!map->is_dictionary_map());
+    DCHECK(IsFastObjectElementsKind(map->elements_kind()));
   }

   { // --- context extension
     // Create a function for the context extension objects.
     Handle<Code> code = Handle<Code>(
         isolate->builtins()->builtin(Builtins::kIllegal));
-    Handle<JSFunction> context_extension_fun =
-        factory->NewFunction(factory->empty_string(),
-                             JS_CONTEXT_EXTENSION_OBJECT_TYPE,
-                             JSObject::kHeaderSize,
-                             code,
-                             true);
+    Handle<JSFunction> context_extension_fun = factory->NewFunction(
+        factory->empty_string(), code, JS_CONTEXT_EXTENSION_OBJECT_TYPE,
+        JSObject::kHeaderSize);

     Handle<String> name = factory->InternalizeOneByteString(
         STATIC_ASCII_VECTOR("context_extension"));
@@ -1262,9 +1304,8 @@ void Genesis::InitializeGlobal(Handle<GlobalObject> inner_global,
     Handle<Code> code =
         Handle<Code>(isolate->builtins()->builtin(
             Builtins::kHandleApiCallAsFunction));
-    Handle<JSFunction> delegate =
-        factory->NewFunction(factory->empty_string(), JS_OBJECT_TYPE,
-                             JSObject::kHeaderSize, code, true);
+    Handle<JSFunction> delegate = factory->NewFunction(
+        factory->empty_string(), code, JS_OBJECT_TYPE, JSObject::kHeaderSize);
     native_context()->set_call_as_function_delegate(*delegate);
     delegate->shared()->DontAdaptArguments();
   }
@@ -1274,9 +1315,8 @@ void Genesis::InitializeGlobal(Handle<GlobalObject> inner_global,
     Handle<Code> code =
         Handle<Code>(isolate->builtins()->builtin(
            Builtins::kHandleApiCallAsConstructor));
-    Handle<JSFunction> delegate =
-        factory->NewFunction(factory->empty_string(), JS_OBJECT_TYPE,
-                             JSObject::kHeaderSize, code, true);
+    Handle<JSFunction> delegate = factory->NewFunction(
+        factory->empty_string(), code, JS_OBJECT_TYPE, JSObject::kHeaderSize);
     native_context()->set_call_as_constructor_delegate(*delegate);
     delegate->shared()->DontAdaptArguments();
   }
@@ -1293,16 +1333,16 @@ void Genesis::InstallTypedArray(
     Handle<JSFunction>* fun,
     Handle<Map>* external_map) {
   Handle<JSObject> global = Handle<JSObject>(native_context()->global_object());
-  Handle<JSFunction> result = InstallFunction(global, name, JS_TYPED_ARRAY_TYPE,
-      JSTypedArray::kSize, isolate()->initial_object_prototype(),
-      Builtins::kIllegal, false, true);
+  Handle<JSFunction> result = InstallFunction(
+      global, name, JS_TYPED_ARRAY_TYPE, JSTypedArray::kSize,
+      isolate()->initial_object_prototype(), Builtins::kIllegal);

   Handle<Map> initial_map = isolate()->factory()->NewMap(
       JS_TYPED_ARRAY_TYPE,
       JSTypedArray::kSizeWithInternalFields,
       elements_kind);
-  result->set_initial_map(*initial_map);
-  initial_map->set_constructor(*result);
+  JSFunction::SetInitialMap(result, initial_map,
+                            handle(initial_map->prototype(), isolate()));
   *fun = result;

   ElementsKind external_kind = GetNextTransitionElementsKind(elements_kind);
@@ -1311,74 +1351,61 @@ void Genesis::InstallTypedArray(


 void Genesis::InitializeExperimentalGlobal() {
-  Handle<JSObject> global = Handle<JSObject>(native_context()->global_object());
-
   // TODO(mstarzinger): Move this into Genesis::InitializeGlobal once we no
   // longer need to live behind flags, so functions get added to the snapshot.

-  if (FLAG_harmony_symbols) {
-    // --- S y m b o l ---
-    Handle<JSFunction> symbol_fun =
-        InstallFunction(global, "Symbol", JS_VALUE_TYPE, JSValue::kSize,
-                        isolate()->initial_object_prototype(),
-                        Builtins::kIllegal, true, true);
-    native_context()->set_symbol_function(*symbol_fun);
-  }
-
-  if (FLAG_harmony_collections) {
-    {  // -- M a p
-      InstallFunction(global, "Map", JS_MAP_TYPE, JSMap::kSize,
-                      isolate()->initial_object_prototype(),
-                      Builtins::kIllegal, true, true);
-    }
-    {  // -- S e t
-      InstallFunction(global, "Set", JS_SET_TYPE, JSSet::kSize,
-                      isolate()->initial_object_prototype(),
-                      Builtins::kIllegal, true, true);
-    }
-    {   // -- S e t I t e r a t o r
-      Handle<Map> map = isolate()->factory()->NewMap(
-          JS_SET_ITERATOR_TYPE, JSSetIterator::kSize);
-      native_context()->set_set_iterator_map(*map);
-    }
-    {   // -- M a p I t e r a t o r
-      Handle<Map> map = isolate()->factory()->NewMap(
-          JS_MAP_ITERATOR_TYPE, JSMapIterator::kSize);
-      native_context()->set_map_iterator_map(*map);
-    }
-  }
-
   if (FLAG_harmony_generators) {
     // Create generator meta-objects and install them on the builtins object.
     Handle<JSObject> builtins(native_context()->builtins());
     Handle<JSObject> generator_object_prototype =
         factory()->NewJSObject(isolate()->object_function(), TENURED);
-    Handle<JSFunction> generator_function_prototype =
-        InstallFunction(builtins, "GeneratorFunctionPrototype",
-                        JS_FUNCTION_TYPE, JSFunction::kHeaderSize,
-                        generator_object_prototype, Builtins::kIllegal,
-                        false, false);
+    Handle<JSFunction> generator_function_prototype = InstallFunction(
+        builtins, "GeneratorFunctionPrototype", JS_FUNCTION_TYPE,
+        JSFunction::kHeaderSize, generator_object_prototype,
+        Builtins::kIllegal);
     InstallFunction(builtins, "GeneratorFunction",
                     JS_FUNCTION_TYPE, JSFunction::kSize,
-                    generator_function_prototype, Builtins::kIllegal,
-                    false, false);
+                    generator_function_prototype, Builtins::kIllegal);

     // Create maps for generator functions and their prototypes.  Store those
     // maps in the native context.
-    Handle<Map> function_map(native_context()->sloppy_function_map());
-    Handle<Map> generator_function_map = Map::Copy(function_map);
+    Handle<Map> sloppy_function_map(native_context()->sloppy_function_map());
+    Handle<Map> generator_function_map = Map::Copy(sloppy_function_map);
     generator_function_map->set_prototype(*generator_function_prototype);
     native_context()->set_sloppy_generator_function_map(
         *generator_function_map);

-    Handle<Map> strict_mode_function_map(
-        native_context()->strict_function_map());
-    Handle<Map> strict_mode_generator_function_map =
-        Map::Copy(strict_mode_function_map);
-    strict_mode_generator_function_map->set_prototype(
-        *generator_function_prototype);
+    // The "arguments" and "caller" instance properties aren't specified, so
+    // technically we could leave them out.  They make even less sense for
+    // generators than for functions.  Still, the same argument that it makes
+    // sense to keep them around but poisoned in strict mode applies to
+    // generators as well.  With poisoned accessors, naive callers can still
+    // iterate over the properties without accessing them.
+    //
+    // We can't use PoisonArgumentsAndCaller because that mutates accessor pairs
+    // in place, and the initial state of the generator function map shares the
+    // accessor pair with sloppy functions.  Also the error message should be
+    // different.  Also unhappily, we can't use the API accessors to implement
+    // poisoning, because API accessors present themselves as data properties,
+    // not accessor properties, and so getOwnPropertyDescriptor raises an
+    // exception as it tries to get the values.  Sadness.
+    Handle<AccessorPair> poison_pair(factory()->NewAccessorPair());
+    PropertyAttributes rw_attribs =
+        static_cast<PropertyAttributes>(DONT_ENUM | DONT_DELETE);
+    Handle<JSFunction> poison_function = GetGeneratorPoisonFunction();
+    poison_pair->set_getter(*poison_function);
+    poison_pair->set_setter(*poison_function);
+    ReplaceAccessors(generator_function_map, factory()->arguments_string(),
+                     rw_attribs, poison_pair);
+    ReplaceAccessors(generator_function_map, factory()->caller_string(),
+                     rw_attribs, poison_pair);
+
+    Handle<Map> strict_function_map(native_context()->strict_function_map());
+    Handle<Map> strict_generator_function_map = Map::Copy(strict_function_map);
+    // "arguments" and "caller" already poisoned.
+    strict_generator_function_map->set_prototype(*generator_function_prototype);
     native_context()->set_strict_generator_function_map(
-        *strict_mode_generator_function_map);
+        *strict_generator_function_map);

     Handle<JSFunction> object_function(native_context()->object_function());
     Handle<Map> generator_object_prototype_map = Map::Create(
@@ -1388,38 +1415,6 @@ void Genesis::InitializeExperimentalGlobal() {
     native_context()->set_generator_object_prototype_map(
         *generator_object_prototype_map);
   }
-
-  if (FLAG_harmony_collections || FLAG_harmony_generators) {
-    // Collection forEach uses an iterator result object.
-    // Generators return iteraror result objects.
-
-    STATIC_ASSERT(JSGeneratorObject::kResultPropertyCount == 2);
-    Handle<JSFunction> object_function(native_context()->object_function());
-    ASSERT(object_function->initial_map()->inobject_properties() == 0);
-    Handle<Map> iterator_result_map = Map::Create(
-        object_function, JSGeneratorObject::kResultPropertyCount);
-    ASSERT(iterator_result_map->inobject_properties() ==
-        JSGeneratorObject::kResultPropertyCount);
-    Map::EnsureDescriptorSlack(
-        iterator_result_map, JSGeneratorObject::kResultPropertyCount);
-
-    FieldDescriptor value_descr(isolate()->factory()->value_string(),
-                                JSGeneratorObject::kResultValuePropertyIndex,
-                                NONE,
-                                Representation::Tagged());
-    iterator_result_map->AppendDescriptor(&value_descr);
-
-    FieldDescriptor done_descr(isolate()->factory()->done_string(),
-                               JSGeneratorObject::kResultDonePropertyIndex,
-                               NONE,
-                               Representation::Tagged());
-    iterator_result_map->AppendDescriptor(&done_descr);
-
-    iterator_result_map->set_unused_property_fields(0);
-    ASSERT_EQ(JSGeneratorObject::kResultSize,
-              iterator_result_map->instance_size());
-    native_context()->set_iterator_result_map(*iterator_result_map);
-  }
 }


@@ -1448,7 +1443,7 @@ bool Genesis::CompileNative(Isolate* isolate,
                             Vector<const char> name,
                             Handle<String> source) {
   HandleScope scope(isolate);
-  isolate->debugger()->set_compiling_natives(true);
+  SuppressDebug compiling_natives(isolate->debug());
   // During genesis, the boilerplate for stack overflow won't work until the
   // environment has been at least partially initialized. Add a stack check
   // before entering JS code to catch overflow early.
@@ -1462,9 +1457,8 @@ bool Genesis::CompileNative(Isolate* isolate,
                                     NULL,
                                     Handle<Context>(isolate->context()),
                                     true);
-  ASSERT(isolate->has_pending_exception() != result);
+  DCHECK(isolate->has_pending_exception() != result);
   if (!result) isolate->clear_pending_exception();
-  isolate->debugger()->set_compiling_natives(false);
   return result;
 }

@@ -1483,19 +1477,12 @@ bool Genesis::CompileScriptCached(Isolate* isolate,
   // If we can't find the function in the cache, we compile a new
   // function and insert it into the cache.
   if (cache == NULL || !cache->Lookup(name, &function_info)) {
-    ASSERT(source->IsOneByteRepresentation());
+    DCHECK(source->IsOneByteRepresentation());
     Handle<String> script_name =
         factory->NewStringFromUtf8(name).ToHandleChecked();
     function_info = Compiler::CompileScript(
-        source,
-        script_name,
-        0,
-        0,
-        false,
-        top_context,
-        extension,
-        NULL,
-        NO_CACHED_DATA,
+        source, script_name, 0, 0, false, top_context, extension, NULL,
+        ScriptCompiler::kNoCompileOptions,
         use_runtime_context ? NATIVES_CODE : NOT_NATIVES_CODE);
     if (function_info.is_null()) return false;
     if (cache != NULL) cache->Add(name, function_info);
@@ -1504,7 +1491,7 @@ bool Genesis::CompileScriptCached(Isolate* isolate,
   // Set up the function context. Conceptually, we should clone the
   // function before overwriting the context but since we're in a
   // single-threaded environment it is not strictly necessary.
-  ASSERT(top_context->IsNativeContext());
+  DCHECK(top_context->IsNativeContext());
   Handle<Context> context =
       Handle<Context>(use_runtime_context
                       ? Handle<Context>(top_context->runtime_context())
@@ -1524,6 +1511,38 @@ bool Genesis::CompileScriptCached(Isolate* isolate,
 }


+static Handle<JSObject> ResolveBuiltinIdHolder(Handle<Context> native_context,
+                                               const char* holder_expr) {
+  Isolate* isolate = native_context->GetIsolate();
+  Factory* factory = isolate->factory();
+  Handle<GlobalObject> global(native_context->global_object());
+  const char* period_pos = strchr(holder_expr, '.');
+  if (period_pos == NULL) {
+    return Handle<JSObject>::cast(
+        Object::GetPropertyOrElement(
+            global, factory->InternalizeUtf8String(holder_expr))
+            .ToHandleChecked());
+  }
+  const char* inner = period_pos + 1;
+  DCHECK_EQ(NULL, strchr(inner, '.'));
+  Vector<const char> property(holder_expr,
+                              static_cast<int>(period_pos - holder_expr));
+  Handle<String> property_string = factory->InternalizeUtf8String(property);
+  DCHECK(!property_string.is_null());
+  Handle<JSObject> object = Handle<JSObject>::cast(
+      Object::GetProperty(global, property_string).ToHandleChecked());
+  if (strcmp("prototype", inner) == 0) {
+    Handle<JSFunction> function = Handle<JSFunction>::cast(object);
+    return Handle<JSObject>(JSObject::cast(function->prototype()));
+  }
+  Handle<String> inner_string = factory->InternalizeUtf8String(inner);
+  DCHECK(!inner_string.is_null());
+  Handle<Object> value =
+      Object::GetProperty(object, inner_string).ToHandleChecked();
+  return Handle<JSObject>::cast(value);
+}
+
+
 #define INSTALL_NATIVE(Type, name, var)                                     \
   Handle<String> var##_name =                                               \
       factory()->InternalizeOneByteString(STATIC_ASCII_VECTOR(name));       \
@@ -1531,6 +1550,12 @@ bool Genesis::CompileScriptCached(Isolate* isolate,
       handle(native_context()->builtins()), var##_name).ToHandleChecked(); \
   native_context()->set_##var(Type::cast(*var##_native));

+#define INSTALL_NATIVE_MATH(name)                                    \
+  {                                                                  \
+    Handle<Object> fun =                                             \
+        ResolveBuiltinIdHolder(native_context(), "Math." #name);     \
+    native_context()->set_math_##name##_fun(JSFunction::cast(*fun)); \
+  }
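(Editor's sketch, not part of the upstream patch: ResolveBuiltinIdHolder, now
hoisted above its callers and generalized, accepts either a bare global name
such as "Math" or one dotted segment whose inner part is either "prototype" or
a plain property. The INSTALL_NATIVE_MATH macro above therefore expands, for
INSTALL_NATIVE_MATH(pow), to roughly:)

    {
      Handle<Object> fun =
          ResolveBuiltinIdHolder(native_context(), "Math.pow");
      native_context()->set_math_pow_fun(JSFunction::cast(*fun));
    }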

 void Genesis::InstallNativeFunctions() {
   HandleScope scope(isolate());
@@ -1559,6 +1584,7 @@ void Genesis::InstallNativeFunctions() {
   INSTALL_NATIVE(JSFunction, "PromiseReject", promise_reject);
   INSTALL_NATIVE(JSFunction, "PromiseChain", promise_chain);
   INSTALL_NATIVE(JSFunction, "PromiseCatch", promise_catch);
+  INSTALL_NATIVE(JSFunction, "PromiseThen", promise_then);

   INSTALL_NATIVE(JSFunction, "NotifyChange", observers_notify_change);
   INSTALL_NATIVE(JSFunction, "EnqueueSpliceRecord", observers_enqueue_splice);
@@ -1572,13 +1598,33 @@ void Genesis::InstallNativeFunctions() {
                  native_object_get_notifier);
   INSTALL_NATIVE(JSFunction, "NativeObjectNotifierPerformChange",
                  native_object_notifier_perform_change);
+
+  INSTALL_NATIVE(Symbol, "symbolIterator", iterator_symbol);
+  INSTALL_NATIVE(Symbol, "symbolUnscopables", unscopables_symbol);
+
+  INSTALL_NATIVE_MATH(abs)
+  INSTALL_NATIVE_MATH(acos)
+  INSTALL_NATIVE_MATH(asin)
+  INSTALL_NATIVE_MATH(atan)
+  INSTALL_NATIVE_MATH(atan2)
+  INSTALL_NATIVE_MATH(ceil)
+  INSTALL_NATIVE_MATH(cos)
+  INSTALL_NATIVE_MATH(exp)
+  INSTALL_NATIVE_MATH(floor)
+  INSTALL_NATIVE_MATH(imul)
+  INSTALL_NATIVE_MATH(log)
+  INSTALL_NATIVE_MATH(max)
+  INSTALL_NATIVE_MATH(min)
+  INSTALL_NATIVE_MATH(pow)
+  INSTALL_NATIVE_MATH(random)
+  INSTALL_NATIVE_MATH(round)
+  INSTALL_NATIVE_MATH(sin)
+  INSTALL_NATIVE_MATH(sqrt)
+  INSTALL_NATIVE_MATH(tan)
 }


 void Genesis::InstallExperimentalNativeFunctions() {
-  INSTALL_NATIVE(JSFunction, "RunMicrotasks", run_microtasks);
-  INSTALL_NATIVE(JSFunction, "EnqueueMicrotask", enqueue_microtask);
-
   if (FLAG_harmony_proxies) {
     INSTALL_NATIVE(JSFunction, "DerivedHasTrap", derived_has_trap);
     INSTALL_NATIVE(JSFunction, "DerivedGetTrap", derived_get_trap);
@@ -1600,17 +1646,11 @@ Handle<JSFunction> Genesis::InstallInternalArray(
   // doesn't inherit from Object.prototype.
   // To be used only for internal work by builtins.  Instances
   // must not be leaked to user code.
-  Handle<JSFunction> array_function =
-      InstallFunction(builtins,
-                      name,
-                      JS_ARRAY_TYPE,
-                      JSArray::kSize,
-                      isolate()->initial_object_prototype(),
-                      Builtins::kInternalArrayCode,
-                      true, true);
   Handle<JSObject> prototype =
       factory()->NewJSObject(isolate()->object_function(), TENURED);
-  Accessors::FunctionSetPrototype(array_function, prototype);
+  Handle<JSFunction> array_function = InstallFunction(
+      builtins, name, JS_ARRAY_TYPE, JSArray::kSize,
+      prototype, Builtins::kInternalArrayCode);

   InternalArrayConstructorStub internal_array_constructor_stub(isolate());
   Handle<Code> code = internal_array_constructor_stub.GetCode();
@@ -1620,7 +1660,7 @@ Handle<JSFunction> Genesis::InstallInternalArray(
   Handle<Map> original_map(array_function->initial_map());
   Handle<Map> initial_map = Map::Copy(original_map);
   initial_map->set_elements_kind(elements_kind);
-  array_function->set_initial_map(*initial_map);
+  JSFunction::SetInitialMap(array_function, initial_map, prototype);

   // Make "length" magic on instances.
   Map::EnsureDescriptorSlack(initial_map, 1);
@@ -1648,10 +1688,9 @@ bool Genesis::InstallNatives() {
   // (itself) and a reference to the native_context directly in the object.
   Handle<Code> code = Handle<Code>(
       isolate()->builtins()->builtin(Builtins::kIllegal));
-  Handle<JSFunction> builtins_fun =
-      factory()->NewFunction(factory()->empty_string(),
-                             JS_BUILTINS_OBJECT_TYPE,
-                             JSBuiltinsObject::kSize, code, true);
+  Handle<JSFunction> builtins_fun = factory()->NewFunction(
+      factory()->empty_string(), code, JS_BUILTINS_OBJECT_TYPE,
+      JSBuiltinsObject::kSize);

   Handle<String> name =
       factory()->InternalizeOneByteString(STATIC_ASCII_VECTOR("builtins"));
@@ -1665,8 +1704,7 @@ bool Genesis::InstallNatives() {
   builtins->set_builtins(*builtins);
   builtins->set_native_context(*native_context());
   builtins->set_global_context(*native_context());
-  builtins->set_global_receiver(*builtins);
-  builtins->set_global_receiver(native_context()->global_proxy());
+  builtins->set_global_proxy(native_context()->global_proxy());

   // Set up the 'global' properties of the builtins object. The
@@ -1678,21 +1716,18 @@ bool Genesis::InstallNatives() {
   Handle<String> global_string =
      factory()->InternalizeOneByteString(STATIC_ASCII_VECTOR("global"));
   Handle<Object> global_obj(native_context()->global_object(), isolate());
-  JSObject::SetLocalPropertyIgnoreAttributes(
-      builtins, global_string, global_obj, attributes).Check();
+  JSObject::AddProperty(builtins, global_string, global_obj, attributes);
   Handle<String> builtins_string =
      factory()->InternalizeOneByteString(STATIC_ASCII_VECTOR("builtins"));
-  JSObject::SetLocalPropertyIgnoreAttributes(
-      builtins, builtins_string, builtins, attributes).Check();
+  JSObject::AddProperty(builtins, builtins_string, builtins, attributes);

   // Set up the reference from the global object to the builtins object.
   JSGlobalObject::cast(native_context()->global_object())->
       set_builtins(*builtins);

   // Create a bridge function that has context in the native context.
-  Handle<JSFunction> bridge = factory()->NewFunctionWithPrototype(
-      factory()->empty_string(), factory()->undefined_value());
-  ASSERT(bridge->context() == *isolate()->native_context());
+  Handle<JSFunction> bridge = factory()->NewFunction(factory()->empty_string());
+  DCHECK(bridge->context() == *isolate()->native_context());

   // Allocate the builtins context.
   Handle<Context> context =
@@ -1703,17 +1738,16 @@ bool Genesis::InstallNatives() {

   {  // -- S c r i p t
     // Builtin functions for Script.
-    Handle<JSFunction> script_fun =
-        InstallFunction(builtins, "Script", JS_VALUE_TYPE, JSValue::kSize,
-                        isolate()->initial_object_prototype(),
-                        Builtins::kIllegal, false, false);
+    Handle<JSFunction> script_fun = InstallFunction(
+        builtins, "Script", JS_VALUE_TYPE, JSValue::kSize,
+        isolate()->initial_object_prototype(), Builtins::kIllegal);
     Handle<JSObject> prototype =
         factory()->NewJSObject(isolate()->object_function(), TENURED);
     Accessors::FunctionSetPrototype(script_fun, prototype);
     native_context()->set_script_function(*script_fun);

     Handle<Map> script_map = Handle<Map>(script_fun->initial_map());
-    Map::EnsureDescriptorSlack(script_map, 13);
+    Map::EnsureDescriptorSlack(script_map, 14);

     PropertyAttributes attribs =
         static_cast<PropertyAttributes>(DONT_ENUM | DONT_DELETE | READ_ONLY);
@@ -1820,6 +1854,23 @@ bool Genesis::InstallNatives() {
       script_map->AppendDescriptor(&d);
     }

+    Handle<AccessorInfo> script_source_url =
+        Accessors::ScriptSourceUrlInfo(isolate(), attribs);
+    {
+      CallbacksDescriptor d(Handle<Name>(Name::cast(script_source_url->name())),
+                            script_source_url, attribs);
+      script_map->AppendDescriptor(&d);
+    }
+
+    Handle<AccessorInfo> script_source_mapping_url =
+        Accessors::ScriptSourceMappingUrlInfo(isolate(), attribs);
+    {
+      CallbacksDescriptor d(
+          Handle<Name>(Name::cast(script_source_mapping_url->name())),
+          script_source_mapping_url, attribs);
+      script_map->AppendDescriptor(&d);
+    }
+
     // Allocate the empty script.
     Handle<Script> script = factory()->NewScript(factory()->empty_string());
     script->set_type(Smi::FromInt(Script::TYPE_NATIVE));
@@ -1829,11 +1880,9 @@ bool Genesis::InstallNatives() {
     // Builtin function for OpaqueReference -- a JSValue-based object,
     // that keeps its field isolated from JavaScript code. It may store
     // objects, that JavaScript code may not access.
-    Handle<JSFunction> opaque_reference_fun =
-        InstallFunction(builtins, "OpaqueReference", JS_VALUE_TYPE,
-                        JSValue::kSize,
-                        isolate()->initial_object_prototype(),
-                        Builtins::kIllegal, false, false);
+    Handle<JSFunction> opaque_reference_fun = InstallFunction(
+        builtins, "OpaqueReference", JS_VALUE_TYPE, JSValue::kSize,
+        isolate()->initial_object_prototype(), Builtins::kIllegal);
     Handle<JSObject> prototype =
         factory()->NewJSObject(isolate()->object_function(), TENURED);
     Accessors::FunctionSetPrototype(opaque_reference_fun, prototype);
@@ -1855,6 +1904,22 @@ bool Genesis::InstallNatives() {
     InstallInternalArray(builtins, "InternalPackedArray", FAST_ELEMENTS);
   }

+  {  // -- S e t I t e r a t o r
+    Handle<JSFunction> set_iterator_function = InstallFunction(
+        builtins, "SetIterator", JS_SET_ITERATOR_TYPE, JSSetIterator::kSize,
+        isolate()->initial_object_prototype(), Builtins::kIllegal);
+    native_context()->set_set_iterator_map(
+        set_iterator_function->initial_map());
+  }
+
+  {  // -- M a p I t e r a t o r
+    Handle<JSFunction> map_iterator_function = InstallFunction(
+        builtins, "MapIterator", JS_MAP_ITERATOR_TYPE, JSMapIterator::kSize,
+        isolate()->initial_object_prototype(), Builtins::kIllegal);
+    native_context()->set_map_iterator_map(
+        map_iterator_function->initial_map());
+  }
+
   if (FLAG_disable_native_files) {
     PrintF("Warning: Running without installed natives!\n");
     return true;
@@ -1876,7 +1941,7 @@ bool Genesis::InstallNatives() {

   // Store the map for the string prototype after the natives has been compiled
   // and the String function has been set up.
Handle<JSFunction> string_function(native_context()->string_function()); - ASSERT(JSObject::cast( + DCHECK(JSObject::cast( string_function->initial_map()->prototype())->HasFastProperties()); native_context()->set_string_function_prototype_map( HeapObject::cast(string_function->initial_map()->prototype())->map()); @@ -1885,27 +1950,29 @@ bool Genesis::InstallNatives() { { Handle<String> key = factory()->function_class_string(); Handle<JSFunction> function = Handle<JSFunction>::cast(Object::GetProperty( - isolate()->global_object(), key).ToHandleChecked()); + handle(native_context()->global_object()), key).ToHandleChecked()); Handle<JSObject> proto = Handle<JSObject>(JSObject::cast(function->instance_prototype())); // Install the call and the apply functions. Handle<JSFunction> call = InstallFunction(proto, "call", JS_OBJECT_TYPE, JSObject::kHeaderSize, - Handle<JSObject>::null(), - Builtins::kFunctionCall, - false, false); + MaybeHandle<JSObject>(), Builtins::kFunctionCall); Handle<JSFunction> apply = InstallFunction(proto, "apply", JS_OBJECT_TYPE, JSObject::kHeaderSize, - Handle<JSObject>::null(), - Builtins::kFunctionApply, - false, false); + MaybeHandle<JSObject>(), Builtins::kFunctionApply); + if (FLAG_vector_ics) { + // Apply embeds an IC, so we need a type vector of size 1 in the shared + // function info. + Handle<FixedArray> feedback_vector = factory()->NewTypeFeedbackVector(1); + apply->shared()->set_feedback_vector(*feedback_vector); + } // Make sure that Function.prototype.call appears to be compiled. // The code will never be called, but inline caching for call will // only work if it appears to be compiled. call->shared()->DontAdaptArguments(); - ASSERT(call->is_compiled()); + DCHECK(call->is_compiled()); // Set the expected parameters for apply to 2; required by builtin. 
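The FLAG_vector_ics hunk above gives Function.prototype.apply a one-slot type feedback vector because, as its comment says, the builtin embeds a single inline cache. A toy model of sizing a feedback vector from the number of IC sites (hypothetical names, not V8's types):

#include <cstdio>
#include <vector>

// Toy feedback slot: one per inline-cache site in the generated code.
struct ToyFeedbackSlot {
  const void* cached_map = nullptr;  // null = uninitialized IC state
};

struct ToySharedFunctionInfo {
  std::vector<ToyFeedbackSlot> feedback_vector;

  explicit ToySharedFunctionInfo(size_t ic_site_count)
      : feedback_vector(ic_site_count) {}
};

int main() {
  // apply embeds exactly one IC, so its shared info carries one slot.
  ToySharedFunctionInfo apply(/*ic_site_count=*/1);
  std::printf("slots: %zu\n", apply.feedback_vector.size());
  return 0;
}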
apply->shared()->set_formal_parameter_count(2); @@ -1946,7 +2013,7 @@ bool Genesis::InstallNatives() { Handle<String> length = factory()->length_string(); int old = array_descriptors->SearchWithCache( *length, array_function->initial_map()); - ASSERT(old != DescriptorArray::kNotFound); + DCHECK(old != DescriptorArray::kNotFound); CallbacksDescriptor desc(length, handle(array_descriptors->GetValue(old), isolate()), @@ -1996,44 +2063,17 @@ bool Genesis::InstallExperimentalNatives() { for (int i = ExperimentalNatives::GetDebuggerCount(); i < ExperimentalNatives::GetBuiltinsCount(); i++) { - INSTALL_EXPERIMENTAL_NATIVE(i, symbols, "symbol.js") INSTALL_EXPERIMENTAL_NATIVE(i, proxies, "proxy.js") - INSTALL_EXPERIMENTAL_NATIVE(i, collections, "collection.js") INSTALL_EXPERIMENTAL_NATIVE(i, generators, "generator.js") - INSTALL_EXPERIMENTAL_NATIVE(i, iteration, "array-iterator.js") INSTALL_EXPERIMENTAL_NATIVE(i, strings, "harmony-string.js") INSTALL_EXPERIMENTAL_NATIVE(i, arrays, "harmony-array.js") - INSTALL_EXPERIMENTAL_NATIVE(i, maths, "harmony-math.js") } InstallExperimentalNativeFunctions(); - InstallExperimentalBuiltinFunctionIds(); return true; } -static Handle<JSObject> ResolveBuiltinIdHolder( - Handle<Context> native_context, - const char* holder_expr) { - Isolate* isolate = native_context->GetIsolate(); - Factory* factory = isolate->factory(); - Handle<GlobalObject> global(native_context->global_object()); - const char* period_pos = strchr(holder_expr, '.'); - if (period_pos == NULL) { - return Handle<JSObject>::cast(Object::GetPropertyOrElement( - global, factory->InternalizeUtf8String(holder_expr)).ToHandleChecked()); - } - ASSERT_EQ(".prototype", period_pos); - Vector<const char> property(holder_expr, - static_cast<int>(period_pos - holder_expr)); - Handle<String> property_string = factory->InternalizeUtf8String(property); - ASSERT(!property_string.is_null()); - Handle<JSFunction> function = Handle<JSFunction>::cast( - Object::GetProperty(global, property_string).ToHandleChecked()); - return Handle<JSObject>(JSObject::cast(function->prototype())); -} - - static void InstallBuiltinFunctionId(Handle<JSObject> holder, const char* function_name, BuiltinFunctionId id) { @@ -2059,15 +2099,6 @@ void Genesis::InstallBuiltinFunctionIds() { } -void Genesis::InstallExperimentalBuiltinFunctionIds() { - HandleScope scope(isolate()); - if (FLAG_harmony_maths) { - Handle<JSObject> holder = ResolveBuiltinIdHolder(native_context(), "Math"); - InstallBuiltinFunctionId(holder, "clz32", kMathClz32); - } -} - - // Do not forget to update macros.py with named constant // of cache id. #define JSFUNCTION_RESULT_CACHE_LIST(F) \ @@ -2130,56 +2161,55 @@ bool Bootstrapper::InstallExtensions(Handle<Context> native_context, bool Genesis::InstallSpecialObjects(Handle<Context> native_context) { Isolate* isolate = native_context->GetIsolate(); + // Don't install extensions into the snapshot. + if (isolate->serializer_enabled()) return true; + Factory* factory = isolate->factory(); HandleScope scope(isolate); Handle<JSGlobalObject> global(JSGlobalObject::cast( native_context->global_object())); + + Handle<JSObject> Error = Handle<JSObject>::cast( + Object::GetProperty(isolate, global, "Error").ToHandleChecked()); + Handle<String> name = + factory->InternalizeOneByteString(STATIC_ASCII_VECTOR("stackTraceLimit")); + Handle<Smi> stack_trace_limit(Smi::FromInt(FLAG_stack_trace_limit), isolate); + JSObject::AddProperty(Error, name, stack_trace_limit, NONE); + // Expose the natives in global if a name for it is specified. 
if (FLAG_expose_natives_as != NULL && strlen(FLAG_expose_natives_as) != 0) { Handle<String> natives = factory->InternalizeUtf8String(FLAG_expose_natives_as); - RETURN_ON_EXCEPTION_VALUE( - isolate, - JSObject::SetLocalPropertyIgnoreAttributes( - global, natives, Handle<JSObject>(global->builtins()), DONT_ENUM), - false); - } - - Handle<Object> Error = Object::GetProperty( - isolate, global, "Error").ToHandleChecked(); - if (Error->IsJSObject()) { - Handle<String> name = factory->InternalizeOneByteString( - STATIC_ASCII_VECTOR("stackTraceLimit")); - Handle<Smi> stack_trace_limit( - Smi::FromInt(FLAG_stack_trace_limit), isolate); - RETURN_ON_EXCEPTION_VALUE( - isolate, - JSObject::SetLocalPropertyIgnoreAttributes( - Handle<JSObject>::cast(Error), name, stack_trace_limit, NONE), - false); - } + JSObject::AddProperty(global, natives, handle(global->builtins()), + DONT_ENUM); + } + + // Expose the stack trace symbol to native JS. + RETURN_ON_EXCEPTION_VALUE( + isolate, + JSObject::SetOwnPropertyIgnoreAttributes( + handle(native_context->builtins(), isolate), + factory->InternalizeOneByteString( + STATIC_ASCII_VECTOR("stack_trace_symbol")), + factory->stack_trace_symbol(), + NONE), + false); // Expose the debug global object in global if a name for it is specified. if (FLAG_expose_debug_as != NULL && strlen(FLAG_expose_debug_as) != 0) { - Debug* debug = isolate->debug(); // If loading fails we just bail out without installing the // debugger but without tanking the whole context. + Debug* debug = isolate->debug(); if (!debug->Load()) return true; + Handle<Context> debug_context = debug->debug_context(); // Set the security token for the debugger context to the same as // the shell native context to allow calling between these (otherwise // exposing debug global object doesn't make much sense). - debug->debug_context()->set_security_token( - native_context->security_token()); - + debug_context->set_security_token(native_context->security_token()); Handle<String> debug_string = factory->InternalizeUtf8String(FLAG_expose_debug_as); - Handle<Object> global_proxy( - debug->debug_context()->global_proxy(), isolate); - RETURN_ON_EXCEPTION_VALUE( - isolate, - JSObject::SetLocalPropertyIgnoreAttributes( - global, debug_string, global_proxy, DONT_ENUM), - false); + Handle<Object> global_proxy(debug_context->global_proxy(), isolate); + JSObject::AddProperty(global, debug_string, global_proxy, DONT_ENUM); } return true; } @@ -2283,7 +2313,7 @@ bool Genesis::InstallExtension(Isolate* isolate, "Circular extension dependency")) { return false; } - ASSERT(extension_states->get_state(current) == UNVISITED); + DCHECK(extension_states->get_state(current) == UNVISITED); extension_states->set_state(current, VISITED); v8::Extension* extension = current->extension(); // Install the extension's dependencies @@ -2305,14 +2335,14 @@ bool Genesis::InstallExtension(Isolate* isolate, extension, Handle<Context>(isolate->context()), false); - ASSERT(isolate->has_pending_exception() != result); + DCHECK(isolate->has_pending_exception() != result); if (!result) { // We print out the name of the extension that fail to install. // When an error is thrown during bootstrapping we automatically print // the line number at which this happened to the console in the isolate // error throwing functionality. 
- OS::PrintError("Error installing extension '%s'.\n", - current->extension()->name()); + base::OS::PrintError("Error installing extension '%s'.\n", + current->extension()->name()); isolate->clear_pending_exception(); } extension_states->set_state(current, INSTALLED); @@ -2329,6 +2359,10 @@ bool Genesis::InstallJSBuiltins(Handle<JSBuiltinsObject> builtins) { isolate(), builtins, Builtins::GetName(id)).ToHandleChecked(); Handle<JSFunction> function = Handle<JSFunction>::cast(function_object); builtins->set_javascript_builtin(id, *function); + // TODO(mstarzinger): This is just a temporary hack to make TurboFan work, + // the correct solution is to restore the context register after invoking + // builtins from full-codegen. + function->shared()->set_optimization_disabled(true); if (!Compiler::EnsureCompiled(function, CLEAR_EXCEPTION)) { return false; } @@ -2342,26 +2376,26 @@ bool Genesis::ConfigureGlobalObjects( v8::Handle<v8::ObjectTemplate> global_proxy_template) { Handle<JSObject> global_proxy( JSObject::cast(native_context()->global_proxy())); - Handle<JSObject> inner_global( + Handle<JSObject> global_object( JSObject::cast(native_context()->global_object())); if (!global_proxy_template.IsEmpty()) { // Configure the global proxy object. - Handle<ObjectTemplateInfo> proxy_data = + Handle<ObjectTemplateInfo> global_proxy_data = v8::Utils::OpenHandle(*global_proxy_template); - if (!ConfigureApiObject(global_proxy, proxy_data)) return false; + if (!ConfigureApiObject(global_proxy, global_proxy_data)) return false; - // Configure the inner global object. + // Configure the global object. Handle<FunctionTemplateInfo> proxy_constructor( - FunctionTemplateInfo::cast(proxy_data->constructor())); + FunctionTemplateInfo::cast(global_proxy_data->constructor())); if (!proxy_constructor->prototype_template()->IsUndefined()) { - Handle<ObjectTemplateInfo> inner_data( + Handle<ObjectTemplateInfo> global_object_data( ObjectTemplateInfo::cast(proxy_constructor->prototype_template())); - if (!ConfigureApiObject(inner_global, inner_data)) return false; + if (!ConfigureApiObject(global_object, global_object_data)) return false; } } - SetObjectPrototype(global_proxy, inner_global); + SetObjectPrototype(global_proxy, global_object); native_context()->set_initial_array_prototype( JSArray::cast(native_context()->array_function()->prototype())); @@ -2371,16 +2405,16 @@ bool Genesis::ConfigureGlobalObjects( bool Genesis::ConfigureApiObject(Handle<JSObject> object, - Handle<ObjectTemplateInfo> object_template) { - ASSERT(!object_template.is_null()); - ASSERT(FunctionTemplateInfo::cast(object_template->constructor()) + Handle<ObjectTemplateInfo> object_template) { + DCHECK(!object_template.is_null()); + DCHECK(FunctionTemplateInfo::cast(object_template->constructor()) ->IsTemplateFor(object->map()));; MaybeHandle<JSObject> maybe_obj = Execution::InstantiateObject(object_template); Handle<JSObject> obj; if (!maybe_obj.ToHandle(&obj)) { - ASSERT(isolate()->has_pending_exception()); + DCHECK(isolate()->has_pending_exception()); isolate()->clear_pending_exception(); return false; } @@ -2400,30 +2434,28 @@ void Genesis::TransferNamedProperties(Handle<JSObject> from, case FIELD: { HandleScope inner(isolate()); Handle<Name> key = Handle<Name>(descs->GetKey(i)); - int index = descs->GetFieldIndex(i); - ASSERT(!descs->GetDetails(i).representation().IsDouble()); + FieldIndex index = FieldIndex::ForDescriptor(from->map(), i); + DCHECK(!descs->GetDetails(i).representation().IsDouble()); Handle<Object> value = 
Handle<Object>(from->RawFastPropertyAt(index), isolate()); - JSObject::SetLocalPropertyIgnoreAttributes( - to, key, value, details.attributes()).Check(); + JSObject::AddProperty(to, key, value, details.attributes()); break; } case CONSTANT: { HandleScope inner(isolate()); Handle<Name> key = Handle<Name>(descs->GetKey(i)); Handle<Object> constant(descs->GetConstant(i), isolate()); - JSObject::SetLocalPropertyIgnoreAttributes( - to, key, constant, details.attributes()).Check(); + JSObject::AddProperty(to, key, constant, details.attributes()); break; } case CALLBACKS: { LookupResult result(isolate()); Handle<Name> key(Name::cast(descs->GetKey(i)), isolate()); - to->LocalLookup(key, &result); + to->LookupOwn(key, &result); // If the property is already there we skip it if (result.IsFound()) continue; HandleScope inner(isolate()); - ASSERT(!to->HasFastProperties()); + DCHECK(!to->HasFastProperties()); // Add to dictionary. Handle<Object> callbacks(descs->GetCallbacksObject(i), isolate()); PropertyDetails d = PropertyDetails( @@ -2448,23 +2480,22 @@ void Genesis::TransferNamedProperties(Handle<JSObject> from, for (int i = 0; i < capacity; i++) { Object* raw_key(properties->KeyAt(i)); if (properties->IsKey(raw_key)) { - ASSERT(raw_key->IsName()); + DCHECK(raw_key->IsName()); // If the property is already there we skip it. LookupResult result(isolate()); Handle<Name> key(Name::cast(raw_key)); - to->LocalLookup(key, &result); + to->LookupOwn(key, &result); if (result.IsFound()) continue; // Set the property. Handle<Object> value = Handle<Object>(properties->ValueAt(i), isolate()); - ASSERT(!value->IsCell()); + DCHECK(!value->IsCell()); if (value->IsPropertyCell()) { value = Handle<Object>(PropertyCell::cast(*value)->value(), isolate()); } PropertyDetails details = properties->DetailsAt(i); - JSObject::SetLocalPropertyIgnoreAttributes( - to, key, value, details.attributes()).Check(); + JSObject::AddProperty(to, key, value, details.attributes()); } } } @@ -2484,17 +2515,15 @@ void Genesis::TransferIndexedProperties(Handle<JSObject> from, void Genesis::TransferObject(Handle<JSObject> from, Handle<JSObject> to) { HandleScope outer(isolate()); - ASSERT(!from->IsJSArray()); - ASSERT(!to->IsJSArray()); + DCHECK(!from->IsJSArray()); + DCHECK(!to->IsJSArray()); TransferNamedProperties(from, to); TransferIndexedProperties(from, to); // Transfer the prototype (new map is needed). - Handle<Map> old_to_map = Handle<Map>(to->map()); - Handle<Map> new_to_map = Map::Copy(old_to_map); - new_to_map->set_prototype(from->map()->prototype()); - to->set_map(*new_to_map); + Handle<Object> proto(from->map()->prototype(), isolate()); + SetObjectPrototype(to, proto); } @@ -2502,8 +2531,8 @@ void Genesis::MakeFunctionInstancePrototypeWritable() { // The maps with writable prototype are created in CreateEmptyFunction // and CreateStrictModeFunctionMaps respectively. Initially the maps are // created with read-only prototype for JS builtins processing. - ASSERT(!sloppy_function_map_writable_prototype_.is_null()); - ASSERT(!strict_function_map_writable_prototype_.is_null()); + DCHECK(!sloppy_function_map_writable_prototype_.is_null()); + DCHECK(!strict_function_map_writable_prototype_.is_null()); // Replace function instance maps to make prototype writable. 
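The TransferNamedProperties hunks above move to the renamed LookupOwn and to FieldIndex::ForDescriptor, but the control flow is unchanged: copy each named property across unless the destination already owns that key. A minimal standalone model of the loop (C++17):

#include <cstdio>
#include <map>
#include <string>

using ToyProps = std::map<std::string, int>;

// Copy named properties from one object to another, skipping keys the
// destination already owns -- the same rule TransferNamedProperties applies.
void TransferNamedProperties(const ToyProps& from, ToyProps& to) {
  for (const auto& [key, value] : from) {
    if (to.count(key)) continue;  // destination wins on conflicts
    to.emplace(key, value);
  }
}

int main() {
  ToyProps from{{"parseInt", 1}, {"NaN", 2}};
  ToyProps to{{"NaN", 99}};  // already present: must be preserved
  TransferNamedProperties(from, to);
  std::printf("NaN=%d parseInt=%d\n", to["NaN"], to["parseInt"]);  // NaN=99
  return 0;
}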
native_context()->set_sloppy_function_map( @@ -2516,8 +2545,8 @@ void Genesis::MakeFunctionInstancePrototypeWritable() { class NoTrackDoubleFieldsForSerializerScope { public: explicit NoTrackDoubleFieldsForSerializerScope(Isolate* isolate) - : isolate_(isolate), flag_(FLAG_track_double_fields) { - if (Serializer::enabled(isolate)) { + : flag_(FLAG_track_double_fields) { + if (isolate->serializer_enabled()) { // Disable tracking double fields because heap numbers treated as // immutable by the serializer. FLAG_track_double_fields = false; @@ -2525,20 +2554,17 @@ class NoTrackDoubleFieldsForSerializerScope { } ~NoTrackDoubleFieldsForSerializerScope() { - if (Serializer::enabled(isolate_)) { - FLAG_track_double_fields = flag_; - } + FLAG_track_double_fields = flag_; } private: - Isolate* isolate_; bool flag_; }; Genesis::Genesis(Isolate* isolate, - Handle<Object> global_object, - v8::Handle<v8::ObjectTemplate> global_template, + MaybeHandle<JSGlobalProxy> maybe_global_proxy, + v8::Handle<v8::ObjectTemplate> global_proxy_template, v8::ExtensionConfiguration* extensions) : isolate_(isolate), active_(isolate->bootstrapper()) { @@ -2569,35 +2595,33 @@ Genesis::Genesis(Isolate* isolate, AddToWeakNativeContextList(*native_context()); isolate->set_context(*native_context()); isolate->counters()->contexts_created_by_snapshot()->Increment(); - Handle<GlobalObject> inner_global; - Handle<JSGlobalProxy> global_proxy = - CreateNewGlobals(global_template, - global_object, - &inner_global); - - HookUpGlobalProxy(inner_global, global_proxy); - HookUpInnerGlobal(inner_global); - native_context()->builtins()->set_global_receiver( + Handle<GlobalObject> global_object; + Handle<JSGlobalProxy> global_proxy = CreateNewGlobals( + global_proxy_template, maybe_global_proxy, &global_object); + + HookUpGlobalProxy(global_object, global_proxy); + HookUpGlobalObject(global_object); + native_context()->builtins()->set_global_proxy( native_context()->global_proxy()); - if (!ConfigureGlobalObjects(global_template)) return; + if (!ConfigureGlobalObjects(global_proxy_template)) return; } else { // We get here if there was no context snapshot. CreateRoots(); Handle<JSFunction> empty_function = CreateEmptyFunction(isolate); CreateStrictModeFunctionMaps(empty_function); - Handle<GlobalObject> inner_global; - Handle<JSGlobalProxy> global_proxy = - CreateNewGlobals(global_template, global_object, &inner_global); - HookUpGlobalProxy(inner_global, global_proxy); - InitializeGlobal(inner_global, empty_function); + Handle<GlobalObject> global_object; + Handle<JSGlobalProxy> global_proxy = CreateNewGlobals( + global_proxy_template, maybe_global_proxy, &global_object); + HookUpGlobalProxy(global_object, global_proxy); + InitializeGlobal(global_object, empty_function); InstallJSFunctionResultCaches(); InitializeNormalizedMapCaches(); if (!InstallNatives()) return; MakeFunctionInstancePrototypeWritable(); - if (!ConfigureGlobalObjects(global_template)) return; + if (!ConfigureGlobalObjects(global_proxy_template)) return; isolate->counters()->contexts_created_from_scratch()->Increment(); } @@ -2608,7 +2632,7 @@ Genesis::Genesis(Isolate* isolate, // We can't (de-)serialize typed arrays currently, but we are lucky: The state // of the random number generator needs no initialization during snapshot // creation time and we don't need trigonometric functions then. 
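NoTrackDoubleFieldsForSerializerScope above gets simpler because restoring the saved flag unconditionally is correct whether or not the constructor changed it; that drops the stored Isolate* and the second serializer query in the destructor. The shape of the pattern in isolation:

#include <cstdio>

bool FLAG_track_double_fields = true;

// Save on entry, restore unconditionally on exit. If the condition was false
// the flag was never modified, so writing the saved value back is a no-op --
// there is no need to remember whether we changed it.
class NoTrackDoubleFieldsScope {
 public:
  explicit NoTrackDoubleFieldsScope(bool serializer_enabled)
      : saved_(FLAG_track_double_fields) {
    if (serializer_enabled) FLAG_track_double_fields = false;
  }
  ~NoTrackDoubleFieldsScope() { FLAG_track_double_fields = saved_; }

 private:
  bool saved_;
};

int main() {
  {
    NoTrackDoubleFieldsScope scope(/*serializer_enabled=*/true);
    std::printf("inside: %d\n", FLAG_track_double_fields);   // 0
  }
  std::printf("outside: %d\n", FLAG_track_double_fields);    // 1
  return 0;
}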
- if (!Serializer::enabled(isolate)) { + if (!isolate->serializer_enabled()) { // Initially seed the per-context random number generator using the // per-isolate random number generator. const int num_elems = 2; @@ -2624,50 +2648,25 @@ Genesis::Genesis(Isolate* isolate, Utils::OpenHandle(*buffer)->set_should_be_freed(true); v8::Local<v8::Uint32Array> ta = v8::Uint32Array::New(buffer, 0, num_elems); Handle<JSBuiltinsObject> builtins(native_context()->builtins()); - Runtime::ForceSetObjectProperty(builtins, - factory()->InternalizeOneByteString( - STATIC_ASCII_VECTOR("rngstate")), - Utils::OpenHandle(*ta), - NONE).Assert(); + Runtime::DefineObjectProperty(builtins, + factory()->InternalizeOneByteString( + STATIC_ASCII_VECTOR("rngstate")), + Utils::OpenHandle(*ta), + NONE).Assert(); // Initialize trigonometric lookup tables and constants. - const int table_num_bytes = TrigonometricLookupTable::table_num_bytes(); - v8::Local<v8::ArrayBuffer> sin_buffer = v8::ArrayBuffer::New( - reinterpret_cast<v8::Isolate*>(isolate), - TrigonometricLookupTable::sin_table(), table_num_bytes); - v8::Local<v8::ArrayBuffer> cos_buffer = v8::ArrayBuffer::New( + const int constants_size = ARRAY_SIZE(fdlibm::MathConstants::constants); + const int table_num_bytes = constants_size * kDoubleSize; + v8::Local<v8::ArrayBuffer> trig_buffer = v8::ArrayBuffer::New( reinterpret_cast<v8::Isolate*>(isolate), - TrigonometricLookupTable::cos_x_interval_table(), table_num_bytes); - v8::Local<v8::Float64Array> sin_table = v8::Float64Array::New( - sin_buffer, 0, TrigonometricLookupTable::table_size()); - v8::Local<v8::Float64Array> cos_table = v8::Float64Array::New( - cos_buffer, 0, TrigonometricLookupTable::table_size()); - - Runtime::ForceSetObjectProperty(builtins, - factory()->InternalizeOneByteString( - STATIC_ASCII_VECTOR("kSinTable")), - Utils::OpenHandle(*sin_table), - NONE).Assert(); - Runtime::ForceSetObjectProperty( - builtins, - factory()->InternalizeOneByteString( - STATIC_ASCII_VECTOR("kCosXIntervalTable")), - Utils::OpenHandle(*cos_table), - NONE).Assert(); - Runtime::ForceSetObjectProperty( - builtins, - factory()->InternalizeOneByteString( - STATIC_ASCII_VECTOR("kSamples")), - factory()->NewHeapNumber( - TrigonometricLookupTable::samples()), - NONE).Assert(); - Runtime::ForceSetObjectProperty( + const_cast<double*>(fdlibm::MathConstants::constants), table_num_bytes); + v8::Local<v8::Float64Array> trig_table = + v8::Float64Array::New(trig_buffer, 0, constants_size); + + Runtime::DefineObjectProperty( builtins, - factory()->InternalizeOneByteString( - STATIC_ASCII_VECTOR("kIndexConvert")), - factory()->NewHeapNumber( - TrigonometricLookupTable::samples_over_pi_half()), - NONE).Assert(); + factory()->InternalizeOneByteString(STATIC_ASCII_VECTOR("kMath")), + Utils::OpenHandle(*trig_table), NONE).Assert(); } result_ = native_context(); @@ -2682,7 +2681,7 @@ int Bootstrapper::ArchiveSpacePerThread() { } -// Archive statics that are thread local. +// Archive statics that are thread-local. char* Bootstrapper::ArchiveState(char* to) { *reinterpret_cast<NestingCounterType*>(to) = nesting_; nesting_ = 0; @@ -2690,7 +2689,7 @@ char* Bootstrapper::ArchiveState(char* to) { } -// Restore statics that are thread local. +// Restore statics that are thread-local. char* Bootstrapper::RestoreState(char* from) { nesting_ = *reinterpret_cast<NestingCounterType*>(from); return from + sizeof(NestingCounterType); @@ -2699,7 +2698,7 @@ char* Bootstrapper::RestoreState(char* from) { // Called when the top-level V8 mutex is destroyed. 
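The hunk above collapses the separate sin/cos lookup tables and scalar constants into one kMath Float64Array whose backing store aliases the static fdlibm::MathConstants table, replacing several per-table properties with a single typed view. A standalone sketch of exposing a static double table through a non-owning view; the two constants shown are illustrative entries, not the actual table:

#include <cstddef>
#include <cstdio>

// Stand-in for fdlibm::MathConstants::constants -- a static, immutable table.
static const double kMathConstants[] = {
    6.36619772367581382433e-01,  // 2/pi, as an example entry
    1.57079632679489661923e+00,  // pi/2, as an example entry
};

// Non-owning view playing the role of the kMath Float64Array: consumers see
// the table through the view, and no copy of the doubles is ever made.
struct DoubleView {
  const double* data;
  size_t length;
};

int main() {
  const size_t constants_size = sizeof(kMathConstants) / sizeof(double);
  DoubleView trig_table{kMathConstants, constants_size};  // aliases, no copy
  std::printf("entries: %zu, first: %.17g\n", trig_table.length,
              trig_table.data[0]);
  return 0;
}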
void Bootstrapper::FreeThreadResources() { - ASSERT(!IsActive()); + DCHECK(!IsActive()); } } } // namespace v8::internal diff --git a/deps/v8/src/bootstrapper.h b/deps/v8/src/bootstrapper.h index f6fcd02ef62..1899d6a1e2f 100644 --- a/deps/v8/src/bootstrapper.h +++ b/deps/v8/src/bootstrapper.h @@ -5,7 +5,7 @@ #ifndef V8_BOOTSTRAPPER_H_ #define V8_BOOTSTRAPPER_H_ -#include "factory.h" +#include "src/factory.h" namespace v8 { namespace internal { @@ -49,7 +49,7 @@ class SourceCodeCache V8_FINAL BASE_EMBEDDED { cache_ = *new_array; Handle<String> str = factory->NewStringFromAscii(name, TENURED).ToHandleChecked(); - ASSERT(!str.is_null()); + DCHECK(!str.is_null()); cache_->set(length, *str); cache_->set(length + 1, *shared); Script::cast(shared->script())->set_type(Smi::FromInt(type_)); @@ -76,8 +76,8 @@ class Bootstrapper V8_FINAL { // Creates a JavaScript Global Context with initial object graph. // The returned value is a global handle casted to V8Environment*. Handle<Context> CreateEnvironment( - Handle<Object> global_object, - v8::Handle<v8::ObjectTemplate> global_template, + MaybeHandle<JSGlobalProxy> maybe_global_proxy, + v8::Handle<v8::ObjectTemplate> global_object_template, v8::ExtensionConfiguration* extensions); // Detach the environment from its outer global object. diff --git a/deps/v8/src/builtins.cc b/deps/v8/src/builtins.cc index d0c1a446a8b..498387353da 100644 --- a/deps/v8/src/builtins.cc +++ b/deps/v8/src/builtins.cc @@ -2,19 +2,21 @@ // Use of this source code is governed by a BSD-style license that can be // found in the LICENSE file. -#include "v8.h" - -#include "api.h" -#include "arguments.h" -#include "bootstrapper.h" -#include "builtins.h" -#include "cpu-profiler.h" -#include "gdb-jit.h" -#include "ic-inl.h" -#include "heap-profiler.h" -#include "mark-compact.h" -#include "stub-cache.h" -#include "vm-state-inl.h" +#include "src/v8.h" + +#include "src/api.h" +#include "src/arguments.h" +#include "src/base/once.h" +#include "src/bootstrapper.h" +#include "src/builtins.h" +#include "src/cpu-profiler.h" +#include "src/gdb-jit.h" +#include "src/heap/mark-compact.h" +#include "src/heap-profiler.h" +#include "src/ic-inl.h" +#include "src/prototype.h" +#include "src/stub-cache.h" +#include "src/vm-state-inl.h" namespace v8 { namespace internal { @@ -29,12 +31,12 @@ class BuiltinArguments : public Arguments { : Arguments(length, arguments) { } Object*& operator[] (int index) { - ASSERT(index < length()); + DCHECK(index < length()); return Arguments::operator[](index); } template <class S> Handle<S> at(int index) { - ASSERT(index < length()); + DCHECK(index < length()); return Arguments::at<S>(index); } @@ -57,7 +59,7 @@ class BuiltinArguments : public Arguments { #ifdef DEBUG void Verify() { // Check we have at least the receiver. - ASSERT(Arguments::length() >= 1); + DCHECK(Arguments::length() >= 1); } #endif }; @@ -74,7 +76,7 @@ int BuiltinArguments<NEEDS_CALLED_FUNCTION>::length() const { template <> void BuiltinArguments<NEEDS_CALLED_FUNCTION>::Verify() { // Check we have at least the receiver and the called function. - ASSERT(Arguments::length() >= 2); + DCHECK(Arguments::length() >= 2); // Make sure cast to JSFunction succeeds. called_function(); } @@ -136,7 +138,7 @@ static inline bool CalledAsConstructor(Isolate* isolate) { // that the state of the stack is as we assume it to be in the // code below. 
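BuiltinArguments above keeps its bounds-checked accessors, now with DCHECK so the checks exist only in debug builds. A standalone model of the wrapper (toy types, not V8's Arguments):

#include <cassert>
#include <cstdio>

// Toy argument block: models BuiltinArguments' bounds-checked access.
class ToyArguments {
 public:
  ToyArguments(int length, const int* values)
      : length_(length), values_(values) {}

  // Mirrors operator[]: range-checked in debug builds, free in release
  // builds where NDEBUG compiles the assert away (like DCHECK).
  int operator[](int index) const {
    assert(index < length_ && "argument index out of range");
    return values_[index];
  }

  // Builtins always receive at least the receiver as argument 0.
  int receiver() const { return (*this)[0]; }
  int length() const { return length_; }

 private:
  int length_;
  const int* values_;
};

int main() {
  int raw[] = {42, 7, 9};  // receiver plus two arguments
  ToyArguments args(3, raw);
  std::printf("receiver=%d arg1=%d argc=%d\n",
              args.receiver(), args[1], args.length());
  return 0;
}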
StackFrameIterator it(isolate); - ASSERT(it.frame()->is_exit()); + DCHECK(it.frame()->is_exit()); it.Advance(); StackFrame* frame = it.frame(); bool reference_result = frame->is_construct(); @@ -153,7 +155,7 @@ static inline bool CalledAsConstructor(Isolate* isolate) { const Smi* kConstructMarker = Smi::FromInt(StackFrame::CONSTRUCT); Object* marker = Memory::Object_at(caller_fp + kMarkerOffset); bool result = (marker == kConstructMarker); - ASSERT_EQ(result, reference_result); + DCHECK_EQ(result, reference_result); return result; } #endif @@ -172,83 +174,11 @@ BUILTIN(EmptyFunction) { } -static void MoveDoubleElements(FixedDoubleArray* dst, - int dst_index, - FixedDoubleArray* src, - int src_index, - int len) { +static void MoveDoubleElements(FixedDoubleArray* dst, int dst_index, + FixedDoubleArray* src, int src_index, int len) { if (len == 0) return; - OS::MemMove(dst->data_start() + dst_index, - src->data_start() + src_index, - len * kDoubleSize); -} - - -static FixedArrayBase* LeftTrimFixedArray(Heap* heap, - FixedArrayBase* elms, - int to_trim) { - ASSERT(heap->CanMoveObjectStart(elms)); - - Map* map = elms->map(); - int entry_size; - if (elms->IsFixedArray()) { - entry_size = kPointerSize; - } else { - entry_size = kDoubleSize; - } - ASSERT(elms->map() != heap->fixed_cow_array_map()); - // For now this trick is only applied to fixed arrays in new and paged space. - // In large object space the object's start must coincide with chunk - // and thus the trick is just not applicable. - ASSERT(!heap->lo_space()->Contains(elms)); - - STATIC_ASSERT(FixedArrayBase::kMapOffset == 0); - STATIC_ASSERT(FixedArrayBase::kLengthOffset == kPointerSize); - STATIC_ASSERT(FixedArrayBase::kHeaderSize == 2 * kPointerSize); - - Object** former_start = HeapObject::RawField(elms, 0); - - const int len = elms->length(); - - if (to_trim * entry_size > FixedArrayBase::kHeaderSize && - elms->IsFixedArray() && - !heap->new_space()->Contains(elms)) { - // If we are doing a big trim in old space then we zap the space that was - // formerly part of the array so that the GC (aided by the card-based - // remembered set) won't find pointers to new-space there. - Object** zap = reinterpret_cast<Object**>(elms->address()); - zap++; // Header of filler must be at least one word so skip that. - for (int i = 1; i < to_trim; i++) { - *zap++ = Smi::FromInt(0); - } - } - // Technically in new space this write might be omitted (except for - // debug mode which iterates through the heap), but to play safer - // we still do it. - // Since left trimming is only performed on pages which are not concurrently - // swept creating a filler object does not require synchronization. - heap->CreateFillerObjectAt(elms->address(), to_trim * entry_size); - - int new_start_index = to_trim * (entry_size / kPointerSize); - former_start[new_start_index] = map; - former_start[new_start_index + 1] = Smi::FromInt(len - to_trim); - - // Maintain marking consistency for HeapObjectIterator and - // IncrementalMarking. 
- int size_delta = to_trim * entry_size; - Address new_start = elms->address() + size_delta; - heap->marking()->TransferMark(elms->address(), new_start); - heap->AdjustLiveBytes(new_start, -size_delta, Heap::FROM_MUTATOR); - - FixedArrayBase* new_elms = - FixedArrayBase::cast(HeapObject::FromAddress(new_start)); - HeapProfiler* profiler = heap->isolate()->heap_profiler(); - if (profiler->is_tracking_object_moves()) { - profiler->ObjectMoveEvent(elms->address(), - new_elms->address(), - new_elms->Size()); - } - return new_elms; + MemMove(dst->data_start() + dst_index, src->data_start() + src_index, + len * kDoubleSize); } @@ -260,12 +190,15 @@ static bool ArrayPrototypeHasNoElements(Heap* heap, // fields. if (array_proto->elements() != heap->empty_fixed_array()) return false; // Object.prototype - Object* proto = array_proto->GetPrototype(); - if (proto == heap->null_value()) return false; - array_proto = JSObject::cast(proto); + PrototypeIterator iter(heap->isolate(), array_proto); + if (iter.IsAtEnd()) { + return false; + } + array_proto = JSObject::cast(iter.GetCurrent()); if (array_proto != native_context->initial_object_prototype()) return false; if (array_proto->elements() != heap->empty_fixed_array()) return false; - return array_proto->GetPrototype()->IsNull(); + iter.Advance(); + return iter.IsAtEnd(); } @@ -305,7 +238,7 @@ static inline MaybeHandle<FixedArrayBase> EnsureJSArrayWithWritableFastElements( if (first_added_arg >= args_length) return handle(array->elements(), isolate); ElementsKind origin_kind = array->map()->elements_kind(); - ASSERT(!IsFastObjectElementsKind(origin_kind)); + DCHECK(!IsFastObjectElementsKind(origin_kind)); ElementsKind target_kind = origin_kind; { DisallowHeapAllocation no_gc; @@ -338,7 +271,8 @@ static inline bool IsJSArrayFastElementMovingAllowed(Heap* heap, Context* native_context = heap->isolate()->context()->native_context(); JSObject* array_proto = JSObject::cast(native_context->array_function()->prototype()); - return receiver->GetPrototype() == array_proto && + PrototypeIterator iter(heap->isolate(), receiver); + return iter.GetCurrent() == array_proto && ArrayPrototypeHasNoElements(heap, native_context, array_proto); } @@ -382,21 +316,23 @@ BUILTIN(ArrayPush) { } Handle<JSArray> array = Handle<JSArray>::cast(receiver); - ASSERT(!array->map()->is_observed()); + int len = Smi::cast(array->length())->value(); + int to_add = args.length() - 1; + if (to_add > 0 && JSArray::WouldChangeReadOnlyLength(array, len + to_add)) { + return CallJsBuiltin(isolate, "ArrayPush", args); + } + DCHECK(!array->map()->is_observed()); ElementsKind kind = array->GetElementsKind(); if (IsFastSmiOrObjectElementsKind(kind)) { Handle<FixedArray> elms = Handle<FixedArray>::cast(elms_obj); - - int len = Smi::cast(array->length())->value(); - int to_add = args.length() - 1; if (to_add == 0) { return Smi::FromInt(len); } // Currently fixed arrays cannot grow too big, so // we should never hit this case. - ASSERT(to_add <= (Smi::kMaxValue - len)); + DCHECK(to_add <= (Smi::kMaxValue - len)); int new_length = len + to_add; @@ -429,16 +365,13 @@ BUILTIN(ArrayPush) { array->set_length(Smi::FromInt(new_length)); return Smi::FromInt(new_length); } else { - int len = Smi::cast(array->length())->value(); int elms_len = elms_obj->length(); - - int to_add = args.length() - 1; if (to_add == 0) { return Smi::FromInt(len); } // Currently fixed arrays cannot grow too big, so // we should never hit this case. 
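ArrayPrototypeHasNoElements and the fast-path predicates above now walk prototype chains through PrototypeIterator instead of repeated GetPrototype calls, centralizing the end-of-chain test. A standalone model over a toy object graph:

#include <cstdio>

struct ToyObject {
  const ToyObject* prototype = nullptr;  // null models the end of the chain
  bool has_elements = false;
};

// Minimal analogue of v8::internal::PrototypeIterator.
class ToyPrototypeIterator {
 public:
  explicit ToyPrototypeIterator(const ToyObject* receiver)
      : current_(receiver->prototype) {}  // default: start at the prototype
  bool IsAtEnd() const { return current_ == nullptr; }
  const ToyObject* GetCurrent() const { return current_; }
  void Advance() { current_ = current_->prototype; }

 private:
  const ToyObject* current_;
};

// The fast-path test: Array.prototype and Object.prototype must both be
// element-free, and nothing may sit beyond Object.prototype.
bool ArrayPrototypeHasNoElements(const ToyObject* array_proto,
                                 const ToyObject* object_proto) {
  if (array_proto->has_elements) return false;
  ToyPrototypeIterator iter(array_proto);
  if (iter.IsAtEnd() || iter.GetCurrent() != object_proto) return false;
  if (object_proto->has_elements) return false;
  iter.Advance();
  return iter.IsAtEnd();
}

int main() {
  ToyObject object_proto;                // prototype == null
  ToyObject array_proto{&object_proto};  // Array.prototype -> Object.prototype
  std::printf("%s\n", ArrayPrototypeHasNoElements(&array_proto, &object_proto)
                          ? "fast path ok" : "bail out");
  return 0;
}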
- ASSERT(to_add <= (Smi::kMaxValue - len)); + DCHECK(to_add <= (Smi::kMaxValue - len)); int new_length = len + to_add; @@ -493,7 +426,7 @@ BUILTIN(ArrayPop) { } Handle<JSArray> array = Handle<JSArray>::cast(receiver); - ASSERT(!array->map()->is_observed()); + DCHECK(!array->map()->is_observed()); int len = Smi::cast(array->length())->value(); if (len == 0) return isolate->heap()->undefined_value(); @@ -525,7 +458,7 @@ BUILTIN(ArrayShift) { return CallJsBuiltin(isolate, "ArrayShift", args); } Handle<JSArray> array = Handle<JSArray>::cast(receiver); - ASSERT(!array->map()->is_observed()); + DCHECK(!array->map()->is_observed()); int len = Smi::cast(array->length())->value(); if (len == 0) return heap->undefined_value(); @@ -539,7 +472,7 @@ BUILTIN(ArrayShift) { } if (heap->CanMoveObjectStart(*elms_obj)) { - array->set_elements(LeftTrimFixedArray(heap, *elms_obj, 1)); + array->set_elements(heap->LeftTrimFixedArray(*elms_obj, 1)); } else { // Shift the elements. if (elms_obj->IsFixedArray()) { @@ -574,18 +507,22 @@ BUILTIN(ArrayUnshift) { return CallJsBuiltin(isolate, "ArrayUnshift", args); } Handle<JSArray> array = Handle<JSArray>::cast(receiver); - ASSERT(!array->map()->is_observed()); + DCHECK(!array->map()->is_observed()); if (!array->HasFastSmiOrObjectElements()) { return CallJsBuiltin(isolate, "ArrayUnshift", args); } - Handle<FixedArray> elms = Handle<FixedArray>::cast(elms_obj); - int len = Smi::cast(array->length())->value(); int to_add = args.length() - 1; int new_length = len + to_add; // Currently fixed arrays cannot grow too big, so // we should never hit this case. - ASSERT(to_add <= (Smi::kMaxValue - len)); + DCHECK(to_add <= (Smi::kMaxValue - len)); + + if (to_add > 0 && JSArray::WouldChangeReadOnlyLength(array, len + to_add)) { + return CallJsBuiltin(isolate, "ArrayUnshift", args); + } + + Handle<FixedArray> elms = Handle<FixedArray>::cast(elms_obj); JSObject::EnsureCanContainElements(array, &args, 1, to_add, DONT_ALLOW_DOUBLE_ELEMENTS); @@ -647,8 +584,8 @@ BUILTIN(ArraySlice) { } else { // Array.slice(arguments, ...) is quite a common idiom (notably more // than 50% of invocations in Web apps). Treat it in C++ as well. - Map* arguments_map = isolate->context()->native_context()-> - sloppy_arguments_boilerplate()->map(); + Map* arguments_map = + isolate->context()->native_context()->sloppy_arguments_map(); bool is_arguments_object_with_fast_elements = receiver->IsJSObject() && @@ -676,7 +613,7 @@ BUILTIN(ArraySlice) { } } - ASSERT(len >= 0); + DCHECK(len >= 0); int n_arguments = args.length() - 1; // Note carefully choosen defaults---if argument is missing, @@ -777,7 +714,7 @@ BUILTIN(ArraySplice) { return CallJsBuiltin(isolate, "ArraySplice", args); } Handle<JSArray> array = Handle<JSArray>::cast(receiver); - ASSERT(!array->map()->is_observed()); + DCHECK(!array->map()->is_observed()); int len = Smi::cast(array->length())->value(); @@ -811,7 +748,7 @@ BUILTIN(ArraySplice) { // compatibility. int actual_delete_count; if (n_arguments == 1) { - ASSERT(len - actual_start >= 0); + DCHECK(len - actual_start >= 0); actual_delete_count = len - actual_start; } else { int value = 0; // ToInteger(undefined) == 0 @@ -880,7 +817,7 @@ BUILTIN(ArraySplice) { if (heap->CanMoveObjectStart(*elms_obj)) { // On the fast path we move the start of the object in memory. - elms_obj = handle(LeftTrimFixedArray(heap, *elms_obj, delta), isolate); + elms_obj = handle(heap->LeftTrimFixedArray(*elms_obj, delta)); } else { // This is the slow path. 
We are going to move the elements to the left // by copying them. For trimmed values we store the hole. @@ -918,7 +855,7 @@ BUILTIN(ArraySplice) { Handle<FixedArray> elms = Handle<FixedArray>::cast(elms_obj); // Currently fixed arrays cannot grow too big, so // we should never hit this case. - ASSERT((item_count - actual_delete_count) <= (Smi::kMaxValue - len)); + DCHECK((item_count - actual_delete_count) <= (Smi::kMaxValue - len)); // Check if array need to grow. if (new_length > elms->length()) { @@ -995,7 +932,7 @@ BUILTIN(ArrayConcat) { JSObject::cast(native_context->array_function()->prototype()); if (!ArrayPrototypeHasNoElements(heap, native_context, array_proto)) { AllowHeapAllocation allow_allocation; - return CallJsBuiltin(isolate, "ArrayConcat", args); + return CallJsBuiltin(isolate, "ArrayConcatJS", args); } // Iterate through all the arguments performing checks @@ -1003,11 +940,11 @@ BUILTIN(ArrayConcat) { bool is_holey = false; for (int i = 0; i < n_arguments; i++) { Object* arg = args[i]; - if (!arg->IsJSArray() || - !JSArray::cast(arg)->HasFastElements() || - JSArray::cast(arg)->GetPrototype() != array_proto) { + PrototypeIterator iter(isolate, arg); + if (!arg->IsJSArray() || !JSArray::cast(arg)->HasFastElements() || + iter.GetCurrent() != array_proto) { AllowHeapAllocation allow_allocation; - return CallJsBuiltin(isolate, "ArrayConcat", args); + return CallJsBuiltin(isolate, "ArrayConcatJS", args); } int len = Smi::cast(JSArray::cast(arg)->length())->value(); @@ -1016,11 +953,11 @@ BUILTIN(ArrayConcat) { STATIC_ASSERT(FixedArray::kMaxLength < kHalfOfMaxInt); USE(kHalfOfMaxInt); result_len += len; - ASSERT(result_len >= 0); + DCHECK(result_len >= 0); if (result_len > FixedDoubleArray::kMaxLength) { AllowHeapAllocation allow_allocation; - return CallJsBuiltin(isolate, "ArrayConcat", args); + return CallJsBuiltin(isolate, "ArrayConcatJS", args); } ElementsKind arg_kind = JSArray::cast(arg)->map()->elements_kind(); @@ -1061,14 +998,14 @@ BUILTIN(ArrayConcat) { } } - ASSERT(j == result_len); + DCHECK(j == result_len); return *result_array; } // ----------------------------------------------------------------------------- -// Strict mode poison pills +// Generator and strict mode poison pills BUILTIN(StrictModePoisonPill) { @@ -1078,6 +1015,13 @@ BUILTIN(StrictModePoisonPill) { } +BUILTIN(GeneratorPoisonPill) { + HandleScope scope(isolate); + return isolate->Throw(*isolate->factory()->NewTypeError( + "generator_poison_pill", HandleVector<Object>(NULL, 0))); +} + + // ----------------------------------------------------------------------------- // @@ -1088,11 +1032,12 @@ BUILTIN(StrictModePoisonPill) { static inline Object* FindHidden(Heap* heap, Object* object, FunctionTemplateInfo* type) { - if (type->IsTemplateFor(object)) return object; - Object* proto = object->GetPrototype(heap->isolate()); - if (proto->IsJSObject() && - JSObject::cast(proto)->map()->is_hidden_prototype()) { - return FindHidden(heap, proto, type); + for (PrototypeIterator iter(heap->isolate(), object, + PrototypeIterator::START_AT_RECEIVER); + !iter.IsAtEnd(PrototypeIterator::END_AT_NON_HIDDEN); iter.Advance()) { + if (type->IsTemplateFor(iter.GetCurrent())) { + return iter.GetCurrent(); + } } return heap->null_value(); } @@ -1143,12 +1088,12 @@ static inline Object* TypeCheck(Heap* heap, template <bool is_construct> MUST_USE_RESULT static Object* HandleApiCallHelper( BuiltinArguments<NEEDS_CALLED_FUNCTION> args, Isolate* isolate) { - ASSERT(is_construct == CalledAsConstructor(isolate)); + 
DCHECK(is_construct == CalledAsConstructor(isolate)); Heap* heap = isolate->heap(); HandleScope scope(isolate); Handle<JSFunction> function = args.called_function(); - ASSERT(function->shared()->IsApiFunction()); + DCHECK(function->shared()->IsApiFunction()); Handle<FunctionTemplateInfo> fun_data( function->shared()->get_api_func_data(), isolate); @@ -1162,10 +1107,8 @@ MUST_USE_RESULT static Object* HandleApiCallHelper( SharedFunctionInfo* shared = function->shared(); if (shared->strict_mode() == SLOPPY && !shared->native()) { Object* recv = args[0]; - ASSERT(!recv->IsNull()); - if (recv->IsUndefined()) { - args[0] = function->context()->global_object()->global_receiver(); - } + DCHECK(!recv->IsNull()); + if (recv->IsUndefined()) args[0] = function->global_proxy(); } Object* raw_holder = TypeCheck(heap, args.length(), &args[0], *fun_data); @@ -1188,7 +1131,7 @@ MUST_USE_RESULT static Object* HandleApiCallHelper( Object* result; LOG(isolate, ApiObjectAccess("call", JSObject::cast(*args.receiver()))); - ASSERT(raw_holder->IsJSObject()); + DCHECK(raw_holder->IsJSObject()); FunctionCallbackArguments custom(isolate, data_obj, @@ -1233,7 +1176,7 @@ MUST_USE_RESULT static Object* HandleApiCallAsFunctionOrConstructor( BuiltinArguments<NO_EXTRA_ARGUMENTS> args) { // Non-functions are never called as constructors. Even if this is an object // called as a constructor the delegate call is not a construct call. - ASSERT(!CalledAsConstructor(isolate)); + DCHECK(!CalledAsConstructor(isolate)); Heap* heap = isolate->heap(); Handle<Object> receiver = args.receiver(); @@ -1243,12 +1186,12 @@ MUST_USE_RESULT static Object* HandleApiCallAsFunctionOrConstructor( // Get the invocation callback from the function descriptor that was // used to create the called object. - ASSERT(obj->map()->has_instance_call_handler()); + DCHECK(obj->map()->has_instance_call_handler()); JSFunction* constructor = JSFunction::cast(obj->map()->constructor()); - ASSERT(constructor->shared()->IsApiFunction()); + DCHECK(constructor->shared()->IsApiFunction()); Object* handler = constructor->shared()->get_api_func_data()->instance_call_handler(); - ASSERT(!handler->IsUndefined()); + DCHECK(!handler->IsUndefined()); CallHandlerInfo* call_data = CallHandlerInfo::cast(handler); Object* callback_obj = call_data->callback(); v8::FunctionCallback callback = @@ -1306,7 +1249,7 @@ static void Generate_LoadIC_Normal(MacroAssembler* masm) { static void Generate_LoadIC_Getter_ForDeopt(MacroAssembler* masm) { - LoadStubCompiler::GenerateLoadViaGetterForDeopt(masm); + NamedLoadHandlerCompiler::GenerateLoadViaGetterForDeopt(masm); } @@ -1371,7 +1314,7 @@ static void Generate_StoreIC_Normal(MacroAssembler* masm) { static void Generate_StoreIC_Setter_ForDeopt(MacroAssembler* masm) { - StoreStubCompiler::GenerateStoreViaSetterForDeopt(masm); + NamedStoreHandlerCompiler::GenerateStoreViaSetterForDeopt(masm); } @@ -1421,68 +1364,68 @@ static void Generate_KeyedStoreIC_SloppyArguments(MacroAssembler* masm) { static void Generate_CallICStub_DebugBreak(MacroAssembler* masm) { - Debug::GenerateCallICStubDebugBreak(masm); + DebugCodegen::GenerateCallICStubDebugBreak(masm); } static void Generate_LoadIC_DebugBreak(MacroAssembler* masm) { - Debug::GenerateLoadICDebugBreak(masm); + DebugCodegen::GenerateLoadICDebugBreak(masm); } static void Generate_StoreIC_DebugBreak(MacroAssembler* masm) { - Debug::GenerateStoreICDebugBreak(masm); + DebugCodegen::GenerateStoreICDebugBreak(masm); } static void Generate_KeyedLoadIC_DebugBreak(MacroAssembler* masm) { - 
Debug::GenerateKeyedLoadICDebugBreak(masm); + DebugCodegen::GenerateKeyedLoadICDebugBreak(masm); } static void Generate_KeyedStoreIC_DebugBreak(MacroAssembler* masm) { - Debug::GenerateKeyedStoreICDebugBreak(masm); + DebugCodegen::GenerateKeyedStoreICDebugBreak(masm); } static void Generate_CompareNilIC_DebugBreak(MacroAssembler* masm) { - Debug::GenerateCompareNilICDebugBreak(masm); + DebugCodegen::GenerateCompareNilICDebugBreak(masm); } static void Generate_Return_DebugBreak(MacroAssembler* masm) { - Debug::GenerateReturnDebugBreak(masm); + DebugCodegen::GenerateReturnDebugBreak(masm); } static void Generate_CallFunctionStub_DebugBreak(MacroAssembler* masm) { - Debug::GenerateCallFunctionStubDebugBreak(masm); + DebugCodegen::GenerateCallFunctionStubDebugBreak(masm); } static void Generate_CallConstructStub_DebugBreak(MacroAssembler* masm) { - Debug::GenerateCallConstructStubDebugBreak(masm); + DebugCodegen::GenerateCallConstructStubDebugBreak(masm); } static void Generate_CallConstructStub_Recording_DebugBreak( MacroAssembler* masm) { - Debug::GenerateCallConstructStubRecordDebugBreak(masm); + DebugCodegen::GenerateCallConstructStubRecordDebugBreak(masm); } static void Generate_Slot_DebugBreak(MacroAssembler* masm) { - Debug::GenerateSlotDebugBreak(masm); + DebugCodegen::GenerateSlotDebugBreak(masm); } static void Generate_PlainReturn_LiveEdit(MacroAssembler* masm) { - Debug::GeneratePlainReturnLiveEdit(masm); + DebugCodegen::GeneratePlainReturnLiveEdit(masm); } static void Generate_FrameDropper_LiveEdit(MacroAssembler* masm) { - Debug::GenerateFrameDropperLiveEdit(masm); + DebugCodegen::GenerateFrameDropperLiveEdit(masm); } @@ -1528,11 +1471,11 @@ struct BuiltinDesc { class BuiltinFunctionTable { public: BuiltinDesc* functions() { - CallOnce(&once_, &Builtins::InitBuiltinFunctionTable); + base::CallOnce(&once_, &Builtins::InitBuiltinFunctionTable); return functions_; } - OnceType once_; + base::OnceType once_; BuiltinDesc functions_[Builtins::builtin_count + 1]; friend class Builtins; @@ -1594,7 +1537,7 @@ void Builtins::InitBuiltinFunctionTable() { void Builtins::SetUp(Isolate* isolate, bool create_heap_objects) { - ASSERT(!initialized_); + DCHECK(!initialized_); // Create a scope for the handles in the builtins. HandleScope scope(isolate); @@ -1604,9 +1547,13 @@ void Builtins::SetUp(Isolate* isolate, bool create_heap_objects) { // For now we generate builtin adaptor code into a stack-allocated // buffer, before copying it into individual code objects. Be careful // with alignment, some platforms don't like unaligned code. - // TODO(jbramley): I had to increase the size of this buffer from 8KB because - // we can generate a lot of debug code on ARM64. - union { int force_alignment; byte buffer[16*KB]; } u; +#ifdef DEBUG + // We can generate a lot of debug code on Arm64. + const size_t buffer_size = 32*KB; +#else + const size_t buffer_size = 8*KB; +#endif + union { int force_alignment; byte buffer[buffer_size]; } u; // Traverse the list of builtins and generate an adaptor in a // separate code object for each one. @@ -1619,7 +1566,7 @@ void Builtins::SetUp(Isolate* isolate, bool create_heap_objects) { // We pass all arguments to the generator, but it may not use all of // them. This works because the first arguments are on top of the // stack. - ASSERT(!masm.has_frame()); + DCHECK(!masm.has_frame()); g(&masm, functions[i].name, functions[i].extra_args); // Move the code into the object heap. 
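Builtins::SetUp above replaces the blanket 16KB scratch buffer with 32KB in debug builds (debug code on arm64 is much larger, per the removed TODO) and 8KB otherwise, keeping the union trick that forces the byte buffer to int alignment. The union in isolation:

#include <cstddef>
#include <cstdio>

typedef unsigned char byte;
const int KB = 1024;

int main() {
#ifdef DEBUG
  // Debug code is much larger, so size the scratch buffer generously there
  // and keep release builds lean.
  const size_t buffer_size = 32 * KB;
#else
  const size_t buffer_size = 8 * KB;
#endif
  // The otherwise-unused int member drags the whole union up to int
  // alignment, so code emitted into `buffer` never starts at an address
  // that some platforms would fault on.
  union { int force_alignment; byte buffer[buffer_size]; } u;
  u.force_alignment = 0;  // silence "unused" warnings; alignment is the point
  std::printf("buffer at %p, size %zu\n",
              static_cast<void*>(u.buffer), buffer_size);
  return 0;
}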
CodeDesc desc; @@ -1630,14 +1577,15 @@ void Builtins::SetUp(Isolate* isolate, bool create_heap_objects) { // Log the event and add the code to the builtins array. PROFILE(isolate, CodeCreateEvent(Logger::BUILTIN_TAG, *code, functions[i].s_name)); - GDBJIT(AddCode(GDBJITInterface::BUILTIN, functions[i].s_name, *code)); builtins_[i] = *code; + if (code->kind() == Code::BUILTIN) code->set_builtin_index(i); #ifdef ENABLE_DISASSEMBLER if (FLAG_print_builtin_code) { CodeTracer::Scope trace_scope(isolate->GetCodeTracer()); - PrintF(trace_scope.file(), "Builtin: %s\n", functions[i].s_name); - code->Disassemble(functions[i].s_name, trace_scope.file()); - PrintF(trace_scope.file(), "\n"); + OFStream os(trace_scope.file()); + os << "Builtin: " << functions[i].s_name << "\n"; + code->Disassemble(functions[i].s_name, os); + os << "\n"; } #endif } else { @@ -1677,12 +1625,12 @@ const char* Builtins::Lookup(byte* pc) { void Builtins::Generate_InterruptCheck(MacroAssembler* masm) { - masm->TailCallRuntime(Runtime::kHiddenInterrupt, 0, 1); + masm->TailCallRuntime(Runtime::kInterrupt, 0, 1); } void Builtins::Generate_StackCheck(MacroAssembler* masm) { - masm->TailCallRuntime(Runtime::kHiddenStackGuard, 0, 1); + masm->TailCallRuntime(Runtime::kStackGuard, 0, 1); } diff --git a/deps/v8/src/builtins.h b/deps/v8/src/builtins.h index e6b60c732cd..a28dd01cfe0 100644 --- a/deps/v8/src/builtins.h +++ b/deps/v8/src/builtins.h @@ -59,7 +59,8 @@ enum BuiltinExtraArguments { V(HandleApiCallAsFunction, NO_EXTRA_ARGUMENTS) \ V(HandleApiCallAsConstructor, NO_EXTRA_ARGUMENTS) \ \ - V(StrictModePoisonPill, NO_EXTRA_ARGUMENTS) + V(StrictModePoisonPill, NO_EXTRA_ARGUMENTS) \ + V(GeneratorPoisonPill, NO_EXTRA_ARGUMENTS) // Define list of builtins implemented in assembly. #define BUILTIN_LIST_A(V) \ @@ -67,8 +68,6 @@ enum BuiltinExtraArguments { kNoExtraICState) \ V(InOptimizationQueue, BUILTIN, UNINITIALIZED, \ kNoExtraICState) \ - V(JSConstructStubCountdown, BUILTIN, UNINITIALIZED, \ - kNoExtraICState) \ V(JSConstructStubGeneric, BUILTIN, UNINITIALIZED, \ kNoExtraICState) \ V(JSConstructStubApi, BUILTIN, UNINITIALIZED, \ @@ -308,8 +307,8 @@ class Builtins { static const char* GetName(JavaScript id) { return javascript_names_[id]; } const char* name(int index) { - ASSERT(index >= 0); - ASSERT(index < builtin_count); + DCHECK(index >= 0); + DCHECK(index < builtin_count); return names_[index]; } static int GetArgumentsCount(JavaScript id) { return javascript_argc_[id]; } @@ -339,7 +338,6 @@ class Builtins { static void Generate_InOptimizationQueue(MacroAssembler* masm); static void Generate_CompileOptimized(MacroAssembler* masm); static void Generate_CompileOptimizedConcurrent(MacroAssembler* masm); - static void Generate_JSConstructStubCountdown(MacroAssembler* masm); static void Generate_JSConstructStubGeneric(MacroAssembler* masm); static void Generate_JSConstructStubApi(MacroAssembler* masm); static void Generate_JSEntryTrampoline(MacroAssembler* masm); diff --git a/deps/v8/src/cached-powers.cc b/deps/v8/src/cached-powers.cc index 8a009684826..5726c496060 100644 --- a/deps/v8/src/cached-powers.cc +++ b/deps/v8/src/cached-powers.cc @@ -2,14 +2,14 @@ // Use of this source code is governed by a BSD-style license that can be // found in the LICENSE file. 
-#include <stdarg.h> #include <limits.h> +#include <stdarg.h> #include <cmath> -#include "../include/v8stdint.h" -#include "globals.h" -#include "checks.h" -#include "cached-powers.h" +#include "include/v8stdint.h" +#include "src/base/logging.h" +#include "src/cached-powers.h" +#include "src/globals.h" namespace v8 { namespace internal { @@ -133,10 +133,10 @@ void PowersOfTenCache::GetCachedPowerForBinaryExponentRange( int foo = kCachedPowersOffset; int index = (foo + static_cast<int>(k) - 1) / kDecimalExponentDistance + 1; - ASSERT(0 <= index && index < kCachedPowersLength); + DCHECK(0 <= index && index < kCachedPowersLength); CachedPower cached_power = kCachedPowers[index]; - ASSERT(min_exponent <= cached_power.binary_exponent); - ASSERT(cached_power.binary_exponent <= max_exponent); + DCHECK(min_exponent <= cached_power.binary_exponent); + DCHECK(cached_power.binary_exponent <= max_exponent); *decimal_exponent = cached_power.decimal_exponent; *power = DiyFp(cached_power.significand, cached_power.binary_exponent); } @@ -145,15 +145,15 @@ void PowersOfTenCache::GetCachedPowerForBinaryExponentRange( void PowersOfTenCache::GetCachedPowerForDecimalExponent(int requested_exponent, DiyFp* power, int* found_exponent) { - ASSERT(kMinDecimalExponent <= requested_exponent); - ASSERT(requested_exponent < kMaxDecimalExponent + kDecimalExponentDistance); + DCHECK(kMinDecimalExponent <= requested_exponent); + DCHECK(requested_exponent < kMaxDecimalExponent + kDecimalExponentDistance); int index = (requested_exponent + kCachedPowersOffset) / kDecimalExponentDistance; CachedPower cached_power = kCachedPowers[index]; *power = DiyFp(cached_power.significand, cached_power.binary_exponent); *found_exponent = cached_power.decimal_exponent; - ASSERT(*found_exponent <= requested_exponent); - ASSERT(requested_exponent < *found_exponent + kDecimalExponentDistance); + DCHECK(*found_exponent <= requested_exponent); + DCHECK(requested_exponent < *found_exponent + kDecimalExponentDistance); } } } // namespace v8::internal diff --git a/deps/v8/src/cached-powers.h b/deps/v8/src/cached-powers.h index b58924fb36c..bfe36351ba0 100644 --- a/deps/v8/src/cached-powers.h +++ b/deps/v8/src/cached-powers.h @@ -5,7 +5,8 @@ #ifndef V8_CACHED_POWERS_H_ #define V8_CACHED_POWERS_H_ -#include "diy-fp.h" +#include "src/base/logging.h" +#include "src/diy-fp.h" namespace v8 { namespace internal { diff --git a/deps/v8/src/char-predicates-inl.h b/deps/v8/src/char-predicates-inl.h index 16a89f4c3d7..71d1b06a921 100644 --- a/deps/v8/src/char-predicates-inl.h +++ b/deps/v8/src/char-predicates-inl.h @@ -5,7 +5,7 @@ #ifndef V8_CHAR_PREDICATES_INL_H_ #define V8_CHAR_PREDICATES_INL_H_ -#include "char-predicates.h" +#include "src/char-predicates.h" namespace v8 { namespace internal { @@ -30,7 +30,7 @@ inline bool IsLineFeed(uc32 c) { inline bool IsInRange(int value, int lower_limit, int higher_limit) { - ASSERT(lower_limit <= higher_limit); + DCHECK(lower_limit <= higher_limit); return static_cast<unsigned int>(value - lower_limit) <= static_cast<unsigned int>(higher_limit - lower_limit); } diff --git a/deps/v8/src/char-predicates.h b/deps/v8/src/char-predicates.h index 84247e43ce0..b7c5d42320f 100644 --- a/deps/v8/src/char-predicates.h +++ b/deps/v8/src/char-predicates.h @@ -5,7 +5,7 @@ #ifndef V8_CHAR_PREDICATES_H_ #define V8_CHAR_PREDICATES_H_ -#include "unicode.h" +#include "src/unicode.h" namespace v8 { namespace internal { diff --git a/deps/v8/src/checks.cc b/deps/v8/src/checks.cc index 4667facad89..e5a4caa6c8a 100644 --- 
a/deps/v8/src/checks.cc +++ b/deps/v8/src/checks.cc @@ -2,90 +2,59 @@ // Use of this source code is governed by a BSD-style license that can be // found in the LICENSE file. -#include "checks.h" +#include "src/checks.h" -#if V8_LIBC_GLIBC || V8_OS_BSD -# include <cxxabi.h> -# include <execinfo.h> -#elif V8_OS_QNX -# include <backtrace.h> -#endif // V8_LIBC_GLIBC || V8_OS_BSD -#include <stdio.h> - -#include "platform.h" -#include "v8.h" +#include "src/v8.h" namespace v8 { namespace internal { intptr_t HeapObjectTagMask() { return kHeapObjectTagMask; } -// Attempts to dump a backtrace (if supported). -void DumpBacktrace() { -#if V8_LIBC_GLIBC || V8_OS_BSD - void* trace[100]; - int size = backtrace(trace, ARRAY_SIZE(trace)); - char** symbols = backtrace_symbols(trace, size); - OS::PrintError("\n==== C stack trace ===============================\n\n"); - if (size == 0) { - OS::PrintError("(empty)\n"); - } else if (symbols == NULL) { - OS::PrintError("(no symbols)\n"); - } else { - for (int i = 1; i < size; ++i) { - OS::PrintError("%2d: ", i); - char mangled[201]; - if (sscanf(symbols[i], "%*[^(]%*[(]%200[^)+]", mangled) == 1) { // NOLINT - int status; - size_t length; - char* demangled = abi::__cxa_demangle(mangled, NULL, &length, &status); - OS::PrintError("%s\n", demangled != NULL ? demangled : mangled); - free(demangled); - } else { - OS::PrintError("??\n"); - } - } - } - free(symbols); -#elif V8_OS_QNX - char out[1024]; - bt_accessor_t acc; - bt_memmap_t memmap; - bt_init_accessor(&acc, BT_SELF); - bt_load_memmap(&acc, &memmap); - bt_sprn_memmap(&memmap, out, sizeof(out)); - OS::PrintError(out); - bt_addr_t trace[100]; - int size = bt_get_backtrace(&acc, trace, ARRAY_SIZE(trace)); - OS::PrintError("\n==== C stack trace ===============================\n\n"); - if (size == 0) { - OS::PrintError("(empty)\n"); - } else { - bt_sprnf_addrs(&memmap, trace, size, const_cast<char*>("%a\n"), - out, sizeof(out), NULL); - OS::PrintError(out); - } - bt_unload_memmap(&memmap); - bt_release_accessor(&acc); -#endif // V8_LIBC_GLIBC || V8_OS_BSD +} } // namespace v8::internal + + +static bool CheckEqualsStrict(volatile double* exp, volatile double* val) { + v8::internal::DoubleRepresentation exp_rep(*exp); + v8::internal::DoubleRepresentation val_rep(*val); + if (std::isnan(exp_rep.value) && std::isnan(val_rep.value)) return true; + return exp_rep.bits == val_rep.bits; } -} } // namespace v8::internal +void CheckEqualsHelper(const char* file, int line, const char* expected_source, + double expected, const char* value_source, + double value) { + // Force values to 64 bit memory to truncate 80 bit precision on IA32. + volatile double* exp = new double[1]; + *exp = expected; + volatile double* val = new double[1]; + *val = value; + if (!CheckEqualsStrict(exp, val)) { + V8_Fatal(file, line, + "CHECK_EQ(%s, %s) failed\n# Expected: %f\n# Found: %f", + expected_source, value_source, *exp, *val); + } + delete[] exp; + delete[] val; +} -// Contains protection against recursive calls (faults while handling faults). -extern "C" void V8_Fatal(const char* file, int line, const char* format, ...) 
{
-  fflush(stdout);
-  fflush(stderr);
-  i::OS::PrintError("\n\n#\n# Fatal error in %s, line %d\n# ", file, line);
-  va_list arguments;
-  va_start(arguments, format);
-  i::OS::VPrintError(format, arguments);
-  va_end(arguments);
-  i::OS::PrintError("\n#\n");
-  v8::internal::DumpBacktrace();
-  fflush(stderr);
-  i::OS::Abort();
+
+
+void CheckNonEqualsHelper(const char* file, int line,
+                          const char* expected_source, double expected,
+                          const char* value_source, double value) {
+  // Force values to 64 bit memory to truncate 80 bit precision on IA32.
+  volatile double* exp = new double[1];
+  *exp = expected;
+  volatile double* val = new double[1];
+  *val = value;
+  if (CheckEqualsStrict(exp, val)) {
+    V8_Fatal(file, line,
+             "CHECK_NE(%s, %s) failed\n# Value: %f",
+             expected_source, value_source, *val);
+  }
+  delete[] exp;
+  delete[] val;
 }
diff --git a/deps/v8/src/checks.h b/deps/v8/src/checks.h
index c2ccc8193ec..6303855fc86 100644
--- a/deps/v8/src/checks.h
+++ b/deps/v8/src/checks.h
@@ -5,326 +5,58 @@
 #ifndef V8_CHECKS_H_
 #define V8_CHECKS_H_
 
-#include <string.h>
+#include "src/base/logging.h"
 
-#include "../include/v8stdint.h"
-
-extern "C" void V8_Fatal(const char* file, int line, const char* format, ...);
-
-
-// The FATAL, UNREACHABLE and UNIMPLEMENTED macros are useful during
-// development, but they should not be relied on in the final product.
 #ifdef DEBUG
-inline void CheckEqualsHelper(const char* file, int line, - const char* expected_source, - int64_t expected, - const char* value_source, - int64_t value) { - if (expected != value) { - // Print int64_t values in hex, as two int32s, - // to avoid platform-dependencies. - V8_Fatal(file, line, - "CHECK_EQ(%s, %s) failed\n#" - " Expected: 0x%08x%08x\n# Found: 0x%08x%08x", - expected_source, value_source, - static_cast<uint32_t>(expected >> 32), - static_cast<uint32_t>(expected), - static_cast<uint32_t>(value >> 32), - static_cast<uint32_t>(value)); - } -} - - -// Helper function used by the CHECK_NE function when given int -// arguments. Should not be called directly. -inline void CheckNonEqualsHelper(const char* file, - int line, - const char* unexpected_source, - int unexpected, - const char* value_source, - int value) { - if (unexpected == value) { - V8_Fatal(file, line, "CHECK_NE(%s, %s) failed\n# Value: %i", - unexpected_source, value_source, value); - } -} - - -// Helper function used by the CHECK function when given string -// arguments. Should not be called directly. -inline void CheckEqualsHelper(const char* file, - int line, - const char* expected_source, - const char* expected, - const char* value_source, - const char* value) { - if ((expected == NULL && value != NULL) || - (expected != NULL && value == NULL) || - (expected != NULL && value != NULL && strcmp(expected, value) != 0)) { - V8_Fatal(file, line, - "CHECK_EQ(%s, %s) failed\n# Expected: %s\n# Found: %s", - expected_source, value_source, expected, value); - } -} - - -inline void CheckNonEqualsHelper(const char* file, - int line, - const char* expected_source, - const char* expected, - const char* value_source, - const char* value) { - if (expected == value || - (expected != NULL && value != NULL && strcmp(expected, value) == 0)) { - V8_Fatal(file, line, "CHECK_NE(%s, %s) failed\n# Value: %s", - expected_source, value_source, value); - } -} - - -// Helper function used by the CHECK function when given pointer -// arguments. Should not be called directly. -inline void CheckEqualsHelper(const char* file, - int line, - const char* expected_source, - const void* expected, - const char* value_source, - const void* value) { - if (expected != value) { - V8_Fatal(file, line, - "CHECK_EQ(%s, %s) failed\n# Expected: %p\n# Found: %p", - expected_source, value_source, - expected, value); - } -} - - -inline void CheckNonEqualsHelper(const char* file, - int line, - const char* expected_source, - const void* expected, - const char* value_source, - const void* value) { - if (expected == value) { - V8_Fatal(file, line, "CHECK_NE(%s, %s) failed\n# Value: %p", - expected_source, value_source, value); - } -} - - -// Helper function used by the CHECK function when given floating -// point arguments. Should not be called directly. -inline void CheckEqualsHelper(const char* file, - int line, - const char* expected_source, - double expected, - const char* value_source, - double value) { - // Force values to 64 bit memory to truncate 80 bit precision on IA32. 
- volatile double* exp = new double[1]; - *exp = expected; - volatile double* val = new double[1]; - *val = value; - if (*exp != *val) { - V8_Fatal(file, line, - "CHECK_EQ(%s, %s) failed\n# Expected: %f\n# Found: %f", - expected_source, value_source, *exp, *val); - } - delete[] exp; - delete[] val; -} - - -inline void CheckNonEqualsHelper(const char* file, - int line, - const char* expected_source, - int64_t expected, - const char* value_source, - int64_t value) { - if (expected == value) { - V8_Fatal(file, line, - "CHECK_EQ(%s, %s) failed\n# Expected: %f\n# Found: %f", - expected_source, value_source, expected, value); - } -} - - -inline void CheckNonEqualsHelper(const char* file, - int line, - const char* expected_source, - double expected, - const char* value_source, - double value) { - // Force values to 64 bit memory to truncate 80 bit precision on IA32. - volatile double* exp = new double[1]; - *exp = expected; - volatile double* val = new double[1]; - *val = value; - if (*exp == *val) { - V8_Fatal(file, line, - "CHECK_NE(%s, %s) failed\n# Value: %f", - expected_source, value_source, *val); - } - delete[] exp; - delete[] val; -} - - -#define CHECK_EQ(expected, value) CheckEqualsHelper(__FILE__, __LINE__, \ - #expected, expected, #value, value) - - -#define CHECK_NE(unexpected, value) CheckNonEqualsHelper(__FILE__, __LINE__, \ - #unexpected, unexpected, #value, value) - - -#define CHECK_GT(a, b) CHECK((a) > (b)) -#define CHECK_GE(a, b) CHECK((a) >= (b)) -#define CHECK_LT(a, b) CHECK((a) < (b)) -#define CHECK_LE(a, b) CHECK((a) <= (b)) - +class Value; +template <class T> class Handle; -// Use C++11 static_assert if possible, which gives error -// messages that are easier to understand on first sight. -#if V8_HAS_CXX11_STATIC_ASSERT -#define STATIC_CHECK(test) static_assert(test, #test) -#else -// This is inspired by the static assertion facility in boost. This -// is pretty magical. If it causes you trouble on a platform you may -// find a fix in the boost code. -template <bool> class StaticAssertion; -template <> class StaticAssertion<true> { }; -// This macro joins two tokens. If one of the tokens is a macro the -// helper call causes it to be resolved before joining. -#define SEMI_STATIC_JOIN(a, b) SEMI_STATIC_JOIN_HELPER(a, b) -#define SEMI_STATIC_JOIN_HELPER(a, b) a##b -// Causes an error during compilation of the condition is not -// statically known to be true. It is formulated as a typedef so that -// it can be used wherever a typedef can be used. Beware that this -// actually causes each use to introduce a new defined type with a -// name depending on the source line. -template <int> class StaticAssertionHelper { }; -#define STATIC_CHECK(test) \ - typedef \ - StaticAssertionHelper<sizeof(StaticAssertion<static_cast<bool>((test))>)> \ - SEMI_STATIC_JOIN(__StaticAssertTypedef__, __LINE__) V8_UNUSED -#endif +namespace internal { +intptr_t HeapObjectTagMask(); -#ifdef DEBUG -#ifndef OPTIMIZED_DEBUG -#define ENABLE_SLOW_ASSERTS 1 -#endif -#endif - -namespace v8 { -namespace internal { -#ifdef ENABLE_SLOW_ASSERTS -#define SLOW_ASSERT(condition) \ +#ifdef ENABLE_SLOW_DCHECKS +#define SLOW_DCHECK(condition) \ CHECK(!v8::internal::FLAG_enable_slow_asserts || (condition)) extern bool FLAG_enable_slow_asserts; #else -#define SLOW_ASSERT(condition) ((void) 0) +#define SLOW_DCHECK(condition) ((void) 0) const bool FLAG_enable_slow_asserts = false; #endif -// Exposed for making debugging easier (to see where your function is being -// called, just add a call to DumpBacktrace). 
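// Aside: a compilable sketch (not from the patch) of the pre-C++11
// StaticAssertion trick in the STATIC_CHECK fallback removed above. Only the
// <true> specialization is complete, so applying sizeof to
// StaticAssertion<false> is ill-formed and a failing condition stops the
// build at compile time.
template <bool> class StaticAssertion;        // primary template: never defined
template <> class StaticAssertion<true> {};   // complete only for true
typedef char AssertIntWidth[sizeof(StaticAssertion<(sizeof(int) >= 4)>)];  // OK
// typedef char Broken[sizeof(StaticAssertion<(sizeof(int) == 0)>)];  // error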
-void DumpBacktrace(); - } } // namespace v8::internal -// The ASSERT macro is equivalent to CHECK except that it only -// generates code in debug builds. -#ifdef DEBUG -#define ASSERT_RESULT(expr) CHECK(expr) -#define ASSERT(condition) CHECK(condition) -#define ASSERT_EQ(v1, v2) CHECK_EQ(v1, v2) -#define ASSERT_NE(v1, v2) CHECK_NE(v1, v2) -#define ASSERT_GE(v1, v2) CHECK_GE(v1, v2) -#define ASSERT_LT(v1, v2) CHECK_LT(v1, v2) -#define ASSERT_LE(v1, v2) CHECK_LE(v1, v2) -#else -#define ASSERT_RESULT(expr) (expr) -#define ASSERT(condition) ((void) 0) -#define ASSERT_EQ(v1, v2) ((void) 0) -#define ASSERT_NE(v1, v2) ((void) 0) -#define ASSERT_GE(v1, v2) ((void) 0) -#define ASSERT_LT(v1, v2) ((void) 0) -#define ASSERT_LE(v1, v2) ((void) 0) -#endif -// Static asserts has no impact on runtime performance, so they can be -// safely enabled in release mode. Moreover, the ((void) 0) expression -// obeys different syntax rules than typedef's, e.g. it can't appear -// inside class declaration, this leads to inconsistency between debug -// and release compilation modes behavior. -#define STATIC_ASSERT(test) STATIC_CHECK(test) +void CheckNonEqualsHelper(const char* file, int line, + const char* expected_source, double expected, + const char* value_source, double value); -#define ASSERT_NOT_NULL(p) ASSERT_NE(NULL, p) +void CheckEqualsHelper(const char* file, int line, const char* expected_source, + double expected, const char* value_source, double value); -// "Extra checks" are lightweight checks that are enabled in some release -// builds. -#ifdef ENABLE_EXTRA_CHECKS -#define EXTRA_CHECK(condition) CHECK(condition) -#else -#define EXTRA_CHECK(condition) ((void) 0) -#endif +void CheckNonEqualsHelper(const char* file, int line, + const char* unexpected_source, + v8::Handle<v8::Value> unexpected, + const char* value_source, + v8::Handle<v8::Value> value); + +void CheckEqualsHelper(const char* file, + int line, + const char* expected_source, + v8::Handle<v8::Value> expected, + const char* value_source, + v8::Handle<v8::Value> value); + +#define DCHECK_TAG_ALIGNED(address) \ + DCHECK((reinterpret_cast<intptr_t>(address) & HeapObjectTagMask()) == 0) + +#define DCHECK_SIZE_TAG_ALIGNED(size) DCHECK((size & HeapObjectTagMask()) == 0) #endif // V8_CHECKS_H_ diff --git a/deps/v8/src/circular-queue-inl.h b/deps/v8/src/circular-queue-inl.h index 14910a17e46..2f06f6c496f 100644 --- a/deps/v8/src/circular-queue-inl.h +++ b/deps/v8/src/circular-queue-inl.h @@ -5,7 +5,7 @@ #ifndef V8_CIRCULAR_QUEUE_INL_H_ #define V8_CIRCULAR_QUEUE_INL_H_ -#include "circular-queue.h" +#include "src/circular-queue.h" namespace v8 { namespace internal { @@ -24,8 +24,8 @@ SamplingCircularQueue<T, L>::~SamplingCircularQueue() { template<typename T, unsigned L> T* SamplingCircularQueue<T, L>::Peek() { - MemoryBarrier(); - if (Acquire_Load(&dequeue_pos_->marker) == kFull) { + base::MemoryBarrier(); + if (base::Acquire_Load(&dequeue_pos_->marker) == kFull) { return &dequeue_pos_->record; } return NULL; @@ -34,15 +34,15 @@ T* SamplingCircularQueue<T, L>::Peek() { template<typename T, unsigned L> void SamplingCircularQueue<T, L>::Remove() { - Release_Store(&dequeue_pos_->marker, kEmpty); + base::Release_Store(&dequeue_pos_->marker, kEmpty); dequeue_pos_ = Next(dequeue_pos_); } template<typename T, unsigned L> T* SamplingCircularQueue<T, L>::StartEnqueue() { - MemoryBarrier(); - if (Acquire_Load(&enqueue_pos_->marker) == kEmpty) { + base::MemoryBarrier(); + if (base::Acquire_Load(&enqueue_pos_->marker) == kEmpty) { return &enqueue_pos_->record; } 
return NULL; @@ -51,7 +51,7 @@ T* SamplingCircularQueue<T, L>::StartEnqueue() { template<typename T, unsigned L> void SamplingCircularQueue<T, L>::FinishEnqueue() { - Release_Store(&enqueue_pos_->marker, kFull); + base::Release_Store(&enqueue_pos_->marker, kFull); enqueue_pos_ = Next(enqueue_pos_); } diff --git a/deps/v8/src/circular-queue.h b/deps/v8/src/circular-queue.h index 81e80d2aa44..c312c597c67 100644 --- a/deps/v8/src/circular-queue.h +++ b/deps/v8/src/circular-queue.h @@ -5,8 +5,8 @@ #ifndef V8_CIRCULAR_QUEUE_H_ #define V8_CIRCULAR_QUEUE_H_ -#include "atomicops.h" -#include "v8globals.h" +#include "src/base/atomicops.h" +#include "src/globals.h" namespace v8 { namespace internal { @@ -50,7 +50,7 @@ class SamplingCircularQueue { struct V8_ALIGNED(PROCESSOR_CACHE_LINE_SIZE) Entry { Entry() : marker(kEmpty) {} T record; - Atomic32 marker; + base::Atomic32 marker; }; Entry* Next(Entry* entry); diff --git a/deps/v8/src/code-stubs-hydrogen.cc b/deps/v8/src/code-stubs-hydrogen.cc index 68c864176b7..027517a835c 100644 --- a/deps/v8/src/code-stubs-hydrogen.cc +++ b/deps/v8/src/code-stubs-hydrogen.cc @@ -2,11 +2,12 @@ // Use of this source code is governed by a BSD-style license that can be // found in the LICENSE file. -#include "v8.h" +#include "src/v8.h" -#include "code-stubs.h" -#include "hydrogen.h" -#include "lithium.h" +#include "src/code-stubs.h" +#include "src/field-index.h" +#include "src/hydrogen.h" +#include "src/lithium.h" namespace v8 { namespace internal { @@ -17,7 +18,7 @@ static LChunk* OptimizeGraph(HGraph* graph) { DisallowHandleAllocation no_handles; DisallowHandleDereference no_deref; - ASSERT(graph != NULL); + DCHECK(graph != NULL); BailoutReason bailout_reason = kNoReason; if (!graph->Optimize(&bailout_reason)) { FATAL(GetBailoutReason(bailout_reason)); @@ -38,19 +39,20 @@ class CodeStubGraphBuilderBase : public HGraphBuilder { info_(stub, isolate), context_(NULL) { descriptor_ = stub->GetInterfaceDescriptor(); - parameters_.Reset(new HParameter*[descriptor_->register_param_count_]); + int parameter_count = descriptor_->GetEnvironmentParameterCount(); + parameters_.Reset(new HParameter*[parameter_count]); } virtual bool BuildGraph(); protected: virtual HValue* BuildCodeStub() = 0; HParameter* GetParameter(int parameter) { - ASSERT(parameter < descriptor_->register_param_count_); + DCHECK(parameter < descriptor_->GetEnvironmentParameterCount()); return parameters_[parameter]; } HValue* GetArgumentsLength() { // This is initialized in BuildGraph() - ASSERT(arguments_length_ != NULL); + DCHECK(arguments_length_ != NULL); return arguments_length_; } CompilationInfo* info() { return &info_; } @@ -59,9 +61,9 @@ class CodeStubGraphBuilderBase : public HGraphBuilder { Isolate* isolate() { return info_.isolate(); } HLoadNamedField* BuildLoadNamedField(HValue* object, - Representation representation, - int offset, - bool is_inobject); + FieldIndex index); + void BuildStoreNamedField(HValue* object, HValue* value, FieldIndex index, + Representation representation); enum ArgumentClass { NONE, @@ -117,30 +119,29 @@ bool CodeStubGraphBuilderBase::BuildGraph() { isolate()->GetHTracer()->TraceCompilation(&info_); } - int param_count = descriptor_->register_param_count_; + int param_count = descriptor_->GetEnvironmentParameterCount(); HEnvironment* start_environment = graph()->start_environment(); HBasicBlock* next_block = CreateBasicBlock(start_environment); Goto(next_block); next_block->SetJoinId(BailoutId::StubEntry()); set_current_block(next_block); - bool runtime_stack_params = 
descriptor_->stack_parameter_count_.is_valid(); + bool runtime_stack_params = descriptor_->stack_parameter_count().is_valid(); HInstruction* stack_parameter_count = NULL; for (int i = 0; i < param_count; ++i) { - Representation r = descriptor_->IsParameterCountRegister(i) - ? Representation::Integer32() - : Representation::Tagged(); - HParameter* param = Add<HParameter>(i, HParameter::REGISTER_PARAMETER, r); + Representation r = descriptor_->GetEnvironmentParameterRepresentation(i); + HParameter* param = Add<HParameter>(i, + HParameter::REGISTER_PARAMETER, r); start_environment->Bind(i, param); parameters_[i] = param; - if (descriptor_->IsParameterCountRegister(i)) { + if (descriptor_->IsEnvironmentParameterCountRegister(i)) { param->set_type(HType::Smi()); stack_parameter_count = param; arguments_length_ = stack_parameter_count; } } - ASSERT(!runtime_stack_params || arguments_length_ != NULL); + DCHECK(!runtime_stack_params || arguments_length_ != NULL); if (!runtime_stack_params) { stack_parameter_count = graph()->GetConstantMinus1(); arguments_length_ = graph()->GetConstant0(); @@ -158,16 +159,16 @@ bool CodeStubGraphBuilderBase::BuildGraph() { // We might have extra expressions to pop from the stack in addition to the // arguments above. HInstruction* stack_pop_count = stack_parameter_count; - if (descriptor_->function_mode_ == JS_FUNCTION_STUB_MODE) { + if (descriptor_->function_mode() == JS_FUNCTION_STUB_MODE) { if (!stack_parameter_count->IsConstant() && - descriptor_->hint_stack_parameter_count_ < 0) { + descriptor_->hint_stack_parameter_count() < 0) { HInstruction* constant_one = graph()->GetConstant1(); stack_pop_count = AddUncasted<HAdd>(stack_parameter_count, constant_one); stack_pop_count->ClearFlag(HValue::kCanOverflow); // TODO(mvstanton): verify that stack_parameter_count+1 really fits in a // smi. } else { - int count = descriptor_->hint_stack_parameter_count_; + int count = descriptor_->hint_stack_parameter_count(); stack_pop_count = Add<HConstant>(count); } } @@ -250,11 +251,10 @@ Handle<Code> HydrogenCodeStub::GenerateLightweightMissCode() { template <class Stub> static Handle<Code> DoGenerateCode(Stub* stub) { Isolate* isolate = stub->isolate(); - CodeStub::Major major_key = - static_cast<HydrogenCodeStub*>(stub)->MajorKey(); + CodeStub::Major major_key = static_cast<CodeStub*>(stub)->MajorKey(); CodeStubInterfaceDescriptor* descriptor = isolate->code_stub_interface_descriptor(major_key); - if (descriptor->register_param_count_ < 0) { + if (!descriptor->IsInitialized()) { stub->InitializeInterfaceDescriptor(descriptor); } @@ -262,20 +262,22 @@ static Handle<Code> DoGenerateCode(Stub* stub) { // the runtime that is significantly faster than using the standard // stub-failure deopt mechanism. if (stub->IsUninitialized() && descriptor->has_miss_handler()) { - ASSERT(!descriptor->stack_parameter_count_.is_valid()); + DCHECK(!descriptor->stack_parameter_count().is_valid()); return stub->GenerateLightweightMissCode(); } - ElapsedTimer timer; + base::ElapsedTimer timer; if (FLAG_profile_hydrogen_code_stub_compilation) { timer.Start(); } CodeStubGraphBuilder<Stub> builder(isolate, stub); LChunk* chunk = OptimizeGraph(builder.CreateGraph()); + // TODO(yangguo) remove this once the code serializer handles code stubs. 
+  if (FLAG_serialize_toplevel) chunk->info()->PrepareForSerializing();
   Handle<Code> code = chunk->Codegen();
   if (FLAG_profile_hydrogen_code_stub_compilation) {
-    double ms = timer.Elapsed().InMillisecondsF();
-    PrintF("[Lazy compilation of %s took %0.3f ms]\n",
-           stub->GetName().get(), ms);
+    OFStream os(stdout);
+    os << "[Lazy compilation of " << stub << " took "
+       << timer.Elapsed().InMillisecondsF() << " ms]" << endl;
   }
   return code;
 }
@@ -298,7 +300,7 @@ HValue* CodeStubGraphBuilder<ToNumberStub>::BuildCodeStub() {
     // Convert the parameter to number using the builtin.
     HValue* function = AddLoadJSBuiltin(Builtins::TO_NUMBER);
-    Add<HPushArgument>(value);
+    Add<HPushArguments>(value);
     Push(Add<HInvokeFunction>(function, 1));
   if_number.End();
@@ -330,8 +332,10 @@ HValue* CodeStubGraphBuilder<FastCloneShallowArrayStub>::BuildCodeStub() {
   Factory* factory = isolate()->factory();
   HValue* undefined = graph()->GetConstantUndefined();
   AllocationSiteMode alloc_site_mode = casted_stub()->allocation_site_mode();
-  FastCloneShallowArrayStub::Mode mode = casted_stub()->mode();
-  int length = casted_stub()->length();
+
+  // This stub is very performance sensitive, the generated code must be tuned
+  // so that it doesn't build an eager frame.
+  info()->MarkMustNotHaveEagerFrame();
   HInstruction* allocation_site = Add<HLoadKeyed>(GetParameter(0),
                                                   GetParameter(1),
@@ -346,46 +350,40 @@ HValue* CodeStubGraphBuilder<FastCloneShallowArrayStub>::BuildCodeStub() {
       AllocationSite::kTransitionInfoOffset);
   HInstruction* boilerplate = Add<HLoadNamedField>(
       allocation_site, static_cast<HValue*>(NULL), access);
-  HValue* push_value;
-  if (mode == FastCloneShallowArrayStub::CLONE_ANY_ELEMENTS) {
-    HValue* elements = AddLoadElements(boilerplate);
-
-    IfBuilder if_fixed_cow(this);
-    if_fixed_cow.If<HCompareMap>(elements, factory->fixed_cow_array_map());
-    if_fixed_cow.Then();
-    push_value = BuildCloneShallowArray(boilerplate,
-                                        allocation_site,
-                                        alloc_site_mode,
-                                        FAST_ELEMENTS,
-                                        0/*copy-on-write*/);
-    environment()->Push(push_value);
-    if_fixed_cow.Else();
-
-    IfBuilder if_fixed(this);
-    if_fixed.If<HCompareMap>(elements, factory->fixed_array_map());
-    if_fixed.Then();
-    push_value = BuildCloneShallowArray(boilerplate,
-                                        allocation_site,
-                                        alloc_site_mode,
-                                        FAST_ELEMENTS,
-                                        length);
-    environment()->Push(push_value);
-    if_fixed.Else();
-    push_value = BuildCloneShallowArray(boilerplate,
-                                        allocation_site,
-                                        alloc_site_mode,
-                                        FAST_DOUBLE_ELEMENTS,
-                                        length);
-    environment()->Push(push_value);
-  } else {
-    ElementsKind elements_kind = casted_stub()->ComputeElementsKind();
-    push_value = BuildCloneShallowArray(boilerplate,
-                                        allocation_site,
-                                        alloc_site_mode,
-                                        elements_kind,
-                                        length);
-    environment()->Push(push_value);
-  }
+  HValue* elements = AddLoadElements(boilerplate);
+  HValue* capacity = AddLoadFixedArrayLength(elements);
+  IfBuilder zero_capacity(this);
+  zero_capacity.If<HCompareNumericAndBranch>(capacity, graph()->GetConstant0(),
+                                             Token::EQ);
+  zero_capacity.Then();
+  Push(BuildCloneShallowArrayEmpty(boilerplate,
+                                   allocation_site,
+                                   alloc_site_mode));
+  zero_capacity.Else();
+  IfBuilder if_fixed_cow(this);
+  if_fixed_cow.If<HCompareMap>(elements, factory->fixed_cow_array_map());
+  if_fixed_cow.Then();
+  Push(BuildCloneShallowArrayCow(boilerplate,
+                                 allocation_site,
+                                 alloc_site_mode,
+                                 FAST_ELEMENTS));
+  if_fixed_cow.Else();
+  IfBuilder if_fixed(this);
+  if_fixed.If<HCompareMap>(elements, factory->fixed_array_map());
+  if_fixed.Then();
+  Push(BuildCloneShallowArrayNonEmpty(boilerplate,
+
allocation_site, + alloc_site_mode, + FAST_ELEMENTS)); + + if_fixed.Else(); + Push(BuildCloneShallowArrayNonEmpty(boilerplate, + allocation_site, + alloc_site_mode, + FAST_DOUBLE_ELEMENTS)); + if_fixed.End(); + if_fixed_cow.End(); + zero_capacity.End(); checker.ElseDeopt("Uninitialized boilerplate literals"); checker.End(); @@ -447,7 +445,7 @@ HValue* CodeStubGraphBuilder<FastCloneShallowObjectStub>::BuildCodeStub() { boilerplate, static_cast<HValue*>(NULL), access)); } - ASSERT(FLAG_allocation_site_pretenuring || (size == object_size)); + DCHECK(FLAG_allocation_site_pretenuring || (size == object_size)); if (FLAG_allocation_site_pretenuring) { BuildCreateAllocationMemento( object, Add<HConstant>(object_size), allocation_site); @@ -504,7 +502,7 @@ HValue* CodeStubGraphBuilder<CreateAllocationSiteStub>::BuildCodeStub() { // Store an empty fixed array for the code dependency. HConstant* empty_fixed_array = Add<HConstant>(isolate()->factory()->empty_fixed_array()); - HStoreNamedField* store = Add<HStoreNamedField>( + Add<HStoreNamedField>( object, HObjectAccess::ForAllocationSiteOffset( AllocationSite::kDependentCodeOffset), @@ -516,10 +514,15 @@ HValue* CodeStubGraphBuilder<CreateAllocationSiteStub>::BuildCodeStub() { HValue* site = Add<HLoadNamedField>( site_list, static_cast<HValue*>(NULL), HObjectAccess::ForAllocationSiteList()); - store = Add<HStoreNamedField>(object, + // TODO(mvstanton): This is a store to a weak pointer, which we may want to + // mark as such in order to skip the write barrier, once we have a unified + // system for weakness. For now we decided to keep it like this because having + // an initial write barrier backed store makes this pointer strong until the + // next GC, and allocation sites are designed to survive several GCs anyway. + Add<HStoreNamedField>( + object, HObjectAccess::ForAllocationSiteOffset(AllocationSite::kWeakNextOffset), site); - store->SkipWriteBarrier(); Add<HStoreNamedField>(site_list, HObjectAccess::ForAllocationSiteList(), object); @@ -537,29 +540,35 @@ Handle<Code> CreateAllocationSiteStub::GenerateCode() { template <> -HValue* CodeStubGraphBuilder<KeyedLoadFastElementStub>::BuildCodeStub() { +HValue* CodeStubGraphBuilder<LoadFastElementStub>::BuildCodeStub() { HInstruction* load = BuildUncheckedMonomorphicElementAccess( - GetParameter(0), GetParameter(1), NULL, - casted_stub()->is_js_array(), casted_stub()->elements_kind(), - LOAD, NEVER_RETURN_HOLE, STANDARD_STORE); + GetParameter(KeyedLoadIC::kReceiverIndex), + GetParameter(KeyedLoadIC::kNameIndex), + NULL, + casted_stub()->is_js_array(), + casted_stub()->elements_kind(), + LOAD, + NEVER_RETURN_HOLE, + STANDARD_STORE); return load; } -Handle<Code> KeyedLoadFastElementStub::GenerateCode() { +Handle<Code> LoadFastElementStub::GenerateCode() { return DoGenerateCode(this); } HLoadNamedField* CodeStubGraphBuilderBase::BuildLoadNamedField( - HValue* object, - Representation representation, - int offset, - bool is_inobject) { - HObjectAccess access = is_inobject + HValue* object, FieldIndex index) { + Representation representation = index.is_double() + ? Representation::Double() + : Representation::Tagged(); + int offset = index.offset(); + HObjectAccess access = index.is_inobject() ? HObjectAccess::ForObservableJSObjectOffset(offset, representation) : HObjectAccess::ForBackingStoreOffset(offset, representation); - if (representation.IsDouble()) { + if (index.is_double()) { // Load the heap number. 
object = Add<HLoadNamedField>( object, static_cast<HValue*>(NULL), @@ -573,10 +582,7 @@ HLoadNamedField* CodeStubGraphBuilderBase::BuildLoadNamedField( template<> HValue* CodeStubGraphBuilder<LoadFieldStub>::BuildCodeStub() { - return BuildLoadNamedField(GetParameter(0), - casted_stub()->representation(), - casted_stub()->offset(), - casted_stub()->is_inobject()); + return BuildLoadNamedField(GetParameter(0), casted_stub()->index()); } @@ -585,12 +591,65 @@ Handle<Code> LoadFieldStub::GenerateCode() { } -template<> +template <> +HValue* CodeStubGraphBuilder<LoadConstantStub>::BuildCodeStub() { + HValue* map = AddLoadMap(GetParameter(0), NULL); + HObjectAccess descriptors_access = HObjectAccess::ForObservableJSObjectOffset( + Map::kDescriptorsOffset, Representation::Tagged()); + HValue* descriptors = + Add<HLoadNamedField>(map, static_cast<HValue*>(NULL), descriptors_access); + HObjectAccess value_access = HObjectAccess::ForObservableJSObjectOffset( + DescriptorArray::GetValueOffset(casted_stub()->descriptor())); + return Add<HLoadNamedField>(descriptors, static_cast<HValue*>(NULL), + value_access); +} + + +Handle<Code> LoadConstantStub::GenerateCode() { return DoGenerateCode(this); } + + +void CodeStubGraphBuilderBase::BuildStoreNamedField( + HValue* object, HValue* value, FieldIndex index, + Representation representation) { + DCHECK(!index.is_double() || representation.IsDouble()); + int offset = index.offset(); + HObjectAccess access = + index.is_inobject() + ? HObjectAccess::ForObservableJSObjectOffset(offset, representation) + : HObjectAccess::ForBackingStoreOffset(offset, representation); + + if (representation.IsDouble()) { + // Load the heap number. + object = Add<HLoadNamedField>( + object, static_cast<HValue*>(NULL), + access.WithRepresentation(Representation::Tagged())); + // Store the double value into it. 
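// Aside: a toy model (hypothetical layout, much simpler than V8's) of the
// two-step double store described here: the object's field holds a pointer
// to a heap-allocated "HeapNumber" box, so the code first loads the box and
// then overwrites the 64-bit payload inside it.
struct HeapNumber { double value; };
struct BoxedObject { HeapNumber* double_field; };
inline void StoreDoubleField(BoxedObject* object, double value) {
  HeapNumber* box = object->double_field;  // step 1: load the heap number
  box->value = value;                      // step 2: store the double into it
}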
+    access = HObjectAccess::ForHeapNumberValue();
+  } else if (representation.IsHeapObject()) {
+    BuildCheckHeapObject(value);
+  }
+
+  Add<HStoreNamedField>(object, access, value, INITIALIZING_STORE);
+}
+
+
+template <>
+HValue* CodeStubGraphBuilder<StoreFieldStub>::BuildCodeStub() {
+  BuildStoreNamedField(GetParameter(0), GetParameter(2), casted_stub()->index(),
+                       casted_stub()->representation());
+  return GetParameter(2);
+}
+
+
+Handle<Code> StoreFieldStub::GenerateCode() { return DoGenerateCode(this); }
+
+
+template <>
 HValue* CodeStubGraphBuilder<StringLengthStub>::BuildCodeStub() {
-  HValue* string = BuildLoadNamedField(
-      GetParameter(0), Representation::Tagged(), JSValue::kValueOffset, true);
-  return BuildLoadNamedField(
-      string, Representation::Tagged(), String::kLengthOffset, true);
+  HValue* string = BuildLoadNamedField(GetParameter(0),
+      FieldIndex::ForInObjectOffset(JSValue::kValueOffset));
+  return BuildLoadNamedField(string,
+      FieldIndex::ForInObjectOffset(String::kLengthOffset));
 }
@@ -600,9 +659,11 @@ Handle<Code> StringLengthStub::GenerateCode() {
 template <>
-HValue* CodeStubGraphBuilder<KeyedStoreFastElementStub>::BuildCodeStub() {
+HValue* CodeStubGraphBuilder<StoreFastElementStub>::BuildCodeStub() {
   BuildUncheckedMonomorphicElementAccess(
-      GetParameter(0), GetParameter(1), GetParameter(2),
+      GetParameter(StoreIC::kReceiverIndex),
+      GetParameter(StoreIC::kNameIndex),
+      GetParameter(StoreIC::kValueIndex),
       casted_stub()->is_js_array(), casted_stub()->elements_kind(),
       STORE, NEVER_RETURN_HOLE, casted_stub()->store_mode());
@@ -610,7 +671,7 @@ HValue* CodeStubGraphBuilder<KeyedStoreFastElementStub>::BuildCodeStub() {
 }
-Handle<Code> KeyedStoreFastElementStub::GenerateCode() {
+Handle<Code> StoreFastElementStub::GenerateCode() {
   return DoGenerateCode(this);
 }
@@ -644,6 +705,9 @@ HValue* CodeStubGraphBuilderBase::BuildArrayConstructor(
   HValue* result = NULL;
   switch (argument_class) {
     case NONE:
+      // This stub is very performance sensitive, the generated code must be
+      // tuned so that it doesn't build an eager frame.
+      info()->MarkMustNotHaveEagerFrame();
       result = array_builder.AllocateEmptyArray();
       break;
     case SINGLE:
@@ -667,6 +731,9 @@ HValue* CodeStubGraphBuilderBase::BuildInternalArrayConstructor(
   HValue* result = NULL;
   switch (argument_class) {
     case NONE:
+      // This stub is very performance sensitive, the generated code must be
+      // tuned so that it doesn't build an eager frame.
+      info()->MarkMustNotHaveEagerFrame();
       result = array_builder.AllocateEmptyArray();
       break;
     case SINGLE:
@@ -717,10 +784,11 @@ HValue* CodeStubGraphBuilderBase::BuildArrayNArgumentsConstructor(
       ? JSArrayBuilder::FILL_WITH_HOLE
       : JSArrayBuilder::DONT_FILL_WITH_HOLE;
   HValue* new_object = array_builder->AllocateArray(checked_length,
+                                                    max_alloc_length,
                                                     checked_length,
                                                     fill_mode);
   HValue* elements = array_builder->GetElementsLocation();
-  ASSERT(elements != NULL);
+  DCHECK(elements != NULL);
   // Now populate the elements correctly.
LoopBuilder builder(this, @@ -854,7 +922,7 @@ HValue* CodeStubGraphBuilder<BinaryOpICStub>::BuildCodeInitializedStub() { Type* right_type = state.GetRightType(zone()); Type* result_type = state.GetResultType(zone()); - ASSERT(!left_type->Is(Type::None()) && !right_type->Is(Type::None()) && + DCHECK(!left_type->Is(Type::None()) && !right_type->Is(Type::None()) && (state.HasSideEffects() || !result_type->Is(Type::None()))); HValue* result = NULL; @@ -915,20 +983,6 @@ HValue* CodeStubGraphBuilder<BinaryOpICStub>::BuildCodeInitializedStub() { // If we encounter a generic argument, the number conversion is // observable, thus we cannot afford to bail out after the fact. if (!state.HasSideEffects()) { - if (result_type->Is(Type::SignedSmall())) { - if (state.op() == Token::SHR) { - // TODO(olivf) Replace this by a SmiTagU Instruction. - // 0x40000000: this number would convert to negative when interpreting - // the register as signed value; - IfBuilder if_of(this); - if_of.IfNot<HCompareNumericAndBranch>(result, - Add<HConstant>(static_cast<int>(SmiValuesAre32Bits() - ? 0x80000000 : 0x40000000)), Token::EQ_STRICT); - if_of.Then(); - if_of.ElseDeopt("UInt->Smi oveflow"); - if_of.End(); - } - } result = EnforceNumberType(result, result_type); } @@ -1010,14 +1064,31 @@ Handle<Code> StringAddStub::GenerateCode() { template <> HValue* CodeStubGraphBuilder<ToBooleanStub>::BuildCodeInitializedStub() { ToBooleanStub* stub = casted_stub(); + HValue* true_value = NULL; + HValue* false_value = NULL; + + switch (stub->GetMode()) { + case ToBooleanStub::RESULT_AS_SMI: + true_value = graph()->GetConstant1(); + false_value = graph()->GetConstant0(); + break; + case ToBooleanStub::RESULT_AS_ODDBALL: + true_value = graph()->GetConstantTrue(); + false_value = graph()->GetConstantFalse(); + break; + case ToBooleanStub::RESULT_AS_INVERSE_ODDBALL: + true_value = graph()->GetConstantFalse(); + false_value = graph()->GetConstantTrue(); + break; + } IfBuilder if_true(this); if_true.If<HBranch>(GetParameter(0), stub->GetTypes()); if_true.Then(); - if_true.Return(graph()->GetConstant1()); + if_true.Return(true_value); if_true.Else(); if_true.End(); - return graph()->GetConstant0(); + return false_value; } @@ -1034,7 +1105,7 @@ HValue* CodeStubGraphBuilder<StoreGlobalStub>::BuildCodeInitializedStub() { Handle<PropertyCell> placeholder_cell = isolate()->factory()->NewPropertyCell(placeholer_value); - HParameter* value = GetParameter(2); + HParameter* value = GetParameter(StoreIC::kValueIndex); if (stub->check_global()) { // Check that the map of the global has not changed: use a placeholder map @@ -1081,10 +1152,10 @@ Handle<Code> StoreGlobalStub::GenerateCode() { template<> HValue* CodeStubGraphBuilder<ElementsTransitionAndStoreStub>::BuildCodeStub() { - HValue* value = GetParameter(0); - HValue* map = GetParameter(1); - HValue* key = GetParameter(2); - HValue* object = GetParameter(3); + HValue* value = GetParameter(ElementsTransitionAndStoreStub::kValueIndex); + HValue* map = GetParameter(ElementsTransitionAndStoreStub::kMapIndex); + HValue* key = GetParameter(ElementsTransitionAndStoreStub::kKeyIndex); + HValue* object = GetParameter(ElementsTransitionAndStoreStub::kObjectIndex); if (FLAG_trace_elements_transitions) { // Tracing elements transitions is the job of the runtime. @@ -1178,7 +1249,7 @@ HInstruction* CodeStubGraphBuilderBase::LoadFromOptimizedCodeMap( int field_offset) { // By making sure to express these loads in the form [<hvalue> + constant] // the keyed load can be hoisted. 
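// Aside: a scalar sketch (illustrative, not from the patch) of the
// [base + constant] shape the comment above is after: keeping the variable
// part (the iterator) in a single hoistable base expression means repeated
// field loads differ only by a compile-time constant offset, a form that
// common-subexpression elimination and loop-invariant code motion handle well.
static int LoadField(const int* optimized_code_map, int iterator,
                     int field_offset) {
  const int* entry = optimized_code_map + iterator;  // hoistable base
  return entry[field_offset];                        // [base + constant] load
}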
- ASSERT(field_offset >= 0 && field_offset < SharedFunctionInfo::kEntryLength); + DCHECK(field_offset >= 0 && field_offset < SharedFunctionInfo::kEntryLength); HValue* field_slot = iterator; if (field_offset > 0) { HValue* field_offset_value = Add<HConstant>(field_offset); @@ -1333,7 +1404,7 @@ HValue* CodeStubGraphBuilder<FastNewContextStub>::BuildCodeStub() { // Allocate the context in new space. HAllocate* function_context = Add<HAllocate>( Add<HConstant>(length * kPointerSize + FixedArray::kHeaderSize), - HType::Tagged(), NOT_TENURED, FIXED_ARRAY_TYPE); + HType::HeapObject(), NOT_TENURED, FIXED_ARRAY_TYPE); // Set up the object header. AddStoreMapConstant(function_context, @@ -1378,18 +1449,22 @@ Handle<Code> FastNewContextStub::GenerateCode() { } -template<> -HValue* CodeStubGraphBuilder<KeyedLoadDictionaryElementStub>::BuildCodeStub() { - HValue* receiver = GetParameter(0); - HValue* key = GetParameter(1); +template <> +HValue* CodeStubGraphBuilder<LoadDictionaryElementStub>::BuildCodeStub() { + HValue* receiver = GetParameter(KeyedLoadIC::kReceiverIndex); + HValue* key = GetParameter(KeyedLoadIC::kNameIndex); Add<HCheckSmi>(key); - return BuildUncheckedDictionaryElementLoad(receiver, key); + HValue* elements = AddLoadElements(receiver); + + HValue* hash = BuildElementIndexHash(key); + + return BuildUncheckedDictionaryElementLoad(receiver, elements, key, hash); } -Handle<Code> KeyedLoadDictionaryElementStub::GenerateCode() { +Handle<Code> LoadDictionaryElementStub::GenerateCode() { return DoGenerateCode(this); } @@ -1401,6 +1476,8 @@ HValue* CodeStubGraphBuilder<RegExpConstructResultStub>::BuildCodeStub() { HValue* index = GetParameter(RegExpConstructResultStub::kIndex); HValue* input = GetParameter(RegExpConstructResultStub::kInput); + info()->MarkMustNotHaveEagerFrame(); + return BuildRegExpConstructResult(length, index, input); } @@ -1410,4 +1487,311 @@ Handle<Code> RegExpConstructResultStub::GenerateCode() { } +template <> +class CodeStubGraphBuilder<KeyedLoadGenericStub> + : public CodeStubGraphBuilderBase { + public: + CodeStubGraphBuilder(Isolate* isolate, KeyedLoadGenericStub* stub) + : CodeStubGraphBuilderBase(isolate, stub) {} + + protected: + virtual HValue* BuildCodeStub(); + + void BuildElementsKindLimitCheck(HGraphBuilder::IfBuilder* if_builder, + HValue* bit_field2, + ElementsKind kind); + + void BuildFastElementLoad(HGraphBuilder::IfBuilder* if_builder, + HValue* receiver, + HValue* key, + HValue* instance_type, + HValue* bit_field2, + ElementsKind kind); + + void BuildExternalElementLoad(HGraphBuilder::IfBuilder* if_builder, + HValue* receiver, + HValue* key, + HValue* instance_type, + HValue* bit_field2, + ElementsKind kind); + + KeyedLoadGenericStub* casted_stub() { + return static_cast<KeyedLoadGenericStub*>(stub()); + } +}; + + +void CodeStubGraphBuilder<KeyedLoadGenericStub>::BuildElementsKindLimitCheck( + HGraphBuilder::IfBuilder* if_builder, HValue* bit_field2, + ElementsKind kind) { + ElementsKind next_kind = static_cast<ElementsKind>(kind + 1); + HValue* kind_limit = Add<HConstant>( + static_cast<int>(Map::ElementsKindBits::encode(next_kind))); + + if_builder->If<HCompareNumericAndBranch>(bit_field2, kind_limit, Token::LT); + if_builder->Then(); +} + + +void CodeStubGraphBuilder<KeyedLoadGenericStub>::BuildFastElementLoad( + HGraphBuilder::IfBuilder* if_builder, HValue* receiver, HValue* key, + HValue* instance_type, HValue* bit_field2, ElementsKind kind) { + DCHECK(!IsExternalArrayElementsKind(kind)); + + BuildElementsKindLimitCheck(if_builder, bit_field2, 
kind); + + IfBuilder js_array_check(this); + js_array_check.If<HCompareNumericAndBranch>( + instance_type, Add<HConstant>(JS_ARRAY_TYPE), Token::EQ); + js_array_check.Then(); + Push(BuildUncheckedMonomorphicElementAccess(receiver, key, NULL, + true, kind, + LOAD, NEVER_RETURN_HOLE, + STANDARD_STORE)); + js_array_check.Else(); + Push(BuildUncheckedMonomorphicElementAccess(receiver, key, NULL, + false, kind, + LOAD, NEVER_RETURN_HOLE, + STANDARD_STORE)); + js_array_check.End(); +} + + +void CodeStubGraphBuilder<KeyedLoadGenericStub>::BuildExternalElementLoad( + HGraphBuilder::IfBuilder* if_builder, HValue* receiver, HValue* key, + HValue* instance_type, HValue* bit_field2, ElementsKind kind) { + DCHECK(IsExternalArrayElementsKind(kind)); + + BuildElementsKindLimitCheck(if_builder, bit_field2, kind); + + Push(BuildUncheckedMonomorphicElementAccess(receiver, key, NULL, + false, kind, + LOAD, NEVER_RETURN_HOLE, + STANDARD_STORE)); +} + + +HValue* CodeStubGraphBuilder<KeyedLoadGenericStub>::BuildCodeStub() { + HValue* receiver = GetParameter(KeyedLoadIC::kReceiverIndex); + HValue* key = GetParameter(KeyedLoadIC::kNameIndex); + + // Split into a smi/integer case and unique string case. + HIfContinuation index_name_split_continuation(graph()->CreateBasicBlock(), + graph()->CreateBasicBlock()); + + BuildKeyedIndexCheck(key, &index_name_split_continuation); + + IfBuilder index_name_split(this, &index_name_split_continuation); + index_name_split.Then(); + { + // Key is an index (number) + key = Pop(); + + int bit_field_mask = (1 << Map::kIsAccessCheckNeeded) | + (1 << Map::kHasIndexedInterceptor); + BuildJSObjectCheck(receiver, bit_field_mask); + + HValue* map = Add<HLoadNamedField>(receiver, static_cast<HValue*>(NULL), + HObjectAccess::ForMap()); + + HValue* instance_type = + Add<HLoadNamedField>(map, static_cast<HValue*>(NULL), + HObjectAccess::ForMapInstanceType()); + + HValue* bit_field2 = Add<HLoadNamedField>(map, + static_cast<HValue*>(NULL), + HObjectAccess::ForMapBitField2()); + + IfBuilder kind_if(this); + BuildFastElementLoad(&kind_if, receiver, key, instance_type, bit_field2, + FAST_HOLEY_ELEMENTS); + + kind_if.Else(); + { + BuildFastElementLoad(&kind_if, receiver, key, instance_type, bit_field2, + FAST_HOLEY_DOUBLE_ELEMENTS); + } + kind_if.Else(); + + // The DICTIONARY_ELEMENTS check generates a "kind_if.Then" + BuildElementsKindLimitCheck(&kind_if, bit_field2, DICTIONARY_ELEMENTS); + { + HValue* elements = AddLoadElements(receiver); + + HValue* hash = BuildElementIndexHash(key); + + Push(BuildUncheckedDictionaryElementLoad(receiver, elements, key, hash)); + } + kind_if.Else(); + + // The SLOPPY_ARGUMENTS_ELEMENTS check generates a "kind_if.Then" + BuildElementsKindLimitCheck(&kind_if, bit_field2, + SLOPPY_ARGUMENTS_ELEMENTS); + // Non-strict elements are not handled. 
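// Aside: a minimal sketch (hypothetical shift value, not V8's real layout) of
// the trick behind BuildElementsKindLimitCheck above: the elements kind is
// packed into bit_field2 above the low flag bits, and because ElementsKind
// values are ordered, "kind <= K" reduces to one unsigned compare against the
// encoding of K + 1 (flag bits below the shift can never bridge the gap).
enum Kind { FAST = 0, FAST_DOUBLE = 1, DICTIONARY = 2 };
static const int kKindShift = 3;  // assumption: low 3 bits hold other flags
static inline unsigned EncodeKind(Kind kind) {
  return static_cast<unsigned>(kind) << kKindShift;
}
static inline bool KindIsAtMost(unsigned bit_field2, Kind limit) {
  return bit_field2 < EncodeKind(static_cast<Kind>(limit + 1));
}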
+ Add<HDeoptimize>("non-strict elements in KeyedLoadGenericStub", + Deoptimizer::EAGER); + Push(graph()->GetConstant0()); + + kind_if.Else(); + BuildExternalElementLoad(&kind_if, receiver, key, instance_type, bit_field2, + EXTERNAL_INT8_ELEMENTS); + + kind_if.Else(); + BuildExternalElementLoad(&kind_if, receiver, key, instance_type, bit_field2, + EXTERNAL_UINT8_ELEMENTS); + + kind_if.Else(); + BuildExternalElementLoad(&kind_if, receiver, key, instance_type, bit_field2, + EXTERNAL_INT16_ELEMENTS); + + kind_if.Else(); + BuildExternalElementLoad(&kind_if, receiver, key, instance_type, bit_field2, + EXTERNAL_UINT16_ELEMENTS); + + kind_if.Else(); + BuildExternalElementLoad(&kind_if, receiver, key, instance_type, bit_field2, + EXTERNAL_INT32_ELEMENTS); + + kind_if.Else(); + BuildExternalElementLoad(&kind_if, receiver, key, instance_type, bit_field2, + EXTERNAL_UINT32_ELEMENTS); + + kind_if.Else(); + BuildExternalElementLoad(&kind_if, receiver, key, instance_type, bit_field2, + EXTERNAL_FLOAT32_ELEMENTS); + + kind_if.Else(); + BuildExternalElementLoad(&kind_if, receiver, key, instance_type, bit_field2, + EXTERNAL_FLOAT64_ELEMENTS); + + kind_if.Else(); + BuildExternalElementLoad(&kind_if, receiver, key, instance_type, bit_field2, + EXTERNAL_UINT8_CLAMPED_ELEMENTS); + + kind_if.ElseDeopt("ElementsKind unhandled in KeyedLoadGenericStub"); + + kind_if.End(); + } + index_name_split.Else(); + { + // Key is a unique string. + key = Pop(); + + int bit_field_mask = (1 << Map::kIsAccessCheckNeeded) | + (1 << Map::kHasNamedInterceptor); + BuildJSObjectCheck(receiver, bit_field_mask); + + HIfContinuation continuation; + BuildTestForDictionaryProperties(receiver, &continuation); + IfBuilder if_dict_properties(this, &continuation); + if_dict_properties.Then(); + { + // Key is string, properties are dictionary mode + BuildNonGlobalObjectCheck(receiver); + + HValue* properties = Add<HLoadNamedField>( + receiver, static_cast<HValue*>(NULL), + HObjectAccess::ForPropertiesPointer()); + + HValue* hash = + Add<HLoadNamedField>(key, static_cast<HValue*>(NULL), + HObjectAccess::ForNameHashField()); + + hash = AddUncasted<HShr>(hash, Add<HConstant>(Name::kHashShift)); + + HValue* value = BuildUncheckedDictionaryElementLoad(receiver, + properties, + key, + hash); + Push(value); + } + if_dict_properties.Else(); + { + // Key is string, properties are fast mode + HValue* hash = BuildKeyedLookupCacheHash(receiver, key); + + ExternalReference cache_keys_ref = + ExternalReference::keyed_lookup_cache_keys(isolate()); + HValue* cache_keys = Add<HConstant>(cache_keys_ref); + + HValue* map = Add<HLoadNamedField>(receiver, static_cast<HValue*>(NULL), + HObjectAccess::ForMap()); + HValue* base_index = AddUncasted<HMul>(hash, Add<HConstant>(2)); + base_index->ClearFlag(HValue::kCanOverflow); + + HIfContinuation inline_or_runtime_continuation( + graph()->CreateBasicBlock(), graph()->CreateBasicBlock()); + { + IfBuilder lookup_ifs[KeyedLookupCache::kEntriesPerBucket]; + for (int probe = 0; probe < KeyedLookupCache::kEntriesPerBucket; + ++probe) { + IfBuilder* lookup_if = &lookup_ifs[probe]; + lookup_if->Initialize(this); + int probe_base = probe * KeyedLookupCache::kEntryLength; + HValue* map_index = AddUncasted<HAdd>( + base_index, + Add<HConstant>(probe_base + KeyedLookupCache::kMapIndex)); + map_index->ClearFlag(HValue::kCanOverflow); + HValue* key_index = AddUncasted<HAdd>( + base_index, + Add<HConstant>(probe_base + KeyedLookupCache::kKeyIndex)); + key_index->ClearFlag(HValue::kCanOverflow); + HValue* map_to_check = + 
Add<HLoadKeyed>(cache_keys, map_index, static_cast<HValue*>(NULL), + FAST_ELEMENTS, NEVER_RETURN_HOLE, 0); + lookup_if->If<HCompareObjectEqAndBranch>(map_to_check, map); + lookup_if->And(); + HValue* key_to_check = + Add<HLoadKeyed>(cache_keys, key_index, static_cast<HValue*>(NULL), + FAST_ELEMENTS, NEVER_RETURN_HOLE, 0); + lookup_if->If<HCompareObjectEqAndBranch>(key_to_check, key); + lookup_if->Then(); + { + ExternalReference cache_field_offsets_ref = + ExternalReference::keyed_lookup_cache_field_offsets(isolate()); + HValue* cache_field_offsets = + Add<HConstant>(cache_field_offsets_ref); + HValue* index = AddUncasted<HAdd>(hash, Add<HConstant>(probe)); + index->ClearFlag(HValue::kCanOverflow); + HValue* property_index = Add<HLoadKeyed>( + cache_field_offsets, index, static_cast<HValue*>(NULL), + EXTERNAL_INT32_ELEMENTS, NEVER_RETURN_HOLE, 0); + Push(property_index); + } + lookup_if->Else(); + } + for (int i = 0; i < KeyedLookupCache::kEntriesPerBucket; ++i) { + lookup_ifs[i].JoinContinuation(&inline_or_runtime_continuation); + } + } + + IfBuilder inline_or_runtime(this, &inline_or_runtime_continuation); + inline_or_runtime.Then(); + { + // Found a cached index, load property inline. + Push(Add<HLoadFieldByIndex>(receiver, Pop())); + } + inline_or_runtime.Else(); + { + // KeyedLookupCache miss; call runtime. + Add<HPushArguments>(receiver, key); + Push(Add<HCallRuntime>( + isolate()->factory()->empty_string(), + Runtime::FunctionForId(Runtime::kKeyedGetProperty), 2)); + } + inline_or_runtime.End(); + } + if_dict_properties.End(); + } + index_name_split.End(); + + return Pop(); +} + + +Handle<Code> KeyedLoadGenericStub::GenerateCode() { + return DoGenerateCode(this); +} + + } } // namespace v8::internal diff --git a/deps/v8/src/code-stubs.cc b/deps/v8/src/code-stubs.cc index 24f60ed4129..0e68ab8a51f 100644 --- a/deps/v8/src/code-stubs.cc +++ b/deps/v8/src/code-stubs.cc @@ -2,32 +2,106 @@ // Use of this source code is governed by a BSD-style license that can be // found in the LICENSE file. -#include "v8.h" +#include "src/v8.h" -#include "bootstrapper.h" -#include "code-stubs.h" -#include "cpu-profiler.h" -#include "stub-cache.h" -#include "factory.h" -#include "gdb-jit.h" -#include "macro-assembler.h" +#include "src/bootstrapper.h" +#include "src/code-stubs.h" +#include "src/cpu-profiler.h" +#include "src/factory.h" +#include "src/gdb-jit.h" +#include "src/macro-assembler.h" +#include "src/stub-cache.h" namespace v8 { namespace internal { +InterfaceDescriptor::InterfaceDescriptor() + : register_param_count_(-1) { } + + CodeStubInterfaceDescriptor::CodeStubInterfaceDescriptor() - : register_param_count_(-1), - stack_parameter_count_(no_reg), + : stack_parameter_count_(no_reg), hint_stack_parameter_count_(-1), function_mode_(NOT_JS_FUNCTION_STUB_MODE), - register_params_(NULL), deoptimization_handler_(NULL), handler_arguments_mode_(DONT_PASS_ARGUMENTS), miss_handler_(), has_miss_handler_(false) { } +void InterfaceDescriptor::Initialize( + int register_parameter_count, + Register* registers, + Representation* register_param_representations, + PlatformInterfaceDescriptor* platform_descriptor) { + platform_specific_descriptor_ = platform_descriptor; + register_param_count_ = register_parameter_count; + + // An interface descriptor must have a context register. + DCHECK(register_parameter_count > 0 && registers[0].is(ContextRegister())); + + // InterfaceDescriptor owns a copy of the registers array. 
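// Aside: the "owns a copy" pattern from the comment above, sketched with
// std::unique_ptr in place of V8's SmartArrayPointer: the descriptor copies
// the caller's (typically stack-allocated) register array so it never holds
// a dangling pointer.
#include <algorithm>
#include <memory>
struct RegisterId { int code; };
class DescriptorSketch {
 public:
  void Initialize(const RegisterId* registers, int count) {
    register_params_.reset(new RegisterId[count]);           // own a copy
    std::copy(registers, registers + count, register_params_.get());
    register_param_count_ = count;
  }
 private:
  std::unique_ptr<RegisterId[]> register_params_;
  int register_param_count_ = 0;
};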
+ register_params_.Reset(NewArray<Register>(register_parameter_count)); + for (int i = 0; i < register_parameter_count; i++) { + register_params_[i] = registers[i]; + } + + // If a representations array is specified, then the descriptor owns that as + // well. + if (register_param_representations != NULL) { + register_param_representations_.Reset( + NewArray<Representation>(register_parameter_count)); + for (int i = 0; i < register_parameter_count; i++) { + // If there is a context register, the representation must be tagged. + DCHECK(i != 0 || register_param_representations[i].Equals( + Representation::Tagged())); + register_param_representations_[i] = register_param_representations[i]; + } + } +} + + +void CodeStubInterfaceDescriptor::Initialize( + CodeStub::Major major, int register_parameter_count, Register* registers, + Address deoptimization_handler, + Representation* register_param_representations, + int hint_stack_parameter_count, StubFunctionMode function_mode) { + InterfaceDescriptor::Initialize(register_parameter_count, registers, + register_param_representations); + + deoptimization_handler_ = deoptimization_handler; + + hint_stack_parameter_count_ = hint_stack_parameter_count; + function_mode_ = function_mode; + major_ = major; +} + + +void CodeStubInterfaceDescriptor::Initialize( + CodeStub::Major major, int register_parameter_count, Register* registers, + Register stack_parameter_count, Address deoptimization_handler, + Representation* register_param_representations, + int hint_stack_parameter_count, StubFunctionMode function_mode, + HandlerArgumentsMode handler_mode) { + Initialize(major, register_parameter_count, registers, deoptimization_handler, + register_param_representations, hint_stack_parameter_count, + function_mode); + stack_parameter_count_ = stack_parameter_count; + handler_arguments_mode_ = handler_mode; +} + + +void CallInterfaceDescriptor::Initialize( + int register_parameter_count, + Register* registers, + Representation* param_representations, + PlatformInterfaceDescriptor* platform_descriptor) { + InterfaceDescriptor::Initialize(register_parameter_count, registers, + param_representations, platform_descriptor); +} + + bool CodeStub::FindCodeInCache(Code** code_out) { UnseededNumberDictionary* stubs = isolate()->heap()->code_stubs(); int index = stubs->FindEntry(GetKey()); @@ -39,21 +113,11 @@ bool CodeStub::FindCodeInCache(Code** code_out) { } -SmartArrayPointer<const char> CodeStub::GetName() { - char buffer[100]; - NoAllocationStringAllocator allocator(buffer, - static_cast<unsigned>(sizeof(buffer))); - StringStream stream(&allocator); - PrintName(&stream); - return stream.ToCString(); -} - - void CodeStub::RecordCodeGeneration(Handle<Code> code) { IC::RegisterWeakMapDependency(code); - SmartArrayPointer<const char> name = GetName(); - PROFILE(isolate(), CodeCreateEvent(Logger::STUB_TAG, *code, name.get())); - GDBJIT(AddCode(GDBJITInterface::STUB, name.get(), *code)); + OStringStream os; + os << *this; + PROFILE(isolate(), CodeCreateEvent(Logger::STUB_TAG, *code, os.c_str())); Counters* counters = isolate()->counters(); counters->total_stubs_code_size()->Increment(code->instruction_size()); } @@ -79,6 +143,9 @@ Handle<Code> PlatformCodeStub::GenerateCode() { // Generate the new code. MacroAssembler masm(isolate(), NULL, 256); + // TODO(yangguo) remove this once the code serializer handles code stubs. + if (FLAG_serialize_toplevel) masm.enable_serializer(); + { // Update the static counter each time a new code stub is generated. 
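// Aside: a minimal sketch (not V8's implementation) of the find-or-generate
// protocol behind FindCodeInCache/GetCode above, with std::unordered_map
// standing in for the UnseededNumberDictionary keyed by the stub's key.
#include <cstdint>
#include <unordered_map>
struct CompiledStub { /* handle to generated code */ };
class StubCacheSketch {
 public:
  CompiledStub* GetCode(uint32_t key, CompiledStub* (*generate)()) {
    std::unordered_map<uint32_t, CompiledStub*>::iterator it = cache_.find(key);
    if (it != cache_.end()) return it->second;  // hit: each key compiles once
    CompiledStub* code = generate();            // miss: generate and remember
    cache_.emplace(key, code);
    return code;
  }
 private:
  std::unordered_map<uint32_t, CompiledStub*> cache_;
};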
isolate()->counters()->code_stubs()->Increment(); @@ -105,38 +172,32 @@ Handle<Code> PlatformCodeStub::GenerateCode() { } -void CodeStub::VerifyPlatformFeatures() { - ASSERT(CpuFeatures::VerifyCrossCompiling()); -} - - Handle<Code> CodeStub::GetCode() { Heap* heap = isolate()->heap(); Code* code; if (UseSpecialCache() ? FindCodeInSpecialCache(&code) : FindCodeInCache(&code)) { - ASSERT(GetCodeKind() == code->kind()); + DCHECK(GetCodeKind() == code->kind()); return Handle<Code>(code); } -#ifdef DEBUG - VerifyPlatformFeatures(); -#endif - { HandleScope scope(isolate()); Handle<Code> new_object = GenerateCode(); - new_object->set_major_key(MajorKey()); + new_object->set_stub_key(GetKey()); FinishCode(new_object); RecordCodeGeneration(new_object); #ifdef ENABLE_DISASSEMBLER if (FLAG_print_code_stubs) { CodeTracer::Scope trace_scope(isolate()->GetCodeTracer()); - new_object->Disassemble(GetName().get(), trace_scope.file()); - PrintF(trace_scope.file(), "\n"); + OFStream os(trace_scope.file()); + OStringStream name; + name << *this; + new_object->Disassemble(name.c_str(), os); + os << "\n"; } #endif @@ -155,7 +216,7 @@ Handle<Code> CodeStub::GetCode() { } Activate(code); - ASSERT(!NeedsImmovableCode() || + DCHECK(!NeedsImmovableCode() || heap->lo_space()->Contains(code) || heap->code_space()->FirstPage()->Contains(code->address())); return Handle<Code>(code, isolate()); @@ -169,6 +230,8 @@ const char* CodeStub::MajorName(CodeStub::Major major_key, CODE_STUB_LIST(DEF_CASE) #undef DEF_CASE case UninitializedMajorKey: return "<UninitializedMajorKey>Stub"; + case NoCache: + return "<NoCache>Stub"; default: if (!allow_unknown_keys) { UNREACHABLE(); @@ -178,14 +241,14 @@ const char* CodeStub::MajorName(CodeStub::Major major_key, } -void CodeStub::PrintBaseName(StringStream* stream) { - stream->Add("%s", MajorName(MajorKey(), false)); +void CodeStub::PrintBaseName(OStream& os) const { // NOLINT + os << MajorName(MajorKey(), false); } -void CodeStub::PrintName(StringStream* stream) { - PrintBaseName(stream); - PrintState(stream); +void CodeStub::PrintName(OStream& os) const { // NOLINT + PrintBaseName(os); + PrintState(os); } @@ -206,8 +269,8 @@ void BinaryOpICStub::GenerateAheadOfTime(Isolate* isolate) { } -void BinaryOpICStub::PrintState(StringStream* stream) { - state_.Print(stream); +void BinaryOpICStub::PrintState(OStream& os) const { // NOLINT + os << state_; } @@ -226,8 +289,9 @@ void BinaryOpICWithAllocationSiteStub::GenerateAheadOfTime(Isolate* isolate) { } -void BinaryOpICWithAllocationSiteStub::PrintState(StringStream* stream) { - state_.Print(stream); +void BinaryOpICWithAllocationSiteStub::PrintState( + OStream& os) const { // NOLINT + os << state_; } @@ -241,22 +305,22 @@ void BinaryOpICWithAllocationSiteStub::GenerateAheadOfTime( } -void StringAddStub::PrintBaseName(StringStream* stream) { - stream->Add("StringAddStub"); +void StringAddStub::PrintBaseName(OStream& os) const { // NOLINT + os << "StringAddStub"; if ((flags() & STRING_ADD_CHECK_BOTH) == STRING_ADD_CHECK_BOTH) { - stream->Add("_CheckBoth"); + os << "_CheckBoth"; } else if ((flags() & STRING_ADD_CHECK_LEFT) == STRING_ADD_CHECK_LEFT) { - stream->Add("_CheckLeft"); + os << "_CheckLeft"; } else if ((flags() & STRING_ADD_CHECK_RIGHT) == STRING_ADD_CHECK_RIGHT) { - stream->Add("_CheckRight"); + os << "_CheckRight"; } if (pretenure_flag() == TENURED) { - stream->Add("_Tenured"); + os << "_Tenured"; } } -InlineCacheState ICCompareStub::GetICState() { +InlineCacheState ICCompareStub::GetICState() const { CompareIC::State state = 
Max(left_, right_); switch (state) { case CompareIC::UNINITIALIZED: @@ -278,7 +342,7 @@ InlineCacheState ICCompareStub::GetICState() { void ICCompareStub::AddToSpecialCache(Handle<Code> new_object) { - ASSERT(*known_map_ != NULL); + DCHECK(*known_map_ != NULL); Isolate* isolate = new_object->GetIsolate(); Factory* factory = isolate->factory(); return Map::UpdateCodeCache(known_map_, @@ -294,7 +358,7 @@ bool ICCompareStub::FindCodeInSpecialCache(Code** code_out) { Code::Flags flags = Code::ComputeFlags( GetCodeKind(), UNINITIALIZED); - ASSERT(op_ == Token::EQ || op_ == Token::EQ_STRICT); + DCHECK(op_ == Token::EQ || op_ == Token::EQ_STRICT); Handle<Object> probe( known_map_->FindInCodeCache( strict() ? @@ -306,9 +370,9 @@ bool ICCompareStub::FindCodeInSpecialCache(Code** code_out) { *code_out = Code::cast(*probe); #ifdef DEBUG Token::Value cached_op; - ICCompareStub::DecodeMinorKey((*code_out)->stub_info(), NULL, NULL, NULL, - &cached_op); - ASSERT(op_ == cached_op); + ICCompareStub::DecodeKey((*code_out)->stub_key(), NULL, NULL, NULL, + &cached_op); + DCHECK(op_ == cached_op); #endif return true; } @@ -316,7 +380,7 @@ bool ICCompareStub::FindCodeInSpecialCache(Code** code_out) { } -int ICCompareStub::MinorKey() { +int ICCompareStub::MinorKey() const { return OpField::encode(op_ - Token::EQ) | LeftStateField::encode(left_) | RightStateField::encode(right_) | @@ -324,11 +388,11 @@ int ICCompareStub::MinorKey() { } -void ICCompareStub::DecodeMinorKey(int minor_key, - CompareIC::State* left_state, - CompareIC::State* right_state, - CompareIC::State* handler_state, - Token::Value* op) { +void ICCompareStub::DecodeKey(uint32_t stub_key, CompareIC::State* left_state, + CompareIC::State* right_state, + CompareIC::State* handler_state, + Token::Value* op) { + int minor_key = MinorKeyFromKey(stub_key); if (left_state) { *left_state = static_cast<CompareIC::State>(LeftStateField::decode(minor_key)); @@ -371,7 +435,7 @@ void ICCompareStub::Generate(MacroAssembler* masm) { GenerateObjects(masm); break; case CompareIC::KNOWN_OBJECT: - ASSERT(*known_map_ != NULL); + DCHECK(*known_map_ != NULL); GenerateKnownObjects(masm); break; case CompareIC::GENERIC: @@ -382,7 +446,7 @@ void ICCompareStub::Generate(MacroAssembler* masm) { void CompareNilICStub::UpdateStatus(Handle<Object> object) { - ASSERT(!state_.Contains(GENERIC)); + DCHECK(!state_.Contains(GENERIC)); State old_state(state_); if (object->IsNull()) { state_.Add(NULL_TYPE); @@ -407,44 +471,55 @@ template<class StateType> void HydrogenCodeStub::TraceTransition(StateType from, StateType to) { // Note: Although a no-op transition is semantically OK, it is hinting at a // bug somewhere in our state transition machinery. - ASSERT(from != to); + DCHECK(from != to); if (!FLAG_trace_ic) return; - char buffer[100]; - NoAllocationStringAllocator allocator(buffer, - static_cast<unsigned>(sizeof(buffer))); - StringStream stream(&allocator); - stream.Add("["); - PrintBaseName(&stream); - stream.Add(": "); - from.Print(&stream); - stream.Add("=>"); - to.Print(&stream); - stream.Add("]\n"); - stream.OutputToStdOut(); + OFStream os(stdout); + os << "["; + PrintBaseName(os); + os << ": " << from << "=>" << to << "]" << endl; } -void CompareNilICStub::PrintBaseName(StringStream* stream) { - CodeStub::PrintBaseName(stream); - stream->Add((nil_value_ == kNullValue) ? "(NullValue)": - "(UndefinedValue)"); +void CompareNilICStub::PrintBaseName(OStream& os) const { // NOLINT + CodeStub::PrintBaseName(os); + os << ((nil_value_ == kNullValue) ? 
"(NullValue)" : "(UndefinedValue)"); } -void CompareNilICStub::PrintState(StringStream* stream) { - state_.Print(stream); +void CompareNilICStub::PrintState(OStream& os) const { // NOLINT + os << state_; } -void CompareNilICStub::State::Print(StringStream* stream) const { - stream->Add("("); - SimpleListPrinter printer(stream); - if (IsEmpty()) printer.Add("None"); - if (Contains(UNDEFINED)) printer.Add("Undefined"); - if (Contains(NULL_TYPE)) printer.Add("Null"); - if (Contains(MONOMORPHIC_MAP)) printer.Add("MonomorphicMap"); - if (Contains(GENERIC)) printer.Add("Generic"); - stream->Add(")"); +// TODO(svenpanne) Make this a real infix_ostream_iterator. +class SimpleListPrinter { + public: + explicit SimpleListPrinter(OStream& os) : os_(os), first_(true) {} + + void Add(const char* s) { + if (first_) { + first_ = false; + } else { + os_ << ","; + } + os_ << s; + } + + private: + OStream& os_; + bool first_; +}; + + +OStream& operator<<(OStream& os, const CompareNilICStub::State& s) { + os << "("; + SimpleListPrinter p(os); + if (s.IsEmpty()) p.Add("None"); + if (s.Contains(CompareNilICStub::UNDEFINED)) p.Add("Undefined"); + if (s.Contains(CompareNilICStub::NULL_TYPE)) p.Add("Null"); + if (s.Contains(CompareNilICStub::MONOMORPHIC_MAP)) p.Add("MonomorphicMap"); + if (s.Contains(CompareNilICStub::GENERIC)) p.Add("Generic"); + return os << ")"; } @@ -478,31 +553,21 @@ Type* CompareNilICStub::GetInputType(Zone* zone, Handle<Map> map) { } -void CallICStub::PrintState(StringStream* stream) { - state_.Print(stream); +void CallIC_ArrayStub::PrintState(OStream& os) const { // NOLINT + os << state_ << " (Array)"; } -void InstanceofStub::PrintName(StringStream* stream) { - const char* args = ""; - if (HasArgsInRegisters()) { - args = "_REGS"; - } +void CallICStub::PrintState(OStream& os) const { // NOLINT + os << state_; +} - const char* inline_check = ""; - if (HasCallSiteInlineCheck()) { - inline_check = "_INLINE"; - } - const char* return_true_false_object = ""; - if (ReturnTrueFalseObject()) { - return_true_false_object = "_TRUEFALSE"; - } - - stream->Add("InstanceofStub%s%s%s", - args, - inline_check, - return_true_false_object); +void InstanceofStub::PrintName(OStream& os) const { // NOLINT + os << "InstanceofStub"; + if (HasArgsInRegisters()) os << "_REGS"; + if (HasCallSiteInlineCheck()) os << "_INLINE"; + if (ReturnTrueFalseObject()) os << "_TRUEFALSE"; } @@ -514,9 +579,91 @@ void JSEntryStub::FinishCode(Handle<Code> code) { } -void KeyedLoadDictionaryElementPlatformStub::Generate( - MacroAssembler* masm) { - KeyedLoadStubCompiler::GenerateLoadDictionaryElement(masm); +void LoadFastElementStub::InitializeInterfaceDescriptor( + CodeStubInterfaceDescriptor* descriptor) { + Register registers[] = { InterfaceDescriptor::ContextRegister(), + LoadIC::ReceiverRegister(), + LoadIC::NameRegister() }; + STATIC_ASSERT(LoadIC::kParameterCount == 2); + descriptor->Initialize(MajorKey(), ARRAY_SIZE(registers), registers, + FUNCTION_ADDR(KeyedLoadIC_MissFromStubFailure)); +} + + +void LoadDictionaryElementStub::InitializeInterfaceDescriptor( + CodeStubInterfaceDescriptor* descriptor) { + Register registers[] = { InterfaceDescriptor::ContextRegister(), + LoadIC::ReceiverRegister(), + LoadIC::NameRegister() }; + STATIC_ASSERT(LoadIC::kParameterCount == 2); + descriptor->Initialize(MajorKey(), ARRAY_SIZE(registers), registers, + FUNCTION_ADDR(KeyedLoadIC_MissFromStubFailure)); +} + + +void KeyedLoadGenericStub::InitializeInterfaceDescriptor( + CodeStubInterfaceDescriptor* descriptor) { + Register registers[] 
= { InterfaceDescriptor::ContextRegister(), + LoadIC::ReceiverRegister(), + LoadIC::NameRegister() }; + STATIC_ASSERT(LoadIC::kParameterCount == 2); + descriptor->Initialize( + MajorKey(), ARRAY_SIZE(registers), registers, + Runtime::FunctionForId(Runtime::kKeyedGetProperty)->entry); +} + + +void HandlerStub::InitializeInterfaceDescriptor( + CodeStubInterfaceDescriptor* descriptor) { + if (kind() == Code::LOAD_IC) { + Register registers[] = {InterfaceDescriptor::ContextRegister(), + LoadIC::ReceiverRegister(), LoadIC::NameRegister()}; + descriptor->Initialize(MajorKey(), ARRAY_SIZE(registers), registers); + } else { + DCHECK_EQ(Code::STORE_IC, kind()); + Register registers[] = {InterfaceDescriptor::ContextRegister(), + StoreIC::ReceiverRegister(), + StoreIC::NameRegister(), StoreIC::ValueRegister()}; + descriptor->Initialize(MajorKey(), ARRAY_SIZE(registers), registers, + FUNCTION_ADDR(StoreIC_MissFromStubFailure)); + } +} + + +void StoreFastElementStub::InitializeInterfaceDescriptor( + CodeStubInterfaceDescriptor* descriptor) { + Register registers[] = { InterfaceDescriptor::ContextRegister(), + KeyedStoreIC::ReceiverRegister(), + KeyedStoreIC::NameRegister(), + KeyedStoreIC::ValueRegister() }; + descriptor->Initialize(MajorKey(), ARRAY_SIZE(registers), registers, + FUNCTION_ADDR(KeyedStoreIC_MissFromStubFailure)); +} + + +void ElementsTransitionAndStoreStub::InitializeInterfaceDescriptor( + CodeStubInterfaceDescriptor* descriptor) { + Register registers[] = { InterfaceDescriptor::ContextRegister(), + ValueRegister(), + MapRegister(), + KeyRegister(), + ObjectRegister() }; + descriptor->Initialize(MajorKey(), ARRAY_SIZE(registers), registers, + FUNCTION_ADDR(ElementsTransitionAndStoreIC_Miss)); +} + + +void InstanceofStub::InitializeInterfaceDescriptor( + CodeStubInterfaceDescriptor* descriptor) { + Register registers[] = { InterfaceDescriptor::ContextRegister(), + InstanceofStub::left(), + InstanceofStub::right() }; + descriptor->Initialize(MajorKey(), ARRAY_SIZE(registers), registers); +} + + +void LoadDictionaryElementPlatformStub::Generate(MacroAssembler* masm) { + ElementHandlerCompiler::GenerateLoadDictionaryElement(masm); } @@ -526,7 +673,7 @@ void CreateAllocationSiteStub::GenerateAheadOfTime(Isolate* isolate) { } -void KeyedStoreElementStub::Generate(MacroAssembler* masm) { +void StoreElementStub::Generate(MacroAssembler* masm) { switch (elements_kind_) { case FAST_ELEMENTS: case FAST_HOLEY_ELEMENTS: @@ -543,7 +690,7 @@ void KeyedStoreElementStub::Generate(MacroAssembler* masm) { UNREACHABLE(); break; case DICTIONARY_ELEMENTS: - KeyedStoreStubCompiler::GenerateStoreDictionaryElement(masm); + ElementHandlerCompiler::GenerateStoreDictionaryElement(masm); break; case SLOPPY_ARGUMENTS_ELEMENTS: UNREACHABLE(); @@ -552,47 +699,64 @@ void KeyedStoreElementStub::Generate(MacroAssembler* masm) { } -void ArgumentsAccessStub::PrintName(StringStream* stream) { - stream->Add("ArgumentsAccessStub_"); +void ArgumentsAccessStub::PrintName(OStream& os) const { // NOLINT + os << "ArgumentsAccessStub_"; switch (type_) { - case READ_ELEMENT: stream->Add("ReadElement"); break; - case NEW_SLOPPY_FAST: stream->Add("NewSloppyFast"); break; - case NEW_SLOPPY_SLOW: stream->Add("NewSloppySlow"); break; - case NEW_STRICT: stream->Add("NewStrict"); break; + case READ_ELEMENT: + os << "ReadElement"; + break; + case NEW_SLOPPY_FAST: + os << "NewSloppyFast"; + break; + case NEW_SLOPPY_SLOW: + os << "NewSloppySlow"; + break; + case NEW_STRICT: + os << "NewStrict"; + break; } + return; } -void 
CallFunctionStub::PrintName(StringStream* stream) { - stream->Add("CallFunctionStub_Args%d", argc_); +void CallFunctionStub::PrintName(OStream& os) const { // NOLINT + os << "CallFunctionStub_Args" << argc_; } -void CallConstructStub::PrintName(StringStream* stream) { - stream->Add("CallConstructStub"); - if (RecordCallTarget()) stream->Add("_Recording"); +void CallConstructStub::PrintName(OStream& os) const { // NOLINT + os << "CallConstructStub"; + if (RecordCallTarget()) os << "_Recording"; } -void ArrayConstructorStub::PrintName(StringStream* stream) { - stream->Add("ArrayConstructorStub"); +void ArrayConstructorStub::PrintName(OStream& os) const { // NOLINT + os << "ArrayConstructorStub"; switch (argument_count_) { - case ANY: stream->Add("_Any"); break; - case NONE: stream->Add("_None"); break; - case ONE: stream->Add("_One"); break; - case MORE_THAN_ONE: stream->Add("_More_Than_One"); break; + case ANY: + os << "_Any"; + break; + case NONE: + os << "_None"; + break; + case ONE: + os << "_One"; + break; + case MORE_THAN_ONE: + os << "_More_Than_One"; + break; } + return; } -void ArrayConstructorStubBase::BasePrintName(const char* name, - StringStream* stream) { - stream->Add(name); - stream->Add("_"); - stream->Add(ElementsKindToString(elements_kind())); +OStream& ArrayConstructorStubBase::BasePrintName(OStream& os, // NOLINT + const char* name) const { + os << name << "_" << ElementsKindToString(elements_kind()); if (override_mode() == DISABLE_ALLOCATION_SITES) { - stream->Add("_DISABLE_ALLOCATION_SITES"); + os << "_DISABLE_ALLOCATION_SITES"; } + return os; } @@ -604,24 +768,24 @@ bool ToBooleanStub::UpdateStatus(Handle<Object> object) { } -void ToBooleanStub::PrintState(StringStream* stream) { - types_.Print(stream); +void ToBooleanStub::PrintState(OStream& os) const { // NOLINT + os << types_; } -void ToBooleanStub::Types::Print(StringStream* stream) const { - stream->Add("("); - SimpleListPrinter printer(stream); - if (IsEmpty()) printer.Add("None"); - if (Contains(UNDEFINED)) printer.Add("Undefined"); - if (Contains(BOOLEAN)) printer.Add("Bool"); - if (Contains(NULL_TYPE)) printer.Add("Null"); - if (Contains(SMI)) printer.Add("Smi"); - if (Contains(SPEC_OBJECT)) printer.Add("SpecObject"); - if (Contains(STRING)) printer.Add("String"); - if (Contains(SYMBOL)) printer.Add("Symbol"); - if (Contains(HEAP_NUMBER)) printer.Add("HeapNumber"); - stream->Add(")"); +OStream& operator<<(OStream& os, const ToBooleanStub::Types& s) { + os << "("; + SimpleListPrinter p(os); + if (s.IsEmpty()) p.Add("None"); + if (s.Contains(ToBooleanStub::UNDEFINED)) p.Add("Undefined"); + if (s.Contains(ToBooleanStub::BOOLEAN)) p.Add("Bool"); + if (s.Contains(ToBooleanStub::NULL_TYPE)) p.Add("Null"); + if (s.Contains(ToBooleanStub::SMI)) p.Add("Smi"); + if (s.Contains(ToBooleanStub::SPEC_OBJECT)) p.Add("SpecObject"); + if (s.Contains(ToBooleanStub::STRING)) p.Add("String"); + if (s.Contains(ToBooleanStub::SYMBOL)) p.Add("Symbol"); + if (s.Contains(ToBooleanStub::HEAP_NUMBER)) p.Add("HeapNumber"); + return os << ")"; } @@ -649,7 +813,7 @@ bool ToBooleanStub::Types::UpdateStatus(Handle<Object> object) { Add(SYMBOL); return true; } else if (object->IsHeapNumber()) { - ASSERT(!object->IsUndetectableObject()); + DCHECK(!object->IsUndetectableObject()); Add(HEAP_NUMBER); double value = HeapNumber::cast(*object)->value(); return value != 0 && !std::isnan(value); @@ -687,7 +851,7 @@ void ProfileEntryHookStub::EntryHookTrampoline(intptr_t function, intptr_t stack_pointer, Isolate* isolate) { FunctionEntryHook 
entry_hook = isolate->function_entry_hook(); - ASSERT(entry_hook != NULL); + DCHECK(entry_hook != NULL); entry_hook(function, stack_pointer); } @@ -696,7 +860,7 @@ static void InstallDescriptor(Isolate* isolate, HydrogenCodeStub* stub) { int major_key = stub->MajorKey(); CodeStubInterfaceDescriptor* descriptor = isolate->code_stub_interface_descriptor(major_key); - if (!descriptor->initialized()) { + if (!descriptor->IsInitialized()) { stub->InitializeInterfaceDescriptor(descriptor); } } @@ -733,9 +897,7 @@ void FastNewContextStub::InstallDescriptors(Isolate* isolate) { // static void FastCloneShallowArrayStub::InstallDescriptors(Isolate* isolate) { - FastCloneShallowArrayStub stub(isolate, - FastCloneShallowArrayStub::CLONE_ELEMENTS, - DONT_TRACK_ALLOCATION_SITE, 0); + FastCloneShallowArrayStub stub(isolate, DONT_TRACK_ALLOCATION_SITE); InstallDescriptor(isolate, &stub); } @@ -768,6 +930,21 @@ void RegExpConstructResultStub::InstallDescriptors(Isolate* isolate) { } +// static +void KeyedLoadGenericStub::InstallDescriptors(Isolate* isolate) { + KeyedLoadGenericStub stub(isolate); + InstallDescriptor(isolate, &stub); +} + + +// static +void StoreFieldStub::InstallDescriptors(Isolate* isolate) { + StoreFieldStub stub(isolate, FieldIndex::ForInObjectOffset(0), + Representation::None()); + InstallDescriptor(isolate, &stub); +} + + ArrayConstructorStub::ArrayConstructorStub(Isolate* isolate) : PlatformCodeStub(isolate), argument_count_(ANY) { ArrayConstructorStubBase::GenerateStubsAheadOfTime(isolate); diff --git a/deps/v8/src/code-stubs.h b/deps/v8/src/code-stubs.h index 8380266d92e..c1d051b3d74 100644 --- a/deps/v8/src/code-stubs.h +++ b/deps/v8/src/code-stubs.h @@ -5,77 +5,80 @@ #ifndef V8_CODE_STUBS_H_ #define V8_CODE_STUBS_H_ -#include "allocation.h" -#include "assembler.h" -#include "codegen.h" -#include "globals.h" -#include "macro-assembler.h" +#include "src/allocation.h" +#include "src/assembler.h" +#include "src/codegen.h" +#include "src/globals.h" +#include "src/macro-assembler.h" +#include "src/ostreams.h" namespace v8 { namespace internal { // List of code stubs used on all platforms. 
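
For reference, the switch from StringStream to the OStream interface above lets a stub be streamed directly through the friend operator<< declared later in code-stubs.h. A minimal standalone sketch of the SimpleListPrinter idiom, assuming std::ostream in place of V8's internal OStream:

    #include <iostream>
    #include <sstream>

    // Prints items as a comma-separated list, mirroring SimpleListPrinter above.
    class SimpleListPrinter {
     public:
      explicit SimpleListPrinter(std::ostream& os) : os_(os), first_(true) {}

      void Add(const char* s) {
        if (first_) {
          first_ = false;
        } else {
          os_ << ",";
        }
        os_ << s;
      }

     private:
      std::ostream& os_;
      bool first_;
    };

    int main() {
      std::ostringstream os;
      os << "(";
      SimpleListPrinter p(os);
      p.Add("Undefined");
      p.Add("Null");
      os << ")";
      std::cout << os.str() << "\n";  // Prints: (Undefined,Null)
      return 0;
    }
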
-#define CODE_STUB_LIST_ALL_PLATFORMS(V) \ - V(CallFunction) \ - V(CallConstruct) \ - V(BinaryOpIC) \ - V(BinaryOpICWithAllocationSite) \ - V(BinaryOpWithAllocationSite) \ - V(StringAdd) \ - V(SubString) \ - V(StringCompare) \ - V(Compare) \ - V(CompareIC) \ - V(CompareNilIC) \ - V(MathPow) \ - V(CallIC) \ - V(FunctionPrototype) \ - V(RecordWrite) \ - V(StoreBufferOverflow) \ - V(RegExpExec) \ - V(Instanceof) \ - V(ConvertToDouble) \ - V(WriteInt32ToHeapNumber) \ - V(StackCheck) \ - V(Interrupt) \ - V(FastNewClosure) \ - V(FastNewContext) \ - V(FastCloneShallowArray) \ - V(FastCloneShallowObject) \ - V(CreateAllocationSite) \ - V(ToBoolean) \ - V(ToNumber) \ - V(ArgumentsAccess) \ - V(RegExpConstructResult) \ - V(NumberToString) \ - V(DoubleToI) \ - V(CEntry) \ - V(JSEntry) \ - V(KeyedLoadElement) \ - V(ArrayNoArgumentConstructor) \ - V(ArraySingleArgumentConstructor) \ - V(ArrayNArgumentsConstructor) \ - V(InternalArrayNoArgumentConstructor) \ - V(InternalArraySingleArgumentConstructor) \ - V(InternalArrayNArgumentsConstructor) \ - V(KeyedStoreElement) \ - V(DebuggerStatement) \ - V(NameDictionaryLookup) \ - V(ElementsTransitionAndStore) \ - V(TransitionElementsKind) \ - V(StoreArrayLiteralElement) \ - V(StubFailureTrampoline) \ - V(ArrayConstructor) \ - V(InternalArrayConstructor) \ - V(ProfileEntryHook) \ - V(StoreGlobal) \ - V(CallApiFunction) \ - V(CallApiGetter) \ - /* IC Handler stubs */ \ - V(LoadField) \ - V(KeyedLoadField) \ - V(StringLength) \ - V(KeyedStringLength) +#define CODE_STUB_LIST_ALL_PLATFORMS(V) \ + V(CallFunction) \ + V(CallConstruct) \ + V(BinaryOpIC) \ + V(BinaryOpICWithAllocationSite) \ + V(BinaryOpWithAllocationSite) \ + V(StringAdd) \ + V(SubString) \ + V(StringCompare) \ + V(Compare) \ + V(CompareIC) \ + V(CompareNilIC) \ + V(MathPow) \ + V(CallIC) \ + V(CallIC_Array) \ + V(FunctionPrototype) \ + V(RecordWrite) \ + V(StoreBufferOverflow) \ + V(RegExpExec) \ + V(Instanceof) \ + V(ConvertToDouble) \ + V(WriteInt32ToHeapNumber) \ + V(StackCheck) \ + V(Interrupt) \ + V(FastNewClosure) \ + V(FastNewContext) \ + V(FastCloneShallowArray) \ + V(FastCloneShallowObject) \ + V(CreateAllocationSite) \ + V(ToBoolean) \ + V(ToNumber) \ + V(ArgumentsAccess) \ + V(RegExpConstructResult) \ + V(NumberToString) \ + V(DoubleToI) \ + V(CEntry) \ + V(JSEntry) \ + V(LoadElement) \ + V(KeyedLoadGeneric) \ + V(ArrayNoArgumentConstructor) \ + V(ArraySingleArgumentConstructor) \ + V(ArrayNArgumentsConstructor) \ + V(InternalArrayNoArgumentConstructor) \ + V(InternalArraySingleArgumentConstructor) \ + V(InternalArrayNArgumentsConstructor) \ + V(StoreElement) \ + V(DebuggerStatement) \ + V(NameDictionaryLookup) \ + V(ElementsTransitionAndStore) \ + V(TransitionElementsKind) \ + V(StoreArrayLiteralElement) \ + V(StubFailureTrampoline) \ + V(ArrayConstructor) \ + V(InternalArrayConstructor) \ + V(ProfileEntryHook) \ + V(StoreGlobal) \ + V(CallApiFunction) \ + V(CallApiGetter) \ + /* IC Handler stubs */ \ + V(LoadField) \ + V(StoreField) \ + V(LoadConstant) \ + V(StringLength) // List of code stubs only used on ARM 32 bits platforms. #if V8_TARGET_ARCH_ARM @@ -103,6 +106,12 @@ namespace internal { // List of code stubs only used on MIPS platforms. 
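
For reference, CODE_STUB_LIST_ALL_PLATFORMS is an X-macro: V8 instantiates it with different definitions of V to stamp out the Major enum, name tables, and similar artifacts. A toy sketch of the technique; STUB_LIST and its three entries are illustrative stand-ins for the real list, and kCount plays the role of NUMBER_OF_IDS:

    #include <cstdio>

    // The list is defined once...
    #define STUB_LIST(V) \
      V(CallFunction)    \
      V(CallConstruct)   \
      V(MathPow)

    // ...and expanded once per use site: here into an enum...
    #define DEF_ENUM(name) k##name,
    enum StubMajor { STUB_LIST(DEF_ENUM) kCount };
    #undef DEF_ENUM

    // ...and here into a parallel table of printable names.
    #define DEF_NAME(name) #name,
    static const char* kStubNames[] = { STUB_LIST(DEF_NAME) };
    #undef DEF_NAME

    int main() {
      for (int i = 0; i < kCount; i++) printf("%d: %s\n", i, kStubNames[i]);
      return 0;
    }
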
#if V8_TARGET_ARCH_MIPS +#define CODE_STUB_LIST_MIPS(V) \ + V(RegExpCEntry) \ + V(DirectCEntry) \ + V(StoreRegistersState) \ + V(RestoreRegistersState) +#elif V8_TARGET_ARCH_MIPS64 #define CODE_STUB_LIST_MIPS(V) \ V(RegExpCEntry) \ V(DirectCEntry) \ @@ -146,9 +155,11 @@ class CodeStub BASE_EMBEDDED { // Gets the major key from a code object that is a code stub or binary op IC. static Major GetMajorKey(Code* code_stub) { - return static_cast<Major>(code_stub->major_key()); + return MajorKeyFromKey(code_stub->stub_key()); } + static uint32_t NoCacheKey() { return MajorKeyBits::encode(NoCache); } + static const char* MajorName(Major major_key, bool allow_unknown_keys); explicit CodeStub(Isolate* isolate) : isolate_(isolate) { } @@ -169,40 +180,39 @@ class CodeStub BASE_EMBEDDED { bool FindCodeInCache(Code** code_out); // Returns information for computing the number key. - virtual Major MajorKey() = 0; - virtual int MinorKey() = 0; + virtual Major MajorKey() const = 0; + virtual int MinorKey() const = 0; - virtual InlineCacheState GetICState() { - return UNINITIALIZED; - } - virtual ExtraICState GetExtraICState() { - return kNoExtraICState; - } + virtual InlineCacheState GetICState() const { return UNINITIALIZED; } + virtual ExtraICState GetExtraICState() const { return kNoExtraICState; } virtual Code::StubType GetStubType() { return Code::NORMAL; } - virtual void PrintName(StringStream* stream); - - // Returns a name for logging/debugging purposes. - SmartArrayPointer<const char> GetName(); + friend OStream& operator<<(OStream& os, const CodeStub& s) { + s.PrintName(os); + return os; + } Isolate* isolate() const { return isolate_; } protected: - static bool CanUseFPRegisters(); - // Generates the assembler code for the stub. virtual Handle<Code> GenerateCode() = 0; - virtual void VerifyPlatformFeatures(); - // Returns whether the code generated for this stub needs to be allocated as // a fixed (non-moveable) code object. virtual bool NeedsImmovableCode() { return false; } - virtual void PrintBaseName(StringStream* stream); - virtual void PrintState(StringStream* stream) { } + virtual void PrintName(OStream& os) const; // NOLINT + virtual void PrintBaseName(OStream& os) const; // NOLINT + virtual void PrintState(OStream& os) const { ; } // NOLINT + + // Computes the key based on major and minor. + uint32_t GetKey() { + DCHECK(static_cast<int>(MajorKey()) < NUMBER_OF_IDS); + return MinorKeyBits::encode(MinorKey()) | MajorKeyBits::encode(MajorKey()); + } private: // Perform bookkeeping required after code generation when stub code is @@ -232,13 +242,6 @@ class CodeStub BASE_EMBEDDED { // If a stub uses a special cache override this. virtual bool UseSpecialCache() { return false; } - // Computes the key based on major and minor. - uint32_t GetKey() { - ASSERT(static_cast<int>(MajorKey()) < NUMBER_OF_IDS); - return MinorKeyBits::encode(MinorKey()) | - MajorKeyBits::encode(MajorKey()); - } - STATIC_ASSERT(NUMBER_OF_IDS < (1 << kStubMajorKeyBits)); class MajorKeyBits: public BitField<uint32_t, 0, kStubMajorKeyBits> {}; class MinorKeyBits: public BitField<uint32_t, @@ -268,97 +271,162 @@ class PlatformCodeStub : public CodeStub { enum StubFunctionMode { NOT_JS_FUNCTION_STUB_MODE, JS_FUNCTION_STUB_MODE }; enum HandlerArgumentsMode { DONT_PASS_ARGUMENTS, PASS_ARGUMENTS }; -struct CodeStubInterfaceDescriptor { - CodeStubInterfaceDescriptor(); - int register_param_count_; - Register stack_parameter_count_; - // if hint_stack_parameter_count_ > 0, the code stub can optimize the - // return sequence. 
Default value is -1, which means it is ignored. -  int hint_stack_parameter_count_; -  StubFunctionMode function_mode_; -  Register* register_params_; +class PlatformInterfaceDescriptor; -  Address deoptimization_handler_; -  HandlerArgumentsMode handler_arguments_mode_; -  bool initialized() const { return register_param_count_ >= 0; } +class InterfaceDescriptor { + public: +  bool IsInitialized() const { return register_param_count_ >= 0; } + +  int GetEnvironmentLength() const { return register_param_count_; } -  int environment_length() const { -    return register_param_count_; +  int GetRegisterParameterCount() const { return register_param_count_; } + +  Register GetParameterRegister(int index) const { +    return register_params_[index]; +  } + +  Representation GetParameterRepresentation(int index) const { +    DCHECK(index < register_param_count_); +    if (register_param_representations_.get() == NULL) { +      return Representation::Tagged(); +    } + +    return register_param_representations_[index]; +  } + +  // "Environment" versions of parameter functions. The first register +  // parameter (context) is not included. +  int GetEnvironmentParameterCount() const { +    return GetEnvironmentLength() - 1; } +  Register GetEnvironmentParameterRegister(int index) const { +    return GetParameterRegister(index + 1); +  } + +  Representation GetEnvironmentParameterRepresentation(int index) const { +    return GetParameterRepresentation(index + 1); +  } + +  // Some platforms have extra information to associate with the descriptor. +  PlatformInterfaceDescriptor* platform_specific_descriptor() const { +    return platform_specific_descriptor_; +  } + +  static const Register ContextRegister(); + + protected: +  InterfaceDescriptor(); +  virtual ~InterfaceDescriptor() {} + +  void Initialize(int register_parameter_count, Register* registers, + Representation* register_param_representations, + PlatformInterfaceDescriptor* platform_descriptor = NULL); + + private: +  int register_param_count_; + +  // The Register params are allocated dynamically by the +  // InterfaceDescriptor, and freed on destruction. This is because static +  // arrays of Registers cause creation of runtime static initializers +  // which we don't want. +  SmartArrayPointer<Register> register_params_; +  // Specifies the Representations for the stub's parameters. Points to an + // array of Representations of the same length as the number of parameters + // to the stub, or if NULL (the default value), the Representation of each + // parameter is assumed to be Tagged(). 
+ SmartArrayPointer<Representation> register_param_representations_; + + PlatformInterfaceDescriptor* platform_specific_descriptor_; + + DISALLOW_COPY_AND_ASSIGN(InterfaceDescriptor); +}; + + +class CodeStubInterfaceDescriptor: public InterfaceDescriptor { + public: + CodeStubInterfaceDescriptor(); + + void Initialize(CodeStub::Major major, int register_parameter_count, + Register* registers, Address deoptimization_handler = NULL, + Representation* register_param_representations = NULL, + int hint_stack_parameter_count = -1, + StubFunctionMode function_mode = NOT_JS_FUNCTION_STUB_MODE); + void Initialize(CodeStub::Major major, int register_parameter_count, + Register* registers, Register stack_parameter_count, + Address deoptimization_handler = NULL, + Representation* register_param_representations = NULL, + int hint_stack_parameter_count = -1, + StubFunctionMode function_mode = NOT_JS_FUNCTION_STUB_MODE, + HandlerArgumentsMode handler_mode = DONT_PASS_ARGUMENTS); + void SetMissHandler(ExternalReference handler) { miss_handler_ = handler; has_miss_handler_ = true; // Our miss handler infrastructure doesn't currently support // variable stack parameter counts. - ASSERT(!stack_parameter_count_.is_valid()); + DCHECK(!stack_parameter_count_.is_valid()); } - ExternalReference miss_handler() { - ASSERT(has_miss_handler_); + ExternalReference miss_handler() const { + DCHECK(has_miss_handler_); return miss_handler_; } - bool has_miss_handler() { + bool has_miss_handler() const { return has_miss_handler_; } - Register GetParameterRegister(int index) const { - return register_params_[index]; - } - - bool IsParameterCountRegister(int index) { - return GetParameterRegister(index).is(stack_parameter_count_); + bool IsEnvironmentParameterCountRegister(int index) const { + return GetEnvironmentParameterRegister(index).is(stack_parameter_count_); } - int GetHandlerParameterCount() { - int params = environment_length(); + int GetHandlerParameterCount() const { + int params = GetEnvironmentParameterCount(); if (handler_arguments_mode_ == PASS_ARGUMENTS) { params += 1; } return params; } + int hint_stack_parameter_count() const { return hint_stack_parameter_count_; } + Register stack_parameter_count() const { return stack_parameter_count_; } + StubFunctionMode function_mode() const { return function_mode_; } + Address deoptimization_handler() const { return deoptimization_handler_; } + CodeStub::Major MajorKey() const { return major_; } + private: + Register stack_parameter_count_; + // If hint_stack_parameter_count_ > 0, the code stub can optimize the + // return sequence. Default value is -1, which means it is ignored. 
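
Note the indexing convention in the new InterfaceDescriptor above: register parameter 0 is always the context register, and the GetEnvironmentParameter* accessors skip it, which is why they shift every index by one. A minimal sketch of just that convention; DescriptorSketch and the std::string stand-in for Register are illustrative:

    #include <cassert>
    #include <string>
    #include <vector>

    struct DescriptorSketch {
      std::vector<std::string> register_params;  // Slot 0 is the context.

      int GetRegisterParameterCount() const {
        return static_cast<int>(register_params.size());
      }
      std::string GetParameterRegister(int index) const {
        return register_params[index];
      }
      // "Environment" accessors exclude the leading context slot.
      int GetEnvironmentParameterCount() const {
        return GetRegisterParameterCount() - 1;
      }
      std::string GetEnvironmentParameterRegister(int index) const {
        return GetParameterRegister(index + 1);  // Shift past the context.
      }
    };

    int main() {
      DescriptorSketch d{{"cp", "receiver", "name"}};
      assert(d.GetEnvironmentParameterCount() == 2);
      assert(d.GetEnvironmentParameterRegister(0) == "receiver");
      return 0;
    }
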
+ int hint_stack_parameter_count_; + StubFunctionMode function_mode_; + + Address deoptimization_handler_; + HandlerArgumentsMode handler_arguments_mode_; + ExternalReference miss_handler_; bool has_miss_handler_; + CodeStub::Major major_; }; -struct PlatformCallInterfaceDescriptor; - - -struct CallInterfaceDescriptor { - CallInterfaceDescriptor() - : register_param_count_(-1), - register_params_(NULL), - param_representations_(NULL), - platform_specific_descriptor_(NULL) { } - - bool initialized() const { return register_param_count_ >= 0; } - - int environment_length() const { - return register_param_count_; - } - - Representation GetParameterRepresentation(int index) const { - return param_representations_[index]; - } - - Register GetParameterRegister(int index) const { - return register_params_[index]; - } +class CallInterfaceDescriptor: public InterfaceDescriptor { + public: + CallInterfaceDescriptor() { } - PlatformCallInterfaceDescriptor* platform_specific_descriptor() const { - return platform_specific_descriptor_; - } + // A copy of the passed in registers and param_representations is made + // and owned by the CallInterfaceDescriptor. - int register_param_count_; - Register* register_params_; - Representation* param_representations_; - PlatformCallInterfaceDescriptor* platform_specific_descriptor_; + // TODO(mvstanton): Instead of taking parallel arrays register and + // param_representations, how about a struct that puts the representation + // and register side by side (eg, RegRep(r1, Representation::Tagged()). + // The same should go for the CodeStubInterfaceDescriptor class. + void Initialize(int register_parameter_count, Register* registers, + Representation* param_representations, + PlatformInterfaceDescriptor* platform_descriptor = NULL); }; @@ -369,7 +437,8 @@ class HydrogenCodeStub : public CodeStub { INITIALIZED }; - HydrogenCodeStub(Isolate* isolate, InitializationState state = INITIALIZED) + explicit HydrogenCodeStub(Isolate* isolate, + InitializationState state = INITIALIZED) : CodeStub(isolate) { is_uninitialized_ = (state == UNINITIALIZED); } @@ -394,7 +463,7 @@ class HydrogenCodeStub : public CodeStub { // Retrieve the code for the stub. Generate the code if needed. virtual Handle<Code> GenerateCode() = 0; - virtual int NotMissMinorKey() = 0; + virtual int NotMissMinorKey() const = 0; Handle<Code> GenerateLightweightMissCode(); @@ -406,7 +475,7 @@ class HydrogenCodeStub : public CodeStub { class IsMissBits: public BitField<bool, kStubMinorKeyBits - 1, 1> {}; void GenerateLightweightMiss(MacroAssembler* masm); - virtual int MinorKey() { + virtual int MinorKey() const { return IsMissBits::encode(is_uninitialized_) | MinorKeyBits::encode(NotMissMinorKey()); } @@ -435,15 +504,19 @@ class RuntimeCallHelper { } } // namespace v8::internal #if V8_TARGET_ARCH_IA32 -#include "ia32/code-stubs-ia32.h" +#include "src/ia32/code-stubs-ia32.h" #elif V8_TARGET_ARCH_X64 -#include "x64/code-stubs-x64.h" +#include "src/x64/code-stubs-x64.h" #elif V8_TARGET_ARCH_ARM64 -#include "arm64/code-stubs-arm64.h" +#include "src/arm64/code-stubs-arm64.h" #elif V8_TARGET_ARCH_ARM -#include "arm/code-stubs-arm.h" +#include "src/arm/code-stubs-arm.h" #elif V8_TARGET_ARCH_MIPS -#include "mips/code-stubs-mips.h" +#include "src/mips/code-stubs-mips.h" +#elif V8_TARGET_ARCH_MIPS64 +#include "src/mips64/code-stubs-mips64.h" +#elif V8_TARGET_ARCH_X87 +#include "src/x87/code-stubs-x87.h" #else #error Unsupported target architecture. 
#endif @@ -491,8 +564,8 @@ class ToNumberStub: public HydrogenCodeStub { } private: - Major MajorKey() { return ToNumber; } - int NotMissMinorKey() { return 0; } + Major MajorKey() const { return ToNumber; } + int NotMissMinorKey() const { return 0; } }; @@ -511,8 +584,8 @@ class NumberToStringStub V8_FINAL : public HydrogenCodeStub { static const int kNumber = 0; private: - virtual Major MajorKey() V8_OVERRIDE { return NumberToString; } - virtual int NotMissMinorKey() V8_OVERRIDE { return 0; } + virtual Major MajorKey() const V8_OVERRIDE { return NumberToString; } + virtual int NotMissMinorKey() const V8_OVERRIDE { return 0; } }; @@ -539,8 +612,8 @@ class FastNewClosureStub : public HydrogenCodeStub { class StrictModeBits: public BitField<bool, 0, 1> {}; class IsGeneratorBits: public BitField<bool, 1, 1> {}; - Major MajorKey() { return FastNewClosure; } - int NotMissMinorKey() { + Major MajorKey() const { return FastNewClosure; } + int NotMissMinorKey() const { return StrictModeBits::encode(strict_mode_ == STRICT) | IsGeneratorBits::encode(is_generator_); } @@ -556,7 +629,7 @@ class FastNewContextStub V8_FINAL : public HydrogenCodeStub { FastNewContextStub(Isolate* isolate, int slots) : HydrogenCodeStub(isolate), slots_(slots) { - ASSERT(slots_ > 0 && slots_ <= kMaximumSlots); + DCHECK(slots_ > 0 && slots_ <= kMaximumSlots); } virtual Handle<Code> GenerateCode() V8_OVERRIDE; @@ -568,8 +641,8 @@ class FastNewContextStub V8_FINAL : public HydrogenCodeStub { int slots() const { return slots_; } - virtual Major MajorKey() V8_OVERRIDE { return FastNewContext; } - virtual int NotMissMinorKey() V8_OVERRIDE { return slots_; } + virtual Major MajorKey() const V8_OVERRIDE { return FastNewContext; } + virtual int NotMissMinorKey() const V8_OVERRIDE { return slots_; } // Parameters accessed via CodeStubGraphBuilder::GetParameter() static const int kFunction = 0; @@ -581,51 +654,16 @@ class FastNewContextStub V8_FINAL : public HydrogenCodeStub { class FastCloneShallowArrayStub : public HydrogenCodeStub { public: - // Maximum length of copied elements array. - static const int kMaximumClonedLength = 8; - enum Mode { - CLONE_ELEMENTS, - CLONE_DOUBLE_ELEMENTS, - COPY_ON_WRITE_ELEMENTS, - CLONE_ANY_ELEMENTS, - LAST_CLONE_MODE = CLONE_ANY_ELEMENTS - }; - - static const int kFastCloneModeCount = LAST_CLONE_MODE + 1; - FastCloneShallowArrayStub(Isolate* isolate, - Mode mode, - AllocationSiteMode allocation_site_mode, - int length) + AllocationSiteMode allocation_site_mode) : HydrogenCodeStub(isolate), - mode_(mode), - allocation_site_mode_(allocation_site_mode), - length_((mode == COPY_ON_WRITE_ELEMENTS) ? 
0 : length) { - ASSERT_GE(length_, 0); - ASSERT_LE(length_, kMaximumClonedLength); - } + allocation_site_mode_(allocation_site_mode) {} - Mode mode() const { return mode_; } - int length() const { return length_; } AllocationSiteMode allocation_site_mode() const { return allocation_site_mode_; } - ElementsKind ComputeElementsKind() const { - switch (mode()) { - case CLONE_ELEMENTS: - case COPY_ON_WRITE_ELEMENTS: - return FAST_ELEMENTS; - case CLONE_DOUBLE_ELEMENTS: - return FAST_DOUBLE_ELEMENTS; - case CLONE_ANY_ELEMENTS: - /*fall-through*/; - } - UNREACHABLE(); - return LAST_ELEMENTS_KIND; - } - - virtual Handle<Code> GenerateCode() V8_OVERRIDE; + virtual Handle<Code> GenerateCode(); virtual void InitializeInterfaceDescriptor( CodeStubInterfaceDescriptor* descriptor) V8_OVERRIDE; @@ -633,22 +671,13 @@ class FastCloneShallowArrayStub : public HydrogenCodeStub { static void InstallDescriptors(Isolate* isolate); private: - Mode mode_; AllocationSiteMode allocation_site_mode_; - int length_; class AllocationSiteModeBits: public BitField<AllocationSiteMode, 0, 1> {}; - class ModeBits: public BitField<Mode, 1, 4> {}; - class LengthBits: public BitField<int, 5, 4> {}; // Ensure data fits within available bits. - STATIC_ASSERT(LAST_ALLOCATION_SITE_MODE == 1); - STATIC_ASSERT(kFastCloneModeCount < 16); - STATIC_ASSERT(kMaximumClonedLength < 16); - Major MajorKey() { return FastCloneShallowArray; } - int NotMissMinorKey() { - return AllocationSiteModeBits::encode(allocation_site_mode_) - | ModeBits::encode(mode_) - | LengthBits::encode(length_); + Major MajorKey() const { return FastCloneShallowArray; } + int NotMissMinorKey() const { + return AllocationSiteModeBits::encode(allocation_site_mode_); } }; @@ -660,8 +689,8 @@ class FastCloneShallowObjectStub : public HydrogenCodeStub { FastCloneShallowObjectStub(Isolate* isolate, int length) : HydrogenCodeStub(isolate), length_(length) { - ASSERT_GE(length_, 0); - ASSERT_LE(length_, kMaximumClonedProperties); + DCHECK_GE(length_, 0); + DCHECK_LE(length_, kMaximumClonedProperties); } int length() const { return length_; } @@ -674,8 +703,8 @@ class FastCloneShallowObjectStub : public HydrogenCodeStub { private: int length_; - Major MajorKey() { return FastCloneShallowObject; } - int NotMissMinorKey() { return length_; } + Major MajorKey() const { return FastCloneShallowObject; } + int NotMissMinorKey() const { return length_; } DISALLOW_COPY_AND_ASSIGN(FastCloneShallowObjectStub); }; @@ -694,8 +723,8 @@ class CreateAllocationSiteStub : public HydrogenCodeStub { CodeStubInterfaceDescriptor* descriptor) V8_OVERRIDE; private: - Major MajorKey() { return CreateAllocationSite; } - int NotMissMinorKey() { return 0; } + Major MajorKey() const { return CreateAllocationSite; } + int NotMissMinorKey() const { return 0; } DISALLOW_COPY_AND_ASSIGN(CreateAllocationSiteStub); }; @@ -718,9 +747,12 @@ class InstanceofStub: public PlatformCodeStub { void Generate(MacroAssembler* masm); + virtual void InitializeInterfaceDescriptor( + CodeStubInterfaceDescriptor* descriptor); + private: - Major MajorKey() { return Instanceof; } - int MinorKey() { return static_cast<int>(flags_); } + Major MajorKey() const { return Instanceof; } + int MinorKey() const { return static_cast<int>(flags_); } bool HasArgsInRegisters() const { return (flags_ & kArgsInRegisters) != 0; @@ -734,7 +766,7 @@ class InstanceofStub: public PlatformCodeStub { return (flags_ & kReturnTrueFalseObject) != 0; } - virtual void PrintName(StringStream* stream); + virtual void PrintName(OStream& os) const 
V8_OVERRIDE; // NOLINT Flags flags_; }; @@ -758,10 +790,10 @@ class ArrayConstructorStub: public PlatformCodeStub { private: void GenerateDispatchToArrayStub(MacroAssembler* masm, AllocationSiteOverrideMode mode); - virtual void PrintName(StringStream* stream); + virtual void PrintName(OStream& os) const V8_OVERRIDE; // NOLINT - virtual CodeStub::Major MajorKey() { return ArrayConstructor; } - virtual int MinorKey() { return argument_count_; } + virtual CodeStub::Major MajorKey() const { return ArrayConstructor; } + virtual int MinorKey() const { return argument_count_; } ArgumentCountKey argument_count_; }; @@ -774,8 +806,8 @@ class InternalArrayConstructorStub: public PlatformCodeStub { void Generate(MacroAssembler* masm); private: - virtual CodeStub::Major MajorKey() { return InternalArrayConstructor; } - virtual int MinorKey() { return 0; } + virtual CodeStub::Major MajorKey() const { return InternalArrayConstructor; } + virtual int MinorKey() const { return 0; } void GenerateCase(MacroAssembler* masm, ElementsKind kind); }; @@ -790,40 +822,13 @@ class MathPowStub: public PlatformCodeStub { virtual void Generate(MacroAssembler* masm); private: - virtual CodeStub::Major MajorKey() { return MathPow; } - virtual int MinorKey() { return exponent_type_; } + virtual CodeStub::Major MajorKey() const { return MathPow; } + virtual int MinorKey() const { return exponent_type_; } ExponentType exponent_type_; }; -class ICStub: public PlatformCodeStub { - public: - ICStub(Isolate* isolate, Code::Kind kind) - : PlatformCodeStub(isolate), kind_(kind) { } - virtual Code::Kind GetCodeKind() const { return kind_; } - virtual InlineCacheState GetICState() { return MONOMORPHIC; } - - bool Describes(Code* code) { - return GetMajorKey(code) == MajorKey() && code->stub_info() == MinorKey(); - } - - protected: - class KindBits: public BitField<Code::Kind, 0, 4> {}; - virtual void FinishCode(Handle<Code> code) { - code->set_stub_info(MinorKey()); - } - Code::Kind kind() { return kind_; } - - virtual int MinorKey() { - return KindBits::encode(kind_); - } - - private: - Code::Kind kind_; -}; - - class CallICStub: public PlatformCodeStub { public: CallICStub(Isolate* isolate, const CallIC::State& state) @@ -844,179 +849,161 @@ class CallICStub: public PlatformCodeStub { return Code::CALL_IC; } - virtual InlineCacheState GetICState() V8_FINAL V8_OVERRIDE { - return state_.GetICState(); - } + virtual InlineCacheState GetICState() const V8_OVERRIDE { return DEFAULT; } - virtual ExtraICState GetExtraICState() V8_FINAL V8_OVERRIDE { + virtual ExtraICState GetExtraICState() const V8_FINAL V8_OVERRIDE { return state_.GetExtraICState(); } protected: - virtual int MinorKey() { return GetExtraICState(); } - virtual void PrintState(StringStream* stream) V8_FINAL V8_OVERRIDE; + virtual int MinorKey() const { return GetExtraICState(); } + virtual void PrintState(OStream& os) const V8_OVERRIDE; // NOLINT - private: - virtual CodeStub::Major MajorKey() { return CallIC; } + virtual CodeStub::Major MajorKey() const { return CallIC; } // Code generation helpers. 
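
CallICStub keeps the base PrintName and only overrides PrintState; in code-stubs.cc the default PrintName composes PrintBaseName followed by PrintState. A sketch of how those hooks plausibly fit together; the class names and the printed state string are illustrative stand-ins:

    #include <iostream>

    class StubSketch {
     public:
      virtual ~StubSketch() {}
      // Mirrors the friend operator<< on CodeStub: stream the composed name.
      friend std::ostream& operator<<(std::ostream& os, const StubSketch& s) {
        s.PrintName(os);
        return os;
      }

     protected:
      virtual void PrintName(std::ostream& os) const {
        PrintBaseName(os);
        PrintState(os);
      }
      virtual void PrintBaseName(std::ostream& os) const { os << "StubSketch"; }
      virtual void PrintState(std::ostream& os) const {}
    };

    class CallICSketch : public StubSketch {
     protected:
      virtual void PrintBaseName(std::ostream& os) const { os << "CallICStub"; }
      virtual void PrintState(std::ostream& os) const { os << "(args(2))"; }
    };

    int main() {
      CallICSketch stub;
      std::cout << stub << "\n";  // Prints: CallICStub(args(2))
      return 0;
    }
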
- void GenerateMiss(MacroAssembler* masm); + void GenerateMiss(MacroAssembler* masm, IC::UtilityId id); - CallIC::State state_; + const CallIC::State state_; }; -class FunctionPrototypeStub: public ICStub { +class CallIC_ArrayStub: public CallICStub { public: - FunctionPrototypeStub(Isolate* isolate, Code::Kind kind) - : ICStub(isolate, kind) { } - virtual void Generate(MacroAssembler* masm); - - private: - virtual CodeStub::Major MajorKey() { return FunctionPrototype; } -}; + CallIC_ArrayStub(Isolate* isolate, const CallIC::State& state_in) + : CallICStub(isolate, state_in) {} + virtual void Generate(MacroAssembler* masm); -class StoreICStub: public ICStub { - public: - StoreICStub(Isolate* isolate, Code::Kind kind, StrictMode strict_mode) - : ICStub(isolate, kind), strict_mode_(strict_mode) { } - - protected: - virtual ExtraICState GetExtraICState() { - return StoreIC::ComputeExtraICState(strict_mode_); + virtual InlineCacheState GetICState() const V8_FINAL V8_OVERRIDE { + return MONOMORPHIC; } - private: - STATIC_ASSERT(KindBits::kSize == 4); - class StrictModeBits: public BitField<bool, 4, 1> {}; - virtual int MinorKey() { - return KindBits::encode(kind()) | StrictModeBits::encode(strict_mode_); - } + protected: + virtual void PrintState(OStream& os) const V8_OVERRIDE; // NOLINT - StrictMode strict_mode_; + virtual CodeStub::Major MajorKey() const { return CallIC_Array; } }; -class HICStub: public HydrogenCodeStub { +// TODO(verwaest): Translate to hydrogen code stub. +class FunctionPrototypeStub : public PlatformCodeStub { public: - explicit HICStub(Isolate* isolate) : HydrogenCodeStub(isolate) { } - virtual Code::Kind GetCodeKind() const { return kind(); } - virtual InlineCacheState GetICState() { return MONOMORPHIC; } + explicit FunctionPrototypeStub(Isolate* isolate) + : PlatformCodeStub(isolate) {} + virtual void Generate(MacroAssembler* masm); + virtual Code::Kind GetCodeKind() const { return Code::HANDLER; } - protected: - class KindBits: public BitField<Code::Kind, 0, 4> {}; - virtual Code::Kind kind() const = 0; + private: + virtual CodeStub::Major MajorKey() const { return FunctionPrototype; } + virtual int MinorKey() const { return 0; } }; -class HandlerStub: public HICStub { +class HandlerStub : public HydrogenCodeStub { public: virtual Code::Kind GetCodeKind() const { return Code::HANDLER; } - virtual ExtraICState GetExtraICState() { return kind(); } + virtual ExtraICState GetExtraICState() const { return kind(); } + virtual InlineCacheState GetICState() const { return MONOMORPHIC; } + + virtual void InitializeInterfaceDescriptor( + CodeStubInterfaceDescriptor* descriptor) V8_OVERRIDE; protected: - explicit HandlerStub(Isolate* isolate) : HICStub(isolate) { } - virtual int NotMissMinorKey() { return bit_field_; } + explicit HandlerStub(Isolate* isolate) + : HydrogenCodeStub(isolate), bit_field_(0) {} + virtual int NotMissMinorKey() const { return bit_field_; } + virtual Code::Kind kind() const = 0; int bit_field_; }; class LoadFieldStub: public HandlerStub { public: - LoadFieldStub(Isolate* isolate, - bool inobject, - int index, Representation representation) - : HandlerStub(isolate) { - Initialize(Code::LOAD_IC, inobject, index, representation); + LoadFieldStub(Isolate* isolate, FieldIndex index) + : HandlerStub(isolate), index_(index) { + int property_index_key = index_.GetFieldAccessStubKey(); + bit_field_ = EncodedLoadFieldByIndexBits::encode(property_index_key); } virtual Handle<Code> GenerateCode() V8_OVERRIDE; - virtual void InitializeInterfaceDescriptor( - 
CodeStubInterfaceDescriptor* descriptor) V8_OVERRIDE; + FieldIndex index() const { return index_; } - Representation representation() { - if (unboxed_double()) return Representation::Double(); - return Representation::Tagged(); - } + protected: + explicit LoadFieldStub(Isolate* isolate); + virtual Code::Kind kind() const { return Code::LOAD_IC; } + virtual Code::StubType GetStubType() { return Code::FAST; } - virtual Code::Kind kind() const { - return KindBits::decode(bit_field_); - } + private: + class EncodedLoadFieldByIndexBits : public BitField<int, 0, 13> {}; + virtual CodeStub::Major MajorKey() const { return LoadField; } + FieldIndex index_; +}; - bool is_inobject() { - return InobjectBits::decode(bit_field_); - } - int offset() { - int index = IndexBits::decode(bit_field_); - int offset = index * kPointerSize; - if (is_inobject()) return offset; - return FixedArray::kHeaderSize + offset; +class LoadConstantStub : public HandlerStub { + public: + LoadConstantStub(Isolate* isolate, int descriptor) : HandlerStub(isolate) { + bit_field_ = descriptor; } - bool unboxed_double() { - return UnboxedDoubleBits::decode(bit_field_); - } + virtual Handle<Code> GenerateCode() V8_OVERRIDE; - virtual Code::StubType GetStubType() { return Code::FAST; } + int descriptor() const { return bit_field_; } protected: - explicit LoadFieldStub(Isolate* isolate) : HandlerStub(isolate) { } - - void Initialize(Code::Kind kind, - bool inobject, - int index, - Representation representation) { - bit_field_ = KindBits::encode(kind) - | InobjectBits::encode(inobject) - | IndexBits::encode(index) - | UnboxedDoubleBits::encode(representation.IsDouble()); - } + explicit LoadConstantStub(Isolate* isolate); + virtual Code::Kind kind() const { return Code::LOAD_IC; } + virtual Code::StubType GetStubType() { return Code::FAST; } private: - STATIC_ASSERT(KindBits::kSize == 4); - class InobjectBits: public BitField<bool, 4, 1> {}; - class IndexBits: public BitField<int, 5, 11> {}; - class UnboxedDoubleBits: public BitField<bool, 16, 1> {}; - virtual CodeStub::Major MajorKey() { return LoadField; } + virtual CodeStub::Major MajorKey() const { return LoadConstant; } }; class StringLengthStub: public HandlerStub { public: - explicit StringLengthStub(Isolate* isolate) : HandlerStub(isolate) { - Initialize(Code::LOAD_IC); - } + explicit StringLengthStub(Isolate* isolate) : HandlerStub(isolate) {} virtual Handle<Code> GenerateCode() V8_OVERRIDE; - virtual void InitializeInterfaceDescriptor( - CodeStubInterfaceDescriptor* descriptor) V8_OVERRIDE; protected: - virtual Code::Kind kind() const { - return KindBits::decode(bit_field_); - } - - void Initialize(Code::Kind kind) { - bit_field_ = KindBits::encode(kind); - } + virtual Code::Kind kind() const { return Code::LOAD_IC; } + virtual Code::StubType GetStubType() { return Code::FAST; } private: - virtual CodeStub::Major MajorKey() { return StringLength; } + virtual CodeStub::Major MajorKey() const { return StringLength; } }; -class KeyedStringLengthStub: public StringLengthStub { +class StoreFieldStub : public HandlerStub { public: - explicit KeyedStringLengthStub(Isolate* isolate) : StringLengthStub(isolate) { - Initialize(Code::KEYED_LOAD_IC); + StoreFieldStub(Isolate* isolate, FieldIndex index, + Representation representation) + : HandlerStub(isolate), index_(index), representation_(representation) { + int property_index_key = index_.GetFieldAccessStubKey(); + bit_field_ = EncodedStoreFieldByIndexBits::encode(property_index_key) | + RepresentationBits::encode( + 
PropertyDetails::EncodeRepresentation(representation)); } - virtual void InitializeInterfaceDescriptor( - CodeStubInterfaceDescriptor* descriptor) V8_OVERRIDE; + + virtual Handle<Code> GenerateCode() V8_OVERRIDE; + + FieldIndex index() const { return index_; } + Representation representation() { return representation_; } + static void InstallDescriptors(Isolate* isolate); + + protected: + explicit StoreFieldStub(Isolate* isolate); + virtual Code::Kind kind() const { return Code::STORE_IC; } + virtual Code::StubType GetStubType() { return Code::FAST; } private: - virtual CodeStub::Major MajorKey() { return KeyedStringLength; } + class EncodedStoreFieldByIndexBits : public BitField<int, 0, 13> {}; + class RepresentationBits : public BitField<int, 13, 4> {}; + virtual CodeStub::Major MajorKey() const { return StoreField; } + FieldIndex index_; + Representation representation_; }; @@ -1051,9 +1038,6 @@ class StoreGlobalStub : public HandlerStub { virtual Handle<Code> GenerateCode() V8_OVERRIDE; - virtual void InitializeInterfaceDescriptor( - CodeStubInterfaceDescriptor* descriptor) V8_OVERRIDE; - bool is_constant() const { return IsConstantBits::decode(bit_field_); } @@ -1072,7 +1056,7 @@ class StoreGlobalStub : public HandlerStub { } private: - Major MajorKey() { return StoreGlobal; } + Major MajorKey() const { return StoreGlobal; } class IsConstantBits: public BitField<bool, 0, 1> {}; class RepresentationBits: public BitField<Representation::Kind, 1, 8> {}; @@ -1092,13 +1076,13 @@ class CallApiFunctionStub : public PlatformCodeStub { IsStoreBits::encode(is_store) | CallDataUndefinedBits::encode(call_data_undefined) | ArgumentBits::encode(argc); - ASSERT(!is_store || argc == 1); + DCHECK(!is_store || argc == 1); } private: virtual void Generate(MacroAssembler* masm) V8_OVERRIDE; - virtual Major MajorKey() V8_OVERRIDE { return CallApiFunction; } - virtual int MinorKey() V8_OVERRIDE { return bit_field_; } + virtual Major MajorKey() const V8_OVERRIDE { return CallApiFunction; } + virtual int MinorKey() const V8_OVERRIDE { return bit_field_; } class IsStoreBits: public BitField<bool, 0, 1> {}; class CallDataUndefinedBits: public BitField<bool, 1, 1> {}; @@ -1116,36 +1100,20 @@ class CallApiGetterStub : public PlatformCodeStub { private: virtual void Generate(MacroAssembler* masm) V8_OVERRIDE; - virtual Major MajorKey() V8_OVERRIDE { return CallApiGetter; } - virtual int MinorKey() V8_OVERRIDE { return 0; } + virtual Major MajorKey() const V8_OVERRIDE { return CallApiGetter; } + virtual int MinorKey() const V8_OVERRIDE { return 0; } DISALLOW_COPY_AND_ASSIGN(CallApiGetterStub); }; -class KeyedLoadFieldStub: public LoadFieldStub { - public: - KeyedLoadFieldStub(Isolate* isolate, - bool inobject, - int index, Representation representation) - : LoadFieldStub(isolate) { - Initialize(Code::KEYED_LOAD_IC, inobject, index, representation); - } - - virtual void InitializeInterfaceDescriptor( - CodeStubInterfaceDescriptor* descriptor) V8_OVERRIDE; - - private: - virtual CodeStub::Major MajorKey() { return KeyedLoadField; } -}; - - class BinaryOpICStub : public HydrogenCodeStub { public: - BinaryOpICStub(Isolate* isolate, Token::Value op, OverwriteMode mode) + BinaryOpICStub(Isolate* isolate, Token::Value op, + OverwriteMode mode = NO_OVERWRITE) : HydrogenCodeStub(isolate, UNINITIALIZED), state_(isolate, op, mode) {} - BinaryOpICStub(Isolate* isolate, const BinaryOpIC::State& state) + explicit BinaryOpICStub(Isolate* isolate, const BinaryOpIC::State& state) : HydrogenCodeStub(isolate), state_(state) {} 
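
StoreFieldStub's EncodedStoreFieldByIndexBits and RepresentationBits above, like the MajorKeyBits/MinorKeyBits split in GetKey(), use V8's BitField<Type, shift, size> template to pack several fields into a single 32-bit stub key. A self-contained sketch of the encode/decode arithmetic; this BitField is a simplified stand-in for V8's:

    #include <cassert>
    #include <cstdint>

    // Packs a value of the given bit width at the given offset in a uint32_t.
    template <typename T, int shift, int size>
    struct BitField {
      static const uint32_t kMask = ((1u << size) - 1) << shift;
      static uint32_t encode(T value) {
        return static_cast<uint32_t>(value) << shift;
      }
      static T decode(uint32_t field) {
        return static_cast<T>((field & kMask) >> shift);
      }
    };

    // Same layout as the StoreFieldStub fields: 13 bits of field index,
    // 4 bits of representation.
    typedef BitField<int, 0, 13> IndexBits;
    typedef BitField<int, 13, 4> ReprBits;

    int main() {
      uint32_t key = IndexBits::encode(42) | ReprBits::encode(3);
      assert(IndexBits::decode(key) == 42);
      assert(ReprBits::decode(key) == 3);
      return 0;
    }
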
static void GenerateAheadOfTime(Isolate* isolate); @@ -1159,26 +1127,22 @@ class BinaryOpICStub : public HydrogenCodeStub { return Code::BINARY_OP_IC; } - virtual InlineCacheState GetICState() V8_FINAL V8_OVERRIDE { + virtual InlineCacheState GetICState() const V8_FINAL V8_OVERRIDE { return state_.GetICState(); } - virtual ExtraICState GetExtraICState() V8_FINAL V8_OVERRIDE { + virtual ExtraICState GetExtraICState() const V8_FINAL V8_OVERRIDE { return state_.GetExtraICState(); } - virtual void VerifyPlatformFeatures() V8_FINAL V8_OVERRIDE { - ASSERT(CpuFeatures::VerifyCrossCompiling(SSE2)); - } - virtual Handle<Code> GenerateCode() V8_OVERRIDE; const BinaryOpIC::State& state() const { return state_; } - virtual void PrintState(StringStream* stream) V8_FINAL V8_OVERRIDE; + virtual void PrintState(OStream& os) const V8_FINAL V8_OVERRIDE; // NOLINT - virtual Major MajorKey() V8_OVERRIDE { return BinaryOpIC; } - virtual int NotMissMinorKey() V8_FINAL V8_OVERRIDE { + virtual Major MajorKey() const V8_OVERRIDE { return BinaryOpIC; } + virtual int NotMissMinorKey() const V8_FINAL V8_OVERRIDE { return GetExtraICState(); } @@ -1216,24 +1180,22 @@ class BinaryOpICWithAllocationSiteStub V8_FINAL : public PlatformCodeStub { return Code::BINARY_OP_IC; } - virtual InlineCacheState GetICState() V8_OVERRIDE { + virtual InlineCacheState GetICState() const V8_OVERRIDE { return state_.GetICState(); } - virtual ExtraICState GetExtraICState() V8_OVERRIDE { + virtual ExtraICState GetExtraICState() const V8_OVERRIDE { return state_.GetExtraICState(); } - virtual void VerifyPlatformFeatures() V8_OVERRIDE { - ASSERT(CpuFeatures::VerifyCrossCompiling(SSE2)); - } - virtual void Generate(MacroAssembler* masm) V8_OVERRIDE; - virtual void PrintState(StringStream* stream) V8_OVERRIDE; + virtual void PrintState(OStream& os) const V8_OVERRIDE; // NOLINT - virtual Major MajorKey() V8_OVERRIDE { return BinaryOpICWithAllocationSite; } - virtual int MinorKey() V8_OVERRIDE { return GetExtraICState(); } + virtual Major MajorKey() const V8_OVERRIDE { + return BinaryOpICWithAllocationSite; + } + virtual int MinorKey() const V8_OVERRIDE { return GetExtraICState(); } private: static void GenerateAheadOfTime(Isolate* isolate, @@ -1267,7 +1229,7 @@ class BinaryOpWithAllocationSiteStub V8_FINAL : public BinaryOpICStub { virtual Handle<Code> GenerateCode() V8_OVERRIDE; - virtual Major MajorKey() V8_OVERRIDE { + virtual Major MajorKey() const V8_OVERRIDE { return BinaryOpWithAllocationSite; } @@ -1307,10 +1269,6 @@ class StringAddStub V8_FINAL : public HydrogenCodeStub { return PretenureFlagBits::decode(bit_field_); } - virtual void VerifyPlatformFeatures() V8_OVERRIDE { - ASSERT(CpuFeatures::VerifyCrossCompiling(SSE2)); - } - virtual Handle<Code> GenerateCode() V8_OVERRIDE; virtual void InitializeInterfaceDescriptor( @@ -1327,10 +1285,10 @@ class StringAddStub V8_FINAL : public HydrogenCodeStub { class PretenureFlagBits: public BitField<PretenureFlag, 2, 1> {}; uint32_t bit_field_; - virtual Major MajorKey() V8_OVERRIDE { return StringAdd; } - virtual int NotMissMinorKey() V8_OVERRIDE { return bit_field_; } + virtual Major MajorKey() const V8_OVERRIDE { return StringAdd; } + virtual int NotMissMinorKey() const V8_OVERRIDE { return bit_field_; } - virtual void PrintBaseName(StringStream* stream) V8_OVERRIDE; + virtual void PrintBaseName(OStream& os) const V8_OVERRIDE; // NOLINT DISALLOW_COPY_AND_ASSIGN(StringAddStub); }; @@ -1348,20 +1306,18 @@ class ICCompareStub: public PlatformCodeStub { left_(left), right_(right), state_(handler) { 
- ASSERT(Token::IsCompareOp(op)); + DCHECK(Token::IsCompareOp(op)); } virtual void Generate(MacroAssembler* masm); void set_known_map(Handle<Map> map) { known_map_ = map; } - static void DecodeMinorKey(int minor_key, - CompareIC::State* left_state, - CompareIC::State* right_state, - CompareIC::State* handler_state, - Token::Value* op); + static void DecodeKey(uint32_t stub_key, CompareIC::State* left_state, + CompareIC::State* right_state, + CompareIC::State* handler_state, Token::Value* op); - virtual InlineCacheState GetICState(); + virtual InlineCacheState GetICState() const; private: class OpField: public BitField<int, 0, 3> { }; @@ -1369,12 +1325,8 @@ class ICCompareStub: public PlatformCodeStub { class RightStateField: public BitField<int, 7, 4> { }; class HandlerStateField: public BitField<int, 11, 4> { }; - virtual void FinishCode(Handle<Code> code) { - code->set_stub_info(MinorKey()); - } - - virtual CodeStub::Major MajorKey() { return CompareIC; } - virtual int MinorKey(); + virtual CodeStub::Major MajorKey() const { return CompareIC; } + virtual int MinorKey() const; virtual Code::Kind GetCodeKind() const { return Code::COMPARE_IC; } @@ -1433,7 +1385,7 @@ class CompareNilICStub : public HydrogenCodeStub { isolate->code_stub_interface_descriptor(CodeStub::CompareNilIC)); } - virtual InlineCacheState GetICState() { + virtual InlineCacheState GetICState() const { if (state_.Contains(GENERIC)) { return MEGAMORPHIC; } else if (state_.Contains(MONOMORPHIC_MAP)) { @@ -1447,7 +1399,7 @@ class CompareNilICStub : public HydrogenCodeStub { virtual Handle<Code> GenerateCode() V8_OVERRIDE; - virtual ExtraICState GetExtraICState() { + virtual ExtraICState GetExtraICState() const { return NilValueField::encode(nil_value_) | TypesField::encode(state_.ToIntegral()); } @@ -1458,8 +1410,8 @@ class CompareNilICStub : public HydrogenCodeStub { NilValue GetNilValue() const { return nil_value_; } void ClearState() { state_.RemoveAll(); } - virtual void PrintState(StringStream* stream); - virtual void PrintBaseName(StringStream* stream); + virtual void PrintState(OStream& os) const V8_OVERRIDE; // NOLINT + virtual void PrintBaseName(OStream& os) const V8_OVERRIDE; // NOLINT private: friend class CompareNilIC; @@ -1481,9 +1433,8 @@ class CompareNilICStub : public HydrogenCodeStub { public: State() : EnumSet<CompareNilType, byte>(0) { } explicit State(byte bits) : EnumSet<CompareNilType, byte>(bits) { } - - void Print(StringStream* stream) const; }; + friend OStream& operator<<(OStream& os, const State& s); CompareNilICStub(Isolate* isolate, NilValue nil, @@ -1493,8 +1444,8 @@ class CompareNilICStub : public HydrogenCodeStub { class NilValueField : public BitField<NilValue, 0, 1> {}; class TypesField : public BitField<byte, 1, NUMBER_OF_TYPES> {}; - virtual CodeStub::Major MajorKey() { return CompareNilIC; } - virtual int NotMissMinorKey() { return GetExtraICState(); } + virtual CodeStub::Major MajorKey() const { return CompareNilIC; } + virtual int NotMissMinorKey() const { return GetExtraICState(); } NilValue nil_value_; State state_; @@ -1503,6 +1454,9 @@ class CompareNilICStub : public HydrogenCodeStub { }; +OStream& operator<<(OStream& os, const CompareNilICStub::State& s); + + class CEntryStub : public PlatformCodeStub { public: CEntryStub(Isolate* isolate, @@ -1520,18 +1474,13 @@ class CEntryStub : public PlatformCodeStub { // can generate both variants ahead of time. 
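
CompareNilICStub::State (and ToBooleanStub::Types) derive from EnumSet<Enum, byte>: a bitmask keyed by enum values, which is what the operator<< printers earlier probe with Contains(). A minimal sketch of the idiom; EnumSetSketch is illustrative, and V8's EnumSet has a richer interface:

    #include <cassert>

    enum CompareNilType { UNDEFINED, NULL_TYPE, MONOMORPHIC_MAP, GENERIC };

    // One bit per enumerator, stored in a byte as in EnumSet<CompareNilType, byte>.
    class EnumSetSketch {
     public:
      explicit EnumSetSketch(unsigned char bits = 0) : bits_(bits) {}
      bool IsEmpty() const { return bits_ == 0; }
      bool Contains(CompareNilType type) const {
        return (bits_ & (1 << type)) != 0;
      }
      void Add(CompareNilType type) { bits_ |= (1 << type); }
      void RemoveAll() { bits_ = 0; }

     private:
      unsigned char bits_;
    };

    int main() {
      EnumSetSketch s;
      s.Add(UNDEFINED);
      s.Add(NULL_TYPE);
      assert(s.Contains(UNDEFINED) && !s.Contains(GENERIC));
      s.RemoveAll();
      assert(s.IsEmpty());
      return 0;
    }
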
static void GenerateAheadOfTime(Isolate* isolate); - protected: - virtual void VerifyPlatformFeatures() V8_OVERRIDE { - ASSERT(CpuFeatures::VerifyCrossCompiling(SSE2)); - }; - private: // Number of pointers/values returned. const int result_size_; SaveFPRegsMode save_doubles_; - Major MajorKey() { return CEntry; } - int MinorKey(); + Major MajorKey() const { return CEntry; } + int MinorKey() const; bool NeedsImmovableCode(); }; @@ -1547,8 +1496,8 @@ class JSEntryStub : public PlatformCodeStub { void GenerateBody(MacroAssembler* masm, bool is_construct); private: - Major MajorKey() { return JSEntry; } - int MinorKey() { return 0; } + Major MajorKey() const { return JSEntry; } + int MinorKey() const { return 0; } virtual void FinishCode(Handle<Code> code); @@ -1563,10 +1512,10 @@ class JSConstructEntryStub : public JSEntryStub { void Generate(MacroAssembler* masm) { GenerateBody(masm, true); } private: - int MinorKey() { return 1; } + int MinorKey() const { return 1; } - virtual void PrintName(StringStream* stream) { - stream->Add("JSConstructEntryStub"); + virtual void PrintName(OStream& os) const V8_OVERRIDE { // NOLINT + os << "JSConstructEntryStub"; } }; @@ -1586,8 +1535,8 @@ class ArgumentsAccessStub: public PlatformCodeStub { private: Type type_; - Major MajorKey() { return ArgumentsAccess; } - int MinorKey() { return type_; } + Major MajorKey() const { return ArgumentsAccess; } + int MinorKey() const { return type_; } void Generate(MacroAssembler* masm); void GenerateReadElement(MacroAssembler* masm); @@ -1595,7 +1544,7 @@ class ArgumentsAccessStub: public PlatformCodeStub { void GenerateNewSloppyFast(MacroAssembler* masm); void GenerateNewSloppySlow(MacroAssembler* masm); - virtual void PrintName(StringStream* stream); + virtual void PrintName(OStream& os) const V8_OVERRIDE; // NOLINT }; @@ -1604,8 +1553,8 @@ class RegExpExecStub: public PlatformCodeStub { explicit RegExpExecStub(Isolate* isolate) : PlatformCodeStub(isolate) { } private: - Major MajorKey() { return RegExpExec; } - int MinorKey() { return 0; } + Major MajorKey() const { return RegExpExec; } + int MinorKey() const { return 0; } void Generate(MacroAssembler* masm); }; @@ -1621,8 +1570,8 @@ class RegExpConstructResultStub V8_FINAL : public HydrogenCodeStub { virtual void InitializeInterfaceDescriptor( CodeStubInterfaceDescriptor* descriptor) V8_OVERRIDE; - virtual Major MajorKey() V8_OVERRIDE { return RegExpConstructResult; } - virtual int NotMissMinorKey() V8_OVERRIDE { return 0; } + virtual Major MajorKey() const V8_OVERRIDE { return RegExpConstructResult; } + virtual int NotMissMinorKey() const V8_OVERRIDE { return 0; } static void InstallDescriptors(Isolate* isolate); @@ -1639,7 +1588,9 @@ class RegExpConstructResultStub V8_FINAL : public HydrogenCodeStub { class CallFunctionStub: public PlatformCodeStub { public: CallFunctionStub(Isolate* isolate, int argc, CallFunctionFlags flags) - : PlatformCodeStub(isolate), argc_(argc), flags_(flags) { } + : PlatformCodeStub(isolate), argc_(argc), flags_(flags) { + DCHECK(argc <= Code::kMaxArguments); + } void Generate(MacroAssembler* masm); @@ -1647,18 +1598,23 @@ class CallFunctionStub: public PlatformCodeStub { return ArgcBits::decode(minor_key); } + virtual void InitializeInterfaceDescriptor( + CodeStubInterfaceDescriptor* descriptor); + private: int argc_; CallFunctionFlags flags_; - virtual void PrintName(StringStream* stream); + virtual void PrintName(OStream& os) const V8_OVERRIDE; // NOLINT // Minor key encoding in 32 bits with Bitfield <Type, shift, size>. 
class FlagBits: public BitField<CallFunctionFlags, 0, 2> {}; - class ArgcBits: public BitField<unsigned, 2, 32 - 2> {}; + class ArgcBits : public BitField<unsigned, 2, Code::kArgumentsBits> {}; + + STATIC_ASSERT(Code::kArgumentsBits + 2 <= kStubMinorKeyBits); - Major MajorKey() { return CallFunction; } - int MinorKey() { + Major MajorKey() const { return CallFunction; } + int MinorKey() const { // Encode the parameters in a unique 32 bit value. return FlagBits::encode(flags_) | ArgcBits::encode(argc_); } @@ -1684,15 +1640,18 @@ class CallConstructStub: public PlatformCodeStub { code->set_has_function_cache(RecordCallTarget()); } + virtual void InitializeInterfaceDescriptor( + CodeStubInterfaceDescriptor* descriptor); + private: CallConstructorFlags flags_; - virtual void PrintName(StringStream* stream); + virtual void PrintName(OStream& os) const V8_OVERRIDE; // NOLINT - Major MajorKey() { return CallConstruct; } - int MinorKey() { return flags_; } + Major MajorKey() const { return CallConstruct; } + int MinorKey() const { return flags_; } - bool RecordCallTarget() { + bool RecordCallTarget() const { return (flags_ & RECORD_CONSTRUCTOR_TARGET) != 0; } }; @@ -1735,8 +1694,8 @@ class StringCharCodeAtGenerator { index_not_number_(index_not_number), index_out_of_range_(index_out_of_range), index_flags_(index_flags) { - ASSERT(!result_.is(object_)); - ASSERT(!result_.is(index_)); + DCHECK(!result_.is(object_)); + DCHECK(!result_.is(index_)); } // Generates the fast case code. On the fallthrough path |result| @@ -1783,7 +1742,7 @@ class StringCharFromCodeGenerator { Register result) : code_(code), result_(result) { - ASSERT(!code_.is(result_)); + DCHECK(!code_.is(result_)); } // Generates the fast case code. On the fallthrough path |result| @@ -1872,9 +1831,9 @@ class StringCharAtGenerator { }; -class KeyedLoadDictionaryElementStub : public HydrogenCodeStub { +class LoadDictionaryElementStub : public HydrogenCodeStub { public: - explicit KeyedLoadDictionaryElementStub(Isolate* isolate) + explicit LoadDictionaryElementStub(Isolate* isolate) : HydrogenCodeStub(isolate) {} virtual Handle<Code> GenerateCode() V8_OVERRIDE; @@ -1883,25 +1842,47 @@ class KeyedLoadDictionaryElementStub : public HydrogenCodeStub { CodeStubInterfaceDescriptor* descriptor) V8_OVERRIDE; private: - Major MajorKey() { return KeyedLoadElement; } - int NotMissMinorKey() { return DICTIONARY_ELEMENTS; } + Major MajorKey() const { return LoadElement; } + int NotMissMinorKey() const { return DICTIONARY_ELEMENTS; } - DISALLOW_COPY_AND_ASSIGN(KeyedLoadDictionaryElementStub); + DISALLOW_COPY_AND_ASSIGN(LoadDictionaryElementStub); }; -class KeyedLoadDictionaryElementPlatformStub : public PlatformCodeStub { +class LoadDictionaryElementPlatformStub : public PlatformCodeStub { public: - explicit KeyedLoadDictionaryElementPlatformStub(Isolate* isolate) + explicit LoadDictionaryElementPlatformStub(Isolate* isolate) : PlatformCodeStub(isolate) {} void Generate(MacroAssembler* masm); private: - Major MajorKey() { return KeyedLoadElement; } - int MinorKey() { return DICTIONARY_ELEMENTS; } + Major MajorKey() const { return LoadElement; } + int MinorKey() const { return DICTIONARY_ELEMENTS; } - DISALLOW_COPY_AND_ASSIGN(KeyedLoadDictionaryElementPlatformStub); + DISALLOW_COPY_AND_ASSIGN(LoadDictionaryElementPlatformStub); +}; + + +class KeyedLoadGenericStub : public HydrogenCodeStub { + public: + explicit KeyedLoadGenericStub(Isolate* isolate) : HydrogenCodeStub(isolate) {} + + virtual Handle<Code> GenerateCode() V8_OVERRIDE; + + virtual void 
InitializeInterfaceDescriptor( + CodeStubInterfaceDescriptor* descriptor) V8_OVERRIDE; + + static void InstallDescriptors(Isolate* isolate); + + virtual Code::Kind GetCodeKind() const { return Code::KEYED_LOAD_IC; } + virtual InlineCacheState GetICState() const { return GENERIC; } + + private: + Major MajorKey() const { return KeyedLoadGeneric; } + int NotMissMinorKey() const { return 0; } + + DISALLOW_COPY_AND_ASSIGN(KeyedLoadGenericStub); }; @@ -1919,9 +1900,7 @@ class DoubleToIStub : public PlatformCodeStub { OffsetBits::encode(offset) | IsTruncatingBits::encode(is_truncating) | SkipFastPathBits::encode(skip_fastpath) | - SSEBits::encode( - CpuFeatures::IsSafeForSnapshot(isolate, SSE2) ? - CpuFeatures::IsSafeForSnapshot(isolate, SSE3) ? 2 : 1 : 0); + SSE3Bits::encode(CpuFeatures::IsSupported(SSE3) ? 1 : 0); } Register source() { @@ -1948,11 +1927,6 @@ class DoubleToIStub : public PlatformCodeStub { virtual bool SometimesSetsUpAFrame() { return false; } - protected: - virtual void VerifyPlatformFeatures() V8_OVERRIDE { - ASSERT(CpuFeatures::VerifyCrossCompiling(SSE2)); - } - private: static const int kBitsPerRegisterNumber = 6; STATIC_ASSERT((1L << kBitsPerRegisterNumber) >= Register::kNumRegisters); @@ -1967,11 +1941,11 @@ class DoubleToIStub : public PlatformCodeStub { public BitField<int, 2 * kBitsPerRegisterNumber + 1, 3> {}; // NOLINT class SkipFastPathBits: public BitField<int, 2 * kBitsPerRegisterNumber + 4, 1> {}; // NOLINT - class SSEBits: - public BitField<int, 2 * kBitsPerRegisterNumber + 5, 2> {}; // NOLINT + class SSE3Bits: + public BitField<int, 2 * kBitsPerRegisterNumber + 5, 1> {}; // NOLINT - Major MajorKey() { return DoubleToI; } - int MinorKey() { return bit_field_; } + Major MajorKey() const { return DoubleToI; } + int MinorKey() const { return bit_field_; } int bit_field_; @@ -1979,11 +1953,10 @@ class DoubleToIStub : public PlatformCodeStub { }; -class KeyedLoadFastElementStub : public HydrogenCodeStub { +class LoadFastElementStub : public HydrogenCodeStub { public: - KeyedLoadFastElementStub(Isolate* isolate, - bool is_js_array, - ElementsKind elements_kind) + LoadFastElementStub(Isolate* isolate, bool is_js_array, + ElementsKind elements_kind) : HydrogenCodeStub(isolate) { bit_field_ = ElementsKindBits::encode(elements_kind) | IsJSArrayBits::encode(is_js_array); @@ -2007,19 +1980,17 @@ class KeyedLoadFastElementStub : public HydrogenCodeStub { class IsJSArrayBits: public BitField<bool, 8, 1> {}; uint32_t bit_field_; - Major MajorKey() { return KeyedLoadElement; } - int NotMissMinorKey() { return bit_field_; } + Major MajorKey() const { return LoadElement; } + int NotMissMinorKey() const { return bit_field_; } - DISALLOW_COPY_AND_ASSIGN(KeyedLoadFastElementStub); + DISALLOW_COPY_AND_ASSIGN(LoadFastElementStub); }; -class KeyedStoreFastElementStub : public HydrogenCodeStub { +class StoreFastElementStub : public HydrogenCodeStub { public: - KeyedStoreFastElementStub(Isolate* isolate, - bool is_js_array, - ElementsKind elements_kind, - KeyedAccessStoreMode mode) + StoreFastElementStub(Isolate* isolate, bool is_js_array, + ElementsKind elements_kind, KeyedAccessStoreMode mode) : HydrogenCodeStub(isolate) { bit_field_ = ElementsKindBits::encode(elements_kind) | IsJSArrayBits::encode(is_js_array) | @@ -2049,10 +2020,10 @@ class KeyedStoreFastElementStub : public HydrogenCodeStub { class IsJSArrayBits: public BitField<bool, 12, 1> {}; uint32_t bit_field_; - Major MajorKey() { return KeyedStoreElement; } - int NotMissMinorKey() { return bit_field_; } + Major MajorKey() 
const { return StoreElement; } + int NotMissMinorKey() const { return bit_field_; } - DISALLOW_COPY_AND_ASSIGN(KeyedStoreFastElementStub); + DISALLOW_COPY_AND_ASSIGN(StoreFastElementStub); }; @@ -2090,8 +2061,8 @@ class TransitionElementsKindStub : public HydrogenCodeStub { class IsJSArrayBits: public BitField<bool, 16, 1> {}; uint32_t bit_field_; - Major MajorKey() { return TransitionElementsKind; } - int NotMissMinorKey() { return bit_field_; } + Major MajorKey() const { return TransitionElementsKind; } + int NotMissMinorKey() const { return bit_field_; } DISALLOW_COPY_AND_ASSIGN(TransitionElementsKindStub); }; @@ -2106,7 +2077,7 @@ class ArrayConstructorStubBase : public HydrogenCodeStub { // It only makes sense to override local allocation site behavior // if there is a difference between the global allocation site policy // for an ElementsKind and the desired usage of the stub. - ASSERT(override_mode != DISABLE_ALLOCATION_SITES || + DCHECK(override_mode != DISABLE_ALLOCATION_SITES || AllocationSite::GetMode(kind) == TRACK_ALLOCATION_SITE); bit_field_ = ElementsKindBits::encode(kind) | AllocationSiteOverrideModeBits::encode(override_mode); @@ -2128,10 +2099,10 @@ class ArrayConstructorStubBase : public HydrogenCodeStub { static const int kAllocationSite = 1; protected: - void BasePrintName(const char* name, StringStream* stream); + OStream& BasePrintName(OStream& os, const char* name) const; // NOLINT private: - int NotMissMinorKey() { return bit_field_; } + int NotMissMinorKey() const { return bit_field_; } // Ensure data fits within available bits. STATIC_ASSERT(LAST_ALLOCATION_SITE_OVERRIDE_MODE == 1); @@ -2160,10 +2131,10 @@ class ArrayNoArgumentConstructorStub : public ArrayConstructorStubBase { CodeStubInterfaceDescriptor* descriptor) V8_OVERRIDE; private: - Major MajorKey() { return ArrayNoArgumentConstructor; } + Major MajorKey() const { return ArrayNoArgumentConstructor; } - virtual void PrintName(StringStream* stream) { - BasePrintName("ArrayNoArgumentConstructorStub", stream); + virtual void PrintName(OStream& os) const V8_OVERRIDE { // NOLINT + BasePrintName(os, "ArrayNoArgumentConstructorStub"); } DISALLOW_COPY_AND_ASSIGN(ArrayNoArgumentConstructorStub); @@ -2185,10 +2156,10 @@ class ArraySingleArgumentConstructorStub : public ArrayConstructorStubBase { CodeStubInterfaceDescriptor* descriptor) V8_OVERRIDE; private: - Major MajorKey() { return ArraySingleArgumentConstructor; } + Major MajorKey() const { return ArraySingleArgumentConstructor; } - virtual void PrintName(StringStream* stream) { - BasePrintName("ArraySingleArgumentConstructorStub", stream); + virtual void PrintName(OStream& os) const { // NOLINT + BasePrintName(os, "ArraySingleArgumentConstructorStub"); } DISALLOW_COPY_AND_ASSIGN(ArraySingleArgumentConstructorStub); @@ -2210,10 +2181,10 @@ class ArrayNArgumentsConstructorStub : public ArrayConstructorStubBase { CodeStubInterfaceDescriptor* descriptor) V8_OVERRIDE; private: - Major MajorKey() { return ArrayNArgumentsConstructor; } + Major MajorKey() const { return ArrayNArgumentsConstructor; } - virtual void PrintName(StringStream* stream) { - BasePrintName("ArrayNArgumentsConstructorStub", stream); + virtual void PrintName(OStream& os) const { // NOLINT + BasePrintName(os, "ArrayNArgumentsConstructorStub"); } DISALLOW_COPY_AND_ASSIGN(ArrayNArgumentsConstructorStub); @@ -2236,7 +2207,7 @@ class InternalArrayConstructorStubBase : public HydrogenCodeStub { ElementsKind elements_kind() const { return kind_; } private: - int NotMissMinorKey() { return kind_; } + int 
NotMissMinorKey() const { return kind_; } ElementsKind kind_; @@ -2257,7 +2228,7 @@ class InternalArrayNoArgumentConstructorStub : public CodeStubInterfaceDescriptor* descriptor) V8_OVERRIDE; private: - Major MajorKey() { return InternalArrayNoArgumentConstructor; } + Major MajorKey() const { return InternalArrayNoArgumentConstructor; } DISALLOW_COPY_AND_ASSIGN(InternalArrayNoArgumentConstructorStub); }; @@ -2276,7 +2247,7 @@ class InternalArraySingleArgumentConstructorStub : public CodeStubInterfaceDescriptor* descriptor) V8_OVERRIDE; private: - Major MajorKey() { return InternalArraySingleArgumentConstructor; } + Major MajorKey() const { return InternalArraySingleArgumentConstructor; } DISALLOW_COPY_AND_ASSIGN(InternalArraySingleArgumentConstructorStub); }; @@ -2294,30 +2265,26 @@ class InternalArrayNArgumentsConstructorStub : public CodeStubInterfaceDescriptor* descriptor) V8_OVERRIDE; private: - Major MajorKey() { return InternalArrayNArgumentsConstructor; } + Major MajorKey() const { return InternalArrayNArgumentsConstructor; } DISALLOW_COPY_AND_ASSIGN(InternalArrayNArgumentsConstructorStub); }; -class KeyedStoreElementStub : public PlatformCodeStub { +class StoreElementStub : public PlatformCodeStub { public: - KeyedStoreElementStub(Isolate* isolate, - bool is_js_array, - ElementsKind elements_kind, - KeyedAccessStoreMode store_mode) + StoreElementStub(Isolate* isolate, bool is_js_array, + ElementsKind elements_kind, KeyedAccessStoreMode store_mode) : PlatformCodeStub(isolate), is_js_array_(is_js_array), elements_kind_(elements_kind), - store_mode_(store_mode), - fp_registers_(CanUseFPRegisters()) { } + store_mode_(store_mode) {} - Major MajorKey() { return KeyedStoreElement; } - int MinorKey() { + Major MajorKey() const { return StoreElement; } + int MinorKey() const { return ElementsKindBits::encode(elements_kind_) | IsJSArrayBits::encode(is_js_array_) | - StoreModeBits::encode(store_mode_) | - FPRegisters::encode(fp_registers_); + StoreModeBits::encode(store_mode_); } void Generate(MacroAssembler* masm); @@ -2326,14 +2293,12 @@ class KeyedStoreElementStub : public PlatformCodeStub { class ElementsKindBits: public BitField<ElementsKind, 0, 8> {}; class StoreModeBits: public BitField<KeyedAccessStoreMode, 8, 4> {}; class IsJSArrayBits: public BitField<bool, 12, 1> {}; - class FPRegisters: public BitField<bool, 13, 1> {}; bool is_js_array_; ElementsKind elements_kind_; KeyedAccessStoreMode store_mode_; - bool fp_registers_; - DISALLOW_COPY_AND_ASSIGN(KeyedStoreElementStub); + DISALLOW_COPY_AND_ASSIGN(StoreElementStub); }; @@ -2351,6 +2316,12 @@ class ToBooleanStub: public HydrogenCodeStub { NUMBER_OF_TYPES }; + enum ResultMode { + RESULT_AS_SMI, // For Smi(1) on truthy value, Smi(0) otherwise. + RESULT_AS_ODDBALL, // For {true} on truthy value, {false} otherwise. + RESULT_AS_INVERSE_ODDBALL // For {false} on truthy value, {true} otherwise. + }; + // At most 8 different types can be distinguished, because the Code object // only has room for a single byte to hold a set of these types. 
:-P STATIC_ASSERT(NUMBER_OF_TYPES <= 8); @@ -2361,7 +2332,6 @@ class ToBooleanStub: public HydrogenCodeStub { explicit Types(byte bits) : EnumSet<Type, byte>(bits) {} byte ToByte() const { return ToIntegral(); } - void Print(StringStream* stream) const; bool UpdateStatus(Handle<Object> object); bool NeedsMap() const; bool CanBeUndetectable() const; @@ -2370,25 +2340,28 @@ class ToBooleanStub: public HydrogenCodeStub { static Types Generic() { return Types((1 << NUMBER_OF_TYPES) - 1); } }; - ToBooleanStub(Isolate* isolate, Types types = Types()) - : HydrogenCodeStub(isolate), types_(types) { } + ToBooleanStub(Isolate* isolate, ResultMode mode, Types types = Types()) + : HydrogenCodeStub(isolate), types_(types), mode_(mode) {} ToBooleanStub(Isolate* isolate, ExtraICState state) - : HydrogenCodeStub(isolate), types_(static_cast<byte>(state)) { } + : HydrogenCodeStub(isolate), + types_(static_cast<byte>(state)), + mode_(RESULT_AS_SMI) {} bool UpdateStatus(Handle<Object> object); Types GetTypes() { return types_; } + ResultMode GetMode() { return mode_; } virtual Handle<Code> GenerateCode() V8_OVERRIDE; virtual void InitializeInterfaceDescriptor( CodeStubInterfaceDescriptor* descriptor) V8_OVERRIDE; virtual Code::Kind GetCodeKind() const { return Code::TO_BOOLEAN_IC; } - virtual void PrintState(StringStream* stream); + virtual void PrintState(OStream& os) const V8_OVERRIDE; // NOLINT virtual bool SometimesSetsUpAFrame() { return false; } static void InstallDescriptors(Isolate* isolate) { - ToBooleanStub stub(isolate); + ToBooleanStub stub(isolate, RESULT_AS_SMI); stub.InitializeInterfaceDescriptor( isolate->code_stub_interface_descriptor(CodeStub::ToBoolean)); } @@ -2397,11 +2370,9 @@ class ToBooleanStub: public HydrogenCodeStub { return ToBooleanStub(isolate, UNINITIALIZED).GetCode(); } - virtual ExtraICState GetExtraICState() { - return types_.ToIntegral(); - } + virtual ExtraICState GetExtraICState() const { return types_.ToIntegral(); } - virtual InlineCacheState GetICState() { + virtual InlineCacheState GetICState() const { if (types_.IsEmpty()) { return ::v8::internal::UNINITIALIZED; } else { @@ -2410,16 +2381,25 @@ class ToBooleanStub: public HydrogenCodeStub { } private: - Major MajorKey() { return ToBoolean; } - int NotMissMinorKey() { return GetExtraICState(); } + class TypesBits : public BitField<byte, 0, NUMBER_OF_TYPES> {}; + class ResultModeBits : public BitField<ResultMode, NUMBER_OF_TYPES, 2> {}; + + Major MajorKey() const { return ToBoolean; } + int NotMissMinorKey() const { + return TypesBits::encode(types_.ToByte()) | ResultModeBits::encode(mode_); + } - ToBooleanStub(Isolate* isolate, InitializationState init_state) : - HydrogenCodeStub(isolate, init_state) {} + ToBooleanStub(Isolate* isolate, InitializationState init_state) + : HydrogenCodeStub(isolate, init_state), mode_(RESULT_AS_SMI) {} Types types_; + ResultMode mode_; }; +OStream& operator<<(OStream& os, const ToBooleanStub::Types& t); + + class ElementsTransitionAndStoreStub : public HydrogenCodeStub { public: ElementsTransitionAndStoreStub(Isolate* isolate, @@ -2443,14 +2423,32 @@ class ElementsTransitionAndStoreStub : public HydrogenCodeStub { virtual void InitializeInterfaceDescriptor( CodeStubInterfaceDescriptor* descriptor) V8_OVERRIDE; + // Parameters accessed via CodeStubGraphBuilder::GetParameter() + enum ParameterIndices { + kValueIndex, + kMapIndex, + kKeyIndex, + kObjectIndex, + kParameterCount + }; + + static const Register ValueRegister() { + return KeyedStoreIC::ValueRegister(); + } + static const 
Register MapRegister() { return KeyedStoreIC::MapRegister(); } + static const Register KeyRegister() { return KeyedStoreIC::NameRegister(); } + static const Register ObjectRegister() { + return KeyedStoreIC::ReceiverRegister(); + } + private: class FromBits: public BitField<ElementsKind, 0, 8> {}; class ToBits: public BitField<ElementsKind, 8, 8> {}; class IsJSArrayBits: public BitField<bool, 16, 1> {}; class StoreModeBits: public BitField<KeyedAccessStoreMode, 17, 4> {}; - Major MajorKey() { return ElementsTransitionAndStore; } - int NotMissMinorKey() { + Major MajorKey() const { return ElementsTransitionAndStore; } + int NotMissMinorKey() const { return FromBits::encode(from_kind_) | ToBits::encode(to_kind_) | IsJSArrayBits::encode(is_jsarray_) | @@ -2469,18 +2467,14 @@ class ElementsTransitionAndStoreStub : public HydrogenCodeStub { class StoreArrayLiteralElementStub : public PlatformCodeStub { public: explicit StoreArrayLiteralElementStub(Isolate* isolate) - : PlatformCodeStub(isolate), fp_registers_(CanUseFPRegisters()) { } + : PlatformCodeStub(isolate) { } private: - class FPRegisters: public BitField<bool, 0, 1> {}; - - Major MajorKey() { return StoreArrayLiteralElement; } - int MinorKey() { return FPRegisters::encode(fp_registers_); } + Major MajorKey() const { return StoreArrayLiteralElement; } + int MinorKey() const { return 0; } void Generate(MacroAssembler* masm); - bool fp_registers_; - DISALLOW_COPY_AND_ASSIGN(StoreArrayLiteralElementStub); }; @@ -2489,24 +2483,18 @@ class StubFailureTrampolineStub : public PlatformCodeStub { public: StubFailureTrampolineStub(Isolate* isolate, StubFunctionMode function_mode) : PlatformCodeStub(isolate), - fp_registers_(CanUseFPRegisters()), function_mode_(function_mode) {} static void GenerateAheadOfTime(Isolate* isolate); private: - class FPRegisters: public BitField<bool, 0, 1> {}; - class FunctionModeField: public BitField<StubFunctionMode, 1, 1> {}; + class FunctionModeField: public BitField<StubFunctionMode, 0, 1> {}; - Major MajorKey() { return StubFailureTrampoline; } - int MinorKey() { - return FPRegisters::encode(fp_registers_) | - FunctionModeField::encode(function_mode_); - } + Major MajorKey() const { return StubFailureTrampoline; } + int MinorKey() const { return FunctionModeField::encode(function_mode_); } void Generate(MacroAssembler* masm); - bool fp_registers_; StubFunctionMode function_mode_; DISALLOW_COPY_AND_ASSIGN(StubFailureTrampolineStub); @@ -2528,8 +2516,8 @@ class ProfileEntryHookStub : public PlatformCodeStub { intptr_t stack_pointer, Isolate* isolate); - Major MajorKey() { return ProfileEntryHook; } - int MinorKey() { return 0; } + Major MajorKey() const { return ProfileEntryHook; } + int MinorKey() const { return 0; } void Generate(MacroAssembler* masm); diff --git a/deps/v8/src/code.h b/deps/v8/src/code.h index 40a6950d209..d0a5fec61ff 100644 --- a/deps/v8/src/code.h +++ b/deps/v8/src/code.h @@ -5,9 +5,9 @@ #ifndef V8_CODE_H_ #define V8_CODE_H_ -#include "allocation.h" -#include "handles.h" -#include "objects.h" +#include "src/allocation.h" +#include "src/handles.h" +#include "src/objects.h" namespace v8 { namespace internal { @@ -30,11 +30,11 @@ class ParameterCount BASE_EMBEDDED { bool is_immediate() const { return !is_reg(); } Register reg() const { - ASSERT(is_reg()); + DCHECK(is_reg()); return reg_; } int immediate() const { - ASSERT(is_immediate()); + DCHECK(is_immediate()); return immediate_; } diff --git a/deps/v8/src/codegen.cc b/deps/v8/src/codegen.cc index 9da4ea21e58..a24220d9d0a 100644 --- 
a/deps/v8/src/codegen.cc +++ b/deps/v8/src/codegen.cc @@ -2,17 +2,17 @@ // Use of this source code is governed by a BSD-style license that can be // found in the LICENSE file. -#include "v8.h" - -#include "bootstrapper.h" -#include "codegen.h" -#include "compiler.h" -#include "cpu-profiler.h" -#include "debug.h" -#include "prettyprinter.h" -#include "rewriter.h" -#include "runtime.h" -#include "stub-cache.h" +#include "src/v8.h" + +#include "src/bootstrapper.h" +#include "src/codegen.h" +#include "src/compiler.h" +#include "src/cpu-profiler.h" +#include "src/debug.h" +#include "src/prettyprinter.h" +#include "src/rewriter.h" +#include "src/runtime.h" +#include "src/stub-cache.h" namespace v8 { namespace internal { @@ -150,7 +150,8 @@ Handle<Code> CodeGenerator::MakeCodeEpilogue(MacroAssembler* masm, Handle<Code> code = isolate->factory()->NewCode(desc, flags, masm->CodeObject(), false, is_crankshafted, - info->prologue_offset()); + info->prologue_offset(), + info->is_debug() && !is_crankshafted); isolate->counters()->total_compiled_code_size()->Increment( code->instruction_size()); isolate->heap()->IncrementCodeGeneratedBytes(is_crankshafted, @@ -174,10 +175,11 @@ void CodeGenerator::PrintCode(Handle<Code> code, CompilationInfo* info) { code->kind() == Code::FUNCTION; CodeTracer::Scope tracing_scope(info->isolate()->GetCodeTracer()); + OFStream os(tracing_scope.file()); if (print_source) { Handle<Script> script = info->script(); if (!script->IsUndefined() && !script->source()->IsUndefined()) { - PrintF(tracing_scope.file(), "--- Raw source ---\n"); + os << "--- Raw source ---\n"; ConsStringIteratorOp op; StringCharacterStream stream(String::cast(script->source()), &op, @@ -188,37 +190,33 @@ void CodeGenerator::PrintCode(Handle<Code> code, CompilationInfo* info) { function->end_position() - function->start_position() + 1; for (int i = 0; i < source_len; i++) { if (stream.HasMore()) { - PrintF(tracing_scope.file(), "%c", stream.GetNext()); + os << AsReversiblyEscapedUC16(stream.GetNext()); } } - PrintF(tracing_scope.file(), "\n\n"); + os << "\n\n"; } } if (info->IsOptimizing()) { if (FLAG_print_unopt_code) { - PrintF(tracing_scope.file(), "--- Unoptimized code ---\n"); + os << "--- Unoptimized code ---\n"; info->closure()->shared()->code()->Disassemble( - function->debug_name()->ToCString().get(), tracing_scope.file()); + function->debug_name()->ToCString().get(), os); } - PrintF(tracing_scope.file(), "--- Optimized code ---\n"); - PrintF(tracing_scope.file(), - "optimization_id = %d\n", info->optimization_id()); + os << "--- Optimized code ---\n" + << "optimization_id = " << info->optimization_id() << "\n"; } else { - PrintF(tracing_scope.file(), "--- Code ---\n"); + os << "--- Code ---\n"; } if (print_source) { - PrintF(tracing_scope.file(), - "source_position = %d\n", function->start_position()); + os << "source_position = " << function->start_position() << "\n"; } if (info->IsStub()) { CodeStub::Major major_key = info->code_stub()->MajorKey(); - code->Disassemble(CodeStub::MajorName(major_key, false), - tracing_scope.file()); + code->Disassemble(CodeStub::MajorName(major_key, false), os); } else { - code->Disassemble(function->debug_name()->ToCString().get(), - tracing_scope.file()); + code->Disassemble(function->debug_name()->ToCString().get(), os); } - PrintF(tracing_scope.file(), "--- End code ---\n"); + os << "--- End code ---\n"; } #endif // ENABLE_DISASSEMBLER } @@ -256,9 +254,9 @@ void ArgumentsAccessStub::Generate(MacroAssembler* masm) { } -int CEntryStub::MinorKey() { +int 
CEntryStub::MinorKey() const { int result = (save_doubles_ == kSaveFPRegs) ? 1 : 0; - ASSERT(result_size_ == 1 || result_size_ == 2); + DCHECK(result_size_ == 1 || result_size_ == 2); #ifdef _WIN64 return result | ((result_size_ == 1) ? 0 : 2); #else diff --git a/deps/v8/src/codegen.h b/deps/v8/src/codegen.h index fbaee97c8f6..e01a3982a01 100644 --- a/deps/v8/src/codegen.h +++ b/deps/v8/src/codegen.h @@ -5,8 +5,8 @@ #ifndef V8_CODEGEN_H_ #define V8_CODEGEN_H_ -#include "code-stubs.h" -#include "runtime.h" +#include "src/code-stubs.h" +#include "src/runtime.h" // Include the declaration of the architecture defined class CodeGenerator. // The contract to the shared code is that the the CodeGenerator is a subclass @@ -46,15 +46,19 @@ enum TypeofState { INSIDE_TYPEOF, NOT_INSIDE_TYPEOF }; #if V8_TARGET_ARCH_IA32 -#include "ia32/codegen-ia32.h" +#include "src/ia32/codegen-ia32.h" // NOLINT #elif V8_TARGET_ARCH_X64 -#include "x64/codegen-x64.h" +#include "src/x64/codegen-x64.h" // NOLINT #elif V8_TARGET_ARCH_ARM64 -#include "arm64/codegen-arm64.h" +#include "src/arm64/codegen-arm64.h" // NOLINT #elif V8_TARGET_ARCH_ARM -#include "arm/codegen-arm.h" +#include "src/arm/codegen-arm.h" // NOLINT #elif V8_TARGET_ARCH_MIPS -#include "mips/codegen-mips.h" +#include "src/mips/codegen-mips.h" // NOLINT +#elif V8_TARGET_ARCH_MIPS64 +#include "src/mips64/codegen-mips64.h" // NOLINT +#elif V8_TARGET_ARCH_X87 +#include "src/x87/codegen-x87.h" // NOLINT #else #error Unsupported target architecture. #endif @@ -113,15 +117,30 @@ class ElementsTransitionGenerator : public AllStatic { public: // If |mode| is set to DONT_TRACK_ALLOCATION_SITE, // |allocation_memento_found| may be NULL. - static void GenerateMapChangeElementsTransition(MacroAssembler* masm, + static void GenerateMapChangeElementsTransition( + MacroAssembler* masm, + Register receiver, + Register key, + Register value, + Register target_map, AllocationSiteMode mode, Label* allocation_memento_found); - static void GenerateSmiToDouble(MacroAssembler* masm, - AllocationSiteMode mode, - Label* fail); - static void GenerateDoubleToObject(MacroAssembler* masm, - AllocationSiteMode mode, - Label* fail); + static void GenerateSmiToDouble( + MacroAssembler* masm, + Register receiver, + Register key, + Register value, + Register target_map, + AllocationSiteMode mode, + Label* fail); + static void GenerateDoubleToObject( + MacroAssembler* masm, + Register receiver, + Register key, + Register value, + Register target_map, + AllocationSiteMode mode, + Label* fail); private: DISALLOW_COPY_AND_ASSIGN(ElementsTransitionGenerator); diff --git a/deps/v8/src/collection-iterator.js b/deps/v8/src/collection-iterator.js new file mode 100644 index 00000000000..2bccc8d7a2c --- /dev/null +++ b/deps/v8/src/collection-iterator.js @@ -0,0 +1,194 @@ +// Copyright 2014 the V8 project authors. All rights reserved. +// Use of this source code is governed by a BSD-style license that can be +// found in the LICENSE file. 
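// [Editor's note: illustrative only, not part of the upstream patch.]
// The Set and Map iterators defined in this new file follow the ES6
// iteration protocol: each call to next() returns an object of the shape
// {value, done}. For a one-element set:
//
//   var s = new Set();
//   s.add(1);
//   var it = s.values();
//   it.next();  // {value: 1, done: false}
//   it.next();  // {value: undefined, done: true}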
+ +'use strict'; + + +// This file relies on the fact that the following declaration has been made +// in runtime.js: +// var $Set = global.Set; +// var $Map = global.Map; + + +function SetIteratorConstructor(set, kind) { + %SetIteratorInitialize(this, set, kind); +} + + +function SetIteratorNextJS() { + if (!IS_SET_ITERATOR(this)) { + throw MakeTypeError('incompatible_method_receiver', + ['Set Iterator.prototype.next', this]); + } + + var value_array = [UNDEFINED, UNDEFINED]; + var entry = {value: value_array, done: false}; + switch (%SetIteratorNext(this, value_array)) { + case 0: + entry.value = UNDEFINED; + entry.done = true; + break; + case ITERATOR_KIND_VALUES: + entry.value = value_array[0]; + break; + case ITERATOR_KIND_ENTRIES: + value_array[1] = value_array[0]; + break; + } + + return entry; +} + + +function SetIteratorSymbolIterator() { + return this; +} + + +function SetEntries() { + if (!IS_SET(this)) { + throw MakeTypeError('incompatible_method_receiver', + ['Set.prototype.entries', this]); + } + return new SetIterator(this, ITERATOR_KIND_ENTRIES); +} + + +function SetValues() { + if (!IS_SET(this)) { + throw MakeTypeError('incompatible_method_receiver', + ['Set.prototype.values', this]); + } + return new SetIterator(this, ITERATOR_KIND_VALUES); +} + + +function SetUpSetIterator() { + %CheckIsBootstrapping(); + + %SetCode(SetIterator, SetIteratorConstructor); + %FunctionSetPrototype(SetIterator, new $Object()); + %FunctionSetInstanceClassName(SetIterator, 'Set Iterator'); + InstallFunctions(SetIterator.prototype, DONT_ENUM, $Array( + 'next', SetIteratorNextJS + )); + + %FunctionSetName(SetIteratorSymbolIterator, '[Symbol.iterator]'); + %AddNamedProperty(SetIterator.prototype, symbolIterator, + SetIteratorSymbolIterator, DONT_ENUM); +} + +SetUpSetIterator(); + + +function ExtendSetPrototype() { + %CheckIsBootstrapping(); + + InstallFunctions($Set.prototype, DONT_ENUM, $Array( + 'entries', SetEntries, + 'keys', SetValues, + 'values', SetValues + )); + + %AddNamedProperty($Set.prototype, symbolIterator, SetValues, DONT_ENUM); +} + +ExtendSetPrototype(); + + + +function MapIteratorConstructor(map, kind) { + %MapIteratorInitialize(this, map, kind); +} + + +function MapIteratorSymbolIterator() { + return this; +} + + +function MapIteratorNextJS() { + if (!IS_MAP_ITERATOR(this)) { + throw MakeTypeError('incompatible_method_receiver', + ['Map Iterator.prototype.next', this]); + } + + var value_array = [UNDEFINED, UNDEFINED]; + var entry = {value: value_array, done: false}; + switch (%MapIteratorNext(this, value_array)) { + case 0: + entry.value = UNDEFINED; + entry.done = true; + break; + case ITERATOR_KIND_KEYS: + entry.value = value_array[0]; + break; + case ITERATOR_KIND_VALUES: + entry.value = value_array[1]; + break; + // ITERATOR_KIND_ENTRIES does not need any processing. 
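      // [Editor's note, not part of the upstream patch:] for entries,
      // %MapIteratorNext has already filled value_array with [key, value],
      // and entry.value still aliases value_array, so the result object is
      // already complete without further copying.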
+ } + + return entry; +} + + +function MapEntries() { + if (!IS_MAP(this)) { + throw MakeTypeError('incompatible_method_receiver', + ['Map.prototype.entries', this]); + } + return new MapIterator(this, ITERATOR_KIND_ENTRIES); +} + + +function MapKeys() { + if (!IS_MAP(this)) { + throw MakeTypeError('incompatible_method_receiver', + ['Map.prototype.keys', this]); + } + return new MapIterator(this, ITERATOR_KIND_KEYS); +} + + +function MapValues() { + if (!IS_MAP(this)) { + throw MakeTypeError('incompatible_method_receiver', + ['Map.prototype.values', this]); + } + return new MapIterator(this, ITERATOR_KIND_VALUES); +} + + +function SetUpMapIterator() { + %CheckIsBootstrapping(); + + %SetCode(MapIterator, MapIteratorConstructor); + %FunctionSetPrototype(MapIterator, new $Object()); + %FunctionSetInstanceClassName(MapIterator, 'Map Iterator'); + InstallFunctions(MapIterator.prototype, DONT_ENUM, $Array( + 'next', MapIteratorNextJS + )); + + %FunctionSetName(MapIteratorSymbolIterator, '[Symbol.iterator]'); + %AddNamedProperty(MapIterator.prototype, symbolIterator, + MapIteratorSymbolIterator, DONT_ENUM); +} + +SetUpMapIterator(); + + +function ExtendMapPrototype() { + %CheckIsBootstrapping(); + + InstallFunctions($Map.prototype, DONT_ENUM, $Array( + 'entries', MapEntries, + 'keys', MapKeys, + 'values', MapValues + )); + + %AddNamedProperty($Map.prototype, symbolIterator, MapEntries, DONT_ENUM); +} + +ExtendMapPrototype(); diff --git a/deps/v8/src/collection.js b/deps/v8/src/collection.js index f8f3fa995ce..5e4421eb107 100644 --- a/deps/v8/src/collection.js +++ b/deps/v8/src/collection.js @@ -11,72 +11,67 @@ var $Set = global.Set; var $Map = global.Map; -// Global sentinel to be used instead of undefined keys, which are not -// supported internally but required for Harmony sets and maps. -var undefined_sentinel = {}; +// ------------------------------------------------------------------- +// Harmony Set -// Map and Set uses SameValueZero which means that +0 and -0 should be treated -// as the same value. 
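// [Editor's note: illustrative only, not part of the upstream patch.]
// The NormalizeKey shim below can go away because the runtime collections
// now apply SameValueZero themselves: +0 and -0 collapse to a single key,
// and NaN is its own key, e.g.
//
//   var s = new Set();
//   s.add(-0);
//   s.has(+0);       // true
//   var m = new Map();
//   m.set(NaN, 'x');
//   m.get(NaN);      // 'x'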
-function NormalizeKey(key) { - if (IS_UNDEFINED(key)) { - return undefined_sentinel; +function SetConstructor(iterable) { + if (!%_IsConstructCall()) { + throw MakeTypeError('constructor_not_function', ['Set']); } - if (key === 0) { - return 0; - } + var iter, adder; - return key; -} + if (!IS_NULL_OR_UNDEFINED(iterable)) { + iter = GetIterator(iterable); + adder = this.add; + if (!IS_SPEC_FUNCTION(adder)) { + throw MakeTypeError('property_not_function', ['add', this]); + } + } + %SetInitialize(this); -// ------------------------------------------------------------------- -// Harmony Set + if (IS_UNDEFINED(iter)) return; -function SetConstructor() { - if (%_IsConstructCall()) { - %SetInitialize(this); - } else { - throw MakeTypeError('constructor_not_function', ['Set']); + var next, done; + while (!(next = iter.next()).done) { + if (!IS_SPEC_OBJECT(next)) { + throw MakeTypeError('iterator_result_not_an_object', [next]); + } + %_CallFunction(this, next.value, adder); } } -function SetAdd(key) { +function SetAddJS(key) { if (!IS_SET(this)) { throw MakeTypeError('incompatible_method_receiver', ['Set.prototype.add', this]); } - return %SetAdd(this, NormalizeKey(key)); + return %SetAdd(this, key); } -function SetHas(key) { +function SetHasJS(key) { if (!IS_SET(this)) { throw MakeTypeError('incompatible_method_receiver', ['Set.prototype.has', this]); } - return %SetHas(this, NormalizeKey(key)); + return %SetHas(this, key); } -function SetDelete(key) { +function SetDeleteJS(key) { if (!IS_SET(this)) { throw MakeTypeError('incompatible_method_receiver', ['Set.prototype.delete', this]); } - key = NormalizeKey(key); - if (%SetHas(this, key)) { - %SetDelete(this, key); - return true; - } else { - return false; - } + return %SetDelete(this, key); } -function SetGetSize() { +function SetGetSizeJS() { if (!IS_SET(this)) { throw MakeTypeError('incompatible_method_receiver', ['Set.prototype.size', this]); @@ -85,7 +80,7 @@ function SetGetSize() { } -function SetClear() { +function SetClearJS() { if (!IS_SET(this)) { throw MakeTypeError('incompatible_method_receiver', ['Set.prototype.clear', this]); @@ -104,14 +99,14 @@ function SetForEach(f, receiver) { throw MakeTypeError('called_non_callable', [f]); } - var iterator = %SetCreateIterator(this, ITERATOR_KIND_VALUES); - var entry; - try { - while (!(entry = %SetIteratorNext(iterator)).done) { - %_CallFunction(receiver, entry.value, entry.value, this, f); - } - } finally { - %SetIteratorClose(iterator); + var iterator = new SetIterator(this, ITERATOR_KIND_VALUES); + var key; + var stepping = DEBUG_IS_ACTIVE && %DebugCallbackSupportsStepping(f); + var value_array = [UNDEFINED]; + while (%SetIteratorNext(iterator, value_array)) { + if (stepping) %DebugPrepareStepInIfStepping(f); + key = value_array[0]; + %_CallFunction(receiver, key, key, this, f); } } @@ -123,17 +118,17 @@ function SetUpSet() { %SetCode($Set, SetConstructor); %FunctionSetPrototype($Set, new $Object()); - %SetProperty($Set.prototype, "constructor", $Set, DONT_ENUM); + %AddNamedProperty($Set.prototype, "constructor", $Set, DONT_ENUM); %FunctionSetLength(SetForEach, 1); // Set up the non-enumerable functions on the Set prototype object. 
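  // [Editor's note: usage sketch, not part of the upstream patch.]
  // Once these are installed, the renamed *JS functions back the usual
  // surface API:
  //
  //   var s = new Set();
  //   s.add('a');
  //   s.has('a');     // true
  //   s.size;         // 1
  //   s.delete('a');  // true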
- InstallGetter($Set.prototype, "size", SetGetSize); + InstallGetter($Set.prototype, "size", SetGetSizeJS); InstallFunctions($Set.prototype, DONT_ENUM, $Array( - "add", SetAdd, - "has", SetHas, - "delete", SetDelete, - "clear", SetClear, + "add", SetAddJS, + "has", SetHasJS, + "delete", SetDeleteJS, + "clear", SetClearJS, "forEach", SetForEach )); } @@ -144,52 +139,76 @@ SetUpSet(); // ------------------------------------------------------------------- // Harmony Map -function MapConstructor() { - if (%_IsConstructCall()) { - %MapInitialize(this); - } else { +function MapConstructor(iterable) { + if (!%_IsConstructCall()) { throw MakeTypeError('constructor_not_function', ['Map']); } + + var iter, adder; + + if (!IS_NULL_OR_UNDEFINED(iterable)) { + iter = GetIterator(iterable); + adder = this.set; + if (!IS_SPEC_FUNCTION(adder)) { + throw MakeTypeError('property_not_function', ['set', this]); + } + } + + %MapInitialize(this); + + if (IS_UNDEFINED(iter)) return; + + var next, done, nextItem; + while (!(next = iter.next()).done) { + if (!IS_SPEC_OBJECT(next)) { + throw MakeTypeError('iterator_result_not_an_object', [next]); + } + nextItem = next.value; + if (!IS_SPEC_OBJECT(nextItem)) { + throw MakeTypeError('iterator_value_not_an_object', [nextItem]); + } + %_CallFunction(this, nextItem[0], nextItem[1], adder); + } } -function MapGet(key) { +function MapGetJS(key) { if (!IS_MAP(this)) { throw MakeTypeError('incompatible_method_receiver', ['Map.prototype.get', this]); } - return %MapGet(this, NormalizeKey(key)); + return %MapGet(this, key); } -function MapSet(key, value) { +function MapSetJS(key, value) { if (!IS_MAP(this)) { throw MakeTypeError('incompatible_method_receiver', ['Map.prototype.set', this]); } - return %MapSet(this, NormalizeKey(key), value); + return %MapSet(this, key, value); } -function MapHas(key) { +function MapHasJS(key) { if (!IS_MAP(this)) { throw MakeTypeError('incompatible_method_receiver', ['Map.prototype.has', this]); } - return %MapHas(this, NormalizeKey(key)); + return %MapHas(this, key); } -function MapDelete(key) { +function MapDeleteJS(key) { if (!IS_MAP(this)) { throw MakeTypeError('incompatible_method_receiver', ['Map.prototype.delete', this]); } - return %MapDelete(this, NormalizeKey(key)); + return %MapDelete(this, key); } -function MapGetSize() { +function MapGetSizeJS() { if (!IS_MAP(this)) { throw MakeTypeError('incompatible_method_receiver', ['Map.prototype.size', this]); @@ -198,7 +217,7 @@ function MapGetSize() { } -function MapClear() { +function MapClearJS() { if (!IS_MAP(this)) { throw MakeTypeError('incompatible_method_receiver', ['Map.prototype.clear', this]); @@ -217,14 +236,12 @@ function MapForEach(f, receiver) { throw MakeTypeError('called_non_callable', [f]); } - var iterator = %MapCreateIterator(this, ITERATOR_KIND_ENTRIES); - var entry; - try { - while (!(entry = %MapIteratorNext(iterator)).done) { - %_CallFunction(receiver, entry.value[1], entry.value[0], this, f); - } - } finally { - %MapIteratorClose(iterator); + var iterator = new MapIterator(this, ITERATOR_KIND_ENTRIES); + var stepping = DEBUG_IS_ACTIVE && %DebugCallbackSupportsStepping(f); + var value_array = [UNDEFINED, UNDEFINED]; + while (%MapIteratorNext(iterator, value_array)) { + if (stepping) %DebugPrepareStepInIfStepping(f); + %_CallFunction(receiver, value_array[1], value_array[0], this, f); } } @@ -236,18 +253,18 @@ function SetUpMap() { %SetCode($Map, MapConstructor); %FunctionSetPrototype($Map, new $Object()); - %SetProperty($Map.prototype, "constructor", $Map, 
DONT_ENUM); + %AddNamedProperty($Map.prototype, "constructor", $Map, DONT_ENUM); %FunctionSetLength(MapForEach, 1); // Set up the non-enumerable functions on the Map prototype object. - InstallGetter($Map.prototype, "size", MapGetSize); + InstallGetter($Map.prototype, "size", MapGetSizeJS); InstallFunctions($Map.prototype, DONT_ENUM, $Array( - "get", MapGet, - "set", MapSet, - "has", MapHas, - "delete", MapDelete, - "clear", MapClear, + "get", MapGetJS, + "set", MapSetJS, + "has", MapHasJS, + "delete", MapDeleteJS, + "clear", MapClearJS, "forEach", MapForEach )); } diff --git a/deps/v8/src/compilation-cache.cc b/deps/v8/src/compilation-cache.cc index 42a94fc74f1..559f980ff12 100644 --- a/deps/v8/src/compilation-cache.cc +++ b/deps/v8/src/compilation-cache.cc @@ -2,11 +2,11 @@ // Use of this source code is governed by a BSD-style license that can be // found in the LICENSE file. -#include "v8.h" +#include "src/v8.h" -#include "assembler.h" -#include "compilation-cache.h" -#include "serialize.h" +#include "src/assembler.h" +#include "src/compilation-cache.h" +#include "src/serialize.h" namespace v8 { namespace internal { @@ -43,7 +43,7 @@ CompilationCache::~CompilationCache() {} Handle<CompilationCacheTable> CompilationSubCache::GetTable(int generation) { - ASSERT(generation < generations_); + DCHECK(generation < generations_); Handle<CompilationCacheTable> result; if (tables_[generation]->IsUndefined()) { result = CompilationCacheTable::New(isolate(), kInitialCacheSize); @@ -193,7 +193,7 @@ Handle<SharedFunctionInfo> CompilationCacheScript::Lookup( if (result != NULL) { Handle<SharedFunctionInfo> shared(SharedFunctionInfo::cast(result), isolate()); - ASSERT(HasOrigin(shared, + DCHECK(HasOrigin(shared, name, line_offset, column_offset, @@ -335,7 +335,7 @@ MaybeHandle<SharedFunctionInfo> CompilationCache::LookupEval( result = eval_global_.Lookup( source, context, strict_mode, scope_position); } else { - ASSERT(scope_position != RelocInfo::kNoPosition); + DCHECK(scope_position != RelocInfo::kNoPosition); result = eval_contextual_.Lookup( source, context, strict_mode, scope_position); } @@ -370,7 +370,7 @@ void CompilationCache::PutEval(Handle<String> source, if (context->IsNativeContext()) { eval_global_.Put(source, context, function_info, scope_position); } else { - ASSERT(scope_position != RelocInfo::kNoPosition); + DCHECK(scope_position != RelocInfo::kNoPosition); eval_contextual_.Put(source, context, function_info, scope_position); } } diff --git a/deps/v8/src/compilation-cache.h b/deps/v8/src/compilation-cache.h index baa53fb45d5..fe623dc7983 100644 --- a/deps/v8/src/compilation-cache.h +++ b/deps/v8/src/compilation-cache.h @@ -34,7 +34,7 @@ class CompilationSubCache { return GetTable(kFirstGeneration); } void SetFirstTable(Handle<CompilationCacheTable> value) { - ASSERT(kFirstGeneration < generations_); + DCHECK(kFirstGeneration < generations_); tables_[kFirstGeneration] = *value; } diff --git a/deps/v8/src/compiler-intrinsics.h b/deps/v8/src/compiler-intrinsics.h index f31895e2d37..669dd28b6a9 100644 --- a/deps/v8/src/compiler-intrinsics.h +++ b/deps/v8/src/compiler-intrinsics.h @@ -5,6 +5,8 @@ #ifndef V8_COMPILER_INTRINSICS_H_ #define V8_COMPILER_INTRINSICS_H_ +#include "src/base/macros.h" + namespace v8 { namespace internal { diff --git a/deps/v8/src/compiler.cc b/deps/v8/src/compiler.cc index 7b9f705bc36..e496aee62d6 100644 --- a/deps/v8/src/compiler.cc +++ b/deps/v8/src/compiler.cc @@ -2,35 +2,48 @@ // Use of this source code is governed by a BSD-style license that can be // found 
in the LICENSE file. -#include "v8.h" - -#include "compiler.h" - -#include "bootstrapper.h" -#include "codegen.h" -#include "compilation-cache.h" -#include "cpu-profiler.h" -#include "debug.h" -#include "deoptimizer.h" -#include "full-codegen.h" -#include "gdb-jit.h" -#include "typing.h" -#include "hydrogen.h" -#include "isolate-inl.h" -#include "lithium.h" -#include "liveedit.h" -#include "parser.h" -#include "rewriter.h" -#include "runtime-profiler.h" -#include "scanner-character-streams.h" -#include "scopeinfo.h" -#include "scopes.h" -#include "vm-state-inl.h" +#include "src/v8.h" + +#include "src/compiler.h" + +#include "src/bootstrapper.h" +#include "src/codegen.h" +#include "src/compilation-cache.h" +#include "src/compiler/pipeline.h" +#include "src/cpu-profiler.h" +#include "src/debug.h" +#include "src/deoptimizer.h" +#include "src/full-codegen.h" +#include "src/gdb-jit.h" +#include "src/hydrogen.h" +#include "src/isolate-inl.h" +#include "src/lithium.h" +#include "src/liveedit.h" +#include "src/parser.h" +#include "src/rewriter.h" +#include "src/runtime-profiler.h" +#include "src/scanner-character-streams.h" +#include "src/scopeinfo.h" +#include "src/scopes.h" +#include "src/typing.h" +#include "src/vm-state-inl.h" namespace v8 { namespace internal { +ScriptData::ScriptData(const byte* data, int length) + : owns_data_(false), data_(data), length_(length) { + if (!IsAligned(reinterpret_cast<intptr_t>(data), kPointerAlignment)) { + byte* copy = NewArray<byte>(length); + DCHECK(IsAligned(reinterpret_cast<intptr_t>(copy), kPointerAlignment)); + CopyBytes(copy, data, length); + data_ = copy; + AcquireDataOwnership(); + } +} + + CompilationInfo::CompilationInfo(Handle<Script> script, Zone* zone) : flags_(StrictModeField::encode(SLOPPY)), @@ -38,11 +51,26 @@ CompilationInfo::CompilationInfo(Handle<Script> script, osr_ast_id_(BailoutId::None()), parameter_count_(0), this_has_uses_(true), - optimization_id_(-1) { + optimization_id_(-1), + ast_value_factory_(NULL), + ast_value_factory_owned_(false) { Initialize(script->GetIsolate(), BASE, zone); } +CompilationInfo::CompilationInfo(Isolate* isolate, Zone* zone) + : flags_(StrictModeField::encode(SLOPPY)), + script_(Handle<Script>::null()), + osr_ast_id_(BailoutId::None()), + parameter_count_(0), + this_has_uses_(true), + optimization_id_(-1), + ast_value_factory_(NULL), + ast_value_factory_owned_(false) { + Initialize(isolate, STUB, zone); +} + + CompilationInfo::CompilationInfo(Handle<SharedFunctionInfo> shared_info, Zone* zone) : flags_(StrictModeField::encode(SLOPPY) | IsLazy::encode(true)), @@ -51,7 +79,9 @@ CompilationInfo::CompilationInfo(Handle<SharedFunctionInfo> shared_info, osr_ast_id_(BailoutId::None()), parameter_count_(0), this_has_uses_(true), - optimization_id_(-1) { + optimization_id_(-1), + ast_value_factory_(NULL), + ast_value_factory_owned_(false) { Initialize(script_->GetIsolate(), BASE, zone); } @@ -66,7 +96,9 @@ CompilationInfo::CompilationInfo(Handle<JSFunction> closure, osr_ast_id_(BailoutId::None()), parameter_count_(0), this_has_uses_(true), - optimization_id_(-1) { + optimization_id_(-1), + ast_value_factory_(NULL), + ast_value_factory_owned_(false) { Initialize(script_->GetIsolate(), BASE, zone); } @@ -78,7 +110,9 @@ CompilationInfo::CompilationInfo(HydrogenCodeStub* stub, osr_ast_id_(BailoutId::None()), parameter_count_(0), this_has_uses_(true), - optimization_id_(-1) { + optimization_id_(-1), + ast_value_factory_(NULL), + ast_value_factory_owned_(false) { Initialize(isolate, STUB, zone); code_stub_ = stub; } @@ 
-93,7 +127,7 @@ void CompilationInfo::Initialize(Isolate* isolate, global_scope_ = NULL; extension_ = NULL; cached_data_ = NULL; - cached_data_mode_ = NO_CACHED_DATA; + compile_options_ = ScriptCompiler::kNoCompileOptions; zone_ = zone; deferred_handles_ = NULL; code_stub_ = NULL; @@ -110,11 +144,11 @@ void CompilationInfo::Initialize(Isolate* isolate, } mode_ = mode; abort_due_to_dependency_ = false; - if (script_->type()->value() == Script::TYPE_NATIVE) { - MarkAsNative(); - } + if (script_->type()->value() == Script::TYPE_NATIVE) MarkAsNative(); + if (isolate_->debug()->is_active()) MarkAsDebug(); + if (!shared_info_.is_null()) { - ASSERT(strict_mode() == SLOPPY); + DCHECK(strict_mode() == SLOPPY); SetStrictMode(shared_info_->strict_mode()); } set_bailout_reason(kUnknown); @@ -131,11 +165,12 @@ void CompilationInfo::Initialize(Isolate* isolate, CompilationInfo::~CompilationInfo() { delete deferred_handles_; delete no_frame_ranges_; + if (ast_value_factory_owned_) delete ast_value_factory_; #ifdef DEBUG // Check that no dependent maps have been added or added dependent maps have // been rolled back or committed. for (int i = 0; i < DependentCode::kGroupCount; i++) { - ASSERT_EQ(NULL, dependencies_[i]); + DCHECK_EQ(NULL, dependencies_[i]); } #endif // DEBUG } @@ -145,7 +180,7 @@ void CompilationInfo::CommitDependencies(Handle<Code> code) { for (int i = 0; i < DependentCode::kGroupCount; i++) { ZoneList<Handle<HeapObject> >* group_objects = dependencies_[i]; if (group_objects == NULL) continue; - ASSERT(!object_wrapper_.is_null()); + DCHECK(!object_wrapper_.is_null()); for (int j = 0; j < group_objects->length(); j++) { DependentCode::DependencyGroup group = static_cast<DependentCode::DependencyGroup>(i); @@ -177,7 +212,7 @@ void CompilationInfo::RollbackDependencies() { int CompilationInfo::num_parameters() const { if (IsStub()) { - ASSERT(parameter_count_ > 0); + DCHECK(parameter_count_ > 0); return parameter_count_; } else { return scope()->num_parameters(); @@ -231,7 +266,7 @@ bool CompilationInfo::ShouldSelfOptimize() { void CompilationInfo::PrepareForCompilation(Scope* scope) { - ASSERT(scope_ == NULL); + DCHECK(scope_ == NULL); scope_ = scope; int length = function()->slot_count(); @@ -239,7 +274,7 @@ void CompilationInfo::PrepareForCompilation(Scope* scope) { // Allocate the feedback vector too. feedback_vector_ = isolate()->factory()->NewTypeFeedbackVector(length); } - ASSERT(feedback_vector_->length() == length); + DCHECK(feedback_vector_->length() == length); } @@ -279,39 +314,28 @@ class HOptimizedGraphBuilderWithPositions: public HOptimizedGraphBuilder { }; -// Determine whether to use the full compiler for all code. If the flag -// --always-full-compiler is specified this is the case. For the virtual frame -// based compiler the full compiler is also used if a debugger is connected, as -// the code from the full compiler supports mode precise break points. For the -// crankshaft adaptive compiler debugging the optimized code is not possible at -// all. However crankshaft support recompilation of functions, so in this case -// the full compiler need not be be used if a debugger is attached, but only if -// break points has actually been set. -static bool IsDebuggerActive(Isolate* isolate) { - return isolate->use_crankshaft() ? 
- isolate->debug()->has_break_points() : - isolate->debugger()->IsDebuggerActive(); -} - - OptimizedCompileJob::Status OptimizedCompileJob::CreateGraph() { - ASSERT(isolate()->use_crankshaft()); - ASSERT(info()->IsOptimizing()); - ASSERT(!info()->IsCompilingForDebugging()); + DCHECK(isolate()->use_crankshaft()); + DCHECK(info()->IsOptimizing()); + DCHECK(!info()->IsCompilingForDebugging()); // We should never arrive here if there is no code object on the // shared function object. - ASSERT(info()->shared_info()->code()->kind() == Code::FUNCTION); + DCHECK(info()->shared_info()->code()->kind() == Code::FUNCTION); // We should never arrive here if optimization has been disabled on the // shared function info. - ASSERT(!info()->shared_info()->optimization_disabled()); + DCHECK(!info()->shared_info()->optimization_disabled()); // Fall back to using the full code generator if it's not possible // to use the Hydrogen-based optimizing compiler. We already have // generated code for this from the shared function object. if (FLAG_always_full_compiler) return AbortOptimization(); - if (IsDebuggerActive(isolate())) return AbortOptimization(kDebuggerIsActive); + + // Do not use crankshaft if we need to be able to set break points. + if (isolate()->DebuggerHasBreakPoints()) { + return AbortOptimization(kDebuggerHasBreakPoints); + } // Limit the number of times we re-compile a functions with // the optimizing compiler. @@ -344,18 +368,19 @@ OptimizedCompileJob::Status OptimizedCompileJob::CreateGraph() { return AbortAndDisableOptimization(kFunctionWithIllegalRedeclaration); } - // Take --hydrogen-filter into account. + // Check the whitelist for Crankshaft. if (!info()->closure()->PassesFilter(FLAG_hydrogen_filter)) { return AbortOptimization(kHydrogenFilter); } + // Crankshaft requires a version of fullcode with deoptimization support. // Recompile the unoptimized version of the code if the current version - // doesn't have deoptimization support. Alternatively, we may decide to - // run the full code generator to get a baseline for the compile-time - // performance of the hydrogen-based compiler. + // doesn't have deoptimization support already. + // Otherwise, if we are gathering compilation time and space statistics + // for hydrogen, gather baseline statistics for a fullcode compilation. bool should_recompile = !info()->shared_info()->has_deoptimization_support(); if (should_recompile || FLAG_hydrogen_stats) { - ElapsedTimer timer; + base::ElapsedTimer timer; if (FLAG_hydrogen_stats) { timer.Start(); } @@ -380,13 +405,21 @@ OptimizedCompileJob::Status OptimizedCompileJob::CreateGraph() { } } - // Check that the unoptimized, shared code is ready for - // optimizations. When using the always_opt flag we disregard the - // optimizable marker in the code object and optimize anyway. This - // is safe as long as the unoptimized code has deoptimization - // support. - ASSERT(FLAG_always_opt || info()->shared_info()->code()->optimizable()); - ASSERT(info()->shared_info()->has_deoptimization_support()); + DCHECK(info()->shared_info()->has_deoptimization_support()); + + // Check the whitelist for TurboFan. + if (info()->closure()->PassesFilter(FLAG_turbo_filter) && + // TODO(turbofan): Make try-catch work and remove this bailout. + info()->function()->dont_optimize_reason() != kTryCatchStatement && + info()->function()->dont_optimize_reason() != kTryFinallyStatement && + // TODO(turbofan): Make OSR work and remove this bailout. 
+ !info()->is_osr()) { + compiler::Pipeline pipeline(info()); + pipeline.GenerateCode(); + if (!info()->code().is_null()) { + return SetLastStatus(SUCCEEDED); + } + } if (FLAG_trace_hydrogen) { Handle<String> name = info()->function()->debug_name(); @@ -398,7 +431,7 @@ OptimizedCompileJob::Status OptimizedCompileJob::CreateGraph() { // Type-check the function. AstTyper::Run(info()); - graph_builder_ = FLAG_hydrogen_track_positions + graph_builder_ = (FLAG_hydrogen_track_positions || FLAG_trace_ic) ? new(info()->zone()) HOptimizedGraphBuilderWithPositions(info()) : new(info()->zone()) HOptimizedGraphBuilder(info()); @@ -413,7 +446,7 @@ OptimizedCompileJob::Status OptimizedCompileJob::CreateGraph() { // The function being compiled may have bailed out due to an inline // candidate bailing out. In such a case, we don't disable // optimization on the shared_info. - ASSERT(!graph_builder_->inline_bailout() || graph_ == NULL); + DCHECK(!graph_builder_->inline_bailout() || graph_ == NULL); if (graph_ == NULL) { if (graph_builder_->inline_bailout()) { return AbortOptimization(); @@ -436,9 +469,14 @@ OptimizedCompileJob::Status OptimizedCompileJob::OptimizeGraph() { DisallowHandleDereference no_deref; DisallowCodeDependencyChange no_dependency_change; - ASSERT(last_status() == SUCCEEDED); + DCHECK(last_status() == SUCCEEDED); + // TODO(turbofan): Currently everything is done in the first phase. + if (!info()->code().is_null()) { + return last_status(); + } + Timer t(this, &time_taken_to_optimize_); - ASSERT(graph_ != NULL); + DCHECK(graph_ != NULL); BailoutReason bailout_reason = kNoReason; if (graph_->Optimize(&bailout_reason)) { @@ -453,13 +491,20 @@ OptimizedCompileJob::Status OptimizedCompileJob::OptimizeGraph() { OptimizedCompileJob::Status OptimizedCompileJob::GenerateCode() { - ASSERT(last_status() == SUCCEEDED); - ASSERT(!info()->HasAbortedDueToDependencyChange()); + DCHECK(last_status() == SUCCEEDED); + // TODO(turbofan): Currently everything is done in the first phase. + if (!info()->code().is_null()) { + RecordOptimizationStats(); + return last_status(); + } + + DCHECK(!info()->HasAbortedDueToDependencyChange()); DisallowCodeDependencyChange no_dependency_change; + DisallowJavascriptExecution no_js(isolate()); { // Scope for timer. Timer timer(this, &time_taken_to_codegen_); - ASSERT(chunk_ != NULL); - ASSERT(graph_ != NULL); + DCHECK(chunk_ != NULL); + DCHECK(graph_ != NULL); // Deferred handles reference objects that were accessible during // graph creation. To make sure that we don't encounter inconsistencies // between graph creation and code generation, we disallow accessing @@ -469,6 +514,20 @@ OptimizedCompileJob::Status OptimizedCompileJob::GenerateCode() { if (optimized_code.is_null()) { if (info()->bailout_reason() == kNoReason) { info_->set_bailout_reason(kCodeGenerationFailed); + } else if (info()->bailout_reason() == kMapBecameDeprecated) { + if (FLAG_trace_opt) { + PrintF("[aborted optimizing "); + info()->closure()->ShortPrint(); + PrintF(" because a map became deprecated]\n"); + } + return AbortOptimization(); + } else if (info()->bailout_reason() == kMapBecameUnstable) { + if (FLAG_trace_opt) { + PrintF("[aborted optimizing "); + info()->closure()->ShortPrint(); + PrintF(" because a map became unstable]\n"); + } + return AbortOptimization(); } return AbortAndDisableOptimization(); } @@ -521,9 +580,6 @@ void OptimizedCompileJob::RecordOptimizationStats() { // Sets the expected number of properties based on estimate from compiler. 
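// [Editor's note: worked example, not part of the upstream patch.]
// With the heuristics below, a constructor that adds no properties
// (estimate == 0) is first bumped to 2 slots; if the isolate is building
// a snapshot, serializer_enabled() adds 2 more, giving 2 + 2 = 4 expected
// in-object properties.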
void SetExpectedNofPropertiesFromEstimate(Handle<SharedFunctionInfo> shared, int estimate) { - // See the comment in SetExpectedNofProperties. - if (shared->live_objects_may_exist()) return; - // If no properties are added in the constructor, they are more likely // to be added later. if (estimate == 0) estimate = 2; @@ -531,7 +587,7 @@ void SetExpectedNofPropertiesFromEstimate(Handle<SharedFunctionInfo> shared, // TODO(yangguo): check whether those heuristics are still up-to-date. // We do not shrink objects that go into a snapshot (yet), so we adjust // the estimate conservatively. - if (Serializer::enabled(shared->GetIsolate())) { + if (shared->GetIsolate()->serializer_enabled()) { estimate += 2; } else if (FLAG_clever_optimizations) { // Inobject slack tracking will reclaim redundant inobject space later, @@ -549,7 +605,7 @@ static void UpdateSharedFunctionInfo(CompilationInfo* info) { // Update the shared function info with the compiled code and the // scope info. Please note, that the order of the shared function // info initialization is important since set_scope_info might - // trigger a GC, causing the ASSERT below to be invalid if the code + // trigger a GC, causing the DCHECK below to be invalid if the code // was flushed. By setting the code object last we avoid this. Handle<SharedFunctionInfo> shared = info->shared_info(); Handle<ScopeInfo> scope_info = @@ -569,9 +625,8 @@ static void UpdateSharedFunctionInfo(CompilationInfo* info) { SetExpectedNofPropertiesFromEstimate(shared, expected); // Check the function has compiled code. - ASSERT(shared->is_compiled()); - shared->set_dont_optimize_reason(lit->dont_optimize_reason()); - shared->set_dont_inline(lit->flags()->Contains(kDontInline)); + DCHECK(shared->is_compiled()); + shared->set_bailout_reason(lit->dont_optimize_reason()); shared->set_ast_node_count(lit->ast_node_count()); shared->set_strict_mode(lit->strict_mode()); } @@ -603,18 +658,19 @@ static void SetFunctionInfo(Handle<SharedFunctionInfo> function_info, function_info->set_has_duplicate_parameters(lit->has_duplicate_parameters()); function_info->set_ast_node_count(lit->ast_node_count()); function_info->set_is_function(lit->is_function()); - function_info->set_dont_optimize_reason(lit->dont_optimize_reason()); - function_info->set_dont_inline(lit->flags()->Contains(kDontInline)); + function_info->set_bailout_reason(lit->dont_optimize_reason()); function_info->set_dont_cache(lit->flags()->Contains(kDontCache)); function_info->set_is_generator(lit->is_generator()); + function_info->set_is_arrow(lit->is_arrow()); } static bool CompileUnoptimizedCode(CompilationInfo* info) { - ASSERT(info->function() != NULL); + DCHECK(AllowCompilation::IsAllowed(info->isolate())); + DCHECK(info->function() != NULL); if (!Rewriter::Rewrite(info)) return false; if (!Scope::Analyze(info)) return false; - ASSERT(info->scope() != NULL); + DCHECK(info->scope() != NULL); if (!FullCodeGenerator::MakeCode(info)) { Isolate* isolate = info->isolate(); @@ -636,14 +692,14 @@ MUST_USE_RESULT static MaybeHandle<Code> GetUnoptimizedCodeCommon( Compiler::RecordFunctionCompilation( Logger::LAZY_COMPILE_TAG, info, info->shared_info()); UpdateSharedFunctionInfo(info); - ASSERT_EQ(Code::FUNCTION, info->code()->kind()); + DCHECK_EQ(Code::FUNCTION, info->code()->kind()); return info->code(); } MaybeHandle<Code> Compiler::GetUnoptimizedCode(Handle<JSFunction> function) { - ASSERT(!function->GetIsolate()->has_pending_exception()); - ASSERT(!function->is_compiled()); + 
DCHECK(!function->GetIsolate()->has_pending_exception()); + DCHECK(!function->is_compiled()); if (function->shared()->is_compiled()) { return Handle<Code>(function->shared()->code()); } @@ -672,8 +728,8 @@ MaybeHandle<Code> Compiler::GetUnoptimizedCode(Handle<JSFunction> function) { MaybeHandle<Code> Compiler::GetUnoptimizedCode( Handle<SharedFunctionInfo> shared) { - ASSERT(!shared->GetIsolate()->has_pending_exception()); - ASSERT(!shared->is_compiled()); + DCHECK(!shared->GetIsolate()->has_pending_exception()); + DCHECK(!shared->is_compiled()); CompilationInfoWithZone info(shared); return GetUnoptimizedCodeCommon(&info); @@ -692,7 +748,7 @@ bool Compiler::EnsureCompiled(Handle<JSFunction> function, return false; } function->ReplaceCode(*code); - ASSERT(function->is_compiled()); + DCHECK(function->is_compiled()); return true; } @@ -711,10 +767,12 @@ MaybeHandle<Code> Compiler::GetCodeForDebugging(Handle<JSFunction> function) { Isolate* isolate = info.isolate(); VMState<COMPILER> state(isolate); - ASSERT(!isolate->has_pending_exception()); + info.MarkAsDebug(); + + DCHECK(!isolate->has_pending_exception()); Handle<Code> old_code(function->shared()->code()); - ASSERT(old_code->kind() == Code::FUNCTION); - ASSERT(!old_code->has_debug_break_slots()); + DCHECK(old_code->kind() == Code::FUNCTION); + DCHECK(!old_code->has_debug_break_slots()); info.MarkCompilingForDebugging(); if (old_code->is_compiled_optimizable()) { @@ -727,7 +785,7 @@ MaybeHandle<Code> Compiler::GetCodeForDebugging(Handle<JSFunction> function) { if (!maybe_new_code.ToHandle(&new_code)) { isolate->clear_pending_exception(); } else { - ASSERT_EQ(old_code->is_compiled_optimizable(), + DCHECK_EQ(old_code->is_compiled_optimizable(), new_code->is_compiled_optimizable()); } return maybe_new_code; @@ -765,30 +823,32 @@ static bool DebuggerWantsEagerCompilation(CompilationInfo* info, static Handle<SharedFunctionInfo> CompileToplevel(CompilationInfo* info) { Isolate* isolate = info->isolate(); PostponeInterruptsScope postpone(isolate); - ASSERT(!isolate->native_context().is_null()); + DCHECK(!isolate->native_context().is_null()); Handle<Script> script = info->script(); // TODO(svenpanne) Obscure place for this, perhaps move to OnBeforeCompile? FixedArray* array = isolate->native_context()->embedder_data(); script->set_context_data(array->get(0)); - isolate->debugger()->OnBeforeCompile(script); + isolate->debug()->OnBeforeCompile(script); - ASSERT(info->is_eval() || info->is_global()); + DCHECK(info->is_eval() || info->is_global()); bool parse_allow_lazy = - (info->cached_data_mode() == CONSUME_CACHED_DATA || + (info->compile_options() == ScriptCompiler::kConsumeParserCache || String::cast(script->source())->length() > FLAG_min_preparse_length) && !DebuggerWantsEagerCompilation(info); - if (!parse_allow_lazy && info->cached_data_mode() != NO_CACHED_DATA) { + if (!parse_allow_lazy && + (info->compile_options() == ScriptCompiler::kProduceParserCache || + info->compile_options() == ScriptCompiler::kConsumeParserCache)) { // We are going to parse eagerly, but we either 1) have cached data produced // by lazy parsing or 2) are asked to generate cached data. We cannot use // the existing data, since it won't contain all the symbols we need for // eager parsing. In addition, it doesn't make sense to produce the data // when parsing eagerly. That data would contain all symbols, but no // functions, so it cannot be used to aid lazy parsing later. 
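  // [Editor's summary, not part of the upstream patch.] Net effect when
  // eager parsing is chosen:
  //   kProduceParserCache -> drop the request: eager parsing would record
  //       every symbol but no lazily-compiled functions.
  //   kConsumeParserCache -> drop the data: lazily-produced data lacks the
  //       symbols eager parsing needs.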
- info->SetCachedData(NULL, NO_CACHED_DATA); + info->SetCachedData(NULL, ScriptCompiler::kNoCompileOptions); } Handle<SharedFunctionInfo> result; @@ -815,16 +875,14 @@ static Handle<SharedFunctionInfo> CompileToplevel(CompilationInfo* info) { } // Allocate function. - ASSERT(!info->code().is_null()); + DCHECK(!info->code().is_null()); result = isolate->factory()->NewSharedFunctionInfo( - lit->name(), - lit->materialized_literal_count(), - lit->is_generator(), - info->code(), + lit->name(), lit->materialized_literal_count(), lit->is_generator(), + lit->is_arrow(), info->code(), ScopeInfo::Create(info->scope(), info->zone()), info->feedback_vector()); - ASSERT_EQ(RelocInfo::kNoPosition, lit->function_token_position()); + DCHECK_EQ(RelocInfo::kNoPosition, lit->function_token_position()); SetFunctionInfo(result, lit, true, script); Handle<String> script_name = script->name()->IsString() @@ -849,7 +907,7 @@ static Handle<SharedFunctionInfo> CompileToplevel(CompilationInfo* info) { live_edit_tracker.RecordFunctionInfo(result, lit, info->zone()); } - isolate->debugger()->OnAfterCompile(script, Debugger::NO_AFTER_COMPILE_FLAGS); + isolate->debug()->OnAfterCompile(script); return result; } @@ -893,7 +951,7 @@ MaybeHandle<JSFunction> Compiler::GetFunctionFromEval( shared_info->DisableOptimization(kEval); // If caller is strict mode, the result must be in strict mode as well. - ASSERT(strict_mode == SLOPPY || shared_info->strict_mode() == STRICT); + DCHECK(strict_mode == SLOPPY || shared_info->strict_mode() == STRICT); if (!shared_info->dont_cache()) { compilation_cache->PutEval( source, context, shared_info, scope_position); @@ -909,23 +967,21 @@ MaybeHandle<JSFunction> Compiler::GetFunctionFromEval( Handle<SharedFunctionInfo> Compiler::CompileScript( - Handle<String> source, - Handle<Object> script_name, - int line_offset, - int column_offset, - bool is_shared_cross_origin, - Handle<Context> context, - v8::Extension* extension, - ScriptData** cached_data, - CachedDataMode cached_data_mode, - NativesFlag natives) { - if (cached_data_mode == NO_CACHED_DATA) { + Handle<String> source, Handle<Object> script_name, int line_offset, + int column_offset, bool is_shared_cross_origin, Handle<Context> context, + v8::Extension* extension, ScriptData** cached_data, + ScriptCompiler::CompileOptions compile_options, NativesFlag natives) { + if (compile_options == ScriptCompiler::kNoCompileOptions) { cached_data = NULL; - } else if (cached_data_mode == PRODUCE_CACHED_DATA) { - ASSERT(cached_data && !*cached_data); + } else if (compile_options == ScriptCompiler::kProduceParserCache || + compile_options == ScriptCompiler::kProduceCodeCache) { + DCHECK(cached_data && !*cached_data); + DCHECK(extension == NULL); } else { - ASSERT(cached_data_mode == CONSUME_CACHED_DATA); - ASSERT(cached_data && *cached_data); + DCHECK(compile_options == ScriptCompiler::kConsumeParserCache || + compile_options == ScriptCompiler::kConsumeCodeCache); + DCHECK(cached_data && *cached_data); + DCHECK(extension == NULL); } Isolate* isolate = source->GetIsolate(); int source_length = source->length(); @@ -938,9 +994,21 @@ Handle<SharedFunctionInfo> Compiler::CompileScript( MaybeHandle<SharedFunctionInfo> maybe_result; Handle<SharedFunctionInfo> result; if (extension == NULL) { - maybe_result = compilation_cache->LookupScript( - source, script_name, line_offset, column_offset, - is_shared_cross_origin, context); + if (FLAG_serialize_toplevel && + compile_options == ScriptCompiler::kConsumeCodeCache && + !isolate->debug()->is_loaded()) { + 
return CodeSerializer::Deserialize(isolate, *cached_data, source); + } else { + maybe_result = compilation_cache->LookupScript( + source, script_name, line_offset, column_offset, + is_shared_cross_origin, context); + } + } + + base::ElapsedTimer timer; + if (FLAG_profile_deserialization && FLAG_serialize_toplevel && + compile_options == ScriptCompiler::kProduceCodeCache) { + timer.Start(); } if (!maybe_result.ToHandle(&result)) { @@ -961,29 +1029,45 @@ Handle<SharedFunctionInfo> Compiler::CompileScript( // Compile the function and add it to the cache. CompilationInfoWithZone info(script); info.MarkAsGlobal(); + info.SetCachedData(cached_data, compile_options); info.SetExtension(extension); - info.SetCachedData(cached_data, cached_data_mode); info.SetContext(context); + if (FLAG_serialize_toplevel && + compile_options == ScriptCompiler::kProduceCodeCache) { + info.PrepareForSerializing(); + } if (FLAG_use_strict) info.SetStrictMode(STRICT); + result = CompileToplevel(&info); if (extension == NULL && !result.is_null() && !result->dont_cache()) { compilation_cache->PutScript(source, context, result); + if (FLAG_serialize_toplevel && + compile_options == ScriptCompiler::kProduceCodeCache) { + *cached_data = CodeSerializer::Serialize(isolate, result, source); + if (FLAG_profile_deserialization) { + PrintF("[Compiling and serializing %d bytes took %0.3f ms]\n", + (*cached_data)->length(), timer.Elapsed().InMillisecondsF()); + } + } } + if (result.is_null()) isolate->ReportPendingMessages(); } else if (result->ic_age() != isolate->heap()->global_ic_age()) { - result->ResetForNewContext(isolate->heap()->global_ic_age()); + result->ResetForNewContext(isolate->heap()->global_ic_age()); } return result; } -Handle<SharedFunctionInfo> Compiler::BuildFunctionInfo(FunctionLiteral* literal, - Handle<Script> script) { +Handle<SharedFunctionInfo> Compiler::BuildFunctionInfo( + FunctionLiteral* literal, Handle<Script> script, + CompilationInfo* outer_info) { // Precondition: code has been parsed and scopes have been analyzed. CompilationInfoWithZone info(script); info.SetFunction(literal); info.PrepareForCompilation(literal->scope()); info.SetStrictMode(literal->scope()->strict_mode()); + if (outer_info->will_serialize()) info.PrepareForSerializing(); Isolate* isolate = info.isolate(); Factory* factory = isolate->factory(); @@ -1008,20 +1092,17 @@ Handle<SharedFunctionInfo> Compiler::BuildFunctionInfo(FunctionLiteral* literal, info.SetCode(code); scope_info = Handle<ScopeInfo>(ScopeInfo::Empty(isolate)); } else if (FullCodeGenerator::MakeCode(&info)) { - ASSERT(!info.code().is_null()); + DCHECK(!info.code().is_null()); scope_info = ScopeInfo::Create(info.scope(), info.zone()); } else { return Handle<SharedFunctionInfo>::null(); } // Create a shared function info object. 
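An aside on the kProduceCodeCache/kConsumeCodeCache branches in CompileScript above: from the embedder side they are driven through the public ScriptCompiler API, roughly as sketched below. The exact signatures are an assumption on my part (they shifted across 3.x releases), `code` stands for any v8::Local<v8::String> source, and the FLAG_serialize_toplevel gate above also has to be on for the serializer paths to fire:

    // First run: compile and ask V8 to serialize the toplevel code.
    v8::ScriptCompiler::Source source(code);
    v8::Local<v8::Script> script = v8::ScriptCompiler::Compile(
        isolate, &source, v8::ScriptCompiler::kProduceCodeCache);
    const v8::ScriptCompiler::CachedData* cache = source.GetCachedData();
    // ... persist cache->data / cache->length ...

    // Later run: hand the bytes back so the deserialization fast path is hit.
    v8::ScriptCompiler::Source source2(
        code, new v8::ScriptCompiler::CachedData(cache->data, cache->length));
    v8::Local<v8::Script> script2 = v8::ScriptCompiler::Compile(
        isolate, &source2, v8::ScriptCompiler::kConsumeCodeCache);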
- Handle<SharedFunctionInfo> result = - factory->NewSharedFunctionInfo(literal->name(), - literal->materialized_literal_count(), - literal->is_generator(), - info.code(), - scope_info, - info.feedback_vector()); + Handle<SharedFunctionInfo> result = factory->NewSharedFunctionInfo( + literal->name(), literal->materialized_literal_count(), + literal->is_generator(), literal->is_arrow(), info.code(), scope_info, + info.feedback_vector()); SetFunctionInfo(result, literal, false, script); RecordFunctionCompilation(Logger::FUNCTION_TAG, &info, result); result->set_allows_lazy_compilation(allow_lazy); @@ -1041,6 +1122,8 @@ MUST_USE_RESULT static MaybeHandle<Code> GetCodeFromOptimizedCodeMap( BailoutId osr_ast_id) { if (FLAG_cache_optimized_code) { Handle<SharedFunctionInfo> shared(function->shared()); + // Bound functions are not cached. + if (shared->bound()) return MaybeHandle<Code>(); DisallowHeapAllocation no_gc; int index = shared->SearchOptimizedCodeMap( function->context()->native_context(), osr_ast_id); @@ -1066,10 +1149,15 @@ static void InsertCodeIntoOptimizedCodeMap(CompilationInfo* info) { Handle<Code> code = info->code(); if (code->kind() != Code::OPTIMIZED_FUNCTION) return; // Nothing to do. + // Context specialization folds-in the context, so no sharing can occur. + if (code->is_turbofanned() && FLAG_context_specialization) return; + // Cache optimized code. if (FLAG_cache_optimized_code) { Handle<JSFunction> function = info->closure(); Handle<SharedFunctionInfo> shared(function->shared()); + // Do not cache bound functions. + if (shared->bound()) return; Handle<FixedArray> literals(function->literals()); Handle<Context> native_context(function->context()->native_context()); SharedFunctionInfo::AddToOptimizedCodeMap( @@ -1084,7 +1172,7 @@ static bool CompileOptimizedPrologue(CompilationInfo* info) { if (!Rewriter::Rewrite(info)) return false; if (!Scope::Analyze(info)) return false; - ASSERT(info->scope() != NULL); + DCHECK(info->scope() != NULL); return true; } @@ -1092,8 +1180,7 @@ static bool CompileOptimizedPrologue(CompilationInfo* info) { static bool GetOptimizedCodeNow(CompilationInfo* info) { if (!CompileOptimizedPrologue(info)) return false; - Logger::TimerEventScope timer( - info->isolate(), Logger::TimerEventScope::v8_recompile_synchronous); + TimerEventScope<TimerEventRecompileSynchronous> timer(info->isolate()); OptimizedCompileJob job(info); if (job.CreateGraph() != OptimizedCompileJob::SUCCEEDED) return false; @@ -1101,7 +1188,7 @@ static bool GetOptimizedCodeNow(CompilationInfo* info) { if (job.GenerateCode() != OptimizedCompileJob::SUCCEEDED) return false; // Success! - ASSERT(!info->isolate()->has_pending_exception()); + DCHECK(!info->isolate()->has_pending_exception()); InsertCodeIntoOptimizedCodeMap(info); Compiler::RecordFunctionCompilation( Logger::LAZY_COMPILE_TAG, info, info->shared_info()); @@ -1124,8 +1211,7 @@ static bool GetOptimizedCodeLater(CompilationInfo* info) { if (!CompileOptimizedPrologue(info)) return false; info->SaveHandles(); // Copy handles to the compilation handle scope. 
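The two optimized-code-map helpers above share a policy: entries are keyed by (native context, OSR AST id), bound functions are never cached, and turbofanned code compiled with context specialization is excluded because the context is folded into the code. A toy standalone model of that lookup discipline, with integer stand-ins and hypothetical names:

    #include <map>
    #include <utility>

    typedef std::pair<int, int> CacheKey;  // (native_context_id, osr_ast_id)
    static std::map<CacheKey, int> optimized_code_map;  // value: code id, 0 = none

    int LookupOptimizedCode(bool is_bound, int context_id, int osr_ast_id) {
      if (is_bound) return 0;  // mirrors the shared->bound() early-outs above
      std::map<CacheKey, int>::const_iterator it =
          optimized_code_map.find(CacheKey(context_id, osr_ast_id));
      return it == optimized_code_map.end() ? 0 : it->second;
    }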
- Logger::TimerEventScope timer( - isolate, Logger::TimerEventScope::v8_recompile_synchronous); + TimerEventScope<TimerEventRecompileSynchronous> timer(info->isolate()); OptimizedCompileJob* job = new(info->zone()) OptimizedCompileJob(info); OptimizedCompileJob::Status status = job->CreateGraph(); @@ -1157,12 +1243,13 @@ MaybeHandle<Code> Compiler::GetOptimizedCode(Handle<JSFunction> function, SmartPointer<CompilationInfo> info(new CompilationInfoWithZone(function)); Isolate* isolate = info->isolate(); + DCHECK(AllowCompilation::IsAllowed(isolate)); VMState<COMPILER> state(isolate); - ASSERT(!isolate->has_pending_exception()); + DCHECK(!isolate->has_pending_exception()); PostponeInterruptsScope postpone(isolate); Handle<SharedFunctionInfo> shared = info->shared_info(); - ASSERT_NE(ScopeInfo::Empty(isolate), shared->scope_info()); + DCHECK_NE(ScopeInfo::Empty(isolate), shared->scope_info()); int compiled_size = shared->end_position() - shared->start_position(); isolate->counters()->total_compile_size()->Increment(compiled_size); current_code->set_profiler_ticks(0); @@ -1197,8 +1284,7 @@ Handle<Code> Compiler::GetConcurrentlyOptimizedCode(OptimizedCompileJob* job) { Isolate* isolate = info->isolate(); VMState<COMPILER> state(isolate); - Logger::TimerEventScope timer( - isolate, Logger::TimerEventScope::v8_recompile_synchronous); + TimerEventScope<TimerEventRecompileSynchronous> timer(info->isolate()); Handle<SharedFunctionInfo> shared = info->shared_info(); shared->code()->set_profiler_ticks(0); @@ -1299,7 +1385,7 @@ bool CompilationPhase::ShouldProduceTraceOutput() const { : (FLAG_trace_hydrogen && info()->closure()->PassesFilter(FLAG_trace_hydrogen_filter)); return (tracing_on && - OS::StrChr(const_cast<char*>(FLAG_trace_phase), name_[0]) != NULL); + base::OS::StrChr(const_cast<char*>(FLAG_trace_phase), name_[0]) != NULL); } } } // namespace v8::internal diff --git a/deps/v8/src/compiler.h b/deps/v8/src/compiler.h index 24a8a9f5de5..e8beca5a05a 100644 --- a/deps/v8/src/compiler.h +++ b/deps/v8/src/compiler.h @@ -5,14 +5,14 @@ #ifndef V8_COMPILER_H_ #define V8_COMPILER_H_ -#include "allocation.h" -#include "ast.h" -#include "zone.h" +#include "src/allocation.h" +#include "src/ast.h" +#include "src/zone.h" namespace v8 { namespace internal { -class ScriptData; +class AstValueFactory; class HydrogenCodeStub; // ParseRestriction is used to restrict the set of valid statements in a @@ -22,23 +22,48 @@ enum ParseRestriction { ONLY_SINGLE_FUNCTION_LITERAL // Only a single FunctionLiteral expression. }; -enum CachedDataMode { - NO_CACHED_DATA, - CONSUME_CACHED_DATA, - PRODUCE_CACHED_DATA -}; - struct OffsetRange { OffsetRange(int from, int to) : from(from), to(to) {} int from; int to; }; + +class ScriptData { + public: + ScriptData(const byte* data, int length); + ~ScriptData() { + if (owns_data_) DeleteArray(data_); + } + + const byte* data() const { return data_; } + int length() const { return length_; } + + void AcquireDataOwnership() { + DCHECK(!owns_data_); + owns_data_ = true; + } + + void ReleaseDataOwnership() { + DCHECK(owns_data_); + owns_data_ = false; + } + + private: + bool owns_data_; + const byte* data_; + int length_; + + DISALLOW_COPY_AND_ASSIGN(ScriptData); +}; + + // CompilationInfo encapsulates some information known at compile time. It // is constructed based on the resources available at compile-time. 
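The new ScriptData class above pins down who frees the underlying buffer: the owns_data_ flag, toggled through the two DCHECK-guarded methods, decides, not the pointer itself. A self-contained sketch of the same discipline with a plain byte buffer (the DCHECKs and V8 types are dropped; the class name here is made up):

    #include <cstdint>

    class CachedBytes {
     public:
      CachedBytes(const uint8_t* data, int length)
          : owns_data_(false), data_(data), length_(length) {}
      ~CachedBytes() { if (owns_data_) delete[] data_; }

      void AcquireDataOwnership() { owns_data_ = true; }   // we delete it later
      void ReleaseDataOwnership() { owns_data_ = false; }  // caller deletes it

      const uint8_t* data() const { return data_; }
      int length() const { return length_; }

     private:
      bool owns_data_;
      const uint8_t* data_;
      int length_;
    };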
class CompilationInfo { public: CompilationInfo(Handle<JSFunction> closure, Zone* zone); + CompilationInfo(Isolate* isolate, Zone* zone); virtual ~CompilationInfo(); Isolate* isolate() const { @@ -50,7 +75,6 @@ class CompilationInfo { bool is_eval() const { return IsEval::decode(flags_); } bool is_global() const { return IsGlobal::decode(flags_); } StrictMode strict_mode() const { return StrictModeField::decode(flags_); } - bool is_in_loop() const { return IsInLoop::decode(flags_); } FunctionLiteral* function() const { return function_; } Scope* scope() const { return scope_; } Scope* global_scope() const { return global_scope_; } @@ -61,8 +85,8 @@ class CompilationInfo { HydrogenCodeStub* code_stub() const {return code_stub_; } v8::Extension* extension() const { return extension_; } ScriptData** cached_data() const { return cached_data_; } - CachedDataMode cached_data_mode() const { - return cached_data_mode_; + ScriptCompiler::CompileOptions compile_options() const { + return compile_options_; } Handle<Context> context() const { return context_; } BailoutId osr_ast_id() const { return osr_ast_id_; } @@ -73,32 +97,33 @@ class CompilationInfo { Code::Flags flags() const; void MarkAsEval() { - ASSERT(!is_lazy()); + DCHECK(!is_lazy()); flags_ |= IsEval::encode(true); } + void MarkAsGlobal() { - ASSERT(!is_lazy()); + DCHECK(!is_lazy()); flags_ |= IsGlobal::encode(true); } + void set_parameter_count(int parameter_count) { - ASSERT(IsStub()); + DCHECK(IsStub()); parameter_count_ = parameter_count; } void set_this_has_uses(bool has_no_uses) { this_has_uses_ = has_no_uses; } + bool this_has_uses() { return this_has_uses_; } + void SetStrictMode(StrictMode strict_mode) { - ASSERT(this->strict_mode() == SLOPPY || this->strict_mode() == strict_mode); + DCHECK(this->strict_mode() == SLOPPY || this->strict_mode() == strict_mode); flags_ = StrictModeField::update(flags_, strict_mode); } - void MarkAsInLoop() { - ASSERT(is_lazy()); - flags_ |= IsInLoop::encode(true); - } + void MarkAsNative() { flags_ |= IsNative::encode(true); } @@ -143,6 +168,33 @@ class CompilationInfo { return RequiresFrame::decode(flags_); } + void MarkMustNotHaveEagerFrame() { + flags_ |= MustNotHaveEagerFrame::encode(true); + } + + bool GetMustNotHaveEagerFrame() const { + return MustNotHaveEagerFrame::decode(flags_); + } + + void MarkAsDebug() { + flags_ |= IsDebug::encode(true); + } + + bool is_debug() const { + return IsDebug::decode(flags_); + } + + void PrepareForSerializing() { + flags_ |= PrepareForSerializing::encode(true); + } + + bool will_serialize() const { return PrepareForSerializing::decode(flags_); } + + bool IsCodePreAgingActive() const { + return FLAG_optimize_for_size && FLAG_age_code && !will_serialize() && + !is_debug(); + } + void SetParseRestriction(ParseRestriction restriction) { flags_ = ParseRestricitonField::update(flags_, restriction); } @@ -152,12 +204,12 @@ class CompilationInfo { } void SetFunction(FunctionLiteral* literal) { - ASSERT(function_ == NULL); + DCHECK(function_ == NULL); function_ = literal; } void PrepareForCompilation(Scope* scope); void SetGlobalScope(Scope* global_scope) { - ASSERT(global_scope_ == NULL); + DCHECK(global_scope_ == NULL); global_scope_ = global_scope; } Handle<FixedArray> feedback_vector() const { @@ -165,16 +217,16 @@ class CompilationInfo { } void SetCode(Handle<Code> code) { code_ = code; } void SetExtension(v8::Extension* extension) { - ASSERT(!is_lazy()); + DCHECK(!is_lazy()); extension_ = extension; } void SetCachedData(ScriptData** cached_data, - CachedDataMode 
cached_data_mode) { - cached_data_mode_ = cached_data_mode; - if (cached_data_mode == NO_CACHED_DATA) { + ScriptCompiler::CompileOptions compile_options) { + compile_options_ = compile_options; + if (compile_options == ScriptCompiler::kNoCompileOptions) { cached_data_ = NULL; } else { - ASSERT(!is_lazy()); + DCHECK(!is_lazy()); cached_data_ = cached_data; } } @@ -211,7 +263,7 @@ class CompilationInfo { bool IsOptimizable() const { return mode_ == BASE; } bool IsStub() const { return mode_ == STUB; } void SetOptimizing(BailoutId osr_ast_id, Handle<Code> unoptimized) { - ASSERT(!shared_info_.is_null()); + DCHECK(!shared_info_.is_null()); SetMode(OPTIMIZE); osr_ast_id_ = osr_ast_id; unoptimized_code_ = unoptimized; @@ -224,7 +276,7 @@ class CompilationInfo { return SupportsDeoptimization::decode(flags_); } void EnableDeoptimizationSupport() { - ASSERT(IsOptimizable()); + DCHECK(IsOptimizable()); flags_ |= SupportsDeoptimization::encode(true); } @@ -232,7 +284,7 @@ class CompilationInfo { bool ShouldSelfOptimize(); void set_deferred_handles(DeferredHandles* deferred_handles) { - ASSERT(deferred_handles_ == NULL); + DCHECK(deferred_handles_ == NULL); deferred_handles_ = deferred_handles; } @@ -260,12 +312,12 @@ class CompilationInfo { void set_bailout_reason(BailoutReason reason) { bailout_reason_ = reason; } int prologue_offset() const { - ASSERT_NE(Code::kPrologueOffsetNotSet, prologue_offset_); + DCHECK_NE(Code::kPrologueOffsetNotSet, prologue_offset_); return prologue_offset_; } void set_prologue_offset(int prologue_offset) { - ASSERT_EQ(Code::kPrologueOffsetNotSet, prologue_offset_); + DCHECK_EQ(Code::kPrologueOffsetNotSet, prologue_offset_); prologue_offset_ = prologue_offset; } @@ -291,12 +343,12 @@ class CompilationInfo { } void AbortDueToDependencyChange() { - ASSERT(!OptimizingCompilerThread::IsOptimizerThread(isolate())); + DCHECK(!OptimizingCompilerThread::IsOptimizerThread(isolate())); abort_due_to_dependency_ = true; } bool HasAbortedDueToDependencyChange() { - ASSERT(!OptimizingCompilerThread::IsOptimizerThread(isolate())); + DCHECK(!OptimizingCompilerThread::IsOptimizerThread(isolate())); return abort_due_to_dependency_; } @@ -306,6 +358,13 @@ class CompilationInfo { int optimization_id() const { return optimization_id_; } + AstValueFactory* ast_value_factory() const { return ast_value_factory_; } + void SetAstValueFactory(AstValueFactory* ast_value_factory, + bool owned = true) { + ast_value_factory_ = ast_value_factory; + ast_value_factory_owned_ = owned; + } + protected: CompilationInfo(Handle<Script> script, Zone* zone); @@ -333,7 +392,6 @@ class CompilationInfo { void Initialize(Isolate* isolate, Mode mode, Zone* zone); void SetMode(Mode mode) { - ASSERT(isolate()->use_crankshaft()); mode_ = mode; } @@ -345,8 +403,8 @@ class CompilationInfo { // Flags that can be set for eager compilation. class IsEval: public BitField<bool, 1, 1> {}; class IsGlobal: public BitField<bool, 2, 1> {}; - // Flags that can be set for lazy compilation. - class IsInLoop: public BitField<bool, 3, 1> {}; + // If the function is being compiled for the debugger. + class IsDebug: public BitField<bool, 3, 1> {}; // Strict mode - used in eager compilation. class StrictModeField: public BitField<StrictMode, 4, 1> {}; // Is this a function from our natives. 
@@ -368,6 +426,10 @@ class CompilationInfo { class ParseRestricitonField: public BitField<ParseRestriction, 12, 1> {}; // If the function requires a frame (for unspecified reasons) class RequiresFrame: public BitField<bool, 13, 1> {}; + // If the function cannot build a frame (for unspecified reasons) + class MustNotHaveEagerFrame: public BitField<bool, 14, 1> {}; + // If we plan to serialize the compiled code. + class PrepareForSerializing : public BitField<bool, 15, 1> {}; unsigned flags_; @@ -392,7 +454,7 @@ class CompilationInfo { // Fields possibly needed for eager compilation, NULL by default. v8::Extension* extension_; ScriptData** cached_data_; - CachedDataMode cached_data_mode_; + ScriptCompiler::CompileOptions compile_options_; // The context of the caller for eval code, and the global context for a // global script. Will be a null handle otherwise. @@ -447,6 +509,9 @@ class CompilationInfo { int optimization_id_; + AstValueFactory* ast_value_factory_; + bool ast_value_factory_owned_; + DISALLOW_COPY_AND_ASSIGN(CompilationInfo); }; @@ -545,7 +610,7 @@ class OptimizedCompileJob: public ZoneObject { } void WaitForInstall() { - ASSERT(info_->is_osr()); + DCHECK(info_->is_osr()); awaiting_install_ = true; } @@ -556,9 +621,9 @@ class OptimizedCompileJob: public ZoneObject { HOptimizedGraphBuilder* graph_builder_; HGraph* graph_; LChunk* chunk_; - TimeDelta time_taken_to_create_graph_; - TimeDelta time_taken_to_optimize_; - TimeDelta time_taken_to_codegen_; + base::TimeDelta time_taken_to_create_graph_; + base::TimeDelta time_taken_to_optimize_; + base::TimeDelta time_taken_to_codegen_; Status last_status_; bool awaiting_install_; @@ -569,9 +634,9 @@ class OptimizedCompileJob: public ZoneObject { void RecordOptimizationStats(); struct Timer { - Timer(OptimizedCompileJob* job, TimeDelta* location) + Timer(OptimizedCompileJob* job, base::TimeDelta* location) : job_(job), location_(location) { - ASSERT(location_ != NULL); + DCHECK(location_ != NULL); timer_.Start(); } @@ -580,8 +645,8 @@ class OptimizedCompileJob: public ZoneObject { } OptimizedCompileJob* job_; - ElapsedTimer timer_; - TimeDelta* location_; + base::ElapsedTimer timer_; + base::TimeDelta* location_; }; }; @@ -620,20 +685,16 @@ class Compiler : public AllStatic { // Compile a String source within a context. static Handle<SharedFunctionInfo> CompileScript( - Handle<String> source, - Handle<Object> script_name, - int line_offset, - int column_offset, - bool is_shared_cross_origin, - Handle<Context> context, - v8::Extension* extension, - ScriptData** cached_data, - CachedDataMode cached_data_mode, + Handle<String> source, Handle<Object> script_name, int line_offset, + int column_offset, bool is_shared_cross_origin, Handle<Context> context, + v8::Extension* extension, ScriptData** cached_data, + ScriptCompiler::CompileOptions compile_options, NativesFlag is_natives_code); // Create a shared function info object (the code may be lazily compiled). 
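All of the flag accessors above pack into the single `unsigned flags_` word through BitField<Type, shift, size>. A standalone re-creation of that encode/decode/update pattern, assuming nothing beyond standard C++ (the IsDebug alias mirrors the field added in this diff):

    #include <cstdint>

    template <class T, int shift, int size>
    struct BitField {
      static const uint32_t kMask = ((1u << size) - 1) << shift;
      static uint32_t encode(T value) {
        return static_cast<uint32_t>(value) << shift;
      }
      static T decode(uint32_t flags) {
        return static_cast<T>((flags & kMask) >> shift);
      }
      static uint32_t update(uint32_t flags, T value) {
        return (flags & ~kMask) | encode(value);
      }
    };

    typedef BitField<bool, 3, 1> IsDebug;
    // usage: flags |= IsDebug::encode(true);  bool dbg = IsDebug::decode(flags);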
static Handle<SharedFunctionInfo> BuildFunctionInfo(FunctionLiteral* node, - Handle<Script> script); + Handle<Script> script, + CompilationInfo* outer); enum ConcurrencyMode { NOT_CONCURRENT, CONCURRENT }; @@ -674,12 +735,11 @@ class CompilationPhase BASE_EMBEDDED { CompilationInfo* info_; Zone zone_; unsigned info_zone_start_allocation_size_; - ElapsedTimer timer_; + base::ElapsedTimer timer_; DISALLOW_COPY_AND_ASSIGN(CompilationPhase); }; - } } // namespace v8::internal #endif // V8_COMPILER_H_ diff --git a/deps/v8/src/compiler/arm/code-generator-arm.cc b/deps/v8/src/compiler/arm/code-generator-arm.cc new file mode 100644 index 00000000000..90eb7cd4dd6 --- /dev/null +++ b/deps/v8/src/compiler/arm/code-generator-arm.cc @@ -0,0 +1,848 @@ +// Copyright 2014 the V8 project authors. All rights reserved. +// Use of this source code is governed by a BSD-style license that can be +// found in the LICENSE file. + +#include "src/compiler/code-generator.h" + +#include "src/arm/macro-assembler-arm.h" +#include "src/compiler/code-generator-impl.h" +#include "src/compiler/gap-resolver.h" +#include "src/compiler/node-matchers.h" +#include "src/compiler/node-properties-inl.h" +#include "src/scopes.h" + +namespace v8 { +namespace internal { +namespace compiler { + +#define __ masm()-> + + +#define kScratchReg r9 + + +// Adds Arm-specific methods to convert InstructionOperands. +class ArmOperandConverter : public InstructionOperandConverter { + public: + ArmOperandConverter(CodeGenerator* gen, Instruction* instr) + : InstructionOperandConverter(gen, instr) {} + + SBit OutputSBit() const { + switch (instr_->flags_mode()) { + case kFlags_branch: + case kFlags_set: + return SetCC; + case kFlags_none: + return LeaveCC; + } + UNREACHABLE(); + return LeaveCC; + } + + Operand InputImmediate(int index) { + Constant constant = ToConstant(instr_->InputAt(index)); + switch (constant.type()) { + case Constant::kInt32: + return Operand(constant.ToInt32()); + case Constant::kFloat64: + return Operand( + isolate()->factory()->NewNumber(constant.ToFloat64(), TENURED)); + case Constant::kInt64: + case Constant::kExternalReference: + case Constant::kHeapObject: + break; + } + UNREACHABLE(); + return Operand::Zero(); + } + + Operand InputOperand2(int first_index) { + const int index = first_index; + switch (AddressingModeField::decode(instr_->opcode())) { + case kMode_None: + case kMode_Offset_RI: + case kMode_Offset_RR: + break; + case kMode_Operand2_I: + return InputImmediate(index + 0); + case kMode_Operand2_R: + return Operand(InputRegister(index + 0)); + case kMode_Operand2_R_ASR_I: + return Operand(InputRegister(index + 0), ASR, InputInt5(index + 1)); + case kMode_Operand2_R_ASR_R: + return Operand(InputRegister(index + 0), ASR, InputRegister(index + 1)); + case kMode_Operand2_R_LSL_I: + return Operand(InputRegister(index + 0), LSL, InputInt5(index + 1)); + case kMode_Operand2_R_LSL_R: + return Operand(InputRegister(index + 0), LSL, InputRegister(index + 1)); + case kMode_Operand2_R_LSR_I: + return Operand(InputRegister(index + 0), LSR, InputInt5(index + 1)); + case kMode_Operand2_R_LSR_R: + return Operand(InputRegister(index + 0), LSR, InputRegister(index + 1)); + case kMode_Operand2_R_ROR_I: + return Operand(InputRegister(index + 0), ROR, InputInt5(index + 1)); + case kMode_Operand2_R_ROR_R: + return Operand(InputRegister(index + 0), ROR, InputRegister(index + 1)); + } + UNREACHABLE(); + return Operand::Zero(); + } + + MemOperand InputOffset(int* first_index) { + const int index = *first_index; + switch 
(AddressingModeField::decode(instr_->opcode())) { + case kMode_None: + case kMode_Operand2_I: + case kMode_Operand2_R: + case kMode_Operand2_R_ASR_I: + case kMode_Operand2_R_ASR_R: + case kMode_Operand2_R_LSL_I: + case kMode_Operand2_R_LSL_R: + case kMode_Operand2_R_LSR_I: + case kMode_Operand2_R_LSR_R: + case kMode_Operand2_R_ROR_I: + case kMode_Operand2_R_ROR_R: + break; + case kMode_Offset_RI: + *first_index += 2; + return MemOperand(InputRegister(index + 0), InputInt32(index + 1)); + case kMode_Offset_RR: + *first_index += 2; + return MemOperand(InputRegister(index + 0), InputRegister(index + 1)); + } + UNREACHABLE(); + return MemOperand(r0); + } + + MemOperand InputOffset() { + int index = 0; + return InputOffset(&index); + } + + MemOperand ToMemOperand(InstructionOperand* op) const { + DCHECK(op != NULL); + DCHECK(!op->IsRegister()); + DCHECK(!op->IsDoubleRegister()); + DCHECK(op->IsStackSlot() || op->IsDoubleStackSlot()); + // The linkage computes where all spill slots are located. + FrameOffset offset = linkage()->GetFrameOffset(op->index(), frame(), 0); + return MemOperand(offset.from_stack_pointer() ? sp : fp, offset.offset()); + } +}; + + +// Assembles an instruction after register allocation, producing machine code. +void CodeGenerator::AssembleArchInstruction(Instruction* instr) { + ArmOperandConverter i(this, instr); + + switch (ArchOpcodeField::decode(instr->opcode())) { + case kArchJmp: + __ b(code_->GetLabel(i.InputBlock(0))); + DCHECK_EQ(LeaveCC, i.OutputSBit()); + break; + case kArchNop: + // don't emit code for nops. + DCHECK_EQ(LeaveCC, i.OutputSBit()); + break; + case kArchRet: + AssembleReturn(); + DCHECK_EQ(LeaveCC, i.OutputSBit()); + break; + case kArchDeoptimize: { + int deoptimization_id = MiscField::decode(instr->opcode()); + BuildTranslation(instr, deoptimization_id); + + Address deopt_entry = Deoptimizer::GetDeoptimizationEntry( + isolate(), deoptimization_id, Deoptimizer::LAZY); + __ Call(deopt_entry, RelocInfo::RUNTIME_ENTRY); + DCHECK_EQ(LeaveCC, i.OutputSBit()); + break; + } + case kArmAdd: + __ add(i.OutputRegister(), i.InputRegister(0), i.InputOperand2(1), + i.OutputSBit()); + break; + case kArmAnd: + __ and_(i.OutputRegister(), i.InputRegister(0), i.InputOperand2(1), + i.OutputSBit()); + break; + case kArmBic: + __ bic(i.OutputRegister(), i.InputRegister(0), i.InputOperand2(1), + i.OutputSBit()); + break; + case kArmMul: + __ mul(i.OutputRegister(), i.InputRegister(0), i.InputRegister(1), + i.OutputSBit()); + break; + case kArmMla: + __ mla(i.OutputRegister(), i.InputRegister(0), i.InputRegister(1), + i.InputRegister(2), i.OutputSBit()); + break; + case kArmMls: { + CpuFeatureScope scope(masm(), MLS); + __ mls(i.OutputRegister(), i.InputRegister(0), i.InputRegister(1), + i.InputRegister(2)); + DCHECK_EQ(LeaveCC, i.OutputSBit()); + break; + } + case kArmSdiv: { + CpuFeatureScope scope(masm(), SUDIV); + __ sdiv(i.OutputRegister(), i.InputRegister(0), i.InputRegister(1)); + DCHECK_EQ(LeaveCC, i.OutputSBit()); + break; + } + case kArmUdiv: { + CpuFeatureScope scope(masm(), SUDIV); + __ udiv(i.OutputRegister(), i.InputRegister(0), i.InputRegister(1)); + DCHECK_EQ(LeaveCC, i.OutputSBit()); + break; + } + case kArmMov: + __ Move(i.OutputRegister(), i.InputOperand2(0)); + DCHECK_EQ(LeaveCC, i.OutputSBit()); + break; + case kArmMvn: + __ mvn(i.OutputRegister(), i.InputOperand2(0), i.OutputSBit()); + break; + case kArmOrr: + __ orr(i.OutputRegister(), i.InputRegister(0), i.InputOperand2(1), + i.OutputSBit()); + break; + case kArmEor: + __ eor(i.OutputRegister(), 
i.InputRegister(0), i.InputOperand2(1), + i.OutputSBit()); + break; + case kArmSub: + __ sub(i.OutputRegister(), i.InputRegister(0), i.InputOperand2(1), + i.OutputSBit()); + break; + case kArmRsb: + __ rsb(i.OutputRegister(), i.InputRegister(0), i.InputOperand2(1), + i.OutputSBit()); + break; + case kArmBfc: { + CpuFeatureScope scope(masm(), ARMv7); + __ bfc(i.OutputRegister(), i.InputInt8(1), i.InputInt8(2)); + DCHECK_EQ(LeaveCC, i.OutputSBit()); + break; + } + case kArmUbfx: { + CpuFeatureScope scope(masm(), ARMv7); + __ ubfx(i.OutputRegister(), i.InputRegister(0), i.InputInt8(1), + i.InputInt8(2)); + DCHECK_EQ(LeaveCC, i.OutputSBit()); + break; + } + case kArmCallCodeObject: { + if (instr->InputAt(0)->IsImmediate()) { + Handle<Code> code = Handle<Code>::cast(i.InputHeapObject(0)); + __ Call(code, RelocInfo::CODE_TARGET); + RecordSafepoint(instr->pointer_map(), Safepoint::kSimple, 0, + Safepoint::kNoLazyDeopt); + } else { + Register reg = i.InputRegister(0); + int entry = Code::kHeaderSize - kHeapObjectTag; + __ ldr(reg, MemOperand(reg, entry)); + __ Call(reg); + RecordSafepoint(instr->pointer_map(), Safepoint::kSimple, 0, + Safepoint::kNoLazyDeopt); + } + bool lazy_deopt = (MiscField::decode(instr->opcode()) == 1); + if (lazy_deopt) { + RecordLazyDeoptimizationEntry(instr); + } + DCHECK_EQ(LeaveCC, i.OutputSBit()); + break; + } + case kArmCallJSFunction: { + Register func = i.InputRegister(0); + + // TODO(jarin) The load of the context should be separated from the call. + __ ldr(cp, FieldMemOperand(func, JSFunction::kContextOffset)); + __ ldr(ip, FieldMemOperand(func, JSFunction::kCodeEntryOffset)); + __ Call(ip); + + RecordSafepoint(instr->pointer_map(), Safepoint::kSimple, 0, + Safepoint::kNoLazyDeopt); + RecordLazyDeoptimizationEntry(instr); + DCHECK_EQ(LeaveCC, i.OutputSBit()); + break; + } + case kArmCallAddress: { + DirectCEntryStub stub(isolate()); + stub.GenerateCall(masm(), i.InputRegister(0)); + DCHECK_EQ(LeaveCC, i.OutputSBit()); + break; + } + case kArmPush: + __ Push(i.InputRegister(0)); + DCHECK_EQ(LeaveCC, i.OutputSBit()); + break; + case kArmDrop: { + int words = MiscField::decode(instr->opcode()); + __ Drop(words); + DCHECK_EQ(LeaveCC, i.OutputSBit()); + break; + } + case kArmCmp: + __ cmp(i.InputRegister(0), i.InputOperand2(1)); + DCHECK_EQ(SetCC, i.OutputSBit()); + break; + case kArmCmn: + __ cmn(i.InputRegister(0), i.InputOperand2(1)); + DCHECK_EQ(SetCC, i.OutputSBit()); + break; + case kArmTst: + __ tst(i.InputRegister(0), i.InputOperand2(1)); + DCHECK_EQ(SetCC, i.OutputSBit()); + break; + case kArmTeq: + __ teq(i.InputRegister(0), i.InputOperand2(1)); + DCHECK_EQ(SetCC, i.OutputSBit()); + break; + case kArmVcmpF64: + __ VFPCompareAndSetFlags(i.InputDoubleRegister(0), + i.InputDoubleRegister(1)); + DCHECK_EQ(SetCC, i.OutputSBit()); + break; + case kArmVaddF64: + __ vadd(i.OutputDoubleRegister(), i.InputDoubleRegister(0), + i.InputDoubleRegister(1)); + DCHECK_EQ(LeaveCC, i.OutputSBit()); + break; + case kArmVsubF64: + __ vsub(i.OutputDoubleRegister(), i.InputDoubleRegister(0), + i.InputDoubleRegister(1)); + DCHECK_EQ(LeaveCC, i.OutputSBit()); + break; + case kArmVmulF64: + __ vmul(i.OutputDoubleRegister(), i.InputDoubleRegister(0), + i.InputDoubleRegister(1)); + DCHECK_EQ(LeaveCC, i.OutputSBit()); + break; + case kArmVmlaF64: + __ vmla(i.OutputDoubleRegister(), i.InputDoubleRegister(1), + i.InputDoubleRegister(2)); + DCHECK_EQ(LeaveCC, i.OutputSBit()); + break; + case kArmVmlsF64: + __ vmls(i.OutputDoubleRegister(), i.InputDoubleRegister(1), + 
i.InputDoubleRegister(2)); + DCHECK_EQ(LeaveCC, i.OutputSBit()); + break; + case kArmVdivF64: + __ vdiv(i.OutputDoubleRegister(), i.InputDoubleRegister(0), + i.InputDoubleRegister(1)); + DCHECK_EQ(LeaveCC, i.OutputSBit()); + break; + case kArmVmodF64: { + // TODO(bmeurer): We should really get rid of this special instruction, + // and generate a CallAddress instruction instead. + FrameScope scope(masm(), StackFrame::MANUAL); + __ PrepareCallCFunction(0, 2, kScratchReg); + __ MovToFloatParameters(i.InputDoubleRegister(0), + i.InputDoubleRegister(1)); + __ CallCFunction(ExternalReference::mod_two_doubles_operation(isolate()), + 0, 2); + // Move the result in the double result register. + __ MovFromFloatResult(i.OutputDoubleRegister()); + DCHECK_EQ(LeaveCC, i.OutputSBit()); + break; + } + case kArmVnegF64: + __ vneg(i.OutputDoubleRegister(), i.InputDoubleRegister(0)); + break; + case kArmVcvtF64S32: { + SwVfpRegister scratch = kScratchDoubleReg.low(); + __ vmov(scratch, i.InputRegister(0)); + __ vcvt_f64_s32(i.OutputDoubleRegister(), scratch); + DCHECK_EQ(LeaveCC, i.OutputSBit()); + break; + } + case kArmVcvtF64U32: { + SwVfpRegister scratch = kScratchDoubleReg.low(); + __ vmov(scratch, i.InputRegister(0)); + __ vcvt_f64_u32(i.OutputDoubleRegister(), scratch); + DCHECK_EQ(LeaveCC, i.OutputSBit()); + break; + } + case kArmVcvtS32F64: { + SwVfpRegister scratch = kScratchDoubleReg.low(); + __ vcvt_s32_f64(scratch, i.InputDoubleRegister(0)); + __ vmov(i.OutputRegister(), scratch); + DCHECK_EQ(LeaveCC, i.OutputSBit()); + break; + } + case kArmVcvtU32F64: { + SwVfpRegister scratch = kScratchDoubleReg.low(); + __ vcvt_u32_f64(scratch, i.InputDoubleRegister(0)); + __ vmov(i.OutputRegister(), scratch); + DCHECK_EQ(LeaveCC, i.OutputSBit()); + break; + } + case kArmLoadWord8: + __ ldrb(i.OutputRegister(), i.InputOffset()); + DCHECK_EQ(LeaveCC, i.OutputSBit()); + break; + case kArmStoreWord8: { + int index = 0; + MemOperand operand = i.InputOffset(&index); + __ strb(i.InputRegister(index), operand); + DCHECK_EQ(LeaveCC, i.OutputSBit()); + break; + } + case kArmLoadWord16: + __ ldrh(i.OutputRegister(), i.InputOffset()); + break; + case kArmStoreWord16: { + int index = 0; + MemOperand operand = i.InputOffset(&index); + __ strh(i.InputRegister(index), operand); + DCHECK_EQ(LeaveCC, i.OutputSBit()); + break; + } + case kArmLoadWord32: + __ ldr(i.OutputRegister(), i.InputOffset()); + break; + case kArmStoreWord32: { + int index = 0; + MemOperand operand = i.InputOffset(&index); + __ str(i.InputRegister(index), operand); + DCHECK_EQ(LeaveCC, i.OutputSBit()); + break; + } + case kArmFloat64Load: + __ vldr(i.OutputDoubleRegister(), i.InputOffset()); + DCHECK_EQ(LeaveCC, i.OutputSBit()); + break; + case kArmFloat64Store: { + int index = 0; + MemOperand operand = i.InputOffset(&index); + __ vstr(i.InputDoubleRegister(index), operand); + DCHECK_EQ(LeaveCC, i.OutputSBit()); + break; + } + case kArmStoreWriteBarrier: { + Register object = i.InputRegister(0); + Register index = i.InputRegister(1); + Register value = i.InputRegister(2); + __ add(index, object, index); + __ str(value, MemOperand(index)); + SaveFPRegsMode mode = + frame()->DidAllocateDoubleRegisters() ? kSaveFPRegs : kDontSaveFPRegs; + LinkRegisterStatus lr_status = kLRHasNotBeenSaved; + __ RecordWrite(object, index, value, lr_status, mode); + DCHECK_EQ(LeaveCC, i.OutputSBit()); + break; + } + } +} + + +// Assembles branches after an instruction. 
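Before the branch assembler below: the kMode_Operand2_* encodings decoded throughout AssembleArchInstruction above correspond to ARM's flexible second operand (addressing mode 1). A self-contained C++ emulation of those shift forms, assuming the amount ranges the instruction selector admits (LSL 0-31, LSR/ASR 1-32, ROR 1-31) and arithmetic >> on int32_t, which holds on mainstream compilers:

    #include <cstdint>

    enum Shift { LSL, LSR, ASR, ROR };

    // Computes "rm, <shift> #amount" as the ARM core would.
    uint32_t Operand2(uint32_t rm, Shift shift, unsigned amount) {
      switch (shift) {
        case LSL: return rm << amount;                      // logical shift left
        case LSR: return amount == 32 ? 0u : rm >> amount;  // logical shift right
        case ASR:  // arithmetic shift right; #32 replicates the sign bit
          return static_cast<uint32_t>(
              static_cast<int32_t>(rm) >> (amount == 32 ? 31 : amount));
        case ROR:  // rotate right, amount in 1..31
          return (rm >> amount) | (rm << (32 - amount));
      }
      return rm;
    }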
+void CodeGenerator::AssembleArchBranch(Instruction* instr, + FlagsCondition condition) { + ArmOperandConverter i(this, instr); + Label done; + + // Emit a branch. The true and false targets are always the last two inputs + // to the instruction. + BasicBlock* tblock = i.InputBlock(instr->InputCount() - 2); + BasicBlock* fblock = i.InputBlock(instr->InputCount() - 1); + bool fallthru = IsNextInAssemblyOrder(fblock); + Label* tlabel = code()->GetLabel(tblock); + Label* flabel = fallthru ? &done : code()->GetLabel(fblock); + switch (condition) { + case kUnorderedEqual: + __ b(vs, flabel); + // Fall through. + case kEqual: + __ b(eq, tlabel); + break; + case kUnorderedNotEqual: + __ b(vs, tlabel); + // Fall through. + case kNotEqual: + __ b(ne, tlabel); + break; + case kSignedLessThan: + __ b(lt, tlabel); + break; + case kSignedGreaterThanOrEqual: + __ b(ge, tlabel); + break; + case kSignedLessThanOrEqual: + __ b(le, tlabel); + break; + case kSignedGreaterThan: + __ b(gt, tlabel); + break; + case kUnorderedLessThan: + __ b(vs, flabel); + // Fall through. + case kUnsignedLessThan: + __ b(lo, tlabel); + break; + case kUnorderedGreaterThanOrEqual: + __ b(vs, tlabel); + // Fall through. + case kUnsignedGreaterThanOrEqual: + __ b(hs, tlabel); + break; + case kUnorderedLessThanOrEqual: + __ b(vs, flabel); + // Fall through. + case kUnsignedLessThanOrEqual: + __ b(ls, tlabel); + break; + case kUnorderedGreaterThan: + __ b(vs, tlabel); + // Fall through. + case kUnsignedGreaterThan: + __ b(hi, tlabel); + break; + case kOverflow: + __ b(vs, tlabel); + break; + case kNotOverflow: + __ b(vc, tlabel); + break; + } + if (!fallthru) __ b(flabel); // no fallthru to flabel. + __ bind(&done); +} + + +// Assembles boolean materializations after an instruction. +void CodeGenerator::AssembleArchBoolean(Instruction* instr, + FlagsCondition condition) { + ArmOperandConverter i(this, instr); + Label done; + + // Materialize a full 32-bit 1 or 0 value. The result register is always the + // last output of the instruction. + Label check; + DCHECK_NE(0, instr->OutputCount()); + Register reg = i.OutputRegister(instr->OutputCount() - 1); + Condition cc = kNoCondition; + switch (condition) { + case kUnorderedEqual: + __ b(vc, &check); + __ mov(reg, Operand(0)); + __ b(&done); + // Fall through. + case kEqual: + cc = eq; + break; + case kUnorderedNotEqual: + __ b(vc, &check); + __ mov(reg, Operand(1)); + __ b(&done); + // Fall through. + case kNotEqual: + cc = ne; + break; + case kSignedLessThan: + cc = lt; + break; + case kSignedGreaterThanOrEqual: + cc = ge; + break; + case kSignedLessThanOrEqual: + cc = le; + break; + case kSignedGreaterThan: + cc = gt; + break; + case kUnorderedLessThan: + __ b(vc, &check); + __ mov(reg, Operand(0)); + __ b(&done); + // Fall through. + case kUnsignedLessThan: + cc = lo; + break; + case kUnorderedGreaterThanOrEqual: + __ b(vc, &check); + __ mov(reg, Operand(1)); + __ b(&done); + // Fall through. + case kUnsignedGreaterThanOrEqual: + cc = hs; + break; + case kUnorderedLessThanOrEqual: + __ b(vc, &check); + __ mov(reg, Operand(0)); + __ b(&done); + // Fall through. + case kUnsignedLessThanOrEqual: + cc = ls; + break; + case kUnorderedGreaterThan: + __ b(vc, &check); + __ mov(reg, Operand(1)); + __ b(&done); + // Fall through. 
+ case kUnsignedGreaterThan: + cc = hi; + break; + case kOverflow: + cc = vs; + break; + case kNotOverflow: + cc = vc; + break; + } + __ bind(&check); + __ mov(reg, Operand(0)); + __ mov(reg, Operand(1), LeaveCC, cc); + __ bind(&done); +} + + +void CodeGenerator::AssemblePrologue() { + CallDescriptor* descriptor = linkage()->GetIncomingDescriptor(); + if (descriptor->kind() == CallDescriptor::kCallAddress) { + __ Push(lr, fp); + __ mov(fp, sp); + const RegList saves = descriptor->CalleeSavedRegisters(); + if (saves != 0) { // Save callee-saved registers. + int register_save_area_size = 0; + for (int i = Register::kNumRegisters - 1; i >= 0; i--) { + if (!((1 << i) & saves)) continue; + register_save_area_size += kPointerSize; + } + frame()->SetRegisterSaveAreaSize(register_save_area_size); + __ stm(db_w, sp, saves); + } + } else if (descriptor->IsJSFunctionCall()) { + CompilationInfo* info = linkage()->info(); + __ Prologue(info->IsCodePreAgingActive()); + frame()->SetRegisterSaveAreaSize( + StandardFrameConstants::kFixedFrameSizeFromFp); + + // Sloppy mode functions and builtins need to replace the receiver with the + // global proxy when called as functions (without an explicit receiver + // object). + // TODO(mstarzinger/verwaest): Should this be moved back into the CallIC? + if (info->strict_mode() == SLOPPY && !info->is_native()) { + Label ok; + // +2 for return address and saved frame pointer. + int receiver_slot = info->scope()->num_parameters() + 2; + __ ldr(r2, MemOperand(fp, receiver_slot * kPointerSize)); + __ CompareRoot(r2, Heap::kUndefinedValueRootIndex); + __ b(ne, &ok); + __ ldr(r2, GlobalObjectOperand()); + __ ldr(r2, FieldMemOperand(r2, GlobalObject::kGlobalProxyOffset)); + __ str(r2, MemOperand(fp, receiver_slot * kPointerSize)); + __ bind(&ok); + } + + } else { + __ StubPrologue(); + frame()->SetRegisterSaveAreaSize( + StandardFrameConstants::kFixedFrameSizeFromFp); + } + int stack_slots = frame()->GetSpillSlotCount(); + if (stack_slots > 0) { + __ sub(sp, sp, Operand(stack_slots * kPointerSize)); + } +} + + +void CodeGenerator::AssembleReturn() { + CallDescriptor* descriptor = linkage()->GetIncomingDescriptor(); + if (descriptor->kind() == CallDescriptor::kCallAddress) { + if (frame()->GetRegisterSaveAreaSize() > 0) { + // Remove this frame's spill slots first. + int stack_slots = frame()->GetSpillSlotCount(); + if (stack_slots > 0) { + __ add(sp, sp, Operand(stack_slots * kPointerSize)); + } + // Restore registers. + const RegList saves = descriptor->CalleeSavedRegisters(); + if (saves != 0) { + __ ldm(ia_w, sp, saves); + } + } + __ mov(sp, fp); + __ ldm(ia_w, sp, fp.bit() | lr.bit()); + __ Ret(); + } else { + __ mov(sp, fp); + __ ldm(ia_w, sp, fp.bit() | lr.bit()); + int pop_count = + descriptor->IsJSFunctionCall() ? descriptor->ParameterCount() : 0; + __ Drop(pop_count); + __ Ret(); + } +} + + +void CodeGenerator::AssembleMove(InstructionOperand* source, + InstructionOperand* destination) { + ArmOperandConverter g(this, NULL); + // Dispatch on the source and destination operand kinds. Not all + // combinations are possible. 
+ if (source->IsRegister()) { + DCHECK(destination->IsRegister() || destination->IsStackSlot()); + Register src = g.ToRegister(source); + if (destination->IsRegister()) { + __ mov(g.ToRegister(destination), src); + } else { + __ str(src, g.ToMemOperand(destination)); + } + } else if (source->IsStackSlot()) { + DCHECK(destination->IsRegister() || destination->IsStackSlot()); + MemOperand src = g.ToMemOperand(source); + if (destination->IsRegister()) { + __ ldr(g.ToRegister(destination), src); + } else { + Register temp = kScratchReg; + __ ldr(temp, src); + __ str(temp, g.ToMemOperand(destination)); + } + } else if (source->IsConstant()) { + if (destination->IsRegister() || destination->IsStackSlot()) { + Register dst = + destination->IsRegister() ? g.ToRegister(destination) : kScratchReg; + Constant src = g.ToConstant(source); + switch (src.type()) { + case Constant::kInt32: + __ mov(dst, Operand(src.ToInt32())); + break; + case Constant::kInt64: + UNREACHABLE(); + break; + case Constant::kFloat64: + __ Move(dst, + isolate()->factory()->NewNumber(src.ToFloat64(), TENURED)); + break; + case Constant::kExternalReference: + __ mov(dst, Operand(src.ToExternalReference())); + break; + case Constant::kHeapObject: + __ Move(dst, src.ToHeapObject()); + break; + } + if (destination->IsStackSlot()) __ str(dst, g.ToMemOperand(destination)); + } else if (destination->IsDoubleRegister()) { + DwVfpRegister result = g.ToDoubleRegister(destination); + __ vmov(result, g.ToDouble(source)); + } else { + DCHECK(destination->IsDoubleStackSlot()); + DwVfpRegister temp = kScratchDoubleReg; + __ vmov(temp, g.ToDouble(source)); + __ vstr(temp, g.ToMemOperand(destination)); + } + } else if (source->IsDoubleRegister()) { + DwVfpRegister src = g.ToDoubleRegister(source); + if (destination->IsDoubleRegister()) { + DwVfpRegister dst = g.ToDoubleRegister(destination); + __ Move(dst, src); + } else { + DCHECK(destination->IsDoubleStackSlot()); + __ vstr(src, g.ToMemOperand(destination)); + } + } else if (source->IsDoubleStackSlot()) { + DCHECK(destination->IsDoubleRegister() || destination->IsDoubleStackSlot()); + MemOperand src = g.ToMemOperand(source); + if (destination->IsDoubleRegister()) { + __ vldr(g.ToDoubleRegister(destination), src); + } else { + DwVfpRegister temp = kScratchDoubleReg; + __ vldr(temp, src); + __ vstr(temp, g.ToMemOperand(destination)); + } + } else { + UNREACHABLE(); + } +} + + +void CodeGenerator::AssembleSwap(InstructionOperand* source, + InstructionOperand* destination) { + ArmOperandConverter g(this, NULL); + // Dispatch on the source and destination operand kinds. Not all + // combinations are possible. + if (source->IsRegister()) { + // Register-register. 
+ Register temp = kScratchReg; + Register src = g.ToRegister(source); + if (destination->IsRegister()) { + Register dst = g.ToRegister(destination); + __ Move(temp, src); + __ Move(src, dst); + __ Move(dst, temp); + } else { + DCHECK(destination->IsStackSlot()); + MemOperand dst = g.ToMemOperand(destination); + __ mov(temp, src); + __ ldr(src, dst); + __ str(temp, dst); + } + } else if (source->IsStackSlot()) { + DCHECK(destination->IsStackSlot()); + Register temp_0 = kScratchReg; + SwVfpRegister temp_1 = kScratchDoubleReg.low(); + MemOperand src = g.ToMemOperand(source); + MemOperand dst = g.ToMemOperand(destination); + __ ldr(temp_0, src); + __ vldr(temp_1, dst); + __ str(temp_0, dst); + __ vstr(temp_1, src); + } else if (source->IsDoubleRegister()) { + DwVfpRegister temp = kScratchDoubleReg; + DwVfpRegister src = g.ToDoubleRegister(source); + if (destination->IsDoubleRegister()) { + DwVfpRegister dst = g.ToDoubleRegister(destination); + __ Move(temp, src); + __ Move(src, dst); + __ Move(dst, temp); + } else { + DCHECK(destination->IsDoubleStackSlot()); + MemOperand dst = g.ToMemOperand(destination); + __ Move(temp, src); + __ vldr(src, dst); + __ vstr(temp, dst); + } + } else if (source->IsDoubleStackSlot()) { + DCHECK(destination->IsDoubleStackSlot()); + Register temp_0 = kScratchReg; + DwVfpRegister temp_1 = kScratchDoubleReg; + MemOperand src0 = g.ToMemOperand(source); + MemOperand src1(src0.rn(), src0.offset() + kPointerSize); + MemOperand dst0 = g.ToMemOperand(destination); + MemOperand dst1(dst0.rn(), dst0.offset() + kPointerSize); + __ vldr(temp_1, dst0); // Save destination in temp_1. + __ ldr(temp_0, src0); // Then use temp_0 to copy source to destination. + __ str(temp_0, dst0); + __ ldr(temp_0, src1); + __ str(temp_0, dst1); + __ vstr(temp_1, src0); + } else { + // No other combinations are possible. + UNREACHABLE(); + } +} + + +void CodeGenerator::AddNopForSmiCodeInlining() { + // On 32-bit ARM we do not insert nops for inlined Smi code. + UNREACHABLE(); +} + +#ifdef DEBUG + +// Checks whether the code between start_pc and end_pc is a no-op. +bool CodeGenerator::IsNopForSmiCodeInlining(Handle<Code> code, int start_pc, + int end_pc) { + return false; +} + +#endif // DEBUG + +#undef __ +} +} +} // namespace v8::internal::compiler diff --git a/deps/v8/src/compiler/arm/instruction-codes-arm.h b/deps/v8/src/compiler/arm/instruction-codes-arm.h new file mode 100644 index 00000000000..1d5b5c7f334 --- /dev/null +++ b/deps/v8/src/compiler/arm/instruction-codes-arm.h @@ -0,0 +1,86 @@ +// Copyright 2014 the V8 project authors. All rights reserved. +// Use of this source code is governed by a BSD-style license that can be +// found in the LICENSE file. + +#ifndef V8_COMPILER_ARM_INSTRUCTION_CODES_ARM_H_ +#define V8_COMPILER_ARM_INSTRUCTION_CODES_ARM_H_ + +namespace v8 { +namespace internal { +namespace compiler { + +// ARM-specific opcodes that specify which assembly sequence to emit. +// Most opcodes specify a single instruction.
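The opcode and addressing-mode lists that follow are X-macros: each client re-defines V to expand one shared list into an enum, a name table, or switch arms, so the different expansions stay in sync by construction. A minimal standalone demonstration of the idiom with a made-up three-entry list:

    #define DEMO_OPCODE_LIST(V) \
      V(DemoAdd)                \
      V(DemoSub)                \
      V(DemoMul)

    // Expansion 1: an enum of kDemoAdd, kDemoSub, kDemoMul.
    enum DemoOpcode {
    #define DECLARE_OPCODE(Name) k##Name,
      DEMO_OPCODE_LIST(DECLARE_OPCODE)
    #undef DECLARE_OPCODE
    };

    // Expansion 2: a parallel table of printable names.
    static const char* const kDemoOpcodeNames[] = {
    #define OPCODE_NAME(Name) #Name,
      DEMO_OPCODE_LIST(OPCODE_NAME)
    #undef OPCODE_NAME
    };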
+#define TARGET_ARCH_OPCODE_LIST(V) \ + V(ArmAdd) \ + V(ArmAnd) \ + V(ArmBic) \ + V(ArmCmp) \ + V(ArmCmn) \ + V(ArmTst) \ + V(ArmTeq) \ + V(ArmOrr) \ + V(ArmEor) \ + V(ArmSub) \ + V(ArmRsb) \ + V(ArmMul) \ + V(ArmMla) \ + V(ArmMls) \ + V(ArmSdiv) \ + V(ArmUdiv) \ + V(ArmMov) \ + V(ArmMvn) \ + V(ArmBfc) \ + V(ArmUbfx) \ + V(ArmCallCodeObject) \ + V(ArmCallJSFunction) \ + V(ArmCallAddress) \ + V(ArmPush) \ + V(ArmDrop) \ + V(ArmVcmpF64) \ + V(ArmVaddF64) \ + V(ArmVsubF64) \ + V(ArmVmulF64) \ + V(ArmVmlaF64) \ + V(ArmVmlsF64) \ + V(ArmVdivF64) \ + V(ArmVmodF64) \ + V(ArmVnegF64) \ + V(ArmVcvtF64S32) \ + V(ArmVcvtF64U32) \ + V(ArmVcvtS32F64) \ + V(ArmVcvtU32F64) \ + V(ArmFloat64Load) \ + V(ArmFloat64Store) \ + V(ArmLoadWord8) \ + V(ArmStoreWord8) \ + V(ArmLoadWord16) \ + V(ArmStoreWord16) \ + V(ArmLoadWord32) \ + V(ArmStoreWord32) \ + V(ArmStoreWriteBarrier) + + +// Addressing modes represent the "shape" of inputs to an instruction. +// Many instructions support multiple addressing modes. Addressing modes +// are encoded into the InstructionCode of the instruction and tell the +// code generator after register allocation which assembler method to call. +#define TARGET_ADDRESSING_MODE_LIST(V) \ + V(Offset_RI) /* [%r0 + K] */ \ + V(Offset_RR) /* [%r0 + %r1] */ \ + V(Operand2_I) /* K */ \ + V(Operand2_R) /* %r0 */ \ + V(Operand2_R_ASR_I) /* %r0 ASR K */ \ + V(Operand2_R_LSL_I) /* %r0 LSL K */ \ + V(Operand2_R_LSR_I) /* %r0 LSR K */ \ + V(Operand2_R_ROR_I) /* %r0 ROR K */ \ + V(Operand2_R_ASR_R) /* %r0 ASR %r1 */ \ + V(Operand2_R_LSL_R) /* %r0 LSL %r1 */ \ + V(Operand2_R_LSR_R) /* %r0 LSR %r1 */ \ + V(Operand2_R_ROR_R) /* %r0 ROR %r1 */ + +} // namespace compiler +} // namespace internal +} // namespace v8 + +#endif // V8_COMPILER_ARM_INSTRUCTION_CODES_ARM_H_ diff --git a/deps/v8/src/compiler/arm/instruction-selector-arm.cc b/deps/v8/src/compiler/arm/instruction-selector-arm.cc new file mode 100644 index 00000000000..029d6e3b96e --- /dev/null +++ b/deps/v8/src/compiler/arm/instruction-selector-arm.cc @@ -0,0 +1,943 @@ +// Copyright 2014 the V8 project authors. All rights reserved. +// Use of this source code is governed by a BSD-style license that can be +// found in the LICENSE file. + +#include "src/compiler/instruction-selector-impl.h" +#include "src/compiler/node-matchers.h" +#include "src/compiler-intrinsics.h" + +namespace v8 { +namespace internal { +namespace compiler { + +// Adds Arm-specific methods for generating InstructionOperands. 
+class ArmOperandGenerator V8_FINAL : public OperandGenerator {
+ public:
+  explicit ArmOperandGenerator(InstructionSelector* selector)
+      : OperandGenerator(selector) {}
+
+  InstructionOperand* UseOperand(Node* node, InstructionCode opcode) {
+    if (CanBeImmediate(node, opcode)) {
+      return UseImmediate(node);
+    }
+    return UseRegister(node);
+  }
+
+  bool CanBeImmediate(Node* node, InstructionCode opcode) {
+    int32_t value;
+    switch (node->opcode()) {
+      case IrOpcode::kInt32Constant:
+      case IrOpcode::kNumberConstant:
+        value = ValueOf<int32_t>(node->op());
+        break;
+      default:
+        return false;
+    }
+    switch (ArchOpcodeField::decode(opcode)) {
+      case kArmAnd:
+      case kArmMov:
+      case kArmMvn:
+      case kArmBic:
+        return ImmediateFitsAddrMode1Instruction(value) ||
+               ImmediateFitsAddrMode1Instruction(~value);
+
+      case kArmAdd:
+      case kArmSub:
+      case kArmCmp:
+      case kArmCmn:
+        return ImmediateFitsAddrMode1Instruction(value) ||
+               ImmediateFitsAddrMode1Instruction(-value);
+
+      case kArmTst:
+      case kArmTeq:
+      case kArmOrr:
+      case kArmEor:
+      case kArmRsb:
+        return ImmediateFitsAddrMode1Instruction(value);
+
+      case kArmFloat64Load:
+      case kArmFloat64Store:
+        return value >= -1020 && value <= 1020 && (value % 4) == 0;
+
+      case kArmLoadWord8:
+      case kArmStoreWord8:
+      case kArmLoadWord32:
+      case kArmStoreWord32:
+      case kArmStoreWriteBarrier:
+        return value >= -4095 && value <= 4095;
+
+      case kArmLoadWord16:
+      case kArmStoreWord16:
+        return value >= -255 && value <= 255;
+
+      case kArchJmp:
+      case kArchNop:
+      case kArchRet:
+      case kArchDeoptimize:
+      case kArmMul:
+      case kArmMla:
+      case kArmMls:
+      case kArmSdiv:
+      case kArmUdiv:
+      case kArmBfc:
+      case kArmUbfx:
+      case kArmCallCodeObject:
+      case kArmCallJSFunction:
+      case kArmCallAddress:
+      case kArmPush:
+      case kArmDrop:
+      case kArmVcmpF64:
+      case kArmVaddF64:
+      case kArmVsubF64:
+      case kArmVmulF64:
+      case kArmVmlaF64:
+      case kArmVmlsF64:
+      case kArmVdivF64:
+      case kArmVmodF64:
+      case kArmVnegF64:
+      case kArmVcvtF64S32:
+      case kArmVcvtF64U32:
+      case kArmVcvtS32F64:
+      case kArmVcvtU32F64:
+        return false;
+    }
+    UNREACHABLE();
+    return false;
+  }
+
+ private:
+  bool ImmediateFitsAddrMode1Instruction(int32_t imm) const {
+    return Assembler::ImmediateFitsAddrMode1Instruction(imm);
+  }
+};
+
+
+static void VisitRRRFloat64(InstructionSelector* selector, ArchOpcode opcode,
+                            Node* node) {
+  ArmOperandGenerator g(selector);
+  selector->Emit(opcode, g.DefineAsDoubleRegister(node),
+                 g.UseDoubleRegister(node->InputAt(0)),
+                 g.UseDoubleRegister(node->InputAt(1)));
+}
+
+
+static bool TryMatchROR(InstructionSelector* selector,
+                        InstructionCode* opcode_return, Node* node,
+                        InstructionOperand** value_return,
+                        InstructionOperand** shift_return) {
+  ArmOperandGenerator g(selector);
+  if (node->opcode() != IrOpcode::kWord32Or) return false;
+  Int32BinopMatcher m(node);
+  Node* shl = m.left().node();
+  Node* shr = m.right().node();
+  if (m.left().IsWord32Shr() && m.right().IsWord32Shl()) {
+    std::swap(shl, shr);
+  } else if (!m.left().IsWord32Shl() || !m.right().IsWord32Shr()) {
+    return false;
+  }
+  Int32BinopMatcher mshr(shr);
+  Int32BinopMatcher mshl(shl);
+  Node* value = mshr.left().node();
+  if (value != mshl.left().node()) return false;
+  Node* shift = mshr.right().node();
+  Int32Matcher mshift(shift);
+  if (mshift.IsInRange(1, 31) && mshl.right().Is(32 - mshift.Value())) {
+    *opcode_return |= AddressingModeField::encode(kMode_Operand2_R_ROR_I);
+    *value_return = g.UseRegister(value);
+    *shift_return = g.UseImmediate(shift);
+    return true;
+  }
+  if (mshl.right().IsInt32Sub()) {
+    Int32BinopMatcher mshlright(mshl.right().node());
+    if (!mshlright.left().Is(32)) return false;
+    if (mshlright.right().node() != shift) return false;
+    *opcode_return |= AddressingModeField::encode(kMode_Operand2_R_ROR_R);
+    *value_return = g.UseRegister(value);
+    *shift_return = g.UseRegister(shift);
+    return true;
+  }
+  return false;
+}
+
+
+static inline bool TryMatchASR(InstructionSelector* selector,
+                               InstructionCode* opcode_return, Node* node,
+                               InstructionOperand** value_return,
+                               InstructionOperand** shift_return) {
+  ArmOperandGenerator g(selector);
+  if (node->opcode() != IrOpcode::kWord32Sar) return false;
+  Int32BinopMatcher m(node);
+  *value_return = g.UseRegister(m.left().node());
+  if (m.right().IsInRange(1, 32)) {
+    *opcode_return |= AddressingModeField::encode(kMode_Operand2_R_ASR_I);
+    *shift_return = g.UseImmediate(m.right().node());
+  } else {
+    *opcode_return |= AddressingModeField::encode(kMode_Operand2_R_ASR_R);
+    *shift_return = g.UseRegister(m.right().node());
+  }
+  return true;
+}
+
+
+static inline bool TryMatchLSL(InstructionSelector* selector,
+                               InstructionCode* opcode_return, Node* node,
+                               InstructionOperand** value_return,
+                               InstructionOperand** shift_return) {
+  ArmOperandGenerator g(selector);
+  if (node->opcode() != IrOpcode::kWord32Shl) return false;
+  Int32BinopMatcher m(node);
+  *value_return = g.UseRegister(m.left().node());
+  if (m.right().IsInRange(0, 31)) {
+    *opcode_return |= AddressingModeField::encode(kMode_Operand2_R_LSL_I);
+    *shift_return = g.UseImmediate(m.right().node());
+  } else {
+    *opcode_return |= AddressingModeField::encode(kMode_Operand2_R_LSL_R);
+    *shift_return = g.UseRegister(m.right().node());
+  }
+  return true;
+}
+
+
+static inline bool TryMatchLSR(InstructionSelector* selector,
+                               InstructionCode* opcode_return, Node* node,
+                               InstructionOperand** value_return,
+                               InstructionOperand** shift_return) {
+  ArmOperandGenerator g(selector);
+  if (node->opcode() != IrOpcode::kWord32Shr) return false;
+  Int32BinopMatcher m(node);
+  *value_return = g.UseRegister(m.left().node());
+  if (m.right().IsInRange(1, 32)) {
+    *opcode_return |= AddressingModeField::encode(kMode_Operand2_R_LSR_I);
+    *shift_return = g.UseImmediate(m.right().node());
+  } else {
+    *opcode_return |= AddressingModeField::encode(kMode_Operand2_R_LSR_R);
+    *shift_return = g.UseRegister(m.right().node());
+  }
+  return true;
+}
+
+
+static inline bool TryMatchShift(InstructionSelector* selector,
+                                 InstructionCode* opcode_return, Node* node,
+                                 InstructionOperand** value_return,
+                                 InstructionOperand** shift_return) {
+  return (
+      TryMatchASR(selector, opcode_return, node, value_return, shift_return) ||
+      TryMatchLSL(selector, opcode_return, node, value_return, shift_return) ||
+      TryMatchLSR(selector, opcode_return, node, value_return, shift_return) ||
+      TryMatchROR(selector, opcode_return, node, value_return, shift_return));
+}
+
+
+static inline bool TryMatchImmediateOrShift(InstructionSelector* selector,
+                                            InstructionCode* opcode_return,
+                                            Node* node,
+                                            size_t* input_count_return,
+                                            InstructionOperand** inputs) {
+  ArmOperandGenerator g(selector);
+  if (g.CanBeImmediate(node, *opcode_return)) {
+    *opcode_return |= AddressingModeField::encode(kMode_Operand2_I);
+    inputs[0] = g.UseImmediate(node);
+    *input_count_return = 1;
+    return true;
+  }
+  if (TryMatchShift(selector, opcode_return, node, &inputs[0], &inputs[1])) {
+    *input_count_return = 2;
+    return true;
+  }
+  return false;
+}
+
+
+static void VisitBinop(InstructionSelector* selector, Node* node,
+                       InstructionCode opcode, InstructionCode reverse_opcode,
+                       FlagsContinuation* cont) {
+  ArmOperandGenerator g(selector);
+  Int32BinopMatcher m(node);
+  InstructionOperand* inputs[5];
+  size_t input_count = 0;
+  InstructionOperand* outputs[2];
+  size_t output_count = 0;
+
+  if (TryMatchImmediateOrShift(selector, &opcode, m.right().node(),
+                               &input_count, &inputs[1])) {
+    inputs[0] = g.UseRegister(m.left().node());
+    input_count++;
+  } else if (TryMatchImmediateOrShift(selector, &reverse_opcode,
+                                      m.left().node(), &input_count,
+                                      &inputs[1])) {
+    inputs[0] = g.UseRegister(m.right().node());
+    opcode = reverse_opcode;
+    input_count++;
+  } else {
+    opcode |= AddressingModeField::encode(kMode_Operand2_R);
+    inputs[input_count++] = g.UseRegister(m.left().node());
+    inputs[input_count++] = g.UseRegister(m.right().node());
+  }
+
+  if (cont->IsBranch()) {
+    inputs[input_count++] = g.Label(cont->true_block());
+    inputs[input_count++] = g.Label(cont->false_block());
+  }
+
+  outputs[output_count++] = g.DefineAsRegister(node);
+  if (cont->IsSet()) {
+    outputs[output_count++] = g.DefineAsRegister(cont->result());
+  }
+
+  DCHECK_NE(0, input_count);
+  DCHECK_NE(0, output_count);
+  DCHECK_GE(ARRAY_SIZE(inputs), input_count);
+  DCHECK_GE(ARRAY_SIZE(outputs), output_count);
+  DCHECK_NE(kMode_None, AddressingModeField::decode(opcode));
+
+  Instruction* instr = selector->Emit(cont->Encode(opcode), output_count,
+                                      outputs, input_count, inputs);
+  if (cont->IsBranch()) instr->MarkAsControl();
+}
+
+
+static void VisitBinop(InstructionSelector* selector, Node* node,
+                       InstructionCode opcode, InstructionCode reverse_opcode) {
+  FlagsContinuation cont;
+  VisitBinop(selector, node, opcode, reverse_opcode, &cont);
+}
+
+
+void InstructionSelector::VisitLoad(Node* node) {
+  MachineType rep = OpParameter<MachineType>(node);
+  ArmOperandGenerator g(this);
+  Node* base = node->InputAt(0);
+  Node* index = node->InputAt(1);
+
+  InstructionOperand* result = rep == kMachineFloat64
+                                   ? g.DefineAsDoubleRegister(node)
+                                   : g.DefineAsRegister(node);
+
+  ArchOpcode opcode;
+  switch (rep) {
+    case kMachineFloat64:
+      opcode = kArmFloat64Load;
+      break;
+    case kMachineWord8:
+      opcode = kArmLoadWord8;
+      break;
+    case kMachineWord16:
+      opcode = kArmLoadWord16;
+      break;
+    case kMachineTagged:  // Fall through.
+    case kMachineWord32:
+      opcode = kArmLoadWord32;
+      break;
+    default:
+      UNREACHABLE();
+      return;
+  }
+
+  if (g.CanBeImmediate(index, opcode)) {
+    Emit(opcode | AddressingModeField::encode(kMode_Offset_RI), result,
+         g.UseRegister(base), g.UseImmediate(index));
+  } else if (g.CanBeImmediate(base, opcode)) {
+    Emit(opcode | AddressingModeField::encode(kMode_Offset_RI), result,
+         g.UseRegister(index), g.UseImmediate(base));
+  } else {
+    Emit(opcode | AddressingModeField::encode(kMode_Offset_RR), result,
+         g.UseRegister(base), g.UseRegister(index));
+  }
+}
+
+
+void InstructionSelector::VisitStore(Node* node) {
+  ArmOperandGenerator g(this);
+  Node* base = node->InputAt(0);
+  Node* index = node->InputAt(1);
+  Node* value = node->InputAt(2);
+
+  StoreRepresentation store_rep = OpParameter<StoreRepresentation>(node);
+  MachineType rep = store_rep.rep;
+  if (store_rep.write_barrier_kind == kFullWriteBarrier) {
+    DCHECK(rep == kMachineTagged);
+    // TODO(dcarney): refactor RecordWrite function to take temp registers
+    //                and pass them here instead of using fixed regs
+    // TODO(dcarney): handle immediate indices.
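+    // The write barrier (RecordWrite) expects its inputs in fixed registers,
+    // so base, index and value are pinned to r4-r6 below rather than left to
+    // the register allocator.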
+    InstructionOperand* temps[] = {g.TempRegister(r5), g.TempRegister(r6)};
+    Emit(kArmStoreWriteBarrier, NULL, g.UseFixed(base, r4),
+         g.UseFixed(index, r5), g.UseFixed(value, r6), ARRAY_SIZE(temps),
+         temps);
+    return;
+  }
+  DCHECK_EQ(kNoWriteBarrier, store_rep.write_barrier_kind);
+  InstructionOperand* val = rep == kMachineFloat64 ? g.UseDoubleRegister(value)
+                                                   : g.UseRegister(value);
+
+  ArchOpcode opcode;
+  switch (rep) {
+    case kMachineFloat64:
+      opcode = kArmFloat64Store;
+      break;
+    case kMachineWord8:
+      opcode = kArmStoreWord8;
+      break;
+    case kMachineWord16:
+      opcode = kArmStoreWord16;
+      break;
+    case kMachineTagged:  // Fall through.
+    case kMachineWord32:
+      opcode = kArmStoreWord32;
+      break;
+    default:
+      UNREACHABLE();
+      return;
+  }
+
+  if (g.CanBeImmediate(index, opcode)) {
+    Emit(opcode | AddressingModeField::encode(kMode_Offset_RI), NULL,
+         g.UseRegister(base), g.UseImmediate(index), val);
+  } else if (g.CanBeImmediate(base, opcode)) {
+    Emit(opcode | AddressingModeField::encode(kMode_Offset_RI), NULL,
+         g.UseRegister(index), g.UseImmediate(base), val);
+  } else {
+    Emit(opcode | AddressingModeField::encode(kMode_Offset_RR), NULL,
+         g.UseRegister(base), g.UseRegister(index), val);
+  }
+}
+
+
+static inline void EmitBic(InstructionSelector* selector, Node* node,
+                           Node* left, Node* right) {
+  ArmOperandGenerator g(selector);
+  InstructionCode opcode = kArmBic;
+  InstructionOperand* value_operand;
+  InstructionOperand* shift_operand;
+  if (TryMatchShift(selector, &opcode, right, &value_operand, &shift_operand)) {
+    selector->Emit(opcode, g.DefineAsRegister(node), g.UseRegister(left),
+                   value_operand, shift_operand);
+    return;
+  }
+  selector->Emit(opcode | AddressingModeField::encode(kMode_Operand2_R),
+                 g.DefineAsRegister(node), g.UseRegister(left),
+                 g.UseRegister(right));
+}
+
+
+void InstructionSelector::VisitWord32And(Node* node) {
+  ArmOperandGenerator g(this);
+  Int32BinopMatcher m(node);
+  if (m.left().IsWord32Xor() && CanCover(node, m.left().node())) {
+    Int32BinopMatcher mleft(m.left().node());
+    if (mleft.right().Is(-1)) {
+      EmitBic(this, node, m.right().node(), mleft.left().node());
+      return;
+    }
+  }
+  if (m.right().IsWord32Xor() && CanCover(node, m.right().node())) {
+    Int32BinopMatcher mright(m.right().node());
+    if (mright.right().Is(-1)) {
+      EmitBic(this, node, m.left().node(), mright.left().node());
+      return;
+    }
+  }
+  if (IsSupported(ARMv7) && m.right().HasValue()) {
+    uint32_t value = m.right().Value();
+    uint32_t width = CompilerIntrinsics::CountSetBits(value);
+    uint32_t msb = CompilerIntrinsics::CountLeadingZeros(value);
+    if (width != 0 && msb + width == 32) {
+      DCHECK_EQ(0, CompilerIntrinsics::CountTrailingZeros(value));
+      if (m.left().IsWord32Shr()) {
+        Int32BinopMatcher mleft(m.left().node());
+        if (mleft.right().IsInRange(0, 31)) {
+          Emit(kArmUbfx, g.DefineAsRegister(node),
+               g.UseRegister(mleft.left().node()),
+               g.UseImmediate(mleft.right().node()), g.TempImmediate(width));
+          return;
+        }
+      }
+      Emit(kArmUbfx, g.DefineAsRegister(node), g.UseRegister(m.left().node()),
+           g.TempImmediate(0), g.TempImmediate(width));
+      return;
+    }
+    // Try to interpret this AND as BFC.
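+    // A BFC-eligible mask has a single contiguous run of zero bits, e.g.
+    // 0xff00ffff clears 8 bits starting at bit 16, i.e. "bfc rx, #16, #8";
+    // BFC modifies its operand in place, hence DefineSameAsFirst below.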
+    width = 32 - width;
+    msb = CompilerIntrinsics::CountLeadingZeros(~value);
+    uint32_t lsb = CompilerIntrinsics::CountTrailingZeros(~value);
+    if (msb + width + lsb == 32) {
+      Emit(kArmBfc, g.DefineSameAsFirst(node), g.UseRegister(m.left().node()),
+           g.TempImmediate(lsb), g.TempImmediate(width));
+      return;
+    }
+  }
+  VisitBinop(this, node, kArmAnd, kArmAnd);
+}
+
+
+void InstructionSelector::VisitWord32Or(Node* node) {
+  ArmOperandGenerator g(this);
+  InstructionCode opcode = kArmMov;
+  InstructionOperand* value_operand;
+  InstructionOperand* shift_operand;
+  if (TryMatchROR(this, &opcode, node, &value_operand, &shift_operand)) {
+    Emit(opcode, g.DefineAsRegister(node), value_operand, shift_operand);
+    return;
+  }
+  VisitBinop(this, node, kArmOrr, kArmOrr);
+}
+
+
+void InstructionSelector::VisitWord32Xor(Node* node) {
+  ArmOperandGenerator g(this);
+  Int32BinopMatcher m(node);
+  if (m.right().Is(-1)) {
+    InstructionCode opcode = kArmMvn;
+    InstructionOperand* value_operand;
+    InstructionOperand* shift_operand;
+    if (TryMatchShift(this, &opcode, m.left().node(), &value_operand,
+                      &shift_operand)) {
+      Emit(opcode, g.DefineAsRegister(node), value_operand, shift_operand);
+      return;
+    }
+    Emit(opcode | AddressingModeField::encode(kMode_Operand2_R),
+         g.DefineAsRegister(node), g.UseRegister(m.left().node()));
+    return;
+  }
+  VisitBinop(this, node, kArmEor, kArmEor);
+}
+
+
+template <typename TryMatchShift>
+static inline void VisitShift(InstructionSelector* selector, Node* node,
+                              TryMatchShift try_match_shift) {
+  ArmOperandGenerator g(selector);
+  InstructionCode opcode = kArmMov;
+  InstructionOperand* value_operand = NULL;
+  InstructionOperand* shift_operand = NULL;
+  CHECK(
+      try_match_shift(selector, &opcode, node, &value_operand, &shift_operand));
+  selector->Emit(opcode, g.DefineAsRegister(node), value_operand,
+                 shift_operand);
+}
+
+
+void InstructionSelector::VisitWord32Shl(Node* node) {
+  VisitShift(this, node, TryMatchLSL);
+}
+
+
+void InstructionSelector::VisitWord32Shr(Node* node) {
+  ArmOperandGenerator g(this);
+  Int32BinopMatcher m(node);
+  if (IsSupported(ARMv7) && m.left().IsWord32And() &&
+      m.right().IsInRange(0, 31)) {
+    int32_t lsb = m.right().Value();
+    Int32BinopMatcher mleft(m.left().node());
+    if (mleft.right().HasValue()) {
+      uint32_t value = (mleft.right().Value() >> lsb) << lsb;
+      uint32_t width = CompilerIntrinsics::CountSetBits(value);
+      uint32_t msb = CompilerIntrinsics::CountLeadingZeros(value);
+      if (msb + width + lsb == 32) {
+        DCHECK_EQ(lsb, CompilerIntrinsics::CountTrailingZeros(value));
+        Emit(kArmUbfx, g.DefineAsRegister(node),
+             g.UseRegister(mleft.left().node()), g.TempImmediate(lsb),
+             g.TempImmediate(width));
+        return;
+      }
+    }
+  }
+  VisitShift(this, node, TryMatchLSR);
+}
+
+
+void InstructionSelector::VisitWord32Sar(Node* node) {
+  VisitShift(this, node, TryMatchASR);
+}
+
+
+void InstructionSelector::VisitInt32Add(Node* node) {
+  ArmOperandGenerator g(this);
+  Int32BinopMatcher m(node);
+  if (m.left().IsInt32Mul() && CanCover(node, m.left().node())) {
+    Int32BinopMatcher mleft(m.left().node());
+    Emit(kArmMla, g.DefineAsRegister(node), g.UseRegister(mleft.left().node()),
+         g.UseRegister(mleft.right().node()), g.UseRegister(m.right().node()));
+    return;
+  }
+  if (m.right().IsInt32Mul() && CanCover(node, m.right().node())) {
+    Int32BinopMatcher mright(m.right().node());
+    Emit(kArmMla, g.DefineAsRegister(node), g.UseRegister(mright.left().node()),
+         g.UseRegister(mright.right().node()), g.UseRegister(m.left().node()));
+    return;
+  }
+  VisitBinop(this, node, kArmAdd, kArmAdd);
+}
+
+
+void InstructionSelector::VisitInt32Sub(Node* node) {
+  ArmOperandGenerator g(this);
+  Int32BinopMatcher m(node);
+  if (IsSupported(MLS) && m.right().IsInt32Mul() &&
+      CanCover(node, m.right().node())) {
+    Int32BinopMatcher mright(m.right().node());
+    Emit(kArmMls, g.DefineAsRegister(node), g.UseRegister(mright.left().node()),
+         g.UseRegister(mright.right().node()), g.UseRegister(m.left().node()));
+    return;
+  }
+  VisitBinop(this, node, kArmSub, kArmRsb);
+}
+
+
+void InstructionSelector::VisitInt32Mul(Node* node) {
+  ArmOperandGenerator g(this);
+  Int32BinopMatcher m(node);
+  if (m.right().HasValue() && m.right().Value() > 0) {
+    int32_t value = m.right().Value();
+    if (IsPowerOf2(value - 1)) {
+      Emit(kArmAdd | AddressingModeField::encode(kMode_Operand2_R_LSL_I),
+           g.DefineAsRegister(node), g.UseRegister(m.left().node()),
+           g.UseRegister(m.left().node()),
+           g.TempImmediate(WhichPowerOf2(value - 1)));
+      return;
+    }
+    if (value < kMaxInt && IsPowerOf2(value + 1)) {
+      Emit(kArmRsb | AddressingModeField::encode(kMode_Operand2_R_LSL_I),
+           g.DefineAsRegister(node), g.UseRegister(m.left().node()),
+           g.UseRegister(m.left().node()),
+           g.TempImmediate(WhichPowerOf2(value + 1)));
+      return;
+    }
+  }
+  Emit(kArmMul, g.DefineAsRegister(node), g.UseRegister(m.left().node()),
+       g.UseRegister(m.right().node()));
+}
+
+
+static void EmitDiv(InstructionSelector* selector, ArchOpcode div_opcode,
+                    ArchOpcode f64i32_opcode, ArchOpcode i32f64_opcode,
+                    InstructionOperand* result_operand,
+                    InstructionOperand* left_operand,
+                    InstructionOperand* right_operand) {
+  ArmOperandGenerator g(selector);
+  if (selector->IsSupported(SUDIV)) {
+    selector->Emit(div_opcode, result_operand, left_operand, right_operand);
+    return;
+  }
+  InstructionOperand* left_double_operand = g.TempDoubleRegister();
+  InstructionOperand* right_double_operand = g.TempDoubleRegister();
+  InstructionOperand* result_double_operand = g.TempDoubleRegister();
+  selector->Emit(f64i32_opcode, left_double_operand, left_operand);
+  selector->Emit(f64i32_opcode, right_double_operand, right_operand);
+  selector->Emit(kArmVdivF64, result_double_operand, left_double_operand,
+                 right_double_operand);
+  selector->Emit(i32f64_opcode, result_operand, result_double_operand);
+}
+
+
+static void VisitDiv(InstructionSelector* selector, Node* node,
+                     ArchOpcode div_opcode, ArchOpcode f64i32_opcode,
+                     ArchOpcode i32f64_opcode) {
+  ArmOperandGenerator g(selector);
+  Int32BinopMatcher m(node);
+  EmitDiv(selector, div_opcode, f64i32_opcode, i32f64_opcode,
+          g.DefineAsRegister(node), g.UseRegister(m.left().node()),
+          g.UseRegister(m.right().node()));
+}
+
+
+void InstructionSelector::VisitInt32Div(Node* node) {
+  VisitDiv(this, node, kArmSdiv, kArmVcvtF64S32, kArmVcvtS32F64);
+}
+
+
+void InstructionSelector::VisitInt32UDiv(Node* node) {
+  VisitDiv(this, node, kArmUdiv, kArmVcvtF64U32, kArmVcvtU32F64);
+}
+
+
+static void VisitMod(InstructionSelector* selector, Node* node,
+                     ArchOpcode div_opcode, ArchOpcode f64i32_opcode,
+                     ArchOpcode i32f64_opcode) {
+  ArmOperandGenerator g(selector);
+  Int32BinopMatcher m(node);
+  InstructionOperand* div_operand = g.TempRegister();
+  InstructionOperand* result_operand = g.DefineAsRegister(node);
+  InstructionOperand* left_operand = g.UseRegister(m.left().node());
+  InstructionOperand* right_operand = g.UseRegister(m.right().node());
+  EmitDiv(selector, div_opcode, f64i32_opcode, i32f64_opcode, div_operand,
+          left_operand, right_operand);
+  if (selector->IsSupported(MLS)) {
+    selector->Emit(kArmMls, result_operand, div_operand, right_operand,
+                   left_operand);
+    return;
+  }
+  InstructionOperand* mul_operand = g.TempRegister();
+  selector->Emit(kArmMul, mul_operand, div_operand, right_operand);
+  selector->Emit(kArmSub, result_operand, left_operand, mul_operand);
+}
+
+
+void InstructionSelector::VisitInt32Mod(Node* node) {
+  VisitMod(this, node, kArmSdiv, kArmVcvtF64S32, kArmVcvtS32F64);
+}
+
+
+void InstructionSelector::VisitInt32UMod(Node* node) {
+  VisitMod(this, node, kArmUdiv, kArmVcvtF64U32, kArmVcvtU32F64);
+}
+
+
+void InstructionSelector::VisitChangeInt32ToFloat64(Node* node) {
+  ArmOperandGenerator g(this);
+  Emit(kArmVcvtF64S32, g.DefineAsDoubleRegister(node),
+       g.UseRegister(node->InputAt(0)));
+}
+
+
+void InstructionSelector::VisitChangeUint32ToFloat64(Node* node) {
+  ArmOperandGenerator g(this);
+  Emit(kArmVcvtF64U32, g.DefineAsDoubleRegister(node),
+       g.UseRegister(node->InputAt(0)));
+}
+
+
+void InstructionSelector::VisitChangeFloat64ToInt32(Node* node) {
+  ArmOperandGenerator g(this);
+  Emit(kArmVcvtS32F64, g.DefineAsRegister(node),
+       g.UseDoubleRegister(node->InputAt(0)));
+}
+
+
+void InstructionSelector::VisitChangeFloat64ToUint32(Node* node) {
+  ArmOperandGenerator g(this);
+  Emit(kArmVcvtU32F64, g.DefineAsRegister(node),
+       g.UseDoubleRegister(node->InputAt(0)));
+}
+
+
+void InstructionSelector::VisitFloat64Add(Node* node) {
+  ArmOperandGenerator g(this);
+  Int32BinopMatcher m(node);
+  if (m.left().IsFloat64Mul() && CanCover(node, m.left().node())) {
+    Int32BinopMatcher mleft(m.left().node());
+    Emit(kArmVmlaF64, g.DefineSameAsFirst(node),
+         g.UseRegister(m.right().node()), g.UseRegister(mleft.left().node()),
+         g.UseRegister(mleft.right().node()));
+    return;
+  }
+  if (m.right().IsFloat64Mul() && CanCover(node, m.right().node())) {
+    Int32BinopMatcher mright(m.right().node());
+    Emit(kArmVmlaF64, g.DefineSameAsFirst(node), g.UseRegister(m.left().node()),
+         g.UseRegister(mright.left().node()),
+         g.UseRegister(mright.right().node()));
+    return;
+  }
+  VisitRRRFloat64(this, kArmVaddF64, node);
+}
+
+
+void InstructionSelector::VisitFloat64Sub(Node* node) {
+  ArmOperandGenerator g(this);
+  Int32BinopMatcher m(node);
+  if (m.right().IsFloat64Mul() && CanCover(node, m.right().node())) {
+    Int32BinopMatcher mright(m.right().node());
+    Emit(kArmVmlsF64, g.DefineSameAsFirst(node), g.UseRegister(m.left().node()),
+         g.UseRegister(mright.left().node()),
+         g.UseRegister(mright.right().node()));
+    return;
+  }
+  VisitRRRFloat64(this, kArmVsubF64, node);
+}
+
+
+void InstructionSelector::VisitFloat64Mul(Node* node) {
+  ArmOperandGenerator g(this);
+  Float64BinopMatcher m(node);
+  if (m.right().Is(-1.0)) {
+    Emit(kArmVnegF64, g.DefineAsRegister(node),
+         g.UseDoubleRegister(m.left().node()));
+  } else {
+    VisitRRRFloat64(this, kArmVmulF64, node);
+  }
+}
+
+
+void InstructionSelector::VisitFloat64Div(Node* node) {
+  VisitRRRFloat64(this, kArmVdivF64, node);
+}
+
+
+void InstructionSelector::VisitFloat64Mod(Node* node) {
+  ArmOperandGenerator g(this);
+  Emit(kArmVmodF64, g.DefineAsFixedDouble(node, d0),
+       g.UseFixedDouble(node->InputAt(0), d0),
+       g.UseFixedDouble(node->InputAt(1), d1))->MarkAsCall();
+}
+
+
+void InstructionSelector::VisitCall(Node* call, BasicBlock* continuation,
+                                    BasicBlock* deoptimization) {
+  ArmOperandGenerator g(this);
+  CallDescriptor* descriptor = OpParameter<CallDescriptor*>(call);
+  CallBuffer buffer(zone(), descriptor);  // TODO(turbofan): temp zone here?
+
+  // Compute InstructionOperands for inputs and outputs.
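+  // The call buffer gathers the call's fixed-register inputs, outputs and
+  // control operands; stack-passed arguments are pushed explicitly below.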
+  // TODO(turbofan): on ARM64 it's probably better to use the code object in a
+  // register if there are multiple uses of it. Improve constant pool and the
+  // heuristics in the register allocator for where to emit constants.
+  InitializeCallBuffer(call, &buffer, true, false, continuation,
+                       deoptimization);
+
+  // TODO(dcarney): might be possible to use claim/poke instead
+  // Push any stack arguments.
+  for (int i = buffer.pushed_count - 1; i >= 0; --i) {
+    Node* input = buffer.pushed_nodes[i];
+    Emit(kArmPush, NULL, g.UseRegister(input));
+  }
+
+  // Select the appropriate opcode based on the call type.
+  InstructionCode opcode;
+  switch (descriptor->kind()) {
+    case CallDescriptor::kCallCodeObject: {
+      bool lazy_deopt = descriptor->CanLazilyDeoptimize();
+      opcode = kArmCallCodeObject | MiscField::encode(lazy_deopt ? 1 : 0);
+      break;
+    }
+    case CallDescriptor::kCallAddress:
+      opcode = kArmCallAddress;
+      break;
+    case CallDescriptor::kCallJSFunction:
+      opcode = kArmCallJSFunction;
+      break;
+    default:
+      UNREACHABLE();
+      return;
+  }
+
+  // Emit the call instruction.
+  Instruction* call_instr =
+      Emit(opcode, buffer.output_count, buffer.outputs,
+           buffer.fixed_and_control_count(), buffer.fixed_and_control_args);
+
+  call_instr->MarkAsCall();
+  if (deoptimization != NULL) {
+    DCHECK(continuation != NULL);
+    call_instr->MarkAsControl();
+  }
+
+  // Caller clean up of stack for C-style calls.
+  if (descriptor->kind() == CallDescriptor::kCallAddress &&
+      buffer.pushed_count > 0) {
+    DCHECK(deoptimization == NULL && continuation == NULL);
+    Emit(kArmDrop | MiscField::encode(buffer.pushed_count), NULL);
+  }
+}
+
+
+void InstructionSelector::VisitInt32AddWithOverflow(Node* node,
+                                                    FlagsContinuation* cont) {
+  VisitBinop(this, node, kArmAdd, kArmAdd, cont);
+}
+
+
+void InstructionSelector::VisitInt32SubWithOverflow(Node* node,
+                                                    FlagsContinuation* cont) {
+  VisitBinop(this, node, kArmSub, kArmRsb, cont);
+}
+
+
+// Shared routine for multiple compare operations.
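+// If only the left operand fits as an immediate or shift, the operands are
+// swapped and, for non-commutative compares, the continuation's condition is
+// commuted to compensate.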
+static void VisitWordCompare(InstructionSelector* selector, Node* node,
+                             InstructionCode opcode, FlagsContinuation* cont,
+                             bool commutative) {
+  ArmOperandGenerator g(selector);
+  Int32BinopMatcher m(node);
+  InstructionOperand* inputs[5];
+  size_t input_count = 0;
+  InstructionOperand* outputs[1];
+  size_t output_count = 0;
+
+  if (TryMatchImmediateOrShift(selector, &opcode, m.right().node(),
+                               &input_count, &inputs[1])) {
+    inputs[0] = g.UseRegister(m.left().node());
+    input_count++;
+  } else if (TryMatchImmediateOrShift(selector, &opcode, m.left().node(),
+                                      &input_count, &inputs[1])) {
+    if (!commutative) cont->Commute();
+    inputs[0] = g.UseRegister(m.right().node());
+    input_count++;
+  } else {
+    opcode |= AddressingModeField::encode(kMode_Operand2_R);
+    inputs[input_count++] = g.UseRegister(m.left().node());
+    inputs[input_count++] = g.UseRegister(m.right().node());
+  }
+
+  if (cont->IsBranch()) {
+    inputs[input_count++] = g.Label(cont->true_block());
+    inputs[input_count++] = g.Label(cont->false_block());
+  } else {
+    DCHECK(cont->IsSet());
+    outputs[output_count++] = g.DefineAsRegister(cont->result());
+  }
+
+  DCHECK_NE(0, input_count);
+  DCHECK_GE(ARRAY_SIZE(inputs), input_count);
+  DCHECK_GE(ARRAY_SIZE(outputs), output_count);
+
+  Instruction* instr = selector->Emit(cont->Encode(opcode), output_count,
+                                      outputs, input_count, inputs);
+  if (cont->IsBranch()) instr->MarkAsControl();
+}
+
+
+void InstructionSelector::VisitWord32Test(Node* node, FlagsContinuation* cont) {
+  switch (node->opcode()) {
+    case IrOpcode::kInt32Add:
+      return VisitWordCompare(this, node, kArmCmn, cont, true);
+    case IrOpcode::kInt32Sub:
+      return VisitWordCompare(this, node, kArmCmp, cont, false);
+    case IrOpcode::kWord32And:
+      return VisitWordCompare(this, node, kArmTst, cont, true);
+    case IrOpcode::kWord32Or:
+      return VisitBinop(this, node, kArmOrr, kArmOrr, cont);
+    case IrOpcode::kWord32Xor:
+      return VisitWordCompare(this, node, kArmTeq, cont, true);
+    default:
+      break;
+  }
+
+  ArmOperandGenerator g(this);
+  InstructionCode opcode =
+      cont->Encode(kArmTst) | AddressingModeField::encode(kMode_Operand2_R);
+  if (cont->IsBranch()) {
+    Emit(opcode, NULL, g.UseRegister(node), g.UseRegister(node),
+         g.Label(cont->true_block()),
+         g.Label(cont->false_block()))->MarkAsControl();
+  } else {
+    Emit(opcode, g.DefineAsRegister(cont->result()), g.UseRegister(node),
+         g.UseRegister(node));
+  }
+}
+
+
+void InstructionSelector::VisitWord32Compare(Node* node,
+                                             FlagsContinuation* cont) {
+  VisitWordCompare(this, node, kArmCmp, cont, false);
+}
+
+
+void InstructionSelector::VisitFloat64Compare(Node* node,
+                                              FlagsContinuation* cont) {
+  ArmOperandGenerator g(this);
+  Float64BinopMatcher m(node);
+  if (cont->IsBranch()) {
+    Emit(cont->Encode(kArmVcmpF64), NULL, g.UseDoubleRegister(m.left().node()),
+         g.UseDoubleRegister(m.right().node()), g.Label(cont->true_block()),
+         g.Label(cont->false_block()))->MarkAsControl();
+  } else {
+    DCHECK(cont->IsSet());
+    Emit(cont->Encode(kArmVcmpF64), g.DefineAsRegister(cont->result()),
+         g.UseDoubleRegister(m.left().node()),
+         g.UseDoubleRegister(m.right().node()));
+  }
+}
+
+}  // namespace compiler
+}  // namespace internal
+}  // namespace v8
diff --git a/deps/v8/src/compiler/arm/linkage-arm.cc b/deps/v8/src/compiler/arm/linkage-arm.cc
new file mode 100644
index 00000000000..3b5d5f7d0f2
--- /dev/null
+++ b/deps/v8/src/compiler/arm/linkage-arm.cc
@@ -0,0 +1,67 @@
+// Copyright 2014 the V8 project authors. All rights reserved.
+// Use of this source code is governed by a BSD-style license that can be
+// found in the LICENSE file.
+
+#include "src/v8.h"
+
+#include "src/assembler.h"
+#include "src/code-stubs.h"
+#include "src/compiler/linkage.h"
+#include "src/compiler/linkage-impl.h"
+#include "src/zone.h"
+
+namespace v8 {
+namespace internal {
+namespace compiler {
+
+struct LinkageHelperTraits {
+  static Register ReturnValueReg() { return r0; }
+  static Register ReturnValue2Reg() { return r1; }
+  static Register JSCallFunctionReg() { return r1; }
+  static Register ContextReg() { return cp; }
+  static Register RuntimeCallFunctionReg() { return r1; }
+  static Register RuntimeCallArgCountReg() { return r0; }
+  static RegList CCalleeSaveRegisters() {
+    return r4.bit() | r5.bit() | r6.bit() | r7.bit() | r8.bit() | r9.bit() |
+           r10.bit();
+  }
+  static Register CRegisterParameter(int i) {
+    static Register register_parameters[] = {r0, r1, r2, r3};
+    return register_parameters[i];
+  }
+  static int CRegisterParametersLength() { return 4; }
+};
+
+
+CallDescriptor* Linkage::GetJSCallDescriptor(int parameter_count, Zone* zone) {
+  return LinkageHelper::GetJSCallDescriptor<LinkageHelperTraits>(
+      zone, parameter_count);
+}
+
+
+CallDescriptor* Linkage::GetRuntimeCallDescriptor(
+    Runtime::FunctionId function, int parameter_count,
+    Operator::Property properties,
+    CallDescriptor::DeoptimizationSupport can_deoptimize, Zone* zone) {
+  return LinkageHelper::GetRuntimeCallDescriptor<LinkageHelperTraits>(
+      zone, function, parameter_count, properties, can_deoptimize);
+}
+
+
+CallDescriptor* Linkage::GetStubCallDescriptor(
+    CodeStubInterfaceDescriptor* descriptor, int stack_parameter_count,
+    CallDescriptor::DeoptimizationSupport can_deoptimize, Zone* zone) {
+  return LinkageHelper::GetStubCallDescriptor<LinkageHelperTraits>(
+      zone, descriptor, stack_parameter_count, can_deoptimize);
+}
+
+
+CallDescriptor* Linkage::GetSimplifiedCDescriptor(
+    Zone* zone, int num_params, MachineType return_type,
+    const MachineType* param_types) {
+  return LinkageHelper::GetSimplifiedCDescriptor<LinkageHelperTraits>(
+      zone, num_params, return_type, param_types);
+}
+}
+}
+}
+}  // namespace v8::internal::compiler
diff --git a/deps/v8/src/compiler/arm64/code-generator-arm64.cc b/deps/v8/src/compiler/arm64/code-generator-arm64.cc
new file mode 100644
index 00000000000..065889cfbb3
--- /dev/null
+++ b/deps/v8/src/compiler/arm64/code-generator-arm64.cc
@@ -0,0 +1,854 @@
+// Copyright 2014 the V8 project authors. All rights reserved.
+// Use of this source code is governed by a BSD-style license that can be
+// found in the LICENSE file.
+
+#include "src/compiler/code-generator.h"
+
+#include "src/arm64/macro-assembler-arm64.h"
+#include "src/compiler/code-generator-impl.h"
+#include "src/compiler/gap-resolver.h"
+#include "src/compiler/node-matchers.h"
+#include "src/compiler/node-properties-inl.h"
+#include "src/scopes.h"
+
+namespace v8 {
+namespace internal {
+namespace compiler {
+
+#define __ masm()->
+
+
+// Adds Arm64-specific methods to convert InstructionOperands.
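+// Each 64-bit X register has a 32-bit W view; the *32/*64 accessors below
+// pick the view matching the width encoded in the instruction.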
+class Arm64OperandConverter V8_FINAL : public InstructionOperandConverter {
+ public:
+  Arm64OperandConverter(CodeGenerator* gen, Instruction* instr)
+      : InstructionOperandConverter(gen, instr) {}
+
+  Register InputRegister32(int index) {
+    return ToRegister(instr_->InputAt(index)).W();
+  }
+
+  Register InputRegister64(int index) { return InputRegister(index); }
+
+  Operand InputImmediate(int index) {
+    return ToImmediate(instr_->InputAt(index));
+  }
+
+  Operand InputOperand(int index) { return ToOperand(instr_->InputAt(index)); }
+
+  Operand InputOperand64(int index) { return InputOperand(index); }
+
+  Operand InputOperand32(int index) {
+    return ToOperand32(instr_->InputAt(index));
+  }
+
+  Register OutputRegister64() { return OutputRegister(); }
+
+  Register OutputRegister32() { return ToRegister(instr_->Output()).W(); }
+
+  MemOperand MemoryOperand(int* first_index) {
+    const int index = *first_index;
+    switch (AddressingModeField::decode(instr_->opcode())) {
+      case kMode_None:
+        break;
+      case kMode_MRI:
+        *first_index += 2;
+        return MemOperand(InputRegister(index + 0), InputInt32(index + 1));
+      case kMode_MRR:
+        *first_index += 2;
+        return MemOperand(InputRegister(index + 0), InputRegister(index + 1),
+                          SXTW);
+    }
+    UNREACHABLE();
+    return MemOperand(no_reg);
+  }
+
+  MemOperand MemoryOperand() {
+    int index = 0;
+    return MemoryOperand(&index);
+  }
+
+  Operand ToOperand(InstructionOperand* op) {
+    if (op->IsRegister()) {
+      return Operand(ToRegister(op));
+    }
+    return ToImmediate(op);
+  }
+
+  Operand ToOperand32(InstructionOperand* op) {
+    if (op->IsRegister()) {
+      return Operand(ToRegister(op).W());
+    }
+    return ToImmediate(op);
+  }
+
+  Operand ToImmediate(InstructionOperand* operand) {
+    Constant constant = ToConstant(operand);
+    switch (constant.type()) {
+      case Constant::kInt32:
+        return Operand(constant.ToInt32());
+      case Constant::kInt64:
+        return Operand(constant.ToInt64());
+      case Constant::kFloat64:
+        return Operand(
+            isolate()->factory()->NewNumber(constant.ToFloat64(), TENURED));
+      case Constant::kExternalReference:
+        return Operand(constant.ToExternalReference());
+      case Constant::kHeapObject:
+        return Operand(constant.ToHeapObject());
+    }
+    UNREACHABLE();
+    return Operand(-1);
+  }
+
+  MemOperand ToMemOperand(InstructionOperand* op, MacroAssembler* masm) const {
+    DCHECK(op != NULL);
+    DCHECK(!op->IsRegister());
+    DCHECK(!op->IsDoubleRegister());
+    DCHECK(op->IsStackSlot() || op->IsDoubleStackSlot());
+    // The linkage computes where all spill slots are located.
+    FrameOffset offset = linkage()->GetFrameOffset(op->index(), frame(), 0);
+    return MemOperand(offset.from_stack_pointer() ? masm->StackPointer() : fp,
+                      offset.offset());
+  }
+};
+
+
+#define ASSEMBLE_SHIFT(asm_instr, width)                                       \
+  do {                                                                         \
+    if (instr->InputAt(1)->IsRegister()) {                                     \
+      __ asm_instr(i.OutputRegister##width(), i.InputRegister##width(0),       \
+                   i.InputRegister##width(1));                                 \
+    } else {                                                                   \
+      int64_t imm = i.InputOperand##width(1).immediate().value();              \
+      __ asm_instr(i.OutputRegister##width(), i.InputRegister##width(0), imm); \
+    }                                                                          \
+  } while (0);
+
+
+// Assembles an instruction after register allocation, producing machine code.
+void CodeGenerator::AssembleArchInstruction(Instruction* instr) {
+  Arm64OperandConverter i(this, instr);
+  InstructionCode opcode = instr->opcode();
+  switch (ArchOpcodeField::decode(opcode)) {
+    case kArchJmp:
+      __ B(code_->GetLabel(i.InputBlock(0)));
+      break;
+    case kArchNop:
+      // don't emit code for nops.
+      break;
+    case kArchRet:
+      AssembleReturn();
+      break;
+    case kArchDeoptimize: {
+      int deoptimization_id = MiscField::decode(instr->opcode());
+      BuildTranslation(instr, deoptimization_id);
+
+      Address deopt_entry = Deoptimizer::GetDeoptimizationEntry(
+          isolate(), deoptimization_id, Deoptimizer::LAZY);
+      __ Call(deopt_entry, RelocInfo::RUNTIME_ENTRY);
+      break;
+    }
+    case kArm64Add:
+      __ Add(i.OutputRegister(), i.InputRegister(0), i.InputOperand(1));
+      break;
+    case kArm64Add32:
+      if (FlagsModeField::decode(opcode) != kFlags_none) {
+        __ Adds(i.OutputRegister32(), i.InputRegister32(0),
+                i.InputOperand32(1));
+      } else {
+        __ Add(i.OutputRegister32(), i.InputRegister32(0), i.InputOperand32(1));
+      }
+      break;
+    case kArm64And:
+      __ And(i.OutputRegister(), i.InputRegister(0), i.InputOperand(1));
+      break;
+    case kArm64And32:
+      __ And(i.OutputRegister32(), i.InputRegister32(0), i.InputOperand32(1));
+      break;
+    case kArm64Mul:
+      __ Mul(i.OutputRegister(), i.InputRegister(0), i.InputRegister(1));
+      break;
+    case kArm64Mul32:
+      __ Mul(i.OutputRegister32(), i.InputRegister32(0), i.InputRegister32(1));
+      break;
+    case kArm64Idiv:
+      __ Sdiv(i.OutputRegister(), i.InputRegister(0), i.InputRegister(1));
+      break;
+    case kArm64Idiv32:
+      __ Sdiv(i.OutputRegister32(), i.InputRegister32(0), i.InputRegister32(1));
+      break;
+    case kArm64Udiv:
+      __ Udiv(i.OutputRegister(), i.InputRegister(0), i.InputRegister(1));
+      break;
+    case kArm64Udiv32:
+      __ Udiv(i.OutputRegister32(), i.InputRegister32(0), i.InputRegister32(1));
+      break;
+    case kArm64Imod: {
+      UseScratchRegisterScope scope(masm());
+      Register temp = scope.AcquireX();
+      __ Sdiv(temp, i.InputRegister(0), i.InputRegister(1));
+      __ Msub(i.OutputRegister(), temp, i.InputRegister(1), i.InputRegister(0));
+      break;
+    }
+    case kArm64Imod32: {
+      UseScratchRegisterScope scope(masm());
+      Register temp = scope.AcquireW();
+      __ Sdiv(temp, i.InputRegister32(0), i.InputRegister32(1));
+      __ Msub(i.OutputRegister32(), temp, i.InputRegister32(1),
+              i.InputRegister32(0));
+      break;
+    }
+    case kArm64Umod: {
+      UseScratchRegisterScope scope(masm());
+      Register temp = scope.AcquireX();
+      __ Udiv(temp, i.InputRegister(0), i.InputRegister(1));
+      __ Msub(i.OutputRegister(), temp, i.InputRegister(1), i.InputRegister(0));
+      break;
+    }
+    case kArm64Umod32: {
+      UseScratchRegisterScope scope(masm());
+      Register temp = scope.AcquireW();
+      __ Udiv(temp, i.InputRegister32(0), i.InputRegister32(1));
+      __ Msub(i.OutputRegister32(), temp, i.InputRegister32(1),
+              i.InputRegister32(0));
+      break;
+    }
+    // TODO(dcarney): use mvn instr??
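+    // ORN with the zero register computes the bitwise NOT of its operand.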
+    case kArm64Not:
+      __ Orn(i.OutputRegister(), xzr, i.InputOperand(0));
+      break;
+    case kArm64Not32:
+      __ Orn(i.OutputRegister32(), wzr, i.InputOperand32(0));
+      break;
+    case kArm64Neg:
+      __ Neg(i.OutputRegister(), i.InputOperand(0));
+      break;
+    case kArm64Neg32:
+      __ Neg(i.OutputRegister32(), i.InputOperand32(0));
+      break;
+    case kArm64Or:
+      __ Orr(i.OutputRegister(), i.InputRegister(0), i.InputOperand(1));
+      break;
+    case kArm64Or32:
+      __ Orr(i.OutputRegister32(), i.InputRegister32(0), i.InputOperand32(1));
+      break;
+    case kArm64Xor:
+      __ Eor(i.OutputRegister(), i.InputRegister(0), i.InputOperand(1));
+      break;
+    case kArm64Xor32:
+      __ Eor(i.OutputRegister32(), i.InputRegister32(0), i.InputOperand32(1));
+      break;
+    case kArm64Sub:
+      __ Sub(i.OutputRegister(), i.InputRegister(0), i.InputOperand(1));
+      break;
+    case kArm64Sub32:
+      if (FlagsModeField::decode(opcode) != kFlags_none) {
+        __ Subs(i.OutputRegister32(), i.InputRegister32(0),
+                i.InputOperand32(1));
+      } else {
+        __ Sub(i.OutputRegister32(), i.InputRegister32(0), i.InputOperand32(1));
+      }
+      break;
+    case kArm64Shl:
+      ASSEMBLE_SHIFT(Lsl, 64);
+      break;
+    case kArm64Shl32:
+      ASSEMBLE_SHIFT(Lsl, 32);
+      break;
+    case kArm64Shr:
+      ASSEMBLE_SHIFT(Lsr, 64);
+      break;
+    case kArm64Shr32:
+      ASSEMBLE_SHIFT(Lsr, 32);
+      break;
+    case kArm64Sar:
+      ASSEMBLE_SHIFT(Asr, 64);
+      break;
+    case kArm64Sar32:
+      ASSEMBLE_SHIFT(Asr, 32);
+      break;
+    case kArm64CallCodeObject: {
+      if (instr->InputAt(0)->IsImmediate()) {
+        Handle<Code> code = Handle<Code>::cast(i.InputHeapObject(0));
+        __ Call(code, RelocInfo::CODE_TARGET);
+        RecordSafepoint(instr->pointer_map(), Safepoint::kSimple, 0,
+                        Safepoint::kNoLazyDeopt);
+      } else {
+        Register reg = i.InputRegister(0);
+        int entry = Code::kHeaderSize - kHeapObjectTag;
+        __ Ldr(reg, MemOperand(reg, entry));
+        __ Call(reg);
+        RecordSafepoint(instr->pointer_map(), Safepoint::kSimple, 0,
+                        Safepoint::kNoLazyDeopt);
+      }
+      bool lazy_deopt = (MiscField::decode(instr->opcode()) == 1);
+      if (lazy_deopt) {
+        RecordLazyDeoptimizationEntry(instr);
+      }
+      // Meaningless instruction for ICs to overwrite.
+      AddNopForSmiCodeInlining();
+      break;
+    }
+    case kArm64CallJSFunction: {
+      Register func = i.InputRegister(0);
+
+      // TODO(jarin) The load of the context should be separated from the call.
+      __ Ldr(cp, FieldMemOperand(func, JSFunction::kContextOffset));
+      __ Ldr(x10, FieldMemOperand(func, JSFunction::kCodeEntryOffset));
+      __ Call(x10);
+
+      RecordSafepoint(instr->pointer_map(), Safepoint::kSimple, 0,
+                      Safepoint::kNoLazyDeopt);
+      RecordLazyDeoptimizationEntry(instr);
+      break;
+    }
+    case kArm64CallAddress: {
+      DirectCEntryStub stub(isolate());
+      stub.GenerateCall(masm(), i.InputRegister(0));
+      break;
+    }
+    case kArm64Claim: {
+      int words = MiscField::decode(instr->opcode());
+      __ Claim(words);
+      break;
+    }
+    case kArm64Poke: {
+      int slot = MiscField::decode(instr->opcode());
+      Operand operand(slot * kPointerSize);
+      __ Poke(i.InputRegister(0), operand);
+      break;
+    }
+    case kArm64PokePairZero: {
+      // TODO(dcarney): test slot offset and register order.
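+      // PokePair writes two registers to adjacent stack slots; pairing the
+      // value with xzr fills the second slot with zero.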
+      int slot = MiscField::decode(instr->opcode()) - 1;
+      __ PokePair(i.InputRegister(0), xzr, slot * kPointerSize);
+      break;
+    }
+    case kArm64PokePair: {
+      int slot = MiscField::decode(instr->opcode()) - 1;
+      __ PokePair(i.InputRegister(1), i.InputRegister(0), slot * kPointerSize);
+      break;
+    }
+    case kArm64Drop: {
+      int words = MiscField::decode(instr->opcode());
+      __ Drop(words);
+      break;
+    }
+    case kArm64Cmp:
+      __ Cmp(i.InputRegister(0), i.InputOperand(1));
+      break;
+    case kArm64Cmp32:
+      __ Cmp(i.InputRegister32(0), i.InputOperand32(1));
+      break;
+    case kArm64Tst:
+      __ Tst(i.InputRegister(0), i.InputOperand(1));
+      break;
+    case kArm64Tst32:
+      __ Tst(i.InputRegister32(0), i.InputOperand32(1));
+      break;
+    case kArm64Float64Cmp:
+      __ Fcmp(i.InputDoubleRegister(0), i.InputDoubleRegister(1));
+      break;
+    case kArm64Float64Add:
+      __ Fadd(i.OutputDoubleRegister(), i.InputDoubleRegister(0),
+              i.InputDoubleRegister(1));
+      break;
+    case kArm64Float64Sub:
+      __ Fsub(i.OutputDoubleRegister(), i.InputDoubleRegister(0),
+              i.InputDoubleRegister(1));
+      break;
+    case kArm64Float64Mul:
+      __ Fmul(i.OutputDoubleRegister(), i.InputDoubleRegister(0),
+              i.InputDoubleRegister(1));
+      break;
+    case kArm64Float64Div:
+      __ Fdiv(i.OutputDoubleRegister(), i.InputDoubleRegister(0),
+              i.InputDoubleRegister(1));
+      break;
+    case kArm64Float64Mod: {
+      // TODO(dcarney): implement directly. See note in lithium-codegen-arm64.cc
+      FrameScope scope(masm(), StackFrame::MANUAL);
+      DCHECK(d0.is(i.InputDoubleRegister(0)));
+      DCHECK(d1.is(i.InputDoubleRegister(1)));
+      DCHECK(d0.is(i.OutputDoubleRegister()));
+      // TODO(dcarney): make sure this saves all relevant registers.
+      __ CallCFunction(ExternalReference::mod_two_doubles_operation(isolate()),
+                       0, 2);
+      break;
+    }
+    case kArm64Int32ToInt64:
+      __ Sxtw(i.OutputRegister(), i.InputRegister(0));
+      break;
+    case kArm64Int64ToInt32:
+      if (!i.OutputRegister().is(i.InputRegister(0))) {
+        __ Mov(i.OutputRegister(), i.InputRegister(0));
+      }
+      break;
+    case kArm64Float64ToInt32:
+      __ Fcvtzs(i.OutputRegister32(), i.InputDoubleRegister(0));
+      break;
+    case kArm64Float64ToUint32:
+      __ Fcvtzu(i.OutputRegister32(), i.InputDoubleRegister(0));
+      break;
+    case kArm64Int32ToFloat64:
+      __ Scvtf(i.OutputDoubleRegister(), i.InputRegister32(0));
+      break;
+    case kArm64Uint32ToFloat64:
+      __ Ucvtf(i.OutputDoubleRegister(), i.InputRegister32(0));
+      break;
+    case kArm64LoadWord8:
+      __ Ldrb(i.OutputRegister(), i.MemoryOperand());
+      break;
+    case kArm64StoreWord8:
+      __ Strb(i.InputRegister(2), i.MemoryOperand());
+      break;
+    case kArm64LoadWord16:
+      __ Ldrh(i.OutputRegister(), i.MemoryOperand());
+      break;
+    case kArm64StoreWord16:
+      __ Strh(i.InputRegister(2), i.MemoryOperand());
+      break;
+    case kArm64LoadWord32:
+      __ Ldr(i.OutputRegister32(), i.MemoryOperand());
+      break;
+    case kArm64StoreWord32:
+      __ Str(i.InputRegister32(2), i.MemoryOperand());
+      break;
+    case kArm64LoadWord64:
+      __ Ldr(i.OutputRegister(), i.MemoryOperand());
+      break;
+    case kArm64StoreWord64:
+      __ Str(i.InputRegister(2), i.MemoryOperand());
+      break;
+    case kArm64Float64Load:
+      __ Ldr(i.OutputDoubleRegister(), i.MemoryOperand());
+      break;
+    case kArm64Float64Store:
+      __ Str(i.InputDoubleRegister(2), i.MemoryOperand());
+      break;
+    case kArm64StoreWriteBarrier: {
+      Register object = i.InputRegister(0);
+      Register index = i.InputRegister(1);
+      Register value = i.InputRegister(2);
+      __ Add(index, object, Operand(index, SXTW));
+      __ Str(value, MemOperand(index));
+      SaveFPRegsMode mode = code_->frame()->DidAllocateDoubleRegisters()
+                                ? kSaveFPRegs
+                                : kDontSaveFPRegs;
+      // TODO(dcarney): we shouldn't test write barriers from c calls.
+      LinkRegisterStatus lr_status = kLRHasNotBeenSaved;
+      UseScratchRegisterScope scope(masm());
+      Register temp = no_reg;
+      if (csp.is(masm()->StackPointer())) {
+        temp = scope.AcquireX();
+        lr_status = kLRHasBeenSaved;
+        __ Push(lr, temp);  // Need to push a pair
+      }
+      __ RecordWrite(object, index, value, lr_status, mode);
+      if (csp.is(masm()->StackPointer())) {
+        __ Pop(temp, lr);
+      }
+      break;
+    }
+  }
+}
+
+
+// Assemble branches after this instruction.
+void CodeGenerator::AssembleArchBranch(Instruction* instr,
+                                       FlagsCondition condition) {
+  Arm64OperandConverter i(this, instr);
+  Label done;
+
+  // Emit a branch. The true and false targets are always the last two inputs
+  // to the instruction.
+  BasicBlock* tblock = i.InputBlock(instr->InputCount() - 2);
+  BasicBlock* fblock = i.InputBlock(instr->InputCount() - 1);
+  bool fallthru = IsNextInAssemblyOrder(fblock);
+  Label* tlabel = code()->GetLabel(tblock);
+  Label* flabel = fallthru ? &done : code()->GetLabel(fblock);
+  switch (condition) {
+    case kUnorderedEqual:
+      __ B(vs, flabel);
+    // Fall through.
+    case kEqual:
+      __ B(eq, tlabel);
+      break;
+    case kUnorderedNotEqual:
+      __ B(vs, tlabel);
+    // Fall through.
+    case kNotEqual:
+      __ B(ne, tlabel);
+      break;
+    case kSignedLessThan:
+      __ B(lt, tlabel);
+      break;
+    case kSignedGreaterThanOrEqual:
+      __ B(ge, tlabel);
+      break;
+    case kSignedLessThanOrEqual:
+      __ B(le, tlabel);
+      break;
+    case kSignedGreaterThan:
+      __ B(gt, tlabel);
+      break;
+    case kUnorderedLessThan:
+      __ B(vs, flabel);
+    // Fall through.
+    case kUnsignedLessThan:
+      __ B(lo, tlabel);
+      break;
+    case kUnorderedGreaterThanOrEqual:
+      __ B(vs, tlabel);
+    // Fall through.
+    case kUnsignedGreaterThanOrEqual:
+      __ B(hs, tlabel);
+      break;
+    case kUnorderedLessThanOrEqual:
+      __ B(vs, flabel);
+    // Fall through.
+    case kUnsignedLessThanOrEqual:
+      __ B(ls, tlabel);
+      break;
+    case kUnorderedGreaterThan:
+      __ B(vs, tlabel);
+    // Fall through.
+    case kUnsignedGreaterThan:
+      __ B(hi, tlabel);
+      break;
+    case kOverflow:
+      __ B(vs, tlabel);
+      break;
+    case kNotOverflow:
+      __ B(vc, tlabel);
+      break;
+  }
+  if (!fallthru) __ B(flabel);  // no fallthru to flabel.
+  __ Bind(&done);
+}
+
+
+// Assemble boolean materializations after this instruction.
+void CodeGenerator::AssembleArchBoolean(Instruction* instr,
+                                        FlagsCondition condition) {
+  Arm64OperandConverter i(this, instr);
+  Label done;
+
+  // Materialize a full 64-bit 1 or 0 value. The result register is always the
+  // last output of the instruction.
+  Label check;
+  DCHECK_NE(0, instr->OutputCount());
+  Register reg = i.OutputRegister(instr->OutputCount() - 1);
+  Condition cc = nv;
+  switch (condition) {
+    case kUnorderedEqual:
+      __ B(vc, &check);
+      __ Mov(reg, 0);
+      __ B(&done);
+    // Fall through.
+    case kEqual:
+      cc = eq;
+      break;
+    case kUnorderedNotEqual:
+      __ B(vc, &check);
+      __ Mov(reg, 1);
+      __ B(&done);
+    // Fall through.
+    case kNotEqual:
+      cc = ne;
+      break;
+    case kSignedLessThan:
+      cc = lt;
+      break;
+    case kSignedGreaterThanOrEqual:
+      cc = ge;
+      break;
+    case kSignedLessThanOrEqual:
+      cc = le;
+      break;
+    case kSignedGreaterThan:
+      cc = gt;
+      break;
+    case kUnorderedLessThan:
+      __ B(vc, &check);
+      __ Mov(reg, 0);
+      __ B(&done);
+    // Fall through.
+    case kUnsignedLessThan:
+      cc = lo;
+      break;
+    case kUnorderedGreaterThanOrEqual:
+      __ B(vc, &check);
+      __ Mov(reg, 1);
+      __ B(&done);
+    // Fall through.
+    case kUnsignedGreaterThanOrEqual:
+      cc = hs;
+      break;
+    case kUnorderedLessThanOrEqual:
+      __ B(vc, &check);
+      __ Mov(reg, 0);
+      __ B(&done);
+    // Fall through.
+    case kUnsignedLessThanOrEqual:
+      cc = ls;
+      break;
+    case kUnorderedGreaterThan:
+      __ B(vc, &check);
+      __ Mov(reg, 1);
+      __ B(&done);
+    // Fall through.
+    case kUnsignedGreaterThan:
+      cc = hi;
+      break;
+    case kOverflow:
+      cc = vs;
+      break;
+    case kNotOverflow:
+      cc = vc;
+      break;
+  }
+  __ bind(&check);
+  __ Cset(reg, cc);
+  __ Bind(&done);
+}
+
+
+// TODO(dcarney): increase stack slots in frame once before first use.
+static int AlignedStackSlots(int stack_slots) {
+  if (stack_slots & 1) stack_slots++;
+  return stack_slots;
+}
+
+
+void CodeGenerator::AssemblePrologue() {
+  CallDescriptor* descriptor = linkage()->GetIncomingDescriptor();
+  if (descriptor->kind() == CallDescriptor::kCallAddress) {
+    __ SetStackPointer(csp);
+    __ Push(lr, fp);
+    __ Mov(fp, csp);
+    // TODO(dcarney): correct callee saved registers.
+    __ PushCalleeSavedRegisters();
+    frame()->SetRegisterSaveAreaSize(20 * kPointerSize);
+  } else if (descriptor->IsJSFunctionCall()) {
+    CompilationInfo* info = linkage()->info();
+    __ SetStackPointer(jssp);
+    __ Prologue(info->IsCodePreAgingActive());
+    frame()->SetRegisterSaveAreaSize(
+        StandardFrameConstants::kFixedFrameSizeFromFp);
+
+    // Sloppy mode functions and builtins need to replace the receiver with the
+    // global proxy when called as functions (without an explicit receiver
+    // object).
+    // TODO(mstarzinger/verwaest): Should this be moved back into the CallIC?
+    if (info->strict_mode() == SLOPPY && !info->is_native()) {
+      Label ok;
+      // +2 for return address and saved frame pointer.
+      int receiver_slot = info->scope()->num_parameters() + 2;
+      __ Ldr(x10, MemOperand(fp, receiver_slot * kXRegSize));
+      __ JumpIfNotRoot(x10, Heap::kUndefinedValueRootIndex, &ok);
+      __ Ldr(x10, GlobalObjectMemOperand());
+      __ Ldr(x10, FieldMemOperand(x10, GlobalObject::kGlobalProxyOffset));
+      __ Str(x10, MemOperand(fp, receiver_slot * kXRegSize));
+      __ Bind(&ok);
+    }
+
+  } else {
+    __ SetStackPointer(jssp);
+    __ StubPrologue();
+    frame()->SetRegisterSaveAreaSize(
+        StandardFrameConstants::kFixedFrameSizeFromFp);
+  }
+  int stack_slots = frame()->GetSpillSlotCount();
+  if (stack_slots > 0) {
+    Register sp = __ StackPointer();
+    if (!sp.Is(csp)) {
+      __ Sub(sp, sp, stack_slots * kPointerSize);
+    }
+    __ Sub(csp, csp, AlignedStackSlots(stack_slots) * kPointerSize);
+  }
+}
+
+
+void CodeGenerator::AssembleReturn() {
+  CallDescriptor* descriptor = linkage()->GetIncomingDescriptor();
+  if (descriptor->kind() == CallDescriptor::kCallAddress) {
+    if (frame()->GetRegisterSaveAreaSize() > 0) {
+      // Remove this frame's spill slots first.
+      int stack_slots = frame()->GetSpillSlotCount();
+      if (stack_slots > 0) {
+        __ Add(csp, csp, AlignedStackSlots(stack_slots) * kPointerSize);
+      }
+      // Restore registers.
+      // TODO(dcarney): correct callee saved registers.
+      __ PopCalleeSavedRegisters();
+    }
+    __ Mov(csp, fp);
+    __ Pop(fp, lr);
+    __ Ret();
+  } else {
+    __ Mov(jssp, fp);
+    __ Pop(fp, lr);
+    int pop_count =
+        descriptor->IsJSFunctionCall() ? descriptor->ParameterCount() : 0;
+    __ Drop(pop_count);
+    __ Ret();
+  }
+}
+
+
+void CodeGenerator::AssembleMove(InstructionOperand* source,
+                                 InstructionOperand* destination) {
+  Arm64OperandConverter g(this, NULL);
+  // Dispatch on the source and destination operand kinds. Not all
+  // combinations are possible.
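+  // Sources may be registers, stack slots, constants, or their double
+  // counterparts; each branch below handles one source kind.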
+  if (source->IsRegister()) {
+    DCHECK(destination->IsRegister() || destination->IsStackSlot());
+    Register src = g.ToRegister(source);
+    if (destination->IsRegister()) {
+      __ Mov(g.ToRegister(destination), src);
+    } else {
+      __ Str(src, g.ToMemOperand(destination, masm()));
+    }
+  } else if (source->IsStackSlot()) {
+    MemOperand src = g.ToMemOperand(source, masm());
+    DCHECK(destination->IsRegister() || destination->IsStackSlot());
+    if (destination->IsRegister()) {
+      __ Ldr(g.ToRegister(destination), src);
+    } else {
+      UseScratchRegisterScope scope(masm());
+      Register temp = scope.AcquireX();
+      __ Ldr(temp, src);
+      __ Str(temp, g.ToMemOperand(destination, masm()));
+    }
+  } else if (source->IsConstant()) {
+    ConstantOperand* constant_source = ConstantOperand::cast(source);
+    if (destination->IsRegister() || destination->IsStackSlot()) {
+      UseScratchRegisterScope scope(masm());
+      Register dst = destination->IsRegister() ? g.ToRegister(destination)
+                                               : scope.AcquireX();
+      Constant src = g.ToConstant(source);
+      if (src.type() == Constant::kHeapObject) {
+        __ LoadObject(dst, src.ToHeapObject());
+      } else {
+        __ Mov(dst, g.ToImmediate(source));
+      }
+      if (destination->IsStackSlot()) {
+        __ Str(dst, g.ToMemOperand(destination, masm()));
+      }
+    } else if (destination->IsDoubleRegister()) {
+      FPRegister result = g.ToDoubleRegister(destination);
+      __ Fmov(result, g.ToDouble(constant_source));
+    } else {
+      DCHECK(destination->IsDoubleStackSlot());
+      UseScratchRegisterScope scope(masm());
+      FPRegister temp = scope.AcquireD();
+      __ Fmov(temp, g.ToDouble(constant_source));
+      __ Str(temp, g.ToMemOperand(destination, masm()));
+    }
+  } else if (source->IsDoubleRegister()) {
+    FPRegister src = g.ToDoubleRegister(source);
+    if (destination->IsDoubleRegister()) {
+      FPRegister dst = g.ToDoubleRegister(destination);
+      __ Fmov(dst, src);
+    } else {
+      DCHECK(destination->IsDoubleStackSlot());
+      __ Str(src, g.ToMemOperand(destination, masm()));
+    }
+  } else if (source->IsDoubleStackSlot()) {
+    DCHECK(destination->IsDoubleRegister() || destination->IsDoubleStackSlot());
+    MemOperand src = g.ToMemOperand(source, masm());
+    if (destination->IsDoubleRegister()) {
+      __ Ldr(g.ToDoubleRegister(destination), src);
+    } else {
+      UseScratchRegisterScope scope(masm());
+      FPRegister temp = scope.AcquireD();
+      __ Ldr(temp, src);
+      __ Str(temp, g.ToMemOperand(destination, masm()));
+    }
+  } else {
+    UNREACHABLE();
+  }
+}
+
+
+void CodeGenerator::AssembleSwap(InstructionOperand* source,
+                                 InstructionOperand* destination) {
+  Arm64OperandConverter g(this, NULL);
+  // Dispatch on the source and destination operand kinds. Not all
+  // combinations are possible.
+  if (source->IsRegister()) {
+    // Register-register.
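+    // ARM64 has no register-swap instruction, so swap via a scratch register.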
+    UseScratchRegisterScope scope(masm());
+    Register temp = scope.AcquireX();
+    Register src = g.ToRegister(source);
+    if (destination->IsRegister()) {
+      Register dst = g.ToRegister(destination);
+      __ Mov(temp, src);
+      __ Mov(src, dst);
+      __ Mov(dst, temp);
+    } else {
+      DCHECK(destination->IsStackSlot());
+      MemOperand dst = g.ToMemOperand(destination, masm());
+      __ Mov(temp, src);
+      __ Ldr(src, dst);
+      __ Str(temp, dst);
+    }
+  } else if (source->IsStackSlot() || source->IsDoubleStackSlot()) {
+    UseScratchRegisterScope scope(masm());
+    CPURegister temp_0 = scope.AcquireX();
+    CPURegister temp_1 = scope.AcquireX();
+    MemOperand src = g.ToMemOperand(source, masm());
+    MemOperand dst = g.ToMemOperand(destination, masm());
+    __ Ldr(temp_0, src);
+    __ Ldr(temp_1, dst);
+    __ Str(temp_0, dst);
+    __ Str(temp_1, src);
+  } else if (source->IsDoubleRegister()) {
+    UseScratchRegisterScope scope(masm());
+    FPRegister temp = scope.AcquireD();
+    FPRegister src = g.ToDoubleRegister(source);
+    if (destination->IsDoubleRegister()) {
+      FPRegister dst = g.ToDoubleRegister(destination);
+      __ Fmov(temp, src);
+      __ Fmov(src, dst);
+      __ Fmov(dst, temp);
+    } else {
+      DCHECK(destination->IsDoubleStackSlot());
+      MemOperand dst = g.ToMemOperand(destination, masm());
+      __ Fmov(temp, src);
+      __ Ldr(src, dst);
+      __ Str(temp, dst);
+    }
+  } else {
+    // No other combinations are possible.
+    UNREACHABLE();
+  }
+}
+
+
+void CodeGenerator::AddNopForSmiCodeInlining() { __ movz(xzr, 0); }
+
+#undef __
+
+#if DEBUG
+
+// Checks whether the code between start_pc and end_pc is a no-op.
+bool CodeGenerator::IsNopForSmiCodeInlining(Handle<Code> code, int start_pc,
+                                            int end_pc) {
+  if (start_pc + 4 != end_pc) {
+    return false;
+  }
+  Address instr_address = code->instruction_start() + start_pc;
+
+  v8::internal::Instruction* instr =
+      reinterpret_cast<v8::internal::Instruction*>(instr_address);
+  return instr->IsMovz() && instr->Rd() == xzr.code() && instr->SixtyFourBits();
+}
+
+#endif  // DEBUG
+
+}  // namespace compiler
+}  // namespace internal
+}  // namespace v8
diff --git a/deps/v8/src/compiler/arm64/instruction-codes-arm64.h b/deps/v8/src/compiler/arm64/instruction-codes-arm64.h
new file mode 100644
index 00000000000..2d71c02ef03
--- /dev/null
+++ b/deps/v8/src/compiler/arm64/instruction-codes-arm64.h
@@ -0,0 +1,103 @@
+// Copyright 2014 the V8 project authors. All rights reserved.
+// Use of this source code is governed by a BSD-style license that can be
+// found in the LICENSE file.
+
+#ifndef V8_COMPILER_ARM64_INSTRUCTION_CODES_ARM64_H_
+#define V8_COMPILER_ARM64_INSTRUCTION_CODES_ARM64_H_
+
+namespace v8 {
+namespace internal {
+namespace compiler {
+
+// ARM64-specific opcodes that specify which assembly sequence to emit.
+// Most opcodes specify a single instruction.
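+// Opcodes suffixed with "32" operate on the registers' 32-bit W views; the
+// unsuffixed forms use the full 64-bit X registers.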
+#define TARGET_ARCH_OPCODE_LIST(V) \
+  V(Arm64Add)                      \
+  V(Arm64Add32)                    \
+  V(Arm64And)                      \
+  V(Arm64And32)                    \
+  V(Arm64Cmp)                      \
+  V(Arm64Cmp32)                    \
+  V(Arm64Tst)                      \
+  V(Arm64Tst32)                    \
+  V(Arm64Or)                       \
+  V(Arm64Or32)                     \
+  V(Arm64Xor)                      \
+  V(Arm64Xor32)                    \
+  V(Arm64Sub)                      \
+  V(Arm64Sub32)                    \
+  V(Arm64Mul)                      \
+  V(Arm64Mul32)                    \
+  V(Arm64Idiv)                     \
+  V(Arm64Idiv32)                   \
+  V(Arm64Udiv)                     \
+  V(Arm64Udiv32)                   \
+  V(Arm64Imod)                     \
+  V(Arm64Imod32)                   \
+  V(Arm64Umod)                     \
+  V(Arm64Umod32)                   \
+  V(Arm64Not)                      \
+  V(Arm64Not32)                    \
+  V(Arm64Neg)                      \
+  V(Arm64Neg32)                    \
+  V(Arm64Shl)                      \
+  V(Arm64Shl32)                    \
+  V(Arm64Shr)                      \
+  V(Arm64Shr32)                    \
+  V(Arm64Sar)                      \
+  V(Arm64Sar32)                    \
+  V(Arm64CallCodeObject)           \
+  V(Arm64CallJSFunction)           \
+  V(Arm64CallAddress)              \
+  V(Arm64Claim)                    \
+  V(Arm64Poke)                     \
+  V(Arm64PokePairZero)             \
+  V(Arm64PokePair)                 \
+  V(Arm64Drop)                     \
+  V(Arm64Float64Cmp)               \
+  V(Arm64Float64Add)               \
+  V(Arm64Float64Sub)               \
+  V(Arm64Float64Mul)               \
+  V(Arm64Float64Div)               \
+  V(Arm64Float64Mod)               \
+  V(Arm64Int32ToInt64)             \
+  V(Arm64Int64ToInt32)             \
+  V(Arm64Float64ToInt32)           \
+  V(Arm64Float64ToUint32)          \
+  V(Arm64Int32ToFloat64)           \
+  V(Arm64Uint32ToFloat64)          \
+  V(Arm64Float64Load)              \
+  V(Arm64Float64Store)             \
+  V(Arm64LoadWord8)                \
+  V(Arm64StoreWord8)               \
+  V(Arm64LoadWord16)               \
+  V(Arm64StoreWord16)              \
+  V(Arm64LoadWord32)               \
+  V(Arm64StoreWord32)              \
+  V(Arm64LoadWord64)               \
+  V(Arm64StoreWord64)              \
+  V(Arm64StoreWriteBarrier)
+
+
+// Addressing modes represent the "shape" of inputs to an instruction.
+// Many instructions support multiple addressing modes. Addressing modes
+// are encoded into the InstructionCode of the instruction and tell the
+// code generator after register allocation which assembler method to call.
+//
+// We use the following local notation for addressing modes:
+//
+// R = register
+// O = register or stack slot
+// D = double register
+// I = immediate (handle, external, int32)
+// MRI = [register + immediate]
+// MRR = [register + register]
+#define TARGET_ADDRESSING_MODE_LIST(V) \
+  V(MRI) /* [%r0 + K] */               \
+  V(MRR) /* [%r0 + %r1] */
+
+}  // namespace compiler
+}  // namespace internal
+}  // namespace v8
+
+#endif  // V8_COMPILER_ARM64_INSTRUCTION_CODES_ARM64_H_
diff --git a/deps/v8/src/compiler/arm64/instruction-selector-arm64.cc b/deps/v8/src/compiler/arm64/instruction-selector-arm64.cc
new file mode 100644
index 00000000000..111ca2d956a
--- /dev/null
+++ b/deps/v8/src/compiler/arm64/instruction-selector-arm64.cc
@@ -0,0 +1,667 @@
+// Copyright 2014 the V8 project authors. All rights reserved.
+// Use of this source code is governed by a BSD-style license that can be
+// found in the LICENSE file.
+
+#include "src/compiler/instruction-selector-impl.h"
+#include "src/compiler/node-matchers.h"
+
+namespace v8 {
+namespace internal {
+namespace compiler {
+
+enum ImmediateMode {
+  kArithimeticImm,  // 12 bit unsigned immediate shifted left 0 or 12 bits
+  kShift32Imm,      // 0 - 31
+  kShift64Imm,      // 0 - 63
+  kLogical32Imm,
+  kLogical64Imm,
+  kLoadStoreImm,  // unsigned 9 bit or signed 7 bit
+  kNoImmediate
+};
+
+
+// Adds Arm64-specific methods for generating operands.
+class Arm64OperandGenerator V8_FINAL : public OperandGenerator {
+ public:
+  explicit Arm64OperandGenerator(InstructionSelector* selector)
+      : OperandGenerator(selector) {}
+
+  InstructionOperand* UseOperand(Node* node, ImmediateMode mode) {
+    if (CanBeImmediate(node, mode)) {
+      return UseImmediate(node);
+    }
+    return UseRegister(node);
+  }
+
+  bool CanBeImmediate(Node* node, ImmediateMode mode) {
+    int64_t value;
+    switch (node->opcode()) {
+      // TODO(turbofan): SMI number constants as immediates.
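+      // Only plain 32-bit integer constants are considered; whether the
+      // value is actually encodable depends on the ImmediateMode below.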
+      case IrOpcode::kInt32Constant:
+        value = ValueOf<int32_t>(node->op());
+        break;
+      default:
+        return false;
+    }
+    unsigned ignored;
+    switch (mode) {
+      case kLogical32Imm:
+        // TODO(dcarney): some unencodable values can be handled by
+        //                switching instructions.
+        return Assembler::IsImmLogical(static_cast<uint64_t>(value), 32,
+                                       &ignored, &ignored, &ignored);
+      case kLogical64Imm:
+        return Assembler::IsImmLogical(static_cast<uint64_t>(value), 64,
+                                       &ignored, &ignored, &ignored);
+      case kArithimeticImm:
+        // TODO(dcarney): -values can be handled by instruction swapping
+        return Assembler::IsImmAddSub(value);
+      case kShift32Imm:
+        return 0 <= value && value < 31;
+      case kShift64Imm:
+        return 0 <= value && value < 63;
+      case kLoadStoreImm:
+        return (0 <= value && value < (1 << 9)) ||
+               (-(1 << 6) <= value && value < (1 << 6));
+      case kNoImmediate:
+        return false;
+    }
+    return false;
+  }
+};
+
+
+static void VisitRR(InstructionSelector* selector, ArchOpcode opcode,
+                    Node* node) {
+  Arm64OperandGenerator g(selector);
+  selector->Emit(opcode, g.DefineAsRegister(node),
+                 g.UseRegister(node->InputAt(0)));
+}
+
+
+static void VisitRRR(InstructionSelector* selector, ArchOpcode opcode,
+                     Node* node) {
+  Arm64OperandGenerator g(selector);
+  selector->Emit(opcode, g.DefineAsRegister(node),
+                 g.UseRegister(node->InputAt(0)),
+                 g.UseRegister(node->InputAt(1)));
+}
+
+
+static void VisitRRRFloat64(InstructionSelector* selector, ArchOpcode opcode,
+                            Node* node) {
+  Arm64OperandGenerator g(selector);
+  selector->Emit(opcode, g.DefineAsDoubleRegister(node),
+                 g.UseDoubleRegister(node->InputAt(0)),
+                 g.UseDoubleRegister(node->InputAt(1)));
+}
+
+
+static void VisitRRO(InstructionSelector* selector, ArchOpcode opcode,
+                     Node* node, ImmediateMode operand_mode) {
+  Arm64OperandGenerator g(selector);
+  selector->Emit(opcode, g.DefineAsRegister(node),
+                 g.UseRegister(node->InputAt(0)),
+                 g.UseOperand(node->InputAt(1), operand_mode));
+}
+
+
+// Shared routine for multiple binary operations.
+static void VisitBinop(InstructionSelector* selector, Node* node,
+                       InstructionCode opcode, ImmediateMode operand_mode,
+                       FlagsContinuation* cont) {
+  Arm64OperandGenerator g(selector);
+  Int32BinopMatcher m(node);
+  InstructionOperand* inputs[4];
+  size_t input_count = 0;
+  InstructionOperand* outputs[2];
+  size_t output_count = 0;
+
+  inputs[input_count++] = g.UseRegister(m.left().node());
+  inputs[input_count++] = g.UseOperand(m.right().node(), operand_mode);
+
+  if (cont->IsBranch()) {
+    inputs[input_count++] = g.Label(cont->true_block());
+    inputs[input_count++] = g.Label(cont->false_block());
+  }
+
+  outputs[output_count++] = g.DefineAsRegister(node);
+  if (cont->IsSet()) {
+    outputs[output_count++] = g.DefineAsRegister(cont->result());
+  }
+
+  DCHECK_NE(0, input_count);
+  DCHECK_NE(0, output_count);
+  DCHECK_GE(ARRAY_SIZE(inputs), input_count);
+  DCHECK_GE(ARRAY_SIZE(outputs), output_count);
+
+  Instruction* instr = selector->Emit(cont->Encode(opcode), output_count,
+                                      outputs, input_count, inputs);
+  if (cont->IsBranch()) instr->MarkAsControl();
+}
+
+
+// Shared routine for multiple binary operations.
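+// Convenience overload for binary operations without a flags continuation.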
+static void VisitBinop(InstructionSelector* selector, Node* node, + ArchOpcode opcode, ImmediateMode operand_mode) { + FlagsContinuation cont; + VisitBinop(selector, node, opcode, operand_mode, &cont); +} + + +void InstructionSelector::VisitLoad(Node* node) { + MachineType rep = OpParameter<MachineType>(node); + Arm64OperandGenerator g(this); + Node* base = node->InputAt(0); + Node* index = node->InputAt(1); + + InstructionOperand* result = rep == kMachineFloat64 + ? g.DefineAsDoubleRegister(node) + : g.DefineAsRegister(node); + + ArchOpcode opcode; + switch (rep) { + case kMachineFloat64: + opcode = kArm64Float64Load; + break; + case kMachineWord8: + opcode = kArm64LoadWord8; + break; + case kMachineWord16: + opcode = kArm64LoadWord16; + break; + case kMachineWord32: + opcode = kArm64LoadWord32; + break; + case kMachineTagged: // Fall through. + case kMachineWord64: + opcode = kArm64LoadWord64; + break; + default: + UNREACHABLE(); + return; + } + if (g.CanBeImmediate(index, kLoadStoreImm)) { + Emit(opcode | AddressingModeField::encode(kMode_MRI), result, + g.UseRegister(base), g.UseImmediate(index)); + } else if (g.CanBeImmediate(base, kLoadStoreImm)) { + Emit(opcode | AddressingModeField::encode(kMode_MRI), result, + g.UseRegister(index), g.UseImmediate(base)); + } else { + Emit(opcode | AddressingModeField::encode(kMode_MRR), result, + g.UseRegister(base), g.UseRegister(index)); + } +} + + +void InstructionSelector::VisitStore(Node* node) { + Arm64OperandGenerator g(this); + Node* base = node->InputAt(0); + Node* index = node->InputAt(1); + Node* value = node->InputAt(2); + + StoreRepresentation store_rep = OpParameter<StoreRepresentation>(node); + MachineType rep = store_rep.rep; + if (store_rep.write_barrier_kind == kFullWriteBarrier) { + DCHECK(rep == kMachineTagged); + // TODO(dcarney): refactor RecordWrite function to take temp registers + // and pass them here instead of using fixed regs + // TODO(dcarney): handle immediate indices. + InstructionOperand* temps[] = {g.TempRegister(x11), g.TempRegister(x12)}; + Emit(kArm64StoreWriteBarrier, NULL, g.UseFixed(base, x10), + g.UseFixed(index, x11), g.UseFixed(value, x12), ARRAY_SIZE(temps), + temps); + return; + } + DCHECK_EQ(kNoWriteBarrier, store_rep.write_barrier_kind); + InstructionOperand* val; + if (rep == kMachineFloat64) { + val = g.UseDoubleRegister(value); + } else { + val = g.UseRegister(value); + } + ArchOpcode opcode; + switch (rep) { + case kMachineFloat64: + opcode = kArm64Float64Store; + break; + case kMachineWord8: + opcode = kArm64StoreWord8; + break; + case kMachineWord16: + opcode = kArm64StoreWord16; + break; + case kMachineWord32: + opcode = kArm64StoreWord32; + break; + case kMachineTagged: // Fall through. 
+    case kMachineWord64:
+      opcode = kArm64StoreWord64;
+      break;
+    default:
+      UNREACHABLE();
+      return;
+  }
+  if (g.CanBeImmediate(index, kLoadStoreImm)) {
+    Emit(opcode | AddressingModeField::encode(kMode_MRI), NULL,
+         g.UseRegister(base), g.UseImmediate(index), val);
+  } else if (g.CanBeImmediate(base, kLoadStoreImm)) {
+    Emit(opcode | AddressingModeField::encode(kMode_MRI), NULL,
+         g.UseRegister(index), g.UseImmediate(base), val);
+  } else {
+    Emit(opcode | AddressingModeField::encode(kMode_MRR), NULL,
+         g.UseRegister(base), g.UseRegister(index), val);
+  }
+}
+
+
+void InstructionSelector::VisitWord32And(Node* node) {
+  VisitBinop(this, node, kArm64And32, kLogical32Imm);
+}
+
+
+void InstructionSelector::VisitWord64And(Node* node) {
+  VisitBinop(this, node, kArm64And, kLogical64Imm);
+}
+
+
+void InstructionSelector::VisitWord32Or(Node* node) {
+  VisitBinop(this, node, kArm64Or32, kLogical32Imm);
+}
+
+
+void InstructionSelector::VisitWord64Or(Node* node) {
+  VisitBinop(this, node, kArm64Or, kLogical64Imm);
+}
+
+
+template <typename T>
+static void VisitXor(InstructionSelector* selector, Node* node,
+                     ArchOpcode xor_opcode, ArchOpcode not_opcode,
+                     ImmediateMode imm_mode) {
+  Arm64OperandGenerator g(selector);
+  BinopMatcher<IntMatcher<T>, IntMatcher<T> > m(node);
+  if (m.right().Is(-1)) {
+    selector->Emit(not_opcode, g.DefineAsRegister(node),
+                   g.UseRegister(m.left().node()));
+  } else {
+    VisitBinop(selector, node, xor_opcode, imm_mode);
+  }
+}
+
+
+void InstructionSelector::VisitWord32Xor(Node* node) {
+  VisitXor<int32_t>(this, node, kArm64Xor32, kArm64Not32, kLogical32Imm);
+}
+
+
+void InstructionSelector::VisitWord64Xor(Node* node) {
+  VisitXor<int64_t>(this, node, kArm64Xor, kArm64Not, kLogical64Imm);
+}
+
+
+void InstructionSelector::VisitWord32Shl(Node* node) {
+  VisitRRO(this, kArm64Shl32, node, kShift32Imm);
+}
+
+
+void InstructionSelector::VisitWord64Shl(Node* node) {
+  VisitRRO(this, kArm64Shl, node, kShift64Imm);
+}
+
+
+void InstructionSelector::VisitWord32Shr(Node* node) {
+  VisitRRO(this, kArm64Shr32, node, kShift32Imm);
+}
+
+
+void InstructionSelector::VisitWord64Shr(Node* node) {
+  VisitRRO(this, kArm64Shr, node, kShift64Imm);
+}
+
+
+void InstructionSelector::VisitWord32Sar(Node* node) {
+  VisitRRO(this, kArm64Sar32, node, kShift32Imm);
+}
+
+
+void InstructionSelector::VisitWord64Sar(Node* node) {
+  VisitRRO(this, kArm64Sar, node, kShift64Imm);
+}
+
+
+void InstructionSelector::VisitInt32Add(Node* node) {
+  VisitBinop(this, node, kArm64Add32, kArithmeticImm);
+}
+
+
+void InstructionSelector::VisitInt64Add(Node* node) {
+  VisitBinop(this, node, kArm64Add, kArithmeticImm);
+}
+
+
+template <typename T>
+static void VisitSub(InstructionSelector* selector, Node* node,
+                     ArchOpcode sub_opcode, ArchOpcode neg_opcode) {
+  Arm64OperandGenerator g(selector);
+  BinopMatcher<IntMatcher<T>, IntMatcher<T> > m(node);
+  if (m.left().Is(0)) {
+    selector->Emit(neg_opcode, g.DefineAsRegister(node),
+                   g.UseRegister(m.right().node()));
+  } else {
+    VisitBinop(selector, node, sub_opcode, kArithmeticImm);
+  }
+}
+
+
+void InstructionSelector::VisitInt32Sub(Node* node) {
+  VisitSub<int32_t>(this, node, kArm64Sub32, kArm64Neg32);
+}
+
+
+void InstructionSelector::VisitInt64Sub(Node* node) {
+  VisitSub<int64_t>(this, node, kArm64Sub, kArm64Neg);
+}
+
+
+void InstructionSelector::VisitInt32Mul(Node* node) {
+  VisitRRR(this, kArm64Mul32, node);
+}
+
+
+void InstructionSelector::VisitInt64Mul(Node* node) {
+  VisitRRR(this, kArm64Mul, node);
+}
+
+
+void InstructionSelector::VisitInt32Div(Node* node) {
+  VisitRRR(this, kArm64Idiv32, node);
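+  // Arm64's sdiv yields 0 on division by zero rather than trapping, so
+  // unlike on ia32/x64 no explicit guard instructions are needed here.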
+}
+
+
+void InstructionSelector::VisitInt64Div(Node* node) {
+  VisitRRR(this, kArm64Idiv, node);
+}
+
+
+void InstructionSelector::VisitInt32UDiv(Node* node) {
+  VisitRRR(this, kArm64Udiv32, node);
+}
+
+
+void InstructionSelector::VisitInt64UDiv(Node* node) {
+  VisitRRR(this, kArm64Udiv, node);
+}
+
+
+void InstructionSelector::VisitInt32Mod(Node* node) {
+  VisitRRR(this, kArm64Imod32, node);
+}
+
+
+void InstructionSelector::VisitInt64Mod(Node* node) {
+  VisitRRR(this, kArm64Imod, node);
+}
+
+
+void InstructionSelector::VisitInt32UMod(Node* node) {
+  VisitRRR(this, kArm64Umod32, node);
+}
+
+
+void InstructionSelector::VisitInt64UMod(Node* node) {
+  VisitRRR(this, kArm64Umod, node);
+}
+
+
+void InstructionSelector::VisitConvertInt32ToInt64(Node* node) {
+  VisitRR(this, kArm64Int32ToInt64, node);
+}
+
+
+void InstructionSelector::VisitConvertInt64ToInt32(Node* node) {
+  VisitRR(this, kArm64Int64ToInt32, node);
+}
+
+
+void InstructionSelector::VisitChangeInt32ToFloat64(Node* node) {
+  Arm64OperandGenerator g(this);
+  Emit(kArm64Int32ToFloat64, g.DefineAsDoubleRegister(node),
+       g.UseRegister(node->InputAt(0)));
+}
+
+
+void InstructionSelector::VisitChangeUint32ToFloat64(Node* node) {
+  Arm64OperandGenerator g(this);
+  Emit(kArm64Uint32ToFloat64, g.DefineAsDoubleRegister(node),
+       g.UseRegister(node->InputAt(0)));
+}
+
+
+void InstructionSelector::VisitChangeFloat64ToInt32(Node* node) {
+  Arm64OperandGenerator g(this);
+  Emit(kArm64Float64ToInt32, g.DefineAsRegister(node),
+       g.UseDoubleRegister(node->InputAt(0)));
+}
+
+
+void InstructionSelector::VisitChangeFloat64ToUint32(Node* node) {
+  Arm64OperandGenerator g(this);
+  Emit(kArm64Float64ToUint32, g.DefineAsRegister(node),
+       g.UseDoubleRegister(node->InputAt(0)));
+}
+
+
+void InstructionSelector::VisitFloat64Add(Node* node) {
+  VisitRRRFloat64(this, kArm64Float64Add, node);
+}
+
+
+void InstructionSelector::VisitFloat64Sub(Node* node) {
+  VisitRRRFloat64(this, kArm64Float64Sub, node);
+}
+
+
+void InstructionSelector::VisitFloat64Mul(Node* node) {
+  VisitRRRFloat64(this, kArm64Float64Mul, node);
+}
+
+
+void InstructionSelector::VisitFloat64Div(Node* node) {
+  VisitRRRFloat64(this, kArm64Float64Div, node);
+}
+
+
+void InstructionSelector::VisitFloat64Mod(Node* node) {
+  Arm64OperandGenerator g(this);
+  Emit(kArm64Float64Mod, g.DefineAsFixedDouble(node, d0),
+       g.UseFixedDouble(node->InputAt(0), d0),
+       g.UseFixedDouble(node->InputAt(1), d1))->MarkAsCall();
+}
+
+
+void InstructionSelector::VisitInt32AddWithOverflow(Node* node,
+                                                    FlagsContinuation* cont) {
+  VisitBinop(this, node, kArm64Add32, kArithmeticImm, cont);
+}
+
+
+void InstructionSelector::VisitInt32SubWithOverflow(Node* node,
+                                                    FlagsContinuation* cont) {
+  VisitBinop(this, node, kArm64Sub32, kArithmeticImm, cont);
+}
+
+
+// Shared routine for multiple compare operations.
+static void VisitCompare(InstructionSelector* selector, InstructionCode opcode,
+                         InstructionOperand* left, InstructionOperand* right,
+                         FlagsContinuation* cont) {
+  Arm64OperandGenerator g(selector);
+  opcode = cont->Encode(opcode);
+  if (cont->IsBranch()) {
+    selector->Emit(opcode, NULL, left, right, g.Label(cont->true_block()),
+                   g.Label(cont->false_block()))->MarkAsControl();
+  } else {
+    DCHECK(cont->IsSet());
+    selector->Emit(opcode, g.DefineAsRegister(cont->result()), left, right);
+  }
+}
+
+
+// Shared routine for multiple word compare operations.
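+// If the immediate happens to be on the left (e.g. 5 < x), the operands are
+// swapped and the continuation condition is commuted (5 < x becomes x > 5),
+// so the immediate can still be encoded into the compare instruction.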
+static void VisitWordCompare(InstructionSelector* selector, Node* node,
+                             InstructionCode opcode, FlagsContinuation* cont,
+                             bool commutative) {
+  Arm64OperandGenerator g(selector);
+  Node* left = node->InputAt(0);
+  Node* right = node->InputAt(1);
+
+  // Match immediates on left or right side of comparison.
+  if (g.CanBeImmediate(right, kArithmeticImm)) {
+    VisitCompare(selector, opcode, g.UseRegister(left), g.UseImmediate(right),
+                 cont);
+  } else if (g.CanBeImmediate(left, kArithmeticImm)) {
+    if (!commutative) cont->Commute();
+    VisitCompare(selector, opcode, g.UseRegister(right), g.UseImmediate(left),
+                 cont);
+  } else {
+    VisitCompare(selector, opcode, g.UseRegister(left), g.UseRegister(right),
+                 cont);
+  }
+}
+
+
+void InstructionSelector::VisitWord32Test(Node* node, FlagsContinuation* cont) {
+  switch (node->opcode()) {
+    case IrOpcode::kWord32And:
+      return VisitWordCompare(this, node, kArm64Tst32, cont, true);
+    default:
+      break;
+  }
+
+  Arm64OperandGenerator g(this);
+  VisitCompare(this, kArm64Tst32, g.UseRegister(node), g.UseRegister(node),
+               cont);
+}
+
+
+void InstructionSelector::VisitWord64Test(Node* node, FlagsContinuation* cont) {
+  switch (node->opcode()) {
+    case IrOpcode::kWord64And:
+      return VisitWordCompare(this, node, kArm64Tst, cont, true);
+    default:
+      break;
+  }
+
+  Arm64OperandGenerator g(this);
+  VisitCompare(this, kArm64Tst, g.UseRegister(node), g.UseRegister(node), cont);
+}
+
+
+void InstructionSelector::VisitWord32Compare(Node* node,
+                                             FlagsContinuation* cont) {
+  VisitWordCompare(this, node, kArm64Cmp32, cont, false);
+}
+
+
+void InstructionSelector::VisitWord64Compare(Node* node,
+                                             FlagsContinuation* cont) {
+  VisitWordCompare(this, node, kArm64Cmp, cont, false);
+}
+
+
+void InstructionSelector::VisitFloat64Compare(Node* node,
+                                              FlagsContinuation* cont) {
+  Arm64OperandGenerator g(this);
+  Node* left = node->InputAt(0);
+  Node* right = node->InputAt(1);
+  VisitCompare(this, kArm64Float64Cmp, g.UseDoubleRegister(left),
+               g.UseDoubleRegister(right), cont);
+}
+
+
+void InstructionSelector::VisitCall(Node* call, BasicBlock* continuation,
+                                    BasicBlock* deoptimization) {
+  Arm64OperandGenerator g(this);
+  CallDescriptor* descriptor = OpParameter<CallDescriptor*>(call);
+  CallBuffer buffer(zone(), descriptor);  // TODO(turbofan): temp zone here?
+
+  // Compute InstructionOperands for inputs and outputs.
+  // TODO(turbofan): on ARM64 it's probably better to use the code object in a
+  // register if there are multiple uses of it. Improve constant pool and the
+  // heuristics in the register allocator for where to emit constants.
+  InitializeCallBuffer(call, &buffer, true, false, continuation,
+                       deoptimization);
+
+  // Push the arguments to the stack.
+  bool is_c_frame = descriptor->kind() == CallDescriptor::kCallAddress;
+  bool pushed_count_uneven = buffer.pushed_count & 1;
+  int aligned_push_count = buffer.pushed_count;
+  if (is_c_frame && pushed_count_uneven) {
+    aligned_push_count++;
+  }
+  // TODO(dcarney): claim and poke probably take small immediates,
+  // loop here or whatever.
+  // Bump the stack pointer(s).
+  if (aligned_push_count > 0) {
+    // TODO(dcarney): it would be better to bump the csp here only
+    // and emit paired stores with increment for non c frames.
+    Emit(kArm64Claim | MiscField::encode(aligned_push_count), NULL);
+  }
+  // Move arguments to the stack.
+  {
+    int slot = buffer.pushed_count - 1;
+    // Emit the uneven pushes.
+    if (pushed_count_uneven) {
+      Node* input = buffer.pushed_nodes[slot];
+      ArchOpcode opcode = is_c_frame ?
kArm64PokePairZero : kArm64Poke; + Emit(opcode | MiscField::encode(slot), NULL, g.UseRegister(input)); + slot--; + } + // Now all pushes can be done in pairs. + for (; slot >= 0; slot -= 2) { + Emit(kArm64PokePair | MiscField::encode(slot), NULL, + g.UseRegister(buffer.pushed_nodes[slot]), + g.UseRegister(buffer.pushed_nodes[slot - 1])); + } + } + + // Select the appropriate opcode based on the call type. + InstructionCode opcode; + switch (descriptor->kind()) { + case CallDescriptor::kCallCodeObject: { + bool lazy_deopt = descriptor->CanLazilyDeoptimize(); + opcode = kArm64CallCodeObject | MiscField::encode(lazy_deopt ? 1 : 0); + break; + } + case CallDescriptor::kCallAddress: + opcode = kArm64CallAddress; + break; + case CallDescriptor::kCallJSFunction: + opcode = kArm64CallJSFunction; + break; + default: + UNREACHABLE(); + return; + } + + // Emit the call instruction. + Instruction* call_instr = + Emit(opcode, buffer.output_count, buffer.outputs, + buffer.fixed_and_control_count(), buffer.fixed_and_control_args); + + call_instr->MarkAsCall(); + if (deoptimization != NULL) { + DCHECK(continuation != NULL); + call_instr->MarkAsControl(); + } + + // Caller clean up of stack for C-style calls. + if (is_c_frame && aligned_push_count > 0) { + DCHECK(deoptimization == NULL && continuation == NULL); + Emit(kArm64Drop | MiscField::encode(aligned_push_count), NULL); + } +} + +} // namespace compiler +} // namespace internal +} // namespace v8 diff --git a/deps/v8/src/compiler/arm64/linkage-arm64.cc b/deps/v8/src/compiler/arm64/linkage-arm64.cc new file mode 100644 index 00000000000..186f2d59da3 --- /dev/null +++ b/deps/v8/src/compiler/arm64/linkage-arm64.cc @@ -0,0 +1,68 @@ +// Copyright 2014 the V8 project authors. All rights reserved. +// Use of this source code is governed by a BSD-style license that can be +// found in the LICENSE file. + +#include "src/v8.h" + +#include "src/assembler.h" +#include "src/code-stubs.h" +#include "src/compiler/linkage.h" +#include "src/compiler/linkage-impl.h" +#include "src/zone.h" + +namespace v8 { +namespace internal { +namespace compiler { + +struct LinkageHelperTraits { + static Register ReturnValueReg() { return x0; } + static Register ReturnValue2Reg() { return x1; } + static Register JSCallFunctionReg() { return x1; } + static Register ContextReg() { return cp; } + static Register RuntimeCallFunctionReg() { return x1; } + static Register RuntimeCallArgCountReg() { return x0; } + static RegList CCalleeSaveRegisters() { + // TODO(dcarney): correct callee saved registers. 
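+    // On AAPCS64 the callee-saved set would be x19-x28 (plus fp and lr) and
+    // d8-d15; returning 0 conservatively assumes that no register survives
+    // a C call, forcing values live across the call to be spilled.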
+ return 0; + } + static Register CRegisterParameter(int i) { + static Register register_parameters[] = {x0, x1, x2, x3, x4, x5, x6, x7}; + return register_parameters[i]; + } + static int CRegisterParametersLength() { return 8; } +}; + + +CallDescriptor* Linkage::GetJSCallDescriptor(int parameter_count, Zone* zone) { + return LinkageHelper::GetJSCallDescriptor<LinkageHelperTraits>( + zone, parameter_count); +} + + +CallDescriptor* Linkage::GetRuntimeCallDescriptor( + Runtime::FunctionId function, int parameter_count, + Operator::Property properties, + CallDescriptor::DeoptimizationSupport can_deoptimize, Zone* zone) { + return LinkageHelper::GetRuntimeCallDescriptor<LinkageHelperTraits>( + zone, function, parameter_count, properties, can_deoptimize); +} + + +CallDescriptor* Linkage::GetStubCallDescriptor( + CodeStubInterfaceDescriptor* descriptor, int stack_parameter_count, + CallDescriptor::DeoptimizationSupport can_deoptimize, Zone* zone) { + return LinkageHelper::GetStubCallDescriptor<LinkageHelperTraits>( + zone, descriptor, stack_parameter_count, can_deoptimize); +} + + +CallDescriptor* Linkage::GetSimplifiedCDescriptor( + Zone* zone, int num_params, MachineType return_type, + const MachineType* param_types) { + return LinkageHelper::GetSimplifiedCDescriptor<LinkageHelperTraits>( + zone, num_params, return_type, param_types); +} + +} +} +} // namespace v8::internal::compiler diff --git a/deps/v8/src/compiler/ast-graph-builder.cc b/deps/v8/src/compiler/ast-graph-builder.cc new file mode 100644 index 00000000000..49a67157c7a --- /dev/null +++ b/deps/v8/src/compiler/ast-graph-builder.cc @@ -0,0 +1,2055 @@ +// Copyright 2014 the V8 project authors. All rights reserved. +// Use of this source code is governed by a BSD-style license that can be +// found in the LICENSE file. + +#include "src/compiler/ast-graph-builder.h" + +#include "src/compiler.h" +#include "src/compiler/control-builders.h" +#include "src/compiler/node-properties.h" +#include "src/compiler/node-properties-inl.h" +#include "src/full-codegen.h" +#include "src/parser.h" +#include "src/scopes.h" + +namespace v8 { +namespace internal { +namespace compiler { + +AstGraphBuilder::AstGraphBuilder(CompilationInfo* info, JSGraph* jsgraph) + : StructuredGraphBuilder(jsgraph->graph(), jsgraph->common()), + info_(info), + jsgraph_(jsgraph), + globals_(0, info->zone()), + breakable_(NULL), + execution_context_(NULL) { + InitializeAstVisitor(info->zone()); +} + + +Node* AstGraphBuilder::GetFunctionClosure() { + if (!function_closure_.is_set()) { + // Parameter -1 is special for the function closure + Operator* op = common()->Parameter(-1); + Node* node = NewNode(op, graph()->start()); + function_closure_.set(node); + } + return function_closure_.get(); +} + + +Node* AstGraphBuilder::GetFunctionContext() { + if (!function_context_.is_set()) { + // Parameter (arity + 1) is special for the outer context of the function + Operator* op = common()->Parameter(info()->num_parameters() + 1); + Node* node = NewNode(op, graph()->start()); + function_context_.set(node); + } + return function_context_.get(); +} + + +bool AstGraphBuilder::CreateGraph() { + Scope* scope = info()->scope(); + DCHECK(graph() != NULL); + + // Set up the basic structure of the graph. + int parameter_count = info()->num_parameters(); + graph()->SetStart(graph()->NewNode(common()->Start(parameter_count))); + + // Initialize the top-level environment. 
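+  // The environment maps the receiver, the parameters and the local variable
+  // slots to graph nodes and carries the current effect and control
+  // dependencies as the builder walks the AST.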
+ Environment env(this, scope, graph()->start()); + set_environment(&env); + + // Build node to initialize local function context. + Node* closure = GetFunctionClosure(); + Node* outer = GetFunctionContext(); + Node* inner = BuildLocalFunctionContext(outer, closure); + + // Push top-level function scope for the function body. + ContextScope top_context(this, scope, inner); + + // Build the arguments object if it is used. + BuildArgumentsObject(scope->arguments()); + + // Emit tracing call if requested to do so. + if (FLAG_trace) { + NewNode(javascript()->Runtime(Runtime::kTraceEnter, 0)); + } + + // Visit implicit declaration of the function name. + if (scope->is_function_scope() && scope->function() != NULL) { + VisitVariableDeclaration(scope->function()); + } + + // Visit declarations within the function scope. + VisitDeclarations(scope->declarations()); + + // TODO(mstarzinger): This should do an inlined stack check. + NewNode(javascript()->Runtime(Runtime::kStackGuard, 0)); + + // Visit statements in the function body. + VisitStatements(info()->function()->body()); + if (HasStackOverflow()) return false; + + // Emit tracing call if requested to do so. + if (FLAG_trace) { + // TODO(mstarzinger): Only traces implicit return. + Node* return_value = jsgraph()->UndefinedConstant(); + NewNode(javascript()->Runtime(Runtime::kTraceExit, 1), return_value); + } + + // Return 'undefined' in case we can fall off the end. + Node* control = NewNode(common()->Return(), jsgraph()->UndefinedConstant()); + UpdateControlDependencyToLeaveFunction(control); + + // Finish the basic structure of the graph. + environment()->UpdateControlDependency(exit_control()); + graph()->SetEnd(NewNode(common()->End())); + + return true; +} + + +// Left-hand side can only be a property, a global or a variable slot. +enum LhsKind { VARIABLE, NAMED_PROPERTY, KEYED_PROPERTY }; + + +// Determine the left-hand side kind of an assignment. +static LhsKind DetermineLhsKind(Expression* expr) { + Property* property = expr->AsProperty(); + DCHECK(expr->IsValidReferenceExpression()); + LhsKind lhs_kind = + (property == NULL) ? VARIABLE : (property->key()->IsPropertyName()) + ? NAMED_PROPERTY + : KEYED_PROPERTY; + return lhs_kind; +} + + +// Helper to find an existing shared function info in the baseline code for the +// given function literal. Used to canonicalize SharedFunctionInfo objects. 
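+// Candidates are matched by source start position, which uniquely identifies
+// a function literal within a script.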
+static Handle<SharedFunctionInfo> SearchSharedFunctionInfo( + Code* unoptimized_code, FunctionLiteral* expr) { + int start_position = expr->start_position(); + for (RelocIterator it(unoptimized_code); !it.done(); it.next()) { + RelocInfo* rinfo = it.rinfo(); + if (rinfo->rmode() != RelocInfo::EMBEDDED_OBJECT) continue; + Object* obj = rinfo->target_object(); + if (obj->IsSharedFunctionInfo()) { + SharedFunctionInfo* shared = SharedFunctionInfo::cast(obj); + if (shared->start_position() == start_position) { + return Handle<SharedFunctionInfo>(shared); + } + } + } + return Handle<SharedFunctionInfo>(); +} + + +StructuredGraphBuilder::Environment* AstGraphBuilder::CopyEnvironment( + StructuredGraphBuilder::Environment* env) { + return new (zone()) Environment(*reinterpret_cast<Environment*>(env)); +} + + +AstGraphBuilder::Environment::Environment(AstGraphBuilder* builder, + Scope* scope, + Node* control_dependency) + : StructuredGraphBuilder::Environment(builder, control_dependency), + parameters_count_(scope->num_parameters() + 1), + locals_count_(scope->num_stack_slots()), + parameters_node_(NULL), + locals_node_(NULL), + stack_node_(NULL), + parameters_dirty_(true), + locals_dirty_(true), + stack_dirty_(true) { + DCHECK_EQ(scope->num_parameters() + 1, parameters_count()); + + // Bind the receiver variable. + Node* receiver = builder->graph()->NewNode(common()->Parameter(0), + builder->graph()->start()); + values()->push_back(receiver); + + // Bind all parameter variables. The parameter indices are shifted by 1 + // (receiver is parameter index -1 but environment index 0). + for (int i = 0; i < scope->num_parameters(); ++i) { + Node* parameter = builder->graph()->NewNode(common()->Parameter(i + 1), + builder->graph()->start()); + values()->push_back(parameter); + } + + // Bind all local variables to undefined. 
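+  // A single undefined constant node is shared by all local slots.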
+ Node* undefined_constant = builder->jsgraph()->UndefinedConstant(); + values()->insert(values()->end(), locals_count(), undefined_constant); +} + + +AstGraphBuilder::Environment::Environment(const Environment& copy) + : StructuredGraphBuilder::Environment( + static_cast<StructuredGraphBuilder::Environment>(copy)), + parameters_count_(copy.parameters_count_), + locals_count_(copy.locals_count_), + parameters_node_(copy.parameters_node_), + locals_node_(copy.locals_node_), + stack_node_(copy.stack_node_), + parameters_dirty_(copy.parameters_dirty_), + locals_dirty_(copy.locals_dirty_), + stack_dirty_(copy.stack_dirty_) {} + + +Node* AstGraphBuilder::Environment::Checkpoint(BailoutId ast_id) { + if (parameters_dirty_) { + Operator* op = common()->StateValues(parameters_count()); + if (parameters_count() != 0) { + Node** parameters = &values()->front(); + parameters_node_ = graph()->NewNode(op, parameters_count(), parameters); + } else { + parameters_node_ = graph()->NewNode(op); + } + parameters_dirty_ = false; + } + if (locals_dirty_) { + Operator* op = common()->StateValues(locals_count()); + if (locals_count() != 0) { + Node** locals = &values()->at(parameters_count_); + locals_node_ = graph()->NewNode(op, locals_count(), locals); + } else { + locals_node_ = graph()->NewNode(op); + } + locals_dirty_ = false; + } + if (stack_dirty_) { + Operator* op = common()->StateValues(stack_height()); + if (stack_height() != 0) { + Node** stack = &values()->at(parameters_count_ + locals_count_); + stack_node_ = graph()->NewNode(op, stack_height(), stack); + } else { + stack_node_ = graph()->NewNode(op); + } + stack_dirty_ = false; + } + + Operator* op = common()->FrameState(ast_id); + + return graph()->NewNode(op, parameters_node_, locals_node_, stack_node_); +} + + +AstGraphBuilder::AstContext::AstContext(AstGraphBuilder* own, + Expression::Context kind, + BailoutId bailout_id) + : bailout_id_(bailout_id), + kind_(kind), + owner_(own), + outer_(own->ast_context()) { + owner()->set_ast_context(this); // Push. +#ifdef DEBUG + original_height_ = environment()->stack_height(); +#endif +} + + +AstGraphBuilder::AstContext::~AstContext() { + owner()->set_ast_context(outer_); // Pop. +} + + +AstGraphBuilder::AstEffectContext::~AstEffectContext() { + DCHECK(environment()->stack_height() == original_height_); +} + + +AstGraphBuilder::AstValueContext::~AstValueContext() { + DCHECK(environment()->stack_height() == original_height_ + 1); +} + + +AstGraphBuilder::AstTestContext::~AstTestContext() { + DCHECK(environment()->stack_height() == original_height_ + 1); +} + + +void AstGraphBuilder::AstEffectContext::ProduceValueWithLazyBailout( + Node* value) { + ProduceValue(value); + owner()->BuildLazyBailout(value, bailout_id_); +} + + +void AstGraphBuilder::AstValueContext::ProduceValueWithLazyBailout( + Node* value) { + ProduceValue(value); + owner()->BuildLazyBailout(value, bailout_id_); +} + + +void AstGraphBuilder::AstTestContext::ProduceValueWithLazyBailout(Node* value) { + environment()->Push(value); + owner()->BuildLazyBailout(value, bailout_id_); + environment()->Pop(); + ProduceValue(value); +} + + +void AstGraphBuilder::AstEffectContext::ProduceValue(Node* value) { + // The value is ignored. 
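+  // Only the node's side effects matter in an effect context; they were
+  // already wired into the effect chain when the node was created.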
+} + + +void AstGraphBuilder::AstValueContext::ProduceValue(Node* value) { + environment()->Push(value); +} + + +void AstGraphBuilder::AstTestContext::ProduceValue(Node* value) { + environment()->Push(owner()->BuildToBoolean(value)); +} + + +Node* AstGraphBuilder::AstEffectContext::ConsumeValue() { return NULL; } + + +Node* AstGraphBuilder::AstValueContext::ConsumeValue() { + return environment()->Pop(); +} + + +Node* AstGraphBuilder::AstTestContext::ConsumeValue() { + return environment()->Pop(); +} + + +AstGraphBuilder::BreakableScope* AstGraphBuilder::BreakableScope::FindBreakable( + BreakableStatement* target) { + BreakableScope* current = this; + while (current != NULL && current->target_ != target) { + owner_->environment()->Drop(current->drop_extra_); + current = current->next_; + } + DCHECK(current != NULL); // Always found (unless stack is malformed). + return current; +} + + +void AstGraphBuilder::BreakableScope::BreakTarget(BreakableStatement* stmt) { + FindBreakable(stmt)->control_->Break(); +} + + +void AstGraphBuilder::BreakableScope::ContinueTarget(BreakableStatement* stmt) { + FindBreakable(stmt)->control_->Continue(); +} + + +void AstGraphBuilder::VisitForValueOrNull(Expression* expr) { + if (expr == NULL) { + return environment()->Push(jsgraph()->NullConstant()); + } + VisitForValue(expr); +} + + +void AstGraphBuilder::VisitForValues(ZoneList<Expression*>* exprs) { + for (int i = 0; i < exprs->length(); ++i) { + VisitForValue(exprs->at(i)); + } +} + + +void AstGraphBuilder::VisitForValue(Expression* expr) { + AstValueContext for_value(this, expr->id()); + if (!HasStackOverflow()) { + expr->Accept(this); + } +} + + +void AstGraphBuilder::VisitForEffect(Expression* expr) { + AstEffectContext for_effect(this, expr->id()); + if (!HasStackOverflow()) { + expr->Accept(this); + } +} + + +void AstGraphBuilder::VisitForTest(Expression* expr) { + AstTestContext for_condition(this, expr->id()); + if (!HasStackOverflow()) { + expr->Accept(this); + } +} + + +void AstGraphBuilder::VisitVariableDeclaration(VariableDeclaration* decl) { + Variable* variable = decl->proxy()->var(); + VariableMode mode = decl->mode(); + bool hole_init = mode == CONST || mode == CONST_LEGACY || mode == LET; + switch (variable->location()) { + case Variable::UNALLOCATED: { + Handle<Oddball> value = variable->binding_needs_init() + ? isolate()->factory()->the_hole_value() + : isolate()->factory()->undefined_value(); + globals()->Add(variable->name(), zone()); + globals()->Add(value, zone()); + break; + } + case Variable::PARAMETER: + case Variable::LOCAL: + if (hole_init) { + Node* value = jsgraph()->TheHoleConstant(); + environment()->Bind(variable, value); + } + break; + case Variable::CONTEXT: + if (hole_init) { + Node* value = jsgraph()->TheHoleConstant(); + Operator* op = javascript()->StoreContext(0, variable->index()); + NewNode(op, current_context(), value); + } + break; + case Variable::LOOKUP: + UNIMPLEMENTED(); + } +} + + +void AstGraphBuilder::VisitFunctionDeclaration(FunctionDeclaration* decl) { + Variable* variable = decl->proxy()->var(); + switch (variable->location()) { + case Variable::UNALLOCATED: { + Handle<SharedFunctionInfo> function = + Compiler::BuildFunctionInfo(decl->fun(), info()->script(), info()); + // Check for stack-overflow exception. 
+ if (function.is_null()) return SetStackOverflow(); + globals()->Add(variable->name(), zone()); + globals()->Add(function, zone()); + break; + } + case Variable::PARAMETER: + case Variable::LOCAL: { + VisitForValue(decl->fun()); + Node* value = environment()->Pop(); + environment()->Bind(variable, value); + break; + } + case Variable::CONTEXT: { + VisitForValue(decl->fun()); + Node* value = environment()->Pop(); + Operator* op = javascript()->StoreContext(0, variable->index()); + NewNode(op, current_context(), value); + break; + } + case Variable::LOOKUP: + UNIMPLEMENTED(); + } +} + + +void AstGraphBuilder::VisitModuleDeclaration(ModuleDeclaration* decl) { + UNREACHABLE(); +} + + +void AstGraphBuilder::VisitImportDeclaration(ImportDeclaration* decl) { + UNREACHABLE(); +} + + +void AstGraphBuilder::VisitExportDeclaration(ExportDeclaration* decl) { + UNREACHABLE(); +} + + +void AstGraphBuilder::VisitModuleLiteral(ModuleLiteral* modl) { UNREACHABLE(); } + + +void AstGraphBuilder::VisitModuleVariable(ModuleVariable* modl) { + UNREACHABLE(); +} + + +void AstGraphBuilder::VisitModulePath(ModulePath* modl) { UNREACHABLE(); } + + +void AstGraphBuilder::VisitModuleUrl(ModuleUrl* modl) { UNREACHABLE(); } + + +void AstGraphBuilder::VisitBlock(Block* stmt) { + BlockBuilder block(this); + BreakableScope scope(this, stmt, &block, 0); + if (stmt->labels() != NULL) block.BeginBlock(); + if (stmt->scope() == NULL) { + // Visit statements in the same scope, no declarations. + VisitStatements(stmt->statements()); + } else { + Operator* op = javascript()->CreateBlockContext(); + Node* scope_info = jsgraph()->Constant(stmt->scope()->GetScopeInfo()); + Node* context = NewNode(op, scope_info, GetFunctionClosure()); + ContextScope scope(this, stmt->scope(), context); + + // Visit declarations and statements in a block scope. + VisitDeclarations(stmt->scope()->declarations()); + VisitStatements(stmt->statements()); + } + if (stmt->labels() != NULL) block.EndBlock(); +} + + +void AstGraphBuilder::VisitModuleStatement(ModuleStatement* stmt) { + UNREACHABLE(); +} + + +void AstGraphBuilder::VisitExpressionStatement(ExpressionStatement* stmt) { + VisitForEffect(stmt->expression()); +} + + +void AstGraphBuilder::VisitEmptyStatement(EmptyStatement* stmt) { + // Do nothing. 
+} + + +void AstGraphBuilder::VisitIfStatement(IfStatement* stmt) { + IfBuilder compare_if(this); + VisitForTest(stmt->condition()); + Node* condition = environment()->Pop(); + compare_if.If(condition); + compare_if.Then(); + Visit(stmt->then_statement()); + compare_if.Else(); + Visit(stmt->else_statement()); + compare_if.End(); +} + + +void AstGraphBuilder::VisitContinueStatement(ContinueStatement* stmt) { + StructuredGraphBuilder::Environment* env = environment()->CopyAsUnreachable(); + breakable()->ContinueTarget(stmt->target()); + set_environment(env); +} + + +void AstGraphBuilder::VisitBreakStatement(BreakStatement* stmt) { + StructuredGraphBuilder::Environment* env = environment()->CopyAsUnreachable(); + breakable()->BreakTarget(stmt->target()); + set_environment(env); +} + + +void AstGraphBuilder::VisitReturnStatement(ReturnStatement* stmt) { + VisitForValue(stmt->expression()); + Node* result = environment()->Pop(); + Node* control = NewNode(common()->Return(), result); + UpdateControlDependencyToLeaveFunction(control); +} + + +void AstGraphBuilder::VisitWithStatement(WithStatement* stmt) { + VisitForValue(stmt->expression()); + Node* value = environment()->Pop(); + Operator* op = javascript()->CreateWithContext(); + Node* context = NewNode(op, value, GetFunctionClosure()); + ContextScope scope(this, stmt->scope(), context); + Visit(stmt->statement()); +} + + +void AstGraphBuilder::VisitSwitchStatement(SwitchStatement* stmt) { + ZoneList<CaseClause*>* clauses = stmt->cases(); + SwitchBuilder compare_switch(this, clauses->length()); + BreakableScope scope(this, stmt, &compare_switch, 0); + compare_switch.BeginSwitch(); + int default_index = -1; + + // Keep the switch value on the stack until a case matches. + VisitForValue(stmt->tag()); + Node* tag = environment()->Top(); + + // Iterate over all cases and create nodes for label comparison. + for (int i = 0; i < clauses->length(); i++) { + CaseClause* clause = clauses->at(i); + + // The default is not a test, remember index. + if (clause->is_default()) { + default_index = i; + continue; + } + + // Create nodes to perform label comparison as if via '==='. The switch + // value is still on the operand stack while the label is evaluated. + VisitForValue(clause->label()); + Node* label = environment()->Pop(); + Operator* op = javascript()->StrictEqual(); + Node* condition = NewNode(op, tag, label); + compare_switch.BeginLabel(i, condition); + + // Discard the switch value at label match. + environment()->Pop(); + compare_switch.EndLabel(); + } + + // Discard the switch value and mark the default case. + environment()->Pop(); + if (default_index >= 0) { + compare_switch.DefaultAt(default_index); + } + + // Iterate over all cases and create nodes for case bodies. 
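+  // The bodies are emitted in a second pass so that control can fall through
+  // from one case body into the next, as JavaScript switch semantics require.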
+ for (int i = 0; i < clauses->length(); i++) { + CaseClause* clause = clauses->at(i); + compare_switch.BeginCase(i); + VisitStatements(clause->statements()); + compare_switch.EndCase(); + } + + compare_switch.EndSwitch(); +} + + +void AstGraphBuilder::VisitDoWhileStatement(DoWhileStatement* stmt) { + LoopBuilder while_loop(this); + while_loop.BeginLoop(); + VisitIterationBody(stmt, &while_loop, 0); + while_loop.EndBody(); + VisitForTest(stmt->cond()); + Node* condition = environment()->Pop(); + while_loop.BreakUnless(condition); + while_loop.EndLoop(); +} + + +void AstGraphBuilder::VisitWhileStatement(WhileStatement* stmt) { + LoopBuilder while_loop(this); + while_loop.BeginLoop(); + VisitForTest(stmt->cond()); + Node* condition = environment()->Pop(); + while_loop.BreakUnless(condition); + VisitIterationBody(stmt, &while_loop, 0); + while_loop.EndBody(); + while_loop.EndLoop(); +} + + +void AstGraphBuilder::VisitForStatement(ForStatement* stmt) { + LoopBuilder for_loop(this); + VisitIfNotNull(stmt->init()); + for_loop.BeginLoop(); + if (stmt->cond() != NULL) { + VisitForTest(stmt->cond()); + Node* condition = environment()->Pop(); + for_loop.BreakUnless(condition); + } + VisitIterationBody(stmt, &for_loop, 0); + for_loop.EndBody(); + VisitIfNotNull(stmt->next()); + for_loop.EndLoop(); +} + + +// TODO(dcarney): this is a big function. Try to clean up some. +void AstGraphBuilder::VisitForInStatement(ForInStatement* stmt) { + VisitForValue(stmt->subject()); + Node* obj = environment()->Pop(); + // Check for undefined or null before entering loop. + IfBuilder is_undefined(this); + Node* is_undefined_cond = + NewNode(javascript()->StrictEqual(), obj, jsgraph()->UndefinedConstant()); + is_undefined.If(is_undefined_cond); + is_undefined.Then(); + is_undefined.Else(); + { + IfBuilder is_null(this); + Node* is_null_cond = + NewNode(javascript()->StrictEqual(), obj, jsgraph()->NullConstant()); + is_null.If(is_null_cond); + is_null.Then(); + is_null.Else(); + // Convert object to jsobject. + // PrepareForBailoutForId(stmt->PrepareId(), TOS_REG); + obj = NewNode(javascript()->ToObject(), obj); + environment()->Push(obj); + // TODO(dcarney): should do a fast enum cache check here to skip runtime. + environment()->Push(obj); + Node* cache_type = ProcessArguments( + javascript()->Runtime(Runtime::kGetPropertyNamesFast, 1), 1); + // TODO(dcarney): these next runtime calls should be removed in favour of + // a few simplified instructions. + environment()->Push(obj); + environment()->Push(cache_type); + Node* cache_pair = + ProcessArguments(javascript()->Runtime(Runtime::kForInInit, 2), 2); + // cache_type may have been replaced. + Node* cache_array = NewNode(common()->Projection(0), cache_pair); + cache_type = NewNode(common()->Projection(1), cache_pair); + environment()->Push(cache_type); + environment()->Push(cache_array); + Node* cache_length = ProcessArguments( + javascript()->Runtime(Runtime::kForInCacheArrayLength, 2), 2); + { + // TODO(dcarney): this check is actually supposed to be for the + // empty enum case only. + IfBuilder have_no_properties(this); + Node* empty_array_cond = NewNode(javascript()->StrictEqual(), + cache_length, jsgraph()->ZeroConstant()); + have_no_properties.If(empty_array_cond); + have_no_properties.Then(); + // Pop obj and skip loop. + environment()->Pop(); + have_no_properties.Else(); + { + // Construct the rest of the environment. 
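+        // The loop below keeps five values on the operand stack: the object,
+        // cache_type, cache_array, cache_length and the running index
+        // (initially zero); all five are dropped again when the loop ends.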
+ environment()->Push(cache_type); + environment()->Push(cache_array); + environment()->Push(cache_length); + environment()->Push(jsgraph()->ZeroConstant()); + // PrepareForBailoutForId(stmt->BodyId(), NO_REGISTERS); + LoopBuilder for_loop(this); + for_loop.BeginLoop(); + // Check loop termination condition. + Node* index = environment()->Peek(0); + Node* exit_cond = + NewNode(javascript()->LessThan(), index, cache_length); + // TODO(jarin): provide real bailout id. + BuildLazyBailout(exit_cond, BailoutId::None()); + for_loop.BreakUnless(exit_cond); + // TODO(dcarney): this runtime call should be a handful of + // simplified instructions that + // basically produce + // value = array[index] + environment()->Push(obj); + environment()->Push(cache_array); + environment()->Push(cache_type); + environment()->Push(index); + Node* pair = + ProcessArguments(javascript()->Runtime(Runtime::kForInNext, 4), 4); + Node* value = NewNode(common()->Projection(0), pair); + Node* should_filter = NewNode(common()->Projection(1), pair); + environment()->Push(value); + { + // Test if FILTER_KEY needs to be called. + IfBuilder test_should_filter(this); + Node* should_filter_cond = + NewNode(javascript()->StrictEqual(), should_filter, + jsgraph()->TrueConstant()); + test_should_filter.If(should_filter_cond); + test_should_filter.Then(); + value = environment()->Pop(); + // TODO(dcarney): Better load from function context. + // See comment in BuildLoadBuiltinsObject. + Handle<JSFunction> function(JSFunction::cast( + info()->context()->builtins()->javascript_builtin( + Builtins::FILTER_KEY))); + // Callee. + environment()->Push(jsgraph()->HeapConstant(function)); + // Receiver. + environment()->Push(obj); + // Args. + environment()->Push(value); + // result is either the string key or Smi(0) indicating the property + // is gone. + Node* res = ProcessArguments( + javascript()->Call(3, NO_CALL_FUNCTION_FLAGS), 3); + // TODO(jarin): provide real bailout id. + BuildLazyBailout(res, BailoutId::None()); + Node* property_missing = NewNode(javascript()->StrictEqual(), res, + jsgraph()->ZeroConstant()); + { + IfBuilder is_property_missing(this); + is_property_missing.If(property_missing); + is_property_missing.Then(); + // Inc counter and continue. + Node* index_inc = + NewNode(javascript()->Add(), index, jsgraph()->OneConstant()); + environment()->Poke(0, index_inc); + // TODO(jarin): provide real bailout id. + BuildLazyBailout(index_inc, BailoutId::None()); + for_loop.Continue(); + is_property_missing.Else(); + is_property_missing.End(); + } + // Replace 'value' in environment. + environment()->Push(res); + test_should_filter.Else(); + test_should_filter.End(); + } + value = environment()->Pop(); + // Bind value and do loop body. + VisitForInAssignment(stmt->each(), value); + VisitIterationBody(stmt, &for_loop, 5); + // Inc counter and continue. + Node* index_inc = + NewNode(javascript()->Add(), index, jsgraph()->OneConstant()); + environment()->Poke(0, index_inc); + // TODO(jarin): provide real bailout id. + BuildLazyBailout(index_inc, BailoutId::None()); + for_loop.EndBody(); + for_loop.EndLoop(); + environment()->Drop(5); + // PrepareForBailoutForId(stmt->ExitId(), NO_REGISTERS); + } + have_no_properties.End(); + } + is_null.End(); + } + is_undefined.End(); +} + + +void AstGraphBuilder::VisitForOfStatement(ForOfStatement* stmt) { + VisitForValue(stmt->subject()); + environment()->Pop(); + // TODO(turbofan): create and use loop builder. 
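+  // Until then, the subject expression is evaluated only for its side
+  // effects and no loop body is built.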
+} + + +void AstGraphBuilder::VisitTryCatchStatement(TryCatchStatement* stmt) { + UNREACHABLE(); +} + + +void AstGraphBuilder::VisitTryFinallyStatement(TryFinallyStatement* stmt) { + UNREACHABLE(); +} + + +void AstGraphBuilder::VisitDebuggerStatement(DebuggerStatement* stmt) { + // TODO(turbofan): Do we really need a separate reloc-info for this? + NewNode(javascript()->Runtime(Runtime::kDebugBreak, 0)); +} + + +void AstGraphBuilder::VisitFunctionLiteral(FunctionLiteral* expr) { + Node* context = current_context(); + + // Build a new shared function info if we cannot find one in the baseline + // code. We also have a stack overflow if the recursive compilation did. + Handle<SharedFunctionInfo> shared_info = + SearchSharedFunctionInfo(info()->shared_info()->code(), expr); + if (shared_info.is_null()) { + shared_info = Compiler::BuildFunctionInfo(expr, info()->script(), info()); + CHECK(!shared_info.is_null()); // TODO(mstarzinger): Set stack overflow? + } + + // Create node to instantiate a new closure. + Node* info = jsgraph()->Constant(shared_info); + Node* pretenure = expr->pretenure() ? jsgraph()->TrueConstant() + : jsgraph()->FalseConstant(); + Operator* op = javascript()->Runtime(Runtime::kNewClosure, 3); + Node* value = NewNode(op, context, info, pretenure); + ast_context()->ProduceValue(value); +} + + +void AstGraphBuilder::VisitNativeFunctionLiteral(NativeFunctionLiteral* expr) { + UNREACHABLE(); +} + + +void AstGraphBuilder::VisitConditional(Conditional* expr) { + IfBuilder compare_if(this); + VisitForTest(expr->condition()); + Node* condition = environment()->Pop(); + compare_if.If(condition); + compare_if.Then(); + Visit(expr->then_expression()); + compare_if.Else(); + Visit(expr->else_expression()); + compare_if.End(); + ast_context()->ReplaceValue(); +} + + +void AstGraphBuilder::VisitVariableProxy(VariableProxy* expr) { + Node* value = BuildVariableLoad(expr->var(), expr->id()); + ast_context()->ProduceValue(value); +} + + +void AstGraphBuilder::VisitLiteral(Literal* expr) { + Node* value = jsgraph()->Constant(expr->value()); + ast_context()->ProduceValue(value); +} + + +void AstGraphBuilder::VisitRegExpLiteral(RegExpLiteral* expr) { + Handle<JSFunction> closure = info()->closure(); + + // Create node to materialize a regular expression literal. + Node* literals_array = jsgraph()->Constant(handle(closure->literals())); + Node* literal_index = jsgraph()->Constant(expr->literal_index()); + Node* pattern = jsgraph()->Constant(expr->pattern()); + Node* flags = jsgraph()->Constant(expr->flags()); + Operator* op = javascript()->Runtime(Runtime::kMaterializeRegExpLiteral, 4); + Node* literal = NewNode(op, literals_array, literal_index, pattern, flags); + ast_context()->ProduceValue(literal); +} + + +void AstGraphBuilder::VisitObjectLiteral(ObjectLiteral* expr) { + Handle<JSFunction> closure = info()->closure(); + + // Create node to deep-copy the literal boilerplate. + expr->BuildConstantProperties(isolate()); + Node* literals_array = jsgraph()->Constant(handle(closure->literals())); + Node* literal_index = jsgraph()->Constant(expr->literal_index()); + Node* constants = jsgraph()->Constant(expr->constant_properties()); + Node* flags = jsgraph()->Constant(expr->ComputeFlags()); + Operator* op = javascript()->Runtime(Runtime::kCreateObjectLiteral, 4); + Node* literal = NewNode(op, literals_array, literal_index, constants, flags); + + // The object is expected on the operand stack during computation of the + // property values and is the value of the entire expression. 
+ environment()->Push(literal); + + // Mark all computed expressions that are bound to a key that is shadowed by + // a later occurrence of the same key. For the marked expressions, no store + // code is emitted. + expr->CalculateEmitStore(zone()); + + // Create nodes to store computed values into the literal. + AccessorTable accessor_table(zone()); + for (int i = 0; i < expr->properties()->length(); i++) { + ObjectLiteral::Property* property = expr->properties()->at(i); + if (property->IsCompileTimeValue()) continue; + + Literal* key = property->key(); + switch (property->kind()) { + case ObjectLiteral::Property::CONSTANT: + UNREACHABLE(); + case ObjectLiteral::Property::MATERIALIZED_LITERAL: + DCHECK(!CompileTimeValue::IsCompileTimeValue(property->value())); + // Fall through. + case ObjectLiteral::Property::COMPUTED: { + // It is safe to use [[Put]] here because the boilerplate already + // contains computed properties with an uninitialized value. + if (key->value()->IsInternalizedString()) { + if (property->emit_store()) { + VisitForValue(property->value()); + Node* value = environment()->Pop(); + PrintableUnique<Name> name = MakeUnique(key->AsPropertyName()); + Node* store = + NewNode(javascript()->StoreNamed(name), literal, value); + BuildLazyBailout(store, key->id()); + } else { + VisitForEffect(property->value()); + } + break; + } + environment()->Push(literal); // Duplicate receiver. + VisitForValue(property->key()); + VisitForValue(property->value()); + Node* value = environment()->Pop(); + Node* key = environment()->Pop(); + Node* receiver = environment()->Pop(); + if (property->emit_store()) { + Node* strict = jsgraph()->Constant(SLOPPY); + Operator* op = javascript()->Runtime(Runtime::kSetProperty, 4); + NewNode(op, receiver, key, value, strict); + } + break; + } + case ObjectLiteral::Property::PROTOTYPE: { + environment()->Push(literal); // Duplicate receiver. + VisitForValue(property->value()); + Node* value = environment()->Pop(); + Node* receiver = environment()->Pop(); + if (property->emit_store()) { + Operator* op = javascript()->Runtime(Runtime::kSetPrototype, 2); + NewNode(op, receiver, value); + } + break; + } + case ObjectLiteral::Property::GETTER: + accessor_table.lookup(key)->second->getter = property->value(); + break; + case ObjectLiteral::Property::SETTER: + accessor_table.lookup(key)->second->setter = property->value(); + break; + } + } + + // Create nodes to define accessors, using only a single call to the runtime + // for each pair of corresponding getters and setters. + for (AccessorTable::Iterator it = accessor_table.begin(); + it != accessor_table.end(); ++it) { + VisitForValue(it->first); + VisitForValueOrNull(it->second->getter); + VisitForValueOrNull(it->second->setter); + Node* setter = environment()->Pop(); + Node* getter = environment()->Pop(); + Node* name = environment()->Pop(); + Node* attr = jsgraph()->Constant(NONE); + Operator* op = + javascript()->Runtime(Runtime::kDefineAccessorPropertyUnchecked, 5); + NewNode(op, literal, name, getter, setter, attr); + } + + // Transform literals that contain functions to fast properties. + if (expr->has_function()) { + Operator* op = javascript()->Runtime(Runtime::kToFastProperties, 1); + NewNode(op, literal); + } + + ast_context()->ProduceValue(environment()->Pop()); +} + + +void AstGraphBuilder::VisitArrayLiteral(ArrayLiteral* expr) { + Handle<JSFunction> closure = info()->closure(); + + // Create node to deep-copy the literal boilerplate. 
+ expr->BuildConstantElements(isolate()); + Node* literals_array = jsgraph()->Constant(handle(closure->literals())); + Node* literal_index = jsgraph()->Constant(expr->literal_index()); + Node* constants = jsgraph()->Constant(expr->constant_elements()); + Node* flags = jsgraph()->Constant(expr->ComputeFlags()); + Operator* op = javascript()->Runtime(Runtime::kCreateArrayLiteral, 4); + Node* literal = NewNode(op, literals_array, literal_index, constants, flags); + + // The array and the literal index are both expected on the operand stack + // during computation of the element values. + environment()->Push(literal); + environment()->Push(literal_index); + + // Create nodes to evaluate all the non-constant subexpressions and to store + // them into the newly cloned array. + for (int i = 0; i < expr->values()->length(); i++) { + Expression* subexpr = expr->values()->at(i); + if (CompileTimeValue::IsCompileTimeValue(subexpr)) continue; + + VisitForValue(subexpr); + Node* value = environment()->Pop(); + Node* index = jsgraph()->Constant(i); + Node* store = NewNode(javascript()->StoreProperty(), literal, index, value); + BuildLazyBailout(store, expr->GetIdForElement(i)); + } + + environment()->Pop(); // Array literal index. + ast_context()->ProduceValue(environment()->Pop()); +} + + +void AstGraphBuilder::VisitForInAssignment(Expression* expr, Node* value) { + DCHECK(expr->IsValidReferenceExpression()); + + // Left-hand side can only be a property, a global or a variable slot. + Property* property = expr->AsProperty(); + LhsKind assign_type = DetermineLhsKind(expr); + + // Evaluate LHS expression and store the value. + switch (assign_type) { + case VARIABLE: { + Variable* var = expr->AsVariableProxy()->var(); + // TODO(jarin) Fill in the correct bailout id. + BuildVariableAssignment(var, value, Token::ASSIGN, BailoutId::None()); + break; + } + case NAMED_PROPERTY: { + environment()->Push(value); + VisitForValue(property->obj()); + Node* object = environment()->Pop(); + value = environment()->Pop(); + PrintableUnique<Name> name = + MakeUnique(property->key()->AsLiteral()->AsPropertyName()); + Node* store = NewNode(javascript()->StoreNamed(name), object, value); + // TODO(jarin) Fill in the correct bailout id. + BuildLazyBailout(store, BailoutId::None()); + break; + } + case KEYED_PROPERTY: { + environment()->Push(value); + VisitForValue(property->obj()); + VisitForValue(property->key()); + Node* key = environment()->Pop(); + Node* object = environment()->Pop(); + value = environment()->Pop(); + Node* store = NewNode(javascript()->StoreProperty(), object, key, value); + // TODO(jarin) Fill in the correct bailout id. + BuildLazyBailout(store, BailoutId::None()); + break; + } + } +} + + +void AstGraphBuilder::VisitAssignment(Assignment* expr) { + DCHECK(expr->target()->IsValidReferenceExpression()); + + // Left-hand side can only be a property, a global or a variable slot. + Property* property = expr->target()->AsProperty(); + LhsKind assign_type = DetermineLhsKind(expr->target()); + + // Evaluate LHS expression. + switch (assign_type) { + case VARIABLE: + // Nothing to do here. + break; + case NAMED_PROPERTY: + VisitForValue(property->obj()); + break; + case KEYED_PROPERTY: { + VisitForValue(property->obj()); + VisitForValue(property->key()); + break; + } + } + + // Evaluate the value and potentially handle compound assignments by loading + // the left-hand side value and performing a binary operation. 
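+  // For example, 'x += y' loads the current value of x, evaluates y, combines
+  // the two with the corresponding binary operator, and stores the result
+  // back into x below.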
+ if (expr->is_compound()) { + Node* old_value = NULL; + switch (assign_type) { + case VARIABLE: { + Variable* variable = expr->target()->AsVariableProxy()->var(); + old_value = BuildVariableLoad(variable, expr->target()->id()); + break; + } + case NAMED_PROPERTY: { + Node* object = environment()->Top(); + PrintableUnique<Name> name = + MakeUnique(property->key()->AsLiteral()->AsPropertyName()); + old_value = NewNode(javascript()->LoadNamed(name), object); + BuildLazyBailoutWithPushedNode(old_value, property->LoadId()); + break; + } + case KEYED_PROPERTY: { + Node* key = environment()->Top(); + Node* object = environment()->Peek(1); + old_value = NewNode(javascript()->LoadProperty(), object, key); + BuildLazyBailoutWithPushedNode(old_value, property->LoadId()); + break; + } + } + environment()->Push(old_value); + VisitForValue(expr->value()); + Node* right = environment()->Pop(); + Node* left = environment()->Pop(); + Node* value = BuildBinaryOp(left, right, expr->binary_op()); + environment()->Push(value); + BuildLazyBailout(value, expr->binary_operation()->id()); + } else { + VisitForValue(expr->value()); + } + + // Store the value. + Node* value = environment()->Pop(); + switch (assign_type) { + case VARIABLE: { + Variable* variable = expr->target()->AsVariableProxy()->var(); + BuildVariableAssignment(variable, value, expr->op(), + expr->AssignmentId()); + break; + } + case NAMED_PROPERTY: { + Node* object = environment()->Pop(); + PrintableUnique<Name> name = + MakeUnique(property->key()->AsLiteral()->AsPropertyName()); + Node* store = NewNode(javascript()->StoreNamed(name), object, value); + BuildLazyBailout(store, expr->AssignmentId()); + break; + } + case KEYED_PROPERTY: { + Node* key = environment()->Pop(); + Node* object = environment()->Pop(); + Node* store = NewNode(javascript()->StoreProperty(), object, key, value); + BuildLazyBailout(store, expr->AssignmentId()); + break; + } + } + + ast_context()->ProduceValue(value); +} + + +void AstGraphBuilder::VisitYield(Yield* expr) { + VisitForValue(expr->generator_object()); + VisitForValue(expr->expression()); + environment()->Pop(); + environment()->Pop(); + // TODO(turbofan): VisitYield + ast_context()->ProduceValue(jsgraph()->UndefinedConstant()); +} + + +void AstGraphBuilder::VisitThrow(Throw* expr) { + VisitForValue(expr->exception()); + Node* exception = environment()->Pop(); + Operator* op = javascript()->Runtime(Runtime::kThrow, 1); + Node* value = NewNode(op, exception); + ast_context()->ProduceValue(value); +} + + +void AstGraphBuilder::VisitProperty(Property* expr) { + Node* value; + if (expr->key()->IsPropertyName()) { + VisitForValue(expr->obj()); + Node* object = environment()->Pop(); + PrintableUnique<Name> name = + MakeUnique(expr->key()->AsLiteral()->AsPropertyName()); + value = NewNode(javascript()->LoadNamed(name), object); + } else { + VisitForValue(expr->obj()); + VisitForValue(expr->key()); + Node* key = environment()->Pop(); + Node* object = environment()->Pop(); + value = NewNode(javascript()->LoadProperty(), object, key); + } + ast_context()->ProduceValueWithLazyBailout(value); +} + + +void AstGraphBuilder::VisitCall(Call* expr) { + Expression* callee = expr->expression(); + Call::CallType call_type = expr->GetCallType(isolate()); + + // Prepare the callee and the receiver to the function call. This depends on + // the semantics of the underlying call type. 
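+  // For example, 'o.f()' loads the callee from o and passes o as the
+  // receiver, while a global call 'f()' passes undefined as the receiver.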
+  CallFunctionFlags flags = NO_CALL_FUNCTION_FLAGS;
+  Node* receiver_value = NULL;
+  Node* callee_value = NULL;
+  bool possibly_eval = false;
+  switch (call_type) {
+    case Call::GLOBAL_CALL: {
+      Variable* variable = callee->AsVariableProxy()->var();
+      callee_value = BuildVariableLoad(variable, expr->expression()->id());
+      receiver_value = jsgraph()->UndefinedConstant();
+      break;
+    }
+    case Call::LOOKUP_SLOT_CALL: {
+      Variable* variable = callee->AsVariableProxy()->var();
+      DCHECK(variable->location() == Variable::LOOKUP);
+      Node* name = jsgraph()->Constant(variable->name());
+      Operator* op = javascript()->Runtime(Runtime::kLoadLookupSlot, 2);
+      Node* pair = NewNode(op, current_context(), name);
+      callee_value = NewNode(common()->Projection(0), pair);
+      receiver_value = NewNode(common()->Projection(1), pair);
+      break;
+    }
+    case Call::PROPERTY_CALL: {
+      Property* property = callee->AsProperty();
+      VisitForValue(property->obj());
+      Node* object = environment()->Top();
+      if (property->key()->IsPropertyName()) {
+        PrintableUnique<Name> name =
+            MakeUnique(property->key()->AsLiteral()->AsPropertyName());
+        callee_value = NewNode(javascript()->LoadNamed(name), object);
+      } else {
+        VisitForValue(property->key());
+        Node* key = environment()->Pop();
+        callee_value = NewNode(javascript()->LoadProperty(), object, key);
+      }
+      BuildLazyBailoutWithPushedNode(callee_value, property->LoadId());
+      receiver_value = environment()->Pop();
+      // Note that a PROPERTY_CALL requires the receiver to be wrapped into an
+      // object for sloppy callees. This could also be modeled explicitly here,
+      // thereby obsoleting the need for a flag to the call operator.
+      flags = CALL_AS_METHOD;
+      break;
+    }
+    case Call::POSSIBLY_EVAL_CALL:
+      possibly_eval = true;
+      // Fall through.
+    case Call::OTHER_CALL:
+      VisitForValue(callee);
+      callee_value = environment()->Pop();
+      receiver_value = jsgraph()->UndefinedConstant();
+      break;
+  }
+
+  // The callee and the receiver both have to be pushed onto the operand stack
+  // before the arguments are evaluated.
+  environment()->Push(callee_value);
+  environment()->Push(receiver_value);
+
+  // Evaluate all arguments to the function call.
+  ZoneList<Expression*>* args = expr->arguments();
+  VisitForValues(args);
+
+  // Resolve callee and receiver for a potential direct eval call. This block
+  // will mutate the callee and receiver values pushed onto the environment.
+  if (possibly_eval && args->length() > 0) {
+    int arg_count = args->length();
+
+    // Extract callee and source string from the environment.
+    Node* callee = environment()->Peek(arg_count + 1);
+    Node* source = environment()->Peek(arg_count - 1);
+
+    // Create node to ask for help resolving potential eval call. This will
+    // provide a fully resolved callee and the corresponding receiver.
+    Node* receiver = environment()->Lookup(info()->scope()->receiver());
+    Node* strict = jsgraph()->Constant(strict_mode());
+    Node* position = jsgraph()->Constant(info()->scope()->start_position());
+    Operator* op =
+        javascript()->Runtime(Runtime::kResolvePossiblyDirectEval, 5);
+    Node* pair = NewNode(op, callee, source, receiver, strict, position);
+    Node* new_callee = NewNode(common()->Projection(0), pair);
+    Node* new_receiver = NewNode(common()->Projection(1), pair);
+
+    // Patch callee and receiver on the environment.
+    environment()->Poke(arg_count + 1, new_callee);
+    environment()->Poke(arg_count + 0, new_receiver);
+  }
+
+  // Create node to perform the function call.
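+  // The call operator's arity includes the callee and the receiver, hence
+  // args->length() + 2.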
+  // Create node to perform the function call.
+  Operator* call = javascript()->Call(args->length() + 2, flags);
+  Node* value = ProcessArguments(call, args->length() + 2);
+  ast_context()->ProduceValueWithLazyBailout(value);
+}
+
+
+void AstGraphBuilder::VisitCallNew(CallNew* expr) {
+  VisitForValue(expr->expression());
+
+  // Evaluate all arguments to the construct call.
+  ZoneList<Expression*>* args = expr->arguments();
+  VisitForValues(args);
+
+  // Create node to perform the construct call.
+  Operator* call = javascript()->CallNew(args->length() + 1);
+  Node* value = ProcessArguments(call, args->length() + 1);
+  ast_context()->ProduceValueWithLazyBailout(value);
+}
+
+
+void AstGraphBuilder::VisitCallJSRuntime(CallRuntime* expr) {
+  Handle<String> name = expr->name();
+
+  // The callee and the receiver both have to be pushed onto the operand stack
+  // before the arguments are evaluated.
+  CallFunctionFlags flags = NO_CALL_FUNCTION_FLAGS;
+  Node* receiver_value = BuildLoadBuiltinsObject();
+  PrintableUnique<String> unique = MakeUnique(name);
+  Node* callee_value = NewNode(javascript()->LoadNamed(unique), receiver_value);
+  environment()->Push(callee_value);
+  // TODO(jarin): Find/create a bailout id to deoptimize to (crankshaft
+  // refuses to optimize functions with jsruntime calls).
+  BuildLazyBailout(callee_value, BailoutId::None());
+  environment()->Push(receiver_value);
+
+  // Evaluate all arguments to the JS runtime call.
+  ZoneList<Expression*>* args = expr->arguments();
+  VisitForValues(args);
+
+  // Create node to perform the JS runtime call.
+  Operator* call = javascript()->Call(args->length() + 2, flags);
+  Node* value = ProcessArguments(call, args->length() + 2);
+  ast_context()->ProduceValueWithLazyBailout(value);
+}
+
+
+void AstGraphBuilder::VisitCallRuntime(CallRuntime* expr) {
+  const Runtime::Function* function = expr->function();
+
+  // Handle calls to runtime functions implemented in JavaScript separately,
+  // as such a call follows the JavaScript ABI and the callee is statically
+  // unknown.
+  if (expr->is_jsruntime()) {
+    DCHECK(function == NULL && expr->name()->length() > 0);
+    return VisitCallJSRuntime(expr);
+  }
+
+  // Evaluate all arguments to the runtime call.
+  ZoneList<Expression*>* args = expr->arguments();
+  VisitForValues(args);
+
+  // Create node to perform the runtime call.
+  Runtime::FunctionId functionId = function->function_id;
+  Operator* call = javascript()->Runtime(functionId, args->length());
+  Node* value = ProcessArguments(call, args->length());
+  ast_context()->ProduceValueWithLazyBailout(value);
+}
+
+
+void AstGraphBuilder::VisitUnaryOperation(UnaryOperation* expr) {
+  switch (expr->op()) {
+    case Token::DELETE:
+      return VisitDelete(expr);
+    case Token::VOID:
+      return VisitVoid(expr);
+    case Token::TYPEOF:
+      return VisitTypeof(expr);
+    case Token::NOT:
+      return VisitNot(expr);
+    default:
+      UNREACHABLE();
+  }
+}
+
+
+void AstGraphBuilder::VisitCountOperation(CountOperation* expr) {
+  DCHECK(expr->expression()->IsValidReferenceExpression());
+
+  // Left-hand side can only be a property, a global or a variable slot.
+  Property* property = expr->expression()->AsProperty();
+  LhsKind assign_type = DetermineLhsKind(expr->expression());
+
+  // Reserve space for the result of the postfix operation.
+  bool is_postfix = expr->is_postfix() && !ast_context()->IsEffect();
+  if (is_postfix) environment()->Push(jsgraph()->UndefinedConstant());
+
+  // Evaluate LHS expression and get old value.
+ Node* old_value = NULL; + int stack_depth = -1; + switch (assign_type) { + case VARIABLE: { + Variable* variable = expr->expression()->AsVariableProxy()->var(); + old_value = BuildVariableLoad(variable, expr->expression()->id()); + stack_depth = 0; + break; + } + case NAMED_PROPERTY: { + VisitForValue(property->obj()); + Node* object = environment()->Top(); + PrintableUnique<Name> name = + MakeUnique(property->key()->AsLiteral()->AsPropertyName()); + old_value = NewNode(javascript()->LoadNamed(name), object); + BuildLazyBailoutWithPushedNode(old_value, property->LoadId()); + stack_depth = 1; + break; + } + case KEYED_PROPERTY: { + VisitForValue(property->obj()); + VisitForValue(property->key()); + Node* key = environment()->Top(); + Node* object = environment()->Peek(1); + old_value = NewNode(javascript()->LoadProperty(), object, key); + BuildLazyBailoutWithPushedNode(old_value, property->LoadId()); + stack_depth = 2; + break; + } + } + + // Convert old value into a number. + old_value = NewNode(javascript()->ToNumber(), old_value); + + // Save result for postfix expressions at correct stack depth. + if (is_postfix) environment()->Poke(stack_depth, old_value); + + // Create node to perform +1/-1 operation. + Node* value = + BuildBinaryOp(old_value, jsgraph()->OneConstant(), expr->binary_op()); + // TODO(jarin) Insert proper bailout id here (will need to change + // full code generator). + BuildLazyBailout(value, BailoutId::None()); + + // Store the value. + switch (assign_type) { + case VARIABLE: { + Variable* variable = expr->expression()->AsVariableProxy()->var(); + BuildVariableAssignment(variable, value, expr->op(), + expr->AssignmentId()); + break; + } + case NAMED_PROPERTY: { + Node* object = environment()->Pop(); + PrintableUnique<Name> name = + MakeUnique(property->key()->AsLiteral()->AsPropertyName()); + Node* store = NewNode(javascript()->StoreNamed(name), object, value); + BuildLazyBailout(store, expr->AssignmentId()); + break; + } + case KEYED_PROPERTY: { + Node* key = environment()->Pop(); + Node* object = environment()->Pop(); + Node* store = NewNode(javascript()->StoreProperty(), object, key, value); + BuildLazyBailout(store, expr->AssignmentId()); + break; + } + } + + // Restore old value for postfix expressions. 
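+  // As an illustration (the names here are ours, not the compiler's), a
+  // postfix o.x++ in a value context walks the operand stack as follows:
+  //
+  //   Push(undefined)                    // reserve result slot  [.., und]
+  //   VisitForValue(o)                   //                      [.., und, o]
+  //   old = ToNumber(LoadNamed("x", o))  // stack_depth == 1
+  //   Poke(1, old)                       // fill reserved slot   [.., old, o]
+  //   StoreNamed("x", Pop(), old + 1)    // store new value      [.., old]
+  //
+  // so the Pop() below retrieves the type-converted old value as the result.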
+  if (is_postfix) value = environment()->Pop();
+
+  ast_context()->ProduceValue(value);
+}
+
+
+void AstGraphBuilder::VisitBinaryOperation(BinaryOperation* expr) {
+  switch (expr->op()) {
+    case Token::COMMA:
+      return VisitComma(expr);
+    case Token::OR:
+    case Token::AND:
+      return VisitLogicalExpression(expr);
+    default: {
+      VisitForValue(expr->left());
+      VisitForValue(expr->right());
+      Node* right = environment()->Pop();
+      Node* left = environment()->Pop();
+      Node* value = BuildBinaryOp(left, right, expr->op());
+      ast_context()->ProduceValueWithLazyBailout(value);
+    }
+  }
+}
+
+
+void AstGraphBuilder::VisitCompareOperation(CompareOperation* expr) {
+  Operator* op;
+  switch (expr->op()) {
+    case Token::EQ:
+      op = javascript()->Equal();
+      break;
+    case Token::NE:
+      op = javascript()->NotEqual();
+      break;
+    case Token::EQ_STRICT:
+      op = javascript()->StrictEqual();
+      break;
+    case Token::NE_STRICT:
+      op = javascript()->StrictNotEqual();
+      break;
+    case Token::LT:
+      op = javascript()->LessThan();
+      break;
+    case Token::GT:
+      op = javascript()->GreaterThan();
+      break;
+    case Token::LTE:
+      op = javascript()->LessThanOrEqual();
+      break;
+    case Token::GTE:
+      op = javascript()->GreaterThanOrEqual();
+      break;
+    case Token::INSTANCEOF:
+      op = javascript()->InstanceOf();
+      break;
+    case Token::IN:
+      op = javascript()->HasProperty();
+      break;
+    default:
+      op = NULL;
+      UNREACHABLE();
+  }
+  VisitForValue(expr->left());
+  VisitForValue(expr->right());
+  Node* right = environment()->Pop();
+  Node* left = environment()->Pop();
+  Node* value = NewNode(op, left, right);
+  ast_context()->ProduceValue(value);
+
+  BuildLazyBailout(value, expr->id());
+}
+
+
+void AstGraphBuilder::VisitThisFunction(ThisFunction* expr) {
+  Node* value = GetFunctionClosure();
+  ast_context()->ProduceValue(value);
+}
+
+
+void AstGraphBuilder::VisitCaseClause(CaseClause* expr) { UNREACHABLE(); }
+
+
+void AstGraphBuilder::VisitDeclarations(ZoneList<Declaration*>* declarations) {
+  DCHECK(globals()->is_empty());
+  AstVisitor::VisitDeclarations(declarations);
+  if (globals()->is_empty()) return;
+  Handle<FixedArray> data =
+      isolate()->factory()->NewFixedArray(globals()->length(), TENURED);
+  for (int i = 0; i < globals()->length(); ++i) data->set(i, *globals()->at(i));
+  int encoded_flags = DeclareGlobalsEvalFlag::encode(info()->is_eval()) |
+                      DeclareGlobalsNativeFlag::encode(info()->is_native()) |
+                      DeclareGlobalsStrictMode::encode(info()->strict_mode());
+  Node* flags = jsgraph()->Constant(encoded_flags);
+  Node* pairs = jsgraph()->Constant(data);
+  Operator* op = javascript()->Runtime(Runtime::kDeclareGlobals, 3);
+  NewNode(op, current_context(), pairs, flags);
+  globals()->Rewind(0);
+}
+
+
+void AstGraphBuilder::VisitIfNotNull(Statement* stmt) {
+  if (stmt == NULL) return;
+  Visit(stmt);
+}
+
+
+void AstGraphBuilder::VisitIterationBody(IterationStatement* stmt,
+                                         LoopBuilder* loop, int drop_extra) {
+  BreakableScope scope(this, stmt, loop, drop_extra);
+  Visit(stmt->body());
+}
+
+
+void AstGraphBuilder::VisitDelete(UnaryOperation* expr) {
+  Node* value;
+  if (expr->expression()->IsVariableProxy()) {
+    // Delete of an unqualified identifier is only allowed in sloppy mode but
+    // deleting "this" is allowed in all language modes.
+ Variable* variable = expr->expression()->AsVariableProxy()->var(); + DCHECK(strict_mode() == SLOPPY || variable->is_this()); + value = BuildVariableDelete(variable); + } else if (expr->expression()->IsProperty()) { + Property* property = expr->expression()->AsProperty(); + VisitForValue(property->obj()); + VisitForValue(property->key()); + Node* key = environment()->Pop(); + Node* object = environment()->Pop(); + value = NewNode(javascript()->DeleteProperty(strict_mode()), object, key); + } else { + VisitForEffect(expr->expression()); + value = jsgraph()->TrueConstant(); + } + ast_context()->ProduceValue(value); +} + + +void AstGraphBuilder::VisitVoid(UnaryOperation* expr) { + VisitForEffect(expr->expression()); + Node* value = jsgraph()->UndefinedConstant(); + ast_context()->ProduceValue(value); +} + + +void AstGraphBuilder::VisitTypeof(UnaryOperation* expr) { + Node* operand; + if (expr->expression()->IsVariableProxy()) { + // Typeof does not throw a reference error on global variables, hence we + // perform a non-contextual load in case the operand is a variable proxy. + Variable* variable = expr->expression()->AsVariableProxy()->var(); + operand = + BuildVariableLoad(variable, expr->expression()->id(), NOT_CONTEXTUAL); + } else { + VisitForValue(expr->expression()); + operand = environment()->Pop(); + } + Node* value = NewNode(javascript()->TypeOf(), operand); + ast_context()->ProduceValue(value); +} + + +void AstGraphBuilder::VisitNot(UnaryOperation* expr) { + VisitForValue(expr->expression()); + Node* operand = environment()->Pop(); + // TODO(mstarzinger): Possible optimization when we are in effect context. + Node* value = NewNode(javascript()->UnaryNot(), operand); + ast_context()->ProduceValue(value); +} + + +void AstGraphBuilder::VisitComma(BinaryOperation* expr) { + VisitForEffect(expr->left()); + Visit(expr->right()); + ast_context()->ReplaceValue(); +} + + +void AstGraphBuilder::VisitLogicalExpression(BinaryOperation* expr) { + bool is_logical_and = expr->op() == Token::AND; + IfBuilder compare_if(this); + VisitForValue(expr->left()); + Node* condition = environment()->Top(); + compare_if.If(BuildToBoolean(condition)); + compare_if.Then(); + if (is_logical_and) { + environment()->Pop(); + Visit(expr->right()); + } else if (ast_context()->IsEffect()) { + environment()->Pop(); + } + compare_if.Else(); + if (!is_logical_and) { + environment()->Pop(); + Visit(expr->right()); + } else if (ast_context()->IsEffect()) { + environment()->Pop(); + } + compare_if.End(); + ast_context()->ReplaceValue(); +} + + +Node* AstGraphBuilder::ProcessArguments(Operator* op, int arity) { + DCHECK(environment()->stack_height() >= arity); + Node** all = info()->zone()->NewArray<Node*>(arity); // XXX: alloca? + for (int i = arity - 1; i >= 0; --i) { + all[i] = environment()->Pop(); + } + Node* value = NewNode(op, arity, all); + return value; +} + + +Node* AstGraphBuilder::BuildLocalFunctionContext(Node* context, Node* closure) { + int heap_slots = info()->num_heap_slots() - Context::MIN_CONTEXT_SLOTS; + if (heap_slots <= 0) return context; + set_current_context(context); + + // Allocate a new local context. + Operator* op = javascript()->CreateFunctionContext(); + Node* local_context = NewNode(op, closure); + set_current_context(local_context); + + // Copy parameters into context if necessary. 
+ int num_parameters = info()->scope()->num_parameters(); + for (int i = 0; i < num_parameters; i++) { + Variable* variable = info()->scope()->parameter(i); + if (!variable->IsContextSlot()) continue; + // Temporary parameter node. The parameter indices are shifted by 1 + // (receiver is parameter index -1 but environment index 0). + Node* parameter = NewNode(common()->Parameter(i + 1), graph()->start()); + // Context variable (at bottom of the context chain). + DCHECK_EQ(0, info()->scope()->ContextChainLength(variable->scope())); + Operator* op = javascript()->StoreContext(0, variable->index()); + NewNode(op, local_context, parameter); + } + + return local_context; +} + + +Node* AstGraphBuilder::BuildArgumentsObject(Variable* arguments) { + if (arguments == NULL) return NULL; + + // Allocate and initialize a new arguments object. + Node* callee = GetFunctionClosure(); + Operator* op = javascript()->Runtime(Runtime::kNewArguments, 1); + Node* object = NewNode(op, callee); + + // Assign the object to the arguments variable. + DCHECK(arguments->IsContextSlot() || arguments->IsStackAllocated()); + // This should never lazy deopt, so it is fine to send invalid bailout id. + BuildVariableAssignment(arguments, object, Token::ASSIGN, BailoutId::None()); + + return object; +} + + +Node* AstGraphBuilder::BuildHoleCheckSilent(Node* value, Node* for_hole, + Node* not_hole) { + IfBuilder hole_check(this); + Node* the_hole = jsgraph()->TheHoleConstant(); + Node* check = NewNode(javascript()->StrictEqual(), value, the_hole); + hole_check.If(check); + hole_check.Then(); + environment()->Push(for_hole); + hole_check.Else(); + environment()->Push(not_hole); + hole_check.End(); + return environment()->Pop(); +} + + +Node* AstGraphBuilder::BuildHoleCheckThrow(Node* value, Variable* variable, + Node* not_hole) { + IfBuilder hole_check(this); + Node* the_hole = jsgraph()->TheHoleConstant(); + Node* check = NewNode(javascript()->StrictEqual(), value, the_hole); + hole_check.If(check); + hole_check.Then(); + environment()->Push(BuildThrowReferenceError(variable)); + hole_check.Else(); + environment()->Push(not_hole); + hole_check.End(); + return environment()->Pop(); +} + + +Node* AstGraphBuilder::BuildVariableLoad(Variable* variable, + BailoutId bailout_id, + ContextualMode contextual_mode) { + Node* the_hole = jsgraph()->TheHoleConstant(); + VariableMode mode = variable->mode(); + switch (variable->location()) { + case Variable::UNALLOCATED: { + // Global var, const, or let variable. + Node* global = BuildLoadGlobalObject(); + PrintableUnique<Name> name = MakeUnique(variable->name()); + Operator* op = javascript()->LoadNamed(name, contextual_mode); + Node* node = NewNode(op, global); + BuildLazyBailoutWithPushedNode(node, bailout_id); + return node; + } + case Variable::PARAMETER: + case Variable::LOCAL: { + // Local var, const, or let variable. + Node* value = environment()->Lookup(variable); + if (mode == CONST_LEGACY) { + // Perform check for uninitialized legacy const variables. + if (value->op() == the_hole->op()) { + value = jsgraph()->UndefinedConstant(); + } else if (value->opcode() == IrOpcode::kPhi) { + Node* undefined = jsgraph()->UndefinedConstant(); + value = BuildHoleCheckSilent(value, undefined, value); + } + } else if (mode == LET || mode == CONST) { + // Perform check for uninitialized let/const variables. 
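+        // Both hole-check helpers defined above lower to the same two-way
+        // diamond; sketched for the throwing variant used here:
+        //
+        //   check = StrictEqual(value, TheHole)
+        //   Branch(check) --IfTrue--> ThrowReferenceError(variable)
+        //                 --IfFalse-> value
+        //   result = Phi(throw_result, value) at the merge
+        //
+        // The Push/Pop around the IfBuilder is what routes the two arm values
+        // into that Phi when the environments merge.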
+ if (value->op() == the_hole->op()) { + value = BuildThrowReferenceError(variable); + } else if (value->opcode() == IrOpcode::kPhi) { + value = BuildHoleCheckThrow(value, variable, value); + } + } + return value; + } + case Variable::CONTEXT: { + // Context variable (potentially up the context chain). + int depth = current_scope()->ContextChainLength(variable->scope()); + bool immutable = variable->maybe_assigned() == kNotAssigned; + Operator* op = + javascript()->LoadContext(depth, variable->index(), immutable); + Node* value = NewNode(op, current_context()); + // TODO(titzer): initialization checks are redundant for already + // initialized immutable context loads, but only specialization knows. + // Maybe specializer should be a parameter to the graph builder? + if (mode == CONST_LEGACY) { + // Perform check for uninitialized legacy const variables. + Node* undefined = jsgraph()->UndefinedConstant(); + value = BuildHoleCheckSilent(value, undefined, value); + } else if (mode == LET || mode == CONST) { + // Perform check for uninitialized let/const variables. + value = BuildHoleCheckThrow(value, variable, value); + } + return value; + } + case Variable::LOOKUP: { + // Dynamic lookup of context variable (anywhere in the chain). + Node* name = jsgraph()->Constant(variable->name()); + Runtime::FunctionId function_id = + (contextual_mode == CONTEXTUAL) + ? Runtime::kLoadLookupSlot + : Runtime::kLoadLookupSlotNoReferenceError; + Operator* op = javascript()->Runtime(function_id, 2); + Node* pair = NewNode(op, current_context(), name); + return NewNode(common()->Projection(0), pair); + } + } + UNREACHABLE(); + return NULL; +} + + +Node* AstGraphBuilder::BuildVariableDelete(Variable* variable) { + switch (variable->location()) { + case Variable::UNALLOCATED: { + // Global var, const, or let variable. + Node* global = BuildLoadGlobalObject(); + Node* name = jsgraph()->Constant(variable->name()); + Operator* op = javascript()->DeleteProperty(strict_mode()); + return NewNode(op, global, name); + } + case Variable::PARAMETER: + case Variable::LOCAL: + case Variable::CONTEXT: + // Local var, const, or let variable or context variable. + return variable->is_this() ? jsgraph()->TrueConstant() + : jsgraph()->FalseConstant(); + case Variable::LOOKUP: { + // Dynamic lookup of context variable (anywhere in the chain). + Node* name = jsgraph()->Constant(variable->name()); + Operator* op = javascript()->Runtime(Runtime::kDeleteLookupSlot, 2); + return NewNode(op, current_context(), name); + } + } + UNREACHABLE(); + return NULL; +} + + +Node* AstGraphBuilder::BuildVariableAssignment(Variable* variable, Node* value, + Token::Value op, + BailoutId bailout_id) { + Node* the_hole = jsgraph()->TheHoleConstant(); + VariableMode mode = variable->mode(); + switch (variable->location()) { + case Variable::UNALLOCATED: { + // Global var, const, or let variable. + Node* global = BuildLoadGlobalObject(); + PrintableUnique<Name> name = MakeUnique(variable->name()); + Operator* op = javascript()->StoreNamed(name); + Node* store = NewNode(op, global, value); + BuildLazyBailout(store, bailout_id); + return store; + } + case Variable::PARAMETER: + case Variable::LOCAL: + // Local var, const, or let variable. + if (mode == CONST_LEGACY && op == Token::INIT_CONST_LEGACY) { + // Perform an initialization check for legacy const variables. 
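+        // Legacy (sloppy-mode) const keeps only the first initialization and
+        // silently drops later stores instead of throwing. Illustrated in
+        // JavaScript (an example of ours, not taken from this patch):
+        //
+        //   const x = 1;  // INIT_CONST_LEGACY: binds x unless already bound
+        //   x = 2;        // non-initializing store: ignored, x stays 1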
+        Node* current = environment()->Lookup(variable);
+        if (current->op() != the_hole->op()) {
+          value = BuildHoleCheckSilent(current, value, current);
+        }
+      } else if (mode == CONST_LEGACY && op != Token::INIT_CONST_LEGACY) {
+        // Non-initializing assignments to legacy const are ignored.
+        return value;
+      } else if (mode == LET && op != Token::INIT_LET) {
+        // Perform an initialization check for let declared variables.
+        // Also note that the dynamic hole-check is only done to ensure that
+        // this does not break in the presence of do-expressions within the
+        // temporal dead zone of a let declared variable.
+        Node* current = environment()->Lookup(variable);
+        if (current->op() == the_hole->op()) {
+          value = BuildThrowReferenceError(variable);
+        } else if (value->opcode() == IrOpcode::kPhi) {
+          value = BuildHoleCheckThrow(current, variable, value);
+        }
+      } else if (mode == CONST && op != Token::INIT_CONST) {
+        // All assignments to const variables are early errors.
+        UNREACHABLE();
+      }
+      environment()->Bind(variable, value);
+      return value;
+    case Variable::CONTEXT: {
+      // Context variable (potentially up the context chain).
+      int depth = current_scope()->ContextChainLength(variable->scope());
+      if (mode == CONST_LEGACY && op == Token::INIT_CONST_LEGACY) {
+        // Perform an initialization check for legacy const variables.
+        Operator* op =
+            javascript()->LoadContext(depth, variable->index(), false);
+        Node* current = NewNode(op, current_context());
+        value = BuildHoleCheckSilent(current, value, current);
+      } else if (mode == CONST_LEGACY && op != Token::INIT_CONST_LEGACY) {
+        // Non-initializing assignments to legacy const are ignored.
+        return value;
+      } else if (mode == LET && op != Token::INIT_LET) {
+        // Perform an initialization check for let declared variables.
+        Operator* op =
+            javascript()->LoadContext(depth, variable->index(), false);
+        Node* current = NewNode(op, current_context());
+        value = BuildHoleCheckThrow(current, variable, value);
+      } else if (mode == CONST && op != Token::INIT_CONST) {
+        // All assignments to const variables are early errors.
+        UNREACHABLE();
+      }
+      Operator* op = javascript()->StoreContext(depth, variable->index());
+      return NewNode(op, current_context(), value);
+    }
+    case Variable::LOOKUP: {
+      // Dynamic lookup of context variable (anywhere in the chain).
+      Node* name = jsgraph()->Constant(variable->name());
+      Node* strict = jsgraph()->Constant(strict_mode());
+      // TODO(mstarzinger): Use Runtime::kInitializeLegacyConstLookupSlot for
+      // initializations of const declarations.
+      Operator* op = javascript()->Runtime(Runtime::kStoreLookupSlot, 4);
+      return NewNode(op, value, current_context(), name, strict);
+    }
+  }
+  UNREACHABLE();
+  return NULL;
+}
+
+
+Node* AstGraphBuilder::BuildLoadBuiltinsObject() {
+  // TODO(mstarzinger): Better load from function context, otherwise optimized
+  // code cannot be shared across native contexts.
+  return jsgraph()->Constant(handle(info()->context()->builtins()));
+}
+
+
+Node* AstGraphBuilder::BuildLoadGlobalObject() {
+#if 0
+  Node* context = GetFunctionContext();
+  // TODO(mstarzinger): Use mid-level operator on FixedArray instead of the
+  // JS-level operator that targets JSObject.
+  Node* index = jsgraph()->Constant(Context::GLOBAL_OBJECT_INDEX);
+  return NewNode(javascript()->LoadProperty(), context, index);
+#else
+  // TODO(mstarzinger): Better load from function context, otherwise optimized
+  // code cannot be shared across native contexts. See unused code above.
+ return jsgraph()->Constant(handle(info()->context()->global_object())); +#endif +} + + +Node* AstGraphBuilder::BuildToBoolean(Node* value) { + // TODO(mstarzinger): Possible optimization is to NOP for boolean values. + return NewNode(javascript()->ToBoolean(), value); +} + + +Node* AstGraphBuilder::BuildThrowReferenceError(Variable* variable) { + // TODO(mstarzinger): Should be unified with the VisitThrow implementation. + Node* variable_name = jsgraph()->Constant(variable->name()); + Operator* op = javascript()->Runtime(Runtime::kThrowReferenceError, 1); + return NewNode(op, variable_name); +} + + +Node* AstGraphBuilder::BuildBinaryOp(Node* left, Node* right, Token::Value op) { + Operator* js_op; + switch (op) { + case Token::BIT_OR: + js_op = javascript()->BitwiseOr(); + break; + case Token::BIT_AND: + js_op = javascript()->BitwiseAnd(); + break; + case Token::BIT_XOR: + js_op = javascript()->BitwiseXor(); + break; + case Token::SHL: + js_op = javascript()->ShiftLeft(); + break; + case Token::SAR: + js_op = javascript()->ShiftRight(); + break; + case Token::SHR: + js_op = javascript()->ShiftRightLogical(); + break; + case Token::ADD: + js_op = javascript()->Add(); + break; + case Token::SUB: + js_op = javascript()->Subtract(); + break; + case Token::MUL: + js_op = javascript()->Multiply(); + break; + case Token::DIV: + js_op = javascript()->Divide(); + break; + case Token::MOD: + js_op = javascript()->Modulus(); + break; + default: + UNREACHABLE(); + js_op = NULL; + } + return NewNode(js_op, left, right); +} + + +void AstGraphBuilder::BuildLazyBailout(Node* node, BailoutId ast_id) { + if (OperatorProperties::CanLazilyDeoptimize(node->op())) { + // The deopting node should have an outgoing control dependency. + DCHECK(environment()->GetControlDependency() == node); + + StructuredGraphBuilder::Environment* continuation_env = environment(); + // Create environment for the deoptimization block, and build the block. + StructuredGraphBuilder::Environment* deopt_env = + CopyEnvironment(continuation_env); + set_environment(deopt_env); + + NewNode(common()->LazyDeoptimization()); + + // TODO(jarin) If ast_id.IsNone(), perhaps we should generate an empty + // deopt block and make sure there is no patch entry for this (so + // that the deoptimizer dies when trying to deoptimize here). + + Node* state_node = environment()->Checkpoint(ast_id); + + Node* deoptimize_node = NewNode(common()->Deoptimize(), state_node); + + UpdateControlDependencyToLeaveFunction(deoptimize_node); + + // Continue with the original environment. + set_environment(continuation_env); + + NewNode(common()->Continuation()); + } +} + + +void AstGraphBuilder::BuildLazyBailoutWithPushedNode(Node* node, + BailoutId ast_id) { + environment()->Push(node); + BuildLazyBailout(node, ast_id); + environment()->Pop(); +} +} +} +} // namespace v8::internal::compiler diff --git a/deps/v8/src/compiler/ast-graph-builder.h b/deps/v8/src/compiler/ast-graph-builder.h new file mode 100644 index 00000000000..861bd5baa36 --- /dev/null +++ b/deps/v8/src/compiler/ast-graph-builder.h @@ -0,0 +1,428 @@ +// Copyright 2014 the V8 project authors. All rights reserved. +// Use of this source code is governed by a BSD-style license that can be +// found in the LICENSE file. 
+ +#ifndef V8_COMPILER_AST_GRAPH_BUILDER_H_ +#define V8_COMPILER_AST_GRAPH_BUILDER_H_ + +#include "src/v8.h" + +#include "src/ast.h" +#include "src/compiler/graph-builder.h" +#include "src/compiler/js-graph.h" + +namespace v8 { +namespace internal { +namespace compiler { + +class ControlBuilder; +class LoopBuilder; +class Graph; + +// The AstGraphBuilder produces a high-level IR graph, based on an +// underlying AST. The produced graph can either be compiled into a +// stand-alone function or be wired into another graph for the purposes +// of function inlining. +class AstGraphBuilder : public StructuredGraphBuilder, public AstVisitor { + public: + AstGraphBuilder(CompilationInfo* info, JSGraph* jsgraph); + + // Creates a graph by visiting the entire AST. + bool CreateGraph(); + + protected: + class AstContext; + class AstEffectContext; + class AstValueContext; + class AstTestContext; + class BreakableScope; + class ContextScope; + class Environment; + + Environment* environment() { + return reinterpret_cast<Environment*>( + StructuredGraphBuilder::environment()); + } + + AstContext* ast_context() const { return ast_context_; } + BreakableScope* breakable() const { return breakable_; } + ContextScope* execution_context() const { return execution_context_; } + + void set_ast_context(AstContext* ctx) { ast_context_ = ctx; } + void set_breakable(BreakableScope* brk) { breakable_ = brk; } + void set_execution_context(ContextScope* ctx) { execution_context_ = ctx; } + + // Support for control flow builders. The concrete type of the environment + // depends on the graph builder, but environments themselves are not virtual. + typedef StructuredGraphBuilder::Environment BaseEnvironment; + virtual BaseEnvironment* CopyEnvironment(BaseEnvironment* env); + + // TODO(mstarzinger): The pipeline only needs to be a friend to access the + // function context. Remove as soon as the context is a parameter. + friend class Pipeline; + + // Getters for values in the activation record. + Node* GetFunctionClosure(); + Node* GetFunctionContext(); + + // + // The following build methods all generate graph fragments and return one + // resulting node. The operand stack height remains the same, variables and + // other dependencies tracked by the environment might be mutated though. + // + + // Builder to create a local function context. + Node* BuildLocalFunctionContext(Node* context, Node* closure); + + // Builder to create an arguments object if it is used. + Node* BuildArgumentsObject(Variable* arguments); + + // Builders for variable load and assignment. + Node* BuildVariableAssignment(Variable* var, Node* value, Token::Value op, + BailoutId bailout_id); + Node* BuildVariableDelete(Variable* var); + Node* BuildVariableLoad(Variable* var, BailoutId bailout_id, + ContextualMode mode = CONTEXTUAL); + + // Builders for accessing the function context. + Node* BuildLoadBuiltinsObject(); + Node* BuildLoadGlobalObject(); + Node* BuildLoadClosure(); + + // Builders for automatic type conversion. + Node* BuildToBoolean(Node* value); + + // Builders for error reporting at runtime. + Node* BuildThrowReferenceError(Variable* var); + + // Builders for dynamic hole-checks at runtime. + Node* BuildHoleCheckSilent(Node* value, Node* for_hole, Node* not_hole); + Node* BuildHoleCheckThrow(Node* value, Variable* var, Node* not_hole); + + // Builders for binary operations. 
+  Node* BuildBinaryOp(Node* left, Node* right, Token::Value op);
+
+#define DECLARE_VISIT(type) virtual void Visit##type(type* node);
+  // Visiting functions for AST nodes make this an AstVisitor.
+  AST_NODE_LIST(DECLARE_VISIT)
+#undef DECLARE_VISIT
+
+  // Visiting function for the declarations list is overridden.
+  virtual void VisitDeclarations(ZoneList<Declaration*>* declarations);
+
+ private:
+  CompilationInfo* info_;
+  AstContext* ast_context_;
+  JSGraph* jsgraph_;
+
+  // List of global declarations for functions and variables.
+  ZoneList<Handle<Object> > globals_;
+
+  // Stack of breakable statements entered by the visitor.
+  BreakableScope* breakable_;
+
+  // Stack of context objects pushed onto the chain by the visitor.
+  ContextScope* execution_context_;
+
+  // Nodes representing values in the activation record.
+  SetOncePointer<Node> function_closure_;
+  SetOncePointer<Node> function_context_;
+
+  CompilationInfo* info() { return info_; }
+  StrictMode strict_mode() { return info()->strict_mode(); }
+  JSGraph* jsgraph() { return jsgraph_; }
+  JSOperatorBuilder* javascript() { return jsgraph_->javascript(); }
+  ZoneList<Handle<Object> >* globals() { return &globals_; }
+
+  // Current scope during visitation.
+  inline Scope* current_scope() const;
+
+  // Process arguments to a call by popping {arity} elements off the operand
+  // stack and building a call node using the given call operator.
+  Node* ProcessArguments(Operator* op, int arity);
+
+  // Visit statements.
+  void VisitIfNotNull(Statement* stmt);
+
+  // Visit expressions.
+  void VisitForTest(Expression* expr);
+  void VisitForEffect(Expression* expr);
+  void VisitForValue(Expression* expr);
+  void VisitForValueOrNull(Expression* expr);
+  void VisitForValues(ZoneList<Expression*>* exprs);
+
+  // Common for all IterationStatement bodies.
+  void VisitIterationBody(IterationStatement* stmt, LoopBuilder* loop, int);
+
+  // Dispatched from VisitCallRuntime.
+  void VisitCallJSRuntime(CallRuntime* expr);
+
+  // Dispatched from VisitUnaryOperation.
+  void VisitDelete(UnaryOperation* expr);
+  void VisitVoid(UnaryOperation* expr);
+  void VisitTypeof(UnaryOperation* expr);
+  void VisitNot(UnaryOperation* expr);
+
+  // Dispatched from VisitBinaryOperation.
+  void VisitComma(BinaryOperation* expr);
+  void VisitLogicalExpression(BinaryOperation* expr);
+  void VisitArithmeticExpression(BinaryOperation* expr);
+
+  // Dispatched from VisitForInStatement.
+  void VisitForInAssignment(Expression* expr, Node* value);
+
+  void BuildLazyBailout(Node* node, BailoutId ast_id);
+  void BuildLazyBailoutWithPushedNode(Node* node, BailoutId ast_id);
+
+  DEFINE_AST_VISITOR_SUBCLASS_MEMBERS();
+  DISALLOW_COPY_AND_ASSIGN(AstGraphBuilder);
+};
+
+
+// The abstract execution environment for generated code consists of
+// parameter variables, local variables and the operand stack. The
+// environment will perform proper SSA-renaming of all tracked nodes
+// at split and merge points in the control flow.
Internally all the +// values are stored in one list using the following layout: +// +// [parameters (+receiver)] [locals] [operand stack] +// +class AstGraphBuilder::Environment + : public StructuredGraphBuilder::Environment { + public: + Environment(AstGraphBuilder* builder, Scope* scope, Node* control_dependency); + Environment(const Environment& copy); + + int parameters_count() const { return parameters_count_; } + int locals_count() const { return locals_count_; } + int stack_height() { + return static_cast<int>(values()->size()) - parameters_count_ - + locals_count_; + } + + // Operations on parameter or local variables. The parameter indices are + // shifted by 1 (receiver is parameter index -1 but environment index 0). + void Bind(Variable* variable, Node* node) { + DCHECK(variable->IsStackAllocated()); + if (variable->IsParameter()) { + values()->at(variable->index() + 1) = node; + parameters_dirty_ = true; + } else { + DCHECK(variable->IsStackLocal()); + values()->at(variable->index() + parameters_count_) = node; + locals_dirty_ = true; + } + } + Node* Lookup(Variable* variable) { + DCHECK(variable->IsStackAllocated()); + if (variable->IsParameter()) { + return values()->at(variable->index() + 1); + } else { + DCHECK(variable->IsStackLocal()); + return values()->at(variable->index() + parameters_count_); + } + } + + // Operations on the operand stack. + void Push(Node* node) { + values()->push_back(node); + stack_dirty_ = true; + } + Node* Top() { + DCHECK(stack_height() > 0); + return values()->back(); + } + Node* Pop() { + DCHECK(stack_height() > 0); + Node* back = values()->back(); + values()->pop_back(); + stack_dirty_ = true; + return back; + } + + // Direct mutations of the operand stack. + void Poke(int depth, Node* node) { + DCHECK(depth >= 0 && depth < stack_height()); + int index = static_cast<int>(values()->size()) - depth - 1; + values()->at(index) = node; + stack_dirty_ = true; + } + Node* Peek(int depth) { + DCHECK(depth >= 0 && depth < stack_height()); + int index = static_cast<int>(values()->size()) - depth - 1; + return values()->at(index); + } + void Drop(int depth) { + DCHECK(depth >= 0 && depth <= stack_height()); + values()->erase(values()->end() - depth, values()->end()); + stack_dirty_ = true; + } + + // Preserve a checkpoint of the environment for the IR graph. Any + // further mutation of the environment will not affect checkpoints. + Node* Checkpoint(BailoutId ast_id); + + private: + int parameters_count_; + int locals_count_; + Node* parameters_node_; + Node* locals_node_; + Node* stack_node_; + bool parameters_dirty_; + bool locals_dirty_; + bool stack_dirty_; +}; + + +// Each expression in the AST is evaluated in a specific context. This context +// decides how the evaluation result is passed up the visitor. +class AstGraphBuilder::AstContext BASE_EMBEDDED { + public: + bool IsEffect() const { return kind_ == Expression::kEffect; } + bool IsValue() const { return kind_ == Expression::kValue; } + bool IsTest() const { return kind_ == Expression::kTest; } + + // Plug a node into this expression context. Call this function in tail + // position in the Visit functions for expressions. + virtual void ProduceValue(Node* value) = 0; + virtual void ProduceValueWithLazyBailout(Node* value) = 0; + + // Unplugs a node from this expression context. Call this to retrieve the + // result of another Visit function that already plugged the context. + virtual Node* ConsumeValue() = 0; + + // Shortcut for "context->ProduceValue(context->ConsumeValue())". 
+  void ReplaceValue() { ProduceValue(ConsumeValue()); }
+
+ protected:
+  AstContext(AstGraphBuilder* owner, Expression::Context kind,
+             BailoutId bailout_id);
+  virtual ~AstContext();
+
+  AstGraphBuilder* owner() const { return owner_; }
+  Environment* environment() const { return owner_->environment(); }
+
+// We want to be able to assert, in a context-specific way, that the stack
+// height makes sense when the context is filled.
+#ifdef DEBUG
+  int original_height_;
+#endif
+
+  BailoutId bailout_id_;
+
+ private:
+  Expression::Context kind_;
+  AstGraphBuilder* owner_;
+  AstContext* outer_;
+};
+
+
+// Context to evaluate expression for its side effects only.
+class AstGraphBuilder::AstEffectContext V8_FINAL : public AstContext {
+ public:
+  explicit AstEffectContext(AstGraphBuilder* owner, BailoutId bailout_id)
+      : AstContext(owner, Expression::kEffect, bailout_id) {}
+  virtual ~AstEffectContext();
+  virtual void ProduceValue(Node* value) V8_OVERRIDE;
+  virtual void ProduceValueWithLazyBailout(Node* value) V8_OVERRIDE;
+  virtual Node* ConsumeValue() V8_OVERRIDE;
+};
+
+
+// Context to evaluate expression for its value (and side effects).
+class AstGraphBuilder::AstValueContext V8_FINAL : public AstContext {
+ public:
+  explicit AstValueContext(AstGraphBuilder* owner, BailoutId bailout_id)
+      : AstContext(owner, Expression::kValue, bailout_id) {}
+  virtual ~AstValueContext();
+  virtual void ProduceValue(Node* value) V8_OVERRIDE;
+  virtual void ProduceValueWithLazyBailout(Node* value) V8_OVERRIDE;
+  virtual Node* ConsumeValue() V8_OVERRIDE;
+};
+
+
+// Context to evaluate expression for a condition value (and side effects).
+class AstGraphBuilder::AstTestContext V8_FINAL : public AstContext {
+ public:
+  explicit AstTestContext(AstGraphBuilder* owner, BailoutId bailout_id)
+      : AstContext(owner, Expression::kTest, bailout_id) {}
+  virtual ~AstTestContext();
+  virtual void ProduceValue(Node* value) V8_OVERRIDE;
+  virtual void ProduceValueWithLazyBailout(Node* value) V8_OVERRIDE;
+  virtual Node* ConsumeValue() V8_OVERRIDE;
+};
+
+
+// Scoped class tracking breakable statements entered by the visitor. Allows
+// the visitor to properly 'break' and 'continue' iteration statements as
+// well as to 'break' out of blocks within switch statements.
+class AstGraphBuilder::BreakableScope BASE_EMBEDDED {
+ public:
+  BreakableScope(AstGraphBuilder* owner, BreakableStatement* target,
+                 ControlBuilder* control, int drop_extra)
+      : owner_(owner),
+        target_(target),
+        next_(owner->breakable()),
+        control_(control),
+        drop_extra_(drop_extra) {
+    owner_->set_breakable(this);  // Push.
+  }
+
+  ~BreakableScope() {
+    owner_->set_breakable(next_);  // Pop.
+  }
+
+  // Either 'break' or 'continue' the target statement.
+  void BreakTarget(BreakableStatement* target);
+  void ContinueTarget(BreakableStatement* target);
+
+ private:
+  AstGraphBuilder* owner_;
+  BreakableStatement* target_;
+  BreakableScope* next_;
+  ControlBuilder* control_;
+  int drop_extra_;
+
+  // Find the correct scope for the target statement. Note that this also drops
+  // extra operands from the environment for each scope skipped along the way.
+  BreakableScope* FindBreakable(BreakableStatement* target);
+};
+
+
+// Scoped class tracking context objects created by the visitor. Represents
+// mutations of the context chain within the function body and allows the
+// visitor to change the current {scope} and {context} during visitation.
+class AstGraphBuilder::ContextScope BASE_EMBEDDED { + public: + ContextScope(AstGraphBuilder* owner, Scope* scope, Node* context) + : owner_(owner), + next_(owner->execution_context()), + outer_(owner->current_context()), + scope_(scope) { + owner_->set_execution_context(this); // Push. + owner_->set_current_context(context); + } + + ~ContextScope() { + owner_->set_execution_context(next_); // Pop. + owner_->set_current_context(outer_); + } + + // Current scope during visitation. + Scope* scope() const { return scope_; } + + private: + AstGraphBuilder* owner_; + ContextScope* next_; + Node* outer_; + Scope* scope_; +}; + +Scope* AstGraphBuilder::current_scope() const { + return execution_context_->scope(); +} +} +} +} // namespace v8::internal::compiler + +#endif // V8_COMPILER_AST_GRAPH_BUILDER_H_ diff --git a/deps/v8/src/compiler/change-lowering.cc b/deps/v8/src/compiler/change-lowering.cc new file mode 100644 index 00000000000..3f8e45b9e71 --- /dev/null +++ b/deps/v8/src/compiler/change-lowering.cc @@ -0,0 +1,260 @@ +// Copyright 2014 the V8 project authors. All rights reserved. +// Use of this source code is governed by a BSD-style license that can be +// found in the LICENSE file. + +#include "src/compiler/change-lowering.h" + +#include "src/compiler/common-node-cache.h" +#include "src/compiler/graph.h" + +namespace v8 { +namespace internal { +namespace compiler { + +ChangeLoweringBase::ChangeLoweringBase(Graph* graph, Linkage* linkage, + CommonNodeCache* cache) + : graph_(graph), + isolate_(graph->zone()->isolate()), + linkage_(linkage), + cache_(cache), + common_(graph->zone()), + machine_(graph->zone()) {} + + +ChangeLoweringBase::~ChangeLoweringBase() {} + + +Node* ChangeLoweringBase::ExternalConstant(ExternalReference reference) { + Node** loc = cache()->FindExternalConstant(reference); + if (*loc == NULL) { + *loc = graph()->NewNode(common()->ExternalConstant(reference)); + } + return *loc; +} + + +Node* ChangeLoweringBase::HeapConstant(PrintableUnique<HeapObject> value) { + // TODO(bmeurer): Use common node cache. 
+ return graph()->NewNode(common()->HeapConstant(value)); +} + + +Node* ChangeLoweringBase::ImmovableHeapConstant(Handle<HeapObject> value) { + return HeapConstant( + PrintableUnique<HeapObject>::CreateImmovable(graph()->zone(), value)); +} + + +Node* ChangeLoweringBase::Int32Constant(int32_t value) { + Node** loc = cache()->FindInt32Constant(value); + if (*loc == NULL) { + *loc = graph()->NewNode(common()->Int32Constant(value)); + } + return *loc; +} + + +Node* ChangeLoweringBase::NumberConstant(double value) { + Node** loc = cache()->FindNumberConstant(value); + if (*loc == NULL) { + *loc = graph()->NewNode(common()->NumberConstant(value)); + } + return *loc; +} + + +Node* ChangeLoweringBase::CEntryStubConstant() { + if (!c_entry_stub_constant_.is_set()) { + c_entry_stub_constant_.set( + ImmovableHeapConstant(CEntryStub(isolate(), 1).GetCode())); + } + return c_entry_stub_constant_.get(); +} + + +Node* ChangeLoweringBase::TrueConstant() { + if (!true_constant_.is_set()) { + true_constant_.set( + ImmovableHeapConstant(isolate()->factory()->true_value())); + } + return true_constant_.get(); +} + + +Node* ChangeLoweringBase::FalseConstant() { + if (!false_constant_.is_set()) { + false_constant_.set( + ImmovableHeapConstant(isolate()->factory()->false_value())); + } + return false_constant_.get(); +} + + +Reduction ChangeLoweringBase::ChangeBitToBool(Node* val, Node* control) { + Node* branch = graph()->NewNode(common()->Branch(), val, control); + + Node* if_true = graph()->NewNode(common()->IfTrue(), branch); + Node* true_value = TrueConstant(); + + Node* if_false = graph()->NewNode(common()->IfFalse(), branch); + Node* false_value = FalseConstant(); + + Node* merge = graph()->NewNode(common()->Merge(2), if_true, if_false); + Node* phi = + graph()->NewNode(common()->Phi(2), true_value, false_value, merge); + + return Replace(phi); +} + + +template <size_t kPointerSize> +ChangeLowering<kPointerSize>::ChangeLowering(Graph* graph, Linkage* linkage) + : ChangeLoweringBase(graph, linkage, + new (graph->zone()) CommonNodeCache(graph->zone())) {} + + +template <size_t kPointerSize> +Reduction ChangeLowering<kPointerSize>::Reduce(Node* node) { + Node* control = graph()->start(); + Node* effect = control; + switch (node->opcode()) { + case IrOpcode::kChangeBitToBool: + return ChangeBitToBool(node->InputAt(0), control); + case IrOpcode::kChangeBoolToBit: + return ChangeBoolToBit(node->InputAt(0)); + case IrOpcode::kChangeInt32ToTagged: + return ChangeInt32ToTagged(node->InputAt(0), effect, control); + case IrOpcode::kChangeTaggedToFloat64: + return ChangeTaggedToFloat64(node->InputAt(0), effect, control); + default: + return NoChange(); + } + UNREACHABLE(); + return NoChange(); +} + + +template <> +Reduction ChangeLowering<4>::ChangeBoolToBit(Node* val) { + return Replace( + graph()->NewNode(machine()->Word32Equal(), val, TrueConstant())); +} + + +template <> +Reduction ChangeLowering<8>::ChangeBoolToBit(Node* val) { + return Replace( + graph()->NewNode(machine()->Word64Equal(), val, TrueConstant())); +} + + +template <> +Reduction ChangeLowering<4>::ChangeInt32ToTagged(Node* val, Node* effect, + Node* control) { + Node* context = NumberConstant(0); + + Node* add = graph()->NewNode(machine()->Int32AddWithOverflow(), val, val); + Node* ovf = graph()->NewNode(common()->Projection(1), add); + + Node* branch = graph()->NewNode(common()->Branch(), ovf, control); + + Node* if_true = graph()->NewNode(common()->IfTrue(), branch); + Node* number = graph()->NewNode(machine()->ChangeInt32ToFloat64(), val); + + 
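+  // What this lowering computes, written as scalar C++ (a sketch; Box stands
+  // in for the AllocateHeapNumber runtime call plus the field store built
+  // below, and is not a function defined in this patch):
+  //
+  //   int64_t sum = static_cast<int64_t>(val) + val;  // val << 1, plus ovf
+  //   if (sum == static_cast<int32_t>(sum))
+  //     return static_cast<int32_t>(sum);  // Smi: payload in the upper bits
+  //   return Box(static_cast<double>(val));
+  //
+  // On 32-bit targets a Smi is the integer shifted left by one with a zero
+  // tag bit, so val + val both tags the value and, via the overflow check,
+  // detects integers that do not fit into 31 bits and need a heap number.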
// TODO(bmeurer): Inline allocation if possible. + const Runtime::Function* fn = + Runtime::FunctionForId(Runtime::kAllocateHeapNumber); + DCHECK_EQ(0, fn->nargs); + CallDescriptor* desc = linkage()->GetRuntimeCallDescriptor( + fn->function_id, 0, Operator::kNoProperties); + Node* heap_number = + graph()->NewNode(common()->Call(desc), CEntryStubConstant(), + ExternalConstant(ExternalReference(fn, isolate())), + Int32Constant(0), context, effect, if_true); + + Node* store = graph()->NewNode( + machine()->Store(kMachineFloat64, kNoWriteBarrier), heap_number, + Int32Constant(HeapNumber::kValueOffset - kHeapObjectTag), number, effect, + heap_number); + + Node* if_false = graph()->NewNode(common()->IfFalse(), branch); + Node* smi = graph()->NewNode(common()->Projection(0), add); + + Node* merge = graph()->NewNode(common()->Merge(2), store, if_false); + Node* phi = graph()->NewNode(common()->Phi(2), heap_number, smi, merge); + + return Replace(phi); +} + + +template <> +Reduction ChangeLowering<8>::ChangeInt32ToTagged(Node* val, Node* effect, + Node* control) { + return Replace(graph()->NewNode( + machine()->Word64Shl(), val, + Int32Constant(SmiTagging<8>::kSmiShiftSize + kSmiTagSize))); +} + + +template <> +Reduction ChangeLowering<4>::ChangeTaggedToFloat64(Node* val, Node* effect, + Node* control) { + Node* branch = graph()->NewNode( + common()->Branch(), + graph()->NewNode(machine()->Word32And(), val, Int32Constant(kSmiTagMask)), + control); + + Node* if_true = graph()->NewNode(common()->IfTrue(), branch); + Node* load = graph()->NewNode( + machine()->Load(kMachineFloat64), val, + Int32Constant(HeapNumber::kValueOffset - kHeapObjectTag), if_true); + + Node* if_false = graph()->NewNode(common()->IfFalse(), branch); + Node* number = graph()->NewNode( + machine()->ChangeInt32ToFloat64(), + graph()->NewNode( + machine()->Word32Sar(), val, + Int32Constant(SmiTagging<4>::kSmiShiftSize + kSmiTagSize))); + + Node* merge = graph()->NewNode(common()->Merge(2), if_true, if_false); + Node* phi = graph()->NewNode(common()->Phi(2), load, number, merge); + + return Replace(phi); +} + + +template <> +Reduction ChangeLowering<8>::ChangeTaggedToFloat64(Node* val, Node* effect, + Node* control) { + Node* branch = graph()->NewNode( + common()->Branch(), + graph()->NewNode(machine()->Word64And(), val, Int32Constant(kSmiTagMask)), + control); + + Node* if_true = graph()->NewNode(common()->IfTrue(), branch); + Node* load = graph()->NewNode( + machine()->Load(kMachineFloat64), val, + Int32Constant(HeapNumber::kValueOffset - kHeapObjectTag), if_true); + + Node* if_false = graph()->NewNode(common()->IfFalse(), branch); + Node* number = graph()->NewNode( + machine()->ChangeInt32ToFloat64(), + graph()->NewNode( + machine()->ConvertInt64ToInt32(), + graph()->NewNode( + machine()->Word64Sar(), val, + Int32Constant(SmiTagging<8>::kSmiShiftSize + kSmiTagSize)))); + + Node* merge = graph()->NewNode(common()->Merge(2), if_true, if_false); + Node* phi = graph()->NewNode(common()->Phi(2), load, number, merge); + + return Replace(phi); +} + + +template class ChangeLowering<4>; +template class ChangeLowering<8>; + +} // namespace compiler +} // namespace internal +} // namespace v8 diff --git a/deps/v8/src/compiler/change-lowering.h b/deps/v8/src/compiler/change-lowering.h new file mode 100644 index 00000000000..3e16d800de7 --- /dev/null +++ b/deps/v8/src/compiler/change-lowering.h @@ -0,0 +1,79 @@ +// Copyright 2014 the V8 project authors. All rights reserved. 
+// Use of this source code is governed by a BSD-style license that can be
+// found in the LICENSE file.
+
+#ifndef V8_COMPILER_CHANGE_LOWERING_H_
+#define V8_COMPILER_CHANGE_LOWERING_H_
+
+#include "include/v8.h"
+#include "src/compiler/common-operator.h"
+#include "src/compiler/graph-reducer.h"
+#include "src/compiler/machine-operator.h"
+
+namespace v8 {
+namespace internal {
+namespace compiler {
+
+// Forward declarations.
+class CommonNodeCache;
+class Linkage;
+
+class ChangeLoweringBase : public Reducer {
+ public:
+  ChangeLoweringBase(Graph* graph, Linkage* linkage, CommonNodeCache* cache);
+  virtual ~ChangeLoweringBase();
+
+ protected:
+  Node* ExternalConstant(ExternalReference reference);
+  Node* HeapConstant(PrintableUnique<HeapObject> value);
+  Node* ImmovableHeapConstant(Handle<HeapObject> value);
+  Node* Int32Constant(int32_t value);
+  Node* NumberConstant(double value);
+  Node* CEntryStubConstant();
+  Node* TrueConstant();
+  Node* FalseConstant();
+
+  Reduction ChangeBitToBool(Node* val, Node* control);
+
+  Graph* graph() const { return graph_; }
+  Isolate* isolate() const { return isolate_; }
+  Linkage* linkage() const { return linkage_; }
+  CommonNodeCache* cache() const { return cache_; }
+  CommonOperatorBuilder* common() { return &common_; }
+  MachineOperatorBuilder* machine() { return &machine_; }
+
+ private:
+  Graph* graph_;
+  Isolate* isolate_;
+  Linkage* linkage_;
+  CommonNodeCache* cache_;
+  CommonOperatorBuilder common_;
+  MachineOperatorBuilder machine_;
+
+  SetOncePointer<Node> c_entry_stub_constant_;
+  SetOncePointer<Node> true_constant_;
+  SetOncePointer<Node> false_constant_;
+};
+
+
+template <size_t kPointerSize = kApiPointerSize>
+class ChangeLowering V8_FINAL : public ChangeLoweringBase {
+ public:
+  ChangeLowering(Graph* graph, Linkage* linkage);
+  ChangeLowering(Graph* graph, Linkage* linkage, CommonNodeCache* cache)
+      : ChangeLoweringBase(graph, linkage, cache) {}
+  virtual ~ChangeLowering() {}
+
+  virtual Reduction Reduce(Node* node) V8_OVERRIDE;
+
+ private:
+  Reduction ChangeBoolToBit(Node* val);
+  Reduction ChangeInt32ToTagged(Node* val, Node* effect, Node* control);
+  Reduction ChangeTaggedToFloat64(Node* val, Node* effect, Node* control);
+};
+
+}  // namespace compiler
+}  // namespace internal
+}  // namespace v8
+
+#endif  // V8_COMPILER_CHANGE_LOWERING_H_
diff --git a/deps/v8/src/compiler/code-generator-impl.h b/deps/v8/src/compiler/code-generator-impl.h
new file mode 100644
index 00000000000..a3f7e4c11d5
--- /dev/null
+++ b/deps/v8/src/compiler/code-generator-impl.h
@@ -0,0 +1,132 @@
+// Copyright 2013 the V8 project authors. All rights reserved.
+// Use of this source code is governed by a BSD-style license that can be
+// found in the LICENSE file.
+
+#ifndef V8_COMPILER_CODE_GENERATOR_IMPL_H_
+#define V8_COMPILER_CODE_GENERATOR_IMPL_H_
+
+#include "src/compiler/code-generator.h"
+#include "src/compiler/common-operator.h"
+#include "src/compiler/generic-graph.h"
+#include "src/compiler/instruction.h"
+#include "src/compiler/linkage.h"
+#include "src/compiler/machine-operator.h"
+#include "src/compiler/node.h"
+#include "src/compiler/opcodes.h"
+#include "src/compiler/operator.h"
+
+namespace v8 {
+namespace internal {
+namespace compiler {
+
+// Converts InstructionOperands from a given instruction to
+// architecture-specific registers and operands after they have been assigned
+// by the register allocator.
+class InstructionOperandConverter {
+ public:
+  InstructionOperandConverter(CodeGenerator* gen, Instruction* instr)
+      : gen_(gen), instr_(instr) {}
+
+  Register InputRegister(int index) {
+    return ToRegister(instr_->InputAt(index));
+  }
+
+  DoubleRegister InputDoubleRegister(int index) {
+    return ToDoubleRegister(instr_->InputAt(index));
+  }
+
+  double InputDouble(int index) { return ToDouble(instr_->InputAt(index)); }
+
+  int32_t InputInt32(int index) {
+    return ToConstant(instr_->InputAt(index)).ToInt32();
+  }
+
+  int8_t InputInt8(int index) { return static_cast<int8_t>(InputInt32(index)); }
+
+  int16_t InputInt16(int index) {
+    return static_cast<int16_t>(InputInt32(index));
+  }
+
+  uint8_t InputInt5(int index) {
+    return static_cast<uint8_t>(InputInt32(index) & 0x1F);
+  }
+
+  uint8_t InputInt6(int index) {
+    return static_cast<uint8_t>(InputInt32(index) & 0x3F);
+  }
+
+  Handle<HeapObject> InputHeapObject(int index) {
+    return ToHeapObject(instr_->InputAt(index));
+  }
+
+  Label* InputLabel(int index) {
+    return gen_->code()->GetLabel(InputBlock(index));
+  }
+
+  BasicBlock* InputBlock(int index) {
+    NodeId block_id = static_cast<NodeId>(InputInt32(index));
+    // The operand should be a block id.
+    DCHECK(block_id >= 0);
+    DCHECK(block_id < gen_->schedule()->BasicBlockCount());
+    return gen_->schedule()->GetBlockById(block_id);
+  }
+
+  Register OutputRegister(int index = 0) {
+    return ToRegister(instr_->OutputAt(index));
+  }
+
+  DoubleRegister OutputDoubleRegister() {
+    return ToDoubleRegister(instr_->Output());
+  }
+
+  Register TempRegister(int index) { return ToRegister(instr_->TempAt(index)); }
+
+  Register ToRegister(InstructionOperand* op) {
+    DCHECK(op->IsRegister());
+    return Register::FromAllocationIndex(op->index());
+  }
+
+  DoubleRegister ToDoubleRegister(InstructionOperand* op) {
+    DCHECK(op->IsDoubleRegister());
+    return DoubleRegister::FromAllocationIndex(op->index());
+  }
+
+  Constant ToConstant(InstructionOperand* operand) {
+    if (operand->IsImmediate()) {
+      return gen_->code()->GetImmediate(operand->index());
+    }
+    return gen_->code()->GetConstant(operand->index());
+  }
+
+  double ToDouble(InstructionOperand* operand) {
+    return ToConstant(operand).ToFloat64();
+  }
+
+  Handle<HeapObject> ToHeapObject(InstructionOperand* operand) {
+    return ToConstant(operand).ToHeapObject();
+  }
+
+  Frame* frame() const { return gen_->frame(); }
+  Isolate* isolate() const { return gen_->isolate(); }
+  Linkage* linkage() const { return gen_->linkage(); }
+
+ protected:
+  CodeGenerator* gen_;
+  Instruction* instr_;
+};
+
+
+// TODO(dcarney): generify this on bleeding_edge and replace this call
+// when merged.
+static inline void FinishCode(MacroAssembler* masm) {
+#if V8_TARGET_ARCH_ARM64 || V8_TARGET_ARCH_ARM
+  masm->CheckConstPool(true, false);
+#endif
+}
+
+}  // namespace compiler
+}  // namespace internal
+}  // namespace v8
+
+#endif  // V8_COMPILER_CODE_GENERATOR_IMPL_H_
diff --git a/deps/v8/src/compiler/code-generator.cc b/deps/v8/src/compiler/code-generator.cc
new file mode 100644
index 00000000000..878ace3be1e
--- /dev/null
+++ b/deps/v8/src/compiler/code-generator.cc
@@ -0,0 +1,381 @@
+// Copyright 2013 the V8 project authors. All rights reserved.
+// Use of this source code is governed by a BSD-style license that can be
+// found in the LICENSE file.
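+//
+// A note on usage: the InstructionOperandConverter from code-generator-impl.h
+// is consumed both below and by the architecture-specific backends, roughly
+// along these lines (a sketch; the opcode name and assembler mnemonic are
+// illustrative, not taken from this patch):
+//
+//   void CodeGenerator::AssembleArchInstruction(Instruction* instr) {
+//     InstructionOperandConverter i(this, instr);
+//     switch (/* architecture opcode decoded from instr->opcode() */) {
+//       case kSketchAdd32:
+//         masm()->add(i.OutputRegister(), i.InputRegister(0),
+//                     i.InputRegister(1));
+//         break;
+//       // ...
+//     }
+//   }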
+ +#include "src/compiler/code-generator.h" + +#include "src/compiler/code-generator-impl.h" +#include "src/compiler/linkage.h" +#include "src/compiler/pipeline.h" + +namespace v8 { +namespace internal { +namespace compiler { + +CodeGenerator::CodeGenerator(InstructionSequence* code) + : code_(code), + current_block_(NULL), + current_source_position_(SourcePosition::Invalid()), + masm_(code->zone()->isolate(), NULL, 0), + resolver_(this), + safepoints_(code->zone()), + lazy_deoptimization_entries_( + LazyDeoptimizationEntries::allocator_type(code->zone())), + deoptimization_states_( + DeoptimizationStates::allocator_type(code->zone())), + deoptimization_literals_(Literals::allocator_type(code->zone())), + translations_(code->zone()) { + deoptimization_states_.resize(code->GetDeoptimizationEntryCount(), NULL); +} + + +Handle<Code> CodeGenerator::GenerateCode() { + CompilationInfo* info = linkage()->info(); + + // Emit a code line info recording start event. + PositionsRecorder* recorder = masm()->positions_recorder(); + LOG_CODE_EVENT(isolate(), CodeStartLinePosInfoRecordEvent(recorder)); + + // Place function entry hook if requested to do so. + if (linkage()->GetIncomingDescriptor()->IsJSFunctionCall()) { + ProfileEntryHookStub::MaybeCallEntryHook(masm()); + } + + // Architecture-specific, linkage-specific prologue. + info->set_prologue_offset(masm()->pc_offset()); + AssemblePrologue(); + + // Assemble all instructions. + for (InstructionSequence::const_iterator i = code()->begin(); + i != code()->end(); ++i) { + AssembleInstruction(*i); + } + + FinishCode(masm()); + + safepoints()->Emit(masm(), frame()->GetSpillSlotCount()); + + // TODO(titzer): what are the right code flags here? + Code::Kind kind = Code::STUB; + if (linkage()->GetIncomingDescriptor()->IsJSFunctionCall()) { + kind = Code::OPTIMIZED_FUNCTION; + } + Handle<Code> result = v8::internal::CodeGenerator::MakeCodeEpilogue( + masm(), Code::ComputeFlags(kind), info); + result->set_is_turbofanned(true); + result->set_stack_slots(frame()->GetSpillSlotCount()); + result->set_safepoint_table_offset(safepoints()->GetCodeOffset()); + + PopulateDeoptimizationData(result); + + // Emit a code line info recording stop event. + void* line_info = recorder->DetachJITHandlerData(); + LOG_CODE_EVENT(isolate(), CodeEndLinePosInfoRecordEvent(*result, line_info)); + + return result; +} + + +void CodeGenerator::RecordSafepoint(PointerMap* pointers, Safepoint::Kind kind, + int arguments, + Safepoint::DeoptMode deopt_mode) { + const ZoneList<InstructionOperand*>* operands = + pointers->GetNormalizedOperands(); + Safepoint safepoint = + safepoints()->DefineSafepoint(masm(), kind, arguments, deopt_mode); + for (int i = 0; i < operands->length(); i++) { + InstructionOperand* pointer = operands->at(i); + if (pointer->IsStackSlot()) { + safepoint.DefinePointerSlot(pointer->index(), zone()); + } else if (pointer->IsRegister() && (kind & Safepoint::kWithRegisters)) { + Register reg = Register::FromAllocationIndex(pointer->index()); + safepoint.DefinePointerRegister(reg, zone()); + } + } +} + + +void CodeGenerator::AssembleInstruction(Instruction* instr) { + if (instr->IsBlockStart()) { + // Bind a label for a block start and handle parallel moves. + BlockStartInstruction* block_start = BlockStartInstruction::cast(instr); + current_block_ = block_start->block(); + if (FLAG_code_comments) { + // TODO(titzer): these code comments are a giant memory leak. 
+ Vector<char> buffer = Vector<char>::New(32); + SNPrintF(buffer, "-- B%d start --", block_start->block()->id()); + masm()->RecordComment(buffer.start()); + } + masm()->bind(block_start->label()); + } + if (instr->IsGapMoves()) { + // Handle parallel moves associated with the gap instruction. + AssembleGap(GapInstruction::cast(instr)); + } else if (instr->IsSourcePosition()) { + AssembleSourcePosition(SourcePositionInstruction::cast(instr)); + } else { + // Assemble architecture-specific code for the instruction. + AssembleArchInstruction(instr); + + // Assemble branches or boolean materializations after this instruction. + FlagsMode mode = FlagsModeField::decode(instr->opcode()); + FlagsCondition condition = FlagsConditionField::decode(instr->opcode()); + switch (mode) { + case kFlags_none: + return; + case kFlags_set: + return AssembleArchBoolean(instr, condition); + case kFlags_branch: + return AssembleArchBranch(instr, condition); + } + UNREACHABLE(); + } +} + + +void CodeGenerator::AssembleSourcePosition(SourcePositionInstruction* instr) { + SourcePosition source_position = instr->source_position(); + if (source_position == current_source_position_) return; + DCHECK(!source_position.IsInvalid()); + if (!source_position.IsUnknown()) { + int code_pos = source_position.raw(); + masm()->positions_recorder()->RecordPosition(source_position.raw()); + masm()->positions_recorder()->WriteRecordedPositions(); + if (FLAG_code_comments) { + Vector<char> buffer = Vector<char>::New(256); + CompilationInfo* info = linkage()->info(); + int ln = Script::GetLineNumber(info->script(), code_pos); + int cn = Script::GetColumnNumber(info->script(), code_pos); + if (info->script()->name()->IsString()) { + Handle<String> file(String::cast(info->script()->name())); + base::OS::SNPrintF(buffer.start(), buffer.length(), "-- %s:%d:%d --", + file->ToCString().get(), ln, cn); + } else { + base::OS::SNPrintF(buffer.start(), buffer.length(), + "-- <unknown>:%d:%d --", ln, cn); + } + masm()->RecordComment(buffer.start()); + } + } + current_source_position_ = source_position; +} + + +void CodeGenerator::AssembleGap(GapInstruction* instr) { + for (int i = GapInstruction::FIRST_INNER_POSITION; + i <= GapInstruction::LAST_INNER_POSITION; i++) { + GapInstruction::InnerPosition inner_pos = + static_cast<GapInstruction::InnerPosition>(i); + ParallelMove* move = instr->GetParallelMove(inner_pos); + if (move != NULL) resolver()->Resolve(move); + } +} + + +void CodeGenerator::PopulateDeoptimizationData(Handle<Code> code_object) { + CompilationInfo* info = linkage()->info(); + int deopt_count = code()->GetDeoptimizationEntryCount(); + int patch_count = static_cast<int>(lazy_deoptimization_entries_.size()); + if (patch_count == 0 && deopt_count == 0) return; + Handle<DeoptimizationInputData> data = DeoptimizationInputData::New( + isolate(), deopt_count, patch_count, TENURED); + + Handle<ByteArray> translation_array = + translations_.CreateByteArray(isolate()->factory()); + + data->SetTranslationByteArray(*translation_array); + data->SetInlinedFunctionCount(Smi::FromInt(0)); + data->SetOptimizationId(Smi::FromInt(info->optimization_id())); + // TODO(jarin) The following code was copied over from Lithium, not sure + // whether the scope or the IsOptimizing condition are really needed. + if (info->IsOptimizing()) { + // Reference to shared function info does not change between phases. 
+ AllowDeferredHandleDereference allow_handle_dereference; + data->SetSharedFunctionInfo(*info->shared_info()); + } else { + data->SetSharedFunctionInfo(Smi::FromInt(0)); + } + + Handle<FixedArray> literals = isolate()->factory()->NewFixedArray( + static_cast<int>(deoptimization_literals_.size()), TENURED); + { + AllowDeferredHandleDereference copy_handles; + for (unsigned i = 0; i < deoptimization_literals_.size(); i++) { + literals->set(i, *deoptimization_literals_[i]); + } + data->SetLiteralArray(*literals); + } + + // No OSR in Turbofan yet... + BailoutId osr_ast_id = BailoutId::None(); + data->SetOsrAstId(Smi::FromInt(osr_ast_id.ToInt())); + data->SetOsrPcOffset(Smi::FromInt(-1)); + + // Populate deoptimization entries. + for (int i = 0; i < deopt_count; i++) { + FrameStateDescriptor* descriptor = code()->GetDeoptimizationEntry(i); + data->SetAstId(i, descriptor->bailout_id()); + CHECK_NE(NULL, deoptimization_states_[i]); + data->SetTranslationIndex( + i, Smi::FromInt(deoptimization_states_[i]->translation_id_)); + data->SetArgumentsStackHeight(i, Smi::FromInt(0)); + data->SetPc(i, Smi::FromInt(-1)); + } + + // Populate the return address patcher entries. + for (int i = 0; i < patch_count; ++i) { + LazyDeoptimizationEntry entry = lazy_deoptimization_entries_[i]; + DCHECK(entry.position_after_call() == entry.continuation()->pos() || + IsNopForSmiCodeInlining(code_object, entry.position_after_call(), + entry.continuation()->pos())); + data->SetReturnAddressPc(i, Smi::FromInt(entry.position_after_call())); + data->SetPatchedAddressPc(i, Smi::FromInt(entry.deoptimization()->pos())); + } + + code_object->set_deoptimization_data(*data); +} + + +void CodeGenerator::RecordLazyDeoptimizationEntry(Instruction* instr) { + InstructionOperandConverter i(this, instr); + + Label after_call; + masm()->bind(&after_call); + + // The continuation and deoptimization are the last two inputs: + BasicBlock* cont_block = + i.InputBlock(static_cast<int>(instr->InputCount()) - 2); + BasicBlock* deopt_block = + i.InputBlock(static_cast<int>(instr->InputCount()) - 1); + + Label* cont_label = code_->GetLabel(cont_block); + Label* deopt_label = code_->GetLabel(deopt_block); + + lazy_deoptimization_entries_.push_back( + LazyDeoptimizationEntry(after_call.pos(), cont_label, deopt_label)); +} + + +int CodeGenerator::DefineDeoptimizationLiteral(Handle<Object> literal) { + int result = static_cast<int>(deoptimization_literals_.size()); + for (unsigned i = 0; i < deoptimization_literals_.size(); ++i) { + if (deoptimization_literals_[i].is_identical_to(literal)) return i; + } + deoptimization_literals_.push_back(literal); + return result; +} + + +void CodeGenerator::BuildTranslation(Instruction* instr, + int deoptimization_id) { + // We should build translation only once. 
+ DCHECK_EQ(NULL, deoptimization_states_[deoptimization_id]); + + FrameStateDescriptor* descriptor = + code()->GetDeoptimizationEntry(deoptimization_id); + Translation translation(&translations_, 1, 1, zone()); + translation.BeginJSFrame(descriptor->bailout_id(), + Translation::kSelfLiteralId, + descriptor->size() - descriptor->parameters_count()); + + for (int i = 0; i < descriptor->size(); i++) { + AddTranslationForOperand(&translation, instr, instr->InputAt(i)); + } + + deoptimization_states_[deoptimization_id] = + new (zone()) DeoptimizationState(translation.index()); +} + + +void CodeGenerator::AddTranslationForOperand(Translation* translation, + Instruction* instr, + InstructionOperand* op) { + if (op->IsStackSlot()) { + translation->StoreStackSlot(op->index()); + } else if (op->IsDoubleStackSlot()) { + translation->StoreDoubleStackSlot(op->index()); + } else if (op->IsRegister()) { + InstructionOperandConverter converter(this, instr); + translation->StoreRegister(converter.ToRegister(op)); + } else if (op->IsDoubleRegister()) { + InstructionOperandConverter converter(this, instr); + translation->StoreDoubleRegister(converter.ToDoubleRegister(op)); + } else if (op->IsImmediate()) { + InstructionOperandConverter converter(this, instr); + Constant constant = converter.ToConstant(op); + Handle<Object> constant_object; + switch (constant.type()) { + case Constant::kInt32: + constant_object = + isolate()->factory()->NewNumberFromInt(constant.ToInt32()); + break; + case Constant::kFloat64: + constant_object = + isolate()->factory()->NewHeapNumber(constant.ToFloat64()); + break; + case Constant::kHeapObject: + constant_object = constant.ToHeapObject(); + break; + default: + UNREACHABLE(); + } + int literal_id = DefineDeoptimizationLiteral(constant_object); + translation->StoreLiteral(literal_id); + } else { + UNREACHABLE(); + } +} + +#if !V8_TURBOFAN_BACKEND + +void CodeGenerator::AssembleArchInstruction(Instruction* instr) { + UNIMPLEMENTED(); +} + + +void CodeGenerator::AssembleArchBranch(Instruction* instr, + FlagsCondition condition) { + UNIMPLEMENTED(); +} + + +void CodeGenerator::AssembleArchBoolean(Instruction* instr, + FlagsCondition condition) { + UNIMPLEMENTED(); +} + + +void CodeGenerator::AssemblePrologue() { UNIMPLEMENTED(); } + + +void CodeGenerator::AssembleReturn() { UNIMPLEMENTED(); } + + +void CodeGenerator::AssembleMove(InstructionOperand* source, + InstructionOperand* destination) { + UNIMPLEMENTED(); +} + + +void CodeGenerator::AssembleSwap(InstructionOperand* source, + InstructionOperand* destination) { + UNIMPLEMENTED(); +} + + +void CodeGenerator::AddNopForSmiCodeInlining() { UNIMPLEMENTED(); } + + +#ifdef DEBUG +bool CodeGenerator::IsNopForSmiCodeInlining(Handle<Code> code, int start_pc, + int end_pc) { + UNIMPLEMENTED(); + return false; +} +#endif + +#endif // !V8_TURBOFAN_BACKEND + +} // namespace compiler +} // namespace internal +} // namespace v8 diff --git a/deps/v8/src/compiler/code-generator.h b/deps/v8/src/compiler/code-generator.h new file mode 100644 index 00000000000..b603c555c39 --- /dev/null +++ b/deps/v8/src/compiler/code-generator.h @@ -0,0 +1,146 @@ +// Copyright 2014 the V8 project authors. All rights reserved. +// Use of this source code is governed by a BSD-style license that can be +// found in the LICENSE file. 
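+
+// (Overview of the flow implemented in code-generator.cc: the CodeGenerator
+// declared below runs AssemblePrologue(), then one AssembleInstruction() per
+// instruction in the InstructionSequence, then emits the safepoint and
+// deoptimization tables, and finally hands the buffer to MakeCodeEpilogue().)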
+ +#ifndef V8_COMPILER_CODE_GENERATOR_H_ +#define V8_COMPILER_CODE_GENERATOR_H_ + +#include <deque> + +#include "src/compiler/gap-resolver.h" +#include "src/compiler/instruction.h" +#include "src/deoptimizer.h" +#include "src/macro-assembler.h" +#include "src/safepoint-table.h" + +namespace v8 { +namespace internal { +namespace compiler { + +// Generates native code for a sequence of instructions. +class CodeGenerator V8_FINAL : public GapResolver::Assembler { + public: + explicit CodeGenerator(InstructionSequence* code); + + // Generate native code. + Handle<Code> GenerateCode(); + + InstructionSequence* code() const { return code_; } + Frame* frame() const { return code()->frame(); } + Graph* graph() const { return code()->graph(); } + Isolate* isolate() const { return zone()->isolate(); } + Linkage* linkage() const { return code()->linkage(); } + Schedule* schedule() const { return code()->schedule(); } + + private: + MacroAssembler* masm() { return &masm_; } + GapResolver* resolver() { return &resolver_; } + SafepointTableBuilder* safepoints() { return &safepoints_; } + Zone* zone() const { return code()->zone(); } + + // Checks if {block} will appear directly after {current_block_} when + // assembling code, in which case, a fall-through can be used. + bool IsNextInAssemblyOrder(const BasicBlock* block) const { + return block->rpo_number_ == (current_block_->rpo_number_ + 1) && + block->deferred_ == current_block_->deferred_; + } + + // Record a safepoint with the given pointer map. + void RecordSafepoint(PointerMap* pointers, Safepoint::Kind kind, + int arguments, Safepoint::DeoptMode deopt_mode); + + // Assemble code for the specified instruction. + void AssembleInstruction(Instruction* instr); + void AssembleSourcePosition(SourcePositionInstruction* instr); + void AssembleGap(GapInstruction* gap); + + // =========================================================================== + // ============= Architecture-specific code generation methods. ============== + // =========================================================================== + + void AssembleArchInstruction(Instruction* instr); + void AssembleArchBranch(Instruction* instr, FlagsCondition condition); + void AssembleArchBoolean(Instruction* instr, FlagsCondition condition); + + // Generates an architecture-specific, descriptor-specific prologue + // to set up a stack frame. + void AssemblePrologue(); + // Generates an architecture-specific, descriptor-specific return sequence + // to tear down a stack frame. + void AssembleReturn(); + + // =========================================================================== + // ============== Architecture-specific gap resolver methods. ================ + // =========================================================================== + + // Interface used by the gap resolver to emit moves and swaps. 
+ virtual void AssembleMove(InstructionOperand* source, + InstructionOperand* destination) V8_OVERRIDE; + virtual void AssembleSwap(InstructionOperand* source, + InstructionOperand* destination) V8_OVERRIDE; + + // =========================================================================== + // Deoptimization table construction + void RecordLazyDeoptimizationEntry(Instruction* instr); + void PopulateDeoptimizationData(Handle<Code> code); + int DefineDeoptimizationLiteral(Handle<Object> literal); + void BuildTranslation(Instruction* instr, int deoptimization_id); + void AddTranslationForOperand(Translation* translation, Instruction* instr, + InstructionOperand* op); + void AddNopForSmiCodeInlining(); +#if DEBUG + static bool IsNopForSmiCodeInlining(Handle<Code> code, int start_pc, + int end_pc); +#endif // DEBUG + // =========================================================================== + + class LazyDeoptimizationEntry V8_FINAL { + public: + LazyDeoptimizationEntry(int position_after_call, Label* continuation, + Label* deoptimization) + : position_after_call_(position_after_call), + continuation_(continuation), + deoptimization_(deoptimization) {} + + int position_after_call() const { return position_after_call_; } + Label* continuation() const { return continuation_; } + Label* deoptimization() const { return deoptimization_; } + + private: + int position_after_call_; + Label* continuation_; + Label* deoptimization_; + }; + + struct DeoptimizationState : ZoneObject { + int translation_id_; + + explicit DeoptimizationState(int translation_id) + : translation_id_(translation_id) {} + }; + + typedef std::deque<LazyDeoptimizationEntry, + zone_allocator<LazyDeoptimizationEntry> > + LazyDeoptimizationEntries; + typedef std::deque<DeoptimizationState*, + zone_allocator<DeoptimizationState*> > + DeoptimizationStates; + typedef std::deque<Handle<Object>, zone_allocator<Handle<Object> > > Literals; + + InstructionSequence* code_; + BasicBlock* current_block_; + SourcePosition current_source_position_; + MacroAssembler masm_; + GapResolver resolver_; + SafepointTableBuilder safepoints_; + LazyDeoptimizationEntries lazy_deoptimization_entries_; + DeoptimizationStates deoptimization_states_; + Literals deoptimization_literals_; + TranslationBuffer translations_; +}; + +} // namespace compiler +} // namespace internal +} // namespace v8 + +#endif // V8_COMPILER_CODE_GENERATOR_H diff --git a/deps/v8/src/compiler/common-node-cache.h b/deps/v8/src/compiler/common-node-cache.h new file mode 100644 index 00000000000..2b0ac0b6e2b --- /dev/null +++ b/deps/v8/src/compiler/common-node-cache.h @@ -0,0 +1,51 @@ +// Copyright 2014 the V8 project authors. All rights reserved. +// Use of this source code is governed by a BSD-style license that can be +// found in the LICENSE file. + +#ifndef V8_COMPILER_COMMON_NODE_CACHE_H_ +#define V8_COMPILER_COMMON_NODE_CACHE_H_ + +#include "src/assembler.h" +#include "src/compiler/node-cache.h" + +namespace v8 { +namespace internal { +namespace compiler { + +// Bundles various caches for common nodes. +class CommonNodeCache V8_FINAL : public ZoneObject { + public: + explicit CommonNodeCache(Zone* zone) : zone_(zone) {} + + Node** FindInt32Constant(int32_t value) { + return int32_constants_.Find(zone_, value); + } + + Node** FindFloat64Constant(double value) { + // We canonicalize double constants at the bit representation level. 
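+    // (E.g. +0.0 and -0.0 have different bit patterns and thus get distinct
+    // cache entries, and a NaN maps to a stable entry even though NaN != NaN
+    // under floating-point comparison; a cache keyed on operator== would get
+    // both of those cases wrong.)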
+ return float64_constants_.Find(zone_, BitCast<int64_t>(value)); + } + + Node** FindExternalConstant(ExternalReference reference) { + return external_constants_.Find(zone_, reference.address()); + } + + Node** FindNumberConstant(double value) { + // We canonicalize double constants at the bit representation level. + return number_constants_.Find(zone_, BitCast<int64_t>(value)); + } + + Zone* zone() const { return zone_; } + + private: + Int32NodeCache int32_constants_; + Int64NodeCache float64_constants_; + PtrNodeCache external_constants_; + Int64NodeCache number_constants_; + Zone* zone_; +}; +} +} +} // namespace v8::internal::compiler + +#endif // V8_COMPILER_COMMON_NODE_CACHE_H_ diff --git a/deps/v8/src/compiler/common-operator.h b/deps/v8/src/compiler/common-operator.h new file mode 100644 index 00000000000..3b581ae0cd2 --- /dev/null +++ b/deps/v8/src/compiler/common-operator.h @@ -0,0 +1,284 @@ +// Copyright 2013 the V8 project authors. All rights reserved. +// Use of this source code is governed by a BSD-style license that can be +// found in the LICENSE file. + +#ifndef V8_COMPILER_COMMON_OPERATOR_H_ +#define V8_COMPILER_COMMON_OPERATOR_H_ + +#include "src/v8.h" + +#include "src/assembler.h" +#include "src/compiler/linkage.h" +#include "src/compiler/opcodes.h" +#include "src/compiler/operator.h" +#include "src/unique.h" + +namespace v8 { +namespace internal { + +class OStream; + +namespace compiler { + +class ControlOperator : public Operator1<int> { + public: + ControlOperator(IrOpcode::Value opcode, uint16_t properties, int inputs, + int outputs, int controls, const char* mnemonic) + : Operator1<int>(opcode, properties, inputs, outputs, mnemonic, + controls) {} + + virtual OStream& PrintParameter(OStream& os) const { return os; } // NOLINT + int ControlInputCount() const { return parameter(); } +}; + +class CallOperator : public Operator1<CallDescriptor*> { + public: + CallOperator(CallDescriptor* descriptor, const char* mnemonic) + : Operator1<CallDescriptor*>( + IrOpcode::kCall, descriptor->properties(), descriptor->InputCount(), + descriptor->ReturnCount(), mnemonic, descriptor) {} + + virtual OStream& PrintParameter(OStream& os) const { // NOLINT + return os << "[" << *parameter() << "]"; + } +}; + +// Interface for building common operators that can be used at any level of IR, +// including JavaScript, mid-level, and low-level. +// TODO(titzer): Move the mnemonics into SimpleOperator and Operator1 classes. +class CommonOperatorBuilder { + public: + explicit CommonOperatorBuilder(Zone* zone) : zone_(zone) {} + +#define CONTROL_OP(name, inputs, controls) \ + return new (zone_) ControlOperator(IrOpcode::k##name, Operator::kFoldable, \ + inputs, 0, controls, #name); + + Operator* Start(int num_formal_parameters) { + // Outputs are formal parameters, plus context, receiver, and JSFunction. 
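+    // (E.g. a function with 2 formal parameters yields 2 + 3 == 5 value
+    // outputs: the 2 formals plus context, receiver, and the JSFunction.)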
+ int outputs = num_formal_parameters + 3; + return new (zone_) ControlOperator(IrOpcode::kStart, Operator::kFoldable, 0, + outputs, 0, "Start"); + } + Operator* Dead() { CONTROL_OP(Dead, 0, 0); } + Operator* End() { CONTROL_OP(End, 0, 1); } + Operator* Branch() { CONTROL_OP(Branch, 1, 1); } + Operator* IfTrue() { CONTROL_OP(IfTrue, 0, 1); } + Operator* IfFalse() { CONTROL_OP(IfFalse, 0, 1); } + Operator* Throw() { CONTROL_OP(Throw, 1, 1); } + Operator* LazyDeoptimization() { CONTROL_OP(LazyDeoptimization, 0, 1); } + Operator* Continuation() { CONTROL_OP(Continuation, 0, 1); } + + Operator* Deoptimize() { + return new (zone_) + ControlOperator(IrOpcode::kDeoptimize, 0, 1, 0, 1, "Deoptimize"); + } + + Operator* Return() { + return new (zone_) ControlOperator(IrOpcode::kReturn, 0, 1, 0, 1, "Return"); + } + + Operator* Merge(int controls) { + return new (zone_) ControlOperator(IrOpcode::kMerge, Operator::kFoldable, 0, + 0, controls, "Merge"); + } + + Operator* Loop(int controls) { + return new (zone_) ControlOperator(IrOpcode::kLoop, Operator::kFoldable, 0, + 0, controls, "Loop"); + } + + Operator* Parameter(int index) { + return new (zone_) Operator1<int>(IrOpcode::kParameter, Operator::kPure, 1, + 1, "Parameter", index); + } + Operator* Int32Constant(int32_t value) { + return new (zone_) Operator1<int>(IrOpcode::kInt32Constant, Operator::kPure, + 0, 1, "Int32Constant", value); + } + Operator* Int64Constant(int64_t value) { + return new (zone_) + Operator1<int64_t>(IrOpcode::kInt64Constant, Operator::kPure, 0, 1, + "Int64Constant", value); + } + Operator* Float64Constant(double value) { + return new (zone_) + Operator1<double>(IrOpcode::kFloat64Constant, Operator::kPure, 0, 1, + "Float64Constant", value); + } + Operator* ExternalConstant(ExternalReference value) { + return new (zone_) Operator1<ExternalReference>(IrOpcode::kExternalConstant, + Operator::kPure, 0, 1, + "ExternalConstant", value); + } + Operator* NumberConstant(double value) { + return new (zone_) + Operator1<double>(IrOpcode::kNumberConstant, Operator::kPure, 0, 1, + "NumberConstant", value); + } + Operator* HeapConstant(PrintableUnique<Object> value) { + return new (zone_) Operator1<PrintableUnique<Object> >( + IrOpcode::kHeapConstant, Operator::kPure, 0, 1, "HeapConstant", value); + } + Operator* Phi(int arguments) { + DCHECK(arguments > 0); // Disallow empty phis. + return new (zone_) Operator1<int>(IrOpcode::kPhi, Operator::kPure, + arguments, 1, "Phi", arguments); + } + Operator* EffectPhi(int arguments) { + DCHECK(arguments > 0); // Disallow empty phis. 
+ return new (zone_) Operator1<int>(IrOpcode::kEffectPhi, Operator::kPure, 0, + 0, "EffectPhi", arguments); + } + Operator* StateValues(int arguments) { + return new (zone_) Operator1<int>(IrOpcode::kStateValues, Operator::kPure, + arguments, 1, "StateValues", arguments); + } + Operator* FrameState(BailoutId ast_id) { + return new (zone_) Operator1<BailoutId>( + IrOpcode::kFrameState, Operator::kPure, 3, 1, "FrameState", ast_id); + } + Operator* Call(CallDescriptor* descriptor) { + return new (zone_) CallOperator(descriptor, "Call"); + } + Operator* Projection(int index) { + return new (zone_) Operator1<int>(IrOpcode::kProjection, Operator::kPure, 1, + 1, "Projection", index); + } + + private: + Zone* zone_; +}; + + +template <typename T> +struct CommonOperatorTraits { + static inline bool Equals(T a, T b); + static inline bool HasValue(Operator* op); + static inline T ValueOf(Operator* op); +}; + +template <> +struct CommonOperatorTraits<int32_t> { + static inline bool Equals(int32_t a, int32_t b) { return a == b; } + static inline bool HasValue(Operator* op) { + return op->opcode() == IrOpcode::kInt32Constant || + op->opcode() == IrOpcode::kNumberConstant; + } + static inline int32_t ValueOf(Operator* op) { + if (op->opcode() == IrOpcode::kNumberConstant) { + // TODO(titzer): cache the converted int32 value in NumberConstant. + return FastD2I(reinterpret_cast<Operator1<double>*>(op)->parameter()); + } + CHECK_EQ(IrOpcode::kInt32Constant, op->opcode()); + return static_cast<Operator1<int32_t>*>(op)->parameter(); + } +}; + +template <> +struct CommonOperatorTraits<uint32_t> { + static inline bool Equals(uint32_t a, uint32_t b) { return a == b; } + static inline bool HasValue(Operator* op) { + return CommonOperatorTraits<int32_t>::HasValue(op); + } + static inline uint32_t ValueOf(Operator* op) { + if (op->opcode() == IrOpcode::kNumberConstant) { + // TODO(titzer): cache the converted uint32 value in NumberConstant. 
+ return FastD2UI(reinterpret_cast<Operator1<double>*>(op)->parameter()); + } + return static_cast<uint32_t>(CommonOperatorTraits<int32_t>::ValueOf(op)); + } +}; + +template <> +struct CommonOperatorTraits<int64_t> { + static inline bool Equals(int64_t a, int64_t b) { return a == b; } + static inline bool HasValue(Operator* op) { + return op->opcode() == IrOpcode::kInt32Constant || + op->opcode() == IrOpcode::kInt64Constant || + op->opcode() == IrOpcode::kNumberConstant; + } + static inline int64_t ValueOf(Operator* op) { + if (op->opcode() == IrOpcode::kInt32Constant) { + return static_cast<int64_t>(CommonOperatorTraits<int32_t>::ValueOf(op)); + } + CHECK_EQ(IrOpcode::kInt64Constant, op->opcode()); + return static_cast<Operator1<int64_t>*>(op)->parameter(); + } +}; + +template <> +struct CommonOperatorTraits<uint64_t> { + static inline bool Equals(uint64_t a, uint64_t b) { return a == b; } + static inline bool HasValue(Operator* op) { + return CommonOperatorTraits<int64_t>::HasValue(op); + } + static inline uint64_t ValueOf(Operator* op) { + return static_cast<uint64_t>(CommonOperatorTraits<int64_t>::ValueOf(op)); + } +}; + +template <> +struct CommonOperatorTraits<double> { + static inline bool Equals(double a, double b) { + return DoubleRepresentation(a).bits == DoubleRepresentation(b).bits; + } + static inline bool HasValue(Operator* op) { + return op->opcode() == IrOpcode::kFloat64Constant || + op->opcode() == IrOpcode::kInt32Constant || + op->opcode() == IrOpcode::kNumberConstant; + } + static inline double ValueOf(Operator* op) { + if (op->opcode() == IrOpcode::kFloat64Constant || + op->opcode() == IrOpcode::kNumberConstant) { + return reinterpret_cast<Operator1<double>*>(op)->parameter(); + } + return static_cast<double>(CommonOperatorTraits<int32_t>::ValueOf(op)); + } +}; + +template <> +struct CommonOperatorTraits<ExternalReference> { + static inline bool Equals(ExternalReference a, ExternalReference b) { + return a == b; + } + static inline bool HasValue(Operator* op) { + return op->opcode() == IrOpcode::kExternalConstant; + } + static inline ExternalReference ValueOf(Operator* op) { + CHECK_EQ(IrOpcode::kExternalConstant, op->opcode()); + return static_cast<Operator1<ExternalReference>*>(op)->parameter(); + } +}; + +template <typename T> +struct CommonOperatorTraits<PrintableUnique<T> > { + static inline bool HasValue(Operator* op) { + return op->opcode() == IrOpcode::kHeapConstant; + } + static inline PrintableUnique<T> ValueOf(Operator* op) { + CHECK_EQ(IrOpcode::kHeapConstant, op->opcode()); + return static_cast<Operator1<PrintableUnique<T> >*>(op)->parameter(); + } +}; + +template <typename T> +struct CommonOperatorTraits<Handle<T> > { + static inline bool HasValue(Operator* op) { + return CommonOperatorTraits<PrintableUnique<T> >::HasValue(op); + } + static inline Handle<T> ValueOf(Operator* op) { + return CommonOperatorTraits<PrintableUnique<T> >::ValueOf(op).handle(); + } +}; + + +template <typename T> +inline T ValueOf(Operator* op) { + return CommonOperatorTraits<T>::ValueOf(op); +} +} +} +} // namespace v8::internal::compiler + +#endif // V8_COMPILER_COMMON_OPERATOR_H_ diff --git a/deps/v8/src/compiler/control-builders.cc b/deps/v8/src/compiler/control-builders.cc new file mode 100644 index 00000000000..3b7d05ba555 --- /dev/null +++ b/deps/v8/src/compiler/control-builders.cc @@ -0,0 +1,144 @@ +// Copyright 2013 the V8 project authors. All rights reserved. +// Use of this source code is governed by a BSD-style license that can be +// found in the LICENSE file. 
+ +#include "control-builders.h" + +namespace v8 { +namespace internal { +namespace compiler { + + +void IfBuilder::If(Node* condition) { + builder_->NewBranch(condition); + else_environment_ = environment()->CopyForConditional(); +} + + +void IfBuilder::Then() { builder_->NewIfTrue(); } + + +void IfBuilder::Else() { + builder_->NewMerge(); + then_environment_ = environment(); + set_environment(else_environment_); + builder_->NewIfFalse(); +} + + +void IfBuilder::End() { + then_environment_->Merge(environment()); + set_environment(then_environment_); +} + + +void LoopBuilder::BeginLoop() { + builder_->NewLoop(); + loop_environment_ = environment()->CopyForLoop(); + continue_environment_ = environment()->CopyAsUnreachable(); + break_environment_ = environment()->CopyAsUnreachable(); +} + + +void LoopBuilder::Continue() { + continue_environment_->Merge(environment()); + environment()->MarkAsUnreachable(); +} + + +void LoopBuilder::Break() { + break_environment_->Merge(environment()); + environment()->MarkAsUnreachable(); +} + + +void LoopBuilder::EndBody() { + continue_environment_->Merge(environment()); + set_environment(continue_environment_); +} + + +void LoopBuilder::EndLoop() { + loop_environment_->Merge(environment()); + set_environment(break_environment_); +} + + +void LoopBuilder::BreakUnless(Node* condition) { + IfBuilder control_if(builder_); + control_if.If(condition); + control_if.Then(); + control_if.Else(); + Break(); + control_if.End(); +} + + +void SwitchBuilder::BeginSwitch() { + body_environment_ = environment()->CopyAsUnreachable(); + label_environment_ = environment()->CopyAsUnreachable(); + break_environment_ = environment()->CopyAsUnreachable(); + body_environments_.AddBlock(NULL, case_count(), zone()); +} + + +void SwitchBuilder::BeginLabel(int index, Node* condition) { + builder_->NewBranch(condition); + label_environment_ = environment()->CopyForConditional(); + builder_->NewIfTrue(); + body_environments_[index] = environment(); +} + + +void SwitchBuilder::EndLabel() { + set_environment(label_environment_); + builder_->NewIfFalse(); +} + + +void SwitchBuilder::DefaultAt(int index) { + label_environment_ = environment()->CopyAsUnreachable(); + body_environments_[index] = environment(); +} + + +void SwitchBuilder::BeginCase(int index) { + set_environment(body_environments_[index]); + environment()->Merge(body_environment_); +} + + +void SwitchBuilder::Break() { + break_environment_->Merge(environment()); + environment()->MarkAsUnreachable(); +} + + +void SwitchBuilder::EndCase() { body_environment_ = environment(); } + + +void SwitchBuilder::EndSwitch() { + break_environment_->Merge(label_environment_); + break_environment_->Merge(environment()); + set_environment(break_environment_); +} + + +void BlockBuilder::BeginBlock() { + break_environment_ = environment()->CopyAsUnreachable(); +} + + +void BlockBuilder::Break() { + break_environment_->Merge(environment()); + environment()->MarkAsUnreachable(); +} + + +void BlockBuilder::EndBlock() { + break_environment_->Merge(environment()); + set_environment(break_environment_); +} +} +} +} // namespace v8::internal::compiler diff --git a/deps/v8/src/compiler/control-builders.h b/deps/v8/src/compiler/control-builders.h new file mode 100644 index 00000000000..695282be8aa --- /dev/null +++ b/deps/v8/src/compiler/control-builders.h @@ -0,0 +1,144 @@ +// Copyright 2013 the V8 project authors. All rights reserved. +// Use of this source code is governed by a BSD-style license that can be +// found in the LICENSE file. 
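+
+// A sketch of the intended call protocol for the builders declared below,
+// lowering `if (cond) { A } else { B }` (builder and cond are assumed to come
+// from the surrounding StructuredGraphBuilder; A and B stand for emitting the
+// bodies). LoopBuilder::BreakUnless() in control-builders.cc is a real
+// instance of this pattern:
+//
+//   IfBuilder control_if(builder);
+//   control_if.If(cond);  // branch on cond; snapshot the else-environment
+//   control_if.Then();    // enter the true projection; emit A here
+//   control_if.Else();    // save then-env, switch to else-env; emit B here
+//   control_if.End();     // merge the then- and else-environments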
+ +#ifndef V8_COMPILER_CONTROL_BUILDERS_H_ +#define V8_COMPILER_CONTROL_BUILDERS_H_ + +#include "src/v8.h" + +#include "src/compiler/graph-builder.h" +#include "src/compiler/node.h" + +namespace v8 { +namespace internal { +namespace compiler { + + +// Base class for all control builders. Also provides a common interface for +// control builders to handle 'break' and 'continue' statements when they are +// used to model breakable statements. +class ControlBuilder { + public: + explicit ControlBuilder(StructuredGraphBuilder* builder) + : builder_(builder) {} + virtual ~ControlBuilder() {} + + // Interface for break and continue. + virtual void Break() { UNREACHABLE(); } + virtual void Continue() { UNREACHABLE(); } + + protected: + typedef StructuredGraphBuilder Builder; + typedef StructuredGraphBuilder::Environment Environment; + + Zone* zone() const { return builder_->zone(); } + Environment* environment() { return builder_->environment(); } + void set_environment(Environment* env) { builder_->set_environment(env); } + + Builder* builder_; +}; + + +// Tracks control flow for a conditional statement. +class IfBuilder : public ControlBuilder { + public: + explicit IfBuilder(StructuredGraphBuilder* builder) + : ControlBuilder(builder), + then_environment_(NULL), + else_environment_(NULL) {} + + // Primitive control commands. + void If(Node* condition); + void Then(); + void Else(); + void End(); + + private: + Environment* then_environment_; // Environment after the 'then' body. + Environment* else_environment_; // Environment for the 'else' body. +}; + + +// Tracks control flow for an iteration statement. +class LoopBuilder : public ControlBuilder { + public: + explicit LoopBuilder(StructuredGraphBuilder* builder) + : ControlBuilder(builder), + loop_environment_(NULL), + continue_environment_(NULL), + break_environment_(NULL) {} + + // Primitive control commands. + void BeginLoop(); + void EndBody(); + void EndLoop(); + + // Primitive support for break and continue. + virtual void Continue(); + virtual void Break(); + + // Compound control command for conditional break. + void BreakUnless(Node* condition); + + private: + Environment* loop_environment_; // Environment of the loop header. + Environment* continue_environment_; // Environment after the loop body. + Environment* break_environment_; // Environment after the loop exits. +}; + + +// Tracks control flow for a switch statement. +class SwitchBuilder : public ControlBuilder { + public: + explicit SwitchBuilder(StructuredGraphBuilder* builder, int case_count) + : ControlBuilder(builder), + body_environment_(NULL), + label_environment_(NULL), + break_environment_(NULL), + body_environments_(case_count, zone()) {} + + // Primitive control commands. + void BeginSwitch(); + void BeginLabel(int index, Node* condition); + void EndLabel(); + void DefaultAt(int index); + void BeginCase(int index); + void EndCase(); + void EndSwitch(); + + // Primitive support for break. + virtual void Break(); + + // The number of cases within a switch is statically known. + int case_count() const { return body_environments_.capacity(); } + + private: + Environment* body_environment_; // Environment after last case body. + Environment* label_environment_; // Environment for next label condition. + Environment* break_environment_; // Environment after the switch exits. + ZoneList<Environment*> body_environments_; +}; + + +// Tracks control flow for a block statement. 
+class BlockBuilder : public ControlBuilder { + public: + explicit BlockBuilder(StructuredGraphBuilder* builder) + : ControlBuilder(builder), break_environment_(NULL) {} + + // Primitive control commands. + void BeginBlock(); + void EndBlock(); + + // Primitive support for break. + virtual void Break(); + + private: + Environment* break_environment_; // Environment after the block exits. +}; +} +} +} // namespace v8::internal::compiler + +#endif // V8_COMPILER_CONTROL_BUILDERS_H_ diff --git a/deps/v8/src/compiler/frame.h b/deps/v8/src/compiler/frame.h new file mode 100644 index 00000000000..afcbc3706ae --- /dev/null +++ b/deps/v8/src/compiler/frame.h @@ -0,0 +1,104 @@ +// Copyright 2014 the V8 project authors. All rights reserved. +// Use of this source code is governed by a BSD-style license that can be +// found in the LICENSE file. + +#ifndef V8_COMPILER_FRAME_H_ +#define V8_COMPILER_FRAME_H_ + +#include "src/v8.h" + +#include "src/data-flow.h" + +namespace v8 { +namespace internal { +namespace compiler { + +// Collects the spill slot requirements and the allocated general and double +// registers for a compiled function. Frames are usually populated by the +// register allocator and are used by Linkage to generate code for the prologue +// and epilogue to compiled code. +class Frame { + public: + Frame() + : register_save_area_size_(0), + spill_slot_count_(0), + double_spill_slot_count_(0), + allocated_registers_(NULL), + allocated_double_registers_(NULL) {} + + inline int GetSpillSlotCount() { return spill_slot_count_; } + inline int GetDoubleSpillSlotCount() { return double_spill_slot_count_; } + + void SetAllocatedRegisters(BitVector* regs) { + DCHECK(allocated_registers_ == NULL); + allocated_registers_ = regs; + } + + void SetAllocatedDoubleRegisters(BitVector* regs) { + DCHECK(allocated_double_registers_ == NULL); + allocated_double_registers_ = regs; + } + + bool DidAllocateDoubleRegisters() { + return !allocated_double_registers_->IsEmpty(); + } + + void SetRegisterSaveAreaSize(int size) { + DCHECK(IsAligned(size, kPointerSize)); + register_save_area_size_ = size; + } + + int GetRegisterSaveAreaSize() { return register_save_area_size_; } + + int AllocateSpillSlot(bool is_double) { + // If 32-bit, skip one if the new slot is a double. + if (is_double) { + if (kDoubleSize > kPointerSize) { + DCHECK(kDoubleSize == kPointerSize * 2); + spill_slot_count_++; + spill_slot_count_ |= 1; + } + double_spill_slot_count_++; + } + return spill_slot_count_++; + } + + private: + int register_save_area_size_; + int spill_slot_count_; + int double_spill_slot_count_; + BitVector* allocated_registers_; + BitVector* allocated_double_registers_; +}; + + +// Represents an offset from either the stack pointer or frame pointer. +class FrameOffset { + public: + inline bool from_stack_pointer() { return (offset_ & 1) == kFromSp; } + inline bool from_frame_pointer() { return (offset_ & 1) == kFromFp; } + inline int offset() { return offset_ & ~1; } + + inline static FrameOffset FromStackPointer(int offset) { + DCHECK((offset & 1) == 0); + return FrameOffset(offset | kFromSp); + } + + inline static FrameOffset FromFramePointer(int offset) { + DCHECK((offset & 1) == 0); + return FrameOffset(offset | kFromFp); + } + + private: + explicit FrameOffset(int offset) : offset_(offset) {} + + int offset_; // Encodes SP or FP in the low order bit. 
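+  // (Sketch of the encoding: offsets are pointer-aligned, so bit 0 is free
+  // to use as a tag. FromStackPointer(8) stores 8 | kFromSp == 9; offset()
+  // masks the tag off again (9 & ~1 == 8) and from_stack_pointer() tests
+  // bit 0.)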
+ + static const int kFromSp = 1; + static const int kFromFp = 0; +}; +} +} +} // namespace v8::internal::compiler + +#endif // V8_COMPILER_FRAME_H_ diff --git a/deps/v8/src/compiler/gap-resolver.cc b/deps/v8/src/compiler/gap-resolver.cc new file mode 100644 index 00000000000..f369607170b --- /dev/null +++ b/deps/v8/src/compiler/gap-resolver.cc @@ -0,0 +1,136 @@ +// Copyright 2014 the V8 project authors. All rights reserved. +// Use of this source code is governed by a BSD-style license that can be +// found in the LICENSE file. + +#include "src/compiler/gap-resolver.h" + +#include <algorithm> +#include <functional> +#include <set> + +namespace v8 { +namespace internal { +namespace compiler { + +typedef ZoneList<MoveOperands>::iterator op_iterator; + +#ifdef ENABLE_SLOW_DCHECKS +// TODO(svenpanne) Brush up InstructionOperand with comparison? +struct InstructionOperandComparator { + bool operator()(const InstructionOperand* x, + const InstructionOperand* y) const { + return (x->kind() < y->kind()) || + (x->kind() == y->kind() && x->index() < y->index()); + } +}; +#endif + +// No operand should be the destination for more than one move. +static void VerifyMovesAreInjective(ZoneList<MoveOperands>* moves) { +#ifdef ENABLE_SLOW_DCHECKS + std::set<InstructionOperand*, InstructionOperandComparator> seen; + for (op_iterator i = moves->begin(); i != moves->end(); ++i) { + SLOW_DCHECK(seen.find(i->destination()) == seen.end()); + seen.insert(i->destination()); + } +#endif +} + + +void GapResolver::Resolve(ParallelMove* parallel_move) const { + ZoneList<MoveOperands>* moves = parallel_move->move_operands(); + // TODO(svenpanne) Use the member version of remove_if when we use real lists. + op_iterator end = + std::remove_if(moves->begin(), moves->end(), + std::mem_fun_ref(&MoveOperands::IsRedundant)); + moves->Rewind(static_cast<int>(end - moves->begin())); + + VerifyMovesAreInjective(moves); + + for (op_iterator move = moves->begin(); move != moves->end(); ++move) { + if (!move->IsEliminated()) PerformMove(moves, &*move); + } +} + + +void GapResolver::PerformMove(ZoneList<MoveOperands>* moves, + MoveOperands* move) const { + // Each call to this function performs a move and deletes it from the move + // graph. We first recursively perform any move blocking this one. We mark a + // move as "pending" on entry to PerformMove in order to detect cycles in the + // move graph. We use operand swaps to resolve cycles, which means that a + // call to PerformMove could change any source operand in the move graph. + DCHECK(!move->IsPending()); + DCHECK(!move->IsRedundant()); + + // Clear this move's destination to indicate a pending move. The actual + // destination is saved on the side. + DCHECK_NOT_NULL(move->source()); // Or else it will look eliminated. + InstructionOperand* destination = move->destination(); + move->set_destination(NULL); + + // Perform a depth-first traversal of the move graph to resolve dependencies. + // Any unperformed, unpending move with a source the same as this one's + // destination blocks this one so recursively perform all such moves. + for (op_iterator other = moves->begin(); other != moves->end(); ++other) { + if (other->Blocks(destination) && !other->IsPending()) { + // Though PerformMove can change any source operand in the move graph, + // this call cannot create a blocking move via a swap (this loop does not + // miss any). Assume there is a non-blocking move with source A and this + // move is blocked on source B and there is a swap of A and B. 
Then A and + // B must be involved in the same cycle (or they would not be swapped). + // Since this move's destination is B and there is only a single incoming + // edge to an operand, this move must also be involved in the same cycle. + // In that case, the blocking move will be created but will be "pending" + // when we return from PerformMove. + PerformMove(moves, other); + } + } + + // We are about to resolve this move and don't need it marked as pending, so + // restore its destination. + move->set_destination(destination); + + // This move's source may have changed due to swaps to resolve cycles and so + // it may now be the last move in the cycle. If so remove it. + InstructionOperand* source = move->source(); + if (source->Equals(destination)) { + move->Eliminate(); + return; + } + + // The move may be blocked on a (at most one) pending move, in which case we + // have a cycle. Search for such a blocking move and perform a swap to + // resolve it. + op_iterator blocker = std::find_if( + moves->begin(), moves->end(), + std::bind2nd(std::mem_fun_ref(&MoveOperands::Blocks), destination)); + if (blocker == moves->end()) { + // The easy case: This move is not blocked. + assembler_->AssembleMove(source, destination); + move->Eliminate(); + return; + } + + DCHECK(blocker->IsPending()); + // Ensure source is a register or both are stack slots, to limit swap cases. + if (source->IsStackSlot() || source->IsDoubleStackSlot()) { + std::swap(source, destination); + } + assembler_->AssembleSwap(source, destination); + move->Eliminate(); + + // Any unperformed (including pending) move with a source of either this + // move's source or destination needs to have their source changed to + // reflect the state of affairs after the swap. + for (op_iterator other = moves->begin(); other != moves->end(); ++other) { + if (other->Blocks(source)) { + other->set_source(destination); + } else if (other->Blocks(destination)) { + other->set_source(source); + } + } +} +} +} +} // namespace v8::internal::compiler diff --git a/deps/v8/src/compiler/gap-resolver.h b/deps/v8/src/compiler/gap-resolver.h new file mode 100644 index 00000000000..5c3aeada6e7 --- /dev/null +++ b/deps/v8/src/compiler/gap-resolver.h @@ -0,0 +1,46 @@ +// Copyright 2014 the V8 project authors. All rights reserved. +// Use of this source code is governed by a BSD-style license that can be +// found in the LICENSE file. + +#ifndef V8_COMPILER_GAP_RESOLVER_H_ +#define V8_COMPILER_GAP_RESOLVER_H_ + +#include "src/compiler/instruction.h" + +namespace v8 { +namespace internal { +namespace compiler { + +class GapResolver V8_FINAL { + public: + // Interface used by the gap resolver to emit moves and swaps. + class Assembler { + public: + virtual ~Assembler() {} + + // Assemble move. + virtual void AssembleMove(InstructionOperand* source, + InstructionOperand* destination) = 0; + // Assemble swap. + virtual void AssembleSwap(InstructionOperand* source, + InstructionOperand* destination) = 0; + }; + + explicit GapResolver(Assembler* assembler) : assembler_(assembler) {} + + // Resolve a set of parallel moves, emitting assembler instructions. + void Resolve(ParallelMove* parallel_move) const; + + private: + // Perform the given move, possibly requiring other moves to satisfy + // dependencies. + void PerformMove(ZoneList<MoveOperands>* moves, MoveOperands* move) const; + + // Assembler used to emit moves and save registers. 
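+  // A swap is only requested when the moves form a cycle. (Sketch: for the
+  // parallel move {r1 <- r2, r2 <- r1}, PerformMove on the first move marks
+  // it pending and recurses into the second; the second finds the pending
+  // first move as its blocker and emits a single AssembleSwap, after which
+  // the first move sees source == destination and is eliminated.)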
+ Assembler* const assembler_; +}; +} +} +} // namespace v8::internal::compiler + +#endif // V8_COMPILER_GAP_RESOLVER_H_ diff --git a/deps/v8/src/compiler/generic-algorithm-inl.h b/deps/v8/src/compiler/generic-algorithm-inl.h new file mode 100644 index 00000000000..a25131f6960 --- /dev/null +++ b/deps/v8/src/compiler/generic-algorithm-inl.h @@ -0,0 +1,48 @@ +// Copyright 2013 the V8 project authors. All rights reserved. +// Use of this source code is governed by a BSD-style license that can be +// found in the LICENSE file. + +#ifndef V8_COMPILER_GENERIC_ALGORITHM_INL_H_ +#define V8_COMPILER_GENERIC_ALGORITHM_INL_H_ + +#include <vector> + +#include "src/compiler/generic-algorithm.h" +#include "src/compiler/generic-graph.h" +#include "src/compiler/generic-node.h" +#include "src/compiler/generic-node-inl.h" + +namespace v8 { +namespace internal { +namespace compiler { + +template <class N> +class NodeInputIterationTraits { + public: + typedef N Node; + typedef typename N::Inputs::iterator Iterator; + + static Iterator begin(Node* node) { return node->inputs().begin(); } + static Iterator end(Node* node) { return node->inputs().end(); } + static int max_id(GenericGraphBase* graph) { return graph->NodeCount(); } + static Node* to(Iterator iterator) { return *iterator; } + static Node* from(Iterator iterator) { return iterator.edge().from(); } +}; + +template <class N> +class NodeUseIterationTraits { + public: + typedef N Node; + typedef typename N::Uses::iterator Iterator; + + static Iterator begin(Node* node) { return node->uses().begin(); } + static Iterator end(Node* node) { return node->uses().end(); } + static int max_id(GenericGraphBase* graph) { return graph->NodeCount(); } + static Node* to(Iterator iterator) { return *iterator; } + static Node* from(Iterator iterator) { return iterator.edge().to(); } +}; +} +} +} // namespace v8::internal::compiler + +#endif // V8_COMPILER_GENERIC_ALGORITHM_INL_H_ diff --git a/deps/v8/src/compiler/generic-algorithm.h b/deps/v8/src/compiler/generic-algorithm.h new file mode 100644 index 00000000000..607d339ae40 --- /dev/null +++ b/deps/v8/src/compiler/generic-algorithm.h @@ -0,0 +1,136 @@ +// Copyright 2013 the V8 project authors. All rights reserved. +// Use of this source code is governed by a BSD-style license that can be +// found in the LICENSE file. + +#ifndef V8_COMPILER_GENERIC_ALGORITHM_H_ +#define V8_COMPILER_GENERIC_ALGORITHM_H_ + +#include <deque> +#include <stack> + +#include "src/compiler/generic-graph.h" +#include "src/compiler/generic-node.h" +#include "src/zone-containers.h" + +namespace v8 { +namespace internal { +namespace compiler { + +// GenericGraphVisit allows visitation of graphs of nodes and edges in pre- and +// post-order. Visitation uses an explicitly allocated stack rather than the +// execution stack to avoid stack overflow. Although GenericGraphVisit is +// primarily intended to traverse networks of nodes through their +// dependencies and uses, it also can be used to visit any graph-like network +// by specifying custom traits. 
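+//
+// A minimal sketch of a conforming visitor over nodes of a hypothetical
+// concrete Node type (NodeInputIterationTraits is defined in
+// generic-algorithm-inl.h):
+//
+//   struct CountingVisitor {
+//     int count;
+//     GenericGraphVisit::Control Pre(Node* node) {
+//       count++;
+//       return GenericGraphVisit::CONTINUE;
+//     }
+//     GenericGraphVisit::Control Post(Node* node) {
+//       return GenericGraphVisit::CONTINUE;
+//     }
+//     void PreEdge(Node* from, int index, Node* to) {}
+//     void PostEdge(Node* from, int index, Node* to) {}
+//   };
+//
+//   CountingVisitor visitor = {0};
+//   GenericGraphVisit::Visit<CountingVisitor,
+//                            NodeInputIterationTraits<Node> >(
+//       graph, graph->end(), &visitor);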
+class GenericGraphVisit { + public: + enum Control { + CONTINUE = 0x0, // Continue depth-first normally + SKIP = 0x1, // Skip this node and its successors + REENTER = 0x2, // Allow reentering this node + DEFER = SKIP | REENTER + }; + + // struct Visitor { + // Control Pre(Traits::Node* current); + // Control Post(Traits::Node* current); + // void PreEdge(Traits::Node* from, int index, Traits::Node* to); + // void PostEdge(Traits::Node* from, int index, Traits::Node* to); + // } + template <class Visitor, class Traits, class RootIterator> + static void Visit(GenericGraphBase* graph, RootIterator root_begin, + RootIterator root_end, Visitor* visitor) { + // TODO(bmeurer): Pass "local" zone as parameter. + Zone* zone = graph->zone(); + typedef typename Traits::Node Node; + typedef typename Traits::Iterator Iterator; + typedef std::pair<Iterator, Iterator> NodeState; + typedef zone_allocator<NodeState> ZoneNodeStateAllocator; + typedef std::deque<NodeState, ZoneNodeStateAllocator> NodeStateDeque; + typedef std::stack<NodeState, NodeStateDeque> NodeStateStack; + NodeStateStack stack((NodeStateDeque(ZoneNodeStateAllocator(zone)))); + BoolVector visited(Traits::max_id(graph), false, ZoneBoolAllocator(zone)); + Node* current = *root_begin; + while (true) { + DCHECK(current != NULL); + const int id = current->id(); + DCHECK(id >= 0); + DCHECK(id < Traits::max_id(graph)); // Must be a valid id. + bool visit = !GetVisited(&visited, id); + if (visit) { + Control control = visitor->Pre(current); + visit = !IsSkip(control); + if (!IsReenter(control)) SetVisited(&visited, id, true); + } + Iterator begin(visit ? Traits::begin(current) : Traits::end(current)); + Iterator end(Traits::end(current)); + stack.push(NodeState(begin, end)); + Node* post_order_node = current; + while (true) { + NodeState top = stack.top(); + if (top.first == top.second) { + if (visit) { + Control control = visitor->Post(post_order_node); + DCHECK(!IsSkip(control)); + SetVisited(&visited, post_order_node->id(), !IsReenter(control)); + } + stack.pop(); + if (stack.empty()) { + if (++root_begin == root_end) return; + current = *root_begin; + break; + } + post_order_node = Traits::from(stack.top().first); + visit = true; + } else { + visitor->PreEdge(Traits::from(top.first), top.first.edge().index(), + Traits::to(top.first)); + current = Traits::to(top.first); + if (!GetVisited(&visited, current->id())) break; + } + top = stack.top(); + visitor->PostEdge(Traits::from(top.first), top.first.edge().index(), + Traits::to(top.first)); + ++stack.top().first; + } + } + } + + template <class Visitor, class Traits> + static void Visit(GenericGraphBase* graph, typename Traits::Node* current, + Visitor* visitor) { + typename Traits::Node* array[] = {current}; + Visit<Visitor, Traits>(graph, &array[0], &array[1], visitor); + } + + template <class B, class S> + struct NullNodeVisitor { + Control Pre(GenericNode<B, S>* node) { return CONTINUE; } + Control Post(GenericNode<B, S>* node) { return CONTINUE; } + void PreEdge(GenericNode<B, S>* from, int index, GenericNode<B, S>* to) {} + void PostEdge(GenericNode<B, S>* from, int index, GenericNode<B, S>* to) {} + }; + + private: + static bool IsSkip(Control c) { return c & SKIP; } + static bool IsReenter(Control c) { return c & REENTER; } + + // TODO(turbofan): resizing could be optionally templatized away. + static void SetVisited(BoolVector* visited, int id, bool value) { + if (id >= static_cast<int>(visited->size())) { + // Resize and set all values to unvisited. 
+ visited->resize((3 * id) / 2, false); + } + visited->at(id) = value; + } + + static bool GetVisited(BoolVector* visited, int id) { + if (id >= static_cast<int>(visited->size())) return false; + return visited->at(id); + } +}; +} +} +} // namespace v8::internal::compiler + +#endif // V8_COMPILER_GENERIC_ALGORITHM_H_ diff --git a/deps/v8/src/compiler/generic-graph.h b/deps/v8/src/compiler/generic-graph.h new file mode 100644 index 00000000000..a5554565453 --- /dev/null +++ b/deps/v8/src/compiler/generic-graph.h @@ -0,0 +1,53 @@ +// Copyright 2013 the V8 project authors. All rights reserved. +// Use of this source code is governed by a BSD-style license that can be +// found in the LICENSE file. + +#ifndef V8_COMPILER_GENERIC_GRAPH_H_ +#define V8_COMPILER_GENERIC_GRAPH_H_ + +#include "src/compiler/generic-node.h" + +namespace v8 { +namespace internal { + +class Zone; + +namespace compiler { + +class GenericGraphBase : public ZoneObject { + public: + explicit GenericGraphBase(Zone* zone) : zone_(zone), next_node_id_(0) {} + + Zone* zone() const { return zone_; } + + NodeId NextNodeID() { return next_node_id_++; } + NodeId NodeCount() const { return next_node_id_; } + + private: + Zone* zone_; + NodeId next_node_id_; +}; + +template <class V> +class GenericGraph : public GenericGraphBase { + public: + explicit GenericGraph(Zone* zone) + : GenericGraphBase(zone), start_(NULL), end_(NULL) {} + + V* start() { return start_; } + V* end() { return end_; } + + void SetStart(V* start) { start_ = start; } + void SetEnd(V* end) { end_ = end; } + + private: + V* start_; + V* end_; + + DISALLOW_COPY_AND_ASSIGN(GenericGraph); +}; +} +} +} // namespace v8::internal::compiler + +#endif // V8_COMPILER_GENERIC_GRAPH_H_ diff --git a/deps/v8/src/compiler/generic-node-inl.h b/deps/v8/src/compiler/generic-node-inl.h new file mode 100644 index 00000000000..51d1a50162d --- /dev/null +++ b/deps/v8/src/compiler/generic-node-inl.h @@ -0,0 +1,245 @@ +// Copyright 2013 the V8 project authors. All rights reserved. +// Use of this source code is governed by a BSD-style license that can be +// found in the LICENSE file. 
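+
+// (Layout note for GenericNode::New below: a node and its initial input and
+// use records live in one contiguous zone allocation,
+// [GenericNode | Input[input_count] | Use[input_count]], which is why the
+// constructor can point inputs_.static_ at `this + 1`.)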
+ +#ifndef V8_COMPILER_GENERIC_NODE_INL_H_ +#define V8_COMPILER_GENERIC_NODE_INL_H_ + +#include "src/v8.h" + +#include "src/compiler/generic-graph.h" +#include "src/compiler/generic-node.h" +#include "src/zone.h" + +namespace v8 { +namespace internal { +namespace compiler { + +template <class B, class S> +GenericNode<B, S>::GenericNode(GenericGraphBase* graph, int input_count) + : BaseClass(graph->zone()), + input_count_(input_count), + has_appendable_inputs_(false), + use_count_(0), + first_use_(NULL), + last_use_(NULL) { + inputs_.static_ = reinterpret_cast<Input*>(this + 1), AssignUniqueID(graph); +} + +template <class B, class S> +inline void GenericNode<B, S>::AssignUniqueID(GenericGraphBase* graph) { + id_ = graph->NextNodeID(); +} + +template <class B, class S> +inline typename GenericNode<B, S>::Inputs::iterator +GenericNode<B, S>::Inputs::begin() { + return typename GenericNode<B, S>::Inputs::iterator(this->node_, 0); +} + +template <class B, class S> +inline typename GenericNode<B, S>::Inputs::iterator +GenericNode<B, S>::Inputs::end() { + return typename GenericNode<B, S>::Inputs::iterator( + this->node_, this->node_->InputCount()); +} + +template <class B, class S> +inline typename GenericNode<B, S>::Uses::iterator +GenericNode<B, S>::Uses::begin() { + return typename GenericNode<B, S>::Uses::iterator(this->node_); +} + +template <class B, class S> +inline typename GenericNode<B, S>::Uses::iterator +GenericNode<B, S>::Uses::end() { + return typename GenericNode<B, S>::Uses::iterator(); +} + +template <class B, class S> +void GenericNode<B, S>::ReplaceUses(GenericNode* replace_to) { + for (Use* use = first_use_; use != NULL; use = use->next) { + use->from->GetInputRecordPtr(use->input_index)->to = replace_to; + } + if (replace_to->last_use_ == NULL) { + DCHECK_EQ(NULL, replace_to->first_use_); + replace_to->first_use_ = first_use_; + } else { + DCHECK_NE(NULL, replace_to->first_use_); + replace_to->last_use_->next = first_use_; + first_use_->prev = replace_to->last_use_; + } + replace_to->last_use_ = last_use_; + replace_to->use_count_ += use_count_; + use_count_ = 0; + first_use_ = NULL; + last_use_ = NULL; +} + +template <class B, class S> +template <class UnaryPredicate> +void GenericNode<B, S>::ReplaceUsesIf(UnaryPredicate pred, + GenericNode* replace_to) { + for (Use* use = first_use_; use != NULL;) { + Use* next = use->next; + if (pred(static_cast<S*>(use->from))) { + RemoveUse(use); + replace_to->AppendUse(use); + use->from->GetInputRecordPtr(use->input_index)->to = replace_to; + } + use = next; + } +} + +template <class B, class S> +void GenericNode<B, S>::RemoveAllInputs() { + for (typename Inputs::iterator iter(inputs().begin()); iter != inputs().end(); + ++iter) { + iter.GetInput()->Update(NULL); + } +} + +template <class B, class S> +void GenericNode<B, S>::TrimInputCount(int new_input_count) { + if (new_input_count == input_count_) return; // Nothing to do. + + DCHECK(new_input_count < input_count_); + + // Update inline inputs. 
+ for (int i = new_input_count; i < input_count_; i++) { + typename GenericNode<B, S>::Input* input = GetInputRecordPtr(i); + input->Update(NULL); + } + input_count_ = new_input_count; +} + +template <class B, class S> +void GenericNode<B, S>::ReplaceInput(int index, GenericNode<B, S>* new_to) { + Input* input = GetInputRecordPtr(index); + input->Update(new_to); +} + +template <class B, class S> +void GenericNode<B, S>::Input::Update(GenericNode<B, S>* new_to) { + GenericNode* old_to = this->to; + if (new_to == old_to) return; // Nothing to do. + // Snip out the use from where it used to be + if (old_to != NULL) { + old_to->RemoveUse(use); + } + to = new_to; + // And put it into the new node's use list. + if (new_to != NULL) { + new_to->AppendUse(use); + } else { + use->next = NULL; + use->prev = NULL; + } +} + +template <class B, class S> +void GenericNode<B, S>::EnsureAppendableInputs(Zone* zone) { + if (!has_appendable_inputs_) { + void* deque_buffer = zone->New(sizeof(InputDeque)); + InputDeque* deque = new (deque_buffer) InputDeque(ZoneInputAllocator(zone)); + for (int i = 0; i < input_count_; ++i) { + deque->push_back(inputs_.static_[i]); + } + inputs_.appendable_ = deque; + has_appendable_inputs_ = true; + } +} + +template <class B, class S> +void GenericNode<B, S>::AppendInput(Zone* zone, GenericNode<B, S>* to_append) { + EnsureAppendableInputs(zone); + Use* new_use = new (zone) Use; + Input new_input; + new_input.to = to_append; + new_input.use = new_use; + inputs_.appendable_->push_back(new_input); + new_use->input_index = input_count_; + new_use->from = this; + to_append->AppendUse(new_use); + input_count_++; +} + +template <class B, class S> +void GenericNode<B, S>::InsertInput(Zone* zone, int index, + GenericNode<B, S>* to_insert) { + DCHECK(index >= 0 && index < InputCount()); + // TODO(turbofan): Optimize this implementation! 
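+  // (Sketch: inserting X at index 1 into inputs [A, B, C] first appends a
+  // copy of the last input, giving [A, B, C, C], then shifts down to
+  // [A, B, B, C], and finally overwrites index 1, producing [A, X, B, C].)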
+ AppendInput(zone, InputAt(InputCount() - 1));
+ for (int i = InputCount() - 1; i > index; --i) {
+ ReplaceInput(i, InputAt(i - 1));
+ }
+ ReplaceInput(index, to_insert);
+}
+
+template <class B, class S>
+void GenericNode<B, S>::AppendUse(Use* use) {
+ use->next = NULL;
+ use->prev = last_use_;
+ if (last_use_ == NULL) {
+ first_use_ = use;
+ } else {
+ last_use_->next = use;
+ }
+ last_use_ = use;
+ ++use_count_;
+}
+
+template <class B, class S>
+void GenericNode<B, S>::RemoveUse(Use* use) {
+ if (last_use_ == use) {
+ last_use_ = use->prev;
+ }
+ if (use->prev != NULL) {
+ use->prev->next = use->next;
+ } else {
+ first_use_ = use->next;
+ }
+ if (use->next != NULL) {
+ use->next->prev = use->prev;
+ }
+ --use_count_;
+}
+
+template <class B, class S>
+inline bool GenericNode<B, S>::OwnedBy(GenericNode* owner) const {
+ return first_use_ != NULL && first_use_->from == owner &&
+ first_use_->next == NULL;
+}
+
+template <class B, class S>
+S* GenericNode<B, S>::New(GenericGraphBase* graph, int input_count,
+ S** inputs) {
+ size_t node_size = sizeof(GenericNode);
+ size_t inputs_size = input_count * sizeof(Input);
+ size_t uses_size = input_count * sizeof(Use);
+ int size = static_cast<int>(node_size + inputs_size + uses_size);
+ Zone* zone = graph->zone();
+ void* buffer = zone->New(size);
+ S* result = new (buffer) S(graph, input_count);
+ Input* input =
+ reinterpret_cast<Input*>(reinterpret_cast<char*>(buffer) + node_size);
+ Use* use =
+ reinterpret_cast<Use*>(reinterpret_cast<char*>(input) + inputs_size);
+
+ for (int current = 0; current < input_count; ++current) {
+ GenericNode* to = *inputs++;
+ input->to = to;
+ input->use = use;
+ use->input_index = current;
+ use->from = result;
+ to->AppendUse(use);
+ ++use;
+ ++input;
+ }
+ return result;
+}
+}
+}
+} // namespace v8::internal::compiler
+
+#endif // V8_COMPILER_GENERIC_NODE_INL_H_ diff --git a/deps/v8/src/compiler/generic-node.h b/deps/v8/src/compiler/generic-node.h new file mode 100644 index 00000000000..287d852f5ed --- /dev/null +++ b/deps/v8/src/compiler/generic-node.h @@ -0,0 +1,271 @@ +// Copyright 2013 the V8 project authors. All rights reserved. +// Use of this source code is governed by a BSD-style license that can be +// found in the LICENSE file.
+
+#ifndef V8_COMPILER_GENERIC_NODE_H_
+#define V8_COMPILER_GENERIC_NODE_H_
+
+#include <deque>
+
+#include "src/v8.h"
+
+#include "src/compiler/operator.h"
+#include "src/zone.h"
+#include "src/zone-allocator.h"
+
+namespace v8 {
+namespace internal {
+namespace compiler {
+
+class Operator;
+class GenericGraphBase;
+
+typedef int NodeId;
+
+// A GenericNode<> is the basic primitive of graphs. GenericNodes are
+// chained together by input/use chains, but otherwise contain by default only
+// an identifying number which specific applications of graphs and nodes can
+// use to index auxiliary out-of-line data, especially transient data.
+// Specializations of the templatized GenericNode<> class must provide a base
+// class B that contains all of the members to be made available in each
+// specialized Node instance. GenericNode uses a mixin template pattern to
+// ensure that common accessors and methods expect the derived class S type
+// rather than the GenericNode<B, S> type.
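The mixin arrangement this comment describes is worth a standalone illustration before the real class definition below: B supplies the storage every node shares, while S is the concrete node type that all accessors hand back, so client code never sees the template. A minimal sketch of the same pattern (all names here are illustrative, not part of this patch):

#include <cassert>

// B: members every node carries. S: the concrete node type handed back
// to callers (the "curiously recurring" template parameter).
template <class B, class S>
class GenericNodeSketch : public B {
 public:
  // Accessors are written once here but return the derived type S.
  S* self() { return static_cast<S*>(this); }
};

struct NodeData { int id; };  // plays the role of the base class B

class MyNode : public GenericNodeSketch<NodeData, MyNode> {};

int main() {
  MyNode n;
  n.id = 7;                // storage inherited from B
  assert(n.self() == &n);  // accessor yields MyNode*, not the template type
  return 0;
}

The benefit is the one the comment above names: shared machinery is written once, yet accessors like InputAt() can return S* without casts at every call site.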
+template <class B, class S>
+class GenericNode : public B {
+ public:
+ typedef B BaseClass;
+ typedef S DerivedClass;
+
+ inline NodeId id() const { return id_; }
+
+ int InputCount() const { return input_count_; }
+ S* InputAt(int index) const {
+ return static_cast<S*>(GetInputRecordPtr(index)->to);
+ }
+ void ReplaceInput(int index, GenericNode* new_input);
+ void AppendInput(Zone* zone, GenericNode* new_input);
+ void InsertInput(Zone* zone, int index, GenericNode* new_input);
+
+ int UseCount() { return use_count_; }
+ S* UseAt(int index) {
+ DCHECK(index < use_count_);
+ Use* current = first_use_;
+ while (index-- != 0) {
+ current = current->next;
+ }
+ return static_cast<S*>(current->from);
+ }
+ inline void ReplaceUses(GenericNode* replace_to);
+ template <class UnaryPredicate>
+ inline void ReplaceUsesIf(UnaryPredicate pred, GenericNode* replace_to);
+ void RemoveAllInputs();
+
+ void TrimInputCount(int input_count);
+
+ class Inputs {
+ public:
+ class iterator;
+ iterator begin();
+ iterator end();
+
+ explicit Inputs(GenericNode* node) : node_(node) {}
+
+ private:
+ GenericNode* node_;
+ };
+
+ Inputs inputs() { return Inputs(this); }
+
+ class Uses {
+ public:
+ class iterator;
+ iterator begin();
+ iterator end();
+ bool empty() { return begin() == end(); }
+
+ explicit Uses(GenericNode* node) : node_(node) {}
+
+ private:
+ GenericNode* node_;
+ };
+
+ Uses uses() { return Uses(this); }
+
+ class Edge;
+
+ bool OwnedBy(GenericNode* owner) const;
+
+ static S* New(GenericGraphBase* graph, int input_count, S** inputs);
+
+ protected:
+ friend class GenericGraphBase;
+
+ class Use : public ZoneObject {
+ public:
+ GenericNode* from;
+ Use* next;
+ Use* prev;
+ int input_index;
+ };
+
+ class Input {
+ public:
+ GenericNode* to;
+ Use* use;
+
+ void Update(GenericNode* new_to);
+ };
+
+ void EnsureAppendableInputs(Zone* zone);
+
+ Input* GetInputRecordPtr(int index) const {
+ if (has_appendable_inputs_) {
+ return &((*inputs_.appendable_)[index]);
+ } else {
+ return inputs_.static_ + index;
+ }
+ }
+
+ void AppendUse(Use* use);
+ void RemoveUse(Use* use);
+
+ void* operator new(size_t, void* location) { return location; }
+
+ GenericNode(GenericGraphBase* graph, int input_count);
+
+ private:
+ void AssignUniqueID(GenericGraphBase* graph);
+
+ typedef zone_allocator<Input> ZoneInputAllocator;
+ typedef std::deque<Input, ZoneInputAllocator> InputDeque;
+
+ NodeId id_;
+ int input_count_ : 31;
+ bool has_appendable_inputs_ : 1;
+ union {
+ // When a node is initially allocated, it uses a static buffer to hold its
+ // inputs under the assumption that the number of inputs will not increase.
+ // When the first input is appended, the static buffer is converted into a
+ // deque to allow for space-efficient growing.
+ Input* static_;
+ InputDeque* appendable_;
+ } inputs_;
+ int use_count_;
+ Use* first_use_;
+ Use* last_use_;
+
+ DISALLOW_COPY_AND_ASSIGN(GenericNode);
+};
+
+// An encapsulation for information associated with a single use of a node as
+// an input from another node, allowing access to both the defining node and
+// the node having the input.
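Before the Edge class the comment above announces, one detail of this design deserves a closer look: the Use records form an intrusive doubly-linked list threaded through the nodes that consume a value, which is what lets ReplaceUses() in generic-node-inl.h hand a whole use list to another node with constant-time list surgery plus one pointer write per use. A standalone toy of just that splice, under the simplifying assumption that uses carry no payload:

#include <cassert>
#include <cstddef>

// Toy counterpart of GenericNode::Use: the links live inside the record.
struct Use { Use* next; Use* prev; };

struct UseList {
  Use* first; Use* last; int count;
  UseList() : first(NULL), last(NULL), count(0) {}

  void Append(Use* u) {  // mirrors GenericNode::AppendUse
    u->next = NULL; u->prev = last;
    if (last == NULL) first = u; else last->next = u;
    last = u; ++count;
  }

  // Mirrors the list surgery in GenericNode::ReplaceUses (minus the
  // per-use rewrite of input records).
  void SpliceOnto(UseList* other) {
    if (first == NULL) return;
    if (other->last == NULL) other->first = first;
    else { other->last->next = first; first->prev = other->last; }
    other->last = last;
    other->count += count;
    first = last = NULL; count = 0;
  }
};

int main() {
  Use a = {NULL, NULL}, b = {NULL, NULL};
  UseList from, to;
  from.Append(&a); from.Append(&b);
  from.SpliceOnto(&to);
  assert(to.count == 2 && to.first == &a && to.last == &b);
  assert(from.first == NULL && from.count == 0);
  return 0;
}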
+template <class B, class S> +class GenericNode<B, S>::Edge { + public: + S* from() const { return static_cast<S*>(input_->use->from); } + S* to() const { return static_cast<S*>(input_->to); } + int index() const { + int index = input_->use->input_index; + DCHECK(index < input_->use->from->input_count_); + return index; + } + + private: + friend class GenericNode<B, S>::Uses::iterator; + friend class GenericNode<B, S>::Inputs::iterator; + + explicit Edge(typename GenericNode<B, S>::Input* input) : input_(input) {} + + typename GenericNode<B, S>::Input* input_; +}; + +// A forward iterator to visit the nodes which are depended upon by a node +// in the order of input. +template <class B, class S> +class GenericNode<B, S>::Inputs::iterator { + public: + iterator(const typename GenericNode<B, S>::Inputs::iterator& other) // NOLINT + : node_(other.node_), + index_(other.index_) {} + + S* operator*() { return static_cast<S*>(GetInput()->to); } + typename GenericNode<B, S>::Edge edge() { + return typename GenericNode::Edge(GetInput()); + } + bool operator==(const iterator& other) const { + return other.index_ == index_ && other.node_ == node_; + } + bool operator!=(const iterator& other) const { return !(other == *this); } + iterator& operator++() { + DCHECK(node_ != NULL); + DCHECK(index_ < node_->input_count_); + ++index_; + return *this; + } + int index() { return index_; } + + private: + friend class GenericNode; + + explicit iterator(GenericNode* node, int index) + : node_(node), index_(index) {} + + Input* GetInput() const { return node_->GetInputRecordPtr(index_); } + + GenericNode* node_; + int index_; +}; + +// A forward iterator to visit the uses of a node. The uses are returned in +// the order in which they were added as inputs. +template <class B, class S> +class GenericNode<B, S>::Uses::iterator { + public: + iterator(const typename GenericNode<B, S>::Uses::iterator& other) // NOLINT + : current_(other.current_), + index_(other.index_) {} + + S* operator*() { return static_cast<S*>(current_->from); } + typename GenericNode<B, S>::Edge edge() { + return typename GenericNode::Edge(CurrentInput()); + } + + bool operator==(const iterator& other) { return other.current_ == current_; } + bool operator!=(const iterator& other) { return other.current_ != current_; } + iterator& operator++() { + DCHECK(current_ != NULL); + index_++; + current_ = current_->next; + return *this; + } + iterator& UpdateToAndIncrement(GenericNode<B, S>* new_to) { + DCHECK(current_ != NULL); + index_++; + typename GenericNode<B, S>::Input* input = CurrentInput(); + current_ = current_->next; + input->Update(new_to); + return *this; + } + int index() const { return index_; } + + private: + friend class GenericNode<B, S>::Uses; + + iterator() : current_(NULL), index_(0) {} + explicit iterator(GenericNode<B, S>* node) + : current_(node->first_use_), index_(0) {} + + Input* CurrentInput() const { + return current_->from->GetInputRecordPtr(current_->input_index); + } + + typename GenericNode<B, S>::Use* current_; + int index_; +}; +} +} +} // namespace v8::internal::compiler + +#endif // V8_COMPILER_GENERIC_NODE_H_ diff --git a/deps/v8/src/compiler/graph-builder.cc b/deps/v8/src/compiler/graph-builder.cc new file mode 100644 index 00000000000..9c414f1bf9b --- /dev/null +++ b/deps/v8/src/compiler/graph-builder.cc @@ -0,0 +1,241 @@ +// Copyright 2013 the V8 project authors. All rights reserved. +// Use of this source code is governed by a BSD-style license that can be +// found in the LICENSE file. 
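A second load-bearing idea in the file just closed is its memory discipline: GenericNode::New carves the node, its input records, and its use records out of one zone allocation and constructs the node with the class's placement operator new. The zone is a bump-pointer arena and nothing is ever freed per object. A heavily simplified standalone sketch of that pattern (the fixed-size buffer and the skipped alignment handling are simplifications of this toy, not of V8's Zone):

#include <cassert>
#include <cstddef>
#include <new>

// Toy bump-pointer arena in the spirit of src/zone.h.
class ToyZone {
 public:
  ToyZone() : used_(0) {}
  void* New(size_t size) {          // no matching Delete: zones die wholesale
    assert(used_ + size <= sizeof(buffer_));
    void* result = buffer_ + used_;
    used_ += size;                  // (alignment glossed over in this toy)
    return result;
  }
 private:
  char buffer_[1024];
  size_t used_;
};

struct ToyNode {
  explicit ToyNode(int id) : id(id) {}
  int id;
};

int main() {
  ToyZone zone;
  void* memory = zone.New(sizeof(ToyNode));
  ToyNode* node = new (memory) ToyNode(1);  // construct in place, never freed
  assert(node->id == 1);
  return 0;
}

This is also why Graph::DeleteNode later in this patch only poisons the memory in debug builds: individual deallocation simply does not exist in this allocation scheme.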
+ +#include "src/compiler/graph-builder.h" + +#include "src/compiler.h" +#include "src/compiler/generic-graph.h" +#include "src/compiler/generic-node.h" +#include "src/compiler/generic-node-inl.h" +#include "src/compiler/graph-visualizer.h" +#include "src/compiler/node-properties.h" +#include "src/compiler/node-properties-inl.h" +#include "src/compiler/operator-properties.h" +#include "src/compiler/operator-properties-inl.h" + +namespace v8 { +namespace internal { +namespace compiler { + + +StructuredGraphBuilder::StructuredGraphBuilder(Graph* graph, + CommonOperatorBuilder* common) + : GraphBuilder(graph), + common_(common), + environment_(NULL), + current_context_(NULL), + exit_control_(NULL) {} + + +Node* StructuredGraphBuilder::MakeNode(Operator* op, int value_input_count, + Node** value_inputs) { + bool has_context = OperatorProperties::HasContextInput(op); + bool has_control = OperatorProperties::GetControlInputCount(op) == 1; + bool has_effect = OperatorProperties::GetEffectInputCount(op) == 1; + + DCHECK(OperatorProperties::GetControlInputCount(op) < 2); + DCHECK(OperatorProperties::GetEffectInputCount(op) < 2); + + Node* result = NULL; + if (!has_context && !has_control && !has_effect) { + result = graph()->NewNode(op, value_input_count, value_inputs); + } else { + int input_count_with_deps = value_input_count; + if (has_context) ++input_count_with_deps; + if (has_control) ++input_count_with_deps; + if (has_effect) ++input_count_with_deps; + void* raw_buffer = alloca(kPointerSize * input_count_with_deps); + Node** buffer = reinterpret_cast<Node**>(raw_buffer); + memcpy(buffer, value_inputs, kPointerSize * value_input_count); + Node** current_input = buffer + value_input_count; + if (has_context) { + *current_input++ = current_context(); + } + if (has_effect) { + *current_input++ = environment_->GetEffectDependency(); + } + if (has_control) { + *current_input++ = environment_->GetControlDependency(); + } + result = graph()->NewNode(op, input_count_with_deps, buffer); + if (has_effect) { + environment_->UpdateEffectDependency(result); + } + if (OperatorProperties::HasControlOutput(result->op()) && + !environment()->IsMarkedAsUnreachable()) { + environment_->UpdateControlDependency(result); + } + } + + return result; +} + + +void StructuredGraphBuilder::UpdateControlDependencyToLeaveFunction( + Node* exit) { + if (environment()->IsMarkedAsUnreachable()) return; + if (exit_control() != NULL) { + exit = MergeControl(exit_control(), exit); + } + environment()->MarkAsUnreachable(); + set_exit_control(exit); +} + + +StructuredGraphBuilder::Environment* StructuredGraphBuilder::CopyEnvironment( + Environment* env) { + return new (zone()) Environment(*env); +} + + +StructuredGraphBuilder::Environment::Environment( + StructuredGraphBuilder* builder, Node* control_dependency) + : builder_(builder), + control_dependency_(control_dependency), + effect_dependency_(control_dependency), + values_(NodeVector::allocator_type(zone())) {} + + +StructuredGraphBuilder::Environment::Environment(const Environment& copy) + : builder_(copy.builder()), + control_dependency_(copy.control_dependency_), + effect_dependency_(copy.effect_dependency_), + values_(copy.values_) {} + + +void StructuredGraphBuilder::Environment::Merge(Environment* other) { + DCHECK(values_.size() == other->values_.size()); + + // Nothing to do if the other environment is dead. 
+ if (other->IsMarkedAsUnreachable()) return; + + // Resurrect a dead environment by copying the contents of the other one and + // placing a singleton merge as the new control dependency. + if (this->IsMarkedAsUnreachable()) { + Node* other_control = other->control_dependency_; + control_dependency_ = graph()->NewNode(common()->Merge(1), other_control); + effect_dependency_ = other->effect_dependency_; + values_ = other->values_; + return; + } + + // Create a merge of the control dependencies of both environments and update + // the current environment's control dependency accordingly. + Node* control = builder_->MergeControl(this->GetControlDependency(), + other->GetControlDependency()); + UpdateControlDependency(control); + + // Create a merge of the effect dependencies of both environments and update + // the current environment's effect dependency accordingly. + Node* effect = builder_->MergeEffect(this->GetEffectDependency(), + other->GetEffectDependency(), control); + UpdateEffectDependency(effect); + + // Introduce Phi nodes for values that have differing input at merge points, + // potentially extending an existing Phi node if possible. + for (int i = 0; i < static_cast<int>(values_.size()); ++i) { + values_[i] = builder_->MergeValue(values_[i], other->values_[i], control); + } +} + + +void StructuredGraphBuilder::Environment::PrepareForLoop() { + Node* control = GetControlDependency(); + for (int i = 0; i < static_cast<int>(values()->size()); ++i) { + Node* phi = builder_->NewPhi(1, values()->at(i), control); + values()->at(i) = phi; + } + Node* effect = builder_->NewEffectPhi(1, GetEffectDependency(), control); + UpdateEffectDependency(effect); +} + + +Node* StructuredGraphBuilder::NewPhi(int count, Node* input, Node* control) { + Operator* phi_op = common()->Phi(count); + void* raw_buffer = alloca(kPointerSize * (count + 1)); + Node** buffer = reinterpret_cast<Node**>(raw_buffer); + MemsetPointer(buffer, input, count); + buffer[count] = control; + return graph()->NewNode(phi_op, count + 1, buffer); +} + + +// TODO(mstarzinger): Revisit this once we have proper effect states. +Node* StructuredGraphBuilder::NewEffectPhi(int count, Node* input, + Node* control) { + Operator* phi_op = common()->EffectPhi(count); + void* raw_buffer = alloca(kPointerSize * (count + 1)); + Node** buffer = reinterpret_cast<Node**>(raw_buffer); + MemsetPointer(buffer, input, count); + buffer[count] = control; + return graph()->NewNode(phi_op, count + 1, buffer); +} + + +Node* StructuredGraphBuilder::MergeControl(Node* control, Node* other) { + int inputs = OperatorProperties::GetControlInputCount(control->op()) + 1; + if (control->opcode() == IrOpcode::kLoop) { + // Control node for loop exists, add input. + Operator* op = common()->Loop(inputs); + control->AppendInput(zone(), other); + control->set_op(op); + } else if (control->opcode() == IrOpcode::kMerge) { + // Control node for merge exists, add input. + Operator* op = common()->Merge(inputs); + control->AppendInput(zone(), other); + control->set_op(op); + } else { + // Control node is a singleton, introduce a merge. + Operator* op = common()->Merge(inputs); + control = graph()->NewNode(op, control, other); + } + return control; +} + + +Node* StructuredGraphBuilder::MergeEffect(Node* value, Node* other, + Node* control) { + int inputs = OperatorProperties::GetControlInputCount(control->op()); + if (value->opcode() == IrOpcode::kEffectPhi && + NodeProperties::GetControlInput(value) == control) { + // Phi already exists, add input. 
+ value->set_op(common()->EffectPhi(inputs)); + value->InsertInput(zone(), inputs - 1, other); + } else if (value != other) { + // Phi does not exist yet, introduce one. + value = NewEffectPhi(inputs, value, control); + value->ReplaceInput(inputs - 1, other); + } + return value; +} + + +Node* StructuredGraphBuilder::MergeValue(Node* value, Node* other, + Node* control) { + int inputs = OperatorProperties::GetControlInputCount(control->op()); + if (value->opcode() == IrOpcode::kPhi && + NodeProperties::GetControlInput(value) == control) { + // Phi already exists, add input. + value->set_op(common()->Phi(inputs)); + value->InsertInput(zone(), inputs - 1, other); + } else if (value != other) { + // Phi does not exist yet, introduce one. + value = NewPhi(inputs, value, control); + value->ReplaceInput(inputs - 1, other); + } + return value; +} + + +Node* StructuredGraphBuilder::dead_control() { + if (!dead_control_.is_set()) { + Node* dead_node = graph()->NewNode(common_->Dead()); + dead_control_.set(dead_node); + return dead_node; + } + return dead_control_.get(); +} +} +} +} // namespace v8::internal::compiler diff --git a/deps/v8/src/compiler/graph-builder.h b/deps/v8/src/compiler/graph-builder.h new file mode 100644 index 00000000000..fc900085542 --- /dev/null +++ b/deps/v8/src/compiler/graph-builder.h @@ -0,0 +1,226 @@ +// Copyright 2013 the V8 project authors. All rights reserved. +// Use of this source code is governed by a BSD-style license that can be +// found in the LICENSE file. + +#ifndef V8_COMPILER_GRAPH_BUILDER_H_ +#define V8_COMPILER_GRAPH_BUILDER_H_ + +#include "src/v8.h" + +#include "src/allocation.h" +#include "src/compiler/common-operator.h" +#include "src/compiler/graph.h" +#include "src/unique.h" + +namespace v8 { +namespace internal { +namespace compiler { + +class Node; + +// A common base class for anything that creates nodes in a graph. +class GraphBuilder { + public: + explicit GraphBuilder(Graph* graph) : graph_(graph) {} + virtual ~GraphBuilder() {} + + Node* NewNode(Operator* op) { + return MakeNode(op, 0, static_cast<Node**>(NULL)); + } + + Node* NewNode(Operator* op, Node* n1) { return MakeNode(op, 1, &n1); } + + Node* NewNode(Operator* op, Node* n1, Node* n2) { + Node* buffer[] = {n1, n2}; + return MakeNode(op, ARRAY_SIZE(buffer), buffer); + } + + Node* NewNode(Operator* op, Node* n1, Node* n2, Node* n3) { + Node* buffer[] = {n1, n2, n3}; + return MakeNode(op, ARRAY_SIZE(buffer), buffer); + } + + Node* NewNode(Operator* op, Node* n1, Node* n2, Node* n3, Node* n4) { + Node* buffer[] = {n1, n2, n3, n4}; + return MakeNode(op, ARRAY_SIZE(buffer), buffer); + } + + Node* NewNode(Operator* op, Node* n1, Node* n2, Node* n3, Node* n4, + Node* n5) { + Node* buffer[] = {n1, n2, n3, n4, n5}; + return MakeNode(op, ARRAY_SIZE(buffer), buffer); + } + + Node* NewNode(Operator* op, Node* n1, Node* n2, Node* n3, Node* n4, Node* n5, + Node* n6) { + Node* nodes[] = {n1, n2, n3, n4, n5, n6}; + return MakeNode(op, ARRAY_SIZE(nodes), nodes); + } + + Node* NewNode(Operator* op, int value_input_count, Node** value_inputs) { + return MakeNode(op, value_input_count, value_inputs); + } + + Graph* graph() const { return graph_; } + + protected: + // Base implementation used by all factory methods. + virtual Node* MakeNode(Operator* op, int value_input_count, + Node** value_inputs) = 0; + + private: + Graph* graph_; +}; + + +// The StructuredGraphBuilder produces a high-level IR graph. 
It is used as the
+// base class for concrete implementations (e.g. the AstGraphBuilder or the
+// StubGraphBuilder).
+class StructuredGraphBuilder : public GraphBuilder {
+ public:
+ StructuredGraphBuilder(Graph* graph, CommonOperatorBuilder* common);
+ virtual ~StructuredGraphBuilder() {}
+
+ // Creates a new Phi node having {count} input values.
+ Node* NewPhi(int count, Node* input, Node* control);
+ Node* NewEffectPhi(int count, Node* input, Node* control);
+
+ // Helpers for merging control, effect or value dependencies.
+ Node* MergeControl(Node* control, Node* other);
+ Node* MergeEffect(Node* value, Node* other, Node* control);
+ Node* MergeValue(Node* value, Node* other, Node* control);
+
+ // Helpers to create new control nodes.
+ Node* NewIfTrue() { return NewNode(common()->IfTrue()); }
+ Node* NewIfFalse() { return NewNode(common()->IfFalse()); }
+ Node* NewMerge() { return NewNode(common()->Merge(1)); }
+ Node* NewLoop() { return NewNode(common()->Loop(1)); }
+ Node* NewBranch(Node* condition) {
+ return NewNode(common()->Branch(), condition);
+ }
+
+ protected:
+ class Environment;
+ friend class ControlBuilder;
+
+ // The following method creates a new node having the specified operator and
+ // ensures effect and control dependencies are wired up. The dependencies
+ // tracked by the environment might be mutated.
+ virtual Node* MakeNode(Operator* op, int value_input_count,
+ Node** value_inputs);
+
+ Environment* environment() const { return environment_; }
+ void set_environment(Environment* env) { environment_ = env; }
+
+ Node* current_context() const { return current_context_; }
+ void set_current_context(Node* context) { current_context_ = context; }
+
+ Node* exit_control() const { return exit_control_; }
+ void set_exit_control(Node* node) { exit_control_ = node; }
+
+ Node* dead_control();
+
+ // TODO(mstarzinger): Use phase-local zone instead!
+ Zone* zone() const { return graph()->zone(); }
+ Isolate* isolate() const { return zone()->isolate(); }
+ CommonOperatorBuilder* common() const { return common_; }
+
+ // Helper to wrap a Handle<T> into a Unique<T>.
+ template <class T>
+ PrintableUnique<T> MakeUnique(Handle<T> object) {
+ return PrintableUnique<T>::CreateUninitialized(zone(), object);
+ }
+
+ // Support for control flow builders. The concrete type of the environment
+ // depends on the graph builder, but environments themselves are not virtual.
+ virtual Environment* CopyEnvironment(Environment* env);
+
+ // Helper to indicate a node exits the function body.
+ void UpdateControlDependencyToLeaveFunction(Node* exit);
+
+ private:
+ CommonOperatorBuilder* common_;
+ Environment* environment_;
+
+ // Node representing the control dependency for dead code.
+ SetOncePointer<Node> dead_control_;
+
+ // Node representing the current context within the function body.
+ Node* current_context_;
+
+ // Merge of all control nodes that exit the function body.
+ Node* exit_control_;
+
+ DISALLOW_COPY_AND_ASSIGN(StructuredGraphBuilder);
+};
+
+
+// The abstract execution environment contains static knowledge about
+// execution state at arbitrary control-flow points. It allows for
+// simulation of the control-flow at compile time.
+class StructuredGraphBuilder::Environment : public ZoneObject {
+ public:
+ Environment(StructuredGraphBuilder* builder, Node* control_dependency);
+ Environment(const Environment& copy);
+
+ // Control dependency tracked by this environment.
+ Node* GetControlDependency() { return control_dependency_; }
+ void UpdateControlDependency(Node* dependency) {
+ control_dependency_ = dependency;
+ }
+
+ // Effect dependency tracked by this environment.
+ Node* GetEffectDependency() { return effect_dependency_; }
+ void UpdateEffectDependency(Node* dependency) {
+ effect_dependency_ = dependency;
+ }
+
+ // Mark this environment as being unreachable.
+ void MarkAsUnreachable() {
+ UpdateControlDependency(builder()->dead_control());
+ }
+ bool IsMarkedAsUnreachable() {
+ return GetControlDependency()->opcode() == IrOpcode::kDead;
+ }
+
+ // Merge another environment into this one.
+ void Merge(Environment* other);
+
+ // Copies this environment at a control-flow split point.
+ Environment* CopyForConditional() { return builder()->CopyEnvironment(this); }
+
+ // Copies this environment to a potentially unreachable control-flow point.
+ Environment* CopyAsUnreachable() {
+ Environment* env = builder()->CopyEnvironment(this);
+ env->MarkAsUnreachable();
+ return env;
+ }
+
+ // Copies this environment at a loop header control-flow point.
+ Environment* CopyForLoop() {
+ PrepareForLoop();
+ return builder()->CopyEnvironment(this);
+ }
+
+ protected:
+ // TODO(mstarzinger): Use phase-local zone instead!
+ Zone* zone() const { return graph()->zone(); }
+ Graph* graph() const { return builder_->graph(); }
+ StructuredGraphBuilder* builder() const { return builder_; }
+ CommonOperatorBuilder* common() { return builder_->common(); }
+ NodeVector* values() { return &values_; }
+
+ // Prepare environment to be used as loop header.
+ void PrepareForLoop();
+
+ private:
+ StructuredGraphBuilder* builder_;
+ Node* control_dependency_;
+ Node* effect_dependency_;
+ NodeVector values_;
+};
+}
+}
+} // namespace v8::internal::compiler
+
+#endif // V8_COMPILER_GRAPH_BUILDER_H_ diff --git a/deps/v8/src/compiler/graph-inl.h b/deps/v8/src/compiler/graph-inl.h new file mode 100644 index 00000000000..f8423c3f893 --- /dev/null +++ b/deps/v8/src/compiler/graph-inl.h @@ -0,0 +1,37 @@ +// Copyright 2013 the V8 project authors. All rights reserved. +// Use of this source code is governed by a BSD-style license that can be +// found in the LICENSE file.
+
+#ifndef V8_COMPILER_GRAPH_INL_H_
+#define V8_COMPILER_GRAPH_INL_H_
+
+#include "src/compiler/generic-algorithm-inl.h"
+#include "src/compiler/graph.h"
+
+namespace v8 {
+namespace internal {
+namespace compiler {
+
+template <class Visitor>
+void Graph::VisitNodeUsesFrom(Node* node, Visitor* visitor) {
+ GenericGraphVisit::Visit<Visitor, NodeUseIterationTraits<Node> >(this, node,
+ visitor);
+}
+
+
+template <class Visitor>
+void Graph::VisitNodeUsesFromStart(Visitor* visitor) {
+ VisitNodeUsesFrom(start(), visitor);
+}
+
+
+template <class Visitor>
+void Graph::VisitNodeInputsFromEnd(Visitor* visitor) {
+ GenericGraphVisit::Visit<Visitor, NodeInputIterationTraits<Node> >(
+ this, end(), visitor);
+}
+}
+}
+} // namespace v8::internal::compiler
+
+#endif // V8_COMPILER_GRAPH_INL_H_ diff --git a/deps/v8/src/compiler/graph-reducer.cc b/deps/v8/src/compiler/graph-reducer.cc new file mode 100644 index 00000000000..f062d4bea9a --- /dev/null +++ b/deps/v8/src/compiler/graph-reducer.cc @@ -0,0 +1,94 @@ +// Copyright 2014 the V8 project authors. All rights reserved. +// Use of this source code is governed by a BSD-style license that can be +// found in the LICENSE file.
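Before the reducer implementation, it may help to restate Environment::Merge from graph-builder.cc in miniature. At a control-flow join, value slots on which both environments agree are kept, and slots that disagree get a phi of the two incoming values; the real code allocates Phi nodes keyed on the merged control input, while this standalone toy just builds strings:

#include <cassert>
#include <string>
#include <vector>

typedef std::vector<std::string> ToyEnv;  // one value per local/temporary slot

// Per-slot merge rule of StructuredGraphBuilder::Environment::Merge.
static ToyEnv Merge(const ToyEnv& a, const ToyEnv& b) {
  assert(a.size() == b.size());  // environments always agree on shape
  ToyEnv result(a.size());
  for (size_t i = 0; i < a.size(); ++i) {
    result[i] = (a[i] == b[i]) ? a[i] : "phi(" + a[i] + ", " + b[i] + ")";
  }
  return result;
}

int main() {
  ToyEnv then_env, else_env;
  then_env.push_back("x0"); then_env.push_back("y1");
  else_env.push_back("x0"); else_env.push_back("y2");
  ToyEnv merged = Merge(then_env, else_env);
  assert(merged[0] == "x0");           // agreeing slots pass through
  assert(merged[1] == "phi(y1, y2)");  // disagreeing slots get a phi
  return 0;
}

Loops get the same treatment in reverse: PrepareForLoop pessimistically wraps every slot in a one-input phi before the back edge is seen, and MergeValue later widens those existing phis instead of creating new ones.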
+ +#include "src/compiler/graph-reducer.h" + +#include <functional> + +#include "src/compiler/graph-inl.h" + +namespace v8 { +namespace internal { +namespace compiler { + +GraphReducer::GraphReducer(Graph* graph) + : graph_(graph), reducers_(Reducers::allocator_type(graph->zone())) {} + + +static bool NodeIdIsLessThan(const Node* node, NodeId id) { + return node->id() < id; +} + + +void GraphReducer::ReduceNode(Node* node) { + Reducers::iterator skip = reducers_.end(); + static const unsigned kMaxAttempts = 16; + bool reduce = true; + for (unsigned attempts = 0; attempts <= kMaxAttempts; ++attempts) { + if (!reduce) return; + reduce = false; // Assume we don't need to rerun any reducers. + int before = graph_->NodeCount(); + for (Reducers::iterator i = reducers_.begin(); i != reducers_.end(); ++i) { + if (i == skip) continue; // Skip this reducer. + Reduction reduction = (*i)->Reduce(node); + Node* replacement = reduction.replacement(); + if (replacement == NULL) { + // No change from this reducer. + } else if (replacement == node) { + // {replacement == node} represents an in-place reduction. + // Rerun all the reducers except the current one for this node, + // as now there may be more opportunities for reduction. + reduce = true; + skip = i; + break; + } else { + if (node == graph_->start()) graph_->SetStart(replacement); + if (node == graph_->end()) graph_->SetEnd(replacement); + // If {node} was replaced by an old node, unlink {node} and assume that + // {replacement} was already reduced and finish. + if (replacement->id() < before) { + node->RemoveAllInputs(); + node->ReplaceUses(replacement); + return; + } + // Otherwise, {node} was replaced by a new node. Replace all old uses of + // {node} with {replacement}. New nodes created by this reduction can + // use {node}. + node->ReplaceUsesIf( + std::bind2nd(std::ptr_fun(&NodeIdIsLessThan), before), replacement); + // Unlink {node} if it's no longer used. + if (node->uses().empty()) node->RemoveAllInputs(); + // Rerun all the reductions on the {replacement}. + skip = reducers_.end(); + node = replacement; + reduce = true; + break; + } + } + } +} + + +// A helper class to reuse the node traversal algorithm. +struct GraphReducerVisitor V8_FINAL : public NullNodeVisitor { + explicit GraphReducerVisitor(GraphReducer* reducer) : reducer_(reducer) {} + GenericGraphVisit::Control Post(Node* node) { + reducer_->ReduceNode(node); + return GenericGraphVisit::CONTINUE; + } + GraphReducer* reducer_; +}; + + +void GraphReducer::ReduceGraph() { + GraphReducerVisitor visitor(this); + // Perform a post-order reduction of all nodes starting from the end. + graph()->VisitNodeInputsFromEnd(&visitor); +} + + +// TODO(titzer): partial graph reductions. +} +} +} // namespace v8::internal::compiler diff --git a/deps/v8/src/compiler/graph-reducer.h b/deps/v8/src/compiler/graph-reducer.h new file mode 100644 index 00000000000..33cded65a7b --- /dev/null +++ b/deps/v8/src/compiler/graph-reducer.h @@ -0,0 +1,77 @@ +// Copyright 2014 the V8 project authors. All rights reserved. +// Use of this source code is governed by a BSD-style license that can be +// found in the LICENSE file. + +#ifndef V8_COMPILER_GRAPH_REDUCER_H_ +#define V8_COMPILER_GRAPH_REDUCER_H_ + +#include <list> + +#include "src/zone-allocator.h" + +namespace v8 { +namespace internal { +namespace compiler { + +// Forward declarations. +class Graph; +class Node; + + +// Represents the result of trying to reduce a node in the graph. 
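The Reduction and Reducer classes this comment introduces are declared just below; the interesting part is the control flow of GraphReducer::ReduceNode above: run every reducer over the node, and whenever one changes it in place, rerun all the others (but not the one that just fired) until a pass makes no progress. A standalone toy of that rerun-and-skip loop, with plain integers standing in for nodes (the real code additionally caps the number of passes and handles replacement by a different node; the toy keeps only the in-place case):

#include <cassert>
#include <vector>

struct ToyReducer {
  virtual ~ToyReducer() {}
  virtual bool Reduce(int* value) = 0;  // true means "changed in place"
};

struct HalveEven : ToyReducer {
  virtual bool Reduce(int* v) {
    if (*v != 0 && *v % 2 == 0) { *v /= 2; return true; }
    return false;
  }
};

struct ThirdTriples : ToyReducer {
  virtual bool Reduce(int* v) {
    if (*v != 0 && *v % 3 == 0) { *v /= 3; return true; }
    return false;
  }
};

// Mirrors the loop shape of GraphReducer::ReduceNode.
static void ReduceToFixpoint(std::vector<ToyReducer*>& reducers, int* value) {
  size_t skip = reducers.size();  // sentinel: skip nothing on the first pass
  bool reduce = true;
  while (reduce) {
    reduce = false;
    for (size_t i = 0; i < reducers.size(); ++i) {
      if (i == skip) continue;  // the reducer that just fired is not rerun
      if (reducers[i]->Reduce(value)) {
        skip = i;               // rerun the others on the changed value
        reduce = true;
        break;
      }
    }
  }
}

int main() {
  HalveEven halve;
  ThirdTriples third;
  std::vector<ToyReducer*> reducers;
  reducers.push_back(&halve);
  reducers.push_back(&third);
  int value = 12;  // 12 -(halve)-> 6 -(third)-> 2 -(halve)-> 1
  ReduceToFixpoint(reducers, &value);
  assert(value == 1);
  return 0;
}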
+class Reduction V8_FINAL {
+ public:
+ explicit Reduction(Node* replacement = NULL) : replacement_(replacement) {}
+
+ Node* replacement() const { return replacement_; }
+ bool Changed() const { return replacement() != NULL; }
+
+ private:
+ Node* replacement_;
+};
+
+
+// A reducer can reduce or simplify a given node based on its operator and
+// inputs. This class functions as an extension point for the graph reducer so
+// that language-specific reductions (e.g. reduction based on types or constant
+// folding of low-level operators) can be integrated into the graph reduction
+// phase.
+class Reducer {
+ public:
+ virtual ~Reducer() {}
+
+ // Try to reduce a node if possible.
+ virtual Reduction Reduce(Node* node) = 0;
+
+ // Helper functions for subclasses to produce reductions for a node.
+ static Reduction NoChange() { return Reduction(); }
+ static Reduction Replace(Node* node) { return Reduction(node); }
+ static Reduction Changed(Node* node) { return Reduction(node); }
+};
+
+
+// Performs an iterative reduction of a node graph.
+class GraphReducer V8_FINAL {
+ public:
+ explicit GraphReducer(Graph* graph);
+
+ Graph* graph() const { return graph_; }
+
+ void AddReducer(Reducer* reducer) { reducers_.push_back(reducer); }
+
+ // Reduce a single node.
+ void ReduceNode(Node* node);
+ // Reduce the whole graph.
+ void ReduceGraph();
+
+ private:
+ typedef std::list<Reducer*, zone_allocator<Reducer*> > Reducers;
+
+ Graph* graph_;
+ Reducers reducers_;
+};
+}
+}
+} // namespace v8::internal::compiler
+
+#endif // V8_COMPILER_GRAPH_REDUCER_H_ diff --git a/deps/v8/src/compiler/graph-replay.cc b/deps/v8/src/compiler/graph-replay.cc new file mode 100644 index 00000000000..efb1180a777 --- /dev/null +++ b/deps/v8/src/compiler/graph-replay.cc @@ -0,0 +1,81 @@ +// Copyright 2014 the V8 project authors. All rights reserved. +// Use of this source code is governed by a BSD-style license that can be +// found in the LICENSE file.
+
+#include "src/compiler/graph-replay.h"
+
+#include "src/compiler/common-operator.h"
+#include "src/compiler/graph.h"
+#include "src/compiler/graph-inl.h"
+#include "src/compiler/node.h"
+#include "src/compiler/operator.h"
+#include "src/compiler/operator-properties-inl.h"
+
+namespace v8 {
+namespace internal {
+namespace compiler {
+
+#ifdef DEBUG
+
+void GraphReplayPrinter::PrintReplay(Graph* graph) {
+ GraphReplayPrinter replay;
+ PrintF(" Node* nil = graph.NewNode(common_builder.Dead());\n");
+ graph->VisitNodeInputsFromEnd(&replay);
+}
+
+
+GenericGraphVisit::Control GraphReplayPrinter::Pre(Node* node) {
+ PrintReplayOpCreator(node->op());
+ PrintF(" Node* n%d = graph.NewNode(op", node->id());
+ for (int i = 0; i < node->InputCount(); ++i) {
+ PrintF(", nil");
+ }
+ PrintF("); USE(n%d);\n", node->id());
+ return GenericGraphVisit::CONTINUE;
+}
+
+
+void GraphReplayPrinter::PostEdge(Node* from, int index, Node* to) {
+ PrintF(" n%d->ReplaceInput(%d, n%d);\n", from->id(), index, to->id());
+}
+
+
+void GraphReplayPrinter::PrintReplayOpCreator(Operator* op) {
+ IrOpcode::Value opcode = static_cast<IrOpcode::Value>(op->opcode());
+ const char* builder =
+ IrOpcode::IsCommonOpcode(opcode) ? "common_builder" : "js_builder";
+ const char* mnemonic = IrOpcode::IsCommonOpcode(opcode)
+ ?
IrOpcode::Mnemonic(opcode) + : IrOpcode::Mnemonic(opcode) + 2; + PrintF(" op = %s.%s(", builder, mnemonic); + switch (opcode) { + case IrOpcode::kParameter: + case IrOpcode::kNumberConstant: + PrintF("0"); + break; + case IrOpcode::kLoad: + PrintF("unique_name"); + break; + case IrOpcode::kHeapConstant: + PrintF("unique_constant"); + break; + case IrOpcode::kPhi: + PrintF("%d", op->InputCount()); + break; + case IrOpcode::kEffectPhi: + PrintF("%d", OperatorProperties::GetEffectInputCount(op)); + break; + case IrOpcode::kLoop: + case IrOpcode::kMerge: + PrintF("%d", OperatorProperties::GetControlInputCount(op)); + break; + default: + break; + } + PrintF(");\n"); +} + +#endif // DEBUG +} +} +} // namespace v8::internal::compiler diff --git a/deps/v8/src/compiler/graph-replay.h b/deps/v8/src/compiler/graph-replay.h new file mode 100644 index 00000000000..cc186d77c94 --- /dev/null +++ b/deps/v8/src/compiler/graph-replay.h @@ -0,0 +1,44 @@ +// Copyright 2014 the V8 project authors. All rights reserved. +// Use of this source code is governed by a BSD-style license that can be +// found in the LICENSE file. + +#ifndef V8_COMPILER_GRAPH_REPLAY_H_ +#define V8_COMPILER_GRAPH_REPLAY_H_ + +#include "src/v8.h" + +#include "src/compiler/node.h" + +namespace v8 { +namespace internal { +namespace compiler { + +class Graph; +class Operator; + +// Helper class to print a full replay of a graph. This replay can be used to +// materialize the same graph within a C++ unit test and hence test subsequent +// optimization passes on a graph without going through the construction steps. +class GraphReplayPrinter : public NullNodeVisitor { + public: +#ifdef DEBUG + static void PrintReplay(Graph* graph); +#else + static void PrintReplay(Graph* graph) {} +#endif + + GenericGraphVisit::Control Pre(Node* node); + void PostEdge(Node* from, int index, Node* to); + + private: + GraphReplayPrinter() {} + + static void PrintReplayOpCreator(Operator* op); + + DISALLOW_COPY_AND_ASSIGN(GraphReplayPrinter); +}; +} +} +} // namespace v8::internal::compiler + +#endif // V8_COMPILER_GRAPH_REPLAY_H_ diff --git a/deps/v8/src/compiler/graph-visualizer.cc b/deps/v8/src/compiler/graph-visualizer.cc new file mode 100644 index 00000000000..144512ad0de --- /dev/null +++ b/deps/v8/src/compiler/graph-visualizer.cc @@ -0,0 +1,265 @@ +// Copyright 2013 the V8 project authors. All rights reserved. +// Use of this source code is governed by a BSD-style license that can be +// found in the LICENSE file. 
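The visualizer that follows prints the graph in GraphViz's DOT format using record-shaped nodes, and record labels give the characters {, }, |, < and > structural meaning, which is why the file defines an Escaped helper. A standalone sketch of the same escaping rule and of the label shape it protects:

#include <iostream>
#include <string>

// Same character set that Escaped::needs_escape guards against.
static std::string Escape(const std::string& s) {
  std::string result;
  for (size_t i = 0; i < s.size(); ++i) {
    char c = s[i];
    if (c == '{' || c == '}' || c == '|' || c == '<' || c == '>') {
      result += '\\';
    }
    result += c;
  }
  return result;
}

int main() {
  // One record-shaped node, in the spirit of GraphVisualizer::AnnotateNode.
  std::cout << "digraph D {\n"
            << "  ID1 [shape=\"record\" label=\"{#1:" << Escape("Phi<2>")
            << "}\"]\n"
            << "}\n";
  return 0;
}

Without the escaping, a label fragment like Phi<2> would be parsed by DOT as a port declaration instead of plain label text.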
+ +#include "src/compiler/graph-visualizer.h" + +#include "src/compiler/generic-algorithm.h" +#include "src/compiler/generic-node.h" +#include "src/compiler/generic-node-inl.h" +#include "src/compiler/graph.h" +#include "src/compiler/graph-inl.h" +#include "src/compiler/node.h" +#include "src/compiler/node-properties.h" +#include "src/compiler/node-properties-inl.h" +#include "src/compiler/opcodes.h" +#include "src/compiler/operator.h" +#include "src/ostreams.h" + +namespace v8 { +namespace internal { +namespace compiler { + +#define DEAD_COLOR "#999999" + +class GraphVisualizer : public NullNodeVisitor { + public: + GraphVisualizer(OStream& os, const Graph* graph); // NOLINT + + void Print(); + + GenericGraphVisit::Control Pre(Node* node); + GenericGraphVisit::Control PreEdge(Node* from, int index, Node* to); + + private: + void AnnotateNode(Node* node); + void PrintEdge(Node* from, int index, Node* to); + + NodeSet all_nodes_; + NodeSet white_nodes_; + bool use_to_def_; + OStream& os_; + const Graph* const graph_; + + DISALLOW_COPY_AND_ASSIGN(GraphVisualizer); +}; + + +static Node* GetControlCluster(Node* node) { + if (OperatorProperties::IsBasicBlockBegin(node->op())) { + return node; + } else if (OperatorProperties::GetControlInputCount(node->op()) == 1) { + Node* control = NodeProperties::GetControlInput(node, 0); + return OperatorProperties::IsBasicBlockBegin(control->op()) ? control + : NULL; + } else { + return NULL; + } +} + + +GenericGraphVisit::Control GraphVisualizer::Pre(Node* node) { + if (all_nodes_.count(node) == 0) { + Node* control_cluster = GetControlCluster(node); + if (control_cluster != NULL) { + os_ << " subgraph cluster_BasicBlock" << control_cluster->id() << " {\n"; + } + os_ << " ID" << node->id() << " [\n"; + AnnotateNode(node); + os_ << " ]\n"; + if (control_cluster != NULL) os_ << " }\n"; + all_nodes_.insert(node); + if (use_to_def_) white_nodes_.insert(node); + } + return GenericGraphVisit::CONTINUE; +} + + +GenericGraphVisit::Control GraphVisualizer::PreEdge(Node* from, int index, + Node* to) { + if (use_to_def_) return GenericGraphVisit::CONTINUE; + // When going from def to use, only consider white -> other edges, which are + // the dead nodes that use live nodes. We're probably not interested in + // dead nodes that only use other dead nodes. 
+ if (white_nodes_.count(from) > 0) return GenericGraphVisit::CONTINUE; + return GenericGraphVisit::SKIP; +} + + +class Escaped { + public: + explicit Escaped(const OStringStream& os) : str_(os.c_str()) {} + + friend OStream& operator<<(OStream& os, const Escaped& e) { + for (const char* s = e.str_; *s != '\0'; ++s) { + if (needs_escape(*s)) os << "\\"; + os << *s; + } + return os; + } + + private: + static bool needs_escape(char ch) { + switch (ch) { + case '>': + case '<': + case '|': + case '}': + case '{': + return true; + default: + return false; + } + } + + const char* const str_; +}; + + +static bool IsLikelyBackEdge(Node* from, int index, Node* to) { + if (from->opcode() == IrOpcode::kPhi || + from->opcode() == IrOpcode::kEffectPhi) { + Node* control = NodeProperties::GetControlInput(from, 0); + return control->opcode() != IrOpcode::kMerge && control != to && index != 0; + } else if (from->opcode() == IrOpcode::kLoop) { + return index != 0; + } else { + return false; + } +} + + +void GraphVisualizer::AnnotateNode(Node* node) { + if (!use_to_def_) { + os_ << " style=\"filled\"\n" + << " fillcolor=\"" DEAD_COLOR "\"\n"; + } + + os_ << " shape=\"record\"\n"; + switch (node->opcode()) { + case IrOpcode::kEnd: + case IrOpcode::kDead: + case IrOpcode::kStart: + os_ << " style=\"diagonals\"\n"; + break; + case IrOpcode::kMerge: + case IrOpcode::kIfTrue: + case IrOpcode::kIfFalse: + case IrOpcode::kLoop: + os_ << " style=\"rounded\"\n"; + break; + default: + break; + } + + OStringStream label; + label << *node->op(); + os_ << " label=\"{{#" << node->id() << ":" << Escaped(label); + + InputIter i = node->inputs().begin(); + for (int j = OperatorProperties::GetValueInputCount(node->op()); j > 0; + ++i, j--) { + os_ << "|<I" << i.index() << ">#" << (*i)->id(); + } + for (int j = OperatorProperties::GetContextInputCount(node->op()); j > 0; + ++i, j--) { + os_ << "|<I" << i.index() << ">X #" << (*i)->id(); + } + for (int j = OperatorProperties::GetEffectInputCount(node->op()); j > 0; + ++i, j--) { + os_ << "|<I" << i.index() << ">E #" << (*i)->id(); + } + + if (!use_to_def_ || OperatorProperties::IsBasicBlockBegin(node->op()) || + GetControlCluster(node) == NULL) { + for (int j = OperatorProperties::GetControlInputCount(node->op()); j > 0; + ++i, j--) { + os_ << "|<I" << i.index() << ">C #" << (*i)->id(); + } + } + os_ << "}"; + + if (FLAG_trace_turbo_types && !NodeProperties::IsControl(node)) { + Bounds bounds = NodeProperties::GetBounds(node); + OStringStream upper; + bounds.upper->PrintTo(upper); + OStringStream lower; + bounds.lower->PrintTo(lower); + os_ << "|" << Escaped(upper) << "|" << Escaped(lower); + } + os_ << "}\"\n"; +} + + +void GraphVisualizer::PrintEdge(Node* from, int index, Node* to) { + bool unconstrained = IsLikelyBackEdge(from, index, to); + os_ << " ID" << from->id(); + if (all_nodes_.count(to) == 0) { + os_ << ":I" << index << ":n -> DEAD_INPUT"; + } else if (OperatorProperties::IsBasicBlockBegin(from->op()) || + GetControlCluster(from) == NULL || + (OperatorProperties::GetControlInputCount(from->op()) > 0 && + NodeProperties::GetControlInput(from) != to)) { + os_ << ":I" << index << ":n -> ID" << to->id() << ":s"; + if (unconstrained) os_ << " [constraint=false,style=dotted]"; + } else { + os_ << " -> ID" << to->id() << ":s [color=transparent" + << (unconstrained ? 
", constraint=false" : "") << "]"; + } + os_ << "\n"; +} + + +void GraphVisualizer::Print() { + os_ << "digraph D {\n" + << " node [fontsize=8,height=0.25]\n" + << " rankdir=\"BT\"\n" + << " \n"; + + // Make sure all nodes have been output before writing out the edges. + use_to_def_ = true; + // TODO(svenpanne) Remove the need for the const_casts. + const_cast<Graph*>(graph_)->VisitNodeInputsFromEnd(this); + white_nodes_.insert(const_cast<Graph*>(graph_)->start()); + + // Visit all uses of white nodes. + use_to_def_ = false; + GenericGraphVisit::Visit<GraphVisualizer, NodeUseIterationTraits<Node> >( + const_cast<Graph*>(graph_), white_nodes_.begin(), white_nodes_.end(), + this); + + os_ << " DEAD_INPUT [\n" + << " style=\"filled\" \n" + << " fillcolor=\"" DEAD_COLOR "\"\n" + << " ]\n" + << "\n"; + + // With all the nodes written, add the edges. + for (NodeSetIter i = all_nodes_.begin(); i != all_nodes_.end(); ++i) { + Node::Inputs inputs = (*i)->inputs(); + for (Node::Inputs::iterator iter(inputs.begin()); iter != inputs.end(); + ++iter) { + PrintEdge(iter.edge().from(), iter.edge().index(), iter.edge().to()); + } + } + os_ << "}\n"; +} + + +GraphVisualizer::GraphVisualizer(OStream& os, const Graph* graph) // NOLINT + : all_nodes_(NodeSet::key_compare(), + NodeSet::allocator_type(graph->zone())), + white_nodes_(NodeSet::key_compare(), + NodeSet::allocator_type(graph->zone())), + use_to_def_(true), + os_(os), + graph_(graph) {} + + +OStream& operator<<(OStream& os, const AsDOT& ad) { + GraphVisualizer(os, &ad.graph).Print(); + return os; +} +} +} +} // namespace v8::internal::compiler diff --git a/deps/v8/src/compiler/graph-visualizer.h b/deps/v8/src/compiler/graph-visualizer.h new file mode 100644 index 00000000000..12532bacf8b --- /dev/null +++ b/deps/v8/src/compiler/graph-visualizer.h @@ -0,0 +1,29 @@ +// Copyright 2013 the V8 project authors. All rights reserved. +// Use of this source code is governed by a BSD-style license that can be +// found in the LICENSE file. + +#ifndef V8_COMPILER_GRAPH_VISUALIZER_H_ +#define V8_COMPILER_GRAPH_VISUALIZER_H_ + +#include "src/v8.h" + +namespace v8 { +namespace internal { + +class OStream; + +namespace compiler { + +class Graph; + +struct AsDOT { + explicit AsDOT(const Graph& g) : graph(g) {} + const Graph& graph; +}; + +OStream& operator<<(OStream& os, const AsDOT& ad); +} +} +} // namespace v8::internal::compiler + +#endif // V8_COMPILER_GRAPH_VISUALIZER_H_ diff --git a/deps/v8/src/compiler/graph.cc b/deps/v8/src/compiler/graph.cc new file mode 100644 index 00000000000..3f47eace815 --- /dev/null +++ b/deps/v8/src/compiler/graph.cc @@ -0,0 +1,54 @@ +// Copyright 2013 the V8 project authors. All rights reserved. +// Use of this source code is governed by a BSD-style license that can be +// found in the LICENSE file. 
+ +#include "src/compiler/graph.h" + +#include "src/compiler/common-operator.h" +#include "src/compiler/generic-node-inl.h" +#include "src/compiler/graph-inl.h" +#include "src/compiler/node.h" +#include "src/compiler/node-aux-data-inl.h" +#include "src/compiler/node-properties.h" +#include "src/compiler/node-properties-inl.h" +#include "src/compiler/operator-properties.h" +#include "src/compiler/operator-properties-inl.h" + +namespace v8 { +namespace internal { +namespace compiler { + +Graph::Graph(Zone* zone) + : GenericGraph<Node>(zone), + decorators_(DecoratorVector::allocator_type(zone)) {} + + +Node* Graph::NewNode(Operator* op, int input_count, Node** inputs) { + DCHECK(op->InputCount() <= input_count); + Node* result = Node::New(this, input_count, inputs); + result->Initialize(op); + for (DecoratorVector::iterator i = decorators_.begin(); + i != decorators_.end(); ++i) { + (*i)->Decorate(result); + } + return result; +} + + +void Graph::ChangeOperator(Node* node, Operator* op) { node->set_op(op); } + + +void Graph::DeleteNode(Node* node) { +#if DEBUG + // Nodes can't be deleted if they have uses. + Node::Uses::iterator use_iterator(node->uses().begin()); + DCHECK(use_iterator == node->uses().end()); +#endif + +#if DEBUG + memset(node, 0xDE, sizeof(Node)); +#endif +} +} +} +} // namespace v8::internal::compiler diff --git a/deps/v8/src/compiler/graph.h b/deps/v8/src/compiler/graph.h new file mode 100644 index 00000000000..65ea3b30a42 --- /dev/null +++ b/deps/v8/src/compiler/graph.h @@ -0,0 +1,97 @@ +// Copyright 2013 the V8 project authors. All rights reserved. +// Use of this source code is governed by a BSD-style license that can be +// found in the LICENSE file. + +#ifndef V8_COMPILER_GRAPH_H_ +#define V8_COMPILER_GRAPH_H_ + +#include <map> +#include <set> + +#include "src/compiler/generic-algorithm.h" +#include "src/compiler/node.h" +#include "src/compiler/node-aux-data.h" +#include "src/compiler/source-position.h" + +namespace v8 { +namespace internal { +namespace compiler { + +class GraphDecorator; + + +class Graph : public GenericGraph<Node> { + public: + explicit Graph(Zone* zone); + + // Base implementation used by all factory methods. + Node* NewNode(Operator* op, int input_count, Node** inputs); + + // Factories for nodes with static input counts. 
+ Node* NewNode(Operator* op) { + return NewNode(op, 0, static_cast<Node**>(NULL)); + } + Node* NewNode(Operator* op, Node* n1) { return NewNode(op, 1, &n1); } + Node* NewNode(Operator* op, Node* n1, Node* n2) { + Node* nodes[] = {n1, n2}; + return NewNode(op, ARRAY_SIZE(nodes), nodes); + } + Node* NewNode(Operator* op, Node* n1, Node* n2, Node* n3) { + Node* nodes[] = {n1, n2, n3}; + return NewNode(op, ARRAY_SIZE(nodes), nodes); + } + Node* NewNode(Operator* op, Node* n1, Node* n2, Node* n3, Node* n4) { + Node* nodes[] = {n1, n2, n3, n4}; + return NewNode(op, ARRAY_SIZE(nodes), nodes); + } + Node* NewNode(Operator* op, Node* n1, Node* n2, Node* n3, Node* n4, + Node* n5) { + Node* nodes[] = {n1, n2, n3, n4, n5}; + return NewNode(op, ARRAY_SIZE(nodes), nodes); + } + Node* NewNode(Operator* op, Node* n1, Node* n2, Node* n3, Node* n4, Node* n5, + Node* n6) { + Node* nodes[] = {n1, n2, n3, n4, n5, n6}; + return NewNode(op, ARRAY_SIZE(nodes), nodes); + } + + void ChangeOperator(Node* node, Operator* op); + void DeleteNode(Node* node); + + template <class Visitor> + void VisitNodeUsesFrom(Node* node, Visitor* visitor); + + template <class Visitor> + void VisitNodeUsesFromStart(Visitor* visitor); + + template <class Visitor> + void VisitNodeInputsFromEnd(Visitor* visitor); + + void AddDecorator(GraphDecorator* decorator) { + decorators_.push_back(decorator); + } + + void RemoveDecorator(GraphDecorator* decorator) { + DecoratorVector::iterator it = + std::find(decorators_.begin(), decorators_.end(), decorator); + DCHECK(it != decorators_.end()); + decorators_.erase(it, it + 1); + } + + private: + typedef std::vector<GraphDecorator*, zone_allocator<GraphDecorator*> > + DecoratorVector; + DecoratorVector decorators_; +}; + + +class GraphDecorator : public ZoneObject { + public: + virtual ~GraphDecorator() {} + virtual void Decorate(Node* node) = 0; +}; +} +} +} // namespace v8::internal::compiler + +#endif // V8_COMPILER_GRAPH_H_ diff --git a/deps/v8/src/compiler/ia32/code-generator-ia32.cc b/deps/v8/src/compiler/ia32/code-generator-ia32.cc new file mode 100644 index 00000000000..31a01798a94 --- /dev/null +++ b/deps/v8/src/compiler/ia32/code-generator-ia32.cc @@ -0,0 +1,956 @@ +// Copyright 2013 the V8 project authors. All rights reserved. +// Use of this source code is governed by a BSD-style license that can be +// found in the LICENSE file. + +#include "src/compiler/code-generator.h" + +#include "src/compiler/code-generator-impl.h" +#include "src/compiler/gap-resolver.h" +#include "src/compiler/node-matchers.h" +#include "src/compiler/node-properties-inl.h" +#include "src/ia32/assembler-ia32.h" +#include "src/ia32/macro-assembler-ia32.h" +#include "src/scopes.h" + +namespace v8 { +namespace internal { +namespace compiler { + +#define __ masm()-> + + +// Adds IA-32 specific methods for decoding operands. 
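One more note on graph.h above before this IA-32 converter class: AddDecorator/RemoveDecorator give clients a hook that runs on every node the graph creates, which is how out-of-line per-node data (source positions, for instance) can be attached at creation time without widening the node itself. A standalone toy of the hook (illustrative names; the real Decorate receives a compiler::Node*):

#include <cassert>
#include <vector>

struct ToyNode { int id; };

struct ToyDecorator {
  virtual ~ToyDecorator() {}
  virtual void Decorate(ToyNode* node) = 0;
};

struct CountingDecorator : ToyDecorator {
  CountingDecorator() : seen(0) {}
  virtual void Decorate(ToyNode* node) { ++seen; (void)node; }
  int seen;
};

class ToyGraph {
 public:
  ToyNode* NewNode(int id) {
    nodes_.push_back(ToyNode());
    nodes_.back().id = id;
    ToyNode* result = &nodes_.back();  // toy only: ignores vector reallocation
    for (size_t i = 0; i < decorators_.size(); ++i) {
      decorators_[i]->Decorate(result);  // every factory call notifies hooks
    }
    return result;
  }
  void AddDecorator(ToyDecorator* decorator) {
    decorators_.push_back(decorator);
  }
 private:
  std::vector<ToyNode> nodes_;
  std::vector<ToyDecorator*> decorators_;
};

int main() {
  ToyGraph graph;
  CountingDecorator counter;
  graph.AddDecorator(&counter);
  graph.NewNode(1);
  graph.NewNode(2);
  assert(counter.seen == 2);
  return 0;
}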
+class IA32OperandConverter : public InstructionOperandConverter { + public: + IA32OperandConverter(CodeGenerator* gen, Instruction* instr) + : InstructionOperandConverter(gen, instr) {} + + Operand InputOperand(int index) { return ToOperand(instr_->InputAt(index)); } + + Immediate InputImmediate(int index) { + return ToImmediate(instr_->InputAt(index)); + } + + Operand OutputOperand() { return ToOperand(instr_->Output()); } + + Operand TempOperand(int index) { return ToOperand(instr_->TempAt(index)); } + + Operand ToOperand(InstructionOperand* op, int extra = 0) { + if (op->IsRegister()) { + DCHECK(extra == 0); + return Operand(ToRegister(op)); + } else if (op->IsDoubleRegister()) { + DCHECK(extra == 0); + return Operand(ToDoubleRegister(op)); + } + DCHECK(op->IsStackSlot() || op->IsDoubleStackSlot()); + // The linkage computes where all spill slots are located. + FrameOffset offset = linkage()->GetFrameOffset(op->index(), frame(), extra); + return Operand(offset.from_stack_pointer() ? esp : ebp, offset.offset()); + } + + Operand HighOperand(InstructionOperand* op) { + DCHECK(op->IsDoubleStackSlot()); + return ToOperand(op, kPointerSize); + } + + Immediate ToImmediate(InstructionOperand* operand) { + Constant constant = ToConstant(operand); + switch (constant.type()) { + case Constant::kInt32: + return Immediate(constant.ToInt32()); + case Constant::kFloat64: + return Immediate( + isolate()->factory()->NewNumber(constant.ToFloat64(), TENURED)); + case Constant::kExternalReference: + return Immediate(constant.ToExternalReference()); + case Constant::kHeapObject: + return Immediate(constant.ToHeapObject()); + case Constant::kInt64: + break; + } + UNREACHABLE(); + return Immediate(-1); + } + + Operand MemoryOperand(int* first_input) { + const int offset = *first_input; + switch (AddressingModeField::decode(instr_->opcode())) { + case kMode_MR1I: + *first_input += 2; + return Operand(InputRegister(offset + 0), InputRegister(offset + 1), + times_1, + 0); // TODO(dcarney): K != 0 + case kMode_MRI: + *first_input += 2; + return Operand::ForRegisterPlusImmediate(InputRegister(offset + 0), + InputImmediate(offset + 1)); + case kMode_MI: + *first_input += 1; + return Operand(InputImmediate(offset + 0)); + default: + UNREACHABLE(); + return Operand(no_reg); + } + } + + Operand MemoryOperand() { + int first_input = 0; + return MemoryOperand(&first_input); + } +}; + + +static bool HasImmediateInput(Instruction* instr, int index) { + return instr->InputAt(index)->IsImmediate(); +} + + +// Assembles an instruction after register allocation, producing machine code. +void CodeGenerator::AssembleArchInstruction(Instruction* instr) { + IA32OperandConverter i(this, instr); + + switch (ArchOpcodeField::decode(instr->opcode())) { + case kArchJmp: + __ jmp(code()->GetLabel(i.InputBlock(0))); + break; + case kArchNop: + // don't emit code for nops. 
+ break; + case kArchRet: + AssembleReturn(); + break; + case kArchDeoptimize: { + int deoptimization_id = MiscField::decode(instr->opcode()); + BuildTranslation(instr, deoptimization_id); + + Address deopt_entry = Deoptimizer::GetDeoptimizationEntry( + isolate(), deoptimization_id, Deoptimizer::LAZY); + __ call(deopt_entry, RelocInfo::RUNTIME_ENTRY); + break; + } + case kIA32Add: + if (HasImmediateInput(instr, 1)) { + __ add(i.InputOperand(0), i.InputImmediate(1)); + } else { + __ add(i.InputRegister(0), i.InputOperand(1)); + } + break; + case kIA32And: + if (HasImmediateInput(instr, 1)) { + __ and_(i.InputOperand(0), i.InputImmediate(1)); + } else { + __ and_(i.InputRegister(0), i.InputOperand(1)); + } + break; + case kIA32Cmp: + if (HasImmediateInput(instr, 1)) { + __ cmp(i.InputOperand(0), i.InputImmediate(1)); + } else { + __ cmp(i.InputRegister(0), i.InputOperand(1)); + } + break; + case kIA32Test: + if (HasImmediateInput(instr, 1)) { + __ test(i.InputOperand(0), i.InputImmediate(1)); + } else { + __ test(i.InputRegister(0), i.InputOperand(1)); + } + break; + case kIA32Imul: + if (HasImmediateInput(instr, 1)) { + __ imul(i.OutputRegister(), i.InputOperand(0), i.InputInt32(1)); + } else { + __ imul(i.OutputRegister(), i.InputOperand(1)); + } + break; + case kIA32Idiv: + __ cdq(); + __ idiv(i.InputOperand(1)); + break; + case kIA32Udiv: + __ xor_(edx, edx); + __ div(i.InputOperand(1)); + break; + case kIA32Not: + __ not_(i.OutputOperand()); + break; + case kIA32Neg: + __ neg(i.OutputOperand()); + break; + case kIA32Or: + if (HasImmediateInput(instr, 1)) { + __ or_(i.InputOperand(0), i.InputImmediate(1)); + } else { + __ or_(i.InputRegister(0), i.InputOperand(1)); + } + break; + case kIA32Xor: + if (HasImmediateInput(instr, 1)) { + __ xor_(i.InputOperand(0), i.InputImmediate(1)); + } else { + __ xor_(i.InputRegister(0), i.InputOperand(1)); + } + break; + case kIA32Sub: + if (HasImmediateInput(instr, 1)) { + __ sub(i.InputOperand(0), i.InputImmediate(1)); + } else { + __ sub(i.InputRegister(0), i.InputOperand(1)); + } + break; + case kIA32Shl: + if (HasImmediateInput(instr, 1)) { + __ shl(i.OutputRegister(), i.InputInt5(1)); + } else { + __ shl_cl(i.OutputRegister()); + } + break; + case kIA32Shr: + if (HasImmediateInput(instr, 1)) { + __ shr(i.OutputRegister(), i.InputInt5(1)); + } else { + __ shr_cl(i.OutputRegister()); + } + break; + case kIA32Sar: + if (HasImmediateInput(instr, 1)) { + __ sar(i.OutputRegister(), i.InputInt5(1)); + } else { + __ sar_cl(i.OutputRegister()); + } + break; + case kIA32Push: + if (HasImmediateInput(instr, 0)) { + __ push(i.InputImmediate(0)); + } else { + __ push(i.InputOperand(0)); + } + break; + case kIA32CallCodeObject: { + if (HasImmediateInput(instr, 0)) { + Handle<Code> code = Handle<Code>::cast(i.InputHeapObject(0)); + __ call(code, RelocInfo::CODE_TARGET); + } else { + Register reg = i.InputRegister(0); + int entry = Code::kHeaderSize - kHeapObjectTag; + __ call(Operand(reg, entry)); + } + RecordSafepoint(instr->pointer_map(), Safepoint::kSimple, 0, + Safepoint::kNoLazyDeopt); + + bool lazy_deopt = (MiscField::decode(instr->opcode()) == 1); + if (lazy_deopt) { + RecordLazyDeoptimizationEntry(instr); + } + AddNopForSmiCodeInlining(); + break; + } + case kIA32CallAddress: + if (HasImmediateInput(instr, 0)) { + // TODO(dcarney): wire up EXTERNAL_REFERENCE instead of RUNTIME_ENTRY. 
+ __ call(reinterpret_cast<byte*>(i.InputInt32(0)),
+ RelocInfo::RUNTIME_ENTRY);
+ } else {
+ __ call(i.InputRegister(0));
+ }
+ break;
+ case kPopStack: {
+ int words = MiscField::decode(instr->opcode());
+ __ add(esp, Immediate(kPointerSize * words));
+ break;
+ }
+ case kIA32CallJSFunction: {
+ Register func = i.InputRegister(0);
+
+ // TODO(jarin) The load of the context should be separated from the call.
+ __ mov(esi, FieldOperand(func, JSFunction::kContextOffset));
+ __ call(FieldOperand(func, JSFunction::kCodeEntryOffset));
+
+ RecordSafepoint(instr->pointer_map(), Safepoint::kSimple, 0,
+ Safepoint::kNoLazyDeopt);
+ RecordLazyDeoptimizationEntry(instr);
+ break;
+ }
+ case kSSEFloat64Cmp:
+ __ ucomisd(i.InputDoubleRegister(0), i.InputOperand(1));
+ break;
+ case kSSEFloat64Add:
+ __ addsd(i.InputDoubleRegister(0), i.InputDoubleRegister(1));
+ break;
+ case kSSEFloat64Sub:
+ __ subsd(i.InputDoubleRegister(0), i.InputDoubleRegister(1));
+ break;
+ case kSSEFloat64Mul:
+ __ mulsd(i.InputDoubleRegister(0), i.InputDoubleRegister(1));
+ break;
+ case kSSEFloat64Div:
+ __ divsd(i.InputDoubleRegister(0), i.InputDoubleRegister(1));
+ break;
+ case kSSEFloat64Mod: {
+ // TODO(dcarney): alignment is wrong.
+ __ sub(esp, Immediate(kDoubleSize));
+ // Move values to st(0) and st(1).
+ __ movsd(Operand(esp, 0), i.InputDoubleRegister(1));
+ __ fld_d(Operand(esp, 0));
+ __ movsd(Operand(esp, 0), i.InputDoubleRegister(0));
+ __ fld_d(Operand(esp, 0));
+ // Loop while fprem isn't done.
+ Label mod_loop;
+ __ bind(&mod_loop);
+ // This instruction traps on all kinds of inputs, but we are assuming the
+ // floating point control word is set to ignore them all.
+ __ fprem();
+ // The following 2 instructions implicitly use eax.
+ __ fnstsw_ax();
+ __ sahf();
+ __ j(parity_even, &mod_loop);
+ // Move output to stack and clean up.
+ __ fstp(1);
+ __ fstp_d(Operand(esp, 0));
+ __ movsd(i.OutputDoubleRegister(), Operand(esp, 0));
+ __ add(esp, Immediate(kDoubleSize));
+ break;
+ }
+ case kSSEFloat64ToInt32:
+ __ cvttsd2si(i.OutputRegister(), i.InputOperand(0));
+ break;
+ case kSSEFloat64ToUint32: {
+ XMMRegister scratch = xmm0;
+ __ Move(scratch, -2147483648.0);
+ // TODO(turbofan): IA32 SSE subsd() should take an operand.
+ __ addsd(scratch, i.InputDoubleRegister(0));
+ __ cvttsd2si(i.OutputRegister(), scratch);
+ __ add(i.OutputRegister(), Immediate(0x80000000));
+ break;
+ }
+ case kSSEInt32ToFloat64:
+ __ cvtsi2sd(i.OutputDoubleRegister(), i.InputOperand(0));
+ break;
+ case kSSEUint32ToFloat64:
+ // TODO(turbofan): IA32 SSE LoadUint32() should take an operand.
+ __ LoadUint32(i.OutputDoubleRegister(), i.InputRegister(0)); + break; + case kSSELoad: + __ movsd(i.OutputDoubleRegister(), i.MemoryOperand()); + break; + case kSSEStore: { + int index = 0; + Operand operand = i.MemoryOperand(&index); + __ movsd(operand, i.InputDoubleRegister(index)); + break; + } + case kIA32LoadWord8: + __ movzx_b(i.OutputRegister(), i.MemoryOperand()); + break; + case kIA32StoreWord8: { + int index = 0; + Operand operand = i.MemoryOperand(&index); + __ mov_b(operand, i.InputRegister(index)); + break; + } + case kIA32StoreWord8I: { + int index = 0; + Operand operand = i.MemoryOperand(&index); + __ mov_b(operand, i.InputInt8(index)); + break; + } + case kIA32LoadWord16: + __ movzx_w(i.OutputRegister(), i.MemoryOperand()); + break; + case kIA32StoreWord16: { + int index = 0; + Operand operand = i.MemoryOperand(&index); + __ mov_w(operand, i.InputRegister(index)); + break; + } + case kIA32StoreWord16I: { + int index = 0; + Operand operand = i.MemoryOperand(&index); + __ mov_w(operand, i.InputInt16(index)); + break; + } + case kIA32LoadWord32: + __ mov(i.OutputRegister(), i.MemoryOperand()); + break; + case kIA32StoreWord32: { + int index = 0; + Operand operand = i.MemoryOperand(&index); + __ mov(operand, i.InputRegister(index)); + break; + } + case kIA32StoreWord32I: { + int index = 0; + Operand operand = i.MemoryOperand(&index); + __ mov(operand, i.InputImmediate(index)); + break; + } + case kIA32StoreWriteBarrier: { + Register object = i.InputRegister(0); + Register index = i.InputRegister(1); + Register value = i.InputRegister(2); + __ mov(Operand(object, index, times_1, 0), value); + __ lea(index, Operand(object, index, times_1, 0)); + SaveFPRegsMode mode = code_->frame()->DidAllocateDoubleRegisters() + ? kSaveFPRegs + : kDontSaveFPRegs; + __ RecordWrite(object, index, value, mode); + break; + } + } +} + + +// Assembles branches after an instruction. +void CodeGenerator::AssembleArchBranch(Instruction* instr, + FlagsCondition condition) { + IA32OperandConverter i(this, instr); + Label done; + + // Emit a branch. The true and false targets are always the last two inputs + // to the instruction. + BasicBlock* tblock = i.InputBlock(instr->InputCount() - 2); + BasicBlock* fblock = i.InputBlock(instr->InputCount() - 1); + bool fallthru = IsNextInAssemblyOrder(fblock); + Label* tlabel = code()->GetLabel(tblock); + Label* flabel = fallthru ? &done : code()->GetLabel(fblock); + Label::Distance flabel_distance = fallthru ? Label::kNear : Label::kFar; + switch (condition) { + case kUnorderedEqual: + __ j(parity_even, flabel, flabel_distance); + // Fall through. + case kEqual: + __ j(equal, tlabel); + break; + case kUnorderedNotEqual: + __ j(parity_even, tlabel); + // Fall through. + case kNotEqual: + __ j(not_equal, tlabel); + break; + case kSignedLessThan: + __ j(less, tlabel); + break; + case kSignedGreaterThanOrEqual: + __ j(greater_equal, tlabel); + break; + case kSignedLessThanOrEqual: + __ j(less_equal, tlabel); + break; + case kSignedGreaterThan: + __ j(greater, tlabel); + break; + case kUnorderedLessThan: + __ j(parity_even, flabel, flabel_distance); + // Fall through. + case kUnsignedLessThan: + __ j(below, tlabel); + break; + case kUnorderedGreaterThanOrEqual: + __ j(parity_even, tlabel); + // Fall through. + case kUnsignedGreaterThanOrEqual: + __ j(above_equal, tlabel); + break; + case kUnorderedLessThanOrEqual: + __ j(parity_even, flabel, flabel_distance); + // Fall through. 
+ case kUnsignedLessThanOrEqual: + __ j(below_equal, tlabel); + break; + case kUnorderedGreaterThan: + __ j(parity_even, tlabel); + // Fall through. + case kUnsignedGreaterThan: + __ j(above, tlabel); + break; + case kOverflow: + __ j(overflow, tlabel); + break; + case kNotOverflow: + __ j(no_overflow, tlabel); + break; + } + if (!fallthru) __ jmp(flabel, flabel_distance); // no fallthru to flabel. + __ bind(&done); +} + + +// Assembles boolean materializations after an instruction. +void CodeGenerator::AssembleArchBoolean(Instruction* instr, + FlagsCondition condition) { + IA32OperandConverter i(this, instr); + Label done; + + // Materialize a full 32-bit 1 or 0 value. The result register is always the + // last output of the instruction. + Label check; + DCHECK_NE(0, instr->OutputCount()); + Register reg = i.OutputRegister(instr->OutputCount() - 1); + Condition cc = no_condition; + switch (condition) { + case kUnorderedEqual: + __ j(parity_odd, &check, Label::kNear); + __ mov(reg, Immediate(0)); + __ jmp(&done, Label::kNear); + // Fall through. + case kEqual: + cc = equal; + break; + case kUnorderedNotEqual: + __ j(parity_odd, &check, Label::kNear); + __ mov(reg, Immediate(1)); + __ jmp(&done, Label::kNear); + // Fall through. + case kNotEqual: + cc = not_equal; + break; + case kSignedLessThan: + cc = less; + break; + case kSignedGreaterThanOrEqual: + cc = greater_equal; + break; + case kSignedLessThanOrEqual: + cc = less_equal; + break; + case kSignedGreaterThan: + cc = greater; + break; + case kUnorderedLessThan: + __ j(parity_odd, &check, Label::kNear); + __ mov(reg, Immediate(0)); + __ jmp(&done, Label::kNear); + // Fall through. + case kUnsignedLessThan: + cc = below; + break; + case kUnorderedGreaterThanOrEqual: + __ j(parity_odd, &check, Label::kNear); + __ mov(reg, Immediate(1)); + __ jmp(&done, Label::kNear); + // Fall through. + case kUnsignedGreaterThanOrEqual: + cc = above_equal; + break; + case kUnorderedLessThanOrEqual: + __ j(parity_odd, &check, Label::kNear); + __ mov(reg, Immediate(0)); + __ jmp(&done, Label::kNear); + // Fall through. + case kUnsignedLessThanOrEqual: + cc = below_equal; + break; + case kUnorderedGreaterThan: + __ j(parity_odd, &check, Label::kNear); + __ mov(reg, Immediate(1)); + __ jmp(&done, Label::kNear); + // Fall through. + case kUnsignedGreaterThan: + cc = above; + break; + case kOverflow: + cc = overflow; + break; + case kNotOverflow: + cc = no_overflow; + break; + } + __ bind(&check); + if (reg.is_byte_register()) { + // setcc for byte registers (al, bl, cl, dl). + __ setcc(cc, reg); + __ movzx_b(reg, reg); + } else { + // Emit a branch to set a register to either 1 or 0. 
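+ // setcc can only write the 8-bit registers al, bl, cl and dl, so when
+ // the result was allocated to a register without a byte alias (esi,
+ // edi, ...) the boolean has to be materialized with an explicit
+ // compare-and-branch sequence:
+ //   j(cc, &set); mov(reg, 0); jmp(&done); bind(&set); mov(reg, 1);
+ // which is exactly what follows.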
+ Label set;
+ __ j(cc, &set, Label::kNear);
+ __ mov(reg, Immediate(0));
+ __ jmp(&done, Label::kNear);
+ __ bind(&set);
+ __ mov(reg, Immediate(1));
+ }
+ __ bind(&done);
+}
+
+
+// The calling convention for JSFunctions on IA32 passes arguments on the
+// stack and the JSFunction and context in EDI and ESI, respectively; thus
+// the steps of the call look as follows:
+
+// --{ before the call instruction }--------------------------------------------
+// | caller frame |
+// ^ esp ^ ebp
+
+// --{ push arguments and setup ESI, EDI }--------------------------------------
+// | args + receiver | caller frame |
+// ^ esp ^ ebp
+// [edi = JSFunction, esi = context]
+
+// --{ call [edi + kCodeEntryOffset] }------------------------------------------
+// | RET | args + receiver | caller frame |
+// ^ esp ^ ebp
+
+// =={ prologue of called function }============================================
+// --{ push ebp }---------------------------------------------------------------
+// | FP | RET | args + receiver | caller frame |
+// ^ esp ^ ebp
+
+// --{ mov ebp, esp }-----------------------------------------------------------
+// | FP | RET | args + receiver | caller frame |
+// ^ ebp,esp
+
+// --{ push esi }---------------------------------------------------------------
+// | CTX | FP | RET | args + receiver | caller frame |
+// ^esp ^ ebp
+
+// --{ push edi }---------------------------------------------------------------
+// | FNC | CTX | FP | RET | args + receiver | caller frame |
+// ^esp ^ ebp
+
+// --{ subi esp, #N }-----------------------------------------------------------
+// | callee frame | FNC | CTX | FP | RET | args + receiver | caller frame |
+// ^esp ^ ebp
+
+// =={ body of called function }================================================
+
+// =={ epilogue of called function }============================================
+// --{ mov esp, ebp }-----------------------------------------------------------
+// | FP | RET | args + receiver | caller frame |
+// ^ esp,ebp
+
+// --{ pop ebp }-----------------------------------------------------------
+// | | RET | args + receiver | caller frame |
+// ^ esp ^ ebp
+
+// --{ ret #A+1 }-----------------------------------------------------------
+// | | caller frame |
+// ^ esp ^ ebp
+
+
+// Runtime function calls are accomplished by doing a stub call to the
+// CEntryStub (a real code object). On IA32, arguments are passed on the
+// stack, the number of arguments in EAX, the address of the runtime function
+// in EBX, and the context in ESI.
+
+// --{ before the call instruction }--------------------------------------------
+// | caller frame |
+// ^ esp ^ ebp
+
+// --{ push arguments and setup EAX, EBX, and ESI }-----------------------------
+// | args + receiver | caller frame |
+// ^ esp ^ ebp
+// [eax = #args, ebx = runtime function, esi = context]
+
+// --{ call #CEntryStub }-------------------------------------------------------
+// | RET | args + receiver | caller frame |
+// ^ esp ^ ebp
+
+// =={ body of runtime function }===============================================
+
+// --{ runtime returns }--------------------------------------------------------
+// | caller frame |
+// ^ esp ^ ebp
+
+// Other custom linkages (e.g. for calling directly into and out of C++) may
+// need to save callee-saved registers on the stack, which is done in the
+// function prologue of generated code.
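+// As a concrete illustration (a sketch mirroring the traits used by this
+// port): a C linkage preserving esi, edi and ebx would describe them as
+//   RegList saves = esi.bit() | edi.bit() | ebx.bit();
+// and the prologue below walks this mask from the highest register code
+// downwards, pushing every register whose bit is set.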
+
+// --{ before the call instruction }--------------------------------------------
+// | caller frame |
+// ^ esp ^ ebp
+
+// --{ set up arguments in registers on stack }---------------------------------
+// | args | caller frame |
+// ^ esp ^ ebp
+// [r0 = arg0, r1 = arg1, ...]
+
+// --{ call code }--------------------------------------------------------------
+// | RET | args | caller frame |
+// ^ esp ^ ebp
+
+// =={ prologue of called function }============================================
+// --{ push ebp }---------------------------------------------------------------
+// | FP | RET | args | caller frame |
+// ^ esp ^ ebp
+
+// --{ mov ebp, esp }-----------------------------------------------------------
+// | FP | RET | args | caller frame |
+// ^ ebp,esp
+
+// --{ save registers }---------------------------------------------------------
+// | regs | FP | RET | args | caller frame |
+// ^ esp ^ ebp
+
+// --{ subi esp, #N }-----------------------------------------------------------
+// | callee frame | regs | FP | RET | args | caller frame |
+// ^esp ^ ebp
+
+// =={ body of called function }================================================
+
+// =={ epilogue of called function }============================================
+// --{ restore registers }------------------------------------------------------
+// | regs | FP | RET | args | caller frame |
+// ^ esp ^ ebp
+
+// --{ mov esp, ebp }-----------------------------------------------------------
+// | FP | RET | args | caller frame |
+// ^ esp,ebp
+
+// --{ pop ebp }----------------------------------------------------------------
+// | RET | args | caller frame |
+// ^ esp ^ ebp
+
+
+void CodeGenerator::AssemblePrologue() {
+ CallDescriptor* descriptor = linkage()->GetIncomingDescriptor();
+ Frame* frame = code_->frame();
+ int stack_slots = frame->GetSpillSlotCount();
+ if (descriptor->kind() == CallDescriptor::kCallAddress) {
+ // Assemble a prologue similar to the cdecl calling convention.
+ __ push(ebp);
+ __ mov(ebp, esp);
+ const RegList saves = descriptor->CalleeSavedRegisters();
+ if (saves != 0) { // Save callee-saved registers.
+ int register_save_area_size = 0;
+ for (int i = Register::kNumRegisters - 1; i >= 0; i--) {
+ if (!((1 << i) & saves)) continue;
+ __ push(Register::from_code(i));
+ register_save_area_size += kPointerSize;
+ }
+ frame->SetRegisterSaveAreaSize(register_save_area_size);
+ }
+ } else if (descriptor->IsJSFunctionCall()) {
+ CompilationInfo* info = linkage()->info();
+ __ Prologue(info->IsCodePreAgingActive());
+ frame->SetRegisterSaveAreaSize(
+ StandardFrameConstants::kFixedFrameSizeFromFp);
+
+ // Sloppy mode functions and builtins need to replace the receiver with the
+ // global proxy when called as functions (without an explicit receiver
+ // object).
+ // TODO(mstarzinger/verwaest): Should this be moved back into the CallIC?
+ if (info->strict_mode() == SLOPPY && !info->is_native()) {
+ Label ok;
+ // +2 for return address and saved frame pointer.
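+ // Worked example, assuming the standard frame layout sketched above:
+ // for a function with 3 declared parameters, receiver_slot is 3 + 2 = 5,
+ // so the receiver lives at [ebp + 5 * kPointerSize] = [ebp + 20], just
+ // above the saved frame pointer, the return address and the parameters.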
+ int receiver_slot = info->scope()->num_parameters() + 2; + __ mov(ecx, Operand(ebp, receiver_slot * kPointerSize)); + __ cmp(ecx, isolate()->factory()->undefined_value()); + __ j(not_equal, &ok, Label::kNear); + __ mov(ecx, GlobalObjectOperand()); + __ mov(ecx, FieldOperand(ecx, GlobalObject::kGlobalProxyOffset)); + __ mov(Operand(ebp, receiver_slot * kPointerSize), ecx); + __ bind(&ok); + } + + } else { + __ StubPrologue(); + frame->SetRegisterSaveAreaSize( + StandardFrameConstants::kFixedFrameSizeFromFp); + } + if (stack_slots > 0) { + __ sub(esp, Immediate(stack_slots * kPointerSize)); + } +} + + +void CodeGenerator::AssembleReturn() { + CallDescriptor* descriptor = linkage()->GetIncomingDescriptor(); + if (descriptor->kind() == CallDescriptor::kCallAddress) { + const RegList saves = descriptor->CalleeSavedRegisters(); + if (frame()->GetRegisterSaveAreaSize() > 0) { + // Remove this frame's spill slots first. + int stack_slots = frame()->GetSpillSlotCount(); + if (stack_slots > 0) { + __ add(esp, Immediate(stack_slots * kPointerSize)); + } + // Restore registers. + if (saves != 0) { + for (int i = 0; i < Register::kNumRegisters; i++) { + if (!((1 << i) & saves)) continue; + __ pop(Register::from_code(i)); + } + } + __ pop(ebp); // Pop caller's frame pointer. + __ ret(0); + } else { + // No saved registers. + __ mov(esp, ebp); // Move stack pointer back to frame pointer. + __ pop(ebp); // Pop caller's frame pointer. + __ ret(0); + } + } else { + __ mov(esp, ebp); // Move stack pointer back to frame pointer. + __ pop(ebp); // Pop caller's frame pointer. + int pop_count = + descriptor->IsJSFunctionCall() ? descriptor->ParameterCount() : 0; + __ ret(pop_count * kPointerSize); + } +} + + +void CodeGenerator::AssembleMove(InstructionOperand* source, + InstructionOperand* destination) { + IA32OperandConverter g(this, NULL); + // Dispatch on the source and destination operand kinds. Not all + // combinations are possible. 
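+ // The cases below cover, in this order: register sources (stored with a
+ // plain mov), stack-slot sources (memory-to-memory moves go through a
+ // push/pop pair), constants (heap objects, immediates, and doubles split
+ // into two 32-bit immediate stores for stack slots), and finally double
+ // registers and double stack slots, which use xmm0 as scratch.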
+ if (source->IsRegister()) { + DCHECK(destination->IsRegister() || destination->IsStackSlot()); + Register src = g.ToRegister(source); + Operand dst = g.ToOperand(destination); + __ mov(dst, src); + } else if (source->IsStackSlot()) { + DCHECK(destination->IsRegister() || destination->IsStackSlot()); + Operand src = g.ToOperand(source); + if (destination->IsRegister()) { + Register dst = g.ToRegister(destination); + __ mov(dst, src); + } else { + Operand dst = g.ToOperand(destination); + __ push(src); + __ pop(dst); + } + } else if (source->IsConstant()) { + Constant src_constant = g.ToConstant(source); + if (src_constant.type() == Constant::kHeapObject) { + Handle<HeapObject> src = src_constant.ToHeapObject(); + if (destination->IsRegister()) { + Register dst = g.ToRegister(destination); + __ LoadHeapObject(dst, src); + } else { + DCHECK(destination->IsStackSlot()); + Operand dst = g.ToOperand(destination); + AllowDeferredHandleDereference embedding_raw_address; + if (isolate()->heap()->InNewSpace(*src)) { + __ PushHeapObject(src); + __ pop(dst); + } else { + __ mov(dst, src); + } + } + } else if (destination->IsRegister()) { + Register dst = g.ToRegister(destination); + __ mov(dst, g.ToImmediate(source)); + } else if (destination->IsStackSlot()) { + Operand dst = g.ToOperand(destination); + __ mov(dst, g.ToImmediate(source)); + } else { + double v = g.ToDouble(source); + uint64_t int_val = BitCast<uint64_t, double>(v); + int32_t lower = static_cast<int32_t>(int_val); + int32_t upper = static_cast<int32_t>(int_val >> kBitsPerInt); + if (destination->IsDoubleRegister()) { + XMMRegister dst = g.ToDoubleRegister(destination); + __ Move(dst, v); + } else { + DCHECK(destination->IsDoubleStackSlot()); + Operand dst0 = g.ToOperand(destination); + Operand dst1 = g.HighOperand(destination); + __ mov(dst0, Immediate(lower)); + __ mov(dst1, Immediate(upper)); + } + } + } else if (source->IsDoubleRegister()) { + XMMRegister src = g.ToDoubleRegister(source); + if (destination->IsDoubleRegister()) { + XMMRegister dst = g.ToDoubleRegister(destination); + __ movaps(dst, src); + } else { + DCHECK(destination->IsDoubleStackSlot()); + Operand dst = g.ToOperand(destination); + __ movsd(dst, src); + } + } else if (source->IsDoubleStackSlot()) { + DCHECK(destination->IsDoubleRegister() || destination->IsDoubleStackSlot()); + Operand src = g.ToOperand(source); + if (destination->IsDoubleRegister()) { + XMMRegister dst = g.ToDoubleRegister(destination); + __ movsd(dst, src); + } else { + // We rely on having xmm0 available as a fixed scratch register. + Operand dst = g.ToOperand(destination); + __ movsd(xmm0, src); + __ movsd(dst, xmm0); + } + } else { + UNREACHABLE(); + } +} + + +void CodeGenerator::AssembleSwap(InstructionOperand* source, + InstructionOperand* destination) { + IA32OperandConverter g(this, NULL); + // Dispatch on the source and destination operand kinds. Not all + // combinations are possible. + if (source->IsRegister() && destination->IsRegister()) { + // Register-register. + Register src = g.ToRegister(source); + Register dst = g.ToRegister(destination); + __ xchg(dst, src); + } else if (source->IsRegister() && destination->IsStackSlot()) { + // Register-memory. + __ xchg(g.ToRegister(source), g.ToOperand(destination)); + } else if (source->IsStackSlot() && destination->IsStackSlot()) { + // Memory-memory. 
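+ // The machine stack doubles as scratch space: after the two pushes the
+ // source word sits on top with the destination word below it, so popping
+ // into dst and then into src completes the exchange without needing a
+ // free register.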
+ Operand src = g.ToOperand(source);
+ Operand dst = g.ToOperand(destination);
+ __ push(dst);
+ __ push(src);
+ __ pop(dst);
+ __ pop(src);
+ } else if (source->IsDoubleRegister() && destination->IsDoubleRegister()) {
+ // XMM register-register swap. We rely on having xmm0
+ // available as a fixed scratch register.
+ XMMRegister src = g.ToDoubleRegister(source);
+ XMMRegister dst = g.ToDoubleRegister(destination);
+ __ movaps(xmm0, src);
+ __ movaps(src, dst);
+ __ movaps(dst, xmm0);
+ } else if (source->IsDoubleRegister() && destination->IsDoubleStackSlot()) {
+ // XMM register-memory swap. We rely on having xmm0
+ // available as a fixed scratch register.
+ XMMRegister reg = g.ToDoubleRegister(source);
+ Operand other = g.ToOperand(destination);
+ __ movsd(xmm0, other);
+ __ movsd(other, reg);
+ __ movaps(reg, xmm0);
+ } else if (source->IsDoubleStackSlot() && destination->IsDoubleStackSlot()) {
+ // Double-width memory-to-memory.
+ Operand src0 = g.ToOperand(source);
+ Operand src1 = g.HighOperand(source);
+ Operand dst0 = g.ToOperand(destination);
+ Operand dst1 = g.HighOperand(destination);
+ __ movsd(xmm0, dst0); // Save destination in xmm0.
+ __ push(src0); // Then use stack to copy source to destination.
+ __ pop(dst0);
+ __ push(src1);
+ __ pop(dst1);
+ __ movsd(src0, xmm0);
+ } else {
+ // No other combinations are possible.
+ UNREACHABLE();
+ }
+}
+
+
+void CodeGenerator::AddNopForSmiCodeInlining() { __ nop(); }
+
+#undef __
+
+#ifdef DEBUG
+
+// Checks whether the code between start_pc and end_pc is a no-op.
+bool CodeGenerator::IsNopForSmiCodeInlining(Handle<Code> code, int start_pc,
+ int end_pc) {
+ if (start_pc + 1 != end_pc) {
+ return false;
+ }
+ return *(code->instruction_start() + start_pc) ==
+ v8::internal::Assembler::kNopByte;
+}
+
+#endif // DEBUG
+}
+}
+} // namespace v8::internal::compiler
diff --git a/deps/v8/src/compiler/ia32/instruction-codes-ia32.h b/deps/v8/src/compiler/ia32/instruction-codes-ia32.h
new file mode 100644
index 00000000000..f175ebb559b
--- /dev/null
+++ b/deps/v8/src/compiler/ia32/instruction-codes-ia32.h
@@ -0,0 +1,88 @@
+// Copyright 2014 the V8 project authors. All rights reserved.
+// Use of this source code is governed by a BSD-style license that can be
+// found in the LICENSE file.
+
+#ifndef V8_COMPILER_IA32_INSTRUCTION_CODES_IA32_H_
+#define V8_COMPILER_IA32_INSTRUCTION_CODES_IA32_H_
+
+namespace v8 {
+namespace internal {
+namespace compiler {
+
+// IA32-specific opcodes that specify which assembly sequence to emit.
+// Most opcodes specify a single instruction.
+#define TARGET_ARCH_OPCODE_LIST(V) \
+ V(IA32Add) \
+ V(IA32And) \
+ V(IA32Cmp) \
+ V(IA32Test) \
+ V(IA32Or) \
+ V(IA32Xor) \
+ V(IA32Sub) \
+ V(IA32Imul) \
+ V(IA32Idiv) \
+ V(IA32Udiv) \
+ V(IA32Not) \
+ V(IA32Neg) \
+ V(IA32Shl) \
+ V(IA32Shr) \
+ V(IA32Sar) \
+ V(IA32Push) \
+ V(IA32CallCodeObject) \
+ V(IA32CallAddress) \
+ V(PopStack) \
+ V(IA32CallJSFunction) \
+ V(SSEFloat64Cmp) \
+ V(SSEFloat64Add) \
+ V(SSEFloat64Sub) \
+ V(SSEFloat64Mul) \
+ V(SSEFloat64Div) \
+ V(SSEFloat64Mod) \
+ V(SSEFloat64ToInt32) \
+ V(SSEFloat64ToUint32) \
+ V(SSEInt32ToFloat64) \
+ V(SSEUint32ToFloat64) \
+ V(SSELoad) \
+ V(SSEStore) \
+ V(IA32LoadWord8) \
+ V(IA32StoreWord8) \
+ V(IA32StoreWord8I) \
+ V(IA32LoadWord16) \
+ V(IA32StoreWord16) \
+ V(IA32StoreWord16I) \
+ V(IA32LoadWord32) \
+ V(IA32StoreWord32) \
+ V(IA32StoreWord32I) \
+ V(IA32StoreWriteBarrier)
+
+
+// Addressing modes represent the "shape" of inputs to an instruction.
+// Many instructions support multiple addressing modes. Addressing modes +// are encoded into the InstructionCode of the instruction and tell the +// code generator after register allocation which assembler method to call. +// +// We use the following local notation for addressing modes: +// +// R = register +// O = register or stack slot +// D = double register +// I = immediate (handle, external, int32) +// MR = [register] +// MI = [immediate] +// MRN = [register + register * N in {1, 2, 4, 8}] +// MRI = [register + immediate] +// MRNI = [register + register * N in {1, 2, 4, 8} + immediate] +#define TARGET_ADDRESSING_MODE_LIST(V) \ + V(MI) /* [K] */ \ + V(MR) /* [%r0] */ \ + V(MRI) /* [%r0 + K] */ \ + V(MR1I) /* [%r0 + %r1 * 1 + K] */ \ + V(MR2I) /* [%r0 + %r1 * 2 + K] */ \ + V(MR4I) /* [%r0 + %r1 * 4 + K] */ \ + V(MR8I) /* [%r0 + %r1 * 8 + K] */ + +} // namespace compiler +} // namespace internal +} // namespace v8 + +#endif // V8_COMPILER_IA32_INSTRUCTION_CODES_IA32_H_ diff --git a/deps/v8/src/compiler/ia32/instruction-selector-ia32.cc b/deps/v8/src/compiler/ia32/instruction-selector-ia32.cc new file mode 100644 index 00000000000..a057a1e7136 --- /dev/null +++ b/deps/v8/src/compiler/ia32/instruction-selector-ia32.cc @@ -0,0 +1,560 @@ +// Copyright 2014 the V8 project authors. All rights reserved. +// Use of this source code is governed by a BSD-style license that can be +// found in the LICENSE file. + +#include "src/compiler/instruction-selector-impl.h" +#include "src/compiler/node-matchers.h" +#include "src/compiler/node-properties-inl.h" + +namespace v8 { +namespace internal { +namespace compiler { + +// Adds IA32-specific methods for generating operands. +class IA32OperandGenerator V8_FINAL : public OperandGenerator { + public: + explicit IA32OperandGenerator(InstructionSelector* selector) + : OperandGenerator(selector) {} + + InstructionOperand* UseByteRegister(Node* node) { + // TODO(dcarney): relax constraint. + return UseFixed(node, edx); + } + + bool CanBeImmediate(Node* node) { + switch (node->opcode()) { + case IrOpcode::kInt32Constant: + case IrOpcode::kNumberConstant: + case IrOpcode::kExternalConstant: + return true; + case IrOpcode::kHeapConstant: { + // Constants in new space cannot be used as immediates in V8 because + // the GC does not scan code objects when collecting the new generation. + Handle<HeapObject> value = ValueOf<Handle<HeapObject> >(node->op()); + return !isolate()->heap()->InNewSpace(*value); + } + default: + return false; + } + } +}; + + +void InstructionSelector::VisitLoad(Node* node) { + MachineType rep = OpParameter<MachineType>(node); + IA32OperandGenerator g(this); + Node* base = node->InputAt(0); + Node* index = node->InputAt(1); + + InstructionOperand* output = rep == kMachineFloat64 + ? g.DefineAsDoubleRegister(node) + : g.DefineAsRegister(node); + ArchOpcode opcode; + switch (rep) { + case kMachineFloat64: + opcode = kSSELoad; + break; + case kMachineWord8: + opcode = kIA32LoadWord8; + break; + case kMachineWord16: + opcode = kIA32LoadWord16; + break; + case kMachineTagged: // Fall through. 
+ case kMachineWord32: + opcode = kIA32LoadWord32; + break; + default: + UNREACHABLE(); + return; + } + if (g.CanBeImmediate(base)) { + if (Int32Matcher(index).Is(0)) { // load [#base + #0] + Emit(opcode | AddressingModeField::encode(kMode_MI), output, + g.UseImmediate(base)); + } else { // load [#base + %index] + Emit(opcode | AddressingModeField::encode(kMode_MRI), output, + g.UseRegister(index), g.UseImmediate(base)); + } + } else if (g.CanBeImmediate(index)) { // load [%base + #index] + Emit(opcode | AddressingModeField::encode(kMode_MRI), output, + g.UseRegister(base), g.UseImmediate(index)); + } else { // load [%base + %index + K] + Emit(opcode | AddressingModeField::encode(kMode_MR1I), output, + g.UseRegister(base), g.UseRegister(index)); + } + // TODO(turbofan): addressing modes [r+r*{2,4,8}+K] +} + + +void InstructionSelector::VisitStore(Node* node) { + IA32OperandGenerator g(this); + Node* base = node->InputAt(0); + Node* index = node->InputAt(1); + Node* value = node->InputAt(2); + + StoreRepresentation store_rep = OpParameter<StoreRepresentation>(node); + MachineType rep = store_rep.rep; + if (store_rep.write_barrier_kind == kFullWriteBarrier) { + DCHECK_EQ(kMachineTagged, rep); + // TODO(dcarney): refactor RecordWrite function to take temp registers + // and pass them here instead of using fixed regs + // TODO(dcarney): handle immediate indices. + InstructionOperand* temps[] = {g.TempRegister(ecx), g.TempRegister(edx)}; + Emit(kIA32StoreWriteBarrier, NULL, g.UseFixed(base, ebx), + g.UseFixed(index, ecx), g.UseFixed(value, edx), ARRAY_SIZE(temps), + temps); + return; + } + DCHECK_EQ(kNoWriteBarrier, store_rep.write_barrier_kind); + bool is_immediate = false; + InstructionOperand* val; + if (rep == kMachineFloat64) { + val = g.UseDoubleRegister(value); + } else { + is_immediate = g.CanBeImmediate(value); + if (is_immediate) { + val = g.UseImmediate(value); + } else if (rep == kMachineWord8) { + val = g.UseByteRegister(value); + } else { + val = g.UseRegister(value); + } + } + ArchOpcode opcode; + switch (rep) { + case kMachineFloat64: + opcode = kSSEStore; + break; + case kMachineWord8: + opcode = is_immediate ? kIA32StoreWord8I : kIA32StoreWord8; + break; + case kMachineWord16: + opcode = is_immediate ? kIA32StoreWord16I : kIA32StoreWord16; + break; + case kMachineTagged: // Fall through. + case kMachineWord32: + opcode = is_immediate ? kIA32StoreWord32I : kIA32StoreWord32; + break; + default: + UNREACHABLE(); + return; + } + if (g.CanBeImmediate(base)) { + if (Int32Matcher(index).Is(0)) { // store [#base], %|#value + Emit(opcode | AddressingModeField::encode(kMode_MI), NULL, + g.UseImmediate(base), val); + } else { // store [#base + %index], %|#value + Emit(opcode | AddressingModeField::encode(kMode_MRI), NULL, + g.UseRegister(index), g.UseImmediate(base), val); + } + } else if (g.CanBeImmediate(index)) { // store [%base + #index], %|#value + Emit(opcode | AddressingModeField::encode(kMode_MRI), NULL, + g.UseRegister(base), g.UseImmediate(index), val); + } else { // store [%base + %index], %|#value + Emit(opcode | AddressingModeField::encode(kMode_MR1I), NULL, + g.UseRegister(base), g.UseRegister(index), val); + } + // TODO(turbofan): addressing modes [r+r*{2,4,8}+K] +} + + +// Shared routine for multiple binary operations. 
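+// IA32 is a two-address architecture: instructions such as add and and_
+// overwrite their first operand with the result, which is why the selector
+// constrains the output to the same virtual register as the left input
+// (DefineSameAsFirst). Selecting w = x & 0xFF, for instance, produces
+// roughly
+//   kIA32And  out = same-as-first(x),  inputs = [x, imm(0xFF)]
+// and the register allocator makes the aliasing work out.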
+static void VisitBinop(InstructionSelector* selector, Node* node, + InstructionCode opcode, FlagsContinuation* cont) { + IA32OperandGenerator g(selector); + Int32BinopMatcher m(node); + InstructionOperand* inputs[4]; + size_t input_count = 0; + InstructionOperand* outputs[2]; + size_t output_count = 0; + + // TODO(turbofan): match complex addressing modes. + // TODO(turbofan): if commutative, pick the non-live-in operand as the left as + // this might be the last use and therefore its register can be reused. + if (g.CanBeImmediate(m.right().node())) { + inputs[input_count++] = g.Use(m.left().node()); + inputs[input_count++] = g.UseImmediate(m.right().node()); + } else { + inputs[input_count++] = g.UseRegister(m.left().node()); + inputs[input_count++] = g.Use(m.right().node()); + } + + if (cont->IsBranch()) { + inputs[input_count++] = g.Label(cont->true_block()); + inputs[input_count++] = g.Label(cont->false_block()); + } + + outputs[output_count++] = g.DefineSameAsFirst(node); + if (cont->IsSet()) { + // TODO(turbofan): Use byte register here. + outputs[output_count++] = g.DefineAsRegister(cont->result()); + } + + DCHECK_NE(0, input_count); + DCHECK_NE(0, output_count); + DCHECK_GE(ARRAY_SIZE(inputs), input_count); + DCHECK_GE(ARRAY_SIZE(outputs), output_count); + + Instruction* instr = selector->Emit(cont->Encode(opcode), output_count, + outputs, input_count, inputs); + if (cont->IsBranch()) instr->MarkAsControl(); +} + + +// Shared routine for multiple binary operations. +static void VisitBinop(InstructionSelector* selector, Node* node, + InstructionCode opcode) { + FlagsContinuation cont; + VisitBinop(selector, node, opcode, &cont); +} + + +void InstructionSelector::VisitWord32And(Node* node) { + VisitBinop(this, node, kIA32And); +} + + +void InstructionSelector::VisitWord32Or(Node* node) { + VisitBinop(this, node, kIA32Or); +} + + +void InstructionSelector::VisitWord32Xor(Node* node) { + IA32OperandGenerator g(this); + Int32BinopMatcher m(node); + if (m.right().Is(-1)) { + Emit(kIA32Not, g.DefineSameAsFirst(node), g.Use(m.left().node())); + } else { + VisitBinop(this, node, kIA32Xor); + } +} + + +// Shared routine for multiple shift operations. +static inline void VisitShift(InstructionSelector* selector, Node* node, + ArchOpcode opcode) { + IA32OperandGenerator g(selector); + Node* left = node->InputAt(0); + Node* right = node->InputAt(1); + + // TODO(turbofan): assembler only supports some addressing modes for shifts. 
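+ // Variable shift counts must live in cl. Note also the masking below:
+ // IA32 shift instructions only use the low five bits of the count, so a
+ // JavaScript-style x << (y & 0x1F) can drop the explicit Word32And and
+ // feed y to the shift directly.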
+ if (g.CanBeImmediate(right)) { + selector->Emit(opcode, g.DefineSameAsFirst(node), g.UseRegister(left), + g.UseImmediate(right)); + } else { + Int32BinopMatcher m(node); + if (m.right().IsWord32And()) { + Int32BinopMatcher mright(right); + if (mright.right().Is(0x1F)) { + right = mright.left().node(); + } + } + selector->Emit(opcode, g.DefineSameAsFirst(node), g.UseRegister(left), + g.UseFixed(right, ecx)); + } +} + + +void InstructionSelector::VisitWord32Shl(Node* node) { + VisitShift(this, node, kIA32Shl); +} + + +void InstructionSelector::VisitWord32Shr(Node* node) { + VisitShift(this, node, kIA32Shr); +} + + +void InstructionSelector::VisitWord32Sar(Node* node) { + VisitShift(this, node, kIA32Sar); +} + + +void InstructionSelector::VisitInt32Add(Node* node) { + VisitBinop(this, node, kIA32Add); +} + + +void InstructionSelector::VisitInt32Sub(Node* node) { + IA32OperandGenerator g(this); + Int32BinopMatcher m(node); + if (m.left().Is(0)) { + Emit(kIA32Neg, g.DefineSameAsFirst(node), g.Use(m.right().node())); + } else { + VisitBinop(this, node, kIA32Sub); + } +} + + +void InstructionSelector::VisitInt32Mul(Node* node) { + IA32OperandGenerator g(this); + Node* left = node->InputAt(0); + Node* right = node->InputAt(1); + if (g.CanBeImmediate(right)) { + Emit(kIA32Imul, g.DefineAsRegister(node), g.Use(left), + g.UseImmediate(right)); + } else if (g.CanBeImmediate(left)) { + Emit(kIA32Imul, g.DefineAsRegister(node), g.Use(right), + g.UseImmediate(left)); + } else { + // TODO(turbofan): select better left operand. + Emit(kIA32Imul, g.DefineSameAsFirst(node), g.UseRegister(left), + g.Use(right)); + } +} + + +static inline void VisitDiv(InstructionSelector* selector, Node* node, + ArchOpcode opcode) { + IA32OperandGenerator g(selector); + InstructionOperand* temps[] = {g.TempRegister(edx)}; + size_t temp_count = ARRAY_SIZE(temps); + selector->Emit(opcode, g.DefineAsFixed(node, eax), + g.UseFixed(node->InputAt(0), eax), + g.UseUnique(node->InputAt(1)), temp_count, temps); +} + + +void InstructionSelector::VisitInt32Div(Node* node) { + VisitDiv(this, node, kIA32Idiv); +} + + +void InstructionSelector::VisitInt32UDiv(Node* node) { + VisitDiv(this, node, kIA32Udiv); +} + + +static inline void VisitMod(InstructionSelector* selector, Node* node, + ArchOpcode opcode) { + IA32OperandGenerator g(selector); + InstructionOperand* temps[] = {g.TempRegister(eax), g.TempRegister(edx)}; + size_t temp_count = ARRAY_SIZE(temps); + selector->Emit(opcode, g.DefineAsFixed(node, edx), + g.UseFixed(node->InputAt(0), eax), + g.UseUnique(node->InputAt(1)), temp_count, temps); +} + + +void InstructionSelector::VisitInt32Mod(Node* node) { + VisitMod(this, node, kIA32Idiv); +} + + +void InstructionSelector::VisitInt32UMod(Node* node) { + VisitMod(this, node, kIA32Udiv); +} + + +void InstructionSelector::VisitChangeInt32ToFloat64(Node* node) { + IA32OperandGenerator g(this); + Emit(kSSEInt32ToFloat64, g.DefineAsDoubleRegister(node), + g.Use(node->InputAt(0))); +} + + +void InstructionSelector::VisitChangeUint32ToFloat64(Node* node) { + IA32OperandGenerator g(this); + // TODO(turbofan): IA32 SSE LoadUint32() should take an operand. 
+ Emit(kSSEUint32ToFloat64, g.DefineAsDoubleRegister(node), + g.UseRegister(node->InputAt(0))); +} + + +void InstructionSelector::VisitChangeFloat64ToInt32(Node* node) { + IA32OperandGenerator g(this); + Emit(kSSEFloat64ToInt32, g.DefineAsRegister(node), g.Use(node->InputAt(0))); +} + + +void InstructionSelector::VisitChangeFloat64ToUint32(Node* node) { + IA32OperandGenerator g(this); + // TODO(turbofan): IA32 SSE subsd() should take an operand. + Emit(kSSEFloat64ToUint32, g.DefineAsRegister(node), + g.UseDoubleRegister(node->InputAt(0))); +} + + +void InstructionSelector::VisitFloat64Add(Node* node) { + IA32OperandGenerator g(this); + Emit(kSSEFloat64Add, g.DefineSameAsFirst(node), + g.UseDoubleRegister(node->InputAt(0)), + g.UseDoubleRegister(node->InputAt(1))); +} + + +void InstructionSelector::VisitFloat64Sub(Node* node) { + IA32OperandGenerator g(this); + Emit(kSSEFloat64Sub, g.DefineSameAsFirst(node), + g.UseDoubleRegister(node->InputAt(0)), + g.UseDoubleRegister(node->InputAt(1))); +} + + +void InstructionSelector::VisitFloat64Mul(Node* node) { + IA32OperandGenerator g(this); + Emit(kSSEFloat64Mul, g.DefineSameAsFirst(node), + g.UseDoubleRegister(node->InputAt(0)), + g.UseDoubleRegister(node->InputAt(1))); +} + + +void InstructionSelector::VisitFloat64Div(Node* node) { + IA32OperandGenerator g(this); + Emit(kSSEFloat64Div, g.DefineSameAsFirst(node), + g.UseDoubleRegister(node->InputAt(0)), + g.UseDoubleRegister(node->InputAt(1))); +} + + +void InstructionSelector::VisitFloat64Mod(Node* node) { + IA32OperandGenerator g(this); + InstructionOperand* temps[] = {g.TempRegister(eax)}; + Emit(kSSEFloat64Mod, g.DefineSameAsFirst(node), + g.UseDoubleRegister(node->InputAt(0)), + g.UseDoubleRegister(node->InputAt(1)), 1, temps); +} + + +void InstructionSelector::VisitInt32AddWithOverflow(Node* node, + FlagsContinuation* cont) { + VisitBinop(this, node, kIA32Add, cont); +} + + +void InstructionSelector::VisitInt32SubWithOverflow(Node* node, + FlagsContinuation* cont) { + VisitBinop(this, node, kIA32Sub, cont); +} + + +// Shared routine for multiple compare operations. +static inline void VisitCompare(InstructionSelector* selector, + InstructionCode opcode, + InstructionOperand* left, + InstructionOperand* right, + FlagsContinuation* cont) { + IA32OperandGenerator g(selector); + if (cont->IsBranch()) { + selector->Emit(cont->Encode(opcode), NULL, left, right, + g.Label(cont->true_block()), + g.Label(cont->false_block()))->MarkAsControl(); + } else { + DCHECK(cont->IsSet()); + // TODO(titzer): Needs byte register. + selector->Emit(cont->Encode(opcode), g.DefineAsRegister(cont->result()), + left, right); + } +} + + +// Shared routine for multiple word compare operations. +static inline void VisitWordCompare(InstructionSelector* selector, Node* node, + InstructionCode opcode, + FlagsContinuation* cont, bool commutative) { + IA32OperandGenerator g(selector); + Node* left = node->InputAt(0); + Node* right = node->InputAt(1); + + // Match immediates on left or right side of comparison. 
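+ // If only the left operand is an immediate, the operands are swapped,
+ // and for a non-commutative comparison the continuation's condition is
+ // commuted to preserve the meaning: 5 < x becomes cmp x, 5 with the
+ // condition flipped from "less than" to "greater than" (see
+ // FlagsContinuation::Commute). A test, being commutative, needs no flip.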
+ if (g.CanBeImmediate(right)) { + VisitCompare(selector, opcode, g.Use(left), g.UseImmediate(right), cont); + } else if (g.CanBeImmediate(left)) { + if (!commutative) cont->Commute(); + VisitCompare(selector, opcode, g.Use(right), g.UseImmediate(left), cont); + } else { + VisitCompare(selector, opcode, g.UseRegister(left), g.Use(right), cont); + } +} + + +void InstructionSelector::VisitWord32Test(Node* node, FlagsContinuation* cont) { + switch (node->opcode()) { + case IrOpcode::kInt32Sub: + return VisitWordCompare(this, node, kIA32Cmp, cont, false); + case IrOpcode::kWord32And: + return VisitWordCompare(this, node, kIA32Test, cont, true); + default: + break; + } + + IA32OperandGenerator g(this); + VisitCompare(this, kIA32Test, g.Use(node), g.TempImmediate(-1), cont); +} + + +void InstructionSelector::VisitWord32Compare(Node* node, + FlagsContinuation* cont) { + VisitWordCompare(this, node, kIA32Cmp, cont, false); +} + + +void InstructionSelector::VisitFloat64Compare(Node* node, + FlagsContinuation* cont) { + IA32OperandGenerator g(this); + Node* left = node->InputAt(0); + Node* right = node->InputAt(1); + VisitCompare(this, kSSEFloat64Cmp, g.UseDoubleRegister(left), g.Use(right), + cont); +} + + +void InstructionSelector::VisitCall(Node* call, BasicBlock* continuation, + BasicBlock* deoptimization) { + IA32OperandGenerator g(this); + CallDescriptor* descriptor = OpParameter<CallDescriptor*>(call); + CallBuffer buffer(zone(), descriptor); + + // Compute InstructionOperands for inputs and outputs. + InitializeCallBuffer(call, &buffer, true, true, continuation, deoptimization); + + // Push any stack arguments. + for (int i = buffer.pushed_count - 1; i >= 0; --i) { + Node* input = buffer.pushed_nodes[i]; + // TODO(titzer): handle pushing double parameters. + Emit(kIA32Push, NULL, + g.CanBeImmediate(input) ? g.UseImmediate(input) : g.Use(input)); + } + + // Select the appropriate opcode based on the call type. + InstructionCode opcode; + switch (descriptor->kind()) { + case CallDescriptor::kCallCodeObject: { + bool lazy_deopt = descriptor->CanLazilyDeoptimize(); + opcode = kIA32CallCodeObject | MiscField::encode(lazy_deopt ? 1 : 0); + break; + } + case CallDescriptor::kCallAddress: + opcode = kIA32CallAddress; + break; + case CallDescriptor::kCallJSFunction: + opcode = kIA32CallJSFunction; + break; + default: + UNREACHABLE(); + return; + } + + // Emit the call instruction. + Instruction* call_instr = + Emit(opcode, buffer.output_count, buffer.outputs, + buffer.fixed_and_control_count(), buffer.fixed_and_control_args); + + call_instr->MarkAsCall(); + if (deoptimization != NULL) { + DCHECK(continuation != NULL); + call_instr->MarkAsControl(); + } + + // Caller clean up of stack for C-style calls. + if (descriptor->kind() == CallDescriptor::kCallAddress && + buffer.pushed_count > 0) { + DCHECK(deoptimization == NULL && continuation == NULL); + Emit(kPopStack | MiscField::encode(buffer.pushed_count), NULL); + } +} + +} // namespace compiler +} // namespace internal +} // namespace v8 diff --git a/deps/v8/src/compiler/ia32/linkage-ia32.cc b/deps/v8/src/compiler/ia32/linkage-ia32.cc new file mode 100644 index 00000000000..57a2c6918a9 --- /dev/null +++ b/deps/v8/src/compiler/ia32/linkage-ia32.cc @@ -0,0 +1,63 @@ +// Copyright 2014 the V8 project authors. All rights reserved. +// Use of this source code is governed by a BSD-style license that can be +// found in the LICENSE file. 
+ +#include "src/v8.h" + +#include "src/assembler.h" +#include "src/code-stubs.h" +#include "src/compiler/linkage.h" +#include "src/compiler/linkage-impl.h" +#include "src/zone.h" + +namespace v8 { +namespace internal { +namespace compiler { + +struct LinkageHelperTraits { + static Register ReturnValueReg() { return eax; } + static Register ReturnValue2Reg() { return edx; } + static Register JSCallFunctionReg() { return edi; } + static Register ContextReg() { return esi; } + static Register RuntimeCallFunctionReg() { return ebx; } + static Register RuntimeCallArgCountReg() { return eax; } + static RegList CCalleeSaveRegisters() { + return esi.bit() | edi.bit() | ebx.bit(); + } + static Register CRegisterParameter(int i) { return no_reg; } + static int CRegisterParametersLength() { return 0; } +}; + + +CallDescriptor* Linkage::GetJSCallDescriptor(int parameter_count, Zone* zone) { + return LinkageHelper::GetJSCallDescriptor<LinkageHelperTraits>( + zone, parameter_count); +} + + +CallDescriptor* Linkage::GetRuntimeCallDescriptor( + Runtime::FunctionId function, int parameter_count, + Operator::Property properties, + CallDescriptor::DeoptimizationSupport can_deoptimize, Zone* zone) { + return LinkageHelper::GetRuntimeCallDescriptor<LinkageHelperTraits>( + zone, function, parameter_count, properties, can_deoptimize); +} + + +CallDescriptor* Linkage::GetStubCallDescriptor( + CodeStubInterfaceDescriptor* descriptor, int stack_parameter_count, + CallDescriptor::DeoptimizationSupport can_deoptimize, Zone* zone) { + return LinkageHelper::GetStubCallDescriptor<LinkageHelperTraits>( + zone, descriptor, stack_parameter_count, can_deoptimize); +} + + +CallDescriptor* Linkage::GetSimplifiedCDescriptor( + Zone* zone, int num_params, MachineType return_type, + const MachineType* param_types) { + return LinkageHelper::GetSimplifiedCDescriptor<LinkageHelperTraits>( + zone, num_params, return_type, param_types); +} +} +} +} // namespace v8::internal::compiler diff --git a/deps/v8/src/compiler/instruction-codes.h b/deps/v8/src/compiler/instruction-codes.h new file mode 100644 index 00000000000..35c8e31f27f --- /dev/null +++ b/deps/v8/src/compiler/instruction-codes.h @@ -0,0 +1,117 @@ +// Copyright 2014 the V8 project authors. All rights reserved. +// Use of this source code is governed by a BSD-style license that can be +// found in the LICENSE file. + +#ifndef V8_COMPILER_INSTRUCTION_CODES_H_ +#define V8_COMPILER_INSTRUCTION_CODES_H_ + +#if V8_TARGET_ARCH_ARM +#include "src/compiler/arm/instruction-codes-arm.h" +#elif V8_TARGET_ARCH_ARM64 +#include "src/compiler/arm64/instruction-codes-arm64.h" +#elif V8_TARGET_ARCH_IA32 +#include "src/compiler/ia32/instruction-codes-ia32.h" +#elif V8_TARGET_ARCH_X64 +#include "src/compiler/x64/instruction-codes-x64.h" +#else +#define TARGET_ARCH_OPCODE_LIST(V) +#define TARGET_ADDRESSING_MODE_LIST(V) +#endif +#include "src/utils.h" + +namespace v8 { +namespace internal { + +class OStream; + +namespace compiler { + +// Target-specific opcodes that specify which assembly sequence to emit. +// Most opcodes specify a single instruction. 
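+// These lists use the X-macro pattern: the list takes a macro V and applies
+// it to every entry, so one definition can generate the enum, its printing
+// code, and an entry count. Instantiating the list below with
+//   #define DECLARE_ARCH_OPCODE(Name) k##Name,
+// expands to kArchDeoptimize, kArchJmp, ..., while
+//   #define COUNT_ARCH_OPCODE(Name) +1
+// rewrites it as "-1 +1 +1 ..." to compute kLastArchOpcode.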
+#define ARCH_OPCODE_LIST(V) \ + V(ArchDeoptimize) \ + V(ArchJmp) \ + V(ArchNop) \ + V(ArchRet) \ + TARGET_ARCH_OPCODE_LIST(V) + +enum ArchOpcode { +#define DECLARE_ARCH_OPCODE(Name) k##Name, + ARCH_OPCODE_LIST(DECLARE_ARCH_OPCODE) +#undef DECLARE_ARCH_OPCODE +#define COUNT_ARCH_OPCODE(Name) +1 + kLastArchOpcode = -1 ARCH_OPCODE_LIST(COUNT_ARCH_OPCODE) +#undef COUNT_ARCH_OPCODE +}; + +OStream& operator<<(OStream& os, const ArchOpcode& ao); + +// Addressing modes represent the "shape" of inputs to an instruction. +// Many instructions support multiple addressing modes. Addressing modes +// are encoded into the InstructionCode of the instruction and tell the +// code generator after register allocation which assembler method to call. +#define ADDRESSING_MODE_LIST(V) \ + V(None) \ + TARGET_ADDRESSING_MODE_LIST(V) + +enum AddressingMode { +#define DECLARE_ADDRESSING_MODE(Name) kMode_##Name, + ADDRESSING_MODE_LIST(DECLARE_ADDRESSING_MODE) +#undef DECLARE_ADDRESSING_MODE +#define COUNT_ADDRESSING_MODE(Name) +1 + kLastAddressingMode = -1 ADDRESSING_MODE_LIST(COUNT_ADDRESSING_MODE) +#undef COUNT_ADDRESSING_MODE +}; + +OStream& operator<<(OStream& os, const AddressingMode& am); + +// The mode of the flags continuation (see below). +enum FlagsMode { kFlags_none = 0, kFlags_branch = 1, kFlags_set = 2 }; + +OStream& operator<<(OStream& os, const FlagsMode& fm); + +// The condition of flags continuation (see below). +enum FlagsCondition { + kEqual, + kNotEqual, + kSignedLessThan, + kSignedGreaterThanOrEqual, + kSignedLessThanOrEqual, + kSignedGreaterThan, + kUnsignedLessThan, + kUnsignedGreaterThanOrEqual, + kUnsignedLessThanOrEqual, + kUnsignedGreaterThan, + kUnorderedEqual, + kUnorderedNotEqual, + kUnorderedLessThan, + kUnorderedGreaterThanOrEqual, + kUnorderedLessThanOrEqual, + kUnorderedGreaterThan, + kOverflow, + kNotOverflow +}; + +OStream& operator<<(OStream& os, const FlagsCondition& fc); + +// The InstructionCode is an opaque, target-specific integer that encodes +// what code to emit for an instruction in the code generator. It is not +// interesting to the register allocator, as the inputs and flags on the +// instructions specify everything of interest. +typedef int32_t InstructionCode; + +// Helpers for encoding / decoding InstructionCode into the fields needed +// for code generation. We encode the instruction, addressing mode, and flags +// continuation into a single InstructionCode which is stored as part of +// the instruction. +typedef BitField<ArchOpcode, 0, 7> ArchOpcodeField; +typedef BitField<AddressingMode, 7, 4> AddressingModeField; +typedef BitField<FlagsMode, 11, 2> FlagsModeField; +typedef BitField<FlagsCondition, 13, 5> FlagsConditionField; +typedef BitField<int, 13, 19> MiscField; + +} // namespace compiler +} // namespace internal +} // namespace v8 + +#endif // V8_COMPILER_INSTRUCTION_CODES_H_ diff --git a/deps/v8/src/compiler/instruction-selector-impl.h b/deps/v8/src/compiler/instruction-selector-impl.h new file mode 100644 index 00000000000..ac446b38ed8 --- /dev/null +++ b/deps/v8/src/compiler/instruction-selector-impl.h @@ -0,0 +1,371 @@ +// Copyright 2014 the V8 project authors. All rights reserved. +// Use of this source code is governed by a BSD-style license that can be +// found in the LICENSE file. 
+ +#ifndef V8_COMPILER_INSTRUCTION_SELECTOR_IMPL_H_ +#define V8_COMPILER_INSTRUCTION_SELECTOR_IMPL_H_ + +#include "src/compiler/instruction.h" +#include "src/compiler/instruction-selector.h" +#include "src/compiler/linkage.h" + +namespace v8 { +namespace internal { +namespace compiler { + +// A helper class for the instruction selector that simplifies construction of +// Operands. This class implements a base for architecture-specific helpers. +class OperandGenerator { + public: + explicit OperandGenerator(InstructionSelector* selector) + : selector_(selector) {} + + InstructionOperand* DefineAsRegister(Node* node) { + return Define(node, new (zone()) + UnallocatedOperand(UnallocatedOperand::MUST_HAVE_REGISTER)); + } + + InstructionOperand* DefineAsDoubleRegister(Node* node) { + return Define(node, new (zone()) + UnallocatedOperand(UnallocatedOperand::MUST_HAVE_REGISTER)); + } + + InstructionOperand* DefineSameAsFirst(Node* result) { + return Define(result, new (zone()) + UnallocatedOperand(UnallocatedOperand::SAME_AS_FIRST_INPUT)); + } + + InstructionOperand* DefineAsFixed(Node* node, Register reg) { + return Define(node, new (zone()) + UnallocatedOperand(UnallocatedOperand::FIXED_REGISTER, + Register::ToAllocationIndex(reg))); + } + + InstructionOperand* DefineAsFixedDouble(Node* node, DoubleRegister reg) { + return Define(node, new (zone()) + UnallocatedOperand(UnallocatedOperand::FIXED_DOUBLE_REGISTER, + DoubleRegister::ToAllocationIndex(reg))); + } + + InstructionOperand* DefineAsConstant(Node* node) { + selector()->MarkAsDefined(node); + sequence()->AddConstant(node->id(), ToConstant(node)); + return ConstantOperand::Create(node->id(), zone()); + } + + InstructionOperand* DefineAsLocation(Node* node, LinkageLocation location) { + return Define(node, ToUnallocatedOperand(location)); + } + + InstructionOperand* Use(Node* node) { + return Use(node, + new (zone()) UnallocatedOperand( + UnallocatedOperand::ANY, UnallocatedOperand::USED_AT_START)); + } + + InstructionOperand* UseRegister(Node* node) { + return Use(node, new (zone()) + UnallocatedOperand(UnallocatedOperand::MUST_HAVE_REGISTER, + UnallocatedOperand::USED_AT_START)); + } + + InstructionOperand* UseDoubleRegister(Node* node) { + return Use(node, new (zone()) + UnallocatedOperand(UnallocatedOperand::MUST_HAVE_REGISTER, + UnallocatedOperand::USED_AT_START)); + } + + // Use register or operand for the node. If a register is chosen, it won't + // alias any temporary or output registers. + InstructionOperand* UseUnique(Node* node) { + return Use(node, new (zone()) UnallocatedOperand(UnallocatedOperand::ANY)); + } + + // Use a unique register for the node that does not alias any temporary or + // output registers. + InstructionOperand* UseUniqueRegister(Node* node) { + return Use(node, new (zone()) + UnallocatedOperand(UnallocatedOperand::MUST_HAVE_REGISTER)); + } + + // Use a unique double register for the node that does not alias any temporary + // or output double registers. 
+ InstructionOperand* UseUniqueDoubleRegister(Node* node) { + return Use(node, new (zone()) + UnallocatedOperand(UnallocatedOperand::MUST_HAVE_REGISTER)); + } + + InstructionOperand* UseFixed(Node* node, Register reg) { + return Use(node, new (zone()) + UnallocatedOperand(UnallocatedOperand::FIXED_REGISTER, + Register::ToAllocationIndex(reg))); + } + + InstructionOperand* UseFixedDouble(Node* node, DoubleRegister reg) { + return Use(node, new (zone()) + UnallocatedOperand(UnallocatedOperand::FIXED_DOUBLE_REGISTER, + DoubleRegister::ToAllocationIndex(reg))); + } + + InstructionOperand* UseImmediate(Node* node) { + int index = sequence()->AddImmediate(ToConstant(node)); + return ImmediateOperand::Create(index, zone()); + } + + InstructionOperand* UseLocation(Node* node, LinkageLocation location) { + return Use(node, ToUnallocatedOperand(location)); + } + + InstructionOperand* TempRegister() { + UnallocatedOperand* op = + new (zone()) UnallocatedOperand(UnallocatedOperand::MUST_HAVE_REGISTER, + UnallocatedOperand::USED_AT_START); + op->set_virtual_register(sequence()->NextVirtualRegister()); + return op; + } + + InstructionOperand* TempDoubleRegister() { + UnallocatedOperand* op = + new (zone()) UnallocatedOperand(UnallocatedOperand::MUST_HAVE_REGISTER, + UnallocatedOperand::USED_AT_START); + op->set_virtual_register(sequence()->NextVirtualRegister()); + sequence()->MarkAsDouble(op->virtual_register()); + return op; + } + + InstructionOperand* TempRegister(Register reg) { + return new (zone()) UnallocatedOperand(UnallocatedOperand::FIXED_REGISTER, + Register::ToAllocationIndex(reg)); + } + + InstructionOperand* TempImmediate(int32_t imm) { + int index = sequence()->AddImmediate(Constant(imm)); + return ImmediateOperand::Create(index, zone()); + } + + InstructionOperand* Label(BasicBlock* block) { + // TODO(bmeurer): We misuse ImmediateOperand here. 
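+ // There is no dedicated operand kind for branch targets at this stage,
+ // so a basic block is smuggled through instruction selection as an
+ // immediate carrying its block id; the code generator later maps that id
+ // back to an assembler Label (see InputBlock/GetLabel above).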
+ return TempImmediate(block->id()); + } + + protected: + Graph* graph() const { return selector()->graph(); } + InstructionSelector* selector() const { return selector_; } + InstructionSequence* sequence() const { return selector()->sequence(); } + Isolate* isolate() const { return zone()->isolate(); } + Zone* zone() const { return selector()->instruction_zone(); } + + private: + static Constant ToConstant(const Node* node) { + switch (node->opcode()) { + case IrOpcode::kInt32Constant: + return Constant(ValueOf<int32_t>(node->op())); + case IrOpcode::kInt64Constant: + return Constant(ValueOf<int64_t>(node->op())); + case IrOpcode::kNumberConstant: + case IrOpcode::kFloat64Constant: + return Constant(ValueOf<double>(node->op())); + case IrOpcode::kExternalConstant: + return Constant(ValueOf<ExternalReference>(node->op())); + case IrOpcode::kHeapConstant: + return Constant(ValueOf<Handle<HeapObject> >(node->op())); + default: + break; + } + UNREACHABLE(); + return Constant(static_cast<int32_t>(0)); + } + + UnallocatedOperand* Define(Node* node, UnallocatedOperand* operand) { + DCHECK_NOT_NULL(node); + DCHECK_NOT_NULL(operand); + operand->set_virtual_register(node->id()); + selector()->MarkAsDefined(node); + return operand; + } + + UnallocatedOperand* Use(Node* node, UnallocatedOperand* operand) { + DCHECK_NOT_NULL(node); + DCHECK_NOT_NULL(operand); + operand->set_virtual_register(node->id()); + selector()->MarkAsUsed(node); + return operand; + } + + UnallocatedOperand* ToUnallocatedOperand(LinkageLocation location) { + if (location.location_ == LinkageLocation::ANY_REGISTER) { + return new (zone()) + UnallocatedOperand(UnallocatedOperand::MUST_HAVE_REGISTER); + } + if (location.location_ < 0) { + return new (zone()) UnallocatedOperand(UnallocatedOperand::FIXED_SLOT, + location.location_); + } + if (location.rep_ == kMachineFloat64) { + return new (zone()) UnallocatedOperand( + UnallocatedOperand::FIXED_DOUBLE_REGISTER, location.location_); + } + return new (zone()) UnallocatedOperand(UnallocatedOperand::FIXED_REGISTER, + location.location_); + } + + InstructionSelector* selector_; +}; + + +// The flags continuation is a way to combine a branch or a materialization +// of a boolean value with an instruction that sets the flags register. +// The whole instruction is treated as a unit by the register allocator, and +// thus no spills or moves can be introduced between the flags-setting +// instruction and the branch or set it should be combined with. +class FlagsContinuation V8_FINAL { + public: + FlagsContinuation() : mode_(kFlags_none) {} + + // Creates a new flags continuation from the given condition and true/false + // blocks. + FlagsContinuation(FlagsCondition condition, BasicBlock* true_block, + BasicBlock* false_block) + : mode_(kFlags_branch), + condition_(condition), + true_block_(true_block), + false_block_(false_block) { + DCHECK_NOT_NULL(true_block); + DCHECK_NOT_NULL(false_block); + } + + // Creates a new flags continuation from the given condition and result node. 
+ FlagsContinuation(FlagsCondition condition, Node* result) + : mode_(kFlags_set), condition_(condition), result_(result) { + DCHECK_NOT_NULL(result); + } + + bool IsNone() const { return mode_ == kFlags_none; } + bool IsBranch() const { return mode_ == kFlags_branch; } + bool IsSet() const { return mode_ == kFlags_set; } + FlagsCondition condition() const { + DCHECK(!IsNone()); + return condition_; + } + Node* result() const { + DCHECK(IsSet()); + return result_; + } + BasicBlock* true_block() const { + DCHECK(IsBranch()); + return true_block_; + } + BasicBlock* false_block() const { + DCHECK(IsBranch()); + return false_block_; + } + + void Negate() { + DCHECK(!IsNone()); + condition_ = static_cast<FlagsCondition>(condition_ ^ 1); + } + + void Commute() { + DCHECK(!IsNone()); + switch (condition_) { + case kEqual: + case kNotEqual: + case kOverflow: + case kNotOverflow: + return; + case kSignedLessThan: + condition_ = kSignedGreaterThan; + return; + case kSignedGreaterThanOrEqual: + condition_ = kSignedLessThanOrEqual; + return; + case kSignedLessThanOrEqual: + condition_ = kSignedGreaterThanOrEqual; + return; + case kSignedGreaterThan: + condition_ = kSignedLessThan; + return; + case kUnsignedLessThan: + condition_ = kUnsignedGreaterThan; + return; + case kUnsignedGreaterThanOrEqual: + condition_ = kUnsignedLessThanOrEqual; + return; + case kUnsignedLessThanOrEqual: + condition_ = kUnsignedGreaterThanOrEqual; + return; + case kUnsignedGreaterThan: + condition_ = kUnsignedLessThan; + return; + case kUnorderedEqual: + case kUnorderedNotEqual: + return; + case kUnorderedLessThan: + condition_ = kUnorderedGreaterThan; + return; + case kUnorderedGreaterThanOrEqual: + condition_ = kUnorderedLessThanOrEqual; + return; + case kUnorderedLessThanOrEqual: + condition_ = kUnorderedGreaterThanOrEqual; + return; + case kUnorderedGreaterThan: + condition_ = kUnorderedLessThan; + return; + } + UNREACHABLE(); + } + + void OverwriteAndNegateIfEqual(FlagsCondition condition) { + bool negate = condition_ == kEqual; + condition_ = condition; + if (negate) Negate(); + } + + void SwapBlocks() { std::swap(true_block_, false_block_); } + + // Encodes this flags continuation into the given opcode. + InstructionCode Encode(InstructionCode opcode) { + opcode |= FlagsModeField::encode(mode_); + if (mode_ != kFlags_none) { + opcode |= FlagsConditionField::encode(condition_); + } + return opcode; + } + + private: + FlagsMode mode_; + FlagsCondition condition_; + Node* result_; // Only valid if mode_ == kFlags_set. + BasicBlock* true_block_; // Only valid if mode_ == kFlags_branch. + BasicBlock* false_block_; // Only valid if mode_ == kFlags_branch. +}; + + +// An internal helper class for generating the operands to calls. +// TODO(bmeurer): Get rid of the CallBuffer business and make +// InstructionSelector::VisitCall platform independent instead. +struct CallBuffer { + CallBuffer(Zone* zone, CallDescriptor* descriptor); + + int output_count; + CallDescriptor* descriptor; + Node** output_nodes; + InstructionOperand** outputs; + InstructionOperand** fixed_and_control_args; + int fixed_count; + Node** pushed_nodes; + int pushed_count; + + int input_count() { return descriptor->InputCount(); } + + int control_count() { return descriptor->CanLazilyDeoptimize() ? 
2 : 0; } + + int fixed_and_control_count() { return fixed_count + control_count(); } +}; + +} // namespace compiler +} // namespace internal +} // namespace v8 + +#endif // V8_COMPILER_INSTRUCTION_SELECTOR_IMPL_H_ diff --git a/deps/v8/src/compiler/instruction-selector.cc b/deps/v8/src/compiler/instruction-selector.cc new file mode 100644 index 00000000000..541e0452fa9 --- /dev/null +++ b/deps/v8/src/compiler/instruction-selector.cc @@ -0,0 +1,1053 @@ +// Copyright 2014 the V8 project authors. All rights reserved. +// Use of this source code is governed by a BSD-style license that can be +// found in the LICENSE file. + +#include "src/compiler/instruction-selector.h" + +#include "src/compiler/instruction-selector-impl.h" +#include "src/compiler/node-matchers.h" +#include "src/compiler/node-properties-inl.h" +#include "src/compiler/pipeline.h" + +namespace v8 { +namespace internal { +namespace compiler { + +InstructionSelector::InstructionSelector(InstructionSequence* sequence, + SourcePositionTable* source_positions, + Features features) + : zone_(sequence->isolate()), + sequence_(sequence), + source_positions_(source_positions), + features_(features), + current_block_(NULL), + instructions_(InstructionDeque::allocator_type(zone())), + defined_(graph()->NodeCount(), false, BoolVector::allocator_type(zone())), + used_(graph()->NodeCount(), false, BoolVector::allocator_type(zone())) {} + + +void InstructionSelector::SelectInstructions() { + // Mark the inputs of all phis in loop headers as used. + BasicBlockVector* blocks = schedule()->rpo_order(); + for (BasicBlockVectorIter i = blocks->begin(); i != blocks->end(); ++i) { + BasicBlock* block = *i; + if (!block->IsLoopHeader()) continue; + DCHECK_NE(0, block->PredecessorCount()); + DCHECK_NE(1, block->PredecessorCount()); + for (BasicBlock::const_iterator j = block->begin(); j != block->end(); + ++j) { + Node* phi = *j; + if (phi->opcode() != IrOpcode::kPhi) continue; + + // Mark all inputs as used. + Node::Inputs inputs = phi->inputs(); + for (InputIter k = inputs.begin(); k != inputs.end(); ++k) { + MarkAsUsed(*k); + } + } + } + + // Visit each basic block in post order. + for (BasicBlockVectorRIter i = blocks->rbegin(); i != blocks->rend(); ++i) { + VisitBlock(*i); + } + + // Schedule the selected instructions. + for (BasicBlockVectorIter i = blocks->begin(); i != blocks->end(); ++i) { + BasicBlock* block = *i; + size_t end = block->code_end_; + size_t start = block->code_start_; + sequence()->StartBlock(block); + while (start-- > end) { + sequence()->AddInstruction(instructions_[start], block); + } + sequence()->EndBlock(block); + } +} + + +Instruction* InstructionSelector::Emit(InstructionCode opcode, + InstructionOperand* output, + size_t temp_count, + InstructionOperand** temps) { + size_t output_count = output == NULL ? 0 : 1; + return Emit(opcode, output_count, &output, 0, NULL, temp_count, temps); +} + + +Instruction* InstructionSelector::Emit(InstructionCode opcode, + InstructionOperand* output, + InstructionOperand* a, size_t temp_count, + InstructionOperand** temps) { + size_t output_count = output == NULL ? 0 : 1; + return Emit(opcode, output_count, &output, 1, &a, temp_count, temps); +} + + +Instruction* InstructionSelector::Emit(InstructionCode opcode, + InstructionOperand* output, + InstructionOperand* a, + InstructionOperand* b, size_t temp_count, + InstructionOperand** temps) { + size_t output_count = output == NULL ? 
0 : 1; + InstructionOperand* inputs[] = {a, b}; + size_t input_count = ARRAY_SIZE(inputs); + return Emit(opcode, output_count, &output, input_count, inputs, temp_count, + temps); +} + + +Instruction* InstructionSelector::Emit(InstructionCode opcode, + InstructionOperand* output, + InstructionOperand* a, + InstructionOperand* b, + InstructionOperand* c, size_t temp_count, + InstructionOperand** temps) { + size_t output_count = output == NULL ? 0 : 1; + InstructionOperand* inputs[] = {a, b, c}; + size_t input_count = ARRAY_SIZE(inputs); + return Emit(opcode, output_count, &output, input_count, inputs, temp_count, + temps); +} + + +Instruction* InstructionSelector::Emit( + InstructionCode opcode, InstructionOperand* output, InstructionOperand* a, + InstructionOperand* b, InstructionOperand* c, InstructionOperand* d, + size_t temp_count, InstructionOperand** temps) { + size_t output_count = output == NULL ? 0 : 1; + InstructionOperand* inputs[] = {a, b, c, d}; + size_t input_count = ARRAY_SIZE(inputs); + return Emit(opcode, output_count, &output, input_count, inputs, temp_count, + temps); +} + + +Instruction* InstructionSelector::Emit( + InstructionCode opcode, size_t output_count, InstructionOperand** outputs, + size_t input_count, InstructionOperand** inputs, size_t temp_count, + InstructionOperand** temps) { + Instruction* instr = + Instruction::New(instruction_zone(), opcode, output_count, outputs, + input_count, inputs, temp_count, temps); + return Emit(instr); +} + + +Instruction* InstructionSelector::Emit(Instruction* instr) { + instructions_.push_back(instr); + return instr; +} + + +bool InstructionSelector::IsNextInAssemblyOrder(const BasicBlock* block) const { + return block->rpo_number_ == (current_block_->rpo_number_ + 1) && + block->deferred_ == current_block_->deferred_; +} + + +bool InstructionSelector::CanCover(Node* user, Node* node) const { + return node->OwnedBy(user) && + schedule()->block(node) == schedule()->block(user); +} + + +bool InstructionSelector::IsDefined(Node* node) const { + DCHECK_NOT_NULL(node); + NodeId id = node->id(); + DCHECK(id >= 0); + DCHECK(id < static_cast<NodeId>(defined_.size())); + return defined_[id]; +} + + +void InstructionSelector::MarkAsDefined(Node* node) { + DCHECK_NOT_NULL(node); + NodeId id = node->id(); + DCHECK(id >= 0); + DCHECK(id < static_cast<NodeId>(defined_.size())); + defined_[id] = true; +} + + +bool InstructionSelector::IsUsed(Node* node) const { + if (!node->op()->HasProperty(Operator::kEliminatable)) return true; + NodeId id = node->id(); + DCHECK(id >= 0); + DCHECK(id < static_cast<NodeId>(used_.size())); + return used_[id]; +} + + +void InstructionSelector::MarkAsUsed(Node* node) { + DCHECK_NOT_NULL(node); + NodeId id = node->id(); + DCHECK(id >= 0); + DCHECK(id < static_cast<NodeId>(used_.size())); + used_[id] = true; +} + + +bool InstructionSelector::IsDouble(const Node* node) const { + DCHECK_NOT_NULL(node); + return sequence()->IsDouble(node->id()); +} + + +void InstructionSelector::MarkAsDouble(Node* node) { + DCHECK_NOT_NULL(node); + DCHECK(!IsReference(node)); + sequence()->MarkAsDouble(node->id()); + + // Propagate "doubleness" throughout phis. 
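CanCover above is the gate for all of the pattern matching in this file: a node may be folded into the instruction emitted for its user only when nothing else consumes it and no block boundary intervenes. A minimal model of that check, with assumed node shapes:

#include <cstdio>
#include <vector>

// Assumed shape: a node knows which basic block it lives in and its use edges.
struct Node {
  int block;
  std::vector<const Node*> uses;
  bool OwnedBy(const Node* user) const {
    return uses.size() == 1 && uses[0] == user;
  }
};

// {node} can be matched into the same machine instruction as {user} only if
// {user} holds the node's only use edge and both sit in the same block.
bool CanCover(const Node* user, const Node* node) {
  return node->OwnedBy(user) && node->block == user->block;
}

int main() {
  Node branch{0, {}}, other{0, {}}, cmp{0, {}};
  cmp.uses = {&branch};
  printf("%d\n", CanCover(&branch, &cmp));  // 1: sole use, same block
  cmp.uses.push_back(&other);
  printf("%d\n", CanCover(&branch, &cmp));  // 0: a second use forbids folding
  return 0;
}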
+ for (UseIter i = node->uses().begin(); i != node->uses().end(); ++i) { + Node* user = *i; + if (user->opcode() != IrOpcode::kPhi) continue; + if (IsDouble(user)) continue; + MarkAsDouble(user); + } +} + + +bool InstructionSelector::IsReference(const Node* node) const { + DCHECK_NOT_NULL(node); + return sequence()->IsReference(node->id()); +} + + +void InstructionSelector::MarkAsReference(Node* node) { + DCHECK_NOT_NULL(node); + DCHECK(!IsDouble(node)); + sequence()->MarkAsReference(node->id()); + + // Propagate "referenceness" throughout phis. + for (UseIter i = node->uses().begin(); i != node->uses().end(); ++i) { + Node* user = *i; + if (user->opcode() != IrOpcode::kPhi) continue; + if (IsReference(user)) continue; + MarkAsReference(user); + } +} + + +void InstructionSelector::MarkAsRepresentation(MachineType rep, Node* node) { + DCHECK_NOT_NULL(node); + if (rep == kMachineFloat64) MarkAsDouble(node); + if (rep == kMachineTagged) MarkAsReference(node); +} + + +// TODO(bmeurer): Get rid of the CallBuffer business and make +// InstructionSelector::VisitCall platform independent instead. +CallBuffer::CallBuffer(Zone* zone, CallDescriptor* d) + : output_count(0), + descriptor(d), + output_nodes(zone->NewArray<Node*>(d->ReturnCount())), + outputs(zone->NewArray<InstructionOperand*>(d->ReturnCount())), + fixed_and_control_args( + zone->NewArray<InstructionOperand*>(input_count() + control_count())), + fixed_count(0), + pushed_nodes(zone->NewArray<Node*>(input_count())), + pushed_count(0) { + if (d->ReturnCount() > 1) { + memset(output_nodes, 0, sizeof(Node*) * d->ReturnCount()); // NOLINT + } + memset(pushed_nodes, 0, sizeof(Node*) * input_count()); // NOLINT +} + + +// TODO(bmeurer): Get rid of the CallBuffer business and make +// InstructionSelector::VisitCall platform independent instead. +void InstructionSelector::InitializeCallBuffer(Node* call, CallBuffer* buffer, + bool call_code_immediate, + bool call_address_immediate, + BasicBlock* cont_node, + BasicBlock* deopt_node) { + OperandGenerator g(this); + DCHECK_EQ(call->op()->OutputCount(), buffer->descriptor->ReturnCount()); + DCHECK_EQ(OperatorProperties::GetValueInputCount(call->op()), + buffer->input_count()); + + if (buffer->descriptor->ReturnCount() > 0) { + // Collect the projections that represent multiple outputs from this call. + if (buffer->descriptor->ReturnCount() == 1) { + buffer->output_nodes[0] = call; + } else { + call->CollectProjections(buffer->descriptor->ReturnCount(), + buffer->output_nodes); + } + + // Filter out the outputs that aren't live because no projection uses them. + for (int i = 0; i < buffer->descriptor->ReturnCount(); i++) { + if (buffer->output_nodes[i] != NULL) { + Node* output = buffer->output_nodes[i]; + LinkageLocation location = buffer->descriptor->GetReturnLocation(i); + MarkAsRepresentation(location.representation(), output); + buffer->outputs[buffer->output_count++] = + g.DefineAsLocation(output, location); + } + } + } + + buffer->fixed_count = 1; // First argument is always the callee. + Node* callee = call->InputAt(0); + switch (buffer->descriptor->kind()) { + case CallDescriptor::kCallCodeObject: + buffer->fixed_and_control_args[0] = + (call_code_immediate && callee->opcode() == IrOpcode::kHeapConstant) + ? g.UseImmediate(callee) + : g.UseRegister(callee); + break; + case CallDescriptor::kCallAddress: + buffer->fixed_and_control_args[0] = + (call_address_immediate && + (callee->opcode() == IrOpcode::kInt32Constant || + callee->opcode() == IrOpcode::kInt64Constant)) + ? 
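MarkAsDouble and MarkAsReference above spread a representation flag forward through phis; the IsDouble/IsReference re-check is what stops the recursion when a loop phi feeds itself. A self-contained model of that fixpoint, with assumed node shapes:

#include <set>
#include <vector>

// Assumed shapes: nodes carry an id, a phi flag, and their use edges.
struct Node {
  int id;
  bool is_phi;
  std::vector<Node*> uses;
};

std::set<int> doubles;  // stands in for the InstructionSequence's vreg set

// Mark {node} and propagate to phi users; insert() failing on an already
// marked id terminates the recursion, even around phi cycles.
void MarkAsDouble(Node* node) {
  if (!doubles.insert(node->id).second) return;  // already marked
  for (Node* user : node->uses)
    if (user->is_phi) MarkAsDouble(user);
}

int main() {
  Node a{0, false, {}}, phi{1, true, {}};
  a.uses = {&phi};
  phi.uses = {&phi};  // a loop phi that uses itself: must still terminate
  MarkAsDouble(&a);
  return doubles.count(1) ? 0 : 1;  // the phi inherited the marking
}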
g.UseImmediate(callee) + : g.UseRegister(callee); + break; + case CallDescriptor::kCallJSFunction: + buffer->fixed_and_control_args[0] = + g.UseLocation(callee, buffer->descriptor->GetInputLocation(0)); + break; + } + + int input_count = buffer->input_count(); + + // Split the arguments into pushed_nodes and fixed_args. Pushed arguments + // require an explicit push instruction before the call and do not appear + // as arguments to the call. Everything else ends up as an InstructionOperand + // argument to the call. + InputIter iter(call->inputs().begin()); + for (int index = 0; index < input_count; ++iter, ++index) { + DCHECK(iter != call->inputs().end()); + DCHECK(index == iter.index()); + if (index == 0) continue; // The first argument (callee) is already done. + InstructionOperand* op = + g.UseLocation(*iter, buffer->descriptor->GetInputLocation(index)); + if (UnallocatedOperand::cast(op)->HasFixedSlotPolicy()) { + int stack_index = -UnallocatedOperand::cast(op)->fixed_slot_index() - 1; + DCHECK(buffer->pushed_nodes[stack_index] == NULL); + buffer->pushed_nodes[stack_index] = *iter; + buffer->pushed_count++; + } else { + buffer->fixed_and_control_args[buffer->fixed_count] = op; + buffer->fixed_count++; + } + } + + // If the call can deoptimize, we add the continuation and deoptimization + // block labels. + if (buffer->descriptor->CanLazilyDeoptimize()) { + DCHECK(cont_node != NULL); + DCHECK(deopt_node != NULL); + buffer->fixed_and_control_args[buffer->fixed_count] = g.Label(cont_node); + buffer->fixed_and_control_args[buffer->fixed_count + 1] = + g.Label(deopt_node); + } else { + DCHECK(cont_node == NULL); + DCHECK(deopt_node == NULL); + } + + DCHECK(input_count == (buffer->fixed_count + buffer->pushed_count)); +} + + +void InstructionSelector::VisitBlock(BasicBlock* block) { + DCHECK_EQ(NULL, current_block_); + current_block_ = block; + int current_block_end = static_cast<int>(instructions_.size()); + + // Generate code for the block control "top down", but schedule the code + // "bottom up". + VisitControl(block); + std::reverse(instructions_.begin() + current_block_end, instructions_.end()); + + // Visit code in reverse control flow order, because architecture-specific + // matching may cover more than one node at a time. + for (BasicBlock::reverse_iterator i = block->rbegin(); i != block->rend(); + ++i) { + Node* node = *i; + // Skip nodes that are unused or already defined. + if (!IsUsed(node) || IsDefined(node)) continue; + // Generate code for this node "top down", but schedule the code "bottom + // up". + size_t current_node_end = instructions_.size(); + VisitNode(node); + std::reverse(instructions_.begin() + current_node_end, instructions_.end()); + } + + // We're done with the block. + // TODO(bmeurer): We should not mutate the schedule. + block->code_end_ = current_block_end; + block->code_start_ = static_cast<int>(instructions_.size()); + + current_block_ = NULL; +} + + +static inline void CheckNoPhis(const BasicBlock* block) { +#ifdef DEBUG + // Branch targets should not have phis. 
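VisitBlock above encodes the "generate top down, schedule bottom up" trick twice: each node's freshly emitted instructions are reversed in place, and SelectInstructions later walks every block's range backwards. A toy buffer showing why the per-pattern reversal yields a correctly ordered final schedule:

#include <algorithm>
#include <cstdio>
#include <initializer_list>
#include <string>
#include <vector>

std::vector<std::string> buffer;

// Append a multi-instruction pattern, then reverse just that sub-range, as
// VisitBlock does after VisitNode. Patterns arrive back-to-front because
// blocks are visited in reverse control-flow order.
void EmitPattern(std::initializer_list<const char*> pattern) {
  size_t start = buffer.size();
  for (const char* s : pattern) buffer.push_back(s);
  std::reverse(buffer.begin() + start, buffer.end());
}

int main() {
  EmitPattern({"cmp", "branch"});  // node at the end of the block, visited first
  EmitPattern({"load", "add"});    // earlier node, visited later
  // Reading the buffer backwards recovers source order: load add cmp branch.
  for (size_t i = buffer.size(); i-- > 0;) printf("%s ", buffer[i].c_str());
  printf("\n");
  return 0;
}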
+ for (BasicBlock::const_iterator i = block->begin(); i != block->end(); ++i) { + const Node* node = *i; + CHECK_NE(IrOpcode::kPhi, node->opcode()); + } +#endif +} + + +void InstructionSelector::VisitControl(BasicBlock* block) { + Node* input = block->control_input_; + switch (block->control_) { + case BasicBlockData::kGoto: + return VisitGoto(block->SuccessorAt(0)); + case BasicBlockData::kBranch: { + DCHECK_EQ(IrOpcode::kBranch, input->opcode()); + BasicBlock* tbranch = block->SuccessorAt(0); + BasicBlock* fbranch = block->SuccessorAt(1); + // SSA deconstruction requires targets of branches not to have phis. + // Edge split form guarantees this property, but is more strict. + CheckNoPhis(tbranch); + CheckNoPhis(fbranch); + if (tbranch == fbranch) return VisitGoto(tbranch); + return VisitBranch(input, tbranch, fbranch); + } + case BasicBlockData::kReturn: { + // If the result itself is a return, return its input. + Node* value = (input != NULL && input->opcode() == IrOpcode::kReturn) + ? input->InputAt(0) + : input; + return VisitReturn(value); + } + case BasicBlockData::kThrow: + return VisitThrow(input); + case BasicBlockData::kDeoptimize: + return VisitDeoptimize(input); + case BasicBlockData::kCall: { + BasicBlock* deoptimization = block->SuccessorAt(0); + BasicBlock* continuation = block->SuccessorAt(1); + VisitCall(input, continuation, deoptimization); + break; + } + case BasicBlockData::kNone: { + // TODO(titzer): exit block doesn't have control. + DCHECK(input == NULL); + break; + } + default: + UNREACHABLE(); + break; + } +} + + +void InstructionSelector::VisitNode(Node* node) { + DCHECK_NOT_NULL(schedule()->block(node)); // should only use scheduled nodes. + SourcePosition source_position = source_positions_->GetSourcePosition(node); + if (!source_position.IsUnknown()) { + DCHECK(!source_position.IsInvalid()); + if (FLAG_turbo_source_positions || node->opcode() == IrOpcode::kCall) { + Emit(SourcePositionInstruction::New(instruction_zone(), source_position)); + } + } + switch (node->opcode()) { + case IrOpcode::kStart: + case IrOpcode::kLoop: + case IrOpcode::kEnd: + case IrOpcode::kBranch: + case IrOpcode::kIfTrue: + case IrOpcode::kIfFalse: + case IrOpcode::kEffectPhi: + case IrOpcode::kMerge: + case IrOpcode::kLazyDeoptimization: + case IrOpcode::kContinuation: + // No code needed for these graph artifacts. + return; + case IrOpcode::kParameter: { + int index = OpParameter<int>(node); + MachineType rep = linkage() + ->GetIncomingDescriptor() + ->GetInputLocation(index) + .representation(); + MarkAsRepresentation(rep, node); + return VisitParameter(node); + } + case IrOpcode::kPhi: + return VisitPhi(node); + case IrOpcode::kProjection: + return VisitProjection(node); + case IrOpcode::kInt32Constant: + case IrOpcode::kInt64Constant: + case IrOpcode::kExternalConstant: + return VisitConstant(node); + case IrOpcode::kFloat64Constant: + return MarkAsDouble(node), VisitConstant(node); + case IrOpcode::kHeapConstant: + case IrOpcode::kNumberConstant: + // TODO(turbofan): only mark non-smis as references. 
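The `return MarkAsDouble(node), VisitConstant(node);` cases above use the comma operator to tag a node and dispatch in one statement: both calls return void, the operands evaluate left to right, and a void expression may be returned from a void function. A minimal demonstration:

#include <cstdio>

void Mark(int id) { printf("marked %d\n", id); }
void Visit(int id) { printf("visited %d\n", id); }

// Evaluates Mark(id), then Visit(id), then returns; both operands are void.
void Dispatch(int id) { return Mark(id), Visit(id); }

int main() {
  Dispatch(42);
  return 0;
}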
+ return MarkAsReference(node), VisitConstant(node); + case IrOpcode::kCall: + return VisitCall(node, NULL, NULL); + case IrOpcode::kFrameState: + case IrOpcode::kStateValues: + return; + case IrOpcode::kLoad: { + MachineType load_rep = OpParameter<MachineType>(node); + MarkAsRepresentation(load_rep, node); + return VisitLoad(node); + } + case IrOpcode::kStore: + return VisitStore(node); + case IrOpcode::kWord32And: + return VisitWord32And(node); + case IrOpcode::kWord32Or: + return VisitWord32Or(node); + case IrOpcode::kWord32Xor: + return VisitWord32Xor(node); + case IrOpcode::kWord32Shl: + return VisitWord32Shl(node); + case IrOpcode::kWord32Shr: + return VisitWord32Shr(node); + case IrOpcode::kWord32Sar: + return VisitWord32Sar(node); + case IrOpcode::kWord32Equal: + return VisitWord32Equal(node); + case IrOpcode::kWord64And: + return VisitWord64And(node); + case IrOpcode::kWord64Or: + return VisitWord64Or(node); + case IrOpcode::kWord64Xor: + return VisitWord64Xor(node); + case IrOpcode::kWord64Shl: + return VisitWord64Shl(node); + case IrOpcode::kWord64Shr: + return VisitWord64Shr(node); + case IrOpcode::kWord64Sar: + return VisitWord64Sar(node); + case IrOpcode::kWord64Equal: + return VisitWord64Equal(node); + case IrOpcode::kInt32Add: + return VisitInt32Add(node); + case IrOpcode::kInt32AddWithOverflow: + return VisitInt32AddWithOverflow(node); + case IrOpcode::kInt32Sub: + return VisitInt32Sub(node); + case IrOpcode::kInt32SubWithOverflow: + return VisitInt32SubWithOverflow(node); + case IrOpcode::kInt32Mul: + return VisitInt32Mul(node); + case IrOpcode::kInt32Div: + return VisitInt32Div(node); + case IrOpcode::kInt32UDiv: + return VisitInt32UDiv(node); + case IrOpcode::kInt32Mod: + return VisitInt32Mod(node); + case IrOpcode::kInt32UMod: + return VisitInt32UMod(node); + case IrOpcode::kInt32LessThan: + return VisitInt32LessThan(node); + case IrOpcode::kInt32LessThanOrEqual: + return VisitInt32LessThanOrEqual(node); + case IrOpcode::kUint32LessThan: + return VisitUint32LessThan(node); + case IrOpcode::kUint32LessThanOrEqual: + return VisitUint32LessThanOrEqual(node); + case IrOpcode::kInt64Add: + return VisitInt64Add(node); + case IrOpcode::kInt64Sub: + return VisitInt64Sub(node); + case IrOpcode::kInt64Mul: + return VisitInt64Mul(node); + case IrOpcode::kInt64Div: + return VisitInt64Div(node); + case IrOpcode::kInt64UDiv: + return VisitInt64UDiv(node); + case IrOpcode::kInt64Mod: + return VisitInt64Mod(node); + case IrOpcode::kInt64UMod: + return VisitInt64UMod(node); + case IrOpcode::kInt64LessThan: + return VisitInt64LessThan(node); + case IrOpcode::kInt64LessThanOrEqual: + return VisitInt64LessThanOrEqual(node); + case IrOpcode::kConvertInt32ToInt64: + return VisitConvertInt32ToInt64(node); + case IrOpcode::kConvertInt64ToInt32: + return VisitConvertInt64ToInt32(node); + case IrOpcode::kChangeInt32ToFloat64: + return MarkAsDouble(node), VisitChangeInt32ToFloat64(node); + case IrOpcode::kChangeUint32ToFloat64: + return MarkAsDouble(node), VisitChangeUint32ToFloat64(node); + case IrOpcode::kChangeFloat64ToInt32: + return VisitChangeFloat64ToInt32(node); + case IrOpcode::kChangeFloat64ToUint32: + return VisitChangeFloat64ToUint32(node); + case IrOpcode::kFloat64Add: + return MarkAsDouble(node), VisitFloat64Add(node); + case IrOpcode::kFloat64Sub: + return MarkAsDouble(node), VisitFloat64Sub(node); + case IrOpcode::kFloat64Mul: + return MarkAsDouble(node), VisitFloat64Mul(node); + case IrOpcode::kFloat64Div: + return MarkAsDouble(node), VisitFloat64Div(node); + case 
IrOpcode::kFloat64Mod: + return MarkAsDouble(node), VisitFloat64Mod(node); + case IrOpcode::kFloat64Equal: + return VisitFloat64Equal(node); + case IrOpcode::kFloat64LessThan: + return VisitFloat64LessThan(node); + case IrOpcode::kFloat64LessThanOrEqual: + return VisitFloat64LessThanOrEqual(node); + default: + V8_Fatal(__FILE__, __LINE__, "Unexpected operator #%d:%s @ node #%d", + node->opcode(), node->op()->mnemonic(), node->id()); + } +} + + +#if V8_TURBOFAN_BACKEND + +void InstructionSelector::VisitWord32Equal(Node* node) { + FlagsContinuation cont(kEqual, node); + Int32BinopMatcher m(node); + if (m.right().Is(0)) { + return VisitWord32Test(m.left().node(), &cont); + } + VisitWord32Compare(node, &cont); +} + + +void InstructionSelector::VisitInt32LessThan(Node* node) { + FlagsContinuation cont(kSignedLessThan, node); + VisitWord32Compare(node, &cont); +} + + +void InstructionSelector::VisitInt32LessThanOrEqual(Node* node) { + FlagsContinuation cont(kSignedLessThanOrEqual, node); + VisitWord32Compare(node, &cont); +} + + +void InstructionSelector::VisitUint32LessThan(Node* node) { + FlagsContinuation cont(kUnsignedLessThan, node); + VisitWord32Compare(node, &cont); +} + + +void InstructionSelector::VisitUint32LessThanOrEqual(Node* node) { + FlagsContinuation cont(kUnsignedLessThanOrEqual, node); + VisitWord32Compare(node, &cont); +} + + +void InstructionSelector::VisitWord64Equal(Node* node) { + FlagsContinuation cont(kEqual, node); + Int64BinopMatcher m(node); + if (m.right().Is(0)) { + return VisitWord64Test(m.left().node(), &cont); + } + VisitWord64Compare(node, &cont); +} + + +void InstructionSelector::VisitInt32AddWithOverflow(Node* node) { + if (Node* ovf = node->FindProjection(1)) { + FlagsContinuation cont(kOverflow, ovf); + return VisitInt32AddWithOverflow(node, &cont); + } + FlagsContinuation cont; + VisitInt32AddWithOverflow(node, &cont); +} + + +void InstructionSelector::VisitInt32SubWithOverflow(Node* node) { + if (Node* ovf = node->FindProjection(1)) { + FlagsContinuation cont(kOverflow, ovf); + return VisitInt32SubWithOverflow(node, &cont); + } + FlagsContinuation cont; + VisitInt32SubWithOverflow(node, &cont); +} + + +void InstructionSelector::VisitInt64LessThan(Node* node) { + FlagsContinuation cont(kSignedLessThan, node); + VisitWord64Compare(node, &cont); +} + + +void InstructionSelector::VisitInt64LessThanOrEqual(Node* node) { + FlagsContinuation cont(kSignedLessThanOrEqual, node); + VisitWord64Compare(node, &cont); +} + + +void InstructionSelector::VisitFloat64Equal(Node* node) { + FlagsContinuation cont(kUnorderedEqual, node); + VisitFloat64Compare(node, &cont); +} + + +void InstructionSelector::VisitFloat64LessThan(Node* node) { + FlagsContinuation cont(kUnorderedLessThan, node); + VisitFloat64Compare(node, &cont); +} + + +void InstructionSelector::VisitFloat64LessThanOrEqual(Node* node) { + FlagsContinuation cont(kUnorderedLessThanOrEqual, node); + VisitFloat64Compare(node, &cont); +} + +#endif // V8_TURBOFAN_BACKEND + +// 32 bit targets do not implement the following instructions. 
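VisitInt32AddWithOverflow above only pays for the overflow flag when the graph contains a live projection 1; otherwise it selects a plain add with an empty continuation. The two-output semantics it implements resemble the checked addition below (using a GCC/Clang builtin; assumed available):

#include <cstdint>
#include <cstdio>

// Scalar analogue of Int32AddWithOverflow: projection 0 is the wrapped sum,
// projection 1 is the overflow bit.
int32_t AddWithOverflow(int32_t a, int32_t b, bool* ovf) {
  int32_t result;
  *ovf = __builtin_add_overflow(a, b, &result);  // GCC/Clang builtin
  return result;
}

int main() {
  bool ovf;
  int32_t r = AddWithOverflow(INT32_MAX, 1, &ovf);
  printf("result=%d overflow=%d\n", r, ovf);  // wraps to INT32_MIN, overflow=1
  return 0;
}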
+#if V8_TARGET_ARCH_32_BIT && V8_TURBOFAN_BACKEND + +void InstructionSelector::VisitWord64And(Node* node) { UNIMPLEMENTED(); } + + +void InstructionSelector::VisitWord64Or(Node* node) { UNIMPLEMENTED(); } + + +void InstructionSelector::VisitWord64Xor(Node* node) { UNIMPLEMENTED(); } + + +void InstructionSelector::VisitWord64Shl(Node* node) { UNIMPLEMENTED(); } + + +void InstructionSelector::VisitWord64Shr(Node* node) { UNIMPLEMENTED(); } + + +void InstructionSelector::VisitWord64Sar(Node* node) { UNIMPLEMENTED(); } + + +void InstructionSelector::VisitInt64Add(Node* node) { UNIMPLEMENTED(); } + + +void InstructionSelector::VisitInt64Sub(Node* node) { UNIMPLEMENTED(); } + + +void InstructionSelector::VisitInt64Mul(Node* node) { UNIMPLEMENTED(); } + + +void InstructionSelector::VisitInt64Div(Node* node) { UNIMPLEMENTED(); } + + +void InstructionSelector::VisitInt64UDiv(Node* node) { UNIMPLEMENTED(); } + + +void InstructionSelector::VisitInt64Mod(Node* node) { UNIMPLEMENTED(); } + + +void InstructionSelector::VisitInt64UMod(Node* node) { UNIMPLEMENTED(); } + + +void InstructionSelector::VisitConvertInt64ToInt32(Node* node) { + UNIMPLEMENTED(); +} + + +void InstructionSelector::VisitConvertInt32ToInt64(Node* node) { + UNIMPLEMENTED(); +} + +#endif // V8_TARGET_ARCH_32_BIT && V8_TURBOFAN_BACKEND + + +// 32-bit targets and unsupported architectures need dummy implementations of +// selected 64-bit ops. +#if V8_TARGET_ARCH_32_BIT || !V8_TURBOFAN_BACKEND + +void InstructionSelector::VisitWord64Test(Node* node, FlagsContinuation* cont) { + UNIMPLEMENTED(); +} + + +void InstructionSelector::VisitWord64Compare(Node* node, + FlagsContinuation* cont) { + UNIMPLEMENTED(); +} + +#endif // V8_TARGET_ARCH_32_BIT || !V8_TURBOFAN_BACKEND + + +void InstructionSelector::VisitParameter(Node* node) { + OperandGenerator g(this); + Emit(kArchNop, g.DefineAsLocation(node, linkage()->GetParameterLocation( + OpParameter<int>(node)))); +} + + +void InstructionSelector::VisitPhi(Node* node) { + // TODO(bmeurer): Emit a PhiInstruction here. + for (InputIter i = node->inputs().begin(); i != node->inputs().end(); ++i) { + MarkAsUsed(*i); + } +} + + +void InstructionSelector::VisitProjection(Node* node) { + OperandGenerator g(this); + Node* value = node->InputAt(0); + switch (value->opcode()) { + case IrOpcode::kInt32AddWithOverflow: + case IrOpcode::kInt32SubWithOverflow: + if (OpParameter<int32_t>(node) == 0) { + Emit(kArchNop, g.DefineSameAsFirst(node), g.Use(value)); + } else { + DCHECK_EQ(1, OpParameter<int32_t>(node)); + MarkAsUsed(value); + } + break; + default: + break; + } +} + + +void InstructionSelector::VisitConstant(Node* node) { + // We must emit a NOP here because every live range needs a defining + // instruction in the register allocator. + OperandGenerator g(this); + Emit(kArchNop, g.DefineAsConstant(node)); +} + + +void InstructionSelector::VisitGoto(BasicBlock* target) { + if (IsNextInAssemblyOrder(target)) { + // fall through to the next block. + Emit(kArchNop, NULL)->MarkAsControl(); + } else { + // jump to the next block. + OperandGenerator g(this); + Emit(kArchJmp, NULL, g.Label(target))->MarkAsControl(); + } +} + + +void InstructionSelector::VisitBranch(Node* branch, BasicBlock* tbranch, + BasicBlock* fbranch) { + OperandGenerator g(this); + Node* user = branch; + Node* value = branch->InputAt(0); + + FlagsContinuation cont(kNotEqual, tbranch, fbranch); + + // If we can fall through to the true block, invert the branch. 
+  if (IsNextInAssemblyOrder(tbranch)) {
+    cont.Negate();
+    cont.SwapBlocks();
+  }
+
+  // Try to combine with comparisons against 0 by simply inverting the branch.
+  while (CanCover(user, value)) {
+    if (value->opcode() == IrOpcode::kWord32Equal) {
+      Int32BinopMatcher m(value);
+      if (m.right().Is(0)) {
+        user = value;
+        value = m.left().node();
+        cont.Negate();
+      } else {
+        break;
+      }
+    } else if (value->opcode() == IrOpcode::kWord64Equal) {
+      Int64BinopMatcher m(value);
+      if (m.right().Is(0)) {
+        user = value;
+        value = m.left().node();
+        cont.Negate();
+      } else {
+        break;
+      }
+    } else {
+      break;
+    }
+  }
+
+  // Try to combine the branch with a comparison.
+  if (CanCover(user, value)) {
+    switch (value->opcode()) {
+      case IrOpcode::kWord32Equal:
+        cont.OverwriteAndNegateIfEqual(kEqual);
+        return VisitWord32Compare(value, &cont);
+      case IrOpcode::kInt32LessThan:
+        cont.OverwriteAndNegateIfEqual(kSignedLessThan);
+        return VisitWord32Compare(value, &cont);
+      case IrOpcode::kInt32LessThanOrEqual:
+        cont.OverwriteAndNegateIfEqual(kSignedLessThanOrEqual);
+        return VisitWord32Compare(value, &cont);
+      case IrOpcode::kUint32LessThan:
+        cont.OverwriteAndNegateIfEqual(kUnsignedLessThan);
+        return VisitWord32Compare(value, &cont);
+      case IrOpcode::kUint32LessThanOrEqual:
+        cont.OverwriteAndNegateIfEqual(kUnsignedLessThanOrEqual);
+        return VisitWord32Compare(value, &cont);
+      case IrOpcode::kWord64Equal:
+        cont.OverwriteAndNegateIfEqual(kEqual);
+        return VisitWord64Compare(value, &cont);
+      case IrOpcode::kInt64LessThan:
+        cont.OverwriteAndNegateIfEqual(kSignedLessThan);
+        return VisitWord64Compare(value, &cont);
+      case IrOpcode::kInt64LessThanOrEqual:
+        cont.OverwriteAndNegateIfEqual(kSignedLessThanOrEqual);
+        return VisitWord64Compare(value, &cont);
+      case IrOpcode::kFloat64Equal:
+        cont.OverwriteAndNegateIfEqual(kUnorderedEqual);
+        return VisitFloat64Compare(value, &cont);
+      case IrOpcode::kFloat64LessThan:
+        cont.OverwriteAndNegateIfEqual(kUnorderedLessThan);
+        return VisitFloat64Compare(value, &cont);
+      case IrOpcode::kFloat64LessThanOrEqual:
+        cont.OverwriteAndNegateIfEqual(kUnorderedLessThanOrEqual);
+        return VisitFloat64Compare(value, &cont);
+      case IrOpcode::kProjection:
+        // Check if this is the overflow output projection of an
+        // <Operation>WithOverflow node.
+        if (OpParameter<int32_t>(value) == 1) {
+          // We cannot combine the <Operation>WithOverflow with this branch
+          // unless the 0th projection (the use of the actual value of the
+          // <Operation>) is either NULL, which means there's no use of the
+          // actual value, or was already defined, which means it is scheduled
+          // *AFTER* this branch.
+          Node* node = value->InputAt(0);
+          Node* result = node->FindProjection(0);
+          if (result == NULL || IsDefined(result)) {
+            switch (node->opcode()) {
+              case IrOpcode::kInt32AddWithOverflow:
+                cont.OverwriteAndNegateIfEqual(kOverflow);
+                return VisitInt32AddWithOverflow(node, &cont);
+              case IrOpcode::kInt32SubWithOverflow:
+                cont.OverwriteAndNegateIfEqual(kOverflow);
+                return VisitInt32SubWithOverflow(node, &cont);
+              default:
+                break;
+            }
+          }
+        }
+        break;
+      default:
+        break;
+    }
+  }
+
+  // Branch could not be combined with a compare, emit compare against 0.
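The while loop above peels `x == 0` wrappers off the branch condition, flipping the continuation once per layer, so `Branch((x == 0) == 0)` selects the same code as `Branch(x)`. A standalone model of the peeling (the real loop also re-checks CanCover before each step):

#include <cstdio>

// Toy IR node: either a plain value or a WordEqual(input, 0) wrapper.
struct Node {
  bool is_eq_zero;
  Node* input;  // only meaningful when is_eq_zero is true
};

// Strip nested compare-with-zero wrappers, negating the branch once per
// stripped layer, as InstructionSelector::VisitBranch does.
Node* StripEqZero(Node* value, bool* negated) {
  while (value->is_eq_zero) {
    *negated = !*negated;
    value = value->input;
  }
  return value;
}

int main() {
  Node x{false, nullptr};
  Node eq1{true, &x};    // x == 0
  Node eq2{true, &eq1};  // (x == 0) == 0
  bool negated = false;
  Node* root = StripEqZero(&eq2, &negated);
  printf("reached x: %d, negated: %d\n", root == &x, negated);  // 1, 0
  return 0;
}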
+ VisitWord32Test(value, &cont); +} + + +void InstructionSelector::VisitReturn(Node* value) { + OperandGenerator g(this); + if (value != NULL) { + Emit(kArchRet, NULL, g.UseLocation(value, linkage()->GetReturnLocation())); + } else { + Emit(kArchRet, NULL); + } +} + + +void InstructionSelector::VisitThrow(Node* value) { + UNIMPLEMENTED(); // TODO(titzer) +} + + +static InstructionOperand* UseOrImmediate(OperandGenerator* g, Node* input) { + switch (input->opcode()) { + case IrOpcode::kInt32Constant: + case IrOpcode::kNumberConstant: + case IrOpcode::kFloat64Constant: + case IrOpcode::kHeapConstant: + return g->UseImmediate(input); + default: + return g->Use(input); + } +} + + +void InstructionSelector::VisitDeoptimize(Node* deopt) { + DCHECK(deopt->op()->opcode() == IrOpcode::kDeoptimize); + Node* state = deopt->InputAt(0); + DCHECK(state->op()->opcode() == IrOpcode::kFrameState); + BailoutId ast_id = OpParameter<BailoutId>(state); + + // Add the inputs. + Node* parameters = state->InputAt(0); + int parameters_count = OpParameter<int>(parameters); + + Node* locals = state->InputAt(1); + int locals_count = OpParameter<int>(locals); + + Node* stack = state->InputAt(2); + int stack_count = OpParameter<int>(stack); + + OperandGenerator g(this); + std::vector<InstructionOperand*> inputs; + inputs.reserve(parameters_count + locals_count + stack_count); + for (int i = 0; i < parameters_count; i++) { + inputs.push_back(UseOrImmediate(&g, parameters->InputAt(i))); + } + for (int i = 0; i < locals_count; i++) { + inputs.push_back(UseOrImmediate(&g, locals->InputAt(i))); + } + for (int i = 0; i < stack_count; i++) { + inputs.push_back(UseOrImmediate(&g, stack->InputAt(i))); + } + + FrameStateDescriptor* descriptor = new (instruction_zone()) + FrameStateDescriptor(ast_id, parameters_count, locals_count, stack_count); + + DCHECK_EQ(descriptor->size(), inputs.size()); + + int deoptimization_id = sequence()->AddDeoptimizationEntry(descriptor); + Emit(kArchDeoptimize | MiscField::encode(deoptimization_id), 0, NULL, + inputs.size(), &inputs.front(), 0, NULL); +} + + +#if !V8_TURBOFAN_BACKEND + +#define DECLARE_UNIMPLEMENTED_SELECTOR(x) \ + void InstructionSelector::Visit##x(Node* node) { UNIMPLEMENTED(); } +MACHINE_OP_LIST(DECLARE_UNIMPLEMENTED_SELECTOR) +#undef DECLARE_UNIMPLEMENTED_SELECTOR + + +void InstructionSelector::VisitInt32AddWithOverflow(Node* node, + FlagsContinuation* cont) { + UNIMPLEMENTED(); +} + + +void InstructionSelector::VisitInt32SubWithOverflow(Node* node, + FlagsContinuation* cont) { + UNIMPLEMENTED(); +} + + +void InstructionSelector::VisitWord32Test(Node* node, FlagsContinuation* cont) { + UNIMPLEMENTED(); +} + + +void InstructionSelector::VisitWord32Compare(Node* node, + FlagsContinuation* cont) { + UNIMPLEMENTED(); +} + + +void InstructionSelector::VisitFloat64Compare(Node* node, + FlagsContinuation* cont) { + UNIMPLEMENTED(); +} + + +void InstructionSelector::VisitCall(Node* call, BasicBlock* continuation, + BasicBlock* deoptimization) {} + +#endif // !V8_TURBOFAN_BACKEND + +} // namespace compiler +} // namespace internal +} // namespace v8 diff --git a/deps/v8/src/compiler/instruction-selector.h b/deps/v8/src/compiler/instruction-selector.h new file mode 100644 index 00000000000..e2833228466 --- /dev/null +++ b/deps/v8/src/compiler/instruction-selector.h @@ -0,0 +1,212 @@ +// Copyright 2014 the V8 project authors. All rights reserved. +// Use of this source code is governed by a BSD-style license that can be +// found in the LICENSE file. 
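VisitDeoptimize above flattens the FrameState's three value lists into one operand vector and records a FrameStateDescriptor whose counts let the deoptimizer slice the vector apart again; the DCHECK ties the two together. A sketch of that invariant, with assumed shapes:

#include <cassert>
#include <cstddef>
#include <vector>

// Assumed flattened shape of a frame state.
struct FrameState {
  std::vector<int> parameters, locals, stack;
};

struct Descriptor {
  size_t parameters_count, locals_count, stack_count;
  size_t size() const { return parameters_count + locals_count + stack_count; }
};

// Flatten the three sections in a fixed order; the descriptor records how to
// slice the flat input list back apart.
Descriptor Flatten(const FrameState& fs, std::vector<int>* inputs) {
  for (int v : fs.parameters) inputs->push_back(v);
  for (int v : fs.locals) inputs->push_back(v);
  for (int v : fs.stack) inputs->push_back(v);
  return {fs.parameters.size(), fs.locals.size(), fs.stack.size()};
}

int main() {
  FrameState fs{{1, 2}, {3}, {4, 5, 6}};
  std::vector<int> inputs;
  Descriptor d = Flatten(fs, &inputs);
  assert(d.size() == inputs.size());  // mirrors the DCHECK in VisitDeoptimize
  return 0;
}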
+ +#ifndef V8_COMPILER_INSTRUCTION_SELECTOR_H_ +#define V8_COMPILER_INSTRUCTION_SELECTOR_H_ + +#include <deque> + +#include "src/compiler/common-operator.h" +#include "src/compiler/instruction.h" +#include "src/compiler/machine-operator.h" +#include "src/zone-containers.h" + +namespace v8 { +namespace internal { +namespace compiler { + +// Forward declarations. +struct CallBuffer; // TODO(bmeurer): Remove this. +class FlagsContinuation; + +class InstructionSelector V8_FINAL { + public: + // Forward declarations. + class Features; + + InstructionSelector(InstructionSequence* sequence, + SourcePositionTable* source_positions, + Features features = SupportedFeatures()); + + // Visit code for the entire graph with the included schedule. + void SelectInstructions(); + + // =========================================================================== + // ============= Architecture-independent code emission methods. ============= + // =========================================================================== + + Instruction* Emit(InstructionCode opcode, InstructionOperand* output, + size_t temp_count = 0, InstructionOperand* *temps = NULL); + Instruction* Emit(InstructionCode opcode, InstructionOperand* output, + InstructionOperand* a, size_t temp_count = 0, + InstructionOperand* *temps = NULL); + Instruction* Emit(InstructionCode opcode, InstructionOperand* output, + InstructionOperand* a, InstructionOperand* b, + size_t temp_count = 0, InstructionOperand* *temps = NULL); + Instruction* Emit(InstructionCode opcode, InstructionOperand* output, + InstructionOperand* a, InstructionOperand* b, + InstructionOperand* c, size_t temp_count = 0, + InstructionOperand* *temps = NULL); + Instruction* Emit(InstructionCode opcode, InstructionOperand* output, + InstructionOperand* a, InstructionOperand* b, + InstructionOperand* c, InstructionOperand* d, + size_t temp_count = 0, InstructionOperand* *temps = NULL); + Instruction* Emit(InstructionCode opcode, size_t output_count, + InstructionOperand** outputs, size_t input_count, + InstructionOperand** inputs, size_t temp_count = 0, + InstructionOperand* *temps = NULL); + Instruction* Emit(Instruction* instr); + + // =========================================================================== + // ============== Architecture-independent CPU feature methods. ============== + // =========================================================================== + + class Features V8_FINAL { + public: + Features() : bits_(0) {} + explicit Features(unsigned bits) : bits_(bits) {} + explicit Features(CpuFeature f) : bits_(1u << f) {} + Features(CpuFeature f1, CpuFeature f2) : bits_((1u << f1) | (1u << f2)) {} + + bool Contains(CpuFeature f) const { return (bits_ & (1u << f)); } + + private: + unsigned bits_; + }; + + bool IsSupported(CpuFeature feature) const { + return features_.Contains(feature); + } + + // Returns the features supported on the target platform. + static Features SupportedFeatures() { + return Features(CpuFeatures::SupportedFeatures()); + } + + private: + friend class OperandGenerator; + + // =========================================================================== + // ============ Architecture-independent graph covering methods. ============= + // =========================================================================== + + // Checks if {block} will appear directly after {current_block_} when + // assembling code, in which case, a fall-through can be used. 
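The Features helper above is just a bit set keyed by CpuFeature numbers, so architecture-specific selectors can be exercised with arbitrary feature combinations. A compilable excerpt with a toy feature numbering (the real CpuFeature values come from the assembler layer and are assumptions here):

#include <cstdio>

enum CpuFeature { SSE4_1 = 0, AVX = 1, POPCNT = 2 };  // hypothetical numbering

class Features {
 public:
  Features() : bits_(0) {}
  explicit Features(CpuFeature f) : bits_(1u << f) {}
  Features(CpuFeature f1, CpuFeature f2) : bits_((1u << f1) | (1u << f2)) {}
  bool Contains(CpuFeature f) const { return (bits_ & (1u << f)) != 0; }

 private:
  unsigned bits_;  // one bit per feature
};

int main() {
  Features f(SSE4_1, POPCNT);
  printf("%d %d %d\n", f.Contains(SSE4_1), f.Contains(AVX), f.Contains(POPCNT));  // 1 0 1
  return 0;
}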
+  bool IsNextInAssemblyOrder(const BasicBlock* block) const;
+
+  // Used in pattern matching during code generation.
+  // Check if {node} can be covered while generating code for the current
+  // instruction. A node can be covered if {user} is the node's only use and
+  // the two are in the same basic block.
+  bool CanCover(Node* user, Node* node) const;
+
+  // Checks if {node} was already defined, and therefore code was already
+  // generated for it.
+  bool IsDefined(Node* node) const;
+
+  // Inform the instruction selection that {node} was just defined.
+  void MarkAsDefined(Node* node);
+
+  // Checks if {node} has any uses, and therefore code has to be generated for
+  // it.
+  bool IsUsed(Node* node) const;
+
+  // Inform the instruction selection that {node} has at least one use and we
+  // will need to generate code for it.
+  void MarkAsUsed(Node* node);
+
+  // Checks if {node} is marked as double.
+  bool IsDouble(const Node* node) const;
+
+  // Inform the register allocator of a double result.
+  void MarkAsDouble(Node* node);
+
+  // Checks if {node} is marked as reference.
+  bool IsReference(const Node* node) const;
+
+  // Inform the register allocator of a reference result.
+  void MarkAsReference(Node* node);
+
+  // Inform the register allocator of the representation of the value produced
+  // by {node}.
+  void MarkAsRepresentation(MachineType rep, Node* node);
+
+  // Initialize the call buffer with the InstructionOperands, nodes, etc.,
+  // corresponding to the inputs and outputs of the call.
+  // {call_code_immediate} requests immediate operands for calls to code
+  // objects; {call_address_immediate} requests immediate operands for calls
+  // to addresses.
+  void InitializeCallBuffer(Node* call, CallBuffer* buffer,
+                            bool call_code_immediate,
+                            bool call_address_immediate, BasicBlock* cont_node,
+                            BasicBlock* deopt_node);
+
+  // ===========================================================================
+  // ============= Architecture-specific graph covering methods. ===============
+  // ===========================================================================
+
+  // Visit nodes in the given block and generate code.
+  void VisitBlock(BasicBlock* block);
+
+  // Visit the node for the control flow at the end of the block, generating
+  // code if necessary.
+  void VisitControl(BasicBlock* block);
+
+  // Visit the node and generate code, if any.
+ void VisitNode(Node* node); + +#define DECLARE_GENERATOR(x) void Visit##x(Node* node); + MACHINE_OP_LIST(DECLARE_GENERATOR) +#undef DECLARE_GENERATOR + + void VisitInt32AddWithOverflow(Node* node, FlagsContinuation* cont); + void VisitInt32SubWithOverflow(Node* node, FlagsContinuation* cont); + + void VisitWord32Test(Node* node, FlagsContinuation* cont); + void VisitWord64Test(Node* node, FlagsContinuation* cont); + void VisitWord32Compare(Node* node, FlagsContinuation* cont); + void VisitWord64Compare(Node* node, FlagsContinuation* cont); + void VisitFloat64Compare(Node* node, FlagsContinuation* cont); + + void VisitParameter(Node* node); + void VisitPhi(Node* node); + void VisitProjection(Node* node); + void VisitConstant(Node* node); + void VisitCall(Node* call, BasicBlock* continuation, + BasicBlock* deoptimization); + void VisitGoto(BasicBlock* target); + void VisitBranch(Node* input, BasicBlock* tbranch, BasicBlock* fbranch); + void VisitReturn(Node* value); + void VisitThrow(Node* value); + void VisitDeoptimize(Node* deopt); + + // =========================================================================== + + Graph* graph() const { return sequence()->graph(); } + Linkage* linkage() const { return sequence()->linkage(); } + Schedule* schedule() const { return sequence()->schedule(); } + InstructionSequence* sequence() const { return sequence_; } + Zone* instruction_zone() const { return sequence()->zone(); } + Zone* zone() { return &zone_; } + + // =========================================================================== + + typedef zone_allocator<Instruction*> InstructionPtrZoneAllocator; + typedef std::deque<Instruction*, InstructionPtrZoneAllocator> Instructions; + + Zone zone_; + InstructionSequence* sequence_; + SourcePositionTable* source_positions_; + Features features_; + BasicBlock* current_block_; + Instructions instructions_; + BoolVector defined_; + BoolVector used_; +}; + +} // namespace compiler +} // namespace internal +} // namespace v8 + +#endif // V8_COMPILER_INSTRUCTION_SELECTOR_H_ diff --git a/deps/v8/src/compiler/instruction.cc b/deps/v8/src/compiler/instruction.cc new file mode 100644 index 00000000000..a2f4ed4f47f --- /dev/null +++ b/deps/v8/src/compiler/instruction.cc @@ -0,0 +1,483 @@ +// Copyright 2014 the V8 project authors. All rights reserved. +// Use of this source code is governed by a BSD-style license that can be +// found in the LICENSE file. 
+ +#include "src/compiler/instruction.h" + +#include "src/compiler/common-operator.h" + +namespace v8 { +namespace internal { +namespace compiler { + +OStream& operator<<(OStream& os, const InstructionOperand& op) { + switch (op.kind()) { + case InstructionOperand::INVALID: + return os << "(0)"; + case InstructionOperand::UNALLOCATED: { + const UnallocatedOperand* unalloc = UnallocatedOperand::cast(&op); + os << "v" << unalloc->virtual_register(); + if (unalloc->basic_policy() == UnallocatedOperand::FIXED_SLOT) { + return os << "(=" << unalloc->fixed_slot_index() << "S)"; + } + switch (unalloc->extended_policy()) { + case UnallocatedOperand::NONE: + return os; + case UnallocatedOperand::FIXED_REGISTER: + return os << "(=" << Register::AllocationIndexToString( + unalloc->fixed_register_index()) << ")"; + case UnallocatedOperand::FIXED_DOUBLE_REGISTER: + return os << "(=" << DoubleRegister::AllocationIndexToString( + unalloc->fixed_register_index()) << ")"; + case UnallocatedOperand::MUST_HAVE_REGISTER: + return os << "(R)"; + case UnallocatedOperand::SAME_AS_FIRST_INPUT: + return os << "(1)"; + case UnallocatedOperand::ANY: + return os << "(-)"; + } + } + case InstructionOperand::CONSTANT: + return os << "[constant:" << op.index() << "]"; + case InstructionOperand::IMMEDIATE: + return os << "[immediate:" << op.index() << "]"; + case InstructionOperand::STACK_SLOT: + return os << "[stack:" << op.index() << "]"; + case InstructionOperand::DOUBLE_STACK_SLOT: + return os << "[double_stack:" << op.index() << "]"; + case InstructionOperand::REGISTER: + return os << "[" << Register::AllocationIndexToString(op.index()) + << "|R]"; + case InstructionOperand::DOUBLE_REGISTER: + return os << "[" << DoubleRegister::AllocationIndexToString(op.index()) + << "|R]"; + } + UNREACHABLE(); + return os; +} + + +template <InstructionOperand::Kind kOperandKind, int kNumCachedOperands> +SubKindOperand<kOperandKind, kNumCachedOperands>* + SubKindOperand<kOperandKind, kNumCachedOperands>::cache = NULL; + + +template <InstructionOperand::Kind kOperandKind, int kNumCachedOperands> +void SubKindOperand<kOperandKind, kNumCachedOperands>::SetUpCache() { + if (cache) return; + cache = new SubKindOperand[kNumCachedOperands]; + for (int i = 0; i < kNumCachedOperands; i++) { + cache[i].ConvertTo(kOperandKind, i); + } +} + + +template <InstructionOperand::Kind kOperandKind, int kNumCachedOperands> +void SubKindOperand<kOperandKind, kNumCachedOperands>::TearDownCache() { + delete[] cache; +} + + +void InstructionOperand::SetUpCaches() { +#define INSTRUCTION_OPERAND_SETUP(name, type, number) \ + name##Operand::SetUpCache(); + INSTRUCTION_OPERAND_LIST(INSTRUCTION_OPERAND_SETUP) +#undef INSTRUCTION_OPERAND_SETUP +} + + +void InstructionOperand::TearDownCaches() { +#define INSTRUCTION_OPERAND_TEARDOWN(name, type, number) \ + name##Operand::TearDownCache(); + INSTRUCTION_OPERAND_LIST(INSTRUCTION_OPERAND_TEARDOWN) +#undef INSTRUCTION_OPERAND_TEARDOWN +} + + +OStream& operator<<(OStream& os, const MoveOperands& mo) { + os << *mo.destination(); + if (!mo.source()->Equals(mo.destination())) os << " = " << *mo.source(); + return os << ";"; +} + + +bool ParallelMove::IsRedundant() const { + for (int i = 0; i < move_operands_.length(); ++i) { + if (!move_operands_[i].IsRedundant()) return false; + } + return true; +} + + +OStream& operator<<(OStream& os, const ParallelMove& pm) { + bool first = true; + for (ZoneList<MoveOperands>::iterator move = pm.move_operands()->begin(); + move != pm.move_operands()->end(); ++move) { + if 
(move->IsEliminated()) continue; + if (!first) os << " "; + first = false; + os << *move; + } + return os; +} + + +void PointerMap::RecordPointer(InstructionOperand* op, Zone* zone) { + // Do not record arguments as pointers. + if (op->IsStackSlot() && op->index() < 0) return; + DCHECK(!op->IsDoubleRegister() && !op->IsDoubleStackSlot()); + pointer_operands_.Add(op, zone); +} + + +void PointerMap::RemovePointer(InstructionOperand* op) { + // Do not record arguments as pointers. + if (op->IsStackSlot() && op->index() < 0) return; + DCHECK(!op->IsDoubleRegister() && !op->IsDoubleStackSlot()); + for (int i = 0; i < pointer_operands_.length(); ++i) { + if (pointer_operands_[i]->Equals(op)) { + pointer_operands_.Remove(i); + --i; + } + } +} + + +void PointerMap::RecordUntagged(InstructionOperand* op, Zone* zone) { + // Do not record arguments as pointers. + if (op->IsStackSlot() && op->index() < 0) return; + DCHECK(!op->IsDoubleRegister() && !op->IsDoubleStackSlot()); + untagged_operands_.Add(op, zone); +} + + +OStream& operator<<(OStream& os, const PointerMap& pm) { + os << "{"; + for (ZoneList<InstructionOperand*>::iterator op = + pm.pointer_operands_.begin(); + op != pm.pointer_operands_.end(); ++op) { + if (op != pm.pointer_operands_.begin()) os << ";"; + os << *op; + } + return os << "}"; +} + + +OStream& operator<<(OStream& os, const ArchOpcode& ao) { + switch (ao) { +#define CASE(Name) \ + case k##Name: \ + return os << #Name; + ARCH_OPCODE_LIST(CASE) +#undef CASE + } + UNREACHABLE(); + return os; +} + + +OStream& operator<<(OStream& os, const AddressingMode& am) { + switch (am) { + case kMode_None: + return os; +#define CASE(Name) \ + case kMode_##Name: \ + return os << #Name; + TARGET_ADDRESSING_MODE_LIST(CASE) +#undef CASE + } + UNREACHABLE(); + return os; +} + + +OStream& operator<<(OStream& os, const FlagsMode& fm) { + switch (fm) { + case kFlags_none: + return os; + case kFlags_branch: + return os << "branch"; + case kFlags_set: + return os << "set"; + } + UNREACHABLE(); + return os; +} + + +OStream& operator<<(OStream& os, const FlagsCondition& fc) { + switch (fc) { + case kEqual: + return os << "equal"; + case kNotEqual: + return os << "not equal"; + case kSignedLessThan: + return os << "signed less than"; + case kSignedGreaterThanOrEqual: + return os << "signed greater than or equal"; + case kSignedLessThanOrEqual: + return os << "signed less than or equal"; + case kSignedGreaterThan: + return os << "signed greater than"; + case kUnsignedLessThan: + return os << "unsigned less than"; + case kUnsignedGreaterThanOrEqual: + return os << "unsigned greater than or equal"; + case kUnsignedLessThanOrEqual: + return os << "unsigned less than or equal"; + case kUnsignedGreaterThan: + return os << "unsigned greater than"; + case kUnorderedEqual: + return os << "unordered equal"; + case kUnorderedNotEqual: + return os << "unordered not equal"; + case kUnorderedLessThan: + return os << "unordered less than"; + case kUnorderedGreaterThanOrEqual: + return os << "unordered greater than or equal"; + case kUnorderedLessThanOrEqual: + return os << "unordered less than or equal"; + case kUnorderedGreaterThan: + return os << "unordered greater than"; + case kOverflow: + return os << "overflow"; + case kNotOverflow: + return os << "not overflow"; + } + UNREACHABLE(); + return os; +} + + +OStream& operator<<(OStream& os, const Instruction& instr) { + if (instr.OutputCount() > 1) os << "("; + for (size_t i = 0; i < instr.OutputCount(); i++) { + if (i > 0) os << ", "; + os << *instr.OutputAt(i); + } + 
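RemovePointer above deletes from an index-based list while iterating, stepping the index back after each removal because the next element shifts down into the vacated slot. The same pattern on a std::vector:

#include <cstdio>
#include <vector>

// Remove every element equal to {value}; after erase(i) the element that was
// at i + 1 now sits at i, so decrement i to revisit that slot.
void RemoveAll(std::vector<int>* v, int value) {
  for (int i = 0; i < static_cast<int>(v->size()); ++i) {
    if ((*v)[i] == value) {
      v->erase(v->begin() + i);
      --i;
    }
  }
}

int main() {
  std::vector<int> v = {1, 2, 2, 3, 2};
  RemoveAll(&v, 2);
  for (int x : v) printf("%d ", x);  // 1 3
  printf("\n");
  return 0;
}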
+ if (instr.OutputCount() > 1) os << ") = "; + if (instr.OutputCount() == 1) os << " = "; + + if (instr.IsGapMoves()) { + const GapInstruction* gap = GapInstruction::cast(&instr); + os << (instr.IsBlockStart() ? " block-start" : "gap "); + for (int i = GapInstruction::FIRST_INNER_POSITION; + i <= GapInstruction::LAST_INNER_POSITION; i++) { + os << "("; + if (gap->parallel_moves_[i] != NULL) os << *gap->parallel_moves_[i]; + os << ") "; + } + } else if (instr.IsSourcePosition()) { + const SourcePositionInstruction* pos = + SourcePositionInstruction::cast(&instr); + os << "position (" << pos->source_position().raw() << ")"; + } else { + os << ArchOpcodeField::decode(instr.opcode()); + AddressingMode am = AddressingModeField::decode(instr.opcode()); + if (am != kMode_None) { + os << " : " << AddressingModeField::decode(instr.opcode()); + } + FlagsMode fm = FlagsModeField::decode(instr.opcode()); + if (fm != kFlags_none) { + os << " && " << fm << " if " + << FlagsConditionField::decode(instr.opcode()); + } + } + if (instr.InputCount() > 0) { + for (size_t i = 0; i < instr.InputCount(); i++) { + os << " " << *instr.InputAt(i); + } + } + return os << "\n"; +} + + +OStream& operator<<(OStream& os, const Constant& constant) { + switch (constant.type()) { + case Constant::kInt32: + return os << constant.ToInt32(); + case Constant::kInt64: + return os << constant.ToInt64() << "l"; + case Constant::kFloat64: + return os << constant.ToFloat64(); + case Constant::kExternalReference: + return os << constant.ToExternalReference().address(); + case Constant::kHeapObject: + return os << Brief(*constant.ToHeapObject()); + } + UNREACHABLE(); + return os; +} + + +Label* InstructionSequence::GetLabel(BasicBlock* block) { + return GetBlockStart(block)->label(); +} + + +BlockStartInstruction* InstructionSequence::GetBlockStart(BasicBlock* block) { + return BlockStartInstruction::cast(InstructionAt(block->code_start_)); +} + + +void InstructionSequence::StartBlock(BasicBlock* block) { + block->code_start_ = static_cast<int>(instructions_.size()); + BlockStartInstruction* block_start = + BlockStartInstruction::New(zone(), block); + AddInstruction(block_start, block); +} + + +void InstructionSequence::EndBlock(BasicBlock* block) { + int end = static_cast<int>(instructions_.size()); + DCHECK(block->code_start_ >= 0 && block->code_start_ < end); + block->code_end_ = end; +} + + +int InstructionSequence::AddInstruction(Instruction* instr, BasicBlock* block) { + // TODO(titzer): the order of these gaps is a holdover from Lithium. + GapInstruction* gap = GapInstruction::New(zone()); + if (instr->IsControl()) instructions_.push_back(gap); + int index = static_cast<int>(instructions_.size()); + instructions_.push_back(instr); + if (!instr->IsControl()) instructions_.push_back(gap); + if (instr->NeedsPointerMap()) { + DCHECK(instr->pointer_map() == NULL); + PointerMap* pointer_map = new (zone()) PointerMap(zone()); + pointer_map->set_instruction_position(index); + instr->set_pointer_map(pointer_map); + pointer_maps_.push_back(pointer_map); + } + return index; +} + + +BasicBlock* InstructionSequence::GetBasicBlock(int instruction_index) { + // TODO(turbofan): Optimize this. 
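AddInstruction above places the parallel-move gap before a control instruction but after every other instruction, since nothing may execute once a jump has been taken. A toy emitter showing the resulting order:

#include <cstdio>
#include <string>
#include <vector>

struct Instr {
  const char* name;
  bool is_control;
};

std::vector<std::string> out;

// Gaps host the register allocator's parallel moves; they go before controls
// and after everything else, mirroring InstructionSequence::AddInstruction.
void Add(const Instr& instr) {
  if (instr.is_control) out.push_back("gap");
  out.push_back(instr.name);
  if (!instr.is_control) out.push_back("gap");
}

int main() {
  Add({"add", false});
  Add({"jmp", true});
  for (const std::string& s : out) printf("%s ", s.c_str());  // add gap gap jmp
  printf("\n");
  return 0;
}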
+ for (;;) { + DCHECK_LE(0, instruction_index); + Instruction* instruction = InstructionAt(instruction_index--); + if (instruction->IsBlockStart()) { + return BlockStartInstruction::cast(instruction)->block(); + } + } +} + + +bool InstructionSequence::IsReference(int virtual_register) const { + return references_.find(virtual_register) != references_.end(); +} + + +bool InstructionSequence::IsDouble(int virtual_register) const { + return doubles_.find(virtual_register) != doubles_.end(); +} + + +void InstructionSequence::MarkAsReference(int virtual_register) { + references_.insert(virtual_register); +} + + +void InstructionSequence::MarkAsDouble(int virtual_register) { + doubles_.insert(virtual_register); +} + + +void InstructionSequence::AddGapMove(int index, InstructionOperand* from, + InstructionOperand* to) { + GapAt(index)->GetOrCreateParallelMove(GapInstruction::START, zone())->AddMove( + from, to, zone()); +} + + +int InstructionSequence::AddDeoptimizationEntry( + FrameStateDescriptor* descriptor) { + int deoptimization_id = static_cast<int>(deoptimization_entries_.size()); + deoptimization_entries_.push_back(descriptor); + return deoptimization_id; +} + +FrameStateDescriptor* InstructionSequence::GetDeoptimizationEntry( + int deoptimization_id) { + return deoptimization_entries_[deoptimization_id]; +} + + +int InstructionSequence::GetDeoptimizationEntryCount() { + return static_cast<int>(deoptimization_entries_.size()); +} + + +OStream& operator<<(OStream& os, const InstructionSequence& code) { + for (size_t i = 0; i < code.immediates_.size(); ++i) { + Constant constant = code.immediates_[i]; + os << "IMM#" << i << ": " << constant << "\n"; + } + int i = 0; + for (ConstantMap::const_iterator it = code.constants_.begin(); + it != code.constants_.end(); ++i, ++it) { + os << "CST#" << i << ": v" << it->first << " = " << it->second << "\n"; + } + for (int i = 0; i < code.BasicBlockCount(); i++) { + BasicBlock* block = code.BlockAt(i); + + int bid = block->id(); + os << "RPO#" << block->rpo_number_ << ": B" << bid; + CHECK(block->rpo_number_ == i); + if (block->IsLoopHeader()) { + os << " loop blocks: [" << block->rpo_number_ << ", " << block->loop_end_ + << ")"; + } + os << " instructions: [" << block->code_start_ << ", " << block->code_end_ + << ")\n predecessors:"; + + BasicBlock::Predecessors predecessors = block->predecessors(); + for (BasicBlock::Predecessors::iterator iter = predecessors.begin(); + iter != predecessors.end(); ++iter) { + os << " B" << (*iter)->id(); + } + os << "\n"; + + for (BasicBlock::const_iterator j = block->begin(); j != block->end(); + ++j) { + Node* phi = *j; + if (phi->opcode() != IrOpcode::kPhi) continue; + os << " phi: v" << phi->id() << " ="; + Node::Inputs inputs = phi->inputs(); + for (Node::Inputs::iterator iter(inputs.begin()); iter != inputs.end(); + ++iter) { + os << " v" << (*iter)->id(); + } + os << "\n"; + } + + ScopedVector<char> buf(32); + for (int j = block->first_instruction_index(); + j <= block->last_instruction_index(); j++) { + // TODO(svenpanne) Add some basic formatting to our streams. 
+ SNPrintF(buf, "%5d", j); + os << " " << buf.start() << ": " << *code.InstructionAt(j); + } + + os << " " << block->control_; + + if (block->control_input_ != NULL) { + os << " v" << block->control_input_->id(); + } + + BasicBlock::Successors successors = block->successors(); + for (BasicBlock::Successors::iterator iter = successors.begin(); + iter != successors.end(); ++iter) { + os << " B" << (*iter)->id(); + } + os << "\n"; + } + return os; +} + +} // namespace compiler +} // namespace internal +} // namespace v8 diff --git a/deps/v8/src/compiler/instruction.h b/deps/v8/src/compiler/instruction.h new file mode 100644 index 00000000000..7b357639efd --- /dev/null +++ b/deps/v8/src/compiler/instruction.h @@ -0,0 +1,871 @@ +// Copyright 2014 the V8 project authors. All rights reserved. +// Use of this source code is governed by a BSD-style license that can be +// found in the LICENSE file. + +#ifndef V8_COMPILER_INSTRUCTION_H_ +#define V8_COMPILER_INSTRUCTION_H_ + +#include <deque> +#include <map> +#include <set> + +// TODO(titzer): don't include the assembler? +#include "src/assembler.h" +#include "src/compiler/common-operator.h" +#include "src/compiler/frame.h" +#include "src/compiler/graph.h" +#include "src/compiler/instruction-codes.h" +#include "src/compiler/opcodes.h" +#include "src/compiler/schedule.h" +#include "src/zone-allocator.h" + +namespace v8 { +namespace internal { + +// Forward declarations. +class OStream; + +namespace compiler { + +// Forward declarations. +class Linkage; + +// A couple of reserved opcodes are used for internal use. +const InstructionCode kGapInstruction = -1; +const InstructionCode kBlockStartInstruction = -2; +const InstructionCode kSourcePositionInstruction = -3; + + +#define INSTRUCTION_OPERAND_LIST(V) \ + V(Constant, CONSTANT, 128) \ + V(Immediate, IMMEDIATE, 128) \ + V(StackSlot, STACK_SLOT, 128) \ + V(DoubleStackSlot, DOUBLE_STACK_SLOT, 128) \ + V(Register, REGISTER, Register::kNumRegisters) \ + V(DoubleRegister, DOUBLE_REGISTER, DoubleRegister::kMaxNumRegisters) + +class InstructionOperand : public ZoneObject { + public: + enum Kind { + INVALID, + UNALLOCATED, + CONSTANT, + IMMEDIATE, + STACK_SLOT, + DOUBLE_STACK_SLOT, + REGISTER, + DOUBLE_REGISTER + }; + + InstructionOperand() : value_(KindField::encode(INVALID)) {} + InstructionOperand(Kind kind, int index) { ConvertTo(kind, index); } + + Kind kind() const { return KindField::decode(value_); } + int index() const { return static_cast<int>(value_) >> KindField::kSize; } +#define INSTRUCTION_OPERAND_PREDICATE(name, type, number) \ + bool Is##name() const { return kind() == type; } + INSTRUCTION_OPERAND_LIST(INSTRUCTION_OPERAND_PREDICATE) + INSTRUCTION_OPERAND_PREDICATE(Unallocated, UNALLOCATED, 0) + INSTRUCTION_OPERAND_PREDICATE(Ignored, INVALID, 0) +#undef INSTRUCTION_OPERAND_PREDICATE + bool Equals(InstructionOperand* other) const { + return value_ == other->value_; + } + + void ConvertTo(Kind kind, int index) { + if (kind == REGISTER || kind == DOUBLE_REGISTER) DCHECK(index >= 0); + value_ = KindField::encode(kind); + value_ |= index << KindField::kSize; + DCHECK(this->index() == index); + } + + // Calls SetUpCache()/TearDownCache() for each subclass. 
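InstructionOperand above packs a 3-bit kind and an index into a single unsigned word; the index is recovered with a signed right shift so that negative indices (argument stack slots) survive the round trip. A standalone sketch of the packing:

#include <cassert>

enum Kind { INVALID, UNALLOCATED, CONSTANT, IMMEDIATE, STACK_SLOT };
const int kKindBits = 3;  // matches BitField<Kind, 0, 3>

unsigned Pack(Kind kind, int index) {
  // Shift in the unsigned domain to avoid undefined behavior on negatives.
  return static_cast<unsigned>(kind) |
         (static_cast<unsigned>(index) << kKindBits);
}

Kind KindOf(unsigned value) { return static_cast<Kind>(value & 7u); }

// Arithmetic right shift on the signed reinterpretation restores the sign;
// this is what the manual decode of the index relies on.
int IndexOf(unsigned value) { return static_cast<int>(value) >> kKindBits; }

int main() {
  unsigned op = Pack(STACK_SLOT, -3);  // arguments live at negative slots
  assert(KindOf(op) == STACK_SLOT);
  assert(IndexOf(op) == -3);
  return 0;
}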
+  static void SetUpCaches();
+  static void TearDownCaches();
+
+ protected:
+  typedef BitField<Kind, 0, 3> KindField;
+
+  unsigned value_;
+};
+
+OStream& operator<<(OStream& os, const InstructionOperand& op);
+
+class UnallocatedOperand : public InstructionOperand {
+ public:
+  enum BasicPolicy { FIXED_SLOT, EXTENDED_POLICY };
+
+  enum ExtendedPolicy {
+    NONE,
+    ANY,
+    FIXED_REGISTER,
+    FIXED_DOUBLE_REGISTER,
+    MUST_HAVE_REGISTER,
+    SAME_AS_FIRST_INPUT
+  };
+
+  // Lifetime of operand inside the instruction.
+  enum Lifetime {
+    // USED_AT_START operand is guaranteed to be live only at
+    // instruction start. Register allocator is free to assign the same register
+    // to some other operand used inside instruction (i.e. temporary or
+    // output).
+    USED_AT_START,
+
+    // USED_AT_END operand is treated as live until the end of
+    // instruction. This means that register allocator will not reuse its
+    // register for any other operand inside instruction.
+    USED_AT_END
+  };
+
+  explicit UnallocatedOperand(ExtendedPolicy policy)
+      : InstructionOperand(UNALLOCATED, 0) {
+    value_ |= BasicPolicyField::encode(EXTENDED_POLICY);
+    value_ |= ExtendedPolicyField::encode(policy);
+    value_ |= LifetimeField::encode(USED_AT_END);
+  }
+
+  UnallocatedOperand(BasicPolicy policy, int index)
+      : InstructionOperand(UNALLOCATED, 0) {
+    DCHECK(policy == FIXED_SLOT);
+    value_ |= BasicPolicyField::encode(policy);
+    value_ |= index << FixedSlotIndexField::kShift;
+    DCHECK(this->fixed_slot_index() == index);
+  }
+
+  UnallocatedOperand(ExtendedPolicy policy, int index)
+      : InstructionOperand(UNALLOCATED, 0) {
+    DCHECK(policy == FIXED_REGISTER || policy == FIXED_DOUBLE_REGISTER);
+    value_ |= BasicPolicyField::encode(EXTENDED_POLICY);
+    value_ |= ExtendedPolicyField::encode(policy);
+    value_ |= LifetimeField::encode(USED_AT_END);
+    value_ |= FixedRegisterField::encode(index);
+  }
+
+  UnallocatedOperand(ExtendedPolicy policy, Lifetime lifetime)
+      : InstructionOperand(UNALLOCATED, 0) {
+    value_ |= BasicPolicyField::encode(EXTENDED_POLICY);
+    value_ |= ExtendedPolicyField::encode(policy);
+    value_ |= LifetimeField::encode(lifetime);
+  }
+
+  UnallocatedOperand* CopyUnconstrained(Zone* zone) {
+    UnallocatedOperand* result = new (zone) UnallocatedOperand(ANY);
+    result->set_virtual_register(virtual_register());
+    return result;
+  }
+
+  static const UnallocatedOperand* cast(const InstructionOperand* op) {
+    DCHECK(op->IsUnallocated());
+    return static_cast<const UnallocatedOperand*>(op);
+  }
+
+  static UnallocatedOperand* cast(InstructionOperand* op) {
+    DCHECK(op->IsUnallocated());
+    return static_cast<UnallocatedOperand*>(op);
+  }
+
+  // The encoding used for UnallocatedOperand operands depends on the policy
+  // that is stored within the operand. The FIXED_SLOT policy uses a compact
+  // encoding because it accommodates a larger payload.
+  //
+  // For FIXED_SLOT policy:
+  //     +------------------------------------------+
+  //     |      slot_index       | vreg  | 0 | 001 |
+  //     +------------------------------------------+
+  //
+  // For all other (extended) policies:
+  //     +------------------------------------------+
+  //     | reg_index | L | PPP |  vreg  | 1 | 001 |    L ... Lifetime
+  //     +------------------------------------------+    P ... Policy
+  //
+  // The slot index is a signed value which requires us to decode it manually
+  // instead of using the BitField utility class.
+
+  // The superclass has a KindField.
+  STATIC_ASSERT(KindField::kSize == 3);
+
+  // BitFields for all unallocated operands.
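+  // Worked example (illustrative) of the FIXED_SLOT encoding documented
+  // above, exercised through the bit fields declared below:
+  //
+  //   UnallocatedOperand op(UnallocatedOperand::FIXED_SLOT, -2);
+  //   DCHECK(op.HasFixedSlotPolicy());
+  //   DCHECK_EQ(-2, op.fixed_slot_index());  // arithmetic shift keeps sign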
+ class BasicPolicyField : public BitField<BasicPolicy, 3, 1> {}; + class VirtualRegisterField : public BitField<unsigned, 4, 18> {}; + + // BitFields specific to BasicPolicy::FIXED_SLOT. + class FixedSlotIndexField : public BitField<int, 22, 10> {}; + + // BitFields specific to BasicPolicy::EXTENDED_POLICY. + class ExtendedPolicyField : public BitField<ExtendedPolicy, 22, 3> {}; + class LifetimeField : public BitField<Lifetime, 25, 1> {}; + class FixedRegisterField : public BitField<int, 26, 6> {}; + + static const int kMaxVirtualRegisters = VirtualRegisterField::kMax + 1; + static const int kFixedSlotIndexWidth = FixedSlotIndexField::kSize; + static const int kMaxFixedSlotIndex = (1 << (kFixedSlotIndexWidth - 1)) - 1; + static const int kMinFixedSlotIndex = -(1 << (kFixedSlotIndexWidth - 1)); + + // Predicates for the operand policy. + bool HasAnyPolicy() const { + return basic_policy() == EXTENDED_POLICY && extended_policy() == ANY; + } + bool HasFixedPolicy() const { + return basic_policy() == FIXED_SLOT || + extended_policy() == FIXED_REGISTER || + extended_policy() == FIXED_DOUBLE_REGISTER; + } + bool HasRegisterPolicy() const { + return basic_policy() == EXTENDED_POLICY && + extended_policy() == MUST_HAVE_REGISTER; + } + bool HasSameAsInputPolicy() const { + return basic_policy() == EXTENDED_POLICY && + extended_policy() == SAME_AS_FIRST_INPUT; + } + bool HasFixedSlotPolicy() const { return basic_policy() == FIXED_SLOT; } + bool HasFixedRegisterPolicy() const { + return basic_policy() == EXTENDED_POLICY && + extended_policy() == FIXED_REGISTER; + } + bool HasFixedDoubleRegisterPolicy() const { + return basic_policy() == EXTENDED_POLICY && + extended_policy() == FIXED_DOUBLE_REGISTER; + } + + // [basic_policy]: Distinguish between FIXED_SLOT and all other policies. + BasicPolicy basic_policy() const { return BasicPolicyField::decode(value_); } + + // [extended_policy]: Only for non-FIXED_SLOT. The finer-grained policy. + ExtendedPolicy extended_policy() const { + DCHECK(basic_policy() == EXTENDED_POLICY); + return ExtendedPolicyField::decode(value_); + } + + // [fixed_slot_index]: Only for FIXED_SLOT. + int fixed_slot_index() const { + DCHECK(HasFixedSlotPolicy()); + return static_cast<int>(value_) >> FixedSlotIndexField::kShift; + } + + // [fixed_register_index]: Only for FIXED_REGISTER or FIXED_DOUBLE_REGISTER. + int fixed_register_index() const { + DCHECK(HasFixedRegisterPolicy() || HasFixedDoubleRegisterPolicy()); + return FixedRegisterField::decode(value_); + } + + // [virtual_register]: The virtual register ID for this operand. + int virtual_register() const { return VirtualRegisterField::decode(value_); } + void set_virtual_register(unsigned id) { + value_ = VirtualRegisterField::update(value_, id); + } + + // [lifetime]: Only for non-FIXED_SLOT. + bool IsUsedAtStart() { + DCHECK(basic_policy() == EXTENDED_POLICY); + return LifetimeField::decode(value_) == USED_AT_START; + } +}; + + +class MoveOperands V8_FINAL { + public: + MoveOperands(InstructionOperand* source, InstructionOperand* destination) + : source_(source), destination_(destination) {} + + InstructionOperand* source() const { return source_; } + void set_source(InstructionOperand* operand) { source_ = operand; } + + InstructionOperand* destination() const { return destination_; } + void set_destination(InstructionOperand* operand) { destination_ = operand; } + + // The gap resolver marks moves as "in-progress" by clearing the + // destination (but not the source). 
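+  // (Illustrative) Once the resolver has cleared the destination, the
+  // predicate below reports the move as in-progress:
+  //
+  //   move.set_destination(NULL);
+  //   DCHECK(move.IsPending());  // source still set, destination cleared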
+  bool IsPending() const { return destination_ == NULL && source_ != NULL; }
+
+  // True if this move is a move into the given destination operand.
+  bool Blocks(InstructionOperand* operand) const {
+    return !IsEliminated() && source()->Equals(operand);
+  }
+
+  // A move is redundant if it's been eliminated, if its source and
+  // destination are the same, or if its destination is unneeded or constant.
+  bool IsRedundant() const {
+    return IsEliminated() || source_->Equals(destination_) || IsIgnored() ||
+           (destination_ != NULL && destination_->IsConstant());
+  }
+
+  bool IsIgnored() const {
+    return destination_ != NULL && destination_->IsIgnored();
+  }
+
+  // We clear both operands to indicate a move that's been eliminated.
+  void Eliminate() { source_ = destination_ = NULL; }
+  bool IsEliminated() const {
+    DCHECK(source_ != NULL || destination_ == NULL);
+    return source_ == NULL;
+  }
+
+ private:
+  InstructionOperand* source_;
+  InstructionOperand* destination_;
+};
+
+OStream& operator<<(OStream& os, const MoveOperands& mo);
+
+template <InstructionOperand::Kind kOperandKind, int kNumCachedOperands>
+class SubKindOperand V8_FINAL : public InstructionOperand {
+ public:
+  static SubKindOperand* Create(int index, Zone* zone) {
+    DCHECK(index >= 0);
+    if (index < kNumCachedOperands) return &cache[index];
+    return new (zone) SubKindOperand(index);
+  }
+
+  static SubKindOperand* cast(InstructionOperand* op) {
+    DCHECK(op->kind() == kOperandKind);
+    return reinterpret_cast<SubKindOperand*>(op);
+  }
+
+  static void SetUpCache();
+  static void TearDownCache();
+
+ private:
+  static SubKindOperand* cache;
+
+  SubKindOperand() : InstructionOperand() {}
+  explicit SubKindOperand(int index)
+      : InstructionOperand(kOperandKind, index) {}
+};
+
+
+#define INSTRUCTION_TYPEDEF_SUBKIND_OPERAND_CLASS(name, type, number) \
+  typedef SubKindOperand<InstructionOperand::type, number> name##Operand;
+INSTRUCTION_OPERAND_LIST(INSTRUCTION_TYPEDEF_SUBKIND_OPERAND_CLASS)
+#undef INSTRUCTION_TYPEDEF_SUBKIND_OPERAND_CLASS
+
+
+class ParallelMove V8_FINAL : public ZoneObject {
+ public:
+  explicit ParallelMove(Zone* zone) : move_operands_(4, zone) {}
+
+  void AddMove(InstructionOperand* from, InstructionOperand* to, Zone* zone) {
+    move_operands_.Add(MoveOperands(from, to), zone);
+  }
+
+  bool IsRedundant() const;
+
+  ZoneList<MoveOperands>* move_operands() { return &move_operands_; }
+  const ZoneList<MoveOperands>* move_operands() const {
+    return &move_operands_;
+  }
+
+ private:
+  ZoneList<MoveOperands> move_operands_;
+};
+
+OStream& operator<<(OStream& os, const ParallelMove& pm);
+
+class PointerMap V8_FINAL : public ZoneObject {
+ public:
+  explicit PointerMap(Zone* zone)
+      : pointer_operands_(8, zone),
+        untagged_operands_(0, zone),
+        instruction_position_(-1) {}
+
+  const ZoneList<InstructionOperand*>* GetNormalizedOperands() {
+    for (int i = 0; i < untagged_operands_.length(); ++i) {
+      RemovePointer(untagged_operands_[i]);
+    }
+    untagged_operands_.Clear();
+    return &pointer_operands_;
+  }
+  int instruction_position() const { return instruction_position_; }
+
+  void set_instruction_position(int pos) {
+    DCHECK(instruction_position_ == -1);
+    instruction_position_ = pos;
+  }
+
+  void RecordPointer(InstructionOperand* op, Zone* zone);
+  void RemovePointer(InstructionOperand* op);
+  void RecordUntagged(InstructionOperand* op, Zone* zone);
+
+ private:
+  friend OStream& operator<<(OStream& os, const PointerMap& pm);
+
+  ZoneList<InstructionOperand*> pointer_operands_;
+  ZoneList<InstructionOperand*>
untagged_operands_; + int instruction_position_; +}; + +OStream& operator<<(OStream& os, const PointerMap& pm); + +// TODO(titzer): s/PointerMap/ReferenceMap/ +class Instruction : public ZoneObject { + public: + size_t OutputCount() const { return OutputCountField::decode(bit_field_); } + InstructionOperand* Output() const { return OutputAt(0); } + InstructionOperand* OutputAt(size_t i) const { + DCHECK(i < OutputCount()); + return operands_[i]; + } + + size_t InputCount() const { return InputCountField::decode(bit_field_); } + InstructionOperand* InputAt(size_t i) const { + DCHECK(i < InputCount()); + return operands_[OutputCount() + i]; + } + + size_t TempCount() const { return TempCountField::decode(bit_field_); } + InstructionOperand* TempAt(size_t i) const { + DCHECK(i < TempCount()); + return operands_[OutputCount() + InputCount() + i]; + } + + InstructionCode opcode() const { return opcode_; } + ArchOpcode arch_opcode() const { return ArchOpcodeField::decode(opcode()); } + AddressingMode addressing_mode() const { + return AddressingModeField::decode(opcode()); + } + FlagsMode flags_mode() const { return FlagsModeField::decode(opcode()); } + FlagsCondition flags_condition() const { + return FlagsConditionField::decode(opcode()); + } + + // TODO(titzer): make control and call into flags. + static Instruction* New(Zone* zone, InstructionCode opcode) { + return New(zone, opcode, 0, NULL, 0, NULL, 0, NULL); + } + + static Instruction* New(Zone* zone, InstructionCode opcode, + size_t output_count, InstructionOperand** outputs, + size_t input_count, InstructionOperand** inputs, + size_t temp_count, InstructionOperand** temps) { + DCHECK(opcode >= 0); + DCHECK(output_count == 0 || outputs != NULL); + DCHECK(input_count == 0 || inputs != NULL); + DCHECK(temp_count == 0 || temps != NULL); + InstructionOperand* none = NULL; + USE(none); + int size = static_cast<int>(RoundUp(sizeof(Instruction), kPointerSize) + + (output_count + input_count + temp_count - 1) * + sizeof(none)); + return new (zone->New(size)) Instruction( + opcode, output_count, outputs, input_count, inputs, temp_count, temps); + } + + // TODO(titzer): another holdover from lithium days; register allocator + // should not need to know about control instructions. + Instruction* MarkAsControl() { + bit_field_ = IsControlField::update(bit_field_, true); + return this; + } + Instruction* MarkAsCall() { + bit_field_ = IsCallField::update(bit_field_, true); + return this; + } + bool IsControl() const { return IsControlField::decode(bit_field_); } + bool IsCall() const { return IsCallField::decode(bit_field_); } + bool NeedsPointerMap() const { return IsCall(); } + bool HasPointerMap() const { return pointer_map_ != NULL; } + + bool IsGapMoves() const { + return opcode() == kGapInstruction || opcode() == kBlockStartInstruction; + } + bool IsBlockStart() const { return opcode() == kBlockStartInstruction; } + bool IsSourcePosition() const { + return opcode() == kSourcePositionInstruction; + } + + bool ClobbersRegisters() const { return IsCall(); } + bool ClobbersTemps() const { return IsCall(); } + bool ClobbersDoubleRegisters() const { return IsCall(); } + PointerMap* pointer_map() const { return pointer_map_; } + + void set_pointer_map(PointerMap* map) { + DCHECK(NeedsPointerMap()); + DCHECK_EQ(NULL, pointer_map_); + pointer_map_ = map; + } + + // Placement new operator so that we can smash instructions into + // zone-allocated memory. 
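+  // (Illustrative) New() above over-allocates so that the one-element
+  // operands_ array grows into the trailing zone memory; an instruction with
+  // one output and two inputs occupies
+  //   RoundUp(sizeof(Instruction), kPointerSize) + 2 * kPointerSize
+  // bytes.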
+ void* operator new(size_t, void* location) { return location; } + + void operator delete(void* pointer, void* location) { UNREACHABLE(); } + + protected: + explicit Instruction(InstructionCode opcode) + : opcode_(opcode), + bit_field_(OutputCountField::encode(0) | InputCountField::encode(0) | + TempCountField::encode(0) | IsCallField::encode(false) | + IsControlField::encode(false)), + pointer_map_(NULL) {} + + Instruction(InstructionCode opcode, size_t output_count, + InstructionOperand** outputs, size_t input_count, + InstructionOperand** inputs, size_t temp_count, + InstructionOperand** temps) + : opcode_(opcode), + bit_field_(OutputCountField::encode(output_count) | + InputCountField::encode(input_count) | + TempCountField::encode(temp_count) | + IsCallField::encode(false) | IsControlField::encode(false)), + pointer_map_(NULL) { + for (size_t i = 0; i < output_count; ++i) { + operands_[i] = outputs[i]; + } + for (size_t i = 0; i < input_count; ++i) { + operands_[output_count + i] = inputs[i]; + } + for (size_t i = 0; i < temp_count; ++i) { + operands_[output_count + input_count + i] = temps[i]; + } + } + + protected: + typedef BitField<size_t, 0, 8> OutputCountField; + typedef BitField<size_t, 8, 16> InputCountField; + typedef BitField<size_t, 24, 6> TempCountField; + typedef BitField<bool, 30, 1> IsCallField; + typedef BitField<bool, 31, 1> IsControlField; + + InstructionCode opcode_; + uint32_t bit_field_; + PointerMap* pointer_map_; + InstructionOperand* operands_[1]; +}; + +OStream& operator<<(OStream& os, const Instruction& instr); + +// Represents moves inserted before an instruction due to register allocation. +// TODO(titzer): squash GapInstruction back into Instruction, since essentially +// every instruction can possibly have moves inserted before it. +class GapInstruction : public Instruction { + public: + enum InnerPosition { + BEFORE, + START, + END, + AFTER, + FIRST_INNER_POSITION = BEFORE, + LAST_INNER_POSITION = AFTER + }; + + ParallelMove* GetOrCreateParallelMove(InnerPosition pos, Zone* zone) { + if (parallel_moves_[pos] == NULL) { + parallel_moves_[pos] = new (zone) ParallelMove(zone); + } + return parallel_moves_[pos]; + } + + ParallelMove* GetParallelMove(InnerPosition pos) { + return parallel_moves_[pos]; + } + + static GapInstruction* New(Zone* zone) { + void* buffer = zone->New(sizeof(GapInstruction)); + return new (buffer) GapInstruction(kGapInstruction); + } + + static GapInstruction* cast(Instruction* instr) { + DCHECK(instr->IsGapMoves()); + return static_cast<GapInstruction*>(instr); + } + + static const GapInstruction* cast(const Instruction* instr) { + DCHECK(instr->IsGapMoves()); + return static_cast<const GapInstruction*>(instr); + } + + protected: + explicit GapInstruction(InstructionCode opcode) : Instruction(opcode) { + parallel_moves_[BEFORE] = NULL; + parallel_moves_[START] = NULL; + parallel_moves_[END] = NULL; + parallel_moves_[AFTER] = NULL; + } + + private: + friend OStream& operator<<(OStream& os, const Instruction& instr); + ParallelMove* parallel_moves_[LAST_INNER_POSITION + 1]; +}; + + +// This special kind of gap move instruction represents the beginning of a +// block of code. +// TODO(titzer): move code_start and code_end from BasicBlock to here. 
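+// (Illustrative) The label() below gives the code generator a bind point for
+// the block: forward branches to the block can be assembled against this
+// label before any of the block's instructions have been emitted.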
+class BlockStartInstruction V8_FINAL : public GapInstruction { + public: + BasicBlock* block() const { return block_; } + Label* label() { return &label_; } + + static BlockStartInstruction* New(Zone* zone, BasicBlock* block) { + void* buffer = zone->New(sizeof(BlockStartInstruction)); + return new (buffer) BlockStartInstruction(block); + } + + static BlockStartInstruction* cast(Instruction* instr) { + DCHECK(instr->IsBlockStart()); + return static_cast<BlockStartInstruction*>(instr); + } + + private: + explicit BlockStartInstruction(BasicBlock* block) + : GapInstruction(kBlockStartInstruction), block_(block) {} + + BasicBlock* block_; + Label label_; +}; + + +class SourcePositionInstruction V8_FINAL : public Instruction { + public: + static SourcePositionInstruction* New(Zone* zone, SourcePosition position) { + void* buffer = zone->New(sizeof(SourcePositionInstruction)); + return new (buffer) SourcePositionInstruction(position); + } + + SourcePosition source_position() const { return source_position_; } + + static SourcePositionInstruction* cast(Instruction* instr) { + DCHECK(instr->IsSourcePosition()); + return static_cast<SourcePositionInstruction*>(instr); + } + + static const SourcePositionInstruction* cast(const Instruction* instr) { + DCHECK(instr->IsSourcePosition()); + return static_cast<const SourcePositionInstruction*>(instr); + } + + private: + explicit SourcePositionInstruction(SourcePosition source_position) + : Instruction(kSourcePositionInstruction), + source_position_(source_position) { + DCHECK(!source_position_.IsInvalid()); + DCHECK(!source_position_.IsUnknown()); + } + + SourcePosition source_position_; +}; + + +class Constant V8_FINAL { + public: + enum Type { kInt32, kInt64, kFloat64, kExternalReference, kHeapObject }; + + explicit Constant(int32_t v) : type_(kInt32), value_(v) {} + explicit Constant(int64_t v) : type_(kInt64), value_(v) {} + explicit Constant(double v) : type_(kFloat64), value_(BitCast<int64_t>(v)) {} + explicit Constant(ExternalReference ref) + : type_(kExternalReference), value_(BitCast<intptr_t>(ref)) {} + explicit Constant(Handle<HeapObject> obj) + : type_(kHeapObject), value_(BitCast<intptr_t>(obj)) {} + + Type type() const { return type_; } + + int32_t ToInt32() const { + DCHECK_EQ(kInt32, type()); + return static_cast<int32_t>(value_); + } + + int64_t ToInt64() const { + if (type() == kInt32) return ToInt32(); + DCHECK_EQ(kInt64, type()); + return value_; + } + + double ToFloat64() const { + if (type() == kInt32) return ToInt32(); + DCHECK_EQ(kFloat64, type()); + return BitCast<double>(value_); + } + + ExternalReference ToExternalReference() const { + DCHECK_EQ(kExternalReference, type()); + return BitCast<ExternalReference>(static_cast<intptr_t>(value_)); + } + + Handle<HeapObject> ToHeapObject() const { + DCHECK_EQ(kHeapObject, type()); + return BitCast<Handle<HeapObject> >(static_cast<intptr_t>(value_)); + } + + private: + Type type_; + int64_t value_; +}; + + +class FrameStateDescriptor : public ZoneObject { + public: + FrameStateDescriptor(BailoutId bailout_id, int parameters_count, + int locals_count, int stack_count) + : bailout_id_(bailout_id), + parameters_count_(parameters_count), + locals_count_(locals_count), + stack_count_(stack_count) {} + + BailoutId bailout_id() const { return bailout_id_; } + int parameters_count() { return parameters_count_; } + int locals_count() { return locals_count_; } + int stack_count() { return stack_count_; } + + int size() { return parameters_count_ + locals_count_ + stack_count_; } + + private: + 
BailoutId bailout_id_; + int parameters_count_; + int locals_count_; + int stack_count_; +}; + +OStream& operator<<(OStream& os, const Constant& constant); + +typedef std::deque<Constant, zone_allocator<Constant> > ConstantDeque; +typedef std::map<int, Constant, std::less<int>, + zone_allocator<std::pair<int, Constant> > > ConstantMap; + + +typedef std::deque<Instruction*, zone_allocator<Instruction*> > + InstructionDeque; +typedef std::deque<PointerMap*, zone_allocator<PointerMap*> > PointerMapDeque; +typedef std::vector<FrameStateDescriptor*, + zone_allocator<FrameStateDescriptor*> > + DeoptimizationVector; + + +// Represents architecture-specific generated code before, during, and after +// register allocation. +// TODO(titzer): s/IsDouble/IsFloat64/ +class InstructionSequence V8_FINAL { + public: + InstructionSequence(Linkage* linkage, Graph* graph, Schedule* schedule) + : graph_(graph), + linkage_(linkage), + schedule_(schedule), + constants_(ConstantMap::key_compare(), + ConstantMap::allocator_type(zone())), + immediates_(ConstantDeque::allocator_type(zone())), + instructions_(InstructionDeque::allocator_type(zone())), + next_virtual_register_(graph->NodeCount()), + pointer_maps_(PointerMapDeque::allocator_type(zone())), + doubles_(std::less<int>(), VirtualRegisterSet::allocator_type(zone())), + references_(std::less<int>(), + VirtualRegisterSet::allocator_type(zone())), + deoptimization_entries_(DeoptimizationVector::allocator_type(zone())) {} + + int NextVirtualRegister() { return next_virtual_register_++; } + int VirtualRegisterCount() const { return next_virtual_register_; } + + int ValueCount() const { return graph_->NodeCount(); } + + int BasicBlockCount() const { + return static_cast<int>(schedule_->rpo_order()->size()); + } + + BasicBlock* BlockAt(int rpo_number) const { + return (*schedule_->rpo_order())[rpo_number]; + } + + BasicBlock* GetContainingLoop(BasicBlock* block) { + return block->loop_header_; + } + + int GetLoopEnd(BasicBlock* block) const { return block->loop_end_; } + + BasicBlock* GetBasicBlock(int instruction_index); + + int GetVirtualRegister(Node* node) const { return node->id(); } + + bool IsReference(int virtual_register) const; + bool IsDouble(int virtual_register) const; + + void MarkAsReference(int virtual_register); + void MarkAsDouble(int virtual_register); + + void AddGapMove(int index, InstructionOperand* from, InstructionOperand* to); + + Label* GetLabel(BasicBlock* block); + BlockStartInstruction* GetBlockStart(BasicBlock* block); + + typedef InstructionDeque::const_iterator const_iterator; + const_iterator begin() const { return instructions_.begin(); } + const_iterator end() const { return instructions_.end(); } + + GapInstruction* GapAt(int index) const { + return GapInstruction::cast(InstructionAt(index)); + } + bool IsGapAt(int index) const { return InstructionAt(index)->IsGapMoves(); } + Instruction* InstructionAt(int index) const { + DCHECK(index >= 0); + DCHECK(index < static_cast<int>(instructions_.size())); + return instructions_[index]; + } + + Frame* frame() { return &frame_; } + Graph* graph() const { return graph_; } + Isolate* isolate() const { return zone()->isolate(); } + Linkage* linkage() const { return linkage_; } + Schedule* schedule() const { return schedule_; } + const PointerMapDeque* pointer_maps() const { return &pointer_maps_; } + Zone* zone() const { return graph_->zone(); } + + // Used by the code generator while adding instructions. 
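+  // Typical emission sequence (illustrative):
+  //
+  //   sequence.StartBlock(block);
+  //   sequence.AddInstruction(instr, block);  // repeated per instruction
+  //   sequence.EndBlock(block);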
+ int AddInstruction(Instruction* instr, BasicBlock* block); + void StartBlock(BasicBlock* block); + void EndBlock(BasicBlock* block); + + void AddConstant(int virtual_register, Constant constant) { + DCHECK(constants_.find(virtual_register) == constants_.end()); + constants_.insert(std::make_pair(virtual_register, constant)); + } + Constant GetConstant(int virtual_register) const { + ConstantMap::const_iterator it = constants_.find(virtual_register); + DCHECK(it != constants_.end()); + DCHECK_EQ(virtual_register, it->first); + return it->second; + } + + typedef ConstantDeque Immediates; + const Immediates& immediates() const { return immediates_; } + + int AddImmediate(Constant constant) { + int index = static_cast<int>(immediates_.size()); + immediates_.push_back(constant); + return index; + } + Constant GetImmediate(int index) const { + DCHECK(index >= 0); + DCHECK(index < static_cast<int>(immediates_.size())); + return immediates_[index]; + } + + int AddDeoptimizationEntry(FrameStateDescriptor* descriptor); + FrameStateDescriptor* GetDeoptimizationEntry(int deoptimization_id); + int GetDeoptimizationEntryCount(); + + private: + friend OStream& operator<<(OStream& os, const InstructionSequence& code); + + typedef std::set<int, std::less<int>, ZoneIntAllocator> VirtualRegisterSet; + + Graph* graph_; + Linkage* linkage_; + Schedule* schedule_; + ConstantMap constants_; + ConstantDeque immediates_; + InstructionDeque instructions_; + int next_virtual_register_; + PointerMapDeque pointer_maps_; + VirtualRegisterSet doubles_; + VirtualRegisterSet references_; + Frame frame_; + DeoptimizationVector deoptimization_entries_; +}; + +OStream& operator<<(OStream& os, const InstructionSequence& code); + +} // namespace compiler +} // namespace internal +} // namespace v8 + +#endif // V8_COMPILER_INSTRUCTION_H_ diff --git a/deps/v8/src/compiler/ir-operations.txt b/deps/v8/src/compiler/ir-operations.txt new file mode 100644 index 00000000000..e69de29bb2d diff --git a/deps/v8/src/compiler/js-context-specialization.cc b/deps/v8/src/compiler/js-context-specialization.cc new file mode 100644 index 00000000000..bdf142763a8 --- /dev/null +++ b/deps/v8/src/compiler/js-context-specialization.cc @@ -0,0 +1,157 @@ +// Copyright 2014 the V8 project authors. All rights reserved. +// Use of this source code is governed by a BSD-style license that can be +// found in the LICENSE file. + +#include "src/compiler/common-operator.h" +#include "src/compiler/generic-node-inl.h" +#include "src/compiler/graph-inl.h" +#include "src/compiler/js-context-specialization.h" +#include "src/compiler/js-operator.h" +#include "src/compiler/node-aux-data-inl.h" +#include "src/compiler/node-matchers.h" +#include "src/compiler/node-properties-inl.h" + +namespace v8 { +namespace internal { +namespace compiler { + +// TODO(titzer): factor this out to a common routine with js-typed-lowering. +static void ReplaceEffectfulWithValue(Node* node, Node* value) { + Node* effect = NULL; + if (OperatorProperties::HasEffectInput(node->op())) { + effect = NodeProperties::GetEffectInput(node); + } + + // Requires distinguishing between value and effect edges. 
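+  // (Illustrative) For a node with one effect use S and one value use V, the
+  // loop below rewires S to the node's incoming effect and V to {value},
+  // leaving the replaced node without uses.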
+ UseIter iter = node->uses().begin(); + while (iter != node->uses().end()) { + if (NodeProperties::IsEffectEdge(iter.edge())) { + DCHECK_NE(NULL, effect); + iter = iter.UpdateToAndIncrement(effect); + } else { + iter = iter.UpdateToAndIncrement(value); + } + } +} + + +class ContextSpecializationVisitor : public NullNodeVisitor { + public: + explicit ContextSpecializationVisitor(JSContextSpecializer* spec) + : spec_(spec) {} + + GenericGraphVisit::Control Post(Node* node) { + switch (node->opcode()) { + case IrOpcode::kJSLoadContext: { + Reduction r = spec_->ReduceJSLoadContext(node); + if (r.Changed() && r.replacement() != node) { + ReplaceEffectfulWithValue(node, r.replacement()); + } + break; + } + case IrOpcode::kJSStoreContext: { + Reduction r = spec_->ReduceJSStoreContext(node); + if (r.Changed() && r.replacement() != node) { + ReplaceEffectfulWithValue(node, r.replacement()); + } + break; + } + default: + break; + } + return GenericGraphVisit::CONTINUE; + } + + private: + JSContextSpecializer* spec_; +}; + + +void JSContextSpecializer::SpecializeToContext() { + ReplaceEffectfulWithValue(context_, jsgraph_->Constant(info_->context())); + + ContextSpecializationVisitor visitor(this); + jsgraph_->graph()->VisitNodeInputsFromEnd(&visitor); +} + + +Reduction JSContextSpecializer::ReduceJSLoadContext(Node* node) { + DCHECK_EQ(IrOpcode::kJSLoadContext, node->opcode()); + + ValueMatcher<Handle<Context> > match(NodeProperties::GetValueInput(node, 0)); + // If the context is not constant, no reduction can occur. + if (!match.HasValue()) { + return Reducer::NoChange(); + } + + ContextAccess access = OpParameter<ContextAccess>(node); + + // Find the right parent context. + Context* context = *match.Value(); + for (int i = access.depth(); i > 0; --i) { + context = context->previous(); + } + + // If the access itself is mutable, only fold-in the parent. + if (!access.immutable()) { + // The access does not have to look up a parent, nothing to fold. + if (access.depth() == 0) { + return Reducer::NoChange(); + } + Operator* op = jsgraph_->javascript()->LoadContext(0, access.index(), + access.immutable()); + node->set_op(op); + Handle<Object> context_handle = Handle<Object>(context, info_->isolate()); + node->ReplaceInput(0, jsgraph_->Constant(context_handle)); + return Reducer::Changed(node); + } + Handle<Object> value = + Handle<Object>(context->get(access.index()), info_->isolate()); + + // Even though the context slot is immutable, the context might have escaped + // before the function to which it belongs has initialized the slot. + // We must be conservative and check if the value in the slot is currently the + // hole or undefined. If it is neither of these, then it must be initialized. + if (value->IsUndefined() || value->IsTheHole()) { + return Reducer::NoChange(); + } + + // Success. The context load can be replaced with the constant. + // TODO(titzer): record the specialization for sharing code across multiple + // contexts that have the same value in the corresponding context slot. + return Reducer::Replace(jsgraph_->Constant(value)); +} + + +Reduction JSContextSpecializer::ReduceJSStoreContext(Node* node) { + DCHECK_EQ(IrOpcode::kJSStoreContext, node->opcode()); + + ValueMatcher<Handle<Context> > match(NodeProperties::GetValueInput(node, 0)); + // If the context is not constant, no reduction can occur. 
+ if (!match.HasValue()) { + return Reducer::NoChange(); + } + + ContextAccess access = OpParameter<ContextAccess>(node); + + // The access does not have to look up a parent, nothing to fold. + if (access.depth() == 0) { + return Reducer::NoChange(); + } + + // Find the right parent context. + Context* context = *match.Value(); + for (int i = access.depth(); i > 0; --i) { + context = context->previous(); + } + + Operator* op = jsgraph_->javascript()->StoreContext(0, access.index()); + node->set_op(op); + Handle<Object> new_context_handle = Handle<Object>(context, info_->isolate()); + node->ReplaceInput(0, jsgraph_->Constant(new_context_handle)); + + return Reducer::Changed(node); +} +} +} +} // namespace v8::internal::compiler diff --git a/deps/v8/src/compiler/js-context-specialization.h b/deps/v8/src/compiler/js-context-specialization.h new file mode 100644 index 00000000000..b8b50ed6c36 --- /dev/null +++ b/deps/v8/src/compiler/js-context-specialization.h @@ -0,0 +1,37 @@ +// Copyright 2014 the V8 project authors. All rights reserved. +// Use of this source code is governed by a BSD-style license that can be +// found in the LICENSE file. + +#ifndef V8_COMPILER_JS_CONTEXT_SPECIALIZATION_H_ +#define V8_COMPILER_JS_CONTEXT_SPECIALIZATION_H_ + +#include "src/compiler/graph-reducer.h" +#include "src/compiler/js-graph.h" +#include "src/contexts.h" +#include "src/v8.h" + +namespace v8 { +namespace internal { +namespace compiler { + +// Specializes a given JSGraph to a given context, potentially constant folding +// some {LoadContext} nodes or strength reducing some {StoreContext} nodes. +class JSContextSpecializer { + public: + JSContextSpecializer(CompilationInfo* info, JSGraph* jsgraph, Node* context) + : info_(info), jsgraph_(jsgraph), context_(context) {} + + void SpecializeToContext(); + Reduction ReduceJSLoadContext(Node* node); + Reduction ReduceJSStoreContext(Node* node); + + private: + CompilationInfo* info_; + JSGraph* jsgraph_; + Node* context_; +}; +} +} +} // namespace v8::internal::compiler + +#endif // V8_COMPILER_JS_CONTEXT_SPECIALIZATION_H_ diff --git a/deps/v8/src/compiler/js-generic-lowering.cc b/deps/v8/src/compiler/js-generic-lowering.cc new file mode 100644 index 00000000000..68cc1cea905 --- /dev/null +++ b/deps/v8/src/compiler/js-generic-lowering.cc @@ -0,0 +1,550 @@ +// Copyright 2014 the V8 project authors. All rights reserved. +// Use of this source code is governed by a BSD-style license that can be +// found in the LICENSE file. + +#include "src/code-stubs.h" +#include "src/compiler/common-operator.h" +#include "src/compiler/graph-inl.h" +#include "src/compiler/js-generic-lowering.h" +#include "src/compiler/machine-operator.h" +#include "src/compiler/node-aux-data-inl.h" +#include "src/compiler/node-properties-inl.h" +#include "src/unique.h" + +namespace v8 { +namespace internal { +namespace compiler { + + +// TODO(mstarzinger): This is a temporary workaround for non-hydrogen stubs for +// which we don't have an interface descriptor yet. Use ReplaceWithICStubCall +// once these stub have been made into a HydrogenCodeStub. +template <typename T> +static CodeStubInterfaceDescriptor* GetInterfaceDescriptor(Isolate* isolate, + T* stub) { + CodeStub::Major key = static_cast<CodeStub*>(stub)->MajorKey(); + CodeStubInterfaceDescriptor* d = isolate->code_stub_interface_descriptor(key); + stub->InitializeInterfaceDescriptor(d); + return d; +} + + +// TODO(mstarzinger): This is a temporary shim to be able to call an IC stub +// which doesn't have an interface descriptor yet. 
It mimics a hydrogen code +// stub for the underlying IC stub code. +class LoadICStubShim : public HydrogenCodeStub { + public: + LoadICStubShim(Isolate* isolate, ContextualMode contextual_mode) + : HydrogenCodeStub(isolate), contextual_mode_(contextual_mode) { + i::compiler::GetInterfaceDescriptor(isolate, this); + } + + virtual Handle<Code> GenerateCode() V8_OVERRIDE { + ExtraICState extra_state = LoadIC::ComputeExtraICState(contextual_mode_); + return LoadIC::initialize_stub(isolate(), extra_state); + } + + virtual void InitializeInterfaceDescriptor( + CodeStubInterfaceDescriptor* descriptor) V8_OVERRIDE { + Register registers[] = { InterfaceDescriptor::ContextRegister(), + LoadIC::ReceiverRegister(), + LoadIC::NameRegister() }; + descriptor->Initialize(MajorKey(), ARRAY_SIZE(registers), registers); + } + + private: + virtual Major MajorKey() const V8_OVERRIDE { return NoCache; } + virtual int NotMissMinorKey() const V8_OVERRIDE { return 0; } + virtual bool UseSpecialCache() V8_OVERRIDE { return true; } + + ContextualMode contextual_mode_; +}; + + +// TODO(mstarzinger): This is a temporary shim to be able to call an IC stub +// which doesn't have an interface descriptor yet. It mimics a hydrogen code +// stub for the underlying IC stub code. +class KeyedLoadICStubShim : public HydrogenCodeStub { + public: + explicit KeyedLoadICStubShim(Isolate* isolate) : HydrogenCodeStub(isolate) { + i::compiler::GetInterfaceDescriptor(isolate, this); + } + + virtual Handle<Code> GenerateCode() V8_OVERRIDE { + return isolate()->builtins()->KeyedLoadIC_Initialize(); + } + + virtual void InitializeInterfaceDescriptor( + CodeStubInterfaceDescriptor* descriptor) V8_OVERRIDE { + Register registers[] = { InterfaceDescriptor::ContextRegister(), + KeyedLoadIC::ReceiverRegister(), + KeyedLoadIC::NameRegister() }; + descriptor->Initialize(MajorKey(), ARRAY_SIZE(registers), registers); + } + + private: + virtual Major MajorKey() const V8_OVERRIDE { return NoCache; } + virtual int NotMissMinorKey() const V8_OVERRIDE { return 0; } + virtual bool UseSpecialCache() V8_OVERRIDE { return true; } +}; + + +// TODO(mstarzinger): This is a temporary shim to be able to call an IC stub +// which doesn't have an interface descriptor yet. It mimics a hydrogen code +// stub for the underlying IC stub code. +class StoreICStubShim : public HydrogenCodeStub { + public: + StoreICStubShim(Isolate* isolate, StrictMode strict_mode) + : HydrogenCodeStub(isolate), strict_mode_(strict_mode) { + i::compiler::GetInterfaceDescriptor(isolate, this); + } + + virtual Handle<Code> GenerateCode() V8_OVERRIDE { + return StoreIC::initialize_stub(isolate(), strict_mode_); + } + + virtual void InitializeInterfaceDescriptor( + CodeStubInterfaceDescriptor* descriptor) V8_OVERRIDE { + Register registers[] = { InterfaceDescriptor::ContextRegister(), + StoreIC::ReceiverRegister(), + StoreIC::NameRegister(), + StoreIC::ValueRegister() }; + descriptor->Initialize(MajorKey(), ARRAY_SIZE(registers), registers); + } + + private: + virtual Major MajorKey() const V8_OVERRIDE { return NoCache; } + virtual int NotMissMinorKey() const V8_OVERRIDE { return 0; } + virtual bool UseSpecialCache() V8_OVERRIDE { return true; } + + StrictMode strict_mode_; +}; + + +// TODO(mstarzinger): This is a temporary shim to be able to call an IC stub +// which doesn't have an interface descriptor yet. It mimics a hydrogen code +// stub for the underlying IC stub code. 
+class KeyedStoreICStubShim : public HydrogenCodeStub { + public: + KeyedStoreICStubShim(Isolate* isolate, StrictMode strict_mode) + : HydrogenCodeStub(isolate), strict_mode_(strict_mode) { + i::compiler::GetInterfaceDescriptor(isolate, this); + } + + virtual Handle<Code> GenerateCode() V8_OVERRIDE { + return strict_mode_ == SLOPPY + ? isolate()->builtins()->KeyedStoreIC_Initialize() + : isolate()->builtins()->KeyedStoreIC_Initialize_Strict(); + } + + virtual void InitializeInterfaceDescriptor( + CodeStubInterfaceDescriptor* descriptor) V8_OVERRIDE { + Register registers[] = { InterfaceDescriptor::ContextRegister(), + KeyedStoreIC::ReceiverRegister(), + KeyedStoreIC::NameRegister(), + KeyedStoreIC::ValueRegister() }; + descriptor->Initialize(MajorKey(), ARRAY_SIZE(registers), registers); + } + + private: + virtual Major MajorKey() const V8_OVERRIDE { return NoCache; } + virtual int NotMissMinorKey() const V8_OVERRIDE { return 0; } + virtual bool UseSpecialCache() V8_OVERRIDE { return true; } + + StrictMode strict_mode_; +}; + + +JSGenericLowering::JSGenericLowering(CompilationInfo* info, JSGraph* jsgraph, + MachineOperatorBuilder* machine, + SourcePositionTable* source_positions) + : LoweringBuilder(jsgraph->graph(), source_positions), + info_(info), + jsgraph_(jsgraph), + linkage_(new (jsgraph->zone()) Linkage(info)), + machine_(machine) {} + + +void JSGenericLowering::PatchOperator(Node* node, Operator* op) { + node->set_op(op); +} + + +void JSGenericLowering::PatchInsertInput(Node* node, int index, Node* input) { + node->InsertInput(zone(), index, input); +} + + +Node* JSGenericLowering::SmiConstant(int32_t immediate) { + return jsgraph()->SmiConstant(immediate); +} + + +Node* JSGenericLowering::Int32Constant(int immediate) { + return jsgraph()->Int32Constant(immediate); +} + + +Node* JSGenericLowering::CodeConstant(Handle<Code> code) { + return jsgraph()->HeapConstant(code); +} + + +Node* JSGenericLowering::FunctionConstant(Handle<JSFunction> function) { + return jsgraph()->HeapConstant(function); +} + + +Node* JSGenericLowering::ExternalConstant(ExternalReference ref) { + return jsgraph()->ExternalConstant(ref); +} + + +void JSGenericLowering::Lower(Node* node) { + Node* replacement = NULL; + // Dispatch according to the opcode. + switch (node->opcode()) { +#define DECLARE_CASE(x) \ + case IrOpcode::k##x: \ + replacement = Lower##x(node); \ + break; + DECLARE_CASE(Branch) + JS_OP_LIST(DECLARE_CASE) +#undef DECLARE_CASE + default: + // Nothing to see. + return; + } + + // Nothing to do if lowering was done by patching the existing node. + if (replacement == node) return; + + // Iterate through uses of the original node and replace uses accordingly. 
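+  // (Illustrative note) Every Lower* method in this file currently patches
+  // the node in place and returns it, so this replacement path is not yet
+  // exercised.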
+  UNIMPLEMENTED();
+}
+
+
+#define REPLACE_IC_STUB_CALL(op, StubDeclaration) \
+  Node* JSGenericLowering::Lower##op(Node* node) { \
+    StubDeclaration; \
+    ReplaceWithICStubCall(node, &stub); \
+    return node; \
+  }
+REPLACE_IC_STUB_CALL(JSBitwiseOr, BinaryOpICStub stub(isolate(), Token::BIT_OR))
+REPLACE_IC_STUB_CALL(JSBitwiseXor,
+                     BinaryOpICStub stub(isolate(), Token::BIT_XOR))
+REPLACE_IC_STUB_CALL(JSBitwiseAnd,
+                     BinaryOpICStub stub(isolate(), Token::BIT_AND))
+REPLACE_IC_STUB_CALL(JSShiftLeft, BinaryOpICStub stub(isolate(), Token::SHL))
+REPLACE_IC_STUB_CALL(JSShiftRight, BinaryOpICStub stub(isolate(), Token::SAR))
+REPLACE_IC_STUB_CALL(JSShiftRightLogical,
+                     BinaryOpICStub stub(isolate(), Token::SHR))
+REPLACE_IC_STUB_CALL(JSAdd, BinaryOpICStub stub(isolate(), Token::ADD))
+REPLACE_IC_STUB_CALL(JSSubtract, BinaryOpICStub stub(isolate(), Token::SUB))
+REPLACE_IC_STUB_CALL(JSMultiply, BinaryOpICStub stub(isolate(), Token::MUL))
+REPLACE_IC_STUB_CALL(JSDivide, BinaryOpICStub stub(isolate(), Token::DIV))
+REPLACE_IC_STUB_CALL(JSModulus, BinaryOpICStub stub(isolate(), Token::MOD))
+REPLACE_IC_STUB_CALL(JSToNumber, ToNumberStub stub(isolate()))
+#undef REPLACE_IC_STUB_CALL
+
+
+#define REPLACE_COMPARE_IC_CALL(op, token, pure) \
+  Node* JSGenericLowering::Lower##op(Node* node) { \
+    ReplaceWithCompareIC(node, token, pure); \
+    return node; \
+  }
+REPLACE_COMPARE_IC_CALL(JSEqual, Token::EQ, false)
+REPLACE_COMPARE_IC_CALL(JSNotEqual, Token::NE, false)
+REPLACE_COMPARE_IC_CALL(JSStrictEqual, Token::EQ_STRICT, true)
+REPLACE_COMPARE_IC_CALL(JSStrictNotEqual, Token::NE_STRICT, true)
+REPLACE_COMPARE_IC_CALL(JSLessThan, Token::LT, false)
+REPLACE_COMPARE_IC_CALL(JSGreaterThan, Token::GT, false)
+REPLACE_COMPARE_IC_CALL(JSLessThanOrEqual, Token::LTE, false)
+REPLACE_COMPARE_IC_CALL(JSGreaterThanOrEqual, Token::GTE, false)
+#undef REPLACE_COMPARE_IC_CALL
+
+
+#define REPLACE_RUNTIME_CALL(op, fun) \
+  Node* JSGenericLowering::Lower##op(Node* node) { \
+    ReplaceWithRuntimeCall(node, fun); \
+    return node; \
+  }
+REPLACE_RUNTIME_CALL(JSTypeOf, Runtime::kTypeof)
+REPLACE_RUNTIME_CALL(JSCreate, Runtime::kAbort)
+REPLACE_RUNTIME_CALL(JSCreateFunctionContext, Runtime::kNewFunctionContext)
+REPLACE_RUNTIME_CALL(JSCreateCatchContext, Runtime::kPushCatchContext)
+REPLACE_RUNTIME_CALL(JSCreateWithContext, Runtime::kPushWithContext)
+REPLACE_RUNTIME_CALL(JSCreateBlockContext, Runtime::kPushBlockContext)
+REPLACE_RUNTIME_CALL(JSCreateModuleContext, Runtime::kPushModuleContext)
+REPLACE_RUNTIME_CALL(JSCreateGlobalContext, Runtime::kAbort)
+#undef REPLACE_RUNTIME_CALL
+
+
+#define REPLACE_UNIMPLEMENTED(op) \
+  Node* JSGenericLowering::Lower##op(Node* node) { \
+    UNIMPLEMENTED(); \
+    return node; \
+  }
+REPLACE_UNIMPLEMENTED(JSToString)
+REPLACE_UNIMPLEMENTED(JSToName)
+REPLACE_UNIMPLEMENTED(JSYield)
+REPLACE_UNIMPLEMENTED(JSDebugger)
+#undef REPLACE_UNIMPLEMENTED
+
+
+static CallDescriptor::DeoptimizationSupport DeoptimizationSupportForNode(
+    Node* node) {
+  return OperatorProperties::CanLazilyDeoptimize(node->op())
+             ? CallDescriptor::kCanDeoptimize
+             : CallDescriptor::kCannotDeoptimize;
+}
+
+
+void JSGenericLowering::ReplaceWithCompareIC(Node* node, Token::Value token,
+                                             bool pure) {
+  BinaryOpICStub stub(isolate(), Token::ADD);  // TODO(mstarzinger): Hack.
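+  // (Illustrative) Only the descriptor's register configuration is borrowed
+  // from the BinaryOpICStub above; the code object actually invoked is the
+  // uninitialized CompareIC fetched below.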
+ CodeStubInterfaceDescriptor* d = stub.GetInterfaceDescriptor(); + CallDescriptor* desc_compare = linkage()->GetStubCallDescriptor(d); + Handle<Code> ic = CompareIC::GetUninitialized(isolate(), token); + Node* compare; + if (pure) { + // A pure (strict) comparison doesn't have an effect or control. + // But for the graph, we need to add these inputs. + compare = graph()->NewNode(common()->Call(desc_compare), CodeConstant(ic), + NodeProperties::GetValueInput(node, 0), + NodeProperties::GetValueInput(node, 1), + NodeProperties::GetContextInput(node), + graph()->start(), graph()->start()); + } else { + compare = graph()->NewNode(common()->Call(desc_compare), CodeConstant(ic), + NodeProperties::GetValueInput(node, 0), + NodeProperties::GetValueInput(node, 1), + NodeProperties::GetContextInput(node), + NodeProperties::GetEffectInput(node), + NodeProperties::GetControlInput(node)); + } + node->ReplaceInput(0, compare); + node->ReplaceInput(1, SmiConstant(token)); + ReplaceWithRuntimeCall(node, Runtime::kBooleanize); +} + + +void JSGenericLowering::ReplaceWithICStubCall(Node* node, + HydrogenCodeStub* stub) { + CodeStubInterfaceDescriptor* d = stub->GetInterfaceDescriptor(); + CallDescriptor* desc = linkage()->GetStubCallDescriptor( + d, 0, DeoptimizationSupportForNode(node)); + Node* stub_code = CodeConstant(stub->GetCode()); + PatchInsertInput(node, 0, stub_code); + PatchOperator(node, common()->Call(desc)); +} + + +void JSGenericLowering::ReplaceWithBuiltinCall(Node* node, + Builtins::JavaScript id, + int nargs) { + CallFunctionStub stub(isolate(), nargs - 1, NO_CALL_FUNCTION_FLAGS); + CodeStubInterfaceDescriptor* d = GetInterfaceDescriptor(isolate(), &stub); + CallDescriptor* desc = linkage()->GetStubCallDescriptor(d, nargs); + // TODO(mstarzinger): Accessing the builtins object this way prevents sharing + // of code across native contexts. Fix this by loading from given context. + Handle<JSFunction> function( + JSFunction::cast(info()->context()->builtins()->javascript_builtin(id))); + Node* stub_code = CodeConstant(stub.GetCode()); + Node* function_node = FunctionConstant(function); + PatchInsertInput(node, 0, stub_code); + PatchInsertInput(node, 1, function_node); + PatchOperator(node, common()->Call(desc)); +} + + +void JSGenericLowering::ReplaceWithRuntimeCall(Node* node, + Runtime::FunctionId f, + int nargs_override) { + Operator::Property props = node->op()->properties(); + const Runtime::Function* fun = Runtime::FunctionForId(f); + int nargs = (nargs_override < 0) ? 
fun->nargs : nargs_override; + CallDescriptor* desc = linkage()->GetRuntimeCallDescriptor( + f, nargs, props, DeoptimizationSupportForNode(node)); + Node* ref = ExternalConstant(ExternalReference(f, isolate())); + Node* arity = Int32Constant(nargs); + if (!centrystub_constant_.is_set()) { + centrystub_constant_.set(CodeConstant(CEntryStub(isolate(), 1).GetCode())); + } + PatchInsertInput(node, 0, centrystub_constant_.get()); + PatchInsertInput(node, nargs + 1, ref); + PatchInsertInput(node, nargs + 2, arity); + PatchOperator(node, common()->Call(desc)); +} + + +Node* JSGenericLowering::LowerBranch(Node* node) { + Node* test = graph()->NewNode(machine()->WordEqual(), node->InputAt(0), + jsgraph()->TrueConstant()); + node->ReplaceInput(0, test); + return node; +} + + +Node* JSGenericLowering::LowerJSUnaryNot(Node* node) { + ToBooleanStub stub(isolate(), ToBooleanStub::RESULT_AS_INVERSE_ODDBALL); + ReplaceWithICStubCall(node, &stub); + return node; +} + + +Node* JSGenericLowering::LowerJSToBoolean(Node* node) { + ToBooleanStub stub(isolate(), ToBooleanStub::RESULT_AS_ODDBALL); + ReplaceWithICStubCall(node, &stub); + return node; +} + + +Node* JSGenericLowering::LowerJSToObject(Node* node) { + ReplaceWithBuiltinCall(node, Builtins::TO_OBJECT, 1); + return node; +} + + +Node* JSGenericLowering::LowerJSLoadProperty(Node* node) { + KeyedLoadICStubShim stub(isolate()); + ReplaceWithICStubCall(node, &stub); + return node; +} + + +Node* JSGenericLowering::LowerJSLoadNamed(Node* node) { + LoadNamedParameters p = OpParameter<LoadNamedParameters>(node); + LoadICStubShim stub(isolate(), p.contextual_mode); + PatchInsertInput(node, 1, jsgraph()->HeapConstant(p.name)); + ReplaceWithICStubCall(node, &stub); + return node; +} + + +Node* JSGenericLowering::LowerJSStoreProperty(Node* node) { + // TODO(mstarzinger): The strict_mode needs to be carried along in the + // operator so that graphs are fully compositional for inlining. + StrictMode strict_mode = info()->strict_mode(); + KeyedStoreICStubShim stub(isolate(), strict_mode); + ReplaceWithICStubCall(node, &stub); + return node; +} + + +Node* JSGenericLowering::LowerJSStoreNamed(Node* node) { + PrintableUnique<Name> key = OpParameter<PrintableUnique<Name> >(node); + // TODO(mstarzinger): The strict_mode needs to be carried along in the + // operator so that graphs are fully compositional for inlining. 
+ StrictMode strict_mode = info()->strict_mode(); + StoreICStubShim stub(isolate(), strict_mode); + PatchInsertInput(node, 1, jsgraph()->HeapConstant(key)); + ReplaceWithICStubCall(node, &stub); + return node; +} + + +Node* JSGenericLowering::LowerJSDeleteProperty(Node* node) { + StrictMode strict_mode = OpParameter<StrictMode>(node); + PatchInsertInput(node, 2, SmiConstant(strict_mode)); + ReplaceWithBuiltinCall(node, Builtins::DELETE, 3); + return node; +} + + +Node* JSGenericLowering::LowerJSHasProperty(Node* node) { + ReplaceWithBuiltinCall(node, Builtins::IN, 2); + return node; +} + + +Node* JSGenericLowering::LowerJSInstanceOf(Node* node) { + InstanceofStub::Flags flags = static_cast<InstanceofStub::Flags>( + InstanceofStub::kReturnTrueFalseObject | + InstanceofStub::kArgsInRegisters); + InstanceofStub stub(isolate(), flags); + CodeStubInterfaceDescriptor* d = GetInterfaceDescriptor(isolate(), &stub); + CallDescriptor* desc = linkage()->GetStubCallDescriptor(d, 0); + Node* stub_code = CodeConstant(stub.GetCode()); + PatchInsertInput(node, 0, stub_code); + PatchOperator(node, common()->Call(desc)); + return node; +} + + +Node* JSGenericLowering::LowerJSLoadContext(Node* node) { + ContextAccess access = OpParameter<ContextAccess>(node); + // TODO(mstarzinger): Use simplified operators instead of machine operators + // here so that load/store optimization can be applied afterwards. + for (int i = 0; i < access.depth(); ++i) { + node->ReplaceInput( + 0, graph()->NewNode( + machine()->Load(kMachineTagged), + NodeProperties::GetValueInput(node, 0), + Int32Constant(Context::SlotOffset(Context::PREVIOUS_INDEX)), + NodeProperties::GetEffectInput(node))); + } + node->ReplaceInput(1, Int32Constant(Context::SlotOffset(access.index()))); + PatchOperator(node, machine()->Load(kMachineTagged)); + return node; +} + + +Node* JSGenericLowering::LowerJSStoreContext(Node* node) { + ContextAccess access = OpParameter<ContextAccess>(node); + // TODO(mstarzinger): Use simplified operators instead of machine operators + // here so that load/store optimization can be applied afterwards. 
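+  // (Illustrative) For a store at depth 2, the loop below emits two tagged
+  // loads that walk the Context::PREVIOUS links, after which the node is
+  // patched into a tagged store (with full write barrier) to the target slot.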
+ for (int i = 0; i < access.depth(); ++i) { + node->ReplaceInput( + 0, graph()->NewNode( + machine()->Load(kMachineTagged), + NodeProperties::GetValueInput(node, 0), + Int32Constant(Context::SlotOffset(Context::PREVIOUS_INDEX)), + NodeProperties::GetEffectInput(node))); + } + node->ReplaceInput(2, NodeProperties::GetValueInput(node, 1)); + node->ReplaceInput(1, Int32Constant(Context::SlotOffset(access.index()))); + PatchOperator(node, machine()->Store(kMachineTagged, kFullWriteBarrier)); + return node; +} + + +Node* JSGenericLowering::LowerJSCallConstruct(Node* node) { + int arity = OpParameter<int>(node); + CallConstructStub stub(isolate(), NO_CALL_CONSTRUCTOR_FLAGS); + CodeStubInterfaceDescriptor* d = GetInterfaceDescriptor(isolate(), &stub); + CallDescriptor* desc = linkage()->GetStubCallDescriptor( + d, arity, DeoptimizationSupportForNode(node)); + Node* stub_code = CodeConstant(stub.GetCode()); + Node* construct = NodeProperties::GetValueInput(node, 0); + PatchInsertInput(node, 0, stub_code); + PatchInsertInput(node, 1, Int32Constant(arity - 1)); + PatchInsertInput(node, 2, construct); + PatchInsertInput(node, 3, jsgraph()->UndefinedConstant()); + PatchOperator(node, common()->Call(desc)); + return node; +} + + +Node* JSGenericLowering::LowerJSCallFunction(Node* node) { + CallParameters p = OpParameter<CallParameters>(node); + CallFunctionStub stub(isolate(), p.arity - 2, p.flags); + CodeStubInterfaceDescriptor* d = GetInterfaceDescriptor(isolate(), &stub); + CallDescriptor* desc = linkage()->GetStubCallDescriptor( + d, p.arity - 1, DeoptimizationSupportForNode(node)); + Node* stub_code = CodeConstant(stub.GetCode()); + PatchInsertInput(node, 0, stub_code); + PatchOperator(node, common()->Call(desc)); + return node; +} + + +Node* JSGenericLowering::LowerJSCallRuntime(Node* node) { + Runtime::FunctionId function = OpParameter<Runtime::FunctionId>(node); + int arity = OperatorProperties::GetValueInputCount(node->op()); + ReplaceWithRuntimeCall(node, function, arity); + return node; +} +} +} +} // namespace v8::internal::compiler diff --git a/deps/v8/src/compiler/js-generic-lowering.h b/deps/v8/src/compiler/js-generic-lowering.h new file mode 100644 index 00000000000..e3113e541da --- /dev/null +++ b/deps/v8/src/compiler/js-generic-lowering.h @@ -0,0 +1,83 @@ +// Copyright 2014 the V8 project authors. All rights reserved. +// Use of this source code is governed by a BSD-style license that can be +// found in the LICENSE file. + +#ifndef V8_COMPILER_JS_GENERIC_LOWERING_H_ +#define V8_COMPILER_JS_GENERIC_LOWERING_H_ + +#include "src/v8.h" + +#include "src/allocation.h" +#include "src/compiler/graph.h" +#include "src/compiler/js-graph.h" +#include "src/compiler/lowering-builder.h" +#include "src/compiler/opcodes.h" +#include "src/unique.h" + +namespace v8 { +namespace internal { + +// Forward declarations. +class HydrogenCodeStub; + +namespace compiler { + +// Forward declarations. +class CommonOperatorBuilder; +class MachineOperatorBuilder; +class Linkage; + +// Lowers JS-level operators to runtime and IC calls in the "generic" case. +class JSGenericLowering : public LoweringBuilder { + public: + JSGenericLowering(CompilationInfo* info, JSGraph* graph, + MachineOperatorBuilder* machine, + SourcePositionTable* source_positions); + virtual ~JSGenericLowering() {} + + virtual void Lower(Node* node); + + protected: +// Dispatched depending on opcode. 
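+// (Illustrative) Each ALL_OP_LIST entry below expands to one declaration,
+// e.g. "Node* LowerJSAdd(Node* node);".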
+#define DECLARE_LOWER(x) Node* Lower##x(Node* node); + ALL_OP_LIST(DECLARE_LOWER) +#undef DECLARE_LOWER + + // Helpers to create new constant nodes. + Node* SmiConstant(int immediate); + Node* Int32Constant(int immediate); + Node* CodeConstant(Handle<Code> code); + Node* FunctionConstant(Handle<JSFunction> function); + Node* ExternalConstant(ExternalReference ref); + + // Helpers to patch existing nodes in the graph. + void PatchOperator(Node* node, Operator* new_op); + void PatchInsertInput(Node* node, int index, Node* input); + + // Helpers to replace existing nodes with a generic call. + void ReplaceWithCompareIC(Node* node, Token::Value token, bool pure); + void ReplaceWithICStubCall(Node* node, HydrogenCodeStub* stub); + void ReplaceWithBuiltinCall(Node* node, Builtins::JavaScript id, int args); + void ReplaceWithRuntimeCall(Node* node, Runtime::FunctionId f, int args = -1); + + Zone* zone() const { return graph()->zone(); } + Isolate* isolate() const { return zone()->isolate(); } + JSGraph* jsgraph() const { return jsgraph_; } + Graph* graph() const { return jsgraph()->graph(); } + Linkage* linkage() const { return linkage_; } + CompilationInfo* info() const { return info_; } + CommonOperatorBuilder* common() const { return jsgraph()->common(); } + MachineOperatorBuilder* machine() const { return machine_; } + + private: + CompilationInfo* info_; + JSGraph* jsgraph_; + Linkage* linkage_; + MachineOperatorBuilder* machine_; + SetOncePointer<Node> centrystub_constant_; +}; +} +} +} // namespace v8::internal::compiler + +#endif // V8_COMPILER_JS_GENERIC_LOWERING_H_ diff --git a/deps/v8/src/compiler/js-graph.cc b/deps/v8/src/compiler/js-graph.cc new file mode 100644 index 00000000000..2cebbc784ee --- /dev/null +++ b/deps/v8/src/compiler/js-graph.cc @@ -0,0 +1,174 @@ +// Copyright 2014 the V8 project authors. All rights reserved. +// Use of this source code is governed by a BSD-style license that can be +// found in the LICENSE file. 
+
+#include "src/compiler/js-graph.h"
+#include "src/compiler/node-properties-inl.h"
+#include "src/compiler/typer.h"
+
+namespace v8 {
+namespace internal {
+namespace compiler {
+
+Node* JSGraph::ImmovableHeapConstant(Handle<Object> object) {
+  PrintableUnique<Object> unique =
+      PrintableUnique<Object>::CreateImmovable(zone(), object);
+  return NewNode(common()->HeapConstant(unique));
+}
+
+
+Node* JSGraph::NewNode(Operator* op) {
+  Node* node = graph()->NewNode(op);
+  typer_->Init(node);
+  return node;
+}
+
+
+Node* JSGraph::UndefinedConstant() {
+  if (!undefined_constant_.is_set()) {
+    undefined_constant_.set(
+        ImmovableHeapConstant(factory()->undefined_value()));
+  }
+  return undefined_constant_.get();
+}
+
+
+Node* JSGraph::TheHoleConstant() {
+  if (!the_hole_constant_.is_set()) {
+    the_hole_constant_.set(ImmovableHeapConstant(factory()->the_hole_value()));
+  }
+  return the_hole_constant_.get();
+}
+
+
+Node* JSGraph::TrueConstant() {
+  if (!true_constant_.is_set()) {
+    true_constant_.set(ImmovableHeapConstant(factory()->true_value()));
+  }
+  return true_constant_.get();
+}
+
+
+Node* JSGraph::FalseConstant() {
+  if (!false_constant_.is_set()) {
+    false_constant_.set(ImmovableHeapConstant(factory()->false_value()));
+  }
+  return false_constant_.get();
+}
+
+
+Node* JSGraph::NullConstant() {
+  if (!null_constant_.is_set()) {
+    null_constant_.set(ImmovableHeapConstant(factory()->null_value()));
+  }
+  return null_constant_.get();
+}
+
+
+Node* JSGraph::ZeroConstant() {
+  if (!zero_constant_.is_set()) zero_constant_.set(NumberConstant(0.0));
+  return zero_constant_.get();
+}
+
+
+Node* JSGraph::OneConstant() {
+  if (!one_constant_.is_set()) one_constant_.set(NumberConstant(1.0));
+  return one_constant_.get();
+}
+
+
+Node* JSGraph::NaNConstant() {
+  if (!nan_constant_.is_set()) {
+    nan_constant_.set(NumberConstant(base::OS::nan_value()));
+  }
+  return nan_constant_.get();
+}
+
+
+Node* JSGraph::HeapConstant(PrintableUnique<Object> value) {
+  // TODO(turbofan): canonicalize heap constants using Unique<T>
+  return NewNode(common()->HeapConstant(value));
+}
+
+
+Node* JSGraph::HeapConstant(Handle<Object> value) {
+  // TODO(titzer): We could also match against the addresses of immortal
+  // immovables here, even without access to the heap, thus always
+  // canonicalizing references to them.
+  return HeapConstant(
+      PrintableUnique<Object>::CreateUninitialized(zone(), value));
+}
+
+
+Node* JSGraph::Constant(Handle<Object> value) {
+  // Dereference the handle to determine if a number constant or other
+  // canonicalized node can be used.
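+  // (Illustrative) A handle holding Smi 0 takes the IsNumber() branch below
+  // and yields the shared ZeroConstant() node, while an ordinary string
+  // handle falls through to HeapConstant().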
+  if (value->IsNumber()) {
+    return Constant(value->Number());
+  } else if (value->IsUndefined()) {
+    return UndefinedConstant();
+  } else if (value->IsTrue()) {
+    return TrueConstant();
+  } else if (value->IsFalse()) {
+    return FalseConstant();
+  } else if (value->IsNull()) {
+    return NullConstant();
+  } else if (value->IsTheHole()) {
+    return TheHoleConstant();
+  } else {
+    return HeapConstant(value);
+  }
+}
+
+
+Node* JSGraph::Constant(double value) {
+  if (BitCast<int64_t>(value) == BitCast<int64_t>(0.0)) return ZeroConstant();
+  if (BitCast<int64_t>(value) == BitCast<int64_t>(1.0)) return OneConstant();
+  return NumberConstant(value);
+}
+
+
+Node* JSGraph::Constant(int32_t value) {
+  if (value == 0) return ZeroConstant();
+  if (value == 1) return OneConstant();
+  return NumberConstant(value);
+}
+
+
+Node* JSGraph::Int32Constant(int32_t value) {
+  Node** loc = cache_.FindInt32Constant(value);
+  if (*loc == NULL) {
+    *loc = NewNode(common()->Int32Constant(value));
+  }
+  return *loc;
+}
+
+
+Node* JSGraph::NumberConstant(double value) {
+  Node** loc = cache_.FindNumberConstant(value);
+  if (*loc == NULL) {
+    *loc = NewNode(common()->NumberConstant(value));
+  }
+  return *loc;
+}
+
+
+Node* JSGraph::Float64Constant(double value) {
+  Node** loc = cache_.FindFloat64Constant(value);
+  if (*loc == NULL) {
+    *loc = NewNode(common()->Float64Constant(value));
+  }
+  return *loc;
+}
+
+
+Node* JSGraph::ExternalConstant(ExternalReference reference) {
+  Node** loc = cache_.FindExternalConstant(reference);
+  if (*loc == NULL) {
+    *loc = NewNode(common()->ExternalConstant(reference));
+  }
+  return *loc;
+}
+}  // namespace compiler
+}  // namespace internal
+}  // namespace v8
diff --git a/deps/v8/src/compiler/js-graph.h b/deps/v8/src/compiler/js-graph.h
new file mode 100644
index 00000000000..59a6b845ed3
--- /dev/null
+++ b/deps/v8/src/compiler/js-graph.h
@@ -0,0 +1,107 @@
+// Copyright 2014 the V8 project authors. All rights reserved.
+// Use of this source code is governed by a BSD-style license that can be
+// found in the LICENSE file.
+
+#ifndef V8_COMPILER_JS_GRAPH_H_
+#define V8_COMPILER_JS_GRAPH_H_
+
+#include "src/compiler/common-node-cache.h"
+#include "src/compiler/common-operator.h"
+#include "src/compiler/graph.h"
+#include "src/compiler/js-operator.h"
+#include "src/compiler/node-properties.h"
+
+namespace v8 {
+namespace internal {
+namespace compiler {
+
+class Typer;
+
+// Implements a facade on a Graph, enhancing the graph with JS-specific
+// notions, including a builder for JS* operators, canonicalized global
+// constants, and various helper methods.
+class JSGraph : public ZoneObject {
+ public:
+  JSGraph(Graph* graph, CommonOperatorBuilder* common, Typer* typer)
+      : graph_(graph),
+        common_(common),
+        javascript_(zone()),
+        typer_(typer),
+        cache_(zone()) {}
+
+  // Canonicalized global constants.
+  Node* UndefinedConstant();
+  Node* TheHoleConstant();
+  Node* TrueConstant();
+  Node* FalseConstant();
+  Node* NullConstant();
+  Node* ZeroConstant();
+  Node* OneConstant();
+  Node* NaNConstant();
+
+  // Creates a HeapConstant node, possibly canonicalized, without inspecting the
+  // object.
+  Node* HeapConstant(PrintableUnique<Object> value);
+
+  // Creates a HeapConstant node, possibly canonicalized, and may access the
+  // heap to inspect the object.
+  Node* HeapConstant(Handle<Object> value);
+
+  // Creates a Constant node of the appropriate type for the given object.
+  // Accesses the heap to inspect the object and determine whether one of the
+  // canonicalized globals or a number constant should be returned.
+  Node* Constant(Handle<Object> value);
+
+  // Creates a NumberConstant node, usually canonicalized.
+  Node* Constant(double value);
+
+  // Creates a NumberConstant node, usually canonicalized.
+  Node* Constant(int32_t value);
+
+  // Creates an Int32Constant node, usually canonicalized.
+  Node* Int32Constant(int32_t value);
+
+  // Creates a Float64Constant node, usually canonicalized.
+  Node* Float64Constant(double value);
+
+  // Creates an ExternalConstant node, usually canonicalized.
+  Node* ExternalConstant(ExternalReference ref);
+
+  Node* SmiConstant(int32_t immediate) {
+    DCHECK(Smi::IsValid(immediate));
+    return Constant(immediate);
+  }
+
+  JSOperatorBuilder* javascript() { return &javascript_; }
+  CommonOperatorBuilder* common() { return common_; }
+  Graph* graph() { return graph_; }
+  Zone* zone() { return graph()->zone(); }
+
+ private:
+  Graph* graph_;
+  CommonOperatorBuilder* common_;
+  JSOperatorBuilder javascript_;
+  Typer* typer_;
+
+  SetOncePointer<Node> undefined_constant_;
+  SetOncePointer<Node> the_hole_constant_;
+  SetOncePointer<Node> true_constant_;
+  SetOncePointer<Node> false_constant_;
+  SetOncePointer<Node> null_constant_;
+  SetOncePointer<Node> zero_constant_;
+  SetOncePointer<Node> one_constant_;
+  SetOncePointer<Node> nan_constant_;
+
+  CommonNodeCache cache_;
+
+  Node* ImmovableHeapConstant(Handle<Object> value);
+  Node* NumberConstant(double value);
+  Node* NewNode(Operator* op);
+
+  Factory* factory() { return zone()->isolate()->factory(); }
+};
+}  // namespace compiler
+}  // namespace internal
+}  // namespace v8
+
+#endif
diff --git a/deps/v8/src/compiler/js-operator.h b/deps/v8/src/compiler/js-operator.h
new file mode 100644
index 00000000000..fd9547d94a5
--- /dev/null
+++ b/deps/v8/src/compiler/js-operator.h
@@ -0,0 +1,214 @@
+// Copyright 2013 the V8 project authors. All rights reserved.
+// Use of this source code is governed by a BSD-style license that can be
+// found in the LICENSE file.
+
+#ifndef V8_COMPILER_JS_OPERATOR_H_
+#define V8_COMPILER_JS_OPERATOR_H_
+
+#include "src/compiler/linkage.h"
+#include "src/compiler/opcodes.h"
+#include "src/compiler/operator.h"
+#include "src/unique.h"
+#include "src/zone.h"
+
+namespace v8 {
+namespace internal {
+namespace compiler {
+
+// Defines the location of a context slot relative to a specific scope. This is
+// used as a parameter by JSLoadContext and JSStoreContext operators and allows
+// accessing a context-allocated variable without keeping track of the scope.
+class ContextAccess {
+ public:
+  ContextAccess(int depth, int index, bool immutable)
+      : immutable_(immutable), depth_(depth), index_(index) {
+    DCHECK(0 <= depth && depth <= kMaxUInt16);
+    DCHECK(0 <= index && static_cast<uint32_t>(index) <= kMaxUInt32);
+  }
+  int depth() const { return depth_; }
+  int index() const { return index_; }
+  bool immutable() const { return immutable_; }
+
+ private:
+  // For space reasons, we keep this tightly packed, otherwise we could just use
+  // a simple int/int/bool POD.
+  const bool immutable_;
+  const uint16_t depth_;
+  const uint32_t index_;
+};
+
+// Defines the property being loaded from an object by a named load. This is
+// used as a parameter by JSLoadNamed operators.
+struct LoadNamedParameters {
+  PrintableUnique<Name> name;
+  ContextualMode contextual_mode;
+};
+
+// Defines the arity and the call flags for a JavaScript function call.
This is +// used as a parameter by JSCall operators. +struct CallParameters { + int arity; + CallFunctionFlags flags; +}; + +// Interface for building JavaScript-level operators, e.g. directly from the +// AST. Most operators have no parameters, thus can be globally shared for all +// graphs. +class JSOperatorBuilder { + public: + explicit JSOperatorBuilder(Zone* zone) : zone_(zone) {} + +#define SIMPLE(name, properties, inputs, outputs) \ + return new (zone_) \ + SimpleOperator(IrOpcode::k##name, properties, inputs, outputs, #name); + +#define NOPROPS(name, inputs, outputs) \ + SIMPLE(name, Operator::kNoProperties, inputs, outputs) + +#define OP1(name, ptype, pname, properties, inputs, outputs) \ + return new (zone_) Operator1<ptype>(IrOpcode::k##name, properties, inputs, \ + outputs, #name, pname) + +#define BINOP(name) NOPROPS(name, 2, 1) +#define UNOP(name) NOPROPS(name, 1, 1) + +#define PURE_BINOP(name) SIMPLE(name, Operator::kPure, 2, 1) + + Operator* Equal() { BINOP(JSEqual); } + Operator* NotEqual() { BINOP(JSNotEqual); } + Operator* StrictEqual() { PURE_BINOP(JSStrictEqual); } + Operator* StrictNotEqual() { PURE_BINOP(JSStrictNotEqual); } + Operator* LessThan() { BINOP(JSLessThan); } + Operator* GreaterThan() { BINOP(JSGreaterThan); } + Operator* LessThanOrEqual() { BINOP(JSLessThanOrEqual); } + Operator* GreaterThanOrEqual() { BINOP(JSGreaterThanOrEqual); } + Operator* BitwiseOr() { BINOP(JSBitwiseOr); } + Operator* BitwiseXor() { BINOP(JSBitwiseXor); } + Operator* BitwiseAnd() { BINOP(JSBitwiseAnd); } + Operator* ShiftLeft() { BINOP(JSShiftLeft); } + Operator* ShiftRight() { BINOP(JSShiftRight); } + Operator* ShiftRightLogical() { BINOP(JSShiftRightLogical); } + Operator* Add() { BINOP(JSAdd); } + Operator* Subtract() { BINOP(JSSubtract); } + Operator* Multiply() { BINOP(JSMultiply); } + Operator* Divide() { BINOP(JSDivide); } + Operator* Modulus() { BINOP(JSModulus); } + + Operator* UnaryNot() { UNOP(JSUnaryNot); } + Operator* ToBoolean() { UNOP(JSToBoolean); } + Operator* ToNumber() { UNOP(JSToNumber); } + Operator* ToString() { UNOP(JSToString); } + Operator* ToName() { UNOP(JSToName); } + Operator* ToObject() { UNOP(JSToObject); } + Operator* Yield() { UNOP(JSYield); } + + Operator* Create() { SIMPLE(JSCreate, Operator::kEliminatable, 0, 1); } + + Operator* Call(int arguments, CallFunctionFlags flags) { + CallParameters parameters = {arguments, flags}; + OP1(JSCallFunction, CallParameters, parameters, Operator::kNoProperties, + arguments, 1); + } + + Operator* CallNew(int arguments) { + return new (zone_) + Operator1<int>(IrOpcode::kJSCallConstruct, Operator::kNoProperties, + arguments, 1, "JSCallConstruct", arguments); + } + + Operator* LoadProperty() { BINOP(JSLoadProperty); } + Operator* LoadNamed(PrintableUnique<Name> name, + ContextualMode contextual_mode = NOT_CONTEXTUAL) { + LoadNamedParameters parameters = {name, contextual_mode}; + OP1(JSLoadNamed, LoadNamedParameters, parameters, Operator::kNoProperties, + 1, 1); + } + + Operator* StoreProperty() { NOPROPS(JSStoreProperty, 3, 0); } + Operator* StoreNamed(PrintableUnique<Name> name) { + OP1(JSStoreNamed, PrintableUnique<Name>, name, Operator::kNoProperties, 2, + 0); + } + + Operator* DeleteProperty(StrictMode strict_mode) { + OP1(JSDeleteProperty, StrictMode, strict_mode, Operator::kNoProperties, 2, + 1); + } + + Operator* HasProperty() { NOPROPS(JSHasProperty, 2, 1); } + + Operator* LoadContext(uint16_t depth, uint32_t index, bool immutable) { + ContextAccess access(depth, index, immutable); + OP1(JSLoadContext, 
ContextAccess, access, + Operator::kEliminatable | Operator::kNoWrite, 1, 1); + } + Operator* StoreContext(uint16_t depth, uint32_t index) { + ContextAccess access(depth, index, false); + OP1(JSStoreContext, ContextAccess, access, Operator::kNoProperties, 2, 1); + } + + Operator* TypeOf() { SIMPLE(JSTypeOf, Operator::kPure, 1, 1); } + Operator* InstanceOf() { NOPROPS(JSInstanceOf, 2, 1); } + Operator* Debugger() { NOPROPS(JSDebugger, 0, 0); } + + // TODO(titzer): nail down the static parts of each of these context flavors. + Operator* CreateFunctionContext() { NOPROPS(JSCreateFunctionContext, 1, 1); } + Operator* CreateCatchContext(PrintableUnique<String> name) { + OP1(JSCreateCatchContext, PrintableUnique<String>, name, + Operator::kNoProperties, 1, 1); + } + Operator* CreateWithContext() { NOPROPS(JSCreateWithContext, 2, 1); } + Operator* CreateBlockContext() { NOPROPS(JSCreateBlockContext, 2, 1); } + Operator* CreateModuleContext() { NOPROPS(JSCreateModuleContext, 2, 1); } + Operator* CreateGlobalContext() { NOPROPS(JSCreateGlobalContext, 2, 1); } + + Operator* Runtime(Runtime::FunctionId function, int arguments) { + const Runtime::Function* f = Runtime::FunctionForId(function); + DCHECK(f->nargs == -1 || f->nargs == arguments); + OP1(JSCallRuntime, Runtime::FunctionId, function, Operator::kNoProperties, + arguments, f->result_size); + } + +#undef SIMPLE +#undef NOPROPS +#undef OP1 +#undef BINOP +#undef UNOP + + private: + Zone* zone_; +}; + +// Specialization for static parameters of type {ContextAccess}. +template <> +struct StaticParameterTraits<ContextAccess> { + static OStream& PrintTo(OStream& os, ContextAccess val) { // NOLINT + return os << val.depth() << "," << val.index() + << (val.immutable() ? ",imm" : ""); + } + static int HashCode(ContextAccess val) { + return (val.depth() << 16) | (val.index() & 0xffff); + } + static bool Equals(ContextAccess a, ContextAccess b) { + return a.immutable() == b.immutable() && a.depth() == b.depth() && + a.index() == b.index(); + } +}; + +// Specialization for static parameters of type {Runtime::FunctionId}. +template <> +struct StaticParameterTraits<Runtime::FunctionId> { + static OStream& PrintTo(OStream& os, Runtime::FunctionId val) { // NOLINT + const Runtime::Function* f = Runtime::FunctionForId(val); + return os << (f->name ? f->name : "?Runtime?"); + } + static int HashCode(Runtime::FunctionId val) { return static_cast<int>(val); } + static bool Equals(Runtime::FunctionId a, Runtime::FunctionId b) { + return a == b; + } +}; +} +} +} // namespace v8::internal::compiler + +#endif // V8_COMPILER_JS_OPERATOR_H_ diff --git a/deps/v8/src/compiler/js-typed-lowering.cc b/deps/v8/src/compiler/js-typed-lowering.cc new file mode 100644 index 00000000000..361cb94f058 --- /dev/null +++ b/deps/v8/src/compiler/js-typed-lowering.cc @@ -0,0 +1,604 @@ +// Copyright 2014 the V8 project authors. All rights reserved. +// Use of this source code is governed by a BSD-style license that can be +// found in the LICENSE file. 
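+//
+// JSTypedLowering rewrites JS-level operators into simplified or machine
+// operators when the recorded types of the inputs permit it. For example,
+// when neither input of a JSAdd can be a string:
+//
+//   JSAdd(a, b, context, effect, control)  ==>  NumberAdd(a', b')
+//
+// where a' and b' are the inputs converted to numbers, and the context,
+// effect, and control inputs have been stripped away.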
+ +#include "src/compiler/graph-inl.h" +#include "src/compiler/js-typed-lowering.h" +#include "src/compiler/node-aux-data-inl.h" +#include "src/compiler/node-properties-inl.h" +#include "src/types.h" + +namespace v8 { +namespace internal { +namespace compiler { + +// TODO(turbofan): js-typed-lowering improvements possible +// - immediately put in type bounds for all new nodes +// - relax effects from generic but not-side-effecting operations +// - relax effects for ToNumber(mixed) + +// Replace value uses of {node} with {value} and effect uses of {node} with +// {effect}. If {effect == NULL}, then use the effect input to {node}. +// TODO(titzer): move into a GraphEditor? +static void ReplaceUses(Node* node, Node* value, Node* effect) { + if (value == effect) { + // Effect and value updates are the same; no special iteration needed. + if (value != node) node->ReplaceUses(value); + return; + } + + if (effect == NULL) effect = NodeProperties::GetEffectInput(node); + + // The iteration requires distinguishing between value and effect edges. + UseIter iter = node->uses().begin(); + while (iter != node->uses().end()) { + if (NodeProperties::IsEffectEdge(iter.edge())) { + iter = iter.UpdateToAndIncrement(effect); + } else { + iter = iter.UpdateToAndIncrement(value); + } + } +} + + +// Relax the effects of {node} by immediately replacing effect uses of {node} +// with the effect input to {node}. +// TODO(turbofan): replace the effect input to {node} with {graph->start()}. +// TODO(titzer): move into a GraphEditor? +static void RelaxEffects(Node* node) { ReplaceUses(node, node, NULL); } + + +Reduction JSTypedLowering::ReplaceEagerly(Node* old, Node* node) { + ReplaceUses(old, node, node); + return Reducer::Changed(node); +} + + +// A helper class to simplify the process of reducing a single binop node with a +// JSOperator. This class manages the rewriting of context, control, and effect +// dependencies during lowering of a binop and contains numerous helper +// functions for matching the types of inputs to an operation. +class JSBinopReduction { + public: + JSBinopReduction(JSTypedLowering* lowering, Node* node) + : lowering_(lowering), + node_(node), + left_type_(NodeProperties::GetBounds(node->InputAt(0)).upper), + right_type_(NodeProperties::GetBounds(node->InputAt(1)).upper) {} + + void ConvertInputsToNumber() { + node_->ReplaceInput(0, ConvertToNumber(left())); + node_->ReplaceInput(1, ConvertToNumber(right())); + } + + void ConvertInputsToInt32(bool left_signed, bool right_signed) { + node_->ReplaceInput(0, ConvertToI32(left_signed, left())); + node_->ReplaceInput(1, ConvertToI32(right_signed, right())); + } + + void ConvertInputsToString() { + node_->ReplaceInput(0, ConvertToString(left())); + node_->ReplaceInput(1, ConvertToString(right())); + } + + // Convert inputs for bitwise shift operation (ES5 spec 11.7). + void ConvertInputsForShift(bool left_signed) { + node_->ReplaceInput(0, ConvertToI32(left_signed, left())); + Node* rnum = ConvertToI32(false, right()); + node_->ReplaceInput(1, graph()->NewNode(machine()->Word32And(), rnum, + jsgraph()->Int32Constant(0x1F))); + } + + void SwapInputs() { + Node* l = left(); + Node* r = right(); + node_->ReplaceInput(0, r); + node_->ReplaceInput(1, l); + std::swap(left_type_, right_type_); + } + + // Remove all effect and control inputs and outputs to this node and change + // to the pure operator {op}, possibly inserting a boolean inversion. 
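+  // For example, when JSEqual(a, b) becomes NumberEqual(a, b): effect uses of
+  // the node are rerouted to its incoming effect, the context, effect, and
+  // control inputs are removed, and only the two value inputs remain.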
+  Reduction ChangeToPureOperator(Operator* op, bool invert = false) {
+    DCHECK_EQ(0, OperatorProperties::GetEffectInputCount(op));
+    DCHECK_EQ(false, OperatorProperties::HasContextInput(op));
+    DCHECK_EQ(0, OperatorProperties::GetControlInputCount(op));
+    DCHECK_EQ(2, OperatorProperties::GetValueInputCount(op));
+
+    // Remove the effects from the node, if any, and update its effect usages.
+    if (OperatorProperties::GetEffectInputCount(node_->op()) > 0) {
+      RelaxEffects(node_);
+    }
+    // Remove the inputs corresponding to context, effect, and control.
+    NodeProperties::RemoveNonValueInputs(node_);
+    // Finally, update the operator to the new one.
+    node_->set_op(op);
+
+    if (invert) {
+      // Insert a boolean not to invert the value.
+      Node* value = graph()->NewNode(simplified()->BooleanNot(), node_);
+      node_->ReplaceUses(value);
+      // Note: ReplaceUses() smashes all uses, so smash it back here.
+      value->ReplaceInput(0, node_);
+      return lowering_->ReplaceWith(value);
+    }
+    return lowering_->Changed(node_);
+  }
+
+  bool OneInputIs(Type* t) { return left_type_->Is(t) || right_type_->Is(t); }
+
+  bool BothInputsAre(Type* t) {
+    return left_type_->Is(t) && right_type_->Is(t);
+  }
+
+  bool OneInputCannotBe(Type* t) {
+    return !left_type_->Maybe(t) || !right_type_->Maybe(t);
+  }
+
+  bool NeitherInputCanBe(Type* t) {
+    return !left_type_->Maybe(t) && !right_type_->Maybe(t);
+  }
+
+  Node* effect() { return NodeProperties::GetEffectInput(node_); }
+  Node* control() { return NodeProperties::GetControlInput(node_); }
+  Node* context() { return NodeProperties::GetContextInput(node_); }
+  Node* left() { return NodeProperties::GetValueInput(node_, 0); }
+  Node* right() { return NodeProperties::GetValueInput(node_, 1); }
+  Type* left_type() { return left_type_; }
+  Type* right_type() { return right_type_; }
+
+  SimplifiedOperatorBuilder* simplified() { return lowering_->simplified(); }
+  Graph* graph() { return lowering_->graph(); }
+  JSGraph* jsgraph() { return lowering_->jsgraph(); }
+  JSOperatorBuilder* javascript() { return lowering_->javascript(); }
+  MachineOperatorBuilder* machine() { return lowering_->machine(); }
+
+ private:
+  JSTypedLowering* lowering_;  // The containing lowering instance.
+  Node* node_;                 // The original node.
+  Type* left_type_;            // Cache of the left input's type.
+  Type* right_type_;           // Cache of the right input's type.
+
+  Node* ConvertToString(Node* node) {
+    // Avoid introducing too many eager ToString() operations.
+    Reduction reduced = lowering_->ReduceJSToStringInput(node);
+    if (reduced.Changed()) return reduced.replacement();
+    Node* n = graph()->NewNode(javascript()->ToString(), node, context(),
+                               effect(), control());
+    update_effect(n);
+    return n;
+  }
+
+  Node* ConvertToNumber(Node* node) {
+    // Avoid introducing too many eager ToNumber() operations.
+    Reduction reduced = lowering_->ReduceJSToNumberInput(node);
+    if (reduced.Changed()) return reduced.replacement();
+    Node* n = graph()->NewNode(javascript()->ToNumber(), node, context(),
+                               effect(), control());
+    update_effect(n);
+    return n;
+  }
+
+  // Try narrowing a double or number operation to an Int32 operation.
+  bool TryNarrowingToI32(Type* type, Node* node) {
+    switch (node->opcode()) {
+      case IrOpcode::kFloat64Add:
+      case IrOpcode::kNumberAdd: {
+        JSBinopReduction r(lowering_, node);
+        if (r.BothInputsAre(Type::Integral32())) {
+          node->set_op(lowering_->machine()->Int32Add());
+          // TODO(titzer): narrow bounds instead of overwriting.
+ NodeProperties::SetBounds(node, Bounds(type)); + return true; + } + } + case IrOpcode::kFloat64Sub: + case IrOpcode::kNumberSubtract: { + JSBinopReduction r(lowering_, node); + if (r.BothInputsAre(Type::Integral32())) { + node->set_op(lowering_->machine()->Int32Sub()); + // TODO(titzer): narrow bounds instead of overwriting. + NodeProperties::SetBounds(node, Bounds(type)); + return true; + } + } + default: + return false; + } + } + + Node* ConvertToI32(bool is_signed, Node* node) { + Type* type = is_signed ? Type::Signed32() : Type::Unsigned32(); + if (node->OwnedBy(node_)) { + // If this node {node_} has the only edge to {node}, then try narrowing + // its operation to an Int32 add or subtract. + if (TryNarrowingToI32(type, node)) return node; + } else { + // Otherwise, {node} has multiple uses. Leave it as is and let the + // further lowering passes deal with it, which use a full backwards + // fixpoint. + } + + // Avoid introducing too many eager NumberToXXnt32() operations. + node = ConvertToNumber(node); + Type* input_type = NodeProperties::GetBounds(node).upper; + + if (input_type->Is(type)) return node; // already in the value range. + + Operator* op = is_signed ? simplified()->NumberToInt32() + : simplified()->NumberToUint32(); + Node* n = graph()->NewNode(op, node); + return n; + } + + void update_effect(Node* effect) { + NodeProperties::ReplaceEffectInput(node_, effect); + } +}; + + +Reduction JSTypedLowering::ReduceJSAdd(Node* node) { + JSBinopReduction r(this, node); + if (r.OneInputIs(Type::String())) { + r.ConvertInputsToString(); + return r.ChangeToPureOperator(simplified()->StringAdd()); + } else if (r.NeitherInputCanBe(Type::String())) { + r.ConvertInputsToNumber(); + return r.ChangeToPureOperator(simplified()->NumberAdd()); + } + return NoChange(); +} + + +Reduction JSTypedLowering::ReduceNumberBinop(Node* node, Operator* numberOp) { + JSBinopReduction r(this, node); + if (r.OneInputIs(Type::Primitive())) { + // If at least one input is a primitive, then insert appropriate conversions + // to number and reduce this operator to the given numeric one. + // TODO(turbofan): make this heuristic configurable for code size. + r.ConvertInputsToNumber(); + return r.ChangeToPureOperator(numberOp); + } + // TODO(turbofan): relax/remove the effects of this operator in other cases. + return NoChange(); +} + + +Reduction JSTypedLowering::ReduceI32Binop(Node* node, bool left_signed, + bool right_signed, Operator* intOp) { + JSBinopReduction r(this, node); + // TODO(titzer): some Smi bitwise operations don't really require going + // all the way to int32, which can save tagging/untagging for some operations + // on some platforms. + // TODO(turbofan): make this heuristic configurable for code size. + r.ConvertInputsToInt32(left_signed, right_signed); + return r.ChangeToPureOperator(intOp); +} + + +Reduction JSTypedLowering::ReduceI32Shift(Node* node, bool left_signed, + Operator* shift_op) { + JSBinopReduction r(this, node); + r.ConvertInputsForShift(left_signed); + return r.ChangeToPureOperator(shift_op); +} + + +Reduction JSTypedLowering::ReduceJSComparison(Node* node) { + JSBinopReduction r(this, node); + if (r.BothInputsAre(Type::String())) { + // If both inputs are definitely strings, perform a string comparison. 
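+    // Only the less-than flavors of string comparison are used: a > b is
+    // rewritten as b < a, and a >= b as b <= a, by swapping the inputs.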
+ Operator* stringOp; + switch (node->opcode()) { + case IrOpcode::kJSLessThan: + stringOp = simplified()->StringLessThan(); + break; + case IrOpcode::kJSGreaterThan: + stringOp = simplified()->StringLessThan(); + r.SwapInputs(); // a > b => b < a + break; + case IrOpcode::kJSLessThanOrEqual: + stringOp = simplified()->StringLessThanOrEqual(); + break; + case IrOpcode::kJSGreaterThanOrEqual: + stringOp = simplified()->StringLessThanOrEqual(); + r.SwapInputs(); // a >= b => b <= a + break; + default: + return NoChange(); + } + return r.ChangeToPureOperator(stringOp); + } else if (r.OneInputCannotBe(Type::String())) { + // If one input cannot be a string, then emit a number comparison. + Operator* less_than; + Operator* less_than_or_equal; + if (r.BothInputsAre(Type::Unsigned32())) { + less_than = machine()->Uint32LessThan(); + less_than_or_equal = machine()->Uint32LessThanOrEqual(); + } else if (r.BothInputsAre(Type::Signed32())) { + less_than = machine()->Int32LessThan(); + less_than_or_equal = machine()->Int32LessThanOrEqual(); + } else { + // TODO(turbofan): mixed signed/unsigned int32 comparisons. + r.ConvertInputsToNumber(); + less_than = simplified()->NumberLessThan(); + less_than_or_equal = simplified()->NumberLessThanOrEqual(); + } + Operator* comparison; + switch (node->opcode()) { + case IrOpcode::kJSLessThan: + comparison = less_than; + break; + case IrOpcode::kJSGreaterThan: + comparison = less_than; + r.SwapInputs(); // a > b => b < a + break; + case IrOpcode::kJSLessThanOrEqual: + comparison = less_than_or_equal; + break; + case IrOpcode::kJSGreaterThanOrEqual: + comparison = less_than_or_equal; + r.SwapInputs(); // a >= b => b <= a + break; + default: + return NoChange(); + } + return r.ChangeToPureOperator(comparison); + } + // TODO(turbofan): relax/remove effects of this operator in other cases. + return NoChange(); // Keep a generic comparison. +} + + +Reduction JSTypedLowering::ReduceJSEqual(Node* node, bool invert) { + JSBinopReduction r(this, node); + + if (r.BothInputsAre(Type::Number())) { + return r.ChangeToPureOperator(simplified()->NumberEqual(), invert); + } + if (r.BothInputsAre(Type::String())) { + return r.ChangeToPureOperator(simplified()->StringEqual(), invert); + } + if (r.BothInputsAre(Type::Receiver())) { + return r.ChangeToPureOperator( + simplified()->ReferenceEqual(Type::Receiver()), invert); + } + // TODO(turbofan): js-typed-lowering of Equal(undefined) + // TODO(turbofan): js-typed-lowering of Equal(null) + // TODO(turbofan): js-typed-lowering of Equal(boolean) + return NoChange(); +} + + +Reduction JSTypedLowering::ReduceJSStrictEqual(Node* node, bool invert) { + JSBinopReduction r(this, node); + if (r.left() == r.right()) { + // x === x is always true if x != NaN + if (!r.left_type()->Maybe(Type::NaN())) { + return ReplaceEagerly(node, invert ? jsgraph()->FalseConstant() + : jsgraph()->TrueConstant()); + } + } + if (!r.left_type()->Maybe(r.right_type())) { + // Type intersection is empty; === is always false unless both + // inputs could be strings (one internalized and one not). + if (r.OneInputCannotBe(Type::String())) { + return ReplaceEagerly(node, invert ? 
jsgraph()->TrueConstant() + : jsgraph()->FalseConstant()); + } + } + if (r.OneInputIs(Type::Undefined())) { + return r.ChangeToPureOperator( + simplified()->ReferenceEqual(Type::Undefined()), invert); + } + if (r.OneInputIs(Type::Null())) { + return r.ChangeToPureOperator(simplified()->ReferenceEqual(Type::Null()), + invert); + } + if (r.OneInputIs(Type::Boolean())) { + return r.ChangeToPureOperator(simplified()->ReferenceEqual(Type::Boolean()), + invert); + } + if (r.OneInputIs(Type::Object())) { + return r.ChangeToPureOperator(simplified()->ReferenceEqual(Type::Object()), + invert); + } + if (r.OneInputIs(Type::Receiver())) { + return r.ChangeToPureOperator( + simplified()->ReferenceEqual(Type::Receiver()), invert); + } + if (r.BothInputsAre(Type::String())) { + return r.ChangeToPureOperator(simplified()->StringEqual(), invert); + } + if (r.BothInputsAre(Type::Number())) { + return r.ChangeToPureOperator(simplified()->NumberEqual(), invert); + } + // TODO(turbofan): js-typed-lowering of StrictEqual(mixed types) + return NoChange(); +} + + +Reduction JSTypedLowering::ReduceJSToNumberInput(Node* input) { + if (input->opcode() == IrOpcode::kJSToNumber) { + // Recursively try to reduce the input first. + Reduction result = ReduceJSToNumberInput(input->InputAt(0)); + if (result.Changed()) { + RelaxEffects(input); + return result; + } + return Changed(input); // JSToNumber(JSToNumber(x)) => JSToNumber(x) + } + Type* input_type = NodeProperties::GetBounds(input).upper; + if (input_type->Is(Type::Number())) { + // JSToNumber(number) => x + return Changed(input); + } + if (input_type->Is(Type::Undefined())) { + // JSToNumber(undefined) => #NaN + return ReplaceWith(jsgraph()->NaNConstant()); + } + if (input_type->Is(Type::Null())) { + // JSToNumber(null) => #0 + return ReplaceWith(jsgraph()->ZeroConstant()); + } + // TODO(turbofan): js-typed-lowering of ToNumber(boolean) + // TODO(turbofan): js-typed-lowering of ToNumber(string) + return NoChange(); +} + + +Reduction JSTypedLowering::ReduceJSToStringInput(Node* input) { + if (input->opcode() == IrOpcode::kJSToString) { + // Recursively try to reduce the input first. + Reduction result = ReduceJSToStringInput(input->InputAt(0)); + if (result.Changed()) { + RelaxEffects(input); + return result; + } + return Changed(input); // JSToString(JSToString(x)) => JSToString(x) + } + Type* input_type = NodeProperties::GetBounds(input).upper; + if (input_type->Is(Type::String())) { + return Changed(input); // JSToString(string) => x + } + if (input_type->Is(Type::Undefined())) { + return ReplaceWith(jsgraph()->HeapConstant( + graph()->zone()->isolate()->factory()->undefined_string())); + } + if (input_type->Is(Type::Null())) { + return ReplaceWith(jsgraph()->HeapConstant( + graph()->zone()->isolate()->factory()->null_string())); + } + // TODO(turbofan): js-typed-lowering of ToString(boolean) + // TODO(turbofan): js-typed-lowering of ToString(number) + return NoChange(); +} + + +Reduction JSTypedLowering::ReduceJSToBooleanInput(Node* input) { + if (input->opcode() == IrOpcode::kJSToBoolean) { + // Recursively try to reduce the input first. 
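+    // This collapses conversion chains, e.g. JSToBoolean(JSToBoolean(x))
+    // reduces to the inner JSToBoolean(x).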
+ Reduction result = ReduceJSToBooleanInput(input->InputAt(0)); + if (result.Changed()) { + RelaxEffects(input); + return result; + } + return Changed(input); // JSToBoolean(JSToBoolean(x)) => JSToBoolean(x) + } + Type* input_type = NodeProperties::GetBounds(input).upper; + if (input_type->Is(Type::Boolean())) { + return Changed(input); // JSToBoolean(boolean) => x + } + if (input_type->Is(Type::Undefined())) { + // JSToBoolean(undefined) => #false + return ReplaceWith(jsgraph()->FalseConstant()); + } + if (input_type->Is(Type::Null())) { + // JSToBoolean(null) => #false + return ReplaceWith(jsgraph()->FalseConstant()); + } + if (input_type->Is(Type::DetectableReceiver())) { + // JSToBoolean(detectable) => #true + return ReplaceWith(jsgraph()->TrueConstant()); + } + if (input_type->Is(Type::Undetectable())) { + // JSToBoolean(undetectable) => #false + return ReplaceWith(jsgraph()->FalseConstant()); + } + if (input_type->Is(Type::Number())) { + // JSToBoolean(number) => BooleanNot(NumberEqual(x, #0)) + Node* cmp = graph()->NewNode(simplified()->NumberEqual(), input, + jsgraph()->ZeroConstant()); + Node* inv = graph()->NewNode(simplified()->BooleanNot(), cmp); + ReplaceEagerly(input, inv); + // TODO(titzer): Ugly. ReplaceEagerly smashes all uses. Smash it back here. + cmp->ReplaceInput(0, input); + return Changed(inv); + } + // TODO(turbofan): js-typed-lowering of ToBoolean(string) + return NoChange(); +} + + +static Reduction ReplaceWithReduction(Node* node, Reduction reduction) { + if (reduction.Changed()) { + ReplaceUses(node, reduction.replacement(), NULL); + return reduction; + } + return Reducer::NoChange(); +} + + +Reduction JSTypedLowering::Reduce(Node* node) { + switch (node->opcode()) { + case IrOpcode::kJSEqual: + return ReduceJSEqual(node, false); + case IrOpcode::kJSNotEqual: + return ReduceJSEqual(node, true); + case IrOpcode::kJSStrictEqual: + return ReduceJSStrictEqual(node, false); + case IrOpcode::kJSStrictNotEqual: + return ReduceJSStrictEqual(node, true); + case IrOpcode::kJSLessThan: // fall through + case IrOpcode::kJSGreaterThan: // fall through + case IrOpcode::kJSLessThanOrEqual: // fall through + case IrOpcode::kJSGreaterThanOrEqual: + return ReduceJSComparison(node); + case IrOpcode::kJSBitwiseOr: + return ReduceI32Binop(node, true, true, machine()->Word32Or()); + case IrOpcode::kJSBitwiseXor: + return ReduceI32Binop(node, true, true, machine()->Word32Xor()); + case IrOpcode::kJSBitwiseAnd: + return ReduceI32Binop(node, true, true, machine()->Word32And()); + case IrOpcode::kJSShiftLeft: + return ReduceI32Shift(node, true, machine()->Word32Shl()); + case IrOpcode::kJSShiftRight: + return ReduceI32Shift(node, true, machine()->Word32Sar()); + case IrOpcode::kJSShiftRightLogical: + return ReduceI32Shift(node, false, machine()->Word32Shr()); + case IrOpcode::kJSAdd: + return ReduceJSAdd(node); + case IrOpcode::kJSSubtract: + return ReduceNumberBinop(node, simplified()->NumberSubtract()); + case IrOpcode::kJSMultiply: + return ReduceNumberBinop(node, simplified()->NumberMultiply()); + case IrOpcode::kJSDivide: + return ReduceNumberBinop(node, simplified()->NumberDivide()); + case IrOpcode::kJSModulus: + return ReduceNumberBinop(node, simplified()->NumberModulus()); + case IrOpcode::kJSUnaryNot: { + Reduction result = ReduceJSToBooleanInput(node->InputAt(0)); + Node* value; + if (result.Changed()) { + // !x => BooleanNot(x) + value = + graph()->NewNode(simplified()->BooleanNot(), result.replacement()); + ReplaceUses(node, value, NULL); + return Changed(value); + } else { 
+ // !x => BooleanNot(JSToBoolean(x)) + value = graph()->NewNode(simplified()->BooleanNot(), node); + node->set_op(javascript()->ToBoolean()); + ReplaceUses(node, value, node); + // Note: ReplaceUses() smashes all uses, so smash it back here. + value->ReplaceInput(0, node); + return ReplaceWith(value); + } + } + case IrOpcode::kJSToBoolean: + return ReplaceWithReduction(node, + ReduceJSToBooleanInput(node->InputAt(0))); + case IrOpcode::kJSToNumber: + return ReplaceWithReduction(node, + ReduceJSToNumberInput(node->InputAt(0))); + case IrOpcode::kJSToString: + return ReplaceWithReduction(node, + ReduceJSToStringInput(node->InputAt(0))); + default: + break; + } + return NoChange(); +} +} +} +} // namespace v8::internal::compiler diff --git a/deps/v8/src/compiler/js-typed-lowering.h b/deps/v8/src/compiler/js-typed-lowering.h new file mode 100644 index 00000000000..c69fc2736a1 --- /dev/null +++ b/deps/v8/src/compiler/js-typed-lowering.h @@ -0,0 +1,67 @@ +// Copyright 2014 the V8 project authors. All rights reserved. +// Use of this source code is governed by a BSD-style license that can be +// found in the LICENSE file. + +#ifndef V8_COMPILER_OPERATOR_REDUCERS_H_ +#define V8_COMPILER_OPERATOR_REDUCERS_H_ + +#include "src/compiler/graph-reducer.h" +#include "src/compiler/js-graph.h" +#include "src/compiler/lowering-builder.h" +#include "src/compiler/machine-operator.h" +#include "src/compiler/node.h" +#include "src/compiler/simplified-operator.h" + +namespace v8 { +namespace internal { +namespace compiler { + +// Lowers JS-level operators to simplified operators based on types. +class JSTypedLowering : public LoweringBuilder { + public: + explicit JSTypedLowering(JSGraph* jsgraph, + SourcePositionTable* source_positions) + : LoweringBuilder(jsgraph->graph(), source_positions), + jsgraph_(jsgraph), + simplified_(jsgraph->zone()), + machine_(jsgraph->zone()) {} + virtual ~JSTypedLowering() {} + + Reduction Reduce(Node* node); + virtual void Lower(Node* node) { Reduce(node); } + + JSGraph* jsgraph() { return jsgraph_; } + Graph* graph() { return jsgraph_->graph(); } + + private: + friend class JSBinopReduction; + JSGraph* jsgraph_; + SimplifiedOperatorBuilder simplified_; + MachineOperatorBuilder machine_; + + Reduction ReplaceEagerly(Node* old, Node* node); + Reduction NoChange() { return Reducer::NoChange(); } + Reduction ReplaceWith(Node* node) { return Reducer::Replace(node); } + Reduction Changed(Node* node) { return Reducer::Changed(node); } + Reduction ReduceJSAdd(Node* node); + Reduction ReduceJSComparison(Node* node); + Reduction ReduceJSEqual(Node* node, bool invert); + Reduction ReduceJSStrictEqual(Node* node, bool invert); + Reduction ReduceJSToNumberInput(Node* input); + Reduction ReduceJSToStringInput(Node* input); + Reduction ReduceJSToBooleanInput(Node* input); + Reduction ReduceNumberBinop(Node* node, Operator* numberOp); + Reduction ReduceI32Binop(Node* node, bool left_signed, bool right_signed, + Operator* intOp); + Reduction ReduceI32Shift(Node* node, bool left_signed, Operator* shift_op); + + JSOperatorBuilder* javascript() { return jsgraph_->javascript(); } + CommonOperatorBuilder* common() { return jsgraph_->common(); } + SimplifiedOperatorBuilder* simplified() { return &simplified_; } + MachineOperatorBuilder* machine() { return &machine_; } +}; +} +} +} // namespace v8::internal::compiler + +#endif // V8_COMPILER_OPERATOR_REDUCERS_H_ diff --git a/deps/v8/src/compiler/linkage-impl.h b/deps/v8/src/compiler/linkage-impl.h new file mode 100644 index 00000000000..e7aafc3885d 
--- /dev/null +++ b/deps/v8/src/compiler/linkage-impl.h @@ -0,0 +1,206 @@ +// Copyright 2014 the V8 project authors. All rights reserved. +// Use of this source code is governed by a BSD-style license that can be +// found in the LICENSE file. + +#ifndef V8_COMPILER_LINKAGE_IMPL_H_ +#define V8_COMPILER_LINKAGE_IMPL_H_ + +namespace v8 { +namespace internal { +namespace compiler { + +class LinkageHelper { + public: + static LinkageLocation TaggedStackSlot(int index) { + DCHECK(index < 0); + return LinkageLocation(kMachineTagged, index); + } + + static LinkageLocation TaggedRegisterLocation(Register reg) { + return LinkageLocation(kMachineTagged, Register::ToAllocationIndex(reg)); + } + + static inline LinkageLocation WordRegisterLocation(Register reg) { + return LinkageLocation(MachineOperatorBuilder::pointer_rep(), + Register::ToAllocationIndex(reg)); + } + + static LinkageLocation UnconstrainedRegister(MachineType rep) { + return LinkageLocation(rep, LinkageLocation::ANY_REGISTER); + } + + static const RegList kNoCalleeSaved = 0; + + // TODO(turbofan): cache call descriptors for JSFunction calls. + template <typename LinkageTraits> + static CallDescriptor* GetJSCallDescriptor(Zone* zone, int parameter_count) { + const int jsfunction_count = 1; + const int context_count = 1; + int input_count = jsfunction_count + parameter_count + context_count; + + const int return_count = 1; + LinkageLocation* locations = + zone->NewArray<LinkageLocation>(return_count + input_count); + + int index = 0; + locations[index++] = + TaggedRegisterLocation(LinkageTraits::ReturnValueReg()); + locations[index++] = + TaggedRegisterLocation(LinkageTraits::JSCallFunctionReg()); + + for (int i = 0; i < parameter_count; i++) { + // All parameters to JS calls go on the stack. + int spill_slot_index = i - parameter_count; + locations[index++] = TaggedStackSlot(spill_slot_index); + } + locations[index++] = TaggedRegisterLocation(LinkageTraits::ContextReg()); + + // TODO(titzer): refactor TurboFan graph to consider context a value input. + return new (zone) + CallDescriptor(CallDescriptor::kCallJSFunction, // kind + return_count, // return_count + parameter_count, // parameter_count + input_count - context_count, // input_count + locations, // locations + Operator::kNoProperties, // properties + kNoCalleeSaved, // callee-saved registers + CallDescriptor::kCanDeoptimize); // deoptimization + } + + + // TODO(turbofan): cache call descriptors for runtime calls. 
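+  //
+  // For a runtime call with two arguments and a single return value, the
+  // locations array built below is:
+  //   [0] return value register
+  //   [1] CEntryStub code object (any register)
+  //   [2] stack slot for argument 0
+  //   [3] stack slot for argument 1
+  //   [4] runtime function register
+  //   [5] argument count register
+  //   [6] context register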
+ template <typename LinkageTraits> + static CallDescriptor* GetRuntimeCallDescriptor( + Zone* zone, Runtime::FunctionId function_id, int parameter_count, + Operator::Property properties, + CallDescriptor::DeoptimizationSupport can_deoptimize) { + const int code_count = 1; + const int function_count = 1; + const int num_args_count = 1; + const int context_count = 1; + const int input_count = code_count + parameter_count + function_count + + num_args_count + context_count; + + const Runtime::Function* function = Runtime::FunctionForId(function_id); + const int return_count = function->result_size; + LinkageLocation* locations = + zone->NewArray<LinkageLocation>(return_count + input_count); + + int index = 0; + if (return_count > 0) { + locations[index++] = + TaggedRegisterLocation(LinkageTraits::ReturnValueReg()); + } + if (return_count > 1) { + locations[index++] = + TaggedRegisterLocation(LinkageTraits::ReturnValue2Reg()); + } + + DCHECK_LE(return_count, 2); + + locations[index++] = UnconstrainedRegister(kMachineTagged); // CEntryStub + + for (int i = 0; i < parameter_count; i++) { + // All parameters to runtime calls go on the stack. + int spill_slot_index = i - parameter_count; + locations[index++] = TaggedStackSlot(spill_slot_index); + } + locations[index++] = + TaggedRegisterLocation(LinkageTraits::RuntimeCallFunctionReg()); + locations[index++] = + WordRegisterLocation(LinkageTraits::RuntimeCallArgCountReg()); + locations[index++] = TaggedRegisterLocation(LinkageTraits::ContextReg()); + + // TODO(titzer): refactor TurboFan graph to consider context a value input. + return new (zone) CallDescriptor(CallDescriptor::kCallCodeObject, // kind + return_count, // return_count + parameter_count, // parameter_count + input_count, // input_count + locations, // locations + properties, // properties + kNoCalleeSaved, // callee-saved registers + can_deoptimize, // deoptimization + function->name); + } + + + // TODO(turbofan): cache call descriptors for code stub calls. + template <typename LinkageTraits> + static CallDescriptor* GetStubCallDescriptor( + Zone* zone, CodeStubInterfaceDescriptor* descriptor, + int stack_parameter_count, + CallDescriptor::DeoptimizationSupport can_deoptimize) { + int register_parameter_count = descriptor->GetEnvironmentParameterCount(); + int parameter_count = register_parameter_count + stack_parameter_count; + const int code_count = 1; + const int context_count = 1; + int input_count = code_count + parameter_count + context_count; + + const int return_count = 1; + LinkageLocation* locations = + zone->NewArray<LinkageLocation>(return_count + input_count); + + int index = 0; + locations[index++] = + TaggedRegisterLocation(LinkageTraits::ReturnValueReg()); + locations[index++] = UnconstrainedRegister(kMachineTagged); // code + for (int i = 0; i < parameter_count; i++) { + if (i < register_parameter_count) { + // The first parameters to code stub calls go in registers. + Register reg = descriptor->GetEnvironmentParameterRegister(i); + locations[index++] = TaggedRegisterLocation(reg); + } else { + // The rest of the parameters go on the stack. + int stack_slot = i - register_parameter_count - stack_parameter_count; + locations[index++] = TaggedStackSlot(stack_slot); + } + } + locations[index++] = TaggedRegisterLocation(LinkageTraits::ContextReg()); + + // TODO(titzer): refactor TurboFan graph to consider context a value input. 
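+    // For example, with one register parameter and one stack parameter the
+    // layout is: [0] return value, [1] code object, [2] register parameter,
+    // [3] stack parameter, [4] context.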
+ return new (zone) + CallDescriptor(CallDescriptor::kCallCodeObject, // kind + return_count, // return_count + parameter_count, // parameter_count + input_count, // input_count + locations, // locations + Operator::kNoProperties, // properties + kNoCalleeSaved, // callee-saved registers + can_deoptimize, // deoptimization + CodeStub::MajorName(descriptor->MajorKey(), false)); + } + + + template <typename LinkageTraits> + static CallDescriptor* GetSimplifiedCDescriptor( + Zone* zone, int num_params, MachineType return_type, + const MachineType* param_types) { + LinkageLocation* locations = + zone->NewArray<LinkageLocation>(num_params + 2); + int index = 0; + locations[index++] = + TaggedRegisterLocation(LinkageTraits::ReturnValueReg()); + locations[index++] = LinkageHelper::UnconstrainedRegister( + MachineOperatorBuilder::pointer_rep()); + // TODO(dcarney): test with lots of parameters. + int i = 0; + for (; i < LinkageTraits::CRegisterParametersLength() && i < num_params; + i++) { + locations[index++] = LinkageLocation( + param_types[i], + Register::ToAllocationIndex(LinkageTraits::CRegisterParameter(i))); + } + for (; i < num_params; i++) { + locations[index++] = LinkageLocation(param_types[i], -1 - i); + } + return new (zone) CallDescriptor( + CallDescriptor::kCallAddress, 1, num_params, num_params + 1, locations, + Operator::kNoProperties, LinkageTraits::CCalleeSaveRegisters(), + CallDescriptor::kCannotDeoptimize); // TODO(jarin) should deoptimize! + } +}; +} +} +} // namespace v8::internal::compiler + +#endif // V8_COMPILER_LINKAGE_IMPL_H_ diff --git a/deps/v8/src/compiler/linkage.cc b/deps/v8/src/compiler/linkage.cc new file mode 100644 index 00000000000..26a3dccc474 --- /dev/null +++ b/deps/v8/src/compiler/linkage.cc @@ -0,0 +1,149 @@ +// Copyright 2014 the V8 project authors. All rights reserved. +// Use of this source code is governed by a BSD-style license that can be +// found in the LICENSE file. + +#include "src/compiler/linkage.h" + +#include "src/code-stubs.h" +#include "src/compiler.h" +#include "src/compiler/node.h" +#include "src/compiler/pipeline.h" +#include "src/scopes.h" + +namespace v8 { +namespace internal { +namespace compiler { + + +OStream& operator<<(OStream& os, const CallDescriptor::Kind& k) { + switch (k) { + case CallDescriptor::kCallCodeObject: + os << "Code"; + break; + case CallDescriptor::kCallJSFunction: + os << "JS"; + break; + case CallDescriptor::kCallAddress: + os << "Addr"; + break; + } + return os; +} + + +OStream& operator<<(OStream& os, const CallDescriptor& d) { + // TODO(svenpanne) Output properties etc. and be less cryptic. + return os << d.kind() << ":" << d.debug_name() << ":r" << d.ReturnCount() + << "p" << d.ParameterCount() << "i" << d.InputCount() + << (d.CanLazilyDeoptimize() ? "deopt" : ""); +} + + +Linkage::Linkage(CompilationInfo* info) : info_(info) { + if (info->function() != NULL) { + // If we already have the function literal, use the number of parameters + // plus the receiver. + incoming_ = GetJSCallDescriptor(1 + info->function()->parameter_count()); + } else if (!info->closure().is_null()) { + // If we are compiling a JS function, use a JS call descriptor, + // plus the receiver. + SharedFunctionInfo* shared = info->closure()->shared(); + incoming_ = GetJSCallDescriptor(1 + shared->formal_parameter_count()); + } else if (info->code_stub() != NULL) { + // Use the code stub interface descriptor. 
+ HydrogenCodeStub* stub = info->code_stub(); + CodeStubInterfaceDescriptor* descriptor = + info_->isolate()->code_stub_interface_descriptor(stub->MajorKey()); + incoming_ = GetStubCallDescriptor(descriptor); + } else { + incoming_ = NULL; // TODO(titzer): ? + } +} + + +FrameOffset Linkage::GetFrameOffset(int spill_slot, Frame* frame, int extra) { + if (frame->GetSpillSlotCount() > 0 || incoming_->IsJSFunctionCall() || + incoming_->kind() == CallDescriptor::kCallAddress) { + int offset; + int register_save_area_size = frame->GetRegisterSaveAreaSize(); + if (spill_slot >= 0) { + // Local or spill slot. Skip the frame pointer, function, and + // context in the fixed part of the frame. + offset = + -(spill_slot + 1) * kPointerSize - register_save_area_size + extra; + } else { + // Incoming parameter. Skip the return address. + offset = -(spill_slot + 1) * kPointerSize + kFPOnStackSize + + kPCOnStackSize + extra; + } + return FrameOffset::FromFramePointer(offset); + } else { + // No frame. Retrieve all parameters relative to stack pointer. + DCHECK(spill_slot < 0); // Must be a parameter. + int register_save_area_size = frame->GetRegisterSaveAreaSize(); + int offset = register_save_area_size - (spill_slot + 1) * kPointerSize + + kPCOnStackSize + extra; + return FrameOffset::FromStackPointer(offset); + } +} + + +CallDescriptor* Linkage::GetJSCallDescriptor(int parameter_count) { + return GetJSCallDescriptor(parameter_count, this->info_->zone()); +} + + +CallDescriptor* Linkage::GetRuntimeCallDescriptor( + Runtime::FunctionId function, int parameter_count, + Operator::Property properties, + CallDescriptor::DeoptimizationSupport can_deoptimize) { + return GetRuntimeCallDescriptor(function, parameter_count, properties, + can_deoptimize, this->info_->zone()); +} + + +CallDescriptor* Linkage::GetStubCallDescriptor( + CodeStubInterfaceDescriptor* descriptor, int stack_parameter_count, + CallDescriptor::DeoptimizationSupport can_deoptimize) { + return GetStubCallDescriptor(descriptor, stack_parameter_count, + can_deoptimize, this->info_->zone()); +} + + +//============================================================================== +// Provide unimplemented methods on unsupported architectures, to at least link. +//============================================================================== +#if !V8_TURBOFAN_BACKEND +CallDescriptor* Linkage::GetJSCallDescriptor(int parameter_count, Zone* zone) { + UNIMPLEMENTED(); + return NULL; +} + + +CallDescriptor* Linkage::GetRuntimeCallDescriptor( + Runtime::FunctionId function, int parameter_count, + Operator::Property properties, + CallDescriptor::DeoptimizationSupport can_deoptimize, Zone* zone) { + UNIMPLEMENTED(); + return NULL; +} + + +CallDescriptor* Linkage::GetStubCallDescriptor( + CodeStubInterfaceDescriptor* descriptor, int stack_parameter_count, + CallDescriptor::DeoptimizationSupport can_deoptimize, Zone* zone) { + UNIMPLEMENTED(); + return NULL; +} + + +CallDescriptor* Linkage::GetSimplifiedCDescriptor( + Zone* zone, int num_params, MachineType return_type, + const MachineType* param_types) { + UNIMPLEMENTED(); + return NULL; +} +#endif // !V8_TURBOFAN_BACKEND +} +} +} // namespace v8::internal::compiler diff --git a/deps/v8/src/compiler/linkage.h b/deps/v8/src/compiler/linkage.h new file mode 100644 index 00000000000..9fe02183ec4 --- /dev/null +++ b/deps/v8/src/compiler/linkage.h @@ -0,0 +1,193 @@ +// Copyright 2014 the V8 project authors. All rights reserved. 
+// Use of this source code is governed by a BSD-style license that can be +// found in the LICENSE file. + +#ifndef V8_COMPILER_LINKAGE_H_ +#define V8_COMPILER_LINKAGE_H_ + +#include "src/v8.h" + +#include "src/code-stubs.h" +#include "src/compiler/frame.h" +#include "src/compiler/machine-operator.h" +#include "src/compiler/node.h" +#include "src/compiler/operator.h" +#include "src/zone.h" + +namespace v8 { +namespace internal { +namespace compiler { + +// Describes the location for a parameter or a return value to a call. +// TODO(titzer): replace with Radium locations when they are ready. +class LinkageLocation { + public: + LinkageLocation(MachineType rep, int location) + : rep_(rep), location_(location) {} + + inline MachineType representation() const { return rep_; } + + static const int16_t ANY_REGISTER = 32767; + + private: + friend class CallDescriptor; + friend class OperandGenerator; + MachineType rep_; + int16_t location_; // >= 0 implies register, otherwise stack slot. +}; + + +class CallDescriptor : public ZoneObject { + public: + // Describes whether the first parameter is a code object, a JSFunction, + // or an address--all of which require different machine sequences to call. + enum Kind { kCallCodeObject, kCallJSFunction, kCallAddress }; + + enum DeoptimizationSupport { kCanDeoptimize, kCannotDeoptimize }; + + CallDescriptor(Kind kind, int8_t return_count, int16_t parameter_count, + int16_t input_count, LinkageLocation* locations, + Operator::Property properties, RegList callee_saved_registers, + DeoptimizationSupport deoptimization_support, + const char* debug_name = "") + : kind_(kind), + return_count_(return_count), + parameter_count_(parameter_count), + input_count_(input_count), + locations_(locations), + properties_(properties), + callee_saved_registers_(callee_saved_registers), + deoptimization_support_(deoptimization_support), + debug_name_(debug_name) {} + // Returns the kind of this call. + Kind kind() const { return kind_; } + + // Returns {true} if this descriptor is a call to a JSFunction. + bool IsJSFunctionCall() const { return kind_ == kCallJSFunction; } + + // The number of return values from this call, usually 0 or 1. + int ReturnCount() const { return return_count_; } + + // The number of JavaScript parameters to this call, including receiver, + // but not the context. + int ParameterCount() const { return parameter_count_; } + + int InputCount() const { return input_count_; } + + bool CanLazilyDeoptimize() const { + return deoptimization_support_ == kCanDeoptimize; + } + + LinkageLocation GetReturnLocation(int index) { + DCHECK(index < return_count_); + return locations_[0 + index]; // return locations start at 0. + } + + LinkageLocation GetInputLocation(int index) { + DCHECK(index < input_count_ + 1); // input_count + 1 is the context. + return locations_[return_count_ + index]; // inputs start after returns. + } + + // Operator properties describe how this call can be optimized, if at all. + Operator::Property properties() const { return properties_; } + + // Get the callee-saved registers, if any, across this call. 
+  RegList CalleeSavedRegisters() { return callee_saved_registers_; }
+
+  const char* debug_name() const { return debug_name_; }
+
+ private:
+  friend class Linkage;
+
+  Kind kind_;
+  int8_t return_count_;
+  int16_t parameter_count_;
+  int16_t input_count_;
+  LinkageLocation* locations_;
+  Operator::Property properties_;
+  RegList callee_saved_registers_;
+  DeoptimizationSupport deoptimization_support_;
+  const char* debug_name_;
+};
+
+OStream& operator<<(OStream& os, const CallDescriptor& d);
+OStream& operator<<(OStream& os, const CallDescriptor::Kind& k);
+
+// Defines the linkage for a compilation, including the calling conventions
+// for incoming parameters and return value(s) as well as the outgoing calling
+// convention for any kind of call. Linkage is generally architecture-specific.
+//
+// Can be used to translate {arg_index} (i.e. index of the call node input) as
+// well as {param_index} (i.e. as stored in parameter nodes) into an operator
+// representing the architecture-specific location. The following call node
+// layouts are supported (where {n} is the number of value inputs):
+//
+//                   #0          #1     #2     #3     [...]             #n
+// Call[CodeStub]    code,       arg 1, arg 2, arg 3, [...],            context
+// Call[JSFunction]  function,   rcvr,  arg 1, arg 2, [...],            context
+// Call[Runtime]     CEntryStub, arg 1, arg 2, arg 3, [...], fun, #arg, context
+class Linkage : public ZoneObject {
+ public:
+  explicit Linkage(CompilationInfo* info);
+  explicit Linkage(CompilationInfo* info, CallDescriptor* incoming)
+      : info_(info), incoming_(incoming) {}
+
+  // The call descriptor for this compilation unit describes the locations
+  // of incoming parameters and the outgoing return value(s).
+  CallDescriptor* GetIncomingDescriptor() { return incoming_; }
+  CallDescriptor* GetJSCallDescriptor(int parameter_count);
+  static CallDescriptor* GetJSCallDescriptor(int parameter_count, Zone* zone);
+  CallDescriptor* GetRuntimeCallDescriptor(
+      Runtime::FunctionId function, int parameter_count,
+      Operator::Property properties,
+      CallDescriptor::DeoptimizationSupport can_deoptimize =
+          CallDescriptor::kCannotDeoptimize);
+  static CallDescriptor* GetRuntimeCallDescriptor(
+      Runtime::FunctionId function, int parameter_count,
+      Operator::Property properties,
+      CallDescriptor::DeoptimizationSupport can_deoptimize, Zone* zone);
+
+  CallDescriptor* GetStubCallDescriptor(
+      CodeStubInterfaceDescriptor* descriptor, int stack_parameter_count = 0,
+      CallDescriptor::DeoptimizationSupport can_deoptimize =
+          CallDescriptor::kCannotDeoptimize);
+  static CallDescriptor* GetStubCallDescriptor(
+      CodeStubInterfaceDescriptor* descriptor, int stack_parameter_count,
+      CallDescriptor::DeoptimizationSupport can_deoptimize, Zone* zone);
+
+  // Creates a call descriptor for simplified C calls that is appropriate
+  // for the host platform. This simplified calling convention only supports
+  // integers and pointers of one word size each, i.e. no floating point,
+  // structs, pointers to members, etc.
+  static CallDescriptor* GetSimplifiedCDescriptor(
+      Zone* zone, int num_params, MachineType return_type,
+      const MachineType* param_types);
+
+  // Get the location of an (incoming) parameter to this function.
+  LinkageLocation GetParameterLocation(int index) {
+    return incoming_->GetInputLocation(index + 1);
+  }
+
+  // Get the location where this function should place its return value.
+  LinkageLocation GetReturnLocation() {
+    return incoming_->GetReturnLocation(0);
+  }
+
+  // Get the frame offset for a given spill slot.
The location depends on the + // calling convention and the specific frame layout, and may thus be + // architecture-specific. Negative spill slots indicate arguments on the + // caller's frame. The {extra} parameter indicates an additional offset from + // the frame offset, e.g. to index into part of a double slot. + FrameOffset GetFrameOffset(int spill_slot, Frame* frame, int extra = 0); + + CompilationInfo* info() const { return info_; } + + private: + CompilationInfo* info_; + CallDescriptor* incoming_; +}; +} +} +} // namespace v8::internal::compiler + +#endif // V8_COMPILER_LINKAGE_H_ diff --git a/deps/v8/src/compiler/lowering-builder.cc b/deps/v8/src/compiler/lowering-builder.cc new file mode 100644 index 00000000000..1246f54f147 --- /dev/null +++ b/deps/v8/src/compiler/lowering-builder.cc @@ -0,0 +1,45 @@ +// Copyright 2014 the V8 project authors. All rights reserved. +// Use of this source code is governed by a BSD-style license that can be +// found in the LICENSE file. + +#include "src/compiler/graph-inl.h" +#include "src/compiler/lowering-builder.h" +#include "src/compiler/node-aux-data-inl.h" +#include "src/compiler/node-properties-inl.h" + +namespace v8 { +namespace internal { +namespace compiler { + +class LoweringBuilder::NodeVisitor : public NullNodeVisitor { + public: + explicit NodeVisitor(LoweringBuilder* lowering) : lowering_(lowering) {} + + GenericGraphVisit::Control Post(Node* node) { + if (lowering_->source_positions_ != NULL) { + SourcePositionTable::Scope pos(lowering_->source_positions_, node); + lowering_->Lower(node); + } else { + lowering_->Lower(node); + } + return GenericGraphVisit::CONTINUE; + } + + private: + LoweringBuilder* lowering_; +}; + + +LoweringBuilder::LoweringBuilder(Graph* graph, + SourcePositionTable* source_positions) + : graph_(graph), source_positions_(source_positions) {} + + +void LoweringBuilder::LowerAllNodes() { + NodeVisitor visitor(this); + graph()->VisitNodeInputsFromEnd(&visitor); +} + +} // namespace compiler +} // namespace internal +} // namespace v8 diff --git a/deps/v8/src/compiler/lowering-builder.h b/deps/v8/src/compiler/lowering-builder.h new file mode 100644 index 00000000000..aeaaaacfd97 --- /dev/null +++ b/deps/v8/src/compiler/lowering-builder.h @@ -0,0 +1,38 @@ +// Copyright 2014 the V8 project authors. All rights reserved. +// Use of this source code is governed by a BSD-style license that can be +// found in the LICENSE file. + +#ifndef V8_COMPILER_LOWERING_BUILDER_H_ +#define V8_COMPILER_LOWERING_BUILDER_H_ + +#include "src/v8.h" + +#include "src/compiler/graph.h" + + +namespace v8 { +namespace internal { +namespace compiler { + +// TODO(dcarney): rename this class. +class LoweringBuilder { + public: + explicit LoweringBuilder(Graph* graph, SourcePositionTable* source_positions); + virtual ~LoweringBuilder() {} + + void LowerAllNodes(); + virtual void Lower(Node* node) = 0; // Exposed for testing. + + Graph* graph() const { return graph_; } + + private: + class NodeVisitor; + Graph* graph_; + SourcePositionTable* source_positions_; +}; + +} // namespace compiler +} // namespace internal +} // namespace v8 + +#endif // V8_COMPILER_LOWERING_BUILDER_H_ diff --git a/deps/v8/src/compiler/machine-node-factory.h b/deps/v8/src/compiler/machine-node-factory.h new file mode 100644 index 00000000000..faee93ebb26 --- /dev/null +++ b/deps/v8/src/compiler/machine-node-factory.h @@ -0,0 +1,381 @@ +// Copyright 2014 the V8 project authors. All rights reserved. 
+// Use of this source code is governed by a BSD-style license that can be +// found in the LICENSE file. + +#ifndef V8_COMPILER_MACHINE_NODE_FACTORY_H_ +#define V8_COMPILER_MACHINE_NODE_FACTORY_H_ + +#ifdef USE_SIMULATOR +#define MACHINE_ASSEMBLER_SUPPORTS_CALL_C 0 +#else +#define MACHINE_ASSEMBLER_SUPPORTS_CALL_C 1 +#endif + +#include "src/v8.h" + +#include "src/compiler/machine-operator.h" +#include "src/compiler/node.h" + +namespace v8 { +namespace internal { +namespace compiler { + +class MachineCallDescriptorBuilder : public ZoneObject { + public: + MachineCallDescriptorBuilder(MachineType return_type, int parameter_count, + const MachineType* parameter_types) + : return_type_(return_type), + parameter_count_(parameter_count), + parameter_types_(parameter_types) {} + + int parameter_count() const { return parameter_count_; } + const MachineType* parameter_types() const { return parameter_types_; } + + CallDescriptor* BuildCallDescriptor(Zone* zone) { + return Linkage::GetSimplifiedCDescriptor(zone, parameter_count_, + return_type_, parameter_types_); + } + + private: + const MachineType return_type_; + const int parameter_count_; + const MachineType* const parameter_types_; +}; + + +#define ZONE() static_cast<NodeFactory*>(this)->zone() +#define COMMON() static_cast<NodeFactory*>(this)->common() +#define MACHINE() static_cast<NodeFactory*>(this)->machine() +#define NEW_NODE_0(op) static_cast<NodeFactory*>(this)->NewNode(op) +#define NEW_NODE_1(op, a) static_cast<NodeFactory*>(this)->NewNode(op, a) +#define NEW_NODE_2(op, a, b) static_cast<NodeFactory*>(this)->NewNode(op, a, b) +#define NEW_NODE_3(op, a, b, c) \ + static_cast<NodeFactory*>(this)->NewNode(op, a, b, c) + +template <typename NodeFactory> +class MachineNodeFactory { + public: + // Constants. + Node* PointerConstant(void* value) { + return IntPtrConstant(reinterpret_cast<intptr_t>(value)); + } + Node* IntPtrConstant(intptr_t value) { + // TODO(dcarney): mark generated code as unserializable if value != 0. + return kPointerSize == 8 ? Int64Constant(value) + : Int32Constant(static_cast<int>(value)); + } + Node* Int32Constant(int32_t value) { + return NEW_NODE_0(COMMON()->Int32Constant(value)); + } + Node* Int64Constant(int64_t value) { + return NEW_NODE_0(COMMON()->Int64Constant(value)); + } + Node* NumberConstant(double value) { + return NEW_NODE_0(COMMON()->NumberConstant(value)); + } + Node* Float64Constant(double value) { + return NEW_NODE_0(COMMON()->Float64Constant(value)); + } + Node* HeapConstant(Handle<Object> object) { + PrintableUnique<Object> val = + PrintableUnique<Object>::CreateUninitialized(ZONE(), object); + return NEW_NODE_0(COMMON()->HeapConstant(val)); + } + + Node* Projection(int index, Node* a) { + return NEW_NODE_1(COMMON()->Projection(index), a); + } + + // Memory Operations. + Node* Load(MachineType rep, Node* base) { + return Load(rep, base, Int32Constant(0)); + } + Node* Load(MachineType rep, Node* base, Node* index) { + return NEW_NODE_2(MACHINE()->Load(rep), base, index); + } + void Store(MachineType rep, Node* base, Node* value) { + Store(rep, base, Int32Constant(0), value); + } + void Store(MachineType rep, Node* base, Node* index, Node* value) { + NEW_NODE_3(MACHINE()->Store(rep, kNoWriteBarrier), base, index, value); + } + // Arithmetic Operations. 
+ Node* WordAnd(Node* a, Node* b) { + return NEW_NODE_2(MACHINE()->WordAnd(), a, b); + } + Node* WordOr(Node* a, Node* b) { + return NEW_NODE_2(MACHINE()->WordOr(), a, b); + } + Node* WordXor(Node* a, Node* b) { + return NEW_NODE_2(MACHINE()->WordXor(), a, b); + } + Node* WordShl(Node* a, Node* b) { + return NEW_NODE_2(MACHINE()->WordShl(), a, b); + } + Node* WordShr(Node* a, Node* b) { + return NEW_NODE_2(MACHINE()->WordShr(), a, b); + } + Node* WordSar(Node* a, Node* b) { + return NEW_NODE_2(MACHINE()->WordSar(), a, b); + } + Node* WordEqual(Node* a, Node* b) { + return NEW_NODE_2(MACHINE()->WordEqual(), a, b); + } + Node* WordNotEqual(Node* a, Node* b) { + return WordBinaryNot(WordEqual(a, b)); + } + Node* WordNot(Node* a) { + if (MACHINE()->is32()) { + return Word32Not(a); + } else { + return Word64Not(a); + } + } + Node* WordBinaryNot(Node* a) { + if (MACHINE()->is32()) { + return Word32BinaryNot(a); + } else { + return Word64BinaryNot(a); + } + } + + Node* Word32And(Node* a, Node* b) { + return NEW_NODE_2(MACHINE()->Word32And(), a, b); + } + Node* Word32Or(Node* a, Node* b) { + return NEW_NODE_2(MACHINE()->Word32Or(), a, b); + } + Node* Word32Xor(Node* a, Node* b) { + return NEW_NODE_2(MACHINE()->Word32Xor(), a, b); + } + Node* Word32Shl(Node* a, Node* b) { + return NEW_NODE_2(MACHINE()->Word32Shl(), a, b); + } + Node* Word32Shr(Node* a, Node* b) { + return NEW_NODE_2(MACHINE()->Word32Shr(), a, b); + } + Node* Word32Sar(Node* a, Node* b) { + return NEW_NODE_2(MACHINE()->Word32Sar(), a, b); + } + Node* Word32Equal(Node* a, Node* b) { + return NEW_NODE_2(MACHINE()->Word32Equal(), a, b); + } + Node* Word32NotEqual(Node* a, Node* b) { + return Word32BinaryNot(Word32Equal(a, b)); + } + Node* Word32Not(Node* a) { return Word32Xor(a, Int32Constant(-1)); } + Node* Word32BinaryNot(Node* a) { return Word32Equal(a, Int32Constant(0)); } + + Node* Word64And(Node* a, Node* b) { + return NEW_NODE_2(MACHINE()->Word64And(), a, b); + } + Node* Word64Or(Node* a, Node* b) { + return NEW_NODE_2(MACHINE()->Word64Or(), a, b); + } + Node* Word64Xor(Node* a, Node* b) { + return NEW_NODE_2(MACHINE()->Word64Xor(), a, b); + } + Node* Word64Shl(Node* a, Node* b) { + return NEW_NODE_2(MACHINE()->Word64Shl(), a, b); + } + Node* Word64Shr(Node* a, Node* b) { + return NEW_NODE_2(MACHINE()->Word64Shr(), a, b); + } + Node* Word64Sar(Node* a, Node* b) { + return NEW_NODE_2(MACHINE()->Word64Sar(), a, b); + } + Node* Word64Equal(Node* a, Node* b) { + return NEW_NODE_2(MACHINE()->Word64Equal(), a, b); + } + Node* Word64NotEqual(Node* a, Node* b) { + return Word64BinaryNot(Word64Equal(a, b)); + } + Node* Word64Not(Node* a) { return Word64Xor(a, Int64Constant(-1)); } + Node* Word64BinaryNot(Node* a) { return Word64Equal(a, Int64Constant(0)); } + + Node* Int32Add(Node* a, Node* b) { + return NEW_NODE_2(MACHINE()->Int32Add(), a, b); + } + Node* Int32AddWithOverflow(Node* a, Node* b) { + return NEW_NODE_2(MACHINE()->Int32AddWithOverflow(), a, b); + } + Node* Int32Sub(Node* a, Node* b) { + return NEW_NODE_2(MACHINE()->Int32Sub(), a, b); + } + Node* Int32SubWithOverflow(Node* a, Node* b) { + return NEW_NODE_2(MACHINE()->Int32SubWithOverflow(), a, b); + } + Node* Int32Mul(Node* a, Node* b) { + return NEW_NODE_2(MACHINE()->Int32Mul(), a, b); + } + Node* Int32Div(Node* a, Node* b) { + return NEW_NODE_2(MACHINE()->Int32Div(), a, b); + } + Node* Int32UDiv(Node* a, Node* b) { + return NEW_NODE_2(MACHINE()->Int32UDiv(), a, b); + } + Node* Int32Mod(Node* a, Node* b) { + return NEW_NODE_2(MACHINE()->Int32Mod(), a, b); + } + Node* 
Int32UMod(Node* a, Node* b) { + return NEW_NODE_2(MACHINE()->Int32UMod(), a, b); + } + Node* Int32LessThan(Node* a, Node* b) { + return NEW_NODE_2(MACHINE()->Int32LessThan(), a, b); + } + Node* Int32LessThanOrEqual(Node* a, Node* b) { + return NEW_NODE_2(MACHINE()->Int32LessThanOrEqual(), a, b); + } + Node* Uint32LessThan(Node* a, Node* b) { + return NEW_NODE_2(MACHINE()->Uint32LessThan(), a, b); + } + Node* Uint32LessThanOrEqual(Node* a, Node* b) { + return NEW_NODE_2(MACHINE()->Uint32LessThanOrEqual(), a, b); + } + Node* Int32GreaterThan(Node* a, Node* b) { return Int32LessThan(b, a); } + Node* Int32GreaterThanOrEqual(Node* a, Node* b) { + return Int32LessThanOrEqual(b, a); + } + Node* Int32Neg(Node* a) { return Int32Sub(Int32Constant(0), a); } + + Node* Int64Add(Node* a, Node* b) { + return NEW_NODE_2(MACHINE()->Int64Add(), a, b); + } + Node* Int64Sub(Node* a, Node* b) { + return NEW_NODE_2(MACHINE()->Int64Sub(), a, b); + } + Node* Int64Mul(Node* a, Node* b) { + return NEW_NODE_2(MACHINE()->Int64Mul(), a, b); + } + Node* Int64Div(Node* a, Node* b) { + return NEW_NODE_2(MACHINE()->Int64Div(), a, b); + } + Node* Int64UDiv(Node* a, Node* b) { + return NEW_NODE_2(MACHINE()->Int64UDiv(), a, b); + } + Node* Int64Mod(Node* a, Node* b) { + return NEW_NODE_2(MACHINE()->Int64Mod(), a, b); + } + Node* Int64UMod(Node* a, Node* b) { + return NEW_NODE_2(MACHINE()->Int64UMod(), a, b); + } + Node* Int64Neg(Node* a) { return Int64Sub(Int64Constant(0), a); } + Node* Int64LessThan(Node* a, Node* b) { + return NEW_NODE_2(MACHINE()->Int64LessThan(), a, b); + } + Node* Int64LessThanOrEqual(Node* a, Node* b) { + return NEW_NODE_2(MACHINE()->Int64LessThanOrEqual(), a, b); + } + Node* Int64GreaterThan(Node* a, Node* b) { return Int64LessThan(b, a); } + Node* Int64GreaterThanOrEqual(Node* a, Node* b) { + return Int64LessThanOrEqual(b, a); + } + + Node* ConvertIntPtrToInt32(Node* a) { + return kPointerSize == 8 ? NEW_NODE_1(MACHINE()->ConvertInt64ToInt32(), a) + : a; + } + Node* ConvertInt32ToIntPtr(Node* a) { + return kPointerSize == 8 ? NEW_NODE_1(MACHINE()->ConvertInt32ToInt64(), a) + : a; + } + +#define INTPTR_BINOP(prefix, name) \ + Node* IntPtr##name(Node* a, Node* b) { \ + return kPointerSize == 8 ? 
prefix##64##name(a, b)                        \
+                             : prefix##32##name(a, b);                      \
+  }
+
+  INTPTR_BINOP(Int, Add);
+  INTPTR_BINOP(Int, Sub);
+  INTPTR_BINOP(Int, LessThan);
+  INTPTR_BINOP(Int, LessThanOrEqual);
+  INTPTR_BINOP(Word, Equal);
+  INTPTR_BINOP(Word, NotEqual);
+  INTPTR_BINOP(Int, GreaterThanOrEqual);
+  INTPTR_BINOP(Int, GreaterThan);
+
+#undef INTPTR_BINOP
+
+  Node* Float64Add(Node* a, Node* b) {
+    return NEW_NODE_2(MACHINE()->Float64Add(), a, b);
+  }
+  Node* Float64Sub(Node* a, Node* b) {
+    return NEW_NODE_2(MACHINE()->Float64Sub(), a, b);
+  }
+  Node* Float64Mul(Node* a, Node* b) {
+    return NEW_NODE_2(MACHINE()->Float64Mul(), a, b);
+  }
+  Node* Float64Div(Node* a, Node* b) {
+    return NEW_NODE_2(MACHINE()->Float64Div(), a, b);
+  }
+  Node* Float64Mod(Node* a, Node* b) {
+    return NEW_NODE_2(MACHINE()->Float64Mod(), a, b);
+  }
+  Node* Float64Equal(Node* a, Node* b) {
+    return NEW_NODE_2(MACHINE()->Float64Equal(), a, b);
+  }
+  Node* Float64NotEqual(Node* a, Node* b) {
+    return WordBinaryNot(Float64Equal(a, b));
+  }
+  Node* Float64LessThan(Node* a, Node* b) {
+    return NEW_NODE_2(MACHINE()->Float64LessThan(), a, b);
+  }
+  Node* Float64LessThanOrEqual(Node* a, Node* b) {
+    return NEW_NODE_2(MACHINE()->Float64LessThanOrEqual(), a, b);
+  }
+  Node* Float64GreaterThan(Node* a, Node* b) { return Float64LessThan(b, a); }
+  Node* Float64GreaterThanOrEqual(Node* a, Node* b) {
+    return Float64LessThanOrEqual(b, a);
+  }
+
+  // Conversions.
+  Node* ConvertInt32ToInt64(Node* a) {
+    return NEW_NODE_1(MACHINE()->ConvertInt32ToInt64(), a);
+  }
+  Node* ConvertInt64ToInt32(Node* a) {
+    return NEW_NODE_1(MACHINE()->ConvertInt64ToInt32(), a);
+  }
+  Node* ChangeInt32ToFloat64(Node* a) {
+    return NEW_NODE_1(MACHINE()->ChangeInt32ToFloat64(), a);
+  }
+  Node* ChangeUint32ToFloat64(Node* a) {
+    return NEW_NODE_1(MACHINE()->ChangeUint32ToFloat64(), a);
+  }
+  Node* ChangeFloat64ToInt32(Node* a) {
+    return NEW_NODE_1(MACHINE()->ChangeFloat64ToInt32(), a);
+  }
+  Node* ChangeFloat64ToUint32(Node* a) {
+    return NEW_NODE_1(MACHINE()->ChangeFloat64ToUint32(), a);
+  }
+
+#if MACHINE_ASSEMBLER_SUPPORTS_CALL_C
+  // Call to C.
+  Node* CallC(Node* function_address, MachineType return_type,
+              MachineType* arg_types, Node** args, int n_args) {
+    CallDescriptor* descriptor = Linkage::GetSimplifiedCDescriptor(
+        ZONE(), n_args, return_type, arg_types);
+    Node** passed_args =
+        static_cast<Node**>(alloca((n_args + 1) * sizeof(args[0])));
+    passed_args[0] = function_address;
+    for (int i = 0; i < n_args; ++i) {
+      passed_args[i + 1] = args[i];
+    }
+    return NEW_NODE_2(COMMON()->Call(descriptor), n_args + 1, passed_args);
+  }
+#endif
+};
+
+#undef NEW_NODE_0
+#undef NEW_NODE_1
+#undef NEW_NODE_2
+#undef NEW_NODE_3
+#undef MACHINE
+#undef COMMON
+#undef ZONE
+
+}  // namespace compiler
+}  // namespace internal
+}  // namespace v8
+
+#endif  // V8_COMPILER_MACHINE_NODE_FACTORY_H_
diff --git a/deps/v8/src/compiler/machine-operator-reducer.cc b/deps/v8/src/compiler/machine-operator-reducer.cc
new file mode 100644
index 00000000000..4a4057646dd
--- /dev/null
+++ b/deps/v8/src/compiler/machine-operator-reducer.cc
@@ -0,0 +1,343 @@
+// Copyright 2014 the V8 project authors. All rights reserved.
+// Use of this source code is governed by a BSD-style license that can be
+// found in the LICENSE file.
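
Before the reducer, a usage sketch for CallC above: the C function address becomes input 0 and the arguments follow it, wrapped in a Call node carrying a simplified C call descriptor built from the argument MachineTypes. The external function and the GraphBuilderExample host are hypothetical; note that CallC is only meant to be available when MACHINE_ASSEMBLER_SUPPORTS_CALL_C is nonzero (i.e. non-simulator builds).

```cpp
// Hypothetical external function to be called from generated code.
static int32_t AddViaC(int32_t a, int32_t b) { return a + b; }

Node* BuildCCallSketch(GraphBuilderExample* m) {
  MachineType arg_types[] = {kMachineWord32, kMachineWord32};
  Node* args[] = {m->Int32Constant(40), m->Int32Constant(2)};
  // The function address travels as an ordinary pointer constant.
  Node* function = m->PointerConstant(reinterpret_cast<void*>(&AddViaC));
  // CallC prepends the address to the arguments and attaches the
  // descriptor from Linkage::GetSimplifiedCDescriptor.
  return m->CallC(function, kMachineWord32, arg_types, args, 2);
}
```
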
+ +#include "src/compiler/machine-operator-reducer.h" + +#include "src/compiler/common-node-cache.h" +#include "src/compiler/generic-node-inl.h" +#include "src/compiler/graph.h" +#include "src/compiler/node-matchers.h" + +namespace v8 { +namespace internal { +namespace compiler { + +MachineOperatorReducer::MachineOperatorReducer(Graph* graph) + : graph_(graph), + cache_(new (graph->zone()) CommonNodeCache(graph->zone())), + common_(graph->zone()), + machine_(graph->zone()) {} + + +MachineOperatorReducer::MachineOperatorReducer(Graph* graph, + CommonNodeCache* cache) + : graph_(graph), + cache_(cache), + common_(graph->zone()), + machine_(graph->zone()) {} + + +Node* MachineOperatorReducer::Int32Constant(int32_t value) { + Node** loc = cache_->FindInt32Constant(value); + if (*loc == NULL) { + *loc = graph_->NewNode(common_.Int32Constant(value)); + } + return *loc; +} + + +Node* MachineOperatorReducer::Float64Constant(volatile double value) { + Node** loc = cache_->FindFloat64Constant(value); + if (*loc == NULL) { + *loc = graph_->NewNode(common_.Float64Constant(value)); + } + return *loc; +} + + +// Perform constant folding and strength reduction on machine operators. +Reduction MachineOperatorReducer::Reduce(Node* node) { + switch (node->opcode()) { + case IrOpcode::kWord32And: { + Int32BinopMatcher m(node); + if (m.right().Is(0)) return Replace(m.right().node()); // x & 0 => 0 + if (m.right().Is(-1)) return Replace(m.left().node()); // x & -1 => x + if (m.IsFoldable()) { // K & K => K + return ReplaceInt32(m.left().Value() & m.right().Value()); + } + if (m.LeftEqualsRight()) return Replace(m.left().node()); // x & x => x + break; + } + case IrOpcode::kWord32Or: { + Int32BinopMatcher m(node); + if (m.right().Is(0)) return Replace(m.left().node()); // x | 0 => x + if (m.right().Is(-1)) return Replace(m.right().node()); // x | -1 => -1 + if (m.IsFoldable()) { // K | K => K + return ReplaceInt32(m.left().Value() | m.right().Value()); + } + if (m.LeftEqualsRight()) return Replace(m.left().node()); // x | x => x + break; + } + case IrOpcode::kWord32Xor: { + Int32BinopMatcher m(node); + if (m.right().Is(0)) return Replace(m.left().node()); // x ^ 0 => x + if (m.IsFoldable()) { // K ^ K => K + return ReplaceInt32(m.left().Value() ^ m.right().Value()); + } + if (m.LeftEqualsRight()) return ReplaceInt32(0); // x ^ x => 0 + break; + } + case IrOpcode::kWord32Shl: { + Int32BinopMatcher m(node); + if (m.right().Is(0)) return Replace(m.left().node()); // x << 0 => x + if (m.IsFoldable()) { // K << K => K + return ReplaceInt32(m.left().Value() << m.right().Value()); + } + break; + } + case IrOpcode::kWord32Shr: { + Uint32BinopMatcher m(node); + if (m.right().Is(0)) return Replace(m.left().node()); // x >>> 0 => x + if (m.IsFoldable()) { // K >>> K => K + return ReplaceInt32(m.left().Value() >> m.right().Value()); + } + break; + } + case IrOpcode::kWord32Sar: { + Int32BinopMatcher m(node); + if (m.right().Is(0)) return Replace(m.left().node()); // x >> 0 => x + if (m.IsFoldable()) { // K >> K => K + return ReplaceInt32(m.left().Value() >> m.right().Value()); + } + break; + } + case IrOpcode::kWord32Equal: { + Int32BinopMatcher m(node); + if (m.IsFoldable()) { // K == K => K + return ReplaceBool(m.left().Value() == m.right().Value()); + } + if (m.left().IsInt32Sub() && m.right().Is(0)) { // x - y == 0 => x == y + Int32BinopMatcher msub(m.left().node()); + node->ReplaceInput(0, msub.left().node()); + node->ReplaceInput(1, msub.right().node()); + return Changed(node); + } + // TODO(turbofan): fold 
HeapConstant, ExternalReference, pointer compares + if (m.LeftEqualsRight()) return ReplaceBool(true); // x == x => true + break; + } + case IrOpcode::kInt32Add: { + Int32BinopMatcher m(node); + if (m.right().Is(0)) return Replace(m.left().node()); // x + 0 => x + if (m.IsFoldable()) { // K + K => K + return ReplaceInt32(static_cast<uint32_t>(m.left().Value()) + + static_cast<uint32_t>(m.right().Value())); + } + break; + } + case IrOpcode::kInt32Sub: { + Int32BinopMatcher m(node); + if (m.right().Is(0)) return Replace(m.left().node()); // x - 0 => x + if (m.IsFoldable()) { // K - K => K + return ReplaceInt32(static_cast<uint32_t>(m.left().Value()) - + static_cast<uint32_t>(m.right().Value())); + } + if (m.LeftEqualsRight()) return ReplaceInt32(0); // x - x => 0 + break; + } + case IrOpcode::kInt32Mul: { + Int32BinopMatcher m(node); + if (m.right().Is(0)) return Replace(m.right().node()); // x * 0 => 0 + if (m.right().Is(1)) return Replace(m.left().node()); // x * 1 => x + if (m.IsFoldable()) { // K * K => K + return ReplaceInt32(m.left().Value() * m.right().Value()); + } + if (m.right().Is(-1)) { // x * -1 => 0 - x + graph_->ChangeOperator(node, machine_.Int32Sub()); + node->ReplaceInput(0, Int32Constant(0)); + node->ReplaceInput(1, m.left().node()); + return Changed(node); + } + if (m.right().IsPowerOf2()) { // x * 2^n => x << n + graph_->ChangeOperator(node, machine_.Word32Shl()); + node->ReplaceInput(1, Int32Constant(WhichPowerOf2(m.right().Value()))); + return Changed(node); + } + break; + } + case IrOpcode::kInt32Div: { + Int32BinopMatcher m(node); + if (m.right().Is(1)) return Replace(m.left().node()); // x / 1 => x + // TODO(turbofan): if (m.left().Is(0)) + // TODO(turbofan): if (m.right().IsPowerOf2()) + // TODO(turbofan): if (m.right().Is(0)) + // TODO(turbofan): if (m.LeftEqualsRight()) + if (m.IsFoldable() && !m.right().Is(0)) { // K / K => K + if (m.right().Is(-1)) return ReplaceInt32(-m.left().Value()); + return ReplaceInt32(m.left().Value() / m.right().Value()); + } + if (m.right().Is(-1)) { // x / -1 => 0 - x + graph_->ChangeOperator(node, machine_.Int32Sub()); + node->ReplaceInput(0, Int32Constant(0)); + node->ReplaceInput(1, m.left().node()); + return Changed(node); + } + break; + } + case IrOpcode::kInt32UDiv: { + Uint32BinopMatcher m(node); + if (m.right().Is(1)) return Replace(m.left().node()); // x / 1 => x + // TODO(turbofan): if (m.left().Is(0)) + // TODO(turbofan): if (m.right().Is(0)) + // TODO(turbofan): if (m.LeftEqualsRight()) + if (m.IsFoldable() && !m.right().Is(0)) { // K / K => K + return ReplaceInt32(m.left().Value() / m.right().Value()); + } + if (m.right().IsPowerOf2()) { // x / 2^n => x >> n + graph_->ChangeOperator(node, machine_.Word32Shr()); + node->ReplaceInput(1, Int32Constant(WhichPowerOf2(m.right().Value()))); + return Changed(node); + } + break; + } + case IrOpcode::kInt32Mod: { + Int32BinopMatcher m(node); + if (m.right().Is(1)) return ReplaceInt32(0); // x % 1 => 0 + if (m.right().Is(-1)) return ReplaceInt32(0); // x % -1 => 0 + // TODO(turbofan): if (m.left().Is(0)) + // TODO(turbofan): if (m.right().IsPowerOf2()) + // TODO(turbofan): if (m.right().Is(0)) + // TODO(turbofan): if (m.LeftEqualsRight()) + if (m.IsFoldable() && !m.right().Is(0)) { // K % K => K + return ReplaceInt32(m.left().Value() % m.right().Value()); + } + break; + } + case IrOpcode::kInt32UMod: { + Uint32BinopMatcher m(node); + if (m.right().Is(1)) return ReplaceInt32(0); // x % 1 => 0 + // TODO(turbofan): if (m.left().Is(0)) + // TODO(turbofan): if (m.right().Is(0)) + // 
TODO(turbofan): if (m.LeftEqualsRight()) + if (m.IsFoldable() && !m.right().Is(0)) { // K % K => K + return ReplaceInt32(m.left().Value() % m.right().Value()); + } + if (m.right().IsPowerOf2()) { // x % 2^n => x & 2^n-1 + graph_->ChangeOperator(node, machine_.Word32And()); + node->ReplaceInput(1, Int32Constant(m.right().Value() - 1)); + return Changed(node); + } + break; + } + case IrOpcode::kInt32LessThan: { + Int32BinopMatcher m(node); + if (m.IsFoldable()) { // K < K => K + return ReplaceBool(m.left().Value() < m.right().Value()); + } + if (m.left().IsInt32Sub() && m.right().Is(0)) { // x - y < 0 => x < y + Int32BinopMatcher msub(m.left().node()); + node->ReplaceInput(0, msub.left().node()); + node->ReplaceInput(1, msub.right().node()); + return Changed(node); + } + if (m.left().Is(0) && m.right().IsInt32Sub()) { // 0 < x - y => y < x + Int32BinopMatcher msub(m.right().node()); + node->ReplaceInput(0, msub.right().node()); + node->ReplaceInput(1, msub.left().node()); + return Changed(node); + } + if (m.LeftEqualsRight()) return ReplaceBool(false); // x < x => false + break; + } + case IrOpcode::kInt32LessThanOrEqual: { + Int32BinopMatcher m(node); + if (m.IsFoldable()) { // K <= K => K + return ReplaceBool(m.left().Value() <= m.right().Value()); + } + if (m.left().IsInt32Sub() && m.right().Is(0)) { // x - y <= 0 => x <= y + Int32BinopMatcher msub(m.left().node()); + node->ReplaceInput(0, msub.left().node()); + node->ReplaceInput(1, msub.right().node()); + return Changed(node); + } + if (m.left().Is(0) && m.right().IsInt32Sub()) { // 0 <= x - y => y <= x + Int32BinopMatcher msub(m.right().node()); + node->ReplaceInput(0, msub.right().node()); + node->ReplaceInput(1, msub.left().node()); + return Changed(node); + } + if (m.LeftEqualsRight()) return ReplaceBool(true); // x <= x => true + break; + } + case IrOpcode::kUint32LessThan: { + Uint32BinopMatcher m(node); + if (m.left().Is(kMaxUInt32)) return ReplaceBool(false); // M < x => false + if (m.right().Is(0)) return ReplaceBool(false); // x < 0 => false + if (m.IsFoldable()) { // K < K => K + return ReplaceBool(m.left().Value() < m.right().Value()); + } + if (m.LeftEqualsRight()) return ReplaceBool(false); // x < x => false + break; + } + case IrOpcode::kUint32LessThanOrEqual: { + Uint32BinopMatcher m(node); + if (m.left().Is(0)) return ReplaceBool(true); // 0 <= x => true + if (m.right().Is(kMaxUInt32)) return ReplaceBool(true); // x <= M => true + if (m.IsFoldable()) { // K <= K => K + return ReplaceBool(m.left().Value() <= m.right().Value()); + } + if (m.LeftEqualsRight()) return ReplaceBool(true); // x <= x => true + break; + } + case IrOpcode::kFloat64Add: { + Float64BinopMatcher m(node); + if (m.IsFoldable()) { // K + K => K + return ReplaceFloat64(m.left().Value() + m.right().Value()); + } + break; + } + case IrOpcode::kFloat64Sub: { + Float64BinopMatcher m(node); + if (m.IsFoldable()) { // K - K => K + return ReplaceFloat64(m.left().Value() - m.right().Value()); + } + break; + } + case IrOpcode::kFloat64Mul: { + Float64BinopMatcher m(node); + if (m.right().Is(1)) return Replace(m.left().node()); // x * 1.0 => x + if (m.right().IsNaN()) { // x * NaN => NaN + return Replace(m.right().node()); + } + if (m.IsFoldable()) { // K * K => K + return ReplaceFloat64(m.left().Value() * m.right().Value()); + } + break; + } + case IrOpcode::kFloat64Div: { + Float64BinopMatcher m(node); + if (m.right().Is(1)) return Replace(m.left().node()); // x / 1.0 => x + if (m.right().IsNaN()) { // x / NaN => NaN + return Replace(m.right().node()); + } + if 
(m.left().IsNaN()) { // NaN / x => NaN + return Replace(m.left().node()); + } + if (m.IsFoldable()) { // K / K => K + return ReplaceFloat64(m.left().Value() / m.right().Value()); + } + break; + } + case IrOpcode::kFloat64Mod: { + Float64BinopMatcher m(node); + if (m.right().IsNaN()) { // x % NaN => NaN + return Replace(m.right().node()); + } + if (m.left().IsNaN()) { // NaN % x => NaN + return Replace(m.left().node()); + } + if (m.IsFoldable()) { // K % K => K + return ReplaceFloat64(modulo(m.left().Value(), m.right().Value())); + } + break; + } + // TODO(turbofan): strength-reduce and fold floating point operations. + default: + break; + } + return NoChange(); +} +} +} +} // namespace v8::internal::compiler diff --git a/deps/v8/src/compiler/machine-operator-reducer.h b/deps/v8/src/compiler/machine-operator-reducer.h new file mode 100644 index 00000000000..46d2931e994 --- /dev/null +++ b/deps/v8/src/compiler/machine-operator-reducer.h @@ -0,0 +1,52 @@ +// Copyright 2014 the V8 project authors. All rights reserved. +// Use of this source code is governed by a BSD-style license that can be +// found in the LICENSE file. + +#ifndef V8_COMPILER_MACHINE_OPERATOR_REDUCER_H_ +#define V8_COMPILER_MACHINE_OPERATOR_REDUCER_H_ + +#include "src/compiler/common-operator.h" +#include "src/compiler/graph-reducer.h" +#include "src/compiler/machine-operator.h" + +namespace v8 { +namespace internal { +namespace compiler { + +// Forward declarations. +class CommonNodeCache; + +// Performs constant folding and strength reduction on nodes that have +// machine operators. +class MachineOperatorReducer : public Reducer { + public: + explicit MachineOperatorReducer(Graph* graph); + + MachineOperatorReducer(Graph* graph, CommonNodeCache* cache); + + virtual Reduction Reduce(Node* node); + + private: + Graph* graph_; + CommonNodeCache* cache_; + CommonOperatorBuilder common_; + MachineOperatorBuilder machine_; + + Node* Int32Constant(int32_t value); + Node* Float64Constant(volatile double value); + + Reduction ReplaceBool(bool value) { return ReplaceInt32(value ? 1 : 0); } + + Reduction ReplaceInt32(int32_t value) { + return Replace(Int32Constant(value)); + } + + Reduction ReplaceFloat64(volatile double value) { + return Replace(Float64Constant(value)); + } +}; +} +} +} // namespace v8::internal::compiler + +#endif // V8_COMPILER_MACHINE_OPERATOR_REDUCER_H_ diff --git a/deps/v8/src/compiler/machine-operator.h b/deps/v8/src/compiler/machine-operator.h new file mode 100644 index 00000000000..93ccedc2c80 --- /dev/null +++ b/deps/v8/src/compiler/machine-operator.h @@ -0,0 +1,168 @@ +// Copyright 2013 the V8 project authors. All rights reserved. +// Use of this source code is governed by a BSD-style license that can be +// found in the LICENSE file. + +#ifndef V8_COMPILER_MACHINE_OPERATOR_H_ +#define V8_COMPILER_MACHINE_OPERATOR_H_ + +#include "src/compiler/machine-type.h" +#include "src/compiler/opcodes.h" +#include "src/compiler/operator.h" +#include "src/zone.h" + +namespace v8 { +namespace internal { +namespace compiler { + +// TODO(turbofan): other write barriers are possible based on type +enum WriteBarrierKind { kNoWriteBarrier, kFullWriteBarrier }; + + +// A Store needs a MachineType and a WriteBarrierKind +// in order to emit the correct write barrier. +struct StoreRepresentation { + MachineType rep; + WriteBarrierKind write_barrier_kind; +}; + + +// Interface for building machine-level operators. 
These operators are +// machine-level but machine-independent and thus define a language suitable +// for generating code to run on architectures such as ia32, x64, arm, etc. +class MachineOperatorBuilder { + public: + explicit MachineOperatorBuilder(Zone* zone, MachineType word = pointer_rep()) + : zone_(zone), word_(word) { + CHECK(word == kMachineWord32 || word == kMachineWord64); + } + +#define SIMPLE(name, properties, inputs, outputs) \ + return new (zone_) \ + SimpleOperator(IrOpcode::k##name, properties, inputs, outputs, #name); + +#define OP1(name, ptype, pname, properties, inputs, outputs) \ + return new (zone_) \ + Operator1<ptype>(IrOpcode::k##name, properties | Operator::kNoThrow, \ + inputs, outputs, #name, pname) + +#define BINOP(name) SIMPLE(name, Operator::kPure, 2, 1) +#define BINOP_O(name) SIMPLE(name, Operator::kPure, 2, 2) +#define BINOP_C(name) \ + SIMPLE(name, Operator::kCommutative | Operator::kPure, 2, 1) +#define BINOP_AC(name) \ + SIMPLE(name, \ + Operator::kAssociative | Operator::kCommutative | Operator::kPure, 2, \ + 1) +#define BINOP_ACO(name) \ + SIMPLE(name, \ + Operator::kAssociative | Operator::kCommutative | Operator::kPure, 2, \ + 2) +#define UNOP(name) SIMPLE(name, Operator::kPure, 1, 1) + +#define WORD_SIZE(x) return is64() ? Word64##x() : Word32##x() + + Operator* Load(MachineType rep) { // load [base + index] + OP1(Load, MachineType, rep, Operator::kNoWrite, 2, 1); + } + // store [base + index], value + Operator* Store(MachineType rep, WriteBarrierKind kind) { + StoreRepresentation store_rep = {rep, kind}; + OP1(Store, StoreRepresentation, store_rep, Operator::kNoRead, 3, 0); + } + + Operator* WordAnd() { WORD_SIZE(And); } + Operator* WordOr() { WORD_SIZE(Or); } + Operator* WordXor() { WORD_SIZE(Xor); } + Operator* WordShl() { WORD_SIZE(Shl); } + Operator* WordShr() { WORD_SIZE(Shr); } + Operator* WordSar() { WORD_SIZE(Sar); } + Operator* WordEqual() { WORD_SIZE(Equal); } + + Operator* Word32And() { BINOP_AC(Word32And); } + Operator* Word32Or() { BINOP_AC(Word32Or); } + Operator* Word32Xor() { BINOP_AC(Word32Xor); } + Operator* Word32Shl() { BINOP(Word32Shl); } + Operator* Word32Shr() { BINOP(Word32Shr); } + Operator* Word32Sar() { BINOP(Word32Sar); } + Operator* Word32Equal() { BINOP_C(Word32Equal); } + + Operator* Word64And() { BINOP_AC(Word64And); } + Operator* Word64Or() { BINOP_AC(Word64Or); } + Operator* Word64Xor() { BINOP_AC(Word64Xor); } + Operator* Word64Shl() { BINOP(Word64Shl); } + Operator* Word64Shr() { BINOP(Word64Shr); } + Operator* Word64Sar() { BINOP(Word64Sar); } + Operator* Word64Equal() { BINOP_C(Word64Equal); } + + Operator* Int32Add() { BINOP_AC(Int32Add); } + Operator* Int32AddWithOverflow() { BINOP_ACO(Int32AddWithOverflow); } + Operator* Int32Sub() { BINOP(Int32Sub); } + Operator* Int32SubWithOverflow() { BINOP_O(Int32SubWithOverflow); } + Operator* Int32Mul() { BINOP_AC(Int32Mul); } + Operator* Int32Div() { BINOP(Int32Div); } + Operator* Int32UDiv() { BINOP(Int32UDiv); } + Operator* Int32Mod() { BINOP(Int32Mod); } + Operator* Int32UMod() { BINOP(Int32UMod); } + Operator* Int32LessThan() { BINOP(Int32LessThan); } + Operator* Int32LessThanOrEqual() { BINOP(Int32LessThanOrEqual); } + Operator* Uint32LessThan() { BINOP(Uint32LessThan); } + Operator* Uint32LessThanOrEqual() { BINOP(Uint32LessThanOrEqual); } + + Operator* Int64Add() { BINOP_AC(Int64Add); } + Operator* Int64Sub() { BINOP(Int64Sub); } + Operator* Int64Mul() { BINOP_AC(Int64Mul); } + Operator* Int64Div() { BINOP(Int64Div); } + Operator* Int64UDiv() { 
BINOP(Int64UDiv); } + Operator* Int64Mod() { BINOP(Int64Mod); } + Operator* Int64UMod() { BINOP(Int64UMod); } + Operator* Int64LessThan() { BINOP(Int64LessThan); } + Operator* Int64LessThanOrEqual() { BINOP(Int64LessThanOrEqual); } + + Operator* ConvertInt32ToInt64() { UNOP(ConvertInt32ToInt64); } + Operator* ConvertInt64ToInt32() { UNOP(ConvertInt64ToInt32); } + + // Convert representation of integers between float64 and int32/uint32. + // The precise rounding mode and handling of out of range inputs are *not* + // defined for these operators, since they are intended only for use with + // integers. + // TODO(titzer): rename ConvertXXX to ChangeXXX in machine operators. + Operator* ChangeInt32ToFloat64() { UNOP(ChangeInt32ToFloat64); } + Operator* ChangeUint32ToFloat64() { UNOP(ChangeUint32ToFloat64); } + Operator* ChangeFloat64ToInt32() { UNOP(ChangeFloat64ToInt32); } + Operator* ChangeFloat64ToUint32() { UNOP(ChangeFloat64ToUint32); } + + // Floating point operators always operate with IEEE 754 round-to-nearest. + Operator* Float64Add() { BINOP_C(Float64Add); } + Operator* Float64Sub() { BINOP(Float64Sub); } + Operator* Float64Mul() { BINOP_C(Float64Mul); } + Operator* Float64Div() { BINOP(Float64Div); } + Operator* Float64Mod() { BINOP(Float64Mod); } + + // Floating point comparisons complying to IEEE 754. + Operator* Float64Equal() { BINOP_C(Float64Equal); } + Operator* Float64LessThan() { BINOP(Float64LessThan); } + Operator* Float64LessThanOrEqual() { BINOP(Float64LessThanOrEqual); } + + inline bool is32() const { return word_ == kMachineWord32; } + inline bool is64() const { return word_ == kMachineWord64; } + inline MachineType word() const { return word_; } + + static inline MachineType pointer_rep() { + return kPointerSize == 8 ? kMachineWord64 : kMachineWord32; + } + +#undef WORD_SIZE +#undef UNOP +#undef BINOP +#undef OP1 +#undef SIMPLE + + private: + Zone* zone_; + MachineType word_; +}; +} +} +} // namespace v8::internal::compiler + +#endif // V8_COMPILER_MACHINE_OPERATOR_H_ diff --git a/deps/v8/src/compiler/machine-type.h b/deps/v8/src/compiler/machine-type.h new file mode 100644 index 00000000000..716ca2236d9 --- /dev/null +++ b/deps/v8/src/compiler/machine-type.h @@ -0,0 +1,36 @@ +// Copyright 2014 the V8 project authors. All rights reserved. +// Use of this source code is governed by a BSD-style license that can be +// found in the LICENSE file. + +#ifndef V8_COMPILER_MACHINE_TYPE_H_ +#define V8_COMPILER_MACHINE_TYPE_H_ + +namespace v8 { +namespace internal { +namespace compiler { + +// An enumeration of the storage representations at the machine level. +// - Words are uninterpreted bits of a given fixed size that can be used +// to store integers and pointers. They are normally allocated to general +// purpose registers by the backend and are not tracked for GC. +// - Floats are bits of a given fixed size that are used to store floating +// point numbers. They are normally allocated to the floating point +// registers of the machine and are not tracked for the GC. +// - Tagged values are the size of a reference into the heap and can store +// small words or references into the heap using a language and potentially +// machine-dependent tagging scheme. These values are tracked by the code +// generator for precise GC. 
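
A short sketch of the word-size indirection in MachineOperatorBuilder above: constructing the builder with the default pointer_rep() lets pointer-sized code be written once, with WordShl()/WordEqual() resolving to the 32- or 64-bit operator when the graph is built. The enclosing function is invented for illustration.

```cpp
// Word-size-generic operator selection; illustrative only.
void WordSizeSketch(Zone* zone) {
  MachineOperatorBuilder machine(zone);  // word defaults to pointer_rep()
  Operator* shl = machine.WordShl();     // Word32Shl or Word64Shl
  Operator* eq = machine.WordEqual();    // Word32Equal or Word64Equal
  DCHECK(shl != NULL && eq != NULL);
  if (machine.is64()) {
    // 64-bit-only lowering could branch here; is32()/is64() expose the
    // selected word size explicitly.
  }
}
```

The MachineType enumeration these builders rely on follows.
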
+enum MachineType { + kMachineWord8, + kMachineWord16, + kMachineWord32, + kMachineWord64, + kMachineFloat64, + kMachineTagged, + kMachineLast +}; +} +} +} // namespace v8::internal::compiler + +#endif // V8_COMPILER_MACHINE_TYPE_H_ diff --git a/deps/v8/src/compiler/node-aux-data-inl.h b/deps/v8/src/compiler/node-aux-data-inl.h new file mode 100644 index 00000000000..679320ab6f9 --- /dev/null +++ b/deps/v8/src/compiler/node-aux-data-inl.h @@ -0,0 +1,43 @@ +// Copyright 2014 the V8 project authors. All rights reserved. +// Use of this source code is governed by a BSD-style license that can be +// found in the LICENSE file. + +#ifndef V8_COMPILER_NODE_AUX_DATA_INL_H_ +#define V8_COMPILER_NODE_AUX_DATA_INL_H_ + +#include "src/compiler/graph.h" +#include "src/compiler/node.h" +#include "src/compiler/node-aux-data.h" + +namespace v8 { +namespace internal { +namespace compiler { + +template <class T> +NodeAuxData<T>::NodeAuxData(Zone* zone) + : aux_data_(ZoneAllocator(zone)) {} + + +template <class T> +void NodeAuxData<T>::Set(Node* node, const T& data) { + int id = node->id(); + if (id >= static_cast<int>(aux_data_.size())) { + aux_data_.resize(id + 1); + } + aux_data_[id] = data; +} + + +template <class T> +T NodeAuxData<T>::Get(Node* node) { + int id = node->id(); + if (id >= static_cast<int>(aux_data_.size())) { + return T(); + } + return aux_data_[id]; +} +} +} +} // namespace v8::internal::compiler + +#endif diff --git a/deps/v8/src/compiler/node-aux-data.h b/deps/v8/src/compiler/node-aux-data.h new file mode 100644 index 00000000000..1e836338a98 --- /dev/null +++ b/deps/v8/src/compiler/node-aux-data.h @@ -0,0 +1,38 @@ +// Copyright 2014 the V8 project authors. All rights reserved. +// Use of this source code is governed by a BSD-style license that can be +// found in the LICENSE file. + +#ifndef V8_COMPILER_NODE_AUX_DATA_H_ +#define V8_COMPILER_NODE_AUX_DATA_H_ + +#include <vector> + +#include "src/zone-allocator.h" + +namespace v8 { +namespace internal { +namespace compiler { + +// Forward declarations. +class Graph; +class Node; + +template <class T> +class NodeAuxData { + public: + inline explicit NodeAuxData(Zone* zone); + + inline void Set(Node* node, const T& data); + inline T Get(Node* node); + + private: + typedef zone_allocator<T> ZoneAllocator; + typedef std::vector<T, ZoneAllocator> TZoneVector; + + TZoneVector aux_data_; +}; +} +} +} // namespace v8::internal::compiler + +#endif diff --git a/deps/v8/src/compiler/node-cache.cc b/deps/v8/src/compiler/node-cache.cc new file mode 100644 index 00000000000..c3ee58c5a23 --- /dev/null +++ b/deps/v8/src/compiler/node-cache.cc @@ -0,0 +1,120 @@ +// Copyright 2014 the V8 project authors. All rights reserved. +// Use of this source code is governed by a BSD-style license that can be +// found in the LICENSE file. 
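
NodeAuxData above is the canonical "stored out-of-line, indexed by node id" side table: Set() grows a zone-backed vector to cover node->id(), and Get() returns a default-constructed T for ids that were never set. A sketch of typical use; the integer payload is illustrative.

```cpp
// Per-node annotations via NodeAuxData<int>; illustrative only.
void AuxDataSketch(Zone* zone, Node* a, Node* b) {
  NodeAuxData<int> mark(zone);
  mark.Set(a, 1);             // resizes the backing vector as needed
  DCHECK_EQ(1, mark.Get(a));
  DCHECK_EQ(0, mark.Get(b));  // unset ids yield a default-constructed T()
}
```
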
+ +#include "src/compiler/node-cache.h" + +namespace v8 { +namespace internal { +namespace compiler { + +#define INITIAL_SIZE 16 +#define LINEAR_PROBE 5 + +template <typename Key> +int32_t NodeCacheHash(Key key) { + UNIMPLEMENTED(); + return 0; +} + +template <> +inline int32_t NodeCacheHash(int32_t key) { + return ComputeIntegerHash(key, 0); +} + + +template <> +inline int32_t NodeCacheHash(int64_t key) { + return ComputeLongHash(key); +} + + +template <> +inline int32_t NodeCacheHash(double key) { + return ComputeLongHash(BitCast<int64_t>(key)); +} + + +template <> +inline int32_t NodeCacheHash(void* key) { + return ComputePointerHash(key); +} + + +template <typename Key> +bool NodeCache<Key>::Resize(Zone* zone) { + if (size_ >= max_) return false; // Don't grow past the maximum size. + + // Allocate a new block of entries 4x the size. + Entry* old_entries = entries_; + int old_size = size_ + LINEAR_PROBE; + size_ = size_ * 4; + int num_entries = size_ + LINEAR_PROBE; + entries_ = zone->NewArray<Entry>(num_entries); + memset(entries_, 0, sizeof(Entry) * num_entries); + + // Insert the old entries into the new block. + for (int i = 0; i < old_size; i++) { + Entry* old = &old_entries[i]; + if (old->value_ != NULL) { + int hash = NodeCacheHash(old->key_); + int start = hash & (size_ - 1); + int end = start + LINEAR_PROBE; + for (int j = start; j < end; j++) { + Entry* entry = &entries_[j]; + if (entry->value_ == NULL) { + entry->key_ = old->key_; + entry->value_ = old->value_; + break; + } + } + } + } + return true; +} + + +template <typename Key> +Node** NodeCache<Key>::Find(Zone* zone, Key key) { + int32_t hash = NodeCacheHash(key); + if (entries_ == NULL) { + // Allocate the initial entries and insert the first entry. + int num_entries = INITIAL_SIZE + LINEAR_PROBE; + entries_ = zone->NewArray<Entry>(num_entries); + size_ = INITIAL_SIZE; + memset(entries_, 0, sizeof(Entry) * num_entries); + Entry* entry = &entries_[hash & (INITIAL_SIZE - 1)]; + entry->key_ = key; + return &entry->value_; + } + + while (true) { + // Search up to N entries after (linear probing). + int start = hash & (size_ - 1); + int end = start + LINEAR_PROBE; + for (int i = start; i < end; i++) { + Entry* entry = &entries_[i]; + if (entry->key_ == key) return &entry->value_; + if (entry->value_ == NULL) { + entry->key_ = key; + return &entry->value_; + } + } + + if (!Resize(zone)) break; // Don't grow past the maximum size. + } + + // If resized to maximum and still didn't find space, overwrite an entry. + Entry* entry = &entries_[hash & (size_ - 1)]; + entry->key_ = key; + entry->value_ = NULL; + return &entry->value_; +} + + +template class NodeCache<int64_t>; +template class NodeCache<int32_t>; +template class NodeCache<void*>; +} +} +} // namespace v8::internal::compiler diff --git a/deps/v8/src/compiler/node-cache.h b/deps/v8/src/compiler/node-cache.h new file mode 100644 index 00000000000..35352ea1eb1 --- /dev/null +++ b/deps/v8/src/compiler/node-cache.h @@ -0,0 +1,53 @@ +// Copyright 2014 the V8 project authors. All rights reserved. +// Use of this source code is governed by a BSD-style license that can be +// found in the LICENSE file. + +#ifndef V8_COMPILER_NODE_CACHE_H_ +#define V8_COMPILER_NODE_CACHE_H_ + +#include "src/v8.h" + +#include "src/compiler/node.h" + +namespace v8 { +namespace internal { +namespace compiler { + +// A cache for nodes based on a key. Useful for implementing canonicalization of +// nodes such as constants, parameters, etc. 
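
Find() hands back a slot rather than a node, which turns lookup and insertion into a single probe; MachineOperatorReducer::Int32Constant earlier in this patch is exactly this idiom. Restated as a standalone sketch (the wrapper function is invented):

```cpp
// The find-or-insert idiom for NodeCache: probe once, fill only if empty.
Node* CachedInt32Constant(NodeCache<int32_t>* cache, Graph* graph,
                          CommonOperatorBuilder* common, int32_t value) {
  Node** loc = cache->Find(graph->zone(), value);
  if (*loc == NULL) {
    *loc = graph->NewNode(common->Int32Constant(value));
  }
  return *loc;  // the entry may be evicted later if the cache fills up
}
```

The NodeCache declaration itself follows.
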
+template <typename Key>
+class NodeCache {
+ public:
+  explicit NodeCache(int max = 256) : entries_(NULL), size_(0), max_(max) {}
+
+  // Search for node associated with {key} and return a pointer to a memory
+  // location in this cache that stores an entry for the key. If the location
+  // returned by this method contains a non-NULL node, the caller can use that
+  // node. Otherwise it is the responsibility of the caller to fill the entry
+  // with a new node.
+  // Note that a previous cache entry may be overwritten if the cache becomes
+  // too full or encounters too many hash collisions.
+  Node** Find(Zone* zone, Key key);
+
+ private:
+  struct Entry {
+    Key key_;
+    Node* value_;
+  };
+
+  Entry* entries_;  // lazily-allocated hash entries.
+  int32_t size_;
+  int32_t max_;
+
+  bool Resize(Zone* zone);
+};
+
+// Various default cache types.
+typedef NodeCache<int64_t> Int64NodeCache;
+typedef NodeCache<int32_t> Int32NodeCache;
+typedef NodeCache<void*> PtrNodeCache;
+}
+}
+}  // namespace v8::internal::compiler
+
+#endif  // V8_COMPILER_NODE_CACHE_H_
diff --git a/deps/v8/src/compiler/node-matchers.h b/deps/v8/src/compiler/node-matchers.h
new file mode 100644
index 00000000000..3b34d07c08f
--- /dev/null
+++ b/deps/v8/src/compiler/node-matchers.h
@@ -0,0 +1,133 @@
+// Copyright 2014 the V8 project authors. All rights reserved.
+// Use of this source code is governed by a BSD-style license that can be
+// found in the LICENSE file.
+
+#ifndef V8_COMPILER_NODE_MATCHERS_H_
+#define V8_COMPILER_NODE_MATCHERS_H_
+
+#include "src/compiler/common-operator.h"
+
+namespace v8 {
+namespace internal {
+namespace compiler {
+
+// A pattern matcher for nodes.
+struct NodeMatcher {
+  explicit NodeMatcher(Node* node) : node_(node) {}
+
+  Node* node() const { return node_; }
+  Operator* op() const { return node()->op(); }
+  IrOpcode::Value opcode() const { return node()->opcode(); }
+
+  bool HasProperty(Operator::Property property) const {
+    return op()->HasProperty(property);
+  }
+  Node* InputAt(int index) const { return node()->InputAt(index); }
+
+#define DEFINE_IS_OPCODE(Opcode) \
+  bool Is##Opcode() const { return opcode() == IrOpcode::k##Opcode; }
+  ALL_OP_LIST(DEFINE_IS_OPCODE)
+#undef DEFINE_IS_OPCODE
+
+ private:
+  Node* node_;
+};
+
+
+// A pattern matcher for arbitrary value constants.
+template <typename T>
+struct ValueMatcher : public NodeMatcher {
+  explicit ValueMatcher(Node* node)
+      : NodeMatcher(node),
+        value_(),
+        has_value_(CommonOperatorTraits<T>::HasValue(node->op())) {
+    if (has_value_) value_ = CommonOperatorTraits<T>::ValueOf(node->op());
+  }
+
+  bool HasValue() const { return has_value_; }
+  T Value() const {
+    DCHECK(HasValue());
+    return value_;
+  }
+
+  bool Is(T value) const {
+    return HasValue() && CommonOperatorTraits<T>::Equals(Value(), value);
+  }
+
+  bool IsInRange(T low, T high) const {
+    return HasValue() && low <= value_ && value_ <= high;
+  }
+
+ private:
+  T value_;
+  bool has_value_;
+};
+
+
+// A pattern matcher for integer constants.
+template <typename T>
+struct IntMatcher V8_FINAL : public ValueMatcher<T> {
+  explicit IntMatcher(Node* node) : ValueMatcher<T>(node) {}
+
+  bool IsPowerOf2() const {
+    return this->HasValue() && this->Value() > 0 &&
+           (this->Value() & (this->Value() - 1)) == 0;
+  }
+};
+
+typedef IntMatcher<int32_t> Int32Matcher;
+typedef IntMatcher<uint32_t> Uint32Matcher;
+typedef IntMatcher<int64_t> Int64Matcher;
+typedef IntMatcher<uint64_t> Uint64Matcher;
+
+
+// A pattern matcher for floating point constants.
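
The matchers are what keep the reducer's case arms earlier in this patch so compact. A hedged sketch of the idiom, written as a hypothetical extra member of MachineOperatorReducer so that its Replace/ReplaceInt32/NoChange helpers are in scope; it mirrors the real kWord32And reduction.

```cpp
// Matcher idiom; hypothetical member, mirroring the kWord32And case.
Reduction MachineOperatorReducer::ReduceWord32AndSketch(Node* node) {
  Int32BinopMatcher m(node);  // a constant operand is already on the right
  if (m.right().Is(0)) return Replace(m.right().node());     // x & 0 => 0
  if (m.IsFoldable()) {                                      // K & K => K
    return ReplaceInt32(m.left().Value() & m.right().Value());
  }
  if (m.LeftEqualsRight()) return Replace(m.left().node());  // x & x => x
  return NoChange();
}
```

The floating point matchers and the BinopMatcher that performs the constant canonicalization continue below.
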
+template <typename T> +struct FloatMatcher V8_FINAL : public ValueMatcher<T> { + explicit FloatMatcher(Node* node) : ValueMatcher<T>(node) {} + + bool IsNaN() const { return this->HasValue() && std::isnan(this->Value()); } +}; + +typedef FloatMatcher<double> Float64Matcher; + + +// For shorter pattern matching code, this struct matches both the left and +// right hand sides of a binary operation and can put constants on the right +// if they appear on the left hand side of a commutative operation. +template <typename Left, typename Right> +struct BinopMatcher V8_FINAL : public NodeMatcher { + explicit BinopMatcher(Node* node) + : NodeMatcher(node), left_(InputAt(0)), right_(InputAt(1)) { + if (HasProperty(Operator::kCommutative)) PutConstantOnRight(); + } + + const Left& left() const { return left_; } + const Right& right() const { return right_; } + + bool IsFoldable() const { return left().HasValue() && right().HasValue(); } + bool LeftEqualsRight() const { return left().node() == right().node(); } + + private: + void PutConstantOnRight() { + if (left().HasValue() && !right().HasValue()) { + std::swap(left_, right_); + node()->ReplaceInput(0, left().node()); + node()->ReplaceInput(1, right().node()); + } + } + + Left left_; + Right right_; +}; + +typedef BinopMatcher<Int32Matcher, Int32Matcher> Int32BinopMatcher; +typedef BinopMatcher<Uint32Matcher, Uint32Matcher> Uint32BinopMatcher; +typedef BinopMatcher<Int64Matcher, Int64Matcher> Int64BinopMatcher; +typedef BinopMatcher<Uint64Matcher, Uint64Matcher> Uint64BinopMatcher; +typedef BinopMatcher<Float64Matcher, Float64Matcher> Float64BinopMatcher; +} +} +} // namespace v8::internal::compiler + +#endif // V8_COMPILER_NODE_MATCHERS_H_ diff --git a/deps/v8/src/compiler/node-properties-inl.h b/deps/v8/src/compiler/node-properties-inl.h new file mode 100644 index 00000000000..2d63b0cc1b7 --- /dev/null +++ b/deps/v8/src/compiler/node-properties-inl.h @@ -0,0 +1,165 @@ +// Copyright 2013 the V8 project authors. All rights reserved. +// Use of this source code is governed by a BSD-style license that can be +// found in the LICENSE file. + +#ifndef V8_COMPILER_NODE_PROPERTIES_INL_H_ +#define V8_COMPILER_NODE_PROPERTIES_INL_H_ + +#include "src/v8.h" + +#include "src/compiler/common-operator.h" +#include "src/compiler/node-properties.h" +#include "src/compiler/opcodes.h" +#include "src/compiler/operator.h" +#include "src/compiler/operator-properties-inl.h" +#include "src/compiler/operator-properties.h" + +namespace v8 { +namespace internal { +namespace compiler { + +// ----------------------------------------------------------------------------- +// Input layout. 
+// Inputs are always arranged in order as follows: +// 0 [ values, context, effects, control ] node->InputCount() + +inline int NodeProperties::FirstValueIndex(Node* node) { return 0; } + +inline int NodeProperties::FirstContextIndex(Node* node) { + return PastValueIndex(node); +} + +inline int NodeProperties::FirstEffectIndex(Node* node) { + return PastContextIndex(node); +} + +inline int NodeProperties::FirstControlIndex(Node* node) { + return PastEffectIndex(node); +} + + +inline int NodeProperties::PastValueIndex(Node* node) { + return FirstValueIndex(node) + + OperatorProperties::GetValueInputCount(node->op()); +} + +inline int NodeProperties::PastContextIndex(Node* node) { + return FirstContextIndex(node) + + OperatorProperties::GetContextInputCount(node->op()); +} + +inline int NodeProperties::PastEffectIndex(Node* node) { + return FirstEffectIndex(node) + + OperatorProperties::GetEffectInputCount(node->op()); +} + +inline int NodeProperties::PastControlIndex(Node* node) { + return FirstControlIndex(node) + + OperatorProperties::GetControlInputCount(node->op()); +} + + +// ----------------------------------------------------------------------------- +// Input accessors. + +inline Node* NodeProperties::GetValueInput(Node* node, int index) { + DCHECK(0 <= index && + index < OperatorProperties::GetValueInputCount(node->op())); + return node->InputAt(FirstValueIndex(node) + index); +} + +inline Node* NodeProperties::GetContextInput(Node* node) { + DCHECK(OperatorProperties::HasContextInput(node->op())); + return node->InputAt(FirstContextIndex(node)); +} + +inline Node* NodeProperties::GetEffectInput(Node* node, int index) { + DCHECK(0 <= index && + index < OperatorProperties::GetEffectInputCount(node->op())); + return node->InputAt(FirstEffectIndex(node) + index); +} + +inline Node* NodeProperties::GetControlInput(Node* node, int index) { + DCHECK(0 <= index && + index < OperatorProperties::GetControlInputCount(node->op())); + return node->InputAt(FirstControlIndex(node) + index); +} + + +// ----------------------------------------------------------------------------- +// Edge kinds. + +inline bool NodeProperties::IsInputRange(Node::Edge edge, int first, int num) { + // TODO(titzer): edge.index() is linear time; + // edges maybe need to be marked as value/effect/control. + if (num == 0) return false; + int index = edge.index(); + return first <= index && index < first + num; +} + +inline bool NodeProperties::IsValueEdge(Node::Edge edge) { + Node* node = edge.from(); + return IsInputRange(edge, FirstValueIndex(node), + OperatorProperties::GetValueInputCount(node->op())); +} + +inline bool NodeProperties::IsContextEdge(Node::Edge edge) { + Node* node = edge.from(); + return IsInputRange(edge, FirstContextIndex(node), + OperatorProperties::GetContextInputCount(node->op())); +} + +inline bool NodeProperties::IsEffectEdge(Node::Edge edge) { + Node* node = edge.from(); + return IsInputRange(edge, FirstEffectIndex(node), + OperatorProperties::GetEffectInputCount(node->op())); +} + +inline bool NodeProperties::IsControlEdge(Node::Edge edge) { + Node* node = edge.from(); + return IsInputRange(edge, FirstControlIndex(node), + OperatorProperties::GetControlInputCount(node->op())); +} + + +// ----------------------------------------------------------------------------- +// Miscellaneous predicates. 
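
The fixed input order documented above makes every accessor a constant-time index computation from the operator's declared input counts. A worked sketch for an operator with 2 value inputs, 1 context input, 1 effect input and 1 control input (the counts are assumed; USE() is V8's unused-variable marker):

```cpp
// Index arithmetic behind the NodeProperties accessors.
//   inputs:  [ v0, v1, context, effect, control ]
//   indices:    0   1     2        3       4
void InputLayoutSketch(Node* node) {
  Node* value1 = NodeProperties::GetValueInput(node, 1);  // InputAt(1)
  Node* context = NodeProperties::GetContextInput(node);  // InputAt(2)
  Node* effect = NodeProperties::GetEffectInput(node);    // InputAt(3)
  Node* control = NodeProperties::GetControlInput(node);  // InputAt(4)
  USE(value1); USE(context); USE(effect); USE(control);
}
```

The predicates and mutators continue below.
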
+ +inline bool NodeProperties::IsControl(Node* node) { + return IrOpcode::IsControlOpcode(node->opcode()); +} + + +// ----------------------------------------------------------------------------- +// Miscellaneous mutators. + +inline void NodeProperties::ReplaceControlInput(Node* node, Node* control) { + node->ReplaceInput(FirstControlIndex(node), control); +} + +inline void NodeProperties::ReplaceEffectInput(Node* node, Node* effect, + int index) { + DCHECK(index < OperatorProperties::GetEffectInputCount(node->op())); + return node->ReplaceInput(FirstEffectIndex(node) + index, effect); +} + +inline void NodeProperties::RemoveNonValueInputs(Node* node) { + node->TrimInputCount(OperatorProperties::GetValueInputCount(node->op())); +} + + +// ----------------------------------------------------------------------------- +// Type Bounds. + +inline Bounds NodeProperties::GetBounds(Node* node) { return node->bounds(); } + +inline void NodeProperties::SetBounds(Node* node, Bounds b) { + node->set_bounds(b); +} + + +} +} +} // namespace v8::internal::compiler + +#endif // V8_COMPILER_NODE_PROPERTIES_INL_H_ diff --git a/deps/v8/src/compiler/node-properties.h b/deps/v8/src/compiler/node-properties.h new file mode 100644 index 00000000000..6088a0a3a06 --- /dev/null +++ b/deps/v8/src/compiler/node-properties.h @@ -0,0 +1,57 @@ +// Copyright 2013 the V8 project authors. All rights reserved. +// Use of this source code is governed by a BSD-style license that can be +// found in the LICENSE file. + +#ifndef V8_COMPILER_NODE_PROPERTIES_H_ +#define V8_COMPILER_NODE_PROPERTIES_H_ + +#include "src/compiler/node.h" +#include "src/types.h" + +namespace v8 { +namespace internal { +namespace compiler { + +class Operator; + +// A facade that simplifies access to the different kinds of inputs to a node. +class NodeProperties { + public: + static inline Node* GetValueInput(Node* node, int index); + static inline Node* GetContextInput(Node* node); + static inline Node* GetEffectInput(Node* node, int index = 0); + static inline Node* GetControlInput(Node* node, int index = 0); + + static inline bool IsValueEdge(Node::Edge edge); + static inline bool IsContextEdge(Node::Edge edge); + static inline bool IsEffectEdge(Node::Edge edge); + static inline bool IsControlEdge(Node::Edge edge); + + static inline bool IsControl(Node* node); + + static inline void ReplaceControlInput(Node* node, Node* control); + static inline void ReplaceEffectInput(Node* node, Node* effect, + int index = 0); + static inline void RemoveNonValueInputs(Node* node); + + static inline Bounds GetBounds(Node* node); + static inline void SetBounds(Node* node, Bounds bounds); + + private: + static inline int FirstValueIndex(Node* node); + static inline int FirstContextIndex(Node* node); + static inline int FirstEffectIndex(Node* node); + static inline int FirstControlIndex(Node* node); + static inline int PastValueIndex(Node* node); + static inline int PastContextIndex(Node* node); + static inline int PastEffectIndex(Node* node); + static inline int PastControlIndex(Node* node); + + static inline bool IsInputRange(Node::Edge edge, int first, int count); +}; + +} // namespace compiler +} // namespace internal +} // namespace v8 + +#endif // V8_COMPILER_NODE_PROPERTIES_H_ diff --git a/deps/v8/src/compiler/node.cc b/deps/v8/src/compiler/node.cc new file mode 100644 index 00000000000..4cb5748b409 --- /dev/null +++ b/deps/v8/src/compiler/node.cc @@ -0,0 +1,55 @@ +// Copyright 2013 the V8 project authors. All rights reserved. 
+// Use of this source code is governed by a BSD-style license that can be +// found in the LICENSE file. + +#include "src/compiler/node.h" + +#include "src/compiler/generic-node-inl.h" + +namespace v8 { +namespace internal { +namespace compiler { + +void Node::CollectProjections(int projection_count, Node** projections) { + for (int i = 0; i < projection_count; ++i) projections[i] = NULL; + for (UseIter i = uses().begin(); i != uses().end(); ++i) { + if ((*i)->opcode() != IrOpcode::kProjection) continue; + int32_t index = OpParameter<int32_t>(*i); + DCHECK_GE(index, 0); + DCHECK_LT(index, projection_count); + DCHECK_EQ(NULL, projections[index]); + projections[index] = *i; + } +} + + +Node* Node::FindProjection(int32_t projection_index) { + for (UseIter i = uses().begin(); i != uses().end(); ++i) { + if ((*i)->opcode() == IrOpcode::kProjection && + OpParameter<int32_t>(*i) == projection_index) { + return *i; + } + } + return NULL; +} + + +OStream& operator<<(OStream& os, const Operator& op) { return op.PrintTo(os); } + + +OStream& operator<<(OStream& os, const Node& n) { + os << n.id() << ": " << *n.op(); + if (n.op()->InputCount() != 0) { + os << "("; + for (int i = 0; i < n.op()->InputCount(); ++i) { + if (i != 0) os << ", "; + os << n.InputAt(i)->id(); + } + os << ")"; + } + return os; +} + +} // namespace compiler +} // namespace internal +} // namespace v8 diff --git a/deps/v8/src/compiler/node.h b/deps/v8/src/compiler/node.h new file mode 100644 index 00000000000..ddca510a0e9 --- /dev/null +++ b/deps/v8/src/compiler/node.h @@ -0,0 +1,95 @@ +// Copyright 2013 the V8 project authors. All rights reserved. +// Use of this source code is governed by a BSD-style license that can be +// found in the LICENSE file. + +#ifndef V8_COMPILER_NODE_H_ +#define V8_COMPILER_NODE_H_ + +#include <deque> +#include <set> +#include <vector> + +#include "src/compiler/generic-algorithm.h" +#include "src/compiler/generic-node.h" +#include "src/compiler/opcodes.h" +#include "src/compiler/operator.h" +#include "src/types.h" +#include "src/zone.h" +#include "src/zone-allocator.h" + +namespace v8 { +namespace internal { +namespace compiler { + +class NodeData { + public: + Operator* op() const { return op_; } + void set_op(Operator* op) { op_ = op; } + + IrOpcode::Value opcode() const { + DCHECK(op_->opcode() <= IrOpcode::kLast); + return static_cast<IrOpcode::Value>(op_->opcode()); + } + + Bounds bounds() { return bounds_; } + + protected: + Operator* op_; + Bounds bounds_; + explicit NodeData(Zone* zone) : bounds_(Bounds(Type::None(zone))) {} + + friend class NodeProperties; + void set_bounds(Bounds b) { bounds_ = b; } +}; + +// A Node is the basic primitive of an IR graph. In addition to the members +// inherited from Vector, Nodes only contain a mutable Operator that may change +// during compilation, e.g. during lowering passes. Other information that +// needs to be associated with Nodes during compilation must be stored +// out-of-line indexed by the Node's id. 
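
CollectProjections and FindProjection above exist for the two-output machine operators: for Int32AddWithOverflow, projection 0 is the 32-bit sum and projection 1 the overflow flag. A hedged sketch using the hypothetical GraphBuilderExample factory host from earlier:

```cpp
// Splitting the two results of Int32AddWithOverflow; illustrative only.
void AddWithOverflowSketch(GraphBuilderExample* m, Node* a, Node* b) {
  Node* add = m->Int32AddWithOverflow(a, b);
  Node* value = m->Projection(0, add);     // the 32-bit sum
  Node* overflow = m->Projection(1, add);  // nonzero if the add overflowed
  // Later passes can recover the projections straight from the node:
  DCHECK_EQ(value, add->FindProjection(0));
  DCHECK_EQ(overflow, add->FindProjection(1));
}
```

The Node class declaration itself follows.
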
+class Node : public GenericNode<NodeData, Node> { + public: + Node(GenericGraphBase* graph, int input_count) + : GenericNode<NodeData, Node>(graph, input_count) {} + + void Initialize(Operator* op) { set_op(op); } + + void CollectProjections(int projection_count, Node** projections); + Node* FindProjection(int32_t projection_index); +}; + +OStream& operator<<(OStream& os, const Node& n); + +typedef GenericGraphVisit::NullNodeVisitor<NodeData, Node> NullNodeVisitor; + +typedef zone_allocator<Node*> NodePtrZoneAllocator; + +typedef std::set<Node*, std::less<Node*>, NodePtrZoneAllocator> NodeSet; +typedef NodeSet::iterator NodeSetIter; +typedef NodeSet::reverse_iterator NodeSetRIter; + +typedef std::deque<Node*, NodePtrZoneAllocator> NodeDeque; +typedef NodeDeque::iterator NodeDequeIter; + +typedef std::vector<Node*, NodePtrZoneAllocator> NodeVector; +typedef NodeVector::iterator NodeVectorIter; +typedef NodeVector::reverse_iterator NodeVectorRIter; + +typedef zone_allocator<NodeVector> ZoneNodeVectorAllocator; +typedef std::vector<NodeVector, ZoneNodeVectorAllocator> NodeVectorVector; +typedef NodeVectorVector::iterator NodeVectorVectorIter; +typedef NodeVectorVector::reverse_iterator NodeVectorVectorRIter; + +typedef Node::Uses::iterator UseIter; +typedef Node::Inputs::iterator InputIter; + +// Helper to extract parameters from Operator1<*> nodes. +template <typename T> +static inline T OpParameter(Node* node) { + return reinterpret_cast<Operator1<T>*>(node->op())->parameter(); +} +} +} +} // namespace v8::internal::compiler + +#endif // V8_COMPILER_NODE_H_ diff --git a/deps/v8/src/compiler/opcodes.h b/deps/v8/src/compiler/opcodes.h new file mode 100644 index 00000000000..1371bfd16bd --- /dev/null +++ b/deps/v8/src/compiler/opcodes.h @@ -0,0 +1,297 @@ +// Copyright 2013 the V8 project authors. All rights reserved. +// Use of this source code is governed by a BSD-style license that can be +// found in the LICENSE file. + +#ifndef V8_COMPILER_OPCODES_H_ +#define V8_COMPILER_OPCODES_H_ + +// Opcodes for control operators. +#define CONTROL_OP_LIST(V) \ + V(Start) \ + V(Dead) \ + V(Loop) \ + V(End) \ + V(Branch) \ + V(IfTrue) \ + V(IfFalse) \ + V(Merge) \ + V(Return) \ + V(Throw) \ + V(Continuation) \ + V(LazyDeoptimization) \ + V(Deoptimize) + +// Opcodes for common operators. +#define LEAF_OP_LIST(V) \ + V(Int32Constant) \ + V(Int64Constant) \ + V(Float64Constant) \ + V(ExternalConstant) \ + V(NumberConstant) \ + V(HeapConstant) + +#define INNER_OP_LIST(V) \ + V(Phi) \ + V(EffectPhi) \ + V(FrameState) \ + V(StateValues) \ + V(Call) \ + V(Parameter) \ + V(Projection) + +#define COMMON_OP_LIST(V) \ + LEAF_OP_LIST(V) \ + INNER_OP_LIST(V) + +// Opcodes for JavaScript operators. 
+#define JS_COMPARE_BINOP_LIST(V) \
+  V(JSEqual)                     \
+  V(JSNotEqual)                  \
+  V(JSStrictEqual)               \
+  V(JSStrictNotEqual)            \
+  V(JSLessThan)                  \
+  V(JSGreaterThan)               \
+  V(JSLessThanOrEqual)           \
+  V(JSGreaterThanOrEqual)
+
+#define JS_BITWISE_BINOP_LIST(V) \
+  V(JSBitwiseOr)                 \
+  V(JSBitwiseXor)                \
+  V(JSBitwiseAnd)                \
+  V(JSShiftLeft)                 \
+  V(JSShiftRight)                \
+  V(JSShiftRightLogical)
+
+#define JS_ARITH_BINOP_LIST(V) \
+  V(JSAdd)                     \
+  V(JSSubtract)                \
+  V(JSMultiply)                \
+  V(JSDivide)                  \
+  V(JSModulus)
+
+#define JS_SIMPLE_BINOP_LIST(V) \
+  JS_COMPARE_BINOP_LIST(V)      \
+  JS_BITWISE_BINOP_LIST(V)      \
+  JS_ARITH_BINOP_LIST(V)
+
+#define JS_LOGIC_UNOP_LIST(V) V(JSUnaryNot)
+
+#define JS_CONVERSION_UNOP_LIST(V) \
+  V(JSToBoolean)                   \
+  V(JSToNumber)                    \
+  V(JSToString)                    \
+  V(JSToName)                      \
+  V(JSToObject)
+
+#define JS_OTHER_UNOP_LIST(V) V(JSTypeOf)
+
+#define JS_SIMPLE_UNOP_LIST(V) \
+  JS_LOGIC_UNOP_LIST(V)        \
+  JS_CONVERSION_UNOP_LIST(V)   \
+  JS_OTHER_UNOP_LIST(V)
+
+#define JS_OBJECT_OP_LIST(V) \
+  V(JSCreate)                \
+  V(JSLoadProperty)          \
+  V(JSLoadNamed)             \
+  V(JSStoreProperty)         \
+  V(JSStoreNamed)            \
+  V(JSDeleteProperty)        \
+  V(JSHasProperty)           \
+  V(JSInstanceOf)
+
+#define JS_CONTEXT_OP_LIST(V) \
+  V(JSLoadContext)            \
+  V(JSStoreContext)           \
+  V(JSCreateFunctionContext)  \
+  V(JSCreateCatchContext)     \
+  V(JSCreateWithContext)      \
+  V(JSCreateBlockContext)     \
+  V(JSCreateModuleContext)    \
+  V(JSCreateGlobalContext)
+
+#define JS_OTHER_OP_LIST(V) \
+  V(JSCallConstruct)        \
+  V(JSCallFunction)         \
+  V(JSCallRuntime)          \
+  V(JSYield)                \
+  V(JSDebugger)
+
+#define JS_OP_LIST(V)     \
+  JS_SIMPLE_BINOP_LIST(V) \
+  JS_SIMPLE_UNOP_LIST(V)  \
+  JS_OBJECT_OP_LIST(V)    \
+  JS_CONTEXT_OP_LIST(V)   \
+  JS_OTHER_OP_LIST(V)
+
+// Opcodes for VirtualMachine-level operators.
+#define SIMPLIFIED_OP_LIST(V) \
+  V(BooleanNot)               \
+  V(NumberEqual)              \
+  V(NumberLessThan)           \
+  V(NumberLessThanOrEqual)    \
+  V(NumberAdd)                \
+  V(NumberSubtract)           \
+  V(NumberMultiply)           \
+  V(NumberDivide)             \
+  V(NumberModulus)            \
+  V(NumberToInt32)            \
+  V(NumberToUint32)           \
+  V(ReferenceEqual)           \
+  V(StringEqual)              \
+  V(StringLessThan)           \
+  V(StringLessThanOrEqual)    \
+  V(StringAdd)                \
+  V(ChangeTaggedToInt32)      \
+  V(ChangeTaggedToUint32)     \
+  V(ChangeTaggedToFloat64)    \
+  V(ChangeInt32ToTagged)      \
+  V(ChangeUint32ToTagged)     \
+  V(ChangeFloat64ToTagged)    \
+  V(ChangeBoolToBit)          \
+  V(ChangeBitToBool)          \
+  V(LoadField)                \
+  V(LoadElement)              \
+  V(StoreField)               \
+  V(StoreElement)
+
+// Opcodes for Machine-level operators.
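+// [Editor's note: the IrOpcode declaration at the end of this header counts
+// its members by letting kLast start at -1 and expanding every list entry to
+// "+1". A tiny standalone version of that counting trick:
+//
+//   #define LIST(V) V(A) V(B) V(C)
+//   enum Op {
+//   #define DECLARE(x) k##x,
+//     LIST(DECLARE)
+//   #undef DECLARE
+//     kLast = -1
+//   #define COUNT(x) +1
+//     LIST(COUNT)
+//   #undef COUNT
+//   };  // kLast == -1 + 1 + 1 + 1 == 2, the index of the final opcode]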
+#define MACHINE_OP_LIST(V) \ + V(Load) \ + V(Store) \ + V(Word32And) \ + V(Word32Or) \ + V(Word32Xor) \ + V(Word32Shl) \ + V(Word32Shr) \ + V(Word32Sar) \ + V(Word32Equal) \ + V(Word64And) \ + V(Word64Or) \ + V(Word64Xor) \ + V(Word64Shl) \ + V(Word64Shr) \ + V(Word64Sar) \ + V(Word64Equal) \ + V(Int32Add) \ + V(Int32AddWithOverflow) \ + V(Int32Sub) \ + V(Int32SubWithOverflow) \ + V(Int32Mul) \ + V(Int32Div) \ + V(Int32UDiv) \ + V(Int32Mod) \ + V(Int32UMod) \ + V(Int32LessThan) \ + V(Int32LessThanOrEqual) \ + V(Uint32LessThan) \ + V(Uint32LessThanOrEqual) \ + V(Int64Add) \ + V(Int64Sub) \ + V(Int64Mul) \ + V(Int64Div) \ + V(Int64UDiv) \ + V(Int64Mod) \ + V(Int64UMod) \ + V(Int64LessThan) \ + V(Int64LessThanOrEqual) \ + V(ConvertInt64ToInt32) \ + V(ConvertInt32ToInt64) \ + V(ChangeInt32ToFloat64) \ + V(ChangeUint32ToFloat64) \ + V(ChangeFloat64ToInt32) \ + V(ChangeFloat64ToUint32) \ + V(Float64Add) \ + V(Float64Sub) \ + V(Float64Mul) \ + V(Float64Div) \ + V(Float64Mod) \ + V(Float64Equal) \ + V(Float64LessThan) \ + V(Float64LessThanOrEqual) + +#define VALUE_OP_LIST(V) \ + COMMON_OP_LIST(V) \ + SIMPLIFIED_OP_LIST(V) \ + MACHINE_OP_LIST(V) \ + JS_OP_LIST(V) + +// The combination of all operators at all levels and the common operators. +#define ALL_OP_LIST(V) \ + CONTROL_OP_LIST(V) \ + VALUE_OP_LIST(V) + +namespace v8 { +namespace internal { +namespace compiler { + +// Declare an enumeration with all the opcodes at all levels so that they +// can be globally, uniquely numbered. +class IrOpcode { + public: + enum Value { +#define DECLARE_OPCODE(x) k##x, + ALL_OP_LIST(DECLARE_OPCODE) +#undef DECLARE_OPCODE + kLast = -1 +#define COUNT_OPCODE(x) +1 + ALL_OP_LIST(COUNT_OPCODE) +#undef COUNT_OPCODE + }; + + // Returns the mnemonic name of an opcode. + static const char* Mnemonic(Value val) { + switch (val) { +#define RETURN_NAME(x) \ + case k##x: \ + return #x; + ALL_OP_LIST(RETURN_NAME) +#undef RETURN_NAME + default: + return "UnknownOpcode"; + } + } + + static bool IsJsOpcode(Value val) { + switch (val) { +#define RETURN_NAME(x) \ + case k##x: \ + return true; + JS_OP_LIST(RETURN_NAME) +#undef RETURN_NAME + default: + return false; + } + } + + static bool IsControlOpcode(Value val) { + switch (val) { +#define RETURN_NAME(x) \ + case k##x: \ + return true; + CONTROL_OP_LIST(RETURN_NAME) +#undef RETURN_NAME + default: + return false; + } + } + + static bool IsCommonOpcode(Value val) { + switch (val) { +#define RETURN_NAME(x) \ + case k##x: \ + return true; + CONTROL_OP_LIST(RETURN_NAME) + COMMON_OP_LIST(RETURN_NAME) +#undef RETURN_NAME + default: + return false; + } + } +}; + +} // namespace compiler +} // namespace internal +} // namespace v8 + +#endif // V8_COMPILER_OPCODES_H_ diff --git a/deps/v8/src/compiler/operator-properties-inl.h b/deps/v8/src/compiler/operator-properties-inl.h new file mode 100644 index 00000000000..42833fdeb41 --- /dev/null +++ b/deps/v8/src/compiler/operator-properties-inl.h @@ -0,0 +1,191 @@ +// Copyright 2013 the V8 project authors. All rights reserved. +// Use of this source code is governed by a BSD-style license that can be +// found in the LICENSE file. 
+
+#ifndef V8_COMPILER_OPERATOR_PROPERTIES_INL_H_
+#define V8_COMPILER_OPERATOR_PROPERTIES_INL_H_
+
+#include "src/v8.h"
+
+#include "src/compiler/js-operator.h"
+#include "src/compiler/opcodes.h"
+#include "src/compiler/operator-properties.h"
+
+namespace v8 {
+namespace internal {
+namespace compiler {
+
+inline bool OperatorProperties::HasValueInput(Operator* op) {
+  return OperatorProperties::GetValueInputCount(op) > 0;
+}
+
+inline bool OperatorProperties::HasContextInput(Operator* op) {
+  IrOpcode::Value opcode = static_cast<IrOpcode::Value>(op->opcode());
+  return IrOpcode::IsJsOpcode(opcode);
+}
+
+inline bool OperatorProperties::HasEffectInput(Operator* op) {
+  return OperatorProperties::GetEffectInputCount(op) > 0;
+}
+
+inline bool OperatorProperties::HasControlInput(Operator* op) {
+  return OperatorProperties::GetControlInputCount(op) > 0;
+}
+
+
+inline int OperatorProperties::GetValueInputCount(Operator* op) {
+  return op->InputCount();
+}
+
+inline int OperatorProperties::GetContextInputCount(Operator* op) {
+  return OperatorProperties::HasContextInput(op) ? 1 : 0;
+}
+
+inline int OperatorProperties::GetEffectInputCount(Operator* op) {
+  if (op->opcode() == IrOpcode::kEffectPhi) {
+    return static_cast<Operator1<int>*>(op)->parameter();
+  }
+  if (op->HasProperty(Operator::kNoRead) && op->HasProperty(Operator::kNoWrite))
+    return 0;  // no effects.
+  return 1;
+}
+
+inline int OperatorProperties::GetControlInputCount(Operator* op) {
+  switch (op->opcode()) {
+    case IrOpcode::kPhi:
+    case IrOpcode::kEffectPhi:
+      return 1;
+#define OPCODE_CASE(x) case IrOpcode::k##x:
+      CONTROL_OP_LIST(OPCODE_CASE)
+#undef OPCODE_CASE
+      return static_cast<ControlOperator*>(op)->ControlInputCount();
+    default:
+      // If a node can lazily deoptimize, it needs a control dependency.
+      if (CanLazilyDeoptimize(op)) {
+        return 1;
+      }
+      // Operators that have write effects must have a control
+      // dependency. Effect dependencies only ensure the correct order of
+      // write/read operations without consideration of control flow. Without
+      // an explicit control dependency, writes can float in the schedule too
+      // early along a path that shouldn't generate a side-effect.
+      return op->HasProperty(Operator::kNoWrite) ? 0 : 1;
+  }
+  return 0;
+}
+
+inline int OperatorProperties::GetTotalInputCount(Operator* op) {
+  return GetValueInputCount(op) + GetContextInputCount(op) +
+         GetEffectInputCount(op) + GetControlInputCount(op);
+}
+
+// -----------------------------------------------------------------------------
+// Output properties.
+
+inline bool OperatorProperties::HasValueOutput(Operator* op) {
+  return GetValueOutputCount(op) > 0;
+}
+
+inline bool OperatorProperties::HasEffectOutput(Operator* op) {
+  return op->opcode() == IrOpcode::kStart || GetEffectInputCount(op) > 0;
+}
+
+inline bool OperatorProperties::HasControlOutput(Operator* op) {
+  IrOpcode::Value opcode = static_cast<IrOpcode::Value>(op->opcode());
+  return (opcode != IrOpcode::kEnd && IrOpcode::IsControlOpcode(opcode)) ||
+         CanLazilyDeoptimize(op);
+}
+
+
+inline int OperatorProperties::GetValueOutputCount(Operator* op) {
+  return op->OutputCount();
+}
+
+inline int OperatorProperties::GetEffectOutputCount(Operator* op) {
+  return HasEffectOutput(op) ? 1 : 0;
+}
+
+inline int OperatorProperties::GetControlOutputCount(Operator* node) {
+  return node->opcode() == IrOpcode::kBranch ? 2 : HasControlOutput(node) ?
1 + : 0; +} + + +inline bool OperatorProperties::IsBasicBlockBegin(Operator* op) { + uint8_t opcode = op->opcode(); + return opcode == IrOpcode::kStart || opcode == IrOpcode::kEnd || + opcode == IrOpcode::kDead || opcode == IrOpcode::kLoop || + opcode == IrOpcode::kMerge || opcode == IrOpcode::kIfTrue || + opcode == IrOpcode::kIfFalse; +} + +inline bool OperatorProperties::CanBeScheduled(Operator* op) { return true; } + +inline bool OperatorProperties::HasFixedSchedulePosition(Operator* op) { + IrOpcode::Value opcode = static_cast<IrOpcode::Value>(op->opcode()); + return (IrOpcode::IsControlOpcode(opcode)) || + opcode == IrOpcode::kParameter || opcode == IrOpcode::kEffectPhi || + opcode == IrOpcode::kPhi; +} + +inline bool OperatorProperties::IsScheduleRoot(Operator* op) { + uint8_t opcode = op->opcode(); + return opcode == IrOpcode::kEnd || opcode == IrOpcode::kEffectPhi || + opcode == IrOpcode::kPhi; +} + +inline bool OperatorProperties::CanLazilyDeoptimize(Operator* op) { + // TODO(jarin) This function allows turning on lazy deoptimization + // incrementally. It will change as we turn on lazy deopt for + // more nodes. + + if (!FLAG_turbo_deoptimization) { + return false; + } + + switch (op->opcode()) { + case IrOpcode::kCall: { + CallOperator* call_op = reinterpret_cast<CallOperator*>(op); + CallDescriptor* descriptor = call_op->parameter(); + return descriptor->CanLazilyDeoptimize(); + } + case IrOpcode::kJSCallRuntime: { + Runtime::FunctionId function = + reinterpret_cast<Operator1<Runtime::FunctionId>*>(op)->parameter(); + // TODO(jarin) At the moment, we only support lazy deoptimization for + // the %DeoptimizeFunction runtime function. + return function == Runtime::kDeoptimizeFunction; + } + + // JS function calls + case IrOpcode::kJSCallFunction: + case IrOpcode::kJSCallConstruct: + + // Binary operations + case IrOpcode::kJSBitwiseOr: + case IrOpcode::kJSBitwiseXor: + case IrOpcode::kJSBitwiseAnd: + case IrOpcode::kJSShiftLeft: + case IrOpcode::kJSShiftRight: + case IrOpcode::kJSShiftRightLogical: + case IrOpcode::kJSAdd: + case IrOpcode::kJSSubtract: + case IrOpcode::kJSMultiply: + case IrOpcode::kJSDivide: + case IrOpcode::kJSModulus: + case IrOpcode::kJSLoadProperty: + case IrOpcode::kJSStoreProperty: + case IrOpcode::kJSLoadNamed: + case IrOpcode::kJSStoreNamed: + return true; + + default: + return false; + } + return false; +} +} +} +} // namespace v8::internal::compiler + +#endif // V8_COMPILER_OPERATOR_PROPERTIES_INL_H_ diff --git a/deps/v8/src/compiler/operator-properties.h b/deps/v8/src/compiler/operator-properties.h new file mode 100644 index 00000000000..cbc8ed9af06 --- /dev/null +++ b/deps/v8/src/compiler/operator-properties.h @@ -0,0 +1,49 @@ +// Copyright 2013 the V8 project authors. All rights reserved. +// Use of this source code is governed by a BSD-style license that can be +// found in the LICENSE file. 
+
+#ifndef V8_COMPILER_OPERATOR_PROPERTIES_H_
+#define V8_COMPILER_OPERATOR_PROPERTIES_H_
+
+#include "src/v8.h"
+
+namespace v8 {
+namespace internal {
+namespace compiler {
+
+class Operator;
+
+class OperatorProperties {
+ public:
+  static inline bool HasValueInput(Operator* node);
+  static inline bool HasContextInput(Operator* node);
+  static inline bool HasEffectInput(Operator* node);
+  static inline bool HasControlInput(Operator* node);
+
+  static inline int GetValueInputCount(Operator* op);
+  static inline int GetContextInputCount(Operator* op);
+  static inline int GetEffectInputCount(Operator* op);
+  static inline int GetControlInputCount(Operator* op);
+  static inline int GetTotalInputCount(Operator* op);
+
+  static inline bool HasValueOutput(Operator* op);
+  static inline bool HasEffectOutput(Operator* op);
+  static inline bool HasControlOutput(Operator* op);
+
+  static inline int GetValueOutputCount(Operator* op);
+  static inline int GetEffectOutputCount(Operator* op);
+  static inline int GetControlOutputCount(Operator* op);
+
+  static inline bool IsBasicBlockBegin(Operator* op);
+
+  static inline bool CanBeScheduled(Operator* op);
+  static inline bool HasFixedSchedulePosition(Operator* op);
+  static inline bool IsScheduleRoot(Operator* op);
+
+  static inline bool CanLazilyDeoptimize(Operator* op);
+};
+}
+}
+}  // namespace v8::internal::compiler
+
+#endif  // V8_COMPILER_OPERATOR_PROPERTIES_H_
diff --git a/deps/v8/src/compiler/operator.h b/deps/v8/src/compiler/operator.h
new file mode 100644
index 00000000000..4294d344fe9
--- /dev/null
+++ b/deps/v8/src/compiler/operator.h
@@ -0,0 +1,280 @@
+// Copyright 2013 the V8 project authors. All rights reserved.
+// Use of this source code is governed by a BSD-style license that can be
+// found in the LICENSE file.
+
+#ifndef V8_COMPILER_OPERATOR_H_
+#define V8_COMPILER_OPERATOR_H_
+
+#include "src/v8.h"
+
+#include "src/assembler.h"
+#include "src/ostreams.h"
+#include "src/unique.h"
+
+namespace v8 {
+namespace internal {
+namespace compiler {
+
+// An operator represents a description of the "computation" of a node in the
+// compiler IR. A computation takes values (i.e. data) as input and produces
+// zero or more values as output. The side-effects of a computation must be
+// captured by additional control and data dependencies which are part of the
+// IR graph.
+// Operators are immutable and describe the statically-known parts of a
+// computation. Thus they can be safely shared by many different nodes in the
+// IR graph, or even globally between graphs. Operators can have "static
+// parameters" which are compile-time constant parameters to the operator, such
+// as the name for a named field access, the ID of a runtime function, etc.
+// Static parameters are private to the operator and only semantically
+// meaningful to the operator itself.
+class Operator : public ZoneObject {
+ public:
+  Operator(uint8_t opcode, uint16_t properties)
+      : opcode_(opcode), properties_(properties) {}
+  virtual ~Operator() {}
+
+  // Properties inform the operator-independent optimizer about legal
+  // transformations for nodes that have this operator.
+  enum Property {
+    kNoProperties = 0,
+    kReducible = 1 << 0,    // Participates in strength reduction.
+    kCommutative = 1 << 1,  // OP(a, b) == OP(b, a) for all inputs.
+    kAssociative = 1 << 2,  // OP(a, OP(b,c)) == OP(OP(a,b), c) for all inputs.
+    kIdempotent = 1 << 3,   // OP(a); OP(a) == OP(a).
+    kNoRead = 1 << 4,       // Has no scheduling dependency on Effects
+    kNoWrite = 1 << 5,      // Does not modify any Effects and thereby
+                            // create new scheduling dependencies.
+    kNoThrow = 1 << 6,      // Can never generate an exception.
+    kFoldable = kNoRead | kNoWrite,
+    kEliminatable = kNoWrite | kNoThrow,
+    kPure = kNoRead | kNoWrite | kNoThrow | kIdempotent
+  };
+
+  // A small integer unique to all instances of a particular kind of operator,
+  // useful for quick matching for specific kinds of operators. For fast access
+  // the opcode is stored directly in the operator object.
+  inline uint8_t opcode() const { return opcode_; }
+
+  // Returns a constant string representing the mnemonic of the operator,
+  // without the static parameters. Useful for debugging.
+  virtual const char* mnemonic() = 0;
+
+  // Check if this operator equals another operator. Equivalent operators can
+  // be merged, and nodes with equivalent operators and equivalent inputs
+  // can be merged.
+  virtual bool Equals(Operator* other) = 0;
+
+  // Compute a hashcode to speed up equivalence-set checking.
+  // Equal operators should always have equal hashcodes, and unequal operators
+  // should have unequal hashcodes with high probability.
+  virtual int HashCode() = 0;
+
+  // Check whether this operator has the given property.
+  inline bool HasProperty(Property property) const {
+    return (properties_ & static_cast<int>(property)) == property;
+  }
+
+  // Number of data inputs to the operator, for verifying graph structure.
+  virtual int InputCount() = 0;
+
+  // Number of data outputs from the operator, for verifying graph structure.
+  virtual int OutputCount() = 0;
+
+  inline Property properties() { return static_cast<Property>(properties_); }
+
+  // TODO(titzer): API for input and output types, for typechecking graph.
+ private:
+  // Print the full operator into the given stream, including any
+  // static parameters. Useful for debugging and visualizing the IR.
+  virtual OStream& PrintTo(OStream& os) const = 0;  // NOLINT
+  friend OStream& operator<<(OStream& os, const Operator& op);
+
+  uint8_t opcode_;
+  uint16_t properties_;
+};
+
+OStream& operator<<(OStream& os, const Operator& op);
+
+// An implementation of Operator that has no static parameters. Such operators
+// have just a name, an opcode, and a fixed number of inputs and outputs.
+// They can be represented by singletons and shared globally.
+class SimpleOperator : public Operator {
+ public:
+  SimpleOperator(uint8_t opcode, uint16_t properties, int input_count,
+                 int output_count, const char* mnemonic)
+      : Operator(opcode, properties),
+        input_count_(input_count),
+        output_count_(output_count),
+        mnemonic_(mnemonic) {}
+
+  virtual const char* mnemonic() { return mnemonic_; }
+  virtual bool Equals(Operator* that) { return opcode() == that->opcode(); }
+  virtual int HashCode() { return opcode(); }
+  virtual int InputCount() { return input_count_; }
+  virtual int OutputCount() { return output_count_; }
+
+ private:
+  virtual OStream& PrintTo(OStream& os) const {  // NOLINT
+    return os << mnemonic_;
+  }
+
+  int input_count_;
+  int output_count_;
+  const char* mnemonic_;
+};
+
+// Template specialization implements a kind of type class for dealing with the
+// static parameters of Operator1 automatically.
+template <typename T>
+struct StaticParameterTraits {
+  static OStream& PrintTo(OStream& os, T val) {  // NOLINT
+    return os << "??";
+  }
+  static int HashCode(T a) { return 0; }
+  static bool Equals(T a, T b) {
+    return false;  // Not every T has a ==.
By default, be conservative. + } +}; + +template <> +struct StaticParameterTraits<ExternalReference> { + static OStream& PrintTo(OStream& os, ExternalReference val) { // NOLINT + os << val.address(); + const Runtime::Function* function = + Runtime::FunctionForEntry(val.address()); + if (function != NULL) { + os << " <" << function->name << ".entry>"; + } + return os; + } + static int HashCode(ExternalReference a) { + return reinterpret_cast<intptr_t>(a.address()) & 0xFFFFFFFF; + } + static bool Equals(ExternalReference a, ExternalReference b) { + return a == b; + } +}; + +// Specialization for static parameters of type {int}. +template <> +struct StaticParameterTraits<int> { + static OStream& PrintTo(OStream& os, int val) { // NOLINT + return os << val; + } + static int HashCode(int a) { return a; } + static bool Equals(int a, int b) { return a == b; } +}; + +// Specialization for static parameters of type {double}. +template <> +struct StaticParameterTraits<double> { + static OStream& PrintTo(OStream& os, double val) { // NOLINT + return os << val; + } + static int HashCode(double a) { + return static_cast<int>(BitCast<int64_t>(a)); + } + static bool Equals(double a, double b) { + return BitCast<int64_t>(a) == BitCast<int64_t>(b); + } +}; + +// Specialization for static parameters of type {PrintableUnique<Object>}. +template <> +struct StaticParameterTraits<PrintableUnique<Object> > { + static OStream& PrintTo(OStream& os, PrintableUnique<Object> val) { // NOLINT + return os << val.string(); + } + static int HashCode(PrintableUnique<Object> a) { + return static_cast<int>(a.Hashcode()); + } + static bool Equals(PrintableUnique<Object> a, PrintableUnique<Object> b) { + return a == b; + } +}; + +// Specialization for static parameters of type {PrintableUnique<Name>}. +template <> +struct StaticParameterTraits<PrintableUnique<Name> > { + static OStream& PrintTo(OStream& os, PrintableUnique<Name> val) { // NOLINT + return os << val.string(); + } + static int HashCode(PrintableUnique<Name> a) { + return static_cast<int>(a.Hashcode()); + } + static bool Equals(PrintableUnique<Name> a, PrintableUnique<Name> b) { + return a == b; + } +}; + +#if DEBUG +// Specialization for static parameters of type {Handle<Object>} to prevent any +// direct usage of Handles in constants. +template <> +struct StaticParameterTraits<Handle<Object> > { + static OStream& PrintTo(OStream& os, Handle<Object> val) { // NOLINT + UNREACHABLE(); // Should use PrintableUnique<Object> instead + return os; + } + static int HashCode(Handle<Object> a) { + UNREACHABLE(); // Should use PrintableUnique<Object> instead + return 0; + } + static bool Equals(Handle<Object> a, Handle<Object> b) { + UNREACHABLE(); // Should use PrintableUnique<Object> instead + return false; + } +}; +#endif + +// A templatized implementation of Operator that has one static parameter of +// type {T}. If a specialization of StaticParameterTraits<{T}> exists, then +// operators of this kind can automatically be hashed, compared, and printed. 
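+// [Editor's sketch, not part of the original patch: a client teaches
+// Operator1 about a new parameter type by specializing StaticParameterTraits;
+// the SmiValue wrapper here is hypothetical:
+//
+//   struct SmiValue { int value; };
+//
+//   template <>
+//   struct StaticParameterTraits<SmiValue> {
+//     static OStream& PrintTo(OStream& os, SmiValue v) {  // NOLINT
+//       return os << v.value;
+//     }
+//     static int HashCode(SmiValue v) { return v.value * 31; }
+//     static bool Equals(SmiValue a, SmiValue b) {
+//       return a.value == b.value;
+//     }
+//   };]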
+template <typename T> +class Operator1 : public Operator { + public: + Operator1(uint8_t opcode, uint16_t properties, int input_count, + int output_count, const char* mnemonic, T parameter) + : Operator(opcode, properties), + input_count_(input_count), + output_count_(output_count), + mnemonic_(mnemonic), + parameter_(parameter) {} + + const T& parameter() const { return parameter_; } + + virtual const char* mnemonic() { return mnemonic_; } + virtual bool Equals(Operator* other) { + if (opcode() != other->opcode()) return false; + Operator1<T>* that = static_cast<Operator1<T>*>(other); + T temp1 = this->parameter_; + T temp2 = that->parameter_; + return StaticParameterTraits<T>::Equals(temp1, temp2); + } + virtual int HashCode() { + return opcode() + 33 * StaticParameterTraits<T>::HashCode(this->parameter_); + } + virtual int InputCount() { return input_count_; } + virtual int OutputCount() { return output_count_; } + virtual OStream& PrintParameter(OStream& os) const { // NOLINT + return StaticParameterTraits<T>::PrintTo(os << "[", parameter_) << "]"; + } + + private: + virtual OStream& PrintTo(OStream& os) const { // NOLINT + return PrintParameter(os << mnemonic_); + } + + int input_count_; + int output_count_; + const char* mnemonic_; + T parameter_; +}; + +// Type definitions for operators with specific types of parameters. +typedef Operator1<PrintableUnique<Name> > NameOperator; +} +} +} // namespace v8::internal::compiler + +#endif // V8_COMPILER_OPERATOR_H_ diff --git a/deps/v8/src/compiler/phi-reducer.h b/deps/v8/src/compiler/phi-reducer.h new file mode 100644 index 00000000000..a9b14504311 --- /dev/null +++ b/deps/v8/src/compiler/phi-reducer.h @@ -0,0 +1,42 @@ +// Copyright 2014 the V8 project authors. All rights reserved. +// Use of this source code is governed by a BSD-style license that can be +// found in the LICENSE file. + +#ifndef V8_COMPILER_PHI_REDUCER_H_ +#define V8_COMPILER_PHI_REDUCER_H_ + +#include "src/compiler/graph-reducer.h" + +namespace v8 { +namespace internal { +namespace compiler { + +// Replaces redundant phis if all the inputs are the same or the phi itself. +class PhiReducer V8_FINAL : public Reducer { + public: + virtual Reduction Reduce(Node* node) V8_OVERRIDE { + if (node->opcode() != IrOpcode::kPhi && + node->opcode() != IrOpcode::kEffectPhi) + return NoChange(); + + int n = node->op()->InputCount(); + if (n == 1) return Replace(node->InputAt(0)); + + Node* replacement = NULL; + Node::Inputs inputs = node->inputs(); + for (InputIter it = inputs.begin(); n > 0; --n, ++it) { + Node* input = *it; + if (input != node && input != replacement) { + if (replacement != NULL) return NoChange(); + replacement = input; + } + } + DCHECK_NE(node, replacement); + return Replace(replacement); + } +}; +} +} +} // namespace v8::internal::compiler + +#endif // V8_COMPILER_PHI_REDUCER_H_ diff --git a/deps/v8/src/compiler/pipeline.cc b/deps/v8/src/compiler/pipeline.cc new file mode 100644 index 00000000000..b0b3eb76efe --- /dev/null +++ b/deps/v8/src/compiler/pipeline.cc @@ -0,0 +1,341 @@ +// Copyright 2014 the V8 project authors. All rights reserved. +// Use of this source code is governed by a BSD-style license that can be +// found in the LICENSE file. 
+ +#include "src/compiler/pipeline.h" + +#include "src/base/platform/elapsed-timer.h" +#include "src/compiler/ast-graph-builder.h" +#include "src/compiler/code-generator.h" +#include "src/compiler/graph-replay.h" +#include "src/compiler/graph-visualizer.h" +#include "src/compiler/instruction.h" +#include "src/compiler/instruction-selector.h" +#include "src/compiler/js-context-specialization.h" +#include "src/compiler/js-generic-lowering.h" +#include "src/compiler/js-typed-lowering.h" +#include "src/compiler/phi-reducer.h" +#include "src/compiler/register-allocator.h" +#include "src/compiler/schedule.h" +#include "src/compiler/scheduler.h" +#include "src/compiler/simplified-lowering.h" +#include "src/compiler/typer.h" +#include "src/compiler/verifier.h" +#include "src/hydrogen.h" +#include "src/ostreams.h" +#include "src/utils.h" + +namespace v8 { +namespace internal { +namespace compiler { + +class PhaseStats { + public: + enum PhaseKind { CREATE_GRAPH, OPTIMIZATION, CODEGEN }; + + PhaseStats(CompilationInfo* info, PhaseKind kind, const char* name) + : info_(info), + kind_(kind), + name_(name), + size_(info->zone()->allocation_size()) { + if (FLAG_turbo_stats) { + timer_.Start(); + } + } + + ~PhaseStats() { + if (FLAG_turbo_stats) { + base::TimeDelta delta = timer_.Elapsed(); + size_t bytes = info_->zone()->allocation_size() - size_; + HStatistics* stats = info_->isolate()->GetTStatistics(); + stats->SaveTiming(name_, delta, static_cast<int>(bytes)); + + switch (kind_) { + case CREATE_GRAPH: + stats->IncrementCreateGraph(delta); + break; + case OPTIMIZATION: + stats->IncrementOptimizeGraph(delta); + break; + case CODEGEN: + stats->IncrementGenerateCode(delta); + break; + } + } + } + + private: + CompilationInfo* info_; + PhaseKind kind_; + const char* name_; + size_t size_; + base::ElapsedTimer timer_; +}; + + +void Pipeline::VerifyAndPrintGraph(Graph* graph, const char* phase) { + if (FLAG_trace_turbo) { + char buffer[256]; + Vector<char> filename(buffer, sizeof(buffer)); + SmartArrayPointer<char> functionname = + info_->shared_info()->DebugName()->ToCString(); + if (strlen(functionname.get()) > 0) { + SNPrintF(filename, "turbo-%s-%s.dot", functionname.get(), phase); + } else { + SNPrintF(filename, "turbo-%p-%s.dot", static_cast<void*>(info_), phase); + } + std::replace(filename.start(), filename.start() + filename.length(), ' ', + '_'); + FILE* file = base::OS::FOpen(filename.start(), "w+"); + OFStream of(file); + of << AsDOT(*graph); + fclose(file); + + OFStream os(stdout); + os << "-- " << phase << " graph printed to file " << filename.start() + << "\n"; + } + if (VerifyGraphs()) Verifier::Run(graph); +} + + +class AstGraphBuilderWithPositions : public AstGraphBuilder { + public: + explicit AstGraphBuilderWithPositions(CompilationInfo* info, JSGraph* jsgraph, + SourcePositionTable* source_positions) + : AstGraphBuilder(info, jsgraph), source_positions_(source_positions) {} + + bool CreateGraph() { + SourcePositionTable::Scope pos(source_positions_, + SourcePosition::Unknown()); + return AstGraphBuilder::CreateGraph(); + } + +#define DEF_VISIT(type) \ + virtual void Visit##type(type* node) V8_OVERRIDE { \ + SourcePositionTable::Scope pos(source_positions_, \ + SourcePosition(node->position())); \ + AstGraphBuilder::Visit##type(node); \ + } + AST_NODE_LIST(DEF_VISIT) +#undef DEF_VISIT + + private: + SourcePositionTable* source_positions_; +}; + + +static void TraceSchedule(Schedule* schedule) { + if (!FLAG_trace_turbo) return; + OFStream os(stdout); + os << "-- Schedule 
--------------------------------------\n" << *schedule; +} + + +Handle<Code> Pipeline::GenerateCode() { + if (FLAG_turbo_stats) isolate()->GetTStatistics()->Initialize(info_); + + if (FLAG_trace_turbo) { + OFStream os(stdout); + os << "---------------------------------------------------\n" + << "Begin compiling method " + << info()->function()->debug_name()->ToCString().get() + << " using Turbofan" << endl; + } + + // Build the graph. + Graph graph(zone()); + SourcePositionTable source_positions(&graph); + source_positions.AddDecorator(); + // TODO(turbofan): there is no need to type anything during initial graph + // construction. This is currently only needed for the node cache, which the + // typer could sweep over later. + Typer typer(zone()); + CommonOperatorBuilder common(zone()); + JSGraph jsgraph(&graph, &common, &typer); + Node* context_node; + { + PhaseStats graph_builder_stats(info(), PhaseStats::CREATE_GRAPH, + "graph builder"); + AstGraphBuilderWithPositions graph_builder(info(), &jsgraph, + &source_positions); + graph_builder.CreateGraph(); + context_node = graph_builder.GetFunctionContext(); + } + { + PhaseStats phi_reducer_stats(info(), PhaseStats::CREATE_GRAPH, + "phi reduction"); + PhiReducer phi_reducer; + GraphReducer graph_reducer(&graph); + graph_reducer.AddReducer(&phi_reducer); + graph_reducer.ReduceGraph(); + // TODO(mstarzinger): Running reducer once ought to be enough for everyone. + graph_reducer.ReduceGraph(); + graph_reducer.ReduceGraph(); + } + + VerifyAndPrintGraph(&graph, "Initial untyped"); + + if (FLAG_context_specialization) { + SourcePositionTable::Scope pos_(&source_positions, + SourcePosition::Unknown()); + // Specialize the code to the context as aggressively as possible. + JSContextSpecializer spec(info(), &jsgraph, context_node); + spec.SpecializeToContext(); + VerifyAndPrintGraph(&graph, "Context specialized"); + } + + // Print a replay of the initial graph. + if (FLAG_print_turbo_replay) { + GraphReplayPrinter::PrintReplay(&graph); + } + + if (FLAG_turbo_types) { + { + // Type the graph. + PhaseStats typer_stats(info(), PhaseStats::CREATE_GRAPH, "typer"); + typer.Run(&graph, info()->context()); + } + // All new nodes must be typed. + typer.DecorateGraph(&graph); + { + // Lower JSOperators where we can determine types. + PhaseStats lowering_stats(info(), PhaseStats::CREATE_GRAPH, + "typed lowering"); + JSTypedLowering lowering(&jsgraph, &source_positions); + lowering.LowerAllNodes(); + + VerifyAndPrintGraph(&graph, "Lowered typed"); + } + } + + Handle<Code> code = Handle<Code>::null(); + if (SupportedTarget()) { + { + // Lower any remaining generic JSOperators. + PhaseStats lowering_stats(info(), PhaseStats::CREATE_GRAPH, + "generic lowering"); + MachineOperatorBuilder machine(zone()); + JSGenericLowering lowering(info(), &jsgraph, &machine, &source_positions); + lowering.LowerAllNodes(); + + VerifyAndPrintGraph(&graph, "Lowered generic"); + } + + // Compute a schedule. + Schedule* schedule = ComputeSchedule(&graph); + TraceSchedule(schedule); + + { + // Generate optimized code. + PhaseStats codegen_stats(info(), PhaseStats::CODEGEN, "codegen"); + Linkage linkage(info()); + code = GenerateCode(&linkage, &graph, schedule, &source_positions); + info()->SetCode(code); + } + + // Print optimized code. 
+ v8::internal::CodeGenerator::PrintCode(code, info()); + } + + if (FLAG_trace_turbo) { + OFStream os(stdout); + os << "--------------------------------------------------\n" + << "Finished compiling method " + << info()->function()->debug_name()->ToCString().get() + << " using Turbofan" << endl; + } + + return code; +} + + +Schedule* Pipeline::ComputeSchedule(Graph* graph) { + PhaseStats schedule_stats(info(), PhaseStats::CODEGEN, "scheduling"); + return Scheduler::ComputeSchedule(graph); +} + + +Handle<Code> Pipeline::GenerateCodeForMachineGraph(Linkage* linkage, + Graph* graph, + Schedule* schedule) { + CHECK(SupportedBackend()); + if (schedule == NULL) { + VerifyAndPrintGraph(graph, "Machine"); + schedule = ComputeSchedule(graph); + } + TraceSchedule(schedule); + + SourcePositionTable source_positions(graph); + Handle<Code> code = GenerateCode(linkage, graph, schedule, &source_positions); +#if ENABLE_DISASSEMBLER + if (!code.is_null() && FLAG_print_opt_code) { + CodeTracer::Scope tracing_scope(isolate()->GetCodeTracer()); + OFStream os(tracing_scope.file()); + code->Disassemble("test code", os); + } +#endif + return code; +} + + +Handle<Code> Pipeline::GenerateCode(Linkage* linkage, Graph* graph, + Schedule* schedule, + SourcePositionTable* source_positions) { + DCHECK_NOT_NULL(graph); + DCHECK_NOT_NULL(linkage); + DCHECK_NOT_NULL(schedule); + CHECK(SupportedBackend()); + + InstructionSequence sequence(linkage, graph, schedule); + + // Select and schedule instructions covering the scheduled graph. + { + InstructionSelector selector(&sequence, source_positions); + selector.SelectInstructions(); + } + + if (FLAG_trace_turbo) { + OFStream os(stdout); + os << "----- Instruction sequence before register allocation -----\n" + << sequence; + } + + // Allocate registers. + { + int node_count = graph->NodeCount(); + if (node_count > UnallocatedOperand::kMaxVirtualRegisters) { + linkage->info()->set_bailout_reason(kNotEnoughVirtualRegistersForValues); + return Handle<Code>::null(); + } + RegisterAllocator allocator(&sequence); + if (!allocator.Allocate()) { + linkage->info()->set_bailout_reason(kNotEnoughVirtualRegistersRegalloc); + return Handle<Code>::null(); + } + } + + if (FLAG_trace_turbo) { + OFStream os(stdout); + os << "----- Instruction sequence after register allocation -----\n" + << sequence; + } + + // Generate native sequence. + CodeGenerator generator(&sequence); + return generator.GenerateCode(); +} + + +void Pipeline::SetUp() { + InstructionOperand::SetUpCaches(); +} + + +void Pipeline::TearDown() { + InstructionOperand::TearDownCaches(); +} + +} // namespace compiler +} // namespace internal +} // namespace v8 diff --git a/deps/v8/src/compiler/pipeline.h b/deps/v8/src/compiler/pipeline.h new file mode 100644 index 00000000000..4c1c0bcea9a --- /dev/null +++ b/deps/v8/src/compiler/pipeline.h @@ -0,0 +1,68 @@ +// Copyright 2014 the V8 project authors. All rights reserved. +// Use of this source code is governed by a BSD-style license that can be +// found in the LICENSE file. + +#ifndef V8_COMPILER_PIPELINE_H_ +#define V8_COMPILER_PIPELINE_H_ + +#include "src/v8.h" + +#include "src/compiler.h" + +// Note: TODO(turbofan) implies a performance improvement opportunity, +// and TODO(name) implies an incomplete implementation + +namespace v8 { +namespace internal { +namespace compiler { + +// Clients of this interface shouldn't depend on lots of compiler internals. 
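+// [Editor's note: a minimal sketch of how a caller is expected to drive the
+// pipeline declared below, assuming a CompilationInfo prepared by the usual
+// compiler driver; the helper name is hypothetical:
+//
+//   Handle<Code> CompileWithTurbofan(CompilationInfo* info) {
+//     Pipeline pipeline(info);
+//     Handle<Code> code = pipeline.GenerateCode();
+//     // A null handle signals a bailout (e.g. the register allocator ran
+//     // out of virtual registers); callers fall back to other compilers.
+//     return code;
+//   }]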
+class CallDescriptor; +class Graph; +class Schedule; +class SourcePositionTable; +class Linkage; + +class Pipeline { + public: + explicit Pipeline(CompilationInfo* info) : info_(info) {} + + // Run the entire pipeline and generate a handle to a code object. + Handle<Code> GenerateCode(); + + // Run the pipeline on a machine graph and generate code. If {schedule} + // is {NULL}, then compute a new schedule for code generation. + Handle<Code> GenerateCodeForMachineGraph(Linkage* linkage, Graph* graph, + Schedule* schedule = NULL); + + CompilationInfo* info() const { return info_; } + Zone* zone() { return info_->zone(); } + Isolate* isolate() { return info_->isolate(); } + + static inline bool SupportedBackend() { return V8_TURBOFAN_BACKEND != 0; } + static inline bool SupportedTarget() { return V8_TURBOFAN_TARGET != 0; } + + static inline bool VerifyGraphs() { +#ifdef DEBUG + return true; +#else + return FLAG_turbo_verify; +#endif + } + + static void SetUp(); + static void TearDown(); + + private: + CompilationInfo* info_; + + Schedule* ComputeSchedule(Graph* graph); + void VerifyAndPrintGraph(Graph* graph, const char* phase); + Handle<Code> GenerateCode(Linkage* linkage, Graph* graph, Schedule* schedule, + SourcePositionTable* source_positions); +}; +} +} +} // namespace v8::internal::compiler + +#endif // V8_COMPILER_PIPELINE_H_ diff --git a/deps/v8/src/compiler/raw-machine-assembler.cc b/deps/v8/src/compiler/raw-machine-assembler.cc new file mode 100644 index 00000000000..afbd268dcd9 --- /dev/null +++ b/deps/v8/src/compiler/raw-machine-assembler.cc @@ -0,0 +1,158 @@ +// Copyright 2014 the V8 project authors. All rights reserved. +// Use of this source code is governed by a BSD-style license that can be +// found in the LICENSE file. + +#include "src/compiler/pipeline.h" +#include "src/compiler/raw-machine-assembler.h" +#include "src/compiler/scheduler.h" + +namespace v8 { +namespace internal { +namespace compiler { + +RawMachineAssembler::RawMachineAssembler( + Graph* graph, MachineCallDescriptorBuilder* call_descriptor_builder, + MachineType word) + : GraphBuilder(graph), + schedule_(new (zone()) Schedule(zone())), + machine_(zone(), word), + common_(zone()), + call_descriptor_builder_(call_descriptor_builder), + parameters_(NULL), + exit_label_(schedule()->exit()), + current_block_(schedule()->entry()) { + Node* s = graph->NewNode(common_.Start(parameter_count())); + graph->SetStart(s); + if (parameter_count() == 0) return; + parameters_ = zone()->NewArray<Node*>(parameter_count()); + for (int i = 0; i < parameter_count(); ++i) { + parameters_[i] = NewNode(common()->Parameter(i), graph->start()); + } +} + + +Schedule* RawMachineAssembler::Export() { + // Compute the correct codegen order. + DCHECK(schedule_->rpo_order()->empty()); + Scheduler::ComputeSpecialRPO(schedule_); + // Invalidate MachineAssembler. 
+ Schedule* schedule = schedule_; + schedule_ = NULL; + return schedule; +} + + +Node* RawMachineAssembler::Parameter(int index) { + DCHECK(0 <= index && index < parameter_count()); + return parameters_[index]; +} + + +RawMachineAssembler::Label* RawMachineAssembler::Exit() { + exit_label_.used_ = true; + return &exit_label_; +} + + +void RawMachineAssembler::Goto(Label* label) { + DCHECK(current_block_ != schedule()->exit()); + schedule()->AddGoto(CurrentBlock(), Use(label)); + current_block_ = NULL; +} + + +void RawMachineAssembler::Branch(Node* condition, Label* true_val, + Label* false_val) { + DCHECK(current_block_ != schedule()->exit()); + Node* branch = NewNode(common()->Branch(), condition); + schedule()->AddBranch(CurrentBlock(), branch, Use(true_val), Use(false_val)); + current_block_ = NULL; +} + + +void RawMachineAssembler::Return(Node* value) { + schedule()->AddReturn(CurrentBlock(), value); + current_block_ = NULL; +} + + +void RawMachineAssembler::Deoptimize(Node* state) { + Node* deopt = graph()->NewNode(common()->Deoptimize(), state); + schedule()->AddDeoptimize(CurrentBlock(), deopt); + current_block_ = NULL; +} + + +Node* RawMachineAssembler::CallJS0(Node* function, Node* receiver, + Label* continuation, Label* deoptimization) { + CallDescriptor* descriptor = Linkage::GetJSCallDescriptor(1, zone()); + Node* call = graph()->NewNode(common()->Call(descriptor), function, receiver); + schedule()->AddCall(CurrentBlock(), call, Use(continuation), + Use(deoptimization)); + current_block_ = NULL; + return call; +} + + +Node* RawMachineAssembler::CallRuntime1(Runtime::FunctionId function, + Node* arg0, Label* continuation, + Label* deoptimization) { + CallDescriptor* descriptor = + Linkage::GetRuntimeCallDescriptor(function, 1, Operator::kNoProperties, + CallDescriptor::kCanDeoptimize, zone()); + + Node* centry = HeapConstant(CEntryStub(isolate(), 1).GetCode()); + Node* ref = NewNode( + common()->ExternalConstant(ExternalReference(function, isolate()))); + Node* arity = Int32Constant(1); + Node* context = Parameter(1); + + Node* call = graph()->NewNode(common()->Call(descriptor), centry, arg0, ref, + arity, context); + schedule()->AddCall(CurrentBlock(), call, Use(continuation), + Use(deoptimization)); + current_block_ = NULL; + return call; +} + + +void RawMachineAssembler::Bind(Label* label) { + DCHECK(current_block_ == NULL); + DCHECK(!label->bound_); + label->bound_ = true; + current_block_ = EnsureBlock(label); +} + + +BasicBlock* RawMachineAssembler::Use(Label* label) { + label->used_ = true; + return EnsureBlock(label); +} + + +BasicBlock* RawMachineAssembler::EnsureBlock(Label* label) { + if (label->block_ == NULL) label->block_ = schedule()->NewBasicBlock(); + return label->block_; +} + + +BasicBlock* RawMachineAssembler::CurrentBlock() { + DCHECK(current_block_); + return current_block_; +} + + +Node* RawMachineAssembler::MakeNode(Operator* op, int input_count, + Node** inputs) { + DCHECK(ScheduleValid()); + DCHECK(current_block_ != NULL); + Node* node = graph()->NewNode(op, input_count, inputs); + BasicBlock* block = op->opcode() == IrOpcode::kParameter ? 
schedule()->start()
+                                                             : CurrentBlock();
+  schedule()->AddNode(block, node);
+  return node;
+}
+
+}  // namespace compiler
+}  // namespace internal
+}  // namespace v8
diff --git a/deps/v8/src/compiler/raw-machine-assembler.h b/deps/v8/src/compiler/raw-machine-assembler.h
new file mode 100644
index 00000000000..6839ade4fe9
--- /dev/null
+++ b/deps/v8/src/compiler/raw-machine-assembler.h
@@ -0,0 +1,129 @@
+// Copyright 2014 the V8 project authors. All rights reserved.
+// Use of this source code is governed by a BSD-style license that can be
+// found in the LICENSE file.
+
+#ifndef V8_COMPILER_RAW_MACHINE_ASSEMBLER_H_
+#define V8_COMPILER_RAW_MACHINE_ASSEMBLER_H_
+
+#include "src/v8.h"
+
+#include "src/compiler/common-operator.h"
+#include "src/compiler/graph-builder.h"
+#include "src/compiler/machine-node-factory.h"
+#include "src/compiler/machine-operator.h"
+#include "src/compiler/node.h"
+#include "src/compiler/operator.h"
+
+
+namespace v8 {
+namespace internal {
+namespace compiler {
+
+class BasicBlock;
+class Schedule;
+
+
+class RawMachineAssembler : public GraphBuilder,
+                            public MachineNodeFactory<RawMachineAssembler> {
+ public:
+  class Label {
+   public:
+    Label() : block_(NULL), used_(false), bound_(false) {}
+    ~Label() { DCHECK(bound_ || !used_); }
+
+    BasicBlock* block() { return block_; }
+
+   private:
+    // Private constructor for exit label.
+    explicit Label(BasicBlock* block)
+        : block_(block), used_(false), bound_(false) {}
+
+    BasicBlock* block_;
+    bool used_;
+    bool bound_;
+    friend class RawMachineAssembler;
+    DISALLOW_COPY_AND_ASSIGN(Label);
+  };
+
+  RawMachineAssembler(Graph* graph,
+                      MachineCallDescriptorBuilder* call_descriptor_builder,
+                      MachineType word = MachineOperatorBuilder::pointer_rep());
+  virtual ~RawMachineAssembler() {}
+
+  Isolate* isolate() const { return zone()->isolate(); }
+  Zone* zone() const { return graph()->zone(); }
+  MachineOperatorBuilder* machine() { return &machine_; }
+  CommonOperatorBuilder* common() { return &common_; }
+  CallDescriptor* call_descriptor() const {
+    return call_descriptor_builder_->BuildCallDescriptor(zone());
+  }
+  int parameter_count() const {
+    return call_descriptor_builder_->parameter_count();
+  }
+  const MachineType* parameter_types() const {
+    return call_descriptor_builder_->parameter_types();
+  }
+
+  // Parameters.
+  Node* Parameter(int index);
+
+  // Control flow.
+  Label* Exit();
+  void Goto(Label* label);
+  void Branch(Node* condition, Label* true_val, Label* false_val);
+  // Call to a JS function with zero parameters.
+  Node* CallJS0(Node* function, Node* receiver, Label* continuation,
+                Label* deoptimization);
+  // Call to a runtime function with one parameter.
+  Node* CallRuntime1(Runtime::FunctionId function, Node* arg0,
+                     Label* continuation, Label* deoptimization);
+  void Return(Node* value);
+  void Bind(Label* label);
+  void Deoptimize(Node* state);
+
+  // Variables.
+  Node* Phi(Node* n1, Node* n2) { return NewNode(common()->Phi(2), n1, n2); }
+  Node* Phi(Node* n1, Node* n2, Node* n3) {
+    return NewNode(common()->Phi(3), n1, n2, n3);
+  }
+  Node* Phi(Node* n1, Node* n2, Node* n3, Node* n4) {
+    return NewNode(common()->Phi(4), n1, n2, n3, n4);
+  }
+
+  // MachineAssembler is invalid after export.
+ Schedule* Export(); + + protected: + virtual Node* MakeNode(Operator* op, int input_count, Node** inputs); + + Schedule* schedule() { + DCHECK(ScheduleValid()); + return schedule_; + } + + private: + bool ScheduleValid() { return schedule_ != NULL; } + + BasicBlock* Use(Label* label); + BasicBlock* EnsureBlock(Label* label); + BasicBlock* CurrentBlock(); + + typedef std::vector<MachineType, zone_allocator<MachineType> > + RepresentationVector; + + Schedule* schedule_; + MachineOperatorBuilder machine_; + CommonOperatorBuilder common_; + MachineCallDescriptorBuilder* call_descriptor_builder_; + Node** parameters_; + Label exit_label_; + BasicBlock* current_block_; + + DISALLOW_COPY_AND_ASSIGN(RawMachineAssembler); +}; + +} // namespace compiler +} // namespace internal +} // namespace v8 + +#endif // V8_COMPILER_RAW_MACHINE_ASSEMBLER_H_ diff --git a/deps/v8/src/compiler/register-allocator.cc b/deps/v8/src/compiler/register-allocator.cc new file mode 100644 index 00000000000..972a9045094 --- /dev/null +++ b/deps/v8/src/compiler/register-allocator.cc @@ -0,0 +1,2232 @@ +// Copyright 2014 the V8 project authors. All rights reserved. +// Use of this source code is governed by a BSD-style license that can be +// found in the LICENSE file. + +#include "src/compiler/register-allocator.h" + +#include "src/compiler/linkage.h" +#include "src/hydrogen.h" +#include "src/string-stream.h" + +namespace v8 { +namespace internal { +namespace compiler { + +static inline LifetimePosition Min(LifetimePosition a, LifetimePosition b) { + return a.Value() < b.Value() ? a : b; +} + + +static inline LifetimePosition Max(LifetimePosition a, LifetimePosition b) { + return a.Value() > b.Value() ? a : b; +} + + +UsePosition::UsePosition(LifetimePosition pos, InstructionOperand* operand, + InstructionOperand* hint) + : operand_(operand), + hint_(hint), + pos_(pos), + next_(NULL), + requires_reg_(false), + register_beneficial_(true) { + if (operand_ != NULL && operand_->IsUnallocated()) { + const UnallocatedOperand* unalloc = UnallocatedOperand::cast(operand_); + requires_reg_ = unalloc->HasRegisterPolicy(); + register_beneficial_ = !unalloc->HasAnyPolicy(); + } + DCHECK(pos_.IsValid()); +} + + +bool UsePosition::HasHint() const { + return hint_ != NULL && !hint_->IsUnallocated(); +} + + +bool UsePosition::RequiresRegister() const { return requires_reg_; } + + +bool UsePosition::RegisterIsBeneficial() const { return register_beneficial_; } + + +void UseInterval::SplitAt(LifetimePosition pos, Zone* zone) { + DCHECK(Contains(pos) && pos.Value() != start().Value()); + UseInterval* after = new (zone) UseInterval(pos, end_); + after->next_ = next_; + next_ = after; + end_ = pos; +} + + +#ifdef DEBUG + + +void LiveRange::Verify() const { + UsePosition* cur = first_pos_; + while (cur != NULL) { + DCHECK(Start().Value() <= cur->pos().Value() && + cur->pos().Value() <= End().Value()); + cur = cur->next(); + } +} + + +bool LiveRange::HasOverlap(UseInterval* target) const { + UseInterval* current_interval = first_interval_; + while (current_interval != NULL) { + // Intervals overlap if the start of one is contained in the other. 
+ if (current_interval->Contains(target->start()) || + target->Contains(current_interval->start())) { + return true; + } + current_interval = current_interval->next(); + } + return false; +} + + +#endif + + +LiveRange::LiveRange(int id, Zone* zone) + : id_(id), + spilled_(false), + is_phi_(false), + is_non_loop_phi_(false), + kind_(UNALLOCATED_REGISTERS), + assigned_register_(kInvalidAssignment), + last_interval_(NULL), + first_interval_(NULL), + first_pos_(NULL), + parent_(NULL), + next_(NULL), + current_interval_(NULL), + last_processed_use_(NULL), + current_hint_operand_(NULL), + spill_operand_(new (zone) InstructionOperand()), + spill_start_index_(kMaxInt) {} + + +void LiveRange::set_assigned_register(int reg, Zone* zone) { + DCHECK(!HasRegisterAssigned() && !IsSpilled()); + assigned_register_ = reg; + ConvertOperands(zone); +} + + +void LiveRange::MakeSpilled(Zone* zone) { + DCHECK(!IsSpilled()); + DCHECK(TopLevel()->HasAllocatedSpillOperand()); + spilled_ = true; + assigned_register_ = kInvalidAssignment; + ConvertOperands(zone); +} + + +bool LiveRange::HasAllocatedSpillOperand() const { + DCHECK(spill_operand_ != NULL); + return !spill_operand_->IsIgnored(); +} + + +void LiveRange::SetSpillOperand(InstructionOperand* operand) { + DCHECK(!operand->IsUnallocated()); + DCHECK(spill_operand_ != NULL); + DCHECK(spill_operand_->IsIgnored()); + spill_operand_->ConvertTo(operand->kind(), operand->index()); +} + + +UsePosition* LiveRange::NextUsePosition(LifetimePosition start) { + UsePosition* use_pos = last_processed_use_; + if (use_pos == NULL) use_pos = first_pos(); + while (use_pos != NULL && use_pos->pos().Value() < start.Value()) { + use_pos = use_pos->next(); + } + last_processed_use_ = use_pos; + return use_pos; +} + + +UsePosition* LiveRange::NextUsePositionRegisterIsBeneficial( + LifetimePosition start) { + UsePosition* pos = NextUsePosition(start); + while (pos != NULL && !pos->RegisterIsBeneficial()) { + pos = pos->next(); + } + return pos; +} + + +UsePosition* LiveRange::PreviousUsePositionRegisterIsBeneficial( + LifetimePosition start) { + UsePosition* pos = first_pos(); + UsePosition* prev = NULL; + while (pos != NULL && pos->pos().Value() < start.Value()) { + if (pos->RegisterIsBeneficial()) prev = pos; + pos = pos->next(); + } + return prev; +} + + +UsePosition* LiveRange::NextRegisterPosition(LifetimePosition start) { + UsePosition* pos = NextUsePosition(start); + while (pos != NULL && !pos->RequiresRegister()) { + pos = pos->next(); + } + return pos; +} + + +bool LiveRange::CanBeSpilled(LifetimePosition pos) { + // We cannot spill a live range that has a use requiring a register + // at the current or the immediate next position. 
+  UsePosition* use_pos = NextRegisterPosition(pos);
+  if (use_pos == NULL) return true;
+  return use_pos->pos().Value() >
+         pos.NextInstruction().InstructionEnd().Value();
+}
+
+
+InstructionOperand* LiveRange::CreateAssignedOperand(Zone* zone) {
+  InstructionOperand* op = NULL;
+  if (HasRegisterAssigned()) {
+    DCHECK(!IsSpilled());
+    switch (Kind()) {
+      case GENERAL_REGISTERS:
+        op = RegisterOperand::Create(assigned_register(), zone);
+        break;
+      case DOUBLE_REGISTERS:
+        op = DoubleRegisterOperand::Create(assigned_register(), zone);
+        break;
+      default:
+        UNREACHABLE();
+    }
+  } else if (IsSpilled()) {
+    DCHECK(!HasRegisterAssigned());
+    op = TopLevel()->GetSpillOperand();
+    DCHECK(!op->IsUnallocated());
+  } else {
+    UnallocatedOperand* unalloc =
+        new (zone) UnallocatedOperand(UnallocatedOperand::NONE);
+    unalloc->set_virtual_register(id_);
+    op = unalloc;
+  }
+  return op;
+}
+
+
+UseInterval* LiveRange::FirstSearchIntervalForPosition(
+    LifetimePosition position) const {
+  if (current_interval_ == NULL) return first_interval_;
+  if (current_interval_->start().Value() > position.Value()) {
+    current_interval_ = NULL;
+    return first_interval_;
+  }
+  return current_interval_;
+}
+
+
+void LiveRange::AdvanceLastProcessedMarker(
+    UseInterval* to_start_of, LifetimePosition but_not_past) const {
+  if (to_start_of == NULL) return;
+  if (to_start_of->start().Value() > but_not_past.Value()) return;
+  LifetimePosition start = current_interval_ == NULL
+                               ? LifetimePosition::Invalid()
+                               : current_interval_->start();
+  if (to_start_of->start().Value() > start.Value()) {
+    current_interval_ = to_start_of;
+  }
+}
+
+
+void LiveRange::SplitAt(LifetimePosition position, LiveRange* result,
+                        Zone* zone) {
+  DCHECK(Start().Value() < position.Value());
+  DCHECK(result->IsEmpty());
+  // Find the last interval that ends before the position. If the
+  // position is contained in one of the intervals in the chain, we
+  // split that interval and use the first part.
+  UseInterval* current = FirstSearchIntervalForPosition(position);
+
+  // If the split position coincides with the beginning of a use interval
+  // we need to split use positions in a special way.
+  bool split_at_start = false;
+
+  if (current->start().Value() == position.Value()) {
+    // When splitting at start we need to locate the previous use interval.
+    current = first_interval_;
+  }
+
+  while (current != NULL) {
+    if (current->Contains(position)) {
+      current->SplitAt(position, zone);
+      break;
+    }
+    UseInterval* next = current->next();
+    if (next->start().Value() >= position.Value()) {
+      split_at_start = (next->start().Value() == position.Value());
+      break;
+    }
+    current = next;
+  }
+
+  // Partition original use intervals to the two live ranges.
+  UseInterval* before = current;
+  UseInterval* after = before->next();
+  result->last_interval_ =
+      (last_interval_ == before)
+          ? after            // Only interval in the range after split.
+          : last_interval_;  // Last interval of the original range.
+  result->first_interval_ = after;
+  last_interval_ = before;
+
+  // Find the last use position before the split and the first use
+  // position after it.
+  UsePosition* use_after = first_pos_;
+  UsePosition* use_before = NULL;
+  if (split_at_start) {
+    // The split position coincides with the beginning of a use interval (the
+    // end of a lifetime hole). A use at this position should be attributed to
+    // the split child because the split child owns the use interval covering
+    // it.
+    while (use_after != NULL && use_after->pos().Value() < position.Value()) {
+      use_before = use_after;
+      use_after = use_after->next();
+    }
+  } else {
+    while (use_after != NULL && use_after->pos().Value() <= position.Value()) {
+      use_before = use_after;
+      use_after = use_after->next();
+    }
+  }
+
+  // Partition original use positions to the two live ranges.
+  if (use_before != NULL) {
+    use_before->next_ = NULL;
+  } else {
+    first_pos_ = NULL;
+  }
+  result->first_pos_ = use_after;
+
+  // Discard cached iteration state. It might be pointing
+  // to the use that no longer belongs to this live range.
+  last_processed_use_ = NULL;
+  current_interval_ = NULL;
+
+  // Link the new live range in the chain before any of the other
+  // ranges linked from the range before the split.
+  result->parent_ = (parent_ == NULL) ? this : parent_;
+  result->kind_ = result->parent_->kind_;
+  result->next_ = next_;
+  next_ = result;
+
+#ifdef DEBUG
+  Verify();
+  result->Verify();
+#endif
+}
+
+
+// This implements an ordering on live ranges so that they are ordered by their
+// start positions. This is needed for the correctness of the register
+// allocation algorithm. If two live ranges start at the same offset then there
+// is a tie breaker based on where the value is first used. This part of the
+// ordering is merely a heuristic.
+bool LiveRange::ShouldBeAllocatedBefore(const LiveRange* other) const {
+  LifetimePosition start = Start();
+  LifetimePosition other_start = other->Start();
+  if (start.Value() == other_start.Value()) {
+    UsePosition* pos = first_pos();
+    if (pos == NULL) return false;
+    UsePosition* other_pos = other->first_pos();
+    if (other_pos == NULL) return true;
+    return pos->pos().Value() < other_pos->pos().Value();
+  }
+  return start.Value() < other_start.Value();
+}
+
+
+void LiveRange::ShortenTo(LifetimePosition start) {
+  RegisterAllocator::TraceAlloc("Shorten live range %d to [%d\n", id_,
+                                start.Value());
+  DCHECK(first_interval_ != NULL);
+  DCHECK(first_interval_->start().Value() <= start.Value());
+  DCHECK(start.Value() < first_interval_->end().Value());
+  first_interval_->set_start(start);
+}
+
+
+void LiveRange::EnsureInterval(LifetimePosition start, LifetimePosition end,
+                               Zone* zone) {
+  RegisterAllocator::TraceAlloc("Ensure live range %d in interval [%d %d[\n",
+                                id_, start.Value(), end.Value());
+  LifetimePosition new_end = end;
+  while (first_interval_ != NULL &&
+         first_interval_->start().Value() <= end.Value()) {
+    if (first_interval_->end().Value() > end.Value()) {
+      new_end = first_interval_->end();
+    }
+    first_interval_ = first_interval_->next();
+  }
+
+  UseInterval* new_interval = new (zone) UseInterval(start, new_end);
+  new_interval->next_ = first_interval_;
+  first_interval_ = new_interval;
+  if (new_interval->next() == NULL) {
+    last_interval_ = new_interval;
+  }
+}
+
+
+void LiveRange::AddUseInterval(LifetimePosition start, LifetimePosition end,
+                               Zone* zone) {
+  RegisterAllocator::TraceAlloc("Add to live range %d interval [%d %d[\n", id_,
+                                start.Value(), end.Value());
+  if (first_interval_ == NULL) {
+    UseInterval* interval = new (zone) UseInterval(start, end);
+    first_interval_ = interval;
+    last_interval_ = interval;
+  } else {
+    if (end.Value() == first_interval_->start().Value()) {
+      first_interval_->set_start(start);
+    } else if (end.Value() < first_interval_->start().Value()) {
+      UseInterval* interval = new (zone) UseInterval(start, end);
+      interval->set_next(first_interval_);
+      first_interval_ = interval;
+    } else {
+      // The order of instruction processing
(see ProcessInstructions) guarantees + // that each new use interval either precedes or intersects with + // last added interval. + DCHECK(start.Value() < first_interval_->end().Value()); + first_interval_->start_ = Min(start, first_interval_->start_); + first_interval_->end_ = Max(end, first_interval_->end_); + } + } +} + + +void LiveRange::AddUsePosition(LifetimePosition pos, + InstructionOperand* operand, + InstructionOperand* hint, Zone* zone) { + RegisterAllocator::TraceAlloc("Add to live range %d use position %d\n", id_, + pos.Value()); + UsePosition* use_pos = new (zone) UsePosition(pos, operand, hint); + UsePosition* prev_hint = NULL; + UsePosition* prev = NULL; + UsePosition* current = first_pos_; + while (current != NULL && current->pos().Value() < pos.Value()) { + prev_hint = current->HasHint() ? current : prev_hint; + prev = current; + current = current->next(); + } + + if (prev == NULL) { + use_pos->set_next(first_pos_); + first_pos_ = use_pos; + } else { + use_pos->next_ = prev->next_; + prev->next_ = use_pos; + } + + if (prev_hint == NULL && use_pos->HasHint()) { + current_hint_operand_ = hint; + } +} + + +void LiveRange::ConvertOperands(Zone* zone) { + InstructionOperand* op = CreateAssignedOperand(zone); + UsePosition* use_pos = first_pos(); + while (use_pos != NULL) { + DCHECK(Start().Value() <= use_pos->pos().Value() && + use_pos->pos().Value() <= End().Value()); + + if (use_pos->HasOperand()) { + DCHECK(op->IsRegister() || op->IsDoubleRegister() || + !use_pos->RequiresRegister()); + use_pos->operand()->ConvertTo(op->kind(), op->index()); + } + use_pos = use_pos->next(); + } +} + + +bool LiveRange::CanCover(LifetimePosition position) const { + if (IsEmpty()) return false; + return Start().Value() <= position.Value() && + position.Value() < End().Value(); +} + + +bool LiveRange::Covers(LifetimePosition position) { + if (!CanCover(position)) return false; + UseInterval* start_search = FirstSearchIntervalForPosition(position); + for (UseInterval* interval = start_search; interval != NULL; + interval = interval->next()) { + DCHECK(interval->next() == NULL || + interval->next()->start().Value() >= interval->start().Value()); + AdvanceLastProcessedMarker(interval, position); + if (interval->Contains(position)) return true; + if (interval->start().Value() > position.Value()) return false; + } + return false; +} + + +LifetimePosition LiveRange::FirstIntersection(LiveRange* other) { + UseInterval* b = other->first_interval(); + if (b == NULL) return LifetimePosition::Invalid(); + LifetimePosition advance_last_processed_up_to = b->start(); + UseInterval* a = FirstSearchIntervalForPosition(b->start()); + while (a != NULL && b != NULL) { + if (a->start().Value() > other->End().Value()) break; + if (b->start().Value() > End().Value()) break; + LifetimePosition cur_intersection = a->Intersect(b); + if (cur_intersection.IsValid()) { + return cur_intersection; + } + if (a->start().Value() < b->start().Value()) { + a = a->next(); + if (a == NULL || a->start().Value() > other->End().Value()) break; + AdvanceLastProcessedMarker(a, advance_last_processed_up_to); + } else { + b = b->next(); + } + } + return LifetimePosition::Invalid(); +} + + +RegisterAllocator::RegisterAllocator(InstructionSequence* code) + : zone_(code->isolate()), + code_(code), + live_in_sets_(code->BasicBlockCount(), zone()), + live_ranges_(code->VirtualRegisterCount() * 2, zone()), + fixed_live_ranges_(NULL), + fixed_double_live_ranges_(NULL), + unhandled_live_ranges_(code->VirtualRegisterCount() * 2, zone()), + 
active_live_ranges_(8, zone()), + inactive_live_ranges_(8, zone()), + reusable_slots_(8, zone()), + mode_(UNALLOCATED_REGISTERS), + num_registers_(-1), + allocation_ok_(true) {} + + +void RegisterAllocator::InitializeLivenessAnalysis() { + // Initialize the live_in sets for each block to NULL. + int block_count = code()->BasicBlockCount(); + live_in_sets_.Initialize(block_count, zone()); + live_in_sets_.AddBlock(NULL, block_count, zone()); +} + + +BitVector* RegisterAllocator::ComputeLiveOut(BasicBlock* block) { + // Compute live out for the given block, except not including backward + // successor edges. + BitVector* live_out = + new (zone()) BitVector(code()->VirtualRegisterCount(), zone()); + + // Process all successor blocks. + BasicBlock::Successors successors = block->successors(); + for (BasicBlock::Successors::iterator i = successors.begin(); + i != successors.end(); ++i) { + // Add values live on entry to the successor. Note the successor's + // live_in will not be computed yet for backwards edges. + BasicBlock* successor = *i; + BitVector* live_in = live_in_sets_[successor->rpo_number_]; + if (live_in != NULL) live_out->Union(*live_in); + + // All phi input operands corresponding to this successor edge are live + // out from this block. + int index = successor->PredecessorIndexOf(block); + DCHECK(index >= 0); + DCHECK(index < static_cast<int>(successor->PredecessorCount())); + for (BasicBlock::const_iterator j = successor->begin(); + j != successor->end(); ++j) { + Node* phi = *j; + if (phi->opcode() != IrOpcode::kPhi) continue; + Node* input = phi->InputAt(index); + live_out->Add(input->id()); + } + } + + return live_out; +} + + +void RegisterAllocator::AddInitialIntervals(BasicBlock* block, + BitVector* live_out) { + // Add an interval that includes the entire block to the live range for + // each live_out value. 
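/* Editor's note: a minimal stand-alone sketch of the live-out rule that
   ComputeLiveOut above implements, with std::bitset standing in for V8's
   zone-allocated BitVector. Block, Phi, and kMaxVregs are illustrative
   names, not part of the V8 API.

     #include <bitset>
     #include <vector>

     constexpr int kMaxVregs = 64;
     using LiveSet = std::bitset<kMaxVregs>;

     struct Phi {
       std::vector<int> inputs;  // one input vreg per predecessor edge
     };

     struct Block {
       std::vector<Block*> successors;
       std::vector<Block*> predecessors;
       std::vector<Phi> phis;
       LiveSet live_in;  // empty until this block itself is processed

       int PredecessorIndexOf(const Block* pred) const {
         for (size_t i = 0; i < predecessors.size(); ++i) {
           if (predecessors[i] == pred) return static_cast<int>(i);
         }
         return -1;
       }
     };

     LiveSet ComputeLiveOut(const Block* block) {
       LiveSet live_out;
       for (const Block* succ : block->successors) {
         // Union the successor's live-in; on back edges it is still empty,
         // which the loop-header fixup in BuildLiveRanges compensates for.
         live_out |= succ->live_in;
         // Phi inputs flowing along this edge are live out of this block.
         int index = succ->PredecessorIndexOf(block);
         for (const Phi& phi : succ->phis) live_out.set(phi.inputs[index]);
       }
       return live_out;
     }
*/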
+ LifetimePosition start = + LifetimePosition::FromInstructionIndex(block->first_instruction_index()); + LifetimePosition end = LifetimePosition::FromInstructionIndex( + block->last_instruction_index()).NextInstruction(); + BitVector::Iterator iterator(live_out); + while (!iterator.Done()) { + int operand_index = iterator.Current(); + LiveRange* range = LiveRangeFor(operand_index); + range->AddUseInterval(start, end, zone()); + iterator.Advance(); + } +} + + +int RegisterAllocator::FixedDoubleLiveRangeID(int index) { + return -index - 1 - Register::kMaxNumAllocatableRegisters; +} + + +InstructionOperand* RegisterAllocator::AllocateFixed( + UnallocatedOperand* operand, int pos, bool is_tagged) { + TraceAlloc("Allocating fixed reg for op %d\n", operand->virtual_register()); + DCHECK(operand->HasFixedPolicy()); + if (operand->HasFixedSlotPolicy()) { + operand->ConvertTo(InstructionOperand::STACK_SLOT, + operand->fixed_slot_index()); + } else if (operand->HasFixedRegisterPolicy()) { + int reg_index = operand->fixed_register_index(); + operand->ConvertTo(InstructionOperand::REGISTER, reg_index); + } else if (operand->HasFixedDoubleRegisterPolicy()) { + int reg_index = operand->fixed_register_index(); + operand->ConvertTo(InstructionOperand::DOUBLE_REGISTER, reg_index); + } else { + UNREACHABLE(); + } + if (is_tagged) { + TraceAlloc("Fixed reg is tagged at %d\n", pos); + Instruction* instr = InstructionAt(pos); + if (instr->HasPointerMap()) { + instr->pointer_map()->RecordPointer(operand, code_zone()); + } + } + return operand; +} + + +LiveRange* RegisterAllocator::FixedLiveRangeFor(int index) { + DCHECK(index < Register::kMaxNumAllocatableRegisters); + LiveRange* result = fixed_live_ranges_[index]; + if (result == NULL) { + // TODO(titzer): add a utility method to allocate a new LiveRange: + // The LiveRange object itself can go in this zone, but the + // InstructionOperand needs + // to go in the code zone, since it may survive register allocation. 
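/* Editor's note: the FixedLiveRangeID/FixedDoubleLiveRangeID scheme used
   here packs fixed-register ranges into negative IDs so they can never
   collide with ordinary virtual registers (which are >= 0). A stand-alone
   rendering of that arithmetic; kMaxGeneralRegisters is an illustrative
   stand-in for Register::kMaxNumAllocatableRegisters.

     #include <cassert>

     constexpr int kMaxGeneralRegisters = 16;  // illustrative value

     int FixedLiveRangeID(int index) { return -index - 1; }
     int FixedDoubleLiveRangeID(int index) {
       return -index - 1 - kMaxGeneralRegisters;
     }

     // Inverses: general registers occupy [-kMaxGeneralRegisters, -1],
     // double registers the negative block directly below them.
     int FixedGeneralIndexOf(int id) { return -id - 1; }
     int FixedDoubleIndexOf(int id) { return -id - 1 - kMaxGeneralRegisters; }

     int main() {
       assert(FixedLiveRangeID(0) == -1);
       assert(FixedGeneralIndexOf(FixedLiveRangeID(7)) == 7);
       assert(FixedDoubleIndexOf(FixedDoubleLiveRangeID(3)) == 3);
       return 0;
     }
*/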
+ result = new (zone()) LiveRange(FixedLiveRangeID(index), code_zone()); + DCHECK(result->IsFixed()); + result->kind_ = GENERAL_REGISTERS; + SetLiveRangeAssignedRegister(result, index); + fixed_live_ranges_[index] = result; + } + return result; +} + + +LiveRange* RegisterAllocator::FixedDoubleLiveRangeFor(int index) { + DCHECK(index < DoubleRegister::NumAllocatableRegisters()); + LiveRange* result = fixed_double_live_ranges_[index]; + if (result == NULL) { + result = new (zone()) LiveRange(FixedDoubleLiveRangeID(index), code_zone()); + DCHECK(result->IsFixed()); + result->kind_ = DOUBLE_REGISTERS; + SetLiveRangeAssignedRegister(result, index); + fixed_double_live_ranges_[index] = result; + } + return result; +} + + +LiveRange* RegisterAllocator::LiveRangeFor(int index) { + if (index >= live_ranges_.length()) { + live_ranges_.AddBlock(NULL, index - live_ranges_.length() + 1, zone()); + } + LiveRange* result = live_ranges_[index]; + if (result == NULL) { + result = new (zone()) LiveRange(index, code_zone()); + live_ranges_[index] = result; + } + return result; +} + + +GapInstruction* RegisterAllocator::GetLastGap(BasicBlock* block) { + int last_instruction = block->last_instruction_index(); + return code()->GapAt(last_instruction - 1); +} + + +LiveRange* RegisterAllocator::LiveRangeFor(InstructionOperand* operand) { + if (operand->IsUnallocated()) { + return LiveRangeFor(UnallocatedOperand::cast(operand)->virtual_register()); + } else if (operand->IsRegister()) { + return FixedLiveRangeFor(operand->index()); + } else if (operand->IsDoubleRegister()) { + return FixedDoubleLiveRangeFor(operand->index()); + } else { + return NULL; + } +} + + +void RegisterAllocator::Define(LifetimePosition position, + InstructionOperand* operand, + InstructionOperand* hint) { + LiveRange* range = LiveRangeFor(operand); + if (range == NULL) return; + + if (range->IsEmpty() || range->Start().Value() > position.Value()) { + // Can happen if there is a definition without use. 
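/* Editor's note: a simplified model of the Use()/Define() pair defined in
   this region. Because blocks are walked backwards, a use conservatively
   extends the range back to the block start, and the later-seen definition
   trims it; a definition with no use gets a tiny synthetic interval.
   Interval and Range are illustrative types, not V8's.

     #include <cassert>
     #include <vector>

     struct Interval { int start, end; };  // half-open [start, end)

     struct Range {
       std::vector<Interval> intervals;  // front() is the earliest interval
       bool empty() const { return intervals.empty(); }
       void AddUseInterval(int s, int e) {
         intervals.insert(intervals.begin(), {s, e});
       }
       void ShortenTo(int s) { intervals.front().start = s; }
       int Start() const { return intervals.front().start; }
     };

     void Use(Range& r, int block_start, int pos) {
       r.AddUseInterval(block_start, pos);
     }

     void Define(Range& r, int pos) {
       if (r.empty() || r.Start() > pos) {
         r.AddUseInterval(pos, pos + 1);  // definition without use
       } else {
         r.ShortenTo(pos);                // trim the pessimistic interval
       }
     }

     int main() {
       Range r;
       Use(r, /*block_start=*/0, /*pos=*/14);  // seen first (backward walk)
       Define(r, 6);                           // trims [0,14) to [6,14)
       assert(r.Start() == 6);
       return 0;
     }
*/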
+ range->AddUseInterval(position, position.NextInstruction(), zone()); + range->AddUsePosition(position.NextInstruction(), NULL, NULL, zone()); + } else { + range->ShortenTo(position); + } + + if (operand->IsUnallocated()) { + UnallocatedOperand* unalloc_operand = UnallocatedOperand::cast(operand); + range->AddUsePosition(position, unalloc_operand, hint, zone()); + } +} + + +void RegisterAllocator::Use(LifetimePosition block_start, + LifetimePosition position, + InstructionOperand* operand, + InstructionOperand* hint) { + LiveRange* range = LiveRangeFor(operand); + if (range == NULL) return; + if (operand->IsUnallocated()) { + UnallocatedOperand* unalloc_operand = UnallocatedOperand::cast(operand); + range->AddUsePosition(position, unalloc_operand, hint, zone()); + } + range->AddUseInterval(block_start, position, zone()); +} + + +void RegisterAllocator::AddConstraintsGapMove(int index, + InstructionOperand* from, + InstructionOperand* to) { + GapInstruction* gap = code()->GapAt(index); + ParallelMove* move = + gap->GetOrCreateParallelMove(GapInstruction::START, code_zone()); + if (from->IsUnallocated()) { + const ZoneList<MoveOperands>* move_operands = move->move_operands(); + for (int i = 0; i < move_operands->length(); ++i) { + MoveOperands cur = move_operands->at(i); + InstructionOperand* cur_to = cur.destination(); + if (cur_to->IsUnallocated()) { + if (UnallocatedOperand::cast(cur_to)->virtual_register() == + UnallocatedOperand::cast(from)->virtual_register()) { + move->AddMove(cur.source(), to, code_zone()); + return; + } + } + } + } + move->AddMove(from, to, code_zone()); +} + + +void RegisterAllocator::MeetRegisterConstraints(BasicBlock* block) { + int start = block->first_instruction_index(); + int end = block->last_instruction_index(); + DCHECK_NE(-1, start); + for (int i = start; i <= end; ++i) { + if (code()->IsGapAt(i)) { + Instruction* instr = NULL; + Instruction* prev_instr = NULL; + if (i < end) instr = InstructionAt(i + 1); + if (i > start) prev_instr = InstructionAt(i - 1); + MeetConstraintsBetween(prev_instr, instr, i); + if (!AllocationOk()) return; + } + } + + // Meet register constraints for the instruction in the end. + if (!code()->IsGapAt(end)) { + MeetRegisterConstraintsForLastInstructionInBlock(block); + } +} + + +void RegisterAllocator::MeetRegisterConstraintsForLastInstructionInBlock( + BasicBlock* block) { + int end = block->last_instruction_index(); + Instruction* last_instruction = InstructionAt(end); + for (size_t i = 0; i < last_instruction->OutputCount(); i++) { + InstructionOperand* output_operand = last_instruction->OutputAt(i); + DCHECK(!output_operand->IsConstant()); + UnallocatedOperand* output = UnallocatedOperand::cast(output_operand); + int output_vreg = output->virtual_register(); + LiveRange* range = LiveRangeFor(output_vreg); + bool assigned = false; + if (output->HasFixedPolicy()) { + AllocateFixed(output, -1, false); + // This value is produced on the stack, we never need to spill it. + if (output->IsStackSlot()) { + range->SetSpillOperand(output); + range->SetSpillStartIndex(end); + assigned = true; + } + + BasicBlock::Successors successors = block->successors(); + for (BasicBlock::Successors::iterator succ = successors.begin(); + succ != successors.end(); ++succ) { + DCHECK((*succ)->PredecessorCount() == 1); + int gap_index = (*succ)->first_instruction_index() + 1; + DCHECK(code()->IsGapAt(gap_index)); + + // Create an unconstrained operand for the same virtual register + // and insert a gap move from the fixed output to the operand. 
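/* Editor's note: the gap moves inserted here live in ParallelMoves, whose
   semantics are "all sources are read before any destination is written".
   A minimal sketch of those semantics using a snapshot; note that V8's
   actual GapResolver breaks move cycles in place rather than snapshotting,
   so this only illustrates the observable behavior. Loc is illustrative.

     #include <cassert>
     #include <utility>
     #include <vector>

     using Loc = int;  // stand-in for an InstructionOperand slot id

     void ExecuteParallelMove(const std::vector<std::pair<Loc, Loc>>& moves,
                              std::vector<int>& state) {
       // Read every source first...
       std::vector<int> snapshot(moves.size());
       for (size_t i = 0; i < moves.size(); ++i)
         snapshot[i] = state[moves[i].first];
       // ...then write every destination.
       for (size_t i = 0; i < moves.size(); ++i)
         state[moves[i].second] = snapshot[i];
     }

     int main() {
       std::vector<int> regs = {1, 2};
       // Swap r0 and r1 in one parallel move; naive sequential copies
       // would clobber one of the values.
       ExecuteParallelMove({{0, 1}, {1, 0}}, regs);
       assert(regs[0] == 2 && regs[1] == 1);
       return 0;
     }
*/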
+ UnallocatedOperand* output_copy = + new (code_zone()) UnallocatedOperand(UnallocatedOperand::ANY); + output_copy->set_virtual_register(output_vreg); + + code()->AddGapMove(gap_index, output, output_copy); + } + } + + if (!assigned) { + BasicBlock::Successors successors = block->successors(); + for (BasicBlock::Successors::iterator succ = successors.begin(); + succ != successors.end(); ++succ) { + DCHECK((*succ)->PredecessorCount() == 1); + int gap_index = (*succ)->first_instruction_index() + 1; + range->SetSpillStartIndex(gap_index); + + // This move to spill operand is not a real use. Liveness analysis + // and splitting of live ranges do not account for it. + // Thus it should be inserted to a lifetime position corresponding to + // the instruction end. + GapInstruction* gap = code()->GapAt(gap_index); + ParallelMove* move = + gap->GetOrCreateParallelMove(GapInstruction::BEFORE, code_zone()); + move->AddMove(output, range->GetSpillOperand(), code_zone()); + } + } + } +} + + +void RegisterAllocator::MeetConstraintsBetween(Instruction* first, + Instruction* second, + int gap_index) { + if (first != NULL) { + // Handle fixed temporaries. + for (size_t i = 0; i < first->TempCount(); i++) { + UnallocatedOperand* temp = UnallocatedOperand::cast(first->TempAt(i)); + if (temp->HasFixedPolicy()) { + AllocateFixed(temp, gap_index - 1, false); + } + } + + // Handle constant/fixed output operands. + for (size_t i = 0; i < first->OutputCount(); i++) { + InstructionOperand* output = first->OutputAt(i); + if (output->IsConstant()) { + int output_vreg = output->index(); + LiveRange* range = LiveRangeFor(output_vreg); + range->SetSpillStartIndex(gap_index - 1); + range->SetSpillOperand(output); + } else { + UnallocatedOperand* first_output = UnallocatedOperand::cast(output); + LiveRange* range = LiveRangeFor(first_output->virtual_register()); + bool assigned = false; + if (first_output->HasFixedPolicy()) { + UnallocatedOperand* output_copy = + first_output->CopyUnconstrained(code_zone()); + bool is_tagged = HasTaggedValue(first_output->virtual_register()); + AllocateFixed(first_output, gap_index, is_tagged); + + // This value is produced on the stack, we never need to spill it. + if (first_output->IsStackSlot()) { + range->SetSpillOperand(first_output); + range->SetSpillStartIndex(gap_index - 1); + assigned = true; + } + code()->AddGapMove(gap_index, first_output, output_copy); + } + + // Make sure we add a gap move for spilling (if we have not done + // so already). + if (!assigned) { + range->SetSpillStartIndex(gap_index); + + // This move to spill operand is not a real use. Liveness analysis + // and splitting of live ranges do not account for it. + // Thus it should be inserted to a lifetime position corresponding to + // the instruction end. + GapInstruction* gap = code()->GapAt(gap_index); + ParallelMove* move = + gap->GetOrCreateParallelMove(GapInstruction::BEFORE, code_zone()); + move->AddMove(first_output, range->GetSpillOperand(), code_zone()); + } + } + } + } + + if (second != NULL) { + // Handle fixed input operands of second instruction. + for (size_t i = 0; i < second->InputCount(); i++) { + InstructionOperand* input = second->InputAt(i); + if (input->IsImmediate()) continue; // Ignore immediates. 
+ UnallocatedOperand* cur_input = UnallocatedOperand::cast(input); + if (cur_input->HasFixedPolicy()) { + UnallocatedOperand* input_copy = + cur_input->CopyUnconstrained(code_zone()); + bool is_tagged = HasTaggedValue(cur_input->virtual_register()); + AllocateFixed(cur_input, gap_index + 1, is_tagged); + AddConstraintsGapMove(gap_index, input_copy, cur_input); + } + } + + // Handle "output same as input" for second instruction. + for (size_t i = 0; i < second->OutputCount(); i++) { + InstructionOperand* output = second->OutputAt(i); + if (!output->IsUnallocated()) continue; + UnallocatedOperand* second_output = UnallocatedOperand::cast(output); + if (second_output->HasSameAsInputPolicy()) { + DCHECK(i == 0); // Only valid for first output. + UnallocatedOperand* cur_input = + UnallocatedOperand::cast(second->InputAt(0)); + int output_vreg = second_output->virtual_register(); + int input_vreg = cur_input->virtual_register(); + + UnallocatedOperand* input_copy = + cur_input->CopyUnconstrained(code_zone()); + cur_input->set_virtual_register(second_output->virtual_register()); + AddConstraintsGapMove(gap_index, input_copy, cur_input); + + if (HasTaggedValue(input_vreg) && !HasTaggedValue(output_vreg)) { + int index = gap_index + 1; + Instruction* instr = InstructionAt(index); + if (instr->HasPointerMap()) { + instr->pointer_map()->RecordPointer(input_copy, code_zone()); + } + } else if (!HasTaggedValue(input_vreg) && HasTaggedValue(output_vreg)) { + // The input is assumed to immediately have a tagged representation, + // before the pointer map can be used. I.e. the pointer map at the + // instruction will include the output operand (whose value at the + // beginning of the instruction is equal to the input operand). If + // this is not desired, then the pointer map at this instruction needs + // to be adjusted manually. + } + } + } + } +} + + +bool RegisterAllocator::IsOutputRegisterOf(Instruction* instr, int index) { + for (size_t i = 0; i < instr->OutputCount(); i++) { + InstructionOperand* output = instr->OutputAt(i); + if (output->IsRegister() && output->index() == index) return true; + } + return false; +} + + +bool RegisterAllocator::IsOutputDoubleRegisterOf(Instruction* instr, + int index) { + for (size_t i = 0; i < instr->OutputCount(); i++) { + InstructionOperand* output = instr->OutputAt(i); + if (output->IsDoubleRegister() && output->index() == index) return true; + } + return false; +} + + +void RegisterAllocator::ProcessInstructions(BasicBlock* block, + BitVector* live) { + int block_start = block->first_instruction_index(); + + LifetimePosition block_start_position = + LifetimePosition::FromInstructionIndex(block_start); + + for (int index = block->last_instruction_index(); index >= block_start; + index--) { + LifetimePosition curr_position = + LifetimePosition::FromInstructionIndex(index); + + Instruction* instr = InstructionAt(index); + DCHECK(instr != NULL); + if (instr->IsGapMoves()) { + // Process the moves of the gap instruction, making their sources live. + GapInstruction* gap = code()->GapAt(index); + + // TODO(titzer): no need to create the parallel move if it doesn't exist. 
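/* Editor's note: ProcessInstructions (beginning just below) is a backward
   scan implementing the classic per-instruction liveness rule
   live = (live - defs) + uses. A schematic version over plain integer
   vregs; the Instr type is illustrative, not V8's.

     #include <bitset>
     #include <vector>

     constexpr int kMaxVregs = 64;

     struct Instr {
       std::vector<int> outputs;  // vregs defined by this instruction
       std::vector<int> inputs;   // vregs used by this instruction
     };

     void ScanBlock(const std::vector<Instr>& instrs,
                    std::bitset<kMaxVregs>& live) {
       for (auto it = instrs.rbegin(); it != instrs.rend(); ++it) {
         for (int out : it->outputs) live.reset(out);  // definitions kill
         for (int in : it->inputs) live.set(in);       // uses make live
       }
     }
*/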
+ ParallelMove* move = + gap->GetOrCreateParallelMove(GapInstruction::START, code_zone()); + const ZoneList<MoveOperands>* move_operands = move->move_operands(); + for (int i = 0; i < move_operands->length(); ++i) { + MoveOperands* cur = &move_operands->at(i); + if (cur->IsIgnored()) continue; + InstructionOperand* from = cur->source(); + InstructionOperand* to = cur->destination(); + InstructionOperand* hint = to; + if (to->IsUnallocated()) { + int to_vreg = UnallocatedOperand::cast(to)->virtual_register(); + LiveRange* to_range = LiveRangeFor(to_vreg); + if (to_range->is_phi()) { + if (to_range->is_non_loop_phi()) { + hint = to_range->current_hint_operand(); + } + } else { + if (live->Contains(to_vreg)) { + Define(curr_position, to, from); + live->Remove(to_vreg); + } else { + cur->Eliminate(); + continue; + } + } + } else { + Define(curr_position, to, from); + } + Use(block_start_position, curr_position, from, hint); + if (from->IsUnallocated()) { + live->Add(UnallocatedOperand::cast(from)->virtual_register()); + } + } + } else { + // Process output, inputs, and temps of this non-gap instruction. + for (size_t i = 0; i < instr->OutputCount(); i++) { + InstructionOperand* output = instr->OutputAt(i); + if (output->IsUnallocated()) { + int out_vreg = UnallocatedOperand::cast(output)->virtual_register(); + live->Remove(out_vreg); + } else if (output->IsConstant()) { + int out_vreg = output->index(); + live->Remove(out_vreg); + } + Define(curr_position, output, NULL); + } + + if (instr->ClobbersRegisters()) { + for (int i = 0; i < Register::kMaxNumAllocatableRegisters; ++i) { + if (!IsOutputRegisterOf(instr, i)) { + LiveRange* range = FixedLiveRangeFor(i); + range->AddUseInterval(curr_position, curr_position.InstructionEnd(), + zone()); + } + } + } + + if (instr->ClobbersDoubleRegisters()) { + for (int i = 0; i < DoubleRegister::NumAllocatableRegisters(); ++i) { + if (!IsOutputDoubleRegisterOf(instr, i)) { + LiveRange* range = FixedDoubleLiveRangeFor(i); + range->AddUseInterval(curr_position, curr_position.InstructionEnd(), + zone()); + } + } + } + + for (size_t i = 0; i < instr->InputCount(); i++) { + InstructionOperand* input = instr->InputAt(i); + if (input->IsImmediate()) continue; // Ignore immediates. 
+ LifetimePosition use_pos; + if (input->IsUnallocated() && + UnallocatedOperand::cast(input)->IsUsedAtStart()) { + use_pos = curr_position; + } else { + use_pos = curr_position.InstructionEnd(); + } + + Use(block_start_position, use_pos, input, NULL); + if (input->IsUnallocated()) { + live->Add(UnallocatedOperand::cast(input)->virtual_register()); + } + } + + for (size_t i = 0; i < instr->TempCount(); i++) { + InstructionOperand* temp = instr->TempAt(i); + if (instr->ClobbersTemps()) { + if (temp->IsRegister()) continue; + if (temp->IsUnallocated()) { + UnallocatedOperand* temp_unalloc = UnallocatedOperand::cast(temp); + if (temp_unalloc->HasFixedPolicy()) { + continue; + } + } + } + Use(block_start_position, curr_position.InstructionEnd(), temp, NULL); + Define(curr_position, temp, NULL); + } + } + } +} + + +void RegisterAllocator::ResolvePhis(BasicBlock* block) { + for (BasicBlock::const_iterator i = block->begin(); i != block->end(); ++i) { + Node* phi = *i; + if (phi->opcode() != IrOpcode::kPhi) continue; + + UnallocatedOperand* phi_operand = + new (code_zone()) UnallocatedOperand(UnallocatedOperand::NONE); + phi_operand->set_virtual_register(phi->id()); + + int j = 0; + Node::Inputs inputs = phi->inputs(); + for (Node::Inputs::iterator iter(inputs.begin()); iter != inputs.end(); + ++iter, ++j) { + Node* op = *iter; + // TODO(mstarzinger): Use a ValueInputIterator instead. + if (j >= block->PredecessorCount()) continue; + UnallocatedOperand* operand = + new (code_zone()) UnallocatedOperand(UnallocatedOperand::ANY); + operand->set_virtual_register(op->id()); + BasicBlock* cur_block = block->PredecessorAt(j); + // The gap move must be added without any special processing as in + // the AddConstraintsGapMove. + code()->AddGapMove(cur_block->last_instruction_index() - 1, operand, + phi_operand); + + Instruction* branch = InstructionAt(cur_block->last_instruction_index()); + DCHECK(!branch->HasPointerMap()); + USE(branch); + } + + LiveRange* live_range = LiveRangeFor(phi->id()); + BlockStartInstruction* block_start = code()->GetBlockStart(block); + block_start->GetOrCreateParallelMove(GapInstruction::START, code_zone()) + ->AddMove(phi_operand, live_range->GetSpillOperand(), code_zone()); + live_range->SetSpillStartIndex(block->first_instruction_index()); + + // We use the phi-ness of some nodes in some later heuristics. 
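/* Editor's note: ResolvePhis (below) lowers each phi into one gap move per
   predecessor: input i is moved into the phi's operand at the end of
   predecessor i, and together with the other phis' moves on that edge they
   form a single parallel move. Schematic sketch; Block and Phi are
   illustrative stand-ins for V8's BasicBlock/UnallocatedOperand machinery.

     #include <utility>
     #include <vector>

     struct Block {
       std::vector<Block*> predecessors;
       // Pending (source vreg, destination vreg) moves at the block end.
       std::vector<std::pair<int, int>> end_moves;
     };

     struct Phi {
       int output;               // vreg the phi defines
       std::vector<int> inputs;  // one vreg per predecessor
     };

     void ResolvePhi(Block* block, const Phi& phi) {
       for (size_t i = 0; i < block->predecessors.size(); ++i) {
         block->predecessors[i]->end_moves.emplace_back(phi.inputs[i],
                                                        phi.output);
       }
     }
*/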
+    live_range->set_is_phi(true);
+    if (!block->IsLoopHeader()) {
+      live_range->set_is_non_loop_phi(true);
+    }
+  }
+}
+
+
+bool RegisterAllocator::Allocate() {
+  assigned_registers_ = new (code_zone())
+      BitVector(Register::NumAllocatableRegisters(), code_zone());
+  assigned_double_registers_ = new (code_zone())
+      BitVector(DoubleRegister::NumAllocatableRegisters(), code_zone());
+  MeetRegisterConstraints();
+  if (!AllocationOk()) return false;
+  ResolvePhis();
+  BuildLiveRanges();
+  AllocateGeneralRegisters();
+  if (!AllocationOk()) return false;
+  AllocateDoubleRegisters();
+  if (!AllocationOk()) return false;
+  PopulatePointerMaps();
+  ConnectRanges();
+  ResolveControlFlow();
+  code()->frame()->SetAllocatedRegisters(assigned_registers_);
+  code()->frame()->SetAllocatedDoubleRegisters(assigned_double_registers_);
+  return true;
+}
+
+
+void RegisterAllocator::MeetRegisterConstraints() {
+  RegisterAllocatorPhase phase("L_Register constraints", this);
+  for (int i = 0; i < code()->BasicBlockCount(); ++i) {
+    MeetRegisterConstraints(code()->BlockAt(i));
+    if (!AllocationOk()) return;
+  }
+}
+
+
+void RegisterAllocator::ResolvePhis() {
+  RegisterAllocatorPhase phase("L_Resolve phis", this);
+
+  // Process the blocks in reverse order.
+  for (int i = code()->BasicBlockCount() - 1; i >= 0; --i) {
+    ResolvePhis(code()->BlockAt(i));
+  }
+}
+
+
+void RegisterAllocator::ResolveControlFlow(LiveRange* range, BasicBlock* block,
+                                           BasicBlock* pred) {
+  LifetimePosition pred_end =
+      LifetimePosition::FromInstructionIndex(pred->last_instruction_index());
+  LifetimePosition cur_start =
+      LifetimePosition::FromInstructionIndex(block->first_instruction_index());
+  LiveRange* pred_cover = NULL;
+  LiveRange* cur_cover = NULL;
+  LiveRange* cur_range = range;
+  while (cur_range != NULL && (cur_cover == NULL || pred_cover == NULL)) {
+    if (cur_range->CanCover(cur_start)) {
+      DCHECK(cur_cover == NULL);
+      cur_cover = cur_range;
+    }
+    if (cur_range->CanCover(pred_end)) {
+      DCHECK(pred_cover == NULL);
+      pred_cover = cur_range;
+    }
+    cur_range = cur_range->next();
+  }
+
+  if (cur_cover->IsSpilled()) return;
+  DCHECK(pred_cover != NULL && cur_cover != NULL);
+  if (pred_cover != cur_cover) {
+    InstructionOperand* pred_op =
+        pred_cover->CreateAssignedOperand(code_zone());
+    InstructionOperand* cur_op = cur_cover->CreateAssignedOperand(code_zone());
+    if (!pred_op->Equals(cur_op)) {
+      GapInstruction* gap = NULL;
+      if (block->PredecessorCount() == 1) {
+        gap = code()->GapAt(block->first_instruction_index());
+      } else {
+        DCHECK(pred->SuccessorCount() == 1);
+        gap = GetLastGap(pred);
+
+        Instruction* branch = InstructionAt(pred->last_instruction_index());
+        DCHECK(!branch->HasPointerMap());
+        USE(branch);
+      }
+      gap->GetOrCreateParallelMove(GapInstruction::START, code_zone())
+          ->AddMove(pred_op, cur_op, code_zone());
+    }
+  }
+}
+
+
+ParallelMove* RegisterAllocator::GetConnectingParallelMove(
+    LifetimePosition pos) {
+  int index = pos.InstructionIndex();
+  if (code()->IsGapAt(index)) {
+    GapInstruction* gap = code()->GapAt(index);
+    return gap->GetOrCreateParallelMove(
+        pos.IsInstructionStart() ? GapInstruction::START : GapInstruction::END,
+        code_zone());
+  }
+  int gap_pos = pos.IsInstructionStart() ? (index - 1) : (index + 1);
+  return code()->GapAt(gap_pos)->GetOrCreateParallelMove(
+      (gap_pos < index) ? GapInstruction::AFTER : GapInstruction::BEFORE,
+      code_zone());
+}
+
+
+BasicBlock* RegisterAllocator::GetBlock(LifetimePosition pos) {
+  return code()->GetBasicBlock(pos.InstructionIndex());
+}
+
+
+void RegisterAllocator::ConnectRanges() {
+  RegisterAllocatorPhase phase("L_Connect ranges", this);
+  for (int i = 0; i < live_ranges()->length(); ++i) {
+    LiveRange* first_range = live_ranges()->at(i);
+    if (first_range == NULL || first_range->parent() != NULL) continue;
+
+    LiveRange* second_range = first_range->next();
+    while (second_range != NULL) {
+      LifetimePosition pos = second_range->Start();
+
+      if (!second_range->IsSpilled()) {
+        // Add gap move if the two live ranges touch and there is no block
+        // boundary.
+        if (first_range->End().Value() == pos.Value()) {
+          bool should_insert = true;
+          if (IsBlockBoundary(pos)) {
+            should_insert = CanEagerlyResolveControlFlow(GetBlock(pos));
+          }
+          if (should_insert) {
+            ParallelMove* move = GetConnectingParallelMove(pos);
+            InstructionOperand* prev_operand =
+                first_range->CreateAssignedOperand(code_zone());
+            InstructionOperand* cur_operand =
+                second_range->CreateAssignedOperand(code_zone());
+            move->AddMove(prev_operand, cur_operand, code_zone());
+          }
+        }
+      }
+
+      first_range = second_range;
+      second_range = second_range->next();
+    }
+  }
+}
+
+
+bool RegisterAllocator::CanEagerlyResolveControlFlow(BasicBlock* block) const {
+  if (block->PredecessorCount() != 1) return false;
+  return block->PredecessorAt(0)->rpo_number_ == block->rpo_number_ - 1;
+}
+
+
+void RegisterAllocator::ResolveControlFlow() {
+  RegisterAllocatorPhase phase("L_Resolve control flow", this);
+  for (int block_id = 1; block_id < code()->BasicBlockCount(); ++block_id) {
+    BasicBlock* block = code()->BlockAt(block_id);
+    if (CanEagerlyResolveControlFlow(block)) continue;
+    BitVector* live = live_in_sets_[block->rpo_number_];
+    BitVector::Iterator iterator(live);
+    while (!iterator.Done()) {
+      int operand_index = iterator.Current();
+      BasicBlock::Predecessors predecessors = block->predecessors();
+      for (BasicBlock::Predecessors::iterator i = predecessors.begin();
+           i != predecessors.end(); ++i) {
+        BasicBlock* cur = *i;
+        LiveRange* cur_range = LiveRangeFor(operand_index);
+        ResolveControlFlow(cur_range, block, cur);
+      }
+      iterator.Advance();
+    }
+  }
+}
+
+
+void RegisterAllocator::BuildLiveRanges() {
+  RegisterAllocatorPhase phase("L_Build live ranges", this);
+  InitializeLivenessAnalysis();
+  // Process the blocks in reverse order.
+  for (int block_id = code()->BasicBlockCount() - 1; block_id >= 0;
+       --block_id) {
+    BasicBlock* block = code()->BlockAt(block_id);
+    BitVector* live = ComputeLiveOut(block);
+    // Initially consider all live_out values live for the entire block. We
+    // will shorten these intervals if necessary.
+    AddInitialIntervals(block, live);
+
+    // Process the instructions in reverse order, generating and killing
+    // live values.
+    ProcessInstructions(block, live);
+    // All phi output operands are killed by this block.
+    for (BasicBlock::const_iterator i = block->begin(); i != block->end();
+         ++i) {
+      Node* phi = *i;
+      if (phi->opcode() != IrOpcode::kPhi) continue;
+
+      // The live range interval already ends at the first instruction of the
+      // block.
+      live->Remove(phi->id());
+
+      InstructionOperand* hint = NULL;
+      InstructionOperand* phi_operand = NULL;
+      GapInstruction* gap = GetLastGap(block->PredecessorAt(0));
+
+      // TODO(titzer): no need to create the parallel move if it doesn't exist.
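/* Editor's note: GetConnectingParallelMove above picks one of four move
   slots depending on where the lifetime position falls. A compact
   restatement of that decision table; GapSlot and Placement are
   descriptive stand-ins, not V8 types.

     enum class GapSlot { BEFORE, START, END, AFTER };

     struct Placement {
       int gap_index;
       GapSlot slot;
     };

     Placement ConnectingMoveSlot(int instr_index, bool is_instruction_start,
                                  bool instr_is_gap) {
       if (instr_is_gap) {
         // A position inside a gap uses the gap's own START or END move.
         return {instr_index,
                 is_instruction_start ? GapSlot::START : GapSlot::END};
       }
       // A position inside a real instruction borrows a neighbouring gap:
       // its start maps to the previous gap's AFTER slot, its end to the
       // next gap's BEFORE slot.
       return is_instruction_start
                  ? Placement{instr_index - 1, GapSlot::AFTER}
                  : Placement{instr_index + 1, GapSlot::BEFORE};
     }
*/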
+ ParallelMove* move = + gap->GetOrCreateParallelMove(GapInstruction::START, code_zone()); + for (int j = 0; j < move->move_operands()->length(); ++j) { + InstructionOperand* to = move->move_operands()->at(j).destination(); + if (to->IsUnallocated() && + UnallocatedOperand::cast(to)->virtual_register() == phi->id()) { + hint = move->move_operands()->at(j).source(); + phi_operand = to; + break; + } + } + DCHECK(hint != NULL); + + LifetimePosition block_start = LifetimePosition::FromInstructionIndex( + block->first_instruction_index()); + Define(block_start, phi_operand, hint); + } + + // Now live is live_in for this block except not including values live + // out on backward successor edges. + live_in_sets_[block_id] = live; + + if (block->IsLoopHeader()) { + // Add a live range stretching from the first loop instruction to the last + // for each value live on entry to the header. + BitVector::Iterator iterator(live); + LifetimePosition start = LifetimePosition::FromInstructionIndex( + block->first_instruction_index()); + int end_index = + code()->BlockAt(block->loop_end_)->last_instruction_index(); + LifetimePosition end = + LifetimePosition::FromInstructionIndex(end_index).NextInstruction(); + while (!iterator.Done()) { + int operand_index = iterator.Current(); + LiveRange* range = LiveRangeFor(operand_index); + range->EnsureInterval(start, end, zone()); + iterator.Advance(); + } + + // Insert all values into the live in sets of all blocks in the loop. + for (int i = block->rpo_number_ + 1; i < block->loop_end_; ++i) { + live_in_sets_[i]->Union(*live); + } + } + +#ifdef DEBUG + if (block_id == 0) { + BitVector::Iterator iterator(live); + bool found = false; + while (!iterator.Done()) { + found = true; + int operand_index = iterator.Current(); + PrintF("Register allocator error: live v%d reached first block.\n", + operand_index); + LiveRange* range = LiveRangeFor(operand_index); + PrintF(" (first use is at %d)\n", range->first_pos()->pos().Value()); + CompilationInfo* info = code()->linkage()->info(); + if (info->IsStub()) { + if (info->code_stub() == NULL) { + PrintF("\n"); + } else { + CodeStub::Major major_key = info->code_stub()->MajorKey(); + PrintF(" (function: %s)\n", CodeStub::MajorName(major_key, false)); + } + } else { + DCHECK(info->IsOptimizing()); + AllowHandleDereference allow_deref; + PrintF(" (function: %s)\n", + info->function()->debug_name()->ToCString().get()); + } + iterator.Advance(); + } + DCHECK(!found); + } +#endif + } + + for (int i = 0; i < live_ranges_.length(); ++i) { + if (live_ranges_[i] != NULL) { + live_ranges_[i]->kind_ = RequiredRegisterKind(live_ranges_[i]->id()); + + // TODO(bmeurer): This is a horrible hack to make sure that for constant + // live ranges, every use requires the constant to be in a register. + // Without this hack, all uses with "any" policy would get the constant + // operand assigned. 
+ LiveRange* range = live_ranges_[i]; + if (range->HasAllocatedSpillOperand() && + range->GetSpillOperand()->IsConstant()) { + for (UsePosition* pos = range->first_pos(); pos != NULL; + pos = pos->next_) { + pos->register_beneficial_ = true; + pos->requires_reg_ = true; + } + } + } + } +} + + +bool RegisterAllocator::SafePointsAreInOrder() const { + int safe_point = 0; + const PointerMapDeque* pointer_maps = code()->pointer_maps(); + for (PointerMapDeque::const_iterator it = pointer_maps->begin(); + it != pointer_maps->end(); ++it) { + PointerMap* map = *it; + if (safe_point > map->instruction_position()) return false; + safe_point = map->instruction_position(); + } + return true; +} + + +void RegisterAllocator::PopulatePointerMaps() { + RegisterAllocatorPhase phase("L_Populate pointer maps", this); + + DCHECK(SafePointsAreInOrder()); + + // Iterate over all safe point positions and record a pointer + // for all spilled live ranges at this point. + int last_range_start = 0; + const PointerMapDeque* pointer_maps = code()->pointer_maps(); + PointerMapDeque::const_iterator first_it = pointer_maps->begin(); + for (int range_idx = 0; range_idx < live_ranges()->length(); ++range_idx) { + LiveRange* range = live_ranges()->at(range_idx); + if (range == NULL) continue; + // Iterate over the first parts of multi-part live ranges. + if (range->parent() != NULL) continue; + // Skip non-reference values. + if (!HasTaggedValue(range->id())) continue; + // Skip empty live ranges. + if (range->IsEmpty()) continue; + + // Find the extent of the range and its children. + int start = range->Start().InstructionIndex(); + int end = 0; + for (LiveRange* cur = range; cur != NULL; cur = cur->next()) { + LifetimePosition this_end = cur->End(); + if (this_end.InstructionIndex() > end) end = this_end.InstructionIndex(); + DCHECK(cur->Start().InstructionIndex() >= start); + } + + // Most of the ranges are in order, but not all. Keep an eye on when they + // step backwards and reset the first_it so we don't miss any safe points. + if (start < last_range_start) first_it = pointer_maps->begin(); + last_range_start = start; + + // Step across all the safe points that are before the start of this range, + // recording how far we step in order to save doing this for the next range. + for (; first_it != pointer_maps->end(); ++first_it) { + PointerMap* map = *first_it; + if (map->instruction_position() >= start) break; + } + + // Step through the safe points to see whether they are in the range. + for (PointerMapDeque::const_iterator it = first_it; + it != pointer_maps->end(); ++it) { + PointerMap* map = *it; + int safe_point = map->instruction_position(); + + // The safe points are sorted so we can stop searching here. + if (safe_point - 1 > end) break; + + // Advance to the next active range that covers the current + // safe point position. + LifetimePosition safe_point_pos = + LifetimePosition::FromInstructionIndex(safe_point); + LiveRange* cur = range; + while (cur != NULL && !cur->Covers(safe_point_pos)) { + cur = cur->next(); + } + if (cur == NULL) continue; + + // Check if the live range is spilled and the safe point is after + // the spill position. 
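/* Editor's note: for every GC safe point covered by a tagged
   (pointer-holding) live range, PopulatePointerMaps records where the
   pointer lives: in the stack slot once the range has been spilled there,
   and in a register while the covering child range is register-allocated.
   An illustrative restatement of the two decisions made just below; the
   types and names are not V8's.

     struct SafePointActions {
       bool record_spill_slot;  // pointer lives in the stack slot
       bool record_register;    // pointer also lives in a register
     };

     SafePointActions ClassifySafePoint(int safe_point, bool has_spill_operand,
                                        bool spill_operand_is_constant,
                                        int spill_start_index,
                                        bool covering_child_is_spilled) {
       SafePointActions actions = {false, false};
       if (has_spill_operand && !spill_operand_is_constant &&
           safe_point >= spill_start_index) {
         actions.record_spill_slot = true;
       }
       if (!covering_child_is_spilled) {
         actions.record_register = true;
       }
       return actions;
     }
*/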
+ if (range->HasAllocatedSpillOperand() && + safe_point >= range->spill_start_index() && + !range->GetSpillOperand()->IsConstant()) { + TraceAlloc("Pointer for range %d (spilled at %d) at safe point %d\n", + range->id(), range->spill_start_index(), safe_point); + map->RecordPointer(range->GetSpillOperand(), code_zone()); + } + + if (!cur->IsSpilled()) { + TraceAlloc( + "Pointer in register for range %d (start at %d) " + "at safe point %d\n", + cur->id(), cur->Start().Value(), safe_point); + InstructionOperand* operand = cur->CreateAssignedOperand(code_zone()); + DCHECK(!operand->IsStackSlot()); + map->RecordPointer(operand, code_zone()); + } + } + } +} + + +void RegisterAllocator::AllocateGeneralRegisters() { + RegisterAllocatorPhase phase("L_Allocate general registers", this); + num_registers_ = Register::NumAllocatableRegisters(); + mode_ = GENERAL_REGISTERS; + AllocateRegisters(); +} + + +void RegisterAllocator::AllocateDoubleRegisters() { + RegisterAllocatorPhase phase("L_Allocate double registers", this); + num_registers_ = DoubleRegister::NumAllocatableRegisters(); + mode_ = DOUBLE_REGISTERS; + AllocateRegisters(); +} + + +void RegisterAllocator::AllocateRegisters() { + DCHECK(unhandled_live_ranges_.is_empty()); + + for (int i = 0; i < live_ranges_.length(); ++i) { + if (live_ranges_[i] != NULL) { + if (live_ranges_[i]->Kind() == mode_) { + AddToUnhandledUnsorted(live_ranges_[i]); + } + } + } + SortUnhandled(); + DCHECK(UnhandledIsSorted()); + + DCHECK(reusable_slots_.is_empty()); + DCHECK(active_live_ranges_.is_empty()); + DCHECK(inactive_live_ranges_.is_empty()); + + if (mode_ == DOUBLE_REGISTERS) { + for (int i = 0; i < DoubleRegister::NumAllocatableRegisters(); ++i) { + LiveRange* current = fixed_double_live_ranges_.at(i); + if (current != NULL) { + AddToInactive(current); + } + } + } else { + DCHECK(mode_ == GENERAL_REGISTERS); + for (int i = 0; i < fixed_live_ranges_.length(); ++i) { + LiveRange* current = fixed_live_ranges_.at(i); + if (current != NULL) { + AddToInactive(current); + } + } + } + + while (!unhandled_live_ranges_.is_empty()) { + DCHECK(UnhandledIsSorted()); + LiveRange* current = unhandled_live_ranges_.RemoveLast(); + DCHECK(UnhandledIsSorted()); + LifetimePosition position = current->Start(); +#ifdef DEBUG + allocation_finger_ = position; +#endif + TraceAlloc("Processing interval %d start=%d\n", current->id(), + position.Value()); + + if (current->HasAllocatedSpillOperand()) { + TraceAlloc("Live range %d already has a spill operand\n", current->id()); + LifetimePosition next_pos = position; + if (code()->IsGapAt(next_pos.InstructionIndex())) { + next_pos = next_pos.NextInstruction(); + } + UsePosition* pos = current->NextUsePositionRegisterIsBeneficial(next_pos); + // If the range already has a spill operand and it doesn't need a + // register immediately, split it and spill the first part of the range. + if (pos == NULL) { + Spill(current); + continue; + } else if (pos->pos().Value() > + current->Start().NextInstruction().Value()) { + // Do not spill live range eagerly if use position that can benefit from + // the register is too close to the start of live range. 
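/* Editor's note: the allocation loop in AllocateRegisters is classic
   linear scan. A bare-bones model of its active/inactive bookkeeping with
   the V8 specifics stripped away; real live ranges have lifetime holes,
   so the single-interval Covers() here is a simplification.

     #include <algorithm>
     #include <vector>

     struct Range {
       int start, end;  // half-open [start, end)
       bool Covers(int p) const { return start <= p && p < end; }
     };

     void LinearScan(std::vector<Range*> unhandled,
                     std::vector<Range*>& active,
                     std::vector<Range*>& inactive) {
       // Sorted descending by start so the next range to process sits at
       // the back, mirroring SortUnhandled() + RemoveLast().
       std::sort(unhandled.begin(), unhandled.end(),
                 [](const Range* a, const Range* b) {
                   return a->start > b->start;
                 });
       while (!unhandled.empty()) {
         Range* current = unhandled.back();
         unhandled.pop_back();
         const int position = current->start;

         // Retire expired active ranges; demote those with a hole here.
         for (size_t i = 0; i < active.size();) {
           Range* r = active[i];
           if (r->end <= position) {
             active.erase(active.begin() + i);  // -> handled
           } else if (!r->Covers(position)) {
             inactive.push_back(r);             // -> inactive
             active.erase(active.begin() + i);
           } else {
             ++i;
           }
         }
         // Symmetrically, retire or re-activate inactive ranges, then try
         // a free register and fall back to evicting a blocked one
         // (elided here; see TryAllocateFreeReg/AllocateBlockedReg).
       }
     }
*/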
+        SpillBetween(current, current->Start(), pos->pos());
+        if (!AllocationOk()) return;
+        DCHECK(UnhandledIsSorted());
+        continue;
+      }
+    }
+
+    for (int i = 0; i < active_live_ranges_.length(); ++i) {
+      LiveRange* cur_active = active_live_ranges_.at(i);
+      if (cur_active->End().Value() <= position.Value()) {
+        ActiveToHandled(cur_active);
+        --i;  // The live range was removed from the list of active ranges.
+      } else if (!cur_active->Covers(position)) {
+        ActiveToInactive(cur_active);
+        --i;  // The live range was removed from the list of active ranges.
+      }
+    }
+
+    for (int i = 0; i < inactive_live_ranges_.length(); ++i) {
+      LiveRange* cur_inactive = inactive_live_ranges_.at(i);
+      if (cur_inactive->End().Value() <= position.Value()) {
+        InactiveToHandled(cur_inactive);
+        --i;  // Live range was removed from the list of inactive ranges.
+      } else if (cur_inactive->Covers(position)) {
+        InactiveToActive(cur_inactive);
+        --i;  // Live range was removed from the list of inactive ranges.
+      }
+    }
+
+    DCHECK(!current->HasRegisterAssigned() && !current->IsSpilled());
+
+    bool result = TryAllocateFreeReg(current);
+    if (!AllocationOk()) return;
+
+    if (!result) AllocateBlockedReg(current);
+    if (!AllocationOk()) return;
+
+    if (current->HasRegisterAssigned()) {
+      AddToActive(current);
+    }
+  }
+
+  reusable_slots_.Rewind(0);
+  active_live_ranges_.Rewind(0);
+  inactive_live_ranges_.Rewind(0);
+}
+
+
+const char* RegisterAllocator::RegisterName(int allocation_index) {
+  if (mode_ == GENERAL_REGISTERS) {
+    return Register::AllocationIndexToString(allocation_index);
+  } else {
+    return DoubleRegister::AllocationIndexToString(allocation_index);
+  }
+}
+
+
+void RegisterAllocator::TraceAlloc(const char* msg, ...) {
+  if (FLAG_trace_alloc) {
+    va_list arguments;
+    va_start(arguments, msg);
+    base::OS::VPrint(msg, arguments);
+    va_end(arguments);
+  }
+}
+
+
+bool RegisterAllocator::HasTaggedValue(int virtual_register) const {
+  return code()->IsReference(virtual_register);
+}
+
+
+RegisterKind RegisterAllocator::RequiredRegisterKind(
+    int virtual_register) const {
+  return (code()->IsDouble(virtual_register)) ? DOUBLE_REGISTERS
+                                              : GENERAL_REGISTERS;
+}
+
+
+void RegisterAllocator::AddToActive(LiveRange* range) {
+  TraceAlloc("Add live range %d to active\n", range->id());
+  active_live_ranges_.Add(range, zone());
+}
+
+
+void RegisterAllocator::AddToInactive(LiveRange* range) {
+  TraceAlloc("Add live range %d to inactive\n", range->id());
+  inactive_live_ranges_.Add(range, zone());
+}
+
+
+void RegisterAllocator::AddToUnhandledSorted(LiveRange* range) {
+  if (range == NULL || range->IsEmpty()) return;
+  DCHECK(!range->HasRegisterAssigned() && !range->IsSpilled());
+  DCHECK(allocation_finger_.Value() <= range->Start().Value());
+  for (int i = unhandled_live_ranges_.length() - 1; i >= 0; --i) {
+    LiveRange* cur_range = unhandled_live_ranges_.at(i);
+    if (range->ShouldBeAllocatedBefore(cur_range)) {
+      TraceAlloc("Add live range %d to unhandled at %d\n", range->id(), i + 1);
+      unhandled_live_ranges_.InsertAt(i + 1, range, zone());
+      DCHECK(UnhandledIsSorted());
+      return;
+    }
+  }
+  TraceAlloc("Add live range %d to unhandled at start\n", range->id());
+  unhandled_live_ranges_.InsertAt(0, range, zone());
+  DCHECK(UnhandledIsSorted());
+}
+
+
+void RegisterAllocator::AddToUnhandledUnsorted(LiveRange* range) {
+  if (range == NULL || range->IsEmpty()) return;
+  DCHECK(!range->HasRegisterAssigned() && !range->IsSpilled());
+  TraceAlloc("Add live range %d to unhandled unsorted at end\n", range->id());
+  unhandled_live_ranges_.Add(range, zone());
+}
+
+
+static int UnhandledSortHelper(LiveRange* const* a, LiveRange* const* b) {
+  DCHECK(!(*a)->ShouldBeAllocatedBefore(*b) ||
+         !(*b)->ShouldBeAllocatedBefore(*a));
+  if ((*a)->ShouldBeAllocatedBefore(*b)) return 1;
+  if ((*b)->ShouldBeAllocatedBefore(*a)) return -1;
+  return (*a)->id() - (*b)->id();
+}
+
+
+// Sort the unhandled live ranges so that the ranges to be processed first are
+// at the end of the array list. This is convenient for the register allocation
+// algorithm because it is efficient to remove elements from the end.
+void RegisterAllocator::SortUnhandled() {
+  TraceAlloc("Sort unhandled\n");
+  unhandled_live_ranges_.Sort(&UnhandledSortHelper);
+}
+
+
+bool RegisterAllocator::UnhandledIsSorted() {
+  int len = unhandled_live_ranges_.length();
+  for (int i = 1; i < len; i++) {
+    LiveRange* a = unhandled_live_ranges_.at(i - 1);
+    LiveRange* b = unhandled_live_ranges_.at(i);
+    if (a->Start().Value() < b->Start().Value()) return false;
+  }
+  return true;
+}
+
+
+void RegisterAllocator::FreeSpillSlot(LiveRange* range) {
+  // Check that we are the last range.
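/* Editor's note: an illustrative model of the reusable_slots_ mechanism
   used by FreeSpillSlot/TryReuseSpillSlot here: a stack slot becomes
   reusable when the last part of its owning range dies, and it may be
   handed to a later range only if the old owner ended before the new
   range starts. ReusableSlot and SlotPool are not V8 types.

     #include <vector>

     struct ReusableSlot {
       int index;      // stack slot index
       int owner_end;  // end position of the previous owner's last part
     };

     struct SlotPool {
       std::vector<ReusableSlot> slots;

       // Returns a slot index, or -1 when a fresh slot must be allocated.
       int TryReuse(int new_range_start) {
         if (slots.empty() || slots.front().owner_end > new_range_start) {
           return -1;
         }
         int index = slots.front().index;
         slots.erase(slots.begin());
         return index;
       }

       void Release(int index, int owner_end) {
         slots.push_back({index, owner_end});
       }
     };
*/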
+ if (range->next() != NULL) return; + + if (!range->TopLevel()->HasAllocatedSpillOperand()) return; + + InstructionOperand* spill_operand = range->TopLevel()->GetSpillOperand(); + if (spill_operand->IsConstant()) return; + if (spill_operand->index() >= 0) { + reusable_slots_.Add(range, zone()); + } +} + + +InstructionOperand* RegisterAllocator::TryReuseSpillSlot(LiveRange* range) { + if (reusable_slots_.is_empty()) return NULL; + if (reusable_slots_.first()->End().Value() > + range->TopLevel()->Start().Value()) { + return NULL; + } + InstructionOperand* result = + reusable_slots_.first()->TopLevel()->GetSpillOperand(); + reusable_slots_.Remove(0); + return result; +} + + +void RegisterAllocator::ActiveToHandled(LiveRange* range) { + DCHECK(active_live_ranges_.Contains(range)); + active_live_ranges_.RemoveElement(range); + TraceAlloc("Moving live range %d from active to handled\n", range->id()); + FreeSpillSlot(range); +} + + +void RegisterAllocator::ActiveToInactive(LiveRange* range) { + DCHECK(active_live_ranges_.Contains(range)); + active_live_ranges_.RemoveElement(range); + inactive_live_ranges_.Add(range, zone()); + TraceAlloc("Moving live range %d from active to inactive\n", range->id()); +} + + +void RegisterAllocator::InactiveToHandled(LiveRange* range) { + DCHECK(inactive_live_ranges_.Contains(range)); + inactive_live_ranges_.RemoveElement(range); + TraceAlloc("Moving live range %d from inactive to handled\n", range->id()); + FreeSpillSlot(range); +} + + +void RegisterAllocator::InactiveToActive(LiveRange* range) { + DCHECK(inactive_live_ranges_.Contains(range)); + inactive_live_ranges_.RemoveElement(range); + active_live_ranges_.Add(range, zone()); + TraceAlloc("Moving live range %d from inactive to active\n", range->id()); +} + + +// TryAllocateFreeReg and AllocateBlockedReg assume this +// when allocating local arrays. +STATIC_ASSERT(DoubleRegister::kMaxNumAllocatableRegisters >= + Register::kMaxNumAllocatableRegisters); + + +bool RegisterAllocator::TryAllocateFreeReg(LiveRange* current) { + LifetimePosition free_until_pos[DoubleRegister::kMaxNumAllocatableRegisters]; + + for (int i = 0; i < num_registers_; i++) { + free_until_pos[i] = LifetimePosition::MaxPosition(); + } + + for (int i = 0; i < active_live_ranges_.length(); ++i) { + LiveRange* cur_active = active_live_ranges_.at(i); + free_until_pos[cur_active->assigned_register()] = + LifetimePosition::FromInstructionIndex(0); + } + + for (int i = 0; i < inactive_live_ranges_.length(); ++i) { + LiveRange* cur_inactive = inactive_live_ranges_.at(i); + DCHECK(cur_inactive->End().Value() > current->Start().Value()); + LifetimePosition next_intersection = + cur_inactive->FirstIntersection(current); + if (!next_intersection.IsValid()) continue; + int cur_reg = cur_inactive->assigned_register(); + free_until_pos[cur_reg] = Min(free_until_pos[cur_reg], next_intersection); + } + + InstructionOperand* hint = current->FirstHint(); + if (hint != NULL && (hint->IsRegister() || hint->IsDoubleRegister())) { + int register_index = hint->index(); + TraceAlloc( + "Found reg hint %s (free until [%d) for live range %d (end %d[).\n", + RegisterName(register_index), free_until_pos[register_index].Value(), + current->id(), current->End().Value()); + + // The desired register is free until the end of the current live range. 
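/* Editor's note: once TryAllocateFreeReg (continuing below) knows how long
   the best candidate register stays free, the decision reduces to three
   cases. Descriptive names, not V8's.

     enum class FreeRegOutcome {
       kAllBlocked,       // free_until <= start: fall back to
                          // AllocateBlockedReg
       kSplitThenAssign,  // free at the start but not to the end:
                          // split at free_until, allocate the head
       kAssignWhole       // free for the entire range
     };

     FreeRegOutcome ClassifyFreeReg(int free_until, int range_start,
                                    int range_end) {
       if (free_until <= range_start) return FreeRegOutcome::kAllBlocked;
       if (free_until < range_end) return FreeRegOutcome::kSplitThenAssign;
       return FreeRegOutcome::kAssignWhole;
     }
*/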
+ if (free_until_pos[register_index].Value() >= current->End().Value()) { + TraceAlloc("Assigning preferred reg %s to live range %d\n", + RegisterName(register_index), current->id()); + SetLiveRangeAssignedRegister(current, register_index); + return true; + } + } + + // Find the register which stays free for the longest time. + int reg = 0; + for (int i = 1; i < RegisterCount(); ++i) { + if (free_until_pos[i].Value() > free_until_pos[reg].Value()) { + reg = i; + } + } + + LifetimePosition pos = free_until_pos[reg]; + + if (pos.Value() <= current->Start().Value()) { + // All registers are blocked. + return false; + } + + if (pos.Value() < current->End().Value()) { + // Register reg is available at the range start but becomes blocked before + // the range end. Split current at position where it becomes blocked. + LiveRange* tail = SplitRangeAt(current, pos); + if (!AllocationOk()) return false; + AddToUnhandledSorted(tail); + } + + + // Register reg is available at the range start and is free until + // the range end. + DCHECK(pos.Value() >= current->End().Value()); + TraceAlloc("Assigning free reg %s to live range %d\n", RegisterName(reg), + current->id()); + SetLiveRangeAssignedRegister(current, reg); + + return true; +} + + +void RegisterAllocator::AllocateBlockedReg(LiveRange* current) { + UsePosition* register_use = current->NextRegisterPosition(current->Start()); + if (register_use == NULL) { + // There is no use in the current live range that requires a register. + // We can just spill it. + Spill(current); + return; + } + + + LifetimePosition use_pos[DoubleRegister::kMaxNumAllocatableRegisters]; + LifetimePosition block_pos[DoubleRegister::kMaxNumAllocatableRegisters]; + + for (int i = 0; i < num_registers_; i++) { + use_pos[i] = block_pos[i] = LifetimePosition::MaxPosition(); + } + + for (int i = 0; i < active_live_ranges_.length(); ++i) { + LiveRange* range = active_live_ranges_[i]; + int cur_reg = range->assigned_register(); + if (range->IsFixed() || !range->CanBeSpilled(current->Start())) { + block_pos[cur_reg] = use_pos[cur_reg] = + LifetimePosition::FromInstructionIndex(0); + } else { + UsePosition* next_use = + range->NextUsePositionRegisterIsBeneficial(current->Start()); + if (next_use == NULL) { + use_pos[cur_reg] = range->End(); + } else { + use_pos[cur_reg] = next_use->pos(); + } + } + } + + for (int i = 0; i < inactive_live_ranges_.length(); ++i) { + LiveRange* range = inactive_live_ranges_.at(i); + DCHECK(range->End().Value() > current->Start().Value()); + LifetimePosition next_intersection = range->FirstIntersection(current); + if (!next_intersection.IsValid()) continue; + int cur_reg = range->assigned_register(); + if (range->IsFixed()) { + block_pos[cur_reg] = Min(block_pos[cur_reg], next_intersection); + use_pos[cur_reg] = Min(block_pos[cur_reg], use_pos[cur_reg]); + } else { + use_pos[cur_reg] = Min(use_pos[cur_reg], next_intersection); + } + } + + int reg = 0; + for (int i = 1; i < RegisterCount(); ++i) { + if (use_pos[i].Value() > use_pos[reg].Value()) { + reg = i; + } + } + + LifetimePosition pos = use_pos[reg]; + + if (pos.Value() < register_use->pos().Value()) { + // All registers are blocked before the first use that requires a register. + // Spill starting part of live range up to that use. + SpillBetween(current, current->Start(), register_use->pos()); + return; + } + + if (block_pos[reg].Value() < current->End().Value()) { + // Register becomes blocked before the current range end. Split before that + // position. 
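/* Editor's note: AllocateBlockedReg picks the register whose competing use
   is furthest away, then chooses between spilling the head of the current
   range and evicting the competition. A restatement of that three-way
   decision with descriptive (non-V8) names.

     enum class BlockedRegOutcome {
       kSpillCurrentHead,  // every register is needed before current's
                           // first register use: spill current up to it
       kSplitAtBlockPos,   // a fixed range blocks the register before
                           // current ends: split there, assign, evict
       kAssignAndEvict     // assign the register and evict intersecting
                           // active/inactive ranges
     };

     BlockedRegOutcome DecideBlockedReg(int best_use_pos, int best_block_pos,
                                        int first_register_use,
                                        int current_end) {
       if (best_use_pos < first_register_use) {
         return BlockedRegOutcome::kSpillCurrentHead;
       }
       if (best_block_pos < current_end) {
         return BlockedRegOutcome::kSplitAtBlockPos;
       }
       return BlockedRegOutcome::kAssignAndEvict;
     }
*/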
+ LiveRange* tail = SplitBetween(current, current->Start(), + block_pos[reg].InstructionStart()); + if (!AllocationOk()) return; + AddToUnhandledSorted(tail); + } + + // Register reg is not blocked for the whole range. + DCHECK(block_pos[reg].Value() >= current->End().Value()); + TraceAlloc("Assigning blocked reg %s to live range %d\n", RegisterName(reg), + current->id()); + SetLiveRangeAssignedRegister(current, reg); + + // This register was not free. Thus we need to find and spill + // parts of active and inactive live regions that use the same register + // at the same lifetime positions as current. + SplitAndSpillIntersecting(current); +} + + +LifetimePosition RegisterAllocator::FindOptimalSpillingPos( + LiveRange* range, LifetimePosition pos) { + BasicBlock* block = GetBlock(pos.InstructionStart()); + BasicBlock* loop_header = + block->IsLoopHeader() ? block : code()->GetContainingLoop(block); + + if (loop_header == NULL) return pos; + + UsePosition* prev_use = range->PreviousUsePositionRegisterIsBeneficial(pos); + + while (loop_header != NULL) { + // We are going to spill live range inside the loop. + // If possible try to move spilling position backwards to loop header. + // This will reduce number of memory moves on the back edge. + LifetimePosition loop_start = LifetimePosition::FromInstructionIndex( + loop_header->first_instruction_index()); + + if (range->Covers(loop_start)) { + if (prev_use == NULL || prev_use->pos().Value() < loop_start.Value()) { + // No register beneficial use inside the loop before the pos. + pos = loop_start; + } + } + + // Try hoisting out to an outer loop. + loop_header = code()->GetContainingLoop(loop_header); + } + + return pos; +} + + +void RegisterAllocator::SplitAndSpillIntersecting(LiveRange* current) { + DCHECK(current->HasRegisterAssigned()); + int reg = current->assigned_register(); + LifetimePosition split_pos = current->Start(); + for (int i = 0; i < active_live_ranges_.length(); ++i) { + LiveRange* range = active_live_ranges_[i]; + if (range->assigned_register() == reg) { + UsePosition* next_pos = range->NextRegisterPosition(current->Start()); + LifetimePosition spill_pos = FindOptimalSpillingPos(range, split_pos); + if (next_pos == NULL) { + SpillAfter(range, spill_pos); + } else { + // When spilling between spill_pos and next_pos ensure that the range + // remains spilled at least until the start of the current live range. + // This guarantees that we will not introduce new unhandled ranges that + // start before the current range as this violates allocation invariant + // and will lead to an inconsistent state of active and inactive + // live-ranges: ranges are allocated in order of their start positions, + // ranges are retired from active/inactive when the start of the + // current live-range is larger than their end. 
+        SpillBetweenUntil(range, spill_pos, current->Start(), next_pos->pos());
+      }
+      if (!AllocationOk()) return;
+      ActiveToHandled(range);
+      --i;
+    }
+  }
+
+  for (int i = 0; i < inactive_live_ranges_.length(); ++i) {
+    LiveRange* range = inactive_live_ranges_[i];
+    DCHECK(range->End().Value() > current->Start().Value());
+    if (range->assigned_register() == reg && !range->IsFixed()) {
+      LifetimePosition next_intersection = range->FirstIntersection(current);
+      if (next_intersection.IsValid()) {
+        UsePosition* next_pos = range->NextRegisterPosition(current->Start());
+        if (next_pos == NULL) {
+          SpillAfter(range, split_pos);
+        } else {
+          next_intersection = Min(next_intersection, next_pos->pos());
+          SpillBetween(range, split_pos, next_intersection);
+        }
+        if (!AllocationOk()) return;
+        InactiveToHandled(range);
+        --i;
+      }
+    }
+  }
+}
+
+
+bool RegisterAllocator::IsBlockBoundary(LifetimePosition pos) {
+  return pos.IsInstructionStart() &&
+         InstructionAt(pos.InstructionIndex())->IsBlockStart();
+}
+
+
+LiveRange* RegisterAllocator::SplitRangeAt(LiveRange* range,
+                                           LifetimePosition pos) {
+  DCHECK(!range->IsFixed());
+  TraceAlloc("Splitting live range %d at %d\n", range->id(), pos.Value());
+
+  if (pos.Value() <= range->Start().Value()) return range;
+
+  // We can't properly connect live ranges if the split occurred at the end
+  // of a control instruction.
+  DCHECK(pos.IsInstructionStart() ||
+         !InstructionAt(pos.InstructionIndex())->IsControl());
+
+  int vreg = GetVirtualRegister();
+  if (!AllocationOk()) return NULL;
+  LiveRange* result = LiveRangeFor(vreg);
+  range->SplitAt(pos, result, zone());
+  return result;
+}
+
+
+LiveRange* RegisterAllocator::SplitBetween(LiveRange* range,
+                                           LifetimePosition start,
+                                           LifetimePosition end) {
+  DCHECK(!range->IsFixed());
+  TraceAlloc("Splitting live range %d in position between [%d, %d]\n",
+             range->id(), start.Value(), end.Value());
+
+  LifetimePosition split_pos = FindOptimalSplitPos(start, end);
+  DCHECK(split_pos.Value() >= start.Value());
+  return SplitRangeAt(range, split_pos);
+}
+
+
+LifetimePosition RegisterAllocator::FindOptimalSplitPos(LifetimePosition start,
+                                                        LifetimePosition end) {
+  int start_instr = start.InstructionIndex();
+  int end_instr = end.InstructionIndex();
+  DCHECK(start_instr <= end_instr);
+
+  // We have no choice.
+  if (start_instr == end_instr) return end;
+
+  BasicBlock* start_block = GetBlock(start);
+  BasicBlock* end_block = GetBlock(end);
+
+  if (end_block == start_block) {
+    // The interval is split in the same basic block. Split at the latest
+    // possible position.
+    return end;
+  }
+
+  BasicBlock* block = end_block;
+  // Find header of outermost loop.
+  // TODO(titzer): fix redundancy below.
+  while (code()->GetContainingLoop(block) != NULL &&
+         code()->GetContainingLoop(block)->rpo_number_ >
+             start_block->rpo_number_) {
+    block = code()->GetContainingLoop(block);
+  }
+
+  // We did not find any suitable outer loop. Split at the latest possible
+  // position unless end_block is a loop header itself.
+  if (block == end_block && !end_block->IsLoopHeader()) return end;
+
+  return LifetimePosition::FromInstructionIndex(
+      block->first_instruction_index());
+}
+
+
+void RegisterAllocator::SpillAfter(LiveRange* range, LifetimePosition pos) {
+  LiveRange* second_part = SplitRangeAt(range, pos);
+  if (!AllocationOk()) return;
+  Spill(second_part);
+}
+
+
+void RegisterAllocator::SpillBetween(LiveRange* range, LifetimePosition start,
+                                     LifetimePosition end) {
+  SpillBetweenUntil(range, start, start, end);
+}
+
+
+void RegisterAllocator::SpillBetweenUntil(LiveRange* range,
+                                          LifetimePosition start,
+                                          LifetimePosition until,
+                                          LifetimePosition end) {
+  CHECK(start.Value() < end.Value());
+  LiveRange* second_part = SplitRangeAt(range, start);
+  if (!AllocationOk()) return;
+
+  if (second_part->Start().Value() < end.Value()) {
+    // The split result intersects with [start, end[.
+    // Split it at a position in ]start+1, end[, spill the middle part,
+    // and put the rest on the unhandled list.
+    LiveRange* third_part = SplitBetween(
+        second_part, Max(second_part->Start().InstructionEnd(), until),
+        end.PrevInstruction().InstructionEnd());
+    if (!AllocationOk()) return;
+
+    DCHECK(third_part != second_part);
+
+    Spill(second_part);
+    AddToUnhandledSorted(third_part);
+  } else {
+    // The split result does not intersect with [start, end[.
+    // Nothing to spill. Just put it on the unhandled list as a whole.
+    AddToUnhandledSorted(second_part);
+  }
+}
+
+
+void RegisterAllocator::Spill(LiveRange* range) {
+  DCHECK(!range->IsSpilled());
+  TraceAlloc("Spilling live range %d\n", range->id());
+  LiveRange* first = range->TopLevel();
+
+  if (!first->HasAllocatedSpillOperand()) {
+    InstructionOperand* op = TryReuseSpillSlot(range);
+    if (op == NULL) {
+      // Allocate a new operand referring to the spill slot.
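+      // [Editorial sketch, not part of the upstream patch.] TryReuseSpillSlot
+      // above consults reusable_slots_ (see register-allocator.h): roughly, a
+      // pool of slots whose previous owner ranges have already ended. The
+      // fallback below is, in spirit (hypothetical names):
+      //
+      //   int index = pool.TryReuse(range);          // -1 if nothing fits
+      //   if (index < 0) index = frame.AllocateSpillSlot(is_double);
+      //
+      // i.e. a fresh frame slot is allocated only when no retired slot can
+      // be recycled, keeping stack frames small.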
+ RegisterKind kind = range->Kind(); + int index = code()->frame()->AllocateSpillSlot(kind == DOUBLE_REGISTERS); + if (kind == DOUBLE_REGISTERS) { + op = DoubleStackSlotOperand::Create(index, zone()); + } else { + DCHECK(kind == GENERAL_REGISTERS); + op = StackSlotOperand::Create(index, zone()); + } + } + first->SetSpillOperand(op); + } + range->MakeSpilled(code_zone()); +} + + +int RegisterAllocator::RegisterCount() const { return num_registers_; } + + +#ifdef DEBUG + + +void RegisterAllocator::Verify() const { + for (int i = 0; i < live_ranges()->length(); ++i) { + LiveRange* current = live_ranges()->at(i); + if (current != NULL) current->Verify(); + } +} + + +#endif + + +void RegisterAllocator::SetLiveRangeAssignedRegister(LiveRange* range, + int reg) { + if (range->Kind() == DOUBLE_REGISTERS) { + assigned_double_registers_->Add(reg); + } else { + DCHECK(range->Kind() == GENERAL_REGISTERS); + assigned_registers_->Add(reg); + } + range->set_assigned_register(reg, code_zone()); +} + + +RegisterAllocatorPhase::RegisterAllocatorPhase(const char* name, + RegisterAllocator* allocator) + : CompilationPhase(name, allocator->code()->linkage()->info()), + allocator_(allocator) { + if (FLAG_turbo_stats) { + allocator_zone_start_allocation_size_ = + allocator->zone()->allocation_size(); + } +} + + +RegisterAllocatorPhase::~RegisterAllocatorPhase() { + if (FLAG_turbo_stats) { + unsigned size = allocator_->zone()->allocation_size() - + allocator_zone_start_allocation_size_; + isolate()->GetTStatistics()->SaveTiming(name(), base::TimeDelta(), size); + } +#ifdef DEBUG + if (allocator_ != NULL) allocator_->Verify(); +#endif +} +} +} +} // namespace v8::internal::compiler diff --git a/deps/v8/src/compiler/register-allocator.h b/deps/v8/src/compiler/register-allocator.h new file mode 100644 index 00000000000..881ce37f7d5 --- /dev/null +++ b/deps/v8/src/compiler/register-allocator.h @@ -0,0 +1,548 @@ +// Copyright 2014 the V8 project authors. All rights reserved. +// Use of this source code is governed by a BSD-style license that can be +// found in the LICENSE file. + +#ifndef V8_REGISTER_ALLOCATOR_H_ +#define V8_REGISTER_ALLOCATOR_H_ + +#include "src/allocation.h" +#include "src/compiler/instruction.h" +#include "src/compiler/node.h" +#include "src/compiler/schedule.h" +#include "src/macro-assembler.h" +#include "src/zone.h" + +namespace v8 { +namespace internal { + +// Forward declarations. +class BitVector; +class InstructionOperand; +class UnallocatedOperand; +class ParallelMove; +class PointerMap; + +namespace compiler { + +enum RegisterKind { + UNALLOCATED_REGISTERS, + GENERAL_REGISTERS, + DOUBLE_REGISTERS +}; + + +// This class represents a single point of a InstructionOperand's lifetime. For +// each instruction there are exactly two lifetime positions: the beginning and +// the end of the instruction. Lifetime positions for different instructions are +// disjoint. +class LifetimePosition { + public: + // Return the lifetime position that corresponds to the beginning of + // the instruction with the given index. + static LifetimePosition FromInstructionIndex(int index) { + return LifetimePosition(index * kStep); + } + + // Returns a numeric representation of this lifetime position. + int Value() const { return value_; } + + // Returns the index of the instruction to which this lifetime position + // corresponds. + int InstructionIndex() const { + DCHECK(IsValid()); + return value_ / kStep; + } + + // Returns true if this lifetime position corresponds to the instruction + // start. 
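+  // [Editorial sketch, not part of the upstream patch.] The encoding in
+  // isolation: with kStep == 2, instruction i owns the two positions 2*i
+  // (its start) and 2*i + 1 (its end), so the low bit distinguishes start
+  // from end and a division recovers the instruction index:
+  //
+  //   const int kStep = 2;
+  //   int index = 7;
+  //   int start = index * kStep;       // FromInstructionIndex(7) -> 14
+  //   int end = start + kStep / 2;     // InstructionEnd()        -> 15
+  //   assert(start / kStep == index && end / kStep == index);
+  //   assert((start & (kStep - 1)) == 0);  // start positions are even
+  //   assert((end & (kStep - 1)) == 1);    // end positions are odd
+  //
+  // IsInstructionStart() below is exactly that low-bit test on value_: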
+  bool IsInstructionStart() const { return (value_ & (kStep - 1)) == 0; }
+
+  // Returns the lifetime position for the start of the instruction which
+  // corresponds to this lifetime position.
+  LifetimePosition InstructionStart() const {
+    DCHECK(IsValid());
+    return LifetimePosition(value_ & ~(kStep - 1));
+  }
+
+  // Returns the lifetime position for the end of the instruction which
+  // corresponds to this lifetime position.
+  LifetimePosition InstructionEnd() const {
+    DCHECK(IsValid());
+    return LifetimePosition(InstructionStart().Value() + kStep / 2);
+  }
+
+  // Returns the lifetime position for the beginning of the next instruction.
+  LifetimePosition NextInstruction() const {
+    DCHECK(IsValid());
+    return LifetimePosition(InstructionStart().Value() + kStep);
+  }
+
+  // Returns the lifetime position for the beginning of the previous
+  // instruction.
+  LifetimePosition PrevInstruction() const {
+    DCHECK(IsValid());
+    DCHECK(value_ > 1);
+    return LifetimePosition(InstructionStart().Value() - kStep);
+  }
+
+  // Constructs the lifetime position which does not correspond to any
+  // instruction.
+  LifetimePosition() : value_(-1) {}
+
+  // Returns true if this lifetime position corresponds to some
+  // instruction.
+  bool IsValid() const { return value_ != -1; }
+
+  static inline LifetimePosition Invalid() { return LifetimePosition(); }
+
+  static inline LifetimePosition MaxPosition() {
+    // We have to use this kind of getter instead of a static member due to
+    // a crash bug in GDB.
+    return LifetimePosition(kMaxInt);
+  }
+
+ private:
+  static const int kStep = 2;
+
+  // Code relies on kStep being a power of two.
+  STATIC_ASSERT(IS_POWER_OF_TWO(kStep));
+
+  explicit LifetimePosition(int value) : value_(value) {}
+
+  int value_;
+};
+
+
+// Representation of the non-empty interval [start,end[.
+class UseInterval : public ZoneObject {
+ public:
+  UseInterval(LifetimePosition start, LifetimePosition end)
+      : start_(start), end_(end), next_(NULL) {
+    DCHECK(start.Value() < end.Value());
+  }
+
+  LifetimePosition start() const { return start_; }
+  LifetimePosition end() const { return end_; }
+  UseInterval* next() const { return next_; }
+
+  // Split this interval at the given position without affecting the
+  // live range that owns it. The interval must contain the position.
+  void SplitAt(LifetimePosition pos, Zone* zone);
+
+  // If this interval intersects with {other}, return the smallest position
+  // that belongs to both of them.
+  LifetimePosition Intersect(const UseInterval* other) const {
+    if (other->start().Value() < start_.Value()) return other->Intersect(this);
+    if (other->start().Value() < end_.Value()) return other->start();
+    return LifetimePosition::Invalid();
+  }
+
+  bool Contains(LifetimePosition point) const {
+    return start_.Value() <= point.Value() && point.Value() < end_.Value();
+  }
+
+  void set_start(LifetimePosition start) { start_ = start; }
+  void set_next(UseInterval* next) { next_ = next; }
+
+  LifetimePosition start_;
+  LifetimePosition end_;
+  UseInterval* next_;
+};
+
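+// [Editorial sketch, not part of the upstream patch.] Intersect() above on
+// plain half-open intervals: order the two by start; they overlap iff the
+// later start falls before the earlier end, and that later start is the
+// first common position.
+//
+//   int IntersectDemo(int a_start, int a_end, int b_start, int b_end) {
+//     if (b_start < a_start)
+//       return IntersectDemo(b_start, b_end, a_start, a_end);  // swap
+//     if (b_start < a_end) return b_start;  // first position in both
+//     return -1;                            // no intersection
+//   }
+//
+// e.g. [2,6[ and [4,9[ first intersect at 4, while [2,6[ and [6,9[ are
+// disjoint.
+
+// Representation of a use position.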
+class UsePosition : public ZoneObject {
+ public:
+  UsePosition(LifetimePosition pos, InstructionOperand* operand,
+              InstructionOperand* hint);
+
+  InstructionOperand* operand() const { return operand_; }
+  bool HasOperand() const { return operand_ != NULL; }
+
+  InstructionOperand* hint() const { return hint_; }
+  bool HasHint() const;
+  bool RequiresRegister() const;
+  bool RegisterIsBeneficial() const;
+
+  LifetimePosition pos() const { return pos_; }
+  UsePosition* next() const { return next_; }
+
+  void set_next(UsePosition* next) { next_ = next; }
+
+  InstructionOperand* const operand_;
+  InstructionOperand* const hint_;
+  LifetimePosition const pos_;
+  UsePosition* next_;
+  bool requires_reg_;
+  bool register_beneficial_;
+};
+
+// Representation of SSA values' live ranges as a collection of (continuous)
+// intervals over the instruction ordering.
+class LiveRange : public ZoneObject {
+ public:
+  static const int kInvalidAssignment = 0x7fffffff;
+
+  LiveRange(int id, Zone* zone);
+
+  UseInterval* first_interval() const { return first_interval_; }
+  UsePosition* first_pos() const { return first_pos_; }
+  LiveRange* parent() const { return parent_; }
+  LiveRange* TopLevel() { return (parent_ == NULL) ? this : parent_; }
+  LiveRange* next() const { return next_; }
+  bool IsChild() const { return parent() != NULL; }
+  int id() const { return id_; }
+  bool IsFixed() const { return id_ < 0; }
+  bool IsEmpty() const { return first_interval() == NULL; }
+  InstructionOperand* CreateAssignedOperand(Zone* zone);
+  int assigned_register() const { return assigned_register_; }
+  int spill_start_index() const { return spill_start_index_; }
+  void set_assigned_register(int reg, Zone* zone);
+  void MakeSpilled(Zone* zone);
+  bool is_phi() const { return is_phi_; }
+  void set_is_phi(bool is_phi) { is_phi_ = is_phi; }
+  bool is_non_loop_phi() const { return is_non_loop_phi_; }
+  void set_is_non_loop_phi(bool is_non_loop_phi) {
+    is_non_loop_phi_ = is_non_loop_phi;
+  }
+
+  // Returns the use position in this live range that follows both start
+  // and the last processed use position.
+  // Modifies the internal state of the live range!
+  UsePosition* NextUsePosition(LifetimePosition start);
+
+  // Returns the use position for which a register is required in this live
+  // range and which follows both start and the last processed use position.
+  // Modifies the internal state of the live range!
+  UsePosition* NextRegisterPosition(LifetimePosition start);
+
+  // Returns the use position for which a register is beneficial in this live
+  // range and which follows both start and the last processed use position.
+  // Modifies the internal state of the live range!
+  UsePosition* NextUsePositionRegisterIsBeneficial(LifetimePosition start);
+
+  // Returns the use position for which a register is beneficial in this live
+  // range and which precedes start.
+  UsePosition* PreviousUsePositionRegisterIsBeneficial(LifetimePosition start);
+
+  // Can this live range be spilled at this position?
+  bool CanBeSpilled(LifetimePosition pos);
+
+  // Split this live range at the given position, which must follow the start
+  // of the range.
+  // All uses following the given position will be moved from this
+  // live range to the result live range.
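+  // [Editorial illustration, not part of the upstream patch.] Example: a
+  // range covering [10,20[ and [30,40[ with uses at 12 and 34, split at
+  // position 16, keeps [10,16[ and the use at 12 in this range, while the
+  // result range receives [16,20[, [30,40[ and the use at 34.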
+ void SplitAt(LifetimePosition position, LiveRange* result, Zone* zone); + + RegisterKind Kind() const { return kind_; } + bool HasRegisterAssigned() const { + return assigned_register_ != kInvalidAssignment; + } + bool IsSpilled() const { return spilled_; } + + InstructionOperand* current_hint_operand() const { + DCHECK(current_hint_operand_ == FirstHint()); + return current_hint_operand_; + } + InstructionOperand* FirstHint() const { + UsePosition* pos = first_pos_; + while (pos != NULL && !pos->HasHint()) pos = pos->next(); + if (pos != NULL) return pos->hint(); + return NULL; + } + + LifetimePosition Start() const { + DCHECK(!IsEmpty()); + return first_interval()->start(); + } + + LifetimePosition End() const { + DCHECK(!IsEmpty()); + return last_interval_->end(); + } + + bool HasAllocatedSpillOperand() const; + InstructionOperand* GetSpillOperand() const { return spill_operand_; } + void SetSpillOperand(InstructionOperand* operand); + + void SetSpillStartIndex(int start) { + spill_start_index_ = Min(start, spill_start_index_); + } + + bool ShouldBeAllocatedBefore(const LiveRange* other) const; + bool CanCover(LifetimePosition position) const; + bool Covers(LifetimePosition position); + LifetimePosition FirstIntersection(LiveRange* other); + + // Add a new interval or a new use position to this live range. + void EnsureInterval(LifetimePosition start, LifetimePosition end, Zone* zone); + void AddUseInterval(LifetimePosition start, LifetimePosition end, Zone* zone); + void AddUsePosition(LifetimePosition pos, InstructionOperand* operand, + InstructionOperand* hint, Zone* zone); + + // Shorten the most recently added interval by setting a new start. + void ShortenTo(LifetimePosition start); + +#ifdef DEBUG + // True if target overlaps an existing interval. + bool HasOverlap(UseInterval* target) const; + void Verify() const; +#endif + + private: + void ConvertOperands(Zone* zone); + UseInterval* FirstSearchIntervalForPosition(LifetimePosition position) const; + void AdvanceLastProcessedMarker(UseInterval* to_start_of, + LifetimePosition but_not_past) const; + + int id_; + bool spilled_; + bool is_phi_; + bool is_non_loop_phi_; + RegisterKind kind_; + int assigned_register_; + UseInterval* last_interval_; + UseInterval* first_interval_; + UsePosition* first_pos_; + LiveRange* parent_; + LiveRange* next_; + // This is used as a cache, it doesn't affect correctness. + mutable UseInterval* current_interval_; + UsePosition* last_processed_use_; + // This is used as a cache, it's invalid outside of BuildLiveRanges. + InstructionOperand* current_hint_operand_; + InstructionOperand* spill_operand_; + int spill_start_index_; + + friend class RegisterAllocator; // Assigns to kind_. +}; + + +class RegisterAllocator BASE_EMBEDDED { + public: + explicit RegisterAllocator(InstructionSequence* code); + + static void TraceAlloc(const char* msg, ...); + + // Checks whether the value of a given virtual register is a reference. + // TODO(titzer): rename this to IsReference. + bool HasTaggedValue(int virtual_register) const; + + // Returns the register kind required by the given virtual register. 
+ RegisterKind RequiredRegisterKind(int virtual_register) const; + + bool Allocate(); + + const ZoneList<LiveRange*>* live_ranges() const { return &live_ranges_; } + const Vector<LiveRange*>* fixed_live_ranges() const { + return &fixed_live_ranges_; + } + const Vector<LiveRange*>* fixed_double_live_ranges() const { + return &fixed_double_live_ranges_; + } + + inline InstructionSequence* code() const { return code_; } + + // This zone is for datastructures only needed during register allocation. + inline Zone* zone() { return &zone_; } + + // This zone is for InstructionOperands and moves that live beyond register + // allocation. + inline Zone* code_zone() { return code()->zone(); } + + int GetVirtualRegister() { + int vreg = code()->NextVirtualRegister(); + if (vreg >= UnallocatedOperand::kMaxVirtualRegisters) { + allocation_ok_ = false; + // Maintain the invariant that we return something below the maximum. + return 0; + } + return vreg; + } + + bool AllocationOk() { return allocation_ok_; } + +#ifdef DEBUG + void Verify() const; +#endif + + BitVector* assigned_registers() { return assigned_registers_; } + BitVector* assigned_double_registers() { return assigned_double_registers_; } + + private: + void MeetRegisterConstraints(); + void ResolvePhis(); + void BuildLiveRanges(); + void AllocateGeneralRegisters(); + void AllocateDoubleRegisters(); + void ConnectRanges(); + void ResolveControlFlow(); + void PopulatePointerMaps(); // TODO(titzer): rename to PopulateReferenceMaps. + void AllocateRegisters(); + bool CanEagerlyResolveControlFlow(BasicBlock* block) const; + inline bool SafePointsAreInOrder() const; + + // Liveness analysis support. + void InitializeLivenessAnalysis(); + BitVector* ComputeLiveOut(BasicBlock* block); + void AddInitialIntervals(BasicBlock* block, BitVector* live_out); + bool IsOutputRegisterOf(Instruction* instr, int index); + bool IsOutputDoubleRegisterOf(Instruction* instr, int index); + void ProcessInstructions(BasicBlock* block, BitVector* live); + void MeetRegisterConstraints(BasicBlock* block); + void MeetConstraintsBetween(Instruction* first, Instruction* second, + int gap_index); + void MeetRegisterConstraintsForLastInstructionInBlock(BasicBlock* block); + void ResolvePhis(BasicBlock* block); + + // Helper methods for building intervals. + InstructionOperand* AllocateFixed(UnallocatedOperand* operand, int pos, + bool is_tagged); + LiveRange* LiveRangeFor(InstructionOperand* operand); + void Define(LifetimePosition position, InstructionOperand* operand, + InstructionOperand* hint); + void Use(LifetimePosition block_start, LifetimePosition position, + InstructionOperand* operand, InstructionOperand* hint); + void AddConstraintsGapMove(int index, InstructionOperand* from, + InstructionOperand* to); + + // Helper methods for updating the life range lists. + void AddToActive(LiveRange* range); + void AddToInactive(LiveRange* range); + void AddToUnhandledSorted(LiveRange* range); + void AddToUnhandledUnsorted(LiveRange* range); + void SortUnhandled(); + bool UnhandledIsSorted(); + void ActiveToHandled(LiveRange* range); + void ActiveToInactive(LiveRange* range); + void InactiveToHandled(LiveRange* range); + void InactiveToActive(LiveRange* range); + void FreeSpillSlot(LiveRange* range); + InstructionOperand* TryReuseSpillSlot(LiveRange* range); + + // Helper methods for allocating registers. + bool TryAllocateFreeReg(LiveRange* range); + void AllocateBlockedReg(LiveRange* range); + + // Live range splitting helpers. 
+
+  // Split the given range at the given position.
+  // If range starts at or after the given position then the
+  // original range is returned.
+  // Otherwise returns the live range that starts at pos and contains
+  // all uses from the original range that follow pos. Uses at pos will
+  // still be owned by the original range after splitting.
+  LiveRange* SplitRangeAt(LiveRange* range, LifetimePosition pos);
+
+  // Split the given range at a position from the interval [start, end].
+  LiveRange* SplitBetween(LiveRange* range, LifetimePosition start,
+                          LifetimePosition end);
+
+  // Find a lifetime position in the interval [start, end] which
+  // is optimal for splitting: it is either the header of the outermost
+  // loop covered by this interval or the latest possible position.
+  LifetimePosition FindOptimalSplitPos(LifetimePosition start,
+                                       LifetimePosition end);
+
+  // Spill the given live range after position pos.
+  void SpillAfter(LiveRange* range, LifetimePosition pos);
+
+  // Spill the given live range after position [start] and up to position
+  // [end].
+  void SpillBetween(LiveRange* range, LifetimePosition start,
+                    LifetimePosition end);
+
+  // Spill the given live range after position [start] and up to position
+  // [end]. The range is guaranteed to be spilled at least until position
+  // [until].
+  void SpillBetweenUntil(LiveRange* range, LifetimePosition start,
+                         LifetimePosition until, LifetimePosition end);
+
+  void SplitAndSpillIntersecting(LiveRange* range);
+
+  // If we are trying to spill a range inside a loop, try to hoist the
+  // spill position out to the point just before the loop.
+  LifetimePosition FindOptimalSpillingPos(LiveRange* range,
+                                          LifetimePosition pos);
+
+  void Spill(LiveRange* range);
+  bool IsBlockBoundary(LifetimePosition pos);
+
+  // Helper methods for resolving control flow.
+  void ResolveControlFlow(LiveRange* range, BasicBlock* block,
+                          BasicBlock* pred);
+
+  inline void SetLiveRangeAssignedRegister(LiveRange* range, int reg);
+
+  // Return the parallel move that should be used to connect ranges split at
+  // the given position.
+  ParallelMove* GetConnectingParallelMove(LifetimePosition pos);
+
+  // Return the block which contains the given lifetime position.
+  BasicBlock* GetBlock(LifetimePosition pos);
+
+  // Helper methods for the fixed registers.
+  int RegisterCount() const;
+  static int FixedLiveRangeID(int index) { return -index - 1; }
+  static int FixedDoubleLiveRangeID(int index);
+  LiveRange* FixedLiveRangeFor(int index);
+  LiveRange* FixedDoubleLiveRangeFor(int index);
+  LiveRange* LiveRangeFor(int index);
+  GapInstruction* GetLastGap(BasicBlock* block);
+
+  const char* RegisterName(int allocation_index);
+
+  inline Instruction* InstructionAt(int index) {
+    return code()->InstructionAt(index);
+  }
+
+  Zone zone_;
+  InstructionSequence* code_;
+
+  // During liveness analysis keep a mapping from block id to live_in sets
+  // for blocks already analyzed.
+  ZoneList<BitVector*> live_in_sets_;
+
+  // Liveness analysis results.
+ ZoneList<LiveRange*> live_ranges_; + + // Lists of live ranges + EmbeddedVector<LiveRange*, Register::kMaxNumAllocatableRegisters> + fixed_live_ranges_; + EmbeddedVector<LiveRange*, DoubleRegister::kMaxNumAllocatableRegisters> + fixed_double_live_ranges_; + ZoneList<LiveRange*> unhandled_live_ranges_; + ZoneList<LiveRange*> active_live_ranges_; + ZoneList<LiveRange*> inactive_live_ranges_; + ZoneList<LiveRange*> reusable_slots_; + + RegisterKind mode_; + int num_registers_; + + BitVector* assigned_registers_; + BitVector* assigned_double_registers_; + + // Indicates success or failure during register allocation. + bool allocation_ok_; + +#ifdef DEBUG + LifetimePosition allocation_finger_; +#endif + + DISALLOW_COPY_AND_ASSIGN(RegisterAllocator); +}; + + +class RegisterAllocatorPhase : public CompilationPhase { + public: + RegisterAllocatorPhase(const char* name, RegisterAllocator* allocator); + ~RegisterAllocatorPhase(); + + private: + RegisterAllocator* allocator_; + unsigned allocator_zone_start_allocation_size_; + + DISALLOW_COPY_AND_ASSIGN(RegisterAllocatorPhase); +}; +} +} +} // namespace v8::internal::compiler + +#endif // V8_REGISTER_ALLOCATOR_H_ diff --git a/deps/v8/src/compiler/representation-change.h b/deps/v8/src/compiler/representation-change.h new file mode 100644 index 00000000000..bd5fb5f7934 --- /dev/null +++ b/deps/v8/src/compiler/representation-change.h @@ -0,0 +1,411 @@ +// Copyright 2014 the V8 project authors. All rights reserved. +// Use of this source code is governed by a BSD-style license that can be +// found in the LICENSE file. + +#ifndef V8_COMPILER_REPRESENTATION_CHANGE_H_ +#define V8_COMPILER_REPRESENTATION_CHANGE_H_ + +#include "src/compiler/js-graph.h" +#include "src/compiler/machine-operator.h" +#include "src/compiler/node-properties-inl.h" +#include "src/compiler/simplified-operator.h" + +namespace v8 { +namespace internal { +namespace compiler { + +// The types and representations tracked during representation inference +// and change insertion. +// TODO(titzer): First, merge MachineType and RepType. +// TODO(titzer): Second, Use the real type system instead of RepType. +enum RepType { + // Representations. + rBit = 1 << 0, + rWord32 = 1 << 1, + rWord64 = 1 << 2, + rFloat64 = 1 << 3, + rTagged = 1 << 4, + + // Types. + tBool = 1 << 5, + tInt32 = 1 << 6, + tUint32 = 1 << 7, + tInt64 = 1 << 8, + tUint64 = 1 << 9, + tNumber = 1 << 10, + tAny = 1 << 11 +}; + +#define REP_TYPE_STRLEN 24 + +typedef uint16_t RepTypeUnion; + + +inline void RenderRepTypeUnion(char* buf, RepTypeUnion info) { + base::OS::SNPrintF(buf, REP_TYPE_STRLEN, "{%s%s%s%s%s %s%s%s%s%s%s%s}", + (info & rBit) ? "k" : " ", (info & rWord32) ? "w" : " ", + (info & rWord64) ? "q" : " ", + (info & rFloat64) ? "f" : " ", + (info & rTagged) ? "t" : " ", (info & tBool) ? "Z" : " ", + (info & tInt32) ? "I" : " ", (info & tUint32) ? "U" : " ", + (info & tInt64) ? "L" : " ", (info & tUint64) ? "J" : " ", + (info & tNumber) ? "N" : " ", (info & tAny) ? "*" : " "); +} + + +const RepTypeUnion rMask = rBit | rWord32 | rWord64 | rFloat64 | rTagged; +const RepTypeUnion tMask = + tBool | tInt32 | tUint32 | tInt64 | tUint64 | tNumber | tAny; +const RepType rPtr = kPointerSize == 4 ? rWord32 : rWord64; + +// Contains logic related to changing the representation of values for constants +// and other nodes, as well as lowering Simplified->Machine operators. +// Eagerly folds any representation changes for constants. 
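+// [Editorial note, not part of the upstream patch.] A value is described by
+// OR-ing one representation bit with type bits, e.g. a tagged value known to
+// be a signed 32-bit integer is (rTagged | tInt32). A hypothetical caller
+// needing that value as a float64 would ask the changer (below):
+//
+//   Node* converted = changer->GetRepresentationFor(
+//       node, rTagged | tInt32,  // what the producer outputs
+//       rFloat64);               // what the consumer requires
+//
+// which inserts a ChangeTaggedToFloat64 node, or folds the conversion away
+// entirely if node is a constant.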
+class RepresentationChanger { + public: + RepresentationChanger(JSGraph* jsgraph, SimplifiedOperatorBuilder* simplified, + MachineOperatorBuilder* machine, Isolate* isolate) + : jsgraph_(jsgraph), + simplified_(simplified), + machine_(machine), + isolate_(isolate), + testing_type_errors_(false), + type_error_(false) {} + + + Node* GetRepresentationFor(Node* node, RepTypeUnion output_type, + RepTypeUnion use_type) { + if (!IsPowerOf2(output_type & rMask)) { + // There should be only one output representation. + return TypeError(node, output_type, use_type); + } + if ((use_type & rMask) == (output_type & rMask)) { + // Representations are the same. That's a no-op. + return node; + } + if (use_type & rTagged) { + return GetTaggedRepresentationFor(node, output_type); + } else if (use_type & rFloat64) { + return GetFloat64RepresentationFor(node, output_type); + } else if (use_type & rWord32) { + return GetWord32RepresentationFor(node, output_type, use_type & tUint32); + } else if (use_type & rBit) { + return GetBitRepresentationFor(node, output_type); + } else if (use_type & rWord64) { + return GetWord64RepresentationFor(node, output_type); + } else { + return node; + } + } + + Node* GetTaggedRepresentationFor(Node* node, RepTypeUnion output_type) { + // Eagerly fold representation changes for constants. + switch (node->opcode()) { + case IrOpcode::kNumberConstant: + case IrOpcode::kHeapConstant: + return node; // No change necessary. + case IrOpcode::kInt32Constant: + if (output_type & tUint32) { + uint32_t value = ValueOf<uint32_t>(node->op()); + return jsgraph()->Constant(static_cast<double>(value)); + } else if (output_type & tInt32) { + int32_t value = ValueOf<int32_t>(node->op()); + return jsgraph()->Constant(value); + } else if (output_type & rBit) { + return ValueOf<int32_t>(node->op()) == 0 ? jsgraph()->FalseConstant() + : jsgraph()->TrueConstant(); + } else { + return TypeError(node, output_type, rTagged); + } + case IrOpcode::kFloat64Constant: + return jsgraph()->Constant(ValueOf<double>(node->op())); + default: + break; + } + // Select the correct X -> Tagged operator. + Operator* op; + if (output_type & rBit) { + op = simplified()->ChangeBitToBool(); + } else if (output_type & rWord32) { + if (output_type & tUint32) { + op = simplified()->ChangeUint32ToTagged(); + } else if (output_type & tInt32) { + op = simplified()->ChangeInt32ToTagged(); + } else { + return TypeError(node, output_type, rTagged); + } + } else if (output_type & rFloat64) { + op = simplified()->ChangeFloat64ToTagged(); + } else { + return TypeError(node, output_type, rTagged); + } + return jsgraph()->graph()->NewNode(op, node); + } + + Node* GetFloat64RepresentationFor(Node* node, RepTypeUnion output_type) { + // Eagerly fold representation changes for constants. + switch (node->opcode()) { + case IrOpcode::kNumberConstant: + return jsgraph()->Float64Constant(ValueOf<double>(node->op())); + case IrOpcode::kInt32Constant: + if (output_type & tUint32) { + uint32_t value = ValueOf<uint32_t>(node->op()); + return jsgraph()->Float64Constant(static_cast<double>(value)); + } else { + int32_t value = ValueOf<int32_t>(node->op()); + return jsgraph()->Float64Constant(value); + } + case IrOpcode::kFloat64Constant: + return node; // No change necessary. + default: + break; + } + // Select the correct X -> Float64 operator. 
+ Operator* op; + if (output_type & rWord32) { + if (output_type & tUint32) { + op = machine()->ChangeUint32ToFloat64(); + } else { + op = machine()->ChangeInt32ToFloat64(); + } + } else if (output_type & rTagged) { + op = simplified()->ChangeTaggedToFloat64(); + } else { + return TypeError(node, output_type, rFloat64); + } + return jsgraph()->graph()->NewNode(op, node); + } + + Node* GetWord32RepresentationFor(Node* node, RepTypeUnion output_type, + bool use_unsigned) { + // Eagerly fold representation changes for constants. + switch (node->opcode()) { + case IrOpcode::kInt32Constant: + return node; // No change necessary. + case IrOpcode::kNumberConstant: + case IrOpcode::kFloat64Constant: { + double value = ValueOf<double>(node->op()); + if (value < 0) { + DCHECK(IsInt32Double(value)); + int32_t iv = static_cast<int32_t>(value); + return jsgraph()->Int32Constant(iv); + } else { + DCHECK(IsUint32Double(value)); + int32_t iv = static_cast<int32_t>(static_cast<uint32_t>(value)); + return jsgraph()->Int32Constant(iv); + } + } + default: + break; + } + // Select the correct X -> Word32 operator. + Operator* op = NULL; + if (output_type & rFloat64) { + if (output_type & tUint32 || use_unsigned) { + op = machine()->ChangeFloat64ToUint32(); + } else { + op = machine()->ChangeFloat64ToInt32(); + } + } else if (output_type & rTagged) { + if (output_type & tUint32 || use_unsigned) { + op = simplified()->ChangeTaggedToUint32(); + } else { + op = simplified()->ChangeTaggedToInt32(); + } + } else if (output_type & rBit) { + return node; // Sloppy comparison -> word32. + } else { + return TypeError(node, output_type, rWord32); + } + return jsgraph()->graph()->NewNode(op, node); + } + + Node* GetBitRepresentationFor(Node* node, RepTypeUnion output_type) { + // Eagerly fold representation changes for constants. + switch (node->opcode()) { + case IrOpcode::kInt32Constant: { + int32_t value = ValueOf<int32_t>(node->op()); + if (value == 0 || value == 1) return node; + return jsgraph()->OneConstant(); // value != 0 + } + case IrOpcode::kHeapConstant: { + Handle<Object> handle = ValueOf<Handle<Object> >(node->op()); + DCHECK(*handle == isolate()->heap()->true_value() || + *handle == isolate()->heap()->false_value()); + return jsgraph()->Int32Constant( + *handle == isolate()->heap()->true_value() ? 1 : 0); + } + default: + break; + } + // Select the correct X -> Bit operator. + Operator* op; + if (output_type & rWord32) { + return node; // No change necessary. + } else if (output_type & rWord64) { + return node; // TODO(titzer): No change necessary, on 64-bit. + } else if (output_type & rTagged) { + op = simplified()->ChangeBoolToBit(); + } else { + return TypeError(node, output_type, rBit); + } + return jsgraph()->graph()->NewNode(op, node); + } + + Node* GetWord64RepresentationFor(Node* node, RepTypeUnion output_type) { + if (output_type & rBit) { + return node; // Sloppy comparison -> word64 + } + // Can't really convert Word64 to anything else. Purported to be internal. + return TypeError(node, output_type, rWord64); + } + + static RepType TypeForMachineType(MachineType rep) { + // TODO(titzer): merge MachineType and RepType. 
+ switch (rep) { + case kMachineWord8: + return rWord32; + case kMachineWord16: + return rWord32; + case kMachineWord32: + return rWord32; + case kMachineWord64: + return rWord64; + case kMachineFloat64: + return rFloat64; + case kMachineTagged: + return rTagged; + default: + UNREACHABLE(); + return static_cast<RepType>(0); + } + } + + Operator* Int32OperatorFor(IrOpcode::Value opcode) { + switch (opcode) { + case IrOpcode::kNumberAdd: + return machine()->Int32Add(); + case IrOpcode::kNumberSubtract: + return machine()->Int32Sub(); + case IrOpcode::kNumberEqual: + return machine()->Word32Equal(); + case IrOpcode::kNumberLessThan: + return machine()->Int32LessThan(); + case IrOpcode::kNumberLessThanOrEqual: + return machine()->Int32LessThanOrEqual(); + default: + UNREACHABLE(); + return NULL; + } + } + + Operator* Uint32OperatorFor(IrOpcode::Value opcode) { + switch (opcode) { + case IrOpcode::kNumberAdd: + return machine()->Int32Add(); + case IrOpcode::kNumberSubtract: + return machine()->Int32Sub(); + case IrOpcode::kNumberEqual: + return machine()->Word32Equal(); + case IrOpcode::kNumberLessThan: + return machine()->Uint32LessThan(); + case IrOpcode::kNumberLessThanOrEqual: + return machine()->Uint32LessThanOrEqual(); + default: + UNREACHABLE(); + return NULL; + } + } + + Operator* Float64OperatorFor(IrOpcode::Value opcode) { + switch (opcode) { + case IrOpcode::kNumberAdd: + return machine()->Float64Add(); + case IrOpcode::kNumberSubtract: + return machine()->Float64Sub(); + case IrOpcode::kNumberMultiply: + return machine()->Float64Mul(); + case IrOpcode::kNumberDivide: + return machine()->Float64Div(); + case IrOpcode::kNumberModulus: + return machine()->Float64Mod(); + case IrOpcode::kNumberEqual: + return machine()->Float64Equal(); + case IrOpcode::kNumberLessThan: + return machine()->Float64LessThan(); + case IrOpcode::kNumberLessThanOrEqual: + return machine()->Float64LessThanOrEqual(); + default: + UNREACHABLE(); + return NULL; + } + } + + RepType TypeForField(const FieldAccess& access) { + RepType tElement = static_cast<RepType>(0); // TODO(titzer) + RepType rElement = TypeForMachineType(access.representation); + return static_cast<RepType>(tElement | rElement); + } + + RepType TypeForElement(const ElementAccess& access) { + RepType tElement = static_cast<RepType>(0); // TODO(titzer) + RepType rElement = TypeForMachineType(access.representation); + return static_cast<RepType>(tElement | rElement); + } + + RepType TypeForBasePointer(const FieldAccess& access) { + if (access.tag() != 0) return static_cast<RepType>(tAny | rTagged); + return kPointerSize == 8 ? rWord64 : rWord32; + } + + RepType TypeForBasePointer(const ElementAccess& access) { + if (access.tag() != 0) return static_cast<RepType>(tAny | rTagged); + return kPointerSize == 8 ? rWord64 : rWord32; + } + + RepType TypeFromUpperBound(Type* type) { + if (type->Is(Type::None())) + return tAny; // TODO(titzer): should be an error + if (type->Is(Type::Signed32())) return tInt32; + if (type->Is(Type::Unsigned32())) return tUint32; + if (type->Is(Type::Number())) return tNumber; + if (type->Is(Type::Boolean())) return tBool; + return tAny; + } + + private: + JSGraph* jsgraph_; + SimplifiedOperatorBuilder* simplified_; + MachineOperatorBuilder* machine_; + Isolate* isolate_; + + friend class RepresentationChangerTester; // accesses the below fields. + + bool testing_type_errors_; // If {true}, don't abort on a type error. + bool type_error_; // Set when a type error is detected. 
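+  // [Editorial note, not part of the upstream patch.] These two flags are a
+  // testing hook: RepresentationChangerTester sets testing_type_errors_ so
+  // that TypeError() (below) merely records the failure in type_error_ and
+  // returns the node unchanged instead of aborting with V8_Fatal.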
+ + Node* TypeError(Node* node, RepTypeUnion output_type, RepTypeUnion use) { + type_error_ = true; + if (!testing_type_errors_) { + char buf1[REP_TYPE_STRLEN]; + char buf2[REP_TYPE_STRLEN]; + RenderRepTypeUnion(buf1, output_type); + RenderRepTypeUnion(buf2, use); + V8_Fatal(__FILE__, __LINE__, + "RepresentationChangerError: node #%d:%s of rep" + "%s cannot be changed to rep%s", + node->id(), node->op()->mnemonic(), buf1, buf2); + } + return node; + } + + JSGraph* jsgraph() { return jsgraph_; } + Isolate* isolate() { return isolate_; } + SimplifiedOperatorBuilder* simplified() { return simplified_; } + MachineOperatorBuilder* machine() { return machine_; } +}; +} +} +} // namespace v8::internal::compiler + +#endif // V8_COMPILER_REPRESENTATION_CHANGE_H_ diff --git a/deps/v8/src/compiler/schedule.cc b/deps/v8/src/compiler/schedule.cc new file mode 100644 index 00000000000..64766765bf0 --- /dev/null +++ b/deps/v8/src/compiler/schedule.cc @@ -0,0 +1,92 @@ +// Copyright 2013 the V8 project authors. All rights reserved. +// Use of this source code is governed by a BSD-style license that can be +// found in the LICENSE file. + +#include "src/compiler/node.h" +#include "src/compiler/node-properties.h" +#include "src/compiler/node-properties-inl.h" +#include "src/compiler/schedule.h" +#include "src/ostreams.h" + +namespace v8 { +namespace internal { +namespace compiler { + +OStream& operator<<(OStream& os, const BasicBlockData::Control& c) { + switch (c) { + case BasicBlockData::kNone: + return os << "none"; + case BasicBlockData::kGoto: + return os << "goto"; + case BasicBlockData::kBranch: + return os << "branch"; + case BasicBlockData::kReturn: + return os << "return"; + case BasicBlockData::kThrow: + return os << "throw"; + case BasicBlockData::kCall: + return os << "call"; + case BasicBlockData::kDeoptimize: + return os << "deoptimize"; + } + UNREACHABLE(); + return os; +} + + +OStream& operator<<(OStream& os, const Schedule& s) { + // TODO(svenpanne) Const-correct the RPO stuff/iterators. 
+ BasicBlockVector* rpo = const_cast<Schedule*>(&s)->rpo_order(); + for (BasicBlockVectorIter i = rpo->begin(); i != rpo->end(); ++i) { + BasicBlock* block = *i; + os << "--- BLOCK B" << block->id(); + if (block->PredecessorCount() != 0) os << " <- "; + BasicBlock::Predecessors predecessors = block->predecessors(); + bool comma = false; + for (BasicBlock::Predecessors::iterator j = predecessors.begin(); + j != predecessors.end(); ++j) { + if (comma) os << ", "; + comma = true; + os << "B" << (*j)->id(); + } + os << " ---\n"; + for (BasicBlock::const_iterator j = block->begin(); j != block->end(); + ++j) { + Node* node = *j; + os << " " << *node; + if (!NodeProperties::IsControl(node)) { + Bounds bounds = NodeProperties::GetBounds(node); + os << " : "; + bounds.lower->PrintTo(os); + if (!bounds.upper->Is(bounds.lower)) { + os << ".."; + bounds.upper->PrintTo(os); + } + } + os << "\n"; + } + BasicBlock::Control control = block->control_; + if (control != BasicBlock::kNone) { + os << " "; + if (block->control_input_ != NULL) { + os << *block->control_input_; + } else { + os << "Goto"; + } + os << " -> "; + BasicBlock::Successors successors = block->successors(); + comma = false; + for (BasicBlock::Successors::iterator j = successors.begin(); + j != successors.end(); ++j) { + if (comma) os << ", "; + comma = true; + os << "B" << (*j)->id(); + } + os << "\n"; + } + } + return os; +} +} // namespace compiler +} // namespace internal +} // namespace v8 diff --git a/deps/v8/src/compiler/schedule.h b/deps/v8/src/compiler/schedule.h new file mode 100644 index 00000000000..e730f3324ca --- /dev/null +++ b/deps/v8/src/compiler/schedule.h @@ -0,0 +1,335 @@ +// Copyright 2013 the V8 project authors. All rights reserved. +// Use of this source code is governed by a BSD-style license that can be +// found in the LICENSE file. + +#ifndef V8_COMPILER_SCHEDULE_H_ +#define V8_COMPILER_SCHEDULE_H_ + +#include <vector> + +#include "src/v8.h" + +#include "src/compiler/generic-algorithm.h" +#include "src/compiler/generic-graph.h" +#include "src/compiler/generic-node.h" +#include "src/compiler/generic-node-inl.h" +#include "src/compiler/node.h" +#include "src/compiler/opcodes.h" +#include "src/zone.h" + +namespace v8 { +namespace internal { +namespace compiler { + +class BasicBlock; +class Graph; +class ConstructScheduleData; +class CodeGenerator; // Because of a namespace bug in clang. + +class BasicBlockData { + public: + // Possible control nodes that can end a block. + enum Control { + kNone, // Control not initialized yet. + kGoto, // Goto a single successor block. + kBranch, // Branch if true to first successor, otherwise second. + kReturn, // Return a value from this method. + kThrow, // Throw an exception. + kCall, // Call to a possibly deoptimizing or throwing function. + kDeoptimize // Deoptimize. + }; + + int32_t rpo_number_; // special RPO number of the block. + BasicBlock* loop_header_; // Pointer to dominating loop header basic block, + // NULL if none. For loop headers, this points to + // enclosing loop header. + int32_t loop_depth_; // loop nesting, 0 is top-level + int32_t loop_end_; // end of the loop, if this block is a loop header. + int32_t code_start_; // start index of arch-specific code. + int32_t code_end_; // end index of arch-specific code. + bool deferred_; // {true} if this block is considered the slow + // path. + Control control_; // Control at the end of the block. + Node* control_input_; // Input value for control. + NodeVector nodes_; // nodes of this block in forward order. 
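+  // [Editorial sketch, not part of the upstream patch.] The loop fields
+  // above encode loop membership purely through special RPO numbers: a loop
+  // header with rpo_number_ r and loop_end_ e contains exactly the blocks
+  // whose RPO numbers n satisfy r <= n < e, which is what LoopContains()
+  // below checks:
+  //
+  //   bool LoopContainsDemo(int r, int e, int n) {
+  //     if (e < 0) return false;  // not a loop header
+  //     return n >= r && n < e;
+  //   }
+  //   // e.g. a header at RPO 3 with loop_end_ 7 contains blocks 3..6.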
+ + explicit BasicBlockData(Zone* zone) + : rpo_number_(-1), + loop_header_(NULL), + loop_depth_(0), + loop_end_(-1), + code_start_(-1), + code_end_(-1), + deferred_(false), + control_(kNone), + control_input_(NULL), + nodes_(NodeVector::allocator_type(zone)) {} + + inline bool IsLoopHeader() const { return loop_end_ >= 0; } + inline bool LoopContains(BasicBlockData* block) const { + // RPO numbers must be initialized. + DCHECK(rpo_number_ >= 0); + DCHECK(block->rpo_number_ >= 0); + if (loop_end_ < 0) return false; // This is not a loop. + return block->rpo_number_ >= rpo_number_ && block->rpo_number_ < loop_end_; + } + int first_instruction_index() { + DCHECK(code_start_ >= 0); + DCHECK(code_end_ > 0); + DCHECK(code_end_ >= code_start_); + return code_start_; + } + int last_instruction_index() { + DCHECK(code_start_ >= 0); + DCHECK(code_end_ > 0); + DCHECK(code_end_ >= code_start_); + return code_end_ - 1; + } +}; + +OStream& operator<<(OStream& os, const BasicBlockData::Control& c); + +// A basic block contains an ordered list of nodes and ends with a control +// node. Note that if a basic block has phis, then all phis must appear as the +// first nodes in the block. +class BasicBlock V8_FINAL : public GenericNode<BasicBlockData, BasicBlock> { + public: + BasicBlock(GenericGraphBase* graph, int input_count) + : GenericNode<BasicBlockData, BasicBlock>(graph, input_count) {} + + typedef Uses Successors; + typedef Inputs Predecessors; + + Successors successors() { return static_cast<Successors>(uses()); } + Predecessors predecessors() { return static_cast<Predecessors>(inputs()); } + + int PredecessorCount() { return InputCount(); } + BasicBlock* PredecessorAt(int index) { return InputAt(index); } + + int SuccessorCount() { return UseCount(); } + BasicBlock* SuccessorAt(int index) { return UseAt(index); } + + int PredecessorIndexOf(BasicBlock* predecessor) { + BasicBlock::Predecessors predecessors = this->predecessors(); + for (BasicBlock::Predecessors::iterator i = predecessors.begin(); + i != predecessors.end(); ++i) { + if (*i == predecessor) return i.index(); + } + return -1; + } + + inline BasicBlock* loop_header() { + return static_cast<BasicBlock*>(loop_header_); + } + inline BasicBlock* ContainingLoop() { + if (IsLoopHeader()) return this; + return static_cast<BasicBlock*>(loop_header_); + } + + typedef NodeVector::iterator iterator; + iterator begin() { return nodes_.begin(); } + iterator end() { return nodes_.end(); } + + typedef NodeVector::const_iterator const_iterator; + const_iterator begin() const { return nodes_.begin(); } + const_iterator end() const { return nodes_.end(); } + + typedef NodeVector::reverse_iterator reverse_iterator; + reverse_iterator rbegin() { return nodes_.rbegin(); } + reverse_iterator rend() { return nodes_.rend(); } + + private: + DISALLOW_COPY_AND_ASSIGN(BasicBlock); +}; + +typedef GenericGraphVisit::NullNodeVisitor<BasicBlockData, BasicBlock> + NullBasicBlockVisitor; + +typedef zone_allocator<BasicBlock*> BasicBlockPtrZoneAllocator; +typedef std::vector<BasicBlock*, BasicBlockPtrZoneAllocator> BasicBlockVector; +typedef BasicBlockVector::iterator BasicBlockVectorIter; +typedef BasicBlockVector::reverse_iterator BasicBlockVectorRIter; + +// A schedule represents the result of assigning nodes to basic blocks +// and ordering them within basic blocks. Prior to computing a schedule, +// a graph has no notion of control flow ordering other than that induced +// by the graph's dependencies. A schedule is required to generate code. 
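+// [Editorial sketch, not part of the upstream patch.] Typical use of the
+// building API declared below, for an if/else diamond rooted at a Branch
+// node `branch` with body nodes n1/n2 (illustrative only):
+//
+//   Schedule schedule(zone);
+//   BasicBlock* t = schedule.NewBasicBlock();
+//   BasicBlock* f = schedule.NewBasicBlock();
+//   schedule.AddBranch(schedule.entry(), branch, t, f);
+//   schedule.AddNode(t, n1);     // then-side computation
+//   schedule.AddReturn(t, n1);   // ends block t, wires it to exit()
+//   schedule.AddNode(f, n2);     // else-side computation
+//   schedule.AddReturn(f, n2);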
+class Schedule : public GenericGraph<BasicBlock> { + public: + explicit Schedule(Zone* zone) + : GenericGraph<BasicBlock>(zone), + zone_(zone), + all_blocks_(BasicBlockVector::allocator_type(zone)), + nodeid_to_block_(BasicBlockVector::allocator_type(zone)), + rpo_order_(BasicBlockVector::allocator_type(zone)), + immediate_dominator_(BasicBlockVector::allocator_type(zone)) { + NewBasicBlock(); // entry. + NewBasicBlock(); // exit. + SetStart(entry()); + SetEnd(exit()); + } + + // TODO(titzer): rewrite users of these methods to use start() and end(). + BasicBlock* entry() const { return all_blocks_[0]; } // Return entry block. + BasicBlock* exit() const { return all_blocks_[1]; } // Return exit block. + + // Return the block which contains {node}, if any. + BasicBlock* block(Node* node) const { + if (node->id() < static_cast<NodeId>(nodeid_to_block_.size())) { + return nodeid_to_block_[node->id()]; + } + return NULL; + } + + BasicBlock* dominator(BasicBlock* block) { + return immediate_dominator_[block->id()]; + } + + bool IsScheduled(Node* node) { + int length = static_cast<int>(nodeid_to_block_.size()); + if (node->id() >= length) return false; + return nodeid_to_block_[node->id()] != NULL; + } + + BasicBlock* GetBlockById(int block_id) { return all_blocks_[block_id]; } + + int BasicBlockCount() const { return NodeCount(); } + int RpoBlockCount() const { return static_cast<int>(rpo_order_.size()); } + + typedef ContainerPointerWrapper<BasicBlockVector> BasicBlocks; + + // Return a list of all the blocks in the schedule, in arbitrary order. + BasicBlocks all_blocks() { return BasicBlocks(&all_blocks_); } + + // Check if nodes {a} and {b} are in the same block. + inline bool SameBasicBlock(Node* a, Node* b) const { + BasicBlock* block = this->block(a); + return block != NULL && block == this->block(b); + } + + // BasicBlock building: create a new block. + inline BasicBlock* NewBasicBlock() { + BasicBlock* block = + BasicBlock::New(this, 0, static_cast<BasicBlock**>(NULL)); + all_blocks_.push_back(block); + return block; + } + + // BasicBlock building: records that a node will later be added to a block but + // doesn't actually add the node to the block. + inline void PlanNode(BasicBlock* block, Node* node) { + if (FLAG_trace_turbo_scheduler) { + PrintF("Planning node %d for future add to block %d\n", node->id(), + block->id()); + } + DCHECK(this->block(node) == NULL); + SetBlockForNode(block, node); + } + + // BasicBlock building: add a node to the end of the block. + inline void AddNode(BasicBlock* block, Node* node) { + if (FLAG_trace_turbo_scheduler) { + PrintF("Adding node %d to block %d\n", node->id(), block->id()); + } + DCHECK(this->block(node) == NULL || this->block(node) == block); + block->nodes_.push_back(node); + SetBlockForNode(block, node); + } + + // BasicBlock building: add a goto to the end of {block}. + void AddGoto(BasicBlock* block, BasicBlock* succ) { + DCHECK(block->control_ == BasicBlock::kNone); + block->control_ = BasicBlock::kGoto; + AddSuccessor(block, succ); + } + + // BasicBlock building: add a (branching) call at the end of {block}. + void AddCall(BasicBlock* block, Node* call, BasicBlock* cont_block, + BasicBlock* deopt_block) { + DCHECK(block->control_ == BasicBlock::kNone); + DCHECK(call->opcode() == IrOpcode::kCall); + block->control_ = BasicBlock::kCall; + // Insert the deopt block first so that the RPO order builder picks + // it first (and thus it ends up late in the RPO order). 
+ AddSuccessor(block, deopt_block); + AddSuccessor(block, cont_block); + SetControlInput(block, call); + } + + // BasicBlock building: add a branch at the end of {block}. + void AddBranch(BasicBlock* block, Node* branch, BasicBlock* tblock, + BasicBlock* fblock) { + DCHECK(block->control_ == BasicBlock::kNone); + DCHECK(branch->opcode() == IrOpcode::kBranch); + block->control_ = BasicBlock::kBranch; + AddSuccessor(block, tblock); + AddSuccessor(block, fblock); + SetControlInput(block, branch); + } + + // BasicBlock building: add a return at the end of {block}. + void AddReturn(BasicBlock* block, Node* input) { + // TODO(titzer): require a Return node here. + DCHECK(block->control_ == BasicBlock::kNone); + block->control_ = BasicBlock::kReturn; + SetControlInput(block, input); + if (block != exit()) AddSuccessor(block, exit()); + } + + // BasicBlock building: add a throw at the end of {block}. + void AddThrow(BasicBlock* block, Node* input) { + DCHECK(block->control_ == BasicBlock::kNone); + block->control_ = BasicBlock::kThrow; + SetControlInput(block, input); + if (block != exit()) AddSuccessor(block, exit()); + } + + // BasicBlock building: add a deopt at the end of {block}. + void AddDeoptimize(BasicBlock* block, Node* state) { + DCHECK(block->control_ == BasicBlock::kNone); + block->control_ = BasicBlock::kDeoptimize; + SetControlInput(block, state); + block->deferred_ = true; // By default, consider deopts the slow path. + if (block != exit()) AddSuccessor(block, exit()); + } + + friend class Scheduler; + friend class CodeGenerator; + + void AddSuccessor(BasicBlock* block, BasicBlock* succ) { + succ->AppendInput(zone_, block); + } + + BasicBlockVector* rpo_order() { return &rpo_order_; } + + private: + friend class ScheduleVisualizer; + + void SetControlInput(BasicBlock* block, Node* node) { + block->control_input_ = node; + SetBlockForNode(block, node); + } + + void SetBlockForNode(BasicBlock* block, Node* node) { + int length = static_cast<int>(nodeid_to_block_.size()); + if (node->id() >= length) { + nodeid_to_block_.resize(node->id() + 1); + } + nodeid_to_block_[node->id()] = block; + } + + Zone* zone_; + BasicBlockVector all_blocks_; // All basic blocks in the schedule. + BasicBlockVector nodeid_to_block_; // Map from node to containing block. + BasicBlockVector rpo_order_; // Reverse-post-order block list. + BasicBlockVector immediate_dominator_; // Maps to a block's immediate + // dominator, indexed by block + // id. +}; + +OStream& operator<<(OStream& os, const Schedule& s); +} +} +} // namespace v8::internal::compiler + +#endif // V8_COMPILER_SCHEDULE_H_ diff --git a/deps/v8/src/compiler/scheduler.cc b/deps/v8/src/compiler/scheduler.cc new file mode 100644 index 00000000000..6a40091698c --- /dev/null +++ b/deps/v8/src/compiler/scheduler.cc @@ -0,0 +1,1048 @@ +// Copyright 2013 the V8 project authors. All rights reserved. +// Use of this source code is governed by a BSD-style license that can be +// found in the LICENSE file. 
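+// [Editorial note, not part of the upstream patch.] ComputeSchedule() below
+// proceeds in phases: create basic blocks for the control nodes, wire up
+// block successors/predecessors, compute the special RPO order and the
+// immediate-dominator tree, then fix each remaining node's placement
+// between the earliest block its inputs allow (ScheduleEarly) and a late
+// position near its uses (ScheduleLate).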
+ +#include "src/compiler/scheduler.h" + +#include "src/compiler/graph.h" +#include "src/compiler/graph-inl.h" +#include "src/compiler/node.h" +#include "src/compiler/node-properties.h" +#include "src/compiler/node-properties-inl.h" +#include "src/data-flow.h" + +namespace v8 { +namespace internal { +namespace compiler { + +Scheduler::Scheduler(Zone* zone, Graph* graph, Schedule* schedule) + : graph_(graph), + schedule_(schedule), + branches_(NodeVector::allocator_type(zone)), + calls_(NodeVector::allocator_type(zone)), + deopts_(NodeVector::allocator_type(zone)), + returns_(NodeVector::allocator_type(zone)), + loops_and_merges_(NodeVector::allocator_type(zone)), + node_block_placement_(BasicBlockVector::allocator_type(zone)), + unscheduled_uses_(IntVector::allocator_type(zone)), + scheduled_nodes_(NodeVectorVector::allocator_type(zone)), + schedule_root_nodes_(NodeVector::allocator_type(zone)), + schedule_early_rpo_index_(IntVector::allocator_type(zone)) {} + + +Schedule* Scheduler::ComputeSchedule(Graph* graph) { + Zone tmp_zone(graph->zone()->isolate()); + Schedule* schedule = new (graph->zone()) Schedule(graph->zone()); + Scheduler scheduler(&tmp_zone, graph, schedule); + + schedule->AddNode(schedule->end(), graph->end()); + + scheduler.PrepareAuxiliaryNodeData(); + scheduler.CreateBlocks(); + scheduler.WireBlocks(); + scheduler.PrepareAuxiliaryBlockData(); + + Scheduler::ComputeSpecialRPO(schedule); + scheduler.GenerateImmediateDominatorTree(); + + scheduler.PrepareUses(); + scheduler.ScheduleEarly(); + scheduler.ScheduleLate(); + + return schedule; +} + + +class CreateBlockVisitor : public NullNodeVisitor { + public: + explicit CreateBlockVisitor(Scheduler* scheduler) : scheduler_(scheduler) {} + + GenericGraphVisit::Control Post(Node* node) { + Schedule* schedule = scheduler_->schedule_; + switch (node->opcode()) { + case IrOpcode::kIfTrue: + case IrOpcode::kIfFalse: + case IrOpcode::kContinuation: + case IrOpcode::kLazyDeoptimization: { + BasicBlock* block = schedule->NewBasicBlock(); + schedule->AddNode(block, node); + break; + } + case IrOpcode::kLoop: + case IrOpcode::kMerge: { + BasicBlock* block = schedule->NewBasicBlock(); + schedule->AddNode(block, node); + scheduler_->loops_and_merges_.push_back(node); + break; + } + case IrOpcode::kBranch: { + scheduler_->branches_.push_back(node); + break; + } + case IrOpcode::kDeoptimize: { + scheduler_->deopts_.push_back(node); + break; + } + case IrOpcode::kCall: { + if (OperatorProperties::CanLazilyDeoptimize(node->op())) { + scheduler_->calls_.push_back(node); + } + break; + } + case IrOpcode::kReturn: + scheduler_->returns_.push_back(node); + break; + default: + break; + } + + return GenericGraphVisit::CONTINUE; + } + + private: + Scheduler* scheduler_; +}; + + +void Scheduler::CreateBlocks() { + CreateBlockVisitor create_blocks(this); + if (FLAG_trace_turbo_scheduler) { + PrintF("---------------- CREATING BLOCKS ------------------\n"); + } + schedule_->AddNode(schedule_->entry(), graph_->start()); + graph_->VisitNodeInputsFromEnd(&create_blocks); +} + + +void Scheduler::WireBlocks() { + if (FLAG_trace_turbo_scheduler) { + PrintF("----------------- WIRING BLOCKS -------------------\n"); + } + AddSuccessorsForBranches(); + AddSuccessorsForReturns(); + AddSuccessorsForCalls(); + AddSuccessorsForDeopts(); + AddPredecessorsForLoopsAndMerges(); + // TODO(danno): Handle Throw, et al. 
+} + + +void Scheduler::PrepareAuxiliaryNodeData() { + unscheduled_uses_.resize(graph_->NodeCount(), 0); + schedule_early_rpo_index_.resize(graph_->NodeCount(), 0); +} + + +void Scheduler::PrepareAuxiliaryBlockData() { + Zone* zone = schedule_->zone(); + scheduled_nodes_.resize(schedule_->BasicBlockCount(), + NodeVector(NodeVector::allocator_type(zone))); + schedule_->immediate_dominator_.resize(schedule_->BasicBlockCount(), NULL); +} + + +void Scheduler::AddPredecessorsForLoopsAndMerges() { + for (NodeVectorIter i = loops_and_merges_.begin(); + i != loops_and_merges_.end(); ++i) { + Node* merge_or_loop = *i; + BasicBlock* block = schedule_->block(merge_or_loop); + DCHECK(block != NULL); + // For all of the merge's control inputs, add a goto at the end to the + // merge's basic block. + for (InputIter j = (*i)->inputs().begin(); j != (*i)->inputs().end(); ++j) { + if (OperatorProperties::IsBasicBlockBegin((*i)->op())) { + BasicBlock* predecessor_block = schedule_->block(*j); + if ((*j)->opcode() != IrOpcode::kReturn && + (*j)->opcode() != IrOpcode::kDeoptimize) { + DCHECK(predecessor_block != NULL); + if (FLAG_trace_turbo_scheduler) { + IrOpcode::Value opcode = (*i)->opcode(); + PrintF("node %d (%s) in block %d -> block %d\n", (*i)->id(), + IrOpcode::Mnemonic(opcode), predecessor_block->id(), + block->id()); + } + schedule_->AddGoto(predecessor_block, block); + } + } + } + } +} + + +void Scheduler::AddSuccessorsForCalls() { + for (NodeVectorIter i = calls_.begin(); i != calls_.end(); ++i) { + Node* call = *i; + DCHECK(call->opcode() == IrOpcode::kCall); + DCHECK(OperatorProperties::CanLazilyDeoptimize(call->op())); + + Node* lazy_deopt_node = NULL; + Node* cont_node = NULL; + // Find the continuation and lazy-deopt nodes among the uses. + for (UseIter use_iter = call->uses().begin(); + use_iter != call->uses().end(); ++use_iter) { + switch ((*use_iter)->opcode()) { + case IrOpcode::kContinuation: { + DCHECK(cont_node == NULL); + cont_node = *use_iter; + break; + } + case IrOpcode::kLazyDeoptimization: { + DCHECK(lazy_deopt_node == NULL); + lazy_deopt_node = *use_iter; + break; + } + default: + break; + } + } + DCHECK(lazy_deopt_node != NULL); + DCHECK(cont_node != NULL); + BasicBlock* cont_successor_block = schedule_->block(cont_node); + BasicBlock* deopt_successor_block = schedule_->block(lazy_deopt_node); + Node* call_block_node = NodeProperties::GetControlInput(call); + BasicBlock* call_block = schedule_->block(call_block_node); + if (FLAG_trace_turbo_scheduler) { + IrOpcode::Value opcode = call->opcode(); + PrintF("node %d (%s) in block %d -> block %d\n", call->id(), + IrOpcode::Mnemonic(opcode), call_block->id(), + cont_successor_block->id()); + PrintF("node %d (%s) in block %d -> block %d\n", call->id(), + IrOpcode::Mnemonic(opcode), call_block->id(), + deopt_successor_block->id()); + } + schedule_->AddCall(call_block, call, cont_successor_block, + deopt_successor_block); + } +} + + +void Scheduler::AddSuccessorsForDeopts() { + for (NodeVectorIter i = deopts_.begin(); i != deopts_.end(); ++i) { + Node* deopt_block_node = NodeProperties::GetControlInput(*i); + BasicBlock* deopt_block = schedule_->block(deopt_block_node); + DCHECK(deopt_block != NULL); + if (FLAG_trace_turbo_scheduler) { + IrOpcode::Value opcode = (*i)->opcode(); + PrintF("node %d (%s) in block %d -> end\n", (*i)->id(), + IrOpcode::Mnemonic(opcode), deopt_block->id()); + } + schedule_->AddDeoptimize(deopt_block, *i); + } +} + + +void Scheduler::AddSuccessorsForBranches() { + for (NodeVectorIter i = branches_.begin(); i 
!= branches_.end(); ++i) { + Node* branch = *i; + DCHECK(branch->opcode() == IrOpcode::kBranch); + Node* branch_block_node = NodeProperties::GetControlInput(branch); + BasicBlock* branch_block = schedule_->block(branch_block_node); + DCHECK(branch_block != NULL); + UseIter use_iter = branch->uses().begin(); + Node* first_successor = *use_iter; + ++use_iter; + DCHECK(use_iter != branch->uses().end()); + Node* second_successor = *use_iter; + DCHECK(++use_iter == branch->uses().end()); + Node* true_successor_node = first_successor->opcode() == IrOpcode::kIfTrue + ? first_successor + : second_successor; + Node* false_successor_node = first_successor->opcode() == IrOpcode::kIfTrue + ? second_successor + : first_successor; + DCHECK(true_successor_node->opcode() == IrOpcode::kIfTrue); + DCHECK(false_successor_node->opcode() == IrOpcode::kIfFalse); + BasicBlock* true_successor_block = schedule_->block(true_successor_node); + BasicBlock* false_successor_block = schedule_->block(false_successor_node); + DCHECK(true_successor_block != NULL); + DCHECK(false_successor_block != NULL); + if (FLAG_trace_turbo_scheduler) { + IrOpcode::Value opcode = branch->opcode(); + PrintF("node %d (%s) in block %d -> block %d\n", branch->id(), + IrOpcode::Mnemonic(opcode), branch_block->id(), + true_successor_block->id()); + PrintF("node %d (%s) in block %d -> block %d\n", branch->id(), + IrOpcode::Mnemonic(opcode), branch_block->id(), + false_successor_block->id()); + } + schedule_->AddBranch(branch_block, branch, true_successor_block, + false_successor_block); + } +} + + +void Scheduler::AddSuccessorsForReturns() { + for (NodeVectorIter i = returns_.begin(); i != returns_.end(); ++i) { + Node* return_block_node = NodeProperties::GetControlInput(*i); + BasicBlock* return_block = schedule_->block(return_block_node); + DCHECK(return_block != NULL); + if (FLAG_trace_turbo_scheduler) { + IrOpcode::Value opcode = (*i)->opcode(); + PrintF("node %d (%s) in block %d -> end\n", (*i)->id(), + IrOpcode::Mnemonic(opcode), return_block->id()); + } + schedule_->AddReturn(return_block, *i); + } +} + + +BasicBlock* Scheduler::GetCommonDominator(BasicBlock* b1, BasicBlock* b2) { + while (b1 != b2) { + int b1_rpo = GetRPONumber(b1); + int b2_rpo = GetRPONumber(b2); + DCHECK(b1_rpo != b2_rpo); + if (b1_rpo < b2_rpo) { + b2 = schedule_->immediate_dominator_[b2->id()]; + } else { + b1 = schedule_->immediate_dominator_[b1->id()]; + } + } + return b1; +} + + +void Scheduler::GenerateImmediateDominatorTree() { + // Build the dominator graph. TODO(danno): consider using Lengauer & Tarjan's + // if this becomes really slow. + if (FLAG_trace_turbo_scheduler) { + PrintF("------------ IMMEDIATE BLOCK DOMINATORS -----------\n"); + } + for (size_t i = 0; i < schedule_->rpo_order_.size(); i++) { + BasicBlock* current_rpo = schedule_->rpo_order_[i]; + if (current_rpo != schedule_->entry()) { + BasicBlock::Predecessors::iterator current_pred = + current_rpo->predecessors().begin(); + BasicBlock::Predecessors::iterator end = + current_rpo->predecessors().end(); + DCHECK(current_pred != end); + BasicBlock* dominator = *current_pred; + ++current_pred; + // For multiple predecessors, walk up the rpo ordering until a common + // dominator is found. 
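GetCommonDominator above is essentially the intersection step of Cooper, Harvey and Kennedy's "A Simple, Fast Dominance Algorithm", restated over RPO numbers: whichever block sits later in the order is lifted to its immediate dominator until the two walks meet. A minimal standalone sketch, with a hypothetical Block type rather than this patch's BasicBlock:

#include <cstddef>

struct Block {
  int rpo;      // reverse-post-order number; an idom always has a smaller one
  Block* idom;  // immediate dominator; NULL only for the entry block
};

// Lift whichever block has the larger RPO number until the walks meet;
// the meeting point dominates both inputs.
Block* CommonDominator(Block* b1, Block* b2) {
  while (b1 != b2) {
    if (b1->rpo < b2->rpo) {
      b2 = b2->idom;
    } else {
      b1 = b1->idom;
    }
  }
  return b1;
}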
+ int current_rpo_pos = GetRPONumber(current_rpo); + while (current_pred != end) { + // Don't examine backwards edges + BasicBlock* pred = *current_pred; + if (GetRPONumber(pred) < current_rpo_pos) { + dominator = GetCommonDominator(dominator, *current_pred); + } + ++current_pred; + } + schedule_->immediate_dominator_[current_rpo->id()] = dominator; + if (FLAG_trace_turbo_scheduler) { + PrintF("Block %d's idom is %d\n", current_rpo->id(), dominator->id()); + } + } + } +} + + +class ScheduleEarlyNodeVisitor : public NullNodeVisitor { + public: + explicit ScheduleEarlyNodeVisitor(Scheduler* scheduler) + : has_changed_rpo_constraints_(true), + scheduler_(scheduler), + schedule_(scheduler->schedule_) {} + + GenericGraphVisit::Control Pre(Node* node) { + int id = node->id(); + int max_rpo = 0; + // Fixed nodes already know their schedule early position. + if (IsFixedNode(node)) { + BasicBlock* block = schedule_->block(node); + DCHECK(block != NULL); + max_rpo = block->rpo_number_; + if (scheduler_->schedule_early_rpo_index_[id] != max_rpo) { + has_changed_rpo_constraints_ = true; + } + scheduler_->schedule_early_rpo_index_[id] = max_rpo; + if (FLAG_trace_turbo_scheduler) { + PrintF("Node %d pre-scheduled early at rpo limit %d\n", id, max_rpo); + } + } + return GenericGraphVisit::CONTINUE; + } + + GenericGraphVisit::Control Post(Node* node) { + int id = node->id(); + int max_rpo = 0; + // Otherwise, the minimum rpo for the node is the max of all of the inputs. + if (!IsFixedNode(node)) { + DCHECK(!OperatorProperties::IsBasicBlockBegin(node->op())); + for (InputIter i = node->inputs().begin(); i != node->inputs().end(); + ++i) { + int control_rpo = scheduler_->schedule_early_rpo_index_[(*i)->id()]; + if (control_rpo > max_rpo) { + max_rpo = control_rpo; + } + } + if (scheduler_->schedule_early_rpo_index_[id] != max_rpo) { + has_changed_rpo_constraints_ = true; + } + scheduler_->schedule_early_rpo_index_[id] = max_rpo; + if (FLAG_trace_turbo_scheduler) { + PrintF("Node %d post-scheduled early at rpo limit %d\n", id, max_rpo); + } + } + return GenericGraphVisit::CONTINUE; + } + + static bool IsFixedNode(Node* node) { + return OperatorProperties::HasFixedSchedulePosition(node->op()) || + !OperatorProperties::CanBeScheduled(node->op()); + } + + // TODO(mstarzinger): Dirty hack to unblock others, schedule early should be + // rewritten to use a pre-order traversal from the start instead. + bool has_changed_rpo_constraints_; + + private: + Scheduler* scheduler_; + Schedule* schedule_; +}; + + +void Scheduler::ScheduleEarly() { + if (FLAG_trace_turbo_scheduler) { + PrintF("------------------- SCHEDULE EARLY ----------------\n"); + } + + int fixpoint_count = 0; + ScheduleEarlyNodeVisitor visitor(this); + while (visitor.has_changed_rpo_constraints_) { + visitor.has_changed_rpo_constraints_ = false; + graph_->VisitNodeInputsFromEnd(&visitor); + fixpoint_count++; + } + + if (FLAG_trace_turbo_scheduler) { + PrintF("It took %d iterations to determine fixpoint\n", fixpoint_count); + } +} + + +class PrepareUsesVisitor : public NullNodeVisitor { + public: + explicit PrepareUsesVisitor(Scheduler* scheduler) + : scheduler_(scheduler), schedule_(scheduler->schedule_) {} + + GenericGraphVisit::Control Pre(Node* node) { + // Some nodes must be scheduled explicitly to ensure they are in exactly the + // right place; it's a convenient place during the preparation of use counts + // to schedule them. 
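The fixpoint that ScheduleEarly runs can be stated compactly: a floating node may not be placed before any of its inputs, so its earliest legal RPO index is the maximum over its inputs' indices, iterated until nothing changes. A hedged sketch over plain vectors (names and data layout are illustrative, not V8's API); min_rpo must be seeded with the pinned index for fixed nodes and 0 elsewhere:

#include <algorithm>
#include <cstddef>
#include <vector>

void ScheduleEarlyFixpoint(const std::vector<std::vector<int> >& inputs,
                           const std::vector<bool>& fixed,
                           std::vector<int>* min_rpo) {
  bool changed = true;
  while (changed) {  // iterate to a fixpoint, like the visitor above
    changed = false;
    for (size_t id = 0; id < inputs.size(); ++id) {
      if (fixed[id]) continue;  // pinned nodes already know their block
      int max_rpo = 0;
      for (size_t j = 0; j < inputs[id].size(); ++j)
        max_rpo = std::max(max_rpo, (*min_rpo)[inputs[id][j]]);
      if ((*min_rpo)[id] != max_rpo) {
        (*min_rpo)[id] = max_rpo;
        changed = true;
      }
    }
  }
}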
+    if (!schedule_->IsScheduled(node) &&
+        OperatorProperties::HasFixedSchedulePosition(node->op())) {
+      if (FLAG_trace_turbo_scheduler) {
+        PrintF("Fixed position node %d is unscheduled, scheduling now\n",
+               node->id());
+      }
+      IrOpcode::Value opcode = node->opcode();
+      BasicBlock* block =
+          opcode == IrOpcode::kParameter
+              ? schedule_->entry()
+              : schedule_->block(NodeProperties::GetControlInput(node));
+      DCHECK(block != NULL);
+      schedule_->AddNode(block, node);
+    }
+
+    if (OperatorProperties::IsScheduleRoot(node->op())) {
+      scheduler_->schedule_root_nodes_.push_back(node);
+    }
+
+    return GenericGraphVisit::CONTINUE;
+  }
+
+  void PostEdge(Node* from, int index, Node* to) {
+    // If the edge is from an unscheduled node, then tally it in the use count
+    // for all of its inputs. The same criterion will be used in ScheduleLate
+    // for decrementing use counts.
+    if (!schedule_->IsScheduled(from) &&
+        OperatorProperties::CanBeScheduled(from->op())) {
+      DCHECK(!OperatorProperties::HasFixedSchedulePosition(from->op()));
+      ++scheduler_->unscheduled_uses_[to->id()];
+      if (FLAG_trace_turbo_scheduler) {
+        PrintF("Incrementing uses of node %d from %d to %d\n", to->id(),
+               from->id(), scheduler_->unscheduled_uses_[to->id()]);
+      }
+    }
+  }
+
+ private:
+  Scheduler* scheduler_;
+  Schedule* schedule_;
+};
+
+
+void Scheduler::PrepareUses() {
+  if (FLAG_trace_turbo_scheduler) {
+    PrintF("------------------- PREPARE USES ------------------\n");
+  }
+  // Count the uses of every node; this will be used to ensure that all of a
+  // node's uses are scheduled before the node itself.
+  PrepareUsesVisitor prepare_uses(this);
+  graph_->VisitNodeInputsFromEnd(&prepare_uses);
+}
+
+
+class ScheduleLateNodeVisitor : public NullNodeVisitor {
+ public:
+  explicit ScheduleLateNodeVisitor(Scheduler* scheduler)
+      : scheduler_(scheduler), schedule_(scheduler_->schedule_) {}
+
+  GenericGraphVisit::Control Pre(Node* node) {
+    // Don't schedule nodes that cannot be scheduled or are already scheduled.
+    if (!OperatorProperties::CanBeScheduled(node->op()) ||
+        schedule_->IsScheduled(node)) {
+      return GenericGraphVisit::CONTINUE;
+    }
+    DCHECK(!OperatorProperties::HasFixedSchedulePosition(node->op()));
+
+    // If all the uses of a node have been scheduled, then the node itself can
+    // be scheduled.
+    bool eligible = scheduler_->unscheduled_uses_[node->id()] == 0;
+    if (FLAG_trace_turbo_scheduler) {
+      PrintF("Testing for schedule eligibility for node %d -> %s\n",
+             node->id(), eligible ? "true" : "false");
+    }
+    if (!eligible) return GenericGraphVisit::DEFER;
+
+    // Determine the dominating block for all of the uses of this node. It is
+    // the latest block that this node can be scheduled in.
+    BasicBlock* block = NULL;
+    for (Node::Uses::iterator i = node->uses().begin();
+         i != node->uses().end(); ++i) {
+      BasicBlock* use_block = GetBlockForUse(i.edge());
+      block = block == NULL ? use_block : use_block == NULL
+                                              ? block
+                                              : scheduler_->GetCommonDominator(
+                                                    block, use_block);
+    }
+    DCHECK(block != NULL);
+
+    int min_rpo = scheduler_->schedule_early_rpo_index_[node->id()];
+    if (FLAG_trace_turbo_scheduler) {
+      PrintF(
+          "Schedule late conservative for node %d is block %d at "
+          "loop depth %d, min rpo = %d\n",
+          node->id(), block->id(), block->loop_depth_, min_rpo);
+    }
+    // Hoist nodes out of loops if possible. Nodes can be hoisted iteratively
+    // into enclosing loop pre-headers until they would precede their
+    // ScheduleEarly position.
+    BasicBlock* hoist_block = block;
+    while (hoist_block != NULL && hoist_block->rpo_number_ >= min_rpo) {
+      if (hoist_block->loop_depth_ < block->loop_depth_) {
+        block = hoist_block;
+        if (FLAG_trace_turbo_scheduler) {
+          PrintF("Hoisting node %d to block %d\n", node->id(), block->id());
+        }
+      }
+      // Try to hoist to the pre-header of the loop header.
+      hoist_block = hoist_block->loop_header();
+      if (hoist_block != NULL) {
+        BasicBlock* pre_header = schedule_->dominator(hoist_block);
+        DCHECK(pre_header == NULL ||
+               *hoist_block->predecessors().begin() == pre_header);
+        if (FLAG_trace_turbo_scheduler) {
+          PrintF(
+              "Try hoist to pre-header block %d of loop header block %d,"
+              " depth would be %d\n",
+              pre_header->id(), hoist_block->id(), pre_header->loop_depth_);
+        }
+        hoist_block = pre_header;
+      }
+    }
+
+    ScheduleNode(block, node);
+
+    return GenericGraphVisit::CONTINUE;
+  }
+
+ private:
+  BasicBlock* GetBlockForUse(Node::Edge edge) {
+    Node* use = edge.from();
+    IrOpcode::Value opcode = use->opcode();
+    // If the use is a phi, forward through the phi to the basic block
+    // corresponding to the phi's input.
+    if (opcode == IrOpcode::kPhi || opcode == IrOpcode::kEffectPhi) {
+      int index = edge.index();
+      if (FLAG_trace_turbo_scheduler) {
+        PrintF("Use %d is input %d to a phi\n", use->id(), index);
+      }
+      use = NodeProperties::GetControlInput(use, 0);
+      opcode = use->opcode();
+      DCHECK(opcode == IrOpcode::kMerge || opcode == IrOpcode::kLoop);
+      use = NodeProperties::GetControlInput(use, index);
+    }
+    BasicBlock* result = schedule_->block(use);
+    if (result == NULL) return NULL;
+    if (FLAG_trace_turbo_scheduler) {
+      PrintF("Must dominate use %d in block %d\n", use->id(), result->id());
+    }
+    return result;
+  }
+
+  bool IsNodeEligible(Node* node) {
+    bool eligible = scheduler_->unscheduled_uses_[node->id()] == 0;
+    return eligible;
+  }
+
+  void ScheduleNode(BasicBlock* block, Node* node) {
+    schedule_->PlanNode(block, node);
+    scheduler_->scheduled_nodes_[block->id()].push_back(node);
+
+    // Reduce the use count of the node's inputs to potentially make them
+    // schedulable.
+    for (InputIter i = node->inputs().begin(); i != node->inputs().end();
+         ++i) {
+      DCHECK(scheduler_->unscheduled_uses_[(*i)->id()] > 0);
+      --scheduler_->unscheduled_uses_[(*i)->id()];
+      if (FLAG_trace_turbo_scheduler) {
+        PrintF("Decrementing use count for node %d from node %d (now %d)\n",
+               (*i)->id(), i.edge().from()->id(),
+               scheduler_->unscheduled_uses_[(*i)->id()]);
+        if (scheduler_->unscheduled_uses_[(*i)->id()] == 0) {
+          PrintF("node %d is now eligible for scheduling\n", (*i)->id());
+        }
+      }
+    }
+  }
+
+  Scheduler* scheduler_;
+  Schedule* schedule_;
+};
+
+
+void Scheduler::ScheduleLate() {
+  if (FLAG_trace_turbo_scheduler) {
+    PrintF("------------------- SCHEDULE LATE -----------------\n");
+  }
+
+  // Schedule: place nodes in the dominator block of all their uses.
+  ScheduleLateNodeVisitor schedule_late_visitor(this);
+
+  for (NodeVectorIter i = schedule_root_nodes_.begin();
+       i != schedule_root_nodes_.end(); ++i) {
+    GenericGraphVisit::Visit<ScheduleLateNodeVisitor,
+                             NodeInputIterationTraits<Node> >(
+        graph_, *i, &schedule_late_visitor);
+  }
+
+  // Add collected nodes for basic blocks to their blocks in the right order.
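The hoisting loop above implements the usual global-code-motion placement rule: start from the latest legal block (the common dominator of all uses) and walk outward through loop pre-headers while staying at or after the schedule-early limit, keeping the shallowest loop depth seen. A condensed sketch of that rule under a hypothetical Block type, not the BasicBlock of this patch:

#include <cstddef>

struct Block {
  int rpo;            // special-RPO number of this block
  int loop_depth;     // nesting depth; shallower blocks execute less often
  Block* pre_header;  // pre-header of the enclosing loop, or NULL at top level
};

// Pick the shallowest block between the latest legal block and the
// schedule-early limit (min_rpo).
Block* PickBlock(Block* latest, int min_rpo) {
  Block* best = latest;
  for (Block* b = latest; b != NULL && b->rpo >= min_rpo; b = b->pre_header) {
    if (b->loop_depth < best->loop_depth) best = b;  // hoist out of the loop
  }
  return best;
}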
+  int block_num = 0;
+  for (NodeVectorVectorIter i = scheduled_nodes_.begin();
+       i != scheduled_nodes_.end(); ++i) {
+    for (NodeVectorRIter j = i->rbegin(); j != i->rend(); ++j) {
+      schedule_->AddNode(schedule_->all_blocks_.at(block_num), *j);
+    }
+    block_num++;
+  }
+}
+
+
+// Numbering for BasicBlockData.rpo_number_ for this block traversal:
+static const int kBlockOnStack = -2;
+static const int kBlockVisited1 = -3;
+static const int kBlockVisited2 = -4;
+static const int kBlockUnvisited1 = -1;
+static const int kBlockUnvisited2 = kBlockVisited1;
+
+struct SpecialRPOStackFrame {
+  BasicBlock* block;
+  int index;
+};
+
+struct BlockList {
+  BasicBlock* block;
+  BlockList* next;
+
+  BlockList* Add(Zone* zone, BasicBlock* b) {
+    BlockList* list = static_cast<BlockList*>(zone->New(sizeof(BlockList)));
+    list->block = b;
+    list->next = this;
+    return list;
+  }
+
+  void Serialize(BasicBlockVector* final_order) {
+    for (BlockList* l = this; l != NULL; l = l->next) {
+      l->block->rpo_number_ = static_cast<int>(final_order->size());
+      final_order->push_back(l->block);
+    }
+  }
+};
+
+struct LoopInfo {
+  BasicBlock* header;
+  ZoneList<BasicBlock*>* outgoing;
+  BitVector* members;
+  LoopInfo* prev;
+  BlockList* end;
+  BlockList* start;
+
+  void AddOutgoing(Zone* zone, BasicBlock* block) {
+    if (outgoing == NULL) outgoing = new (zone) ZoneList<BasicBlock*>(2, zone);
+    outgoing->Add(block, zone);
+  }
+};
+
+
+static int Push(SpecialRPOStackFrame* stack, int depth, BasicBlock* child,
+                int unvisited) {
+  if (child->rpo_number_ == unvisited) {
+    stack[depth].block = child;
+    stack[depth].index = 0;
+    child->rpo_number_ = kBlockOnStack;
+    return depth + 1;
+  }
+  return depth;
+}
+
+
+// Computes loop membership from the backedges of the control flow graph.
+static LoopInfo* ComputeLoopInfo(
+    Zone* zone, SpecialRPOStackFrame* queue, int num_loops, int num_blocks,
+    ZoneList<std::pair<BasicBlock*, int> >* backedges) {
+  LoopInfo* loops = zone->NewArray<LoopInfo>(num_loops);
+  memset(loops, 0, num_loops * sizeof(LoopInfo));
+
+  // Compute loop membership starting from backedges.
+  // O(max(loop_depth) * max(|loop|))
+  for (int i = 0; i < backedges->length(); i++) {
+    BasicBlock* member = backedges->at(i).first;
+    BasicBlock* header = member->SuccessorAt(backedges->at(i).second);
+    int loop_num = header->loop_end_;
+    if (loops[loop_num].header == NULL) {
+      loops[loop_num].header = header;
+      loops[loop_num].members = new (zone) BitVector(num_blocks, zone);
+    }
+
+    int queue_length = 0;
+    if (member != header) {
+      // As long as the header doesn't have a backedge to itself,
+      // push the member onto the queue and process its predecessors.
+      if (!loops[loop_num].members->Contains(member->id())) {
+        loops[loop_num].members->Add(member->id());
+      }
+      queue[queue_length++].block = member;
+    }
+
+    // Propagate loop membership backwards. All predecessors of M up to the
+    // loop header H are members of the loop too. O(|blocks between M and H|).
+ while (queue_length > 0) { + BasicBlock* block = queue[--queue_length].block; + for (int i = 0; i < block->PredecessorCount(); i++) { + BasicBlock* pred = block->PredecessorAt(i); + if (pred != header) { + if (!loops[loop_num].members->Contains(pred->id())) { + loops[loop_num].members->Add(pred->id()); + queue[queue_length++].block = pred; + } + } + } + } + } + return loops; +} + + +#if DEBUG +static void PrintRPO(int num_loops, LoopInfo* loops, BasicBlockVector* order) { + PrintF("-- RPO with %d loops ", num_loops); + if (num_loops > 0) { + PrintF("("); + for (int i = 0; i < num_loops; i++) { + if (i > 0) PrintF(" "); + PrintF("B%d", loops[i].header->id()); + } + PrintF(") "); + } + PrintF("-- \n"); + + for (int i = 0; i < static_cast<int>(order->size()); i++) { + BasicBlock* block = (*order)[i]; + int bid = block->id(); + PrintF("%5d:", i); + for (int i = 0; i < num_loops; i++) { + bool membership = loops[i].members->Contains(bid); + bool range = loops[i].header->LoopContains(block); + PrintF(membership ? " |" : " "); + PrintF(range ? "x" : " "); + } + PrintF(" B%d: ", bid); + if (block->loop_end_ >= 0) { + PrintF(" range: [%d, %d)", block->rpo_number_, block->loop_end_); + } + PrintF("\n"); + } +} + + +static void VerifySpecialRPO(int num_loops, LoopInfo* loops, + BasicBlockVector* order) { + DCHECK(order->size() > 0); + DCHECK((*order)[0]->id() == 0); // entry should be first. + + for (int i = 0; i < num_loops; i++) { + LoopInfo* loop = &loops[i]; + BasicBlock* header = loop->header; + + DCHECK(header != NULL); + DCHECK(header->rpo_number_ >= 0); + DCHECK(header->rpo_number_ < static_cast<int>(order->size())); + DCHECK(header->loop_end_ >= 0); + DCHECK(header->loop_end_ <= static_cast<int>(order->size())); + DCHECK(header->loop_end_ > header->rpo_number_); + + // Verify the start ... end list relationship. + int links = 0; + BlockList* l = loop->start; + DCHECK(l != NULL && l->block == header); + bool end_found; + while (true) { + if (l == NULL || l == loop->end) { + end_found = (loop->end == l); + break; + } + // The list should be in same order as the final result. + DCHECK(l->block->rpo_number_ == links + loop->header->rpo_number_); + links++; + l = l->next; + DCHECK(links < static_cast<int>(2 * order->size())); // cycle? + } + DCHECK(links > 0); + DCHECK(links == (header->loop_end_ - header->rpo_number_)); + DCHECK(end_found); + + // Check the contiguousness of loops. + int count = 0; + for (int j = 0; j < static_cast<int>(order->size()); j++) { + BasicBlock* block = order->at(j); + DCHECK(block->rpo_number_ == j); + if (j < header->rpo_number_ || j >= header->loop_end_) { + DCHECK(!loop->members->Contains(block->id())); + } else { + if (block == header) { + DCHECK(!loop->members->Contains(block->id())); + } else { + DCHECK(loop->members->Contains(block->id())); + } + count++; + } + } + DCHECK(links == count); + } +} +#endif // DEBUG + + +// Compute the special reverse-post-order block ordering, which is essentially +// a RPO of the graph where loop bodies are contiguous. Properties: +// 1. If block A is a predecessor of B, then A appears before B in the order, +// unless B is a loop header and A is in the loop headed at B +// (i.e. A -> B is a backedge). +// => If block A dominates block B, then A appears before B in the order. +// => If block A is a loop header, A appears before all blocks in the loop +// headed at A. +// 2. All loops are contiguous in the order (i.e. no intervening blocks that +// do not belong to the loop.) 
+// Note that a simple RPO traversal satisfies (1) but not (2): blocks outside
+// a loop can end up interleaved with the loop's body.
+BasicBlockVector* Scheduler::ComputeSpecialRPO(Schedule* schedule) {
+  Zone tmp_zone(schedule->zone()->isolate());
+  Zone* zone = &tmp_zone;
+  if (FLAG_trace_turbo_scheduler) {
+    PrintF("------------- COMPUTING SPECIAL RPO ---------------\n");
+  }
+  // RPO should not have been computed for this schedule yet.
+  CHECK_EQ(kBlockUnvisited1, schedule->entry()->rpo_number_);
+  CHECK_EQ(0, static_cast<int>(schedule->rpo_order_.size()));
+
+  // Perform an iterative RPO traversal using an explicit stack,
+  // recording backedges that form cycles. O(|B|).
+  ZoneList<std::pair<BasicBlock*, int> > backedges(1, zone);
+  SpecialRPOStackFrame* stack =
+      zone->NewArray<SpecialRPOStackFrame>(schedule->BasicBlockCount());
+  BasicBlock* entry = schedule->entry();
+  BlockList* order = NULL;
+  int stack_depth = Push(stack, 0, entry, kBlockUnvisited1);
+  int num_loops = 0;
+
+  while (stack_depth > 0) {
+    int current = stack_depth - 1;
+    SpecialRPOStackFrame* frame = stack + current;
+
+    if (frame->index < frame->block->SuccessorCount()) {
+      // Process the next successor.
+      BasicBlock* succ = frame->block->SuccessorAt(frame->index++);
+      if (succ->rpo_number_ == kBlockVisited1) continue;
+      if (succ->rpo_number_ == kBlockOnStack) {
+        // The successor is on the stack, so this is a backedge (cycle).
+        backedges.Add(
+            std::pair<BasicBlock*, int>(frame->block, frame->index - 1), zone);
+        if (succ->loop_end_ < 0) {
+          // Assign a new loop number to the header if it doesn't have one.
+          succ->loop_end_ = num_loops++;
+        }
+      } else {
+        // Push the successor onto the stack.
+        DCHECK(succ->rpo_number_ == kBlockUnvisited1);
+        stack_depth = Push(stack, stack_depth, succ, kBlockUnvisited1);
+      }
+    } else {
+      // Finished with all successors; pop the stack and add the block.
+      order = order->Add(zone, frame->block);
+      frame->block->rpo_number_ = kBlockVisited1;
+      stack_depth--;
+    }
+  }
+
+  // If no loops were encountered, then the order we computed was correct.
+  LoopInfo* loops = NULL;
+  if (num_loops != 0) {
+    // Otherwise, compute the loop information from the backedges in order
+    // to perform a traversal that groups loop bodies together.
+    loops = ComputeLoopInfo(zone, stack, num_loops,
+                            schedule->BasicBlockCount(), &backedges);
+
+    // Initialize the "loop stack". Note the entry could be a loop header.
+    LoopInfo* loop = entry->IsLoopHeader() ? &loops[entry->loop_end_] : NULL;
+    order = NULL;
+
+    // Perform an iterative post-order traversal, visiting loop bodies before
+    // edges that lead out of loops. Visits each block once, but linking loop
+    // sections together is linear in the loop size, so overall is
+    // O(|B| + max(loop_depth) * max(|loop|)).
+    stack_depth = Push(stack, 0, entry, kBlockUnvisited2);
+    while (stack_depth > 0) {
+      SpecialRPOStackFrame* frame = stack + (stack_depth - 1);
+      BasicBlock* block = frame->block;
+      BasicBlock* succ = NULL;
+
+      if (frame->index < block->SuccessorCount()) {
+        // Process the next normal successor.
+        succ = block->SuccessorAt(frame->index++);
+      } else if (block->IsLoopHeader()) {
+        // Process additional outgoing edges from the loop header.
+        if (block->rpo_number_ == kBlockOnStack) {
+          // Finish the loop body the first time the header is left on the
+          // stack.
+          DCHECK(loop != NULL && loop->header == block);
+          loop->start = order->Add(zone, block);
+          order = loop->end;
+          block->rpo_number_ = kBlockVisited2;
+          // Pop the loop stack and continue visiting outgoing edges within
+          // the context of the outer loop, if any.
+ loop = loop->prev; + // We leave the loop header on the stack; the rest of this iteration + // and later iterations will go through its outgoing edges list. + } + + // Use the next outgoing edge if there are any. + int outgoing_index = frame->index - block->SuccessorCount(); + LoopInfo* info = &loops[block->loop_end_]; + DCHECK(loop != info); + if (info->outgoing != NULL && + outgoing_index < info->outgoing->length()) { + succ = info->outgoing->at(outgoing_index); + frame->index++; + } + } + + if (succ != NULL) { + // Process the next successor. + if (succ->rpo_number_ == kBlockOnStack) continue; + if (succ->rpo_number_ == kBlockVisited2) continue; + DCHECK(succ->rpo_number_ == kBlockUnvisited2); + if (loop != NULL && !loop->members->Contains(succ->id())) { + // The successor is not in the current loop or any nested loop. + // Add it to the outgoing edges of this loop and visit it later. + loop->AddOutgoing(zone, succ); + } else { + // Push the successor onto the stack. + stack_depth = Push(stack, stack_depth, succ, kBlockUnvisited2); + if (succ->IsLoopHeader()) { + // Push the inner loop onto the loop stack. + DCHECK(succ->loop_end_ >= 0 && succ->loop_end_ < num_loops); + LoopInfo* next = &loops[succ->loop_end_]; + next->end = order; + next->prev = loop; + loop = next; + } + } + } else { + // Finished with all successors of the current block. + if (block->IsLoopHeader()) { + // If we are going to pop a loop header, then add its entire body. + LoopInfo* info = &loops[block->loop_end_]; + for (BlockList* l = info->start; true; l = l->next) { + if (l->next == info->end) { + l->next = order; + info->end = order; + break; + } + } + order = info->start; + } else { + // Pop a single node off the stack and add it to the order. + order = order->Add(zone, block); + block->rpo_number_ = kBlockVisited2; + } + stack_depth--; + } + } + } + + // Construct the final order from the list. + BasicBlockVector* final_order = &schedule->rpo_order_; + order->Serialize(final_order); + + // Compute the correct loop header for every block and set the correct loop + // ends. + LoopInfo* current_loop = NULL; + BasicBlock* current_header = NULL; + int loop_depth = 0; + for (BasicBlockVectorIter i = final_order->begin(); i != final_order->end(); + ++i) { + BasicBlock* current = *i; + current->loop_header_ = current_header; + if (current->IsLoopHeader()) { + loop_depth++; + current_loop = &loops[current->loop_end_]; + BlockList* end = current_loop->end; + current->loop_end_ = end == NULL ? static_cast<int>(final_order->size()) + : end->block->rpo_number_; + current_header = current_loop->header; + if (FLAG_trace_turbo_scheduler) { + PrintF("Block %d is a loop header, increment loop depth to %d\n", + current->id(), loop_depth); + } + } else { + while (current_header != NULL && + current->rpo_number_ >= current_header->loop_end_) { + DCHECK(current_header->IsLoopHeader()); + DCHECK(current_loop != NULL); + current_loop = current_loop->prev; + current_header = current_loop == NULL ? 
NULL : current_loop->header; + --loop_depth; + } + } + current->loop_depth_ = loop_depth; + if (FLAG_trace_turbo_scheduler) { + if (current->loop_header_ == NULL) { + PrintF("Block %d's loop header is NULL, loop depth %d\n", current->id(), + current->loop_depth_); + } else { + PrintF("Block %d's loop header is block %d, loop depth %d\n", + current->id(), current->loop_header_->id(), + current->loop_depth_); + } + } + } + +#if DEBUG + if (FLAG_trace_turbo_scheduler) PrintRPO(num_loops, loops, final_order); + VerifySpecialRPO(num_loops, loops, final_order); +#endif + return final_order; +} +} +} +} // namespace v8::internal::compiler diff --git a/deps/v8/src/compiler/scheduler.h b/deps/v8/src/compiler/scheduler.h new file mode 100644 index 00000000000..db620edb55c --- /dev/null +++ b/deps/v8/src/compiler/scheduler.h @@ -0,0 +1,84 @@ +// Copyright 2013 the V8 project authors. All rights reserved. +// Use of this source code is governed by a BSD-style license that can be +// found in the LICENSE file. + +#ifndef V8_COMPILER_SCHEDULER_H_ +#define V8_COMPILER_SCHEDULER_H_ + +#include <vector> + +#include "src/v8.h" + +#include "src/compiler/opcodes.h" +#include "src/compiler/schedule.h" +#include "src/zone-allocator.h" +#include "src/zone-containers.h" + +namespace v8 { +namespace internal { +namespace compiler { + +// Computes a schedule from a graph, placing nodes into basic blocks and +// ordering the basic blocks in the special RPO order. +class Scheduler { + public: + // Create a new schedule and place all computations from the graph in it. + static Schedule* ComputeSchedule(Graph* graph); + + // Compute the RPO of blocks in an existing schedule. + static BasicBlockVector* ComputeSpecialRPO(Schedule* schedule); + + private: + Graph* graph_; + Schedule* schedule_; + NodeVector branches_; + NodeVector calls_; + NodeVector deopts_; + NodeVector returns_; + NodeVector loops_and_merges_; + BasicBlockVector node_block_placement_; + IntVector unscheduled_uses_; + NodeVectorVector scheduled_nodes_; + NodeVector schedule_root_nodes_; + IntVector schedule_early_rpo_index_; + + Scheduler(Zone* zone, Graph* graph, Schedule* schedule); + + int GetRPONumber(BasicBlock* block) { + DCHECK(block->rpo_number_ >= 0 && + block->rpo_number_ < static_cast<int>(schedule_->rpo_order_.size())); + DCHECK(schedule_->rpo_order_[block->rpo_number_] == block); + return block->rpo_number_; + } + + void PrepareAuxiliaryNodeData(); + void PrepareAuxiliaryBlockData(); + + friend class CreateBlockVisitor; + void CreateBlocks(); + + void WireBlocks(); + + void AddPredecessorsForLoopsAndMerges(); + void AddSuccessorsForBranches(); + void AddSuccessorsForReturns(); + void AddSuccessorsForCalls(); + void AddSuccessorsForDeopts(); + + void GenerateImmediateDominatorTree(); + BasicBlock* GetCommonDominator(BasicBlock* b1, BasicBlock* b2); + + friend class ScheduleEarlyNodeVisitor; + void ScheduleEarly(); + + friend class PrepareUsesVisitor; + void PrepareUses(); + + friend class ScheduleLateNodeVisitor; + void ScheduleLate(); +}; +} +} +} // namespace v8::internal::compiler + +#endif // V8_COMPILER_SCHEDULER_H_ diff --git a/deps/v8/src/compiler/simplified-lowering.cc b/deps/v8/src/compiler/simplified-lowering.cc new file mode 100644 index 00000000000..e32a51e136b --- /dev/null +++ b/deps/v8/src/compiler/simplified-lowering.cc @@ -0,0 +1,1014 @@ +// Copyright 2014 the V8 project authors. All rights reserved. +// Use of this source code is governed by a BSD-style license that can be +// found in the LICENSE file. 
+
+#include "src/compiler/simplified-lowering.h"
+
+#include <deque>
+#include <queue>
+
+#include "src/compiler/common-operator.h"
+#include "src/compiler/graph-inl.h"
+#include "src/compiler/node-properties-inl.h"
+#include "src/compiler/representation-change.h"
+#include "src/compiler/simplified-operator.h"
+#include "src/objects.h"
+
+namespace v8 {
+namespace internal {
+namespace compiler {
+
+// Macro for outputting trace information from representation inference.
+#define TRACE(x) \
+  if (FLAG_trace_representation) PrintF x
+
+// Representation selection and lowering of {Simplified} operators to machine
+// operators are intertwined. We use a fixpoint calculation to compute both
+// the output representation and the best possible lowering for {Simplified}
+// nodes. Representation change insertion ensures that all values are in the
+// correct machine representation after this phase, as dictated by the machine
+// operators themselves.
+enum Phase {
+  // 1.) PROPAGATE: Traverse the graph from the end, pushing usage information
+  //     backwards from uses to definitions, around cycles in phis, according
+  //     to local rules for each operator.
+  //     During this phase, the usage information for a node determines the
+  //     best possible lowering for each operator so far, and that in turn
+  //     determines the output representation.
+  //     Therefore, to be correct, this phase must iterate to a fixpoint
+  //     before the next phase can begin.
+  PROPAGATE,
+
+  // 2.) LOWER: perform lowering for all {Simplified} nodes by replacing some
+  //     operators for some nodes, expanding some nodes to multiple nodes, or
+  //     removing some (redundant) nodes.
+  //     During this phase, use the {RepresentationChanger} to insert
+  //     representation changes between uses that demand a particular
+  //     representation and nodes that produce a different representation.
+  LOWER
+};
+
+
+class RepresentationSelector {
+ public:
+  // Information for each node tracked during the fixpoint.
+  struct NodeInfo {
+    RepTypeUnion use : 14;     // Union of all usages for the node.
+    bool queued : 1;           // Bookkeeping for the traversal.
+    bool visited : 1;          // Bookkeeping for the traversal.
+    RepTypeUnion output : 14;  // Output type of the node.
+  };
+
+  RepresentationSelector(JSGraph* jsgraph, Zone* zone,
+                         RepresentationChanger* changer)
+      : jsgraph_(jsgraph),
+        count_(jsgraph->graph()->NodeCount()),
+        info_(zone->NewArray<NodeInfo>(count_)),
+        nodes_(NodeVector::allocator_type(zone)),
+        replacements_(NodeVector::allocator_type(zone)),
+        contains_js_nodes_(false),
+        phase_(PROPAGATE),
+        changer_(changer),
+        queue_(std::deque<Node*, NodePtrZoneAllocator>(
+            NodePtrZoneAllocator(zone))) {
+    memset(info_, 0, sizeof(NodeInfo) * count_);
+  }
+
+  void Run(SimplifiedLowering* lowering) {
+    // Run propagation phase to a fixpoint.
+    TRACE(("--{Propagation phase}--\n"));
+    phase_ = PROPAGATE;
+    Enqueue(jsgraph_->graph()->end());
+    // Process nodes from the queue until it is empty.
+    while (!queue_.empty()) {
+      Node* node = queue_.front();
+      NodeInfo* info = GetInfo(node);
+      queue_.pop();
+      info->queued = false;
+      TRACE((" visit #%d: %s\n", node->id(), node->op()->mnemonic()));
+      VisitNode(node, info->use, NULL);
+      TRACE((" ==> output "));
+      PrintInfo(info->output);
+      TRACE(("\n"));
+    }
+
+    // Run lowering and change insertion phase.
+    TRACE(("--{Simplified lowering phase}--\n"));
+    phase_ = LOWER;
+    // Process nodes from the collected {nodes_} vector.
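The PROPAGATE loop in Run() is a monotone worklist: each node's use bits only ever grow under |=, and a node is re-queued only when new bits appear, which bounds the total number of enqueues by (#nodes x #bits). A greatly simplified, self-contained model of that argument (the real transfer rules are per-operator, in VisitNode; here demand is just unioned from a use to its definitions):

#include <cstddef>
#include <queue>
#include <vector>

typedef unsigned UseBits;

void Propagate(const std::vector<std::vector<int> >& inputs,
               std::vector<UseBits>* use, int end_node) {
  std::queue<int> worklist;
  worklist.push(end_node);
  while (!worklist.empty()) {
    int n = worklist.front();
    worklist.pop();
    for (size_t i = 0; i < inputs[n].size(); ++i) {
      int def = inputs[n][i];
      UseBits merged = (*use)[def] | (*use)[n];  // demand flows use -> def
      if (merged != (*use)[def]) {
        (*use)[def] = merged;
        worklist.push(def);  // re-queue only when new bits appeared
      }
    }
  }
}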
+ for (NodeVector::iterator i = nodes_.begin(); i != nodes_.end(); ++i) { + Node* node = *i; + TRACE((" visit #%d: %s\n", node->id(), node->op()->mnemonic())); + // Reuse {VisitNode()} so the representation rules are in one place. + VisitNode(node, GetUseInfo(node), lowering); + } + + // Perform the final replacements. + for (NodeVector::iterator i = replacements_.begin(); + i != replacements_.end(); ++i) { + Node* node = *i; + Node* replacement = *(++i); + node->ReplaceUses(replacement); + } + } + + // Enqueue {node} if the {use} contains new information for that node. + // Add {node} to {nodes_} if this is the first time it's been visited. + void Enqueue(Node* node, RepTypeUnion use = 0) { + if (phase_ != PROPAGATE) return; + NodeInfo* info = GetInfo(node); + if (!info->visited) { + // First visit of this node. + info->visited = true; + info->queued = true; + nodes_.push_back(node); + queue_.push(node); + TRACE((" initial: ")); + info->use |= use; + PrintUseInfo(node); + return; + } + TRACE((" queue?: ")); + PrintUseInfo(node); + if ((info->use & use) != use) { + // New usage information for the node is available. + if (!info->queued) { + queue_.push(node); + info->queued = true; + TRACE((" added: ")); + } else { + TRACE((" inqueue: ")); + } + info->use |= use; + PrintUseInfo(node); + } + } + + bool lower() { return phase_ == LOWER; } + + void Enqueue(Node* node, RepType use) { + Enqueue(node, static_cast<RepTypeUnion>(use)); + } + + void SetOutput(Node* node, RepTypeUnion output) { + // Every node should have at most one output representation. Note that + // phis can have 0, if they have not been used in a representation-inducing + // instruction. + DCHECK((output & rMask) == 0 || IsPowerOf2(output & rMask)); + GetInfo(node)->output = output; + } + + bool BothInputsAre(Node* node, Type* type) { + DCHECK_EQ(2, node->InputCount()); + return NodeProperties::GetBounds(node->InputAt(0)).upper->Is(type) && + NodeProperties::GetBounds(node->InputAt(1)).upper->Is(type); + } + + void ProcessInput(Node* node, int index, RepTypeUnion use) { + Node* input = node->InputAt(index); + if (phase_ == PROPAGATE) { + // In the propagate phase, propagate the usage information backward. + Enqueue(input, use); + } else { + // In the change phase, insert a change before the use if necessary. + if ((use & rMask) == 0) return; // No input requirement on the use. + RepTypeUnion output = GetInfo(input)->output; + if ((output & rMask & use) == 0) { + // Output representation doesn't match usage. + TRACE((" change: #%d:%s(@%d #%d:%s) ", node->id(), + node->op()->mnemonic(), index, input->id(), + input->op()->mnemonic())); + TRACE((" from ")); + PrintInfo(output); + TRACE((" to ")); + PrintInfo(use); + TRACE(("\n")); + Node* n = changer_->GetRepresentationFor(input, output, use); + node->ReplaceInput(index, n); + } + } + } + + static const RepTypeUnion kFloat64 = rFloat64 | tNumber; + static const RepTypeUnion kInt32 = rWord32 | tInt32; + static const RepTypeUnion kUint32 = rWord32 | tUint32; + static const RepTypeUnion kInt64 = rWord64 | tInt64; + static const RepTypeUnion kUint64 = rWord64 | tUint64; + static const RepTypeUnion kAnyTagged = rTagged | tAny; + + // The default, most general visitation case. For {node}, process all value, + // context, effect, and control inputs, assuming that value inputs should have + // {rTagged} representation and can observe all output values {tAny}. 
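The kFloat64/kInt32/kAnyTagged constants above each combine one representation bit (r*) with type bits (t*). An illustrative, self-contained rendering of that encoding; the real r*/t* constants come from headers included above, and the bit values below are made up for the example:

#include <cassert>

typedef unsigned RepTypeUnion;
enum {
  rBit = 1 << 0, rWord32 = 1 << 1, rWord64 = 1 << 2,
  rFloat64 = 1 << 3, rTagged = 1 << 4,
  rMask = rBit | rWord32 | rWord64 | rFloat64 | rTagged,
  tBool = 1 << 5, tInt32 = 1 << 6, tUint32 = 1 << 7,
  tNumber = 1 << 8, tAny = 1 << 9,
  tMask = tBool | tInt32 | tUint32 | tNumber | tAny
};

int main() {
  // "A signed 32-bit value held in a word32 register."
  RepTypeUnion kInt32 = rWord32 | tInt32;
  assert((kInt32 & rMask) == rWord32);  // representation part
  assert((kInt32 & tMask) == tInt32);   // type part
  return 0;
}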
+ void VisitInputs(Node* node) { + InputIter i = node->inputs().begin(); + for (int j = OperatorProperties::GetValueInputCount(node->op()); j > 0; + ++i, j--) { + ProcessInput(node, i.index(), kAnyTagged); // Value inputs + } + for (int j = OperatorProperties::GetContextInputCount(node->op()); j > 0; + ++i, j--) { + ProcessInput(node, i.index(), kAnyTagged); // Context inputs + } + for (int j = OperatorProperties::GetEffectInputCount(node->op()); j > 0; + ++i, j--) { + Enqueue(*i); // Effect inputs: just visit + } + for (int j = OperatorProperties::GetControlInputCount(node->op()); j > 0; + ++i, j--) { + Enqueue(*i); // Control inputs: just visit + } + SetOutput(node, kAnyTagged); + } + + // Helper for binops of the I x I -> O variety. + void VisitBinop(Node* node, RepTypeUnion input_use, RepTypeUnion output) { + DCHECK_EQ(2, node->InputCount()); + ProcessInput(node, 0, input_use); + ProcessInput(node, 1, input_use); + SetOutput(node, output); + } + + // Helper for unops of the I -> O variety. + void VisitUnop(Node* node, RepTypeUnion input_use, RepTypeUnion output) { + DCHECK_EQ(1, node->InputCount()); + ProcessInput(node, 0, input_use); + SetOutput(node, output); + } + + // Helper for leaf nodes. + void VisitLeaf(Node* node, RepTypeUnion output) { + DCHECK_EQ(0, node->InputCount()); + SetOutput(node, output); + } + + // Helpers for specific types of binops. + void VisitFloat64Binop(Node* node) { VisitBinop(node, kFloat64, kFloat64); } + void VisitInt32Binop(Node* node) { VisitBinop(node, kInt32, kInt32); } + void VisitUint32Binop(Node* node) { VisitBinop(node, kUint32, kUint32); } + void VisitInt64Binop(Node* node) { VisitBinop(node, kInt64, kInt64); } + void VisitUint64Binop(Node* node) { VisitBinop(node, kUint64, kUint64); } + void VisitFloat64Cmp(Node* node) { VisitBinop(node, kFloat64, rBit); } + void VisitInt32Cmp(Node* node) { VisitBinop(node, kInt32, rBit); } + void VisitUint32Cmp(Node* node) { VisitBinop(node, kUint32, rBit); } + void VisitInt64Cmp(Node* node) { VisitBinop(node, kInt64, rBit); } + void VisitUint64Cmp(Node* node) { VisitBinop(node, kUint64, rBit); } + + // Helper for handling phis. + void VisitPhi(Node* node, RepTypeUnion use) { + // First, propagate the usage information to inputs of the phi. + int values = OperatorProperties::GetValueInputCount(node->op()); + Node::Inputs inputs = node->inputs(); + for (Node::Inputs::iterator iter(inputs.begin()); iter != inputs.end(); + ++iter, --values) { + // Propagate {use} of the phi to value inputs, and 0 to control. + // TODO(titzer): it'd be nice to have distinguished edge kinds here. + ProcessInput(node, iter.index(), values > 0 ? use : 0); + } + // Phis adapt to whatever output representation their uses demand, + // pushing representation changes to their inputs. + RepTypeUnion use_rep = GetUseInfo(node) & rMask; + RepTypeUnion use_type = GetUseInfo(node) & tMask; + RepTypeUnion rep = 0; + if (use_rep & rTagged) { + rep = rTagged; // Tagged overrides everything. + } else if (use_rep & rFloat64) { + rep = rFloat64; + } else if (use_rep & rWord64) { + rep = rWord64; + } else if (use_rep & rWord32) { + rep = rWord32; + } else if (use_rep & rBit) { + rep = rBit; + } else { + // There was no representation associated with any of the uses. + // TODO(titzer): Select the best rep using phi's type, not the usage type? 
+ if (use_type & tAny) { + rep = rTagged; + } else if (use_type & tNumber) { + rep = rFloat64; + } else if (use_type & tInt64 || use_type & tUint64) { + rep = rWord64; + } else if (use_type & tInt32 || use_type & tUint32) { + rep = rWord32; + } else if (use_type & tBool) { + rep = rBit; + } else { + UNREACHABLE(); // should have at least a usage type! + } + } + // Preserve the usage type, but set the representation. + Type* upper = NodeProperties::GetBounds(node).upper; + SetOutput(node, rep | changer_->TypeFromUpperBound(upper)); + } + + Operator* Int32Op(Node* node) { + return changer_->Int32OperatorFor(node->opcode()); + } + + Operator* Uint32Op(Node* node) { + return changer_->Uint32OperatorFor(node->opcode()); + } + + Operator* Float64Op(Node* node) { + return changer_->Float64OperatorFor(node->opcode()); + } + + // Dispatching routine for visiting the node {node} with the usage {use}. + // Depending on the operator, propagate new usage info to the inputs. + void VisitNode(Node* node, RepTypeUnion use, SimplifiedLowering* lowering) { + switch (node->opcode()) { + //------------------------------------------------------------------ + // Common operators. + //------------------------------------------------------------------ + case IrOpcode::kStart: + case IrOpcode::kDead: + return VisitLeaf(node, 0); + case IrOpcode::kParameter: { + // TODO(titzer): use representation from linkage. + Type* upper = NodeProperties::GetBounds(node).upper; + ProcessInput(node, 0, 0); + SetOutput(node, rTagged | changer_->TypeFromUpperBound(upper)); + return; + } + case IrOpcode::kInt32Constant: + return VisitLeaf(node, rWord32); + case IrOpcode::kInt64Constant: + return VisitLeaf(node, rWord64); + case IrOpcode::kFloat64Constant: + return VisitLeaf(node, rFloat64); + case IrOpcode::kExternalConstant: + return VisitLeaf(node, rPtr); + case IrOpcode::kNumberConstant: + return VisitLeaf(node, rTagged); + case IrOpcode::kHeapConstant: + return VisitLeaf(node, rTagged); + + case IrOpcode::kEnd: + case IrOpcode::kIfTrue: + case IrOpcode::kIfFalse: + case IrOpcode::kReturn: + case IrOpcode::kMerge: + case IrOpcode::kThrow: + return VisitInputs(node); // default visit for all node inputs. + + case IrOpcode::kBranch: + ProcessInput(node, 0, rBit); + Enqueue(NodeProperties::GetControlInput(node, 0)); + break; + case IrOpcode::kPhi: + return VisitPhi(node, use); + +//------------------------------------------------------------------ +// JavaScript operators. +//------------------------------------------------------------------ +// For now, we assume that all JS operators were too complex to lower +// to Simplified and that they will always require tagged value inputs +// and produce tagged value outputs. +// TODO(turbofan): it might be possible to lower some JSOperators here, +// but that responsibility really lies in the typed lowering phase. +#define DEFINE_JS_CASE(x) case IrOpcode::k##x: + JS_OP_LIST(DEFINE_JS_CASE) +#undef DEFINE_JS_CASE + contains_js_nodes_ = true; + VisitInputs(node); + return SetOutput(node, rTagged); + + //------------------------------------------------------------------ + // Simplified operators. 
+      //------------------------------------------------------------------
+      case IrOpcode::kBooleanNot: {
+        if (lower()) {
+          RepTypeUnion input = GetInfo(node->InputAt(0))->output;
+          if (input & rBit) {
+            // BooleanNot(x: rBit) => WordEqual(x, #0)
+            node->set_op(lowering->machine()->WordEqual());
+            node->AppendInput(jsgraph_->zone(), jsgraph_->Int32Constant(0));
+          } else {
+            // BooleanNot(x: rTagged) => WordEqual(x, #false)
+            node->set_op(lowering->machine()->WordEqual());
+            node->AppendInput(jsgraph_->zone(), jsgraph_->FalseConstant());
+          }
+        } else {
+          // No input representation requirement; adapt during lowering.
+          ProcessInput(node, 0, tBool);
+          SetOutput(node, rBit);
+        }
+        break;
+      }
+      case IrOpcode::kNumberEqual:
+      case IrOpcode::kNumberLessThan:
+      case IrOpcode::kNumberLessThanOrEqual: {
+        // Number comparisons reduce to integer comparisons for integer
+        // inputs.
+        if (BothInputsAre(node, Type::Signed32())) {
+          // => signed Int32Cmp
+          VisitInt32Cmp(node);
+          if (lower()) node->set_op(Int32Op(node));
+        } else if (BothInputsAre(node, Type::Unsigned32())) {
+          // => unsigned Int32Cmp
+          VisitUint32Cmp(node);
+          if (lower()) node->set_op(Uint32Op(node));
+        } else {
+          // => Float64Cmp
+          VisitFloat64Cmp(node);
+          if (lower()) node->set_op(Float64Op(node));
+        }
+        break;
+      }
+      case IrOpcode::kNumberAdd:
+      case IrOpcode::kNumberSubtract: {
+        // Add and subtract reduce to Int32Add/Sub if the inputs
+        // are already integers and all uses are truncating.
+        if (BothInputsAre(node, Type::Signed32()) &&
+            (use & (tUint32 | tNumber | tAny)) == 0) {
+          // => signed Int32Add/Sub
+          VisitInt32Binop(node);
+          if (lower()) node->set_op(Int32Op(node));
+        } else if (BothInputsAre(node, Type::Unsigned32()) &&
+                   (use & (tInt32 | tNumber | tAny)) == 0) {
+          // => unsigned Int32Add/Sub
+          VisitUint32Binop(node);
+          if (lower()) node->set_op(Uint32Op(node));
+        } else {
+          // => Float64Add/Sub
+          VisitFloat64Binop(node);
+          if (lower()) node->set_op(Float64Op(node));
+        }
+        break;
+      }
+      case IrOpcode::kNumberMultiply:
+      case IrOpcode::kNumberDivide:
+      case IrOpcode::kNumberModulus: {
+        // Float64Mul/Div/Mod
+        VisitFloat64Binop(node);
+        if (lower()) node->set_op(Float64Op(node));
+        break;
+      }
+      case IrOpcode::kNumberToInt32: {
+        RepTypeUnion use_rep = use & rMask;
+        if (lower()) {
+          RepTypeUnion in = GetInfo(node->InputAt(0))->output;
+          if ((in & tMask) == tInt32 || (in & rMask) == rWord32) {
+            // If the input has type int32, or is already a word32, just
+            // change representation if necessary.
+            VisitUnop(node, tInt32 | use_rep, tInt32 | use_rep);
+            DeferReplacement(node, node->InputAt(0));
+          } else {
+            // Require the input in float64 format and perform truncation.
+            // TODO(turbofan): could also avoid the truncation with a tag
+            // check.
+            VisitUnop(node, tInt32 | rFloat64, tInt32 | rWord32);
+            // TODO(titzer): should be a truncation.
+            node->set_op(lowering->machine()->ChangeFloat64ToInt32());
+          }
+        } else {
+          // Propagate a type to the input, but pass through representation.
+          VisitUnop(node, tInt32, tInt32 | use_rep);
+        }
+        break;
+      }
+      case IrOpcode::kNumberToUint32: {
+        RepTypeUnion use_rep = use & rMask;
+        if (lower()) {
+          RepTypeUnion in = GetInfo(node->InputAt(0))->output;
+          if ((in & tMask) == tUint32 || (in & rMask) == rWord32) {
+            // If the input has type uint32, or is already a word32, just
+            // change representation if necessary.
+            VisitUnop(node, tUint32 | use_rep, tUint32 | use_rep);
+            DeferReplacement(node, node->InputAt(0));
+          } else {
+            // Require the input in float64 format to perform truncation.
+ // TODO(turbofan): could also avoid the truncation with a tag check. + VisitUnop(node, tUint32 | rFloat64, tUint32 | rWord32); + // TODO(titzer): should be a truncation. + node->set_op(lowering->machine()->ChangeFloat64ToUint32()); + } + } else { + // Propagate a type to the input, but pass through representation. + VisitUnop(node, tUint32, tUint32 | use_rep); + } + break; + } + case IrOpcode::kReferenceEqual: { + VisitBinop(node, kAnyTagged, rBit); + if (lower()) node->set_op(lowering->machine()->WordEqual()); + break; + } + case IrOpcode::kStringEqual: { + VisitBinop(node, kAnyTagged, rBit); + // TODO(titzer): lower StringEqual to stub/runtime call. + break; + } + case IrOpcode::kStringLessThan: { + VisitBinop(node, kAnyTagged, rBit); + // TODO(titzer): lower StringLessThan to stub/runtime call. + break; + } + case IrOpcode::kStringLessThanOrEqual: { + VisitBinop(node, kAnyTagged, rBit); + // TODO(titzer): lower StringLessThanOrEqual to stub/runtime call. + break; + } + case IrOpcode::kStringAdd: { + VisitBinop(node, kAnyTagged, kAnyTagged); + // TODO(titzer): lower StringAdd to stub/runtime call. + break; + } + case IrOpcode::kLoadField: { + FieldAccess access = FieldAccessOf(node->op()); + ProcessInput(node, 0, changer_->TypeForBasePointer(access)); + SetOutput(node, changer_->TypeForField(access)); + if (lower()) lowering->DoLoadField(node); + break; + } + case IrOpcode::kStoreField: { + FieldAccess access = FieldAccessOf(node->op()); + ProcessInput(node, 0, changer_->TypeForBasePointer(access)); + ProcessInput(node, 1, changer_->TypeForField(access)); + SetOutput(node, 0); + if (lower()) lowering->DoStoreField(node); + break; + } + case IrOpcode::kLoadElement: { + ElementAccess access = ElementAccessOf(node->op()); + ProcessInput(node, 0, changer_->TypeForBasePointer(access)); + ProcessInput(node, 1, kInt32); // element index + SetOutput(node, changer_->TypeForElement(access)); + if (lower()) lowering->DoLoadElement(node); + break; + } + case IrOpcode::kStoreElement: { + ElementAccess access = ElementAccessOf(node->op()); + ProcessInput(node, 0, changer_->TypeForBasePointer(access)); + ProcessInput(node, 1, kInt32); // element index + ProcessInput(node, 2, changer_->TypeForElement(access)); + SetOutput(node, 0); + if (lower()) lowering->DoStoreElement(node); + break; + } + + //------------------------------------------------------------------ + // Machine-level operators. + //------------------------------------------------------------------ + case IrOpcode::kLoad: { + // TODO(titzer): machine loads/stores need to know BaseTaggedness!? + RepType tBase = rTagged; + MachineType rep = OpParameter<MachineType>(node); + ProcessInput(node, 0, tBase); // pointer or object + ProcessInput(node, 1, kInt32); // index + SetOutput(node, changer_->TypeForMachineType(rep)); + break; + } + case IrOpcode::kStore: { + // TODO(titzer): machine loads/stores need to know BaseTaggedness!? + RepType tBase = rTagged; + StoreRepresentation rep = OpParameter<StoreRepresentation>(node); + ProcessInput(node, 0, tBase); // pointer or object + ProcessInput(node, 1, kInt32); // index + ProcessInput(node, 2, changer_->TypeForMachineType(rep.rep)); + SetOutput(node, 0); + break; + } + case IrOpcode::kWord32Shr: + // We output unsigned int32 for shift right because JavaScript. 
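The field and element accesses handled above are eventually lowered to raw machine loads and stores whose byte offsets account for V8's tagged pointers (see DoLoadField and ComputeIndex later in this file). A small worked example of that addressing math, with illustrative stand-in constants rather than V8's real definitions:

#include <cassert>
#include <stdint.h>

static const int32_t kHeapObjectTag = 1;  // assumed tag bit on object pointers
static const int32_t kHeaderSize = 8;     // hypothetical array header size

// A field at byte offset N inside a tagged object is addressed at
// base + (N - tag), because the base pointer carries the tag in its low bit.
int32_t FieldOffset(int32_t offset) { return offset - kHeapObjectTag; }

// Elements additionally scale the index and skip the header.
int32_t ElementOffset(int32_t index, int32_t element_size) {
  return index * element_size + kHeaderSize - kHeapObjectTag;
}

int main() {
  assert(FieldOffset(8) == 7);        // field at byte 8 of a tagged object
  assert(ElementOffset(2, 4) == 15);  // third word32 element: 2*4 + 8 - 1
  return 0;
}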
+ return VisitBinop(node, rWord32, rWord32 | tUint32); + case IrOpcode::kWord32And: + case IrOpcode::kWord32Or: + case IrOpcode::kWord32Xor: + case IrOpcode::kWord32Shl: + case IrOpcode::kWord32Sar: + // We use signed int32 as the output type for these word32 operations, + // though the machine bits are the same for either signed or unsigned, + // because JavaScript considers the result from these operations signed. + return VisitBinop(node, rWord32, rWord32 | tInt32); + case IrOpcode::kWord32Equal: + return VisitBinop(node, rWord32, rBit); + + case IrOpcode::kInt32Add: + case IrOpcode::kInt32Sub: + case IrOpcode::kInt32Mul: + case IrOpcode::kInt32Div: + case IrOpcode::kInt32Mod: + return VisitInt32Binop(node); + case IrOpcode::kInt32UDiv: + case IrOpcode::kInt32UMod: + return VisitUint32Binop(node); + case IrOpcode::kInt32LessThan: + case IrOpcode::kInt32LessThanOrEqual: + return VisitInt32Cmp(node); + + case IrOpcode::kUint32LessThan: + case IrOpcode::kUint32LessThanOrEqual: + return VisitUint32Cmp(node); + + case IrOpcode::kInt64Add: + case IrOpcode::kInt64Sub: + case IrOpcode::kInt64Mul: + case IrOpcode::kInt64Div: + case IrOpcode::kInt64Mod: + return VisitInt64Binop(node); + case IrOpcode::kInt64LessThan: + case IrOpcode::kInt64LessThanOrEqual: + return VisitInt64Cmp(node); + + case IrOpcode::kInt64UDiv: + case IrOpcode::kInt64UMod: + return VisitUint64Binop(node); + + case IrOpcode::kWord64And: + case IrOpcode::kWord64Or: + case IrOpcode::kWord64Xor: + case IrOpcode::kWord64Shl: + case IrOpcode::kWord64Shr: + case IrOpcode::kWord64Sar: + return VisitBinop(node, rWord64, rWord64); + case IrOpcode::kWord64Equal: + return VisitBinop(node, rWord64, rBit); + + case IrOpcode::kConvertInt32ToInt64: + return VisitUnop(node, tInt32 | rWord32, tInt32 | rWord64); + case IrOpcode::kConvertInt64ToInt32: + return VisitUnop(node, tInt64 | rWord64, tInt32 | rWord32); + + case IrOpcode::kChangeInt32ToFloat64: + return VisitUnop(node, tInt32 | rWord32, tInt32 | rFloat64); + case IrOpcode::kChangeUint32ToFloat64: + return VisitUnop(node, tUint32 | rWord32, tUint32 | rFloat64); + case IrOpcode::kChangeFloat64ToInt32: + return VisitUnop(node, tInt32 | rFloat64, tInt32 | rWord32); + case IrOpcode::kChangeFloat64ToUint32: + return VisitUnop(node, tUint32 | rFloat64, tUint32 | rWord32); + + case IrOpcode::kFloat64Add: + case IrOpcode::kFloat64Sub: + case IrOpcode::kFloat64Mul: + case IrOpcode::kFloat64Div: + case IrOpcode::kFloat64Mod: + return VisitFloat64Binop(node); + case IrOpcode::kFloat64Equal: + case IrOpcode::kFloat64LessThan: + case IrOpcode::kFloat64LessThanOrEqual: + return VisitFloat64Cmp(node); + default: + VisitInputs(node); + break; + } + } + + void DeferReplacement(Node* node, Node* replacement) { + if (replacement->id() < count_) { + // Replace with a previously existing node eagerly. + node->ReplaceUses(replacement); + } else { + // Otherwise, we are replacing a node with a representation change. + // Such a substitution must be done after all lowering is done, because + // new nodes do not have {NodeInfo} entries, and that would confuse + // the representation change insertion for uses of it. + replacements_.push_back(node); + replacements_.push_back(replacement); + } + // TODO(titzer) node->RemoveAllInputs(); // Node is now dead. 
+ } + + void PrintUseInfo(Node* node) { + TRACE(("#%d:%-20s ", node->id(), node->op()->mnemonic())); + PrintInfo(GetUseInfo(node)); + TRACE(("\n")); + } + + void PrintInfo(RepTypeUnion info) { + if (FLAG_trace_representation) { + char buf[REP_TYPE_STRLEN]; + RenderRepTypeUnion(buf, info); + TRACE(("%s", buf)); + } + } + + private: + JSGraph* jsgraph_; + int count_; // number of nodes in the graph + NodeInfo* info_; // node id -> usage information + NodeVector nodes_; // collected nodes + NodeVector replacements_; // replacements to be done after lowering + bool contains_js_nodes_; // {true} if a JS operator was seen + Phase phase_; // current phase of algorithm + RepresentationChanger* changer_; // for inserting representation changes + + std::queue<Node*, std::deque<Node*, NodePtrZoneAllocator> > queue_; + + NodeInfo* GetInfo(Node* node) { + DCHECK(node->id() >= 0); + DCHECK(node->id() < count_); + return &info_[node->id()]; + } + + RepTypeUnion GetUseInfo(Node* node) { return GetInfo(node)->use; } +}; + + +Node* SimplifiedLowering::IsTagged(Node* node) { + // TODO(titzer): factor this out to a TaggingScheme abstraction. + STATIC_ASSERT(kSmiTagMask == 1); // Only works if tag is the low bit. + return graph()->NewNode(machine()->WordAnd(), node, + jsgraph()->Int32Constant(kSmiTagMask)); +} + + +void SimplifiedLowering::LowerAllNodes() { + SimplifiedOperatorBuilder simplified(graph()->zone()); + RepresentationChanger changer(jsgraph(), &simplified, machine(), + graph()->zone()->isolate()); + RepresentationSelector selector(jsgraph(), zone(), &changer); + selector.Run(this); + + LoweringBuilder::LowerAllNodes(); +} + + +Node* SimplifiedLowering::Untag(Node* node) { + // TODO(titzer): factor this out to a TaggingScheme abstraction. + Node* shift_amount = jsgraph()->Int32Constant(kSmiTagSize + kSmiShiftSize); + return graph()->NewNode(machine()->WordSar(), node, shift_amount); +} + + +Node* SimplifiedLowering::SmiTag(Node* node) { + // TODO(titzer): factor this out to a TaggingScheme abstraction. + Node* shift_amount = jsgraph()->Int32Constant(kSmiTagSize + kSmiShiftSize); + return graph()->NewNode(machine()->WordShl(), node, shift_amount); +} + + +Node* SimplifiedLowering::OffsetMinusTagConstant(int32_t offset) { + return jsgraph()->Int32Constant(offset - kHeapObjectTag); +} + + +static void UpdateControlSuccessors(Node* before, Node* node) { + DCHECK(IrOpcode::IsControlOpcode(before->opcode())); + UseIter iter = before->uses().begin(); + while (iter != before->uses().end()) { + if (IrOpcode::IsControlOpcode((*iter)->opcode()) && + NodeProperties::IsControlEdge(iter.edge())) { + iter = iter.UpdateToAndIncrement(node); + continue; + } + ++iter; + } +} + + +void SimplifiedLowering::DoChangeTaggedToUI32(Node* node, Node* effect, + Node* control, bool is_signed) { + // if (IsTagged(val)) + // ConvertFloat64To(Int32|Uint32)(Load[kMachineFloat64](input, #value_offset)) + // else Untag(val) + Node* val = node->InputAt(0); + Node* branch = graph()->NewNode(common()->Branch(), IsTagged(val), control); + + // true branch. + Node* tbranch = graph()->NewNode(common()->IfTrue(), branch); + Node* loaded = graph()->NewNode( + machine()->Load(kMachineFloat64), val, + OffsetMinusTagConstant(HeapNumber::kValueOffset), effect); + Operator* op = is_signed ? machine()->ChangeFloat64ToInt32() + : machine()->ChangeFloat64ToUint32(); + Node* converted = graph()->NewNode(op, loaded); + + // false branch. + Node* fbranch = graph()->NewNode(common()->IfFalse(), branch); + Node* untagged = Untag(val); + + // merge. 
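IsTagged, Untag and SmiTag above lean on V8's small-integer (smi) encoding. A worked, self-contained example for a 32-bit layout and nonnegative values, with assumed constants (kSmiTagSize == 1, kSmiShiftSize == 0, kSmiTagMask == 1; the real values depend on V8's configuration): smis carry a 0 in the low bit and heap pointers a 1, so a single AND distinguishes them.

#include <cassert>

static int SmiTagExample(int n) { return n << 1; }        // 21 -> 42
static int SmiUntagExample(int v) { return v >> 1; }      // 42 -> 21
static bool IsSmiExample(int v) { return (v & 1) == 0; }  // low bit clear

int main() {
  assert(SmiUntagExample(SmiTagExample(21)) == 21);
  assert(IsSmiExample(SmiTagExample(21)));
  return 0;
}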
+ Node* merge = graph()->NewNode(common()->Merge(2), tbranch, fbranch); + Node* phi = graph()->NewNode(common()->Phi(2), converted, untagged, merge); + UpdateControlSuccessors(control, merge); + branch->ReplaceInput(1, control); + node->ReplaceUses(phi); +} + + +void SimplifiedLowering::DoChangeTaggedToFloat64(Node* node, Node* effect, + Node* control) { + // if (IsTagged(input)) Load[kMachineFloat64](input, #value_offset) + // else ConvertFloat64(Untag(input)) + Node* val = node->InputAt(0); + Node* branch = graph()->NewNode(common()->Branch(), IsTagged(val), control); + + // true branch. + Node* tbranch = graph()->NewNode(common()->IfTrue(), branch); + Node* loaded = graph()->NewNode( + machine()->Load(kMachineFloat64), val, + OffsetMinusTagConstant(HeapNumber::kValueOffset), effect); + + // false branch. + Node* fbranch = graph()->NewNode(common()->IfFalse(), branch); + Node* untagged = Untag(val); + Node* converted = + graph()->NewNode(machine()->ChangeInt32ToFloat64(), untagged); + + // merge. + Node* merge = graph()->NewNode(common()->Merge(2), tbranch, fbranch); + Node* phi = graph()->NewNode(common()->Phi(2), loaded, converted, merge); + UpdateControlSuccessors(control, merge); + branch->ReplaceInput(1, control); + node->ReplaceUses(phi); +} + + +void SimplifiedLowering::DoChangeUI32ToTagged(Node* node, Node* effect, + Node* control, bool is_signed) { + Node* val = node->InputAt(0); + Node* is_smi = NULL; + if (is_signed) { + if (SmiValuesAre32Bits()) { + // All int32s fit in this case. + DCHECK(kPointerSize == 8); + return node->ReplaceUses(SmiTag(val)); + } else { + // TODO(turbofan): use an Int32AddWithOverflow to tag and check here. + Node* lt = graph()->NewNode(machine()->Int32LessThanOrEqual(), val, + jsgraph()->Int32Constant(Smi::kMaxValue)); + Node* gt = + graph()->NewNode(machine()->Int32LessThanOrEqual(), + jsgraph()->Int32Constant(Smi::kMinValue), val); + is_smi = graph()->NewNode(machine()->Word32And(), lt, gt); + } + } else { + // Check if Uint32 value is in the smi range. + is_smi = graph()->NewNode(machine()->Uint32LessThanOrEqual(), val, + jsgraph()->Int32Constant(Smi::kMaxValue)); + } + + // TODO(turbofan): fold smi test branch eagerly. + // if (IsSmi(input)) SmiTag(input); + // else InlineAllocAndInitHeapNumber(ConvertToFloat64(input))) + Node* branch = graph()->NewNode(common()->Branch(), is_smi, control); + + // true branch. + Node* tbranch = graph()->NewNode(common()->IfTrue(), branch); + Node* smi_tagged = SmiTag(val); + + // false branch. + Node* fbranch = graph()->NewNode(common()->IfFalse(), branch); + Node* heap_num = jsgraph()->Constant(0.0); // TODO(titzer): alloc and init + + // merge. + Node* merge = graph()->NewNode(common()->Merge(2), tbranch, fbranch); + Node* phi = graph()->NewNode(common()->Phi(2), smi_tagged, heap_num, merge); + UpdateControlSuccessors(control, merge); + branch->ReplaceInput(1, control); + node->ReplaceUses(phi); +} + + +void SimplifiedLowering::DoChangeFloat64ToTagged(Node* node, Node* effect, + Node* control) { + return; // TODO(titzer): need to call runtime to allocate in one branch +} + + +void SimplifiedLowering::DoChangeBoolToBit(Node* node, Node* effect, + Node* control) { + Node* cmp = graph()->NewNode(machine()->WordEqual(), node->InputAt(0), + jsgraph()->TrueConstant()); + node->ReplaceUses(cmp); +} + + +void SimplifiedLowering::DoChangeBitToBool(Node* node, Node* effect, + Node* control) { + Node* val = node->InputAt(0); + Node* branch = graph()->NewNode(common()->Branch(), val, control); + + // true branch. 
+ Node* tbranch = graph()->NewNode(common()->IfTrue(), branch); + // false branch. + Node* fbranch = graph()->NewNode(common()->IfFalse(), branch); + // merge. + Node* merge = graph()->NewNode(common()->Merge(2), tbranch, fbranch); + Node* phi = graph()->NewNode(common()->Phi(2), jsgraph()->TrueConstant(), + jsgraph()->FalseConstant(), merge); + UpdateControlSuccessors(control, merge); + branch->ReplaceInput(1, control); + node->ReplaceUses(phi); +} + + +static WriteBarrierKind ComputeWriteBarrierKind(BaseTaggedness base_is_tagged, + MachineType representation, + Type* type) { + // TODO(turbofan): skip write barriers for Smis, etc. + if (base_is_tagged == kTaggedBase && representation == kMachineTagged) { + // Write barriers are only for writes into heap objects (i.e. tagged base). + return kFullWriteBarrier; + } + return kNoWriteBarrier; +} + + +void SimplifiedLowering::DoLoadField(Node* node) { + const FieldAccess& access = FieldAccessOf(node->op()); + node->set_op(machine_.Load(access.representation)); + Node* offset = jsgraph()->Int32Constant(access.offset - access.tag()); + node->InsertInput(zone(), 1, offset); +} + + +void SimplifiedLowering::DoStoreField(Node* node) { + const FieldAccess& access = FieldAccessOf(node->op()); + WriteBarrierKind kind = ComputeWriteBarrierKind( + access.base_is_tagged, access.representation, access.type); + node->set_op(machine_.Store(access.representation, kind)); + Node* offset = jsgraph()->Int32Constant(access.offset - access.tag()); + node->InsertInput(zone(), 1, offset); +} + + +Node* SimplifiedLowering::ComputeIndex(const ElementAccess& access, + Node* index) { + int element_size = 0; + switch (access.representation) { + case kMachineTagged: + element_size = kPointerSize; + break; + case kMachineWord8: + element_size = 1; + break; + case kMachineWord16: + element_size = 2; + break; + case kMachineWord32: + element_size = 4; + break; + case kMachineWord64: + case kMachineFloat64: + element_size = 8; + break; + case kMachineLast: + UNREACHABLE(); + break; + } + if (element_size != 1) { + index = graph()->NewNode(machine()->Int32Mul(), + jsgraph()->Int32Constant(element_size), index); + } + int fixed_offset = access.header_size - access.tag(); + if (fixed_offset == 0) return index; + return graph()->NewNode(machine()->Int32Add(), index, + jsgraph()->Int32Constant(fixed_offset)); +} + + +void SimplifiedLowering::DoLoadElement(Node* node) { + const ElementAccess& access = ElementAccessOf(node->op()); + node->set_op(machine_.Load(access.representation)); + node->ReplaceInput(1, ComputeIndex(access, node->InputAt(1))); +} + + +void SimplifiedLowering::DoStoreElement(Node* node) { + const ElementAccess& access = ElementAccessOf(node->op()); + WriteBarrierKind kind = ComputeWriteBarrierKind( + access.base_is_tagged, access.representation, access.type); + node->set_op(machine_.Store(access.representation, kind)); + node->ReplaceInput(1, ComputeIndex(access, node->InputAt(1))); +} + + +void SimplifiedLowering::Lower(Node* node) {} + + +void SimplifiedLowering::LowerChange(Node* node, Node* effect, Node* control) { + switch (node->opcode()) { + case IrOpcode::kChangeTaggedToInt32: + DoChangeTaggedToUI32(node, effect, control, true); + break; + case IrOpcode::kChangeTaggedToUint32: + DoChangeTaggedToUI32(node, effect, control, false); + break; + case IrOpcode::kChangeTaggedToFloat64: + DoChangeTaggedToFloat64(node, effect, control); + break; + case IrOpcode::kChangeInt32ToTagged: + DoChangeUI32ToTagged(node, effect, control, true); + break; + case 
IrOpcode::kChangeUint32ToTagged: + DoChangeUI32ToTagged(node, effect, control, false); + break; + case IrOpcode::kChangeFloat64ToTagged: + DoChangeFloat64ToTagged(node, effect, control); + break; + case IrOpcode::kChangeBoolToBit: + DoChangeBoolToBit(node, effect, control); + break; + case IrOpcode::kChangeBitToBool: + DoChangeBitToBool(node, effect, control); + break; + default: + UNREACHABLE(); + break; + } +} + +} // namespace compiler +} // namespace internal +} // namespace v8 diff --git a/deps/v8/src/compiler/simplified-lowering.h b/deps/v8/src/compiler/simplified-lowering.h new file mode 100644 index 00000000000..c85515d9447 --- /dev/null +++ b/deps/v8/src/compiler/simplified-lowering.h @@ -0,0 +1,71 @@ +// Copyright 2014 the V8 project authors. All rights reserved. +// Use of this source code is governed by a BSD-style license that can be +// found in the LICENSE file. + +#ifndef V8_COMPILER_SIMPLIFIED_LOWERING_H_ +#define V8_COMPILER_SIMPLIFIED_LOWERING_H_ + +#include "src/compiler/graph-reducer.h" +#include "src/compiler/js-graph.h" +#include "src/compiler/lowering-builder.h" +#include "src/compiler/machine-operator.h" +#include "src/compiler/node.h" +#include "src/compiler/simplified-operator.h" + +namespace v8 { +namespace internal { +namespace compiler { + +class SimplifiedLowering : public LoweringBuilder { + public: + explicit SimplifiedLowering(JSGraph* jsgraph, + SourcePositionTable* source_positions) + : LoweringBuilder(jsgraph->graph(), source_positions), + jsgraph_(jsgraph), + machine_(jsgraph->zone()) {} + virtual ~SimplifiedLowering() {} + + void LowerAllNodes(); + + virtual void Lower(Node* node); + void LowerChange(Node* node, Node* effect, Node* control); + + // TODO(titzer): These are exposed for direct testing. Use a friend class. + void DoLoadField(Node* node); + void DoStoreField(Node* node); + void DoLoadElement(Node* node); + void DoStoreElement(Node* node); + + private: + JSGraph* jsgraph_; + MachineOperatorBuilder machine_; + + Node* SmiTag(Node* node); + Node* IsTagged(Node* node); + Node* Untag(Node* node); + Node* OffsetMinusTagConstant(int32_t offset); + Node* ComputeIndex(const ElementAccess& access, Node* index); + + void DoChangeTaggedToUI32(Node* node, Node* effect, Node* control, + bool is_signed); + void DoChangeUI32ToTagged(Node* node, Node* effect, Node* control, + bool is_signed); + void DoChangeTaggedToFloat64(Node* node, Node* effect, Node* control); + void DoChangeFloat64ToTagged(Node* node, Node* effect, Node* control); + void DoChangeBoolToBit(Node* node, Node* effect, Node* control); + void DoChangeBitToBool(Node* node, Node* effect, Node* control); + + friend class RepresentationSelector; + + Zone* zone() { return jsgraph_->zone(); } + JSGraph* jsgraph() { return jsgraph_; } + Graph* graph() { return jsgraph()->graph(); } + CommonOperatorBuilder* common() { return jsgraph()->common(); } + MachineOperatorBuilder* machine() { return &machine_; } +}; + +} // namespace compiler +} // namespace internal +} // namespace v8 + +#endif // V8_COMPILER_SIMPLIFIED_LOWERING_H_ diff --git a/deps/v8/src/compiler/simplified-node-factory.h b/deps/v8/src/compiler/simplified-node-factory.h new file mode 100644 index 00000000000..8660ce6700e --- /dev/null +++ b/deps/v8/src/compiler/simplified-node-factory.h @@ -0,0 +1,128 @@ +// Copyright 2014 the V8 project authors. All rights reserved. +// Use of this source code is governed by a BSD-style license that can be +// found in the LICENSE file. 
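+
+// Rough usage sketch (illustration only, not part of the original file;
+// MyGraphBuilder, graph_ and simplified_ are invented names).
+// SimplifiedNodeFactory below is a CRTP mixin, so a deriving class is
+// expected to supply simplified() and the NewNode() overloads that the
+// factory methods forward to:
+//
+//   class MyGraphBuilder : public SimplifiedNodeFactory<MyGraphBuilder> {
+//    public:
+//     SimplifiedOperatorBuilder* simplified() { return &simplified_; }
+//     Node* NewNode(Operator* op, Node* a) { return graph_->NewNode(op, a); }
+//     Node* NewNode(Operator* op, Node* a, Node* b) {
+//       return graph_->NewNode(op, a, b);
+//     }
+//     Node* NewNode(Operator* op, Node* a, Node* b, Node* c) {
+//       return graph_->NewNode(op, a, b, c);
+//     }
+//     // ...
+//   };
+//
+// With such a host, builder.NumberAdd(x, y) expands to
+// NewNode(simplified()->NumberAdd(), x, y).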
+ +#ifndef V8_COMPILER_SIMPLIFIED_NODE_FACTORY_H_ +#define V8_COMPILER_SIMPLIFIED_NODE_FACTORY_H_ + +#include "src/compiler/node.h" +#include "src/compiler/simplified-operator.h" + +namespace v8 { +namespace internal { +namespace compiler { + +#define SIMPLIFIED() static_cast<NodeFactory*>(this)->simplified() +#define NEW_NODE_1(op, a) static_cast<NodeFactory*>(this)->NewNode(op, a) +#define NEW_NODE_2(op, a, b) static_cast<NodeFactory*>(this)->NewNode(op, a, b) +#define NEW_NODE_3(op, a, b, c) \ + static_cast<NodeFactory*>(this)->NewNode(op, a, b, c) + +template <typename NodeFactory> +class SimplifiedNodeFactory { + public: + Node* BooleanNot(Node* a) { + return NEW_NODE_1(SIMPLIFIED()->BooleanNot(), a); + } + + Node* NumberEqual(Node* a, Node* b) { + return NEW_NODE_2(SIMPLIFIED()->NumberEqual(), a, b); + } + Node* NumberNotEqual(Node* a, Node* b) { + return NEW_NODE_2(SIMPLIFIED()->NumberNotEqual(), a, b); + } + Node* NumberLessThan(Node* a, Node* b) { + return NEW_NODE_2(SIMPLIFIED()->NumberLessThan(), a, b); + } + Node* NumberLessThanOrEqual(Node* a, Node* b) { + return NEW_NODE_2(SIMPLIFIED()->NumberLessThanOrEqual(), a, b); + } + Node* NumberAdd(Node* a, Node* b) { + return NEW_NODE_2(SIMPLIFIED()->NumberAdd(), a, b); + } + Node* NumberSubtract(Node* a, Node* b) { + return NEW_NODE_2(SIMPLIFIED()->NumberSubtract(), a, b); + } + Node* NumberMultiply(Node* a, Node* b) { + return NEW_NODE_2(SIMPLIFIED()->NumberMultiply(), a, b); + } + Node* NumberDivide(Node* a, Node* b) { + return NEW_NODE_2(SIMPLIFIED()->NumberDivide(), a, b); + } + Node* NumberModulus(Node* a, Node* b) { + return NEW_NODE_2(SIMPLIFIED()->NumberModulus(), a, b); + } + Node* NumberToInt32(Node* a) { + return NEW_NODE_1(SIMPLIFIED()->NumberToInt32(), a); + } + Node* NumberToUint32(Node* a) { + return NEW_NODE_1(SIMPLIFIED()->NumberToUint32(), a); + } + + Node* ReferenceEqual(Type* type, Node* a, Node* b) { + return NEW_NODE_2(SIMPLIFIED()->ReferenceEqual(), a, b); + } + + Node* StringEqual(Node* a, Node* b) { + return NEW_NODE_2(SIMPLIFIED()->StringEqual(), a, b); + } + Node* StringLessThan(Node* a, Node* b) { + return NEW_NODE_2(SIMPLIFIED()->StringLessThan(), a, b); + } + Node* StringLessThanOrEqual(Node* a, Node* b) { + return NEW_NODE_2(SIMPLIFIED()->StringLessThanOrEqual(), a, b); + } + Node* StringAdd(Node* a, Node* b) { + return NEW_NODE_2(SIMPLIFIED()->StringAdd(), a, b); + } + + Node* ChangeTaggedToInt32(Node* a) { + return NEW_NODE_1(SIMPLIFIED()->ChangeTaggedToInt32(), a); + } + Node* ChangeTaggedToUint32(Node* a) { + return NEW_NODE_1(SIMPLIFIED()->ChangeTaggedToUint32(), a); + } + Node* ChangeTaggedToFloat64(Node* a) { + return NEW_NODE_1(SIMPLIFIED()->ChangeTaggedToFloat64(), a); + } + Node* ChangeInt32ToTagged(Node* a) { + return NEW_NODE_1(SIMPLIFIED()->ChangeInt32ToTagged(), a); + } + Node* ChangeUint32ToTagged(Node* a) { + return NEW_NODE_1(SIMPLIFIED()->ChangeUint32ToTagged(), a); + } + Node* ChangeFloat64ToTagged(Node* a) { + return NEW_NODE_1(SIMPLIFIED()->ChangeFloat64ToTagged(), a); + } + Node* ChangeBoolToBit(Node* a) { + return NEW_NODE_1(SIMPLIFIED()->ChangeBoolToBit(), a); + } + Node* ChangeBitToBool(Node* a) { + return NEW_NODE_1(SIMPLIFIED()->ChangeBitToBool(), a); + } + + Node* LoadField(const FieldAccess& access, Node* object) { + return NEW_NODE_1(SIMPLIFIED()->LoadField(access), object); + } + Node* StoreField(const FieldAccess& access, Node* object, Node* value) { + return NEW_NODE_2(SIMPLIFIED()->StoreField(access), object, value); + } + Node* LoadElement(const ElementAccess& access, 
Node* object, Node* index) {
+ return NEW_NODE_2(SIMPLIFIED()->LoadElement(access), object, index);
+ }
+ Node* StoreElement(const ElementAccess& access, Node* object, Node* index,
+ Node* value) {
+ return NEW_NODE_3(SIMPLIFIED()->StoreElement(access), object, index, value);
+ }
+};
+
+#undef NEW_NODE_1
+#undef NEW_NODE_2
+#undef NEW_NODE_3
+#undef SIMPLIFIED
+
+} // namespace compiler
+} // namespace internal
+} // namespace v8
+
+#endif // V8_COMPILER_SIMPLIFIED_NODE_FACTORY_H_
diff --git a/deps/v8/src/compiler/simplified-operator.h b/deps/v8/src/compiler/simplified-operator.h
new file mode 100644
index 00000000000..9cf08c37046
--- /dev/null
+++ b/deps/v8/src/compiler/simplified-operator.h
@@ -0,0 +1,189 @@
+// Copyright 2014 the V8 project authors. All rights reserved.
+// Use of this source code is governed by a BSD-style license that can be
+// found in the LICENSE file.
+
+#ifndef V8_COMPILER_SIMPLIFIED_OPERATOR_H_
+#define V8_COMPILER_SIMPLIFIED_OPERATOR_H_
+
+#include "src/compiler/machine-operator.h"
+#include "src/compiler/opcodes.h"
+#include "src/zone.h"
+
+namespace v8 {
+namespace internal {
+namespace compiler {
+
+enum BaseTaggedness { kUntaggedBase, kTaggedBase };
+
+// An access descriptor for loads/stores of fixed structures like field
+// accesses of heap objects. Accesses from either tagged or untagged base
+// pointers are supported; untagging is done automatically during lowering.
+struct FieldAccess {
+ BaseTaggedness base_is_tagged; // specifies if the base pointer is tagged.
+ int offset; // offset of the field, without tag.
+ Handle<Name> name; // debugging only.
+ Type* type; // type of the field.
+ MachineType representation; // machine representation of field.
+
+ int tag() const { return base_is_tagged == kTaggedBase ? kHeapObjectTag : 0; }
+};
+
+
+// An access descriptor for loads/stores of indexed structures like characters
+// in strings or off-heap backing stores. Accesses from either tagged or
+// untagged base pointers are supported; untagging is done automatically during
+// lowering.
+struct ElementAccess {
+ BaseTaggedness base_is_tagged; // specifies if the base pointer is tagged.
+ int header_size; // size of the header, without tag.
+ Type* type; // type of the element.
+ MachineType representation; // machine representation of element.
+
+ int tag() const { return base_is_tagged == kTaggedBase ? kHeapObjectTag : 0; }
+};
+
+
+// If the accessed object is not a heap object, add this to the header_size.
+static const int kNonHeapObjectHeaderSize = kHeapObjectTag;
+
+
+// Specialization for static parameters of type {FieldAccess}.
+template <>
+struct StaticParameterTraits<const FieldAccess> {
+ static OStream& PrintTo(OStream& os, const FieldAccess& val) { // NOLINT
+ return os << val.offset;
+ }
+ static int HashCode(const FieldAccess& val) {
+ return (val.offset << 16) | (val.representation & 0xffff);
+ }
+ static bool Equals(const FieldAccess& a, const FieldAccess& b) {
+ return a.base_is_tagged == b.base_is_tagged && a.offset == b.offset &&
+ a.representation == b.representation && a.type->Is(b.type);
+ }
+};
+
+
+// Specialization for static parameters of type {ElementAccess}.
+template <>
+struct StaticParameterTraits<const ElementAccess> {
+ static OStream& PrintTo(OStream& os, const ElementAccess& val) { // NOLINT
+ return os << val.header_size;
+ }
+ static int HashCode(const ElementAccess& val) {
+ return (val.header_size << 16) | (val.representation & 0xffff);
+ }
+ static bool Equals(const ElementAccess& a, const ElementAccess& b) {
+ return a.base_is_tagged == b.base_is_tagged &&
+ a.header_size == b.header_size &&
+ a.representation == b.representation && a.type->Is(b.type);
+ }
+};
+
+
+inline const FieldAccess FieldAccessOf(Operator* op) {
+ DCHECK(op->opcode() == IrOpcode::kLoadField ||
+ op->opcode() == IrOpcode::kStoreField);
+ return static_cast<Operator1<FieldAccess>*>(op)->parameter();
+}
+
+
+inline const ElementAccess ElementAccessOf(Operator* op) {
+ DCHECK(op->opcode() == IrOpcode::kLoadElement ||
+ op->opcode() == IrOpcode::kStoreElement);
+ return static_cast<Operator1<ElementAccess>*>(op)->parameter();
+}
+
+
+// Interface for building simplified operators, which represent the
+// medium-level operations of V8, including adding numbers, allocating objects,
+// indexing into objects and arrays, etc.
+// All operators are typed but many are representation independent.
+
+// Number values from JS can be in one of these representations:
+// - Tagged: word-sized integer that is either
+// - a signed small integer (31 or 32 bits plus a tag)
+// - a tagged pointer to a HeapNumber object that has a float64 field
+// - Int32: an untagged signed 32-bit integer
+// - Uint32: an untagged unsigned 32-bit integer
+// - Float64: an untagged float64
+
+// Additional representations for intermediate code or non-JS code:
+// - Int64: an untagged signed 64-bit integer
+// - Uint64: an untagged unsigned 64-bit integer
+// - Float32: an untagged float32
+
+// Boolean values can be:
+// - Bool: a tagged pointer to either the canonical JS #false or
+// the canonical JS #true object
+// - Bit: an untagged integer 0 or 1, but word-sized
+class SimplifiedOperatorBuilder {
+ public:
+ explicit inline SimplifiedOperatorBuilder(Zone* zone) : zone_(zone) {}
+
+#define SIMPLE(name, properties, inputs, outputs) \
+ return new (zone_) \
+ SimpleOperator(IrOpcode::k##name, properties, inputs, outputs, #name);
+
+#define OP1(name, ptype, pname, properties, inputs, outputs) \
+ return new (zone_) \
+ Operator1<ptype>(IrOpcode::k##name, properties | Operator::kNoThrow, \
+ inputs, outputs, #name, pname)
+
+#define UNOP(name) SIMPLE(name, Operator::kPure, 1, 1)
+#define BINOP(name) SIMPLE(name, Operator::kPure, 2, 1)
+
+ Operator* BooleanNot() const { UNOP(BooleanNot); }
+
+ Operator* NumberEqual() const { BINOP(NumberEqual); }
+ Operator* NumberLessThan() const { BINOP(NumberLessThan); }
+ Operator* NumberLessThanOrEqual() const { BINOP(NumberLessThanOrEqual); }
+ Operator* NumberAdd() const { BINOP(NumberAdd); }
+ Operator* NumberSubtract() const { BINOP(NumberSubtract); }
+ Operator* NumberMultiply() const { BINOP(NumberMultiply); }
+ Operator* NumberDivide() const { BINOP(NumberDivide); }
+ Operator* NumberModulus() const { BINOP(NumberModulus); }
+ Operator* NumberToInt32() const { UNOP(NumberToInt32); }
+ Operator* NumberToUint32() const { UNOP(NumberToUint32); }
+
+ Operator* ReferenceEqual(Type* type) const { BINOP(ReferenceEqual); }
+
+ Operator* StringEqual() const { BINOP(StringEqual); }
+ Operator* StringLessThan() const { BINOP(StringLessThan); }
+ Operator* StringLessThanOrEqual() const { BINOP(StringLessThanOrEqual); }
+ Operator* StringAdd() const { 
BINOP(StringAdd); } + + Operator* ChangeTaggedToInt32() const { UNOP(ChangeTaggedToInt32); } + Operator* ChangeTaggedToUint32() const { UNOP(ChangeTaggedToUint32); } + Operator* ChangeTaggedToFloat64() const { UNOP(ChangeTaggedToFloat64); } + Operator* ChangeInt32ToTagged() const { UNOP(ChangeInt32ToTagged); } + Operator* ChangeUint32ToTagged() const { UNOP(ChangeUint32ToTagged); } + Operator* ChangeFloat64ToTagged() const { UNOP(ChangeFloat64ToTagged); } + Operator* ChangeBoolToBit() const { UNOP(ChangeBoolToBit); } + Operator* ChangeBitToBool() const { UNOP(ChangeBitToBool); } + + Operator* LoadField(const FieldAccess& access) const { + OP1(LoadField, FieldAccess, access, Operator::kNoWrite, 1, 1); + } + Operator* StoreField(const FieldAccess& access) const { + OP1(StoreField, FieldAccess, access, Operator::kNoRead, 2, 0); + } + Operator* LoadElement(const ElementAccess& access) const { + OP1(LoadElement, ElementAccess, access, Operator::kNoWrite, 2, 1); + } + Operator* StoreElement(const ElementAccess& access) const { + OP1(StoreElement, ElementAccess, access, Operator::kNoRead, 3, 0); + } + +#undef BINOP +#undef UNOP +#undef OP1 +#undef SIMPLE + + private: + Zone* zone_; +}; +} +} +} // namespace v8::internal::compiler + +#endif // V8_COMPILER_SIMPLIFIED_OPERATOR_H_ diff --git a/deps/v8/src/compiler/source-position.cc b/deps/v8/src/compiler/source-position.cc new file mode 100644 index 00000000000..11783900a0a --- /dev/null +++ b/deps/v8/src/compiler/source-position.cc @@ -0,0 +1,55 @@ +// Copyright 2014 the V8 project authors. All rights reserved. +// Use of this source code is governed by a BSD-style license that can be +// found in the LICENSE file. + +#include "src/compiler/source-position.h" +#include "src/compiler/graph.h" +#include "src/compiler/node-aux-data-inl.h" + +namespace v8 { +namespace internal { +namespace compiler { + +class SourcePositionTable::Decorator : public GraphDecorator { + public: + explicit Decorator(SourcePositionTable* source_positions) + : source_positions_(source_positions) {} + + virtual void Decorate(Node* node) { + DCHECK(!source_positions_->current_position_.IsInvalid()); + source_positions_->table_.Set(node, source_positions_->current_position_); + } + + private: + SourcePositionTable* source_positions_; +}; + + +SourcePositionTable::SourcePositionTable(Graph* graph) + : graph_(graph), + decorator_(NULL), + current_position_(SourcePosition::Invalid()), + table_(graph->zone()) {} + + +void SourcePositionTable::AddDecorator() { + DCHECK(decorator_ == NULL); + decorator_ = new (graph_->zone()) Decorator(this); + graph_->AddDecorator(decorator_); +} + + +void SourcePositionTable::RemoveDecorator() { + DCHECK(decorator_ != NULL); + graph_->RemoveDecorator(decorator_); + decorator_ = NULL; +} + + +SourcePosition SourcePositionTable::GetSourcePosition(Node* node) { + return table_.Get(node); +} + +} // namespace compiler +} // namespace internal +} // namespace v8 diff --git a/deps/v8/src/compiler/source-position.h b/deps/v8/src/compiler/source-position.h new file mode 100644 index 00000000000..b81582fd99e --- /dev/null +++ b/deps/v8/src/compiler/source-position.h @@ -0,0 +1,99 @@ +// Copyright 2014 the V8 project authors. All rights reserved. +// Use of this source code is governed by a BSD-style license that can be +// found in the LICENSE file. 
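+
+// Rough usage sketch (illustration only, not part of the original file; the
+// position value and the node creation are invented). A builder brackets node
+// creation with a Scope so that the table's decorator stamps every node
+// created while the scope is active:
+//
+//   SourcePositionTable table(graph);
+//   table.AddDecorator();
+//   {
+//     SourcePositionTable::Scope scope(&table, SourcePosition(42));
+//     Node* n = graph->NewNode(op);  // recorded at source position 42
+//   }
+//   table.RemoveDecorator();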
+ +#ifndef V8_COMPILER_SOURCE_POSITION_H_ +#define V8_COMPILER_SOURCE_POSITION_H_ + +#include "src/assembler.h" +#include "src/compiler/node-aux-data.h" + +namespace v8 { +namespace internal { +namespace compiler { + +// Encapsulates encoding and decoding of sources positions from which Nodes +// originated. +class SourcePosition V8_FINAL { + public: + explicit SourcePosition(int raw = kUnknownPosition) : raw_(raw) {} + + static SourcePosition Unknown() { return SourcePosition(kUnknownPosition); } + bool IsUnknown() const { return raw() == kUnknownPosition; } + + static SourcePosition Invalid() { return SourcePosition(kInvalidPosition); } + bool IsInvalid() const { return raw() == kInvalidPosition; } + + int raw() const { return raw_; } + + private: + static const int kInvalidPosition = -2; + static const int kUnknownPosition = RelocInfo::kNoPosition; + STATIC_ASSERT(kInvalidPosition != kUnknownPosition); + int raw_; +}; + + +inline bool operator==(const SourcePosition& lhs, const SourcePosition& rhs) { + return lhs.raw() == rhs.raw(); +} + +inline bool operator!=(const SourcePosition& lhs, const SourcePosition& rhs) { + return !(lhs == rhs); +} + + +class SourcePositionTable V8_FINAL { + public: + class Scope { + public: + Scope(SourcePositionTable* source_positions, SourcePosition position) + : source_positions_(source_positions), + prev_position_(source_positions->current_position_) { + Init(position); + } + Scope(SourcePositionTable* source_positions, Node* node) + : source_positions_(source_positions), + prev_position_(source_positions->current_position_) { + Init(source_positions_->GetSourcePosition(node)); + } + ~Scope() { source_positions_->current_position_ = prev_position_; } + + private: + void Init(SourcePosition position) { + if (!position.IsUnknown() || prev_position_.IsInvalid()) { + source_positions_->current_position_ = position; + } + } + + SourcePositionTable* source_positions_; + SourcePosition prev_position_; + DISALLOW_COPY_AND_ASSIGN(Scope); + }; + + explicit SourcePositionTable(Graph* graph); + ~SourcePositionTable() { + if (decorator_ != NULL) RemoveDecorator(); + } + + void AddDecorator(); + void RemoveDecorator(); + + SourcePosition GetSourcePosition(Node* node); + + private: + class Decorator; + + Graph* graph_; + Decorator* decorator_; + SourcePosition current_position_; + NodeAuxData<SourcePosition> table_; + + DISALLOW_COPY_AND_ASSIGN(SourcePositionTable); +}; + +} // namespace compiler +} // namespace internal +} // namespace v8 + +#endif diff --git a/deps/v8/src/compiler/structured-machine-assembler.cc b/deps/v8/src/compiler/structured-machine-assembler.cc new file mode 100644 index 00000000000..dbf2134a1f6 --- /dev/null +++ b/deps/v8/src/compiler/structured-machine-assembler.cc @@ -0,0 +1,664 @@ +// Copyright 2014 the V8 project authors. All rights reserved. +// Use of this source code is governed by a BSD-style license that can be +// found in the LICENSE file. 
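+
+// Rough usage sketch (illustration only, not part of the original file; the
+// counting loop is invented, and Int32Constant/Int32Add are assumed to come
+// from the MachineNodeFactory mixin):
+//
+//   StructuredMachineAssembler m(graph, call_descriptor_builder);
+//   Variable i = m.NewVariable(m.Int32Constant(0));
+//   {
+//     StructuredMachineAssembler::LoopBuilder loop(&m);
+//     // ... branch on i.Get() and call loop.Break() to exit ...
+//     i.Set(m.Int32Add(i.Get(), m.Int32Constant(1)));
+//   }  // ~LoopBuilder runs End(), merging back edges into the loop header.
+//   m.Return(i.Get());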
+ +#include "src/compiler/pipeline.h" +#include "src/compiler/scheduler.h" +#include "src/compiler/structured-machine-assembler.h" + +namespace v8 { +namespace internal { +namespace compiler { + +Node* Variable::Get() const { return smasm_->GetVariable(offset_); } + + +void Variable::Set(Node* value) const { smasm_->SetVariable(offset_, value); } + + +StructuredMachineAssembler::StructuredMachineAssembler( + Graph* graph, MachineCallDescriptorBuilder* call_descriptor_builder, + MachineType word) + : GraphBuilder(graph), + schedule_(new (zone()) Schedule(zone())), + machine_(zone(), word), + common_(zone()), + call_descriptor_builder_(call_descriptor_builder), + parameters_(NULL), + current_environment_(new (zone()) + Environment(zone(), schedule()->entry(), false)), + number_of_variables_(0) { + Node* s = graph->NewNode(common_.Start(parameter_count())); + graph->SetStart(s); + if (parameter_count() == 0) return; + parameters_ = zone()->NewArray<Node*>(parameter_count()); + for (int i = 0; i < parameter_count(); ++i) { + parameters_[i] = NewNode(common()->Parameter(i), graph->start()); + } +} + + +Schedule* StructuredMachineAssembler::Export() { + // Compute the correct codegen order. + DCHECK(schedule_->rpo_order()->empty()); + Scheduler::ComputeSpecialRPO(schedule_); + // Invalidate MachineAssembler. + Schedule* schedule = schedule_; + schedule_ = NULL; + return schedule; +} + + +Node* StructuredMachineAssembler::Parameter(int index) { + DCHECK(0 <= index && index < parameter_count()); + return parameters_[index]; +} + + +Node* StructuredMachineAssembler::MakeNode(Operator* op, int input_count, + Node** inputs) { + DCHECK(ScheduleValid()); + DCHECK(current_environment_ != NULL); + Node* node = graph()->NewNode(op, input_count, inputs); + BasicBlock* block = NULL; + switch (op->opcode()) { + case IrOpcode::kParameter: + case IrOpcode::kInt32Constant: + case IrOpcode::kInt64Constant: + case IrOpcode::kFloat64Constant: + case IrOpcode::kExternalConstant: + case IrOpcode::kNumberConstant: + case IrOpcode::kHeapConstant: + // Parameters and constants must be in start. + block = schedule()->start(); + break; + default: + // Verify all leaf nodes handled above. + DCHECK((op->OutputCount() == 0) == (op->opcode() == IrOpcode::kStore)); + block = current_environment_->block_; + break; + } + if (block != NULL) { + schedule()->AddNode(block, node); + } + return node; +} + + +Variable StructuredMachineAssembler::NewVariable(Node* initial_value) { + CHECK(initial_value != NULL); + int offset = number_of_variables_++; + // Extend current environment to correct number of values. + NodeVector* variables = CurrentVars(); + size_t to_add = number_of_variables_ - variables->size(); + if (to_add != 0) { + variables->reserve(number_of_variables_); + variables->insert(variables->end(), to_add, NULL); + } + variables->at(offset) = initial_value; + return Variable(this, offset); +} + + +Node* StructuredMachineAssembler::GetVariable(int offset) { + DCHECK(ScheduleValid()); + return VariableAt(current_environment_, offset); +} + + +void StructuredMachineAssembler::SetVariable(int offset, Node* value) { + DCHECK(ScheduleValid()); + Node*& ref = VariableAt(current_environment_, offset); + ref = value; +} + + +Node*& StructuredMachineAssembler::VariableAt(Environment* environment, + int32_t offset) { + // Variable used out of scope. + CHECK(static_cast<size_t>(offset) < environment->variables_.size()); + Node*& value = environment->variables_.at(offset); + CHECK(value != NULL); // Variable used out of scope. 
+ return value;
+}
+
+
+void StructuredMachineAssembler::Return(Node* value) {
+ BasicBlock* block = current_environment_->block_;
+ if (block != NULL) {
+ schedule()->AddReturn(block, value);
+ }
+ CopyCurrentAsDead();
+}
+
+
+void StructuredMachineAssembler::CopyCurrentAsDead() {
+ DCHECK(current_environment_ != NULL);
+ bool is_dead = current_environment_->is_dead_;
+ current_environment_->is_dead_ = true;
+ Environment* next = Copy(current_environment_);
+ current_environment_->is_dead_ = is_dead;
+ current_environment_ = next;
+}
+
+
+StructuredMachineAssembler::Environment* StructuredMachineAssembler::Copy(
+ Environment* env, int truncate_at) {
+ Environment* new_env = new (zone()) Environment(zone(), NULL, env->is_dead_);
+ if (!new_env->is_dead_) {
+ new_env->block_ = schedule()->NewBasicBlock();
+ }
+ new_env->variables_.reserve(truncate_at);
+ NodeVectorIter end = env->variables_.end();
+ DCHECK(truncate_at <= static_cast<int>(env->variables_.size()));
+ end -= static_cast<int>(env->variables_.size()) - truncate_at;
+ new_env->variables_.insert(new_env->variables_.begin(),
+ env->variables_.begin(), end);
+ return new_env;
+}
+
+
+StructuredMachineAssembler::Environment*
+StructuredMachineAssembler::CopyForLoopHeader(Environment* env) {
+ Environment* new_env = new (zone()) Environment(zone(), NULL, env->is_dead_);
+ if (!new_env->is_dead_) {
+ new_env->block_ = schedule()->NewBasicBlock();
+ }
+ new_env->variables_.reserve(env->variables_.size());
+ for (NodeVectorIter i = env->variables_.begin(); i != env->variables_.end();
+ ++i) {
+ Node* phi = NULL;
+ if (*i != NULL) {
+ phi = graph()->NewNode(common()->Phi(1), *i);
+ if (new_env->block_ != NULL) {
+ schedule()->AddNode(new_env->block_, phi);
+ }
+ }
+ new_env->variables_.push_back(phi);
+ }
+ return new_env;
+}
+
+
+void StructuredMachineAssembler::MergeBackEdgesToLoopHeader(
+ Environment* header, EnvironmentVector* environments) {
+ // Only merge as many variables as were declared before this loop.
+ int n = static_cast<int>(header->variables_.size());
+ // TODO(dcarney): invert loop order and extend phis once.
+ for (EnvironmentVector::iterator i = environments->begin();
+ i != environments->end(); ++i) {
+ Environment* from = *i;
+ if (from->is_dead_) continue;
+ AddGoto(from, header);
+ for (int i = 0; i < n; ++i) {
+ Node* phi = header->variables_[i];
+ if (phi == NULL) continue;
+ phi->set_op(common()->Phi(phi->InputCount() + 1));
+ phi->AppendInput(zone(), VariableAt(from, i));
+ }
+ }
+}
+
+
+void StructuredMachineAssembler::Merge(EnvironmentVector* environments,
+ int truncate_at) {
+ DCHECK(current_environment_ == NULL || current_environment_->is_dead_);
+ Environment* next = new (zone()) Environment(zone(), NULL, false);
+ current_environment_ = next;
+ size_t n_vars = number_of_variables_;
+ NodeVector& vars = next->variables_;
+ vars.reserve(n_vars);
+ Node** scratch = NULL;
+ size_t n_envs = environments->size();
+ Environment** live_environments = reinterpret_cast<Environment**>(
+ alloca(sizeof(environments->at(0)) * n_envs));
+ size_t n_live = 0;
+ for (size_t i = 0; i < n_envs; i++) {
+ if (environments->at(i)->is_dead_) continue;
+ live_environments[n_live++] = environments->at(i);
+ }
+ n_envs = n_live;
+ if (n_live == 0) next->is_dead_ = true;
+ if (!next->is_dead_) {
+ next->block_ = schedule()->NewBasicBlock();
+ }
+ for (size_t j = 0; j < n_vars; ++j) {
+ Node* resolved = NULL;
+ // Find first non-equal variable. 
+ size_t i = 0; + for (; i < n_envs; i++) { + DCHECK(live_environments[i]->variables_.size() <= n_vars); + Node* val = NULL; + if (j < static_cast<size_t>(truncate_at)) { + val = live_environments[i]->variables_.at(j); + // TODO(dcarney): record start position at time of split. + // all variables after this should not be NULL. + if (val != NULL) { + val = VariableAt(live_environments[i], static_cast<int>(j)); + } + } + if (val == resolved) continue; + if (i != 0) break; + resolved = val; + } + // Have to generate a phi. + if (i < n_envs) { + // All values thus far uninitialized, variable used out of scope. + CHECK(resolved != NULL); + // Init scratch buffer. + if (scratch == NULL) { + scratch = static_cast<Node**>(alloca(n_envs * sizeof(resolved))); + } + for (size_t k = 0; k < i; k++) { + scratch[k] = resolved; + } + for (; i < n_envs; i++) { + scratch[i] = live_environments[i]->variables_[j]; + } + resolved = graph()->NewNode(common()->Phi(static_cast<int>(n_envs)), + static_cast<int>(n_envs), scratch); + if (next->block_ != NULL) { + schedule()->AddNode(next->block_, resolved); + } + } + vars.push_back(resolved); + } +} + + +void StructuredMachineAssembler::AddGoto(Environment* from, Environment* to) { + if (to->is_dead_) { + DCHECK(from->is_dead_); + return; + } + DCHECK(!from->is_dead_); + schedule()->AddGoto(from->block_, to->block_); +} + + +// TODO(dcarney): add pass before rpo to schedule to compute these. +BasicBlock* StructuredMachineAssembler::TrampolineFor(BasicBlock* block) { + BasicBlock* trampoline = schedule()->NewBasicBlock(); + schedule()->AddGoto(trampoline, block); + return trampoline; +} + + +void StructuredMachineAssembler::AddBranch(Environment* environment, + Node* condition, + Environment* true_val, + Environment* false_val) { + DCHECK(environment->is_dead_ == true_val->is_dead_); + DCHECK(environment->is_dead_ == false_val->is_dead_); + if (true_val->block_ == false_val->block_) { + if (environment->is_dead_) return; + AddGoto(environment, true_val); + return; + } + Node* branch = graph()->NewNode(common()->Branch(), condition); + if (environment->is_dead_) return; + BasicBlock* true_block = TrampolineFor(true_val->block_); + BasicBlock* false_block = TrampolineFor(false_val->block_); + schedule()->AddBranch(environment->block_, branch, true_block, false_block); +} + + +StructuredMachineAssembler::Environment::Environment(Zone* zone, + BasicBlock* block, + bool is_dead) + : block_(block), + variables_(NodeVector::allocator_type(zone)), + is_dead_(is_dead) {} + + +StructuredMachineAssembler::IfBuilder::IfBuilder( + StructuredMachineAssembler* smasm) + : smasm_(smasm), + if_clauses_(IfClauses::allocator_type(smasm_->zone())), + pending_exit_merges_(EnvironmentVector::allocator_type(smasm_->zone())) { + DCHECK(smasm_->current_environment_ != NULL); + PushNewIfClause(); + DCHECK(!IsDone()); +} + + +StructuredMachineAssembler::IfBuilder& +StructuredMachineAssembler::IfBuilder::If() { + DCHECK(smasm_->current_environment_ != NULL); + IfClause* clause = CurrentClause(); + if (clause->then_environment_ != NULL || clause->else_environment_ != NULL) { + PushNewIfClause(); + } + return *this; +} + + +StructuredMachineAssembler::IfBuilder& +StructuredMachineAssembler::IfBuilder::If(Node* condition) { + If(); + IfClause* clause = CurrentClause(); + // Store branch for future resolution. 
+ UnresolvedBranch* next = new (smasm_->zone()) + UnresolvedBranch(smasm_->current_environment_, condition, NULL); + if (clause->unresolved_list_tail_ != NULL) { + clause->unresolved_list_tail_->next_ = next; + } + clause->unresolved_list_tail_ = next; + // Push onto merge queues. + clause->pending_else_merges_.push_back(next); + clause->pending_then_merges_.push_back(next); + smasm_->current_environment_ = NULL; + return *this; +} + + +void StructuredMachineAssembler::IfBuilder::And() { + CurrentClause()->ResolvePendingMerges(smasm_, kCombineThen, kExpressionTerm); +} + + +void StructuredMachineAssembler::IfBuilder::Or() { + CurrentClause()->ResolvePendingMerges(smasm_, kCombineElse, kExpressionTerm); +} + + +void StructuredMachineAssembler::IfBuilder::Then() { + CurrentClause()->ResolvePendingMerges(smasm_, kCombineThen, kExpressionDone); +} + + +void StructuredMachineAssembler::IfBuilder::Else() { + AddCurrentToPending(); + CurrentClause()->ResolvePendingMerges(smasm_, kCombineElse, kExpressionDone); +} + + +void StructuredMachineAssembler::IfBuilder::AddCurrentToPending() { + if (smasm_->current_environment_ != NULL && + !smasm_->current_environment_->is_dead_) { + pending_exit_merges_.push_back(smasm_->current_environment_); + } + smasm_->current_environment_ = NULL; +} + + +void StructuredMachineAssembler::IfBuilder::PushNewIfClause() { + int curr_size = + static_cast<int>(smasm_->current_environment_->variables_.size()); + IfClause* clause = new (smasm_->zone()) IfClause(smasm_->zone(), curr_size); + if_clauses_.push_back(clause); +} + + +StructuredMachineAssembler::IfBuilder::IfClause::IfClause( + Zone* zone, int initial_environment_size) + : unresolved_list_tail_(NULL), + initial_environment_size_(initial_environment_size), + expression_states_(ExpressionStates::allocator_type(zone)), + pending_then_merges_(PendingMergeStack::allocator_type(zone)), + pending_else_merges_(PendingMergeStack::allocator_type(zone)), + then_environment_(NULL), + else_environment_(NULL) { + PushNewExpressionState(); +} + + +StructuredMachineAssembler::IfBuilder::PendingMergeStackRange +StructuredMachineAssembler::IfBuilder::IfClause::ComputeRelevantMerges( + CombineType combine_type) { + DCHECK(!expression_states_.empty()); + PendingMergeStack* stack; + int start; + if (combine_type == kCombineThen) { + stack = &pending_then_merges_; + start = expression_states_.back().pending_then_size_; + } else { + DCHECK(combine_type == kCombineElse); + stack = &pending_else_merges_; + start = expression_states_.back().pending_else_size_; + } + PendingMergeStackRange data; + data.merge_stack_ = stack; + data.start_ = start; + data.size_ = static_cast<int>(stack->size()) - start; + return data; +} + + +void StructuredMachineAssembler::IfBuilder::IfClause::ResolvePendingMerges( + StructuredMachineAssembler* smasm, CombineType combine_type, + ResolutionType resolution_type) { + DCHECK(smasm->current_environment_ == NULL); + PendingMergeStackRange data = ComputeRelevantMerges(combine_type); + DCHECK_EQ(data.merge_stack_->back(), unresolved_list_tail_); + DCHECK(data.size_ > 0); + // TODO(dcarney): assert no new variables created during expression building. + int truncate_at = initial_environment_size_; + if (data.size_ == 1) { + // Just copy environment in common case. 
+ smasm->current_environment_ = + smasm->Copy(unresolved_list_tail_->environment_, truncate_at); + } else { + EnvironmentVector environments( + EnvironmentVector::allocator_type(smasm->zone())); + environments.reserve(data.size_); + CopyEnvironments(data, &environments); + DCHECK(static_cast<int>(environments.size()) == data.size_); + smasm->Merge(&environments, truncate_at); + } + Environment* then_environment = then_environment_; + Environment* else_environment = NULL; + if (resolution_type == kExpressionDone) { + DCHECK(expression_states_.size() == 1); + // Set the current then_ or else_environment_ to the new merged environment. + if (combine_type == kCombineThen) { + DCHECK(then_environment_ == NULL && else_environment_ == NULL); + this->then_environment_ = smasm->current_environment_; + } else { + DCHECK(else_environment_ == NULL); + this->else_environment_ = smasm->current_environment_; + } + } else { + DCHECK(resolution_type == kExpressionTerm); + DCHECK(then_environment_ == NULL && else_environment_ == NULL); + } + if (combine_type == kCombineThen) { + then_environment = smasm->current_environment_; + } else { + DCHECK(combine_type == kCombineElse); + else_environment = smasm->current_environment_; + } + // Finalize branches and clear the pending stack. + FinalizeBranches(smasm, data, combine_type, then_environment, + else_environment); +} + + +void StructuredMachineAssembler::IfBuilder::IfClause::CopyEnvironments( + const PendingMergeStackRange& data, EnvironmentVector* environments) { + PendingMergeStack::iterator i = data.merge_stack_->begin(); + PendingMergeStack::iterator end = data.merge_stack_->end(); + for (i += data.start_; i != end; ++i) { + environments->push_back((*i)->environment_); + } +} + + +void StructuredMachineAssembler::IfBuilder::IfClause::PushNewExpressionState() { + ExpressionState next; + next.pending_then_size_ = static_cast<int>(pending_then_merges_.size()); + next.pending_else_size_ = static_cast<int>(pending_else_merges_.size()); + expression_states_.push_back(next); +} + + +void StructuredMachineAssembler::IfBuilder::IfClause::PopExpressionState() { + expression_states_.pop_back(); + DCHECK(!expression_states_.empty()); +} + + +void StructuredMachineAssembler::IfBuilder::IfClause::FinalizeBranches( + StructuredMachineAssembler* smasm, const PendingMergeStackRange& data, + CombineType combine_type, Environment* const then_environment, + Environment* const else_environment) { + DCHECK(unresolved_list_tail_ != NULL); + DCHECK(smasm->current_environment_ != NULL); + if (data.size_ == 0) return; + PendingMergeStack::iterator curr = data.merge_stack_->begin(); + PendingMergeStack::iterator end = data.merge_stack_->end(); + // Finalize everything but the head first, + // in the order the branches enter the merge block. + end -= 1; + Environment* true_val = then_environment; + Environment* false_val = else_environment; + Environment** next; + if (combine_type == kCombineThen) { + next = &false_val; + } else { + DCHECK(combine_type == kCombineElse); + next = &true_val; + } + for (curr += data.start_; curr != end; ++curr) { + UnresolvedBranch* branch = *curr; + *next = branch->next_->environment_; + smasm->AddBranch(branch->environment_, branch->condition_, true_val, + false_val); + } + DCHECK(curr + 1 == data.merge_stack_->end()); + // Now finalize the tail if possible. 
+ if (then_environment != NULL && else_environment != NULL) {
+ UnresolvedBranch* branch = *curr;
+ smasm->AddBranch(branch->environment_, branch->condition_, then_environment,
+ else_environment);
+ }
+ // Clear the merge stack.
+ PendingMergeStack::iterator begin = data.merge_stack_->begin();
+ begin += data.start_;
+ data.merge_stack_->erase(begin, data.merge_stack_->end());
+ DCHECK_EQ(static_cast<int>(data.merge_stack_->size()), data.start_);
+}
+
+
+void StructuredMachineAssembler::IfBuilder::End() {
+ DCHECK(!IsDone());
+ AddCurrentToPending();
+ size_t current_pending = pending_exit_merges_.size();
+ // All unresolved branch edges are now set to pending.
+ for (IfClauses::iterator i = if_clauses_.begin(); i != if_clauses_.end();
+ ++i) {
+ IfClause* clause = *i;
+ DCHECK(clause->expression_states_.size() == 1);
+ PendingMergeStackRange data;
+ // Copy then environments.
+ data = clause->ComputeRelevantMerges(kCombineThen);
+ clause->CopyEnvironments(data, &pending_exit_merges_);
+ Environment* head = NULL;
+ // Will resolve the head node in the else merge.
+ if (data.size_ > 0 && clause->then_environment_ == NULL &&
+ clause->else_environment_ == NULL) {
+ head = pending_exit_merges_.back();
+ pending_exit_merges_.pop_back();
+ }
+ // Copy else environments.
+ data = clause->ComputeRelevantMerges(kCombineElse);
+ clause->CopyEnvironments(data, &pending_exit_merges_);
+ if (head != NULL) {
+ // Must have data to merge, or else head will never get a branch.
+ DCHECK(data.size_ != 0);
+ pending_exit_merges_.push_back(head);
+ }
+ }
+ smasm_->Merge(&pending_exit_merges_,
+ if_clauses_[0]->initial_environment_size_);
+ // Anything initially pending jumps into the new environment.
+ for (size_t i = 0; i < current_pending; ++i) {
+ smasm_->AddGoto(pending_exit_merges_[i], smasm_->current_environment_);
+ }
+ // Resolve all branches.
+ for (IfClauses::iterator i = if_clauses_.begin(); i != if_clauses_.end();
+ ++i) {
+ IfClause* clause = *i;
+ // Must finalize all environments, so ensure they are set correctly.
+ Environment* then_environment = clause->then_environment_;
+ if (then_environment == NULL) {
+ then_environment = smasm_->current_environment_;
+ }
+ Environment* else_environment = clause->else_environment_;
+ PendingMergeStackRange data;
+ // Finalize then environments.
+ data = clause->ComputeRelevantMerges(kCombineThen);
+ clause->FinalizeBranches(smasm_, data, kCombineThen, then_environment,
+ else_environment);
+ // Finalize else environments.
+ // Now set the else environment so head is finalized for edge case above.
+ if (else_environment == NULL) {
+ else_environment = smasm_->current_environment_;
+ }
+ data = clause->ComputeRelevantMerges(kCombineElse);
+ clause->FinalizeBranches(smasm_, data, kCombineElse, then_environment,
+ else_environment);
+ }
+ // Future accesses to this builder should crash immediately.
+ pending_exit_merges_.clear();
+ if_clauses_.clear();
+ DCHECK(IsDone());
+}
+
+
+StructuredMachineAssembler::LoopBuilder::LoopBuilder(
+ StructuredMachineAssembler* smasm)
+ : smasm_(smasm),
+ header_environment_(NULL),
+ pending_header_merges_(EnvironmentVector::allocator_type(smasm_->zone())),
+ pending_exit_merges_(EnvironmentVector::allocator_type(smasm_->zone())) {
+ DCHECK(smasm_->current_environment_ != NULL);
+ // Create header environment.
+ header_environment_ = smasm_->CopyForLoopHeader(smasm_->current_environment_);
+ smasm_->AddGoto(smasm_->current_environment_, header_environment_);
+ // Create body environment. 
+ Environment* body = smasm_->Copy(header_environment_); + smasm_->AddGoto(header_environment_, body); + smasm_->current_environment_ = body; + DCHECK(!IsDone()); +} + + +void StructuredMachineAssembler::LoopBuilder::Continue() { + DCHECK(!IsDone()); + pending_header_merges_.push_back(smasm_->current_environment_); + smasm_->CopyCurrentAsDead(); +} + + +void StructuredMachineAssembler::LoopBuilder::Break() { + DCHECK(!IsDone()); + pending_exit_merges_.push_back(smasm_->current_environment_); + smasm_->CopyCurrentAsDead(); +} + + +void StructuredMachineAssembler::LoopBuilder::End() { + DCHECK(!IsDone()); + if (smasm_->current_environment_ != NULL) { + Continue(); + } + // Do loop header merges. + smasm_->MergeBackEdgesToLoopHeader(header_environment_, + &pending_header_merges_); + int initial_size = static_cast<int>(header_environment_->variables_.size()); + // Do loop exit merges, truncating loop variables away. + smasm_->Merge(&pending_exit_merges_, initial_size); + for (EnvironmentVector::iterator i = pending_exit_merges_.begin(); + i != pending_exit_merges_.end(); ++i) { + smasm_->AddGoto(*i, smasm_->current_environment_); + } + pending_header_merges_.clear(); + pending_exit_merges_.clear(); + header_environment_ = NULL; + DCHECK(IsDone()); +} + +} // namespace compiler +} // namespace internal +} // namespace v8 diff --git a/deps/v8/src/compiler/structured-machine-assembler.h b/deps/v8/src/compiler/structured-machine-assembler.h new file mode 100644 index 00000000000..a6cb8ca88b1 --- /dev/null +++ b/deps/v8/src/compiler/structured-machine-assembler.h @@ -0,0 +1,311 @@ +// Copyright 2014 the V8 project authors. All rights reserved. +// Use of this source code is governed by a BSD-style license that can be +// found in the LICENSE file. + +#ifndef V8_COMPILER_STRUCTURED_MACHINE_ASSEMBLER_H_ +#define V8_COMPILER_STRUCTURED_MACHINE_ASSEMBLER_H_ + +#include "src/v8.h" + +#include "src/compiler/common-operator.h" +#include "src/compiler/graph-builder.h" +#include "src/compiler/machine-node-factory.h" +#include "src/compiler/machine-operator.h" +#include "src/compiler/node.h" +#include "src/compiler/operator.h" + + +namespace v8 { +namespace internal { +namespace compiler { + +class BasicBlock; +class Schedule; +class StructuredMachineAssembler; + + +class Variable : public ZoneObject { + public: + Node* Get() const; + void Set(Node* value) const; + + private: + Variable(StructuredMachineAssembler* smasm, int offset) + : smasm_(smasm), offset_(offset) {} + + friend class StructuredMachineAssembler; + friend class StructuredMachineAssemblerFriend; + StructuredMachineAssembler* const smasm_; + const int offset_; +}; + + +class StructuredMachineAssembler + : public GraphBuilder, + public MachineNodeFactory<StructuredMachineAssembler> { + public: + class Environment : public ZoneObject { + public: + Environment(Zone* zone, BasicBlock* block, bool is_dead_); + + private: + BasicBlock* block_; + NodeVector variables_; + bool is_dead_; + friend class StructuredMachineAssembler; + DISALLOW_COPY_AND_ASSIGN(Environment); + }; + + class IfBuilder; + friend class IfBuilder; + class LoopBuilder; + friend class LoopBuilder; + + StructuredMachineAssembler( + Graph* graph, MachineCallDescriptorBuilder* call_descriptor_builder, + MachineType word = MachineOperatorBuilder::pointer_rep()); + virtual ~StructuredMachineAssembler() {} + + Isolate* isolate() const { return zone()->isolate(); } + Zone* zone() const { return graph()->zone(); } + MachineOperatorBuilder* machine() { return &machine_; } + 
CommonOperatorBuilder* common() { return &common_; }
+ CallDescriptor* call_descriptor() const {
+ return call_descriptor_builder_->BuildCallDescriptor(zone());
+ }
+ int parameter_count() const {
+ return call_descriptor_builder_->parameter_count();
+ }
+ const MachineType* parameter_types() const {
+ return call_descriptor_builder_->parameter_types();
+ }
+
+ // Parameters.
+ Node* Parameter(int index);
+ // Variables.
+ Variable NewVariable(Node* initial_value);
+ // Control flow.
+ void Return(Node* value);
+
+ // MachineAssembler is invalid after export.
+ Schedule* Export();
+
+ protected:
+ virtual Node* MakeNode(Operator* op, int input_count, Node** inputs);
+
+ Schedule* schedule() {
+ DCHECK(ScheduleValid());
+ return schedule_;
+ }
+
+ private:
+ bool ScheduleValid() { return schedule_ != NULL; }
+
+ typedef std::vector<Environment*, zone_allocator<Environment*> >
+ EnvironmentVector;
+
+ NodeVector* CurrentVars() { return &current_environment_->variables_; }
+ Node*& VariableAt(Environment* environment, int offset);
+ Node* GetVariable(int offset);
+ void SetVariable(int offset, Node* value);
+
+ void AddBranch(Environment* environment, Node* condition,
+ Environment* true_val, Environment* false_val);
+ void AddGoto(Environment* from, Environment* to);
+ BasicBlock* TrampolineFor(BasicBlock* block);
+
+ void CopyCurrentAsDead();
+ Environment* Copy(Environment* environment) {
+ return Copy(environment, static_cast<int>(environment->variables_.size()));
+ }
+ Environment* Copy(Environment* environment, int truncate_at);
+ void Merge(EnvironmentVector* environments, int truncate_at);
+ Environment* CopyForLoopHeader(Environment* environment);
+ void MergeBackEdgesToLoopHeader(Environment* header,
+ EnvironmentVector* environments);
+
+ typedef std::vector<MachineType, zone_allocator<MachineType> >
+ RepresentationVector;
+
+ Schedule* schedule_;
+ MachineOperatorBuilder machine_;
+ CommonOperatorBuilder common_;
+ MachineCallDescriptorBuilder* call_descriptor_builder_;
+ Node** parameters_;
+ Environment* current_environment_;
+ int number_of_variables_;
+
+ friend class Variable;
+ // For testing only.
+ friend class StructuredMachineAssemblerFriend;
+ DISALLOW_COPY_AND_ASSIGN(StructuredMachineAssembler);
+};
+
+// IfBuilder allows the construction of nested if-else expressions which more
+// or less follow C semantics. For example:
+//
+// if (x) {do_x} else if (y) {do_y} else {do_z}
+//
+// would look like this:
+//
+// IfBuilder b;
+// b.If(x).Then();
+// do_x
+// b.Else();
+// b.If().Then();
+// do_y
+// b.Else();
+// do_z
+// b.End();
+//
+// Then() and Else() can be skipped, representing an empty block in C.
+// Combinations like If(x).Then().If(x).Then() are legitimate, but
+// Else().Else() is not. That is, once you've nested an If(), you can't get to a
+// higher level If() branch.
+// TODO(dcarney): describe expressions once the api is finalized.
+class StructuredMachineAssembler::IfBuilder {
+ public:
+ explicit IfBuilder(StructuredMachineAssembler* smasm);
+ ~IfBuilder() {
+ if (!IsDone()) End();
+ }
+
+ IfBuilder& If(); // TODO(dcarney): this should take an expression.
+ IfBuilder& If(Node* condition);
+ void Then();
+ void Else();
+ void End();
+
+ // The next 4 functions are exposed for expression support.
+ // They will be private once I have a nice expression api. 
+ void And();
+ void Or();
+ IfBuilder& OpenParen() {
+ DCHECK(smasm_->current_environment_ != NULL);
+ CurrentClause()->PushNewExpressionState();
+ return *this;
+ }
+ IfBuilder& CloseParen() {
+ DCHECK(smasm_->current_environment_ == NULL);
+ CurrentClause()->PopExpressionState();
+ return *this;
+ }
+
+ private:
+ // UnresolvedBranch represents the chain of environments created while
+ // generating an expression. At this point, a branch Node
+ // cannot be created, as the target environments of the branch are not yet
+ // available, so everything required to create the branch Node is
+ // stored in this structure until the target environments are resolved.
+ struct UnresolvedBranch : public ZoneObject {
+ UnresolvedBranch(Environment* environment, Node* condition,
+ UnresolvedBranch* next)
+ : environment_(environment), condition_(condition), next_(next) {}
+ // environment_ will eventually be terminated by a branch on condition_.
+ Environment* environment_;
+ Node* condition_;
+ // next_ is the next link in the UnresolvedBranch chain, and will be
+ // either the true or false branch jumped to from environment_.
+ UnresolvedBranch* next_;
+ };
+
+ struct ExpressionState {
+ int pending_then_size_;
+ int pending_else_size_;
+ };
+
+ typedef std::vector<ExpressionState, zone_allocator<ExpressionState> >
+ ExpressionStates;
+ typedef std::vector<UnresolvedBranch*, zone_allocator<UnresolvedBranch*> >
+ PendingMergeStack;
+ struct IfClause;
+ typedef std::vector<IfClause*, zone_allocator<IfClause*> > IfClauses;
+
+ struct PendingMergeStackRange {
+ PendingMergeStack* merge_stack_;
+ int start_;
+ int size_;
+ };
+
+ enum CombineType { kCombineThen, kCombineElse };
+ enum ResolutionType { kExpressionTerm, kExpressionDone };
+
+ // IfClause represents one level of if-then-else nesting plus the associated
+ // expression.
+ // A call to If() triggers creation of a new nesting level after expression
+ // creation is complete, i.e. Then() or Else() has been called.
+ struct IfClause : public ZoneObject {
+ IfClause(Zone* zone, int initial_environment_size);
+ void CopyEnvironments(const PendingMergeStackRange& data,
+ EnvironmentVector* environments);
+ void ResolvePendingMerges(StructuredMachineAssembler* smasm,
+ CombineType combine_type,
+ ResolutionType resolution_type);
+ PendingMergeStackRange ComputeRelevantMerges(CombineType combine_type);
+ void FinalizeBranches(StructuredMachineAssembler* smasm,
+ const PendingMergeStackRange& offset_data,
+ CombineType combine_type,
+ Environment* then_environment,
+ Environment* else_environment);
+ void PushNewExpressionState();
+ void PopExpressionState();
+
+ // Each invocation of And or Or creates a new UnresolvedBranch.
+ // These form a singly-linked list, of which we only need to keep track of
+ // the tail. On creation of an UnresolvedBranch, pending_then_merges_ and
+ // pending_else_merges_ each push a copy, which are removed on merges to the
+ // respective environment.
+ UnresolvedBranch* unresolved_list_tail_;
+ int initial_environment_size_;
+ // expression_states_ keeps track of the state of pending_*_merges_,
+ // pushing and popping the lengths of these on
+ // OpenParen() and CloseParen() respectively.
+ ExpressionStates expression_states_;
+ PendingMergeStack pending_then_merges_;
+ PendingMergeStack pending_else_merges_;
+ // then_environment_ is created iff there is a call to Then(), otherwise
+ // branches which would merge to it merge to the exit environment instead.
+ // Likewise for else_environment_. 
+ Environment* then_environment_; + Environment* else_environment_; + }; + + IfClause* CurrentClause() { return if_clauses_.back(); } + void AddCurrentToPending(); + void PushNewIfClause(); + bool IsDone() { return if_clauses_.empty(); } + + StructuredMachineAssembler* smasm_; + IfClauses if_clauses_; + EnvironmentVector pending_exit_merges_; + DISALLOW_COPY_AND_ASSIGN(IfBuilder); +}; + + +class StructuredMachineAssembler::LoopBuilder { + public: + explicit LoopBuilder(StructuredMachineAssembler* smasm); + ~LoopBuilder() { + if (!IsDone()) End(); + } + + void Break(); + void Continue(); + void End(); + + private: + friend class StructuredMachineAssembler; + bool IsDone() { return header_environment_ == NULL; } + + StructuredMachineAssembler* smasm_; + Environment* header_environment_; + EnvironmentVector pending_header_merges_; + EnvironmentVector pending_exit_merges_; + DISALLOW_COPY_AND_ASSIGN(LoopBuilder); +}; + +} // namespace compiler +} // namespace internal +} // namespace v8 + +#endif // V8_COMPILER_STRUCTURED_MACHINE_ASSEMBLER_H_ diff --git a/deps/v8/src/compiler/typer.cc b/deps/v8/src/compiler/typer.cc new file mode 100644 index 00000000000..2aa18699dda --- /dev/null +++ b/deps/v8/src/compiler/typer.cc @@ -0,0 +1,842 @@ +// Copyright 2014 the V8 project authors. All rights reserved. +// Use of this source code is governed by a BSD-style license that can be +// found in the LICENSE file. + +#include "src/compiler/graph-inl.h" +#include "src/compiler/js-operator.h" +#include "src/compiler/node.h" +#include "src/compiler/node-properties-inl.h" +#include "src/compiler/node-properties.h" +#include "src/compiler/simplified-operator.h" +#include "src/compiler/typer.h" + +namespace v8 { +namespace internal { +namespace compiler { + +Typer::Typer(Zone* zone) : zone_(zone) { + Type* number = Type::Number(zone); + Type* signed32 = Type::Signed32(zone); + Type* unsigned32 = Type::Unsigned32(zone); + Type* integral32 = Type::Integral32(zone); + Type* object = Type::Object(zone); + Type* undefined = Type::Undefined(zone); + number_fun0_ = Type::Function(number, zone); + number_fun1_ = Type::Function(number, number, zone); + number_fun2_ = Type::Function(number, number, number, zone); + imul_fun_ = Type::Function(signed32, integral32, integral32, zone); + +#define NATIVE_TYPE(sem, rep) \ + Type::Intersect(Type::sem(zone), Type::rep(zone), zone) + // TODO(rossberg): Use range types for more precision, once we have them. 
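+  // E.g. the first line below expands to
+  //   Type::Intersect(Type::SignedSmall(zone), Type::UntaggedInt8(zone), zone)
+  // pairing a semantic class with a machine representation.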
+ Type* int8 = NATIVE_TYPE(SignedSmall, UntaggedInt8); + Type* int16 = NATIVE_TYPE(SignedSmall, UntaggedInt16); + Type* int32 = NATIVE_TYPE(Signed32, UntaggedInt32); + Type* uint8 = NATIVE_TYPE(UnsignedSmall, UntaggedInt8); + Type* uint16 = NATIVE_TYPE(UnsignedSmall, UntaggedInt16); + Type* uint32 = NATIVE_TYPE(Unsigned32, UntaggedInt32); + Type* float32 = NATIVE_TYPE(Number, UntaggedFloat32); + Type* float64 = NATIVE_TYPE(Number, UntaggedFloat64); +#undef NATIVE_TYPE + Type* buffer = Type::Buffer(zone); + Type* int8_array = Type::Array(int8, zone); + Type* int16_array = Type::Array(int16, zone); + Type* int32_array = Type::Array(int32, zone); + Type* uint8_array = Type::Array(uint8, zone); + Type* uint16_array = Type::Array(uint16, zone); + Type* uint32_array = Type::Array(uint32, zone); + Type* float32_array = Type::Array(float32, zone); + Type* float64_array = Type::Array(float64, zone); + Type* arg1 = Type::Union(unsigned32, object, zone); + Type* arg2 = Type::Union(unsigned32, undefined, zone); + Type* arg3 = arg2; + array_buffer_fun_ = Type::Function(buffer, unsigned32, zone); + int8_array_fun_ = Type::Function(int8_array, arg1, arg2, arg3, zone); + int16_array_fun_ = Type::Function(int16_array, arg1, arg2, arg3, zone); + int32_array_fun_ = Type::Function(int32_array, arg1, arg2, arg3, zone); + uint8_array_fun_ = Type::Function(uint8_array, arg1, arg2, arg3, zone); + uint16_array_fun_ = Type::Function(uint16_array, arg1, arg2, arg3, zone); + uint32_array_fun_ = Type::Function(uint32_array, arg1, arg2, arg3, zone); + float32_array_fun_ = Type::Function(float32_array, arg1, arg2, arg3, zone); + float64_array_fun_ = Type::Function(float64_array, arg1, arg2, arg3, zone); +} + + +class Typer::Visitor : public NullNodeVisitor { + public: + Visitor(Typer* typer, MaybeHandle<Context> context) + : typer_(typer), context_(context) {} + + Bounds TypeNode(Node* node) { + switch (node->opcode()) { +#define DECLARE_CASE(x) case IrOpcode::k##x: return Type##x(node); + VALUE_OP_LIST(DECLARE_CASE) +#undef DECLARE_CASE + +#define DECLARE_CASE(x) case IrOpcode::k##x: + CONTROL_OP_LIST(DECLARE_CASE) +#undef DECLARE_CASE + break; + } + return Bounds(Type::None(zone())); + } + + Type* TypeConstant(Handle<Object> value); + + protected: +#define DECLARE_METHOD(x) inline Bounds Type##x(Node* node); + VALUE_OP_LIST(DECLARE_METHOD) +#undef DECLARE_METHOD + + Bounds OperandType(Node* node, int i) { + return NodeProperties::GetBounds(NodeProperties::GetValueInput(node, i)); + } + + Type* ContextType(Node* node) { + Bounds result = + NodeProperties::GetBounds(NodeProperties::GetContextInput(node)); + DCHECK(result.upper->Is(Type::Internal())); + DCHECK(result.lower->Equals(result.upper)); + return result.upper; + } + + Zone* zone() { return typer_->zone(); } + Isolate* isolate() { return typer_->isolate(); } + MaybeHandle<Context> context() { return context_; } + + private: + Typer* typer_; + MaybeHandle<Context> context_; +}; + + +class Typer::RunVisitor : public Typer::Visitor { + public: + RunVisitor(Typer* typer, MaybeHandle<Context> context) + : Visitor(typer, context), + phis(NodeSet::key_compare(), NodeSet::allocator_type(typer->zone())) {} + + GenericGraphVisit::Control Pre(Node* node) { + return NodeProperties::IsControl(node) + && node->opcode() != IrOpcode::kEnd + && node->opcode() != IrOpcode::kMerge + && node->opcode() != IrOpcode::kReturn + ? 
GenericGraphVisit::SKIP : GenericGraphVisit::CONTINUE; + } + + GenericGraphVisit::Control Post(Node* node) { + Bounds bounds = TypeNode(node); + if (node->opcode() == IrOpcode::kPhi) { + // Remember phis for least fixpoint iteration. + phis.insert(node); + } else { + NodeProperties::SetBounds(node, bounds); + } + return GenericGraphVisit::CONTINUE; + } + + NodeSet phis; +}; + + +class Typer::NarrowVisitor : public Typer::Visitor { + public: + NarrowVisitor(Typer* typer, MaybeHandle<Context> context) + : Visitor(typer, context) {} + + GenericGraphVisit::Control Pre(Node* node) { + Bounds previous = NodeProperties::GetBounds(node); + Bounds bounds = TypeNode(node); + NodeProperties::SetBounds(node, Bounds::Both(bounds, previous, zone())); + DCHECK(bounds.Narrows(previous)); + // Stop when nothing changed (but allow reentry in case it does later). + return previous.Narrows(bounds) + ? GenericGraphVisit::DEFER : GenericGraphVisit::REENTER; + } + + GenericGraphVisit::Control Post(Node* node) { + return GenericGraphVisit::REENTER; + } +}; + + +class Typer::WidenVisitor : public Typer::Visitor { + public: + WidenVisitor(Typer* typer, MaybeHandle<Context> context) + : Visitor(typer, context) {} + + GenericGraphVisit::Control Pre(Node* node) { + Bounds previous = NodeProperties::GetBounds(node); + Bounds bounds = TypeNode(node); + DCHECK(previous.lower->Is(bounds.lower)); + DCHECK(previous.upper->Is(bounds.upper)); + NodeProperties::SetBounds(node, bounds); // TODO(rossberg): Either? + // Stop when nothing changed (but allow reentry in case it does later). + return bounds.Narrows(previous) + ? GenericGraphVisit::DEFER : GenericGraphVisit::REENTER; + } + + GenericGraphVisit::Control Post(Node* node) { + return GenericGraphVisit::REENTER; + } +}; + + +void Typer::Run(Graph* graph, MaybeHandle<Context> context) { + RunVisitor typing(this, context); + graph->VisitNodeInputsFromEnd(&typing); + // Find least fixpoint. + for (NodeSetIter i = typing.phis.begin(); i != typing.phis.end(); ++i) { + Widen(graph, *i, context); + } +} + + +void Typer::Narrow(Graph* graph, Node* start, MaybeHandle<Context> context) { + NarrowVisitor typing(this, context); + graph->VisitNodeUsesFrom(start, &typing); +} + + +void Typer::Widen(Graph* graph, Node* start, MaybeHandle<Context> context) { + WidenVisitor typing(this, context); + graph->VisitNodeUsesFrom(start, &typing); +} + + +void Typer::Init(Node* node) { + Visitor typing(this, MaybeHandle<Context>()); + Bounds bounds = typing.TypeNode(node); + NodeProperties::SetBounds(node, bounds); +} + + +// Common operators. +Bounds Typer::Visitor::TypeParameter(Node* node) { + return Bounds::Unbounded(zone()); +} + + +Bounds Typer::Visitor::TypeInt32Constant(Node* node) { + // TODO(titzer): only call Type::Of() if the type is not already known. + return Bounds(Type::Of(ValueOf<int32_t>(node->op()), zone())); +} + + +Bounds Typer::Visitor::TypeInt64Constant(Node* node) { + // TODO(titzer): only call Type::Of() if the type is not already known. + return Bounds( + Type::Of(static_cast<double>(ValueOf<int64_t>(node->op())), zone())); +} + + +Bounds Typer::Visitor::TypeFloat64Constant(Node* node) { + // TODO(titzer): only call Type::Of() if the type is not already known. + return Bounds(Type::Of(ValueOf<double>(node->op()), zone())); +} + + +Bounds Typer::Visitor::TypeNumberConstant(Node* node) { + // TODO(titzer): only call Type::Of() if the type is not already known. 
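+  // (Assumed behaviour of the types API: Type::Of() returns the smallest
+  // predefined class containing the value - e.g. SignedSmall for a small
+  // integer constant - rather than plain Number.)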
+ return Bounds(Type::Of(ValueOf<double>(node->op()), zone())); +} + + +Bounds Typer::Visitor::TypeHeapConstant(Node* node) { + return Bounds(TypeConstant(ValueOf<Handle<Object> >(node->op()))); +} + + +Bounds Typer::Visitor::TypeExternalConstant(Node* node) { + return Bounds(Type::Internal(zone())); +} + + +Bounds Typer::Visitor::TypePhi(Node* node) { + int arity = OperatorProperties::GetValueInputCount(node->op()); + Bounds bounds = OperandType(node, 0); + for (int i = 1; i < arity; ++i) { + bounds = Bounds::Either(bounds, OperandType(node, i), zone()); + } + return bounds; +} + + +Bounds Typer::Visitor::TypeEffectPhi(Node* node) { + return Bounds(Type::None(zone())); +} + + +Bounds Typer::Visitor::TypeFrameState(Node* node) { + return Bounds(Type::None(zone())); +} + + +Bounds Typer::Visitor::TypeStateValues(Node* node) { + return Bounds(Type::None(zone())); +} + + +Bounds Typer::Visitor::TypeCall(Node* node) { + return Bounds::Unbounded(zone()); +} + + +Bounds Typer::Visitor::TypeProjection(Node* node) { + // TODO(titzer): use the output type of the input to determine the bounds. + return Bounds::Unbounded(zone()); +} + + +// JS comparison operators. + +#define DEFINE_METHOD(x) \ + Bounds Typer::Visitor::Type##x(Node* node) { \ + return Bounds(Type::Boolean(zone())); \ + } +JS_COMPARE_BINOP_LIST(DEFINE_METHOD) +#undef DEFINE_METHOD + + +// JS bitwise operators. + +Bounds Typer::Visitor::TypeJSBitwiseOr(Node* node) { + Bounds left = OperandType(node, 0); + Bounds right = OperandType(node, 1); + Type* upper = Type::Union(left.upper, right.upper, zone()); + if (!upper->Is(Type::Signed32())) upper = Type::Signed32(zone()); + Type* lower = Type::Intersect(Type::SignedSmall(zone()), upper, zone()); + return Bounds(lower, upper); +} + + +Bounds Typer::Visitor::TypeJSBitwiseAnd(Node* node) { + Bounds left = OperandType(node, 0); + Bounds right = OperandType(node, 1); + Type* upper = Type::Union(left.upper, right.upper, zone()); + if (!upper->Is(Type::Signed32())) upper = Type::Signed32(zone()); + Type* lower = Type::Intersect(Type::SignedSmall(zone()), upper, zone()); + return Bounds(lower, upper); +} + + +Bounds Typer::Visitor::TypeJSBitwiseXor(Node* node) { + return Bounds(Type::SignedSmall(zone()), Type::Signed32(zone())); +} + + +Bounds Typer::Visitor::TypeJSShiftLeft(Node* node) { + return Bounds(Type::SignedSmall(zone()), Type::Signed32(zone())); +} + + +Bounds Typer::Visitor::TypeJSShiftRight(Node* node) { + return Bounds(Type::SignedSmall(zone()), Type::Signed32(zone())); +} + + +Bounds Typer::Visitor::TypeJSShiftRightLogical(Node* node) { + return Bounds(Type::UnsignedSmall(zone()), Type::Unsigned32(zone())); +} + + +// JS arithmetic operators. + +Bounds Typer::Visitor::TypeJSAdd(Node* node) { + Bounds left = OperandType(node, 0); + Bounds right = OperandType(node, 1); + Type* lower = + left.lower->Is(Type::None()) || right.lower->Is(Type::None()) ? + Type::None(zone()) : + left.lower->Is(Type::Number()) && right.lower->Is(Type::Number()) ? + Type::SignedSmall(zone()) : + left.lower->Is(Type::String()) || right.lower->Is(Type::String()) ? + Type::String(zone()) : Type::None(zone()); + Type* upper = + left.upper->Is(Type::None()) && right.upper->Is(Type::None()) ? + Type::None(zone()) : + left.upper->Is(Type::Number()) && right.upper->Is(Type::Number()) ? + Type::Number(zone()) : + left.upper->Is(Type::String()) || right.upper->Is(Type::String()) ? 
+ Type::String(zone()) : Type::NumberOrString(zone()); + return Bounds(lower, upper); +} + + +Bounds Typer::Visitor::TypeJSSubtract(Node* node) { + return Bounds(Type::SignedSmall(zone()), Type::Number(zone())); +} + + +Bounds Typer::Visitor::TypeJSMultiply(Node* node) { + return Bounds(Type::SignedSmall(zone()), Type::Number(zone())); +} + + +Bounds Typer::Visitor::TypeJSDivide(Node* node) { + return Bounds(Type::SignedSmall(zone()), Type::Number(zone())); +} + + +Bounds Typer::Visitor::TypeJSModulus(Node* node) { + return Bounds(Type::SignedSmall(zone()), Type::Number(zone())); +} + + +// JS unary operators. + +Bounds Typer::Visitor::TypeJSUnaryNot(Node* node) { + return Bounds(Type::Boolean(zone())); +} + + +Bounds Typer::Visitor::TypeJSTypeOf(Node* node) { + return Bounds(Type::InternalizedString(zone())); +} + + +// JS conversion operators. + +Bounds Typer::Visitor::TypeJSToBoolean(Node* node) { + return Bounds(Type::Boolean(zone())); +} + + +Bounds Typer::Visitor::TypeJSToNumber(Node* node) { + return Bounds(Type::SignedSmall(zone()), Type::Number(zone())); +} + + +Bounds Typer::Visitor::TypeJSToString(Node* node) { + return Bounds(Type::None(zone()), Type::String(zone())); +} + + +Bounds Typer::Visitor::TypeJSToName(Node* node) { + return Bounds(Type::None(zone()), Type::Name(zone())); +} + + +Bounds Typer::Visitor::TypeJSToObject(Node* node) { + return Bounds(Type::None(zone()), Type::Object(zone())); +} + + +// JS object operators. + +Bounds Typer::Visitor::TypeJSCreate(Node* node) { + return Bounds(Type::None(zone()), Type::Object(zone())); +} + + +Bounds Typer::Visitor::TypeJSLoadProperty(Node* node) { + Bounds object = OperandType(node, 0); + Bounds name = OperandType(node, 1); + Bounds result = Bounds::Unbounded(zone()); + // TODO(rossberg): Use range types and sized array types to filter undefined. + if (object.lower->IsArray() && name.lower->Is(Type::Integral32())) { + result.lower = Type::Union( + object.lower->AsArray()->Element(), Type::Undefined(zone()), zone()); + } + if (object.upper->IsArray() && name.upper->Is(Type::Integral32())) { + result.upper = Type::Union( + object.upper->AsArray()->Element(), Type::Undefined(zone()), zone()); + } + return result; +} + + +Bounds Typer::Visitor::TypeJSLoadNamed(Node* node) { + return Bounds::Unbounded(zone()); +} + + +Bounds Typer::Visitor::TypeJSStoreProperty(Node* node) { + return Bounds(Type::None(zone())); +} + + +Bounds Typer::Visitor::TypeJSStoreNamed(Node* node) { + return Bounds(Type::None(zone())); +} + + +Bounds Typer::Visitor::TypeJSDeleteProperty(Node* node) { + return Bounds(Type::Boolean(zone())); +} + + +Bounds Typer::Visitor::TypeJSHasProperty(Node* node) { + return Bounds(Type::Boolean(zone())); +} + + +Bounds Typer::Visitor::TypeJSInstanceOf(Node* node) { + return Bounds(Type::Boolean(zone())); +} + + +// JS context operators. + +Bounds Typer::Visitor::TypeJSLoadContext(Node* node) { + Bounds outer = OperandType(node, 0); + DCHECK(outer.upper->Is(Type::Internal())); + DCHECK(outer.lower->Equals(outer.upper)); + ContextAccess access = OpParameter<ContextAccess>(node); + Type* context_type = outer.upper; + MaybeHandle<Context> context; + if (context_type->IsConstant()) { + context = Handle<Context>::cast(context_type->AsConstant()->Value()); + } + // Walk context chain (as far as known), mirroring dynamic lookup. + // Since contexts are mutable, the information is only useful as a lower + // bound. 
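+  // Worked example: with access.depth() == 2, the loop below hops two
+  // contexts outward - through constant context types while the chain is
+  // statically known, otherwise through concrete previous() pointers - and
+  // then reads slot access.index() to obtain a lower-bound constant type.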
+ // TODO(rossberg): Could use scope info to fix upper bounds for constant + // bindings if we know that this code is never shared. + for (int i = access.depth(); i > 0; --i) { + if (context_type->IsContext()) { + context_type = context_type->AsContext()->Outer(); + if (context_type->IsConstant()) { + context = Handle<Context>::cast(context_type->AsConstant()->Value()); + } + } else { + context = handle(context.ToHandleChecked()->previous(), isolate()); + } + } + if (context.is_null()) { + return Bounds::Unbounded(zone()); + } else { + Handle<Object> value = + handle(context.ToHandleChecked()->get(access.index()), isolate()); + Type* lower = TypeConstant(value); + return Bounds(lower, Type::Any(zone())); + } +} + + +Bounds Typer::Visitor::TypeJSStoreContext(Node* node) { + return Bounds(Type::None(zone())); +} + + +Bounds Typer::Visitor::TypeJSCreateFunctionContext(Node* node) { + Type* outer = ContextType(node); + return Bounds(Type::Context(outer, zone())); +} + + +Bounds Typer::Visitor::TypeJSCreateCatchContext(Node* node) { + Type* outer = ContextType(node); + return Bounds(Type::Context(outer, zone())); +} + + +Bounds Typer::Visitor::TypeJSCreateWithContext(Node* node) { + Type* outer = ContextType(node); + return Bounds(Type::Context(outer, zone())); +} + + +Bounds Typer::Visitor::TypeJSCreateBlockContext(Node* node) { + Type* outer = ContextType(node); + return Bounds(Type::Context(outer, zone())); +} + + +Bounds Typer::Visitor::TypeJSCreateModuleContext(Node* node) { + // TODO(rossberg): this is probably incorrect + Type* outer = ContextType(node); + return Bounds(Type::Context(outer, zone())); +} + + +Bounds Typer::Visitor::TypeJSCreateGlobalContext(Node* node) { + Type* outer = ContextType(node); + return Bounds(Type::Context(outer, zone())); +} + + +// JS other operators. + +Bounds Typer::Visitor::TypeJSYield(Node* node) { + return Bounds::Unbounded(zone()); +} + + +Bounds Typer::Visitor::TypeJSCallConstruct(Node* node) { + return Bounds(Type::None(zone()), Type::Receiver(zone())); +} + + +Bounds Typer::Visitor::TypeJSCallFunction(Node* node) { + Bounds fun = OperandType(node, 0); + Type* lower = fun.lower->IsFunction() + ? fun.lower->AsFunction()->Result() : Type::None(zone()); + Type* upper = fun.upper->IsFunction() + ? fun.upper->AsFunction()->Result() : Type::Any(zone()); + return Bounds(lower, upper); +} + + +Bounds Typer::Visitor::TypeJSCallRuntime(Node* node) { + return Bounds::Unbounded(zone()); +} + + +Bounds Typer::Visitor::TypeJSDebugger(Node* node) { + return Bounds::Unbounded(zone()); +} + + +// Simplified operators. 
+ +Bounds Typer::Visitor::TypeBooleanNot(Node* node) { + return Bounds(Type::Boolean(zone())); +} + + +Bounds Typer::Visitor::TypeNumberEqual(Node* node) { + return Bounds(Type::Boolean(zone())); +} + + +Bounds Typer::Visitor::TypeNumberLessThan(Node* node) { + return Bounds(Type::Boolean(zone())); +} + + +Bounds Typer::Visitor::TypeNumberLessThanOrEqual(Node* node) { + return Bounds(Type::Boolean(zone())); +} + + +Bounds Typer::Visitor::TypeNumberAdd(Node* node) { + return Bounds(Type::Number(zone())); +} + + +Bounds Typer::Visitor::TypeNumberSubtract(Node* node) { + return Bounds(Type::Number(zone())); +} + + +Bounds Typer::Visitor::TypeNumberMultiply(Node* node) { + return Bounds(Type::Number(zone())); +} + + +Bounds Typer::Visitor::TypeNumberDivide(Node* node) { + return Bounds(Type::Number(zone())); +} + + +Bounds Typer::Visitor::TypeNumberModulus(Node* node) { + return Bounds(Type::Number(zone())); +} + + +Bounds Typer::Visitor::TypeNumberToInt32(Node* node) { + Bounds arg = OperandType(node, 0); + Type* s32 = Type::Signed32(zone()); + Type* lower = arg.lower->Is(s32) ? arg.lower : s32; + Type* upper = arg.upper->Is(s32) ? arg.upper : s32; + return Bounds(lower, upper); +} + + +Bounds Typer::Visitor::TypeNumberToUint32(Node* node) { + Bounds arg = OperandType(node, 0); + Type* u32 = Type::Unsigned32(zone()); + Type* lower = arg.lower->Is(u32) ? arg.lower : u32; + Type* upper = arg.upper->Is(u32) ? arg.upper : u32; + return Bounds(lower, upper); +} + + +Bounds Typer::Visitor::TypeReferenceEqual(Node* node) { + return Bounds(Type::Boolean(zone())); +} + + +Bounds Typer::Visitor::TypeStringEqual(Node* node) { + return Bounds(Type::Boolean(zone())); +} + + +Bounds Typer::Visitor::TypeStringLessThan(Node* node) { + return Bounds(Type::Boolean(zone())); +} + + +Bounds Typer::Visitor::TypeStringLessThanOrEqual(Node* node) { + return Bounds(Type::Boolean(zone())); +} + + +Bounds Typer::Visitor::TypeStringAdd(Node* node) { + return Bounds(Type::String(zone())); +} + + +Bounds Typer::Visitor::TypeChangeTaggedToInt32(Node* node) { + // TODO(titzer): type is type of input, representation is Word32. + return Bounds(Type::Integral32()); +} + + +Bounds Typer::Visitor::TypeChangeTaggedToUint32(Node* node) { + return Bounds(Type::Integral32()); // TODO(titzer): add appropriate rep +} + + +Bounds Typer::Visitor::TypeChangeTaggedToFloat64(Node* node) { + // TODO(titzer): type is type of input, representation is Float64. + return Bounds(Type::Number()); +} + + +Bounds Typer::Visitor::TypeChangeInt32ToTagged(Node* node) { + // TODO(titzer): type is type of input, representation is Tagged. + return Bounds(Type::Integral32()); +} + + +Bounds Typer::Visitor::TypeChangeUint32ToTagged(Node* node) { + // TODO(titzer): type is type of input, representation is Tagged. + return Bounds(Type::Unsigned32()); +} + + +Bounds Typer::Visitor::TypeChangeFloat64ToTagged(Node* node) { + // TODO(titzer): type is type of input, representation is Tagged. + return Bounds(Type::Number()); +} + + +Bounds Typer::Visitor::TypeChangeBoolToBit(Node* node) { + // TODO(titzer): type is type of input, representation is Bit. + return Bounds(Type::Boolean()); +} + + +Bounds Typer::Visitor::TypeChangeBitToBool(Node* node) { + // TODO(titzer): type is type of input, representation is Tagged. 
+ return Bounds(Type::Boolean()); +} + + +Bounds Typer::Visitor::TypeLoadField(Node* node) { + return Bounds(FieldAccessOf(node->op()).type); +} + + +Bounds Typer::Visitor::TypeLoadElement(Node* node) { + return Bounds(ElementAccessOf(node->op()).type); +} + + +Bounds Typer::Visitor::TypeStoreField(Node* node) { + return Bounds(Type::None()); +} + + +Bounds Typer::Visitor::TypeStoreElement(Node* node) { + return Bounds(Type::None()); +} + + +// Machine operators. + +// TODO(rossberg): implement +#define DEFINE_METHOD(x) \ + Bounds Typer::Visitor::Type##x(Node* node) { return Bounds(Type::None()); } +MACHINE_OP_LIST(DEFINE_METHOD) +#undef DEFINE_METHOD + + +// Heap constants. + +Type* Typer::Visitor::TypeConstant(Handle<Object> value) { + if (value->IsJSFunction() && JSFunction::cast(*value)->IsBuiltin() && + !context().is_null()) { + Handle<Context> native = + handle(context().ToHandleChecked()->native_context(), isolate()); + if (*value == native->math_abs_fun()) { + return typer_->number_fun1_; // TODO(rossberg): can't express overloading + } else if (*value == native->math_acos_fun()) { + return typer_->number_fun1_; + } else if (*value == native->math_asin_fun()) { + return typer_->number_fun1_; + } else if (*value == native->math_atan_fun()) { + return typer_->number_fun1_; + } else if (*value == native->math_atan2_fun()) { + return typer_->number_fun2_; + } else if (*value == native->math_ceil_fun()) { + return typer_->number_fun1_; + } else if (*value == native->math_cos_fun()) { + return typer_->number_fun1_; + } else if (*value == native->math_exp_fun()) { + return typer_->number_fun1_; + } else if (*value == native->math_floor_fun()) { + return typer_->number_fun1_; + } else if (*value == native->math_imul_fun()) { + return typer_->imul_fun_; + } else if (*value == native->math_log_fun()) { + return typer_->number_fun1_; + } else if (*value == native->math_pow_fun()) { + return typer_->number_fun2_; + } else if (*value == native->math_random_fun()) { + return typer_->number_fun0_; + } else if (*value == native->math_round_fun()) { + return typer_->number_fun1_; + } else if (*value == native->math_sin_fun()) { + return typer_->number_fun1_; + } else if (*value == native->math_sqrt_fun()) { + return typer_->number_fun1_; + } else if (*value == native->math_tan_fun()) { + return typer_->number_fun1_; + } else if (*value == native->array_buffer_fun()) { + return typer_->array_buffer_fun_; + } else if (*value == native->int8_array_fun()) { + return typer_->int8_array_fun_; + } else if (*value == native->int16_array_fun()) { + return typer_->int16_array_fun_; + } else if (*value == native->int32_array_fun()) { + return typer_->int32_array_fun_; + } else if (*value == native->uint8_array_fun()) { + return typer_->uint8_array_fun_; + } else if (*value == native->uint16_array_fun()) { + return typer_->uint16_array_fun_; + } else if (*value == native->uint32_array_fun()) { + return typer_->uint32_array_fun_; + } else if (*value == native->float32_array_fun()) { + return typer_->float32_array_fun_; + } else if (*value == native->float64_array_fun()) { + return typer_->float64_array_fun_; + } + } + return Type::Constant(value, zone()); +} + + +namespace { + +class TyperDecorator : public GraphDecorator { + public: + explicit TyperDecorator(Typer* typer) : typer_(typer) {} + virtual void Decorate(Node* node) { typer_->Init(node); } + + private: + Typer* typer_; +}; + +} + + +void Typer::DecorateGraph(Graph* graph) { + graph->AddDecorator(new (zone()) TyperDecorator(this)); +} + +} +} +} // 
namespace v8::internal::compiler diff --git a/deps/v8/src/compiler/typer.h b/deps/v8/src/compiler/typer.h new file mode 100644 index 00000000000..2957e4b4a8e --- /dev/null +++ b/deps/v8/src/compiler/typer.h @@ -0,0 +1,57 @@ +// Copyright 2014 the V8 project authors. All rights reserved. +// Use of this source code is governed by a BSD-style license that can be +// found in the LICENSE file. + +#ifndef V8_COMPILER_TYPER_H_ +#define V8_COMPILER_TYPER_H_ + +#include "src/v8.h" + +#include "src/compiler/graph.h" +#include "src/compiler/opcodes.h" +#include "src/types.h" + +namespace v8 { +namespace internal { +namespace compiler { + +class Typer { + public: + explicit Typer(Zone* zone); + + void Init(Node* node); + void Run(Graph* graph, MaybeHandle<Context> context); + void Narrow(Graph* graph, Node* node, MaybeHandle<Context> context); + void Widen(Graph* graph, Node* node, MaybeHandle<Context> context); + + void DecorateGraph(Graph* graph); + + Zone* zone() { return zone_; } + Isolate* isolate() { return zone_->isolate(); } + + private: + class Visitor; + class RunVisitor; + class NarrowVisitor; + class WidenVisitor; + + Zone* zone_; + Type* number_fun0_; + Type* number_fun1_; + Type* number_fun2_; + Type* imul_fun_; + Type* array_buffer_fun_; + Type* int8_array_fun_; + Type* int16_array_fun_; + Type* int32_array_fun_; + Type* uint8_array_fun_; + Type* uint16_array_fun_; + Type* uint32_array_fun_; + Type* float32_array_fun_; + Type* float64_array_fun_; +}; +} +} +} // namespace v8::internal::compiler + +#endif // V8_COMPILER_TYPER_H_ diff --git a/deps/v8/src/compiler/verifier.cc b/deps/v8/src/compiler/verifier.cc new file mode 100644 index 00000000000..97bb762aff5 --- /dev/null +++ b/deps/v8/src/compiler/verifier.cc @@ -0,0 +1,245 @@ +// Copyright 2014 the V8 project authors. All rights reserved. +// Use of this source code is governed by a BSD-style license that can be +// found in the LICENSE file. + +#include "src/compiler/verifier.h" + +#include "src/compiler/generic-algorithm.h" +#include "src/compiler/generic-node-inl.h" +#include "src/compiler/generic-node.h" +#include "src/compiler/graph-inl.h" +#include "src/compiler/graph.h" +#include "src/compiler/node.h" +#include "src/compiler/node-properties-inl.h" +#include "src/compiler/node-properties.h" +#include "src/compiler/opcodes.h" +#include "src/compiler/operator.h" + +namespace v8 { +namespace internal { +namespace compiler { + + +static bool IsDefUseChainLinkPresent(Node* def, Node* use) { + Node::Uses uses = def->uses(); + for (Node::Uses::iterator it = uses.begin(); it != uses.end(); ++it) { + if (*it == use) return true; + } + return false; +} + + +static bool IsUseDefChainLinkPresent(Node* def, Node* use) { + Node::Inputs inputs = use->inputs(); + for (Node::Inputs::iterator it = inputs.begin(); it != inputs.end(); ++it) { + if (*it == def) return true; + } + return false; +} + + +class Verifier::Visitor : public NullNodeVisitor { + public: + explicit Visitor(Zone* zone) + : reached_from_start(NodeSet::key_compare(), + NodeSet::allocator_type(zone)), + reached_from_end(NodeSet::key_compare(), + NodeSet::allocator_type(zone)) {} + + // Fulfills the PreNodeCallback interface. 
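+  // Pre() is invoked on two separate traversals (see Run() below): once over
+  // everything reachable from start via use edges with from_start set, and
+  // once over everything reachable from end via input edges, filling the two
+  // sets that Run() compares afterwards.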
+ GenericGraphVisit::Control Pre(Node* node); + + bool from_start; + NodeSet reached_from_start; + NodeSet reached_from_end; +}; + + +GenericGraphVisit::Control Verifier::Visitor::Pre(Node* node) { + int value_count = OperatorProperties::GetValueInputCount(node->op()); + int context_count = OperatorProperties::GetContextInputCount(node->op()); + int effect_count = OperatorProperties::GetEffectInputCount(node->op()); + int control_count = OperatorProperties::GetControlInputCount(node->op()); + + // Verify number of inputs matches up. + int input_count = value_count + context_count + effect_count + control_count; + CHECK_EQ(input_count, node->InputCount()); + + // Verify all value inputs actually produce a value. + for (int i = 0; i < value_count; ++i) { + Node* value = NodeProperties::GetValueInput(node, i); + CHECK(OperatorProperties::HasValueOutput(value->op())); + CHECK(IsDefUseChainLinkPresent(value, node)); + CHECK(IsUseDefChainLinkPresent(value, node)); + } + + // Verify all context inputs are value nodes. + for (int i = 0; i < context_count; ++i) { + Node* context = NodeProperties::GetContextInput(node); + CHECK(OperatorProperties::HasValueOutput(context->op())); + CHECK(IsDefUseChainLinkPresent(context, node)); + CHECK(IsUseDefChainLinkPresent(context, node)); + } + + // Verify all effect inputs actually have an effect. + for (int i = 0; i < effect_count; ++i) { + Node* effect = NodeProperties::GetEffectInput(node); + CHECK(OperatorProperties::HasEffectOutput(effect->op())); + CHECK(IsDefUseChainLinkPresent(effect, node)); + CHECK(IsUseDefChainLinkPresent(effect, node)); + } + + // Verify all control inputs are control nodes. + for (int i = 0; i < control_count; ++i) { + Node* control = NodeProperties::GetControlInput(node, i); + CHECK(OperatorProperties::HasControlOutput(control->op())); + CHECK(IsDefUseChainLinkPresent(control, node)); + CHECK(IsUseDefChainLinkPresent(control, node)); + } + + // Verify all successors are projections if multiple value outputs exist. + if (OperatorProperties::GetValueOutputCount(node->op()) > 1) { + Node::Uses uses = node->uses(); + for (Node::Uses::iterator it = uses.begin(); it != uses.end(); ++it) { + CHECK(!NodeProperties::IsValueEdge(it.edge()) || + (*it)->opcode() == IrOpcode::kProjection || + (*it)->opcode() == IrOpcode::kParameter); + } + } + + switch (node->opcode()) { + case IrOpcode::kStart: + // Start has no inputs. + CHECK_EQ(0, input_count); + break; + case IrOpcode::kEnd: + // End has no outputs. + CHECK(!OperatorProperties::HasValueOutput(node->op())); + CHECK(!OperatorProperties::HasEffectOutput(node->op())); + CHECK(!OperatorProperties::HasControlOutput(node->op())); + break; + case IrOpcode::kDead: + // Dead is never connected to the graph. + UNREACHABLE(); + case IrOpcode::kBranch: { + // Branch uses are IfTrue and IfFalse. + Node::Uses uses = node->uses(); + bool got_true = false, got_false = false; + for (Node::Uses::iterator it = uses.begin(); it != uses.end(); ++it) { + CHECK(((*it)->opcode() == IrOpcode::kIfTrue && !got_true) || + ((*it)->opcode() == IrOpcode::kIfFalse && !got_false)); + if ((*it)->opcode() == IrOpcode::kIfTrue) got_true = true; + if ((*it)->opcode() == IrOpcode::kIfFalse) got_false = true; + } + // TODO(rossberg): Currently fails for various tests. 
+ // CHECK(got_true && got_false); + break; + } + case IrOpcode::kIfTrue: + case IrOpcode::kIfFalse: + CHECK_EQ(IrOpcode::kBranch, + NodeProperties::GetControlInput(node, 0)->opcode()); + break; + case IrOpcode::kLoop: + case IrOpcode::kMerge: + break; + case IrOpcode::kReturn: + // TODO(rossberg): check successor is End + break; + case IrOpcode::kThrow: + // TODO(rossberg): what are the constraints on these? + break; + case IrOpcode::kParameter: { + // Parameters have the start node as inputs. + CHECK_EQ(1, input_count); + CHECK_EQ(IrOpcode::kStart, + NodeProperties::GetValueInput(node, 0)->opcode()); + // Parameter has an input that produces enough values. + int index = static_cast<Operator1<int>*>(node->op())->parameter(); + Node* input = NodeProperties::GetValueInput(node, 0); + // Currently, parameter indices start at -1 instead of 0. + CHECK_GT(OperatorProperties::GetValueOutputCount(input->op()), index + 1); + break; + } + case IrOpcode::kInt32Constant: + case IrOpcode::kInt64Constant: + case IrOpcode::kFloat64Constant: + case IrOpcode::kExternalConstant: + case IrOpcode::kNumberConstant: + case IrOpcode::kHeapConstant: + // Constants have no inputs. + CHECK_EQ(0, input_count); + break; + case IrOpcode::kPhi: { + // Phi input count matches parent control node. + CHECK_EQ(1, control_count); + Node* control = NodeProperties::GetControlInput(node, 0); + CHECK_EQ(value_count, + OperatorProperties::GetControlInputCount(control->op())); + break; + } + case IrOpcode::kEffectPhi: { + // EffectPhi input count matches parent control node. + CHECK_EQ(1, control_count); + Node* control = NodeProperties::GetControlInput(node, 0); + CHECK_EQ(effect_count, + OperatorProperties::GetControlInputCount(control->op())); + break; + } + case IrOpcode::kLazyDeoptimization: + // TODO(jarin): what are the constraints on these? + break; + case IrOpcode::kDeoptimize: + // TODO(jarin): what are the constraints on these? + break; + case IrOpcode::kFrameState: + // TODO(jarin): what are the constraints on these? + break; + case IrOpcode::kCall: + // TODO(rossberg): what are the constraints on these? + break; + case IrOpcode::kContinuation: + // TODO(jarin): what are the constraints on these? + break; + case IrOpcode::kProjection: { + // Projection has an input that produces enough values. + int index = static_cast<Operator1<int>*>(node->op())->parameter(); + Node* input = NodeProperties::GetValueInput(node, 0); + CHECK_GT(OperatorProperties::GetValueOutputCount(input->op()), index); + break; + } + default: + // TODO(rossberg): Check other node kinds. + break; + } + + if (from_start) { + reached_from_start.insert(node); + } else { + reached_from_end.insert(node); + } + + return GenericGraphVisit::CONTINUE; +} + + +void Verifier::Run(Graph* graph) { + Visitor visitor(graph->zone()); + + CHECK_NE(NULL, graph->start()); + visitor.from_start = true; + graph->VisitNodeUsesFromStart(&visitor); + CHECK_NE(NULL, graph->end()); + visitor.from_start = false; + graph->VisitNodeInputsFromEnd(&visitor); + + // All control nodes reachable from end are reachable from start. 
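+  // A control node that the end depends on but that start cannot reach would
+  // be floating, i.e. evidence of a malformed control graph.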
+ for (NodeSet::iterator it = visitor.reached_from_end.begin(); + it != visitor.reached_from_end.end(); ++it) { + CHECK(!NodeProperties::IsControl(*it) || + visitor.reached_from_start.count(*it)); + } +} +} +} +} // namespace v8::internal::compiler diff --git a/deps/v8/src/compiler/verifier.h b/deps/v8/src/compiler/verifier.h new file mode 100644 index 00000000000..788c6a56579 --- /dev/null +++ b/deps/v8/src/compiler/verifier.h @@ -0,0 +1,28 @@ +// Copyright 2014 the V8 project authors. All rights reserved. +// Use of this source code is governed by a BSD-style license that can be +// found in the LICENSE file. + +#ifndef V8_COMPILER_VERIFIER_H_ +#define V8_COMPILER_VERIFIER_H_ + +#include "src/v8.h" + +#include "src/compiler/graph.h" + +namespace v8 { +namespace internal { +namespace compiler { + +class Verifier { + public: + static void Run(Graph* graph); + + private: + class Visitor; + DISALLOW_COPY_AND_ASSIGN(Verifier); +}; +} +} +} // namespace v8::internal::compiler + +#endif // V8_COMPILER_VERIFIER_H_ diff --git a/deps/v8/src/compiler/x64/code-generator-x64.cc b/deps/v8/src/compiler/x64/code-generator-x64.cc new file mode 100644 index 00000000000..9f278ad8980 --- /dev/null +++ b/deps/v8/src/compiler/x64/code-generator-x64.cc @@ -0,0 +1,1001 @@ +// Copyright 2013 the V8 project authors. All rights reserved. +// Use of this source code is governed by a BSD-style license that can be +// found in the LICENSE file. + +#include "src/compiler/code-generator.h" + +#include "src/compiler/code-generator-impl.h" +#include "src/compiler/gap-resolver.h" +#include "src/compiler/node-matchers.h" +#include "src/compiler/node-properties-inl.h" +#include "src/scopes.h" +#include "src/x64/assembler-x64.h" +#include "src/x64/macro-assembler-x64.h" + +namespace v8 { +namespace internal { +namespace compiler { + +#define __ masm()-> + + +// TODO(turbofan): Cleanup these hacks. +enum Immediate64Type { kImm64Value, kImm64Handle, kImm64Reference }; + + +struct Immediate64 { + uint64_t value; + Handle<Object> handle; + ExternalReference reference; + Immediate64Type type; +}; + + +enum RegisterOrOperandType { kRegister, kDoubleRegister, kOperand }; + + +struct RegisterOrOperand { + RegisterOrOperand() : operand(no_reg, 0) {} + Register reg; + DoubleRegister double_reg; + Operand operand; + RegisterOrOperandType type; +}; + + +// Adds X64 specific methods for decoding operands. 
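+// Typical use inside AssembleArchInstruction below (sketch):
+//
+//   X64OperandConverter i(this, instr);
+//   RegisterOrOperand input = i.InputRegisterOrOperand(0);
+//   if (input.type == kRegister) { /* emit the register form */ }
+//   else { /* emit the memory-operand form */ }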
+class X64OperandConverter : public InstructionOperandConverter {
+ public:
+  X64OperandConverter(CodeGenerator* gen, Instruction* instr)
+      : InstructionOperandConverter(gen, instr) {}
+
+  RegisterOrOperand InputRegisterOrOperand(int index) {
+    return ToRegisterOrOperand(instr_->InputAt(index));
+  }
+
+  Immediate InputImmediate(int index) {
+    return ToImmediate(instr_->InputAt(index));
+  }
+
+  RegisterOrOperand OutputRegisterOrOperand() {
+    return ToRegisterOrOperand(instr_->Output());
+  }
+
+  Immediate64 InputImmediate64(int index) {
+    return ToImmediate64(instr_->InputAt(index));
+  }
+
+  Immediate64 ToImmediate64(InstructionOperand* operand) {
+    Constant constant = ToConstant(operand);
+    Immediate64 immediate;
+    immediate.value = 0xbeefdeaddeefbeed;
+    immediate.type = kImm64Value;
+    switch (constant.type()) {
+      case Constant::kInt32:
+      case Constant::kInt64:
+        immediate.value = constant.ToInt64();
+        return immediate;
+      case Constant::kFloat64:
+        immediate.type = kImm64Handle;
+        immediate.handle =
+            isolate()->factory()->NewNumber(constant.ToFloat64(), TENURED);
+        return immediate;
+      case Constant::kExternalReference:
+        immediate.type = kImm64Reference;
+        immediate.reference = constant.ToExternalReference();
+        return immediate;
+      case Constant::kHeapObject:
+        immediate.type = kImm64Handle;
+        immediate.handle = constant.ToHeapObject();
+        return immediate;
+    }
+    UNREACHABLE();
+    return immediate;
+  }
+
+  Immediate ToImmediate(InstructionOperand* operand) {
+    Constant constant = ToConstant(operand);
+    switch (constant.type()) {
+      case Constant::kInt32:
+        return Immediate(constant.ToInt32());
+      case Constant::kInt64:
+      case Constant::kFloat64:
+      case Constant::kExternalReference:
+      case Constant::kHeapObject:
+        break;
+    }
+    UNREACHABLE();
+    return Immediate(-1);
+  }
+
+  Operand ToOperand(InstructionOperand* op, int extra = 0) {
+    RegisterOrOperand result = ToRegisterOrOperand(op, extra);
+    DCHECK_EQ(kOperand, result.type);
+    return result.operand;
+  }
+
+  RegisterOrOperand ToRegisterOrOperand(InstructionOperand* op, int extra = 0) {
+    RegisterOrOperand result;
+    if (op->IsRegister()) {
+      DCHECK(extra == 0);
+      result.type = kRegister;
+      result.reg = ToRegister(op);
+      return result;
+    } else if (op->IsDoubleRegister()) {
+      DCHECK(extra == 0);
+      result.type = kDoubleRegister;
+      result.double_reg = ToDoubleRegister(op);
+      return result;
+    }
+
+    DCHECK(op->IsStackSlot() || op->IsDoubleStackSlot());
+
+    result.type = kOperand;
+    // The linkage computes where all spill slots are located.
+    FrameOffset offset = linkage()->GetFrameOffset(op->index(), frame(), extra);
+    result.operand =
+        Operand(offset.from_stack_pointer() ?
rsp : rbp, offset.offset()); + return result; + } + + Operand MemoryOperand(int* first_input) { + const int offset = *first_input; + switch (AddressingModeField::decode(instr_->opcode())) { + case kMode_MR1I: { + *first_input += 2; + Register index = InputRegister(offset + 1); + return Operand(InputRegister(offset + 0), index, times_1, + 0); // TODO(dcarney): K != 0 + } + case kMode_MRI: + *first_input += 2; + return Operand(InputRegister(offset + 0), InputInt32(offset + 1)); + default: + UNREACHABLE(); + return Operand(no_reg, 0); + } + } + + Operand MemoryOperand() { + int first_input = 0; + return MemoryOperand(&first_input); + } +}; + + +static bool HasImmediateInput(Instruction* instr, int index) { + return instr->InputAt(index)->IsImmediate(); +} + + +#define ASSEMBLE_BINOP(asm_instr) \ + do { \ + if (HasImmediateInput(instr, 1)) { \ + RegisterOrOperand input = i.InputRegisterOrOperand(0); \ + if (input.type == kRegister) { \ + __ asm_instr(input.reg, i.InputImmediate(1)); \ + } else { \ + __ asm_instr(input.operand, i.InputImmediate(1)); \ + } \ + } else { \ + RegisterOrOperand input = i.InputRegisterOrOperand(1); \ + if (input.type == kRegister) { \ + __ asm_instr(i.InputRegister(0), input.reg); \ + } else { \ + __ asm_instr(i.InputRegister(0), input.operand); \ + } \ + } \ + } while (0) + + +#define ASSEMBLE_SHIFT(asm_instr, width) \ + do { \ + if (HasImmediateInput(instr, 1)) { \ + __ asm_instr(i.OutputRegister(), Immediate(i.InputInt##width(1))); \ + } else { \ + __ asm_instr##_cl(i.OutputRegister()); \ + } \ + } while (0) + + +// Assembles an instruction after register allocation, producing machine code. +void CodeGenerator::AssembleArchInstruction(Instruction* instr) { + X64OperandConverter i(this, instr); + + switch (ArchOpcodeField::decode(instr->opcode())) { + case kArchJmp: + __ jmp(code_->GetLabel(i.InputBlock(0))); + break; + case kArchNop: + // don't emit code for nops. 
+ break; + case kArchRet: + AssembleReturn(); + break; + case kArchDeoptimize: { + int deoptimization_id = MiscField::decode(instr->opcode()); + BuildTranslation(instr, deoptimization_id); + + Address deopt_entry = Deoptimizer::GetDeoptimizationEntry( + isolate(), deoptimization_id, Deoptimizer::LAZY); + __ call(deopt_entry, RelocInfo::RUNTIME_ENTRY); + break; + } + case kX64Add32: + ASSEMBLE_BINOP(addl); + break; + case kX64Add: + ASSEMBLE_BINOP(addq); + break; + case kX64Sub32: + ASSEMBLE_BINOP(subl); + break; + case kX64Sub: + ASSEMBLE_BINOP(subq); + break; + case kX64And32: + ASSEMBLE_BINOP(andl); + break; + case kX64And: + ASSEMBLE_BINOP(andq); + break; + case kX64Cmp32: + ASSEMBLE_BINOP(cmpl); + break; + case kX64Cmp: + ASSEMBLE_BINOP(cmpq); + break; + case kX64Test32: + ASSEMBLE_BINOP(testl); + break; + case kX64Test: + ASSEMBLE_BINOP(testq); + break; + case kX64Imul32: + if (HasImmediateInput(instr, 1)) { + RegisterOrOperand input = i.InputRegisterOrOperand(0); + if (input.type == kRegister) { + __ imull(i.OutputRegister(), input.reg, i.InputImmediate(1)); + } else { + __ movq(kScratchRegister, input.operand); + __ imull(i.OutputRegister(), kScratchRegister, i.InputImmediate(1)); + } + } else { + RegisterOrOperand input = i.InputRegisterOrOperand(1); + if (input.type == kRegister) { + __ imull(i.OutputRegister(), input.reg); + } else { + __ imull(i.OutputRegister(), input.operand); + } + } + break; + case kX64Imul: + if (HasImmediateInput(instr, 1)) { + RegisterOrOperand input = i.InputRegisterOrOperand(0); + if (input.type == kRegister) { + __ imulq(i.OutputRegister(), input.reg, i.InputImmediate(1)); + } else { + __ movq(kScratchRegister, input.operand); + __ imulq(i.OutputRegister(), kScratchRegister, i.InputImmediate(1)); + } + } else { + RegisterOrOperand input = i.InputRegisterOrOperand(1); + if (input.type == kRegister) { + __ imulq(i.OutputRegister(), input.reg); + } else { + __ imulq(i.OutputRegister(), input.operand); + } + } + break; + case kX64Idiv32: + __ cdq(); + __ idivl(i.InputRegister(1)); + break; + case kX64Idiv: + __ cqo(); + __ idivq(i.InputRegister(1)); + break; + case kX64Udiv32: + __ xorl(rdx, rdx); + __ divl(i.InputRegister(1)); + break; + case kX64Udiv: + __ xorq(rdx, rdx); + __ divq(i.InputRegister(1)); + break; + case kX64Not: { + RegisterOrOperand output = i.OutputRegisterOrOperand(); + if (output.type == kRegister) { + __ notq(output.reg); + } else { + __ notq(output.operand); + } + break; + } + case kX64Not32: { + RegisterOrOperand output = i.OutputRegisterOrOperand(); + if (output.type == kRegister) { + __ notl(output.reg); + } else { + __ notl(output.operand); + } + break; + } + case kX64Neg: { + RegisterOrOperand output = i.OutputRegisterOrOperand(); + if (output.type == kRegister) { + __ negq(output.reg); + } else { + __ negq(output.operand); + } + break; + } + case kX64Neg32: { + RegisterOrOperand output = i.OutputRegisterOrOperand(); + if (output.type == kRegister) { + __ negl(output.reg); + } else { + __ negl(output.operand); + } + break; + } + case kX64Or32: + ASSEMBLE_BINOP(orl); + break; + case kX64Or: + ASSEMBLE_BINOP(orq); + break; + case kX64Xor32: + ASSEMBLE_BINOP(xorl); + break; + case kX64Xor: + ASSEMBLE_BINOP(xorq); + break; + case kX64Shl32: + ASSEMBLE_SHIFT(shll, 5); + break; + case kX64Shl: + ASSEMBLE_SHIFT(shlq, 6); + break; + case kX64Shr32: + ASSEMBLE_SHIFT(shrl, 5); + break; + case kX64Shr: + ASSEMBLE_SHIFT(shrq, 6); + break; + case kX64Sar32: + ASSEMBLE_SHIFT(sarl, 5); + break; + case kX64Sar: + ASSEMBLE_SHIFT(sarq, 6); + 
break;
+    case kX64Push: {
+      RegisterOrOperand input = i.InputRegisterOrOperand(0);
+      if (input.type == kRegister) {
+        __ pushq(input.reg);
+      } else {
+        __ pushq(input.operand);
+      }
+      break;
+    }
+    case kX64PushI:
+      __ pushq(i.InputImmediate(0));
+      break;
+    case kX64CallCodeObject: {
+      if (HasImmediateInput(instr, 0)) {
+        Handle<Code> code = Handle<Code>::cast(i.InputHeapObject(0));
+        __ Call(code, RelocInfo::CODE_TARGET);
+      } else {
+        Register reg = i.InputRegister(0);
+        int entry = Code::kHeaderSize - kHeapObjectTag;
+        __ Call(Operand(reg, entry));
+      }
+      RecordSafepoint(instr->pointer_map(), Safepoint::kSimple, 0,
+                      Safepoint::kNoLazyDeopt);
+      bool lazy_deopt = (MiscField::decode(instr->opcode()) == 1);
+      if (lazy_deopt) {
+        RecordLazyDeoptimizationEntry(instr);
+      }
+      AddNopForSmiCodeInlining();
+      break;
+    }
+    case kX64CallAddress:
+      if (HasImmediateInput(instr, 0)) {
+        Immediate64 imm = i.InputImmediate64(0);
+        DCHECK_EQ(kImm64Value, imm.type);
+        __ Call(reinterpret_cast<byte*>(imm.value), RelocInfo::NONE64);
+      } else {
+        __ call(i.InputRegister(0));
+      }
+      break;
+    case kPopStack: {
+      int words = MiscField::decode(instr->opcode());
+      __ addq(rsp, Immediate(kPointerSize * words));
+      break;
+    }
+    case kX64CallJSFunction: {
+      Register func = i.InputRegister(0);
+
+      // TODO(jarin) The load of the context should be separated from the call.
+      __ movp(rsi, FieldOperand(func, JSFunction::kContextOffset));
+      __ Call(FieldOperand(func, JSFunction::kCodeEntryOffset));
+
+      RecordSafepoint(instr->pointer_map(), Safepoint::kSimple, 0,
+                      Safepoint::kNoLazyDeopt);
+      RecordLazyDeoptimizationEntry(instr);
+      break;
+    }
+    case kSSEFloat64Cmp: {
+      RegisterOrOperand input = i.InputRegisterOrOperand(1);
+      if (input.type == kDoubleRegister) {
+        __ ucomisd(i.InputDoubleRegister(0), input.double_reg);
+      } else {
+        __ ucomisd(i.InputDoubleRegister(0), input.operand);
+      }
+      break;
+    }
+    case kSSEFloat64Add:
+      __ addsd(i.InputDoubleRegister(0), i.InputDoubleRegister(1));
+      break;
+    case kSSEFloat64Sub:
+      __ subsd(i.InputDoubleRegister(0), i.InputDoubleRegister(1));
+      break;
+    case kSSEFloat64Mul:
+      __ mulsd(i.InputDoubleRegister(0), i.InputDoubleRegister(1));
+      break;
+    case kSSEFloat64Div:
+      __ divsd(i.InputDoubleRegister(0), i.InputDoubleRegister(1));
+      break;
+    case kSSEFloat64Mod: {
+      __ subq(rsp, Immediate(kDoubleSize));
+      // Move values to st(0) and st(1).
+      __ movsd(Operand(rsp, 0), i.InputDoubleRegister(1));
+      __ fld_d(Operand(rsp, 0));
+      __ movsd(Operand(rsp, 0), i.InputDoubleRegister(0));
+      __ fld_d(Operand(rsp, 0));
+      // Loop while fprem isn't done.
+      Label mod_loop;
+      __ bind(&mod_loop);
+      // This instruction traps on all kinds of inputs, but we are assuming
+      // the floating point control word is set to ignore them all.
+      __ fprem();
+      // The following 2 instructions implicitly use rax.
+      __ fnstsw_ax();
+      if (CpuFeatures::IsSupported(SAHF) && masm()->IsEnabled(SAHF)) {
+        __ sahf();
+      } else {
+        __ shrl(rax, Immediate(8));
+        __ andl(rax, Immediate(0xFF));
+        __ pushq(rax);
+        __ popfq();
+      }
+      __ j(parity_even, &mod_loop);
+      // Move output to stack and clean up.
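+      // At this point st(0) holds the remainder and st(1) the divisor;
+      // fstp(1) below frees st(1), then the result is spilled to memory and
+      // reloaded into the output XMM register.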
+ __ fstp(1); + __ fstp_d(Operand(rsp, 0)); + __ movsd(i.OutputDoubleRegister(), Operand(rsp, 0)); + __ addq(rsp, Immediate(kDoubleSize)); + break; + } + case kX64Int32ToInt64: + __ movzxwq(i.OutputRegister(), i.InputRegister(0)); + break; + case kX64Int64ToInt32: + __ Move(i.OutputRegister(), i.InputRegister(0)); + break; + case kSSEFloat64ToInt32: { + RegisterOrOperand input = i.InputRegisterOrOperand(0); + if (input.type == kDoubleRegister) { + __ cvttsd2si(i.OutputRegister(), input.double_reg); + } else { + __ cvttsd2si(i.OutputRegister(), input.operand); + } + break; + } + case kSSEFloat64ToUint32: { + // TODO(turbofan): X64 SSE cvttsd2siq should support operands. + __ cvttsd2siq(i.OutputRegister(), i.InputDoubleRegister(0)); + __ andl(i.OutputRegister(), i.OutputRegister()); // clear upper bits. + // TODO(turbofan): generated code should not look at the upper 32 bits + // of the result, but those bits could escape to the outside world. + break; + } + case kSSEInt32ToFloat64: { + RegisterOrOperand input = i.InputRegisterOrOperand(0); + if (input.type == kRegister) { + __ cvtlsi2sd(i.OutputDoubleRegister(), input.reg); + } else { + __ cvtlsi2sd(i.OutputDoubleRegister(), input.operand); + } + break; + } + case kSSEUint32ToFloat64: { + // TODO(turbofan): X64 SSE cvtqsi2sd should support operands. + __ cvtqsi2sd(i.OutputDoubleRegister(), i.InputRegister(0)); + break; + } + + case kSSELoad: + __ movsd(i.OutputDoubleRegister(), i.MemoryOperand()); + break; + case kSSEStore: { + int index = 0; + Operand operand = i.MemoryOperand(&index); + __ movsd(operand, i.InputDoubleRegister(index)); + break; + } + case kX64LoadWord8: + __ movzxbl(i.OutputRegister(), i.MemoryOperand()); + break; + case kX64StoreWord8: { + int index = 0; + Operand operand = i.MemoryOperand(&index); + __ movb(operand, i.InputRegister(index)); + break; + } + case kX64StoreWord8I: { + int index = 0; + Operand operand = i.MemoryOperand(&index); + __ movb(operand, Immediate(i.InputInt8(index))); + break; + } + case kX64LoadWord16: + __ movzxwl(i.OutputRegister(), i.MemoryOperand()); + break; + case kX64StoreWord16: { + int index = 0; + Operand operand = i.MemoryOperand(&index); + __ movw(operand, i.InputRegister(index)); + break; + } + case kX64StoreWord16I: { + int index = 0; + Operand operand = i.MemoryOperand(&index); + __ movw(operand, Immediate(i.InputInt16(index))); + break; + } + case kX64LoadWord32: + __ movl(i.OutputRegister(), i.MemoryOperand()); + break; + case kX64StoreWord32: { + int index = 0; + Operand operand = i.MemoryOperand(&index); + __ movl(operand, i.InputRegister(index)); + break; + } + case kX64StoreWord32I: { + int index = 0; + Operand operand = i.MemoryOperand(&index); + __ movl(operand, i.InputImmediate(index)); + break; + } + case kX64LoadWord64: + __ movq(i.OutputRegister(), i.MemoryOperand()); + break; + case kX64StoreWord64: { + int index = 0; + Operand operand = i.MemoryOperand(&index); + __ movq(operand, i.InputRegister(index)); + break; + } + case kX64StoreWord64I: { + int index = 0; + Operand operand = i.MemoryOperand(&index); + __ movq(operand, i.InputImmediate(index)); + break; + } + case kX64StoreWriteBarrier: { + Register object = i.InputRegister(0); + Register index = i.InputRegister(1); + Register value = i.InputRegister(2); + __ movsxlq(index, index); + __ movq(Operand(object, index, times_1, 0), value); + __ leaq(index, Operand(object, index, times_1, 0)); + SaveFPRegsMode mode = code_->frame()->DidAllocateDoubleRegisters() + ? 
kSaveFPRegs + : kDontSaveFPRegs; + __ RecordWrite(object, index, value, mode); + break; + } + } +} + + +// Assembles branches after this instruction. +void CodeGenerator::AssembleArchBranch(Instruction* instr, + FlagsCondition condition) { + X64OperandConverter i(this, instr); + Label done; + + // Emit a branch. The true and false targets are always the last two inputs + // to the instruction. + BasicBlock* tblock = i.InputBlock(static_cast<int>(instr->InputCount()) - 2); + BasicBlock* fblock = i.InputBlock(static_cast<int>(instr->InputCount()) - 1); + bool fallthru = IsNextInAssemblyOrder(fblock); + Label* tlabel = code()->GetLabel(tblock); + Label* flabel = fallthru ? &done : code()->GetLabel(fblock); + Label::Distance flabel_distance = fallthru ? Label::kNear : Label::kFar; + switch (condition) { + case kUnorderedEqual: + __ j(parity_even, flabel, flabel_distance); + // Fall through. + case kEqual: + __ j(equal, tlabel); + break; + case kUnorderedNotEqual: + __ j(parity_even, tlabel); + // Fall through. + case kNotEqual: + __ j(not_equal, tlabel); + break; + case kSignedLessThan: + __ j(less, tlabel); + break; + case kSignedGreaterThanOrEqual: + __ j(greater_equal, tlabel); + break; + case kSignedLessThanOrEqual: + __ j(less_equal, tlabel); + break; + case kSignedGreaterThan: + __ j(greater, tlabel); + break; + case kUnorderedLessThan: + __ j(parity_even, flabel, flabel_distance); + // Fall through. + case kUnsignedLessThan: + __ j(below, tlabel); + break; + case kUnorderedGreaterThanOrEqual: + __ j(parity_even, tlabel); + // Fall through. + case kUnsignedGreaterThanOrEqual: + __ j(above_equal, tlabel); + break; + case kUnorderedLessThanOrEqual: + __ j(parity_even, flabel, flabel_distance); + // Fall through. + case kUnsignedLessThanOrEqual: + __ j(below_equal, tlabel); + break; + case kUnorderedGreaterThan: + __ j(parity_even, tlabel); + // Fall through. + case kUnsignedGreaterThan: + __ j(above, tlabel); + break; + case kOverflow: + __ j(overflow, tlabel); + break; + case kNotOverflow: + __ j(no_overflow, tlabel); + break; + } + if (!fallthru) __ jmp(flabel, flabel_distance); // no fallthru to flabel. + __ bind(&done); +} + + +// Assembles boolean materializations after this instruction. +void CodeGenerator::AssembleArchBoolean(Instruction* instr, + FlagsCondition condition) { + X64OperandConverter i(this, instr); + Label done; + + // Materialize a full 64-bit 1 or 0 value. The result register is always the + // last output of the instruction. + Label check; + DCHECK_NE(0, instr->OutputCount()); + Register reg = i.OutputRegister(static_cast<int>(instr->OutputCount() - 1)); + Condition cc = no_condition; + switch (condition) { + case kUnorderedEqual: + __ j(parity_odd, &check, Label::kNear); + __ movl(reg, Immediate(0)); + __ jmp(&done, Label::kNear); + // Fall through. + case kEqual: + cc = equal; + break; + case kUnorderedNotEqual: + __ j(parity_odd, &check, Label::kNear); + __ movl(reg, Immediate(1)); + __ jmp(&done, Label::kNear); + // Fall through. + case kNotEqual: + cc = not_equal; + break; + case kSignedLessThan: + cc = less; + break; + case kSignedGreaterThanOrEqual: + cc = greater_equal; + break; + case kSignedLessThanOrEqual: + cc = less_equal; + break; + case kSignedGreaterThan: + cc = greater; + break; + case kUnorderedLessThan: + __ j(parity_odd, &check, Label::kNear); + __ movl(reg, Immediate(0)); + __ jmp(&done, Label::kNear); + // Fall through. 
+ case kUnsignedLessThan: + cc = below; + break; + case kUnorderedGreaterThanOrEqual: + __ j(parity_odd, &check, Label::kNear); + __ movl(reg, Immediate(1)); + __ jmp(&done, Label::kNear); + // Fall through. + case kUnsignedGreaterThanOrEqual: + cc = above_equal; + break; + case kUnorderedLessThanOrEqual: + __ j(parity_odd, &check, Label::kNear); + __ movl(reg, Immediate(0)); + __ jmp(&done, Label::kNear); + // Fall through. + case kUnsignedLessThanOrEqual: + cc = below_equal; + break; + case kUnorderedGreaterThan: + __ j(parity_odd, &check, Label::kNear); + __ movl(reg, Immediate(1)); + __ jmp(&done, Label::kNear); + // Fall through. + case kUnsignedGreaterThan: + cc = above; + break; + case kOverflow: + cc = overflow; + break; + case kNotOverflow: + cc = no_overflow; + break; + } + __ bind(&check); + __ setcc(cc, reg); + __ movzxbl(reg, reg); + __ bind(&done); +} + + +void CodeGenerator::AssemblePrologue() { + CallDescriptor* descriptor = linkage()->GetIncomingDescriptor(); + int stack_slots = frame()->GetSpillSlotCount(); + if (descriptor->kind() == CallDescriptor::kCallAddress) { + __ pushq(rbp); + __ movq(rbp, rsp); + const RegList saves = descriptor->CalleeSavedRegisters(); + if (saves != 0) { // Save callee-saved registers. + int register_save_area_size = 0; + for (int i = Register::kNumRegisters - 1; i >= 0; i--) { + if (!((1 << i) & saves)) continue; + __ pushq(Register::from_code(i)); + register_save_area_size += kPointerSize; + } + frame()->SetRegisterSaveAreaSize(register_save_area_size); + } + } else if (descriptor->IsJSFunctionCall()) { + CompilationInfo* info = linkage()->info(); + __ Prologue(info->IsCodePreAgingActive()); + frame()->SetRegisterSaveAreaSize( + StandardFrameConstants::kFixedFrameSizeFromFp); + + // Sloppy mode functions and builtins need to replace the receiver with the + // global proxy when called as functions (without an explicit receiver + // object). + // TODO(mstarzinger/verwaest): Should this be moved back into the CallIC? + if (info->strict_mode() == SLOPPY && !info->is_native()) { + Label ok; + StackArgumentsAccessor args(rbp, info->scope()->num_parameters()); + __ movp(rcx, args.GetReceiverOperand()); + __ CompareRoot(rcx, Heap::kUndefinedValueRootIndex); + __ j(not_equal, &ok, Label::kNear); + __ movp(rcx, GlobalObjectOperand()); + __ movp(rcx, FieldOperand(rcx, GlobalObject::kGlobalProxyOffset)); + __ movp(args.GetReceiverOperand(), rcx); + __ bind(&ok); + } + + } else { + __ StubPrologue(); + frame()->SetRegisterSaveAreaSize( + StandardFrameConstants::kFixedFrameSizeFromFp); + } + if (stack_slots > 0) { + __ subq(rsp, Immediate(stack_slots * kPointerSize)); + } +} + + +void CodeGenerator::AssembleReturn() { + CallDescriptor* descriptor = linkage()->GetIncomingDescriptor(); + if (descriptor->kind() == CallDescriptor::kCallAddress) { + if (frame()->GetRegisterSaveAreaSize() > 0) { + // Remove this frame's spill slots first. + int stack_slots = frame()->GetSpillSlotCount(); + if (stack_slots > 0) { + __ addq(rsp, Immediate(stack_slots * kPointerSize)); + } + const RegList saves = descriptor->CalleeSavedRegisters(); + // Restore registers. + if (saves != 0) { + for (int i = 0; i < Register::kNumRegisters; i++) { + if (!((1 << i) & saves)) continue; + __ popq(Register::from_code(i)); + } + } + __ popq(rbp); // Pop caller's frame pointer. + __ ret(0); + } else { + // No saved registers. + __ movq(rsp, rbp); // Move stack pointer back to frame pointer. + __ popq(rbp); // Pop caller's frame pointer. 
+ __ ret(0); + } + } else { + __ movq(rsp, rbp); // Move stack pointer back to frame pointer. + __ popq(rbp); // Pop caller's frame pointer. + int pop_count = + descriptor->IsJSFunctionCall() ? descriptor->ParameterCount() : 0; + __ ret(pop_count * kPointerSize); + } +} + + +void CodeGenerator::AssembleMove(InstructionOperand* source, + InstructionOperand* destination) { + X64OperandConverter g(this, NULL); + // Dispatch on the source and destination operand kinds. Not all + // combinations are possible. + if (source->IsRegister()) { + DCHECK(destination->IsRegister() || destination->IsStackSlot()); + Register src = g.ToRegister(source); + if (destination->IsRegister()) { + __ movq(g.ToRegister(destination), src); + } else { + __ movq(g.ToOperand(destination), src); + } + } else if (source->IsStackSlot()) { + DCHECK(destination->IsRegister() || destination->IsStackSlot()); + Operand src = g.ToOperand(source); + if (destination->IsRegister()) { + Register dst = g.ToRegister(destination); + __ movq(dst, src); + } else { + // Spill on demand to use a temporary register for memory-to-memory + // moves. + Register tmp = kScratchRegister; + Operand dst = g.ToOperand(destination); + __ movq(tmp, src); + __ movq(dst, tmp); + } + } else if (source->IsConstant()) { + ConstantOperand* constant_source = ConstantOperand::cast(source); + if (destination->IsRegister() || destination->IsStackSlot()) { + Register dst = destination->IsRegister() ? g.ToRegister(destination) + : kScratchRegister; + Immediate64 imm = g.ToImmediate64(constant_source); + switch (imm.type) { + case kImm64Value: + __ Set(dst, imm.value); + break; + case kImm64Reference: + __ Move(dst, imm.reference); + break; + case kImm64Handle: + __ Move(dst, imm.handle); + break; + } + if (destination->IsStackSlot()) { + __ movq(g.ToOperand(destination), kScratchRegister); + } + } else { + __ movq(kScratchRegister, + BitCast<uint64_t, double>(g.ToDouble(constant_source))); + if (destination->IsDoubleRegister()) { + __ movq(g.ToDoubleRegister(destination), kScratchRegister); + } else { + DCHECK(destination->IsDoubleStackSlot()); + __ movq(g.ToOperand(destination), kScratchRegister); + } + } + } else if (source->IsDoubleRegister()) { + XMMRegister src = g.ToDoubleRegister(source); + if (destination->IsDoubleRegister()) { + XMMRegister dst = g.ToDoubleRegister(destination); + __ movsd(dst, src); + } else { + DCHECK(destination->IsDoubleStackSlot()); + Operand dst = g.ToOperand(destination); + __ movsd(dst, src); + } + } else if (source->IsDoubleStackSlot()) { + DCHECK(destination->IsDoubleRegister() || destination->IsDoubleStackSlot()); + Operand src = g.ToOperand(source); + if (destination->IsDoubleRegister()) { + XMMRegister dst = g.ToDoubleRegister(destination); + __ movsd(dst, src); + } else { + // We rely on having xmm0 available as a fixed scratch register. + Operand dst = g.ToOperand(destination); + __ movsd(xmm0, src); + __ movsd(dst, xmm0); + } + } else { + UNREACHABLE(); + } +} + + +void CodeGenerator::AssembleSwap(InstructionOperand* source, + InstructionOperand* destination) { + X64OperandConverter g(this, NULL); + // Dispatch on the source and destination operand kinds. Not all + // combinations are possible. + if (source->IsRegister() && destination->IsRegister()) { + // Register-register. 
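+ // xchgq exchanges the two registers in a single instruction, so no
+ // scratch register is needed for this case.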
+ __ xchgq(g.ToRegister(source), g.ToRegister(destination));
+ } else if (source->IsRegister() && destination->IsStackSlot()) {
+ Register src = g.ToRegister(source);
+ Operand dst = g.ToOperand(destination);
+ __ xchgq(src, dst);
+ } else if ((source->IsStackSlot() && destination->IsStackSlot()) ||
+ (source->IsDoubleStackSlot() &&
+ destination->IsDoubleStackSlot())) {
+ // Memory-memory.
+ Register tmp = kScratchRegister;
+ Operand src = g.ToOperand(source);
+ Operand dst = g.ToOperand(destination);
+ __ movq(tmp, dst);
+ __ xchgq(tmp, src);
+ __ movq(dst, tmp);
+ } else if (source->IsDoubleRegister() && destination->IsDoubleRegister()) {
+ // XMM register-register swap. We rely on having xmm0
+ // available as a fixed scratch register.
+ XMMRegister src = g.ToDoubleRegister(source);
+ XMMRegister dst = g.ToDoubleRegister(destination);
+ __ movsd(xmm0, src);
+ __ movsd(src, dst);
+ __ movsd(dst, xmm0);
+ } else if (source->IsDoubleRegister() && destination->IsDoubleStackSlot()) {
+ // XMM register-memory swap. We rely on having xmm0
+ // available as a fixed scratch register.
+ XMMRegister src = g.ToDoubleRegister(source);
+ Operand dst = g.ToOperand(destination);
+ __ movsd(xmm0, src);
+ __ movsd(src, dst);
+ __ movsd(dst, xmm0);
+ } else {
+ // No other combinations are possible.
+ UNREACHABLE();
+ }
+}
+
+
+void CodeGenerator::AddNopForSmiCodeInlining() { __ nop(); }
+
+#undef __
+
+#ifdef DEBUG
+
+// Checks whether the code between start_pc and end_pc is a no-op.
+bool CodeGenerator::IsNopForSmiCodeInlining(Handle<Code> code, int start_pc,
+ int end_pc) {
+ if (start_pc + 1 != end_pc) {
+ return false;
+ }
+ return *(code->instruction_start() + start_pc) ==
+ v8::internal::Assembler::kNopByte;
+}
+
+#endif
+
+} // namespace compiler
+} // namespace internal
+} // namespace v8
diff --git a/deps/v8/src/compiler/x64/instruction-codes-x64.h b/deps/v8/src/compiler/x64/instruction-codes-x64.h
new file mode 100644
index 00000000000..8ba33ab10d1
--- /dev/null
+++ b/deps/v8/src/compiler/x64/instruction-codes-x64.h
@@ -0,0 +1,108 @@
+// Copyright 2014 the V8 project authors. All rights reserved.
+// Use of this source code is governed by a BSD-style license that can be
+// found in the LICENSE file.
+
+#ifndef V8_COMPILER_X64_INSTRUCTION_CODES_X64_H_
+#define V8_COMPILER_X64_INSTRUCTION_CODES_X64_H_
+
+namespace v8 {
+namespace internal {
+namespace compiler {
+
+// X64-specific opcodes that specify which assembly sequence to emit.
+// Most opcodes specify a single instruction.
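+// Naming convention: a "32" suffix selects the 32-bit form of an operation
+// (X64Add32 vs. X64Add), and a trailing "I" marks a variant that takes an
+// immediate operand (X64PushI, X64StoreWord32I).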
+#define TARGET_ARCH_OPCODE_LIST(V) \ + V(X64Add) \ + V(X64Add32) \ + V(X64And) \ + V(X64And32) \ + V(X64Cmp) \ + V(X64Cmp32) \ + V(X64Test) \ + V(X64Test32) \ + V(X64Or) \ + V(X64Or32) \ + V(X64Xor) \ + V(X64Xor32) \ + V(X64Sub) \ + V(X64Sub32) \ + V(X64Imul) \ + V(X64Imul32) \ + V(X64Idiv) \ + V(X64Idiv32) \ + V(X64Udiv) \ + V(X64Udiv32) \ + V(X64Not) \ + V(X64Not32) \ + V(X64Neg) \ + V(X64Neg32) \ + V(X64Shl) \ + V(X64Shl32) \ + V(X64Shr) \ + V(X64Shr32) \ + V(X64Sar) \ + V(X64Sar32) \ + V(X64Push) \ + V(X64PushI) \ + V(X64CallCodeObject) \ + V(X64CallAddress) \ + V(PopStack) \ + V(X64CallJSFunction) \ + V(SSEFloat64Cmp) \ + V(SSEFloat64Add) \ + V(SSEFloat64Sub) \ + V(SSEFloat64Mul) \ + V(SSEFloat64Div) \ + V(SSEFloat64Mod) \ + V(X64Int32ToInt64) \ + V(X64Int64ToInt32) \ + V(SSEFloat64ToInt32) \ + V(SSEFloat64ToUint32) \ + V(SSEInt32ToFloat64) \ + V(SSEUint32ToFloat64) \ + V(SSELoad) \ + V(SSEStore) \ + V(X64LoadWord8) \ + V(X64StoreWord8) \ + V(X64StoreWord8I) \ + V(X64LoadWord16) \ + V(X64StoreWord16) \ + V(X64StoreWord16I) \ + V(X64LoadWord32) \ + V(X64StoreWord32) \ + V(X64StoreWord32I) \ + V(X64LoadWord64) \ + V(X64StoreWord64) \ + V(X64StoreWord64I) \ + V(X64StoreWriteBarrier) + + +// Addressing modes represent the "shape" of inputs to an instruction. +// Many instructions support multiple addressing modes. Addressing modes +// are encoded into the InstructionCode of the instruction and tell the +// code generator after register allocation which assembler method to call. +// +// We use the following local notation for addressing modes: +// +// R = register +// O = register or stack slot +// D = double register +// I = immediate (handle, external, int32) +// MR = [register] +// MI = [immediate] +// MRN = [register + register * N in {1, 2, 4, 8}] +// MRI = [register + immediate] +// MRNI = [register + register * N in {1, 2, 4, 8} + immediate] +#define TARGET_ADDRESSING_MODE_LIST(V) \ + V(MR) /* [%r1] */ \ + V(MRI) /* [%r1 + K] */ \ + V(MR1I) /* [%r1 + %r2 + K] */ \ + V(MR2I) /* [%r1 + %r2*2 + K] */ \ + V(MR4I) /* [%r1 + %r2*4 + K] */ \ + V(MR8I) /* [%r1 + %r2*8 + K] */ + +} // namespace compiler +} // namespace internal +} // namespace v8 + +#endif // V8_COMPILER_X64_INSTRUCTION_CODES_X64_H_ diff --git a/deps/v8/src/compiler/x64/instruction-selector-x64.cc b/deps/v8/src/compiler/x64/instruction-selector-x64.cc new file mode 100644 index 00000000000..965e612e2d6 --- /dev/null +++ b/deps/v8/src/compiler/x64/instruction-selector-x64.cc @@ -0,0 +1,722 @@ +// Copyright 2014 the V8 project authors. All rights reserved. +// Use of this source code is governed by a BSD-style license that can be +// found in the LICENSE file. + +#include "src/compiler/instruction-selector-impl.h" +#include "src/compiler/node-matchers.h" + +namespace v8 { +namespace internal { +namespace compiler { + +// Adds X64-specific methods for generating operands. +class X64OperandGenerator V8_FINAL : public OperandGenerator { + public: + explicit X64OperandGenerator(InstructionSelector* selector) + : OperandGenerator(selector) {} + + InstructionOperand* TempRegister(Register reg) { + return new (zone()) UnallocatedOperand(UnallocatedOperand::FIXED_REGISTER, + Register::ToAllocationIndex(reg)); + } + + InstructionOperand* UseByteRegister(Node* node) { + // TODO(dcarney): relax constraint. 
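+ // For now, pin byte values to rdx; relaxing this to any byte-addressable
+ // register is the TODO above.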
+ return UseFixed(node, rdx); + } + + InstructionOperand* UseImmediate64(Node* node) { return UseImmediate(node); } + + bool CanBeImmediate(Node* node) { + switch (node->opcode()) { + case IrOpcode::kInt32Constant: + return true; + default: + return false; + } + } + + bool CanBeImmediate64(Node* node) { + switch (node->opcode()) { + case IrOpcode::kInt32Constant: + return true; + case IrOpcode::kNumberConstant: + return true; + case IrOpcode::kHeapConstant: { + // Constants in new space cannot be used as immediates in V8 because + // the GC does not scan code objects when collecting the new generation. + Handle<HeapObject> value = ValueOf<Handle<HeapObject> >(node->op()); + return !isolate()->heap()->InNewSpace(*value); + } + default: + return false; + } + } +}; + + +void InstructionSelector::VisitLoad(Node* node) { + MachineType rep = OpParameter<MachineType>(node); + X64OperandGenerator g(this); + Node* base = node->InputAt(0); + Node* index = node->InputAt(1); + + InstructionOperand* output = rep == kMachineFloat64 + ? g.DefineAsDoubleRegister(node) + : g.DefineAsRegister(node); + ArchOpcode opcode; + switch (rep) { + case kMachineFloat64: + opcode = kSSELoad; + break; + case kMachineWord8: + opcode = kX64LoadWord8; + break; + case kMachineWord16: + opcode = kX64LoadWord16; + break; + case kMachineWord32: + opcode = kX64LoadWord32; + break; + case kMachineTagged: // Fall through. + case kMachineWord64: + opcode = kX64LoadWord64; + break; + default: + UNREACHABLE(); + return; + } + if (g.CanBeImmediate(base)) { + // load [#base + %index] + Emit(opcode | AddressingModeField::encode(kMode_MRI), output, + g.UseRegister(index), g.UseImmediate(base)); + } else if (g.CanBeImmediate(index)) { // load [%base + #index] + Emit(opcode | AddressingModeField::encode(kMode_MRI), output, + g.UseRegister(base), g.UseImmediate(index)); + } else { // load [%base + %index + K] + Emit(opcode | AddressingModeField::encode(kMode_MR1I), output, + g.UseRegister(base), g.UseRegister(index)); + } + // TODO(turbofan): addressing modes [r+r*{2,4,8}+K] +} + + +void InstructionSelector::VisitStore(Node* node) { + X64OperandGenerator g(this); + Node* base = node->InputAt(0); + Node* index = node->InputAt(1); + Node* value = node->InputAt(2); + + StoreRepresentation store_rep = OpParameter<StoreRepresentation>(node); + MachineType rep = store_rep.rep; + if (store_rep.write_barrier_kind == kFullWriteBarrier) { + DCHECK(rep == kMachineTagged); + // TODO(dcarney): refactor RecordWrite function to take temp registers + // and pass them here instead of using fixed regs + // TODO(dcarney): handle immediate indices. + InstructionOperand* temps[] = {g.TempRegister(rcx), g.TempRegister(rdx)}; + Emit(kX64StoreWriteBarrier, NULL, g.UseFixed(base, rbx), + g.UseFixed(index, rcx), g.UseFixed(value, rdx), ARRAY_SIZE(temps), + temps); + return; + } + DCHECK_EQ(kNoWriteBarrier, store_rep.write_barrier_kind); + bool is_immediate = false; + InstructionOperand* val; + if (rep == kMachineFloat64) { + val = g.UseDoubleRegister(value); + } else { + is_immediate = g.CanBeImmediate(value); + if (is_immediate) { + val = g.UseImmediate(value); + } else if (rep == kMachineWord8) { + val = g.UseByteRegister(value); + } else { + val = g.UseRegister(value); + } + } + ArchOpcode opcode; + switch (rep) { + case kMachineFloat64: + opcode = kSSEStore; + break; + case kMachineWord8: + opcode = is_immediate ? kX64StoreWord8I : kX64StoreWord8; + break; + case kMachineWord16: + opcode = is_immediate ? 
kX64StoreWord16I : kX64StoreWord16; + break; + case kMachineWord32: + opcode = is_immediate ? kX64StoreWord32I : kX64StoreWord32; + break; + case kMachineTagged: // Fall through. + case kMachineWord64: + opcode = is_immediate ? kX64StoreWord64I : kX64StoreWord64; + break; + default: + UNREACHABLE(); + return; + } + if (g.CanBeImmediate(base)) { + // store [#base + %index], %|#value + Emit(opcode | AddressingModeField::encode(kMode_MRI), NULL, + g.UseRegister(index), g.UseImmediate(base), val); + } else if (g.CanBeImmediate(index)) { // store [%base + #index], %|#value + Emit(opcode | AddressingModeField::encode(kMode_MRI), NULL, + g.UseRegister(base), g.UseImmediate(index), val); + } else { // store [%base + %index], %|#value + Emit(opcode | AddressingModeField::encode(kMode_MR1I), NULL, + g.UseRegister(base), g.UseRegister(index), val); + } + // TODO(turbofan): addressing modes [r+r*{2,4,8}+K] +} + + +// Shared routine for multiple binary operations. +static void VisitBinop(InstructionSelector* selector, Node* node, + InstructionCode opcode, FlagsContinuation* cont) { + X64OperandGenerator g(selector); + Int32BinopMatcher m(node); + InstructionOperand* inputs[4]; + size_t input_count = 0; + InstructionOperand* outputs[2]; + size_t output_count = 0; + + // TODO(turbofan): match complex addressing modes. + // TODO(turbofan): if commutative, pick the non-live-in operand as the left as + // this might be the last use and therefore its register can be reused. + if (g.CanBeImmediate(m.right().node())) { + inputs[input_count++] = g.Use(m.left().node()); + inputs[input_count++] = g.UseImmediate(m.right().node()); + } else { + inputs[input_count++] = g.UseRegister(m.left().node()); + inputs[input_count++] = g.Use(m.right().node()); + } + + if (cont->IsBranch()) { + inputs[input_count++] = g.Label(cont->true_block()); + inputs[input_count++] = g.Label(cont->false_block()); + } + + outputs[output_count++] = g.DefineSameAsFirst(node); + if (cont->IsSet()) { + outputs[output_count++] = g.DefineAsRegister(cont->result()); + } + + DCHECK_NE(0, input_count); + DCHECK_NE(0, output_count); + DCHECK_GE(ARRAY_SIZE(inputs), input_count); + DCHECK_GE(ARRAY_SIZE(outputs), output_count); + + Instruction* instr = selector->Emit(cont->Encode(opcode), output_count, + outputs, input_count, inputs); + if (cont->IsBranch()) instr->MarkAsControl(); +} + + +// Shared routine for multiple binary operations. 
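+// This overload covers the common case without a flags continuation: the
+// default-constructed FlagsContinuation encodes "no branch and no
+// materialization".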
+static void VisitBinop(InstructionSelector* selector, Node* node, + InstructionCode opcode) { + FlagsContinuation cont; + VisitBinop(selector, node, opcode, &cont); +} + + +void InstructionSelector::VisitWord32And(Node* node) { + VisitBinop(this, node, kX64And32); +} + + +void InstructionSelector::VisitWord64And(Node* node) { + VisitBinop(this, node, kX64And); +} + + +void InstructionSelector::VisitWord32Or(Node* node) { + VisitBinop(this, node, kX64Or32); +} + + +void InstructionSelector::VisitWord64Or(Node* node) { + VisitBinop(this, node, kX64Or); +} + + +template <typename T> +static void VisitXor(InstructionSelector* selector, Node* node, + ArchOpcode xor_opcode, ArchOpcode not_opcode) { + X64OperandGenerator g(selector); + BinopMatcher<IntMatcher<T>, IntMatcher<T> > m(node); + if (m.right().Is(-1)) { + selector->Emit(not_opcode, g.DefineSameAsFirst(node), + g.Use(m.left().node())); + } else { + VisitBinop(selector, node, xor_opcode); + } +} + + +void InstructionSelector::VisitWord32Xor(Node* node) { + VisitXor<int32_t>(this, node, kX64Xor32, kX64Not32); +} + + +void InstructionSelector::VisitWord64Xor(Node* node) { + VisitXor<int64_t>(this, node, kX64Xor, kX64Not); +} + + +// Shared routine for multiple 32-bit shift operations. +// TODO(bmeurer): Merge this with VisitWord64Shift using template magic? +static void VisitWord32Shift(InstructionSelector* selector, Node* node, + ArchOpcode opcode) { + X64OperandGenerator g(selector); + Node* left = node->InputAt(0); + Node* right = node->InputAt(1); + + // TODO(turbofan): assembler only supports some addressing modes for shifts. + if (g.CanBeImmediate(right)) { + selector->Emit(opcode, g.DefineSameAsFirst(node), g.UseRegister(left), + g.UseImmediate(right)); + } else { + Int32BinopMatcher m(node); + if (m.right().IsWord32And()) { + Int32BinopMatcher mright(right); + if (mright.right().Is(0x1F)) { + right = mright.left().node(); + } + } + selector->Emit(opcode, g.DefineSameAsFirst(node), g.UseRegister(left), + g.UseFixed(right, rcx)); + } +} + + +// Shared routine for multiple 64-bit shift operations. +// TODO(bmeurer): Merge this with VisitWord32Shift using template magic? +static void VisitWord64Shift(InstructionSelector* selector, Node* node, + ArchOpcode opcode) { + X64OperandGenerator g(selector); + Node* left = node->InputAt(0); + Node* right = node->InputAt(1); + + // TODO(turbofan): assembler only supports some addressing modes for shifts. 
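+ // A shift count of the form (x & 0x3F) can use x directly: the hardware
+ // masks the count in cl to six bits for 64-bit shifts anyway.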
+ if (g.CanBeImmediate(right)) { + selector->Emit(opcode, g.DefineSameAsFirst(node), g.UseRegister(left), + g.UseImmediate(right)); + } else { + Int64BinopMatcher m(node); + if (m.right().IsWord64And()) { + Int64BinopMatcher mright(right); + if (mright.right().Is(0x3F)) { + right = mright.left().node(); + } + } + selector->Emit(opcode, g.DefineSameAsFirst(node), g.UseRegister(left), + g.UseFixed(right, rcx)); + } +} + + +void InstructionSelector::VisitWord32Shl(Node* node) { + VisitWord32Shift(this, node, kX64Shl32); +} + + +void InstructionSelector::VisitWord64Shl(Node* node) { + VisitWord64Shift(this, node, kX64Shl); +} + + +void InstructionSelector::VisitWord32Shr(Node* node) { + VisitWord32Shift(this, node, kX64Shr32); +} + + +void InstructionSelector::VisitWord64Shr(Node* node) { + VisitWord64Shift(this, node, kX64Shr); +} + + +void InstructionSelector::VisitWord32Sar(Node* node) { + VisitWord32Shift(this, node, kX64Sar32); +} + + +void InstructionSelector::VisitWord64Sar(Node* node) { + VisitWord64Shift(this, node, kX64Sar); +} + + +void InstructionSelector::VisitInt32Add(Node* node) { + VisitBinop(this, node, kX64Add32); +} + + +void InstructionSelector::VisitInt64Add(Node* node) { + VisitBinop(this, node, kX64Add); +} + + +template <typename T> +static void VisitSub(InstructionSelector* selector, Node* node, + ArchOpcode sub_opcode, ArchOpcode neg_opcode) { + X64OperandGenerator g(selector); + BinopMatcher<IntMatcher<T>, IntMatcher<T> > m(node); + if (m.left().Is(0)) { + selector->Emit(neg_opcode, g.DefineSameAsFirst(node), + g.Use(m.right().node())); + } else { + VisitBinop(selector, node, sub_opcode); + } +} + + +void InstructionSelector::VisitInt32Sub(Node* node) { + VisitSub<int32_t>(this, node, kX64Sub32, kX64Neg32); +} + + +void InstructionSelector::VisitInt64Sub(Node* node) { + VisitSub<int64_t>(this, node, kX64Sub, kX64Neg); +} + + +static void VisitMul(InstructionSelector* selector, Node* node, + ArchOpcode opcode) { + X64OperandGenerator g(selector); + Node* left = node->InputAt(0); + Node* right = node->InputAt(1); + if (g.CanBeImmediate(right)) { + selector->Emit(opcode, g.DefineAsRegister(node), g.Use(left), + g.UseImmediate(right)); + } else if (g.CanBeImmediate(left)) { + selector->Emit(opcode, g.DefineAsRegister(node), g.Use(right), + g.UseImmediate(left)); + } else { + // TODO(turbofan): select better left operand. 
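+ // The result is constrained to the left operand's register
+ // (DefineSameAsFirst), matching the two-operand multiply form, which
+ // overwrites its destination.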
+ selector->Emit(opcode, g.DefineSameAsFirst(node), g.UseRegister(left), + g.Use(right)); + } +} + + +void InstructionSelector::VisitInt32Mul(Node* node) { + VisitMul(this, node, kX64Imul32); +} + + +void InstructionSelector::VisitInt64Mul(Node* node) { + VisitMul(this, node, kX64Imul); +} + + +static void VisitDiv(InstructionSelector* selector, Node* node, + ArchOpcode opcode) { + X64OperandGenerator g(selector); + InstructionOperand* temps[] = {g.TempRegister(rdx)}; + selector->Emit( + opcode, g.DefineAsFixed(node, rax), g.UseFixed(node->InputAt(0), rax), + g.UseUniqueRegister(node->InputAt(1)), ARRAY_SIZE(temps), temps); +} + + +void InstructionSelector::VisitInt32Div(Node* node) { + VisitDiv(this, node, kX64Idiv32); +} + + +void InstructionSelector::VisitInt64Div(Node* node) { + VisitDiv(this, node, kX64Idiv); +} + + +void InstructionSelector::VisitInt32UDiv(Node* node) { + VisitDiv(this, node, kX64Udiv32); +} + + +void InstructionSelector::VisitInt64UDiv(Node* node) { + VisitDiv(this, node, kX64Udiv); +} + + +static void VisitMod(InstructionSelector* selector, Node* node, + ArchOpcode opcode) { + X64OperandGenerator g(selector); + InstructionOperand* temps[] = {g.TempRegister(rax), g.TempRegister(rdx)}; + selector->Emit( + opcode, g.DefineAsFixed(node, rdx), g.UseFixed(node->InputAt(0), rax), + g.UseUniqueRegister(node->InputAt(1)), ARRAY_SIZE(temps), temps); +} + + +void InstructionSelector::VisitInt32Mod(Node* node) { + VisitMod(this, node, kX64Idiv32); +} + + +void InstructionSelector::VisitInt64Mod(Node* node) { + VisitMod(this, node, kX64Idiv); +} + + +void InstructionSelector::VisitInt32UMod(Node* node) { + VisitMod(this, node, kX64Udiv32); +} + + +void InstructionSelector::VisitInt64UMod(Node* node) { + VisitMod(this, node, kX64Udiv); +} + + +void InstructionSelector::VisitChangeInt32ToFloat64(Node* node) { + X64OperandGenerator g(this); + Emit(kSSEInt32ToFloat64, g.DefineAsDoubleRegister(node), + g.Use(node->InputAt(0))); +} + + +void InstructionSelector::VisitChangeUint32ToFloat64(Node* node) { + X64OperandGenerator g(this); + // TODO(turbofan): X64 SSE cvtqsi2sd should support operands. + Emit(kSSEUint32ToFloat64, g.DefineAsDoubleRegister(node), + g.UseRegister(node->InputAt(0))); +} + + +void InstructionSelector::VisitChangeFloat64ToInt32(Node* node) { + X64OperandGenerator g(this); + Emit(kSSEFloat64ToInt32, g.DefineAsRegister(node), g.Use(node->InputAt(0))); +} + + +void InstructionSelector::VisitChangeFloat64ToUint32(Node* node) { + X64OperandGenerator g(this); + // TODO(turbofan): X64 SSE cvttsd2siq should support operands. 
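+ // Until then, force the input into a double register.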
+ Emit(kSSEFloat64ToUint32, g.DefineAsRegister(node), + g.UseDoubleRegister(node->InputAt(0))); +} + + +void InstructionSelector::VisitFloat64Add(Node* node) { + X64OperandGenerator g(this); + Emit(kSSEFloat64Add, g.DefineSameAsFirst(node), + g.UseDoubleRegister(node->InputAt(0)), + g.UseDoubleRegister(node->InputAt(1))); +} + + +void InstructionSelector::VisitFloat64Sub(Node* node) { + X64OperandGenerator g(this); + Emit(kSSEFloat64Sub, g.DefineSameAsFirst(node), + g.UseDoubleRegister(node->InputAt(0)), + g.UseDoubleRegister(node->InputAt(1))); +} + + +void InstructionSelector::VisitFloat64Mul(Node* node) { + X64OperandGenerator g(this); + Emit(kSSEFloat64Mul, g.DefineSameAsFirst(node), + g.UseDoubleRegister(node->InputAt(0)), + g.UseDoubleRegister(node->InputAt(1))); +} + + +void InstructionSelector::VisitFloat64Div(Node* node) { + X64OperandGenerator g(this); + Emit(kSSEFloat64Div, g.DefineSameAsFirst(node), + g.UseDoubleRegister(node->InputAt(0)), + g.UseDoubleRegister(node->InputAt(1))); +} + + +void InstructionSelector::VisitFloat64Mod(Node* node) { + X64OperandGenerator g(this); + InstructionOperand* temps[] = {g.TempRegister(rax)}; + Emit(kSSEFloat64Mod, g.DefineSameAsFirst(node), + g.UseDoubleRegister(node->InputAt(0)), + g.UseDoubleRegister(node->InputAt(1)), 1, temps); +} + + +void InstructionSelector::VisitConvertInt64ToInt32(Node* node) { + X64OperandGenerator g(this); + // TODO(dcarney): other modes + Emit(kX64Int64ToInt32, g.DefineAsRegister(node), + g.UseRegister(node->InputAt(0))); +} + + +void InstructionSelector::VisitConvertInt32ToInt64(Node* node) { + X64OperandGenerator g(this); + // TODO(dcarney): other modes + Emit(kX64Int32ToInt64, g.DefineAsRegister(node), + g.UseRegister(node->InputAt(0))); +} + + +void InstructionSelector::VisitInt32AddWithOverflow(Node* node, + FlagsContinuation* cont) { + VisitBinop(this, node, kX64Add32, cont); +} + + +void InstructionSelector::VisitInt32SubWithOverflow(Node* node, + FlagsContinuation* cont) { + VisitBinop(this, node, kX64Sub32, cont); +} + + +// Shared routine for multiple compare operations. +static void VisitCompare(InstructionSelector* selector, InstructionCode opcode, + InstructionOperand* left, InstructionOperand* right, + FlagsContinuation* cont) { + X64OperandGenerator g(selector); + opcode = cont->Encode(opcode); + if (cont->IsBranch()) { + selector->Emit(opcode, NULL, left, right, g.Label(cont->true_block()), + g.Label(cont->false_block()))->MarkAsControl(); + } else { + DCHECK(cont->IsSet()); + selector->Emit(opcode, g.DefineAsRegister(cont->result()), left, right); + } +} + + +// Shared routine for multiple word compare operations. +static void VisitWordCompare(InstructionSelector* selector, Node* node, + InstructionCode opcode, FlagsContinuation* cont, + bool commutative) { + X64OperandGenerator g(selector); + Node* left = node->InputAt(0); + Node* right = node->InputAt(1); + + // Match immediates on left or right side of comparison. 
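+ // cmp and test only accept an immediate as the right operand; when the
+ // constant is on the left, swap the operands and commute the condition.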
+ if (g.CanBeImmediate(right)) {
+ VisitCompare(selector, opcode, g.Use(left), g.UseImmediate(right), cont);
+ } else if (g.CanBeImmediate(left)) {
+ if (!commutative) cont->Commute();
+ VisitCompare(selector, opcode, g.Use(right), g.UseImmediate(left), cont);
+ } else {
+ VisitCompare(selector, opcode, g.UseRegister(left), g.Use(right), cont);
+ }
+}
+
+
+void InstructionSelector::VisitWord32Test(Node* node, FlagsContinuation* cont) {
+ switch (node->opcode()) {
+ case IrOpcode::kInt32Sub:
+ return VisitWordCompare(this, node, kX64Cmp32, cont, false);
+ case IrOpcode::kWord32And:
+ return VisitWordCompare(this, node, kX64Test32, cont, true);
+ default:
+ break;
+ }
+
+ X64OperandGenerator g(this);
+ VisitCompare(this, kX64Test32, g.Use(node), g.TempImmediate(-1), cont);
+}
+
+
+void InstructionSelector::VisitWord64Test(Node* node, FlagsContinuation* cont) {
+ switch (node->opcode()) {
+ case IrOpcode::kInt64Sub:
+ return VisitWordCompare(this, node, kX64Cmp, cont, false);
+ case IrOpcode::kWord64And:
+ return VisitWordCompare(this, node, kX64Test, cont, true);
+ default:
+ break;
+ }
+
+ X64OperandGenerator g(this);
+ VisitCompare(this, kX64Test, g.Use(node), g.TempImmediate(-1), cont);
+}
+
+
+void InstructionSelector::VisitWord32Compare(Node* node,
+ FlagsContinuation* cont) {
+ VisitWordCompare(this, node, kX64Cmp32, cont, false);
+}
+
+
+void InstructionSelector::VisitWord64Compare(Node* node,
+ FlagsContinuation* cont) {
+ VisitWordCompare(this, node, kX64Cmp, cont, false);
+}
+
+
+void InstructionSelector::VisitFloat64Compare(Node* node,
+ FlagsContinuation* cont) {
+ X64OperandGenerator g(this);
+ Node* left = node->InputAt(0);
+ Node* right = node->InputAt(1);
+ VisitCompare(this, kSSEFloat64Cmp, g.UseDoubleRegister(left), g.Use(right),
+ cont);
+}
+
+
+void InstructionSelector::VisitCall(Node* call, BasicBlock* continuation,
+ BasicBlock* deoptimization) {
+ X64OperandGenerator g(this);
+ CallDescriptor* descriptor = OpParameter<CallDescriptor*>(call);
+ CallBuffer buffer(zone(), descriptor); // TODO(turbofan): temp zone here?
+
+ // Compute InstructionOperands for inputs and outputs.
+ InitializeCallBuffer(call, &buffer, true, true, continuation, deoptimization);
+
+ // TODO(dcarney): stack alignment for c calls.
+ // TODO(dcarney): shadow space on windows for c calls.
+ // Push any stack arguments.
+ for (int i = buffer.pushed_count - 1; i >= 0; --i) {
+ Node* input = buffer.pushed_nodes[i];
+ // TODO(titzer): handle pushing double parameters.
+ if (g.CanBeImmediate(input)) {
+ Emit(kX64PushI, NULL, g.UseImmediate(input));
+ } else {
+ Emit(kX64Push, NULL, g.Use(input));
+ }
+ }
+
+ // Select the appropriate opcode based on the call type.
+ InstructionCode opcode;
+ switch (descriptor->kind()) {
+ case CallDescriptor::kCallCodeObject: {
+ bool lazy_deopt = descriptor->CanLazilyDeoptimize();
+ opcode = kX64CallCodeObject | MiscField::encode(lazy_deopt ? 1 : 0);
+ break;
+ }
+ case CallDescriptor::kCallAddress:
+ opcode = kX64CallAddress;
+ break;
+ case CallDescriptor::kCallJSFunction:
+ opcode = kX64CallJSFunction;
+ break;
+ default:
+ UNREACHABLE();
+ return;
+ }
+
+ // Emit the call instruction.
+ Instruction* call_instr =
+ Emit(opcode, buffer.output_count, buffer.outputs,
+ buffer.fixed_and_control_count(), buffer.fixed_and_control_args);
+
+ call_instr->MarkAsCall();
+ if (deoptimization != NULL) {
+ DCHECK(continuation != NULL);
+ call_instr->MarkAsControl();
+ }
+
+ // Caller clean up of stack for C-style calls.
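+ // C calling conventions leave the pushed arguments for the caller to
+ // remove, hence the explicit kPopStack below.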
+ if (descriptor->kind() == CallDescriptor::kCallAddress && + buffer.pushed_count > 0) { + DCHECK(deoptimization == NULL && continuation == NULL); + Emit(kPopStack | MiscField::encode(buffer.pushed_count), NULL); + } +} + +} // namespace compiler +} // namespace internal +} // namespace v8 diff --git a/deps/v8/src/compiler/x64/linkage-x64.cc b/deps/v8/src/compiler/x64/linkage-x64.cc new file mode 100644 index 00000000000..84c01e65467 --- /dev/null +++ b/deps/v8/src/compiler/x64/linkage-x64.cc @@ -0,0 +1,83 @@ +// Copyright 2014 the V8 project authors. All rights reserved. +// Use of this source code is governed by a BSD-style license that can be +// found in the LICENSE file. + +#include "src/v8.h" + +#include "src/assembler.h" +#include "src/code-stubs.h" +#include "src/compiler/linkage.h" +#include "src/compiler/linkage-impl.h" +#include "src/zone.h" + +namespace v8 { +namespace internal { +namespace compiler { + +#ifdef _WIN64 +const bool kWin64 = true; +#else +const bool kWin64 = false; +#endif + +struct LinkageHelperTraits { + static Register ReturnValueReg() { return rax; } + static Register ReturnValue2Reg() { return rdx; } + static Register JSCallFunctionReg() { return rdi; } + static Register ContextReg() { return rsi; } + static Register RuntimeCallFunctionReg() { return rbx; } + static Register RuntimeCallArgCountReg() { return rax; } + static RegList CCalleeSaveRegisters() { + if (kWin64) { + return rbx.bit() | rdi.bit() | rsi.bit() | r12.bit() | r13.bit() | + r14.bit() | r15.bit(); + } else { + return rbx.bit() | r12.bit() | r13.bit() | r14.bit() | r15.bit(); + } + } + static Register CRegisterParameter(int i) { + if (kWin64) { + static Register register_parameters[] = {rcx, rdx, r8, r9}; + return register_parameters[i]; + } else { + static Register register_parameters[] = {rdi, rsi, rdx, rcx, r8, r9}; + return register_parameters[i]; + } + } + static int CRegisterParametersLength() { return kWin64 ? 4 : 6; } +}; + + +CallDescriptor* Linkage::GetJSCallDescriptor(int parameter_count, Zone* zone) { + return LinkageHelper::GetJSCallDescriptor<LinkageHelperTraits>( + zone, parameter_count); +} + + +CallDescriptor* Linkage::GetRuntimeCallDescriptor( + Runtime::FunctionId function, int parameter_count, + Operator::Property properties, + CallDescriptor::DeoptimizationSupport can_deoptimize, Zone* zone) { + return LinkageHelper::GetRuntimeCallDescriptor<LinkageHelperTraits>( + zone, function, parameter_count, properties, can_deoptimize); +} + + +CallDescriptor* Linkage::GetStubCallDescriptor( + CodeStubInterfaceDescriptor* descriptor, int stack_parameter_count, + CallDescriptor::DeoptimizationSupport can_deoptimize, Zone* zone) { + return LinkageHelper::GetStubCallDescriptor<LinkageHelperTraits>( + zone, descriptor, stack_parameter_count, can_deoptimize); +} + + +CallDescriptor* Linkage::GetSimplifiedCDescriptor( + Zone* zone, int num_params, MachineType return_type, + const MachineType* param_types) { + return LinkageHelper::GetSimplifiedCDescriptor<LinkageHelperTraits>( + zone, num_params, return_type, param_types); +} + +} +} +} // namespace v8::internal::compiler diff --git a/deps/v8/src/contexts.cc b/deps/v8/src/contexts.cc index 58ae49a9cc8..1b495265265 100644 --- a/deps/v8/src/contexts.cc +++ b/deps/v8/src/contexts.cc @@ -2,11 +2,11 @@ // Use of this source code is governed by a BSD-style license that can be // found in the LICENSE file. 
-#include "v8.h" +#include "src/v8.h" -#include "bootstrapper.h" -#include "debug.h" -#include "scopeinfo.h" +#include "src/bootstrapper.h" +#include "src/debug.h" +#include "src/scopeinfo.h" namespace v8 { namespace internal { @@ -15,7 +15,7 @@ Context* Context::declaration_context() { Context* current = this; while (!current->IsFunctionContext() && !current->IsNativeContext()) { current = current->previous(); - ASSERT(current->closure() == closure()); + DCHECK(current->closure() == closure()); } return current; } @@ -26,7 +26,7 @@ JSBuiltinsObject* Context::builtins() { if (object->IsJSGlobalObject()) { return JSGlobalObject::cast(object)->builtins(); } else { - ASSERT(object->IsJSBuiltinsObject()); + DCHECK(object->IsJSBuiltinsObject()); return JSBuiltinsObject::cast(object); } } @@ -51,7 +51,7 @@ Context* Context::native_context() { // During bootstrapping, the global object might not be set and we // have to search the context chain to find the native context. - ASSERT(this->GetIsolate()->bootstrapper()->IsActive()); + DCHECK(this->GetIsolate()->bootstrapper()->IsActive()); Context* current = this; while (!current->IsNativeContext()) { JSFunction* closure = JSFunction::cast(current->closure()); @@ -71,6 +71,38 @@ void Context::set_global_proxy(JSObject* object) { } +/** + * Lookups a property in an object environment, taking the unscopables into + * account. This is used For HasBinding spec algorithms for ObjectEnvironment. + */ +static Maybe<PropertyAttributes> UnscopableLookup(LookupIterator* it) { + Isolate* isolate = it->isolate(); + + Maybe<PropertyAttributes> attrs = JSReceiver::GetPropertyAttributes(it); + DCHECK(attrs.has_value || isolate->has_pending_exception()); + if (!attrs.has_value || attrs.value == ABSENT) return attrs; + + Handle<Symbol> unscopables_symbol( + isolate->native_context()->unscopables_symbol(), isolate); + Handle<Object> receiver = it->GetReceiver(); + Handle<Object> unscopables; + MaybeHandle<Object> maybe_unscopables = + Object::GetProperty(receiver, unscopables_symbol); + if (!maybe_unscopables.ToHandle(&unscopables)) { + return Maybe<PropertyAttributes>(); + } + if (!unscopables->IsSpecObject()) return attrs; + Maybe<bool> blacklist = JSReceiver::HasProperty( + Handle<JSReceiver>::cast(unscopables), it->name()); + if (!blacklist.has_value) { + DCHECK(isolate->has_pending_exception()); + return Maybe<PropertyAttributes>(); + } + if (blacklist.value) return maybe(ABSENT); + return attrs; +} + + Handle<Object> Context::Lookup(Handle<String> name, ContextLookupFlags flags, int* index, @@ -106,15 +138,22 @@ Handle<Object> Context::Lookup(Handle<String> name, // Context extension objects needs to behave as if they have no // prototype. So even if we want to follow prototype chains, we need // to only do a local lookup for context extension objects. 
+ Maybe<PropertyAttributes> maybe; if ((flags & FOLLOW_PROTOTYPE_CHAIN) == 0 || object->IsJSContextExtensionObject()) { - *attributes = JSReceiver::GetLocalPropertyAttribute(object, name); + maybe = JSReceiver::GetOwnPropertyAttributes(object, name); + } else if (context->IsWithContext()) { + LookupIterator it(object, name); + maybe = UnscopableLookup(&it); } else { - *attributes = JSReceiver::GetPropertyAttribute(object, name); + maybe = JSReceiver::GetPropertyAttributes(object, name); } - if (isolate->has_pending_exception()) return Handle<Object>(); - if (*attributes != ABSENT) { + if (!maybe.has_value) return Handle<Object>(); + DCHECK(!isolate->has_pending_exception()); + *attributes = maybe.value; + + if (maybe.value != ABSENT) { if (FLAG_trace_contexts) { PrintF("=> found property in context object %p\n", reinterpret_cast<void*>(*object)); @@ -137,9 +176,12 @@ Handle<Object> Context::Lookup(Handle<String> name, } VariableMode mode; InitializationFlag init_flag; - int slot_index = - ScopeInfo::ContextSlotIndex(scope_info, name, &mode, &init_flag); - ASSERT(slot_index < 0 || slot_index >= MIN_CONTEXT_SLOTS); + // TODO(sigurds) Figure out whether maybe_assigned_flag should + // be used to compute binding_flags. + MaybeAssignedFlag maybe_assigned_flag; + int slot_index = ScopeInfo::ContextSlotIndex( + scope_info, name, &mode, &init_flag, &maybe_assigned_flag); + DCHECK(slot_index < 0 || slot_index >= MIN_CONTEXT_SLOTS); if (slot_index >= 0) { if (FLAG_trace_contexts) { PrintF("=> found local in context slot %d (mode = %d)\n", @@ -200,7 +242,7 @@ Handle<Object> Context::Lookup(Handle<String> name, } *index = function_index; *attributes = READ_ONLY; - ASSERT(mode == CONST_LEGACY || mode == CONST); + DCHECK(mode == CONST_LEGACY || mode == CONST); *binding_flags = (mode == CONST_LEGACY) ? 
IMMUTABLE_IS_INITIALIZED : IMMUTABLE_IS_INITIALIZED_HARMONY; return context; @@ -236,8 +278,8 @@ Handle<Object> Context::Lookup(Handle<String> name, void Context::AddOptimizedFunction(JSFunction* function) { - ASSERT(IsNativeContext()); -#ifdef ENABLE_SLOW_ASSERTS + DCHECK(IsNativeContext()); +#ifdef ENABLE_SLOW_DCHECKS if (FLAG_enable_slow_asserts) { Object* element = get(OPTIMIZED_FUNCTIONS_LIST); while (!element->IsUndefined()) { @@ -266,7 +308,7 @@ void Context::AddOptimizedFunction(JSFunction* function) { flusher->EvictCandidate(function); } - ASSERT(function->next_function_link()->IsUndefined()); + DCHECK(function->next_function_link()->IsUndefined()); function->set_next_function_link(get(OPTIMIZED_FUNCTIONS_LIST)); set(OPTIMIZED_FUNCTIONS_LIST, function); @@ -274,12 +316,12 @@ void Context::AddOptimizedFunction(JSFunction* function) { void Context::RemoveOptimizedFunction(JSFunction* function) { - ASSERT(IsNativeContext()); + DCHECK(IsNativeContext()); Object* element = get(OPTIMIZED_FUNCTIONS_LIST); JSFunction* prev = NULL; while (!element->IsUndefined()) { JSFunction* element_function = JSFunction::cast(element); - ASSERT(element_function->next_function_link()->IsUndefined() || + DCHECK(element_function->next_function_link()->IsUndefined() || element_function->next_function_link()->IsJSFunction()); if (element_function == function) { if (prev == NULL) { @@ -298,46 +340,46 @@ void Context::RemoveOptimizedFunction(JSFunction* function) { void Context::SetOptimizedFunctionsListHead(Object* head) { - ASSERT(IsNativeContext()); + DCHECK(IsNativeContext()); set(OPTIMIZED_FUNCTIONS_LIST, head); } Object* Context::OptimizedFunctionsListHead() { - ASSERT(IsNativeContext()); + DCHECK(IsNativeContext()); return get(OPTIMIZED_FUNCTIONS_LIST); } void Context::AddOptimizedCode(Code* code) { - ASSERT(IsNativeContext()); - ASSERT(code->kind() == Code::OPTIMIZED_FUNCTION); - ASSERT(code->next_code_link()->IsUndefined()); + DCHECK(IsNativeContext()); + DCHECK(code->kind() == Code::OPTIMIZED_FUNCTION); + DCHECK(code->next_code_link()->IsUndefined()); code->set_next_code_link(get(OPTIMIZED_CODE_LIST)); set(OPTIMIZED_CODE_LIST, code); } void Context::SetOptimizedCodeListHead(Object* head) { - ASSERT(IsNativeContext()); + DCHECK(IsNativeContext()); set(OPTIMIZED_CODE_LIST, head); } Object* Context::OptimizedCodeListHead() { - ASSERT(IsNativeContext()); + DCHECK(IsNativeContext()); return get(OPTIMIZED_CODE_LIST); } void Context::SetDeoptimizedCodeListHead(Object* head) { - ASSERT(IsNativeContext()); + DCHECK(IsNativeContext()); set(DEOPTIMIZED_CODE_LIST, head); } Object* Context::DeoptimizedCodeListHead() { - ASSERT(IsNativeContext()); + DCHECK(IsNativeContext()); return get(DEOPTIMIZED_CODE_LIST); } diff --git a/deps/v8/src/contexts.h b/deps/v8/src/contexts.h index f1aa380c760..63c9955b979 100644 --- a/deps/v8/src/contexts.h +++ b/deps/v8/src/contexts.h @@ -5,8 +5,8 @@ #ifndef V8_CONTEXTS_H_ #define V8_CONTEXTS_H_ -#include "heap.h" -#include "objects.h" +#include "src/heap/heap.h" +#include "src/objects.h" namespace v8 { namespace internal { @@ -73,120 +73,135 @@ enum BindingFlags { // must always be allocated via Heap::AllocateContext() or // Factory::NewContext. 
-#define NATIVE_CONTEXT_FIELDS(V) \ - V(GLOBAL_PROXY_INDEX, JSObject, global_proxy_object) \ - V(SECURITY_TOKEN_INDEX, Object, security_token) \ - V(BOOLEAN_FUNCTION_INDEX, JSFunction, boolean_function) \ - V(NUMBER_FUNCTION_INDEX, JSFunction, number_function) \ - V(STRING_FUNCTION_INDEX, JSFunction, string_function) \ - V(STRING_FUNCTION_PROTOTYPE_MAP_INDEX, Map, string_function_prototype_map) \ - V(SYMBOL_FUNCTION_INDEX, JSFunction, symbol_function) \ - V(OBJECT_FUNCTION_INDEX, JSFunction, object_function) \ - V(INTERNAL_ARRAY_FUNCTION_INDEX, JSFunction, internal_array_function) \ - V(ARRAY_FUNCTION_INDEX, JSFunction, array_function) \ - V(JS_ARRAY_MAPS_INDEX, Object, js_array_maps) \ - V(DATE_FUNCTION_INDEX, JSFunction, date_function) \ - V(JSON_OBJECT_INDEX, JSObject, json_object) \ - V(REGEXP_FUNCTION_INDEX, JSFunction, regexp_function) \ - V(INITIAL_OBJECT_PROTOTYPE_INDEX, JSObject, initial_object_prototype) \ - V(INITIAL_ARRAY_PROTOTYPE_INDEX, JSObject, initial_array_prototype) \ - V(CREATE_DATE_FUN_INDEX, JSFunction, create_date_fun) \ - V(TO_NUMBER_FUN_INDEX, JSFunction, to_number_fun) \ - V(TO_STRING_FUN_INDEX, JSFunction, to_string_fun) \ - V(TO_DETAIL_STRING_FUN_INDEX, JSFunction, to_detail_string_fun) \ - V(TO_OBJECT_FUN_INDEX, JSFunction, to_object_fun) \ - V(TO_INTEGER_FUN_INDEX, JSFunction, to_integer_fun) \ - V(TO_UINT32_FUN_INDEX, JSFunction, to_uint32_fun) \ - V(TO_INT32_FUN_INDEX, JSFunction, to_int32_fun) \ - V(GLOBAL_EVAL_FUN_INDEX, JSFunction, global_eval_fun) \ - V(INSTANTIATE_FUN_INDEX, JSFunction, instantiate_fun) \ - V(CONFIGURE_INSTANCE_FUN_INDEX, JSFunction, configure_instance_fun) \ - V(ARRAY_BUFFER_FUN_INDEX, JSFunction, array_buffer_fun) \ - V(UINT8_ARRAY_FUN_INDEX, JSFunction, uint8_array_fun) \ - V(INT8_ARRAY_FUN_INDEX, JSFunction, int8_array_fun) \ - V(UINT16_ARRAY_FUN_INDEX, JSFunction, uint16_array_fun) \ - V(INT16_ARRAY_FUN_INDEX, JSFunction, int16_array_fun) \ - V(UINT32_ARRAY_FUN_INDEX, JSFunction, uint32_array_fun) \ - V(INT32_ARRAY_FUN_INDEX, JSFunction, int32_array_fun) \ - V(FLOAT32_ARRAY_FUN_INDEX, JSFunction, float32_array_fun) \ - V(FLOAT64_ARRAY_FUN_INDEX, JSFunction, float64_array_fun) \ - V(UINT8_CLAMPED_ARRAY_FUN_INDEX, JSFunction, uint8_clamped_array_fun) \ - V(INT8_ARRAY_EXTERNAL_MAP_INDEX, Map, int8_array_external_map) \ - V(UINT8_ARRAY_EXTERNAL_MAP_INDEX, Map, uint8_array_external_map) \ - V(INT16_ARRAY_EXTERNAL_MAP_INDEX, Map, int16_array_external_map) \ - V(UINT16_ARRAY_EXTERNAL_MAP_INDEX, Map, uint16_array_external_map) \ - V(INT32_ARRAY_EXTERNAL_MAP_INDEX, Map, int32_array_external_map) \ - V(UINT32_ARRAY_EXTERNAL_MAP_INDEX, Map, uint32_array_external_map) \ - V(FLOAT32_ARRAY_EXTERNAL_MAP_INDEX, Map, float32_array_external_map) \ - V(FLOAT64_ARRAY_EXTERNAL_MAP_INDEX, Map, float64_array_external_map) \ - V(UINT8_CLAMPED_ARRAY_EXTERNAL_MAP_INDEX, Map, \ - uint8_clamped_array_external_map) \ - V(DATA_VIEW_FUN_INDEX, JSFunction, data_view_fun) \ - V(SLOPPY_FUNCTION_MAP_INDEX, Map, sloppy_function_map) \ - V(STRICT_FUNCTION_MAP_INDEX, Map, strict_function_map) \ - V(SLOPPY_FUNCTION_WITHOUT_PROTOTYPE_MAP_INDEX, Map, \ - sloppy_function_without_prototype_map) \ - V(STRICT_FUNCTION_WITHOUT_PROTOTYPE_MAP_INDEX, Map, \ - strict_function_without_prototype_map) \ - V(REGEXP_RESULT_MAP_INDEX, Map, regexp_result_map)\ - V(SLOPPY_ARGUMENTS_BOILERPLATE_INDEX, JSObject, \ - sloppy_arguments_boilerplate) \ - V(ALIASED_ARGUMENTS_BOILERPLATE_INDEX, JSObject, \ - aliased_arguments_boilerplate) \ - V(STRICT_ARGUMENTS_BOILERPLATE_INDEX, JSObject, \ - 
strict_arguments_boilerplate) \ - V(MESSAGE_LISTENERS_INDEX, JSObject, message_listeners) \ - V(MAKE_MESSAGE_FUN_INDEX, JSFunction, make_message_fun) \ - V(GET_STACK_TRACE_LINE_INDEX, JSFunction, get_stack_trace_line_fun) \ - V(CONFIGURE_GLOBAL_INDEX, JSFunction, configure_global_fun) \ - V(FUNCTION_CACHE_INDEX, JSObject, function_cache) \ - V(JSFUNCTION_RESULT_CACHES_INDEX, FixedArray, jsfunction_result_caches) \ - V(NORMALIZED_MAP_CACHE_INDEX, NormalizedMapCache, normalized_map_cache) \ - V(RUNTIME_CONTEXT_INDEX, Context, runtime_context) \ - V(CALL_AS_FUNCTION_DELEGATE_INDEX, JSFunction, call_as_function_delegate) \ - V(CALL_AS_CONSTRUCTOR_DELEGATE_INDEX, JSFunction, \ - call_as_constructor_delegate) \ - V(SCRIPT_FUNCTION_INDEX, JSFunction, script_function) \ - V(OPAQUE_REFERENCE_FUNCTION_INDEX, JSFunction, opaque_reference_function) \ - V(CONTEXT_EXTENSION_FUNCTION_INDEX, JSFunction, context_extension_function) \ - V(MAP_CACHE_INDEX, Object, map_cache) \ - V(EMBEDDER_DATA_INDEX, FixedArray, embedder_data) \ - V(ALLOW_CODE_GEN_FROM_STRINGS_INDEX, Object, allow_code_gen_from_strings) \ - V(ERROR_MESSAGE_FOR_CODE_GEN_FROM_STRINGS_INDEX, Object, \ - error_message_for_code_gen_from_strings) \ - V(RUN_MICROTASKS_INDEX, JSFunction, run_microtasks) \ - V(ENQUEUE_MICROTASK_INDEX, JSFunction, enqueue_microtask) \ - V(IS_PROMISE_INDEX, JSFunction, is_promise) \ - V(PROMISE_CREATE_INDEX, JSFunction, promise_create) \ - V(PROMISE_RESOLVE_INDEX, JSFunction, promise_resolve) \ - V(PROMISE_REJECT_INDEX, JSFunction, promise_reject) \ - V(PROMISE_CHAIN_INDEX, JSFunction, promise_chain) \ - V(PROMISE_CATCH_INDEX, JSFunction, promise_catch) \ - V(TO_COMPLETE_PROPERTY_DESCRIPTOR_INDEX, JSFunction, \ - to_complete_property_descriptor) \ - V(DERIVED_HAS_TRAP_INDEX, JSFunction, derived_has_trap) \ - V(DERIVED_GET_TRAP_INDEX, JSFunction, derived_get_trap) \ - V(DERIVED_SET_TRAP_INDEX, JSFunction, derived_set_trap) \ - V(PROXY_ENUMERATE_INDEX, JSFunction, proxy_enumerate) \ - V(OBSERVERS_NOTIFY_CHANGE_INDEX, JSFunction, observers_notify_change) \ - V(OBSERVERS_ENQUEUE_SPLICE_INDEX, JSFunction, observers_enqueue_splice) \ - V(OBSERVERS_BEGIN_SPLICE_INDEX, JSFunction, \ - observers_begin_perform_splice) \ - V(OBSERVERS_END_SPLICE_INDEX, JSFunction, \ - observers_end_perform_splice) \ - V(NATIVE_OBJECT_OBSERVE_INDEX, JSFunction, \ - native_object_observe) \ - V(NATIVE_OBJECT_GET_NOTIFIER_INDEX, JSFunction, \ - native_object_get_notifier) \ - V(NATIVE_OBJECT_NOTIFIER_PERFORM_CHANGE, JSFunction, \ - native_object_notifier_perform_change) \ - V(SLOPPY_GENERATOR_FUNCTION_MAP_INDEX, Map, sloppy_generator_function_map) \ - V(STRICT_GENERATOR_FUNCTION_MAP_INDEX, Map, strict_generator_function_map) \ - V(GENERATOR_OBJECT_PROTOTYPE_MAP_INDEX, Map, \ - generator_object_prototype_map) \ - V(ITERATOR_RESULT_MAP_INDEX, Map, iterator_result_map) \ - V(MAP_ITERATOR_MAP_INDEX, Map, map_iterator_map) \ - V(SET_ITERATOR_MAP_INDEX, Map, set_iterator_map) +#define NATIVE_CONTEXT_FIELDS(V) \ + V(GLOBAL_PROXY_INDEX, JSObject, global_proxy_object) \ + V(SECURITY_TOKEN_INDEX, Object, security_token) \ + V(BOOLEAN_FUNCTION_INDEX, JSFunction, boolean_function) \ + V(NUMBER_FUNCTION_INDEX, JSFunction, number_function) \ + V(STRING_FUNCTION_INDEX, JSFunction, string_function) \ + V(STRING_FUNCTION_PROTOTYPE_MAP_INDEX, Map, string_function_prototype_map) \ + V(SYMBOL_FUNCTION_INDEX, JSFunction, symbol_function) \ + V(OBJECT_FUNCTION_INDEX, JSFunction, object_function) \ + V(INTERNAL_ARRAY_FUNCTION_INDEX, JSFunction, internal_array_function) \ 
+ V(ARRAY_FUNCTION_INDEX, JSFunction, array_function) \ + V(JS_ARRAY_MAPS_INDEX, Object, js_array_maps) \ + V(DATE_FUNCTION_INDEX, JSFunction, date_function) \ + V(JSON_OBJECT_INDEX, JSObject, json_object) \ + V(REGEXP_FUNCTION_INDEX, JSFunction, regexp_function) \ + V(INITIAL_OBJECT_PROTOTYPE_INDEX, JSObject, initial_object_prototype) \ + V(INITIAL_ARRAY_PROTOTYPE_INDEX, JSObject, initial_array_prototype) \ + V(CREATE_DATE_FUN_INDEX, JSFunction, create_date_fun) \ + V(TO_NUMBER_FUN_INDEX, JSFunction, to_number_fun) \ + V(TO_STRING_FUN_INDEX, JSFunction, to_string_fun) \ + V(TO_DETAIL_STRING_FUN_INDEX, JSFunction, to_detail_string_fun) \ + V(TO_OBJECT_FUN_INDEX, JSFunction, to_object_fun) \ + V(TO_INTEGER_FUN_INDEX, JSFunction, to_integer_fun) \ + V(TO_UINT32_FUN_INDEX, JSFunction, to_uint32_fun) \ + V(TO_INT32_FUN_INDEX, JSFunction, to_int32_fun) \ + V(GLOBAL_EVAL_FUN_INDEX, JSFunction, global_eval_fun) \ + V(INSTANTIATE_FUN_INDEX, JSFunction, instantiate_fun) \ + V(CONFIGURE_INSTANCE_FUN_INDEX, JSFunction, configure_instance_fun) \ + V(MATH_ABS_FUN_INDEX, JSFunction, math_abs_fun) \ + V(MATH_ACOS_FUN_INDEX, JSFunction, math_acos_fun) \ + V(MATH_ASIN_FUN_INDEX, JSFunction, math_asin_fun) \ + V(MATH_ATAN_FUN_INDEX, JSFunction, math_atan_fun) \ + V(MATH_ATAN2_FUN_INDEX, JSFunction, math_atan2_fun) \ + V(MATH_CEIL_FUN_INDEX, JSFunction, math_ceil_fun) \ + V(MATH_COS_FUN_INDEX, JSFunction, math_cos_fun) \ + V(MATH_EXP_FUN_INDEX, JSFunction, math_exp_fun) \ + V(MATH_FLOOR_FUN_INDEX, JSFunction, math_floor_fun) \ + V(MATH_IMUL_FUN_INDEX, JSFunction, math_imul_fun) \ + V(MATH_LOG_FUN_INDEX, JSFunction, math_log_fun) \ + V(MATH_MAX_FUN_INDEX, JSFunction, math_max_fun) \ + V(MATH_MIN_FUN_INDEX, JSFunction, math_min_fun) \ + V(MATH_POW_FUN_INDEX, JSFunction, math_pow_fun) \ + V(MATH_RANDOM_FUN_INDEX, JSFunction, math_random_fun) \ + V(MATH_ROUND_FUN_INDEX, JSFunction, math_round_fun) \ + V(MATH_SIN_FUN_INDEX, JSFunction, math_sin_fun) \ + V(MATH_SQRT_FUN_INDEX, JSFunction, math_sqrt_fun) \ + V(MATH_TAN_FUN_INDEX, JSFunction, math_tan_fun) \ + V(ARRAY_BUFFER_FUN_INDEX, JSFunction, array_buffer_fun) \ + V(UINT8_ARRAY_FUN_INDEX, JSFunction, uint8_array_fun) \ + V(INT8_ARRAY_FUN_INDEX, JSFunction, int8_array_fun) \ + V(UINT16_ARRAY_FUN_INDEX, JSFunction, uint16_array_fun) \ + V(INT16_ARRAY_FUN_INDEX, JSFunction, int16_array_fun) \ + V(UINT32_ARRAY_FUN_INDEX, JSFunction, uint32_array_fun) \ + V(INT32_ARRAY_FUN_INDEX, JSFunction, int32_array_fun) \ + V(FLOAT32_ARRAY_FUN_INDEX, JSFunction, float32_array_fun) \ + V(FLOAT64_ARRAY_FUN_INDEX, JSFunction, float64_array_fun) \ + V(UINT8_CLAMPED_ARRAY_FUN_INDEX, JSFunction, uint8_clamped_array_fun) \ + V(INT8_ARRAY_EXTERNAL_MAP_INDEX, Map, int8_array_external_map) \ + V(UINT8_ARRAY_EXTERNAL_MAP_INDEX, Map, uint8_array_external_map) \ + V(INT16_ARRAY_EXTERNAL_MAP_INDEX, Map, int16_array_external_map) \ + V(UINT16_ARRAY_EXTERNAL_MAP_INDEX, Map, uint16_array_external_map) \ + V(INT32_ARRAY_EXTERNAL_MAP_INDEX, Map, int32_array_external_map) \ + V(UINT32_ARRAY_EXTERNAL_MAP_INDEX, Map, uint32_array_external_map) \ + V(FLOAT32_ARRAY_EXTERNAL_MAP_INDEX, Map, float32_array_external_map) \ + V(FLOAT64_ARRAY_EXTERNAL_MAP_INDEX, Map, float64_array_external_map) \ + V(UINT8_CLAMPED_ARRAY_EXTERNAL_MAP_INDEX, Map, \ + uint8_clamped_array_external_map) \ + V(DATA_VIEW_FUN_INDEX, JSFunction, data_view_fun) \ + V(SLOPPY_FUNCTION_MAP_INDEX, Map, sloppy_function_map) \ + V(SLOPPY_FUNCTION_WITH_READONLY_PROTOTYPE_MAP_INDEX, Map, \ + sloppy_function_with_readonly_prototype_map) \ + 
V(STRICT_FUNCTION_MAP_INDEX, Map, strict_function_map) \ + V(SLOPPY_FUNCTION_WITHOUT_PROTOTYPE_MAP_INDEX, Map, \ + sloppy_function_without_prototype_map) \ + V(STRICT_FUNCTION_WITHOUT_PROTOTYPE_MAP_INDEX, Map, \ + strict_function_without_prototype_map) \ + V(BOUND_FUNCTION_MAP_INDEX, Map, bound_function_map) \ + V(REGEXP_RESULT_MAP_INDEX, Map, regexp_result_map) \ + V(SLOPPY_ARGUMENTS_MAP_INDEX, Map, sloppy_arguments_map) \ + V(ALIASED_ARGUMENTS_MAP_INDEX, Map, aliased_arguments_map) \ + V(STRICT_ARGUMENTS_MAP_INDEX, Map, strict_arguments_map) \ + V(MESSAGE_LISTENERS_INDEX, JSObject, message_listeners) \ + V(MAKE_MESSAGE_FUN_INDEX, JSFunction, make_message_fun) \ + V(GET_STACK_TRACE_LINE_INDEX, JSFunction, get_stack_trace_line_fun) \ + V(CONFIGURE_GLOBAL_INDEX, JSFunction, configure_global_fun) \ + V(FUNCTION_CACHE_INDEX, JSObject, function_cache) \ + V(JSFUNCTION_RESULT_CACHES_INDEX, FixedArray, jsfunction_result_caches) \ + V(NORMALIZED_MAP_CACHE_INDEX, Object, normalized_map_cache) \ + V(RUNTIME_CONTEXT_INDEX, Context, runtime_context) \ + V(CALL_AS_FUNCTION_DELEGATE_INDEX, JSFunction, call_as_function_delegate) \ + V(CALL_AS_CONSTRUCTOR_DELEGATE_INDEX, JSFunction, \ + call_as_constructor_delegate) \ + V(SCRIPT_FUNCTION_INDEX, JSFunction, script_function) \ + V(OPAQUE_REFERENCE_FUNCTION_INDEX, JSFunction, opaque_reference_function) \ + V(CONTEXT_EXTENSION_FUNCTION_INDEX, JSFunction, context_extension_function) \ + V(MAP_CACHE_INDEX, Object, map_cache) \ + V(EMBEDDER_DATA_INDEX, FixedArray, embedder_data) \ + V(ALLOW_CODE_GEN_FROM_STRINGS_INDEX, Object, allow_code_gen_from_strings) \ + V(ERROR_MESSAGE_FOR_CODE_GEN_FROM_STRINGS_INDEX, Object, \ + error_message_for_code_gen_from_strings) \ + V(IS_PROMISE_INDEX, JSFunction, is_promise) \ + V(PROMISE_CREATE_INDEX, JSFunction, promise_create) \ + V(PROMISE_RESOLVE_INDEX, JSFunction, promise_resolve) \ + V(PROMISE_REJECT_INDEX, JSFunction, promise_reject) \ + V(PROMISE_CHAIN_INDEX, JSFunction, promise_chain) \ + V(PROMISE_CATCH_INDEX, JSFunction, promise_catch) \ + V(PROMISE_THEN_INDEX, JSFunction, promise_then) \ + V(TO_COMPLETE_PROPERTY_DESCRIPTOR_INDEX, JSFunction, \ + to_complete_property_descriptor) \ + V(DERIVED_HAS_TRAP_INDEX, JSFunction, derived_has_trap) \ + V(DERIVED_GET_TRAP_INDEX, JSFunction, derived_get_trap) \ + V(DERIVED_SET_TRAP_INDEX, JSFunction, derived_set_trap) \ + V(PROXY_ENUMERATE_INDEX, JSFunction, proxy_enumerate) \ + V(OBSERVERS_NOTIFY_CHANGE_INDEX, JSFunction, observers_notify_change) \ + V(OBSERVERS_ENQUEUE_SPLICE_INDEX, JSFunction, observers_enqueue_splice) \ + V(OBSERVERS_BEGIN_SPLICE_INDEX, JSFunction, observers_begin_perform_splice) \ + V(OBSERVERS_END_SPLICE_INDEX, JSFunction, observers_end_perform_splice) \ + V(NATIVE_OBJECT_OBSERVE_INDEX, JSFunction, native_object_observe) \ + V(NATIVE_OBJECT_GET_NOTIFIER_INDEX, JSFunction, native_object_get_notifier) \ + V(NATIVE_OBJECT_NOTIFIER_PERFORM_CHANGE, JSFunction, \ + native_object_notifier_perform_change) \ + V(SLOPPY_GENERATOR_FUNCTION_MAP_INDEX, Map, sloppy_generator_function_map) \ + V(STRICT_GENERATOR_FUNCTION_MAP_INDEX, Map, strict_generator_function_map) \ + V(GENERATOR_OBJECT_PROTOTYPE_MAP_INDEX, Map, generator_object_prototype_map) \ + V(ITERATOR_RESULT_MAP_INDEX, Map, iterator_result_map) \ + V(MAP_ITERATOR_MAP_INDEX, Map, map_iterator_map) \ + V(SET_ITERATOR_MAP_INDEX, Map, set_iterator_map) \ + V(ITERATOR_SYMBOL_INDEX, Symbol, iterator_symbol) \ + V(UNSCOPABLES_SYMBOL_INDEX, Symbol, unscopables_symbol) // JSFunctions are pairs (context, function code), 
sometimes also called // closures. A Context object is used to represent function contexts and @@ -237,7 +252,7 @@ class Context: public FixedArray { public: // Conversions. static Context* cast(Object* context) { - ASSERT(context->IsContext()); + DCHECK(context->IsContext()); return reinterpret_cast<Context*>(context); } @@ -260,14 +275,16 @@ class Context: public FixedArray { // These slots are only in native contexts. GLOBAL_PROXY_INDEX = MIN_CONTEXT_SLOTS, SECURITY_TOKEN_INDEX, - SLOPPY_ARGUMENTS_BOILERPLATE_INDEX, - ALIASED_ARGUMENTS_BOILERPLATE_INDEX, - STRICT_ARGUMENTS_BOILERPLATE_INDEX, + SLOPPY_ARGUMENTS_MAP_INDEX, + ALIASED_ARGUMENTS_MAP_INDEX, + STRICT_ARGUMENTS_MAP_INDEX, REGEXP_RESULT_MAP_INDEX, SLOPPY_FUNCTION_MAP_INDEX, + SLOPPY_FUNCTION_WITH_READONLY_PROTOTYPE_MAP_INDEX, STRICT_FUNCTION_MAP_INDEX, SLOPPY_FUNCTION_WITHOUT_PROTOTYPE_MAP_INDEX, STRICT_FUNCTION_WITHOUT_PROTOTYPE_MAP_INDEX, + BOUND_FUNCTION_MAP_INDEX, INITIAL_OBJECT_PROTOTYPE_INDEX, INITIAL_ARRAY_PROTOTYPE_INDEX, BOOLEAN_FUNCTION_INDEX, @@ -294,6 +311,25 @@ class Context: public FixedArray { GLOBAL_EVAL_FUN_INDEX, INSTANTIATE_FUN_INDEX, CONFIGURE_INSTANCE_FUN_INDEX, + MATH_ABS_FUN_INDEX, + MATH_ACOS_FUN_INDEX, + MATH_ASIN_FUN_INDEX, + MATH_ATAN_FUN_INDEX, + MATH_ATAN2_FUN_INDEX, + MATH_CEIL_FUN_INDEX, + MATH_COS_FUN_INDEX, + MATH_EXP_FUN_INDEX, + MATH_FLOOR_FUN_INDEX, + MATH_IMUL_FUN_INDEX, + MATH_LOG_FUN_INDEX, + MATH_MAX_FUN_INDEX, + MATH_MIN_FUN_INDEX, + MATH_POW_FUN_INDEX, + MATH_RANDOM_FUN_INDEX, + MATH_ROUND_FUN_INDEX, + MATH_SIN_FUN_INDEX, + MATH_SQRT_FUN_INDEX, + MATH_TAN_FUN_INDEX, ARRAY_BUFFER_FUN_INDEX, UINT8_ARRAY_FUN_INDEX, INT8_ARRAY_FUN_INDEX, @@ -339,6 +375,7 @@ class Context: public FixedArray { PROMISE_REJECT_INDEX, PROMISE_CHAIN_INDEX, PROMISE_CATCH_INDEX, + PROMISE_THEN_INDEX, TO_COMPLETE_PROPERTY_DESCRIPTOR_INDEX, DERIVED_HAS_TRAP_INDEX, DERIVED_GET_TRAP_INDEX, @@ -357,6 +394,8 @@ class Context: public FixedArray { ITERATOR_RESULT_MAP_INDEX, MAP_ITERATOR_MAP_INDEX, SET_ITERATOR_MAP_INDEX, + ITERATOR_SYMBOL_INDEX, + UNSCOPABLES_SYMBOL_INDEX, // Properties from here are treated as weak references by the full GC. // Scavenge treats them as strong references. @@ -368,7 +407,6 @@ class Context: public FixedArray { // Total number of slots. NATIVE_CONTEXT_SLOTS, - FIRST_WEAK_SLOT = OPTIMIZED_FUNCTIONS_LIST }; @@ -378,7 +416,7 @@ class Context: public FixedArray { Context* previous() { Object* result = unchecked_previous(); - ASSERT(IsBootstrappingOrValidParentContext(result, this)); + DCHECK(IsBootstrappingOrValidParentContext(result, this)); return reinterpret_cast<Context*>(result); } void set_previous(Context* context) { set(PREVIOUS_INDEX, context); } @@ -396,7 +434,7 @@ class Context: public FixedArray { GlobalObject* global_object() { Object* result = get(GLOBAL_OBJECT_INDEX); - ASSERT(IsBootstrappingOrGlobalObject(this->GetIsolate(), result)); + DCHECK(IsBootstrappingOrGlobalObject(this->GetIsolate(), result)); return reinterpret_cast<GlobalObject*>(result); } void set_global_object(GlobalObject* object) { @@ -448,6 +486,11 @@ class Context: public FixedArray { return map == map->GetHeap()->global_context_map(); } + bool HasSameSecurityTokenAs(Context* that) { + return this->global_object()->native_context()->security_token() == + that->global_object()->native_context()->security_token(); + } + // A native context holds a list of all functions with optimized code. 
diff --git a/deps/v8/src/conversions-inl.h b/deps/v8/src/conversions-inl.h
index 43363f3738a..ce3054ba312 100644
--- a/deps/v8/src/conversions-inl.h
+++ b/deps/v8/src/conversions-inl.h
@@ -5,20 +5,20 @@
 #ifndef V8_CONVERSIONS_INL_H_
 #define V8_CONVERSIONS_INL_H_

-#include <limits.h>  // Required for INT_MAX etc.
 #include <float.h>   // Required for DBL_MAX and on Win32 for finite()
+#include <limits.h>  // Required for INT_MAX etc.
 #include <stdarg.h>
 #include <cmath>

-#include "globals.h"      // Required for V8_INFINITY
+#include "src/globals.h"  // Required for V8_INFINITY

 // ----------------------------------------------------------------------------
 // Extra POSIX/ANSI functions for Win32/MSVC.

-#include "conversions.h"
-#include "double.h"
-#include "platform.h"
-#include "scanner.h"
-#include "strtod.h"
+#include "src/base/platform/platform.h"
+#include "src/conversions.h"
+#include "src/double.h"
+#include "src/scanner.h"
+#include "src/strtod.h"

 namespace v8 {
 namespace internal {
@@ -58,7 +58,7 @@ inline unsigned int FastD2UI(double x) {
     Address mantissa_ptr = reinterpret_cast<Address>(&x) + kIntSize;
 #endif
     // Copy least significant 32 bits of mantissa.
-    OS::MemCopy(&result, mantissa_ptr, sizeof(result));
+    memcpy(&result, mantissa_ptr, sizeof(result));
     return negative ? ~result + 1 : result;
   }
   // Large number (outside uint32 range), Infinity or NaN.
@@ -92,7 +92,7 @@ template <class Iterator, class EndMark>
 bool SubStringEquals(Iterator* current,
                      EndMark end,
                      const char* substring) {
-  ASSERT(**current == *substring);
+  DCHECK(**current == *substring);
   for (substring++; *substring != '\0'; substring++) {
     ++*current;
     if (*current == end || **current != *substring) return false;
@@ -123,7 +123,7 @@ double InternalStringToIntDouble(UnicodeCache* unicode_cache,
                                  EndMark end,
                                  bool negative,
                                  bool allow_trailing_junk) {
-  ASSERT(current != end);
+  DCHECK(current != end);

   // Skip leading 0s.
   while (*current == '0') {
@@ -202,8 +202,8 @@ double InternalStringToIntDouble(UnicodeCache* unicode_cache,
     ++current;
   } while (current != end);

-  ASSERT(number < ((int64_t)1 << 53));
-  ASSERT(static_cast<int64_t>(static_cast<double>(number)) == number);
+  DCHECK(number < ((int64_t)1 << 53));
+  DCHECK(static_cast<int64_t>(static_cast<double>(number)) == number);

   if (exponent == 0) {
     if (negative) {
@@ -213,7 +213,7 @@ double InternalStringToIntDouble(UnicodeCache* unicode_cache,
     return static_cast<double>(number);
   }

-  ASSERT(number != 0);
+  DCHECK(number != 0);
   return std::ldexp(static_cast<double>(negative ? -number : number), exponent);
 }

@@ -324,7 +324,7 @@ double InternalStringToInt(UnicodeCache* unicode_cache,
       if (buffer_pos <= kMaxSignificantDigits) {
         // If the number has more than kMaxSignificantDigits it will be parsed
         // as infinity.
-        ASSERT(buffer_pos < kBufferSize);
+        DCHECK(buffer_pos < kBufferSize);
         buffer[buffer_pos++] = static_cast<char>(*current);
       }
       ++current;
@@ -336,7 +336,7 @@ double InternalStringToInt(UnicodeCache* unicode_cache,
       return JunkStringValue();
     }

-    SLOW_ASSERT(buffer_pos < kBufferSize);
+    SLOW_DCHECK(buffer_pos < kBufferSize);
     buffer[buffer_pos] = '\0';
     Vector<const char> buffer_vector(buffer, buffer_pos);
     return negative ? -Strtod(buffer_vector, 0) : Strtod(buffer_vector, 0);
@@ -384,7 +384,7 @@ double InternalStringToInt(UnicodeCache* unicode_cache,
       if (m > kMaximumMultiplier) break;
       part = part * radix + d;
       multiplier = m;
-      ASSERT(multiplier > part);
+      DCHECK(multiplier > part);

       ++current;
       if (current == end) {
@@ -473,7 +473,7 @@ double InternalStringToDouble(UnicodeCache* unicode_cache,
       return JunkStringValue();
     }

-    ASSERT(buffer_pos == 0);
+    DCHECK(buffer_pos == 0);
     return (sign == NEGATIVE) ? -V8_INFINITY : V8_INFINITY;
   }

@@ -536,7 +536,7 @@ double InternalStringToDouble(UnicodeCache* unicode_cache,
   // Copy significant digits of the integer part (if any) to the buffer.
   while (*current >= '0' && *current <= '9') {
     if (significant_digits < kMaxSignificantDigits) {
-      ASSERT(buffer_pos < kBufferSize);
+      DCHECK(buffer_pos < kBufferSize);
       buffer[buffer_pos++] = static_cast<char>(*current);
       significant_digits++;
       // Will later check if it's an octal in the buffer.
@@ -581,7 +581,7 @@ double InternalStringToDouble(UnicodeCache* unicode_cache,
     // instead.
     while (*current >= '0' && *current <= '9') {
       if (significant_digits < kMaxSignificantDigits) {
-        ASSERT(buffer_pos < kBufferSize);
+        DCHECK(buffer_pos < kBufferSize);
         buffer[buffer_pos++] = static_cast<char>(*current);
         significant_digits++;
         exponent--;
@@ -635,7 +635,7 @@ double InternalStringToDouble(UnicodeCache* unicode_cache,
     }

     const int max_exponent = INT_MAX / 2;
-    ASSERT(-max_exponent / 2 <= exponent && exponent <= max_exponent / 2);
+    DCHECK(-max_exponent / 2 <= exponent && exponent <= max_exponent / 2);
     int num = 0;
     do {
       // Check overflow.
@@ -673,7 +673,7 @@ double InternalStringToDouble(UnicodeCache* unicode_cache,
     exponent--;
   }

-  SLOW_ASSERT(buffer_pos < kBufferSize);
+  SLOW_DCHECK(buffer_pos < kBufferSize);
   buffer[buffer_pos] = '\0';

   double converted = Strtod(Vector<const char>(buffer, buffer_pos), exponent);
diff --git a/deps/v8/src/conversions.cc b/deps/v8/src/conversions.cc
index 14dbc2b7f8c..4b41d5e6a8d 100644
--- a/deps/v8/src/conversions.cc
+++ b/deps/v8/src/conversions.cc
@@ -2,20 +2,20 @@
 // Use of this source code is governed by a BSD-style license that can be
 // found in the LICENSE file.

-#include <stdarg.h>
 #include <limits.h>
+#include <stdarg.h>
 #include <cmath>

-#include "v8.h"
+#include "src/v8.h"

-#include "assert-scope.h"
-#include "conversions.h"
-#include "conversions-inl.h"
-#include "dtoa.h"
-#include "factory.h"
-#include "list-inl.h"
-#include "strtod.h"
-#include "utils.h"
+#include "src/assert-scope.h"
+#include "src/conversions-inl.h"
+#include "src/conversions.h"
+#include "src/dtoa.h"
+#include "src/factory.h"
+#include "src/list-inl.h"
+#include "src/strtod.h"
+#include "src/utils.h"

 #ifndef _STLP_VENDOR_CSTD
 // STLPort doesn't import fpclassify into the std namespace.
@@ -197,8 +197,8 @@ char* DoubleToFixedCString(double value, int f) {
   const int kMaxDigitsBeforePoint = 21;
   const double kFirstNonFixed = 1e21;
   const int kMaxDigitsAfterPoint = 20;
-  ASSERT(f >= 0);
-  ASSERT(f <= kMaxDigitsAfterPoint);
+  DCHECK(f >= 0);
+  DCHECK(f <= kMaxDigitsAfterPoint);

   bool negative = false;
   double abs_value = value;
@@ -299,7 +299,7 @@ static char* CreateExponentialRepresentation(char* decimal_rep,
 char* DoubleToExponentialCString(double value, int f) {
   const int kMaxDigitsAfterPoint = 20;
   // f might be -1 to signal that f was undefined in JavaScript.
-  ASSERT(f >= -1 && f <= kMaxDigitsAfterPoint);
+  DCHECK(f >= -1 && f <= kMaxDigitsAfterPoint);

   bool negative = false;
   if (value < 0) {
@@ -316,7 +316,7 @@ char* DoubleToExponentialCString(double value, int f) {
   const int kV8DtoaBufferCapacity = kMaxDigitsAfterPoint + 1 + 1;
   // Make sure that the buffer is big enough, even if we fall back to the
   // shortest representation (which happens when f equals -1).
-  ASSERT(kBase10MaximalLength <= kMaxDigitsAfterPoint + 1);
+  DCHECK(kBase10MaximalLength <= kMaxDigitsAfterPoint + 1);
   char decimal_rep[kV8DtoaBufferCapacity];
   int decimal_rep_length;

@@ -330,8 +330,8 @@ char* DoubleToExponentialCString(double value, int f) {
                   Vector<char>(decimal_rep, kV8DtoaBufferCapacity),
                   &sign, &decimal_rep_length, &decimal_point);
   }
-  ASSERT(decimal_rep_length > 0);
-  ASSERT(decimal_rep_length <= f + 1);
+  DCHECK(decimal_rep_length > 0);
+  DCHECK(decimal_rep_length <= f + 1);

   int exponent = decimal_point - 1;
   char* result =
@@ -344,7 +344,7 @@ char* DoubleToExponentialCString(double value, int f) {
 char* DoubleToPrecisionCString(double value, int p) {
   const int kMinimalDigits = 1;
   const int kMaximalDigits = 21;
-  ASSERT(p >= kMinimalDigits && p <= kMaximalDigits);
+  DCHECK(p >= kMinimalDigits && p <= kMaximalDigits);
   USE(kMinimalDigits);

   bool negative = false;
@@ -364,7 +364,7 @@ char* DoubleToPrecisionCString(double value, int p) {
   DoubleToAscii(value, DTOA_PRECISION, p,
                 Vector<char>(decimal_rep, kV8DtoaBufferCapacity),
                 &sign, &decimal_rep_length, &decimal_point);
-  ASSERT(decimal_rep_length <= p);
+  DCHECK(decimal_rep_length <= p);

   int exponent = decimal_point - 1;

@@ -412,7 +412,7 @@ char* DoubleToPrecisionCString(double value, int p) {

 char* DoubleToRadixCString(double value, int radix) {
-  ASSERT(radix >= 2 && radix <= 36);
+  DCHECK(radix >= 2 && radix <= 36);

   // Character array used for conversion.
   static const char chars[] = "0123456789abcdefghijklmnopqrstuvwxyz";
@@ -446,7 +446,7 @@ char* DoubleToRadixCString(double value, int radix) {
     integer_part /= radix;
   } while (integer_part >= 1.0);
   // Sanity check.
-  ASSERT(integer_pos > 0);
+  DCHECK(integer_pos > 0);
   // Add sign if needed.
   if (is_negative) integer_buffer[integer_pos--] = '-';
diff --git a/deps/v8/src/conversions.h b/deps/v8/src/conversions.h
index d6c99aa9f57..c33de77cd1f 100644
--- a/deps/v8/src/conversions.h
+++ b/deps/v8/src/conversions.h
@@ -7,10 +7,10 @@

 #include <limits>

-#include "checks.h"
-#include "handles.h"
-#include "objects.h"
-#include "utils.h"
+#include "src/base/logging.h"
+#include "src/handles.h"
+#include "src/objects.h"
+#include "src/utils.h"

 namespace v8 {
 namespace internal {
@@ -41,7 +41,8 @@ inline bool isBinaryDigit(int x) {

 // The fast double-to-(unsigned-)int conversion routine does not guarantee
 // rounding towards zero.
-// For NaN and values outside the int range, return INT_MIN or INT_MAX.
+// If x is NaN, the result is INT_MIN. Otherwise the result is the argument x,
+// clamped to [INT_MIN, INT_MAX] and then rounded to an integer.
 inline int FastD2IChecked(double x) {
   if (!(x >= INT_MIN)) return INT_MIN;  // Negation to catch NaNs.
   if (x > INT_MAX) return INT_MAX;
@@ -163,6 +164,17 @@ static inline bool IsInt32Double(double value) {
 }


+// UInteger32 is an integer that can be represented as an unsigned 32-bit
+// integer. It has to be in the range [0, 2^32 - 1].
+// We also have to check for negative 0 as it is not a UInteger32.
+static inline bool IsUint32Double(double value) {
+  return !IsMinusZero(value) &&
+         value >= 0 &&
+         value <= kMaxUInt32 &&
+         value == FastUI2D(FastD2UI(value));
+}
+
+
 // Convert from Number object to C integer.
 inline int32_t NumberToInt32(Object* number) {
   if (number->IsSmi()) return Smi::cast(number)->value();
@@ -187,7 +199,7 @@ inline bool TryNumberToSize(Isolate* isolate,
   SealHandleScope shs(isolate);
   if (number->IsSmi()) {
     int value = Smi::cast(number)->value();
-    ASSERT(static_cast<unsigned>(Smi::kMaxValue)
+    DCHECK(static_cast<unsigned>(Smi::kMaxValue)
            <= std::numeric_limits<size_t>::max());
     if (value >= 0) {
       *result = static_cast<size_t>(value);
@@ -195,7 +207,7 @@ inline bool TryNumberToSize(Isolate* isolate,
     }
     return false;
   } else {
-    ASSERT(number->IsHeapNumber());
+    DCHECK(number->IsHeapNumber());
     double value = HeapNumber::cast(number)->value();
     if (value >= 0 &&
         value <= std::numeric_limits<size_t>::max()) {
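The IsUint32Double predicate added above accepts exactly the doubles that round-trip through an unsigned 32-bit integer, and explicitly rejects negative zero. A standalone re-statement of its semantics, using only the standard library instead of V8's internal FastD2UI/FastUI2D helpers (a sketch for illustration, with a hypothetical name):

    #include <cassert>
    #include <cmath>
    #include <cstdint>

    // Mirrors the intent of IsUint32Double: true iff 'value' is exactly
    // representable as a uint32_t, excluding -0.0.
    static bool IsUint32DoubleSketch(double value) {
      if (std::isnan(value) || std::signbit(value)) return false;  // NaN, -0.0, negatives
      if (value > 4294967295.0) return false;                      // > 2^32 - 1
      uint32_t u = static_cast<uint32_t>(value);
      return static_cast<double>(u) == value;                      // round-trip check
    }

    int main() {
      assert(IsUint32DoubleSketch(0.0));
      assert(IsUint32DoubleSketch(4294967295.0));   // 2^32 - 1
      assert(!IsUint32DoubleSketch(-0.0));          // negative zero excluded
      assert(!IsUint32DoubleSketch(4294967296.0));  // 2^32 is out of range
      assert(!IsUint32DoubleSketch(1.5));           // non-integral
      return 0;
    }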
-#include "v8.h" +#include "src/v8.h" -#include "counters.h" -#include "isolate.h" -#include "platform.h" +#include "src/base/platform/platform.h" +#include "src/counters.h" +#include "src/isolate.h" namespace v8 { namespace internal { @@ -55,6 +55,11 @@ void HistogramTimer::Stop() { Counters::Counters(Isolate* isolate) { +#define HR(name, caption, min, max, num_buckets) \ + name##_ = Histogram(#caption, min, max, num_buckets, isolate); + HISTOGRAM_RANGE_LIST(HR) +#undef HR + #define HT(name, caption) \ name##_ = HistogramTimer(#caption, 0, 10000, 50, isolate); HISTOGRAM_TIMER_LIST(HT) @@ -109,7 +114,43 @@ Counters::Counters(Isolate* isolate) { } +void Counters::ResetCounters() { +#define SC(name, caption) name##_.Reset(); + STATS_COUNTER_LIST_1(SC) + STATS_COUNTER_LIST_2(SC) +#undef SC + +#define SC(name) \ + count_of_##name##_.Reset(); \ + size_of_##name##_.Reset(); + INSTANCE_TYPE_LIST(SC) +#undef SC + +#define SC(name) \ + count_of_CODE_TYPE_##name##_.Reset(); \ + size_of_CODE_TYPE_##name##_.Reset(); + CODE_KIND_LIST(SC) +#undef SC + +#define SC(name) \ + count_of_FIXED_ARRAY_##name##_.Reset(); \ + size_of_FIXED_ARRAY_##name##_.Reset(); + FIXED_ARRAY_SUB_INSTANCE_TYPE_LIST(SC) +#undef SC + +#define SC(name) \ + count_of_CODE_AGE_##name##_.Reset(); \ + size_of_CODE_AGE_##name##_.Reset(); + CODE_AGE_LIST_COMPLETE(SC) +#undef SC +} + + void Counters::ResetHistograms() { +#define HR(name, caption, min, max, num_buckets) name##_.Reset(); + HISTOGRAM_RANGE_LIST(HR) +#undef HR + #define HT(name, caption) name##_.Reset(); HISTOGRAM_TIMER_LIST(HT) #undef HT diff --git a/deps/v8/src/counters.h b/deps/v8/src/counters.h index 18a9aef29ba..f778f556be6 100644 --- a/deps/v8/src/counters.h +++ b/deps/v8/src/counters.h @@ -5,11 +5,11 @@ #ifndef V8_COUNTERS_H_ #define V8_COUNTERS_H_ -#include "../include/v8.h" -#include "allocation.h" -#include "objects.h" -#include "platform/elapsed-timer.h" -#include "v8globals.h" +#include "include/v8.h" +#include "src/allocation.h" +#include "src/base/platform/elapsed-timer.h" +#include "src/globals.h" +#include "src/objects.h" namespace v8 { namespace internal { @@ -139,10 +139,13 @@ class StatsCounter { // given counter without calling the runtime system. int* GetInternalPointer() { int* loc = GetPtr(); - ASSERT(loc != NULL); + DCHECK(loc != NULL); return loc; } + // Reset the cached internal pointer. + void Reset() { lookup_done_ = false; } + protected: // Returns the cached address of this counter location. int* GetPtr() { @@ -241,11 +244,11 @@ class HistogramTimer : public Histogram { // TODO(bmeurer): Remove this when HistogramTimerScope is fixed. #ifdef DEBUG - ElapsedTimer* timer() { return &timer_; } + base::ElapsedTimer* timer() { return &timer_; } #endif private: - ElapsedTimer timer_; + base::ElapsedTimer timer_; }; // Helper class for scoping a HistogramTimer. @@ -265,11 +268,12 @@ class HistogramTimerScope BASE_EMBEDDED { } else { timer_->Start(); } + } #else : timer_(timer) { timer_->Start(); -#endif } +#endif ~HistogramTimerScope() { #ifdef DEBUG if (!skipped_timer_start_) { @@ -279,6 +283,7 @@ class HistogramTimerScope BASE_EMBEDDED { timer_->Stop(); #endif } + private: HistogramTimer* timer_; #ifdef DEBUG @@ -286,19 +291,25 @@ class HistogramTimerScope BASE_EMBEDDED { #endif }; - -#define HISTOGRAM_TIMER_LIST(HT) \ - /* Garbage collection timers. */ \ - HT(gc_compactor, V8.GCCompactor) \ - HT(gc_scavenger, V8.GCScavenger) \ - HT(gc_context, V8.GCContext) /* GC context cleanup time */ \ - /* Parsing timers. 
-  HT(parse, V8.Parse) \
-  HT(parse_lazy, V8.ParseLazy) \
-  HT(pre_parse, V8.PreParse) \
-  /* Total compilation times. */ \
-  HT(compile, V8.Compile) \
-  HT(compile_eval, V8.CompileEval) \
+#define HISTOGRAM_RANGE_LIST(HR) \
+  /* Generic range histograms */ \
+  HR(gc_idle_time_allotted_in_ms, V8.GCIdleTimeAllottedInMS, 0, 10000, 101)
+
+#define HISTOGRAM_TIMER_LIST(HT) \
+  /* Garbage collection timers. */ \
+  HT(gc_compactor, V8.GCCompactor) \
+  HT(gc_scavenger, V8.GCScavenger) \
+  HT(gc_context, V8.GCContext) /* GC context cleanup time */ \
+  HT(gc_idle_notification, V8.GCIdleNotification) \
+  HT(gc_incremental_marking, V8.GCIncrementalMarking) \
+  HT(gc_low_memory_notification, V8.GCLowMemoryNotification) \
+  /* Parsing timers. */ \
+  HT(parse, V8.Parse) \
+  HT(parse_lazy, V8.ParseLazy) \
+  HT(pre_parse, V8.PreParse) \
+  /* Total compilation times. */ \
+  HT(compile, V8.Compile) \
+  HT(compile_eval, V8.CompileEval) \
   HT(compile_lazy, V8.CompileLazy)

 #define HISTOGRAM_PERCENTAGE_LIST(HP) \
@@ -379,6 +390,7 @@ class HistogramTimerScope BASE_EMBEDDED {
   SC(call_premonomorphic_stubs, V8.CallPreMonomorphicStubs) \
   SC(call_normal_stubs, V8.CallNormalStubs) \
   SC(call_megamorphic_stubs, V8.CallMegamorphicStubs) \
+  SC(inlined_copied_elements, V8.InlinedCopiedElements) \
   SC(arguments_adaptors, V8.ArgumentsAdaptors) \
   SC(compilation_cache_hits, V8.CompilationCacheHits) \
   SC(compilation_cache_misses, V8.CompilationCacheMisses) \
@@ -543,6 +555,11 @@ class HistogramTimerScope BASE_EMBEDDED {
 // This file contains all the v8 counters that are in use.
 class Counters {
  public:
+#define HR(name, caption, min, max, num_buckets) \
+  Histogram* name() { return &name##_; }
+  HISTOGRAM_RANGE_LIST(HR)
+#undef HR
+
 #define HT(name, caption) \
   HistogramTimer* name() { return &name##_; }
   HISTOGRAM_TIMER_LIST(HT)
@@ -626,9 +643,14 @@ class Counters {
     stats_counter_count
   };

+  void ResetCounters();
   void ResetHistograms();

  private:
+#define HR(name, caption, min, max, num_buckets) Histogram name##_;
+  HISTOGRAM_RANGE_LIST(HR)
+#undef HR
+
 #define HT(name, caption) \
   HistogramTimer name##_;
   HISTOGRAM_TIMER_LIST(HT)
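Each line of HISTOGRAM_RANGE_LIST is expanded three times by the HR macros above: once into a member, once into an accessor, and once into constructor/reset wiring. For the single entry currently in the list, the expansions amount to roughly this (illustrative only, not part of the patch):

    // In class Counters (accessor and member):
    Histogram* gc_idle_time_allotted_in_ms() {
      return &gc_idle_time_allotted_in_ms_;
    }
    Histogram gc_idle_time_allotted_in_ms_;

    // In Counters::Counters(Isolate* isolate):
    gc_idle_time_allotted_in_ms_ =
        Histogram("V8.GCIdleTimeAllottedInMS", 0, 10000, 101, isolate);

    // In Counters::ResetHistograms():
    gc_idle_time_allotted_in_ms_.Reset();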
-#include "v8.h" +#include "src/v8.h" -#include "cpu-profiler-inl.h" +#include "src/cpu-profiler-inl.h" -#include "compiler.h" -#include "frames-inl.h" -#include "hashmap.h" -#include "log-inl.h" -#include "vm-state-inl.h" +#include "src/compiler.h" +#include "src/frames-inl.h" +#include "src/hashmap.h" +#include "src/log-inl.h" +#include "src/vm-state-inl.h" -#include "../include/v8-profiler.h" +#include "include/v8-profiler.h" namespace v8 { namespace internal { @@ -23,7 +23,7 @@ static const int kProfilerStackSize = 64 * KB; ProfilerEventsProcessor::ProfilerEventsProcessor( ProfileGenerator* generator, Sampler* sampler, - TimeDelta period) + base::TimeDelta period) : Thread(Thread::Options("v8:ProfEvntProc", kProfilerStackSize)), generator_(generator), sampler_(sampler), @@ -108,7 +108,7 @@ ProfilerEventsProcessor::SampleProcessingResult void ProfilerEventsProcessor::Run() { while (running_) { - ElapsedTimer timer; + base::ElapsedTimer timer; timer.Start(); // Keep processing existing events until we need to do next sample. do { @@ -222,21 +222,21 @@ void CpuProfiler::CodeCreateEvent(Logger::LogEventsAndTags tag, } -void CpuProfiler::CodeCreateEvent(Logger::LogEventsAndTags tag, - Code* code, +void CpuProfiler::CodeCreateEvent(Logger::LogEventsAndTags tag, Code* code, SharedFunctionInfo* shared, - CompilationInfo* info, - Name* name) { + CompilationInfo* info, Name* script_name) { if (FilterOutCodeCreateEvent(tag)) return; CodeEventsContainer evt_rec(CodeEventRecord::CODE_CREATION); CodeCreateEventRecord* rec = &evt_rec.CodeCreateEventRecord_; rec->start = code->address(); - rec->entry = profiles_->NewCodeEntry(tag, profiles_->GetFunctionName(name)); + rec->entry = profiles_->NewCodeEntry( + tag, profiles_->GetFunctionName(shared->DebugName()), + CodeEntry::kEmptyNamePrefix, profiles_->GetName(script_name)); if (info) { rec->entry->set_no_frame_ranges(info->ReleaseNoFrameRanges()); } if (shared->script()->IsScript()) { - ASSERT(Script::cast(shared->script())); + DCHECK(Script::cast(shared->script())); Script* script = Script::cast(shared->script()); rec->entry->set_script_id(script->id()->value()); rec->entry->set_bailout_reason( @@ -248,26 +248,22 @@ void CpuProfiler::CodeCreateEvent(Logger::LogEventsAndTags tag, } -void CpuProfiler::CodeCreateEvent(Logger::LogEventsAndTags tag, - Code* code, +void CpuProfiler::CodeCreateEvent(Logger::LogEventsAndTags tag, Code* code, SharedFunctionInfo* shared, - CompilationInfo* info, - Name* source, int line, int column) { + CompilationInfo* info, Name* script_name, + int line, int column) { if (FilterOutCodeCreateEvent(tag)) return; CodeEventsContainer evt_rec(CodeEventRecord::CODE_CREATION); CodeCreateEventRecord* rec = &evt_rec.CodeCreateEventRecord_; rec->start = code->address(); rec->entry = profiles_->NewCodeEntry( - tag, - profiles_->GetFunctionName(shared->DebugName()), - CodeEntry::kEmptyNamePrefix, - profiles_->GetName(source), - line, + tag, profiles_->GetFunctionName(shared->DebugName()), + CodeEntry::kEmptyNamePrefix, profiles_->GetName(script_name), line, column); if (info) { rec->entry->set_no_frame_ranges(info->ReleaseNoFrameRanges()); } - ASSERT(Script::cast(shared->script())); + DCHECK(Script::cast(shared->script())); Script* script = Script::cast(shared->script()); rec->entry->set_script_id(script->id()->value()); rec->size = code->ExecutableSize(); @@ -304,6 +300,15 @@ void CpuProfiler::CodeMoveEvent(Address from, Address to) { } +void CpuProfiler::CodeDisableOptEvent(Code* code, SharedFunctionInfo* shared) { + 
+void CpuProfiler::CodeDisableOptEvent(Code* code, SharedFunctionInfo* shared) {
+  CodeEventsContainer evt_rec(CodeEventRecord::CODE_DISABLE_OPT);
+  CodeDisableOptEventRecord* rec = &evt_rec.CodeDisableOptEventRecord_;
+  rec->start = code->address();
+  rec->bailout_reason = GetBailoutReason(shared->DisableOptimizationReason());
+  processor_->Enqueue(evt_rec);
+}
+
+
 void CpuProfiler::CodeDeleteEvent(Address from) {
 }

@@ -364,7 +369,7 @@ void CpuProfiler::SetterCallbackEvent(Name* name, Address entry_point) {

 CpuProfiler::CpuProfiler(Isolate* isolate)
     : isolate_(isolate),
-      sampling_interval_(TimeDelta::FromMicroseconds(
+      sampling_interval_(base::TimeDelta::FromMicroseconds(
           FLAG_cpu_profiler_sampling_interval)),
       profiles_(new CpuProfilesCollection(isolate->heap())),
       generator_(NULL),
@@ -378,7 +383,7 @@ CpuProfiler::CpuProfiler(Isolate* isolate,
                          ProfileGenerator* test_generator,
                          ProfilerEventsProcessor* test_processor)
     : isolate_(isolate),
-      sampling_interval_(TimeDelta::FromMicroseconds(
+      sampling_interval_(base::TimeDelta::FromMicroseconds(
           FLAG_cpu_profiler_sampling_interval)),
       profiles_(test_profiles),
      generator_(test_generator),
@@ -388,13 +393,13 @@ CpuProfiler::CpuProfiler(Isolate* isolate,

 CpuProfiler::~CpuProfiler() {
-  ASSERT(!is_profiling_);
+  DCHECK(!is_profiling_);
   delete profiles_;
 }

-void CpuProfiler::set_sampling_interval(TimeDelta value) {
-  ASSERT(!is_profiling_);
+void CpuProfiler::set_sampling_interval(base::TimeDelta value) {
+  DCHECK(!is_profiling_);
   sampling_interval_ = value;
 }

@@ -432,7 +437,7 @@ void CpuProfiler::StartProcessorIfNotStarted() {
         generator_, sampler, sampling_interval_);
     is_profiling_ = true;
     // Enumerate stuff we already have in the heap.
-    ASSERT(isolate_->heap()->HasBeenSetUp());
+    DCHECK(isolate_->heap()->HasBeenSetUp());
     if (!FLAG_prof_browser_mode) {
       logger->LogCodeObjects();
     }
@@ -488,7 +493,7 @@ void CpuProfiler::StopProcessor() {

 void CpuProfiler::LogBuiltins() {
   Builtins* builtins = isolate_->builtins();
-  ASSERT(builtins->is_initialized());
+  DCHECK(builtins->is_initialized());
   for (int i = 0; i < Builtins::builtin_count; i++) {
     CodeEventsContainer evt_rec(CodeEventRecord::REPORT_BUILTIN);
     ReportBuiltinEventRecord* rec = &evt_rec.ReportBuiltinEventRecord_;
diff --git a/deps/v8/src/cpu-profiler.h b/deps/v8/src/cpu-profiler.h
index e87fe9e7722..c1e75a101ad 100644
--- a/deps/v8/src/cpu-profiler.h
+++ b/deps/v8/src/cpu-profiler.h
@@ -5,12 +5,12 @@
 #ifndef V8_CPU_PROFILER_H_
 #define V8_CPU_PROFILER_H_

-#include "allocation.h"
-#include "atomicops.h"
-#include "circular-queue.h"
-#include "platform/time.h"
-#include "sampler.h"
-#include "unbound-queue.h"
+#include "src/allocation.h"
+#include "src/base/atomicops.h"
+#include "src/base/platform/time.h"
+#include "src/circular-queue.h"
+#include "src/sampler.h"
+#include "src/unbound-queue.h"

 namespace v8 {
 namespace internal {
@@ -26,6 +26,7 @@ class ProfileGenerator;
 #define CODE_EVENTS_TYPE_LIST(V) \
   V(CODE_CREATION, CodeCreateEventRecord) \
   V(CODE_MOVE, CodeMoveEventRecord) \
+  V(CODE_DISABLE_OPT, CodeDisableOptEventRecord) \
   V(SHARED_FUNC_MOVE, SharedFunctionInfoMoveEventRecord) \
   V(REPORT_BUILTIN, ReportBuiltinEventRecord)

@@ -65,6 +66,15 @@ class CodeMoveEventRecord : public CodeEventRecord {
 };


+class CodeDisableOptEventRecord : public CodeEventRecord {
+ public:
+  Address start;
+  const char* bailout_reason;
+
+  INLINE(void UpdateCodeMap(CodeMap* code_map));
+};
+
+
 class SharedFunctionInfoMoveEventRecord : public CodeEventRecord {
  public:
   Address from;
@@ -112,11 +122,11 @@ class CodeEventsContainer {
 // This class implements both the profile events processor thread and
 // methods called by event producers: VM and stack sampler threads.
-class ProfilerEventsProcessor : public Thread {
+class ProfilerEventsProcessor : public base::Thread {
  public:
   ProfilerEventsProcessor(ProfileGenerator* generator,
                           Sampler* sampler,
-                          TimeDelta period);
+                          base::TimeDelta period);
   virtual ~ProfilerEventsProcessor() {}

   // Thread control.
@@ -155,7 +165,7 @@ class ProfilerEventsProcessor : public Thread {
   Sampler* sampler_;
   bool running_;
   // Sampling period in microseconds.
-  const TimeDelta period_;
+  const base::TimeDelta period_;
   UnboundQueue<CodeEventsContainer> events_buffer_;
   static const size_t kTickSampleBufferSize = 1 * MB;
   static const size_t kTickSampleQueueLength =
@@ -190,7 +200,7 @@ class CpuProfiler : public CodeEventListener {

   virtual ~CpuProfiler();

-  void set_sampling_interval(TimeDelta value);
+  void set_sampling_interval(base::TimeDelta value);
   void StartProfiling(const char* title, bool record_samples = false);
   void StartProfiling(String* title, bool record_samples);
   CpuProfile* StopProfiling(const char* title);
@@ -211,20 +221,18 @@ class CpuProfiler : public CodeEventListener {
                                Code* code, const char* comment);
   virtual void CodeCreateEvent(Logger::LogEventsAndTags tag,
                                Code* code, Name* name);
-  virtual void CodeCreateEvent(Logger::LogEventsAndTags tag,
-                               Code* code,
+  virtual void CodeCreateEvent(Logger::LogEventsAndTags tag, Code* code,
                                SharedFunctionInfo* shared,
-                               CompilationInfo* info,
-                               Name* name);
-  virtual void CodeCreateEvent(Logger::LogEventsAndTags tag,
-                               Code* code,
+                               CompilationInfo* info, Name* script_name);
+  virtual void CodeCreateEvent(Logger::LogEventsAndTags tag, Code* code,
                                SharedFunctionInfo* shared,
-                               CompilationInfo* info,
-                               Name* source, int line, int column);
+                               CompilationInfo* info, Name* script_name,
+                               int line, int column);
   virtual void CodeCreateEvent(Logger::LogEventsAndTags tag,
                                Code* code, int args_count);
   virtual void CodeMovingGCEvent() {}
   virtual void CodeMoveEvent(Address from, Address to);
+  virtual void CodeDisableOptEvent(Code* code, SharedFunctionInfo* shared);
   virtual void CodeDeleteEvent(Address from);
   virtual void GetterCallbackEvent(Name* name, Address entry_point);
   virtual void RegExpCodeCreateEvent(Code* code, String* source);
@@ -248,7 +256,7 @@ class CpuProfiler : public CodeEventListener {
   void LogBuiltins();

   Isolate* isolate_;
-  TimeDelta sampling_interval_;
+  base::TimeDelta sampling_interval_;
   CpuProfilesCollection* profiles_;
   ProfileGenerator* generator_;
   ProfilerEventsProcessor* processor_;
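The new CODE_DISABLE_OPT event follows the same producer/consumer shape as the other records in CODE_EVENTS_TYPE_LIST: the VM thread fills in a record and enqueues it, and the profiler thread later applies it to the CodeMap. Condensed from the hunks above (flow simplified for illustration):

    // Producer side (VM thread), CpuProfiler::CodeDisableOptEvent:
    CodeEventsContainer evt_rec(CodeEventRecord::CODE_DISABLE_OPT);
    CodeDisableOptEventRecord* rec = &evt_rec.CodeDisableOptEventRecord_;
    rec->start = code->address();
    rec->bailout_reason = GetBailoutReason(shared->DisableOptimizationReason());
    processor_->Enqueue(evt_rec);

    // Consumer side (profiler thread), CodeDisableOptEventRecord::UpdateCodeMap:
    CodeEntry* entry = code_map->FindEntry(start);
    if (entry != NULL) entry->set_bailout_reason(bailout_reason);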
diff --git a/deps/v8/src/d8-debug.cc b/deps/v8/src/d8-debug.cc
index 7c1beaf2e52..71e006c48e6 100644
--- a/deps/v8/src/d8-debug.cc
+++ b/deps/v8/src/d8-debug.cc
@@ -2,29 +2,18 @@
 // Use of this source code is governed by a BSD-style license that can be
 // found in the LICENSE file.

-#include "d8.h"
-#include "d8-debug.h"
-#include "debug-agent.h"
-#include "platform/socket.h"
-
+#include "src/d8.h"
+#include "src/d8-debug.h"

 namespace v8 {

-static bool was_running = true;
-
 void PrintPrompt(bool is_running) {
   const char* prompt = is_running? "> " : "dbg> ";
-  was_running = is_running;
   printf("%s", prompt);
   fflush(stdout);
 }


-void PrintPrompt() {
-  PrintPrompt(was_running);
-}
-
-
 void HandleDebugEvent(const Debug::EventDetails& event_details) {
   // TODO(svenpanne) There should be a way to retrieve this in the callback.
   Isolate* isolate = Isolate::GetCurrent();
@@ -140,208 +129,4 @@ void HandleDebugEvent(const Debug::EventDetails& event_details) {
   }
 }

-
-void RunRemoteDebugger(Isolate* isolate, int port) {
-  RemoteDebugger debugger(isolate, port);
-  debugger.Run();
-}
-
-
-void RemoteDebugger::Run() {
-  bool ok;
-
-  // Connect to the debugger agent.
-  conn_ = new i::Socket;
-  static const int kPortStrSize = 6;
-  char port_str[kPortStrSize];
-  i::OS::SNPrintF(i::Vector<char>(port_str, kPortStrSize), "%d", port_);
-  ok = conn_->Connect("localhost", port_str);
-  if (!ok) {
-    printf("Unable to connect to debug agent %d\n", i::Socket::GetLastError());
-    return;
-  }
-
-  // Start the receiver thread.
-  ReceiverThread receiver(this);
-  receiver.Start();
-
-  // Start the keyboard thread.
-  KeyboardThread keyboard(this);
-  keyboard.Start();
-  PrintPrompt();
-
-  // Process events received from debugged VM and from the keyboard.
-  bool terminate = false;
-  while (!terminate) {
-    event_available_.Wait();
-    RemoteDebuggerEvent* event = GetEvent();
-    switch (event->type()) {
-      case RemoteDebuggerEvent::kMessage:
-        HandleMessageReceived(event->data());
-        break;
-      case RemoteDebuggerEvent::kKeyboard:
-        HandleKeyboardCommand(event->data());
-        break;
-      case RemoteDebuggerEvent::kDisconnect:
-        terminate = true;
-        break;
-
-      default:
-        UNREACHABLE();
-    }
-    delete event;
-  }
-
-  delete conn_;
-  conn_ = NULL;
-  // Wait for the receiver thread to end.
-  receiver.Join();
-}
-
-
-void RemoteDebugger::MessageReceived(i::SmartArrayPointer<char> message) {
-  RemoteDebuggerEvent* event =
-      new RemoteDebuggerEvent(RemoteDebuggerEvent::kMessage, message);
-  AddEvent(event);
-}
-
-
-void RemoteDebugger::KeyboardCommand(i::SmartArrayPointer<char> command) {
-  RemoteDebuggerEvent* event =
-      new RemoteDebuggerEvent(RemoteDebuggerEvent::kKeyboard, command);
-  AddEvent(event);
-}
-
-
-void RemoteDebugger::ConnectionClosed() {
-  RemoteDebuggerEvent* event =
-      new RemoteDebuggerEvent(RemoteDebuggerEvent::kDisconnect,
-                              i::SmartArrayPointer<char>());
-  AddEvent(event);
-}
-
-
-void RemoteDebugger::AddEvent(RemoteDebuggerEvent* event) {
-  i::LockGuard<i::Mutex> lock_guard(&event_access_);
-  if (head_ == NULL) {
-    ASSERT(tail_ == NULL);
-    head_ = event;
-    tail_ = event;
-  } else {
-    ASSERT(tail_ != NULL);
-    tail_->set_next(event);
-    tail_ = event;
-  }
-  event_available_.Signal();
-}
-
-
-RemoteDebuggerEvent* RemoteDebugger::GetEvent() {
-  i::LockGuard<i::Mutex> lock_guard(&event_access_);
-  ASSERT(head_ != NULL);
-  RemoteDebuggerEvent* result = head_;
-  head_ = head_->next();
-  if (head_ == NULL) {
-    ASSERT(tail_ == result);
-    tail_ = NULL;
-  }
-  return result;
-}
-
-
-void RemoteDebugger::HandleMessageReceived(char* message) {
-  Locker lock(isolate_);
-  HandleScope scope(isolate_);
-
-  // Print the event details.
-  TryCatch try_catch;
-  Handle<Object> details = Shell::DebugMessageDetails(
-      isolate_, Handle<String>::Cast(String::NewFromUtf8(isolate_, message)));
-  if (try_catch.HasCaught()) {
-    Shell::ReportException(isolate_, &try_catch);
-    PrintPrompt();
-    return;
-  }
-  String::Utf8Value str(details->Get(String::NewFromUtf8(isolate_, "text")));
-  if (str.length() == 0) {
-    // Empty string is used to signal not to process this event.
-    return;
-  }
-  if (*str != NULL) {
-    printf("%s\n", *str);
-  } else {
-    printf("???\n");
-  }
-
-  bool is_running = details->Get(String::NewFromUtf8(isolate_, "running"))
-                        ->ToBoolean()
-                        ->Value();
-  PrintPrompt(is_running);
-}
-
-
-void RemoteDebugger::HandleKeyboardCommand(char* command) {
-  Locker lock(isolate_);
-  HandleScope scope(isolate_);
-
-  // Convert the debugger command to a JSON debugger request.
-  TryCatch try_catch;
-  Handle<Value> request = Shell::DebugCommandToJSONRequest(
-      isolate_, String::NewFromUtf8(isolate_, command));
-  if (try_catch.HasCaught()) {
-    Shell::ReportException(isolate_, &try_catch);
-    PrintPrompt();
-    return;
-  }
-
-  // If undefined is returned the command was handled internally and there is
-  // no JSON to send.
-  if (request->IsUndefined()) {
-    PrintPrompt();
-    return;
-  }
-
-  // Send the JSON debugger request.
-  i::DebuggerAgentUtil::SendMessage(conn_, Handle<String>::Cast(request));
-}
-
-
-void ReceiverThread::Run() {
-  // Receive the connect message (with empty body).
-  i::SmartArrayPointer<char> message =
-      i::DebuggerAgentUtil::ReceiveMessage(remote_debugger_->conn());
-  ASSERT(message.get() == NULL);
-
-  while (true) {
-    // Receive a message.
-    i::SmartArrayPointer<char> message =
-        i::DebuggerAgentUtil::ReceiveMessage(remote_debugger_->conn());
-    if (message.get() == NULL) {
-      remote_debugger_->ConnectionClosed();
-      return;
-    }
-
-    // Pass the message to the main thread.
-    remote_debugger_->MessageReceived(message);
-  }
-}
-
-
-void KeyboardThread::Run() {
-  static const int kBufferSize = 256;
-  while (true) {
-    // read keyboard input.
-    char command[kBufferSize];
-    char* str = fgets(command, kBufferSize, stdin);
-    if (str == NULL) {
-      break;
-    }
-
-    // Pass the keyboard command to the main thread.
-    remote_debugger_->KeyboardCommand(
-        i::SmartArrayPointer<char>(i::StrDup(command)));
-  }
-}
-
-
 }  // namespace v8
diff --git a/deps/v8/src/d8-debug.h b/deps/v8/src/d8-debug.h
index c211be7590f..1a693cc86d2 100644
--- a/deps/v8/src/d8-debug.h
+++ b/deps/v8/src/d8-debug.h
@@ -6,127 +6,14 @@
 #define V8_D8_DEBUG_H_


-#include "d8.h"
-#include "debug.h"
-#include "platform/socket.h"
+#include "src/d8.h"
+#include "src/debug.h"


 namespace v8 {

 void HandleDebugEvent(const Debug::EventDetails& event_details);

-// Start the remove debugger connecting to a V8 debugger agent on the specified
-// port.
-void RunRemoteDebugger(Isolate* isolate, int port);
-
-// Forward declerations.
-class RemoteDebuggerEvent;
-class ReceiverThread;
-
-
-// Remote debugging class.
-class RemoteDebugger {
- public:
-  explicit RemoteDebugger(Isolate* isolate, int port)
-      : isolate_(isolate),
-        port_(port),
-        event_available_(0),
-        head_(NULL), tail_(NULL) {}
-  void Run();
-
-  // Handle events from the subordinate threads.
-  void MessageReceived(i::SmartArrayPointer<char> message);
-  void KeyboardCommand(i::SmartArrayPointer<char> command);
-  void ConnectionClosed();
-
- private:
-  // Add new debugger event to the list.
-  void AddEvent(RemoteDebuggerEvent* event);
-  // Read next debugger event from the list.
-  RemoteDebuggerEvent* GetEvent();
-
-  // Handle a message from the debugged V8.
-  void HandleMessageReceived(char* message);
-  // Handle a keyboard command.
-  void HandleKeyboardCommand(char* command);
-
-  // Get connection to agent in debugged V8.
-  i::Socket* conn() { return conn_; }
-
-  Isolate* isolate_;
-  int port_;  // Port used to connect to debugger V8.
-  i::Socket* conn_;  // Connection to debugger agent in debugged V8.
-
-  // Linked list of events from debugged V8 and from keyboard input. Access to
-  // the list is guarded by a mutex and a semaphore signals new items in the
-  // list.
-  i::Mutex event_access_;
-  i::Semaphore event_available_;
-  RemoteDebuggerEvent* head_;
-  RemoteDebuggerEvent* tail_;
-
-  friend class ReceiverThread;
-};
-
-
-// Thread reading from debugged V8 instance.
-class ReceiverThread: public i::Thread {
- public:
-  explicit ReceiverThread(RemoteDebugger* remote_debugger)
-      : Thread("d8:ReceiverThrd"),
-        remote_debugger_(remote_debugger) {}
-  ~ReceiverThread() {}
-
-  void Run();
-
- private:
-  RemoteDebugger* remote_debugger_;
-};
-
-
-// Thread reading keyboard input.
-class KeyboardThread: public i::Thread {
- public:
-  explicit KeyboardThread(RemoteDebugger* remote_debugger)
-      : Thread("d8:KeyboardThrd"),
-        remote_debugger_(remote_debugger) {}
-  ~KeyboardThread() {}
-
-  void Run();
-
- private:
-  RemoteDebugger* remote_debugger_;
-};
-
-
-// Events processed by the main deubgger thread.
-class RemoteDebuggerEvent {
- public:
-  RemoteDebuggerEvent(int type, i::SmartArrayPointer<char> data)
-      : type_(type), data_(data), next_(NULL) {
-    ASSERT(type == kMessage || type == kKeyboard || type == kDisconnect);
-  }
-
-  static const int kMessage = 1;
-  static const int kKeyboard = 2;
-  static const int kDisconnect = 3;
-
-  int type() { return type_; }
-  char* data() { return data_.get(); }
-
- private:
-  void set_next(RemoteDebuggerEvent* event) { next_ = event; }
-  RemoteDebuggerEvent* next() { return next_; }
-
-  int type_;
-  i::SmartArrayPointer<char> data_;
-  RemoteDebuggerEvent* next_;
-
-  friend class RemoteDebugger;
-};
-
-
 }  // namespace v8
diff --git a/deps/v8/src/d8-posix.cc b/deps/v8/src/d8-posix.cc
index 869cd531331..59c50b432fc 100644
--- a/deps/v8/src/d8-posix.cc
+++ b/deps/v8/src/d8-posix.cc
@@ -2,22 +2,19 @@
 // Use of this source code is governed by a BSD-style license that can be
 // found in the LICENSE file.

-
-#include <stdlib.h>
 #include <errno.h>
-#include <sys/types.h>
+#include <fcntl.h>
+#include <signal.h>
+#include <stdlib.h>
+#include <string.h>
+#include <sys/select.h>
 #include <sys/stat.h>
 #include <sys/time.h>
-#include <time.h>
-#include <unistd.h>
-#include <fcntl.h>
+#include <sys/types.h>
 #include <sys/wait.h>
-#include <signal.h>
-
+#include <unistd.h>

-#include "d8.h"
-#include "d8-debug.h"
-#include "debug.h"
+#include "src/d8.h"

 namespace v8 {
@@ -83,7 +80,7 @@ static int LengthWithoutIncompleteUtf8(char* buffer, int len) {
 static bool WaitOnFD(int fd,
                      int read_timeout,
                      int total_timeout,
-                     struct timeval& start_time) {
+                     const struct timeval& start_time) {
   fd_set readfds, writefds, exceptfds;
   struct timeval timeout;
   int gone = 0;
@@ -206,8 +203,8 @@ class ExecArgs {
     }
   }
   static const unsigned kMaxArgs = 1000;
-  char** arg_array() { return exec_args_; }
-  char* arg0() { return exec_args_[0]; }
+  char* const* arg_array() const { return exec_args_; }
+  const char* arg0() const { return exec_args_[0]; }

 private:
  char* exec_args_[kMaxArgs + 1];
@@ -249,7 +246,7 @@ static const int kWriteFD = 1;
 // It only returns if an error occurred.
 static void ExecSubprocess(int* exec_error_fds,
                            int* stdout_fds,
-                           ExecArgs& exec_args) {
+                           const ExecArgs& exec_args) {
   close(exec_error_fds[kReadFD]);  // Don't need this in the child.
   close(stdout_fds[kReadFD]);      // Don't need this in the child.
   close(1);                        // Close stdout.
@@ -288,7 +285,7 @@ static bool ChildLaunchedOK(Isolate* isolate, int* exec_error_fds) {
 // succeeded or false if an exception was thrown.
 static Handle<Value> GetStdout(Isolate* isolate,
                                int child_fd,
-                               struct timeval& start_time,
+                               const struct timeval& start_time,
                                int read_timeout,
                                int total_timeout) {
   Handle<String> accumulator = String::Empty(isolate);
@@ -360,8 +357,8 @@ static Handle<Value> GetStdout(Isolate* isolate,
 // Get exit status of child.
 static bool WaitForChild(Isolate* isolate,
                          int pid,
-                         ZombieProtector& child_waiter,
-                         struct timeval& start_time,
+                         ZombieProtector& child_waiter,  // NOLINT
+                         const struct timeval& start_time,
                          int read_timeout,
                          int total_timeout) {
 #ifdef HAS_WAITID
diff --git a/deps/v8/src/d8-readline.cc b/deps/v8/src/d8-readline.cc
index cb59f6ef133..39c93d35de5 100644
--- a/deps/v8/src/d8-readline.cc
+++ b/deps/v8/src/d8-readline.cc
@@ -10,7 +10,7 @@
 // The readline includes leaves RETURN defined which breaks V8 compilation.
 #undef RETURN

-#include "d8.h"
+#include "src/d8.h"

 // There are incompatibilities between different versions and different
 // implementations of readline.  This smooths out one known incompatibility.
@@ -82,10 +82,7 @@ bool ReadLineEditor::Close() {
 Handle<String> ReadLineEditor::Prompt(const char* prompt) {
   char* result = NULL;
-  {  // Release lock for blocking input.
-    Unlocker unlock(Isolate::GetCurrent());
-    result = readline(prompt);
-  }
+  result = readline(prompt);
   if (result == NULL) return Handle<String>();
   AddHistory(result);
   return String::NewFromUtf8(isolate_, result);
@@ -123,7 +120,6 @@ char* ReadLineEditor::CompletionGenerator(const char* text, int state) {
   static unsigned current_index;
   static Persistent<Array> current_completions;
   Isolate* isolate = read_line_editor.isolate_;
-  Locker lock(isolate);
   HandleScope scope(isolate);
   Handle<Array> completions;
   if (state == 0) {
- -#include "d8.h" -#include "d8-debug.h" -#include "debug.h" -#include "api.h" +#include "src/d8.h" namespace v8 { diff --git a/deps/v8/src/d8.cc b/deps/v8/src/d8.cc index 396d68b7fa9..356a64b2dca 100644 --- a/deps/v8/src/d8.cc +++ b/deps/v8/src/d8.cc @@ -26,32 +26,37 @@ #endif // !V8_SHARED #ifdef V8_SHARED -#include "../include/v8-testing.h" +#include "include/v8-testing.h" #endif // V8_SHARED +#if !defined(V8_SHARED) && defined(ENABLE_GDB_JIT_INTERFACE) +#include "src/gdb-jit.h" +#endif + #ifdef ENABLE_VTUNE_JIT_INTERFACE -#include "third_party/vtune/v8-vtune.h" +#include "src/third_party/vtune/v8-vtune.h" #endif -#include "d8.h" +#include "src/d8.h" +#include "include/libplatform/libplatform.h" #ifndef V8_SHARED -#include "api.h" -#include "checks.h" -#include "cpu.h" -#include "d8-debug.h" -#include "debug.h" -#include "natives.h" -#include "platform.h" -#include "v8.h" +#include "src/api.h" +#include "src/base/cpu.h" +#include "src/base/logging.h" +#include "src/base/platform/platform.h" +#include "src/d8-debug.h" +#include "src/debug.h" +#include "src/natives.h" +#include "src/v8.h" #endif // !V8_SHARED #if !defined(_WIN32) && !defined(_WIN64) #include <unistd.h> // NOLINT #endif -#ifndef ASSERT -#define ASSERT(condition) assert(condition) +#ifndef DCHECK +#define DCHECK(condition) assert(condition) #endif namespace v8 { @@ -134,11 +139,12 @@ Handle<String> DumbLineEditor::Prompt(const char* prompt) { #ifndef V8_SHARED CounterMap* Shell::counter_map_; -i::OS::MemoryMappedFile* Shell::counters_file_ = NULL; +base::OS::MemoryMappedFile* Shell::counters_file_ = NULL; CounterCollection Shell::local_counters_; CounterCollection* Shell::counters_ = &local_counters_; -i::Mutex Shell::context_mutex_; -const i::TimeTicks Shell::kInitialTicks = i::TimeTicks::HighResolutionNow(); +base::Mutex Shell::context_mutex_; +const base::TimeTicks Shell::kInitialTicks = + base::TimeTicks::HighResolutionNow(); Persistent<Context> Shell::utility_context_; #endif // !V8_SHARED @@ -164,6 +170,36 @@ const char* Shell::ToCString(const v8::String::Utf8Value& value) { } +// Compile a string within the current v8 context. +Local<UnboundScript> Shell::CompileString( + Isolate* isolate, Local<String> source, Local<Value> name, + v8::ScriptCompiler::CompileOptions compile_options) { + ScriptOrigin origin(name); + ScriptCompiler::Source script_source(source, origin); + Local<UnboundScript> script = + ScriptCompiler::CompileUnbound(isolate, &script_source, compile_options); + + // Was caching requested & successful? Then compile again, now with cache. + if (script_source.GetCachedData()) { + if (compile_options == ScriptCompiler::kProduceCodeCache) { + compile_options = ScriptCompiler::kConsumeCodeCache; + } else if (compile_options == ScriptCompiler::kProduceParserCache) { + compile_options = ScriptCompiler::kConsumeParserCache; + } else { + DCHECK(false); // A new compile option? + } + ScriptCompiler::Source cached_source( + source, origin, new v8::ScriptCompiler::CachedData( + script_source.GetCachedData()->data, + script_source.GetCachedData()->length, + v8::ScriptCompiler::CachedData::BufferNotOwned)); + script = ScriptCompiler::CompileUnbound(isolate, &cached_source, + compile_options); + } + return script; +} + + // Executes a string within the current v8 context. bool Shell::ExecuteString(Isolate* isolate, Handle<String> source, @@ -182,10 +218,9 @@ bool Shell::ExecuteString(Isolate* isolate, // When debugging make exceptions appear to be uncaught. 
 // Executes a string within the current v8 context.
 bool Shell::ExecuteString(Isolate* isolate,
                           Handle<String> source,
@@ -182,10 +218,9 @@ bool Shell::ExecuteString(Isolate* isolate,
     // When debugging make exceptions appear to be uncaught.
     try_catch.SetVerbose(true);
   }
-  ScriptOrigin origin(name);
-  ScriptCompiler::Source script_source(source, origin);
+
   Handle<UnboundScript> script =
-      ScriptCompiler::CompileUnbound(isolate, &script_source);
+      Shell::CompileString(isolate, source, name, options.compile_options);
   if (script.IsEmpty()) {
     // Print errors that happened during compilation.
     if (report_exceptions && !FLAG_debugger)
@@ -200,13 +235,13 @@ bool Shell::ExecuteString(Isolate* isolate,
     realm->Exit();
     data->realm_current_ = data->realm_switch_;
     if (result.IsEmpty()) {
-      ASSERT(try_catch.HasCaught());
+      DCHECK(try_catch.HasCaught());
       // Print errors that happened during execution.
       if (report_exceptions && !FLAG_debugger)
         ReportException(isolate, &try_catch);
       return false;
     } else {
-      ASSERT(!try_catch.HasCaught());
+      DCHECK(!try_catch.HasCaught());
       if (print_result) {
 #if !defined(V8_SHARED)
         if (options.test_shell) {
@@ -290,9 +325,19 @@ int PerIsolateData::RealmIndexOrThrow(

 #ifndef V8_SHARED
 // performance.now() returns a time stamp as double, measured in milliseconds.
+// When FLAG_verify_predictable mode is enabled it returns the current value
+// of Heap::allocations_count().
 void Shell::PerformanceNow(const v8::FunctionCallbackInfo<v8::Value>& args) {
-  i::TimeDelta delta = i::TimeTicks::HighResolutionNow() - kInitialTicks;
-  args.GetReturnValue().Set(delta.InMillisecondsF());
+  if (i::FLAG_verify_predictable) {
+    Isolate* v8_isolate = args.GetIsolate();
+    i::Heap* heap = reinterpret_cast<i::Isolate*>(v8_isolate)->heap();
+    args.GetReturnValue().Set(heap->synthetic_time());
+
+  } else {
+    base::TimeDelta delta =
+        base::TimeTicks::HighResolutionNow() - kInitialTicks;
+    args.GetReturnValue().Set(delta.InMillisecondsF());
+  }
 }
 #endif  // !V8_SHARED

@@ -472,10 +517,7 @@ Handle<String> Shell::ReadFromStdin(Isolate* isolate) {
     // not been fully read into the buffer yet (does not end with '\n').
     // If fgets gets an error, just give up.
     char* input = NULL;
-    {  // Release lock for blocking input.
-      Unlocker unlock(isolate);
-      input = fgets(buffer, kBufferSize, stdin);
-    }
+    input = fgets(buffer, kBufferSize, stdin);
     if (input == NULL) return Handle<String>();
     length = static_cast<int>(strlen(buffer));
     if (length == 0) {
@@ -555,7 +597,7 @@ void Shell::ReportException(Isolate* isolate, v8::TryCatch* try_catch) {
     printf("%s\n", exception_string);
   } else {
     // Print (filename):(line number): (message).
-    v8::String::Utf8Value filename(message->GetScriptResourceName());
+    v8::String::Utf8Value filename(message->GetScriptOrigin().ResourceName());
     const char* filename_string = ToCString(filename);
     int linenum = message->GetLineNumber();
     printf("%s:%i: %s\n", filename_string, linenum, exception_string);
@@ -638,16 +680,6 @@ Local<Value> Shell::DebugCommandToJSONRequest(Isolate* isolate,
 }


-void Shell::DispatchDebugMessages() {
-  Isolate* isolate = v8::Isolate::GetCurrent();
-  HandleScope handle_scope(isolate);
-  v8::Local<v8::Context> context =
-      v8::Local<v8::Context>::New(isolate, Shell::evaluation_context_);
-  v8::Context::Scope context_scope(context);
-  v8::Debug::ProcessDebugMessages();
-}
-
-
 int32_t* Counter::Bind(const char* name, bool is_histogram) {
   int i;
   for (i = 0; i < kMaxNameSize - 1 && name[i]; i++)
@@ -678,8 +710,8 @@ Counter* CounterCollection::GetNextCounter() {
 }


-void Shell::MapCounters(const char* name) {
-  counters_file_ = i::OS::MemoryMappedFile::create(
+void Shell::MapCounters(v8::Isolate* isolate, const char* name) {
+  counters_file_ = base::OS::MemoryMappedFile::create(
       name, sizeof(CounterCollection), &local_counters_);
   void* memory = (counters_file_ == NULL) ?
       NULL : counters_file_->memory();
@@ -688,9 +720,9 @@ void Shell::MapCounters(const char* name) {
     Exit(1);
   }
   counters_ = static_cast<CounterCollection*>(memory);
-  V8::SetCounterFunction(LookupCounter);
-  V8::SetCreateHistogramFunction(CreateHistogram);
-  V8::SetAddHistogramSampleFunction(AddHistogramSample);
+  isolate->SetCounterFunction(LookupCounter);
+  isolate->SetCreateHistogramFunction(CreateHistogram);
+  isolate->SetAddHistogramSampleFunction(AddHistogramSample);
 }

@@ -715,7 +747,7 @@ Counter* Shell::GetCounter(const char* name, bool is_histogram) {
       counter->Bind(name, is_histogram);
     }
   } else {
-    ASSERT(counter->is_histogram() == is_histogram);
+    DCHECK(counter->is_histogram() == is_histogram);
   }
   return counter;
 }
@@ -747,7 +779,6 @@ void Shell::AddHistogramSample(void* histogram, int sample) {

 void Shell::InstallUtilityScript(Isolate* isolate) {
-  Locker lock(isolate);
   HandleScope scope(isolate);
   // If we use the utility context, we have to set the security tokens so that
   // utility, evaluation and debug context can all access each other.
@@ -763,11 +794,12 @@ void Shell::InstallUtilityScript(Isolate* isolate) {
   // Install the debugger object in the utility scope
   i::Debug* debug = reinterpret_cast<i::Isolate*>(isolate)->debug();
   debug->Load();
+  i::Handle<i::Context> debug_context = debug->debug_context();
   i::Handle<i::JSObject> js_debug
-      = i::Handle<i::JSObject>(debug->debug_context()->global_object());
+      = i::Handle<i::JSObject>(debug_context->global_object());
   utility_context->Global()->Set(String::NewFromUtf8(isolate, "$debug"),
                                  Utils::ToLocal(js_debug));
-  debug->debug_context()->set_security_token(
+  debug_context->set_security_token(
       reinterpret_cast<i::Isolate*>(isolate)->heap()->undefined_value());

   // Run the d8 shell utility script in the utility context
@@ -796,9 +828,7 @@ void Shell::InstallUtilityScript(Isolate* isolate) {
   script_object->set_type(i::Smi::FromInt(i::Script::TYPE_NATIVE));

   // Start the in-process debugger if requested.
-  if (i::FLAG_debugger && !i::FLAG_debugger_agent) {
-    v8::Debug::SetDebugEventListener2(HandleDebugEvent);
-  }
+  if (i::FLAG_debugger) v8::Debug::SetDebugEventListener(HandleDebugEvent);
 }
 #endif  // !V8_SHARED

@@ -813,7 +843,7 @@ class BZip2Decompressor : public v8::StartupDataDecompressor {
                             int* raw_data_size,
                             const char* compressed_data,
                             int compressed_data_size) {
-    ASSERT_EQ(v8::StartupData::kBZip2,
+    DCHECK_EQ(v8::StartupData::kBZip2,
               v8::V8::GetCompressedStartupDataAlgorithm());
     unsigned int decompressed_size = *raw_data_size;
     int result =
@@ -878,11 +908,9 @@ Handle<ObjectTemplate> Shell::CreateGlobalTemplate(Isolate* isolate) {
                        performance_template);
 #endif  // !V8_SHARED

-#if !defined(V8_SHARED) && !defined(_WIN32) && !defined(_WIN64)
   Handle<ObjectTemplate> os_templ = ObjectTemplate::New(isolate);
   AddOSMethods(isolate, os_templ);
   global_template->Set(String::NewFromUtf8(isolate, "os"), os_templ);
-#endif  // !V8_SHARED && !_WIN32 && !_WIN64

   return global_template;
 }
@@ -902,11 +930,11 @@ void Shell::Initialize(Isolate* isolate) {
   Shell::counter_map_ = new CounterMap();
   // Set up counters
   if (i::StrLength(i::FLAG_map_counters) != 0)
-    MapCounters(i::FLAG_map_counters);
+    MapCounters(isolate, i::FLAG_map_counters);
   if (i::FLAG_dump_counters || i::FLAG_track_gc_object_stats) {
-    V8::SetCounterFunction(LookupCounter);
-    V8::SetCreateHistogramFunction(CreateHistogram);
-    V8::SetAddHistogramSampleFunction(AddHistogramSample);
+    isolate->SetCounterFunction(LookupCounter);
+    isolate->SetCreateHistogramFunction(CreateHistogram);
+    isolate->SetAddHistogramSampleFunction(AddHistogramSample);
   }
 #endif  // !V8_SHARED
 }

@@ -915,17 +943,10 @@ void Shell::Initialize(Isolate* isolate) {
 void Shell::InitializeDebugger(Isolate* isolate) {
   if (options.test_shell) return;
 #ifndef V8_SHARED
-  Locker lock(isolate);
   HandleScope scope(isolate);
   Handle<ObjectTemplate> global_template = CreateGlobalTemplate(isolate);
   utility_context_.Reset(isolate,
                          Context::New(isolate, NULL, global_template));
-
-  // Start the debugger agent if requested.
-  if (i::FLAG_debugger_agent) {
-    v8::Debug::EnableAgent("d8 shell", i::FLAG_debugger_port, true);
-    v8::Debug::SetDebugMessageDispatchHandler(DispatchDebugMessages, true);
-  }
 #endif  // !V8_SHARED
 }

@@ -933,13 +954,13 @@ void Shell::InitializeDebugger(Isolate* isolate) {
 Local<Context> Shell::CreateEvaluationContext(Isolate* isolate) {
 #ifndef V8_SHARED
   // This needs to be a critical section since this is not thread-safe
-  i::LockGuard<i::Mutex> lock_guard(&context_mutex_);
+  base::LockGuard<base::Mutex> lock_guard(&context_mutex_);
 #endif  // !V8_SHARED
   // Initialize the global objects
   Handle<ObjectTemplate> global_template = CreateGlobalTemplate(isolate);
   EscapableHandleScope handle_scope(isolate);
   Local<Context> context = Context::New(isolate, NULL, global_template);
-  ASSERT(!context.IsEmpty());
+  DCHECK(!context.IsEmpty());
   Context::Scope scope(context);

 #ifndef V8_SHARED
@@ -1048,8 +1069,6 @@ static FILE* FOpen(const char* path, const char* mode) {


 static char* ReadChars(Isolate* isolate, const char* name, int* size_out) {
-  // Release the V8 lock while reading files.
-  v8::Unlocker unlocker(isolate);
   FILE* file = FOpen(name, "rb");
   if (file == NULL) return NULL;
@@ -1088,7 +1107,7 @@ static void ReadBufferWeakCallback(


 void Shell::ReadBuffer(const v8::FunctionCallbackInfo<v8::Value>& args) {
-  ASSERT(sizeof(char) == sizeof(uint8_t));  // NOLINT
+  DCHECK(sizeof(char) == sizeof(uint8_t));  // NOLINT
   String::Utf8Value filename(args[0]);
   int length;
   if (*filename == NULL) {
@@ -1116,29 +1135,6 @@ void Shell::ReadBuffer(const v8::FunctionCallbackInfo<v8::Value>& args) {
 }


-#ifndef V8_SHARED
-static char* ReadToken(char* data, char token) {
-  char* next = i::OS::StrChr(data, token);
-  if (next != NULL) {
-    *next = '\0';
-    return (next + 1);
-  }
-
-  return NULL;
-}
-
-
-static char* ReadLine(char* data) {
-  return ReadToken(data, '\n');
-}
-
-
-static char* ReadWord(char* data) {
-  return ReadToken(data, ' ');
-}
-#endif  // !V8_SHARED
-
-
 // Reads a file into a v8 string.
 Handle<String> Shell::ReadFile(Isolate* isolate, const char* name) {
   int size = 0;
@@ -1152,7 +1148,6 @@ Handle<String> Shell::ReadFile(Isolate* isolate, const char* name) {


 void Shell::RunShell(Isolate* isolate) {
-  Locker locker(isolate);
   HandleScope outer_scope(isolate);
   v8::Local<v8::Context> context =
       v8::Local<v8::Context>::New(isolate, evaluation_context_);
@@ -1172,71 +1167,6 @@ void Shell::RunShell(Isolate* isolate) {
 }


-#ifndef V8_SHARED
-class ShellThread : public i::Thread {
- public:
-  // Takes ownership of the underlying char array of |files|.
-  ShellThread(Isolate* isolate, char* files)
-      : Thread("d8:ShellThread"),
-        isolate_(isolate), files_(files) { }
-
-  ~ShellThread() {
-    delete[] files_;
-  }
-
-  virtual void Run();
- private:
-  Isolate* isolate_;
-  char* files_;
-};
-
-
-void ShellThread::Run() {
-  char* ptr = files_;
-  while ((ptr != NULL) && (*ptr != '\0')) {
-    // For each newline-separated line.
-    char* next_line = ReadLine(ptr);
-
-    if (*ptr == '#') {
-      // Skip comment lines.
-      ptr = next_line;
-      continue;
-    }
-
-    // Prepare the context for this thread.
-    Locker locker(isolate_);
-    HandleScope outer_scope(isolate_);
-    Local<Context> thread_context =
-        Shell::CreateEvaluationContext(isolate_);
-    Context::Scope context_scope(thread_context);
-    PerIsolateData::RealmScope realm_scope(PerIsolateData::Get(isolate_));
-
-    while ((ptr != NULL) && (*ptr != '\0')) {
-      HandleScope inner_scope(isolate_);
-      char* filename = ptr;
-      ptr = ReadWord(ptr);
-
-      // Skip empty strings.
-      if (strlen(filename) == 0) {
-        continue;
-      }
-
-      Handle<String> str = Shell::ReadFile(isolate_, filename);
-      if (str.IsEmpty()) {
-        printf("File '%s' not found\n", filename);
-        Shell::Exit(1);
-      }
-
-      Shell::ExecuteString(
-          isolate_, str, String::NewFromUtf8(isolate_, filename), false, false);
-    }
-
-    ptr = next_line;
-  }
-}
-#endif  // !V8_SHARED
-
-
 SourceGroup::~SourceGroup() {
 #ifndef V8_SHARED
   delete thread_;
@@ -1294,12 +1224,12 @@ Handle<String> SourceGroup::ReadFile(Isolate* isolate, const char* name) {


 #ifndef V8_SHARED
-i::Thread::Options SourceGroup::GetThreadOptions() {
+base::Thread::Options SourceGroup::GetThreadOptions() {
   // On some systems (OSX 10.6) the stack size default is 0.5Mb or less
   // which is not enough to parse the big literal expressions used in tests.
   // The stack size should be at least StackGuard::kLimitSize + some
   // OS-specific padding for thread startup code. 2Mbytes seems to be enough.
- return i::Thread::Options("IsolateThread", 2 * MB); + return base::Thread::Options("IsolateThread", 2 * MB); } @@ -1309,7 +1239,6 @@ void SourceGroup::ExecuteInThread() { next_semaphore_.Wait(); { Isolate::Scope iscope(isolate); - Locker lock(isolate); { HandleScope scope(isolate); PerIsolateData data(isolate); @@ -1322,12 +1251,19 @@ void SourceGroup::ExecuteInThread() { } if (Shell::options.send_idle_notification) { const int kLongIdlePauseInMs = 1000; - V8::ContextDisposedNotification(); - V8::IdleNotification(kLongIdlePauseInMs); + isolate->ContextDisposedNotification(); + isolate->IdleNotification(kLongIdlePauseInMs); + } + if (Shell::options.invoke_weak_callbacks) { + // By sending a low memory notifications, we will try hard to collect + // all garbage and will therefore also invoke all weak callbacks of + // actually unreachable persistent handles. + isolate->LowMemoryNotification(); } } done_semaphore_.Signal(); } while (!Shell::options.last_run); + isolate->Dispose(); } @@ -1352,7 +1288,13 @@ void SourceGroup::WaitForThread() { #endif // !V8_SHARED +void SetFlagsFromString(const char* flags) { + v8::V8::SetFlagsFromString(flags, static_cast<int>(strlen(flags))); +} + + bool Shell::SetOptions(int argc, char* argv[]) { + bool logfile_per_isolate = false; for (int i = 0; i < argc; i++) { if (strcmp(argv[i], "--stress-opt") == 0) { options.stress_opt = true; @@ -1370,6 +1312,9 @@ bool Shell::SetOptions(int argc, char* argv[]) { // No support for stressing if we can't use --always-opt. options.stress_opt = false; options.stress_deopt = false; + } else if (strcmp(argv[i], "--logfile-per-isolate") == 0) { + logfile_per_isolate = true; + argv[i] = NULL; } else if (strcmp(argv[i], "--shell") == 0) { options.interactive_shell = true; argv[i] = NULL; @@ -1379,6 +1324,11 @@ bool Shell::SetOptions(int argc, char* argv[]) { } else if (strcmp(argv[i], "--send-idle-notification") == 0) { options.send_idle_notification = true; argv[i] = NULL; + } else if (strcmp(argv[i], "--invoke-weak-callbacks") == 0) { + options.invoke_weak_callbacks = true; + // TODO(jochen) See issue 3351 + options.send_idle_notification = true; + argv[i] = NULL; } else if (strcmp(argv[i], "-f") == 0) { // Ignore any -f flags for compatibility with other stand-alone // JavaScript engines. 
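
The small SetFlagsFromString() helper added above becomes the single funnel for ad-hoc flag strings; later in this patch it replaces the fake-argv dance of SetStandaloneFlagsViaCommandLine(). Usage is exactly as the patch itself does:

    // Feed one flag string to V8's parser; strlen() supplies the byte count.
    SetFlagsFromString("--nologfile_per_isolate");
    SetFlagsFromString("--trace-hydrogen-file=hydrogen.cfg");
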
@@ -1389,13 +1339,6 @@ bool Shell::SetOptions(int argc, char* argv[]) { return false; #endif // V8_SHARED options.num_isolates++; - } else if (strcmp(argv[i], "-p") == 0) { -#ifdef V8_SHARED - printf("D8 with shared library does not support multi-threading\n"); - return false; -#else - options.num_parallel_files++; -#endif // V8_SHARED } else if (strcmp(argv[i], "--dump-heap-constants") == 0) { #ifdef V8_SHARED printf("D8 with shared library does not support constant dumping\n"); @@ -1410,41 +1353,38 @@ bool Shell::SetOptions(int argc, char* argv[]) { } else if (strncmp(argv[i], "--icu-data-file=", 16) == 0) { options.icu_data_file = argv[i] + 16; argv[i] = NULL; - } #ifdef V8_SHARED - else if (strcmp(argv[i], "--dump-counters") == 0) { + } else if (strcmp(argv[i], "--dump-counters") == 0) { printf("D8 with shared library does not include counters\n"); return false; } else if (strcmp(argv[i], "--debugger") == 0) { printf("Javascript debugger not included\n"); return false; - } #endif // V8_SHARED - } - -#ifndef V8_SHARED - // Run parallel threads if we are not using --isolate - options.parallel_files = new char*[options.num_parallel_files]; - int parallel_files_set = 0; - for (int i = 1; i < argc; i++) { - if (argv[i] == NULL) continue; - if (strcmp(argv[i], "-p") == 0 && i + 1 < argc) { - if (options.num_isolates > 1) { - printf("-p is not compatible with --isolate\n"); +#ifdef V8_USE_EXTERNAL_STARTUP_DATA + } else if (strncmp(argv[i], "--natives_blob=", 15) == 0) { + options.natives_blob = argv[i] + 15; + argv[i] = NULL; + } else if (strncmp(argv[i], "--snapshot_blob=", 16) == 0) { + options.snapshot_blob = argv[i] + 16; + argv[i] = NULL; +#endif // V8_USE_EXTERNAL_STARTUP_DATA + } else if (strcmp(argv[i], "--cache") == 0 || + strncmp(argv[i], "--cache=", 8) == 0) { + const char* value = argv[i] + 7; + if (!*value || strncmp(value, "=code", 6) == 0) { + options.compile_options = v8::ScriptCompiler::kProduceCodeCache; + } else if (strncmp(value, "=parse", 7) == 0) { + options.compile_options = v8::ScriptCompiler::kProduceParserCache; + } else if (strncmp(value, "=none", 6) == 0) { + options.compile_options = v8::ScriptCompiler::kNoCompileOptions; + } else { + printf("Unknown option to --cache.\n"); return false; } argv[i] = NULL; - i++; - options.parallel_files[parallel_files_set] = argv[i]; - parallel_files_set++; - argv[i] = NULL; } } - if (parallel_files_set != options.num_parallel_files) { - printf("-p requires a file containing a list of files as parameter\n"); - return false; - } -#endif // !V8_SHARED v8::V8::SetFlagsFromCommandLine(&argc, argv, true); @@ -1464,94 +1404,61 @@ bool Shell::SetOptions(int argc, char* argv[]) { } current->End(argc); + if (!logfile_per_isolate && options.num_isolates) { + SetFlagsFromString("--nologfile_per_isolate"); + } + return true; } int Shell::RunMain(Isolate* isolate, int argc, char* argv[]) { #ifndef V8_SHARED - i::List<i::Thread*> threads(1); - if (options.parallel_files != NULL) { - for (int i = 0; i < options.num_parallel_files; i++) { - char* files = NULL; - { Locker lock(isolate); - int size = 0; - files = ReadChars(isolate, options.parallel_files[i], &size); - } - if (files == NULL) { - printf("File list '%s' not found\n", options.parallel_files[i]); - Exit(1); - } - ShellThread* thread = new ShellThread(isolate, files); - thread->Start(); - threads.Add(thread); - } - } for (int i = 1; i < options.num_isolates; ++i) { options.isolate_sources[i].StartExecuteInThread(); } #endif // !V8_SHARED - { // NOLINT - Locker lock(isolate); - { - 
HandleScope scope(isolate); - Local<Context> context = CreateEvaluationContext(isolate); - if (options.last_run) { - // Keep using the same context in the interactive shell. - evaluation_context_.Reset(isolate, context); + { + HandleScope scope(isolate); + Local<Context> context = CreateEvaluationContext(isolate); + if (options.last_run && options.use_interactive_shell()) { + // Keep using the same context in the interactive shell. + evaluation_context_.Reset(isolate, context); #ifndef V8_SHARED - // If the interactive debugger is enabled make sure to activate - // it before running the files passed on the command line. - if (i::FLAG_debugger) { - InstallUtilityScript(isolate); - } -#endif // !V8_SHARED - } - { - Context::Scope cscope(context); - PerIsolateData::RealmScope realm_scope(PerIsolateData::Get(isolate)); - options.isolate_sources[0].Execute(isolate); + // If the interactive debugger is enabled make sure to activate + // it before running the files passed on the command line. + if (i::FLAG_debugger) { + InstallUtilityScript(isolate); } +#endif // !V8_SHARED } - if (!options.last_run) { - if (options.send_idle_notification) { - const int kLongIdlePauseInMs = 1000; - V8::ContextDisposedNotification(); - V8::IdleNotification(kLongIdlePauseInMs); - } + { + Context::Scope cscope(context); + PerIsolateData::RealmScope realm_scope(PerIsolateData::Get(isolate)); + options.isolate_sources[0].Execute(isolate); } } + if (options.send_idle_notification) { + const int kLongIdlePauseInMs = 1000; + isolate->ContextDisposedNotification(); + isolate->IdleNotification(kLongIdlePauseInMs); + } + if (options.invoke_weak_callbacks) { + // By sending a low memory notifications, we will try hard to collect all + // garbage and will therefore also invoke all weak callbacks of actually + // unreachable persistent handles. + isolate->LowMemoryNotification(); + } #ifndef V8_SHARED for (int i = 1; i < options.num_isolates; ++i) { options.isolate_sources[i].WaitForThread(); } - - for (int i = 0; i < threads.length(); i++) { - i::Thread* thread = threads[i]; - thread->Join(); - delete thread; - } #endif // !V8_SHARED return 0; } -#ifdef V8_SHARED -static void SetStandaloneFlagsViaCommandLine() { - int fake_argc = 3; - char **fake_argv = new char*[3]; - fake_argv[0] = NULL; - fake_argv[1] = strdup("--trace-hydrogen-file=hydrogen.cfg"); - fake_argv[2] = strdup("--redirect-code-traces-to=code.asm"); - v8::V8::SetFlagsFromCommandLine(&fake_argc, fake_argv, false); - free(fake_argv[1]); - free(fake_argv[2]); - delete[] fake_argv; -} -#endif - - #ifndef V8_SHARED static void DumpHeapConstants(i::Isolate* isolate) { i::Heap* heap = isolate->heap(); @@ -1612,20 +1519,11 @@ static void DumpHeapConstants(i::Isolate* isolate) { class ShellArrayBufferAllocator : public v8::ArrayBuffer::Allocator { public: virtual void* Allocate(size_t length) { - void* result = malloc(length); - memset(result, 0, length); - return result; - } - virtual void* AllocateUninitialized(size_t length) { - return malloc(length); + void* data = AllocateUninitialized(length); + return data == NULL ? data : memset(data, 0, length); } + virtual void* AllocateUninitialized(size_t length) { return malloc(length); } virtual void Free(void* data, size_t) { free(data); } - // TODO(dslomov): Remove when v8:2823 is fixed. 
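
Stepping back to the --cache option parsed in SetOptions() above: its values select the v8::ScriptCompiler::CompileOptions that Shell::CompileString() (declared in d8.h below) forwards to the compiler. A hedged sketch of the produce side, using only constants visible in this patch (error handling omitted; the consume-side wiring is assumed, not shown):

    v8::Local<v8::UnboundScript> CompileProducingCache(
        v8::Isolate* isolate, v8::Local<v8::String> code) {
      v8::ScriptCompiler::Source source(code);
      v8::Local<v8::UnboundScript> script = v8::ScriptCompiler::CompileUnbound(
          isolate, &source, v8::ScriptCompiler::kProduceCodeCache);
      // On success the cache blob hangs off the Source; its data/length could
      // be persisted and handed back to a later compile to skip recompilation.
      const v8::ScriptCompiler::CachedData* cache = source.GetCachedData();
      (void)cache;
      return script;
    }
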
- virtual void Free(void* data) { -#ifndef V8_SHARED - UNREACHABLE(); -#endif - } }; @@ -1637,20 +1535,75 @@ class MockArrayBufferAllocator : public v8::ArrayBuffer::Allocator { virtual void* AllocateUninitialized(size_t length) V8_OVERRIDE { return malloc(0); } - virtual void Free(void*, size_t) V8_OVERRIDE { + virtual void Free(void* p, size_t) V8_OVERRIDE { + free(p); + } +}; + + +#ifdef V8_USE_EXTERNAL_STARTUP_DATA +class StartupDataHandler { + public: + StartupDataHandler(const char* natives_blob, + const char* snapshot_blob) { + Load(natives_blob, &natives_, v8::V8::SetNativesDataBlob); + Load(snapshot_blob, &snapshot_, v8::V8::SetSnapshotDataBlob); + } + + ~StartupDataHandler() { + delete[] natives_.data; + delete[] snapshot_.data; + } + + private: + void Load(const char* blob_file, + v8::StartupData* startup_data, + void (*setter_fn)(v8::StartupData*)) { + startup_data->data = NULL; + startup_data->compressed_size = 0; + startup_data->raw_size = 0; + + if (!blob_file) + return; + + FILE* file = fopen(blob_file, "rb"); + if (!file) + return; + + fseek(file, 0, SEEK_END); + startup_data->raw_size = ftell(file); + rewind(file); + + startup_data->data = new char[startup_data->raw_size]; + startup_data->compressed_size = fread( + const_cast<char*>(startup_data->data), 1, startup_data->raw_size, + file); + fclose(file); + + if (startup_data->raw_size == startup_data->compressed_size) + (*setter_fn)(startup_data); } + + v8::StartupData natives_; + v8::StartupData snapshot_; + + // Disallow copy & assign. + StartupDataHandler(const StartupDataHandler& other); + void operator=(const StartupDataHandler& other); }; +#endif // V8_USE_EXTERNAL_STARTUP_DATA int Shell::Main(int argc, char* argv[]) { if (!SetOptions(argc, argv)) return 1; v8::V8::InitializeICU(options.icu_data_file); -#ifndef V8_SHARED - i::FLAG_trace_hydrogen_file = "hydrogen.cfg"; - i::FLAG_redirect_code_traces_to = "code.asm"; -#else - SetStandaloneFlagsViaCommandLine(); + v8::Platform* platform = v8::platform::CreateDefaultPlatform(); + v8::V8::InitializePlatform(platform); +#ifdef V8_USE_EXTERNAL_STARTUP_DATA + StartupDataHandler startup_data(options.natives_blob, options.snapshot_blob); #endif + SetFlagsFromString("--trace-hydrogen-file=hydrogen.cfg"); + SetFlagsFromString("--redirect-code-traces-to=code.asm"); ShellArrayBufferAllocator array_buffer_allocator; MockArrayBufferAllocator mock_arraybuffer_allocator; if (options.mock_arraybuffer_allocator) { @@ -1659,17 +1612,24 @@ int Shell::Main(int argc, char* argv[]) { v8::V8::SetArrayBufferAllocator(&array_buffer_allocator); } int result = 0; - Isolate* isolate = Isolate::GetCurrent(); + Isolate* isolate = Isolate::New(); #ifndef V8_SHARED v8::ResourceConstraints constraints; - constraints.ConfigureDefaults(i::OS::TotalPhysicalMemory(), - i::OS::MaxVirtualMemory(), - i::CPU::NumberOfProcessorsOnline()); + constraints.ConfigureDefaults(base::OS::TotalPhysicalMemory(), + base::OS::MaxVirtualMemory(), + base::OS::NumberOfProcessorsOnline()); v8::SetResourceConstraints(isolate, &constraints); #endif DumbLineEditor dumb_line_editor(isolate); { + Isolate::Scope scope(isolate); Initialize(isolate); +#if !defined(V8_SHARED) && defined(ENABLE_GDB_JIT_INTERFACE) + if (i::FLAG_gdbjit) { + v8::V8::SetJitCodeEventHandler(v8::kJitCodeEventDefault, + i::GDBJITInterface::EventHandler); + } +#endif #ifdef ENABLE_VTUNE_JIT_INTERFACE vTune::InitializeVtuneForV8(); #endif @@ -1709,21 +1669,9 @@ int Shell::Main(int argc, char* argv[]) { result = RunMain(isolate, argc, argv); } - -#ifndef 
V8_SHARED - // Run remote debugger if requested, but never on --test - if (i::FLAG_remote_debugger && !options.test_shell) { - InstallUtilityScript(isolate); - RunRemoteDebugger(isolate, i::FLAG_debugger_port); - return 0; - } -#endif // !V8_SHARED - // Run interactive shell if explicitly requested or if no script has been // executed, but never on --test - - if (( options.interactive_shell || !options.script_executed ) - && !options.test_shell ) { + if (options.use_interactive_shell()) { #ifndef V8_SHARED if (!i::FLAG_debugger) { InstallUtilityScript(isolate); @@ -1732,7 +1680,10 @@ int Shell::Main(int argc, char* argv[]) { RunShell(isolate); } } + isolate->Dispose(); V8::Dispose(); + V8::ShutdownPlatform(); + delete platform; OnExit(); diff --git a/deps/v8/src/d8.gyp b/deps/v8/src/d8.gyp index 0e51baaacac..a084979de22 100644 --- a/deps/v8/src/d8.gyp +++ b/deps/v8/src/d8.gyp @@ -41,10 +41,11 @@ 'type': 'executable', 'dependencies': [ '../tools/gyp/v8.gyp:v8', + '../tools/gyp/v8.gyp:v8_libplatform', ], # Generated source files need this explicitly: 'include_dirs+': [ - '../src', + '..', ], 'sources': [ 'd8.cc', @@ -57,6 +58,14 @@ 'libraries': [ '-lreadline', ], 'sources': [ 'd8-readline.cc' ], }], + ['(OS=="linux" or OS=="mac" or OS=="freebsd" or OS=="netbsd" \ + or OS=="openbsd" or OS=="solaris" or OS=="android" \ + or OS=="qnx")', { + 'sources': [ 'd8-posix.cc', ] + }], + [ 'OS=="win"', { + 'sources': [ 'd8-windows.cc', ] + }], [ 'component!="shared_library"', { 'sources': [ 'd8-debug.cc', '<(SHARED_INTERMEDIATE_DIR)/d8-js.cc', ], 'conditions': [ @@ -69,14 +78,6 @@ 'd8_js2c', ], }], - ['(OS=="linux" or OS=="mac" or OS=="freebsd" or OS=="netbsd" \ - or OS=="openbsd" or OS=="solaris" or OS=="android" \ - or OS=="qnx")', { - 'sources': [ 'd8-posix.cc', ] - }], - [ 'OS=="win"', { - 'sources': [ 'd8-windows.cc', ] - }], ], }], ['v8_enable_vtunejit==1', { diff --git a/deps/v8/src/d8.h b/deps/v8/src/d8.h index bf290238bbc..991e5a51b0d 100644 --- a/deps/v8/src/d8.h +++ b/deps/v8/src/d8.h @@ -6,12 +6,12 @@ #define V8_D8_H_ #ifndef V8_SHARED -#include "allocation.h" -#include "hashmap.h" -#include "smart-pointers.h" -#include "v8.h" +#include "src/allocation.h" +#include "src/hashmap.h" +#include "src/smart-pointers.h" +#include "src/v8.h" #else -#include "../include/v8.h" +#include "include/v8.h" #endif // !V8_SHARED namespace v8 { @@ -69,7 +69,7 @@ class CounterMap { const_cast<char*>(name), Hash(name), true); - ASSERT(answer != NULL); + DCHECK(answer != NULL); answer->value = value; } class Iterator { @@ -141,10 +141,10 @@ class SourceGroup { void WaitForThread(); private: - class IsolateThread : public i::Thread { + class IsolateThread : public base::Thread { public: explicit IsolateThread(SourceGroup* group) - : i::Thread(GetThreadOptions()), group_(group) {} + : base::Thread(GetThreadOptions()), group_(group) {} virtual void Run() { group_->ExecuteInThread(); @@ -154,12 +154,12 @@ class SourceGroup { SourceGroup* group_; }; - static i::Thread::Options GetThreadOptions(); + static base::Thread::Options GetThreadOptions(); void ExecuteInThread(); - i::Semaphore next_semaphore_; - i::Semaphore done_semaphore_; - i::Thread* thread_; + base::Semaphore next_semaphore_; + base::Semaphore done_semaphore_; + base::Thread* thread_; #endif // !V8_SHARED void ExitShell(int exit_code); @@ -194,39 +194,37 @@ class BinaryResource : public v8::String::ExternalAsciiStringResource { class ShellOptions { public: - ShellOptions() : -#ifndef V8_SHARED - num_parallel_files(0), - parallel_files(NULL), -#endif // 
!V8_SHARED - script_executed(false), - last_run(true), - send_idle_notification(false), - stress_opt(false), - stress_deopt(false), - interactive_shell(false), - test_shell(false), - dump_heap_constants(false), - expected_to_throw(false), - mock_arraybuffer_allocator(false), - num_isolates(1), - isolate_sources(NULL), - icu_data_file(NULL) { } + ShellOptions() + : script_executed(false), + last_run(true), + send_idle_notification(false), + invoke_weak_callbacks(false), + stress_opt(false), + stress_deopt(false), + interactive_shell(false), + test_shell(false), + dump_heap_constants(false), + expected_to_throw(false), + mock_arraybuffer_allocator(false), + num_isolates(1), + compile_options(v8::ScriptCompiler::kNoCompileOptions), + isolate_sources(NULL), + icu_data_file(NULL), + natives_blob(NULL), + snapshot_blob(NULL) {} ~ShellOptions() { -#ifndef V8_SHARED - delete[] parallel_files; -#endif // !V8_SHARED delete[] isolate_sources; } -#ifndef V8_SHARED - int num_parallel_files; - char** parallel_files; -#endif // !V8_SHARED + bool use_interactive_shell() { + return (interactive_shell || !script_executed) && !test_shell; + } + bool script_executed; bool last_run; bool send_idle_notification; + bool invoke_weak_callbacks; bool stress_opt; bool stress_deopt; bool interactive_shell; @@ -235,8 +233,11 @@ class ShellOptions { bool expected_to_throw; bool mock_arraybuffer_allocator; int num_isolates; + v8::ScriptCompiler::CompileOptions compile_options; SourceGroup* isolate_sources; const char* icu_data_file; + const char* natives_blob; + const char* snapshot_blob; }; #ifdef V8_SHARED @@ -246,6 +247,9 @@ class Shell : public i::AllStatic { #endif // V8_SHARED public: + static Local<UnboundScript> CompileString( + Isolate* isolate, Local<String> source, Local<Value> name, + v8::ScriptCompiler::CompileOptions compile_options); static bool ExecuteString(Isolate* isolate, Handle<String> source, Handle<Value> name, @@ -270,13 +274,12 @@ class Shell : public i::AllStatic { int max, size_t buckets); static void AddHistogramSample(void* histogram, int sample); - static void MapCounters(const char* name); + static void MapCounters(v8::Isolate* isolate, const char* name); static Local<Object> DebugMessageDetails(Isolate* isolate, Handle<String> message); static Local<Value> DebugCommandToJSONRequest(Isolate* isolate, Handle<String> command); - static void DispatchDebugMessages(); static void PerformanceNow(const v8::FunctionCallbackInfo<v8::Value>& args); #endif // !V8_SHARED @@ -369,9 +372,9 @@ class Shell : public i::AllStatic { // don't want to store the stats in a memory-mapped file static CounterCollection local_counters_; static CounterCollection* counters_; - static i::OS::MemoryMappedFile* counters_file_; - static i::Mutex context_mutex_; - static const i::TimeTicks kInitialTicks; + static base::OS::MemoryMappedFile* counters_file_; + static base::Mutex context_mutex_; + static const base::TimeTicks kInitialTicks; static Counter* GetCounter(const char* name, bool is_histogram); static void InstallUtilityScript(Isolate* isolate); diff --git a/deps/v8/src/d8.js b/deps/v8/src/d8.js index 0c1f32dc8d2..2b927aface1 100644 --- a/deps/v8/src/d8.js +++ b/deps/v8/src/d8.js @@ -208,10 +208,6 @@ function DebugEventDetails(response) { details.text = result; break; - case 'scriptCollected': - details.text = result; - break; - default: details.text = 'Unknown debug event ' + response.event(); } @@ -503,7 +499,7 @@ RequestPacket.prototype.toJSONProtocol = function() { json += '"seq":' + this.seq; json += 
',"type":"' + this.type + '"'; if (this.command) { - json += ',"command":' + StringToJSON_(this.command); + json += ',"command":' + JSON.stringify(this.command); } if (this.arguments) { json += ',"arguments":'; @@ -511,7 +507,7 @@ RequestPacket.prototype.toJSONProtocol = function() { if (this.arguments.toJSONProtocol) { json += this.arguments.toJSONProtocol(); } else { - json += SimpleObjectToJSON_(this.arguments); + json += JSON.stringify(this.arguments); } } json += '}'; @@ -1965,214 +1961,6 @@ ProtocolReference.prototype.handle = function() { }; -function MakeJSONPair_(name, value) { - return '"' + name + '":' + value; -} - - -function ArrayToJSONObject_(content) { - return '{' + content.join(',') + '}'; -} - - -function ArrayToJSONArray_(content) { - return '[' + content.join(',') + ']'; -} - - -function BooleanToJSON_(value) { - return String(value); -} - - -function NumberToJSON_(value) { - return String(value); -} - - -// Mapping of some control characters to avoid the \uXXXX syntax for most -// commonly used control cahracters. -var ctrlCharMap_ = { - '\b': '\\b', - '\t': '\\t', - '\n': '\\n', - '\f': '\\f', - '\r': '\\r', - '"' : '\\"', - '\\': '\\\\' -}; - - -// Regular expression testing for ", \ and control characters (0x00 - 0x1F). -var ctrlCharTest_ = new RegExp('["\\\\\x00-\x1F]'); - - -// Regular expression matching ", \ and control characters (0x00 - 0x1F) -// globally. -var ctrlCharMatch_ = new RegExp('["\\\\\x00-\x1F]', 'g'); - - -/** - * Convert a String to its JSON representation (see http://www.json.org/). To - * avoid depending on the String object this method calls the functions in - * string.js directly and not through the value. - * @param {String} value The String value to format as JSON - * @return {string} JSON formatted String value - */ -function StringToJSON_(value) { - // Check for" , \ and control characters (0x00 - 0x1F). No need to call - // RegExpTest as ctrlchar is constructed using RegExp. - if (ctrlCharTest_.test(value)) { - // Replace ", \ and control characters (0x00 - 0x1F). - return '"' + - value.replace(ctrlCharMatch_, function (char) { - // Use charmap if possible. - var mapped = ctrlCharMap_[char]; - if (mapped) return mapped; - mapped = char.charCodeAt(); - // Convert control character to unicode escape sequence. - return '\\u00' + - '0' + // TODO %NumberToRadixString(Math.floor(mapped / 16), 16) + - '0'; // TODO %NumberToRadixString(mapped % 16, 16) - }) - + '"'; - } - - // Simple string with no special characters. - return '"' + value + '"'; -} - - -/** - * Convert a Date to ISO 8601 format. To avoid depending on the Date object - * this method calls the functions in date.js directly and not through the - * value. - * @param {Date} value The Date value to format as JSON - * @return {string} JSON formatted Date value - */ -function DateToISO8601_(value) { - var f = function(n) { - return n < 10 ? '0' + n : n; - }; - var g = function(n) { - return n < 10 ? '00' + n : n < 100 ? '0' + n : n; - }; - return builtins.GetUTCFullYearFrom(value) + '-' + - f(builtins.GetUTCMonthFrom(value) + 1) + '-' + - f(builtins.GetUTCDateFrom(value)) + 'T' + - f(builtins.GetUTCHoursFrom(value)) + ':' + - f(builtins.GetUTCMinutesFrom(value)) + ':' + - f(builtins.GetUTCSecondsFrom(value)) + '.' + - g(builtins.GetUTCMillisecondsFrom(value)) + 'Z'; -} - - -/** - * Convert a Date to ISO 8601 format. To avoid depending on the Date object - * this method calls the functions in date.js directly and not through the - * value. 
- * @param {Date} value The Date value to format as JSON - * @return {string} JSON formatted Date value - */ -function DateToJSON_(value) { - return '"' + DateToISO8601_(value) + '"'; -} - - -/** - * Convert an Object to its JSON representation (see http://www.json.org/). - * This implementation simply runs through all string property names and adds - * each property to the JSON representation for some predefined types. For type - * "object" the function calls itself recursively unless the object has the - * function property "toJSONProtocol" in which case that is used. This is not - * a general implementation but sufficient for the debugger. Note that circular - * structures will cause infinite recursion. - * @param {Object} object The object to format as JSON - * @return {string} JSON formatted object value - */ -function SimpleObjectToJSON_(object) { - var content = []; - for (var key in object) { - // Only consider string keys. - if (typeof key == 'string') { - var property_value = object[key]; - - // Format the value based on its type. - var property_value_json; - switch (typeof property_value) { - case 'object': - if (IS_NULL(property_value)) { - property_value_json = 'null'; - } else if (typeof property_value.toJSONProtocol == 'function') { - property_value_json = property_value.toJSONProtocol(true); - } else if (property_value.constructor.name == 'Array'){ - property_value_json = SimpleArrayToJSON_(property_value); - } else { - property_value_json = SimpleObjectToJSON_(property_value); - } - break; - - case 'boolean': - property_value_json = BooleanToJSON_(property_value); - break; - - case 'number': - property_value_json = NumberToJSON_(property_value); - break; - - case 'string': - property_value_json = StringToJSON_(property_value); - break; - - default: - property_value_json = null; - } - - // Add the property if relevant. - if (property_value_json) { - content.push(StringToJSON_(key) + ':' + property_value_json); - } - } - } - - // Make JSON object representation. - return '{' + content.join(',') + '}'; -} - - -/** - * Convert an array to its JSON representation. This is a VERY simple - * implementation just to support what is needed for the debugger. - * @param {Array} arrya The array to format as JSON - * @return {string} JSON formatted array value - */ -function SimpleArrayToJSON_(array) { - // Make JSON array representation. - var json = '['; - for (var i = 0; i < array.length; i++) { - if (i != 0) { - json += ','; - } - var elem = array[i]; - if (elem.toJSONProtocol) { - json += elem.toJSONProtocol(true); - } else if (typeof(elem) === 'object') { - json += SimpleObjectToJSON_(elem); - } else if (typeof(elem) === 'boolean') { - json += BooleanToJSON_(elem); - } else if (typeof(elem) === 'number') { - json += NumberToJSON_(elem); - } else if (typeof(elem) === 'string') { - json += StringToJSON_(elem); - } else { - json += elem; - } - } - json += ']'; - return json; -} - - // A more universal stringify that supports more types than JSON. // Used by the d8 shell to output results. var stringifyDepthLimit = 4; // To avoid crashing on cyclic objects @@ -2192,7 +1980,7 @@ function Stringify(x, depth) { case "string": return "\"" + x.toString() + "\""; case "symbol": - return "Symbol(" + (x.name ? 
Stringify(x.name, depth) : "") + ")" + return x.toString(); case "object": if (IS_NULL(x)) return "null"; if (x.constructor && x.constructor.name === "Array") { @@ -2208,18 +1996,22 @@ function Stringify(x, depth) { if (string && string !== "[object Object]") return string; } catch(e) {} var props = []; - for (var name in x) { + var names = Object.getOwnPropertyNames(x); + names = names.concat(Object.getOwnPropertySymbols(x)); + for (var i in names) { + var name = names[i]; var desc = Object.getOwnPropertyDescriptor(x, name); if (IS_UNDEFINED(desc)) continue; + if (IS_SYMBOL(name)) name = "[" + Stringify(name) + "]"; if ("value" in desc) { props.push(name + ": " + Stringify(desc.value, depth - 1)); } - if ("get" in desc) { - var getter = desc.get.toString(); + if (desc.get) { + var getter = Stringify(desc.get); props.push("get " + name + getter.slice(getter.indexOf('('))); } - if ("set" in desc) { - var setter = desc.set.toString(); + if (desc.set) { + var setter = Stringify(desc.set); props.push("set " + name + setter.slice(setter.indexOf('('))); } } diff --git a/deps/v8/src/data-flow.cc b/deps/v8/src/data-flow.cc index a2df9336827..e591778fa1c 100644 --- a/deps/v8/src/data-flow.cc +++ b/deps/v8/src/data-flow.cc @@ -2,10 +2,10 @@ // Use of this source code is governed by a BSD-style license that can be // found in the LICENSE file. -#include "v8.h" +#include "src/v8.h" -#include "data-flow.h" -#include "scopes.h" +#include "src/data-flow.h" +#include "src/scopes.h" namespace v8 { namespace internal { diff --git a/deps/v8/src/data-flow.h b/deps/v8/src/data-flow.h index 98435e603cc..042e29f854b 100644 --- a/deps/v8/src/data-flow.h +++ b/deps/v8/src/data-flow.h @@ -5,12 +5,12 @@ #ifndef V8_DATAFLOW_H_ #define V8_DATAFLOW_H_ -#include "v8.h" +#include "src/v8.h" -#include "allocation.h" -#include "ast.h" -#include "compiler.h" -#include "zone-inl.h" +#include "src/allocation.h" +#include "src/ast.h" +#include "src/compiler.h" +#include "src/zone-inl.h" namespace v8 { namespace internal { @@ -25,7 +25,7 @@ class BitVector: public ZoneObject { current_index_(0), current_value_(target->data_[0]), current_(-1) { - ASSERT(target->data_length_ > 0); + DCHECK(target->data_length_ > 0); Advance(); } ~Iterator() { } @@ -34,7 +34,7 @@ class BitVector: public ZoneObject { void Advance(); int Current() const { - ASSERT(!Done()); + DCHECK(!Done()); return current_; } @@ -66,7 +66,7 @@ class BitVector: public ZoneObject { : length_(length), data_length_(SizeFor(length)), data_(zone->NewArray<uint32_t>(data_length_)) { - ASSERT(length > 0); + DCHECK(length > 0); Clear(); } @@ -87,7 +87,7 @@ class BitVector: public ZoneObject { } void CopyFrom(const BitVector& other) { - ASSERT(other.length() <= length()); + DCHECK(other.length() <= length()); for (int i = 0; i < other.data_length_; i++) { data_[i] = other.data_[i]; } @@ -97,30 +97,30 @@ class BitVector: public ZoneObject { } bool Contains(int i) const { - ASSERT(i >= 0 && i < length()); + DCHECK(i >= 0 && i < length()); uint32_t block = data_[i / 32]; return (block & (1U << (i % 32))) != 0; } void Add(int i) { - ASSERT(i >= 0 && i < length()); + DCHECK(i >= 0 && i < length()); data_[i / 32] |= (1U << (i % 32)); } void Remove(int i) { - ASSERT(i >= 0 && i < length()); + DCHECK(i >= 0 && i < length()); data_[i / 32] &= ~(1U << (i % 32)); } void Union(const BitVector& other) { - ASSERT(other.length() == length()); + DCHECK(other.length() == length()); for (int i = 0; i < data_length_; i++) { data_[i] |= other.data_[i]; } } bool UnionIsChanged(const BitVector& 
other) { - ASSERT(other.length() == length()); + DCHECK(other.length() == length()); bool changed = false; for (int i = 0; i < data_length_; i++) { uint32_t old_data = data_[i]; @@ -131,14 +131,14 @@ class BitVector: public ZoneObject { } void Intersect(const BitVector& other) { - ASSERT(other.length() == length()); + DCHECK(other.length() == length()); for (int i = 0; i < data_length_; i++) { data_[i] &= other.data_[i]; } } void Subtract(const BitVector& other) { - ASSERT(other.length() == length()); + DCHECK(other.length() == length()); for (int i = 0; i < data_length_; i++) { data_[i] &= ~other.data_[i]; } @@ -164,6 +164,15 @@ class BitVector: public ZoneObject { return true; } + int Count() const { + int count = 0; + for (int i = 0; i < data_length_; i++) { + int data = data_[i]; + if (data != 0) count += CompilerIntrinsics::CountSetBits(data); + } + return count; + } + int length() const { return length_; } #ifdef DEBUG diff --git a/deps/v8/src/date.cc b/deps/v8/src/date.cc index e0fb5ee1250..6b95cb72129 100644 --- a/deps/v8/src/date.cc +++ b/deps/v8/src/date.cc @@ -2,12 +2,12 @@ // Use of this source code is governed by a BSD-style license that can be // found in the LICENSE file. -#include "date.h" +#include "src/date.h" -#include "v8.h" +#include "src/v8.h" -#include "objects.h" -#include "objects-inl.h" +#include "src/objects.h" +#include "src/objects-inl.h" namespace v8 { namespace internal { @@ -31,7 +31,7 @@ void DateCache::ResetDateCache() { } else { stamp_ = Smi::FromInt(stamp_->value() + 1); } - ASSERT(stamp_ != Smi::FromInt(kInvalidStamp)); + DCHECK(stamp_ != Smi::FromInt(kInvalidStamp)); for (int i = 0; i < kDSTSize; ++i) { ClearSegment(&dst_[i]); } @@ -40,7 +40,7 @@ void DateCache::ResetDateCache() { after_ = &dst_[1]; local_offset_ms_ = kInvalidLocalOffsetInMs; ymd_valid_ = false; - OS::ClearTimezoneCache(tz_cache_); + base::OS::ClearTimezoneCache(tz_cache_); } @@ -73,7 +73,7 @@ void DateCache::YearMonthDayFromDays( *year = 400 * (days / kDaysIn400Years) - kYearsOffset; days %= kDaysIn400Years; - ASSERT(DaysFromYearMonth(*year, 0) + days == save_days); + DCHECK(DaysFromYearMonth(*year, 0) + days == save_days); days--; int yd1 = days / kDaysIn100Years; @@ -93,12 +93,12 @@ void DateCache::YearMonthDayFromDays( bool is_leap = (!yd1 || yd2) && !yd3; - ASSERT(days >= -1); - ASSERT(is_leap || (days >= 0)); - ASSERT((days < 365) || (is_leap && (days < 366))); - ASSERT(is_leap == ((*year % 4 == 0) && (*year % 100 || (*year % 400 == 0)))); - ASSERT(is_leap || ((DaysFromYearMonth(*year, 0) + days) == save_days)); - ASSERT(!is_leap || ((DaysFromYearMonth(*year, 0) + days + 1) == save_days)); + DCHECK(days >= -1); + DCHECK(is_leap || (days >= 0)); + DCHECK((days < 365) || (is_leap && (days < 366))); + DCHECK(is_leap == ((*year % 4 == 0) && (*year % 100 || (*year % 400 == 0)))); + DCHECK(is_leap || ((DaysFromYearMonth(*year, 0) + days) == save_days)); + DCHECK(!is_leap || ((DaysFromYearMonth(*year, 0) + days + 1) == save_days)); days += is_leap; @@ -124,7 +124,7 @@ void DateCache::YearMonthDayFromDays( *day = days - 31 + 1; } } - ASSERT(DaysFromYearMonth(*year, *month) + *day - 1 == save_days); + DCHECK(DaysFromYearMonth(*year, *month) + *day - 1 == save_days); ymd_valid_ = true; ymd_year_ = *year; ymd_month_ = *month; @@ -146,8 +146,8 @@ int DateCache::DaysFromYearMonth(int year, int month) { month += 12; } - ASSERT(month >= 0); - ASSERT(month < 12); + DCHECK(month >= 0); + DCHECK(month < 12); // year_delta is an arbitrary number such that: // a) year_delta = -1 (mod 400) @@ -222,8 
+222,8 @@ int DateCache::DaylightSavingsOffsetInMs(int64_t time_ms) { ProbeDST(time_sec); - ASSERT(InvalidSegment(before_) || before_->start_sec <= time_sec); - ASSERT(InvalidSegment(after_) || time_sec < after_->start_sec); + DCHECK(InvalidSegment(before_) || before_->start_sec <= time_sec); + DCHECK(InvalidSegment(after_) || time_sec < after_->start_sec); if (InvalidSegment(before_)) { // Cache miss. @@ -264,7 +264,7 @@ int DateCache::DaylightSavingsOffsetInMs(int64_t time_ms) { int new_offset_ms = GetDaylightSavingsOffsetFromOS(new_after_start_sec); ExtendTheAfterSegment(new_after_start_sec, new_offset_ms); } else { - ASSERT(!InvalidSegment(after_)); + DCHECK(!InvalidSegment(after_)); // Update the usage counter of after_ since it is going to be used. after_->last_used = ++dst_usage_counter_; } @@ -291,7 +291,7 @@ int DateCache::DaylightSavingsOffsetInMs(int64_t time_ms) { return offset_ms; } } else { - ASSERT(after_->offset_ms == offset_ms); + DCHECK(after_->offset_ms == offset_ms); after_->start_sec = middle_sec; if (time_sec >= after_->start_sec) { // This swap helps the optimistic fast check in subsequent invocations. @@ -310,7 +310,7 @@ int DateCache::DaylightSavingsOffsetInMs(int64_t time_ms) { void DateCache::ProbeDST(int time_sec) { DST* before = NULL; DST* after = NULL; - ASSERT(before_ != after_); + DCHECK(before_ != after_); for (int i = 0; i < kDSTSize; ++i) { if (dst_[i].start_sec <= time_sec) { @@ -334,12 +334,12 @@ void DateCache::ProbeDST(int time_sec) { ? after_ : LeastRecentlyUsedDST(before); } - ASSERT(before != NULL); - ASSERT(after != NULL); - ASSERT(before != after); - ASSERT(InvalidSegment(before) || before->start_sec <= time_sec); - ASSERT(InvalidSegment(after) || time_sec < after->start_sec); - ASSERT(InvalidSegment(before) || InvalidSegment(after) || + DCHECK(before != NULL); + DCHECK(after != NULL); + DCHECK(before != after); + DCHECK(InvalidSegment(before) || before->start_sec <= time_sec); + DCHECK(InvalidSegment(after) || time_sec < after->start_sec); + DCHECK(InvalidSegment(before) || InvalidSegment(after) || before->end_sec < after->start_sec); before_ = before; diff --git a/deps/v8/src/date.h b/deps/v8/src/date.h index a2af685d276..633dd9f38e7 100644 --- a/deps/v8/src/date.h +++ b/deps/v8/src/date.h @@ -5,9 +5,9 @@ #ifndef V8_DATE_H_ #define V8_DATE_H_ -#include "allocation.h" -#include "globals.h" -#include "platform.h" +#include "src/allocation.h" +#include "src/base/platform/platform.h" +#include "src/globals.h" namespace v8 { @@ -39,12 +39,12 @@ class DateCache { // It is an invariant of DateCache that cache stamp is non-negative. static const int kInvalidStamp = -1; - DateCache() : stamp_(0), tz_cache_(OS::CreateTimezoneCache()) { + DateCache() : stamp_(0), tz_cache_(base::OS::CreateTimezoneCache()) { ResetDateCache(); } virtual ~DateCache() { - OS::DisposeTimezoneCache(tz_cache_); + base::OS::DisposeTimezoneCache(tz_cache_); tz_cache_ = NULL; } @@ -93,7 +93,7 @@ class DateCache { if (time_ms < 0 || time_ms > kMaxEpochTimeInMs) { time_ms = EquivalentTime(time_ms); } - return OS::LocalTimezone(static_cast<double>(time_ms), tz_cache_); + return base::OS::LocalTimezone(static_cast<double>(time_ms), tz_cache_); } // ECMA 262 - 15.9.5.26 @@ -162,12 +162,13 @@ class DateCache { // These functions are virtual so that we can override them when testing. 
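
Looking back at the data-flow.h changes, the one substantive addition amid the ASSERT-to-DCHECK renames is BitVector::Count(), which sums set bits one 32-bit word at a time through CompilerIntrinsics::CountSetBits. That helper typically reduces to a compiler builtin; for reference, a portable equivalent (a sketch, not the tree's implementation):

    #include <stdint.h>

    static int PortableCountSetBits(uint32_t v) {
      v = v - ((v >> 1) & 0x55555555u);                  // 2-bit partial sums
      v = (v & 0x33333333u) + ((v >> 2) & 0x33333333u);  // 4-bit partial sums
      return static_cast<int>(
          (((v + (v >> 4)) & 0x0F0F0F0Fu) * 0x01010101u) >> 24);  // byte sums
    }
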
virtual int GetDaylightSavingsOffsetFromOS(int64_t time_sec) { double time_ms = static_cast<double>(time_sec * 1000); - return static_cast<int>(OS::DaylightSavingsOffset(time_ms, tz_cache_)); + return static_cast<int>( + base::OS::DaylightSavingsOffset(time_ms, tz_cache_)); } virtual int GetLocalOffsetFromOS() { - double offset = OS::LocalTimeOffset(tz_cache_); - ASSERT(offset < kInvalidLocalOffsetInMs); + double offset = base::OS::LocalTimeOffset(tz_cache_); + DCHECK(offset < kInvalidLocalOffsetInMs); return static_cast<int>(offset); } @@ -234,7 +235,7 @@ class DateCache { int ymd_month_; int ymd_day_; - TimezoneCache* tz_cache_; + base::TimezoneCache* tz_cache_; }; } } // namespace v8::internal diff --git a/deps/v8/src/date.js b/deps/v8/src/date.js index 2a445979e6e..87c87bfda60 100644 --- a/deps/v8/src/date.js +++ b/deps/v8/src/date.js @@ -2,6 +2,8 @@ // Use of this source code is governed by a BSD-style license that can be // found in the LICENSE file. +"use strict"; + // This file relies on the fact that the following declarations have been made // in v8natives.js: // var $isFinite = GlobalIsFinite; @@ -761,7 +763,7 @@ function SetUpDate() { )); // Set up non-enumerable constructor property of the Date prototype object. - %SetProperty($Date.prototype, "constructor", $Date, DONT_ENUM); + %AddNamedProperty($Date.prototype, "constructor", $Date, DONT_ENUM); // Set up non-enumerable functions of the Date prototype object and // set their names. diff --git a/deps/v8/src/dateparser-inl.h b/deps/v8/src/dateparser-inl.h index 4e866e18d29..f7360f8c023 100644 --- a/deps/v8/src/dateparser-inl.h +++ b/deps/v8/src/dateparser-inl.h @@ -5,7 +5,7 @@ #ifndef V8_DATEPARSER_INL_H_ #define V8_DATEPARSER_INL_H_ -#include "dateparser.h" +#include "src/dateparser.h" namespace v8 { namespace internal { @@ -14,7 +14,7 @@ template <typename Char> bool DateParser::Parse(Vector<Char> str, FixedArray* out, UnicodeCache* unicode_cache) { - ASSERT(out->length() >= OUTPUT_SIZE); + DCHECK(out->length() >= OUTPUT_SIZE); InputReader<Char> in(unicode_cache, str); DateStringTokenizer<Char> scanner(&in); TimeZoneComposer tz; @@ -175,7 +175,7 @@ DateParser::DateToken DateParser::DateStringTokenizer<CharType>::Scan() { if (in_->Skip('.')) return DateToken::Symbol('.'); if (in_->Skip(')')) return DateToken::Symbol(')'); if (in_->IsAsciiAlphaOrAbove()) { - ASSERT(KeywordTable::kPrefixLength == 3); + DCHECK(KeywordTable::kPrefixLength == 3); uint32_t buffer[3] = {0, 0, 0}; int length = in_->ReadWord(buffer, 3); int index = KeywordTable::Lookup(buffer, length); @@ -200,9 +200,9 @@ DateParser::DateToken DateParser::ParseES5DateTime( DayComposer* day, TimeComposer* time, TimeZoneComposer* tz) { - ASSERT(day->IsEmpty()); - ASSERT(time->IsEmpty()); - ASSERT(tz->IsEmpty()); + DCHECK(day->IsEmpty()); + DCHECK(time->IsEmpty()); + DCHECK(tz->IsEmpty()); // Parse mandatory date string: [('-'|'+')yy]yyyy[':'MM[':'DD]] if (scanner->Peek().IsAsciiSign()) { diff --git a/deps/v8/src/dateparser.cc b/deps/v8/src/dateparser.cc index 0c2c18b34cf..5db0391a677 100644 --- a/deps/v8/src/dateparser.cc +++ b/deps/v8/src/dateparser.cc @@ -2,9 +2,9 @@ // Use of this source code is governed by a BSD-style license that can be // found in the LICENSE file. -#include "v8.h" +#include "src/v8.h" -#include "dateparser.h" +#include "src/dateparser.h" namespace v8 { namespace internal { @@ -177,7 +177,7 @@ int DateParser::ReadMilliseconds(DateToken token) { // most significant digits. int factor = 1; do { - ASSERT(factor <= 100000000); // factor won't overflow. 
+ DCHECK(factor <= 100000000); // factor won't overflow. factor *= 10; length--; } while (length > 3); diff --git a/deps/v8/src/dateparser.h b/deps/v8/src/dateparser.h index 1b4efb6ae17..f2845902641 100644 --- a/deps/v8/src/dateparser.h +++ b/deps/v8/src/dateparser.h @@ -5,8 +5,8 @@ #ifndef V8_DATEPARSER_H_ #define V8_DATEPARSER_H_ -#include "allocation.h" -#include "char-predicates-inl.h" +#include "src/allocation.h" +#include "src/char-predicates-inl.h" namespace v8 { namespace internal { @@ -151,19 +151,19 @@ class DateParser : public AllStatic { int length() { return length_; } int number() { - ASSERT(IsNumber()); + DCHECK(IsNumber()); return value_; } KeywordType keyword_type() { - ASSERT(IsKeyword()); + DCHECK(IsKeyword()); return static_cast<KeywordType>(tag_); } int keyword_value() { - ASSERT(IsKeyword()); + DCHECK(IsKeyword()); return value_; } char symbol() { - ASSERT(IsSymbol()); + DCHECK(IsSymbol()); return static_cast<char>(value_); } bool IsSymbol(char symbol) { @@ -179,7 +179,7 @@ class DateParser : public AllStatic { return tag_ == kSymbolTag && (value_ == '-' || value_ == '+'); } int ascii_sign() { - ASSERT(IsAsciiSign()); + DCHECK(IsAsciiSign()); return 44 - value_; } bool IsKeywordZ() { diff --git a/deps/v8/src/debug-agent.cc b/deps/v8/src/debug-agent.cc deleted file mode 100644 index 94f334bfe4d..00000000000 --- a/deps/v8/src/debug-agent.cc +++ /dev/null @@ -1,481 +0,0 @@ -// Copyright 2012 the V8 project authors. All rights reserved. -// Use of this source code is governed by a BSD-style license that can be -// found in the LICENSE file. - -#include "v8.h" -#include "debug.h" -#include "debug-agent.h" -#include "platform/socket.h" - -namespace v8 { -namespace internal { - -// Public V8 debugger API message handler function. This function just delegates -// to the debugger agent through it's data parameter. -void DebuggerAgentMessageHandler(const v8::Debug::Message& message) { - Isolate* isolate = reinterpret_cast<Isolate*>(message.GetIsolate()); - DebuggerAgent* agent = isolate->debugger_agent_instance(); - ASSERT(agent != NULL); - agent->DebuggerMessage(message); -} - - -DebuggerAgent::DebuggerAgent(Isolate* isolate, const char* name, int port) - : Thread(name), - isolate_(isolate), - name_(StrDup(name)), - port_(port), - server_(new Socket), - terminate_(false), - session_(NULL), - terminate_now_(0), - listening_(0) { - ASSERT(isolate_->debugger_agent_instance() == NULL); - isolate_->set_debugger_agent_instance(this); -} - - -DebuggerAgent::~DebuggerAgent() { - isolate_->set_debugger_agent_instance(NULL); - delete server_; -} - - -// Debugger agent main thread. -void DebuggerAgent::Run() { - // Allow this socket to reuse port even if still in TIME_WAIT. - server_->SetReuseAddress(true); - - // First bind the socket to the requested port. - bool bound = false; - while (!bound && !terminate_) { - bound = server_->Bind(port_); - - // If an error occurred wait a bit before retrying. The most common error - // would be that the port is already in use so this avoids a busy loop and - // make the agent take over the port when it becomes free. - if (!bound) { - const TimeDelta kTimeout = TimeDelta::FromSeconds(1); - PrintF("Failed to open socket on port %d, " - "waiting %d ms before retrying\n", port_, - static_cast<int>(kTimeout.InMilliseconds())); - if (!terminate_now_.WaitFor(kTimeout)) { - if (terminate_) return; - } - } - } - - // Accept connections on the bound port. 
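
Returning to DateParser::ReadMilliseconds() above: the loop normalizes a fractional-seconds token to exactly three digits, so ".123456" yields 123 ms and ".5" yields 500 ms. A compact restatement of that arithmetic (a sketch; the surrounding code also guards the factor against overflow, per the DCHECK above):

    // value = the fraction's digits as an integer, length = digit count.
    int FractionToMilliseconds(int value, int length) {
      while (length > 3) { value /= 10; length--; }  // drop least significant digits
      while (length < 3) { value *= 10; length++; }  // pad short fractions
      return value;
    }
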
- while (!terminate_) { - bool ok = server_->Listen(1); - listening_.Signal(); - if (ok) { - // Accept the new connection. - Socket* client = server_->Accept(); - ok = client != NULL; - if (ok) { - // Create and start a new session. - CreateSession(client); - } - } - } -} - - -void DebuggerAgent::Shutdown() { - // Set the termination flag. - terminate_ = true; - - // Signal termination and make the server exit either its listen call or its - // binding loop. This makes sure that no new sessions can be established. - terminate_now_.Signal(); - server_->Shutdown(); - Join(); - - // Close existing session if any. - CloseSession(); -} - - -void DebuggerAgent::WaitUntilListening() { - listening_.Wait(); -} - -static const char* kCreateSessionMessage = - "Remote debugging session already active\r\n"; - -void DebuggerAgent::CreateSession(Socket* client) { - LockGuard<RecursiveMutex> session_access_guard(&session_access_); - - // If another session is already established terminate this one. - if (session_ != NULL) { - int len = StrLength(kCreateSessionMessage); - int res = client->Send(kCreateSessionMessage, len); - delete client; - USE(res); - return; - } - - // Create a new session and hook up the debug message handler. - session_ = new DebuggerAgentSession(this, client); - isolate_->debugger()->SetMessageHandler(DebuggerAgentMessageHandler); - session_->Start(); -} - - -void DebuggerAgent::CloseSession() { - LockGuard<RecursiveMutex> session_access_guard(&session_access_); - - // Terminate the session. - if (session_ != NULL) { - session_->Shutdown(); - session_->Join(); - delete session_; - session_ = NULL; - } -} - - -void DebuggerAgent::DebuggerMessage(const v8::Debug::Message& message) { - LockGuard<RecursiveMutex> session_access_guard(&session_access_); - - // Forward the message handling to the session. - if (session_ != NULL) { - v8::String::Value val(message.GetJSON()); - session_->DebuggerMessage(Vector<uint16_t>(const_cast<uint16_t*>(*val), - val.length())); - } -} - - -DebuggerAgentSession::~DebuggerAgentSession() { - delete client_; -} - - -void DebuggerAgent::OnSessionClosed(DebuggerAgentSession* session) { - // Don't do anything during termination. - if (terminate_) { - return; - } - - // Terminate the session. - LockGuard<RecursiveMutex> session_access_guard(&session_access_); - ASSERT(session == session_); - if (session == session_) { - session_->Shutdown(); - delete session_; - session_ = NULL; - } -} - - -void DebuggerAgentSession::Run() { - // Send the hello message. - bool ok = DebuggerAgentUtil::SendConnectMessage(client_, agent_->name_.get()); - if (!ok) return; - - while (true) { - // Read data from the debugger front end. - SmartArrayPointer<char> message = - DebuggerAgentUtil::ReceiveMessage(client_); - - const char* msg = message.get(); - bool is_closing_session = (msg == NULL); - - if (msg == NULL) { - // If we lost the connection, then simulate a disconnect msg: - msg = "{\"seq\":1,\"type\":\"request\",\"command\":\"disconnect\"}"; - - } else { - // Check if we're getting a disconnect request: - const char* disconnectRequestStr = - "\"type\":\"request\",\"command\":\"disconnect\"}"; - const char* result = strstr(msg, disconnectRequestStr); - if (result != NULL) { - is_closing_session = true; - } - } - - // Convert UTF-8 to UTF-16. 
- unibrow::Utf8Decoder<128> decoder(msg, StrLength(msg)); - int utf16_length = decoder.Utf16Length(); - ScopedVector<uint16_t> temp(utf16_length + 1); - decoder.WriteUtf16(temp.start(), utf16_length); - - // Send the request received to the debugger. - v8::Debug::SendCommand(reinterpret_cast<v8::Isolate*>(agent_->isolate()), - temp.start(), - utf16_length, - NULL); - - if (is_closing_session) { - // Session is closed. - agent_->OnSessionClosed(this); - return; - } - } -} - - -void DebuggerAgentSession::DebuggerMessage(Vector<uint16_t> message) { - DebuggerAgentUtil::SendMessage(client_, message); -} - - -void DebuggerAgentSession::Shutdown() { - // Shutdown the socket to end the blocking receive. - client_->Shutdown(); -} - - -const char* const DebuggerAgentUtil::kContentLength = "Content-Length"; - - -SmartArrayPointer<char> DebuggerAgentUtil::ReceiveMessage(Socket* conn) { - int received; - - // Read header. - int content_length = 0; - while (true) { - const int kHeaderBufferSize = 80; - char header_buffer[kHeaderBufferSize]; - int header_buffer_position = 0; - char c = '\0'; // One character receive buffer. - char prev_c = '\0'; // Previous character. - - // Read until CRLF. - while (!(c == '\n' && prev_c == '\r')) { - prev_c = c; - received = conn->Receive(&c, 1); - if (received == 0) { - PrintF("Error %d\n", Socket::GetLastError()); - return SmartArrayPointer<char>(); - } - - // Add character to header buffer. - if (header_buffer_position < kHeaderBufferSize) { - header_buffer[header_buffer_position++] = c; - } - } - - // Check for end of header (empty header line). - if (header_buffer_position == 2) { // Receive buffer contains CRLF. - break; - } - - // Terminate header. - ASSERT(header_buffer_position > 1); // At least CRLF is received. - ASSERT(header_buffer_position <= kHeaderBufferSize); - header_buffer[header_buffer_position - 2] = '\0'; - - // Split header. - char* key = header_buffer; - char* value = NULL; - for (int i = 0; header_buffer[i] != '\0'; i++) { - if (header_buffer[i] == ':') { - header_buffer[i] = '\0'; - value = header_buffer + i + 1; - while (*value == ' ') { - value++; - } - break; - } - } - - // Check that key is Content-Length. - if (strcmp(key, kContentLength) == 0) { - // Get the content length value if present and within a sensible range. - if (value == NULL || strlen(value) > 7) { - return SmartArrayPointer<char>(); - } - for (int i = 0; value[i] != '\0'; i++) { - // Bail out if illegal data. - if (value[i] < '0' || value[i] > '9') { - return SmartArrayPointer<char>(); - } - content_length = 10 * content_length + (value[i] - '0'); - } - } else { - // For now just print all other headers than Content-Length. - PrintF("%s: %s\n", key, value != NULL ? value : "(no value)"); - } - } - - // Return now if no body. - if (content_length == 0) { - return SmartArrayPointer<char>(); - } - - // Read body. - char* buffer = NewArray<char>(content_length + 1); - received = ReceiveAll(conn, buffer, content_length); - if (received < content_length) { - PrintF("Error %d\n", Socket::GetLastError()); - return SmartArrayPointer<char>(); - } - buffer[content_length] = '\0'; - - return SmartArrayPointer<char>(buffer); -} - - -bool DebuggerAgentUtil::SendConnectMessage(Socket* conn, - const char* embedding_host) { - static const int kBufferSize = 80; - char buffer[kBufferSize]; // Sending buffer. - bool ok; - int len; - - // Send the header. 
- len = OS::SNPrintF(Vector<char>(buffer, kBufferSize), - "Type: connect\r\n"); - ok = conn->Send(buffer, len); - if (!ok) return false; - - len = OS::SNPrintF(Vector<char>(buffer, kBufferSize), - "V8-Version: %s\r\n", v8::V8::GetVersion()); - ok = conn->Send(buffer, len); - if (!ok) return false; - - len = OS::SNPrintF(Vector<char>(buffer, kBufferSize), - "Protocol-Version: 1\r\n"); - ok = conn->Send(buffer, len); - if (!ok) return false; - - if (embedding_host != NULL) { - len = OS::SNPrintF(Vector<char>(buffer, kBufferSize), - "Embedding-Host: %s\r\n", embedding_host); - ok = conn->Send(buffer, len); - if (!ok) return false; - } - - len = OS::SNPrintF(Vector<char>(buffer, kBufferSize), - "%s: 0\r\n", kContentLength); - ok = conn->Send(buffer, len); - if (!ok) return false; - - // Terminate header with empty line. - len = OS::SNPrintF(Vector<char>(buffer, kBufferSize), "\r\n"); - ok = conn->Send(buffer, len); - if (!ok) return false; - - // No body for connect message. - - return true; -} - - -bool DebuggerAgentUtil::SendMessage(Socket* conn, - const Vector<uint16_t> message) { - static const int kBufferSize = 80; - char buffer[kBufferSize]; // Sending buffer both for header and body. - - // Calculate the message size in UTF-8 encoding. - int utf8_len = 0; - int previous = unibrow::Utf16::kNoPreviousCharacter; - for (int i = 0; i < message.length(); i++) { - uint16_t character = message[i]; - utf8_len += unibrow::Utf8::Length(character, previous); - previous = character; - } - - // Send the header. - int len = OS::SNPrintF(Vector<char>(buffer, kBufferSize), - "%s: %d\r\n", kContentLength, utf8_len); - if (conn->Send(buffer, len) < len) { - return false; - } - - // Terminate header with empty line. - len = OS::SNPrintF(Vector<char>(buffer, kBufferSize), "\r\n"); - if (conn->Send(buffer, len) < len) { - return false; - } - - // Send message body as UTF-8. - int buffer_position = 0; // Current buffer position. - previous = unibrow::Utf16::kNoPreviousCharacter; - for (int i = 0; i < message.length(); i++) { - // Write next UTF-8 encoded character to buffer. - uint16_t character = message[i]; - buffer_position += - unibrow::Utf8::Encode(buffer + buffer_position, character, previous); - ASSERT(buffer_position <= kBufferSize); - - // Send buffer if full or last character is encoded. - if (kBufferSize - buffer_position < - unibrow::Utf16::kMaxExtraUtf8BytesForOneUtf16CodeUnit || - i == message.length() - 1) { - if (unibrow::Utf16::IsLeadSurrogate(character)) { - const int kEncodedSurrogateLength = - unibrow::Utf16::kUtf8BytesToCodeASurrogate; - ASSERT(buffer_position >= kEncodedSurrogateLength); - len = buffer_position - kEncodedSurrogateLength; - if (conn->Send(buffer, len) < len) { - return false; - } - for (int i = 0; i < kEncodedSurrogateLength; i++) { - buffer[i] = buffer[buffer_position + i]; - } - buffer_position = kEncodedSurrogateLength; - } else { - len = buffer_position; - if (conn->Send(buffer, len) < len) { - return false; - } - buffer_position = 0; - } - } - previous = character; - } - - return true; -} - - -bool DebuggerAgentUtil::SendMessage(Socket* conn, - const v8::Handle<v8::String> request) { - static const int kBufferSize = 80; - char buffer[kBufferSize]; // Sending buffer both for header and body. - - // Convert the request to UTF-8 encoding. - v8::String::Utf8Value utf8_request(request); - - // Send the header. 
- int len = OS::SNPrintF(Vector<char>(buffer, kBufferSize), - "Content-Length: %d\r\n", utf8_request.length()); - if (conn->Send(buffer, len) < len) { - return false; - } - - // Terminate header with empty line. - len = OS::SNPrintF(Vector<char>(buffer, kBufferSize), "\r\n"); - if (conn->Send(buffer, len) < len) { - return false; - } - - // Send message body as UTF-8. - len = utf8_request.length(); - if (conn->Send(*utf8_request, len) < len) { - return false; - } - - return true; -} - - -// Receive the full buffer before returning unless an error occours. -int DebuggerAgentUtil::ReceiveAll(Socket* conn, char* data, int len) { - int total_received = 0; - while (total_received < len) { - int received = conn->Receive(data + total_received, len - total_received); - if (received == 0) { - return total_received; - } - total_received += received; - } - return total_received; -} - -} } // namespace v8::internal diff --git a/deps/v8/src/debug-agent.h b/deps/v8/src/debug-agent.h deleted file mode 100644 index 3e3f25a5ddd..00000000000 --- a/deps/v8/src/debug-agent.h +++ /dev/null @@ -1,93 +0,0 @@ -// Copyright 2006-2008 the V8 project authors. All rights reserved. -// Use of this source code is governed by a BSD-style license that can be -// found in the LICENSE file. - -#ifndef V8_DEBUG_AGENT_H_ -#define V8_DEBUG_AGENT_H_ - -#include "../include/v8-debug.h" -#include "platform.h" - -namespace v8 { -namespace internal { - -// Forward decelrations. -class DebuggerAgentSession; -class Socket; - - -// Debugger agent which starts a socket listener on the debugger port and -// handles connection from a remote debugger. -class DebuggerAgent: public Thread { - public: - DebuggerAgent(Isolate* isolate, const char* name, int port); - ~DebuggerAgent(); - - void Shutdown(); - void WaitUntilListening(); - - Isolate* isolate() { return isolate_; } - - private: - void Run(); - void CreateSession(Socket* socket); - void DebuggerMessage(const v8::Debug::Message& message); - void CloseSession(); - void OnSessionClosed(DebuggerAgentSession* session); - - Isolate* isolate_; - SmartArrayPointer<const char> name_; // Name of the embedding application. - int port_; // Port to use for the agent. - Socket* server_; // Server socket for listen/accept. - bool terminate_; // Termination flag. - RecursiveMutex session_access_; // Mutex guarding access to session_. - DebuggerAgentSession* session_; // Current active session if any. - Semaphore terminate_now_; // Semaphore to signal termination. - Semaphore listening_; - - friend class DebuggerAgentSession; - friend void DebuggerAgentMessageHandler(const v8::Debug::Message& message); - - DISALLOW_COPY_AND_ASSIGN(DebuggerAgent); -}; - - -// Debugger agent session. The session receives requests from the remote -// debugger and sends debugger events/responses to the remote debugger. -class DebuggerAgentSession: public Thread { - public: - DebuggerAgentSession(DebuggerAgent* agent, Socket* client) - : Thread("v8:DbgAgntSessn"), - agent_(agent), client_(client) {} - ~DebuggerAgentSession(); - - void DebuggerMessage(Vector<uint16_t> message); - void Shutdown(); - - private: - void Run(); - - void DebuggerMessage(Vector<char> message); - - DebuggerAgent* agent_; - Socket* client_; - - DISALLOW_COPY_AND_ASSIGN(DebuggerAgentSession); -}; - - -// Utility methods factored out to be used by the D8 shell as well. 
-class DebuggerAgentUtil { - public: - static const char* const kContentLength; - - static SmartArrayPointer<char> ReceiveMessage(Socket* conn); - static bool SendConnectMessage(Socket* conn, const char* embedding_host); - static bool SendMessage(Socket* conn, const Vector<uint16_t> message); - static bool SendMessage(Socket* conn, const v8::Handle<v8::String> message); - static int ReceiveAll(Socket* conn, char* data, int len); -}; - -} } // namespace v8::internal - -#endif // V8_DEBUG_AGENT_H_ diff --git a/deps/v8/src/debug-debugger.js b/deps/v8/src/debug-debugger.js index 0ce88328ba3..a4c8801ea6d 100644 --- a/deps/v8/src/debug-debugger.js +++ b/deps/v8/src/debug-debugger.js @@ -19,7 +19,9 @@ Debug.DebugEvent = { Break: 1, NewFunction: 3, BeforeCompile: 4, AfterCompile: 5, - ScriptCollected: 6 }; + CompileError: 6, + PromiseEvent: 7, + AsyncTaskEvent: 8 }; // Types of exceptions that can be broken upon. Debug.ExceptionBreak = { Caught : 0, @@ -986,44 +988,39 @@ ExecutionState.prototype.debugCommandProcessor = function(opt_is_running) { }; -function MakeBreakEvent(exec_state, break_points_hit) { - return new BreakEvent(exec_state, break_points_hit); +function MakeBreakEvent(break_id, break_points_hit) { + return new BreakEvent(break_id, break_points_hit); } -function BreakEvent(exec_state, break_points_hit) { - this.exec_state_ = exec_state; +function BreakEvent(break_id, break_points_hit) { + this.frame_ = new FrameMirror(break_id, 0); this.break_points_hit_ = break_points_hit; } -BreakEvent.prototype.executionState = function() { - return this.exec_state_; -}; - - BreakEvent.prototype.eventType = function() { return Debug.DebugEvent.Break; }; BreakEvent.prototype.func = function() { - return this.exec_state_.frame(0).func(); + return this.frame_.func(); }; BreakEvent.prototype.sourceLine = function() { - return this.exec_state_.frame(0).sourceLine(); + return this.frame_.sourceLine(); }; BreakEvent.prototype.sourceColumn = function() { - return this.exec_state_.frame(0).sourceColumn(); + return this.frame_.sourceColumn(); }; BreakEvent.prototype.sourceLineText = function() { - return this.exec_state_.frame(0).sourceLineText(); + return this.frame_.sourceLineText(); }; @@ -1036,8 +1033,7 @@ BreakEvent.prototype.toJSONProtocol = function() { var o = { seq: next_response_seq++, type: "event", event: "break", - body: { invocationText: this.exec_state_.frame(0).invocationText(), - } + body: { invocationText: this.frame_.invocationText() } }; // Add script related information to the event if available. 
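Before going further into the debug-debugger.js changes, a note on the debug-agent.cc/debug-agent.h removal above: the deleted DebuggerAgent/DebuggerAgentUtil code implemented V8's built-in TCP debug agent, which frames every protocol message as a `Content-Length: <n>` header line, an empty line, and then exactly n bytes of UTF-8 body. That byte count is why the deleted `SendMessage()` walks the UTF-16 message (surrogate pairs included) to compute `utf8_len` before sending. A minimal sketch of the same framing in Node-style JavaScript; `frameMessage` is a hypothetical helper, not an API of V8 or Node:

```js
// Frame a JSON payload the way the removed debug agent did:
// "Content-Length: <bytes>\r\n\r\n" followed by the UTF-8 body.
function frameMessage(json) {
  var body = new Buffer(json, 'utf8');
  // The length counts UTF-8 *bytes*, not string.length (UTF-16 code units);
  // e.g. '\u20ac'.length === 1 but it occupies 3 bytes on the wire.
  var header = new Buffer('Content-Length: ' + body.length + '\r\n\r\n', 'ascii');
  return Buffer.concat([header, body]);
}

// Example: a version request, ready to pass to socket.write().
var packet = frameMessage(JSON.stringify({ seq: 1, type: 'request', command: 'version' }));
```

The `SendConnectMessage()` handshake deleted above is the degenerate case of this format: headers only (`Type: connect`, `V8-Version`, `Protocol-Version: 1`, optional `Embedding-Host`) plus `Content-Length: 0`, with no body.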
@@ -1070,24 +1066,19 @@ BreakEvent.prototype.toJSONProtocol = function() {
 };
 
 
-function MakeExceptionEvent(exec_state, exception, uncaught, promise) {
-  return new ExceptionEvent(exec_state, exception, uncaught, promise);
+function MakeExceptionEvent(break_id, exception, uncaught, promise) {
+  return new ExceptionEvent(break_id, exception, uncaught, promise);
 }
 
 
-function ExceptionEvent(exec_state, exception, uncaught, promise) {
-  this.exec_state_ = exec_state;
+function ExceptionEvent(break_id, exception, uncaught, promise) {
+  this.exec_state_ = new ExecutionState(break_id);
   this.exception_ = exception;
   this.uncaught_ = uncaught;
   this.promise_ = promise;
 }
 
 
-ExceptionEvent.prototype.executionState = function() {
-  return this.exec_state_;
-};
-
-
 ExceptionEvent.prototype.eventType = function() {
   return Debug.DebugEvent.Exception;
 };
@@ -1154,29 +1145,19 @@ ExceptionEvent.prototype.toJSONProtocol = function() {
 };
 
 
-function MakeCompileEvent(exec_state, script, before) {
-  return new CompileEvent(exec_state, script, before);
+function MakeCompileEvent(script, type) {
+  return new CompileEvent(script, type);
 }
 
 
-function CompileEvent(exec_state, script, before) {
-  this.exec_state_ = exec_state;
+function CompileEvent(script, type) {
   this.script_ = MakeMirror(script);
-  this.before_ = before;
+  this.type_ = type;
 }
 
 
-CompileEvent.prototype.executionState = function() {
-  return this.exec_state_;
-};
-
-
 CompileEvent.prototype.eventType = function() {
-  if (this.before_) {
-    return Debug.DebugEvent.BeforeCompile;
-  } else {
-    return Debug.DebugEvent.AfterCompile;
-  }
+  return this.type_;
 };
 
 
@@ -1188,10 +1169,16 @@ CompileEvent.prototype.script = function() {
 CompileEvent.prototype.toJSONProtocol = function() {
   var o = new ProtocolMessage();
   o.running = true;
-  if (this.before_) {
-    o.event = "beforeCompile";
-  } else {
-    o.event = "afterCompile";
+  switch (this.type_) {
+    case Debug.DebugEvent.BeforeCompile:
+      o.event = "beforeCompile";
+      break;
+    case Debug.DebugEvent.AfterCompile:
+      o.event = "afterCompile";
+      break;
+    case Debug.DebugEvent.CompileError:
+      o.event = "compileError";
+      break;
   }
   o.body = {};
   o.body.script = this.script_;
@@ -1200,37 +1184,6 @@ CompileEvent.prototype.toJSONProtocol = function() {
 };
 
 
-function MakeScriptCollectedEvent(exec_state, id) {
-  return new ScriptCollectedEvent(exec_state, id);
-}
-
-
-function ScriptCollectedEvent(exec_state, id) {
-  this.exec_state_ = exec_state;
-  this.id_ = id;
-}
-
-
-ScriptCollectedEvent.prototype.id = function() {
-  return this.id_;
-};
-
-
-ScriptCollectedEvent.prototype.executionState = function() {
-  return this.exec_state_;
-};
-
-
-ScriptCollectedEvent.prototype.toJSONProtocol = function() {
-  var o = new ProtocolMessage();
-  o.running = true;
-  o.event = "scriptCollected";
-  o.body = {};
-  o.body.script = { id: this.id() };
-  return o.toJSONProtocol();
-};
-
-
 function MakeScriptObject_(script, include_source) {
   var o = { id: script.id(),
             name: script.name(),
@@ -1248,6 +1201,66 @@ function MakeScriptObject_(script, include_source) {
 }
 
 
+function MakePromiseEvent(event_data) {
+  return new PromiseEvent(event_data);
+}
+
+
+function PromiseEvent(event_data) {
+  this.promise_ = event_data.promise;
+  this.parentPromise_ = event_data.parentPromise;
+  this.status_ = event_data.status;
+  this.value_ = event_data.value;
+}
+
+
+PromiseEvent.prototype.promise = function() {
+  return MakeMirror(this.promise_);
+}
+
+
+PromiseEvent.prototype.parentPromise = function() {
+  return MakeMirror(this.parentPromise_);
+}
+
+
+PromiseEvent.prototype.status = function() {
+ return this.status_; +} + + +PromiseEvent.prototype.value = function() { + return MakeMirror(this.value_); +} + + +function MakeAsyncTaskEvent(event_data) { + return new AsyncTaskEvent(event_data); +} + + +function AsyncTaskEvent(event_data) { + this.type_ = event_data.type; + this.name_ = event_data.name; + this.id_ = event_data.id; +} + + +AsyncTaskEvent.prototype.type = function() { + return this.type_; +} + + +AsyncTaskEvent.prototype.name = function() { + return this.name_; +} + + +AsyncTaskEvent.prototype.id = function() { + return this.id_; +} + + function DebugCommandProcessor(exec_state, opt_is_running) { this.exec_state_ = exec_state; this.running_ = opt_is_running || false; @@ -1388,63 +1401,10 @@ DebugCommandProcessor.prototype.processDebugJSONRequest = function( } } - if (request.command == 'continue') { - this.continueRequest_(request, response); - } else if (request.command == 'break') { - this.breakRequest_(request, response); - } else if (request.command == 'setbreakpoint') { - this.setBreakPointRequest_(request, response); - } else if (request.command == 'changebreakpoint') { - this.changeBreakPointRequest_(request, response); - } else if (request.command == 'clearbreakpoint') { - this.clearBreakPointRequest_(request, response); - } else if (request.command == 'clearbreakpointgroup') { - this.clearBreakPointGroupRequest_(request, response); - } else if (request.command == 'disconnect') { - this.disconnectRequest_(request, response); - } else if (request.command == 'setexceptionbreak') { - this.setExceptionBreakRequest_(request, response); - } else if (request.command == 'listbreakpoints') { - this.listBreakpointsRequest_(request, response); - } else if (request.command == 'backtrace') { - this.backtraceRequest_(request, response); - } else if (request.command == 'frame') { - this.frameRequest_(request, response); - } else if (request.command == 'scopes') { - this.scopesRequest_(request, response); - } else if (request.command == 'scope') { - this.scopeRequest_(request, response); - } else if (request.command == 'setVariableValue') { - this.setVariableValueRequest_(request, response); - } else if (request.command == 'evaluate') { - this.evaluateRequest_(request, response); - } else if (request.command == 'lookup') { - this.lookupRequest_(request, response); - } else if (request.command == 'references') { - this.referencesRequest_(request, response); - } else if (request.command == 'source') { - this.sourceRequest_(request, response); - } else if (request.command == 'scripts') { - this.scriptsRequest_(request, response); - } else if (request.command == 'threads') { - this.threadsRequest_(request, response); - } else if (request.command == 'suspend') { - this.suspendRequest_(request, response); - } else if (request.command == 'version') { - this.versionRequest_(request, response); - } else if (request.command == 'changelive') { - this.changeLiveRequest_(request, response); - } else if (request.command == 'restartframe') { - this.restartFrameRequest_(request, response); - } else if (request.command == 'flags') { - this.debuggerFlagsRequest_(request, response); - } else if (request.command == 'v8flags') { - this.v8FlagsRequest_(request, response); - - // GC tools: - } else if (request.command == 'gc') { - this.gcRequest_(request, response); - + var key = request.command.toLowerCase(); + var handler = DebugCommandProcessor.prototype.dispatch_[key]; + if (IS_FUNCTION(handler)) { + %_CallFunction(this, request, response, handler); } else { throw new Error('Unknown command "' + 
request.command + '" in request'); } @@ -2492,6 +2452,40 @@ DebugCommandProcessor.prototype.gcRequest_ = function(request, response) { }; +DebugCommandProcessor.prototype.dispatch_ = (function() { + var proto = DebugCommandProcessor.prototype; + return { + "continue": proto.continueRequest_, + "break" : proto.breakRequest_, + "setbreakpoint" : proto.setBreakPointRequest_, + "changebreakpoint": proto.changeBreakPointRequest_, + "clearbreakpoint": proto.clearBreakPointRequest_, + "clearbreakpointgroup": proto.clearBreakPointGroupRequest_, + "disconnect": proto.disconnectRequest_, + "setexceptionbreak": proto.setExceptionBreakRequest_, + "listbreakpoints": proto.listBreakpointsRequest_, + "backtrace": proto.backtraceRequest_, + "frame": proto.frameRequest_, + "scopes": proto.scopesRequest_, + "scope": proto.scopeRequest_, + "setvariablevalue": proto.setVariableValueRequest_, + "evaluate": proto.evaluateRequest_, + "lookup": proto.lookupRequest_, + "references": proto.referencesRequest_, + "source": proto.sourceRequest_, + "scripts": proto.scriptsRequest_, + "threads": proto.threadsRequest_, + "suspend": proto.suspendRequest_, + "version": proto.versionRequest_, + "changelive": proto.changeLiveRequest_, + "restartframe": proto.restartFrameRequest_, + "flags": proto.debuggerFlagsRequest_, + "v8flag": proto.v8FlagsRequest_, + "gc": proto.gcRequest_, + }; +})(); + + // Check whether the previously processed command caused the VM to become // running. DebugCommandProcessor.prototype.isRunning = function() { @@ -2504,17 +2498,6 @@ DebugCommandProcessor.prototype.systemBreak = function(cmd, args) { }; -function NumberToHex8Str(n) { - var r = ""; - for (var i = 0; i < 8; ++i) { - var c = hexCharArray[n & 0x0F]; // hexCharArray is defined in uri.js - r = c + r; - n = n >>> 4; - } - return r; -} - - /** * Convert an Object to its debugger protocol representation. The representation * may be serilized to a JSON object using JSON.stringify(). diff --git a/deps/v8/src/debug.cc b/deps/v8/src/debug.cc index 3ecf8bada73..2ae8630885b 100644 --- a/deps/v8/src/debug.cc +++ b/deps/v8/src/debug.cc @@ -2,65 +2,56 @@ // Use of this source code is governed by a BSD-style license that can be // found in the LICENSE file. 
-#include "v8.h" - -#include "api.h" -#include "arguments.h" -#include "bootstrapper.h" -#include "code-stubs.h" -#include "codegen.h" -#include "compilation-cache.h" -#include "compiler.h" -#include "debug.h" -#include "deoptimizer.h" -#include "execution.h" -#include "full-codegen.h" -#include "global-handles.h" -#include "ic.h" -#include "ic-inl.h" -#include "isolate-inl.h" -#include "list.h" -#include "messages.h" -#include "natives.h" -#include "stub-cache.h" -#include "log.h" - -#include "../include/v8-debug.h" +#include "src/v8.h" + +#include "src/api.h" +#include "src/arguments.h" +#include "src/bootstrapper.h" +#include "src/code-stubs.h" +#include "src/codegen.h" +#include "src/compilation-cache.h" +#include "src/compiler.h" +#include "src/debug.h" +#include "src/deoptimizer.h" +#include "src/execution.h" +#include "src/full-codegen.h" +#include "src/global-handles.h" +#include "src/ic.h" +#include "src/ic-inl.h" +#include "src/isolate-inl.h" +#include "src/list.h" +#include "src/log.h" +#include "src/messages.h" +#include "src/natives.h" +#include "src/stub-cache.h" + +#include "include/v8-debug.h" namespace v8 { namespace internal { Debug::Debug(Isolate* isolate) - : has_break_points_(false), - script_cache_(NULL), - debug_info_list_(NULL), - disable_break_(false), + : debug_context_(Handle<Context>()), + event_listener_(Handle<Object>()), + event_listener_data_(Handle<Object>()), + message_handler_(NULL), + command_received_(0), + command_queue_(isolate->logger(), kQueueInitialSize), + event_command_queue_(isolate->logger(), kQueueInitialSize), + is_active_(false), + is_suppressed_(false), + live_edit_enabled_(true), // TODO(yangguo): set to false by default. + has_break_points_(false), + break_disabled_(false), break_on_exception_(false), break_on_uncaught_exception_(false), - promise_catch_handlers_(0), - promise_getters_(0), + script_cache_(NULL), + debug_info_list_(NULL), isolate_(isolate) { - memset(registers_, 0, sizeof(JSCallerSavedBuffer)); ThreadInit(); } -Debug::~Debug() { -} - - -static void PrintLn(v8::Local<v8::Value> value) { - v8::Local<v8::String> s = value->ToString(); - ScopedVector<char> data(s->Utf8Length() + 1); - if (data.start() == NULL) { - V8::FatalProcessOutOfMemory("PrintLn"); - return; - } - s->WriteUtf8(data.start()); - PrintF("%s\n", data.start()); -} - - static v8::Handle<v8::Context> GetDebugEventContext(Isolate* isolate) { Handle<Context> context = isolate->debug()->debugger_entry()->GetContext(); // Isolate::context() may have been NULL when "script collected" event @@ -82,16 +73,32 @@ BreakLocationIterator::BreakLocationIterator(Handle<DebugInfo> debug_info, BreakLocationIterator::~BreakLocationIterator() { - ASSERT(reloc_iterator_ != NULL); - ASSERT(reloc_iterator_original_ != NULL); + DCHECK(reloc_iterator_ != NULL); + DCHECK(reloc_iterator_original_ != NULL); delete reloc_iterator_; delete reloc_iterator_original_; } +// Check whether a code stub with the specified major key is a possible break +// point location when looking for source break locations. +static bool IsSourceBreakStub(Code* code) { + CodeStub::Major major_key = CodeStub::GetMajorKey(code); + return major_key == CodeStub::CallFunction; +} + + +// Check whether a code stub with the specified major key is a possible break +// location. 
+static bool IsBreakStub(Code* code) { + CodeStub::Major major_key = CodeStub::GetMajorKey(code); + return major_key == CodeStub::CallFunction; +} + + void BreakLocationIterator::Next() { DisallowHeapAllocation no_gc; - ASSERT(!RinfoDone()); + DCHECK(!RinfoDone()); // Iterate through reloc info for code and original code stopping at each // breakable code target. @@ -112,8 +119,8 @@ void BreakLocationIterator::Next() { // statement position. position_ = static_cast<int>( rinfo()->data() - debug_info_->shared()->start_position()); - ASSERT(position_ >= 0); - ASSERT(statement_position_ >= 0); + DCHECK(position_ >= 0); + DCHECK(statement_position_ >= 0); } if (IsDebugBreakSlot()) { @@ -138,15 +145,14 @@ void BreakLocationIterator::Next() { if (IsDebuggerStatement()) { break_point_++; return; - } - if (type_ == ALL_BREAK_LOCATIONS) { - if (Debug::IsBreakStub(code)) { + } else if (type_ == ALL_BREAK_LOCATIONS) { + if (IsBreakStub(code)) { break_point_++; return; } } else { - ASSERT(type_ == SOURCE_BREAK_LOCATIONS); - if (Debug::IsSourceBreakStub(code)) { + DCHECK(type_ == SOURCE_BREAK_LOCATIONS); + if (IsSourceBreakStub(code)) { break_point_++; return; } @@ -266,10 +272,8 @@ bool BreakLocationIterator::Done() const { void BreakLocationIterator::SetBreakPoint(Handle<Object> break_point_object) { // If there is not already a real break point here patch code with debug // break. - if (!HasBreakPoint()) { - SetDebugBreak(); - } - ASSERT(IsDebugBreak() || IsDebuggerStatement()); + if (!HasBreakPoint()) SetDebugBreak(); + DCHECK(IsDebugBreak() || IsDebuggerStatement()); // Set the break point information. DebugInfo::SetBreakPoint(debug_info_, code_position(), position(), statement_position(), @@ -283,20 +287,18 @@ void BreakLocationIterator::ClearBreakPoint(Handle<Object> break_point_object) { // If there are no more break points here remove the debug break. if (!HasBreakPoint()) { ClearDebugBreak(); - ASSERT(!IsDebugBreak()); + DCHECK(!IsDebugBreak()); } } void BreakLocationIterator::SetOneShot() { // Debugger statement always calls debugger. No need to modify it. - if (IsDebuggerStatement()) { - return; - } + if (IsDebuggerStatement()) return; // If there is a real break point here no more to do. if (HasBreakPoint()) { - ASSERT(IsDebugBreak()); + DCHECK(IsDebugBreak()); return; } @@ -307,35 +309,29 @@ void BreakLocationIterator::SetOneShot() { void BreakLocationIterator::ClearOneShot() { // Debugger statement always calls debugger. No need to modify it. - if (IsDebuggerStatement()) { - return; - } + if (IsDebuggerStatement()) return; // If there is a real break point here no more to do. if (HasBreakPoint()) { - ASSERT(IsDebugBreak()); + DCHECK(IsDebugBreak()); return; } // Patch code removing debug break. ClearDebugBreak(); - ASSERT(!IsDebugBreak()); + DCHECK(!IsDebugBreak()); } void BreakLocationIterator::SetDebugBreak() { // Debugger statement always calls debugger. No need to modify it. - if (IsDebuggerStatement()) { - return; - } + if (IsDebuggerStatement()) return; // If there is already a break point here just return. This might happen if // the same code is flooded with break points twice. Flooding the same // function twice might happen when stepping in a function with an exception // handler as the handler and the function is the same. - if (IsDebugBreak()) { - return; - } + if (IsDebugBreak()) return; if (RelocInfo::IsJSReturn(rmode())) { // Patch the frame exit code with a break point. @@ -347,15 +343,13 @@ void BreakLocationIterator::SetDebugBreak() { // Patch the IC call. 
SetDebugBreakAtIC(); } - ASSERT(IsDebugBreak()); + DCHECK(IsDebugBreak()); } void BreakLocationIterator::ClearDebugBreak() { // Debugger statement always calls debugger. No need to modify it. - if (IsDebuggerStatement()) { - return; - } + if (IsDebuggerStatement()) return; if (RelocInfo::IsJSReturn(rmode())) { // Restore the frame exit code. @@ -367,7 +361,7 @@ void BreakLocationIterator::ClearDebugBreak() { // Patch the IC call. ClearDebugBreakAtIC(); } - ASSERT(!IsDebugBreak()); + DCHECK(!IsDebugBreak()); } @@ -379,7 +373,7 @@ bool BreakLocationIterator::IsStepInLocation(Isolate* isolate) { Address target = original_rinfo()->target_address(); Handle<Code> target_code(Code::GetCodeFromTargetAddress(target)); if (target_code->kind() == Code::STUB) { - return target_code->major_key() == CodeStub::CallFunction; + return CodeStub::GetMajorKey(*target_code) == CodeStub::CallFunction; } return target_code->is_call_stub(); } @@ -404,7 +398,8 @@ void BreakLocationIterator::PrepareStepIn(Isolate* isolate) { } bool is_call_function_stub = (maybe_call_function_stub->kind() == Code::STUB && - maybe_call_function_stub->major_key() == CodeStub::CallFunction); + CodeStub::GetMajorKey(*maybe_call_function_stub) == + CodeStub::CallFunction); // Step in through construct call requires no changes to the running code. // Step in through getters/setters should already be prepared as well @@ -413,7 +408,7 @@ void BreakLocationIterator::PrepareStepIn(Isolate* isolate) { // Step in through CallFunction stub should also be prepared by caller of // this function (Debug::PrepareStep) which should flood target function // with breakpoints. - ASSERT(RelocInfo::IsConstructCall(rmode()) || + DCHECK(RelocInfo::IsConstructCall(rmode()) || target_code->is_inline_cache_stub() || is_call_function_stub); #endif @@ -443,6 +438,53 @@ bool BreakLocationIterator::IsDebugBreak() { } +// Find the builtin to use for invoking the debug break +static Handle<Code> DebugBreakForIC(Handle<Code> code, RelocInfo::Mode mode) { + Isolate* isolate = code->GetIsolate(); + + // Find the builtin debug break function matching the calling convention + // used by the call site. + if (code->is_inline_cache_stub()) { + switch (code->kind()) { + case Code::CALL_IC: + return isolate->builtins()->CallICStub_DebugBreak(); + + case Code::LOAD_IC: + return isolate->builtins()->LoadIC_DebugBreak(); + + case Code::STORE_IC: + return isolate->builtins()->StoreIC_DebugBreak(); + + case Code::KEYED_LOAD_IC: + return isolate->builtins()->KeyedLoadIC_DebugBreak(); + + case Code::KEYED_STORE_IC: + return isolate->builtins()->KeyedStoreIC_DebugBreak(); + + case Code::COMPARE_NIL_IC: + return isolate->builtins()->CompareNilIC_DebugBreak(); + + default: + UNREACHABLE(); + } + } + if (RelocInfo::IsConstructCall(mode)) { + if (code->has_function_cache()) { + return isolate->builtins()->CallConstructStub_Recording_DebugBreak(); + } else { + return isolate->builtins()->CallConstructStub_DebugBreak(); + } + } + if (code->kind() == Code::STUB) { + DCHECK(CodeStub::GetMajorKey(*code) == CodeStub::CallFunction); + return isolate->builtins()->CallFunctionStub_DebugBreak(); + } + + UNREACHABLE(); + return Handle<Code>::null(); +} + + void BreakLocationIterator::SetDebugBreakAtIC() { // Patch the original code with the current address as the current address // might have changed by the inline caching since the code was copied. 
@@ -455,7 +497,7 @@ void BreakLocationIterator::SetDebugBreakAtIC() { // Patch the code to invoke the builtin debug break function matching the // calling convention used by the call site. - Handle<Code> dbgbrk_code(Debug::FindDebugBreak(target_code, mode)); + Handle<Code> dbgbrk_code = DebugBreakForIC(target_code, mode); rinfo()->set_target_address(dbgbrk_code->entry()); } } @@ -494,7 +536,7 @@ void BreakLocationIterator::ClearAllDebugBreak() { bool BreakLocationIterator::RinfoDone() const { - ASSERT(reloc_iterator_->done() == reloc_iterator_original_->done()); + DCHECK(reloc_iterator_->done() == reloc_iterator_original_->done()); return reloc_iterator_->done(); } @@ -503,9 +545,9 @@ void BreakLocationIterator::RinfoNext() { reloc_iterator_->next(); reloc_iterator_original_->next(); #ifdef DEBUG - ASSERT(reloc_iterator_->done() == reloc_iterator_original_->done()); + DCHECK(reloc_iterator_->done() == reloc_iterator_original_->done()); if (!reloc_iterator_->done()) { - ASSERT(rmode() == original_rmode()); + DCHECK(rmode() == original_rmode()); } #endif } @@ -523,66 +565,55 @@ void Debug::ThreadInit() { thread_local_.queued_step_count_ = 0; thread_local_.step_into_fp_ = 0; thread_local_.step_out_fp_ = 0; - thread_local_.after_break_target_ = 0; // TODO(isolates): frames_are_dropped_? - thread_local_.debugger_entry_ = NULL; - thread_local_.pending_interrupts_ = 0; + thread_local_.current_debug_scope_ = NULL; thread_local_.restarter_frame_function_pointer_ = NULL; + thread_local_.promise_on_stack_ = NULL; } char* Debug::ArchiveDebug(char* storage) { char* to = storage; - OS::MemCopy(to, reinterpret_cast<char*>(&thread_local_), sizeof(ThreadLocal)); - to += sizeof(ThreadLocal); - OS::MemCopy(to, reinterpret_cast<char*>(®isters_), sizeof(registers_)); + MemCopy(to, reinterpret_cast<char*>(&thread_local_), sizeof(ThreadLocal)); ThreadInit(); - ASSERT(to <= storage + ArchiveSpacePerThread()); return storage + ArchiveSpacePerThread(); } char* Debug::RestoreDebug(char* storage) { char* from = storage; - OS::MemCopy( - reinterpret_cast<char*>(&thread_local_), from, sizeof(ThreadLocal)); - from += sizeof(ThreadLocal); - OS::MemCopy(reinterpret_cast<char*>(®isters_), from, sizeof(registers_)); - ASSERT(from <= storage + ArchiveSpacePerThread()); + MemCopy(reinterpret_cast<char*>(&thread_local_), from, sizeof(ThreadLocal)); return storage + ArchiveSpacePerThread(); } int Debug::ArchiveSpacePerThread() { - return sizeof(ThreadLocal) + sizeof(JSCallerSavedBuffer); + return sizeof(ThreadLocal); } -// Frame structure (conforms InternalFrame structure): -// -- code -// -- SMI maker -// -- function (slot is called "context") -// -- frame base -Object** Debug::SetUpFrameDropperFrame(StackFrame* bottom_js_frame, - Handle<Code> code) { - ASSERT(bottom_js_frame->is_java_script()); - - Address fp = bottom_js_frame->fp(); +ScriptCache::ScriptCache(Isolate* isolate) : HashMap(HashMap::PointersMatch), + isolate_(isolate) { + Heap* heap = isolate_->heap(); + HandleScope scope(isolate_); - // Move function pointer into "context" slot. - Memory::Object_at(fp + StandardFrameConstants::kContextOffset) = - Memory::Object_at(fp + JavaScriptFrameConstants::kFunctionOffset); + // Perform two GCs to get rid of all unreferenced scripts. The first GC gets + // rid of all the cached script wrappers and the second gets rid of the + // scripts which are no longer referenced. 
+ heap->CollectAllGarbage(Heap::kMakeHeapIterableMask, "ScriptCache"); + heap->CollectAllGarbage(Heap::kMakeHeapIterableMask, "ScriptCache"); - Memory::Object_at(fp + InternalFrameConstants::kCodeOffset) = *code; - Memory::Object_at(fp + StandardFrameConstants::kMarkerOffset) = - Smi::FromInt(StackFrame::INTERNAL); + // Scan heap for Script objects. + HeapIterator iterator(heap); + DisallowHeapAllocation no_allocation; - return reinterpret_cast<Object**>(&Memory::Object_at( - fp + StandardFrameConstants::kContextOffset)); + for (HeapObject* obj = iterator.next(); obj != NULL; obj = iterator.next()) { + if (obj->IsScript() && Script::cast(obj)->HasValidSource()) { + Add(Handle<Script>(Script::cast(obj))); + } + } } -const int Debug::kFrameDropperFrameSize = 4; - void ScriptCache::Add(Handle<Script> script) { GlobalHandles* global_handles = isolate_->global_handles(); @@ -591,7 +622,14 @@ void ScriptCache::Add(Handle<Script> script) { HashMap::Entry* entry = HashMap::Lookup(reinterpret_cast<void*>(id), Hash(id), true); if (entry->value != NULL) { - ASSERT(*script == *reinterpret_cast<Script**>(entry->value)); +#ifdef DEBUG + // The code deserializer may introduce duplicate Script objects. + // Assert that the Script objects with the same id have the same name. + Handle<Script> found(reinterpret_cast<Script**>(entry->value)); + DCHECK(script->id() == found->id()); + DCHECK(!script->name()->IsString() || + String::cast(script->name())->Equals(String::cast(found->name()))); +#endif return; } // Globalize the script object, make it weak and use the location of the @@ -610,7 +648,7 @@ Handle<FixedArray> ScriptCache::GetScripts() { Handle<FixedArray> instances = factory->NewFixedArray(occupancy()); int count = 0; for (HashMap::Entry* entry = Start(); entry != NULL; entry = Next(entry)) { - ASSERT(entry->value != NULL); + DCHECK(entry->value != NULL); if (entry->value != NULL) { instances->set(count, *reinterpret_cast<Script**>(entry->value)); count++; @@ -620,21 +658,12 @@ Handle<FixedArray> ScriptCache::GetScripts() { } -void ScriptCache::ProcessCollectedScripts() { - Debugger* debugger = isolate_->debugger(); - for (int i = 0; i < collected_scripts_.length(); i++) { - debugger->OnScriptCollected(collected_scripts_[i]); - } - collected_scripts_.Clear(); -} - - void ScriptCache::Clear() { // Iterate the script cache to get rid of all the weak handles. for (HashMap::Entry* entry = Start(); entry != NULL; entry = Next(entry)) { - ASSERT(entry != NULL); + DCHECK(entry != NULL); Object** location = reinterpret_cast<Object**>(entry->value); - ASSERT((*location)->IsScript()); + DCHECK((*location)->IsScript()); GlobalHandles::ClearWeakness(location); GlobalHandles::Destroy(location); } @@ -657,7 +686,6 @@ void ScriptCache::HandleWeakScript( HashMap::Entry* entry = script_cache->Lookup(key, hash, false); Object** location = reinterpret_cast<Object**>(entry->value); script_cache->Remove(key, hash); - script_cache->collected_scripts_.Add(id); // Clear the weak handle. GlobalHandles::Destroy(location); @@ -680,7 +708,7 @@ void Debug::HandleWeakDebugInfo( for (DebugInfoListNode* n = debug->debug_info_list_; n != NULL; n = n->next()) { - ASSERT(n != node); + DCHECK(n != node); } #endif } @@ -706,9 +734,7 @@ bool Debug::CompileDebuggerScript(Isolate* isolate, int index) { HandleScope scope(isolate); // Bail out if the index is invalid. - if (index == -1) { - return false; - } + if (index == -1) return false; // Find source and name for the requested script. 
Handle<String> source_code = @@ -720,16 +746,13 @@ bool Debug::CompileDebuggerScript(Isolate* isolate, int index) { // Compile the script. Handle<SharedFunctionInfo> function_info; - function_info = Compiler::CompileScript(source_code, - script_name, 0, 0, - false, - context, - NULL, NULL, NO_CACHED_DATA, - NATIVES_CODE); + function_info = Compiler::CompileScript( + source_code, script_name, 0, 0, false, context, NULL, NULL, + ScriptCompiler::kNoCompileOptions, NATIVES_CODE); // Silently ignore stack overflows during compilation. if (function_info.is_null()) { - ASSERT(isolate->has_pending_exception()); + DCHECK(isolate->has_pending_exception()); isolate->clear_pending_exception(); return false; } @@ -741,20 +764,20 @@ bool Debug::CompileDebuggerScript(Isolate* isolate, int index) { Handle<Object> exception; MaybeHandle<Object> result = Execution::TryCall(function, - Handle<Object>(context->global_object(), isolate), + handle(context->global_proxy()), 0, NULL, &exception); // Check for caught exceptions. if (result.is_null()) { - ASSERT(!isolate->has_pending_exception()); + DCHECK(!isolate->has_pending_exception()); MessageLocation computed_location; isolate->ComputeLocation(&computed_location); Handle<Object> message = MessageHandler::MakeMessageObject( isolate, "error_loading_debugger", &computed_location, Vector<Handle<Object> >::empty(), Handle<JSArray>()); - ASSERT(!isolate->has_pending_exception()); + DCHECK(!isolate->has_pending_exception()); if (!exception.is_null()) { isolate->set_pending_exception(*exception); MessageHandler::ReportMessage(isolate, NULL, message); @@ -772,20 +795,16 @@ bool Debug::CompileDebuggerScript(Isolate* isolate, int index) { bool Debug::Load() { // Return if debugger is already loaded. - if (IsLoaded()) return true; - - Debugger* debugger = isolate_->debugger(); + if (is_loaded()) return true; // Bail out if we're already in the process of compiling the native // JavaScript source code for the debugger. - if (debugger->compiling_natives() || - debugger->is_loading_debugger()) - return false; - debugger->set_loading_debugger(true); + if (is_suppressed_) return false; + SuppressDebug while_loading(this); // Disable breakpoints and interrupts while compiling and running the // debugger scripts including the context creation code. - DisableBreak disable(isolate_, true); + DisableBreak disable(this, true); PostponeInterruptsScope postpone(isolate_); // Create the debugger context. @@ -793,7 +812,7 @@ bool Debug::Load() { ExtensionConfiguration no_extensions; Handle<Context> context = isolate_->bootstrapper()->CreateEnvironment( - Handle<Object>::null(), + MaybeHandle<JSGlobalProxy>(), v8::Handle<ObjectTemplate>(), &no_extensions); @@ -807,18 +826,14 @@ bool Debug::Load() { // Expose the builtins object in the debugger context. Handle<String> key = isolate_->factory()->InternalizeOneByteString( STATIC_ASCII_VECTOR("builtins")); - Handle<GlobalObject> global = Handle<GlobalObject>(context->global_object()); + Handle<GlobalObject> global = + Handle<GlobalObject>(context->global_object(), isolate_); + Handle<JSBuiltinsObject> builtin = + Handle<JSBuiltinsObject>(global->builtins(), isolate_); RETURN_ON_EXCEPTION_VALUE( - isolate_, - JSReceiver::SetProperty(global, - key, - Handle<Object>(global->builtins(), isolate_), - NONE, - SLOPPY), - false); + isolate_, Object::SetProperty(global, key, builtin, SLOPPY), false); // Compile the JavaScript for the debugger in the debugger context. 
- debugger->set_compiling_natives(true); bool caught_exception = !CompileDebuggerScript(isolate_, Natives::GetIndex("mirror")) || !CompileDebuggerScript(isolate_, Natives::GetIndex("debug")); @@ -827,68 +842,51 @@ bool Debug::Load() { caught_exception = caught_exception || !CompileDebuggerScript(isolate_, Natives::GetIndex("liveedit")); } - - debugger->set_compiling_natives(false); - - // Make sure we mark the debugger as not loading before we might - // return. - debugger->set_loading_debugger(false); - // Check for caught exceptions. if (caught_exception) return false; - // Debugger loaded, create debugger context global handle. debug_context_ = Handle<Context>::cast( isolate_->global_handles()->Create(*context)); - return true; } void Debug::Unload() { + ClearAllBreakPoints(); + ClearStepping(); + + // Match unmatched PopPromise calls. + while (thread_local_.promise_on_stack_) PopPromise(); + // Return debugger is not loaded. - if (!IsLoaded()) { - return; - } + if (!is_loaded()) return; // Clear the script cache. - DestroyScriptCache(); + if (script_cache_ != NULL) { + delete script_cache_; + script_cache_ = NULL; + } // Clear debugger context global handle. - GlobalHandles::Destroy(reinterpret_cast<Object**>(debug_context_.location())); + GlobalHandles::Destroy(Handle<Object>::cast(debug_context_).location()); debug_context_ = Handle<Context>(); } -// Set the flag indicating that preemption happened during debugging. -void Debug::PreemptionWhileInDebugger() { - ASSERT(InDebugger()); - Debug::set_interrupts_pending(PREEMPT); -} - - -Object* Debug::Break(Arguments args) { +void Debug::Break(Arguments args, JavaScriptFrame* frame) { Heap* heap = isolate_->heap(); HandleScope scope(isolate_); - ASSERT(args.length() == 0); - - thread_local_.frame_drop_mode_ = FRAMES_UNTOUCHED; + DCHECK(args.length() == 0); - // Get the top-most JavaScript frame. - JavaScriptFrameIterator it(isolate_); - JavaScriptFrame* frame = it.frame(); + // Initialize LiveEdit. + LiveEdit::InitializeThreadLocal(this); // Just continue if breaks are disabled or debugger cannot be loaded. - if (disable_break() || !Load()) { - SetAfterBreakTarget(frame); - return heap->undefined_value(); - } + if (break_disabled_) return; // Enter the debugger. - EnterDebugger debugger(isolate_); - if (debugger.FailedToEnter()) { - return heap->undefined_value(); - } + DebugScope debug_scope(this); + if (debug_scope.failed()) return; // Postpone interrupt during breakpoint processing. PostponeInterruptsScope postpone(isolate_); @@ -924,10 +922,11 @@ Object* Debug::Break(Arguments args) { // If step out is active skip everything until the frame where we need to step // out to is reached, unless real breakpoint is hit. - if (StepOutActive() && frame->fp() != step_out_fp() && + if (StepOutActive() && + frame->fp() != thread_local_.step_out_fp_ && break_points_hit->IsUndefined() ) { // Step count should always be 0 for StepOut. - ASSERT(thread_local_.step_count_ == 0); + DCHECK(thread_local_.step_count_ == 0); } else if (!break_points_hit->IsUndefined() || (thread_local_.last_step_action_ != StepNone && thread_local_.step_count_ == 0)) { @@ -947,7 +946,7 @@ Object* Debug::Break(Arguments args) { PrepareStep(StepNext, step_count, StackFrame::NO_ID); } else { // Notify the debug event listeners. 
- isolate_->debugger()->OnDebugBreak(break_points_hit, false); + OnDebugBreak(break_points_hit, false); } } else if (thread_local_.last_step_action_ != StepNone) { // Hold on to last step action as it is cleared by the call to @@ -984,40 +983,15 @@ Object* Debug::Break(Arguments args) { // Set up for the remaining steps. PrepareStep(step_action, step_count, StackFrame::NO_ID); } - - if (thread_local_.frame_drop_mode_ == FRAMES_UNTOUCHED) { - SetAfterBreakTarget(frame); - } else if (thread_local_.frame_drop_mode_ == - FRAME_DROPPED_IN_IC_CALL) { - // We must have been calling IC stub. Do not go there anymore. - Code* plain_return = isolate_->builtins()->builtin( - Builtins::kPlainReturn_LiveEdit); - thread_local_.after_break_target_ = plain_return->entry(); - } else if (thread_local_.frame_drop_mode_ == - FRAME_DROPPED_IN_DEBUG_SLOT_CALL) { - // Debug break slot stub does not return normally, instead it manually - // cleans the stack and jumps. We should patch the jump address. - Code* plain_return = isolate_->builtins()->builtin( - Builtins::kFrameDropper_LiveEdit); - thread_local_.after_break_target_ = plain_return->entry(); - } else if (thread_local_.frame_drop_mode_ == - FRAME_DROPPED_IN_DIRECT_CALL) { - // Nothing to do, after_break_target is not used here. - } else if (thread_local_.frame_drop_mode_ == - FRAME_DROPPED_IN_RETURN_CALL) { - Code* plain_return = isolate_->builtins()->builtin( - Builtins::kFrameDropper_LiveEdit); - thread_local_.after_break_target_ = plain_return->entry(); - } else { - UNREACHABLE(); - } - - return heap->undefined_value(); } RUNTIME_FUNCTION(Debug_Break) { - return isolate->debug()->Break(args); + // Get the top-most JavaScript frame. + JavaScriptFrameIterator it(isolate); + isolate->debug()->Break(args, it.frame()); + isolate->debug()->SetAfterBreakTarget(it.frame()); + return isolate->heap()->undefined_value(); } @@ -1031,7 +1005,7 @@ Handle<Object> Debug::CheckBreakPoints(Handle<Object> break_point_objects) { // they are in a FixedArray. Handle<FixedArray> break_points_hit; int break_points_hit_count = 0; - ASSERT(!break_point_objects->IsUndefined()); + DCHECK(!break_point_objects->IsUndefined()); if (break_point_objects->IsFixedArray()) { Handle<FixedArray> array(FixedArray::cast(*break_point_objects)); break_points_hit = factory->NewFixedArray(array->length()); @@ -1103,12 +1077,12 @@ bool Debug::HasDebugInfo(Handle<SharedFunctionInfo> shared) { // Return the debug info for this function. EnsureDebugInfo must be called // prior to ensure the debug info has been generated for shared. Handle<DebugInfo> Debug::GetDebugInfo(Handle<SharedFunctionInfo> shared) { - ASSERT(HasDebugInfo(shared)); + DCHECK(HasDebugInfo(shared)); return Handle<DebugInfo>(DebugInfo::cast(shared->debug_info())); } -void Debug::SetBreakPoint(Handle<JSFunction> function, +bool Debug::SetBreakPoint(Handle<JSFunction> function, Handle<Object> break_point_object, int* source_position) { HandleScope scope(isolate_); @@ -1119,12 +1093,12 @@ void Debug::SetBreakPoint(Handle<JSFunction> function, Handle<SharedFunctionInfo> shared(function->shared()); if (!EnsureDebugInfo(shared, function)) { // Return if retrieving debug info failed. - return; + return true; } Handle<DebugInfo> debug_info = GetDebugInfo(shared); // Source positions starts with zero. - ASSERT(*source_position >= 0); + DCHECK(*source_position >= 0); // Find the break point and change it. 
BreakLocationIterator it(debug_info, SOURCE_BREAK_LOCATIONS); @@ -1134,7 +1108,7 @@ void Debug::SetBreakPoint(Handle<JSFunction> function, *source_position = it.position(); // At least one active break point now. - ASSERT(debug_info->GetBreakPointCount() > 0); + return debug_info->GetBreakPointCount() > 0; } @@ -1168,7 +1142,7 @@ bool Debug::SetBreakPointForScript(Handle<Script> script, Handle<DebugInfo> debug_info = GetDebugInfo(shared); // Source positions starts with zero. - ASSERT(position >= 0); + DCHECK(position >= 0); // Find the break point and change it. BreakLocationIterator it(debug_info, SOURCE_BREAK_LOCATIONS); @@ -1178,7 +1152,7 @@ bool Debug::SetBreakPointForScript(Handle<Script> script, *source_position = it.position() + shared->start_position(); // At least one active break point now. - ASSERT(debug_info->GetBreakPointCount() > 0); + DCHECK(debug_info->GetBreakPointCount() > 0); return true; } @@ -1255,7 +1229,7 @@ void Debug::FloodBoundFunctionWithOneShot(Handle<JSFunction> function) { isolate_); if (!bindee.is_null() && bindee->IsJSFunction() && - !JSFunction::cast(*bindee)->IsBuiltin()) { + !JSFunction::cast(*bindee)->IsFromNativeScript()) { Handle<JSFunction> bindee_function(JSFunction::cast(*bindee)); Debug::FloodWithOneShot(bindee_function); } @@ -1298,53 +1272,68 @@ bool Debug::IsBreakOnException(ExceptionBreakType type) { } -void Debug::PromiseHandlePrologue(Handle<JSFunction> promise_getter) { - Handle<JSFunction> promise_getter_global = Handle<JSFunction>::cast( - isolate_->global_handles()->Create(*promise_getter)); - StackHandler* handler = - StackHandler::FromAddress(Isolate::handler(isolate_->thread_local_top())); - promise_getters_.Add(promise_getter_global); - promise_catch_handlers_.Add(handler); +PromiseOnStack::PromiseOnStack(Isolate* isolate, PromiseOnStack* prev, + Handle<JSObject> promise) + : isolate_(isolate), prev_(prev) { + handler_ = StackHandler::FromAddress( + Isolate::handler(isolate->thread_local_top())); + promise_ = + Handle<JSObject>::cast(isolate->global_handles()->Create(*promise)); } -void Debug::PromiseHandleEpilogue() { - if (promise_catch_handlers_.length() == 0) return; - promise_catch_handlers_.RemoveLast(); - Handle<Object> promise_getter = promise_getters_.RemoveLast(); - isolate_->global_handles()->Destroy(promise_getter.location()); +PromiseOnStack::~PromiseOnStack() { + isolate_->global_handles()->Destroy( + Handle<Object>::cast(promise_).location()); } -Handle<Object> Debug::GetPromiseForUncaughtException() { +void Debug::PushPromise(Handle<JSObject> promise) { + PromiseOnStack* prev = thread_local_.promise_on_stack_; + thread_local_.promise_on_stack_ = new PromiseOnStack(isolate_, prev, promise); +} + + +void Debug::PopPromise() { + if (thread_local_.promise_on_stack_ == NULL) return; + PromiseOnStack* prev = thread_local_.promise_on_stack_->prev(); + delete thread_local_.promise_on_stack_; + thread_local_.promise_on_stack_ = prev; +} + + +Handle<Object> Debug::GetPromiseOnStackOnThrow() { Handle<Object> undefined = isolate_->factory()->undefined_value(); - if (promise_getters_.length() == 0) return undefined; - Handle<JSFunction> promise_getter = promise_getters_.last(); - StackHandler* promise_catch = promise_catch_handlers_.last(); + if (thread_local_.promise_on_stack_ == NULL) return undefined; + StackHandler* promise_try = thread_local_.promise_on_stack_->handler(); // Find the top-most try-catch handler. 
  StackHandler* handler = StackHandler::FromAddress(
       Isolate::handler(isolate_->thread_local_top()));
-  while (handler != NULL && !handler->is_catch()) {
+  do {
+    if (handler == promise_try) {
+      // Mark the pushed try-catch handler to prevent a later duplicate event
+      // triggered with the following reject.
+      return thread_local_.promise_on_stack_->promise();
+    }
     handler = handler->next();
-  }
-#ifdef DEBUG
-  // Make sure that our promise catch handler is in the list of handlers,
-  // even if it's not the top-most try-catch handler.
-  StackHandler* temp = handler;
-  while (temp != promise_catch && !temp->is_catch()) {
-    temp = temp->next();
-    CHECK(temp != NULL);
-  }
-#endif  // DEBUG
-
-  if (handler == promise_catch) {
-    return Execution::Call(
-        isolate_, promise_getter, undefined, 0, NULL).ToHandleChecked();
-  }
+    // Throwing inside a Promise can be intercepted by an inner try-catch, so
+    // we stop at the first try-catch handler.
+  } while (handler != NULL && !handler->is_catch());
   return undefined;
 }
 
 
+bool Debug::PromiseHasRejectHandler(Handle<JSObject> promise) {
+  Handle<JSFunction> fun = Handle<JSFunction>::cast(
+      JSObject::GetDataProperty(isolate_->js_builtins_object(),
+                                isolate_->factory()->NewStringFromStaticAscii(
+                                    "PromiseHasRejectHandler")));
+  Handle<Object> result =
+      Execution::Call(isolate_, fun, promise, 0, NULL).ToHandleChecked();
+  return result->IsTrue();
+}
+
+
 void Debug::PrepareStep(StepAction step_action,
                         int step_count,
                         StackFrame::Id frame_id) {
@@ -1352,7 +1341,7 @@ void Debug::PrepareStep(StepAction step_action,
 
   PrepareForBreakPoints();
 
-  ASSERT(Debug::InDebugger());
+  DCHECK(in_debug_scope());
 
   // Remember this step action and count.
   thread_local_.last_step_action_ = step_action;
@@ -1440,7 +1429,8 @@ void Debug::PrepareStep(StepAction step_action,
           Code::GetCodeFromTargetAddress(original_target);
     }
     if ((maybe_call_function_stub->kind() == Code::STUB &&
-         maybe_call_function_stub->major_key() == CodeStub::CallFunction) ||
+         CodeStub::GetMajorKey(*maybe_call_function_stub) ==
+             CodeStub::CallFunction) ||
         maybe_call_function_stub->kind() == Code::CALL_IC) {
       // Save reference to the code as we may need it to find out arguments
       // count for 'step in' later.
@@ -1459,11 +1449,12 @@ void Debug::PrepareStep(StepAction step_action,
       frames_it.Advance();
     }
   } else {
-    ASSERT(it.IsExit());
+    DCHECK(it.IsExit());
     frames_it.Advance();
   }
   // Skip builtin functions on the stack.
-  while (!frames_it.done() && frames_it.frame()->function()->IsBuiltin()) {
+  while (!frames_it.done() &&
+         frames_it.frame()->function()->IsFromNativeScript()) {
     frames_it.Advance();
   }
   // Step out: If there is a JavaScript caller frame, we need to
@@ -1500,17 +1491,7 @@ void Debug::PrepareStep(StepAction step_action,
     bool is_call_ic = call_function_stub->kind() == Code::CALL_IC;
 
     // Find out number of arguments from the stub minor key.
-    // Reverse lookup required as the minor key cannot be retrieved
-    // from the code object.
-    Handle<Object> obj(
-        isolate_->heap()->code_stubs()->SlowReverseLookup(
-            *call_function_stub),
-        isolate_);
-    ASSERT(!obj.is_null());
-    ASSERT(!(*obj)->IsUndefined());
-    ASSERT(obj->IsSmi());
-    // Get the STUB key and extract major and minor key.
-    uint32_t key = Smi::cast(*obj)->value();
+    uint32_t key = call_function_stub->stub_key();
 
     // Argc in the stub is the number of arguments passed - not the
     // expected arguments of the called function.
int call_function_arg_count = is_call_ic @@ -1518,8 +1499,9 @@ void Debug::PrepareStep(StepAction step_action, : CallFunctionStub::ExtractArgcFromMinorKey( CodeStub::MinorKeyFromKey(key)); - ASSERT(is_call_ic || - call_function_stub->major_key() == CodeStub::MajorKeyFromKey(key)); + DCHECK(is_call_ic || + CodeStub::GetMajorKey(*call_function_stub) == + CodeStub::MajorKeyFromKey(key)); // Find target function on the expression stack. // Expression stack looks like this (top to bottom): @@ -1529,7 +1511,7 @@ void Debug::PrepareStep(StepAction step_action, // Receiver // Function to call int expressions_count = frame->ComputeExpressionsCount(); - ASSERT(expressions_count - 2 - call_function_arg_count >= 0); + DCHECK(expressions_count - 2 - call_function_arg_count >= 0); Object* fun = frame->GetExpression( expressions_count - 2 - call_function_arg_count); @@ -1550,7 +1532,7 @@ void Debug::PrepareStep(StepAction step_action, Handle<JSFunction> js_function(JSFunction::cast(fun)); if (js_function->shared()->bound()) { Debug::FloodBoundFunctionWithOneShot(js_function); - } else if (!js_function->IsBuiltin()) { + } else if (!js_function->IsFromNativeScript()) { // Don't step into builtins. // It will also compile target function if it's not compiled yet. FloodWithOneShot(js_function); @@ -1567,7 +1549,7 @@ void Debug::PrepareStep(StepAction step_action, if (is_load_or_store) { // Remember source position and frame to handle step in getter/setter. If // there is a custom getter/setter it will be handled in - // Object::Get/SetPropertyWithCallback, otherwise the step action will be + // Object::Get/SetPropertyWithAccessor, otherwise the step action will be // propagated on the next Debug::Break. thread_local_.last_statement_position_ = debug_info->code()->SourceStatementPosition(frame->pc()); @@ -1623,67 +1605,7 @@ bool Debug::IsDebugBreak(Address addr) { } -// Check whether a code stub with the specified major key is a possible break -// point location when looking for source break locations. -bool Debug::IsSourceBreakStub(Code* code) { - CodeStub::Major major_key = CodeStub::GetMajorKey(code); - return major_key == CodeStub::CallFunction; -} - - -// Check whether a code stub with the specified major key is a possible break -// location. -bool Debug::IsBreakStub(Code* code) { - CodeStub::Major major_key = CodeStub::GetMajorKey(code); - return major_key == CodeStub::CallFunction; -} - - -// Find the builtin to use for invoking the debug break -Handle<Code> Debug::FindDebugBreak(Handle<Code> code, RelocInfo::Mode mode) { - Isolate* isolate = code->GetIsolate(); - // Find the builtin debug break function matching the calling convention - // used by the call site. 
- if (code->is_inline_cache_stub()) { - switch (code->kind()) { - case Code::CALL_IC: - return isolate->builtins()->CallICStub_DebugBreak(); - - case Code::LOAD_IC: - return isolate->builtins()->LoadIC_DebugBreak(); - - case Code::STORE_IC: - return isolate->builtins()->StoreIC_DebugBreak(); - - case Code::KEYED_LOAD_IC: - return isolate->builtins()->KeyedLoadIC_DebugBreak(); - - case Code::KEYED_STORE_IC: - return isolate->builtins()->KeyedStoreIC_DebugBreak(); - - case Code::COMPARE_NIL_IC: - return isolate->builtins()->CompareNilIC_DebugBreak(); - - default: - UNREACHABLE(); - } - } - if (RelocInfo::IsConstructCall(mode)) { - if (code->has_function_cache()) { - return isolate->builtins()->CallConstructStub_Recording_DebugBreak(); - } else { - return isolate->builtins()->CallConstructStub_DebugBreak(); - } - } - if (code->kind() == Code::STUB) { - ASSERT(code->major_key() == CodeStub::CallFunction); - return isolate->builtins()->CallFunctionStub_DebugBreak(); - } - - UNREACHABLE(); - return Handle<Code>::null(); -} // Simple function for returning the source positions for active break points. @@ -1728,18 +1650,6 @@ Handle<Object> Debug::GetSourceBreakLocations( } -void Debug::NewBreak(StackFrame::Id break_frame_id) { - thread_local_.break_frame_id_ = break_frame_id; - thread_local_.break_id_ = ++thread_local_.break_count_; -} - - -void Debug::SetBreak(StackFrame::Id break_frame_id, int break_id) { - thread_local_.break_frame_id_ = break_frame_id; - thread_local_.break_id_ = break_id; -} - - // Handle stepping into a function. void Debug::HandleStepIn(Handle<JSFunction> function, Handle<Object> holder, @@ -1752,7 +1662,7 @@ void Debug::HandleStepIn(Handle<JSFunction> function, it.Advance(); // For constructor functions skip another frame. if (is_constructor) { - ASSERT(it.frame()->is_construct()); + DCHECK(it.frame()->is_construct()); it.Advance(); } fp = it.frame()->fp(); @@ -1760,11 +1670,11 @@ void Debug::HandleStepIn(Handle<JSFunction> function, // Flood the function with one-shot break points if it is called from where // step into was requested. - if (fp == step_in_fp()) { + if (fp == thread_local_.step_into_fp_) { if (function->shared()->bound()) { // Handle Function.prototype.bind Debug::FloodBoundFunctionWithOneShot(function); - } else if (!function->IsBuiltin()) { + } else if (!function->IsFromNativeScript()) { // Don't allow step into functions in the native context. if (function->shared()->code() == isolate->builtins()->builtin(Builtins::kFunctionApply) || @@ -1776,7 +1686,7 @@ void Debug::HandleStepIn(Handle<JSFunction> function, // function. 
if (!holder.is_null() && holder->IsJSFunction()) { Handle<JSFunction> js_function = Handle<JSFunction>::cast(holder); - if (!js_function->IsBuiltin()) { + if (!js_function->IsFromNativeScript()) { Debug::FloodWithOneShot(js_function); } else if (js_function->shared()->bound()) { // Handle Function.prototype.bind @@ -1824,7 +1734,7 @@ void Debug::ClearOneShot() { void Debug::ActivateStepIn(StackFrame* frame) { - ASSERT(!StepOutActive()); + DCHECK(!StepOutActive()); thread_local_.step_into_fp_ = frame->UnpaddedFP(); } @@ -1835,7 +1745,7 @@ void Debug::ClearStepIn() { void Debug::ActivateStepOut(StackFrame* frame) { - ASSERT(!StepInActive()); + DCHECK(!StepInActive()); thread_local_.step_out_fp_ = frame->UnpaddedFP(); } @@ -1873,7 +1783,7 @@ static void CollectActiveFunctionsFromThread( } } else if (frame->function()->IsJSFunction()) { JSFunction* function = frame->function(); - ASSERT(frame->LookupCode()->kind() == Code::FUNCTION); + DCHECK(frame->LookupCode()->kind() == Code::FUNCTION); active_functions->Add(Handle<JSFunction>(function)); function->shared()->code()->set_gc_metadata(active_code_marker); } @@ -1886,10 +1796,10 @@ static void CollectActiveFunctionsFromThread( // Assembler::CheckConstPool() and Assembler::CheckVeneerPool(). Note that this // is only useful for architectures using constant pools or veneer pools. static int ComputeCodeOffsetFromPcOffset(Code *code, int pc_offset) { - ASSERT_EQ(code->kind(), Code::FUNCTION); - ASSERT(!code->has_debug_break_slots()); - ASSERT_LE(0, pc_offset); - ASSERT_LT(pc_offset, code->instruction_end() - code->instruction_start()); + DCHECK_EQ(code->kind(), Code::FUNCTION); + DCHECK(!code->has_debug_break_slots()); + DCHECK_LE(0, pc_offset); + DCHECK_LT(pc_offset, code->instruction_end() - code->instruction_start()); int mask = RelocInfo::ModeMask(RelocInfo::CONST_POOL) | RelocInfo::ModeMask(RelocInfo::VENEER_POOL); @@ -1898,9 +1808,9 @@ static int ComputeCodeOffsetFromPcOffset(Code *code, int pc_offset) { for (RelocIterator it(code, mask); !it.done(); it.next()) { RelocInfo* info = it.rinfo(); if (info->pc() >= pc) break; - ASSERT(RelocInfo::IsConstPool(info->rmode())); + DCHECK(RelocInfo::IsConstPool(info->rmode())); code_offset -= static_cast<int>(info->data()); - ASSERT_LE(0, code_offset); + DCHECK_LE(0, code_offset); } return code_offset; @@ -1909,7 +1819,7 @@ static int ComputeCodeOffsetFromPcOffset(Code *code, int pc_offset) { // The inverse of ComputeCodeOffsetFromPcOffset. 
static int ComputePcOffsetFromCodeOffset(Code *code, int code_offset) { - ASSERT_EQ(code->kind(), Code::FUNCTION); + DCHECK_EQ(code->kind(), Code::FUNCTION); int mask = RelocInfo::ModeMask(RelocInfo::DEBUG_BREAK_SLOT) | RelocInfo::ModeMask(RelocInfo::CONST_POOL) | @@ -1921,14 +1831,14 @@ static int ComputePcOffsetFromCodeOffset(Code *code, int code_offset) { if (RelocInfo::IsDebugBreakSlot(info->rmode())) { reloc += Assembler::kDebugBreakSlotLength; } else { - ASSERT(RelocInfo::IsConstPool(info->rmode())); + DCHECK(RelocInfo::IsConstPool(info->rmode())); reloc += static_cast<int>(info->data()); } } int pc_offset = code_offset + reloc; - ASSERT_LT(code->instruction_start() + pc_offset, code->instruction_end()); + DCHECK_LT(code->instruction_start() + pc_offset, code->instruction_end()); return pc_offset; } @@ -1944,7 +1854,7 @@ static void RedirectActivationsToRecompiledCodeOnThread( JSFunction* function = frame->function(); - ASSERT(frame->LookupCode()->kind() == Code::FUNCTION); + DCHECK(frame->LookupCode()->kind() == Code::FUNCTION); Handle<Code> frame_code(frame->LookupCode()); if (frame_code->has_debug_break_slots()) continue; @@ -1982,6 +1892,11 @@ static void RedirectActivationsToRecompiledCodeOnThread( reinterpret_cast<intptr_t>(new_pc)); } + if (FLAG_enable_ool_constant_pool) { + // Update constant pool pointer for new code. + frame->set_constant_pool(new_code->constant_pool()); + } + // Patch the return address to return into the code with // debug break slots. frame->set_pc(new_pc); @@ -2017,47 +1932,31 @@ class ActiveFunctionsRedirector : public ThreadVisitor { }; -class ForceDebuggerActive { - public: - explicit ForceDebuggerActive(Isolate *isolate) { - isolate_ = isolate; - old_state_ = isolate->debugger()->force_debugger_active(); - isolate_->debugger()->set_force_debugger_active(true); +static void EnsureFunctionHasDebugBreakSlots(Handle<JSFunction> function) { + if (function->code()->kind() == Code::FUNCTION && + function->code()->has_debug_break_slots()) { + // Nothing to do. Function code already had debug break slots. + return; } - - ~ForceDebuggerActive() { - isolate_->debugger()->set_force_debugger_active(old_state_); + // Make sure that the shared full code is compiled with debug + // break slots. + if (!function->shared()->code()->has_debug_break_slots()) { + MaybeHandle<Code> code = Compiler::GetCodeForDebugging(function); + // Recompilation can fail. In that case leave the code as it was. + if (!code.is_null()) function->ReplaceCode(*code.ToHandleChecked()); + } else { + // Simply use shared code if it has debug break slots. + function->ReplaceCode(function->shared()->code()); } - - private: - Isolate *isolate_; - bool old_state_; - - DISALLOW_COPY_AND_ASSIGN(ForceDebuggerActive); -}; - - -void Debug::MaybeRecompileFunctionForDebugging(Handle<JSFunction> function) { - ASSERT_EQ(Code::FUNCTION, function->code()->kind()); - ASSERT_EQ(function->code(), function->shared()->code()); - - if (function->code()->has_debug_break_slots()) return; - - ForceDebuggerActive force_debugger_active(isolate_); - MaybeHandle<Code> code = Compiler::GetCodeForDebugging(function); - // Recompilation can fail. In that case leave the code as it was. 
- if (!code.is_null()) - function->ReplaceCode(*code.ToHandleChecked()); - ASSERT_EQ(function->code(), function->shared()->code()); } -void Debug::RecompileAndRelocateSuspendedGenerators( +static void RecompileAndRelocateSuspendedGenerators( const List<Handle<JSGeneratorObject> > &generators) { for (int i = 0; i < generators.length(); i++) { Handle<JSFunction> fun(generators[i]->function()); - MaybeRecompileFunctionForDebugging(fun); + EnsureFunctionHasDebugBreakSlots(fun); int code_offset = generators[i]->continuation(); int pc_offset = ComputePcOffsetFromCodeOffset(fun->code(), code_offset); @@ -2106,6 +2005,7 @@ void Debug::PrepareForBreakPoints() { Heap* heap = isolate_->heap(); heap->CollectAllGarbage(Heap::kMakeHeapIterableMask, "preparing for breakpoints"); + HeapIterator iterator(heap); // Ensure no GC in this scope as we are going to use gc_metadata // field in the Code object to mark active functions. @@ -2125,7 +2025,6 @@ void Debug::PrepareForBreakPoints() { // Scan the heap for all non-optimized functions which have no // debug break slots and are not active or inlined into an active // function and mark them for lazy compilation. - HeapIterator iterator(heap); HeapObject* obj = NULL; while (((obj = iterator.next()) != NULL)) { if (obj->IsJSFunction()) { @@ -2134,7 +2033,7 @@ void Debug::PrepareForBreakPoints() { if (!shared->allows_lazy_compilation()) continue; if (!shared->script()->IsScript()) continue; - if (function->IsBuiltin()) continue; + if (function->IsFromNativeScript()) continue; if (shared->code()->gc_metadata() == active_code_marker) continue; if (shared->is_generator()) { @@ -2145,8 +2044,8 @@ void Debug::PrepareForBreakPoints() { Code::Kind kind = function->code()->kind(); if (kind == Code::FUNCTION && !function->code()->has_debug_break_slots()) { - function->set_code(*lazy_compile); - function->shared()->set_code(*lazy_compile); + function->ReplaceCode(*lazy_compile); + function->shared()->ReplaceCode(*lazy_compile); } else if (kind == Code::BUILTIN && (function->IsInOptimizationQueue() || function->IsMarkedForOptimization() || @@ -2155,10 +2054,10 @@ void Debug::PrepareForBreakPoints() { Code* shared_code = function->shared()->code(); if (shared_code->kind() == Code::FUNCTION && shared_code->has_debug_break_slots()) { - function->set_code(shared_code); + function->ReplaceCode(shared_code); } else { - function->set_code(*lazy_compile); - function->shared()->set_code(*lazy_compile); + function->ReplaceCode(*lazy_compile); + function->shared()->ReplaceCode(*lazy_compile); } } } else if (obj->IsJSGeneratorObject()) { @@ -2166,11 +2065,11 @@ void Debug::PrepareForBreakPoints() { if (!gen->is_suspended()) continue; JSFunction* fun = gen->function(); - ASSERT_EQ(fun->code()->kind(), Code::FUNCTION); + DCHECK_EQ(fun->code()->kind(), Code::FUNCTION); if (fun->code()->has_debug_break_slots()) continue; int pc_offset = gen->continuation(); - ASSERT_LT(0, pc_offset); + DCHECK_LT(0, pc_offset); int code_offset = ComputeCodeOffsetFromPcOffset(fun->code(), pc_offset); @@ -2199,8 +2098,8 @@ void Debug::PrepareForBreakPoints() { Handle<JSFunction> &function = generator_functions[i]; if (function->code()->kind() != Code::FUNCTION) continue; if (function->code()->has_debug_break_slots()) continue; - function->set_code(*lazy_compile); - function->shared()->set_code(*lazy_compile); + function->ReplaceCode(*lazy_compile); + function->shared()->ReplaceCode(*lazy_compile); } // Now recompile all functions with activation frames and and @@ -2217,7 +2116,7 @@ void 
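EnsureFunctionHasDebugBreakSlots, which replaces the ForceDebuggerActive/MaybeRecompileFunctionForDebugging pair above, prefers code that already carries debug break slots and recompiles only as a last resort. Its three-way decision, reduced to a standalone sketch with stand-in types (hypothetical; only the control flow matches the diff):

    #include <cstdio>

    // Stand-in state for the cases EnsureFunctionHasDebugBreakSlots checks.
    struct Code { bool is_full_code; bool has_debug_break_slots; };
    struct Function { Code own; Code shared; };

    enum Action { kNothing, kUseSharedCode, kRecompileForDebugging };

    Action Ensure(const Function& f) {
      // Already instrumented: nothing to do.
      if (f.own.is_full_code && f.own.has_debug_break_slots) return kNothing;
      // Shared full code has slots: just adopt it (ReplaceCode in the diff).
      if (f.shared.has_debug_break_slots) return kUseSharedCode;
      // Otherwise recompile; the real code tolerates failure and keeps
      // the existing code in that case.
      return kRecompileForDebugging;
    }

    int main() {
      Function f = {{true, false}, {true, true}};
      std::printf("action = %d\n", static_cast<int>(Ensure(f)));  // Prints 1.
      return 0;
    }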
Debug::PrepareForBreakPoints() { if (!shared->allows_lazy_compilation()) continue; if (shared->code()->kind() == Code::BUILTIN) continue; - MaybeRecompileFunctionForDebugging(function); + EnsureFunctionHasDebugBreakSlots(function); } RedirectActivationsToRecompiledCodeOnThread(isolate_, @@ -2250,9 +2149,7 @@ Object* Debug::FindSharedFunctionInfoInScript(Handle<Script> script, Handle<SharedFunctionInfo> target; Heap* heap = isolate_->heap(); while (!done) { - { // Extra scope for iterator and no-allocation. - heap->EnsureHeapIsIterable(); - DisallowHeapAllocation no_alloc_during_heap_iteration; + { // Extra scope for iterator. HeapIterator iterator(heap); for (HeapObject* obj = iterator.next(); obj != NULL; obj = iterator.next()) { @@ -2262,7 +2159,7 @@ Object* Debug::FindSharedFunctionInfoInScript(Handle<Script> script, if (obj->IsJSFunction()) { function = Handle<JSFunction>(JSFunction::cast(obj)); shared = Handle<SharedFunctionInfo>(function->shared()); - ASSERT(shared->allows_lazy_compilation() || shared->is_compiled()); + DCHECK(shared->allows_lazy_compilation() || shared->is_compiled()); found_next_candidate = true; } else if (obj->IsSharedFunctionInfo()) { shared = Handle<SharedFunctionInfo>(SharedFunctionInfo::cast(obj)); @@ -2344,7 +2241,7 @@ bool Debug::EnsureDebugInfo(Handle<SharedFunctionInfo> shared, // Return if we already have the debug info for shared. if (HasDebugInfo(shared)) { - ASSERT(shared->is_compiled()); + DCHECK(shared->is_compiled()); return true; } @@ -2370,7 +2267,7 @@ bool Debug::EnsureDebugInfo(Handle<SharedFunctionInfo> shared, void Debug::RemoveDebugInfo(Handle<DebugInfo> debug_info) { - ASSERT(debug_info_list_ != NULL); + DCHECK(debug_info_list_ != NULL); // Run through the debug info objects to find this one and remove it. DebugInfoListNode* prev = NULL; DebugInfoListNode* current = debug_info_list_; @@ -2401,8 +2298,11 @@ void Debug::RemoveDebugInfo(Handle<DebugInfo> debug_info) { void Debug::SetAfterBreakTarget(JavaScriptFrame* frame) { - HandleScope scope(isolate_); + after_break_target_ = NULL; + + if (LiveEdit::SetAfterBreakTarget(this)) return; // LiveEdit did the job. + HandleScope scope(isolate_); PrepareForBreakPoints(); // Get the executing function in which the debug break occurred. @@ -2418,13 +2318,13 @@ void Debug::SetAfterBreakTarget(JavaScriptFrame* frame) { #ifdef DEBUG // Get the code which is actually executing. Handle<Code> frame_code(frame->LookupCode()); - ASSERT(frame_code.is_identical_to(code)); + DCHECK(frame_code.is_identical_to(code)); #endif // Find the call address in the running code. This address holds the call to // either a DebugBreakXXX or to the debug break return entry code if the // break point is still active after processing the break point. - Address addr = frame->pc() - Assembler::kPatchDebugBreakSlotReturnOffset; + Address addr = Assembler::break_address_from_return_address(frame->pc()); // Check if the location is at JS exit or debug break slot. bool at_js_return = false; @@ -2451,38 +2351,38 @@ void Debug::SetAfterBreakTarget(JavaScriptFrame* frame) { // place in the original code. If not the break point was removed during // break point processing. if (break_at_js_return_active) { - addr += original_code->instruction_start() - code->instruction_start(); + addr += original_code->instruction_start() - code->instruction_start(); } // Move back to where the call instruction sequence started. 
- thread_local_.after_break_target_ = - addr - Assembler::kPatchReturnSequenceAddressOffset; + after_break_target_ = addr - Assembler::kPatchReturnSequenceAddressOffset; } else if (at_debug_break_slot) { // Address of where the debug break slot starts. addr = addr - Assembler::kPatchDebugBreakSlotAddressOffset; // Continue just after the slot. - thread_local_.after_break_target_ = addr + Assembler::kDebugBreakSlotLength; - } else if (IsDebugBreak(Assembler::target_address_at(addr, *code))) { - // We now know that there is still a debug break call at the target address, - // so the break point is still there and the original code will hold the - // address to jump to in order to complete the call which is replaced by a - // call to DebugBreakXXX. - - // Find the corresponding address in the original code. - addr += original_code->instruction_start() - code->instruction_start(); - - // Install jump to the call address in the original code. This will be the - // call which was overwritten by the call to DebugBreakXXX. - thread_local_.after_break_target_ = - Assembler::target_address_at(addr, *original_code); + after_break_target_ = addr + Assembler::kDebugBreakSlotLength; } else { - // There is no longer a break point present. Don't try to look in the - // original code as the running code will have the right address. This takes - // care of the case where the last break point is removed from the function - // and therefore no "original code" is available. - thread_local_.after_break_target_ = - Assembler::target_address_at(addr, *code); + addr = Assembler::target_address_from_return_address(frame->pc()); + if (IsDebugBreak(Assembler::target_address_at(addr, *code))) { + // We now know that there is still a debug break call at the target + // address, so the break point is still there and the original code will + // hold the address to jump to in order to complete the call which is + // replaced by a call to DebugBreakXXX. + + // Find the corresponding address in the original code. + addr += original_code->instruction_start() - code->instruction_start(); + + // Install jump to the call address in the original code. This will be the + // call which was overwritten by the call to DebugBreakXXX. + after_break_target_ = Assembler::target_address_at(addr, *original_code); + } else { + // There is no longer a break point present. Don't try to look in the + // original code as the running code will have the right address. This + // takes care of the case where the last break point is removed from the + // function and therefore no "original code" is available. + after_break_target_ = Assembler::target_address_at(addr, *code); + } } } @@ -2511,11 +2411,11 @@ bool Debug::IsBreakAtReturn(JavaScriptFrame* frame) { #ifdef DEBUG // Get the code which is actually executing. Handle<Code> frame_code(frame->LookupCode()); - ASSERT(frame_code.is_identical_to(code)); + DCHECK(frame_code.is_identical_to(code)); #endif // Find the call address in the running code. - Address addr = frame->pc() - Assembler::kPatchDebugBreakSlotReturnOffset; + Address addr = Assembler::break_address_from_return_address(frame->pc()); // Check if the location is at JS return. 
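Several branches in SetAfterBreakTarget rebase an address from the instrumented code object into the original copy by adding the difference between the two instruction_start() addresses. That arithmetic in isolation, with two byte arrays standing in for the code objects (illustrative only):

    #include <cassert>
    #include <cstdint>

    int main() {
      uint8_t code[32];           // Running (instrumented) copy.
      uint8_t original_code[32];  // Original copy of the same instructions.

      uint8_t* addr = code + 12;  // Some address inside the running copy.

      // Rebase to the same position in the other copy. V8 spells this as
      // addr += original_code->instruction_start() - code->instruction_start().
      uint8_t* original_addr = original_code + (addr - code);

      assert(original_addr - original_code == 12);  // Same offset, other copy.
      return 0;
    }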
RelocIterator it(debug_info->code()); @@ -2531,9 +2431,9 @@ bool Debug::IsBreakAtReturn(JavaScriptFrame* frame) { void Debug::FramesHaveBeenDropped(StackFrame::Id new_break_frame_id, - FrameDropMode mode, + LiveEdit::FrameDropMode mode, Object** restarter_frame_function_pointer) { - if (mode != CURRENTLY_SET_MODE) { + if (mode != LiveEdit::CURRENTLY_SET_MODE) { thread_local_.frame_drop_mode_ = mode; } thread_local_.break_frame_id_ = new_break_frame_id; @@ -2542,94 +2442,32 @@ void Debug::FramesHaveBeenDropped(StackFrame::Id new_break_frame_id, } -const int Debug::FramePaddingLayout::kInitialSize = 1; - - -// Any even value bigger than kInitialSize as needed for stack scanning. -const int Debug::FramePaddingLayout::kPaddingValue = kInitialSize + 1; - - bool Debug::IsDebugGlobal(GlobalObject* global) { - return IsLoaded() && global == debug_context()->global_object(); + return is_loaded() && global == debug_context()->global_object(); } void Debug::ClearMirrorCache() { PostponeInterruptsScope postpone(isolate_); HandleScope scope(isolate_); - ASSERT(isolate_->context() == *Debug::debug_context()); - - // Clear the mirror cache. - Handle<String> function_name = isolate_->factory()->InternalizeOneByteString( - STATIC_ASCII_VECTOR("ClearMirrorCache")); - Handle<Object> fun = Object::GetProperty( - isolate_->global_object(), function_name).ToHandleChecked(); - ASSERT(fun->IsJSFunction()); - Execution::TryCall( - Handle<JSFunction>::cast(fun), - Handle<JSObject>(Debug::debug_context()->global_object()), - 0, - NULL); -} - - -void Debug::CreateScriptCache() { - Heap* heap = isolate_->heap(); - HandleScope scope(isolate_); - - // Perform two GCs to get rid of all unreferenced scripts. The first GC gets - // rid of all the cached script wrappers and the second gets rid of the - // scripts which are no longer referenced. The second also sweeps precisely, - // which saves us doing yet another GC to make the heap iterable. - heap->CollectAllGarbage(Heap::kNoGCFlags, "Debug::CreateScriptCache"); - heap->CollectAllGarbage(Heap::kMakeHeapIterableMask, - "Debug::CreateScriptCache"); - - ASSERT(script_cache_ == NULL); - script_cache_ = new ScriptCache(isolate_); - - // Scan heap for Script objects. - int count = 0; - HeapIterator iterator(heap); - DisallowHeapAllocation no_allocation; - - for (HeapObject* obj = iterator.next(); obj != NULL; obj = iterator.next()) { - if (obj->IsScript() && Script::cast(obj)->HasValidSource()) { - script_cache_->Add(Handle<Script>(Script::cast(obj))); - count++; - } - } -} - - -void Debug::DestroyScriptCache() { - // Get rid of the script cache if it was created. - if (script_cache_ != NULL) { - delete script_cache_; - script_cache_ = NULL; - } -} - - -void Debug::AddScriptToScriptCache(Handle<Script> script) { - if (script_cache_ != NULL) { - script_cache_->Add(script); - } + AssertDebugContext(); + Factory* factory = isolate_->factory(); + Handle<GlobalObject> global(isolate_->global_object()); + JSObject::SetProperty(global, + factory->NewStringFromAsciiChecked("next_handle_"), + handle(Smi::FromInt(0), isolate_), + SLOPPY).Check(); + JSObject::SetProperty(global, + factory->NewStringFromAsciiChecked("mirror_cache_"), + factory->NewJSArray(0, FAST_ELEMENTS), + SLOPPY).Check(); } Handle<FixedArray> Debug::GetLoadedScripts() { // Create and fill the script cache when the loaded scripts is requested for // the first time. - if (script_cache_ == NULL) { - CreateScriptCache(); - } - - // If the script cache is not active just return an empty array. 
- ASSERT(script_cache_ != NULL); - if (script_cache_ == NULL) { - isolate_->factory()->NewFixedArray(0); - } + if (script_cache_ == NULL) script_cache_ = new ScriptCache(isolate_); // Perform GC to get unreferenced scripts evicted from the cache before // returning the content. @@ -2656,146 +2494,109 @@ void Debug::RecordEvalCaller(Handle<Script> script) { } -void Debug::AfterGarbageCollection() { - // Generate events for collected scripts. - if (script_cache_ != NULL) { - script_cache_->ProcessCollectedScripts(); - } -} - - -Debugger::Debugger(Isolate* isolate) - : debugger_access_(isolate->debugger_access()), - event_listener_(Handle<Object>()), - event_listener_data_(Handle<Object>()), - compiling_natives_(false), - is_loading_debugger_(false), - live_edit_enabled_(true), - never_unload_debugger_(false), - force_debugger_active_(false), - message_handler_(NULL), - debugger_unload_pending_(false), - host_dispatch_handler_(NULL), - debug_message_dispatch_handler_(NULL), - message_dispatch_helper_thread_(NULL), - host_dispatch_period_(TimeDelta::FromMilliseconds(100)), - agent_(NULL), - command_queue_(isolate->logger(), kQueueInitialSize), - command_received_(0), - event_command_queue_(isolate->logger(), kQueueInitialSize), - isolate_(isolate) { -} - - -Debugger::~Debugger() {} - - -MaybeHandle<Object> Debugger::MakeJSObject( - Vector<const char> constructor_name, - int argc, - Handle<Object> argv[]) { - ASSERT(isolate_->context() == *isolate_->debug()->debug_context()); - +MaybeHandle<Object> Debug::MakeJSObject(const char* constructor_name, + int argc, + Handle<Object> argv[]) { + AssertDebugContext(); // Create the execution state object. - Handle<String> constructor_str = - isolate_->factory()->InternalizeUtf8String(constructor_name); - ASSERT(!constructor_str.is_null()); + Handle<GlobalObject> global(isolate_->global_object()); Handle<Object> constructor = Object::GetProperty( - isolate_->global_object(), constructor_str).ToHandleChecked(); - ASSERT(constructor->IsJSFunction()); + isolate_, global, constructor_name).ToHandleChecked(); + DCHECK(constructor->IsJSFunction()); if (!constructor->IsJSFunction()) return MaybeHandle<Object>(); - return Execution::TryCall( - Handle<JSFunction>::cast(constructor), - Handle<JSObject>(isolate_->debug()->debug_context()->global_object()), - argc, - argv); + // We do not handle interrupts here. In particular, termination interrupts. + PostponeInterruptsScope no_interrupts(isolate_); + return Execution::TryCall(Handle<JSFunction>::cast(constructor), + handle(debug_context()->global_proxy()), + argc, + argv); } -MaybeHandle<Object> Debugger::MakeExecutionState() { +MaybeHandle<Object> Debug::MakeExecutionState() { // Create the execution state object. - Handle<Object> break_id = isolate_->factory()->NewNumberFromInt( - isolate_->debug()->break_id()); - Handle<Object> argv[] = { break_id }; - return MakeJSObject(CStrVector("MakeExecutionState"), ARRAY_SIZE(argv), argv); + Handle<Object> argv[] = { isolate_->factory()->NewNumberFromInt(break_id()) }; + return MakeJSObject("MakeExecutionState", ARRAY_SIZE(argv), argv); } -MaybeHandle<Object> Debugger::MakeBreakEvent(Handle<Object> break_points_hit) { - Handle<Object> exec_state; - if (!MakeExecutionState().ToHandle(&exec_state)) return MaybeHandle<Object>(); +MaybeHandle<Object> Debug::MakeBreakEvent(Handle<Object> break_points_hit) { // Create the new break event object. 
- Handle<Object> argv[] = { exec_state, break_points_hit }; - return MakeJSObject(CStrVector("MakeBreakEvent"), ARRAY_SIZE(argv), argv); + Handle<Object> argv[] = { isolate_->factory()->NewNumberFromInt(break_id()), + break_points_hit }; + return MakeJSObject("MakeBreakEvent", ARRAY_SIZE(argv), argv); } -MaybeHandle<Object> Debugger::MakeExceptionEvent(Handle<Object> exception, - bool uncaught, - Handle<Object> promise) { - Handle<Object> exec_state; - if (!MakeExecutionState().ToHandle(&exec_state)) return MaybeHandle<Object>(); +MaybeHandle<Object> Debug::MakeExceptionEvent(Handle<Object> exception, + bool uncaught, + Handle<Object> promise) { // Create the new exception event object. - Handle<Object> argv[] = { exec_state, + Handle<Object> argv[] = { isolate_->factory()->NewNumberFromInt(break_id()), exception, isolate_->factory()->ToBoolean(uncaught), promise }; - return MakeJSObject(CStrVector("MakeExceptionEvent"), ARRAY_SIZE(argv), argv); + return MakeJSObject("MakeExceptionEvent", ARRAY_SIZE(argv), argv); } -MaybeHandle<Object> Debugger::MakeCompileEvent(Handle<Script> script, - bool before) { - Handle<Object> exec_state; - if (!MakeExecutionState().ToHandle(&exec_state)) return MaybeHandle<Object>(); +MaybeHandle<Object> Debug::MakeCompileEvent(Handle<Script> script, + v8::DebugEvent type) { // Create the compile event object. Handle<Object> script_wrapper = Script::GetWrapper(script); - Handle<Object> argv[] = { exec_state, - script_wrapper, - isolate_->factory()->ToBoolean(before) }; - return MakeJSObject(CStrVector("MakeCompileEvent"), ARRAY_SIZE(argv), argv); + Handle<Object> argv[] = { script_wrapper, + isolate_->factory()->NewNumberFromInt(type) }; + return MakeJSObject("MakeCompileEvent", ARRAY_SIZE(argv), argv); } -MaybeHandle<Object> Debugger::MakeScriptCollectedEvent(int id) { - Handle<Object> exec_state; - if (!MakeExecutionState().ToHandle(&exec_state)) return MaybeHandle<Object>(); - // Create the script collected event object. - Handle<Object> id_object = Handle<Smi>(Smi::FromInt(id), isolate_); - Handle<Object> argv[] = { exec_state, id_object }; +MaybeHandle<Object> Debug::MakePromiseEvent(Handle<JSObject> event_data) { + // Create the promise event object. + Handle<Object> argv[] = { event_data }; + return MakeJSObject("MakePromiseEvent", ARRAY_SIZE(argv), argv); +} - return MakeJSObject( - CStrVector("MakeScriptCollectedEvent"), ARRAY_SIZE(argv), argv); + +MaybeHandle<Object> Debug::MakeAsyncTaskEvent(Handle<JSObject> task_event) { + // Create the async task event object. 
+ Handle<Object> argv[] = { task_event }; + return MakeJSObject("MakeAsyncTaskEvent", ARRAY_SIZE(argv), argv); } -void Debugger::OnException(Handle<Object> exception, bool uncaught) { +void Debug::OnThrow(Handle<Object> exception, bool uncaught) { + if (in_debug_scope() || ignore_events()) return; HandleScope scope(isolate_); - Debug* debug = isolate_->debug(); + OnException(exception, uncaught, GetPromiseOnStackOnThrow()); +} - // Bail out based on state or if there is no listener for this event - if (debug->InDebugger()) return; - if (!Debugger::EventActive(v8::Exception)) return; - Handle<Object> promise = debug->GetPromiseForUncaughtException(); - uncaught |= !promise->IsUndefined(); +void Debug::OnPromiseReject(Handle<JSObject> promise, Handle<Object> value) { + if (in_debug_scope() || ignore_events()) return; + HandleScope scope(isolate_); + OnException(value, false, promise); +} + +void Debug::OnException(Handle<Object> exception, bool uncaught, + Handle<Object> promise) { + if (promise->IsJSObject()) { + uncaught |= !PromiseHasRejectHandler(Handle<JSObject>::cast(promise)); + } // Bail out if exception breaks are not active if (uncaught) { // Uncaught exceptions are reported by either flags. - if (!(debug->break_on_uncaught_exception() || - debug->break_on_exception())) return; + if (!(break_on_uncaught_exception_ || break_on_exception_)) return; } else { // Caught exceptions are reported is activated. - if (!debug->break_on_exception()) return; + if (!break_on_exception_) return; } - // Enter the debugger. - EnterDebugger debugger(isolate_); - if (debugger.FailedToEnter()) return; + DebugScope debug_scope(this); + if (debug_scope.failed()) return; // Clear all current stepping setup. - debug->ClearStepping(); + ClearStepping(); // Create the event data object. Handle<Object> event_data; @@ -2811,19 +2612,32 @@ void Debugger::OnException(Handle<Object> exception, bool uncaught) { } -void Debugger::OnDebugBreak(Handle<Object> break_points_hit, - bool auto_continue) { +void Debug::OnCompileError(Handle<Script> script) { + // No more to do if not debugging. + if (in_debug_scope() || ignore_events()) return; + HandleScope scope(isolate_); + DebugScope debug_scope(this); + if (debug_scope.failed()) return; - // Debugger has already been entered by caller. - ASSERT(isolate_->context() == *isolate_->debug()->debug_context()); + // Create the compile state object. + Handle<Object> event_data; + // Bail out and don't call debugger if exception. + if (!MakeCompileEvent(script, v8::CompileError).ToHandle(&event_data)) return; - // Bail out if there is no listener for this event - if (!Debugger::EventActive(v8::Break)) return; + // Process debug event. + ProcessDebugEvent(v8::CompileError, Handle<JSObject>::cast(event_data), true); +} - // Debugger must be entered in advance. - ASSERT(isolate_->context() == *isolate_->debug()->debug_context()); +void Debug::OnDebugBreak(Handle<Object> break_points_hit, + bool auto_continue) { + // The caller provided for DebugScope. + AssertDebugContext(); + // Bail out if there is no listener for this event + if (ignore_events()) return; + + HandleScope scope(isolate_); // Create the event data object. Handle<Object> event_data; // Bail out and don't call debugger if exception. 
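The rewritten Debug::On* entry points above all share one preamble: bail out early when already inside a debug scope or when no listener makes the event worth building, and only then open a DebugScope. The shape of that guard as a stand-alone sketch (stand-in types, not V8's classes):

    #include <cstdio>

    // Stand-in for the early-out preamble shared by the Debug::On* handlers.
    struct MiniDebug {
      bool in_debug_scope = false;  // Already handling a debug event?
      bool is_active = true;        // Any listener or message handler set?

      bool ignore_events() const { return !is_active; }

      void OnEvent(const char* name) {
        if (in_debug_scope || ignore_events()) return;  // Cheap early out.
        // ... open a DebugScope, build the event object, dispatch ...
        std::printf("dispatch %s\n", name);
      }
    };

    int main() {
      MiniDebug debug;
      debug.OnEvent("throw");      // Dispatched.
      debug.in_debug_scope = true;
      debug.OnEvent("reentrant");  // Ignored: already in a debug scope.
      return 0;
    }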
@@ -2836,22 +2650,18 @@ void Debugger::OnDebugBreak(Handle<Object> break_points_hit, } -void Debugger::OnBeforeCompile(Handle<Script> script) { - HandleScope scope(isolate_); - - // Bail out based on state or if there is no listener for this event - if (isolate_->debug()->InDebugger()) return; - if (compiling_natives()) return; - if (!EventActive(v8::BeforeCompile)) return; +void Debug::OnBeforeCompile(Handle<Script> script) { + if (in_debug_scope() || ignore_events()) return; - // Enter the debugger. - EnterDebugger debugger(isolate_); - if (debugger.FailedToEnter()) return; + HandleScope scope(isolate_); + DebugScope debug_scope(this); + if (debug_scope.failed()) return; // Create the event data object. Handle<Object> event_data; // Bail out and don't call debugger if exception. - if (!MakeCompileEvent(script, true).ToHandle(&event_data)) return; + if (!MakeCompileEvent(script, v8::BeforeCompile).ToHandle(&event_data)) + return; // Process debug event. ProcessDebugEvent(v8::BeforeCompile, @@ -2861,26 +2671,16 @@ void Debugger::OnBeforeCompile(Handle<Script> script) { // Handle debugger actions when a new script is compiled. -void Debugger::OnAfterCompile(Handle<Script> script, - AfterCompileFlags after_compile_flags) { - HandleScope scope(isolate_); - Debug* debug = isolate_->debug(); - +void Debug::OnAfterCompile(Handle<Script> script) { // Add the newly compiled script to the script cache. - debug->AddScriptToScriptCache(script); + if (script_cache_ != NULL) script_cache_->Add(script); // No more to do if not debugging. - if (!IsDebuggerActive()) return; - - // No compile events while compiling natives. - if (compiling_natives()) return; - - // Store whether in debugger before entering debugger. - bool in_debugger = debug->InDebugger(); + if (in_debug_scope() || ignore_events()) return; - // Enter the debugger. - EnterDebugger debugger(isolate_); - if (debugger.FailedToEnter()) return; + HandleScope scope(isolate_); + DebugScope debug_scope(this); + if (debug_scope.failed()) return; // If debugging there might be script break points registered for this // script. Make sure that these break points are set. @@ -2889,14 +2689,14 @@ void Debugger::OnAfterCompile(Handle<Script> script, Handle<String> update_script_break_points_string = isolate_->factory()->InternalizeOneByteString( STATIC_ASCII_VECTOR("UpdateScriptBreakPoints")); - Handle<GlobalObject> debug_global(debug->debug_context()->global_object()); + Handle<GlobalObject> debug_global(debug_context()->global_object()); Handle<Object> update_script_break_points = Object::GetProperty( debug_global, update_script_break_points_string).ToHandleChecked(); if (!update_script_break_points->IsJSFunction()) { return; } - ASSERT(update_script_break_points->IsJSFunction()); + DCHECK(update_script_break_points->IsJSFunction()); // Wrap the script object in a proper JS object before passing it // to JavaScript. @@ -2910,54 +2710,60 @@ void Debugger::OnAfterCompile(Handle<Script> script, argv).is_null()) { return; } - // Bail out based on state or if there is no listener for this event - if (in_debugger && (after_compile_flags & SEND_WHEN_DEBUGGING) == 0) return; - if (!Debugger::EventActive(v8::AfterCompile)) return; // Create the compile state object. Handle<Object> event_data; // Bail out and don't call debugger if exception. - if (!MakeCompileEvent(script, false).ToHandle(&event_data)) return; + if (!MakeCompileEvent(script, v8::AfterCompile).ToHandle(&event_data)) return; // Process debug event. 
ProcessDebugEvent(v8::AfterCompile, Handle<JSObject>::cast(event_data), true); } -void Debugger::OnScriptCollected(int id) { +void Debug::OnPromiseEvent(Handle<JSObject> data) { + if (in_debug_scope() || ignore_events()) return; + HandleScope scope(isolate_); + DebugScope debug_scope(this); + if (debug_scope.failed()) return; - // No more to do if not debugging. - if (isolate_->debug()->InDebugger()) return; - if (!IsDebuggerActive()) return; - if (!Debugger::EventActive(v8::ScriptCollected)) return; + // Create the script collected state object. + Handle<Object> event_data; + // Bail out and don't call debugger if exception. + if (!MakePromiseEvent(data).ToHandle(&event_data)) return; - // Enter the debugger. - EnterDebugger debugger(isolate_); - if (debugger.FailedToEnter()) return; + // Process debug event. + ProcessDebugEvent(v8::PromiseEvent, + Handle<JSObject>::cast(event_data), + true); +} + + +void Debug::OnAsyncTaskEvent(Handle<JSObject> data) { + if (in_debug_scope() || ignore_events()) return; + + HandleScope scope(isolate_); + DebugScope debug_scope(this); + if (debug_scope.failed()) return; // Create the script collected state object. Handle<Object> event_data; // Bail out and don't call debugger if exception. - if (!MakeScriptCollectedEvent(id).ToHandle(&event_data)) return; + if (!MakeAsyncTaskEvent(data).ToHandle(&event_data)) return; // Process debug event. - ProcessDebugEvent(v8::ScriptCollected, + ProcessDebugEvent(v8::AsyncTaskEvent, Handle<JSObject>::cast(event_data), true); } -void Debugger::ProcessDebugEvent(v8::DebugEvent event, - Handle<JSObject> event_data, - bool auto_continue) { +void Debug::ProcessDebugEvent(v8::DebugEvent event, + Handle<JSObject> event_data, + bool auto_continue) { HandleScope scope(isolate_); - // Clear any pending debug break if this is a real break. - if (!auto_continue) { - isolate_->debug()->clear_interrupt_pending(DEBUGBREAK); - } - // Create the execution state. Handle<Object> exec_state; // Bail out and don't call debugger if exception. @@ -2992,87 +2798,52 @@ void Debugger::ProcessDebugEvent(v8::DebugEvent event, } -void Debugger::CallEventCallback(v8::DebugEvent event, - Handle<Object> exec_state, - Handle<Object> event_data, - v8::Debug::ClientData* client_data) { +void Debug::CallEventCallback(v8::DebugEvent event, + Handle<Object> exec_state, + Handle<Object> event_data, + v8::Debug::ClientData* client_data) { if (event_listener_->IsForeign()) { - CallCEventCallback(event, exec_state, event_data, client_data); + // Invoke the C debug event listener. + v8::Debug::EventCallback callback = + FUNCTION_CAST<v8::Debug::EventCallback>( + Handle<Foreign>::cast(event_listener_)->foreign_address()); + EventDetailsImpl event_details(event, + Handle<JSObject>::cast(exec_state), + Handle<JSObject>::cast(event_data), + event_listener_data_, + client_data); + callback(event_details); + DCHECK(!isolate_->has_scheduled_exception()); } else { - CallJSEventCallback(event, exec_state, event_data); + // Invoke the JavaScript debug event listener. 
+ DCHECK(event_listener_->IsJSFunction()); + Handle<Object> argv[] = { Handle<Object>(Smi::FromInt(event), isolate_), + exec_state, + event_data, + event_listener_data_ }; + Handle<JSReceiver> global(isolate_->global_proxy()); + Execution::TryCall(Handle<JSFunction>::cast(event_listener_), + global, ARRAY_SIZE(argv), argv); } } -void Debugger::CallCEventCallback(v8::DebugEvent event, - Handle<Object> exec_state, - Handle<Object> event_data, - v8::Debug::ClientData* client_data) { - Handle<Foreign> callback_obj(Handle<Foreign>::cast(event_listener_)); - v8::Debug::EventCallback2 callback = - FUNCTION_CAST<v8::Debug::EventCallback2>( - callback_obj->foreign_address()); - EventDetailsImpl event_details( - event, - Handle<JSObject>::cast(exec_state), - Handle<JSObject>::cast(event_data), - event_listener_data_, - client_data); - callback(event_details); +Handle<Context> Debug::GetDebugContext() { + DebugScope debug_scope(this); + // The global handle may be destroyed soon after. Return it reboxed. + return handle(*debug_context(), isolate_); } -void Debugger::CallJSEventCallback(v8::DebugEvent event, - Handle<Object> exec_state, - Handle<Object> event_data) { - ASSERT(event_listener_->IsJSFunction()); - Handle<JSFunction> fun(Handle<JSFunction>::cast(event_listener_)); - - // Invoke the JavaScript debug event listener. - Handle<Object> argv[] = { Handle<Object>(Smi::FromInt(event), isolate_), - exec_state, - event_data, - event_listener_data_ }; - Execution::TryCall(fun, - isolate_->global_object(), - ARRAY_SIZE(argv), - argv); - // Silently ignore exceptions from debug event listeners. -} - - -Handle<Context> Debugger::GetDebugContext() { - never_unload_debugger_ = true; - EnterDebugger debugger(isolate_); - return isolate_->debug()->debug_context(); -} - - -void Debugger::UnloadDebugger() { - Debug* debug = isolate_->debug(); - - // Make sure that there are no breakpoints left. - debug->ClearAllBreakPoints(); - - // Unload the debugger if feasible. - if (!never_unload_debugger_) { - debug->Unload(); - } - - // Clear the flag indicating that the debugger should be unloaded. - debugger_unload_pending_ = false; -} - - -void Debugger::NotifyMessageHandler(v8::DebugEvent event, - Handle<JSObject> exec_state, - Handle<JSObject> event_data, - bool auto_continue) { - v8::Isolate* isolate = reinterpret_cast<v8::Isolate*>(isolate_); +void Debug::NotifyMessageHandler(v8::DebugEvent event, + Handle<JSObject> exec_state, + Handle<JSObject> event_data, + bool auto_continue) { + // Prevent other interrupts from triggering, for example API callbacks, + // while dispatching message handler callbacks. + PostponeInterruptsScope no_interrupts(isolate_); + DCHECK(is_active_); HandleScope scope(isolate_); - - if (!isolate_->debug()->Load()) return; - // Process the individual events. bool sendEventMessage = false; switch (event) { @@ -3088,9 +2859,6 @@ void Debugger::NotifyMessageHandler(v8::DebugEvent event, case v8::AfterCompile: sendEventMessage = true; break; - case v8::ScriptCollected: - sendEventMessage = true; - break; case v8::NewFunction: break; default: @@ -3100,8 +2868,8 @@ void Debugger::NotifyMessageHandler(v8::DebugEvent event, // The debug command interrupt flag might have been set when the command was // added. It should be enough to clear the flag only once while we are in the // debugger. 
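The merged CallEventCallback keeps the old two-path dispatch: a foreign (C) listener is recovered by casting the stored foreign address back to a function pointer via FUNCTION_CAST, while a JS listener goes through Execution::TryCall. A minimal sketch of the foreign-pointer leg; the round trip through uintptr_t mirrors the idea, though FUNCTION_CAST's exact mechanics are not shown in this diff:

    #include <cstdint>
    #include <cstdio>

    typedef void (*EventCallback)(int event);

    void MyListener(int event) { std::printf("event %d\n", event); }

    int main() {
      // The listener is stored as an opaque integer "foreign address" ...
      uintptr_t foreign_address = reinterpret_cast<uintptr_t>(&MyListener);
      // ... and cast back to a typed function pointer at dispatch time.
      EventCallback callback = reinterpret_cast<EventCallback>(foreign_address);
      callback(1);
      return 0;
    }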
- ASSERT(isolate_->debug()->InDebugger()); - isolate_->stack_guard()->Continue(DEBUGCOMMAND); + DCHECK(in_debug_scope()); + isolate_->stack_guard()->ClearDebugCommand(); // Notify the debugger that a debug event has occurred unless auto continue is // active in which case no event is send. @@ -3118,217 +2886,135 @@ void Debugger::NotifyMessageHandler(v8::DebugEvent event, // in the queue if any. For script collected events don't even process // messages in the queue as the execution state might not be what is expected // by the client. - if ((auto_continue && !HasCommands()) || event == v8::ScriptCollected) { - return; - } - - v8::TryCatch try_catch; + if (auto_continue && !has_commands()) return; // DebugCommandProcessor goes here. - v8::Local<v8::Object> cmd_processor; - { - v8::Local<v8::Object> api_exec_state = - v8::Utils::ToLocal(Handle<JSObject>::cast(exec_state)); - v8::Local<v8::String> fun_name = v8::String::NewFromUtf8( - isolate, "debugCommandProcessor"); - v8::Local<v8::Function> fun = - v8::Local<v8::Function>::Cast(api_exec_state->Get(fun_name)); - - v8::Handle<v8::Boolean> running = v8::Boolean::New(isolate, auto_continue); - static const int kArgc = 1; - v8::Handle<Value> argv[kArgc] = { running }; - cmd_processor = v8::Local<v8::Object>::Cast( - fun->Call(api_exec_state, kArgc, argv)); - if (try_catch.HasCaught()) { - PrintLn(try_catch.Exception()); - return; - } - } - bool running = auto_continue; + Handle<Object> cmd_processor_ctor = Object::GetProperty( + isolate_, exec_state, "debugCommandProcessor").ToHandleChecked(); + Handle<Object> ctor_args[] = { isolate_->factory()->ToBoolean(running) }; + Handle<Object> cmd_processor = Execution::Call( + isolate_, cmd_processor_ctor, exec_state, 1, ctor_args).ToHandleChecked(); + Handle<JSFunction> process_debug_request = Handle<JSFunction>::cast( + Object::GetProperty( + isolate_, cmd_processor, "processDebugRequest").ToHandleChecked()); + Handle<Object> is_running = Object::GetProperty( + isolate_, cmd_processor, "isRunning").ToHandleChecked(); + // Process requests from the debugger. - while (true) { + do { // Wait for new command in the queue. - if (Debugger::host_dispatch_handler_) { - // In case there is a host dispatch - do periodic dispatches. - if (!command_received_.WaitFor(host_dispatch_period_)) { - // Timout expired, do the dispatch. - Debugger::host_dispatch_handler_(); - continue; - } - } else { - // In case there is no host dispatch - just wait. - command_received_.Wait(); - } + command_received_.Wait(); // Get the command from the queue. CommandMessage command = command_queue_.Get(); isolate_->logger()->DebugTag( "Got request from command queue, in interactive loop."); - if (!Debugger::IsDebuggerActive()) { + if (!is_active()) { // Delete command text and user data. command.Dispose(); return; } - // Invoke JavaScript to process the debug request. - v8::Local<v8::String> fun_name; - v8::Local<v8::Function> fun; - v8::Local<v8::Value> request; - v8::TryCatch try_catch; - fun_name = v8::String::NewFromUtf8(isolate, "processDebugRequest"); - fun = v8::Local<v8::Function>::Cast(cmd_processor->Get(fun_name)); - - request = v8::String::NewFromTwoByte(isolate, command.text().start(), - v8::String::kNormalString, - command.text().length()); - static const int kArgc = 1; - v8::Handle<Value> argv[kArgc] = { request }; - v8::Local<v8::Value> response_val = fun->Call(cmd_processor, kArgc, argv); - - // Get the response. - v8::Local<v8::String> response; - if (!try_catch.HasCaught()) { - // Get response string. 
- if (!response_val->IsUndefined()) { - response = v8::Local<v8::String>::Cast(response_val); + Vector<const uc16> command_text( + const_cast<const uc16*>(command.text().start()), + command.text().length()); + Handle<String> request_text = isolate_->factory()->NewStringFromTwoByte( + command_text).ToHandleChecked(); + Handle<Object> request_args[] = { request_text }; + Handle<Object> exception; + Handle<Object> answer_value; + Handle<String> answer; + MaybeHandle<Object> maybe_result = Execution::TryCall( + process_debug_request, cmd_processor, 1, request_args, &exception); + + if (maybe_result.ToHandle(&answer_value)) { + if (answer_value->IsUndefined()) { + answer = isolate_->factory()->empty_string(); } else { - response = v8::String::NewFromUtf8(isolate, ""); + answer = Handle<String>::cast(answer_value); } // Log the JSON request/response. if (FLAG_trace_debug_json) { - PrintLn(request); - PrintLn(response); + PrintF("%s\n", request_text->ToCString().get()); + PrintF("%s\n", answer->ToCString().get()); } - // Get the running state. - fun_name = v8::String::NewFromUtf8(isolate, "isRunning"); - fun = v8::Local<v8::Function>::Cast(cmd_processor->Get(fun_name)); - static const int kArgc = 1; - v8::Handle<Value> argv[kArgc] = { response }; - v8::Local<v8::Value> running_val = fun->Call(cmd_processor, kArgc, argv); - if (!try_catch.HasCaught()) { - running = running_val->ToBoolean()->Value(); - } + Handle<Object> is_running_args[] = { answer }; + maybe_result = Execution::Call( + isolate_, is_running, cmd_processor, 1, is_running_args); + running = maybe_result.ToHandleChecked()->IsTrue(); } else { - // In case of failure the result text is the exception text. - response = try_catch.Exception()->ToString(); + answer = Handle<String>::cast( + Execution::ToString(isolate_, exception).ToHandleChecked()); } // Return the result. MessageImpl message = MessageImpl::NewResponse( - event, - running, - Handle<JSObject>::cast(exec_state), - Handle<JSObject>::cast(event_data), - Handle<String>(Utils::OpenHandle(*response)), - command.client_data()); + event, running, exec_state, event_data, answer, command.client_data()); InvokeMessageHandler(message); command.Dispose(); // Return from debug event processing if either the VM is put into the // running state (through a continue command) or auto continue is active // and there are no more commands queued. - if (running && !HasCommands()) { - return; - } - } + } while (!running || has_commands()); } -void Debugger::SetEventListener(Handle<Object> callback, - Handle<Object> data) { - HandleScope scope(isolate_); +void Debug::SetEventListener(Handle<Object> callback, + Handle<Object> data) { GlobalHandles* global_handles = isolate_->global_handles(); - // Clear the global handles for the event listener and the event listener data - // object. - if (!event_listener_.is_null()) { - GlobalHandles::Destroy( - reinterpret_cast<Object**>(event_listener_.location())); - event_listener_ = Handle<Object>(); - } - if (!event_listener_data_.is_null()) { - GlobalHandles::Destroy( - reinterpret_cast<Object**>(event_listener_data_.location())); - event_listener_data_ = Handle<Object>(); - } + // Remove existing entry. + GlobalHandles::Destroy(event_listener_.location()); + event_listener_ = Handle<Object>(); + GlobalHandles::Destroy(event_listener_data_.location()); + event_listener_data_ = Handle<Object>(); - // If there is a new debug event listener register it together with its data - // object. + // Set new entry. 
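NotifyMessageHandler's interactive loop is now a do/while: block on command_received_, pop a CommandMessage, run it through processDebugRequest, and exit only once the VM is set running and the queue is drained. A stripped-down pump with std::condition_variable standing in for V8's semaphore and plain strings for commands (a sketch of the control flow only, not the real message plumbing):

    #include <condition_variable>
    #include <iostream>
    #include <mutex>
    #include <queue>
    #include <string>
    #include <thread>

    std::mutex mutex;
    std::condition_variable command_received;
    std::queue<std::string> commands;

    void Pump() {
      bool running = false;
      for (;;) {
        std::string command;
        {
          std::unique_lock<std::mutex> lock(mutex);
          command_received.wait(lock, [] { return !commands.empty(); });
          command = commands.front();
          commands.pop();
        }
        std::cout << "processing: " << command << "\n";
        if (command == "continue") running = true;  // The "isRunning" answer.
        std::lock_guard<std::mutex> lock(mutex);
        // Same exit condition as `while (!running || has_commands())`.
        if (running && commands.empty()) break;
      }
    }

    int main() {
      std::thread client([] {
        for (const char* c : {"evaluate 1+1", "continue"}) {
          { std::lock_guard<std::mutex> lock(mutex); commands.push(c); }
          command_received.notify_one();  // Like command_received_.Signal().
        }
      });
      Pump();
      client.join();
      return 0;
    }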
if (!callback->IsUndefined() && !callback->IsNull()) { - event_listener_ = Handle<Object>::cast( - global_handles->Create(*callback)); - if (data.is_null()) { - data = isolate_->factory()->undefined_value(); - } - event_listener_data_ = Handle<Object>::cast( - global_handles->Create(*data)); + event_listener_ = global_handles->Create(*callback); + if (data.is_null()) data = isolate_->factory()->undefined_value(); + event_listener_data_ = global_handles->Create(*data); } - ListenersChanged(); + UpdateState(); } -void Debugger::SetMessageHandler(v8::Debug::MessageHandler2 handler) { - LockGuard<RecursiveMutex> with(debugger_access_); - +void Debug::SetMessageHandler(v8::Debug::MessageHandler handler) { message_handler_ = handler; - ListenersChanged(); - if (handler == NULL) { + UpdateState(); + if (handler == NULL && in_debug_scope()) { // Send an empty command to the debugger if in a break to make JavaScript // run again if the debugger is closed. - if (isolate_->debug()->InDebugger()) { - ProcessCommand(Vector<const uint16_t>::empty()); - } + EnqueueCommandMessage(Vector<const uint16_t>::empty()); } } -void Debugger::ListenersChanged() { - bool active = IsDebuggerActive(); - if (active) { - // Disable the compilation cache when the debugger is active. + +void Debug::UpdateState() { + is_active_ = message_handler_ != NULL || !event_listener_.is_null(); + if (is_active_ || in_debug_scope()) { + // Note that the debug context could have already been loaded to + // bootstrap test cases. isolate_->compilation_cache()->Disable(); - debugger_unload_pending_ = false; - } else { + is_active_ = Load(); + } else if (is_loaded()) { isolate_->compilation_cache()->Enable(); - // Unload the debugger if event listener and message handler cleared. - // Schedule this for later, because we may be in non-V8 thread. - debugger_unload_pending_ = true; - } -} - - -void Debugger::SetHostDispatchHandler(v8::Debug::HostDispatchHandler handler, - TimeDelta period) { - host_dispatch_handler_ = handler; - host_dispatch_period_ = period; -} - - -void Debugger::SetDebugMessageDispatchHandler( - v8::Debug::DebugMessageDispatchHandler handler, bool provide_locker) { - LockGuard<Mutex> lock_guard(&dispatch_handler_access_); - debug_message_dispatch_handler_ = handler; - - if (provide_locker && message_dispatch_helper_thread_ == NULL) { - message_dispatch_helper_thread_ = new MessageDispatchHelperThread(isolate_); - message_dispatch_helper_thread_->Start(); + Unload(); } } // Calls the registered debug message handler. This callback is part of the // public API. -void Debugger::InvokeMessageHandler(MessageImpl message) { - LockGuard<RecursiveMutex> with(debugger_access_); - - if (message_handler_ != NULL) { - message_handler_(message); - } +void Debug::InvokeMessageHandler(MessageImpl message) { + if (message_handler_ != NULL) message_handler_(message); } @@ -3336,8 +3022,8 @@ void Debugger::InvokeMessageHandler(MessageImpl message) { // a copy of the command string managed by the debugger. Up to this // point, the command data was managed by the API client. Called // by the API client thread. -void Debugger::ProcessCommand(Vector<const uint16_t> command, - v8::Debug::ClientData* client_data) { +void Debug::EnqueueCommandMessage(Vector<const uint16_t> command, + v8::Debug::ClientData* client_data) { // Need to cast away const. 
CommandMessage message = CommandMessage::New( Vector<uint16_t>(const_cast<uint16_t*>(command.start()), @@ -3348,59 +3034,22 @@ void Debugger::ProcessCommand(Vector<const uint16_t> command, command_received_.Signal(); // Set the debug command break flag to have the command processed. - if (!isolate_->debug()->InDebugger()) { - isolate_->stack_guard()->DebugCommand(); - } - - MessageDispatchHelperThread* dispatch_thread; - { - LockGuard<Mutex> lock_guard(&dispatch_handler_access_); - dispatch_thread = message_dispatch_helper_thread_; - } - - if (dispatch_thread == NULL) { - CallMessageDispatchHandler(); - } else { - dispatch_thread->Schedule(); - } + if (!in_debug_scope()) isolate_->stack_guard()->RequestDebugCommand(); } -bool Debugger::HasCommands() { - return !command_queue_.IsEmpty(); -} - - -void Debugger::EnqueueDebugCommand(v8::Debug::ClientData* client_data) { +void Debug::EnqueueDebugCommand(v8::Debug::ClientData* client_data) { CommandMessage message = CommandMessage::New(Vector<uint16_t>(), client_data); event_command_queue_.Put(message); // Set the debug command break flag to have the command processed. - if (!isolate_->debug()->InDebugger()) { - isolate_->stack_guard()->DebugCommand(); - } -} - - -bool Debugger::IsDebuggerActive() { - LockGuard<RecursiveMutex> with(debugger_access_); - - return message_handler_ != NULL || - !event_listener_.is_null() || - force_debugger_active_; + if (!in_debug_scope()) isolate_->stack_guard()->RequestDebugCommand(); } -MaybeHandle<Object> Debugger::Call(Handle<JSFunction> fun, - Handle<Object> data) { - // When calling functions in the debugger prevent it from beeing unloaded. - Debugger::never_unload_debugger_ = true; - - // Enter the debugger. - EnterDebugger debugger(isolate_); - if (debugger.FailedToEnter()) { - return isolate_->factory()->undefined_value(); - } +MaybeHandle<Object> Debug::Call(Handle<JSFunction> fun, Handle<Object> data) { + DebugScope debug_scope(this); + if (debug_scope.failed()) return isolate_->factory()->undefined_value(); // Create the execution state. Handle<Object> exec_state; @@ -3412,151 +3061,112 @@ MaybeHandle<Object> Debugger::Call(Handle<JSFunction> fun, return Execution::Call( isolate_, fun, - Handle<Object>(isolate_->debug()->debug_context_->global_proxy(), - isolate_), + Handle<Object>(debug_context()->global_proxy(), isolate_), ARRAY_SIZE(argv), argv); } -static void StubMessageHandler2(const v8::Debug::Message& message) { - // Simply ignore message. -} +void Debug::HandleDebugBreak() { + // Ignore debug break during bootstrapping. + if (isolate_->bootstrapper()->IsActive()) return; + // Just continue if breaks are disabled. + if (break_disabled_) return; + // Ignore debug break if debugger is not active. + if (!is_active()) return; + StackLimitCheck check(isolate_); + if (check.HasOverflowed()) return; -bool Debugger::StartAgent(const char* name, int port, - bool wait_for_connection) { - if (wait_for_connection) { - // Suspend V8 if it is already running or set V8 to suspend whenever - // it starts. - // Provide stub message handler; V8 auto-continues each suspend - // when there is no message handler; we doesn't need it. - // Once become suspended, V8 will stay so indefinitely long, until remote - // debugger connects and issues "continue" command. 
- Debugger::message_handler_ = StubMessageHandler2; - v8::Debug::DebugBreak(reinterpret_cast<v8::Isolate*>(isolate_)); + { JavaScriptFrameIterator it(isolate_); + DCHECK(!it.done()); + Object* fun = it.frame()->function(); + if (fun && fun->IsJSFunction()) { + // Don't stop in builtin functions. + if (JSFunction::cast(fun)->IsBuiltin()) return; + GlobalObject* global = JSFunction::cast(fun)->context()->global_object(); + // Don't stop in debugger functions. + if (IsDebugGlobal(global)) return; + } } - if (agent_ == NULL) { - agent_ = new DebuggerAgent(isolate_, name, port); - agent_->Start(); - } - return true; -} + // Collect the break state before clearing the flags. + bool debug_command_only = isolate_->stack_guard()->CheckDebugCommand() && + !isolate_->stack_guard()->CheckDebugBreak(); + isolate_->stack_guard()->ClearDebugBreak(); -void Debugger::StopAgent() { - if (agent_ != NULL) { - agent_->Shutdown(); - agent_->Join(); - delete agent_; - agent_ = NULL; - } + ProcessDebugMessages(debug_command_only); } -void Debugger::WaitForAgent() { - if (agent_ != NULL) - agent_->WaitUntilListening(); -} +void Debug::ProcessDebugMessages(bool debug_command_only) { + isolate_->stack_guard()->ClearDebugCommand(); + StackLimitCheck check(isolate_); + if (check.HasOverflowed()) return; -void Debugger::CallMessageDispatchHandler() { - v8::Debug::DebugMessageDispatchHandler handler; - { - LockGuard<Mutex> lock_guard(&dispatch_handler_access_); - handler = Debugger::debug_message_dispatch_handler_; - } - if (handler != NULL) { - handler(); - } -} + HandleScope scope(isolate_); + DebugScope debug_scope(this); + if (debug_scope.failed()) return; + // Notify the debug event listeners. Indicate auto continue if the break was + // a debug command break. + OnDebugBreak(isolate_->factory()->undefined_value(), debug_command_only); +} -EnterDebugger::EnterDebugger(Isolate* isolate) - : isolate_(isolate), - prev_(isolate_->debug()->debugger_entry()), - it_(isolate_), - has_js_frames_(!it_.done()), - save_(isolate_) { - Debug* debug = isolate_->debug(); - ASSERT(prev_ != NULL || !debug->is_interrupt_pending(PREEMPT)); - ASSERT(prev_ != NULL || !debug->is_interrupt_pending(DEBUGBREAK)); +DebugScope::DebugScope(Debug* debug) + : debug_(debug), + prev_(debug->debugger_entry()), + save_(debug_->isolate_), + no_termination_exceptons_(debug_->isolate_, + StackGuard::TERMINATE_EXECUTION) { // Link recursive debugger entry. - debug->set_debugger_entry(this); + debug_->thread_local_.current_debug_scope_ = this; // Store the previous break id and frame id. - break_id_ = debug->break_id(); - break_frame_id_ = debug->break_frame_id(); + break_id_ = debug_->break_id(); + break_frame_id_ = debug_->break_frame_id(); // Create the new break info. If there is no JavaScript frames there is no // break frame id. - if (has_js_frames_) { - debug->NewBreak(it_.frame()->id()); - } else { - debug->NewBreak(StackFrame::NO_ID); - } + JavaScriptFrameIterator it(isolate()); + bool has_js_frames = !it.done(); + debug_->thread_local_.break_frame_id_ = has_js_frames ? it.frame()->id() + : StackFrame::NO_ID; + debug_->SetNextBreakId(); + debug_->UpdateState(); // Make sure that debugger is loaded and enter the debugger context. - load_failed_ = !debug->Load(); - if (!load_failed_) { - // NOTE the member variable save which saves the previous context before - // this change. - isolate_->set_context(*debug->debug_context()); - } + // The previous context is kept in save_. 
+ failed_ = !debug_->is_loaded(); + if (!failed_) isolate()->set_context(*debug->debug_context()); } -EnterDebugger::~EnterDebugger() { - Debug* debug = isolate_->debug(); - - // Restore to the previous break state. - debug->SetBreak(break_frame_id_, break_id_); - // Check for leaving the debugger. - if (!load_failed_ && prev_ == NULL) { +DebugScope::~DebugScope() { + if (!failed_ && prev_ == NULL) { // Clear mirror cache when leaving the debugger. Skip this if there is a // pending exception as clearing the mirror cache calls back into // JavaScript. This can happen if the v8::Debug::Call is used in which // case the exception should end up in the calling code. - if (!isolate_->has_pending_exception()) { - // Try to avoid any pending debug break breaking in the clear mirror - // cache JavaScript code. - if (isolate_->stack_guard()->IsDebugBreak()) { - debug->set_interrupts_pending(DEBUGBREAK); - isolate_->stack_guard()->Continue(DEBUGBREAK); - } - debug->ClearMirrorCache(); - } - - // Request preemption and debug break when leaving the last debugger entry - // if any of these where recorded while debugging. - if (debug->is_interrupt_pending(PREEMPT)) { - // This re-scheduling of preemption is to avoid starvation in some - // debugging scenarios. - debug->clear_interrupt_pending(PREEMPT); - isolate_->stack_guard()->Preempt(); - } - if (debug->is_interrupt_pending(DEBUGBREAK)) { - debug->clear_interrupt_pending(DEBUGBREAK); - isolate_->stack_guard()->DebugBreak(); - } + if (!isolate()->has_pending_exception()) debug_->ClearMirrorCache(); // If there are commands in the queue when leaving the debugger request // that these commands are processed. - if (isolate_->debugger()->HasCommands()) { - isolate_->stack_guard()->DebugCommand(); - } - - // If leaving the debugger with the debugger no longer active unload it. - if (!isolate_->debugger()->IsDebuggerActive()) { - isolate_->debugger()->UnloadDebugger(); - } + if (debug_->has_commands()) isolate()->stack_guard()->RequestDebugCommand(); } // Leaving this debugger entry. - debug->set_debugger_entry(prev_); + debug_->thread_local_.current_debug_scope_ = prev_; + + // Restore to the previous break state. + debug_->thread_local_.break_frame_id_ = break_frame_id_; + debug_->thread_local_.break_id_ = break_id_; + + debug_->UpdateState(); } @@ -3662,7 +3272,7 @@ v8::Handle<v8::Context> MessageImpl::GetEventContext() const { Isolate* isolate = event_data_->GetIsolate(); v8::Handle<v8::Context> context = GetDebugEventContext(isolate); // Isolate::context() may be NULL when "script collected" event occures. 
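DebugScope replaces EnterDebugger as the RAII guard for debugger entry: the constructor links itself into the per-thread scope chain and snapshots the break id and break frame id, and the destructor restores them and unlinks in reverse order. The save/restore shape in miniature (stand-in fields only):

    #include <cassert>

    struct DebugState {
      int break_id = 0;
      void* current_scope = nullptr;
    };

    // RAII guard in the style of DebugScope: snapshot on entry, restore on
    // exit, with nesting supported via the saved previous-scope pointer.
    class Scope {
     public:
      explicit Scope(DebugState* state)
          : state_(state),
            prev_scope_(state->current_scope),
            saved_break_id_(state->break_id) {
        state_->current_scope = this;            // Link recursive entry.
        state_->break_id = saved_break_id_ + 1;  // Like SetNextBreakId().
      }
      ~Scope() {
        state_->break_id = saved_break_id_;   // Restore previous break state.
        state_->current_scope = prev_scope_;  // Unlink this entry.
      }

     private:
      DebugState* state_;
      void* prev_scope_;
      int saved_break_id_;
    };

    int main() {
      DebugState state;
      {
        Scope outer(&state);
        {
          Scope inner(&state);
          assert(state.break_id == 2);
        }
        assert(state.break_id == 1);
      }
      assert(state.break_id == 0 && state.current_scope == nullptr);
      return 0;
    }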
- ASSERT(!context.IsEmpty() || event_ == v8::ScriptCollected); + DCHECK(!context.IsEmpty()); return context; } @@ -3726,10 +3336,6 @@ CommandMessage::CommandMessage(const Vector<uint16_t>& text, } -CommandMessage::~CommandMessage() { -} - - void CommandMessage::Dispose() { text_.Dispose(); delete client_data_; @@ -3750,16 +3356,13 @@ CommandMessageQueue::CommandMessageQueue(int size) : start_(0), end_(0), CommandMessageQueue::~CommandMessageQueue() { - while (!IsEmpty()) { - CommandMessage m = Get(); - m.Dispose(); - } + while (!IsEmpty()) Get().Dispose(); DeleteArray(messages_); } CommandMessage CommandMessageQueue::Get() { - ASSERT(!IsEmpty()); + DCHECK(!IsEmpty()); int result = start_; start_ = (start_ + 1) % size_; return messages_[result]; @@ -3794,13 +3397,13 @@ LockingCommandMessageQueue::LockingCommandMessageQueue(Logger* logger, int size) bool LockingCommandMessageQueue::IsEmpty() const { - LockGuard<Mutex> lock_guard(&mutex_); + base::LockGuard<base::Mutex> lock_guard(&mutex_); return queue_.IsEmpty(); } CommandMessage LockingCommandMessageQueue::Get() { - LockGuard<Mutex> lock_guard(&mutex_); + base::LockGuard<base::Mutex> lock_guard(&mutex_); CommandMessage result = queue_.Get(); logger_->DebugEvent("Get", result.text()); return result; @@ -3808,49 +3411,15 @@ CommandMessage LockingCommandMessageQueue::Get() { void LockingCommandMessageQueue::Put(const CommandMessage& message) { - LockGuard<Mutex> lock_guard(&mutex_); + base::LockGuard<base::Mutex> lock_guard(&mutex_); queue_.Put(message); logger_->DebugEvent("Put", message.text()); } void LockingCommandMessageQueue::Clear() { - LockGuard<Mutex> lock_guard(&mutex_); + base::LockGuard<base::Mutex> lock_guard(&mutex_); queue_.Clear(); } - -MessageDispatchHelperThread::MessageDispatchHelperThread(Isolate* isolate) - : Thread("v8:MsgDispHelpr"), - isolate_(isolate), sem_(0), - already_signalled_(false) { -} - - -void MessageDispatchHelperThread::Schedule() { - { - LockGuard<Mutex> lock_guard(&mutex_); - if (already_signalled_) { - return; - } - already_signalled_ = true; - } - sem_.Signal(); -} - - -void MessageDispatchHelperThread::Run() { - while (true) { - sem_.Wait(); - { - LockGuard<Mutex> lock_guard(&mutex_); - already_signalled_ = false; - } - { - Locker locker(reinterpret_cast<v8::Isolate*>(isolate_)); - isolate_->debugger()->CallMessageDispatchHandler(); - } - } -} - } } // namespace v8::internal diff --git a/deps/v8/src/debug.h b/deps/v8/src/debug.h index 457a5fad84b..e60e1aaab8f 100644 --- a/deps/v8/src/debug.h +++ b/deps/v8/src/debug.h @@ -5,27 +5,27 @@ #ifndef V8_DEBUG_H_ #define V8_DEBUG_H_ -#include "allocation.h" -#include "arguments.h" -#include "assembler.h" -#include "debug-agent.h" -#include "execution.h" -#include "factory.h" -#include "flags.h" -#include "frames-inl.h" -#include "hashmap.h" -#include "platform.h" -#include "string-stream.h" -#include "v8threads.h" - -#include "../include/v8-debug.h" +#include "src/allocation.h" +#include "src/arguments.h" +#include "src/assembler.h" +#include "src/base/platform/platform.h" +#include "src/execution.h" +#include "src/factory.h" +#include "src/flags.h" +#include "src/frames-inl.h" +#include "src/hashmap.h" +#include "src/liveedit.h" +#include "src/string-stream.h" +#include "src/v8threads.h" + +#include "include/v8-debug.h" namespace v8 { namespace internal { // Forward declarations. -class EnterDebugger; +class DebugScope; // Step actions. NOTE: These values are in macros.py as well. 
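CommandMessageQueue, adjusted above, is a fixed-capacity ring buffer indexed by start_/end_ modulo its size; LockingCommandMessageQueue just wraps each operation in a base::LockGuard. The indexing scheme as a minimal standalone ring buffer (the real queue's growth policy when full is not shown in this diff, so the sketch simply asserts):

    #include <cassert>

    // Minimal ring buffer in the style of CommandMessageQueue. One slot is
    // left unused so that start_ == end_ unambiguously means "empty".
    template <typename T, int kSize>
    class RingBuffer {
     public:
      bool IsEmpty() const { return start_ == end_; }

      void Put(const T& value) {
        items_[end_] = value;
        end_ = (end_ + 1) % kSize;
        assert(end_ != start_);  // Overflow handling omitted in this sketch.
      }

      T Get() {
        assert(!IsEmpty());  // DCHECK(!IsEmpty()) in the V8 version.
        T result = items_[start_];
        start_ = (start_ + 1) % kSize;
        return result;
      }

     private:
      T items_[kSize];
      int start_ = 0;
      int end_ = 0;
    };

    int main() {
      RingBuffer<int, 4> queue;
      queue.Put(1);
      queue.Put(2);
      assert(queue.Get() == 1 && queue.Get() == 2 && queue.IsEmpty());
      return 0;
    }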
@@ -150,10 +150,7 @@ class BreakLocationIterator { // the cache is the script id. class ScriptCache : private HashMap { public: - explicit ScriptCache(Isolate* isolate) - : HashMap(HashMap::PointersMatch), - isolate_(isolate), - collected_scripts_(10) {} + explicit ScriptCache(Isolate* isolate); virtual ~ScriptCache() { Clear(); } // Add script to the cache. @@ -162,9 +159,6 @@ class ScriptCache : private HashMap { // Return the scripts in the cache. Handle<FixedArray> GetScripts(); - // Generate debugger events for collected scripts. - void ProcessCollectedScripts(); - private: // Calculate the hash value from the key (script id). static uint32_t Hash(int key) { @@ -179,8 +173,6 @@ class ScriptCache : private HashMap { const v8::WeakCallbackData<v8::Value, void>& data); Isolate* isolate_; - // List used during GC to temporarily store id's of collected scripts. - List<int> collected_scripts_; }; @@ -203,410 +195,6 @@ class DebugInfoListNode { DebugInfoListNode* next_; }; -// This class contains the debugger support. The main purpose is to handle -// setting break points in the code. -// -// This class controls the debug info for all functions which currently have -// active breakpoints in them. This debug info is held in the heap root object -// debug_info which is a FixedArray. Each entry in this list is of class -// DebugInfo. -class Debug { - public: - bool Load(); - void Unload(); - bool IsLoaded() { return !debug_context_.is_null(); } - bool InDebugger() { return thread_local_.debugger_entry_ != NULL; } - void PreemptionWhileInDebugger(); - - Object* Break(Arguments args); - void SetBreakPoint(Handle<JSFunction> function, - Handle<Object> break_point_object, - int* source_position); - bool SetBreakPointForScript(Handle<Script> script, - Handle<Object> break_point_object, - int* source_position, - BreakPositionAlignment alignment); - void ClearBreakPoint(Handle<Object> break_point_object); - void ClearAllBreakPoints(); - void FloodWithOneShot(Handle<JSFunction> function); - void FloodBoundFunctionWithOneShot(Handle<JSFunction> function); - void FloodHandlerWithOneShot(); - void ChangeBreakOnException(ExceptionBreakType type, bool enable); - bool IsBreakOnException(ExceptionBreakType type); - - void PromiseHandlePrologue(Handle<JSFunction> promise_getter); - void PromiseHandleEpilogue(); - // Returns a promise if it does not have a reject handler. - Handle<Object> GetPromiseForUncaughtException(); - - void PrepareStep(StepAction step_action, - int step_count, - StackFrame::Id frame_id); - void ClearStepping(); - void ClearStepOut(); - bool IsStepping() { return thread_local_.step_count_ > 0; } - bool StepNextContinue(BreakLocationIterator* break_location_iterator, - JavaScriptFrame* frame); - static Handle<DebugInfo> GetDebugInfo(Handle<SharedFunctionInfo> shared); - static bool HasDebugInfo(Handle<SharedFunctionInfo> shared); - - void PrepareForBreakPoints(); - - // This function is used in FunctionNameUsing* tests. - Object* FindSharedFunctionInfoInScript(Handle<Script> script, int position); - - // Returns whether the operation succeeded. Compilation can only be triggered - // if a valid closure is passed as the second argument, otherwise the shared - // function needs to be compiled already. - bool EnsureDebugInfo(Handle<SharedFunctionInfo> shared, - Handle<JSFunction> function); - - // Returns true if the current stub call is patched to call the debugger. 
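ScriptCache, slimmed down above now that script-collected events are gone, privately inherits from HashMap and keys entries by script id. The wrap-by-private-inheritance pattern, sketched with std::unordered_map as a stand-in for V8's HashMap (an illustration of the class shape, not of V8's API):

    #include <cassert>
    #include <string>
    #include <unordered_map>

    // A cache keyed by integer id that exposes only the operations it
    // wants, in the style of `class ScriptCache : private HashMap`.
    class ScriptCache : private std::unordered_map<int, std::string> {
     public:
      void Add(int id, const std::string& source) { (*this)[id] = source; }
      bool Has(int id) const { return count(id) != 0; }
      size_t Size() const { return size(); }
    };

    int main() {
      ScriptCache cache;
      cache.Add(7, "function f() {}");
      assert(cache.Has(7) && cache.Size() == 1);
      // Private inheritance hides the rest of the map interface:
      // cache.begin();  // would not compile
      return 0;
    }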
- static bool IsDebugBreak(Address addr); - // Returns true if the current return statement has been patched to be - // a debugger breakpoint. - static bool IsDebugBreakAtReturn(RelocInfo* rinfo); - - // Check whether a code stub with the specified major key is a possible break - // point location. - static bool IsSourceBreakStub(Code* code); - static bool IsBreakStub(Code* code); - - // Find the builtin to use for invoking the debug break - static Handle<Code> FindDebugBreak(Handle<Code> code, RelocInfo::Mode mode); - - static Handle<Object> GetSourceBreakLocations( - Handle<SharedFunctionInfo> shared, - BreakPositionAlignment position_aligment); - - // Getter for the debug_context. - inline Handle<Context> debug_context() { return debug_context_; } - - // Check whether a global object is the debug global object. - bool IsDebugGlobal(GlobalObject* global); - - // Check whether this frame is just about to return. - bool IsBreakAtReturn(JavaScriptFrame* frame); - - // Fast check to see if any break points are active. - inline bool has_break_points() { return has_break_points_; } - - void NewBreak(StackFrame::Id break_frame_id); - void SetBreak(StackFrame::Id break_frame_id, int break_id); - StackFrame::Id break_frame_id() { - return thread_local_.break_frame_id_; - } - int break_id() { return thread_local_.break_id_; } - - bool StepInActive() { return thread_local_.step_into_fp_ != 0; } - void HandleStepIn(Handle<JSFunction> function, - Handle<Object> holder, - Address fp, - bool is_constructor); - Address step_in_fp() { return thread_local_.step_into_fp_; } - Address* step_in_fp_addr() { return &thread_local_.step_into_fp_; } - - bool StepOutActive() { return thread_local_.step_out_fp_ != 0; } - Address step_out_fp() { return thread_local_.step_out_fp_; } - - EnterDebugger* debugger_entry() { - return thread_local_.debugger_entry_; - } - void set_debugger_entry(EnterDebugger* entry) { - thread_local_.debugger_entry_ = entry; - } - - // Check whether any of the specified interrupts are pending. - bool is_interrupt_pending(InterruptFlag what) { - return (thread_local_.pending_interrupts_ & what) != 0; - } - - // Set specified interrupts as pending. - void set_interrupts_pending(InterruptFlag what) { - thread_local_.pending_interrupts_ |= what; - } - - // Clear specified interrupts from pending. - void clear_interrupt_pending(InterruptFlag what) { - thread_local_.pending_interrupts_ &= ~static_cast<int>(what); - } - - // Getter and setter for the disable break state. - bool disable_break() { return disable_break_; } - void set_disable_break(bool disable_break) { - disable_break_ = disable_break; - } - - // Getters for the current exception break state. - bool break_on_exception() { return break_on_exception_; } - bool break_on_uncaught_exception() { - return break_on_uncaught_exception_; - } - - enum AddressId { - k_after_break_target_address, - k_restarter_frame_function_pointer - }; - - // Support for setting the address to jump to when returning from break point. - Address* after_break_target_address() { - return reinterpret_cast<Address*>(&thread_local_.after_break_target_); - } - Address* restarter_frame_function_pointer_address() { - Object*** address = &thread_local_.restarter_frame_function_pointer_; - return reinterpret_cast<Address*>(address); - } - - // Support for saving/restoring registers when handling debug break calls. 
- Object** register_address(int r) { - return &registers_[r]; - } - - static const int kEstimatedNofDebugInfoEntries = 16; - static const int kEstimatedNofBreakPointsInFunction = 16; - - // Passed to MakeWeak. - static void HandleWeakDebugInfo( - const v8::WeakCallbackData<v8::Value, void>& data); - - friend class Debugger; - friend Handle<FixedArray> GetDebuggedFunctions(); // In test-debug.cc - friend void CheckDebuggerUnloaded(bool check_functions); // In test-debug.cc - - // Threading support. - char* ArchiveDebug(char* to); - char* RestoreDebug(char* from); - static int ArchiveSpacePerThread(); - void FreeThreadResources() { } - - // Mirror cache handling. - void ClearMirrorCache(); - - // Script cache handling. - void CreateScriptCache(); - void DestroyScriptCache(); - void AddScriptToScriptCache(Handle<Script> script); - Handle<FixedArray> GetLoadedScripts(); - - // Record function from which eval was called. - static void RecordEvalCaller(Handle<Script> script); - - // Garbage collection notifications. - void AfterGarbageCollection(); - - // Code generator routines. - static void GenerateSlot(MacroAssembler* masm); - static void GenerateCallICStubDebugBreak(MacroAssembler* masm); - static void GenerateLoadICDebugBreak(MacroAssembler* masm); - static void GenerateStoreICDebugBreak(MacroAssembler* masm); - static void GenerateKeyedLoadICDebugBreak(MacroAssembler* masm); - static void GenerateKeyedStoreICDebugBreak(MacroAssembler* masm); - static void GenerateCompareNilICDebugBreak(MacroAssembler* masm); - static void GenerateReturnDebugBreak(MacroAssembler* masm); - static void GenerateCallFunctionStubDebugBreak(MacroAssembler* masm); - static void GenerateCallConstructStubDebugBreak(MacroAssembler* masm); - static void GenerateCallConstructStubRecordDebugBreak(MacroAssembler* masm); - static void GenerateSlotDebugBreak(MacroAssembler* masm); - static void GeneratePlainReturnLiveEdit(MacroAssembler* masm); - - // FrameDropper is a code replacement for a JavaScript frame with possibly - // several frames above. - // There are no calling conventions here, because it never actually gets - // called, it only gets returned to. - static void GenerateFrameDropperLiveEdit(MacroAssembler* masm); - - // Describes how exactly a frame has been dropped from stack. - enum FrameDropMode { - // No frame has been dropped. - FRAMES_UNTOUCHED, - // The top JS frame had been calling IC stub. IC stub mustn't be called now. - FRAME_DROPPED_IN_IC_CALL, - // The top JS frame had been calling debug break slot stub. Patch the - // address this stub jumps to in the end. - FRAME_DROPPED_IN_DEBUG_SLOT_CALL, - // The top JS frame had been calling some C++ function. The return address - // gets patched automatically. - FRAME_DROPPED_IN_DIRECT_CALL, - FRAME_DROPPED_IN_RETURN_CALL, - CURRENTLY_SET_MODE - }; - - void FramesHaveBeenDropped(StackFrame::Id new_break_frame_id, - FrameDropMode mode, - Object** restarter_frame_function_pointer); - - // Initializes an artificial stack frame. The data it contains is used for: - // a. successful work of frame dropper code which eventually gets control, - // b. being compatible with regular stack structure for various stack - // iterators. - // Returns address of stack allocated pointer to restarted function, - // the value that is called 'restarter_frame_function_pointer'. The value - // at this address (possibly updated by GC) may be used later when preparing - // 'step in' operation.
- static Object** SetUpFrameDropperFrame(StackFrame* bottom_js_frame, - Handle<Code> code); - - static const int kFrameDropperFrameSize; - - // Architecture-specific constant. - static const bool kFrameDropperSupported; - - /** - * Defines layout of a stack frame that supports padding. This is a regular - * internal frame that has a flexible stack structure. LiveEdit can shift - * its lower part up the stack, taking up the 'padding' space when additional - * stack memory is required. - * Such frame is expected immediately above the topmost JavaScript frame. - * - * Stack Layout: - * --- Top - * LiveEdit routine frames - * --- - * C frames of debug handler - * --- - * ... - * --- - * An internal frame that has n padding words: - * - any number of words as needed by code -- upper part of frame - * - padding size: a Smi storing n -- current size of padding - * - padding: n words filled with kPaddingValue in form of Smi - * - 3 context/type words of a regular InternalFrame - * - fp - * --- - * Topmost JavaScript frame - * --- - * ... - * --- Bottom - */ - class FramePaddingLayout : public AllStatic { - public: - // Architecture-specific constant. - static const bool kIsSupported; - - // A size of frame base including fp. Padding words starts right above - // the base. - static const int kFrameBaseSize = 4; - - // A number of words that should be reserved on stack for the LiveEdit use. - // Normally equals 1. Stored on stack in form of Smi. - static const int kInitialSize; - // A value that padding words are filled with (in form of Smi). Going - // bottom-top, the first word not having this value is a counter word. - static const int kPaddingValue; - }; - - private: - explicit Debug(Isolate* isolate); - ~Debug(); - - static bool CompileDebuggerScript(Isolate* isolate, int index); - void ClearOneShot(); - void ActivateStepIn(StackFrame* frame); - void ClearStepIn(); - void ActivateStepOut(StackFrame* frame); - void ClearStepNext(); - // Returns whether the compile succeeded. - void RemoveDebugInfo(Handle<DebugInfo> debug_info); - void SetAfterBreakTarget(JavaScriptFrame* frame); - Handle<Object> CheckBreakPoints(Handle<Object> break_point); - bool CheckBreakPoint(Handle<Object> break_point_object); - - void MaybeRecompileFunctionForDebugging(Handle<JSFunction> function); - void RecompileAndRelocateSuspendedGenerators( - const List<Handle<JSGeneratorObject> > &suspended_generators); - - // Global handle to debug context where all the debugger JavaScript code is - // loaded. - Handle<Context> debug_context_; - - // Boolean state indicating whether any break points are set. - bool has_break_points_; - - // Cache of all scripts in the heap. - ScriptCache* script_cache_; - - // List of active debug info objects. - DebugInfoListNode* debug_info_list_; - - bool disable_break_; - bool break_on_exception_; - bool break_on_uncaught_exception_; - - // When a promise is being resolved, we may want to trigger a debug event for - // the case we catch a throw. For this purpose we remember the try-catch - // handler address that would catch the exception. We also hold onto a - // closure that returns a promise if the exception is considered uncaught. - // Due to the possibility of reentry we use a list to form a stack. - List<StackHandler*> promise_catch_handlers_; - List<Handle<JSFunction> > promise_getters_; - - // Per-thread data. - class ThreadLocal { - public: - // Counter for generating next break id. - int break_count_; - - // Current break id. 
- int break_id_; - - // Frame id for the frame of the current break. - StackFrame::Id break_frame_id_; - - // Step action for last step performed. - StepAction last_step_action_; - - // Source statement position from last step next action. - int last_statement_position_; - - // Number of steps left to perform before debug event. - int step_count_; - - // Frame pointer from last step next action. - Address last_fp_; - - // Number of queued steps left to perform before debug event. - int queued_step_count_; - - // Frame pointer for frame from which step in was performed. - Address step_into_fp_; - - // Frame pointer for the frame where debugger should be called when current - // step out action is completed. - Address step_out_fp_; - - // Storage location for jump when exiting debug break calls. - Address after_break_target_; - - // Stores the way how LiveEdit has patched the stack. It is used when - // debugger returns control back to user script. - FrameDropMode frame_drop_mode_; - - // Top debugger entry. - EnterDebugger* debugger_entry_; - - // Pending interrupts scheduled while debugging. - int pending_interrupts_; - - // When restarter frame is on stack, stores the address - // of the pointer to function being restarted. Otherwise (most of the time) - // stores NULL. This pointer is used with 'step in' implementation. - Object** restarter_frame_function_pointer_; - }; - - // Storage location for registers when handling debug break calls - JSCallerSavedBuffer registers_; - ThreadLocal thread_local_; - void ThreadInit(); - - Isolate* isolate_; - - friend class Isolate; - - DISALLOW_COPY_AND_ASSIGN(Debug); -}; - - -DECLARE_RUNTIME_FUNCTION(Debug_Break); // Message delivered to the message handler callback. This is either a debugger @@ -691,7 +279,6 @@ class CommandMessage { static CommandMessage New(const Vector<uint16_t>& command, v8::Debug::ClientData* data); CommandMessage(); - ~CommandMessage(); // Deletes user data and disposes of the text. void Dispose(); @@ -705,6 +292,7 @@ class CommandMessage { v8::Debug::ClientData* client_data_; }; + // A Queue of CommandMessage objects. A thread-safe version is // LockingCommandMessageQueue, based on this class. class CommandMessageQueue BASE_EMBEDDED { @@ -726,9 +314,6 @@ class CommandMessageQueue BASE_EMBEDDED { }; -class MessageDispatchHelperThread; - - // LockingCommandMessageQueue is a thread-safe circular buffer of CommandMessage // messages. The message data is not managed by LockingCommandMessageQueue. // Pointers to the data are passed in and out. Implemented by adding a @@ -743,19 +328,209 @@ class LockingCommandMessageQueue BASE_EMBEDDED { private: Logger* logger_; CommandMessageQueue queue_; - mutable Mutex mutex_; + mutable base::Mutex mutex_; DISALLOW_COPY_AND_ASSIGN(LockingCommandMessageQueue); }; -class Debugger { +class PromiseOnStack { public: - ~Debugger(); + PromiseOnStack(Isolate* isolate, PromiseOnStack* prev, + Handle<JSObject> getter); + ~PromiseOnStack(); + StackHandler* handler() { return handler_; } + Handle<JSObject> promise() { return promise_; } + PromiseOnStack* prev() { return prev_; } + private: + Isolate* isolate_; + StackHandler* handler_; + Handle<JSObject> promise_; + PromiseOnStack* prev_; +}; - void DebugRequest(const uint16_t* json_request, int length); +// This class contains the debugger support. The main purpose is to handle +// setting break points in the code. +// +// This class controls the debug info for all functions which currently have +// active breakpoints in them. 
This debug info is held in the heap root object +// debug_info which is a FixedArray. Each entry in this list is of class +// DebugInfo. +class Debug { + public: + // Debug event triggers. + void OnDebugBreak(Handle<Object> break_points_hit, bool auto_continue); + + void OnThrow(Handle<Object> exception, bool uncaught); + void OnPromiseReject(Handle<JSObject> promise, Handle<Object> value); + void OnCompileError(Handle<Script> script); + void OnBeforeCompile(Handle<Script> script); + void OnAfterCompile(Handle<Script> script); + void OnPromiseEvent(Handle<JSObject> data); + void OnAsyncTaskEvent(Handle<JSObject> data); + + // API facing. + void SetEventListener(Handle<Object> callback, Handle<Object> data); + void SetMessageHandler(v8::Debug::MessageHandler handler); + void EnqueueCommandMessage(Vector<const uint16_t> command, + v8::Debug::ClientData* client_data = NULL); + // Enqueue a debugger command to the command queue for event listeners. + void EnqueueDebugCommand(v8::Debug::ClientData* client_data = NULL); + MUST_USE_RESULT MaybeHandle<Object> Call(Handle<JSFunction> fun, + Handle<Object> data); + Handle<Context> GetDebugContext(); + void HandleDebugBreak(); + void ProcessDebugMessages(bool debug_command_only); + + // Internal logic + bool Load(); + void Break(Arguments args, JavaScriptFrame*); + void SetAfterBreakTarget(JavaScriptFrame* frame); + + // Scripts handling. + Handle<FixedArray> GetLoadedScripts(); + + // Break point handling. + bool SetBreakPoint(Handle<JSFunction> function, + Handle<Object> break_point_object, + int* source_position); + bool SetBreakPointForScript(Handle<Script> script, + Handle<Object> break_point_object, + int* source_position, + BreakPositionAlignment alignment); + void ClearBreakPoint(Handle<Object> break_point_object); + void ClearAllBreakPoints(); + void FloodWithOneShot(Handle<JSFunction> function); + void FloodBoundFunctionWithOneShot(Handle<JSFunction> function); + void FloodHandlerWithOneShot(); + void ChangeBreakOnException(ExceptionBreakType type, bool enable); + bool IsBreakOnException(ExceptionBreakType type); + + // Stepping handling. + void PrepareStep(StepAction step_action, + int step_count, + StackFrame::Id frame_id); + void ClearStepping(); + void ClearStepOut(); + bool IsStepping() { return thread_local_.step_count_ > 0; } + bool StepNextContinue(BreakLocationIterator* break_location_iterator, + JavaScriptFrame* frame); + bool StepInActive() { return thread_local_.step_into_fp_ != 0; } + void HandleStepIn(Handle<JSFunction> function, + Handle<Object> holder, + Address fp, + bool is_constructor); + bool StepOutActive() { return thread_local_.step_out_fp_ != 0; } + + // Purge all code objects that have no debug break slots. + void PrepareForBreakPoints(); + + // Returns whether the operation succeeded. Compilation can only be triggered + // if a valid closure is passed as the second argument, otherwise the shared + // function needs to be compiled already. + bool EnsureDebugInfo(Handle<SharedFunctionInfo> shared, + Handle<JSFunction> function); + static Handle<DebugInfo> GetDebugInfo(Handle<SharedFunctionInfo> shared); + static bool HasDebugInfo(Handle<SharedFunctionInfo> shared); + + // This function is used in FunctionNameUsing* tests. + Object* FindSharedFunctionInfoInScript(Handle<Script> script, int position); + + // Returns true if the current stub call is patched to call the debugger. + static bool IsDebugBreak(Address addr); + // Returns true if the current return statement has been patched to be + // a debugger breakpoint. 
+ static bool IsDebugBreakAtReturn(RelocInfo* rinfo); + + static Handle<Object> GetSourceBreakLocations( + Handle<SharedFunctionInfo> shared, + BreakPositionAlignment position_aligment); + + // Check whether a global object is the debug global object. + bool IsDebugGlobal(GlobalObject* global); + + // Check whether this frame is just about to return. + bool IsBreakAtReturn(JavaScriptFrame* frame); + + // Promise handling. + // Push and pop a promise and the current try-catch handler. + void PushPromise(Handle<JSObject> promise); + void PopPromise(); + + // Support for LiveEdit + void FramesHaveBeenDropped(StackFrame::Id new_break_frame_id, + LiveEdit::FrameDropMode mode, + Object** restarter_frame_function_pointer); + + // Passed to MakeWeak. + static void HandleWeakDebugInfo( + const v8::WeakCallbackData<v8::Value, void>& data); + + // Threading support. + char* ArchiveDebug(char* to); + char* RestoreDebug(char* from); + static int ArchiveSpacePerThread(); + void FreeThreadResources() { } + + // Record function from which eval was called. + static void RecordEvalCaller(Handle<Script> script); + + // Flags and states. + DebugScope* debugger_entry() { return thread_local_.current_debug_scope_; } + inline Handle<Context> debug_context() { return debug_context_; } + void set_live_edit_enabled(bool v) { live_edit_enabled_ = v; } + bool live_edit_enabled() const { + return FLAG_enable_liveedit && live_edit_enabled_ ; + } + + inline bool is_active() const { return is_active_; } + inline bool is_loaded() const { return !debug_context_.is_null(); } + inline bool has_break_points() const { return has_break_points_; } + inline bool in_debug_scope() const { + return thread_local_.current_debug_scope_ != NULL; + } + void set_disable_break(bool v) { break_disabled_ = v; } + + StackFrame::Id break_frame_id() { return thread_local_.break_frame_id_; } + int break_id() { return thread_local_.break_id_; } + + // Support for embedding into generated code. + Address is_active_address() { + return reinterpret_cast<Address>(&is_active_); + } + + Address after_break_target_address() { + return reinterpret_cast<Address>(&after_break_target_); + } + + Address restarter_frame_function_pointer_address() { + Object*** address = &thread_local_.restarter_frame_function_pointer_; + return reinterpret_cast<Address>(address); + } + + Address step_in_fp_addr() { + return reinterpret_cast<Address>(&thread_local_.step_into_fp_); + } + + private: + explicit Debug(Isolate* isolate); + + void UpdateState(); + void Unload(); + void SetNextBreakId() { + thread_local_.break_id_ = ++thread_local_.break_count_; + } + + // Check whether there are commands in the command queue. + inline bool has_commands() const { return !command_queue_.IsEmpty(); } + inline bool ignore_events() const { return is_suppressed_ || !is_active_; } + + void OnException(Handle<Object> exception, bool uncaught, + Handle<Object> promise); + + // Constructors for debug event objects. 
MUST_USE_RESULT MaybeHandle<Object> MakeJSObject( - Vector<const char> constructor_name, + const char* constructor_name, int argc, Handle<Object> argv[]); MUST_USE_RESULT MaybeHandle<Object> MakeExecutionState(); @@ -766,20 +541,23 @@ class Debugger { bool uncaught, Handle<Object> promise); MUST_USE_RESULT MaybeHandle<Object> MakeCompileEvent( - Handle<Script> script, bool before); - MUST_USE_RESULT MaybeHandle<Object> MakeScriptCollectedEvent(int id); + Handle<Script> script, v8::DebugEvent type); + MUST_USE_RESULT MaybeHandle<Object> MakePromiseEvent( + Handle<JSObject> promise_event); + MUST_USE_RESULT MaybeHandle<Object> MakeAsyncTaskEvent( + Handle<JSObject> task_event); - void OnDebugBreak(Handle<Object> break_points_hit, bool auto_continue); - void OnException(Handle<Object> exception, bool uncaught); - void OnBeforeCompile(Handle<Script> script); + // Mirror cache handling. + void ClearMirrorCache(); - enum AfterCompileFlags { - NO_AFTER_COMPILE_FLAGS, - SEND_WHEN_DEBUGGING - }; - void OnAfterCompile(Handle<Script> script, - AfterCompileFlags after_compile_flags); - void OnScriptCollected(int id); + // Returns a promise if the pushed try-catch handler matches the current one. + Handle<Object> GetPromiseOnStackOnThrow(); + bool PromiseHasRejectHandler(Handle<JSObject> promise); + + void CallEventCallback(v8::DebugEvent event, + Handle<Object> exec_state, + Handle<Object> event_data, + v8::Debug::ClientData* client_data); void ProcessDebugEvent(v8::DebugEvent event, Handle<JSObject> event_data, bool auto_continue); @@ -787,239 +565,209 @@ class Debugger { Handle<JSObject> exec_state, Handle<JSObject> event_data, bool auto_continue); - void SetEventListener(Handle<Object> callback, Handle<Object> data); - void SetMessageHandler(v8::Debug::MessageHandler2 handler); - void SetHostDispatchHandler(v8::Debug::HostDispatchHandler handler, - TimeDelta period); - void SetDebugMessageDispatchHandler( - v8::Debug::DebugMessageDispatchHandler handler, - bool provide_locker); - - // Invoke the message handler function. void InvokeMessageHandler(MessageImpl message); - // Add a debugger command to the command queue. - void ProcessCommand(Vector<const uint16_t> command, - v8::Debug::ClientData* client_data = NULL); + static bool CompileDebuggerScript(Isolate* isolate, int index); + void ClearOneShot(); + void ActivateStepIn(StackFrame* frame); + void ClearStepIn(); + void ActivateStepOut(StackFrame* frame); + void ClearStepNext(); + // Returns whether the compile succeeded. + void RemoveDebugInfo(Handle<DebugInfo> debug_info); + Handle<Object> CheckBreakPoints(Handle<Object> break_point); + bool CheckBreakPoint(Handle<Object> break_point_object); - // Check whether there are commands in the command queue. - bool HasCommands(); + inline void AssertDebugContext() { + DCHECK(isolate_->context() == *debug_context()); + DCHECK(in_debug_scope()); + } - // Enqueue a debugger command to the command queue for event listeners. - void EnqueueDebugCommand(v8::Debug::ClientData* client_data = NULL); + void ThreadInit(); - MUST_USE_RESULT MaybeHandle<Object> Call(Handle<JSFunction> fun, - Handle<Object> data); + // Global handles. + Handle<Context> debug_context_; + Handle<Object> event_listener_; + Handle<Object> event_listener_data_; - // Start the debugger agent listening on the provided port. - bool StartAgent(const char* name, int port, - bool wait_for_connection = false); + v8::Debug::MessageHandler message_handler_; - // Stop the debugger agent. 
- void StopAgent(); + static const int kQueueInitialSize = 4; + base::Semaphore command_received_; // Signaled for each command received. + LockingCommandMessageQueue command_queue_; + LockingCommandMessageQueue event_command_queue_; - // Blocks until the agent has started listening for connections - void WaitForAgent(); + bool is_active_; + bool is_suppressed_; + bool live_edit_enabled_; + bool has_break_points_; + bool break_disabled_; + bool break_on_exception_; + bool break_on_uncaught_exception_; - void CallMessageDispatchHandler(); + ScriptCache* script_cache_; // Cache of all scripts in the heap. + DebugInfoListNode* debug_info_list_; // List of active debug info objects. - Handle<Context> GetDebugContext(); + // Storage location for jump when exiting debug break calls. + // Note that this address is not GC safe. It should be computed immediately + // before returning to the DebugBreakCallHelper. + Address after_break_target_; - // Unload the debugger if possible. Only called when no debugger is currently - // active. - void UnloadDebugger(); - friend void ForceUnloadDebugger(); // In test-debug.cc + // Per-thread data. + class ThreadLocal { + public: + // Top debugger entry. + DebugScope* current_debug_scope_; - inline bool EventActive(v8::DebugEvent event) { - LockGuard<RecursiveMutex> lock_guard(debugger_access_); + // Counter for generating next break id. + int break_count_; - // Check whether the message handler was been cleared. - if (debugger_unload_pending_) { - if (isolate_->debug()->debugger_entry() == NULL) { - UnloadDebugger(); - } - } + // Current break id. + int break_id_; - if (((event == v8::BeforeCompile) || (event == v8::AfterCompile)) && - !FLAG_debug_compile_events) { - return false; + // Frame id for the frame of the current break. + StackFrame::Id break_frame_id_; - } else if ((event == v8::ScriptCollected) && - !FLAG_debug_script_collected_events) { - return false; - } + // Step action for last step performed. + StepAction last_step_action_; - // Currently argument event is not used. - return !compiling_natives_ && Debugger::IsDebuggerActive(); - } + // Source statement position from last step next action. + int last_statement_position_; - void set_compiling_natives(bool compiling_natives) { - compiling_natives_ = compiling_natives; - } - bool compiling_natives() const { return compiling_natives_; } - void set_loading_debugger(bool v) { is_loading_debugger_ = v; } - bool is_loading_debugger() const { return is_loading_debugger_; } - void set_live_edit_enabled(bool v) { live_edit_enabled_ = v; } - bool live_edit_enabled() const { - return FLAG_enable_liveedit && live_edit_enabled_ ; - } - void set_force_debugger_active(bool force_debugger_active) { - force_debugger_active_ = force_debugger_active; - } - bool force_debugger_active() const { return force_debugger_active_; } + // Number of steps left to perform before debug event. + int step_count_; - bool IsDebuggerActive(); + // Frame pointer from last step next action. + Address last_fp_; - private: - explicit Debugger(Isolate* isolate); + // Number of queued steps left to perform before debug event. 
+ int queued_step_count_; - void CallEventCallback(v8::DebugEvent event, - Handle<Object> exec_state, - Handle<Object> event_data, - v8::Debug::ClientData* client_data); - void CallCEventCallback(v8::DebugEvent event, - Handle<Object> exec_state, - Handle<Object> event_data, - v8::Debug::ClientData* client_data); - void CallJSEventCallback(v8::DebugEvent event, - Handle<Object> exec_state, - Handle<Object> event_data); - void ListenersChanged(); - - RecursiveMutex* debugger_access_; // Mutex guarding debugger variables. - Handle<Object> event_listener_; // Global handle to listener. - Handle<Object> event_listener_data_; - bool compiling_natives_; // Are we compiling natives? - bool is_loading_debugger_; // Are we loading the debugger? - bool live_edit_enabled_; // Enable LiveEdit. - bool never_unload_debugger_; // Can we unload the debugger? - bool force_debugger_active_; // Activate debugger without event listeners. - v8::Debug::MessageHandler2 message_handler_; - bool debugger_unload_pending_; // Was message handler cleared? - v8::Debug::HostDispatchHandler host_dispatch_handler_; - Mutex dispatch_handler_access_; // Mutex guarding dispatch handler. - v8::Debug::DebugMessageDispatchHandler debug_message_dispatch_handler_; - MessageDispatchHelperThread* message_dispatch_helper_thread_; - TimeDelta host_dispatch_period_; - - DebuggerAgent* agent_; + // Frame pointer for frame from which step in was performed. + Address step_into_fp_; - static const int kQueueInitialSize = 4; - LockingCommandMessageQueue command_queue_; - Semaphore command_received_; // Signaled for each command received. - LockingCommandMessageQueue event_command_queue_; + // Frame pointer for the frame where debugger should be called when current + // step out action is completed. + Address step_out_fp_; + + // Stores the way how LiveEdit has patched the stack. It is used when + // debugger returns control back to user script. + LiveEdit::FrameDropMode frame_drop_mode_; + + // When restarter frame is on stack, stores the address + // of the pointer to function being restarted. Otherwise (most of the time) + // stores NULL. This pointer is used with 'step in' implementation. + Object** restarter_frame_function_pointer_; + + // When a promise is being resolved, we may want to trigger a debug event + // if we catch a throw. For this purpose we remember the try-catch + // handler address that would catch the exception. We also hold onto a + // closure that returns a promise if the exception is considered uncaught. + // Due to the possibility of reentry we use a linked list. + PromiseOnStack* promise_on_stack_; + }; + + // Storage location for registers when handling debug break calls + ThreadLocal thread_local_; Isolate* isolate_; - friend class EnterDebugger; friend class Isolate; + friend class DebugScope; + friend class DisableBreak; + friend class LiveEdit; + friend class SuppressDebug; + + friend Handle<FixedArray> GetDebuggedFunctions(); // In test-debug.cc + friend void CheckDebuggerUnloaded(bool check_functions); // In test-debug.cc - DISALLOW_COPY_AND_ASSIGN(Debugger); + DISALLOW_COPY_AND_ASSIGN(Debug); }; -// This class is used for entering the debugger. Create an instance in the stack -// to enter the debugger. This will set the current break state, make sure the -// debugger is loaded and switch to the debugger context. If the debugger for -// some reason could not be entered FailedToEnter will return true. 
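The promise_on_stack_ field in the ThreadLocal block above works with the PromiseOnStack class introduced earlier in this header: because promise resolution can re-enter the debugger, the old pair of parallel lists is replaced by a linked list used as a stack, where each entry records the promise and the try-catch handler that would catch a throw. A minimal sketch of that push/pop discipline (Promise and Handler are placeholders for V8's Handle<JSObject> and StackHandler*):

struct Promise {};  // Placeholder for Handle<JSObject>.
struct Handler {};  // Placeholder for StackHandler*.

struct PromiseEntry {
  Handler* handler;    // Handler that would catch an exception thrown here.
  Promise* promise;
  PromiseEntry* prev;  // Next-outer entry, or nullptr at the bottom.
};

class PromiseStackSketch {
 public:
  void PushPromise(Promise* promise, Handler* current_handler) {
    top_ = new PromiseEntry{current_handler, promise, top_};
  }

  void PopPromise() {
    if (top_ == nullptr) return;  // Defensive; real code keeps push/pop balanced.
    PromiseEntry* old_top = top_;
    top_ = old_top->prev;
    delete old_top;
  }

  ~PromiseStackSketch() {
    while (top_ != nullptr) PopPromise();
  }

 private:
  PromiseEntry* top_ = nullptr;
};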
-class EnterDebugger BASE_EMBEDDED { - public: - explicit EnterDebugger(Isolate* isolate); - ~EnterDebugger(); +DECLARE_RUNTIME_FUNCTION(Debug_Break); - // Check whether the debugger could be entered. - inline bool FailedToEnter() { return load_failed_; } - // Check whether there are any JavaScript frames on the stack. - inline bool HasJavaScriptFrames() { return has_js_frames_; } +// This scope is used to load and enter the debug context and create a new +// break state. Leaving the scope will restore the previous state. +// On failure to load, FailedToEnter returns true. +class DebugScope BASE_EMBEDDED { + public: + explicit DebugScope(Debug* debug); + ~DebugScope(); + + // Check whether loading was successful. + inline bool failed() { return failed_; } // Get the active context from before entering the debugger. inline Handle<Context> GetContext() { return save_.context(); } private: - Isolate* isolate_; - EnterDebugger* prev_; // Previous debugger entry if entered recursively. - JavaScriptFrameIterator it_; - const bool has_js_frames_; // Were there any JavaScript frames? + Isolate* isolate() { return debug_->isolate_; } + + Debug* debug_; + DebugScope* prev_; // Previous scope if entered recursively. StackFrame::Id break_frame_id_; // Previous break frame id. - int break_id_; // Previous break id. - bool load_failed_; // Did the debugger fail to load? - SaveContext save_; // Saves previous context. + int break_id_; // Previous break id. + bool failed_; // Did the debug context fail to load? + SaveContext save_; // Saves previous context. + PostponeInterruptsScope no_termination_exceptons_; }; // Stack allocated class for disabling break. class DisableBreak BASE_EMBEDDED { public: - explicit DisableBreak(Isolate* isolate, bool disable_break) - : isolate_(isolate) { - prev_disable_break_ = isolate_->debug()->disable_break(); - isolate_->debug()->set_disable_break(disable_break); - } - ~DisableBreak() { - isolate_->debug()->set_disable_break(prev_disable_break_); + explicit DisableBreak(Debug* debug, bool disable_break) + : debug_(debug), old_state_(debug->break_disabled_) { + debug_->break_disabled_ = disable_break; } + ~DisableBreak() { debug_->break_disabled_ = old_state_; } private: - Isolate* isolate_; - // The previous state of the disable break used to restore the value when this - // object is destructed. - bool prev_disable_break_; + Debug* debug_; + bool old_state_; + DISALLOW_COPY_AND_ASSIGN(DisableBreak); }; -// Debug_Address encapsulates the Address pointers used in generating debug -// code. 
-class Debug_Address { +class SuppressDebug BASE_EMBEDDED { public: - explicit Debug_Address(Debug::AddressId id) : id_(id) { } - - static Debug_Address AfterBreakTarget() { - return Debug_Address(Debug::k_after_break_target_address); - } - - static Debug_Address RestarterFrameFunctionPointer() { - return Debug_Address(Debug::k_restarter_frame_function_pointer); - } - - Address address(Isolate* isolate) const { - Debug* debug = isolate->debug(); - switch (id_) { - case Debug::k_after_break_target_address: - return reinterpret_cast<Address>(debug->after_break_target_address()); - case Debug::k_restarter_frame_function_pointer: - return reinterpret_cast<Address>( - debug->restarter_frame_function_pointer_address()); - default: - UNREACHABLE(); - return NULL; - } + explicit SuppressDebug(Debug* debug) + : debug_(debug), old_state_(debug->is_suppressed_) { + debug_->is_suppressed_ = true; } + ~SuppressDebug() { debug_->is_suppressed_ = old_state_; } private: - Debug::AddressId id_; + Debug* debug_; + bool old_state_; + DISALLOW_COPY_AND_ASSIGN(SuppressDebug); }; -// The optional thread that Debug Agent may use to temporarily call V8 to process -// pending debug requests if debuggee is not running V8 at the moment. -// Technically it does not call V8 itself, rather it asks the embedding program -// to do this via v8::Debug::HostDispatchHandler -class MessageDispatchHelperThread: public Thread { - public: - explicit MessageDispatchHelperThread(Isolate* isolate); - ~MessageDispatchHelperThread() {} - - void Schedule(); - - private: - void Run(); - Isolate* isolate_; - Semaphore sem_; - Mutex mutex_; - bool already_signalled_; +// Code generator routines. +class DebugCodegen : public AllStatic { + public: + static void GenerateSlot(MacroAssembler* masm); + static void GenerateCallICStubDebugBreak(MacroAssembler* masm); + static void GenerateLoadICDebugBreak(MacroAssembler* masm); + static void GenerateStoreICDebugBreak(MacroAssembler* masm); + static void GenerateKeyedLoadICDebugBreak(MacroAssembler* masm); + static void GenerateKeyedStoreICDebugBreak(MacroAssembler* masm); + static void GenerateCompareNilICDebugBreak(MacroAssembler* masm); + static void GenerateReturnDebugBreak(MacroAssembler* masm); + static void GenerateCallFunctionStubDebugBreak(MacroAssembler* masm); + static void GenerateCallConstructStubDebugBreak(MacroAssembler* masm); + static void GenerateCallConstructStubRecordDebugBreak(MacroAssembler* masm); + static void GenerateSlotDebugBreak(MacroAssembler* masm); + static void GeneratePlainReturnLiveEdit(MacroAssembler* masm); - DISALLOW_COPY_AND_ASSIGN(MessageDispatchHelperThread); + // FrameDropper is a code replacement for a JavaScript frame with possibly + // several frames above. + // There are no calling conventions here, because it never actually gets + // called, it only gets returned to. + static void GenerateFrameDropperLiveEdit(MacroAssembler* masm); }; diff --git a/deps/v8/src/deoptimizer.cc b/deps/v8/src/deoptimizer.cc index e8cf5993e6e..1df7df84d03 100644 --- a/deps/v8/src/deoptimizer.cc +++ b/deps/v8/src/deoptimizer.cc @@ -2,16 +2,16 @@ // Use of this source code is governed by a BSD-style license that can be // found in the LICENSE file.
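DebugScope, DisableBreak and SuppressDebug in the debug.h diff above all share one RAII shape: the constructor saves the previous state and installs a new one, and the destructor restores exactly what it saw, so scopes nest safely. A generic sketch of that shape (FlagOwner and ScopedSuppress are illustrative names, not V8 classes):

struct FlagOwner { bool suppressed = false; };  // Stand-in for Debug's bool fields.

class ScopedSuppress {
 public:
  explicit ScopedSuppress(FlagOwner* owner)
      : owner_(owner), old_state_(owner->suppressed) {
    owner_->suppressed = true;  // Enter: remember the old value, install the new one.
  }
  ~ScopedSuppress() { owner_->suppressed = old_state_; }  // Leave: restore.

  // Non-copyable; the C++11 spelling of DISALLOW_COPY_AND_ASSIGN.
  ScopedSuppress(const ScopedSuppress&) = delete;
  ScopedSuppress& operator=(const ScopedSuppress&) = delete;

 private:
  FlagOwner* owner_;
  bool old_state_;
};

Nesting works because each scope restores the value it observed rather than a hard-coded default: two nested ScopedSuppress objects leave the flag exactly as it was before the outer one was created.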
-#include "v8.h" +#include "src/v8.h" -#include "accessors.h" -#include "codegen.h" -#include "deoptimizer.h" -#include "disasm.h" -#include "full-codegen.h" -#include "global-handles.h" -#include "macro-assembler.h" -#include "prettyprinter.h" +#include "src/accessors.h" +#include "src/codegen.h" +#include "src/deoptimizer.h" +#include "src/disasm.h" +#include "src/full-codegen.h" +#include "src/global-handles.h" +#include "src/macro-assembler.h" +#include "src/prettyprinter.h" namespace v8 { @@ -19,7 +19,7 @@ namespace internal { static MemoryChunk* AllocateCodeChunk(MemoryAllocator* allocator) { return allocator->AllocateChunk(Deoptimizer::GetMaxDeoptTableSize(), - OS::CommitPageSize(), + base::OS::CommitPageSize(), #if defined(__native_client__) // The Native Client port of V8 uses an interpreter, // so code pages don't need PROT_EXEC. @@ -101,7 +101,7 @@ static const int kDeoptTableMaxEpilogueCodeSize = 2 * KB; size_t Deoptimizer::GetMaxDeoptTableSize() { int entries_size = Deoptimizer::kMaxNumberOfEntries * Deoptimizer::table_entry_size_; - int commit_page_size = static_cast<int>(OS::CommitPageSize()); + int commit_page_size = static_cast<int>(base::OS::CommitPageSize()); int page_count = ((kDeoptTableMaxEpilogueCodeSize + entries_size - 1) / commit_page_size) + 1; return static_cast<size_t>(commit_page_size * page_count); @@ -352,8 +352,11 @@ void Deoptimizer::DeoptimizeMarkedCodeForContext(Context* context) { } SafepointEntry safepoint = code->GetSafepointEntry(it.frame()->pc()); int deopt_index = safepoint.deoptimization_index(); - bool safe_to_deopt = deopt_index != Safepoint::kNoDeoptimizationIndex; - CHECK(topmost_optimized_code == NULL || safe_to_deopt); + // Turbofan deopt is checked when we are patching addresses on stack. + bool turbofanned = code->is_turbofanned(); + bool safe_to_deopt = + deopt_index != Safepoint::kNoDeoptimizationIndex || turbofanned; + CHECK(topmost_optimized_code == NULL || safe_to_deopt || turbofanned); if (topmost_optimized_code == NULL) { topmost_optimized_code = code; safe_to_deopt_topmost_optimized_code = safe_to_deopt; @@ -374,6 +377,7 @@ void Deoptimizer::DeoptimizeMarkedCodeForContext(Context* context) { Code* code = Code::cast(element); CHECK_EQ(code->kind(), Code::OPTIMIZED_FUNCTION); Object* next = code->next_code_link(); + if (code->marked_for_deoptimization()) { // Put the code into the list for later patching. codes.Add(code, &zone); @@ -396,6 +400,10 @@ void Deoptimizer::DeoptimizeMarkedCodeForContext(Context* context) { element = next; } + if (FLAG_turbo_deoptimization) { + PatchStackForMarkedCode(isolate); + } + // TODO(titzer): we need a handle scope only because of the macro assembler, // which is only used in EnsureCodeForDeoptimizationEntry. HandleScope scope(isolate); @@ -404,17 +412,81 @@ void Deoptimizer::DeoptimizeMarkedCodeForContext(Context* context) { for (int i = 0; i < codes.length(); i++) { #ifdef DEBUG if (codes[i] == topmost_optimized_code) { - ASSERT(safe_to_deopt_topmost_optimized_code); + DCHECK(safe_to_deopt_topmost_optimized_code); } #endif // It is finally time to die, code object. + + // Remove the code from optimized code map. + DeoptimizationInputData* deopt_data = + DeoptimizationInputData::cast(codes[i]->deoptimization_data()); + SharedFunctionInfo* shared = + SharedFunctionInfo::cast(deopt_data->SharedFunctionInfo()); + shared->EvictFromOptimizedCodeMap(codes[i], "deoptimized code"); + // Do platform-specific patching to force any activations to lazy deopt. 
- PatchCodeForDeoptimization(isolate, codes[i]); + // + // We skip patching Turbofan code - we patch return addresses on stack. + // TODO(jarin) We should still zap the code object (but we have to + // be careful not to zap the deoptimization block). + if (!codes[i]->is_turbofanned()) { + PatchCodeForDeoptimization(isolate, codes[i]); + + // We might be in the middle of incremental marking with compaction. + // Tell collector to treat this code object in a special way and + // ignore all slots that might have been recorded on it. + isolate->heap()->mark_compact_collector()->InvalidateCode(codes[i]); + } + } +} + + +static int FindPatchAddressForReturnAddress(Code* code, int pc) { + DeoptimizationInputData* input_data = + DeoptimizationInputData::cast(code->deoptimization_data()); + int patch_count = input_data->ReturnAddressPatchCount(); + for (int i = 0; i < patch_count; i++) { + int return_pc = input_data->ReturnAddressPc(i)->value(); + int patch_pc = input_data->PatchedAddressPc(i)->value(); + // If the supplied pc matches the return pc or if the address + // has been already patched, return the patch pc. + if (pc == return_pc || pc == patch_pc) { + return patch_pc; + } + } + return -1; +} - // We might be in the middle of incremental marking with compaction. - // Tell collector to treat this code object in a special way and - // ignore all slots that might have been recorded on it. - isolate->heap()->mark_compact_collector()->InvalidateCode(codes[i]); + +// For all marked Turbofanned code on stack, change the return address to go +// to the deoptimization block. +void Deoptimizer::PatchStackForMarkedCode(Isolate* isolate) { + // TODO(jarin) We should tolerate missing patch entry for the topmost frame. + for (StackFrameIterator it(isolate, isolate->thread_local_top()); !it.done(); + it.Advance()) { + StackFrame::Type type = it.frame()->type(); + if (type == StackFrame::OPTIMIZED) { + Code* code = it.frame()->LookupCode(); + if (code->is_turbofanned() && code->marked_for_deoptimization()) { + JSFunction* function = + static_cast<OptimizedFrame*>(it.frame())->function(); + Address* pc_address = it.frame()->pc_address(); + int pc_offset = + static_cast<int>(*pc_address - code->instruction_start()); + int new_pc_offset = FindPatchAddressForReturnAddress(code, pc_offset); + + if (FLAG_trace_deopt) { + CodeTracer::Scope scope(isolate->GetCodeTracer()); + PrintF(scope.file(), "[patching stack address for function: "); + function->PrintName(scope.file()); + PrintF(scope.file(), " (Pc offset %i -> %i)]\n", pc_offset, + new_pc_offset); + } + + CHECK_LE(0, new_pc_offset); + *pc_address += new_pc_offset - pc_offset; + } + } } } @@ -459,9 +531,11 @@ void Deoptimizer::DeoptimizeGlobalObject(JSObject* object) { reinterpret_cast<intptr_t>(object)); } if (object->IsJSGlobalProxy()) { - Object* proto = object->GetPrototype(); - CHECK(proto->IsJSGlobalObject()); - Context* native_context = GlobalObject::cast(proto)->native_context(); + PrototypeIterator iter(object->GetIsolate(), object); + // TODO(verwaest): This CHECK will be hit if the global proxy is detached. 
+ CHECK(iter.GetCurrent()->IsJSGlobalObject()); + Context* native_context = + GlobalObject::cast(iter.GetCurrent())->native_context(); MarkAllCodeForContext(native_context); DeoptimizeMarkedCodeForContext(native_context); } else if (object->IsGlobalObject()) { @@ -562,7 +636,7 @@ Deoptimizer::Deoptimizer(Isolate* isolate, if (function->IsSmi()) { function = NULL; } - ASSERT(from != NULL); + DCHECK(from != NULL); if (function != NULL && function->IsOptimized()) { function->shared()->increment_deopt_count(); if (bailout_type_ == Deoptimizer::SOFT) { @@ -577,9 +651,9 @@ Deoptimizer::Deoptimizer(Isolate* isolate, compiled_code_ = FindOptimizedCode(function, optimized_code); #if DEBUG - ASSERT(compiled_code_ != NULL); + DCHECK(compiled_code_ != NULL); if (type == EAGER || type == SOFT || type == LAZY) { - ASSERT(compiled_code_->kind() != Code::FUNCTION); + DCHECK(compiled_code_->kind() != Code::FUNCTION); } #endif @@ -610,7 +684,7 @@ Code* Deoptimizer::FindOptimizedCode(JSFunction* function, : compiled_code; } case Deoptimizer::DEBUGGER: - ASSERT(optimized_code->contains(from_)); + DCHECK(optimized_code->contains(from_)); return optimized_code; } FATAL("Could not find code for optimized function"); @@ -629,8 +703,8 @@ void Deoptimizer::PrintFunctionName() { Deoptimizer::~Deoptimizer() { - ASSERT(input_ == NULL && output_ == NULL); - ASSERT(disallow_heap_allocation_ == NULL); + DCHECK(input_ == NULL && output_ == NULL); + DCHECK(disallow_heap_allocation_ == NULL); delete trace_scope_; } @@ -681,7 +755,7 @@ int Deoptimizer::GetDeoptimizationId(Isolate* isolate, addr >= start + (kMaxNumberOfEntries * table_entry_size_)) { return kNotDeoptimizationEntry; } - ASSERT_EQ(0, + DCHECK_EQ(0, static_cast<int>(addr - start) % table_entry_size_); return static_cast<int>(addr - start) / table_entry_size_; } @@ -699,13 +773,10 @@ int Deoptimizer::GetOutputInfo(DeoptimizationOutputData* data, return data->PcAndState(i)->value(); } } - PrintF(stderr, "[couldn't find pc offset for node=%d]\n", id.ToInt()); - PrintF(stderr, "[method: %s]\n", shared->DebugName()->ToCString().get()); - // Print the source code if available. - HeapStringAllocator string_allocator; - StringStream stream(&string_allocator); - shared->SourceCodePrint(&stream, -1); - PrintF(stderr, "[source:\n%s\n]", stream.ToCString().get()); + OFStream os(stderr); + os << "[couldn't find pc offset for node=" << id.ToInt() << "]\n" + << "[method: " << shared->DebugName()->ToCString().get() << "]\n" + << "[source:\n" << SourceCodeOf(shared) << "\n]" << endl; FATAL("unable to find pc offset during deoptimization"); return -1; @@ -721,7 +792,7 @@ int Deoptimizer::GetDeoptimizedCodeCount(Isolate* isolate) { Object* element = native_context->DeoptimizedCodeListHead(); while (!element->IsUndefined()) { Code* code = Code::cast(element); - ASSERT(code->kind() == Code::OPTIMIZED_FUNCTION); + DCHECK(code->kind() == Code::OPTIMIZED_FUNCTION); length++; element = code->next_code_link(); } @@ -739,7 +810,7 @@ void Deoptimizer::DoComputeOutputFrames() { compiled_code_->kind() == Code::OPTIMIZED_FUNCTION) { LOG(isolate(), CodeDeoptEvent(compiled_code_)); } - ElapsedTimer timer; + base::ElapsedTimer timer; // Determine basic deoptimization information. The optimized frame is // described by the input data. 
@@ -758,7 +829,8 @@ void Deoptimizer::DoComputeOutputFrames() { input_data->OptimizationId()->value(), bailout_id_, fp_to_sp_delta_); - if (bailout_type_ == EAGER || bailout_type_ == SOFT) { + if (bailout_type_ == EAGER || bailout_type_ == SOFT || + (compiled_code_->is_hydrogen_stub())) { compiled_code_->PrintDeoptLocation(trace_scope_->file(), bailout_id_); } } @@ -772,13 +844,13 @@ void Deoptimizer::DoComputeOutputFrames() { TranslationIterator iterator(translations, translation_index); Translation::Opcode opcode = static_cast<Translation::Opcode>(iterator.Next()); - ASSERT(Translation::BEGIN == opcode); + DCHECK(Translation::BEGIN == opcode); USE(opcode); // Read the number of output frames and allocate an array for their // descriptions. int count = iterator.Next(); iterator.Next(); // Drop JS frames count. - ASSERT(output_ == NULL); + DCHECK(output_ == NULL); output_ = new FrameDescription*[count]; for (int i = 0; i < count; ++i) { output_[i] = NULL; @@ -903,7 +975,10 @@ void Deoptimizer::DoComputeJSFrame(TranslationIterator* iterator, intptr_t top_address; if (is_bottommost) { // Determine whether the input frame contains alignment padding. - has_alignment_padding_ = HasAlignmentPadding(function) ? 1 : 0; + has_alignment_padding_ = + (!compiled_code_->is_turbofanned() && HasAlignmentPadding(function)) + ? 1 + : 0; // 2 = context and function in the frame. // If the optimized frame had alignment padding, adjust the frame pointer // to point to the new position of the old frame pointer after padding @@ -963,7 +1038,7 @@ void Deoptimizer::DoComputeJSFrame(TranslationIterator* iterator, } output_frame->SetCallerFp(output_offset, value); intptr_t fp_value = top_address + output_offset; - ASSERT(!is_bottommost || (input_->GetRegister(fp_reg.code()) + + DCHECK(!is_bottommost || (input_->GetRegister(fp_reg.code()) + has_alignment_padding_ * kPointerSize) == fp_value); output_frame->SetFp(fp_value); if (is_topmost) output_frame->SetRegister(fp_reg.code(), fp_value); @@ -973,7 +1048,7 @@ void Deoptimizer::DoComputeJSFrame(TranslationIterator* iterator, V8PRIxPTR " ; caller's fp\n", fp_value, output_offset, value); } - ASSERT(!is_bottommost || !has_alignment_padding_ || + DCHECK(!is_bottommost || !has_alignment_padding_ || (fp_value & kPointerSize) != 0); if (FLAG_enable_ool_constant_pool) { @@ -1022,7 +1097,7 @@ void Deoptimizer::DoComputeJSFrame(TranslationIterator* iterator, value = reinterpret_cast<intptr_t>(function); // The function for the bottommost output frame should also agree with the // input frame. - ASSERT(!is_bottommost || input_->GetFrameSlot(input_offset) == value); + DCHECK(!is_bottommost || input_->GetFrameSlot(input_offset) == value); output_frame->SetFrameSlot(output_offset, value); if (trace_scope_ != NULL) { PrintF(trace_scope_->file(), @@ -1188,7 +1263,7 @@ void Deoptimizer::DoComputeArgumentsAdaptorFrame(TranslationIterator* iterator, top_address + output_offset, output_offset, value, height - 1); } - ASSERT(0 == output_offset); + DCHECK(0 == output_offset); Builtins* builtins = isolate_->builtins(); Code* adaptor_trampoline = @@ -1226,8 +1301,8 @@ void Deoptimizer::DoComputeConstructStubFrame(TranslationIterator* iterator, output_frame->SetFrameType(StackFrame::CONSTRUCT); // Construct stub can not be topmost or bottommost. 
- ASSERT(frame_index > 0 && frame_index < output_count_ - 1); - ASSERT(output_[frame_index] == NULL); + DCHECK(frame_index > 0 && frame_index < output_count_ - 1); + DCHECK(output_[frame_index] == NULL); output_[frame_index] = output_frame; // The top address of the frame is computed from the previous @@ -1548,19 +1623,23 @@ void Deoptimizer::DoComputeCompiledStubFrame(TranslationIterator* iterator, // reg = JSFunction context // - CHECK(compiled_code_->is_crankshafted() && - compiled_code_->kind() != Code::OPTIMIZED_FUNCTION); - int major_key = compiled_code_->major_key(); + CHECK(compiled_code_->is_hydrogen_stub()); + int major_key = CodeStub::GetMajorKey(compiled_code_); CodeStubInterfaceDescriptor* descriptor = isolate_->code_stub_interface_descriptor(major_key); + // Check that there is a matching descriptor to the major key. + // This will fail if there has not been one installed to the isolate. + DCHECK_EQ(descriptor->MajorKey(), major_key); // The output frame must have room for all pushed register parameters // and the standard stack frame slots. Include space for an argument // object to the callee and optionally the space to pass the argument // object to the stub failure handler. - CHECK_GE(descriptor->register_param_count_, 0); - int height_in_bytes = kPointerSize * descriptor->register_param_count_ + - sizeof(Arguments) + kPointerSize; + int param_count = descriptor->GetEnvironmentParameterCount(); + CHECK_GE(param_count, 0); + + int height_in_bytes = kPointerSize * param_count + sizeof(Arguments) + + kPointerSize; int fixed_frame_size = StandardFrameConstants::kFixedFrameSize; int input_frame_size = input_->GetFrameSize(); int output_frame_size = height_in_bytes + fixed_frame_size; @@ -1654,7 +1733,7 @@ void Deoptimizer::DoComputeCompiledStubFrame(TranslationIterator* iterator, } intptr_t caller_arg_count = 0; - bool arg_count_known = !descriptor->stack_parameter_count_.is_valid(); + bool arg_count_known = !descriptor->stack_parameter_count().is_valid(); // Build the Arguments object for the caller's parameters and a pointer to it. output_frame_offset -= kPointerSize; @@ -1702,11 +1781,12 @@ void Deoptimizer::DoComputeCompiledStubFrame(TranslationIterator* iterator, // Copy the register parameters to the failure frame. int arguments_length_offset = -1; - for (int i = 0; i < descriptor->register_param_count_; ++i) { + for (int i = 0; i < param_count; ++i) { output_frame_offset -= kPointerSize; DoTranslateCommand(iterator, 0, output_frame_offset); - if (!arg_count_known && descriptor->IsParameterCountRegister(i)) { + if (!arg_count_known && + descriptor->IsEnvironmentParameterCountRegister(i)) { arguments_length_offset = output_frame_offset; } } @@ -1749,10 +1829,10 @@ void Deoptimizer::DoComputeCompiledStubFrame(TranslationIterator* iterator, // Compute this frame's PC, state, and continuation. 
Code* trampoline = NULL; - StubFunctionMode function_mode = descriptor->function_mode_; + StubFunctionMode function_mode = descriptor->function_mode(); StubFailureTrampolineStub(isolate_, function_mode).FindCodeInCache(&trampoline); - ASSERT(trampoline != NULL); + DCHECK(trampoline != NULL); output_frame->SetPc(reinterpret_cast<intptr_t>( trampoline->instruction_start())); if (FLAG_enable_ool_constant_pool) { @@ -1764,7 +1844,8 @@ void Deoptimizer::DoComputeCompiledStubFrame(TranslationIterator* iterator, output_frame->SetRegister(constant_pool_reg.code(), constant_pool_value); } output_frame->SetState(Smi::FromInt(FullCodeGenerator::NO_REGISTERS)); - Code* notify_failure = NotifyStubFailureBuiltin(); + Code* notify_failure = + isolate_->builtins()->builtin(Builtins::kNotifyStubFailureSaveDoubles); output_frame->SetContinuation( reinterpret_cast<intptr_t>(notify_failure->entry())); } @@ -1798,7 +1879,7 @@ Handle<Object> Deoptimizer::MaterializeNextHeapObject() { Handle<JSObject> arguments = isolate_->factory()->NewArgumentsObject(function, length); Handle<FixedArray> array = isolate_->factory()->NewFixedArray(length); - ASSERT_EQ(array->length(), length); + DCHECK_EQ(array->length(), length); arguments->set_elements(*array); materialized_objects_->Add(arguments); for (int i = 0; i < length; ++i) { @@ -1812,9 +1893,11 @@ Handle<Object> Deoptimizer::MaterializeNextHeapObject() { Handle<Map> map = Map::GeneralizeAllFieldRepresentations( Handle<Map>::cast(MaterializeNextValue())); switch (map->instance_type()) { + case MUTABLE_HEAP_NUMBER_TYPE: case HEAP_NUMBER_TYPE: { // Reuse the HeapNumber value directly as it is already properly - // tagged and skip materializing the HeapNumber explicitly. + // tagged and skip materializing the HeapNumber explicitly. Turn mutable + // heap numbers immutable. 
Handle<Object> object = MaterializeNextValue(); if (object_index < prev_materialized_count_) { materialized_objects_->Add(Handle<Object>( @@ -1840,7 +1923,8 @@ Handle<Object> Deoptimizer::MaterializeNextHeapObject() { object->set_elements(FixedArrayBase::cast(*elements)); for (int i = 0; i < length - 3; ++i) { Handle<Object> value = MaterializeNextValue(); - object->FastPropertyAtPut(i, *value); + FieldIndex index = FieldIndex::ForPropertyIndex(object->map(), i); + object->FastPropertyAtPut(index, *value); } break; } @@ -1875,6 +1959,9 @@ Handle<Object> Deoptimizer::MaterializeNextHeapObject() { Handle<Object> Deoptimizer::MaterializeNextValue() { int value_index = materialization_value_index_++; Handle<Object> value = materialized_values_->at(value_index); + if (value->IsMutableHeapNumber()) { + HeapNumber::cast(*value)->set_map(isolate_->heap()->heap_number_map()); + } if (*value == isolate_->heap()->arguments_marker()) { value = MaterializeNextHeapObject(); } @@ -1883,7 +1970,7 @@ Handle<Object> Deoptimizer::MaterializeNextValue() { void Deoptimizer::MaterializeHeapObjects(JavaScriptFrameIterator* it) { - ASSERT_NE(DEBUGGER, bailout_type_); + DCHECK_NE(DEBUGGER, bailout_type_); MaterializedObjectStore* materialized_store = isolate_->materialized_object_store(); @@ -1936,7 +2023,7 @@ void Deoptimizer::MaterializeHeapObjects(JavaScriptFrameIterator* it) { d.value(), d.destination()); } - ASSERT(values.at(d.destination())->IsTheHole()); + DCHECK(values.at(d.destination())->IsTheHole()); values.Set(d.destination(), num); } @@ -2770,7 +2857,7 @@ void Deoptimizer::EnsureCodeForDeoptimizationEntry(Isolate* isolate, GenerateDeoptimizationEntries(&masm, entry_count, type); CodeDesc desc; masm.GetCode(&desc); - ASSERT(!RelocInfo::RequiresRelocation(desc)); + DCHECK(!RelocInfo::RequiresRelocation(desc)); MemoryChunk* chunk = data->deopt_entry_code_[type]; CHECK(static_cast<int>(Deoptimizer::GetMaxDeoptTableSize()) >= @@ -2778,7 +2865,7 @@ void Deoptimizer::EnsureCodeForDeoptimizationEntry(Isolate* isolate, chunk->CommitArea(desc.instr_size); CopyBytes(chunk->area_start(), desc.buffer, static_cast<size_t>(desc.instr_size)); - CPU::FlushICache(chunk->area_start(), desc.instr_size); + CpuFeatures::FlushICache(chunk->area_start(), desc.instr_size); data->deopt_entry_code_entries_[type] = entry_count; } @@ -2864,7 +2951,7 @@ unsigned FrameDescription::GetExpressionCount() { Object* FrameDescription::GetExpression(int index) { - ASSERT_EQ(StackFrame::JAVA_SCRIPT, type_); + DCHECK_EQ(StackFrame::JAVA_SCRIPT, type_); unsigned offset = GetOffsetFromSlotIndex(index); return reinterpret_cast<Object*>(*GetFrameSlotPointer(offset)); } @@ -2890,7 +2977,7 @@ int32_t TranslationIterator::Next() { // bit of zero (marks the end). 
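The TranslationIterator::Next() loop that follows decodes a little-endian base-128 varint: each byte carries seven payload bits in its upper bits, and the least significant bit is a continuation flag, so a byte with a low bit of zero marks the end. A self-contained encode/decode pair under that scheme (sign folding, which V8 layers on top of this, is omitted; the names are illustrative):

#include <cstddef>
#include <cstdint>
#include <vector>

// Encode: seven payload bits per byte, shifted up by one; the low bit is 1 on
// every byte except the last.
std::vector<uint8_t> EncodeVarint(uint32_t value) {
  std::vector<uint8_t> out;
  do {
    uint8_t payload = value & 0x7F;
    value >>= 7;
    out.push_back(static_cast<uint8_t>((payload << 1) | (value != 0 ? 1 : 0)));
  } while (value != 0);
  return out;
}

// Decode mirrors the loop in the hunk: accumulate seven bits at a time and
// stop at the first byte whose low bit is zero.
uint32_t DecodeVarint(const std::vector<uint8_t>& in) {
  uint32_t bits = 0;
  std::size_t index = 0;
  for (int i = 0; true; i += 7) {
    uint8_t next = in[index++];
    bits |= static_cast<uint32_t>(next >> 1) << i;
    if ((next & 1) == 0) break;
  }
  return bits;
}

For example, 300 encodes as the two bytes 0x59 0x04: the first byte carries the low seven bits of the value with the continuation bit set, the second carries the remaining bits with it clear.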
uint32_t bits = 0; for (int i = 0; true; i += 7) { - ASSERT(HasNext()); + DCHECK(HasNext()); uint8_t next = buffer_->get(index_++); bits |= (next >> 1) << i; if ((next & 1) == 0) break; @@ -2905,8 +2992,7 @@ int32_t TranslationIterator::Next() { Handle<ByteArray> TranslationBuffer::CreateByteArray(Factory* factory) { int length = contents_.length(); Handle<ByteArray> result = factory->NewByteArray(length, TENURED); - OS::MemCopy( - result->GetDataStartAddress(), contents_.ToVector().start(), length); + MemCopy(result->GetDataStartAddress(), contents_.ToVector().start(), length); return result; } @@ -3260,7 +3346,11 @@ Handle<Object> SlotRef::GetValue(Isolate* isolate) { return Handle<Object>(Memory::Object_at(addr_), isolate); case INT32: { +#if V8_TARGET_BIG_ENDIAN && V8_HOST_ARCH_64_BIT + int value = Memory::int32_at(addr_ + kIntSize); +#else int value = Memory::int32_at(addr_); +#endif if (Smi::IsValid(value)) { return Handle<Object>(Smi::FromInt(value), isolate); } else { @@ -3269,7 +3359,11 @@ Handle<Object> SlotRef::GetValue(Isolate* isolate) { } case UINT32: { +#if V8_TARGET_BIG_ENDIAN && V8_HOST_ARCH_64_BIT + uint32_t value = Memory::uint32_at(addr_ + kIntSize); +#else uint32_t value = Memory::uint32_at(addr_); +#endif if (value <= static_cast<uint32_t>(Smi::kMaxValue)) { return Handle<Object>(Smi::FromInt(static_cast<int>(value)), isolate); } else { @@ -3382,6 +3476,7 @@ Handle<Object> SlotRefValueBuilder::GetNext(Isolate* isolate, int lvl) { // TODO(jarin) this should be unified with the code in // Deoptimizer::MaterializeNextHeapObject() switch (map->instance_type()) { + case MUTABLE_HEAP_NUMBER_TYPE: case HEAP_NUMBER_TYPE: { // Reuse the HeapNumber value directly as it is already properly // tagged and skip materializing the HeapNumber explicitly. @@ -3406,7 +3501,8 @@ Handle<Object> SlotRefValueBuilder::GetNext(Isolate* isolate, int lvl) { object->set_elements(FixedArrayBase::cast(*elements)); for (int i = 0; i < length - 3; ++i) { Handle<Object> value = GetNext(isolate, lvl + 1); - object->FastPropertyAtPut(i, *value); + FieldIndex index = FieldIndex::ForPropertyIndex(object->map(), i); + object->FastPropertyAtPut(index, *value); } return object; } diff --git a/deps/v8/src/deoptimizer.h b/deps/v8/src/deoptimizer.h index 373f888ae3d..a0cc6975c3c 100644 --- a/deps/v8/src/deoptimizer.h +++ b/deps/v8/src/deoptimizer.h @@ -5,11 +5,11 @@ #ifndef V8_DEOPTIMIZER_H_ #define V8_DEOPTIMIZER_H_ -#include "v8.h" +#include "src/v8.h" -#include "allocation.h" -#include "macro-assembler.h" -#include "zone-inl.h" +#include "src/allocation.h" +#include "src/macro-assembler.h" +#include "src/zone-inl.h" namespace v8 { @@ -177,6 +177,8 @@ class Deoptimizer : public Malloced { // refer to that code. static void DeoptimizeMarkedCode(Isolate* isolate); + static void PatchStackForMarkedCode(Isolate* isolate); + // Visit all the known optimized functions in a given isolate. static void VisitAllOptimizedFunctions( Isolate* isolate, OptimizedFunctionVisitor* visitor); @@ -387,10 +389,6 @@ class Deoptimizer : public Malloced { // at the dynamic alignment state slot inside the frame. bool HasAlignmentPadding(JSFunction* function); - // Select the version of NotifyStubFailure builtin that either saves or - // doesn't save the double registers depending on CPU features. 
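The TranslationIterator::Next() loop at the start of this hunk decodes V8's translation byte stream: each byte contributes seven payload bits (its upper bits), and a least-significant bit of 1 means another byte follows. A self-contained sketch of the same decoding scheme (names are illustrative, not V8 API):

    #include <cassert>
    #include <cstddef>
    #include <cstdint>
    #include <vector>

    // Decode one unsigned value from a stream of 7-bit groups where the
    // low bit of each byte is a continuation flag (0 marks the end).
    uint32_t DecodeVarint(const std::vector<uint8_t>& buf, size_t* index) {
      uint32_t bits = 0;
      for (int shift = 0; ; shift += 7) {
        assert(*index < buf.size());
        uint8_t next = buf[(*index)++];
        bits |= static_cast<uint32_t>(next >> 1) << shift;
        if ((next & 1) == 0) break;
      }
      return bits;
    }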
- Code* NotifyStubFailureBuiltin(); - Isolate* isolate_; JSFunction* function_; Code* compiled_code_; @@ -464,7 +462,7 @@ class FrameDescription { } uint32_t GetFrameSize() const { - ASSERT(static_cast<uint32_t>(frame_size_) == frame_size_); + DCHECK(static_cast<uint32_t>(frame_size_) == frame_size_); return static_cast<uint32_t>(frame_size_); } @@ -493,11 +491,11 @@ class FrameDescription { intptr_t GetRegister(unsigned n) const { #if DEBUG - // This convoluted ASSERT is needed to work around a gcc problem that + // This convoluted DCHECK is needed to work around a gcc problem that // improperly detects an array bounds overflow in optimized debug builds - // when using a plain ASSERT. + // when using a plain DCHECK. if (n >= ARRAY_SIZE(registers_)) { - ASSERT(false); + DCHECK(false); return 0; } #endif @@ -505,17 +503,17 @@ class FrameDescription { } double GetDoubleRegister(unsigned n) const { - ASSERT(n < ARRAY_SIZE(double_registers_)); + DCHECK(n < ARRAY_SIZE(double_registers_)); return double_registers_[n]; } void SetRegister(unsigned n, intptr_t value) { - ASSERT(n < ARRAY_SIZE(registers_)); + DCHECK(n < ARRAY_SIZE(registers_)); registers_[n] = value; } void SetDoubleRegister(unsigned n, double value) { - ASSERT(n < ARRAY_SIZE(double_registers_)); + DCHECK(n < ARRAY_SIZE(double_registers_)); double_registers_[n] = value; } @@ -611,7 +609,7 @@ class FrameDescription { intptr_t frame_content_[1]; intptr_t* GetFrameSlotPointer(unsigned offset) { - ASSERT(offset < frame_size_); + DCHECK(offset < frame_size_); return reinterpret_cast<intptr_t*>( reinterpret_cast<Address>(this) + frame_content_offset() + offset); } @@ -660,7 +658,7 @@ class TranslationIterator BASE_EMBEDDED { public: TranslationIterator(ByteArray* buffer, int index) : buffer_(buffer), index_(index) { - ASSERT(index >= 0 && index < buffer->length()); + DCHECK(index >= 0 && index < buffer->length()); } int32_t Next(); @@ -932,13 +930,13 @@ class DeoptimizedFrameInfo : public Malloced { // Get an incoming argument. Object* GetParameter(int index) { - ASSERT(0 <= index && index < parameters_count()); + DCHECK(0 <= index && index < parameters_count()); return parameters_[index]; } // Get an expression from the expression stack. Object* GetExpression(int index) { - ASSERT(0 <= index && index < expression_count()); + DCHECK(0 <= index && index < expression_count()); return expression_stack_[index]; } @@ -949,13 +947,13 @@ class DeoptimizedFrameInfo : public Malloced { private: // Set an incoming argument. void SetParameter(int index, Object* obj) { - ASSERT(0 <= index && index < parameters_count()); + DCHECK(0 <= index && index < parameters_count()); parameters_[index] = obj; } // Set an expression on the expression stack. void SetExpression(int index, Object* obj) { - ASSERT(0 <= index && index < expression_count()); + DCHECK(0 <= index && index < expression_count()); expression_stack_[index] = obj; } diff --git a/deps/v8/src/disassembler.cc b/deps/v8/src/disassembler.cc index 754d5a77a53..942b2be4527 100644 --- a/deps/v8/src/disassembler.cc +++ b/deps/v8/src/disassembler.cc @@ -2,17 +2,17 @@ // Use of this source code is governed by a BSD-style license that can be // found in the LICENSE file. 
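For the #if V8_TARGET_BIG_ENDIAN hunks in SlotRef::GetValue() above: a 32-bit value spilled into a pointer-sized stack slot sits in the slot's low-address bytes on little-endian targets but in the high-address bytes on 64-bit big-endian targets, hence the extra kIntSize offset. A rough standalone illustration, assuming an 8-byte slot:

    #include <cstdint>
    #include <cstring>

    // Read the 32-bit payload out of a 64-bit stack slot. On big-endian
    // hosts the meaningful bytes start 4 bytes into the slot, which is
    // exactly the addr_ + kIntSize adjustment in the deoptimizer.
    int32_t ReadInt32FromSlot(const uint64_t* slot) {
      const char* base = reinterpret_cast<const char*>(slot);
      int32_t value;
    #if defined(__BYTE_ORDER__) && __BYTE_ORDER__ == __ORDER_BIG_ENDIAN__
      std::memcpy(&value, base + sizeof(int32_t), sizeof(value));
    #else
      std::memcpy(&value, base, sizeof(value));
    #endif
      return value;
    }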
-#include "v8.h" - -#include "code-stubs.h" -#include "codegen.h" -#include "debug.h" -#include "deoptimizer.h" -#include "disasm.h" -#include "disassembler.h" -#include "macro-assembler.h" -#include "serialize.h" -#include "string-stream.h" +#include "src/v8.h" + +#include "src/code-stubs.h" +#include "src/codegen.h" +#include "src/debug.h" +#include "src/deoptimizer.h" +#include "src/disasm.h" +#include "src/disassembler.h" +#include "src/macro-assembler.h" +#include "src/serialize.h" +#include "src/string-stream.h" namespace v8 { namespace internal { @@ -50,7 +50,7 @@ class V8NameConverter: public disasm::NameConverter { const char* V8NameConverter::NameOfAddress(byte* pc) const { const char* name = code_->GetIsolate()->builtins()->Lookup(pc); if (name != NULL) { - OS::SNPrintF(v8_buffer_, "%s (%p)", name, pc); + SNPrintF(v8_buffer_, "%s (%p)", name, pc); return v8_buffer_.start(); } @@ -58,7 +58,7 @@ const char* V8NameConverter::NameOfAddress(byte* pc) const { int offs = static_cast<int>(pc - code_->instruction_start()); // print as code offset, if it seems reasonable if (0 <= offs && offs < code_->instruction_size()) { - OS::SNPrintF(v8_buffer_, "%d (%p)", offs, pc); + SNPrintF(v8_buffer_, "%d (%p)", offs, pc); return v8_buffer_.start(); } } @@ -95,7 +95,6 @@ static int DecodeIt(Isolate* isolate, SealHandleScope shs(isolate); DisallowHeapAllocation no_alloc; ExternalReferenceEncoder ref_encoder(isolate); - Heap* heap = isolate->heap(); v8::internal::EmbeddedVector<char, 128> decode_buffer; v8::internal::EmbeddedVector<char, kOutBufferSize> out_buffer; @@ -114,27 +113,27 @@ static int DecodeIt(Isolate* isolate, // First decode instruction so that we know its length. byte* prev_pc = pc; if (constants > 0) { - OS::SNPrintF(decode_buffer, - "%08x constant", - *reinterpret_cast<int32_t*>(pc)); + SNPrintF(decode_buffer, + "%08x constant", + *reinterpret_cast<int32_t*>(pc)); constants--; pc += 4; } else { int num_const = d.ConstantPoolSizeAt(pc); if (num_const >= 0) { - OS::SNPrintF(decode_buffer, - "%08x constant pool begin", - *reinterpret_cast<int32_t*>(pc)); + SNPrintF(decode_buffer, + "%08x constant pool begin", + *reinterpret_cast<int32_t*>(pc)); constants = num_const; pc += 4; } else if (it != NULL && !it->done() && it->rinfo()->pc() == pc && it->rinfo()->rmode() == RelocInfo::INTERNAL_REFERENCE) { // raw pointer embedded in code stream, e.g., jump table byte* ptr = *reinterpret_cast<byte**>(pc); - OS::SNPrintF(decode_buffer, - "%08" V8PRIxPTR " jump table entry %4" V8PRIdPTR, - ptr, - ptr - begin); + SNPrintF(decode_buffer, + "%08" V8PRIxPTR " jump table entry %4" V8PRIdPTR, + reinterpret_cast<intptr_t>(ptr), + ptr - begin); pc += 4; } else { decode_buffer[0] = '\0'; @@ -226,29 +225,21 @@ static int DecodeIt(Isolate* isolate, out.AddFormatted(", %s", Code::StubType2String(type)); } } else if (kind == Code::STUB || kind == Code::HANDLER) { - // Reverse lookup required as the minor key cannot be retrieved - // from the code object. - Object* obj = heap->code_stubs()->SlowReverseLookup(code); - if (obj != heap->undefined_value()) { - ASSERT(obj->IsSmi()); - // Get the STUB key and extract major and minor key. 
- uint32_t key = Smi::cast(obj)->value(); - uint32_t minor_key = CodeStub::MinorKeyFromKey(key); - CodeStub::Major major_key = CodeStub::GetMajorKey(code); - ASSERT(major_key == CodeStub::MajorKeyFromKey(key)); - out.AddFormatted(" %s, %s, ", - Code::Kind2String(kind), - CodeStub::MajorName(major_key, false)); - switch (major_key) { - case CodeStub::CallFunction: { - int argc = - CallFunctionStub::ExtractArgcFromMinorKey(minor_key); - out.AddFormatted("argc = %d", argc); - break; - } - default: - out.AddFormatted("minor: %d", minor_key); + // Get the STUB key and extract major and minor key. + uint32_t key = code->stub_key(); + uint32_t minor_key = CodeStub::MinorKeyFromKey(key); + CodeStub::Major major_key = CodeStub::GetMajorKey(code); + DCHECK(major_key == CodeStub::MajorKeyFromKey(key)); + out.AddFormatted(" %s, %s, ", Code::Kind2String(kind), + CodeStub::MajorName(major_key, false)); + switch (major_key) { + case CodeStub::CallFunction: { + int argc = CallFunctionStub::ExtractArgcFromMinorKey(minor_key); + out.AddFormatted("argc = %d", argc); + break; } + default: + out.AddFormatted("minor: %d", minor_key); } } else { out.AddFormatted(" %s", Code::Kind2String(kind)); diff --git a/deps/v8/src/disassembler.h b/deps/v8/src/disassembler.h index f5f596efc9c..f65f5385791 100644 --- a/deps/v8/src/disassembler.h +++ b/deps/v8/src/disassembler.h @@ -5,7 +5,7 @@ #ifndef V8_DISASSEMBLER_H_ #define V8_DISASSEMBLER_H_ -#include "allocation.h" +#include "src/allocation.h" namespace v8 { namespace internal { diff --git a/deps/v8/src/diy-fp.cc b/deps/v8/src/diy-fp.cc index 51f75abb734..cdad2a8ae22 100644 --- a/deps/v8/src/diy-fp.cc +++ b/deps/v8/src/diy-fp.cc @@ -2,10 +2,10 @@ // Use of this source code is governed by a BSD-style license that can be // found in the LICENSE file. -#include "../include/v8stdint.h" -#include "globals.h" -#include "checks.h" -#include "diy-fp.h" +#include "include/v8stdint.h" +#include "src/base/logging.h" +#include "src/diy-fp.h" +#include "src/globals.h" namespace v8 { namespace internal { diff --git a/deps/v8/src/diy-fp.h b/deps/v8/src/diy-fp.h index f8f2673c410..31f787265ea 100644 --- a/deps/v8/src/diy-fp.h +++ b/deps/v8/src/diy-fp.h @@ -25,8 +25,8 @@ class DiyFp { // must be bigger than the significand of other. // The result will not be normalized. void Subtract(const DiyFp& other) { - ASSERT(e_ == other.e_); - ASSERT(f_ >= other.f_); + DCHECK(e_ == other.e_); + DCHECK(f_ >= other.f_); f_ -= other.f_; } @@ -51,7 +51,7 @@ class DiyFp { } void Normalize() { - ASSERT(f_ != 0); + DCHECK(f_ != 0); uint64_t f = f_; int e = e_; diff --git a/deps/v8/src/double.h b/deps/v8/src/double.h index 9fb7f843ad4..7b4486728aa 100644 --- a/deps/v8/src/double.h +++ b/deps/v8/src/double.h @@ -5,7 +5,7 @@ #ifndef V8_DOUBLE_H_ #define V8_DOUBLE_H_ -#include "diy-fp.h" +#include "src/diy-fp.h" namespace v8 { namespace internal { @@ -34,14 +34,14 @@ class Double { // The value encoded by this Double must be greater or equal to +0.0. // It must not be special (infinity, or NaN). DiyFp AsDiyFp() const { - ASSERT(Sign() > 0); - ASSERT(!IsSpecial()); + DCHECK(Sign() > 0); + DCHECK(!IsSpecial()); return DiyFp(Significand(), Exponent()); } // The value encoded by this Double must be strictly greater than 0. DiyFp AsNormalizedDiyFp() const { - ASSERT(value() > 0.0); + DCHECK(value() > 0.0); uint64_t f = Significand(); int e = Exponent(); @@ -121,7 +121,7 @@ class Double { // Precondition: the value encoded by this Double must be greater or equal // than +0.0. 
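The diy-fp.h changes above only swap ASSERT for DCHECK, but the invariants being checked are worth spelling out: Subtract() requires equal exponents and a non-smaller significand, and Normalize() shifts the significand left until its top bit is set, paying for each shift by decrementing the exponent. A compact sketch of that normalization, under the same 64-bit-significand layout:

    #include <cassert>
    #include <cstdint>

    // Normalize a do-it-yourself float (significand f, exponent e) so the
    // MSB of f is set; the value f * 2^e is unchanged because every left
    // shift of f is matched by a decrement of e.
    void Normalize(uint64_t* f, int* e) {
      assert(*f != 0);
      while ((*f & (1ULL << 63)) == 0) {
        *f <<= 1;
        --*e;
      }
    }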
DiyFp UpperBoundary() const { - ASSERT(Sign() > 0); + DCHECK(Sign() > 0); return DiyFp(Significand() * 2 + 1, Exponent() - 1); } @@ -130,7 +130,7 @@ class Double { // exponent as m_plus. // Precondition: the value encoded by this Double must be greater than 0. void NormalizedBoundaries(DiyFp* out_m_minus, DiyFp* out_m_plus) const { - ASSERT(value() > 0.0); + DCHECK(value() > 0.0); DiyFp v = this->AsDiyFp(); bool significand_is_zero = (v.f() == kHiddenBit); DiyFp m_plus = DiyFp::Normalize(DiyFp((v.f() << 1) + 1, v.e() - 1)); diff --git a/deps/v8/src/dtoa.cc b/deps/v8/src/dtoa.cc index 42a95ed96e9..f39b0b070d3 100644 --- a/deps/v8/src/dtoa.cc +++ b/deps/v8/src/dtoa.cc @@ -4,16 +4,16 @@ #include <cmath> -#include "../include/v8stdint.h" -#include "checks.h" -#include "utils.h" +#include "include/v8stdint.h" +#include "src/base/logging.h" +#include "src/utils.h" -#include "dtoa.h" +#include "src/dtoa.h" -#include "bignum-dtoa.h" -#include "double.h" -#include "fast-dtoa.h" -#include "fixed-dtoa.h" +#include "src/bignum-dtoa.h" +#include "src/double.h" +#include "src/fast-dtoa.h" +#include "src/fixed-dtoa.h" namespace v8 { namespace internal { @@ -32,8 +32,8 @@ static BignumDtoaMode DtoaToBignumDtoaMode(DtoaMode dtoa_mode) { void DoubleToAscii(double v, DtoaMode mode, int requested_digits, Vector<char> buffer, int* sign, int* length, int* point) { - ASSERT(!Double(v).IsSpecial()); - ASSERT(mode == DTOA_SHORTEST || requested_digits >= 0); + DCHECK(!Double(v).IsSpecial()); + DCHECK(mode == DTOA_SHORTEST || requested_digits >= 0); if (Double(v).Sign() < 0) { *sign = 1; diff --git a/deps/v8/src/effects.h b/deps/v8/src/effects.h index 9360fda0456..9481bb8875e 100644 --- a/deps/v8/src/effects.h +++ b/deps/v8/src/effects.h @@ -5,9 +5,9 @@ #ifndef V8_EFFECTS_H_ #define V8_EFFECTS_H_ -#include "v8.h" +#include "src/v8.h" -#include "types.h" +#include "src/types.h" namespace v8 { namespace internal { @@ -33,7 +33,7 @@ struct Effect { Bounds bounds; Effect() : modality(DEFINITE) {} - Effect(Bounds b, Modality m = DEFINITE) : modality(m), bounds(b) {} + explicit Effect(Bounds b, Modality m = DEFINITE) : modality(m), bounds(b) {} // The unknown effect. 
static Effect Unknown(Zone* zone) { @@ -195,15 +195,15 @@ class EffectsBase { typedef typename Mapping::Locator Locator; bool Contains(Var var) { - ASSERT(var != kNoVar); + DCHECK(var != kNoVar); return map_->Contains(var); } bool Find(Var var, Locator* locator) { - ASSERT(var != kNoVar); + DCHECK(var != kNoVar); return map_->Find(var, locator); } bool Insert(Var var, Locator* locator) { - ASSERT(var != kNoVar); + DCHECK(var != kNoVar); return map_->Insert(var, locator); } @@ -259,7 +259,7 @@ class NestedEffectsBase { bool is_empty() { return node_ == NULL; } bool Contains(Var var) { - ASSERT(var != kNoVar); + DCHECK(var != kNoVar); for (Node* node = node_; node != NULL; node = node->previous) { if (node->effects.Contains(var)) return true; } @@ -267,7 +267,7 @@ class NestedEffectsBase { } bool Find(Var var, Locator* locator) { - ASSERT(var != kNoVar); + DCHECK(var != kNoVar); for (Node* node = node_; node != NULL; node = node->previous) { if (node->effects.Find(var, locator)) return true; } @@ -293,7 +293,7 @@ class NestedEffectsBase { template<class Var, Var kNoVar> bool NestedEffectsBase<Var, kNoVar>::Insert(Var var, Locator* locator) { - ASSERT(var != kNoVar); + DCHECK(var != kNoVar); if (!node_->effects.Insert(var, locator)) return false; Locator shadowed; for (Node* node = node_->previous; node != NULL; node = node->previous) { @@ -326,7 +326,7 @@ class NestedEffects: public NestedEffects Pop() { NestedEffects result = *this; result.pop(); - ASSERT(!this->is_empty()); + DCHECK(!this->is_empty()); return result; } }; diff --git a/deps/v8/src/elements-kind.cc b/deps/v8/src/elements-kind.cc index adab39679d1..0ebc6dc246b 100644 --- a/deps/v8/src/elements-kind.cc +++ b/deps/v8/src/elements-kind.cc @@ -2,11 +2,12 @@ // Use of this source code is governed by a BSD-style license that can be // found in the LICENSE file. -#include "elements-kind.h" +#include "src/elements-kind.h" -#include "api.h" -#include "elements.h" -#include "objects.h" +#include "src/api.h" +#include "src/base/lazy-instance.h" +#include "src/elements.h" +#include "src/objects.h" namespace v8 { namespace internal { @@ -51,23 +52,16 @@ int ElementsKindToShiftSize(ElementsKind elements_kind) { } -const char* ElementsKindToString(ElementsKind kind) { - ElementsAccessor* accessor = ElementsAccessor::ForKind(kind); - return accessor->name(); -} - - -void PrintElementsKind(FILE* out, ElementsKind kind) { - PrintF(out, "%s", ElementsKindToString(kind)); +int GetDefaultHeaderSizeForElementsKind(ElementsKind elements_kind) { + STATIC_ASSERT(FixedArray::kHeaderSize == FixedDoubleArray::kHeaderSize); + return IsExternalArrayElementsKind(elements_kind) + ? 
0 : (FixedArray::kHeaderSize - kHeapObjectTag); } -ElementsKind GetInitialFastElementsKind() { - if (FLAG_packed_arrays) { - return FAST_SMI_ELEMENTS; - } else { - return FAST_HOLEY_SMI_ELEMENTS; - } +const char* ElementsKindToString(ElementsKind kind) { + ElementsAccessor* accessor = ElementsAccessor::ForKind(kind); + return accessor->name(); } @@ -96,13 +90,13 @@ struct InitializeFastElementsKindSequence { }; -static LazyInstance<ElementsKind*, - InitializeFastElementsKindSequence>::type +static base::LazyInstance<ElementsKind*, + InitializeFastElementsKindSequence>::type fast_elements_kind_sequence = LAZY_INSTANCE_INITIALIZER; ElementsKind GetFastElementsKindFromSequenceIndex(int sequence_number) { - ASSERT(sequence_number >= 0 && + DCHECK(sequence_number >= 0 && sequence_number < kFastElementsKindCount); return fast_elements_kind_sequence.Get()[sequence_number]; } @@ -136,8 +130,8 @@ ElementsKind GetNextTransitionElementsKind(ElementsKind kind) { ElementsKind GetNextMoreGeneralFastElementsKind(ElementsKind elements_kind, bool allow_only_packed) { - ASSERT(IsFastElementsKind(elements_kind)); - ASSERT(elements_kind != TERMINAL_FAST_ELEMENTS_KIND); + DCHECK(IsFastElementsKind(elements_kind)); + DCHECK(elements_kind != TERMINAL_FAST_ELEMENTS_KIND); while (true) { elements_kind = GetNextTransitionElementsKind(elements_kind); if (!IsFastHoleyElementsKind(elements_kind) || !allow_only_packed) { diff --git a/deps/v8/src/elements-kind.h b/deps/v8/src/elements-kind.h index 1a550b0a335..b48a5dfe024 100644 --- a/deps/v8/src/elements-kind.h +++ b/deps/v8/src/elements-kind.h @@ -5,7 +5,7 @@ #ifndef V8_ELEMENTS_KIND_H_ #define V8_ELEMENTS_KIND_H_ -#include "v8checks.h" +#include "src/checks.h" namespace v8 { namespace internal { @@ -72,10 +72,10 @@ const int kFastElementsKindPackedToHoley = FAST_HOLEY_SMI_ELEMENTS - FAST_SMI_ELEMENTS; int ElementsKindToShiftSize(ElementsKind elements_kind); +int GetDefaultHeaderSizeForElementsKind(ElementsKind elements_kind); const char* ElementsKindToString(ElementsKind kind); -void PrintElementsKind(FILE* out, ElementsKind kind); -ElementsKind GetInitialFastElementsKind(); +inline ElementsKind GetInitialFastElementsKind() { return FAST_SMI_ELEMENTS; } ElementsKind GetFastElementsKindFromSequenceIndex(int sequence_number); int GetSequenceIndexFromFastElementsKind(ElementsKind elements_kind); @@ -106,7 +106,7 @@ inline bool IsFixedTypedArrayElementsKind(ElementsKind kind) { inline bool IsFastElementsKind(ElementsKind kind) { - ASSERT(FIRST_FAST_ELEMENTS_KIND == 0); + DCHECK(FIRST_FAST_ELEMENTS_KIND == 0); return kind <= FAST_HOLEY_DOUBLE_ELEMENTS; } @@ -209,7 +209,7 @@ inline ElementsKind GetHoleyElementsKind(ElementsKind packed_kind) { inline ElementsKind FastSmiToObjectElementsKind(ElementsKind from_kind) { - ASSERT(IsFastSmiElementsKind(from_kind)); + DCHECK(IsFastSmiElementsKind(from_kind)); return (from_kind == FAST_SMI_ELEMENTS) ? FAST_ELEMENTS : FAST_HOLEY_ELEMENTS; diff --git a/deps/v8/src/elements.cc b/deps/v8/src/elements.cc index 580a7186bda..945a9e7f6e1 100644 --- a/deps/v8/src/elements.cc +++ b/deps/v8/src/elements.cc @@ -2,13 +2,13 @@ // Use of this source code is governed by a BSD-style license that can be // found in the LICENSE file. 
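The new GetDefaultHeaderSizeForElementsKind() that opens this elements-kind.cc hunk returns an untagged offset: external arrays keep their payload off-heap (offset 0), while on-heap arrays start their elements at the header size minus kHeapObjectTag, because V8 heap pointers carry a low tag bit that must be subtracted before byte arithmetic. A toy version of that addressing, with a hypothetical header size:

    #include <cstdint>

    constexpr intptr_t kHeapObjectTag = 1;     // low bit set on heap pointers
    constexpr int kFixedArrayHeaderSize = 16;  // hypothetical header size

    // Element 0 of a tagged array lives at (tagged - tag) + header, i.e.
    // at tagged + (header - tag) -- the "default header size" form above.
    inline uint8_t* ElementAddress(intptr_t tagged_ptr, int index,
                                   int element_size) {
      return reinterpret_cast<uint8_t*>(
          tagged_ptr + (kFixedArrayHeaderSize - kHeapObjectTag) +
          static_cast<intptr_t>(index) * element_size);
    }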
-#include "v8.h" +#include "src/v8.h" -#include "arguments.h" -#include "conversions.h" -#include "elements.h" -#include "objects.h" -#include "utils.h" +#include "src/arguments.h" +#include "src/conversions.h" +#include "src/elements.h" +#include "src/objects.h" +#include "src/utils.h" // Each concrete ElementsAccessor can handle exactly one ElementsKind, // several abstract ElementsAccessor classes are used to allow sharing @@ -112,7 +112,7 @@ template<ElementsKind Kind> class ElementsKindTraits { #define ELEMENTS_TRAITS(Class, KindParam, Store) \ template<> class ElementsKindTraits<KindParam> { \ - public: \ + public: /* NOLINT */ \ static const ElementsKind Kind = KindParam; \ typedef Store BackingStore; \ }; @@ -147,19 +147,18 @@ static MaybeHandle<Object> ThrowArrayLengthRangeError(Isolate* isolate) { } -static void CopyObjectToObjectElements(Handle<FixedArrayBase> from_base, +static void CopyObjectToObjectElements(FixedArrayBase* from_base, ElementsKind from_kind, uint32_t from_start, - Handle<FixedArrayBase> to_base, - ElementsKind to_kind, - uint32_t to_start, + FixedArrayBase* to_base, + ElementsKind to_kind, uint32_t to_start, int raw_copy_size) { - ASSERT(to_base->map() != + DCHECK(to_base->map() != from_base->GetIsolate()->heap()->fixed_cow_array_map()); DisallowHeapAllocation no_allocation; int copy_size = raw_copy_size; if (raw_copy_size < 0) { - ASSERT(raw_copy_size == ElementsAccessor::kCopyToEnd || + DCHECK(raw_copy_size == ElementsAccessor::kCopyToEnd || raw_copy_size == ElementsAccessor::kCopyToEndAndInitializeToHole); copy_size = Min(from_base->length() - from_start, to_base->length() - to_start); @@ -168,18 +167,18 @@ static void CopyObjectToObjectElements(Handle<FixedArrayBase> from_base, int length = to_base->length() - start; if (length > 0) { Heap* heap = from_base->GetHeap(); - MemsetPointer(Handle<FixedArray>::cast(to_base)->data_start() + start, + MemsetPointer(FixedArray::cast(to_base)->data_start() + start, heap->the_hole_value(), length); } } } - ASSERT((copy_size + static_cast<int>(to_start)) <= to_base->length() && + DCHECK((copy_size + static_cast<int>(to_start)) <= to_base->length() && (copy_size + static_cast<int>(from_start)) <= from_base->length()); if (copy_size == 0) return; - Handle<FixedArray> from = Handle<FixedArray>::cast(from_base); - Handle<FixedArray> to = Handle<FixedArray>::cast(to_base); - ASSERT(IsFastSmiOrObjectElementsKind(from_kind)); - ASSERT(IsFastSmiOrObjectElementsKind(to_kind)); + FixedArray* from = FixedArray::cast(from_base); + FixedArray* to = FixedArray::cast(to_base); + DCHECK(IsFastSmiOrObjectElementsKind(from_kind)); + DCHECK(IsFastSmiOrObjectElementsKind(to_kind)); Address to_address = to->address() + FixedArray::kHeaderSize; Address from_address = from->address() + FixedArray::kHeaderSize; CopyWords(reinterpret_cast<Object**>(to_address) + to_start, @@ -188,29 +187,25 @@ static void CopyObjectToObjectElements(Handle<FixedArrayBase> from_base, if (IsFastObjectElementsKind(from_kind) && IsFastObjectElementsKind(to_kind)) { Heap* heap = from->GetHeap(); - if (!heap->InNewSpace(*to)) { + if (!heap->InNewSpace(to)) { heap->RecordWrites(to->address(), to->OffsetOfElementAt(to_start), copy_size); } - heap->incremental_marking()->RecordWrites(*to); + heap->incremental_marking()->RecordWrites(to); } } -static void CopyDictionaryToObjectElements(Handle<FixedArrayBase> from_base, - uint32_t from_start, - Handle<FixedArrayBase> to_base, - ElementsKind to_kind, - uint32_t to_start, - int raw_copy_size) { - 
Handle<SeededNumberDictionary> from = - Handle<SeededNumberDictionary>::cast(from_base); +static void CopyDictionaryToObjectElements( + FixedArrayBase* from_base, uint32_t from_start, FixedArrayBase* to_base, + ElementsKind to_kind, uint32_t to_start, int raw_copy_size) { DisallowHeapAllocation no_allocation; + SeededNumberDictionary* from = SeededNumberDictionary::cast(from_base); int copy_size = raw_copy_size; Heap* heap = from->GetHeap(); if (raw_copy_size < 0) { - ASSERT(raw_copy_size == ElementsAccessor::kCopyToEnd || + DCHECK(raw_copy_size == ElementsAccessor::kCopyToEnd || raw_copy_size == ElementsAccessor::kCopyToEndAndInitializeToHole); copy_size = from->max_number_key() + 1 - from_start; if (raw_copy_size == ElementsAccessor::kCopyToEndAndInitializeToHole) { @@ -218,15 +213,15 @@ static void CopyDictionaryToObjectElements(Handle<FixedArrayBase> from_base, int length = to_base->length() - start; if (length > 0) { Heap* heap = from->GetHeap(); - MemsetPointer(Handle<FixedArray>::cast(to_base)->data_start() + start, + MemsetPointer(FixedArray::cast(to_base)->data_start() + start, heap->the_hole_value(), length); } } } - ASSERT(*to_base != *from_base); - ASSERT(IsFastSmiOrObjectElementsKind(to_kind)); + DCHECK(to_base != from_base); + DCHECK(IsFastSmiOrObjectElementsKind(to_kind)); if (copy_size == 0) return; - Handle<FixedArray> to = Handle<FixedArray>::cast(to_base); + FixedArray* to = FixedArray::cast(to_base); uint32_t to_length = to->length(); if (to_start + copy_size > to_length) { copy_size = to_length - to_start; @@ -235,19 +230,19 @@ static void CopyDictionaryToObjectElements(Handle<FixedArrayBase> from_base, int entry = from->FindEntry(i + from_start); if (entry != SeededNumberDictionary::kNotFound) { Object* value = from->ValueAt(entry); - ASSERT(!value->IsTheHole()); + DCHECK(!value->IsTheHole()); to->set(i + to_start, value, SKIP_WRITE_BARRIER); } else { to->set_the_hole(i + to_start); } } if (IsFastObjectElementsKind(to_kind)) { - if (!heap->InNewSpace(*to)) { + if (!heap->InNewSpace(to)) { heap->RecordWrites(to->address(), to->OffsetOfElementAt(to_start), copy_size); } - heap->incremental_marking()->RecordWrites(*to); + heap->incremental_marking()->RecordWrites(to); } } @@ -258,10 +253,10 @@ static void CopyDoubleToObjectElements(Handle<FixedArrayBase> from_base, ElementsKind to_kind, uint32_t to_start, int raw_copy_size) { - ASSERT(IsFastSmiOrObjectElementsKind(to_kind)); + DCHECK(IsFastSmiOrObjectElementsKind(to_kind)); int copy_size = raw_copy_size; if (raw_copy_size < 0) { - ASSERT(raw_copy_size == ElementsAccessor::kCopyToEnd || + DCHECK(raw_copy_size == ElementsAccessor::kCopyToEnd || raw_copy_size == ElementsAccessor::kCopyToEndAndInitializeToHole); copy_size = Min(from_base->length() - from_start, to_base->length() - to_start); @@ -273,12 +268,12 @@ static void CopyDoubleToObjectElements(Handle<FixedArrayBase> from_base, int length = to_base->length() - start; if (length > 0) { Heap* heap = from_base->GetHeap(); - MemsetPointer(Handle<FixedArray>::cast(to_base)->data_start() + start, + MemsetPointer(FixedArray::cast(*to_base)->data_start() + start, heap->the_hole_value(), length); } } } - ASSERT((copy_size + static_cast<int>(to_start)) <= to_base->length() && + DCHECK((copy_size + static_cast<int>(to_start)) <= to_base->length() && (copy_size + static_cast<int>(from_start)) <= from_base->length()); if (copy_size == 0) return; Isolate* isolate = from_base->GetIsolate(); @@ -289,7 +284,7 @@ static void CopyDoubleToObjectElements(Handle<FixedArrayBase> from_base, 
if (IsFastSmiElementsKind(to_kind)) { UNIMPLEMENTED(); } else { - ASSERT(IsFastObjectElementsKind(to_kind)); + DCHECK(IsFastObjectElementsKind(to_kind)); Handle<Object> value = FixedDoubleArray::get(from, i + from_start); to->set(i + to_start, *value, UPDATE_WRITE_BARRIER); } @@ -297,29 +292,28 @@ static void CopyDoubleToObjectElements(Handle<FixedArrayBase> from_base, } -static void CopyDoubleToDoubleElements(Handle<FixedArrayBase> from_base, +static void CopyDoubleToDoubleElements(FixedArrayBase* from_base, uint32_t from_start, - Handle<FixedArrayBase> to_base, - uint32_t to_start, - int raw_copy_size) { + FixedArrayBase* to_base, + uint32_t to_start, int raw_copy_size) { DisallowHeapAllocation no_allocation; int copy_size = raw_copy_size; if (raw_copy_size < 0) { - ASSERT(raw_copy_size == ElementsAccessor::kCopyToEnd || + DCHECK(raw_copy_size == ElementsAccessor::kCopyToEnd || raw_copy_size == ElementsAccessor::kCopyToEndAndInitializeToHole); copy_size = Min(from_base->length() - from_start, to_base->length() - to_start); if (raw_copy_size == ElementsAccessor::kCopyToEndAndInitializeToHole) { for (int i = to_start + copy_size; i < to_base->length(); ++i) { - Handle<FixedDoubleArray>::cast(to_base)->set_the_hole(i); + FixedDoubleArray::cast(to_base)->set_the_hole(i); } } } - ASSERT((copy_size + static_cast<int>(to_start)) <= to_base->length() && + DCHECK((copy_size + static_cast<int>(to_start)) <= to_base->length() && (copy_size + static_cast<int>(from_start)) <= from_base->length()); if (copy_size == 0) return; - Handle<FixedDoubleArray> from = Handle<FixedDoubleArray>::cast(from_base); - Handle<FixedDoubleArray> to = Handle<FixedDoubleArray>::cast(to_base); + FixedDoubleArray* from = FixedDoubleArray::cast(from_base); + FixedDoubleArray* to = FixedDoubleArray::cast(to_base); Address to_address = to->address() + FixedDoubleArray::kHeaderSize; Address from_address = from->address() + FixedDoubleArray::kHeaderSize; to_address += kDoubleSize * to_start; @@ -331,33 +325,32 @@ static void CopyDoubleToDoubleElements(Handle<FixedArrayBase> from_base, } -static void CopySmiToDoubleElements(Handle<FixedArrayBase> from_base, +static void CopySmiToDoubleElements(FixedArrayBase* from_base, uint32_t from_start, - Handle<FixedArrayBase> to_base, - uint32_t to_start, + FixedArrayBase* to_base, uint32_t to_start, int raw_copy_size) { DisallowHeapAllocation no_allocation; int copy_size = raw_copy_size; if (raw_copy_size < 0) { - ASSERT(raw_copy_size == ElementsAccessor::kCopyToEnd || + DCHECK(raw_copy_size == ElementsAccessor::kCopyToEnd || raw_copy_size == ElementsAccessor::kCopyToEndAndInitializeToHole); copy_size = from_base->length() - from_start; if (raw_copy_size == ElementsAccessor::kCopyToEndAndInitializeToHole) { for (int i = to_start + copy_size; i < to_base->length(); ++i) { - Handle<FixedDoubleArray>::cast(to_base)->set_the_hole(i); + FixedDoubleArray::cast(to_base)->set_the_hole(i); } } } - ASSERT((copy_size + static_cast<int>(to_start)) <= to_base->length() && + DCHECK((copy_size + static_cast<int>(to_start)) <= to_base->length() && (copy_size + static_cast<int>(from_start)) <= from_base->length()); if (copy_size == 0) return; - Handle<FixedArray> from = Handle<FixedArray>::cast(from_base); - Handle<FixedDoubleArray> to = Handle<FixedDoubleArray>::cast(to_base); - Handle<Object> the_hole = from->GetIsolate()->factory()->the_hole_value(); + FixedArray* from = FixedArray::cast(from_base); + FixedDoubleArray* to = FixedDoubleArray::cast(to_base); + Object* the_hole = 
from->GetHeap()->the_hole_value(); for (uint32_t from_end = from_start + static_cast<uint32_t>(copy_size); from_start < from_end; from_start++, to_start++) { Object* hole_or_smi = from->get(from_start); - if (hole_or_smi == *the_hole) { + if (hole_or_smi == the_hole) { to->set_the_hole(to_start); } else { to->set(to_start, Smi::cast(hole_or_smi)->value()); @@ -366,23 +359,22 @@ static void CopySmiToDoubleElements(Handle<FixedArrayBase> from_base, } -static void CopyPackedSmiToDoubleElements(Handle<FixedArrayBase> from_base, +static void CopyPackedSmiToDoubleElements(FixedArrayBase* from_base, uint32_t from_start, - Handle<FixedArrayBase> to_base, - uint32_t to_start, - int packed_size, + FixedArrayBase* to_base, + uint32_t to_start, int packed_size, int raw_copy_size) { DisallowHeapAllocation no_allocation; int copy_size = raw_copy_size; uint32_t to_end; if (raw_copy_size < 0) { - ASSERT(raw_copy_size == ElementsAccessor::kCopyToEnd || + DCHECK(raw_copy_size == ElementsAccessor::kCopyToEnd || raw_copy_size == ElementsAccessor::kCopyToEndAndInitializeToHole); copy_size = packed_size - from_start; if (raw_copy_size == ElementsAccessor::kCopyToEndAndInitializeToHole) { to_end = to_base->length(); for (uint32_t i = to_start + copy_size; i < to_end; ++i) { - Handle<FixedDoubleArray>::cast(to_base)->set_the_hole(i); + FixedDoubleArray::cast(to_base)->set_the_hole(i); } } else { to_end = to_start + static_cast<uint32_t>(copy_size); @@ -390,49 +382,48 @@ static void CopyPackedSmiToDoubleElements(Handle<FixedArrayBase> from_base, } else { to_end = to_start + static_cast<uint32_t>(copy_size); } - ASSERT(static_cast<int>(to_end) <= to_base->length()); - ASSERT(packed_size >= 0 && packed_size <= copy_size); - ASSERT((copy_size + static_cast<int>(to_start)) <= to_base->length() && + DCHECK(static_cast<int>(to_end) <= to_base->length()); + DCHECK(packed_size >= 0 && packed_size <= copy_size); + DCHECK((copy_size + static_cast<int>(to_start)) <= to_base->length() && (copy_size + static_cast<int>(from_start)) <= from_base->length()); if (copy_size == 0) return; - Handle<FixedArray> from = Handle<FixedArray>::cast(from_base); - Handle<FixedDoubleArray> to = Handle<FixedDoubleArray>::cast(to_base); + FixedArray* from = FixedArray::cast(from_base); + FixedDoubleArray* to = FixedDoubleArray::cast(to_base); for (uint32_t from_end = from_start + static_cast<uint32_t>(packed_size); from_start < from_end; from_start++, to_start++) { Object* smi = from->get(from_start); - ASSERT(!smi->IsTheHole()); + DCHECK(!smi->IsTheHole()); to->set(to_start, Smi::cast(smi)->value()); } } -static void CopyObjectToDoubleElements(Handle<FixedArrayBase> from_base, +static void CopyObjectToDoubleElements(FixedArrayBase* from_base, uint32_t from_start, - Handle<FixedArrayBase> to_base, - uint32_t to_start, - int raw_copy_size) { + FixedArrayBase* to_base, + uint32_t to_start, int raw_copy_size) { DisallowHeapAllocation no_allocation; int copy_size = raw_copy_size; if (raw_copy_size < 0) { - ASSERT(raw_copy_size == ElementsAccessor::kCopyToEnd || + DCHECK(raw_copy_size == ElementsAccessor::kCopyToEnd || raw_copy_size == ElementsAccessor::kCopyToEndAndInitializeToHole); copy_size = from_base->length() - from_start; if (raw_copy_size == ElementsAccessor::kCopyToEndAndInitializeToHole) { for (int i = to_start + copy_size; i < to_base->length(); ++i) { - Handle<FixedDoubleArray>::cast(to_base)->set_the_hole(i); + FixedDoubleArray::cast(to_base)->set_the_hole(i); } } } - ASSERT((copy_size + static_cast<int>(to_start)) <= to_base->length() 
&& + DCHECK((copy_size + static_cast<int>(to_start)) <= to_base->length() && (copy_size + static_cast<int>(from_start)) <= from_base->length()); if (copy_size == 0) return; - Handle<FixedArray> from = Handle<FixedArray>::cast(from_base); - Handle<FixedDoubleArray> to = Handle<FixedDoubleArray>::cast(to_base); - Handle<Object> the_hole = from->GetIsolate()->factory()->the_hole_value(); + FixedArray* from = FixedArray::cast(from_base); + FixedDoubleArray* to = FixedDoubleArray::cast(to_base); + Object* the_hole = from->GetHeap()->the_hole_value(); for (uint32_t from_end = from_start + copy_size; from_start < from_end; from_start++, to_start++) { Object* hole_or_object = from->get(from_start); - if (hole_or_object == *the_hole) { + if (hole_or_object == the_hole) { to->set_the_hole(to_start); } else { to->set(to_start, hole_or_object->Number()); @@ -441,27 +432,26 @@ static void CopyObjectToDoubleElements(Handle<FixedArrayBase> from_base, } -static void CopyDictionaryToDoubleElements(Handle<FixedArrayBase> from_base, +static void CopyDictionaryToDoubleElements(FixedArrayBase* from_base, uint32_t from_start, - Handle<FixedArrayBase> to_base, + FixedArrayBase* to_base, uint32_t to_start, int raw_copy_size) { - Handle<SeededNumberDictionary> from = - Handle<SeededNumberDictionary>::cast(from_base); DisallowHeapAllocation no_allocation; + SeededNumberDictionary* from = SeededNumberDictionary::cast(from_base); int copy_size = raw_copy_size; if (copy_size < 0) { - ASSERT(copy_size == ElementsAccessor::kCopyToEnd || + DCHECK(copy_size == ElementsAccessor::kCopyToEnd || copy_size == ElementsAccessor::kCopyToEndAndInitializeToHole); copy_size = from->max_number_key() + 1 - from_start; if (raw_copy_size == ElementsAccessor::kCopyToEndAndInitializeToHole) { for (int i = to_start + copy_size; i < to_base->length(); ++i) { - Handle<FixedDoubleArray>::cast(to_base)->set_the_hole(i); + FixedDoubleArray::cast(to_base)->set_the_hole(i); } } } if (copy_size == 0) return; - Handle<FixedDoubleArray> to = Handle<FixedDoubleArray>::cast(to_base); + FixedDoubleArray* to = FixedDoubleArray::cast(to_base); uint32_t to_length = to->length(); if (to_start + copy_size > to_length) { copy_size = to_length - to_start; @@ -751,7 +741,7 @@ class ElementsAccessorBase : public ElementsAccessor { Handle<FixedArrayBase> to, uint32_t to_start, int copy_size) V8_FINAL V8_OVERRIDE { - ASSERT(!from.is_null()); + DCHECK(!from.is_null()); ElementsAccessorSubclass::CopyElementsImpl( from, from_start, to, from_kind, to_start, kPackedSizeNotKnown, copy_size); @@ -785,10 +775,10 @@ class ElementsAccessorBase : public ElementsAccessor { Handle<FixedArray> to, Handle<FixedArrayBase> from) V8_FINAL V8_OVERRIDE { int len0 = to->length(); -#ifdef ENABLE_SLOW_ASSERTS +#ifdef ENABLE_SLOW_DCHECKS if (FLAG_enable_slow_asserts) { for (int i = 0; i < len0; i++) { - ASSERT(!to->get(i)->IsTheHole()); + DCHECK(!to->get(i)->IsTheHole()); } } #endif @@ -812,7 +802,7 @@ class ElementsAccessorBase : public ElementsAccessor { ElementsAccessorSubclass::GetImpl(receiver, holder, key, from), FixedArray); - ASSERT(!value->IsTheHole()); + DCHECK(!value->IsTheHole()); if (!HasKey(to, value)) { extra++; } @@ -830,7 +820,7 @@ class ElementsAccessorBase : public ElementsAccessor { WriteBarrierMode mode = result->GetWriteBarrierMode(no_gc); for (int i = 0; i < len0; i++) { Object* e = to->get(i); - ASSERT(e->IsString() || e->IsNumber()); + DCHECK(e->IsString() || e->IsNumber()); result->set(i, e, mode); } } @@ -852,7 +842,7 @@ class ElementsAccessorBase : public 
ElementsAccessor { } } } - ASSERT(extra == index); + DCHECK(extra == index); return result; } @@ -883,8 +873,7 @@ class ElementsAccessorBase : public ElementsAccessor { // Super class for all fast element arrays. template<typename FastElementsAccessorSubclass, - typename KindTraits, - int ElementSize> + typename KindTraits> class FastElementsAccessor : public ElementsAccessorBase<FastElementsAccessorSubclass, KindTraits> { public: @@ -897,8 +886,7 @@ class FastElementsAccessor typedef typename KindTraits::BackingStore BackingStore; - // Adjusts the length of the fast backing store or returns the new length or - // undefined in case conversion to a slow backing store should be performed. + // Adjusts the length of the fast backing store. static Handle<Object> SetLengthWithoutNormalize( Handle<FixedArrayBase> backing_store, Handle<JSArray> array, @@ -927,15 +915,8 @@ class FastElementsAccessor if (length == 0) { array->initialize_elements(); } else { - int filler_size = (old_capacity - length) * ElementSize; - Address filler_start = backing_store->address() + - BackingStore::OffsetOfElementAt(length); - array->GetHeap()->CreateFillerObjectAt(filler_start, filler_size); - - // We are storing the new length using release store after creating a - // filler for the left-over space to avoid races with the sweeper - // thread. - backing_store->synchronized_set_length(length); + isolate->heap()->RightTrimFixedArray<Heap::FROM_MUTATOR>( + *backing_store, old_capacity - length); } } else { // Otherwise, fill the unused tail with holes. @@ -950,21 +931,16 @@ class FastElementsAccessor // Check whether the backing store should be expanded. uint32_t min = JSObject::NewElementsCapacity(old_capacity); uint32_t new_capacity = length > min ? length : min; - if (!array->ShouldConvertToSlowElements(new_capacity)) { - FastElementsAccessorSubclass:: - SetFastElementsCapacityAndLength(array, new_capacity, length); - JSObject::ValidateElements(array); - return length_object; - } - - // Request conversion to slow elements. 
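In SetLengthWithoutNormalize above, the hand-rolled shrink path (write a filler object over the freed tail, then release-store the shorter length to avoid racing the sweeper) is replaced by the heap's RightTrimFixedArray helper, which centralizes that same filler-then-publish protocol. A minimal sketch of the shrink pattern, not V8's implementation:

    #include <atomic>
    #include <cstddef>

    // Shrink an array in place: first make the abandoned tail look like
    // well-formed filler so a concurrent scanner never sees garbage, then
    // publish the new length with release semantics.
    struct Array {
      std::atomic<size_t> length{0};
      int data[64] = {};
    };

    void RightTrim(Array* a, size_t elements_to_trim) {
      size_t old_len = a->length.load(std::memory_order_relaxed);
      size_t new_len = old_len - elements_to_trim;
      for (size_t i = new_len; i < old_len; ++i) a->data[i] = 0;  // filler
      a->length.store(new_len, std::memory_order_release);
    }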
- return isolate->factory()->undefined_value(); + FastElementsAccessorSubclass::SetFastElementsCapacityAndLength( + array, new_capacity, length); + JSObject::ValidateElements(array); + return length_object; } static Handle<Object> DeleteCommon(Handle<JSObject> obj, uint32_t key, JSReceiver::DeleteMode mode) { - ASSERT(obj->HasFastSmiOrObjectElements() || + DCHECK(obj->HasFastSmiOrObjectElements() || obj->HasFastDoubleElements() || obj->HasFastArgumentsElements()); Isolate* isolate = obj->GetIsolate(); @@ -1044,7 +1020,7 @@ class FastElementsAccessor HandleScope scope(isolate); Handle<FixedArrayBase> elements(holder->elements(), isolate); Map* map = elements->map(); - ASSERT((IsFastSmiOrObjectElementsKind(KindTraits::Kind) && + DCHECK((IsFastSmiOrObjectElementsKind(KindTraits::Kind) && (map == isolate->heap()->fixed_array_map() || map == isolate->heap()->fixed_cow_array_map())) || (IsFastDoubleElementsKind(KindTraits::Kind) == @@ -1054,7 +1030,7 @@ class FastElementsAccessor for (int i = 0; i < length; i++) { HandleScope scope(isolate); Handle<BackingStore> backing_store = Handle<BackingStore>::cast(elements); - ASSERT((!IsFastSmiElementsKind(KindTraits::Kind) || + DCHECK((!IsFastSmiElementsKind(KindTraits::Kind) || BackingStore::get(backing_store, i)->IsSmi()) || (IsFastHoleyElementsKind(KindTraits::Kind) == backing_store->is_the_hole(i))); @@ -1094,14 +1070,11 @@ static inline ElementsKind ElementsKindForArray(Handle<FixedArrayBase> array) { template<typename FastElementsAccessorSubclass, typename KindTraits> class FastSmiOrObjectElementsAccessor - : public FastElementsAccessor<FastElementsAccessorSubclass, - KindTraits, - kPointerSize> { + : public FastElementsAccessor<FastElementsAccessorSubclass, KindTraits> { public: explicit FastSmiOrObjectElementsAccessor(const char* name) : FastElementsAccessor<FastElementsAccessorSubclass, - KindTraits, - kPointerSize>(name) {} + KindTraits>(name) {} static void CopyElementsImpl(Handle<FixedArrayBase> from, uint32_t from_start, @@ -1116,8 +1089,8 @@ class FastSmiOrObjectElementsAccessor case FAST_HOLEY_SMI_ELEMENTS: case FAST_ELEMENTS: case FAST_HOLEY_ELEMENTS: - CopyObjectToObjectElements( - from, from_kind, from_start, to, to_kind, to_start, copy_size); + CopyObjectToObjectElements(*from, from_kind, from_start, *to, to_kind, + to_start, copy_size); break; case FAST_DOUBLE_ELEMENTS: case FAST_HOLEY_DOUBLE_ELEMENTS: @@ -1125,8 +1098,8 @@ class FastSmiOrObjectElementsAccessor from, from_start, to, to_kind, to_start, copy_size); break; case DICTIONARY_ELEMENTS: - CopyDictionaryToObjectElements( - from, from_start, to, to_kind, to_start, copy_size); + CopyDictionaryToObjectElements(*from, from_start, *to, to_kind, + to_start, copy_size); break; case SLOPPY_ARGUMENTS_ELEMENTS: { // TODO(verwaest): This is a temporary hack to support extending @@ -1215,14 +1188,11 @@ class FastHoleyObjectElementsAccessor template<typename FastElementsAccessorSubclass, typename KindTraits> class FastDoubleElementsAccessor - : public FastElementsAccessor<FastElementsAccessorSubclass, - KindTraits, - kDoubleSize> { + : public FastElementsAccessor<FastElementsAccessorSubclass, KindTraits> { public: explicit FastDoubleElementsAccessor(const char* name) : FastElementsAccessor<FastElementsAccessorSubclass, - KindTraits, - kDoubleSize>(name) {} + KindTraits>(name) {} static void SetFastElementsCapacityAndLength(Handle<JSObject> obj, uint32_t capacity, @@ -1240,23 +1210,23 @@ class FastDoubleElementsAccessor int copy_size) { switch (from_kind) { case FAST_SMI_ELEMENTS: - 
CopyPackedSmiToDoubleElements( - from, from_start, to, to_start, packed_size, copy_size); + CopyPackedSmiToDoubleElements(*from, from_start, *to, to_start, + packed_size, copy_size); break; case FAST_HOLEY_SMI_ELEMENTS: - CopySmiToDoubleElements(from, from_start, to, to_start, copy_size); + CopySmiToDoubleElements(*from, from_start, *to, to_start, copy_size); break; case FAST_DOUBLE_ELEMENTS: case FAST_HOLEY_DOUBLE_ELEMENTS: - CopyDoubleToDoubleElements(from, from_start, to, to_start, copy_size); + CopyDoubleToDoubleElements(*from, from_start, *to, to_start, copy_size); break; case FAST_ELEMENTS: case FAST_HOLEY_ELEMENTS: - CopyObjectToDoubleElements(from, from_start, to, to_start, copy_size); + CopyObjectToDoubleElements(*from, from_start, *to, to_start, copy_size); break; case DICTIONARY_ELEMENTS: - CopyDictionaryToDoubleElements( - from, from_start, to, to_start, copy_size); + CopyDictionaryToDoubleElements(*from, from_start, *to, to_start, + copy_size); break; case SLOPPY_ARGUMENTS_ELEMENTS: UNREACHABLE(); @@ -1436,9 +1406,7 @@ class DictionaryElementsAccessor } if (new_length == 0) { - // If the length of a slow array is reset to zero, we clear - // the array and flush backing storage. This has the added - // benefit that the array returns to fast mode. + // Flush the backing store. JSObject::ResetElements(array); } else { DisallowHeapAllocation no_gc; @@ -1637,7 +1605,7 @@ class SloppyArgumentsElementsAccessor : public ElementsAccessorBase< DisallowHeapAllocation no_gc; Context* context = Context::cast(parameter_map->get(0)); int context_index = Handle<Smi>::cast(probe)->value(); - ASSERT(!context->get(context_index)->IsTheHole()); + DCHECK(!context->get(context_index)->IsTheHole()); return handle(context->get(context_index), isolate); } else { // Object is not mapped, defer to the arguments. @@ -1655,7 +1623,7 @@ class SloppyArgumentsElementsAccessor : public ElementsAccessorBase< AliasedArgumentsEntry* entry = AliasedArgumentsEntry::cast(*result); Context* context = Context::cast(parameter_map->get(0)); int context_index = entry->aliased_context_slot(); - ASSERT(!context->get(context_index)->IsTheHole()); + DCHECK(!context->get(context_index)->IsTheHole()); return handle(context->get(context_index), isolate); } else { return result; @@ -1856,13 +1824,13 @@ MaybeHandle<Object> ElementsAccessorBase<ElementsAccessorSubclass, if (value >= 0) { Handle<Object> new_length = ElementsAccessorSubclass:: SetLengthWithoutNormalize(backing_store, array, smi_length, value); - ASSERT(!new_length.is_null()); + DCHECK(!new_length.is_null()); // even though the proposed length was a smi, new_length could // still be a heap number because SetLengthWithoutNormalize doesn't // allow the array length property to drop below the index of // non-deletable elements. 
- ASSERT(new_length->IsSmi() || new_length->IsHeapNumber() || + DCHECK(new_length->IsSmi() || new_length->IsHeapNumber() || new_length->IsUndefined()); if (new_length->IsSmi()) { array->set_length(*Handle<Smi>::cast(new_length)); @@ -1883,13 +1851,13 @@ MaybeHandle<Object> ElementsAccessorBase<ElementsAccessorSubclass, if (length->ToArrayIndex(&value)) { Handle<SeededNumberDictionary> dictionary = JSObject::NormalizeElements(array); - ASSERT(!dictionary.is_null()); + DCHECK(!dictionary.is_null()); Handle<Object> new_length = DictionaryElementsAccessor:: SetLengthWithoutNormalize(dictionary, array, length, value); - ASSERT(!new_length.is_null()); + DCHECK(!new_length.is_null()); - ASSERT(new_length->IsNumber()); + DCHECK(new_length->IsNumber()); array->set_length(*new_length); return array; } else { diff --git a/deps/v8/src/elements.h b/deps/v8/src/elements.h index 88f5db8c5b5..3496a644aa5 100644 --- a/deps/v8/src/elements.h +++ b/deps/v8/src/elements.h @@ -5,10 +5,10 @@ #ifndef V8_ELEMENTS_H_ #define V8_ELEMENTS_H_ -#include "elements-kind.h" -#include "objects.h" -#include "heap.h" -#include "isolate.h" +#include "src/elements-kind.h" +#include "src/heap/heap.h" +#include "src/isolate.h" +#include "src/objects.h" namespace v8 { namespace internal { @@ -199,7 +199,7 @@ class ElementsAccessor { // Returns a shared ElementsAccessor for the specified ElementsKind. static ElementsAccessor* ForKind(ElementsKind elements_kind) { - ASSERT(elements_kind < kElementsKindCount); + DCHECK(elements_kind < kElementsKindCount); return elements_accessors_[elements_kind]; } diff --git a/deps/v8/src/execution.cc b/deps/v8/src/execution.cc index b74ef4d62ea..f146c3031eb 100644 --- a/deps/v8/src/execution.cc +++ b/deps/v8/src/execution.cc @@ -2,13 +2,13 @@ // Use of this source code is governed by a BSD-style license that can be // found in the LICENSE file. -#include "execution.h" +#include "src/execution.h" -#include "bootstrapper.h" -#include "codegen.h" -#include "deoptimizer.h" -#include "isolate-inl.h" -#include "vm-state-inl.h" +#include "src/bootstrapper.h" +#include "src/codegen.h" +#include "src/deoptimizer.h" +#include "src/isolate-inl.h" +#include "src/vm-state-inl.h" namespace v8 { namespace internal { @@ -19,9 +19,7 @@ StackGuard::StackGuard() void StackGuard::set_interrupt_limits(const ExecutionAccess& lock) { - ASSERT(isolate_ != NULL); - // Ignore attempts to interrupt when interrupts are postponed. - if (should_postpone_interrupts(lock)) return; + DCHECK(isolate_ != NULL); thread_local_.jslimit_ = kInterruptLimit; thread_local_.climit_ = kInterruptLimit; isolate_->heap()->SetStackLimits(); @@ -29,7 +27,7 @@ void StackGuard::set_interrupt_limits(const ExecutionAccess& lock) { void StackGuard::reset_limits(const ExecutionAccess& lock) { - ASSERT(isolate_ != NULL); + DCHECK(isolate_ != NULL); thread_local_.jslimit_ = thread_local_.real_jslimit_; thread_local_.climit_ = thread_local_.real_climit_; isolate_->heap()->SetStackLimits(); @@ -70,13 +68,12 @@ MUST_USE_RESULT static MaybeHandle<Object> Invoke( // receiver instead to avoid having a 'this' pointer which refers // directly to a global object. if (receiver->IsGlobalObject()) { - Handle<GlobalObject> global = Handle<GlobalObject>::cast(receiver); - receiver = Handle<JSObject>(global->global_receiver()); + receiver = handle(Handle<GlobalObject>::cast(receiver)->global_proxy()); } // Make sure that the global object of the context we're about to // make the current one is indeed a global object. 
- ASSERT(function->context()->global_object()->IsGlobalObject()); + DCHECK(function->context()->global_object()->IsGlobalObject()); { // Save and restore context around invocation and block the @@ -100,11 +97,11 @@ MUST_USE_RESULT static MaybeHandle<Object> Invoke( // Update the pending exception flag and return the value. bool has_exception = value->IsException(); - ASSERT(has_exception == isolate->has_pending_exception()); + DCHECK(has_exception == isolate->has_pending_exception()); if (has_exception) { isolate->ReportPendingMessages(); // Reset stepping state when script exits with uncaught exception. - if (isolate->debugger()->IsDebuggerActive()) { + if (isolate->debug()->is_active()) { isolate->debug()->ClearStepping(); } return MaybeHandle<Object>(); @@ -133,13 +130,8 @@ MaybeHandle<Object> Execution::Call(Isolate* isolate, !func->shared()->native() && func->shared()->strict_mode() == SLOPPY) { if (receiver->IsUndefined() || receiver->IsNull()) { - Object* global = func->context()->global_object()->global_receiver(); - // Under some circumstances, 'global' can be the JSBuiltinsObject - // In that case, don't rewrite. (FWIW, the same holds for - // GetIsolate()->global_object()->global_receiver().) - if (!global->IsJSBuiltinsObject()) { - receiver = Handle<Object>(global, func->GetIsolate()); - } + receiver = handle(func->global_proxy()); + DCHECK(!receiver->IsJSBuiltinsObject()); } else { ASSIGN_RETURN_ON_EXCEPTION( isolate, receiver, ToObject(isolate, receiver), Object); @@ -153,7 +145,7 @@ MaybeHandle<Object> Execution::Call(Isolate* isolate, MaybeHandle<Object> Execution::New(Handle<JSFunction> func, int argc, Handle<Object> argv[]) { - return Invoke(true, func, func->GetIsolate()->global_object(), argc, argv); + return Invoke(true, func, handle(func->global_proxy()), argc, argv); } @@ -176,9 +168,9 @@ MaybeHandle<Object> Execution::TryCall(Handle<JSFunction> func, MaybeHandle<Object> maybe_result = Invoke(false, func, receiver, argc, args); if (maybe_result.is_null()) { - ASSERT(catcher.HasCaught()); - ASSERT(isolate->has_pending_exception()); - ASSERT(isolate->external_caught_exception()); + DCHECK(catcher.HasCaught()); + DCHECK(isolate->has_pending_exception()); + DCHECK(isolate->external_caught_exception()); if (exception_out != NULL) { if (isolate->pending_exception() == isolate->heap()->termination_exception()) { @@ -190,15 +182,15 @@ MaybeHandle<Object> Execution::TryCall(Handle<JSFunction> func, isolate->OptionalRescheduleException(true); } - ASSERT(!isolate->has_pending_exception()); - ASSERT(!isolate->external_caught_exception()); + DCHECK(!isolate->has_pending_exception()); + DCHECK(!isolate->external_caught_exception()); return maybe_result; } Handle<Object> Execution::GetFunctionDelegate(Isolate* isolate, Handle<Object> object) { - ASSERT(!object->IsJSFunction()); + DCHECK(!object->IsJSFunction()); Factory* factory = isolate->factory(); // If you return a function from here, it will be called when an @@ -225,7 +217,7 @@ Handle<Object> Execution::GetFunctionDelegate(Isolate* isolate, MaybeHandle<Object> Execution::TryGetFunctionDelegate(Isolate* isolate, Handle<Object> object) { - ASSERT(!object->IsJSFunction()); + DCHECK(!object->IsJSFunction()); // If object is a function proxy, get its handler. Iterate if necessary. 
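The Invoke()/Execution::Call() hunks above swap global_receiver() for global_proxy() when patching up a missing this: in sloppy mode an undefined or null receiver becomes the global proxy, and other primitives are boxed via ToObject. The ES5-style rule they implement, as shape-only C++ (types hypothetical):

    // Sloppy-mode receiver coercion, shape only: undefined/null become the
    // global proxy; other primitives go through ToObject; objects pass.
    enum class Kind { kUndefined, kNull, kPrimitive, kObject };
    struct Value { Kind kind; };

    Value CoerceSloppyReceiver(Value receiver, Value global_proxy) {
      if (receiver.kind == Kind::kUndefined || receiver.kind == Kind::kNull)
        return global_proxy;
      if (receiver.kind == Kind::kPrimitive)
        return Value{Kind::kObject};  // stands in for ToObject(receiver)
      return receiver;
    }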
Object* fun = *object; @@ -253,7 +245,7 @@ MaybeHandle<Object> Execution::TryGetFunctionDelegate(Isolate* isolate, Handle<Object> Execution::GetConstructorDelegate(Isolate* isolate, Handle<Object> object) { - ASSERT(!object->IsJSFunction()); + DCHECK(!object->IsJSFunction()); // If you return a function from here, it will be called when an // attempt is made to call the given object as a constructor. @@ -279,7 +271,7 @@ Handle<Object> Execution::GetConstructorDelegate(Isolate* isolate, MaybeHandle<Object> Execution::TryGetConstructorDelegate( Isolate* isolate, Handle<Object> object) { - ASSERT(!object->IsJSFunction()); + DCHECK(!object->IsJSFunction()); // If you return a function from here, it will be called when an // attempt is made to call the given object as a constructor. @@ -307,35 +299,6 @@ MaybeHandle<Object> Execution::TryGetConstructorDelegate( } -void Execution::RunMicrotasks(Isolate* isolate) { - ASSERT(isolate->microtask_pending()); - Execution::Call( - isolate, - isolate->run_microtasks(), - isolate->factory()->undefined_value(), - 0, - NULL).Check(); -} - - -void Execution::EnqueueMicrotask(Isolate* isolate, Handle<Object> microtask) { - Handle<Object> args[] = { microtask }; - Execution::Call( - isolate, - isolate->enqueue_microtask(), - isolate->factory()->undefined_value(), - 1, - args).Check(); -} - - -bool StackGuard::IsStackOverflow() { - ExecutionAccess access(isolate_); - return (thread_local_.jslimit_ != kInterruptLimit && - thread_local_.climit_ != kInterruptLimit); -} - - void StackGuard::EnableInterrupts() { ExecutionAccess access(isolate_); if (has_pending_interrupts(access)) { @@ -366,196 +329,78 @@ void StackGuard::DisableInterrupts() { } -bool StackGuard::ShouldPostponeInterrupts() { - ExecutionAccess access(isolate_); - return should_postpone_interrupts(access); -} - - -bool StackGuard::IsInterrupted() { - ExecutionAccess access(isolate_); - return (thread_local_.interrupt_flags_ & INTERRUPT) != 0; -} - - -void StackGuard::Interrupt() { +void StackGuard::PushPostponeInterruptsScope(PostponeInterruptsScope* scope) { ExecutionAccess access(isolate_); - thread_local_.interrupt_flags_ |= INTERRUPT; - set_interrupt_limits(access); + // Intercept already requested interrupts. + int intercepted = thread_local_.interrupt_flags_ & scope->intercept_mask_; + scope->intercepted_flags_ = intercepted; + thread_local_.interrupt_flags_ &= ~intercepted; + if (!has_pending_interrupts(access)) reset_limits(access); + // Add scope to the chain. + scope->prev_ = thread_local_.postpone_interrupts_; + thread_local_.postpone_interrupts_ = scope; } -bool StackGuard::IsPreempted() { +void StackGuard::PopPostponeInterruptsScope() { ExecutionAccess access(isolate_); - return thread_local_.interrupt_flags_ & PREEMPT; + PostponeInterruptsScope* top = thread_local_.postpone_interrupts_; + // Make intercepted interrupts active. + DCHECK((thread_local_.interrupt_flags_ & top->intercept_mask_) == 0); + thread_local_.interrupt_flags_ |= top->intercepted_flags_; + if (has_pending_interrupts(access)) set_interrupt_limits(access); + // Remove scope from chain. 
+ thread_local_.postpone_interrupts_ = top->prev_; } -void StackGuard::Preempt() { +bool StackGuard::CheckInterrupt(InterruptFlag flag) { ExecutionAccess access(isolate_); - thread_local_.interrupt_flags_ |= PREEMPT; - set_interrupt_limits(access); -} - - -bool StackGuard::IsTerminateExecution() { - ExecutionAccess access(isolate_); - return (thread_local_.interrupt_flags_ & TERMINATE) != 0; -} - - -void StackGuard::CancelTerminateExecution() { - ExecutionAccess access(isolate_); - Continue(TERMINATE); - isolate_->CancelTerminateExecution(); -} - - -void StackGuard::TerminateExecution() { - ExecutionAccess access(isolate_); - thread_local_.interrupt_flags_ |= TERMINATE; - set_interrupt_limits(access); + return thread_local_.interrupt_flags_ & flag; } -bool StackGuard::IsGCRequest() { +void StackGuard::RequestInterrupt(InterruptFlag flag) { ExecutionAccess access(isolate_); - return (thread_local_.interrupt_flags_ & GC_REQUEST) != 0; -} - - -void StackGuard::RequestGC() { - ExecutionAccess access(isolate_); - thread_local_.interrupt_flags_ |= GC_REQUEST; - if (thread_local_.postpone_interrupts_nesting_ == 0) { - thread_local_.jslimit_ = thread_local_.climit_ = kInterruptLimit; - isolate_->heap()->SetStackLimits(); - } -} - - -bool StackGuard::IsInstallCodeRequest() { - ExecutionAccess access(isolate_); - return (thread_local_.interrupt_flags_ & INSTALL_CODE) != 0; -} - - -void StackGuard::RequestInstallCode() { - ExecutionAccess access(isolate_); - thread_local_.interrupt_flags_ |= INSTALL_CODE; - if (thread_local_.postpone_interrupts_nesting_ == 0) { - thread_local_.jslimit_ = thread_local_.climit_ = kInterruptLimit; - isolate_->heap()->SetStackLimits(); + // Check the chain of PostponeInterruptsScopes for interception. + if (thread_local_.postpone_interrupts_ && + thread_local_.postpone_interrupts_->Intercept(flag)) { + return; } -} - - -bool StackGuard::IsFullDeopt() { - ExecutionAccess access(isolate_); - return (thread_local_.interrupt_flags_ & FULL_DEOPT) != 0; -} - - -void StackGuard::FullDeopt() { - ExecutionAccess access(isolate_); - thread_local_.interrupt_flags_ |= FULL_DEOPT; - set_interrupt_limits(access); -} - - -bool StackGuard::IsDeoptMarkedAllocationSites() { - ExecutionAccess access(isolate_); - return (thread_local_.interrupt_flags_ & DEOPT_MARKED_ALLOCATION_SITES) != 0; -} - - -void StackGuard::DeoptMarkedAllocationSites() { - ExecutionAccess access(isolate_); - thread_local_.interrupt_flags_ |= DEOPT_MARKED_ALLOCATION_SITES; - set_interrupt_limits(access); -} - -bool StackGuard::IsDebugBreak() { - ExecutionAccess access(isolate_); - return thread_local_.interrupt_flags_ & DEBUGBREAK; -} - - -void StackGuard::DebugBreak() { - ExecutionAccess access(isolate_); - thread_local_.interrupt_flags_ |= DEBUGBREAK; + // Not intercepted. Set as active interrupt flag. 
+ thread_local_.interrupt_flags_ |= flag; set_interrupt_limits(access); } -bool StackGuard::IsDebugCommand() { +void StackGuard::ClearInterrupt(InterruptFlag flag) { ExecutionAccess access(isolate_); - return thread_local_.interrupt_flags_ & DEBUGCOMMAND; -} - - -void StackGuard::DebugCommand() { - ExecutionAccess access(isolate_); - thread_local_.interrupt_flags_ |= DEBUGCOMMAND; - set_interrupt_limits(access); -} - - -void StackGuard::Continue(InterruptFlag after_what) { - ExecutionAccess access(isolate_); - thread_local_.interrupt_flags_ &= ~static_cast<int>(after_what); - if (!should_postpone_interrupts(access) && !has_pending_interrupts(access)) { - reset_limits(access); + // Clear the interrupt flag from the chain of PostponeInterruptsScopes. + for (PostponeInterruptsScope* current = thread_local_.postpone_interrupts_; + current != NULL; + current = current->prev_) { + current->intercepted_flags_ &= ~flag; } -} - -void StackGuard::RequestInterrupt(InterruptCallback callback, void* data) { - ExecutionAccess access(isolate_); - thread_local_.interrupt_flags_ |= API_INTERRUPT; - thread_local_.interrupt_callback_ = callback; - thread_local_.interrupt_callback_data_ = data; - set_interrupt_limits(access); -} - - -void StackGuard::ClearInterrupt() { - thread_local_.interrupt_callback_ = 0; - thread_local_.interrupt_callback_data_ = 0; - Continue(API_INTERRUPT); + // Clear the interrupt flag from the active interrupt flags. + thread_local_.interrupt_flags_ &= ~flag; + if (!has_pending_interrupts(access)) reset_limits(access); } -bool StackGuard::IsAPIInterrupt() { +bool StackGuard::CheckAndClearInterrupt(InterruptFlag flag) { ExecutionAccess access(isolate_); - return thread_local_.interrupt_flags_ & API_INTERRUPT; -} - - -void StackGuard::InvokeInterruptCallback() { - InterruptCallback callback = 0; - void* data = 0; - - { - ExecutionAccess access(isolate_); - callback = thread_local_.interrupt_callback_; - data = thread_local_.interrupt_callback_data_; - thread_local_.interrupt_callback_ = NULL; - thread_local_.interrupt_callback_data_ = NULL; - } - - if (callback != NULL) { - VMState<EXTERNAL> state(isolate_); - HandleScope handle_scope(isolate_); - callback(reinterpret_cast<v8::Isolate*>(isolate_), data); - } + bool result = (thread_local_.interrupt_flags_ & flag); + thread_local_.interrupt_flags_ &= ~flag; + if (!has_pending_interrupts(access)) reset_limits(access); + return result; } char* StackGuard::ArchiveStackGuard(char* to) { ExecutionAccess access(isolate_); - OS::MemCopy(to, reinterpret_cast<char*>(&thread_local_), sizeof(ThreadLocal)); + MemCopy(to, reinterpret_cast<char*>(&thread_local_), sizeof(ThreadLocal)); ThreadLocal blank; // Set the stack limits using the old thread_local_. 
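The hunks above replace the old per-flag Is*/Request*/Continue trios and the postpone nesting counter with a bitmask plus an intrusive chain of PostponeInterruptsScopes: a scope on the stack intercepts matching requests, and whatever it swallowed is re-activated when it pops. A standalone sketch of that pattern, with simplified names and none of V8's locking or stack-limit plumbing:

#include <cassert>
#include <cstdio>

// Illustrative flags; the real values are generated from INTERRUPT_LIST.
enum Flag { GC_REQUEST = 1 << 0, TERMINATE = 1 << 1, API = 1 << 2 };

struct Scope;  // forward declaration for the intrusive chain

struct Guard {
  int flags = 0;
  Scope* top = nullptr;
  bool CheckAndClear(int f) {
    bool result = (flags & f) != 0;
    flags &= ~f;
    return result;
  }
  void Request(int f);
};

// A postpone scope: on entry it captures already-pending matching flags
// (as PushPostponeInterruptsScope does), on exit it hands them back
// (as PopPostponeInterruptsScope does).
struct Scope {
  Guard* guard;
  int mask;
  int intercepted = 0;
  Scope* prev;
  Scope(Guard* g, int m) : guard(g), mask(m), prev(g->top) {
    intercepted = guard->flags & mask;  // capture already-pending matches
    guard->flags &= ~intercepted;
    guard->top = this;
  }
  ~Scope() {
    guard->top = prev;
    guard->flags |= intercepted;        // re-activate on pop
  }
};

void Guard::Request(int f) {
  // Walk the scope chain first, as the new RequestInterrupt does.
  for (Scope* s = top; s != nullptr; s = s->prev) {
    if (s->mask & f) { s->intercepted |= f; return; }  // postponed
  }
  flags |= f;  // not intercepted: becomes active immediately
}

int main() {
  Guard g;
  {
    Scope postpone_gc(&g, GC_REQUEST);
    g.Request(GC_REQUEST);             // swallowed by the scope
    g.Request(TERMINATE);              // passes through
    assert(!g.CheckAndClear(GC_REQUEST));
    assert(g.CheckAndClear(TERMINATE));
  }                                     // scope pops: GC_REQUEST comes back
  assert(g.CheckAndClear(GC_REQUEST));
  std::puts("interrupt scope chain ok");
}

The real ClearInterrupt additionally walks the chain to scrub intercepted copies of the flag, which the sketch omits for brevity.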
@@ -572,8 +417,7 @@ char* StackGuard::ArchiveStackGuard(char* to) { char* StackGuard::RestoreStackGuard(char* from) { ExecutionAccess access(isolate_); - OS::MemCopy( - reinterpret_cast<char*>(&thread_local_), from, sizeof(ThreadLocal)); + MemCopy(reinterpret_cast<char*>(&thread_local_), from, sizeof(ThreadLocal)); isolate_->heap()->SetStackLimits(); return from + sizeof(ThreadLocal); } @@ -591,33 +435,25 @@ void StackGuard::ThreadLocal::Clear() { jslimit_ = kIllegalLimit; real_climit_ = kIllegalLimit; climit_ = kIllegalLimit; - nesting_ = 0; - postpone_interrupts_nesting_ = 0; + postpone_interrupts_ = NULL; interrupt_flags_ = 0; - interrupt_callback_ = NULL; - interrupt_callback_data_ = NULL; } bool StackGuard::ThreadLocal::Initialize(Isolate* isolate) { bool should_set_stack_limits = false; if (real_climit_ == kIllegalLimit) { - // Takes the address of the limit variable in order to find out where - // the top of stack is right now. const uintptr_t kLimitSize = FLAG_stack_size * KB; - uintptr_t limit = reinterpret_cast<uintptr_t>(&limit) - kLimitSize; - ASSERT(reinterpret_cast<uintptr_t>(&limit) > kLimitSize); + DCHECK(GetCurrentStackPosition() > kLimitSize); + uintptr_t limit = GetCurrentStackPosition() - kLimitSize; real_jslimit_ = SimulatorStack::JsLimitFromCLimit(isolate, limit); jslimit_ = SimulatorStack::JsLimitFromCLimit(isolate, limit); real_climit_ = limit; climit_ = limit; should_set_stack_limits = true; } - nesting_ = 0; - postpone_interrupts_nesting_ = 0; + postpone_interrupts_ = NULL; interrupt_flags_ = 0; - interrupt_callback_ = NULL; - interrupt_callback_data_ = NULL; return should_set_stack_limits; } @@ -834,147 +670,38 @@ Handle<String> Execution::GetStackTraceLine(Handle<Object> recv, } -static Object* RuntimePreempt(Isolate* isolate) { - // Clear the preempt request flag. - isolate->stack_guard()->Continue(PREEMPT); - - if (isolate->debug()->InDebugger()) { - // If currently in the debugger don't do any actual preemption but record - // that preemption occoured while in the debugger. - isolate->debug()->PreemptionWhileInDebugger(); - } else { - // Perform preemption. - v8::Unlocker unlocker(reinterpret_cast<v8::Isolate*>(isolate)); - Thread::YieldCPU(); +Object* StackGuard::HandleInterrupts() { + if (CheckAndClearInterrupt(GC_REQUEST)) { + isolate_->heap()->CollectAllGarbage(Heap::kNoGCFlags, "GC interrupt"); } - return isolate->heap()->undefined_value(); -} - - -Object* Execution::DebugBreakHelper(Isolate* isolate) { - // Just continue if breaks are disabled. - if (isolate->debug()->disable_break()) { - return isolate->heap()->undefined_value(); + if (CheckDebugBreak() || CheckDebugCommand()) { + isolate_->debug()->HandleDebugBreak(); } - // Ignore debug break during bootstrapping. - if (isolate->bootstrapper()->IsActive()) { - return isolate->heap()->undefined_value(); + if (CheckAndClearInterrupt(TERMINATE_EXECUTION)) { + return isolate_->TerminateExecution(); } - // Ignore debug break if debugger is not active. 
- if (!isolate->debugger()->IsDebuggerActive()) { - return isolate->heap()->undefined_value(); + if (CheckAndClearInterrupt(DEOPT_MARKED_ALLOCATION_SITES)) { + isolate_->heap()->DeoptMarkedAllocationSites(); } - StackLimitCheck check(isolate); - if (check.HasOverflowed()) { - return isolate->heap()->undefined_value(); + if (CheckAndClearInterrupt(INSTALL_CODE)) { + DCHECK(isolate_->concurrent_recompilation_enabled()); + isolate_->optimizing_compiler_thread()->InstallOptimizedFunctions(); } - { - JavaScriptFrameIterator it(isolate); - ASSERT(!it.done()); - Object* fun = it.frame()->function(); - if (fun && fun->IsJSFunction()) { - // Don't stop in builtin functions. - if (JSFunction::cast(fun)->IsBuiltin()) { - return isolate->heap()->undefined_value(); - } - GlobalObject* global = JSFunction::cast(fun)->context()->global_object(); - // Don't stop in debugger functions. - if (isolate->debug()->IsDebugGlobal(global)) { - return isolate->heap()->undefined_value(); - } - } + if (CheckAndClearInterrupt(API_INTERRUPT)) { + // Callback must be invoked outside of ExecutionAccess lock. + isolate_->InvokeApiInterruptCallback(); } - // Collect the break state before clearing the flags. - bool debug_command_only = - isolate->stack_guard()->IsDebugCommand() && - !isolate->stack_guard()->IsDebugBreak(); - - // Clear the debug break request flag. - isolate->stack_guard()->Continue(DEBUGBREAK); + isolate_->counters()->stack_interrupts()->Increment(); + isolate_->counters()->runtime_profiler_ticks()->Increment(); + isolate_->runtime_profiler()->OptimizeNow(); - ProcessDebugMessages(isolate, debug_command_only); - - // Return to continue execution. - return isolate->heap()->undefined_value(); -} - - -void Execution::ProcessDebugMessages(Isolate* isolate, - bool debug_command_only) { - // Clear the debug command request flag. - isolate->stack_guard()->Continue(DEBUGCOMMAND); - - StackLimitCheck check(isolate); - if (check.HasOverflowed()) { - return; - } - - HandleScope scope(isolate); - // Enter the debugger. Just continue if we fail to enter the debugger. - EnterDebugger debugger(isolate); - if (debugger.FailedToEnter()) { - return; - } - - // Notify the debug event listeners. Indicate auto continue if the break was - // a debug command break.
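The new StackGuard::HandleInterrupts above collapses Execution::HandleStackGuardInterrupt, DebugBreakHelper, and the scattered Is*/Continue pairs into one entry point that drains flags in a fixed priority order, with termination short-circuiting the rest. An abridged, standalone sketch of that dispatch shape (flag names simplified, side effects replaced by prints):

#include <cstdio>

enum Flag {
  GC_REQUEST = 1 << 0,
  DEBUG_BREAK = 1 << 1,
  TERMINATE_EXECUTION = 1 << 2,
  INSTALL_CODE = 1 << 3,
  API_INTERRUPT = 1 << 4
};

struct Result { bool terminated; };

// Each pending flag is cleared and serviced in order, mirroring the hunk
// above (GC first, then debug break, then termination, then the rest).
Result HandleInterrupts(int* flags) {
  auto check_and_clear = [&](Flag f) {
    bool set = (*flags & f) != 0;
    *flags &= ~f;
    return set;
  };
  if (check_and_clear(GC_REQUEST))          std::puts("collect garbage");
  if (check_and_clear(DEBUG_BREAK))         std::puts("enter debugger");
  if (check_and_clear(TERMINATE_EXECUTION)) return Result{true};
  if (check_and_clear(INSTALL_CODE))        std::puts("install optimized code");
  if (check_and_clear(API_INTERRUPT))       std::puts("run embedder callback");
  std::puts("tick runtime profiler");  // only reached when not terminating
  return Result{false};
}

int main() {
  int flags = GC_REQUEST | API_INTERRUPT;
  Result r = HandleInterrupts(&flags);
  std::printf("terminated=%d remaining=%d\n", r.terminated, flags);
}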
- isolate->debugger()->OnDebugBreak(isolate->factory()->undefined_value(), - debug_command_only); -} - - -Object* Execution::HandleStackGuardInterrupt(Isolate* isolate) { - StackGuard* stack_guard = isolate->stack_guard(); - if (stack_guard->ShouldPostponeInterrupts()) { - return isolate->heap()->undefined_value(); - } - - if (stack_guard->IsAPIInterrupt()) { - stack_guard->InvokeInterruptCallback(); - stack_guard->Continue(API_INTERRUPT); - } - - if (stack_guard->IsGCRequest()) { - isolate->heap()->CollectAllGarbage(Heap::kNoGCFlags, - "StackGuard GC request"); - stack_guard->Continue(GC_REQUEST); - } - - isolate->counters()->stack_interrupts()->Increment(); - isolate->counters()->runtime_profiler_ticks()->Increment(); - if (stack_guard->IsDebugBreak() || stack_guard->IsDebugCommand()) { - DebugBreakHelper(isolate); - } - if (stack_guard->IsPreempted()) RuntimePreempt(isolate); - if (stack_guard->IsTerminateExecution()) { - stack_guard->Continue(TERMINATE); - return isolate->TerminateExecution(); - } - if (stack_guard->IsInterrupted()) { - stack_guard->Continue(INTERRUPT); - return isolate->StackOverflow(); - } - if (stack_guard->IsFullDeopt()) { - stack_guard->Continue(FULL_DEOPT); - Deoptimizer::DeoptimizeAll(isolate); - } - if (stack_guard->IsDeoptMarkedAllocationSites()) { - stack_guard->Continue(DEOPT_MARKED_ALLOCATION_SITES); - isolate->heap()->DeoptMarkedAllocationSites(); - } - if (stack_guard->IsInstallCodeRequest()) { - ASSERT(isolate->concurrent_recompilation_enabled()); - stack_guard->Continue(INSTALL_CODE); - isolate->optimizing_compiler_thread()->InstallOptimizedFunctions(); - } - isolate->runtime_profiler()->OptimizeNow(); - return isolate->heap()->undefined_value(); + return isolate_->heap()->undefined_value(); } } } // namespace v8::internal diff --git a/deps/v8/src/execution.h b/deps/v8/src/execution.h index c165faba0b1..2a41bb8ba0d 100644 --- a/deps/v8/src/execution.h +++ b/deps/v8/src/execution.h @@ -5,26 +5,11 @@ #ifndef V8_EXECUTION_H_ #define V8_EXECUTION_H_ -#include "handles.h" +#include "src/handles.h" namespace v8 { namespace internal { -// Flag used to set the interrupt causes. -enum InterruptFlag { - INTERRUPT = 1 << 0, - DEBUGBREAK = 1 << 1, - DEBUGCOMMAND = 1 << 2, - PREEMPT = 1 << 3, - TERMINATE = 1 << 4, - GC_REQUEST = 1 << 5, - FULL_DEOPT = 1 << 6, - INSTALL_CODE = 1 << 7, - API_INTERRUPT = 1 << 8, - DEOPT_MARKED_ALLOCATION_SITES = 1 << 9 -}; - - class Execution V8_FINAL : public AllStatic { public: // Call a function, the caller supplies a receiver and an array @@ -119,13 +104,6 @@ class Execution V8_FINAL : public AllStatic { Handle<Object> pos, Handle<Object> is_global); - static Object* DebugBreakHelper(Isolate* isolate); - static void ProcessDebugMessages(Isolate* isolate, bool debug_command_only); - - // If the stack guard is triggered, but it is not an actual - // stack overflow, then handle the interruption accordingly. - static Object* HandleStackGuardInterrupt(Isolate* isolate); - // Get a function delegate (or undefined) for the given non-function // object. Used for support calling objects as functions. 
static Handle<Object> GetFunctionDelegate(Isolate* isolate, @@ -140,13 +118,11 @@ class Execution V8_FINAL : public AllStatic { Handle<Object> object); static MaybeHandle<Object> TryGetConstructorDelegate(Isolate* isolate, Handle<Object> object); - - static void RunMicrotasks(Isolate* isolate); - static void EnqueueMicrotask(Isolate* isolate, Handle<Object> microtask); }; class ExecutionAccess; +class PostponeInterruptsScope; // StackGuard contains the handling of the limits that are used to limit the @@ -170,32 +146,31 @@ class StackGuard V8_FINAL { // it has been set up. void ClearThread(const ExecutionAccess& lock); - bool IsStackOverflow(); - bool IsPreempted(); - void Preempt(); - bool IsInterrupted(); - void Interrupt(); - bool IsTerminateExecution(); - void TerminateExecution(); - void CancelTerminateExecution(); - bool IsDebugBreak(); - void DebugBreak(); - bool IsDebugCommand(); - void DebugCommand(); - bool IsGCRequest(); - void RequestGC(); - bool IsInstallCodeRequest(); - void RequestInstallCode(); - bool IsFullDeopt(); - void FullDeopt(); - bool IsDeoptMarkedAllocationSites(); - void DeoptMarkedAllocationSites(); - void Continue(InterruptFlag after_what); - - void RequestInterrupt(InterruptCallback callback, void* data); - void ClearInterrupt(); - bool IsAPIInterrupt(); - void InvokeInterruptCallback(); +#define INTERRUPT_LIST(V) \ + V(DEBUGBREAK, DebugBreak, 0) \ + V(DEBUGCOMMAND, DebugCommand, 1) \ + V(TERMINATE_EXECUTION, TerminateExecution, 2) \ + V(GC_REQUEST, GC, 3) \ + V(INSTALL_CODE, InstallCode, 4) \ + V(API_INTERRUPT, ApiInterrupt, 5) \ + V(DEOPT_MARKED_ALLOCATION_SITES, DeoptMarkedAllocationSites, 6) + +#define V(NAME, Name, id) \ + inline bool Check##Name() { return CheckInterrupt(NAME); } \ + inline void Request##Name() { RequestInterrupt(NAME); } \ + inline void Clear##Name() { ClearInterrupt(NAME); } + INTERRUPT_LIST(V) +#undef V + + // Flag used to set the interrupt causes. + enum InterruptFlag { + #define V(NAME, Name, id) NAME = (1 << id), + INTERRUPT_LIST(V) + #undef V + #define V(NAME, Name, id) NAME | + ALL_INTERRUPTS = INTERRUPT_LIST(V) 0 + #undef V + }; // This provides an asynchronous read of the stack limits for the current // thread. There are no locks protecting this, but it is assumed that you @@ -218,24 +193,24 @@ class StackGuard V8_FINAL { Address address_of_real_jslimit() { return reinterpret_cast<Address>(&thread_local_.real_jslimit_); } - bool ShouldPostponeInterrupts(); + + // If the stack guard is triggered, but it is not an actual + // stack overflow, then handle the interruption accordingly. + Object* HandleInterrupts(); private: StackGuard(); + bool CheckInterrupt(InterruptFlag flag); + void RequestInterrupt(InterruptFlag flag); + void ClearInterrupt(InterruptFlag flag); + bool CheckAndClearInterrupt(InterruptFlag flag); + // You should hold the ExecutionAccess lock when calling this method. bool has_pending_interrupts(const ExecutionAccess& lock) { - // Sanity check: We shouldn't be asking about pending interrupts - // unless we're not postponing them anymore. - ASSERT(!should_postpone_interrupts(lock)); return thread_local_.interrupt_flags_ != 0; } - // You should hold the ExecutionAccess lock when calling this method. - bool should_postpone_interrupts(const ExecutionAccess& lock) { - return thread_local_.postpone_interrupts_nesting_ > 0; - } - // You should hold the ExecutionAccess lock when calling this method. 
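The INTERRUPT_LIST block above is an X-macro: one list expands three times, producing the enum values, the ALL_INTERRUPTS union, and the per-flag Check/Request/Clear helpers, so adding an interrupt is a one-line change. The same trick reduced to a standalone example (list contents are illustrative, not V8's full set):

#include <cassert>

#define MY_INTERRUPT_LIST(V) \
  V(GC_REQUEST, Gc, 0)       \
  V(INSTALL_CODE, InstallCode, 1)

class Guard {
 public:
  // First expansion: enum values from the list, plus the OR of them all.
  enum Flag {
#define V(NAME, Name, id) NAME = 1 << id,
    MY_INTERRUPT_LIST(V)
#undef V
#define V(NAME, Name, id) NAME |
    ALL = MY_INTERRUPT_LIST(V) 0
#undef V
  };

  // Second expansion: one Check/Request/Clear triple per flag.
#define V(NAME, Name, id)                                   \
  bool Check##Name() const { return (flags_ & NAME) != 0; } \
  void Request##Name() { flags_ |= NAME; }                  \
  void Clear##Name() { flags_ &= ~NAME; }
  MY_INTERRUPT_LIST(V)
#undef V

 private:
  int flags_ = 0;
};

int main() {
  Guard g;
  g.RequestGc();
  assert(g.CheckGc());
  g.ClearGc();
  assert(!g.CheckGc() && !g.CheckInstallCode());
  assert(Guard::ALL == (Guard::GC_REQUEST | Guard::INSTALL_CODE));
}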
inline void set_interrupt_limits(const ExecutionAccess& lock); @@ -247,7 +222,7 @@ class StackGuard V8_FINAL { void EnableInterrupts(); void DisableInterrupts(); -#if V8_TARGET_ARCH_X64 || V8_TARGET_ARCH_ARM64 +#if V8_TARGET_ARCH_64_BIT static const uintptr_t kInterruptLimit = V8_UINT64_C(0xfffffffffffffffe); static const uintptr_t kIllegalLimit = V8_UINT64_C(0xfffffffffffffff8); #else @@ -255,6 +230,9 @@ class StackGuard V8_FINAL { static const uintptr_t kIllegalLimit = 0xfffffff8; #endif + void PushPostponeInterruptsScope(PostponeInterruptsScope* scope); + void PopPostponeInterruptsScope(); + class ThreadLocal V8_FINAL { public: ThreadLocal() { Clear(); } @@ -279,12 +257,8 @@ class StackGuard V8_FINAL { uintptr_t real_climit_; // Actual C++ stack limit set for the VM. uintptr_t climit_; - int nesting_; - int postpone_interrupts_nesting_; + PostponeInterruptsScope* postpone_interrupts_; int interrupt_flags_; - - InterruptCallback interrupt_callback_; - void* interrupt_callback_data_; }; // TODO(isolates): Technically this could be calculated directly from a diff --git a/deps/v8/src/extensions/externalize-string-extension.cc b/deps/v8/src/extensions/externalize-string-extension.cc index c942b613ac1..3f31249c543 100644 --- a/deps/v8/src/extensions/externalize-string-extension.cc +++ b/deps/v8/src/extensions/externalize-string-extension.cc @@ -2,7 +2,7 @@ // Use of this source code is governed by a BSD-style license that can be // found in the LICENSE file. -#include "externalize-string-extension.h" +#include "src/extensions/externalize-string-extension.h" namespace v8 { namespace internal { @@ -44,7 +44,7 @@ ExternalizeStringExtension::GetNativeFunctionTemplate( return v8::FunctionTemplate::New(isolate, ExternalizeStringExtension::Externalize); } else { - ASSERT(strcmp(*v8::String::Utf8Value(str), "isAsciiString") == 0); + DCHECK(strcmp(*v8::String::Utf8Value(str), "isAsciiString") == 0); return v8::FunctionTemplate::New(isolate, ExternalizeStringExtension::IsAscii); } diff --git a/deps/v8/src/extensions/externalize-string-extension.h b/deps/v8/src/extensions/externalize-string-extension.h index 305c67dcd68..74b5665ef0e 100644 --- a/deps/v8/src/extensions/externalize-string-extension.h +++ b/deps/v8/src/extensions/externalize-string-extension.h @@ -5,7 +5,7 @@ #ifndef V8_EXTENSIONS_EXTERNALIZE_STRING_EXTENSION_H_ #define V8_EXTENSIONS_EXTERNALIZE_STRING_EXTENSION_H_ -#include "v8.h" +#include "src/v8.h" namespace v8 { namespace internal { diff --git a/deps/v8/src/extensions/free-buffer-extension.cc b/deps/v8/src/extensions/free-buffer-extension.cc index 4ca0ab59168..e8c7732b66f 100644 --- a/deps/v8/src/extensions/free-buffer-extension.cc +++ b/deps/v8/src/extensions/free-buffer-extension.cc @@ -2,9 +2,10 @@ // Use of this source code is governed by a BSD-style license that can be // found in the LICENSE file. 
-#include "free-buffer-extension.h" -#include "platform.h" -#include "v8.h" +#include "src/extensions/free-buffer-extension.h" + +#include "src/base/platform/platform.h" +#include "src/v8.h" namespace v8 { namespace internal { diff --git a/deps/v8/src/extensions/free-buffer-extension.h b/deps/v8/src/extensions/free-buffer-extension.h index e1fc9fb866e..bccf760cc21 100644 --- a/deps/v8/src/extensions/free-buffer-extension.h +++ b/deps/v8/src/extensions/free-buffer-extension.h @@ -5,7 +5,7 @@ #ifndef V8_EXTENSIONS_FREE_BUFFER_EXTENSION_H_ #define V8_EXTENSIONS_FREE_BUFFER_EXTENSION_H_ -#include "v8.h" +#include "src/v8.h" namespace v8 { namespace internal { diff --git a/deps/v8/src/extensions/gc-extension.cc b/deps/v8/src/extensions/gc-extension.cc index 5a5884df9d5..74b74811c33 100644 --- a/deps/v8/src/extensions/gc-extension.cc +++ b/deps/v8/src/extensions/gc-extension.cc @@ -2,8 +2,9 @@ // Use of this source code is governed by a BSD-style license that can be // found in the LICENSE file. -#include "gc-extension.h" -#include "platform.h" +#include "src/extensions/gc-extension.h" + +#include "src/base/platform/platform.h" namespace v8 { namespace internal { diff --git a/deps/v8/src/extensions/gc-extension.h b/deps/v8/src/extensions/gc-extension.h index 05f39354a12..789354597e5 100644 --- a/deps/v8/src/extensions/gc-extension.h +++ b/deps/v8/src/extensions/gc-extension.h @@ -5,7 +5,7 @@ #ifndef V8_EXTENSIONS_GC_EXTENSION_H_ #define V8_EXTENSIONS_GC_EXTENSION_H_ -#include "v8.h" +#include "src/v8.h" namespace v8 { namespace internal { @@ -22,8 +22,8 @@ class GCExtension : public v8::Extension { private: static const char* BuildSource(char* buf, size_t size, const char* fun_name) { - OS::SNPrintF(Vector<char>(buf, static_cast<int>(size)), - "native function %s();", fun_name); + SNPrintF(Vector<char>(buf, static_cast<int>(size)), + "native function %s();", fun_name); return buf; } diff --git a/deps/v8/src/extensions/statistics-extension.cc b/deps/v8/src/extensions/statistics-extension.cc index a7770e58954..6f63245eea6 100644 --- a/deps/v8/src/extensions/statistics-extension.cc +++ b/deps/v8/src/extensions/statistics-extension.cc @@ -2,7 +2,7 @@ // Use of this source code is governed by a BSD-style license that can be // found in the LICENSE file. 
-#include "statistics-extension.h" +#include "src/extensions/statistics-extension.h" namespace v8 { namespace internal { @@ -14,7 +14,7 @@ const char* const StatisticsExtension::kSource = v8::Handle<v8::FunctionTemplate> StatisticsExtension::GetNativeFunctionTemplate( v8::Isolate* isolate, v8::Handle<v8::String> str) { - ASSERT(strcmp(*v8::String::Utf8Value(str), "getV8Statistics") == 0); + DCHECK(strcmp(*v8::String::Utf8Value(str), "getV8Statistics") == 0); return v8::FunctionTemplate::New(isolate, StatisticsExtension::GetCounters); } diff --git a/deps/v8/src/extensions/statistics-extension.h b/deps/v8/src/extensions/statistics-extension.h index 7eea82b0602..0915e61de05 100644 --- a/deps/v8/src/extensions/statistics-extension.h +++ b/deps/v8/src/extensions/statistics-extension.h @@ -5,7 +5,7 @@ #ifndef V8_EXTENSIONS_STATISTICS_EXTENSION_H_ #define V8_EXTENSIONS_STATISTICS_EXTENSION_H_ -#include "v8.h" +#include "src/v8.h" namespace v8 { namespace internal { diff --git a/deps/v8/src/extensions/trigger-failure-extension.cc b/deps/v8/src/extensions/trigger-failure-extension.cc index 31a9818de16..b0aacb42c65 100644 --- a/deps/v8/src/extensions/trigger-failure-extension.cc +++ b/deps/v8/src/extensions/trigger-failure-extension.cc @@ -2,8 +2,8 @@ // Use of this source code is governed by a BSD-style license that can be // found in the LICENSE file. -#include "trigger-failure-extension.h" -#include "v8.h" +#include "src/extensions/trigger-failure-extension.h" +#include "src/v8.h" namespace v8 { namespace internal { @@ -44,13 +44,13 @@ void TriggerFailureExtension::TriggerCheckFalse( void TriggerFailureExtension::TriggerAssertFalse( const v8::FunctionCallbackInfo<v8::Value>& args) { - ASSERT(false); + DCHECK(false); } void TriggerFailureExtension::TriggerSlowAssertFalse( const v8::FunctionCallbackInfo<v8::Value>& args) { - SLOW_ASSERT(false); + SLOW_DCHECK(false); } } } // namespace v8::internal diff --git a/deps/v8/src/extensions/trigger-failure-extension.h b/deps/v8/src/extensions/trigger-failure-extension.h index f012b8b5837..6974da5e311 100644 --- a/deps/v8/src/extensions/trigger-failure-extension.h +++ b/deps/v8/src/extensions/trigger-failure-extension.h @@ -5,7 +5,7 @@ #ifndef V8_EXTENSIONS_TRIGGER_FAILURE_EXTENSION_H_ #define V8_EXTENSIONS_TRIGGER_FAILURE_EXTENSION_H_ -#include "v8.h" +#include "src/v8.h" namespace v8 { namespace internal { diff --git a/deps/v8/src/factory.cc b/deps/v8/src/factory.cc index 9abb015d806..39e32806b7b 100644 --- a/deps/v8/src/factory.cc +++ b/deps/v8/src/factory.cc @@ -2,11 +2,12 @@ // Use of this source code is governed by a BSD-style license that can be // found in the LICENSE file. 
-#include "factory.h" +#include "src/factory.h" -#include "conversions.h" -#include "isolate-inl.h" -#include "macro-assembler.h" +#include "src/allocation-site-scopes.h" +#include "src/conversions.h" +#include "src/isolate-inl.h" +#include "src/macro-assembler.h" namespace v8 { namespace internal { @@ -60,7 +61,7 @@ Handle<Oddball> Factory::NewOddball(Handle<Map> map, Handle<FixedArray> Factory::NewFixedArray(int size, PretenureFlag pretenure) { - ASSERT(0 <= size); + DCHECK(0 <= size); CALL_HEAP_FUNCTION( isolate(), isolate()->heap()->AllocateFixedArray(size, pretenure), @@ -70,7 +71,7 @@ Handle<FixedArray> Factory::NewFixedArray(int size, PretenureFlag pretenure) { Handle<FixedArray> Factory::NewFixedArrayWithHoles(int size, PretenureFlag pretenure) { - ASSERT(0 <= size); + DCHECK(0 <= size); CALL_HEAP_FUNCTION( isolate(), isolate()->heap()->AllocateFixedArrayWithFiller(size, @@ -90,7 +91,7 @@ Handle<FixedArray> Factory::NewUninitializedFixedArray(int size) { Handle<FixedArrayBase> Factory::NewFixedDoubleArray(int size, PretenureFlag pretenure) { - ASSERT(0 <= size); + DCHECK(0 <= size); CALL_HEAP_FUNCTION( isolate(), isolate()->heap()->AllocateUninitializedFixedDoubleArray(size, pretenure), @@ -101,7 +102,7 @@ Handle<FixedArrayBase> Factory::NewFixedDoubleArray(int size, Handle<FixedArrayBase> Factory::NewFixedDoubleArrayWithHoles( int size, PretenureFlag pretenure) { - ASSERT(0 <= size); + DCHECK(0 <= size); Handle<FixedArrayBase> array = NewFixedDoubleArray(size, pretenure); if (size > 0) { Handle<FixedDoubleArray> double_array = @@ -115,18 +116,23 @@ Handle<FixedArrayBase> Factory::NewFixedDoubleArrayWithHoles( Handle<ConstantPoolArray> Factory::NewConstantPoolArray( - int number_of_int64_entries, - int number_of_code_ptr_entries, - int number_of_heap_ptr_entries, - int number_of_int32_entries) { - ASSERT(number_of_int64_entries > 0 || number_of_code_ptr_entries > 0 || - number_of_heap_ptr_entries > 0 || number_of_int32_entries > 0); + const ConstantPoolArray::NumberOfEntries& small) { + DCHECK(small.total_count() > 0); CALL_HEAP_FUNCTION( isolate(), - isolate()->heap()->AllocateConstantPoolArray(number_of_int64_entries, - number_of_code_ptr_entries, - number_of_heap_ptr_entries, - number_of_int32_entries), + isolate()->heap()->AllocateConstantPoolArray(small), + ConstantPoolArray); +} + + +Handle<ConstantPoolArray> Factory::NewExtendedConstantPoolArray( + const ConstantPoolArray::NumberOfEntries& small, + const ConstantPoolArray::NumberOfEntries& extended) { + DCHECK(small.total_count() > 0); + DCHECK(extended.total_count() > 0); + CALL_HEAP_FUNCTION( + isolate(), + isolate()->heap()->AllocateExtendedConstantPoolArray(small, extended), ConstantPoolArray); } @@ -146,7 +152,6 @@ Handle<AccessorPair> Factory::NewAccessorPair() { Handle<AccessorPair>::cast(NewStruct(ACCESSOR_PAIR_TYPE)); accessors->set_getter(*the_hole_value(), SKIP_WRITE_BARRIER); accessors->set_setter(*the_hole_value(), SKIP_WRITE_BARRIER); - accessors->set_access_flags(Smi::FromInt(0), SKIP_WRITE_BARRIER); return accessors; } @@ -207,9 +212,7 @@ template Handle<String> Factory::InternalizeStringWithKey< MaybeHandle<String> Factory::NewStringFromOneByte(Vector<const uint8_t> string, PretenureFlag pretenure) { int length = string.length(); - if (length == 1) { - return LookupSingleCharacterStringFromCode(string[0]); - } + if (length == 1) return LookupSingleCharacterStringFromCode(string[0]); Handle<SeqOneByteString> result; ASSIGN_RETURN_ON_EXCEPTION( isolate(), @@ -236,22 +239,56 @@ MaybeHandle<String> 
Factory::NewStringFromUtf8(Vector<const char> string, // since UTF8 is backwards compatible with ASCII. return NewStringFromOneByte(Vector<const uint8_t>::cast(string), pretenure); } + // Non-ASCII and we need to decode. - CALL_HEAP_FUNCTION( - isolate(), - isolate()->heap()->AllocateStringFromUtf8Slow(string, - non_ascii_start, - pretenure), + Access<UnicodeCache::Utf8Decoder> + decoder(isolate()->unicode_cache()->utf8_decoder()); + decoder->Reset(string.start() + non_ascii_start, + length - non_ascii_start); + int utf16_length = decoder->Utf16Length(); + DCHECK(utf16_length > 0); + // Allocate string. + Handle<SeqTwoByteString> result; + ASSIGN_RETURN_ON_EXCEPTION( + isolate(), result, + NewRawTwoByteString(non_ascii_start + utf16_length, pretenure), String); + // Copy ascii portion. + uint16_t* data = result->GetChars(); + const char* ascii_data = string.start(); + for (int i = 0; i < non_ascii_start; i++) { + *data++ = *ascii_data++; + } + // Now write the remainder. + decoder->WriteUtf16(data, utf16_length); + return result; } MaybeHandle<String> Factory::NewStringFromTwoByte(Vector<const uc16> string, PretenureFlag pretenure) { - CALL_HEAP_FUNCTION( - isolate(), - isolate()->heap()->AllocateStringFromTwoByte(string, pretenure), - String); + int length = string.length(); + const uc16* start = string.start(); + if (String::IsOneByte(start, length)) { + if (length == 1) return LookupSingleCharacterStringFromCode(string[0]); + Handle<SeqOneByteString> result; + ASSIGN_RETURN_ON_EXCEPTION( + isolate(), + result, + NewRawOneByteString(length, pretenure), + String); + CopyChars(result->GetChars(), start, length); + return result; + } else { + Handle<SeqTwoByteString> result; + ASSIGN_RETURN_ON_EXCEPTION( + isolate(), + result, + NewRawTwoByteString(length, pretenure), + String); + CopyChars(result->GetChars(), start, length); + return result; + } } @@ -323,6 +360,9 @@ MaybeHandle<Map> Factory::InternalizedStringMapForString( MaybeHandle<SeqOneByteString> Factory::NewRawOneByteString( int length, PretenureFlag pretenure) { + if (length > String::kMaxLength || length < 0) { + return isolate()->Throw<SeqOneByteString>(NewInvalidStringLengthError()); + } CALL_HEAP_FUNCTION( isolate(), isolate()->heap()->AllocateRawOneByteString(length, pretenure), @@ -332,6 +372,9 @@ MaybeHandle<SeqOneByteString> Factory::NewRawOneByteString( MaybeHandle<SeqTwoByteString> Factory::NewRawTwoByteString( int length, PretenureFlag pretenure) { + if (length > String::kMaxLength || length < 0) { + return isolate()->Throw<SeqTwoByteString>(NewInvalidStringLengthError()); + } CALL_HEAP_FUNCTION( isolate(), isolate()->heap()->AllocateRawTwoByteString(length, pretenure), @@ -355,7 +398,7 @@ Handle<String> Factory::LookupSingleCharacterStringFromCode(uint32_t code) { single_character_string_cache()->set(code, *result); return result; } - ASSERT(code <= String::kMaxUtf16CodeUnitU); + DCHECK(code <= String::kMaxUtf16CodeUnitU); Handle<SeqTwoByteString> result = NewRawTwoByteString(1).ToHandleChecked(); result->SeqTwoByteStringSet(0, static_cast<uint16_t>(code)); @@ -387,7 +430,7 @@ static inline Handle<String> MakeOrFindTwoCharacterString(Isolate* isolate, // when building the new string. if (static_cast<unsigned>(c1 | c2) <= String::kMaxOneByteCharCodeU) { // We can do this. - ASSERT(IsPowerOf2(String::kMaxOneByteCharCodeU + 1)); // because of this. + DCHECK(IsPowerOf2(String::kMaxOneByteCharCodeU + 1)); // because of this. 
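The NewStringFromTwoByte rewrite above scans the UTF-16 input once and allocates the narrow one-byte representation whenever every code unit fits, instead of unconditionally allocating two bytes per unit. A self-contained sketch of that check and narrowing copy, where kMaxOneByteChar stands in for String::kMaxOneByteCharCode (0xFF, Latin-1, in V8):

#include <cassert>
#include <cstdint>
#include <string>
#include <vector>

constexpr uint16_t kMaxOneByteChar = 0xFF;

// Analogue of String::IsOneByte: true when the narrow encoding is lossless.
bool IsOneByte(const uint16_t* chars, size_t length) {
  for (size_t i = 0; i < length; i++) {
    if (chars[i] > kMaxOneByteChar) return false;
  }
  return true;
}

int main() {
  std::vector<uint16_t> latin1 = {'c', 'a', 'f', 0xE9};  // "café"
  std::vector<uint16_t> wide = {0x3053, 0x3093};         // kana, needs 2 bytes
  assert(IsOneByte(latin1.data(), latin1.size()));
  assert(!IsOneByte(wide.data(), wide.size()));
  // One-byte path: narrowing copy, like CopyChars in the hunk above.
  std::string narrow(latin1.begin(), latin1.end());
  assert(narrow.size() == 4);
}

The inlined NewStringFromUtf8 path is the same idea in two passes: measure the UTF-16 length of the non-ASCII tail first, allocate once, then copy the ASCII prefix and decode the remainder in place.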
Handle<SeqOneByteString> str = isolate->factory()->NewRawOneByteString(2).ToHandleChecked(); uint8_t* dest = str->GetChars(); @@ -457,8 +500,8 @@ MaybeHandle<String> Factory::NewConsString(Handle<String> left, if (length < ConsString::kMinLength) { // Note that neither of the two inputs can be a slice because: STATIC_ASSERT(ConsString::kMinLength <= SlicedString::kMinLength); - ASSERT(left->IsFlat()); - ASSERT(right->IsFlat()); + DCHECK(left->IsFlat()); + DCHECK(right->IsFlat()); STATIC_ASSERT(ConsString::kMinLength <= String::kMaxLength); if (is_one_byte) { @@ -501,26 +544,13 @@ MaybeHandle<String> Factory::NewConsString(Handle<String> left, } -Handle<String> Factory::NewFlatConcatString(Handle<String> first, - Handle<String> second) { - int total_length = first->length() + second->length(); - if (first->IsOneByteRepresentation() && second->IsOneByteRepresentation()) { - return ConcatStringContent<uint8_t>( - NewRawOneByteString(total_length).ToHandleChecked(), first, second); - } else { - return ConcatStringContent<uc16>( - NewRawTwoByteString(total_length).ToHandleChecked(), first, second); - } -} - - Handle<String> Factory::NewProperSubString(Handle<String> str, int begin, int end) { #if VERIFY_HEAP if (FLAG_verify_heap) str->StringVerify(); #endif - ASSERT(begin > 0 || end < str->length()); + DCHECK(begin > 0 || end < str->length()); str = String::Flatten(str); @@ -564,7 +594,7 @@ Handle<String> Factory::NewProperSubString(Handle<String> str, offset += slice->offset(); } - ASSERT(str->IsSeqString() || str->IsExternalString()); + DCHECK(str->IsSeqString() || str->IsExternalString()); Handle<Map> map = str->IsOneByteRepresentation() ? sliced_ascii_string_map() : sliced_string_map(); Handle<SlicedString> slice = New<SlicedString>(map, NEW_SPACE); @@ -581,8 +611,7 @@ MaybeHandle<String> Factory::NewExternalStringFromAscii( const ExternalAsciiString::Resource* resource) { size_t length = resource->length(); if (length > static_cast<size_t>(String::kMaxLength)) { - isolate()->ThrowInvalidStringLength(); - return MaybeHandle<String>(); + return isolate()->Throw<String>(NewInvalidStringLengthError()); } Handle<Map> map = external_ascii_string_map(); @@ -600,8 +629,7 @@ MaybeHandle<String> Factory::NewExternalStringFromTwoByte( const ExternalTwoByteString::Resource* resource) { size_t length = resource->length(); if (length > static_cast<size_t>(String::kMaxLength)) { - isolate()->ThrowInvalidStringLength(); - return MaybeHandle<String>(); + return isolate()->Throw<String>(NewInvalidStringLengthError()); } // For small strings we check whether the resource contains only @@ -636,12 +664,20 @@ Handle<Symbol> Factory::NewPrivateSymbol() { } +Handle<Symbol> Factory::NewPrivateOwnSymbol() { + Handle<Symbol> symbol = NewSymbol(); + symbol->set_is_private(true); + symbol->set_is_own(true); + return symbol; +} + + Handle<Context> Factory::NewNativeContext() { Handle<FixedArray> array = NewFixedArray(Context::NATIVE_CONTEXT_SLOTS); array->set_map_no_write_barrier(*native_context_map()); Handle<Context> context = Handle<Context>::cast(array); context->set_js_array_maps(*undefined_value()); - ASSERT(context->IsNativeContext()); + DCHECK(context->IsNativeContext()); return context; } @@ -656,7 +692,7 @@ Handle<Context> Factory::NewGlobalContext(Handle<JSFunction> function, context->set_previous(function->context()); context->set_extension(*scope_info); context->set_global_object(function->context()->global_object()); - ASSERT(context->IsGlobalContext()); + DCHECK(context->IsGlobalContext()); return 
context; } @@ -674,7 +710,7 @@ Handle<Context> Factory::NewModuleContext(Handle<ScopeInfo> scope_info) { Handle<Context> Factory::NewFunctionContext(int length, Handle<JSFunction> function) { - ASSERT(length >= Context::MIN_CONTEXT_SLOTS); + DCHECK(length >= Context::MIN_CONTEXT_SLOTS); Handle<FixedArray> array = NewFixedArray(length); array->set_map_no_write_barrier(*function_context_map()); Handle<Context> context = Handle<Context>::cast(array); @@ -822,7 +858,7 @@ Handle<Foreign> Factory::NewForeign(const AccessorDescriptor* desc) { Handle<ByteArray> Factory::NewByteArray(int length, PretenureFlag pretenure) { - ASSERT(0 <= length); + DCHECK(0 <= length); CALL_HEAP_FUNCTION( isolate(), isolate()->heap()->AllocateByteArray(length, pretenure), @@ -834,7 +870,7 @@ Handle<ExternalArray> Factory::NewExternalArray(int length, ExternalArrayType array_type, void* external_pointer, PretenureFlag pretenure) { - ASSERT(0 <= length && length <= Smi::kMaxValue); + DCHECK(0 <= length && length <= Smi::kMaxValue); CALL_HEAP_FUNCTION( isolate(), isolate()->heap()->AllocateExternalArray(length, @@ -849,7 +885,7 @@ Handle<FixedTypedArrayBase> Factory::NewFixedTypedArray( int length, ExternalArrayType array_type, PretenureFlag pretenure) { - ASSERT(0 <= length && length <= Smi::kMaxValue); + DCHECK(0 <= length && length <= Smi::kMaxValue); CALL_HEAP_FUNCTION( isolate(), isolate()->heap()->AllocateFixedTypedArray(length, @@ -941,7 +977,7 @@ Handle<FixedArray> Factory::CopyFixedArray(Handle<FixedArray> array) { Handle<FixedArray> Factory::CopyAndTenureFixedCOWArray( Handle<FixedArray> array) { - ASSERT(isolate()->heap()->InNewSpace(*array)); + DCHECK(isolate()->heap()->InNewSpace(*array)); CALL_HEAP_FUNCTION(isolate(), isolate()->heap()->CopyAndTenureFixedCOWArray(*array), FixedArray); @@ -969,7 +1005,7 @@ Handle<Object> Factory::NewNumber(double value, // We need to distinguish the minus zero value and this cannot be // done after conversion to int. Doing this by comparing bit // patterns is faster than using fpclassify() et al. - if (IsMinusZero(value)) return NewHeapNumber(-0.0, pretenure); + if (IsMinusZero(value)) return NewHeapNumber(-0.0, IMMUTABLE, pretenure); int int_value = FastD2I(value); if (value == int_value && Smi::IsValid(int_value)) { @@ -977,15 +1013,15 @@ Handle<Object> Factory::NewNumber(double value, } // Materialize the value in the heap. - return NewHeapNumber(value, pretenure); + return NewHeapNumber(value, IMMUTABLE, pretenure); } Handle<Object> Factory::NewNumberFromInt(int32_t value, PretenureFlag pretenure) { if (Smi::IsValid(value)) return handle(Smi::FromInt(value), isolate()); - // Bypass NumberFromDouble to avoid various redundant checks. - return NewHeapNumber(FastI2D(value), pretenure); + // Bypass NewNumber to avoid various redundant checks. 
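The NewNumber hunk above checks IsMinusZero before trying the Smi fast path because -0.0 compares equal to 0, so a plain equality test would happily tag it as the small integer 0 and lose the sign; only the bit pattern distinguishes the two. A standalone version of that bit-pattern test:

#include <cassert>
#include <cstdint>
#include <cstring>

// -0.0 is the IEEE-754 double with only the sign bit set.
bool IsMinusZero(double value) {
  uint64_t bits;
  static_assert(sizeof(bits) == sizeof(value), "double must be 64-bit");
  std::memcpy(&bits, &value, sizeof(bits));
  return bits == 0x8000000000000000ULL;
}

int main() {
  double minus_zero = -0.0;
  assert(minus_zero == 0.0);        // == cannot tell the difference...
  assert(IsMinusZero(minus_zero));  // ...but the bit pattern can
  assert(!IsMinusZero(0.0));
  assert(!IsMinusZero(-1.0));
}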
+ return NewHeapNumber(FastI2D(value), IMMUTABLE, pretenure); } @@ -995,15 +1031,17 @@ Handle<Object> Factory::NewNumberFromUint(uint32_t value, if (int32v >= 0 && Smi::IsValid(int32v)) { return handle(Smi::FromInt(int32v), isolate()); } - return NewHeapNumber(FastUI2D(value), pretenure); + return NewHeapNumber(FastUI2D(value), IMMUTABLE, pretenure); } Handle<HeapNumber> Factory::NewHeapNumber(double value, + MutableMode mode, PretenureFlag pretenure) { CALL_HEAP_FUNCTION( isolate(), - isolate()->heap()->AllocateHeapNumber(value, pretenure), HeapNumber); + isolate()->heap()->AllocateHeapNumber(value, mode, pretenure), + HeapNumber); } @@ -1092,7 +1130,7 @@ Handle<String> Factory::EmergencyNewError(const char* message, char* p = &buffer[0]; Vector<char> v(buffer, kBufferSize); - OS::StrNCpy(v, message, space); + StrNCpy(v, message, space); space -= Min(space, strlen(message)); p = &buffer[kBufferSize] - space; @@ -1105,7 +1143,7 @@ Handle<String> Factory::EmergencyNewError(const char* message, Object::GetElement(isolate(), args, i).ToHandleChecked()); SmartArrayPointer<char> arg = arg_str->ToCString(); Vector<char> v2(p, static_cast<int>(space)); - OS::StrNCpy(v2, arg.get(), space); + StrNCpy(v2, arg.get(), space); space -= Min(space, strlen(arg.get())); p = &buffer[kBufferSize] - space; } @@ -1188,6 +1226,7 @@ void Factory::InitializeFunction(Handle<JSFunction> function, function->set_prototype_or_initial_map(*the_hole_value()); function->set_literals_or_bindings(*empty_fixed_array()); function->set_next_function_link(*undefined_value()); + if (info->is_arrow()) function->RemovePrototype(); } @@ -1202,103 +1241,75 @@ Handle<JSFunction> Factory::NewFunction(Handle<Map> map, } -Handle<JSFunction> Factory::NewFunction(Handle<String> name, - Handle<Code> code, - MaybeHandle<Object> maybe_prototype) { - Handle<SharedFunctionInfo> info = NewSharedFunctionInfo(name); - ASSERT(info->strict_mode() == SLOPPY); - info->set_code(*code); - Handle<Context> context(isolate()->context()->native_context()); - Handle<Map> map = maybe_prototype.is_null() - ? 
isolate()->sloppy_function_without_prototype_map() - : isolate()->sloppy_function_map(); - Handle<JSFunction> result = NewFunction(map, info, context); - Handle<Object> prototype; - if (maybe_prototype.ToHandle(&prototype)) { - result->set_prototype_or_initial_map(*prototype); - } - return result; +Handle<JSFunction> Factory::NewFunction(Handle<Map> map, + Handle<String> name, + MaybeHandle<Code> code) { + Handle<Context> context(isolate()->native_context()); + Handle<SharedFunctionInfo> info = NewSharedFunctionInfo(name, code); + DCHECK((info->strict_mode() == SLOPPY) && + (map.is_identical_to(isolate()->sloppy_function_map()) || + map.is_identical_to( + isolate()->sloppy_function_without_prototype_map()) || + map.is_identical_to( + isolate()->sloppy_function_with_readonly_prototype_map()))); + return NewFunction(map, info, context); +} + + +Handle<JSFunction> Factory::NewFunction(Handle<String> name) { + return NewFunction( + isolate()->sloppy_function_map(), name, MaybeHandle<Code>()); +} + + +Handle<JSFunction> Factory::NewFunctionWithoutPrototype(Handle<String> name, + Handle<Code> code) { + return NewFunction( + isolate()->sloppy_function_without_prototype_map(), name, code); } -Handle<JSFunction> Factory::NewFunctionWithPrototype(Handle<String> name, - Handle<Object> prototype) { - Handle<SharedFunctionInfo> info = NewSharedFunctionInfo(name); - ASSERT(info->strict_mode() == SLOPPY); - Handle<Context> context(isolate()->context()->native_context()); - Handle<Map> map = isolate()->sloppy_function_map(); - Handle<JSFunction> result = NewFunction(map, info, context); +Handle<JSFunction> Factory::NewFunction(Handle<String> name, + Handle<Code> code, + Handle<Object> prototype, + bool read_only_prototype) { + Handle<Map> map = read_only_prototype + ? 
isolate()->sloppy_function_with_readonly_prototype_map() + : isolate()->sloppy_function_map(); + Handle<JSFunction> result = NewFunction(map, name, code); result->set_prototype_or_initial_map(*prototype); return result; } -Handle<JSFunction> Factory::NewFunction(MaybeHandle<Object> maybe_prototype, - Handle<String> name, +Handle<JSFunction> Factory::NewFunction(Handle<String> name, + Handle<Code> code, + Handle<Object> prototype, InstanceType type, int instance_size, - Handle<Code> code, - bool force_initial_map) { + bool read_only_prototype) { // Allocate the function - Handle<JSFunction> function = NewFunction(name, code, maybe_prototype); - - if (force_initial_map || - type != JS_OBJECT_TYPE || - instance_size != JSObject::kHeaderSize) { - Handle<Object> prototype = maybe_prototype.ToHandleChecked(); - Handle<Map> initial_map = NewMap(type, instance_size); - if (prototype->IsJSObject()) { - JSObject::SetLocalPropertyIgnoreAttributes( - Handle<JSObject>::cast(prototype), - constructor_string(), - function, - DONT_ENUM).Assert(); - } else if (!function->shared()->is_generator()) { - prototype = NewFunctionPrototype(function); - } - initial_map->set_prototype(*prototype); - function->set_initial_map(*initial_map); - initial_map->set_constructor(*function); - } else { - ASSERT(!function->has_initial_map()); - ASSERT(!function->has_prototype()); + Handle<JSFunction> function = NewFunction( + name, code, prototype, read_only_prototype); + + Handle<Map> initial_map = NewMap( + type, instance_size, GetInitialFastElementsKind()); + if (prototype->IsTheHole() && !function->shared()->is_generator()) { + prototype = NewFunctionPrototype(function); } + JSFunction::SetInitialMap(function, initial_map, + Handle<JSReceiver>::cast(prototype)); + return function; } Handle<JSFunction> Factory::NewFunction(Handle<String> name, - InstanceType type, - int instance_size, Handle<Code> code, - bool force_initial_map) { - return NewFunction( - the_hole_value(), name, type, instance_size, code, force_initial_map); -} - - -Handle<JSFunction> Factory::NewFunctionWithPrototype(Handle<String> name, - InstanceType type, - int instance_size, - Handle<JSObject> prototype, - Handle<Code> code, - bool force_initial_map) { - // Allocate the function. - Handle<JSFunction> function = NewFunction(name, code, prototype); - - if (force_initial_map || - type != JS_OBJECT_TYPE || - instance_size != JSObject::kHeaderSize) { - Handle<Map> initial_map = NewMap(type, - instance_size, - GetInitialFastElementsKind()); - function->set_initial_map(*initial_map); - initial_map->set_constructor(*function); - } - - JSFunction::SetPrototype(function, prototype); - return function; + InstanceType type, + int instance_size) { + return NewFunction(name, code, the_hole_value(), type, instance_size); } @@ -1315,17 +1326,15 @@ Handle<JSObject> Factory::NewFunctionPrototype(Handle<JSFunction> function) { // Each function prototype gets a fresh map to avoid unwanted sharing of // maps between prototypes of different constructors. 
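The NewFunction overloads above are consolidated so everything funnels through the map-taking constructor, and the remaining overloads only decide which sloppy-mode map applies. A trivial standalone sketch of that selection (enum names are illustrative stand-ins for the isolate's map accessors):

#include <cassert>

enum FunctionMap {
  SLOPPY_WITH_PROTOTYPE,           // ordinary functions
  SLOPPY_WITHOUT_PROTOTYPE,        // e.g. functions that must not expose one
  SLOPPY_WITH_READONLY_PROTOTYPE   // the read_only_prototype case
};

FunctionMap ChooseMap(bool has_prototype, bool read_only_prototype) {
  if (!has_prototype) return SLOPPY_WITHOUT_PROTOTYPE;
  return read_only_prototype ? SLOPPY_WITH_READONLY_PROTOTYPE
                             : SLOPPY_WITH_PROTOTYPE;
}

int main() {
  assert(ChooseMap(true, false) == SLOPPY_WITH_PROTOTYPE);
  assert(ChooseMap(false, false) == SLOPPY_WITHOUT_PROTOTYPE);
  assert(ChooseMap(true, true) == SLOPPY_WITH_READONLY_PROTOTYPE);
}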
Handle<JSFunction> object_function(native_context->object_function()); - ASSERT(object_function->has_initial_map()); - new_map = Map::Copy(handle(object_function->initial_map())); + DCHECK(object_function->has_initial_map()); + new_map = handle(object_function->initial_map()); } + DCHECK(!new_map->is_prototype_map()); Handle<JSObject> prototype = NewJSObjectFromMap(new_map); if (!function->shared()->is_generator()) { - JSObject::SetLocalPropertyIgnoreAttributes(prototype, - constructor_string(), - function, - DONT_ENUM).Assert(); + JSObject::AddProperty(prototype, constructor_string(), function, DONT_ENUM); } return prototype; @@ -1365,7 +1374,7 @@ Handle<JSFunction> Factory::NewFunctionFromSharedFunctionInfo( FixedArray* literals = info->GetLiteralsFromOptimizedCodeMap(index); if (literals != NULL) result->set_literals(literals); Code* code = info->GetCodeFromOptimizedCodeMap(index); - ASSERT(!code->marked_for_deoptimization()); + DCHECK(!code->marked_for_deoptimization()); result->ReplaceCode(code); return result; } @@ -1383,18 +1392,6 @@ Handle<JSFunction> Factory::NewFunctionFromSharedFunctionInfo( } -Handle<JSObject> Factory::NewIteratorResultObject(Handle<Object> value, - bool done) { - Handle<Map> map(isolate()->native_context()->iterator_result_map()); - Handle<JSObject> result = NewJSObjectFromMap(map, NOT_TENURED, false); - result->InObjectPropertyAtPut( - JSGeneratorObject::kResultValuePropertyIndex, *value); - result->InObjectPropertyAtPut( - JSGeneratorObject::kResultDonePropertyIndex, *ToBoolean(done)); - return result; -} - - Handle<ScopeInfo> Factory::NewScopeInfo(int length) { Handle<FixedArray> array = NewFixedArray(length, TENURED); array->set_map_no_write_barrier(*scope_info_map()); @@ -1423,7 +1420,8 @@ Handle<Code> Factory::NewCode(const CodeDesc& desc, Handle<Object> self_ref, bool immovable, bool crankshafted, - int prologue_offset) { + int prologue_offset, + bool is_debug) { Handle<ByteArray> reloc_info = NewByteArray(desc.reloc_size, TENURED); Handle<ConstantPoolArray> constant_pool = desc.origin->NewConstantPool(isolate()); @@ -1433,7 +1431,8 @@ Handle<Code> Factory::NewCode(const CodeDesc& desc, int obj_size = Code::SizeFor(body_size); Handle<Code> code = NewCodeRaw(obj_size, immovable); - ASSERT(!isolate()->code_range()->exists() || + DCHECK(isolate()->code_range() == NULL || + !isolate()->code_range()->valid() || isolate()->code_range()->contains(code->address())); // The code object has not been fully initialized yet. 
We rely on the @@ -1448,7 +1447,7 @@ Handle<Code> Factory::NewCode(const CodeDesc& desc, code->set_raw_kind_specific_flags2(0); code->set_is_crankshafted(crankshafted); code->set_deoptimization_data(*empty_fixed_array(), SKIP_WRITE_BARRIER); - code->set_raw_type_feedback_info(*undefined_value()); + code->set_raw_type_feedback_info(Smi::FromInt(0)); code->set_next_code_link(*undefined_value()); code->set_handler_table(*empty_fixed_array(), SKIP_WRITE_BARRIER); code->set_prologue_offset(prologue_offset); @@ -1456,13 +1455,14 @@ Handle<Code> Factory::NewCode(const CodeDesc& desc, code->set_marked_for_deoptimization(false); } + if (is_debug) { + DCHECK(code->kind() == Code::FUNCTION); + code->set_has_debug_break_slots(true); + } + desc.origin->PopulateConstantPool(*constant_pool); code->set_constant_pool(*constant_pool); - if (code->kind() == Code::FUNCTION) { - code->set_has_debug_break_slots(isolate()->debugger()->IsDebuggerActive()); - } - // Allow self references to created code object by patching the handle to // point to the newly allocated Code object. if (!self_ref.is_null()) *(self_ref.location()) = *code; @@ -1529,19 +1529,19 @@ Handle<JSModule> Factory::NewJSModule(Handle<Context> context, Handle<GlobalObject> Factory::NewGlobalObject(Handle<JSFunction> constructor) { - ASSERT(constructor->has_initial_map()); + DCHECK(constructor->has_initial_map()); Handle<Map> map(constructor->initial_map()); - ASSERT(map->is_dictionary_map()); + DCHECK(map->is_dictionary_map()); // Make sure no field properties are described in the initial map. // This guarantees us that normalizing the properties does not // require us to change property values to PropertyCells. - ASSERT(map->NextFreePropertyIndex() == 0); + DCHECK(map->NextFreePropertyIndex() == 0); // Make sure we don't have a ton of pre-allocated slots in the // global objects. They will be unused once we normalize the object. - ASSERT(map->unused_property_fields() == 0); - ASSERT(map->inobject_properties() == 0); + DCHECK(map->unused_property_fields() == 0); + DCHECK(map->inobject_properties() == 0); // Initial size of the backing store to avoid resize of the storage during // bootstrapping. The size differs between the JS global object ad the @@ -1558,7 +1558,7 @@ Handle<GlobalObject> Factory::NewGlobalObject(Handle<JSFunction> constructor) { Handle<DescriptorArray> descs(map->instance_descriptors()); for (int i = 0; i < map->NumberOfOwnDescriptors(); i++) { PropertyDetails details = descs->GetDetails(i); - ASSERT(details.type() == CALLBACKS); // Only accessors are expected. + DCHECK(details.type() == CALLBACKS); // Only accessors are expected. PropertyDetails d = PropertyDetails(details.attributes(), CALLBACKS, i + 1); Handle<Name> name(descs->GetKey(i)); Handle<Object> value(descs->GetCallbacksObject(i), isolate()); @@ -1580,7 +1580,7 @@ Handle<GlobalObject> Factory::NewGlobalObject(Handle<JSFunction> constructor) { global->set_properties(*dictionary); // Make sure result is a global object with properties in dictionary. 
- ASSERT(global->IsGlobalObject() && !global->HasFastProperties()); + DCHECK(global->IsGlobalObject() && !global->HasFastProperties()); return global; } @@ -1627,7 +1627,7 @@ Handle<JSArray> Factory::NewJSArrayWithElements(Handle<FixedArrayBase> elements, ElementsKind elements_kind, int length, PretenureFlag pretenure) { - ASSERT(length <= elements->length()); + DCHECK(length <= elements->length()); Handle<JSArray> array = NewJSArray(elements_kind, pretenure); array->set_elements(*elements); @@ -1641,7 +1641,7 @@ void Factory::NewJSArrayStorage(Handle<JSArray> array, int length, int capacity, ArrayStorageAllocationMode mode) { - ASSERT(capacity >= length); + DCHECK(capacity >= length); if (capacity == 0) { array->set_length(Smi::FromInt(0)); @@ -1655,15 +1655,15 @@ void Factory::NewJSArrayStorage(Handle<JSArray> array, if (mode == DONT_INITIALIZE_ARRAY_ELEMENTS) { elms = NewFixedDoubleArray(capacity); } else { - ASSERT(mode == INITIALIZE_ARRAY_ELEMENTS_WITH_HOLE); + DCHECK(mode == INITIALIZE_ARRAY_ELEMENTS_WITH_HOLE); elms = NewFixedDoubleArrayWithHoles(capacity); } } else { - ASSERT(IsFastSmiOrObjectElementsKind(elements_kind)); + DCHECK(IsFastSmiOrObjectElementsKind(elements_kind)); if (mode == DONT_INITIALIZE_ARRAY_ELEMENTS) { elms = NewUninitializedFixedArray(capacity); } else { - ASSERT(mode == INITIALIZE_ARRAY_ELEMENTS_WITH_HOLE); + DCHECK(mode == INITIALIZE_ARRAY_ELEMENTS_WITH_HOLE); elms = NewFixedArrayWithHoles(capacity); } } @@ -1675,10 +1675,10 @@ void Factory::NewJSArrayStorage(Handle<JSArray> array, Handle<JSGeneratorObject> Factory::NewJSGeneratorObject( Handle<JSFunction> function) { - ASSERT(function->shared()->is_generator()); + DCHECK(function->shared()->is_generator()); JSFunction::EnsureHasInitialMap(function); Handle<Map> map(function->initial_map()); - ASSERT(map->instance_type() == JS_GENERATOR_OBJECT_TYPE); + DCHECK(map->instance_type() == JS_GENERATOR_OBJECT_TYPE); CALL_HEAP_FUNCTION( isolate(), isolate()->heap()->AllocateJSObjectFromMap(*map), @@ -1688,7 +1688,7 @@ Handle<JSGeneratorObject> Factory::NewJSGeneratorObject( Handle<JSArrayBuffer> Factory::NewJSArrayBuffer() { Handle<JSFunction> array_buffer_fun( - isolate()->context()->native_context()->array_buffer_fun()); + isolate()->native_context()->array_buffer_fun()); CALL_HEAP_FUNCTION( isolate(), isolate()->heap()->AllocateJSObject(*array_buffer_fun), @@ -1698,7 +1698,7 @@ Handle<JSArrayBuffer> Factory::NewJSArrayBuffer() { Handle<JSDataView> Factory::NewJSDataView() { Handle<JSFunction> data_view_fun( - isolate()->context()->native_context()->data_view_fun()); + isolate()->native_context()->data_view_fun()); CALL_HEAP_FUNCTION( isolate(), isolate()->heap()->AllocateJSObject(*data_view_fun), @@ -1775,7 +1775,7 @@ Handle<JSProxy> Factory::NewJSFunctionProxy(Handle<Object> handler, void Factory::ReinitializeJSReceiver(Handle<JSReceiver> object, InstanceType type, int size) { - ASSERT(type >= FIRST_JS_OBJECT_TYPE); + DCHECK(type >= FIRST_JS_OBJECT_TYPE); // Allocate fresh map. // TODO(rossberg): Once we optimize proxies, cache these maps. @@ -1783,7 +1783,7 @@ void Factory::ReinitializeJSReceiver(Handle<JSReceiver> object, // Check that the receiver has at least the size of the fresh object. 
int size_difference = object->map()->instance_size() - map->instance_size(); - ASSERT(size_difference >= 0); + DCHECK(size_difference >= 0); map->set_prototype(object->map()->prototype()); @@ -1797,15 +1797,22 @@ void Factory::ReinitializeJSReceiver(Handle<JSReceiver> object, OneByteStringKey key(STATIC_ASCII_VECTOR("<freezing call trap>"), heap->HashSeed()); Handle<String> name = InternalizeStringWithKey(&key); - shared = NewSharedFunctionInfo(name); + shared = NewSharedFunctionInfo(name, MaybeHandle<Code>()); } // In order to keep heap in consistent state there must be no allocations // before object re-initialization is finished and filler object is installed. DisallowHeapAllocation no_allocation; + // Put in filler if the new object is smaller than the old. + if (size_difference > 0) { + Address address = object->address(); + heap->CreateFillerObjectAt(address + map->instance_size(), size_difference); + heap->AdjustLiveBytes(address, -size_difference, Heap::FROM_MUTATOR); + } + // Reset the map for the object. - object->set_map(*map); + object->synchronized_set_map(*map); Handle<JSObject> jsobj = Handle<JSObject>::cast(object); // Reinitialize the object from the constructor map. @@ -1815,21 +1822,15 @@ void Factory::ReinitializeJSReceiver(Handle<JSReceiver> object, if (type == JS_FUNCTION_TYPE) { map->set_function_with_prototype(true); Handle<JSFunction> js_function = Handle<JSFunction>::cast(object); - Handle<Context> context(isolate()->context()->native_context()); + Handle<Context> context(isolate()->native_context()); InitializeFunction(js_function, shared.ToHandleChecked(), context); } - - // Put in filler if the new object is smaller than the old. - if (size_difference > 0) { - heap->CreateFillerObjectAt( - object->address() + map->instance_size(), size_difference); - } } void Factory::ReinitializeJSGlobalProxy(Handle<JSGlobalProxy> object, Handle<JSFunction> constructor) { - ASSERT(constructor->has_initial_map()); + DCHECK(constructor->has_initial_map()); Handle<Map> map(constructor->initial_map(), isolate()); // The proxy's hash should be retained across reinitialization. @@ -1837,8 +1838,8 @@ void Factory::ReinitializeJSGlobalProxy(Handle<JSGlobalProxy> object, // Check that the already allocated object has the same size and type as // objects allocated using the constructor. - ASSERT(map->instance_size() == object->map()->instance_size()); - ASSERT(map->instance_type() == object->map()->instance_type()); + DCHECK(map->instance_size() == object->map()->instance_size()); + DCHECK(map->instance_type() == object->map()->instance_type()); // Allocate the backing storage for the properties. int prop_size = map->InitialPropertiesLength(); @@ -1849,7 +1850,7 @@ void Factory::ReinitializeJSGlobalProxy(Handle<JSGlobalProxy> object, DisallowHeapAllocation no_allocation; // Reset the map for the object. - object->set_map(constructor->initial_map()); + object->synchronized_set_map(*map); Heap* heap = isolate()->heap(); // Reinitialize the object from the constructor map. 
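The ReinitializeJSReceiver hunk above moves the filler write before the map swap (and switches to synchronized_set_map): a heap scanner walks objects by the size recorded in their map, so when an object shrinks, the freed tail must already be a valid filler before the smaller map becomes visible. A toy model of that ordering invariant:

#include <cassert>
#include <cstdio>

struct Heap {
  static const int kCapacity = 8;
  int words[kCapacity];  // the object's body
  int size = kCapacity;  // stands in for map->instance_size()
};

const int kFiller = -1;  // stands in for a filler object

void Shrink(Heap* h, int new_size) {
  // Step 1: make the tail iterable (CreateFillerObjectAt in the hunk).
  for (int i = new_size; i < h->size; i++) h->words[i] = kFiller;
  // Step 2: only now publish the smaller size (synchronized_set_map).
  h->size = new_size;
}

int main() {
  Heap h;
  for (int i = 0; i < Heap::kCapacity; i++) h.words[i] = i;
  Shrink(&h, 5);
  for (int i = 5; i < Heap::kCapacity; i++) assert(h.words[i] == kFiller);
  assert(h.size == 5);
  std::puts("shrink-then-remap ok");
}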
@@ -1872,7 +1873,7 @@ void Factory::BecomeJSFunction(Handle<JSReceiver> object) { Handle<FixedArray> Factory::NewTypeFeedbackVector(int slot_count) { // Ensure we can skip the write barrier - ASSERT_EQ(isolate()->heap()->uninitialized_symbol(), + DCHECK_EQ(isolate()->heap()->uninitialized_symbol(), *TypeFeedbackInfo::UninitializedSentinel(isolate())); CALL_HEAP_FUNCTION( @@ -1886,16 +1887,13 @@ Handle<FixedArray> Factory::NewTypeFeedbackVector(int slot_count) { Handle<SharedFunctionInfo> Factory::NewSharedFunctionInfo( - Handle<String> name, - int number_of_literals, - bool is_generator, - Handle<Code> code, - Handle<ScopeInfo> scope_info, + Handle<String> name, int number_of_literals, bool is_generator, + bool is_arrow, Handle<Code> code, Handle<ScopeInfo> scope_info, Handle<FixedArray> feedback_vector) { - Handle<SharedFunctionInfo> shared = NewSharedFunctionInfo(name); - shared->set_code(*code); + Handle<SharedFunctionInfo> shared = NewSharedFunctionInfo(name, code); shared->set_scope_info(*scope_info); shared->set_feedback_vector(*feedback_vector); + shared->set_is_arrow(is_arrow); int literals_array_size = number_of_literals; // If the function contains object, regexp or array literals, // allocate extra space for a literals array prefix containing the @@ -1934,15 +1932,20 @@ Handle<JSMessageObject> Factory::NewJSMessageObject( } -Handle<SharedFunctionInfo> Factory::NewSharedFunctionInfo(Handle<String> name) { +Handle<SharedFunctionInfo> Factory::NewSharedFunctionInfo( + Handle<String> name, + MaybeHandle<Code> maybe_code) { Handle<Map> map = shared_function_info_map(); Handle<SharedFunctionInfo> share = New<SharedFunctionInfo>(map, OLD_POINTER_SPACE); // Set pointer fields. share->set_name(*name); - Code* illegal = isolate()->builtins()->builtin(Builtins::kIllegal); - share->set_code(illegal); + Handle<Code> code; + if (!maybe_code.ToHandle(&code)) { + code = handle(isolate()->builtins()->builtin(Builtins::kIllegal)); + } + share->set_code(*code); share->set_optimized_code_map(Smi::FromInt(0)); share->set_scope_info(ScopeInfo::Empty(isolate())); Code* construct_stub = @@ -1954,7 +1957,6 @@ Handle<SharedFunctionInfo> Factory::NewSharedFunctionInfo(Handle<String> name) { share->set_debug_info(*undefined_value(), SKIP_WRITE_BARRIER); share->set_inferred_name(*empty_string(), SKIP_WRITE_BARRIER); share->set_feedback_vector(*empty_fixed_array(), SKIP_WRITE_BARRIER); - share->set_initial_map(*undefined_value(), SKIP_WRITE_BARRIER); share->set_profiler_ticks(0); share->set_ast_node_count(0); share->set_counters(0); @@ -2012,7 +2014,7 @@ void Factory::SetNumberStringCache(Handle<Object> number, // cache in the snapshot to keep boot-time memory usage down. // If we expand the number string cache already while creating // the snapshot then that didn't work out. - ASSERT(!Serializer::enabled(isolate()) || FLAG_extra_code != NULL); + DCHECK(!isolate()->serializer_enabled() || FLAG_extra_code != NULL); Handle<FixedArray> new_cache = NewFixedArray(full_size, TENURED); isolate()->heap()->set_number_string_cache(*new_cache); return; @@ -2062,7 +2064,7 @@ Handle<DebugInfo> Factory::NewDebugInfo(Handle<SharedFunctionInfo> shared) { // debug info object to avoid allocation while setting up the debug info // object. Handle<FixedArray> break_points( - NewFixedArray(Debug::kEstimatedNofBreakPointsInFunction)); + NewFixedArray(DebugInfo::kEstimatedNofBreakPointsInFunction)); // Create and set up the debug info object. 
Debug info contains function, a // copy of the original code, the executing code and initial fixed array for @@ -2081,11 +2083,22 @@ Handle<DebugInfo> Factory::NewDebugInfo(Handle<SharedFunctionInfo> shared) { } -Handle<JSObject> Factory::NewArgumentsObject(Handle<Object> callee, +Handle<JSObject> Factory::NewArgumentsObject(Handle<JSFunction> callee, int length) { - CALL_HEAP_FUNCTION( - isolate(), - isolate()->heap()->AllocateArgumentsObject(*callee, length), JSObject); + bool strict_mode_callee = callee->shared()->strict_mode() == STRICT; + Handle<Map> map = strict_mode_callee ? isolate()->strict_arguments_map() + : isolate()->sloppy_arguments_map(); + + AllocationSiteUsageContext context(isolate(), Handle<AllocationSite>(), + false); + DCHECK(!isolate()->has_pending_exception()); + Handle<JSObject> result = NewJSObjectFromMap(map); + Handle<Smi> value(Smi::FromInt(length), isolate()); + Object::SetProperty(result, length_string(), value, STRICT).Assert(); + if (!strict_mode_callee) { + Object::SetProperty(result, callee_string(), callee, STRICT).Assert(); + } + return result; } @@ -2096,44 +2109,45 @@ Handle<JSFunction> Factory::CreateApiFunction( Handle<Code> code = isolate()->builtins()->HandleApiCall(); Handle<Code> construct_stub = isolate()->builtins()->JSConstructStubApi(); - int internal_field_count = 0; - if (!obj->instance_template()->IsUndefined()) { - Handle<ObjectTemplateInfo> instance_template = - Handle<ObjectTemplateInfo>( - ObjectTemplateInfo::cast(obj->instance_template())); - internal_field_count = - Smi::cast(instance_template->internal_field_count())->value(); - } - - // TODO(svenpanne) Kill ApiInstanceType and refactor things by generalizing - // JSObject::GetHeaderSize. - int instance_size = kPointerSize * internal_field_count; - InstanceType type; - switch (instance_type) { - case JavaScriptObject: - type = JS_OBJECT_TYPE; - instance_size += JSObject::kHeaderSize; - break; - case InnerGlobalObject: - type = JS_GLOBAL_OBJECT_TYPE; - instance_size += JSGlobalObject::kSize; - break; - case OuterGlobalObject: - type = JS_GLOBAL_PROXY_TYPE; - instance_size += JSGlobalProxy::kSize; - break; - default: - UNREACHABLE(); - type = JS_OBJECT_TYPE; // Keep the compiler happy. - break; - } + Handle<JSFunction> result; + if (obj->remove_prototype()) { + result = NewFunctionWithoutPrototype(empty_string(), code); + } else { + int internal_field_count = 0; + if (!obj->instance_template()->IsUndefined()) { + Handle<ObjectTemplateInfo> instance_template = + Handle<ObjectTemplateInfo>( + ObjectTemplateInfo::cast(obj->instance_template())); + internal_field_count = + Smi::cast(instance_template->internal_field_count())->value(); + } - MaybeHandle<Object> maybe_prototype = prototype; - if (obj->remove_prototype()) maybe_prototype = MaybeHandle<Object>(); + // TODO(svenpanne) Kill ApiInstanceType and refactor things by generalizing + // JSObject::GetHeaderSize. + int instance_size = kPointerSize * internal_field_count; + InstanceType type; + switch (instance_type) { + case JavaScriptObjectType: + type = JS_OBJECT_TYPE; + instance_size += JSObject::kHeaderSize; + break; + case GlobalObjectType: + type = JS_GLOBAL_OBJECT_TYPE; + instance_size += JSGlobalObject::kSize; + break; + case GlobalProxyType: + type = JS_GLOBAL_PROXY_TYPE; + instance_size += JSGlobalProxy::kSize; + break; + default: + UNREACHABLE(); + type = JS_OBJECT_TYPE; // Keep the compiler happy. 
+ break; + } - Handle<JSFunction> result = NewFunction( - maybe_prototype, Factory::empty_string(), type, - instance_size, code, !obj->remove_prototype()); + result = NewFunction(empty_string(), code, prototype, type, + instance_size, obj->read_only_prototype()); + } result->shared()->set_length(obj->length()); Handle<Object> class_name(obj->class_name(), isolate()); @@ -2146,11 +2160,26 @@ Handle<JSFunction> Factory::CreateApiFunction( result->shared()->DontAdaptArguments(); if (obj->remove_prototype()) { - ASSERT(result->shared()->IsApiFunction()); - ASSERT(!result->has_initial_map()); - ASSERT(!result->has_prototype()); + DCHECK(result->shared()->IsApiFunction()); + DCHECK(!result->has_initial_map()); + DCHECK(!result->has_prototype()); return result; } + + if (prototype->IsTheHole()) { +#ifdef DEBUG + LookupIterator it(handle(JSObject::cast(result->prototype())), + constructor_string(), + LookupIterator::CHECK_OWN_REAL); + MaybeHandle<Object> maybe_prop = Object::GetProperty(&it); + DCHECK(it.IsFound()); + DCHECK(maybe_prop.ToHandleChecked().is_identical_to(result)); +#endif + } else { + JSObject::AddProperty(handle(JSObject::cast(result->prototype())), + constructor_string(), result, DONT_ENUM); + } + // Down from here is only valid for API functions that can be used as a // constructor (don't set the "remove prototype" flag). @@ -2253,7 +2282,7 @@ Handle<JSFunction> Factory::CreateApiFunction( JSObject::SetAccessor(result, accessor).Assert(); } - ASSERT(result->shared()->IsApiFunction()); + DCHECK(result->shared()->IsApiFunction()); return result; } diff --git a/deps/v8/src/factory.h b/deps/v8/src/factory.h index 91a036cca79..aa1f94d8145 100644 --- a/deps/v8/src/factory.h +++ b/deps/v8/src/factory.h @@ -5,7 +5,7 @@ #ifndef V8_FACTORY_H_ #define V8_FACTORY_H_ -#include "isolate.h" +#include "src/isolate.h" namespace v8 { namespace internal { @@ -45,10 +45,11 @@ class Factory V8_FINAL { PretenureFlag pretenure = NOT_TENURED); Handle<ConstantPoolArray> NewConstantPoolArray( - int number_of_int64_entries, - int number_of_code_ptr_entries, - int number_of_heap_ptr_entries, - int number_of_int32_entries); + const ConstantPoolArray::NumberOfEntries& small); + + Handle<ConstantPoolArray> NewExtendedConstantPoolArray( + const ConstantPoolArray::NumberOfEntries& small, + const ConstantPoolArray::NumberOfEntries& extended); Handle<OrderedHashSet> NewOrderedHashSet(); Handle<OrderedHashMap> NewOrderedHashMap(); @@ -109,7 +110,7 @@ class Factory V8_FINAL { inline Handle<String> NewStringFromStaticAscii( const char (&str)[N], PretenureFlag pretenure = NOT_TENURED) { - ASSERT(N == StrLength(str) + 1); + DCHECK(N == StrLength(str) + 1); return NewStringFromOneByte( STATIC_ASCII_VECTOR(str), pretenure).ToHandleChecked(); } @@ -121,6 +122,23 @@ class Factory V8_FINAL { OneByteVector(str), pretenure).ToHandleChecked(); } + + // Allocates and fully initializes a String. There are two String + // encodings: ASCII and two byte. One should choose between the three string + // allocation functions based on the encoding of the string buffer used to + // initialize the string. + // - ...FromAscii initializes the string from a buffer that is ASCII + // encoded (it does not check that the buffer is ASCII encoded) and the + // result will be ASCII encoded. + // - ...FromUTF8 initializes the string from a buffer that is UTF-8 + // encoded. If the characters are all single-byte characters, the + // result will be ASCII encoded, otherwise it will be converted to two + // byte.
+ // - ...FromTwoByte initializes the string from a buffer that is two-byte + // encoded. If the characters are all single-byte characters, the + // result will be converted to ASCII, otherwise it will be left as + // two-byte. + // TODO(dcarney): remove this function. MUST_USE_RESULT inline MaybeHandle<String> NewStringFromAscii( Vector<const char> str, @@ -179,10 +197,6 @@ class Factory V8_FINAL { MUST_USE_RESULT MaybeHandle<String> NewConsString(Handle<String> left, Handle<String> right); - // Create a new sequential string containing the concatenation of the inputs. - Handle<String> NewFlatConcatString(Handle<String> first, - Handle<String> second); - // Create a new string object which holds a proper substring of a string. Handle<String> NewProperSubString(Handle<String> str, int begin, @@ -207,6 +221,7 @@ class Factory V8_FINAL { // Create a symbol. Handle<Symbol> NewSymbol(); Handle<Symbol> NewPrivateSymbol(); + Handle<Symbol> NewPrivateOwnSymbol(); // Create a global (but otherwise uninitialized) context. Handle<Context> NewNativeContext(); @@ -334,16 +349,16 @@ class Factory V8_FINAL { return NewNumber(static_cast<double>(value), pretenure); } Handle<HeapNumber> NewHeapNumber(double value, + MutableMode mode = IMMUTABLE, PretenureFlag pretenure = NOT_TENURED); - // These objects are used by the api to create env-independent data // structures in the heap. inline Handle<JSObject> NewNeanderObject() { return NewJSObjectFromMap(neander_map()); } - Handle<JSObject> NewArgumentsObject(Handle<Object> callee, int length); + Handle<JSObject> NewArgumentsObject(Handle<JSFunction> callee, int length); // JS objects are pretenured when allocated by the bootstrapper and // runtime. @@ -378,10 +393,8 @@ class Factory V8_FINAL { // Create a JSArray with a specified length and elements initialized // according to the specified mode. 
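  // Mode semantics (per the NewJSArrayStorage hunk in factory.cc earlier in
  // this diff): DONT_INITIALIZE_ARRAY_ELEMENTS returns an uninitialized
  // backing store, so the caller must write every element before the array
  // can be observed, while INITIALIZE_ARRAY_ELEMENTS_WITH_HOLE pre-fills
  // each slot with the hole sentinel. Note that the default below changes
  // to the uninitialized mode in this patch.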
Handle<JSArray> NewJSArray( - ElementsKind elements_kind, - int length, - int capacity, - ArrayStorageAllocationMode mode = INITIALIZE_ARRAY_ELEMENTS_WITH_HOLE, + ElementsKind elements_kind, int length, int capacity, + ArrayStorageAllocationMode mode = DONT_INITIALIZE_ARRAY_ELEMENTS, PretenureFlag pretenure = NOT_TENURED); Handle<JSArray> NewJSArray( @@ -453,35 +466,27 @@ class Factory V8_FINAL { Handle<JSFunction> NewFunction(Handle<String> name, Handle<Code> code, - MaybeHandle<Object> maybe_prototype = - MaybeHandle<Object>()); - - Handle<JSFunction> NewFunctionWithPrototype(Handle<String> name, - Handle<Object> prototype); + Handle<Object> prototype, + bool read_only_prototype = false); + Handle<JSFunction> NewFunction(Handle<String> name); + Handle<JSFunction> NewFunctionWithoutPrototype(Handle<String> name, + Handle<Code> code); Handle<JSFunction> NewFunctionFromSharedFunctionInfo( Handle<SharedFunctionInfo> function_info, Handle<Context> context, PretenureFlag pretenure = TENURED); - Handle<JSFunction> NewFunction(MaybeHandle<Object> maybe_prototype, - Handle<String> name, - InstanceType type, - int instance_size, - Handle<Code> code, - bool force_initial_map); Handle<JSFunction> NewFunction(Handle<String> name, + Handle<Code> code, + Handle<Object> prototype, InstanceType type, int instance_size, + bool read_only_prototype = false); + Handle<JSFunction> NewFunction(Handle<String> name, Handle<Code> code, - bool force_initial_map); - - Handle<JSFunction> NewFunctionWithPrototype(Handle<String> name, - InstanceType type, - int instance_size, - Handle<JSObject> prototype, - Handle<Code> code, - bool force_initial_map); + InstanceType type, + int instance_size); // Create a serialized scope info. Handle<ScopeInfo> NewScopeInfo(int length); @@ -497,7 +502,8 @@ class Factory V8_FINAL { Handle<Object> self_reference, bool immovable = false, bool crankshafted = false, - int prologue_offset = Code::kPrologueOffsetNotSet); + int prologue_offset = Code::kPrologueOffsetNotSet, + bool is_debug = false); Handle<Code> CopyCode(Handle<Code> code); @@ -540,8 +546,6 @@ class Factory V8_FINAL { Handle<Object> NewEvalError(const char* message, Vector< Handle<Object> > args); - Handle<JSObject> NewIteratorResultObject(Handle<Object> value, bool done); - Handle<String> NumberToString(Handle<Object> number, bool check_number_string_cache = true); @@ -550,15 +554,15 @@ class Factory V8_FINAL { } enum ApiInstanceType { - JavaScriptObject, - InnerGlobalObject, - OuterGlobalObject + JavaScriptObjectType, + GlobalObjectType, + GlobalProxyType }; Handle<JSFunction> CreateApiFunction( Handle<FunctionTemplateInfo> data, Handle<Object> prototype, - ApiInstanceType type = JavaScriptObject); + ApiInstanceType type = JavaScriptObjectType); Handle<JSFunction> InstallMembers(Handle<JSFunction> function); @@ -602,13 +606,11 @@ class Factory V8_FINAL { // Allocates a new SharedFunctionInfo object. 
Handle<SharedFunctionInfo> NewSharedFunctionInfo( - Handle<String> name, - int number_of_literals, - bool is_generator, - Handle<Code> code, - Handle<ScopeInfo> scope_info, + Handle<String> name, int number_of_literals, bool is_generator, + bool is_arrow, Handle<Code> code, Handle<ScopeInfo> scope_info, Handle<FixedArray> feedback_vector); - Handle<SharedFunctionInfo> NewSharedFunctionInfo(Handle<String> name); + Handle<SharedFunctionInfo> NewSharedFunctionInfo(Handle<String> name, + MaybeHandle<Code> code); // Allocate a new type feedback vector Handle<FixedArray> NewTypeFeedbackVector(int slot_count); @@ -670,20 +672,6 @@ class Factory V8_FINAL { // Creates a code object that is not yet fully initialized yet. inline Handle<Code> NewCodeRaw(int object_size, bool immovable); - // Initializes a function with a shared part and prototype. - // Note: this code was factored out of NewFunction such that other parts of - // the VM could use it. Specifically, a function that creates instances of - // type JS_FUNCTION_TYPE benefit from the use of this function. - inline void InitializeFunction(Handle<JSFunction> function, - Handle<SharedFunctionInfo> info, - Handle<Context> context); - - // Creates a function initialized with a shared part. - inline Handle<JSFunction> NewFunction(Handle<Map> map, - Handle<SharedFunctionInfo> info, - Handle<Context> context, - PretenureFlag pretenure = TENURED); - // Create a new map cache. Handle<MapCache> NewMapCache(int at_least_space_for); @@ -698,6 +686,24 @@ class Factory V8_FINAL { // Update the cache with a new number-string pair. void SetNumberStringCache(Handle<Object> number, Handle<String> string); + + // Initializes a function with a shared part and prototype. + // Note: this code was factored out of NewFunction such that other parts of + // the VM could use it. Specifically, a function that creates instances of + // type JS_FUNCTION_TYPE benefit from the use of this function. + inline void InitializeFunction(Handle<JSFunction> function, + Handle<SharedFunctionInfo> info, + Handle<Context> context); + + // Creates a function initialized with a shared part. + Handle<JSFunction> NewFunction(Handle<Map> map, + Handle<SharedFunctionInfo> info, + Handle<Context> context, + PretenureFlag pretenure = TENURED); + + Handle<JSFunction> NewFunction(Handle<Map> map, + Handle<String> name, + MaybeHandle<Code> maybe_code); }; } } // namespace v8::internal diff --git a/deps/v8/src/fast-dtoa.cc b/deps/v8/src/fast-dtoa.cc index 87ad2870dd4..13b04634de7 100644 --- a/deps/v8/src/fast-dtoa.cc +++ b/deps/v8/src/fast-dtoa.cc @@ -2,15 +2,15 @@ // Use of this source code is governed by a BSD-style license that can be // found in the LICENSE file. -#include "../include/v8stdint.h" -#include "checks.h" -#include "utils.h" +#include "include/v8stdint.h" +#include "src/base/logging.h" +#include "src/utils.h" -#include "fast-dtoa.h" +#include "src/fast-dtoa.h" -#include "cached-powers.h" -#include "diy-fp.h" -#include "double.h" +#include "src/cached-powers.h" +#include "src/diy-fp.h" +#include "src/double.h" namespace v8 { namespace internal { @@ -120,7 +120,7 @@ static bool RoundWeed(Vector<char> buffer, // Conceptually rest ~= too_high - buffer // We need to do the following tests in this order to avoid over- and // underflows. 
- ASSERT(rest <= unsafe_interval); + DCHECK(rest <= unsafe_interval); while (rest < small_distance && // Negated condition 1 unsafe_interval - rest >= ten_kappa && // Negated condition 2 (rest + ten_kappa < small_distance || // buffer{-1} > w_high @@ -166,7 +166,7 @@ static bool RoundWeedCounted(Vector<char> buffer, uint64_t ten_kappa, uint64_t unit, int* kappa) { - ASSERT(rest < ten_kappa); + DCHECK(rest < ten_kappa); // The following tests are done in a specific order to avoid overflows. They // will work correctly with any uint64 values of rest < ten_kappa and unit. // @@ -365,9 +365,9 @@ static bool DigitGen(DiyFp low, Vector<char> buffer, int* length, int* kappa) { - ASSERT(low.e() == w.e() && w.e() == high.e()); - ASSERT(low.f() + 1 <= high.f() - 1); - ASSERT(kMinimalTargetExponent <= w.e() && w.e() <= kMaximalTargetExponent); + DCHECK(low.e() == w.e() && w.e() == high.e()); + DCHECK(low.f() + 1 <= high.f() - 1); + DCHECK(kMinimalTargetExponent <= w.e() && w.e() <= kMaximalTargetExponent); // low, w and high are imprecise, but by less than one ulp (unit in the last // place). // If we remove (resp. add) 1 ulp from low (resp. high) we are certain that @@ -435,9 +435,9 @@ static bool DigitGen(DiyFp low, // data (like the interval or 'unit'), too. // Note that the multiplication by 10 does not overflow, because w.e >= -60 // and thus one.e >= -60. - ASSERT(one.e() >= -60); - ASSERT(fractionals < one.f()); - ASSERT(V8_2PART_UINT64_C(0xFFFFFFFF, FFFFFFFF) / 10 >= one.f()); + DCHECK(one.e() >= -60); + DCHECK(fractionals < one.f()); + DCHECK(V8_2PART_UINT64_C(0xFFFFFFFF, FFFFFFFF) / 10 >= one.f()); while (true) { fractionals *= 10; unit *= 10; @@ -490,9 +490,9 @@ static bool DigitGenCounted(DiyFp w, Vector<char> buffer, int* length, int* kappa) { - ASSERT(kMinimalTargetExponent <= w.e() && w.e() <= kMaximalTargetExponent); - ASSERT(kMinimalTargetExponent >= -60); - ASSERT(kMaximalTargetExponent <= -32); + DCHECK(kMinimalTargetExponent <= w.e() && w.e() <= kMaximalTargetExponent); + DCHECK(kMinimalTargetExponent >= -60); + DCHECK(kMaximalTargetExponent <= -32); // w is assumed to have an error less than 1 unit. Whenever w is scaled we // also scale its error. uint64_t w_error = 1; @@ -543,9 +543,9 @@ static bool DigitGenCounted(DiyFp w, // data (the 'unit'), too. // Note that the multiplication by 10 does not overflow, because w.e >= -60 // and thus one.e >= -60. - ASSERT(one.e() >= -60); - ASSERT(fractionals < one.f()); - ASSERT(V8_2PART_UINT64_C(0xFFFFFFFF, FFFFFFFF) / 10 >= one.f()); + DCHECK(one.e() >= -60); + DCHECK(fractionals < one.f()); + DCHECK(V8_2PART_UINT64_C(0xFFFFFFFF, FFFFFFFF) / 10 >= one.f()); while (requested_digits > 0 && fractionals > w_error) { fractionals *= 10; w_error *= 10; @@ -585,7 +585,7 @@ static bool Grisu3(double v, // Grisu3 will never output representations that lie exactly on a boundary. 
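  // (boundary_minus/boundary_plus are the midpoints between v and its
  // neighboring representable doubles: every decimal value strictly inside
  // that interval rounds back to exactly v, which is why digit generation
  // may stop as soon as the emitted prefix pins the interval down.)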
DiyFp boundary_minus, boundary_plus; Double(v).NormalizedBoundaries(&boundary_minus, &boundary_plus); - ASSERT(boundary_plus.e() == w.e()); + DCHECK(boundary_plus.e() == w.e()); DiyFp ten_mk; // Cached power of ten: 10^-k int mk; // -k int ten_mk_minimal_binary_exponent = @@ -596,7 +596,7 @@ static bool Grisu3(double v, ten_mk_minimal_binary_exponent, ten_mk_maximal_binary_exponent, &ten_mk, &mk); - ASSERT((kMinimalTargetExponent <= w.e() + ten_mk.e() + + DCHECK((kMinimalTargetExponent <= w.e() + ten_mk.e() + DiyFp::kSignificandSize) && (kMaximalTargetExponent >= w.e() + ten_mk.e() + DiyFp::kSignificandSize)); @@ -610,7 +610,7 @@ static bool Grisu3(double v, // In other words: let f = scaled_w.f() and e = scaled_w.e(), then // (f-1) * 2^e < w*10^k < (f+1) * 2^e DiyFp scaled_w = DiyFp::Times(w, ten_mk); - ASSERT(scaled_w.e() == + DCHECK(scaled_w.e() == boundary_plus.e() + ten_mk.e() + DiyFp::kSignificandSize); // In theory it would be possible to avoid some recomputations by computing // the difference between w and boundary_minus/plus (a power of 2) and to @@ -655,7 +655,7 @@ static bool Grisu3Counted(double v, ten_mk_minimal_binary_exponent, ten_mk_maximal_binary_exponent, &ten_mk, &mk); - ASSERT((kMinimalTargetExponent <= w.e() + ten_mk.e() + + DCHECK((kMinimalTargetExponent <= w.e() + ten_mk.e() + DiyFp::kSignificandSize) && (kMaximalTargetExponent >= w.e() + ten_mk.e() + DiyFp::kSignificandSize)); @@ -689,8 +689,8 @@ bool FastDtoa(double v, Vector<char> buffer, int* length, int* decimal_point) { - ASSERT(v > 0); - ASSERT(!Double(v).IsSpecial()); + DCHECK(v > 0); + DCHECK(!Double(v).IsSpecial()); bool result = false; int decimal_exponent = 0; diff --git a/deps/v8/src/feedback-slots.h b/deps/v8/src/feedback-slots.h index bc33a460761..9951fc8fdcc 100644 --- a/deps/v8/src/feedback-slots.h +++ b/deps/v8/src/feedback-slots.h @@ -5,9 +5,9 @@ #ifndef V8_FEEDBACK_SLOTS_H_ #define V8_FEEDBACK_SLOTS_H_ -#include "v8.h" +#include "src/v8.h" -#include "isolate.h" +#include "src/isolate.h" namespace v8 { namespace internal { diff --git a/deps/v8/src/field-index-inl.h b/deps/v8/src/field-index-inl.h new file mode 100644 index 00000000000..5508adb1619 --- /dev/null +++ b/deps/v8/src/field-index-inl.h @@ -0,0 +1,124 @@ +// Copyright 2014 the V8 project authors. All rights reserved. +// Use of this source code is governed by a BSD-style license that can be +// found in the LICENSE file. 
+ +#ifndef V8_FIELD_INDEX_INL_H_ +#define V8_FIELD_INDEX_INL_H_ + +#include "src/field-index.h" + +namespace v8 { +namespace internal { + + +inline FieldIndex FieldIndex::ForInObjectOffset(int offset, Map* map) { + DCHECK((offset % kPointerSize) == 0); + int index = offset / kPointerSize; + if (map == NULL) { + return FieldIndex(true, index, false, index + 1, 0, true); + } + int first_inobject_offset = map->GetInObjectPropertyOffset(0); + if (offset < first_inobject_offset) { + return FieldIndex(true, index, false, 0, 0, true); + } else { + return FieldIndex::ForPropertyIndex(map, offset / kPointerSize); + } +} + + +inline FieldIndex FieldIndex::ForPropertyIndex(Map* map, + int property_index, + bool is_double) { + DCHECK(map->instance_type() >= FIRST_NONSTRING_TYPE); + int inobject_properties = map->inobject_properties(); + bool is_inobject = property_index < inobject_properties; + int first_inobject_offset; + if (is_inobject) { + first_inobject_offset = map->GetInObjectPropertyOffset(0); + } else { + first_inobject_offset = FixedArray::kHeaderSize; + property_index -= inobject_properties; + } + return FieldIndex(is_inobject, + property_index + first_inobject_offset / kPointerSize, + is_double, inobject_properties, first_inobject_offset); +} + + +// Takes an index as computed by GetLoadFieldByIndex and reconstructs a +// FieldIndex object from it. +inline FieldIndex FieldIndex::ForLoadByFieldIndex(Map* map, int orig_index) { + int field_index = orig_index; + int is_inobject = true; + bool is_double = field_index & 1; + int first_inobject_offset = 0; + field_index >>= 1; + if (field_index < 0) { + field_index = -(field_index + 1); + is_inobject = false; + first_inobject_offset = FixedArray::kHeaderSize; + field_index += FixedArray::kHeaderSize / kPointerSize; + } else { + first_inobject_offset = map->GetInObjectPropertyOffset(0); + field_index += JSObject::kHeaderSize / kPointerSize; + } + FieldIndex result(is_inobject, field_index, is_double, + map->inobject_properties(), first_inobject_offset); + DCHECK(result.GetLoadByFieldIndex() == orig_index); + return result; +} + + +// Returns the index format accepted by the HLoadFieldByIndex instruction. +// (In-object: zero-based from (object start + JSObject::kHeaderSize), +// out-of-object: zero-based from FixedArray::kHeaderSize.) +inline int FieldIndex::GetLoadByFieldIndex() const { + // For efficiency, the LoadByFieldIndex instruction takes an index that is + // optimized for quick access. If the property is inline, the index is + // positive. If it's out-of-line, the encoded index is -raw_index - 1 to + // disambiguate the zero out-of-line index from the zero inobject case. + // The index itself is shifted up by one bit, the lower-most bit + // signifying if the field is a mutable double box (1) or not (0). + int result = index(); + if (is_inobject()) { + result -= JSObject::kHeaderSize / kPointerSize; + } else { + result -= FixedArray::kHeaderSize / kPointerSize; + result = -result - 1; + } + result <<= 1; + return is_double() ? 
(result | 1) : result; +} + + +inline FieldIndex FieldIndex::ForDescriptor(Map* map, int descriptor_index) { + PropertyDetails details = + map->instance_descriptors()->GetDetails(descriptor_index); + int field_index = + map->instance_descriptors()->GetFieldIndex(descriptor_index); + return ForPropertyIndex(map, field_index, + details.representation().IsDouble()); +} + + +inline FieldIndex FieldIndex::ForKeyedLookupCacheIndex(Map* map, int index) { + if (FLAG_compiled_keyed_generic_loads) { + return ForLoadByFieldIndex(map, index); + } else { + return ForPropertyIndex(map, index); + } +} + + +inline int FieldIndex::GetKeyedLookupCacheIndex() const { + if (FLAG_compiled_keyed_generic_loads) { + return GetLoadByFieldIndex(); + } else { + return property_index(); + } +} + + +} } // namespace v8::internal + +#endif diff --git a/deps/v8/src/field-index.cc b/deps/v8/src/field-index.cc new file mode 100644 index 00000000000..5392afc9f2c --- /dev/null +++ b/deps/v8/src/field-index.cc @@ -0,0 +1,23 @@ +// Copyright 2014 the V8 project authors. All rights reserved. +// Use of this source code is governed by a BSD-style license that can be +// found in the LICENSE file. + +#include "src/v8.h" + +#include "src/field-index.h" +#include "src/objects.h" +#include "src/objects-inl.h" + +namespace v8 { +namespace internal { + + +FieldIndex FieldIndex::ForLookupResult(const LookupResult* lookup_result) { + Map* map = lookup_result->holder()->map(); + return ForPropertyIndex(map, + lookup_result->GetFieldIndexFromMap(map), + lookup_result->representation().IsDouble()); +} + + +} } // namespace v8::internal diff --git a/deps/v8/src/field-index.h b/deps/v8/src/field-index.h new file mode 100644 index 00000000000..8650c8fb8fd --- /dev/null +++ b/deps/v8/src/field-index.h @@ -0,0 +1,112 @@ +// Copyright 2014 the V8 project authors. All rights reserved. +// Use of this source code is governed by a BSD-style license that can be +// found in the LICENSE file. + +#ifndef V8_FIELD_INDEX_H_ +#define V8_FIELD_INDEX_H_ + +#include "src/property-details.h" +#include "src/utils.h" + +namespace v8 { +namespace internal { + +class Map; + +// Wrapper class to hold a field index, usually but not necessarily generated +// from a property index. When available, the wrapper class captures additional +// information to allow the field index to be translated back into the property +// index it was originally generated from. +class FieldIndex V8_FINAL { + public: + static FieldIndex ForPropertyIndex(Map* map, + int index, + bool is_double = false); + static FieldIndex ForInObjectOffset(int offset, Map* map = NULL); + static FieldIndex ForLookupResult(const LookupResult* result); + static FieldIndex ForDescriptor(Map* map, int descriptor_index); + static FieldIndex ForLoadByFieldIndex(Map* map, int index); + static FieldIndex ForKeyedLookupCacheIndex(Map* map, int index); + + int GetLoadByFieldIndex() const; + + bool is_inobject() const { + return IsInObjectBits::decode(bit_field_); + } + + bool is_double() const { + return IsDoubleBits::decode(bit_field_); + } + + int offset() const { + return index() * kPointerSize; + } + + // Zero-indexed from beginning of the object. + int index() const { + return IndexBits::decode(bit_field_); + } + + int outobject_array_index() const { + DCHECK(!is_inobject()); + return index() - first_inobject_property_offset() / kPointerSize; + } + + // Zero-based from the first inobject property. Overflows to out-of-object + // properties. 
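  // Illustration with assumed constants (kPointerSize == 8,
  // FixedArray::kHeaderSize == 16, and a map whose 4 in-object properties
  // start at offset 24): in-object property 2 lives at
  // index() == 24/8 + 2 == 5, so property_index() == 5 - 3 == 2;
  // out-of-object property 5 sits in backing-store slot 1, giving
  // index() == 16/8 + 1 == 3, and adding the 4 in-object properties back
  // yields property_index() == 5 again.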
+ int property_index() const { + DCHECK(!IsHiddenField::decode(bit_field_)); + int result = index() - first_inobject_property_offset() / kPointerSize; + if (!is_inobject()) { + result += InObjectPropertyBits::decode(bit_field_); + } + return result; + } + + int GetKeyedLookupCacheIndex() const; + + int GetFieldAccessStubKey() const { + return bit_field_ & + (IsInObjectBits::kMask | IsDoubleBits::kMask | IndexBits::kMask); + } + + private: + FieldIndex(bool is_inobject, int local_index, bool is_double, + int inobject_properties, int first_inobject_property_offset, + bool is_hidden = false) { + DCHECK((first_inobject_property_offset & (kPointerSize - 1)) == 0); + bit_field_ = IsInObjectBits::encode(is_inobject) | + IsDoubleBits::encode(is_double) | + FirstInobjectPropertyOffsetBits::encode(first_inobject_property_offset) | + IsHiddenField::encode(is_hidden) | + IndexBits::encode(local_index) | + InObjectPropertyBits::encode(inobject_properties); + } + + int first_inobject_property_offset() const { + DCHECK(!IsHiddenField::decode(bit_field_)); + return FirstInobjectPropertyOffsetBits::decode(bit_field_); + } + + static const int kIndexBitsSize = kDescriptorIndexBitCount + 1; + + // Index from beginning of object. + class IndexBits: public BitField<int, 0, kIndexBitsSize> {}; + class IsInObjectBits: public BitField<bool, IndexBits::kNext, 1> {}; + class IsDoubleBits: public BitField<bool, IsInObjectBits::kNext, 1> {}; + // Number of inobject properties. + class InObjectPropertyBits + : public BitField<int, IsDoubleBits::kNext, kDescriptorIndexBitCount> {}; + // Offset of first inobject property from beginning of object. + class FirstInobjectPropertyOffsetBits + : public BitField<int, InObjectPropertyBits::kNext, 7> {}; + class IsHiddenField + : public BitField<bool, FirstInobjectPropertyOffsetBits::kNext, 1> {}; + STATIC_ASSERT(IsHiddenField::kNext <= 32); + + int bit_field_; +}; + +} } // namespace v8::internal + +#endif diff --git a/deps/v8/src/fixed-dtoa.cc b/deps/v8/src/fixed-dtoa.cc index 014f8ab869c..56fe9abafa1 100644 --- a/deps/v8/src/fixed-dtoa.cc +++ b/deps/v8/src/fixed-dtoa.cc @@ -4,12 +4,12 @@ #include <cmath> -#include "../include/v8stdint.h" -#include "checks.h" -#include "utils.h" +#include "include/v8stdint.h" +#include "src/base/logging.h" +#include "src/utils.h" -#include "double.h" -#include "fixed-dtoa.h" +#include "src/double.h" +#include "src/fixed-dtoa.h" namespace v8 { namespace internal { @@ -35,11 +35,11 @@ class UInt128 { accumulator >>= 32; accumulator = accumulator + (high_bits_ >> 32) * multiplicand; high_bits_ = (accumulator << 32) + part; - ASSERT((accumulator >> 32) == 0); + DCHECK((accumulator >> 32) == 0); } void Shift(int shift_amount) { - ASSERT(-64 <= shift_amount && shift_amount <= 64); + DCHECK(-64 <= shift_amount && shift_amount <= 64); if (shift_amount == 0) { return; } else if (shift_amount == -64) { @@ -212,13 +212,13 @@ static void RoundUp(Vector<char> buffer, int* length, int* decimal_point) { static void FillFractionals(uint64_t fractionals, int exponent, int fractional_count, Vector<char> buffer, int* length, int* decimal_point) { - ASSERT(-128 <= exponent && exponent <= 0); + DCHECK(-128 <= exponent && exponent <= 0); // 'fractionals' is a fixed-point number, with binary point at bit // (-exponent). Inside the function the non-converted remainder of fractionals // is a fixed-point number, with binary point at bit 'point'. if (-exponent <= 64) { // One 64 bit number is sufficient. 
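    // Worked example: with exponent == -4, fractionals == 8 encodes
    // 8 / 2^4 == 0.5. Each loop iteration scales the remainder by ten and
    // reads off the bits above the binary point: here that yields digit 5
    // with remainder 0, i.e. the digit string "5". The check below relies
    // on the top bits being clear so the per-digit scaling cannot overflow
    // the 64-bit value.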
- ASSERT(fractionals >> 56 == 0); + DCHECK(fractionals >> 56 == 0); int point = -exponent; for (int i = 0; i < fractional_count; ++i) { if (fractionals == 0) break; @@ -244,7 +244,7 @@ static void FillFractionals(uint64_t fractionals, int exponent, RoundUp(buffer, length, decimal_point); } } else { // We need 128 bits. - ASSERT(64 < -exponent && -exponent <= 128); + DCHECK(64 < -exponent && -exponent <= 128); UInt128 fractionals128 = UInt128(fractionals, 0); fractionals128.Shift(-exponent - 64); int point = 128; @@ -362,7 +362,7 @@ bool FastFixedDtoa(double v, } else if (exponent < -128) { // This configuration (with at most 20 digits) means that all digits must be // 0. - ASSERT(fractional_count <= 20); + DCHECK(fractional_count <= 20); buffer[0] = '\0'; *length = 0; *decimal_point = -fractional_count; diff --git a/deps/v8/src/flag-definitions.h b/deps/v8/src/flag-definitions.h index 30cbcd7bc60..55af2e67400 100644 --- a/deps/v8/src/flag-definitions.h +++ b/deps/v8/src/flag-definitions.h @@ -14,16 +14,14 @@ // this will just be an extern declaration, but for a readonly flag we let the // compiler make better optimizations by giving it the value. #if defined(FLAG_MODE_DECLARE) -#define FLAG_FULL(ftype, ctype, nam, def, cmt) \ - extern ctype FLAG_##nam; +#define FLAG_FULL(ftype, ctype, nam, def, cmt) extern ctype FLAG_##nam; #define FLAG_READONLY(ftype, ctype, nam, def, cmt) \ static ctype const FLAG_##nam = def; // We want to supply the actual storage and value for the flag variable in the // .cc file. We only do this for writable flags. #elif defined(FLAG_MODE_DEFINE) -#define FLAG_FULL(ftype, ctype, nam, def, cmt) \ - ctype FLAG_##nam = def; +#define FLAG_FULL(ftype, ctype, nam, def, cmt) ctype FLAG_##nam = def; // We need to define all of our default values so that the Flag structure can // access them by pointer. These are just used internally inside of one .cc, @@ -35,18 +33,22 @@ // We want to write entries into our meta data table, for internal parsing and // printing / etc in the flag parser code. We only do this for writable flags. #elif defined(FLAG_MODE_META) -#define FLAG_FULL(ftype, ctype, nam, def, cmt) \ - { Flag::TYPE_##ftype, #nam, &FLAG_##nam, &FLAGDEFAULT_##nam, cmt, false }, -#define FLAG_ALIAS(ftype, ctype, alias, nam) \ - { Flag::TYPE_##ftype, #alias, &FLAG_##nam, &FLAGDEFAULT_##nam, \ - "alias for --"#nam, false }, +#define FLAG_FULL(ftype, ctype, nam, def, cmt) \ + { Flag::TYPE_##ftype, #nam, &FLAG_##nam, &FLAGDEFAULT_##nam, cmt, false } \ + , +#define FLAG_ALIAS(ftype, ctype, alias, nam) \ + { \ + Flag::TYPE_##ftype, #alias, &FLAG_##nam, &FLAGDEFAULT_##nam, \ + "alias for --" #nam, false \ + } \ + , // We produce the code to set flags when it is implied by another flag. 
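// For example, under FLAG_MODE_DEFINE_IMPLICATIONS the later line
//   DEFINE_IMPLICATION(harmony, harmony_scoping)
// expands to
//   if (FLAG_harmony) FLAG_harmony_scoping = true;
// The header is re-included once per FLAG_MODE_*, so a single flag list
// yields the declarations, definitions, metadata table, and this
// implication code.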
#elif defined(FLAG_MODE_DEFINE_IMPLICATIONS) -#define DEFINE_implication(whenflag, thenflag) \ +#define DEFINE_IMPLICATION(whenflag, thenflag) \ if (FLAG_##whenflag) FLAG_##thenflag = true; -#define DEFINE_neg_implication(whenflag, thenflag) \ +#define DEFINE_NEG_IMPLICATION(whenflag, thenflag) \ if (FLAG_##whenflag) FLAG_##thenflag = false; #else @@ -66,12 +68,12 @@ #define FLAG_ALIAS(ftype, ctype, alias, nam) #endif -#ifndef DEFINE_implication -#define DEFINE_implication(whenflag, thenflag) +#ifndef DEFINE_IMPLICATION +#define DEFINE_IMPLICATION(whenflag, thenflag) #endif -#ifndef DEFINE_neg_implication -#define DEFINE_neg_implication(whenflag, thenflag) +#ifndef DEFINE_NEG_IMPLICATION +#define DEFINE_NEG_IMPLICATION(whenflag, thenflag) #endif #define COMMA , @@ -79,10 +81,8 @@ #ifdef FLAG_MODE_DECLARE // Structure used to hold a collection of arguments to the JavaScript code. struct JSArguments { -public: - inline const char*& operator[] (int idx) const { - return argv[idx]; - } + public: + inline const char*& operator[](int idx) const { return argv[idx]; } static JSArguments Create(int argc, const char** argv) { JSArguments args; args.argc = argc; @@ -105,37 +105,41 @@ struct MaybeBoolFlag { }; #endif -#if (defined CAN_USE_VFP3_INSTRUCTIONS) || !(defined ARM_TEST) -# define ENABLE_VFP3_DEFAULT true +#if (defined CAN_USE_VFP3_INSTRUCTIONS) || !(defined ARM_TEST_NO_FEATURE_PROBE) +#define ENABLE_VFP3_DEFAULT true #else -# define ENABLE_VFP3_DEFAULT false +#define ENABLE_VFP3_DEFAULT false #endif -#if (defined CAN_USE_ARMV7_INSTRUCTIONS) || !(defined ARM_TEST) -# define ENABLE_ARMV7_DEFAULT true +#if (defined CAN_USE_ARMV7_INSTRUCTIONS) || !(defined ARM_TEST_NO_FEATURE_PROBE) +#define ENABLE_ARMV7_DEFAULT true #else -# define ENABLE_ARMV7_DEFAULT false +#define ENABLE_ARMV7_DEFAULT false #endif -#if (defined CAN_USE_VFP32DREGS) || !(defined ARM_TEST) -# define ENABLE_32DREGS_DEFAULT true +#if (defined CAN_USE_VFP32DREGS) || !(defined ARM_TEST_NO_FEATURE_PROBE) +#define ENABLE_32DREGS_DEFAULT true #else -# define ENABLE_32DREGS_DEFAULT false +#define ENABLE_32DREGS_DEFAULT false +#endif +#if (defined CAN_USE_NEON) || !(defined ARM_TEST_NO_FEATURE_PROBE) +# define ENABLE_NEON_DEFAULT true +#else +# define ENABLE_NEON_DEFAULT false #endif -#define DEFINE_bool(nam, def, cmt) FLAG(BOOL, bool, nam, def, cmt) -#define DEFINE_maybe_bool(nam, cmt) FLAG(MAYBE_BOOL, MaybeBoolFlag, nam, \ - { false COMMA false }, cmt) -#define DEFINE_int(nam, def, cmt) FLAG(INT, int, nam, def, cmt) -#define DEFINE_float(nam, def, cmt) FLAG(FLOAT, double, nam, def, cmt) -#define DEFINE_string(nam, def, cmt) FLAG(STRING, const char*, nam, def, cmt) -#define DEFINE_args(nam, cmt) FLAG(ARGS, JSArguments, nam, \ - { 0 COMMA NULL }, cmt) - -#define DEFINE_ALIAS_bool(alias, nam) FLAG_ALIAS(BOOL, bool, alias, nam) -#define DEFINE_ALIAS_int(alias, nam) FLAG_ALIAS(INT, int, alias, nam) -#define DEFINE_ALIAS_float(alias, nam) FLAG_ALIAS(FLOAT, double, alias, nam) -#define DEFINE_ALIAS_string(alias, nam) \ +#define DEFINE_BOOL(nam, def, cmt) FLAG(BOOL, bool, nam, def, cmt) +#define DEFINE_MAYBE_BOOL(nam, cmt) \ + FLAG(MAYBE_BOOL, MaybeBoolFlag, nam, {false COMMA false}, cmt) +#define DEFINE_INT(nam, def, cmt) FLAG(INT, int, nam, def, cmt) +#define DEFINE_FLOAT(nam, def, cmt) FLAG(FLOAT, double, nam, def, cmt) +#define DEFINE_STRING(nam, def, cmt) FLAG(STRING, const char*, nam, def, cmt) +#define DEFINE_ARGS(nam, cmt) FLAG(ARGS, JSArguments, nam, {0 COMMA NULL}, cmt) + +#define DEFINE_ALIAS_BOOL(alias, nam) FLAG_ALIAS(BOOL, bool, 
alias, nam) +#define DEFINE_ALIAS_INT(alias, nam) FLAG_ALIAS(INT, int, alias, nam) +#define DEFINE_ALIAS_FLOAT(alias, nam) FLAG_ALIAS(FLOAT, double, alias, nam) +#define DEFINE_ALIAS_STRING(alias, nam) \ FLAG_ALIAS(STRING, const char*, alias, nam) -#define DEFINE_ALIAS_args(alias, nam) FLAG_ALIAS(ARGS, JSArguments, alias, nam) +#define DEFINE_ALIAS_ARGS(alias, nam) FLAG_ALIAS(ARGS, JSArguments, alias, nam) // // Flags in all modes. @@ -143,550 +147,548 @@ struct MaybeBoolFlag { #define FLAG FLAG_FULL // Flags for language modes and experimental language features. -DEFINE_bool(use_strict, false, "enforce strict mode") -DEFINE_bool(es_staging, false, "enable upcoming ES6+ features") +DEFINE_BOOL(use_strict, false, "enforce strict mode") +DEFINE_BOOL(es_staging, false, "enable upcoming ES6+ features") -DEFINE_bool(harmony_typeof, false, "enable harmony semantics for typeof") -DEFINE_bool(harmony_scoping, false, "enable harmony block scoping") -DEFINE_bool(harmony_modules, false, +DEFINE_BOOL(harmony_scoping, false, "enable harmony block scoping") +DEFINE_BOOL(harmony_modules, false, "enable harmony modules (implies block scoping)") -DEFINE_bool(harmony_symbols, false, - "enable harmony symbols (a.k.a. private names)") -DEFINE_bool(harmony_proxies, false, "enable harmony proxies") -DEFINE_bool(harmony_collections, false, - "enable harmony collections (sets, maps)") -DEFINE_bool(harmony_generators, false, "enable harmony generators") -DEFINE_bool(harmony_iteration, false, "enable harmony iteration (for-of)") -DEFINE_bool(harmony_numeric_literals, false, +DEFINE_BOOL(harmony_proxies, false, "enable harmony proxies") +DEFINE_BOOL(harmony_generators, false, "enable harmony generators") +DEFINE_BOOL(harmony_numeric_literals, false, "enable harmony numeric literals (0o77, 0b11)") -DEFINE_bool(harmony_strings, false, "enable harmony string") -DEFINE_bool(harmony_arrays, false, "enable harmony arrays") -DEFINE_bool(harmony_maths, false, "enable harmony math functions") -DEFINE_bool(harmony_promises, true, "(dummy flag, has no effect)") -DEFINE_bool(harmony, false, "enable all harmony features (except typeof)") - -DEFINE_implication(harmony, harmony_scoping) -DEFINE_implication(harmony, harmony_modules) -DEFINE_implication(harmony, harmony_symbols) -DEFINE_implication(harmony, harmony_proxies) -DEFINE_implication(harmony, harmony_collections) -DEFINE_implication(harmony, harmony_generators) -DEFINE_implication(harmony, harmony_iteration) -DEFINE_implication(harmony, harmony_numeric_literals) -DEFINE_implication(harmony, harmony_strings) -DEFINE_implication(harmony, harmony_arrays) -DEFINE_implication(harmony_modules, harmony_scoping) - -DEFINE_implication(harmony, es_staging) -DEFINE_implication(es_staging, harmony_maths) +DEFINE_BOOL(harmony_strings, false, "enable harmony string") +DEFINE_BOOL(harmony_arrays, false, "enable harmony arrays") +DEFINE_BOOL(harmony_arrow_functions, false, "enable harmony arrow functions") +DEFINE_BOOL(harmony, false, "enable all harmony features (except proxies)") + +DEFINE_IMPLICATION(harmony, harmony_scoping) +DEFINE_IMPLICATION(harmony, harmony_modules) +// TODO(rossberg): Reenable when problems are sorted out. 
+// DEFINE_IMPLICATION(harmony, harmony_proxies) +DEFINE_IMPLICATION(harmony, harmony_generators) +DEFINE_IMPLICATION(harmony, harmony_numeric_literals) +DEFINE_IMPLICATION(harmony, harmony_strings) +DEFINE_IMPLICATION(harmony, harmony_arrays) +DEFINE_IMPLICATION(harmony, harmony_arrow_functions) +DEFINE_IMPLICATION(harmony_modules, harmony_scoping) + +DEFINE_IMPLICATION(harmony, es_staging) // Flags for experimental implementation features. -DEFINE_bool(packed_arrays, true, "optimizes arrays that have no holes") -DEFINE_bool(smi_only_arrays, true, "tracks arrays with only smi values") -DEFINE_bool(compiled_keyed_dictionary_loads, true, +DEFINE_BOOL(compiled_keyed_dictionary_loads, true, "use optimizing compiler to generate keyed dictionary load stubs") -DEFINE_bool(clever_optimizations, true, +DEFINE_BOOL(compiled_keyed_generic_loads, false, + "use optimizing compiler to generate keyed generic load stubs") +DEFINE_BOOL(clever_optimizations, true, "Optimize object size, Array shift, DOM strings and string +") -DEFINE_bool(pretenuring, true, "allocate objects in old space") // TODO(hpayer): We will remove this flag as soon as we have pretenuring // support for specific allocation sites. -DEFINE_bool(pretenuring_call_new, false, "pretenure call new") -DEFINE_bool(allocation_site_pretenuring, true, +DEFINE_BOOL(pretenuring_call_new, false, "pretenure call new") +DEFINE_BOOL(allocation_site_pretenuring, true, "pretenure with allocation sites") -DEFINE_bool(trace_pretenuring, false, +DEFINE_BOOL(trace_pretenuring, false, "trace pretenuring decisions of HAllocate instructions") -DEFINE_bool(trace_pretenuring_statistics, false, +DEFINE_BOOL(trace_pretenuring_statistics, false, "trace allocation site pretenuring statistics") -DEFINE_bool(track_fields, true, "track fields with only smi values") -DEFINE_bool(track_double_fields, true, "track fields with double values") -DEFINE_bool(track_heap_object_fields, true, "track fields with heap values") -DEFINE_bool(track_computed_fields, true, "track computed boilerplate fields") -DEFINE_implication(track_double_fields, track_fields) -DEFINE_implication(track_heap_object_fields, track_fields) -DEFINE_implication(track_computed_fields, track_fields) -DEFINE_bool(track_field_types, true, "track field types") -DEFINE_implication(track_field_types, track_fields) -DEFINE_implication(track_field_types, track_heap_object_fields) -DEFINE_bool(smi_binop, true, "support smi representation in binary operations") +DEFINE_BOOL(track_fields, true, "track fields with only smi values") +DEFINE_BOOL(track_double_fields, true, "track fields with double values") +DEFINE_BOOL(track_heap_object_fields, true, "track fields with heap values") +DEFINE_BOOL(track_computed_fields, true, "track computed boilerplate fields") +DEFINE_IMPLICATION(track_double_fields, track_fields) +DEFINE_IMPLICATION(track_heap_object_fields, track_fields) +DEFINE_IMPLICATION(track_computed_fields, track_fields) +DEFINE_BOOL(track_field_types, true, "track field types") +DEFINE_IMPLICATION(track_field_types, track_fields) +DEFINE_IMPLICATION(track_field_types, track_heap_object_fields) +DEFINE_BOOL(smi_binop, true, "support smi representation in binary operations") +DEFINE_BOOL(vector_ics, false, "support vector-based ics") // Flags for optimization types. 
-DEFINE_bool(optimize_for_size, false, +DEFINE_BOOL(optimize_for_size, false, "Enables optimizations which favor memory size over execution " "speed.") // Flags for data representation optimizations -DEFINE_bool(unbox_double_arrays, true, "automatically unbox arrays of doubles") -DEFINE_bool(string_slices, true, "use string slices") +DEFINE_BOOL(unbox_double_arrays, true, "automatically unbox arrays of doubles") +DEFINE_BOOL(string_slices, true, "use string slices") // Flags for Crankshaft. -DEFINE_bool(crankshaft, true, "use crankshaft") -DEFINE_string(hydrogen_filter, "*", "optimization filter") -DEFINE_bool(use_gvn, true, "use hydrogen global value numbering") -DEFINE_int(gvn_iterations, 3, "maximum number of GVN fix-point iterations") -DEFINE_bool(use_canonicalizing, true, "use hydrogen instruction canonicalizing") -DEFINE_bool(use_inlining, true, "use function inlining") -DEFINE_bool(use_escape_analysis, true, "use hydrogen escape analysis") -DEFINE_bool(use_allocation_folding, true, "use allocation folding") -DEFINE_bool(use_local_allocation_folding, false, "only fold in basic blocks") -DEFINE_bool(use_write_barrier_elimination, true, +DEFINE_BOOL(crankshaft, true, "use crankshaft") +DEFINE_STRING(hydrogen_filter, "*", "optimization filter") +DEFINE_BOOL(use_gvn, true, "use hydrogen global value numbering") +DEFINE_INT(gvn_iterations, 3, "maximum number of GVN fix-point iterations") +DEFINE_BOOL(use_canonicalizing, true, "use hydrogen instruction canonicalizing") +DEFINE_BOOL(use_inlining, true, "use function inlining") +DEFINE_BOOL(use_escape_analysis, true, "use hydrogen escape analysis") +DEFINE_BOOL(use_allocation_folding, true, "use allocation folding") +DEFINE_BOOL(use_local_allocation_folding, false, "only fold in basic blocks") +DEFINE_BOOL(use_write_barrier_elimination, true, "eliminate write barriers targeting allocations in optimized code") -DEFINE_int(max_inlining_levels, 5, "maximum number of inlining levels") -DEFINE_int(max_inlined_source_size, 600, +DEFINE_INT(max_inlining_levels, 5, "maximum number of inlining levels") +DEFINE_INT(max_inlined_source_size, 600, "maximum source size in bytes considered for a single inlining") -DEFINE_int(max_inlined_nodes, 196, +DEFINE_INT(max_inlined_nodes, 196, "maximum number of AST nodes considered for a single inlining") -DEFINE_int(max_inlined_nodes_cumulative, 400, +DEFINE_INT(max_inlined_nodes_cumulative, 400, "maximum cumulative number of AST nodes considered for inlining") -DEFINE_bool(loop_invariant_code_motion, true, "loop invariant code motion") -DEFINE_bool(fast_math, true, "faster (but maybe less accurate) math functions") -DEFINE_bool(collect_megamorphic_maps_from_stub_cache, true, +DEFINE_BOOL(loop_invariant_code_motion, true, "loop invariant code motion") +DEFINE_BOOL(fast_math, true, "faster (but maybe less accurate) math functions") +DEFINE_BOOL(collect_megamorphic_maps_from_stub_cache, true, "crankshaft harvests type feedback from stub cache") -DEFINE_bool(hydrogen_stats, false, "print statistics for hydrogen") -DEFINE_bool(trace_check_elimination, false, "trace check elimination phase") -DEFINE_bool(trace_hydrogen, false, "trace generated hydrogen to file") -DEFINE_string(trace_hydrogen_filter, "*", "hydrogen tracing filter") -DEFINE_bool(trace_hydrogen_stubs, false, "trace generated hydrogen for stubs") -DEFINE_string(trace_hydrogen_file, NULL, "trace hydrogen to given file name") -DEFINE_string(trace_phase, "HLZ", "trace generated IR for specified phases") -DEFINE_bool(trace_inlining, false, "trace inlining 
decisions") -DEFINE_bool(trace_load_elimination, false, "trace load elimination") -DEFINE_bool(trace_store_elimination, false, "trace store elimination") -DEFINE_bool(trace_alloc, false, "trace register allocator") -DEFINE_bool(trace_all_uses, false, "trace all use positions") -DEFINE_bool(trace_range, false, "trace range analysis") -DEFINE_bool(trace_gvn, false, "trace global value numbering") -DEFINE_bool(trace_representation, false, "trace representation types") -DEFINE_bool(trace_escape_analysis, false, "trace hydrogen escape analysis") -DEFINE_bool(trace_allocation_folding, false, "trace allocation folding") -DEFINE_bool(trace_track_allocation_sites, false, +DEFINE_BOOL(hydrogen_stats, false, "print statistics for hydrogen") +DEFINE_BOOL(trace_check_elimination, false, "trace check elimination phase") +DEFINE_BOOL(trace_hydrogen, false, "trace generated hydrogen to file") +DEFINE_STRING(trace_hydrogen_filter, "*", "hydrogen tracing filter") +DEFINE_BOOL(trace_hydrogen_stubs, false, "trace generated hydrogen for stubs") +DEFINE_STRING(trace_hydrogen_file, NULL, "trace hydrogen to given file name") +DEFINE_STRING(trace_phase, "HLZ", "trace generated IR for specified phases") +DEFINE_BOOL(trace_inlining, false, "trace inlining decisions") +DEFINE_BOOL(trace_load_elimination, false, "trace load elimination") +DEFINE_BOOL(trace_store_elimination, false, "trace store elimination") +DEFINE_BOOL(trace_alloc, false, "trace register allocator") +DEFINE_BOOL(trace_all_uses, false, "trace all use positions") +DEFINE_BOOL(trace_range, false, "trace range analysis") +DEFINE_BOOL(trace_gvn, false, "trace global value numbering") +DEFINE_BOOL(trace_representation, false, "trace representation types") +DEFINE_BOOL(trace_removable_simulates, false, "trace removable simulates") +DEFINE_BOOL(trace_escape_analysis, false, "trace hydrogen escape analysis") +DEFINE_BOOL(trace_allocation_folding, false, "trace allocation folding") +DEFINE_BOOL(trace_track_allocation_sites, false, "trace the tracking of allocation sites") -DEFINE_bool(trace_migration, false, "trace object migration") -DEFINE_bool(trace_generalization, false, "trace map generalization") -DEFINE_bool(stress_pointer_maps, false, "pointer map for every instruction") -DEFINE_bool(stress_environments, false, "environment for every instruction") -DEFINE_int(deopt_every_n_times, 0, +DEFINE_BOOL(trace_migration, false, "trace object migration") +DEFINE_BOOL(trace_generalization, false, "trace map generalization") +DEFINE_BOOL(stress_pointer_maps, false, "pointer map for every instruction") +DEFINE_BOOL(stress_environments, false, "environment for every instruction") +DEFINE_INT(deopt_every_n_times, 0, "deoptimize every n times a deopt point is passed") -DEFINE_int(deopt_every_n_garbage_collections, 0, +DEFINE_INT(deopt_every_n_garbage_collections, 0, "deoptimize every n garbage collections") -DEFINE_bool(print_deopt_stress, false, "print number of possible deopt points") -DEFINE_bool(trap_on_deopt, false, "put a break point before deoptimizing") -DEFINE_bool(trap_on_stub_deopt, false, +DEFINE_BOOL(print_deopt_stress, false, "print number of possible deopt points") +DEFINE_BOOL(trap_on_deopt, false, "put a break point before deoptimizing") +DEFINE_BOOL(trap_on_stub_deopt, false, "put a break point before deoptimizing a stub") -DEFINE_bool(deoptimize_uncommon_cases, true, "deoptimize uncommon cases") -DEFINE_bool(polymorphic_inlining, true, "polymorphic inlining") -DEFINE_bool(use_osr, true, "use on-stack replacement") 
-DEFINE_bool(array_bounds_checks_elimination, true, +DEFINE_BOOL(deoptimize_uncommon_cases, true, "deoptimize uncommon cases") +DEFINE_BOOL(polymorphic_inlining, true, "polymorphic inlining") +DEFINE_BOOL(use_osr, true, "use on-stack replacement") +DEFINE_BOOL(array_bounds_checks_elimination, true, "perform array bounds checks elimination") -DEFINE_bool(trace_bce, false, "trace array bounds check elimination") -DEFINE_bool(array_bounds_checks_hoisting, false, +DEFINE_BOOL(trace_bce, false, "trace array bounds check elimination") +DEFINE_BOOL(array_bounds_checks_hoisting, false, "perform array bounds checks hoisting") -DEFINE_bool(array_index_dehoisting, true, - "perform array index dehoisting") -DEFINE_bool(analyze_environment_liveness, true, +DEFINE_BOOL(array_index_dehoisting, true, "perform array index dehoisting") +DEFINE_BOOL(analyze_environment_liveness, true, "analyze liveness of environment slots and zap dead values") -DEFINE_bool(load_elimination, true, "use load elimination") -DEFINE_bool(check_elimination, true, "use check elimination") -DEFINE_bool(store_elimination, false, "use store elimination") -DEFINE_bool(dead_code_elimination, true, "use dead code elimination") -DEFINE_bool(fold_constants, true, "use constant folding") -DEFINE_bool(trace_dead_code_elimination, false, "trace dead code elimination") -DEFINE_bool(unreachable_code_elimination, true, "eliminate unreachable code") -DEFINE_bool(trace_osr, false, "trace on-stack replacement") -DEFINE_int(stress_runs, 0, "number of stress runs") -DEFINE_bool(optimize_closures, true, "optimize closures") -DEFINE_bool(lookup_sample_by_shared, true, +DEFINE_BOOL(load_elimination, true, "use load elimination") +DEFINE_BOOL(check_elimination, true, "use check elimination") +DEFINE_BOOL(store_elimination, false, "use store elimination") +DEFINE_BOOL(dead_code_elimination, true, "use dead code elimination") +DEFINE_BOOL(fold_constants, true, "use constant folding") +DEFINE_BOOL(trace_dead_code_elimination, false, "trace dead code elimination") +DEFINE_BOOL(unreachable_code_elimination, true, "eliminate unreachable code") +DEFINE_BOOL(trace_osr, false, "trace on-stack replacement") +DEFINE_INT(stress_runs, 0, "number of stress runs") +DEFINE_BOOL(optimize_closures, true, "optimize closures") +DEFINE_BOOL(lookup_sample_by_shared, true, "when picking a function to optimize, watch for shared function " "info, not JSFunction itself") -DEFINE_bool(cache_optimized_code, true, - "cache optimized code for closures") -DEFINE_bool(flush_optimized_code_cache, true, +DEFINE_BOOL(cache_optimized_code, true, "cache optimized code for closures") +DEFINE_BOOL(flush_optimized_code_cache, true, "flushes the cache of optimized code for closures on every GC") -DEFINE_bool(inline_construct, true, "inline constructor calls") -DEFINE_bool(inline_arguments, true, "inline functions with arguments object") -DEFINE_bool(inline_accessors, true, "inline JavaScript accessors") -DEFINE_int(escape_analysis_iterations, 2, +DEFINE_BOOL(inline_construct, true, "inline constructor calls") +DEFINE_BOOL(inline_arguments, true, "inline functions with arguments object") +DEFINE_BOOL(inline_accessors, true, "inline JavaScript accessors") +DEFINE_INT(escape_analysis_iterations, 2, "maximum number of escape analysis fix-point iterations") -DEFINE_bool(optimize_for_in, true, - "optimize functions containing for-in loops") -DEFINE_bool(opt_safe_uint32_operations, true, +DEFINE_BOOL(optimize_for_in, true, "optimize functions containing for-in loops") 
+DEFINE_BOOL(opt_safe_uint32_operations, true, "allow uint32 values on optimized frames if they are used only in " "safe operations") -DEFINE_bool(concurrent_recompilation, true, +DEFINE_BOOL(concurrent_recompilation, true, "optimizing hot functions asynchronously on a separate thread") -DEFINE_bool(trace_concurrent_recompilation, false, +DEFINE_BOOL(trace_concurrent_recompilation, false, "track concurrent recompilation") -DEFINE_int(concurrent_recompilation_queue_length, 8, +DEFINE_INT(concurrent_recompilation_queue_length, 8, "the length of the concurrent compilation queue") -DEFINE_int(concurrent_recompilation_delay, 0, +DEFINE_INT(concurrent_recompilation_delay, 0, "artificial compilation delay in ms") -DEFINE_bool(block_concurrent_recompilation, false, +DEFINE_BOOL(block_concurrent_recompilation, false, "block queued jobs until released") -DEFINE_bool(concurrent_osr, false, - "concurrent on-stack replacement") -DEFINE_implication(concurrent_osr, concurrent_recompilation) +DEFINE_BOOL(concurrent_osr, true, "concurrent on-stack replacement") +DEFINE_IMPLICATION(concurrent_osr, concurrent_recompilation) -DEFINE_bool(omit_map_checks_for_leaf_maps, true, +DEFINE_BOOL(omit_map_checks_for_leaf_maps, true, "do not emit check maps for constant values that have a leaf map, " "deoptimize the optimized code if the layout of the maps changes.") -DEFINE_int(typed_array_max_size_in_heap, 64, - "threshold for in-heap typed array") +// Flags for TurboFan. +DEFINE_STRING(turbo_filter, "~", "optimization filter for TurboFan compiler") +DEFINE_BOOL(trace_turbo, false, "trace generated TurboFan IR") +DEFINE_BOOL(trace_turbo_types, true, "trace generated TurboFan types") +DEFINE_BOOL(trace_turbo_scheduler, false, "trace generated TurboFan scheduler") +DEFINE_BOOL(turbo_verify, false, "verify TurboFan graphs at each phase") +DEFINE_BOOL(turbo_stats, false, "print TurboFan statistics") +DEFINE_BOOL(turbo_types, false, "use typed lowering in TurboFan") +DEFINE_BOOL(turbo_source_positions, false, + "track source code positions when building TurboFan IR") +DEFINE_BOOL(context_specialization, true, + "enable context specialization in TurboFan") +DEFINE_BOOL(turbo_deoptimization, false, "enable deoptimization in TurboFan") + +DEFINE_INT(typed_array_max_size_in_heap, 64, + "threshold for in-heap typed array") // Profiler flags. -DEFINE_int(frame_count, 1, "number of stack frames inspected by the profiler") - // 0x1800 fits in the immediate field of an ARM instruction. -DEFINE_int(interrupt_budget, 0x1800, +DEFINE_INT(frame_count, 1, "number of stack frames inspected by the profiler") +// 0x1800 fits in the immediate field of an ARM instruction.
+DEFINE_INT(interrupt_budget, 0x1800, "execution budget before interrupt is triggered") -DEFINE_int(type_info_threshold, 25, +DEFINE_INT(type_info_threshold, 25, "percentage of ICs that must have type info to allow optimization") -DEFINE_int(self_opt_count, 130, "call count before self-optimization") +DEFINE_INT(generic_ic_threshold, 30, + "max percentage of megamorphic/generic ICs to allow optimization") +DEFINE_INT(self_opt_count, 130, "call count before self-optimization") -DEFINE_bool(trace_opt_verbose, false, "extra verbose compilation tracing") -DEFINE_implication(trace_opt_verbose, trace_opt) +DEFINE_BOOL(trace_opt_verbose, false, "extra verbose compilation tracing") +DEFINE_IMPLICATION(trace_opt_verbose, trace_opt) // assembler-ia32.cc / assembler-arm.cc / assembler-x64.cc -DEFINE_bool(debug_code, false, - "generate extra code (assertions) for debugging") -DEFINE_bool(code_comments, false, "emit comments in code disassembly") -DEFINE_bool(enable_sse2, true, - "enable use of SSE2 instructions if available") -DEFINE_bool(enable_sse3, true, - "enable use of SSE3 instructions if available") -DEFINE_bool(enable_sse4_1, true, +DEFINE_BOOL(debug_code, false, "generate extra code (assertions) for debugging") +DEFINE_BOOL(code_comments, false, "emit comments in code disassembly") +DEFINE_BOOL(enable_sse3, true, "enable use of SSE3 instructions if available") +DEFINE_BOOL(enable_sse4_1, true, "enable use of SSE4.1 instructions if available") -DEFINE_bool(enable_cmov, true, - "enable use of CMOV instruction if available") -DEFINE_bool(enable_sahf, true, +DEFINE_BOOL(enable_sahf, true, "enable use of SAHF instruction if available (X64 only)") -DEFINE_bool(enable_vfp3, ENABLE_VFP3_DEFAULT, +DEFINE_BOOL(enable_vfp3, ENABLE_VFP3_DEFAULT, "enable use of VFP3 instructions if available") -DEFINE_bool(enable_armv7, ENABLE_ARMV7_DEFAULT, +DEFINE_BOOL(enable_armv7, ENABLE_ARMV7_DEFAULT, "enable use of ARMv7 instructions if available (ARM only)") -DEFINE_bool(enable_neon, true, +DEFINE_BOOL(enable_neon, ENABLE_NEON_DEFAULT, "enable use of NEON instructions if available (ARM only)") -DEFINE_bool(enable_sudiv, true, +DEFINE_BOOL(enable_sudiv, true, "enable use of SDIV and UDIV instructions if available (ARM only)") -DEFINE_bool(enable_movw_movt, false, +DEFINE_BOOL(enable_mls, true, + "enable use of MLS instructions if available (ARM only)") +DEFINE_BOOL(enable_movw_movt, false, "enable loading 32-bit constant by means of movw/movt " "instruction pairs (ARM only)") -DEFINE_bool(enable_unaligned_accesses, true, +DEFINE_BOOL(enable_unaligned_accesses, true, "enable unaligned accesses for ARMv7 (ARM only)") -DEFINE_bool(enable_32dregs, ENABLE_32DREGS_DEFAULT, +DEFINE_BOOL(enable_32dregs, ENABLE_32DREGS_DEFAULT, "enable use of d16-d31 registers on ARM - this requires VFP3") -DEFINE_bool(enable_vldr_imm, false, +DEFINE_BOOL(enable_vldr_imm, false, "enable use of constant pools for double immediate (ARM only)") -DEFINE_bool(force_long_branches, false, +DEFINE_BOOL(force_long_branches, false, "force all emitted branches to be in long mode (MIPS only)") +// cpu-arm64.cc +DEFINE_BOOL(enable_always_align_csp, true, + "enable alignment of csp to 16 bytes on platforms which prefer " + "the register to always be aligned (ARM64 only)") + // bootstrapper.cc -DEFINE_string(expose_natives_as, NULL, "expose natives in global object") -DEFINE_string(expose_debug_as, NULL, "expose debug in global object") -DEFINE_bool(expose_free_buffer, false, "expose freeBuffer extension") -DEFINE_bool(expose_gc, false, "expose gc 
extension") -DEFINE_string(expose_gc_as, NULL, +DEFINE_STRING(expose_natives_as, NULL, "expose natives in global object") +DEFINE_STRING(expose_debug_as, NULL, "expose debug in global object") +DEFINE_BOOL(expose_free_buffer, false, "expose freeBuffer extension") +DEFINE_BOOL(expose_gc, false, "expose gc extension") +DEFINE_STRING(expose_gc_as, NULL, "expose gc extension under the specified name") -DEFINE_implication(expose_gc_as, expose_gc) -DEFINE_bool(expose_externalize_string, false, +DEFINE_IMPLICATION(expose_gc_as, expose_gc) +DEFINE_BOOL(expose_externalize_string, false, "expose externalize string extension") -DEFINE_bool(expose_trigger_failure, false, "expose trigger-failure extension") -DEFINE_int(stack_trace_limit, 10, "number of stack frames to capture") -DEFINE_bool(builtins_in_stack_traces, false, +DEFINE_BOOL(expose_trigger_failure, false, "expose trigger-failure extension") +DEFINE_INT(stack_trace_limit, 10, "number of stack frames to capture") +DEFINE_BOOL(builtins_in_stack_traces, false, "show built-in functions in stack traces") -DEFINE_bool(disable_native_files, false, "disable builtin natives files") +DEFINE_BOOL(disable_native_files, false, "disable builtin natives files") // builtins-ia32.cc -DEFINE_bool(inline_new, true, "use fast inline allocation") +DEFINE_BOOL(inline_new, true, "use fast inline allocation") // codegen-ia32.cc / codegen-arm.cc -DEFINE_bool(trace_codegen, false, +DEFINE_BOOL(trace_codegen, false, "print name of functions for which code is generated") -DEFINE_bool(trace, false, "trace function calls") -DEFINE_bool(mask_constants_with_cookie, true, +DEFINE_BOOL(trace, false, "trace function calls") +DEFINE_BOOL(mask_constants_with_cookie, true, "use random jit cookie to mask large constants") // codegen.cc -DEFINE_bool(lazy, true, "use lazy compilation") -DEFINE_bool(trace_opt, false, "trace lazy optimization") -DEFINE_bool(trace_opt_stats, false, "trace lazy optimization statistics") -DEFINE_bool(opt, true, "use adaptive optimizations") -DEFINE_bool(always_opt, false, "always try to optimize functions") -DEFINE_bool(always_osr, false, "always try to OSR functions") -DEFINE_bool(prepare_always_opt, false, "prepare for turning on always opt") -DEFINE_bool(trace_deopt, false, "trace optimize function deoptimization") -DEFINE_bool(trace_stub_failures, false, +DEFINE_BOOL(lazy, true, "use lazy compilation") +DEFINE_BOOL(trace_opt, false, "trace lazy optimization") +DEFINE_BOOL(trace_opt_stats, false, "trace lazy optimization statistics") +DEFINE_BOOL(opt, true, "use adaptive optimizations") +DEFINE_BOOL(always_opt, false, "always try to optimize functions") +DEFINE_BOOL(always_osr, false, "always try to OSR functions") +DEFINE_BOOL(prepare_always_opt, false, "prepare for turning on always opt") +DEFINE_BOOL(trace_deopt, false, "trace optimize function deoptimization") +DEFINE_BOOL(trace_stub_failures, false, "trace deoptimization of generated code stubs") +DEFINE_BOOL(serialize_toplevel, false, "enable caching of toplevel scripts") + // compiler.cc -DEFINE_int(min_preparse_length, 1024, +DEFINE_INT(min_preparse_length, 1024, "minimum length for automatic enable preparsing") -DEFINE_bool(always_full_compiler, false, +DEFINE_BOOL(always_full_compiler, false, "try to use the dedicated run-once backend for all code") -DEFINE_int(max_opt_count, 10, +DEFINE_INT(max_opt_count, 10, "maximum number of optimization attempts before giving up.") // compilation-cache.cc -DEFINE_bool(compilation_cache, true, "enable compilation cache") +DEFINE_BOOL(compilation_cache, 
true, "enable compilation cache") -DEFINE_bool(cache_prototype_transitions, true, "cache prototype transitions") +DEFINE_BOOL(cache_prototype_transitions, true, "cache prototype transitions") // cpu-profiler.cc -DEFINE_int(cpu_profiler_sampling_interval, 1000, +DEFINE_INT(cpu_profiler_sampling_interval, 1000, "CPU profiler sampling interval in microseconds") // debug.cc -DEFINE_bool(trace_debug_json, false, "trace debugging JSON request/response") -DEFINE_bool(trace_js_array_abuse, false, +DEFINE_BOOL(trace_debug_json, false, "trace debugging JSON request/response") +DEFINE_BOOL(trace_js_array_abuse, false, "trace out-of-bounds accesses to JS arrays") -DEFINE_bool(trace_external_array_abuse, false, +DEFINE_BOOL(trace_external_array_abuse, false, "trace out-of-bounds-accesses to external arrays") -DEFINE_bool(trace_array_abuse, false, +DEFINE_BOOL(trace_array_abuse, false, "trace out-of-bounds accesses to all arrays") -DEFINE_implication(trace_array_abuse, trace_js_array_abuse) -DEFINE_implication(trace_array_abuse, trace_external_array_abuse) -DEFINE_bool(enable_liveedit, true, "enable liveedit experimental feature") -DEFINE_bool(hard_abort, true, "abort by crashing") +DEFINE_IMPLICATION(trace_array_abuse, trace_js_array_abuse) +DEFINE_IMPLICATION(trace_array_abuse, trace_external_array_abuse) +DEFINE_BOOL(enable_liveedit, true, "enable liveedit experimental feature") +DEFINE_BOOL(hard_abort, true, "abort by crashing") // execution.cc -// Slightly less than 1MB on 64-bit, since Windows' default stack size for +// Slightly less than 1MB, since Windows' default stack size for // the main execution thread is 1MB for both 32 and 64-bit. -DEFINE_int(stack_size, kPointerSize * 123, +DEFINE_INT(stack_size, 984, "default size of stack region v8 is allowed to use (in kBytes)") // frames.cc -DEFINE_int(max_stack_trace_source_length, 300, +DEFINE_INT(max_stack_trace_source_length, 300, "maximum length of function source code printed in a stack trace.") // full-codegen.cc -DEFINE_bool(always_inline_smi_code, false, +DEFINE_BOOL(always_inline_smi_code, false, "always inline smi code in non-opt code") // heap.cc -DEFINE_int(max_new_space_size, 0, - "max size of the new space consisting of two semi-spaces which are half" - "the size (in MBytes)") -DEFINE_int(max_old_space_size, 0, "max size of the old space (in Mbytes)") -DEFINE_int(max_executable_size, 0, "max size of executable memory (in Mbytes)") -DEFINE_bool(gc_global, false, "always perform global GCs") -DEFINE_int(gc_interval, -1, "garbage collect after <n> allocations") -DEFINE_bool(trace_gc, false, +DEFINE_INT(min_semi_space_size, 0, + "min size of a semi-space (in MBytes), the new space consists of two" + "semi-spaces") +DEFINE_INT(max_semi_space_size, 0, + "max size of a semi-space (in MBytes), the new space consists of two" + "semi-spaces") +DEFINE_INT(max_old_space_size, 0, "max size of the old space (in Mbytes)") +DEFINE_INT(max_executable_size, 0, "max size of executable memory (in Mbytes)") +DEFINE_BOOL(gc_global, false, "always perform global GCs") +DEFINE_INT(gc_interval, -1, "garbage collect after <n> allocations") +DEFINE_BOOL(trace_gc, false, "print one trace line following each garbage collection") -DEFINE_bool(trace_gc_nvp, false, +DEFINE_BOOL(trace_gc_nvp, false, "print one detailed trace line in name=value format " "after each garbage collection") -DEFINE_bool(trace_gc_ignore_scavenger, false, +DEFINE_BOOL(trace_gc_ignore_scavenger, false, "do not print trace line after scavenger collection") 
-DEFINE_bool(print_cumulative_gc_stat, false, +DEFINE_BOOL(print_cumulative_gc_stat, false, "print cumulative GC statistics in name=value format on exit") -DEFINE_bool(print_max_heap_committed, false, +DEFINE_BOOL(print_max_heap_committed, false, "print statistics of the maximum memory committed for the heap " "in name=value format on exit") -DEFINE_bool(trace_gc_verbose, false, +DEFINE_BOOL(trace_gc_verbose, false, "print more details following each garbage collection") -DEFINE_bool(trace_fragmentation, false, +DEFINE_BOOL(trace_fragmentation, false, "report fragmentation for old pointer and data pages") -DEFINE_bool(trace_external_memory, false, - "print amount of external allocated memory after each time " - "it is adjusted.") -DEFINE_bool(collect_maps, true, +DEFINE_BOOL(collect_maps, true, "garbage collect maps from which no objects can be reached") -DEFINE_bool(weak_embedded_maps_in_ic, true, +DEFINE_BOOL(weak_embedded_maps_in_ic, true, "make maps embedded in inline cache stubs") -DEFINE_bool(weak_embedded_maps_in_optimized_code, true, +DEFINE_BOOL(weak_embedded_maps_in_optimized_code, true, "make maps embedded in optimized code weak") -DEFINE_bool(weak_embedded_objects_in_optimized_code, true, +DEFINE_BOOL(weak_embedded_objects_in_optimized_code, true, "make objects embedded in optimized code weak") -DEFINE_bool(flush_code, true, +DEFINE_BOOL(flush_code, true, "flush code that we expect not to use again (during full gc)") -DEFINE_bool(flush_code_incrementally, true, +DEFINE_BOOL(flush_code_incrementally, true, "flush code that we expect not to use again (incrementally)") -DEFINE_bool(trace_code_flushing, false, "trace code flushing progress") -DEFINE_bool(age_code, true, +DEFINE_BOOL(trace_code_flushing, false, "trace code flushing progress") +DEFINE_BOOL(age_code, true, "track un-executed functions to age code and flush only " "old code (required for code flushing)") -DEFINE_bool(incremental_marking, true, "use incremental marking") -DEFINE_bool(incremental_marking_steps, true, "do incremental marking steps") -DEFINE_bool(trace_incremental_marking, false, +DEFINE_BOOL(incremental_marking, true, "use incremental marking") +DEFINE_BOOL(incremental_marking_steps, true, "do incremental marking steps") +DEFINE_BOOL(trace_incremental_marking, false, "trace progress of the incremental marking") -DEFINE_bool(track_gc_object_stats, false, +DEFINE_BOOL(track_gc_object_stats, false, "track object counts and memory usage") -DEFINE_bool(parallel_sweeping, false, "enable parallel sweeping") -DEFINE_bool(concurrent_sweeping, true, "enable concurrent sweeping") -DEFINE_int(sweeper_threads, 0, +DEFINE_BOOL(always_precise_sweeping, true, "always sweep precisely") +DEFINE_BOOL(parallel_sweeping, false, "enable parallel sweeping") +DEFINE_BOOL(concurrent_sweeping, true, "enable concurrent sweeping") +DEFINE_INT(sweeper_threads, 0, "number of parallel and concurrent sweeping threads") -DEFINE_bool(job_based_sweeping, false, "enable job based sweeping") +DEFINE_BOOL(job_based_sweeping, false, "enable job based sweeping") #ifdef VERIFY_HEAP -DEFINE_bool(verify_heap, false, "verify heap pointers before and after GC") +DEFINE_BOOL(verify_heap, false, "verify heap pointers before and after GC") #endif // heap-snapshot-generator.cc -DEFINE_bool(heap_profiler_trace_objects, false, +DEFINE_BOOL(heap_profiler_trace_objects, false, "Dump heap object allocations/movements/size_updates") // v8.cc -DEFINE_bool(use_idle_notification, true, +DEFINE_BOOL(use_idle_notification, true, "Use idle notification to reduce 
memory footprint.") // ic.cc -DEFINE_bool(use_ic, true, "use inline caching") +DEFINE_BOOL(use_ic, true, "use inline caching") +DEFINE_BOOL(trace_ic, false, "trace inline cache state transitions") // macro-assembler-ia32.cc -DEFINE_bool(native_code_counters, false, +DEFINE_BOOL(native_code_counters, false, "generate extra code for manipulating stats counters") // mark-compact.cc -DEFINE_bool(always_compact, false, "Perform compaction on every full GC") -DEFINE_bool(never_compact, false, +DEFINE_BOOL(always_compact, false, "Perform compaction on every full GC") +DEFINE_BOOL(never_compact, false, "Never perform compaction on full GC - testing only") -DEFINE_bool(compact_code_space, true, +DEFINE_BOOL(compact_code_space, true, "Compact code space on full non-incremental collections") -DEFINE_bool(incremental_code_compaction, true, +DEFINE_BOOL(incremental_code_compaction, true, "Compact code space on full incremental collections") -DEFINE_bool(cleanup_code_caches_at_gc, true, +DEFINE_BOOL(cleanup_code_caches_at_gc, true, "Flush inline caches prior to mark compact collection and " "flush code caches in maps during mark compact cycle.") -DEFINE_bool(use_marking_progress_bar, true, +DEFINE_BOOL(use_marking_progress_bar, true, "Use a progress bar to scan large objects in increments when " "incremental marking is active.") -DEFINE_bool(zap_code_space, true, +DEFINE_BOOL(zap_code_space, true, "Zap free memory in code space with 0xCC while sweeping.") -DEFINE_int(random_seed, 0, +DEFINE_INT(random_seed, 0, "Default seed for initializing random generator " "(0, the default, means to use system random).") // objects.cc -DEFINE_bool(use_verbose_printer, true, "allows verbose printing") +DEFINE_BOOL(use_verbose_printer, true, "allows verbose printing") // parser.cc -DEFINE_bool(allow_natives_syntax, false, "allow natives syntax") -DEFINE_bool(trace_parse, false, "trace parsing and preparsing") +DEFINE_BOOL(allow_natives_syntax, false, "allow natives syntax") +DEFINE_BOOL(trace_parse, false, "trace parsing and preparsing") // simulator-arm.cc, simulator-arm64.cc and simulator-mips.cc -DEFINE_bool(trace_sim, false, "Trace simulator execution") -DEFINE_bool(debug_sim, false, "Enable debugging the simulator") -DEFINE_bool(check_icache, false, +DEFINE_BOOL(trace_sim, false, "Trace simulator execution") +DEFINE_BOOL(debug_sim, false, "Enable debugging the simulator") +DEFINE_BOOL(check_icache, false, "Check icache flushes in ARM and MIPS simulator") -DEFINE_int(stop_sim_at, 0, "Simulator stop after x number of instructions") -#ifdef V8_TARGET_ARCH_ARM64 -DEFINE_int(sim_stack_alignment, 16, +DEFINE_INT(stop_sim_at, 0, "Simulator stop after x number of instructions") +#if defined(V8_TARGET_ARCH_ARM64) || defined(V8_TARGET_ARCH_MIPS64) +DEFINE_INT(sim_stack_alignment, 16, "Stack alignment in bytes in simulator. This must be a power of two " "and it must be at least 16. 
16 is default.") #else -DEFINE_int(sim_stack_alignment, 8, +DEFINE_INT(sim_stack_alignment, 8, "Stack alingment in bytes in simulator (4 or 8, 8 is default)") #endif -DEFINE_int(sim_stack_size, 2 * MB / KB, - "Stack size of the ARM64 simulator in kBytes (default is 2 MB)") -DEFINE_bool(log_regs_modified, true, +DEFINE_INT(sim_stack_size, 2 * MB / KB, + "Stack size of the ARM64 and MIPS64 simulator " + "in kBytes (default is 2 MB)") +DEFINE_BOOL(log_regs_modified, true, "When logging register values, only print modified registers.") -DEFINE_bool(log_colour, true, - "When logging, try to use coloured output.") -DEFINE_bool(ignore_asm_unimplemented_break, false, +DEFINE_BOOL(log_colour, true, "When logging, try to use coloured output.") +DEFINE_BOOL(ignore_asm_unimplemented_break, false, "Don't break for ASM_UNIMPLEMENTED_BREAK macros.") -DEFINE_bool(trace_sim_messages, false, +DEFINE_BOOL(trace_sim_messages, false, "Trace simulator debug messages. Implied by --trace-sim.") // isolate.cc -DEFINE_bool(stack_trace_on_illegal, false, +DEFINE_BOOL(stack_trace_on_illegal, false, "print stack trace when an illegal exception is thrown") -DEFINE_bool(abort_on_uncaught_exception, false, +DEFINE_BOOL(abort_on_uncaught_exception, false, "abort program (dump core) when an uncaught exception is thrown") -DEFINE_bool(randomize_hashes, true, +DEFINE_BOOL(randomize_hashes, true, "randomize hashes to avoid predictable hash collisions " "(with snapshots this option cannot override the baked-in seed)") -DEFINE_int(hash_seed, 0, +DEFINE_INT(hash_seed, 0, "Fixed seed to use to hash property keys (0 means random)" "(with snapshots this option cannot override the baked-in seed)") // snapshot-common.cc -DEFINE_bool(profile_deserialization, false, +DEFINE_BOOL(profile_deserialization, false, "Print the time it takes to deserialize the snapshot.") // Regexp -DEFINE_bool(regexp_optimization, true, "generate optimized regexp code") +DEFINE_BOOL(regexp_optimization, true, "generate optimized regexp code") // Testing flags test/cctest/test-{flags,api,serialization}.cc -DEFINE_bool(testing_bool_flag, true, "testing_bool_flag") -DEFINE_maybe_bool(testing_maybe_bool_flag, "testing_maybe_bool_flag") -DEFINE_int(testing_int_flag, 13, "testing_int_flag") -DEFINE_float(testing_float_flag, 2.5, "float-flag") -DEFINE_string(testing_string_flag, "Hello, world!", "string-flag") -DEFINE_int(testing_prng_seed, 42, "Seed used for threading test randomness") +DEFINE_BOOL(testing_bool_flag, true, "testing_bool_flag") +DEFINE_MAYBE_BOOL(testing_maybe_bool_flag, "testing_maybe_bool_flag") +DEFINE_INT(testing_int_flag, 13, "testing_int_flag") +DEFINE_FLOAT(testing_float_flag, 2.5, "float-flag") +DEFINE_STRING(testing_string_flag, "Hello, world!", "string-flag") +DEFINE_INT(testing_prng_seed, 42, "Seed used for threading test randomness") #ifdef _WIN32 -DEFINE_string(testing_serialization_file, "C:\\Windows\\Temp\\serdes", +DEFINE_STRING(testing_serialization_file, "C:\\Windows\\Temp\\serdes", "file in which to testing_serialize heap") #else -DEFINE_string(testing_serialization_file, "/tmp/serdes", +DEFINE_STRING(testing_serialization_file, "/tmp/serdes", "file in which to serialize heap") #endif // mksnapshot.cc -DEFINE_string(extra_code, NULL, "A filename with extra code to be included in" - " the snapshot (mksnapshot only)") -DEFINE_string(raw_file, NULL, "A file to write the raw snapshot bytes to. " - "(mksnapshot only)") -DEFINE_string(raw_context_file, NULL, "A file to write the raw context " - "snapshot bytes to. 
(mksnapshot only)") -DEFINE_bool(omit, false, "Omit raw snapshot bytes in generated code. " - "(mksnapshot only)") +DEFINE_STRING(extra_code, NULL, + "A filename with extra code to be included in" + " the snapshot (mksnapshot only)") +DEFINE_STRING(raw_file, NULL, + "A file to write the raw snapshot bytes to. " + "(mksnapshot only)") +DEFINE_STRING(raw_context_file, NULL, + "A file to write the raw context " + "snapshot bytes to. (mksnapshot only)") +DEFINE_STRING(startup_blob, NULL, + "Write V8 startup blob file. " + "(mksnapshot only)") // code-stubs-hydrogen.cc -DEFINE_bool(profile_hydrogen_code_stub_compilation, false, +DEFINE_BOOL(profile_hydrogen_code_stub_compilation, false, "Print the time it takes to lazily compile hydrogen code stubs.") -DEFINE_bool(predictable, false, "enable predictable mode") -DEFINE_neg_implication(predictable, concurrent_recompilation) -DEFINE_neg_implication(predictable, concurrent_osr) -DEFINE_neg_implication(predictable, concurrent_sweeping) -DEFINE_neg_implication(predictable, parallel_sweeping) +DEFINE_BOOL(predictable, false, "enable predictable mode") +DEFINE_NEG_IMPLICATION(predictable, concurrent_recompilation) +DEFINE_NEG_IMPLICATION(predictable, concurrent_osr) +DEFINE_NEG_IMPLICATION(predictable, concurrent_sweeping) +DEFINE_NEG_IMPLICATION(predictable, parallel_sweeping) // // Dev shell flags // -DEFINE_bool(help, false, "Print usage message, including flags, on console") -DEFINE_bool(dump_counters, false, "Dump counters on exit") +DEFINE_BOOL(help, false, "Print usage message, including flags, on console") +DEFINE_BOOL(dump_counters, false, "Dump counters on exit") -DEFINE_bool(debugger, false, "Enable JavaScript debugger") -DEFINE_bool(remote_debugger, false, "Connect JavaScript debugger to the " - "debugger agent in another process") -DEFINE_bool(debugger_agent, false, "Enable debugger agent") -DEFINE_int(debugger_port, 5858, "Port to use for remote debugging") +DEFINE_BOOL(debugger, false, "Enable JavaScript debugger") -DEFINE_string(map_counters, "", "Map counters to a file") -DEFINE_args(js_arguments, +DEFINE_STRING(map_counters, "", "Map counters to a file") +DEFINE_ARGS(js_arguments, "Pass all remaining arguments to the script. Alias for \"--\".") -#if defined(WEBOS__) -DEFINE_bool(debug_compile_events, false, "Enable debugger compile events") -DEFINE_bool(debug_script_collected_events, false, - "Enable debugger script collected events") -#else -DEFINE_bool(debug_compile_events, true, "Enable debugger compile events") -DEFINE_bool(debug_script_collected_events, true, - "Enable debugger script collected events") -#endif - - // // GDB JIT integration flags. 
// -DEFINE_bool(gdbjit, false, "enable GDBJIT interface (disables compacting GC)") -DEFINE_bool(gdbjit_full, false, "enable GDBJIT interface for all code objects") -DEFINE_bool(gdbjit_dump, false, "dump elf objects with debug info to disk") -DEFINE_string(gdbjit_dump_filter, "", +DEFINE_BOOL(gdbjit, false, "enable GDBJIT interface (disables compacting GC)") +DEFINE_BOOL(gdbjit_full, false, "enable GDBJIT interface for all code objects") +DEFINE_BOOL(gdbjit_dump, false, "dump elf objects with debug info to disk") +DEFINE_STRING(gdbjit_dump_filter, "", "dump only objects containing this substring") // mark-compact.cc -DEFINE_bool(force_marking_deque_overflows, false, +DEFINE_BOOL(force_marking_deque_overflows, false, "force overflows of marking deque by reducing its size " "to 64 words") -DEFINE_bool(stress_compaction, false, +DEFINE_BOOL(stress_compaction, false, "stress the GC compactor to flush out bugs (implies " "--force_marking_deque_overflows)") @@ -701,63 +703,64 @@ DEFINE_bool(stress_compaction, false, #endif // checks.cc -#ifdef ENABLE_SLOW_ASSERTS -DEFINE_bool(enable_slow_asserts, false, +#ifdef ENABLE_SLOW_DCHECKS +DEFINE_BOOL(enable_slow_asserts, false, "enable asserts that are slow to execute") #endif // codegen-ia32.cc / codegen-arm.cc / macro-assembler-*.cc -DEFINE_bool(print_source, false, "pretty print source code") -DEFINE_bool(print_builtin_source, false, +DEFINE_BOOL(print_source, false, "pretty print source code") +DEFINE_BOOL(print_builtin_source, false, "pretty print source code for builtins") -DEFINE_bool(print_ast, false, "print source AST") -DEFINE_bool(print_builtin_ast, false, "print source AST for builtins") -DEFINE_string(stop_at, "", "function name where to insert a breakpoint") -DEFINE_bool(trap_on_abort, false, "replace aborts by breakpoints") +DEFINE_BOOL(print_ast, false, "print source AST") +DEFINE_BOOL(print_builtin_ast, false, "print source AST for builtins") +DEFINE_STRING(stop_at, "", "function name where to insert a breakpoint") +DEFINE_BOOL(trap_on_abort, false, "replace aborts by breakpoints") // compiler.cc -DEFINE_bool(print_builtin_scopes, false, "print scopes for builtins") -DEFINE_bool(print_scopes, false, "print scopes") +DEFINE_BOOL(print_builtin_scopes, false, "print scopes for builtins") +DEFINE_BOOL(print_scopes, false, "print scopes") // contexts.cc -DEFINE_bool(trace_contexts, false, "trace contexts operations") +DEFINE_BOOL(trace_contexts, false, "trace contexts operations") // heap.cc -DEFINE_bool(gc_verbose, false, "print stuff during garbage collection") -DEFINE_bool(heap_stats, false, "report heap statistics before and after GC") -DEFINE_bool(code_stats, false, "report code statistics after GC") -DEFINE_bool(verify_native_context_separation, false, +DEFINE_BOOL(gc_verbose, false, "print stuff during garbage collection") +DEFINE_BOOL(heap_stats, false, "report heap statistics before and after GC") +DEFINE_BOOL(code_stats, false, "report code statistics after GC") +DEFINE_BOOL(verify_native_context_separation, false, "verify that code holds on to at most one native context after GC") -DEFINE_bool(print_handles, false, "report handles after GC") -DEFINE_bool(print_global_handles, false, "report global handles after GC") -// ic.cc -DEFINE_bool(trace_ic, false, "trace inline cache state transitions") +DEFINE_BOOL(print_handles, false, "report handles after GC") +DEFINE_BOOL(print_global_handles, false, "report global handles after GC") +// TurboFan debug-only flags.
+DEFINE_BOOL(print_turbo_replay, false, + "print C++ code to recreate TurboFan graphs") // interface.cc -DEFINE_bool(print_interfaces, false, "print interfaces") -DEFINE_bool(print_interface_details, false, "print interface inference details") -DEFINE_int(print_interface_depth, 5, "depth for printing interfaces") +DEFINE_BOOL(print_interfaces, false, "print interfaces") +DEFINE_BOOL(print_interface_details, false, "print interface inference details") +DEFINE_INT(print_interface_depth, 5, "depth for printing interfaces") // objects.cc -DEFINE_bool(trace_normalization, false, +DEFINE_BOOL(trace_normalization, false, "prints when objects are turned into dictionaries.") // runtime.cc -DEFINE_bool(trace_lazy, false, "trace lazy compilation") +DEFINE_BOOL(trace_lazy, false, "trace lazy compilation") // spaces.cc -DEFINE_bool(collect_heap_spill_statistics, false, +DEFINE_BOOL(collect_heap_spill_statistics, false, "report heap spill statistics along with heap_stats " "(requires heap_stats)") -DEFINE_bool(trace_isolates, false, "trace isolate state changes") +DEFINE_BOOL(trace_isolates, false, "trace isolate state changes") // Regexp -DEFINE_bool(regexp_possessive_quantifier, false, +DEFINE_BOOL(regexp_possessive_quantifier, false, "enable possessive quantifier syntax for testing") -DEFINE_bool(trace_regexp_bytecodes, false, "trace regexp bytecode execution") -DEFINE_bool(trace_regexp_assembler, false, +DEFINE_BOOL(trace_regexp_bytecodes, false, "trace regexp bytecode execution") +DEFINE_BOOL(trace_regexp_assembler, false, "trace regexp macro assembler calls.") // @@ -767,50 +770,52 @@ DEFINE_bool(trace_regexp_assembler, false, #define FLAG FLAG_FULL // log.cc -DEFINE_bool(log, false, +DEFINE_BOOL(log, false, "Minimal logging (no API, code, GC, suspect, or handles samples).") -DEFINE_bool(log_all, false, "Log all events to the log file.") -DEFINE_bool(log_api, false, "Log API events to the log file.") -DEFINE_bool(log_code, false, +DEFINE_BOOL(log_all, false, "Log all events to the log file.") +DEFINE_BOOL(log_api, false, "Log API events to the log file.") +DEFINE_BOOL(log_code, false, "Log code events to the log file without profiling.") -DEFINE_bool(log_gc, false, +DEFINE_BOOL(log_gc, false, "Log heap samples on garbage collection for the hp2ps tool.") -DEFINE_bool(log_handles, false, "Log global handle events.") -DEFINE_bool(log_snapshot_positions, false, +DEFINE_BOOL(log_handles, false, "Log global handle events.") +DEFINE_BOOL(log_snapshot_positions, false, "log positions of (de)serialized objects in the snapshot.") -DEFINE_bool(log_suspect, false, "Log suspect operations.") -DEFINE_bool(prof, false, +DEFINE_BOOL(log_suspect, false, "Log suspect operations.") +DEFINE_BOOL(prof, false, "Log statistical profiling information (implies --log-code).") -DEFINE_bool(prof_browser_mode, true, +DEFINE_BOOL(prof_browser_mode, true, "Used with --prof, turns on browser-compatible mode for profiling.") -DEFINE_bool(log_regexp, false, "Log regular expression execution.") -DEFINE_string(logfile, "v8.log", "Specify the name of the log file.") -DEFINE_bool(logfile_per_isolate, true, "Separate log files for each isolate.") -DEFINE_bool(ll_prof, false, "Enable low-level linux profiler.") -DEFINE_bool(perf_basic_prof, false, +DEFINE_BOOL(log_regexp, false, "Log regular expression execution.") +DEFINE_STRING(logfile, "v8.log", "Specify the name of the log file.") +DEFINE_BOOL(logfile_per_isolate, true, "Separate log files for each isolate.") +DEFINE_BOOL(ll_prof, false, "Enable low-level linux profiler.") 
+DEFINE_BOOL(perf_basic_prof, false, "Enable perf linux profiler (basic support).") -DEFINE_bool(perf_jit_prof, false, +DEFINE_NEG_IMPLICATION(perf_basic_prof, compact_code_space) +DEFINE_BOOL(perf_jit_prof, false, "Enable perf linux profiler (experimental annotate support).") -DEFINE_string(gc_fake_mmap, "/tmp/__v8_gc__", +DEFINE_NEG_IMPLICATION(perf_jit_prof, compact_code_space) +DEFINE_STRING(gc_fake_mmap, "/tmp/__v8_gc__", "Specify the name of the file for fake gc mmap used in ll_prof") -DEFINE_bool(log_internal_timer_events, false, "Time internal events.") -DEFINE_bool(log_timer_events, false, +DEFINE_BOOL(log_internal_timer_events, false, "Time internal events.") +DEFINE_BOOL(log_timer_events, false, "Time events including external callbacks.") -DEFINE_implication(log_timer_events, log_internal_timer_events) -DEFINE_implication(log_internal_timer_events, prof) -DEFINE_bool(log_instruction_stats, false, "Log AArch64 instruction statistics.") -DEFINE_string(log_instruction_file, "arm64_inst.csv", +DEFINE_IMPLICATION(log_timer_events, log_internal_timer_events) +DEFINE_IMPLICATION(log_internal_timer_events, prof) +DEFINE_BOOL(log_instruction_stats, false, "Log AArch64 instruction statistics.") +DEFINE_STRING(log_instruction_file, "arm64_inst.csv", "AArch64 instruction statistics log file.") -DEFINE_int(log_instruction_period, 1 << 22, +DEFINE_INT(log_instruction_period, 1 << 22, "AArch64 instruction statistics logging period.") -DEFINE_bool(redirect_code_traces, false, +DEFINE_BOOL(redirect_code_traces, false, "output deopt information and disassembly into file " "code-<pid>-<isolate id>.asm") -DEFINE_string(redirect_code_traces_to, NULL, - "output deopt information and disassembly into the given file") +DEFINE_STRING(redirect_code_traces_to, NULL, + "output deopt information and disassembly into the given file") -DEFINE_bool(hydrogen_track_positions, false, +DEFINE_BOOL(hydrogen_track_positions, false, "track source code positions when building IR") // @@ -824,51 +829,71 @@ DEFINE_bool(hydrogen_track_positions, false, #endif // elements.cc -DEFINE_bool(trace_elements_transitions, false, "trace elements transitions") +DEFINE_BOOL(trace_elements_transitions, false, "trace elements transitions") -DEFINE_bool(trace_creation_allocation_sites, false, +DEFINE_BOOL(trace_creation_allocation_sites, false, "trace the creation of allocation sites") // code-stubs.cc -DEFINE_bool(print_code_stubs, false, "print code stubs") -DEFINE_bool(test_secondary_stub_cache, false, +DEFINE_BOOL(print_code_stubs, false, "print code stubs") +DEFINE_BOOL(test_secondary_stub_cache, false, "test secondary stub cache by disabling the primary one") -DEFINE_bool(test_primary_stub_cache, false, +DEFINE_BOOL(test_primary_stub_cache, false, "test primary stub cache by disabling the secondary one") // codegen-ia32.cc / codegen-arm.cc -DEFINE_bool(print_code, false, "print generated code") -DEFINE_bool(print_opt_code, false, "print optimized code") -DEFINE_bool(print_unopt_code, false, "print unoptimized code before " +DEFINE_BOOL(print_code, false, "print generated code") +DEFINE_BOOL(print_opt_code, false, "print optimized code") +DEFINE_BOOL(print_unopt_code, false, + "print unoptimized code before " "printing optimized code based on it") -DEFINE_bool(print_code_verbose, false, "print more information for code") -DEFINE_bool(print_builtin_code, false, "print generated code for builtins") +DEFINE_BOOL(print_code_verbose, false, "print more information for code") +DEFINE_BOOL(print_builtin_code, false, "print generated 
code for builtins") #ifdef ENABLE_DISASSEMBLER -DEFINE_bool(sodium, false, "print generated code output suitable for use with " +DEFINE_BOOL(sodium, false, + "print generated code output suitable for use with " "the Sodium code viewer") -DEFINE_implication(sodium, print_code_stubs) -DEFINE_implication(sodium, print_code) -DEFINE_implication(sodium, print_opt_code) -DEFINE_implication(sodium, hydrogen_track_positions) -DEFINE_implication(sodium, code_comments) - -DEFINE_bool(print_all_code, false, "enable all flags related to printing code") -DEFINE_implication(print_all_code, print_code) -DEFINE_implication(print_all_code, print_opt_code) -DEFINE_implication(print_all_code, print_unopt_code) -DEFINE_implication(print_all_code, print_code_verbose) -DEFINE_implication(print_all_code, print_builtin_code) -DEFINE_implication(print_all_code, print_code_stubs) -DEFINE_implication(print_all_code, code_comments) +DEFINE_IMPLICATION(sodium, print_code_stubs) +DEFINE_IMPLICATION(sodium, print_code) +DEFINE_IMPLICATION(sodium, print_opt_code) +DEFINE_IMPLICATION(sodium, hydrogen_track_positions) +DEFINE_IMPLICATION(sodium, code_comments) + +DEFINE_BOOL(print_all_code, false, "enable all flags related to printing code") +DEFINE_IMPLICATION(print_all_code, print_code) +DEFINE_IMPLICATION(print_all_code, print_opt_code) +DEFINE_IMPLICATION(print_all_code, print_unopt_code) +DEFINE_IMPLICATION(print_all_code, print_code_verbose) +DEFINE_IMPLICATION(print_all_code, print_builtin_code) +DEFINE_IMPLICATION(print_all_code, print_code_stubs) +DEFINE_IMPLICATION(print_all_code, code_comments) #ifdef DEBUG -DEFINE_implication(print_all_code, trace_codegen) +DEFINE_IMPLICATION(print_all_code, trace_codegen) #endif #endif + +// +// VERIFY_PREDICTABLE related flags +// +#undef FLAG + +#ifdef VERIFY_PREDICTABLE +#define FLAG FLAG_FULL +#else +#define FLAG FLAG_READONLY +#endif + +DEFINE_BOOL(verify_predictable, false, + "this mode is used for checking that V8 behaves predictably") +DEFINE_INT(dump_allocations_digest_at_alloc, 0, + "dump allocations digest each n-th allocation") + + // // Read-only flags // @@ -876,7 +901,7 @@ DEFINE_implication(print_all_code, trace_codegen) #define FLAG FLAG_READONLY // assembler-arm.h -DEFINE_bool(enable_ool_constant_pool, V8_OOL_CONSTANT_POOL, +DEFINE_BOOL(enable_ool_constant_pool, V8_OOL_CONSTANT_POOL, "enable use of out-of-line constant pools (ARM only)") // Cleanup... 
@@ -885,19 +910,19 @@ DEFINE_bool(enable_ool_constant_pool, V8_OOL_CONSTANT_POOL, #undef FLAG #undef FLAG_ALIAS -#undef DEFINE_bool -#undef DEFINE_maybe_bool -#undef DEFINE_int -#undef DEFINE_string -#undef DEFINE_float -#undef DEFINE_args -#undef DEFINE_implication -#undef DEFINE_neg_implication -#undef DEFINE_ALIAS_bool -#undef DEFINE_ALIAS_int -#undef DEFINE_ALIAS_string -#undef DEFINE_ALIAS_float -#undef DEFINE_ALIAS_args +#undef DEFINE_BOOL +#undef DEFINE_MAYBE_BOOL +#undef DEFINE_INT +#undef DEFINE_STRING +#undef DEFINE_FLOAT +#undef DEFINE_ARGS +#undef DEFINE_IMPLICATION +#undef DEFINE_NEG_IMPLICATION +#undef DEFINE_ALIAS_BOOL +#undef DEFINE_ALIAS_INT +#undef DEFINE_ALIAS_STRING +#undef DEFINE_ALIAS_FLOAT +#undef DEFINE_ALIAS_ARGS #undef FLAG_MODE_DECLARE #undef FLAG_MODE_DEFINE diff --git a/deps/v8/src/flags.cc b/deps/v8/src/flags.cc index 19a10e4165f..98f21ef2c4c 100644 --- a/deps/v8/src/flags.cc +++ b/deps/v8/src/flags.cc @@ -5,26 +5,22 @@ #include <ctype.h> #include <stdlib.h> -#include "v8.h" +#include "src/v8.h" -#include "platform.h" -#include "smart-pointers.h" -#include "string-stream.h" - -#if V8_TARGET_ARCH_ARM -#include "arm/assembler-arm-inl.h" -#endif +#include "src/assembler.h" +#include "src/base/platform/platform.h" +#include "src/ostreams.h" namespace v8 { namespace internal { // Define all of our flags. #define FLAG_MODE_DEFINE -#include "flag-definitions.h" +#include "src/flag-definitions.h" // NOLINT // Define all of our flags default values. #define FLAG_MODE_DEFINE_DEFAULTS -#include "flag-definitions.h" +#include "src/flag-definitions.h" // NOLINT namespace { @@ -49,32 +45,32 @@ struct Flag { const char* comment() const { return cmt_; } bool* bool_variable() const { - ASSERT(type_ == TYPE_BOOL); + DCHECK(type_ == TYPE_BOOL); return reinterpret_cast<bool*>(valptr_); } MaybeBoolFlag* maybe_bool_variable() const { - ASSERT(type_ == TYPE_MAYBE_BOOL); + DCHECK(type_ == TYPE_MAYBE_BOOL); return reinterpret_cast<MaybeBoolFlag*>(valptr_); } int* int_variable() const { - ASSERT(type_ == TYPE_INT); + DCHECK(type_ == TYPE_INT); return reinterpret_cast<int*>(valptr_); } double* float_variable() const { - ASSERT(type_ == TYPE_FLOAT); + DCHECK(type_ == TYPE_FLOAT); return reinterpret_cast<double*>(valptr_); } const char* string_value() const { - ASSERT(type_ == TYPE_STRING); + DCHECK(type_ == TYPE_STRING); return *reinterpret_cast<const char**>(valptr_); } void set_string_value(const char* value, bool owns_ptr) { - ASSERT(type_ == TYPE_STRING); + DCHECK(type_ == TYPE_STRING); const char** ptr = reinterpret_cast<const char**>(valptr_); if (owns_ptr_ && *ptr != NULL) DeleteArray(*ptr); *ptr = value; @@ -82,32 +78,32 @@ struct Flag { } JSArguments* args_variable() const { - ASSERT(type_ == TYPE_ARGS); + DCHECK(type_ == TYPE_ARGS); return reinterpret_cast<JSArguments*>(valptr_); } bool bool_default() const { - ASSERT(type_ == TYPE_BOOL); + DCHECK(type_ == TYPE_BOOL); return *reinterpret_cast<const bool*>(defptr_); } int int_default() const { - ASSERT(type_ == TYPE_INT); + DCHECK(type_ == TYPE_INT); return *reinterpret_cast<const int*>(defptr_); } double float_default() const { - ASSERT(type_ == TYPE_FLOAT); + DCHECK(type_ == TYPE_FLOAT); return *reinterpret_cast<const double*>(defptr_); } const char* string_default() const { - ASSERT(type_ == TYPE_STRING); + DCHECK(type_ == TYPE_STRING); return *reinterpret_cast<const char* const *>(defptr_); } JSArguments args_default() const { - ASSERT(type_ == TYPE_ARGS); + DCHECK(type_ == TYPE_ARGS); return *reinterpret_cast<const 
JSArguments*>(defptr_); } @@ -163,7 +159,7 @@ struct Flag { Flag flags[] = { #define FLAG_MODE_META -#include "flag-definitions.h" +#include "src/flag-definitions.h" }; const size_t num_flags = sizeof(flags) / sizeof(*flags); @@ -185,41 +181,39 @@ static const char* Type2String(Flag::FlagType type) { } -static SmartArrayPointer<const char> ToString(Flag* flag) { - HeapStringAllocator string_allocator; - StringStream buffer(&string_allocator); - switch (flag->type()) { +OStream& operator<<(OStream& os, const Flag& flag) { // NOLINT + switch (flag.type()) { case Flag::TYPE_BOOL: - buffer.Add("%s", (*flag->bool_variable() ? "true" : "false")); + os << (*flag.bool_variable() ? "true" : "false"); break; case Flag::TYPE_MAYBE_BOOL: - buffer.Add("%s", flag->maybe_bool_variable()->has_value - ? (flag->maybe_bool_variable()->value ? "true" : "false") - : "unset"); + os << (flag.maybe_bool_variable()->has_value + ? (flag.maybe_bool_variable()->value ? "true" : "false") + : "unset"); break; case Flag::TYPE_INT: - buffer.Add("%d", *flag->int_variable()); + os << *flag.int_variable(); break; case Flag::TYPE_FLOAT: - buffer.Add("%f", FmtElm(*flag->float_variable())); + os << *flag.float_variable(); break; case Flag::TYPE_STRING: { - const char* str = flag->string_value(); - buffer.Add("%s", str ? str : "NULL"); + const char* str = flag.string_value(); + os << (str ? str : "NULL"); break; } case Flag::TYPE_ARGS: { - JSArguments args = *flag->args_variable(); + JSArguments args = *flag.args_variable(); if (args.argc > 0) { - buffer.Add("%s", args[0]); + os << args[0]; for (int i = 1; i < args.argc; i++) { - buffer.Add(" %s", args[i]); + os << args[i]; } } break; } } - return buffer.ToCString(); + return os; } @@ -231,28 +225,27 @@ List<const char*>* FlagList::argv() { Flag* f = &flags[i]; if (!f->IsDefault()) { if (f->type() == Flag::TYPE_ARGS) { - ASSERT(args_flag == NULL); + DCHECK(args_flag == NULL); args_flag = f; // Must be last in arguments. continue; } - HeapStringAllocator string_allocator; - StringStream buffer(&string_allocator); - if (f->type() != Flag::TYPE_BOOL || *(f->bool_variable())) { - buffer.Add("--%s", f->name()); - } else { - buffer.Add("--no%s", f->name()); + { + bool disabled = f->type() == Flag::TYPE_BOOL && !*f->bool_variable(); + OStringStream os; + os << (disabled ? 
"--no" : "--") << f->name(); + args->Add(StrDup(os.c_str())); } - args->Add(buffer.ToCString().Detach()); if (f->type() != Flag::TYPE_BOOL) { - args->Add(ToString(f).Detach()); + OStringStream os; + os << *f; + args->Add(StrDup(os.c_str())); } } } if (args_flag != NULL) { - HeapStringAllocator string_allocator; - StringStream buffer(&string_allocator); - buffer.Add("--%s", args_flag->name()); - args->Add(buffer.ToCString().Detach()); + OStringStream os; + os << "--" << args_flag->name(); + args->Add(StrDup(os.c_str())); JSArguments jsargs = *args_flag->args_variable(); for (int j = 0; j < jsargs.argc; j++) { args->Add(StrDup(jsargs[j])); @@ -308,7 +301,7 @@ static void SplitArgument(const char* arg, // make a copy so we can NUL-terminate flag name size_t n = arg - *name; CHECK(n < static_cast<size_t>(buffer_size)); // buffer is too small - OS::MemCopy(buffer, *name, n); + MemCopy(buffer, *name, n); buffer[n] = '\0'; *name = buffer; // get the value @@ -337,15 +330,10 @@ static Flag* FindFlag(const char* name) { } -bool FlagList::serializer_enabled_ = false; - - // static int FlagList::SetFlagsFromCommandLine(int* argc, char** argv, - bool remove_flags, - bool serializer_enabled) { - serializer_enabled_ = serializer_enabled; + bool remove_flags) { int return_code = 0; // parse arguments for (int i = 1; i < *argc;) { @@ -384,7 +372,8 @@ int FlagList::SetFlagsFromCommandLine(int* argc, value == NULL) { if (i < *argc) { value = argv[i++]; - } else { + } + if (!value) { PrintF(stderr, "Error: missing value for flag %s of type %s\n" "Try --help for options\n", arg, Type2String(flag->type())); @@ -483,7 +472,7 @@ static char* SkipBlackSpace(char* p) { int FlagList::SetFlagsFromString(const char* str, int len) { // make a 0-terminated copy of str ScopedVector<char> copy0(len + 1); - OS::MemCopy(copy0.start(), str, len); + MemCopy(copy0.start(), str, len); copy0[len] = '\0'; // strip leading white space @@ -525,30 +514,29 @@ void FlagList::ResetAllFlags() { // static void FlagList::PrintHelp() { -#if V8_TARGET_ARCH_ARM + CpuFeatures::Probe(false); CpuFeatures::PrintTarget(); - CpuFeatures::Probe(serializer_enabled_); CpuFeatures::PrintFeatures(); -#endif // V8_TARGET_ARCH_ARM - - printf("Usage:\n"); - printf(" shell [options] -e string\n"); - printf(" execute string in V8\n"); - printf(" shell [options] file1 file2 ... filek\n"); - printf(" run JavaScript scripts in file1, file2, ..., filek\n"); - printf(" shell [options]\n"); - printf(" shell [options] --shell [file1 file2 ... filek]\n"); - printf(" run an interactive JavaScript shell\n"); - printf(" d8 [options] file1 file2 ... filek\n"); - printf(" d8 [options]\n"); - printf(" d8 [options] --shell [file1 file2 ... filek]\n"); - printf(" run the new debugging shell\n\n"); - printf("Options:\n"); + + OFStream os(stdout); + os << "Usage:\n" + << " shell [options] -e string\n" + << " execute string in V8\n" + << " shell [options] file1 file2 ... filek\n" + << " run JavaScript scripts in file1, file2, ..., filek\n" + << " shell [options]\n" + << " shell [options] --shell [file1 file2 ... filek]\n" + << " run an interactive JavaScript shell\n" + << " d8 [options] file1 file2 ... filek\n" + << " d8 [options]\n" + << " d8 [options] --shell [file1 file2 ... 
filek]\n" + << " run the new debugging shell\n\n" + << "Options:\n"; for (size_t i = 0; i < num_flags; ++i) { Flag* f = &flags[i]; - SmartArrayPointer<const char> value = ToString(f); - printf(" --%s (%s)\n type: %s default: %s\n", - f->name(), f->comment(), Type2String(f->type()), value.get()); + os << " --" << f->name() << " (" << f->comment() << ")\n" + << " type: " << Type2String(f->type()) << " default: " << *f + << "\n"; } } @@ -556,7 +544,7 @@ void FlagList::PrintHelp() { // static void FlagList::EnforceFlagImplications() { #define FLAG_MODE_DEFINE_IMPLICATIONS -#include "flag-definitions.h" +#include "src/flag-definitions.h" #undef FLAG_MODE_DEFINE_IMPLICATIONS } diff --git a/deps/v8/src/flags.h b/deps/v8/src/flags.h index df786d7c333..78522ffce62 100644 --- a/deps/v8/src/flags.h +++ b/deps/v8/src/flags.h @@ -5,14 +5,14 @@ #ifndef V8_FLAGS_H_ #define V8_FLAGS_H_ -#include "atomicops.h" +#include "src/globals.h" namespace v8 { namespace internal { // Declare all of our flags. #define FLAG_MODE_DECLARE -#include "flag-definitions.h" +#include "src/flag-definitions.h" // NOLINT // The global list of all flags. class FlagList { @@ -42,8 +42,7 @@ class FlagList { // -- (equivalent to --js_arguments, captures all remaining args) static int SetFlagsFromCommandLine(int* argc, char** argv, - bool remove_flags, - bool serializer_enabled = false); + bool remove_flags); // Set the flag values by parsing the string str. Splits string into argc // substrings argv[], each of which consisting of non-white-space chars, @@ -58,10 +57,6 @@ class FlagList { // Set flags as consequence of being implied by another flag. static void EnforceFlagImplications(); - - private: - // TODO(svenpanne) Remove this when Serializer/startup has been refactored. - static bool serializer_enabled_; }; } } // namespace v8::internal diff --git a/deps/v8/src/frames-inl.h b/deps/v8/src/frames-inl.h index 9b5d4dbb96b..9241a449f4c 100644 --- a/deps/v8/src/frames-inl.h +++ b/deps/v8/src/frames-inl.h @@ -5,20 +5,24 @@ #ifndef V8_FRAMES_INL_H_ #define V8_FRAMES_INL_H_ -#include "frames.h" -#include "isolate.h" -#include "v8memory.h" +#include "src/frames.h" +#include "src/isolate.h" +#include "src/v8memory.h" #if V8_TARGET_ARCH_IA32 -#include "ia32/frames-ia32.h" +#include "src/ia32/frames-ia32.h" // NOLINT #elif V8_TARGET_ARCH_X64 -#include "x64/frames-x64.h" +#include "src/x64/frames-x64.h" // NOLINT #elif V8_TARGET_ARCH_ARM64 -#include "arm64/frames-arm64.h" +#include "src/arm64/frames-arm64.h" // NOLINT #elif V8_TARGET_ARCH_ARM -#include "arm/frames-arm.h" +#include "src/arm/frames-arm.h" // NOLINT #elif V8_TARGET_ARCH_MIPS -#include "mips/frames-mips.h" +#include "src/mips/frames-mips.h" // NOLINT +#elif V8_TARGET_ARCH_MIPS64 +#include "src/mips64/frames-mips64.h" // NOLINT +#elif V8_TARGET_ARCH_X87 +#include "src/x87/frames-x87.h" // NOLINT #else #error Unsupported target architecture. 
#endif @@ -204,7 +208,7 @@ inline JavaScriptFrame::JavaScriptFrame(StackFrameIteratorBase* iterator) Address JavaScriptFrame::GetParameterSlot(int index) const { int param_count = ComputeParametersCount(); - ASSERT(-1 <= index && index < param_count); + DCHECK(-1 <= index && index < param_count); int parameter_offset = (param_count - index - 1) * kPointerSize; return caller_sp() + parameter_offset; } @@ -217,10 +221,10 @@ Object* JavaScriptFrame::GetParameter(int index) const { inline Address JavaScriptFrame::GetOperandSlot(int index) const { Address base = fp() + JavaScriptFrameConstants::kLocal0Offset; - ASSERT(IsAddressAligned(base, kPointerSize)); - ASSERT_EQ(type(), JAVA_SCRIPT); - ASSERT_LT(index, ComputeOperandsCount()); - ASSERT_LE(0, index); + DCHECK(IsAddressAligned(base, kPointerSize)); + DCHECK_EQ(type(), JAVA_SCRIPT); + DCHECK_LT(index, ComputeOperandsCount()); + DCHECK_LE(0, index); // Operand stack grows down. return base - index * kPointerSize; } @@ -236,9 +240,9 @@ inline int JavaScriptFrame::ComputeOperandsCount() const { // Base points to low address of first operand and stack grows down, so add // kPointerSize to get the actual stack size. intptr_t stack_size_in_bytes = (base + kPointerSize) - sp(); - ASSERT(IsAligned(stack_size_in_bytes, kPointerSize)); - ASSERT(type() == JAVA_SCRIPT); - ASSERT(stack_size_in_bytes >= 0); + DCHECK(IsAligned(stack_size_in_bytes, kPointerSize)); + DCHECK(type() == JAVA_SCRIPT); + DCHECK(stack_size_in_bytes >= 0); return static_cast<int>(stack_size_in_bytes >> kPointerSizeLog2); } @@ -313,14 +317,14 @@ inline JavaScriptFrame* JavaScriptFrameIterator::frame() const { // the JavaScript frame type, because we may encounter arguments // adaptor frames. StackFrame* frame = iterator_.frame(); - ASSERT(frame->is_java_script() || frame->is_arguments_adaptor()); + DCHECK(frame->is_java_script() || frame->is_arguments_adaptor()); return static_cast<JavaScriptFrame*>(frame); } inline StackFrame* SafeStackFrameIterator::frame() const { - ASSERT(!done()); - ASSERT(frame_->is_java_script() || frame_->is_exit()); + DCHECK(!done()); + DCHECK(frame_->is_java_script() || frame_->is_exit()); return frame_; } diff --git a/deps/v8/src/frames.cc b/deps/v8/src/frames.cc index e7c2a149ea8..e892f805efa 100644 --- a/deps/v8/src/frames.cc +++ b/deps/v8/src/frames.cc @@ -2,18 +2,17 @@ // Use of this source code is governed by a BSD-style license that can be // found in the LICENSE file. -#include "v8.h" - -#include "ast.h" -#include "deoptimizer.h" -#include "frames-inl.h" -#include "full-codegen.h" -#include "lazy-instance.h" -#include "mark-compact.h" -#include "safepoint-table.h" -#include "scopeinfo.h" -#include "string-stream.h" -#include "vm-state-inl.h" +#include "src/v8.h" + +#include "src/ast.h" +#include "src/deoptimizer.h" +#include "src/frames-inl.h" +#include "src/full-codegen.h" +#include "src/heap/mark-compact.h" +#include "src/safepoint-table.h" +#include "src/scopeinfo.h" +#include "src/string-stream.h" +#include "src/vm-state-inl.h" namespace v8 { namespace internal { @@ -30,7 +29,7 @@ class StackHandlerIterator BASE_EMBEDDED { StackHandlerIterator(const StackFrame* frame, StackHandler* handler) : limit_(frame->fp()), handler_(handler) { // Make sure the handler has already been unwound to this frame. 
- ASSERT(frame->sp() <= handler->address()); + DCHECK(frame->sp() <= handler->address()); } StackHandler* handler() const { return handler_; } @@ -39,7 +38,7 @@ class StackHandlerIterator BASE_EMBEDDED { return handler_ == NULL || handler_->address() > limit_; } void Advance() { - ASSERT(!done()); + DCHECK(!done()); handler_ = handler_->next(); } @@ -76,7 +75,7 @@ StackFrameIterator::StackFrameIterator(Isolate* isolate, ThreadLocalTop* t) void StackFrameIterator::Advance() { - ASSERT(!done()); + DCHECK(!done()); // Compute the state of the calling frame before restoring // callee-saved registers and unwinding handlers. This allows the // frame code that computes the caller state to access the top @@ -94,7 +93,7 @@ void StackFrameIterator::Advance() { // When we're done iterating over the stack frames, the handler // chain must have been completely unwound. - ASSERT(!done() || handler_ == NULL); + DCHECK(!done() || handler_ == NULL); } @@ -112,7 +111,7 @@ StackFrame* StackFrameIteratorBase::SingletonFor(StackFrame::Type type, StackFrame::State* state) { if (type == StackFrame::NONE) return NULL; StackFrame* result = SingletonFor(type); - ASSERT(result != NULL); + DCHECK(result != NULL); result->state_ = *state; return result; } @@ -157,7 +156,7 @@ void JavaScriptFrameIterator::Advance() { void JavaScriptFrameIterator::AdvanceToArgumentsFrame() { if (!frame()->has_adapted_arguments()) return; iterator_.Advance(); - ASSERT(iterator_.frame()->is_arguments_adaptor()); + DCHECK(iterator_.frame()->is_arguments_adaptor()); } @@ -206,7 +205,7 @@ SafeStackFrameIterator::SafeStackFrameIterator( type = ExitFrame::GetStateForFramePointer(Isolate::c_entry_fp(top), &state); top_frame_type_ = type; } else if (IsValidStackAddress(fp)) { - ASSERT(fp != NULL); + DCHECK(fp != NULL); state.fp = fp; state.sp = sp; state.pc_address = StackFrame::ResolveReturnAddressLocation( @@ -259,7 +258,7 @@ bool SafeStackFrameIterator::IsValidTop(ThreadLocalTop* top) const { void SafeStackFrameIterator::AdvanceOneFrame() { - ASSERT(!done()); + DCHECK(!done()); StackFrame* last_frame = frame_; Address last_sp = last_frame->sp(), last_fp = last_frame->fp(); // Before advancing to the next stack frame, perform pointer validity tests. 
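Most of the churn in the flags.cc, frames-inl.h, and frames.cc hunks is the project-wide ASSERT → DCHECK rename. As a rough mental model of the semantics, here is a simplified stand-in (not V8's actual checks header): debug builds evaluate and enforce the condition, release builds compile it out entirely.

#include <cstdio>
#include <cstdlib>

// Simplified stand-in for the DCHECK family: fatal in debug builds,
// a no-op that is compiled out in release builds.
#ifdef DEBUG
#define DCHECK(condition)                                      \
  do {                                                         \
    if (!(condition)) {                                        \
      std::fprintf(stderr, "Debug check failed: %s\n",         \
                   #condition);                                \
      std::abort();                                            \
    }                                                          \
  } while (false)
#else
#define DCHECK(condition) ((void)0)
#endif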
@@ -342,7 +341,7 @@ void SafeStackFrameIterator::Advance() { frame_->state_.pc_address = callback_address; } external_callback_scope_ = external_callback_scope_->previous(); - ASSERT(external_callback_scope_ == NULL || + DCHECK(external_callback_scope_ == NULL || external_callback_scope_->scope_address() > frame_->fp()); return; } @@ -362,9 +361,9 @@ Code* StackFrame::GetSafepointData(Isolate* isolate, isolate->inner_pointer_to_code_cache()->GetCacheEntry(inner_pointer); if (!entry->safepoint_entry.is_valid()) { entry->safepoint_entry = entry->code->GetSafepointEntry(inner_pointer); - ASSERT(entry->safepoint_entry.is_valid()); + DCHECK(entry->safepoint_entry.is_valid()); } else { - ASSERT(entry->safepoint_entry.Equals( + DCHECK(entry->safepoint_entry.Equals( entry->code->GetSafepointEntry(inner_pointer))); } @@ -391,7 +390,7 @@ void StackFrame::IteratePc(ObjectVisitor* v, Address* pc_address, Code* holder) { Address pc = *pc_address; - ASSERT(GcSafeCodeContains(holder, pc)); + DCHECK(GcSafeCodeContains(holder, pc)); unsigned pc_offset = static_cast<unsigned>(pc - holder->instruction_start()); Object* code = holder; v->VisitPointer(&code); @@ -405,14 +404,14 @@ void StackFrame::IteratePc(ObjectVisitor* v, void StackFrame::SetReturnAddressLocationResolver( ReturnAddressLocationResolver resolver) { - ASSERT(return_address_location_resolver_ == NULL); + DCHECK(return_address_location_resolver_ == NULL); return_address_location_resolver_ = resolver; } StackFrame::Type StackFrame::ComputeType(const StackFrameIteratorBase* iterator, State* state) { - ASSERT(state->fp != NULL); + DCHECK(state->fp != NULL); if (StandardFrame::IsArgumentsAdaptorFrame(state->fp)) { return ARGUMENTS_ADAPTOR; } @@ -429,7 +428,7 @@ StackFrame::Type StackFrame::ComputeType(const StackFrameIteratorBase* iterator, if (!iterator->can_access_heap_objects_) return JAVA_SCRIPT; Code::Kind kind = GetContainingCode(iterator->isolate(), *(state->pc_address))->kind(); - ASSERT(kind == Code::FUNCTION || kind == Code::OPTIMIZED_FUNCTION); + DCHECK(kind == Code::FUNCTION || kind == Code::OPTIMIZED_FUNCTION); return (kind == Code::OPTIMIZED_FUNCTION) ? OPTIMIZED : JAVA_SCRIPT; } return static_cast<StackFrame::Type>(Smi::cast(marker)->value()); @@ -450,7 +449,7 @@ StackFrame::Type StackFrame::GetCallerState(State* state) const { Address StackFrame::UnpaddedFP() const { -#if V8_TARGET_ARCH_IA32 +#if V8_TARGET_ARCH_IA32 || V8_TARGET_ARCH_X87 if (!is_optimized()) return fp(); int32_t alignment_state = Memory::int32_at( fp() + JavaScriptFrameConstants::kDynamicAlignmentStateOffset); @@ -540,7 +539,7 @@ StackFrame::Type ExitFrame::GetStateForFramePointer(Address fp, State* state) { if (fp == 0) return NONE; Address sp = ComputeStackPointer(fp); FillState(fp, sp, state); - ASSERT(*state->pc_address != NULL); + DCHECK(*state->pc_address != NULL); return EXIT; } @@ -582,7 +581,7 @@ int StandardFrame::ComputeExpressionsCount() const { StandardFrameConstants::kExpressionsOffset + kPointerSize; Address base = fp() + offset; Address limit = sp(); - ASSERT(base >= limit); // stack grows downwards + DCHECK(base >= limit); // stack grows downwards // Include register-allocated locals in number of expressions. return static_cast<int>((base - limit) / kPointerSize); } @@ -616,7 +615,7 @@ bool StandardFrame::IsExpressionInsideHandler(int n) const { void StandardFrame::IterateCompiledFrame(ObjectVisitor* v) const { // Make sure that we're not doing "safe" stack frame iteration. We cannot // possibly find pointers in optimized frames in that state. 
- ASSERT(can_access_heap_objects()); + DCHECK(can_access_heap_objects()); // Compute the safepoint information. unsigned stack_slots = 0; @@ -640,7 +639,7 @@ void StandardFrame::IterateCompiledFrame(ObjectVisitor* v) const { // Skip saved double registers. if (safepoint_entry.has_doubles()) { // Number of doubles not known at snapshot time. - ASSERT(!Serializer::enabled(isolate())); + DCHECK(!isolate()->serializer_enabled()); parameters_base += DoubleRegister::NumAllocatableRegisters() * kDoubleSize / kPointerSize; } @@ -709,7 +708,7 @@ void OptimizedFrame::Iterate(ObjectVisitor* v) const { #ifdef DEBUG // Make sure that optimized frames do not contain any stack handlers. StackHandlerIterator it(this, top_handler()); - ASSERT(it.done()); + DCHECK(it.done()); #endif IterateCompiledFrame(v); @@ -747,7 +746,7 @@ Code* JavaScriptFrame::unchecked_code() const { int JavaScriptFrame::GetNumberOfIncomingArguments() const { - ASSERT(can_access_heap_objects() && + DCHECK(can_access_heap_objects() && isolate()->heap()->gc_state() == Heap::NOT_IN_GC); return function()->shared()->formal_parameter_count(); @@ -760,13 +759,13 @@ Address JavaScriptFrame::GetCallerStackPointer() const { void JavaScriptFrame::GetFunctions(List<JSFunction*>* functions) { - ASSERT(functions->length() == 0); + DCHECK(functions->length() == 0); functions->Add(function()); } void JavaScriptFrame::Summarize(List<FrameSummary>* functions) { - ASSERT(functions->length() == 0); + DCHECK(functions->length() == 0); Code* code_pointer = LookupCode(); int offset = static_cast<int>(pc() - code_pointer->address()); FrameSummary summary(receiver(), @@ -778,9 +777,37 @@ void JavaScriptFrame::Summarize(List<FrameSummary>* functions) { } -void JavaScriptFrame::PrintTop(Isolate* isolate, - FILE* file, - bool print_args, +void JavaScriptFrame::PrintFunctionAndOffset(JSFunction* function, Code* code, + Address pc, FILE* file, + bool print_line_number) { + PrintF(file, "%s", function->IsOptimized() ? 
"*" : "~"); + function->PrintName(file); + int code_offset = static_cast<int>(pc - code->instruction_start()); + PrintF(file, "+%d", code_offset); + if (print_line_number) { + SharedFunctionInfo* shared = function->shared(); + int source_pos = code->SourcePosition(pc); + Object* maybe_script = shared->script(); + if (maybe_script->IsScript()) { + Script* script = Script::cast(maybe_script); + int line = script->GetLineNumber(source_pos) + 1; + Object* script_name_raw = script->name(); + if (script_name_raw->IsString()) { + String* script_name = String::cast(script->name()); + SmartArrayPointer<char> c_script_name = + script_name->ToCString(DISALLOW_NULLS, ROBUST_STRING_TRAVERSAL); + PrintF(file, " at %s:%d", c_script_name.get(), line); + } else { + PrintF(file, " at <unknown>:%d", line); + } + } else { + PrintF(file, " at <unknown>:<unknown>"); + } + } +} + + +void JavaScriptFrame::PrintTop(Isolate* isolate, FILE* file, bool print_args, bool print_line_number) { // constructor calls DisallowHeapAllocation no_allocation; @@ -789,37 +816,8 @@ void JavaScriptFrame::PrintTop(Isolate* isolate, if (it.frame()->is_java_script()) { JavaScriptFrame* frame = it.frame(); if (frame->IsConstructor()) PrintF(file, "new "); - // function name - JSFunction* fun = frame->function(); - fun->PrintName(); - Code* js_code = frame->unchecked_code(); - Address pc = frame->pc(); - int code_offset = - static_cast<int>(pc - js_code->instruction_start()); - PrintF("+%d", code_offset); - SharedFunctionInfo* shared = fun->shared(); - if (print_line_number) { - Code* code = Code::cast(isolate->FindCodeObject(pc)); - int source_pos = code->SourcePosition(pc); - Object* maybe_script = shared->script(); - if (maybe_script->IsScript()) { - Script* script = Script::cast(maybe_script); - int line = script->GetLineNumber(source_pos) + 1; - Object* script_name_raw = script->name(); - if (script_name_raw->IsString()) { - String* script_name = String::cast(script->name()); - SmartArrayPointer<char> c_script_name = - script_name->ToCString(DISALLOW_NULLS, - ROBUST_STRING_TRAVERSAL); - PrintF(file, " at %s:%d", c_script_name.get(), line); - } else { - PrintF(file, " at <unknown>:%d", line); - } - } else { - PrintF(file, " at <unknown>:<unknown>"); - } - } - + PrintFunctionAndOffset(frame->function(), frame->unchecked_code(), + frame->pc(), file, print_line_number); if (print_args) { // function arguments // (we are intentionally only printing the actually @@ -843,7 +841,7 @@ void JavaScriptFrame::PrintTop(Isolate* isolate, void JavaScriptFrame::SaveOperandStack(FixedArray* store, int* stack_handler_index) const { int operands_count = store->length(); - ASSERT_LE(operands_count, ComputeOperandsCount()); + DCHECK_LE(operands_count, ComputeOperandsCount()); // Visit the stack in LIFO order, saving operands and stack handlers into the // array. 
The saved stack handlers store a link to the next stack handler, @@ -857,8 +855,8 @@ void JavaScriptFrame::SaveOperandStack(FixedArray* store, for (; GetOperandSlot(i) < handler->address(); i--) { store->set(i, GetOperand(i)); } - ASSERT_GE(i + 1, StackHandlerConstants::kSlotCount); - ASSERT_EQ(handler->address(), GetOperandSlot(i)); + DCHECK_GE(i + 1, StackHandlerConstants::kSlotCount); + DCHECK_EQ(handler->address(), GetOperandSlot(i)); int next_stack_handler_index = i + 1 - StackHandlerConstants::kSlotCount; handler->Unwind(isolate(), store, next_stack_handler_index, *stack_handler_index); @@ -876,17 +874,17 @@ void JavaScriptFrame::SaveOperandStack(FixedArray* store, void JavaScriptFrame::RestoreOperandStack(FixedArray* store, int stack_handler_index) { int operands_count = store->length(); - ASSERT_LE(operands_count, ComputeOperandsCount()); + DCHECK_LE(operands_count, ComputeOperandsCount()); int i = 0; while (i <= stack_handler_index) { if (i < stack_handler_index) { // An operand. - ASSERT_EQ(GetOperand(i), isolate()->heap()->the_hole_value()); + DCHECK_EQ(GetOperand(i), isolate()->heap()->the_hole_value()); Memory::Object_at(GetOperandSlot(i)) = store->get(i); i++; } else { // A stack handler. - ASSERT_EQ(i, stack_handler_index); + DCHECK_EQ(i, stack_handler_index); // The FixedArray store grows up. The stack grows down. So the operand // slot for i actually points to the bottom of the top word in the // handler. The base of the StackHandler* is the address of the bottom @@ -900,7 +898,7 @@ void JavaScriptFrame::RestoreOperandStack(FixedArray* store, } for (; i < operands_count; i++) { - ASSERT_EQ(GetOperand(i), isolate()->heap()->the_hole_value()); + DCHECK_EQ(GetOperand(i), isolate()->heap()->the_hole_value()); Memory::Object_at(GetOperandSlot(i)) = store->get(i); } } @@ -930,8 +928,14 @@ JSFunction* OptimizedFrame::LiteralAt(FixedArray* literal_array, void OptimizedFrame::Summarize(List<FrameSummary>* frames) { - ASSERT(frames->length() == 0); - ASSERT(is_optimized()); + DCHECK(frames->length() == 0); + DCHECK(is_optimized()); + + // Delegate to JS frame in absence of inlining. + // TODO(turbofan): Revisit once we support inlining. + if (LookupCode()->is_turbofanned()) { + return JavaScriptFrame::Summarize(frames); + } int deopt_index = Safepoint::kNoDeoptimizationIndex; DeoptimizationInputData* data = GetDeoptimizationData(&deopt_index); @@ -942,15 +946,12 @@ void OptimizedFrame::Summarize(List<FrameSummary>* frames) { // throw. An entry with no deoptimization index indicates a call-site // without a lazy-deopt. As a consequence we are not allowed to inline // functions containing throw. - if (deopt_index == Safepoint::kNoDeoptimizationIndex) { - JavaScriptFrame::Summarize(frames); - return; - } + DCHECK(deopt_index != Safepoint::kNoDeoptimizationIndex); TranslationIterator it(data->TranslationByteArray(), data->TranslationIndex(deopt_index)->value()); Translation::Opcode opcode = static_cast<Translation::Opcode>(it.Next()); - ASSERT(opcode == Translation::BEGIN); + DCHECK(opcode == Translation::BEGIN); it.Next(); // Drop frame count. 
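// A sketch of the translation stream layout this loop assumes (operand names
// are illustrative, not V8's exact encoding): after the BEGIN opcode comes a
// total frame count (dropped above) and then the number of JS frames, which
// drives the walk over inlined frames below:
//   BEGIN, frame_count, jsframe_count,
//     { JS_FRAME(...) | CONSTRUCT_STUB_FRAME(...) | other frame opcodes }...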
int jsframe_count = it.Next(); @@ -1010,7 +1011,7 @@ void OptimizedFrame::Summarize(List<FrameSummary>* frames) { function->shared()); unsigned pc_offset = FullCodeGenerator::PcField::decode(entry) + Code::kHeaderSize; - ASSERT(pc_offset > 0); + DCHECK(pc_offset > 0); FrameSummary summary(receiver, function, code, pc_offset, is_constructor); frames->Add(summary); @@ -1018,20 +1019,20 @@ void OptimizedFrame::Summarize(List<FrameSummary>* frames) { } else if (opcode == Translation::CONSTRUCT_STUB_FRAME) { // The next encountered JS_FRAME will be marked as a constructor call. it.Skip(Translation::NumberOfOperandsFor(opcode)); - ASSERT(!is_constructor); + DCHECK(!is_constructor); is_constructor = true; } else { // Skip over operands to advance to the next opcode. it.Skip(Translation::NumberOfOperandsFor(opcode)); } } - ASSERT(!is_constructor); + DCHECK(!is_constructor); } DeoptimizationInputData* OptimizedFrame::GetDeoptimizationData( int* deopt_index) { - ASSERT(is_optimized()); + DCHECK(is_optimized()); JSFunction* opt_function = function(); Code* code = opt_function->code(); @@ -1043,19 +1044,25 @@ DeoptimizationInputData* OptimizedFrame::GetDeoptimizationData( code = isolate()->inner_pointer_to_code_cache()-> GcSafeFindCodeForInnerPointer(pc()); } - ASSERT(code != NULL); - ASSERT(code->kind() == Code::OPTIMIZED_FUNCTION); + DCHECK(code != NULL); + DCHECK(code->kind() == Code::OPTIMIZED_FUNCTION); SafepointEntry safepoint_entry = code->GetSafepointEntry(pc()); *deopt_index = safepoint_entry.deoptimization_index(); - ASSERT(*deopt_index != Safepoint::kNoDeoptimizationIndex); + DCHECK(*deopt_index != Safepoint::kNoDeoptimizationIndex); return DeoptimizationInputData::cast(code->deoptimization_data()); } int OptimizedFrame::GetInlineCount() { - ASSERT(is_optimized()); + DCHECK(is_optimized()); + + // Delegate to JS frame in absence of inlining. + // TODO(turbofan): Revisit once we support inlining. + if (LookupCode()->is_turbofanned()) { + return JavaScriptFrame::GetInlineCount(); + } int deopt_index = Safepoint::kNoDeoptimizationIndex; DeoptimizationInputData* data = GetDeoptimizationData(&deopt_index); @@ -1063,7 +1070,7 @@ int OptimizedFrame::GetInlineCount() { TranslationIterator it(data->TranslationByteArray(), data->TranslationIndex(deopt_index)->value()); Translation::Opcode opcode = static_cast<Translation::Opcode>(it.Next()); - ASSERT(opcode == Translation::BEGIN); + DCHECK(opcode == Translation::BEGIN); USE(opcode); it.Next(); // Drop frame count. int jsframe_count = it.Next(); @@ -1072,8 +1079,14 @@ int OptimizedFrame::GetInlineCount() { void OptimizedFrame::GetFunctions(List<JSFunction*>* functions) { - ASSERT(functions->length() == 0); - ASSERT(is_optimized()); + DCHECK(functions->length() == 0); + DCHECK(is_optimized()); + + // Delegate to JS frame in absence of inlining. + // TODO(turbofan): Revisit once we support inlining. + if (LookupCode()->is_turbofanned()) { + return JavaScriptFrame::GetFunctions(functions); + } int deopt_index = Safepoint::kNoDeoptimizationIndex; DeoptimizationInputData* data = GetDeoptimizationData(&deopt_index); @@ -1082,7 +1095,7 @@ void OptimizedFrame::GetFunctions(List<JSFunction*>* functions) { TranslationIterator it(data->TranslationByteArray(), data->TranslationIndex(deopt_index)->value()); Translation::Opcode opcode = static_cast<Translation::Opcode>(it.Next()); - ASSERT(opcode == Translation::BEGIN); + DCHECK(opcode == Translation::BEGIN); it.Next(); // Drop frame count. 
int jsframe_count = it.Next(); @@ -1130,7 +1143,7 @@ Code* ArgumentsAdaptorFrame::unchecked_code() const { Code* InternalFrame::unchecked_code() const { const int offset = InternalFrameConstants::kCodeOffset; Object* code = Memory::Object_at(fp() + offset); - ASSERT(code != NULL); + DCHECK(code != NULL); return reinterpret_cast<Code*>(code); } @@ -1235,6 +1248,10 @@ void JavaScriptFrame::Print(StringStream* accumulator, if (this->context() != NULL && this->context()->IsContext()) { context = Context::cast(this->context()); } + while (context->IsWithContext()) { + context = context->previous(); + DCHECK(context != NULL); + } // Print heap-allocated local variables. if (heap_locals_count > 0) { @@ -1245,8 +1262,9 @@ void JavaScriptFrame::Print(StringStream* accumulator, accumulator->PrintName(scope_info->ContextLocalName(i)); accumulator->Add(" = "); if (context != NULL) { - if (i < context->length()) { - accumulator->Add("%o", context->get(Context::MIN_CONTEXT_SLOTS + i)); + int index = Context::MIN_CONTEXT_SLOTS + i; + if (index < context->length()) { + accumulator->Add("%o", context->get(index)); } else { accumulator->Add( "// warning: missing context slot - inconsistent frame?"); @@ -1269,10 +1287,12 @@ void JavaScriptFrame::Print(StringStream* accumulator, // Print details about the function. if (FLAG_max_stack_trace_source_length != 0 && code != NULL) { + OStringStream os; SharedFunctionInfo* shared = function->shared(); - accumulator->Add("--------- s o u r c e c o d e ---------\n"); - shared->SourceCodePrint(accumulator, FLAG_max_stack_trace_source_length); - accumulator->Add("\n-----------------------------------------\n"); + os << "--------- s o u r c e c o d e ---------\n" + << SourceCodeOf(shared, FLAG_max_stack_trace_source_length) + << "\n-----------------------------------------\n"; + accumulator->Add(os.c_str()); } accumulator->Add("}\n\n"); @@ -1311,15 +1331,15 @@ void ArgumentsAdaptorFrame::Print(StringStream* accumulator, void EntryFrame::Iterate(ObjectVisitor* v) const { StackHandlerIterator it(this, top_handler()); - ASSERT(!it.done()); + DCHECK(!it.done()); StackHandler* handler = it.handler(); - ASSERT(handler->is_js_entry()); + DCHECK(handler->is_js_entry()); handler->Iterate(v, LookupCode()); #ifdef DEBUG // Make sure that the entry frame does not contain more than one // stack handler. 
it.Advance(); - ASSERT(it.done()); + DCHECK(it.done()); #endif IteratePc(v, pc_address(), LookupCode()); } @@ -1399,7 +1419,7 @@ Code* StubFailureTrampolineFrame::unchecked_code() const { JavaScriptFrame* StackFrameLocator::FindJavaScriptFrame(int n) { - ASSERT(n >= 0); + DCHECK(n >= 0); for (int i = 0; i <= n; i++) { while (!iterator_.frame()->is_java_script()) iterator_.Advance(); if (i == n) return JavaScriptFrame::cast(iterator_.frame()); @@ -1428,7 +1448,7 @@ static int GcSafeSizeOfCodeSpaceObject(HeapObject* object) { #ifdef DEBUG static bool GcSafeCodeContains(HeapObject* code, Address addr) { Map* map = GcSafeMapOfCodeSpaceObject(code); - ASSERT(map == code->GetHeap()->code_map()); + DCHECK(map == code->GetHeap()->code_map()); Address start = code->address(); Address end = code->address() + code->SizeFromMap(map); return start <= addr && addr < end; @@ -1439,7 +1459,7 @@ static bool GcSafeCodeContains(HeapObject* code, Address addr) { Code* InnerPointerToCodeCache::GcSafeCastToCode(HeapObject* object, Address inner_pointer) { Code* code = reinterpret_cast<Code*>(object); - ASSERT(code != NULL && GcSafeCodeContains(code, inner_pointer)); + DCHECK(code != NULL && GcSafeCodeContains(code, inner_pointer)); return code; } @@ -1480,7 +1500,7 @@ Code* InnerPointerToCodeCache::GcSafeFindCodeForInnerPointer( InnerPointerToCodeCache::InnerPointerToCodeCacheEntry* InnerPointerToCodeCache::GetCacheEntry(Address inner_pointer) { isolate_->counters()->pc_to_code()->Increment(); - ASSERT(IsPowerOf2(kInnerPointerToCodeCacheSize)); + DCHECK(IsPowerOf2(kInnerPointerToCodeCacheSize)); uint32_t hash = ComputeIntegerHash( static_cast<uint32_t>(reinterpret_cast<uintptr_t>(inner_pointer)), v8::internal::kZeroHashSeed); @@ -1488,7 +1508,7 @@ InnerPointerToCodeCache::InnerPointerToCodeCacheEntry* InnerPointerToCodeCacheEntry* entry = cache(index); if (entry->inner_pointer == inner_pointer) { isolate_->counters()->pc_to_code_cached()->Increment(); - ASSERT(entry->code == GcSafeFindCodeForInnerPointer(inner_pointer)); + DCHECK(entry->code == GcSafeFindCodeForInnerPointer(inner_pointer)); } else { // Because this code may be interrupted by a profiling signal that // also queries the cache, we cannot update inner_pointer before the code @@ -1510,8 +1530,8 @@ void StackHandler::Unwind(Isolate* isolate, int offset, int previous_handler_offset) const { STATIC_ASSERT(StackHandlerConstants::kSlotCount >= 5); - ASSERT_LE(0, offset); - ASSERT_GE(array->length(), offset + StackHandlerConstants::kSlotCount); + DCHECK_LE(0, offset); + DCHECK_GE(array->length(), offset + StackHandlerConstants::kSlotCount); // Unwinding a stack handler into an array chains it in the opposite // direction, re-using the "next" slot as a "previous" link, so that stack // handlers can be later re-wound in the correct order. 
Decode the "state" @@ -1531,8 +1551,8 @@ int StackHandler::Rewind(Isolate* isolate, int offset, Address fp) { STATIC_ASSERT(StackHandlerConstants::kSlotCount >= 5); - ASSERT_LE(0, offset); - ASSERT_GE(array->length(), offset + StackHandlerConstants::kSlotCount); + DCHECK_LE(0, offset); + DCHECK_GE(array->length(), offset + StackHandlerConstants::kSlotCount); Smi* prev_handler_offset = Smi::cast(array->get(offset)); Code* code = Code::cast(array->get(offset + 1)); Smi* smi_index = Smi::cast(array->get(offset + 2)); @@ -1575,12 +1595,12 @@ void SetUpJSCallerSavedCodeData() { if ((kJSCallerSaved & (1 << r)) != 0) caller_saved_code_data.reg_code[i++] = r; - ASSERT(i == kNumJSCallerSaved); + DCHECK(i == kNumJSCallerSaved); } int JSCallerSavedCode(int n) { - ASSERT(0 <= n && n < kNumJSCallerSaved); + DCHECK(0 <= n && n < kNumJSCallerSaved); return caller_saved_code_data.reg_code[n]; } diff --git a/deps/v8/src/frames.h b/deps/v8/src/frames.h index 3dd6e93681b..f7e60aef33b 100644 --- a/deps/v8/src/frames.h +++ b/deps/v8/src/frames.h @@ -5,9 +5,9 @@ #ifndef V8_FRAMES_H_ #define V8_FRAMES_H_ -#include "allocation.h" -#include "handles.h" -#include "safepoint-table.h" +#include "src/allocation.h" +#include "src/handles.h" +#include "src/safepoint-table.h" namespace v8 { namespace internal { @@ -371,7 +371,7 @@ class EntryFrame: public StackFrame { virtual void Iterate(ObjectVisitor* v) const; static EntryFrame* cast(StackFrame* frame) { - ASSERT(frame->is_entry()); + DCHECK(frame->is_entry()); return static_cast<EntryFrame*>(frame); } virtual void SetCallerFp(Address caller_fp); @@ -399,7 +399,7 @@ class EntryConstructFrame: public EntryFrame { virtual Code* unchecked_code() const; static EntryConstructFrame* cast(StackFrame* frame) { - ASSERT(frame->is_entry_construct()); + DCHECK(frame->is_entry_construct()); return static_cast<EntryConstructFrame*>(frame); } @@ -427,7 +427,7 @@ class ExitFrame: public StackFrame { virtual void SetCallerFp(Address caller_fp); static ExitFrame* cast(StackFrame* frame) { - ASSERT(frame->is_exit()); + DCHECK(frame->is_exit()); return static_cast<ExitFrame*>(frame); } @@ -467,7 +467,7 @@ class StandardFrame: public StackFrame { virtual void SetCallerFp(Address caller_fp); static StandardFrame* cast(StackFrame* frame) { - ASSERT(frame->is_standard()); + DCHECK(frame->is_standard()); return static_cast<StandardFrame*>(frame); } @@ -610,13 +610,15 @@ class JavaScriptFrame: public StandardFrame { static Register constant_pool_pointer_register(); static JavaScriptFrame* cast(StackFrame* frame) { - ASSERT(frame->is_java_script()); + DCHECK(frame->is_java_script()); return static_cast<JavaScriptFrame*>(frame); } - static void PrintTop(Isolate* isolate, - FILE* file, - bool print_args, + static void PrintFunctionAndOffset(JSFunction* function, Code* code, + Address pc, FILE* file, + bool print_line_number); + + static void PrintTop(Isolate* isolate, FILE* file, bool print_args, bool print_line_number); protected: @@ -697,7 +699,7 @@ class ArgumentsAdaptorFrame: public JavaScriptFrame { virtual Code* unchecked_code() const; static ArgumentsAdaptorFrame* cast(StackFrame* frame) { - ASSERT(frame->is_arguments_adaptor()); + DCHECK(frame->is_arguments_adaptor()); return static_cast<ArgumentsAdaptorFrame*>(frame); } @@ -729,7 +731,7 @@ class InternalFrame: public StandardFrame { virtual Code* unchecked_code() const; static InternalFrame* cast(StackFrame* frame) { - ASSERT(frame->is_internal()); + DCHECK(frame->is_internal()); return static_cast<InternalFrame*>(frame); } @@ -784,7 
+786,7 @@ class ConstructFrame: public InternalFrame { virtual Type type() const { return CONSTRUCT; } static ConstructFrame* cast(StackFrame* frame) { - ASSERT(frame->is_construct()); + DCHECK(frame->is_construct()); return static_cast<ConstructFrame*>(frame); } @@ -815,7 +817,7 @@ class StackFrameIteratorBase BASE_EMBEDDED { const bool can_access_heap_objects_; StackHandler* handler() const { - ASSERT(!done()); + DCHECK(!done()); return handler_; } @@ -838,7 +840,7 @@ class StackFrameIterator: public StackFrameIteratorBase { StackFrameIterator(Isolate* isolate, ThreadLocalTop* t); StackFrame* frame() const { - ASSERT(!done()); + DCHECK(!done()); return frame_; } void Advance(); @@ -930,13 +932,6 @@ class StackFrameLocator BASE_EMBEDDED { }; -// Used specify the type of prologue to generate. -enum PrologueFrameMode { - BUILD_FUNCTION_FRAME, - BUILD_STUB_FRAME -}; - - // Reads all frames on the current stack and copies them into the current // zone memory. Vector<StackFrame*> CreateStackMap(Isolate* isolate, Zone* zone); diff --git a/deps/v8/src/full-codegen.cc b/deps/v8/src/full-codegen.cc index 2846d2ba14c..0297f88f581 100644 --- a/deps/v8/src/full-codegen.cc +++ b/deps/v8/src/full-codegen.cc @@ -2,19 +2,19 @@ // Use of this source code is governed by a BSD-style license that can be // found in the LICENSE file. -#include "v8.h" - -#include "codegen.h" -#include "compiler.h" -#include "debug.h" -#include "full-codegen.h" -#include "liveedit.h" -#include "macro-assembler.h" -#include "prettyprinter.h" -#include "scopes.h" -#include "scopeinfo.h" -#include "snapshot.h" -#include "stub-cache.h" +#include "src/v8.h" + +#include "src/codegen.h" +#include "src/compiler.h" +#include "src/debug.h" +#include "src/full-codegen.h" +#include "src/liveedit.h" +#include "src/macro-assembler.h" +#include "src/prettyprinter.h" +#include "src/scopeinfo.h" +#include "src/scopes.h" +#include "src/snapshot.h" +#include "src/stub-cache.h" namespace v8 { namespace internal { @@ -290,8 +290,7 @@ void BreakableStatementChecker::VisitThisFunction(ThisFunction* expr) { bool FullCodeGenerator::MakeCode(CompilationInfo* info) { Isolate* isolate = info->isolate(); - Logger::TimerEventScope timer( - isolate, Logger::TimerEventScope::v8_compile_full_code); + TimerEventScope<TimerEventCompileFullCode> timer(info->isolate()); Handle<Script> script = info->script(); if (!script->IsUndefined() && !script->source()->IsUndefined()) { @@ -301,16 +300,15 @@ bool FullCodeGenerator::MakeCode(CompilationInfo* info) { CodeGenerator::MakeCodePrologue(info, "full"); const int kInitialBufferSize = 4 * KB; MacroAssembler masm(info->isolate(), NULL, kInitialBufferSize); -#ifdef ENABLE_GDB_JIT_INTERFACE - masm.positions_recorder()->StartGDBJITLineInfoRecording(); -#endif + if (info->will_serialize()) masm.enable_serializer(); + LOG_CODE_EVENT(isolate, CodeStartLinePosInfoRecordEvent(masm.positions_recorder())); FullCodeGenerator cgen(&masm, info); cgen.Generate(); if (cgen.HasStackOverflow()) { - ASSERT(!isolate->has_pending_exception()); + DCHECK(!isolate->has_pending_exception()); return false; } unsigned table_offset = cgen.EmitBackEdgeTable(); @@ -328,16 +326,8 @@ bool FullCodeGenerator::MakeCode(CompilationInfo* info) { code->set_allow_osr_at_loop_nesting_level(0); code->set_profiler_ticks(0); code->set_back_edge_table_offset(table_offset); - code->set_back_edges_patched_for_osr(false); CodeGenerator::PrintCode(code, info); info->SetCode(code); -#ifdef ENABLE_GDB_JIT_INTERFACE - if (FLAG_gdbjit) { - GDBJITLineInfo* lineinfo = - 
masm.positions_recorder()->DetachGDBJITLineInfo(); - GDBJIT(RegisterDetailedLineInfo(*code, lineinfo)); - } -#endif void* line_info = masm.positions_recorder()->DetachJITHandlerData(); LOG_CODE_EVENT(isolate, CodeEndLinePosInfoRecordEvent(*code, line_info)); return true; @@ -348,7 +338,7 @@ unsigned FullCodeGenerator::EmitBackEdgeTable() { // The back edge table consists of a length (in number of entries) // field, and then a sequence of entries. Each entry is a pair of AST id // and code-relative pc offset. - masm()->Align(kIntSize); + masm()->Align(kPointerSize); unsigned offset = masm()->pc_offset(); unsigned length = back_edges_.length(); __ dd(length); @@ -373,7 +363,7 @@ void FullCodeGenerator::EnsureSlotContainsAllocationSite(int slot) { void FullCodeGenerator::PopulateDeoptimizationData(Handle<Code> code) { // Fill in the deoptimization information. - ASSERT(info_->HasDeoptimizationSupport() || bailout_entries_.is_empty()); + DCHECK(info_->HasDeoptimizationSupport() || bailout_entries_.is_empty()); if (!info_->HasDeoptimizationSupport()) return; int length = bailout_entries_.length(); Handle<DeoptimizationOutputData> data = @@ -389,7 +379,7 @@ void FullCodeGenerator::PopulateDeoptimizationData(Handle<Code> code) { void FullCodeGenerator::PopulateTypeFeedbackInfo(Handle<Code> code) { Handle<TypeFeedbackInfo> info = isolate()->factory()->NewTypeFeedbackInfo(); info->set_ic_total_count(ic_total_count_); - ASSERT(!isolate()->heap()->InNewSpace(*info)); + DCHECK(!isolate()->heap()->InNewSpace(*info)); code->set_type_feedback_info(*info); } @@ -402,7 +392,7 @@ void FullCodeGenerator::Initialize() { // we disable the production of debug code in the full compiler if we are // either generating a snapshot or we booted from a snapshot. generate_debug_code_ = FLAG_debug_code && - !Serializer::enabled(isolate()) && + !masm_->serializer_enabled() && !Snapshot::HaveASnapshotToStartFrom(); masm_->set_emit_debug_code(generate_debug_code_); masm_->set_predictable_code_size(true); @@ -439,7 +429,7 @@ void FullCodeGenerator::RecordJSReturnSite(Call* call) { #ifdef DEBUG // In debug builds, mark the return so we can verify that this function // was called. - ASSERT(!call->return_is_recorded_); + DCHECK(!call->return_is_recorded_); call->return_is_recorded_ = true; #endif } @@ -451,18 +441,21 @@ void FullCodeGenerator::PrepareForBailoutForId(BailoutId id, State state) { if (!info_->HasDeoptimizationSupport()) return; unsigned pc_and_state = StateField::encode(state) | PcField::encode(masm_->pc_offset()); - ASSERT(Smi::IsValid(pc_and_state)); + DCHECK(Smi::IsValid(pc_and_state)); +#ifdef DEBUG + for (int i = 0; i < bailout_entries_.length(); ++i) { + DCHECK(bailout_entries_[i].id != id); + } +#endif BailoutEntry entry = { id, pc_and_state }; - ASSERT(!prepared_bailout_ids_.Contains(id.ToInt())); - prepared_bailout_ids_.Add(id.ToInt(), zone()); bailout_entries_.Add(entry, zone()); } void FullCodeGenerator::RecordBackEdge(BailoutId ast_id) { // The pc offset does not need to be encoded and packed together with a state. 
- ASSERT(masm_->pc_offset() > 0); - ASSERT(loop_depth() > 0); + DCHECK(masm_->pc_offset() > 0); + DCHECK(loop_depth() > 0); uint8_t depth = Min(loop_depth(), Code::kMaxLoopNestingMarker); BackEdgeEntry entry = { ast_id, static_cast<unsigned>(masm_->pc_offset()), depth }; @@ -578,7 +571,7 @@ void FullCodeGenerator::DoTest(const TestContext* context) { void FullCodeGenerator::AllocateModules(ZoneList<Declaration*>* declarations) { - ASSERT(scope_->is_global_scope()); + DCHECK(scope_->is_global_scope()); for (int i = 0; i < declarations->length(); i++) { ModuleDeclaration* declaration = declarations->at(i)->AsModuleDeclaration(); @@ -588,15 +581,15 @@ void FullCodeGenerator::AllocateModules(ZoneList<Declaration*>* declarations) { Comment cmnt(masm_, "[ Link nested modules"); Scope* scope = module->body()->scope(); Interface* interface = scope->interface(); - ASSERT(interface->IsModule() && interface->IsFrozen()); + DCHECK(interface->IsModule() && interface->IsFrozen()); interface->Allocate(scope->module_var()->index()); // Set up module context. - ASSERT(scope->interface()->Index() >= 0); + DCHECK(scope->interface()->Index() >= 0); __ Push(Smi::FromInt(scope->interface()->Index())); __ Push(scope->GetScopeInfo()); - __ CallRuntime(Runtime::kHiddenPushModuleContext, 2); + __ CallRuntime(Runtime::kPushModuleContext, 2); StoreToFrameField(StandardFrameConstants::kContextOffset, context_register()); @@ -685,7 +678,7 @@ void FullCodeGenerator::VisitDeclarations( // This is a scope hosting modules. Allocate a descriptor array to pass // to the runtime for initialization. Comment cmnt(masm_, "[ Allocate modules"); - ASSERT(scope_->is_global_scope()); + DCHECK(scope_->is_global_scope()); modules_ = isolate()->factory()->NewFixedArray(scope_->num_modules(), TENURED); module_index_ = 0; @@ -699,7 +692,7 @@ void FullCodeGenerator::VisitDeclarations( if (scope_->num_modules() != 0) { // Initialize modules from descriptor array. - ASSERT(module_index_ == modules_->length()); + DCHECK(module_index_ == modules_->length()); DeclareModules(modules_); modules_ = saved_modules; module_index_ = saved_module_index; @@ -728,15 +721,15 @@ void FullCodeGenerator::VisitModuleLiteral(ModuleLiteral* module) { Comment cmnt(masm_, "[ ModuleLiteral"); SetStatementPosition(block); - ASSERT(!modules_.is_null()); - ASSERT(module_index_ < modules_->length()); + DCHECK(!modules_.is_null()); + DCHECK(module_index_ < modules_->length()); int index = module_index_++; // Set up module context. 
- ASSERT(interface->Index() >= 0); + DCHECK(interface->Index() >= 0); __ Push(Smi::FromInt(interface->Index())); __ Push(Smi::FromInt(0)); - __ CallRuntime(Runtime::kHiddenPushModuleContext, 2); + __ CallRuntime(Runtime::kPushModuleContext, 2); StoreToFrameField(StandardFrameConstants::kContextOffset, context_register()); { @@ -774,9 +767,9 @@ void FullCodeGenerator::VisitModuleUrl(ModuleUrl* module) { Scope* scope = module->body()->scope(); Interface* interface = scope_->interface(); - ASSERT(interface->IsModule() && interface->IsFrozen()); - ASSERT(!modules_.is_null()); - ASSERT(module_index_ < modules_->length()); + DCHECK(interface->IsModule() && interface->IsFrozen()); + DCHECK(!modules_.is_null()); + DCHECK(module_index_ < modules_->length()); interface->Allocate(scope->module_var()->index()); int index = module_index_++; @@ -787,7 +780,7 @@ void FullCodeGenerator::VisitModuleUrl(ModuleUrl* module) { int FullCodeGenerator::DeclareGlobalsFlags() { - ASSERT(DeclareGlobalsStrictMode::is_valid(strict_mode())); + DCHECK(DeclareGlobalsStrictMode::is_valid(strict_mode())); return DeclareGlobalsEvalFlag::encode(is_eval()) | DeclareGlobalsNativeFlag::encode(is_native()) | DeclareGlobalsStrictMode::encode(strict_mode()); @@ -805,7 +798,7 @@ void FullCodeGenerator::SetReturnPosition(FunctionLiteral* fun) { void FullCodeGenerator::SetStatementPosition(Statement* stmt) { - if (!isolate()->debugger()->IsDebuggerActive()) { + if (!info_->is_debug()) { CodeGenerator::RecordPositions(masm_, stmt->position()); } else { // Check if the statement will be breakable without adding a debug break @@ -820,14 +813,14 @@ void FullCodeGenerator::SetStatementPosition(Statement* stmt) { // If the position recording did record a new position generate a debug // break slot to make the statement breakable. if (position_recorded) { - Debug::GenerateSlot(masm_); + DebugCodegen::GenerateSlot(masm_); } } } void FullCodeGenerator::SetExpressionPosition(Expression* expr) { - if (!isolate()->debugger()->IsDebuggerActive()) { + if (!info_->is_debug()) { CodeGenerator::RecordPositions(masm_, expr->position()); } else { // Check if the expression will be breakable without adding a debug break @@ -846,17 +839,12 @@ void FullCodeGenerator::SetExpressionPosition(Expression* expr) { // If the position recording did record a new position generate a debug // break slot to make the statement breakable. 
if (position_recorded) { - Debug::GenerateSlot(masm_); + DebugCodegen::GenerateSlot(masm_); } } } -void FullCodeGenerator::SetStatementPosition(int pos) { - CodeGenerator::RecordPositions(masm_, pos); -} - - void FullCodeGenerator::SetSourcePosition(int pos) { if (pos != RelocInfo::kNoPosition) { masm_->positions_recorder()->RecordPosition(pos); @@ -880,8 +868,8 @@ FullCodeGenerator::InlineFunctionGenerator FullCodeGenerator::FindInlineFunctionGenerator(Runtime::FunctionId id) { int lookup_index = static_cast<int>(id) - static_cast<int>(Runtime::kFirstInlineFunction); - ASSERT(lookup_index >= 0); - ASSERT(static_cast<size_t>(lookup_index) < + DCHECK(lookup_index >= 0); + DCHECK(static_cast<size_t>(lookup_index) < ARRAY_SIZE(kInlineFunctionGenerators)); return kInlineFunctionGenerators[lookup_index]; } @@ -889,8 +877,8 @@ FullCodeGenerator::InlineFunctionGenerator void FullCodeGenerator::EmitInlineRuntimeCall(CallRuntime* expr) { const Runtime::Function* function = expr->function(); - ASSERT(function != NULL); - ASSERT(function->intrinsic_type == Runtime::INLINE); + DCHECK(function != NULL); + DCHECK(function->intrinsic_type == Runtime::INLINE); InlineFunctionGenerator generator = FindInlineFunctionGenerator(function->function_id); ((*this).*(generator))(expr); @@ -899,14 +887,14 @@ void FullCodeGenerator::EmitInlineRuntimeCall(CallRuntime* expr) { void FullCodeGenerator::EmitGeneratorNext(CallRuntime* expr) { ZoneList<Expression*>* args = expr->arguments(); - ASSERT(args->length() == 2); + DCHECK(args->length() == 2); EmitGeneratorResume(args->at(0), args->at(1), JSGeneratorObject::NEXT); } void FullCodeGenerator::EmitGeneratorThrow(CallRuntime* expr) { ZoneList<Expression*>* args = expr->arguments(); - ASSERT(args->length() == 2); + DCHECK(args->length() == 2); EmitGeneratorResume(args->at(0), args->at(1), JSGeneratorObject::THROW); } @@ -1004,7 +992,7 @@ void FullCodeGenerator::VisitLogicalExpression(BinaryOperation* expr) { PrepareForBailoutForId(right_id, NO_REGISTERS); } else { - ASSERT(context()->IsEffect()); + DCHECK(context()->IsEffect()); Label eval_right; if (is_logical_and) { VisitForControl(left, &eval_right, &done, &eval_right); @@ -1049,28 +1037,30 @@ void FullCodeGenerator::VisitBlock(Block* stmt) { Scope* saved_scope = scope(); // Push a block context when entering a block with block scoped variables. - if (stmt->scope() != NULL) { + if (stmt->scope() == NULL) { + PrepareForBailoutForId(stmt->EntryId(), NO_REGISTERS); + } else { scope_ = stmt->scope(); - ASSERT(!scope_->is_module_scope()); + DCHECK(!scope_->is_module_scope()); { Comment cmnt(masm_, "[ Extend block context"); __ Push(scope_->GetScopeInfo()); PushFunctionArgumentForContextAllocation(); - __ CallRuntime(Runtime::kHiddenPushBlockContext, 2); + __ CallRuntime(Runtime::kPushBlockContext, 2); // Replace the context stored in the frame. StoreToFrameField(StandardFrameConstants::kContextOffset, context_register()); + PrepareForBailoutForId(stmt->EntryId(), NO_REGISTERS); } { Comment cmnt(masm_, "[ Declarations"); VisitDeclarations(scope_->declarations()); + PrepareForBailoutForId(stmt->DeclsId(), NO_REGISTERS); } } - PrepareForBailoutForId(stmt->EntryId(), NO_REGISTERS); VisitStatements(stmt->statements()); scope_ = saved_scope; __ bind(nested_block.break_label()); - PrepareForBailoutForId(stmt->ExitId(), NO_REGISTERS); // Pop block context if necessary. 
if (stmt->scope() != NULL) { @@ -1079,6 +1069,7 @@ void FullCodeGenerator::VisitBlock(Block* stmt) { StoreToFrameField(StandardFrameConstants::kContextOffset, context_register()); } + PrepareForBailoutForId(stmt->ExitId(), NO_REGISTERS); } @@ -1087,7 +1078,7 @@ void FullCodeGenerator::VisitModuleStatement(ModuleStatement* stmt) { __ Push(Smi::FromInt(stmt->proxy()->interface()->Index())); __ Push(Smi::FromInt(0)); - __ CallRuntime(Runtime::kHiddenPushModuleContext, 2); + __ CallRuntime(Runtime::kPushModuleContext, 2); StoreToFrameField( StandardFrameConstants::kContextOffset, context_register()); @@ -1226,7 +1217,7 @@ void FullCodeGenerator::VisitWithStatement(WithStatement* stmt) { VisitForStackValue(stmt->expression()); PushFunctionArgumentForContextAllocation(); - __ CallRuntime(Runtime::kHiddenPushWithContext, 2); + __ CallRuntime(Runtime::kPushWithContext, 2); StoreToFrameField(StandardFrameConstants::kContextOffset, context_register()); Scope* saved_scope = scope(); @@ -1278,31 +1269,28 @@ void FullCodeGenerator::VisitDoWhileStatement(DoWhileStatement* stmt) { void FullCodeGenerator::VisitWhileStatement(WhileStatement* stmt) { Comment cmnt(masm_, "[ WhileStatement"); - Label test, body; + Label loop, body; Iteration loop_statement(this, stmt); increment_loop_depth(); - // Emit the test at the bottom of the loop. - __ jmp(&test); + __ bind(&loop); + + SetExpressionPosition(stmt->cond()); + VisitForControl(stmt->cond(), + &body, + loop_statement.break_label(), + &body); PrepareForBailoutForId(stmt->BodyId(), NO_REGISTERS); __ bind(&body); Visit(stmt->body()); - // Emit the statement position here as this is where the while - // statement code starts. __ bind(loop_statement.continue_label()); - SetStatementPosition(stmt); // Check stack before looping. - EmitBackEdgeBookkeeping(stmt, &body); - - __ bind(&test); - VisitForControl(stmt->cond(), - &body, - loop_statement.break_label(), - loop_statement.break_label()); + EmitBackEdgeBookkeeping(stmt, &loop); + __ jmp(&loop); PrepareForBailoutForId(stmt->ExitId(), NO_REGISTERS); __ bind(loop_statement.break_label()); @@ -1379,14 +1367,14 @@ void FullCodeGenerator::VisitTryCatchStatement(TryCatchStatement* stmt) { __ Push(stmt->variable()->name()); __ Push(result_register()); PushFunctionArgumentForContextAllocation(); - __ CallRuntime(Runtime::kHiddenPushCatchContext, 3); + __ CallRuntime(Runtime::kPushCatchContext, 3); StoreToFrameField(StandardFrameConstants::kContextOffset, context_register()); } Scope* saved_scope = scope(); scope_ = stmt->scope(); - ASSERT(scope_->declarations()->is_empty()); + DCHECK(scope_->declarations()->is_empty()); { WithOrCatch catch_body(this); Visit(stmt->catch_block()); } @@ -1443,7 +1431,7 @@ void FullCodeGenerator::VisitTryFinallyStatement(TryFinallyStatement* stmt) { // rethrow the exception if it returns. __ Call(&finally_entry); __ Push(result_register()); - __ CallRuntime(Runtime::kHiddenReThrow, 1); + __ CallRuntime(Runtime::kReThrow, 1); // Finally block implementation. __ bind(&finally_entry); @@ -1524,7 +1512,7 @@ void FullCodeGenerator::VisitFunctionLiteral(FunctionLiteral* expr) { // Build the function boilerplate and instantiate it. 
Handle<SharedFunctionInfo> function_info = - Compiler::BuildFunctionInfo(expr, script()); + Compiler::BuildFunctionInfo(expr, script(), info_); if (function_info.is_null()) { SetStackOverflow(); return; @@ -1542,7 +1530,7 @@ void FullCodeGenerator::VisitNativeFunctionLiteral( v8::Handle<v8::FunctionTemplate> fun_template = expr->extension()->GetNativeFunctionTemplate( reinterpret_cast<v8::Isolate*>(isolate()), v8::Utils::ToLocal(name)); - ASSERT(!fun_template.IsEmpty()); + DCHECK(!fun_template.IsEmpty()); // Instantiate the function and create a shared function info from it. Handle<JSFunction> fun = Utils::OpenHandle(*fun_template->GetFunction()); @@ -1550,10 +1538,11 @@ void FullCodeGenerator::VisitNativeFunctionLiteral( Handle<Code> code = Handle<Code>(fun->shared()->code()); Handle<Code> construct_stub = Handle<Code>(fun->shared()->construct_stub()); bool is_generator = false; + bool is_arrow = false; Handle<SharedFunctionInfo> shared = isolate()->factory()->NewSharedFunctionInfo( - name, literals, is_generator, - code, Handle<ScopeInfo>(fun->shared()->scope_info()), + name, literals, is_generator, is_arrow, code, + Handle<ScopeInfo>(fun->shared()->scope_info()), Handle<FixedArray>(fun->shared()->feedback_vector())); shared->set_construct_stub(*construct_stub); @@ -1569,7 +1558,7 @@ void FullCodeGenerator::VisitNativeFunctionLiteral( void FullCodeGenerator::VisitThrow(Throw* expr) { Comment cmnt(masm_, "[ Throw"); VisitForStackValue(expr->exception()); - __ CallRuntime(Runtime::kHiddenThrow, 1); + __ CallRuntime(Runtime::kThrow, 1); // Never returns here. } @@ -1611,22 +1600,24 @@ void BackEdgeTable::Patch(Isolate* isolate, Code* unoptimized) { DisallowHeapAllocation no_gc; Code* patch = isolate->builtins()->builtin(Builtins::kOnStackReplacement); - // Iterate over the back edge table and patch every interrupt + // Increment loop nesting level by one and iterate over the back edge table + // to find the matching loops to patch the interrupt // call to an unconditional call to the replacement code. - int loop_nesting_level = unoptimized->allow_osr_at_loop_nesting_level(); + int loop_nesting_level = unoptimized->allow_osr_at_loop_nesting_level() + 1; + if (loop_nesting_level > Code::kMaxLoopNestingMarker) return; BackEdgeTable back_edges(unoptimized, &no_gc); for (uint32_t i = 0; i < back_edges.length(); i++) { if (static_cast<int>(back_edges.loop_depth(i)) == loop_nesting_level) { - ASSERT_EQ(INTERRUPT, GetBackEdgeState(isolate, + DCHECK_EQ(INTERRUPT, GetBackEdgeState(isolate, unoptimized, back_edges.pc(i))); PatchAt(unoptimized, back_edges.pc(i), ON_STACK_REPLACEMENT, patch); } } - unoptimized->set_back_edges_patched_for_osr(true); - ASSERT(Verify(isolate, unoptimized, loop_nesting_level)); + unoptimized->set_allow_osr_at_loop_nesting_level(loop_nesting_level); + DCHECK(Verify(isolate, unoptimized)); } @@ -1635,23 +1626,21 @@ void BackEdgeTable::Revert(Isolate* isolate, Code* unoptimized) { Code* patch = isolate->builtins()->builtin(Builtins::kInterruptCheck); // Iterate over the back edge table and revert the patched interrupt calls. 
- ASSERT(unoptimized->back_edges_patched_for_osr()); int loop_nesting_level = unoptimized->allow_osr_at_loop_nesting_level(); BackEdgeTable back_edges(unoptimized, &no_gc); for (uint32_t i = 0; i < back_edges.length(); i++) { if (static_cast<int>(back_edges.loop_depth(i)) <= loop_nesting_level) { - ASSERT_NE(INTERRUPT, GetBackEdgeState(isolate, + DCHECK_NE(INTERRUPT, GetBackEdgeState(isolate, unoptimized, back_edges.pc(i))); PatchAt(unoptimized, back_edges.pc(i), INTERRUPT, patch); } } - unoptimized->set_back_edges_patched_for_osr(false); unoptimized->set_allow_osr_at_loop_nesting_level(0); // Assert that none of the back edges are patched anymore. - ASSERT(Verify(isolate, unoptimized, -1)); + DCHECK(Verify(isolate, unoptimized)); } @@ -1677,10 +1666,9 @@ void BackEdgeTable::RemoveStackCheck(Handle<Code> code, uint32_t pc_offset) { #ifdef DEBUG -bool BackEdgeTable::Verify(Isolate* isolate, - Code* unoptimized, - int loop_nesting_level) { +bool BackEdgeTable::Verify(Isolate* isolate, Code* unoptimized) { DisallowHeapAllocation no_gc; + int loop_nesting_level = unoptimized->allow_osr_at_loop_nesting_level(); BackEdgeTable back_edges(unoptimized, &no_gc); for (uint32_t i = 0; i < back_edges.length(); i++) { uint32_t loop_depth = back_edges.loop_depth(i); diff --git a/deps/v8/src/full-codegen.h b/deps/v8/src/full-codegen.h index 167538ed9c6..6814946529b 100644 --- a/deps/v8/src/full-codegen.h +++ b/deps/v8/src/full-codegen.h @@ -5,17 +5,17 @@ #ifndef V8_FULL_CODEGEN_H_ #define V8_FULL_CODEGEN_H_ -#include "v8.h" - -#include "allocation.h" -#include "assert-scope.h" -#include "ast.h" -#include "code-stubs.h" -#include "codegen.h" -#include "compiler.h" -#include "data-flow.h" -#include "globals.h" -#include "objects.h" +#include "src/v8.h" + +#include "src/allocation.h" +#include "src/assert-scope.h" +#include "src/ast.h" +#include "src/code-stubs.h" +#include "src/codegen.h" +#include "src/compiler.h" +#include "src/data-flow.h" +#include "src/globals.h" +#include "src/objects.h" namespace v8 { namespace internal { @@ -74,6 +74,7 @@ class FullCodeGenerator: public AstVisitor { info->zone()), back_edges_(2, info->zone()), ic_total_count_(0) { + DCHECK(!info->IsStub()); Initialize(); } @@ -98,7 +99,7 @@ class FullCodeGenerator: public AstVisitor { static const int kMaxBackEdgeWeight = 127; // Platform-specific code size multiplier. -#if V8_TARGET_ARCH_IA32 +#if V8_TARGET_ARCH_IA32 || V8_TARGET_ARCH_X87 static const int kCodeSizeMultiplier = 105; static const int kBootCodeSizeMultiplier = 100; #elif V8_TARGET_ARCH_X64 @@ -114,6 +115,9 @@ class FullCodeGenerator: public AstVisitor { #elif V8_TARGET_ARCH_MIPS static const int kCodeSizeMultiplier = 149; static const int kBootCodeSizeMultiplier = 120; +#elif V8_TARGET_ARCH_MIPS64 + static const int kCodeSizeMultiplier = 149; + static const int kBootCodeSizeMultiplier = 120; #else #error Unsupported target architecture. #endif @@ -133,7 +137,7 @@ class FullCodeGenerator: public AstVisitor { } virtual ~NestedStatement() { // Unlink from codegen's nesting stack. - ASSERT_EQ(this, codegen_->nesting_stack_); + DCHECK_EQ(this, codegen_->nesting_stack_); codegen_->nesting_stack_ = previous_; } @@ -216,7 +220,7 @@ class FullCodeGenerator: public AstVisitor { ++(*context_length); } return previous_; - }; + } }; // The try block of a try/catch statement. 
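// The BackEdgeTable::Patch()/Revert() rework earlier in this patch drops the
// back_edges_patched_for_osr flag and keys the state off
// allow_osr_at_loop_nesting_level instead: Patch() raises the allowed level by
// one and patches only the back edges at exactly that new depth. A simplified,
// self-contained sketch of that selection logic (the types and the printf
// stand-in for PatchAt are illustrative, not V8's real classes):
#include <cstdio>
#include <vector>

struct BackEdgeEntry { int loop_depth; };

int RaiseLevelAndPatch(const std::vector<BackEdgeEntry>& edges,
                       int current_level, int max_marker) {
  int new_level = current_level + 1;
  if (new_level > max_marker) return current_level;  // every depth enabled
  for (const BackEdgeEntry& e : edges) {
    if (e.loop_depth == new_level) {
      // V8's PatchAt would swap this entry's interrupt call for the
      // on-stack-replacement builtin; the sketch only selects entries.
      std::printf("patch back edge at depth %d\n", e.loop_depth);
    }
  }
  return new_level;  // V8 persists this via set_allow_osr_at_loop_nesting_level
}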
@@ -319,6 +323,13 @@ class FullCodeGenerator: public AstVisitor { Label* if_true, Label* if_false, Label* fall_through); +#elif V8_TARGET_ARCH_MIPS64 + void Split(Condition cc, + Register lhs, + const Operand& rhs, + Label* if_true, + Label* if_false, + Label* fall_through); #else // All non-mips arch. void Split(Condition cc, Label* if_true, @@ -484,11 +495,11 @@ class FullCodeGenerator: public AstVisitor { JSGeneratorObject::ResumeMode resume_mode); // Platform-specific code for loading variables. - void EmitLoadGlobalCheckExtensions(Variable* var, + void EmitLoadGlobalCheckExtensions(VariableProxy* proxy, TypeofState typeof_state, Label* slow); MemOperand ContextSlotOperandCheckExtensions(Variable* var, Label* slow); - void EmitDynamicLookupFastCase(Variable* var, + void EmitDynamicLookupFastCase(VariableProxy* proxy, TypeofState typeof_state, Label* slow, Label* done); @@ -539,7 +550,6 @@ class FullCodeGenerator: public AstVisitor { // Helper functions to EmitVariableAssignment void EmitStoreToStackLocalOrContextSlot(Variable* var, MemOperand location); - void EmitCallStoreContextSlot(Handle<String> name, StrictMode strict_mode); // Complete a named property assignment. The receiver is expected on top // of the stack and the right-hand-side value in the accumulator. @@ -561,7 +571,6 @@ class FullCodeGenerator: public AstVisitor { void SetReturnPosition(FunctionLiteral* fun); void SetStatementPosition(Statement* stmt); void SetExpressionPosition(Expression* expr); - void SetStatementPosition(int pos); void SetSourcePosition(int pos); // Non-local control flow support. @@ -572,7 +581,7 @@ class FullCodeGenerator: public AstVisitor { int loop_depth() { return loop_depth_; } void increment_loop_depth() { loop_depth_++; } void decrement_loop_depth() { - ASSERT(loop_depth_ > 0); + DCHECK(loop_depth_ > 0); loop_depth_--; } @@ -756,7 +765,7 @@ class FullCodeGenerator: public AstVisitor { fall_through_(fall_through) { } static const TestContext* cast(const ExpressionContext* context) { - ASSERT(context->IsTest()); + DCHECK(context->IsTest()); return reinterpret_cast<const TestContext*>(context); } @@ -819,7 +828,6 @@ class FullCodeGenerator: public AstVisitor { int module_index_; const ExpressionContext* context_; ZoneList<BailoutEntry> bailout_entries_; - GrowableBitVector prepared_bailout_ids_; ZoneList<BackEdgeEntry> back_edges_; int ic_total_count_; Handle<FixedArray> handler_table_; @@ -858,7 +866,7 @@ class AccessorTable: public TemplateHashMap<Literal, class BackEdgeTable { public: BackEdgeTable(Code* code, DisallowHeapAllocation* required) { - ASSERT(code->kind() == Code::FUNCTION); + DCHECK(code->kind() == Code::FUNCTION); instruction_start_ = code->instruction_start(); Address table_address = instruction_start_ + code->back_edge_table_offset(); length_ = Memory::uint32_at(table_address); @@ -890,10 +898,8 @@ class BackEdgeTable { OSR_AFTER_STACK_CHECK }; - // Patch all interrupts with allowed loop depth in the unoptimized code to - // unconditionally call replacement_code. - static void Patch(Isolate* isolate, - Code* unoptimized_code); + // Increase allowed loop nesting level by one and patch those matching loops. + static void Patch(Isolate* isolate, Code* unoptimized_code); // Patch the back edge to the target state, provided the correct callee. static void PatchAt(Code* unoptimized_code, @@ -919,14 +925,12 @@ class BackEdgeTable { #ifdef DEBUG // Verify that all back edges of a certain loop depth are patched. 
- static bool Verify(Isolate* isolate, - Code* unoptimized_code, - int loop_nesting_level); + static bool Verify(Isolate* isolate, Code* unoptimized_code); #endif // DEBUG private: Address entry_at(uint32_t index) { - ASSERT(index < length_); + DCHECK(index < length_); return start_ + index * kEntrySize; } diff --git a/deps/v8/src/func-name-inferrer.cc b/deps/v8/src/func-name-inferrer.cc index 370d6264dff..b3a64b26f82 100644 --- a/deps/v8/src/func-name-inferrer.cc +++ b/deps/v8/src/func-name-inferrer.cc @@ -2,17 +2,19 @@ // Use of this source code is governed by a BSD-style license that can be // found in the LICENSE file. -#include "v8.h" +#include "src/v8.h" -#include "ast.h" -#include "func-name-inferrer.h" -#include "list-inl.h" +#include "src/ast.h" +#include "src/ast-value-factory.h" +#include "src/func-name-inferrer.h" +#include "src/list-inl.h" namespace v8 { namespace internal { -FuncNameInferrer::FuncNameInferrer(Isolate* isolate, Zone* zone) - : isolate_(isolate), +FuncNameInferrer::FuncNameInferrer(AstValueFactory* ast_value_factory, + Zone* zone) + : ast_value_factory_(ast_value_factory), entries_stack_(10, zone), names_stack_(5, zone), funcs_to_infer_(4, zone), @@ -20,40 +22,36 @@ FuncNameInferrer::FuncNameInferrer(Isolate* isolate, Zone* zone) } -void FuncNameInferrer::PushEnclosingName(Handle<String> name) { +void FuncNameInferrer::PushEnclosingName(const AstRawString* name) { // Enclosing name is a name of a constructor function. To check // that it is really a constructor, we check that it is not empty // and starts with a capital letter. - if (name->length() > 0 && Runtime::IsUpperCaseChar( - isolate()->runtime_state(), name->Get(0))) { + if (!name->IsEmpty() && unibrow::Uppercase::Is(name->FirstCharacter())) { names_stack_.Add(Name(name, kEnclosingConstructorName), zone()); } } -void FuncNameInferrer::PushLiteralName(Handle<String> name) { - if (IsOpen() && - !String::Equals(isolate()->factory()->prototype_string(), name)) { +void FuncNameInferrer::PushLiteralName(const AstRawString* name) { + if (IsOpen() && name != ast_value_factory_->prototype_string()) { names_stack_.Add(Name(name, kLiteralName), zone()); } } -void FuncNameInferrer::PushVariableName(Handle<String> name) { - if (IsOpen() && - !String::Equals(isolate()->factory()->dot_result_string(), name)) { +void FuncNameInferrer::PushVariableName(const AstRawString* name) { + if (IsOpen() && name != ast_value_factory_->dot_result_string()) { names_stack_.Add(Name(name, kVariableName), zone()); } } -Handle<String> FuncNameInferrer::MakeNameFromStack() { - return MakeNameFromStackHelper(0, isolate()->factory()->empty_string()); +const AstString* FuncNameInferrer::MakeNameFromStack() { + return MakeNameFromStackHelper(0, ast_value_factory_->empty_string()); } - -Handle<String> FuncNameInferrer::MakeNameFromStackHelper(int pos, - Handle<String> prev) { +const AstString* FuncNameInferrer::MakeNameFromStackHelper( + int pos, const AstString* prev) { if (pos >= names_stack_.length()) return prev; if (pos < names_stack_.length() - 1 && names_stack_.at(pos).type == kVariableName && @@ -62,12 +60,11 @@ Handle<String> FuncNameInferrer::MakeNameFromStackHelper(int pos, return MakeNameFromStackHelper(pos + 1, prev); } else { if (prev->length() > 0) { - Handle<String> name = names_stack_.at(pos).name; + const AstRawString* name = names_stack_.at(pos).name; if (prev->length() + name->length() + 1 > String::kMaxLength) return prev; - Factory* factory = isolate()->factory(); - Handle<String> curr = - 
factory->NewConsString(factory->dot_string(), name).ToHandleChecked(); - curr = factory->NewConsString(prev, curr).ToHandleChecked(); + const AstConsString* curr = ast_value_factory_->NewConsString( + ast_value_factory_->dot_string(), name); + curr = ast_value_factory_->NewConsString(prev, curr); return MakeNameFromStackHelper(pos + 1, curr); } else { return MakeNameFromStackHelper(pos + 1, names_stack_.at(pos).name); @@ -77,9 +74,9 @@ Handle<String> FuncNameInferrer::MakeNameFromStackHelper(int pos, void FuncNameInferrer::InferFunctionsNames() { - Handle<String> func_name = MakeNameFromStack(); + const AstString* func_name = MakeNameFromStack(); for (int i = 0; i < funcs_to_infer_.length(); ++i) { - funcs_to_infer_[i]->set_inferred_name(func_name); + funcs_to_infer_[i]->set_raw_inferred_name(func_name); } funcs_to_infer_.Rewind(0); } diff --git a/deps/v8/src/func-name-inferrer.h b/deps/v8/src/func-name-inferrer.h index f0fdbd22e02..8b077f9d8cf 100644 --- a/deps/v8/src/func-name-inferrer.h +++ b/deps/v8/src/func-name-inferrer.h @@ -5,14 +5,16 @@ #ifndef V8_FUNC_NAME_INFERRER_H_ #define V8_FUNC_NAME_INFERRER_H_ -#include "handles.h" -#include "zone.h" +#include "src/handles.h" +#include "src/zone.h" namespace v8 { namespace internal { +class AstRawString; +class AstString; +class AstValueFactory; class FunctionLiteral; -class Isolate; // FuncNameInferrer is a stateful class that is used to perform name // inference for anonymous functions during static analysis of source code. @@ -26,13 +28,13 @@ class Isolate; // a name. class FuncNameInferrer : public ZoneObject { public: - FuncNameInferrer(Isolate* isolate, Zone* zone); + FuncNameInferrer(AstValueFactory* ast_value_factory, Zone* zone); // Returns whether we have entered name collection state. bool IsOpen() const { return !entries_stack_.is_empty(); } // Pushes the name of the enclosing function onto the names stack. - void PushEnclosingName(Handle<String> name); + void PushEnclosingName(const AstRawString* name); // Enters name collection state. void Enter() { @@ -40,9 +42,9 @@ class FuncNameInferrer : public ZoneObject { } // Pushes an encountered name onto names stack when in collection state. - void PushLiteralName(Handle<String> name); + void PushLiteralName(const AstRawString* name); - void PushVariableName(Handle<String> name); + void PushVariableName(const AstRawString* name); // Adds a function to infer name for. void AddFunction(FunctionLiteral* func_to_infer) { @@ -59,7 +61,7 @@ class FuncNameInferrer : public ZoneObject { // Infers a function name and leaves names collection state. void Infer() { - ASSERT(IsOpen()); + DCHECK(IsOpen()); if (!funcs_to_infer_.is_empty()) { InferFunctionsNames(); } @@ -67,7 +69,7 @@ class FuncNameInferrer : public ZoneObject { // Leaves names collection state. void Leave() { - ASSERT(IsOpen()); + DCHECK(IsOpen()); names_stack_.Rewind(entries_stack_.RemoveLast()); if (entries_stack_.is_empty()) funcs_to_infer_.Clear(); @@ -80,24 +82,24 @@ class FuncNameInferrer : public ZoneObject { kVariableName }; struct Name { - Name(Handle<String> name, NameType type) : name(name), type(type) { } - Handle<String> name; + Name(const AstRawString* name, NameType type) : name(name), type(type) {} + const AstRawString* name; NameType type; }; - Isolate* isolate() { return isolate_; } Zone* zone() const { return zone_; } // Constructs a full name in dotted notation from gathered names. - Handle<String> MakeNameFromStack(); + const AstString* MakeNameFromStack(); // A helper function for MakeNameFromStack.
- Handle<String> MakeNameFromStackHelper(int pos, Handle<String> prev); + const AstString* MakeNameFromStackHelper(int pos, + const AstString* prev); // Performs name inferring for added functions. void InferFunctionsNames(); - Isolate* isolate_; + AstValueFactory* ast_value_factory_; ZoneList<int> entries_stack_; ZoneList<Name> names_stack_; ZoneList<FunctionLiteral*> funcs_to_infer_; diff --git a/deps/v8/src/gdb-jit.cc b/deps/v8/src/gdb-jit.cc index f2861c15f35..c9a57b5fa0f 100644 --- a/deps/v8/src/gdb-jit.cc +++ b/deps/v8/src/gdb-jit.cc @@ -3,18 +3,19 @@ // found in the LICENSE file. #ifdef ENABLE_GDB_JIT_INTERFACE -#include "v8.h" -#include "gdb-jit.h" - -#include "bootstrapper.h" -#include "compiler.h" -#include "frames.h" -#include "frames-inl.h" -#include "global-handles.h" -#include "messages.h" -#include "natives.h" -#include "platform.h" -#include "scopes.h" +#include "src/v8.h" + +#include "src/base/platform/platform.h" +#include "src/bootstrapper.h" +#include "src/compiler.h" +#include "src/frames-inl.h" +#include "src/frames.h" +#include "src/gdb-jit.h" +#include "src/global-handles.h" +#include "src/messages.h" +#include "src/natives.h" +#include "src/ostreams.h" +#include "src/scopes.h" namespace v8 { namespace internal { @@ -114,7 +115,7 @@ class Writer BASE_EMBEDDED { if (delta == 0) return; uintptr_t padding = align - delta; Ensure(position_ += padding); - ASSERT((position_ % align) == 0); + DCHECK((position_ % align) == 0); } void WriteULEB128(uintptr_t value) { @@ -154,7 +155,7 @@ class Writer BASE_EMBEDDED { template<typename T> T* RawSlotAt(uintptr_t offset) { - ASSERT(offset < capacity_ && offset + sizeof(T) <= capacity_); + DCHECK(offset < capacity_ && offset + sizeof(T) <= capacity_); return reinterpret_cast<T*>(&buffer_[offset]); } @@ -194,7 +195,7 @@ class DebugSectionBase : public ZoneObject { struct MachOSectionHeader { char sectname[16]; char segname[16]; -#if V8_TARGET_ARCH_IA32 +#if V8_TARGET_ARCH_IA32 || V8_TARGET_ARCH_X87 uint32_t addr; uint32_t size; #else @@ -230,7 +231,7 @@ class MachOSection : public DebugSectionBase<MachOSectionHeader> { align_(align), flags_(flags) { if (align_ != 0) { - ASSERT(IsPowerOf2(align)); + DCHECK(IsPowerOf2(align)); align_ = WhichPowerOf2(align_); } } @@ -249,8 +250,8 @@ class MachOSection : public DebugSectionBase<MachOSectionHeader> { header->reserved2 = 0; memset(header->sectname, 0, sizeof(header->sectname)); memset(header->segname, 0, sizeof(header->segname)); - ASSERT(strlen(name_) < sizeof(header->sectname)); - ASSERT(strlen(segment_) < sizeof(header->segname)); + DCHECK(strlen(name_) < sizeof(header->sectname)); + DCHECK(strlen(segment_) < sizeof(header->segname)); strncpy(header->sectname, name_, sizeof(header->sectname)); strncpy(header->segname, segment_, sizeof(header->segname)); } @@ -442,7 +443,7 @@ class ELFStringTable : public ELFSection { } virtual void WriteBody(Writer::Slot<Header> header, Writer* w) { - ASSERT(writer_ == NULL); + DCHECK(writer_ == NULL); header->offset = offset_; header->size = size_; } @@ -511,7 +512,7 @@ class MachO BASE_EMBEDDED { uint32_t cmd; uint32_t cmdsize; char segname[16]; -#if V8_TARGET_ARCH_IA32 +#if V8_TARGET_ARCH_IA32 || V8_TARGET_ARCH_X87 uint32_t vmaddr; uint32_t vmsize; uint32_t fileoff; @@ -535,9 +536,9 @@ class MachO BASE_EMBEDDED { Writer::Slot<MachOHeader> WriteHeader(Writer* w) { - ASSERT(w->position() == 0); + DCHECK(w->position() == 0); Writer::Slot<MachOHeader> header = w->CreateSlotHere<MachOHeader>(); -#if V8_TARGET_ARCH_IA32 +#if V8_TARGET_ARCH_IA32 || 
V8_TARGET_ARCH_X87 header->magic = 0xFEEDFACEu; header->cputype = 7; // i386 header->cpusubtype = 3; // CPU_SUBTYPE_I386_ALL @@ -562,7 +563,7 @@ class MachO BASE_EMBEDDED { uintptr_t code_size) { Writer::Slot<MachOSegmentCommand> cmd = w->CreateSlotHere<MachOSegmentCommand>(); -#if V8_TARGET_ARCH_IA32 +#if V8_TARGET_ARCH_IA32 || V8_TARGET_ARCH_X87 cmd->cmd = LC_SEGMENT_32; #else cmd->cmd = LC_SEGMENT_64; @@ -647,20 +648,21 @@ class ELF BASE_EMBEDDED { void WriteHeader(Writer* w) { - ASSERT(w->position() == 0); + DCHECK(w->position() == 0); Writer::Slot<ELFHeader> header = w->CreateSlotHere<ELFHeader>(); -#if V8_TARGET_ARCH_IA32 || V8_TARGET_ARCH_ARM +#if (V8_TARGET_ARCH_IA32 || V8_TARGET_ARCH_ARM || V8_TARGET_ARCH_X87 || \ + (V8_TARGET_ARCH_X64 && V8_TARGET_ARCH_32_BIT)) const uint8_t ident[16] = { 0x7f, 'E', 'L', 'F', 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0}; -#elif V8_TARGET_ARCH_X64 +#elif V8_TARGET_ARCH_X64 && V8_TARGET_ARCH_64_BIT const uint8_t ident[16] = { 0x7f, 'E', 'L', 'F', 2, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0}; #else #error Unsupported target architecture. #endif - OS::MemCopy(header->ident, ident, 16); + memcpy(header->ident, ident, 16); header->type = 1; -#if V8_TARGET_ARCH_IA32 +#if V8_TARGET_ARCH_IA32 || V8_TARGET_ARCH_X87 header->machine = 3; #elif V8_TARGET_ARCH_X64 // Processor identification value for x64 is 62 as defined in @@ -689,7 +691,7 @@ class ELF BASE_EMBEDDED { void WriteSectionTable(Writer* w) { // Section headers table immediately follows file header. - ASSERT(w->position() == sizeof(ELFHeader)); + DCHECK(w->position() == sizeof(ELFHeader)); Writer::Slot<ELFSection::Header> headers = w->CreateSlotsHere<ELFSection::Header>(sections_.length()); @@ -762,7 +764,8 @@ class ELFSymbol BASE_EMBEDDED { Binding binding() const { return static_cast<Binding>(info >> 4); } -#if V8_TARGET_ARCH_IA32 || V8_TARGET_ARCH_ARM +#if (V8_TARGET_ARCH_IA32 || V8_TARGET_ARCH_ARM || V8_TARGET_ARCH_X87 || \ + (V8_TARGET_ARCH_X64 && V8_TARGET_ARCH_32_BIT)) struct SerializedLayout { SerializedLayout(uint32_t name, uintptr_t value, @@ -785,7 +788,7 @@ class ELFSymbol BASE_EMBEDDED { uint8_t other; uint16_t section; }; -#elif V8_TARGET_ARCH_X64 +#elif V8_TARGET_ARCH_X64 && V8_TARGET_ARCH_64_BIT struct SerializedLayout { SerializedLayout(uint32_t name, uintptr_t value, @@ -897,6 +900,32 @@ class ELFSymbolTable : public ELFSection { #endif // defined(__ELF) +class LineInfo : public Malloced { + public: + LineInfo() : pc_info_(10) {} + + void SetPosition(intptr_t pc, int pos, bool is_statement) { + AddPCInfo(PCInfo(pc, pos, is_statement)); + } + + struct PCInfo { + PCInfo(intptr_t pc, int pos, bool is_statement) + : pc_(pc), pos_(pos), is_statement_(is_statement) {} + + intptr_t pc_; + int pos_; + bool is_statement_; + }; + + List<PCInfo>* pc_info() { return &pc_info_; } + + private: + void AddPCInfo(const PCInfo& pc_info) { pc_info_.Add(pc_info); } + + List<PCInfo> pc_info_; +}; + + class CodeDescription BASE_EMBEDDED { public: #if V8_TARGET_ARCH_X64 @@ -908,27 +937,21 @@ class CodeDescription BASE_EMBEDDED { }; #endif - CodeDescription(const char* name, - Code* code, - Handle<Script> script, - GDBJITLineInfo* lineinfo, - GDBJITInterface::CodeTag tag, + CodeDescription(const char* name, Code* code, Handle<Script> script, + LineInfo* lineinfo, GDBJITInterface::CodeTag tag, CompilationInfo* info) : name_(name), code_(code), script_(script), lineinfo_(lineinfo), tag_(tag), - info_(info) { - } + info_(info) {} const char* name() const { return name_; } - GDBJITLineInfo* lineinfo() const { - return 
lineinfo_; - } + LineInfo* lineinfo() const { return lineinfo_; } GDBJITInterface::CodeTag tag() const { return tag_; @@ -964,12 +987,12 @@ class CodeDescription BASE_EMBEDDED { #if V8_TARGET_ARCH_X64 uintptr_t GetStackStateStartAddress(StackState state) const { - ASSERT(state < STACK_STATE_MAX); + DCHECK(state < STACK_STATE_MAX); return stack_state_start_addresses_[state]; } void SetStackStateStartAddress(StackState state, uintptr_t addr) { - ASSERT(state < STACK_STATE_MAX); + DCHECK(state < STACK_STATE_MAX); stack_state_start_addresses_[state] = addr; } #endif @@ -987,7 +1010,7 @@ class CodeDescription BASE_EMBEDDED { const char* name_; Code* code_; Handle<Script> script_; - GDBJITLineInfo* lineinfo_; + LineInfo* lineinfo_; GDBJITInterface::CodeTag tag_; CompilationInfo* info_; #if V8_TARGET_ARCH_X64 @@ -1084,7 +1107,7 @@ class DebugInfoSection : public DebugSection { w->Write<intptr_t>(desc_->CodeStart() + desc_->CodeSize()); Writer::Slot<uint32_t> fb_block_size = w->CreateSlotHere<uint32_t>(); uintptr_t fb_block_start = w->position(); -#if V8_TARGET_ARCH_IA32 +#if V8_TARGET_ARCH_IA32 || V8_TARGET_ARCH_X87 w->Write<uint8_t>(DW_OP_reg5); // The frame pointer's here on ia32 #elif V8_TARGET_ARCH_X64 w->Write<uint8_t>(DW_OP_reg6); // and here on x64. @@ -1092,6 +1115,8 @@ class DebugInfoSection : public DebugSection { UNIMPLEMENTED(); #elif V8_TARGET_ARCH_MIPS UNIMPLEMENTED(); +#elif V8_TARGET_ARCH_MIPS64 + UNIMPLEMENTED(); #else #error Unsupported target architecture. #endif @@ -1130,11 +1155,11 @@ class DebugInfoSection : public DebugSection { } // See contexts.h for more information. - ASSERT(Context::MIN_CONTEXT_SLOTS == 4); - ASSERT(Context::CLOSURE_INDEX == 0); - ASSERT(Context::PREVIOUS_INDEX == 1); - ASSERT(Context::EXTENSION_INDEX == 2); - ASSERT(Context::GLOBAL_OBJECT_INDEX == 3); + DCHECK(Context::MIN_CONTEXT_SLOTS == 4); + DCHECK(Context::CLOSURE_INDEX == 0); + DCHECK(Context::PREVIOUS_INDEX == 1); + DCHECK(Context::EXTENSION_INDEX == 2); + DCHECK(Context::GLOBAL_OBJECT_INDEX == 3); w->WriteULEB128(current_abbreviation++); w->WriteString(".closure"); w->WriteULEB128(current_abbreviation++); @@ -1282,7 +1307,7 @@ class DebugAbbrevSection : public DebugSection { bool WriteBodyInternal(Writer* w) { int current_abbreviation = 1; bool extra_info = desc_->IsInfoAvailable(); - ASSERT(desc_->IsLineInfoAvailable()); + DCHECK(desc_->IsLineInfoAvailable()); w->WriteULEB128(current_abbreviation++); w->WriteULEB128(DW_TAG_COMPILE_UNIT); w->Write<uint8_t>(extra_info ? DW_CHILDREN_YES : DW_CHILDREN_NO); @@ -1447,13 +1472,13 @@ class DebugLineSection : public DebugSection { intptr_t line = 1; bool is_statement = true; - List<GDBJITLineInfo::PCInfo>* pc_info = desc_->lineinfo()->pc_info(); + List<LineInfo::PCInfo>* pc_info = desc_->lineinfo()->pc_info(); pc_info->Sort(&ComparePCInfo); int pc_info_length = pc_info->length(); for (int i = 0; i < pc_info_length; i++) { - GDBJITLineInfo::PCInfo* info = &pc_info->at(i); - ASSERT(info->pc_ >= pc); + LineInfo::PCInfo* info = &pc_info->at(i); + DCHECK(info->pc_ >= pc); // Reduce bloating in the debug line table by removing duplicate line // entries (per DWARF2 standard). @@ -1523,8 +1548,8 @@ class DebugLineSection : public DebugSection { w->Write<uint8_t>(op); } - static int ComparePCInfo(const GDBJITLineInfo::PCInfo* a, - const GDBJITLineInfo::PCInfo* b) { + static int ComparePCInfo(const LineInfo::PCInfo* a, + const LineInfo::PCInfo* b) { if (a->pc_ == b->pc_) { if (a->is_statement_ != b->is_statement_) { return b->is_statement_ ? 
+1 : -1; @@ -1623,7 +1648,7 @@ void UnwindInfoSection::WriteLength(Writer* w, } } - ASSERT((w->position() - initial_position) % kPointerSize == 0); + DCHECK((w->position() - initial_position) % kPointerSize == 0); length_slot->set(w->position() - initial_position); } @@ -1819,8 +1844,9 @@ extern "C" { #ifdef OBJECT_PRINT void __gdb_print_v8_object(Object* object) { - object->Print(); - PrintF(stdout, "\n"); + OFStream os(stdout); + object->Print(os); + os << flush; } #endif } @@ -1833,7 +1859,7 @@ static JITCodeEntry* CreateCodeEntry(Address symfile_addr, entry->symfile_addr_ = reinterpret_cast<Address>(entry + 1); entry->symfile_size_ = symfile_size; - OS::MemCopy(entry->symfile_addr_, symfile_addr, symfile_size); + MemCopy(entry->symfile_addr_, symfile_addr, symfile_size); entry->prev_ = entry->next_ = NULL; @@ -1857,12 +1883,12 @@ static void RegisterCodeEntry(JITCodeEntry* entry, static const char* kObjFileExt = ".o"; char file_name[64]; - OS::SNPrintF(Vector<char>(file_name, kMaxFileNameSize), - "%s%s%d%s", - kElfFilePrefix, - (name_hint != NULL) ? name_hint : "", - file_num++, - kObjFileExt); + SNPrintF(Vector<char>(file_name, kMaxFileNameSize), + "%s%s%d%s", + kElfFilePrefix, + (name_hint != NULL) ? name_hint : "", + file_num++, + kObjFileExt); WriteBytes(file_name, entry->symfile_addr_, entry->symfile_size_); } #endif @@ -1962,15 +1988,15 @@ static bool IsLineInfoTagged(void* ptr) { } -static void* TagLineInfo(GDBJITLineInfo* ptr) { +static void* TagLineInfo(LineInfo* ptr) { return reinterpret_cast<void*>( reinterpret_cast<intptr_t>(ptr) | kLineInfoTag); } -static GDBJITLineInfo* UntagLineInfo(void* ptr) { - return reinterpret_cast<GDBJITLineInfo*>( - reinterpret_cast<intptr_t>(ptr) & ~kLineInfoTag); +static LineInfo* UntagLineInfo(void* ptr) { + return reinterpret_cast<LineInfo*>(reinterpret_cast<intptr_t>(ptr) & + ~kLineInfoTag); } @@ -2030,7 +2056,7 @@ static void AddUnwindInfo(CodeDescription* desc) { } -static LazyMutex mutex = LAZY_MUTEX_INITIALIZER; +static base::LazyMutex mutex = LAZY_MUTEX_INITIALIZER; void GDBJITInterface::AddCode(const char* name, @@ -2038,15 +2064,13 @@ void GDBJITInterface::AddCode(const char* name, GDBJITInterface::CodeTag tag, Script* script, CompilationInfo* info) { - if (!FLAG_gdbjit) return; - - LockGuard<Mutex> lock_guard(mutex.Pointer()); + base::LockGuard<base::Mutex> lock_guard(mutex.Pointer()); DisallowHeapAllocation no_gc; HashMap::Entry* e = GetEntries()->Lookup(code, HashForCodeObject(code), true); if (e->value != NULL && !IsLineInfoTagged(e->value)) return; - GDBJITLineInfo* lineinfo = UntagLineInfo(e->value); + LineInfo* lineinfo = UntagLineInfo(e->value); CodeDescription code_desc(name, code, script != NULL ? 
Handle<Script>(script) @@ -2064,7 +2088,7 @@ void GDBJITInterface::AddCode(const char* name, AddUnwindInfo(&code_desc); Isolate* isolate = code->GetIsolate(); JITCodeEntry* entry = CreateELFObject(&code_desc, isolate); - ASSERT(!IsLineInfoTagged(entry)); + DCHECK(!IsLineInfoTagged(entry)); delete lineinfo; e->value = entry; @@ -2084,49 +2108,10 @@ void GDBJITInterface::AddCode(const char* name, } -void GDBJITInterface::AddCode(GDBJITInterface::CodeTag tag, - const char* name, - Code* code) { - if (!FLAG_gdbjit) return; - - EmbeddedVector<char, 256> buffer; - StringBuilder builder(buffer.start(), buffer.length()); - - builder.AddString(Tag2String(tag)); - if ((name != NULL) && (*name != '\0')) { - builder.AddString(": "); - builder.AddString(name); - } else { - builder.AddFormatted(": code object %p", static_cast<void*>(code)); - } - - AddCode(builder.Finalize(), code, tag, NULL, NULL); -} - - -void GDBJITInterface::AddCode(GDBJITInterface::CodeTag tag, - Name* name, - Code* code) { - if (!FLAG_gdbjit) return; - if (name != NULL && name->IsString()) { - AddCode(tag, String::cast(name)->ToCString(DISALLOW_NULLS).get(), code); - } else { - AddCode(tag, "", code); - } -} - - -void GDBJITInterface::AddCode(GDBJITInterface::CodeTag tag, Code* code) { - if (!FLAG_gdbjit) return; - - AddCode(tag, "", code); -} - - void GDBJITInterface::RemoveCode(Code* code) { if (!FLAG_gdbjit) return; - LockGuard<Mutex> lock_guard(mutex.Pointer()); + base::LockGuard<base::Mutex> lock_guard(mutex.Pointer()); HashMap::Entry* e = GetEntries()->Lookup(code, HashForCodeObject(code), false); @@ -2162,15 +2147,62 @@ void GDBJITInterface::RemoveCodeRange(Address start, Address end) { } -void GDBJITInterface::RegisterDetailedLineInfo(Code* code, - GDBJITLineInfo* line_info) { - LockGuard<Mutex> lock_guard(mutex.Pointer()); - ASSERT(!IsLineInfoTagged(line_info)); +static void RegisterDetailedLineInfo(Code* code, LineInfo* line_info) { + base::LockGuard<base::Mutex> lock_guard(mutex.Pointer()); + DCHECK(!IsLineInfoTagged(line_info)); HashMap::Entry* e = GetEntries()->Lookup(code, HashForCodeObject(code), true); - ASSERT(e->value == NULL); + DCHECK(e->value == NULL); e->value = TagLineInfo(line_info); } +void GDBJITInterface::EventHandler(const v8::JitCodeEvent* event) { + if (!FLAG_gdbjit) return; + switch (event->type) { + case v8::JitCodeEvent::CODE_ADDED: { + Code* code = Code::GetCodeFromTargetAddress( + reinterpret_cast<Address>(event->code_start)); + if (code->kind() == Code::OPTIMIZED_FUNCTION || + code->kind() == Code::FUNCTION) { + break; + } + EmbeddedVector<char, 256> buffer; + StringBuilder builder(buffer.start(), buffer.length()); + builder.AddSubstring(event->name.str, static_cast<int>(event->name.len)); + AddCode(builder.Finalize(), code, NON_FUNCTION, NULL, NULL); + break; + } + case v8::JitCodeEvent::CODE_MOVED: + break; + case v8::JitCodeEvent::CODE_REMOVED: { + Code* code = Code::GetCodeFromTargetAddress( + reinterpret_cast<Address>(event->code_start)); + RemoveCode(code); + break; + } + case v8::JitCodeEvent::CODE_ADD_LINE_POS_INFO: { + LineInfo* line_info = reinterpret_cast<LineInfo*>(event->user_data); + line_info->SetPosition(static_cast<intptr_t>(event->line_info.offset), + static_cast<int>(event->line_info.pos), + event->line_info.position_type == + v8::JitCodeEvent::STATEMENT_POSITION); + break; + } + case v8::JitCodeEvent::CODE_START_LINE_INFO_RECORDING: { + v8::JitCodeEvent* mutable_event = const_cast<v8::JitCodeEvent*>(event); + mutable_event->user_data = new LineInfo(); + break; + } + case 
v8::JitCodeEvent::CODE_END_LINE_INFO_RECORDING: { + LineInfo* line_info = reinterpret_cast<LineInfo*>(event->user_data); + Code* code = Code::GetCodeFromTargetAddress( + reinterpret_cast<Address>(event->code_start)); + RegisterDetailedLineInfo(code, line_info); + break; + } + } +} + + } } // namespace v8::internal #endif diff --git a/deps/v8/src/gdb-jit.h b/deps/v8/src/gdb-jit.h index a961740001d..14536cf0b38 100644 --- a/deps/v8/src/gdb-jit.h +++ b/deps/v8/src/gdb-jit.h @@ -5,7 +5,7 @@ #ifndef V8_GDB_JIT_H_ #define V8_GDB_JIT_H_ -#include "allocation.h" +#include "src/allocation.h" // // Basic implementation of GDB JIT Interface client. @@ -14,97 +14,34 @@ // #ifdef ENABLE_GDB_JIT_INTERFACE -#include "v8.h" -#include "factory.h" +#include "src/v8.h" + +#include "src/factory.h" namespace v8 { namespace internal { class CompilationInfo; -#define CODE_TAGS_LIST(V) \ - V(LOAD_IC) \ - V(KEYED_LOAD_IC) \ - V(STORE_IC) \ - V(KEYED_STORE_IC) \ - V(STUB) \ - V(BUILTIN) \ - V(SCRIPT) \ - V(EVAL) \ - V(FUNCTION) - -class GDBJITLineInfo : public Malloced { - public: - GDBJITLineInfo() - : pc_info_(10) { } - - void SetPosition(intptr_t pc, int pos, bool is_statement) { - AddPCInfo(PCInfo(pc, pos, is_statement)); - } - - struct PCInfo { - PCInfo(intptr_t pc, int pos, bool is_statement) - : pc_(pc), pos_(pos), is_statement_(is_statement) { } - - intptr_t pc_; - int pos_; - bool is_statement_; - }; - - List<PCInfo>* pc_info() { - return &pc_info_; - } - - private: - void AddPCInfo(const PCInfo& pc_info) { - pc_info_.Add(pc_info); - } - - List<PCInfo> pc_info_; -}; - - class GDBJITInterface: public AllStatic { public: - enum CodeTag { -#define V(x) x, - CODE_TAGS_LIST(V) -#undef V - TAG_COUNT - }; - - static const char* Tag2String(CodeTag tag) { - switch (tag) { -#define V(x) case x: return #x; - CODE_TAGS_LIST(V) -#undef V - default: - return NULL; - } - } - - static void AddCode(const char* name, - Code* code, - CodeTag tag, - Script* script, - CompilationInfo* info); + enum CodeTag { NON_FUNCTION, FUNCTION }; + + // Main entry point into GDB JIT realized as a JitCodeEventHandler. 
+ static void EventHandler(const v8::JitCodeEvent* event); static void AddCode(Handle<Name> name, Handle<Script> script, Handle<Code> code, CompilationInfo* info); - static void AddCode(CodeTag tag, Name* name, Code* code); - - static void AddCode(CodeTag tag, const char* name, Code* code); + static void RemoveCodeRange(Address start, Address end); - static void AddCode(CodeTag tag, Code* code); + private: + static void AddCode(const char* name, Code* code, CodeTag tag, Script* script, + CompilationInfo* info); static void RemoveCode(Code* code); - - static void RemoveCodeRange(Address start, Address end); - - static void RegisterDetailedLineInfo(Code* code, GDBJITLineInfo* line_info); }; #define GDBJIT(action) GDBJITInterface::action diff --git a/deps/v8/src/generator.js b/deps/v8/src/generator.js index c152e3a6eea..c62fe2c7713 100644 --- a/deps/v8/src/generator.js +++ b/deps/v8/src/generator.js @@ -32,6 +32,10 @@ function GeneratorObjectThrow(exn) { return %_GeneratorThrow(this, exn); } +function GeneratorObjectIterator() { + return this; +} + function GeneratorFunctionPrototypeConstructor(x) { if (%_IsConstructCall()) { throw MakeTypeError('not_constructor', ['GeneratorFunctionPrototype']); @@ -40,10 +44,12 @@ function GeneratorFunctionPrototypeConstructor(x) { function GeneratorFunctionConstructor(arg1) { // length == 1 var source = NewFunctionString(arguments, 'function*'); - var global_receiver = %GlobalReceiver(global); + var global_proxy = %GlobalProxy(global); // Compile the string in the constructor and not a helper so that errors // appear to come from here. - var f = %_CallFunction(global_receiver, %CompileString(source, true)); + var f = %CompileString(source, true); + if (!IS_FUNCTION(f)) return f; + f = %_CallFunction(global_proxy, f); %FunctionMarkNameShouldPrintAsAnonymous(f); return f; } @@ -56,13 +62,16 @@ function SetUpGenerators() { DONT_ENUM | DONT_DELETE | READ_ONLY, ["next", GeneratorObjectNext, "throw", GeneratorObjectThrow]); - %SetProperty(GeneratorObjectPrototype, "constructor", - GeneratorFunctionPrototype, DONT_ENUM | DONT_DELETE | READ_ONLY); - %SetPrototype(GeneratorFunctionPrototype, $Function.prototype); + %FunctionSetName(GeneratorObjectIterator, '[Symbol.iterator]'); + %AddNamedProperty(GeneratorObjectPrototype, symbolIterator, + GeneratorObjectIterator, DONT_ENUM | DONT_DELETE | READ_ONLY); + %AddNamedProperty(GeneratorObjectPrototype, "constructor", + GeneratorFunctionPrototype, DONT_ENUM | DONT_DELETE | READ_ONLY); + %InternalSetPrototype(GeneratorFunctionPrototype, $Function.prototype); %SetCode(GeneratorFunctionPrototype, GeneratorFunctionPrototypeConstructor); - %SetProperty(GeneratorFunctionPrototype, "constructor", - GeneratorFunction, DONT_ENUM | DONT_DELETE | READ_ONLY); - %SetPrototype(GeneratorFunction, $Function); + %AddNamedProperty(GeneratorFunctionPrototype, "constructor", + GeneratorFunction, DONT_ENUM | DONT_DELETE | READ_ONLY); + %InternalSetPrototype(GeneratorFunction, $Function); %SetCode(GeneratorFunction, GeneratorFunctionConstructor); } diff --git a/deps/v8/src/global-handles.cc b/deps/v8/src/global-handles.cc index 168a670d2b2..940d53bb150 100644 --- a/deps/v8/src/global-handles.cc +++ b/deps/v8/src/global-handles.cc @@ -2,12 +2,12 @@ // Use of this source code is governed by a BSD-style license that can be // found in the LICENSE file. 
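Back on the gdb-jit change above: the ad-hoc AddCode(tag, ...) overloads are gone, and GDBJITInterface is now driven by a single EventHandler over v8::JitCodeEvent. A minimal sketch of a consumer of that event stream, assuming registration through the public v8::V8::SetJitCodeEventHandler entry point (the handler below is illustrative, not V8's own):

    #include "include/v8.h"
    #include <cstdio>

    // Log code objects as the VM reports them, mirroring the CODE_ADDED /
    // CODE_REMOVED cases handled by GDBJITInterface::EventHandler.
    static void LogJitCode(const v8::JitCodeEvent* event) {
      switch (event->type) {
        case v8::JitCodeEvent::CODE_ADDED:
          std::printf("added %.*s at %p\n", static_cast<int>(event->name.len),
                      event->name.str, event->code_start);
          break;
        case v8::JitCodeEvent::CODE_REMOVED:
          std::printf("removed code at %p\n", event->code_start);
          break;
        default:
          break;  // moves and line-info events ignored in this sketch
      }
    }

    // Registration would then be a one-liner, e.g.:
    //   v8::V8::SetJitCodeEventHandler(v8::kJitCodeEventDefault, LogJitCode);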
-#include "v8.h" +#include "src/v8.h" -#include "api.h" -#include "global-handles.h" +#include "src/api.h" +#include "src/global-handles.h" -#include "vm-state-inl.h" +#include "src/vm-state-inl.h" namespace v8 { namespace internal { @@ -38,13 +38,13 @@ class GlobalHandles::Node { // Maps handle location (slot) to the containing node. static Node* FromLocation(Object** location) { - ASSERT(OFFSET_OF(Node, object_) == 0); + DCHECK(OFFSET_OF(Node, object_) == 0); return reinterpret_cast<Node*>(location); } Node() { - ASSERT(OFFSET_OF(Node, class_id_) == Internals::kNodeClassIdOffset); - ASSERT(OFFSET_OF(Node, flags_) == Internals::kNodeFlagsOffset); + DCHECK(OFFSET_OF(Node, class_id_) == Internals::kNodeClassIdOffset); + DCHECK(OFFSET_OF(Node, flags_) == Internals::kNodeFlagsOffset); STATIC_ASSERT(static_cast<int>(NodeState::kMask) == Internals::kNodeStateMask); STATIC_ASSERT(WEAK == Internals::kNodeStateIsWeakValue); @@ -73,7 +73,7 @@ class GlobalHandles::Node { void Initialize(int index, Node** first_free) { index_ = static_cast<uint8_t>(index); - ASSERT(static_cast<int>(index_) == index); + DCHECK(static_cast<int>(index_) == index); set_state(FREE); set_in_new_space_list(false); parameter_or_next_free_.next_free = *first_free; @@ -81,7 +81,7 @@ class GlobalHandles::Node { } void Acquire(Object* object) { - ASSERT(state() == FREE); + DCHECK(state() == FREE); object_ = object; class_id_ = v8::HeapProfiler::kPersistentHandleNoClassId; set_independent(false); @@ -93,7 +93,7 @@ class GlobalHandles::Node { } void Release() { - ASSERT(state() != FREE); + DCHECK(state() != FREE); set_state(FREE); // Zap the values for eager trapping. object_ = reinterpret_cast<Object*>(kGlobalHandleZapValue); @@ -162,18 +162,18 @@ class GlobalHandles::Node { } void MarkPending() { - ASSERT(state() == WEAK); + DCHECK(state() == WEAK); set_state(PENDING); } // Independent flag accessors. void MarkIndependent() { - ASSERT(state() != FREE); + DCHECK(state() != FREE); set_independent(true); } void MarkPartiallyDependent() { - ASSERT(state() != FREE); + DCHECK(state() != FREE); if (GetGlobalHandles()->isolate()->heap()->InNewSpace(object_)) { set_partially_dependent(true); } @@ -186,34 +186,35 @@ class GlobalHandles::Node { // Callback parameter accessors. void set_parameter(void* parameter) { - ASSERT(state() != FREE); + DCHECK(state() != FREE); parameter_or_next_free_.parameter = parameter; } void* parameter() const { - ASSERT(state() != FREE); + DCHECK(state() != FREE); return parameter_or_next_free_.parameter; } // Accessors for next free node in the free list. Node* next_free() { - ASSERT(state() == FREE); + DCHECK(state() == FREE); return parameter_or_next_free_.next_free; } void set_next_free(Node* value) { - ASSERT(state() == FREE); + DCHECK(state() == FREE); parameter_or_next_free_.next_free = value; } void MakeWeak(void* parameter, WeakCallback weak_callback) { - ASSERT(weak_callback != NULL); - ASSERT(state() != FREE); + DCHECK(weak_callback != NULL); + DCHECK(state() != FREE); + CHECK(object_ != NULL); set_state(WEAK); set_parameter(parameter); weak_callback_ = weak_callback; } void* ClearWeakness() { - ASSERT(state() != FREE); + DCHECK(state() != FREE); void* p = parameter(); set_state(NORMAL); set_parameter(NULL); @@ -234,9 +235,9 @@ class GlobalHandles::Node { { // Check that we are not passing a finalized external string to // the callback. 
- ASSERT(!object_->IsExternalAsciiString() || + DCHECK(!object_->IsExternalAsciiString() || ExternalAsciiString::cast(object_)->resource() != NULL); - ASSERT(!object_->IsExternalTwoByteString() || + DCHECK(!object_->IsExternalTwoByteString() || ExternalTwoByteString::cast(object_)->resource() != NULL); // Leaving V8. VMState<EXTERNAL> state(isolate); @@ -315,12 +316,12 @@ class GlobalHandles::NodeBlock { } Node* node_at(int index) { - ASSERT(0 <= index && index < kSize); + DCHECK(0 <= index && index < kSize); return &nodes_[index]; } void IncreaseUses() { - ASSERT(used_nodes_ < kSize); + DCHECK(used_nodes_ < kSize); if (used_nodes_++ == 0) { NodeBlock* old_first = global_handles_->first_used_block_; global_handles_->first_used_block_ = this; @@ -332,7 +333,7 @@ class GlobalHandles::NodeBlock { } void DecreaseUses() { - ASSERT(used_nodes_ > 0); + DCHECK(used_nodes_ > 0); if (--used_nodes_ == 0) { if (next_used_ != NULL) next_used_->prev_used_ = prev_used_; if (prev_used_ != NULL) prev_used_->next_used_ = next_used_; @@ -370,7 +371,7 @@ GlobalHandles::NodeBlock* GlobalHandles::Node::FindBlock() { intptr_t ptr = reinterpret_cast<intptr_t>(this); ptr = ptr - index_ * sizeof(Node); NodeBlock* block = reinterpret_cast<NodeBlock*>(ptr); - ASSERT(block->node_at(index_) == this); + DCHECK(block->node_at(index_) == this); return block; } @@ -404,12 +405,12 @@ class GlobalHandles::NodeIterator { bool done() const { return block_ == NULL; } Node* node() const { - ASSERT(!done()); + DCHECK(!done()); return block_->node_at(index_); } void Advance() { - ASSERT(!done()); + DCHECK(!done()); if (++index_ < NodeBlock::kSize) return; index_ = 0; block_ = block_->next_used(); @@ -449,7 +450,7 @@ Handle<Object> GlobalHandles::Create(Object* value) { first_block_ = new NodeBlock(this, first_block_); first_block_->PutNodesOnFreeList(&first_free_); } - ASSERT(first_free_ != NULL); + DCHECK(first_free_ != NULL); // Take the first node in the free list. 
Node* result = first_free_; first_free_ = result->next_free(); @@ -464,7 +465,7 @@ Handle<Object> GlobalHandles::Create(Object* value) { Handle<Object> GlobalHandles::CopyGlobal(Object** location) { - ASSERT(location != NULL); + DCHECK(location != NULL); return Node::FromLocation(location)->GetGlobalHandles()->Create(*location); } @@ -543,7 +544,7 @@ void GlobalHandles::IdentifyNewSpaceWeakIndependentHandles( WeakSlotCallbackWithHeap f) { for (int i = 0; i < new_space_nodes_.length(); ++i) { Node* node = new_space_nodes_[i]; - ASSERT(node->is_in_new_space_list()); + DCHECK(node->is_in_new_space_list()); if ((node->is_independent() || node->is_partially_dependent()) && node->IsWeak() && f(isolate_->heap(), node->location())) { node->MarkPending(); @@ -555,7 +556,7 @@ void GlobalHandles::IdentifyNewSpaceWeakIndependentHandles( void GlobalHandles::IterateNewSpaceWeakIndependentRoots(ObjectVisitor* v) { for (int i = 0; i < new_space_nodes_.length(); ++i) { Node* node = new_space_nodes_[i]; - ASSERT(node->is_in_new_space_list()); + DCHECK(node->is_in_new_space_list()); if ((node->is_independent() || node->is_partially_dependent()) && node->IsWeakRetainer()) { v->VisitPointer(node->location()); @@ -571,7 +572,7 @@ bool GlobalHandles::IterateObjectGroups(ObjectVisitor* v, bool any_group_was_visited = false; for (int i = 0; i < object_groups_.length(); i++) { ObjectGroup* entry = object_groups_.at(i); - ASSERT(entry != NULL); + DCHECK(entry != NULL); Object*** objects = entry->objects; bool group_should_be_visited = false; @@ -611,17 +612,17 @@ bool GlobalHandles::IterateObjectGroups(ObjectVisitor* v, int GlobalHandles::PostGarbageCollectionProcessing( - GarbageCollector collector, GCTracer* tracer) { + GarbageCollector collector) { // Process weak global handle callbacks. This must be done after the // GC is completely done, because the callbacks may invoke arbitrary // API functions. - ASSERT(isolate_->heap()->gc_state() == Heap::NOT_IN_GC); + DCHECK(isolate_->heap()->gc_state() == Heap::NOT_IN_GC); const int initial_post_gc_processing_count = ++post_gc_processing_count_; int freed_nodes = 0; if (collector == SCAVENGER) { for (int i = 0; i < new_space_nodes_.length(); ++i) { Node* node = new_space_nodes_[i]; - ASSERT(node->is_in_new_space_list()); + DCHECK(node->is_in_new_space_list()); if (!node->IsRetainer()) { // Free nodes do not have weak callbacks. Do not use them to compute // the freed_nodes. 
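The Create()/Release() pair above is a fixed-block pool with an intrusive free list: blocks carve their nodes onto the list, allocation pops the head, and release zaps the payload and pushes the node back. A compact sketch of that shape, with hypothetical names and none of V8's weakness or block bookkeeping:

    #include <cassert>
    #include <cstddef>

    struct Node {
      void* object;     // payload slot (a handle location in V8)
      Node* next_free;  // only meaningful while the node is on the free list
    };

    class NodePool {
     public:
      NodePool() : first_free_(nullptr) {
        for (size_t i = 0; i < kSize; ++i) {  // put all nodes on the free list
          nodes_[i].next_free = first_free_;
          first_free_ = &nodes_[i];
        }
      }

      Node* Acquire(void* object) {        // cf. GlobalHandles::Create()
        assert(first_free_ != nullptr);    // V8 would allocate a new NodeBlock
        Node* result = first_free_;
        first_free_ = result->next_free;
        result->object = object;
        return result;
      }

      void Release(Node* node) {           // cf. Node::Release()
        node->object = nullptr;            // zap for eager trapping
        node->next_free = first_free_;
        first_free_ = node;
      }

     private:
      static const size_t kSize = 256;
      Node nodes_[kSize];
      Node* first_free_;
    };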
@@ -670,18 +671,18 @@ int GlobalHandles::PostGarbageCollectionProcessing( int last = 0; for (int i = 0; i < new_space_nodes_.length(); ++i) { Node* node = new_space_nodes_[i]; - ASSERT(node->is_in_new_space_list()); + DCHECK(node->is_in_new_space_list()); if (node->IsRetainer()) { if (isolate_->heap()->InNewSpace(node->object())) { new_space_nodes_[last++] = node; - tracer->increment_nodes_copied_in_new_space(); + isolate_->heap()->IncrementNodesCopiedInNewSpace(); } else { node->set_in_new_space_list(false); - tracer->increment_nodes_promoted(); + isolate_->heap()->IncrementNodesPromoted(); } } else { node->set_in_new_space_list(false); - tracer->increment_nodes_died_in_new_space(); + isolate_->heap()->IncrementNodesDiedInNewSpace(); } } new_space_nodes_.Rewind(last); @@ -817,7 +818,7 @@ void GlobalHandles::AddObjectGroup(Object*** handles, v8::RetainedObjectInfo* info) { #ifdef DEBUG for (size_t i = 0; i < length; ++i) { - ASSERT(!Node::FromLocation(handles[i])->is_independent()); + DCHECK(!Node::FromLocation(handles[i])->is_independent()); } #endif if (length == 0) { @@ -848,9 +849,9 @@ void GlobalHandles::AddImplicitReferences(HeapObject** parent, Object*** children, size_t length) { #ifdef DEBUG - ASSERT(!Node::FromLocation(BitCast<Object**>(parent))->is_independent()); + DCHECK(!Node::FromLocation(BitCast<Object**>(parent))->is_independent()); for (size_t i = 0; i < length; ++i) { - ASSERT(!Node::FromLocation(children[i])->is_independent()); + DCHECK(!Node::FromLocation(children[i])->is_independent()); } #endif if (length == 0) return; @@ -862,13 +863,13 @@ void GlobalHandles::AddImplicitReferences(HeapObject** parent, void GlobalHandles::SetReferenceFromGroup(UniqueId id, Object** child) { - ASSERT(!Node::FromLocation(child)->is_independent()); + DCHECK(!Node::FromLocation(child)->is_independent()); implicit_ref_connections_.Add(ObjectGroupConnection(id, child)); } void GlobalHandles::SetReference(HeapObject** parent, Object** child) { - ASSERT(!Node::FromLocation(child)->is_independent()); + DCHECK(!Node::FromLocation(child)->is_independent()); ImplicitRefGroup* group = new ImplicitRefGroup(parent, 1); group->children[0] = child; implicit_ref_groups_.Add(group); @@ -1020,7 +1021,7 @@ EternalHandles::~EternalHandles() { void EternalHandles::IterateAllRoots(ObjectVisitor* visitor) { int limit = size_; for (int i = 0; i < blocks_.length(); i++) { - ASSERT(limit > 0); + DCHECK(limit > 0); Object** block = blocks_[i]; visitor->VisitPointers(block, block + Min(limit, kSize)); limit -= kSize; @@ -1048,9 +1049,9 @@ void EternalHandles::PostGarbageCollectionProcessing(Heap* heap) { void EternalHandles::Create(Isolate* isolate, Object* object, int* index) { - ASSERT_EQ(kInvalidIndex, *index); + DCHECK_EQ(kInvalidIndex, *index); if (object == NULL) return; - ASSERT_NE(isolate->heap()->the_hole_value(), object); + DCHECK_NE(isolate->heap()->the_hole_value(), object); int block = size_ >> kShift; int offset = size_ & kMask; // need to resize @@ -1060,7 +1061,7 @@ void EternalHandles::Create(Isolate* isolate, Object* object, int* index) { MemsetPointer(next_block, the_hole, kSize); blocks_.Add(next_block); } - ASSERT_EQ(isolate->heap()->the_hole_value(), blocks_[block][offset]); + DCHECK_EQ(isolate->heap()->the_hole_value(), blocks_[block][offset]); blocks_[block][offset] = object; if (isolate->heap()->InNewSpace(object)) { new_space_indices_.Add(size_); diff --git a/deps/v8/src/global-handles.h b/deps/v8/src/global-handles.h index 34bd8d9ec2b..ff34821ce7d 100644 --- 
a/deps/v8/src/global-handles.h +++ b/deps/v8/src/global-handles.h @@ -5,17 +5,16 @@ #ifndef V8_GLOBAL_HANDLES_H_ #define V8_GLOBAL_HANDLES_H_ -#include "../include/v8.h" -#include "../include/v8-profiler.h" +#include "include/v8.h" +#include "include/v8-profiler.h" -#include "handles.h" -#include "list.h" -#include "utils.h" +#include "src/handles.h" +#include "src/list.h" +#include "src/utils.h" namespace v8 { namespace internal { -class GCTracer; class HeapStats; class ObjectVisitor; @@ -38,7 +37,7 @@ class ObjectVisitor; struct ObjectGroup { explicit ObjectGroup(size_t length) : info(NULL), length(length) { - ASSERT(length > 0); + DCHECK(length > 0); objects = new Object**[length]; } ~ObjectGroup(); @@ -52,7 +51,7 @@ struct ObjectGroup { struct ImplicitRefGroup { ImplicitRefGroup(HeapObject** parent, size_t length) : parent(parent), length(length) { - ASSERT(length > 0); + DCHECK(length > 0); children = new Object**[length]; } ~ImplicitRefGroup(); @@ -156,8 +155,7 @@ class GlobalHandles { // Process pending weak handles. // Returns the number of freed nodes. - int PostGarbageCollectionProcessing(GarbageCollector collector, - GCTracer* tracer); + int PostGarbageCollectionProcessing(GarbageCollector collector); // Iterates over all strong handles. void IterateStrongRoots(ObjectVisitor* v); @@ -337,7 +335,7 @@ class EternalHandles { // Grab the handle for an existing SingletonHandle. inline Handle<Object> GetSingleton(SingletonHandle singleton) { - ASSERT(Exists(singleton)); + DCHECK(Exists(singleton)); return Get(singleton_handles_[singleton]); } @@ -369,7 +367,7 @@ class EternalHandles { // Gets the slot for an index inline Object** GetLocation(int index) { - ASSERT(index >= 0 && index < size_); + DCHECK(index >= 0 && index < size_); return &blocks_[index >> kShift][index & kMask]; } diff --git a/deps/v8/src/globals.h b/deps/v8/src/globals.h index 5b4be2a6dcb..258707493ef 100644 --- a/deps/v8/src/globals.h +++ b/deps/v8/src/globals.h @@ -5,9 +5,11 @@ #ifndef V8_GLOBALS_H_ #define V8_GLOBALS_H_ -#include "../include/v8stdint.h" +#include "include/v8stdint.h" -#include "base/macros.h" +#include "src/base/build_config.h" +#include "src/base/logging.h" +#include "src/base/macros.h" // Unfortunately, the INFINITY macro cannot be used with the '-pedantic' // warning flag and certain versions of GCC due to a bug: @@ -23,93 +25,27 @@ # define V8_INFINITY INFINITY #endif -namespace v8 { -namespace internal { - -// Processor architecture detection. For more info on what's defined, see: -// http://msdn.microsoft.com/en-us/library/b0084kay.aspx -// http://www.agner.org/optimize/calling_conventions.pdf -// or with gcc, run: "echo | gcc -E -dM -" -#if defined(_M_X64) || defined(__x86_64__) -#if defined(__native_client__) -// For Native Client builds of V8, use V8_TARGET_ARCH_ARM, so that V8 -// generates ARM machine code, together with a portable ARM simulator -// compiled for the host architecture in question. -// -// Since Native Client is ILP-32 on all architectures we use -// V8_HOST_ARCH_IA32 on both 32- and 64-bit x86. 
-#define V8_HOST_ARCH_IA32 1 -#define V8_HOST_ARCH_32_BIT 1 -#define V8_HOST_CAN_READ_UNALIGNED 1 +#if V8_TARGET_ARCH_IA32 || V8_TARGET_ARCH_X64 || V8_TARGET_ARCH_ARM || \ + V8_TARGET_ARCH_ARM64 +#define V8_TURBOFAN_BACKEND 1 #else -#define V8_HOST_ARCH_X64 1 -#define V8_HOST_ARCH_64_BIT 1 -#define V8_HOST_CAN_READ_UNALIGNED 1 -#endif // __native_client__ -#elif defined(_M_IX86) || defined(__i386__) -#define V8_HOST_ARCH_IA32 1 -#define V8_HOST_ARCH_32_BIT 1 -#define V8_HOST_CAN_READ_UNALIGNED 1 -#elif defined(__AARCH64EL__) -#define V8_HOST_ARCH_ARM64 1 -#define V8_HOST_ARCH_64_BIT 1 -#define V8_HOST_CAN_READ_UNALIGNED 1 -#elif defined(__ARMEL__) -#define V8_HOST_ARCH_ARM 1 -#define V8_HOST_ARCH_32_BIT 1 -#elif defined(__MIPSEB__) || defined(__MIPSEL__) -#define V8_HOST_ARCH_MIPS 1 -#define V8_HOST_ARCH_32_BIT 1 -#else -#error "Host architecture was not detected as supported by v8" +#define V8_TURBOFAN_BACKEND 0 #endif - -#if defined(__ARM_ARCH_7A__) || \ - defined(__ARM_ARCH_7R__) || \ - defined(__ARM_ARCH_7__) -# define CAN_USE_ARMV7_INSTRUCTIONS 1 -# ifndef CAN_USE_VFP3_INSTRUCTIONS -# define CAN_USE_VFP3_INSTRUCTIONS -# endif +#if V8_TURBOFAN_BACKEND && !(V8_OS_WIN && V8_TARGET_ARCH_X64) +#define V8_TURBOFAN_TARGET 1 +#else +#define V8_TURBOFAN_TARGET 0 #endif +namespace v8 { -// Target architecture detection. This may be set externally. If not, detect -// in the same way as the host architecture, that is, target the native -// environment as presented by the compiler. -#if !V8_TARGET_ARCH_X64 && !V8_TARGET_ARCH_IA32 && \ - !V8_TARGET_ARCH_ARM && !V8_TARGET_ARCH_ARM64 && !V8_TARGET_ARCH_MIPS -#if defined(_M_X64) || defined(__x86_64__) -#define V8_TARGET_ARCH_X64 1 -#elif defined(_M_IX86) || defined(__i386__) -#define V8_TARGET_ARCH_IA32 1 -#elif defined(__AARCH64EL__) -#define V8_TARGET_ARCH_ARM64 1 -#elif defined(__ARMEL__) -#define V8_TARGET_ARCH_ARM 1 -#elif defined(__MIPSEB__) || defined(__MIPSEL__) -#define V8_TARGET_ARCH_MIPS 1 -#else -#error Target architecture was not detected as supported by v8 -#endif -#endif +namespace base { +class Mutex; +class RecursiveMutex; +class VirtualMemory; +} -// Check for supported combinations of host and target architectures. -#if V8_TARGET_ARCH_IA32 && !V8_HOST_ARCH_IA32 -#error Target architecture ia32 is only supported on ia32 host -#endif -#if V8_TARGET_ARCH_X64 && !V8_HOST_ARCH_X64 -#error Target architecture x64 is only supported on x64 host -#endif -#if (V8_TARGET_ARCH_ARM && !(V8_HOST_ARCH_IA32 || V8_HOST_ARCH_ARM)) -#error Target architecture arm is only supported on arm and ia32 host -#endif -#if (V8_TARGET_ARCH_ARM64 && !(V8_HOST_ARCH_X64 || V8_HOST_ARCH_ARM64)) -#error Target architecture arm64 is only supported on arm64 and x64 host -#endif -#if (V8_TARGET_ARCH_MIPS && !(V8_HOST_ARCH_IA32 || V8_HOST_ARCH_MIPS)) -#error Target architecture mips is only supported on mips and ia32 host -#endif +namespace internal { // Determine whether we are running in a simulated environment. // Setting USE_SIMULATOR explicitly from the build script will force @@ -124,25 +60,9 @@ namespace internal { #if (V8_TARGET_ARCH_MIPS && !V8_HOST_ARCH_MIPS) #define USE_SIMULATOR 1 #endif +#if (V8_TARGET_ARCH_MIPS64 && !V8_HOST_ARCH_MIPS64) +#define USE_SIMULATOR 1 #endif - -// Determine architecture endianness. 
-#if V8_TARGET_ARCH_IA32 -#define V8_TARGET_LITTLE_ENDIAN 1 -#elif V8_TARGET_ARCH_X64 -#define V8_TARGET_LITTLE_ENDIAN 1 -#elif V8_TARGET_ARCH_ARM -#define V8_TARGET_LITTLE_ENDIAN 1 -#elif V8_TARGET_ARCH_ARM64 -#define V8_TARGET_LITTLE_ENDIAN 1 -#elif V8_TARGET_ARCH_MIPS -#if defined(__MIPSEB__) -#define V8_TARGET_BIG_ENDIAN 1 -#else -#define V8_TARGET_LITTLE_ENDIAN 1 -#endif -#else -#error Unknown target architecture endianness #endif // Determine whether the architecture uses an out-of-line constant pool. @@ -168,60 +88,6 @@ typedef unsigned int __my_bool__; typedef uint8_t byte; typedef byte* Address; -// Define our own macros for writing 64-bit constants. This is less fragile -// than defining __STDC_CONSTANT_MACROS before including <stdint.h>, and it -// works on compilers that don't have it (like MSVC). -#if V8_CC_MSVC -# define V8_UINT64_C(x) (x ## UI64) -# define V8_INT64_C(x) (x ## I64) -# if V8_HOST_ARCH_64_BIT -# define V8_INTPTR_C(x) (x ## I64) -# define V8_PTR_PREFIX "ll" -# else -# define V8_INTPTR_C(x) (x) -# define V8_PTR_PREFIX "" -# endif // V8_HOST_ARCH_64_BIT -#elif V8_CC_MINGW64 -# define V8_UINT64_C(x) (x ## ULL) -# define V8_INT64_C(x) (x ## LL) -# define V8_INTPTR_C(x) (x ## LL) -# define V8_PTR_PREFIX "I64" -#elif V8_HOST_ARCH_64_BIT -# if V8_OS_MACOSX -# define V8_UINT64_C(x) (x ## ULL) -# define V8_INT64_C(x) (x ## LL) -# else -# define V8_UINT64_C(x) (x ## UL) -# define V8_INT64_C(x) (x ## L) -# endif -# define V8_INTPTR_C(x) (x ## L) -# define V8_PTR_PREFIX "l" -#else -# define V8_UINT64_C(x) (x ## ULL) -# define V8_INT64_C(x) (x ## LL) -# define V8_INTPTR_C(x) (x) -# define V8_PTR_PREFIX "" -#endif - -// The following macro works on both 32 and 64-bit platforms. -// Usage: instead of writing 0x1234567890123456 -// write V8_2PART_UINT64_C(0x12345678,90123456); -#define V8_2PART_UINT64_C(a, b) (((static_cast<uint64_t>(a) << 32) + 0x##b##u)) - -#define V8PRIxPTR V8_PTR_PREFIX "x" -#define V8PRIdPTR V8_PTR_PREFIX "d" -#define V8PRIuPTR V8_PTR_PREFIX "u" - -// Fix for Mac OS X defining uintptr_t as "unsigned long": -#if V8_OS_MACOSX -#undef V8PRIxPTR -#define V8PRIxPTR "lx" -#endif - -#if V8_OS_MACOSX || defined(__FreeBSD__) || defined(__OpenBSD__) -#define USING_BSD_ABI -#endif - // ----------------------------------------------------------------------------- // Constants @@ -249,7 +115,11 @@ const int kInt64Size = sizeof(int64_t); // NOLINT const int kDoubleSize = sizeof(double); // NOLINT const int kIntptrSize = sizeof(intptr_t); // NOLINT const int kPointerSize = sizeof(void*); // NOLINT +#if V8_TARGET_ARCH_X64 && V8_TARGET_ARCH_32_BIT +const int kRegisterSize = kPointerSize + kPointerSize; +#else const int kRegisterSize = kPointerSize; +#endif const int kPCOnStackSize = kRegisterSize; const int kFPOnStackSize = kRegisterSize; @@ -259,14 +129,24 @@ const int kDoubleSizeLog2 = 3; const int kPointerSizeLog2 = 3; const intptr_t kIntptrSignBit = V8_INT64_C(0x8000000000000000); const uintptr_t kUintptrAllBitsSet = V8_UINT64_C(0xFFFFFFFFFFFFFFFF); -const bool kIs64BitArch = true; +const bool kRequiresCodeRange = true; +const size_t kMaximalCodeRangeSize = 512 * MB; #else const int kPointerSizeLog2 = 2; const intptr_t kIntptrSignBit = 0x80000000; const uintptr_t kUintptrAllBitsSet = 0xFFFFFFFFu; -const bool kIs64BitArch = false; +#if V8_TARGET_ARCH_X64 && V8_TARGET_ARCH_32_BIT +// x32 port also requires code range. 
+const bool kRequiresCodeRange = true; +const size_t kMaximalCodeRangeSize = 256 * MB; +#else +const bool kRequiresCodeRange = false; +const size_t kMaximalCodeRangeSize = 0 * MB; +#endif #endif +STATIC_ASSERT(kPointerSize == (1 << kPointerSizeLog2)); + const int kBitsPerByte = 8; const int kBitsPerByteLog2 = 3; const int kBitsPerPointer = kPointerSize * kBitsPerByte; @@ -299,12 +179,6 @@ const int kUC16Size = sizeof(uc16); // NOLINT #define ROUND_UP(n, sz) (((n) + ((sz) - 1)) & ~((sz) - 1)) -// The USE(x) template is used to silence C++ compiler warnings -// issued for (yet) unused variables (typically parameters). -template <typename T> -inline void USE(T) { } - - // FUNCTION_ADDR(f) gets the address of a C function f. #define FUNCTION_ADDR(f) \ (reinterpret_cast<v8::internal::Address>(reinterpret_cast<intptr_t>(f))) @@ -333,6 +207,553 @@ template <typename T, class P = FreeStoreAllocationPolicy> class List; enum StrictMode { SLOPPY, STRICT }; +// Mask for the sign bit in a smi. +const intptr_t kSmiSignMask = kIntptrSignBit; + +const int kObjectAlignmentBits = kPointerSizeLog2; +const intptr_t kObjectAlignment = 1 << kObjectAlignmentBits; +const intptr_t kObjectAlignmentMask = kObjectAlignment - 1; + +// Desired alignment for pointers. +const intptr_t kPointerAlignment = (1 << kPointerSizeLog2); +const intptr_t kPointerAlignmentMask = kPointerAlignment - 1; + +// Desired alignment for double values. +const intptr_t kDoubleAlignment = 8; +const intptr_t kDoubleAlignmentMask = kDoubleAlignment - 1; + +// Desired alignment for generated code is 32 bytes (to improve cache line +// utilization). +const int kCodeAlignmentBits = 5; +const intptr_t kCodeAlignment = 1 << kCodeAlignmentBits; +const intptr_t kCodeAlignmentMask = kCodeAlignment - 1; + +// The owner field of a page is tagged with the page header tag. We need that +// to find out if a slot is part of a large object. If we mask out the lower +// 0xfffff bits (1M pages), go to the owner offset, and see that this field +// is tagged with the page header tag, we can just look up the owner. +// Otherwise, we know that we are somewhere (not within the first 1M) in a +// large object. +const int kPageHeaderTag = 3; +const int kPageHeaderTagSize = 2; +const intptr_t kPageHeaderTagMask = (1 << kPageHeaderTagSize) - 1; + + +// Zap-value: The value used for zapping dead objects. +// Should be a recognizable hex value tagged as a failure. 
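For the mask arithmetic behind ROUND_UP above (and the OBJECT_POINTER_ALIGN family defined further down): adding sz - 1 and clearing the low bits rounds up to the next multiple of a power of two. A small standalone check, assuming nothing from V8:

    #include <cassert>
    #include <cstdint>

    // (n + (sz - 1)) & ~(sz - 1) rounds n up to a multiple of sz,
    // valid only when sz is a power of two.
    constexpr uintptr_t RoundUp(uintptr_t n, uintptr_t sz) {
      return (n + (sz - 1)) & ~(sz - 1);
    }

    int main() {
      assert(RoundUp(13, 8) == 16);   // 8-byte pointer alignment
      assert(RoundUp(16, 8) == 16);   // aligned values are unchanged
      assert(RoundUp(33, 32) == 64);  // kCodeAlignment is 32 bytes
      return 0;
    }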
+#ifdef V8_HOST_ARCH_64_BIT +const Address kZapValue = + reinterpret_cast<Address>(V8_UINT64_C(0xdeadbeedbeadbeef)); +const Address kHandleZapValue = + reinterpret_cast<Address>(V8_UINT64_C(0x1baddead0baddeaf)); +const Address kGlobalHandleZapValue = + reinterpret_cast<Address>(V8_UINT64_C(0x1baffed00baffedf)); +const Address kFromSpaceZapValue = + reinterpret_cast<Address>(V8_UINT64_C(0x1beefdad0beefdaf)); +const uint64_t kDebugZapValue = V8_UINT64_C(0xbadbaddbbadbaddb); +const uint64_t kSlotsZapValue = V8_UINT64_C(0xbeefdeadbeefdeef); +const uint64_t kFreeListZapValue = 0xfeed1eaffeed1eaf; +#else +const Address kZapValue = reinterpret_cast<Address>(0xdeadbeef); +const Address kHandleZapValue = reinterpret_cast<Address>(0xbaddeaf); +const Address kGlobalHandleZapValue = reinterpret_cast<Address>(0xbaffedf); +const Address kFromSpaceZapValue = reinterpret_cast<Address>(0xbeefdaf); +const uint32_t kSlotsZapValue = 0xbeefdeef; +const uint32_t kDebugZapValue = 0xbadbaddb; +const uint32_t kFreeListZapValue = 0xfeed1eaf; +#endif + +const int kCodeZapValue = 0xbadc0de; + +// On Intel architecture, cache line size is 64 bytes. +// On ARM it may be less (32 bytes), but as far this constant is +// used for aligning data, it doesn't hurt to align on a greater value. +#define PROCESSOR_CACHE_LINE_SIZE 64 + +// Constants relevant to double precision floating point numbers. +// If looking only at the top 32 bits, the QNaN mask is bits 19 to 30. +const uint32_t kQuietNaNHighBitsMask = 0xfff << (51 - 32); + + +// ----------------------------------------------------------------------------- +// Forward declarations for frequently used classes + +class AccessorInfo; +class Allocation; +class Arguments; +class Assembler; +class Code; +class CodeGenerator; +class CodeStub; +class Context; +class Debug; +class Debugger; +class DebugInfo; +class Descriptor; +class DescriptorArray; +class TransitionArray; +class ExternalReference; +class FixedArray; +class FunctionTemplateInfo; +class MemoryChunk; +class SeededNumberDictionary; +class UnseededNumberDictionary; +class NameDictionary; +template <typename T> class MaybeHandle; +template <typename T> class Handle; +class Heap; +class HeapObject; +class IC; +class InterceptorInfo; +class Isolate; +class JSReceiver; +class JSArray; +class JSFunction; +class JSObject; +class LargeObjectSpace; +class LookupResult; +class MacroAssembler; +class Map; +class MapSpace; +class MarkCompactCollector; +class NewSpace; +class Object; +class OldSpace; +class Foreign; +class Scope; +class ScopeInfo; +class Script; +class Smi; +template <typename Config, class Allocator = FreeStoreAllocationPolicy> + class SplayTree; +class String; +class Name; +class Struct; +class Variable; +class RelocInfo; +class Deserializer; +class MessageLocation; + +typedef bool (*WeakSlotCallback)(Object** pointer); + +typedef bool (*WeakSlotCallbackWithHeap)(Heap* heap, Object** pointer); + +// ----------------------------------------------------------------------------- +// Miscellaneous + +// NOTE: SpaceIterator depends on AllocationSpace enumeration values being +// consecutive. +enum AllocationSpace { + NEW_SPACE, // Semispaces collected with copying collector. + OLD_POINTER_SPACE, // May contain pointers to new space. + OLD_DATA_SPACE, // Must not have pointers to new space. + CODE_SPACE, // No pointers to new space, marked executable. + MAP_SPACE, // Only and all map objects. + CELL_SPACE, // Only and all cell objects. + PROPERTY_CELL_SPACE, // Only and all global property cell objects. 
+ LO_SPACE, // Promoted large objects. + INVALID_SPACE, // Only used in AllocationResult to signal success. + + FIRST_SPACE = NEW_SPACE, + LAST_SPACE = LO_SPACE, + FIRST_PAGED_SPACE = OLD_POINTER_SPACE, + LAST_PAGED_SPACE = PROPERTY_CELL_SPACE +}; +const int kSpaceTagSize = 3; +const int kSpaceTagMask = (1 << kSpaceTagSize) - 1; + + +// A flag that indicates whether objects should be pretenured when +// allocated (allocated directly into the old generation) or not +// (allocated in the young generation if the object size and type +// allows). +enum PretenureFlag { NOT_TENURED, TENURED }; + +enum MinimumCapacity { + USE_DEFAULT_MINIMUM_CAPACITY, + USE_CUSTOM_MINIMUM_CAPACITY +}; + +enum GarbageCollector { SCAVENGER, MARK_COMPACTOR }; + +enum Executability { NOT_EXECUTABLE, EXECUTABLE }; + +enum VisitMode { + VISIT_ALL, + VISIT_ALL_IN_SCAVENGE, + VISIT_ALL_IN_SWEEP_NEWSPACE, + VISIT_ONLY_STRONG +}; + +// Flag indicating whether code is built into the VM (one of the natives files). +enum NativesFlag { NOT_NATIVES_CODE, NATIVES_CODE }; + + +// A CodeDesc describes a buffer holding instructions and relocation +// information. The instructions start at the beginning of the buffer +// and grow forward, the relocation information starts at the end of +// the buffer and grows backward. +// +// |<--------------- buffer_size ---------------->| +// |<-- instr_size -->| |<-- reloc_size -->| +// +==================+========+==================+ +// | instructions | free | reloc info | +// +==================+========+==================+ +// ^ +// | +// buffer + +struct CodeDesc { + byte* buffer; + int buffer_size; + int instr_size; + int reloc_size; + Assembler* origin; +}; + + +// Callback function used for iterating objects in heap spaces, +// for example, scanning heap objects. +typedef int (*HeapObjectCallback)(HeapObject* obj); + + +// Callback function used for checking constraints when copying/relocating +// objects. Returns true if an object can be copied/relocated from its +// old_addr to a new_addr. +typedef bool (*ConstraintCallback)(Address new_addr, Address old_addr); + + +// Callback function on inline caches, used for iterating over inline caches +// in compiled code. +typedef void (*InlineCacheCallback)(Code* code, Address ic); + + +// State for inline cache call sites. Aliased as IC::State. +enum InlineCacheState { + // Has never been executed. + UNINITIALIZED, + // Has been executed but monomorphic state has been delayed. + PREMONOMORPHIC, + // Has been executed and only one receiver type has been seen. + MONOMORPHIC, + // Check failed due to prototype (or map deprecation). + PROTOTYPE_FAILURE, + // Multiple receiver types have been seen. + POLYMORPHIC, + // Many receiver types have been seen. + MEGAMORPHIC, + // A generic handler is installed and no extra typefeedback is recorded. + GENERIC, + // Special state for debug break or step in prepare stubs. + DEBUG_STUB, + // Type-vector-based ICs have a default state, with the full calculation + // of IC state only determined by a look at the IC and the typevector + // together. + DEFAULT +}; + + +enum CallFunctionFlags { + NO_CALL_FUNCTION_FLAGS, + CALL_AS_METHOD, + // Always wrap the receiver and call to the JSFunction. Only use this flag when + // both the receiver type and the target method are statically known. + WRAP_AND_CALL +}; + + +enum CallConstructorFlags { + NO_CALL_CONSTRUCTOR_FLAGS, + // The call target is cached in the instruction stream.
+ RECORD_CONSTRUCTOR_TARGET +}; + + +enum CacheHolderFlag { + kCacheOnPrototype, + kCacheOnPrototypeReceiverIsDictionary, + kCacheOnPrototypeReceiverIsPrimitive, + kCacheOnReceiver +}; + + +// The Store Buffer (GC). +typedef enum { + kStoreBufferFullEvent, + kStoreBufferStartScanningPagesEvent, + kStoreBufferScanningPageEvent +} StoreBufferEvent; + + +typedef void (*StoreBufferCallback)(Heap* heap, + MemoryChunk* page, + StoreBufferEvent event); + + +// Union used for fast testing of specific double values. +union DoubleRepresentation { + double value; + int64_t bits; + DoubleRepresentation(double x) { value = x; } + bool operator==(const DoubleRepresentation& other) const { + return bits == other.bits; + } +}; + + +// Union used for customized checking of the IEEE double types +// inlined within v8 runtime, rather than going to the underlying +// platform headers and libraries +union IeeeDoubleLittleEndianArchType { + double d; + struct { + unsigned int man_low :32; + unsigned int man_high :20; + unsigned int exp :11; + unsigned int sign :1; + } bits; +}; + + +union IeeeDoubleBigEndianArchType { + double d; + struct { + unsigned int sign :1; + unsigned int exp :11; + unsigned int man_high :20; + unsigned int man_low :32; + } bits; +}; + + +// AccessorCallback +struct AccessorDescriptor { + Object* (*getter)(Isolate* isolate, Object* object, void* data); + Object* (*setter)( + Isolate* isolate, JSObject* object, Object* value, void* data); + void* data; +}; + + +// Logging and profiling. A StateTag represents a possible state of +// the VM. The logger maintains a stack of these. Creating a VMState +// object enters a state by pushing on the stack, and destroying a +// VMState object leaves a state by popping the current state from the +// stack. + +enum StateTag { + JS, + GC, + COMPILER, + OTHER, + EXTERNAL, + IDLE +}; + + +// ----------------------------------------------------------------------------- +// Macros + +// Testers for test. + +#define HAS_SMI_TAG(value) \ + ((reinterpret_cast<intptr_t>(value) & kSmiTagMask) == kSmiTag) + +#define HAS_FAILURE_TAG(value) \ + ((reinterpret_cast<intptr_t>(value) & kFailureTagMask) == kFailureTag) + +// OBJECT_POINTER_ALIGN returns the value aligned as a HeapObject pointer +#define OBJECT_POINTER_ALIGN(value) \ + (((value) + kObjectAlignmentMask) & ~kObjectAlignmentMask) + +// POINTER_SIZE_ALIGN returns the value aligned as a pointer. +#define POINTER_SIZE_ALIGN(value) \ + (((value) + kPointerAlignmentMask) & ~kPointerAlignmentMask) + +// CODE_POINTER_ALIGN returns the value aligned as a generated code segment. +#define CODE_POINTER_ALIGN(value) \ + (((value) + kCodeAlignmentMask) & ~kCodeAlignmentMask) + +// Support for tracking C++ memory allocation. Insert TRACK_MEMORY("Fisk") +// inside a C++ class and new and delete will be overloaded so logging is +// performed. +// This file (globals.h) is included before log.h, so we use direct calls to +// the Logger rather than the LOG macro. +#ifdef DEBUG +#define TRACK_MEMORY(name) \ + void* operator new(size_t size) { \ + void* result = ::operator new(size); \ + Logger::NewEventStatic(name, result, size); \ + return result; \ + } \ + void operator delete(void* object) { \ + Logger::DeleteEventStatic(name, object); \ + ::operator delete(object); \ + } +#else +#define TRACK_MEMORY(name) +#endif + + +// CPU feature flags. 
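The DoubleRepresentation union above is the usual bit-punning idiom: read a double back as an int64_t so exact bit patterns (NaN payloads, the sign of zero) can be compared where double comparison would not distinguish them. A simplified standalone sketch of the pattern (the real union also has a converting constructor and operator==):

    #include <cstdint>
    #include <cstdio>

    union DoubleRepresentation {
      double value;
      int64_t bits;
    };

    int main() {
      DoubleRepresentation a, b;
      a.value = 0.0;
      b.value = -0.0;  // compares equal as a double, differs in the sign bit
      std::printf("equal as doubles: %d, equal as bits: %d\n",
                  a.value == b.value, a.bits == b.bits);  // prints 1, 0
      return 0;
    }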
+enum CpuFeature { + // x86 + SSE4_1, + SSE3, + SAHF, + // ARM + VFP3, + ARMv7, + SUDIV, + MLS, + UNALIGNED_ACCESSES, + MOVW_MOVT_IMMEDIATE_LOADS, + VFP32DREGS, + NEON, + // MIPS + FPU, + // ARM64 + ALWAYS_ALIGN_CSP, + NUMBER_OF_CPU_FEATURES +}; + + +// Used to specify if a macro instruction must perform a smi check on tagged +// values. +enum SmiCheckType { + DONT_DO_SMI_CHECK, + DO_SMI_CHECK +}; + + +enum ScopeType { + EVAL_SCOPE, // The top-level scope for an eval source. + FUNCTION_SCOPE, // The top-level scope for a function. + MODULE_SCOPE, // The scope introduced by a module literal + GLOBAL_SCOPE, // The top-level scope for a program or a top-level eval. + CATCH_SCOPE, // The scope introduced by catch. + BLOCK_SCOPE, // The scope introduced by a new block. + WITH_SCOPE // The scope introduced by with. +}; + + +const uint32_t kHoleNanUpper32 = 0x7FFFFFFF; +const uint32_t kHoleNanLower32 = 0xFFFFFFFF; +const uint32_t kNaNOrInfinityLowerBoundUpper32 = 0x7FF00000; + +const uint64_t kHoleNanInt64 = + (static_cast<uint64_t>(kHoleNanUpper32) << 32) | kHoleNanLower32; +const uint64_t kLastNonNaNInt64 = + (static_cast<uint64_t>(kNaNOrInfinityLowerBoundUpper32) << 32); + + +// The order of this enum has to be kept in sync with the predicates below. +enum VariableMode { + // User declared variables: + VAR, // declared via 'var', and 'function' declarations + + CONST_LEGACY, // declared via legacy 'const' declarations + + LET, // declared via 'let' declarations (first lexical) + + CONST, // declared via 'const' declarations + + MODULE, // declared via 'module' declaration (last lexical) + + // Variables introduced by the compiler: + INTERNAL, // like VAR, but not user-visible (may or may not + // be in a context) + + TEMPORARY, // temporary variables (not user-visible), stack-allocated + // unless the scope as a whole has forced context allocation + + DYNAMIC, // always require dynamic lookup (we don't know + // the declaration) + + DYNAMIC_GLOBAL, // requires dynamic lookup, but we know that the + // variable is global unless it has been shadowed + // by an eval-introduced variable + + DYNAMIC_LOCAL // requires dynamic lookup, but we know that the + // variable is local and where it is unless it + // has been shadowed by an eval-introduced + // variable +}; + + +inline bool IsDynamicVariableMode(VariableMode mode) { + return mode >= DYNAMIC && mode <= DYNAMIC_LOCAL; +} + + +inline bool IsDeclaredVariableMode(VariableMode mode) { + return mode >= VAR && mode <= MODULE; +} + + +inline bool IsLexicalVariableMode(VariableMode mode) { + return mode >= LET && mode <= MODULE; +} + + +inline bool IsImmutableVariableMode(VariableMode mode) { + return (mode >= CONST && mode <= MODULE) || mode == CONST_LEGACY; +} + + +// ES6 Draft Rev3 10.2 specifies declarative environment records with mutable +// and immutable bindings that can be in two states: initialized and +// uninitialized. In ES5 only immutable bindings have these two states. When +// accessing a binding, it needs to be checked for initialization. However in +// the following cases the binding is initialized immediately after creation +// so the initialization check can always be skipped: +// 1. Var declared local variables. +// var foo; +// 2. A local variable introduced by a function declaration. +// function foo() {} +// 3. Parameters +// function x(foo) {} +// 4. Catch bound variables. +// try {} catch (foo) {} +// 6. Function variables of named function expressions. +// var x = function foo() {} +// 7. Implicit binding of 'this'. +// 8. 
Implicit binding of 'arguments' in functions. +// +// ES5 specified object environment records which are introduced by ES elements +// such as Program and WithStatement that associate identifier bindings with the +// properties of some object. In the specification only mutable bindings exist +// (which may be non-writable) and have no distinct initialization step. However +// V8 allows const declarations in global code with distinct creation and +// initialization steps which are represented by non-writable properties in the +// global object. As a result also these bindings need to be checked for +// initialization. +// +// The following enum specifies a flag that indicates if the binding needs a +// distinct initialization step (kNeedsInitialization) or if the binding is +// immediately initialized upon creation (kCreatedInitialized). +enum InitializationFlag { + kNeedsInitialization, + kCreatedInitialized +}; + + +enum MaybeAssignedFlag { kNotAssigned, kMaybeAssigned }; + + +enum ClearExceptionFlag { + KEEP_EXCEPTION, + CLEAR_EXCEPTION +}; + + +enum MinusZeroMode { + TREAT_MINUS_ZERO_AS_ZERO, + FAIL_ON_MINUS_ZERO +}; + } } // namespace v8::internal +namespace i = v8::internal; + #endif // V8_GLOBALS_H_ diff --git a/deps/v8/src/handles-inl.h b/deps/v8/src/handles-inl.h index d9f8c69c1b1..65b78c5deb1 100644 --- a/deps/v8/src/handles-inl.h +++ b/deps/v8/src/handles-inl.h @@ -6,10 +6,10 @@ #ifndef V8_HANDLES_INL_H_ #define V8_HANDLES_INL_H_ -#include "api.h" -#include "handles.h" -#include "heap.h" -#include "isolate.h" +#include "src/api.h" +#include "src/handles.h" +#include "src/heap/heap.h" +#include "src/isolate.h" namespace v8 { namespace internal { @@ -29,7 +29,7 @@ Handle<T>::Handle(T* obj, Isolate* isolate) { template <typename T> inline bool Handle<T>::is_identical_to(const Handle<T> o) const { // Dereferencing deferred handles to check object equality is safe. - SLOW_ASSERT( + SLOW_DCHECK( (location_ == NULL || IsDereferenceAllowed(NO_DEFERRED_CHECK)) && (o.location_ == NULL || o.IsDereferenceAllowed(NO_DEFERRED_CHECK))); if (location_ == o.location_) return true; @@ -40,13 +40,13 @@ inline bool Handle<T>::is_identical_to(const Handle<T> o) const { template <typename T> inline T* Handle<T>::operator*() const { - SLOW_ASSERT(IsDereferenceAllowed(INCLUDE_DEFERRED_CHECK)); + SLOW_DCHECK(IsDereferenceAllowed(INCLUDE_DEFERRED_CHECK)); return *BitCast<T**>(location_); } template <typename T> inline T** Handle<T>::location() const { - SLOW_ASSERT(location_ == NULL || + SLOW_DCHECK(location_ == NULL || IsDereferenceAllowed(INCLUDE_DEFERRED_CHECK)); return location_; } @@ -54,7 +54,7 @@ inline T** Handle<T>::location() const { #ifdef DEBUG template <typename T> bool Handle<T>::IsDereferenceAllowed(DereferenceCheckMode mode) const { - ASSERT(location_ != NULL); + DCHECK(location_ != NULL); Object* object = *BitCast<T**>(location_); if (object->IsSmi()) return true; HeapObject* heap_object = HeapObject::cast(object); @@ -123,7 +123,7 @@ Handle<T> HandleScope::CloseAndEscape(Handle<T> handle_value) { // Throw away all handles in the current scope. CloseScope(isolate_, prev_next_, prev_limit_); // Allocate one handle in the parent scope. - ASSERT(current->level > 0); + DCHECK(current->level > 0); Handle<T> result(CreateHandle<T>(isolate_, value)); // Reinitialize the current scope (so that it's ready // to be used or closed again). 
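HandleScope's allocation scheme, visible in CreateHandle() just below, is a bump-pointer arena: a next/limit pair, one pointer bump per handle, block extension on exhaustion, and scope exit simply rewinds next. A stripped-down sketch of the same shape (hypothetical names, no isolate plumbing or block growth):

    #include <cassert>
    #include <cstddef>

    class ScopeArena {
     public:
      ScopeArena() : next_(slots_), limit_(slots_ + kBlockSize) {}

      void** CreateHandle(void* value) {
        assert(next_ < limit_);  // V8 calls Extend() here instead
        void** slot = next_++;   // bump the cursor by one slot
        *slot = value;
        return slot;
      }

      void** Mark() const { return next_; }       // saved on scope entry
      void Rewind(void** mark) { next_ = mark; }  // restored on scope exit

     private:
      static const size_t kBlockSize = 1024;      // cf. kHandleBlockSize
      void* slots_[kBlockSize];
      void** next_;
      void** limit_;
    };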
@@ -136,14 +136,14 @@ Handle<T> HandleScope::CloseAndEscape(Handle<T> handle_value) { template <typename T> T** HandleScope::CreateHandle(Isolate* isolate, T* value) { - ASSERT(AllowHandleAllocation::IsAllowed()); + DCHECK(AllowHandleAllocation::IsAllowed()); HandleScopeData* current = isolate->handle_scope_data(); internal::Object** cur = current->next; if (cur == current->limit) cur = Extend(isolate); // Update the current next field, set the value in the created // handle, and return the result. - ASSERT(cur < current->limit); + DCHECK(cur < current->limit); current->next = cur + 1; T** result = reinterpret_cast<T**>(cur); @@ -170,9 +170,9 @@ inline SealHandleScope::~SealHandleScope() { // Restore state in current handle scope to re-enable handle // allocations. HandleScopeData* current = isolate_->handle_scope_data(); - ASSERT_EQ(0, current->level); + DCHECK_EQ(0, current->level); current->level = level_; - ASSERT_EQ(current->next, current->limit); + DCHECK_EQ(current->next, current->limit); current->limit = limit_; } diff --git a/deps/v8/src/handles.cc b/deps/v8/src/handles.cc index 89e034e9d4b..d9b130f3cac 100644 --- a/deps/v8/src/handles.cc +++ b/deps/v8/src/handles.cc @@ -2,9 +2,9 @@ // Use of this source code is governed by a BSD-style license that can be // found in the LICENSE file. -#include "v8.h" +#include "src/v8.h" -#include "handles.h" +#include "src/handles.h" namespace v8 { namespace internal { @@ -24,7 +24,7 @@ Object** HandleScope::Extend(Isolate* isolate) { Object** result = current->next; - ASSERT(result == current->limit); + DCHECK(result == current->limit); // Make sure there's at least one scope on the stack and that the // top of the scope stack isn't a barrier. if (!Utils::ApiCheck(current->level != 0, @@ -39,7 +39,7 @@ Object** HandleScope::Extend(Isolate* isolate) { Object** limit = &impl->blocks()->last()[kHandleBlockSize]; if (current->limit != limit) { current->limit = limit; - ASSERT(limit - current->next < kHandleBlockSize); + DCHECK(limit - current->next < kHandleBlockSize); } } @@ -66,7 +66,7 @@ void HandleScope::DeleteExtensions(Isolate* isolate) { #ifdef ENABLE_HANDLE_ZAPPING void HandleScope::ZapRange(Object** start, Object** end) { - ASSERT(end - start <= kHandleBlockSize); + DCHECK(end - start <= kHandleBlockSize); for (Object** p = start; p != end; p++) { *reinterpret_cast<Address*>(p) = v8::internal::kHandleZapValue; } @@ -95,7 +95,7 @@ DeferredHandleScope::DeferredHandleScope(Isolate* isolate) HandleScopeData* data = impl_->isolate()->handle_scope_data(); Object** new_next = impl_->GetSpareOrNewBlock(); Object** new_limit = &new_next[kHandleBlockSize]; - ASSERT(data->limit == &impl_->blocks()->last()[kHandleBlockSize]); + DCHECK(data->limit == &impl_->blocks()->last()[kHandleBlockSize]); impl_->blocks()->Add(new_next); #ifdef DEBUG @@ -111,8 +111,8 @@ DeferredHandleScope::DeferredHandleScope(Isolate* isolate) DeferredHandleScope::~DeferredHandleScope() { impl_->isolate()->handle_scope_data()->level--; - ASSERT(handles_detached_); - ASSERT(impl_->isolate()->handle_scope_data()->level == prev_level_); + DCHECK(handles_detached_); + DCHECK(impl_->isolate()->handle_scope_data()->level == prev_level_); } diff --git a/deps/v8/src/handles.h b/deps/v8/src/handles.h index 5dc4a5ddda2..577e83a9041 100644 --- a/deps/v8/src/handles.h +++ b/deps/v8/src/handles.h @@ -5,7 +5,7 @@ #ifndef V8_HANDLES_H_ #define V8_HANDLES_H_ -#include "objects.h" +#include "src/objects.h" namespace v8 { namespace internal { @@ -44,10 +44,10 @@ class MaybeHandle { location_ = 
reinterpret_cast<T**>(maybe_handle.location_); } - INLINE(void Assert()) { ASSERT(location_ != NULL); } - INLINE(void Check()) { CHECK(location_ != NULL); } + INLINE(void Assert() const) { DCHECK(location_ != NULL); } + INLINE(void Check() const) { CHECK(location_ != NULL); } - INLINE(Handle<T> ToHandleChecked()) { + INLINE(Handle<T> ToHandleChecked()) const { Check(); return Handle<T>(location_); } diff --git a/deps/v8/src/harmony-math.js b/deps/v8/src/harmony-math.js deleted file mode 100644 index 505e9a163f4..00000000000 --- a/deps/v8/src/harmony-math.js +++ /dev/null @@ -1,246 +0,0 @@ -// Copyright 2013 the V8 project authors. All rights reserved. -// Use of this source code is governed by a BSD-style license that can be -// found in the LICENSE file. - -'use strict'; - -// ES6 draft 09-27-13, section 20.2.2.28. -function MathSign(x) { - x = TO_NUMBER_INLINE(x); - if (x > 0) return 1; - if (x < 0) return -1; - if (x === 0) return x; - return NAN; -} - - -// ES6 draft 09-27-13, section 20.2.2.34. -function MathTrunc(x) { - x = TO_NUMBER_INLINE(x); - if (x > 0) return MathFloor(x); - if (x < 0) return MathCeil(x); - if (x === 0) return x; - return NAN; -} - - -// ES6 draft 09-27-13, section 20.2.2.30. -function MathSinh(x) { - if (!IS_NUMBER(x)) x = NonNumberToNumber(x); - // Idempotent for NaN, +/-0 and +/-Infinity. - if (x === 0 || !NUMBER_IS_FINITE(x)) return x; - return (MathExp(x) - MathExp(-x)) / 2; -} - - -// ES6 draft 09-27-13, section 20.2.2.12. -function MathCosh(x) { - if (!IS_NUMBER(x)) x = NonNumberToNumber(x); - if (!NUMBER_IS_FINITE(x)) return MathAbs(x); - return (MathExp(x) + MathExp(-x)) / 2; -} - - -// ES6 draft 09-27-13, section 20.2.2.33. -function MathTanh(x) { - if (!IS_NUMBER(x)) x = NonNumberToNumber(x); - // Idempotent for +/-0. - if (x === 0) return x; - // Returns +/-1 for +/-Infinity. - if (!NUMBER_IS_FINITE(x)) return MathSign(x); - var exp1 = MathExp(x); - var exp2 = MathExp(-x); - return (exp1 - exp2) / (exp1 + exp2); -} - - -// ES6 draft 09-27-13, section 20.2.2.5. -function MathAsinh(x) { - if (!IS_NUMBER(x)) x = NonNumberToNumber(x); - // Idempotent for NaN, +/-0 and +/-Infinity. - if (x === 0 || !NUMBER_IS_FINITE(x)) return x; - if (x > 0) return MathLog(x + MathSqrt(x * x + 1)); - // This is to prevent numerical errors caused by large negative x. - return -MathLog(-x + MathSqrt(x * x + 1)); -} - - -// ES6 draft 09-27-13, section 20.2.2.3. -function MathAcosh(x) { - if (!IS_NUMBER(x)) x = NonNumberToNumber(x); - if (x < 1) return NAN; - // Idempotent for NaN and +Infinity. - if (!NUMBER_IS_FINITE(x)) return x; - return MathLog(x + MathSqrt(x + 1) * MathSqrt(x - 1)); -} - - -// ES6 draft 09-27-13, section 20.2.2.7. -function MathAtanh(x) { - if (!IS_NUMBER(x)) x = NonNumberToNumber(x); - // Idempotent for +/-0. - if (x === 0) return x; - // Returns NaN for NaN and +/- Infinity. - if (!NUMBER_IS_FINITE(x)) return NAN; - return 0.5 * MathLog((1 + x) / (1 - x)); -} - - -// ES6 draft 09-27-13, section 20.2.2.21. -function MathLog10(x) { - return MathLog(x) * 0.434294481903251828; // log10(x) = log(x)/log(10). -} - - -// ES6 draft 09-27-13, section 20.2.2.22. -function MathLog2(x) { - return MathLog(x) * 1.442695040888963407; // log2(x) = log(x)/log(2). -} - - -// ES6 draft 09-27-13, section 20.2.2.17. -function MathHypot(x, y) { // Function length is 2. - // We may want to introduce fast paths for two arguments and when - // normalization to avoid overflow is not necessary. For now, we - // simply assume the general case. 
- var length = %_ArgumentsLength(); - var args = new InternalArray(length); - var max = 0; - for (var i = 0; i < length; i++) { - var n = %_Arguments(i); - if (!IS_NUMBER(n)) n = NonNumberToNumber(n); - if (n === INFINITY || n === -INFINITY) return INFINITY; - n = MathAbs(n); - if (n > max) max = n; - args[i] = n; - } - - // Kahan summation to avoid rounding errors. - // Normalize the numbers to the largest one to avoid overflow. - if (max === 0) max = 1; - var sum = 0; - var compensation = 0; - for (var i = 0; i < length; i++) { - var n = args[i] / max; - var summand = n * n - compensation; - var preliminary = sum + summand; - compensation = (preliminary - sum) - summand; - sum = preliminary; - } - return MathSqrt(sum) * max; -} - - -// ES6 draft 09-27-13, section 20.2.2.16. -function MathFround(x) { - return %MathFround(TO_NUMBER_INLINE(x)); -} - - -function MathClz32(x) { - x = ToUint32(TO_NUMBER_INLINE(x)); - if (x == 0) return 32; - var result = 0; - // Binary search. - if ((x & 0xFFFF0000) === 0) { x <<= 16; result += 16; }; - if ((x & 0xFF000000) === 0) { x <<= 8; result += 8; }; - if ((x & 0xF0000000) === 0) { x <<= 4; result += 4; }; - if ((x & 0xC0000000) === 0) { x <<= 2; result += 2; }; - if ((x & 0x80000000) === 0) { x <<= 1; result += 1; }; - return result; -} - - -// ES6 draft 09-27-13, section 20.2.2.9. -// Cube root approximation, refer to: http://metamerist.com/cbrt/cbrt.htm -// Using initial approximation adapted from Kahan's cbrt and 4 iterations -// of Newton's method. -function MathCbrt(x) { - if (!IS_NUMBER(x)) x = NonNumberToNumber(x); - if (x == 0 || !NUMBER_IS_FINITE(x)) return x; - return x >= 0 ? CubeRoot(x) : -CubeRoot(-x); -} - -macro NEWTON_ITERATION_CBRT(x, approx) - (1.0 / 3.0) * (x / (approx * approx) + 2 * approx); -endmacro - -function CubeRoot(x) { - var approx_hi = MathFloor(%_DoubleHi(x) / 3) + 0x2A9F7893; - var approx = %_ConstructDouble(approx_hi, 0); - approx = NEWTON_ITERATION_CBRT(x, approx); - approx = NEWTON_ITERATION_CBRT(x, approx); - approx = NEWTON_ITERATION_CBRT(x, approx); - return NEWTON_ITERATION_CBRT(x, approx); -} - - - -// ES6 draft 09-27-13, section 20.2.2.14. -// Use Taylor series to approximate. -// exp(x) - 1 at 0 == -1 + exp(0) + exp'(0)*x/1! + exp''(0)*x^2/2! + ... -// == x/1! + x^2/2! + x^3/3! + ... -// The closer x is to 0, the fewer terms are required. -function MathExpm1(x) { - if (!IS_NUMBER(x)) x = NonNumberToNumber(x); - var xabs = MathAbs(x); - if (xabs < 2E-7) { - return x * (1 + x * (1/2)); - } else if (xabs < 6E-5) { - return x * (1 + x * (1/2 + x * (1/6))); - } else if (xabs < 2E-2) { - return x * (1 + x * (1/2 + x * (1/6 + - x * (1/24 + x * (1/120 + x * (1/720)))))); - } else { // Use regular exp if not close enough to 0. - return MathExp(x) - 1; - } -} - - -// ES6 draft 09-27-13, section 20.2.2.20. -// Use Taylor series to approximate. With y = x + 1; -// log(y) at 1 == log(1) + log'(1)(y-1)/1! + log''(1)(y-1)^2/2! + ... -// == 0 + x - x^2/2 + x^3/3 ... -// The closer x is to 0, the fewer terms are required. -function MathLog1p(x) { - if (!IS_NUMBER(x)) x = NonNumberToNumber(x); - var xabs = MathAbs(x); - if (xabs < 1E-7) { - return x * (1 - x * (1/2)); - } else if (xabs < 3E-5) { - return x * (1 - x * (1/2 - x * (1/3))); - } else if (xabs < 7E-3) { - return x * (1 - x * (1/2 - x * (1/3 - x * (1/4 - - x * (1/5 - x * (1/6 - x * (1/7))))))); - } else { // Use regular log if not close enough to 0. 
- return MathLog(1 + x); - } -} - - -function ExtendMath() { - %CheckIsBootstrapping(); - - // Set up the non-enumerable functions on the Math object. - InstallFunctions($Math, DONT_ENUM, $Array( - "sign", MathSign, - "trunc", MathTrunc, - "sinh", MathSinh, - "cosh", MathCosh, - "tanh", MathTanh, - "asinh", MathAsinh, - "acosh", MathAcosh, - "atanh", MathAtanh, - "log10", MathLog10, - "log2", MathLog2, - "hypot", MathHypot, - "fround", MathFround, - "clz32", MathClz32, - "cbrt", MathCbrt, - "log1p", MathLog1p, - "expm1", MathExpm1 - )); -} - - -ExtendMath(); diff --git a/deps/v8/src/harmony-string.js b/deps/v8/src/harmony-string.js index 4cd8e6687ed..ae13745cdbf 100644 --- a/deps/v8/src/harmony-string.js +++ b/deps/v8/src/harmony-string.js @@ -120,17 +120,71 @@ function StringContains(searchString /* position */) { // length == 1 } +// ES6 Draft 05-22-2014, section 21.1.3.3 +function StringCodePointAt(pos) { + CHECK_OBJECT_COERCIBLE(this, "String.prototype.codePointAt"); + + var string = TO_STRING_INLINE(this); + var size = string.length; + pos = TO_INTEGER(pos); + if (pos < 0 || pos >= size) { + return UNDEFINED; + } + var first = %_StringCharCodeAt(string, pos); + if (first < 0xD800 || first > 0xDBFF || pos + 1 == size) { + return first; + } + var second = %_StringCharCodeAt(string, pos + 1); + if (second < 0xDC00 || second > 0xDFFF) { + return first; + } + return (first - 0xD800) * 0x400 + second + 0x2400; +} + + +// ES6 Draft 05-22-2014, section 21.1.2.2 +function StringFromCodePoint(_) { // length = 1 + var code; + var length = %_ArgumentsLength(); + var index; + var result = ""; + for (index = 0; index < length; index++) { + code = %_Arguments(index); + if (!%_IsSmi(code)) { + code = ToNumber(code); + } + if (code < 0 || code > 0x10FFFF || code !== TO_INTEGER(code)) { + throw MakeRangeError("invalid_code_point", [code]); + } + if (code <= 0xFFFF) { + result += %_StringCharFromCode(code); + } else { + code -= 0x10000; + result += %_StringCharFromCode((code >>> 10) & 0x3FF | 0xD800); + result += %_StringCharFromCode(code & 0x3FF | 0xDC00); + } + } + return result; +} + + // ------------------------------------------------------------------- function ExtendStringPrototype() { %CheckIsBootstrapping(); + // Set up the non-enumerable functions on the String object. + InstallFunctions($String, DONT_ENUM, $Array( + "fromCodePoint", StringFromCodePoint + )); + // Set up the non-enumerable functions on the String prototype object. InstallFunctions($String.prototype, DONT_ENUM, $Array( - "repeat", StringRepeat, - "startsWith", StringStartsWith, + "codePointAt", StringCodePointAt, + "contains", StringContains, "endsWith", StringEndsWith, - "contains", StringContains + "repeat", StringRepeat, + "startsWith", StringStartsWith )); } diff --git a/deps/v8/src/hashmap.h b/deps/v8/src/hashmap.h index 4a363b7ce00..26dbd584736 100644 --- a/deps/v8/src/hashmap.h +++ b/deps/v8/src/hashmap.h @@ -5,9 +5,9 @@ #ifndef V8_HASHMAP_H_ #define V8_HASHMAP_H_ -#include "allocation.h" -#include "checks.h" -#include "utils.h" +#include "src/allocation.h" +#include "src/base/logging.h" +#include "src/utils.h" namespace v8 { namespace internal { @@ -164,7 +164,7 @@ void* TemplateHashMapImpl<AllocationPolicy>::Remove(void* key, uint32_t hash) { // This guarantees loop termination as there is at least one empty entry so // eventually the removed entry will have an empty entry after it. - ASSERT(occupancy_ < capacity_); + DCHECK(occupancy_ < capacity_); // p is the candidate entry to clear. q is used to scan forwards. 
Entry* q = p; // Start at the entry to remove. @@ -224,7 +224,7 @@ template<class AllocationPolicy> typename TemplateHashMapImpl<AllocationPolicy>::Entry* TemplateHashMapImpl<AllocationPolicy>::Next(Entry* p) const { const Entry* end = map_end(); - ASSERT(map_ - 1 <= p && p < end); + DCHECK(map_ - 1 <= p && p < end); for (p++; p < end; p++) { if (p->key != NULL) { return p; @@ -237,14 +237,14 @@ typename TemplateHashMapImpl<AllocationPolicy>::Entry* template<class AllocationPolicy> typename TemplateHashMapImpl<AllocationPolicy>::Entry* TemplateHashMapImpl<AllocationPolicy>::Probe(void* key, uint32_t hash) { - ASSERT(key != NULL); + DCHECK(key != NULL); - ASSERT(IsPowerOf2(capacity_)); + DCHECK(IsPowerOf2(capacity_)); Entry* p = map_ + (hash & (capacity_ - 1)); const Entry* end = map_end(); - ASSERT(map_ <= p && p < end); + DCHECK(map_ <= p && p < end); - ASSERT(occupancy_ < capacity_); // Guarantees loop termination. + DCHECK(occupancy_ < capacity_); // Guarantees loop termination. while (p->key != NULL && (hash != p->hash || !match_(key, p->key))) { p++; if (p >= end) { @@ -259,7 +259,7 @@ typename TemplateHashMapImpl<AllocationPolicy>::Entry* template<class AllocationPolicy> void TemplateHashMapImpl<AllocationPolicy>::Initialize( uint32_t capacity, AllocationPolicy allocator) { - ASSERT(IsPowerOf2(capacity)); + DCHECK(IsPowerOf2(capacity)); map_ = reinterpret_cast<Entry*>(allocator.New(capacity * sizeof(Entry))); if (map_ == NULL) { v8::internal::FatalProcessOutOfMemory("HashMap::Initialize"); diff --git a/deps/v8/src/heap-profiler.cc b/deps/v8/src/heap-profiler.cc index 6068bf43bda..d86ce5ec327 100644 --- a/deps/v8/src/heap-profiler.cc +++ b/deps/v8/src/heap-profiler.cc @@ -2,12 +2,12 @@ // Use of this source code is governed by a BSD-style license that can be // found in the LICENSE file. -#include "v8.h" +#include "src/v8.h" -#include "heap-profiler.h" +#include "src/heap-profiler.h" -#include "allocation-tracker.h" -#include "heap-snapshot-generator-inl.h" +#include "src/allocation-tracker.h" +#include "src/heap-snapshot-generator-inl.h" namespace v8 { namespace internal { @@ -45,7 +45,7 @@ void HeapProfiler::RemoveSnapshot(HeapSnapshot* snapshot) { void HeapProfiler::DefineWrapperClass( uint16_t class_id, v8::HeapProfiler::WrapperInfoCallback callback) { - ASSERT(class_id != v8::HeapProfiler::kPersistentHandleNoClassId); + DCHECK(class_id != v8::HeapProfiler::kPersistentHandleNoClassId); if (wrapper_callbacks_.length() <= class_id) { wrapper_callbacks_.AddBlock( NULL, class_id - wrapper_callbacks_.length() + 1); @@ -93,7 +93,7 @@ HeapSnapshot* HeapProfiler::TakeSnapshot( void HeapProfiler::StartHeapObjectsTracking(bool track_allocations) { ids_->UpdateHeapObjectsMap(); is_tracking_object_moves_ = true; - ASSERT(!is_tracking_allocations()); + DCHECK(!is_tracking_allocations()); if (track_allocations) { allocation_tracker_.Reset(new AllocationTracker(ids_.get(), names_.get())); heap()->DisableInlineAllocation(); @@ -173,9 +173,6 @@ void HeapProfiler::SetRetainedObjectInfo(UniqueId id, Handle<HeapObject> HeapProfiler::FindHeapObjectById(SnapshotObjectId id) { - heap()->CollectAllGarbage(Heap::kMakeHeapIterableMask, - "HeapProfiler::FindHeapObjectById"); - DisallowHeapAllocation no_allocation; HeapObject* object = NULL; HeapIterator iterator(heap(), HeapIterator::kFilterUnreachable); // Make sure that object with the given id is still reachable. 
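A note on the StringCodePointAt addition in the harmony-string.js hunk above: the seemingly magic constant 0x2400 is the standard UTF-16 surrogate-pair decoding formula with its constants folded, since 0x10000 - 0xDC00 == 0x2400:

    (first - 0xD800) * 0x400 + (second - 0xDC00) + 0x10000
      == (first - 0xD800) * 0x400 + second + 0x2400

For example, U+1F600 is encoded as the pair 0xD83D 0xDE00, and (0xD83D - 0xD800) * 0x400 + 0xDE00 + 0x2400 == 0xF400 + 0xDE00 + 0x2400 == 0x1F600.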
@@ -183,7 +180,7 @@ Handle<HeapObject> HeapProfiler::FindHeapObjectById(SnapshotObjectId id) { obj != NULL; obj = iterator.next()) { if (ids_->FindEntry(obj->address()) == id) { - ASSERT(object == NULL); + DCHECK(object == NULL); object = obj; // Can't break -- kFilterUnreachable requires full heap traversal. } diff --git a/deps/v8/src/heap-profiler.h b/deps/v8/src/heap-profiler.h index 6fdfd955ecc..4197d4d54c9 100644 --- a/deps/v8/src/heap-profiler.h +++ b/deps/v8/src/heap-profiler.h @@ -5,9 +5,9 @@ #ifndef V8_HEAP_PROFILER_H_ #define V8_HEAP_PROFILER_H_ -#include "heap-snapshot-generator-inl.h" -#include "isolate.h" -#include "smart-pointers.h" +#include "src/heap-snapshot-generator-inl.h" +#include "src/isolate.h" +#include "src/smart-pointers.h" namespace v8 { namespace internal { diff --git a/deps/v8/src/heap-snapshot-generator-inl.h b/deps/v8/src/heap-snapshot-generator-inl.h index 90f3b1bc1b0..f7d87aa31b2 100644 --- a/deps/v8/src/heap-snapshot-generator-inl.h +++ b/deps/v8/src/heap-snapshot-generator-inl.h @@ -5,7 +5,7 @@ #ifndef V8_HEAP_SNAPSHOT_GENERATOR_INL_H_ #define V8_HEAP_SNAPSHOT_GENERATOR_INL_H_ -#include "heap-snapshot-generator.h" +#include "src/heap-snapshot-generator.h" namespace v8 { namespace internal { @@ -35,8 +35,8 @@ int HeapEntry::set_children_index(int index) { HeapGraphEdge** HeapEntry::children_arr() { - ASSERT(children_index_ >= 0); - SLOW_ASSERT(children_index_ < snapshot_->children().length() || + DCHECK(children_index_ >= 0); + SLOW_DCHECK(children_index_ < snapshot_->children().length() || (children_index_ == snapshot_->children().length() && children_count_ == 0)); return &snapshot_->children().first() + children_index_; diff --git a/deps/v8/src/heap-snapshot-generator.cc b/deps/v8/src/heap-snapshot-generator.cc index cafee77b4ce..eff9f9a9340 100644 --- a/deps/v8/src/heap-snapshot-generator.cc +++ b/deps/v8/src/heap-snapshot-generator.cc @@ -2,16 +2,16 @@ // Use of this source code is governed by a BSD-style license that can be // found in the LICENSE file. 
-#include "v8.h" +#include "src/v8.h" -#include "heap-snapshot-generator-inl.h" +#include "src/heap-snapshot-generator-inl.h" -#include "allocation-tracker.h" -#include "code-stubs.h" -#include "conversions.h" -#include "debug.h" -#include "heap-profiler.h" -#include "types.h" +#include "src/allocation-tracker.h" +#include "src/code-stubs.h" +#include "src/conversions.h" +#include "src/debug.h" +#include "src/heap-profiler.h" +#include "src/types.h" namespace v8 { namespace internal { @@ -22,7 +22,7 @@ HeapGraphEdge::HeapGraphEdge(Type type, const char* name, int from, int to) from_index_(from), to_index_(to), name_(name) { - ASSERT(type == kContextVariable + DCHECK(type == kContextVariable || type == kProperty || type == kInternal || type == kShortcut @@ -35,7 +35,7 @@ HeapGraphEdge::HeapGraphEdge(Type type, int index, int from, int to) from_index_(from), to_index_(to), index_(index) { - ASSERT(type == kElement || type == kHidden); + DCHECK(type == kElement || type == kHidden); } @@ -82,22 +82,22 @@ void HeapEntry::SetIndexedReference(HeapGraphEdge::Type type, void HeapEntry::Print( const char* prefix, const char* edge_name, int max_depth, int indent) { - STATIC_CHECK(sizeof(unsigned) == sizeof(id())); - OS::Print("%6" V8PRIuPTR " @%6u %*c %s%s: ", - self_size(), id(), indent, ' ', prefix, edge_name); + STATIC_ASSERT(sizeof(unsigned) == sizeof(id())); + base::OS::Print("%6" V8PRIuPTR " @%6u %*c %s%s: ", self_size(), id(), indent, + ' ', prefix, edge_name); if (type() != kString) { - OS::Print("%s %.40s\n", TypeAsString(), name_); + base::OS::Print("%s %.40s\n", TypeAsString(), name_); } else { - OS::Print("\""); + base::OS::Print("\""); const char* c = name_; while (*c && (c - name_) <= 40) { if (*c != '\n') - OS::Print("%c", *c); + base::OS::Print("%c", *c); else - OS::Print("\\n"); + base::OS::Print("\\n"); ++c; } - OS::Print("\"\n"); + base::OS::Print("\"\n"); } if (--max_depth == 0) return; Vector<HeapGraphEdge*> ch = children(); @@ -112,7 +112,7 @@ void HeapEntry::Print( edge_name = edge.name(); break; case HeapGraphEdge::kElement: - OS::SNPrintF(index, "%d", edge.index()); + SNPrintF(index, "%d", edge.index()); break; case HeapGraphEdge::kInternal: edge_prefix = "$"; @@ -123,7 +123,7 @@ void HeapEntry::Print( break; case HeapGraphEdge::kHidden: edge_prefix = "$"; - OS::SNPrintF(index, "%d", edge.index()); + SNPrintF(index, "%d", edge.index()); break; case HeapGraphEdge::kShortcut: edge_prefix = "^"; @@ -134,7 +134,7 @@ void HeapEntry::Print( edge_name = edge.name(); break; default: - OS::SNPrintF(index, "!!! unknown edge type: %d ", edge.type()); + SNPrintF(index, "!!! 
unknown edge type: %d ", edge.type()); } edge.to()->Print(edge_prefix, edge_name, max_depth, indent + 2); } @@ -155,6 +155,7 @@ const char* HeapEntry::TypeAsString() { case kSynthetic: return "/synthetic/"; case kConsString: return "/concatenated string/"; case kSlicedString: return "/sliced string/"; + case kSymbol: return "/symbol/"; default: return "???"; } } @@ -189,10 +190,10 @@ HeapSnapshot::HeapSnapshot(HeapProfiler* profiler, gc_roots_index_(HeapEntry::kNoEntry), natives_root_index_(HeapEntry::kNoEntry), max_snapshot_js_object_id_(0) { - STATIC_CHECK( + STATIC_ASSERT( sizeof(HeapGraphEdge) == SnapshotSizeConstants<kPointerSize>::kExpectedHeapGraphEdgeSize); - STATIC_CHECK( + STATIC_ASSERT( sizeof(HeapEntry) == SnapshotSizeConstants<kPointerSize>::kExpectedHeapEntrySize); USE(SnapshotSizeConstants<4>::kExpectedHeapGraphEdgeSize); @@ -217,21 +218,21 @@ void HeapSnapshot::RememberLastJSObjectId() { HeapEntry* HeapSnapshot::AddRootEntry() { - ASSERT(root_index_ == HeapEntry::kNoEntry); - ASSERT(entries_.is_empty()); // Root entry must be the first one. + DCHECK(root_index_ == HeapEntry::kNoEntry); + DCHECK(entries_.is_empty()); // Root entry must be the first one. HeapEntry* entry = AddEntry(HeapEntry::kSynthetic, "", HeapObjectsMap::kInternalRootObjectId, 0, 0); root_index_ = entry->index(); - ASSERT(root_index_ == 0); + DCHECK(root_index_ == 0); return entry; } HeapEntry* HeapSnapshot::AddGcRootsEntry() { - ASSERT(gc_roots_index_ == HeapEntry::kNoEntry); + DCHECK(gc_roots_index_ == HeapEntry::kNoEntry); HeapEntry* entry = AddEntry(HeapEntry::kSynthetic, "(GC roots)", HeapObjectsMap::kGcRootsObjectId, @@ -243,8 +244,8 @@ HeapEntry* HeapSnapshot::AddGcRootsEntry() { HeapEntry* HeapSnapshot::AddGcSubrootEntry(int tag) { - ASSERT(gc_subroot_indexes_[tag] == HeapEntry::kNoEntry); - ASSERT(0 <= tag && tag < VisitorSynchronization::kNumberOfSyncTags); + DCHECK(gc_subroot_indexes_[tag] == HeapEntry::kNoEntry); + DCHECK(0 <= tag && tag < VisitorSynchronization::kNumberOfSyncTags); HeapEntry* entry = AddEntry( HeapEntry::kSynthetic, VisitorSynchronization::kTagNames[tag], @@ -268,14 +269,14 @@ HeapEntry* HeapSnapshot::AddEntry(HeapEntry::Type type, void HeapSnapshot::FillChildren() { - ASSERT(children().is_empty()); + DCHECK(children().is_empty()); children().Allocate(edges().length()); int children_index = 0; for (int i = 0; i < entries().length(); ++i) { HeapEntry* entry = &entries()[i]; children_index = entry->set_children_index(children_index); } - ASSERT(edges().length() == children_index); + DCHECK(edges().length() == children_index); for (int i = 0; i < edges().length(); ++i) { HeapGraphEdge* edge = &edges()[i]; edge->ReplaceToIndexWithEntry(this); @@ -374,8 +375,8 @@ HeapObjectsMap::HeapObjectsMap(Heap* heap) bool HeapObjectsMap::MoveObject(Address from, Address to, int object_size) { - ASSERT(to != NULL); - ASSERT(from != NULL); + DCHECK(to != NULL); + DCHECK(from != NULL); if (from == to) return false; void* from_value = entries_map_.Remove(from, ComputePointerHash(from)); if (from_value == NULL) { @@ -432,7 +433,7 @@ SnapshotObjectId HeapObjectsMap::FindEntry(Address addr) { if (entry == NULL) return 0; int entry_index = static_cast<int>(reinterpret_cast<intptr_t>(entry->value)); EntryInfo& entry_info = entries_.at(entry_index); - ASSERT(static_cast<uint32_t>(entries_.length()) > entries_map_.occupancy()); + DCHECK(static_cast<uint32_t>(entries_.length()) > entries_map_.occupancy()); return entry_info.id; } @@ -440,7 +441,7 @@ SnapshotObjectId HeapObjectsMap::FindEntry(Address addr) 
{ SnapshotObjectId HeapObjectsMap::FindOrAddEntry(Address addr, unsigned int size, bool accessed) { - ASSERT(static_cast<uint32_t>(entries_.length()) > entries_map_.occupancy()); + DCHECK(static_cast<uint32_t>(entries_.length()) > entries_map_.occupancy()); HashMap::Entry* entry = entries_map_.Lookup(addr, ComputePointerHash(addr), true); if (entry->value != NULL) { @@ -461,7 +462,7 @@ SnapshotObjectId HeapObjectsMap::FindOrAddEntry(Address addr, SnapshotObjectId id = next_id_; next_id_ += kObjectIdStep; entries_.Add(EntryInfo(id, addr, size, accessed)); - ASSERT(static_cast<uint32_t>(entries_.length()) > entries_map_.occupancy()); + DCHECK(static_cast<uint32_t>(entries_.length()) > entries_map_.occupancy()); return id; } @@ -614,7 +615,7 @@ SnapshotObjectId HeapObjectsMap::PushHeapObjectsStats(OutputStream* stream) { time_intervals_.Add(TimeInterval(next_id_)); int prefered_chunk_size = stream->GetChunkSize(); List<v8::HeapStatsUpdate> stats_buffer; - ASSERT(!entries_.is_empty()); + DCHECK(!entries_.is_empty()); EntryInfo* entry_info = &entries_.first(); EntryInfo* end_entry_info = &entries_.last() + 1; for (int time_interval_index = 0; @@ -644,7 +645,7 @@ SnapshotObjectId HeapObjectsMap::PushHeapObjectsStats(OutputStream* stream) { } } } - ASSERT(entry_info == end_entry_info); + DCHECK(entry_info == end_entry_info); if (!stats_buffer.is_empty()) { OutputStream::WriteResult result = stream->WriteHeapStatsChunk( &stats_buffer.first(), stats_buffer.length()); @@ -656,7 +657,7 @@ SnapshotObjectId HeapObjectsMap::PushHeapObjectsStats(OutputStream* stream) { void HeapObjectsMap::RemoveDeadEntries() { - ASSERT(entries_.length() > 0 && + DCHECK(entries_.length() > 0 && entries_.at(0).id == 0 && entries_.at(0).addr == NULL); int first_free_entry = 1; @@ -669,7 +670,7 @@ void HeapObjectsMap::RemoveDeadEntries() { entries_.at(first_free_entry).accessed = false; HashMap::Entry* entry = entries_map_.Lookup( entry_info.addr, ComputePointerHash(entry_info.addr), false); - ASSERT(entry); + DCHECK(entry); entry->value = reinterpret_cast<void*>(first_free_entry); ++first_free_entry; } else { @@ -680,7 +681,7 @@ void HeapObjectsMap::RemoveDeadEntries() { } } entries_.Rewind(first_free_entry); - ASSERT(static_cast<uint32_t>(entries_.length()) - 1 == + DCHECK(static_cast<uint32_t>(entries_.length()) - 1 == entries_map_.occupancy()); } @@ -722,7 +723,7 @@ int HeapEntriesMap::Map(HeapThing thing) { void HeapEntriesMap::Pair(HeapThing thing, int entry) { HashMap::Entry* cache_entry = entries_.Lookup(thing, Hash(thing), true); - ASSERT(cache_entry->value == NULL); + DCHECK(cache_entry->value == NULL); cache_entry->value = reinterpret_cast<void*>(static_cast<intptr_t>(entry)); } @@ -851,6 +852,8 @@ HeapEntry* V8HeapExplorer::AddEntry(HeapObject* object) { return AddEntry(object, HeapEntry::kString, names_->GetName(String::cast(object))); + } else if (object->IsSymbol()) { + return AddEntry(object, HeapEntry::kSymbol, "symbol"); } else if (object->IsCode()) { return AddEntry(object, HeapEntry::kCode, ""); } else if (object->IsSharedFunctionInfo()) { @@ -1056,9 +1059,9 @@ class IndexedReferencesExtractor : public ObjectVisitor { static void MarkVisitedField(HeapObject* obj, int offset) { if (offset < 0) return; Address field = obj->address() + offset; - ASSERT(Memory::Object_at(field)->IsHeapObject()); + DCHECK(Memory::Object_at(field)->IsHeapObject()); intptr_t p = reinterpret_cast<intptr_t>(Memory::Object_at(field)); - ASSERT(!IsMarked(p)); + DCHECK(!IsMarked(p)); intptr_t p_tagged = p | kTag; 
Memory::Object_at(field) = reinterpret_cast<Object*>(p_tagged); } @@ -1069,7 +1072,7 @@ class IndexedReferencesExtractor : public ObjectVisitor { if (IsMarked(p)) { intptr_t p_untagged = (p & ~kTaggingMask) | kHeapObjectTag; *field = reinterpret_cast<Object*>(p_untagged); - ASSERT((*field)->IsHeapObject()); + DCHECK((*field)->IsHeapObject()); return true; } return false; @@ -1095,9 +1098,20 @@ bool V8HeapExplorer::ExtractReferencesPass1(int entry, HeapObject* obj) { } else if (obj->IsJSArrayBuffer()) { ExtractJSArrayBufferReferences(entry, JSArrayBuffer::cast(obj)); } else if (obj->IsJSObject()) { + if (obj->IsJSWeakSet()) { + ExtractJSWeakCollectionReferences(entry, JSWeakSet::cast(obj)); + } else if (obj->IsJSWeakMap()) { + ExtractJSWeakCollectionReferences(entry, JSWeakMap::cast(obj)); + } else if (obj->IsJSSet()) { + ExtractJSCollectionReferences(entry, JSSet::cast(obj)); + } else if (obj->IsJSMap()) { + ExtractJSCollectionReferences(entry, JSMap::cast(obj)); + } ExtractJSObjectReferences(entry, JSObject::cast(obj)); } else if (obj->IsString()) { ExtractStringReferences(entry, String::cast(obj)); + } else if (obj->IsSymbol()) { + ExtractSymbolReferences(entry, Symbol::cast(obj)); } else if (obj->IsMap()) { ExtractMapReferences(entry, Map::cast(obj)); } else if (obj->IsSharedFunctionInfo()) { @@ -1150,8 +1164,8 @@ void V8HeapExplorer::ExtractJSObjectReferences( ExtractPropertyReferences(js_obj, entry); ExtractElementReferences(js_obj, entry); ExtractInternalReferences(js_obj, entry); - SetPropertyReference( - obj, entry, heap_->proto_string(), js_obj->GetPrototype()); + PrototypeIterator iter(heap_->isolate(), js_obj); + SetPropertyReference(obj, entry, heap_->proto_string(), iter.GetCurrent()); if (obj->IsJSFunction()) { JSFunction* js_fun = JSFunction::cast(js_obj); Object* proto_or_map = js_fun->prototype_or_initial_map(); @@ -1191,9 +1205,9 @@ void V8HeapExplorer::ExtractJSObjectReferences( SetWeakReference(js_fun, entry, "next_function_link", js_fun->next_function_link(), JSFunction::kNextFunctionLinkOffset); - STATIC_CHECK(JSFunction::kNextFunctionLinkOffset + STATIC_ASSERT(JSFunction::kNextFunctionLinkOffset == JSFunction::kNonWeakFieldsEndOffset); - STATIC_CHECK(JSFunction::kNextFunctionLinkOffset + kPointerSize + STATIC_ASSERT(JSFunction::kNextFunctionLinkOffset + kPointerSize == JSFunction::kSize); } else if (obj->IsGlobalObject()) { GlobalObject* global_obj = GlobalObject::cast(obj); @@ -1207,9 +1221,9 @@ void V8HeapExplorer::ExtractJSObjectReferences( "global_context", global_obj->global_context(), GlobalObject::kGlobalContextOffset); SetInternalReference(global_obj, entry, - "global_receiver", global_obj->global_receiver(), - GlobalObject::kGlobalReceiverOffset); - STATIC_CHECK(GlobalObject::kHeaderSize - JSObject::kHeaderSize == + "global_proxy", global_obj->global_proxy(), + GlobalObject::kGlobalProxyOffset); + STATIC_ASSERT(GlobalObject::kHeaderSize - JSObject::kHeaderSize == 4 * kPointerSize); } else if (obj->IsJSArrayBufferView()) { JSArrayBufferView* view = JSArrayBufferView::cast(obj); @@ -1244,6 +1258,29 @@ void V8HeapExplorer::ExtractStringReferences(int entry, String* string) { } +void V8HeapExplorer::ExtractSymbolReferences(int entry, Symbol* symbol) { + SetInternalReference(symbol, entry, + "name", symbol->name(), + Symbol::kNameOffset); +} + + +void V8HeapExplorer::ExtractJSCollectionReferences(int entry, + JSCollection* collection) { + SetInternalReference(collection, entry, "table", collection->table(), + JSCollection::kTableOffset); +} + + +void 
V8HeapExplorer::ExtractJSWeakCollectionReferences( + int entry, JSWeakCollection* collection) { + MarkAsWeakContainer(collection->table()); + SetInternalReference(collection, entry, + "table", collection->table(), + JSWeakCollection::kTableOffset); +} + + void V8HeapExplorer::ExtractContextReferences(int entry, Context* context) { if (context == context->declaration_context()) { ScopeInfo* scope_info = context->closure()->shared()->scope_info(); @@ -1292,10 +1329,12 @@ void V8HeapExplorer::ExtractContextReferences(int entry, Context* context) { EXTRACT_CONTEXT_FIELD(DEOPTIMIZED_CODE_LIST, unused, deoptimized_code_list); EXTRACT_CONTEXT_FIELD(NEXT_CONTEXT_LINK, unused, next_context_link); #undef EXTRACT_CONTEXT_FIELD - STATIC_CHECK(Context::OPTIMIZED_FUNCTIONS_LIST == Context::FIRST_WEAK_SLOT); - STATIC_CHECK(Context::NEXT_CONTEXT_LINK + 1 - == Context::NATIVE_CONTEXT_SLOTS); - STATIC_CHECK(Context::FIRST_WEAK_SLOT + 5 == Context::NATIVE_CONTEXT_SLOTS); + STATIC_ASSERT(Context::OPTIMIZED_FUNCTIONS_LIST == + Context::FIRST_WEAK_SLOT); + STATIC_ASSERT(Context::NEXT_CONTEXT_LINK + 1 == + Context::NATIVE_CONTEXT_SLOTS); + STATIC_ASSERT(Context::FIRST_WEAK_SLOT + 5 == + Context::NATIVE_CONTEXT_SLOTS); } } @@ -1409,9 +1448,6 @@ void V8HeapExplorer::ExtractSharedFunctionInfoReferences( SetInternalReference(obj, entry, "feedback_vector", shared->feedback_vector(), SharedFunctionInfo::kFeedbackVectorOffset); - SetWeakReference(obj, entry, - "initial_map", shared->initial_map(), - SharedFunctionInfo::kInitialMapOffset); } @@ -1463,8 +1499,8 @@ void V8HeapExplorer::TagBuiltinCodeObject(Code* code, const char* name) { void V8HeapExplorer::TagCodeObject(Code* code) { if (code->kind() == Code::STUB) { TagObject(code, names_->GetFormatted( - "(%s code)", CodeStub::MajorName( - static_cast<CodeStub::Major>(code->major_key()), true))); + "(%s code)", CodeStub::MajorName( + CodeStub::GetMajorKey(code), true))); } } @@ -1533,7 +1569,7 @@ void V8HeapExplorer::ExtractAllocationSiteReferences(int entry, AllocationSite::kDependentCodeOffset); // Do not visit weak_next as it is not visited by the StaticVisitor, // and we're not very interested in weak_next field here. 
- STATIC_CHECK(AllocationSite::kWeakNextOffset >= + STATIC_ASSERT(AllocationSite::kWeakNextOffset >= AllocationSite::BodyDescriptor::kEndOffset); } @@ -1617,6 +1653,8 @@ void V8HeapExplorer::ExtractPropertyReferences(JSObject* js_obj, int entry) { for (int i = 0; i < real_size; i++) { switch (descs->GetType(i)) { case FIELD: { + Representation r = descs->GetDetails(i).representation(); + if (r.IsSmi() || r.IsDouble()) break; int index = descs->GetFieldIndex(i); Name* k = descs->GetKey(i); @@ -1636,7 +1674,9 @@ void V8HeapExplorer::ExtractPropertyReferences(JSObject* js_obj, int entry) { js_obj->GetInObjectPropertyOffset(index)); } } else { - Object* value = js_obj->RawFastPropertyAt(index); + FieldIndex field_index = + FieldIndex::ForDescriptor(js_obj->map(), i); + Object* value = js_obj->RawFastPropertyAt(field_index); if (k != heap_->hidden_string()) { SetPropertyReference(js_obj, entry, k, value); } else { @@ -1722,7 +1762,7 @@ void V8HeapExplorer::ExtractElementReferences(JSObject* js_obj, int entry) { for (int i = 0; i < length; ++i) { Object* k = dictionary->KeyAt(i); if (dictionary->IsKey(k)) { - ASSERT(k->IsNumber()); + DCHECK(k->IsNumber()); uint32_t index = static_cast<uint32_t>(k->Number()); SetElementReference(js_obj, entry, index, dictionary->ValueAt(i)); } @@ -1752,7 +1792,7 @@ String* V8HeapExplorer::GetConstructorName(JSObject* object) { Object* constructor_prop = NULL; Isolate* isolate = heap->isolate(); LookupResult result(isolate); - object->LocalLookupRealNamedProperty( + object->LookupOwnRealNamedProperty( isolate->factory()->constructor_string(), &result); if (!result.IsFound()) return object->constructor_name(); @@ -1803,7 +1843,7 @@ class RootsReferencesExtractor : public ObjectVisitor { void SetCollectingAllReferences() { collecting_all_references_ = true; } void FillReferences(V8HeapExplorer* explorer) { - ASSERT(strong_references_.length() <= all_references_.length()); + DCHECK(strong_references_.length() <= all_references_.length()); Builtins* builtins = heap_->isolate()->builtins(); for (int i = 0; i < reference_tags_.length(); ++i) { explorer->SetGcRootsReference(reference_tags_[i].tag); @@ -1817,7 +1857,7 @@ class RootsReferencesExtractor : public ObjectVisitor { all_references_[all_index]); if (reference_tags_[tags_index].tag == VisitorSynchronization::kBuiltins) { - ASSERT(all_references_[all_index]->IsCode()); + DCHECK(all_references_[all_index]->IsCode()); explorer->TagBuiltinCodeObject( Code::cast(all_references_[all_index]), builtins->name(builtin_index++)); @@ -1926,7 +1966,7 @@ void V8HeapExplorer::SetContextReference(HeapObject* parent_obj, String* reference_name, Object* child_obj, int field_offset) { - ASSERT(parent_entry == GetEntry(parent_obj)->index()); + DCHECK(parent_entry == GetEntry(parent_obj)->index()); HeapEntry* child_entry = GetEntry(child_obj); if (child_entry != NULL) { filler_->SetNamedReference(HeapGraphEdge::kContextVariable, @@ -1942,7 +1982,7 @@ void V8HeapExplorer::SetNativeBindReference(HeapObject* parent_obj, int parent_entry, const char* reference_name, Object* child_obj) { - ASSERT(parent_entry == GetEntry(parent_obj)->index()); + DCHECK(parent_entry == GetEntry(parent_obj)->index()); HeapEntry* child_entry = GetEntry(child_obj); if (child_entry != NULL) { filler_->SetNamedReference(HeapGraphEdge::kShortcut, @@ -1957,7 +1997,7 @@ void V8HeapExplorer::SetElementReference(HeapObject* parent_obj, int parent_entry, int index, Object* child_obj) { - ASSERT(parent_entry == GetEntry(parent_obj)->index()); + DCHECK(parent_entry == 
GetEntry(parent_obj)->index()); HeapEntry* child_entry = GetEntry(child_obj); if (child_entry != NULL) { filler_->SetIndexedReference(HeapGraphEdge::kElement, @@ -1973,7 +2013,7 @@ void V8HeapExplorer::SetInternalReference(HeapObject* parent_obj, const char* reference_name, Object* child_obj, int field_offset) { - ASSERT(parent_entry == GetEntry(parent_obj)->index()); + DCHECK(parent_entry == GetEntry(parent_obj)->index()); HeapEntry* child_entry = GetEntry(child_obj); if (child_entry == NULL) return; if (IsEssentialObject(child_obj)) { @@ -1991,7 +2031,7 @@ void V8HeapExplorer::SetInternalReference(HeapObject* parent_obj, int index, Object* child_obj, int field_offset) { - ASSERT(parent_entry == GetEntry(parent_obj)->index()); + DCHECK(parent_entry == GetEntry(parent_obj)->index()); HeapEntry* child_entry = GetEntry(child_obj); if (child_entry == NULL) return; if (IsEssentialObject(child_obj)) { @@ -2008,7 +2048,7 @@ void V8HeapExplorer::SetHiddenReference(HeapObject* parent_obj, int parent_entry, int index, Object* child_obj) { - ASSERT(parent_entry == GetEntry(parent_obj)->index()); + DCHECK(parent_entry == GetEntry(parent_obj)->index()); HeapEntry* child_entry = GetEntry(child_obj); if (child_entry != NULL && IsEssentialObject(child_obj)) { filler_->SetIndexedReference(HeapGraphEdge::kHidden, @@ -2024,7 +2064,7 @@ void V8HeapExplorer::SetWeakReference(HeapObject* parent_obj, const char* reference_name, Object* child_obj, int field_offset) { - ASSERT(parent_entry == GetEntry(parent_obj)->index()); + DCHECK(parent_entry == GetEntry(parent_obj)->index()); HeapEntry* child_entry = GetEntry(child_obj); if (child_entry == NULL) return; if (IsEssentialObject(child_obj)) { @@ -2042,7 +2082,7 @@ void V8HeapExplorer::SetWeakReference(HeapObject* parent_obj, int index, Object* child_obj, int field_offset) { - ASSERT(parent_entry == GetEntry(parent_obj)->index()); + DCHECK(parent_entry == GetEntry(parent_obj)->index()); HeapEntry* child_entry = GetEntry(child_obj); if (child_entry == NULL) return; if (IsEssentialObject(child_obj)) { @@ -2061,7 +2101,7 @@ void V8HeapExplorer::SetPropertyReference(HeapObject* parent_obj, Object* child_obj, const char* name_format_string, int field_offset) { - ASSERT(parent_entry == GetEntry(parent_obj)->index()); + DCHECK(parent_entry == GetEntry(parent_obj)->index()); HeapEntry* child_entry = GetEntry(child_obj); if (child_entry != NULL) { HeapGraphEdge::Type type = @@ -2093,7 +2133,7 @@ void V8HeapExplorer::SetRootGcRootsReference() { void V8HeapExplorer::SetUserGlobalReference(Object* child_obj) { HeapEntry* child_entry = GetEntry(child_obj); - ASSERT(child_entry != NULL); + DCHECK(child_entry != NULL); filler_->SetNamedAutoIndexReference( HeapGraphEdge::kShortcut, snapshot_->root()->index(), @@ -2374,7 +2414,7 @@ void NativeObjectsExplorer::FillImplicitReferences() { HeapObject* parent = *group->parent; int parent_entry = filler_->FindOrAddEntry(parent, native_entries_allocator_)->index(); - ASSERT(parent_entry != HeapEntry::kNoEntry); + DCHECK(parent_entry != HeapEntry::kNoEntry); Object*** children = group->children; for (size_t j = 0; j < group->length; ++j) { Object* child = *children[j]; @@ -2475,7 +2515,7 @@ void NativeObjectsExplorer::SetNativeRootReference( v8::RetainedObjectInfo* info) { HeapEntry* child_entry = filler_->FindOrAddEntry(info, native_entries_allocator_); - ASSERT(child_entry != NULL); + DCHECK(child_entry != NULL); NativeGroupRetainedObjectInfo* group_info = FindOrAddGroupInfo(info->GetGroupLabel()); HeapEntry* group_entry = @@ -2490,10 
+2530,10 @@ void NativeObjectsExplorer::SetNativeRootReference( void NativeObjectsExplorer::SetWrapperNativeReferences( HeapObject* wrapper, v8::RetainedObjectInfo* info) { HeapEntry* wrapper_entry = filler_->FindEntry(wrapper); - ASSERT(wrapper_entry != NULL); + DCHECK(wrapper_entry != NULL); HeapEntry* info_entry = filler_->FindOrAddEntry(info, native_entries_allocator_); - ASSERT(info_entry != NULL); + DCHECK(info_entry != NULL); filler_->SetNamedReference(HeapGraphEdge::kInternal, wrapper_entry->index(), "native", @@ -2512,7 +2552,7 @@ void NativeObjectsExplorer::SetRootNativeRootsReference() { static_cast<NativeGroupRetainedObjectInfo*>(entry->value); HeapEntry* group_entry = filler_->FindOrAddEntry(group_info, native_entries_allocator_); - ASSERT(group_entry != NULL); + DCHECK(group_entry != NULL); filler_->SetIndexedAutoIndexReference( HeapGraphEdge::kElement, snapshot_->root()->index(), @@ -2560,19 +2600,14 @@ bool HeapSnapshotGenerator::GenerateSnapshot() { #ifdef VERIFY_HEAP Heap* debug_heap = heap_; - CHECK(!debug_heap->old_data_space()->was_swept_conservatively()); - CHECK(!debug_heap->old_pointer_space()->was_swept_conservatively()); - CHECK(!debug_heap->code_space()->was_swept_conservatively()); - CHECK(!debug_heap->cell_space()->was_swept_conservatively()); - CHECK(!debug_heap->property_cell_space()-> - was_swept_conservatively()); - CHECK(!debug_heap->map_space()->was_swept_conservatively()); + CHECK(debug_heap->old_data_space()->swept_precisely()); + CHECK(debug_heap->old_pointer_space()->swept_precisely()); + CHECK(debug_heap->code_space()->swept_precisely()); + CHECK(debug_heap->cell_space()->swept_precisely()); + CHECK(debug_heap->property_cell_space()->swept_precisely()); + CHECK(debug_heap->map_space()->swept_precisely()); #endif - // The following code uses heap iterators, so we want the heap to be - // stable. It should follow TagGlobalObjects as that can allocate. 
- DisallowHeapAllocation no_alloc; - #ifdef VERIFY_HEAP debug_heap->Verify(); #endif @@ -2648,12 +2683,12 @@ class OutputStreamWriter { chunk_(chunk_size_), chunk_pos_(0), aborted_(false) { - ASSERT(chunk_size_ > 0); + DCHECK(chunk_size_ > 0); } bool aborted() { return aborted_; } void AddCharacter(char c) { - ASSERT(c != '\0'); - ASSERT(chunk_pos_ < chunk_size_); + DCHECK(c != '\0'); + DCHECK(chunk_pos_ < chunk_size_); chunk_[chunk_pos_++] = c; MaybeWriteChunk(); } @@ -2662,13 +2697,13 @@ class OutputStreamWriter { } void AddSubstring(const char* s, int n) { if (n <= 0) return; - ASSERT(static_cast<size_t>(n) <= strlen(s)); + DCHECK(static_cast<size_t>(n) <= strlen(s)); const char* s_end = s + n; while (s < s_end) { - int s_chunk_size = Min( - chunk_size_ - chunk_pos_, static_cast<int>(s_end - s)); - ASSERT(s_chunk_size > 0); - OS::MemCopy(chunk_.start() + chunk_pos_, s, s_chunk_size); + int s_chunk_size = + Min(chunk_size_ - chunk_pos_, static_cast<int>(s_end - s)); + DCHECK(s_chunk_size > 0); + MemCopy(chunk_.start() + chunk_pos_, s, s_chunk_size); s += s_chunk_size; chunk_pos_ += s_chunk_size; MaybeWriteChunk(); @@ -2677,7 +2712,7 @@ class OutputStreamWriter { void AddNumber(unsigned n) { AddNumberImpl<unsigned>(n, "%u"); } void Finalize() { if (aborted_) return; - ASSERT(chunk_pos_ < chunk_size_); + DCHECK(chunk_pos_ < chunk_size_); if (chunk_pos_ != 0) { WriteChunk(); } @@ -2691,21 +2726,21 @@ class OutputStreamWriter { static const int kMaxNumberSize = MaxDecimalDigitsIn<sizeof(T)>::kUnsigned + 1; if (chunk_size_ - chunk_pos_ >= kMaxNumberSize) { - int result = OS::SNPrintF( + int result = SNPrintF( chunk_.SubVector(chunk_pos_, chunk_size_), format, n); - ASSERT(result != -1); + DCHECK(result != -1); chunk_pos_ += result; MaybeWriteChunk(); } else { EmbeddedVector<char, kMaxNumberSize> buffer; - int result = OS::SNPrintF(buffer, format, n); + int result = SNPrintF(buffer, format, n); USE(result); - ASSERT(result != -1); + DCHECK(result != -1); AddString(buffer.start()); } } void MaybeWriteChunk() { - ASSERT(chunk_pos_ <= chunk_size_); + DCHECK(chunk_pos_ <= chunk_size_); if (chunk_pos_ == chunk_size_) { WriteChunk(); } @@ -2735,7 +2770,7 @@ void HeapSnapshotJSONSerializer::Serialize(v8::OutputStream* stream) { snapshot_->profiler()->allocation_tracker()) { allocation_tracker->PrepareForSerialization(); } - ASSERT(writer_ == NULL); + DCHECK(writer_ == NULL); writer_ = new OutputStreamWriter(stream); SerializeImpl(); delete writer_; @@ -2744,7 +2779,7 @@ void HeapSnapshotJSONSerializer::Serialize(v8::OutputStream* stream) { void HeapSnapshotJSONSerializer::SerializeImpl() { - ASSERT(0 == snapshot_->root()->index()); + DCHECK(0 == snapshot_->root()->index()); writer_->AddCharacter('{'); writer_->AddString("\"snapshot\":{"); SerializeSnapshot(); @@ -2804,7 +2839,7 @@ template<> struct ToUnsigned<8> { template<typename T> static int utoa_impl(T value, const Vector<char>& buffer, int buffer_pos) { - STATIC_CHECK(static_cast<T>(-1) > 0); // Check that T is unsigned + STATIC_ASSERT(static_cast<T>(-1) > 0); // Check that T is unsigned int number_of_digits = 0; T t = value; do { @@ -2825,7 +2860,7 @@ static int utoa_impl(T value, const Vector<char>& buffer, int buffer_pos) { template<typename T> static int utoa(T value, const Vector<char>& buffer, int buffer_pos) { typename ToUnsigned<sizeof(value)>::Type unsigned_value = value; - STATIC_CHECK(sizeof(value) == sizeof(unsigned_value)); + STATIC_ASSERT(sizeof(value) == sizeof(unsigned_value)); return utoa_impl(unsigned_value, buffer, 
buffer_pos); } @@ -2857,7 +2892,7 @@ void HeapSnapshotJSONSerializer::SerializeEdge(HeapGraphEdge* edge, void HeapSnapshotJSONSerializer::SerializeEdges() { List<HeapGraphEdge*>& edges = snapshot_->children(); for (int i = 0; i < edges.length(); ++i) { - ASSERT(i == 0 || + DCHECK(i == 0 || edges[i - 1]->from()->index() <= edges[i]->from()->index()); SerializeEdge(edges[i], i == 0); if (writer_->aborted()) return; @@ -3041,7 +3076,7 @@ static int SerializePosition(int position, const Vector<char>& buffer, if (position == -1) { buffer[buffer_pos++] = '0'; } else { - ASSERT(position >= 0); + DCHECK(position >= 0); buffer_pos = utoa(static_cast<unsigned>(position + 1), buffer, buffer_pos); } return buffer_pos; @@ -3125,7 +3160,7 @@ void HeapSnapshotJSONSerializer::SerializeString(const unsigned char* s) { unibrow::uchar c = unibrow::Utf8::CalculateValue(s, length, &cursor); if (c != unibrow::Utf8::kBadChar) { WriteUChar(writer_, c); - ASSERT(cursor != 0); + DCHECK(cursor != 0); s += cursor - 1; } else { writer_->AddCharacter('?'); diff --git a/deps/v8/src/heap-snapshot-generator.h b/deps/v8/src/heap-snapshot-generator.h index a0f2a6293c1..1aea5a0264d 100644 --- a/deps/v8/src/heap-snapshot-generator.h +++ b/deps/v8/src/heap-snapshot-generator.h @@ -5,7 +5,7 @@ #ifndef V8_HEAP_SNAPSHOT_GENERATOR_H_ #define V8_HEAP_SNAPSHOT_GENERATOR_H_ -#include "profile-generator-inl.h" +#include "src/profile-generator-inl.h" namespace v8 { namespace internal { @@ -35,11 +35,11 @@ class HeapGraphEdge BASE_EMBEDDED { Type type() const { return static_cast<Type>(type_); } int index() const { - ASSERT(type_ == kElement || type_ == kHidden); + DCHECK(type_ == kElement || type_ == kHidden); return index_; } const char* name() const { - ASSERT(type_ == kContextVariable + DCHECK(type_ == kContextVariable || type_ == kProperty || type_ == kInternal || type_ == kShortcut @@ -83,7 +83,8 @@ class HeapEntry BASE_EMBEDDED { kNative = v8::HeapGraphNode::kNative, kSynthetic = v8::HeapGraphNode::kSynthetic, kConsString = v8::HeapGraphNode::kConsString, - kSlicedString = v8::HeapGraphNode::kSlicedString + kSlicedString = v8::HeapGraphNode::kSlicedString, + kSymbol = v8::HeapGraphNode::kSymbol }; static const int kNoEntry; @@ -368,6 +369,10 @@ class V8HeapExplorer : public HeapEntriesAllocator { void ExtractJSGlobalProxyReferences(int entry, JSGlobalProxy* proxy); void ExtractJSObjectReferences(int entry, JSObject* js_obj); void ExtractStringReferences(int entry, String* obj); + void ExtractSymbolReferences(int entry, Symbol* symbol); + void ExtractJSCollectionReferences(int entry, JSCollection* collection); + void ExtractJSWeakCollectionReferences(int entry, + JSWeakCollection* collection); void ExtractContextReferences(int entry, Context* context); void ExtractMapReferences(int entry, Map* map); void ExtractSharedFunctionInfoReferences(int entry, diff --git a/deps/v8/src/heap/gc-tracer.cc b/deps/v8/src/heap/gc-tracer.cc new file mode 100644 index 00000000000..12de0e457e5 --- /dev/null +++ b/deps/v8/src/heap/gc-tracer.cc @@ -0,0 +1,402 @@ +// Copyright 2014 the V8 project authors. All rights reserved. +// Use of this source code is governed by a BSD-style license that can be +// found in the LICENSE file. 
+ +#include "src/v8.h" + +#include "src/heap/gc-tracer.h" + +namespace v8 { +namespace internal { + +static intptr_t CountTotalHolesSize(Heap* heap) { + intptr_t holes_size = 0; + OldSpaces spaces(heap); + for (OldSpace* space = spaces.next(); space != NULL; space = spaces.next()) { + holes_size += space->Waste() + space->Available(); + } + return holes_size; +} + + +GCTracer::Event::Event(Type type, const char* gc_reason, + const char* collector_reason) + : type(type), + gc_reason(gc_reason), + collector_reason(collector_reason), + start_time(0.0), + end_time(0.0), + start_object_size(0), + end_object_size(0), + start_memory_size(0), + end_memory_size(0), + start_holes_size(0), + end_holes_size(0), + cumulative_incremental_marking_steps(0), + incremental_marking_steps(0), + cumulative_incremental_marking_bytes(0), + incremental_marking_bytes(0), + cumulative_incremental_marking_duration(0.0), + incremental_marking_duration(0.0), + cumulative_pure_incremental_marking_duration(0.0), + pure_incremental_marking_duration(0.0), + longest_incremental_marking_step(0.0) { + for (int i = 0; i < Scope::NUMBER_OF_SCOPES; i++) { + scopes[i] = 0; + } +} + + +const char* GCTracer::Event::TypeName(bool short_name) const { + switch (type) { + case SCAVENGER: + if (short_name) { + return "s"; + } else { + return "Scavenge"; + } + case MARK_COMPACTOR: + if (short_name) { + return "ms"; + } else { + return "Mark-sweep"; + } + case START: + if (short_name) { + return "st"; + } else { + return "Start"; + } + } + return "Unknown Event Type"; +} + + +GCTracer::GCTracer(Heap* heap) + : heap_(heap), + cumulative_incremental_marking_steps_(0), + cumulative_incremental_marking_bytes_(0), + cumulative_incremental_marking_duration_(0.0), + cumulative_pure_incremental_marking_duration_(0.0), + longest_incremental_marking_step_(0.0), + cumulative_marking_duration_(0.0), + cumulative_sweeping_duration_(0.0) { + current_ = Event(Event::START, NULL, NULL); + current_.end_time = base::OS::TimeCurrentMillis(); + previous_ = previous_mark_compactor_event_ = current_; +} + + +void GCTracer::Start(GarbageCollector collector, const char* gc_reason, + const char* collector_reason) { + previous_ = current_; + if (current_.type == Event::MARK_COMPACTOR) + previous_mark_compactor_event_ = current_; + + if (collector == SCAVENGER) { + current_ = Event(Event::SCAVENGER, gc_reason, collector_reason); + } else { + current_ = Event(Event::MARK_COMPACTOR, gc_reason, collector_reason); + } + + current_.start_time = base::OS::TimeCurrentMillis(); + current_.start_object_size = heap_->SizeOfObjects(); + current_.start_memory_size = heap_->isolate()->memory_allocator()->Size(); + current_.start_holes_size = CountTotalHolesSize(heap_); + + current_.cumulative_incremental_marking_steps = + cumulative_incremental_marking_steps_; + current_.cumulative_incremental_marking_bytes = + cumulative_incremental_marking_bytes_; + current_.cumulative_incremental_marking_duration = + cumulative_incremental_marking_duration_; + current_.cumulative_pure_incremental_marking_duration = + cumulative_pure_incremental_marking_duration_; + current_.longest_incremental_marking_step = longest_incremental_marking_step_; + + for (int i = 0; i < Scope::NUMBER_OF_SCOPES; i++) { + current_.scopes[i] = 0; + } +} + + +void GCTracer::Stop() { + current_.end_time = base::OS::TimeCurrentMillis(); + current_.end_object_size = heap_->SizeOfObjects(); + current_.end_memory_size = heap_->isolate()->memory_allocator()->Size(); + current_.end_holes_size = CountTotalHolesSize(heap_); 
+ + if (current_.type == Event::SCAVENGER) { + current_.incremental_marking_steps = + current_.cumulative_incremental_marking_steps - + previous_.cumulative_incremental_marking_steps; + current_.incremental_marking_bytes = + current_.cumulative_incremental_marking_bytes - + previous_.cumulative_incremental_marking_bytes; + current_.incremental_marking_duration = + current_.cumulative_incremental_marking_duration - + previous_.cumulative_incremental_marking_duration; + current_.pure_incremental_marking_duration = + current_.cumulative_pure_incremental_marking_duration - + previous_.cumulative_pure_incremental_marking_duration; + scavenger_events_.push_front(current_); + } else { + current_.incremental_marking_steps = + current_.cumulative_incremental_marking_steps - + previous_mark_compactor_event_.cumulative_incremental_marking_steps; + current_.incremental_marking_bytes = + current_.cumulative_incremental_marking_bytes - + previous_mark_compactor_event_.cumulative_incremental_marking_bytes; + current_.incremental_marking_duration = + current_.cumulative_incremental_marking_duration - + previous_mark_compactor_event_.cumulative_incremental_marking_duration; + current_.pure_incremental_marking_duration = + current_.cumulative_pure_incremental_marking_duration - + previous_mark_compactor_event_ + .cumulative_pure_incremental_marking_duration; + longest_incremental_marking_step_ = 0.0; + mark_compactor_events_.push_front(current_); + } + + // TODO(ernstm): move the code below out of GCTracer. + + if (!FLAG_trace_gc && !FLAG_print_cumulative_gc_stat) return; + + double duration = current_.end_time - current_.start_time; + double spent_in_mutator = Max(current_.start_time - previous_.end_time, 0.0); + + heap_->UpdateCumulativeGCStatistics(duration, spent_in_mutator, + current_.scopes[Scope::MC_MARK]); + + if (current_.type == Event::SCAVENGER && FLAG_trace_gc_ignore_scavenger) + return; + + if (FLAG_trace_gc) { + if (FLAG_trace_gc_nvp) + PrintNVP(); + else + Print(); + + heap_->PrintShortHeapStatistics(); + } +} + + +void GCTracer::AddIncrementalMarkingStep(double duration, intptr_t bytes) { + cumulative_incremental_marking_steps_++; + cumulative_incremental_marking_bytes_ += bytes; + cumulative_incremental_marking_duration_ += duration; + longest_incremental_marking_step_ = + Max(longest_incremental_marking_step_, duration); + cumulative_marking_duration_ += duration; + if (bytes > 0) { + cumulative_pure_incremental_marking_duration_ += duration; + } +} + + +void GCTracer::Print() const { + PrintPID("%8.0f ms: ", heap_->isolate()->time_millis_since_init()); + + PrintF("%s %.1f (%.1f) -> %.1f (%.1f) MB, ", current_.TypeName(false), + static_cast<double>(current_.start_object_size) / MB, + static_cast<double>(current_.start_memory_size) / MB, + static_cast<double>(current_.end_object_size) / MB, + static_cast<double>(current_.end_memory_size) / MB); + + int external_time = static_cast<int>(current_.scopes[Scope::EXTERNAL]); + if (external_time > 0) PrintF("%d / ", external_time); + + double duration = current_.end_time - current_.start_time; + PrintF("%.1f ms", duration); + if (current_.type == Event::SCAVENGER) { + if (current_.incremental_marking_steps > 0) { + PrintF(" (+ %.1f ms in %d steps since last GC)", + current_.incremental_marking_duration, + current_.incremental_marking_steps); + } + } else { + if (current_.incremental_marking_steps > 0) { + PrintF( + " (+ %.1f ms in %d steps since start of marking, " + "biggest step %.1f ms)", + current_.incremental_marking_duration, + 
current_.incremental_marking_steps, + current_.longest_incremental_marking_step); + } + } + + if (current_.gc_reason != NULL) { + PrintF(" [%s]", current_.gc_reason); + } + + if (current_.collector_reason != NULL) { + PrintF(" [%s]", current_.collector_reason); + } + + PrintF(".\n"); +} + + +void GCTracer::PrintNVP() const { + PrintPID("%8.0f ms: ", heap_->isolate()->time_millis_since_init()); + + double duration = current_.end_time - current_.start_time; + double spent_in_mutator = current_.start_time - previous_.end_time; + + PrintF("pause=%.1f ", duration); + PrintF("mutator=%.1f ", spent_in_mutator); + PrintF("gc=%s ", current_.TypeName(true)); + + PrintF("external=%.1f ", current_.scopes[Scope::EXTERNAL]); + PrintF("mark=%.1f ", current_.scopes[Scope::MC_MARK]); + PrintF("sweep=%.2f ", current_.scopes[Scope::MC_SWEEP]); + PrintF("sweepns=%.2f ", current_.scopes[Scope::MC_SWEEP_NEWSPACE]); + PrintF("sweepos=%.2f ", current_.scopes[Scope::MC_SWEEP_OLDSPACE]); + PrintF("sweepcode=%.2f ", current_.scopes[Scope::MC_SWEEP_CODE]); + PrintF("sweepcell=%.2f ", current_.scopes[Scope::MC_SWEEP_CELL]); + PrintF("sweepmap=%.2f ", current_.scopes[Scope::MC_SWEEP_MAP]); + PrintF("evacuate=%.1f ", current_.scopes[Scope::MC_EVACUATE_PAGES]); + PrintF("new_new=%.1f ", + current_.scopes[Scope::MC_UPDATE_NEW_TO_NEW_POINTERS]); + PrintF("root_new=%.1f ", + current_.scopes[Scope::MC_UPDATE_ROOT_TO_NEW_POINTERS]); + PrintF("old_new=%.1f ", + current_.scopes[Scope::MC_UPDATE_OLD_TO_NEW_POINTERS]); + PrintF("compaction_ptrs=%.1f ", + current_.scopes[Scope::MC_UPDATE_POINTERS_TO_EVACUATED]); + PrintF("intracompaction_ptrs=%.1f ", + current_.scopes[Scope::MC_UPDATE_POINTERS_BETWEEN_EVACUATED]); + PrintF("misc_compaction=%.1f ", + current_.scopes[Scope::MC_UPDATE_MISC_POINTERS]); + PrintF("weakcollection_process=%.1f ", + current_.scopes[Scope::MC_WEAKCOLLECTION_PROCESS]); + PrintF("weakcollection_clear=%.1f ", + current_.scopes[Scope::MC_WEAKCOLLECTION_CLEAR]); + PrintF("weakcollection_abort=%.1f ", + current_.scopes[Scope::MC_WEAKCOLLECTION_ABORT]); + + PrintF("total_size_before=%" V8_PTR_PREFIX "d ", current_.start_object_size); + PrintF("total_size_after=%" V8_PTR_PREFIX "d ", current_.end_object_size); + PrintF("holes_size_before=%" V8_PTR_PREFIX "d ", current_.start_holes_size); + PrintF("holes_size_after=%" V8_PTR_PREFIX "d ", current_.end_holes_size); + + intptr_t allocated_since_last_gc = + current_.start_object_size - previous_.end_object_size; + PrintF("allocated=%" V8_PTR_PREFIX "d ", allocated_since_last_gc); + PrintF("promoted=%" V8_PTR_PREFIX "d ", heap_->promoted_objects_size_); + PrintF("semi_space_copied=%" V8_PTR_PREFIX "d ", + heap_->semi_space_copied_object_size_); + PrintF("nodes_died_in_new=%d ", heap_->nodes_died_in_new_space_); + PrintF("nodes_copied_in_new=%d ", heap_->nodes_copied_in_new_space_); + PrintF("nodes_promoted=%d ", heap_->nodes_promoted_); + PrintF("promotion_rate=%.1f%% ", heap_->promotion_rate_); + PrintF("semi_space_copy_rate=%.1f%% ", heap_->semi_space_copied_rate_); + + if (current_.type == Event::SCAVENGER) { + PrintF("steps_count=%d ", current_.incremental_marking_steps); + PrintF("steps_took=%.1f ", current_.incremental_marking_duration); + } else { + PrintF("steps_count=%d ", current_.incremental_marking_steps); + PrintF("steps_took=%.1f ", current_.incremental_marking_duration); + PrintF("longest_step=%.1f ", current_.longest_incremental_marking_step); + PrintF("incremental_marking_throughput=%" V8_PTR_PREFIX "d ", + 
IncrementalMarkingSpeedInBytesPerMillisecond());
+  }
+
+  PrintF("\n");
+}
+
+
+double GCTracer::MeanDuration(const EventBuffer& events) const {
+  if (events.empty()) return 0.0;
+
+  double mean = 0.0;
+  EventBuffer::const_iterator iter = events.begin();
+  while (iter != events.end()) {
+    mean += iter->end_time - iter->start_time;
+    ++iter;
+  }
+
+  return mean / events.size();
+}
+
+
+double GCTracer::MaxDuration(const EventBuffer& events) const {
+  if (events.empty()) return 0.0;
+
+  double maximum = 0.0;
+  EventBuffer::const_iterator iter = events.begin();
+  while (iter != events.end()) {
+    maximum = Max(iter->end_time - iter->start_time, maximum);
+    ++iter;
+  }
+
+  return maximum;
+}
+
+
+double GCTracer::MeanIncrementalMarkingDuration() const {
+  if (cumulative_incremental_marking_steps_ == 0) return 0.0;
+
+  // We haven't completed an entire round of incremental marking, yet.
+  // Use data from GCTracer instead of data from event buffers.
+  if (mark_compactor_events_.empty()) {
+    return cumulative_incremental_marking_duration_ /
+           cumulative_incremental_marking_steps_;
+  }
+
+  int steps = 0;
+  double durations = 0.0;
+  EventBuffer::const_iterator iter = mark_compactor_events_.begin();
+  while (iter != mark_compactor_events_.end()) {
+    steps += iter->incremental_marking_steps;
+    durations += iter->incremental_marking_duration;
+    ++iter;
+  }
+
+  if (steps == 0) return 0.0;
+
+  return durations / steps;
+}
+
+
+double GCTracer::MaxIncrementalMarkingDuration() const {
+  // We haven't completed an entire round of incremental marking, yet.
+  // Use data from GCTracer instead of data from event buffers.
+  if (mark_compactor_events_.empty()) return longest_incremental_marking_step_;
+
+  double max_duration = 0.0;
+  EventBuffer::const_iterator iter = mark_compactor_events_.begin();
+  while (iter != mark_compactor_events_.end()) {
+    max_duration = Max(iter->longest_incremental_marking_step, max_duration);
+    ++iter;
+  }
+
+  return max_duration;
+}
+
+
+intptr_t GCTracer::IncrementalMarkingSpeedInBytesPerMillisecond() const {
+  if (cumulative_pure_incremental_marking_duration_ == 0.0) return 0;
+
+  // We haven't completed an entire round of incremental marking, yet.
+  // Use data from GCTracer instead of data from event buffers.
+  if (mark_compactor_events_.empty()) {
+    return static_cast<intptr_t>(cumulative_incremental_marking_bytes_ /
+                                 cumulative_pure_incremental_marking_duration_);
+  }
+
+  intptr_t bytes = 0;
+  double durations = 0.0;
+  EventBuffer::const_iterator iter = mark_compactor_events_.begin();
+  while (iter != mark_compactor_events_.end()) {
+    bytes += iter->incremental_marking_bytes;
+    durations += iter->pure_incremental_marking_duration;
+    ++iter;
+  }
+
+  if (durations == 0.0) return 0;
+
+  return static_cast<intptr_t>(bytes / durations);
+}
+}
+}  // namespace v8::internal
diff --git a/deps/v8/src/heap/gc-tracer.h b/deps/v8/src/heap/gc-tracer.h
new file mode 100644
index 00000000000..14281a4c8d9
--- /dev/null
+++ b/deps/v8/src/heap/gc-tracer.h
@@ -0,0 +1,356 @@
+// Copyright 2014 the V8 project authors. All rights reserved.
+// Use of this source code is governed by a BSD-style license that can be
+// found in the LICENSE file.
+
+#ifndef V8_HEAP_GC_TRACER_H_
+#define V8_HEAP_GC_TRACER_H_
+
+namespace v8 {
+namespace internal {
+
+// A simple ring buffer class with maximum size known at compile time.
+// The class only implements the functionality required in GCTracer.
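The ring buffer declared next keeps the most recent MAX_SIZE events in a fixed array of MAX_SIZE + 1 slots; the one spare slot is what lets begin_ == end_ unambiguously mean "empty" rather than "full". A standalone sketch of the same modular indexing, trimmed to push_front plus a demo accessor (MiniRingBuffer and its operator[] are illustrative, not members of the class below):

// Sketch only: the N+1-slot ring buffer scheme used by RingBuffer below.
#include <cstddef>
#include <cstdio>

template <typename T, size_t MAX_SIZE>
class MiniRingBuffer {
 public:
  MiniRingBuffer() : begin_(0), end_(0) {}

  bool empty() const { return begin_ == end_; }

  size_t size() const {
    // Modular distance between the two cursors; never exceeds MAX_SIZE.
    return (end_ - begin_ + MAX_SIZE + 1) % (MAX_SIZE + 1);
  }

  // Newest element goes to the front; when the buffer is full the oldest
  // element is dropped by pulling end_ back one slot as well.
  void push_front(const T& element) {
    begin_ = (begin_ + MAX_SIZE) % (MAX_SIZE + 1);  // begin_ - 1, mod N+1
    if (begin_ == end_) end_ = (end_ + MAX_SIZE) % (MAX_SIZE + 1);
    elements_[begin_] = element;
  }

  // Demo accessor: index 0 is the newest element.
  const T& operator[](size_t i) const {
    return elements_[(begin_ + i) % (MAX_SIZE + 1)];
  }

 private:
  T elements_[MAX_SIZE + 1];  // the spare slot disambiguates full vs. empty
  size_t begin_;
  size_t end_;
};

int main() {
  MiniRingBuffer<int, 3> buf;
  for (int i = 1; i <= 5; i++) buf.push_front(i);  // 4 and 5 evict 1 and 2
  for (size_t i = 0; i < buf.size(); i++) std::printf("%d ", buf[i]);
  std::printf("\n");  // prints: 5 4 3
  return 0;
}

Spending one extra slot instead of keeping a separate element count leaves empty(), size() and the iterator increments branch-free, which is all GCTracer needs for its ten-entry event history.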
+template <typename T, size_t MAX_SIZE> +class RingBuffer { + public: + class const_iterator { + public: + const_iterator() : index_(0), elements_(NULL) {} + + const_iterator(size_t index, const T* elements) + : index_(index), elements_(elements) {} + + bool operator==(const const_iterator& rhs) const { + return elements_ == rhs.elements_ && index_ == rhs.index_; + } + + bool operator!=(const const_iterator& rhs) const { + return elements_ != rhs.elements_ || index_ != rhs.index_; + } + + operator const T*() const { return elements_ + index_; } + + const T* operator->() const { return elements_ + index_; } + + const T& operator*() const { return elements_[index_]; } + + const_iterator& operator++() { + index_ = (index_ + 1) % (MAX_SIZE + 1); + return *this; + } + + const_iterator& operator--() { + index_ = (index_ + MAX_SIZE) % (MAX_SIZE + 1); + return *this; + } + + private: + size_t index_; + const T* elements_; + }; + + RingBuffer() : begin_(0), end_(0) {} + + bool empty() const { return begin_ == end_; } + size_t size() const { + return (end_ - begin_ + MAX_SIZE + 1) % (MAX_SIZE + 1); + } + const_iterator begin() const { return const_iterator(begin_, elements_); } + const_iterator end() const { return const_iterator(end_, elements_); } + const_iterator back() const { return --end(); } + void push_back(const T& element) { + elements_[end_] = element; + end_ = (end_ + 1) % (MAX_SIZE + 1); + if (end_ == begin_) begin_ = (begin_ + 1) % (MAX_SIZE + 1); + } + void push_front(const T& element) { + begin_ = (begin_ + MAX_SIZE) % (MAX_SIZE + 1); + if (begin_ == end_) end_ = (end_ + MAX_SIZE) % (MAX_SIZE + 1); + elements_[begin_] = element; + } + + private: + T elements_[MAX_SIZE + 1]; + size_t begin_; + size_t end_; + + DISALLOW_COPY_AND_ASSIGN(RingBuffer); +}; + + +// GCTracer collects and prints ONE line after each garbage collector +// invocation IFF --trace_gc is used. +// TODO(ernstm): Unit tests. +class GCTracer BASE_EMBEDDED { + public: + class Scope BASE_EMBEDDED { + public: + enum ScopeId { + EXTERNAL, + MC_MARK, + MC_SWEEP, + MC_SWEEP_NEWSPACE, + MC_SWEEP_OLDSPACE, + MC_SWEEP_CODE, + MC_SWEEP_CELL, + MC_SWEEP_MAP, + MC_EVACUATE_PAGES, + MC_UPDATE_NEW_TO_NEW_POINTERS, + MC_UPDATE_ROOT_TO_NEW_POINTERS, + MC_UPDATE_OLD_TO_NEW_POINTERS, + MC_UPDATE_POINTERS_TO_EVACUATED, + MC_UPDATE_POINTERS_BETWEEN_EVACUATED, + MC_UPDATE_MISC_POINTERS, + MC_WEAKCOLLECTION_PROCESS, + MC_WEAKCOLLECTION_CLEAR, + MC_WEAKCOLLECTION_ABORT, + MC_FLUSH_CODE, + NUMBER_OF_SCOPES + }; + + Scope(GCTracer* tracer, ScopeId scope) : tracer_(tracer), scope_(scope) { + start_time_ = base::OS::TimeCurrentMillis(); + } + + ~Scope() { + DCHECK(scope_ < NUMBER_OF_SCOPES); // scope_ is unsigned. + tracer_->current_.scopes[scope_] += + base::OS::TimeCurrentMillis() - start_time_; + } + + private: + GCTracer* tracer_; + ScopeId scope_; + double start_time_; + + DISALLOW_COPY_AND_ASSIGN(Scope); + }; + + + class Event { + public: + enum Type { SCAVENGER = 0, MARK_COMPACTOR = 1, START = 2 }; + + // Default constructor leaves the event uninitialized. + Event() {} + + Event(Type type, const char* gc_reason, const char* collector_reason); + + // Returns a string describing the event type. + const char* TypeName(bool short_name) const; + + // Type of event + Type type; + + const char* gc_reason; + const char* collector_reason; + + // Timestamp set in the constructor. + double start_time; + + // Timestamp set in the destructor. + double end_time; + + // Size of objects in heap set in constructor. 
+ intptr_t start_object_size; + + // Size of objects in heap set in destructor. + intptr_t end_object_size; + + // Size of memory allocated from OS set in constructor. + intptr_t start_memory_size; + + // Size of memory allocated from OS set in destructor. + intptr_t end_memory_size; + + // Total amount of space either wasted or contained in one of free lists + // before the current GC. + intptr_t start_holes_size; + + // Total amount of space either wasted or contained in one of free lists + // after the current GC. + intptr_t end_holes_size; + + // Number of incremental marking steps since creation of tracer. + // (value at start of event) + int cumulative_incremental_marking_steps; + + // Incremental marking steps since + // - last event for SCAVENGER events + // - last MARK_COMPACTOR event for MARK_COMPACTOR events + int incremental_marking_steps; + + // Bytes marked since creation of tracer (value at start of event). + intptr_t cumulative_incremental_marking_bytes; + + // Bytes marked since + // - last event for SCAVENGER events + // - last MARK_COMPACTOR event for MARK_COMPACTOR events + intptr_t incremental_marking_bytes; + + // Cumulative duration of incremental marking steps since creation of + // tracer. (value at start of event) + double cumulative_incremental_marking_duration; + + // Duration of incremental marking steps since + // - last event for SCAVENGER events + // - last MARK_COMPACTOR event for MARK_COMPACTOR events + double incremental_marking_duration; + + // Cumulative pure duration of incremental marking steps since creation of + // tracer. (value at start of event) + double cumulative_pure_incremental_marking_duration; + + // Duration of pure incremental marking steps since + // - last event for SCAVENGER events + // - last MARK_COMPACTOR event for MARK_COMPACTOR events + double pure_incremental_marking_duration; + + // Longest incremental marking step since start of marking. + // (value at start of event) + double longest_incremental_marking_step; + + // Amounts of time spent in different scopes during GC. + double scopes[Scope::NUMBER_OF_SCOPES]; + }; + + static const int kRingBufferMaxSize = 10; + + typedef RingBuffer<Event, kRingBufferMaxSize> EventBuffer; + + explicit GCTracer(Heap* heap); + + // Start collecting data. + void Start(GarbageCollector collector, const char* gc_reason, + const char* collector_reason); + + // Stop collecting data and print results. + void Stop(); + + // Log an incremental marking step. + void AddIncrementalMarkingStep(double duration, intptr_t bytes); + + // Log time spent in marking. + void AddMarkingTime(double duration) { + cumulative_marking_duration_ += duration; + } + + // Time spent in marking. + double cumulative_marking_duration() const { + return cumulative_marking_duration_; + } + + // Log time spent in sweeping on main thread. + void AddSweepingTime(double duration) { + cumulative_sweeping_duration_ += duration; + } + + // Time spent in sweeping on main thread. + double cumulative_sweeping_duration() const { + return cumulative_sweeping_duration_; + } + + // Compute the mean duration of the last scavenger events. Returns 0 if no + // events have been recorded. + double MeanScavengerDuration() const { + return MeanDuration(scavenger_events_); + } + + // Compute the max duration of the last scavenger events. Returns 0 if no + // events have been recorded. + double MaxScavengerDuration() const { return MaxDuration(scavenger_events_); } + + // Compute the mean duration of the last mark compactor events. 
Returns 0 if
+  // no events have been recorded.
+  double MeanMarkCompactorDuration() const {
+    return MeanDuration(mark_compactor_events_);
+  }
+
+  // Compute the max duration of the last mark compactor events. Returns 0 if
+  // no events have been recorded.
+  double MaxMarkCompactorDuration() const {
+    return MaxDuration(mark_compactor_events_);
+  }
+
+  // Compute the mean step duration of the last incremental marking round.
+  // Returns 0 if no incremental marking round has been completed.
+  double MeanIncrementalMarkingDuration() const;
+
+  // Compute the max step duration of the last incremental marking round.
+  // Returns 0 if no incremental marking round has been completed.
+  double MaxIncrementalMarkingDuration() const;
+
+  // Compute the average incremental marking speed in bytes/millisecond.
+  // Returns 0 if no events have been recorded.
+  intptr_t IncrementalMarkingSpeedInBytesPerMillisecond() const;
+
+ private:
+  // Print one detailed trace line in name=value format.
+  // TODO(ernstm): Move to Heap.
+  void PrintNVP() const;
+
+  // Print one trace line.
+  // TODO(ernstm): Move to Heap.
+  void Print() const;
+
+  // Compute the mean duration of the events in the given ring buffer.
+  double MeanDuration(const EventBuffer& events) const;
+
+  // Compute the max duration of the events in the given ring buffer.
+  double MaxDuration(const EventBuffer& events) const;
+
+  // Pointer to the heap that owns this tracer.
+  Heap* heap_;
+
+  // Current tracer event. Populated during Start/Stop cycle. Valid after Stop()
+  // has returned.
+  Event current_;
+
+  // Previous tracer event.
+  Event previous_;
+
+  // Previous MARK_COMPACTOR event.
+  Event previous_mark_compactor_event_;
+
+  // RingBuffers for SCAVENGER events.
+  EventBuffer scavenger_events_;
+
+  // RingBuffers for MARK_COMPACTOR events.
+  EventBuffer mark_compactor_events_;
+
+  // Cumulative number of incremental marking steps since creation of tracer.
+  int cumulative_incremental_marking_steps_;
+
+  // Cumulative size of incremental marking steps (in bytes) since creation of
+  // tracer.
+  intptr_t cumulative_incremental_marking_bytes_;
+
+  // Cumulative duration of incremental marking steps since creation of tracer.
+  double cumulative_incremental_marking_duration_;
+
+  // Cumulative duration of pure incremental marking steps since creation of
+  // tracer.
+  double cumulative_pure_incremental_marking_duration_;
+
+  // Longest incremental marking step since start of marking.
+  double longest_incremental_marking_step_;
+
+  // Total marking time.
+  // This timer is precise when run with --print-cumulative-gc-stat
+  double cumulative_marking_duration_;
+
+  // Total sweeping time on the main thread.
+  // This timer is precise when run with --print-cumulative-gc-stat
+  // TODO(hpayer): Account for sweeping time on sweeper threads. Add a
+  // different field for that.
+  // TODO(hpayer): This timer right now just holds the sweeping time
+  // of the initial atomic sweeping pause. Make sure that it accumulates
+  // all sweeping operations performed on the main thread.
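The cumulative timers documented above, including the sweeping-duration field declared just below, are fed by small RAII helpers: GCTracer::Scope earlier in this header stamps a start time in its constructor and adds the elapsed wall time into the matching scopes[] bucket in its destructor, and AddMarkingTime()/AddSweepingTime() do the same for the tracer-wide totals. A standalone sketch of that accumulate-on-destruct pattern, assuming a modern <chrono> clock in place of base::OS::TimeCurrentMillis() (ScopedTimer is an invented name, not the V8 helper):

// Sketch only: charge the lifetime of a scope to a named duration bucket.
#include <chrono>
#include <cstdio>

class ScopedTimer {
 public:
  explicit ScopedTimer(double* bucket_ms)
      : bucket_ms_(bucket_ms), start_(std::chrono::steady_clock::now()) {}

  ~ScopedTimer() {
    std::chrono::duration<double, std::milli> elapsed =
        std::chrono::steady_clock::now() - start_;
    *bucket_ms_ += elapsed.count();  // accumulate, never overwrite
  }

 private:
  double* bucket_ms_;
  std::chrono::steady_clock::time_point start_;
};

double g_sweep_ms = 0.0;  // plays the role of cumulative_sweeping_duration_

void Sweep() {
  ScopedTimer timer(&g_sweep_ms);  // destructor charges this call's time
  for (volatile int i = 0; i < 1000000; i++) {
    // simulated sweeping work
  }
}

int main() {
  Sweep();
  Sweep();
  std::printf("sweep time over 2 calls: %.3f ms\n", g_sweep_ms);
  return 0;
}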
+ double cumulative_sweeping_duration_; + + DISALLOW_COPY_AND_ASSIGN(GCTracer); +}; +} +} // namespace v8::internal + +#endif // V8_HEAP_GC_TRACER_H_ diff --git a/deps/v8/src/heap-inl.h b/deps/v8/src/heap/heap-inl.h similarity index 68% rename from deps/v8/src/heap-inl.h rename to deps/v8/src/heap/heap-inl.h index 3cddcb91dc6..adb6e25bb71 100644 --- a/deps/v8/src/heap-inl.h +++ b/deps/v8/src/heap/heap-inl.h @@ -2,19 +2,20 @@ // Use of this source code is governed by a BSD-style license that can be // found in the LICENSE file. -#ifndef V8_HEAP_INL_H_ -#define V8_HEAP_INL_H_ +#ifndef V8_HEAP_HEAP_INL_H_ +#define V8_HEAP_HEAP_INL_H_ #include <cmath> -#include "heap.h" -#include "heap-profiler.h" -#include "isolate.h" -#include "list-inl.h" -#include "objects.h" -#include "platform.h" -#include "store-buffer.h" -#include "store-buffer-inl.h" +#include "src/base/platform/platform.h" +#include "src/cpu-profiler.h" +#include "src/heap/heap.h" +#include "src/heap/store-buffer.h" +#include "src/heap/store-buffer-inl.h" +#include "src/heap-profiler.h" +#include "src/isolate.h" +#include "src/list-inl.h" +#include "src/objects.h" namespace v8 { namespace internal { @@ -28,13 +29,13 @@ void PromotionQueue::insert(HeapObject* target, int size) { if (NewSpacePage::IsAtStart(reinterpret_cast<Address>(rear_))) { NewSpacePage* rear_page = NewSpacePage::FromAddress(reinterpret_cast<Address>(rear_)); - ASSERT(!rear_page->prev_page()->is_anchor()); + DCHECK(!rear_page->prev_page()->is_anchor()); rear_ = reinterpret_cast<intptr_t*>(rear_page->prev_page()->area_end()); ActivateGuardIfOnTheSamePage(); } if (guard_) { - ASSERT(GetHeadPage() == + DCHECK(GetHeadPage() == Page::FromAllocationTop(reinterpret_cast<Address>(limit_))); if ((rear_ - 2) < limit_) { @@ -46,7 +47,7 @@ void PromotionQueue::insert(HeapObject* target, int size) { *(--rear_) = reinterpret_cast<intptr_t>(target); *(--rear_) = size; - // Assert no overflow into live objects. +// Assert no overflow into live objects. #ifdef DEBUG SemiSpace::AssertValidRange(target->GetIsolate()->heap()->new_space()->top(), reinterpret_cast<Address>(rear_)); @@ -56,12 +57,12 @@ void PromotionQueue::insert(HeapObject* target, int size) { void PromotionQueue::ActivateGuardIfOnTheSamePage() { guard_ = guard_ || - heap_->new_space()->active_space()->current_page()->address() == - GetHeadPage()->address(); + heap_->new_space()->active_space()->current_page()->address() == + GetHeadPage()->address(); } -template<> +template <> bool inline Heap::IsOneByte(Vector<const char> str, int chars) { // TODO(dcarney): incorporate Latin-1 check when Latin-1 is supported? // ASCII only check. 
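The IsOneByte() specializations being reformatted in the surrounding hunks gate the one-byte string fast path taken by AllocateInternalizedStringFromUtf8() below: UTF-8 input may use the compact one-byte representation only if every code unit fits in a single byte. A standalone sketch of such a check (IsAsciiOnly is an invented stand-in; the real code calls into V8's string utilities and, per the TODO above, did not yet accept the full Latin-1 range):

// Sketch only: reject any byte outside the 7-bit ASCII range.
#include <cstdint>
#include <cstdio>

bool IsAsciiOnly(const char* data, int chars) {
  for (int i = 0; i < chars; i++) {
    if (static_cast<uint8_t>(data[i]) > 0x7F) return false;  // non-ASCII
  }
  return true;
}

int main() {
  std::printf("%d\n", IsAsciiOnly("hello", 5));         // 1: one-byte ok
  std::printf("%d\n", IsAsciiOnly("h\xC3\xA9llo", 6));  // 0: UTF-8 e-acute
  return 0;
}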
@@ -69,7 +70,7 @@ bool inline Heap::IsOneByte(Vector<const char> str, int chars) { } -template<> +template <> bool inline Heap::IsOneByte(String* str, int chars) { return str->IsOneByteRepresentation(); } @@ -78,16 +79,16 @@ bool inline Heap::IsOneByte(String* str, int chars) { AllocationResult Heap::AllocateInternalizedStringFromUtf8( Vector<const char> str, int chars, uint32_t hash_field) { if (IsOneByte(str, chars)) { - return AllocateOneByteInternalizedString( - Vector<const uint8_t>::cast(str), hash_field); + return AllocateOneByteInternalizedString(Vector<const uint8_t>::cast(str), + hash_field); } return AllocateInternalizedStringImpl<false>(str, chars, hash_field); } -template<typename T> -AllocationResult Heap::AllocateInternalizedStringImpl( - T t, int chars, uint32_t hash_field) { +template <typename T> +AllocationResult Heap::AllocateInternalizedStringImpl(T t, int chars, + uint32_t hash_field) { if (IsOneByte(t, chars)) { return AllocateInternalizedStringImpl<true>(t, chars, hash_field); } @@ -96,11 +97,8 @@ AllocationResult Heap::AllocateInternalizedStringImpl( AllocationResult Heap::AllocateOneByteInternalizedString( - Vector<const uint8_t> str, - uint32_t hash_field) { - if (str.length() > String::kMaxLength) { - return isolate()->ThrowInvalidStringLength(); - } + Vector<const uint8_t> str, uint32_t hash_field) { + CHECK_GE(String::kMaxLength, str.length()); // Compute map and object size. Map* map = ascii_internalized_string_map(); int size = SeqOneByteString::SizeFor(str.length()); @@ -108,7 +106,8 @@ AllocationResult Heap::AllocateOneByteInternalizedString( // Allocate string. HeapObject* result; - { AllocationResult allocation = AllocateRaw(size, space, OLD_DATA_SPACE); + { + AllocationResult allocation = AllocateRaw(size, space, OLD_DATA_SPACE); if (!allocation.To(&result)) return allocation; } @@ -119,11 +118,11 @@ AllocationResult Heap::AllocateOneByteInternalizedString( answer->set_length(str.length()); answer->set_hash_field(hash_field); - ASSERT_EQ(size, answer->Size()); + DCHECK_EQ(size, answer->Size()); // Fill in the characters. - OS::MemCopy(answer->address() + SeqOneByteString::kHeaderSize, - str.start(), str.length()); + MemCopy(answer->address() + SeqOneByteString::kHeaderSize, str.start(), + str.length()); return answer; } @@ -131,9 +130,7 @@ AllocationResult Heap::AllocateOneByteInternalizedString( AllocationResult Heap::AllocateTwoByteInternalizedString(Vector<const uc16> str, uint32_t hash_field) { - if (str.length() > String::kMaxLength) { - return isolate()->ThrowInvalidStringLength(); - } + CHECK_GE(String::kMaxLength, str.length()); // Compute map and object size. Map* map = internalized_string_map(); int size = SeqTwoByteString::SizeFor(str.length()); @@ -141,7 +138,8 @@ AllocationResult Heap::AllocateTwoByteInternalizedString(Vector<const uc16> str, // Allocate string. HeapObject* result; - { AllocationResult allocation = AllocateRaw(size, space, OLD_DATA_SPACE); + { + AllocationResult allocation = AllocateRaw(size, space, OLD_DATA_SPACE); if (!allocation.To(&result)) return allocation; } @@ -151,11 +149,11 @@ AllocationResult Heap::AllocateTwoByteInternalizedString(Vector<const uc16> str, answer->set_length(str.length()); answer->set_hash_field(hash_field); - ASSERT_EQ(size, answer->Size()); + DCHECK_EQ(size, answer->Size()); // Fill in the characters. 
- OS::MemCopy(answer->address() + SeqTwoByteString::kHeaderSize, - str.start(), str.length() * kUC16Size); + MemCopy(answer->address() + SeqTwoByteString::kHeaderSize, str.start(), + str.length() * kUC16Size); return answer; } @@ -178,16 +176,13 @@ AllocationResult Heap::CopyConstantPoolArray(ConstantPoolArray* src) { } -AllocationResult Heap::AllocateRaw(int size_in_bytes, - AllocationSpace space, +AllocationResult Heap::AllocateRaw(int size_in_bytes, AllocationSpace space, AllocationSpace retry_space) { - ASSERT(AllowHandleAllocation::IsAllowed()); - ASSERT(AllowHeapAllocation::IsAllowed()); - ASSERT(gc_state_ == NOT_IN_GC); - HeapProfiler* profiler = isolate_->heap_profiler(); + DCHECK(AllowHandleAllocation::IsAllowed()); + DCHECK(AllowHeapAllocation::IsAllowed()); + DCHECK(gc_state_ == NOT_IN_GC); #ifdef DEBUG - if (FLAG_gc_interval >= 0 && - AllowAllocationFailure::IsAllowed(isolate_) && + if (FLAG_gc_interval >= 0 && AllowAllocationFailure::IsAllowed(isolate_) && Heap::allocation_timeout_-- <= 0) { return AllocationResult::Retry(space); } @@ -199,13 +194,11 @@ AllocationResult Heap::AllocateRaw(int size_in_bytes, AllocationResult allocation; if (NEW_SPACE == space) { allocation = new_space_.AllocateRaw(size_in_bytes); - if (always_allocate() && - allocation.IsRetry() && - retry_space != NEW_SPACE) { + if (always_allocate() && allocation.IsRetry() && retry_space != NEW_SPACE) { space = retry_space; } else { - if (profiler->is_tracking_allocations() && allocation.To(&object)) { - profiler->AllocationEvent(object->address(), size_in_bytes); + if (allocation.To(&object)) { + OnAllocationEvent(object, size_in_bytes); } return allocation; } @@ -216,7 +209,12 @@ AllocationResult Heap::AllocateRaw(int size_in_bytes, } else if (OLD_DATA_SPACE == space) { allocation = old_data_space_->AllocateRaw(size_in_bytes); } else if (CODE_SPACE == space) { - allocation = code_space_->AllocateRaw(size_in_bytes); + if (size_in_bytes <= code_space()->AreaSize()) { + allocation = code_space_->AllocateRaw(size_in_bytes); + } else { + // Large code objects are allocated in large object space. 
+ allocation = lo_space_->AllocateRaw(size_in_bytes, EXECUTABLE); + } } else if (LO_SPACE == space) { allocation = lo_space_->AllocateRaw(size_in_bytes, NOT_EXECUTABLE); } else if (CELL_SPACE == space) { @@ -224,23 +222,106 @@ AllocationResult Heap::AllocateRaw(int size_in_bytes, } else if (PROPERTY_CELL_SPACE == space) { allocation = property_cell_space_->AllocateRaw(size_in_bytes); } else { - ASSERT(MAP_SPACE == space); + DCHECK(MAP_SPACE == space); allocation = map_space_->AllocateRaw(size_in_bytes); } - if (allocation.IsRetry()) old_gen_exhausted_ = true; - if (profiler->is_tracking_allocations() && allocation.To(&object)) { - profiler->AllocationEvent(object->address(), size_in_bytes); + if (allocation.To(&object)) { + OnAllocationEvent(object, size_in_bytes); + } else { + old_gen_exhausted_ = true; } return allocation; } +void Heap::OnAllocationEvent(HeapObject* object, int size_in_bytes) { + HeapProfiler* profiler = isolate_->heap_profiler(); + if (profiler->is_tracking_allocations()) { + profiler->AllocationEvent(object->address(), size_in_bytes); + } + + if (FLAG_verify_predictable) { + ++allocations_count_; + + UpdateAllocationsHash(object); + UpdateAllocationsHash(size_in_bytes); + + if ((FLAG_dump_allocations_digest_at_alloc > 0) && + (--dump_allocations_hash_countdown_ == 0)) { + dump_allocations_hash_countdown_ = FLAG_dump_allocations_digest_at_alloc; + PrintAlloctionsHash(); + } + } +} + + +void Heap::OnMoveEvent(HeapObject* target, HeapObject* source, + int size_in_bytes) { + HeapProfiler* heap_profiler = isolate_->heap_profiler(); + if (heap_profiler->is_tracking_object_moves()) { + heap_profiler->ObjectMoveEvent(source->address(), target->address(), + size_in_bytes); + } + + if (isolate_->logger()->is_logging_code_events() || + isolate_->cpu_profiler()->is_profiling()) { + if (target->IsSharedFunctionInfo()) { + PROFILE(isolate_, SharedFunctionInfoMoveEvent(source->address(), + target->address())); + } + } + + if (FLAG_verify_predictable) { + ++allocations_count_; + + UpdateAllocationsHash(source); + UpdateAllocationsHash(target); + UpdateAllocationsHash(size_in_bytes); + + if ((FLAG_dump_allocations_digest_at_alloc > 0) && + (--dump_allocations_hash_countdown_ == 0)) { + dump_allocations_hash_countdown_ = FLAG_dump_allocations_digest_at_alloc; + PrintAlloctionsHash(); + } + } +} + + +void Heap::UpdateAllocationsHash(HeapObject* object) { + Address object_address = object->address(); + MemoryChunk* memory_chunk = MemoryChunk::FromAddress(object_address); + AllocationSpace allocation_space = memory_chunk->owner()->identity(); + + STATIC_ASSERT(kSpaceTagSize + kPageSizeBits <= 32); + uint32_t value = + static_cast<uint32_t>(object_address - memory_chunk->address()) | + (static_cast<uint32_t>(allocation_space) << kPageSizeBits); + + UpdateAllocationsHash(value); +} + + +void Heap::UpdateAllocationsHash(uint32_t value) { + uint16_t c1 = static_cast<uint16_t>(value); + uint16_t c2 = static_cast<uint16_t>(value >> 16); + raw_allocations_hash_ = + StringHasher::AddCharacterCore(raw_allocations_hash_, c1); + raw_allocations_hash_ = + StringHasher::AddCharacterCore(raw_allocations_hash_, c2); +} + + +void Heap::PrintAlloctionsHash() { + uint32_t hash = StringHasher::GetHashCore(raw_allocations_hash_); + PrintF("\n### Allocations = %u, hash = 0x%08x\n", allocations_count_, hash); +} + + void Heap::FinalizeExternalString(String* string) { - ASSERT(string->IsExternalString()); + DCHECK(string->IsExternalString()); v8::String::ExternalStringResourceBase** resource_addr = 
reinterpret_cast<v8::String::ExternalStringResourceBase**>( - reinterpret_cast<byte*>(string) + - ExternalString::kResourceOffset - + reinterpret_cast<byte*>(string) + ExternalString::kResourceOffset - kHeapObjectTag); // Dispose of the C++ object if it has not already been disposed. @@ -253,16 +334,14 @@ void Heap::FinalizeExternalString(String* string) { bool Heap::InNewSpace(Object* object) { bool result = new_space_.Contains(object); - ASSERT(!result || // Either not in new space - gc_state_ != NOT_IN_GC || // ... or in the middle of GC - InToSpace(object)); // ... or in to-space (where we allocate). + DCHECK(!result || // Either not in new space + gc_state_ != NOT_IN_GC || // ... or in the middle of GC + InToSpace(object)); // ... or in to-space (where we allocate). return result; } -bool Heap::InNewSpace(Address address) { - return new_space_.Contains(address); -} +bool Heap::InNewSpace(Address address) { return new_space_.Contains(address); } bool Heap::InFromSpace(Object* object) { @@ -302,15 +381,10 @@ bool Heap::OldGenerationAllocationLimitReached() { bool Heap::ShouldBePromoted(Address old_address, int object_size) { - // An object should be promoted if: - // - the object has survived a scavenge operation or - // - to space is already 25% full. NewSpacePage* page = NewSpacePage::FromAddress(old_address); Address age_mark = new_space_.age_mark(); - bool below_mark = page->IsFlagSet(MemoryChunk::NEW_SPACE_BELOW_AGE_MARK) && - (!page->ContainsLimit(age_mark) || old_address < age_mark); - return below_mark || (new_space_.Size() + object_size) >= - (new_space_.EffectiveCapacity() >> 2); + return page->IsFlagSet(MemoryChunk::NEW_SPACE_BELOW_AGE_MARK) && + (!page->ContainsLimit(age_mark) || old_address < age_mark); } @@ -331,9 +405,7 @@ void Heap::RecordWrites(Address address, int start, int len) { OldSpace* Heap::TargetSpace(HeapObject* object) { InstanceType type = object->map()->instance_type(); AllocationSpace space = TargetSpaceId(type); - return (space == OLD_POINTER_SPACE) - ? old_pointer_space_ - : old_data_space_; + return (space == OLD_POINTER_SPACE) ? old_pointer_space_ : old_data_space_; } @@ -344,21 +416,21 @@ AllocationSpace Heap::TargetSpaceId(InstanceType type) { // know that object has the heap object tag. // These objects are never allocated in new space. - ASSERT(type != MAP_TYPE); - ASSERT(type != CODE_TYPE); - ASSERT(type != ODDBALL_TYPE); - ASSERT(type != CELL_TYPE); - ASSERT(type != PROPERTY_CELL_TYPE); + DCHECK(type != MAP_TYPE); + DCHECK(type != CODE_TYPE); + DCHECK(type != ODDBALL_TYPE); + DCHECK(type != CELL_TYPE); + DCHECK(type != PROPERTY_CELL_TYPE); if (type <= LAST_NAME_TYPE) { if (type == SYMBOL_TYPE) return OLD_POINTER_SPACE; - ASSERT(type < FIRST_NONSTRING_TYPE); + DCHECK(type < FIRST_NONSTRING_TYPE); // There are four string representations: sequential strings, external // strings, cons strings, and sliced strings. // Only the latter two contain non-map-word pointers to heap objects. return ((type & kIsIndirectStringMask) == kIsIndirectStringTag) - ? OLD_POINTER_SPACE - : OLD_DATA_SPACE; + ? OLD_POINTER_SPACE + : OLD_DATA_SPACE; } else { return (type <= LAST_DATA_TYPE) ? 
OLD_DATA_SPACE : OLD_POINTER_SPACE; } @@ -388,9 +460,9 @@ bool Heap::AllowedToBeMigrated(HeapObject* obj, AllocationSpace dst) { case NEW_SPACE: return dst == src || dst == TargetSpaceId(type); case OLD_POINTER_SPACE: - return dst == src && - (dst == TargetSpaceId(type) || obj->IsFiller() || - (obj->IsExternalString() && ExternalString::cast(obj)->is_short())); + return dst == src && (dst == TargetSpaceId(type) || obj->IsFiller() || + (obj->IsExternalString() && + ExternalString::cast(obj)->is_short())); case OLD_DATA_SPACE: return dst == src && dst == TargetSpaceId(type); case CODE_SPACE: @@ -409,14 +481,13 @@ bool Heap::AllowedToBeMigrated(HeapObject* obj, AllocationSpace dst) { void Heap::CopyBlock(Address dst, Address src, int byte_size) { - CopyWords(reinterpret_cast<Object**>(dst), - reinterpret_cast<Object**>(src), + CopyWords(reinterpret_cast<Object**>(dst), reinterpret_cast<Object**>(src), static_cast<size_t>(byte_size / kPointerSize)); } void Heap::MoveBlock(Address dst, Address src, int byte_size) { - ASSERT(IsAligned(byte_size, kPointerSize)); + DCHECK(IsAligned(byte_size, kPointerSize)); int size_in_words = byte_size / kPointerSize; @@ -429,14 +500,12 @@ void Heap::MoveBlock(Address dst, Address src, int byte_size) { *dst_slot++ = *src_slot++; } } else { - OS::MemMove(dst, src, static_cast<size_t>(byte_size)); + MemMove(dst, src, static_cast<size_t>(byte_size)); } } -void Heap::ScavengePointer(HeapObject** p) { - ScavengeObject(p, *p); -} +void Heap::ScavengePointer(HeapObject** p) { ScavengeObject(p, *p); } AllocationMemento* Heap::FindAllocationMemento(HeapObject* object) { @@ -446,8 +515,7 @@ AllocationMemento* Heap::FindAllocationMemento(HeapObject* object) { Address object_address = object->address(); Address memento_address = object_address + object->Size(); Address last_memento_word_address = memento_address + kPointerSize; - if (!NewSpacePage::OnSamePage(object_address, - last_memento_word_address)) { + if (!NewSpacePage::OnSamePage(object_address, last_memento_word_address)) { return NULL; } @@ -463,7 +531,7 @@ AllocationMemento* Heap::FindAllocationMemento(HeapObject* object) { // the test makes it possible to have a single, unified version of // FindAllocationMemento that is used both by the GC and the mutator. Address top = NewSpaceTop(); - ASSERT(memento_address == top || + DCHECK(memento_address == top || memento_address + HeapObject::kHeaderSize <= top || !NewSpacePage::OnSamePage(memento_address, top)); if (memento_address == top) return NULL; @@ -477,10 +545,11 @@ AllocationMemento* Heap::FindAllocationMemento(HeapObject* object) { void Heap::UpdateAllocationSiteFeedback(HeapObject* object, ScratchpadSlotMode mode) { Heap* heap = object->GetHeap(); - ASSERT(heap->InFromSpace(object)); + DCHECK(heap->InFromSpace(object)); if (!FLAG_allocation_site_pretenuring || - !AllocationSite::CanTrack(object->map()->instance_type())) return; + !AllocationSite::CanTrack(object->map()->instance_type())) + return; AllocationMemento* memento = heap->FindAllocationMemento(object); if (memento == NULL) return; @@ -492,7 +561,7 @@ void Heap::UpdateAllocationSiteFeedback(HeapObject* object, void Heap::ScavengeObject(HeapObject** p, HeapObject* object) { - ASSERT(object->GetIsolate()->heap()->InFromSpace(object)); + DCHECK(object->GetIsolate()->heap()->InFromSpace(object)); // We use the first word (where the map pointer usually is) of a heap // object to record the forwarding pointer. 
A forwarding pointer can @@ -504,7 +573,7 @@ void Heap::ScavengeObject(HeapObject** p, HeapObject* object) { // copied. if (first_word.IsForwardingAddress()) { HeapObject* dest = first_word.ToForwardingAddress(); - ASSERT(object->GetIsolate()->heap()->InFromSpace(*p)); + DCHECK(object->GetIsolate()->heap()->InFromSpace(*p)); *p = dest; return; } @@ -512,14 +581,13 @@ void Heap::ScavengeObject(HeapObject** p, HeapObject* object) { UpdateAllocationSiteFeedback(object, IGNORE_SCRATCHPAD_SLOT); // AllocationMementos are unrooted and shouldn't survive a scavenge - ASSERT(object->map() != object->GetHeap()->allocation_memento_map()); + DCHECK(object->map() != object->GetHeap()->allocation_memento_map()); // Call the slow part of scavenge object. return ScavengeObjectSlow(p, object); } -bool Heap::CollectGarbage(AllocationSpace space, - const char* gc_reason, +bool Heap::CollectGarbage(AllocationSpace space, const char* gc_reason, const v8::GCCallbackFlags callbackFlags) { const char* collector_reason = NULL; GarbageCollector collector = SelectGarbageCollector(space, &collector_reason); @@ -527,50 +595,9 @@ bool Heap::CollectGarbage(AllocationSpace space, } -int64_t Heap::AdjustAmountOfExternalAllocatedMemory( - int64_t change_in_bytes) { - ASSERT(HasBeenSetUp()); - int64_t amount = amount_of_external_allocated_memory_ + change_in_bytes; - if (change_in_bytes > 0) { - // Avoid overflow. - if (amount > amount_of_external_allocated_memory_) { - amount_of_external_allocated_memory_ = amount; - } else { - // Give up and reset the counters in case of an overflow. - amount_of_external_allocated_memory_ = 0; - amount_of_external_allocated_memory_at_last_global_gc_ = 0; - } - int64_t amount_since_last_global_gc = PromotedExternalMemorySize(); - if (amount_since_last_global_gc > external_allocation_limit_) { - CollectAllGarbage(kNoGCFlags, "external memory allocation limit reached"); - } - } else { - // Avoid underflow. - if (amount >= 0) { - amount_of_external_allocated_memory_ = amount; - } else { - // Give up and reset the counters in case of an underflow. - amount_of_external_allocated_memory_ = 0; - amount_of_external_allocated_memory_at_last_global_gc_ = 0; - } - } - if (FLAG_trace_external_memory) { - PrintPID("%8.0f ms: ", isolate()->time_millis_since_init()); - PrintF("Adjust amount of external memory: delta=%6" V8_PTR_PREFIX "d KB, " - "amount=%6" V8_PTR_PREFIX "d KB, since_gc=%6" V8_PTR_PREFIX "d KB, " - "isolate=0x%08" V8PRIxPTR ".\n", - static_cast<intptr_t>(change_in_bytes / KB), - static_cast<intptr_t>(amount_of_external_allocated_memory_ / KB), - static_cast<intptr_t>(PromotedExternalMemorySize() / KB), - reinterpret_cast<intptr_t>(isolate())); - } - ASSERT(amount_of_external_allocated_memory_ >= 0); - return amount_of_external_allocated_memory_; -} - - Isolate* Heap::isolate() { - return reinterpret_cast<Isolate*>(reinterpret_cast<intptr_t>(this) - + return reinterpret_cast<Isolate*>( + reinterpret_cast<intptr_t>(this) - reinterpret_cast<size_t>(reinterpret_cast<Isolate*>(4)->heap()) + 4); } @@ -582,55 +609,49 @@ Isolate* Heap::isolate() { // Warning: Do not use the identifiers __object__, __maybe_object__ or // __scope__ in a call to this macro. 
-#define RETURN_OBJECT_UNLESS_EXCEPTION(ISOLATE, RETURN_VALUE, RETURN_EMPTY) \ - if (!__allocation__.IsRetry()) { \ - __object__ = __allocation__.ToObjectChecked(); \ - if (__object__ == (ISOLATE)->heap()->exception()) { RETURN_EMPTY; } \ - RETURN_VALUE; \ - } - -#define CALL_AND_RETRY(ISOLATE, FUNCTION_CALL, RETURN_VALUE, RETURN_EMPTY) \ - do { \ - AllocationResult __allocation__ = FUNCTION_CALL; \ - Object* __object__ = NULL; \ - RETURN_OBJECT_UNLESS_EXCEPTION(ISOLATE, RETURN_VALUE, RETURN_EMPTY) \ - (ISOLATE)->heap()->CollectGarbage(__allocation__.RetrySpace(), \ - "allocation failure"); \ - __allocation__ = FUNCTION_CALL; \ - RETURN_OBJECT_UNLESS_EXCEPTION(ISOLATE, RETURN_VALUE, RETURN_EMPTY) \ - (ISOLATE)->counters()->gc_last_resort_from_handles()->Increment(); \ - (ISOLATE)->heap()->CollectAllAvailableGarbage("last resort gc"); \ - { \ - AlwaysAllocateScope __scope__(ISOLATE); \ - __allocation__ = FUNCTION_CALL; \ - } \ - RETURN_OBJECT_UNLESS_EXCEPTION(ISOLATE, RETURN_VALUE, RETURN_EMPTY) \ - /* TODO(1181417): Fix this. */ \ - v8::internal::Heap::FatalProcessOutOfMemory("CALL_AND_RETRY_LAST", true); \ - RETURN_EMPTY; \ +#define RETURN_OBJECT_UNLESS_RETRY(ISOLATE, RETURN_VALUE) \ + if (__allocation__.To(&__object__)) { \ + DCHECK(__object__ != (ISOLATE)->heap()->exception()); \ + RETURN_VALUE; \ + } + +#define CALL_AND_RETRY(ISOLATE, FUNCTION_CALL, RETURN_VALUE, RETURN_EMPTY) \ + do { \ + AllocationResult __allocation__ = FUNCTION_CALL; \ + Object* __object__ = NULL; \ + RETURN_OBJECT_UNLESS_RETRY(ISOLATE, RETURN_VALUE) \ + (ISOLATE)->heap()->CollectGarbage(__allocation__.RetrySpace(), \ + "allocation failure"); \ + __allocation__ = FUNCTION_CALL; \ + RETURN_OBJECT_UNLESS_RETRY(ISOLATE, RETURN_VALUE) \ + (ISOLATE)->counters()->gc_last_resort_from_handles()->Increment(); \ + (ISOLATE)->heap()->CollectAllAvailableGarbage("last resort gc"); \ + { \ + AlwaysAllocateScope __scope__(ISOLATE); \ + __allocation__ = FUNCTION_CALL; \ + } \ + RETURN_OBJECT_UNLESS_RETRY(ISOLATE, RETURN_VALUE) \ + /* TODO(1181417): Fix this. 
*/ \ + v8::internal::Heap::FatalProcessOutOfMemory("CALL_AND_RETRY_LAST", true); \ + RETURN_EMPTY; \ } while (false) -#define CALL_AND_RETRY_OR_DIE( \ - ISOLATE, FUNCTION_CALL, RETURN_VALUE, RETURN_EMPTY) \ - CALL_AND_RETRY( \ - ISOLATE, \ - FUNCTION_CALL, \ - RETURN_VALUE, \ - RETURN_EMPTY) +#define CALL_AND_RETRY_OR_DIE(ISOLATE, FUNCTION_CALL, RETURN_VALUE, \ + RETURN_EMPTY) \ + CALL_AND_RETRY(ISOLATE, FUNCTION_CALL, RETURN_VALUE, RETURN_EMPTY) #define CALL_HEAP_FUNCTION(ISOLATE, FUNCTION_CALL, TYPE) \ - CALL_AND_RETRY_OR_DIE(ISOLATE, \ - FUNCTION_CALL, \ + CALL_AND_RETRY_OR_DIE(ISOLATE, FUNCTION_CALL, \ return Handle<TYPE>(TYPE::cast(__object__), ISOLATE), \ - return Handle<TYPE>()) \ + return Handle<TYPE>()) -#define CALL_HEAP_FUNCTION_VOID(ISOLATE, FUNCTION_CALL) \ +#define CALL_HEAP_FUNCTION_VOID(ISOLATE, FUNCTION_CALL) \ CALL_AND_RETRY_OR_DIE(ISOLATE, FUNCTION_CALL, return, return) void ExternalStringTable::AddString(String* string) { - ASSERT(string->IsExternalString()); + DCHECK(string->IsExternalString()); if (heap_->InNewSpace(string)) { new_space_strings_.Add(string); } else { @@ -657,21 +678,21 @@ void ExternalStringTable::Verify() { #ifdef DEBUG for (int i = 0; i < new_space_strings_.length(); ++i) { Object* obj = Object::cast(new_space_strings_[i]); - ASSERT(heap_->InNewSpace(obj)); - ASSERT(obj != heap_->the_hole_value()); + DCHECK(heap_->InNewSpace(obj)); + DCHECK(obj != heap_->the_hole_value()); } for (int i = 0; i < old_space_strings_.length(); ++i) { Object* obj = Object::cast(old_space_strings_[i]); - ASSERT(!heap_->InNewSpace(obj)); - ASSERT(obj != heap_->the_hole_value()); + DCHECK(!heap_->InNewSpace(obj)); + DCHECK(obj != heap_->the_hole_value()); } #endif } void ExternalStringTable::AddOldString(String* string) { - ASSERT(string->IsExternalString()); - ASSERT(!heap_->InNewSpace(string)); + DCHECK(string->IsExternalString()); + DCHECK(!heap_->InNewSpace(string)); old_space_strings_.Add(string); } @@ -708,14 +729,14 @@ AlwaysAllocateScope::AlwaysAllocateScope(Isolate* isolate) // non-handle code to call handle code. The code still works but // performance will degrade, so we want to catch this situation // in debug mode. 
- ASSERT(heap_->always_allocate_scope_depth_ == 0); + DCHECK(heap_->always_allocate_scope_depth_ == 0); heap_->always_allocate_scope_depth_++; } AlwaysAllocateScope::~AlwaysAllocateScope() { heap_->always_allocate_scope_depth_--; - ASSERT(heap_->always_allocate_scope_depth_ == 0); + DCHECK(heap_->always_allocate_scope_depth_ == 0); } @@ -738,9 +759,7 @@ GCCallbacksScope::GCCallbacksScope(Heap* heap) : heap_(heap) { } -GCCallbacksScope::~GCCallbacksScope() { - heap_->gc_callbacks_depth_--; -} +GCCallbacksScope::~GCCallbacksScope() { heap_->gc_callbacks_depth_--; } bool GCCallbacksScope::CheckReenter() { @@ -761,16 +780,10 @@ void VerifyPointersVisitor::VisitPointers(Object** start, Object** end) { void VerifySmisVisitor::VisitPointers(Object** start, Object** end) { for (Object** current = start; current < end; current++) { - CHECK((*current)->IsSmi()); + CHECK((*current)->IsSmi()); } } - - -double GCTracer::SizeOfHeapObjects() { - return (static_cast<double>(heap_->SizeOfObjects())) / MB; } +} // namespace v8::internal - -} } // namespace v8::internal - -#endif // V8_HEAP_INL_H_ +#endif // V8_HEAP_HEAP_INL_H_ diff --git a/deps/v8/src/heap.cc b/deps/v8/src/heap/heap.cc similarity index 68% rename from deps/v8/src/heap.cc rename to deps/v8/src/heap/heap.cc index ff9c26c40f7..fd08c8292f8 100644 --- a/deps/v8/src/heap.cc +++ b/deps/v8/src/heap/heap.cc @@ -2,41 +2,46 @@ // Use of this source code is governed by a BSD-style license that can be // found in the LICENSE file. -#include "v8.h" - -#include "accessors.h" -#include "api.h" -#include "bootstrapper.h" -#include "codegen.h" -#include "compilation-cache.h" -#include "conversions.h" -#include "cpu-profiler.h" -#include "debug.h" -#include "deoptimizer.h" -#include "global-handles.h" -#include "heap-profiler.h" -#include "incremental-marking.h" -#include "isolate-inl.h" -#include "mark-compact.h" -#include "natives.h" -#include "objects-visiting.h" -#include "objects-visiting-inl.h" -#include "once.h" -#include "runtime-profiler.h" -#include "scopeinfo.h" -#include "snapshot.h" -#include "store-buffer.h" -#include "utils/random-number-generator.h" -#include "utils.h" -#include "v8threads.h" -#include "vm-state-inl.h" +#include "src/v8.h" + +#include "src/accessors.h" +#include "src/api.h" +#include "src/base/once.h" +#include "src/base/utils/random-number-generator.h" +#include "src/bootstrapper.h" +#include "src/codegen.h" +#include "src/compilation-cache.h" +#include "src/conversions.h" +#include "src/cpu-profiler.h" +#include "src/debug.h" +#include "src/deoptimizer.h" +#include "src/global-handles.h" +#include "src/heap/incremental-marking.h" +#include "src/heap/mark-compact.h" +#include "src/heap/objects-visiting-inl.h" +#include "src/heap/objects-visiting.h" +#include "src/heap/store-buffer.h" +#include "src/heap-profiler.h" +#include "src/isolate-inl.h" +#include "src/natives.h" +#include "src/runtime-profiler.h" +#include "src/scopeinfo.h" +#include "src/snapshot.h" +#include "src/utils.h" +#include "src/v8threads.h" +#include "src/vm-state-inl.h" + #if V8_TARGET_ARCH_ARM && !V8_INTERPRETED_REGEXP -#include "regexp-macro-assembler.h" -#include "arm/regexp-macro-assembler-arm.h" +#include "src/regexp-macro-assembler.h" // NOLINT +#include "src/arm/regexp-macro-assembler-arm.h" // NOLINT #endif #if V8_TARGET_ARCH_MIPS && !V8_INTERPRETED_REGEXP -#include "regexp-macro-assembler.h" -#include "mips/regexp-macro-assembler-mips.h" +#include "src/regexp-macro-assembler.h" // NOLINT +#include "src/mips/regexp-macro-assembler-mips.h" // NOLINT 
+#endif +#if V8_TARGET_ARCH_MIPS64 && !V8_INTERPRETED_REGEXP +#include "src/regexp-macro-assembler.h" +#include "src/mips64/regexp-macro-assembler-mips64.h" #endif namespace v8 { @@ -44,24 +49,25 @@ namespace internal { Heap::Heap() - : isolate_(NULL), + : amount_of_external_allocated_memory_(0), + amount_of_external_allocated_memory_at_last_global_gc_(0), + isolate_(NULL), code_range_size_(0), -// semispace_size_ should be a power of 2 and old_generation_size_ should be -// a multiple of Page::kPageSize. + // semispace_size_ should be a power of 2 and old_generation_size_ should + // be a multiple of Page::kPageSize. reserved_semispace_size_(8 * (kPointerSize / 4) * MB), - max_semispace_size_(8 * (kPointerSize / 4) * MB), + max_semi_space_size_(8 * (kPointerSize / 4) * MB), initial_semispace_size_(Page::kPageSize), max_old_generation_size_(700ul * (kPointerSize / 4) * MB), max_executable_size_(256ul * (kPointerSize / 4) * MB), -// Variables set based on semispace_size_ and old_generation_size_ in -// ConfigureHeap (survived_since_last_expansion_, external_allocation_limit_) -// Will be 4 * reserved_semispace_size_ to ensure that young -// generation can be aligned to its size. + // Variables set based on semispace_size_ and old_generation_size_ in + // ConfigureHeap. + // Will be 4 * reserved_semispace_size_ to ensure that young + // generation can be aligned to its size. maximum_committed_(0), survived_since_last_expansion_(0), sweep_generation_(0), always_allocate_scope_depth_(0), - linear_allocation_scope_depth_(0), contexts_disposed_(0), global_ic_age_(0), flush_monomorphic_ics_(false), @@ -76,6 +82,9 @@ Heap::Heap() lo_space_(NULL), gc_state_(NOT_IN_GC), gc_post_processing_depth_(0), + allocations_count_(0), + raw_allocations_hash_(0), + dump_allocations_hash_countdown_(FLAG_dump_allocations_digest_at_alloc), ms_count_(0), gc_count_(0), remembered_unmapped_pages_index_(0), @@ -83,30 +92,27 @@ Heap::Heap() #ifdef DEBUG allocation_timeout_(0), #endif // DEBUG - new_space_high_promotion_mode_active_(false), old_generation_allocation_limit_(kMinimumOldGenerationAllocationLimit), - external_allocation_limit_(0), - amount_of_external_allocated_memory_(0), - amount_of_external_allocated_memory_at_last_global_gc_(0), old_gen_exhausted_(false), inline_allocation_disabled_(false), store_buffer_rebuilder_(store_buffer()), hidden_string_(NULL), gc_safe_size_of_old_object_(NULL), total_regexp_code_generated_(0), - tracer_(NULL), - young_survivors_after_last_gc_(0), + tracer_(this), high_survival_rate_period_length_(0), - low_survival_rate_period_length_(0), - survival_rate_(0), - previous_survival_rate_trend_(Heap::STABLE), - survival_rate_trend_(Heap::STABLE), + promoted_objects_size_(0), + promotion_rate_(0), + semi_space_copied_object_size_(0), + semi_space_copied_rate_(0), + nodes_died_in_new_space_(0), + nodes_copied_in_new_space_(0), + nodes_promoted_(0), + maximum_size_scavenges_(0), max_gc_pause_(0.0), total_gc_time_ms_(0.0), max_alive_after_gc_(0), min_in_mutator_(kMaxInt), - alive_after_last_gc_(0), - last_gc_end_timestamp_(0.0), marking_time_(0.0), sweeping_time_(0.0), mark_compact_collector_(this), @@ -131,20 +137,21 @@ Heap::Heap() external_string_table_(this), chunks_queued_for_free_(NULL), gc_callbacks_depth_(0) { - // Allow build-time customization of the max semispace size. Building - // V8 with snapshots and a non-default max semispace size is much - // easier if you can define it as part of the build environment. +// Allow build-time customization of the max semispace size. 
Building +// V8 with snapshots and a non-default max semispace size is much +// easier if you can define it as part of the build environment. #if defined(V8_MAX_SEMISPACE_SIZE) - max_semispace_size_ = reserved_semispace_size_ = V8_MAX_SEMISPACE_SIZE; + max_semi_space_size_ = reserved_semispace_size_ = V8_MAX_SEMISPACE_SIZE; #endif // Ensure old_generation_size_ is a multiple of kPageSize. - ASSERT(MB >= Page::kPageSize); + DCHECK(MB >= Page::kPageSize); memset(roots_, 0, sizeof(roots_[0]) * kRootListLength); - native_contexts_list_ = NULL; - array_buffers_list_ = Smi::FromInt(0); - allocation_sites_list_ = Smi::FromInt(0); + set_native_contexts_list(NULL); + set_array_buffers_list(Smi::FromInt(0)); + set_allocation_sites_list(Smi::FromInt(0)); + set_encountered_weak_collections(Smi::FromInt(0)); // Put a dummy entry in the remembered pages so we can find the list the // minidump even if there are no real unmapped pages. RememberUnmappedPage(NULL, false); @@ -156,27 +163,20 @@ Heap::Heap() intptr_t Heap::Capacity() { if (!HasBeenSetUp()) return 0; - return new_space_.Capacity() + - old_pointer_space_->Capacity() + - old_data_space_->Capacity() + - code_space_->Capacity() + - map_space_->Capacity() + - cell_space_->Capacity() + - property_cell_space_->Capacity(); + return new_space_.Capacity() + old_pointer_space_->Capacity() + + old_data_space_->Capacity() + code_space_->Capacity() + + map_space_->Capacity() + cell_space_->Capacity() + + property_cell_space_->Capacity(); } intptr_t Heap::CommittedMemory() { if (!HasBeenSetUp()) return 0; - return new_space_.CommittedMemory() + - old_pointer_space_->CommittedMemory() + - old_data_space_->CommittedMemory() + - code_space_->CommittedMemory() + - map_space_->CommittedMemory() + - cell_space_->CommittedMemory() + - property_cell_space_->CommittedMemory() + - lo_space_->Size(); + return new_space_.CommittedMemory() + old_pointer_space_->CommittedMemory() + + old_data_space_->CommittedMemory() + code_space_->CommittedMemory() + + map_space_->CommittedMemory() + cell_space_->CommittedMemory() + + property_cell_space_->CommittedMemory() + lo_space_->Size(); } @@ -184,13 +184,13 @@ size_t Heap::CommittedPhysicalMemory() { if (!HasBeenSetUp()) return 0; return new_space_.CommittedPhysicalMemory() + - old_pointer_space_->CommittedPhysicalMemory() + - old_data_space_->CommittedPhysicalMemory() + - code_space_->CommittedPhysicalMemory() + - map_space_->CommittedPhysicalMemory() + - cell_space_->CommittedPhysicalMemory() + - property_cell_space_->CommittedPhysicalMemory() + - lo_space_->CommittedPhysicalMemory(); + old_pointer_space_->CommittedPhysicalMemory() + + old_data_space_->CommittedPhysicalMemory() + + code_space_->CommittedPhysicalMemory() + + map_space_->CommittedPhysicalMemory() + + cell_space_->CommittedPhysicalMemory() + + property_cell_space_->CommittedPhysicalMemory() + + lo_space_->CommittedPhysicalMemory(); } @@ -214,24 +214,17 @@ void Heap::UpdateMaximumCommitted() { intptr_t Heap::Available() { if (!HasBeenSetUp()) return 0; - return new_space_.Available() + - old_pointer_space_->Available() + - old_data_space_->Available() + - code_space_->Available() + - map_space_->Available() + - cell_space_->Available() + - property_cell_space_->Available(); + return new_space_.Available() + old_pointer_space_->Available() + + old_data_space_->Available() + code_space_->Available() + + map_space_->Available() + cell_space_->Available() + + property_cell_space_->Available(); } bool Heap::HasBeenSetUp() { - return old_pointer_space_ != NULL && - 
old_data_space_ != NULL && - code_space_ != NULL && - map_space_ != NULL && - cell_space_ != NULL && - property_cell_space_ != NULL && - lo_space_ != NULL; + return old_pointer_space_ != NULL && old_data_space_ != NULL && + code_space_ != NULL && map_space_ != NULL && cell_space_ != NULL && + property_cell_space_ != NULL && lo_space_ != NULL; } @@ -266,8 +259,9 @@ GarbageCollector Heap::SelectGarbageCollector(AllocationSpace space, // Have allocation in OLD and LO failed? if (old_gen_exhausted_) { - isolate_->counters()-> - gc_compactor_caused_by_oldspace_exhaustion()->Increment(); + isolate_->counters() + ->gc_compactor_caused_by_oldspace_exhaustion() + ->Increment(); *reason = "old generations exhausted"; return MARK_COMPACTOR; } @@ -282,8 +276,9 @@ GarbageCollector Heap::SelectGarbageCollector(AllocationSpace space, // space. Undercounting is safe---we may get an unrequested full GC when // a scavenge would have succeeded. if (isolate_->memory_allocator()->MaxAvailable() <= new_space_.Size()) { - isolate_->counters()-> - gc_compactor_caused_by_oldspace_exhaustion()->Increment(); + isolate_->counters() + ->gc_compactor_caused_by_oldspace_exhaustion() + ->Increment(); *reason = "scavenge might not succeed"; return MARK_COMPACTOR; } @@ -297,9 +292,9 @@ GarbageCollector Heap::SelectGarbageCollector(AllocationSpace space, // TODO(1238405): Combine the infrastructure for --heap-stats and // --log-gc to avoid the complicated preprocessor and flag testing. void Heap::ReportStatisticsBeforeGC() { - // Heap::ReportHeapStatistics will also log NewSpace statistics when - // compiled --log-gc is set. The following logic is used to avoid - // double logging. +// Heap::ReportHeapStatistics will also log NewSpace statistics when +// compiled --log-gc is set. The following logic is used to avoid +// double logging. 
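Before the statistics plumbing resumes below, it is worth spelling out the policy in SelectGarbageCollector, partially visible in the hunks above: prefer a cheap scavenge for new-space requests unless the old generation has already failed an allocation or might not have room to absorb promoted objects. A hedged, standalone reconstruction of that decision (HeapState and SelectCollector are invented names; the real function consults the live heap and also honors flags such as --gc-global that are not shown in this diff):

// Sketch only: the scavenge-vs-mark-compact choice, reduced to three tests.
#include <cstdio>

enum Collector { SCAVENGER = 0, MARK_COMPACTOR = 1 };

struct HeapState {
  bool new_space_request;  // was the failed allocation in new space?
  bool old_gen_exhausted;  // has old/large-object allocation already failed?
  long max_available;      // bytes the memory allocator can still provide
  long new_space_size;     // promotion may need up to this much old space
};

Collector SelectCollector(const HeapState& h, const char** reason) {
  if (!h.new_space_request) {
    *reason = "allocation in old generation";
    return MARK_COMPACTOR;
  }
  if (h.old_gen_exhausted) {
    *reason = "old generations exhausted";
    return MARK_COMPACTOR;
  }
  if (h.max_available <= h.new_space_size) {
    // Promotion during a scavenge could itself run out of old space.
    *reason = "scavenge might not succeed";
    return MARK_COMPACTOR;
  }
  *reason = "scavenge (default)";
  return SCAVENGER;
}

int main() {
  const char* reason = 0;
  HeapState h = {true, false, 64L << 20, 8L << 20};
  std::printf("collector=%d reason=%s\n", SelectCollector(h, &reason), reason);
  return 0;
}

Undercounting is deliberately safe here, as the comment above notes: the worst case of a wrong guess is an unrequested full GC rather than a failed scavenge.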
#ifdef DEBUG if (FLAG_heap_stats || FLAG_log_gc) new_space_.CollectStatistics(); if (FLAG_heap_stats) { @@ -320,63 +315,76 @@ void Heap::ReportStatisticsBeforeGC() { void Heap::PrintShortHeapStatistics() { if (!FLAG_trace_gc_verbose) return; - PrintPID("Memory allocator, used: %6" V8_PTR_PREFIX "d KB" - ", available: %6" V8_PTR_PREFIX "d KB\n", + PrintPID("Memory allocator, used: %6" V8_PTR_PREFIX + "d KB" + ", available: %6" V8_PTR_PREFIX "d KB\n", isolate_->memory_allocator()->Size() / KB, isolate_->memory_allocator()->Available() / KB); - PrintPID("New space, used: %6" V8_PTR_PREFIX "d KB" - ", available: %6" V8_PTR_PREFIX "d KB" - ", committed: %6" V8_PTR_PREFIX "d KB\n", - new_space_.Size() / KB, - new_space_.Available() / KB, + PrintPID("New space, used: %6" V8_PTR_PREFIX + "d KB" + ", available: %6" V8_PTR_PREFIX + "d KB" + ", committed: %6" V8_PTR_PREFIX "d KB\n", + new_space_.Size() / KB, new_space_.Available() / KB, new_space_.CommittedMemory() / KB); - PrintPID("Old pointers, used: %6" V8_PTR_PREFIX "d KB" - ", available: %6" V8_PTR_PREFIX "d KB" - ", committed: %6" V8_PTR_PREFIX "d KB\n", + PrintPID("Old pointers, used: %6" V8_PTR_PREFIX + "d KB" + ", available: %6" V8_PTR_PREFIX + "d KB" + ", committed: %6" V8_PTR_PREFIX "d KB\n", old_pointer_space_->SizeOfObjects() / KB, old_pointer_space_->Available() / KB, old_pointer_space_->CommittedMemory() / KB); - PrintPID("Old data space, used: %6" V8_PTR_PREFIX "d KB" - ", available: %6" V8_PTR_PREFIX "d KB" - ", committed: %6" V8_PTR_PREFIX "d KB\n", + PrintPID("Old data space, used: %6" V8_PTR_PREFIX + "d KB" + ", available: %6" V8_PTR_PREFIX + "d KB" + ", committed: %6" V8_PTR_PREFIX "d KB\n", old_data_space_->SizeOfObjects() / KB, old_data_space_->Available() / KB, old_data_space_->CommittedMemory() / KB); - PrintPID("Code space, used: %6" V8_PTR_PREFIX "d KB" - ", available: %6" V8_PTR_PREFIX "d KB" - ", committed: %6" V8_PTR_PREFIX "d KB\n", - code_space_->SizeOfObjects() / KB, - code_space_->Available() / KB, + PrintPID("Code space, used: %6" V8_PTR_PREFIX + "d KB" + ", available: %6" V8_PTR_PREFIX + "d KB" + ", committed: %6" V8_PTR_PREFIX "d KB\n", + code_space_->SizeOfObjects() / KB, code_space_->Available() / KB, code_space_->CommittedMemory() / KB); - PrintPID("Map space, used: %6" V8_PTR_PREFIX "d KB" - ", available: %6" V8_PTR_PREFIX "d KB" - ", committed: %6" V8_PTR_PREFIX "d KB\n", - map_space_->SizeOfObjects() / KB, - map_space_->Available() / KB, + PrintPID("Map space, used: %6" V8_PTR_PREFIX + "d KB" + ", available: %6" V8_PTR_PREFIX + "d KB" + ", committed: %6" V8_PTR_PREFIX "d KB\n", + map_space_->SizeOfObjects() / KB, map_space_->Available() / KB, map_space_->CommittedMemory() / KB); - PrintPID("Cell space, used: %6" V8_PTR_PREFIX "d KB" - ", available: %6" V8_PTR_PREFIX "d KB" - ", committed: %6" V8_PTR_PREFIX "d KB\n", - cell_space_->SizeOfObjects() / KB, - cell_space_->Available() / KB, + PrintPID("Cell space, used: %6" V8_PTR_PREFIX + "d KB" + ", available: %6" V8_PTR_PREFIX + "d KB" + ", committed: %6" V8_PTR_PREFIX "d KB\n", + cell_space_->SizeOfObjects() / KB, cell_space_->Available() / KB, cell_space_->CommittedMemory() / KB); - PrintPID("PropertyCell space, used: %6" V8_PTR_PREFIX "d KB" - ", available: %6" V8_PTR_PREFIX "d KB" - ", committed: %6" V8_PTR_PREFIX "d KB\n", + PrintPID("PropertyCell space, used: %6" V8_PTR_PREFIX + "d KB" + ", available: %6" V8_PTR_PREFIX + "d KB" + ", committed: %6" V8_PTR_PREFIX "d KB\n", property_cell_space_->SizeOfObjects() / KB, property_cell_space_->Available() / KB, 
               property_cell_space_->CommittedMemory() / KB);
-  PrintPID("Large object space, used: %6" V8_PTR_PREFIX "d KB"
-           ", available: %6" V8_PTR_PREFIX "d KB"
-           ", committed: %6" V8_PTR_PREFIX "d KB\n",
-           lo_space_->SizeOfObjects() / KB,
-           lo_space_->Available() / KB,
+  PrintPID("Large object space, used: %6" V8_PTR_PREFIX
+           "d KB"
+           ", available: %6" V8_PTR_PREFIX
+           "d KB"
+           ", committed: %6" V8_PTR_PREFIX "d KB\n",
+           lo_space_->SizeOfObjects() / KB, lo_space_->Available() / KB,
            lo_space_->CommittedMemory() / KB);
-  PrintPID("All spaces, used: %6" V8_PTR_PREFIX "d KB"
-           ", available: %6" V8_PTR_PREFIX "d KB"
-           ", committed: %6" V8_PTR_PREFIX "d KB\n",
-           this->SizeOfObjects() / KB,
-           this->Available() / KB,
+  PrintPID("All spaces, used: %6" V8_PTR_PREFIX
+           "d KB"
+           ", available: %6" V8_PTR_PREFIX
+           "d KB"
+           ", committed: %6" V8_PTR_PREFIX "d KB\n",
+           this->SizeOfObjects() / KB, this->Available() / KB,
            this->CommittedMemory() / KB);
   PrintPID("External memory reported: %6" V8_PTR_PREFIX "d KB\n",
            static_cast<intptr_t>(amount_of_external_allocated_memory_ / KB));
@@ -387,8 +395,8 @@ void Heap::PrintShortHeapStatistics() {
 // TODO(1238405): Combine the infrastructure for --heap-stats and
 // --log-gc to avoid the complicated preprocessor and flag testing.
 void Heap::ReportStatisticsAfterGC() {
-  // Similar to the before GC, we use some complicated logic to ensure that
-  // NewSpace statistics are logged exactly once when --log-gc is turned on.
+// Similar to the before GC, we use some complicated logic to ensure that
+// NewSpace statistics are logged exactly once when --log-gc is turned on.
 #if defined(DEBUG)
   if (FLAG_heap_stats) {
     new_space_.CollectStatistics();
@@ -403,7 +411,8 @@ void Heap::ReportStatisticsAfterGC() {


 void Heap::GarbageCollectionPrologue() {
-  { AllowHeapAllocation for_the_first_part_of_prologue;
+  {
+    AllowHeapAllocation for_the_first_part_of_prologue;
     ClearJSFunctionResultCaches();
     gc_count_++;
     unflattened_strings_length_ = 0;
@@ -419,10 +428,17 @@ void Heap::GarbageCollectionPrologue() {
 #endif
   }

+  // Reset GC statistics.
+  promoted_objects_size_ = 0;
+  semi_space_copied_object_size_ = 0;
+  nodes_died_in_new_space_ = 0;
+  nodes_copied_in_new_space_ = 0;
+  nodes_promoted_ = 0;
+
   UpdateMaximumCommitted();

 #ifdef DEBUG
-  ASSERT(!AllowHeapAllocation::IsAllowed() && gc_state_ == NOT_IN_GC);
+  DCHECK(!AllowHeapAllocation::IsAllowed() && gc_state_ == NOT_IN_GC);

   if (FLAG_gc_verbose) Print();
@@ -434,6 +450,13 @@ void Heap::GarbageCollectionPrologue() {
   if (isolate()->concurrent_osr_enabled()) {
     isolate()->optimizing_compiler_thread()->AgeBufferedOsrJobs();
   }
+
+  if (new_space_.IsAtMaximumCapacity()) {
+    maximum_size_scavenges_++;
+  } else {
+    maximum_size_scavenges_ = 0;
+  }
+
   CheckNewSpaceExpansionCriteria();
 }

@@ -463,8 +486,7 @@ void Heap::ClearAllICsByKind(Code::Kind kind) {

 void Heap::RepairFreeListsAfterBoot() {
   PagedSpaces spaces(this);
-  for (PagedSpace* space = spaces.next();
-       space != NULL;
+  for (PagedSpace* space = spaces.next(); space != NULL;
        space = spaces.next()) {
     space->RepairFreeListsAfterBoot();
   }
@@ -481,29 +503,44 @@ void Heap::ProcessPretenuringFeedback() {

     // If the scratchpad overflowed, we have to iterate over the allocation
     // sites list.
+    // TODO(hpayer): We iterate over the whole list of allocation sites when
+    // we grew to the maximum semi-space size to deopt maybe tenured
+    // allocation sites. We could hold the maybe tenured allocation sites
+    // in a separate data structure if this is a performance problem.
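
The hunk that follows rewrites the feedback walk so one loop serves two data sources: a fixed-size scratchpad when it has not overflowed (and no forced full pass is pending), otherwise the complete weakly linked list of allocation sites. A standalone toy of that dual-cursor shape; the names here (Site, ProcessFeedback) are illustrative, not V8's:

    #include <cstdio>
    #include <vector>

    struct Site {
      int mementos_found;
      Site* weak_next;
    };

    void ProcessFeedback(const std::vector<Site*>& scratchpad,
                         int scratchpad_len, int scratchpad_capacity,
                         Site* list_head, bool force_full_walk) {
      // Prefer the bounded scratchpad; fall back to the full list on
      // overflow or when a complete pass is required.
      bool use_scratchpad =
          scratchpad_len < scratchpad_capacity && !force_full_walk;
      int i = 0;
      Site* list_element = list_head;
      while (use_scratchpad ? i < scratchpad_len : list_element != NULL) {
        Site* site = use_scratchpad ? scratchpad[i] : list_element;
        std::printf("visited site with %d mementos\n", site->mementos_found);
        if (use_scratchpad) {
          i++;
        } else {
          list_element = list_element->weak_next;
        }
      }
    }

    int main() {
      Site b = {5, NULL};
      Site a = {2, &b};
      std::vector<Site*> pad;
      pad.push_back(&a);
      pad.push_back(&b);
      ProcessFeedback(pad, 2, 16, &a, false);  // bounded scratchpad walk
      ProcessFeedback(pad, 2, 16, &a, true);   // forced full-list walk
      return 0;
    }
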
+ bool deopt_maybe_tenured = DeoptMaybeTenuredAllocationSites(); bool use_scratchpad = - allocation_sites_scratchpad_length_ < kAllocationSiteScratchpadSize; + allocation_sites_scratchpad_length_ < kAllocationSiteScratchpadSize && + !deopt_maybe_tenured; int i = 0; Object* list_element = allocation_sites_list(); bool trigger_deoptimization = false; - while (use_scratchpad ? - i < allocation_sites_scratchpad_length_ : - list_element->IsAllocationSite()) { - AllocationSite* site = use_scratchpad ? - AllocationSite::cast(allocation_sites_scratchpad()->get(i)) : - AllocationSite::cast(list_element); + bool maximum_size_scavenge = MaximumSizeScavenge(); + while (use_scratchpad ? i < allocation_sites_scratchpad_length_ + : list_element->IsAllocationSite()) { + AllocationSite* site = + use_scratchpad + ? AllocationSite::cast(allocation_sites_scratchpad()->get(i)) + : AllocationSite::cast(list_element); allocation_mementos_found += site->memento_found_count(); if (site->memento_found_count() > 0) { active_allocation_sites++; + if (site->DigestPretenuringFeedback(maximum_size_scavenge)) { + trigger_deoptimization = true; + } + if (site->GetPretenureMode() == TENURED) { + tenure_decisions++; + } else { + dont_tenure_decisions++; + } + allocation_sites++; } - if (site->DigestPretenuringFeedback()) trigger_deoptimization = true; - if (site->GetPretenureMode() == TENURED) { - tenure_decisions++; - } else { - dont_tenure_decisions++; + + if (deopt_maybe_tenured && site->IsMaybeTenure()) { + site->set_deopt_dependent_code(true); + trigger_deoptimization = true; } - allocation_sites++; + if (use_scratchpad) { i++; } else { @@ -512,24 +549,21 @@ void Heap::ProcessPretenuringFeedback() { } if (trigger_deoptimization) { - isolate_->stack_guard()->DeoptMarkedAllocationSites(); + isolate_->stack_guard()->RequestDeoptMarkedAllocationSites(); } FlushAllocationSitesScratchpad(); if (FLAG_trace_pretenuring_statistics && - (allocation_mementos_found > 0 || - tenure_decisions > 0 || + (allocation_mementos_found > 0 || tenure_decisions > 0 || dont_tenure_decisions > 0)) { - PrintF("GC: (mode, #visited allocation sites, #active allocation sites, " - "#mementos, #tenure decisions, #donttenure decisions) " - "(%s, %d, %d, %d, %d, %d)\n", - use_scratchpad ? "use scratchpad" : "use list", - allocation_sites, - active_allocation_sites, - allocation_mementos_found, - tenure_decisions, - dont_tenure_decisions); + PrintF( + "GC: (mode, #visited allocation sites, #active allocation sites, " + "#mementos, #tenure decisions, #donttenure decisions) " + "(%s, %d, %d, %d, %d, %d)\n", + use_scratchpad ? 
"use scratchpad" : "use list", allocation_sites, + active_allocation_sites, allocation_mementos_found, tenure_decisions, + dont_tenure_decisions); } } } @@ -544,8 +578,7 @@ void Heap::DeoptMarkedAllocationSites() { AllocationSite* site = AllocationSite::cast(list_element); if (site->deopt_dependent_code()) { site->dependent_code()->MarkCodeForDeoptimization( - isolate_, - DependentCode::kAllocationSiteTenuringChangedGroup); + isolate_, DependentCode::kAllocationSiteTenuringChangedGroup); site->set_deopt_dependent_code(false); } list_element = site->weak_next(); @@ -602,41 +635,35 @@ void Heap::GarbageCollectionEpilogue() { if (full_codegen_bytes_generated_ + crankshaft_codegen_bytes_generated_ > 0) { isolate_->counters()->codegen_fraction_crankshaft()->AddSample( static_cast<int>((crankshaft_codegen_bytes_generated_ * 100.0) / - (crankshaft_codegen_bytes_generated_ - + full_codegen_bytes_generated_))); + (crankshaft_codegen_bytes_generated_ + + full_codegen_bytes_generated_))); } if (CommittedMemory() > 0) { isolate_->counters()->external_fragmentation_total()->AddSample( static_cast<int>(100 - (SizeOfObjects() * 100.0) / CommittedMemory())); - isolate_->counters()->heap_fraction_new_space()-> - AddSample(static_cast<int>( - (new_space()->CommittedMemory() * 100.0) / CommittedMemory())); + isolate_->counters()->heap_fraction_new_space()->AddSample(static_cast<int>( + (new_space()->CommittedMemory() * 100.0) / CommittedMemory())); isolate_->counters()->heap_fraction_old_pointer_space()->AddSample( - static_cast<int>( - (old_pointer_space()->CommittedMemory() * 100.0) / - CommittedMemory())); + static_cast<int>((old_pointer_space()->CommittedMemory() * 100.0) / + CommittedMemory())); isolate_->counters()->heap_fraction_old_data_space()->AddSample( - static_cast<int>( - (old_data_space()->CommittedMemory() * 100.0) / - CommittedMemory())); - isolate_->counters()->heap_fraction_code_space()-> - AddSample(static_cast<int>( - (code_space()->CommittedMemory() * 100.0) / CommittedMemory())); - isolate_->counters()->heap_fraction_map_space()->AddSample( - static_cast<int>( - (map_space()->CommittedMemory() * 100.0) / CommittedMemory())); + static_cast<int>((old_data_space()->CommittedMemory() * 100.0) / + CommittedMemory())); + isolate_->counters()->heap_fraction_code_space()->AddSample( + static_cast<int>((code_space()->CommittedMemory() * 100.0) / + CommittedMemory())); + isolate_->counters()->heap_fraction_map_space()->AddSample(static_cast<int>( + (map_space()->CommittedMemory() * 100.0) / CommittedMemory())); isolate_->counters()->heap_fraction_cell_space()->AddSample( - static_cast<int>( - (cell_space()->CommittedMemory() * 100.0) / CommittedMemory())); - isolate_->counters()->heap_fraction_property_cell_space()-> - AddSample(static_cast<int>( - (property_cell_space()->CommittedMemory() * 100.0) / - CommittedMemory())); - isolate_->counters()->heap_fraction_lo_space()-> - AddSample(static_cast<int>( - (lo_space()->CommittedMemory() * 100.0) / CommittedMemory())); + static_cast<int>((cell_space()->CommittedMemory() * 100.0) / + CommittedMemory())); + isolate_->counters()->heap_fraction_property_cell_space()->AddSample( + static_cast<int>((property_cell_space()->CommittedMemory() * 100.0) / + CommittedMemory())); + isolate_->counters()->heap_fraction_lo_space()->AddSample(static_cast<int>( + (lo_space()->CommittedMemory() * 100.0) / CommittedMemory())); isolate_->counters()->heap_sample_total_committed()->AddSample( static_cast<int>(CommittedMemory() / KB)); @@ -646,10 +673,10 @@ void 
Heap::GarbageCollectionEpilogue() { static_cast<int>(map_space()->CommittedMemory() / KB)); isolate_->counters()->heap_sample_cell_space_committed()->AddSample( static_cast<int>(cell_space()->CommittedMemory() / KB)); - isolate_->counters()-> - heap_sample_property_cell_space_committed()-> - AddSample(static_cast<int>( - property_cell_space()->CommittedMemory() / KB)); + isolate_->counters() + ->heap_sample_property_cell_space_committed() + ->AddSample( + static_cast<int>(property_cell_space()->CommittedMemory() / KB)); isolate_->counters()->heap_sample_code_space_committed()->AddSample( static_cast<int>(code_space()->CommittedMemory() / KB)); @@ -657,21 +684,22 @@ void Heap::GarbageCollectionEpilogue() { static_cast<int>(MaximumCommittedMemory() / KB)); } -#define UPDATE_COUNTERS_FOR_SPACE(space) \ - isolate_->counters()->space##_bytes_available()->Set( \ - static_cast<int>(space()->Available())); \ - isolate_->counters()->space##_bytes_committed()->Set( \ - static_cast<int>(space()->CommittedMemory())); \ - isolate_->counters()->space##_bytes_used()->Set( \ +#define UPDATE_COUNTERS_FOR_SPACE(space) \ + isolate_->counters()->space##_bytes_available()->Set( \ + static_cast<int>(space()->Available())); \ + isolate_->counters()->space##_bytes_committed()->Set( \ + static_cast<int>(space()->CommittedMemory())); \ + isolate_->counters()->space##_bytes_used()->Set( \ static_cast<int>(space()->SizeOfObjects())); -#define UPDATE_FRAGMENTATION_FOR_SPACE(space) \ - if (space()->CommittedMemory() > 0) { \ - isolate_->counters()->external_fragmentation_##space()->AddSample( \ - static_cast<int>(100 - \ - (space()->SizeOfObjects() * 100.0) / space()->CommittedMemory())); \ - } -#define UPDATE_COUNTERS_AND_FRAGMENTATION_FOR_SPACE(space) \ - UPDATE_COUNTERS_FOR_SPACE(space) \ +#define UPDATE_FRAGMENTATION_FOR_SPACE(space) \ + if (space()->CommittedMemory() > 0) { \ + isolate_->counters()->external_fragmentation_##space()->AddSample( \ + static_cast<int>(100 - \ + (space()->SizeOfObjects() * 100.0) / \ + space()->CommittedMemory())); \ + } +#define UPDATE_COUNTERS_AND_FRAGMENTATION_FOR_SPACE(space) \ + UPDATE_COUNTERS_FOR_SPACE(space) \ UPDATE_FRAGMENTATION_FOR_SPACE(space) UPDATE_COUNTERS_FOR_SPACE(new_space) @@ -689,12 +717,14 @@ void Heap::GarbageCollectionEpilogue() { #ifdef DEBUG ReportStatisticsAfterGC(); #endif // DEBUG - isolate_->debug()->AfterGarbageCollection(); + + // Remember the last top pointer so that we can later find out + // whether we allocated in new space since the last GC. + new_space_top_after_last_gc_ = new_space()->top(); } -void Heap::CollectAllGarbage(int flags, - const char* gc_reason, +void Heap::CollectAllGarbage(int flags, const char* gc_reason, const v8::GCCallbackFlags gc_callback_flags) { // Since we are ignoring the return value, the exact choice of space does // not matter, so long as we do not specify NEW_SPACE, which would not @@ -755,8 +785,7 @@ void Heap::EnsureFillerObjectAtTop() { } -bool Heap::CollectGarbage(GarbageCollector collector, - const char* gc_reason, +bool Heap::CollectGarbage(GarbageCollector collector, const char* gc_reason, const char* collector_reason, const v8::GCCallbackFlags gc_callback_flags) { // The VM is in the GC state until exiting this function. 
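
The UPDATE_COUNTERS_FOR_SPACE family above leans on preprocessor token pasting: space##_bytes_available splices the macro argument into a counter name, so one macro body fans out into per-space updates. A self-contained miniature of the idiom, with toy counters rather than V8's:

    #include <cstdio>

    static long new_space_bytes_used = 0;
    static long code_space_bytes_used = 0;

    // space##_bytes_used builds an identifier from the argument;
    // #space stringizes it for the trace line.
    #define UPDATE_COUNTERS_FOR_SPACE(space, used)                   \
      space##_bytes_used = (used);                                   \
      std::printf(#space " used: %ld bytes\n", space##_bytes_used);

    int main() {
      UPDATE_COUNTERS_FOR_SPACE(new_space, 4096)
      UPDATE_COUNTERS_FOR_SPACE(code_space, 8192)
      return 0;
    }
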
@@ -788,7 +817,7 @@ bool Heap::CollectGarbage(GarbageCollector collector, const intptr_t kStepSizeWhenDelayedByScavenge = 1 * MB; incremental_marking()->Step(kStepSizeWhenDelayedByScavenge, IncrementalMarking::NO_GC_VIA_STACK_GUARD); - if (!incremental_marking()->IsComplete()) { + if (!incremental_marking()->IsComplete() && !FLAG_gc_global) { if (FLAG_trace_incremental_marking) { PrintF("[IncrementalMarking] Delaying MarkSweep.\n"); } @@ -799,34 +828,29 @@ bool Heap::CollectGarbage(GarbageCollector collector, bool next_gc_likely_to_collect_more = false; - { GCTracer tracer(this, gc_reason, collector_reason); - ASSERT(AllowHeapAllocation::IsAllowed()); + { + tracer()->Start(collector, gc_reason, collector_reason); + DCHECK(AllowHeapAllocation::IsAllowed()); DisallowHeapAllocation no_allocation_during_gc; GarbageCollectionPrologue(); - // The GC count was incremented in the prologue. Tell the tracer about - // it. - tracer.set_gc_count(gc_count_); - - // Tell the tracer which collector we've selected. - tracer.set_collector(collector); { HistogramTimerScope histogram_timer_scope( (collector == SCAVENGER) ? isolate_->counters()->gc_scavenger() : isolate_->counters()->gc_compactor()); next_gc_likely_to_collect_more = - PerformGarbageCollection(collector, &tracer, gc_callback_flags); + PerformGarbageCollection(collector, gc_callback_flags); } GarbageCollectionEpilogue(); + tracer()->Stop(); } // Start incremental marking for the next cycle. The heap snapshot // generator needs incremental marking to stay off after it aborted. if (!mark_compact_collector()->abort_incremental_marking() && incremental_marking()->IsStopped() && - incremental_marking()->WorthActivating() && - NextGCIsLikelyToBeFull()) { + incremental_marking()->WorthActivating() && NextGCIsLikelyToBeFull()) { incremental_marking()->Start(); } @@ -845,17 +869,13 @@ int Heap::NotifyContextDisposed() { } -void Heap::MoveElements(FixedArray* array, - int dst_index, - int src_index, +void Heap::MoveElements(FixedArray* array, int dst_index, int src_index, int len) { if (len == 0) return; - ASSERT(array->map() != fixed_cow_array_map()); + DCHECK(array->map() != fixed_cow_array_map()); Object** dst_objects = array->data_start() + dst_index; - OS::MemMove(dst_objects, - array->data_start() + src_index, - len * kPointerSize); + MemMove(dst_objects, array->data_start() + src_index, len * kPointerSize); if (!InNewSpace(array)) { for (int i = 0; i < len; i++) { // TODO(hpayer): check store buffer for entries @@ -893,9 +913,7 @@ static void VerifyStringTable(Heap* heap) { static bool AbortIncrementalMarkingAndCollectGarbage( - Heap* heap, - AllocationSpace space, - const char* gc_reason = NULL) { + Heap* heap, AllocationSpace space, const char* gc_reason = NULL) { heap->mark_compact_collector()->SetFlags(Heap::kAbortIncrementalMarkingMask); bool result = heap->CollectGarbage(space, gc_reason); heap->mark_compact_collector()->SetFlags(Heap::kNoGCFlags); @@ -903,13 +921,13 @@ static bool AbortIncrementalMarkingAndCollectGarbage( } -void Heap::ReserveSpace(int *sizes, Address *locations_out) { +void Heap::ReserveSpace(int* sizes, Address* locations_out) { bool gc_performed = true; int counter = 0; static const int kThreshold = 20; while (gc_performed && counter++ < kThreshold) { gc_performed = false; - ASSERT(NEW_SPACE == FIRST_PAGED_SPACE - 1); + DCHECK(NEW_SPACE == FIRST_PAGED_SPACE - 1); for (int space = NEW_SPACE; space <= LAST_PAGED_SPACE; space++) { if (sizes[space] != 0) { AllocationResult allocation; @@ -925,8 +943,7 @@ void 
Heap::ReserveSpace(int *sizes, Address *locations_out) { "failed to reserve space in the new space"); } else { AbortIncrementalMarkingAndCollectGarbage( - this, - static_cast<AllocationSpace>(space), + this, static_cast<AllocationSpace>(space), "failed to reserve space in paged space"); } gc_performed = true; @@ -960,7 +977,7 @@ void Heap::EnsureFromSpaceIsCommitted() { void Heap::ClearJSFunctionResultCaches() { if (isolate_->bootstrapper()->IsActive()) return; - Object* context = native_contexts_list_; + Object* context = native_contexts_list(); while (!context->IsUndefined()) { // Get the caches for this context. GC can happen when the context // is not fully initialized, so the caches can be undefined. @@ -986,7 +1003,7 @@ void Heap::ClearNormalizedMapCaches() { return; } - Object* context = native_contexts_list_; + Object* context = native_contexts_list(); while (!context->IsUndefined()) { // GC can happen when the context is not fully initialized, // so the cache can be undefined. @@ -1000,42 +1017,27 @@ void Heap::ClearNormalizedMapCaches() { } -void Heap::UpdateSurvivalRateTrend(int start_new_space_size) { +void Heap::UpdateSurvivalStatistics(int start_new_space_size) { if (start_new_space_size == 0) return; - double survival_rate = - (static_cast<double>(young_survivors_after_last_gc_) * 100) / - start_new_space_size; + promotion_rate_ = (static_cast<double>(promoted_objects_size_) / + static_cast<double>(start_new_space_size) * 100); + + semi_space_copied_rate_ = + (static_cast<double>(semi_space_copied_object_size_) / + static_cast<double>(start_new_space_size) * 100); + + double survival_rate = promotion_rate_ + semi_space_copied_rate_; if (survival_rate > kYoungSurvivalRateHighThreshold) { high_survival_rate_period_length_++; } else { high_survival_rate_period_length_ = 0; } - - if (survival_rate < kYoungSurvivalRateLowThreshold) { - low_survival_rate_period_length_++; - } else { - low_survival_rate_period_length_ = 0; - } - - double survival_rate_diff = survival_rate_ - survival_rate; - - if (survival_rate_diff > kYoungSurvivalRateAllowedDeviation) { - set_survival_rate_trend(DECREASING); - } else if (survival_rate_diff < -kYoungSurvivalRateAllowedDeviation) { - set_survival_rate_trend(INCREASING); - } else { - set_survival_rate_trend(STABLE); - } - - survival_rate_ = survival_rate; } bool Heap::PerformGarbageCollection( - GarbageCollector collector, - GCTracer* tracer, - const v8::GCCallbackFlags gc_callback_flags) { + GarbageCollector collector, const v8::GCCallbackFlags gc_callback_flags) { int freed_global_handles = 0; if (collector != SCAVENGER) { @@ -1051,10 +1053,11 @@ bool Heap::PerformGarbageCollection( GCType gc_type = collector == MARK_COMPACTOR ? kGCTypeMarkSweepCompact : kGCTypeScavenge; - { GCCallbacksScope scope(this); + { + GCCallbacksScope scope(this); if (scope.CheckReenter()) { AllowHeapAllocation allow_allocation; - GCTracer::Scope scope(tracer, GCTracer::Scope::EXTERNAL); + GCTracer::Scope scope(tracer(), GCTracer::Scope::EXTERNAL); VMState<EXTERNAL> state(isolate_); HandleScope handle_scope(isolate_); CallGCPrologueCallbacks(gc_type, kNoGCCallbackFlags); @@ -1074,82 +1077,32 @@ bool Heap::PerformGarbageCollection( if (collector == MARK_COMPACTOR) { // Perform mark-sweep with optional compaction. - MarkCompact(tracer); + MarkCompact(); sweep_generation_++; - - UpdateSurvivalRateTrend(start_new_space_size); - // Temporarily set the limit for case when PostGarbageCollectionProcessing // allocates and triggers GC. 
The real limit is set at after // PostGarbageCollectionProcessing. old_generation_allocation_limit_ = OldGenerationAllocationLimit(PromotedSpaceSizeOfObjects(), 0); - old_gen_exhausted_ = false; } else { - tracer_ = tracer; Scavenge(); - tracer_ = NULL; - - UpdateSurvivalRateTrend(start_new_space_size); - } - - if (!new_space_high_promotion_mode_active_ && - new_space_.Capacity() == new_space_.MaximumCapacity() && - IsStableOrIncreasingSurvivalTrend() && - IsHighSurvivalRate()) { - // Stable high survival rates even though young generation is at - // maximum capacity indicates that most objects will be promoted. - // To decrease scavenger pauses and final mark-sweep pauses, we - // have to limit maximal capacity of the young generation. - SetNewSpaceHighPromotionModeActive(true); - if (FLAG_trace_gc) { - PrintPID("Limited new space size due to high promotion rate: %d MB\n", - new_space_.InitialCapacity() / MB); - } - // The high promotion mode is our indicator to turn on pretenuring. We have - // to deoptimize all optimized code in global pretenuring mode and all - // code which should be tenured in local pretenuring mode. - if (FLAG_pretenuring) { - if (!FLAG_allocation_site_pretenuring) { - isolate_->stack_guard()->FullDeopt(); - } - } - } else if (new_space_high_promotion_mode_active_ && - IsStableOrDecreasingSurvivalTrend() && - IsLowSurvivalRate()) { - // Decreasing low survival rates might indicate that the above high - // promotion mode is over and we should allow the young generation - // to grow again. - SetNewSpaceHighPromotionModeActive(false); - if (FLAG_trace_gc) { - PrintPID("Unlimited new space size due to low promotion rate: %d MB\n", - new_space_.MaximumCapacity() / MB); - } - // Trigger deoptimization here to turn off global pretenuring as soon as - // possible. - if (FLAG_pretenuring && !FLAG_allocation_site_pretenuring) { - isolate_->stack_guard()->FullDeopt(); - } } - if (new_space_high_promotion_mode_active_ && - new_space_.Capacity() > new_space_.InitialCapacity()) { - new_space_.Shrink(); - } + UpdateSurvivalStatistics(start_new_space_size); isolate_->counters()->objs_since_last_young()->Set(0); // Callbacks that fire after this point might trigger nested GCs and // restart incremental marking, the assertion can't be moved down. - ASSERT(collector == SCAVENGER || incremental_marking()->IsStopped()); + DCHECK(collector == SCAVENGER || incremental_marking()->IsStopped()); gc_post_processing_depth_++; - { AllowHeapAllocation allow_allocation; - GCTracer::Scope scope(tracer, GCTracer::Scope::EXTERNAL); + { + AllowHeapAllocation allow_allocation; + GCTracer::Scope scope(tracer(), GCTracer::Scope::EXTERNAL); freed_global_handles = - isolate_->global_handles()->PostGarbageCollectionProcessing( - collector, tracer); + isolate_->global_handles()->PostGarbageCollectionProcessing(collector); } gc_post_processing_depth_--; @@ -1162,15 +1115,15 @@ bool Heap::PerformGarbageCollection( // Register the amount of external allocated memory. 
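
UpdateSurvivalStatistics(), defined a few hunks up and now called for both collector kinds, replaces the old trend machinery with two percentages of the new-space size at GC start; their sum is the overall young-generation survival rate. A worked example with made-up sizes:

    #include <cstdio>

    int main() {
      // Made-up sizes, in bytes.
      double start_new_space_size = 8.0 * 1024 * 1024;    // new space at GC start
      double promoted_objects_size = 1.0 * 1024 * 1024;   // moved to old space
      double semi_space_copied_size = 2.0 * 1024 * 1024;  // copied within new space

      double promotion_rate =
          promoted_objects_size / start_new_space_size * 100;
      double semi_space_copied_rate =
          semi_space_copied_size / start_new_space_size * 100;
      double survival_rate = promotion_rate + semi_space_copied_rate;

      // Prints: promotion 12.5%, copied 25.0%, survival 37.5%
      std::printf("promotion %.1f%%, copied %.1f%%, survival %.1f%%\n",
                  promotion_rate, semi_space_copied_rate, survival_rate);
      return 0;
    }
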
amount_of_external_allocated_memory_at_last_global_gc_ = amount_of_external_allocated_memory_; - old_generation_allocation_limit_ = - OldGenerationAllocationLimit(PromotedSpaceSizeOfObjects(), - freed_global_handles); + old_generation_allocation_limit_ = OldGenerationAllocationLimit( + PromotedSpaceSizeOfObjects(), freed_global_handles); } - { GCCallbacksScope scope(this); + { + GCCallbacksScope scope(this); if (scope.CheckReenter()) { AllowHeapAllocation allow_allocation; - GCTracer::Scope scope(tracer, GCTracer::Scope::EXTERNAL); + GCTracer::Scope scope(tracer(), GCTracer::Scope::EXTERNAL); VMState<EXTERNAL> state(isolate_); HandleScope handle_scope(isolate_); CallGCEpilogueCallbacks(gc_type, gc_callback_flags); @@ -1215,24 +1168,22 @@ void Heap::CallGCEpilogueCallbacks(GCType gc_type, callback(gc_type, gc_callback_flags); } else { v8::Isolate* isolate = reinterpret_cast<v8::Isolate*>(this->isolate()); - gc_epilogue_callbacks_[i].callback( - isolate, gc_type, gc_callback_flags); + gc_epilogue_callbacks_[i].callback(isolate, gc_type, gc_callback_flags); } } } } -void Heap::MarkCompact(GCTracer* tracer) { +void Heap::MarkCompact() { gc_state_ = MARK_COMPACT; LOG(isolate_, ResourceEvent("markcompact", "begin")); uint64_t size_of_objects_before_gc = SizeOfObjects(); - mark_compact_collector_.Prepare(tracer); + mark_compact_collector_.Prepare(); ms_count_++; - tracer->set_full_gc_count(ms_count_); MarkCompactPrologue(); @@ -1275,7 +1226,7 @@ void Heap::MarkCompactPrologue() { // Helper class for copying HeapObjects -class ScavengeVisitor: public ObjectVisitor { +class ScavengeVisitor : public ObjectVisitor { public: explicit ScavengeVisitor(Heap* heap) : heap_(heap) {} @@ -1301,10 +1252,10 @@ class ScavengeVisitor: public ObjectVisitor { #ifdef VERIFY_HEAP // Visitor class to verify pointers in code or data space do not point into // new space. -class VerifyNonPointerSpacePointersVisitor: public ObjectVisitor { +class VerifyNonPointerSpacePointersVisitor : public ObjectVisitor { public: explicit VerifyNonPointerSpacePointersVisitor(Heap* heap) : heap_(heap) {} - void VisitPointers(Object** start, Object**end) { + void VisitPointers(Object** start, Object** end) { for (Object** current = start; current < end; current++) { if ((*current)->IsHeapObject()) { CHECK(!heap_->InNewSpace(HeapObject::cast(*current))); @@ -1322,16 +1273,16 @@ static void VerifyNonPointerSpacePointers(Heap* heap) { // do not expect them. VerifyNonPointerSpacePointersVisitor v(heap); HeapObjectIterator code_it(heap->code_space()); - for (HeapObject* object = code_it.Next(); - object != NULL; object = code_it.Next()) + for (HeapObject* object = code_it.Next(); object != NULL; + object = code_it.Next()) object->Iterate(&v); // The old data space was normally swept conservatively so that the iterator // doesn't work, so we normally skip the next bit. 
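
The GCTracer::Scope objects threaded through the hunks above time a named phase against the isolate-owned tracer obtained from tracer(). A sketch of that RAII phase-timing pattern under assumed names (Tracer and PhaseScope are stand-ins, not the real classes):

    #include <chrono>
    #include <cstdio>

    class Tracer {
     public:
      void AddPhaseSample(const char* phase, double us) {
        std::printf("%s: %.0f us\n", phase, us);
      }
    };

    // Starts timing on construction, reports on every exit path on
    // destruction, so the sample cannot be forgotten.
    class PhaseScope {
     public:
      PhaseScope(Tracer* tracer, const char* phase)
          : tracer_(tracer),
            phase_(phase),
            start_(std::chrono::steady_clock::now()) {}
      ~PhaseScope() {
        std::chrono::duration<double, std::micro> d =
            std::chrono::steady_clock::now() - start_;
        tracer_->AddPhaseSample(phase_, d.count());
      }

     private:
      Tracer* tracer_;
      const char* phase_;
      std::chrono::steady_clock::time_point start_;
    };

    int main() {
      Tracer tracer;  // long-lived, like the heap-owned tracer()
      {
        PhaseScope scope(&tracer, "EXTERNAL");
        // ... callbacks would run here ...
      }  // sample recorded here
      return 0;
    }
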
-  if (!heap->old_data_space()->was_swept_conservatively()) {
+  if (heap->old_data_space()->swept_precisely()) {
     HeapObjectIterator data_it(heap->old_data_space());
-    for (HeapObject* object = data_it.Next();
-         object != NULL; object = data_it.Next())
+    for (HeapObject* object = data_it.Next(); object != NULL;
+         object = data_it.Next())
       object->Iterate(&v);
   }
 }
@@ -1340,8 +1291,7 @@ static void VerifyNonPointerSpacePointers(Heap* heap) {

 void Heap::CheckNewSpaceExpansionCriteria() {
   if (new_space_.Capacity() < new_space_.MaximumCapacity() &&
-      survived_since_last_expansion_ > new_space_.Capacity() &&
-      !new_space_high_promotion_mode_active_) {
+      survived_since_last_expansion_ > new_space_.Capacity()) {
     // Grow the size of new space if there is room to grow, enough data
     // has survived scavenge since the last expansion and we are not in
     // high promotion mode.
@@ -1353,14 +1303,12 @@ void Heap::CheckNewSpaceExpansionCriteria() {

 static bool IsUnscavengedHeapObject(Heap* heap, Object** p) {
   return heap->InNewSpace(*p) &&
-      !HeapObject::cast(*p)->map_word().IsForwardingAddress();
+         !HeapObject::cast(*p)->map_word().IsForwardingAddress();
 }


-void Heap::ScavengeStoreBufferCallback(
-    Heap* heap,
-    MemoryChunk* page,
-    StoreBufferEvent event) {
+void Heap::ScavengeStoreBufferCallback(Heap* heap, MemoryChunk* page,
+                                       StoreBufferEvent event) {
   heap->store_buffer_rebuilder_.Callback(page, event);
 }

@@ -1386,7 +1334,7 @@ void StoreBufferRebuilder::Callback(MemoryChunk* page, StoreBufferEvent event) {
       // In this case the page we scanned took a reasonable number of slots in
       // the store buffer. It has now been rehabilitated and is no longer
       // marked scan_on_scavenge.
-      ASSERT(!current_page_->scan_on_scavenge());
+      DCHECK(!current_page_->scan_on_scavenge());
     }
   }
   start_of_current_page_ = store_buffer_->Top();
@@ -1403,10 +1351,10 @@ void StoreBufferRebuilder::Callback(MemoryChunk* page, StoreBufferEvent event) {
     } else {
       // Store Buffer overflowed while scanning a particular old space page for
       // pointers to new space.
-      ASSERT(current_page_ == page);
-      ASSERT(page != NULL);
+      DCHECK(current_page_ == page);
+      DCHECK(page != NULL);
       current_page_->set_scan_on_scavenge(true);
-      ASSERT(start_of_current_page_ != store_buffer_->Top());
+      DCHECK(start_of_current_page_ != store_buffer_->Top());
       store_buffer_->SetTop(start_of_current_page_);
     }
   } else {
@@ -1419,8 +1367,8 @@ void PromotionQueue::Initialize() {
   // Assumes that a NewSpacePage exactly fits a number of promotion queue
   // entries (where each is a pair of intptr_t). This allows us to simplify
   // the test for when to switch pages.
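
The DCHECK reformatted just below encodes the layout assumption stated in that comment: the usable body of a page must hold a whole number of two-word promotion queue entries. With stand-in constants (these are not V8's real values), the same check can be expressed at compile time:

    #include <cstddef>

    const size_t kPageSize = 1 << 20;            // stand-in page size
    const size_t kBodyOffset = 64;               // stand-in page-header size
    const size_t kPointerSize = sizeof(void*);
    const size_t kEntrySize = 2 * kPointerSize;  // each entry: (address, size)

    static_assert((kPageSize - kBodyOffset) % kEntrySize == 0,
                  "promotion queue entries must tile the page body exactly");

    int main() { return 0; }
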
- ASSERT((Page::kPageSize - MemoryChunk::kBodyOffset) % (2 * kPointerSize) - == 0); + DCHECK((Page::kPageSize - MemoryChunk::kBodyOffset) % (2 * kPointerSize) == + 0); limit_ = reinterpret_cast<intptr_t*>(heap_->new_space()->ToSpaceStart()); front_ = rear_ = reinterpret_cast<intptr_t*>(heap_->new_space()->ToSpaceEnd()); @@ -1430,12 +1378,11 @@ void PromotionQueue::Initialize() { void PromotionQueue::RelocateQueueHead() { - ASSERT(emergency_stack_ == NULL); + DCHECK(emergency_stack_ == NULL); Page* p = Page::FromAllocationTop(reinterpret_cast<Address>(rear_)); intptr_t* head_start = rear_; - intptr_t* head_end = - Min(front_, reinterpret_cast<intptr_t*>(p->area_end())); + intptr_t* head_end = Min(front_, reinterpret_cast<intptr_t*>(p->area_end())); int entries_count = static_cast<int>(head_end - head_start) / kEntrySizeInWords; @@ -1453,7 +1400,7 @@ void PromotionQueue::RelocateQueueHead() { class ScavengeWeakObjectRetainer : public WeakObjectRetainer { public: - explicit ScavengeWeakObjectRetainer(Heap* heap) : heap_(heap) { } + explicit ScavengeWeakObjectRetainer(Heap* heap) : heap_(heap) {} virtual Object* RetainAs(Object* object) { if (!heap_->InFromSpace(object)) { @@ -1490,8 +1437,6 @@ void Heap::Scavenge() { // Used for updating survived_since_last_expansion_ at function end. intptr_t survived_watermark = PromotedSpaceSizeOfObjects(); - CheckNewSpaceExpansionCriteria(); - SelectScavengingVisitorsTable(); incremental_marking()->PrepareForScavenge(); @@ -1531,8 +1476,7 @@ void Heap::Scavenge() { // Copy objects reachable from the old generation. { - StoreBufferRebuildScope scope(this, - store_buffer(), + StoreBufferRebuildScope scope(this, store_buffer(), &ScavengeStoreBufferCallback); store_buffer()->IteratePointersToNewSpace(&ScavengeObject); } @@ -1540,8 +1484,7 @@ void Heap::Scavenge() { // Copy objects reachable from simple cells by scavenging cell values // directly. HeapObjectIterator cell_iterator(cell_space_); - for (HeapObject* heap_object = cell_iterator.Next(); - heap_object != NULL; + for (HeapObject* heap_object = cell_iterator.Next(); heap_object != NULL; heap_object = cell_iterator.Next()) { if (heap_object->IsCell()) { Cell* cell = Cell::cast(heap_object); @@ -1565,15 +1508,15 @@ void Heap::Scavenge() { } } + // Copy objects reachable from the encountered weak collections list. + scavenge_visitor.VisitPointer(&encountered_weak_collections_); + // Copy objects reachable from the code flushing candidates list. MarkCompactCollector* collector = mark_compact_collector(); if (collector->is_code_flushing_enabled()) { collector->code_flusher()->IteratePointersToFromSpace(&scavenge_visitor); } - // Scavenge object reachable from the native contexts list directly. - scavenge_visitor.VisitPointer(BitCast<Object**>(&native_contexts_list_)); - new_space_front = DoScavenge(&scavenge_visitor, new_space_front); while (isolate()->global_handles()->IterateObjectGroups( @@ -1599,7 +1542,7 @@ void Heap::Scavenge() { ScavengeWeakObjectRetainer weak_object_retainer(this); ProcessWeakReferences(&weak_object_retainer); - ASSERT(new_space_front == new_space_.top()); + DCHECK(new_space_front == new_space_.top()); // Set age mark. 
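
ProcessWeakReferences(), invoked with the ScavengeWeakObjectRetainer above, is a relinking walk: each weakly held list element is offered to the retainer, which either returns the (possibly relocated) object or drops it from the chain. The same shape over a toy singly linked list:

    #include <cstdio>

    struct Node {
      int id;
      bool alive;  // stands in for "still reachable after scavenge"
      Node* weak_next;
    };

    // Returns the node to retain, or NULL to unlink it.
    Node* RetainAs(Node* n) { return n->alive ? n : NULL; }

    Node* ProcessWeakList(Node* head) {
      Node* new_head = NULL;
      Node** tail = &new_head;
      for (Node* cur = head; cur != NULL; cur = cur->weak_next) {
        Node* kept = RetainAs(cur);
        if (kept != NULL) {
          *tail = kept;  // relink the survivor
          tail = &kept->weak_next;
        }
      }
      *tail = NULL;
      return new_head;
    }

    int main() {
      Node c = {3, true, NULL};
      Node b = {2, false, &c};
      Node a = {1, true, &b};
      for (Node* n = ProcessWeakList(&a); n != NULL; n = n->weak_next)
        std::printf("retained %d\n", n->id);  // retained 1, retained 3
      return 0;
    }
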
new_space_.set_age_mark(new_space_.top()); @@ -1649,12 +1592,12 @@ void Heap::UpdateNewSpaceReferencesInExternalStringTable( Object** last = start; for (Object** p = start; p < end; ++p) { - ASSERT(InFromSpace(*p)); + DCHECK(InFromSpace(*p)); String* target = updater_func(this, p); if (target == NULL) continue; - ASSERT(target->IsExternalString()); + DCHECK(target->IsExternalString()); if (InNewSpace(target)) { // String is still in new space. Update the table entry. @@ -1666,14 +1609,13 @@ void Heap::UpdateNewSpaceReferencesInExternalStringTable( } } - ASSERT(last <= end); + DCHECK(last <= end); external_string_table_.ShrinkNewStrings(static_cast<int>(last - start)); } void Heap::UpdateReferencesInExternalStringTable( ExternalStringTableUpdaterCallback updater_func) { - // Update old space string references. if (external_string_table_.old_space_strings_.length() > 0) { Object** start = &external_string_table_.old_space_strings_[0]; @@ -1686,36 +1628,24 @@ void Heap::UpdateReferencesInExternalStringTable( void Heap::ProcessWeakReferences(WeakObjectRetainer* retainer) { - // We don't record weak slots during marking or scavenges. - // Instead we do it once when we complete mark-compact cycle. - // Note that write barrier has no effect if we are already in the middle of - // compacting mark-sweep cycle and we have to record slots manually. - bool record_slots = - gc_state() == MARK_COMPACT && - mark_compact_collector()->is_compacting(); - ProcessArrayBuffers(retainer, record_slots); - ProcessNativeContexts(retainer, record_slots); + ProcessArrayBuffers(retainer); + ProcessNativeContexts(retainer); // TODO(mvstanton): AllocationSites only need to be processed during // MARK_COMPACT, as they live in old space. Verify and address. - ProcessAllocationSites(retainer, record_slots); + ProcessAllocationSites(retainer); } -void Heap::ProcessNativeContexts(WeakObjectRetainer* retainer, - bool record_slots) { - Object* head = - VisitWeakList<Context>( - this, native_contexts_list(), retainer, record_slots); + +void Heap::ProcessNativeContexts(WeakObjectRetainer* retainer) { + Object* head = VisitWeakList<Context>(this, native_contexts_list(), retainer); // Update the head of the list of contexts. 
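
UpdateNewSpaceReferencesInExternalStringTable() above uses a two-cursor compaction: entries whose strings stay in new space are written back through a last cursor, entries that were promoted leave this table, and the array is then shrunk to the survivors. The same walk over a plain vector:

    #include <cstdio>
    #include <vector>

    int main() {
      std::vector<int> entries;
      for (int i = 1; i <= 6; i++) entries.push_back(i);

      size_t last = 0;
      for (size_t p = 0; p < entries.size(); ++p) {
        if (entries[p] % 2 == 0) continue;  // "promoted": leaves this table
        entries[last++] = entries[p];       // still young: keep, compacted left
      }
      entries.resize(last);  // the ShrinkNewStrings() analogue

      for (size_t i = 0; i < entries.size(); ++i)
        std::printf("%d ", entries[i]);  // prints: 1 3 5
      std::printf("\n");
      return 0;
    }
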
- native_contexts_list_ = head; + set_native_contexts_list(head); } -void Heap::ProcessArrayBuffers(WeakObjectRetainer* retainer, - bool record_slots) { +void Heap::ProcessArrayBuffers(WeakObjectRetainer* retainer) { Object* array_buffer_obj = - VisitWeakList<JSArrayBuffer>(this, - array_buffers_list(), - retainer, record_slots); + VisitWeakList<JSArrayBuffer>(this, array_buffers_list(), retainer); set_array_buffers_list(array_buffer_obj); } @@ -1727,16 +1657,13 @@ void Heap::TearDownArrayBuffers() { Runtime::FreeArrayBuffer(isolate(), buffer); o = buffer->weak_next(); } - array_buffers_list_ = undefined; + set_array_buffers_list(undefined); } -void Heap::ProcessAllocationSites(WeakObjectRetainer* retainer, - bool record_slots) { +void Heap::ProcessAllocationSites(WeakObjectRetainer* retainer) { Object* allocation_site_obj = - VisitWeakList<AllocationSite>(this, - allocation_sites_list(), - retainer, record_slots); + VisitWeakList<AllocationSite>(this, allocation_sites_list(), retainer); set_allocation_sites_list(allocation_site_obj); } @@ -1754,7 +1681,7 @@ void Heap::ResetAllAllocationSitesDependentCode(PretenureFlag flag) { } cur = casted->weak_next(); } - if (marked) isolate_->stack_guard()->DeoptMarkedAllocationSites(); + if (marked) isolate_->stack_guard()->RequestDeoptMarkedAllocationSites(); } @@ -1763,7 +1690,7 @@ void Heap::EvaluateOldSpaceLocalPretenuring( uint64_t size_of_objects_after_gc = SizeOfObjects(); double old_generation_survival_rate = (static_cast<double>(size_of_objects_after_gc) * 100) / - static_cast<double>(size_of_objects_before_gc); + static_cast<double>(size_of_objects_before_gc); if (old_generation_survival_rate < kOldSurvivalRateLowThreshold) { // Too many objects died in the old generation, pretenuring of wrong @@ -1772,8 +1699,10 @@ void Heap::EvaluateOldSpaceLocalPretenuring( // our pretenuring decisions. 
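
ResetAllAllocationSitesDependentCode(), continued below, and the other pretenuring paths that now call RequestDeoptMarkedAllocationSites() share one shape: mark every affected site first, then raise a single batched request instead of deoptimizing site by site. A sketch of that mark-then-request pattern with toy types:

    #include <cstdio>
    #include <vector>

    struct Site {
      bool tenured;
      bool deopt_dependent_code;
    };

    void ResetTenuredSites(std::vector<Site>* sites) {
      bool marked = false;
      for (size_t i = 0; i < sites->size(); i++) {
        Site& s = (*sites)[i];
        if (s.tenured) {
          s.deopt_dependent_code = true;  // remember which code to drop
          marked = true;
        }
      }
      // One batched interrupt instead of one deopt per site.
      if (marked) std::puts("request: deopt marked allocation sites");
    }

    int main() {
      std::vector<Site> sites;
      Site s1 = {true, false};
      Site s2 = {false, false};
      sites.push_back(s1);
      sites.push_back(s2);
      ResetTenuredSites(&sites);
      return 0;
    }
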
     ResetAllAllocationSitesDependentCode(TENURED);
     if (FLAG_trace_pretenuring) {
-      PrintF("Deopt all allocation sites dependent code due to low survival "
-             "rate in the old generation %f\n", old_generation_survival_rate);
+      PrintF(
+          "Deopt all allocation sites dependent code due to low survival "
+          "rate in the old generation %f\n",
+          old_generation_survival_rate);
     }
   }
 }
@@ -1786,14 +1715,16 @@ void Heap::VisitExternalResources(v8::ExternalResourceVisitor* visitor) {
   class ExternalStringTableVisitorAdapter : public ObjectVisitor {
    public:
     explicit ExternalStringTableVisitorAdapter(
-        v8::ExternalResourceVisitor* visitor) : visitor_(visitor) {}
+        v8::ExternalResourceVisitor* visitor)
+        : visitor_(visitor) {}
     virtual void VisitPointers(Object** start, Object** end) {
       for (Object** p = start; p < end; p++) {
-        ASSERT((*p)->IsExternalString());
-        visitor_->VisitExternalString(Utils::ToLocal(
-            Handle<String>(String::cast(*p))));
+        DCHECK((*p)->IsExternalString());
+        visitor_->VisitExternalString(
+            Utils::ToLocal(Handle<String>(String::cast(*p))));
       }
     }
+
    private:
     v8::ExternalResourceVisitor* visitor_;
   } external_string_table_visitor(visitor);
@@ -1824,7 +1755,7 @@ Address Heap::DoScavenge(ObjectVisitor* scavenge_visitor,
     if (!NewSpacePage::IsAtEnd(new_space_front)) {
       HeapObject* object = HeapObject::FromAddress(new_space_front);
       new_space_front +=
-        NewSpaceScavenger::IterateBody(object->map(), object);
+          NewSpaceScavenger::IterateBody(object->map(), object);
     } else {
       new_space_front =
           NewSpacePage::FromLimit(new_space_front)->next_page()->area_start();
@@ -1833,8 +1764,7 @@ Address Heap::DoScavenge(ObjectVisitor* scavenge_visitor,

   // Promote and process all the to-be-promoted objects.
   {
-    StoreBufferRebuildScope scope(this,
-                                  store_buffer(),
+    StoreBufferRebuildScope scope(this, store_buffer(),
                                   &ScavengeStoreBufferCallback);
     while (!promotion_queue()->is_empty()) {
       HeapObject* target;
@@ -1845,10 +1775,9 @@ Address Heap::DoScavenge(ObjectVisitor* scavenge_visitor,
       // during old space pointer iteration. Thus we search specifically
       // for pointers to from semispace instead of looking for pointers
       // to new space.
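
DoScavenge() above is the classic Cheney scan: new_space_front chases the allocation top, and visiting each object may copy more objects in ahead of the front, until the two pointers meet. A toy version using vector indices instead of heap addresses:

    #include <cstdio>
    #include <vector>

    struct Obj {
      std::vector<int> refs;  // indices of objects this object references
    };

    int main() {
      // Object 0 references 1, object 1 references 2; object 0 is the root.
      std::vector<Obj> from(3);
      from[0].refs.push_back(1);
      from[1].refs.push_back(2);

      std::vector<int> to;                        // "to-space": ids in copy order
      std::vector<int> forward(from.size(), -1);  // forwarding table

      // Evacuate an object at most once, recording its forwarding entry.
      auto copy = [&](int id) {
        if (forward[id] == -1) {
          forward[id] = static_cast<int>(to.size());
          to.push_back(id);
        }
      };

      copy(0);  // scavenge the root
      // The scan front chases to.size(), which grows as referents are
      // copied in, much like new_space_front chasing the allocation top.
      for (size_t front = 0; front < to.size(); ++front)
        for (int ref : from[to[front]].refs) copy(ref);

      for (size_t i = 0; i < to.size(); ++i)
        std::printf("survived: obj %d\n", to[i]);
      return 0;
    }
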
- ASSERT(!target->IsMap()); - IterateAndMarkPointersToFromSpace(target->address(), - target->address() + size, - &ScavengeObject); + DCHECK(!target->IsMap()); + IterateAndMarkPointersToFromSpace( + target->address(), target->address() + size, &ScavengeObject); } } @@ -1860,16 +1789,18 @@ Address Heap::DoScavenge(ObjectVisitor* scavenge_visitor, } -STATIC_ASSERT((FixedDoubleArray::kHeaderSize & kDoubleAlignmentMask) == 0); -STATIC_ASSERT((ConstantPoolArray::kHeaderSize & kDoubleAlignmentMask) == 0); +STATIC_ASSERT((FixedDoubleArray::kHeaderSize & kDoubleAlignmentMask) == + 0); // NOLINT +STATIC_ASSERT((ConstantPoolArray::kFirstEntryOffset & kDoubleAlignmentMask) == + 0); // NOLINT +STATIC_ASSERT((ConstantPoolArray::kExtendedFirstOffset & + kDoubleAlignmentMask) == 0); // NOLINT -INLINE(static HeapObject* EnsureDoubleAligned(Heap* heap, - HeapObject* object, +INLINE(static HeapObject* EnsureDoubleAligned(Heap* heap, HeapObject* object, int size)); -static HeapObject* EnsureDoubleAligned(Heap* heap, - HeapObject* object, +static HeapObject* EnsureDoubleAligned(Heap* heap, HeapObject* object, int size) { if ((OffsetFrom(object->address()) & kDoubleAlignmentMask) != 0) { heap->CreateFillerObjectAt(object->address(), kPointerSize); @@ -1891,8 +1822,8 @@ enum LoggingAndProfiling { enum MarksHandling { TRANSFER_MARKS, IGNORE_MARKS }; -template<MarksHandling marks_handling, - LoggingAndProfiling logging_and_profiling_mode> +template <MarksHandling marks_handling, + LoggingAndProfiling logging_and_profiling_mode> class ScavengingVisitor : public StaticVisitorBase { public: static void Initialize() { @@ -1905,69 +1836,63 @@ class ScavengingVisitor : public StaticVisitorBase { table_.Register(kVisitFixedTypedArray, &EvacuateFixedTypedArray); table_.Register(kVisitFixedFloat64Array, &EvacuateFixedFloat64Array); - table_.Register(kVisitNativeContext, - &ObjectEvacuationStrategy<POINTER_OBJECT>:: - template VisitSpecialized<Context::kSize>); - - table_.Register(kVisitConsString, - &ObjectEvacuationStrategy<POINTER_OBJECT>:: - template VisitSpecialized<ConsString::kSize>); + table_.Register( + kVisitNativeContext, + &ObjectEvacuationStrategy<POINTER_OBJECT>::template VisitSpecialized< + Context::kSize>); - table_.Register(kVisitSlicedString, - &ObjectEvacuationStrategy<POINTER_OBJECT>:: - template VisitSpecialized<SlicedString::kSize>); + table_.Register( + kVisitConsString, + &ObjectEvacuationStrategy<POINTER_OBJECT>::template VisitSpecialized< + ConsString::kSize>); - table_.Register(kVisitSymbol, - &ObjectEvacuationStrategy<POINTER_OBJECT>:: - template VisitSpecialized<Symbol::kSize>); + table_.Register( + kVisitSlicedString, + &ObjectEvacuationStrategy<POINTER_OBJECT>::template VisitSpecialized< + SlicedString::kSize>); - table_.Register(kVisitSharedFunctionInfo, - &ObjectEvacuationStrategy<POINTER_OBJECT>:: - template VisitSpecialized<SharedFunctionInfo::kSize>); + table_.Register( + kVisitSymbol, + &ObjectEvacuationStrategy<POINTER_OBJECT>::template VisitSpecialized< + Symbol::kSize>); - table_.Register(kVisitJSWeakMap, - &ObjectEvacuationStrategy<POINTER_OBJECT>:: - Visit); + table_.Register( + kVisitSharedFunctionInfo, + &ObjectEvacuationStrategy<POINTER_OBJECT>::template VisitSpecialized< + SharedFunctionInfo::kSize>); - table_.Register(kVisitJSWeakSet, - &ObjectEvacuationStrategy<POINTER_OBJECT>:: - Visit); + table_.Register(kVisitJSWeakCollection, + &ObjectEvacuationStrategy<POINTER_OBJECT>::Visit); table_.Register(kVisitJSArrayBuffer, - &ObjectEvacuationStrategy<POINTER_OBJECT>:: - Visit); + 
&ObjectEvacuationStrategy<POINTER_OBJECT>::Visit); table_.Register(kVisitJSTypedArray, - &ObjectEvacuationStrategy<POINTER_OBJECT>:: - Visit); + &ObjectEvacuationStrategy<POINTER_OBJECT>::Visit); table_.Register(kVisitJSDataView, - &ObjectEvacuationStrategy<POINTER_OBJECT>:: - Visit); + &ObjectEvacuationStrategy<POINTER_OBJECT>::Visit); table_.Register(kVisitJSRegExp, - &ObjectEvacuationStrategy<POINTER_OBJECT>:: - Visit); + &ObjectEvacuationStrategy<POINTER_OBJECT>::Visit); if (marks_handling == IGNORE_MARKS) { - table_.Register(kVisitJSFunction, - &ObjectEvacuationStrategy<POINTER_OBJECT>:: - template VisitSpecialized<JSFunction::kSize>); + table_.Register( + kVisitJSFunction, + &ObjectEvacuationStrategy<POINTER_OBJECT>::template VisitSpecialized< + JSFunction::kSize>); } else { table_.Register(kVisitJSFunction, &EvacuateJSFunction); } table_.RegisterSpecializations<ObjectEvacuationStrategy<DATA_OBJECT>, - kVisitDataObject, - kVisitDataObjectGeneric>(); + kVisitDataObject, kVisitDataObjectGeneric>(); table_.RegisterSpecializations<ObjectEvacuationStrategy<POINTER_OBJECT>, - kVisitJSObject, - kVisitJSObjectGeneric>(); + kVisitJSObject, kVisitJSObjectGeneric>(); table_.RegisterSpecializations<ObjectEvacuationStrategy<POINTER_OBJECT>, - kVisitStruct, - kVisitStructGeneric>(); + kVisitStruct, kVisitStructGeneric>(); } static VisitorDispatchTable<ScavengingCallback>* GetTable() { @@ -1975,7 +1900,7 @@ class ScavengingVisitor : public StaticVisitorBase { } private: - enum ObjectContents { DATA_OBJECT, POINTER_OBJECT }; + enum ObjectContents { DATA_OBJECT, POINTER_OBJECT }; static void RecordCopiedObject(Heap* heap, HeapObject* obj) { bool should_record = false; @@ -1995,10 +1920,21 @@ class ScavengingVisitor : public StaticVisitorBase { // Helper function used by CopyObject to copy a source object to an // allocated target object and update the forwarding pointer in the source // object. Returns the target object. - INLINE(static void MigrateObject(Heap* heap, - HeapObject* source, - HeapObject* target, - int size)) { + INLINE(static void MigrateObject(Heap* heap, HeapObject* source, + HeapObject* target, int size)) { + // If we migrate into to-space, then the to-space top pointer should be + // right after the target object. Incorporate double alignment + // over-allocation. + DCHECK(!heap->InToSpace(target) || + target->address() + size == heap->new_space()->top() || + target->address() + size + kPointerSize == heap->new_space()->top()); + + // Make sure that we do not overwrite the promotion queue which is at + // the end of to-space. + DCHECK(!heap->InToSpace(target) || + heap->promotion_queue()->IsBelowPromotionQueue( + heap->new_space()->top())); + // Copy the content of source to target. heap->CopyBlock(target->address(), source->address(), size); @@ -2008,19 +1944,7 @@ class ScavengingVisitor : public StaticVisitorBase { if (logging_and_profiling_mode == LOGGING_AND_PROFILING_ENABLED) { // Update NewSpace stats if necessary. 
RecordCopiedObject(heap, target); - Isolate* isolate = heap->isolate(); - HeapProfiler* heap_profiler = isolate->heap_profiler(); - if (heap_profiler->is_tracking_object_moves()) { - heap_profiler->ObjectMoveEvent(source->address(), target->address(), - size); - } - if (isolate->logger()->is_logging_code_events() || - isolate->cpu_profiler()->is_profiling()) { - if (target->IsSharedFunctionInfo()) { - PROFILE(isolate, SharedFunctionInfoMoveEvent( - source->address(), target->address())); - } - } + heap->OnMoveEvent(target, source, size); } if (marks_handling == TRANSFER_MARKS) { @@ -2030,82 +1954,123 @@ class ScavengingVisitor : public StaticVisitorBase { } } - - template<ObjectContents object_contents, int alignment> - static inline void EvacuateObject(Map* map, - HeapObject** slot, - HeapObject* object, - int object_size) { - SLOW_ASSERT(object_size <= Page::kMaxRegularHeapObjectSize); - SLOW_ASSERT(object->Size() == object_size); + template <int alignment> + static inline bool SemiSpaceCopyObject(Map* map, HeapObject** slot, + HeapObject* object, int object_size) { + Heap* heap = map->GetHeap(); int allocation_size = object_size; if (alignment != kObjectAlignment) { - ASSERT(alignment == kDoubleAlignment); + DCHECK(alignment == kDoubleAlignment); allocation_size += kPointerSize; } + DCHECK(heap->AllowedToBeMigrated(object, NEW_SPACE)); + AllocationResult allocation = + heap->new_space()->AllocateRaw(allocation_size); + + HeapObject* target = NULL; // Initialization to please compiler. + if (allocation.To(&target)) { + if (alignment != kObjectAlignment) { + target = EnsureDoubleAligned(heap, target, allocation_size); + } + + // Order is important here: Set the promotion limit before migrating + // the object. Otherwise we may end up overwriting promotion queue + // entries when we migrate the object. + heap->promotion_queue()->SetNewLimit(heap->new_space()->top()); + + // Order is important: slot might be inside of the target if target + // was allocated over a dead object and slot comes from the store + // buffer. + *slot = target; + MigrateObject(heap, object, target, object_size); + + heap->IncrementSemiSpaceCopiedObjectSize(object_size); + return true; + } + return false; + } + + + template <ObjectContents object_contents, int alignment> + static inline bool PromoteObject(Map* map, HeapObject** slot, + HeapObject* object, int object_size) { Heap* heap = map->GetHeap(); - if (heap->ShouldBePromoted(object->address(), object_size)) { - AllocationResult allocation; - if (object_contents == DATA_OBJECT) { - ASSERT(heap->AllowedToBeMigrated(object, OLD_DATA_SPACE)); - allocation = heap->old_data_space()->AllocateRaw(allocation_size); - } else { - ASSERT(heap->AllowedToBeMigrated(object, OLD_POINTER_SPACE)); - allocation = heap->old_pointer_space()->AllocateRaw(allocation_size); + int allocation_size = object_size; + if (alignment != kObjectAlignment) { + DCHECK(alignment == kDoubleAlignment); + allocation_size += kPointerSize; + } + + AllocationResult allocation; + if (object_contents == DATA_OBJECT) { + DCHECK(heap->AllowedToBeMigrated(object, OLD_DATA_SPACE)); + allocation = heap->old_data_space()->AllocateRaw(allocation_size); + } else { + DCHECK(heap->AllowedToBeMigrated(object, OLD_POINTER_SPACE)); + allocation = heap->old_pointer_space()->AllocateRaw(allocation_size); + } + + HeapObject* target = NULL; // Initialization to please compiler. 
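
Both copy paths over-allocate one word when double alignment is requested and then call EnsureDoubleAligned(), as in the lines just below: the extra word is burned at the front when the raw address is misaligned, otherwise it is left as a filler at the tail. This mainly matters on 32-bit targets; a sketch that models a 4-byte word:

    #include <cstdint>
    #include <cstdio>

    const uintptr_t kWordSize = 4;             // model a 32-bit heap word
    const uintptr_t kDoubleAlignmentMask = 7;  // doubles want 8-byte alignment

    uintptr_t EnsureDoubleAligned(uintptr_t raw) {
      if ((raw & kDoubleAlignmentMask) != 0) {
        // A one-word filler object would be written at raw; the payload
        // starts one word later, now 8-byte aligned.
        return raw + kWordSize;
      }
      // Already aligned: the extra word becomes a filler at the end.
      return raw;
    }

    int main() {
      std::printf("0x%lx\n", (unsigned long)EnsureDoubleAligned(0x1000));  // 0x1000
      std::printf("0x%lx\n", (unsigned long)EnsureDoubleAligned(0x1004));  // 0x1008
      return 0;
    }
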
+ if (allocation.To(&target)) { + if (alignment != kObjectAlignment) { + target = EnsureDoubleAligned(heap, target, allocation_size); } - HeapObject* target = NULL; // Initialization to please compiler. - if (allocation.To(&target)) { - if (alignment != kObjectAlignment) { - target = EnsureDoubleAligned(heap, target, allocation_size); + // Order is important: slot might be inside of the target if target + // was allocated over a dead object and slot comes from the store + // buffer. + *slot = target; + MigrateObject(heap, object, target, object_size); + + if (object_contents == POINTER_OBJECT) { + if (map->instance_type() == JS_FUNCTION_TYPE) { + heap->promotion_queue()->insert(target, + JSFunction::kNonWeakFieldsEndOffset); + } else { + heap->promotion_queue()->insert(target, object_size); } + } + heap->IncrementPromotedObjectsSize(object_size); + return true; + } + return false; + } - // Order is important: slot might be inside of the target if target - // was allocated over a dead object and slot comes from the store - // buffer. - *slot = target; - MigrateObject(heap, object, target, object_size); - if (object_contents == POINTER_OBJECT) { - if (map->instance_type() == JS_FUNCTION_TYPE) { - heap->promotion_queue()->insert( - target, JSFunction::kNonWeakFieldsEndOffset); - } else { - heap->promotion_queue()->insert(target, object_size); - } - } + template <ObjectContents object_contents, int alignment> + static inline void EvacuateObject(Map* map, HeapObject** slot, + HeapObject* object, int object_size) { + SLOW_DCHECK(object_size <= Page::kMaxRegularHeapObjectSize); + SLOW_DCHECK(object->Size() == object_size); + Heap* heap = map->GetHeap(); - heap->tracer()->increment_promoted_objects_size(object_size); + if (!heap->ShouldBePromoted(object->address(), object_size)) { + // A semi-space copy may fail due to fragmentation. In that case, we + // try to promote the object. + if (SemiSpaceCopyObject<alignment>(map, slot, object, object_size)) { return; } } - ASSERT(heap->AllowedToBeMigrated(object, NEW_SPACE)); - AllocationResult allocation = - heap->new_space()->AllocateRaw(allocation_size); - heap->promotion_queue()->SetNewLimit(heap->new_space()->top()); - HeapObject* target = HeapObject::cast(allocation.ToObjectChecked()); - if (alignment != kObjectAlignment) { - target = EnsureDoubleAligned(heap, target, allocation_size); + if (PromoteObject<object_contents, alignment>(map, slot, object, + object_size)) { + return; } - // Order is important: slot might be inside of the target if target - // was allocated over a dead object and slot comes from the store - // buffer. 
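
Taken together, the new EvacuateObject(), completed in the lines below, tries a semi-space copy first for young objects and promotion first for aged ones, then falls back to the other space, and only then gives up. The decision ladder in miniature, with toy predicates standing in for the real bump-pointer allocators:

    #include <cstdio>
    #include <cstdlib>

    bool SemiSpaceCopy(bool to_space_has_room) { return to_space_has_room; }
    bool Promote(bool old_space_has_room) { return old_space_has_room; }

    void EvacuateObject(bool should_be_promoted, bool to_room, bool old_room) {
      if (!should_be_promoted) {
        // A semi-space copy may fail due to fragmentation; fall through
        // to promotion if it does.
        if (SemiSpaceCopy(to_room)) {
          std::puts("copied within new space");
          return;
        }
      }
      if (Promote(old_room)) {
        std::puts("promoted to old space");
        return;
      }
      // If promotion failed, try the other semi-space as a last resort.
      if (SemiSpaceCopy(to_room)) {
        std::puts("copied within new space");
        return;
      }
      std::puts("unreachable: both spaces exhausted");
      std::abort();
    }

    int main() {
      EvacuateObject(false, true, true);  // young object: plain semi-space copy
      EvacuateObject(true, true, false);  // old enough, but old space is full
      return 0;
    }
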
- *slot = target; - MigrateObject(heap, object, target, object_size); - return; + // If promotion failed, we try to copy the object to the other semi-space + if (SemiSpaceCopyObject<alignment>(map, slot, object, object_size)) return; + + UNREACHABLE(); } - static inline void EvacuateJSFunction(Map* map, - HeapObject** slot, + static inline void EvacuateJSFunction(Map* map, HeapObject** slot, HeapObject* object) { - ObjectEvacuationStrategy<POINTER_OBJECT>:: - template VisitSpecialized<JSFunction::kSize>(map, slot, object); + ObjectEvacuationStrategy<POINTER_OBJECT>::template VisitSpecialized< + JSFunction::kSize>(map, slot, object); HeapObject* target = *slot; MarkBit mark_bit = Marking::MarkBitFrom(target); @@ -2117,92 +2082,79 @@ class ScavengingVisitor : public StaticVisitorBase { Address code_entry_slot = target->address() + JSFunction::kCodeEntryOffset; Code* code = Code::cast(Code::GetObjectFromEntryAddress(code_entry_slot)); - map->GetHeap()->mark_compact_collector()-> - RecordCodeEntrySlot(code_entry_slot, code); + map->GetHeap()->mark_compact_collector()->RecordCodeEntrySlot( + code_entry_slot, code); } } - static inline void EvacuateFixedArray(Map* map, - HeapObject** slot, + static inline void EvacuateFixedArray(Map* map, HeapObject** slot, HeapObject* object) { int object_size = FixedArray::BodyDescriptor::SizeOf(map, object); - EvacuateObject<POINTER_OBJECT, kObjectAlignment>( - map, slot, object, object_size); + EvacuateObject<POINTER_OBJECT, kObjectAlignment>(map, slot, object, + object_size); } - static inline void EvacuateFixedDoubleArray(Map* map, - HeapObject** slot, + static inline void EvacuateFixedDoubleArray(Map* map, HeapObject** slot, HeapObject* object) { int length = reinterpret_cast<FixedDoubleArray*>(object)->length(); int object_size = FixedDoubleArray::SizeFor(length); - EvacuateObject<DATA_OBJECT, kDoubleAlignment>( - map, slot, object, object_size); + EvacuateObject<DATA_OBJECT, kDoubleAlignment>(map, slot, object, + object_size); } - static inline void EvacuateFixedTypedArray(Map* map, - HeapObject** slot, + static inline void EvacuateFixedTypedArray(Map* map, HeapObject** slot, HeapObject* object) { int object_size = reinterpret_cast<FixedTypedArrayBase*>(object)->size(); - EvacuateObject<DATA_OBJECT, kObjectAlignment>( - map, slot, object, object_size); + EvacuateObject<DATA_OBJECT, kObjectAlignment>(map, slot, object, + object_size); } - static inline void EvacuateFixedFloat64Array(Map* map, - HeapObject** slot, + static inline void EvacuateFixedFloat64Array(Map* map, HeapObject** slot, HeapObject* object) { int object_size = reinterpret_cast<FixedFloat64Array*>(object)->size(); - EvacuateObject<DATA_OBJECT, kDoubleAlignment>( - map, slot, object, object_size); + EvacuateObject<DATA_OBJECT, kDoubleAlignment>(map, slot, object, + object_size); } - static inline void EvacuateByteArray(Map* map, - HeapObject** slot, + static inline void EvacuateByteArray(Map* map, HeapObject** slot, HeapObject* object) { int object_size = reinterpret_cast<ByteArray*>(object)->ByteArraySize(); - EvacuateObject<DATA_OBJECT, kObjectAlignment>( - map, slot, object, object_size); + EvacuateObject<DATA_OBJECT, kObjectAlignment>(map, slot, object, + object_size); } - static inline void EvacuateSeqOneByteString(Map* map, - HeapObject** slot, - HeapObject* object) { - int object_size = SeqOneByteString::cast(object)-> - SeqOneByteStringSize(map->instance_type()); - EvacuateObject<DATA_OBJECT, kObjectAlignment>( - map, slot, object, object_size); + static inline void 
EvacuateSeqOneByteString(Map* map, HeapObject** slot, + HeapObject* object) { + int object_size = SeqOneByteString::cast(object) + ->SeqOneByteStringSize(map->instance_type()); + EvacuateObject<DATA_OBJECT, kObjectAlignment>(map, slot, object, + object_size); } - static inline void EvacuateSeqTwoByteString(Map* map, - HeapObject** slot, + static inline void EvacuateSeqTwoByteString(Map* map, HeapObject** slot, HeapObject* object) { - int object_size = SeqTwoByteString::cast(object)-> - SeqTwoByteStringSize(map->instance_type()); - EvacuateObject<DATA_OBJECT, kObjectAlignment>( - map, slot, object, object_size); + int object_size = SeqTwoByteString::cast(object) + ->SeqTwoByteStringSize(map->instance_type()); + EvacuateObject<DATA_OBJECT, kObjectAlignment>(map, slot, object, + object_size); } - static inline bool IsShortcutCandidate(int type) { - return ((type & kShortcutTypeMask) == kShortcutTypeTag); - } - - static inline void EvacuateShortcutCandidate(Map* map, - HeapObject** slot, + static inline void EvacuateShortcutCandidate(Map* map, HeapObject** slot, HeapObject* object) { - ASSERT(IsShortcutCandidate(map->instance_type())); + DCHECK(IsShortcutCandidate(map->instance_type())); Heap* heap = map->GetHeap(); if (marks_handling == IGNORE_MARKS && - ConsString::cast(object)->unchecked_second() == - heap->empty_string()) { + ConsString::cast(object)->unchecked_second() == heap->empty_string()) { HeapObject* first = HeapObject::cast(ConsString::cast(object)->unchecked_first()); @@ -2228,27 +2180,24 @@ class ScavengingVisitor : public StaticVisitorBase { } int object_size = ConsString::kSize; - EvacuateObject<POINTER_OBJECT, kObjectAlignment>( - map, slot, object, object_size); + EvacuateObject<POINTER_OBJECT, kObjectAlignment>(map, slot, object, + object_size); } - template<ObjectContents object_contents> + template <ObjectContents object_contents> class ObjectEvacuationStrategy { public: - template<int object_size> - static inline void VisitSpecialized(Map* map, - HeapObject** slot, + template <int object_size> + static inline void VisitSpecialized(Map* map, HeapObject** slot, HeapObject* object) { - EvacuateObject<object_contents, kObjectAlignment>( - map, slot, object, object_size); + EvacuateObject<object_contents, kObjectAlignment>(map, slot, object, + object_size); } - static inline void Visit(Map* map, - HeapObject** slot, - HeapObject* object) { + static inline void Visit(Map* map, HeapObject** slot, HeapObject* object) { int object_size = map->instance_size(); - EvacuateObject<object_contents, kObjectAlignment>( - map, slot, object, object_size); + EvacuateObject<object_contents, kObjectAlignment>(map, slot, object, + object_size); } }; @@ -2256,8 +2205,8 @@ class ScavengingVisitor : public StaticVisitorBase { }; -template<MarksHandling marks_handling, - LoggingAndProfiling logging_and_profiling_mode> +template <MarksHandling marks_handling, + LoggingAndProfiling logging_and_profiling_mode> VisitorDispatchTable<ScavengingCallback> ScavengingVisitor<marks_handling, logging_and_profiling_mode>::table_; @@ -2274,30 +2223,26 @@ static void InitializeScavengingVisitorsTables() { void Heap::SelectScavengingVisitorsTable() { bool logging_and_profiling = - isolate()->logger()->is_logging() || + FLAG_verify_predictable || isolate()->logger()->is_logging() || isolate()->cpu_profiler()->is_profiling() || (isolate()->heap_profiler() != NULL && isolate()->heap_profiler()->is_tracking_object_moves()); if (!incremental_marking()->IsMarking()) { if (!logging_and_profiling) { - 
scavenging_visitors_table_.CopyFrom( - ScavengingVisitor<IGNORE_MARKS, - LOGGING_AND_PROFILING_DISABLED>::GetTable()); + scavenging_visitors_table_.CopyFrom(ScavengingVisitor< + IGNORE_MARKS, LOGGING_AND_PROFILING_DISABLED>::GetTable()); } else { - scavenging_visitors_table_.CopyFrom( - ScavengingVisitor<IGNORE_MARKS, - LOGGING_AND_PROFILING_ENABLED>::GetTable()); + scavenging_visitors_table_.CopyFrom(ScavengingVisitor< + IGNORE_MARKS, LOGGING_AND_PROFILING_ENABLED>::GetTable()); } } else { if (!logging_and_profiling) { - scavenging_visitors_table_.CopyFrom( - ScavengingVisitor<TRANSFER_MARKS, - LOGGING_AND_PROFILING_DISABLED>::GetTable()); + scavenging_visitors_table_.CopyFrom(ScavengingVisitor< + TRANSFER_MARKS, LOGGING_AND_PROFILING_DISABLED>::GetTable()); } else { - scavenging_visitors_table_.CopyFrom( - ScavengingVisitor<TRANSFER_MARKS, - LOGGING_AND_PROFILING_ENABLED>::GetTable()); + scavenging_visitors_table_.CopyFrom(ScavengingVisitor< + TRANSFER_MARKS, LOGGING_AND_PROFILING_ENABLED>::GetTable()); } if (incremental_marking()->IsCompacting()) { @@ -2315,9 +2260,9 @@ void Heap::SelectScavengingVisitorsTable() { void Heap::ScavengeObjectSlow(HeapObject** p, HeapObject* object) { - SLOW_ASSERT(object->GetIsolate()->heap()->InFromSpace(object)); + SLOW_DCHECK(object->GetIsolate()->heap()->InFromSpace(object)); MapWord first_word = object->map_word(); - SLOW_ASSERT(!first_word.IsForwardingAddress()); + SLOW_DCHECK(!first_word.IsForwardingAddress()); Map* map = first_word.ToMap(); map->GetHeap()->DoScavengeObject(map, p, object); } @@ -2334,7 +2279,7 @@ AllocationResult Heap::AllocatePartialMap(InstanceType instance_type, reinterpret_cast<Map*>(result)->set_instance_type(instance_type); reinterpret_cast<Map*>(result)->set_instance_size(instance_size); reinterpret_cast<Map*>(result)->set_visitor_id( - StaticVisitorBase::GetVisitorId(instance_type, instance_size)); + StaticVisitorBase::GetVisitorId(instance_type, instance_size)); reinterpret_cast<Map*>(result)->set_inobject_properties(0); reinterpret_cast<Map*>(result)->set_pre_allocated_property_fields(0); reinterpret_cast<Map*>(result)->set_unused_property_fields(0); @@ -2381,16 +2326,16 @@ AllocationResult Heap::AllocateMap(InstanceType instance_type, } -AllocationResult Heap::AllocateFillerObject(int size, - bool double_align, +AllocationResult Heap::AllocateFillerObject(int size, bool double_align, AllocationSpace space) { HeapObject* obj; - { AllocationResult allocation = AllocateRaw(size, space, space); + { + AllocationResult allocation = AllocateRaw(size, space, space); if (!allocation.To(&obj)) return allocation; } #ifdef DEBUG MemoryChunk* chunk = MemoryChunk::FromAddress(obj->address()); - ASSERT(chunk->owner()->identity() == space); + DCHECK(chunk->owner()->identity() == space); #endif CreateFillerObjectAt(obj->address(), size); return obj; @@ -2398,32 +2343,36 @@ AllocationResult Heap::AllocateFillerObject(int size, const Heap::StringTypeTable Heap::string_type_table[] = { -#define STRING_TYPE_ELEMENT(type, size, name, camel_name) \ - {type, size, k##camel_name##MapRootIndex}, - STRING_TYPE_LIST(STRING_TYPE_ELEMENT) +#define STRING_TYPE_ELEMENT(type, size, name, camel_name) \ + { type, size, k##camel_name##MapRootIndex } \ + , + STRING_TYPE_LIST(STRING_TYPE_ELEMENT) #undef STRING_TYPE_ELEMENT }; const Heap::ConstantStringTable Heap::constant_string_table[] = { -#define CONSTANT_STRING_ELEMENT(name, contents) \ - {contents, k##name##RootIndex}, - INTERNALIZED_STRING_LIST(CONSTANT_STRING_ELEMENT) +#define 
CONSTANT_STRING_ELEMENT(name, contents) \ + { contents, k##name##RootIndex } \ + , + INTERNALIZED_STRING_LIST(CONSTANT_STRING_ELEMENT) #undef CONSTANT_STRING_ELEMENT }; const Heap::StructTable Heap::struct_table[] = { -#define STRUCT_TABLE_ELEMENT(NAME, Name, name) \ - { NAME##_TYPE, Name::kSize, k##Name##MapRootIndex }, - STRUCT_LIST(STRUCT_TABLE_ELEMENT) +#define STRUCT_TABLE_ELEMENT(NAME, Name, name) \ + { NAME##_TYPE, Name::kSize, k##Name##MapRootIndex } \ + , + STRUCT_LIST(STRUCT_TABLE_ELEMENT) #undef STRUCT_TABLE_ELEMENT }; bool Heap::CreateInitialMaps() { HeapObject* obj; - { AllocationResult allocation = AllocatePartialMap(MAP_TYPE, Map::kSize); + { + AllocationResult allocation = AllocatePartialMap(MAP_TYPE, Map::kSize); if (!allocation.To(&obj)) return false; } // Map::cast cannot be used due to uninitialized map field. @@ -2431,12 +2380,13 @@ bool Heap::CreateInitialMaps() { set_meta_map(new_meta_map); new_meta_map->set_map(new_meta_map); - { // Partial map allocation -#define ALLOCATE_PARTIAL_MAP(instance_type, size, field_name) \ - { Map* map; \ - if (!AllocatePartialMap((instance_type), (size)).To(&map)) return false; \ - set_##field_name##_map(map); \ - } + { // Partial map allocation +#define ALLOCATE_PARTIAL_MAP(instance_type, size, field_name) \ + { \ + Map* map; \ + if (!AllocatePartialMap((instance_type), (size)).To(&map)) return false; \ + set_##field_name##_map(map); \ + } ALLOCATE_PARTIAL_MAP(FIXED_ARRAY_TYPE, kVariableSizeSentinel, fixed_array); ALLOCATE_PARTIAL_MAP(ODDBALL_TYPE, Oddball::kSize, undefined); @@ -2448,35 +2398,40 @@ bool Heap::CreateInitialMaps() { } // Allocate the empty array. - { AllocationResult allocation = AllocateEmptyFixedArray(); + { + AllocationResult allocation = AllocateEmptyFixedArray(); if (!allocation.To(&obj)) return false; } set_empty_fixed_array(FixedArray::cast(obj)); - { AllocationResult allocation = Allocate(null_map(), OLD_POINTER_SPACE); + { + AllocationResult allocation = Allocate(null_map(), OLD_POINTER_SPACE); if (!allocation.To(&obj)) return false; } set_null_value(Oddball::cast(obj)); Oddball::cast(obj)->set_kind(Oddball::kNull); - { AllocationResult allocation = Allocate(undefined_map(), OLD_POINTER_SPACE); + { + AllocationResult allocation = Allocate(undefined_map(), OLD_POINTER_SPACE); if (!allocation.To(&obj)) return false; } set_undefined_value(Oddball::cast(obj)); Oddball::cast(obj)->set_kind(Oddball::kUndefined); - ASSERT(!InNewSpace(undefined_value())); + DCHECK(!InNewSpace(undefined_value())); // Set preliminary exception sentinel value before actually initializing it. set_exception(null_value()); // Allocate the empty descriptor array. - { AllocationResult allocation = AllocateEmptyFixedArray(); + { + AllocationResult allocation = AllocateEmptyFixedArray(); if (!allocation.To(&obj)) return false; } set_empty_descriptor_array(DescriptorArray::cast(obj)); // Allocate the constant pool array. 
- { AllocationResult allocation = AllocateEmptyConstantPoolArray(); + { + AllocationResult allocation = AllocateEmptyConstantPoolArray(); if (!allocation.To(&obj)) return false; } set_empty_constant_pool_array(ConstantPoolArray::cast(obj)); @@ -2525,21 +2480,24 @@ bool Heap::CreateInitialMaps() { constant_pool_array_map()->set_prototype(null_value()); constant_pool_array_map()->set_constructor(null_value()); - { // Map allocation -#define ALLOCATE_MAP(instance_type, size, field_name) \ - { Map* map; \ - if (!AllocateMap((instance_type), size).To(&map)) return false; \ - set_##field_name##_map(map); \ - } + { // Map allocation +#define ALLOCATE_MAP(instance_type, size, field_name) \ + { \ + Map* map; \ + if (!AllocateMap((instance_type), size).To(&map)) return false; \ + set_##field_name##_map(map); \ + } -#define ALLOCATE_VARSIZE_MAP(instance_type, field_name) \ - ALLOCATE_MAP(instance_type, kVariableSizeSentinel, field_name) +#define ALLOCATE_VARSIZE_MAP(instance_type, field_name) \ + ALLOCATE_MAP(instance_type, kVariableSizeSentinel, field_name) ALLOCATE_VARSIZE_MAP(FIXED_ARRAY_TYPE, fixed_cow_array) - ASSERT(fixed_array_map() != fixed_cow_array_map()); + DCHECK(fixed_array_map() != fixed_cow_array_map()); ALLOCATE_VARSIZE_MAP(FIXED_ARRAY_TYPE, scope_info) ALLOCATE_MAP(HEAP_NUMBER_TYPE, HeapNumber::kSize, heap_number) + ALLOCATE_MAP(MUTABLE_HEAP_NUMBER_TYPE, HeapNumber::kSize, + mutable_heap_number) ALLOCATE_MAP(SYMBOL_TYPE, Symbol::kSize, symbol) ALLOCATE_MAP(FOREIGN_TYPE, Foreign::kSize, foreign) @@ -2553,7 +2511,8 @@ bool Heap::CreateInitialMaps() { for (unsigned i = 0; i < ARRAY_SIZE(string_type_table); i++) { const StringTypeTable& entry = string_type_table[i]; - { AllocationResult allocation = AllocateMap(entry.type, entry.size); + { + AllocationResult allocation = AllocateMap(entry.type, entry.size); if (!allocation.To(&obj)) return false; } // Mark cons string maps as unstable, because their objects can change @@ -2573,18 +2532,17 @@ bool Heap::CreateInitialMaps() { ALLOCATE_VARSIZE_MAP(BYTE_ARRAY_TYPE, byte_array) ALLOCATE_VARSIZE_MAP(FREE_SPACE_TYPE, free_space) -#define ALLOCATE_EXTERNAL_ARRAY_MAP(Type, type, TYPE, ctype, size) \ - ALLOCATE_MAP(EXTERNAL_##TYPE##_ARRAY_TYPE, ExternalArray::kAlignedSize, \ - external_##type##_array) +#define ALLOCATE_EXTERNAL_ARRAY_MAP(Type, type, TYPE, ctype, size) \ + ALLOCATE_MAP(EXTERNAL_##TYPE##_ARRAY_TYPE, ExternalArray::kAlignedSize, \ + external_##type##_array) - TYPED_ARRAYS(ALLOCATE_EXTERNAL_ARRAY_MAP) + TYPED_ARRAYS(ALLOCATE_EXTERNAL_ARRAY_MAP) #undef ALLOCATE_EXTERNAL_ARRAY_MAP -#define ALLOCATE_FIXED_TYPED_ARRAY_MAP(Type, type, TYPE, ctype, size) \ - ALLOCATE_VARSIZE_MAP(FIXED_##TYPE##_ARRAY_TYPE, \ - fixed_##type##_array) +#define ALLOCATE_FIXED_TYPED_ARRAY_MAP(Type, type, TYPE, ctype, size) \ + ALLOCATE_VARSIZE_MAP(FIXED_##TYPE##_ARRAY_TYPE, fixed_##type##_array) - TYPED_ARRAYS(ALLOCATE_FIXED_TYPED_ARRAY_MAP) + TYPED_ARRAYS(ALLOCATE_FIXED_TYPED_ARRAY_MAP) #undef ALLOCATE_FIXED_TYPED_ARRAY_MAP ALLOCATE_VARSIZE_MAP(FIXED_ARRAY_TYPE, sloppy_arguments_elements) @@ -2600,8 +2558,7 @@ bool Heap::CreateInitialMaps() { for (unsigned i = 0; i < ARRAY_SIZE(struct_table); i++) { const StructTable& entry = struct_table[i]; Map* map; - if (!AllocateMap(entry.type, entry.size).To(&map)) - return false; + if (!AllocateMap(entry.type, entry.size).To(&map)) return false; roots_[entry.index] = map; } @@ -2621,49 +2578,50 @@ bool Heap::CreateInitialMaps() { StaticVisitorBase::kVisitNativeContext); ALLOCATE_MAP(SHARED_FUNCTION_INFO_TYPE, 
SharedFunctionInfo::kAlignedSize, - shared_function_info) + shared_function_info) - ALLOCATE_MAP(JS_MESSAGE_OBJECT_TYPE, JSMessageObject::kSize, - message_object) - ALLOCATE_MAP(JS_OBJECT_TYPE, JSObject::kHeaderSize + kPointerSize, - external) + ALLOCATE_MAP(JS_MESSAGE_OBJECT_TYPE, JSMessageObject::kSize, message_object) + ALLOCATE_MAP(JS_OBJECT_TYPE, JSObject::kHeaderSize + kPointerSize, external) external_map()->set_is_extensible(false); #undef ALLOCATE_VARSIZE_MAP #undef ALLOCATE_MAP } - { // Empty arrays - { ByteArray* byte_array; + { // Empty arrays + { + ByteArray* byte_array; if (!AllocateByteArray(0, TENURED).To(&byte_array)) return false; set_empty_byte_array(byte_array); } -#define ALLOCATE_EMPTY_EXTERNAL_ARRAY(Type, type, TYPE, ctype, size) \ - { ExternalArray* obj; \ - if (!AllocateEmptyExternalArray(kExternal##Type##Array).To(&obj)) \ - return false; \ - set_empty_external_##type##_array(obj); \ - } +#define ALLOCATE_EMPTY_EXTERNAL_ARRAY(Type, type, TYPE, ctype, size) \ + { \ + ExternalArray* obj; \ + if (!AllocateEmptyExternalArray(kExternal##Type##Array).To(&obj)) \ + return false; \ + set_empty_external_##type##_array(obj); \ + } TYPED_ARRAYS(ALLOCATE_EMPTY_EXTERNAL_ARRAY) #undef ALLOCATE_EMPTY_EXTERNAL_ARRAY -#define ALLOCATE_EMPTY_FIXED_TYPED_ARRAY(Type, type, TYPE, ctype, size) \ - { FixedTypedArrayBase* obj; \ - if (!AllocateEmptyFixedTypedArray(kExternal##Type##Array).To(&obj)) \ - return false; \ - set_empty_fixed_##type##_array(obj); \ - } +#define ALLOCATE_EMPTY_FIXED_TYPED_ARRAY(Type, type, TYPE, ctype, size) \ + { \ + FixedTypedArrayBase* obj; \ + if (!AllocateEmptyFixedTypedArray(kExternal##Type##Array).To(&obj)) \ + return false; \ + set_empty_fixed_##type##_array(obj); \ + } TYPED_ARRAYS(ALLOCATE_EMPTY_FIXED_TYPED_ARRAY) #undef ALLOCATE_EMPTY_FIXED_TYPED_ARRAY } - ASSERT(!InNewSpace(empty_fixed_array())); + DCHECK(!InNewSpace(empty_fixed_array())); return true; } -AllocationResult Heap::AllocateHeapNumber(double value, +AllocationResult Heap::AllocateHeapNumber(double value, MutableMode mode, PretenureFlag pretenure) { // Statically ensure that it is safe to allocate heap numbers in paged // spaces. @@ -2673,11 +2631,13 @@ AllocationResult Heap::AllocateHeapNumber(double value, AllocationSpace space = SelectSpace(size, OLD_DATA_SPACE, pretenure); HeapObject* result; - { AllocationResult allocation = AllocateRaw(size, space, OLD_DATA_SPACE); + { + AllocationResult allocation = AllocateRaw(size, space, OLD_DATA_SPACE); if (!allocation.To(&result)) return allocation; } - result->set_map_no_write_barrier(heap_number_map()); + Map* map = mode == MUTABLE ? mutable_heap_number_map() : heap_number_map(); + HeapObject::cast(result)->set_map_no_write_barrier(map); HeapNumber::cast(result)->set_value(value); return result; } @@ -2688,7 +2648,8 @@ AllocationResult Heap::AllocateCell(Object* value) { STATIC_ASSERT(Cell::kSize <= Page::kMaxRegularHeapObjectSize); HeapObject* result; - { AllocationResult allocation = AllocateRaw(size, CELL_SPACE, CELL_SPACE); + { + AllocationResult allocation = AllocateRaw(size, CELL_SPACE, CELL_SPACE); if (!allocation.To(&result)) return allocation; } result->set_map_no_write_barrier(cell_map()); @@ -2783,12 +2744,13 @@ void Heap::CreateInitialObjects() { HandleScope scope(isolate()); Factory* factory = isolate()->factory(); - // The -0 value must be set before NumberFromDouble works. 
- set_minus_zero_value(*factory->NewHeapNumber(-0.0, TENURED)); - ASSERT(std::signbit(minus_zero_value()->Number()) != 0); + // The -0 value must be set before NewNumber works. + set_minus_zero_value(*factory->NewHeapNumber(-0.0, IMMUTABLE, TENURED)); + DCHECK(std::signbit(minus_zero_value()->Number()) != 0); - set_nan_value(*factory->NewHeapNumber(OS::nan_value(), TENURED)); - set_infinity_value(*factory->NewHeapNumber(V8_INFINITY, TENURED)); + set_nan_value( + *factory->NewHeapNumber(base::OS::nan_value(), IMMUTABLE, TENURED)); + set_infinity_value(*factory->NewHeapNumber(V8_INFINITY, IMMUTABLE, TENURED)); // The hole has not been created yet, but we want to put something // predictable in the gaps in the string table, so lets make that Smi zero. @@ -2798,62 +2760,45 @@ void Heap::CreateInitialObjects() { set_string_table(*StringTable::New(isolate(), kInitialStringTableSize)); // Finish initializing oddballs after creating the string table. - Oddball::Initialize(isolate(), - factory->undefined_value(), - "undefined", - factory->nan_value(), - Oddball::kUndefined); + Oddball::Initialize(isolate(), factory->undefined_value(), "undefined", + factory->nan_value(), Oddball::kUndefined); // Initialize the null_value. - Oddball::Initialize(isolate(), - factory->null_value(), - "null", - handle(Smi::FromInt(0), isolate()), - Oddball::kNull); - - set_true_value(*factory->NewOddball(factory->boolean_map(), - "true", + Oddball::Initialize(isolate(), factory->null_value(), "null", + handle(Smi::FromInt(0), isolate()), Oddball::kNull); + + set_true_value(*factory->NewOddball(factory->boolean_map(), "true", handle(Smi::FromInt(1), isolate()), Oddball::kTrue)); - set_false_value(*factory->NewOddball(factory->boolean_map(), - "false", + set_false_value(*factory->NewOddball(factory->boolean_map(), "false", handle(Smi::FromInt(0), isolate()), Oddball::kFalse)); - set_the_hole_value(*factory->NewOddball(factory->the_hole_map(), - "hole", + set_the_hole_value(*factory->NewOddball(factory->the_hole_map(), "hole", handle(Smi::FromInt(-1), isolate()), Oddball::kTheHole)); - set_uninitialized_value( - *factory->NewOddball(factory->uninitialized_map(), - "uninitialized", - handle(Smi::FromInt(-1), isolate()), - Oddball::kUninitialized)); - - set_arguments_marker(*factory->NewOddball(factory->arguments_marker_map(), - "arguments_marker", - handle(Smi::FromInt(-4), isolate()), - Oddball::kArgumentMarker)); - - set_no_interceptor_result_sentinel( - *factory->NewOddball(factory->no_interceptor_result_sentinel_map(), - "no_interceptor_result_sentinel", - handle(Smi::FromInt(-2), isolate()), - Oddball::kOther)); - - set_termination_exception( - *factory->NewOddball(factory->termination_exception_map(), - "termination_exception", - handle(Smi::FromInt(-3), isolate()), - Oddball::kOther)); - - set_exception( - *factory->NewOddball(factory->exception_map(), - "exception", - handle(Smi::FromInt(-5), isolate()), - Oddball::kException)); + set_uninitialized_value(*factory->NewOddball( + factory->uninitialized_map(), "uninitialized", + handle(Smi::FromInt(-1), isolate()), Oddball::kUninitialized)); + + set_arguments_marker(*factory->NewOddball( + factory->arguments_marker_map(), "arguments_marker", + handle(Smi::FromInt(-4), isolate()), Oddball::kArgumentMarker)); + + set_no_interceptor_result_sentinel(*factory->NewOddball( + factory->no_interceptor_result_sentinel_map(), + "no_interceptor_result_sentinel", handle(Smi::FromInt(-2), isolate()), + Oddball::kOther)); + + set_termination_exception(*factory->NewOddball( + 
factory->termination_exception_map(), "termination_exception", + handle(Smi::FromInt(-3), isolate()), Oddball::kOther)); + + set_exception(*factory->NewOddball(factory->exception_map(), "exception", + handle(Smi::FromInt(-5), isolate()), + Oddball::kException)); for (unsigned i = 0; i < ARRAY_SIZE(constant_string_table); i++) { Handle<String> str = @@ -2889,16 +2834,16 @@ void Heap::CreateInitialObjects() { // Allocate the dictionary of intrinsic function names. Handle<NameDictionary> intrinsic_names = - NameDictionary::New(isolate(), Runtime::kNumFunctions); + NameDictionary::New(isolate(), Runtime::kNumFunctions, TENURED); Runtime::InitializeIntrinsicFunctionNames(isolate(), intrinsic_names); set_intrinsic_function_names(*intrinsic_names); - set_number_string_cache(*factory->NewFixedArray( - kInitialNumberStringCacheSize * 2, TENURED)); + set_number_string_cache( + *factory->NewFixedArray(kInitialNumberStringCacheSize * 2, TENURED)); // Allocate cache for single character one byte strings. - set_single_character_string_cache(*factory->NewFixedArray( - String::kMaxOneByteCharCode + 1, TENURED)); + set_single_character_string_cache( + *factory->NewFixedArray(String::kMaxOneByteCharCode + 1, TENURED)); // Allocate cache for string split and regexp-multiple. set_string_split_cache(*factory->NewFixedArray( @@ -2907,8 +2852,8 @@ void Heap::CreateInitialObjects() { RegExpResultsCache::kRegExpResultsCacheSize, TENURED)); // Allocate cache for external strings pointing to native source code. - set_natives_source_cache(*factory->NewFixedArray( - Natives::GetBuiltinsCount())); + set_natives_source_cache( + *factory->NewFixedArray(Natives::GetBuiltinsCount())); set_undefined_cell(*factory->NewCell(factory->undefined_value())); @@ -2919,16 +2864,19 @@ void Heap::CreateInitialObjects() { set_observation_state(*factory->NewJSObjectFromMap( factory->NewMap(JS_OBJECT_TYPE, JSObject::kHeaderSize))); - // Allocate object to hold object microtask state. - set_microtask_state(*factory->NewJSObjectFromMap( - factory->NewMap(JS_OBJECT_TYPE, JSObject::kHeaderSize))); + // Microtask queue uses the empty fixed array as a sentinel for "empty". + // Number of queued microtasks stored in Isolate::pending_microtask_count(). + set_microtask_queue(empty_fixed_array()); - set_frozen_symbol(*factory->NewPrivateSymbol()); - set_nonexistent_symbol(*factory->NewPrivateSymbol()); + set_detailed_stack_trace_symbol(*factory->NewPrivateSymbol()); set_elements_transition_symbol(*factory->NewPrivateSymbol()); - set_uninitialized_symbol(*factory->NewPrivateSymbol()); + set_frozen_symbol(*factory->NewPrivateSymbol()); set_megamorphic_symbol(*factory->NewPrivateSymbol()); + set_nonexistent_symbol(*factory->NewPrivateSymbol()); + set_normal_ic_symbol(*factory->NewPrivateSymbol()); set_observed_symbol(*factory->NewPrivateSymbol()); + set_stack_trace_symbol(*factory->NewPrivateSymbol()); + set_uninitialized_symbol(*factory->NewPrivateSymbol()); Handle<SeededNumberDictionary> slow_element_dictionary = SeededNumberDictionary::New(isolate(), 0, TENURED); @@ -2940,8 +2888,8 @@ void Heap::CreateInitialObjects() { // Handling of script id generation is in Factory::NewScript. set_last_script_id(Smi::FromInt(v8::UnboundScript::kNoScriptId)); - set_allocation_sites_scratchpad(*factory->NewFixedArray( - kAllocationSiteScratchpadSize, TENURED)); + set_allocation_sites_scratchpad( + *factory->NewFixedArray(kAllocationSiteScratchpadSize, TENURED)); InitializeAllocationSitesScratchpad(); // Initialize keyed lookup cache. 
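// ----------------------------------------------------------------------------
// [Editor's note, not part of the diff] The RegExpResultsCache hunks that
// follow keep computing the entry index as
//   (hash & (kRegExpResultsCacheSize - 1)) & ~(kArrayEntriesPerCacheEntry - 1)
// A minimal standalone sketch of that scheme, using simplified hypothetical
// names (kCacheSize, kSlotsPerEntry) in place of the real constants: the first
// mask wraps the hash into a power-of-two table, the second clears the low
// bits so the index always lands on the first slot of a multi-slot entry.
#include <cassert>
#include <cstdint>

constexpr uint32_t kCacheSize = 256;    // total slots; must be a power of two
constexpr uint32_t kSlotsPerEntry = 4;  // consecutive slots per cache entry

uint32_t EntryIndex(uint32_t hash) {
  uint32_t index = (hash & (kCacheSize - 1)) & ~(kSlotsPerEntry - 1);
  assert(index < kCacheSize && index % kSlotsPerEntry == 0);
  return index;  // e.g. hash 0x12345677 -> slot 0x74
}
// With this layout, Lookup/Enter can address index + k for the k-th field of
// an entry (key string, pattern, result array) without further bounds checks.
// ----------------------------------------------------------------------------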
@@ -2960,28 +2908,27 @@ void Heap::CreateInitialObjects() { bool Heap::RootCanBeWrittenAfterInitialization(Heap::RootListIndex root_index) { RootListIndex writable_roots[] = { - kStoreBufferTopRootIndex, - kStackLimitRootIndex, - kNumberStringCacheRootIndex, - kInstanceofCacheFunctionRootIndex, - kInstanceofCacheMapRootIndex, - kInstanceofCacheAnswerRootIndex, - kCodeStubsRootIndex, - kNonMonomorphicCacheRootIndex, - kPolymorphicCodeCacheRootIndex, - kLastScriptIdRootIndex, - kEmptyScriptRootIndex, - kRealStackLimitRootIndex, - kArgumentsAdaptorDeoptPCOffsetRootIndex, - kConstructStubDeoptPCOffsetRootIndex, - kGetterStubDeoptPCOffsetRootIndex, - kSetterStubDeoptPCOffsetRootIndex, - kStringTableRootIndex, + kStoreBufferTopRootIndex, + kStackLimitRootIndex, + kNumberStringCacheRootIndex, + kInstanceofCacheFunctionRootIndex, + kInstanceofCacheMapRootIndex, + kInstanceofCacheAnswerRootIndex, + kCodeStubsRootIndex, + kNonMonomorphicCacheRootIndex, + kPolymorphicCodeCacheRootIndex, + kLastScriptIdRootIndex, + kEmptyScriptRootIndex, + kRealStackLimitRootIndex, + kArgumentsAdaptorDeoptPCOffsetRootIndex, + kConstructStubDeoptPCOffsetRootIndex, + kGetterStubDeoptPCOffsetRootIndex, + kSetterStubDeoptPCOffsetRootIndex, + kStringTableRootIndex, }; for (unsigned int i = 0; i < ARRAY_SIZE(writable_roots); i++) { - if (root_index == writable_roots[i]) - return true; + if (root_index == writable_roots[i]) return true; } return false; } @@ -2989,29 +2936,27 @@ bool Heap::RootCanBeWrittenAfterInitialization(Heap::RootListIndex root_index) { bool Heap::RootCanBeTreatedAsConstant(RootListIndex root_index) { return !RootCanBeWrittenAfterInitialization(root_index) && - !InNewSpace(roots_array_start()[root_index]); + !InNewSpace(roots_array_start()[root_index]); } -Object* RegExpResultsCache::Lookup(Heap* heap, - String* key_string, - Object* key_pattern, - ResultsCacheType type) { +Object* RegExpResultsCache::Lookup(Heap* heap, String* key_string, + Object* key_pattern, ResultsCacheType type) { FixedArray* cache; if (!key_string->IsInternalizedString()) return Smi::FromInt(0); if (type == STRING_SPLIT_SUBSTRINGS) { - ASSERT(key_pattern->IsString()); + DCHECK(key_pattern->IsString()); if (!key_pattern->IsInternalizedString()) return Smi::FromInt(0); cache = heap->string_split_cache(); } else { - ASSERT(type == REGEXP_MULTIPLE_INDICES); - ASSERT(key_pattern->IsFixedArray()); + DCHECK(type == REGEXP_MULTIPLE_INDICES); + DCHECK(key_pattern->IsFixedArray()); cache = heap->regexp_multiple_cache(); } uint32_t hash = key_string->Hash(); uint32_t index = ((hash & (kRegExpResultsCacheSize - 1)) & - ~(kArrayEntriesPerCacheEntry - 1)); + ~(kArrayEntriesPerCacheEntry - 1)); if (cache->get(index + kStringOffset) == key_string && cache->get(index + kPatternOffset) == key_pattern) { return cache->get(index + kArrayOffset); @@ -3026,8 +2971,7 @@ Object* RegExpResultsCache::Lookup(Heap* heap, } -void RegExpResultsCache::Enter(Isolate* isolate, - Handle<String> key_string, +void RegExpResultsCache::Enter(Isolate* isolate, Handle<String> key_string, Handle<Object> key_pattern, Handle<FixedArray> value_array, ResultsCacheType type) { @@ -3035,18 +2979,18 @@ void RegExpResultsCache::Enter(Isolate* isolate, Handle<FixedArray> cache; if (!key_string->IsInternalizedString()) return; if (type == STRING_SPLIT_SUBSTRINGS) { - ASSERT(key_pattern->IsString()); + DCHECK(key_pattern->IsString()); if (!key_pattern->IsInternalizedString()) return; cache = factory->string_split_cache(); } else { - ASSERT(type == REGEXP_MULTIPLE_INDICES); - 
ASSERT(key_pattern->IsFixedArray()); + DCHECK(type == REGEXP_MULTIPLE_INDICES); + DCHECK(key_pattern->IsFixedArray()); cache = factory->regexp_multiple_cache(); } uint32_t hash = key_string->Hash(); uint32_t index = ((hash & (kRegExpResultsCacheSize - 1)) & - ~(kArrayEntriesPerCacheEntry - 1)); + ~(kArrayEntriesPerCacheEntry - 1)); if (cache->get(index + kStringOffset) == Smi::FromInt(0)) { cache->set(index + kStringOffset, *key_string); cache->set(index + kPatternOffset, *key_pattern); @@ -3092,7 +3036,7 @@ int Heap::FullSizeNumberStringCacheLength() { // Compute the size of the number string cache based on the max newspace size. // The number string cache has a minimum size based on twice the initial cache // size to ensure that it is bigger after being made 'full size'. - int number_string_cache_size = max_semispace_size_ / 512; + int number_string_cache_size = max_semi_space_size_ / 512; number_string_cache_size = Max(kInitialNumberStringCacheSize * 2, Min(0x4000, number_string_cache_size)); // There is a string and a number per entry so the length is twice the number @@ -3119,7 +3063,7 @@ void Heap::FlushAllocationSitesScratchpad() { void Heap::InitializeAllocationSitesScratchpad() { - ASSERT(allocation_sites_scratchpad()->length() == + DCHECK(allocation_sites_scratchpad()->length() == kAllocationSiteScratchpadSize); for (int i = 0; i < kAllocationSiteScratchpadSize; i++) { allocation_sites_scratchpad()->set_undefined(i); @@ -3133,8 +3077,8 @@ void Heap::AddAllocationSiteToScratchpad(AllocationSite* site, // We cannot use the normal write-barrier because slots need to be // recorded with non-incremental marking as well. We have to explicitly // record the slot to take evacuation candidates into account. - allocation_sites_scratchpad()->set( - allocation_sites_scratchpad_length_, site, SKIP_WRITE_BARRIER); + allocation_sites_scratchpad()->set(allocation_sites_scratchpad_length_, + site, SKIP_WRITE_BARRIER); Object** slot = allocation_sites_scratchpad()->RawFieldOfElementAt( allocation_sites_scratchpad_length_); @@ -3143,8 +3087,8 @@ void Heap::AddAllocationSiteToScratchpad(AllocationSite* site, // candidates are not part of the global list of old space pages and // releasing an evacuation candidate due to a slots buffer overflow // results in lost pages. 
- mark_compact_collector()->RecordSlot( - slot, slot, *slot, SlotsBuffer::IGNORE_OVERFLOW); + mark_compact_collector()->RecordSlot(slot, slot, *slot, + SlotsBuffer::IGNORE_OVERFLOW); } allocation_sites_scratchpad_length_++; } @@ -3159,9 +3103,9 @@ Map* Heap::MapForExternalArrayType(ExternalArrayType array_type) { Heap::RootListIndex Heap::RootIndexForExternalArrayType( ExternalArrayType array_type) { switch (array_type) { -#define ARRAY_TYPE_TO_ROOT_INDEX(Type, type, TYPE, ctype, size) \ - case kExternal##Type##Array: \ - return kExternal##Type##ArrayMapRootIndex; +#define ARRAY_TYPE_TO_ROOT_INDEX(Type, type, TYPE, ctype, size) \ + case kExternal##Type##Array: \ + return kExternal##Type##ArrayMapRootIndex; TYPED_ARRAYS(ARRAY_TYPE_TO_ROOT_INDEX) #undef ARRAY_TYPE_TO_ROOT_INDEX @@ -3181,9 +3125,9 @@ Map* Heap::MapForFixedTypedArray(ExternalArrayType array_type) { Heap::RootListIndex Heap::RootIndexForFixedTypedArray( ExternalArrayType array_type) { switch (array_type) { -#define ARRAY_TYPE_TO_ROOT_INDEX(Type, type, TYPE, ctype, size) \ - case kExternal##Type##Array: \ - return kFixed##Type##ArrayMapRootIndex; +#define ARRAY_TYPE_TO_ROOT_INDEX(Type, type, TYPE, ctype, size) \ + case kExternal##Type##Array: \ + return kFixed##Type##ArrayMapRootIndex; TYPED_ARRAYS(ARRAY_TYPE_TO_ROOT_INDEX) #undef ARRAY_TYPE_TO_ROOT_INDEX @@ -3198,9 +3142,9 @@ Heap::RootListIndex Heap::RootIndexForFixedTypedArray( Heap::RootListIndex Heap::RootIndexForEmptyExternalArray( ElementsKind elementsKind) { switch (elementsKind) { -#define ELEMENT_KIND_TO_ROOT_INDEX(Type, type, TYPE, ctype, size) \ - case EXTERNAL_##TYPE##_ELEMENTS: \ - return kEmptyExternal##Type##ArrayRootIndex; +#define ELEMENT_KIND_TO_ROOT_INDEX(Type, type, TYPE, ctype, size) \ + case EXTERNAL_##TYPE##_ELEMENTS: \ + return kEmptyExternal##Type##ArrayRootIndex; TYPED_ARRAYS(ELEMENT_KIND_TO_ROOT_INDEX) #undef ELEMENT_KIND_TO_ROOT_INDEX @@ -3215,9 +3159,9 @@ Heap::RootListIndex Heap::RootIndexForEmptyExternalArray( Heap::RootListIndex Heap::RootIndexForEmptyFixedTypedArray( ElementsKind elementsKind) { switch (elementsKind) { -#define ELEMENT_KIND_TO_ROOT_INDEX(Type, type, TYPE, ctype, size) \ - case TYPE##_ELEMENTS: \ - return kEmptyFixed##Type##ArrayRootIndex; +#define ELEMENT_KIND_TO_ROOT_INDEX(Type, type, TYPE, ctype, size) \ + case TYPE##_ELEMENTS: \ + return kEmptyFixed##Type##ArrayRootIndex; TYPED_ARRAYS(ELEMENT_KIND_TO_ROOT_INDEX) #undef ELEMENT_KIND_TO_ROOT_INDEX @@ -3260,7 +3204,8 @@ AllocationResult Heap::AllocateByteArray(int length, PretenureFlag pretenure) { int size = ByteArray::SizeFor(length); AllocationSpace space = SelectSpace(size, OLD_DATA_SPACE, pretenure); HeapObject* result; - { AllocationResult allocation = AllocateRaw(size, space, OLD_DATA_SPACE); + { + AllocationResult allocation = AllocateRaw(size, space, OLD_DATA_SPACE); if (!allocation.To(&result)) return allocation; } @@ -3299,10 +3244,7 @@ bool Heap::CanMoveObjectStart(HeapObject* object) { // for concurrent sweeping. The WasSwept predicate for concurrently swept // pages is set after sweeping all pages. 
return (!is_in_old_pointer_space && !is_in_old_data_space) || - page->WasSwept() || - (mark_compact_collector()->AreSweeperThreadsActivated() && - page->parallel_sweeping() <= - MemoryChunk::PARALLEL_SWEEPING_FINALIZE); + page->WasSwept() || page->SweepingCompleted(); } @@ -3318,39 +3260,130 @@ void Heap::AdjustLiveBytes(Address address, int by, InvocationMode mode) { } +FixedArrayBase* Heap::LeftTrimFixedArray(FixedArrayBase* object, + int elements_to_trim) { + const int element_size = object->IsFixedArray() ? kPointerSize : kDoubleSize; + const int bytes_to_trim = elements_to_trim * element_size; + Map* map = object->map(); + + // For now this trick is only applied to objects in new and paged space. + // In large object space the object's start must coincide with chunk + // and thus the trick is just not applicable. + DCHECK(!lo_space()->Contains(object)); + DCHECK(object->map() != fixed_cow_array_map()); + + STATIC_ASSERT(FixedArrayBase::kMapOffset == 0); + STATIC_ASSERT(FixedArrayBase::kLengthOffset == kPointerSize); + STATIC_ASSERT(FixedArrayBase::kHeaderSize == 2 * kPointerSize); + + const int len = object->length(); + DCHECK(elements_to_trim <= len); + + // Calculate location of new array start. + Address new_start = object->address() + bytes_to_trim; + + // Technically in new space this write might be omitted (except for + // debug mode which iterates through the heap), but to play safer + // we still do it. + CreateFillerObjectAt(object->address(), bytes_to_trim); + + // Initialize header of the trimmed array. Since left trimming is only + // performed on pages which are not concurrently swept creating a filler + // object does not require synchronization. + DCHECK(CanMoveObjectStart(object)); + Object** former_start = HeapObject::RawField(object, 0); + int new_start_index = elements_to_trim * (element_size / kPointerSize); + former_start[new_start_index] = map; + former_start[new_start_index + 1] = Smi::FromInt(len - elements_to_trim); + FixedArrayBase* new_object = + FixedArrayBase::cast(HeapObject::FromAddress(new_start)); + + // Maintain consistency of live bytes during incremental marking + marking()->TransferMark(object->address(), new_start); + AdjustLiveBytes(new_start, -bytes_to_trim, Heap::FROM_MUTATOR); + + // Notify the heap profiler of change in object layout. + OnMoveEvent(new_object, object, new_object->Size()); + return new_object; +} + + +// Force instantiation of templatized method. +template +void Heap::RightTrimFixedArray<Heap::FROM_GC>(FixedArrayBase*, int); +template +void Heap::RightTrimFixedArray<Heap::FROM_MUTATOR>(FixedArrayBase*, int); + + +template<Heap::InvocationMode mode> +void Heap::RightTrimFixedArray(FixedArrayBase* object, int elements_to_trim) { + const int element_size = object->IsFixedArray() ? kPointerSize : kDoubleSize; + const int bytes_to_trim = elements_to_trim * element_size; + + // For now this trick is only applied to objects in new and paged space. + DCHECK(!lo_space()->Contains(object)); + DCHECK(object->map() != fixed_cow_array_map()); + + const int len = object->length(); + DCHECK(elements_to_trim < len); + + // Calculate location of new array end. + Address new_end = object->address() + object->Size() - bytes_to_trim; + + // Technically in new space this write might be omitted (except for + // debug mode which iterates through the heap), but to play safer + // we still do it. + CreateFillerObjectAt(new_end, bytes_to_trim); + + // Initialize header of the trimmed array. 
We are storing the new length + // using release store after creating a filler for the left-over space to + // avoid races with the sweeper thread. + object->synchronized_set_length(len - elements_to_trim); + + // Maintain consistency of live bytes during incremental marking + AdjustLiveBytes(object->address(), -bytes_to_trim, mode); + + // Notify the heap profiler of change in object layout. The array may not be + // moved during GC, and size has to be adjusted nevertheless. + HeapProfiler* profiler = isolate()->heap_profiler(); + if (profiler->is_tracking_allocations()) { + profiler->UpdateObjectSizeEvent(object->address(), object->Size()); + } +} + + AllocationResult Heap::AllocateExternalArray(int length, - ExternalArrayType array_type, - void* external_pointer, - PretenureFlag pretenure) { + ExternalArrayType array_type, + void* external_pointer, + PretenureFlag pretenure) { int size = ExternalArray::kAlignedSize; AllocationSpace space = SelectSpace(size, OLD_DATA_SPACE, pretenure); HeapObject* result; - { AllocationResult allocation = AllocateRaw(size, space, OLD_DATA_SPACE); + { + AllocationResult allocation = AllocateRaw(size, space, OLD_DATA_SPACE); if (!allocation.To(&result)) return allocation; } - result->set_map_no_write_barrier( - MapForExternalArrayType(array_type)); + result->set_map_no_write_barrier(MapForExternalArrayType(array_type)); ExternalArray::cast(result)->set_length(length); ExternalArray::cast(result)->set_external_pointer(external_pointer); return result; } -static void ForFixedTypedArray(ExternalArrayType array_type, - int* element_size, +static void ForFixedTypedArray(ExternalArrayType array_type, int* element_size, ElementsKind* element_kind) { switch (array_type) { -#define TYPED_ARRAY_CASE(Type, type, TYPE, ctype, size) \ - case kExternal##Type##Array: \ - *element_size = size; \ - *element_kind = TYPE##_ELEMENTS; \ - return; +#define TYPED_ARRAY_CASE(Type, type, TYPE, ctype, size) \ + case kExternal##Type##Array: \ + *element_size = size; \ + *element_kind = TYPE##_ELEMENTS; \ + return; TYPED_ARRAYS(TYPED_ARRAY_CASE) #undef TYPED_ARRAY_CASE default: - *element_size = 0; // Bogus + *element_size = 0; // Bogus *element_kind = UINT8_ELEMENTS; // Bogus UNREACHABLE(); } @@ -3363,8 +3396,8 @@ AllocationResult Heap::AllocateFixedTypedArray(int length, int element_size; ElementsKind elements_kind; ForFixedTypedArray(array_type, &element_size, &elements_kind); - int size = OBJECT_POINTER_ALIGN( - length * element_size + FixedTypedArrayBase::kDataOffset); + int size = OBJECT_POINTER_ALIGN(length * element_size + + FixedTypedArrayBase::kDataOffset); #ifndef V8_HOST_ARCH_64_BIT if (array_type == kExternalFloat64Array) { size += kPointerSize; @@ -3388,35 +3421,34 @@ AllocationResult Heap::AllocateFixedTypedArray(int length, } -AllocationResult Heap::AllocateCode(int object_size, - bool immovable) { - ASSERT(IsAligned(static_cast<intptr_t>(object_size), kCodeAlignment)); - AllocationResult allocation; - // Large code objects and code objects which should stay at a fixed address - // are allocated in large object space. 
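// ----------------------------------------------------------------------------
// [Editor's note, not part of the diff] The rewritten AllocateCode below drops
// the up-front size test and always tries code space first; only when the
// caller needs an immovable object *and* the result landed off the first
// (never-moved) page does it overwrite the object with a filler and retry in
// large object space. A self-contained toy sketch of that discard-and-retry
// shape; Space, kImmovableBytes, OnImmovablePage and WriteFiller are stand-ins
// for the real heap machinery, not V8 APIs.
#include <cstddef>
#include <cstring>

struct Space {  // toy bump allocator
  char buffer[1 << 16];
  size_t top = 0;
  void* Allocate(size_t size) {
    if (top + size > sizeof(buffer)) return nullptr;
    void* result = buffer + top;
    top += size;
    return result;
  }
};

Space code_space, large_object_space;
constexpr size_t kImmovableBytes = 4096;  // the "first page" of code space

bool OnImmovablePage(const void* p) {
  const char* c = static_cast<const char*>(p);
  return c >= code_space.buffer && c < code_space.buffer + kImmovableBytes;
}

void WriteFiller(void* p, size_t size) {
  std::memset(p, 0, size);  // keep the dead region iterable, like a filler map
}

void* AllocateCodeLike(size_t size, bool immovable) {
  void* result = code_space.Allocate(size);
  if (result == nullptr) return nullptr;
  if (immovable && !OnImmovablePage(result)) {
    WriteFiller(result, size);  // discard the movable allocation
    result = large_object_space.Allocate(size);
  }
  return result;
}
// ----------------------------------------------------------------------------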
+AllocationResult Heap::AllocateCode(int object_size, bool immovable) { + DCHECK(IsAligned(static_cast<intptr_t>(object_size), kCodeAlignment)); + AllocationResult allocation = + AllocateRaw(object_size, CODE_SPACE, CODE_SPACE); + HeapObject* result; - bool force_lo_space = object_size > code_space()->AreaSize(); - if (force_lo_space) { - allocation = lo_space_->AllocateRaw(object_size, EXECUTABLE); - } else { - allocation = AllocateRaw(object_size, CODE_SPACE, CODE_SPACE); - } if (!allocation.To(&result)) return allocation; - if (immovable && !force_lo_space && - // Objects on the first page of each space are never moved. - !code_space_->FirstPage()->Contains(result->address())) { - // Discard the first code allocation, which was on a page where it could be - // moved. - CreateFillerObjectAt(result->address(), object_size); - allocation = lo_space_->AllocateRaw(object_size, EXECUTABLE); - if (!allocation.To(&result)) return allocation; + if (immovable) { + Address address = result->address(); + // Code objects which should stay at a fixed address are allocated either + // in the first page of code space (objects on the first page of each space + // are never moved) or in large object space. + if (!code_space_->FirstPage()->Contains(address) && + MemoryChunk::FromAddress(address)->owner()->identity() != LO_SPACE) { + // Discard the first code allocation, which was on a page where it could + // be moved. + CreateFillerObjectAt(result->address(), object_size); + allocation = lo_space_->AllocateRaw(object_size, EXECUTABLE); + if (!allocation.To(&result)) return allocation; + OnAllocationEvent(result, object_size); + } } result->set_map_no_write_barrier(code_map()); Code* code = Code::cast(result); - ASSERT(!isolate_->code_range()->exists() || - isolate_->code_range()->contains(code->address())); + DCHECK(isolate_->code_range() == NULL || !isolate_->code_range()->valid() || + isolate_->code_range()->contains(code->address())); code->set_gc_metadata(Smi::FromInt(0)); code->set_ic_age(global_ic_age_); return code; @@ -3436,15 +3468,10 @@ AllocationResult Heap::CopyCode(Code* code) { new_constant_pool = empty_constant_pool_array(); } + HeapObject* result; // Allocate an object the same size as the code object. int obj_size = code->Size(); - if (obj_size > code_space()->AreaSize()) { - allocation = lo_space_->AllocateRaw(obj_size, EXECUTABLE); - } else { - allocation = AllocateRaw(obj_size, CODE_SPACE, CODE_SPACE); - } - - HeapObject* result; + allocation = AllocateRaw(obj_size, CODE_SPACE, CODE_SPACE); if (!allocation.To(&result)) return allocation; // Copy code object. @@ -3457,8 +3484,8 @@ AllocationResult Heap::CopyCode(Code* code) { new_code->set_constant_pool(new_constant_pool); // Relocate the copy. - ASSERT(!isolate_->code_range()->exists() || - isolate_->code_range()->contains(code->address())); + DCHECK(isolate_->code_range() == NULL || !isolate_->code_range()->valid() || + isolate_->code_range()->contains(code->address())); new_code->Relocate(new_addr - old_addr); return new_code; } @@ -3468,7 +3495,8 @@ AllocationResult Heap::CopyCode(Code* code, Vector<byte> reloc_info) { // Allocate ByteArray and ConstantPoolArray before the Code object, so that we // do not risk leaving uninitialized Code object (and breaking the heap). 
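// ----------------------------------------------------------------------------
// [Editor's note, not part of the diff] The comment above is the ordering
// invariant this function preserves: every subsidiary allocation happens
// before the Code object itself, so a failure partway through just bails out
// and never exposes a half-initialized Code object to a heap walk. A hedged,
// generic sketch of the pattern (toy types, not V8 APIs); nothing is freed on
// the early returns because in a GC'd heap the orphaned pieces are simply
// collected later.
#include <new>

struct Piece {};
struct Container { Piece* a = nullptr; Piece* b = nullptr; };

// Toy failable allocators; nullptr plays the role of a failed AllocationResult.
Piece* AllocPiece() { return new (std::nothrow) Piece(); }
Container* AllocContainer() { return new (std::nothrow) Container(); }

Container* Build() {
  Piece* a = AllocPiece();           // dependencies first...
  if (a == nullptr) return nullptr;
  Piece* b = AllocPiece();
  if (b == nullptr) return nullptr;
  Container* c = AllocContainer();   // ...the composite object last,
  if (c == nullptr) return nullptr;  // so it is never seen half-built
  c->a = a;
  c->b = b;
  return c;
}
// ----------------------------------------------------------------------------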
ByteArray* reloc_info_array; - { AllocationResult allocation = + { + AllocationResult allocation = AllocateByteArray(reloc_info.length(), TENURED); if (!allocation.To(&reloc_info_array)) return allocation; } @@ -3477,8 +3505,7 @@ AllocationResult Heap::CopyCode(Code* code, Vector<byte> reloc_info) { code->constant_pool() != empty_constant_pool_array()) { // Copy the constant pool, since edits to the copied code may modify // the constant pool. - AllocationResult allocation = - CopyConstantPoolArray(code->constant_pool()); + AllocationResult allocation = CopyConstantPoolArray(code->constant_pool()); if (!allocation.To(&new_constant_pool)) return allocation; } else { new_constant_pool = empty_constant_pool_array(); @@ -3493,14 +3520,9 @@ AllocationResult Heap::CopyCode(Code* code, Vector<byte> reloc_info) { size_t relocation_offset = static_cast<size_t>(code->instruction_end() - old_addr); - AllocationResult allocation; - if (new_obj_size > code_space()->AreaSize()) { - allocation = lo_space_->AllocateRaw(new_obj_size, EXECUTABLE); - } else { - allocation = AllocateRaw(new_obj_size, CODE_SPACE, CODE_SPACE); - } - HeapObject* result; + AllocationResult allocation = + AllocateRaw(new_obj_size, CODE_SPACE, CODE_SPACE); if (!allocation.To(&result)) return allocation; // Copy code object. @@ -3516,13 +3538,12 @@ AllocationResult Heap::CopyCode(Code* code, Vector<byte> reloc_info) { new_code->set_constant_pool(new_constant_pool); // Copy patched rinfo. - CopyBytes(new_code->relocation_start(), - reloc_info.start(), + CopyBytes(new_code->relocation_start(), reloc_info.start(), static_cast<size_t>(reloc_info.length())); // Relocate the copy. - ASSERT(!isolate_->code_range()->exists() || - isolate_->code_range()->contains(code->address())); + DCHECK(isolate_->code_range() == NULL || !isolate_->code_range()->valid() || + isolate_->code_range()->contains(code->address())); new_code->Relocate(new_addr - old_addr); #ifdef VERIFY_HEAP @@ -3535,7 +3556,7 @@ AllocationResult Heap::CopyCode(Code* code, Vector<byte> reloc_info) { void Heap::InitializeAllocationMemento(AllocationMemento* memento, AllocationSite* allocation_site) { memento->set_map_no_write_barrier(allocation_memento_map()); - ASSERT(allocation_site->map() == allocation_site_map()); + DCHECK(allocation_site->map() == allocation_site_map()); memento->set_allocation_site(allocation_site, SKIP_WRITE_BARRIER); if (FLAG_allocation_site_pretenuring) { allocation_site->IncrementMementoCreateCount(); @@ -3544,9 +3565,9 @@ void Heap::InitializeAllocationMemento(AllocationMemento* memento, AllocationResult Heap::Allocate(Map* map, AllocationSpace space, - AllocationSite* allocation_site) { - ASSERT(gc_state_ == NOT_IN_GC); - ASSERT(map->instance_type() != MAP_TYPE); + AllocationSite* allocation_site) { + DCHECK(gc_state_ == NOT_IN_GC); + DCHECK(map->instance_type() != MAP_TYPE); // If allocation failures are disallowed, we may allocate in a different // space when new space is full and the object is not a large object. AllocationSpace retry_space = @@ -3569,60 +3590,7 @@ AllocationResult Heap::Allocate(Map* map, AllocationSpace space, } -AllocationResult Heap::AllocateArgumentsObject(Object* callee, int length) { - // To get fast allocation and map sharing for arguments objects we - // allocate them based on an arguments boilerplate. 
- - JSObject* boilerplate; - int arguments_object_size; - bool strict_mode_callee = callee->IsJSFunction() && - JSFunction::cast(callee)->shared()->strict_mode() == STRICT; - if (strict_mode_callee) { - boilerplate = - isolate()->context()->native_context()->strict_arguments_boilerplate(); - arguments_object_size = kStrictArgumentsObjectSize; - } else { - boilerplate = - isolate()->context()->native_context()->sloppy_arguments_boilerplate(); - arguments_object_size = kSloppyArgumentsObjectSize; - } - - // Check that the size of the boilerplate matches our - // expectations. The ArgumentsAccessStub::GenerateNewObject relies - // on the size being a known constant. - ASSERT(arguments_object_size == boilerplate->map()->instance_size()); - - // Do the allocation. - HeapObject* result; - { AllocationResult allocation = - AllocateRaw(arguments_object_size, NEW_SPACE, OLD_POINTER_SPACE); - if (!allocation.To(&result)) return allocation; - } - - // Copy the content. The arguments boilerplate doesn't have any - // fields that point to new space so it's safe to skip the write - // barrier here. - CopyBlock(result->address(), boilerplate->address(), JSObject::kHeaderSize); - - // Set the length property. - JSObject* js_obj = JSObject::cast(result); - js_obj->InObjectPropertyAtPut( - kArgumentsLengthIndex, Smi::FromInt(length), SKIP_WRITE_BARRIER); - // Set the callee property for sloppy mode arguments object only. - if (!strict_mode_callee) { - js_obj->InObjectPropertyAtPut(kArgumentsCalleeIndex, callee); - } - - // Check the state of the object - ASSERT(js_obj->HasFastProperties()); - ASSERT(js_obj->HasFastObjectElements()); - - return js_obj; -} - - -void Heap::InitializeJSObjectFromMap(JSObject* obj, - FixedArray* properties, +void Heap::InitializeJSObjectFromMap(JSObject* obj, FixedArray* properties, Map* map) { obj->set_properties(properties); obj->initialize_elements(); @@ -3641,10 +3609,10 @@ void Heap::InitializeJSObjectFromMap(JSObject* obj, // so that object accesses before the constructor completes (e.g. in the // debugger) will not cause a crash. if (map->constructor()->IsJSFunction() && - JSFunction::cast(map->constructor())->shared()-> - IsInobjectSlackTrackingInProgress()) { + JSFunction::cast(map->constructor()) + ->IsInobjectSlackTrackingInProgress()) { // We might want to shrink the object later. - ASSERT(obj->GetInternalFieldCount() == 0); + DCHECK(obj->GetInternalFieldCount() == 0); filler = Heap::one_pointer_filler_map(); } else { filler = Heap::undefined_value(); @@ -3654,25 +3622,24 @@ void Heap::InitializeJSObjectFromMap(JSObject* obj, AllocationResult Heap::AllocateJSObjectFromMap( - Map* map, - PretenureFlag pretenure, - bool allocate_properties, + Map* map, PretenureFlag pretenure, bool allocate_properties, AllocationSite* allocation_site) { // JSFunctions should be allocated using AllocateFunction to be // properly initialized. - ASSERT(map->instance_type() != JS_FUNCTION_TYPE); + DCHECK(map->instance_type() != JS_FUNCTION_TYPE); // Both types of global objects should be allocated using // AllocateGlobalObject to be properly initialized. - ASSERT(map->instance_type() != JS_GLOBAL_OBJECT_TYPE); - ASSERT(map->instance_type() != JS_BUILTINS_OBJECT_TYPE); + DCHECK(map->instance_type() != JS_GLOBAL_OBJECT_TYPE); + DCHECK(map->instance_type() != JS_BUILTINS_OBJECT_TYPE); // Allocate the backing storage for the properties. 
FixedArray* properties; if (allocate_properties) { int prop_size = map->InitialPropertiesLength(); - ASSERT(prop_size >= 0); - { AllocationResult allocation = AllocateFixedArray(prop_size, pretenure); + DCHECK(prop_size >= 0); + { + AllocationResult allocation = AllocateFixedArray(prop_size, pretenure); if (!allocation.To(&properties)) return allocation; } } else { @@ -3688,8 +3655,7 @@ AllocationResult Heap::AllocateJSObjectFromMap( // Initialize the JSObject. InitializeJSObjectFromMap(js_obj, properties, map); - ASSERT(js_obj->HasFastElements() || - js_obj->HasExternalArrayElements() || + DCHECK(js_obj->HasFastElements() || js_obj->HasExternalArrayElements() || js_obj->HasFixedTypedArrayElements()); return js_obj; } @@ -3698,7 +3664,7 @@ AllocationResult Heap::AllocateJSObjectFromMap( AllocationResult Heap::AllocateJSObject(JSFunction* constructor, PretenureFlag pretenure, AllocationSite* allocation_site) { - ASSERT(constructor->has_initial_map()); + DCHECK(constructor->has_initial_map()); // Allocate the object based on the constructors initial map. AllocationResult allocation = AllocateJSObjectFromMap( @@ -3706,7 +3672,7 @@ AllocationResult Heap::AllocateJSObject(JSFunction* constructor, #ifdef DEBUG // Make sure result is NOT a global object if valid. HeapObject* obj; - ASSERT(!allocation.To(&obj) || !obj->IsGlobalObject()); + DCHECK(!allocation.To(&obj) || !obj->IsGlobalObject()); #endif return allocation; } @@ -3715,48 +3681,44 @@ AllocationResult Heap::AllocateJSObject(JSFunction* constructor, AllocationResult Heap::CopyJSObject(JSObject* source, AllocationSite* site) { // Never used to copy functions. If functions need to be copied we // have to be careful to clear the literals array. - SLOW_ASSERT(!source->IsJSFunction()); + SLOW_DCHECK(!source->IsJSFunction()); // Make the clone. Map* map = source->map(); int object_size = map->instance_size(); HeapObject* clone; - ASSERT(site == NULL || AllocationSite::CanTrack(map->instance_type())); + DCHECK(site == NULL || AllocationSite::CanTrack(map->instance_type())); WriteBarrierMode wb_mode = UPDATE_WRITE_BARRIER; // If we're forced to always allocate, we use the general allocation // functions which may leave us with an object in old space. if (always_allocate()) { - { AllocationResult allocation = + { + AllocationResult allocation = AllocateRaw(object_size, NEW_SPACE, OLD_POINTER_SPACE); if (!allocation.To(&clone)) return allocation; } Address clone_address = clone->address(); - CopyBlock(clone_address, - source->address(), - object_size); + CopyBlock(clone_address, source->address(), object_size); // Update write barrier for all fields that lie beyond the header. - RecordWrites(clone_address, - JSObject::kHeaderSize, + RecordWrites(clone_address, JSObject::kHeaderSize, (object_size - JSObject::kHeaderSize) / kPointerSize); } else { wb_mode = SKIP_WRITE_BARRIER; - { int adjusted_object_size = site != NULL - ? object_size + AllocationMemento::kSize - : object_size; - AllocationResult allocation = + { + int adjusted_object_size = + site != NULL ? object_size + AllocationMemento::kSize : object_size; + AllocationResult allocation = AllocateRaw(adjusted_object_size, NEW_SPACE, NEW_SPACE); if (!allocation.To(&clone)) return allocation; } - SLOW_ASSERT(InNewSpace(clone)); + SLOW_DCHECK(InNewSpace(clone)); // Since we know the clone is allocated in new space, we can copy // the contents without worrying about updating the write barrier. 
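// ----------------------------------------------------------------------------
// [Editor's note, not part of the diff] The comment above leans on the
// generational invariant: the write barrier only has to remember old-to-new
// pointers, and a clone known to live in new space cannot create such an
// edge, so a raw block copy is safe. A hedged toy sketch of that barrier
// logic (InNewSpace/RememberSlot are illustrative stand-ins, not V8 APIs):
#include <cstdint>

struct Obj {};

// Toy space model: addresses below kNewSpaceLimit count as new space.
constexpr uintptr_t kNewSpaceLimit = uintptr_t{1} << 20;
bool InNewSpace(const void* p) {
  return reinterpret_cast<uintptr_t>(p) < kNewSpaceLimit;
}

void RememberSlot(Obj** /*slot*/) { /* record in the remembered set */ }

void StoreField(Obj** slot, Obj* value) {
  *slot = value;
  // Only an old-space slot pointing at a new-space object must be recorded;
  // when the holder itself is in new space the barrier is a no-op, which is
  // what lets CopyJSObject use CopyBlock with SKIP_WRITE_BARRIER here.
  if (!InNewSpace(slot) && InNewSpace(value)) RememberSlot(slot);
}
// ----------------------------------------------------------------------------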
- CopyBlock(clone->address(), - source->address(), - object_size); + CopyBlock(clone->address(), source->address(), object_size); if (site != NULL) { AllocationMemento* alloc_memento = reinterpret_cast<AllocationMemento*>( @@ -3765,14 +3727,15 @@ AllocationResult Heap::CopyJSObject(JSObject* source, AllocationSite* site) { } } - SLOW_ASSERT( - JSObject::cast(clone)->GetElementsKind() == source->GetElementsKind()); + SLOW_DCHECK(JSObject::cast(clone)->GetElementsKind() == + source->GetElementsKind()); FixedArrayBase* elements = FixedArrayBase::cast(source->elements()); FixedArray* properties = FixedArray::cast(source->properties()); // Update elements if necessary. if (elements->length() > 0) { FixedArrayBase* elem; - { AllocationResult allocation; + { + AllocationResult allocation; if (elements->map() == fixed_cow_array_map()) { allocation = FixedArray::cast(elements); } else if (source->HasFastDoubleElements()) { @@ -3787,7 +3750,8 @@ AllocationResult Heap::CopyJSObject(JSObject* source, AllocationSite* site) { // Update properties if necessary. if (properties->length() > 0) { FixedArray* prop; - { AllocationResult allocation = CopyFixedArray(properties); + { + AllocationResult allocation = CopyFixedArray(properties); if (!allocation.To(&prop)) return allocation; } JSObject::cast(clone)->set_properties(prop, wb_mode); @@ -3797,82 +3761,22 @@ AllocationResult Heap::CopyJSObject(JSObject* source, AllocationSite* site) { } -AllocationResult Heap::AllocateStringFromUtf8Slow(Vector<const char> string, - int non_ascii_start, - PretenureFlag pretenure) { - // Continue counting the number of characters in the UTF-8 string, starting - // from the first non-ascii character or word. - Access<UnicodeCache::Utf8Decoder> - decoder(isolate_->unicode_cache()->utf8_decoder()); - decoder->Reset(string.start() + non_ascii_start, - string.length() - non_ascii_start); - int utf16_length = decoder->Utf16Length(); - ASSERT(utf16_length > 0); - // Allocate string. - HeapObject* result; - { - int chars = non_ascii_start + utf16_length; - AllocationResult allocation = AllocateRawTwoByteString(chars, pretenure); - if (!allocation.To(&result) || result->IsException()) { - return allocation; - } - } - // Copy ascii portion. - uint16_t* data = SeqTwoByteString::cast(result)->GetChars(); - if (non_ascii_start != 0) { - const char* ascii_data = string.start(); - for (int i = 0; i < non_ascii_start; i++) { - *data++ = *ascii_data++; - } - } - // Now write the remainder. - decoder->WriteUtf16(data, utf16_length); - return result; -} - - -AllocationResult Heap::AllocateStringFromTwoByte(Vector<const uc16> string, - PretenureFlag pretenure) { - // Check if the string is an ASCII string. - HeapObject* result; - int length = string.length(); - const uc16* start = string.start(); - - if (String::IsOneByte(start, length)) { - AllocationResult allocation = AllocateRawOneByteString(length, pretenure); - if (!allocation.To(&result) || result->IsException()) { - return allocation; - } - CopyChars(SeqOneByteString::cast(result)->GetChars(), start, length); - } else { // It's not a one byte string. 
- AllocationResult allocation = AllocateRawTwoByteString(length, pretenure); - if (!allocation.To(&result) || result->IsException()) { - return allocation; - } - CopyChars(SeqTwoByteString::cast(result)->GetChars(), start, length); - } - return result; -} - - -static inline void WriteOneByteData(Vector<const char> vector, - uint8_t* chars, +static inline void WriteOneByteData(Vector<const char> vector, uint8_t* chars, int len) { // Only works for ascii. - ASSERT(vector.length() == len); - OS::MemCopy(chars, vector.start(), len); + DCHECK(vector.length() == len); + MemCopy(chars, vector.start(), len); } -static inline void WriteTwoByteData(Vector<const char> vector, - uint16_t* chars, +static inline void WriteTwoByteData(Vector<const char> vector, uint16_t* chars, int len) { const uint8_t* stream = reinterpret_cast<const uint8_t*>(vector.start()); unsigned stream_length = vector.length(); while (stream_length != 0) { unsigned consumed = 0; uint32_t c = unibrow::Utf8::ValueOf(stream, stream_length, &consumed); - ASSERT(c != unibrow::Utf8::kBadChar); - ASSERT(consumed <= stream_length); + DCHECK(c != unibrow::Utf8::kBadChar); + DCHECK(consumed <= stream_length); stream_length -= consumed; stream += consumed; if (c > unibrow::Utf16::kMaxNonSurrogateCharCode) { @@ -3886,34 +3790,33 @@ static inline void WriteTwoByteData(Vector<const char> vector, *chars++ = c; } } - ASSERT(stream_length == 0); - ASSERT(len == 0); + DCHECK(stream_length == 0); + DCHECK(len == 0); } static inline void WriteOneByteData(String* s, uint8_t* chars, int len) { - ASSERT(s->length() == len); + DCHECK(s->length() == len); String::WriteToFlat(s, chars, 0, len); } static inline void WriteTwoByteData(String* s, uint16_t* chars, int len) { - ASSERT(s->length() == len); + DCHECK(s->length() == len); String::WriteToFlat(s, chars, 0, len); } -template<bool is_one_byte, typename T> -AllocationResult Heap::AllocateInternalizedStringImpl( - T t, int chars, uint32_t hash_field) { - ASSERT(chars >= 0); +template <bool is_one_byte, typename T> +AllocationResult Heap::AllocateInternalizedStringImpl(T t, int chars, + uint32_t hash_field) { + DCHECK(chars >= 0); // Compute map and object size. int size; Map* map; - if (chars < 0 || chars > String::kMaxLength) { - return isolate()->ThrowInvalidStringLength(); - } + DCHECK_LE(0, chars); + DCHECK_GE(String::kMaxLength, chars); if (is_one_byte) { map = ascii_internalized_string_map(); size = SeqOneByteString::SizeFor(chars); @@ -3925,7 +3828,8 @@ AllocationResult Heap::AllocateInternalizedStringImpl( // Allocate string. HeapObject* result; - { AllocationResult allocation = AllocateRaw(size, space, OLD_DATA_SPACE); + { + AllocationResult allocation = AllocateRaw(size, space, OLD_DATA_SPACE); if (!allocation.To(&result)) return allocation; } @@ -3935,7 +3839,7 @@ AllocationResult Heap::AllocateInternalizedStringImpl( answer->set_length(chars); answer->set_hash_field(hash_field); - ASSERT_EQ(size, answer->Size()); + DCHECK_EQ(size, answer->Size()); if (is_one_byte) { WriteOneByteData(t, SeqOneByteString::cast(answer)->GetChars(), chars); @@ -3947,28 +3851,27 @@ AllocationResult Heap::AllocateInternalizedStringImpl( // Need explicit instantiations. 
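// ----------------------------------------------------------------------------
// [Editor's note, not part of the diff] "Need explicit instantiations" because
// AllocateInternalizedStringImpl is defined in this .cc file rather than in a
// header: any specialization that other translation units call must be
// instantiated here, or its symbol is never emitted and the link fails. A
// minimal sketch of the idiom, with hypothetical names:

// encode.h (sketch)
template <bool kOneByte>
int EncodedSize(int chars);

// encode.cc (sketch) -- the only place the definition is visible
template <bool kOneByte>
int EncodedSize(int chars) {
  return kOneByte ? chars : 2 * chars;
}
// Explicit instantiations: emit both specializations into this object file so
// that callers which only saw the header still link.
template int EncodedSize<true>(int);
template int EncodedSize<false>(int);
// ----------------------------------------------------------------------------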
-template -AllocationResult Heap::AllocateInternalizedStringImpl<true>( - String*, int, uint32_t); -template -AllocationResult Heap::AllocateInternalizedStringImpl<false>( - String*, int, uint32_t); -template -AllocationResult Heap::AllocateInternalizedStringImpl<false>( +template AllocationResult Heap::AllocateInternalizedStringImpl<true>(String*, + int, + uint32_t); +template AllocationResult Heap::AllocateInternalizedStringImpl<false>(String*, + int, + uint32_t); +template AllocationResult Heap::AllocateInternalizedStringImpl<false>( Vector<const char>, int, uint32_t); AllocationResult Heap::AllocateRawOneByteString(int length, PretenureFlag pretenure) { - if (length < 0 || length > String::kMaxLength) { - return isolate()->ThrowInvalidStringLength(); - } + DCHECK_LE(0, length); + DCHECK_GE(String::kMaxLength, length); int size = SeqOneByteString::SizeFor(length); - ASSERT(size <= SeqOneByteString::kMaxSize); + DCHECK(size <= SeqOneByteString::kMaxSize); AllocationSpace space = SelectSpace(size, OLD_DATA_SPACE, pretenure); HeapObject* result; - { AllocationResult allocation = AllocateRaw(size, space, OLD_DATA_SPACE); + { + AllocationResult allocation = AllocateRaw(size, space, OLD_DATA_SPACE); if (!allocation.To(&result)) return allocation; } @@ -3976,7 +3879,7 @@ AllocationResult Heap::AllocateRawOneByteString(int length, result->set_map_no_write_barrier(ascii_string_map()); String::cast(result)->set_length(length); String::cast(result)->set_hash_field(String::kEmptyHashField); - ASSERT_EQ(size, HeapObject::cast(result)->Size()); + DCHECK_EQ(size, HeapObject::cast(result)->Size()); return result; } @@ -3984,15 +3887,15 @@ AllocationResult Heap::AllocateRawOneByteString(int length, AllocationResult Heap::AllocateRawTwoByteString(int length, PretenureFlag pretenure) { - if (length < 0 || length > String::kMaxLength) { - return isolate()->ThrowInvalidStringLength(); - } + DCHECK_LE(0, length); + DCHECK_GE(String::kMaxLength, length); int size = SeqTwoByteString::SizeFor(length); - ASSERT(size <= SeqTwoByteString::kMaxSize); + DCHECK(size <= SeqTwoByteString::kMaxSize); AllocationSpace space = SelectSpace(size, OLD_DATA_SPACE, pretenure); HeapObject* result; - { AllocationResult allocation = AllocateRaw(size, space, OLD_DATA_SPACE); + { + AllocationResult allocation = AllocateRaw(size, space, OLD_DATA_SPACE); if (!allocation.To(&result)) return allocation; } @@ -4000,7 +3903,7 @@ AllocationResult Heap::AllocateRawTwoByteString(int length, result->set_map_no_write_barrier(string_map()); String::cast(result)->set_length(length); String::cast(result)->set_hash_field(String::kEmptyHashField); - ASSERT_EQ(size, HeapObject::cast(result)->Size()); + DCHECK_EQ(size, HeapObject::cast(result)->Size()); return result; } @@ -4008,7 +3911,8 @@ AllocationResult Heap::AllocateRawTwoByteString(int length, AllocationResult Heap::AllocateEmptyFixedArray() { int size = FixedArray::SizeFor(0); HeapObject* result; - { AllocationResult allocation = + { + AllocationResult allocation = AllocateRaw(size, OLD_DATA_SPACE, OLD_DATA_SPACE); if (!allocation.To(&result)) return allocation; } @@ -4032,7 +3936,8 @@ AllocationResult Heap::CopyAndTenureFixedCOWArray(FixedArray* src) { int len = src->length(); HeapObject* obj; - { AllocationResult allocation = AllocateRawFixedArray(len, TENURED); + { + AllocationResult allocation = AllocateRawFixedArray(len, TENURED); if (!allocation.To(&obj)) return allocation; } obj->set_map_no_write_barrier(fixed_array_map()); @@ -4061,13 +3966,13 @@ AllocationResult 
Heap::AllocateEmptyFixedTypedArray( AllocationResult Heap::CopyFixedArrayWithMap(FixedArray* src, Map* map) { int len = src->length(); HeapObject* obj; - { AllocationResult allocation = AllocateRawFixedArray(len, NOT_TENURED); + { + AllocationResult allocation = AllocateRawFixedArray(len, NOT_TENURED); if (!allocation.To(&obj)) return allocation; } if (InNewSpace(obj)) { obj->set_map_no_write_barrier(map); - CopyBlock(obj->address() + kPointerSize, - src->address() + kPointerSize, + CopyBlock(obj->address() + kPointerSize, src->address() + kPointerSize, FixedArray::SizeFor(len) - kPointerSize); return obj; } @@ -4087,37 +3992,39 @@ AllocationResult Heap::CopyFixedDoubleArrayWithMap(FixedDoubleArray* src, Map* map) { int len = src->length(); HeapObject* obj; - { AllocationResult allocation = AllocateRawFixedDoubleArray(len, NOT_TENURED); + { + AllocationResult allocation = AllocateRawFixedDoubleArray(len, NOT_TENURED); if (!allocation.To(&obj)) return allocation; } obj->set_map_no_write_barrier(map); - CopyBlock( - obj->address() + FixedDoubleArray::kLengthOffset, - src->address() + FixedDoubleArray::kLengthOffset, - FixedDoubleArray::SizeFor(len) - FixedDoubleArray::kLengthOffset); + CopyBlock(obj->address() + FixedDoubleArray::kLengthOffset, + src->address() + FixedDoubleArray::kLengthOffset, + FixedDoubleArray::SizeFor(len) - FixedDoubleArray::kLengthOffset); return obj; } AllocationResult Heap::CopyConstantPoolArrayWithMap(ConstantPoolArray* src, Map* map) { - int int64_entries = src->count_of_int64_entries(); - int code_ptr_entries = src->count_of_code_ptr_entries(); - int heap_ptr_entries = src->count_of_heap_ptr_entries(); - int int32_entries = src->count_of_int32_entries(); HeapObject* obj; - { AllocationResult allocation = - AllocateConstantPoolArray(int64_entries, code_ptr_entries, - heap_ptr_entries, int32_entries); + if (src->is_extended_layout()) { + ConstantPoolArray::NumberOfEntries small(src, + ConstantPoolArray::SMALL_SECTION); + ConstantPoolArray::NumberOfEntries extended( + src, ConstantPoolArray::EXTENDED_SECTION); + AllocationResult allocation = + AllocateExtendedConstantPoolArray(small, extended); + if (!allocation.To(&obj)) return allocation; + } else { + ConstantPoolArray::NumberOfEntries small(src, + ConstantPoolArray::SMALL_SECTION); + AllocationResult allocation = AllocateConstantPoolArray(small); if (!allocation.To(&obj)) return allocation; } obj->set_map_no_write_barrier(map); - int size = ConstantPoolArray::SizeFor( - int64_entries, code_ptr_entries, heap_ptr_entries, int32_entries); - CopyBlock( - obj->address() + ConstantPoolArray::kLengthOffset, - src->address() + ConstantPoolArray::kLengthOffset, - size - ConstantPoolArray::kLengthOffset); + CopyBlock(obj->address() + ConstantPoolArray::kFirstEntryOffset, + src->address() + ConstantPoolArray::kFirstEntryOffset, + src->size() - ConstantPoolArray::kFirstEntryOffset); return obj; } @@ -4137,13 +4044,14 @@ AllocationResult Heap::AllocateRawFixedArray(int length, AllocationResult Heap::AllocateFixedArrayWithFiller(int length, PretenureFlag pretenure, Object* filler) { - ASSERT(length >= 0); - ASSERT(empty_fixed_array()->IsFixedArray()); + DCHECK(length >= 0); + DCHECK(empty_fixed_array()->IsFixedArray()); if (length == 0) return empty_fixed_array(); - ASSERT(!InNewSpace(filler)); + DCHECK(!InNewSpace(filler)); HeapObject* result; - { AllocationResult allocation = AllocateRawFixedArray(length, pretenure); + { + AllocationResult allocation = AllocateRawFixedArray(length, pretenure); if (!allocation.To(&result)) 
return allocation; } @@ -4164,7 +4072,8 @@ AllocationResult Heap::AllocateUninitializedFixedArray(int length) { if (length == 0) return empty_fixed_array(); HeapObject* obj; - { AllocationResult allocation = AllocateRawFixedArray(length, NOT_TENURED); + { + AllocationResult allocation = AllocateRawFixedArray(length, NOT_TENURED); if (!allocation.To(&obj)) return allocation; } @@ -4175,8 +4084,7 @@ AllocationResult Heap::AllocateUninitializedFixedArray(int length) { AllocationResult Heap::AllocateUninitializedFixedDoubleArray( - int length, - PretenureFlag pretenure) { + int length, PretenureFlag pretenure) { if (length == 0) return empty_fixed_array(); HeapObject* elements; @@ -4201,7 +4109,8 @@ AllocationResult Heap::AllocateRawFixedDoubleArray(int length, AllocationSpace space = SelectSpace(size, OLD_DATA_SPACE, pretenure); HeapObject* object; - { AllocationResult allocation = AllocateRaw(size, space, OLD_DATA_SPACE); + { + AllocationResult allocation = AllocateRaw(size, space, OLD_DATA_SPACE); if (!allocation.To(&object)) return allocation; } @@ -4209,68 +4118,67 @@ AllocationResult Heap::AllocateRawFixedDoubleArray(int length, } -AllocationResult Heap::AllocateConstantPoolArray(int number_of_int64_entries, - int number_of_code_ptr_entries, - int number_of_heap_ptr_entries, - int number_of_int32_entries) { - CHECK(number_of_int64_entries >= 0 && - number_of_int64_entries <= ConstantPoolArray::kMaxEntriesPerType && - number_of_code_ptr_entries >= 0 && - number_of_code_ptr_entries <= ConstantPoolArray::kMaxEntriesPerType && - number_of_heap_ptr_entries >= 0 && - number_of_heap_ptr_entries <= ConstantPoolArray::kMaxEntriesPerType && - number_of_int32_entries >= 0 && - number_of_int32_entries <= ConstantPoolArray::kMaxEntriesPerType); - int size = ConstantPoolArray::SizeFor(number_of_int64_entries, - number_of_code_ptr_entries, - number_of_heap_ptr_entries, - number_of_int32_entries); +AllocationResult Heap::AllocateConstantPoolArray( + const ConstantPoolArray::NumberOfEntries& small) { + CHECK(small.are_in_range(0, ConstantPoolArray::kMaxSmallEntriesPerType)); + int size = ConstantPoolArray::SizeFor(small); #ifndef V8_HOST_ARCH_64_BIT size += kPointerSize; #endif AllocationSpace space = SelectSpace(size, OLD_POINTER_SPACE, TENURED); HeapObject* object; - { AllocationResult allocation = AllocateRaw(size, space, OLD_POINTER_SPACE); + { + AllocationResult allocation = AllocateRaw(size, space, OLD_POINTER_SPACE); if (!allocation.To(&object)) return allocation; } object = EnsureDoubleAligned(this, object, size); object->set_map_no_write_barrier(constant_pool_array_map()); ConstantPoolArray* constant_pool = ConstantPoolArray::cast(object); - constant_pool->Init(number_of_int64_entries, - number_of_code_ptr_entries, - number_of_heap_ptr_entries, - number_of_int32_entries); - if (number_of_code_ptr_entries > 0) { - int offset = - constant_pool->OffsetOfElementAt(constant_pool->first_code_ptr_index()); - MemsetPointer( - reinterpret_cast<Address*>(HeapObject::RawField(constant_pool, offset)), - isolate()->builtins()->builtin(Builtins::kIllegal)->entry(), - number_of_code_ptr_entries); - } - if (number_of_heap_ptr_entries > 0) { - int offset = - constant_pool->OffsetOfElementAt(constant_pool->first_heap_ptr_index()); - MemsetPointer( - HeapObject::RawField(constant_pool, offset), - undefined_value(), - number_of_heap_ptr_entries); + constant_pool->Init(small); + constant_pool->ClearPtrEntries(isolate()); + return constant_pool; +} + + +AllocationResult Heap::AllocateExtendedConstantPoolArray( + 
const ConstantPoolArray::NumberOfEntries& small, + const ConstantPoolArray::NumberOfEntries& extended) { + CHECK(small.are_in_range(0, ConstantPoolArray::kMaxSmallEntriesPerType)); + CHECK(extended.are_in_range(0, kMaxInt)); + int size = ConstantPoolArray::SizeForExtended(small, extended); +#ifndef V8_HOST_ARCH_64_BIT + size += kPointerSize; +#endif + AllocationSpace space = SelectSpace(size, OLD_POINTER_SPACE, TENURED); + + HeapObject* object; + { + AllocationResult allocation = AllocateRaw(size, space, OLD_POINTER_SPACE); + if (!allocation.To(&object)) return allocation; } + object = EnsureDoubleAligned(this, object, size); + object->set_map_no_write_barrier(constant_pool_array_map()); + + ConstantPoolArray* constant_pool = ConstantPoolArray::cast(object); + constant_pool->InitExtended(small, extended); + constant_pool->ClearPtrEntries(isolate()); return constant_pool; } AllocationResult Heap::AllocateEmptyConstantPoolArray() { - int size = ConstantPoolArray::SizeFor(0, 0, 0, 0); + ConstantPoolArray::NumberOfEntries small(0, 0, 0, 0); + int size = ConstantPoolArray::SizeFor(small); HeapObject* result; - { AllocationResult allocation = + { + AllocationResult allocation = AllocateRaw(size, OLD_DATA_SPACE, OLD_DATA_SPACE); if (!allocation.To(&result)) return allocation; } result->set_map_no_write_barrier(constant_pool_array_map()); - ConstantPoolArray::cast(result)->Init(0, 0, 0, 0); + ConstantPoolArray::cast(result)->Init(small); return result; } @@ -4295,12 +4203,12 @@ AllocationResult Heap::AllocateSymbol() { } while (hash == 0 && attempts < 30); if (hash == 0) hash = 1; // never return 0 - Symbol::cast(result)->set_hash_field( - Name::kIsNotArrayIndexMask | (hash << Name::kHashShift)); + Symbol::cast(result) + ->set_hash_field(Name::kIsNotArrayIndexMask | (hash << Name::kHashShift)); Symbol::cast(result)->set_name(undefined_value()); Symbol::cast(result)->set_flags(Smi::FromInt(0)); - ASSERT(!Symbol::cast(result)->is_private()); + DCHECK(!Symbol::cast(result)->is_private()); return result; } @@ -4309,8 +4217,10 @@ AllocationResult Heap::AllocateStruct(InstanceType type) { Map* map; switch (type) { #define MAKE_CASE(NAME, Name, name) \ - case NAME##_TYPE: map = name##_map(); break; -STRUCT_LIST(MAKE_CASE) + case NAME##_TYPE: \ + map = name##_map(); \ + break; + STRUCT_LIST(MAKE_CASE) #undef MAKE_CASE default: UNREACHABLE(); @@ -4319,7 +4229,8 @@ STRUCT_LIST(MAKE_CASE) int size = map->instance_size(); AllocationSpace space = SelectSpace(size, OLD_POINTER_SPACE, TENURED); Struct* result; - { AllocationResult allocation = Allocate(map, space); + { + AllocationResult allocation = Allocate(map, space); if (!allocation.To(&result)) return allocation; } result->InitializeBody(size); @@ -4328,23 +4239,29 @@ STRUCT_LIST(MAKE_CASE) bool Heap::IsHeapIterable() { - return (!old_pointer_space()->was_swept_conservatively() && - !old_data_space()->was_swept_conservatively()); + // TODO(hpayer): This function is not correct. Allocation folding in old + // space breaks the iterability.
+ return (old_pointer_space()->swept_precisely() && + old_data_space()->swept_precisely() && + new_space_top_after_last_gc_ == new_space()->top()); } -void Heap::EnsureHeapIsIterable() { - ASSERT(AllowHeapAllocation::IsAllowed()); +void Heap::MakeHeapIterable() { + DCHECK(AllowHeapAllocation::IsAllowed()); if (!IsHeapIterable()) { - CollectAllGarbage(kMakeHeapIterableMask, "Heap::EnsureHeapIsIterable"); + CollectAllGarbage(kMakeHeapIterableMask, "Heap::MakeHeapIterable"); + } + if (mark_compact_collector()->sweeping_in_progress()) { + mark_compact_collector()->EnsureSweepingCompleted(); } - ASSERT(IsHeapIterable()); + DCHECK(IsHeapIterable()); } void Heap::AdvanceIdleIncrementalMarking(intptr_t step_size) { incremental_marking()->Step(step_size, - IncrementalMarking::NO_GC_VIA_STACK_GUARD); + IncrementalMarking::NO_GC_VIA_STACK_GUARD, true); if (incremental_marking()->IsComplete()) { bool uncommit = false; @@ -4353,7 +4270,8 @@ void Heap::AdvanceIdleIncrementalMarking(intptr_t step_size) { isolate_->compilation_cache()->Clear(); uncommit = true; } - CollectAllGarbage(kNoGCFlags, "idle notification: finalize incremental"); + CollectAllGarbage(kReduceMemoryFootprintMask, + "idle notification: finalize incremental"); mark_sweeps_since_idle_round_started_++; gc_count_at_last_idle_gc_ = gc_count_; if (uncommit) { @@ -4365,6 +4283,9 @@ void Heap::AdvanceIdleIncrementalMarking(intptr_t step_size) { bool Heap::IdleNotification(int hint) { + // If incremental marking is off, we do not perform idle notification. + if (!FLAG_incremental_marking) return true; + // Hints greater than this value indicate that // the embedder is requesting a lot of GC work. const int kMaxHint = 1000; @@ -4375,8 +4296,11 @@ bool Heap::IdleNotification(int hint) { // The size factor is in range [5..250]. The numbers here are chosen from // experiments. If you changes them, make sure to test with // chrome/performance_ui_tests --gtest_filter="GeneralMixMemoryTest.* - intptr_t step_size = - size_factor * IncrementalMarking::kAllocatedThreshold; + intptr_t step_size = size_factor * IncrementalMarking::kAllocatedThreshold; + + isolate()->counters()->gc_idle_time_allotted_in_ms()->AddSample(hint); + HistogramTimerScope idle_notification_scope( + isolate_->counters()->gc_idle_notification()); if (contexts_disposed_ > 0) { contexts_disposed_ = 0; @@ -4397,10 +4321,6 @@ bool Heap::IdleNotification(int hint) { return false; } - if (!FLAG_incremental_marking || Serializer::enabled(isolate_)) { - return IdleGlobalGC(); - } - // By doing small chunks of GC work in each IdleNotification, // perform a round of incremental GCs and after that wait until // the mutator creates enough garbage to justify a new round. @@ -4417,8 +4337,8 @@ bool Heap::IdleNotification(int hint) { } } - int remaining_mark_sweeps = kMaxMarkSweepsInIdleRound - - mark_sweeps_since_idle_round_started_; + int remaining_mark_sweeps = + kMaxMarkSweepsInIdleRound - mark_sweeps_since_idle_round_started_; if (incremental_marking()->IsStopped()) { // If there are no more than two GCs left in this idle round and we are @@ -4447,74 +4367,14 @@ bool Heap::IdleNotification(int hint) { // If the IdleNotifcation is called with a large hint we will wait for // the sweepter threads here. 
if (hint >= kMinHintForFullGC && - mark_compact_collector()->IsConcurrentSweepingInProgress()) { - mark_compact_collector()->WaitUntilSweepingCompleted(); + mark_compact_collector()->sweeping_in_progress()) { + mark_compact_collector()->EnsureSweepingCompleted(); } return false; } -bool Heap::IdleGlobalGC() { - static const int kIdlesBeforeScavenge = 4; - static const int kIdlesBeforeMarkSweep = 7; - static const int kIdlesBeforeMarkCompact = 8; - static const int kMaxIdleCount = kIdlesBeforeMarkCompact + 1; - static const unsigned int kGCsBetweenCleanup = 4; - - if (!last_idle_notification_gc_count_init_) { - last_idle_notification_gc_count_ = gc_count_; - last_idle_notification_gc_count_init_ = true; - } - - bool uncommit = true; - bool finished = false; - - // Reset the number of idle notifications received when a number of - // GCs have taken place. This allows another round of cleanup based - // on idle notifications if enough work has been carried out to - // provoke a number of garbage collections. - if (gc_count_ - last_idle_notification_gc_count_ < kGCsBetweenCleanup) { - number_idle_notifications_ = - Min(number_idle_notifications_ + 1, kMaxIdleCount); - } else { - number_idle_notifications_ = 0; - last_idle_notification_gc_count_ = gc_count_; - } - - if (number_idle_notifications_ == kIdlesBeforeScavenge) { - CollectGarbage(NEW_SPACE, "idle notification"); - new_space_.Shrink(); - last_idle_notification_gc_count_ = gc_count_; - } else if (number_idle_notifications_ == kIdlesBeforeMarkSweep) { - // Before doing the mark-sweep collections we clear the - // compilation cache to avoid hanging on to source code and - // generated code for cached functions. - isolate_->compilation_cache()->Clear(); - - CollectAllGarbage(kReduceMemoryFootprintMask, "idle notification"); - new_space_.Shrink(); - last_idle_notification_gc_count_ = gc_count_; - - } else if (number_idle_notifications_ == kIdlesBeforeMarkCompact) { - CollectAllGarbage(kReduceMemoryFootprintMask, "idle notification"); - new_space_.Shrink(); - last_idle_notification_gc_count_ = gc_count_; - number_idle_notifications_ = 0; - finished = true; - } else if (number_idle_notifications_ > kIdlesBeforeMarkCompact) { - // If we have received more than kIdlesBeforeMarkCompact idle - // notifications we do not perform any cleanup because we don't - // expect to gain much by doing so. - finished = true; - } - - if (uncommit) UncommitFromSpace(); - - return finished; -} - - #ifdef DEBUG void Heap::Print() { @@ -4543,8 +4403,8 @@ void Heap::ReportCodeStatistics(const char* title) { // just-completed scavenge collection). 
void Heap::ReportHeapStatistics(const char* title) { USE(title); - PrintF(">>>>>> =============== %s (%d) =============== >>>>>>\n", - title, gc_count_); + PrintF(">>>>>> =============== %s (%d) =============== >>>>>>\n", title, + gc_count_); PrintF("old_generation_allocation_limit_ %" V8_PTR_PREFIX "d\n", old_generation_allocation_limit_); @@ -4576,22 +4436,18 @@ void Heap::ReportHeapStatistics(const char* title) { #endif // DEBUG -bool Heap::Contains(HeapObject* value) { - return Contains(value->address()); -} +bool Heap::Contains(HeapObject* value) { return Contains(value->address()); } bool Heap::Contains(Address addr) { if (isolate_->memory_allocator()->IsOutsideAllocatedSpace(addr)) return false; return HasBeenSetUp() && - (new_space_.ToSpaceContains(addr) || - old_pointer_space_->Contains(addr) || - old_data_space_->Contains(addr) || - code_space_->Contains(addr) || - map_space_->Contains(addr) || - cell_space_->Contains(addr) || - property_cell_space_->Contains(addr) || - lo_space_->SlowContains(addr)); + (new_space_.ToSpaceContains(addr) || + old_pointer_space_->Contains(addr) || + old_data_space_->Contains(addr) || code_space_->Contains(addr) || + map_space_->Contains(addr) || cell_space_->Contains(addr) || + property_cell_space_->Contains(addr) || + lo_space_->SlowContains(addr)); } @@ -4636,6 +4492,11 @@ void Heap::Verify() { store_buffer()->Verify(); + if (mark_compact_collector()->sweeping_in_progress()) { + // We have to wait here for the sweeper threads to have an iterable heap. + mark_compact_collector()->EnsureSweepingCompleted(); + } + VerifyPointersVisitor visitor; IterateRoots(&visitor, VISIT_ONLY_STRONG); @@ -4664,16 +4525,14 @@ void Heap::ZapFromSpace() { while (it.has_next()) { NewSpacePage* page = it.next(); for (Address cursor = page->area_start(), limit = page->area_end(); - cursor < limit; - cursor += kPointerSize) { + cursor < limit; cursor += kPointerSize) { Memory::Address_at(cursor) = kFromSpaceZapValue; } } } -void Heap::IterateAndMarkPointersToFromSpace(Address start, - Address end, +void Heap::IterateAndMarkPointersToFromSpace(Address start, Address end, ObjectSlotCallback callback) { Address slot_address = start; @@ -4702,12 +4561,12 @@ void Heap::IterateAndMarkPointersToFromSpace(Address start, HeapObject::cast(object)); Object* new_object = *slot; if (InNewSpace(new_object)) { - SLOW_ASSERT(Heap::InToSpace(new_object)); - SLOW_ASSERT(new_object->IsHeapObject()); + SLOW_DCHECK(Heap::InToSpace(new_object)); + SLOW_DCHECK(new_object->IsHeapObject()); store_buffer_.EnterDirectlyIntoStoreBuffer( reinterpret_cast<Address>(slot)); } - SLOW_ASSERT(!MarkCompactCollector::IsOnEvacuationCandidate(new_object)); + SLOW_DCHECK(!MarkCompactCollector::IsOnEvacuationCandidate(new_object)); } else if (record_slots && MarkCompactCollector::IsOnEvacuationCandidate(object)) { mark_compact_collector()->RecordSlot(slot, slot, object); @@ -4730,21 +4589,17 @@ bool IsAMapPointerAddress(Object** addr) { } -bool EverythingsAPointer(Object** addr) { - return true; -} +bool EverythingsAPointer(Object** addr) { return true; } -static void CheckStoreBuffer(Heap* heap, - Object** current, - Object** limit, +static void CheckStoreBuffer(Heap* heap, Object** current, Object** limit, Object**** store_buffer_position, Object*** store_buffer_top, CheckStoreBufferFilter filter, Address special_garbage_start, Address special_garbage_end) { Map* free_space_map = heap->free_space_map(); - for ( ; current < limit; current++) { + for (; current < limit; current++) { Object* o = *current; 
Address current_address = reinterpret_cast<Address>(current); // Skip free space. @@ -4753,8 +4608,8 @@ static void CheckStoreBuffer(Heap* heap, FreeSpace* free_space = FreeSpace::cast(HeapObject::FromAddress(current_address)); int skip = free_space->Size(); - ASSERT(current_address + skip <= reinterpret_cast<Address>(limit)); - ASSERT(skip > 0); + DCHECK(current_address + skip <= reinterpret_cast<Address>(limit)); + DCHECK(skip > 0); current_address += skip - kPointerSize; current = reinterpret_cast<Object**>(current_address); continue; @@ -4768,9 +4623,9 @@ static void CheckStoreBuffer(Heap* heap, continue; } if (!(*filter)(current)) continue; - ASSERT(current_address < special_garbage_start || + DCHECK(current_address < special_garbage_start || current_address >= special_garbage_end); - ASSERT(reinterpret_cast<uintptr_t>(o) != kFreeListZapValue); + DCHECK(reinterpret_cast<uintptr_t>(o) != kFreeListZapValue); // We have to check that the pointer does not point into new space // without trying to cast it to a heap object since the hash field of // a string can contain values like 1 and 3 which are tagged null @@ -4809,13 +4664,8 @@ void Heap::OldPointerSpaceCheckStoreBuffer() { Object*** store_buffer_top = store_buffer()->Top(); Object** limit = reinterpret_cast<Object**>(end); - CheckStoreBuffer(this, - current, - limit, - &store_buffer_position, - store_buffer_top, - &EverythingsAPointer, - space->top(), + CheckStoreBuffer(this, current, limit, &store_buffer_position, + store_buffer_top, &EverythingsAPointer, space->top(), space->limit()); } } @@ -4837,13 +4687,8 @@ void Heap::MapSpaceCheckStoreBuffer() { Object*** store_buffer_top = store_buffer()->Top(); Object** limit = reinterpret_cast<Object**>(end); - CheckStoreBuffer(this, - current, - limit, - &store_buffer_position, - store_buffer_top, - &IsAMapPointerAddress, - space->top(), + CheckStoreBuffer(this, current, limit, &store_buffer_position, + store_buffer_top, &IsAMapPointerAddress, space->top(), space->limit()); } } @@ -4861,14 +4706,8 @@ void Heap::LargeObjectSpaceCheckStoreBuffer() { Object** current = reinterpret_cast<Object**>(object->address()); Object** limit = reinterpret_cast<Object**>(object->address() + object->Size()); - CheckStoreBuffer(this, - current, - limit, - &store_buffer_position, - store_buffer_top, - &EverythingsAPointer, - NULL, - NULL); + CheckStoreBuffer(this, current, limit, &store_buffer_position, + store_buffer_top, &EverythingsAPointer, NULL, NULL); } } } @@ -4884,8 +4723,7 @@ void Heap::IterateRoots(ObjectVisitor* v, VisitMode mode) { void Heap::IterateWeakRoots(ObjectVisitor* v, VisitMode mode) { v->VisitPointer(reinterpret_cast<Object**>(&roots_[kStringTableRootIndex])); v->Synchronize(VisitorSynchronization::kStringTable); - if (mode != VISIT_ALL_IN_SCAVENGE && - mode != VISIT_ALL_IN_SWEEP_NEWSPACE) { + if (mode != VISIT_ALL_IN_SCAVENGE && mode != VISIT_ALL_IN_SWEEP_NEWSPACE) { // Scavenge collections have special processing for this. external_string_table_.Iterate(v); } @@ -4981,61 +4819,54 @@ void Heap::IterateStrongRoots(ObjectVisitor* v, VisitMode mode) { // TODO(1236194): Since the heap size is configurable on the command line // and through the API, we should gracefully handle the case that the heap // size is not big enough to fit all the initial objects. 
-bool Heap::ConfigureHeap(int max_semispace_size, - intptr_t max_old_space_size, - intptr_t max_executable_size, - intptr_t code_range_size) { +bool Heap::ConfigureHeap(int max_semi_space_size, int max_old_space_size, + int max_executable_size, size_t code_range_size) { if (HasBeenSetUp()) return false; + // Overwrite default configuration. + if (max_semi_space_size > 0) { + max_semi_space_size_ = max_semi_space_size * MB; + } + if (max_old_space_size > 0) { + max_old_generation_size_ = max_old_space_size * MB; + } + if (max_executable_size > 0) { + max_executable_size_ = max_executable_size * MB; + } + // If max space size flags are specified overwrite the configuration. - if (FLAG_max_new_space_size > 0) { - max_semispace_size = (FLAG_max_new_space_size / 2) * kLumpOfMemory; + if (FLAG_max_semi_space_size > 0) { + max_semi_space_size_ = FLAG_max_semi_space_size * MB; } if (FLAG_max_old_space_size > 0) { - max_old_space_size = FLAG_max_old_space_size * kLumpOfMemory; + max_old_generation_size_ = FLAG_max_old_space_size * MB; } if (FLAG_max_executable_size > 0) { - max_executable_size = FLAG_max_executable_size * kLumpOfMemory; + max_executable_size_ = FLAG_max_executable_size * MB; } if (FLAG_stress_compaction) { // This will cause more frequent GCs when stressing. - max_semispace_size_ = Page::kPageSize; - } - - if (max_semispace_size > 0) { - if (max_semispace_size < Page::kPageSize) { - max_semispace_size = Page::kPageSize; - if (FLAG_trace_gc) { - PrintPID("Max semispace size cannot be less than %dkbytes\n", - Page::kPageSize >> 10); - } - } - max_semispace_size_ = max_semispace_size; + max_semi_space_size_ = Page::kPageSize; } - if (Snapshot::IsEnabled()) { + if (Snapshot::HaveASnapshotToStartFrom()) { // If we are using a snapshot we always reserve the default amount // of memory for each semispace because code in the snapshot has // write-barrier code that relies on the size and alignment of new // space. We therefore cannot use a larger max semispace size // than the default reserved semispace size. - if (max_semispace_size_ > reserved_semispace_size_) { - max_semispace_size_ = reserved_semispace_size_; + if (max_semi_space_size_ > reserved_semispace_size_) { + max_semi_space_size_ = reserved_semispace_size_; if (FLAG_trace_gc) { - PrintPID("Max semispace size cannot be more than %dkbytes\n", + PrintPID("Max semi-space size cannot be more than %d kbytes\n", reserved_semispace_size_ >> 10); } } } else { // If we are not using snapshots we reserve space for the actual // max semispace size. - reserved_semispace_size_ = max_semispace_size_; - } - - if (max_old_space_size > 0) max_old_generation_size_ = max_old_space_size; - if (max_executable_size > 0) { - max_executable_size_ = RoundUp(max_executable_size, Page::kPageSize); + reserved_semispace_size_ = max_semi_space_size_; } // The max executable size must be less than or equal to the max old @@ -5046,42 +4877,46 @@ bool Heap::ConfigureHeap(int max_semispace_size, // The new space size must be a power of two to support single-bit testing // for containment. - max_semispace_size_ = RoundUpToPowerOf2(max_semispace_size_); + max_semi_space_size_ = RoundUpToPowerOf2(max_semi_space_size_); reserved_semispace_size_ = RoundUpToPowerOf2(reserved_semispace_size_); - initial_semispace_size_ = Min(initial_semispace_size_, max_semispace_size_); - // The external allocation limit should be below 256 MB on all architectures - // to avoid unnecessary low memory notifications, as that is the threshold - // for some embedders. 
- external_allocation_limit_ = 12 * max_semispace_size_; - ASSERT(external_allocation_limit_ <= 256 * MB); + if (FLAG_min_semi_space_size > 0) { + int initial_semispace_size = FLAG_min_semi_space_size * MB; + if (initial_semispace_size > max_semi_space_size_) { + initial_semispace_size_ = max_semi_space_size_; + if (FLAG_trace_gc) { + PrintPID( + "Min semi-space size cannot be more than the maximum " + "semi-space size of %d MB\n", + max_semi_space_size_); + } + } else { + initial_semispace_size_ = initial_semispace_size; + } + } + + initial_semispace_size_ = Min(initial_semispace_size_, max_semi_space_size_); // The old generation is paged and needs at least one page for each space. int paged_space_count = LAST_PAGED_SPACE - FIRST_PAGED_SPACE + 1; - max_old_generation_size_ = Max(static_cast<intptr_t>(paged_space_count * - Page::kPageSize), - RoundUp(max_old_generation_size_, - Page::kPageSize)); + max_old_generation_size_ = + Max(static_cast<intptr_t>(paged_space_count * Page::kPageSize), + max_old_generation_size_); // We rely on being able to allocate new arrays in paged spaces. - ASSERT(Page::kMaxRegularHeapObjectSize >= + DCHECK(Page::kMaxRegularHeapObjectSize >= (JSArray::kSize + FixedArray::SizeFor(JSObject::kInitialMaxFastElementArray) + AllocationMemento::kSize)); - code_range_size_ = code_range_size; + code_range_size_ = code_range_size * MB; configured_ = true; return true; } -bool Heap::ConfigureHeapDefault() { - return ConfigureHeap(static_cast<intptr_t>(FLAG_max_new_space_size / 2) * KB, - static_cast<intptr_t>(FLAG_max_old_space_size) * MB, - static_cast<intptr_t>(FLAG_max_executable_size) * MB, - static_cast<intptr_t>(0)); -} +bool Heap::ConfigureHeapDefault() { return ConfigureHeap(0, 0, 0, 0); } void Heap::RecordStats(HeapStats* stats, bool take_snapshot) { @@ -5107,15 +4942,14 @@ void Heap::RecordStats(HeapStats* stats, bool take_snapshot) { *stats->memory_allocator_capacity = isolate()->memory_allocator()->Size() + isolate()->memory_allocator()->Available(); - *stats->os_error = OS::GetLastError(); - isolate()->memory_allocator()->Available(); + *stats->os_error = base::OS::GetLastError(); + isolate()->memory_allocator()->Available(); if (take_snapshot) { HeapIterator iterator(this); - for (HeapObject* obj = iterator.next(); - obj != NULL; + for (HeapObject* obj = iterator.next(); obj != NULL; obj = iterator.next()) { InstanceType type = obj->map()->instance_type(); - ASSERT(0 <= type && type <= LAST_TYPE); + DCHECK(0 <= type && type <= LAST_TYPE); stats->objects_per_type[type]++; stats->size_per_type[type] += obj->Size(); } @@ -5124,21 +4958,19 @@ void Heap::RecordStats(HeapStats* stats, bool take_snapshot) { intptr_t Heap::PromotedSpaceSizeOfObjects() { - return old_pointer_space_->SizeOfObjects() - + old_data_space_->SizeOfObjects() - + code_space_->SizeOfObjects() - + map_space_->SizeOfObjects() - + cell_space_->SizeOfObjects() - + property_cell_space_->SizeOfObjects() - + lo_space_->SizeOfObjects(); + return old_pointer_space_->SizeOfObjects() + + old_data_space_->SizeOfObjects() + code_space_->SizeOfObjects() + + map_space_->SizeOfObjects() + cell_space_->SizeOfObjects() + + property_cell_space_->SizeOfObjects() + lo_space_->SizeOfObjects(); } int64_t Heap::PromotedExternalMemorySize() { - if (amount_of_external_allocated_memory_ - <= amount_of_external_allocated_memory_at_last_global_gc_) return 0; - return amount_of_external_allocated_memory_ - - amount_of_external_allocated_memory_at_last_global_gc_; + if (amount_of_external_allocated_memory_ <= +
amount_of_external_allocated_memory_at_last_global_gc_) + return 0; + return amount_of_external_allocated_memory_ - + amount_of_external_allocated_memory_at_last_global_gc_; } @@ -5167,7 +4999,7 @@ intptr_t Heap::OldGenerationAllocationLimit(intptr_t old_gen_size, // (kMinHandles, max_factor) and (kMaxHandles, min_factor). factor = max_factor - (freed_global_handles - kMinHandles) * (max_factor - min_factor) / - (kMaxHandles - kMinHandles); + (kMaxHandles - kMinHandles); } if (FLAG_stress_compaction || @@ -5201,8 +5033,7 @@ void Heap::DisableInlineAllocation() { // Update inline allocation limit for old spaces. PagedSpaces spaces(this); - for (PagedSpace* space = spaces.next(); - space != NULL; + for (PagedSpace* space = spaces.next(); space != NULL; space = spaces.next()) { space->EmptyAllocationInfo(); } @@ -5235,34 +5066,29 @@ bool Heap::SetUp() { if (!ConfigureHeapDefault()) return false; } - CallOnce(&initialize_gc_once, &InitializeGCOnce); + base::CallOnce(&initialize_gc_once, &InitializeGCOnce); MarkMapPointersAsEncoded(false); // Set up memory allocator. if (!isolate_->memory_allocator()->SetUp(MaxReserved(), MaxExecutableSize())) - return false; + return false; // Set up new space. - if (!new_space_.SetUp(reserved_semispace_size_, max_semispace_size_)) { + if (!new_space_.SetUp(reserved_semispace_size_, max_semi_space_size_)) { return false; } + new_space_top_after_last_gc_ = new_space()->top(); // Initialize old pointer space. - old_pointer_space_ = - new OldSpace(this, - max_old_generation_size_, - OLD_POINTER_SPACE, - NOT_EXECUTABLE); + old_pointer_space_ = new OldSpace(this, max_old_generation_size_, + OLD_POINTER_SPACE, NOT_EXECUTABLE); if (old_pointer_space_ == NULL) return false; if (!old_pointer_space_->SetUp()) return false; // Initialize old data space. - old_data_space_ = - new OldSpace(this, - max_old_generation_size_, - OLD_DATA_SPACE, - NOT_EXECUTABLE); + old_data_space_ = new OldSpace(this, max_old_generation_size_, OLD_DATA_SPACE, + NOT_EXECUTABLE); if (old_data_space_ == NULL) return false; if (!old_data_space_->SetUp()) return false; @@ -5299,7 +5125,7 @@ bool Heap::SetUp() { if (!lo_space_->SetUp()) return false; // Set up the seed that is used to randomize the string hash function. - ASSERT(hash_seed() == 0); + DCHECK(hash_seed() == 0); if (FLAG_randomize_hashes) { if (FLAG_hash_seed == 0) { int rnd = isolate()->random_number_generator()->NextInt(); @@ -5329,28 +5155,26 @@ bool Heap::CreateHeapObjects() { CreateInitialObjects(); CHECK_EQ(0, gc_count_); - native_contexts_list_ = undefined_value(); - array_buffers_list_ = undefined_value(); - allocation_sites_list_ = undefined_value(); + set_native_contexts_list(undefined_value()); + set_array_buffers_list(undefined_value()); + set_allocation_sites_list(undefined_value()); weak_object_to_code_table_ = undefined_value(); return true; } void Heap::SetStackLimits() { - ASSERT(isolate_ != NULL); - ASSERT(isolate_ == isolate()); + DCHECK(isolate_ != NULL); + DCHECK(isolate_ == isolate()); // On 64 bit machines, pointers are generally out of range of Smis. We write // something that looks like an out of range Smi to the GC. // Set up the special root array entries containing the stack limits. // These are actually addresses, but the tag makes the GC ignore it. 
- roots_[kStackLimitRootIndex] = - reinterpret_cast<Object*>( - (isolate_->stack_guard()->jslimit() & ~kSmiTagMask) | kSmiTag); - roots_[kRealStackLimitRootIndex] = - reinterpret_cast<Object*>( - (isolate_->stack_guard()->real_jslimit() & ~kSmiTagMask) | kSmiTag); + roots_[kStackLimitRootIndex] = reinterpret_cast<Object*>( + (isolate_->stack_guard()->jslimit() & ~kSmiTagMask) | kSmiTag); + roots_[kRealStackLimitRootIndex] = reinterpret_cast<Object*>( + (isolate_->stack_guard()->real_jslimit() & ~kSmiTagMask) | kSmiTag); } @@ -5370,38 +5194,41 @@ void Heap::TearDown() { PrintF("max_gc_pause=%.1f ", get_max_gc_pause()); PrintF("total_gc_time=%.1f ", total_gc_time_ms_); PrintF("min_in_mutator=%.1f ", get_min_in_mutator()); - PrintF("max_alive_after_gc=%" V8_PTR_PREFIX "d ", - get_max_alive_after_gc()); - PrintF("total_marking_time=%.1f ", marking_time()); - PrintF("total_sweeping_time=%.1f ", sweeping_time()); + PrintF("max_alive_after_gc=%" V8_PTR_PREFIX "d ", get_max_alive_after_gc()); + PrintF("total_marking_time=%.1f ", tracer_.cumulative_sweeping_duration()); + PrintF("total_sweeping_time=%.1f ", tracer_.cumulative_sweeping_duration()); PrintF("\n\n"); } if (FLAG_print_max_heap_committed) { PrintF("\n"); PrintF("maximum_committed_by_heap=%" V8_PTR_PREFIX "d ", - MaximumCommittedMemory()); + MaximumCommittedMemory()); PrintF("maximum_committed_by_new_space=%" V8_PTR_PREFIX "d ", - new_space_.MaximumCommittedMemory()); + new_space_.MaximumCommittedMemory()); PrintF("maximum_committed_by_old_pointer_space=%" V8_PTR_PREFIX "d ", - old_data_space_->MaximumCommittedMemory()); + old_data_space_->MaximumCommittedMemory()); PrintF("maximum_committed_by_old_data_space=%" V8_PTR_PREFIX "d ", - old_pointer_space_->MaximumCommittedMemory()); + old_pointer_space_->MaximumCommittedMemory()); PrintF("maximum_committed_by_old_data_space=%" V8_PTR_PREFIX "d ", - old_pointer_space_->MaximumCommittedMemory()); + old_pointer_space_->MaximumCommittedMemory()); PrintF("maximum_committed_by_code_space=%" V8_PTR_PREFIX "d ", - code_space_->MaximumCommittedMemory()); + code_space_->MaximumCommittedMemory()); PrintF("maximum_committed_by_map_space=%" V8_PTR_PREFIX "d ", - map_space_->MaximumCommittedMemory()); + map_space_->MaximumCommittedMemory()); PrintF("maximum_committed_by_cell_space=%" V8_PTR_PREFIX "d ", - cell_space_->MaximumCommittedMemory()); + cell_space_->MaximumCommittedMemory()); PrintF("maximum_committed_by_property_space=%" V8_PTR_PREFIX "d ", - property_cell_space_->MaximumCommittedMemory()); + property_cell_space_->MaximumCommittedMemory()); PrintF("maximum_committed_by_lo_space=%" V8_PTR_PREFIX "d ", - lo_space_->MaximumCommittedMemory()); + lo_space_->MaximumCommittedMemory()); PrintF("\n\n"); } + if (FLAG_verify_predictable) { + PrintAlloctionsHash(); + } + TearDownArrayBuffers(); isolate_->global_handles()->TearDown(); @@ -5462,17 +5289,16 @@ void Heap::TearDown() { void Heap::AddGCPrologueCallback(v8::Isolate::GCPrologueCallback callback, - GCType gc_type, - bool pass_isolate) { - ASSERT(callback != NULL); + GCType gc_type, bool pass_isolate) { + DCHECK(callback != NULL); GCPrologueCallbackPair pair(callback, gc_type, pass_isolate); - ASSERT(!gc_prologue_callbacks_.Contains(pair)); + DCHECK(!gc_prologue_callbacks_.Contains(pair)); return gc_prologue_callbacks_.Add(pair); } void Heap::RemoveGCPrologueCallback(v8::Isolate::GCPrologueCallback callback) { - ASSERT(callback != NULL); + DCHECK(callback != NULL); for (int i = 0; i < gc_prologue_callbacks_.length(); ++i) { if 
(gc_prologue_callbacks_[i].callback == callback) { gc_prologue_callbacks_.Remove(i); @@ -5484,17 +5310,16 @@ void Heap::RemoveGCPrologueCallback(v8::Isolate::GCPrologueCallback callback) { void Heap::AddGCEpilogueCallback(v8::Isolate::GCEpilogueCallback callback, - GCType gc_type, - bool pass_isolate) { - ASSERT(callback != NULL); + GCType gc_type, bool pass_isolate) { + DCHECK(callback != NULL); GCEpilogueCallbackPair pair(callback, gc_type, pass_isolate); - ASSERT(!gc_epilogue_callbacks_.Contains(pair)); + DCHECK(!gc_epilogue_callbacks_.Contains(pair)); return gc_epilogue_callbacks_.Add(pair); } void Heap::RemoveGCEpilogueCallback(v8::Isolate::GCEpilogueCallback callback) { - ASSERT(callback != NULL); + DCHECK(callback != NULL); for (int i = 0; i < gc_epilogue_callbacks_.length(); ++i) { if (gc_epilogue_callbacks_[i].callback == callback) { gc_epilogue_callbacks_.Remove(i); @@ -5508,8 +5333,8 @@ void Heap::RemoveGCEpilogueCallback(v8::Isolate::GCEpilogueCallback callback) { // TODO(ishell): Find a better place for this. void Heap::AddWeakObjectToCodeDependency(Handle<Object> obj, Handle<DependentCode> dep) { - ASSERT(!InNewSpace(*obj)); - ASSERT(!InNewSpace(*dep)); + DCHECK(!InNewSpace(*obj)); + DCHECK(!InNewSpace(*dep)); // This handle scope keeps the table handle local to this function, which // allows us to safely skip write barriers in table update operations. HandleScope scope(isolate()); @@ -5521,7 +5346,7 @@ void Heap::AddWeakObjectToCodeDependency(Handle<Object> obj, WeakHashTable::cast(weak_object_to_code_table_)->Zap(the_hole_value()); } set_weak_object_to_code_table(*table); - ASSERT_EQ(*dep, table->Lookup(obj)); + DCHECK_EQ(*dep, table->Lookup(obj)); } @@ -5534,8 +5359,9 @@ DependentCode* Heap::LookupWeakObjectToCodeDependency(Handle<Object> obj) { void Heap::EnsureWeakObjectToCodeTable() { if (!weak_object_to_code_table()->IsHashTable()) { - set_weak_object_to_code_table(*WeakHashTable::New( - isolate(), 16, USE_DEFAULT_MINIMUM_CAPACITY, TENURED)); + set_weak_object_to_code_table( + *WeakHashTable::New(isolate(), 16, USE_DEFAULT_MINIMUM_CAPACITY, + TENURED)); } } @@ -5546,12 +5372,11 @@ void Heap::FatalProcessOutOfMemory(const char* location, bool take_snapshot) { #ifdef DEBUG -class PrintHandleVisitor: public ObjectVisitor { +class PrintHandleVisitor : public ObjectVisitor { public: void VisitPointers(Object** start, Object** end) { for (Object** p = start; p < end; p++) - PrintF(" handle %p to %p\n", - reinterpret_cast<void*>(p), + PrintF(" handle %p to %p\n", reinterpret_cast<void*>(p), reinterpret_cast<void*>(*p)); } }; @@ -5610,7 +5435,6 @@ PagedSpace* PagedSpaces::next() { } - OldSpace* OldSpaces::next() { switch (counter_++) { case OLD_POINTER_SPACE: @@ -5629,16 +5453,14 @@ SpaceIterator::SpaceIterator(Heap* heap) : heap_(heap), current_space_(FIRST_SPACE), iterator_(NULL), - size_func_(NULL) { -} + size_func_(NULL) {} SpaceIterator::SpaceIterator(Heap* heap, HeapObjectCallback size_func) : heap_(heap), current_space_(FIRST_SPACE), iterator_(NULL), - size_func_(size_func) { -} + size_func_(size_func) {} SpaceIterator::~SpaceIterator() { @@ -5671,7 +5493,7 @@ ObjectIterator* SpaceIterator::next() { // Create an iterator for the space to iterate. 
ObjectIterator* SpaceIterator::CreateIterator() { - ASSERT(iterator_ == NULL); + DCHECK(iterator_ == NULL); switch (current_space_) { case NEW_SPACE: @@ -5694,8 +5516,8 @@ ObjectIterator* SpaceIterator::CreateIterator() { iterator_ = new HeapObjectIterator(heap_->cell_space(), size_func_); break; case PROPERTY_CELL_SPACE: - iterator_ = new HeapObjectIterator(heap_->property_cell_space(), - size_func_); + iterator_ = + new HeapObjectIterator(heap_->property_cell_space(), size_func_); break; case LO_SPACE: iterator_ = new LargeObjectIterator(heap_->lo_space(), size_func_); @@ -5703,7 +5525,7 @@ ObjectIterator* SpaceIterator::CreateIterator() { } // Return the newly allocated iterator; - ASSERT(iterator_ != NULL); + DCHECK(iterator_ != NULL); return iterator_; } @@ -5770,7 +5592,9 @@ class UnreachableObjectsFilter : public HeapObjectsFilter { HeapIterator::HeapIterator(Heap* heap) - : heap_(heap), + : make_heap_iterable_helper_(heap), + no_heap_allocation_(), + heap_(heap), filtering_(HeapIterator::kNoFiltering), filter_(NULL) { Init(); @@ -5779,16 +5603,16 @@ HeapIterator::HeapIterator(Heap* heap) HeapIterator::HeapIterator(Heap* heap, HeapIterator::HeapObjectsFiltering filtering) - : heap_(heap), + : make_heap_iterable_helper_(heap), + no_heap_allocation_(), + heap_(heap), filtering_(filtering), filter_(NULL) { Init(); } -HeapIterator::~HeapIterator() { - Shutdown(); -} +HeapIterator::~HeapIterator() { Shutdown(); } void HeapIterator::Init() { @@ -5810,7 +5634,7 @@ void HeapIterator::Shutdown() { // Assert that in filtering mode we have iterated through all // objects. Otherwise, heap will be left in an inconsistent state. if (filtering_ != kNoFiltering) { - ASSERT(object_iterator_ == NULL); + DCHECK(object_iterator_ == NULL); } #endif // Make sure the last iterator is deallocated. 
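[Editor's aside, not part of the patch.] The reworked HeapIterator above gets its safety guarantee purely from C++ member ordering: make_heap_iterable_helper_ is declared before no_heap_allocation_, so the heap is swept into an iterable state before allocation is forbidden, and, since members are destroyed in reverse order, allocation stays forbidden for the whole lifetime of the iterator. A minimal standalone sketch of that idiom follows; MakeIterableHelper, NoAllocationScope, and DemoHeapIterator are hypothetical illustration-only names, not V8 API.

#include <cstdio>

// Stand-ins for the "make heap iterable" helper and DisallowHeapAllocation.
struct MakeIterableHelper {
  MakeIterableHelper() { std::puts("1: sweeping finished, heap is iterable"); }
};

struct NoAllocationScope {
  NoAllocationScope() { std::puts("2: allocation disallowed"); }
  ~NoAllocationScope() { std::puts("4: allocation allowed again"); }
};

class DemoHeapIterator {
  // Members are constructed in declaration order and destroyed in reverse,
  // so the helper always runs before the no-allocation scope is entered.
  MakeIterableHelper make_heap_iterable_helper_;
  NoAllocationScope no_heap_allocation_;

 public:
  DemoHeapIterator() { std::puts("3: iterator body initialized"); }
};

int main() {
  DemoHeapIterator it;  // prints 1, 2, 3; prints 4 when it leaves scope
  return 0;
}

The appeal of this design is that neither guard can be forgotten: constructing the iterator is the only way to obtain one, and the compiler enforces the ordering.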
@@ -5864,14 +5688,13 @@ void HeapIterator::reset() { Object* const PathTracer::kAnyGlobalObject = NULL; -class PathTracer::MarkVisitor: public ObjectVisitor { +class PathTracer::MarkVisitor : public ObjectVisitor { public: explicit MarkVisitor(PathTracer* tracer) : tracer_(tracer) {} void VisitPointers(Object** start, Object** end) { // Scan all HeapObject pointers in [start, end) for (Object** p = start; !tracer_->found() && (p < end); p++) { - if ((*p)->IsHeapObject()) - tracer_->MarkRecursively(p, this); + if ((*p)->IsHeapObject()) tracer_->MarkRecursively(p, this); } } @@ -5880,14 +5703,13 @@ class PathTracer::MarkVisitor: public ObjectVisitor { }; -class PathTracer::UnmarkVisitor: public ObjectVisitor { +class PathTracer::UnmarkVisitor : public ObjectVisitor { public: explicit UnmarkVisitor(PathTracer* tracer) : tracer_(tracer) {} void VisitPointers(Object** start, Object** end) { // Scan all HeapObject pointers in [start, end) for (Object** p = start; p < end; p++) { - if ((*p)->IsHeapObject()) - tracer_->UnmarkRecursively(p, this); + if ((*p)->IsHeapObject()) tracer_->UnmarkRecursively(p, this); } } @@ -5915,7 +5737,7 @@ void PathTracer::Reset() { void PathTracer::TracePathFrom(Object** root) { - ASSERT((search_target_ == kAnyGlobalObject) || + DCHECK((search_target_ == kAnyGlobalObject) || search_target_->IsHeapObject()); found_target_in_trace_ = false; Reset(); @@ -5940,9 +5762,8 @@ void PathTracer::MarkRecursively(Object** p, MarkVisitor* mark_visitor) { HeapObject* obj = HeapObject::cast(*p); - Object* map = obj->map(); - - if (!map->IsHeapObject()) return; // visited before + MapWord map_word = obj->map_word(); + if (!map_word.ToMap()->IsHeapObject()) return; // visited before if (found_target_in_trace_) return; // stop if target found object_stack_.Add(obj); @@ -5956,32 +5777,32 @@ void PathTracer::MarkRecursively(Object** p, MarkVisitor* mark_visitor) { bool is_native_context = SafeIsNativeContext(obj); // not visited yet - Map* map_p = reinterpret_cast<Map*>(HeapObject::cast(map)); + Map* map = Map::cast(map_word.ToMap()); - Address map_addr = map_p->address(); - - obj->set_map_no_write_barrier(reinterpret_cast<Map*>(map_addr + kMarkTag)); + MapWord marked_map_word = + MapWord::FromRawValue(obj->map_word().ToRawValue() + kMarkTag); + obj->set_map_word(marked_map_word); // Scan the object body. if (is_native_context && (visit_mode_ == VISIT_ONLY_STRONG)) { // This is specialized to scan Context's properly. - Object** start = reinterpret_cast<Object**>(obj->address() + - Context::kHeaderSize); - Object** end = reinterpret_cast<Object**>(obj->address() + - Context::kHeaderSize + Context::FIRST_WEAK_SLOT * kPointerSize); + Object** start = + reinterpret_cast<Object**>(obj->address() + Context::kHeaderSize); + Object** end = + reinterpret_cast<Object**>(obj->address() + Context::kHeaderSize + + Context::FIRST_WEAK_SLOT * kPointerSize); mark_visitor->VisitPointers(start, end); } else { - obj->IterateBody(map_p->instance_type(), - obj->SizeFromMap(map_p), - mark_visitor); + obj->IterateBody(map->instance_type(), obj->SizeFromMap(map), mark_visitor); } // Scan the map after the body because the body is a lot more interesting // when doing leak detection. 
- MarkRecursively(&map, mark_visitor); + MarkRecursively(reinterpret_cast<Object**>(&map), mark_visitor); - if (!found_target_in_trace_) // don't pop if found the target + if (!found_target_in_trace_) { // don't pop if found the target object_stack_.RemoveLast(); + } } @@ -5990,41 +5811,34 @@ void PathTracer::UnmarkRecursively(Object** p, UnmarkVisitor* unmark_visitor) { HeapObject* obj = HeapObject::cast(*p); - Object* map = obj->map(); - - if (map->IsHeapObject()) return; // unmarked already - - Address map_addr = reinterpret_cast<Address>(map); - - map_addr -= kMarkTag; - - ASSERT_TAG_ALIGNED(map_addr); + MapWord map_word = obj->map_word(); + if (map_word.ToMap()->IsHeapObject()) return; // unmarked already - HeapObject* map_p = HeapObject::FromAddress(map_addr); + MapWord unmarked_map_word = + MapWord::FromRawValue(map_word.ToRawValue() - kMarkTag); + obj->set_map_word(unmarked_map_word); - obj->set_map_no_write_barrier(reinterpret_cast<Map*>(map_p)); + Map* map = Map::cast(unmarked_map_word.ToMap()); - UnmarkRecursively(reinterpret_cast<Object**>(&map_p), unmark_visitor); + UnmarkRecursively(reinterpret_cast<Object**>(&map), unmark_visitor); - obj->IterateBody(Map::cast(map_p)->instance_type(), - obj->SizeFromMap(Map::cast(map_p)), - unmark_visitor); + obj->IterateBody(map->instance_type(), obj->SizeFromMap(map), unmark_visitor); } void PathTracer::ProcessResults() { if (found_target_) { - PrintF("=====================================\n"); - PrintF("==== Path to object ====\n"); - PrintF("=====================================\n\n"); + OFStream os(stdout); + os << "=====================================\n" + << "==== Path to object ====\n" + << "=====================================\n\n"; - ASSERT(!object_stack_.is_empty()); + DCHECK(!object_stack_.is_empty()); for (int i = 0; i < object_stack_.length(); i++) { - if (i > 0) PrintF("\n |\n |\n V\n\n"); - Object* obj = object_stack_[i]; - obj->Print(); + if (i > 0) os << "\n |\n |\n V\n\n"; + object_stack_[i]->Print(os); } - PrintF("=====================================\n"); + os << "=====================================\n"; } } @@ -6050,212 +5864,26 @@ void Heap::TracePathToObject(Object* target) { // and finds a path to any global object and prints it. Useful for // determining the source for leaks of global objects. 
void Heap::TracePathToGlobal() { - PathTracer tracer(PathTracer::kAnyGlobalObject, - PathTracer::FIND_ALL, + PathTracer tracer(PathTracer::kAnyGlobalObject, PathTracer::FIND_ALL, VISIT_ALL); IterateRoots(&tracer, VISIT_ONLY_STRONG); } #endif -static intptr_t CountTotalHolesSize(Heap* heap) { - intptr_t holes_size = 0; - OldSpaces spaces(heap); - for (OldSpace* space = spaces.next(); - space != NULL; - space = spaces.next()) { - holes_size += space->Waste() + space->Available(); - } - return holes_size; -} - - -GCTracer::GCTracer(Heap* heap, - const char* gc_reason, - const char* collector_reason) - : start_time_(0.0), - start_object_size_(0), - start_memory_size_(0), - gc_count_(0), - full_gc_count_(0), - allocated_since_last_gc_(0), - spent_in_mutator_(0), - promoted_objects_size_(0), - nodes_died_in_new_space_(0), - nodes_copied_in_new_space_(0), - nodes_promoted_(0), - heap_(heap), - gc_reason_(gc_reason), - collector_reason_(collector_reason) { - if (!FLAG_trace_gc && !FLAG_print_cumulative_gc_stat) return; - start_time_ = OS::TimeCurrentMillis(); - start_object_size_ = heap_->SizeOfObjects(); - start_memory_size_ = heap_->isolate()->memory_allocator()->Size(); - - for (int i = 0; i < Scope::kNumberOfScopes; i++) { - scopes_[i] = 0; - } - - in_free_list_or_wasted_before_gc_ = CountTotalHolesSize(heap); - - allocated_since_last_gc_ = - heap_->SizeOfObjects() - heap_->alive_after_last_gc_; - - if (heap_->last_gc_end_timestamp_ > 0) { - spent_in_mutator_ = Max(start_time_ - heap_->last_gc_end_timestamp_, 0.0); - } - - steps_count_ = heap_->incremental_marking()->steps_count(); - steps_took_ = heap_->incremental_marking()->steps_took(); - longest_step_ = heap_->incremental_marking()->longest_step(); - steps_count_since_last_gc_ = - heap_->incremental_marking()->steps_count_since_last_gc(); - steps_took_since_last_gc_ = - heap_->incremental_marking()->steps_took_since_last_gc(); -} - - -GCTracer::~GCTracer() { - // Printf ONE line iff flag is set. - if (!FLAG_trace_gc && !FLAG_print_cumulative_gc_stat) return; - - bool first_gc = (heap_->last_gc_end_timestamp_ == 0); - - heap_->alive_after_last_gc_ = heap_->SizeOfObjects(); - heap_->last_gc_end_timestamp_ = OS::TimeCurrentMillis(); - - double time = heap_->last_gc_end_timestamp_ - start_time_; - - // Update cumulative GC statistics if required. 
+void Heap::UpdateCumulativeGCStatistics(double duration, + double spent_in_mutator, + double marking_time) { if (FLAG_print_cumulative_gc_stat) { - heap_->total_gc_time_ms_ += time; - heap_->max_gc_pause_ = Max(heap_->max_gc_pause_, time); - heap_->max_alive_after_gc_ = Max(heap_->max_alive_after_gc_, - heap_->alive_after_last_gc_); - if (!first_gc) { - heap_->min_in_mutator_ = Min(heap_->min_in_mutator_, - spent_in_mutator_); - } + total_gc_time_ms_ += duration; + max_gc_pause_ = Max(max_gc_pause_, duration); + max_alive_after_gc_ = Max(max_alive_after_gc_, SizeOfObjects()); + min_in_mutator_ = Min(min_in_mutator_, spent_in_mutator); } else if (FLAG_trace_gc_verbose) { - heap_->total_gc_time_ms_ += time; + total_gc_time_ms_ += duration; } - if (collector_ == SCAVENGER && FLAG_trace_gc_ignore_scavenger) return; - - heap_->AddMarkingTime(scopes_[Scope::MC_MARK]); - - if (FLAG_print_cumulative_gc_stat && !FLAG_trace_gc) return; - PrintPID("%8.0f ms: ", heap_->isolate()->time_millis_since_init()); - - if (!FLAG_trace_gc_nvp) { - int external_time = static_cast<int>(scopes_[Scope::EXTERNAL]); - - double end_memory_size_mb = - static_cast<double>(heap_->isolate()->memory_allocator()->Size()) / MB; - - PrintF("%s %.1f (%.1f) -> %.1f (%.1f) MB, ", - CollectorString(), - static_cast<double>(start_object_size_) / MB, - static_cast<double>(start_memory_size_) / MB, - SizeOfHeapObjects(), - end_memory_size_mb); - - if (external_time > 0) PrintF("%d / ", external_time); - PrintF("%.1f ms", time); - if (steps_count_ > 0) { - if (collector_ == SCAVENGER) { - PrintF(" (+ %.1f ms in %d steps since last GC)", - steps_took_since_last_gc_, - steps_count_since_last_gc_); - } else { - PrintF(" (+ %.1f ms in %d steps since start of marking, " - "biggest step %.1f ms)", - steps_took_, - steps_count_, - longest_step_); - } - } - - if (gc_reason_ != NULL) { - PrintF(" [%s]", gc_reason_); - } - - if (collector_reason_ != NULL) { - PrintF(" [%s]", collector_reason_); - } - - PrintF(".\n"); - } else { - PrintF("pause=%.1f ", time); - PrintF("mutator=%.1f ", spent_in_mutator_); - PrintF("gc="); - switch (collector_) { - case SCAVENGER: - PrintF("s"); - break; - case MARK_COMPACTOR: - PrintF("ms"); - break; - default: - UNREACHABLE(); - } - PrintF(" "); - - PrintF("external=%.1f ", scopes_[Scope::EXTERNAL]); - PrintF("mark=%.1f ", scopes_[Scope::MC_MARK]); - PrintF("sweep=%.2f ", scopes_[Scope::MC_SWEEP]); - PrintF("sweepns=%.2f ", scopes_[Scope::MC_SWEEP_NEWSPACE]); - PrintF("sweepos=%.2f ", scopes_[Scope::MC_SWEEP_OLDSPACE]); - PrintF("evacuate=%.1f ", scopes_[Scope::MC_EVACUATE_PAGES]); - PrintF("new_new=%.1f ", scopes_[Scope::MC_UPDATE_NEW_TO_NEW_POINTERS]); - PrintF("root_new=%.1f ", scopes_[Scope::MC_UPDATE_ROOT_TO_NEW_POINTERS]); - PrintF("old_new=%.1f ", scopes_[Scope::MC_UPDATE_OLD_TO_NEW_POINTERS]); - PrintF("compaction_ptrs=%.1f ", - scopes_[Scope::MC_UPDATE_POINTERS_TO_EVACUATED]); - PrintF("intracompaction_ptrs=%.1f ", - scopes_[Scope::MC_UPDATE_POINTERS_BETWEEN_EVACUATED]); - PrintF("misc_compaction=%.1f ", scopes_[Scope::MC_UPDATE_MISC_POINTERS]); - PrintF("weakcollection_process=%.1f ", - scopes_[Scope::MC_WEAKCOLLECTION_PROCESS]); - PrintF("weakcollection_clear=%.1f ", - scopes_[Scope::MC_WEAKCOLLECTION_CLEAR]); - - PrintF("total_size_before=%" V8_PTR_PREFIX "d ", start_object_size_); - PrintF("total_size_after=%" V8_PTR_PREFIX "d ", heap_->SizeOfObjects()); - PrintF("holes_size_before=%" V8_PTR_PREFIX "d ", - in_free_list_or_wasted_before_gc_); - PrintF("holes_size_after=%" V8_PTR_PREFIX "d ", 
CountTotalHolesSize(heap_)); - - PrintF("allocated=%" V8_PTR_PREFIX "d ", allocated_since_last_gc_); - PrintF("promoted=%" V8_PTR_PREFIX "d ", promoted_objects_size_); - PrintF("nodes_died_in_new=%d ", nodes_died_in_new_space_); - PrintF("nodes_copied_in_new=%d ", nodes_copied_in_new_space_); - PrintF("nodes_promoted=%d ", nodes_promoted_); - - if (collector_ == SCAVENGER) { - PrintF("stepscount=%d ", steps_count_since_last_gc_); - PrintF("stepstook=%.1f ", steps_took_since_last_gc_); - } else { - PrintF("stepscount=%d ", steps_count_); - PrintF("stepstook=%.1f ", steps_took_); - PrintF("longeststep=%.1f ", longest_step_); - } - - PrintF("\n"); - } - - heap_->PrintShortHeapStatistics(); -} - - -const char* GCTracer::CollectorString() { - switch (collector_) { - case SCAVENGER: - return "Scavenge"; - case MARK_COMPACTOR: - return "Mark-sweep"; - } - return "Unknown GC"; + marking_time_ += marking_time; } @@ -6281,25 +5909,23 @@ int KeyedLookupCache::Lookup(Handle<Map> map, Handle<Name> name) { } -void KeyedLookupCache::Update(Handle<Map> map, - Handle<Name> name, +void KeyedLookupCache::Update(Handle<Map> map, Handle<Name> name, int field_offset) { DisallowHeapAllocation no_gc; if (!name->IsUniqueName()) { - if (!StringTable::InternalizeStringIfExists(name->GetIsolate(), - Handle<String>::cast(name)). - ToHandle(&name)) { + if (!StringTable::InternalizeStringIfExists( + name->GetIsolate(), Handle<String>::cast(name)).ToHandle(&name)) { return; } } // This cache is cleared only between mark compact passes, so we expect the // cache to only contain old space names. - ASSERT(!map->GetIsolate()->heap()->InNewSpace(*name)); + DCHECK(!map->GetIsolate()->heap()->InNewSpace(*name)); int index = (Hash(map, name) & kHashMask); // After a GC there will be free slots, so we use them in order (this may // help to get the most frequently used one in position 0). - for (int i = 0; i< kEntriesPerBucket; i++) { + for (int i = 0; i < kEntriesPerBucket; i++) { Key& key = keys_[index]; Object* free_entry_indicator = NULL; if (key.map == free_entry_indicator) { @@ -6342,7 +5968,7 @@ void ExternalStringTable::CleanUp() { if (new_space_strings_[i] == heap_->the_hole_value()) { continue; } - ASSERT(new_space_strings_[i]->IsExternalString()); + DCHECK(new_space_strings_[i]->IsExternalString()); if (heap_->InNewSpace(new_space_strings_[i])) { new_space_strings_[last++] = new_space_strings_[i]; } else { @@ -6357,8 +5983,8 @@ void ExternalStringTable::CleanUp() { if (old_space_strings_[i] == heap_->the_hole_value()) { continue; } - ASSERT(old_space_strings_[i]->IsExternalString()); - ASSERT(!heap_->InNewSpace(old_space_strings_[i])); + DCHECK(old_space_strings_[i]->IsExternalString()); + DCHECK(!heap_->InNewSpace(old_space_strings_[i])); old_space_strings_[last++] = old_space_strings_[i]; } old_space_strings_.Rewind(last); @@ -6408,8 +6034,8 @@ void Heap::FreeQueuedChunks() { // If FromAnyPointerAddress encounters a slot that belongs to one of // these smaller pieces it will treat it as a slot on a normal Page. 
Address chunk_end = chunk->address() + chunk->size(); - MemoryChunk* inner = MemoryChunk::FromAddress( - chunk->address() + Page::kPageSize); + MemoryChunk* inner = + MemoryChunk::FromAddress(chunk->address() + Page::kPageSize); MemoryChunk* inner_last = MemoryChunk::FromAddress(chunk_end - 1); while (inner <= inner_last) { // Size of a large chunk is always a multiple of @@ -6422,8 +6048,7 @@ void Heap::FreeQueuedChunks() { inner->set_size(Page::kPageSize); inner->set_owner(lo_space()); inner->SetFlag(MemoryChunk::ABOUT_TO_BE_FREED); - inner = MemoryChunk::FromAddress( - inner->address() + Page::kPageSize); + inner = MemoryChunk::FromAddress(inner->address() + Page::kPageSize); } } } @@ -6462,20 +6087,21 @@ void Heap::ClearObjectStats(bool clear_last_time_stats) { } -static LazyMutex checkpoint_object_stats_mutex = LAZY_MUTEX_INITIALIZER; +static base::LazyMutex checkpoint_object_stats_mutex = LAZY_MUTEX_INITIALIZER; void Heap::CheckpointObjectStats() { - LockGuard<Mutex> lock_guard(checkpoint_object_stats_mutex.Pointer()); + base::LockGuard<base::Mutex> lock_guard( + checkpoint_object_stats_mutex.Pointer()); Counters* counters = isolate()->counters(); -#define ADJUST_LAST_TIME_OBJECT_COUNT(name) \ - counters->count_of_##name()->Increment( \ - static_cast<int>(object_counts_[name])); \ - counters->count_of_##name()->Decrement( \ - static_cast<int>(object_counts_last_time_[name])); \ - counters->size_of_##name()->Increment( \ - static_cast<int>(object_sizes_[name])); \ - counters->size_of_##name()->Decrement( \ +#define ADJUST_LAST_TIME_OBJECT_COUNT(name) \ + counters->count_of_##name()->Increment( \ + static_cast<int>(object_counts_[name])); \ + counters->count_of_##name()->Decrement( \ + static_cast<int>(object_counts_last_time_[name])); \ + counters->size_of_##name()->Increment( \ + static_cast<int>(object_sizes_[name])); \ + counters->size_of_##name()->Decrement( \ static_cast<int>(object_sizes_last_time_[name])); INSTANCE_TYPE_LIST(ADJUST_LAST_TIME_OBJECT_COUNT) #undef ADJUST_LAST_TIME_OBJECT_COUNT @@ -6518,9 +6144,9 @@ void Heap::CheckpointObjectStats() { CODE_AGE_LIST_COMPLETE(ADJUST_LAST_TIME_OBJECT_COUNT) #undef ADJUST_LAST_TIME_OBJECT_COUNT - OS::MemCopy(object_counts_last_time_, object_counts_, sizeof(object_counts_)); - OS::MemCopy(object_sizes_last_time_, object_sizes_, sizeof(object_sizes_)); + MemCopy(object_counts_last_time_, object_counts_, sizeof(object_counts_)); + MemCopy(object_sizes_last_time_, object_sizes_, sizeof(object_sizes_)); ClearObjectStats(); } - -} } // namespace v8::internal +} +} // namespace v8::internal diff --git a/deps/v8/src/heap.h b/deps/v8/src/heap/heap.h similarity index 72% rename from deps/v8/src/heap.h rename to deps/v8/src/heap/heap.h index f88526e2555..c313333362f 100644 --- a/deps/v8/src/heap.h +++ b/deps/v8/src/heap/heap.h @@ -2,23 +2,23 @@ // Use of this source code is governed by a BSD-style license that can be // found in the LICENSE file. 
-#ifndef V8_HEAP_H_
-#define V8_HEAP_H_
+#ifndef V8_HEAP_HEAP_H_
+#define V8_HEAP_HEAP_H_
 
 #include <cmath>
 
-#include "allocation.h"
-#include "assert-scope.h"
-#include "counters.h"
-#include "globals.h"
-#include "incremental-marking.h"
-#include "list.h"
-#include "mark-compact.h"
-#include "objects-visiting.h"
-#include "spaces.h"
-#include "splay-tree-inl.h"
-#include "store-buffer.h"
-#include "v8globals.h"
+#include "src/allocation.h"
+#include "src/assert-scope.h"
+#include "src/counters.h"
+#include "src/globals.h"
+#include "src/heap/gc-tracer.h"
+#include "src/heap/incremental-marking.h"
+#include "src/heap/mark-compact.h"
+#include "src/heap/objects-visiting.h"
+#include "src/heap/spaces.h"
+#include "src/heap/store-buffer.h"
+#include "src/list.h"
+#include "src/splay-tree-inl.h"
 
 namespace v8 {
 namespace internal {
@@ -43,6 +43,7 @@ namespace internal {
   V(Map, shared_function_info_map, SharedFunctionInfoMap) \
   V(Map, meta_map, MetaMap) \
   V(Map, heap_number_map, HeapNumberMap) \
+  V(Map, mutable_heap_number_map, MutableHeapNumberMap) \
   V(Map, native_context_map, NativeContextMap) \
   V(Map, fixed_array_map, FixedArrayMap) \
   V(Map, code_map, CodeMap) \
@@ -78,33 +79,24 @@ namespace internal {
   V(Map, sliced_string_map, SlicedStringMap) \
   V(Map, sliced_ascii_string_map, SlicedAsciiStringMap) \
   V(Map, external_string_map, ExternalStringMap) \
-  V(Map, \
-    external_string_with_one_byte_data_map, \
+  V(Map, external_string_with_one_byte_data_map, \
     ExternalStringWithOneByteDataMap) \
   V(Map, external_ascii_string_map, ExternalAsciiStringMap) \
   V(Map, short_external_string_map, ShortExternalStringMap) \
-  V(Map, \
-    short_external_string_with_one_byte_data_map, \
+  V(Map, short_external_string_with_one_byte_data_map, \
     ShortExternalStringWithOneByteDataMap) \
   V(Map, internalized_string_map, InternalizedStringMap) \
   V(Map, ascii_internalized_string_map, AsciiInternalizedStringMap) \
-  V(Map, \
-    external_internalized_string_map, \
-    ExternalInternalizedStringMap) \
-  V(Map, \
-    external_internalized_string_with_one_byte_data_map, \
+  V(Map, external_internalized_string_map, ExternalInternalizedStringMap) \
+  V(Map, external_internalized_string_with_one_byte_data_map, \
     ExternalInternalizedStringWithOneByteDataMap) \
-  V(Map, \
-    external_ascii_internalized_string_map, \
+  V(Map, external_ascii_internalized_string_map, \
    ExternalAsciiInternalizedStringMap) \
-  V(Map, \
-    short_external_internalized_string_map, \
+  V(Map, short_external_internalized_string_map, \
    ShortExternalInternalizedStringMap) \
-  V(Map, \
-    short_external_internalized_string_with_one_byte_data_map, \
+  V(Map, short_external_internalized_string_with_one_byte_data_map, \
    ShortExternalInternalizedStringWithOneByteDataMap) \
-  V(Map, \
-    short_external_ascii_internalized_string_map, \
+  V(Map, short_external_ascii_internalized_string_map, \
    ShortExternalAsciiInternalizedStringMap) \
   V(Map, short_external_ascii_string_map, ShortExternalAsciiStringMap) \
   V(Map, undetectable_string_map, UndetectableStringMap) \
@@ -118,20 +110,16 @@ namespace internal {
   V(Map, external_float32_array_map, ExternalFloat32ArrayMap) \
   V(Map, external_float64_array_map, ExternalFloat64ArrayMap) \
   V(Map, external_uint8_clamped_array_map, ExternalUint8ClampedArrayMap) \
-  V(ExternalArray, empty_external_int8_array, \
-    EmptyExternalInt8Array) \
-  V(ExternalArray, empty_external_uint8_array, \
+  V(ExternalArray, empty_external_int8_array, EmptyExternalInt8Array) \
+  V(ExternalArray, empty_external_uint8_array, EmptyExternalUint8Array) \
   V(ExternalArray, empty_external_int16_array, EmptyExternalInt16Array) \
-  V(ExternalArray, empty_external_uint16_array, \
-    EmptyExternalUint16Array) \
+  V(ExternalArray, empty_external_uint16_array, EmptyExternalUint16Array) \
   V(ExternalArray, empty_external_int32_array, EmptyExternalInt32Array) \
-  V(ExternalArray, empty_external_uint32_array, \
-    EmptyExternalUint32Array) \
+  V(ExternalArray, empty_external_uint32_array, EmptyExternalUint32Array) \
   V(ExternalArray, empty_external_float32_array, EmptyExternalFloat32Array) \
   V(ExternalArray, empty_external_float64_array, EmptyExternalFloat64Array) \
   V(ExternalArray, empty_external_uint8_clamped_array, \
-    EmptyExternalUint8ClampedArray) \
+    EmptyExternalUint8ClampedArray)                                           \
   V(Map, fixed_uint8_array_map, FixedUint8ArrayMap) \
   V(Map, fixed_int8_array_map, FixedInt8ArrayMap) \
   V(Map, fixed_uint16_array_map, FixedUint16ArrayMap) \
@@ -150,7 +138,7 @@ namespace internal {
   V(FixedTypedArrayBase, empty_fixed_float32_array, EmptyFixedFloat32Array) \
   V(FixedTypedArrayBase, empty_fixed_float64_array, EmptyFixedFloat64Array) \
   V(FixedTypedArrayBase, empty_fixed_uint8_clamped_array, \
-    EmptyFixedUint8ClampedArray) \
+    EmptyFixedUint8ClampedArray)                                              \
   V(Map, sloppy_arguments_elements_map, SloppyArgumentsElementsMap) \
   V(Map, function_context_map, FunctionContextMap) \
   V(Map, catch_context_map, CatchContextMap) \
@@ -190,183 +178,174 @@ namespace internal {
   V(Symbol, nonexistent_symbol, NonExistentSymbol) \
   V(Symbol, elements_transition_symbol, ElementsTransitionSymbol) \
   V(SeededNumberDictionary, empty_slow_element_dictionary, \
-    EmptySlowElementDictionary) \
+    EmptySlowElementDictionary)                                               \
   V(Symbol, observed_symbol, ObservedSymbol) \
   V(Symbol, uninitialized_symbol, UninitializedSymbol) \
   V(Symbol, megamorphic_symbol, MegamorphicSymbol) \
+  V(Symbol, stack_trace_symbol, StackTraceSymbol) \
+  V(Symbol, detailed_stack_trace_symbol, DetailedStackTraceSymbol) \
+  V(Symbol, normal_ic_symbol, NormalICSymbol) \
   V(FixedArray, materialized_objects, MaterializedObjects) \
   V(FixedArray, allocation_sites_scratchpad, AllocationSitesScratchpad) \
-  V(JSObject, microtask_state, MicrotaskState)
+  V(FixedArray, microtask_queue, MicrotaskQueue)
 
 // Entries in this list are limited to Smis and are not visited during GC.
-#define SMI_ROOT_LIST(V) \
-  V(Smi, stack_limit, StackLimit) \
-  V(Smi, real_stack_limit, RealStackLimit) \
-  V(Smi, last_script_id, LastScriptId) \
-  V(Smi, arguments_adaptor_deopt_pc_offset, ArgumentsAdaptorDeoptPCOffset) \
-  V(Smi, construct_stub_deopt_pc_offset, ConstructStubDeoptPCOffset) \
-  V(Smi, getter_stub_deopt_pc_offset, GetterStubDeoptPCOffset) \
+#define SMI_ROOT_LIST(V)                                                     \
+  V(Smi, stack_limit, StackLimit)                                            \
+  V(Smi, real_stack_limit, RealStackLimit)                                   \
+  V(Smi, last_script_id, LastScriptId)                                       \
+  V(Smi, arguments_adaptor_deopt_pc_offset, ArgumentsAdaptorDeoptPCOffset)   \
+  V(Smi, construct_stub_deopt_pc_offset, ConstructStubDeoptPCOffset)         \
+  V(Smi, getter_stub_deopt_pc_offset, GetterStubDeoptPCOffset)               \
   V(Smi, setter_stub_deopt_pc_offset, SetterStubDeoptPCOffset)
 
-#define ROOT_LIST(V) \
-  STRONG_ROOT_LIST(V) \
-  SMI_ROOT_LIST(V) \
+#define ROOT_LIST(V)  \
+  STRONG_ROOT_LIST(V) \
+  SMI_ROOT_LIST(V)    \
   V(StringTable, string_table, StringTable)
 
 // Heap roots that are known to be immortal immovable, for which we can safely
 // skip write barriers.
-#define IMMORTAL_IMMOVABLE_ROOT_LIST(V) \
-  V(byte_array_map) \
-  V(free_space_map) \
-  V(one_pointer_filler_map) \
-  V(two_pointer_filler_map) \
-  V(undefined_value) \
-  V(the_hole_value) \
-  V(null_value) \
-  V(true_value) \
-  V(false_value) \
-  V(uninitialized_value) \
-  V(cell_map) \
-  V(global_property_cell_map) \
-  V(shared_function_info_map) \
-  V(meta_map) \
-  V(heap_number_map) \
-  V(native_context_map) \
-  V(fixed_array_map) \
-  V(code_map) \
-  V(scope_info_map) \
-  V(fixed_cow_array_map) \
-  V(fixed_double_array_map) \
-  V(constant_pool_array_map) \
-  V(no_interceptor_result_sentinel) \
-  V(hash_table_map) \
-  V(ordered_hash_table_map) \
-  V(empty_fixed_array) \
-  V(empty_byte_array) \
-  V(empty_descriptor_array) \
-  V(empty_constant_pool_array) \
-  V(arguments_marker) \
-  V(symbol_map) \
-  V(sloppy_arguments_elements_map) \
-  V(function_context_map) \
-  V(catch_context_map) \
-  V(with_context_map) \
-  V(block_context_map) \
-  V(module_context_map) \
-  V(global_context_map) \
-  V(undefined_map) \
-  V(the_hole_map) \
-  V(null_map) \
-  V(boolean_map) \
-  V(uninitialized_map) \
-  V(message_object_map) \
-  V(foreign_map) \
+#define IMMORTAL_IMMOVABLE_ROOT_LIST(V) \
+  V(byte_array_map)                     \
+  V(free_space_map)                     \
+  V(one_pointer_filler_map)             \
+  V(two_pointer_filler_map)             \
+  V(undefined_value)                    \
+  V(the_hole_value)                     \
+  V(null_value)                         \
+  V(true_value)                         \
+  V(false_value)                        \
+  V(uninitialized_value)                \
+  V(cell_map)                           \
+  V(global_property_cell_map)           \
+  V(shared_function_info_map)           \
+  V(meta_map)                           \
+  V(heap_number_map)                    \
+  V(mutable_heap_number_map)            \
+  V(native_context_map)                 \
+  V(fixed_array_map)                    \
+  V(code_map)                           \
+  V(scope_info_map)                     \
+  V(fixed_cow_array_map)                \
+  V(fixed_double_array_map)             \
+  V(constant_pool_array_map)            \
+  V(no_interceptor_result_sentinel)     \
+  V(hash_table_map)                     \
+  V(ordered_hash_table_map)             \
+  V(empty_fixed_array)                  \
+  V(empty_byte_array)                   \
+  V(empty_descriptor_array)             \
+  V(empty_constant_pool_array)          \
+  V(arguments_marker)                   \
+  V(symbol_map)                         \
+  V(sloppy_arguments_elements_map)      \
+  V(function_context_map)               \
+  V(catch_context_map)                  \
+  V(with_context_map)                   \
+  V(block_context_map)                  \
+  V(module_context_map)                 \
+  V(global_context_map)                 \
+  V(undefined_map)                      \
+  V(the_hole_map)                       \
+  V(null_map)                           \
+  V(boolean_map)                        \
+  V(uninitialized_map)                  \
+  V(message_object_map)                 \
+  V(foreign_map)                        \
   V(neander_map)
 
-#define INTERNALIZED_STRING_LIST(V) \
-  V(Array_string, "Array") \
-  V(Object_string, "Object") \
-  V(proto_string, "__proto__") \
-  V(arguments_string, "arguments") \
-  V(Arguments_string, "Arguments") \
-  V(call_string, "call") \
-  V(apply_string, "apply") \
-  V(caller_string, "caller") \
-  V(boolean_string, "boolean") \
-  V(Boolean_string, "Boolean") \
-  V(callee_string, "callee") \
-  V(constructor_string, "constructor") \
-  V(dot_result_string, ".result") \
-  V(dot_for_string, ".for.") \
-  V(dot_iterator_string, ".iterator") \
-  V(dot_generator_object_string, ".generator_object") \
-  V(eval_string, "eval") \
-  V(empty_string, "") \
-  V(function_string, "function") \
-  V(length_string, "length") \
-  V(module_string, "module") \
-  V(name_string, "name") \
-  V(native_string, "native") \
-  V(null_string, "null") \
-  V(number_string, "number") \
-  V(Number_string, "Number") \
-  V(nan_string, "NaN") \
-  V(RegExp_string, "RegExp") \
-  V(source_string, "source") \
-  V(global_string, "global") \
-  V(ignore_case_string, "ignoreCase") \
-  V(multiline_string, "multiline") \
-  V(input_string, "input") \
-  V(index_string, "index") \
-  V(last_index_string, "lastIndex") \
-  V(object_string, "object") \
-  V(literals_string, "literals") \
-  V(prototype_string, "prototype") \
-  V(string_string, "string") \
-  V(String_string, "String") \
-  V(symbol_string, "symbol") \
-  V(Symbol_string, "Symbol") \
-  V(for_string, "for") \
-  V(for_api_string, "for_api") \
-  V(for_intern_string, "for_intern") \
-  V(private_api_string, "private_api") \
-  V(private_intern_string, "private_intern") \
-  V(Date_string, "Date") \
-  V(this_string, "this") \
-  V(to_string_string, "toString") \
-  V(char_at_string, "CharAt") \
-  V(undefined_string, "undefined") \
-  V(value_of_string, "valueOf") \
-  V(stack_string, "stack") \
-  V(toJSON_string, "toJSON") \
-  V(InitializeVarGlobal_string, "InitializeVarGlobal") \
-  V(InitializeConstGlobal_string, "InitializeConstGlobal") \
-  V(KeyedLoadElementMonomorphic_string, \
-    "KeyedLoadElementMonomorphic") \
-  V(KeyedStoreElementMonomorphic_string, \
-    "KeyedStoreElementMonomorphic") \
-  V(stack_overflow_string, "kStackOverflowBoilerplate") \
-  V(illegal_access_string, "illegal access") \
-  V(get_string, "get") \
-  V(set_string, "set") \
-  V(map_field_string, "%map") \
-  V(elements_field_string, "%elements") \
-  V(length_field_string, "%length") \
-  V(cell_value_string, "%cell_value") \
-  V(function_class_string, "Function") \
-  V(illegal_argument_string, "illegal argument") \
-  V(MakeReferenceError_string, "MakeReferenceError") \
-  V(MakeSyntaxError_string, "MakeSyntaxError") \
-  V(MakeTypeError_string, "MakeTypeError") \
-  V(unknown_label_string, "unknown_label") \
-  V(space_string, " ") \
-  V(exec_string, "exec") \
-  V(zero_string, "0") \
-  V(global_eval_string, "GlobalEval") \
-  V(identity_hash_string, "v8::IdentityHash") \
-  V(closure_string, "(closure)") \
-  V(use_strict_string, "use strict") \
-  V(dot_string, ".") \
-  V(anonymous_function_string, "(anonymous function)") \
-  V(compare_ic_string, "==") \
-  V(strict_compare_ic_string, "===") \
-  V(infinity_string, "Infinity") \
-  V(minus_infinity_string, "-Infinity") \
-  V(hidden_stack_trace_string, "v8::hidden_stack_trace") \
-  V(query_colon_string, "(?:)") \
-  V(Generator_string, "Generator") \
-  V(throw_string, "throw") \
-  V(done_string, "done") \
-  V(value_string, "value") \
-  V(next_string, "next") \
-  V(byte_length_string, "byteLength") \
-  V(byte_offset_string, "byteOffset") \
-  V(buffer_string, "buffer") \
-  V(intl_initialized_marker_string, "v8::intl_initialized_marker") \
+#define INTERNALIZED_STRING_LIST(V)                      \
+  V(Array_string, "Array")                               \
+  V(Object_string, "Object")                             \
+  V(proto_string, "__proto__")                           \
+  V(arguments_string, "arguments")                       \
+  V(Arguments_string, "Arguments")                       \
+  V(call_string, "call")                                 \
+  V(apply_string, "apply")                               \
+  V(caller_string, "caller")                             \
+  V(boolean_string, "boolean")                           \
+  V(Boolean_string, "Boolean")                           \
+  V(callee_string, "callee")                             \
+  V(constructor_string, "constructor")                   \
+  V(dot_result_string, ".result")                        \
+  V(dot_for_string, ".for.")                             \
+  V(eval_string, "eval")                                 \
+  V(empty_string, "")                                    \
+  V(function_string, "function")                         \
+  V(length_string, "length")                             \
+  V(name_string, "name")                                 \
+  V(null_string, "null")                                 \
+  V(number_string, "number")                             \
+  V(Number_string, "Number")                             \
+  V(nan_string, "NaN")                                   \
+  V(RegExp_string, "RegExp")                             \
+  V(source_string, "source")                             \
+  V(source_url_string, "source_url")                     \
+  V(source_mapping_url_string, "source_mapping_url")     \
+  V(global_string, "global")                             \
+  V(ignore_case_string, "ignoreCase")                    \
+  V(multiline_string, "multiline")                       \
+  V(input_string, "input")                               \
+  V(index_string, "index")                               \
+  V(last_index_string, "lastIndex")                      \
+  V(object_string, "object")                             \
+  V(literals_string, "literals")                         \
+  V(prototype_string, "prototype")                       \
+  V(string_string, "string")                             \
+  V(String_string, "String")                             \
+  V(symbol_string, "symbol")                             \
+  V(Symbol_string, "Symbol")                             \
+  V(for_string, "for")                                   \
+  V(for_api_string, "for_api")                           \
+  V(for_intern_string, "for_intern")                     \
+  V(private_api_string, "private_api")                   \
+  V(private_intern_string, "private_intern")             \
+  V(Date_string, "Date")                                 \
+  V(to_string_string, "toString")                        \
+  V(char_at_string, "CharAt")                            \
+  V(undefined_string, "undefined")                       \
+  V(value_of_string, "valueOf")                          \
+  V(stack_string, "stack")                               \
+  V(toJSON_string, "toJSON")                             \
+  V(InitializeVarGlobal_string, "InitializeVarGlobal")   \
+  V(InitializeConstGlobal_string, "InitializeConstGlobal") \
+  V(KeyedLoadMonomorphic_string, "KeyedLoadMonomorphic") \
+  V(KeyedStoreMonomorphic_string, "KeyedStoreMonomorphic") \
+  V(stack_overflow_string, "kStackOverflowBoilerplate")  \
+  V(illegal_access_string, "illegal access")             \
+  V(get_string, "get")                                   \
+  V(set_string, "set")                                   \
+  V(map_field_string, "%map")                            \
+  V(elements_field_string, "%elements")                  \
+  V(length_field_string, "%length")                      \
+  V(cell_value_string, "%cell_value")                    \
+  V(function_class_string, "Function")                   \
+  V(illegal_argument_string, "illegal argument")         \
+  V(space_string, " ")                                   \
+  V(exec_string, "exec")                                 \
+  V(zero_string, "0")                                    \
+  V(global_eval_string, "GlobalEval")                    \
+  V(identity_hash_string, "v8::IdentityHash")            \
+  V(closure_string, "(closure)")                         \
+  V(dot_string, ".")                                     \
+  V(compare_ic_string, "==")                             \
+  V(strict_compare_ic_string, "===")                     \
+  V(infinity_string, "Infinity")                         \
+  V(minus_infinity_string, "-Infinity")                  \
+  V(query_colon_string, "(?:)")                          \
+  V(Generator_string, "Generator")                       \
+  V(throw_string, "throw")                               \
+  V(done_string, "done")                                 \
+  V(value_string, "value")                               \
+  V(next_string, "next")                                 \
+  V(byte_length_string, "byteLength")                    \
+  V(byte_offset_string, "byteOffset")                    \
+  V(buffer_string, "buffer")                             \
+  V(intl_initialized_marker_string, "v8::intl_initialized_marker") \
   V(intl_impl_object_string, "v8::intl_object")
 
 // Forward declarations.
-class GCTracer;
 class HeapStats;
 class Isolate;
 class WeakObjectRetainer;
@@ -378,8 +357,7 @@ typedef String* (*ExternalStringTableUpdaterCallback)(Heap* heap,
 class StoreBufferRebuilder {
  public:
   explicit StoreBufferRebuilder(StoreBuffer* store_buffer)
-      : store_buffer_(store_buffer) {
-  }
+      : store_buffer_(store_buffer) {}
 
   void Callback(MemoryChunk* page, StoreBufferEvent event);
 
@@ -396,7 +374,6 @@ class StoreBufferRebuilder {
 };
 
 
-
 // A queue of objects promoted during scavenge. Each object is accompanied
 // by it's size to avoid dereferencing a map pointer for scanning.
 class PromotionQueue {
@@ -406,12 +383,12 @@ class PromotionQueue {
         rear_(NULL),
         limit_(NULL),
         emergency_stack_(0),
-        heap_(heap) { }
+        heap_(heap) {}
 
   void Initialize();
 
   void Destroy() {
-    ASSERT(is_empty());
+    DCHECK(is_empty());
     delete emergency_stack_;
     emergency_stack_ = NULL;
   }
@@ -427,7 +404,7 @@ class PromotionQueue {
       return;
     }
 
-    ASSERT(GetHeadPage() == Page::FromAllocationTop(limit));
+    DCHECK(GetHeadPage() == Page::FromAllocationTop(limit));
     limit_ = reinterpret_cast<intptr_t*>(limit);
 
     if (limit_ <= rear_) {
@@ -437,15 +414,27 @@ class PromotionQueue {
     RelocateQueueHead();
   }
 
+  bool IsBelowPromotionQueue(Address to_space_top) {
+    // If the given to-space top pointer and the head of the promotion queue
+    // are not on the same page, then the to-space objects are below the
+    // promotion queue.
+    if (GetHeadPage() != Page::FromAddress(to_space_top)) {
+      return true;
+    }
+    // If the to space top pointer is smaller or equal than the promotion
+    // queue head, then the to-space objects are below the promotion queue.
+    return reinterpret_cast<intptr_t*>(to_space_top) <= rear_;
+  }
+
   bool is_empty() {
     return (front_ == rear_) &&
-        (emergency_stack_ == NULL || emergency_stack_->length() == 0);
+           (emergency_stack_ == NULL || emergency_stack_->length() == 0);
   }
 
   inline void insert(HeapObject* target, int size);
 
   void remove(HeapObject** target, int* size) {
-    ASSERT(!is_empty());
+    DCHECK(!is_empty());
     if (front_ == rear_) {
       Entry e = emergency_stack_->RemoveLast();
       *target = e.obj_;
@@ -456,9 +445,8 @@ class PromotionQueue {
     if (NewSpacePage::IsAtStart(reinterpret_cast<Address>(front_))) {
       NewSpacePage* front_page =
          NewSpacePage::FromAddress(reinterpret_cast<Address>(front_));
-      ASSERT(!front_page->prev_page()->is_anchor());
-      front_ =
-          reinterpret_cast<intptr_t*>(front_page->prev_page()->area_end());
+      DCHECK(!front_page->prev_page()->is_anchor());
+      front_ = reinterpret_cast<intptr_t*>(front_page->prev_page()->area_end());
     }
     *target = reinterpret_cast<HeapObject*>(*(--front_));
     *size = static_cast<int>(*(--front_));
@@ -478,7 +466,7 @@ class PromotionQueue {
   static const int kEntrySizeInWords = 2;
 
   struct Entry {
-    Entry(HeapObject* obj, int size) : obj_(obj), size_(size) { }
+    Entry(HeapObject* obj, int size) : obj_(obj), size_(size) {}
 
     HeapObject* obj_;
     int size_;
@@ -493,8 +481,7 @@ class PromotionQueue {
 };
 
 
-typedef void (*ScavengingCallback)(Map* map,
-                                   HeapObject** slot,
+typedef void (*ScavengingCallback)(Map* map, HeapObject** slot,
                                    HeapObject* object);
 
 
@@ -516,7 +503,7 @@ class ExternalStringTable {
   void TearDown();
 
  private:
-  explicit ExternalStringTable(Heap* heap) : heap_(heap) { }
+  explicit ExternalStringTable(Heap* heap) : heap_(heap) {}
 
   friend class Heap;
 
@@ -546,12 +533,10 @@ enum ArrayStorageAllocationMode {
 
 class Heap {
  public:
-  // Configure heap size before setup. Return false if the heap has been
+  // Configure heap size in MB before setup. Return false if the heap has been
  // set up already.
-  bool ConfigureHeap(int max_semispace_size,
-                     intptr_t max_old_space_size,
-                     intptr_t max_executable_size,
-                     intptr_t code_range_size);
+  bool ConfigureHeap(int max_semi_space_size, int max_old_space_size,
+                     int max_executable_size, size_t code_range_size);
   bool ConfigureHeapDefault();
 
   // Prepares the heap, setting up memory areas that are needed in the isolate
@@ -581,7 +566,7 @@ class Heap {
   intptr_t MaxReserved() {
     return 4 * reserved_semispace_size_ + max_old_generation_size_;
   }
-  int MaxSemiSpaceSize() { return max_semispace_size_; }
+  int MaxSemiSpaceSize() { return max_semi_space_size_; }
   int ReservedSemiSpaceSize() { return reserved_semispace_size_; }
   int InitialSemiSpaceSize() { return initial_semispace_size_; }
   intptr_t MaxOldGenerationSize() { return max_old_generation_size_; }
@@ -628,9 +613,7 @@ class Heap {
   OldSpace* code_space() { return code_space_; }
   MapSpace* map_space() { return map_space_; }
   CellSpace* cell_space() { return cell_space_; }
-  PropertyCellSpace* property_cell_space() {
-    return property_cell_space_;
-  }
+  PropertyCellSpace* property_cell_space() { return property_cell_space_; }
   LargeObjectSpace* lo_space() { return lo_space_; }
   PagedSpace* paged_space(int idx) {
     switch (idx) {
@@ -657,9 +640,6 @@ class Heap {
   Address always_allocate_scope_depth_address() {
     return reinterpret_cast<Address>(&always_allocate_scope_depth_);
   }
-  bool linear_allocation() {
-    return linear_allocation_scope_depth_ != 0;
-  }
 
   Address* NewSpaceAllocationTopAddress() {
     return new_space_.allocation_top_address();
@@ -685,8 +665,8 @@ class Heap {
   // Returns a deep copy of the JavaScript object.
   // Properties and elements are copied too.
   // Optionally takes an AllocationSite to be appended in an AllocationMemento.
-  MUST_USE_RESULT AllocationResult CopyJSObject(JSObject* source,
-                                                AllocationSite* site = NULL);
+  MUST_USE_RESULT AllocationResult
+      CopyJSObject(JSObject* source, AllocationSite* site = NULL);
 
   // Clear the Instanceof cache (used when a prototype changes).
   inline void ClearInstanceofCache();
@@ -697,7 +677,7 @@ class Heap {
   // For use during bootup.
   void RepairFreeListsAfterBoot();
 
-  template<typename T>
+  template <typename T>
   static inline bool IsOneByte(T t, int chars);
 
   // Move len elements within a given array from src_index index to dst_index
@@ -720,16 +700,26 @@ class Heap {
   inline void FinalizeExternalString(String* string);
 
   // Initialize a filler object to keep the ability to iterate over the heap
-  // when shortening objects.
+  // when introducing gaps within pages.
   void CreateFillerObjectAt(Address addr, int size);
 
   bool CanMoveObjectStart(HeapObject* object);
 
+  // Indicates whether live bytes adjustment is triggered from within the GC
+  // code or from mutator code.
   enum InvocationMode { FROM_GC, FROM_MUTATOR };
 
-  // Maintain marking consistency for IncrementalMarking.
+  // Maintain consistency of live bytes during incremental marking.
   void AdjustLiveBytes(Address address, int by, InvocationMode mode);
 
+  // Trim the given array from the left. Note that this relocates the object
+  // start and hence is only valid if there is only a single reference to it.
+  FixedArrayBase* LeftTrimFixedArray(FixedArrayBase* obj, int elements_to_trim);
+
+  // Trim the given array from the right.
+  template<Heap::InvocationMode mode>
+  void RightTrimFixedArray(FixedArrayBase* obj, int elements_to_trim);
+
   // Converts the given boolean condition to JavaScript boolean value.
   inline Object* ToBoolean(bool condition);
 
@@ -737,8 +727,7 @@ class Heap {
   // Returns whether there is a chance that another major GC could
   // collect more garbage.
   inline bool CollectGarbage(
-      AllocationSpace space,
-      const char* gc_reason = NULL,
+      AllocationSpace space, const char* gc_reason = NULL,
       const GCCallbackFlags gc_callback_flags = kNoGCCallbackFlags);
 
   static const int kNoGCFlags = 0;
@@ -755,8 +744,7 @@ class Heap {
   // non-zero, then the slower precise sweeper is used, which leaves the heap
   // in a state where we can iterate over the heap visiting all objects.
   void CollectAllGarbage(
-      int flags,
-      const char* gc_reason = NULL,
+      int flags, const char* gc_reason = NULL,
       const GCCallbackFlags gc_callback_flags = kNoGCCallbackFlags);
 
   // Last hope GC, should try to squeeze as much as possible.
@@ -765,10 +753,6 @@ class Heap {
   // Check whether the heap is currently iterable.
   bool IsHeapIterable();
 
-  // Ensure that we have swept all spaces in such a way that we can iterate
-  // over all objects.  May cause a GC.
-  void EnsureHeapIsIterable();
-
   // Notify the heap that a context has been disposed.
   int NotifyContextDisposed();
 
@@ -789,40 +773,33 @@ class Heap {
   PromotionQueue* promotion_queue() { return &promotion_queue_; }
 
   void AddGCPrologueCallback(v8::Isolate::GCPrologueCallback callback,
-                             GCType gc_type_filter,
-                             bool pass_isolate = true);
+                             GCType gc_type_filter, bool pass_isolate = true);
   void RemoveGCPrologueCallback(v8::Isolate::GCPrologueCallback callback);
 
   void AddGCEpilogueCallback(v8::Isolate::GCEpilogueCallback callback,
-                             GCType gc_type_filter,
-                             bool pass_isolate = true);
+                             GCType gc_type_filter, bool pass_isolate = true);
   void RemoveGCEpilogueCallback(v8::Isolate::GCEpilogueCallback callback);
 
-  // Heap root getters. We have versions with and without type::cast() here.
-  // You can't use type::cast during GC because the assert fails.
-  // TODO(1490): Try removing the unchecked accessors, now that GC marking does
-  // not corrupt the map.
-#define ROOT_ACCESSOR(type, name, camel_name) \
-  type* name() { \
-    return type::cast(roots_[k##camel_name##RootIndex]); \
-  } \
-  type* raw_unchecked_##name() { \
-    return reinterpret_cast<type*>(roots_[k##camel_name##RootIndex]); \
+// Heap root getters. We have versions with and without type::cast() here.
+// You can't use type::cast during GC because the assert fails.
+// TODO(1490): Try removing the unchecked accessors, now that GC marking does
+// not corrupt the map.
+#define ROOT_ACCESSOR(type, name, camel_name)                           \
+  type* name() { return type::cast(roots_[k##camel_name##RootIndex]); } \
+  type* raw_unchecked_##name() {                                        \
+    return reinterpret_cast<type*>(roots_[k##camel_name##RootIndex]);   \
   }
   ROOT_LIST(ROOT_ACCESSOR)
 #undef ROOT_ACCESSOR
 
   // Utility type maps
-#define STRUCT_MAP_ACCESSOR(NAME, Name, name) \
-  Map* name##_map() { \
-    return Map::cast(roots_[k##Name##MapRootIndex]); \
-  }
+#define STRUCT_MAP_ACCESSOR(NAME, Name, name) \
+  Map* name##_map() { return Map::cast(roots_[k##Name##MapRootIndex]); }
   STRUCT_LIST(STRUCT_MAP_ACCESSOR)
 #undef STRUCT_MAP_ACCESSOR
 
-#define STRING_ACCESSOR(name, str) String* name() { \
-    return String::cast(roots_[k##name##RootIndex]); \
-  }
+#define STRING_ACCESSOR(name, str) \
+  String* name() { return String::cast(roots_[k##name##RootIndex]); }
   INTERNALIZED_STRING_LIST(STRING_ACCESSOR)
 #undef STRING_ACCESSOR
 
@@ -833,21 +810,28 @@ class Heap {
   void set_native_contexts_list(Object* object) {
     native_contexts_list_ = object;
   }
-  Object* native_contexts_list() { return native_contexts_list_; }
+  Object* native_contexts_list() const { return native_contexts_list_; }
 
-  void set_array_buffers_list(Object* object) {
-    array_buffers_list_ = object;
-  }
-  Object* array_buffers_list() { return array_buffers_list_; }
+  void set_array_buffers_list(Object* object) { array_buffers_list_ = object; }
+  Object* array_buffers_list() const { return array_buffers_list_; }
 
   void set_allocation_sites_list(Object* object) {
     allocation_sites_list_ = object;
   }
   Object* allocation_sites_list() { return allocation_sites_list_; }
+
+  // Used in CreateAllocationSiteStub and the (de)serializer.
   Object** allocation_sites_list_address() { return &allocation_sites_list_; }
 
   Object* weak_object_to_code_table() { return weak_object_to_code_table_; }
 
+  void set_encountered_weak_collections(Object* weak_collection) {
+    encountered_weak_collections_ = weak_collection;
+  }
+  Object* encountered_weak_collections() const {
+    return encountered_weak_collections_;
+  }
+
   // Number of mark-sweeps.
   unsigned int ms_count() { return ms_count_; }
 
@@ -863,8 +847,7 @@ class Heap {
   // Iterate pointers to from semispace of new space found in memory interval
   // from start to end.
-  void IterateAndMarkPointersToFromSpace(Address start,
-                                         Address end,
+  void IterateAndMarkPointersToFromSpace(Address start, Address end,
                                          ObjectSlotCallback callback);
 
   // Returns whether the object resides in new space.
@@ -936,11 +919,6 @@ class Heap {
     return reinterpret_cast<Address*>(&roots_[kStoreBufferTopRootIndex]);
   }
 
-  // Get address of native contexts list for serialization support.
-  Object** native_contexts_list_address() {
-    return &native_contexts_list_;
-  }
-
 #ifdef VERIFY_HEAP
   // Verify the heap is in its normal state before or after a GC.
   void Verify();
@@ -977,6 +955,13 @@ class Heap {
 #endif
   }
 
+  // Number of "runtime allocations" done so far.
+  uint32_t allocations_count() { return allocations_count_; }
+
+  // Returns deterministic "time" value in ms. Works only with
+  // FLAG_verify_predictable.
+  double synthetic_time() { return allocations_count_ / 100.0; }
+
   // Print short heap statistics.
   void PrintShortHeapStatistics();
 
@@ -992,9 +977,7 @@ class Heap {
   inline bool IsInGCPostProcessing() { return gc_post_processing_depth_ > 0; }
 
 #ifdef DEBUG
-  void set_allocation_timeout(int timeout) {
-    allocation_timeout_ = timeout;
-  }
+  void set_allocation_timeout(int timeout) { allocation_timeout_ = timeout; }
 
   void TracePathToObjectFrom(Object* target, Object* root);
   void TracePathToObject(Object* target);
@@ -1008,10 +991,7 @@ class Heap {
   static inline void ScavengePointer(HeapObject** p);
   static inline void ScavengeObject(HeapObject** p, HeapObject* object);
 
-  enum ScratchpadSlotMode {
-    IGNORE_SCRATCHPAD_SLOT,
-    RECORD_SCRATCHPAD_SLOT
-  };
+  enum ScratchpadSlotMode { IGNORE_SCRATCHPAD_SLOT, RECORD_SCRATCHPAD_SLOT };
 
   // If an object has an AllocationMemento trailing it, return it, otherwise
   // return NULL;
@@ -1020,12 +1000,12 @@ class Heap {
   // An object may have an AllocationSite associated with it through a trailing
   // AllocationMemento. Its feedback should be updated when objects are found
   // in the heap.
-  static inline void UpdateAllocationSiteFeedback(
-      HeapObject* object, ScratchpadSlotMode mode);
+  static inline void UpdateAllocationSiteFeedback(HeapObject* object,
+                                                  ScratchpadSlotMode mode);
 
   // Support for partial snapshots. After calling this we have a linear
   // space to write objects in each space.
-  void ReserveSpace(int *sizes, Address* addresses);
+  void ReserveSpace(int* sizes, Address* addresses);
 
   //
   // Support for the API.
@@ -1033,27 +1013,6 @@ class Heap {
 
   void CreateApiObjects();
 
-  // Adjusts the amount of registered external memory.
-  // Returns the adjusted value.
-  inline int64_t AdjustAmountOfExternalAllocatedMemory(
-      int64_t change_in_bytes);
-
-  // This is only needed for testing high promotion mode.
-  void SetNewSpaceHighPromotionModeActive(bool mode) {
-    new_space_high_promotion_mode_active_ = mode;
-  }
-
-  // Returns the allocation mode (pre-tenuring) based on observed promotion
-  // rates of previous collections.
-  inline PretenureFlag GetPretenureMode() {
-    return FLAG_pretenuring && new_space_high_promotion_mode_active_
-        ? TENURED : NOT_TENURED;
-  }
-
-  inline Address* NewSpaceHighPromotionModeActiveAddress() {
-    return reinterpret_cast<Address*>(&new_space_high_promotion_mode_active_);
-  }
-
   inline intptr_t PromotedTotalSize() {
     int64_t total = PromotedSpaceSizeOfObjects() + PromotedExternalMemorySize();
     if (total > kMaxInt) return static_cast<intptr_t>(kMaxInt);
@@ -1072,25 +1031,31 @@ class Heap {
   static const intptr_t kMinimumOldGenerationAllocationLimit =
       8 * (Page::kPageSize > MB ? Page::kPageSize : MB);
 
-  static const int kLumpOfMemory = (i::kPointerSize / 4) * i::MB;
+  static const int kPointerMultiplier = i::kPointerSize / 4;
 
-  // The new space size has to be a power of 2.
-  static const int kMaxNewSpaceSizeLowMemoryDevice = 2 * kLumpOfMemory;
-  static const int kMaxNewSpaceSizeMediumMemoryDevice = 8 * kLumpOfMemory;
-  static const int kMaxNewSpaceSizeHighMemoryDevice = 16 * kLumpOfMemory;
-  static const int kMaxNewSpaceSizeHugeMemoryDevice = 16 * kLumpOfMemory;
+  // The new space size has to be a power of 2. Sizes are in MB.
+  static const int kMaxSemiSpaceSizeLowMemoryDevice = 1 * kPointerMultiplier;
+  static const int kMaxSemiSpaceSizeMediumMemoryDevice = 4 * kPointerMultiplier;
+  static const int kMaxSemiSpaceSizeHighMemoryDevice = 8 * kPointerMultiplier;
+  static const int kMaxSemiSpaceSizeHugeMemoryDevice = 8 * kPointerMultiplier;
 
   // The old space size has to be a multiple of Page::kPageSize.
-  static const int kMaxOldSpaceSizeLowMemoryDevice = 128 * kLumpOfMemory;
-  static const int kMaxOldSpaceSizeMediumMemoryDevice = 256 * kLumpOfMemory;
-  static const int kMaxOldSpaceSizeHighMemoryDevice = 512 * kLumpOfMemory;
-  static const int kMaxOldSpaceSizeHugeMemoryDevice = 700 * kLumpOfMemory;
+  // Sizes are in MB.
+  static const int kMaxOldSpaceSizeLowMemoryDevice = 128 * kPointerMultiplier;
+  static const int kMaxOldSpaceSizeMediumMemoryDevice =
+      256 * kPointerMultiplier;
+  static const int kMaxOldSpaceSizeHighMemoryDevice = 512 * kPointerMultiplier;
+  static const int kMaxOldSpaceSizeHugeMemoryDevice = 700 * kPointerMultiplier;
 
   // The executable size has to be a multiple of Page::kPageSize.
-  static const int kMaxExecutableSizeLowMemoryDevice = 96 * kLumpOfMemory;
-  static const int kMaxExecutableSizeMediumMemoryDevice = 192 * kLumpOfMemory;
-  static const int kMaxExecutableSizeHighMemoryDevice = 256 * kLumpOfMemory;
-  static const int kMaxExecutableSizeHugeMemoryDevice = 256 * kLumpOfMemory;
+  // Sizes are in MB.
+  static const int kMaxExecutableSizeLowMemoryDevice = 96 * kPointerMultiplier;
+  static const int kMaxExecutableSizeMediumMemoryDevice =
+      192 * kPointerMultiplier;
+  static const int kMaxExecutableSizeHighMemoryDevice =
+      256 * kPointerMultiplier;
+  static const int kMaxExecutableSizeHugeMemoryDevice =
+      256 * kPointerMultiplier;
 
   intptr_t OldGenerationAllocationLimit(intptr_t old_gen_size,
                                         int freed_global_handles);
@@ -1115,27 +1080,26 @@ class Heap {
   INTERNALIZED_STRING_LIST(STRING_INDEX_DECLARATION)
 #undef STRING_DECLARATION
 
-  // Utility type maps
+// Utility type maps
#define DECLARE_STRUCT_MAP(NAME, Name, name) k##Name##MapRootIndex,
   STRUCT_LIST(DECLARE_STRUCT_MAP)
 #undef DECLARE_STRUCT_MAP
-
   kStringTableRootIndex,
 
 #define ROOT_INDEX_DECLARATION(type, name, camel_name) k##camel_name##RootIndex,
   SMI_ROOT_LIST(ROOT_INDEX_DECLARATION)
 #undef ROOT_INDEX_DECLARATION
-
   kRootListLength,
   kStrongRootListLength = kStringTableRootIndex,
   kSmiRootsStart = kStringTableRootIndex + 1
 };
 
-  STATIC_CHECK(kUndefinedValueRootIndex == Internals::kUndefinedValueRootIndex);
-  STATIC_CHECK(kNullValueRootIndex == Internals::kNullValueRootIndex);
-  STATIC_CHECK(kTrueValueRootIndex == Internals::kTrueValueRootIndex);
-  STATIC_CHECK(kFalseValueRootIndex == Internals::kFalseValueRootIndex);
-  STATIC_CHECK(kempty_stringRootIndex == Internals::kEmptyStringRootIndex);
+  STATIC_ASSERT(kUndefinedValueRootIndex ==
+                Internals::kUndefinedValueRootIndex);
+  STATIC_ASSERT(kNullValueRootIndex == Internals::kNullValueRootIndex);
+  STATIC_ASSERT(kTrueValueRootIndex == Internals::kTrueValueRootIndex);
+  STATIC_ASSERT(kFalseValueRootIndex == Internals::kFalseValueRootIndex);
+  STATIC_ASSERT(kempty_stringRootIndex == Internals::kEmptyStringRootIndex);
 
   // Generated code can embed direct references to non-writable roots if
   // they are in new space.
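In the hunks above the device-class limits move from byte counts (kLumpOfMemory) to megabyte counts scaled by pointer width (kPointerMultiplier), matching the new "Configure heap size in MB" contract of ConfigureHeap. A minimal standalone C++ sketch of that arithmetic, reusing the constant names from the diff but with an illustrative byte conversion of my own (not V8's actual code):

    #include <cstdio>

    // Mirrors i::kPointerSize / 4 from the diff: 1 on 32-bit, 2 on 64-bit targets.
    static const int kPointerMultiplier = static_cast<int>(sizeof(void*)) / 4;

    // Limits are now plain MB counts, as the "Sizes are in MB" comments note.
    static const int kMaxSemiSpaceSizeHighMemoryDevice = 8 * kPointerMultiplier;
    static const int kMaxOldSpaceSizeHighMemoryDevice = 512 * kPointerMultiplier;

    static const long kMB = 1024L * 1024L;

    int main() {
      // ConfigureHeap-style use: multiply out to bytes only at the point of use.
      long semi_space_bytes = kMaxSemiSpaceSizeHighMemoryDevice * kMB;
      long old_space_bytes = kMaxOldSpaceSizeHighMemoryDevice * kMB;
      std::printf("semi-space limit: %ld bytes, old-space limit: %ld bytes\n",
                  semi_space_bytes, old_space_bytes);
      return 0;
    }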
@@ -1144,12 +1108,10 @@ class Heap {
   bool RootCanBeTreatedAsConstant(RootListIndex root_index);
 
   Map* MapForFixedTypedArray(ExternalArrayType array_type);
-  RootListIndex RootIndexForFixedTypedArray(
-      ExternalArrayType array_type);
+  RootListIndex RootIndexForFixedTypedArray(ExternalArrayType array_type);
 
   Map* MapForExternalArrayType(ExternalArrayType array_type);
-  RootListIndex RootIndexForExternalArrayType(
-      ExternalArrayType array_type);
+  RootListIndex RootIndexForExternalArrayType(ExternalArrayType array_type);
 
   RootListIndex RootIndexForEmptyExternalArray(ElementsKind kind);
   RootListIndex RootIndexForEmptyFixedTypedArray(ElementsKind kind);
@@ -1169,9 +1131,24 @@ class Heap {
   // Check new space expansion criteria and expand semispaces if it was hit.
   void CheckNewSpaceExpansionCriteria();
 
+  inline void IncrementPromotedObjectsSize(int object_size) {
+    DCHECK(object_size > 0);
+    promoted_objects_size_ += object_size;
+  }
+
+  inline void IncrementSemiSpaceCopiedObjectSize(int object_size) {
+    DCHECK(object_size > 0);
+    semi_space_copied_object_size_ += object_size;
+  }
+
+  inline void IncrementNodesDiedInNewSpace() { nodes_died_in_new_space_++; }
+
+  inline void IncrementNodesCopiedInNewSpace() { nodes_copied_in_new_space_++; }
+
+  inline void IncrementNodesPromoted() { nodes_promoted_++; }
+
   inline void IncrementYoungSurvivorsCounter(int survived) {
-    ASSERT(survived >= 0);
-    young_survivors_after_last_gc_ = survived;
+    DCHECK(survived >= 0);
     survived_since_last_expansion_ += survived;
   }
 
@@ -1198,17 +1175,15 @@ class Heap {
 
   void VisitExternalResources(v8::ExternalResourceVisitor* visitor);
 
-  // Helper function that governs the promotion policy from new space to
-  // old.  If the object's old address lies below the new space's age
-  // mark or if we've already filled the bottom 1/16th of the to space,
-  // we try to promote this object.
+  // An object should be promoted if the object has survived a
+  // scavenge operation.
   inline bool ShouldBePromoted(Address old_address, int object_size);
 
   void ClearJSFunctionResultCaches();
 
   void ClearNormalizedMapCaches();
 
-  GCTracer* tracer() { return tracer_; }
+  GCTracer* tracer() { return &tracer_; }
 
   // Returns the size of objects residing in non new spaces.
   intptr_t PromotedSpaceSizeOfObjects();
@@ -1226,6 +1201,10 @@ class Heap {
     }
   }
 
+  // Update GC statistics that are tracked on the Heap.
+  void UpdateCumulativeGCStatistics(double duration, double spent_in_mutator,
+                                    double marking_time);
+
   // Returns maximum GC pause.
   double get_max_gc_pause() { return max_gc_pause_; }
 
@@ -1235,48 +1214,22 @@ class Heap {
   // Returns minimal interval between two subsequent collections.
   double get_min_in_mutator() { return min_in_mutator_; }
 
-  // TODO(hpayer): remove, should be handled by GCTracer
-  void AddMarkingTime(double marking_time) {
-    marking_time_ += marking_time;
-  }
-
-  double marking_time() const {
-    return marking_time_;
-  }
-
-  // TODO(hpayer): remove, should be handled by GCTracer
-  void AddSweepingTime(double sweeping_time) {
-    sweeping_time_ += sweeping_time;
-  }
-
-  double sweeping_time() const {
-    return sweeping_time_;
-  }
-
   MarkCompactCollector* mark_compact_collector() {
     return &mark_compact_collector_;
   }
 
-  StoreBuffer* store_buffer() {
-    return &store_buffer_;
-  }
+  StoreBuffer* store_buffer() { return &store_buffer_; }
 
-  Marking* marking() {
-    return &marking_;
-  }
+  Marking* marking() { return &marking_; }
 
-  IncrementalMarking* incremental_marking() {
-    return &incremental_marking_;
-  }
+  IncrementalMarking* incremental_marking() { return &incremental_marking_; }
 
   ExternalStringTable* external_string_table() {
     return &external_string_table_;
   }
 
   // Returns the current sweep generation.
-  int sweep_generation() {
-    return sweep_generation_;
-  }
+  int sweep_generation() { return sweep_generation_; }
 
   inline Isolate* isolate();
 
@@ -1303,27 +1256,27 @@ class Heap {
   uint32_t HashSeed() {
     uint32_t seed = static_cast<uint32_t>(hash_seed()->value());
-    ASSERT(FLAG_randomize_hashes || seed == 0);
+    DCHECK(FLAG_randomize_hashes || seed == 0);
     return seed;
   }
 
   void SetArgumentsAdaptorDeoptPCOffset(int pc_offset) {
-    ASSERT(arguments_adaptor_deopt_pc_offset() == Smi::FromInt(0));
+    DCHECK(arguments_adaptor_deopt_pc_offset() == Smi::FromInt(0));
     set_arguments_adaptor_deopt_pc_offset(Smi::FromInt(pc_offset));
   }
 
   void SetConstructStubDeoptPCOffset(int pc_offset) {
-    ASSERT(construct_stub_deopt_pc_offset() == Smi::FromInt(0));
+    DCHECK(construct_stub_deopt_pc_offset() == Smi::FromInt(0));
     set_construct_stub_deopt_pc_offset(Smi::FromInt(pc_offset));
   }
 
   void SetGetterStubDeoptPCOffset(int pc_offset) {
-    ASSERT(getter_stub_deopt_pc_offset() == Smi::FromInt(0));
+    DCHECK(getter_stub_deopt_pc_offset() == Smi::FromInt(0));
    set_getter_stub_deopt_pc_offset(Smi::FromInt(pc_offset));
   }
 
   void SetSetterStubDeoptPCOffset(int pc_offset) {
-    ASSERT(setter_stub_deopt_pc_offset() == Smi::FromInt(0));
+    DCHECK(setter_stub_deopt_pc_offset() == Smi::FromInt(0));
     set_setter_stub_deopt_pc_offset(Smi::FromInt(pc_offset));
   }
 
@@ -1332,9 +1285,7 @@ class Heap {
 
   // Global inline caching age: it is incremented on some GCs after context
   // disposal. We use it to flush inline caches.
-  int global_ic_age() {
-    return global_ic_age_;
-  }
+  int global_ic_age() { return global_ic_age_; }
 
   void AgeInlineCaches() {
     global_ic_age_ = (global_ic_age_ + 1) & SharedFunctionInfo::ICAgeBits::kMax;
@@ -1348,6 +1299,12 @@ class Heap {
 
   void DeoptMarkedAllocationSites();
 
+  bool MaximumSizeScavenge() { return maximum_size_scavenges_ > 0; }
+
+  bool DeoptMaybeTenuredAllocationSites() {
+    return new_space_.IsAtMaximumCapacity() && maximum_size_scavenges_ == 0;
+  }
+
   // ObjectStats are kept in two arrays, counts and sizes. Related stats are
   // stored in a contiguous linear buffer. Stats groups are stored one after
   // another.
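A change that recurs throughout this diff is the rename of ASSERT to DCHECK. Both are debug-only checks; the sketch below shows the general shape of such a macro (illustrative only, not V8's actual definition, which lives in its base library):

    #include <cstdio>
    #include <cstdlib>

    #ifdef DEBUG
    // In debug builds the condition is evaluated and a failure aborts the process.
    #define DCHECK(condition)                                           \
      do {                                                              \
        if (!(condition)) {                                             \
          std::fprintf(stderr, "Debug check failed: %s\n", #condition); \
          std::abort();                                                 \
        }                                                               \
      } while (false)
    #else
    // In release builds the check compiles away entirely.
    #define DCHECK(condition) ((void)0)
    #endif

    int main() {
      int pc_offset = 42;
      DCHECK(pc_offset > 0);  // enforced in debug builds, free in release builds
      std::printf("pc_offset=%d\n", pc_offset);
      return 0;
    }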
@@ -1361,7 +1318,7 @@ class Heap {
   };
 
   void RecordObjectStats(InstanceType type, size_t size) {
-    ASSERT(type <= LAST_TYPE);
+    DCHECK(type <= LAST_TYPE);
     object_counts_[type]++;
     object_sizes_[type] += size;
   }
@@ -1370,9 +1327,9 @@ class Heap {
     int code_sub_type_index = FIRST_CODE_KIND_SUB_TYPE + code_sub_type;
     int code_age_index =
         FIRST_CODE_AGE_SUB_TYPE + code_age - Code::kFirstCodeAge;
-    ASSERT(code_sub_type_index >= FIRST_CODE_KIND_SUB_TYPE &&
+    DCHECK(code_sub_type_index >= FIRST_CODE_KIND_SUB_TYPE &&
            code_sub_type_index < FIRST_CODE_AGE_SUB_TYPE);
-    ASSERT(code_age_index >= FIRST_CODE_AGE_SUB_TYPE &&
+    DCHECK(code_age_index >= FIRST_CODE_AGE_SUB_TYPE &&
           code_age_index < OBJECT_STATS_COUNT);
     object_counts_[code_sub_type_index]++;
     object_sizes_[code_sub_type_index] += size;
@@ -1381,7 +1338,7 @@ class Heap {
   }
 
   void RecordFixedArraySubTypeStats(int array_sub_type, size_t size) {
-    ASSERT(array_sub_type <= LAST_FIXED_ARRAY_SUB_TYPE);
+    DCHECK(array_sub_type <= LAST_FIXED_ARRAY_SUB_TYPE);
     object_counts_[FIRST_FIXED_ARRAY_SUB_TYPE + array_sub_type]++;
     object_sizes_[FIRST_FIXED_ARRAY_SUB_TYPE + array_sub_type] += size;
   }
@@ -1397,9 +1354,7 @@ class Heap {
     }
 
-    ~RelocationLock() {
-      heap_->relocation_mutex_.Unlock();
-    }
+    ~RelocationLock() { heap_->relocation_mutex_.Unlock(); }
 
    private:
     Heap* heap_;
@@ -1419,70 +1374,80 @@ class Heap {
   static void FatalProcessOutOfMemory(const char* location,
                                       bool take_snapshot = false);
 
+  // This event is triggered after successful allocation of a new object made
+  // by runtime. Allocations of target space for object evacuation do not
+  // trigger the event. In order to track ALL allocations one must turn off
+  // FLAG_inline_new and FLAG_use_allocation_folding.
+  inline void OnAllocationEvent(HeapObject* object, int size_in_bytes);
+
+  // This event is triggered after object is moved to a new place.
+  inline void OnMoveEvent(HeapObject* target, HeapObject* source,
+                          int size_in_bytes);
+
  protected:
   // Methods made available to tests.
 
   // Allocates a JS Map in the heap.
-  MUST_USE_RESULT AllocationResult AllocateMap(
-      InstanceType instance_type,
-      int instance_size,
-      ElementsKind elements_kind = TERMINAL_FAST_ELEMENTS_KIND);
+  MUST_USE_RESULT AllocationResult
+      AllocateMap(InstanceType instance_type, int instance_size,
                  ElementsKind elements_kind = TERMINAL_FAST_ELEMENTS_KIND);
 
   // Allocates and initializes a new JavaScript object based on a
   // constructor.
   // If allocation_site is non-null, then a memento is emitted after the object
   // that points to the site.
-  MUST_USE_RESULT AllocationResult AllocateJSObject(
-      JSFunction* constructor,
-      PretenureFlag pretenure = NOT_TENURED,
-      AllocationSite* allocation_site = NULL);
+  MUST_USE_RESULT AllocationResult
+      AllocateJSObject(JSFunction* constructor,
                       PretenureFlag pretenure = NOT_TENURED,
                       AllocationSite* allocation_site = NULL);
 
   // Allocates and initializes a new JavaScript object based on a map.
   // Passing an allocation site means that a memento will be created that
   // points to the site.
-  MUST_USE_RESULT AllocationResult AllocateJSObjectFromMap(
-      Map* map,
-      PretenureFlag pretenure = NOT_TENURED,
-      bool alloc_props = true,
-      AllocationSite* allocation_site = NULL);
+  MUST_USE_RESULT AllocationResult
+      AllocateJSObjectFromMap(Map* map, PretenureFlag pretenure = NOT_TENURED,
                              bool alloc_props = true,
                              AllocationSite* allocation_site = NULL);
 
   // Allocated a HeapNumber from value.
-  MUST_USE_RESULT AllocationResult AllocateHeapNumber(
-      double value, PretenureFlag pretenure = NOT_TENURED);
+  MUST_USE_RESULT AllocationResult
+      AllocateHeapNumber(double value, MutableMode mode = IMMUTABLE,
                         PretenureFlag pretenure = NOT_TENURED);
 
   // Allocate a byte array of the specified length
-  MUST_USE_RESULT AllocationResult AllocateByteArray(
-      int length,
-      PretenureFlag pretenure = NOT_TENURED);
-
-  // Allocates an arguments object - optionally with an elements array.
-  MUST_USE_RESULT AllocationResult AllocateArgumentsObject(
-      Object* callee, int length);
+  MUST_USE_RESULT AllocationResult
+      AllocateByteArray(int length, PretenureFlag pretenure = NOT_TENURED);
 
   // Copy the code and scope info part of the code object, but insert
   // the provided data as the relocation information.
-  MUST_USE_RESULT AllocationResult CopyCode(Code* code,
-                                            Vector<byte> reloc_info);
+  MUST_USE_RESULT AllocationResult
      CopyCode(Code* code, Vector<byte> reloc_info);
 
   MUST_USE_RESULT AllocationResult CopyCode(Code* code);
 
   // Allocates a fixed array initialized with undefined values
-  MUST_USE_RESULT AllocationResult AllocateFixedArray(
-      int length,
-      PretenureFlag pretenure = NOT_TENURED);
+  MUST_USE_RESULT AllocationResult
      AllocateFixedArray(int length, PretenureFlag pretenure = NOT_TENURED);
 
  private:
   Heap();
 
+  // The amount of external memory registered through the API kept alive
+  // by global handles
+  int64_t amount_of_external_allocated_memory_;
+
+  // Caches the amount of external memory registered at the last global gc.
+  int64_t amount_of_external_allocated_memory_at_last_global_gc_;
+
   // This can be calculated directly from a pointer to the heap; however, it is
   // more expedient to get at the isolate directly from within Heap methods.
   Isolate* isolate_;
 
   Object* roots_[kRootListLength];
 
-  intptr_t code_range_size_;
+  size_t code_range_size_;
   int reserved_semispace_size_;
-  int max_semispace_size_;
+  int max_semi_space_size_;
   int initial_semispace_size_;
   intptr_t max_old_generation_size_;
   intptr_t max_executable_size_;
@@ -1496,7 +1461,6 @@ class Heap {
   int sweep_generation_;
 
   int always_allocate_scope_depth_;
-  int linear_allocation_scope_depth_;
 
   // For keeping track of context disposals.
   int contexts_disposed_;
@@ -1517,12 +1481,25 @@ class Heap {
   LargeObjectSpace* lo_space_;
   HeapState gc_state_;
   int gc_post_processing_depth_;
+  Address new_space_top_after_last_gc_;
 
   // Returns the amount of external memory registered since last global gc.
   int64_t PromotedExternalMemorySize();
 
-  unsigned int ms_count_;  // how many mark-sweep collections happened
-  unsigned int gc_count_;  // how many gc happened
+  // How many "runtime allocations" happened.
+  uint32_t allocations_count_;
+
+  // Running hash over allocations performed.
+  uint32_t raw_allocations_hash_;
+
+  // Countdown counter, dumps allocation hash when 0.
+  uint32_t dump_allocations_hash_countdown_;
+
+  // How many mark-sweep collections happened.
+  unsigned int ms_count_;
+
+  // How many gc happened.
+  unsigned int gc_count_;
 
   // For post mortem debugging.
   static const int kRememberedUnmappedPages = 128;
@@ -1532,12 +1509,12 @@ class Heap {
   // Total length of the strings we failed to flatten since the last GC.
   int unflattened_strings_length_;
 
-#define ROOT_ACCESSOR(type, name, camel_name) \
-  inline void set_##name(type* value) { \
-    /* The deserializer makes use of the fact that these common roots are */ \
-    /* never in new space and never on a page that is being compacted. */ \
-    ASSERT(k##camel_name##RootIndex >= kOldSpaceRoots || !InNewSpace(value)); \
-    roots_[k##camel_name##RootIndex] = value; \
+#define ROOT_ACCESSOR(type, name, camel_name)                                 \
+  inline void set_##name(type* value) {                                       \
+    /* The deserializer makes use of the fact that these common roots are */  \
+    /* never in new space and never on a page that is being compacted.    */  \
+    DCHECK(k##camel_name##RootIndex >= kOldSpaceRoots || !InNewSpace(value)); \
+    roots_[k##camel_name##RootIndex] = value;                                 \
   }
   ROOT_LIST(ROOT_ACCESSOR)
 #undef ROOT_ACCESSOR
@@ -1549,28 +1526,12 @@ class Heap {
   int allocation_timeout_;
 #endif  // DEBUG
 
-  // Indicates that the new space should be kept small due to high promotion
-  // rates caused by the mutator allocating a lot of long-lived objects.
-  // TODO(hpayer): change to bool if no longer accessed from generated code
-  intptr_t new_space_high_promotion_mode_active_;
-
   // Limit that triggers a global GC on the next (normally caused) GC.  This
   // is checked when we have already decided to do a GC to help determine
   // which collector to invoke, before expanding a paged space in the old
   // generation and on every allocation in large object space.
   intptr_t old_generation_allocation_limit_;
 
-  // Limit on the amount of externally allocated memory allowed
-  // between global GCs. If reached a global GC is forced.
-  intptr_t external_allocation_limit_;
-
-  // The amount of external memory registered through the API kept alive
-  // by global handles
-  int64_t amount_of_external_allocated_memory_;
-
-  // Caches the amount of external memory registered at the last global gc.
-  int64_t amount_of_external_allocated_memory_at_last_global_gc_;
-
   // Indicates that an allocation has failed in the old generation since the
   // last GC.
   bool old_gen_exhausted_;
@@ -1590,6 +1551,11 @@ class Heap {
   // start.
   Object* weak_object_to_code_table_;
 
+  // List of encountered weak collections (JSWeakMap and JSWeakSet) during
+  // marking. It is initialized during marking, destroyed after marking and
+  // contains Smi(0) while marking is not active.
+  Object* encountered_weak_collections_;
+
   StoreBufferRebuilder store_buffer_rebuilder_;
 
   struct StringTypeTable {
@@ -1621,10 +1587,8 @@ class Heap {
   // Allocations in the callback function are disallowed.
   struct GCPrologueCallbackPair {
     GCPrologueCallbackPair(v8::Isolate::GCPrologueCallback callback,
-                           GCType gc_type,
-                           bool pass_isolate)
-        : callback(callback), gc_type(gc_type), pass_isolate_(pass_isolate) {
-    }
+                           GCType gc_type, bool pass_isolate)
+        : callback(callback), gc_type(gc_type), pass_isolate_(pass_isolate) {}
     bool operator==(const GCPrologueCallbackPair& pair) const {
       return pair.callback == callback;
     }
@@ -1637,10 +1601,8 @@ class Heap {
 
   struct GCEpilogueCallbackPair {
     GCEpilogueCallbackPair(v8::Isolate::GCPrologueCallback callback,
-                           GCType gc_type,
-                           bool pass_isolate)
-        : callback(callback), gc_type(gc_type), pass_isolate_(pass_isolate) {
-    }
+                           GCType gc_type, bool pass_isolate)
+        : callback(callback), gc_type(gc_type), pass_isolate_(pass_isolate) {}
     bool operator==(const GCEpilogueCallbackPair& pair) const {
       return pair.callback == callback;
     }
@@ -1657,7 +1619,7 @@ class Heap {
   // Update the GC state.  Called from the mark-compact collector.
   void MarkMapPointersAsEncoded(bool encoded) {
-    ASSERT(!encoded);
+    DCHECK(!encoded);
     gc_safe_size_of_old_object_ = &GcSafeSizeOfOldObject;
   }
 
@@ -1681,12 +1643,15 @@ class Heap {
   // with the allocation memento of the object at the top
   void EnsureFillerObjectAtTop();
 
+  // Ensure that we have swept all spaces in such a way that we can iterate
+  // over all objects. May cause a GC.
+  void MakeHeapIterable();
+
   // Performs garbage collection operation.
   // Returns whether there is a chance that another major GC could
   // collect more garbage.
   bool CollectGarbage(
-      GarbageCollector collector,
-      const char* gc_reason,
+      GarbageCollector collector, const char* gc_reason,
       const char* collector_reason,
       const GCCallbackFlags gc_callback_flags = kNoGCCallbackFlags);
 
@@ -1695,7 +1660,6 @@ class Heap {
   // collect more garbage.
   bool PerformGarbageCollection(
       GarbageCollector collector,
-      GCTracer* tracer,
       const GCCallbackFlags gc_callback_flags = kNoGCCallbackFlags);
 
   inline void UpdateOldSpaceLimits();
@@ -1705,7 +1669,7 @@ class Heap {
   static AllocationSpace SelectSpace(int object_size,
                                      AllocationSpace preferred_old_space,
                                      PretenureFlag pretenure) {
-    ASSERT(preferred_old_space == OLD_POINTER_SPACE ||
+    DCHECK(preferred_old_space == OLD_POINTER_SPACE ||
           preferred_old_space == OLD_DATA_SPACE);
     if (object_size > Page::kMaxRegularHeapObjectSize) return LO_SPACE;
     return (pretenure == TENURED) ? preferred_old_space : NEW_SPACE;
@@ -1716,77 +1680,49 @@ class Heap {
   // performed by the runtime and should not be bypassed (to extend this to
   // inlined allocations, use the Heap::DisableInlineAllocation() support).
   MUST_USE_RESULT inline AllocationResult AllocateRaw(
-      int size_in_bytes,
-      AllocationSpace space,
-      AllocationSpace retry_space);
+      int size_in_bytes, AllocationSpace space, AllocationSpace retry_space);
 
   // Allocates a heap object based on the map.
-  MUST_USE_RESULT AllocationResult Allocate(
-      Map* map,
-      AllocationSpace space,
-      AllocationSite* allocation_site = NULL);
+  MUST_USE_RESULT AllocationResult
      Allocate(Map* map, AllocationSpace space,
               AllocationSite* allocation_site = NULL);
 
   // Allocates a partial map for bootstrapping.
-  MUST_USE_RESULT AllocationResult AllocatePartialMap(
-      InstanceType instance_type,
-      int instance_size);
+  MUST_USE_RESULT AllocationResult
      AllocatePartialMap(InstanceType instance_type, int instance_size);
 
   // Initializes a JSObject based on its map.
-  void InitializeJSObjectFromMap(JSObject* obj,
-                                 FixedArray* properties,
+  void InitializeJSObjectFromMap(JSObject* obj, FixedArray* properties,
                                  Map* map);
   void InitializeAllocationMemento(AllocationMemento* memento,
                                    AllocationSite* allocation_site);
 
   // Allocate a block of memory in the given space (filled with a filler).
   // Used as a fall-back for generated code when the space is full.
-  MUST_USE_RESULT AllocationResult AllocateFillerObject(int size,
-                                                        bool double_align,
-                                                        AllocationSpace space);
+  MUST_USE_RESULT AllocationResult
      AllocateFillerObject(int size, bool double_align, AllocationSpace space);
 
   // Allocate an uninitialized fixed array.
-  MUST_USE_RESULT AllocationResult AllocateRawFixedArray(
-      int length, PretenureFlag pretenure);
+  MUST_USE_RESULT AllocationResult
      AllocateRawFixedArray(int length, PretenureFlag pretenure);
 
   // Allocate an uninitialized fixed double array.
-  MUST_USE_RESULT AllocationResult AllocateRawFixedDoubleArray(
-      int length, PretenureFlag pretenure);
+  MUST_USE_RESULT AllocationResult
      AllocateRawFixedDoubleArray(int length, PretenureFlag pretenure);
 
   // Allocate an initialized fixed array with the given filler value.
-  MUST_USE_RESULT AllocationResult AllocateFixedArrayWithFiller(
-      int length, PretenureFlag pretenure, Object* filler);
+  MUST_USE_RESULT AllocationResult
      AllocateFixedArrayWithFiller(int length, PretenureFlag pretenure,
                                   Object* filler);
 
   // Allocate and partially initializes a String.  There are two String
   // encodings: ASCII and two byte.  These functions allocate a string of the
   // given length and set its map and length fields.  The characters of the
   // string are uninitialized.
-  MUST_USE_RESULT AllocationResult AllocateRawOneByteString(
-      int length, PretenureFlag pretenure);
-  MUST_USE_RESULT AllocationResult AllocateRawTwoByteString(
-      int length, PretenureFlag pretenure);
-
-  // Allocates and fully initializes a String.  There are two String
-  // encodings: ASCII and two byte. One should choose between the three string
-  // allocation functions based on the encoding of the string buffer used to
-  // initialized the string.
-  //   - ...FromAscii initializes the string from a buffer that is ASCII
-  //     encoded (it does not check that the buffer is ASCII encoded) and the
-  //     result will be ASCII encoded.
-  //   - ...FromUTF8 initializes the string from a buffer that is UTF-8
-  //     encoded.  If the characters are all single-byte characters, the
-  //     result will be ASCII encoded, otherwise it will converted to two
-  //     byte.
-  //   - ...FromTwoByte initializes the string from a buffer that is two-byte
-  //     encoded.  If the characters are all single-byte characters, the
-  //     result will be converted to ASCII, otherwise it will be left as
-  //     two-byte.
-  MUST_USE_RESULT AllocationResult AllocateStringFromUtf8Slow(
-      Vector<const char> str,
-      int non_ascii_start,
-      PretenureFlag pretenure = NOT_TENURED);
-  MUST_USE_RESULT AllocationResult AllocateStringFromTwoByte(
-      Vector<const uc16> str,
-      PretenureFlag pretenure = NOT_TENURED);
+  MUST_USE_RESULT AllocationResult
      AllocateRawOneByteString(int length, PretenureFlag pretenure);
+  MUST_USE_RESULT AllocationResult
      AllocateRawTwoByteString(int length, PretenureFlag pretenure);
 
   bool CreateInitialMaps();
   void CreateInitialObjects();
@@ -1794,23 +1730,19 @@ class Heap {
   // Allocates an internalized string in old space based on the character
   // stream.
MUST_USE_RESULT inline AllocationResult AllocateInternalizedStringFromUtf8( - Vector<const char> str, - int chars, - uint32_t hash_field); + Vector<const char> str, int chars, uint32_t hash_field); MUST_USE_RESULT inline AllocationResult AllocateOneByteInternalizedString( - Vector<const uint8_t> str, - uint32_t hash_field); + Vector<const uint8_t> str, uint32_t hash_field); MUST_USE_RESULT inline AllocationResult AllocateTwoByteInternalizedString( - Vector<const uc16> str, - uint32_t hash_field); + Vector<const uc16> str, uint32_t hash_field); - template<bool is_one_byte, typename T> - MUST_USE_RESULT AllocationResult AllocateInternalizedStringImpl( - T t, int chars, uint32_t hash_field); + template <bool is_one_byte, typename T> + MUST_USE_RESULT AllocationResult + AllocateInternalizedStringImpl(T t, int chars, uint32_t hash_field); - template<typename T> + template <typename T> MUST_USE_RESULT inline AllocationResult AllocateInternalizedStringImpl( T t, int chars, uint32_t hash_field); @@ -1823,8 +1755,8 @@ class Heap { // Make a copy of src, set the map, and return the copy. Returns // Failure::RetryAfterGC(requested_bytes, space) if the allocation failed. - MUST_USE_RESULT AllocationResult CopyFixedArrayWithMap(FixedArray* src, - Map* map); + MUST_USE_RESULT AllocationResult + CopyFixedArrayWithMap(FixedArray* src, Map* map); // Make a copy of src and return it. Returns // Failure::RetryAfterGC(requested_bytes, space) if the allocation failed. @@ -1839,46 +1771,43 @@ class Heap { // Computes a single character string where the character has code. // A cache is used for ASCII codes. - MUST_USE_RESULT AllocationResult LookupSingleCharacterStringFromCode( - uint16_t code); + MUST_USE_RESULT AllocationResult + LookupSingleCharacterStringFromCode(uint16_t code); // Allocate a symbol in old space. MUST_USE_RESULT AllocationResult AllocateSymbol(); // Make a copy of src, set the map, and return the copy. - MUST_USE_RESULT AllocationResult CopyConstantPoolArrayWithMap( - ConstantPoolArray* src, Map* map); + MUST_USE_RESULT AllocationResult + CopyConstantPoolArrayWithMap(ConstantPoolArray* src, Map* map); MUST_USE_RESULT AllocationResult AllocateConstantPoolArray( - int number_of_int64_entries, - int number_of_code_ptr_entries, - int number_of_heap_ptr_entries, - int number_of_int32_entries); + const ConstantPoolArray::NumberOfEntries& small); + + MUST_USE_RESULT AllocationResult AllocateExtendedConstantPoolArray( + const ConstantPoolArray::NumberOfEntries& small, + const ConstantPoolArray::NumberOfEntries& extended); // Allocates an external array of the specified length and type. - MUST_USE_RESULT AllocationResult AllocateExternalArray( - int length, - ExternalArrayType array_type, - void* external_pointer, - PretenureFlag pretenure); + MUST_USE_RESULT AllocationResult + AllocateExternalArray(int length, ExternalArrayType array_type, + void* external_pointer, PretenureFlag pretenure); // Allocates a fixed typed array of the specified length and type. - MUST_USE_RESULT AllocationResult AllocateFixedTypedArray( - int length, - ExternalArrayType array_type, - PretenureFlag pretenure); + MUST_USE_RESULT AllocationResult + AllocateFixedTypedArray(int length, ExternalArrayType array_type, + PretenureFlag pretenure); // Make a copy of src and return it. MUST_USE_RESULT AllocationResult CopyAndTenureFixedCOWArray(FixedArray* src); // Make a copy of src, set the map, and return the copy. 
- MUST_USE_RESULT AllocationResult CopyFixedDoubleArrayWithMap( - FixedDoubleArray* src, Map* map); + MUST_USE_RESULT AllocationResult + CopyFixedDoubleArrayWithMap(FixedDoubleArray* src, Map* map); // Allocates a fixed double array with uninitialized values. Returns MUST_USE_RESULT AllocationResult AllocateUninitializedFixedDoubleArray( - int length, - PretenureFlag pretenure = NOT_TENURED); + int length, PretenureFlag pretenure = NOT_TENURED); // These five Create*EntryStub functions are here and forced to not be inlined // because of a gcc-4.4 bug that assigns wrong vtable entries. @@ -1891,12 +1820,12 @@ class Heap { MUST_USE_RESULT AllocationResult AllocateEmptyFixedArray(); // Allocate empty external array of given type. - MUST_USE_RESULT AllocationResult AllocateEmptyExternalArray( - ExternalArrayType array_type); + MUST_USE_RESULT AllocationResult + AllocateEmptyExternalArray(ExternalArrayType array_type); // Allocate empty fixed typed array of given type. - MUST_USE_RESULT AllocationResult AllocateEmptyFixedTypedArray( - ExternalArrayType array_type); + MUST_USE_RESULT AllocationResult + AllocateEmptyFixedTypedArray(ExternalArrayType array_type); // Allocate empty constant pool array. MUST_USE_RESULT AllocationResult AllocateEmptyConstantPoolArray(); @@ -1911,11 +1840,11 @@ class Heap { MUST_USE_RESULT AllocationResult AllocateStruct(InstanceType type); // Allocates a new foreign object. - MUST_USE_RESULT AllocationResult AllocateForeign( - Address address, PretenureFlag pretenure = NOT_TENURED); + MUST_USE_RESULT AllocationResult + AllocateForeign(Address address, PretenureFlag pretenure = NOT_TENURED); - MUST_USE_RESULT AllocationResult AllocateCode(int object_size, - bool immovable); + MUST_USE_RESULT AllocationResult + AllocateCode(int object_size, bool immovable); MUST_USE_RESULT AllocationResult InternalizeStringWithKey(HashTableKey* key); @@ -1934,23 +1863,21 @@ class Heap { void ZapFromSpace(); static String* UpdateNewSpaceReferenceInExternalStringTableEntry( - Heap* heap, - Object** pointer); + Heap* heap, Object** pointer); Address DoScavenge(ObjectVisitor* scavenge_visitor, Address new_space_front); - static void ScavengeStoreBufferCallback(Heap* heap, - MemoryChunk* page, + static void ScavengeStoreBufferCallback(Heap* heap, MemoryChunk* page, StoreBufferEvent event); // Performs a major collection in the whole heap. - void MarkCompact(GCTracer* tracer); + void MarkCompact(); // Code to be run before and after mark-compact. void MarkCompactPrologue(); - void ProcessNativeContexts(WeakObjectRetainer* retainer, bool record_slots); - void ProcessArrayBuffers(WeakObjectRetainer* retainer, bool record_slots); - void ProcessAllocationSites(WeakObjectRetainer* retainer, bool record_slots); + void ProcessNativeContexts(WeakObjectRetainer* retainer); + void ProcessArrayBuffers(WeakObjectRetainer* retainer); + void ProcessAllocationSites(WeakObjectRetainer* retainer); // Deopts all code that contains allocation instruction which are tenured or // not tenured. Moreover it clears the pretenuring allocation site statistics. @@ -1974,7 +1901,7 @@ class Heap { // Total RegExp code ever generated double total_regexp_code_generated_; - GCTracer* tracer_; + GCTracer tracer_; // Creates and installs the full-sized number string cache. 
int FullSizeNumberStringCacheLength(); @@ -1991,78 +1918,35 @@ class Heap { void AddAllocationSiteToScratchpad(AllocationSite* site, ScratchpadSlotMode mode); - void UpdateSurvivalRateTrend(int start_new_space_size); - - enum SurvivalRateTrend { INCREASING, STABLE, DECREASING, FLUCTUATING }; + void UpdateSurvivalStatistics(int start_new_space_size); static const int kYoungSurvivalRateHighThreshold = 90; - static const int kYoungSurvivalRateLowThreshold = 10; static const int kYoungSurvivalRateAllowedDeviation = 15; - static const int kOldSurvivalRateLowThreshold = 20; + static const int kOldSurvivalRateLowThreshold = 10; - int young_survivors_after_last_gc_; int high_survival_rate_period_length_; - int low_survival_rate_period_length_; - double survival_rate_; - SurvivalRateTrend previous_survival_rate_trend_; - SurvivalRateTrend survival_rate_trend_; - - void set_survival_rate_trend(SurvivalRateTrend survival_rate_trend) { - ASSERT(survival_rate_trend != FLUCTUATING); - previous_survival_rate_trend_ = survival_rate_trend_; - survival_rate_trend_ = survival_rate_trend; - } - - SurvivalRateTrend survival_rate_trend() { - if (survival_rate_trend_ == STABLE) { - return STABLE; - } else if (previous_survival_rate_trend_ == STABLE) { - return survival_rate_trend_; - } else if (survival_rate_trend_ != previous_survival_rate_trend_) { - return FLUCTUATING; - } else { - return survival_rate_trend_; - } - } - - bool IsStableOrIncreasingSurvivalTrend() { - switch (survival_rate_trend()) { - case STABLE: - case INCREASING: - return true; - default: - return false; - } - } - - bool IsStableOrDecreasingSurvivalTrend() { - switch (survival_rate_trend()) { - case STABLE: - case DECREASING: - return true; - default: - return false; - } - } - - bool IsIncreasingSurvivalTrend() { - return survival_rate_trend() == INCREASING; - } + intptr_t promoted_objects_size_; + double promotion_rate_; + intptr_t semi_space_copied_object_size_; + double semi_space_copied_rate_; + int nodes_died_in_new_space_; + int nodes_copied_in_new_space_; + int nodes_promoted_; - bool IsHighSurvivalRate() { - return high_survival_rate_period_length_ > 0; - } + // This is the pretenuring trigger for allocation sites that are in maybe + // tenure state. When we switched to the maximum new space size we deoptimize + // the code that belongs to the allocation site and derive the lifetime + // of the allocation site. + unsigned int maximum_size_scavenges_; - bool IsLowSurvivalRate() { - return low_survival_rate_period_length_ > 0; - } + // TODO(hpayer): Allocation site pretenuring may make this method obsolete. + // Re-visit incremental marking heuristics. + bool IsHighSurvivalRate() { return high_survival_rate_period_length_ > 0; } void SelectScavengingVisitorsTable(); - void StartIdleRound() { - mark_sweeps_since_idle_round_started_ = 0; - } + void StartIdleRound() { mark_sweeps_since_idle_round_started_ = 0; } void FinishIdleRound() { mark_sweeps_since_idle_round_started_ = kMaxMarkSweepsInIdleRound; @@ -2085,15 +1969,12 @@ class Heap { return heap_size_mb / kMbPerMs; } - // Returns true if no more GC work is left. 
- bool IdleGlobalGC(); - void AdvanceIdleIncrementalMarking(intptr_t step_size); void ClearObjectStats(bool clear_last_time_stats = false); void set_weak_object_to_code_table(Object* value) { - ASSERT(!InNewSpace(value)); + DCHECK(!InNewSpace(value)); weak_object_to_code_table_ = value; } @@ -2101,6 +1982,10 @@ class Heap { return &weak_object_to_code_table_; } + inline void UpdateAllocationsHash(HeapObject* object); + inline void UpdateAllocationsHash(uint32_t value); + inline void PrintAlloctionsHash(); + static const int kInitialStringTableSize = 2048; static const int kInitialEvalCacheSize = 64; static const int kInitialNumberStringCacheSize = 256; @@ -2123,11 +2008,6 @@ class Heap { // Minimal interval between two subsequent collections. double min_in_mutator_; - // Size of objects alive after last GC. - intptr_t alive_after_last_gc_; - - double last_gc_end_timestamp_; - // Cumulative GC time spent in marking double marking_time_; @@ -2182,14 +2062,15 @@ class Heap { MemoryChunk* chunks_queued_for_free_; - Mutex relocation_mutex_; + base::Mutex relocation_mutex_; int gc_callbacks_depth_; + friend class AlwaysAllocateScope; friend class Factory; + friend class GCCallbacksScope; friend class GCTracer; - friend class AlwaysAllocateScope; - friend class Page; + friend class HeapIterator; friend class Isolate; friend class MarkCompactCollector; friend class MarkCompactMarkingVisitor; @@ -2197,7 +2078,7 @@ class Heap { #ifdef VERIFY_HEAP friend class NoWeakObjectVerificationScope; #endif - friend class GCCallbacksScope; + friend class Page; DISALLOW_COPY_AND_ASSIGN(Heap); }; @@ -2208,33 +2089,33 @@ class HeapStats { static const int kStartMarker = 0xDECADE00; static const int kEndMarker = 0xDECADE01; - int* start_marker; // 0 - int* new_space_size; // 1 - int* new_space_capacity; // 2 - intptr_t* old_pointer_space_size; // 3 - intptr_t* old_pointer_space_capacity; // 4 - intptr_t* old_data_space_size; // 5 - intptr_t* old_data_space_capacity; // 6 - intptr_t* code_space_size; // 7 - intptr_t* code_space_capacity; // 8 - intptr_t* map_space_size; // 9 - intptr_t* map_space_capacity; // 10 - intptr_t* cell_space_size; // 11 - intptr_t* cell_space_capacity; // 12 - intptr_t* lo_space_size; // 13 - int* global_handle_count; // 14 - int* weak_global_handle_count; // 15 - int* pending_global_handle_count; // 16 - int* near_death_global_handle_count; // 17 - int* free_global_handle_count; // 18 - intptr_t* memory_allocator_size; // 19 - intptr_t* memory_allocator_capacity; // 20 - int* objects_per_type; // 21 - int* size_per_type; // 22 - int* os_error; // 23 - int* end_marker; // 24 - intptr_t* property_cell_space_size; // 25 - intptr_t* property_cell_space_capacity; // 26 + int* start_marker; // 0 + int* new_space_size; // 1 + int* new_space_capacity; // 2 + intptr_t* old_pointer_space_size; // 3 + intptr_t* old_pointer_space_capacity; // 4 + intptr_t* old_data_space_size; // 5 + intptr_t* old_data_space_capacity; // 6 + intptr_t* code_space_size; // 7 + intptr_t* code_space_capacity; // 8 + intptr_t* map_space_size; // 9 + intptr_t* map_space_capacity; // 10 + intptr_t* cell_space_size; // 11 + intptr_t* cell_space_capacity; // 12 + intptr_t* lo_space_size; // 13 + int* global_handle_count; // 14 + int* weak_global_handle_count; // 15 + int* pending_global_handle_count; // 16 + int* near_death_global_handle_count; // 17 + int* free_global_handle_count; // 18 + intptr_t* memory_allocator_size; // 19 + intptr_t* memory_allocator_capacity; // 20 + int* objects_per_type; // 21 + int* size_per_type; 
// 22 + int* os_error; // 23 + int* end_marker; // 24 + intptr_t* property_cell_space_size; // 25 + intptr_t* property_cell_space_capacity; // 26 }; @@ -2276,14 +2157,14 @@ class GCCallbacksScope { // point into the heap to a location that has a map pointer at its first word. // Caveat: Heap::Contains is an approximation because it can return true for // objects in a heap space but above the allocation pointer. -class VerifyPointersVisitor: public ObjectVisitor { +class VerifyPointersVisitor : public ObjectVisitor { public: inline void VisitPointers(Object** start, Object** end); }; // Verify that all objects are Smis. -class VerifySmisVisitor: public ObjectVisitor { +class VerifySmisVisitor : public ObjectVisitor { public: inline void VisitPointers(Object** start, Object** end); }; @@ -2295,6 +2176,7 @@ class AllSpaces BASE_EMBEDDED { public: explicit AllSpaces(Heap* heap) : heap_(heap), counter_(FIRST_SPACE) {} Space* next(); + private: Heap* heap_; int counter_; @@ -2308,6 +2190,7 @@ class OldSpaces BASE_EMBEDDED { public: explicit OldSpaces(Heap* heap) : heap_(heap), counter_(OLD_POINTER_SPACE) {} OldSpace* next(); + private: Heap* heap_; int counter_; @@ -2321,6 +2204,7 @@ class PagedSpaces BASE_EMBEDDED { public: explicit PagedSpaces(Heap* heap) : heap_(heap), counter_(OLD_POINTER_SPACE) {} PagedSpace* next(); + private: Heap* heap_; int counter_; @@ -2343,7 +2227,7 @@ class SpaceIterator : public Malloced { ObjectIterator* CreateIterator(); Heap* heap_; - int current_space_; // from enum AllocationSpace. + int current_space_; // from enum AllocationSpace. ObjectIterator* iterator_; // object iterator for the current space. HeapObjectCallback size_func_; }; @@ -2353,6 +2237,9 @@ class SpaceIterator : public Malloced { // aggregates the specific iterators for the different spaces as // these can only iterate over one space only. // +// HeapIterator ensures there is no allocation during its lifetime +// (using an embedded DisallowHeapAllocation instance). +// // HeapIterator can skip free list nodes (that is, de-allocated heap // objects that still remain in the heap). As implementation of free // nodes filtering uses GC marks, it can't be used during MS/MC GC @@ -2362,10 +2249,7 @@ class HeapObjectsFilter; class HeapIterator BASE_EMBEDDED { public: - enum HeapObjectsFiltering { - kNoFiltering, - kFilterUnreachable - }; + enum HeapObjectsFiltering { kNoFiltering, kFilterUnreachable }; explicit HeapIterator(Heap* heap); HeapIterator(Heap* heap, HeapObjectsFiltering filtering); @@ -2375,12 +2259,18 @@ class HeapIterator BASE_EMBEDDED { void reset(); private: + struct MakeHeapIterableHelper { + explicit MakeHeapIterableHelper(Heap* heap) { heap->MakeHeapIterable(); } + }; + // Perform the initialization. void Init(); // Perform all necessary shutdown (destruction) work. void Shutdown(); HeapObject* NextObject(); + MakeHeapIterableHelper make_heap_iterable_helper_; + DisallowHeapAllocation no_heap_allocation_; Heap* heap_; HeapObjectsFiltering filtering_; HeapObjectsFilter* filter_; @@ -2409,6 +2299,9 @@ class KeyedLookupCache { static const int kMapHashShift = 5; static const int kHashMask = -4; // Zero the last two bits. static const int kEntriesPerBucket = 4; + static const int kEntryLength = 2; + static const int kMapIndex = 0; + static const int kKeyIndex = 1; static const int kNotFound = -1; // kEntriesPerBucket should be a power of 2. @@ -2428,9 +2321,7 @@ class KeyedLookupCache { // Get the address of the keys and field_offsets arrays. 
Used in // generated code to perform cache lookups. - Address keys_address() { - return reinterpret_cast<Address>(&keys_); - } + Address keys_address() { return reinterpret_cast<Address>(&keys_); } Address field_offsets_address() { return reinterpret_cast<Address>(&field_offsets_); @@ -2468,7 +2359,7 @@ class DescriptorLookupCache { // Update an element in the cache. void Update(Map* source, Name* name, int result) { - ASSERT(result != kAbsent); + DCHECK(result != kAbsent); if (name->IsUniqueName()) { int index = Hash(source, name); Key& key = keys_[index]; @@ -2495,11 +2386,11 @@ class DescriptorLookupCache { static int Hash(Object* source, Name* name) { // Uses only lower 32 bits if pointers are larger. uint32_t source_hash = - static_cast<uint32_t>(reinterpret_cast<uintptr_t>(source)) - >> kPointerSizeLog2; + static_cast<uint32_t>(reinterpret_cast<uintptr_t>(source)) >> + kPointerSizeLog2; uint32_t name_hash = - static_cast<uint32_t>(reinterpret_cast<uintptr_t>(name)) - >> kPointerSizeLog2; + static_cast<uint32_t>(reinterpret_cast<uintptr_t>(name)) >> + kPointerSizeLog2; return (source_hash ^ name_hash) % kLength; } @@ -2517,162 +2408,18 @@ class DescriptorLookupCache { }; -// GCTracer collects and prints ONE line after each garbage collector -// invocation IFF --trace_gc is used. - -class GCTracer BASE_EMBEDDED { - public: - class Scope BASE_EMBEDDED { - public: - enum ScopeId { - EXTERNAL, - MC_MARK, - MC_SWEEP, - MC_SWEEP_NEWSPACE, - MC_SWEEP_OLDSPACE, - MC_EVACUATE_PAGES, - MC_UPDATE_NEW_TO_NEW_POINTERS, - MC_UPDATE_ROOT_TO_NEW_POINTERS, - MC_UPDATE_OLD_TO_NEW_POINTERS, - MC_UPDATE_POINTERS_TO_EVACUATED, - MC_UPDATE_POINTERS_BETWEEN_EVACUATED, - MC_UPDATE_MISC_POINTERS, - MC_WEAKCOLLECTION_PROCESS, - MC_WEAKCOLLECTION_CLEAR, - MC_FLUSH_CODE, - kNumberOfScopes - }; - - Scope(GCTracer* tracer, ScopeId scope) - : tracer_(tracer), - scope_(scope) { - start_time_ = OS::TimeCurrentMillis(); - } - - ~Scope() { - ASSERT(scope_ < kNumberOfScopes); // scope_ is unsigned. - tracer_->scopes_[scope_] += OS::TimeCurrentMillis() - start_time_; - } - - private: - GCTracer* tracer_; - ScopeId scope_; - double start_time_; - }; - - explicit GCTracer(Heap* heap, - const char* gc_reason, - const char* collector_reason); - ~GCTracer(); - - // Sets the collector. - void set_collector(GarbageCollector collector) { collector_ = collector; } - - // Sets the GC count. - void set_gc_count(unsigned int count) { gc_count_ = count; } - - // Sets the full GC count. - void set_full_gc_count(int count) { full_gc_count_ = count; } - - void increment_promoted_objects_size(int object_size) { - promoted_objects_size_ += object_size; - } - - void increment_nodes_died_in_new_space() { - nodes_died_in_new_space_++; - } - - void increment_nodes_copied_in_new_space() { - nodes_copied_in_new_space_++; - } - - void increment_nodes_promoted() { - nodes_promoted_++; - } - - private: - // Returns a string matching the collector. - const char* CollectorString(); - - // Returns size of object in heap (in MB). - inline double SizeOfHeapObjects(); - - // Timestamp set in the constructor. - double start_time_; - - // Size of objects in heap set in constructor. - intptr_t start_object_size_; - - // Size of memory allocated from OS set in constructor. - intptr_t start_memory_size_; - - // Type of collector. - GarbageCollector collector_; - - // A count (including this one, e.g. the first collection is 1) of the - // number of garbage collections. 
- unsigned int gc_count_; - - // A count (including this one) of the number of full garbage collections. - int full_gc_count_; - - // Amounts of time spent in different scopes during GC. - double scopes_[Scope::kNumberOfScopes]; - - // Total amount of space either wasted or contained in one of free lists - // before the current GC. - intptr_t in_free_list_or_wasted_before_gc_; - - // Difference between space used in the heap at the beginning of the current - // collection and the end of the previous collection. - intptr_t allocated_since_last_gc_; - - // Amount of time spent in mutator that is time elapsed between end of the - // previous collection and the beginning of the current one. - double spent_in_mutator_; - - // Size of objects promoted during the current collection. - intptr_t promoted_objects_size_; - - // Number of died nodes in the new space. - int nodes_died_in_new_space_; - - // Number of copied nodes to the new space. - int nodes_copied_in_new_space_; - - // Number of promoted nodes to the old space. - int nodes_promoted_; - - // Incremental marking steps counters. - int steps_count_; - double steps_took_; - double longest_step_; - int steps_count_since_last_gc_; - double steps_took_since_last_gc_; - - Heap* heap_; - - const char* gc_reason_; - const char* collector_reason_; -}; - - class RegExpResultsCache { public: enum ResultsCacheType { REGEXP_MULTIPLE_INDICES, STRING_SPLIT_SUBSTRINGS }; // Attempt to retrieve a cached result. On failure, 0 is returned as a Smi. // On success, the returned result is guaranteed to be a COW-array. - static Object* Lookup(Heap* heap, - String* key_string, - Object* key_pattern, + static Object* Lookup(Heap* heap, String* key_string, Object* key_pattern, ResultsCacheType type); // Attempt to add value_array to the cache specified by type. On success, // value_array is turned into a COW-array. - static void Enter(Isolate* isolate, - Handle<String> key_string, - Handle<Object> key_pattern, - Handle<FixedArray> value_array, + static void Enter(Isolate* isolate, Handle<String> key_string, + Handle<Object> key_pattern, Handle<FixedArray> value_array, ResultsCacheType type); static void Clear(FixedArray* cache); static const int kRegExpResultsCacheSize = 0x100; @@ -2713,13 +2460,13 @@ class IntrusiveMarking { static void ClearMark(HeapObject* object) { uintptr_t map_word = object->map_word().ToRawValue(); object->set_map_word(MapWord::FromRawValue(map_word | kNotMarkedBit)); - ASSERT(!IsMarked(object)); + DCHECK(!IsMarked(object)); } static void SetMark(HeapObject* object) { uintptr_t map_word = object->map_word().ToRawValue(); object->set_map_word(MapWord::FromRawValue(map_word & ~kNotMarkedBit)); - ASSERT(IsMarked(object)); + DCHECK(IsMarked(object)); } static Map* MapOfMarkedObject(HeapObject* object) { @@ -2733,7 +2480,7 @@ class IntrusiveMarking { private: static const uintptr_t kNotMarkedBit = 0x1; - STATIC_ASSERT((kHeapObjectTag & kNotMarkedBit) != 0); + STATIC_ASSERT((kHeapObjectTag & kNotMarkedBit) != 0); // NOLINT }; @@ -2748,11 +2495,13 @@ class PathTracer : public ObjectVisitor { FIND_FIRST // Will stop the search after first match. }; + // Tags 0, 1, and 3 are used. Use 2 for marking visited HeapObject. + static const int kMarkTag = 2; + // For the WhatToFind arg, if FIND_FIRST is specified, tracing will stop // after the first match. If FIND_ALL is specified, then tracing will be // done for all matches. 
- PathTracer(Object* search_target, - WhatToFind what_to_find, + PathTracer(Object* search_target, WhatToFind what_to_find, VisitMode visit_mode) : search_target_(search_target), found_target_(false), @@ -2779,9 +2528,6 @@ class PathTracer : public ObjectVisitor { void UnmarkRecursively(Object** p, UnmarkVisitor* unmark_visitor); virtual void ProcessResults(); - // Tags 0, 1, and 3 are used. Use 2 for marking visited HeapObject. - static const int kMarkTag = 2; - Object* search_target_; bool found_target_; bool found_target_in_trace_; @@ -2795,7 +2541,7 @@ class PathTracer : public ObjectVisitor { DISALLOW_IMPLICIT_CONSTRUCTORS(PathTracer); }; #endif // DEBUG +} +} // namespace v8::internal -} } // namespace v8::internal - -#endif // V8_HEAP_H_ +#endif // V8_HEAP_HEAP_H_ diff --git a/deps/v8/src/incremental-marking-inl.h b/deps/v8/src/heap/incremental-marking-inl.h similarity index 82% rename from deps/v8/src/incremental-marking-inl.h rename to deps/v8/src/heap/incremental-marking-inl.h index 19d471c3675..5258c5c22ee 100644 --- a/deps/v8/src/incremental-marking-inl.h +++ b/deps/v8/src/heap/incremental-marking-inl.h @@ -2,17 +2,16 @@ // Use of this source code is governed by a BSD-style license that can be // found in the LICENSE file. -#ifndef V8_INCREMENTAL_MARKING_INL_H_ -#define V8_INCREMENTAL_MARKING_INL_H_ +#ifndef V8_HEAP_INCREMENTAL_MARKING_INL_H_ +#define V8_HEAP_INCREMENTAL_MARKING_INL_H_ -#include "incremental-marking.h" +#include "src/heap/incremental-marking.h" namespace v8 { namespace internal { -bool IncrementalMarking::BaseRecordWrite(HeapObject* obj, - Object** slot, +bool IncrementalMarking::BaseRecordWrite(HeapObject* obj, Object** slot, Object* value) { HeapObject* value_heap_obj = HeapObject::cast(value); MarkBit value_bit = Marking::MarkBitFrom(value_heap_obj); @@ -42,8 +41,7 @@ bool IncrementalMarking::BaseRecordWrite(HeapObject* obj, } -void IncrementalMarking::RecordWrite(HeapObject* obj, - Object** slot, +void IncrementalMarking::RecordWrite(HeapObject* obj, Object** slot, Object* value) { if (IsMarking() && value->IsHeapObject()) { RecordWriteSlow(obj, slot, value); @@ -51,15 +49,13 @@ void IncrementalMarking::RecordWrite(HeapObject* obj, } -void IncrementalMarking::RecordWriteOfCodeEntry(JSFunction* host, - Object** slot, +void IncrementalMarking::RecordWriteOfCodeEntry(JSFunction* host, Object** slot, Code* value) { if (IsMarking()) RecordWriteOfCodeEntrySlow(host, slot, value); } -void IncrementalMarking::RecordWriteIntoCode(HeapObject* obj, - RelocInfo* rinfo, +void IncrementalMarking::RecordWriteIntoCode(HeapObject* obj, RelocInfo* rinfo, Object* value) { if (IsMarking() && value->IsHeapObject()) { RecordWriteIntoCodeSlow(obj, rinfo, value); @@ -84,9 +80,9 @@ void IncrementalMarking::RecordWrites(HeapObject* obj) { void IncrementalMarking::BlackToGreyAndUnshift(HeapObject* obj, MarkBit mark_bit) { - ASSERT(Marking::MarkBitFrom(obj) == mark_bit); - ASSERT(obj->Size() >= 2*kPointerSize); - ASSERT(IsMarking()); + DCHECK(Marking::MarkBitFrom(obj) == mark_bit); + DCHECK(obj->Size() >= 2 * kPointerSize); + DCHECK(IsMarking()); Marking::BlackToGrey(mark_bit); int obj_size = obj->Size(); MemoryChunk::IncrementLiveBytesFromGC(obj->address(), -obj_size); @@ -115,8 +111,7 @@ void IncrementalMarking::WhiteToGreyAndPush(HeapObject* obj, MarkBit mark_bit) { Marking::WhiteToGrey(mark_bit); marking_deque_.PushGrey(obj); } +} +} // namespace v8::internal - -} } // namespace v8::internal - -#endif // V8_INCREMENTAL_MARKING_INL_H_ +#endif // V8_HEAP_INCREMENTAL_MARKING_INL_H_ 
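[Editor's note] The record-write helpers in the -inl.h file above (BaseRecordWrite, RecordWrite, WhiteToGreyAndPush) implement the write barrier that keeps incremental marking correct: while the marker runs interleaved with the mutator, a store that makes an already-scanned (black) object point at an unvisited (white) object must re-shade the target or record the slot, or the white object would be collected despite being reachable. The sketch below is a minimal, self-contained illustration of that tri-color invariant, not V8 code; Color, Cell, grey_worklist, and RecordWrite here are invented names for the example.

    #include <cstdio>
    #include <vector>

    // Classic tri-color marking abstraction: white = unvisited, grey = queued
    // for scanning, black = fully scanned.
    enum class Color { kWhite, kGrey, kBlack };

    struct Cell {
      Color color = Color::kWhite;
      Cell* ref = nullptr;
    };

    // Stand-in for V8's marking deque.
    static std::vector<Cell*> grey_worklist;

    // Write barrier: called for every pointer store while marking is active.
    void RecordWrite(Cell* host, Cell* value) {
      host->ref = value;
      // If a black host now points at a white value, re-shade the value grey
      // so the marker will visit it; otherwise the store needs no extra work.
      if (host->color == Color::kBlack && value->color == Color::kWhite) {
        value->color = Color::kGrey;
        grey_worklist.push_back(value);
      }
    }

    int main() {
      Cell black_host, white_target;
      black_host.color = Color::kBlack;
      RecordWrite(&black_host, &white_target);
      // Prints 1: the barrier queued the target instead of leaving it white.
      std::printf("grey worklist size: %zu\n", grey_worklist.size());
    }

In the collector itself the grey target goes onto the marking deque (marking_deque_.PushGrey above), and RecordWriteSlow in the .cc file that follows additionally records slots needed for compaction.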
diff --git a/deps/v8/src/incremental-marking.cc b/deps/v8/src/heap/incremental-marking.cc similarity index 72% rename from deps/v8/src/incremental-marking.cc rename to deps/v8/src/heap/incremental-marking.cc index 2b6765c72ca..c922e83a67b 100644 --- a/deps/v8/src/incremental-marking.cc +++ b/deps/v8/src/heap/incremental-marking.cc @@ -2,15 +2,15 @@ // Use of this source code is governed by a BSD-style license that can be // found in the LICENSE file. -#include "v8.h" +#include "src/v8.h" -#include "incremental-marking.h" +#include "src/heap/incremental-marking.h" -#include "code-stubs.h" -#include "compilation-cache.h" -#include "conversions.h" -#include "objects-visiting.h" -#include "objects-visiting-inl.h" +#include "src/code-stubs.h" +#include "src/compilation-cache.h" +#include "src/conversions.h" +#include "src/heap/objects-visiting.h" +#include "src/heap/objects-visiting-inl.h" namespace v8 { namespace internal { @@ -22,43 +22,34 @@ IncrementalMarking::IncrementalMarking(Heap* heap) marking_deque_memory_(NULL), marking_deque_memory_committed_(false), steps_count_(0), - steps_took_(0), - longest_step_(0.0), old_generation_space_available_at_start_of_incremental_(0), old_generation_space_used_at_start_of_incremental_(0), - steps_count_since_last_gc_(0), - steps_took_since_last_gc_(0), should_hurry_(false), marking_speed_(0), allocated_(0), no_marking_scope_depth_(0), - unscanned_bytes_of_large_object_(0) { -} + unscanned_bytes_of_large_object_(0) {} -void IncrementalMarking::TearDown() { - delete marking_deque_memory_; -} +void IncrementalMarking::TearDown() { delete marking_deque_memory_; } -void IncrementalMarking::RecordWriteSlow(HeapObject* obj, - Object** slot, +void IncrementalMarking::RecordWriteSlow(HeapObject* obj, Object** slot, Object* value) { if (BaseRecordWrite(obj, slot, value) && slot != NULL) { MarkBit obj_bit = Marking::MarkBitFrom(obj); if (Marking::IsBlack(obj_bit)) { // Object is not going to be rescanned we need to record the slot. 
- heap_->mark_compact_collector()->RecordSlot( - HeapObject::RawField(obj, 0), slot, value); + heap_->mark_compact_collector()->RecordSlot(HeapObject::RawField(obj, 0), + slot, value); } } } -void IncrementalMarking::RecordWriteFromCode(HeapObject* obj, - Object** slot, +void IncrementalMarking::RecordWriteFromCode(HeapObject* obj, Object** slot, Isolate* isolate) { - ASSERT(obj->IsHeapObject()); + DCHECK(obj->IsHeapObject()); IncrementalMarking* marking = isolate->heap()->incremental_marking(); MemoryChunk* chunk = MemoryChunk::FromAddress(obj->address()); @@ -66,7 +57,7 @@ void IncrementalMarking::RecordWriteFromCode(HeapObject* obj, if (counter < (MemoryChunk::kWriteBarrierCounterGranularity / 2)) { marking->write_barriers_invoked_since_last_step_ += MemoryChunk::kWriteBarrierCounterGranularity - - chunk->write_barrier_counter(); + chunk->write_barrier_counter(); chunk->set_write_barrier_counter( MemoryChunk::kWriteBarrierCounterGranularity); } @@ -75,8 +66,7 @@ void IncrementalMarking::RecordWriteFromCode(HeapObject* obj, } -void IncrementalMarking::RecordCodeTargetPatch(Code* host, - Address pc, +void IncrementalMarking::RecordCodeTargetPatch(Code* host, Address pc, HeapObject* value) { if (IsMarking()) { RelocInfo rinfo(pc, RelocInfo::CODE_TARGET, 0, host); @@ -87,8 +77,9 @@ void IncrementalMarking::RecordCodeTargetPatch(Code* host, void IncrementalMarking::RecordCodeTargetPatch(Address pc, HeapObject* value) { if (IsMarking()) { - Code* host = heap_->isolate()->inner_pointer_to_code_cache()-> - GcSafeFindCodeForInnerPointer(pc); + Code* host = heap_->isolate() + ->inner_pointer_to_code_cache() + ->GcSafeFindCodeForInnerPointer(pc); RelocInfo rinfo(pc, RelocInfo::CODE_TARGET, 0, host); RecordWriteIntoCode(host, &rinfo, value); } @@ -99,9 +90,9 @@ void IncrementalMarking::RecordWriteOfCodeEntrySlow(JSFunction* host, Object** slot, Code* value) { if (BaseRecordWrite(host, slot, value)) { - ASSERT(slot != NULL); - heap_->mark_compact_collector()-> - RecordCodeEntrySlot(reinterpret_cast<Address>(slot), value); + DCHECK(slot != NULL); + heap_->mark_compact_collector()->RecordCodeEntrySlot( + reinterpret_cast<Address>(slot), value); } } @@ -145,24 +136,22 @@ static void MarkObjectGreyDoNotEnqueue(Object* obj) { static inline void MarkBlackOrKeepGrey(HeapObject* heap_object, - MarkBit mark_bit, - int size) { - ASSERT(!Marking::IsImpossible(mark_bit)); + MarkBit mark_bit, int size) { + DCHECK(!Marking::IsImpossible(mark_bit)); if (mark_bit.Get()) return; mark_bit.Set(); MemoryChunk::IncrementLiveBytesFromGC(heap_object->address(), size); - ASSERT(Marking::IsBlack(mark_bit)); + DCHECK(Marking::IsBlack(mark_bit)); } static inline void MarkBlackOrKeepBlack(HeapObject* heap_object, - MarkBit mark_bit, - int size) { - ASSERT(!Marking::IsImpossible(mark_bit)); + MarkBit mark_bit, int size) { + DCHECK(!Marking::IsImpossible(mark_bit)); if (Marking::IsBlack(mark_bit)) return; Marking::MarkBlack(mark_bit); MemoryChunk::IncrementLiveBytesFromGC(heap_object->address(), size); - ASSERT(Marking::IsBlack(mark_bit)); + DCHECK(Marking::IsBlack(mark_bit)); } @@ -193,15 +182,14 @@ class IncrementalMarkingMarkingVisitor // fully scanned. Fall back to scanning it through to the end in case this // fails because of a full deque. 
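  // (Large FixedArrays are scanned in kProgressBarScanningChunk-sized
  // slices: start_offset resumes from chunk->progress_bar(), so each
  // marking step picks up where the previous one stopped instead of
  // re-walking the whole array.)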
int object_size = FixedArray::BodyDescriptor::SizeOf(map, object); - int start_offset = Max(FixedArray::BodyDescriptor::kStartOffset, - chunk->progress_bar()); - int end_offset = Min(object_size, - start_offset + kProgressBarScanningChunk); + int start_offset = + Max(FixedArray::BodyDescriptor::kStartOffset, chunk->progress_bar()); + int end_offset = + Min(object_size, start_offset + kProgressBarScanningChunk); int already_scanned_offset = start_offset; bool scan_until_end = false; do { - VisitPointersWithAnchor(heap, - HeapObject::RawField(object, 0), + VisitPointersWithAnchor(heap, HeapObject::RawField(object, 0), HeapObject::RawField(object, start_offset), HeapObject::RawField(object, end_offset)); start_offset = end_offset; @@ -222,22 +210,16 @@ class IncrementalMarkingMarkingVisitor static void VisitNativeContextIncremental(Map* map, HeapObject* object) { Context* context = Context::cast(object); - // We will mark cache black with a separate pass - // when we finish marking. - MarkObjectGreyDoNotEnqueue(context->normalized_map_cache()); + // We will mark cache black with a separate pass when we finish marking. + // Note that GC can happen when the context is not fully initialized, + // so the cache can be undefined. + Object* cache = context->get(Context::NORMALIZED_MAP_CACHE_INDEX); + if (!cache->IsUndefined()) { + MarkObjectGreyDoNotEnqueue(cache); + } VisitNativeContext(map, context); } - static void VisitWeakCollection(Map* map, HeapObject* object) { - Heap* heap = map->GetHeap(); - VisitPointers(heap, - HeapObject::RawField(object, - JSWeakCollection::kPropertiesOffset), - HeapObject::RawField(object, JSWeakCollection::kSize)); - } - - static void BeforeVisitingSharedFunctionInfo(HeapObject* object) {} - INLINE(static void VisitPointer(Heap* heap, Object** p)) { Object* obj = *p; if (obj->IsHeapObject()) { @@ -256,10 +238,8 @@ class IncrementalMarkingMarkingVisitor } } - INLINE(static void VisitPointersWithAnchor(Heap* heap, - Object** anchor, - Object** start, - Object** end)) { + INLINE(static void VisitPointersWithAnchor(Heap* heap, Object** anchor, + Object** start, Object** end)) { for (Object** p = start; p < end; p++) { Object* obj = *p; if (obj->IsHeapObject()) { @@ -300,12 +280,9 @@ class IncrementalMarkingRootMarkingVisitor : public ObjectVisitor { public: explicit IncrementalMarkingRootMarkingVisitor( IncrementalMarking* incremental_marking) - : incremental_marking_(incremental_marking) { - } + : incremental_marking_(incremental_marking) {} - void VisitPointer(Object** p) { - MarkObjectByPointer(p); - } + void VisitPointer(Object** p) { MarkObjectByPointer(p); } void VisitPointers(Object** start, Object** end) { for (Object** p = start; p < end; p++) MarkObjectByPointer(p); @@ -345,8 +322,7 @@ void IncrementalMarking::SetOldSpacePageFlags(MemoryChunk* chunk, // It's difficult to filter out slots recorded for large objects. if (chunk->owner()->identity() == LO_SPACE && - chunk->size() > static_cast<size_t>(Page::kPageSize) && - is_compacting) { + chunk->size() > static_cast<size_t>(Page::kPageSize) && is_compacting) { chunk->SetFlag(MemoryChunk::RESCAN_ON_EVACUATION); } } else if (chunk->owner()->identity() == CELL_SPACE || @@ -456,18 +432,16 @@ bool IncrementalMarking::WorthActivating() { // Only start incremental marking in a safe state: 1) when incremental // marking is turned on, 2) when we are currently not in a GC, and // 3) when we are currently not serializing or deserializing the heap. 
- return FLAG_incremental_marking && - FLAG_incremental_marking_steps && - heap_->gc_state() == Heap::NOT_IN_GC && - !Serializer::enabled(heap_->isolate()) && - heap_->isolate()->IsInitialized() && - heap_->PromotedSpaceSizeOfObjects() > kActivationThreshold; + return FLAG_incremental_marking && FLAG_incremental_marking_steps && + heap_->gc_state() == Heap::NOT_IN_GC && + !heap_->isolate()->serializer_enabled() && + heap_->isolate()->IsInitialized() && + heap_->PromotedSpaceSizeOfObjects() > kActivationThreshold; } void IncrementalMarking::ActivateGeneratedStub(Code* stub) { - ASSERT(RecordWriteStub::GetMode(stub) == - RecordWriteStub::STORE_BUFFER_ONLY); + DCHECK(RecordWriteStub::GetMode(stub) == RecordWriteStub::STORE_BUFFER_ONLY); if (!IsMarking()) { // Initially stub is generated in STORE_BUFFER_ONLY mode thus @@ -491,8 +465,7 @@ static void PatchIncrementalMarkingRecordWriteStubs( if (stubs->IsKey(k)) { uint32_t key = NumberToUint32(k); - if (CodeStub::MajorKeyFromKey(key) == - CodeStub::RecordWrite) { + if (CodeStub::MajorKeyFromKey(key) == CodeStub::RecordWrite) { Object* e = stubs->ValueAt(i); if (e->IsCode()) { RecordWriteStub::Patch(Code::cast(e), mode); @@ -505,7 +478,7 @@ static void PatchIncrementalMarkingRecordWriteStubs( void IncrementalMarking::EnsureMarkingDequeIsCommitted() { if (marking_deque_memory_ == NULL) { - marking_deque_memory_ = new VirtualMemory(4 * MB); + marking_deque_memory_ = new base::VirtualMemory(4 * MB); } if (!marking_deque_memory_committed_) { bool success = marking_deque_memory_->Commit( @@ -533,16 +506,16 @@ void IncrementalMarking::Start(CompactionFlag flag) { if (FLAG_trace_incremental_marking) { PrintF("[IncrementalMarking] Start\n"); } - ASSERT(FLAG_incremental_marking); - ASSERT(FLAG_incremental_marking_steps); - ASSERT(state_ == STOPPED); - ASSERT(heap_->gc_state() == Heap::NOT_IN_GC); - ASSERT(!Serializer::enabled(heap_->isolate())); - ASSERT(heap_->isolate()->IsInitialized()); + DCHECK(FLAG_incremental_marking); + DCHECK(FLAG_incremental_marking_steps); + DCHECK(state_ == STOPPED); + DCHECK(heap_->gc_state() == Heap::NOT_IN_GC); + DCHECK(!heap_->isolate()->serializer_enabled()); + DCHECK(heap_->isolate()->IsInitialized()); ResetStepCounters(); - if (!heap_->mark_compact_collector()->IsConcurrentSweepingInProgress()) { + if (!heap_->mark_compact_collector()->sweeping_in_progress()) { StartMarking(flag); } else { if (FLAG_trace_incremental_marking) { @@ -561,13 +534,14 @@ void IncrementalMarking::StartMarking(CompactionFlag flag) { } is_compacting_ = !FLAG_never_compact && (flag == ALLOW_COMPACTION) && - heap_->mark_compact_collector()->StartCompaction( - MarkCompactCollector::INCREMENTAL_COMPACTION); + heap_->mark_compact_collector()->StartCompaction( + MarkCompactCollector::INCREMENTAL_COMPACTION); state_ = MARKING; - RecordWriteStub::Mode mode = is_compacting_ ? - RecordWriteStub::INCREMENTAL_COMPACTION : RecordWriteStub::INCREMENTAL; + RecordWriteStub::Mode mode = is_compacting_ + ? RecordWriteStub::INCREMENTAL_COMPACTION + : RecordWriteStub::INCREMENTAL; PatchIncrementalMarkingRecordWriteStubs(heap_, mode); @@ -581,7 +555,7 @@ void IncrementalMarking::StartMarking(CompactionFlag flag) { ActivateIncrementalWriteBarrier(); - // Marking bits are cleared by the sweeper. +// Marking bits are cleared by the sweeper. 
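// (Under VERIFY_HEAP, the block below checks that assumption by calling
// VerifyMarkbitsAreClean() before marking begins.)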
#ifdef VERIFY_HEAP if (FLAG_verify_heap) { heap_->mark_compact_collector()->VerifyMarkbitsAreClean(); @@ -633,7 +607,7 @@ void IncrementalMarking::UpdateMarkingDequeAfterScavenge() { while (current != limit) { HeapObject* obj = array[current]; - ASSERT(obj->IsHeapObject()); + DCHECK(obj->IsHeapObject()); current = ((current + 1) & mask); if (heap_->InNewSpace(obj)) { MapWord map_word = obj->map_word(); @@ -641,10 +615,10 @@ void IncrementalMarking::UpdateMarkingDequeAfterScavenge() { HeapObject* dest = map_word.ToForwardingAddress(); array[new_top] = dest; new_top = ((new_top + 1) & mask); - ASSERT(new_top != marking_deque_.bottom()); + DCHECK(new_top != marking_deque_.bottom()); #ifdef DEBUG MarkBit mark_bit = Marking::MarkBitFrom(obj); - ASSERT(Marking::IsGrey(mark_bit) || + DCHECK(Marking::IsGrey(mark_bit) || (obj->IsFiller() && Marking::IsWhite(mark_bit))); #endif } @@ -653,22 +627,18 @@ void IncrementalMarking::UpdateMarkingDequeAfterScavenge() { // stack when we perform in place array shift. array[new_top] = obj; new_top = ((new_top + 1) & mask); - ASSERT(new_top != marking_deque_.bottom()); + DCHECK(new_top != marking_deque_.bottom()); #ifdef DEBUG - MarkBit mark_bit = Marking::MarkBitFrom(obj); - MemoryChunk* chunk = MemoryChunk::FromAddress(obj->address()); - ASSERT(Marking::IsGrey(mark_bit) || - (obj->IsFiller() && Marking::IsWhite(mark_bit)) || - (chunk->IsFlagSet(MemoryChunk::HAS_PROGRESS_BAR) && - Marking::IsBlack(mark_bit))); + MarkBit mark_bit = Marking::MarkBitFrom(obj); + MemoryChunk* chunk = MemoryChunk::FromAddress(obj->address()); + DCHECK(Marking::IsGrey(mark_bit) || + (obj->IsFiller() && Marking::IsWhite(mark_bit)) || + (chunk->IsFlagSet(MemoryChunk::HAS_PROGRESS_BAR) && + Marking::IsBlack(mark_bit))); #endif } } marking_deque_.set_top(new_top); - - steps_took_since_last_gc_ = 0; - steps_count_since_last_gc_ = 0; - longest_step_ = 0.0; } @@ -681,9 +651,9 @@ void IncrementalMarking::VisitObject(Map* map, HeapObject* obj, int size) { IncrementalMarkingMarkingVisitor::IterateBody(map, obj); MarkBit mark_bit = Marking::MarkBitFrom(obj); -#if ENABLE_SLOW_ASSERTS +#if ENABLE_SLOW_DCHECKS MemoryChunk* chunk = MemoryChunk::FromAddress(obj->address()); - SLOW_ASSERT(Marking::IsGrey(mark_bit) || + SLOW_DCHECK(Marking::IsGrey(mark_bit) || (obj->IsFiller() && Marking::IsWhite(mark_bit)) || (chunk->IsFlagSet(MemoryChunk::HAS_PROGRESS_BAR) && Marking::IsBlack(mark_bit))); @@ -692,9 +662,10 @@ void IncrementalMarking::VisitObject(Map* map, HeapObject* obj, int size) { } -void IncrementalMarking::ProcessMarkingDeque(intptr_t bytes_to_process) { +intptr_t IncrementalMarking::ProcessMarkingDeque(intptr_t bytes_to_process) { + intptr_t bytes_processed = 0; Map* filler_map = heap_->one_pointer_filler_map(); - while (!marking_deque_.IsEmpty() && bytes_to_process > 0) { + while (!marking_deque_.IsEmpty() && bytes_processed < bytes_to_process) { HeapObject* obj = marking_deque_.Pop(); // Explicitly skip one word fillers. Incremental markbit patterns are @@ -705,8 +676,12 @@ void IncrementalMarking::ProcessMarkingDeque(intptr_t bytes_to_process) { int size = obj->SizeFromMap(map); unscanned_bytes_of_large_object_ = 0; VisitObject(map, obj, size); - bytes_to_process -= (size - unscanned_bytes_of_large_object_); + int delta = (size - unscanned_bytes_of_large_object_); + // TODO(jochen): remove after http://crbug.com/381820 is resolved. 
+ CHECK_LT(0, delta); + bytes_processed += delta; } + return bytes_processed; } @@ -729,7 +704,7 @@ void IncrementalMarking::Hurry() { if (state() == MARKING) { double start = 0.0; if (FLAG_trace_incremental_marking || FLAG_print_cumulative_gc_stat) { - start = OS::TimeCurrentMillis(); + start = base::OS::TimeCurrentMillis(); if (FLAG_trace_incremental_marking) { PrintF("[IncrementalMarking] Hurry\n"); } @@ -739,9 +714,9 @@ void IncrementalMarking::Hurry() { ProcessMarkingDeque(); state_ = COMPLETE; if (FLAG_trace_incremental_marking || FLAG_print_cumulative_gc_stat) { - double end = OS::TimeCurrentMillis(); + double end = base::OS::TimeCurrentMillis(); double delta = end - start; - heap_->AddMarkingTime(delta); + heap_->tracer()->AddMarkingTime(delta); if (FLAG_trace_incremental_marking) { PrintF("[IncrementalMarking] Complete (hurry), spent %d ms.\n", static_cast<int>(delta)); @@ -797,7 +772,7 @@ void IncrementalMarking::Abort() { } } } - heap_->isolate()->stack_guard()->Continue(GC_REQUEST); + heap_->isolate()->stack_guard()->ClearGC(); state_ = STOPPED; is_compacting_ = false; } @@ -813,8 +788,8 @@ void IncrementalMarking::Finalize() { PatchIncrementalMarkingRecordWriteStubs(heap_, RecordWriteStub::STORE_BUFFER_ONLY); DeactivateIncrementalWriteBarrier(); - ASSERT(marking_deque_.IsEmpty()); - heap_->isolate()->stack_guard()->Continue(GC_REQUEST); + DCHECK(marking_deque_.IsEmpty()); + heap_->isolate()->stack_guard()->ClearGC(); } @@ -846,10 +821,9 @@ void IncrementalMarking::OldSpaceStep(intptr_t allocated) { } -void IncrementalMarking::Step(intptr_t allocated_bytes, - CompletionAction action) { - if (heap_->gc_state() != Heap::NOT_IN_GC || - !FLAG_incremental_marking || +void IncrementalMarking::Step(intptr_t allocated_bytes, CompletionAction action, + bool force_marking) { + if (heap_->gc_state() != Heap::NOT_IN_GC || !FLAG_incremental_marking || !FLAG_incremental_marking_steps || (state_ != SWEEPING && state_ != MARKING)) { return; @@ -857,7 +831,7 @@ void IncrementalMarking::Step(intptr_t allocated_bytes, allocated_ += allocated_bytes; - if (allocated_ < kAllocatedThreshold && + if (!force_marking && allocated_ < kAllocatedThreshold && write_barriers_invoked_since_last_step_ < kWriteBarriersInvokedThreshold) { return; @@ -865,128 +839,124 @@ void IncrementalMarking::Step(intptr_t allocated_bytes, if (state_ == MARKING && no_marking_scope_depth_ > 0) return; - // The marking speed is driven either by the allocation rate or by the rate - // at which we are having to check the color of objects in the write barrier. - // It is possible for a tight non-allocating loop to run a lot of write - // barriers before we get here and check them (marking can only take place on - // allocation), so to reduce the lumpiness we don't use the write barriers - // invoked since last step directly to determine the amount of work to do. 
- intptr_t bytes_to_process = - marking_speed_ * Max(allocated_, write_barriers_invoked_since_last_step_); - allocated_ = 0; - write_barriers_invoked_since_last_step_ = 0; - - bytes_scanned_ += bytes_to_process; - - double start = 0; - - if (FLAG_trace_incremental_marking || FLAG_trace_gc || - FLAG_print_cumulative_gc_stat) { - start = OS::TimeCurrentMillis(); - } - - if (state_ == SWEEPING) { - if (heap_->mark_compact_collector()->IsConcurrentSweepingInProgress() && - heap_->mark_compact_collector()->IsSweepingCompleted()) { - heap_->mark_compact_collector()->WaitUntilSweepingCompleted(); - } - if (!heap_->mark_compact_collector()->IsConcurrentSweepingInProgress()) { - bytes_scanned_ = 0; - StartMarking(PREVENT_COMPACTION); + { + HistogramTimerScope incremental_marking_scope( + heap_->isolate()->counters()->gc_incremental_marking()); + double start = base::OS::TimeCurrentMillis(); + + // The marking speed is driven either by the allocation rate or by the rate + // at which we are having to check the color of objects in the write + // barrier. + // It is possible for a tight non-allocating loop to run a lot of write + // barriers before we get here and check them (marking can only take place + // on + // allocation), so to reduce the lumpiness we don't use the write barriers + // invoked since last step directly to determine the amount of work to do. + intptr_t bytes_to_process = + marking_speed_ * + Max(allocated_, write_barriers_invoked_since_last_step_); + allocated_ = 0; + write_barriers_invoked_since_last_step_ = 0; + + bytes_scanned_ += bytes_to_process; + intptr_t bytes_processed = 0; + + if (state_ == SWEEPING) { + if (heap_->mark_compact_collector()->sweeping_in_progress() && + heap_->mark_compact_collector()->IsSweepingCompleted()) { + heap_->mark_compact_collector()->EnsureSweepingCompleted(); + } + if (!heap_->mark_compact_collector()->sweeping_in_progress()) { + bytes_scanned_ = 0; + StartMarking(PREVENT_COMPACTION); + } + } else if (state_ == MARKING) { + bytes_processed = ProcessMarkingDeque(bytes_to_process); + if (marking_deque_.IsEmpty()) MarkingComplete(action); } - } else if (state_ == MARKING) { - ProcessMarkingDeque(bytes_to_process); - if (marking_deque_.IsEmpty()) MarkingComplete(action); - } - steps_count_++; - steps_count_since_last_gc_++; + steps_count_++; - bool speed_up = false; + bool speed_up = false; - if ((steps_count_ % kMarkingSpeedAccellerationInterval) == 0) { - if (FLAG_trace_gc) { - PrintPID("Speed up marking after %d steps\n", - static_cast<int>(kMarkingSpeedAccellerationInterval)); + if ((steps_count_ % kMarkingSpeedAccellerationInterval) == 0) { + if (FLAG_trace_gc) { + PrintPID("Speed up marking after %d steps\n", + static_cast<int>(kMarkingSpeedAccellerationInterval)); + } + speed_up = true; } - speed_up = true; - } - bool space_left_is_very_small = - (old_generation_space_available_at_start_of_incremental_ < 10 * MB); + bool space_left_is_very_small = + (old_generation_space_available_at_start_of_incremental_ < 10 * MB); - bool only_1_nth_of_space_that_was_available_still_left = - (SpaceLeftInOldSpace() * (marking_speed_ + 1) < - old_generation_space_available_at_start_of_incremental_); + bool only_1_nth_of_space_that_was_available_still_left = + (SpaceLeftInOldSpace() * (marking_speed_ + 1) < + old_generation_space_available_at_start_of_incremental_); - if (space_left_is_very_small || - only_1_nth_of_space_that_was_available_still_left) { - if (FLAG_trace_gc) PrintPID("Speed up marking because of low space left\n"); - speed_up = true; - } - - 
bool size_of_old_space_multiplied_by_n_during_marking = - (heap_->PromotedTotalSize() > - (marking_speed_ + 1) * - old_generation_space_used_at_start_of_incremental_); - if (size_of_old_space_multiplied_by_n_during_marking) { - speed_up = true; - if (FLAG_trace_gc) { - PrintPID("Speed up marking because of heap size increase\n"); + if (space_left_is_very_small || + only_1_nth_of_space_that_was_available_still_left) { + if (FLAG_trace_gc) + PrintPID("Speed up marking because of low space left\n"); + speed_up = true; } - } - - int64_t promoted_during_marking = heap_->PromotedTotalSize() - - old_generation_space_used_at_start_of_incremental_; - intptr_t delay = marking_speed_ * MB; - intptr_t scavenge_slack = heap_->MaxSemiSpaceSize(); - // We try to scan at at least twice the speed that we are allocating. - if (promoted_during_marking > bytes_scanned_ / 2 + scavenge_slack + delay) { - if (FLAG_trace_gc) { - PrintPID("Speed up marking because marker was not keeping up\n"); + bool size_of_old_space_multiplied_by_n_during_marking = + (heap_->PromotedTotalSize() > + (marking_speed_ + 1) * + old_generation_space_used_at_start_of_incremental_); + if (size_of_old_space_multiplied_by_n_during_marking) { + speed_up = true; + if (FLAG_trace_gc) { + PrintPID("Speed up marking because of heap size increase\n"); + } } - speed_up = true; - } - if (speed_up) { - if (state_ != MARKING) { + int64_t promoted_during_marking = + heap_->PromotedTotalSize() - + old_generation_space_used_at_start_of_incremental_; + intptr_t delay = marking_speed_ * MB; + intptr_t scavenge_slack = heap_->MaxSemiSpaceSize(); + + // We try to scan at at least twice the speed that we are allocating. + if (promoted_during_marking > bytes_scanned_ / 2 + scavenge_slack + delay) { if (FLAG_trace_gc) { - PrintPID("Postponing speeding up marking until marking starts\n"); + PrintPID("Speed up marking because marker was not keeping up\n"); } - } else { - marking_speed_ += kMarkingSpeedAccelleration; - marking_speed_ = static_cast<int>( - Min(kMaxMarkingSpeed, - static_cast<intptr_t>(marking_speed_ * 1.3))); - if (FLAG_trace_gc) { - PrintPID("Marking speed increased to %d\n", marking_speed_); + speed_up = true; + } + + if (speed_up) { + if (state_ != MARKING) { + if (FLAG_trace_gc) { + PrintPID("Postponing speeding up marking until marking starts\n"); + } + } else { + marking_speed_ += kMarkingSpeedAccelleration; + marking_speed_ = static_cast<int>( + Min(kMaxMarkingSpeed, static_cast<intptr_t>(marking_speed_ * 1.3))); + if (FLAG_trace_gc) { + PrintPID("Marking speed increased to %d\n", marking_speed_); + } } } - } - if (FLAG_trace_incremental_marking || FLAG_trace_gc || - FLAG_print_cumulative_gc_stat) { - double end = OS::TimeCurrentMillis(); - double delta = (end - start); - longest_step_ = Max(longest_step_, delta); - steps_took_ += delta; - steps_took_since_last_gc_ += delta; - heap_->AddMarkingTime(delta); + double end = base::OS::TimeCurrentMillis(); + double duration = (end - start); + // Note that we report zero bytes here when sweeping was in progress or + // when we just started incremental marking. In these cases we did not + // process the marking deque. 
+ heap_->tracer()->AddIncrementalMarkingStep(duration, bytes_processed); } } void IncrementalMarking::ResetStepCounters() { steps_count_ = 0; - steps_took_ = 0; - longest_step_ = 0.0; old_generation_space_available_at_start_of_incremental_ = SpaceLeftInOldSpace(); old_generation_space_used_at_start_of_incremental_ = heap_->PromotedTotalSize(); - steps_count_since_last_gc_ = 0; - steps_took_since_last_gc_ = 0; bytes_rescanned_ = 0; marking_speed_ = kInitialMarkingSpeed; bytes_scanned_ = 0; @@ -997,5 +967,5 @@ void IncrementalMarking::ResetStepCounters() { int64_t IncrementalMarking::SpaceLeftInOldSpace() { return heap_->MaxOldGenerationSize() - heap_->PromotedSpaceSizeOfObjects(); } - -} } // namespace v8::internal +} +} // namespace v8::internal diff --git a/deps/v8/src/incremental-marking.h b/deps/v8/src/heap/incremental-marking.h similarity index 75% rename from deps/v8/src/incremental-marking.h rename to deps/v8/src/heap/incremental-marking.h index 2ea30b17c2f..c054fbd40fb 100644 --- a/deps/v8/src/incremental-marking.h +++ b/deps/v8/src/heap/incremental-marking.h @@ -2,13 +2,13 @@ // Use of this source code is governed by a BSD-style license that can be // found in the LICENSE file. -#ifndef V8_INCREMENTAL_MARKING_H_ -#define V8_INCREMENTAL_MARKING_H_ +#ifndef V8_HEAP_INCREMENTAL_MARKING_H_ +#define V8_HEAP_INCREMENTAL_MARKING_H_ -#include "execution.h" -#include "mark-compact.h" -#include "objects.h" +#include "src/execution.h" +#include "src/heap/mark-compact.h" +#include "src/objects.h" namespace v8 { namespace internal { @@ -16,17 +16,9 @@ namespace internal { class IncrementalMarking { public: - enum State { - STOPPED, - SWEEPING, - MARKING, - COMPLETE - }; - - enum CompletionAction { - GC_VIA_STACK_GUARD, - NO_GC_VIA_STACK_GUARD - }; + enum State { STOPPED, SWEEPING, MARKING, COMPLETE }; + + enum CompletionAction { GC_VIA_STACK_GUARD, NO_GC_VIA_STACK_GUARD }; explicit IncrementalMarking(Heap* heap); @@ -35,7 +27,7 @@ class IncrementalMarking { void TearDown(); State state() { - ASSERT(state_ == STOPPED || FLAG_incremental_marking); + DCHECK(state_ == STOPPED || FLAG_incremental_marking); return state_; } @@ -91,7 +83,8 @@ class IncrementalMarking { void OldSpaceStep(intptr_t allocated); - void Step(intptr_t allocated, CompletionAction action); + void Step(intptr_t allocated, CompletionAction action, + bool force_marking = false); inline void RestartIfNotMarking() { if (state_ == COMPLETE) { @@ -102,8 +95,7 @@ class IncrementalMarking { } } - static void RecordWriteFromCode(HeapObject* obj, - Object** slot, + static void RecordWriteFromCode(HeapObject* obj, Object** slot, Isolate* isolate); // Record a slot for compaction. Returns false for objects that are @@ -114,17 +106,14 @@ class IncrementalMarking { // the incremental cycle (stays white). 
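  // (In practice RecordWriteSlow only records the slot when the holder is
  // already black: grey holders are still on the marking deque and will be
  // rescanned, and white holders are either dead or not yet scanned, per
  // the comment above.)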
INLINE(bool BaseRecordWrite(HeapObject* obj, Object** slot, Object* value)); INLINE(void RecordWrite(HeapObject* obj, Object** slot, Object* value)); - INLINE(void RecordWriteIntoCode(HeapObject* obj, - RelocInfo* rinfo, + INLINE(void RecordWriteIntoCode(HeapObject* obj, RelocInfo* rinfo, Object* value)); - INLINE(void RecordWriteOfCodeEntry(JSFunction* host, - Object** slot, + INLINE(void RecordWriteOfCodeEntry(JSFunction* host, Object** slot, Code* value)); void RecordWriteSlow(HeapObject* obj, Object** slot, Object* value); - void RecordWriteIntoCodeSlow(HeapObject* obj, - RelocInfo* rinfo, + void RecordWriteIntoCodeSlow(HeapObject* obj, RelocInfo* rinfo, Object* value); void RecordWriteOfCodeEntrySlow(JSFunction* host, Object** slot, Code* value); void RecordCodeTargetPatch(Code* host, Address pc, HeapObject* value); @@ -136,26 +125,6 @@ class IncrementalMarking { inline void WhiteToGreyAndPush(HeapObject* obj, MarkBit mark_bit); - inline int steps_count() { - return steps_count_; - } - - inline double steps_took() { - return steps_took_; - } - - inline double longest_step() { - return longest_step_; - } - - inline int steps_count_since_last_gc() { - return steps_count_since_last_gc_; - } - - inline double steps_took_since_last_gc() { - return steps_took_since_last_gc_; - } - inline void SetOldSpacePageFlags(MemoryChunk* chunk) { SetOldSpacePageFlags(chunk, IsMarking(), IsCompacting()); } @@ -174,22 +143,19 @@ class IncrementalMarking { if (IsMarking()) { if (marking_speed_ < kFastMarking) { if (FLAG_trace_gc) { - PrintPID("Increasing marking speed to %d " - "due to high promotion rate\n", - static_cast<int>(kFastMarking)); + PrintPID( + "Increasing marking speed to %d " + "due to high promotion rate\n", + static_cast<int>(kFastMarking)); } marking_speed_ = kFastMarking; } } } - void EnterNoMarkingScope() { - no_marking_scope_depth_++; - } + void EnterNoMarkingScope() { no_marking_scope_depth_++; } - void LeaveNoMarkingScope() { - no_marking_scope_depth_--; - } + void LeaveNoMarkingScope() { no_marking_scope_depth_--; } void UncommitMarkingDeque(); @@ -212,8 +178,7 @@ class IncrementalMarking { static void DeactivateIncrementalWriteBarrierForSpace(NewSpace* space); void DeactivateIncrementalWriteBarrier(); - static void SetOldSpacePageFlags(MemoryChunk* chunk, - bool is_marking, + static void SetOldSpacePageFlags(MemoryChunk* chunk, bool is_marking, bool is_compacting); static void SetNewSpacePageFlags(NewSpacePage* chunk, bool is_marking); @@ -222,7 +187,7 @@ class IncrementalMarking { INLINE(void ProcessMarkingDeque()); - INLINE(void ProcessMarkingDeque(intptr_t bytes_to_process)); + INLINE(intptr_t ProcessMarkingDeque(intptr_t bytes_to_process)); INLINE(void VisitObject(Map* map, HeapObject* obj, int size)); @@ -231,17 +196,13 @@ class IncrementalMarking { State state_; bool is_compacting_; - VirtualMemory* marking_deque_memory_; + base::VirtualMemory* marking_deque_memory_; bool marking_deque_memory_committed_; MarkingDeque marking_deque_; int steps_count_; - double steps_took_; - double longest_step_; int64_t old_generation_space_available_at_start_of_incremental_; int64_t old_generation_space_used_at_start_of_incremental_; - int steps_count_since_last_gc_; - double steps_took_since_last_gc_; int64_t bytes_rescanned_; bool should_hurry_; int marking_speed_; @@ -255,7 +216,7 @@ class IncrementalMarking { DISALLOW_IMPLICIT_CONSTRUCTORS(IncrementalMarking); }; +} +} // namespace v8::internal -} } // namespace v8::internal - -#endif // V8_INCREMENTAL_MARKING_H_ +#endif // 
V8_HEAP_INCREMENTAL_MARKING_H_ diff --git a/deps/v8/src/mark-compact-inl.h b/deps/v8/src/heap/mark-compact-inl.h similarity index 74% rename from deps/v8/src/mark-compact-inl.h rename to deps/v8/src/heap/mark-compact-inl.h index 608a0ec98e8..934fce847db 100644 --- a/deps/v8/src/mark-compact-inl.h +++ b/deps/v8/src/heap/mark-compact-inl.h @@ -2,12 +2,13 @@ // Use of this source code is governed by a BSD-style license that can be // found in the LICENSE file. -#ifndef V8_MARK_COMPACT_INL_H_ -#define V8_MARK_COMPACT_INL_H_ +#ifndef V8_HEAP_MARK_COMPACT_INL_H_ +#define V8_HEAP_MARK_COMPACT_INL_H_ -#include "isolate.h" -#include "memory.h" -#include "mark-compact.h" +#include <memory.h> + +#include "src/heap/mark-compact.h" +#include "src/isolate.h" namespace v8 { @@ -30,49 +31,45 @@ void MarkCompactCollector::SetFlags(int flags) { void MarkCompactCollector::MarkObject(HeapObject* obj, MarkBit mark_bit) { - ASSERT(Marking::MarkBitFrom(obj) == mark_bit); + DCHECK(Marking::MarkBitFrom(obj) == mark_bit); if (!mark_bit.Get()) { mark_bit.Set(); MemoryChunk::IncrementLiveBytesFromGC(obj->address(), obj->Size()); - ASSERT(IsMarked(obj)); - ASSERT(obj->GetIsolate()->heap()->Contains(obj)); + DCHECK(IsMarked(obj)); + DCHECK(obj->GetIsolate()->heap()->Contains(obj)); marking_deque_.PushBlack(obj); } } void MarkCompactCollector::SetMark(HeapObject* obj, MarkBit mark_bit) { - ASSERT(!mark_bit.Get()); - ASSERT(Marking::MarkBitFrom(obj) == mark_bit); + DCHECK(!mark_bit.Get()); + DCHECK(Marking::MarkBitFrom(obj) == mark_bit); mark_bit.Set(); MemoryChunk::IncrementLiveBytesFromGC(obj->address(), obj->Size()); } bool MarkCompactCollector::IsMarked(Object* obj) { - ASSERT(obj->IsHeapObject()); + DCHECK(obj->IsHeapObject()); HeapObject* heap_object = HeapObject::cast(obj); return Marking::MarkBitFrom(heap_object).Get(); } -void MarkCompactCollector::RecordSlot(Object** anchor_slot, - Object** slot, +void MarkCompactCollector::RecordSlot(Object** anchor_slot, Object** slot, Object* object, SlotsBuffer::AdditionMode mode) { Page* object_page = Page::FromAddress(reinterpret_cast<Address>(object)); if (object_page->IsEvacuationCandidate() && !ShouldSkipEvacuationSlotRecording(anchor_slot)) { if (!SlotsBuffer::AddTo(&slots_buffer_allocator_, - object_page->slots_buffer_address(), - slot, - mode)) { + object_page->slots_buffer_address(), slot, mode)) { EvictEvacuationCandidate(object_page); } } } +} +} // namespace v8::internal - -} } // namespace v8::internal - -#endif // V8_MARK_COMPACT_INL_H_ +#endif // V8_HEAP_MARK_COMPACT_INL_H_ diff --git a/deps/v8/src/mark-compact.cc b/deps/v8/src/heap/mark-compact.cc similarity index 73% rename from deps/v8/src/mark-compact.cc rename to deps/v8/src/heap/mark-compact.cc index 784098ba71d..abb4e1beb8e 100644 --- a/deps/v8/src/mark-compact.cc +++ b/deps/v8/src/heap/mark-compact.cc @@ -2,24 +2,25 @@ // Use of this source code is governed by a BSD-style license that can be // found in the LICENSE file. 
-#include "v8.h" - -#include "code-stubs.h" -#include "compilation-cache.h" -#include "cpu-profiler.h" -#include "deoptimizer.h" -#include "execution.h" -#include "gdb-jit.h" -#include "global-handles.h" -#include "heap-profiler.h" -#include "ic-inl.h" -#include "incremental-marking.h" -#include "mark-compact.h" -#include "objects-visiting.h" -#include "objects-visiting-inl.h" -#include "spaces-inl.h" -#include "stub-cache.h" -#include "sweeper-thread.h" +#include "src/v8.h" + +#include "src/base/atomicops.h" +#include "src/code-stubs.h" +#include "src/compilation-cache.h" +#include "src/cpu-profiler.h" +#include "src/deoptimizer.h" +#include "src/execution.h" +#include "src/gdb-jit.h" +#include "src/global-handles.h" +#include "src/heap/incremental-marking.h" +#include "src/heap/mark-compact.h" +#include "src/heap/objects-visiting.h" +#include "src/heap/objects-visiting-inl.h" +#include "src/heap/spaces-inl.h" +#include "src/heap/sweeper-thread.h" +#include "src/heap-profiler.h" +#include "src/ic-inl.h" +#include "src/stub-cache.h" namespace v8 { namespace internal { @@ -34,7 +35,8 @@ const char* Marking::kImpossibleBitPattern = "01"; // ------------------------------------------------------------------------- // MarkCompactCollector -MarkCompactCollector::MarkCompactCollector(Heap* heap) : // NOLINT +MarkCompactCollector::MarkCompactCollector(Heap* heap) + : // NOLINT #ifdef DEBUG state_(IDLE), #endif @@ -44,18 +46,17 @@ MarkCompactCollector::MarkCompactCollector(Heap* heap) : // NOLINT marking_parity_(ODD_MARKING_PARITY), compacting_(false), was_marked_incrementally_(false), - sweeping_pending_(false), + sweeping_in_progress_(false), pending_sweeper_jobs_semaphore_(0), sequential_sweeping_(false), - tracer_(NULL), migration_slots_buffer_(NULL), heap_(heap), code_flusher_(NULL), - encountered_weak_collections_(NULL), - have_code_to_deoptimize_(false) { } + have_code_to_deoptimize_(false) { +} #ifdef VERIFY_HEAP -class VerifyMarkingVisitor: public ObjectVisitor { +class VerifyMarkingVisitor : public ObjectVisitor { public: explicit VerifyMarkingVisitor(Heap* heap) : heap_(heap) {} @@ -69,7 +70,7 @@ class VerifyMarkingVisitor: public ObjectVisitor { } void VisitEmbeddedPointer(RelocInfo* rinfo) { - ASSERT(rinfo->rmode() == RelocInfo::EMBEDDED_OBJECT); + DCHECK(rinfo->rmode() == RelocInfo::EMBEDDED_OBJECT); if (!rinfo->host()->IsWeakObject(rinfo->target_object())) { Object* p = rinfo->target_object(); VisitPointer(&p); @@ -78,7 +79,7 @@ class VerifyMarkingVisitor: public ObjectVisitor { void VisitCell(RelocInfo* rinfo) { Code* code = rinfo->host(); - ASSERT(rinfo->rmode() == RelocInfo::CELL); + DCHECK(rinfo->rmode() == RelocInfo::CELL); if (!code->IsWeakObject(rinfo->target_cell())) { ObjectVisitor::VisitCell(rinfo); } @@ -94,9 +95,7 @@ static void VerifyMarking(Heap* heap, Address bottom, Address top) { HeapObject* object; Address next_object_must_be_here_or_later = bottom; - for (Address current = bottom; - current < top; - current += kPointerSize) { + for (Address current = bottom; current < top; current += kPointerSize) { object = HeapObject::FromAddress(current); if (MarkCompactCollector::IsMarked(object)) { CHECK(current >= next_object_must_be_here_or_later); @@ -113,7 +112,7 @@ static void VerifyMarking(NewSpace* space) { // The bottom position is at the start of its page. Allows us to use // page->area_start() as start of range on all pages. 
CHECK_EQ(space->bottom(), - NewSpacePage::FromAddress(space->bottom())->area_start()); + NewSpacePage::FromAddress(space->bottom())->area_start()); while (it.has_next()) { NewSpacePage* page = it.next(); Address limit = it.has_next() ? page->area_end() : end; @@ -155,7 +154,7 @@ static void VerifyMarking(Heap* heap) { } -class VerifyEvacuationVisitor: public ObjectVisitor { +class VerifyEvacuationVisitor : public ObjectVisitor { public: void VisitPointers(Object** start, Object** end) { for (Object** current = start; current < end; current++) { @@ -168,19 +167,14 @@ class VerifyEvacuationVisitor: public ObjectVisitor { }; -static void VerifyEvacuation(Address bottom, Address top) { +static void VerifyEvacuation(Page* page) { VerifyEvacuationVisitor visitor; - HeapObject* object; - Address next_object_must_be_here_or_later = bottom; - - for (Address current = bottom; - current < top; - current += kPointerSize) { - object = HeapObject::FromAddress(current); - if (MarkCompactCollector::IsMarked(object)) { - CHECK(current >= next_object_must_be_here_or_later); - object->Iterate(&visitor); - next_object_must_be_here_or_later = current + object->Size(); + HeapObjectIterator iterator(page, NULL); + for (HeapObject* heap_object = iterator.Next(); heap_object != NULL; + heap_object = iterator.Next()) { + // We skip free space objects. + if (!heap_object->IsFiller()) { + heap_object->Iterate(&visitor); } } } @@ -204,28 +198,29 @@ static void VerifyEvacuation(NewSpace* space) { } -static void VerifyEvacuation(PagedSpace* space) { - // TODO(hpayer): Bring back VerifyEvacuation for parallel-concurrently - // swept pages. - if ((FLAG_concurrent_sweeping || FLAG_parallel_sweeping) && - space->was_swept_conservatively()) return; +static void VerifyEvacuation(Heap* heap, PagedSpace* space) { + if (!space->swept_precisely()) return; + if (FLAG_use_allocation_folding && + (space == heap->old_pointer_space() || space == heap->old_data_space())) { + return; + } PageIterator it(space); while (it.has_next()) { Page* p = it.next(); if (p->IsEvacuationCandidate()) continue; - VerifyEvacuation(p->area_start(), p->area_end()); + VerifyEvacuation(p); } } static void VerifyEvacuation(Heap* heap) { - VerifyEvacuation(heap->old_pointer_space()); - VerifyEvacuation(heap->old_data_space()); - VerifyEvacuation(heap->code_space()); - VerifyEvacuation(heap->cell_space()); - VerifyEvacuation(heap->property_cell_space()); - VerifyEvacuation(heap->map_space()); + VerifyEvacuation(heap, heap->old_pointer_space()); + VerifyEvacuation(heap, heap->old_data_space()); + VerifyEvacuation(heap, heap->code_space()); + VerifyEvacuation(heap, heap->cell_space()); + VerifyEvacuation(heap, heap->property_cell_space()); + VerifyEvacuation(heap, heap->map_space()); VerifyEvacuation(heap->new_space()); VerifyEvacuationVisitor visitor; @@ -235,7 +230,7 @@ static void VerifyEvacuation(Heap* heap) { #ifdef DEBUG -class VerifyNativeContextSeparationVisitor: public ObjectVisitor { +class VerifyNativeContextSeparationVisitor : public ObjectVisitor { public: VerifyNativeContextSeparationVisitor() : current_native_context_(NULL) {} @@ -274,8 +269,8 @@ class VerifyNativeContextSeparationVisitor: public ObjectVisitor { // Set array length to zero to prevent cycles while iterating // over array bodies, this is easier than intrusive marking. 
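The comment just above describes a cycle-breaking trick the verifier applies in the code that follows: a FixedArray's length is temporarily set to zero while its body is iterated, so a cyclic reference back into the same array sees an "empty" object and the recursion terminates. A toy version of the same idea, with invented Node and Visit names:

```cpp
// Sketch: hide an object's children while visiting it so cycles
// terminate, then restore them. Node/Visit are stand-ins, not V8 types.
#include <cstdio>
#include <vector>

struct Node {
  std::vector<Node*> body;
};

void Visit(Node* n, int depth) {
  std::vector<Node*> saved;
  saved.swap(n->body);  // "set length to zero": hide the body
  std::printf("visiting node at depth %d (%zu children hidden)\n", depth,
              saved.size());
  for (Node* child : saved) Visit(child, depth + 1);  // cycles now stop
  saved.swap(n->body);  // restore the original body
}

int main() {
  Node a, b;
  a.body = {&b};
  b.body = {&a};  // a cycle
  Visit(&a, 0);   // terminates: the recursion back into a sees no children
}
```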
array->set_length(0); - array->IterateBody( - FIXED_ARRAY_TYPE, FixedArray::SizeFor(length), this); + array->IterateBody(FIXED_ARRAY_TYPE, FixedArray::SizeFor(length), + this); array->set_length(length); } break; @@ -292,6 +287,7 @@ class VerifyNativeContextSeparationVisitor: public ObjectVisitor { case CODE_TYPE: case FIXED_DOUBLE_ARRAY_TYPE: case HEAP_NUMBER_TYPE: + case MUTABLE_HEAP_NUMBER_TYPE: case INTERCEPTOR_INFO_TYPE: case ODDBALL_TYPE: case SCRIPT_TYPE: @@ -336,9 +332,7 @@ void MarkCompactCollector::SetUp() { } -void MarkCompactCollector::TearDown() { - AbortCompaction(); -} +void MarkCompactCollector::TearDown() { AbortCompaction(); } void MarkCompactCollector::AddEvacuationCandidate(Page* p) { @@ -352,16 +346,14 @@ static void TraceFragmentation(PagedSpace* space) { intptr_t reserved = (number_of_pages * space->AreaSize()); intptr_t free = reserved - space->SizeOfObjects(); PrintF("[%s]: %d pages, %d (%.1f%%) free\n", - AllocationSpaceName(space->identity()), - number_of_pages, - static_cast<int>(free), - static_cast<double>(free) * 100 / reserved); + AllocationSpaceName(space->identity()), number_of_pages, + static_cast<int>(free), static_cast<double>(free) * 100 / reserved); } bool MarkCompactCollector::StartCompaction(CompactionMode mode) { if (!compacting_) { - ASSERT(evacuation_candidates_.length() == 0); + DCHECK(evacuation_candidates_.length() == 0); #ifdef ENABLE_GDB_JIT_INTERFACE // If GDBJIT interface is active disable compaction. @@ -371,9 +363,8 @@ bool MarkCompactCollector::StartCompaction(CompactionMode mode) { CollectEvacuationCandidates(heap()->old_pointer_space()); CollectEvacuationCandidates(heap()->old_data_space()); - if (FLAG_compact_code_space && - (mode == NON_INCREMENTAL_COMPACTION || - FLAG_incremental_code_compaction)) { + if (FLAG_compact_code_space && (mode == NON_INCREMENTAL_COMPACTION || + FLAG_incremental_code_compaction)) { CollectEvacuationCandidates(heap()->code_space()); } else if (FLAG_trace_fragmentation) { TraceFragmentation(heap()->code_space()); @@ -399,11 +390,10 @@ bool MarkCompactCollector::StartCompaction(CompactionMode mode) { void MarkCompactCollector::CollectGarbage() { // Make sure that Prepare() has been called. The individual steps below will // update the state as they proceed. 
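TraceFragmentation above reports, per space, how much of the reserved page area is free: reserved is the page count times the per-page usable area, and free is reserved minus the live object bytes. The same arithmetic on made-up numbers:

```cpp
// Sketch of TraceFragmentation's arithmetic; the constants are invented.
#include <cstdint>
#include <cstdio>

int main() {
  const int64_t kAreaSize = 500 * 1024;  // usable bytes per page (made up)
  int64_t number_of_pages = 8;
  int64_t size_of_objects = 3 * 1024 * 1024;  // live bytes in the space

  int64_t reserved = number_of_pages * kAreaSize;
  int64_t free_bytes = reserved - size_of_objects;
  std::printf("[SPACE]: %lld pages, %lld (%.1f%%) free\n",
              static_cast<long long>(number_of_pages),
              static_cast<long long>(free_bytes),
              100.0 * free_bytes / reserved);
}
```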
- ASSERT(state_ == PREPARE_GC); - ASSERT(encountered_weak_collections_ == Smi::FromInt(0)); + DCHECK(state_ == PREPARE_GC); MarkLiveObjects(); - ASSERT(heap_->incremental_marking()->IsStopped()); + DCHECK(heap_->incremental_marking()->IsStopped()); if (FLAG_collect_maps) ClearNonLiveReferences(); @@ -417,8 +407,6 @@ void MarkCompactCollector::CollectGarbage() { SweepSpaces(); - if (!FLAG_collect_maps) ReattachInitialMaps(); - #ifdef DEBUG if (FLAG_verify_native_context_separation) { VerifyNativeContextSeparation(heap_); @@ -439,11 +427,9 @@ void MarkCompactCollector::CollectGarbage() { if (marking_parity_ == EVEN_MARKING_PARITY) { marking_parity_ = ODD_MARKING_PARITY; } else { - ASSERT(marking_parity_ == ODD_MARKING_PARITY); + DCHECK(marking_parity_ == ODD_MARKING_PARITY); marking_parity_ = EVEN_MARKING_PARITY; } - - tracer_ = NULL; } @@ -490,8 +476,7 @@ void MarkCompactCollector::VerifyMarkbitsAreClean() { void MarkCompactCollector::VerifyWeakEmbeddedObjectsInCode() { HeapObjectIterator code_iterator(heap()->code_space()); - for (HeapObject* obj = code_iterator.Next(); - obj != NULL; + for (HeapObject* obj = code_iterator.Next(); obj != NULL; obj = code_iterator.Next()) { Code* code = Code::cast(obj); if (!code->is_optimized_code() && !code->is_weak_stub()) continue; @@ -503,9 +488,7 @@ void MarkCompactCollector::VerifyWeakEmbeddedObjectsInCode() { void MarkCompactCollector::VerifyOmittedMapChecks() { HeapObjectIterator iterator(heap()->map_space()); - for (HeapObject* obj = iterator.Next(); - obj != NULL; - obj = iterator.Next()) { + for (HeapObject* obj = iterator.Next(); obj != NULL; obj = iterator.Next()) { Map* map = Map::cast(obj); map->VerifyOmittedMapChecks(); } @@ -553,15 +536,14 @@ void MarkCompactCollector::ClearMarkbits() { class MarkCompactCollector::SweeperTask : public v8::Task { public: - SweeperTask(Heap* heap, PagedSpace* space) - : heap_(heap), space_(space) {} + SweeperTask(Heap* heap, PagedSpace* space) : heap_(heap), space_(space) {} virtual ~SweeperTask() {} private: // v8::Task overrides. virtual void Run() V8_OVERRIDE { - heap_->mark_compact_collector()->SweepInParallel(space_); + heap_->mark_compact_collector()->SweepInParallel(space_, 0); heap_->mark_compact_collector()->pending_sweeper_jobs_semaphore_.Signal(); } @@ -573,9 +555,9 @@ class MarkCompactCollector::SweeperTask : public v8::Task { void MarkCompactCollector::StartSweeperThreads() { - ASSERT(free_list_old_pointer_space_.get()->IsEmpty()); - ASSERT(free_list_old_data_space_.get()->IsEmpty()); - sweeping_pending_ = true; + DCHECK(free_list_old_pointer_space_.get()->IsEmpty()); + DCHECK(free_list_old_data_space_.get()->IsEmpty()); + sweeping_in_progress_ = true; for (int i = 0; i < isolate()->num_sweeper_threads(); i++) { isolate()->sweeper_threads()[i]->StartSweeping(); } @@ -590,8 +572,17 @@ void MarkCompactCollector::StartSweeperThreads() { } -void MarkCompactCollector::WaitUntilSweepingCompleted() { - ASSERT(sweeping_pending_ == true); +void MarkCompactCollector::EnsureSweepingCompleted() { + DCHECK(sweeping_in_progress_ == true); + + // If sweeping is not completed, we try to complete it here. If we do not + // have sweeper threads we have to complete since we do not have a good + // indicator for a swept space in that case. 
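The comment above introduces the shape of the renamed EnsureSweepingCompleted (formerly WaitUntilSweepingCompleted): if the background sweepers have not finished, or were never started, the main thread performs sweeping work itself before blocking on the sweeper threads and the jobs semaphore, so completion never depends on background-thread timing. A minimal sketch of that "help, then wait" shape with one background thread and an atomic work counter; SweepSome stands in for SweepInParallel:

```cpp
// Sketch: the main thread helps drain the sweeping queue before joining
// the background sweeper, mirroring EnsureSweepingCompleted's structure.
#include <atomic>
#include <thread>

std::atomic<int> pages_left{8};

void SweepSome() {  // stand-in for SweepInParallel(space, 0)
  while (pages_left.fetch_sub(1) > 0) {
    // sweep one page's free space here
  }
}

int main() {
  std::thread sweeper(SweepSome);  // background sweeper thread
  SweepSome();                     // main thread helps finish the queue
  sweeper.join();                  // then waits for the sweeper
}
```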
+ if (!AreSweeperThreadsActivated() || !IsSweepingCompleted()) { + SweepInParallel(heap()->paged_space(OLD_DATA_SPACE), 0); + SweepInParallel(heap()->paged_space(OLD_POINTER_SPACE), 0); + } + for (int i = 0; i < isolate()->num_sweeper_threads(); i++) { isolate()->sweeper_threads()[i]->WaitForSweeperThread(); } @@ -601,11 +592,17 @@ void MarkCompactCollector::WaitUntilSweepingCompleted() { pending_sweeper_jobs_semaphore_.Wait(); } ParallelSweepSpacesComplete(); - sweeping_pending_ = false; + sweeping_in_progress_ = false; RefillFreeList(heap()->paged_space(OLD_DATA_SPACE)); RefillFreeList(heap()->paged_space(OLD_POINTER_SPACE)); heap()->paged_space(OLD_DATA_SPACE)->ResetUnsweptFreeBytes(); heap()->paged_space(OLD_POINTER_SPACE)->ResetUnsweptFreeBytes(); + +#ifdef VERIFY_HEAP + if (FLAG_verify_heap) { + VerifyEvacuation(heap_); + } +#endif } @@ -615,12 +612,15 @@ bool MarkCompactCollector::IsSweepingCompleted() { return false; } } + if (FLAG_job_based_sweeping) { - if (!pending_sweeper_jobs_semaphore_.WaitFor(TimeDelta::FromSeconds(0))) { + if (!pending_sweeper_jobs_semaphore_.WaitFor( + base::TimeDelta::FromSeconds(0))) { return false; } pending_sweeper_jobs_semaphore_.Signal(); } + return true; } @@ -649,14 +649,9 @@ bool MarkCompactCollector::AreSweeperThreadsActivated() { } -bool MarkCompactCollector::IsConcurrentSweepingInProgress() { - return sweeping_pending_; -} - - void Marking::TransferMark(Address old_start, Address new_start) { // This is only used when resizing an object. - ASSERT(MemoryChunk::FromAddress(old_start) == + DCHECK(MemoryChunk::FromAddress(old_start) == MemoryChunk::FromAddress(new_start)); if (!heap_->incremental_marking()->IsMarking()) return; @@ -675,13 +670,13 @@ void Marking::TransferMark(Address old_start, Address new_start) { if (Marking::IsBlack(old_mark_bit)) { old_mark_bit.Clear(); - ASSERT(IsWhite(old_mark_bit)); + DCHECK(IsWhite(old_mark_bit)); Marking::MarkBlack(new_mark_bit); return; } else if (Marking::IsGrey(old_mark_bit)) { old_mark_bit.Clear(); old_mark_bit.Next().Clear(); - ASSERT(IsWhite(old_mark_bit)); + DCHECK(IsWhite(old_mark_bit)); heap_->incremental_marking()->WhiteToGreyAndPush( HeapObject::FromAddress(new_start), new_mark_bit); heap_->incremental_marking()->RestartIfNotMarking(); @@ -689,22 +684,29 @@ void Marking::TransferMark(Address old_start, Address new_start) { #ifdef DEBUG ObjectColor new_color = Color(new_mark_bit); - ASSERT(new_color == old_color); + DCHECK(new_color == old_color); #endif } const char* AllocationSpaceName(AllocationSpace space) { switch (space) { - case NEW_SPACE: return "NEW_SPACE"; - case OLD_POINTER_SPACE: return "OLD_POINTER_SPACE"; - case OLD_DATA_SPACE: return "OLD_DATA_SPACE"; - case CODE_SPACE: return "CODE_SPACE"; - case MAP_SPACE: return "MAP_SPACE"; - case CELL_SPACE: return "CELL_SPACE"; + case NEW_SPACE: + return "NEW_SPACE"; + case OLD_POINTER_SPACE: + return "OLD_POINTER_SPACE"; + case OLD_DATA_SPACE: + return "OLD_DATA_SPACE"; + case CODE_SPACE: + return "CODE_SPACE"; + case MAP_SPACE: + return "MAP_SPACE"; + case CELL_SPACE: + return "CELL_SPACE"; case PROPERTY_CELL_SPACE: return "PROPERTY_CELL_SPACE"; - case LO_SPACE: return "LO_SPACE"; + case LO_SPACE: + return "LO_SPACE"; default: UNREACHABLE(); } @@ -720,10 +722,8 @@ static int FreeListFragmentation(PagedSpace* space, Page* p) { // If page was not swept then there are no free list items on it. 
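IsSweepingCompleted above uses a neat idiom: it polls the pending-jobs semaphore with a zero timeout and, on success, immediately signals it back so the count is preserved for the blocking wait in EnsureSweepingCompleted. The same try-then-restore idiom expressed with std::counting_semaphore (this sketch requires C++20; V8's own semaphore API differs):

```cpp
// Sketch of a non-blocking semaphore poll that does not consume the
// signal: try-acquire, and on success release it straight back.
#include <semaphore>

std::counting_semaphore<> pending_jobs(0);

bool IsSweepingCompleted() {
  if (!pending_jobs.try_acquire()) return false;  // job still running
  pending_jobs.release();  // put the signal back for the real wait
  return true;
}

int main() {
  bool done = IsSweepingCompleted();  // false: nothing signalled yet
  pending_jobs.release();             // a sweeper job signals completion
  done = IsSweepingCompleted();       // now true, and the signal survives
  return done ? 0 : 1;
}
```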
if (!p->WasSwept()) { if (FLAG_trace_fragmentation) { - PrintF("%p [%s]: %d bytes live (unswept)\n", - reinterpret_cast<void*>(p), - AllocationSpaceName(space->identity()), - p->LiveBytes()); + PrintF("%p [%s]: %d bytes live (unswept)\n", reinterpret_cast<void*>(p), + AllocationSpaceName(space->identity()), p->LiveBytes()); } return 0; } @@ -735,31 +735,24 @@ static int FreeListFragmentation(PagedSpace* space, Page* p) { intptr_t ratio_threshold; intptr_t area_size = space->AreaSize(); if (space->identity() == CODE_SPACE) { - ratio = (sizes.medium_size_ * 10 + sizes.large_size_ * 2) * 100 / - area_size; + ratio = (sizes.medium_size_ * 10 + sizes.large_size_ * 2) * 100 / area_size; ratio_threshold = 10; } else { - ratio = (sizes.small_size_ * 5 + sizes.medium_size_) * 100 / - area_size; + ratio = (sizes.small_size_ * 5 + sizes.medium_size_) * 100 / area_size; ratio_threshold = 15; } if (FLAG_trace_fragmentation) { PrintF("%p [%s]: %d (%.2f%%) %d (%.2f%%) %d (%.2f%%) %d (%.2f%%) %s\n", - reinterpret_cast<void*>(p), - AllocationSpaceName(space->identity()), + reinterpret_cast<void*>(p), AllocationSpaceName(space->identity()), static_cast<int>(sizes.small_size_), - static_cast<double>(sizes.small_size_ * 100) / - area_size, + static_cast<double>(sizes.small_size_ * 100) / area_size, static_cast<int>(sizes.medium_size_), - static_cast<double>(sizes.medium_size_ * 100) / - area_size, + static_cast<double>(sizes.medium_size_ * 100) / area_size, static_cast<int>(sizes.large_size_), - static_cast<double>(sizes.large_size_ * 100) / - area_size, + static_cast<double>(sizes.large_size_ * 100) / area_size, static_cast<int>(sizes.huge_size_), - static_cast<double>(sizes.huge_size_ * 100) / - area_size, + static_cast<double>(sizes.huge_size_ * 100) / area_size, (ratio > ratio_threshold) ? 
"[fragmented]" : ""); } @@ -774,7 +767,7 @@ static int FreeListFragmentation(PagedSpace* space, Page* p) { void MarkCompactCollector::CollectEvacuationCandidates(PagedSpace* space) { - ASSERT(space->identity() == OLD_POINTER_SPACE || + DCHECK(space->identity() == OLD_POINTER_SPACE || space->identity() == OLD_DATA_SPACE || space->identity() == CODE_SPACE); @@ -789,8 +782,8 @@ void MarkCompactCollector::CollectEvacuationCandidates(PagedSpace* space) { class Candidate { public: - Candidate() : fragmentation_(0), page_(NULL) { } - Candidate(int f, Page* p) : fragmentation_(f), page_(p) { } + Candidate() : fragmentation_(0), page_(NULL) {} + Candidate(int f, Page* p) : fragmentation_(f), page_(p) {} int fragmentation() { return fragmentation_; } Page* page() { return page_; } @@ -800,10 +793,7 @@ void MarkCompactCollector::CollectEvacuationCandidates(PagedSpace* space) { Page* page_; }; - enum CompactionMode { - COMPACT_FREE_LISTS, - REDUCE_MEMORY_FOOTPRINT - }; + enum CompactionMode { COMPACT_FREE_LISTS, REDUCE_MEMORY_FOOTPRINT }; CompactionMode mode = COMPACT_FREE_LISTS; @@ -829,12 +819,12 @@ void MarkCompactCollector::CollectEvacuationCandidates(PagedSpace* space) { } if (FLAG_trace_fragmentation && mode == REDUCE_MEMORY_FOOTPRINT) { - PrintF("Estimated over reserved memory: %.1f / %.1f MB (threshold %d), " - "evacuation candidate limit: %d\n", - static_cast<double>(over_reserved) / MB, - static_cast<double>(reserved) / MB, - static_cast<int>(kFreenessThreshold), - max_evacuation_candidates); + PrintF( + "Estimated over reserved memory: %.1f / %.1f MB (threshold %d), " + "evacuation candidate limit: %d\n", + static_cast<double>(over_reserved) / MB, + static_cast<double>(reserved) / MB, + static_cast<int>(kFreenessThreshold), max_evacuation_candidates); } intptr_t estimated_release = 0; @@ -885,8 +875,7 @@ void MarkCompactCollector::CollectEvacuationCandidates(PagedSpace* space) { } if (FLAG_trace_fragmentation) { - PrintF("%p [%s]: %d (%.2f%%) free %s\n", - reinterpret_cast<void*>(p), + PrintF("%p [%s]: %d (%.2f%%) free %s\n", reinterpret_cast<void*>(p), AllocationSpaceName(space->identity()), static_cast<int>(free_bytes), static_cast<double>(free_bytes * 100) / p->area_size(), @@ -921,8 +910,7 @@ void MarkCompactCollector::CollectEvacuationCandidates(PagedSpace* space) { } if (count > 0 && FLAG_trace_fragmentation) { - PrintF("Collected %d evacuation candidates for space %s\n", - count, + PrintF("Collected %d evacuation candidates for space %s\n", count, AllocationSpaceName(space->identity())); } } @@ -941,33 +929,30 @@ void MarkCompactCollector::AbortCompaction() { evacuation_candidates_.Rewind(0); invalidated_code_.Rewind(0); } - ASSERT_EQ(0, evacuation_candidates_.length()); + DCHECK_EQ(0, evacuation_candidates_.length()); } -void MarkCompactCollector::Prepare(GCTracer* tracer) { +void MarkCompactCollector::Prepare() { was_marked_incrementally_ = heap()->incremental_marking()->IsMarking(); - // Rather than passing the tracer around we stash it in a static member - // variable. - tracer_ = tracer; - #ifdef DEBUG - ASSERT(state_ == IDLE); + DCHECK(state_ == IDLE); state_ = PREPARE_GC; #endif - ASSERT(!FLAG_never_compact || !FLAG_always_compact); + DCHECK(!FLAG_never_compact || !FLAG_always_compact); - if (IsConcurrentSweepingInProgress()) { + if (sweeping_in_progress()) { // Instead of waiting we could also abort the sweeper threads here. - WaitUntilSweepingCompleted(); + EnsureSweepingCompleted(); } // Clear marking bits if incremental marking is aborted. 
if (was_marked_incrementally_ && abort_incremental_marking_) { heap()->incremental_marking()->Abort(); ClearMarkbits(); + AbortWeakCollections(); AbortCompaction(); was_marked_incrementally_ = false; } @@ -979,8 +964,7 @@ void MarkCompactCollector::Prepare(GCTracer* tracer) { } PagedSpaces spaces(heap()); - for (PagedSpace* space = spaces.next(); - space != NULL; + for (PagedSpace* space = spaces.next(); space != NULL; space = spaces.next()) { space->PrepareForMarkCompact(); } @@ -995,7 +979,7 @@ void MarkCompactCollector::Prepare(GCTracer* tracer) { void MarkCompactCollector::Finish() { #ifdef DEBUG - ASSERT(state_ == SWEEP_SPACES || state_ == RELOCATE_OBJECTS); + DCHECK(state_ == SWEEP_SPACES || state_ == RELOCATE_OBJECTS); state_ = IDLE; #endif // The stub cache is not traversed during GC; clear the cache to @@ -1071,13 +1055,13 @@ void CodeFlusher::ProcessJSFunctionCandidates() { // setter did not record the slot update and we have to do that manually. Address slot = candidate->address() + JSFunction::kCodeEntryOffset; Code* target = Code::cast(Code::GetObjectFromEntryAddress(slot)); - isolate_->heap()->mark_compact_collector()-> - RecordCodeEntrySlot(slot, target); + isolate_->heap()->mark_compact_collector()->RecordCodeEntrySlot(slot, + target); Object** shared_code_slot = HeapObject::RawField(shared, SharedFunctionInfo::kCodeOffset); - isolate_->heap()->mark_compact_collector()-> - RecordSlot(shared_code_slot, shared_code_slot, *shared_code_slot); + isolate_->heap()->mark_compact_collector()->RecordSlot( + shared_code_slot, shared_code_slot, *shared_code_slot); candidate = next_candidate; } @@ -1109,8 +1093,8 @@ void CodeFlusher::ProcessSharedFunctionInfoCandidates() { Object** code_slot = HeapObject::RawField(candidate, SharedFunctionInfo::kCodeOffset); - isolate_->heap()->mark_compact_collector()-> - RecordSlot(code_slot, code_slot, *code_slot); + isolate_->heap()->mark_compact_collector()->RecordSlot(code_slot, code_slot, + *code_slot); candidate = next_candidate; } @@ -1132,8 +1116,7 @@ void CodeFlusher::ProcessOptimizedCodeMaps() { FixedArray* code_map = FixedArray::cast(holder->optimized_code_map()); int new_length = SharedFunctionInfo::kEntriesStart; int old_length = code_map->length(); - for (int i = SharedFunctionInfo::kEntriesStart; - i < old_length; + for (int i = SharedFunctionInfo::kEntriesStart; i < old_length; i += SharedFunctionInfo::kEntryLength) { Code* code = Code::cast(code_map->get(i + SharedFunctionInfo::kCachedCodeOffset)); @@ -1146,12 +1129,12 @@ void CodeFlusher::ProcessOptimizedCodeMaps() { Object* object = code_map->get(i + j); code_map->set(dst_index, object); if (j == SharedFunctionInfo::kOsrAstIdOffset) { - ASSERT(object->IsSmi()); + DCHECK(object->IsSmi()); } else { - ASSERT(Marking::IsBlack( - Marking::MarkBitFrom(HeapObject::cast(*slot)))); - isolate_->heap()->mark_compact_collector()-> - RecordSlot(slot, slot, *slot); + DCHECK( + Marking::IsBlack(Marking::MarkBitFrom(HeapObject::cast(*slot)))); + isolate_->heap()->mark_compact_collector()->RecordSlot(slot, slot, + *slot); } } } @@ -1202,7 +1185,7 @@ void CodeFlusher::EvictCandidate(SharedFunctionInfo* shared_info) { void CodeFlusher::EvictCandidate(JSFunction* function) { - ASSERT(!function->next_function_link()->IsUndefined()); + DCHECK(!function->next_function_link()->IsUndefined()); Object* undefined = isolate_->heap()->undefined_value(); // Make sure previous flushing decisions are revisited. 
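CodeFlusher::EvictCandidate above unlinks a function from an intrusive candidate list, one that is threaded through the candidate objects themselves via next_function_link, and then clears the link so flushing decisions are revisited. A compact sketch of such a list with a stand-in Function type:

```cpp
// Sketch of an intrusive singly linked candidate list: each candidate
// carries its own "next" link, so eviction is a pointer-chase unlink.
#include <cassert>

struct Function {
  Function* next_candidate = nullptr;
};

struct CodeFlusher {
  Function* head = nullptr;
  void Add(Function* f) {
    f->next_candidate = head;
    head = f;
  }
  void Evict(Function* f) {
    Function** link = &head;
    while (*link != f) link = &(*link)->next_candidate;
    *link = f->next_candidate;    // unlink from the chain
    f->next_candidate = nullptr;  // revisit this flushing decision later
  }
};

int main() {
  Function a, b, c;
  CodeFlusher flusher;
  flusher.Add(&a);
  flusher.Add(&b);
  flusher.Add(&c);  // list: c -> b -> a
  flusher.Evict(&b);
  assert(flusher.head == &c && c.next_candidate == &a);
}
```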
@@ -1239,8 +1222,9 @@ void CodeFlusher::EvictCandidate(JSFunction* function) { void CodeFlusher::EvictOptimizedCodeMap(SharedFunctionInfo* code_map_holder) { - ASSERT(!FixedArray::cast(code_map_holder->optimized_code_map())-> - get(SharedFunctionInfo::kNextMapIndex)->IsUndefined()); + DCHECK(!FixedArray::cast(code_map_holder->optimized_code_map()) + ->get(SharedFunctionInfo::kNextMapIndex) + ->IsUndefined()); // Make sure previous flushing decisions are revisited. isolate_->heap()->incremental_marking()->RecordWrites(code_map_holder); @@ -1282,7 +1266,7 @@ void CodeFlusher::EvictJSFunctionCandidates() { EvictCandidate(candidate); candidate = next_candidate; } - ASSERT(jsfunction_candidates_head_ == NULL); + DCHECK(jsfunction_candidates_head_ == NULL); } @@ -1294,7 +1278,7 @@ void CodeFlusher::EvictSharedFunctionInfoCandidates() { EvictCandidate(candidate); candidate = next_candidate; } - ASSERT(shared_function_info_candidates_head_ == NULL); + DCHECK(shared_function_info_candidates_head_ == NULL); } @@ -1306,7 +1290,7 @@ void CodeFlusher::EvictOptimizedCodeMaps() { EvictOptimizedCodeMap(holder); holder = next_holder; } - ASSERT(optimized_code_map_holder_head_ == NULL); + DCHECK(optimized_code_map_holder_head_ == NULL); } @@ -1350,7 +1334,7 @@ static inline HeapObject* ShortCircuitConsString(Object** p) { if (!FLAG_clever_optimizations) return object; Map* map = object->map(); InstanceType type = map->instance_type(); - if ((type & kShortcutTypeMask) != kShortcutTypeTag) return object; + if (!IsShortcutCandidate(type)) return object; Object* second = reinterpret_cast<ConsString*>(object)->second(); Heap* heap = map->GetHeap(); @@ -1372,15 +1356,14 @@ static inline HeapObject* ShortCircuitConsString(Object** p) { class MarkCompactMarkingVisitor : public StaticMarkingVisitor<MarkCompactMarkingVisitor> { public: - static void ObjectStatsVisitBase(StaticVisitorBase::VisitorId id, - Map* map, HeapObject* obj); + static void ObjectStatsVisitBase(StaticVisitorBase::VisitorId id, Map* map, + HeapObject* obj); static void ObjectStatsCountFixedArray( - FixedArrayBase* fixed_array, - FixedArraySubInstanceType fast_type, + FixedArrayBase* fixed_array, FixedArraySubInstanceType fast_type, FixedArraySubInstanceType dictionary_type); - template<MarkCompactMarkingVisitor::VisitorId id> + template <MarkCompactMarkingVisitor::VisitorId id> class ObjectStatsTracker { public: static inline void Visit(Map* map, HeapObject* obj); @@ -1424,8 +1407,7 @@ class MarkCompactMarkingVisitor // Mark object pointed to by p. INLINE(static void MarkObjectByPointer(MarkCompactCollector* collector, - Object** anchor_slot, - Object** p)) { + Object** anchor_slot, Object** p)) { if (!(*p)->IsHeapObject()) return; HeapObject* object = ShortCircuitConsString(p); collector->RecordSlot(anchor_slot, p, object); @@ -1438,8 +1420,8 @@ class MarkCompactMarkingVisitor INLINE(static void VisitUnmarkedObject(MarkCompactCollector* collector, HeapObject* obj)) { #ifdef DEBUG - ASSERT(collector->heap()->Contains(obj)); - ASSERT(!collector->heap()->mark_compact_collector()->IsMarked(obj)); + DCHECK(collector->heap()->Contains(obj)); + DCHECK(!collector->heap()->mark_compact_collector()->IsMarked(obj)); #endif Map* map = obj->map(); Heap* heap = obj->GetHeap(); @@ -1453,8 +1435,7 @@ class MarkCompactMarkingVisitor // Visit all unmarked objects pointed to by [start, end). // Returns false if the operation fails (lack of stack space). 
- INLINE(static bool VisitUnmarkedObjects(Heap* heap, - Object** start, + INLINE(static bool VisitUnmarkedObjects(Heap* heap, Object** start, Object** end)) { // Return false is we are close to the stack limit. StackLimitCheck check(heap->isolate()); @@ -1474,64 +1455,21 @@ class MarkCompactMarkingVisitor return true; } - INLINE(static void BeforeVisitingSharedFunctionInfo(HeapObject* object)) { - SharedFunctionInfo* shared = SharedFunctionInfo::cast(object); - shared->BeforeVisitingPointers(); - } - - static void VisitWeakCollection(Map* map, HeapObject* object) { - MarkCompactCollector* collector = map->GetHeap()->mark_compact_collector(); - JSWeakCollection* weak_collection = - reinterpret_cast<JSWeakCollection*>(object); - - // Enqueue weak map in linked list of encountered weak maps. - if (weak_collection->next() == Smi::FromInt(0)) { - weak_collection->set_next(collector->encountered_weak_collections()); - collector->set_encountered_weak_collections(weak_collection); - } - - // Skip visiting the backing hash table containing the mappings. - int object_size = JSWeakCollection::BodyDescriptor::SizeOf(map, object); - BodyVisitorBase<MarkCompactMarkingVisitor>::IteratePointers( - map->GetHeap(), - object, - JSWeakCollection::BodyDescriptor::kStartOffset, - JSWeakCollection::kTableOffset); - BodyVisitorBase<MarkCompactMarkingVisitor>::IteratePointers( - map->GetHeap(), - object, - JSWeakCollection::kTableOffset + kPointerSize, - object_size); - - // Mark the backing hash table without pushing it on the marking stack. - Object* table_object = weak_collection->table(); - if (!table_object->IsHashTable()) return; - WeakHashTable* table = WeakHashTable::cast(table_object); - Object** table_slot = - HeapObject::RawField(weak_collection, JSWeakCollection::kTableOffset); - MarkBit table_mark = Marking::MarkBitFrom(table); - collector->RecordSlot(table_slot, table_slot, table); - if (!table_mark.Get()) collector->SetMark(table, table_mark); - // Recording the map slot can be skipped, because maps are not compacted. - collector->MarkObject(table->map(), Marking::MarkBitFrom(table->map())); - ASSERT(MarkCompactCollector::IsMarked(table->map())); - } - private: - template<int id> + template <int id> static inline void TrackObjectStatsAndVisit(Map* map, HeapObject* obj); // Code flushing support. static const int kRegExpCodeThreshold = 5; - static void UpdateRegExpCodeAgeAndFlush(Heap* heap, - JSRegExp* re, + static void UpdateRegExpCodeAgeAndFlush(Heap* heap, JSRegExp* re, bool is_ascii) { // Make sure that the fixed array is in fact initialized on the RegExp. // We could potentially trigger a GC when initializing the RegExp. if (HeapObject::cast(re->data())->map()->instance_type() != - FIXED_ARRAY_TYPE) return; + FIXED_ARRAY_TYPE) + return; // Make sure this is a RegExp that actually contains code. if (re->TypeTag() != JSRegExp::IRREGEXP) return; @@ -1548,8 +1486,7 @@ class MarkCompactMarkingVisitor // object. FixedArray* data = FixedArray::cast(re->data()); Object** slot = data->data_start() + JSRegExp::saved_code_index(is_ascii); - heap->mark_compact_collector()-> - RecordSlot(slot, slot, code); + heap->mark_compact_collector()->RecordSlot(slot, slot, code); // Set a number in the 0-255 range to guarantee no smi overflow. 
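UpdateRegExpCodeAgeAndFlush above implements the regexp code-flushing heuristic: compiled irregexp code that has gone unused for kRegExpCodeThreshold collections is dropped back to the uncompiled state and recompiled lazily on the next use. A toy model of the aging counter, with invented field names (V8 actually stashes the age in the regexp's data array as a small smi):

```cpp
// Sketch of GC-driven code aging: a counter ticks every collection and
// the code is flushed once it crosses the threshold.
#include <cstdio>

const int kRegExpCodeThreshold = 5;

struct RegExpData {
  bool compiled = true;
  int age = 0;  // running the regexp would reset this to 0
};

void UpdateAgeAndMaybeFlush(RegExpData& re) {
  if (!re.compiled) return;
  if (++re.age < kRegExpCodeThreshold) return;  // still warm: keep code
  re.compiled = false;  // cold: flush, recompile lazily on next use
  re.age = 0;
}

int main() {
  RegExpData re;
  for (int gc = 0; gc < 6; ++gc) UpdateAgeAndMaybeFlush(re);
  std::printf("compiled after 6 idle GCs: %d\n", re.compiled);  // 0
}
```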
re->SetDataAt(JSRegExp::code_index(is_ascii), @@ -1598,19 +1535,16 @@ class MarkCompactMarkingVisitor void MarkCompactMarkingVisitor::ObjectStatsCountFixedArray( - FixedArrayBase* fixed_array, - FixedArraySubInstanceType fast_type, + FixedArrayBase* fixed_array, FixedArraySubInstanceType fast_type, FixedArraySubInstanceType dictionary_type) { Heap* heap = fixed_array->map()->GetHeap(); if (fixed_array->map() != heap->fixed_cow_array_map() && fixed_array->map() != heap->fixed_double_array_map() && fixed_array != heap->empty_fixed_array()) { if (fixed_array->IsDictionary()) { - heap->RecordFixedArraySubTypeStats(dictionary_type, - fixed_array->Size()); + heap->RecordFixedArraySubTypeStats(dictionary_type, fixed_array->Size()); } else { - heap->RecordFixedArraySubTypeStats(fast_type, - fixed_array->Size()); + heap->RecordFixedArraySubTypeStats(fast_type, fixed_array->Size()); } } } @@ -1624,8 +1558,7 @@ void MarkCompactMarkingVisitor::ObjectStatsVisitBase( non_count_table_.GetVisitorById(id)(map, obj); if (obj->IsJSObject()) { JSObject* object = JSObject::cast(obj); - ObjectStatsCountFixedArray(object->elements(), - DICTIONARY_ELEMENTS_SUB_TYPE, + ObjectStatsCountFixedArray(object->elements(), DICTIONARY_ELEMENTS_SUB_TYPE, FAST_ELEMENTS_SUB_TYPE); ObjectStatsCountFixedArray(object->properties(), DICTIONARY_PROPERTIES_SUB_TYPE, @@ -1634,21 +1567,21 @@ void MarkCompactMarkingVisitor::ObjectStatsVisitBase( } -template<MarkCompactMarkingVisitor::VisitorId id> -void MarkCompactMarkingVisitor::ObjectStatsTracker<id>::Visit( - Map* map, HeapObject* obj) { +template <MarkCompactMarkingVisitor::VisitorId id> +void MarkCompactMarkingVisitor::ObjectStatsTracker<id>::Visit(Map* map, + HeapObject* obj) { ObjectStatsVisitBase(id, map, obj); } -template<> +template <> class MarkCompactMarkingVisitor::ObjectStatsTracker< MarkCompactMarkingVisitor::kVisitMap> { public: static inline void Visit(Map* map, HeapObject* obj) { Heap* heap = map->GetHeap(); Map* map_obj = Map::cast(obj); - ASSERT(map->instance_type() == MAP_TYPE); + DCHECK(map->instance_type() == MAP_TYPE); DescriptorArray* array = map_obj->instance_descriptors(); if (map_obj->owns_descriptors() && array != heap->empty_descriptor_array()) { @@ -1676,14 +1609,14 @@ class MarkCompactMarkingVisitor::ObjectStatsTracker< }; -template<> +template <> class MarkCompactMarkingVisitor::ObjectStatsTracker< MarkCompactMarkingVisitor::kVisitCode> { public: static inline void Visit(Map* map, HeapObject* obj) { Heap* heap = map->GetHeap(); int object_size = obj->Size(); - ASSERT(map->instance_type() == CODE_TYPE); + DCHECK(map->instance_type() == CODE_TYPE); Code* code_obj = Code::cast(obj); heap->RecordCodeSubTypeStats(code_obj->kind(), code_obj->GetRawAge(), object_size); @@ -1692,7 +1625,7 @@ class MarkCompactMarkingVisitor::ObjectStatsTracker< }; -template<> +template <> class MarkCompactMarkingVisitor::ObjectStatsTracker< MarkCompactMarkingVisitor::kVisitSharedFunctionInfo> { public: @@ -1701,15 +1634,14 @@ class MarkCompactMarkingVisitor::ObjectStatsTracker< SharedFunctionInfo* sfi = SharedFunctionInfo::cast(obj); if (sfi->scope_info() != heap->empty_fixed_array()) { heap->RecordFixedArraySubTypeStats( - SCOPE_INFO_SUB_TYPE, - FixedArray::cast(sfi->scope_info())->Size()); + SCOPE_INFO_SUB_TYPE, FixedArray::cast(sfi->scope_info())->Size()); } ObjectStatsVisitBase(kVisitSharedFunctionInfo, map, obj); } }; -template<> +template <> class MarkCompactMarkingVisitor::ObjectStatsTracker< MarkCompactMarkingVisitor::kVisitFixedArray> { public: @@ -1717,9 +1649,8 @@ 
class MarkCompactMarkingVisitor::ObjectStatsTracker< Heap* heap = map->GetHeap(); FixedArray* fixed_array = FixedArray::cast(obj); if (fixed_array == heap->string_table()) { - heap->RecordFixedArraySubTypeStats( - STRING_TABLE_SUB_TYPE, - fixed_array->Size()); + heap->RecordFixedArraySubTypeStats(STRING_TABLE_SUB_TYPE, + fixed_array->Size()); } ObjectStatsVisitBase(kVisitFixedArray, map, obj); } @@ -1729,14 +1660,13 @@ class MarkCompactMarkingVisitor::ObjectStatsTracker< void MarkCompactMarkingVisitor::Initialize() { StaticMarkingVisitor<MarkCompactMarkingVisitor>::Initialize(); - table_.Register(kVisitJSRegExp, - &VisitRegExpAndFlushCode); + table_.Register(kVisitJSRegExp, &VisitRegExpAndFlushCode); if (FLAG_track_gc_object_stats) { // Copy the visitor table to make call-through possible. non_count_table_.CopyFrom(&table_); -#define VISITOR_ID_COUNT_FUNCTION(id) \ - table_.Register(kVisit##id, ObjectStatsTracker<kVisit##id>::Visit); +#define VISITOR_ID_COUNT_FUNCTION(id) \ + table_.Register(kVisit##id, ObjectStatsTracker<kVisit##id>::Visit); VISITOR_ID_LIST(VISITOR_ID_COUNT_FUNCTION) #undef VISITOR_ID_COUNT_FUNCTION } @@ -1821,7 +1751,7 @@ void MarkCompactCollector::PrepareForCodeFlushing() { MarkObject(descriptor_array, descriptor_array_mark); // Make sure we are not referencing the code from the stack. - ASSERT(this == heap()->mark_compact_collector()); + DCHECK(this == heap()->mark_compact_collector()); PrepareThreadForCodeFlushing(heap()->isolate(), heap()->isolate()->thread_local_top()); @@ -1843,11 +1773,9 @@ void MarkCompactCollector::PrepareForCodeFlushing() { class RootMarkingVisitor : public ObjectVisitor { public: explicit RootMarkingVisitor(Heap* heap) - : collector_(heap->mark_compact_collector()) { } + : collector_(heap->mark_compact_collector()) {} - void VisitPointer(Object** p) { - MarkObjectByPointer(p); - } + void VisitPointer(Object** p) { MarkObjectByPointer(p); } void VisitPointers(Object** start, Object** end) { for (Object** p = start; p < end; p++) MarkObjectByPointer(p); @@ -1855,7 +1783,7 @@ class RootMarkingVisitor : public ObjectVisitor { // Skip the weak next code link in a code object, which is visited in // ProcessTopOptimizedFrame. - void VisitNextCodeLink(Object** p) { } + void VisitNextCodeLink(Object** p) {} private: void MarkObjectByPointer(Object** p) { @@ -1885,11 +1813,10 @@ class RootMarkingVisitor : public ObjectVisitor { // Helper class for pruning the string table. -template<bool finalize_external_strings> +template <bool finalize_external_strings> class StringTableCleaner : public ObjectVisitor { public: - explicit StringTableCleaner(Heap* heap) - : heap_(heap), pointers_removed_(0) { } + explicit StringTableCleaner(Heap* heap) : heap_(heap), pointers_removed_(0) {} virtual void VisitPointers(Object** start, Object** end) { // Visit all HeapObject pointers in [start, end). 
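MarkCompactMarkingVisitor::Initialize above supports --track-gc-object-stats by copying the dispatch table into non_count_table_ and then re-registering every visitor id with an ObjectStatsTracker wrapper that records stats and calls through the copy. The same decorate-a-function-table trick in miniature, with invented ids and visitors:

```cpp
// Sketch: copy a function table to make call-through possible, then wrap
// each live slot with a counting decorator.
#include <cstdio>

using Visitor = void (*)(int id);
const int kNumIds = 3;

Visitor table[kNumIds];
Visitor non_count_table[kNumIds];
int visit_counts[kNumIds];

void PlainVisit(int id) {
  (void)id;  // the real visiting work would happen here
}

template <int id>
void CountingVisit(int) {
  ++visit_counts[id];       // record the stat...
  non_count_table[id](id);  // ...then call through to the real visitor
}

int main() {
  for (int i = 0; i < kNumIds; ++i) table[i] = PlainVisit;
  // Copy the table, then replace each entry with its counting wrapper.
  for (int i = 0; i < kNumIds; ++i) non_count_table[i] = table[i];
  table[0] = CountingVisit<0>;
  table[1] = CountingVisit<1>;
  table[2] = CountingVisit<2>;

  table[1](1);
  table[1](1);
  std::printf("id 1 visited %d times\n", visit_counts[1]);  // 2
}
```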
@@ -1898,7 +1825,7 @@ class StringTableCleaner : public ObjectVisitor { if (o->IsHeapObject() && !Marking::MarkBitFrom(HeapObject::cast(o)).Get()) { if (finalize_external_strings) { - ASSERT(o->IsExternalString()); + DCHECK(o->IsExternalString()); heap_->FinalizeExternalString(String::cast(*p)); } else { pointers_removed_++; @@ -1910,7 +1837,7 @@ class StringTableCleaner : public ObjectVisitor { } int PointersRemoved() { - ASSERT(!finalize_external_strings); + DCHECK(!finalize_external_strings); return pointers_removed_; } @@ -1949,18 +1876,16 @@ class MarkCompactWeakObjectRetainer : public WeakObjectRetainer { // Fill the marking stack with overflowed objects returned by the given // iterator. Stop when the marking stack is filled or the end of the space // is reached, whichever comes first. -template<class T> +template <class T> static void DiscoverGreyObjectsWithIterator(Heap* heap, MarkingDeque* marking_deque, T* it) { // The caller should ensure that the marking stack is initially not full, // so that we don't waste effort pointlessly scanning for objects. - ASSERT(!marking_deque->IsFull()); + DCHECK(!marking_deque->IsFull()); Map* filler_map = heap->one_pointer_filler_map(); - for (HeapObject* object = it->Next(); - object != NULL; - object = it->Next()) { + for (HeapObject* object = it->Next(); object != NULL; object = it->Next()) { MarkBit markbit = Marking::MarkBitFrom(object); if ((object->map() != filler_map) && Marking::IsGrey(markbit)) { Marking::GreyToBlack(markbit); @@ -1977,11 +1902,11 @@ static inline int MarkWordToObjectStarts(uint32_t mark_bits, int* starts); static void DiscoverGreyObjectsOnPage(MarkingDeque* marking_deque, MemoryChunk* p) { - ASSERT(!marking_deque->IsFull()); - ASSERT(strcmp(Marking::kWhiteBitPattern, "00") == 0); - ASSERT(strcmp(Marking::kBlackBitPattern, "10") == 0); - ASSERT(strcmp(Marking::kGreyBitPattern, "11") == 0); - ASSERT(strcmp(Marking::kImpossibleBitPattern, "01") == 0); + DCHECK(!marking_deque->IsFull()); + DCHECK(strcmp(Marking::kWhiteBitPattern, "00") == 0); + DCHECK(strcmp(Marking::kBlackBitPattern, "10") == 0); + DCHECK(strcmp(Marking::kGreyBitPattern, "11") == 0); + DCHECK(strcmp(Marking::kImpossibleBitPattern, "01") == 0); for (MarkBitCellIterator it(p); !it.Done(); it.Advance()) { Address cell_base = it.CurrentCellBase(); @@ -1992,9 +1917,9 @@ static void DiscoverGreyObjectsOnPage(MarkingDeque* marking_deque, MarkBit::CellType grey_objects; if (it.HasNext()) { - const MarkBit::CellType next_cell = *(cell+1); - grey_objects = current_cell & - ((current_cell >> 1) | (next_cell << (Bitmap::kBitsPerCell - 1))); + const MarkBit::CellType next_cell = *(cell + 1); + grey_objects = current_cell & ((current_cell >> 1) | + (next_cell << (Bitmap::kBitsPerCell - 1))); } else { grey_objects = current_cell & (current_cell >> 1); } @@ -2005,7 +1930,7 @@ static void DiscoverGreyObjectsOnPage(MarkingDeque* marking_deque, grey_objects >>= trailing_zeros; offset += trailing_zeros; MarkBit markbit(cell, 1 << offset, false); - ASSERT(Marking::IsGrey(markbit)); + DCHECK(Marking::IsGrey(markbit)); Marking::GreyToBlack(markbit); Address addr = cell_base + offset * kPointerSize; HeapObject* object = HeapObject::FromAddress(addr); @@ -2021,13 +1946,12 @@ static void DiscoverGreyObjectsOnPage(MarkingDeque* marking_deque, } -int MarkCompactCollector::DiscoverAndPromoteBlackObjectsOnPage( - NewSpace* new_space, - NewSpacePage* p) { - ASSERT(strcmp(Marking::kWhiteBitPattern, "00") == 0); - ASSERT(strcmp(Marking::kBlackBitPattern, "10") == 0); - 
ASSERT(strcmp(Marking::kGreyBitPattern, "11") == 0); - ASSERT(strcmp(Marking::kImpossibleBitPattern, "01") == 0); +int MarkCompactCollector::DiscoverAndEvacuateBlackObjectsOnPage( + NewSpace* new_space, NewSpacePage* p) { + DCHECK(strcmp(Marking::kWhiteBitPattern, "00") == 0); + DCHECK(strcmp(Marking::kBlackBitPattern, "10") == 0); + DCHECK(strcmp(Marking::kGreyBitPattern, "11") == 0); + DCHECK(strcmp(Marking::kImpossibleBitPattern, "01") == 0); MarkBit::CellType* cells = p->markbits()->cells(); int survivors_size = 0; @@ -2054,12 +1978,13 @@ int MarkCompactCollector::DiscoverAndPromoteBlackObjectsOnPage( offset++; current_cell >>= 1; - // Aggressively promote young survivors to the old space. - if (TryPromoteObject(object, size)) { + + // TODO(hpayer): Refactor EvacuateObject and call this function instead. + if (heap()->ShouldBePromoted(object->address(), size) && + TryPromoteObject(object, size)) { continue; } - // Promotion failed. Just migrate object to another semispace. AllocationResult allocation = new_space->AllocateRaw(size); if (allocation.IsRetry()) { if (!new_space->AddFreshPage()) { @@ -2069,14 +1994,12 @@ int MarkCompactCollector::DiscoverAndPromoteBlackObjectsOnPage( UNREACHABLE(); } allocation = new_space->AllocateRaw(size); - ASSERT(!allocation.IsRetry()); + DCHECK(!allocation.IsRetry()); } Object* target = allocation.ToObjectChecked(); - MigrateObject(HeapObject::cast(target), - object, - size, - NEW_SPACE); + MigrateObject(HeapObject::cast(target), object, size, NEW_SPACE); + heap()->IncrementSemiSpaceCopiedObjectSize(size); } *cells = 0; } @@ -2084,19 +2007,13 @@ int MarkCompactCollector::DiscoverAndPromoteBlackObjectsOnPage( } -static void DiscoverGreyObjectsInSpace(Heap* heap, - MarkingDeque* marking_deque, +static void DiscoverGreyObjectsInSpace(Heap* heap, MarkingDeque* marking_deque, PagedSpace* space) { - if (!space->was_swept_conservatively()) { - HeapObjectIterator it(space); - DiscoverGreyObjectsWithIterator(heap, marking_deque, &it); - } else { - PageIterator it(space); - while (it.has_next()) { - Page* p = it.next(); - DiscoverGreyObjectsOnPage(marking_deque, p); - if (marking_deque->IsFull()) return; - } + PageIterator it(space); + while (it.has_next()) { + Page* p = it.next(); + DiscoverGreyObjectsOnPage(marking_deque, p); + if (marking_deque->IsFull()) return; } } @@ -2125,7 +2042,7 @@ bool MarkCompactCollector::IsUnmarkedHeapObject(Object** p) { bool MarkCompactCollector::IsUnmarkedHeapObjectWithHeap(Heap* heap, Object** p) { Object* o = *p; - ASSERT(o->IsHeapObject()); + DCHECK(o->IsHeapObject()); HeapObject* heap_object = HeapObject::cast(o); MarkBit mark = Marking::MarkBitFrom(heap_object); return !mark.Get(); @@ -2177,7 +2094,7 @@ void MarkCompactCollector::MarkImplicitRefGroups() { int last = 0; for (int i = 0; i < ref_groups->length(); i++) { ImplicitRefGroup* entry = ref_groups->at(i); - ASSERT(entry != NULL); + DCHECK(entry != NULL); if (!IsMarked(*entry->parent)) { (*ref_groups)[last++] = entry; @@ -2219,9 +2136,9 @@ void MarkCompactCollector::MarkWeakObjectToCodeTable() { void MarkCompactCollector::EmptyMarkingDeque() { while (!marking_deque_.IsEmpty()) { HeapObject* object = marking_deque_.Pop(); - ASSERT(object->IsHeapObject()); - ASSERT(heap()->Contains(object)); - ASSERT(Marking::IsBlack(Marking::MarkBitFrom(object))); + DCHECK(object->IsHeapObject()); + DCHECK(heap()->Contains(object)); + DCHECK(Marking::IsBlack(Marking::MarkBitFrom(object))); Map* map = object->map(); MarkBit map_mark = Marking::MarkBitFrom(map); @@ -2238,45 +2155,33 @@ 
void MarkCompactCollector::EmptyMarkingDeque() { // overflowed objects in the heap so the overflow flag on the markings stack // is cleared. void MarkCompactCollector::RefillMarkingDeque() { - ASSERT(marking_deque_.overflowed()); + DCHECK(marking_deque_.overflowed()); DiscoverGreyObjectsInNewSpace(heap(), &marking_deque_); if (marking_deque_.IsFull()) return; - DiscoverGreyObjectsInSpace(heap(), - &marking_deque_, + DiscoverGreyObjectsInSpace(heap(), &marking_deque_, heap()->old_pointer_space()); if (marking_deque_.IsFull()) return; - DiscoverGreyObjectsInSpace(heap(), - &marking_deque_, - heap()->old_data_space()); + DiscoverGreyObjectsInSpace(heap(), &marking_deque_, heap()->old_data_space()); if (marking_deque_.IsFull()) return; - DiscoverGreyObjectsInSpace(heap(), - &marking_deque_, - heap()->code_space()); + DiscoverGreyObjectsInSpace(heap(), &marking_deque_, heap()->code_space()); if (marking_deque_.IsFull()) return; - DiscoverGreyObjectsInSpace(heap(), - &marking_deque_, - heap()->map_space()); + DiscoverGreyObjectsInSpace(heap(), &marking_deque_, heap()->map_space()); if (marking_deque_.IsFull()) return; - DiscoverGreyObjectsInSpace(heap(), - &marking_deque_, - heap()->cell_space()); + DiscoverGreyObjectsInSpace(heap(), &marking_deque_, heap()->cell_space()); if (marking_deque_.IsFull()) return; - DiscoverGreyObjectsInSpace(heap(), - &marking_deque_, + DiscoverGreyObjectsInSpace(heap(), &marking_deque_, heap()->property_cell_space()); if (marking_deque_.IsFull()) return; LargeObjectIterator lo_it(heap()->lo_space()); - DiscoverGreyObjectsWithIterator(heap(), - &marking_deque_, - &lo_it); + DiscoverGreyObjectsWithIterator(heap(), &marking_deque_, &lo_it); if (marking_deque_.IsFull()) return; marking_deque_.ClearOverflowed(); @@ -2300,7 +2205,7 @@ void MarkCompactCollector::ProcessMarkingDeque() { // stack including references only considered in the atomic marking pause. void MarkCompactCollector::ProcessEphemeralMarking(ObjectVisitor* visitor) { bool work_to_do = true; - ASSERT(marking_deque_.IsEmpty()); + DCHECK(marking_deque_.IsEmpty()); while (work_to_do) { isolate()->global_handles()->IterateObjectGroups( visitor, &IsUnmarkedHeapObjectWithHeap); @@ -2331,7 +2236,11 @@ void MarkCompactCollector::ProcessTopOptimizedFrame(ObjectVisitor* visitor) { void MarkCompactCollector::MarkLiveObjects() { - GCTracer::Scope gc_scope(tracer_, GCTracer::Scope::MC_MARK); + GCTracer::Scope gc_scope(heap()->tracer(), GCTracer::Scope::MC_MARK); + double start_time = 0.0; + if (FLAG_print_cumulative_gc_stat) { + start_time = base::OS::TimeCurrentMillis(); + } // The recursive GC marker detects when it is nearing stack overflow, // and switches to a different marking system. JS interrupts interfere // with the C stack limit check. @@ -2356,7 +2265,7 @@ void MarkCompactCollector::MarkLiveObjects() { } #ifdef DEBUG - ASSERT(state_ == PREPARE_GC); + DCHECK(state_ == PREPARE_GC); state_ = MARK_LIVE_OBJECTS; #endif // The to space contains live objects, a page in from space is used as a @@ -2366,9 +2275,8 @@ void MarkCompactCollector::MarkLiveObjects() { if (FLAG_force_marking_deque_overflows) { marking_deque_end = marking_deque_start + 64 * kPointerSize; } - marking_deque_.Initialize(marking_deque_start, - marking_deque_end); - ASSERT(!marking_deque_.overflowed()); + marking_deque_.Initialize(marking_deque_start, marking_deque_end); + DCHECK(!marking_deque_.overflowed()); if (incremental_marking_overflowed) { // There are overflowed objects left in the heap after incremental marking. 
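RefillMarkingDeque above is the recovery path for marking-deque overflow: a push to a full deque merely sets the overflowed flag, and the refill pass rescans each space for still-grey objects, bailing out as soon as the deque is full again and clearing the flag only once every space has been drained. A toy of that protocol with a small fixed capacity:

```cpp
// Sketch of the overflow/refill protocol around a bounded marking deque.
#include <cstddef>
#include <cstdio>
#include <deque>
#include <vector>

const std::size_t kCapacity = 4;
std::deque<int> marking_deque;
bool overflowed = false;

void Push(int obj) {
  if (marking_deque.size() == kCapacity) {
    overflowed = true;  // drop the object, remember that we did
    return;
  }
  marking_deque.push_back(obj);
}

int main() {
  std::vector<int> grey_objects = {1, 2, 3, 4, 5, 6};
  for (int obj : grey_objects) Push(obj);  // 5 and 6 overflow
  int processed = 0;
  do {
    while (!marking_deque.empty()) {  // EmptyMarkingDeque
      marking_deque.pop_front();
      ++processed;
    }
    if (overflowed) {  // RefillMarkingDeque: rescan finds the dropped ones
      overflowed = false;
      Push(5);
      Push(6);
    }
  } while (!marking_deque.empty());
  std::printf("processed %d objects\n", processed);  // 6
}
```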
@@ -2384,12 +2292,11 @@ void MarkCompactCollector::MarkLiveObjects() { HeapObjectIterator cell_iterator(heap()->cell_space()); HeapObject* cell; while ((cell = cell_iterator.Next()) != NULL) { - ASSERT(cell->IsCell()); + DCHECK(cell->IsCell()); if (IsMarked(cell)) { int offset = Cell::kValueOffset; MarkCompactMarkingVisitor::VisitPointer( - heap(), - reinterpret_cast<Object**>(cell->address() + offset)); + heap(), reinterpret_cast<Object**>(cell->address() + offset)); } } } @@ -2398,7 +2305,7 @@ void MarkCompactCollector::MarkLiveObjects() { heap()->property_cell_space()); HeapObject* cell; while ((cell = js_global_property_cell_iterator.Next()) != NULL) { - ASSERT(cell->IsPropertyCell()); + DCHECK(cell->IsPropertyCell()); if (IsMarked(cell)) { MarkCompactMarkingVisitor::VisitPropertyCell(cell->map(), cell); } @@ -2436,6 +2343,10 @@ void MarkCompactCollector::MarkLiveObjects() { ProcessEphemeralMarking(&root_visitor); AfterMarking(); + + if (FLAG_print_cumulative_gc_stat) { + heap_->tracer()->AddMarkingTime(base::OS::TimeCurrentMillis() - start_time); + } } @@ -2483,7 +2394,7 @@ void MarkCompactCollector::AfterMarking() { void MarkCompactCollector::ProcessMapCaches() { - Object* raw_context = heap()->native_contexts_list_; + Object* raw_context = heap()->native_contexts_list(); while (raw_context != heap()->undefined_value()) { Context* context = reinterpret_cast<Context*>(raw_context); if (IsMarked(context)) { @@ -2497,19 +2408,19 @@ void MarkCompactCollector::ProcessMapCaches() { MapCache* map_cache = reinterpret_cast<MapCache*>(raw_map_cache); int existing_elements = map_cache->NumberOfElements(); int used_elements = 0; - for (int i = MapCache::kElementsStartIndex; - i < map_cache->length(); + for (int i = MapCache::kElementsStartIndex; i < map_cache->length(); i += MapCache::kEntrySize) { Object* raw_key = map_cache->get(i); if (raw_key == heap()->undefined_value() || - raw_key == heap()->the_hole_value()) continue; + raw_key == heap()->the_hole_value()) + continue; STATIC_ASSERT(MapCache::kEntrySize == 2); Object* raw_map = map_cache->get(i + 1); if (raw_map->IsHeapObject() && IsMarked(raw_map)) { ++used_elements; } else { // Delete useless entries with unmarked maps. - ASSERT(raw_map->IsMap()); + DCHECK(raw_map->IsMap()); map_cache->set_the_hole(i); map_cache->set_the_hole(i + 1); } @@ -2533,43 +2444,18 @@ void MarkCompactCollector::ProcessMapCaches() { } -void MarkCompactCollector::ReattachInitialMaps() { - HeapObjectIterator map_iterator(heap()->map_space()); - for (HeapObject* obj = map_iterator.Next(); - obj != NULL; - obj = map_iterator.Next()) { - Map* map = Map::cast(obj); - - STATIC_ASSERT(LAST_TYPE == LAST_JS_RECEIVER_TYPE); - if (map->instance_type() < FIRST_JS_RECEIVER_TYPE) continue; - - if (map->attached_to_shared_function_info()) { - JSFunction::cast(map->constructor())->shared()->AttachInitialMap(map); - } - } -} - - void MarkCompactCollector::ClearNonLiveReferences() { // Iterate over the map space, setting map transitions that go from // a marked map to an unmarked map to null transitions. This action // is carried out only on maps of JSObjects and related subtypes. 
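ProcessMapCaches above walks each native context's MapCache in (key, map) pairs, counts the entries whose cached map survived marking, and replaces dead entries with holes so the cache remains a valid hash table. The same sweep over a flat array, with a fake liveness predicate standing in for the mark bit:

```cpp
// Sketch of the map-cache sweep: keep entries whose map is marked, hole
// out the rest, and report how many live entries remain.
#include <cstdio>
#include <vector>

const int kTheHole = -1;

struct Entry {
  int key;
  int map;
};

bool IsMarked(int map) { return map % 2 == 0; }  // fake liveness predicate

int main() {
  std::vector<Entry> map_cache = {{10, 2}, {11, 3}, {12, 4}};
  int used_elements = 0;
  for (Entry& e : map_cache) {
    if (e.key == kTheHole) continue;
    if (IsMarked(e.map)) {
      ++used_elements;
    } else {
      e.key = kTheHole;  // delete useless entries with unmarked maps
      e.map = kTheHole;
    }
  }
  std::printf("%d live map-cache entries\n", used_elements);  // 2
}
```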
HeapObjectIterator map_iterator(heap()->map_space()); - for (HeapObject* obj = map_iterator.Next(); - obj != NULL; + for (HeapObject* obj = map_iterator.Next(); obj != NULL; obj = map_iterator.Next()) { Map* map = Map::cast(obj); if (!map->CanTransition()) continue; MarkBit map_mark = Marking::MarkBitFrom(map); - if (map_mark.Get() && map->attached_to_shared_function_info()) { - // This map is used for inobject slack tracking and has been detached - // from SharedFunctionInfo during the mark phase. - // Since it survived the GC, reattach it now. - JSFunction::cast(map->constructor())->shared()->AttachInitialMap(map); - } - ClearNonLivePrototypeTransitions(map); ClearNonLiveMapTransitions(map, map_mark); @@ -2584,8 +2470,7 @@ void MarkCompactCollector::ClearNonLiveReferences() { // Iterate over property cell space, removing dependent code that is not // otherwise kept alive by strong references. HeapObjectIterator cell_iterator(heap_->property_cell_space()); - for (HeapObject* cell = cell_iterator.Next(); - cell != NULL; + for (HeapObject* cell = cell_iterator.Next(); cell != NULL; cell = cell_iterator.Next()) { if (IsMarked(cell)) { ClearNonLiveDependentCode(PropertyCell::cast(cell)->dependent_code()); @@ -2595,8 +2480,7 @@ void MarkCompactCollector::ClearNonLiveReferences() { // Iterate over allocation sites, removing dependent code that is not // otherwise kept alive by strong references. Object* undefined = heap()->undefined_value(); - for (Object* site = heap()->allocation_sites_list(); - site != undefined; + for (Object* site = heap()->allocation_sites_list(); site != undefined; site = AllocationSite::cast(site)->weak_next()) { if (IsMarked(site)) { ClearNonLiveDependentCode(AllocationSite::cast(site)->dependent_code()); @@ -2654,18 +2538,13 @@ void MarkCompactCollector::ClearNonLivePrototypeTransitions(Map* map) { Object* prototype = prototype_transitions->get(proto_offset + i * step); Object* cached_map = prototype_transitions->get(map_offset + i * step); if (IsMarked(prototype) && IsMarked(cached_map)) { - ASSERT(!prototype->IsUndefined()); + DCHECK(!prototype->IsUndefined()); int proto_index = proto_offset + new_number_of_transitions * step; int map_index = map_offset + new_number_of_transitions * step; if (new_number_of_transitions != i) { - prototype_transitions->set( - proto_index, - prototype, - UPDATE_WRITE_BARRIER); - prototype_transitions->set( - map_index, - cached_map, - SKIP_WRITE_BARRIER); + prototype_transitions->set(proto_index, prototype, + UPDATE_WRITE_BARRIER); + prototype_transitions->set(map_index, cached_map, SKIP_WRITE_BARRIER); } Object** slot = prototype_transitions->RawFieldOfElementAt(proto_index); RecordSlot(slot, slot, prototype); @@ -2679,8 +2558,7 @@ void MarkCompactCollector::ClearNonLivePrototypeTransitions(Map* map) { // Fill slots that became free with undefined value. for (int i = new_number_of_transitions * step; - i < number_of_transitions * step; - i++) { + i < number_of_transitions * step; i++) { prototype_transitions->set_undefined(header + i); } } @@ -2697,8 +2575,118 @@ void MarkCompactCollector::ClearNonLiveMapTransitions(Map* map, bool current_is_alive = map_mark.Get(); bool parent_is_alive = Marking::MarkBitFrom(parent).Get(); if (!current_is_alive && parent_is_alive) { - parent->ClearNonLiveTransitions(heap()); + ClearMapTransitions(parent); + } +} + + +// Clear a possible back pointer in case the transition leads to a dead map. +// Return true in case a back pointer has been cleared and false otherwise. 
+bool MarkCompactCollector::ClearMapBackPointer(Map* target) { + if (Marking::MarkBitFrom(target).Get()) return false; + target->SetBackPointer(heap_->undefined_value(), SKIP_WRITE_BARRIER); + return true; +} + + +void MarkCompactCollector::ClearMapTransitions(Map* map) { + // If there are no transitions to be cleared, return. + // TODO(verwaest) Should be an assert, otherwise back pointers are not + // properly cleared. + if (!map->HasTransitionArray()) return; + + TransitionArray* t = map->transitions(); + + int transition_index = 0; + + DescriptorArray* descriptors = map->instance_descriptors(); + bool descriptors_owner_died = false; + + // Compact all live descriptors to the left. + for (int i = 0; i < t->number_of_transitions(); ++i) { + Map* target = t->GetTarget(i); + if (ClearMapBackPointer(target)) { + if (target->instance_descriptors() == descriptors) { + descriptors_owner_died = true; + } + } else { + if (i != transition_index) { + Name* key = t->GetKey(i); + t->SetKey(transition_index, key); + Object** key_slot = t->GetKeySlot(transition_index); + RecordSlot(key_slot, key_slot, key); + // Target slots do not need to be recorded since maps are not compacted. + t->SetTarget(transition_index, t->GetTarget(i)); + } + transition_index++; + } + } + + // If there are no transitions to be cleared, return. + // TODO(verwaest) Should be an assert, otherwise back pointers are not + // properly cleared. + if (transition_index == t->number_of_transitions()) return; + + int number_of_own_descriptors = map->NumberOfOwnDescriptors(); + + if (descriptors_owner_died) { + if (number_of_own_descriptors > 0) { + TrimDescriptorArray(map, descriptors, number_of_own_descriptors); + DCHECK(descriptors->number_of_descriptors() == number_of_own_descriptors); + map->set_owns_descriptors(true); + } else { + DCHECK(descriptors == heap_->empty_descriptor_array()); + } + } + + // Note that we never eliminate a transition array, though we might right-trim + // such that number_of_transitions() == 0. If this assumption changes, + // TransitionArray::CopyInsert() will need to deal with the case that a + // transition array disappeared during GC. + int trim = t->number_of_transitions() - transition_index; + if (trim > 0) { + heap_->RightTrimFixedArray<Heap::FROM_GC>( + t, t->IsSimpleTransition() ? 
trim + : trim * TransitionArray::kTransitionSize); } + DCHECK(map->HasTransitionArray()); +} + + +void MarkCompactCollector::TrimDescriptorArray(Map* map, + DescriptorArray* descriptors, + int number_of_own_descriptors) { + int number_of_descriptors = descriptors->number_of_descriptors_storage(); + int to_trim = number_of_descriptors - number_of_own_descriptors; + if (to_trim == 0) return; + + heap_->RightTrimFixedArray<Heap::FROM_GC>( + descriptors, to_trim * DescriptorArray::kDescriptorSize); + descriptors->SetNumberOfDescriptors(number_of_own_descriptors); + + if (descriptors->HasEnumCache()) TrimEnumCache(map, descriptors); + descriptors->Sort(); +} + + +void MarkCompactCollector::TrimEnumCache(Map* map, + DescriptorArray* descriptors) { + int live_enum = map->EnumLength(); + if (live_enum == kInvalidEnumCacheSentinel) { + live_enum = map->NumberOfDescribedProperties(OWN_DESCRIPTORS, DONT_ENUM); + } + if (live_enum == 0) return descriptors->ClearEnumCache(); + + FixedArray* enum_cache = descriptors->GetEnumCache(); + + int to_trim = enum_cache->length() - live_enum; + if (to_trim <= 0) return; + heap_->RightTrimFixedArray<Heap::FROM_GC>(descriptors->GetEnumCache(), + to_trim); + + if (!descriptors->HasEnumIndicesCache()) return; + FixedArray* enum_indices_cache = descriptors->GetEnumIndicesCache(); + heap_->RightTrimFixedArray<Heap::FROM_GC>(enum_indices_cache, to_trim); } @@ -2708,7 +2696,7 @@ void MarkCompactCollector::ClearDependentICList(Object* head) { while (current != undefined) { Code* code = Code::cast(current); if (IsMarked(code)) { - ASSERT(code->is_weak_stub()); + DCHECK(code->is_weak_stub()); IC::InvalidateMaps(code); } current = code->next_code_link(); @@ -2717,8 +2705,7 @@ void MarkCompactCollector::ClearDependentICList(Object* head) { } -void MarkCompactCollector::ClearDependentCode( - DependentCode* entries) { +void MarkCompactCollector::ClearDependentCode(DependentCode* entries) { DisallowHeapAllocation no_allocation; DependentCode::GroupStartIndexes starts(entries); int number_of_entries = starts.number_of_entries(); @@ -2726,7 +2713,7 @@ void MarkCompactCollector::ClearDependentCode( int g = DependentCode::kWeakICGroup; if (starts.at(g) != starts.at(g + 1)) { int i = starts.at(g); - ASSERT(i + 1 == starts.at(g + 1)); + DCHECK(i + 1 == starts.at(g + 1)); Object* head = entries->object_at(i); ClearDependentICList(head); } @@ -2734,7 +2721,7 @@ void MarkCompactCollector::ClearDependentCode( for (int i = starts.at(g); i < starts.at(g + 1); i++) { // If the entry is compilation info then the map must be alive, // and ClearDependentCode shouldn't be called. - ASSERT(entries->is_code_at(i)); + DCHECK(entries->is_code_at(i)); Code* code = entries->code_at(i); if (IsMarked(code) && !code->marked_for_deoptimization()) { code->set_marked_for_deoptimization(true); @@ -2755,10 +2742,10 @@ int MarkCompactCollector::ClearNonLiveDependentCodeInGroup( // Dependent weak IC stubs form a linked list and only the head is stored // in the dependent code array. 
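The ClearMapTransitions()/TrimDescriptorArray() pair above shares one idea: keep live entries by compacting them to the left, then hand the dead tail back to the heap with a right-trim instead of reallocating. A toy model of that compact-and-trim loop, using std::vector in place of TransitionArray (all names below are invented for illustration):

```cpp
#include <cstdio>
#include <string>
#include <vector>

// Transition and target_is_live model a transition entry and
// Marking::MarkBitFrom(target).Get(); they are not V8 types.
struct Transition {
  std::string key;
  bool target_is_live;
};

static void CompactAndTrim(std::vector<Transition>* transitions) {
  size_t write_index = 0;  // plays the role of transition_index
  for (size_t i = 0; i < transitions->size(); i++) {
    if (!(*transitions)[i].target_is_live) continue;  // drop dead targets
    if (i != write_index) {
      (*transitions)[write_index] = (*transitions)[i];  // shift left
    }
    write_index++;
  }
  // Right-trim the tail in place, mirroring Heap::RightTrimFixedArray
  // in spirit: the array shrinks, it is never reallocated or eliminated.
  transitions->resize(write_index);
}

int main() {
  std::vector<Transition> t = {{"a", true}, {"b", false}, {"c", true}};
  CompactAndTrim(&t);
  for (const Transition& tr : t) std::printf("%s\n", tr.key.c_str());
  // prints: a, c
}
```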
if (start != end) { - ASSERT(start + 1 == end); + DCHECK(start + 1 == end); Object* old_head = entries->object_at(start); MarkCompactWeakObjectRetainer retainer; - Object* head = VisitWeakList<Code>(heap(), old_head, &retainer, true); + Object* head = VisitWeakList<Code>(heap(), old_head, &retainer); entries->set_object_at(new_start, head); Object** slot = entries->slot_at(new_start); RecordSlot(slot, slot, head); @@ -2769,7 +2756,7 @@ int MarkCompactCollector::ClearNonLiveDependentCodeInGroup( } else { for (int i = start; i < end; i++) { Object* obj = entries->object_at(i); - ASSERT(obj->IsCode() || IsMarked(obj)); + DCHECK(obj->IsCode() || IsMarked(obj)); if (IsMarked(obj) && (!obj->IsCode() || !WillBeDeoptimized(Code::cast(obj)))) { if (new_start + survived != i) { @@ -2806,24 +2793,26 @@ void MarkCompactCollector::ClearNonLiveDependentCode(DependentCode* entries) { void MarkCompactCollector::ProcessWeakCollections() { - GCTracer::Scope gc_scope(tracer_, GCTracer::Scope::MC_WEAKCOLLECTION_PROCESS); - Object* weak_collection_obj = encountered_weak_collections(); + GCTracer::Scope gc_scope(heap()->tracer(), + GCTracer::Scope::MC_WEAKCOLLECTION_PROCESS); + Object* weak_collection_obj = heap()->encountered_weak_collections(); while (weak_collection_obj != Smi::FromInt(0)) { - ASSERT(MarkCompactCollector::IsMarked( - HeapObject::cast(weak_collection_obj))); JSWeakCollection* weak_collection = reinterpret_cast<JSWeakCollection*>(weak_collection_obj); - ObjectHashTable* table = ObjectHashTable::cast(weak_collection->table()); - Object** anchor = reinterpret_cast<Object**>(table->address()); - for (int i = 0; i < table->Capacity(); i++) { - if (MarkCompactCollector::IsMarked(HeapObject::cast(table->KeyAt(i)))) { - Object** key_slot = - table->RawFieldOfElementAt(ObjectHashTable::EntryToIndex(i)); - RecordSlot(anchor, key_slot, *key_slot); - Object** value_slot = - table->RawFieldOfElementAt(ObjectHashTable::EntryToValueIndex(i)); - MarkCompactMarkingVisitor::MarkObjectByPointer( - this, anchor, value_slot); + DCHECK(MarkCompactCollector::IsMarked(weak_collection)); + if (weak_collection->table()->IsHashTable()) { + ObjectHashTable* table = ObjectHashTable::cast(weak_collection->table()); + Object** anchor = reinterpret_cast<Object**>(table->address()); + for (int i = 0; i < table->Capacity(); i++) { + if (MarkCompactCollector::IsMarked(HeapObject::cast(table->KeyAt(i)))) { + Object** key_slot = + table->RawFieldOfElementAt(ObjectHashTable::EntryToIndex(i)); + RecordSlot(anchor, key_slot, *key_slot); + Object** value_slot = + table->RawFieldOfElementAt(ObjectHashTable::EntryToValueIndex(i)); + MarkCompactMarkingVisitor::MarkObjectByPointer(this, anchor, + value_slot); + } } } weak_collection_obj = weak_collection->next(); @@ -2832,23 +2821,51 @@ void MarkCompactCollector::ProcessWeakCollections() { void MarkCompactCollector::ClearWeakCollections() { - GCTracer::Scope gc_scope(tracer_, GCTracer::Scope::MC_WEAKCOLLECTION_CLEAR); - Object* weak_collection_obj = encountered_weak_collections(); + GCTracer::Scope gc_scope(heap()->tracer(), + GCTracer::Scope::MC_WEAKCOLLECTION_CLEAR); + Object* weak_collection_obj = heap()->encountered_weak_collections(); while (weak_collection_obj != Smi::FromInt(0)) { - ASSERT(MarkCompactCollector::IsMarked( - HeapObject::cast(weak_collection_obj))); JSWeakCollection* weak_collection = reinterpret_cast<JSWeakCollection*>(weak_collection_obj); - ObjectHashTable* table = ObjectHashTable::cast(weak_collection->table()); - for (int i = 0; i < table->Capacity(); i++) { - 
if (!MarkCompactCollector::IsMarked(HeapObject::cast(table->KeyAt(i)))) { - table->RemoveEntry(i); + DCHECK(MarkCompactCollector::IsMarked(weak_collection)); + if (weak_collection->table()->IsHashTable()) { + ObjectHashTable* table = ObjectHashTable::cast(weak_collection->table()); + for (int i = 0; i < table->Capacity(); i++) { + HeapObject* key = HeapObject::cast(table->KeyAt(i)); + if (!MarkCompactCollector::IsMarked(key)) { + table->RemoveEntry(i); + } } } weak_collection_obj = weak_collection->next(); - weak_collection->set_next(Smi::FromInt(0)); + weak_collection->set_next(heap()->undefined_value()); + } + heap()->set_encountered_weak_collections(Smi::FromInt(0)); +} + + +void MarkCompactCollector::AbortWeakCollections() { + GCTracer::Scope gc_scope(heap()->tracer(), + GCTracer::Scope::MC_WEAKCOLLECTION_ABORT); + Object* weak_collection_obj = heap()->encountered_weak_collections(); + while (weak_collection_obj != Smi::FromInt(0)) { + JSWeakCollection* weak_collection = + reinterpret_cast<JSWeakCollection*>(weak_collection_obj); + weak_collection_obj = weak_collection->next(); + weak_collection->set_next(heap()->undefined_value()); + } + heap()->set_encountered_weak_collections(Smi::FromInt(0)); +} + + +void MarkCompactCollector::RecordMigratedSlot(Object* value, Address slot) { + if (heap_->InNewSpace(value)) { + heap_->store_buffer()->Mark(slot); + } else if (value->IsHeapObject() && IsOnEvacuationCandidate(value)) { + SlotsBuffer::AddTo(&slots_buffer_allocator_, &migration_slots_buffer_, + reinterpret_cast<Object**>(slot), + SlotsBuffer::IGNORE_OVERFLOW); } - set_encountered_weak_collections(Smi::FromInt(0)); } @@ -2866,35 +2883,27 @@ void MarkCompactCollector::ClearWeakCollections() { // pointer iteration. This is an issue if the store buffer overflows and we // have to scan the entire old space, including dead objects, looking for // pointers to new space. -void MarkCompactCollector::MigrateObject(HeapObject* dst, - HeapObject* src, - int size, - AllocationSpace dest) { +void MarkCompactCollector::MigrateObject(HeapObject* dst, HeapObject* src, + int size, AllocationSpace dest) { Address dst_addr = dst->address(); Address src_addr = src->address(); - HeapProfiler* heap_profiler = heap()->isolate()->heap_profiler(); - if (heap_profiler->is_tracking_object_moves()) { - heap_profiler->ObjectMoveEvent(src_addr, dst_addr, size); - } - ASSERT(heap()->AllowedToBeMigrated(src, dest)); - ASSERT(dest != LO_SPACE && size <= Page::kMaxRegularHeapObjectSize); + DCHECK(heap()->AllowedToBeMigrated(src, dest)); + DCHECK(dest != LO_SPACE && size <= Page::kMaxRegularHeapObjectSize); if (dest == OLD_POINTER_SPACE) { Address src_slot = src_addr; Address dst_slot = dst_addr; - ASSERT(IsAligned(size, kPointerSize)); + DCHECK(IsAligned(size, kPointerSize)); for (int remaining = size / kPointerSize; remaining > 0; remaining--) { Object* value = Memory::Object_at(src_slot); Memory::Object_at(dst_slot) = value; - if (heap_->InNewSpace(value)) { - heap_->store_buffer()->Mark(dst_slot); - } else if (value->IsHeapObject() && IsOnEvacuationCandidate(value)) { - SlotsBuffer::AddTo(&slots_buffer_allocator_, - &migration_slots_buffer_, - reinterpret_cast<Object**>(dst_slot), - SlotsBuffer::IGNORE_OVERFLOW); + // We special-case ConstantPoolArrays below since they could contain + // integer value entries which look like tagged pointers. + // TODO(mstarzinger): restructure this code to avoid this special-casing.
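The comment above is the crux of the RecordMigratedSlot() refactoring: a ConstantPoolArray stores raw integers next to real heap pointers, and a raw integer can be bit-identical to a tagged pointer, so the generic word-by-word slot recording must not run over it. A small model of why only explicitly typed pointer slots are safe to record (SlotKind and PoolSlot are illustrative, not V8's layout):

```cpp
#include <cstdint>
#include <cstdio>
#include <vector>

enum class SlotKind { kRawInt32, kHeapPtr };

struct PoolSlot {
  SlotKind kind;
  uintptr_t bits;  // payload; only meaningful as a pointer for kHeapPtr
};

// Walk the pool by declared slot type instead of scanning raw words,
// analogous to iterating ConstantPoolArray::HEAP_PTR entries only.
static void RecordMigratedPointerSlots(const std::vector<PoolSlot>& pool) {
  for (const PoolSlot& slot : pool) {
    if (slot.kind != SlotKind::kHeapPtr) continue;  // never trust raw ints
    std::printf("record slot with pointer bits %p\n",
                reinterpret_cast<void*>(slot.bits));
  }
}

int main() {
  int dummy = 0;
  std::vector<PoolSlot> pool = {
      {SlotKind::kRawInt32, 0xDEADBEEF},  // looks like a pointer, is not
      {SlotKind::kHeapPtr, reinterpret_cast<uintptr_t>(&dummy)},
  };
  RecordMigratedPointerSlots(pool);  // records only the second slot
}
```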
+ if (!src->IsConstantPoolArray()) { + RecordMigratedSlot(value, dst_slot); } src_slot += kPointerSize; @@ -2906,61 +2915,62 @@ void MarkCompactCollector::MigrateObject(HeapObject* dst, Address code_entry = Memory::Address_at(code_entry_slot); if (Page::FromAddress(code_entry)->IsEvacuationCandidate()) { - SlotsBuffer::AddTo(&slots_buffer_allocator_, - &migration_slots_buffer_, - SlotsBuffer::CODE_ENTRY_SLOT, - code_entry_slot, + SlotsBuffer::AddTo(&slots_buffer_allocator_, &migration_slots_buffer_, + SlotsBuffer::CODE_ENTRY_SLOT, code_entry_slot, SlotsBuffer::IGNORE_OVERFLOW); } - } else if (compacting_ && dst->IsConstantPoolArray()) { - ConstantPoolArray* constant_pool = ConstantPoolArray::cast(dst); - for (int i = 0; i < constant_pool->count_of_code_ptr_entries(); i++) { + } else if (dst->IsConstantPoolArray()) { + ConstantPoolArray* array = ConstantPoolArray::cast(dst); + ConstantPoolArray::Iterator code_iter(array, ConstantPoolArray::CODE_PTR); + while (!code_iter.is_finished()) { Address code_entry_slot = - dst_addr + constant_pool->OffsetOfElementAt(i); + dst_addr + array->OffsetOfElementAt(code_iter.next_index()); Address code_entry = Memory::Address_at(code_entry_slot); if (Page::FromAddress(code_entry)->IsEvacuationCandidate()) { - SlotsBuffer::AddTo(&slots_buffer_allocator_, - &migration_slots_buffer_, - SlotsBuffer::CODE_ENTRY_SLOT, - code_entry_slot, + SlotsBuffer::AddTo(&slots_buffer_allocator_, &migration_slots_buffer_, + SlotsBuffer::CODE_ENTRY_SLOT, code_entry_slot, SlotsBuffer::IGNORE_OVERFLOW); } } + ConstantPoolArray::Iterator heap_iter(array, ConstantPoolArray::HEAP_PTR); + while (!heap_iter.is_finished()) { + Address heap_slot = + dst_addr + array->OffsetOfElementAt(heap_iter.next_index()); + Object* value = Memory::Object_at(heap_slot); + RecordMigratedSlot(value, heap_slot); + } } } else if (dest == CODE_SPACE) { PROFILE(isolate(), CodeMoveEvent(src_addr, dst_addr)); heap()->MoveBlock(dst_addr, src_addr, size); - SlotsBuffer::AddTo(&slots_buffer_allocator_, - &migration_slots_buffer_, - SlotsBuffer::RELOCATED_CODE_OBJECT, - dst_addr, + SlotsBuffer::AddTo(&slots_buffer_allocator_, &migration_slots_buffer_, + SlotsBuffer::RELOCATED_CODE_OBJECT, dst_addr, SlotsBuffer::IGNORE_OVERFLOW); Code::cast(dst)->Relocate(dst_addr - src_addr); } else { - ASSERT(dest == OLD_DATA_SPACE || dest == NEW_SPACE); + DCHECK(dest == OLD_DATA_SPACE || dest == NEW_SPACE); heap()->MoveBlock(dst_addr, src_addr, size); } + heap()->OnMoveEvent(dst, src, size); Memory::Address_at(src_addr) = dst_addr; } // Visitor for updating pointers from live objects in old spaces to new space. // It does not expect to encounter pointers to dead objects. 
-class PointersUpdatingVisitor: public ObjectVisitor { +class PointersUpdatingVisitor : public ObjectVisitor { public: - explicit PointersUpdatingVisitor(Heap* heap) : heap_(heap) { } + explicit PointersUpdatingVisitor(Heap* heap) : heap_(heap) {} - void VisitPointer(Object** p) { - UpdatePointer(p); - } + void VisitPointer(Object** p) { UpdatePointer(p); } void VisitPointers(Object** start, Object** end) { for (Object** p = start; p < end; p++) UpdatePointer(p); } void VisitEmbeddedPointer(RelocInfo* rinfo) { - ASSERT(rinfo->rmode() == RelocInfo::EMBEDDED_OBJECT); + DCHECK(rinfo->rmode() == RelocInfo::EMBEDDED_OBJECT); Object* target = rinfo->target_object(); Object* old_target = target; VisitPointer(&target); @@ -2972,7 +2982,7 @@ class PointersUpdatingVisitor: public ObjectVisitor { } void VisitCodeTarget(RelocInfo* rinfo) { - ASSERT(RelocInfo::IsCodeTarget(rinfo->rmode())); + DCHECK(RelocInfo::IsCodeTarget(rinfo->rmode())); Object* target = Code::GetCodeFromTargetAddress(rinfo->target_address()); Object* old_target = target; VisitPointer(&target); @@ -2982,9 +2992,9 @@ class PointersUpdatingVisitor: public ObjectVisitor { } void VisitCodeAgeSequence(RelocInfo* rinfo) { - ASSERT(RelocInfo::IsCodeAgeSequence(rinfo->rmode())); + DCHECK(RelocInfo::IsCodeAgeSequence(rinfo->rmode())); Object* stub = rinfo->code_age_stub(); - ASSERT(stub != NULL); + DCHECK(stub != NULL); VisitPointer(&stub); if (stub != rinfo->code_age_stub()) { rinfo->set_code_age_stub(Code::cast(stub)); @@ -2992,7 +3002,7 @@ class PointersUpdatingVisitor: public ObjectVisitor { } void VisitDebugTarget(RelocInfo* rinfo) { - ASSERT((RelocInfo::IsJSReturn(rinfo->rmode()) && + DCHECK((RelocInfo::IsJSReturn(rinfo->rmode()) && rinfo->IsPatchedReturnSequence()) || (RelocInfo::IsDebugBreakSlot(rinfo->rmode()) && rinfo->IsPatchedDebugBreakSlotSequence())); @@ -3010,19 +3020,17 @@ class PointersUpdatingVisitor: public ObjectVisitor { MapWord map_word = heap_obj->map_word(); if (map_word.IsForwardingAddress()) { - ASSERT(heap->InFromSpace(heap_obj) || + DCHECK(heap->InFromSpace(heap_obj) || MarkCompactCollector::IsOnEvacuationCandidate(heap_obj)); HeapObject* target = map_word.ToForwardingAddress(); *slot = target; - ASSERT(!heap->InFromSpace(target) && + DCHECK(!heap->InFromSpace(target) && !MarkCompactCollector::IsOnEvacuationCandidate(target)); } } private: - inline void UpdatePointer(Object** p) { - UpdateSlot(heap_, p); - } + inline void UpdatePointer(Object** p) { UpdateSlot(heap_, p); } Heap* heap_; }; @@ -3038,20 +3046,10 @@ static void UpdatePointer(HeapObject** address, HeapObject* object) { // compare and swap may fail in the case where the pointer update tries to // update garbage memory which was concurrently accessed by the sweeper. if (new_addr != NULL) { - NoBarrier_CompareAndSwap( - reinterpret_cast<AtomicWord*>(address), - reinterpret_cast<AtomicWord>(object), - reinterpret_cast<AtomicWord>(HeapObject::FromAddress(new_addr))); - } else { - // We have to zap this pointer, because the store buffer may overflow later, - // and then we have to scan the entire heap and we don't want to find - // spurious newspace pointers in the old space. - // TODO(mstarzinger): This was changed to a sentinel value to track down - // rare crashes, change it back to Smi::FromInt(0) later. 
- NoBarrier_CompareAndSwap( - reinterpret_cast<AtomicWord*>(address), - reinterpret_cast<AtomicWord>(object), - reinterpret_cast<AtomicWord>(Smi::FromInt(0x0f100d00 >> 1))); + base::NoBarrier_CompareAndSwap( + reinterpret_cast<base::AtomicWord*>(address), + reinterpret_cast<base::AtomicWord>(object), + reinterpret_cast<base::AtomicWord>(HeapObject::FromAddress(new_addr))); } } @@ -3070,21 +3068,17 @@ static String* UpdateReferenceInExternalStringTableEntry(Heap* heap, bool MarkCompactCollector::TryPromoteObject(HeapObject* object, int object_size) { - ASSERT(object_size <= Page::kMaxRegularHeapObjectSize); + DCHECK(object_size <= Page::kMaxRegularHeapObjectSize); OldSpace* target_space = heap()->TargetSpace(object); - ASSERT(target_space == heap()->old_pointer_space() || + DCHECK(target_space == heap()->old_pointer_space() || target_space == heap()->old_data_space()); HeapObject* target; AllocationResult allocation = target_space->AllocateRaw(object_size); if (allocation.To(&target)) { - MigrateObject(target, - object, - object_size, - target_space->identity()); - heap()->mark_compact_collector()->tracer()-> - increment_promoted_objects_size(object_size); + MigrateObject(target, object, object_size, target_space->identity()); + heap()->IncrementPromotedObjectsSize(object_size); return true; } @@ -3097,7 +3091,6 @@ void MarkCompactCollector::EvacuateNewSpace() { // sweep collection by failing allocations. But since we are already in // a mark-sweep allocation, there is no sense in trying to trigger one. AlwaysAllocateScope scope(isolate()); - heap()->CheckNewSpaceExpansionCriteria(); NewSpace* new_space = heap()->new_space(); @@ -3119,7 +3112,7 @@ void MarkCompactCollector::EvacuateNewSpace() { NewSpacePageIterator it(from_bottom, from_top); while (it.has_next()) { NewSpacePage* p = it.next(); - survivors_size += DiscoverAndPromoteBlackObjectsOnPage(new_space, p); + survivors_size += DiscoverAndEvacuateBlackObjectsOnPage(new_space, p); } heap_->IncrementYoungSurvivorsCounter(survivors_size); @@ -3130,7 +3123,7 @@ void MarkCompactCollector::EvacuateNewSpace() { void MarkCompactCollector::EvacuateLiveObjectsFromPage(Page* p) { AlwaysAllocateScope always_allocate(isolate()); PagedSpace* space = static_cast<PagedSpace*>(p->owner()); - ASSERT(p->IsEvacuationCandidate() && !p->WasSwept()); + DCHECK(p->IsEvacuationCandidate() && !p->WasSwept()); p->MarkSweptPrecisely(); int offsets[16]; @@ -3145,12 +3138,18 @@ void MarkCompactCollector::EvacuateLiveObjectsFromPage(Page* p) { for (int i = 0; i < live_objects; i++) { Address object_addr = cell_base + offsets[i] * kPointerSize; HeapObject* object = HeapObject::FromAddress(object_addr); - ASSERT(Marking::IsBlack(Marking::MarkBitFrom(object))); + DCHECK(Marking::IsBlack(Marking::MarkBitFrom(object))); int size = object->Size(); HeapObject* target_object; AllocationResult allocation = space->AllocateRaw(size); + if (!allocation.To(&target_object)) { + // If allocation failed, use emergency memory and re-try allocation. + CHECK(space->HasEmergencyMemory()); + space->UseEmergencyMemory(); + allocation = space->AllocateRaw(size); + } if (!allocation.To(&target_object)) { // OS refused to give us memory. V8::FatalProcessOutOfMemory("Evacuation"); @@ -3158,7 +3157,7 @@ void MarkCompactCollector::EvacuateLiveObjectsFromPage(Page* p) { } MigrateObject(target_object, object, size, space->identity()); - ASSERT(object->map_word().IsForwardingAddress()); + DCHECK(object->map_word().IsForwardingAddress()); } // Clear marking bits for current cell. 
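The UpdatePointer() hunk above also drops the zap-sentinel path and keeps only the compare-and-swap update, now spelled base::NoBarrier_CompareAndSwap. The sketch below models the same idea with std::atomic: swap in the forwarding address only if the slot still holds the old pointer, and silently accept failure, since a failed swap means a concurrent sweeper already rewrote the slot. Plain integers stand in for HeapObject* values.

```cpp
#include <atomic>
#include <cstdint>
#include <cstdio>

// compare_exchange_strong is the portable analogue of the collector's
// NoBarrier_CompareAndSwap; the result is deliberately ignored.
static void UpdateSlot(std::atomic<intptr_t>* slot, intptr_t old_value,
                       intptr_t forwarded) {
  intptr_t expected = old_value;
  slot->compare_exchange_strong(expected, forwarded,
                                std::memory_order_relaxed);
}

int main() {
  std::atomic<intptr_t> slot{42};
  UpdateSlot(&slot, 42, 99);  // succeeds: slot still held 42
  UpdateSlot(&slot, 42, 7);   // fails silently: slot now holds 99
  std::printf("%ld\n", static_cast<long>(slot.load()));  // prints 99
}
```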
@@ -3172,14 +3171,20 @@ void MarkCompactCollector::EvacuatePages() { int npages = evacuation_candidates_.length(); for (int i = 0; i < npages; i++) { Page* p = evacuation_candidates_[i]; - ASSERT(p->IsEvacuationCandidate() || + DCHECK(p->IsEvacuationCandidate() || p->IsFlagSet(Page::RESCAN_ON_EVACUATION)); - ASSERT(static_cast<int>(p->parallel_sweeping()) == - MemoryChunk::PARALLEL_SWEEPING_DONE); + DCHECK(static_cast<int>(p->parallel_sweeping()) == + MemoryChunk::SWEEPING_DONE); + PagedSpace* space = static_cast<PagedSpace*>(p->owner()); + // Allocate emergency memory for the case when compaction fails due to out + // of memory. + if (!space->HasEmergencyMemory()) { + space->CreateEmergencyMemory(); + } if (p->IsEvacuationCandidate()) { - // During compaction we might have to request a new page. - // Check that space still have room for that. - if (static_cast<PagedSpace*>(p->owner())->CanExpand()) { + // During compaction we might have to request a new page. Check that we + // have an emergency page and the space still has room for that. + if (space->HasEmergencyMemory() && space->CanExpand()) { EvacuateLiveObjectsFromPage(p); } else { // Without room for expansion evacuation is not guaranteed to succeed. @@ -3190,7 +3195,17 @@ void MarkCompactCollector::EvacuatePages() { page->ClearEvacuationCandidate(); page->SetFlag(Page::RESCAN_ON_EVACUATION); } - return; + break; + } + } + } + if (npages > 0) { + // Release emergency memory. + PagedSpaces spaces(heap()); + for (PagedSpace* space = spaces.next(); space != NULL; + space = spaces.next()) { + if (space->HasEmergencyMemory()) { + space->FreeEmergencyMemory(); } } } @@ -3212,10 +3227,8 @@ class EvacuationWeakObjectRetainer : public WeakObjectRetainer { }; -static inline void UpdateSlot(Isolate* isolate, - ObjectVisitor* v, - SlotsBuffer::SlotType slot_type, - Address addr) { +static inline void UpdateSlot(Isolate* isolate, ObjectVisitor* v, + SlotsBuffer::SlotType slot_type, Address addr) { switch (slot_type) { case SlotsBuffer::CODE_TARGET_SLOT: { RelocInfo rinfo(addr, RelocInfo::CODE_TARGET, 0, NULL); @@ -3253,22 +3266,26 @@ static inline void UpdateSlot(Isolate* isolate, } -enum SweepingMode { - SWEEP_ONLY, - SWEEP_AND_VISIT_LIVE_OBJECTS -}; +enum SweepingMode { SWEEP_ONLY, SWEEP_AND_VISIT_LIVE_OBJECTS }; -enum SkipListRebuildingMode { - REBUILD_SKIP_LIST, - IGNORE_SKIP_LIST -}; +enum SkipListRebuildingMode { REBUILD_SKIP_LIST, IGNORE_SKIP_LIST }; -enum FreeSpaceTreatmentMode { - IGNORE_FREE_SPACE, - ZAP_FREE_SPACE -}; +enum FreeSpaceTreatmentMode { IGNORE_FREE_SPACE, ZAP_FREE_SPACE }; + + +template <MarkCompactCollector::SweepingParallelism mode> +static intptr_t Free(PagedSpace* space, FreeList* free_list, Address start, + int size) { + if (mode == MarkCompactCollector::SWEEP_ON_MAIN_THREAD) { + DCHECK(free_list == NULL); + return space->Free(start, size); + } else { + // TODO(hpayer): account for wasted bytes in concurrent sweeping too. + return size - free_list->Free(start, size); + } +} // Sweep a space precisely. After this has been done the space can @@ -3277,26 +3294,22 @@ enum FreeSpaceTreatmentMode { // over it. Map space is swept precisely, because it is not compacted. // Slots in live objects pointing into evacuation candidates are updated // if requested. 
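Emergency memory is the key addition to EvacuatePages() above: each paged space reserves one spare page before evacuation starts, EvacuateLiveObjectsFromPage() falls back to it when a normal allocation fails, and whatever is unused is released at the end. A self-contained sketch of that reserve/use/release protocol; Space, kPageSize and the malloc-based allocation are invented stand-ins, V8 reserves a real Page per PagedSpace.

```cpp
#include <cstddef>
#include <cstdio>
#include <cstdlib>

struct Space {
  static constexpr std::size_t kPageSize = 4096;
  void* emergency = nullptr;
  bool HasEmergencyMemory() const { return emergency != nullptr; }
  void CreateEmergencyMemory() { emergency = std::malloc(kPageSize); }
  void FreeEmergencyMemory() {
    std::free(emergency);
    emergency = nullptr;
  }
  void* UseEmergencyMemory() {
    void* page = emergency;
    emergency = nullptr;
    return page;
  }
};

// Models the retry in EvacuateLiveObjectsFromPage(): when the normal
// allocation fails, fall back to the pre-reserved emergency page.
static void* AllocateDuringEvacuation(Space* space, bool normal_alloc_fails) {
  void* result = normal_alloc_fails ? nullptr : std::malloc(Space::kPageSize);
  if (result == nullptr && space->HasEmergencyMemory()) {
    result = space->UseEmergencyMemory();
  }
  return result;  // nullptr here would be the fatal-OOM case in V8
}

int main() {
  Space space;
  space.CreateEmergencyMemory();  // before evacuation
  void* p = AllocateDuringEvacuation(&space, /*normal_alloc_fails=*/true);
  std::printf("used emergency page: %s\n", p != nullptr ? "yes" : "no");
  std::free(p);
  if (space.HasEmergencyMemory()) space.FreeEmergencyMemory();  // afterwards
}
```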
-template<SweepingMode sweeping_mode, - SkipListRebuildingMode skip_list_mode, - FreeSpaceTreatmentMode free_space_mode> -static void SweepPrecisely(PagedSpace* space, - Page* p, - ObjectVisitor* v) { - ASSERT(!p->IsEvacuationCandidate() && !p->WasSwept()); - ASSERT_EQ(skip_list_mode == REBUILD_SKIP_LIST, +// Returns the size of the biggest continuous freed memory chunk in bytes. +template <SweepingMode sweeping_mode, + MarkCompactCollector::SweepingParallelism parallelism, + SkipListRebuildingMode skip_list_mode, + FreeSpaceTreatmentMode free_space_mode> +static int SweepPrecisely(PagedSpace* space, FreeList* free_list, Page* p, + ObjectVisitor* v) { + DCHECK(!p->IsEvacuationCandidate() && !p->WasSwept()); + DCHECK_EQ(skip_list_mode == REBUILD_SKIP_LIST, space->identity() == CODE_SPACE); - ASSERT((p->skip_list() == NULL) || (skip_list_mode == REBUILD_SKIP_LIST)); - - double start_time = 0.0; - if (FLAG_print_cumulative_gc_stat) { - start_time = OS::TimeCurrentMillis(); - } - - p->MarkSweptPrecisely(); + DCHECK((p->skip_list() == NULL) || (skip_list_mode == REBUILD_SKIP_LIST)); + DCHECK(parallelism == MarkCompactCollector::SWEEP_ON_MAIN_THREAD || + sweeping_mode == SWEEP_ONLY); Address free_start = p->area_start(); - ASSERT(reinterpret_cast<intptr_t>(free_start) % (32 * kPointerSize) == 0); + DCHECK(reinterpret_cast<intptr_t>(free_start) % (32 * kPointerSize) == 0); int offsets[16]; SkipList* skip_list = p->skip_list(); @@ -3305,18 +3318,23 @@ static void SweepPrecisely(PagedSpace* space, skip_list->Clear(); } + intptr_t freed_bytes = 0; + intptr_t max_freed_bytes = 0; + for (MarkBitCellIterator it(p); !it.Done(); it.Advance()) { Address cell_base = it.CurrentCellBase(); MarkBit::CellType* cell = it.CurrentCell(); int live_objects = MarkWordToObjectStarts(*cell, offsets); int live_index = 0; - for ( ; live_objects != 0; live_objects--) { + for (; live_objects != 0; live_objects--) { Address free_end = cell_base + offsets[live_index++] * kPointerSize; if (free_end != free_start) { + int size = static_cast<int>(free_end - free_start); if (free_space_mode == ZAP_FREE_SPACE) { - memset(free_start, 0xcc, static_cast<int>(free_end - free_start)); + memset(free_start, 0xcc, size); } - space->Free(free_start, static_cast<int>(free_end - free_start)); + freed_bytes = Free<parallelism>(space, free_list, free_start, size); + max_freed_bytes = Max(freed_bytes, max_freed_bytes); #ifdef ENABLE_GDB_JIT_INTERFACE if (FLAG_gdbjit && space->identity() == CODE_SPACE) { GDBJITInterface::RemoveCodeRange(free_start, free_end); @@ -3324,19 +3342,17 @@ static void SweepPrecisely(PagedSpace* space, #endif } HeapObject* live_object = HeapObject::FromAddress(free_end); - ASSERT(Marking::IsBlack(Marking::MarkBitFrom(live_object))); + DCHECK(Marking::IsBlack(Marking::MarkBitFrom(live_object))); Map* map = live_object->map(); int size = live_object->SizeFromMap(map); if (sweeping_mode == SWEEP_AND_VISIT_LIVE_OBJECTS) { live_object->IterateBody(map->instance_type(), size, v); } if ((skip_list_mode == REBUILD_SKIP_LIST) && skip_list != NULL) { - int new_region_start = - SkipList::RegionNumber(free_end); + int new_region_start = SkipList::RegionNumber(free_end); int new_region_end = SkipList::RegionNumber(free_end + size - kPointerSize); - if (new_region_start != curr_region || - new_region_end != curr_region) { + if (new_region_start != curr_region || new_region_end != curr_region) { skip_list->AddObject(free_end, size); curr_region = new_region_end; } @@ -3347,10 +3363,12 @@ static void SweepPrecisely(PagedSpace* space, 
*cell = 0; } if (free_start != p->area_end()) { + int size = static_cast<int>(p->area_end() - free_start); if (free_space_mode == ZAP_FREE_SPACE) { - memset(free_start, 0xcc, static_cast<int>(p->area_end() - free_start)); + memset(free_start, 0xcc, size); } - space->Free(free_start, static_cast<int>(p->area_end() - free_start)); + freed_bytes = Free<parallelism>(space, free_list, free_start, size); + max_freed_bytes = Max(freed_bytes, max_freed_bytes); #ifdef ENABLE_GDB_JIT_INTERFACE if (FLAG_gdbjit && space->identity() == CODE_SPACE) { GDBJITInterface::RemoveCodeRange(free_start, p->area_end()); @@ -3358,17 +3376,22 @@ static void SweepPrecisely(PagedSpace* space, #endif } p->ResetLiveBytes(); - if (FLAG_print_cumulative_gc_stat) { - space->heap()->AddSweepingTime(OS::TimeCurrentMillis() - start_time); + + if (parallelism == MarkCompactCollector::SWEEP_IN_PARALLEL) { + // When concurrent sweeping is active, the page will be marked after + // sweeping by the main thread. + p->set_parallel_sweeping(MemoryChunk::SWEEPING_FINALIZE); + } else { + p->MarkSweptPrecisely(); } + return FreeList::GuaranteedAllocatable(static_cast<int>(max_freed_bytes)); } static bool SetMarkBitsUnderInvalidatedCode(Code* code, bool value) { Page* p = Page::FromAddress(code->address()); - if (p->IsEvacuationCandidate() || - p->IsFlagSet(Page::RESCAN_ON_EVACUATION)) { + if (p->IsEvacuationCandidate() || p->IsFlagSet(Page::RESCAN_ON_EVACUATION)) { return false; } @@ -3401,7 +3424,7 @@ static bool SetMarkBitsUnderInvalidatedCode(Code* code, bool value) { *end_cell |= end_mask; } } else { - for (MarkBit::CellType* cell = start_cell ; cell <= end_cell; cell++) { + for (MarkBit::CellType* cell = start_cell; cell <= end_cell; cell++) { *cell = 0; } } @@ -3432,7 +3455,7 @@ static bool IsOnInvalidatedCodeObject(Address addr) { void MarkCompactCollector::InvalidateCode(Code* code) { if (heap_->incremental_marking()->IsCompacting() && !ShouldSkipEvacuationSlotRecording(code)) { - ASSERT(compacting_); + DCHECK(compacting_); // If the object is white than no slots were recorded on it yet. MarkBit mark_bit = Marking::MarkBitFrom(code); @@ -3490,52 +3513,56 @@ void MarkCompactCollector::EvacuateNewSpaceAndCandidates() { Heap::RelocationLock relocation_lock(heap()); bool code_slots_filtering_required; - { GCTracer::Scope gc_scope(tracer_, GCTracer::Scope::MC_SWEEP_NEWSPACE); + { + GCTracer::Scope gc_scope(heap()->tracer(), + GCTracer::Scope::MC_SWEEP_NEWSPACE); code_slots_filtering_required = MarkInvalidatedCode(); EvacuateNewSpace(); } - { GCTracer::Scope gc_scope(tracer_, GCTracer::Scope::MC_EVACUATE_PAGES); + { + GCTracer::Scope gc_scope(heap()->tracer(), + GCTracer::Scope::MC_EVACUATE_PAGES); EvacuatePages(); } // Second pass: find pointers to new space and update them. PointersUpdatingVisitor updating_visitor(heap()); - { GCTracer::Scope gc_scope(tracer_, + { + GCTracer::Scope gc_scope(heap()->tracer(), GCTracer::Scope::MC_UPDATE_NEW_TO_NEW_POINTERS); // Update pointers in to space. 
SemiSpaceIterator to_it(heap()->new_space()->bottom(), heap()->new_space()->top()); - for (HeapObject* object = to_it.Next(); - object != NULL; + for (HeapObject* object = to_it.Next(); object != NULL; object = to_it.Next()) { Map* map = object->map(); - object->IterateBody(map->instance_type(), - object->SizeFromMap(map), + object->IterateBody(map->instance_type(), object->SizeFromMap(map), &updating_visitor); } } - { GCTracer::Scope gc_scope(tracer_, + { + GCTracer::Scope gc_scope(heap()->tracer(), GCTracer::Scope::MC_UPDATE_ROOT_TO_NEW_POINTERS); // Update roots. heap_->IterateRoots(&updating_visitor, VISIT_ALL_IN_SWEEP_NEWSPACE); } - { GCTracer::Scope gc_scope(tracer_, + { + GCTracer::Scope gc_scope(heap()->tracer(), GCTracer::Scope::MC_UPDATE_OLD_TO_NEW_POINTERS); - StoreBufferRebuildScope scope(heap_, - heap_->store_buffer(), + StoreBufferRebuildScope scope(heap_, heap_->store_buffer(), &Heap::ScavengeStoreBufferCallback); heap_->store_buffer()->IteratePointersToNewSpaceAndClearMaps( &UpdatePointer); } - { GCTracer::Scope gc_scope(tracer_, + { + GCTracer::Scope gc_scope(heap()->tracer(), GCTracer::Scope::MC_UPDATE_POINTERS_TO_EVACUATED); - SlotsBuffer::UpdateSlotsRecordedIn(heap_, - migration_slots_buffer_, + SlotsBuffer::UpdateSlotsRecordedIn(heap_, migration_slots_buffer_, code_slots_filtering_required); if (FLAG_trace_fragmentation) { PrintF(" migration slots buffer: %d\n", @@ -3560,20 +3587,20 @@ void MarkCompactCollector::EvacuateNewSpaceAndCandidates() { } int npages = evacuation_candidates_.length(); - { GCTracer::Scope gc_scope( - tracer_, GCTracer::Scope::MC_UPDATE_POINTERS_BETWEEN_EVACUATED); + { + GCTracer::Scope gc_scope( + heap()->tracer(), + GCTracer::Scope::MC_UPDATE_POINTERS_BETWEEN_EVACUATED); for (int i = 0; i < npages; i++) { Page* p = evacuation_candidates_[i]; - ASSERT(p->IsEvacuationCandidate() || + DCHECK(p->IsEvacuationCandidate() || p->IsFlagSet(Page::RESCAN_ON_EVACUATION)); if (p->IsEvacuationCandidate()) { - SlotsBuffer::UpdateSlotsRecordedIn(heap_, - p->slots_buffer(), + SlotsBuffer::UpdateSlotsRecordedIn(heap_, p->slots_buffer(), code_slots_filtering_required); if (FLAG_trace_fragmentation) { - PrintF(" page %p slots buffer: %d\n", - reinterpret_cast<void*>(p), + PrintF(" page %p slots buffer: %d\n", reinterpret_cast<void*>(p), SlotsBuffer::SizeOfChain(p->slots_buffer())); } @@ -3592,25 +3619,22 @@ void MarkCompactCollector::EvacuateNewSpaceAndCandidates() { switch (space->identity()) { case OLD_DATA_SPACE: - SweepConservatively<SWEEP_SEQUENTIALLY>(space, NULL, p); + SweepConservatively<SWEEP_ON_MAIN_THREAD>(space, NULL, p); break; case OLD_POINTER_SPACE: - SweepPrecisely<SWEEP_AND_VISIT_LIVE_OBJECTS, - IGNORE_SKIP_LIST, - IGNORE_FREE_SPACE>( - space, p, &updating_visitor); + SweepPrecisely<SWEEP_AND_VISIT_LIVE_OBJECTS, SWEEP_ON_MAIN_THREAD, + IGNORE_SKIP_LIST, IGNORE_FREE_SPACE>( + space, NULL, p, &updating_visitor); break; case CODE_SPACE: if (FLAG_zap_code_space) { - SweepPrecisely<SWEEP_AND_VISIT_LIVE_OBJECTS, - REBUILD_SKIP_LIST, - ZAP_FREE_SPACE>( - space, p, &updating_visitor); + SweepPrecisely<SWEEP_AND_VISIT_LIVE_OBJECTS, SWEEP_ON_MAIN_THREAD, + REBUILD_SKIP_LIST, ZAP_FREE_SPACE>( + space, NULL, p, &updating_visitor); } else { - SweepPrecisely<SWEEP_AND_VISIT_LIVE_OBJECTS, - REBUILD_SKIP_LIST, - IGNORE_FREE_SPACE>( - space, p, &updating_visitor); + SweepPrecisely<SWEEP_AND_VISIT_LIVE_OBJECTS, SWEEP_ON_MAIN_THREAD, + REBUILD_SKIP_LIST, IGNORE_FREE_SPACE>( + space, NULL, p, &updating_visitor); } break; default: @@ -3621,12 +3645,12 @@ void 
MarkCompactCollector::EvacuateNewSpaceAndCandidates() { } } - GCTracer::Scope gc_scope(tracer_, GCTracer::Scope::MC_UPDATE_MISC_POINTERS); + GCTracer::Scope gc_scope(heap()->tracer(), + GCTracer::Scope::MC_UPDATE_MISC_POINTERS); // Update pointers from cells. HeapObjectIterator cell_iterator(heap_->cell_space()); - for (HeapObject* cell = cell_iterator.Next(); - cell != NULL; + for (HeapObject* cell = cell_iterator.Next(); cell != NULL; cell = cell_iterator.Next()) { if (cell->IsCell()) { Cell::BodyDescriptor::IterateBody(cell, &updating_visitor); @@ -3635,17 +3659,13 @@ void MarkCompactCollector::EvacuateNewSpaceAndCandidates() { HeapObjectIterator js_global_property_cell_iterator( heap_->property_cell_space()); - for (HeapObject* cell = js_global_property_cell_iterator.Next(); - cell != NULL; + for (HeapObject* cell = js_global_property_cell_iterator.Next(); cell != NULL; cell = js_global_property_cell_iterator.Next()) { if (cell->IsPropertyCell()) { PropertyCell::BodyDescriptor::IterateBody(cell, &updating_visitor); } } - // Update the head of the native contexts list in the heap. - updating_visitor.VisitPointer(heap_->native_contexts_list_address()); - heap_->string_table()->Iterate(&updating_visitor); updating_visitor.VisitPointer(heap_->weak_object_to_code_table_address()); if (heap_->weak_object_to_code_table()->IsHashTable()) { @@ -3668,14 +3688,8 @@ void MarkCompactCollector::EvacuateNewSpaceAndCandidates() { heap_->isolate()->inner_pointer_to_code_cache()->Flush(); -#ifdef VERIFY_HEAP - if (FLAG_verify_heap) { - VerifyEvacuation(heap_); - } -#endif - slots_buffer_allocator_.DeallocateChain(&migration_slots_buffer_); - ASSERT(migration_slots_buffer_ == NULL); + DCHECK(migration_slots_buffer_ == NULL); } @@ -3726,177 +3740,348 @@ static const int kStartTableUnusedEntry = 126; // Since objects are at least 2 words large we don't have entries for two // consecutive 1 bits. All entries after 170 have at least 2 consecutive bits. 
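The table that follows maps each possible mark-bitmap byte to the number of object starts it encodes and their bit offsets; bytes with adjacent 1 bits are invalid because objects are at least two words. This sketch computes the same decoding directly, which is handy for checking individual table lines:

```cpp
#include <cstdio>

// For one byte of the mark bitmap, report how many objects start in its
// eight words and at which word offsets; the precomputed kStartTable below
// stores exactly this answer per byte value.
static int ObjectStartsInByte(unsigned byte, int* offsets) {
  int count = 0;
  for (int bit = 0; bit < 8; bit++) {
    if (byte & (1u << bit)) offsets[count++] = bit;
  }
  return count;
}

int main() {
  int offsets[8];
  // 0b10010001: objects start at word offsets 0, 4 and 7, the same triple
  // the table stores on its line for byte value 145.
  int n = ObjectStartsInByte(0x91, offsets);
  for (int i = 0; i < n; i++) std::printf("start at word %d\n", offsets[i]);
  std::printf("%d objects\n", n);  // 3 objects
}
```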
char kStartTable[kStartTableLines * kStartTableEntriesPerLine] = { - 0, _, _, _, _, // 0 - 1, 0, _, _, _, // 1 - 1, 1, _, _, _, // 2 - X, _, _, _, _, // 3 - 1, 2, _, _, _, // 4 - 2, 0, 2, _, _, // 5 - X, _, _, _, _, // 6 - X, _, _, _, _, // 7 - 1, 3, _, _, _, // 8 - 2, 0, 3, _, _, // 9 - 2, 1, 3, _, _, // 10 - X, _, _, _, _, // 11 - X, _, _, _, _, // 12 - X, _, _, _, _, // 13 - X, _, _, _, _, // 14 - X, _, _, _, _, // 15 - 1, 4, _, _, _, // 16 - 2, 0, 4, _, _, // 17 - 2, 1, 4, _, _, // 18 - X, _, _, _, _, // 19 - 2, 2, 4, _, _, // 20 - 3, 0, 2, 4, _, // 21 - X, _, _, _, _, // 22 - X, _, _, _, _, // 23 - X, _, _, _, _, // 24 - X, _, _, _, _, // 25 - X, _, _, _, _, // 26 - X, _, _, _, _, // 27 - X, _, _, _, _, // 28 - X, _, _, _, _, // 29 - X, _, _, _, _, // 30 - X, _, _, _, _, // 31 - 1, 5, _, _, _, // 32 - 2, 0, 5, _, _, // 33 - 2, 1, 5, _, _, // 34 - X, _, _, _, _, // 35 - 2, 2, 5, _, _, // 36 - 3, 0, 2, 5, _, // 37 - X, _, _, _, _, // 38 - X, _, _, _, _, // 39 - 2, 3, 5, _, _, // 40 - 3, 0, 3, 5, _, // 41 - 3, 1, 3, 5, _, // 42 - X, _, _, _, _, // 43 - X, _, _, _, _, // 44 - X, _, _, _, _, // 45 - X, _, _, _, _, // 46 - X, _, _, _, _, // 47 - X, _, _, _, _, // 48 - X, _, _, _, _, // 49 - X, _, _, _, _, // 50 - X, _, _, _, _, // 51 - X, _, _, _, _, // 52 - X, _, _, _, _, // 53 - X, _, _, _, _, // 54 - X, _, _, _, _, // 55 - X, _, _, _, _, // 56 - X, _, _, _, _, // 57 - X, _, _, _, _, // 58 - X, _, _, _, _, // 59 - X, _, _, _, _, // 60 - X, _, _, _, _, // 61 - X, _, _, _, _, // 62 - X, _, _, _, _, // 63 - 1, 6, _, _, _, // 64 - 2, 0, 6, _, _, // 65 - 2, 1, 6, _, _, // 66 - X, _, _, _, _, // 67 - 2, 2, 6, _, _, // 68 - 3, 0, 2, 6, _, // 69 - X, _, _, _, _, // 70 - X, _, _, _, _, // 71 - 2, 3, 6, _, _, // 72 - 3, 0, 3, 6, _, // 73 - 3, 1, 3, 6, _, // 74 - X, _, _, _, _, // 75 - X, _, _, _, _, // 76 - X, _, _, _, _, // 77 - X, _, _, _, _, // 78 - X, _, _, _, _, // 79 - 2, 4, 6, _, _, // 80 - 3, 0, 4, 6, _, // 81 - 3, 1, 4, 6, _, // 82 - X, _, _, _, _, // 83 - 3, 2, 4, 6, _, // 84 - 4, 0, 2, 4, 6, // 85 - X, _, _, _, _, // 86 - X, _, _, _, _, // 87 - X, _, _, _, _, // 88 - X, _, _, _, _, // 89 - X, _, _, _, _, // 90 - X, _, _, _, _, // 91 - X, _, _, _, _, // 92 - X, _, _, _, _, // 93 - X, _, _, _, _, // 94 - X, _, _, _, _, // 95 - X, _, _, _, _, // 96 - X, _, _, _, _, // 97 - X, _, _, _, _, // 98 - X, _, _, _, _, // 99 - X, _, _, _, _, // 100 - X, _, _, _, _, // 101 - X, _, _, _, _, // 102 - X, _, _, _, _, // 103 - X, _, _, _, _, // 104 - X, _, _, _, _, // 105 - X, _, _, _, _, // 106 - X, _, _, _, _, // 107 - X, _, _, _, _, // 108 - X, _, _, _, _, // 109 - X, _, _, _, _, // 110 - X, _, _, _, _, // 111 - X, _, _, _, _, // 112 - X, _, _, _, _, // 113 - X, _, _, _, _, // 114 - X, _, _, _, _, // 115 - X, _, _, _, _, // 116 - X, _, _, _, _, // 117 - X, _, _, _, _, // 118 - X, _, _, _, _, // 119 - X, _, _, _, _, // 120 - X, _, _, _, _, // 121 - X, _, _, _, _, // 122 - X, _, _, _, _, // 123 - X, _, _, _, _, // 124 - X, _, _, _, _, // 125 - X, _, _, _, _, // 126 - X, _, _, _, _, // 127 - 1, 7, _, _, _, // 128 - 2, 0, 7, _, _, // 129 - 2, 1, 7, _, _, // 130 - X, _, _, _, _, // 131 - 2, 2, 7, _, _, // 132 - 3, 0, 2, 7, _, // 133 - X, _, _, _, _, // 134 - X, _, _, _, _, // 135 - 2, 3, 7, _, _, // 136 - 3, 0, 3, 7, _, // 137 - 3, 1, 3, 7, _, // 138 - X, _, _, _, _, // 139 - X, _, _, _, _, // 140 - X, _, _, _, _, // 141 - X, _, _, _, _, // 142 - X, _, _, _, _, // 143 - 2, 4, 7, _, _, // 144 - 3, 0, 4, 7, _, // 145 - 3, 1, 4, 7, _, // 146 - X, _, _, _, _, // 147 - 3, 2, 4, 7, _, // 148 - 4, 0, 2, 4, 7, // 
149 - X, _, _, _, _, // 150 - X, _, _, _, _, // 151 - X, _, _, _, _, // 152 - X, _, _, _, _, // 153 - X, _, _, _, _, // 154 - X, _, _, _, _, // 155 - X, _, _, _, _, // 156 - X, _, _, _, _, // 157 - X, _, _, _, _, // 158 - X, _, _, _, _, // 159 - 2, 5, 7, _, _, // 160 - 3, 0, 5, 7, _, // 161 - 3, 1, 5, 7, _, // 162 - X, _, _, _, _, // 163 - 3, 2, 5, 7, _, // 164 - 4, 0, 2, 5, 7, // 165 - X, _, _, _, _, // 166 - X, _, _, _, _, // 167 - 3, 3, 5, 7, _, // 168 - 4, 0, 3, 5, 7, // 169 - 4, 1, 3, 5, 7 // 170 + 0, _, _, + _, _, // 0 + 1, 0, _, + _, _, // 1 + 1, 1, _, + _, _, // 2 + X, _, _, + _, _, // 3 + 1, 2, _, + _, _, // 4 + 2, 0, 2, + _, _, // 5 + X, _, _, + _, _, // 6 + X, _, _, + _, _, // 7 + 1, 3, _, + _, _, // 8 + 2, 0, 3, + _, _, // 9 + 2, 1, 3, + _, _, // 10 + X, _, _, + _, _, // 11 + X, _, _, + _, _, // 12 + X, _, _, + _, _, // 13 + X, _, _, + _, _, // 14 + X, _, _, + _, _, // 15 + 1, 4, _, + _, _, // 16 + 2, 0, 4, + _, _, // 17 + 2, 1, 4, + _, _, // 18 + X, _, _, + _, _, // 19 + 2, 2, 4, + _, _, // 20 + 3, 0, 2, + 4, _, // 21 + X, _, _, + _, _, // 22 + X, _, _, + _, _, // 23 + X, _, _, + _, _, // 24 + X, _, _, + _, _, // 25 + X, _, _, + _, _, // 26 + X, _, _, + _, _, // 27 + X, _, _, + _, _, // 28 + X, _, _, + _, _, // 29 + X, _, _, + _, _, // 30 + X, _, _, + _, _, // 31 + 1, 5, _, + _, _, // 32 + 2, 0, 5, + _, _, // 33 + 2, 1, 5, + _, _, // 34 + X, _, _, + _, _, // 35 + 2, 2, 5, + _, _, // 36 + 3, 0, 2, + 5, _, // 37 + X, _, _, + _, _, // 38 + X, _, _, + _, _, // 39 + 2, 3, 5, + _, _, // 40 + 3, 0, 3, + 5, _, // 41 + 3, 1, 3, + 5, _, // 42 + X, _, _, + _, _, // 43 + X, _, _, + _, _, // 44 + X, _, _, + _, _, // 45 + X, _, _, + _, _, // 46 + X, _, _, + _, _, // 47 + X, _, _, + _, _, // 48 + X, _, _, + _, _, // 49 + X, _, _, + _, _, // 50 + X, _, _, + _, _, // 51 + X, _, _, + _, _, // 52 + X, _, _, + _, _, // 53 + X, _, _, + _, _, // 54 + X, _, _, + _, _, // 55 + X, _, _, + _, _, // 56 + X, _, _, + _, _, // 57 + X, _, _, + _, _, // 58 + X, _, _, + _, _, // 59 + X, _, _, + _, _, // 60 + X, _, _, + _, _, // 61 + X, _, _, + _, _, // 62 + X, _, _, + _, _, // 63 + 1, 6, _, + _, _, // 64 + 2, 0, 6, + _, _, // 65 + 2, 1, 6, + _, _, // 66 + X, _, _, + _, _, // 67 + 2, 2, 6, + _, _, // 68 + 3, 0, 2, + 6, _, // 69 + X, _, _, + _, _, // 70 + X, _, _, + _, _, // 71 + 2, 3, 6, + _, _, // 72 + 3, 0, 3, + 6, _, // 73 + 3, 1, 3, + 6, _, // 74 + X, _, _, + _, _, // 75 + X, _, _, + _, _, // 76 + X, _, _, + _, _, // 77 + X, _, _, + _, _, // 78 + X, _, _, + _, _, // 79 + 2, 4, 6, + _, _, // 80 + 3, 0, 4, + 6, _, // 81 + 3, 1, 4, + 6, _, // 82 + X, _, _, + _, _, // 83 + 3, 2, 4, + 6, _, // 84 + 4, 0, 2, + 4, 6, // 85 + X, _, _, + _, _, // 86 + X, _, _, + _, _, // 87 + X, _, _, + _, _, // 88 + X, _, _, + _, _, // 89 + X, _, _, + _, _, // 90 + X, _, _, + _, _, // 91 + X, _, _, + _, _, // 92 + X, _, _, + _, _, // 93 + X, _, _, + _, _, // 94 + X, _, _, + _, _, // 95 + X, _, _, + _, _, // 96 + X, _, _, + _, _, // 97 + X, _, _, + _, _, // 98 + X, _, _, + _, _, // 99 + X, _, _, + _, _, // 100 + X, _, _, + _, _, // 101 + X, _, _, + _, _, // 102 + X, _, _, + _, _, // 103 + X, _, _, + _, _, // 104 + X, _, _, + _, _, // 105 + X, _, _, + _, _, // 106 + X, _, _, + _, _, // 107 + X, _, _, + _, _, // 108 + X, _, _, + _, _, // 109 + X, _, _, + _, _, // 110 + X, _, _, + _, _, // 111 + X, _, _, + _, _, // 112 + X, _, _, + _, _, // 113 + X, _, _, + _, _, // 114 + X, _, _, + _, _, // 115 + X, _, _, + _, _, // 116 + X, _, _, + _, _, // 117 + X, _, _, + _, _, // 118 + X, _, _, + _, _, // 119 + X, _, _, + _, _, // 120 + X, _, _, 
+ _, _, // 121 + X, _, _, + _, _, // 122 + X, _, _, + _, _, // 123 + X, _, _, + _, _, // 124 + X, _, _, + _, _, // 125 + X, _, _, + _, _, // 126 + X, _, _, + _, _, // 127 + 1, 7, _, + _, _, // 128 + 2, 0, 7, + _, _, // 129 + 2, 1, 7, + _, _, // 130 + X, _, _, + _, _, // 131 + 2, 2, 7, + _, _, // 132 + 3, 0, 2, + 7, _, // 133 + X, _, _, + _, _, // 134 + X, _, _, + _, _, // 135 + 2, 3, 7, + _, _, // 136 + 3, 0, 3, + 7, _, // 137 + 3, 1, 3, + 7, _, // 138 + X, _, _, + _, _, // 139 + X, _, _, + _, _, // 140 + X, _, _, + _, _, // 141 + X, _, _, + _, _, // 142 + X, _, _, + _, _, // 143 + 2, 4, 7, + _, _, // 144 + 3, 0, 4, + 7, _, // 145 + 3, 1, 4, + 7, _, // 146 + X, _, _, + _, _, // 147 + 3, 2, 4, + 7, _, // 148 + 4, 0, 2, + 4, 7, // 149 + X, _, _, + _, _, // 150 + X, _, _, + _, _, // 151 + X, _, _, + _, _, // 152 + X, _, _, + _, _, // 153 + X, _, _, + _, _, // 154 + X, _, _, + _, _, // 155 + X, _, _, + _, _, // 156 + X, _, _, + _, _, // 157 + X, _, _, + _, _, // 158 + X, _, _, + _, _, // 159 + 2, 5, 7, + _, _, // 160 + 3, 0, 5, + 7, _, // 161 + 3, 1, 5, + 7, _, // 162 + X, _, _, + _, _, // 163 + 3, 2, 5, + 7, _, // 164 + 4, 0, 2, + 5, 7, // 165 + X, _, _, + _, _, // 166 + X, _, _, + _, _, // 167 + 3, 3, 5, + 7, _, // 168 + 4, 0, 3, + 5, 7, // 169 + 4, 1, 3, + 5, 7 // 170 }; #undef _ #undef X @@ -3909,19 +4094,19 @@ static inline int MarkWordToObjectStarts(uint32_t mark_bits, int* starts) { int offset = 0; // No consecutive 1 bits. - ASSERT((mark_bits & 0x180) != 0x180); - ASSERT((mark_bits & 0x18000) != 0x18000); - ASSERT((mark_bits & 0x1800000) != 0x1800000); + DCHECK((mark_bits & 0x180) != 0x180); + DCHECK((mark_bits & 0x18000) != 0x18000); + DCHECK((mark_bits & 0x1800000) != 0x1800000); while (mark_bits != 0) { int byte = (mark_bits & 0xff); mark_bits >>= 8; if (byte != 0) { - ASSERT(byte < kStartTableLines); // No consecutive 1 bits. + DCHECK(byte < kStartTableLines); // No consecutive 1 bits. char* table = kStartTable + byte * kStartTableEntriesPerLine; int objects_in_these_8_words = table[0]; - ASSERT(objects_in_these_8_words != kStartTableInvalidLine); - ASSERT(objects_in_these_8_words < kStartTableEntriesPerLine); + DCHECK(objects_in_these_8_words != kStartTableInvalidLine); + DCHECK(objects_in_these_8_words < kStartTableEntriesPerLine); for (int i = 0; i < objects_in_these_8_words; i++) { starts[objects++] = offset + table[1 + i]; } @@ -3934,10 +4119,10 @@ static inline int MarkWordToObjectStarts(uint32_t mark_bits, int* starts) { static inline Address DigestFreeStart(Address approximate_free_start, uint32_t free_start_cell) { - ASSERT(free_start_cell != 0); + DCHECK(free_start_cell != 0); // No consecutive 1 bits. - ASSERT((free_start_cell & (free_start_cell << 1)) == 0); + DCHECK((free_start_cell & (free_start_cell << 1)) == 0); int offsets[16]; uint32_t cell = free_start_cell; @@ -3955,7 +4140,7 @@ static inline Address DigestFreeStart(Address approximate_free_start, cell |= cell >> 1; cell = (cell + 1) >> 1; int live_objects = MarkWordToObjectStarts(cell, offsets); - ASSERT(live_objects == 1); + DCHECK(live_objects == 1); offset_of_last_live = offsets[live_objects - 1]; } Address last_live_start = @@ -3967,49 +4152,34 @@ static inline Address DigestFreeStart(Address approximate_free_start, static inline Address StartOfLiveObject(Address block_address, uint32_t cell) { - ASSERT(cell != 0); + DCHECK(cell != 0); // No consecutive 1 bits. - ASSERT((cell & (cell << 1)) == 0); + DCHECK((cell & (cell << 1)) == 0); int offsets[16]; if (cell == 0x80000000u) { // Avoid overflow below. 
return block_address + 31 * kPointerSize; } uint32_t first_set_bit = ((cell ^ (cell - 1)) + 1) >> 1; - ASSERT((first_set_bit & cell) == first_set_bit); + DCHECK((first_set_bit & cell) == first_set_bit); int live_objects = MarkWordToObjectStarts(first_set_bit, offsets); - ASSERT(live_objects == 1); + DCHECK(live_objects == 1); USE(live_objects); return block_address + offsets[0] * kPointerSize; } -template<MarkCompactCollector::SweepingParallelism mode> -static intptr_t Free(PagedSpace* space, - FreeList* free_list, - Address start, - int size) { - if (mode == MarkCompactCollector::SWEEP_SEQUENTIALLY) { - return space->Free(start, size); - } else { - return size - free_list->Free(start, size); - } -} - - // Force instantiation of templatized SweepConservatively method for -// SWEEP_SEQUENTIALLY mode. -template intptr_t MarkCompactCollector:: - SweepConservatively<MarkCompactCollector::SWEEP_SEQUENTIALLY>( - PagedSpace*, FreeList*, Page*); +// SWEEP_ON_MAIN_THREAD mode. +template int MarkCompactCollector::SweepConservatively< + MarkCompactCollector::SWEEP_ON_MAIN_THREAD>(PagedSpace*, FreeList*, Page*); // Force instantiation of templatized SweepConservatively method for // SWEEP_IN_PARALLEL mode. -template intptr_t MarkCompactCollector:: - SweepConservatively<MarkCompactCollector::SWEEP_IN_PARALLEL>( - PagedSpace*, FreeList*, Page*); +template int MarkCompactCollector::SweepConservatively< + MarkCompactCollector::SWEEP_IN_PARALLEL>(PagedSpace*, FreeList*, Page*); // Sweeps a space conservatively. After this has been done the larger free @@ -4019,23 +4189,17 @@ template intptr_t MarkCompactCollector:: // because it means that any FreeSpace maps left actually describe a region of // memory that can be ignored when scanning. Dead objects other than free // spaces will not contain the free space map. -template<MarkCompactCollector::SweepingParallelism mode> -intptr_t MarkCompactCollector::SweepConservatively(PagedSpace* space, - FreeList* free_list, - Page* p) { - ASSERT(!p->IsEvacuationCandidate() && !p->WasSwept()); - ASSERT((mode == MarkCompactCollector::SWEEP_IN_PARALLEL && - free_list != NULL) || - (mode == MarkCompactCollector::SWEEP_SEQUENTIALLY && - free_list == NULL)); - - // When parallel sweeping is active, the page will be marked after - // sweeping by the main thread. - if (mode != MarkCompactCollector::SWEEP_IN_PARALLEL) { - p->MarkSweptConservatively(); - } +template <MarkCompactCollector::SweepingParallelism mode> +int MarkCompactCollector::SweepConservatively(PagedSpace* space, + FreeList* free_list, Page* p) { + DCHECK(!p->IsEvacuationCandidate() && !p->WasSwept()); + DCHECK( + (mode == MarkCompactCollector::SWEEP_IN_PARALLEL && free_list != NULL) || + (mode == MarkCompactCollector::SWEEP_ON_MAIN_THREAD && + free_list == NULL)); intptr_t freed_bytes = 0; + intptr_t max_freed_bytes = 0; size_t size = 0; // Skip over all the dead objects at the start of the page and mark them free. 
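Both sweepers now funnel freed ranges through the templated Free() helper and report the largest contiguous chunk, which FreeList::GuaranteedAllocatable() turns into a promise to the allocator about the biggest request that is sure to succeed. A toy version of that max-run bookkeeping over a page bitmap (the layout is invented; V8 walks mark-bit cells, not a bool vector):

```cpp
#include <algorithm>
#include <cstdio>
#include <vector>

// Sweep a toy page: every contiguous dead range would go to a free list,
// and the sweeper reports the largest single run it produced.
static int SweepAndReportMaxFreed(const std::vector<bool>& live_words) {
  int max_freed = 0;
  int run = 0;
  for (bool live : live_words) {
    if (!live) {
      run++;  // extend the current free range
    } else {
      max_freed = std::max(max_freed, run);  // close the range
      run = 0;
    }
  }
  return std::max(max_freed, run);  // free space at the end of the page
}

int main() {
  // live pattern: 2 free, 1 live, 4 free, 1 live, 1 free
  std::vector<bool> words = {false, false, true, false, false,
                             false, false, true, false};
  std::printf("largest freed run: %d words\n",
              SweepAndReportMaxFreed(words));  // prints 4
}
```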
@@ -4050,10 +4214,18 @@ intptr_t MarkCompactCollector::SweepConservatively(PagedSpace* space, if (it.Done()) { size = p->area_end() - p->area_start(); - freed_bytes += Free<mode>(space, free_list, p->area_start(), - static_cast<int>(size)); - ASSERT_EQ(0, p->LiveBytes()); - return freed_bytes; + freed_bytes = + Free<mode>(space, free_list, p->area_start(), static_cast<int>(size)); + max_freed_bytes = Max(freed_bytes, max_freed_bytes); + DCHECK_EQ(0, p->LiveBytes()); + if (mode == MarkCompactCollector::SWEEP_IN_PARALLEL) { + // When concurrent sweeping is active, the page will be marked after + // sweeping by the main thread. + p->set_parallel_sweeping(MemoryChunk::SWEEPING_FINALIZE); + } else { + p->MarkSweptConservatively(); + } + return FreeList::GuaranteedAllocatable(static_cast<int>(max_freed_bytes)); } // Grow the size of the start-of-page free space a little to get up to the @@ -4061,8 +4233,9 @@ intptr_t MarkCompactCollector::SweepConservatively(PagedSpace* space, Address free_end = StartOfLiveObject(cell_base, *cell); // Free the first free space. size = free_end - p->area_start(); - freed_bytes += Free<mode>(space, free_list, p->area_start(), - static_cast<int>(size)); + freed_bytes = + Free<mode>(space, free_list, p->area_start(), static_cast<int>(size)); + max_freed_bytes = Max(freed_bytes, max_freed_bytes); // The start of the current free area is represented in undigested form by // the address of the last 32-word section that contained a live object and @@ -4087,8 +4260,9 @@ intptr_t MarkCompactCollector::SweepConservatively(PagedSpace* space, // so now we need to find the start of the first live object at the // end of the free space. free_end = StartOfLiveObject(cell_base, *cell); - freed_bytes += Free<mode>(space, free_list, free_start, - static_cast<int>(free_end - free_start)); + freed_bytes = Free<mode>(space, free_list, free_start, + static_cast<int>(free_end - free_start)); + max_freed_bytes = Max(freed_bytes, max_freed_bytes); } } // Update our undigested record of where the current free area started. @@ -4102,38 +4276,67 @@ intptr_t MarkCompactCollector::SweepConservatively(PagedSpace* space, // Handle the free space at the end of the page. if (cell_base - free_start > 32 * kPointerSize) { free_start = DigestFreeStart(free_start, free_start_cell); - freed_bytes += Free<mode>(space, free_list, free_start, - static_cast<int>(p->area_end() - free_start)); + freed_bytes = Free<mode>(space, free_list, free_start, + static_cast<int>(p->area_end() - free_start)); + max_freed_bytes = Max(freed_bytes, max_freed_bytes); } p->ResetLiveBytes(); - return freed_bytes; + if (mode == MarkCompactCollector::SWEEP_IN_PARALLEL) { + // When concurrent sweeping is active, the page will be marked after + // sweeping by the main thread. + p->set_parallel_sweeping(MemoryChunk::SWEEPING_FINALIZE); + } else { + p->MarkSweptConservatively(); + } + return FreeList::GuaranteedAllocatable(static_cast<int>(max_freed_bytes)); } -void MarkCompactCollector::SweepInParallel(PagedSpace* space) { +int MarkCompactCollector::SweepInParallel(PagedSpace* space, + int required_freed_bytes) { + int max_freed = 0; + int max_freed_overall = 0; PageIterator it(space); - FreeList* free_list = space == heap()->old_pointer_space() - ? 
free_list_old_pointer_space_.get() - : free_list_old_data_space_.get(); - FreeList private_free_list(space); while (it.has_next()) { Page* p = it.next(); - - if (p->TryParallelSweeping()) { - SweepConservatively<SWEEP_IN_PARALLEL>(space, &private_free_list, p); - free_list->Concatenate(&private_free_list); - p->set_parallel_sweeping(MemoryChunk::PARALLEL_SWEEPING_FINALIZE); + max_freed = SweepInParallel(p, space); + DCHECK(max_freed >= 0); + if (required_freed_bytes > 0 && max_freed >= required_freed_bytes) { + return max_freed; } + max_freed_overall = Max(max_freed, max_freed_overall); if (p == space->end_of_unswept_pages()) break; } + return max_freed_overall; +} + + +int MarkCompactCollector::SweepInParallel(Page* page, PagedSpace* space) { + int max_freed = 0; + if (page->TryParallelSweeping()) { + FreeList* free_list = space == heap()->old_pointer_space() + ? free_list_old_pointer_space_.get() + : free_list_old_data_space_.get(); + FreeList private_free_list(space); + if (space->swept_precisely()) { + max_freed = SweepPrecisely<SWEEP_ONLY, SWEEP_IN_PARALLEL, + IGNORE_SKIP_LIST, IGNORE_FREE_SPACE>( + space, &private_free_list, page, NULL); + } else { + max_freed = SweepConservatively<SWEEP_IN_PARALLEL>( + space, &private_free_list, page); + } + free_list->Concatenate(&private_free_list); + } + return max_freed; } void MarkCompactCollector::SweepSpace(PagedSpace* space, SweeperType sweeper) { - space->set_was_swept_conservatively(sweeper == CONSERVATIVE || - sweeper == PARALLEL_CONSERVATIVE || - sweeper == CONCURRENT_CONSERVATIVE); + space->set_swept_precisely(sweeper == PRECISE || + sweeper == CONCURRENT_PRECISE || + sweeper == PARALLEL_PRECISE); space->ClearStats(); // We defensively initialize end_of_unswept_pages_ here with the first page @@ -4148,7 +4351,7 @@ void MarkCompactCollector::SweepSpace(PagedSpace* space, SweeperType sweeper) { while (it.has_next()) { Page* p = it.next(); - ASSERT(p->parallel_sweeping() == MemoryChunk::PARALLEL_SWEEPING_DONE); + DCHECK(p->parallel_sweeping() == MemoryChunk::SWEEPING_DONE); // Clear sweeping flags indicating that marking bits are still intact. p->ClearSweptPrecisely(); @@ -4157,7 +4360,7 @@ void MarkCompactCollector::SweepSpace(PagedSpace* space, SweeperType sweeper) { if (p->IsFlagSet(Page::RESCAN_ON_EVACUATION) || p->IsEvacuationCandidate()) { // Will be processed in EvacuateNewSpaceAndCandidates. 
- ASSERT(evacuation_candidates_.length() > 0); + DCHECK(evacuation_candidates_.length() > 0); continue; } @@ -4178,15 +4381,6 @@ void MarkCompactCollector::SweepSpace(PagedSpace* space, SweeperType sweeper) { } switch (sweeper) { - case CONSERVATIVE: { - if (FLAG_gc_verbose) { - PrintF("Sweeping 0x%" V8PRIxPTR " conservatively.\n", - reinterpret_cast<intptr_t>(p)); - } - SweepConservatively<SWEEP_SEQUENTIALLY>(space, NULL, p); - pages_swept++; - break; - } case CONCURRENT_CONSERVATIVE: case PARALLEL_CONSERVATIVE: { if (!parallel_sweeping_active) { @@ -4194,7 +4388,7 @@ void MarkCompactCollector::SweepSpace(PagedSpace* space, SweeperType sweeper) { PrintF("Sweeping 0x%" V8PRIxPTR " conservatively.\n", reinterpret_cast<intptr_t>(p)); } - SweepConservatively<SWEEP_SEQUENTIALLY>(space, NULL, p); + SweepConservatively<SWEEP_ON_MAIN_THREAD>(space, NULL, p); pages_swept++; parallel_sweeping_active = true; } else { @@ -4202,40 +4396,58 @@ void MarkCompactCollector::SweepSpace(PagedSpace* space, SweeperType sweeper) { PrintF("Sweeping 0x%" V8PRIxPTR " conservatively in parallel.\n", reinterpret_cast<intptr_t>(p)); } - p->set_parallel_sweeping(MemoryChunk::PARALLEL_SWEEPING_PENDING); + p->set_parallel_sweeping(MemoryChunk::SWEEPING_PENDING); space->IncreaseUnsweptFreeBytes(p); } space->set_end_of_unswept_pages(p); break; } + case CONCURRENT_PRECISE: + case PARALLEL_PRECISE: + if (!parallel_sweeping_active) { + if (FLAG_gc_verbose) { + PrintF("Sweeping 0x%" V8PRIxPTR " precisely.\n", + reinterpret_cast<intptr_t>(p)); + } + SweepPrecisely<SWEEP_ONLY, SWEEP_ON_MAIN_THREAD, IGNORE_SKIP_LIST, + IGNORE_FREE_SPACE>(space, NULL, p, NULL); + pages_swept++; + parallel_sweeping_active = true; + } else { + if (FLAG_gc_verbose) { + PrintF("Sweeping 0x%" V8PRIxPTR " precisely in parallel.\n", + reinterpret_cast<intptr_t>(p)); + } + p->set_parallel_sweeping(MemoryChunk::SWEEPING_PENDING); + space->IncreaseUnsweptFreeBytes(p); + } + space->set_end_of_unswept_pages(p); + break; case PRECISE: { if (FLAG_gc_verbose) { PrintF("Sweeping 0x%" V8PRIxPTR " precisely.\n", reinterpret_cast<intptr_t>(p)); } if (space->identity() == CODE_SPACE && FLAG_zap_code_space) { - SweepPrecisely<SWEEP_ONLY, REBUILD_SKIP_LIST, ZAP_FREE_SPACE>( - space, p, NULL); + SweepPrecisely<SWEEP_ONLY, SWEEP_ON_MAIN_THREAD, REBUILD_SKIP_LIST, + ZAP_FREE_SPACE>(space, NULL, p, NULL); } else if (space->identity() == CODE_SPACE) { - SweepPrecisely<SWEEP_ONLY, REBUILD_SKIP_LIST, IGNORE_FREE_SPACE>( - space, p, NULL); + SweepPrecisely<SWEEP_ONLY, SWEEP_ON_MAIN_THREAD, REBUILD_SKIP_LIST, + IGNORE_FREE_SPACE>(space, NULL, p, NULL); } else { - SweepPrecisely<SWEEP_ONLY, IGNORE_SKIP_LIST, IGNORE_FREE_SPACE>( - space, p, NULL); + SweepPrecisely<SWEEP_ONLY, SWEEP_ON_MAIN_THREAD, IGNORE_SKIP_LIST, + IGNORE_FREE_SPACE>(space, NULL, p, NULL); } pages_swept++; break; } - default: { - UNREACHABLE(); - } + default: { UNREACHABLE(); } } } if (FLAG_gc_verbose) { PrintF("SweepSpace: %s (%d pages swept)\n", - AllocationSpaceName(space->identity()), - pages_swept); + AllocationSpaceName(space->identity()), pages_swept); } // Give pages that are queued to be freed back to the OS.
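Scheduling in SweepSpace() above follows one policy for all parallel and concurrent modes: the first unswept page is swept synchronously so allocation can proceed immediately, and every later page is marked SWEEPING_PENDING for the sweeper threads. A minimal model of that split; Page and the state enum are stand-ins for V8's MemoryChunk machinery.

```cpp
#include <cstdio>
#include <vector>

enum SweepState { SWEEPING_DONE, SWEEPING_PENDING };

struct Page {
  int id;
  SweepState state = SWEEPING_DONE;
};

static void ScheduleSweeping(std::vector<Page>* pages) {
  bool parallel_sweeping_active = false;
  for (Page& p : *pages) {
    if (!parallel_sweeping_active) {
      // First page: sweep inline so the allocator has free space right away.
      std::printf("sweeping page %d on the main thread\n", p.id);
      parallel_sweeping_active = true;
    } else {
      p.state = SWEEPING_PENDING;  // picked up by sweeper threads later
      std::printf("queued page %d for concurrent sweeping\n", p.id);
    }
  }
}

int main() {
  std::vector<Page> pages = {{1}, {2}, {3}};
  ScheduleSweeping(&pages);
}
```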
@@ -4243,15 +4455,39 @@ void MarkCompactCollector::SweepSpace(PagedSpace* space, SweeperType sweeper) { } +static bool ShouldStartSweeperThreads(MarkCompactCollector::SweeperType type) { + return type == MarkCompactCollector::PARALLEL_CONSERVATIVE || + type == MarkCompactCollector::CONCURRENT_CONSERVATIVE || + type == MarkCompactCollector::PARALLEL_PRECISE || + type == MarkCompactCollector::CONCURRENT_PRECISE; +} + + +static bool ShouldWaitForSweeperThreads( + MarkCompactCollector::SweeperType type) { + return type == MarkCompactCollector::PARALLEL_CONSERVATIVE || + type == MarkCompactCollector::PARALLEL_PRECISE; +} + + void MarkCompactCollector::SweepSpaces() { - GCTracer::Scope gc_scope(tracer_, GCTracer::Scope::MC_SWEEP); + GCTracer::Scope gc_scope(heap()->tracer(), GCTracer::Scope::MC_SWEEP); + double start_time = 0.0; + if (FLAG_print_cumulative_gc_stat) { + start_time = base::OS::TimeCurrentMillis(); + } + #ifdef DEBUG state_ = SWEEP_SPACES; #endif - SweeperType how_to_sweep = CONSERVATIVE; - if (AreSweeperThreadsActivated()) { - if (FLAG_parallel_sweeping) how_to_sweep = PARALLEL_CONSERVATIVE; - if (FLAG_concurrent_sweeping) how_to_sweep = CONCURRENT_CONSERVATIVE; + SweeperType how_to_sweep = CONCURRENT_CONSERVATIVE; + if (FLAG_parallel_sweeping) how_to_sweep = PARALLEL_CONSERVATIVE; + if (FLAG_concurrent_sweeping) how_to_sweep = CONCURRENT_CONSERVATIVE; + if (FLAG_always_precise_sweeping && FLAG_parallel_sweeping) { + how_to_sweep = PARALLEL_PRECISE; + } + if (FLAG_always_precise_sweeping && FLAG_concurrent_sweeping) { + how_to_sweep = CONCURRENT_PRECISE; } if (sweep_precisely_) how_to_sweep = PRECISE; @@ -4262,39 +4498,59 @@ void MarkCompactCollector::SweepSpaces() { // the map space last because freeing non-live maps overwrites them and // the other spaces rely on possibly non-live maps to get the sizes for // non-live objects. - { GCTracer::Scope sweep_scope(tracer_, GCTracer::Scope::MC_SWEEP_OLDSPACE); - { SequentialSweepingScope scope(this); + { + GCTracer::Scope sweep_scope(heap()->tracer(), + GCTracer::Scope::MC_SWEEP_OLDSPACE); + { + SequentialSweepingScope scope(this); SweepSpace(heap()->old_pointer_space(), how_to_sweep); SweepSpace(heap()->old_data_space(), how_to_sweep); } - if (how_to_sweep == PARALLEL_CONSERVATIVE || - how_to_sweep == CONCURRENT_CONSERVATIVE) { + if (ShouldStartSweeperThreads(how_to_sweep)) { StartSweeperThreads(); } - if (how_to_sweep == PARALLEL_CONSERVATIVE) { - WaitUntilSweepingCompleted(); + if (ShouldWaitForSweeperThreads(how_to_sweep)) { + EnsureSweepingCompleted(); } } RemoveDeadInvalidatedCode(); - SweepSpace(heap()->code_space(), PRECISE); - SweepSpace(heap()->cell_space(), PRECISE); - SweepSpace(heap()->property_cell_space(), PRECISE); + { + GCTracer::Scope sweep_scope(heap()->tracer(), + GCTracer::Scope::MC_SWEEP_CODE); + SweepSpace(heap()->code_space(), PRECISE); + } + + { + GCTracer::Scope sweep_scope(heap()->tracer(), + GCTracer::Scope::MC_SWEEP_CELL); + SweepSpace(heap()->cell_space(), PRECISE); + SweepSpace(heap()->property_cell_space(), PRECISE); + } EvacuateNewSpaceAndCandidates(); // ClearNonLiveTransitions depends on precise sweeping of map space to // detect whether unmarked map became dead in this collection or in one // of the previous ones. - SweepSpace(heap()->map_space(), PRECISE); + { + GCTracer::Scope sweep_scope(heap()->tracer(), + GCTracer::Scope::MC_SWEEP_MAP); + SweepSpace(heap()->map_space(), PRECISE); + } // Deallocate unmarked objects and clear marked bits for marked objects. 
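// Sketch of the scoped timing pattern behind the new MC_SWEEP_CODE /
// MC_SWEEP_CELL / MC_SWEEP_MAP tracer scopes above: each space's sweep is
// wrapped in an object whose destructor charges the elapsed time to a
// per-phase bucket, so the phase breakdown falls out of normal scoping.
// The Tracer type here is a hypothetical stand-in for GCTracer.
#include <chrono>

struct Tracer {
  double buckets_ms[3] = {0, 0, 0};  // One bucket per sweep phase.
};

class ScopedPhase {
 public:
  ScopedPhase(Tracer* tracer, int bucket)
      : tracer_(tracer),
        bucket_(bucket),
        start_(std::chrono::steady_clock::now()) {}
  ~ScopedPhase() {
    std::chrono::duration<double, std::milli> elapsed =
        std::chrono::steady_clock::now() - start_;
    tracer_->buckets_ms[bucket_] += elapsed.count();  // Charge this phase.
  }

 private:
  Tracer* tracer_;
  int bucket_;
  std::chrono::steady_clock::time_point start_;
};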
heap_->lo_space()->FreeUnmarkedObjects(); // Deallocate evacuated candidate pages. ReleaseEvacuationCandidates(); + + if (FLAG_print_cumulative_gc_stat) { + heap_->tracer()->AddSweepingTime(base::OS::TimeCurrentMillis() - + start_time); + } } @@ -4302,11 +4558,15 @@ void MarkCompactCollector::ParallelSweepSpaceComplete(PagedSpace* space) { PageIterator it(space); while (it.has_next()) { Page* p = it.next(); - if (p->parallel_sweeping() == MemoryChunk::PARALLEL_SWEEPING_FINALIZE) { - p->set_parallel_sweeping(MemoryChunk::PARALLEL_SWEEPING_DONE); - p->MarkSweptConservatively(); + if (p->parallel_sweeping() == MemoryChunk::SWEEPING_FINALIZE) { + p->set_parallel_sweeping(MemoryChunk::SWEEPING_DONE); + if (space->swept_precisely()) { + p->MarkSweptPrecisely(); + } else { + p->MarkSweptConservatively(); + } } - ASSERT(p->parallel_sweeping() == MemoryChunk::PARALLEL_SWEEPING_DONE); + DCHECK(p->parallel_sweeping() == MemoryChunk::SWEEPING_DONE); } } @@ -4318,7 +4578,7 @@ void MarkCompactCollector::ParallelSweepSpacesComplete() { void MarkCompactCollector::EnableCodeFlushing(bool enable) { - if (isolate()->debug()->IsLoaded() || + if (isolate()->debug()->is_loaded() || isolate()->debug()->has_break_points()) { enable = false; } @@ -4344,20 +4604,13 @@ void MarkCompactCollector::EnableCodeFlushing(bool enable) { // code objects. We should either reenable it or change our tools. void MarkCompactCollector::ReportDeleteIfNeeded(HeapObject* obj, Isolate* isolate) { -#ifdef ENABLE_GDB_JIT_INTERFACE - if (obj->IsCode()) { - GDBJITInterface::RemoveCode(reinterpret_cast<Code*>(obj)); - } -#endif if (obj->IsCode()) { PROFILE(isolate, CodeDeleteEvent(obj->address())); } } -Isolate* MarkCompactCollector::isolate() const { - return heap_->isolate(); -} +Isolate* MarkCompactCollector::isolate() const { return heap_->isolate(); } void MarkCompactCollector::Initialize() { @@ -4372,10 +4625,8 @@ bool SlotsBuffer::IsTypedSlot(ObjectSlot slot) { bool SlotsBuffer::AddTo(SlotsBufferAllocator* allocator, - SlotsBuffer** buffer_address, - SlotType type, - Address addr, - AdditionMode mode) { + SlotsBuffer** buffer_address, SlotType type, + Address addr, AdditionMode mode) { SlotsBuffer* buffer = *buffer_address; if (buffer == NULL || !buffer->HasSpaceForTypedSlot()) { if (mode == FAIL_ON_OVERFLOW && ChainLengthThresholdReached(buffer)) { @@ -4385,7 +4636,7 @@ bool SlotsBuffer::AddTo(SlotsBufferAllocator* allocator, buffer = allocator->AllocateBuffer(buffer); *buffer_address = buffer; } - ASSERT(buffer->HasSpaceForTypedSlot()); + DCHECK(buffer->HasSpaceForTypedSlot()); buffer->Add(reinterpret_cast<ObjectSlot>(type)); buffer->Add(reinterpret_cast<ObjectSlot>(addr)); return true; @@ -4418,22 +4669,18 @@ void MarkCompactCollector::RecordRelocSlot(RelocInfo* rinfo, Object* target) { // This doesn't need to be typed since it is just a normal heap pointer. 
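// Sketch of ParallelSweepSpaceComplete above (stand-in types): pages the
// sweeper threads finished sit in SWEEPING_FINALIZE until the main thread
// moves them to SWEEPING_DONE, and the "swept" bookkeeping now depends on
// whether the owning space was swept precisely or conservatively.
enum StubState { kPending, kFinalize, kDone };

struct SweptPage {
  StubState state = kFinalize;
  bool marked_swept_precisely = false;
  bool marked_swept_conservatively = false;
};

void Finalize(SweptPage* p, bool space_swept_precisely) {
  if (p->state == kFinalize) {
    p->state = kDone;
    if (space_swept_precisely) {
      p->marked_swept_precisely = true;  // MarkSweptPrecisely()
    } else {
      p->marked_swept_conservatively = true;  // MarkSweptConservatively()
    }
  }
}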
Object** target_pointer = reinterpret_cast<Object**>(rinfo->constant_pool_entry_address()); - success = SlotsBuffer::AddTo(&slots_buffer_allocator_, - target_page->slots_buffer_address(), - target_pointer, - SlotsBuffer::FAIL_ON_OVERFLOW); + success = SlotsBuffer::AddTo( + &slots_buffer_allocator_, target_page->slots_buffer_address(), + target_pointer, SlotsBuffer::FAIL_ON_OVERFLOW); } else if (RelocInfo::IsCodeTarget(rmode) && rinfo->IsInConstantPool()) { - success = SlotsBuffer::AddTo(&slots_buffer_allocator_, - target_page->slots_buffer_address(), - SlotsBuffer::CODE_ENTRY_SLOT, - rinfo->constant_pool_entry_address(), - SlotsBuffer::FAIL_ON_OVERFLOW); + success = SlotsBuffer::AddTo( + &slots_buffer_allocator_, target_page->slots_buffer_address(), + SlotsBuffer::CODE_ENTRY_SLOT, rinfo->constant_pool_entry_address(), + SlotsBuffer::FAIL_ON_OVERFLOW); } else { - success = SlotsBuffer::AddTo(&slots_buffer_allocator_, - target_page->slots_buffer_address(), - SlotTypeForRMode(rmode), - rinfo->pc(), - SlotsBuffer::FAIL_ON_OVERFLOW); + success = SlotsBuffer::AddTo( + &slots_buffer_allocator_, target_page->slots_buffer_address(), + SlotTypeForRMode(rmode), rinfo->pc(), SlotsBuffer::FAIL_ON_OVERFLOW); } if (!success) { EvictEvacuationCandidate(target_page); @@ -4448,8 +4695,7 @@ void MarkCompactCollector::RecordCodeEntrySlot(Address slot, Code* target) { !ShouldSkipEvacuationSlotRecording(reinterpret_cast<Object**>(slot))) { if (!SlotsBuffer::AddTo(&slots_buffer_allocator_, target_page->slots_buffer_address(), - SlotsBuffer::CODE_ENTRY_SLOT, - slot, + SlotsBuffer::CODE_ENTRY_SLOT, slot, SlotsBuffer::FAIL_ON_OVERFLOW)) { EvictEvacuationCandidate(target_page); } @@ -4458,10 +4704,11 @@ void MarkCompactCollector::RecordCodeEntrySlot(Address slot, Code* target) { void MarkCompactCollector::RecordCodeTargetPatch(Address pc, Code* target) { - ASSERT(heap()->gc_state() == Heap::MARK_COMPACT); + DCHECK(heap()->gc_state() == Heap::MARK_COMPACT); if (is_compacting()) { - Code* host = isolate()->inner_pointer_to_code_cache()-> - GcSafeFindCodeForInnerPointer(pc); + Code* host = + isolate()->inner_pointer_to_code_cache()->GcSafeFindCodeForInnerPointer( + pc); MarkBit mark_bit = Marking::MarkBitFrom(host); if (Marking::IsBlack(mark_bit)) { RelocInfo rinfo(pc, RelocInfo::CODE_TARGET, 0, host); @@ -4486,10 +4733,8 @@ void SlotsBuffer::UpdateSlots(Heap* heap) { PointersUpdatingVisitor::UpdateSlot(heap, slot); } else { ++slot_idx; - ASSERT(slot_idx < idx_); - UpdateSlot(heap->isolate(), - &v, - DecodeSlotType(slot), + DCHECK(slot_idx < idx_); + UpdateSlot(heap->isolate(), &v, DecodeSlotType(slot), reinterpret_cast<Address>(slots_[slot_idx])); } } @@ -4507,12 +4752,10 @@ void SlotsBuffer::UpdateSlotsWithFilter(Heap* heap) { } } else { ++slot_idx; - ASSERT(slot_idx < idx_); + DCHECK(slot_idx < idx_); Address pc = reinterpret_cast<Address>(slots_[slot_idx]); if (!IsOnInvalidatedCodeObject(pc)) { - UpdateSlot(heap->isolate(), - &v, - DecodeSlotType(slot), + UpdateSlot(heap->isolate(), &v, DecodeSlotType(slot), reinterpret_cast<Address>(slots_[slot_idx])); } } @@ -4539,6 +4782,5 @@ void SlotsBufferAllocator::DeallocateChain(SlotsBuffer** buffer_address) { } *buffer_address = NULL; } - - -} } // namespace v8::internal +} +} // namespace v8::internal diff --git a/deps/v8/src/mark-compact.h b/deps/v8/src/heap/mark-compact.h similarity index 83% rename from deps/v8/src/mark-compact.h rename to deps/v8/src/heap/mark-compact.h index 254f2589c99..a32c16b6f27 100644 --- a/deps/v8/src/mark-compact.h +++ 
b/deps/v8/src/heap/mark-compact.h @@ -2,11 +2,11 @@ // Use of this source code is governed by a BSD-style license that can be // found in the LICENSE file. -#ifndef V8_MARK_COMPACT_H_ -#define V8_MARK_COMPACT_H_ +#ifndef V8_HEAP_MARK_COMPACT_H_ +#define V8_HEAP_MARK_COMPACT_H_ -#include "compiler-intrinsics.h" -#include "spaces.h" +#include "src/compiler-intrinsics.h" +#include "src/heap/spaces.h" namespace v8 { namespace internal { @@ -18,7 +18,6 @@ typedef bool (*IsAliveFunction)(HeapObject* obj, int* size, int* offset); // Forward declarations. class CodeFlusher; -class GCTracer; class MarkCompactCollector; class MarkingVisitor; class RootMarkingVisitor; @@ -26,9 +25,7 @@ class RootMarkingVisitor; class Marking { public: - explicit Marking(Heap* heap) - : heap_(heap) { - } + explicit Marking(Heap* heap) : heap_(heap) {} INLINE(static MarkBit MarkBitFrom(Address addr)); @@ -50,9 +47,7 @@ class Marking { // White markbits: 00 - this is required by the mark bit clearer. static const char* kWhiteBitPattern; - INLINE(static bool IsWhite(MarkBit mark_bit)) { - return !mark_bit.Get(); - } + INLINE(static bool IsWhite(MarkBit mark_bit)) { return !mark_bit.Get(); } // Grey markbits: 11 static const char* kGreyBitPattern; @@ -65,18 +60,14 @@ class Marking { mark_bit.Next().Clear(); } - INLINE(static void BlackToGrey(MarkBit markbit)) { - markbit.Next().Set(); - } + INLINE(static void BlackToGrey(MarkBit markbit)) { markbit.Next().Set(); } INLINE(static void WhiteToGrey(MarkBit markbit)) { markbit.Set(); markbit.Next().Set(); } - INLINE(static void GreyToBlack(MarkBit markbit)) { - markbit.Next().Clear(); - } + INLINE(static void GreyToBlack(MarkBit markbit)) { markbit.Next().Clear(); } INLINE(static void BlackToGrey(HeapObject* obj)) { BlackToGrey(MarkBitFrom(obj)); @@ -99,10 +90,14 @@ class Marking { static const char* ColorName(ObjectColor color) { switch (color) { - case BLACK_OBJECT: return "black"; - case WHITE_OBJECT: return "white"; - case GREY_OBJECT: return "grey"; - case IMPOSSIBLE_COLOR: return "impossible"; + case BLACK_OBJECT: + return "black"; + case WHITE_OBJECT: + return "white"; + case GREY_OBJECT: + return "grey"; + case IMPOSSIBLE_COLOR: + return "impossible"; } return "error"; } @@ -121,8 +116,7 @@ class Marking { #endif // Returns true if the transferred color is black. - INLINE(static bool TransferColor(HeapObject* from, - HeapObject* to)) { + INLINE(static bool TransferColor(HeapObject* from, HeapObject* to)) { MarkBit from_mark_bit = MarkBitFrom(from); MarkBit to_mark_bit = MarkBitFrom(to); bool is_black = false; @@ -146,7 +140,7 @@ class Marking { class MarkingDeque { public: MarkingDeque() - : array_(NULL), top_(0), bottom_(0), mask_(0), overflowed_(false) { } + : array_(NULL), top_(0), bottom_(0), mask_(0), overflowed_(false) {} void Initialize(Address low, Address high) { HeapObject** obj_low = reinterpret_cast<HeapObject**>(low); @@ -171,7 +165,7 @@ class MarkingDeque { // otherwise mark the object as overflowed and wait for a rescan of the // heap. 
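// Sketch of the two-bit color encoding in the Marking class above: white is
// 00, black 10, grey 11, and each transition flips a single bit
// (BlackToGrey sets the second bit, GreyToBlack clears it), which is what
// keeps the incremental transitions cheap. TwoBitMark is a simplified
// stand-in for a MarkBit pair.
struct TwoBitMark {
  bool mark = false;  // First bit.
  bool next = false;  // Second bit.

  bool IsWhite() const { return !mark; }          // 00
  bool IsBlack() const { return mark && !next; }  // 10
  bool IsGrey() const { return mark && next; }    // 11

  void WhiteToGrey() { mark = true; next = true; }
  void GreyToBlack() { next = false; }
  void BlackToGrey() { next = true; }
  void MarkBlack() { mark = true; next = false; }
};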
INLINE(void PushBlack(HeapObject* object)) { - ASSERT(object->IsHeapObject()); + DCHECK(object->IsHeapObject()); if (IsFull()) { Marking::BlackToGrey(object); MemoryChunk::IncrementLiveBytesFromGC(object->address(), -object->Size()); @@ -183,7 +177,7 @@ class MarkingDeque { } INLINE(void PushGrey(HeapObject* object)) { - ASSERT(object->IsHeapObject()); + DCHECK(object->IsHeapObject()); if (IsFull()) { SetOverflowed(); } else { @@ -193,15 +187,15 @@ class MarkingDeque { } INLINE(HeapObject* Pop()) { - ASSERT(!IsEmpty()); + DCHECK(!IsEmpty()); top_ = ((top_ - 1) & mask_); HeapObject* object = array_[top_]; - ASSERT(object->IsHeapObject()); + DCHECK(object->IsHeapObject()); return object; } INLINE(void UnshiftGrey(HeapObject* object)) { - ASSERT(object->IsHeapObject()); + DCHECK(object->IsHeapObject()); if (IsFull()) { SetOverflowed(); } else { @@ -262,11 +256,10 @@ class SlotsBuffer { } } - ~SlotsBuffer() { - } + ~SlotsBuffer() {} void Add(ObjectSlot slot) { - ASSERT(0 <= idx_ && idx_ < kNumberOfElements); + DCHECK(0 <= idx_ && idx_ < kNumberOfElements); slots_[idx_++] = slot; } @@ -312,16 +305,11 @@ class SlotsBuffer { (buffer->chain_length_ - 1) * kNumberOfElements); } - inline bool IsFull() { - return idx_ == kNumberOfElements; - } + inline bool IsFull() { return idx_ == kNumberOfElements; } - inline bool HasSpaceForTypedSlot() { - return idx_ < kNumberOfElements - 1; - } + inline bool HasSpaceForTypedSlot() { return idx_ < kNumberOfElements - 1; } - static void UpdateSlotsRecordedIn(Heap* heap, - SlotsBuffer* buffer, + static void UpdateSlotsRecordedIn(Heap* heap, SlotsBuffer* buffer, bool code_slots_filtering_required) { while (buffer != NULL) { if (code_slots_filtering_required) { @@ -333,18 +321,14 @@ class SlotsBuffer { } } - enum AdditionMode { - FAIL_ON_OVERFLOW, - IGNORE_OVERFLOW - }; + enum AdditionMode { FAIL_ON_OVERFLOW, IGNORE_OVERFLOW }; static bool ChainLengthThresholdReached(SlotsBuffer* buffer) { return buffer != NULL && buffer->chain_length_ >= kChainLengthThreshold; } INLINE(static bool AddTo(SlotsBufferAllocator* allocator, - SlotsBuffer** buffer_address, - ObjectSlot slot, + SlotsBuffer** buffer_address, ObjectSlot slot, AdditionMode mode)) { SlotsBuffer* buffer = *buffer_address; if (buffer == NULL || buffer->IsFull()) { @@ -362,9 +346,7 @@ class SlotsBuffer { static bool IsTypedSlot(ObjectSlot slot); static bool AddTo(SlotsBufferAllocator* allocator, - SlotsBuffer** buffer_address, - SlotType type, - Address addr, + SlotsBuffer** buffer_address, SlotType type, Address addr, AdditionMode mode); static const int kNumberOfElements = 1021; @@ -405,7 +387,7 @@ class CodeFlusher { } void AddCandidate(JSFunction* function) { - ASSERT(function->code() == function->shared()->code()); + DCHECK(function->code() == function->shared()->code()); if (GetNextCandidate(function)->IsUndefined()) { SetNextCandidate(function, jsfunction_candidates_head_); jsfunction_candidates_head_ = function; @@ -461,7 +443,7 @@ class CodeFlusher { } static void ClearNextCandidate(JSFunction* candidate, Object* undefined) { - ASSERT(undefined->IsUndefined()); + DCHECK(undefined->IsUndefined()); candidate->set_next_function_link(undefined, SKIP_WRITE_BARRIER); } @@ -528,24 +510,17 @@ class MarkCompactCollector { // Prepares for GC by resetting relocation info in old and map spaces and // choosing spaces to compact. - void Prepare(GCTracer* tracer); + void Prepare(); // Performs a global garbage collection. 
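// Sketch of the MarkingDeque overflow behaviour above (simplified
// stand-ins): the deque is a fixed-size ring, and a push into a full deque
// does not grow it; the object is instead left for a later rescan of the
// heap, signalled through the overflowed flag. kCapacity is hypothetical
// and must be a power of two so the mask arithmetic works.
struct RingDeque {
  static const int kCapacity = 8;
  void* array[kCapacity];
  int top = 0;
  int bottom = 0;
  bool overflowed = false;

  bool IsFull() const { return ((top + 1) & (kCapacity - 1)) == bottom; }
  bool IsEmpty() const { return top == bottom; }

  void Push(void* object) {
    if (IsFull()) {
      overflowed = true;  // Caller must rescan the heap later.
    } else {
      array[top] = object;
      top = (top + 1) & (kCapacity - 1);
    }
  }

  void* Pop() {  // Precondition: !IsEmpty(), as the DCHECK above enforces.
    top = (top - 1) & (kCapacity - 1);
    return array[top];
  }
};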
void CollectGarbage(); - enum CompactionMode { - INCREMENTAL_COMPACTION, - NON_INCREMENTAL_COMPACTION - }; + enum CompactionMode { INCREMENTAL_COMPACTION, NON_INCREMENTAL_COMPACTION }; bool StartCompaction(CompactionMode mode); void AbortCompaction(); - // During a full GC, there is a stack-allocated GCTracer that is used for - // bookkeeping information. Return a pointer to that tracer. - GCTracer* tracer() { return tracer_; } - #ifdef DEBUG // Checks whether performing mark-compact collection. bool in_use() { return state_ > PREPARE_GC; } @@ -570,16 +545,14 @@ class MarkCompactCollector { void EnableCodeFlushing(bool enable); enum SweeperType { - CONSERVATIVE, PARALLEL_CONSERVATIVE, CONCURRENT_CONSERVATIVE, + PARALLEL_PRECISE, + CONCURRENT_PRECISE, PRECISE }; - enum SweepingParallelism { - SWEEP_SEQUENTIALLY, - SWEEP_IN_PARALLEL - }; + enum SweepingParallelism { SWEEP_ON_MAIN_THREAD, SWEEP_IN_PARALLEL }; #ifdef VERIFY_HEAP void VerifyMarkbitsAreClean(); @@ -590,25 +563,24 @@ class MarkCompactCollector { #endif // Sweep a single page from the given space conservatively. - // Return a number of reclaimed bytes. - template<SweepingParallelism type> - static intptr_t SweepConservatively(PagedSpace* space, - FreeList* free_list, - Page* p); + // Returns the size of the biggest continuous freed memory chunk in bytes. + template <SweepingParallelism type> + static int SweepConservatively(PagedSpace* space, FreeList* free_list, + Page* p); INLINE(static bool ShouldSkipEvacuationSlotRecording(Object** anchor)) { - return Page::FromAddress(reinterpret_cast<Address>(anchor))-> - ShouldSkipEvacuationSlotRecording(); + return Page::FromAddress(reinterpret_cast<Address>(anchor)) + ->ShouldSkipEvacuationSlotRecording(); } INLINE(static bool ShouldSkipEvacuationSlotRecording(Object* host)) { - return Page::FromAddress(reinterpret_cast<Address>(host))-> - ShouldSkipEvacuationSlotRecording(); + return Page::FromAddress(reinterpret_cast<Address>(host)) + ->ShouldSkipEvacuationSlotRecording(); } INLINE(static bool IsOnEvacuationCandidate(Object* obj)) { - return Page::FromAddress(reinterpret_cast<Address>(obj))-> - IsEvacuationCandidate(); + return Page::FromAddress(reinterpret_cast<Address>(obj)) + ->IsEvacuationCandidate(); } INLINE(void EvictEvacuationCandidate(Page* page)) { @@ -636,26 +608,15 @@ class MarkCompactCollector { void RecordCodeEntrySlot(Address slot, Code* target); void RecordCodeTargetPatch(Address pc, Code* target); - INLINE(void RecordSlot(Object** anchor_slot, - Object** slot, - Object* object, - SlotsBuffer::AdditionMode mode = - SlotsBuffer::FAIL_ON_OVERFLOW)); + INLINE(void RecordSlot( + Object** anchor_slot, Object** slot, Object* object, + SlotsBuffer::AdditionMode mode = SlotsBuffer::FAIL_ON_OVERFLOW)); - void MigrateObject(HeapObject* dst, - HeapObject* src, - int size, + void MigrateObject(HeapObject* dst, HeapObject* src, int size, AllocationSpace to_old_space); bool TryPromoteObject(HeapObject* object, int object_size); - inline Object* encountered_weak_collections() { - return encountered_weak_collections_; - } - inline void set_encountered_weak_collections(Object* weak_collection) { - encountered_weak_collections_ = weak_collection; - } - void InvalidateCode(Code* code); void ClearMarkbits(); @@ -666,26 +627,36 @@ class MarkCompactCollector { MarkingParity marking_parity() { return marking_parity_; } - // Concurrent and parallel sweeping support. - void SweepInParallel(PagedSpace* space); + // Concurrent and parallel sweeping support. 
If required_freed_bytes was set + // to a value larger than 0, then sweeping returns after a block of at least + // required_freed_bytes was freed. If required_freed_bytes was set to zero + // then the whole given space is swept. It returns the size of the maximum + // continuous freed memory chunk. + int SweepInParallel(PagedSpace* space, int required_freed_bytes); - void WaitUntilSweepingCompleted(); + // Sweeps a given page concurrently to the sweeper threads. It returns the + // size of the maximum continuous freed memory chunk. + int SweepInParallel(Page* page, PagedSpace* space); + void EnsureSweepingCompleted(); + + // If sweeper threads are not active this method will return true. If + // this is a latency issue we should be smarter here. Otherwise, it will + // return true if the sweeper threads are done processing the pages. bool IsSweepingCompleted(); void RefillFreeList(PagedSpace* space); bool AreSweeperThreadsActivated(); - bool IsConcurrentSweepingInProgress(); + // Checks if sweeping is in progress right now on any space. + bool sweeping_in_progress() { return sweeping_in_progress_; } void set_sequential_sweeping(bool sequential_sweeping) { sequential_sweeping_ = sequential_sweeping; } - bool sequential_sweeping() const { - return sequential_sweeping_; - } + bool sequential_sweeping() const { return sequential_sweeping_; } // Mark the global table which maps weak objects to dependent code without // marking its contents. @@ -740,16 +711,12 @@ class MarkCompactCollector { bool was_marked_incrementally_; // True if concurrent or parallel sweeping is currently in progress. - bool sweeping_pending_; + bool sweeping_in_progress_; - Semaphore pending_sweeper_jobs_semaphore_; + base::Semaphore pending_sweeper_jobs_semaphore_; bool sequential_sweeping_; - // A pointer to the current stack-allocated GC tracer object during a full - // collection (NULL before and after). - GCTracer* tracer_; - SlotsBufferAllocator slots_buffer_allocator_; SlotsBuffer* migration_slots_buffer_; @@ -844,6 +811,11 @@ class MarkCompactCollector { void ClearNonLiveReferences(); void ClearNonLivePrototypeTransitions(Map* map); void ClearNonLiveMapTransitions(Map* map, MarkBit map_mark); + void ClearMapTransitions(Map* map); + bool ClearMapBackPointer(Map* map); + void TrimDescriptorArray(Map* map, DescriptorArray* descriptors, + int number_of_own_descriptors); + void TrimEnumCache(Map* map, DescriptorArray* descriptors); void ClearDependentCode(DependentCode* dependent_code); void ClearDependentICList(Object* head); @@ -851,12 +823,6 @@ class MarkCompactCollector { int ClearNonLiveDependentCodeInGroup(DependentCode* dependent_code, int group, int start, int end, int new_start); - // Marking detaches initial maps from SharedFunctionInfo objects - // to make this reference weak. We need to reattach initial maps - // back after collection. This is either done during - // ClearNonLiveTransitions pass or by calling this function. - void ReattachInitialMaps(); - // Mark all values associated with reachable keys in weak collections // encountered so far. This might push new object or even new weak maps onto // the marking stack. @@ -867,6 +833,10 @@ class MarkCompactCollector { // The linked list of all encountered weak maps is destroyed. void ClearWeakCollections(); + // We have to remove all encountered weak maps from the list of weak + // collections when incremental marking is aborted. 
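// Sketch of what the AbortWeakCollections() declaration below has to do
// (stand-in types): encountered weak collections form an intrusive singly
// linked list through their next slot, so aborting incremental marking
// walks the list, resets every next slot to the "not enqueued" marker, and
// clears the head so a later GC rebuilds the list from scratch. nullptr
// stands in for the undefined "not enqueued" marker here.
struct WeakCollectionStub {
  WeakCollectionStub* next;
};

void AbortEncounteredList(WeakCollectionStub** head) {
  WeakCollectionStub* current = *head;
  while (current != nullptr) {
    WeakCollectionStub* following = current->next;
    current->next = nullptr;  // Collection is no longer enqueued.
    current = following;
  }
  *head = nullptr;
}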
+ void AbortWeakCollections(); + // ----------------------------------------------------------------------- // Phase 2: Sweeping to clear mark bits and free non-live objects for // a non-compacting collection. @@ -883,8 +853,8 @@ class MarkCompactCollector { // regions to each space's free list. void SweepSpaces(); - int DiscoverAndPromoteBlackObjectsOnPage(NewSpace* new_space, - NewSpacePage* p); + int DiscoverAndEvacuateBlackObjectsOnPage(NewSpace* new_space, + NewSpacePage* p); void EvacuateNewSpace(); @@ -908,6 +878,9 @@ class MarkCompactCollector { void ParallelSweepSpaceComplete(PagedSpace* space); + // Updates store buffer and slot buffer for a pointer in a migrating object. + void RecordMigratedSlot(Object* value, Address slot); + #ifdef DEBUG friend class MarkObjectVisitor; static void VisitObject(HeapObject* obj); @@ -919,7 +892,6 @@ class MarkCompactCollector { Heap* heap_; MarkingDeque marking_deque_; CodeFlusher* code_flusher_; - Object* encountered_weak_collections_; bool have_code_to_deoptimize_; List<Page*> evacuation_candidates_; @@ -934,15 +906,12 @@ class MarkCompactCollector { class MarkBitCellIterator BASE_EMBEDDED { public: - explicit MarkBitCellIterator(MemoryChunk* chunk) - : chunk_(chunk) { - last_cell_index_ = Bitmap::IndexToCell( - Bitmap::CellAlignIndex( - chunk_->AddressToMarkbitIndex(chunk_->area_end()))); + explicit MarkBitCellIterator(MemoryChunk* chunk) : chunk_(chunk) { + last_cell_index_ = Bitmap::IndexToCell(Bitmap::CellAlignIndex( + chunk_->AddressToMarkbitIndex(chunk_->area_end()))); cell_base_ = chunk_->area_start(); cell_index_ = Bitmap::IndexToCell( - Bitmap::CellAlignIndex( - chunk_->AddressToMarkbitIndex(cell_base_))); + Bitmap::CellAlignIndex(chunk_->AddressToMarkbitIndex(cell_base_))); cells_ = chunk_->markbits()->cells(); } @@ -951,14 +920,14 @@ class MarkBitCellIterator BASE_EMBEDDED { inline bool HasNext() { return cell_index_ < last_cell_index_ - 1; } inline MarkBit::CellType* CurrentCell() { - ASSERT(cell_index_ == Bitmap::IndexToCell(Bitmap::CellAlignIndex( - chunk_->AddressToMarkbitIndex(cell_base_)))); + DCHECK(cell_index_ == Bitmap::IndexToCell(Bitmap::CellAlignIndex( + chunk_->AddressToMarkbitIndex(cell_base_)))); return &cells_[cell_index_]; } inline Address CurrentCellBase() { - ASSERT(cell_index_ == Bitmap::IndexToCell(Bitmap::CellAlignIndex( - chunk_->AddressToMarkbitIndex(cell_base_)))); + DCHECK(cell_index_ == Bitmap::IndexToCell(Bitmap::CellAlignIndex( + chunk_->AddressToMarkbitIndex(cell_base_)))); return cell_base_; } @@ -978,14 +947,12 @@ class MarkBitCellIterator BASE_EMBEDDED { class SequentialSweepingScope BASE_EMBEDDED { public: - explicit SequentialSweepingScope(MarkCompactCollector *collector) : - collector_(collector) { + explicit SequentialSweepingScope(MarkCompactCollector* collector) + : collector_(collector) { collector_->set_sequential_sweeping(true); } - ~SequentialSweepingScope() { - collector_->set_sequential_sweeping(false); - } + ~SequentialSweepingScope() { collector_->set_sequential_sweeping(false); } private: MarkCompactCollector* collector_; @@ -993,7 +960,7 @@ class SequentialSweepingScope BASE_EMBEDDED { const char* AllocationSpaceName(AllocationSpace space); +} +} // namespace v8::internal -} } // namespace v8::internal - -#endif // V8_MARK_COMPACT_H_ +#endif // V8_HEAP_MARK_COMPACT_H_ diff --git a/deps/v8/src/objects-visiting-inl.h b/deps/v8/src/heap/objects-visiting-inl.h similarity index 69% rename from deps/v8/src/objects-visiting-inl.h rename to deps/v8/src/heap/objects-visiting-inl.h index 
bb2e99243a1..8846d27bceb 100644 --- a/deps/v8/src/objects-visiting-inl.h +++ b/deps/v8/src/heap/objects-visiting-inl.h @@ -9,48 +9,43 @@ namespace v8 { namespace internal { -template<typename StaticVisitor> +template <typename StaticVisitor> void StaticNewSpaceVisitor<StaticVisitor>::Initialize() { - table_.Register(kVisitShortcutCandidate, - &FixedBodyVisitor<StaticVisitor, - ConsString::BodyDescriptor, - int>::Visit); + table_.Register( + kVisitShortcutCandidate, + &FixedBodyVisitor<StaticVisitor, ConsString::BodyDescriptor, int>::Visit); - table_.Register(kVisitConsString, - &FixedBodyVisitor<StaticVisitor, - ConsString::BodyDescriptor, - int>::Visit); + table_.Register( + kVisitConsString, + &FixedBodyVisitor<StaticVisitor, ConsString::BodyDescriptor, int>::Visit); table_.Register(kVisitSlicedString, - &FixedBodyVisitor<StaticVisitor, - SlicedString::BodyDescriptor, - int>::Visit); + &FixedBodyVisitor<StaticVisitor, SlicedString::BodyDescriptor, + int>::Visit); - table_.Register(kVisitSymbol, - &FixedBodyVisitor<StaticVisitor, - Symbol::BodyDescriptor, - int>::Visit); + table_.Register( + kVisitSymbol, + &FixedBodyVisitor<StaticVisitor, Symbol::BodyDescriptor, int>::Visit); table_.Register(kVisitFixedArray, &FlexibleBodyVisitor<StaticVisitor, - FixedArray::BodyDescriptor, - int>::Visit); + FixedArray::BodyDescriptor, int>::Visit); table_.Register(kVisitFixedDoubleArray, &VisitFixedDoubleArray); table_.Register(kVisitFixedTypedArray, &VisitFixedTypedArray); table_.Register(kVisitFixedFloat64Array, &VisitFixedTypedArray); - table_.Register(kVisitNativeContext, - &FixedBodyVisitor<StaticVisitor, - Context::ScavengeBodyDescriptor, - int>::Visit); + table_.Register( + kVisitNativeContext, + &FixedBodyVisitor<StaticVisitor, Context::ScavengeBodyDescriptor, + int>::Visit); table_.Register(kVisitByteArray, &VisitByteArray); - table_.Register(kVisitSharedFunctionInfo, - &FixedBodyVisitor<StaticVisitor, - SharedFunctionInfo::BodyDescriptor, - int>::Visit); + table_.Register( + kVisitSharedFunctionInfo, + &FixedBodyVisitor<StaticVisitor, SharedFunctionInfo::BodyDescriptor, + int>::Visit); table_.Register(kVisitSeqOneByteString, &VisitSeqOneByteString); @@ -66,47 +61,39 @@ void StaticNewSpaceVisitor<StaticVisitor>::Initialize() { table_.Register(kVisitFreeSpace, &VisitFreeSpace); - table_.Register(kVisitJSWeakMap, &JSObjectVisitor::Visit); - - table_.Register(kVisitJSWeakSet, &JSObjectVisitor::Visit); + table_.Register(kVisitJSWeakCollection, &JSObjectVisitor::Visit); table_.Register(kVisitJSRegExp, &JSObjectVisitor::Visit); - table_.template RegisterSpecializations<DataObjectVisitor, - kVisitDataObject, + table_.template RegisterSpecializations<DataObjectVisitor, kVisitDataObject, kVisitDataObjectGeneric>(); - table_.template RegisterSpecializations<JSObjectVisitor, - kVisitJSObject, + table_.template RegisterSpecializations<JSObjectVisitor, kVisitJSObject, kVisitJSObjectGeneric>(); - table_.template RegisterSpecializations<StructVisitor, - kVisitStruct, + table_.template RegisterSpecializations<StructVisitor, kVisitStruct, kVisitStructGeneric>(); } -template<typename StaticVisitor> +template <typename StaticVisitor> int StaticNewSpaceVisitor<StaticVisitor>::VisitJSArrayBuffer( Map* map, HeapObject* object) { Heap* heap = map->GetHeap(); - STATIC_ASSERT( - JSArrayBuffer::kWeakFirstViewOffset == - JSArrayBuffer::kWeakNextOffset + kPointerSize); - VisitPointers( - heap, - HeapObject::RawField(object, JSArrayBuffer::BodyDescriptor::kStartOffset), - HeapObject::RawField(object, 
JSArrayBuffer::kWeakNextOffset)); + STATIC_ASSERT(JSArrayBuffer::kWeakFirstViewOffset == + JSArrayBuffer::kWeakNextOffset + kPointerSize); + VisitPointers(heap, HeapObject::RawField( + object, JSArrayBuffer::BodyDescriptor::kStartOffset), + HeapObject::RawField(object, JSArrayBuffer::kWeakNextOffset)); VisitPointers( - heap, - HeapObject::RawField(object, - JSArrayBuffer::kWeakNextOffset + 2 * kPointerSize), + heap, HeapObject::RawField( + object, JSArrayBuffer::kWeakNextOffset + 2 * kPointerSize), HeapObject::RawField(object, JSArrayBuffer::kSizeWithInternalFields)); return JSArrayBuffer::kSizeWithInternalFields; } -template<typename StaticVisitor> +template <typename StaticVisitor> int StaticNewSpaceVisitor<StaticVisitor>::VisitJSTypedArray( Map* map, HeapObject* object) { VisitPointers( @@ -114,51 +101,45 @@ int StaticNewSpaceVisitor<StaticVisitor>::VisitJSTypedArray( HeapObject::RawField(object, JSTypedArray::BodyDescriptor::kStartOffset), HeapObject::RawField(object, JSTypedArray::kWeakNextOffset)); VisitPointers( - map->GetHeap(), - HeapObject::RawField(object, - JSTypedArray::kWeakNextOffset + kPointerSize), + map->GetHeap(), HeapObject::RawField( + object, JSTypedArray::kWeakNextOffset + kPointerSize), HeapObject::RawField(object, JSTypedArray::kSizeWithInternalFields)); return JSTypedArray::kSizeWithInternalFields; } -template<typename StaticVisitor> -int StaticNewSpaceVisitor<StaticVisitor>::VisitJSDataView( - Map* map, HeapObject* object) { +template <typename StaticVisitor> +int StaticNewSpaceVisitor<StaticVisitor>::VisitJSDataView(Map* map, + HeapObject* object) { VisitPointers( map->GetHeap(), HeapObject::RawField(object, JSDataView::BodyDescriptor::kStartOffset), HeapObject::RawField(object, JSDataView::kWeakNextOffset)); VisitPointers( map->GetHeap(), - HeapObject::RawField(object, - JSDataView::kWeakNextOffset + kPointerSize), + HeapObject::RawField(object, JSDataView::kWeakNextOffset + kPointerSize), HeapObject::RawField(object, JSDataView::kSizeWithInternalFields)); return JSDataView::kSizeWithInternalFields; } -template<typename StaticVisitor> +template <typename StaticVisitor> void StaticMarkingVisitor<StaticVisitor>::Initialize() { table_.Register(kVisitShortcutCandidate, - &FixedBodyVisitor<StaticVisitor, - ConsString::BodyDescriptor, - void>::Visit); + &FixedBodyVisitor<StaticVisitor, ConsString::BodyDescriptor, + void>::Visit); table_.Register(kVisitConsString, - &FixedBodyVisitor<StaticVisitor, - ConsString::BodyDescriptor, - void>::Visit); + &FixedBodyVisitor<StaticVisitor, ConsString::BodyDescriptor, + void>::Visit); table_.Register(kVisitSlicedString, - &FixedBodyVisitor<StaticVisitor, - SlicedString::BodyDescriptor, - void>::Visit); + &FixedBodyVisitor<StaticVisitor, SlicedString::BodyDescriptor, + void>::Visit); - table_.Register(kVisitSymbol, - &FixedBodyVisitor<StaticVisitor, - Symbol::BodyDescriptor, - void>::Visit); + table_.Register( + kVisitSymbol, + &FixedBodyVisitor<StaticVisitor, Symbol::BodyDescriptor, void>::Visit); table_.Register(kVisitFixedArray, &FixedArrayVisitor::Visit); @@ -182,14 +163,11 @@ void StaticMarkingVisitor<StaticVisitor>::Initialize() { table_.Register(kVisitSeqTwoByteString, &DataObjectVisitor::Visit); - table_.Register(kVisitJSWeakMap, &StaticVisitor::VisitWeakCollection); + table_.Register(kVisitJSWeakCollection, &VisitWeakCollection); - table_.Register(kVisitJSWeakSet, &StaticVisitor::VisitWeakCollection); - - table_.Register(kVisitOddball, - &FixedBodyVisitor<StaticVisitor, - Oddball::BodyDescriptor, - void>::Visit); + 
table_.Register( + kVisitOddball, + &FixedBodyVisitor<StaticVisitor, Oddball::BodyDescriptor, void>::Visit); table_.Register(kVisitMap, &VisitMap); @@ -207,28 +185,24 @@ void StaticMarkingVisitor<StaticVisitor>::Initialize() { // Registration for kVisitJSRegExp is done by StaticVisitor. - table_.Register(kVisitCell, - &FixedBodyVisitor<StaticVisitor, - Cell::BodyDescriptor, - void>::Visit); + table_.Register( + kVisitCell, + &FixedBodyVisitor<StaticVisitor, Cell::BodyDescriptor, void>::Visit); table_.Register(kVisitPropertyCell, &VisitPropertyCell); - table_.template RegisterSpecializations<DataObjectVisitor, - kVisitDataObject, + table_.template RegisterSpecializations<DataObjectVisitor, kVisitDataObject, kVisitDataObjectGeneric>(); - table_.template RegisterSpecializations<JSObjectVisitor, - kVisitJSObject, + table_.template RegisterSpecializations<JSObjectVisitor, kVisitJSObject, kVisitJSObjectGeneric>(); - table_.template RegisterSpecializations<StructObjectVisitor, - kVisitStruct, + table_.template RegisterSpecializations<StructObjectVisitor, kVisitStruct, kVisitStructGeneric>(); } -template<typename StaticVisitor> +template <typename StaticVisitor> void StaticMarkingVisitor<StaticVisitor>::VisitCodeEntry( Heap* heap, Address entry_address) { Code* code = Code::cast(Code::GetObjectFromEntryAddress(entry_address)); @@ -237,11 +211,10 @@ void StaticMarkingVisitor<StaticVisitor>::VisitCodeEntry( } -template<typename StaticVisitor> +template <typename StaticVisitor> void StaticMarkingVisitor<StaticVisitor>::VisitEmbeddedPointer( Heap* heap, RelocInfo* rinfo) { - ASSERT(rinfo->rmode() == RelocInfo::EMBEDDED_OBJECT); - ASSERT(!rinfo->target_object()->IsConsString()); + DCHECK(rinfo->rmode() == RelocInfo::EMBEDDED_OBJECT); HeapObject* object = HeapObject::cast(rinfo->target_object()); heap->mark_compact_collector()->RecordRelocSlot(rinfo, object); // TODO(ulan): It could be better to record slots only for strongly embedded @@ -253,10 +226,10 @@ void StaticMarkingVisitor<StaticVisitor>::VisitEmbeddedPointer( } -template<typename StaticVisitor> -void StaticMarkingVisitor<StaticVisitor>::VisitCell( - Heap* heap, RelocInfo* rinfo) { - ASSERT(rinfo->rmode() == RelocInfo::CELL); +template <typename StaticVisitor> +void StaticMarkingVisitor<StaticVisitor>::VisitCell(Heap* heap, + RelocInfo* rinfo) { + DCHECK(rinfo->rmode() == RelocInfo::CELL); Cell* cell = rinfo->target_cell(); // No need to record slots because the cell space is not compacted during GC. 
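// Sketch of the dispatch mechanism behind the table_.Register(...) calls
// above (simplified stand-ins): visitors are plain function pointers in an
// array indexed by a visitor id derived from the object's shape, so
// dispatch is a single array load rather than a virtual call.
enum StubVisitorId { kVisitFixedArrayStub, kVisitOddballStub, kStubIdCount };

struct ObjectStub {
  StubVisitorId id;
  int size;
};

typedef void (*VisitFunction)(ObjectStub* object);

struct VisitorTable {
  VisitFunction callbacks[kStubIdCount];  // Filled in by Register().

  void Register(StubVisitorId id, VisitFunction callback) {
    callbacks[id] = callback;
  }
  // Assumes every id was registered before the first Visit().
  void Visit(ObjectStub* object) { callbacks[object->id](object); }
};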
if (!rinfo->host()->IsWeakObject(cell)) { @@ -265,10 +238,10 @@ void StaticMarkingVisitor<StaticVisitor>::VisitCell( } -template<typename StaticVisitor> -void StaticMarkingVisitor<StaticVisitor>::VisitDebugTarget( - Heap* heap, RelocInfo* rinfo) { - ASSERT((RelocInfo::IsJSReturn(rinfo->rmode()) && +template <typename StaticVisitor> +void StaticMarkingVisitor<StaticVisitor>::VisitDebugTarget(Heap* heap, + RelocInfo* rinfo) { + DCHECK((RelocInfo::IsJSReturn(rinfo->rmode()) && rinfo->IsPatchedReturnSequence()) || (RelocInfo::IsDebugBreakSlot(rinfo->rmode()) && rinfo->IsPatchedDebugBreakSlotSequence())); @@ -278,21 +251,20 @@ void StaticMarkingVisitor<StaticVisitor>::VisitDebugTarget( } -template<typename StaticVisitor> -void StaticMarkingVisitor<StaticVisitor>::VisitCodeTarget( - Heap* heap, RelocInfo* rinfo) { - ASSERT(RelocInfo::IsCodeTarget(rinfo->rmode())); +template <typename StaticVisitor> +void StaticMarkingVisitor<StaticVisitor>::VisitCodeTarget(Heap* heap, + RelocInfo* rinfo) { + DCHECK(RelocInfo::IsCodeTarget(rinfo->rmode())); Code* target = Code::GetCodeFromTargetAddress(rinfo->target_address()); // Monomorphic ICs are preserved when possible, but need to be flushed // when they might be keeping a Context alive, or when the heap is about // to be serialized. - - if (FLAG_cleanup_code_caches_at_gc && target->is_inline_cache_stub() - && (target->ic_state() == MEGAMORPHIC || target->ic_state() == GENERIC || - target->ic_state() == POLYMORPHIC || heap->flush_monomorphic_ics() || - Serializer::enabled(heap->isolate()) || - target->ic_age() != heap->global_ic_age() || - target->is_invalidated_weak_stub())) { + if (FLAG_cleanup_code_caches_at_gc && target->is_inline_cache_stub() && + (target->ic_state() == MEGAMORPHIC || target->ic_state() == GENERIC || + target->ic_state() == POLYMORPHIC || heap->flush_monomorphic_ics() || + heap->isolate()->serializer_enabled() || + target->ic_age() != heap->global_ic_age() || + target->is_invalidated_weak_stub())) { IC::Clear(heap->isolate(), rinfo->pc(), rinfo->host()->constant_pool()); target = Code::GetCodeFromTargetAddress(rinfo->target_address()); } @@ -301,27 +273,25 @@ void StaticMarkingVisitor<StaticVisitor>::VisitCodeTarget( } -template<typename StaticVisitor> +template <typename StaticVisitor> void StaticMarkingVisitor<StaticVisitor>::VisitCodeAgeSequence( Heap* heap, RelocInfo* rinfo) { - ASSERT(RelocInfo::IsCodeAgeSequence(rinfo->rmode())); + DCHECK(RelocInfo::IsCodeAgeSequence(rinfo->rmode())); Code* target = rinfo->code_age_stub(); - ASSERT(target != NULL); + DCHECK(target != NULL); heap->mark_compact_collector()->RecordRelocSlot(rinfo, target); StaticVisitor::MarkObject(heap, target); } -template<typename StaticVisitor> +template <typename StaticVisitor> void StaticMarkingVisitor<StaticVisitor>::VisitNativeContext( Map* map, HeapObject* object) { - FixedBodyVisitor<StaticVisitor, - Context::MarkCompactBodyDescriptor, + FixedBodyVisitor<StaticVisitor, Context::MarkCompactBodyDescriptor, void>::Visit(map, object); MarkCompactCollector* collector = map->GetHeap()->mark_compact_collector(); - for (int idx = Context::FIRST_WEAK_SLOT; - idx < Context::NATIVE_CONTEXT_SLOTS; + for (int idx = Context::FIRST_WEAK_SLOT; idx < Context::NATIVE_CONTEXT_SLOTS; ++idx) { Object** slot = Context::cast(object)->RawFieldOfElementAt(idx); collector->RecordSlot(slot, slot, *slot); @@ -329,9 +299,9 @@ void StaticMarkingVisitor<StaticVisitor>::VisitNativeContext( } -template<typename StaticVisitor> -void StaticMarkingVisitor<StaticVisitor>::VisitMap( - Map* 
map, HeapObject* object) { +template <typename StaticVisitor> +void StaticMarkingVisitor<StaticVisitor>::VisitMap(Map* map, + HeapObject* object) { Heap* heap = map->GetHeap(); Map* map_object = Map::cast(object); @@ -345,14 +315,14 @@ void StaticMarkingVisitor<StaticVisitor>::VisitMap( if (FLAG_collect_maps && map_object->CanTransition()) { MarkMapContents(heap, map_object); } else { - StaticVisitor::VisitPointers(heap, - HeapObject::RawField(object, Map::kPointerFieldsBeginOffset), + StaticVisitor::VisitPointers( + heap, HeapObject::RawField(object, Map::kPointerFieldsBeginOffset), HeapObject::RawField(object, Map::kPointerFieldsEndOffset)); } } -template<typename StaticVisitor> +template <typename StaticVisitor> void StaticMarkingVisitor<StaticVisitor>::VisitPropertyCell( Map* map, HeapObject* object) { Heap* heap = map->GetHeap(); @@ -370,13 +340,14 @@ void StaticMarkingVisitor<StaticVisitor>::VisitPropertyCell( StaticVisitor::VisitPointer(heap, slot); } - StaticVisitor::VisitPointers(heap, + StaticVisitor::VisitPointers( + heap, HeapObject::RawField(object, PropertyCell::kPointerFieldsBeginOffset), HeapObject::RawField(object, PropertyCell::kPointerFieldsEndOffset)); } -template<typename StaticVisitor> +template <typename StaticVisitor> void StaticMarkingVisitor<StaticVisitor>::VisitAllocationSite( Map* map, HeapObject* object) { Heap* heap = map->GetHeap(); @@ -395,25 +366,60 @@ void StaticMarkingVisitor<StaticVisitor>::VisitAllocationSite( StaticVisitor::VisitPointer(heap, slot); } - StaticVisitor::VisitPointers(heap, + StaticVisitor::VisitPointers( + heap, HeapObject::RawField(object, AllocationSite::kPointerFieldsBeginOffset), HeapObject::RawField(object, AllocationSite::kPointerFieldsEndOffset)); } -template<typename StaticVisitor> -void StaticMarkingVisitor<StaticVisitor>::VisitCode( +template <typename StaticVisitor> +void StaticMarkingVisitor<StaticVisitor>::VisitWeakCollection( Map* map, HeapObject* object) { Heap* heap = map->GetHeap(); + JSWeakCollection* weak_collection = + reinterpret_cast<JSWeakCollection*>(object); + + // Enqueue weak collection in linked list of encountered weak collections. + if (weak_collection->next() == heap->undefined_value()) { + weak_collection->set_next(heap->encountered_weak_collections()); + heap->set_encountered_weak_collections(weak_collection); + } + + // Skip visiting the backing hash table containing the mappings and the + // pointer to the other enqueued weak collections, both are post-processed. + StaticVisitor::VisitPointers( + heap, HeapObject::RawField(object, JSWeakCollection::kPropertiesOffset), + HeapObject::RawField(object, JSWeakCollection::kTableOffset)); + STATIC_ASSERT(JSWeakCollection::kTableOffset + kPointerSize == + JSWeakCollection::kNextOffset); + STATIC_ASSERT(JSWeakCollection::kNextOffset + kPointerSize == + JSWeakCollection::kSize); + + // Partially initialized weak collection is enqueued, but table is ignored. + if (!weak_collection->table()->IsHashTable()) return; + + // Mark the backing hash table without pushing it on the marking stack. 
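// Sketch of the enqueue step of VisitWeakCollection above (stand-in types):
// a weak collection's own next slot doubles as the "already enqueued"
// marker, so each collection is pushed onto the heap's encountered list at
// most once per GC. The kNotEnqueued sentinel mirrors undefined ("not yet
// enqueued") in the real code; nullptr terminates the list itself.
struct WeakStub {
  WeakStub* next;
};

static WeakStub kNotEnqueued;  // Stands in for undefined_value().

void EnqueueOnce(WeakStub** encountered_head, WeakStub* collection) {
  if (collection->next == &kNotEnqueued) {  // Only on the first visit.
    collection->next = *encountered_head;
    *encountered_head = collection;
  }
}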
+ Object** slot = HeapObject::RawField(object, JSWeakCollection::kTableOffset); + HeapObject* obj = HeapObject::cast(*slot); + heap->mark_compact_collector()->RecordSlot(slot, slot, obj); + StaticVisitor::MarkObjectWithoutPush(heap, obj); +} + + +template <typename StaticVisitor> +void StaticMarkingVisitor<StaticVisitor>::VisitCode(Map* map, + HeapObject* object) { + Heap* heap = map->GetHeap(); Code* code = Code::cast(object); - if (FLAG_age_code && !Serializer::enabled(heap->isolate())) { + if (FLAG_age_code && !heap->isolate()->serializer_enabled()) { code->MakeOlder(heap->mark_compact_collector()->marking_parity()); } code->CodeIterateBody<StaticVisitor>(heap); } -template<typename StaticVisitor> +template <typename StaticVisitor> void StaticMarkingVisitor<StaticVisitor>::VisitSharedFunctionInfo( Map* map, HeapObject* object) { Heap* heap = map->GetHeap(); @@ -424,8 +430,7 @@ void StaticMarkingVisitor<StaticVisitor>::VisitSharedFunctionInfo( if (FLAG_cleanup_code_caches_at_gc) { shared->ClearTypeFeedbackInfo(); } - if (FLAG_cache_optimized_code && - FLAG_flush_optimized_code_cache && + if (FLAG_cache_optimized_code && FLAG_flush_optimized_code_cache && !shared->optimized_code_map()->IsSmi()) { // Always flush the optimized code map if requested by flag. shared->ClearOptimizedCodeMap(); @@ -464,28 +469,29 @@ void StaticMarkingVisitor<StaticVisitor>::VisitSharedFunctionInfo( } -template<typename StaticVisitor> +template <typename StaticVisitor> void StaticMarkingVisitor<StaticVisitor>::VisitConstantPoolArray( Map* map, HeapObject* object) { Heap* heap = map->GetHeap(); - ConstantPoolArray* constant_pool = ConstantPoolArray::cast(object); - for (int i = 0; i < constant_pool->count_of_code_ptr_entries(); i++) { - int index = constant_pool->first_code_ptr_index() + i; - Address code_entry = - reinterpret_cast<Address>(constant_pool->RawFieldOfElementAt(index)); + ConstantPoolArray* array = ConstantPoolArray::cast(object); + ConstantPoolArray::Iterator code_iter(array, ConstantPoolArray::CODE_PTR); + while (!code_iter.is_finished()) { + Address code_entry = reinterpret_cast<Address>( + array->RawFieldOfElementAt(code_iter.next_index())); StaticVisitor::VisitCodeEntry(heap, code_entry); } - for (int i = 0; i < constant_pool->count_of_heap_ptr_entries(); i++) { - int index = constant_pool->first_heap_ptr_index() + i; - Object** slot = constant_pool->RawFieldOfElementAt(index); + + ConstantPoolArray::Iterator heap_iter(array, ConstantPoolArray::HEAP_PTR); + while (!heap_iter.is_finished()) { + Object** slot = array->RawFieldOfElementAt(heap_iter.next_index()); HeapObject* object = HeapObject::cast(*slot); heap->mark_compact_collector()->RecordSlot(slot, slot, object); bool is_weak_object = - (constant_pool->get_weak_object_state() == - ConstantPoolArray::WEAK_OBJECTS_IN_OPTIMIZED_CODE && + (array->get_weak_object_state() == + ConstantPoolArray::WEAK_OBJECTS_IN_OPTIMIZED_CODE && Code::IsWeakObjectInOptimizedCode(object)) || - (constant_pool->get_weak_object_state() == - ConstantPoolArray::WEAK_OBJECTS_IN_IC && + (array->get_weak_object_state() == + ConstantPoolArray::WEAK_OBJECTS_IN_IC && Code::IsWeakObjectInIC(object)); if (!is_weak_object) { StaticVisitor::MarkObject(heap, object); @@ -494,9 +500,9 @@ void StaticMarkingVisitor<StaticVisitor>::VisitConstantPoolArray( } -template<typename StaticVisitor> -void StaticMarkingVisitor<StaticVisitor>::VisitJSFunction( - Map* map, HeapObject* object) { +template <typename StaticVisitor> +void StaticMarkingVisitor<StaticVisitor>::VisitJSFunction(Map* 
map, + HeapObject* object) { Heap* heap = map->GetHeap(); JSFunction* function = JSFunction::cast(object); MarkCompactCollector* collector = heap->mark_compact_collector(); @@ -532,38 +538,36 @@ void StaticMarkingVisitor<StaticVisitor>::VisitJSFunction( } -template<typename StaticVisitor> -void StaticMarkingVisitor<StaticVisitor>::VisitJSRegExp( - Map* map, HeapObject* object) { +template <typename StaticVisitor> +void StaticMarkingVisitor<StaticVisitor>::VisitJSRegExp(Map* map, + HeapObject* object) { int last_property_offset = JSRegExp::kSize + kPointerSize * map->inobject_properties(); - StaticVisitor::VisitPointers(map->GetHeap(), - HeapObject::RawField(object, JSRegExp::kPropertiesOffset), + StaticVisitor::VisitPointers( + map->GetHeap(), HeapObject::RawField(object, JSRegExp::kPropertiesOffset), HeapObject::RawField(object, last_property_offset)); } -template<typename StaticVisitor> +template <typename StaticVisitor> void StaticMarkingVisitor<StaticVisitor>::VisitJSArrayBuffer( Map* map, HeapObject* object) { Heap* heap = map->GetHeap(); - STATIC_ASSERT( - JSArrayBuffer::kWeakFirstViewOffset == - JSArrayBuffer::kWeakNextOffset + kPointerSize); + STATIC_ASSERT(JSArrayBuffer::kWeakFirstViewOffset == + JSArrayBuffer::kWeakNextOffset + kPointerSize); StaticVisitor::VisitPointers( heap, HeapObject::RawField(object, JSArrayBuffer::BodyDescriptor::kStartOffset), HeapObject::RawField(object, JSArrayBuffer::kWeakNextOffset)); StaticVisitor::VisitPointers( - heap, - HeapObject::RawField(object, - JSArrayBuffer::kWeakNextOffset + 2 * kPointerSize), + heap, HeapObject::RawField( + object, JSArrayBuffer::kWeakNextOffset + 2 * kPointerSize), HeapObject::RawField(object, JSArrayBuffer::kSizeWithInternalFields)); } -template<typename StaticVisitor> +template <typename StaticVisitor> void StaticMarkingVisitor<StaticVisitor>::VisitJSTypedArray( Map* map, HeapObject* object) { StaticVisitor::VisitPointers( @@ -571,31 +575,29 @@ void StaticMarkingVisitor<StaticVisitor>::VisitJSTypedArray( HeapObject::RawField(object, JSTypedArray::BodyDescriptor::kStartOffset), HeapObject::RawField(object, JSTypedArray::kWeakNextOffset)); StaticVisitor::VisitPointers( - map->GetHeap(), - HeapObject::RawField(object, - JSTypedArray::kWeakNextOffset + kPointerSize), + map->GetHeap(), HeapObject::RawField( + object, JSTypedArray::kWeakNextOffset + kPointerSize), HeapObject::RawField(object, JSTypedArray::kSizeWithInternalFields)); } -template<typename StaticVisitor> -void StaticMarkingVisitor<StaticVisitor>::VisitJSDataView( - Map* map, HeapObject* object) { +template <typename StaticVisitor> +void StaticMarkingVisitor<StaticVisitor>::VisitJSDataView(Map* map, + HeapObject* object) { StaticVisitor::VisitPointers( map->GetHeap(), HeapObject::RawField(object, JSDataView::BodyDescriptor::kStartOffset), HeapObject::RawField(object, JSDataView::kWeakNextOffset)); StaticVisitor::VisitPointers( map->GetHeap(), - HeapObject::RawField(object, - JSDataView::kWeakNextOffset + kPointerSize), + HeapObject::RawField(object, JSDataView::kWeakNextOffset + kPointerSize), HeapObject::RawField(object, JSDataView::kSizeWithInternalFields)); } -template<typename StaticVisitor> -void StaticMarkingVisitor<StaticVisitor>::MarkMapContents( - Heap* heap, Map* map) { +template <typename StaticVisitor> +void StaticMarkingVisitor<StaticVisitor>::MarkMapContents(Heap* heap, + Map* map) { // Make sure that the back pointer stored either in the map itself or // inside its transitions array is marked. 
Skip recording the back // pointer slot since map space is not compacted. @@ -618,16 +620,15 @@ void StaticMarkingVisitor<StaticVisitor>::MarkMapContents( DescriptorArray* descriptors = map->instance_descriptors(); if (StaticVisitor::MarkObjectWithoutPush(heap, descriptors) && descriptors->length() > 0) { - StaticVisitor::VisitPointers(heap, - descriptors->GetFirstElementAddress(), - descriptors->GetDescriptorEndSlot(0)); + StaticVisitor::VisitPointers(heap, descriptors->GetFirstElementAddress(), + descriptors->GetDescriptorEndSlot(0)); } int start = 0; int end = map->NumberOfOwnDescriptors(); if (start < end) { StaticVisitor::VisitPointers(heap, - descriptors->GetDescriptorStartSlot(start), - descriptors->GetDescriptorEndSlot(end)); + descriptors->GetDescriptorStartSlot(start), + descriptors->GetDescriptorEndSlot(end)); } // Mark prototype dependent codes array but do not push it onto marking @@ -641,13 +642,13 @@ void StaticMarkingVisitor<StaticVisitor>::MarkMapContents( // Mark the pointer fields of the Map. Since the transitions array has // been marked already, it is fine that one of these fields contains a // pointer to it. - StaticVisitor::VisitPointers(heap, - HeapObject::RawField(map, Map::kPointerFieldsBeginOffset), + StaticVisitor::VisitPointers( + heap, HeapObject::RawField(map, Map::kPointerFieldsBeginOffset), HeapObject::RawField(map, Map::kPointerFieldsEndOffset)); } -template<typename StaticVisitor> +template <typename StaticVisitor> void StaticMarkingVisitor<StaticVisitor>::MarkTransitionArray( Heap* heap, TransitionArray* transitions) { if (!StaticVisitor::MarkObjectWithoutPush(heap, transitions)) return; @@ -671,17 +672,19 @@ void StaticMarkingVisitor<StaticVisitor>::MarkTransitionArray( } -template<typename StaticVisitor> -void StaticMarkingVisitor<StaticVisitor>::MarkInlinedFunctionsCode( - Heap* heap, Code* code) { +template <typename StaticVisitor> +void StaticMarkingVisitor<StaticVisitor>::MarkInlinedFunctionsCode(Heap* heap, + Code* code) { + // Skip in absence of inlining. + // TODO(turbofan): Revisit once we support inlining. + if (code->is_turbofanned()) return; // For optimized functions we should retain both non-optimized version // of its code and non-optimized version of all inlined functions. // This is required to support bailing out from inlined code. 
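// Sketch of the loop that follows (stand-in types): the deoptimization data
// of an optimized code object lists every inlined function, and marking
// keeps each one's unoptimized code alive so a bailout from inlined code
// always has somewhere to land. The early return for turbofanned code
// above simply reflects that inlining is not yet supported there.
#include <vector>

struct FunctionStub {
  bool unoptimized_code_marked = false;
};

void RetainInlinedCode(const std::vector<FunctionStub*>& inlined_functions) {
  for (FunctionStub* inlined : inlined_functions) {
    inlined->unoptimized_code_marked = true;  // StaticVisitor::MarkObject.
  }
}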
DeoptimizationInputData* data = DeoptimizationInputData::cast(code->deoptimization_data()); FixedArray* literals = data->LiteralArray(); - for (int i = 0, count = data->InlinedFunctionCount()->value(); - i < count; + for (int i = 0, count = data->InlinedFunctionCount()->value(); i < count; i++) { JSFunction* inlined = JSFunction::cast(literals->get(i)); StaticVisitor::MarkObject(heap, inlined->shared()->code()); @@ -691,20 +694,20 @@ void StaticMarkingVisitor<StaticVisitor>::MarkInlinedFunctionsCode( inline static bool IsValidNonBuiltinContext(Object* context) { return context->IsContext() && - !Context::cast(context)->global_object()->IsJSBuiltinsObject(); + !Context::cast(context)->global_object()->IsJSBuiltinsObject(); } inline static bool HasSourceCode(Heap* heap, SharedFunctionInfo* info) { Object* undefined = heap->undefined_value(); return (info->script() != undefined) && - (reinterpret_cast<Script*>(info->script())->source() != undefined); + (reinterpret_cast<Script*>(info->script())->source() != undefined); } -template<typename StaticVisitor> -bool StaticMarkingVisitor<StaticVisitor>::IsFlushable( - Heap* heap, JSFunction* function) { +template <typename StaticVisitor> +bool StaticMarkingVisitor<StaticVisitor>::IsFlushable(Heap* heap, + JSFunction* function) { SharedFunctionInfo* shared_info = function->shared(); // Code is either on stack, in compilation cache or referenced @@ -733,7 +736,7 @@ bool StaticMarkingVisitor<StaticVisitor>::IsFlushable( } -template<typename StaticVisitor> +template <typename StaticVisitor> bool StaticMarkingVisitor<StaticVisitor>::IsFlushable( Heap* heap, SharedFunctionInfo* shared_info) { // Code is either on stack, in compilation cache or referenced @@ -791,45 +794,39 @@ bool StaticMarkingVisitor<StaticVisitor>::IsFlushable( } -template<typename StaticVisitor> +template <typename StaticVisitor> void StaticMarkingVisitor<StaticVisitor>::VisitSharedFunctionInfoStrongCode( Heap* heap, HeapObject* object) { - StaticVisitor::BeforeVisitingSharedFunctionInfo(object); - Object** start_slot = - HeapObject::RawField(object, - SharedFunctionInfo::BodyDescriptor::kStartOffset); - Object** end_slot = - HeapObject::RawField(object, - SharedFunctionInfo::BodyDescriptor::kEndOffset); + Object** start_slot = HeapObject::RawField( + object, SharedFunctionInfo::BodyDescriptor::kStartOffset); + Object** end_slot = HeapObject::RawField( + object, SharedFunctionInfo::BodyDescriptor::kEndOffset); StaticVisitor::VisitPointers(heap, start_slot, end_slot); } -template<typename StaticVisitor> +template <typename StaticVisitor> void StaticMarkingVisitor<StaticVisitor>::VisitSharedFunctionInfoWeakCode( Heap* heap, HeapObject* object) { - StaticVisitor::BeforeVisitingSharedFunctionInfo(object); Object** name_slot = HeapObject::RawField(object, SharedFunctionInfo::kNameOffset); StaticVisitor::VisitPointer(heap, name_slot); // Skip visiting kCodeOffset as it is treated weakly here. 
STATIC_ASSERT(SharedFunctionInfo::kNameOffset + kPointerSize == - SharedFunctionInfo::kCodeOffset); + SharedFunctionInfo::kCodeOffset); STATIC_ASSERT(SharedFunctionInfo::kCodeOffset + kPointerSize == - SharedFunctionInfo::kOptimizedCodeMapOffset); + SharedFunctionInfo::kOptimizedCodeMapOffset); Object** start_slot = - HeapObject::RawField(object, - SharedFunctionInfo::kOptimizedCodeMapOffset); - Object** end_slot = - HeapObject::RawField(object, - SharedFunctionInfo::BodyDescriptor::kEndOffset); + HeapObject::RawField(object, SharedFunctionInfo::kOptimizedCodeMapOffset); + Object** end_slot = HeapObject::RawField( + object, SharedFunctionInfo::BodyDescriptor::kEndOffset); StaticVisitor::VisitPointers(heap, start_slot, end_slot); } -template<typename StaticVisitor> +template <typename StaticVisitor> void StaticMarkingVisitor<StaticVisitor>::VisitJSFunctionStrongCode( Heap* heap, HeapObject* object) { Object** start_slot = @@ -840,17 +837,16 @@ void StaticMarkingVisitor<StaticVisitor>::VisitJSFunctionStrongCode( VisitCodeEntry(heap, object->address() + JSFunction::kCodeEntryOffset); STATIC_ASSERT(JSFunction::kCodeEntryOffset + kPointerSize == - JSFunction::kPrototypeOrInitialMapOffset); + JSFunction::kPrototypeOrInitialMapOffset); start_slot = HeapObject::RawField(object, JSFunction::kPrototypeOrInitialMapOffset); - end_slot = - HeapObject::RawField(object, JSFunction::kNonWeakFieldsEndOffset); + end_slot = HeapObject::RawField(object, JSFunction::kNonWeakFieldsEndOffset); StaticVisitor::VisitPointers(heap, start_slot, end_slot); } -template<typename StaticVisitor> +template <typename StaticVisitor> void StaticMarkingVisitor<StaticVisitor>::VisitJSFunctionWeakCode( Heap* heap, HeapObject* object) { Object** start_slot = @@ -861,12 +857,11 @@ void StaticMarkingVisitor<StaticVisitor>::VisitJSFunctionWeakCode( // Skip visiting kCodeEntryOffset as it is treated weakly here. 
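// Sketch of the "visit around a weak field" pattern used by the WeakCode
// visitors above (hypothetical layout, not the real SharedFunctionInfo):
// strong fields before and after the weak slot are visited in two ranges
// that deliberately exclude it, and a compile-time layout check keeps the
// split honest if fields are ever reordered.
#include <cstddef>

struct SharedStub {
  void* name;                // Visited strongly.
  void* code;                // Weak here: skipped, code flushing decides.
  void* optimized_code_map;  // Visited strongly from here on.
  void* scope_info;
};

static_assert(offsetof(SharedStub, code) ==
                  offsetof(SharedStub, name) + sizeof(void*),
              "the weak slot must directly follow name");

void VisitWeakly(SharedStub* object, void (*visit)(void**)) {
  visit(&object->name);  // Strong prefix stops before the weak slot.
  // object->code intentionally not visited.
  visit(&object->optimized_code_map);  // Strong suffix resumes after it.
  visit(&object->scope_info);
}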
STATIC_ASSERT(JSFunction::kCodeEntryOffset + kPointerSize == - JSFunction::kPrototypeOrInitialMapOffset); + JSFunction::kPrototypeOrInitialMapOffset); start_slot = HeapObject::RawField(object, JSFunction::kPrototypeOrInitialMapOffset); - end_slot = - HeapObject::RawField(object, JSFunction::kNonWeakFieldsEndOffset); + end_slot = HeapObject::RawField(object, JSFunction::kNonWeakFieldsEndOffset); StaticVisitor::VisitPointers(heap, start_slot, end_slot); } @@ -897,7 +892,7 @@ void Code::CodeIterateBody(ObjectVisitor* v) { } -template<typename StaticVisitor> +template <typename StaticVisitor> void Code::CodeIterateBody(Heap* heap) { int mode_mask = RelocInfo::kCodeTargetMask | RelocInfo::ModeMask(RelocInfo::EMBEDDED_OBJECT) | @@ -913,8 +908,7 @@ void Code::CodeIterateBody(Heap* heap) { heap, reinterpret_cast<Object**>(this->address() + kRelocationInfoOffset)); StaticVisitor::VisitPointer( - heap, - reinterpret_cast<Object**>(this->address() + kHandlerTableOffset)); + heap, reinterpret_cast<Object**>(this->address() + kHandlerTableOffset)); StaticVisitor::VisitPointer( heap, reinterpret_cast<Object**>(this->address() + kDeoptimizationDataOffset)); @@ -922,11 +916,9 @@ void Code::CodeIterateBody(Heap* heap) { heap, reinterpret_cast<Object**>(this->address() + kTypeFeedbackInfoOffset)); StaticVisitor::VisitNextCodeLink( - heap, - reinterpret_cast<Object**>(this->address() + kNextCodeLinkOffset)); + heap, reinterpret_cast<Object**>(this->address() + kNextCodeLinkOffset)); StaticVisitor::VisitPointer( - heap, - reinterpret_cast<Object**>(this->address() + kConstantPoolOffset)); + heap, reinterpret_cast<Object**>(this->address() + kConstantPoolOffset)); RelocIterator it(this, mode_mask); @@ -934,8 +926,7 @@ void Code::CodeIterateBody(Heap* heap) { it.rinfo()->template Visit<StaticVisitor>(heap); } } - - -} } // namespace v8::internal +} +} // namespace v8::internal #endif // V8_OBJECTS_VISITING_INL_H_ diff --git a/deps/v8/src/objects-visiting.cc b/deps/v8/src/heap/objects-visiting.cc similarity index 54% rename from deps/v8/src/objects-visiting.cc rename to deps/v8/src/heap/objects-visiting.cc index 24cff3487fe..a316d12dcdf 100644 --- a/deps/v8/src/objects-visiting.cc +++ b/deps/v8/src/heap/objects-visiting.cc @@ -2,23 +2,17 @@ // Use of this source code is governed by a BSD-style license that can be // found in the LICENSE file. 
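Code::CodeIterateBody above ORs together RelocInfo::ModeMask() bits and lets RelocIterator skip every relocation entry whose mode is not in the mask. A small sketch of that filtering idea, using a hypothetical RelocEntry type in place of V8's RelocInfo:

    #include <cstdio>
    #include <vector>

    enum Mode { kCodeTarget = 0, kEmbeddedObject = 1, kExternalReference = 2 };
    inline int ModeMask(Mode m) { return 1 << m; }

    struct RelocEntry { Mode mode; /* payload omitted */ };

    // Yields only entries whose mode bit is set in 'mask', mirroring
    // 'RelocIterator it(this, mode_mask)'.
    void ForEachReloc(const std::vector<RelocEntry>& entries, int mask,
                      void (*visit)(const RelocEntry&)) {
      for (const RelocEntry& e : entries) {
        if (mask & ModeMask(e.mode)) visit(e);
      }
    }

    int main() {
      std::vector<RelocEntry> entries = {
          {kCodeTarget}, {kExternalReference}, {kEmbeddedObject}};
      int mask = ModeMask(kCodeTarget) | ModeMask(kEmbeddedObject);
      ForEachReloc(entries, mask,
                   [](const RelocEntry& e) { std::printf("mode %d\n", e.mode); });
      return 0;  // visits two of the three entries
    }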
-#include "v8.h" +#include "src/v8.h" -#include "ic-inl.h" -#include "objects-visiting.h" +#include "src/heap/objects-visiting.h" +#include "src/ic-inl.h" namespace v8 { namespace internal { -static inline bool IsShortcutCandidate(int type) { - return ((type & kShortcutTypeMask) == kShortcutTypeTag); -} - - StaticVisitorBase::VisitorId StaticVisitorBase::GetVisitorId( - int instance_type, - int instance_size) { + int instance_type, int instance_size) { if (instance_type < FIRST_NONSTRING_TYPE) { switch (instance_type & kStringRepresentationMask) { case kSeqStringTag: @@ -39,8 +33,7 @@ StaticVisitorBase::VisitorId StaticVisitorBase::GetVisitorId( return kVisitSlicedString; case kExternalStringTag: - return GetVisitorIdForSize(kVisitDataObject, - kVisitDataObjectGeneric, + return GetVisitorIdForSize(kVisitDataObject, kVisitDataObjectGeneric, instance_size); } UNREACHABLE(); @@ -78,20 +71,16 @@ StaticVisitorBase::VisitorId StaticVisitorBase::GetVisitorId( return kVisitPropertyCell; case JS_SET_TYPE: - return GetVisitorIdForSize(kVisitStruct, - kVisitStructGeneric, + return GetVisitorIdForSize(kVisitStruct, kVisitStructGeneric, JSSet::kSize); case JS_MAP_TYPE: - return GetVisitorIdForSize(kVisitStruct, - kVisitStructGeneric, + return GetVisitorIdForSize(kVisitStruct, kVisitStructGeneric, JSMap::kSize); case JS_WEAK_MAP_TYPE: - return kVisitJSWeakMap; - case JS_WEAK_SET_TYPE: - return kVisitJSWeakSet; + return kVisitJSWeakCollection; case JS_REGEXP_TYPE: return kVisitJSRegExp; @@ -100,18 +89,15 @@ StaticVisitorBase::VisitorId StaticVisitorBase::GetVisitorId( return kVisitSharedFunctionInfo; case JS_PROXY_TYPE: - return GetVisitorIdForSize(kVisitStruct, - kVisitStructGeneric, + return GetVisitorIdForSize(kVisitStruct, kVisitStructGeneric, JSProxy::kSize); case JS_FUNCTION_PROXY_TYPE: - return GetVisitorIdForSize(kVisitStruct, - kVisitStructGeneric, + return GetVisitorIdForSize(kVisitStruct, kVisitStructGeneric, JSFunctionProxy::kSize); case FOREIGN_TYPE: - return GetVisitorIdForSize(kVisitDataObject, - kVisitDataObjectGeneric, + return GetVisitorIdForSize(kVisitDataObject, kVisitDataObjectGeneric, Foreign::kSize); case SYMBOL_TYPE: @@ -142,20 +128,19 @@ StaticVisitorBase::VisitorId StaticVisitorBase::GetVisitorId( case JS_MESSAGE_OBJECT_TYPE: case JS_SET_ITERATOR_TYPE: case JS_MAP_ITERATOR_TYPE: - return GetVisitorIdForSize(kVisitJSObject, - kVisitJSObjectGeneric, + return GetVisitorIdForSize(kVisitJSObject, kVisitJSObjectGeneric, instance_size); case JS_FUNCTION_TYPE: return kVisitJSFunction; case HEAP_NUMBER_TYPE: -#define EXTERNAL_ARRAY_CASE(Type, type, TYPE, ctype, size) \ - case EXTERNAL_##TYPE##_ARRAY_TYPE: + case MUTABLE_HEAP_NUMBER_TYPE: +#define EXTERNAL_ARRAY_CASE(Type, type, TYPE, ctype, size) \ + case EXTERNAL_##TYPE##_ARRAY_TYPE: - TYPED_ARRAYS(EXTERNAL_ARRAY_CASE) - return GetVisitorIdForSize(kVisitDataObject, - kVisitDataObjectGeneric, + TYPED_ARRAYS(EXTERNAL_ARRAY_CASE) + return GetVisitorIdForSize(kVisitDataObject, kVisitDataObjectGeneric, instance_size); #undef EXTERNAL_ARRAY_CASE @@ -172,17 +157,15 @@ StaticVisitorBase::VisitorId StaticVisitorBase::GetVisitorId( case FIXED_FLOAT64_ARRAY_TYPE: return kVisitFixedFloat64Array; -#define MAKE_STRUCT_CASE(NAME, Name, name) \ - case NAME##_TYPE: +#define MAKE_STRUCT_CASE(NAME, Name, name) case NAME##_TYPE: STRUCT_LIST(MAKE_STRUCT_CASE) #undef MAKE_STRUCT_CASE - if (instance_type == ALLOCATION_SITE_TYPE) { - return kVisitAllocationSite; - } + if (instance_type == ALLOCATION_SITE_TYPE) { + return kVisitAllocationSite; + } - return 
GetVisitorIdForSize(kVisitStruct, - kVisitStructGeneric, - instance_size); + return GetVisitorIdForSize(kVisitStruct, kVisitStructGeneric, + instance_size); default: UNREACHABLE(); @@ -191,19 +174,27 @@ StaticVisitorBase::VisitorId StaticVisitorBase::GetVisitorId( } +// We don't record weak slots during marking or scavenges. Instead we do it +// once when we complete mark-compact cycle. Note that write barrier has no +// effect if we are already in the middle of compacting mark-sweep cycle and we +// have to record slots manually. +static bool MustRecordSlots(Heap* heap) { + return heap->gc_state() == Heap::MARK_COMPACT && + heap->mark_compact_collector()->is_compacting(); +} + + template <class T> struct WeakListVisitor; template <class T> -Object* VisitWeakList(Heap* heap, - Object* list, - WeakObjectRetainer* retainer, - bool record_slots) { +Object* VisitWeakList(Heap* heap, Object* list, WeakObjectRetainer* retainer) { Object* undefined = heap->undefined_value(); Object* head = undefined; T* tail = NULL; MarkCompactCollector* collector = heap->mark_compact_collector(); + bool record_slots = MustRecordSlots(heap); while (list != undefined) { // Check whether to keep the candidate in the list. T* candidate = reinterpret_cast<T*>(list); @@ -214,23 +205,22 @@ Object* VisitWeakList(Heap* heap, head = retained; } else { // Subsequent elements in the list. - ASSERT(tail != NULL); + DCHECK(tail != NULL); WeakListVisitor<T>::SetWeakNext(tail, retained); if (record_slots) { Object** next_slot = - HeapObject::RawField(tail, WeakListVisitor<T>::WeakNextOffset()); + HeapObject::RawField(tail, WeakListVisitor<T>::WeakNextOffset()); collector->RecordSlot(next_slot, next_slot, retained); } } // Retained object is new tail. - ASSERT(!retained->IsUndefined()); + DCHECK(!retained->IsUndefined()); candidate = reinterpret_cast<T*>(retained); tail = candidate; // tail is a live object, visit it. 
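VisitWeakList above is the generic engine behind all of the weak lists in this file: it walks an intrusive singly linked list, asks the retainer which elements survive, and splices the survivors back together; with this patch it computes record_slots itself via MustRecordSlots() instead of taking it as a parameter. A minimal sketch of the head/tail splice, assuming a toy Node type and a trivial stand-in for WeakObjectRetainer:

    #include <cstdio>

    struct Node {
      int value;
      Node* next;
    };

    // Stand-in for WeakObjectRetainer::RetainAs(): the node itself if it
    // should stay alive, nullptr if it is dead.
    Node* RetainAs(Node* candidate) {
      return (candidate->value >= 0) ? candidate : nullptr;
    }

    // Rebuilds the chain from the retained nodes, with the same head/tail
    // bookkeeping as VisitWeakList().
    Node* VisitWeakList(Node* list) {
      Node* head = nullptr;
      Node* tail = nullptr;
      while (list != nullptr) {
        Node* candidate = list;
        Node* retained = RetainAs(candidate);
        if (retained != nullptr) {
          if (head == nullptr) {
            head = retained;        // first live element becomes the new head
          } else {
            tail->next = retained;  // splice after the previous live element
          }
          tail = retained;
        }
        list = candidate->next;     // advance along the original chain
      }
      if (tail != nullptr) tail->next = nullptr;  // terminate the new list
      return head;
    }

    int main() {
      Node c = {3, nullptr};
      Node b = {-1, &c};  // dead: gets unlinked
      Node a = {1, &b};
      for (Node* p = VisitWeakList(&a); p != nullptr; p = p->next)
        std::printf("%d ", p->value);  // prints: 1 3
      return 0;
    }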
- WeakListVisitor<T>::VisitLiveObject( - heap, tail, retainer, record_slots); + WeakListVisitor<T>::VisitLiveObject(heap, tail, retainer); } else { WeakListVisitor<T>::VisitPhantomObject(heap, candidate); } @@ -248,8 +238,7 @@ Object* VisitWeakList(Heap* heap, template <class T> -static void ClearWeakList(Heap* heap, - Object* list) { +static void ClearWeakList(Heap* heap, Object* list) { Object* undefined = heap->undefined_value(); while (list != undefined) { T* candidate = reinterpret_cast<T*>(list); @@ -259,7 +248,7 @@ static void ClearWeakList(Heap* heap, } -template<> +template <> struct WeakListVisitor<JSFunction> { static void SetWeakNext(JSFunction* function, Object* next) { function->set_next_function_link(next); @@ -269,148 +258,114 @@ struct WeakListVisitor<JSFunction> { return function->next_function_link(); } - static int WeakNextOffset() { - return JSFunction::kNextFunctionLinkOffset; - } + static int WeakNextOffset() { return JSFunction::kNextFunctionLinkOffset; } - static void VisitLiveObject(Heap*, JSFunction*, - WeakObjectRetainer*, bool) { - } + static void VisitLiveObject(Heap*, JSFunction*, WeakObjectRetainer*) {} - static void VisitPhantomObject(Heap*, JSFunction*) { - } + static void VisitPhantomObject(Heap*, JSFunction*) {} }; -template<> +template <> struct WeakListVisitor<Code> { static void SetWeakNext(Code* code, Object* next) { code->set_next_code_link(next); } - static Object* WeakNext(Code* code) { - return code->next_code_link(); - } + static Object* WeakNext(Code* code) { return code->next_code_link(); } - static int WeakNextOffset() { - return Code::kNextCodeLinkOffset; - } + static int WeakNextOffset() { return Code::kNextCodeLinkOffset; } - static void VisitLiveObject(Heap*, Code*, - WeakObjectRetainer*, bool) { - } + static void VisitLiveObject(Heap*, Code*, WeakObjectRetainer*) {} - static void VisitPhantomObject(Heap*, Code*) { - } + static void VisitPhantomObject(Heap*, Code*) {} }; -template<> +template <> struct WeakListVisitor<Context> { static void SetWeakNext(Context* context, Object* next) { - context->set(Context::NEXT_CONTEXT_LINK, - next, - UPDATE_WRITE_BARRIER); + context->set(Context::NEXT_CONTEXT_LINK, next, UPDATE_WRITE_BARRIER); } static Object* WeakNext(Context* context) { return context->get(Context::NEXT_CONTEXT_LINK); } - static void VisitLiveObject(Heap* heap, - Context* context, - WeakObjectRetainer* retainer, - bool record_slots) { + static int WeakNextOffset() { + return FixedArray::SizeFor(Context::NEXT_CONTEXT_LINK); + } + + static void VisitLiveObject(Heap* heap, Context* context, + WeakObjectRetainer* retainer) { // Process the three weak lists linked off the context. - DoWeakList<JSFunction>(heap, context, retainer, record_slots, - Context::OPTIMIZED_FUNCTIONS_LIST); - DoWeakList<Code>(heap, context, retainer, record_slots, - Context::OPTIMIZED_CODE_LIST); - DoWeakList<Code>(heap, context, retainer, record_slots, - Context::DEOPTIMIZED_CODE_LIST); + DoWeakList<JSFunction>(heap, context, retainer, + Context::OPTIMIZED_FUNCTIONS_LIST); + DoWeakList<Code>(heap, context, retainer, Context::OPTIMIZED_CODE_LIST); + DoWeakList<Code>(heap, context, retainer, Context::DEOPTIMIZED_CODE_LIST); } - template<class T> - static void DoWeakList(Heap* heap, - Context* context, - WeakObjectRetainer* retainer, - bool record_slots, - int index) { + template <class T> + static void DoWeakList(Heap* heap, Context* context, + WeakObjectRetainer* retainer, int index) { // Visit the weak list, removing dead intermediate elements. 
- Object* list_head = VisitWeakList<T>(heap, context->get(index), retainer, - record_slots); + Object* list_head = VisitWeakList<T>(heap, context->get(index), retainer); // Update the list head. context->set(index, list_head, UPDATE_WRITE_BARRIER); - if (record_slots) { + if (MustRecordSlots(heap)) { // Record the updated slot if necessary. - Object** head_slot = HeapObject::RawField( - context, FixedArray::SizeFor(index)); - heap->mark_compact_collector()->RecordSlot( - head_slot, head_slot, list_head); + Object** head_slot = + HeapObject::RawField(context, FixedArray::SizeFor(index)); + heap->mark_compact_collector()->RecordSlot(head_slot, head_slot, + list_head); } } static void VisitPhantomObject(Heap* heap, Context* context) { ClearWeakList<JSFunction>(heap, - context->get(Context::OPTIMIZED_FUNCTIONS_LIST)); + context->get(Context::OPTIMIZED_FUNCTIONS_LIST)); ClearWeakList<Code>(heap, context->get(Context::OPTIMIZED_CODE_LIST)); ClearWeakList<Code>(heap, context->get(Context::DEOPTIMIZED_CODE_LIST)); } - - static int WeakNextOffset() { - return FixedArray::SizeFor(Context::NEXT_CONTEXT_LINK); - } }; -template<> +template <> struct WeakListVisitor<JSArrayBufferView> { static void SetWeakNext(JSArrayBufferView* obj, Object* next) { obj->set_weak_next(next); } - static Object* WeakNext(JSArrayBufferView* obj) { - return obj->weak_next(); - } + static Object* WeakNext(JSArrayBufferView* obj) { return obj->weak_next(); } - static void VisitLiveObject(Heap*, - JSArrayBufferView* obj, - WeakObjectRetainer* retainer, - bool record_slots) {} + static int WeakNextOffset() { return JSArrayBufferView::kWeakNextOffset; } - static void VisitPhantomObject(Heap*, JSArrayBufferView*) {} + static void VisitLiveObject(Heap*, JSArrayBufferView*, WeakObjectRetainer*) {} - static int WeakNextOffset() { - return JSArrayBufferView::kWeakNextOffset; - } + static void VisitPhantomObject(Heap*, JSArrayBufferView*) {} }; -template<> +template <> struct WeakListVisitor<JSArrayBuffer> { static void SetWeakNext(JSArrayBuffer* obj, Object* next) { obj->set_weak_next(next); } - static Object* WeakNext(JSArrayBuffer* obj) { - return obj->weak_next(); - } + static Object* WeakNext(JSArrayBuffer* obj) { return obj->weak_next(); } + + static int WeakNextOffset() { return JSArrayBuffer::kWeakNextOffset; } - static void VisitLiveObject(Heap* heap, - JSArrayBuffer* array_buffer, - WeakObjectRetainer* retainer, - bool record_slots) { - Object* typed_array_obj = - VisitWeakList<JSArrayBufferView>( - heap, - array_buffer->weak_first_view(), - retainer, record_slots); + static void VisitLiveObject(Heap* heap, JSArrayBuffer* array_buffer, + WeakObjectRetainer* retainer) { + Object* typed_array_obj = VisitWeakList<JSArrayBufferView>( + heap, array_buffer->weak_first_view(), retainer); array_buffer->set_weak_first_view(typed_array_obj); - if (typed_array_obj != heap->undefined_value() && record_slots) { - Object** slot = HeapObject::RawField( - array_buffer, JSArrayBuffer::kWeakFirstViewOffset); + if (typed_array_obj != heap->undefined_value() && MustRecordSlots(heap)) { + Object** slot = HeapObject::RawField(array_buffer, + JSArrayBuffer::kWeakFirstViewOffset); heap->mark_compact_collector()->RecordSlot(slot, slot, typed_array_obj); } } @@ -418,53 +373,42 @@ struct WeakListVisitor<JSArrayBuffer> { static void VisitPhantomObject(Heap* heap, JSArrayBuffer* phantom) { Runtime::FreeArrayBuffer(heap->isolate(), phantom); } - - static int WeakNextOffset() { - return JSArrayBuffer::kWeakNextOffset; - } }; -template<> +template <> 
struct WeakListVisitor<AllocationSite> { static void SetWeakNext(AllocationSite* obj, Object* next) { obj->set_weak_next(next); } - static Object* WeakNext(AllocationSite* obj) { - return obj->weak_next(); - } + static Object* WeakNext(AllocationSite* obj) { return obj->weak_next(); } - static void VisitLiveObject(Heap* heap, - AllocationSite* site, - WeakObjectRetainer* retainer, - bool record_slots) {} + static int WeakNextOffset() { return AllocationSite::kWeakNextOffset; } - static void VisitPhantomObject(Heap* heap, AllocationSite* phantom) {} + static void VisitLiveObject(Heap*, AllocationSite*, WeakObjectRetainer*) {} - static int WeakNextOffset() { - return AllocationSite::kWeakNextOffset; - } + static void VisitPhantomObject(Heap*, AllocationSite*) {} }; -template Object* VisitWeakList<Code>( - Heap* heap, Object* list, WeakObjectRetainer* retainer, bool record_slots); +template Object* VisitWeakList<Code>(Heap* heap, Object* list, + WeakObjectRetainer* retainer); -template Object* VisitWeakList<JSFunction>( - Heap* heap, Object* list, WeakObjectRetainer* retainer, bool record_slots); +template Object* VisitWeakList<JSFunction>(Heap* heap, Object* list, + WeakObjectRetainer* retainer); -template Object* VisitWeakList<Context>( - Heap* heap, Object* list, WeakObjectRetainer* retainer, bool record_slots); +template Object* VisitWeakList<Context>(Heap* heap, Object* list, + WeakObjectRetainer* retainer); -template Object* VisitWeakList<JSArrayBuffer>( - Heap* heap, Object* list, WeakObjectRetainer* retainer, bool record_slots); +template Object* VisitWeakList<JSArrayBuffer>(Heap* heap, Object* list, + WeakObjectRetainer* retainer); -template Object* VisitWeakList<AllocationSite>( - Heap* heap, Object* list, WeakObjectRetainer* retainer, bool record_slots); - -} } // namespace v8::internal +template Object* VisitWeakList<AllocationSite>(Heap* heap, Object* list, + WeakObjectRetainer* retainer); +} +} // namespace v8::internal diff --git a/deps/v8/src/objects-visiting.h b/deps/v8/src/heap/objects-visiting.h similarity index 68% rename from deps/v8/src/objects-visiting.h rename to deps/v8/src/heap/objects-visiting.h index 05f82574cca..919a800c972 100644 --- a/deps/v8/src/objects-visiting.h +++ b/deps/v8/src/heap/objects-visiting.h @@ -5,7 +5,7 @@ #ifndef V8_OBJECTS_VISITING_H_ #define V8_OBJECTS_VISITING_H_ -#include "allocation.h" +#include "src/allocation.h" // This file provides base classes and auxiliary methods for defining // static object visitors used during GC. @@ -23,61 +23,60 @@ namespace internal { // Base class for all static visitors. 
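Two idioms carry this file: a policy trait (WeakListVisitor<T>) that tells the generic walker where a type's intrusive weak-next link lives, and the explicit 'template Object* VisitWeakList<...>' declarations that force the template bodies to be emitted in this translation unit for the linker. A compressed sketch of both, with hypothetical types:

    #include <cstdio>

    template <class T>
    struct WeakListVisitor;  // primary template left undefined on purpose

    struct Widget {
      Widget* next_widget;
    };

    // Per-type policy: only this specialization knows the link field.
    template <>
    struct WeakListVisitor<Widget> {
      static Widget* WeakNext(Widget* w) { return w->next_widget; }
      static void SetWeakNext(Widget* w, Widget* n) { w->next_widget = n; }
    };

    // Generic walker: talks to T only through its trait.
    template <class T>
    int CountList(T* head) {
      int n = 0;
      for (T* p = head; p != nullptr; p = WeakListVisitor<T>::WeakNext(p)) ++n;
      return n;
    }

    // Forced instantiation, mirroring 'template Object* VisitWeakList<Code>(...)'.
    template int CountList<Widget>(Widget*);

    int main() {
      Widget c = {nullptr}, b = {&c}, a = {&b};
      std::printf("%d\n", CountList(&a));  // prints 3
      return 0;
    }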
class StaticVisitorBase : public AllStatic { public: -#define VISITOR_ID_LIST(V) \ - V(SeqOneByteString) \ - V(SeqTwoByteString) \ - V(ShortcutCandidate) \ - V(ByteArray) \ - V(FreeSpace) \ - V(FixedArray) \ - V(FixedDoubleArray) \ - V(FixedTypedArray) \ - V(FixedFloat64Array) \ - V(ConstantPoolArray) \ - V(NativeContext) \ - V(AllocationSite) \ - V(DataObject2) \ - V(DataObject3) \ - V(DataObject4) \ - V(DataObject5) \ - V(DataObject6) \ - V(DataObject7) \ - V(DataObject8) \ - V(DataObject9) \ - V(DataObjectGeneric) \ - V(JSObject2) \ - V(JSObject3) \ - V(JSObject4) \ - V(JSObject5) \ - V(JSObject6) \ - V(JSObject7) \ - V(JSObject8) \ - V(JSObject9) \ - V(JSObjectGeneric) \ - V(Struct2) \ - V(Struct3) \ - V(Struct4) \ - V(Struct5) \ - V(Struct6) \ - V(Struct7) \ - V(Struct8) \ - V(Struct9) \ - V(StructGeneric) \ - V(ConsString) \ - V(SlicedString) \ - V(Symbol) \ - V(Oddball) \ - V(Code) \ - V(Map) \ - V(Cell) \ - V(PropertyCell) \ - V(SharedFunctionInfo) \ - V(JSFunction) \ - V(JSWeakMap) \ - V(JSWeakSet) \ - V(JSArrayBuffer) \ - V(JSTypedArray) \ - V(JSDataView) \ +#define VISITOR_ID_LIST(V) \ + V(SeqOneByteString) \ + V(SeqTwoByteString) \ + V(ShortcutCandidate) \ + V(ByteArray) \ + V(FreeSpace) \ + V(FixedArray) \ + V(FixedDoubleArray) \ + V(FixedTypedArray) \ + V(FixedFloat64Array) \ + V(ConstantPoolArray) \ + V(NativeContext) \ + V(AllocationSite) \ + V(DataObject2) \ + V(DataObject3) \ + V(DataObject4) \ + V(DataObject5) \ + V(DataObject6) \ + V(DataObject7) \ + V(DataObject8) \ + V(DataObject9) \ + V(DataObjectGeneric) \ + V(JSObject2) \ + V(JSObject3) \ + V(JSObject4) \ + V(JSObject5) \ + V(JSObject6) \ + V(JSObject7) \ + V(JSObject8) \ + V(JSObject9) \ + V(JSObjectGeneric) \ + V(Struct2) \ + V(Struct3) \ + V(Struct4) \ + V(Struct5) \ + V(Struct6) \ + V(Struct7) \ + V(Struct8) \ + V(Struct9) \ + V(StructGeneric) \ + V(ConsString) \ + V(SlicedString) \ + V(Symbol) \ + V(Oddball) \ + V(Code) \ + V(Map) \ + V(Cell) \ + V(PropertyCell) \ + V(SharedFunctionInfo) \ + V(JSFunction) \ + V(JSWeakCollection) \ + V(JSArrayBuffer) \ + V(JSTypedArray) \ + V(JSDataView) \ V(JSRegExp) // For data objects, JS objects and structs along with generic visitor which @@ -90,7 +89,7 @@ class StaticVisitorBase : public AllStatic { // id of specialized visitor from given instance size, base visitor id and // generic visitor's id. enum VisitorId { -#define VISITOR_ID_ENUM_DECL(id) kVisit##id, +#define VISITOR_ID_ENUM_DECL(id) kVisit##id, VISITOR_ID_LIST(VISITOR_ID_ENUM_DECL) #undef VISITOR_ID_ENUM_DECL kVisitorIdCount, @@ -113,15 +112,13 @@ class StaticVisitorBase : public AllStatic { // For visitors that allow specialization by size calculate VisitorId based // on size, base visitor id and generic visitor id. 
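The size-specialized visitor ids follow directly from the enum layout: ids for 2..9 word objects sit consecutively after the base id, and anything larger falls back to the generic id, which is exactly what the later STATIC_ASSERT '(generic - base + kMinObjectSizeInWords) == 10' pins down. A standalone sketch of the arithmetic in the GetVisitorIdForSize() that follows, with stand-in constants (the real values come from the pointer size and Page limits):

    #include <cassert>
    #include <cstdio>

    const int kPointerSize = 8;
    const int kPointerSizeLog2 = 3;
    const int kMinObjectSizeInWords = 2;

    // base is the id for 2-word objects, base+1 for 3 words, and so on;
    // generic catches everything with no specialization of its own.
    int GetVisitorIdForSize(int base, int generic, int object_size) {
      assert(object_size % kPointerSize == 0);  // the DCHECK(IsAligned(...))
      int specialization =
          base + (object_size >> kPointerSizeLog2) - kMinObjectSizeInWords;
      return (specialization < generic) ? specialization : generic;
    }

    int main() {
      // With base=100 and generic=108, ids 100..107 cover 2..9 word objects
      // (108 - 100 + 2 == 10, matching the STATIC_ASSERT). A 4-word (32-byte)
      // object picks 102; a huge object falls back to 108.
      std::printf("%d %d\n", GetVisitorIdForSize(100, 108, 32),
                  GetVisitorIdForSize(100, 108, 4096));
      return 0;
    }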
- static VisitorId GetVisitorIdForSize(VisitorId base, - VisitorId generic, + static VisitorId GetVisitorIdForSize(VisitorId base, VisitorId generic, int object_size) { - ASSERT((base == kVisitDataObject) || - (base == kVisitStruct) || + DCHECK((base == kVisitDataObject) || (base == kVisitStruct) || (base == kVisitJSObject)); - ASSERT(IsAligned(object_size, kPointerSize)); - ASSERT(kMinObjectSizeInWords * kPointerSize <= object_size); - ASSERT(object_size <= Page::kMaxRegularHeapObjectSize); + DCHECK(IsAligned(object_size, kPointerSize)); + DCHECK(kMinObjectSizeInWords * kPointerSize <= object_size); + DCHECK(object_size <= Page::kMaxRegularHeapObjectSize); const VisitorId specialization = static_cast<VisitorId>( base + (object_size >> kPointerSizeLog2) - kMinObjectSizeInWords); @@ -131,7 +128,7 @@ class StaticVisitorBase : public AllStatic { }; -template<typename Callback> +template <typename Callback> class VisitorDispatchTable { public: void CopyFrom(VisitorDispatchTable* other) { @@ -139,7 +136,7 @@ class VisitorDispatchTable { // every element of callbacks_ array will remain correct // pointer (memcpy might be implemented as a byte copying loop). for (int i = 0; i < StaticVisitorBase::kVisitorIdCount; i++) { - NoBarrier_Store(&callbacks_[i], other->callbacks_[i]); + base::NoBarrier_Store(&callbacks_[i], other->callbacks_[i]); } } @@ -152,14 +149,12 @@ class VisitorDispatchTable { } void Register(StaticVisitorBase::VisitorId id, Callback callback) { - ASSERT(id < StaticVisitorBase::kVisitorIdCount); // id is unsigned. - callbacks_[id] = reinterpret_cast<AtomicWord>(callback); + DCHECK(id < StaticVisitorBase::kVisitorIdCount); // id is unsigned. + callbacks_[id] = reinterpret_cast<base::AtomicWord>(callback); } - template<typename Visitor, - StaticVisitorBase::VisitorId base, - StaticVisitorBase::VisitorId generic, - int object_size_in_words> + template <typename Visitor, StaticVisitorBase::VisitorId base, + StaticVisitorBase::VisitorId generic, int object_size_in_words> void RegisterSpecialization() { static const int size = object_size_in_words * kPointerSize; Register(StaticVisitorBase::GetVisitorIdForSize(base, generic, size), @@ -167,12 +162,11 @@ class VisitorDispatchTable { } - template<typename Visitor, - StaticVisitorBase::VisitorId base, - StaticVisitorBase::VisitorId generic> + template <typename Visitor, StaticVisitorBase::VisitorId base, + StaticVisitorBase::VisitorId generic> void RegisterSpecializations() { - STATIC_ASSERT( - (generic - base + StaticVisitorBase::kMinObjectSizeInWords) == 10); + STATIC_ASSERT((generic - base + StaticVisitorBase::kMinObjectSizeInWords) == + 10); RegisterSpecialization<Visitor, base, generic, 2>(); RegisterSpecialization<Visitor, base, generic, 3>(); RegisterSpecialization<Visitor, base, generic, 4>(); @@ -185,60 +179,50 @@ class VisitorDispatchTable { } private: - AtomicWord callbacks_[StaticVisitorBase::kVisitorIdCount]; + base::AtomicWord callbacks_[StaticVisitorBase::kVisitorIdCount]; }; -template<typename StaticVisitor> +template <typename StaticVisitor> class BodyVisitorBase : public AllStatic { public: - INLINE(static void IteratePointers(Heap* heap, - HeapObject* object, - int start_offset, - int end_offset)) { - Object** start_slot = reinterpret_cast<Object**>(object->address() + - start_offset); - Object** end_slot = reinterpret_cast<Object**>(object->address() + - end_offset); + INLINE(static void IteratePointers(Heap* heap, HeapObject* object, + int start_offset, int end_offset)) { + Object** start_slot = + 
reinterpret_cast<Object**>(object->address() + start_offset); + Object** end_slot = + reinterpret_cast<Object**>(object->address() + end_offset); StaticVisitor::VisitPointers(heap, start_slot, end_slot); } }; -template<typename StaticVisitor, typename BodyDescriptor, typename ReturnType> +template <typename StaticVisitor, typename BodyDescriptor, typename ReturnType> class FlexibleBodyVisitor : public BodyVisitorBase<StaticVisitor> { public: INLINE(static ReturnType Visit(Map* map, HeapObject* object)) { int object_size = BodyDescriptor::SizeOf(map, object); BodyVisitorBase<StaticVisitor>::IteratePointers( - map->GetHeap(), - object, - BodyDescriptor::kStartOffset, - object_size); + map->GetHeap(), object, BodyDescriptor::kStartOffset, object_size); return static_cast<ReturnType>(object_size); } - template<int object_size> + template <int object_size> static inline ReturnType VisitSpecialized(Map* map, HeapObject* object) { - ASSERT(BodyDescriptor::SizeOf(map, object) == object_size); + DCHECK(BodyDescriptor::SizeOf(map, object) == object_size); BodyVisitorBase<StaticVisitor>::IteratePointers( - map->GetHeap(), - object, - BodyDescriptor::kStartOffset, - object_size); + map->GetHeap(), object, BodyDescriptor::kStartOffset, object_size); return static_cast<ReturnType>(object_size); } }; -template<typename StaticVisitor, typename BodyDescriptor, typename ReturnType> +template <typename StaticVisitor, typename BodyDescriptor, typename ReturnType> class FixedBodyVisitor : public BodyVisitorBase<StaticVisitor> { public: INLINE(static ReturnType Visit(Map* map, HeapObject* object)) { BodyVisitorBase<StaticVisitor>::IteratePointers( - map->GetHeap(), - object, - BodyDescriptor::kStartOffset, + map->GetHeap(), object, BodyDescriptor::kStartOffset, BodyDescriptor::kEndOffset); return static_cast<ReturnType>(BodyDescriptor::kSize); } @@ -261,7 +245,7 @@ class FixedBodyVisitor : public BodyVisitorBase<StaticVisitor> { // (see http://en.wikipedia.org/wiki/Curiously_recurring_template_pattern). // We use CRTP to guarantee aggressive compile time optimizations (i.e. // inlining and specialization of StaticVisitor::VisitPointers methods). -template<typename StaticVisitor> +template <typename StaticVisitor> class StaticNewSpaceVisitor : public StaticVisitorBase { public: static void Initialize(); @@ -284,11 +268,9 @@ class StaticNewSpaceVisitor : public StaticVisitorBase { // Don't visit code entry. We are using this visitor only during scavenges. 
VisitPointers( - heap, - HeapObject::RawField(object, - JSFunction::kCodeEntryOffset + kPointerSize), - HeapObject::RawField(object, - JSFunction::kNonWeakFieldsEndOffset)); + heap, HeapObject::RawField(object, + JSFunction::kCodeEntryOffset + kPointerSize), + HeapObject::RawField(object, JSFunction::kNonWeakFieldsEndOffset)); return JSFunction::kSize; } @@ -310,13 +292,13 @@ class StaticNewSpaceVisitor : public StaticVisitorBase { } INLINE(static int VisitSeqOneByteString(Map* map, HeapObject* object)) { - return SeqOneByteString::cast(object)-> - SeqOneByteStringSize(map->instance_type()); + return SeqOneByteString::cast(object) + ->SeqOneByteStringSize(map->instance_type()); } INLINE(static int VisitSeqTwoByteString(Map* map, HeapObject* object)) { - return SeqTwoByteString::cast(object)-> - SeqTwoByteStringSize(map->instance_type()); + return SeqTwoByteString::cast(object) + ->SeqTwoByteStringSize(map->instance_type()); } INLINE(static int VisitFreeSpace(Map* map, HeapObject* object)) { @@ -329,7 +311,7 @@ class StaticNewSpaceVisitor : public StaticVisitorBase { class DataObjectVisitor { public: - template<int object_size> + template <int object_size> static inline int VisitSpecialized(Map* map, HeapObject* object) { return object_size; } @@ -339,13 +321,11 @@ class StaticNewSpaceVisitor : public StaticVisitorBase { } }; - typedef FlexibleBodyVisitor<StaticVisitor, - StructBodyDescriptor, - int> StructVisitor; + typedef FlexibleBodyVisitor<StaticVisitor, StructBodyDescriptor, int> + StructVisitor; - typedef FlexibleBodyVisitor<StaticVisitor, - JSObject::BodyDescriptor, - int> JSObjectVisitor; + typedef FlexibleBodyVisitor<StaticVisitor, JSObject::BodyDescriptor, int> + JSObjectVisitor; typedef int (*Callback)(Map* map, HeapObject* object); @@ -353,7 +333,7 @@ class StaticNewSpaceVisitor : public StaticVisitorBase { }; -template<typename StaticVisitor> +template <typename StaticVisitor> VisitorDispatchTable<typename StaticNewSpaceVisitor<StaticVisitor>::Callback> StaticNewSpaceVisitor<StaticVisitor>::table_; @@ -372,7 +352,7 @@ VisitorDispatchTable<typename StaticNewSpaceVisitor<StaticVisitor>::Callback> // } // // This is an example of Curiously recurring template pattern. -template<typename StaticVisitor> +template <typename StaticVisitor> class StaticMarkingVisitor : public StaticVisitorBase { public: static void Initialize(); @@ -382,17 +362,16 @@ class StaticMarkingVisitor : public StaticVisitorBase { } INLINE(static void VisitPropertyCell(Map* map, HeapObject* object)); - INLINE(static void VisitAllocationSite(Map* map, HeapObject* object)); INLINE(static void VisitCodeEntry(Heap* heap, Address entry_address)); INLINE(static void VisitEmbeddedPointer(Heap* heap, RelocInfo* rinfo)); INLINE(static void VisitCell(Heap* heap, RelocInfo* rinfo)); INLINE(static void VisitDebugTarget(Heap* heap, RelocInfo* rinfo)); INLINE(static void VisitCodeTarget(Heap* heap, RelocInfo* rinfo)); INLINE(static void VisitCodeAgeSequence(Heap* heap, RelocInfo* rinfo)); - INLINE(static void VisitExternalReference(RelocInfo* rinfo)) { } - INLINE(static void VisitRuntimeEntry(RelocInfo* rinfo)) { } + INLINE(static void VisitExternalReference(RelocInfo* rinfo)) {} + INLINE(static void VisitRuntimeEntry(RelocInfo* rinfo)) {} // Skip the weak next code link in a code object. - INLINE(static void VisitNextCodeLink(Heap* heap, Object** slot)) { } + INLINE(static void VisitNextCodeLink(Heap* heap, Object** slot)) {} // TODO(mstarzinger): This should be made protected once refactoring is done. 
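Both static visitor classes use the CRTP called out in the comments above: the base template is parameterized on the concrete visitor, so calls like StaticVisitor::VisitPointers() bind at compile time and can be inlined, unlike a virtual visitor. A minimal sketch of that shape with hypothetical names:

    #include <cstdio>

    // The base is parameterized on the derived visitor, so Visit resolves
    // StaticVisitor::VisitPointer statically (no virtual dispatch).
    template <typename StaticVisitor>
    class VisitorBase {
     public:
      static void IteratePointers(void** start, void** end) {
        for (void** p = start; p < end; ++p) StaticVisitor::VisitPointer(p);
      }
    };

    class PrintingVisitor : public VisitorBase<PrintingVisitor> {
     public:
      static void VisitPointer(void** slot) { std::printf("slot %p\n", *slot); }
    };

    int main() {
      void* slots[2] = {nullptr, nullptr};
      // Calls PrintingVisitor::VisitPointer directly; fully inlinable.
      PrintingVisitor::IteratePointers(slots, slots + 2);
      return 0;
    }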
// Mark non-optimize code for functions inlined into the given optimized @@ -404,6 +383,8 @@ class StaticMarkingVisitor : public StaticVisitorBase { INLINE(static void VisitCode(Map* map, HeapObject* object)); INLINE(static void VisitSharedFunctionInfo(Map* map, HeapObject* object)); INLINE(static void VisitConstantPoolArray(Map* map, HeapObject* object)); + INLINE(static void VisitAllocationSite(Map* map, HeapObject* object)); + INLINE(static void VisitWeakCollection(Map* map, HeapObject* object)); INLINE(static void VisitJSFunction(Map* map, HeapObject* object)); INLINE(static void VisitJSRegExp(Map* map, HeapObject* object)); INLINE(static void VisitJSArrayBuffer(Map* map, HeapObject* object)); @@ -429,25 +410,20 @@ class StaticMarkingVisitor : public StaticVisitorBase { class DataObjectVisitor { public: - template<int size> - static inline void VisitSpecialized(Map* map, HeapObject* object) { - } + template <int size> + static inline void VisitSpecialized(Map* map, HeapObject* object) {} - INLINE(static void Visit(Map* map, HeapObject* object)) { - } + INLINE(static void Visit(Map* map, HeapObject* object)) {} }; - typedef FlexibleBodyVisitor<StaticVisitor, - FixedArray::BodyDescriptor, - void> FixedArrayVisitor; + typedef FlexibleBodyVisitor<StaticVisitor, FixedArray::BodyDescriptor, void> + FixedArrayVisitor; - typedef FlexibleBodyVisitor<StaticVisitor, - JSObject::BodyDescriptor, - void> JSObjectVisitor; + typedef FlexibleBodyVisitor<StaticVisitor, JSObject::BodyDescriptor, void> + JSObjectVisitor; - typedef FlexibleBodyVisitor<StaticVisitor, - StructBodyDescriptor, - void> StructObjectVisitor; + typedef FlexibleBodyVisitor<StaticVisitor, StructBodyDescriptor, void> + StructObjectVisitor; typedef void (*Callback)(Map* map, HeapObject* object); @@ -455,7 +431,7 @@ class StaticMarkingVisitor : public StaticVisitorBase { }; -template<typename StaticVisitor> +template <typename StaticVisitor> VisitorDispatchTable<typename StaticMarkingVisitor<StaticVisitor>::Callback> StaticMarkingVisitor<StaticVisitor>::table_; @@ -469,11 +445,8 @@ class WeakObjectRetainer; // pointers. The template parameter T is a WeakListVisitor that defines how to // access the next-element pointers. template <class T> -Object* VisitWeakList(Heap* heap, - Object* list, - WeakObjectRetainer* retainer, - bool record_slots); - -} } // namespace v8::internal +Object* VisitWeakList(Heap* heap, Object* list, WeakObjectRetainer* retainer); +} +} // namespace v8::internal #endif // V8_OBJECTS_VISITING_H_ diff --git a/deps/v8/src/spaces-inl.h b/deps/v8/src/heap/spaces-inl.h similarity index 72% rename from deps/v8/src/spaces-inl.h rename to deps/v8/src/heap/spaces-inl.h index da9c03d9137..56c2bad70c5 100644 --- a/deps/v8/src/spaces-inl.h +++ b/deps/v8/src/heap/spaces-inl.h @@ -2,13 +2,13 @@ // Use of this source code is governed by a BSD-style license that can be // found in the LICENSE file. 
-#ifndef V8_SPACES_INL_H_ -#define V8_SPACES_INL_H_ +#ifndef V8_HEAP_SPACES_INL_H_ +#define V8_HEAP_SPACES_INL_H_ -#include "heap-profiler.h" -#include "isolate.h" -#include "spaces.h" -#include "v8memory.h" +#include "src/heap/spaces.h" +#include "src/heap-profiler.h" +#include "src/isolate.h" +#include "src/v8memory.h" namespace v8 { namespace internal { @@ -31,16 +31,14 @@ void Bitmap::Clear(MemoryChunk* chunk) { PageIterator::PageIterator(PagedSpace* space) : space_(space), prev_page_(&space->anchor_), - next_page_(prev_page_->next_page()) { } + next_page_(prev_page_->next_page()) {} -bool PageIterator::has_next() { - return next_page_ != &space_->anchor_; -} +bool PageIterator::has_next() { return next_page_ != &space_->anchor_; } Page* PageIterator::next() { - ASSERT(has_next()); + DCHECK(has_next()); prev_page_ = next_page_; next_page_ = next_page_->next_page(); return prev_page_; @@ -54,12 +52,12 @@ Page* PageIterator::next() { NewSpacePageIterator::NewSpacePageIterator(NewSpace* space) : prev_page_(NewSpacePage::FromAddress(space->ToSpaceStart())->prev_page()), next_page_(NewSpacePage::FromAddress(space->ToSpaceStart())), - last_page_(NewSpacePage::FromLimit(space->ToSpaceEnd())) { } + last_page_(NewSpacePage::FromLimit(space->ToSpaceEnd())) {} NewSpacePageIterator::NewSpacePageIterator(SemiSpace* space) : prev_page_(space->anchor()), next_page_(prev_page_->next_page()), - last_page_(prev_page_->prev_page()) { } + last_page_(prev_page_->prev_page()) {} NewSpacePageIterator::NewSpacePageIterator(Address start, Address limit) : prev_page_(NewSpacePage::FromAddress(start)->prev_page()), @@ -69,13 +67,11 @@ NewSpacePageIterator::NewSpacePageIterator(Address start, Address limit) } -bool NewSpacePageIterator::has_next() { - return prev_page_ != last_page_; -} +bool NewSpacePageIterator::has_next() { return prev_page_ != last_page_; } NewSpacePage* NewSpacePageIterator::next() { - ASSERT(has_next()); + DCHECK(has_next()); prev_page_ = next_page_; next_page_ = next_page_->next_page(); return prev_page_; @@ -93,9 +89,9 @@ HeapObject* HeapObjectIterator::FromCurrentPage() { HeapObject* obj = HeapObject::FromAddress(cur_addr_); int obj_size = (size_func_ == NULL) ? 
obj->Size() : size_func_(obj); cur_addr_ += obj_size; - ASSERT(cur_addr_ <= cur_end_); + DCHECK(cur_addr_ <= cur_end_); if (!obj->IsFiller()) { - ASSERT_OBJECT_SIZE(obj_size); + DCHECK_OBJECT_SIZE(obj_size); return obj; } } @@ -109,27 +105,26 @@ HeapObject* HeapObjectIterator::FromCurrentPage() { #ifdef ENABLE_HEAP_PROTECTION void MemoryAllocator::Protect(Address start, size_t size) { - OS::Protect(start, size); + base::OS::Protect(start, size); } -void MemoryAllocator::Unprotect(Address start, - size_t size, +void MemoryAllocator::Unprotect(Address start, size_t size, Executability executable) { - OS::Unprotect(start, size, executable); + base::OS::Unprotect(start, size, executable); } void MemoryAllocator::ProtectChunkFromPage(Page* page) { int id = GetChunkId(page); - OS::Protect(chunks_[id].address(), chunks_[id].size()); + base::OS::Protect(chunks_[id].address(), chunks_[id].size()); } void MemoryAllocator::UnprotectChunkFromPage(Page* page) { int id = GetChunkId(page); - OS::Unprotect(chunks_[id].address(), chunks_[id].size(), - chunks_[id].owner()->executable() == EXECUTABLE); + base::OS::Unprotect(chunks_[id].address(), chunks_[id].size(), + chunks_[id].owner()->executable() == EXECUTABLE); } #endif @@ -137,13 +132,11 @@ void MemoryAllocator::UnprotectChunkFromPage(Page* page) { // -------------------------------------------------------------------------- // PagedSpace -Page* Page::Initialize(Heap* heap, - MemoryChunk* chunk, - Executability executable, +Page* Page::Initialize(Heap* heap, MemoryChunk* chunk, Executability executable, PagedSpace* owner) { Page* page = reinterpret_cast<Page*>(chunk); - ASSERT(page->area_size() <= kMaxRegularHeapObjectSize); - ASSERT(chunk->owner() == owner); + DCHECK(page->area_size() <= kMaxRegularHeapObjectSize); + DCHECK(chunk->owner() == owner); owner->IncreaseCapacity(page->area_size()); owner->Free(page->area_start(), page->area_size()); @@ -209,29 +202,29 @@ PointerChunkIterator::PointerChunkIterator(Heap* heap) : state_(kOldPointerState), old_pointer_iterator_(heap->old_pointer_space()), map_iterator_(heap->map_space()), - lo_iterator_(heap->lo_space()) { } + lo_iterator_(heap->lo_space()) {} Page* Page::next_page() { - ASSERT(next_chunk()->owner() == owner()); + DCHECK(next_chunk()->owner() == owner()); return static_cast<Page*>(next_chunk()); } Page* Page::prev_page() { - ASSERT(prev_chunk()->owner() == owner()); + DCHECK(prev_chunk()->owner() == owner()); return static_cast<Page*>(prev_chunk()); } void Page::set_next_page(Page* page) { - ASSERT(page->owner() == owner()); + DCHECK(page->owner() == owner()); set_next_chunk(page); } void Page::set_prev_page(Page* page) { - ASSERT(page->owner() == owner()); + DCHECK(page->owner() == owner()); set_prev_chunk(page); } @@ -253,26 +246,14 @@ HeapObject* PagedSpace::AllocateLinearly(int size_in_bytes) { // Raw allocation. 
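The PagedSpace::AllocateRaw() rewrite just below collapses three copies of the SkipList epilogue into one: try the bump pointer, then the free list, then the slow path, and post-process whichever succeeded. A sketch of that control flow over a toy arena; all helpers here are stand-ins:

    #include <cstdio>

    static char arena[256];
    static int top = 0;

    // Bump-pointer fast path over a fixed arena (stand-in for the linear area).
    void* AllocateLinearly(int size) {
      if (top + size > static_cast<int>(sizeof(arena))) return nullptr;
      void* p = arena + top;
      top += size;
      return p;
    }
    void* FreeListAllocate(int) { return nullptr; }  // empty free list here
    void* SlowAllocateRaw(int) { return nullptr; }   // no GC in this sketch
    void PostProcess(void* p, int size) {            // stand-in for SkipList::Update
      std::printf("allocated %d bytes at %p\n", size, p);
    }

    void* AllocateRaw(int size_in_bytes) {
      void* object = AllocateLinearly(size_in_bytes);
      if (object == nullptr) {
        object = FreeListAllocate(size_in_bytes);
        if (object == nullptr) {
          object = SlowAllocateRaw(size_in_bytes);
        }
      }
      // One shared epilogue for whichever path succeeded; the old code
      // duplicated it in all three branches.
      if (object != nullptr) PostProcess(object, size_in_bytes);
      return object;
    }

    int main() {
      AllocateRaw(64);   // served by the bump pointer
      AllocateRaw(512);  // all three paths fail; returns nullptr
      return 0;
    }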
AllocationResult PagedSpace::AllocateRaw(int size_in_bytes) { HeapObject* object = AllocateLinearly(size_in_bytes); - if (object != NULL) { - if (identity() == CODE_SPACE) { - SkipList::Update(object->address(), size_in_bytes); - } - return object; - } - ASSERT(!heap()->linear_allocation() || - (anchor_.next_chunk() == &anchor_ && - anchor_.prev_chunk() == &anchor_)); - - object = free_list_.Allocate(size_in_bytes); - if (object != NULL) { - if (identity() == CODE_SPACE) { - SkipList::Update(object->address(), size_in_bytes); + if (object == NULL) { + object = free_list_.Allocate(size_in_bytes); + if (object == NULL) { + object = SlowAllocateRaw(size_in_bytes); } - return object; } - object = SlowAllocateRaw(size_in_bytes); if (object != NULL) { if (identity() == CODE_SPACE) { SkipList::Update(object->address(), size_in_bytes); @@ -290,21 +271,6 @@ AllocationResult PagedSpace::AllocateRaw(int size_in_bytes) { AllocationResult NewSpace::AllocateRaw(int size_in_bytes) { Address old_top = allocation_info_.top(); -#ifdef DEBUG - // If we are stressing compaction we waste some memory in new space - // in order to get more frequent GCs. - if (FLAG_stress_compaction && !heap()->linear_allocation()) { - if (allocation_info_.limit() - old_top >= size_in_bytes * 4) { - int filler_size = size_in_bytes * 4; - for (int i = 0; i < filler_size; i += kPointerSize) { - *(reinterpret_cast<Object**>(old_top + i)) = - heap()->one_pointer_filler_map(); - } - old_top += filler_size; - allocation_info_.set_top(allocation_info_.top() + filler_size); - } - } -#endif if (allocation_info_.limit() - old_top < size_in_bytes) { return SlowAllocateRaw(size_in_bytes); @@ -312,7 +278,7 @@ AllocationResult NewSpace::AllocateRaw(int size_in_bytes) { HeapObject* obj = HeapObject::FromAddress(old_top); allocation_info_.set_top(allocation_info_.top() + size_in_bytes); - ASSERT_SEMISPACE_ALLOCATION_INFO(allocation_info_, to_space_); + DCHECK_SEMISPACE_ALLOCATION_INFO(allocation_info_, to_space_); return obj; } @@ -332,11 +298,11 @@ intptr_t LargeObjectSpace::Available() { bool FreeListNode::IsFreeListNode(HeapObject* object) { Map* map = object->map(); Heap* heap = object->GetHeap(); - return map == heap->raw_unchecked_free_space_map() - || map == heap->raw_unchecked_one_pointer_filler_map() - || map == heap->raw_unchecked_two_pointer_filler_map(); + return map == heap->raw_unchecked_free_space_map() || + map == heap->raw_unchecked_one_pointer_filler_map() || + map == heap->raw_unchecked_two_pointer_filler_map(); } +} +} // namespace v8::internal -} } // namespace v8::internal - -#endif // V8_SPACES_INL_H_ +#endif // V8_HEAP_SPACES_INL_H_ diff --git a/deps/v8/src/spaces.cc b/deps/v8/src/heap/spaces.cc similarity index 79% rename from deps/v8/src/spaces.cc rename to deps/v8/src/heap/spaces.cc index fd319ab717a..9be53e03f28 100644 --- a/deps/v8/src/spaces.cc +++ b/deps/v8/src/heap/spaces.cc @@ -2,13 +2,13 @@ // Use of this source code is governed by a BSD-style license that can be // found in the LICENSE file. -#include "v8.h" +#include "src/v8.h" -#include "full-codegen.h" -#include "macro-assembler.h" -#include "mark-compact.h" -#include "msan.h" -#include "platform.h" +#include "src/base/platform/platform.h" +#include "src/full-codegen.h" +#include "src/heap/mark-compact.h" +#include "src/macro-assembler.h" +#include "src/msan.h" namespace v8 { namespace internal { @@ -22,11 +22,7 @@ HeapObjectIterator::HeapObjectIterator(PagedSpace* space) { // just an anchor for the double linked page list. 
Initialize as if we have // reached the end of the anchor page, then the first iteration will move on // to the first page. - Initialize(space, - NULL, - NULL, - kAllPagesInSpace, - NULL); + Initialize(space, NULL, NULL, kAllPagesInSpace, NULL); } @@ -36,38 +32,32 @@ HeapObjectIterator::HeapObjectIterator(PagedSpace* space, // just an anchor for the double linked page list. Initialize the current // address and end as NULL, then the first iteration will move on // to the first page. - Initialize(space, - NULL, - NULL, - kAllPagesInSpace, - size_func); + Initialize(space, NULL, NULL, kAllPagesInSpace, size_func); } HeapObjectIterator::HeapObjectIterator(Page* page, HeapObjectCallback size_func) { Space* owner = page->owner(); - ASSERT(owner == page->heap()->old_pointer_space() || + DCHECK(owner == page->heap()->old_pointer_space() || owner == page->heap()->old_data_space() || owner == page->heap()->map_space() || owner == page->heap()->cell_space() || owner == page->heap()->property_cell_space() || owner == page->heap()->code_space()); - Initialize(reinterpret_cast<PagedSpace*>(owner), - page->area_start(), - page->area_end(), - kOnePageOnly, - size_func); - ASSERT(page->WasSweptPrecisely()); + Initialize(reinterpret_cast<PagedSpace*>(owner), page->area_start(), + page->area_end(), kOnePageOnly, size_func); + DCHECK(page->WasSweptPrecisely() || + (static_cast<PagedSpace*>(owner)->swept_precisely() && + page->SweepingCompleted())); } -void HeapObjectIterator::Initialize(PagedSpace* space, - Address cur, Address end, +void HeapObjectIterator::Initialize(PagedSpace* space, Address cur, Address end, HeapObjectIterator::PageMode mode, HeapObjectCallback size_f) { // Check that we actually can iterate this space. - ASSERT(!space->was_swept_conservatively()); + DCHECK(space->swept_precisely()); space_ = space; cur_addr_ = cur; @@ -80,20 +70,22 @@ void HeapObjectIterator::Initialize(PagedSpace* space, // We have hit the end of the page and should advance to the next block of // objects. This happens at the end of the page. bool HeapObjectIterator::AdvanceToNextPage() { - ASSERT(cur_addr_ == cur_end_); + DCHECK(cur_addr_ == cur_end_); if (page_mode_ == kOnePageOnly) return false; Page* cur_page; if (cur_addr_ == NULL) { cur_page = space_->anchor(); } else { cur_page = Page::FromAddress(cur_addr_ - 1); - ASSERT(cur_addr_ == cur_page->area_end()); + DCHECK(cur_addr_ == cur_page->area_end()); } cur_page = cur_page->next_page(); if (cur_page == space_->anchor()) return false; cur_addr_ = cur_page->area_start(); cur_end_ = cur_page->area_end(); - ASSERT(cur_page->WasSweptPrecisely()); + DCHECK(cur_page->WasSweptPrecisely() || + (static_cast<PagedSpace*>(cur_page->owner())->swept_precisely() && + cur_page->SweepingCompleted())); return true; } @@ -107,24 +99,25 @@ CodeRange::CodeRange(Isolate* isolate) code_range_(NULL), free_list_(0), allocation_list_(0), - current_allocation_block_index_(0) { -} + current_allocation_block_index_(0) {} bool CodeRange::SetUp(size_t requested) { - ASSERT(code_range_ == NULL); + DCHECK(code_range_ == NULL); if (requested == 0) { - // On 64-bit platform(s), we put all code objects in a 512 MB range of - // virtual address space, so that they can call each other with near calls. - if (kIs64BitArch) { - requested = 512 * MB; + // When a target requires the code range feature, we put all code objects + // in a kMaximalCodeRangeSize range of virtual address space, so that + // they can call each other with near calls. 
+ if (kRequiresCodeRange) { + requested = kMaximalCodeRangeSize; } else { return true; } } - code_range_ = new VirtualMemory(requested); + DCHECK(!kRequiresCodeRange || requested <= kMaximalCodeRangeSize); + code_range_ = new base::VirtualMemory(requested); CHECK(code_range_ != NULL); if (!code_range_->IsReserved()) { delete code_range_; @@ -133,9 +126,8 @@ bool CodeRange::SetUp(size_t requested) { } // We are sure that we have mapped a block of requested addresses. - ASSERT(code_range_->size() == requested); - LOG(isolate_, - NewEvent("CodeRange", code_range_->address(), requested)); + DCHECK(code_range_->size() == requested); + LOG(isolate_, NewEvent("CodeRange", code_range_->address(), requested)); Address base = reinterpret_cast<Address>(code_range_->address()); Address aligned_base = RoundUp(reinterpret_cast<Address>(code_range_->address()), @@ -200,8 +192,8 @@ bool CodeRange::GetNextAllocationBlock(size_t requested) { Address CodeRange::AllocateRawMemory(const size_t requested_size, const size_t commit_size, size_t* allocated) { - ASSERT(commit_size <= requested_size); - ASSERT(current_allocation_block_index_ < allocation_list_.length()); + DCHECK(commit_size <= requested_size); + DCHECK(current_allocation_block_index_ < allocation_list_.length()); if (requested_size > allocation_list_[current_allocation_block_index_].size) { // Find an allocation block large enough. if (!GetNextAllocationBlock(requested_size)) return NULL; @@ -215,12 +207,10 @@ Address CodeRange::AllocateRawMemory(const size_t requested_size, } else { *allocated = aligned_requested; } - ASSERT(*allocated <= current.size); - ASSERT(IsAddressAligned(current.start, MemoryChunk::kAlignment)); - if (!isolate_->memory_allocator()->CommitExecutableMemory(code_range_, - current.start, - commit_size, - *allocated)) { + DCHECK(*allocated <= current.size); + DCHECK(IsAddressAligned(current.start, MemoryChunk::kAlignment)); + if (!isolate_->memory_allocator()->CommitExecutableMemory( + code_range_, current.start, commit_size, *allocated)) { *allocated = 0; return NULL; } @@ -245,17 +235,17 @@ bool CodeRange::UncommitRawMemory(Address start, size_t length) { void CodeRange::FreeRawMemory(Address address, size_t length) { - ASSERT(IsAddressAligned(address, MemoryChunk::kAlignment)); + DCHECK(IsAddressAligned(address, MemoryChunk::kAlignment)); free_list_.Add(FreeBlock(address, length)); code_range_->Uncommit(address, length); } void CodeRange::TearDown() { - delete code_range_; // Frees all memory in the virtual memory range. - code_range_ = NULL; - free_list_.Free(); - allocation_list_.Free(); + delete code_range_; // Frees all memory in the virtual memory range. + code_range_ = NULL; + free_list_.Free(); + allocation_list_.Free(); } @@ -270,14 +260,13 @@ MemoryAllocator::MemoryAllocator(Isolate* isolate) size_(0), size_executable_(0), lowest_ever_allocated_(reinterpret_cast<void*>(-1)), - highest_ever_allocated_(reinterpret_cast<void*>(0)) { -} + highest_ever_allocated_(reinterpret_cast<void*>(0)) {} bool MemoryAllocator::SetUp(intptr_t capacity, intptr_t capacity_executable) { capacity_ = RoundUp(capacity, Page::kPageSize); capacity_executable_ = RoundUp(capacity_executable, Page::kPageSize); - ASSERT_GE(capacity_, capacity_executable_); + DCHECK_GE(capacity_, capacity_executable_); size_ = 0; size_executable_ = 0; @@ -288,18 +277,18 @@ bool MemoryAllocator::SetUp(intptr_t capacity, intptr_t capacity_executable) { void MemoryAllocator::TearDown() { // Check that spaces were torn down before MemoryAllocator. 
- ASSERT(size_ == 0); + DCHECK(size_ == 0); // TODO(gc) this will be true again when we fix FreeMemory. - // ASSERT(size_executable_ == 0); + // DCHECK(size_executable_ == 0); capacity_ = 0; capacity_executable_ = 0; } -bool MemoryAllocator::CommitMemory(Address base, - size_t size, +bool MemoryAllocator::CommitMemory(Address base, size_t size, Executability executable) { - if (!VirtualMemory::CommitRegion(base, size, executable == EXECUTABLE)) { + if (!base::VirtualMemory::CommitRegion(base, size, + executable == EXECUTABLE)) { return false; } UpdateAllocatedSpaceLimits(base, base + size); @@ -307,81 +296,79 @@ bool MemoryAllocator::CommitMemory(Address base, } -void MemoryAllocator::FreeMemory(VirtualMemory* reservation, +void MemoryAllocator::FreeMemory(base::VirtualMemory* reservation, Executability executable) { // TODO(gc) make code_range part of memory allocator? - ASSERT(reservation->IsReserved()); + DCHECK(reservation->IsReserved()); size_t size = reservation->size(); - ASSERT(size_ >= size); + DCHECK(size_ >= size); size_ -= size; isolate_->counters()->memory_allocated()->Decrement(static_cast<int>(size)); if (executable == EXECUTABLE) { - ASSERT(size_executable_ >= size); + DCHECK(size_executable_ >= size); size_executable_ -= size; } // Code which is part of the code-range does not have its own VirtualMemory. - ASSERT(!isolate_->code_range()->contains( - static_cast<Address>(reservation->address()))); - ASSERT(executable == NOT_EXECUTABLE || !isolate_->code_range()->exists()); + DCHECK(isolate_->code_range() == NULL || + !isolate_->code_range()->contains( + static_cast<Address>(reservation->address()))); + DCHECK(executable == NOT_EXECUTABLE || isolate_->code_range() == NULL || + !isolate_->code_range()->valid()); reservation->Release(); } -void MemoryAllocator::FreeMemory(Address base, - size_t size, +void MemoryAllocator::FreeMemory(Address base, size_t size, Executability executable) { // TODO(gc) make code_range part of memory allocator? 
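Both FreeMemory() overloads here make the same ownership decision, and the patch adds a NULL guard because an isolate may now have no code range at all: executable chunks carved out of the shared CodeRange go back on its free list, while everything else is released to the OS. A simplified sketch with stand-in types:

    #include <cstddef>
    #include <cstdio>

    // Simplified stand-in for CodeRange: one reserved span plus a recycle hook.
    struct CodeRange {
      char* start;
      size_t length;
      bool contains(char* p) const { return p >= start && p < start + length; }
      void FreeRawMemory(char*, size_t n) {
        std::printf("recycled %zu bytes inside the code range\n", n);
      }
    };

    // Stand-in for base::VirtualMemory::ReleaseRegion.
    void ReleaseRegion(char*, size_t n) {
      std::printf("released %zu bytes to the OS\n", n);
    }

    void FreeMemory(CodeRange* range, char* base, size_t size) {
      if (range != nullptr && range->contains(base)) {
        range->FreeRawMemory(base, size);  // back on the range's free list
      } else {
        ReleaseRegion(base, size);  // ordinary mapping: hand it back to the OS
      }
    }

    int main() {
      static char block[128];
      CodeRange range = {block, sizeof(block)};
      FreeMemory(&range, block, 64);   // inside the range
      FreeMemory(nullptr, block, 64);  // no code range configured
      return 0;
    }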
- ASSERT(size_ >= size); + DCHECK(size_ >= size); size_ -= size; isolate_->counters()->memory_allocated()->Decrement(static_cast<int>(size)); if (executable == EXECUTABLE) { - ASSERT(size_executable_ >= size); + DCHECK(size_executable_ >= size); size_executable_ -= size; } - if (isolate_->code_range()->contains(static_cast<Address>(base))) { - ASSERT(executable == EXECUTABLE); + if (isolate_->code_range() != NULL && + isolate_->code_range()->contains(static_cast<Address>(base))) { + DCHECK(executable == EXECUTABLE); isolate_->code_range()->FreeRawMemory(base, size); } else { - ASSERT(executable == NOT_EXECUTABLE || !isolate_->code_range()->exists()); - bool result = VirtualMemory::ReleaseRegion(base, size); + DCHECK(executable == NOT_EXECUTABLE || isolate_->code_range() == NULL || + !isolate_->code_range()->valid()); + bool result = base::VirtualMemory::ReleaseRegion(base, size); USE(result); - ASSERT(result); + DCHECK(result); } } -Address MemoryAllocator::ReserveAlignedMemory(size_t size, - size_t alignment, - VirtualMemory* controller) { - VirtualMemory reservation(size, alignment); +Address MemoryAllocator::ReserveAlignedMemory(size_t size, size_t alignment, + base::VirtualMemory* controller) { + base::VirtualMemory reservation(size, alignment); if (!reservation.IsReserved()) return NULL; size_ += reservation.size(); - Address base = RoundUp(static_cast<Address>(reservation.address()), - alignment); + Address base = + RoundUp(static_cast<Address>(reservation.address()), alignment); controller->TakeControl(&reservation); return base; } -Address MemoryAllocator::AllocateAlignedMemory(size_t reserve_size, - size_t commit_size, - size_t alignment, - Executability executable, - VirtualMemory* controller) { - ASSERT(commit_size <= reserve_size); - VirtualMemory reservation; +Address MemoryAllocator::AllocateAlignedMemory( + size_t reserve_size, size_t commit_size, size_t alignment, + Executability executable, base::VirtualMemory* controller) { + DCHECK(commit_size <= reserve_size); + base::VirtualMemory reservation; Address base = ReserveAlignedMemory(reserve_size, alignment, &reservation); if (base == NULL) return NULL; if (executable == EXECUTABLE) { - if (!CommitExecutableMemory(&reservation, - base, - commit_size, + if (!CommitExecutableMemory(&reservation, base, commit_size, reserve_size)) { base = NULL; } @@ -412,26 +399,21 @@ void Page::InitializeAsAnchor(PagedSpace* owner) { } -NewSpacePage* NewSpacePage::Initialize(Heap* heap, - Address start, +NewSpacePage* NewSpacePage::Initialize(Heap* heap, Address start, SemiSpace* semi_space) { Address area_start = start + NewSpacePage::kObjectStartOffset; Address area_end = start + Page::kPageSize; - MemoryChunk* chunk = MemoryChunk::Initialize(heap, - start, - Page::kPageSize, - area_start, - area_end, - NOT_EXECUTABLE, - semi_space); + MemoryChunk* chunk = + MemoryChunk::Initialize(heap, start, Page::kPageSize, area_start, + area_end, NOT_EXECUTABLE, semi_space); chunk->set_next_chunk(NULL); chunk->set_prev_chunk(NULL); chunk->initialize_scan_on_scavenge(true); bool in_to_space = (semi_space->id() != kFromSpace); chunk->SetFlag(in_to_space ? MemoryChunk::IN_TO_SPACE : MemoryChunk::IN_FROM_SPACE); - ASSERT(!chunk->IsFlagSet(in_to_space ? MemoryChunk::IN_FROM_SPACE + DCHECK(!chunk->IsFlagSet(in_to_space ? 
MemoryChunk::IN_FROM_SPACE : MemoryChunk::IN_TO_SPACE)); NewSpacePage* page = static_cast<NewSpacePage*>(chunk); heap->incremental_marking()->SetNewSpacePageFlags(page); @@ -449,16 +431,12 @@ void NewSpacePage::InitializeAsAnchor(SemiSpace* semi_space) { } -MemoryChunk* MemoryChunk::Initialize(Heap* heap, - Address base, - size_t size, - Address area_start, - Address area_end, - Executability executable, - Space* owner) { +MemoryChunk* MemoryChunk::Initialize(Heap* heap, Address base, size_t size, + Address area_start, Address area_end, + Executability executable, Space* owner) { MemoryChunk* chunk = FromAddress(base); - ASSERT(base == chunk->address()); + DCHECK(base == chunk->address()); chunk->heap_ = heap; chunk->size_ = size; @@ -472,7 +450,7 @@ MemoryChunk* MemoryChunk::Initialize(Heap* heap, chunk->write_barrier_counter_ = kWriteBarrierCounterGranularity; chunk->progress_bar_ = 0; chunk->high_water_mark_ = static_cast<int>(area_start - base); - chunk->set_parallel_sweeping(PARALLEL_SWEEPING_DONE); + chunk->set_parallel_sweeping(SWEEPING_DONE); chunk->available_in_small_free_list_ = 0; chunk->available_in_medium_free_list_ = 0; chunk->available_in_large_free_list_ = 0; @@ -483,8 +461,8 @@ MemoryChunk* MemoryChunk::Initialize(Heap* heap, chunk->initialize_scan_on_scavenge(false); chunk->SetFlag(WAS_SWEPT_PRECISELY); - ASSERT(OFFSET_OF(MemoryChunk, flags_) == kFlagsOffset); - ASSERT(OFFSET_OF(MemoryChunk, live_byte_count_) == kLiveBytesOffset); + DCHECK(OFFSET_OF(MemoryChunk, flags_) == kFlagsOffset); + DCHECK(OFFSET_OF(MemoryChunk, live_byte_count_) == kLiveBytesOffset); if (executable == EXECUTABLE) { chunk->SetFlag(IS_EXECUTABLE); @@ -500,29 +478,31 @@ MemoryChunk* MemoryChunk::Initialize(Heap* heap, // Commit MemoryChunk area to the requested size. bool MemoryChunk::CommitArea(size_t requested) { - size_t guard_size = IsFlagSet(IS_EXECUTABLE) ? - MemoryAllocator::CodePageGuardSize() : 0; + size_t guard_size = + IsFlagSet(IS_EXECUTABLE) ? MemoryAllocator::CodePageGuardSize() : 0; size_t header_size = area_start() - address() - guard_size; - size_t commit_size = RoundUp(header_size + requested, OS::CommitPageSize()); + size_t commit_size = + RoundUp(header_size + requested, base::OS::CommitPageSize()); size_t committed_size = RoundUp(header_size + (area_end() - area_start()), - OS::CommitPageSize()); + base::OS::CommitPageSize()); if (commit_size > committed_size) { // Commit size should be less or equal than the reserved size. - ASSERT(commit_size <= size() - 2 * guard_size); + DCHECK(commit_size <= size() - 2 * guard_size); // Append the committed area. Address start = address() + committed_size + guard_size; size_t length = commit_size - committed_size; if (reservation_.IsReserved()) { - Executability executable = IsFlagSet(IS_EXECUTABLE) - ? EXECUTABLE : NOT_EXECUTABLE; - if (!heap()->isolate()->memory_allocator()->CommitMemory( - start, length, executable)) { + Executability executable = + IsFlagSet(IS_EXECUTABLE) ? 
EXECUTABLE : NOT_EXECUTABLE; + if (!heap()->isolate()->memory_allocator()->CommitMemory(start, length, + executable)) { return false; } } else { CodeRange* code_range = heap_->isolate()->code_range(); - ASSERT(code_range->exists() && IsFlagSet(IS_EXECUTABLE)); + DCHECK(code_range != NULL && code_range->valid() && + IsFlagSet(IS_EXECUTABLE)); if (!code_range->CommitRawMemory(start, length)) return false; } @@ -530,7 +510,7 @@ bool MemoryChunk::CommitArea(size_t requested) { heap_->isolate()->memory_allocator()->ZapBlock(start, length); } } else if (commit_size < committed_size) { - ASSERT(commit_size > 0); + DCHECK(commit_size > 0); // Shrink the committed area. size_t length = committed_size - commit_size; Address start = address() + committed_size + guard_size - length; @@ -538,7 +518,8 @@ bool MemoryChunk::CommitArea(size_t requested) { if (!reservation_.Uncommit(start, length)) return false; } else { CodeRange* code_range = heap_->isolate()->code_range(); - ASSERT(code_range->exists() && IsFlagSet(IS_EXECUTABLE)); + DCHECK(code_range != NULL && code_range->valid() && + IsFlagSet(IS_EXECUTABLE)); if (!code_range->UncommitRawMemory(start, length)) return false; } } @@ -572,12 +553,12 @@ MemoryChunk* MemoryAllocator::AllocateChunk(intptr_t reserve_area_size, intptr_t commit_area_size, Executability executable, Space* owner) { - ASSERT(commit_area_size <= reserve_area_size); + DCHECK(commit_area_size <= reserve_area_size); size_t chunk_size; Heap* heap = isolate_->heap(); Address base = NULL; - VirtualMemory reservation; + base::VirtualMemory reservation; Address area_start = NULL; Address area_end = NULL; @@ -613,36 +594,33 @@ MemoryChunk* MemoryAllocator::AllocateChunk(intptr_t reserve_area_size, if (executable == EXECUTABLE) { chunk_size = RoundUp(CodePageAreaStartOffset() + reserve_area_size, - OS::CommitPageSize()) + CodePageGuardSize(); + base::OS::CommitPageSize()) + + CodePageGuardSize(); // Check executable memory limit. if (size_executable_ + chunk_size > capacity_executable_) { - LOG(isolate_, - StringEvent("MemoryAllocator::AllocateRawMemory", - "V8 Executable Allocation capacity exceeded")); + LOG(isolate_, StringEvent("MemoryAllocator::AllocateRawMemory", + "V8 Executable Allocation capacity exceeded")); return NULL; } // Size of header (not executable) plus area (executable). size_t commit_size = RoundUp(CodePageGuardStartOffset() + commit_area_size, - OS::CommitPageSize()); + base::OS::CommitPageSize()); // Allocate executable memory either from code range or from the // OS. - if (isolate_->code_range()->exists()) { - base = isolate_->code_range()->AllocateRawMemory(chunk_size, - commit_size, + if (isolate_->code_range() != NULL && isolate_->code_range()->valid()) { + base = isolate_->code_range()->AllocateRawMemory(chunk_size, commit_size, &chunk_size); - ASSERT(IsAligned(reinterpret_cast<intptr_t>(base), - MemoryChunk::kAlignment)); + DCHECK( + IsAligned(reinterpret_cast<intptr_t>(base), MemoryChunk::kAlignment)); if (base == NULL) return NULL; size_ += chunk_size; // Update executable memory size. size_executable_ += chunk_size; } else { - base = AllocateAlignedMemory(chunk_size, - commit_size, - MemoryChunk::kAlignment, - executable, + base = AllocateAlignedMemory(chunk_size, commit_size, + MemoryChunk::kAlignment, executable, &reservation); if (base == NULL) return NULL; // Update executable memory size. 
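MemoryChunk::CommitArea() above grows or shrinks the committed prefix of a chunk: growing commits pages after the current end, shrinking uncommits from the tail, and both sizes are rounded to the OS commit-page size. A sketch of that logic with guard-page offsets left out:

    #include <cstdio>

    const int kCommitPageSize = 4096;  // stand-in for base::OS::CommitPageSize()

    int RoundUp(int x, int m) { return ((x + m - 1) / m) * m; }

    bool Commit(int offset, int length) {
      std::printf("commit   [%d, +%d)\n", offset, length);
      return true;
    }
    bool Uncommit(int offset, int length) {
      std::printf("uncommit [%d, +%d)\n", offset, length);
      return true;
    }

    // Grow or shrink the committed prefix to cover 'requested' bytes of
    // payload after 'header_size' bytes of header.
    bool CommitArea(int header_size, int* committed_size, int requested) {
      int commit_size = RoundUp(header_size + requested, kCommitPageSize);
      if (commit_size > *committed_size) {
        // Append pages after the current committed end.
        if (!Commit(*committed_size, commit_size - *committed_size)) return false;
      } else if (commit_size < *committed_size) {
        // Drop pages from the tail, keeping the base mapping intact.
        if (!Uncommit(commit_size, *committed_size - commit_size)) return false;
      }
      *committed_size = commit_size;
      return true;
    }

    int main() {
      int committed = 4096;
      CommitArea(64, &committed, 10000);  // grows to 3 pages
      CommitArea(64, &committed, 1000);   // shrinks back to 1 page
      return 0;
    }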
@@ -658,14 +636,13 @@ MemoryChunk* MemoryAllocator::AllocateChunk(intptr_t reserve_area_size, area_end = area_start + commit_area_size; } else { chunk_size = RoundUp(MemoryChunk::kObjectStartOffset + reserve_area_size, - OS::CommitPageSize()); - size_t commit_size = RoundUp(MemoryChunk::kObjectStartOffset + - commit_area_size, OS::CommitPageSize()); - base = AllocateAlignedMemory(chunk_size, - commit_size, - MemoryChunk::kAlignment, - executable, - &reservation); + base::OS::CommitPageSize()); + size_t commit_size = + RoundUp(MemoryChunk::kObjectStartOffset + commit_area_size, + base::OS::CommitPageSize()); + base = + AllocateAlignedMemory(chunk_size, commit_size, MemoryChunk::kAlignment, + executable, &reservation); if (base == NULL) return NULL; @@ -679,8 +656,8 @@ MemoryChunk* MemoryAllocator::AllocateChunk(intptr_t reserve_area_size, // Use chunk_size for statistics and callbacks because we assume that they // treat reserved but not-yet committed memory regions of chunks as allocated. - isolate_->counters()->memory_allocated()-> - Increment(static_cast<int>(chunk_size)); + isolate_->counters()->memory_allocated()->Increment( + static_cast<int>(chunk_size)); LOG(isolate_, NewEvent("MemoryChunk", base, chunk_size)); if (owner != NULL) { @@ -688,13 +665,8 @@ MemoryChunk* MemoryAllocator::AllocateChunk(intptr_t reserve_area_size, PerformAllocationCallback(space, kAllocationActionAllocate, chunk_size); } - MemoryChunk* result = MemoryChunk::Initialize(heap, - base, - chunk_size, - area_start, - area_end, - executable, - owner); + MemoryChunk* result = MemoryChunk::Initialize( + heap, base, chunk_size, area_start, area_end, executable, owner); result->set_reserved_memory(&reservation); MSAN_MEMORY_IS_INITIALIZED_IN_JIT(base, chunk_size); return result; @@ -710,8 +682,7 @@ void Page::ResetFreeListStatistics() { } -Page* MemoryAllocator::AllocatePage(intptr_t size, - PagedSpace* owner, +Page* MemoryAllocator::AllocatePage(intptr_t size, PagedSpace* owner, Executability executable) { MemoryChunk* chunk = AllocateChunk(size, size, executable, owner); @@ -724,10 +695,8 @@ Page* MemoryAllocator::AllocatePage(intptr_t size, LargePage* MemoryAllocator::AllocateLargePage(intptr_t object_size, Space* owner, Executability executable) { - MemoryChunk* chunk = AllocateChunk(object_size, - object_size, - executable, - owner); + MemoryChunk* chunk = + AllocateChunk(object_size, object_size, executable, owner); if (chunk == NULL) return NULL; return LargePage::Initialize(isolate_->heap(), chunk); } @@ -741,25 +710,22 @@ void MemoryAllocator::Free(MemoryChunk* chunk) { PerformAllocationCallback(space, kAllocationActionFree, chunk->size()); } - isolate_->heap()->RememberUnmappedPage( - reinterpret_cast<Address>(chunk), chunk->IsEvacuationCandidate()); + isolate_->heap()->RememberUnmappedPage(reinterpret_cast<Address>(chunk), + chunk->IsEvacuationCandidate()); delete chunk->slots_buffer(); delete chunk->skip_list(); - VirtualMemory* reservation = chunk->reserved_memory(); + base::VirtualMemory* reservation = chunk->reserved_memory(); if (reservation->IsReserved()) { FreeMemory(reservation, chunk->executable()); } else { - FreeMemory(chunk->address(), - chunk->size(), - chunk->executable()); + FreeMemory(chunk->address(), chunk->size(), chunk->executable()); } } -bool MemoryAllocator::CommitBlock(Address start, - size_t size, +bool MemoryAllocator::CommitBlock(Address start, size_t size, Executability executable) { if (!CommitMemory(start, size, executable)) return false; @@ -773,7 +739,7 @@ bool 
MemoryAllocator::CommitBlock(Address start, bool MemoryAllocator::UncommitBlock(Address start, size_t size) { - if (!VirtualMemory::UncommitRegion(start, size)) return false; + if (!base::VirtualMemory::UncommitRegion(start, size)) return false; isolate_->counters()->memory_allocated()->Decrement(static_cast<int>(size)); return true; } @@ -791,7 +757,7 @@ void MemoryAllocator::PerformAllocationCallback(ObjectSpace space, size_t size) { for (int i = 0; i < memory_allocation_callbacks_.length(); ++i) { MemoryAllocationCallbackRegistration registration = - memory_allocation_callbacks_[i]; + memory_allocation_callbacks_[i]; if ((registration.space & space) == space && (registration.action & action) == action) registration.callback(space, action, static_cast<int>(size)); @@ -809,19 +775,18 @@ bool MemoryAllocator::MemoryAllocationCallbackRegistered( void MemoryAllocator::AddMemoryAllocationCallback( - MemoryAllocationCallback callback, - ObjectSpace space, + MemoryAllocationCallback callback, ObjectSpace space, AllocationAction action) { - ASSERT(callback != NULL); + DCHECK(callback != NULL); MemoryAllocationCallbackRegistration registration(callback, space, action); - ASSERT(!MemoryAllocator::MemoryAllocationCallbackRegistered(callback)); + DCHECK(!MemoryAllocator::MemoryAllocationCallbackRegistered(callback)); return memory_allocation_callbacks_.Add(registration); } void MemoryAllocator::RemoveMemoryAllocationCallback( - MemoryAllocationCallback callback) { - ASSERT(callback != NULL); + MemoryAllocationCallback callback) { + DCHECK(callback != NULL); for (int i = 0; i < memory_allocation_callbacks_.length(); ++i) { if (memory_allocation_callbacks_[i].callback == callback) { memory_allocation_callbacks_.Remove(i); @@ -835,10 +800,12 @@ void MemoryAllocator::RemoveMemoryAllocationCallback( #ifdef DEBUG void MemoryAllocator::ReportStatistics() { float pct = static_cast<float>(capacity_ - size_) / capacity_; - PrintF(" capacity: %" V8_PTR_PREFIX "d" - ", used: %" V8_PTR_PREFIX "d" - ", available: %%%d\n\n", - capacity_, size_, static_cast<int>(pct*100)); + PrintF(" capacity: %" V8_PTR_PREFIX + "d" + ", used: %" V8_PTR_PREFIX + "d" + ", available: %%%d\n\n", + capacity_, size_, static_cast<int>(pct * 100)); } #endif @@ -846,12 +813,12 @@ void MemoryAllocator::ReportStatistics() { int MemoryAllocator::CodePageGuardStartOffset() { // We are guarding code pages: the first OS page after the header // will be protected as non-writable. - return RoundUp(Page::kObjectStartOffset, OS::CommitPageSize()); + return RoundUp(Page::kObjectStartOffset, base::OS::CommitPageSize()); } int MemoryAllocator::CodePageGuardSize() { - return static_cast<int>(OS::CommitPageSize()); + return static_cast<int>(base::OS::CommitPageSize()); } @@ -865,18 +832,15 @@ int MemoryAllocator::CodePageAreaStartOffset() { int MemoryAllocator::CodePageAreaEndOffset() { // We are guarding code pages: the last OS page will be protected as // non-writable. - return Page::kPageSize - static_cast<int>(OS::CommitPageSize()); + return Page::kPageSize - static_cast<int>(base::OS::CommitPageSize()); } -bool MemoryAllocator::CommitExecutableMemory(VirtualMemory* vm, - Address start, - size_t commit_size, +bool MemoryAllocator::CommitExecutableMemory(base::VirtualMemory* vm, + Address start, size_t commit_size, size_t reserved_size) { // Commit page header (not executable). 
- if (!vm->Commit(start, - CodePageGuardStartOffset(), - false)) { + if (!vm->Commit(start, CodePageGuardStartOffset(), false)) { return false; } @@ -887,8 +851,7 @@ bool MemoryAllocator::CommitExecutableMemory(VirtualMemory* vm, // Commit page body (executable). if (!vm->Commit(start + CodePageAreaStartOffset(), - commit_size - CodePageGuardStartOffset(), - true)) { + commit_size - CodePageGuardStartOffset(), true)) { return false; } @@ -897,9 +860,9 @@ bool MemoryAllocator::CommitExecutableMemory(VirtualMemory* vm, return false; } - UpdateAllocatedSpaceLimits(start, - start + CodePageAreaStartOffset() + - commit_size - CodePageGuardStartOffset()); + UpdateAllocatedSpaceLimits(start, start + CodePageAreaStartOffset() + + commit_size - + CodePageGuardStartOffset()); return true; } @@ -919,23 +882,21 @@ void MemoryChunk::IncrementLiveBytesFromMutator(Address address, int by) { // ----------------------------------------------------------------------------- // PagedSpace implementation -PagedSpace::PagedSpace(Heap* heap, - intptr_t max_capacity, - AllocationSpace id, +PagedSpace::PagedSpace(Heap* heap, intptr_t max_capacity, AllocationSpace id, Executability executable) : Space(heap, id, executable), free_list_(this), - was_swept_conservatively_(false), + swept_precisely_(true), unswept_free_bytes_(0), - end_of_unswept_pages_(NULL) { + end_of_unswept_pages_(NULL), + emergency_memory_(NULL) { if (id == CODE_SPACE) { - area_size_ = heap->isolate()->memory_allocator()-> - CodePageAreaSize(); + area_size_ = heap->isolate()->memory_allocator()->CodePageAreaSize(); } else { area_size_ = Page::kPageSize - Page::kObjectStartOffset; } - max_capacity_ = (RoundDown(max_capacity, Page::kPageSize) / Page::kPageSize) - * AreaSize(); + max_capacity_ = + (RoundDown(max_capacity, Page::kPageSize) / Page::kPageSize) * AreaSize(); accounting_stats_.Clear(); allocation_info_.set_top(NULL); @@ -945,14 +906,10 @@ PagedSpace::PagedSpace(Heap* heap, } -bool PagedSpace::SetUp() { - return true; -} +bool PagedSpace::SetUp() { return true; } -bool PagedSpace::HasBeenSetUp() { - return true; -} +bool PagedSpace::HasBeenSetUp() { return true; } void PagedSpace::TearDown() { @@ -967,7 +924,7 @@ void PagedSpace::TearDown() { size_t PagedSpace::CommittedPhysicalMemory() { - if (!VirtualMemory::HasLazyCommits()) return CommittedMemory(); + if (!base::VirtualMemory::HasLazyCommits()) return CommittedMemory(); MemoryChunk::UpdateHighWaterMark(allocation_info_.top()); size_t size = 0; PageIterator it(this); @@ -980,7 +937,7 @@ size_t PagedSpace::CommittedPhysicalMemory() { Object* PagedSpace::FindObject(Address addr) { // Note: this function can only be called on precisely swept spaces. - ASSERT(!heap()->mark_compact_collector()->in_use()); + DCHECK(!heap()->mark_compact_collector()->in_use()); if (!Contains(addr)) return Smi::FromInt(0); // Signaling not found. @@ -998,11 +955,11 @@ Object* PagedSpace::FindObject(Address addr) { bool PagedSpace::CanExpand() { - ASSERT(max_capacity_ % AreaSize() == 0); + DCHECK(max_capacity_ % AreaSize() == 0); if (Capacity() == max_capacity_) return false; - ASSERT(Capacity() < max_capacity_); + DCHECK(Capacity() < max_capacity_); // Are we going to exceed capacity for this space? 
if ((Capacity() + Page::kPageSize) > max_capacity_) return false; @@ -1020,11 +977,11 @@ bool PagedSpace::Expand() { size = SizeOfFirstPage(); } - Page* p = heap()->isolate()->memory_allocator()->AllocatePage( - size, this, executable()); + Page* p = heap()->isolate()->memory_allocator()->AllocatePage(size, this, + executable()); if (p == NULL) return false; - ASSERT(Capacity() <= max_capacity_); + DCHECK(Capacity() <= max_capacity_); p->InsertAfter(anchor_.prev_page()); @@ -1036,7 +993,7 @@ intptr_t PagedSpace::SizeOfFirstPage() { int size = 0; switch (identity()) { case OLD_POINTER_SPACE: - size = 96 * kPointerSize * KB; + size = 112 * kPointerSize * KB; break; case OLD_DATA_SPACE: size = 192 * KB; @@ -1050,18 +1007,20 @@ intptr_t PagedSpace::SizeOfFirstPage() { case PROPERTY_CELL_SPACE: size = 8 * kPointerSize * KB; break; - case CODE_SPACE: - if (heap()->isolate()->code_range()->exists()) { + case CODE_SPACE: { + CodeRange* code_range = heap()->isolate()->code_range(); + if (code_range != NULL && code_range->valid()) { // When code range exists, code pages are allocated in a special way // (from the reserved code range). That part of the code is not yet // upgraded to handle small pages. size = AreaSize(); } else { - size = RoundUp( - 480 * KB * FullCodeGenerator::kBootCodeSizeMultiplier / 100, - kPointerSize); + size = + RoundUp(480 * KB * FullCodeGenerator::kBootCodeSizeMultiplier / 100, + kPointerSize); } break; + } default: UNREACHABLE(); } @@ -1103,13 +1062,13 @@ void PagedSpace::IncreaseCapacity(int size) { void PagedSpace::ReleasePage(Page* page) { - ASSERT(page->LiveBytes() == 0); - ASSERT(AreaSize() == page->area_size()); + DCHECK(page->LiveBytes() == 0); + DCHECK(AreaSize() == page->area_size()); if (page->WasSwept()) { intptr_t size = free_list_.EvictFreeListItems(page); accounting_stats_.AllocateBytes(size); - ASSERT_EQ(AreaSize(), static_cast<int>(size)); + DCHECK_EQ(AreaSize(), static_cast<int>(size)); } else { DecreaseUnsweptFreeBytes(page); } @@ -1119,7 +1078,7 @@ void PagedSpace::ReleasePage(Page* page) { page->ClearFlag(MemoryChunk::SCAN_ON_SCAVENGE); } - ASSERT(!free_list_.ContainsPageFreeListItems(page)); + DCHECK(!free_list_.ContainsPageFreeListItems(page)); if (Page::FromAllocationTop(allocation_info_.top()) == page) { allocation_info_.set_top(NULL); @@ -1133,19 +1092,42 @@ void PagedSpace::ReleasePage(Page* page) { heap()->QueueMemoryChunkForFree(page); } - ASSERT(Capacity() > 0); + DCHECK(Capacity() > 0); accounting_stats_.ShrinkSpace(AreaSize()); } +void PagedSpace::CreateEmergencyMemory() { + emergency_memory_ = heap()->isolate()->memory_allocator()->AllocateChunk( + AreaSize(), AreaSize(), executable(), this); +} + + +void PagedSpace::FreeEmergencyMemory() { + Page* page = static_cast<Page*>(emergency_memory_); + DCHECK(page->LiveBytes() == 0); + DCHECK(AreaSize() == page->area_size()); + DCHECK(!free_list_.ContainsPageFreeListItems(page)); + heap()->isolate()->memory_allocator()->Free(page); + emergency_memory_ = NULL; +} + + +void PagedSpace::UseEmergencyMemory() { + Page* page = Page::Initialize(heap(), emergency_memory_, executable(), this); + page->InsertAfter(anchor_.prev_page()); + emergency_memory_ = NULL; +} + + #ifdef DEBUG -void PagedSpace::Print() { } +void PagedSpace::Print() {} #endif #ifdef VERIFY_HEAP void PagedSpace::Verify(ObjectVisitor* visitor) { // We can only iterate over the pages if they were swept precisely. 
- if (was_swept_conservatively_) return; + if (!swept_precisely_) return; bool allocation_pointer_found_in_space = (allocation_info_.top() == allocation_info_.limit()); @@ -1205,42 +1187,40 @@ bool NewSpace::SetUp(int reserved_semispace_capacity, int initial_semispace_capacity = heap()->InitialSemiSpaceSize(); size_t size = 2 * reserved_semispace_capacity; - Address base = - heap()->isolate()->memory_allocator()->ReserveAlignedMemory( - size, size, &reservation_); + Address base = heap()->isolate()->memory_allocator()->ReserveAlignedMemory( + size, size, &reservation_); if (base == NULL) return false; chunk_base_ = base; chunk_size_ = static_cast<uintptr_t>(size); LOG(heap()->isolate(), NewEvent("InitialChunk", chunk_base_, chunk_size_)); - ASSERT(initial_semispace_capacity <= maximum_semispace_capacity); - ASSERT(IsPowerOf2(maximum_semispace_capacity)); + DCHECK(initial_semispace_capacity <= maximum_semispace_capacity); + DCHECK(IsPowerOf2(maximum_semispace_capacity)); // Allocate and set up the histogram arrays if necessary. allocated_histogram_ = NewArray<HistogramInfo>(LAST_TYPE + 1); promoted_histogram_ = NewArray<HistogramInfo>(LAST_TYPE + 1); -#define SET_NAME(name) allocated_histogram_[name].set_name(#name); \ - promoted_histogram_[name].set_name(#name); +#define SET_NAME(name) \ + allocated_histogram_[name].set_name(#name); \ + promoted_histogram_[name].set_name(#name); INSTANCE_TYPE_LIST(SET_NAME) #undef SET_NAME - ASSERT(reserved_semispace_capacity == heap()->ReservedSemiSpaceSize()); - ASSERT(static_cast<intptr_t>(chunk_size_) >= + DCHECK(reserved_semispace_capacity == heap()->ReservedSemiSpaceSize()); + DCHECK(static_cast<intptr_t>(chunk_size_) >= 2 * heap()->ReservedSemiSpaceSize()); - ASSERT(IsAddressAligned(chunk_base_, 2 * reserved_semispace_capacity, 0)); + DCHECK(IsAddressAligned(chunk_base_, 2 * reserved_semispace_capacity, 0)); - to_space_.SetUp(chunk_base_, - initial_semispace_capacity, + to_space_.SetUp(chunk_base_, initial_semispace_capacity, maximum_semispace_capacity); from_space_.SetUp(chunk_base_ + reserved_semispace_capacity, - initial_semispace_capacity, - maximum_semispace_capacity); + initial_semispace_capacity, maximum_semispace_capacity); if (!to_space_.Commit()) { return false; } - ASSERT(!from_space_.is_committed()); // No need to use memory yet. + DCHECK(!from_space_.is_committed()); // No need to use memory yet. start_ = chunk_base_; address_mask_ = ~(2 * reserved_semispace_capacity - 1); @@ -1272,7 +1252,7 @@ void NewSpace::TearDown() { LOG(heap()->isolate(), DeleteEvent("InitialChunk", chunk_base_)); - ASSERT(reservation_.IsReserved()); + DCHECK(reservation_.IsReserved()); heap()->isolate()->memory_allocator()->FreeMemory(&reservation_, NOT_EXECUTABLE); chunk_base_ = NULL; @@ -1280,14 +1260,12 @@ void NewSpace::TearDown() { } -void NewSpace::Flip() { - SemiSpace::Swap(&from_space_, &to_space_); -} +void NewSpace::Flip() { SemiSpace::Swap(&from_space_, &to_space_); } void NewSpace::Grow() { // Double the semispace size but only up to maximum capacity. - ASSERT(Capacity() < MaximumCapacity()); + DCHECK(Capacity() < MaximumCapacity()); int new_capacity = Min(MaximumCapacity(), 2 * static_cast<int>(Capacity())); if (to_space_.GrowTo(new_capacity)) { // Only grow from space if we managed to grow to-space. 
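For orientation, the NewSpace::Grow() hunk above and the Shrink() hunk that follows implement a plain doubling policy clamped to the semispace bounds: grow to min(maximum, 2 * capacity), shrink to max(initial, 2 * live size) rounded up to whole pages, with from-space always mirroring to-space or the operation rolled back. A minimal model of just the sizing policy (illustrative only; it ignores the commit/uncommit work and the rollback paths, and the 1 MB Page::kPageSize is an assumption):

// --- illustrative sketch (not part of the patch) ---
#include <algorithm>
#include <cstdio>

static const int kPageSize = 1 << 20;  // assumed Page::kPageSize

static int RoundUpToPage(int x) {
  return ((x + kPageSize - 1) / kPageSize) * kPageSize;
}

struct SemiSpaceModel {
  int initial_capacity;
  int maximum_capacity;
  int capacity;
};

// NewSpace::Grow(): double, but never past the maximum.
static void Grow(SemiSpaceModel* s) {
  s->capacity = std::min(s->maximum_capacity, 2 * s->capacity);
}

// NewSpace::Shrink(): keep headroom of twice the live size, never dropping
// below the initial capacity, in whole pages.
static void Shrink(SemiSpaceModel* s, int live_size) {
  int rounded = RoundUpToPage(std::max(s->initial_capacity, 2 * live_size));
  if (rounded < s->capacity) s->capacity = rounded;
}

int main() {
  SemiSpaceModel s = {2 * kPageSize, 16 * kPageSize, 2 * kPageSize};
  Grow(&s);                   // 4 MB
  Grow(&s);                   // 8 MB
  Shrink(&s, kPageSize / 2);  // little is live: back down to 2 MB
  printf("capacity now %d MB\n", s.capacity / (1 << 20));
  return 0;
}
// --- end sketch ---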
@@ -1301,7 +1279,7 @@ void NewSpace::Grow() { } } } - ASSERT_SEMISPACE_ALLOCATION_INFO(allocation_info_, to_space_); + DCHECK_SEMISPACE_ALLOCATION_INFO(allocation_info_, to_space_); } @@ -1309,7 +1287,7 @@ void NewSpace::Shrink() { int new_capacity = Max(InitialCapacity(), 2 * SizeAsInt()); int rounded_new_capacity = RoundUp(new_capacity, Page::kPageSize); if (rounded_new_capacity < Capacity() && - to_space_.ShrinkTo(rounded_new_capacity)) { + to_space_.ShrinkTo(rounded_new_capacity)) { // Only shrink from-space if we managed to shrink to-space. from_space_.Reset(); if (!from_space_.ShrinkTo(rounded_new_capacity)) { @@ -1322,7 +1300,7 @@ void NewSpace::Shrink() { } } } - ASSERT_SEMISPACE_ALLOCATION_INFO(allocation_info_, to_space_); + DCHECK_SEMISPACE_ALLOCATION_INFO(allocation_info_, to_space_); } @@ -1331,7 +1309,7 @@ void NewSpace::UpdateAllocationInfo() { allocation_info_.set_top(to_space_.page_low()); allocation_info_.set_limit(to_space_.page_high()); UpdateInlineAllocationLimit(0); - ASSERT_SEMISPACE_ALLOCATION_INFO(allocation_info_, to_space_); + DCHECK_SEMISPACE_ALLOCATION_INFO(allocation_info_, to_space_); } @@ -1363,7 +1341,7 @@ void NewSpace::UpdateInlineAllocationLimit(int size_in_bytes) { Address new_limit = new_top + inline_allocation_limit_step_; allocation_info_.set_limit(Min(new_limit, high)); } - ASSERT_SEMISPACE_ALLOCATION_INFO(allocation_info_, to_space_); + DCHECK_SEMISPACE_ALLOCATION_INFO(allocation_info_, to_space_); } @@ -1408,16 +1386,16 @@ AllocationResult NewSpace::SlowAllocateRaw(int size_in_bytes) { // the new limit accordingly. Address new_top = old_top + size_in_bytes; int bytes_allocated = static_cast<int>(new_top - top_on_previous_step_); - heap()->incremental_marking()->Step( - bytes_allocated, IncrementalMarking::GC_VIA_STACK_GUARD); + heap()->incremental_marking()->Step(bytes_allocated, + IncrementalMarking::GC_VIA_STACK_GUARD); UpdateInlineAllocationLimit(size_in_bytes); top_on_previous_step_ = new_top; return AllocateRaw(size_in_bytes); } else if (AddFreshPage()) { // Switched to new page. Try allocating again. int bytes_allocated = static_cast<int>(old_top - top_on_previous_step_); - heap()->incremental_marking()->Step( - bytes_allocated, IncrementalMarking::GC_VIA_STACK_GUARD); + heap()->incremental_marking()->Step(bytes_allocated, + IncrementalMarking::GC_VIA_STACK_GUARD); top_on_previous_step_ = to_space_.page_low(); return AllocateRaw(size_in_bytes); } else { @@ -1431,7 +1409,7 @@ AllocationResult NewSpace::SlowAllocateRaw(int size_in_bytes) { // that it works (it depends on the invariants we are checking). void NewSpace::Verify() { // The allocation pointer should be in the space or at the very end. - ASSERT_SEMISPACE_ALLOCATION_INFO(allocation_info_, to_space_); + DCHECK_SEMISPACE_ALLOCATION_INFO(allocation_info_, to_space_); // There should be objects packed in from the low address up to the // allocation pointer. @@ -1485,8 +1463,7 @@ void NewSpace::Verify() { // ----------------------------------------------------------------------------- // SemiSpace implementation -void SemiSpace::SetUp(Address start, - int initial_capacity, +void SemiSpace::SetUp(Address start, int initial_capacity, int maximum_capacity) { // Creates a space in the young generation. The constructor does not // allocate memory from the OS. A SemiSpace is given a contiguous chunk of @@ -1494,7 +1471,7 @@ void SemiSpace::SetUp(Address start, // otherwise. In the mark-compact collector, the memory region of the from // space is used as the marking stack. 
It requires contiguous memory // addresses. - ASSERT(maximum_capacity >= Page::kPageSize); + DCHECK(maximum_capacity >= Page::kPageSize); initial_capacity_ = RoundDown(initial_capacity, Page::kPageSize); capacity_ = initial_capacity; maximum_capacity_ = RoundDown(maximum_capacity, Page::kPageSize); @@ -1515,10 +1492,9 @@ void SemiSpace::TearDown() { bool SemiSpace::Commit() { - ASSERT(!is_committed()); + DCHECK(!is_committed()); int pages = capacity_ / Page::kPageSize; - if (!heap()->isolate()->memory_allocator()->CommitBlock(start_, - capacity_, + if (!heap()->isolate()->memory_allocator()->CommitBlock(start_, capacity_, executable())) { return false; } @@ -1526,7 +1502,7 @@ bool SemiSpace::Commit() { NewSpacePage* current = anchor(); for (int i = 0; i < pages; i++) { NewSpacePage* new_page = - NewSpacePage::Initialize(heap(), start_ + i * Page::kPageSize, this); + NewSpacePage::Initialize(heap(), start_ + i * Page::kPageSize, this); new_page->InsertAfter(current); current = new_page; } @@ -1539,7 +1515,7 @@ bool SemiSpace::Commit() { bool SemiSpace::Uncommit() { - ASSERT(is_committed()); + DCHECK(is_committed()); Address start = start_ + maximum_capacity_ - capacity_; if (!heap()->isolate()->memory_allocator()->UncommitBlock(start, capacity_)) { return false; @@ -1567,27 +1543,26 @@ bool SemiSpace::GrowTo(int new_capacity) { if (!is_committed()) { if (!Commit()) return false; } - ASSERT((new_capacity & Page::kPageAlignmentMask) == 0); - ASSERT(new_capacity <= maximum_capacity_); - ASSERT(new_capacity > capacity_); + DCHECK((new_capacity & Page::kPageAlignmentMask) == 0); + DCHECK(new_capacity <= maximum_capacity_); + DCHECK(new_capacity > capacity_); int pages_before = capacity_ / Page::kPageSize; int pages_after = new_capacity / Page::kPageSize; size_t delta = new_capacity - capacity_; - ASSERT(IsAligned(delta, OS::AllocateAlignment())); + DCHECK(IsAligned(delta, base::OS::AllocateAlignment())); if (!heap()->isolate()->memory_allocator()->CommitBlock( - start_ + capacity_, delta, executable())) { + start_ + capacity_, delta, executable())) { return false; } SetCapacity(new_capacity); NewSpacePage* last_page = anchor()->prev_page(); - ASSERT(last_page != anchor()); + DCHECK(last_page != anchor()); for (int i = pages_before; i < pages_after; i++) { Address page_address = start_ + i * Page::kPageSize; - NewSpacePage* new_page = NewSpacePage::Initialize(heap(), - page_address, - this); + NewSpacePage* new_page = + NewSpacePage::Initialize(heap(), page_address, this); new_page->InsertAfter(last_page); Bitmap::Clear(new_page); // Duplicate the flags that was set on the old page. 
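The GrowTo() hunk above commits only the delta and then threads metadata for each new page into the semispace's page list: a circular, doubly linked list headed by a sentinel anchor and spliced with InsertAfter(), the same discipline Page and MemoryChunk use throughout this file. A reduced sketch of that list discipline (illustrative; PageStub is a hypothetical stand-in for NewSpacePage):

// --- illustrative sketch (not part of the patch) ---
#include <cstdio>

// A node in a circular, doubly linked list headed by a sentinel "anchor".
struct PageStub {
  PageStub* next;
  PageStub* prev;
  int index;

  // Splice this page in right after 'other' (mirrors MemoryChunk::InsertAfter).
  void InsertAfter(PageStub* other) {
    next = other->next;
    prev = other;
    other->next->prev = this;
    other->next = this;
  }
};

int main() {
  PageStub anchor = {&anchor, &anchor, -1};  // empty list: anchor points at itself

  const int pages_before = 2, pages_after = 5;
  PageStub pages[5];
  for (int i = 0; i < pages_before; i++) {
    pages[i].index = i;
    pages[i].InsertAfter(anchor.prev);  // append at the tail
  }
  // Growing from pages_before to pages_after appends only the new tail pages,
  // like the loop in SemiSpace::GrowTo() above.
  PageStub* last_page = anchor.prev;
  for (int i = pages_before; i < pages_after; i++) {
    pages[i].index = i;
    pages[i].InsertAfter(last_page);
    last_page = &pages[i];
  }
  for (PageStub* p = anchor.next; p != &anchor; p = p->next)
    printf("page %d\n", p->index);  // prints 0 1 2 3 4
  return 0;
}
// --- end sketch ---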
@@ -1600,12 +1575,12 @@ bool SemiSpace::GrowTo(int new_capacity) { bool SemiSpace::ShrinkTo(int new_capacity) { - ASSERT((new_capacity & Page::kPageAlignmentMask) == 0); - ASSERT(new_capacity >= initial_capacity_); - ASSERT(new_capacity < capacity_); + DCHECK((new_capacity & Page::kPageAlignmentMask) == 0); + DCHECK(new_capacity >= initial_capacity_); + DCHECK(new_capacity < capacity_); if (is_committed()) { size_t delta = capacity_ - new_capacity; - ASSERT(IsAligned(delta, OS::AllocateAlignment())); + DCHECK(IsAligned(delta, base::OS::AllocateAlignment())); MemoryAllocator* allocator = heap()->isolate()->memory_allocator(); if (!allocator->UncommitBlock(start_ + new_capacity, delta)) { @@ -1617,7 +1592,7 @@ bool SemiSpace::ShrinkTo(int new_capacity) { NewSpacePage::FromAddress(start_ + (pages_after - 1) * Page::kPageSize); new_last_page->set_next_page(anchor()); anchor()->set_prev_page(new_last_page); - ASSERT((current_page_ >= first_page()) && (current_page_ <= new_last_page)); + DCHECK((current_page_ >= first_page()) && (current_page_ <= new_last_page)); } SetCapacity(new_capacity); @@ -1648,8 +1623,8 @@ void SemiSpace::FlipPages(intptr_t flags, intptr_t mask) { page->SetFlag(MemoryChunk::IN_FROM_SPACE); page->ClearFlag(MemoryChunk::IN_TO_SPACE); } - ASSERT(page->IsFlagSet(MemoryChunk::SCAN_ON_SCAVENGE)); - ASSERT(page->IsFlagSet(MemoryChunk::IN_TO_SPACE) || + DCHECK(page->IsFlagSet(MemoryChunk::SCAN_ON_SCAVENGE)); + DCHECK(page->IsFlagSet(MemoryChunk::IN_TO_SPACE) || page->IsFlagSet(MemoryChunk::IN_FROM_SPACE)); page = page->next_page(); } @@ -1657,15 +1632,15 @@ void SemiSpace::FlipPages(intptr_t flags, intptr_t mask) { void SemiSpace::Reset() { - ASSERT(anchor_.next_page() != &anchor_); + DCHECK(anchor_.next_page() != &anchor_); current_page_ = anchor_.next_page(); } void SemiSpace::Swap(SemiSpace* from, SemiSpace* to) { // We won't be swapping semispaces without data in them. - ASSERT(from->anchor_.next_page() != &from->anchor_); - ASSERT(to->anchor_.next_page() != &to->anchor_); + DCHECK(from->anchor_.next_page() != &from->anchor_); + DCHECK(to->anchor_.next_page() != &to->anchor_); // Swap bits. SemiSpace tmp = *from; @@ -1692,7 +1667,7 @@ void SemiSpace::SetCapacity(int new_capacity) { void SemiSpace::set_age_mark(Address mark) { - ASSERT(NewSpacePage::FromLimit(mark)->semi_space() == this); + DCHECK(NewSpacePage::FromLimit(mark)->semi_space() == this); age_mark_ = mark; // Mark all pages up to the one containing mark. NewSpacePageIterator it(space_start(), mark); @@ -1703,7 +1678,7 @@ void SemiSpace::set_age_mark(Address mark) { #ifdef DEBUG -void SemiSpace::Print() { } +void SemiSpace::Print() {} #endif #ifdef VERIFY_HEAP @@ -1725,8 +1700,8 @@ void SemiSpace::Verify() { if (page->heap()->incremental_marking()->IsMarking()) { CHECK(page->IsFlagSet(MemoryChunk::POINTERS_FROM_HERE_ARE_INTERESTING)); } else { - CHECK(!page->IsFlagSet( - MemoryChunk::POINTERS_FROM_HERE_ARE_INTERESTING)); + CHECK( + !page->IsFlagSet(MemoryChunk::POINTERS_FROM_HERE_ARE_INTERESTING)); } // TODO(gc): Check that the live_bytes_count_ field matches the // black marking on the page (if we make it match in new-space). 
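Swap() above exchanges the two semispaces' bookkeeping in constant time; FlipPages() then rewrites each page's IN_FROM_SPACE/IN_TO_SPACE bits while leaving unrelated flags such as SCAN_ON_SCAVENGE intact. FlipPages() takes a (flags, mask) pair, and the SetFlags-style masked update it relies on (defined elsewhere in this file) copies only the selected bits. A reduced sketch of that update (illustrative; the bit positions are assumptions, not V8's actual flag assignments):

// --- illustrative sketch (not part of the patch) ---
#include <cstdint>
#include <cstdio>

// Assumed stand-ins for the MemoryChunk flag bits used by FlipPages().
enum : uintptr_t {
  IN_FROM_SPACE = 1u << 0,
  IN_TO_SPACE = 1u << 1,
  SCAN_ON_SCAVENGE = 1u << 2,
};

// Copy only the bits selected by 'mask' from 'flags' into 'page_flags',
// leaving all other bits alone: the core of the flip.
static uintptr_t FlipFlags(uintptr_t page_flags, uintptr_t flags,
                           uintptr_t mask) {
  return (page_flags & ~mask) | (flags & mask);
}

int main() {
  // A to-space page about to become a from-space page.
  uintptr_t page = IN_TO_SPACE | SCAN_ON_SCAVENGE;
  uintptr_t mask = IN_FROM_SPACE | IN_TO_SPACE;
  page = FlipFlags(page, IN_FROM_SPACE, mask);
  printf("from:%d to:%d scan:%d\n", !!(page & IN_FROM_SPACE),
         !!(page & IN_TO_SPACE), !!(page & SCAN_ON_SCAVENGE));
  // Prints from:1 to:0 scan:1, i.e. unrelated flags survive the flip.
  return 0;
}
// --- end sketch ---

Confining the write to the two space bits is what allows the verification hunk above to insist that every page carries exactly one of the space flags after a flip.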
@@ -1783,8 +1758,7 @@ SemiSpaceIterator::SemiSpaceIterator(Address from, Address to) { } -void SemiSpaceIterator::Initialize(Address start, - Address end, +void SemiSpaceIterator::Initialize(Address start, Address end, HeapObjectCallback size_func) { SemiSpace::AssertValidRange(start, end); current_ = start; @@ -1796,7 +1770,7 @@ void SemiSpaceIterator::Initialize(Address start, #ifdef DEBUG // heap_histograms is shared, always clear it before using it. static void ClearHistograms(Isolate* isolate) { - // We reset the name each time, though it hasn't changed. +// We reset the name each time, though it hasn't changed. #define DEF_TYPE_NAME(name) isolate->heap_histograms()[name].set_name(#name); INSTANCE_TYPE_LIST(DEF_TYPE_NAME) #undef DEF_TYPE_NAME @@ -1832,14 +1806,14 @@ static void ReportCodeKindStatistics(int* code_kind_statistics) { static int CollectHistogramInfo(HeapObject* obj) { Isolate* isolate = obj->GetIsolate(); InstanceType type = obj->map()->instance_type(); - ASSERT(0 <= type && type <= LAST_TYPE); - ASSERT(isolate->heap_histograms()[type].name() != NULL); + DCHECK(0 <= type && type <= LAST_TYPE); + DCHECK(isolate->heap_histograms()[type].name() != NULL); isolate->heap_histograms()[type].increment_number(1); isolate->heap_histograms()[type].increment_bytes(obj->Size()); if (FLAG_collect_heap_spill_statistics && obj->IsJSObject()) { - JSObject::cast(obj)->IncrementSpillStatistics( - isolate->js_spill_information()); + JSObject::cast(obj) + ->IncrementSpillStatistics(isolate->js_spill_information()); } return obj->Size(); @@ -1861,9 +1835,9 @@ static void ReportHistogram(Isolate* isolate, bool print_spill) { // Summarize string types. int string_number = 0; int string_bytes = 0; -#define INCREMENT(type, size, name, camel_name) \ - string_number += isolate->heap_histograms()[type].number(); \ - string_bytes += isolate->heap_histograms()[type].bytes(); +#define INCREMENT(type, size, name, camel_name) \ + string_number += isolate->heap_histograms()[type].number(); \ + string_bytes += isolate->heap_histograms()[type].bytes(); STRING_TYPE_LIST(INCREMENT) #undef INCREMENT if (string_number > 0) { @@ -1898,15 +1872,15 @@ void NewSpace::CollectStatistics() { } -static void DoReportStatistics(Isolate* isolate, - HistogramInfo* info, const char* description) { +static void DoReportStatistics(Isolate* isolate, HistogramInfo* info, + const char* description) { LOG(isolate, HeapSampleBeginEvent("NewSpace", description)); // Lump all the string types together. int string_number = 0; int string_bytes = 0; -#define INCREMENT(type, size, name, camel_name) \ - string_number += info[type].number(); \ - string_bytes += info[type].bytes(); +#define INCREMENT(type, size, name, camel_name) \ + string_number += info[type].number(); \ + string_bytes += info[type].bytes(); STRING_TYPE_LIST(INCREMENT) #undef INCREMENT if (string_number > 0) { @@ -1917,9 +1891,8 @@ static void DoReportStatistics(Isolate* isolate, // Then do the other types. 
for (int i = FIRST_NONSTRING_TYPE; i <= LAST_TYPE; ++i) { if (info[i].number() > 0) { - LOG(isolate, - HeapSampleItemEvent(info[i].name(), info[i].number(), - info[i].bytes())); + LOG(isolate, HeapSampleItemEvent(info[i].name(), info[i].number(), + info[i].bytes())); } } LOG(isolate, HeapSampleEndEvent("NewSpace", description)); @@ -1930,14 +1903,14 @@ void NewSpace::ReportStatistics() { #ifdef DEBUG if (FLAG_heap_stats) { float pct = static_cast<float>(Available()) / Capacity(); - PrintF(" capacity: %" V8_PTR_PREFIX "d" - ", available: %" V8_PTR_PREFIX "d, %%%d\n", - Capacity(), Available(), static_cast<int>(pct*100)); + PrintF(" capacity: %" V8_PTR_PREFIX + "d" + ", available: %" V8_PTR_PREFIX "d, %%%d\n", + Capacity(), Available(), static_cast<int>(pct * 100)); PrintF("\n Object Histogram:\n"); for (int i = 0; i <= LAST_TYPE; i++) { if (allocated_histogram_[i].number() > 0) { - PrintF(" %-34s%10d (%10d bytes)\n", - allocated_histogram_[i].name(), + PrintF(" %-34s%10d (%10d bytes)\n", allocated_histogram_[i].name(), allocated_histogram_[i].number(), allocated_histogram_[i].bytes()); } @@ -1956,7 +1929,7 @@ void NewSpace::ReportStatistics() { void NewSpace::RecordAllocation(HeapObject* obj) { InstanceType type = obj->map()->instance_type(); - ASSERT(0 <= type && type <= LAST_TYPE); + DCHECK(0 <= type && type <= LAST_TYPE); allocated_histogram_[type].increment_number(1); allocated_histogram_[type].increment_bytes(obj->Size()); } @@ -1964,14 +1937,14 @@ void NewSpace::RecordAllocation(HeapObject* obj) { void NewSpace::RecordPromotion(HeapObject* obj) { InstanceType type = obj->map()->instance_type(); - ASSERT(0 <= type && type <= LAST_TYPE); + DCHECK(0 <= type && type <= LAST_TYPE); promoted_histogram_[type].increment_number(1); promoted_histogram_[type].increment_bytes(obj->Size()); } size_t NewSpace::CommittedPhysicalMemory() { - if (!VirtualMemory::HasLazyCommits()) return CommittedMemory(); + if (!base::VirtualMemory::HasLazyCommits()) return CommittedMemory(); MemoryChunk::UpdateHighWaterMark(allocation_info_.top()); size_t size = to_space_.CommittedPhysicalMemory(); if (from_space_.is_committed()) { @@ -1985,8 +1958,8 @@ size_t NewSpace::CommittedPhysicalMemory() { // Free lists for old object spaces implementation void FreeListNode::set_size(Heap* heap, int size_in_bytes) { - ASSERT(size_in_bytes > 0); - ASSERT(IsAligned(size_in_bytes, kPointerSize)); + DCHECK(size_in_bytes > 0); + DCHECK(IsAligned(size_in_bytes, kPointerSize)); // We write a map and possibly size information to the block. If the block // is big enough to be a FreeSpace with at least one extra word (the next @@ -2010,15 +1983,15 @@ void FreeListNode::set_size(Heap* heap, int size_in_bytes) { } else { UNREACHABLE(); } - // We would like to ASSERT(Size() == size_in_bytes) but this would fail during + // We would like to DCHECK(Size() == size_in_bytes) but this would fail during // deserialization because the free space map is not done yet. 
} FreeListNode* FreeListNode::next() { - ASSERT(IsFreeListNode(this)); + DCHECK(IsFreeListNode(this)); if (map() == GetHeap()->raw_unchecked_free_space_map()) { - ASSERT(map() == NULL || Size() >= kNextOffset + kPointerSize); + DCHECK(map() == NULL || Size() >= kNextOffset + kPointerSize); return reinterpret_cast<FreeListNode*>( Memory::Address_at(address() + kNextOffset)); } else { @@ -2029,9 +2002,9 @@ FreeListNode* FreeListNode::next() { FreeListNode** FreeListNode::next_address() { - ASSERT(IsFreeListNode(this)); + DCHECK(IsFreeListNode(this)); if (map() == GetHeap()->raw_unchecked_free_space_map()) { - ASSERT(Size() >= kNextOffset + kPointerSize); + DCHECK(Size() >= kNextOffset + kPointerSize); return reinterpret_cast<FreeListNode**>(address() + kNextOffset); } else { return reinterpret_cast<FreeListNode**>(address() + kPointerSize); @@ -2040,17 +2013,19 @@ FreeListNode** FreeListNode::next_address() { void FreeListNode::set_next(FreeListNode* next) { - ASSERT(IsFreeListNode(this)); + DCHECK(IsFreeListNode(this)); // While we are booting the VM the free space map will actually be null. So // we have to make sure that we don't try to use it for anything at that // stage. if (map() == GetHeap()->raw_unchecked_free_space_map()) { - ASSERT(map() == NULL || Size() >= kNextOffset + kPointerSize); - NoBarrier_Store(reinterpret_cast<AtomicWord*>(address() + kNextOffset), - reinterpret_cast<AtomicWord>(next)); + DCHECK(map() == NULL || Size() >= kNextOffset + kPointerSize); + base::NoBarrier_Store( + reinterpret_cast<base::AtomicWord*>(address() + kNextOffset), + reinterpret_cast<base::AtomicWord>(next)); } else { - NoBarrier_Store(reinterpret_cast<AtomicWord*>(address() + kPointerSize), - reinterpret_cast<AtomicWord>(next)); + base::NoBarrier_Store( + reinterpret_cast<base::AtomicWord*>(address() + kPointerSize), + reinterpret_cast<base::AtomicWord>(next)); } } @@ -2061,9 +2036,9 @@ intptr_t FreeListCategory::Concatenate(FreeListCategory* category) { // This is safe (not going to deadlock) since Concatenate operations // are never performed on the same free lists at the same time in // reverse order. 
- LockGuard<Mutex> target_lock_guard(mutex()); - LockGuard<Mutex> source_lock_guard(category->mutex()); - ASSERT(category->end_ != NULL); + base::LockGuard<base::Mutex> target_lock_guard(mutex()); + base::LockGuard<base::Mutex> source_lock_guard(category->mutex()); + DCHECK(category->end_ != NULL); free_bytes = category->available(); if (end_ == NULL) { end_ = category->end(); @@ -2071,7 +2046,7 @@ intptr_t FreeListCategory::Concatenate(FreeListCategory* category) { category->end()->set_next(top()); } set_top(category->top()); - NoBarrier_Store(&top_, category->top_); + base::NoBarrier_Store(&top_, category->top_); available_ += category->available(); category->Reset(); } @@ -2118,7 +2093,7 @@ bool FreeListCategory::ContainsPageFreeListItemsInList(Page* p) { } -FreeListNode* FreeListCategory::PickNodeFromList(int *node_size) { +FreeListNode* FreeListCategory::PickNodeFromList(int* node_size) { FreeListNode* node = top(); if (node == NULL) return NULL; @@ -2146,7 +2121,7 @@ FreeListNode* FreeListCategory::PickNodeFromList(int *node_size) { FreeListNode* FreeListCategory::PickNodeFromList(int size_in_bytes, - int *node_size) { + int* node_size) { FreeListNode* node = PickNodeFromList(node_size); if (node != NULL && *node_size < size_in_bytes) { Free(node, *node_size); @@ -2174,15 +2149,14 @@ void FreeListCategory::RepairFreeList(Heap* heap) { if (*map_location == NULL) { *map_location = heap->free_space_map(); } else { - ASSERT(*map_location == heap->free_space_map()); + DCHECK(*map_location == heap->free_space_map()); } n = n->next(); } } -FreeList::FreeList(PagedSpace* owner) - : owner_(owner), heap_(owner->heap()) { +FreeList::FreeList(PagedSpace* owner) : owner_(owner), heap_(owner->heap()) { Reset(); } @@ -2234,7 +2208,7 @@ int FreeList::Free(Address start, int size_in_bytes) { page->add_available_in_huge_free_list(size_in_bytes); } - ASSERT(IsVeryLong() || available() == SumFreeLists()); + DCHECK(IsVeryLong() || available() == SumFreeLists()); return 0; } @@ -2246,10 +2220,10 @@ FreeListNode* FreeList::FindNodeFor(int size_in_bytes, int* node_size) { if (size_in_bytes <= kSmallAllocationMax) { node = small_list_.PickNodeFromList(node_size); if (node != NULL) { - ASSERT(size_in_bytes <= *node_size); + DCHECK(size_in_bytes <= *node_size); page = Page::FromAddress(node->address()); page->add_available_in_small_free_list(-(*node_size)); - ASSERT(IsVeryLong() || available() == SumFreeLists()); + DCHECK(IsVeryLong() || available() == SumFreeLists()); return node; } } @@ -2257,10 +2231,10 @@ FreeListNode* FreeList::FindNodeFor(int size_in_bytes, int* node_size) { if (size_in_bytes <= kMediumAllocationMax) { node = medium_list_.PickNodeFromList(node_size); if (node != NULL) { - ASSERT(size_in_bytes <= *node_size); + DCHECK(size_in_bytes <= *node_size); page = Page::FromAddress(node->address()); page->add_available_in_medium_free_list(-(*node_size)); - ASSERT(IsVeryLong() || available() == SumFreeLists()); + DCHECK(IsVeryLong() || available() == SumFreeLists()); return node; } } @@ -2268,18 +2242,17 @@ FreeListNode* FreeList::FindNodeFor(int size_in_bytes, int* node_size) { if (size_in_bytes <= kLargeAllocationMax) { node = large_list_.PickNodeFromList(node_size); if (node != NULL) { - ASSERT(size_in_bytes <= *node_size); + DCHECK(size_in_bytes <= *node_size); page = Page::FromAddress(node->address()); page->add_available_in_large_free_list(-(*node_size)); - ASSERT(IsVeryLong() || available() == SumFreeLists()); + DCHECK(IsVeryLong() || available() == SumFreeLists()); return node; } } int 
huge_list_available = huge_list_.available(); FreeListNode* top_node = huge_list_.top(); - for (FreeListNode** cur = &top_node; - *cur != NULL; + for (FreeListNode** cur = &top_node; *cur != NULL; cur = (*cur)->next_address()) { FreeListNode* cur_node = *cur; while (cur_node != NULL && @@ -2297,7 +2270,7 @@ FreeListNode* FreeList::FindNodeFor(int size_in_bytes, int* node_size) { break; } - ASSERT((*cur)->map() == heap_->raw_unchecked_free_space_map()); + DCHECK((*cur)->map() == heap_->raw_unchecked_free_space_map()); FreeSpace* cur_as_free_space = reinterpret_cast<FreeSpace*>(*cur); int size = cur_as_free_space->Size(); if (size >= size_in_bytes) { @@ -2319,34 +2292,34 @@ FreeListNode* FreeList::FindNodeFor(int size_in_bytes, int* node_size) { huge_list_.set_available(huge_list_available); if (node != NULL) { - ASSERT(IsVeryLong() || available() == SumFreeLists()); + DCHECK(IsVeryLong() || available() == SumFreeLists()); return node; } if (size_in_bytes <= kSmallListMax) { node = small_list_.PickNodeFromList(size_in_bytes, node_size); if (node != NULL) { - ASSERT(size_in_bytes <= *node_size); + DCHECK(size_in_bytes <= *node_size); page = Page::FromAddress(node->address()); page->add_available_in_small_free_list(-(*node_size)); } } else if (size_in_bytes <= kMediumListMax) { node = medium_list_.PickNodeFromList(size_in_bytes, node_size); if (node != NULL) { - ASSERT(size_in_bytes <= *node_size); + DCHECK(size_in_bytes <= *node_size); page = Page::FromAddress(node->address()); page->add_available_in_medium_free_list(-(*node_size)); } } else if (size_in_bytes <= kLargeListMax) { node = large_list_.PickNodeFromList(size_in_bytes, node_size); if (node != NULL) { - ASSERT(size_in_bytes <= *node_size); + DCHECK(size_in_bytes <= *node_size); page = Page::FromAddress(node->address()); page->add_available_in_large_free_list(-(*node_size)); } } - ASSERT(IsVeryLong() || available() == SumFreeLists()); + DCHECK(IsVeryLong() || available() == SumFreeLists()); return node; } @@ -2356,11 +2329,11 @@ FreeListNode* FreeList::FindNodeFor(int size_in_bytes, int* node_size) { // the allocation fails then NULL is returned, and the caller can perform a GC // or allocate a new page before retrying. HeapObject* FreeList::Allocate(int size_in_bytes) { - ASSERT(0 < size_in_bytes); - ASSERT(size_in_bytes <= kMaxBlockSize); - ASSERT(IsAligned(size_in_bytes, kPointerSize)); + DCHECK(0 < size_in_bytes); + DCHECK(size_in_bytes <= kMaxBlockSize); + DCHECK(IsAligned(size_in_bytes, kPointerSize)); // Don't free list allocate if there is linear space available. - ASSERT(owner_->limit() - owner_->top() < size_in_bytes); + DCHECK(owner_->limit() - owner_->top() < size_in_bytes); int old_linear_size = static_cast<int>(owner_->limit() - owner_->top()); // Mark the old linear allocation area with a free space map so it can be @@ -2368,8 +2341,8 @@ HeapObject* FreeList::Allocate(int size_in_bytes) { // if it is big enough. 
owner_->Free(owner_->top(), old_linear_size); - owner_->heap()->incremental_marking()->OldSpaceStep( - size_in_bytes - old_linear_size); + owner_->heap()->incremental_marking()->OldSpaceStep(size_in_bytes - + old_linear_size); int new_node_size = 0; FreeListNode* new_node = FindNodeFor(size_in_bytes, &new_node_size); @@ -2379,7 +2352,7 @@ HeapObject* FreeList::Allocate(int size_in_bytes) { } int bytes_left = new_node_size - size_in_bytes; - ASSERT(bytes_left >= 0); + DCHECK(bytes_left >= 0); #ifdef DEBUG for (int i = 0; i < size_in_bytes / kPointerSize; i++) { @@ -2391,7 +2364,7 @@ HeapObject* FreeList::Allocate(int size_in_bytes) { // The old-space-step might have finished sweeping and restarted marking. // Verify that it did not turn the page of the new node into an evacuation // candidate. - ASSERT(!MarkCompactCollector::IsOnEvacuationCandidate(new_node)); + DCHECK(!MarkCompactCollector::IsOnEvacuationCandidate(new_node)); const int kThreshold = IncrementalMarking::kAllocatedThreshold; @@ -2403,7 +2376,7 @@ HeapObject* FreeList::Allocate(int size_in_bytes) { // Keep the linear allocation area empty if requested to do so, just // return area back to the free list instead. owner_->Free(new_node->address() + size_in_bytes, bytes_left); - ASSERT(owner_->top() == NULL && owner_->limit() == NULL); + DCHECK(owner_->top() == NULL && owner_->limit() == NULL); } else if (bytes_left > kThreshold && owner_->heap()->incremental_marking()->IsMarkingIncomplete() && FLAG_incremental_marking_steps) { @@ -2436,8 +2409,8 @@ intptr_t FreeList::EvictFreeListItems(Page* p) { if (sum < p->area_size()) { sum += small_list_.EvictFreeListItemsInList(p) + - medium_list_.EvictFreeListItemsInList(p) + - large_list_.EvictFreeListItemsInList(p); + medium_list_.EvictFreeListItemsInList(p) + + large_list_.EvictFreeListItemsInList(p); p->set_available_in_small_free_list(0); p->set_available_in_medium_free_list(0); p->set_available_in_large_free_list(0); @@ -2468,7 +2441,7 @@ intptr_t FreeListCategory::SumFreeList() { intptr_t sum = 0; FreeListNode* cur = top(); while (cur != NULL) { - ASSERT(cur->map() == cur->GetHeap()->raw_unchecked_free_space_map()); + DCHECK(cur->map() == cur->GetHeap()->raw_unchecked_free_space_map()); FreeSpace* cur_as_free_space = reinterpret_cast<FreeSpace*>(cur); sum += cur_as_free_space->nobarrier_size(); cur = cur->next(); @@ -2493,10 +2466,10 @@ int FreeListCategory::FreeListLength() { bool FreeList::IsVeryLong() { - if (small_list_.FreeListLength() == kVeryLongFreeList) return true; - if (medium_list_.FreeListLength() == kVeryLongFreeList) return true; - if (large_list_.FreeListLength() == kVeryLongFreeList) return true; - if (huge_list_.FreeListLength() == kVeryLongFreeList) return true; + if (small_list_.FreeListLength() == kVeryLongFreeList) return true; + if (medium_list_.FreeListLength() == kVeryLongFreeList) return true; + if (large_list_.FreeListLength() == kVeryLongFreeList) return true; + if (huge_list_.FreeListLength() == kVeryLongFreeList) return true; return false; } @@ -2532,7 +2505,7 @@ void PagedSpace::PrepareForMarkCompact() { intptr_t PagedSpace::SizeOfObjects() { - ASSERT(heap()->mark_compact_collector()->IsConcurrentSweepingInProgress() || + DCHECK(heap()->mark_compact_collector()->sweeping_in_progress() || (unswept_free_bytes_ == 0)); return Size() - unswept_free_bytes_ - (limit() - top()); } @@ -2542,16 +2515,14 @@ intptr_t PagedSpace::SizeOfObjects() { // on the heap. 
If there was already a free list then the elements on it
 // were created with the wrong FreeSpaceMap (normally NULL), so we need to
 // fix them.
-void PagedSpace::RepairFreeListsAfterBoot() {
-  free_list_.RepairLists(heap());
-}
+void PagedSpace::RepairFreeListsAfterBoot() { free_list_.RepairLists(heap()); }
 void PagedSpace::EvictEvacuationCandidatesFromFreeLists() {
   if (allocation_info_.top() >= allocation_info_.limit()) return;
-  if (Page::FromAllocationTop(allocation_info_.top())->
-      IsEvacuationCandidate()) {
+  if (Page::FromAllocationTop(allocation_info_.top())
+          ->IsEvacuationCandidate()) {
     // Create filler object to keep page iterable if it was iterable.
     int remaining =
         static_cast<int>(allocation_info_.limit() - allocation_info_.top());
@@ -2563,17 +2534,45 @@ void PagedSpace::EvictEvacuationCandidatesFromFreeLists() {
 }
+HeapObject* PagedSpace::WaitForSweeperThreadsAndRetryAllocation(
+    int size_in_bytes) {
+  MarkCompactCollector* collector = heap()->mark_compact_collector();
+  if (collector->sweeping_in_progress()) {
+    // Wait for the sweeper threads here and complete the sweeping phase.
+    collector->EnsureSweepingCompleted();
+
+    // After waiting for the sweeper threads, there may be new free-list
+    // entries.
+    return free_list_.Allocate(size_in_bytes);
+  }
+  return NULL;
+}
+
+
 HeapObject* PagedSpace::SlowAllocateRaw(int size_in_bytes) {
   // Allocation in this space has failed.
-  // If sweeper threads are active, try to re-fill the free-lists.
   MarkCompactCollector* collector = heap()->mark_compact_collector();
-  if (collector->IsConcurrentSweepingInProgress()) {
+  // Sweeping is still in progress.
+  if (collector->sweeping_in_progress()) {
+    // First try to refill the free-list, concurrent sweeper threads
+    // may have freed some objects in the meantime.
     collector->RefillFreeList(this);
     // Retry the free list allocation.
     HeapObject* object = free_list_.Allocate(size_in_bytes);
     if (object != NULL) return object;
+
+    // If sweeping is still in progress try to sweep pages on the main thread.
+    int free_chunk = collector->SweepInParallel(this, size_in_bytes);
+    collector->RefillFreeList(this);
+    if (free_chunk >= size_in_bytes) {
+      HeapObject* object = free_list_.Allocate(size_in_bytes);
+      // We should be able to allocate an object here since we just freed that
+      // much memory.
+      DCHECK(object != NULL);
+      if (object != NULL) return object;
+    }
   }
   // Free list allocation failed and there is no next page.  Fail if we have
@@ -2581,27 +2580,22 @@ HeapObject* PagedSpace::SlowAllocateRaw(int size_in_bytes) {
   // collection.
   if (!heap()->always_allocate() &&
       heap()->OldGenerationAllocationLimitReached()) {
-    return NULL;
+    // If sweeper threads are active, wait for them at that point and steal
+    // elements from their free-lists.
+    HeapObject* object = WaitForSweeperThreadsAndRetryAllocation(size_in_bytes);
+    if (object != NULL) return object;
   }
   // Try to expand the space and allocate in the new next page.
   if (Expand()) {
-    ASSERT(CountTotalPages() > 1 || size_in_bytes <= free_list_.available());
+    DCHECK(CountTotalPages() > 1 || size_in_bytes <= free_list_.available());
     return free_list_.Allocate(size_in_bytes);
   }
-  // If sweeper threads are active, wait for them at that point.
-  if (collector->IsConcurrentSweepingInProgress()) {
-    collector->WaitUntilSweepingCompleted();
-
-    // After waiting for the sweeper threads, there may be new free-list
-    // entries.
-    HeapObject* object = free_list_.Allocate(size_in_bytes);
-    if (object != NULL) return object;
-  }
-
-  // Finally, fail.
-  return NULL;
+  // If sweeper threads are active, wait for them at that point and steal
+  // elements from their free-lists. Allocation may still fail, which
+  // would indicate that there is not enough memory for the given allocation.
+  return WaitForSweeperThreadsAndRetryAllocation(size_in_bytes);
 }
@@ -2610,13 +2604,14 @@ void PagedSpace::ReportCodeStatistics(Isolate* isolate) {
   CommentStatistic* comments_statistics =
       isolate->paged_space_comments_statistics();
   ReportCodeKindStatistics(isolate->code_kind_statistics());
-  PrintF("Code comment statistics (\" [ comment-txt : size/ "
-         "count (average)\"):\n");
+  PrintF(
+      "Code comment statistics (\" [ comment-txt : size/ "
+      "count (average)\"):\n");
   for (int i = 0; i <= CommentStatistic::kMaxComments; i++) {
     const CommentStatistic& cs = comments_statistics[i];
     if (cs.size > 0) {
       PrintF(" %-30s: %10d/%6d (%d)\n", cs.comment, cs.size, cs.count,
-             cs.size/cs.count);
+             cs.size / cs.count);
     }
   }
   PrintF("\n");
@@ -2665,8 +2660,8 @@ static void EnterComment(Isolate* isolate, const char* comment, int delta) {
 // Call for each nested comment start (start marked with '[ xxx', end marked
 // with ']'. RelocIterator 'it' must point to a comment reloc info.
 static void CollectCommentStatistics(Isolate* isolate, RelocIterator* it) {
-  ASSERT(!it->done());
-  ASSERT(it->rinfo()->rmode() == RelocInfo::COMMENT);
+  DCHECK(!it->done());
+  DCHECK(it->rinfo()->rmode() == RelocInfo::COMMENT);
   const char* tmp = reinterpret_cast<const char*>(it->rinfo()->data());
   if (tmp[0] != '[') {
     // Not a nested comment; skip
@@ -2682,7 +2677,7 @@ static void CollectCommentStatistics(Isolate* isolate, RelocIterator* it) {
   while (true) {
     // All nested comments must be terminated properly, and therefore exit
     // from loop.
-    ASSERT(!it->done());
+    DCHECK(!it->done());
     if (it->rinfo()->rmode() == RelocInfo::COMMENT) {
       const char* const txt =
           reinterpret_cast<const char*>(it->rinfo()->data());
@@ -2721,7 +2716,7 @@ void PagedSpace::CollectCodeStatistics() {
           it.next();
         }
-        ASSERT(code->instruction_start() <= prev_pc &&
+        DCHECK(code->instruction_start() <= prev_pc &&
               prev_pc <= code->instruction_end());
         delta += static_cast<int>(code->instruction_end() - prev_pc);
         EnterComment(isolate, "NoComment", delta);
@@ -2732,12 +2727,14 @@ void PagedSpace::ReportStatistics() {
   int pct = static_cast<int>(Available() * 100 / Capacity());
-  PrintF("  capacity: %" V8_PTR_PREFIX "d"
-         ", waste: %" V8_PTR_PREFIX "d"
-         ", available: %" V8_PTR_PREFIX "d, %%%d\n",
+  PrintF("  capacity: %" V8_PTR_PREFIX
+         "d"
+         ", waste: %" V8_PTR_PREFIX
+         "d"
+         ", available: %" V8_PTR_PREFIX "d, %%%d\n",
         Capacity(), Waste(), Available(), pct);
-  if (was_swept_conservatively_) return;
+  if (!swept_precisely_) return;
   ClearHistograms(heap()->isolate());
   HeapObjectIterator obj_it(this);
   for (HeapObject* obj = obj_it.Next(); obj != NULL; obj = obj_it.Next())
@@ -2753,9 +2750,7 @@ void PagedSpace::ReportStatistics() {
 // there is at least one non-inlined virtual function. I would prefer to hide
 // the VerifyObject definition behind VERIFY_HEAP.
-void MapSpace::VerifyObject(HeapObject* object) {
-  CHECK(object->IsMap());
-}
+void MapSpace::VerifyObject(HeapObject* object) { CHECK(object->IsMap()); }
 // -----------------------------------------------------------------------------
@@ -2764,9 +2759,7 @@ void MapSpace::VerifyObject(HeapObject* object) {
 // there is at least one non-inlined virtual function. I would prefer to hide
 // the VerifyObject definition behind VERIFY_HEAP.
-void CellSpace::VerifyObject(HeapObject* object) { - CHECK(object->IsCell()); -} +void CellSpace::VerifyObject(HeapObject* object) { CHECK(object->IsCell()); } void PropertyCellSpace::VerifyObject(HeapObject* object) { @@ -2801,13 +2794,10 @@ HeapObject* LargeObjectIterator::Next() { // ----------------------------------------------------------------------------- // LargeObjectSpace -static bool ComparePointers(void* key1, void* key2) { - return key1 == key2; -} +static bool ComparePointers(void* key1, void* key2) { return key1 == key2; } -LargeObjectSpace::LargeObjectSpace(Heap* heap, - intptr_t max_capacity, +LargeObjectSpace::LargeObjectSpace(Heap* heap, intptr_t max_capacity, AllocationSpace id) : Space(heap, id, NOT_EXECUTABLE), // Managed on a per-allocation basis max_capacity_(max_capacity), @@ -2857,10 +2847,10 @@ AllocationResult LargeObjectSpace::AllocateRaw(int object_size, return AllocationResult::Retry(identity()); } - LargePage* page = heap()->isolate()->memory_allocator()-> - AllocateLargePage(object_size, this, executable); + LargePage* page = heap()->isolate()->memory_allocator()->AllocateLargePage( + object_size, this, executable); if (page == NULL) return AllocationResult::Retry(identity()); - ASSERT(page->area_size() >= object_size); + DCHECK(page->area_size() >= object_size); size_ += static_cast<int>(page->size()); objects_size_ += object_size; @@ -2878,9 +2868,8 @@ AllocationResult LargeObjectSpace::AllocateRaw(int object_size, uintptr_t limit = base + (page->size() - 1) / MemoryChunk::kAlignment; for (uintptr_t key = base; key <= limit; key++) { HashMap::Entry* entry = chunk_map_.Lookup(reinterpret_cast<void*>(key), - static_cast<uint32_t>(key), - true); - ASSERT(entry != NULL); + static_cast<uint32_t>(key), true); + DCHECK(entry != NULL); entry->value = page; } @@ -2900,7 +2889,7 @@ AllocationResult LargeObjectSpace::AllocateRaw(int object_size, size_t LargeObjectSpace::CommittedPhysicalMemory() { - if (!VirtualMemory::HasLazyCommits()) return CommittedMemory(); + if (!base::VirtualMemory::HasLazyCommits()) return CommittedMemory(); size_t size = 0; LargePage* current = first_page_; while (current != NULL) { @@ -2924,12 +2913,11 @@ Object* LargeObjectSpace::FindObject(Address a) { LargePage* LargeObjectSpace::FindPage(Address a) { uintptr_t key = reinterpret_cast<uintptr_t>(a) / MemoryChunk::kAlignment; HashMap::Entry* e = chunk_map_.Lookup(reinterpret_cast<void*>(key), - static_cast<uint32_t>(key), - false); + static_cast<uint32_t>(key), false); if (e != NULL) { - ASSERT(e->value != NULL); + DCHECK(e->value != NULL); LargePage* page = reinterpret_cast<LargePage*>(e->value); - ASSERT(page->is_valid()); + DCHECK(page->is_valid()); if (page->Contains(a)) { return page; } @@ -2964,8 +2952,8 @@ void LargeObjectSpace::FreeUnmarkedObjects() { } // Free the chunk. - heap()->mark_compact_collector()->ReportDeleteIfNeeded( - object, heap()->isolate()); + heap()->mark_compact_collector()->ReportDeleteIfNeeded(object, + heap()->isolate()); size_ -= static_cast<int>(page->size()); objects_size_ -= object->Size(); page_count_--; @@ -2974,8 +2962,8 @@ void LargeObjectSpace::FreeUnmarkedObjects() { // Use variable alignment to help pass length check (<= 80 characters) // of single line in tools/presubmit.py. 
const intptr_t alignment = MemoryChunk::kAlignment; - uintptr_t base = reinterpret_cast<uintptr_t>(page)/alignment; - uintptr_t limit = base + (page->size()-1)/alignment; + uintptr_t base = reinterpret_cast<uintptr_t>(page) / alignment; + uintptr_t limit = base + (page->size() - 1) / alignment; for (uintptr_t key = base; key <= limit; key++) { chunk_map_.Remove(reinterpret_cast<void*>(key), static_cast<uint32_t>(key)); @@ -2998,7 +2986,7 @@ bool LargeObjectSpace::Contains(HeapObject* object) { bool owned = (chunk->owner() == this); - SLOW_ASSERT(!owned || FindObject(address)->IsHeapObject()); + SLOW_DCHECK(!owned || FindObject(address)->IsHeapObject()); return owned; } @@ -3008,8 +2996,7 @@ bool LargeObjectSpace::Contains(HeapObject* object) { // We do not assume that the large object iterator works, because it depends // on the invariants we are checking during verification. void LargeObjectSpace::Verify() { - for (LargePage* chunk = first_page_; - chunk != NULL; + for (LargePage* chunk = first_page_; chunk != NULL; chunk = chunk->next_page()) { // Each chunk contains an object that starts at the large object page's // object area start. @@ -3025,10 +3012,12 @@ void LargeObjectSpace::Verify() { // We have only code, sequential strings, external strings // (sequential strings that have been morphed into external - // strings), fixed arrays, and byte arrays in large object space. + // strings), fixed arrays, byte arrays, and constant pool arrays in the + // large object space. CHECK(object->IsCode() || object->IsSeqString() || - object->IsExternalString() || object->IsFixedArray() || - object->IsFixedDoubleArray() || object->IsByteArray()); + object->IsExternalString() || object->IsFixedArray() || + object->IsFixedDoubleArray() || object->IsByteArray() || + object->IsConstantPoolArray()); // The object itself should look OK. object->ObjectVerify(); @@ -3036,9 +3025,7 @@ void LargeObjectSpace::Verify() { // Byte arrays and strings don't have interior pointers. if (object->IsCode()) { VerifyPointersVisitor code_visitor; - object->IterateBody(map->instance_type(), - object->Size(), - &code_visitor); + object->IterateBody(map->instance_type(), object->Size(), &code_visitor); } else if (object->IsFixedArray()) { FixedArray* array = FixedArray::cast(object); for (int j = 0; j < array->length(); j++) { @@ -3057,9 +3044,10 @@ void LargeObjectSpace::Verify() { #ifdef DEBUG void LargeObjectSpace::Print() { + OFStream os(stdout); LargeObjectIterator it(this); for (HeapObject* obj = it.Next(); obj != NULL; obj = it.Next()) { - obj->Print(); + obj->Print(os); } } @@ -3074,8 +3062,10 @@ void LargeObjectSpace::ReportStatistics() { CollectHistogramInfo(obj); } - PrintF(" number of objects %d, " - "size of objects %" V8_PTR_PREFIX "d\n", num_objects, objects_size_); + PrintF( + " number of objects %d, " + "size of objects %" V8_PTR_PREFIX "d\n", + num_objects, objects_size_); if (num_objects > 0) ReportHistogram(heap()->isolate(), false); } @@ -3094,14 +3084,12 @@ void LargeObjectSpace::CollectCodeStatistics() { void Page::Print() { // Make a best-effort to print the objects in the page. 
- PrintF("Page@%p in %s\n", - this->address(), + PrintF("Page@%p in %s\n", this->address(), AllocationSpaceName(this->owner()->identity())); printf(" --------------------------------------\n"); HeapObjectIterator objects(this, heap()->GcSafeSizeOfOldObjectFunction()); unsigned mark_size = 0; - for (HeapObject* object = objects.Next(); - object != NULL; + for (HeapObject* object = objects.Next(); object != NULL; object = objects.Next()) { bool is_marked = Marking::MarkBitFrom(object).Get(); PrintF(" %c ", (is_marked ? '!' : ' ')); // Indent a little. @@ -3116,5 +3104,5 @@ void Page::Print() { } #endif // DEBUG - -} } // namespace v8::internal +} +} // namespace v8::internal diff --git a/deps/v8/src/spaces.h b/deps/v8/src/heap/spaces.h similarity index 83% rename from deps/v8/src/spaces.h rename to deps/v8/src/heap/spaces.h index dd410754b06..312d75f52ee 100644 --- a/deps/v8/src/spaces.h +++ b/deps/v8/src/heap/spaces.h @@ -2,15 +2,16 @@ // Use of this source code is governed by a BSD-style license that can be // found in the LICENSE file. -#ifndef V8_SPACES_H_ -#define V8_SPACES_H_ +#ifndef V8_HEAP_SPACES_H_ +#define V8_HEAP_SPACES_H_ -#include "allocation.h" -#include "hashmap.h" -#include "list.h" -#include "log.h" -#include "platform/mutex.h" -#include "utils.h" +#include "src/allocation.h" +#include "src/base/atomicops.h" +#include "src/base/platform/mutex.h" +#include "src/hashmap.h" +#include "src/list.h" +#include "src/log.h" +#include "src/utils.h" namespace v8 { namespace internal { @@ -37,7 +38,7 @@ class Isolate; // may be larger than the page size. // // A store-buffer based write barrier is used to keep track of intergenerational -// references. See store-buffer.h. +// references. See heap/store-buffer.h. // // During scavenges and mark-sweep collections we sometimes (after a store // buffer overflow) iterate intergenerational pointers without decoding heap @@ -73,21 +74,20 @@ class Isolate; // Some assertion macros used in the debugging mode. 
-#define ASSERT_PAGE_ALIGNED(address) \ - ASSERT((OffsetFrom(address) & Page::kPageAlignmentMask) == 0) +#define DCHECK_PAGE_ALIGNED(address) \ + DCHECK((OffsetFrom(address) & Page::kPageAlignmentMask) == 0) -#define ASSERT_OBJECT_ALIGNED(address) \ - ASSERT((OffsetFrom(address) & kObjectAlignmentMask) == 0) +#define DCHECK_OBJECT_ALIGNED(address) \ + DCHECK((OffsetFrom(address) & kObjectAlignmentMask) == 0) -#define ASSERT_OBJECT_SIZE(size) \ - ASSERT((0 < size) && (size <= Page::kMaxRegularHeapObjectSize)) +#define DCHECK_OBJECT_SIZE(size) \ + DCHECK((0 < size) && (size <= Page::kMaxRegularHeapObjectSize)) -#define ASSERT_PAGE_OFFSET(offset) \ - ASSERT((Page::kObjectStartOffset <= offset) \ - && (offset <= Page::kPageSize)) +#define DCHECK_PAGE_OFFSET(offset) \ + DCHECK((Page::kObjectStartOffset <= offset) && (offset <= Page::kPageSize)) -#define ASSERT_MAP_PAGE_INDEX(index) \ - ASSERT((0 <= index) && (index <= MapSpace::kMaxMapPageIndex)) +#define DCHECK_MAP_PAGE_INDEX(index) \ + DCHECK((0 <= index) && (index <= MapSpace::kMaxMapPageIndex)) class PagedSpace; @@ -102,7 +102,7 @@ class MarkBit { typedef uint32_t CellType; inline MarkBit(CellType* cell, CellType mask, bool data_only) - : cell_(cell), mask_(mask), data_only_(data_only) { } + : cell_(cell), mask_(mask), data_only_(data_only) {} inline CellType* cell() { return cell_; } inline CellType mask() { return mask_; } @@ -148,20 +148,17 @@ class Bitmap { static const uint32_t kBytesPerCell = kBitsPerCell / kBitsPerByte; static const uint32_t kBytesPerCellLog2 = kBitsPerCellLog2 - kBitsPerByteLog2; - static const size_t kLength = - (1 << kPageSizeBits) >> (kPointerSizeLog2); + static const size_t kLength = (1 << kPageSizeBits) >> (kPointerSizeLog2); static const size_t kSize = - (1 << kPageSizeBits) >> (kPointerSizeLog2 + kBitsPerByteLog2); + (1 << kPageSizeBits) >> (kPointerSizeLog2 + kBitsPerByteLog2); static int CellsForLength(int length) { return (length + kBitsPerCell - 1) >> kBitsPerCellLog2; } - int CellsCount() { - return CellsForLength(kLength); - } + int CellsCount() { return CellsForLength(kLength); } static int SizeFor(int cells_count) { return sizeof(MarkBit::CellType) * cells_count; @@ -183,9 +180,7 @@ class Bitmap { return reinterpret_cast<MarkBit::CellType*>(this); } - INLINE(Address address()) { - return reinterpret_cast<Address>(this); - } + INLINE(Address address()) { return reinterpret_cast<Address>(this); } INLINE(static Bitmap* FromAddress(Address addr)) { return reinterpret_cast<Bitmap*>(addr); @@ -209,7 +204,7 @@ class Bitmap { class CellPrinter { public: - CellPrinter() : seq_start(0), seq_type(0), seq_length(0) { } + CellPrinter() : seq_start(0), seq_type(0), seq_length(0) {} void Print(uint32_t pos, uint32_t cell) { if (cell == seq_type) { @@ -233,9 +228,7 @@ class Bitmap { void Flush() { if (seq_length > 0) { - PrintF("%d: %dx%d\n", - seq_start, - seq_type == 0 ? 0 : 1, + PrintF("%d: %dx%d\n", seq_start, seq_type == 0 ? 0 : 1, seq_length * kBitsPerCell); seq_length = 0; } @@ -282,6 +275,10 @@ class MemoryChunk { static MemoryChunk* FromAddress(Address a) { return reinterpret_cast<MemoryChunk*>(OffsetFrom(a) & ~kAlignmentMask); } + static const MemoryChunk* FromAddress(const byte* a) { + return reinterpret_cast<const MemoryChunk*>(OffsetFrom(a) & + ~kAlignmentMask); + } // Only works for addresses in pointer spaces, not data or code spaces. 
static inline MemoryChunk* FromAnyPointerAddress(Heap* heap, Address addr); @@ -291,48 +288,44 @@ class MemoryChunk { bool is_valid() { return address() != NULL; } MemoryChunk* next_chunk() const { - return reinterpret_cast<MemoryChunk*>(Acquire_Load(&next_chunk_)); + return reinterpret_cast<MemoryChunk*>(base::Acquire_Load(&next_chunk_)); } MemoryChunk* prev_chunk() const { - return reinterpret_cast<MemoryChunk*>(Acquire_Load(&prev_chunk_)); + return reinterpret_cast<MemoryChunk*>(base::Acquire_Load(&prev_chunk_)); } void set_next_chunk(MemoryChunk* next) { - Release_Store(&next_chunk_, reinterpret_cast<AtomicWord>(next)); + base::Release_Store(&next_chunk_, reinterpret_cast<base::AtomicWord>(next)); } void set_prev_chunk(MemoryChunk* prev) { - Release_Store(&prev_chunk_, reinterpret_cast<AtomicWord>(prev)); + base::Release_Store(&prev_chunk_, reinterpret_cast<base::AtomicWord>(prev)); } Space* owner() const { - if ((reinterpret_cast<intptr_t>(owner_) & kFailureTagMask) == - kFailureTag) { + if ((reinterpret_cast<intptr_t>(owner_) & kPageHeaderTagMask) == + kPageHeaderTag) { return reinterpret_cast<Space*>(reinterpret_cast<intptr_t>(owner_) - - kFailureTag); + kPageHeaderTag); } else { return NULL; } } void set_owner(Space* space) { - ASSERT((reinterpret_cast<intptr_t>(space) & kFailureTagMask) == 0); - owner_ = reinterpret_cast<Address>(space) + kFailureTag; - ASSERT((reinterpret_cast<intptr_t>(owner_) & kFailureTagMask) == - kFailureTag); + DCHECK((reinterpret_cast<intptr_t>(space) & kPageHeaderTagMask) == 0); + owner_ = reinterpret_cast<Address>(space) + kPageHeaderTag; + DCHECK((reinterpret_cast<intptr_t>(owner_) & kPageHeaderTagMask) == + kPageHeaderTag); } - VirtualMemory* reserved_memory() { - return &reservation_; - } + base::VirtualMemory* reserved_memory() { return &reservation_; } - void InitializeReservedMemory() { - reservation_.Reset(); - } + void InitializeReservedMemory() { reservation_.Reset(); } - void set_reserved_memory(VirtualMemory* reservation) { - ASSERT_NOT_NULL(reservation); + void set_reserved_memory(base::VirtualMemory* reservation) { + DCHECK_NOT_NULL(reservation); reservation_.TakeControl(reservation); } @@ -404,23 +397,16 @@ class MemoryChunk { static const int kPointersFromHereAreInterestingMask = 1 << POINTERS_FROM_HERE_ARE_INTERESTING; - static const int kEvacuationCandidateMask = - 1 << EVACUATION_CANDIDATE; + static const int kEvacuationCandidateMask = 1 << EVACUATION_CANDIDATE; static const int kSkipEvacuationSlotsRecordingMask = - (1 << EVACUATION_CANDIDATE) | - (1 << RESCAN_ON_EVACUATION) | - (1 << IN_FROM_SPACE) | - (1 << IN_TO_SPACE); + (1 << EVACUATION_CANDIDATE) | (1 << RESCAN_ON_EVACUATION) | + (1 << IN_FROM_SPACE) | (1 << IN_TO_SPACE); - void SetFlag(int flag) { - flags_ |= static_cast<uintptr_t>(1) << flag; - } + void SetFlag(int flag) { flags_ |= static_cast<uintptr_t>(1) << flag; } - void ClearFlag(int flag) { - flags_ &= ~(static_cast<uintptr_t>(1) << flag); - } + void ClearFlag(int flag) { flags_ &= ~(static_cast<uintptr_t>(1) << flag); } void SetFlagTo(int flag, bool value) { if (value) { @@ -445,57 +431,56 @@ class MemoryChunk { intptr_t GetFlags() { return flags_; } - // PARALLEL_SWEEPING_DONE - The page state when sweeping is complete or - // sweeping must not be performed on that page. - // PARALLEL_SWEEPING_FINALIZE - A sweeper thread is done sweeping this - // page and will not touch the page memory anymore. - // PARALLEL_SWEEPING_IN_PROGRESS - This page is currently swept by a - // sweeper thread. 
- // PARALLEL_SWEEPING_PENDING - This page is ready for parallel sweeping. + // SWEEPING_DONE - The page state when sweeping is complete or sweeping must + // not be performed on that page. + // SWEEPING_FINALIZE - A sweeper thread is done sweeping this page and will + // not touch the page memory anymore. + // SWEEPING_IN_PROGRESS - This page is currently swept by a sweeper thread. + // SWEEPING_PENDING - This page is ready for parallel sweeping. enum ParallelSweepingState { - PARALLEL_SWEEPING_DONE, - PARALLEL_SWEEPING_FINALIZE, - PARALLEL_SWEEPING_IN_PROGRESS, - PARALLEL_SWEEPING_PENDING + SWEEPING_DONE, + SWEEPING_FINALIZE, + SWEEPING_IN_PROGRESS, + SWEEPING_PENDING }; ParallelSweepingState parallel_sweeping() { return static_cast<ParallelSweepingState>( - Acquire_Load(¶llel_sweeping_)); + base::Acquire_Load(¶llel_sweeping_)); } void set_parallel_sweeping(ParallelSweepingState state) { - Release_Store(¶llel_sweeping_, state); + base::Release_Store(¶llel_sweeping_, state); } bool TryParallelSweeping() { - return Acquire_CompareAndSwap(¶llel_sweeping_, - PARALLEL_SWEEPING_PENDING, - PARALLEL_SWEEPING_IN_PROGRESS) == - PARALLEL_SWEEPING_PENDING; + return base::Acquire_CompareAndSwap(¶llel_sweeping_, SWEEPING_PENDING, + SWEEPING_IN_PROGRESS) == + SWEEPING_PENDING; } + bool SweepingCompleted() { return parallel_sweeping() <= SWEEPING_FINALIZE; } + // Manage live byte count (count of bytes known to be live, // because they are marked black). void ResetLiveBytes() { if (FLAG_gc_verbose) { - PrintF("ResetLiveBytes:%p:%x->0\n", - static_cast<void*>(this), live_byte_count_); + PrintF("ResetLiveBytes:%p:%x->0\n", static_cast<void*>(this), + live_byte_count_); } live_byte_count_ = 0; } void IncrementLiveBytes(int by) { if (FLAG_gc_verbose) { - printf("UpdateLiveBytes:%p:%x%c=%x->%x\n", - static_cast<void*>(this), live_byte_count_, - ((by < 0) ? '-' : '+'), ((by < 0) ? -by : by), + printf("UpdateLiveBytes:%p:%x%c=%x->%x\n", static_cast<void*>(this), + live_byte_count_, ((by < 0) ? '-' : '+'), ((by < 0) ? 
-by : by), live_byte_count_ + by); } live_byte_count_ += by; - ASSERT_LE(static_cast<unsigned>(live_byte_count_), size_); + DCHECK_LE(static_cast<unsigned>(live_byte_count_), size_); } int LiveBytes() { - ASSERT(static_cast<unsigned>(live_byte_count_) <= size_); + DCHECK(static_cast<unsigned>(live_byte_count_) <= size_); return live_byte_count_; } @@ -508,12 +493,12 @@ class MemoryChunk { } int progress_bar() { - ASSERT(IsFlagSet(HAS_PROGRESS_BAR)); + DCHECK(IsFlagSet(HAS_PROGRESS_BAR)); return progress_bar_; } void set_progress_bar(int progress_bar) { - ASSERT(IsFlagSet(HAS_PROGRESS_BAR)); + DCHECK(IsFlagSet(HAS_PROGRESS_BAR)); progress_bar_ = progress_bar; } @@ -526,7 +511,7 @@ class MemoryChunk { bool IsLeftOfProgressBar(Object** slot) { Address slot_address = reinterpret_cast<Address>(slot); - ASSERT(slot_address > this->address()); + DCHECK(slot_address > this->address()); return (slot_address - (this->address() + kObjectStartOffset)) < progress_bar(); } @@ -545,19 +530,17 @@ class MemoryChunk { static const intptr_t kSizeOffset = 0; static const intptr_t kLiveBytesOffset = - kSizeOffset + kPointerSize + kPointerSize + kPointerSize + - kPointerSize + kPointerSize + - kPointerSize + kPointerSize + kPointerSize + kIntSize; + kSizeOffset + kPointerSize + kPointerSize + kPointerSize + kPointerSize + + kPointerSize + kPointerSize + kPointerSize + kPointerSize + kIntSize; static const size_t kSlotsBufferOffset = kLiveBytesOffset + kIntSize; static const size_t kWriteBarrierCounterOffset = kSlotsBufferOffset + kPointerSize + kPointerSize; - static const size_t kHeaderSize = kWriteBarrierCounterOffset + kPointerSize + - kIntSize + kIntSize + kPointerSize + - 5 * kPointerSize + - kPointerSize + kPointerSize; + static const size_t kHeaderSize = + kWriteBarrierCounterOffset + kPointerSize + kIntSize + kIntSize + + kPointerSize + 5 * kPointerSize + kPointerSize + kPointerSize; static const int kBodyOffset = CODE_POINTER_ALIGN(kHeaderSize + Bitmap::kSize); @@ -566,14 +549,13 @@ class MemoryChunk { // code alignment to be suitable for both. Also aligned to 32 words because // the marking bitmap is arranged in 32 bit chunks. static const int kObjectStartAlignment = 32 * kPointerSize; - static const int kObjectStartOffset = kBodyOffset - 1 + + static const int kObjectStartOffset = + kBodyOffset - 1 + (kObjectStartAlignment - (kBodyOffset - 1) % kObjectStartAlignment); size_t size() const { return size_; } - void set_size(size_t size) { - size_ = size; - } + void set_size(size_t size) { size_ = size; } void SetArea(Address area_start, Address area_end) { area_start_ = area_start; @@ -584,21 +566,15 @@ class MemoryChunk { return IsFlagSet(IS_EXECUTABLE) ? 
EXECUTABLE : NOT_EXECUTABLE; } - bool ContainsOnlyData() { - return IsFlagSet(CONTAINS_ONLY_DATA); - } + bool ContainsOnlyData() { return IsFlagSet(CONTAINS_ONLY_DATA); } bool InNewSpace() { return (flags_ & ((1 << IN_FROM_SPACE) | (1 << IN_TO_SPACE))) != 0; } - bool InToSpace() { - return IsFlagSet(IN_TO_SPACE); - } + bool InToSpace() { return IsFlagSet(IN_TO_SPACE); } - bool InFromSpace() { - return IsFlagSet(IN_FROM_SPACE); - } + bool InFromSpace() { return IsFlagSet(IN_FROM_SPACE); } // --------------------------------------------------------------------- // Markbits support @@ -614,8 +590,7 @@ class MemoryChunk { } inline static uint32_t FastAddressToMarkbitIndex(Address addr) { - const intptr_t offset = - reinterpret_cast<intptr_t>(addr) & kAlignmentMask; + const intptr_t offset = reinterpret_cast<intptr_t>(addr) & kAlignmentMask; return static_cast<uint32_t>(offset) >> kPointerSizeLog2; } @@ -627,7 +602,7 @@ class MemoryChunk { void InsertAfter(MemoryChunk* other); void Unlink(); - inline Heap* heap() { return heap_; } + inline Heap* heap() const { return heap_; } static const int kFlagsOffset = kPointerSize; @@ -637,43 +612,31 @@ class MemoryChunk { return (flags_ & kSkipEvacuationSlotsRecordingMask) != 0; } - inline SkipList* skip_list() { - return skip_list_; - } + inline SkipList* skip_list() { return skip_list_; } - inline void set_skip_list(SkipList* skip_list) { - skip_list_ = skip_list; - } + inline void set_skip_list(SkipList* skip_list) { skip_list_ = skip_list; } - inline SlotsBuffer* slots_buffer() { - return slots_buffer_; - } + inline SlotsBuffer* slots_buffer() { return slots_buffer_; } - inline SlotsBuffer** slots_buffer_address() { - return &slots_buffer_; - } + inline SlotsBuffer** slots_buffer_address() { return &slots_buffer_; } void MarkEvacuationCandidate() { - ASSERT(slots_buffer_ == NULL); + DCHECK(slots_buffer_ == NULL); SetFlag(EVACUATION_CANDIDATE); } void ClearEvacuationCandidate() { - ASSERT(slots_buffer_ == NULL); + DCHECK(slots_buffer_ == NULL); ClearFlag(EVACUATION_CANDIDATE); } Address area_start() { return area_start_; } Address area_end() { return area_end_; } - int area_size() { - return static_cast<int>(area_end() - area_start()); - } + int area_size() { return static_cast<int>(area_end() - area_start()); } bool CommitArea(size_t requested); // Approximate amount of physical memory committed for this chunk. - size_t CommittedPhysicalMemory() { - return high_water_mark_; - } + size_t CommittedPhysicalMemory() { return high_water_mark_; } static inline void UpdateHighWaterMark(Address mark); @@ -686,7 +649,7 @@ class MemoryChunk { Address area_end_; // If the chunk needs to remember its memory reservation, it is stored here. - VirtualMemory reservation_; + base::VirtualMemory reservation_; // The identity of the owning space. This is tagged as a failure pointer, but // no failure can be in an object, so this can be distinguished from any entry // in a fixed array. @@ -707,7 +670,7 @@ class MemoryChunk { // count highest number of bytes ever allocated on the page. int high_water_mark_; - AtomicWord parallel_sweeping_; + base::AtomicWord parallel_sweeping_; // PagedSpace free-list statistics. 
intptr_t available_in_small_free_list_; @@ -716,25 +679,21 @@ class MemoryChunk { intptr_t available_in_huge_free_list_; intptr_t non_available_small_blocks_; - static MemoryChunk* Initialize(Heap* heap, - Address base, - size_t size, - Address area_start, - Address area_end, - Executability executable, - Space* owner); + static MemoryChunk* Initialize(Heap* heap, Address base, size_t size, + Address area_start, Address area_end, + Executability executable, Space* owner); private: // next_chunk_ holds a pointer of type MemoryChunk - AtomicWord next_chunk_; + base::AtomicWord next_chunk_; // prev_chunk_ holds a pointer of type MemoryChunk - AtomicWord prev_chunk_; + base::AtomicWord prev_chunk_; friend class MemoryAllocator; }; -STATIC_CHECK(sizeof(MemoryChunk) <= MemoryChunk::kHeaderSize); +STATIC_ASSERT(sizeof(MemoryChunk) <= MemoryChunk::kHeaderSize); // ----------------------------------------------------------------------------- @@ -781,7 +740,7 @@ class Page : public MemoryChunk { // Returns the address for a given offset to the this page. Address OffsetToAddress(int offset) { - ASSERT_PAGE_OFFSET(offset); + DCHECK_PAGE_OFFSET(offset); return address() + offset; } @@ -801,10 +760,8 @@ class Page : public MemoryChunk { inline void ClearGCFields(); - static inline Page* Initialize(Heap* heap, - MemoryChunk* chunk, - Executability executable, - PagedSpace* owner); + static inline Page* Initialize(Heap* heap, MemoryChunk* chunk, + Executability executable, PagedSpace* owner); void InitializeAsAnchor(PagedSpace* owner); @@ -841,29 +798,26 @@ class Page : public MemoryChunk { }; -STATIC_CHECK(sizeof(Page) <= MemoryChunk::kHeaderSize); +STATIC_ASSERT(sizeof(Page) <= MemoryChunk::kHeaderSize); class LargePage : public MemoryChunk { public: - HeapObject* GetObject() { - return HeapObject::FromAddress(area_start()); - } + HeapObject* GetObject() { return HeapObject::FromAddress(area_start()); } inline LargePage* next_page() const { return static_cast<LargePage*>(next_chunk()); } - inline void set_next_page(LargePage* page) { - set_next_chunk(page); - } + inline void set_next_page(LargePage* page) { set_next_chunk(page); } + private: static inline LargePage* Initialize(Heap* heap, MemoryChunk* chunk); friend class MemoryAllocator; }; -STATIC_CHECK(sizeof(LargePage) <= MemoryChunk::kHeaderSize); +STATIC_ASSERT(sizeof(LargePage) <= MemoryChunk::kHeaderSize); // ---------------------------------------------------------------------------- // Space is the abstract superclass for all allocation spaces. @@ -929,13 +883,13 @@ class CodeRange { // manage it. void TearDown(); - bool exists() { return this != NULL && code_range_ != NULL; } + bool valid() { return code_range_ != NULL; } Address start() { - if (this == NULL || code_range_ == NULL) return NULL; + DCHECK(valid()); return static_cast<Address>(code_range_->address()); } bool contains(Address address) { - if (this == NULL || code_range_ == NULL) return false; + if (!valid()) return false; Address start = static_cast<Address>(code_range_->address()); return start <= address && address < start + code_range_->size(); } @@ -954,19 +908,19 @@ class CodeRange { Isolate* isolate_; // The reserved range of virtual memory that all code objects are put in. - VirtualMemory* code_range_; + base::VirtualMemory* code_range_; // Plain old data class, just a struct plus a constructor. 
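
// Note on the CodeRange hunk above: the old exists()/start()/contains() all
// tested "this == NULL", but calling a member function on a null pointer is
// undefined behavior in C++, so the diff renames exists() to valid() and
// leaves any null test to the call site. Illustrative stub of that
// caller-side pattern (names are ours, not v8's):
struct CodeRangeStub {
  void* code_range_ = nullptr;
  bool valid() const { return code_range_ != nullptr; }  // no null-this check
};

inline bool CodeRangeUsable(const CodeRangeStub* range /* may be null */) {
  return range != nullptr && range->valid();  // caller guards the pointer
}
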
class FreeBlock { public: FreeBlock(Address start_arg, size_t size_arg) : start(start_arg), size(size_arg) { - ASSERT(IsAddressAligned(start, MemoryChunk::kAlignment)); - ASSERT(size >= static_cast<size_t>(Page::kPageSize)); + DCHECK(IsAddressAligned(start, MemoryChunk::kAlignment)); + DCHECK(size >= static_cast<size_t>(Page::kPageSize)); } FreeBlock(void* start_arg, size_t size_arg) : start(static_cast<Address>(start_arg)), size(size_arg) { - ASSERT(IsAddressAligned(start, MemoryChunk::kAlignment)); - ASSERT(size >= static_cast<size_t>(Page::kPageSize)); + DCHECK(IsAddressAligned(start, MemoryChunk::kAlignment)); + DCHECK(size >= static_cast<size_t>(Page::kPageSize)); } Address start; @@ -997,9 +951,7 @@ class CodeRange { class SkipList { public: - SkipList() { - Clear(); - } + SkipList() { Clear(); } void Clear() { for (int idx = 0; idx < kSize; idx++) { @@ -1007,9 +959,7 @@ class SkipList { } } - Address StartFor(Address addr) { - return starts_[RegionNumber(addr)]; - } + Address StartFor(Address addr) { return starts_[RegionNumber(addr)]; } void AddObject(Address addr, int size) { int start_region = RegionNumber(addr); @@ -1062,11 +1012,11 @@ class MemoryAllocator { void TearDown(); - Page* AllocatePage( - intptr_t size, PagedSpace* owner, Executability executable); + Page* AllocatePage(intptr_t size, PagedSpace* owner, + Executability executable); - LargePage* AllocateLargePage( - intptr_t object_size, Space* owner, Executability executable); + LargePage* AllocateLargePage(intptr_t object_size, Space* owner, + Executability executable); void Free(MemoryChunk* chunk); @@ -1094,7 +1044,7 @@ class MemoryAllocator { // been allocated by this MemoryAllocator. V8_INLINE bool IsOutsideAllocatedSpace(const void* address) const { return address < lowest_ever_allocated_ || - address >= highest_ever_allocated_; + address >= highest_ever_allocated_; } #ifdef DEBUG @@ -1107,21 +1057,17 @@ class MemoryAllocator { // could be committed later by calling MemoryChunk::CommitArea. MemoryChunk* AllocateChunk(intptr_t reserve_area_size, intptr_t commit_area_size, - Executability executable, - Space* space); - - Address ReserveAlignedMemory(size_t requested, - size_t alignment, - VirtualMemory* controller); - Address AllocateAlignedMemory(size_t reserve_size, - size_t commit_size, - size_t alignment, - Executability executable, - VirtualMemory* controller); + Executability executable, Space* space); + + Address ReserveAlignedMemory(size_t requested, size_t alignment, + base::VirtualMemory* controller); + Address AllocateAlignedMemory(size_t reserve_size, size_t commit_size, + size_t alignment, Executability executable, + base::VirtualMemory* controller); bool CommitMemory(Address addr, size_t size, Executability executable); - void FreeMemory(VirtualMemory* reservation, Executability executable); + void FreeMemory(base::VirtualMemory* reservation, Executability executable); void FreeMemory(Address addr, size_t size, Executability executable); // Commit a contiguous block of memory from the initial chunk. Assumes that @@ -1140,19 +1086,15 @@ class MemoryAllocator { // filling it up with a recognizable non-NULL bit pattern. 
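
// SkipList (above) is a per-page table: each fixed-size region of the page
// remembers the start of an object laid down in it, so a lookup for an
// interior address can begin a short linear scan at StartFor(addr) instead of
// walking the page from the top. Simplified sketch with assumed region and
// page sizes; v8's real constants differ.
#include <cstdint>

const int kRegionSizeLog2 = 13;                         // assumed 8 KB regions
const int kRegionCount = (1 << 20) >> kRegionSizeLog2;  // assumed 1 MB page

struct SkipListStub {
  uintptr_t starts_[kRegionCount] = {};  // recorded object start, per region

  static int RegionNumber(uintptr_t page_offset) {
    return static_cast<int>(page_offset >> kRegionSizeLog2);
  }
  void AddObject(uintptr_t offset, int size) {
    // Every region the object touches remembers this object's start.
    int last = RegionNumber(offset + size - 1);
    for (int r = RegionNumber(offset); r <= last; r++) starts_[r] = offset;
  }
  uintptr_t StartFor(uintptr_t offset) { return starts_[RegionNumber(offset)]; }
};
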
void ZapBlock(Address start, size_t size); - void PerformAllocationCallback(ObjectSpace space, - AllocationAction action, + void PerformAllocationCallback(ObjectSpace space, AllocationAction action, size_t size); void AddMemoryAllocationCallback(MemoryAllocationCallback callback, - ObjectSpace space, - AllocationAction action); + ObjectSpace space, AllocationAction action); - void RemoveMemoryAllocationCallback( - MemoryAllocationCallback callback); + void RemoveMemoryAllocationCallback(MemoryAllocationCallback callback); - bool MemoryAllocationCallbackRegistered( - MemoryAllocationCallback callback); + bool MemoryAllocationCallbackRegistered(MemoryAllocationCallback callback); static int CodePageGuardStartOffset(); @@ -1166,9 +1108,8 @@ class MemoryAllocator { return CodePageAreaEndOffset() - CodePageAreaStartOffset(); } - MUST_USE_RESULT bool CommitExecutableMemory(VirtualMemory* vm, - Address start, - size_t commit_size, + MUST_USE_RESULT bool CommitExecutableMemory(base::VirtualMemory* vm, + Address start, size_t commit_size, size_t reserved_size); private: @@ -1196,16 +1137,14 @@ class MemoryAllocator { MemoryAllocationCallbackRegistration(MemoryAllocationCallback callback, ObjectSpace space, AllocationAction action) - : callback(callback), space(space), action(action) { - } + : callback(callback), space(space), action(action) {} MemoryAllocationCallback callback; ObjectSpace space; AllocationAction action; }; // A List of callback that are triggered when memory is allocated or free'd - List<MemoryAllocationCallbackRegistration> - memory_allocation_callbacks_; + List<MemoryAllocationCallbackRegistration> memory_allocation_callbacks_; // Initializes pages in a chunk. Returns the first page address. // This function and GetChunkId() are provided for the mark-compact @@ -1233,7 +1172,7 @@ class MemoryAllocator { class ObjectIterator : public Malloced { public: - virtual ~ObjectIterator() { } + virtual ~ObjectIterator() {} virtual HeapObject* next_object() = 0; }; @@ -1248,7 +1187,7 @@ class ObjectIterator : public Malloced { // If objects are allocated in the page during iteration the iterator may // or may not iterate over those objects. The caller must create a new // iterator in order to be sure to visit these new objects. -class HeapObjectIterator: public ObjectIterator { +class HeapObjectIterator : public ObjectIterator { public: // Creates a new object iterator in a given space. // If the size function is not given, the iterator calls the default @@ -1268,15 +1207,13 @@ class HeapObjectIterator: public ObjectIterator { return NULL; } - virtual HeapObject* next_object() { - return Next(); - } + virtual HeapObject* next_object() { return Next(); } private: enum PageMode { kOnePageOnly, kAllPagesInSpace }; - Address cur_addr_; // Current iteration point. - Address cur_end_; // End iteration point. + Address cur_addr_; // Current iteration point. + Address cur_end_; // End iteration point. HeapObjectCallback size_func_; // Size function or NULL. PagedSpace* space_; PageMode page_mode_; @@ -1289,11 +1226,8 @@ class HeapObjectIterator: public ObjectIterator { bool AdvanceToNextPage(); // Initializes fields. - inline void Initialize(PagedSpace* owner, - Address start, - Address end, - PageMode mode, - HeapObjectCallback size_func); + inline void Initialize(PagedSpace* owner, Address start, Address end, + PageMode mode, HeapObjectCallback size_func); }; @@ -1324,45 +1258,41 @@ class PageIterator BASE_EMBEDDED { // space. 
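
// AllocationInfo below is the classic bump-pointer pair: an allocation
// succeeds by advancing top_ as long as it stays below limit_, one compare
// and one add on the fast path. Freestanding sketch (stub type; v8's real
// path falls back to SlowAllocateRaw instead of returning null):
#include <cstddef>

struct BumpAllocator {
  char* top;    // next free byte
  char* limit;  // one past the usable area
  void* Allocate(size_t size) {
    if (size > static_cast<size_t>(limit - top)) return nullptr;  // slow path
    void* result = top;
    top += size;  // bump
    return result;
  }
};
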
class AllocationInfo { public: - AllocationInfo() : top_(NULL), limit_(NULL) { - } + AllocationInfo() : top_(NULL), limit_(NULL) {} INLINE(void set_top(Address top)) { - SLOW_ASSERT(top == NULL || - (reinterpret_cast<intptr_t>(top) & HeapObjectTagMask()) == 0); + SLOW_DCHECK(top == NULL || + (reinterpret_cast<intptr_t>(top) & HeapObjectTagMask()) == 0); top_ = top; } INLINE(Address top()) const { - SLOW_ASSERT(top_ == NULL || - (reinterpret_cast<intptr_t>(top_) & HeapObjectTagMask()) == 0); + SLOW_DCHECK(top_ == NULL || + (reinterpret_cast<intptr_t>(top_) & HeapObjectTagMask()) == 0); return top_; } - Address* top_address() { - return &top_; - } + Address* top_address() { return &top_; } INLINE(void set_limit(Address limit)) { - SLOW_ASSERT(limit == NULL || - (reinterpret_cast<intptr_t>(limit) & HeapObjectTagMask()) == 0); + SLOW_DCHECK(limit == NULL || + (reinterpret_cast<intptr_t>(limit) & HeapObjectTagMask()) == 0); limit_ = limit; } INLINE(Address limit()) const { - SLOW_ASSERT(limit_ == NULL || - (reinterpret_cast<intptr_t>(limit_) & HeapObjectTagMask()) == 0); + SLOW_DCHECK(limit_ == NULL || + (reinterpret_cast<intptr_t>(limit_) & HeapObjectTagMask()) == + 0); return limit_; } - Address* limit_address() { - return &limit_; - } + Address* limit_address() { return &limit_; } #ifdef DEBUG bool VerifyPagedAllocation() { - return (Page::FromAllocationTop(top_) == Page::FromAllocationTop(limit_)) - && (top_ <= limit_); + return (Page::FromAllocationTop(top_) == Page::FromAllocationTop(limit_)) && + (top_ <= limit_); } #endif @@ -1427,7 +1357,7 @@ class AllocationStats BASE_EMBEDDED { if (capacity_ > max_capacity_) { max_capacity_ = capacity_; } - ASSERT(size_ >= 0); + DCHECK(size_ >= 0); } // Shrink the space by removing available bytes. Since shrinking is done @@ -1436,26 +1366,25 @@ class AllocationStats BASE_EMBEDDED { void ShrinkSpace(int size_in_bytes) { capacity_ -= size_in_bytes; size_ -= size_in_bytes; - ASSERT(size_ >= 0); + DCHECK(size_ >= 0); } // Allocate from available bytes (available -> size). void AllocateBytes(intptr_t size_in_bytes) { size_ += size_in_bytes; - ASSERT(size_ >= 0); + DCHECK(size_ >= 0); } // Free allocated bytes, making them available (size -> available). void DeallocateBytes(intptr_t size_in_bytes) { size_ -= size_in_bytes; - ASSERT(size_ >= 0); + DCHECK(size_ >= 0); } // Waste free bytes (available -> waste). void WasteBytes(int size_in_bytes) { - size_ -= size_in_bytes; + DCHECK(size_in_bytes >= 0); waste_ += size_in_bytes; - ASSERT(size_ >= 0); } private: @@ -1473,7 +1402,7 @@ class AllocationStats BASE_EMBEDDED { // (free-list node pointers have the heap object tag, and they have a map like // a heap object). They have a size and a next pointer. The next pointer is // the raw address of the next free list node (or NULL). -class FreeListNode: public HeapObject { +class FreeListNode : public HeapObject { public: // Obtain a free-list node from a raw address. This is not a cast because // it does not check nor require that the first word at the address is a map @@ -1512,10 +1441,7 @@ class FreeListNode: public HeapObject { // the end element of the linked list of free memory blocks. 
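
// As the FreeListNode comment above describes, a freed block is its own list
// node: the size and next pointer are written into the freed memory itself,
// so the free list needs no side storage. Minimal sketch (without the map
// word and heap-object tagging of the real FreeListNode):
#include <cstddef>

struct FreeNodeStub {
  size_t size;
  FreeNodeStub* next;
};

FreeNodeStub* free_list_head = nullptr;

void AddFreeBlock(void* start, size_t size) {
  FreeNodeStub* node = static_cast<FreeNodeStub*>(start);  // reuse freed bytes
  node->size = size;
  node->next = free_list_head;  // push onto the singly linked list
  free_list_head = node;
}
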
class FreeListCategory { public: - FreeListCategory() : - top_(0), - end_(NULL), - available_(0) {} + FreeListCategory() : top_(0), end_(NULL), available_(0) {} intptr_t Concatenate(FreeListCategory* category); @@ -1523,8 +1449,8 @@ class FreeListCategory { void Free(FreeListNode* node, int size_in_bytes); - FreeListNode* PickNodeFromList(int *node_size); - FreeListNode* PickNodeFromList(int size_in_bytes, int *node_size); + FreeListNode* PickNodeFromList(int* node_size); + FreeListNode* PickNodeFromList(int size_in_bytes, int* node_size); intptr_t EvictFreeListItemsInList(Page* p); bool ContainsPageFreeListItemsInList(Page* p); @@ -1532,11 +1458,11 @@ class FreeListCategory { void RepairFreeList(Heap* heap); FreeListNode* top() const { - return reinterpret_cast<FreeListNode*>(NoBarrier_Load(&top_)); + return reinterpret_cast<FreeListNode*>(base::NoBarrier_Load(&top_)); } void set_top(FreeListNode* top) { - NoBarrier_Store(&top_, reinterpret_cast<AtomicWord>(top)); + base::NoBarrier_Store(&top_, reinterpret_cast<base::AtomicWord>(top)); } FreeListNode** GetEndAddress() { return &end_; } @@ -1547,11 +1473,9 @@ class FreeListCategory { int available() const { return available_; } void set_available(int available) { available_ = available; } - Mutex* mutex() { return &mutex_; } + base::Mutex* mutex() { return &mutex_; } - bool IsEmpty() { - return top() == 0; - } + bool IsEmpty() { return top() == 0; } #ifdef DEBUG intptr_t SumFreeList(); @@ -1560,9 +1484,9 @@ class FreeListCategory { private: // top_ points to the top FreeListNode* in the free list category. - AtomicWord top_; + base::AtomicWord top_; FreeListNode* end_; - Mutex mutex_; + base::Mutex mutex_; // Total available bytes in all blocks of this free list category. int available_; @@ -1615,6 +1539,21 @@ class FreeList { // aligned, and the size should be a non-zero multiple of the word size. int Free(Address start, int size_in_bytes); + // This method returns how much memory can be allocated after freeing + // maximum_freed memory. + static inline int GuaranteedAllocatable(int maximum_freed) { + if (maximum_freed < kSmallListMin) { + return 0; + } else if (maximum_freed <= kSmallListMax) { + return kSmallAllocationMax; + } else if (maximum_freed <= kMediumListMax) { + return kMediumAllocationMax; + } else if (maximum_freed <= kLargeListMax) { + return kLargeAllocationMax; + } + return maximum_freed; + } + // Allocate a block of size 'size_in_bytes' from the free list. The block // is unitialized. A failure is returned if no block is available. The // number of bytes lost to fragmentation is returned in the output parameter @@ -1672,11 +1611,11 @@ class FreeList { class AllocationResult { public: // Implicit constructor from Object*. 
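
// AllocationResult (from here on) folds success and failure into one value:
// the implicit Object* constructor is the success path, and Retry(space)
// records which space must be grown or collected before retrying. Sketch of
// the caller-side pattern implied by IsRetry()/RetrySpace(), using stand-in
// types and a hypothetical collect() hook:
struct ObjectStub {};
enum SpaceIdStub { kNewSpaceId, kOldSpaceId };

struct ResultStub {
  ObjectStub* object;  // non-null on success
  SpaceIdStub retry_space;
  bool IsRetry() const { return object == nullptr; }
};

ObjectStub* AllocateOrRetry(ResultStub (*try_alloc)(),
                            void (*collect)(SpaceIdStub)) {
  ResultStub r = try_alloc();
  if (!r.IsRetry()) return r.object;
  collect(r.retry_space);     // free memory where the allocation failed
  return try_alloc().object;  // second attempt; the real heap loops carefully
}
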
- AllocationResult(Object* object) : object_(object), // NOLINT - retry_space_(INVALID_SPACE) { } + AllocationResult(Object* object) // NOLINT + : object_(object), + retry_space_(INVALID_SPACE) {} - AllocationResult() : object_(NULL), - retry_space_(INVALID_SPACE) { } + AllocationResult() : object_(NULL), retry_space_(INVALID_SPACE) {} static inline AllocationResult Retry(AllocationSpace space = NEW_SPACE) { return AllocationResult(space); @@ -1697,13 +1636,13 @@ class AllocationResult { } AllocationSpace RetrySpace() { - ASSERT(IsRetry()); + DCHECK(IsRetry()); return retry_space_; } private: - explicit AllocationResult(AllocationSpace space) : object_(NULL), - retry_space_(space) { } + explicit AllocationResult(AllocationSpace space) + : object_(NULL), retry_space_(space) {} Object* object_; AllocationSpace retry_space_; @@ -1713,9 +1652,7 @@ class AllocationResult { class PagedSpace : public Space { public: // Creates a space with a maximum capacity, and an id. - PagedSpace(Heap* heap, - intptr_t max_capacity, - AllocationSpace id, + PagedSpace(Heap* heap, intptr_t max_capacity, AllocationSpace id, Executability executable); virtual ~PagedSpace() {} @@ -1819,9 +1756,7 @@ class PagedSpace : public Space { Address limit() { return allocation_info_.limit(); } // The allocation top address. - Address* allocation_top_address() { - return allocation_info_.top_address(); - } + Address* allocation_top_address() { return allocation_info_.top_address(); } // The allocation limit address. Address* allocation_limit_address() { @@ -1838,17 +1773,16 @@ class PagedSpace : public Space { // no attempt to add area to free list is made. int Free(Address start, int size_in_bytes) { int wasted = free_list_.Free(start, size_in_bytes); - accounting_stats_.DeallocateBytes(size_in_bytes - wasted); + accounting_stats_.DeallocateBytes(size_in_bytes); + accounting_stats_.WasteBytes(wasted); return size_in_bytes - wasted; } - void ResetFreeList() { - free_list_.Reset(); - } + void ResetFreeList() { free_list_.Reset(); } // Set space allocation info. void SetTopAndLimit(Address top, Address limit) { - ASSERT(top == limit || + DCHECK(top == limit || Page::FromAddress(top) == Page::FromAddress(limit - 1)); MemoryChunk::UpdateHighWaterMark(allocation_info_.top()); allocation_info_.set_top(top); @@ -1864,9 +1798,7 @@ class PagedSpace : public Space { SetTopAndLimit(NULL, NULL); } - void Allocate(int bytes) { - accounting_stats_.AllocateBytes(bytes); - } + void Allocate(int bytes) { accounting_stats_.AllocateBytes(bytes); } void IncreaseCapacity(int size); @@ -1898,38 +1830,31 @@ class PagedSpace : public Space { static void ResetCodeStatistics(Isolate* isolate); #endif - bool was_swept_conservatively() { return was_swept_conservatively_; } - void set_was_swept_conservatively(bool b) { was_swept_conservatively_ = b; } + bool swept_precisely() { return swept_precisely_; } + void set_swept_precisely(bool b) { swept_precisely_ = b; } // Evacuation candidates are swept by evacuator. Needs to return a valid // result before _and_ after evacuation has finished. 
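
// The SWEEPING_* states earlier in this header are handed out with an atomic
// compare-and-swap (TryParallelSweeping), so exactly one sweeper thread can
// claim a SWEEPING_PENDING page. Equivalent sketch using std::atomic in place
// of base::Acquire_CompareAndSwap:
#include <atomic>

enum SweepStateStub { kDone, kFinalize, kInProgress, kPending };

struct SweptPageStub {
  std::atomic<int> state{kPending};
  bool TryClaimForSweeping() {
    int expected = kPending;
    // Only the thread that flips PENDING -> IN_PROGRESS may sweep this page.
    return state.compare_exchange_strong(expected, kInProgress);
  }
};
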
static bool ShouldBeSweptBySweeperThreads(Page* p) { return !p->IsEvacuationCandidate() && - !p->IsFlagSet(Page::RESCAN_ON_EVACUATION) && - !p->WasSweptPrecisely(); + !p->IsFlagSet(Page::RESCAN_ON_EVACUATION) && !p->WasSweptPrecisely(); } - void IncrementUnsweptFreeBytes(intptr_t by) { - unswept_free_bytes_ += by; - } + void IncrementUnsweptFreeBytes(intptr_t by) { unswept_free_bytes_ += by; } void IncreaseUnsweptFreeBytes(Page* p) { - ASSERT(ShouldBeSweptBySweeperThreads(p)); + DCHECK(ShouldBeSweptBySweeperThreads(p)); unswept_free_bytes_ += (p->area_size() - p->LiveBytes()); } - void DecrementUnsweptFreeBytes(intptr_t by) { - unswept_free_bytes_ -= by; - } + void DecrementUnsweptFreeBytes(intptr_t by) { unswept_free_bytes_ -= by; } void DecreaseUnsweptFreeBytes(Page* p) { - ASSERT(ShouldBeSweptBySweeperThreads(p)); + DCHECK(ShouldBeSweptBySweeperThreads(p)); unswept_free_bytes_ -= (p->area_size() - p->LiveBytes()); } - void ResetUnsweptFreeBytes() { - unswept_free_bytes_ = 0; - } + void ResetUnsweptFreeBytes() { unswept_free_bytes_ = 0; } // This function tries to steal size_in_bytes memory from the sweeper threads // free-lists. If it does not succeed stealing enough memory, it will wait @@ -1937,13 +1862,9 @@ class PagedSpace : public Space { // It returns true when sweeping is completed and false otherwise. bool EnsureSweeperProgress(intptr_t size_in_bytes); - void set_end_of_unswept_pages(Page* page) { - end_of_unswept_pages_ = page; - } + void set_end_of_unswept_pages(Page* page) { end_of_unswept_pages_ = page; } - Page* end_of_unswept_pages() { - return end_of_unswept_pages_; - } + Page* end_of_unswept_pages() { return end_of_unswept_pages_; } Page* FirstPage() { return anchor_.next_page(); } Page* LastPage() { return anchor_.prev_page(); } @@ -1956,9 +1877,13 @@ class PagedSpace : public Space { int CountTotalPages(); // Return size of allocatable area on a page in this space. - inline int AreaSize() { - return area_size_; - } + inline int AreaSize() { return area_size_; } + + void CreateEmergencyMemory(); + void FreeEmergencyMemory(); + void UseEmergencyMemory(); + + bool HasEmergencyMemory() { return emergency_memory_ != NULL; } protected: FreeList* free_list() { return &free_list_; } @@ -1982,7 +1907,8 @@ class PagedSpace : public Space { // Normal allocation information. AllocationInfo allocation_info_; - bool was_swept_conservatively_; + // This space was swept precisely, hence it is iterable. + bool swept_precisely_; // The number of free bytes which could be reclaimed by advancing the // concurrent sweeper threads. This is only an estimation because concurrent @@ -1994,6 +1920,12 @@ class PagedSpace : public Space { // end_of_unswept_pages_ page. Page* end_of_unswept_pages_; + // Emergency memory is the memory of a full page for a given space, allocated + // conservatively before evacuating a page. If compaction fails due to out + // of memory error the emergency memory can be used to complete compaction. + // If not used, the emergency memory is released after compaction. + MemoryChunk* emergency_memory_; + // Expands the space by allocating a fixed number of pages. Returns false if // it cannot allocate requested number of pages from OS, or if the hard heap // size limit has been hit. @@ -2003,8 +1935,14 @@ class PagedSpace : public Space { // address denoted by top in allocation_info_. inline HeapObject* AllocateLinearly(int size_in_bytes); + // If sweeping is still in progress try to sweep unswept pages. 
If that is + // not successful, wait for the sweeper threads and re-try free-list + // allocation. + MUST_USE_RESULT HeapObject* WaitForSweeperThreadsAndRetryAllocation( + int size_in_bytes); + // Slow path of AllocateRaw. This function is space-dependent. - MUST_USE_RESULT virtual HeapObject* SlowAllocateRaw(int size_in_bytes); + MUST_USE_RESULT HeapObject* SlowAllocateRaw(int size_in_bytes); friend class PageIterator; friend class MarkCompactCollector; @@ -2034,7 +1972,7 @@ class NumberAndSizeInfo BASE_EMBEDDED { // HistogramInfo class for recording a single "bar" of a histogram. This // class is used for collecting statistics to print to the log file. -class HistogramInfo: public NumberAndSizeInfo { +class HistogramInfo : public NumberAndSizeInfo { public: HistogramInfo() : NumberAndSizeInfo() {} @@ -2046,10 +1984,7 @@ class HistogramInfo: public NumberAndSizeInfo { }; -enum SemiSpaceId { - kFromSpace = 0, - kToSpace = 1 -}; +enum SemiSpaceId { kFromSpace = 0, kToSpace = 1 }; class SemiSpace; @@ -2060,9 +1995,9 @@ class NewSpacePage : public MemoryChunk { // GC related flags copied from from-space to to-space when // flipping semispaces. static const intptr_t kCopyOnFlipFlagsMask = - (1 << MemoryChunk::POINTERS_TO_HERE_ARE_INTERESTING) | - (1 << MemoryChunk::POINTERS_FROM_HERE_ARE_INTERESTING) | - (1 << MemoryChunk::SCAN_ON_SCAVENGE); + (1 << MemoryChunk::POINTERS_TO_HERE_ARE_INTERESTING) | + (1 << MemoryChunk::POINTERS_FROM_HERE_ARE_INTERESTING) | + (1 << MemoryChunk::SCAN_ON_SCAVENGE); static const int kAreaSize = Page::kMaxRegularHeapObjectSize; @@ -2070,36 +2005,28 @@ class NewSpacePage : public MemoryChunk { return static_cast<NewSpacePage*>(next_chunk()); } - inline void set_next_page(NewSpacePage* page) { - set_next_chunk(page); - } + inline void set_next_page(NewSpacePage* page) { set_next_chunk(page); } inline NewSpacePage* prev_page() const { return static_cast<NewSpacePage*>(prev_chunk()); } - inline void set_prev_page(NewSpacePage* page) { - set_prev_chunk(page); - } + inline void set_prev_page(NewSpacePage* page) { set_prev_chunk(page); } - SemiSpace* semi_space() { - return reinterpret_cast<SemiSpace*>(owner()); - } + SemiSpace* semi_space() { return reinterpret_cast<SemiSpace*>(owner()); } bool is_anchor() { return !this->InNewSpace(); } static bool IsAtStart(Address addr) { - return (reinterpret_cast<intptr_t>(addr) & Page::kPageAlignmentMask) - == kObjectStartOffset; + return (reinterpret_cast<intptr_t>(addr) & Page::kPageAlignmentMask) == + kObjectStartOffset; } static bool IsAtEnd(Address addr) { return (reinterpret_cast<intptr_t>(addr) & Page::kPageAlignmentMask) == 0; } - Address address() { - return reinterpret_cast<Address>(this); - } + Address address() { return reinterpret_cast<Address>(this); } // Finds the NewSpacePage containg the given address. static inline NewSpacePage* FromAddress(Address address_in_page) { @@ -2125,12 +2052,9 @@ class NewSpacePage : public MemoryChunk { private: // Create a NewSpacePage object that is only used as anchor // for the doubly-linked list of real pages. - explicit NewSpacePage(SemiSpace* owner) { - InitializeAsAnchor(owner); - } + explicit NewSpacePage(SemiSpace* owner) { InitializeAsAnchor(owner); } - static NewSpacePage* Initialize(Heap* heap, - Address start, + static NewSpacePage* Initialize(Heap* heap, Address start, SemiSpace* semi_space); // Intialize a fake NewSpacePage used as sentinel at the ends @@ -2154,12 +2078,12 @@ class SemiSpace : public Space { public: // Constructor. 
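
// NewSpacePage (above) classifies addresses with the page alignment mask: the
// low bits of an address are its offset within the page, so IsAtStart means
// offset == kObjectStartOffset (the first object slot) and IsAtEnd means
// offset == 0 (one past a page lands on the next page boundary). Sketch with
// assumed constants:
#include <cstdint>

const uintptr_t kNewPageSize = 1 << 20;  // assumed page size
const uintptr_t kNewPageMask = kNewPageSize - 1;
const uintptr_t kObjectsStart = 0x100;   // assumed header size

inline bool IsAtPageStart(uintptr_t a) {
  return (a & kNewPageMask) == kObjectsStart;
}
inline bool IsAtPageEnd(uintptr_t a) { return (a & kNewPageMask) == 0; }
inline uintptr_t PageContaining(uintptr_t a) { return a & ~kNewPageMask; }
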
SemiSpace(Heap* heap, SemiSpaceId semispace) - : Space(heap, NEW_SPACE, NOT_EXECUTABLE), - start_(NULL), - age_mark_(NULL), - id_(semispace), - anchor_(this), - current_page_(NULL) { } + : Space(heap, NEW_SPACE, NOT_EXECUTABLE), + start_(NULL), + age_mark_(NULL), + id_(semispace), + anchor_(this), + current_page_(NULL) {} // Sets up the semispace using the given chunk. void SetUp(Address start, int initial_capacity, int maximum_capacity); @@ -2183,24 +2107,18 @@ class SemiSpace : public Space { // Returns the start address of the first page of the space. Address space_start() { - ASSERT(anchor_.next_page() != &anchor_); + DCHECK(anchor_.next_page() != &anchor_); return anchor_.next_page()->area_start(); } // Returns the start address of the current page of the space. - Address page_low() { - return current_page_->area_start(); - } + Address page_low() { return current_page_->area_start(); } // Returns one past the end address of the space. - Address space_end() { - return anchor_.prev_page()->area_end(); - } + Address space_end() { return anchor_.prev_page()->area_end(); } // Returns one past the end address of the current page of the space. - Address page_high() { - return current_page_->area_end(); - } + Address page_high() { return current_page_->area_end(); } bool AdvancePage() { NewSpacePage* next_page = current_page_->next_page(); @@ -2219,8 +2137,8 @@ class SemiSpace : public Space { // True if the address is in the address range of this semispace (not // necessarily below the allocation pointer). bool Contains(Address a) { - return (reinterpret_cast<uintptr_t>(a) & address_mask_) - == reinterpret_cast<uintptr_t>(start_); + return (reinterpret_cast<uintptr_t>(a) & address_mask_) == + reinterpret_cast<uintptr_t>(start_); } // True if the object is a heap object in the address range of this @@ -2312,6 +2230,7 @@ class SemiSpace : public Space { friend class SemiSpaceIterator; friend class NewSpacePageIterator; + public: TRACK_MEMORY("SemiSpace") }; @@ -2343,7 +2262,7 @@ class SemiSpaceIterator : public ObjectIterator { if (NewSpacePage::IsAtEnd(current_)) { NewSpacePage* page = NewSpacePage::FromLimit(current_); page = page->next_page(); - ASSERT(!page->is_anchor()); + DCHECK(!page->is_anchor()); current_ = page->area_start(); if (current_ == limit_) return NULL; } @@ -2359,9 +2278,7 @@ class SemiSpaceIterator : public ObjectIterator { virtual HeapObject* next_object() { return Next(); } private: - void Initialize(Address start, - Address end, - HeapObjectCallback size_func); + void Initialize(Address start, Address end, HeapObjectCallback size_func); // The current iteration point. Address current_; @@ -2410,14 +2327,14 @@ class NewSpace : public Space { public: // Constructor. explicit NewSpace(Heap* heap) - : Space(heap, NEW_SPACE, NOT_EXECUTABLE), - to_space_(heap, kToSpace), - from_space_(heap, kFromSpace), - reservation_(), - inline_allocation_limit_step_(0) {} + : Space(heap, NEW_SPACE, NOT_EXECUTABLE), + to_space_(heap, kToSpace), + from_space_(heap, kFromSpace), + reservation_(), + inline_allocation_limit_step_(0) {} // Sets up the new space using the given chunk. - bool SetUp(int reserved_semispace_size_, int max_semispace_size); + bool SetUp(int reserved_semispace_size_, int max_semi_space_size); // Tears down the space. Heap memory was not allocated by the space, so it // is not deallocated here. 
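
// SemiSpace::Contains above needs no range compare: a semispace is one
// contiguous block whose start is aligned to its (power-of-two) size, so
// masking any address with address_mask_ yields start_ exactly when the
// address lies inside. Sketch under that alignment assumption:
#include <cstdint>

struct SemiSpaceStub {
  uintptr_t start;         // assumed aligned to the space's size
  uintptr_t address_mask;  // ~(size - 1)
  bool Contains(uintptr_t a) const { return (a & address_mask) == start; }
};
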
@@ -2441,8 +2358,8 @@ class NewSpace : public Space { // True if the address or object lies in the address range of either // semispace (not necessarily below the allocation pointer). bool Contains(Address a) { - return (reinterpret_cast<uintptr_t>(a) & address_mask_) - == reinterpret_cast<uintptr_t>(start_); + return (reinterpret_cast<uintptr_t>(a) & address_mask_) == + reinterpret_cast<uintptr_t>(start_); } bool Contains(Object* o) { @@ -2453,7 +2370,7 @@ class NewSpace : public Space { // Return the allocated bytes in the active semispace. virtual intptr_t Size() { return pages_used_ * NewSpacePage::kAreaSize + - static_cast<int>(top() - to_space_.page_low()); + static_cast<int>(top() - to_space_.page_low()); } // The same, but returning an int. We have to have the one that returns @@ -2463,13 +2380,13 @@ class NewSpace : public Space { // Return the current capacity of a semispace. intptr_t EffectiveCapacity() { - SLOW_ASSERT(to_space_.Capacity() == from_space_.Capacity()); + SLOW_DCHECK(to_space_.Capacity() == from_space_.Capacity()); return (to_space_.Capacity() / Page::kPageSize) * NewSpacePage::kAreaSize; } // Return the current capacity of a semispace. intptr_t Capacity() { - ASSERT(to_space_.Capacity() == from_space_.Capacity()); + DCHECK(to_space_.Capacity() == from_space_.Capacity()); return to_space_.Capacity(); } @@ -2482,43 +2399,43 @@ class NewSpace : public Space { // Return the total amount of memory committed for new space. intptr_t MaximumCommittedMemory() { return to_space_.MaximumCommittedMemory() + - from_space_.MaximumCommittedMemory(); + from_space_.MaximumCommittedMemory(); } // Approximate amount of physical memory committed for this space. size_t CommittedPhysicalMemory(); // Return the available bytes without growing. - intptr_t Available() { - return Capacity() - Size(); - } + intptr_t Available() { return Capacity() - Size(); } // Return the maximum capacity of a semispace. int MaximumCapacity() { - ASSERT(to_space_.MaximumCapacity() == from_space_.MaximumCapacity()); + DCHECK(to_space_.MaximumCapacity() == from_space_.MaximumCapacity()); return to_space_.MaximumCapacity(); } + bool IsAtMaximumCapacity() { return Capacity() == MaximumCapacity(); } + // Returns the initial capacity of a semispace. int InitialCapacity() { - ASSERT(to_space_.InitialCapacity() == from_space_.InitialCapacity()); + DCHECK(to_space_.InitialCapacity() == from_space_.InitialCapacity()); return to_space_.InitialCapacity(); } // Return the address of the allocation pointer in the active semispace. Address top() { - ASSERT(to_space_.current_page()->ContainsLimit(allocation_info_.top())); + DCHECK(to_space_.current_page()->ContainsLimit(allocation_info_.top())); return allocation_info_.top(); } void set_top(Address top) { - ASSERT(to_space_.current_page()->ContainsLimit(top)); + DCHECK(to_space_.current_page()->ContainsLimit(top)); allocation_info_.set_top(top); } // Return the address of the allocation pointer limit in the active semispace. 
Address limit() { - ASSERT(to_space_.current_page()->ContainsLimit(allocation_info_.limit())); + DCHECK(to_space_.current_page()->ContainsLimit(allocation_info_.limit())); return allocation_info_.limit(); } @@ -2536,8 +2453,8 @@ class NewSpace : public Space { uintptr_t mask() { return address_mask_; } INLINE(uint32_t AddressToMarkbitIndex(Address addr)) { - ASSERT(Contains(addr)); - ASSERT(IsAligned(OffsetFrom(addr), kPointerSize) || + DCHECK(Contains(addr)); + DCHECK(IsAligned(OffsetFrom(addr), kPointerSize) || IsAligned(OffsetFrom(addr) - 1, kPointerSize)); return static_cast<uint32_t>(addr - start_) >> kPointerSizeLog2; } @@ -2547,9 +2464,7 @@ class NewSpace : public Space { } // The allocation top and limit address. - Address* allocation_top_address() { - return allocation_info_.top_address(); - } + Address* allocation_top_address() { return allocation_info_.top_address(); } // The allocation limit address. Address* allocation_limit_address() { @@ -2649,7 +2564,7 @@ class NewSpace : public Space { // The semispaces. SemiSpace to_space_; SemiSpace from_space_; - VirtualMemory reservation_; + base::VirtualMemory reservation_; int pages_used_; // Start address and bit mask for containment testing. @@ -2689,12 +2604,9 @@ class OldSpace : public PagedSpace { public: // Creates an old space object with a given maximum capacity. // The constructor does not allocate pages from OS. - OldSpace(Heap* heap, - intptr_t max_capacity, - AllocationSpace id, + OldSpace(Heap* heap, intptr_t max_capacity, AllocationSpace id, Executability executable) - : PagedSpace(heap, max_capacity, id, executable) { - } + : PagedSpace(heap, max_capacity, id, executable) {} public: TRACK_MEMORY("OldSpace") @@ -2703,10 +2615,10 @@ class OldSpace : public PagedSpace { // For contiguous spaces, top should be in the space (or at the end) and limit // should be the end of the space. -#define ASSERT_SEMISPACE_ALLOCATION_INFO(info, space) \ - SLOW_ASSERT((space).page_low() <= (info).top() \ - && (info).top() <= (space).page_high() \ - && (info).limit() <= (space).page_high()) +#define DCHECK_SEMISPACE_ALLOCATION_INFO(info, space) \ + SLOW_DCHECK((space).page_low() <= (info).top() && \ + (info).top() <= (space).page_high() && \ + (info).limit() <= (space).page_high()) // ----------------------------------------------------------------------------- @@ -2717,8 +2629,7 @@ class MapSpace : public PagedSpace { // Creates a map space object with a maximum capacity. MapSpace(Heap* heap, intptr_t max_capacity, AllocationSpace id) : PagedSpace(heap, max_capacity, id, NOT_EXECUTABLE), - max_map_space_pages_(kMaxMapPageIndex - 1) { - } + max_map_space_pages_(kMaxMapPageIndex - 1) {} // Given an index, returns the page address. // TODO(1600): this limit is artifical just to keep code compilable @@ -2757,8 +2668,7 @@ class CellSpace : public PagedSpace { public: // Creates a property cell space object with a maximum capacity. CellSpace(Heap* heap, intptr_t max_capacity, AllocationSpace id) - : PagedSpace(heap, max_capacity, id, NOT_EXECUTABLE) { - } + : PagedSpace(heap, max_capacity, id, NOT_EXECUTABLE) {} virtual int RoundSizeDownToObjectAlignment(int size) { if (IsPowerOf2(Cell::kSize)) { @@ -2782,10 +2692,8 @@ class CellSpace : public PagedSpace { class PropertyCellSpace : public PagedSpace { public: // Creates a property cell space object with a maximum capacity. 
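
// RoundSizeDownToObjectAlignment above (and PropertyCellSpace's twin just
// below) picks between two equivalent round-down forms: a mask when the cell
// size is a power of two, a modulo otherwise. Worked standalone version:
inline int RoundDownToCell(int size, int cell_size) {
  bool power_of_two = (cell_size & (cell_size - 1)) == 0;
  return power_of_two ? (size & ~(cell_size - 1))   // single-AND fast form
                      : (size - size % cell_size);  // general form
}
// e.g. RoundDownToCell(100, 32) == 96 by either branch.
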
- PropertyCellSpace(Heap* heap, intptr_t max_capacity, - AllocationSpace id) - : PagedSpace(heap, max_capacity, id, NOT_EXECUTABLE) { - } + PropertyCellSpace(Heap* heap, intptr_t max_capacity, AllocationSpace id) + : PagedSpace(heap, max_capacity, id, NOT_EXECUTABLE) {} virtual int RoundSizeDownToObjectAlignment(int size) { if (IsPowerOf2(PropertyCell::kSize)) { @@ -2828,34 +2736,24 @@ class LargeObjectSpace : public Space { // Shared implementation of AllocateRaw, AllocateRawCode and // AllocateRawFixedArray. - MUST_USE_RESULT AllocationResult AllocateRaw(int object_size, - Executability executable); + MUST_USE_RESULT AllocationResult + AllocateRaw(int object_size, Executability executable); // Available bytes for objects in this space. inline intptr_t Available(); - virtual intptr_t Size() { - return size_; - } + virtual intptr_t Size() { return size_; } - virtual intptr_t SizeOfObjects() { - return objects_size_; - } + virtual intptr_t SizeOfObjects() { return objects_size_; } - intptr_t MaximumCommittedMemory() { - return maximum_committed_; - } + intptr_t MaximumCommittedMemory() { return maximum_committed_; } - intptr_t CommittedMemory() { - return Size(); - } + intptr_t CommittedMemory() { return Size(); } // Approximate amount of physical memory committed for this space. size_t CommittedPhysicalMemory(); - int PageCount() { - return page_count_; - } + int PageCount() { return page_count_; } // Finds an object for a given address, returns a Smi if it is not found. // The function iterates through all objects in this space, may be slow. @@ -2894,8 +2792,8 @@ class LargeObjectSpace : public Space { intptr_t maximum_committed_; // The head of the linked list of large object chunks. LargePage* first_page_; - intptr_t size_; // allocated bytes - int page_count_; // number of chunks + intptr_t size_; // allocated bytes + int page_count_; // number of chunks intptr_t objects_size_; // size of objects // Map MemoryChunk::kAlignment-aligned chunks to large pages covering them HashMap chunk_map_; @@ -2907,7 +2805,7 @@ class LargeObjectSpace : public Space { }; -class LargeObjectIterator: public ObjectIterator { +class LargeObjectIterator : public ObjectIterator { public: explicit LargeObjectIterator(LargeObjectSpace* space); LargeObjectIterator(LargeObjectSpace* space, HeapObjectCallback size_func); @@ -2971,12 +2869,7 @@ class PointerChunkIterator BASE_EMBEDDED { private: - enum State { - kOldPointerState, - kMapState, - kLargeObjectState, - kFinishedState - }; + enum State { kOldPointerState, kMapState, kLargeObjectState, kFinishedState }; State state_; PageIterator old_pointer_iterator_; PageIterator map_iterator_; @@ -2998,8 +2891,7 @@ struct CommentStatistic { static const int kMaxComments = 64; }; #endif +} +} // namespace v8::internal - -} } // namespace v8::internal - -#endif // V8_SPACES_H_ +#endif // V8_HEAP_SPACES_H_ diff --git a/deps/v8/src/store-buffer-inl.h b/deps/v8/src/heap/store-buffer-inl.h similarity index 75% rename from deps/v8/src/store-buffer-inl.h rename to deps/v8/src/heap/store-buffer-inl.h index 709415e45b3..1606465a094 100644 --- a/deps/v8/src/store-buffer-inl.h +++ b/deps/v8/src/heap/store-buffer-inl.h @@ -5,7 +5,7 @@ #ifndef V8_STORE_BUFFER_INL_H_ #define V8_STORE_BUFFER_INL_H_ -#include "store-buffer.h" +#include "src/heap/store-buffer.h" namespace v8 { namespace internal { @@ -16,24 +16,24 @@ Address StoreBuffer::TopAddress() { void StoreBuffer::Mark(Address addr) { - ASSERT(!heap_->cell_space()->Contains(addr)); - ASSERT(!heap_->code_space()->Contains(addr)); 
- ASSERT(!heap_->old_data_space()->Contains(addr)); + DCHECK(!heap_->cell_space()->Contains(addr)); + DCHECK(!heap_->code_space()->Contains(addr)); + DCHECK(!heap_->old_data_space()->Contains(addr)); Address* top = reinterpret_cast<Address*>(heap_->store_buffer_top()); *top++ = addr; heap_->public_set_store_buffer_top(top); if ((reinterpret_cast<uintptr_t>(top) & kStoreBufferOverflowBit) != 0) { - ASSERT(top == limit_); + DCHECK(top == limit_); Compact(); } else { - ASSERT(top < limit_); + DCHECK(top < limit_); } } void StoreBuffer::EnterDirectlyIntoStoreBuffer(Address addr) { if (store_buffer_rebuilding_enabled_) { - SLOW_ASSERT(!heap_->cell_space()->Contains(addr) && + SLOW_DCHECK(!heap_->cell_space()->Contains(addr) && !heap_->code_space()->Contains(addr) && !heap_->old_data_space()->Contains(addr) && !heap_->new_space()->Contains(addr)); @@ -43,9 +43,8 @@ void StoreBuffer::EnterDirectlyIntoStoreBuffer(Address addr) { old_buffer_is_sorted_ = false; old_buffer_is_filtered_ = false; if (top >= old_limit_) { - ASSERT(callback_ != NULL); - (*callback_)(heap_, - MemoryChunk::FromAnyPointerAddress(heap_, addr), + DCHECK(callback_ != NULL); + (*callback_)(heap_, MemoryChunk::FromAnyPointerAddress(heap_, addr), kStoreBufferFullEvent); } } @@ -58,8 +57,7 @@ void StoreBuffer::ClearDeadObject(HeapObject* object) { map_field = NULL; } } - - -} } // namespace v8::internal +} +} // namespace v8::internal #endif // V8_STORE_BUFFER_INL_H_ diff --git a/deps/v8/src/store-buffer.cc b/deps/v8/src/heap/store-buffer.cc similarity index 61% rename from deps/v8/src/store-buffer.cc rename to deps/v8/src/heap/store-buffer.cc index b0f7d2f8030..b48e1a40493 100644 --- a/deps/v8/src/store-buffer.cc +++ b/deps/v8/src/heap/store-buffer.cc @@ -2,13 +2,13 @@ // Use of this source code is governed by a BSD-style license that can be // found in the LICENSE file. -#include "store-buffer.h" - #include <algorithm> -#include "v8.h" -#include "counters.h" -#include "store-buffer-inl.h" +#include "src/v8.h" + +#include "src/base/atomicops.h" +#include "src/counters.h" +#include "src/heap/store-buffer-inl.h" namespace v8 { namespace internal { @@ -30,12 +30,11 @@ StoreBuffer::StoreBuffer(Heap* heap) virtual_memory_(NULL), hash_set_1_(NULL), hash_set_2_(NULL), - hash_sets_are_empty_(true) { -} + hash_sets_are_empty_(true) {} void StoreBuffer::SetUp() { - virtual_memory_ = new VirtualMemory(kStoreBufferSize * 3); + virtual_memory_ = new base::VirtualMemory(kStoreBufferSize * 3); uintptr_t start_as_int = reinterpret_cast<uintptr_t>(virtual_memory_->address()); start_ = @@ -43,33 +42,33 @@ void StoreBuffer::SetUp() { limit_ = start_ + (kStoreBufferSize / kPointerSize); old_virtual_memory_ = - new VirtualMemory(kOldStoreBufferLength * kPointerSize); + new base::VirtualMemory(kOldStoreBufferLength * kPointerSize); old_top_ = old_start_ = reinterpret_cast<Address*>(old_virtual_memory_->address()); // Don't know the alignment requirements of the OS, but it is certainly not // less than 0xfff. 
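
// StoreBuffer::Mark above detects a full buffer without a bounds compare: the
// DCHECKs in SetUp pin limit_ as the first address with
// kStoreBufferOverflowBit set (the 3x over-reservation leaves room to align
// start_ accordingly), so a single AND on the bumped top pointer doubles as
// the overflow test. Sketch with assumed sizes:
#include <cstdint>

const uintptr_t kBufBytes = 1 << 16;       // assumed buffer size
const uintptr_t kOverflowBit = kBufBytes;  // bit just above the buffer

// If the buffer start is aligned to 2 * kBufBytes, every slot in
// [start, start + kBufBytes) has kOverflowBit clear, and the bit turns on
// exactly at start + kBufBytes == limit.
inline bool BufferFull(uintptr_t bumped_top) {
  return (bumped_top & kOverflowBit) != 0;  // true only when top == limit
}
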
- ASSERT((reinterpret_cast<uintptr_t>(old_start_) & 0xfff) == 0); - int initial_length = static_cast<int>(OS::CommitPageSize() / kPointerSize); - ASSERT(initial_length > 0); - ASSERT(initial_length <= kOldStoreBufferLength); + DCHECK((reinterpret_cast<uintptr_t>(old_start_) & 0xfff) == 0); + int initial_length = + static_cast<int>(base::OS::CommitPageSize() / kPointerSize); + DCHECK(initial_length > 0); + DCHECK(initial_length <= kOldStoreBufferLength); old_limit_ = old_start_ + initial_length; old_reserved_limit_ = old_start_ + kOldStoreBufferLength; - CHECK(old_virtual_memory_->Commit( - reinterpret_cast<void*>(old_start_), - (old_limit_ - old_start_) * kPointerSize, - false)); + CHECK(old_virtual_memory_->Commit(reinterpret_cast<void*>(old_start_), + (old_limit_ - old_start_) * kPointerSize, + false)); - ASSERT(reinterpret_cast<Address>(start_) >= virtual_memory_->address()); - ASSERT(reinterpret_cast<Address>(limit_) >= virtual_memory_->address()); + DCHECK(reinterpret_cast<Address>(start_) >= virtual_memory_->address()); + DCHECK(reinterpret_cast<Address>(limit_) >= virtual_memory_->address()); Address* vm_limit = reinterpret_cast<Address*>( reinterpret_cast<char*>(virtual_memory_->address()) + - virtual_memory_->size()); - ASSERT(start_ <= vm_limit); - ASSERT(limit_ <= vm_limit); + virtual_memory_->size()); + DCHECK(start_ <= vm_limit); + DCHECK(limit_ <= vm_limit); USE(vm_limit); - ASSERT((reinterpret_cast<uintptr_t>(limit_) & kStoreBufferOverflowBit) != 0); - ASSERT((reinterpret_cast<uintptr_t>(limit_ - 1) & kStoreBufferOverflowBit) == + DCHECK((reinterpret_cast<uintptr_t>(limit_) & kStoreBufferOverflowBit) != 0); + DCHECK((reinterpret_cast<uintptr_t>(limit_ - 1) & kStoreBufferOverflowBit) == 0); CHECK(virtual_memory_->Commit(reinterpret_cast<Address>(start_), @@ -106,7 +105,7 @@ void StoreBuffer::Uniq() { // Remove adjacent duplicates and cells that do not point at new space. Address previous = NULL; Address* write = old_start_; - ASSERT(may_move_store_buffer_entries_); + DCHECK(may_move_store_buffer_entries_); for (Address* read = old_start_; read < old_top_; read++) { Address current = *read; if (current != previous) { @@ -130,15 +129,14 @@ void StoreBuffer::EnsureSpace(intptr_t space_needed) { old_limit_ < old_reserved_limit_) { size_t grow = old_limit_ - old_start_; // Double size. 
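The kStoreBufferOverflowBit assertions above rely on the buffer's placement: limit_ is the only top value with that bit set, so Mark() can detect a full buffer with a single bitwise AND instead of a pointer comparison. A minimal standalone sketch of the trick, using hypothetical sizes rather than the real V8 constants:

#include <cassert>
#include <cstdint>

const uintptr_t kBufferBytes = 1 << 16;       // hypothetical buffer size (power of two)
const uintptr_t kOverflowBit = kBufferBytes;  // bit that flips exactly at one-past-the-end

// If start is aligned to 2 * kBufferBytes, every in-buffer top has the bit
// clear, and only top == start + kBufferBytes (the limit) has it set.
bool BufferFull(uintptr_t start, uintptr_t top) {
  assert((start & (2 * kBufferBytes - 1)) == 0);  // alignment assumption
  return (top & kOverflowBit) != 0;
}

int main() {
  uintptr_t start = 2 * kBufferBytes;               // aligned to 2 * kBufferBytes
  assert(!BufferFull(start, start));                // empty buffer
  assert(!BufferFull(start, start + kBufferBytes - 8));
  assert(BufferFull(start, start + kBufferBytes));  // top == limit
  return 0;
}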
CHECK(old_virtual_memory_->Commit(reinterpret_cast<void*>(old_limit_), - grow * kPointerSize, - false)); + grow * kPointerSize, false)); old_limit_ += grow; } if (SpaceAvailable(space_needed)) return; if (old_buffer_is_filtered_) return; - ASSERT(may_move_store_buffer_entries_); + DCHECK(may_move_store_buffer_entries_); Compact(); old_buffer_is_filtered_ = true; @@ -165,17 +163,16 @@ void StoreBuffer::EnsureSpace(intptr_t space_needed) { static const struct Samples { int prime_sample_step; int threshold; - } samples[kSampleFinenesses] = { - { 97, ((Page::kPageSize / kPointerSize) / 97) / 8 }, - { 23, ((Page::kPageSize / kPointerSize) / 23) / 16 }, - { 7, ((Page::kPageSize / kPointerSize) / 7) / 32 }, - { 3, ((Page::kPageSize / kPointerSize) / 3) / 256 }, - { 1, 0} - }; + } samples[kSampleFinenesses] = { + {97, ((Page::kPageSize / kPointerSize) / 97) / 8}, + {23, ((Page::kPageSize / kPointerSize) / 23) / 16}, + {7, ((Page::kPageSize / kPointerSize) / 7) / 32}, + {3, ((Page::kPageSize / kPointerSize) / 3) / 256}, + {1, 0}}; for (int i = 0; i < kSampleFinenesses; i++) { ExemptPopularPages(samples[i].prime_sample_step, samples[i].threshold); // As a last resort we mark all pages as being exempt from the store buffer. - ASSERT(i != (kSampleFinenesses - 1) || old_top_ == old_start_); + DCHECK(i != (kSampleFinenesses - 1) || old_top_ == old_start_); if (SpaceAvailable(space_needed)) return; } UNREACHABLE(); @@ -314,11 +311,9 @@ bool StoreBuffer::CellIsInStoreBuffer(Address cell_address) { void StoreBuffer::ClearFilteringHashSets() { if (!hash_sets_are_empty_) { - memset(reinterpret_cast<void*>(hash_set_1_), - 0, + memset(reinterpret_cast<void*>(hash_set_1_), 0, sizeof(uintptr_t) * kHashSetLength); - memset(reinterpret_cast<void*>(hash_set_2_), - 0, + memset(reinterpret_cast<void*>(hash_set_2_), 0, sizeof(uintptr_t) * kHashSetLength); hash_sets_are_empty_ = true; } @@ -332,27 +327,6 @@ void StoreBuffer::GCPrologue() { #ifdef VERIFY_HEAP -static void DummyScavengePointer(HeapObject** p, HeapObject* o) { - // Do nothing. -} - - -void StoreBuffer::VerifyPointers(PagedSpace* space, - RegionCallback region_callback) { - PageIterator it(space); - - while (it.has_next()) { - Page* page = it.next(); - FindPointersToNewSpaceOnPage( - reinterpret_cast<PagedSpace*>(page->owner()), - page, - region_callback, - &DummyScavengePointer, - false); - } -} - - void StoreBuffer::VerifyPointers(LargeObjectSpace* space) { LargeObjectIterator it(space); for (HeapObject* object = it.Next(); object != NULL; object = it.Next()) { @@ -366,7 +340,7 @@ void StoreBuffer::VerifyPointers(LargeObjectSpace* space) { // checks that pointers which satisfy predicate point into // the active semispace. 
Object* object = reinterpret_cast<Object*>( - NoBarrier_Load(reinterpret_cast<AtomicWord*>(slot))); + base::NoBarrier_Load(reinterpret_cast<base::AtomicWord*>(slot))); heap_->InNewSpace(object); slot_address += kPointerSize; } @@ -378,10 +352,6 @@ void StoreBuffer::VerifyPointers(LargeObjectSpace* space) { void StoreBuffer::Verify() { #ifdef VERIFY_HEAP - VerifyPointers(heap_->old_pointer_space(), - &StoreBuffer::FindPointersToNewSpaceInRegion); - VerifyPointers(heap_->map_space(), - &StoreBuffer::FindPointersToNewSpaceInMapsRegion); VerifyPointers(heap_->lo_space()); #endif } @@ -398,25 +368,22 @@ void StoreBuffer::GCEpilogue() { void StoreBuffer::FindPointersToNewSpaceInRegion( - Address start, - Address end, - ObjectSlotCallback slot_callback, + Address start, Address end, ObjectSlotCallback slot_callback, bool clear_maps) { - for (Address slot_address = start; - slot_address < end; + for (Address slot_address = start; slot_address < end; slot_address += kPointerSize) { Object** slot = reinterpret_cast<Object**>(slot_address); Object* object = reinterpret_cast<Object*>( - NoBarrier_Load(reinterpret_cast<AtomicWord*>(slot))); + base::NoBarrier_Load(reinterpret_cast<base::AtomicWord*>(slot))); if (heap_->InNewSpace(object)) { HeapObject* heap_object = reinterpret_cast<HeapObject*>(object); - ASSERT(heap_object->IsHeapObject()); + DCHECK(heap_object->IsHeapObject()); // The new space object was not promoted if it still contains a map // pointer. Clear the map field now lazily. if (clear_maps) ClearDeadObject(heap_object); slot_callback(reinterpret_cast<HeapObject**>(slot), heap_object); object = reinterpret_cast<Object*>( - NoBarrier_Load(reinterpret_cast<AtomicWord*>(slot))); + base::NoBarrier_Load(reinterpret_cast<base::AtomicWord*>(slot))); if (heap_->InNewSpace(object)) { EnterDirectlyIntoStoreBuffer(slot_address); } @@ -425,154 +392,8 @@ void StoreBuffer::FindPointersToNewSpaceInRegion( } -// Compute start address of the first map following given addr. -static inline Address MapStartAlign(Address addr) { - Address page = Page::FromAddress(addr)->area_start(); - return page + (((addr - page) + (Map::kSize - 1)) / Map::kSize * Map::kSize); -} - - -// Compute end address of the first map preceding given addr. 
-static inline Address MapEndAlign(Address addr) { - Address page = Page::FromAllocationTop(addr)->area_start(); - return page + ((addr - page) / Map::kSize * Map::kSize); -} - - -void StoreBuffer::FindPointersToNewSpaceInMaps( - Address start, - Address end, - ObjectSlotCallback slot_callback, - bool clear_maps) { - ASSERT(MapStartAlign(start) == start); - ASSERT(MapEndAlign(end) == end); - - Address map_address = start; - while (map_address < end) { - ASSERT(!heap_->InNewSpace(Memory::Object_at(map_address))); - ASSERT(Memory::Object_at(map_address)->IsMap()); - - Address pointer_fields_start = map_address + Map::kPointerFieldsBeginOffset; - Address pointer_fields_end = map_address + Map::kPointerFieldsEndOffset; - - FindPointersToNewSpaceInRegion(pointer_fields_start, - pointer_fields_end, - slot_callback, - clear_maps); - map_address += Map::kSize; - } -} - - -void StoreBuffer::FindPointersToNewSpaceInMapsRegion( - Address start, - Address end, - ObjectSlotCallback slot_callback, - bool clear_maps) { - Address map_aligned_start = MapStartAlign(start); - Address map_aligned_end = MapEndAlign(end); - - ASSERT(map_aligned_start == start); - ASSERT(map_aligned_end == end); - - FindPointersToNewSpaceInMaps(map_aligned_start, - map_aligned_end, - slot_callback, - clear_maps); -} - - -// This function iterates over all the pointers in a paged space in the heap, -// looking for pointers into new space. Within the pages there may be dead -// objects that have not been overwritten by free spaces or fillers because of -// concurrent sweeping. These dead objects may not contain pointers to new -// space. The garbage areas that have been swept properly (these will normally -// be the large ones) will be marked with free space and filler map words. In -// addition any area that has never been used at all for object allocation must -// be marked with a free space or filler. Because the free space and filler -// maps do not move we can always recognize these even after a compaction. -// Normal objects like FixedArrays and JSObjects should not contain references -// to these maps. Constant pool array objects may contain references to these -// maps, however, constant pool arrays cannot contain pointers to new space -// objects, therefore they are skipped. The special garbage section (see -// comment in spaces.h) is skipped since it can contain absolutely anything. -// Any objects that are allocated during iteration may or may not be visited by -// the iteration, but they will not be partially visited. -void StoreBuffer::FindPointersToNewSpaceOnPage( - PagedSpace* space, - Page* page, - RegionCallback region_callback, - ObjectSlotCallback slot_callback, - bool clear_maps) { - Address visitable_start = page->area_start(); - Address end_of_page = page->area_end(); - - Address visitable_end = visitable_start; - - Object* free_space_map = heap_->free_space_map(); - Object* two_pointer_filler_map = heap_->two_pointer_filler_map(); - Object* constant_pool_array_map = heap_->constant_pool_array_map(); - - while (visitable_end < end_of_page) { - // The sweeper thread concurrently may write free space maps and size to - // this page. We need acquire load here to make sure that we get a - // consistent view of maps and their sizes. 
- Object* o = reinterpret_cast<Object*>( - Acquire_Load(reinterpret_cast<AtomicWord*>(visitable_end))); - // Skip fillers or constant pool arrays (which never contain new-space - // pointers but can contain pointers which can be confused for fillers) - // but not things that look like fillers in the special garbage section - // which can contain anything. - if (o == free_space_map || - o == two_pointer_filler_map || - o == constant_pool_array_map || - (visitable_end == space->top() && visitable_end != space->limit())) { - if (visitable_start != visitable_end) { - // After calling this the special garbage section may have moved. - (this->*region_callback)(visitable_start, - visitable_end, - slot_callback, - clear_maps); - if (visitable_end >= space->top() && visitable_end < space->limit()) { - visitable_end = space->limit(); - visitable_start = visitable_end; - continue; - } - } - if (visitable_end == space->top() && visitable_end != space->limit()) { - visitable_start = visitable_end = space->limit(); - } else { - // At this point we are either at the start of a filler, a - // constant pool array, or we are at the point where the space->top() - // used to be before the visit_pointer_region call above. Either way we - // can skip the object at the current spot: We don't promise to visit - // objects allocated during heap traversal, and if space->top() moved - // then it must be because an object was allocated at this point. - visitable_start = - visitable_end + HeapObject::FromAddress(visitable_end)->Size(); - visitable_end = visitable_start; - } - } else { - ASSERT(o != free_space_map); - ASSERT(o != two_pointer_filler_map); - ASSERT(o != constant_pool_array_map); - ASSERT(visitable_end < space->top() || visitable_end >= space->limit()); - visitable_end += kPointerSize; - } - } - ASSERT(visitable_end == end_of_page); - if (visitable_start != visitable_end) { - (this->*region_callback)(visitable_start, - visitable_end, - slot_callback, - clear_maps); - } -} - - -void StoreBuffer::IteratePointersInStoreBuffer( - ObjectSlotCallback slot_callback, - bool clear_maps) { +void StoreBuffer::IteratePointersInStoreBuffer(ObjectSlotCallback slot_callback, + bool clear_maps) { Address* limit = old_top_; old_top_ = old_start_; { @@ -583,7 +404,7 @@ void StoreBuffer::IteratePointersInStoreBuffer( #endif Object** slot = reinterpret_cast<Object**>(*current); Object* object = reinterpret_cast<Object*>( - NoBarrier_Load(reinterpret_cast<AtomicWord*>(slot))); + base::NoBarrier_Load(reinterpret_cast<base::AtomicWord*>(slot))); if (heap_->InFromSpace(object)) { HeapObject* heap_object = reinterpret_cast<HeapObject*>(object); // The new space object was not promoted if it still contains a map @@ -591,12 +412,12 @@ void StoreBuffer::IteratePointersInStoreBuffer( if (clear_maps) ClearDeadObject(heap_object); slot_callback(reinterpret_cast<HeapObject**>(slot), heap_object); object = reinterpret_cast<Object*>( - NoBarrier_Load(reinterpret_cast<AtomicWord*>(slot))); + base::NoBarrier_Load(reinterpret_cast<base::AtomicWord*>(slot))); if (heap_->InNewSpace(object)) { EnterDirectlyIntoStoreBuffer(reinterpret_cast<Address>(slot)); } } - ASSERT(old_top_ == saved_top + 1 || old_top_ == saved_top); + DCHECK(old_top_ == saved_top + 1 || old_top_ == saved_top); } } } @@ -649,21 +470,59 @@ void StoreBuffer::IteratePointersToNewSpace(ObjectSlotCallback slot_callback, if (chunk->owner() == heap_->lo_space()) { LargePage* large_page = reinterpret_cast<LargePage*>(chunk); HeapObject* array = large_page->GetObject(); - 
ASSERT(array->IsFixedArray()); + DCHECK(array->IsFixedArray()); Address start = array->address(); Address end = start + array->Size(); FindPointersToNewSpaceInRegion(start, end, slot_callback, clear_maps); } else { Page* page = reinterpret_cast<Page*>(chunk); PagedSpace* owner = reinterpret_cast<PagedSpace*>(page->owner()); - FindPointersToNewSpaceOnPage( - owner, - page, - (owner == heap_->map_space() ? - &StoreBuffer::FindPointersToNewSpaceInMapsRegion : - &StoreBuffer::FindPointersToNewSpaceInRegion), - slot_callback, - clear_maps); + Address start = page->area_start(); + Address end = page->area_end(); + if (owner == heap_->map_space()) { + DCHECK(page->WasSweptPrecisely()); + HeapObjectIterator iterator(page, NULL); + for (HeapObject* heap_object = iterator.Next(); heap_object != NULL; + heap_object = iterator.Next()) { + // We skip free space objects. + if (!heap_object->IsFiller()) { + FindPointersToNewSpaceInRegion( + heap_object->address() + HeapObject::kHeaderSize, + heap_object->address() + heap_object->Size(), slot_callback, + clear_maps); + } + } + } else { + if (!page->SweepingCompleted()) { + heap_->mark_compact_collector()->SweepInParallel(page, owner); + if (!page->SweepingCompleted()) { + // We were not able to sweep that page, i.e., a concurrent + // sweeper thread currently owns this page. + // TODO(hpayer): This may introduce a huge pause here. We + // just care about finish sweeping of the scan on scavenge page. + heap_->mark_compact_collector()->EnsureSweepingCompleted(); + } + } + // TODO(hpayer): remove the special casing and merge map and pointer + // space handling as soon as we removed conservative sweeping. + CHECK(page->owner() == heap_->old_pointer_space()); + if (heap_->old_pointer_space()->swept_precisely()) { + HeapObjectIterator iterator(page, NULL); + for (HeapObject* heap_object = iterator.Next(); + heap_object != NULL; heap_object = iterator.Next()) { + // We iterate over objects that contain new space pointers only. + if (heap_object->MayContainNewSpacePointers()) { + FindPointersToNewSpaceInRegion( + heap_object->address() + HeapObject::kHeaderSize, + heap_object->address() + heap_object->Size(), + slot_callback, clear_maps); + } + } + } else { + FindPointersToNewSpaceInRegion(start, end, slot_callback, + clear_maps); + } + } } } } @@ -681,19 +540,19 @@ void StoreBuffer::Compact() { // There's no check of the limit in the loop below so we check here for // the worst case (compaction doesn't eliminate any pointers). - ASSERT(top <= limit_); + DCHECK(top <= limit_); heap_->public_set_store_buffer_top(start_); EnsureSpace(top - start_); - ASSERT(may_move_store_buffer_entries_); + DCHECK(may_move_store_buffer_entries_); // Goes through the addresses in the store buffer attempting to remove // duplicates. In the interest of speed this is a lossy operation. Some // duplicates will remain. We have two hash sets with different hash // functions to reduce the number of unnecessary clashes. hash_sets_are_empty_ = false; // Hash sets are in use. for (Address* current = start_; current < top; current++) { - ASSERT(!heap_->cell_space()->Contains(*current)); - ASSERT(!heap_->code_space()->Contains(*current)); - ASSERT(!heap_->old_data_space()->Contains(*current)); + DCHECK(!heap_->cell_space()->Contains(*current)); + DCHECK(!heap_->code_space()->Contains(*current)); + DCHECK(!heap_->old_data_space()->Contains(*current)); uintptr_t int_addr = reinterpret_cast<uintptr_t>(*current); // Shift out the last bits including any tags. 
int_addr >>= kPointerSizeLog2; @@ -722,9 +581,9 @@ void StoreBuffer::Compact() { old_buffer_is_sorted_ = false; old_buffer_is_filtered_ = false; *old_top_++ = reinterpret_cast<Address>(int_addr << kPointerSizeLog2); - ASSERT(old_top_ <= old_limit_); + DCHECK(old_top_ <= old_limit_); } heap_->isolate()->counters()->store_buffer_compactions()->Increment(); } - -} } // namespace v8::internal +} +} // namespace v8::internal diff --git a/deps/v8/src/store-buffer.h b/deps/v8/src/heap/store-buffer.h similarity index 84% rename from deps/v8/src/store-buffer.h rename to deps/v8/src/heap/store-buffer.h index bf3a3f7378a..5efd6922bcd 100644 --- a/deps/v8/src/store-buffer.h +++ b/deps/v8/src/heap/store-buffer.h @@ -5,11 +5,10 @@ #ifndef V8_STORE_BUFFER_H_ #define V8_STORE_BUFFER_H_ -#include "allocation.h" -#include "checks.h" -#include "globals.h" -#include "platform.h" -#include "v8globals.h" +#include "src/allocation.h" +#include "src/base/logging.h" +#include "src/base/platform/platform.h" +#include "src/globals.h" namespace v8 { namespace internal { @@ -20,8 +19,7 @@ class StoreBuffer; typedef void (*ObjectSlotCallback)(HeapObject** from, HeapObject* to); -typedef void (StoreBuffer::*RegionCallback)(Address start, - Address end, +typedef void (StoreBuffer::*RegionCallback)(Address start, Address end, ObjectSlotCallback slot_callback, bool clear_maps); @@ -82,8 +80,8 @@ class StoreBuffer { Object*** Start() { return reinterpret_cast<Object***>(old_start_); } Object*** Top() { return reinterpret_cast<Object***>(old_top_); } void SetTop(Object*** top) { - ASSERT(top >= Start()); - ASSERT(top <= Limit()); + DCHECK(top >= Start()); + DCHECK(top <= Limit()); old_top_ = reinterpret_cast<Address*>(top); } @@ -120,7 +118,7 @@ class StoreBuffer { Address* old_limit_; Address* old_top_; Address* old_reserved_limit_; - VirtualMemory* old_virtual_memory_; + base::VirtualMemory* old_virtual_memory_; bool old_buffer_is_sorted_; bool old_buffer_is_filtered_; @@ -132,7 +130,7 @@ class StoreBuffer { StoreBufferCallback callback_; bool may_move_store_buffer_entries_; - VirtualMemory* virtual_memory_; + base::VirtualMemory* virtual_memory_; // Two hash sets used for filtering. // If address is in the hash set then it is guaranteed to be in the @@ -148,12 +146,11 @@ class StoreBuffer { void ExemptPopularPages(int prime_sample_step, int threshold); // Set the map field of the object to NULL if contains a map. - inline void ClearDeadObject(HeapObject *object); + inline void ClearDeadObject(HeapObject* object); void IteratePointersToNewSpace(ObjectSlotCallback callback, bool clear_maps); - void FindPointersToNewSpaceInRegion(Address start, - Address end, + void FindPointersToNewSpaceInRegion(Address start, Address end, ObjectSlotCallback slot_callback, bool clear_maps); @@ -162,36 +159,14 @@ class StoreBuffer { // If either visit_pointer_region or callback can cause an allocation // in old space and changes in allocation watermark then // can_preallocate_during_iteration should be set to true. 
- void IteratePointersOnPage( - PagedSpace* space, - Page* page, - RegionCallback region_callback, - ObjectSlotCallback slot_callback); - - void FindPointersToNewSpaceInMaps( - Address start, - Address end, - ObjectSlotCallback slot_callback, - bool clear_maps); - - void FindPointersToNewSpaceInMapsRegion( - Address start, - Address end, - ObjectSlotCallback slot_callback, - bool clear_maps); - - void FindPointersToNewSpaceOnPage( - PagedSpace* space, - Page* page, - RegionCallback region_callback, - ObjectSlotCallback slot_callback, - bool clear_maps); + void IteratePointersOnPage(PagedSpace* space, Page* page, + RegionCallback region_callback, + ObjectSlotCallback slot_callback); void IteratePointersInStoreBuffer(ObjectSlotCallback slot_callback, bool clear_maps); #ifdef VERIFY_HEAP - void VerifyPointers(PagedSpace* space, RegionCallback region_callback); void VerifyPointers(LargeObjectSpace* space); #endif @@ -202,8 +177,7 @@ class StoreBuffer { class StoreBufferRebuildScope { public: - explicit StoreBufferRebuildScope(Heap* heap, - StoreBuffer* store_buffer, + explicit StoreBufferRebuildScope(Heap* heap, StoreBuffer* store_buffer, StoreBufferCallback callback) : store_buffer_(store_buffer), stored_state_(store_buffer->store_buffer_rebuilding_enabled_), @@ -241,7 +215,7 @@ class DontMoveStoreBufferEntriesScope { StoreBuffer* store_buffer_; bool stored_state_; }; - -} } // namespace v8::internal +} +} // namespace v8::internal #endif // V8_STORE_BUFFER_H_ diff --git a/deps/v8/src/sweeper-thread.cc b/deps/v8/src/heap/sweeper-thread.cc similarity index 52% rename from deps/v8/src/sweeper-thread.cc rename to deps/v8/src/heap/sweeper-thread.cc index e8c8cd68d84..b0e8cea219d 100644 --- a/deps/v8/src/sweeper-thread.cc +++ b/deps/v8/src/heap/sweeper-thread.cc @@ -2,12 +2,12 @@ // Use of this source code is governed by a BSD-style license that can be // found in the LICENSE file. 
-#include "sweeper-thread.h" +#include "src/heap/sweeper-thread.h" -#include "v8.h" +#include "src/v8.h" -#include "isolate.h" -#include "v8threads.h" +#include "src/isolate.h" +#include "src/v8threads.h" namespace v8 { namespace internal { @@ -15,15 +15,15 @@ namespace internal { static const int kSweeperThreadStackSize = 64 * KB; SweeperThread::SweeperThread(Isolate* isolate) - : Thread(Thread::Options("v8:SweeperThread", kSweeperThreadStackSize)), - isolate_(isolate), - heap_(isolate->heap()), - collector_(heap_->mark_compact_collector()), - start_sweeping_semaphore_(0), - end_sweeping_semaphore_(0), - stop_semaphore_(0) { - ASSERT(!FLAG_job_based_sweeping); - NoBarrier_Store(&stop_thread_, static_cast<AtomicWord>(false)); + : Thread(Thread::Options("v8:SweeperThread", kSweeperThreadStackSize)), + isolate_(isolate), + heap_(isolate->heap()), + collector_(heap_->mark_compact_collector()), + start_sweeping_semaphore_(0), + end_sweeping_semaphore_(0), + stop_semaphore_(0) { + DCHECK(!FLAG_job_based_sweeping); + base::NoBarrier_Store(&stop_thread_, static_cast<base::AtomicWord>(false)); } @@ -36,38 +36,34 @@ void SweeperThread::Run() { while (true) { start_sweeping_semaphore_.Wait(); - if (Acquire_Load(&stop_thread_)) { + if (base::Acquire_Load(&stop_thread_)) { stop_semaphore_.Signal(); return; } - collector_->SweepInParallel(heap_->old_data_space()); - collector_->SweepInParallel(heap_->old_pointer_space()); + collector_->SweepInParallel(heap_->old_data_space(), 0); + collector_->SweepInParallel(heap_->old_pointer_space(), 0); end_sweeping_semaphore_.Signal(); } } void SweeperThread::Stop() { - Release_Store(&stop_thread_, static_cast<AtomicWord>(true)); + base::Release_Store(&stop_thread_, static_cast<base::AtomicWord>(true)); start_sweeping_semaphore_.Signal(); stop_semaphore_.Wait(); Join(); } -void SweeperThread::StartSweeping() { - start_sweeping_semaphore_.Signal(); -} +void SweeperThread::StartSweeping() { start_sweeping_semaphore_.Signal(); } -void SweeperThread::WaitForSweeperThread() { - end_sweeping_semaphore_.Wait(); -} +void SweeperThread::WaitForSweeperThread() { end_sweeping_semaphore_.Wait(); } bool SweeperThread::SweepingCompleted() { - bool value = end_sweeping_semaphore_.WaitFor(TimeDelta::FromSeconds(0)); + bool value = end_sweeping_semaphore_.WaitFor(base::TimeDelta::FromSeconds(0)); if (value) { end_sweeping_semaphore_.Signal(); } @@ -79,8 +75,8 @@ int SweeperThread::NumberOfThreads(int max_available) { if (!FLAG_concurrent_sweeping && !FLAG_parallel_sweeping) return 0; if (FLAG_sweeper_threads > 0) return FLAG_sweeper_threads; if (FLAG_concurrent_sweeping) return max_available - 1; - ASSERT(FLAG_parallel_sweeping); + DCHECK(FLAG_parallel_sweeping); return max_available; } - -} } // namespace v8::internal +} +} // namespace v8::internal diff --git a/deps/v8/src/sweeper-thread.h b/deps/v8/src/heap/sweeper-thread.h similarity index 50% rename from deps/v8/src/sweeper-thread.h rename to deps/v8/src/heap/sweeper-thread.h index 794e660aa61..fc6bdda05b5 100644 --- a/deps/v8/src/sweeper-thread.h +++ b/deps/v8/src/heap/sweeper-thread.h @@ -2,22 +2,22 @@ // Use of this source code is governed by a BSD-style license that can be // found in the LICENSE file. 
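The SweeperThread code above coordinates through three semaphores: the main thread signals start_sweeping_semaphore_ to launch a sweep, the worker signals end_sweeping_semaphore_ on completion, and Stop() sets a flag, wakes the worker, then waits on stop_semaphore_ before joining. A compact sketch of the same handshake in portable C++20; the names are hypothetical stand-ins for the base:: types:

#include <atomic>
#include <semaphore>
#include <thread>

std::binary_semaphore start_work{0}, work_done{0}, stopped{0};
std::atomic<bool> stop_requested{false};

void WorkerLoop() {
  while (true) {
    start_work.acquire();         // start_sweeping_semaphore_.Wait()
    if (stop_requested.load()) {  // base::Acquire_Load(&stop_thread_)
      stopped.release();          // stop_semaphore_.Signal()
      return;
    }
    // ... do one round of sweeping ...
    work_done.release();          // end_sweeping_semaphore_.Signal()
  }
}

void Stop(std::thread& worker) {
  stop_requested.store(true);     // base::Release_Store(&stop_thread_, true)
  start_work.release();           // wake the worker so it observes the flag
  stopped.acquire();              // wait for acknowledgement
  worker.join();
}

int main() {
  std::thread worker(WorkerLoop);
  start_work.release();           // kick off one sweep
  work_done.acquire();            // wait for it to finish
  Stop(worker);
  return 0;
}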
-#ifndef V8_SWEEPER_THREAD_H_ -#define V8_SWEEPER_THREAD_H_ +#ifndef V8_HEAP_SWEEPER_THREAD_H_ +#define V8_HEAP_SWEEPER_THREAD_H_ -#include "atomicops.h" -#include "flags.h" -#include "platform.h" -#include "utils.h" +#include "src/base/atomicops.h" +#include "src/base/platform/platform.h" +#include "src/flags.h" +#include "src/utils.h" -#include "spaces.h" +#include "src/heap/spaces.h" -#include "heap.h" +#include "src/heap/heap.h" namespace v8 { namespace internal { -class SweeperThread : public Thread { +class SweeperThread : public base::Thread { public: explicit SweeperThread(Isolate* isolate); ~SweeperThread() {} @@ -34,12 +34,12 @@ class SweeperThread : public Thread { Isolate* isolate_; Heap* heap_; MarkCompactCollector* collector_; - Semaphore start_sweeping_semaphore_; - Semaphore end_sweeping_semaphore_; - Semaphore stop_semaphore_; - volatile AtomicWord stop_thread_; + base::Semaphore start_sweeping_semaphore_; + base::Semaphore end_sweeping_semaphore_; + base::Semaphore stop_semaphore_; + volatile base::AtomicWord stop_thread_; }; +} +} // namespace v8::internal -} } // namespace v8::internal - -#endif // V8_SWEEPER_THREAD_H_ +#endif // V8_HEAP_SWEEPER_THREAD_H_ diff --git a/deps/v8/src/hydrogen-alias-analysis.h b/deps/v8/src/hydrogen-alias-analysis.h index 10325d2ec65..368dd5f020d 100644 --- a/deps/v8/src/hydrogen-alias-analysis.h +++ b/deps/v8/src/hydrogen-alias-analysis.h @@ -5,7 +5,7 @@ #ifndef V8_HYDROGEN_ALIAS_ANALYSIS_H_ #define V8_HYDROGEN_ALIAS_ANALYSIS_H_ -#include "hydrogen.h" +#include "src/hydrogen.h" namespace v8 { namespace internal { diff --git a/deps/v8/src/hydrogen-bce.cc b/deps/v8/src/hydrogen-bce.cc index 5f45c1fd0dc..18bd0affb6e 100644 --- a/deps/v8/src/hydrogen-bce.cc +++ b/deps/v8/src/hydrogen-bce.cc @@ -2,7 +2,7 @@ // Use of this source code is governed by a BSD-style license that can be // found in the LICENSE file. -#include "hydrogen-bce.h" +#include "src/hydrogen-bce.h" namespace v8 { namespace internal { @@ -51,6 +51,9 @@ class BoundsCheckKey : public ZoneObject { constant = HConstant::cast(index->right()); index_base = index->left(); } + } else if (check->index()->IsConstant()) { + index_base = check->block()->graph()->GetConstant0(); + constant = HConstant::cast(check->index()); } if (constant != NULL && constant->HasInteger32Value()) { @@ -110,7 +113,7 @@ class BoundsCheckBbData: public ZoneObject { void UpdateUpperOffsets(HBoundsCheck* check, int32_t offset) { BoundsCheckBbData* data = FatherInDominatorTree(); while (data != NULL && data->UpperCheck() == check) { - ASSERT(data->upper_offset_ < offset); + DCHECK(data->upper_offset_ < offset); data->upper_offset_ = offset; data = data->FatherInDominatorTree(); } @@ -119,7 +122,7 @@ class BoundsCheckBbData: public ZoneObject { void UpdateLowerOffsets(HBoundsCheck* check, int32_t offset) { BoundsCheckBbData* data = FatherInDominatorTree(); while (data != NULL && data->LowerCheck() == check) { - ASSERT(data->lower_offset_ > offset); + DCHECK(data->lower_offset_ > offset); data->lower_offset_ = offset; data = data->FatherInDominatorTree(); } @@ -139,7 +142,7 @@ class BoundsCheckBbData: public ZoneObject { // new_offset, and new_check is removed. 
void CoverCheck(HBoundsCheck* new_check, int32_t new_offset) { - ASSERT(new_check->index()->representation().IsSmiOrInteger32()); + DCHECK(new_check->index()->representation().IsSmiOrInteger32()); bool keep_new_check = false; if (new_offset > upper_offset_) { @@ -167,8 +170,8 @@ class BoundsCheckBbData: public ZoneObject { if (!keep_new_check) { if (FLAG_trace_bce) { - OS::Print("Eliminating check #%d after tightening\n", - new_check->id()); + base::OS::Print("Eliminating check #%d after tightening\n", + new_check->id()); } new_check->block()->graph()->isolate()->counters()-> bounds_checks_eliminated()->Increment(); @@ -177,11 +180,11 @@ class BoundsCheckBbData: public ZoneObject { HBoundsCheck* first_check = new_check == lower_check_ ? upper_check_ : lower_check_; if (FLAG_trace_bce) { - OS::Print("Moving second check #%d after first check #%d\n", - new_check->id(), first_check->id()); + base::OS::Print("Moving second check #%d after first check #%d\n", + new_check->id(), first_check->id()); } // The length is guaranteed to be live at first_check. - ASSERT(new_check->length() == first_check->length()); + DCHECK(new_check->length() == first_check->length()); HInstruction* old_position = new_check->next(); new_check->Unlink(); new_check->InsertAfter(first_check); @@ -219,54 +222,69 @@ class BoundsCheckBbData: public ZoneObject { void MoveIndexIfNecessary(HValue* index_raw, HBoundsCheck* insert_before, HInstruction* end_of_scan_range) { - if (!index_raw->IsAdd() && !index_raw->IsSub()) { - // index_raw can be HAdd(index_base, offset), HSub(index_base, offset), - // or index_base directly. In the latter case, no need to move anything. - return; - } - HArithmeticBinaryOperation* index = - HArithmeticBinaryOperation::cast(index_raw); - HValue* left_input = index->left(); - HValue* right_input = index->right(); - bool must_move_index = false; - bool must_move_left_input = false; - bool must_move_right_input = false; - for (HInstruction* cursor = end_of_scan_range; cursor != insert_before;) { - if (cursor == left_input) must_move_left_input = true; - if (cursor == right_input) must_move_right_input = true; - if (cursor == index) must_move_index = true; - if (cursor->previous() == NULL) { - cursor = cursor->block()->dominator()->end(); - } else { - cursor = cursor->previous(); + // index_raw can be HAdd(index_base, offset), HSub(index_base, offset), + // HConstant(offset) or index_base directly. + // In the latter case, no need to move anything. + if (index_raw->IsAdd() || index_raw->IsSub()) { + HArithmeticBinaryOperation* index = + HArithmeticBinaryOperation::cast(index_raw); + HValue* left_input = index->left(); + HValue* right_input = index->right(); + bool must_move_index = false; + bool must_move_left_input = false; + bool must_move_right_input = false; + for (HInstruction* cursor = end_of_scan_range; cursor != insert_before;) { + if (cursor == left_input) must_move_left_input = true; + if (cursor == right_input) must_move_right_input = true; + if (cursor == index) must_move_index = true; + if (cursor->previous() == NULL) { + cursor = cursor->block()->dominator()->end(); + } else { + cursor = cursor->previous(); + } + } + if (must_move_index) { + index->Unlink(); + index->InsertBefore(insert_before); + } + // The BCE algorithm only selects mergeable bounds checks that share + // the same "index_base", so we'll only ever have to move constants. 
+ if (must_move_left_input) { + HConstant::cast(left_input)->Unlink(); + HConstant::cast(left_input)->InsertBefore(index); + } + if (must_move_right_input) { + HConstant::cast(right_input)->Unlink(); + HConstant::cast(right_input)->InsertBefore(index); + } + } else if (index_raw->IsConstant()) { + HConstant* index = HConstant::cast(index_raw); + bool must_move = false; + for (HInstruction* cursor = end_of_scan_range; cursor != insert_before;) { + if (cursor == index) must_move = true; + if (cursor->previous() == NULL) { + cursor = cursor->block()->dominator()->end(); + } else { + cursor = cursor->previous(); + } + } + if (must_move) { + index->Unlink(); + index->InsertBefore(insert_before); } - } - if (must_move_index) { - index->Unlink(); - index->InsertBefore(insert_before); - } - // The BCE algorithm only selects mergeable bounds checks that share - // the same "index_base", so we'll only ever have to move constants. - if (must_move_left_input) { - HConstant::cast(left_input)->Unlink(); - HConstant::cast(left_input)->InsertBefore(index); - } - if (must_move_right_input) { - HConstant::cast(right_input)->Unlink(); - HConstant::cast(right_input)->InsertBefore(index); } } void TightenCheck(HBoundsCheck* original_check, HBoundsCheck* tighter_check, int32_t new_offset) { - ASSERT(original_check->length() == tighter_check->length()); + DCHECK(original_check->length() == tighter_check->length()); MoveIndexIfNecessary(tighter_check->index(), original_check, tighter_check); original_check->ReplaceAllUsesWith(original_check->index()); original_check->SetOperandAt(0, tighter_check->index()); if (FLAG_trace_bce) { - OS::Print("Tightened check #%d with offset %d from #%d\n", - original_check->id(), new_offset, tighter_check->id()); + base::OS::Print("Tightened check #%d with offset %d from #%d\n", + original_check->id(), new_offset, tighter_check->id()); } } @@ -378,15 +396,15 @@ BoundsCheckBbData* HBoundsCheckEliminationPhase::PreProcessBlock( NULL); *data_p = bb_data_list; if (FLAG_trace_bce) { - OS::Print("Fresh bounds check data for block #%d: [%d]\n", - bb->block_id(), offset); + base::OS::Print("Fresh bounds check data for block #%d: [%d]\n", + bb->block_id(), offset); } } else if (data->OffsetIsCovered(offset)) { bb->graph()->isolate()->counters()-> bounds_checks_eliminated()->Increment(); if (FLAG_trace_bce) { - OS::Print("Eliminating bounds check #%d, offset %d is covered\n", - check->id(), offset); + base::OS::Print("Eliminating bounds check #%d, offset %d is covered\n", + check->id(), offset); } check->DeleteAndReplaceWith(check->ActualValue()); } else if (data->BasicBlock() == bb) { @@ -421,8 +439,8 @@ BoundsCheckBbData* HBoundsCheckEliminationPhase::PreProcessBlock( bb_data_list, data); if (FLAG_trace_bce) { - OS::Print("Updated bounds check data for block #%d: [%d - %d]\n", - bb->block_id(), new_lower_offset, new_upper_offset); + base::OS::Print("Updated bounds check data for block #%d: [%d - %d]\n", + bb->block_id(), new_lower_offset, new_upper_offset); } table_.Insert(key, bb_data_list, zone()); } diff --git a/deps/v8/src/hydrogen-bce.h b/deps/v8/src/hydrogen-bce.h index 6e501701fe4..70c0a07d066 100644 --- a/deps/v8/src/hydrogen-bce.h +++ b/deps/v8/src/hydrogen-bce.h @@ -5,7 +5,7 @@ #ifndef V8_HYDROGEN_BCE_H_ #define V8_HYDROGEN_BCE_H_ -#include "hydrogen.h" +#include "src/hydrogen.h" namespace v8 { namespace internal { diff --git a/deps/v8/src/hydrogen-bch.cc b/deps/v8/src/hydrogen-bch.cc index e889c196f37..5af6030346a 100644 --- a/deps/v8/src/hydrogen-bch.cc +++ 
b/deps/v8/src/hydrogen-bch.cc @@ -2,7 +2,7 @@ // Use of this source code is governed by a BSD-style license that can be // found in the LICENSE file. -#include "hydrogen-bch.h" +#include "src/hydrogen-bch.h" namespace v8 { namespace internal { @@ -44,7 +44,7 @@ class InductionVariableBlocksTable BASE_EMBEDDED { * induction variable). */ void InitializeLoop(InductionVariableData* data) { - ASSERT(data->limit() != NULL); + DCHECK(data->limit() != NULL); HLoopInformation* loop = data->phi()->block()->current_loop(); is_start_ = (block() == loop->loop_header()); is_proper_exit_ = (block() == data->induction_exit_target()); @@ -55,7 +55,7 @@ class InductionVariableBlocksTable BASE_EMBEDDED { // Utility methods to iterate over dominated blocks. void ResetCurrentDominatedBlock() { current_dominated_block_ = kNoBlock; } HBasicBlock* CurrentDominatedBlock() { - ASSERT(current_dominated_block_ != kNoBlock); + DCHECK(current_dominated_block_ != kNoBlock); return current_dominated_block_ < block()->dominated_blocks()->length() ? block()->dominated_blocks()->at(current_dominated_block_) : NULL; } @@ -181,7 +181,7 @@ class InductionVariableBlocksTable BASE_EMBEDDED { Element element; element.set_block(graph->blocks()->at(i)); elements_.Add(element, graph->zone()); - ASSERT(at(i)->block()->block_id() == i); + DCHECK(at(i)->block()->block_id() == i); } } diff --git a/deps/v8/src/hydrogen-bch.h b/deps/v8/src/hydrogen-bch.h index 28d540157fa..852c264c4f1 100644 --- a/deps/v8/src/hydrogen-bch.h +++ b/deps/v8/src/hydrogen-bch.h @@ -5,7 +5,7 @@ #ifndef V8_HYDROGEN_BCH_H_ #define V8_HYDROGEN_BCH_H_ -#include "hydrogen.h" +#include "src/hydrogen.h" namespace v8 { namespace internal { diff --git a/deps/v8/src/hydrogen-canonicalize.cc b/deps/v8/src/hydrogen-canonicalize.cc index 77f4a55eb3f..c15b8d99c09 100644 --- a/deps/v8/src/hydrogen-canonicalize.cc +++ b/deps/v8/src/hydrogen-canonicalize.cc @@ -2,8 +2,8 @@ // Use of this source code is governed by a BSD-style license that can be // found in the LICENSE file. -#include "hydrogen-canonicalize.h" -#include "hydrogen-redundant-phi.h" +#include "src/hydrogen-canonicalize.h" +#include "src/hydrogen-redundant-phi.h" namespace v8 { namespace internal { diff --git a/deps/v8/src/hydrogen-canonicalize.h b/deps/v8/src/hydrogen-canonicalize.h index a0737ed631c..eb230332fdd 100644 --- a/deps/v8/src/hydrogen-canonicalize.h +++ b/deps/v8/src/hydrogen-canonicalize.h @@ -5,7 +5,7 @@ #ifndef V8_HYDROGEN_CANONICALIZE_H_ #define V8_HYDROGEN_CANONICALIZE_H_ -#include "hydrogen.h" +#include "src/hydrogen.h" namespace v8 { namespace internal { diff --git a/deps/v8/src/hydrogen-check-elimination.cc b/deps/v8/src/hydrogen-check-elimination.cc index de1e5b0648a..1530fe1cf5d 100644 --- a/deps/v8/src/hydrogen-check-elimination.cc +++ b/deps/v8/src/hydrogen-check-elimination.cc @@ -2,9 +2,10 @@ // Use of this source code is governed by a BSD-style license that can be // found in the LICENSE file. -#include "hydrogen-check-elimination.h" -#include "hydrogen-alias-analysis.h" -#include "hydrogen-flow-engine.h" +#include "src/hydrogen-check-elimination.h" + +#include "src/hydrogen-alias-analysis.h" +#include "src/hydrogen-flow-engine.h" #define GLOBAL 1 @@ -24,9 +25,47 @@ namespace internal { typedef const UniqueSet<Map>* MapSet; struct HCheckTableEntry { + enum State { + // We have seen a map check (i.e. an HCheckMaps) for these maps, so we can + // use this information to eliminate further map checks, elements kind + // transitions, etc. 
+ CHECKED, + // Same as CHECKED, but we also know that these maps are stable. + CHECKED_STABLE, + // These maps are stable, but not checked (i.e. we learned this via field + // type tracking or from a constant, or they were initially CHECKED_STABLE, + // but became UNCHECKED_STABLE because of an instruction that changes maps + // or elements kind), and we need a stability check for them in order to use + // this information for check elimination (which turns them back to + // CHECKED_STABLE). + UNCHECKED_STABLE + }; + + static const char* State2String(State state) { + switch (state) { + case CHECKED: return "checked"; + case CHECKED_STABLE: return "checked stable"; + case UNCHECKED_STABLE: return "unchecked stable"; + } + UNREACHABLE(); + return NULL; + } + + static State StateMerge(State state1, State state2) { + if (state1 == state2) return state1; + if ((state1 == CHECKED && state2 == CHECKED_STABLE) || + (state2 == CHECKED && state1 == CHECKED_STABLE)) { + return CHECKED; + } + DCHECK((state1 == CHECKED_STABLE && state2 == UNCHECKED_STABLE) || + (state2 == CHECKED_STABLE && state1 == UNCHECKED_STABLE)); + return UNCHECKED_STABLE; + } + HValue* object_; // The object being approximated. NULL => invalid entry. HInstruction* check_; // The last check instruction. MapSet maps_; // The set of known maps for the object. + State state_; // The state of this entry. }; @@ -65,6 +104,10 @@ class HCheckTable : public ZoneObject { ReduceCompareObjectEqAndBranch(HCompareObjectEqAndBranch::cast(instr)); break; } + case HValue::kIsStringAndBranch: { + ReduceIsStringAndBranch(HIsStringAndBranch::cast(instr)); + break; + } case HValue::kTransitionElementsKind: { ReduceTransitionElementsKind( HTransitionElementsKind::cast(instr)); break; } @@ -74,16 +117,23 @@ ReduceCheckHeapObject(HCheckHeapObject::cast(instr)); break; } + case HValue::kCheckInstanceType: { + ReduceCheckInstanceType(HCheckInstanceType::cast(instr)); + break; + } default: { // If the instruction changes maps uncontrollably, drop everything. - if (instr->CheckChangesFlag(kElementsKind) || - instr->CheckChangesFlag(kMaps) || - instr->CheckChangesFlag(kOsrEntries)) { + if (instr->CheckChangesFlag(kOsrEntries)) { Kill(); + break; + } + if (instr->CheckChangesFlag(kElementsKind) || + instr->CheckChangesFlag(kMaps)) { + KillUnstableEntries(); } } // Improvements possible: - // - eliminate redundant HCheckSmi, HCheckInstanceType instructions + // - eliminate redundant HCheckSmi instructions // - track which values have been HCheckHeapObject'd } @@ -127,10 +177,11 @@ class HCheckTable : public ZoneObject { HCheckTable* copy = new(zone) HCheckTable(phase_); for (int i = 0; i < size_; i++) { HCheckTableEntry* old_entry = &entries_[i]; - ASSERT(old_entry->maps_->size() > 0); + DCHECK(old_entry->maps_->size() > 0); HCheckTableEntry* new_entry = &copy->entries_[i]; new_entry->object_ = old_entry->object_; new_entry->maps_ = old_entry->maps_; + new_entry->state_ = old_entry->state_; // Keep the check if the existing check's block dominates the successor. if (old_entry->check_ != NULL && old_entry->check_->block()->Dominates(succ)) { @@ -156,7 +207,7 @@ HCheckTableEntry* pred_entry = copy->Find(phi_operand); if (pred_entry != NULL) { // Create an entry for a phi in the table.

- copy->Insert(phi, NULL, pred_entry->maps_); + copy->Insert(phi, NULL, pred_entry->maps_, pred_entry->state_); } } } @@ -172,19 +223,25 @@ class HCheckTable : public ZoneObject { HValue* object = cmp->value()->ActualValue(); HCheckTableEntry* entry = copy->Find(object); if (is_true_branch) { + HCheckTableEntry::State state = cmp->map_is_stable() + ? HCheckTableEntry::CHECKED_STABLE + : HCheckTableEntry::CHECKED; // Learn on the true branch of if(CompareMap(x)). if (entry == NULL) { - copy->Insert(object, cmp, cmp->map()); + copy->Insert(object, cmp, cmp->map(), state); } else { entry->maps_ = new(zone) UniqueSet<Map>(cmp->map(), zone); entry->check_ = cmp; + entry->state_ = state; } } else { // Learn on the false branch of if(CompareMap(x)). if (entry != NULL) { + EnsureChecked(entry, object, cmp); UniqueSet<Map>* maps = entry->maps_->Copy(zone); maps->Remove(cmp->map()); entry->maps_ = maps; + DCHECK_NE(HCheckTableEntry::UNCHECKED_STABLE, entry->state_); } } learned = true; @@ -198,14 +255,42 @@ class HCheckTable : public ZoneObject { HCheckTableEntry* re = copy->Find(right); if (le == NULL) { if (re != NULL) { - copy->Insert(left, NULL, re->maps_); + copy->Insert(left, NULL, re->maps_, re->state_); } } else if (re == NULL) { - copy->Insert(right, NULL, le->maps_); + copy->Insert(right, NULL, le->maps_, le->state_); } else { + EnsureChecked(le, cmp->left(), cmp); + EnsureChecked(re, cmp->right(), cmp); le->maps_ = re->maps_ = le->maps_->Intersect(re->maps_, zone); + le->state_ = re->state_ = HCheckTableEntry::StateMerge( + le->state_, re->state_); + DCHECK_NE(HCheckTableEntry::UNCHECKED_STABLE, le->state_); + DCHECK_NE(HCheckTableEntry::UNCHECKED_STABLE, re->state_); } learned = true; + } else if (end->IsIsStringAndBranch()) { + HIsStringAndBranch* cmp = HIsStringAndBranch::cast(end); + HValue* object = cmp->value()->ActualValue(); + HCheckTableEntry* entry = copy->Find(object); + if (is_true_branch) { + // Learn on the true branch of if(IsString(x)). + if (entry == NULL) { + copy->Insert(object, NULL, string_maps(), + HCheckTableEntry::CHECKED); + } else { + EnsureChecked(entry, object, cmp); + entry->maps_ = entry->maps_->Intersect(string_maps(), zone); + DCHECK_NE(HCheckTableEntry::UNCHECKED_STABLE, entry->state_); + } + } else { + // Learn on the false branch of if(IsString(x)). + if (entry != NULL) { + EnsureChecked(entry, object, cmp); + entry->maps_ = entry->maps_->Subtract(string_maps(), zone); + DCHECK_NE(HCheckTableEntry::UNCHECKED_STABLE, entry->state_); + } + } } // Learning on false branches requires storing negative facts. 
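The branch logic above is the flow-sensitive heart of the pass: the true edge of a map comparison collapses the entry's map set to the compared map, while the false edge subtracts it. A tiny sketch of those two transfer functions, with std::set<int> as a hypothetical stand-in for UniqueSet<Map>:

#include <cstdio>
#include <set>

using MapSet = std::set<int>;  // stand-in for UniqueSet<Map>

// True edge of if (CompareMap(x, m)): x definitely has map m.
MapSet LearnTrue(const MapSet&, int m) { return MapSet{m}; }

// False edge: x definitely does not have map m.
MapSet LearnFalse(MapSet known, int m) { known.erase(m); return known; }

int main() {
  MapSet known{1, 2, 3};
  MapSet t = LearnTrue(known, 2);                // {2}
  MapSet f = LearnFalse(known, 2);               // {1, 3}
  std::printf("%zu %zu\n", t.size(), f.size());  // prints "1 2"
  return 0;
}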
} @@ -244,16 +329,22 @@ class HCheckTable : public ZoneObject { that_entry = that->Find(this_entry->object_); } - if (that_entry == NULL) { + if (that_entry == NULL || + (that_entry->state_ == HCheckTableEntry::CHECKED && + this_entry->state_ == HCheckTableEntry::UNCHECKED_STABLE) || + (this_entry->state_ == HCheckTableEntry::CHECKED && + that_entry->state_ == HCheckTableEntry::UNCHECKED_STABLE)) { this_entry->object_ = NULL; compact = true; } else { this_entry->maps_ = this_entry->maps_->Union(that_entry->maps_, zone); + this_entry->state_ = HCheckTableEntry::StateMerge( + this_entry->state_, that_entry->state_); if (this_entry->check_ != that_entry->check_) { this_entry->check_ = NULL; } - ASSERT(this_entry->maps_->size() > 0); + DCHECK(this_entry->maps_->size() > 0); } } if (compact) Compact(); @@ -272,16 +363,23 @@ class HCheckTable : public ZoneObject { HCheckTableEntry* entry = Find(object); if (entry != NULL) { // entry found; - MapSet a = entry->maps_; - const UniqueSet<Map>* i = instr->maps(); - if (a->IsSubset(i)) { + HGraph* graph = instr->block()->graph(); + if (entry->maps_->IsSubset(instr->maps())) { // The first check is more strict; the second is redundant. if (entry->check_ != NULL) { + DCHECK_NE(HCheckTableEntry::UNCHECKED_STABLE, entry->state_); TRACE(("Replacing redundant CheckMaps #%d at B%d with #%d\n", instr->id(), instr->block()->block_id(), entry->check_->id())); instr->DeleteAndReplaceWith(entry->check_); INC_STAT(redundant_); - } else { + } else if (entry->state_ == HCheckTableEntry::UNCHECKED_STABLE) { + DCHECK_EQ(NULL, entry->check_); + TRACE(("Marking redundant CheckMaps #%d at B%d as stability check\n", + instr->id(), instr->block()->block_id())); + instr->set_maps(entry->maps_->Copy(graph->zone())); + instr->MarkAsStabilityCheck(); + entry->state_ = HCheckTableEntry::CHECKED_STABLE; + } else if (!instr->IsStabilityCheck()) { TRACE(("Marking redundant CheckMaps #%d at B%d as dead\n", instr->id(), instr->block()->block_id())); // Mark check as dead but leave it in the graph as a checkpoint for @@ -292,16 +390,22 @@ class HCheckTable : public ZoneObject { } return; } - HGraph* graph = instr->block()->graph(); - MapSet intersection = i->Intersect(a, graph->zone()); + MapSet intersection = instr->maps()->Intersect( + entry->maps_, graph->zone()); if (intersection->size() == 0) { - // Intersection is empty; probably megamorphic, which is likely to - // deopt anyway, so just leave things as they are. + // Intersection is empty; probably megamorphic. INC_STAT(empty_); + entry->object_ = NULL; + Compact(); } else { // Update set of maps in the entry. entry->maps_ = intersection; - if (intersection->size() != i->size()) { + // Update state of the entry. + if (instr->maps_are_stable() || + entry->state_ == HCheckTableEntry::UNCHECKED_STABLE) { + entry->state_ = HCheckTableEntry::CHECKED_STABLE; + } + if (intersection->size() != instr->maps()->size()) { // Narrow set of maps in the second check maps instruction. if (entry->check_ != NULL && entry->check_->block() == instr->block() && @@ -309,6 +413,7 @@ class HCheckTable : public ZoneObject { // There is a check in the same block so replace it with a more // strict check and eliminate the second check entirely. HCheckMaps* check = HCheckMaps::cast(entry->check_); + DCHECK(!check->IsStabilityCheck()); TRACE(("CheckMaps #%d at B%d narrowed\n", check->id(), check->block()->block_id())); // Update map set and ensure that the check is alive. 
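Because the three entry states introduced in this file interact subtly, the merge rule may be easier to see in isolation. This standalone restatement copies StateMerge from the diff, drops the DCHECK, and adds a tiny driver; CHECKED_STABLE is the strongest fact, and a merge keeps only what both predecessors guarantee:

#include <cstdio>

enum State { CHECKED, CHECKED_STABLE, UNCHECKED_STABLE };

State StateMerge(State s1, State s2) {
  if (s1 == s2) return s1;
  if ((s1 == CHECKED && s2 == CHECKED_STABLE) ||
      (s2 == CHECKED && s1 == CHECKED_STABLE)) {
    return CHECKED;  // both paths checked, but stability holds on only one
  }
  return UNCHECKED_STABLE;  // CHECKED_STABLE merged with UNCHECKED_STABLE
}

int main() {
  std::printf("%d\n", StateMerge(CHECKED, CHECKED_STABLE));           // 0 (CHECKED)
  std::printf("%d\n", StateMerge(CHECKED_STABLE, UNCHECKED_STABLE));  // 2 (UNCHECKED_STABLE)
  return 0;
}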
@@ -321,7 +426,7 @@ class HCheckTable : public ZoneObject { TRACE(("CheckMaps #%d at B%d narrowed\n", instr->id(), instr->block()->block_id())); instr->set_maps(intersection); - entry->check_ = instr; + entry->check_ = instr->IsStabilityCheck() ? NULL : instr; } if (FLAG_trace_check_elimination) { @@ -332,7 +437,55 @@ class HCheckTable : public ZoneObject { } } else { // No entry; insert a new one. - Insert(object, instr, instr->maps()); + HCheckTableEntry::State state = instr->maps_are_stable() + ? HCheckTableEntry::CHECKED_STABLE + : HCheckTableEntry::CHECKED; + HCheckMaps* check = instr->IsStabilityCheck() ? NULL : instr; + Insert(object, check, instr->maps(), state); + } + } + + void ReduceCheckInstanceType(HCheckInstanceType* instr) { + HValue* value = instr->value()->ActualValue(); + HCheckTableEntry* entry = Find(value); + if (entry == NULL) { + if (instr->check() == HCheckInstanceType::IS_STRING) { + Insert(value, NULL, string_maps(), HCheckTableEntry::CHECKED); + } + return; + } + UniqueSet<Map>* maps = new(zone()) UniqueSet<Map>( + entry->maps_->size(), zone()); + for (int i = 0; i < entry->maps_->size(); ++i) { + InstanceType type; + Unique<Map> map = entry->maps_->at(i); + { + // This is safe, because maps don't move and their instance type does + // not change. + AllowHandleDereference allow_deref; + type = map.handle()->instance_type(); + } + if (instr->is_interval_check()) { + InstanceType first_type, last_type; + instr->GetCheckInterval(&first_type, &last_type); + if (first_type <= type && type <= last_type) maps->Add(map, zone()); + } else { + uint8_t mask, tag; + instr->GetCheckMaskAndTag(&mask, &tag); + if ((type & mask) == tag) maps->Add(map, zone()); + } + } + if (maps->size() == entry->maps_->size()) { + TRACE(("Removing redundant CheckInstanceType #%d at B%d\n", + instr->id(), instr->block()->block_id())); + EnsureChecked(entry, value, instr); + instr->DeleteAndReplaceWith(value); + INC_STAT(removed_cit_); + } else if (maps->size() != 0) { + entry->maps_ = maps; + if (entry->state_ == HCheckTableEntry::UNCHECKED_STABLE) { + entry->state_ = HCheckTableEntry::CHECKED_STABLE; + } } } @@ -342,27 +495,30 @@ class HCheckTable : public ZoneObject { // Check if we introduce field maps here. MapSet maps = instr->maps(); if (maps != NULL) { - ASSERT_NE(0, maps->size()); - Insert(instr, NULL, maps); + DCHECK_NE(0, maps->size()); + Insert(instr, NULL, maps, HCheckTableEntry::UNCHECKED_STABLE); } return; } HValue* object = instr->object()->ActualValue(); - MapSet maps = FindMaps(object); - if (maps == NULL || maps->size() != 1) return; // Not a constant. + HCheckTableEntry* entry = Find(object); + if (entry == NULL || entry->maps_->size() != 1) return; // Not a constant. - Unique<Map> map = maps->at(0); + EnsureChecked(entry, object, instr); + Unique<Map> map = entry->maps_->at(0); + bool map_is_stable = (entry->state_ != HCheckTableEntry::CHECKED); HConstant* constant = HConstant::CreateAndInsertBefore( - instr->block()->graph()->zone(), map, true, instr); + instr->block()->graph()->zone(), map, map_is_stable, instr); instr->DeleteAndReplaceWith(constant); INC_STAT(loads_); } void ReduceCheckHeapObject(HCheckHeapObject* instr) { - if (FindMaps(instr->value()->ActualValue()) != NULL) { + HValue* value = instr->value()->ActualValue(); + if (Find(value) != NULL) { // If the object has known maps, it's definitely a heap object. 
- instr->DeleteAndReplaceWith(instr->value()); + instr->DeleteAndReplaceWith(value); INC_STAT(removed_cho_); } } @@ -372,12 +528,20 @@ class HCheckTable : public ZoneObject { if (instr->has_transition()) { // This store transitions the object to a new map. Kill(object); - Insert(object, NULL, HConstant::cast(instr->transition())->MapValue()); + HConstant* c_transition = HConstant::cast(instr->transition()); + HCheckTableEntry::State state = c_transition->HasStableMapValue() + ? HCheckTableEntry::CHECKED_STABLE + : HCheckTableEntry::CHECKED; + Insert(object, NULL, c_transition->MapValue(), state); } else if (instr->access().IsMap()) { // This is a store directly to the map field of the object. Kill(object); if (!instr->value()->IsConstant()) return; - Insert(object, NULL, HConstant::cast(instr->value())->MapValue()); + HConstant* c_value = HConstant::cast(instr->value()); + HCheckTableEntry::State state = c_value->HasStableMapValue() + ? HCheckTableEntry::CHECKED_STABLE + : HCheckTableEntry::CHECKED; + Insert(object, NULL, c_value->MapValue(), state); } else { // If the instruction changes maps, it should be handled above. CHECK(!instr->CheckChangesFlag(kMaps)); @@ -385,12 +549,14 @@ class HCheckTable : public ZoneObject { } void ReduceCompareMap(HCompareMap* instr) { - MapSet maps = FindMaps(instr->value()->ActualValue()); - if (maps == NULL) return; + HCheckTableEntry* entry = Find(instr->value()->ActualValue()); + if (entry == NULL) return; + + EnsureChecked(entry, instr->value(), instr); int succ; - if (maps->Contains(instr->map())) { - if (maps->size() != 1) { + if (entry->maps_->Contains(instr->map())) { + if (entry->maps_->size() != 1) { TRACE(("CompareMap #%d for #%d at B%d can't be eliminated: " "ambiguous set of maps\n", instr->id(), instr->value()->id(), instr->block()->block_id())); @@ -413,11 +579,18 @@ class HCheckTable : public ZoneObject { } void ReduceCompareObjectEqAndBranch(HCompareObjectEqAndBranch* instr) { - MapSet maps_left = FindMaps(instr->left()->ActualValue()); - if (maps_left == NULL) return; - MapSet maps_right = FindMaps(instr->right()->ActualValue()); - if (maps_right == NULL) return; - MapSet intersection = maps_left->Intersect(maps_right, zone()); + HValue* left = instr->left()->ActualValue(); + HCheckTableEntry* le = Find(left); + if (le == NULL) return; + HValue* right = instr->right()->ActualValue(); + HCheckTableEntry* re = Find(right); + if (re == NULL) return; + + EnsureChecked(le, left, instr); + EnsureChecked(re, right, instr); + + // TODO(bmeurer): Add a predicate here instead of computing the intersection + MapSet intersection = le->maps_->Intersect(re->maps_, zone()); if (intersection->size() > 0) return; TRACE(("Marking redundant CompareObjectEqAndBranch #%d at B%d as false\n", @@ -429,10 +602,34 @@ class HCheckTable : public ZoneObject { instr->block()->MarkSuccEdgeUnreachable(unreachable_succ); } + void ReduceIsStringAndBranch(HIsStringAndBranch* instr) { + HValue* value = instr->value()->ActualValue(); + HCheckTableEntry* entry = Find(value); + if (entry == NULL) return; + EnsureChecked(entry, value, instr); + int succ; + if (entry->maps_->IsSubset(string_maps())) { + TRACE(("Marking redundant IsStringAndBranch #%d at B%d as true\n", + instr->id(), instr->block()->block_id())); + succ = 0; + } else { + MapSet intersection = entry->maps_->Intersect(string_maps(), zone()); + if (intersection->size() > 0) return; + TRACE(("Marking redundant IsStringAndBranch #%d at B%d as false\n", + instr->id(), instr->block()->block_id())); + succ = 1; + } + 
instr->set_known_successor_index(succ); + int unreachable_succ = 1 - succ; + instr->block()->MarkSuccEdgeUnreachable(unreachable_succ); + } + void ReduceTransitionElementsKind(HTransitionElementsKind* instr) { - HCheckTableEntry* entry = Find(instr->object()->ActualValue()); + HValue* object = instr->object()->ActualValue(); + HCheckTableEntry* entry = Find(object); // Can only learn more about an object that already has a known set of maps. if (entry == NULL) return; + EnsureChecked(entry, object, instr); if (entry->maps_->Contains(instr->original_map())) { // If the object has the original map, it will be transitioned. UniqueSet<Map>* maps = entry->maps_->Copy(zone()); @@ -441,30 +638,60 @@ class HCheckTable : public ZoneObject { entry->maps_ = maps; } else { // Object does not have the given map, thus the transition is redundant. - instr->DeleteAndReplaceWith(instr->object()); + instr->DeleteAndReplaceWith(object); INC_STAT(transitions_); } } + void EnsureChecked(HCheckTableEntry* entry, + HValue* value, + HInstruction* instr) { + if (entry->state_ != HCheckTableEntry::UNCHECKED_STABLE) return; + HGraph* graph = instr->block()->graph(); + HCheckMaps* check = HCheckMaps::CreateAndInsertBefore( + graph->zone(), value, entry->maps_->Copy(graph->zone()), true, instr); + check->MarkAsStabilityCheck(); + entry->state_ = HCheckTableEntry::CHECKED_STABLE; + entry->check_ = NULL; + } + // Kill everything in the table. void Kill() { size_ = 0; cursor_ = 0; } + // Kill all unstable entries in the table. + void KillUnstableEntries() { + bool compact = false; + for (int i = 0; i < size_; ++i) { + HCheckTableEntry* entry = &entries_[i]; + DCHECK_NOT_NULL(entry->object_); + if (entry->state_ == HCheckTableEntry::CHECKED) { + entry->object_ = NULL; + compact = true; + } else { + // All checked stable entries become unchecked stable. + entry->state_ = HCheckTableEntry::UNCHECKED_STABLE; + entry->check_ = NULL; + } + } + if (compact) Compact(); + } + // Kill everything in the table that may alias {object}. void Kill(HValue* object) { bool compact = false; for (int i = 0; i < size_; i++) { HCheckTableEntry* entry = &entries_[i]; - ASSERT(entry->object_ != NULL); + DCHECK(entry->object_ != NULL); if (phase_->aliasing_->MayAlias(entry->object_, object)) { entry->object_ = NULL; compact = true; } } if (compact) Compact(); - ASSERT(Find(object) == NULL); + DCHECK(Find(object) == NULL); } void Compact() { @@ -479,8 +706,8 @@ class HCheckTable : public ZoneObject { size_--; } } - ASSERT(size_ == dest); - ASSERT(cursor_ <= size_); + DCHECK(size_ == dest); + DCHECK(cursor_ <= size_); // Preserve the age of the entries by moving the older entries to the end. if (cursor_ == size_) return; // Cursor already points at end. @@ -491,9 +718,9 @@ class HCheckTable : public ZoneObject { int L = cursor_; int R = size_ - cursor_; - OS::MemMove(&tmp_entries[0], &entries_[0], L * sizeof(HCheckTableEntry)); - OS::MemMove(&entries_[0], &entries_[L], R * sizeof(HCheckTableEntry)); - OS::MemMove(&entries_[R], &tmp_entries[0], L * sizeof(HCheckTableEntry)); + MemMove(&tmp_entries[0], &entries_[0], L * sizeof(HCheckTableEntry)); + MemMove(&entries_[0], &entries_[L], R * sizeof(HCheckTableEntry)); + MemMove(&entries_[R], &tmp_entries[0], L * sizeof(HCheckTableEntry)); } cursor_ = size_; // Move cursor to end. 
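The three MemMove calls above are the classic temp-buffer block rotation: blocks [0, L) and [L, L+R) swap places while each keeps its internal order, which is how entry age is preserved during compaction. A minimal sketch over a plain int array (element type and sizes are illustrative):

#include <cstdio>
#include <cstring>

void RotateLeft(int* a, int L, int R, int* tmp /* capacity >= L */) {
  std::memcpy(tmp, a, L * sizeof(int));     // stash the first block
  std::memmove(a, a + L, R * sizeof(int));  // slide the second block down
  std::memcpy(a + R, tmp, L * sizeof(int)); // append the stashed block
}

int main() {
  int a[5] = {1, 2, 3, 4, 5};
  int tmp[2];
  RotateLeft(a, 2, 3, tmp);                    // a becomes {3, 4, 5, 1, 2}
  for (int x : a) std::printf("%d ", x);
  std::printf("\n");
  return 0;
}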
@@ -507,14 +734,15 @@ class HCheckTable : public ZoneObject { for (int i = 0; i < table->size_; i++) { HCheckTableEntry* entry = &table->entries_[i]; - ASSERT(entry->object_ != NULL); + DCHECK(entry->object_ != NULL); PrintF(" checkmaps-table @%d: %s #%d ", i, entry->object_->IsPhi() ? "phi" : "object", entry->object_->id()); if (entry->check_ != NULL) { PrintF("check #%d ", entry->check_->id()); } MapSet list = entry->maps_; - PrintF("%d maps { ", list->size()); + PrintF("%d %s maps { ", list->size(), + HCheckTableEntry::State2String(entry->state_)); for (int j = 0; j < list->size(); j++) { if (j > 0) PrintF(", "); PrintF("%" V8PRIxPTR, list->at(j).Hashcode()); @@ -527,32 +755,36 @@ class HCheckTable : public ZoneObject { for (int i = size_ - 1; i >= 0; i--) { // Search from most-recently-inserted to least-recently-inserted. HCheckTableEntry* entry = &entries_[i]; - ASSERT(entry->object_ != NULL); + DCHECK(entry->object_ != NULL); if (phase_->aliasing_->MustAlias(entry->object_, object)) return entry; } return NULL; } - MapSet FindMaps(HValue* object) { - HCheckTableEntry* entry = Find(object); - return entry == NULL ? NULL : entry->maps_; + void Insert(HValue* object, + HInstruction* check, + Unique<Map> map, + HCheckTableEntry::State state) { + Insert(object, check, new(zone()) UniqueSet<Map>(map, zone()), state); } - void Insert(HValue* object, HInstruction* check, Unique<Map> map) { - Insert(object, check, new(zone()) UniqueSet<Map>(map, zone())); - } - - void Insert(HValue* object, HInstruction* check, MapSet maps) { + void Insert(HValue* object, + HInstruction* check, + MapSet maps, + HCheckTableEntry::State state) { + DCHECK(state != HCheckTableEntry::UNCHECKED_STABLE || check == NULL); HCheckTableEntry* entry = &entries_[cursor_++]; entry->object_ = object; entry->check_ = check; entry->maps_ = maps; + entry->state_ = state; // If the table becomes full, wrap around and overwrite older entries. if (cursor_ == kMaxTrackedObjects) cursor_ = 0; if (size_ < kMaxTrackedObjects) size_++; } Zone* zone() const { return phase_->zone(); } + MapSet string_maps() const { return phase_->string_maps(); } friend class HCheckMapsEffects; friend class HCheckEliminationPhase; @@ -561,7 +793,7 @@ class HCheckTable : public ZoneObject { HCheckTableEntry entries_[kMaxTrackedObjects]; int16_t cursor_; // Must be <= kMaxTrackedObjects int16_t size_; // Must be <= kMaxTrackedObjects - // TODO(titzer): STATIC_ASSERT kMaxTrackedObjects < max(cursor_) + STATIC_ASSERT(kMaxTrackedObjects < (1 << 15)); }; @@ -569,8 +801,7 @@ class HCheckTable : public ZoneObject { // needed for check elimination. class HCheckMapsEffects : public ZoneObject { public: - explicit HCheckMapsEffects(Zone* zone) - : objects_(0, zone), maps_stored_(false) {} + explicit HCheckMapsEffects(Zone* zone) : objects_(0, zone) { } // Effects are _not_ disabled. inline bool Disabled() const { return false; } @@ -590,21 +821,25 @@ class HCheckMapsEffects : public ZoneObject { break; } default: { - maps_stored_ |= (instr->CheckChangesFlag(kMaps) | - instr->CheckChangesFlag(kOsrEntries) | - instr->CheckChangesFlag(kElementsKind)); + flags_.Add(instr->ChangesFlags()); + break; } } } // Apply these effects to the given check elimination table. void Apply(HCheckTable* table) { - if (maps_stored_) { + if (flags_.Contains(kOsrEntries)) { // Uncontrollable map modifications; kill everything. table->Kill(); return; } + // Kill all unstable entries. 
+ if (flags_.Contains(kElementsKind) || flags_.Contains(kMaps)) { + table->KillUnstableEntries(); + } + // Kill maps for each object contained in these effects. for (int i = 0; i < objects_.length(); ++i) { table->Kill(objects_[i]->ActualValue()); @@ -613,7 +848,7 @@ class HCheckMapsEffects : public ZoneObject { // Union these effects with the other effects. void Union(HCheckMapsEffects* that, Zone* zone) { - maps_stored_ |= that->maps_stored_; + flags_.Add(that->flags_); for (int i = 0; i < that->objects_.length(); ++i) { objects_.Add(that->objects_[i], zone); } @@ -621,7 +856,7 @@ class HCheckMapsEffects : public ZoneObject { private: ZoneList<HValue*> objects_; - bool maps_stored_ : 1; + GVNFlagSet flags_; }; @@ -656,6 +891,7 @@ void HCheckEliminationPhase::PrintStats() { PRINT_STAT(redundant); PRINT_STAT(removed); PRINT_STAT(removed_cho); + PRINT_STAT(removed_cit); PRINT_STAT(narrowed); PRINT_STAT(loads); PRINT_STAT(empty); diff --git a/deps/v8/src/hydrogen-check-elimination.h b/deps/v8/src/hydrogen-check-elimination.h index b38447b1e7b..7102a439f3b 100644 --- a/deps/v8/src/hydrogen-check-elimination.h +++ b/deps/v8/src/hydrogen-check-elimination.h @@ -5,8 +5,8 @@ #ifndef V8_HYDROGEN_CHECK_ELIMINATION_H_ #define V8_HYDROGEN_CHECK_ELIMINATION_H_ -#include "hydrogen.h" -#include "hydrogen-alias-analysis.h" +#include "src/hydrogen.h" +#include "src/hydrogen-alias-analysis.h" namespace v8 { namespace internal { @@ -16,11 +16,20 @@ namespace internal { class HCheckEliminationPhase : public HPhase { public: explicit HCheckEliminationPhase(HGraph* graph) - : HPhase("H_Check Elimination", graph), aliasing_() { + : HPhase("H_Check Elimination", graph), aliasing_(), + string_maps_(kStringMapsSize, zone()) { + // Compute the set of string maps. + #define ADD_STRING_MAP(type, size, name, Name) \ + string_maps_.Add(Unique<Map>::CreateImmovable( \ + graph->isolate()->factory()->name##_map()), zone()); + STRING_TYPE_LIST(ADD_STRING_MAP) + #undef ADD_STRING_MAP + DCHECK_EQ(kStringMapsSize, string_maps_.size()); #ifdef DEBUG redundant_ = 0; removed_ = 0; removed_cho_ = 0; + removed_cit_ = 0; narrowed_ = 0; loads_ = 0; empty_ = 0; @@ -35,13 +44,20 @@ class HCheckEliminationPhase : public HPhase { friend class HCheckTable; private: + const UniqueSet<Map>* string_maps() const { return &string_maps_; } + void PrintStats(); HAliasAnalyzer* aliasing_; + #define COUNT(type, size, name, Name) + 1 + static const int kStringMapsSize = 0 STRING_TYPE_LIST(COUNT); + #undef COUNT + UniqueSet<Map> string_maps_; #ifdef DEBUG int redundant_; int removed_; int removed_cho_; + int removed_cit_; int narrowed_; int loads_; int empty_; diff --git a/deps/v8/src/hydrogen-dce.cc b/deps/v8/src/hydrogen-dce.cc index be551efa8b6..c55426d0fc3 100644 --- a/deps/v8/src/hydrogen-dce.cc +++ b/deps/v8/src/hydrogen-dce.cc @@ -2,8 +2,8 @@ // Use of this source code is governed by a BSD-style license that can be // found in the LICENSE file. 
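// [Editor's note] The kStringMapsSize computation in the
// hydrogen-check-elimination.h hunk above uses a common X-macro counting
// trick: expanding each STRING_TYPE_LIST entry to the token sequence "+ 1"
// turns the list into a compile-time sum. A self-contained sketch with a
// hypothetical three-entry list:
#define COLOR_LIST(V) V(red) V(green) V(blue)

#define COUNT(name) + 1
static const int kNumColors = 0 COLOR_LIST(COUNT);  // expands to 0 + 1 + 1 + 1
#undef COUNT

// static_assert is plain C++11 here; the diff itself uses V8's
// STATIC_ASSERT/DCHECK_EQ for the same verification.
static_assert(kNumColors == 3, "three entries in COLOR_LIST");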
-#include "hydrogen-dce.h" -#include "v8.h" +#include "src/hydrogen-dce.h" +#include "src/v8.h" namespace v8 { namespace internal { @@ -32,16 +32,14 @@ void HDeadCodeEliminationPhase::MarkLive( void HDeadCodeEliminationPhase::PrintLive(HValue* ref, HValue* instr) { - HeapStringAllocator allocator; - StringStream stream(&allocator); + OFStream os(stdout); + os << "[MarkLive "; if (ref != NULL) { - ref->PrintTo(&stream); + os << *ref; } else { - stream.Add("root "); + os << "root "; } - stream.Add(" -> "); - instr->PrintTo(&stream); - PrintF("[MarkLive %s]\n", stream.ToCString().get()); + os << " -> " << *instr << "]" << endl; } @@ -61,7 +59,7 @@ void HDeadCodeEliminationPhase::MarkLiveInstructions() { } } - ASSERT(worklist.is_empty()); // Should have processed everything. + DCHECK(worklist.is_empty()); // Should have processed everything. } diff --git a/deps/v8/src/hydrogen-dce.h b/deps/v8/src/hydrogen-dce.h index 18cd755c953..af3679d9d39 100644 --- a/deps/v8/src/hydrogen-dce.h +++ b/deps/v8/src/hydrogen-dce.h @@ -5,7 +5,7 @@ #ifndef V8_HYDROGEN_DCE_H_ #define V8_HYDROGEN_DCE_H_ -#include "hydrogen.h" +#include "src/hydrogen.h" namespace v8 { namespace internal { diff --git a/deps/v8/src/hydrogen-dehoist.cc b/deps/v8/src/hydrogen-dehoist.cc index 44aeb4887d0..0c7a9b964fc 100644 --- a/deps/v8/src/hydrogen-dehoist.cc +++ b/deps/v8/src/hydrogen-dehoist.cc @@ -2,7 +2,8 @@ // Use of this source code is governed by a BSD-style license that can be // found in the LICENSE file. -#include "hydrogen-dehoist.h" +#include "src/hydrogen-dehoist.h" +#include "src/base/safe_math.h" namespace v8 { namespace internal { @@ -28,14 +29,25 @@ static void DehoistArrayIndex(ArrayInstructionInterface* array_operation) { if (!constant->HasInteger32Value()) return; int32_t sign = binary_operation->IsSub() ? -1 : 1; int32_t value = constant->Integer32Value() * sign; - // We limit offset values to 30 bits because we want to avoid the risk of - // overflows when the offset is added to the object header size. - if (value >= 1 << array_operation->MaxIndexOffsetBits() || value < 0) return; + if (value < 0) return; + + // Multiply value by elements size, bailing out on overflow. + int32_t elements_kind_size = + 1 << ElementsKindToShiftSize(array_operation->elements_kind()); + v8::base::internal::CheckedNumeric<int32_t> multiply_result = value; + multiply_result = multiply_result * elements_kind_size; + if (!multiply_result.IsValid()) return; + value = multiply_result.ValueOrDie(); + + // Ensure that the array operation can add value to existing base offset + // without overflowing. + if (!array_operation->TryIncreaseBaseOffset(value)) return; + array_operation->SetKey(subexpression); if (binary_operation->HasNoUses()) { binary_operation->DeleteAndReplaceWith(NULL); } - array_operation->SetIndexOffset(static_cast<uint32_t>(value)); + array_operation->SetDehoisted(true); } diff --git a/deps/v8/src/hydrogen-dehoist.h b/deps/v8/src/hydrogen-dehoist.h index 930e29bdc54..4aab30fafa1 100644 --- a/deps/v8/src/hydrogen-dehoist.h +++ b/deps/v8/src/hydrogen-dehoist.h @@ -5,7 +5,7 @@ #ifndef V8_HYDROGEN_DEHOIST_H_ #define V8_HYDROGEN_DEHOIST_H_ -#include "hydrogen.h" +#include "src/hydrogen.h" namespace v8 { namespace internal { diff --git a/deps/v8/src/hydrogen-environment-liveness.cc b/deps/v8/src/hydrogen-environment-liveness.cc index 2dd1233defe..8e9018fb4e2 100644 --- a/deps/v8/src/hydrogen-environment-liveness.cc +++ b/deps/v8/src/hydrogen-environment-liveness.cc @@ -3,7 +3,7 @@ // found in the LICENSE file. 
-#include "hydrogen-environment-liveness.h" +#include "src/hydrogen-environment-liveness.h" namespace v8 { @@ -22,7 +22,7 @@ HEnvironmentLivenessAnalysisPhase::HEnvironmentLivenessAnalysisPhase( collect_markers_(true), last_simulate_(NULL), went_live_since_last_simulate_(maximum_environment_size_, zone()) { - ASSERT(maximum_environment_size_ > 0); + DCHECK(maximum_environment_size_ > 0); for (int i = 0; i < block_count_; ++i) { live_at_block_start_.Add( new(zone()) BitVector(maximum_environment_size_, zone()), zone()); @@ -61,7 +61,7 @@ void HEnvironmentLivenessAnalysisPhase::ZapEnvironmentSlotsInSuccessors( } HSimulate* simulate = first_simulate_.at(successor_id); if (simulate == NULL) continue; - ASSERT(VerifyClosures(simulate->closure(), + DCHECK(VerifyClosures(simulate->closure(), block->last_environment()->closure())); ZapEnvironmentSlot(i, simulate); } @@ -74,7 +74,7 @@ void HEnvironmentLivenessAnalysisPhase::ZapEnvironmentSlotsForInstruction( if (!marker->CheckFlag(HValue::kEndsLiveRange)) return; HSimulate* simulate = marker->next_simulate(); if (simulate != NULL) { - ASSERT(VerifyClosures(simulate->closure(), marker->closure())); + DCHECK(VerifyClosures(simulate->closure(), marker->closure())); ZapEnvironmentSlot(marker->index(), simulate); } } @@ -109,7 +109,7 @@ void HEnvironmentLivenessAnalysisPhase::UpdateLivenessAtInstruction( if (marker->kind() == HEnvironmentMarker::LOOKUP) { live->Add(index); } else { - ASSERT(marker->kind() == HEnvironmentMarker::BIND); + DCHECK(marker->kind() == HEnvironmentMarker::BIND); live->Remove(index); went_live_since_last_simulate_.Add(index); } @@ -124,10 +124,10 @@ void HEnvironmentLivenessAnalysisPhase::UpdateLivenessAtInstruction( live->Clear(); last_simulate_ = NULL; - // The following ASSERTs guard the assumption used in case + // The following DCHECKs guard the assumption used in case // kEnterInlined below: - ASSERT(instr->next()->IsSimulate()); - ASSERT(instr->next()->next()->IsGoto()); + DCHECK(instr->next()->IsSimulate()); + DCHECK(instr->next()->next()->IsGoto()); break; case HValue::kEnterInlined: { @@ -135,7 +135,7 @@ void HEnvironmentLivenessAnalysisPhase::UpdateLivenessAtInstruction( // target block. Here we make use of the fact that the end of an // inline sequence always looks like this: HLeaveInlined, HSimulate, // HGoto (to return_target block), with no environment lookups in - // between (see ASSERTs above). + // between (see DCHECKs above). HEnterInlined* enter = HEnterInlined::cast(instr); live->Clear(); for (int i = 0; i < enter->return_targets()->length(); ++i) { @@ -156,7 +156,7 @@ void HEnvironmentLivenessAnalysisPhase::UpdateLivenessAtInstruction( void HEnvironmentLivenessAnalysisPhase::Run() { - ASSERT(maximum_environment_size_ > 0); + DCHECK(maximum_environment_size_ > 0); // Main iteration. Compute liveness of environment slots, and store it // for each block until it doesn't change any more. 
For efficiency, visit diff --git a/deps/v8/src/hydrogen-environment-liveness.h b/deps/v8/src/hydrogen-environment-liveness.h index c72cd0f3379..e595927f9d4 100644 --- a/deps/v8/src/hydrogen-environment-liveness.h +++ b/deps/v8/src/hydrogen-environment-liveness.h @@ -6,7 +6,7 @@ #define V8_HYDROGEN_ENVIRONMENT_LIVENESS_H_ -#include "hydrogen.h" +#include "src/hydrogen.h" namespace v8 { namespace internal { diff --git a/deps/v8/src/hydrogen-escape-analysis.cc b/deps/v8/src/hydrogen-escape-analysis.cc index 8514d889f05..3b0f15870fa 100644 --- a/deps/v8/src/hydrogen-escape-analysis.cc +++ b/deps/v8/src/hydrogen-escape-analysis.cc @@ -2,7 +2,7 @@ // Use of this source code is governed by a BSD-style license that can be // found in the LICENSE file. -#include "hydrogen-escape-analysis.h" +#include "src/hydrogen-escape-analysis.h" namespace v8 { namespace internal { @@ -144,7 +144,7 @@ HValue* HEscapeAnalysisPhase::NewLoadReplacement( HLoadNamedField* load, HValue* load_value) { HValue* replacement = load_value; Representation representation = load->representation(); - if (representation.IsSmi()) { + if (representation.IsSmiOrInteger32() || representation.IsDouble()) { Zone* zone = graph()->zone(); HInstruction* new_instr = HForceRepresentation::New(zone, NULL, load_value, representation); @@ -189,7 +189,7 @@ void HEscapeAnalysisPhase::AnalyzeDataFlow(HInstruction* allocate) { HLoadNamedField* load = HLoadNamedField::cast(instr); int index = load->access().offset() / kPointerSize; if (load->object() != allocate) continue; - ASSERT(load->access().IsInobject()); + DCHECK(load->access().IsInobject()); HValue* replacement = NewLoadReplacement(load, state->OperandAt(index)); load->DeleteAndReplaceWith(replacement); @@ -203,7 +203,7 @@ void HEscapeAnalysisPhase::AnalyzeDataFlow(HInstruction* allocate) { HStoreNamedField* store = HStoreNamedField::cast(instr); int index = store->access().offset() / kPointerSize; if (store->object() != allocate) continue; - ASSERT(store->access().IsInobject()); + DCHECK(store->access().IsInobject()); state = NewStateCopy(store->previous(), state); state->SetOperandAt(index, store->value()); if (store->has_transition()) { @@ -286,7 +286,7 @@ void HEscapeAnalysisPhase::AnalyzeDataFlow(HInstruction* allocate) { } // All uses have been handled. - ASSERT(allocate->HasNoUses()); + DCHECK(allocate->HasNoUses()); allocate->DeleteAndReplaceWith(NULL); } @@ -299,14 +299,14 @@ void HEscapeAnalysisPhase::PerformScalarReplacement() { int size_in_bytes = allocate->size()->GetInteger32Constant(); number_of_values_ = size_in_bytes / kPointerSize; number_of_objects_++; - block_states_.Clear(); + block_states_.Rewind(0); // Perform actual analysis step. 
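// [Editor's note] The Clear() -> Rewind(0) changes in the escape-analysis
// hunks here look cosmetic but are a capacity-preserving tweak: in V8's
// List, Clear() releases the backing store while Rewind(0) only resets the
// length, so a list refilled on every analysis iteration keeps its
// allocation. A sketch of the distinction on a plain std::vector:
#include <vector>

void ResetKeepingCapacity(std::vector<int>* list) {
  list->clear();  // like Rewind(0): size -> 0, storage retained
}

void ResetReleasingStorage(std::vector<int>* list) {
  std::vector<int>().swap(*list);  // like V8's List::Clear(): storage freed
}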
AnalyzeDataFlow(allocate); cumulative_values_ += number_of_values_; - ASSERT(allocate->HasNoUses()); - ASSERT(!allocate->IsLinked()); + DCHECK(allocate->HasNoUses()); + DCHECK(!allocate->IsLinked()); } } @@ -320,7 +320,7 @@ void HEscapeAnalysisPhase::Run() { CollectCapturedValues(); if (captured_.is_empty()) break; PerformScalarReplacement(); - captured_.Clear(); + captured_.Rewind(0); } } diff --git a/deps/v8/src/hydrogen-escape-analysis.h b/deps/v8/src/hydrogen-escape-analysis.h index 5ff0b32d3e3..0726b8edbe4 100644 --- a/deps/v8/src/hydrogen-escape-analysis.h +++ b/deps/v8/src/hydrogen-escape-analysis.h @@ -5,8 +5,8 @@ #ifndef V8_HYDROGEN_ESCAPE_ANALYSIS_H_ #define V8_HYDROGEN_ESCAPE_ANALYSIS_H_ -#include "allocation.h" -#include "hydrogen.h" +#include "src/allocation.h" +#include "src/hydrogen.h" namespace v8 { namespace internal { diff --git a/deps/v8/src/hydrogen-flow-engine.h b/deps/v8/src/hydrogen-flow-engine.h index 04902b297aa..257ab466a1e 100644 --- a/deps/v8/src/hydrogen-flow-engine.h +++ b/deps/v8/src/hydrogen-flow-engine.h @@ -5,9 +5,9 @@ #ifndef V8_HYDROGEN_FLOW_ENGINE_H_ #define V8_HYDROGEN_FLOW_ENGINE_H_ -#include "hydrogen.h" -#include "hydrogen-instructions.h" -#include "zone.h" +#include "src/hydrogen.h" +#include "src/hydrogen-instructions.h" +#include "src/zone.h" namespace v8 { namespace internal { @@ -102,7 +102,7 @@ class HFlowEngine { State* state = State::Finish(StateAt(block), block, zone_); if (block->IsReachable()) { - ASSERT(state != NULL); + DCHECK(state != NULL); if (block->IsLoopHeader()) { // Apply loop effects before analyzing loop body. ComputeLoopEffects(block)->Apply(state); @@ -139,7 +139,7 @@ class HFlowEngine { // Computes and caches the loop effects for the loop which has the given // block as its loop header. Effects* ComputeLoopEffects(HBasicBlock* block) { - ASSERT(block->IsLoopHeader()); + DCHECK(block->IsLoopHeader()); Effects* effects = loop_effects_[block->block_id()]; if (effects != NULL) return effects; // Already analyzed this loop. @@ -154,7 +154,7 @@ class HFlowEngine { HBasicBlock* member = graph_->blocks()->at(i); if (i != block->block_id() && member->IsLoopHeader()) { // Recursively compute and cache the effects of the nested loop. - ASSERT(member->loop_information()->parent_loop() == loop); + DCHECK(member->loop_information()->parent_loop() == loop); Effects* nested = ComputeLoopEffects(member); effects->Union(nested, zone_); // Skip the nested loop's blocks. @@ -162,7 +162,7 @@ class HFlowEngine { } else { // Process all the effects of the block. if (member->IsUnreachable()) continue; - ASSERT(member->current_loop() == loop); + DCHECK(member->current_loop() == loop); for (HInstructionIterator it(member); !it.Done(); it.Advance()) { effects->Process(it.Current(), zone_); } @@ -195,7 +195,7 @@ class HFlowEngine { } inline void CheckPredecessorCount(HBasicBlock* block) { - ASSERT(block->predecessors()->length() == pred_counts_[block->block_id()]); + DCHECK(block->predecessors()->length() == pred_counts_[block->block_id()]); } inline void IncrementPredecessorCount(HBasicBlock* block) { diff --git a/deps/v8/src/hydrogen-gvn.cc b/deps/v8/src/hydrogen-gvn.cc index f9d1b408a7f..794f51855e8 100644 --- a/deps/v8/src/hydrogen-gvn.cc +++ b/deps/v8/src/hydrogen-gvn.cc @@ -2,9 +2,9 @@ // Use of this source code is governed by a BSD-style license that can be // found in the LICENSE file. 
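// [Editor's note] ComputeLoopEffects in the hydrogen-flow-engine.h hunk
// above memoizes per loop header and recurses into nested loops exactly
// once, skipping their member blocks afterwards. A compact sketch of that
// shape with hypothetical types (effects reduced to a bitmask, and the
// members of nested loops omitted for brevity):
#include <map>
#include <vector>

struct Loop { int header_id; std::vector<int> nested_headers; };
using Effects = unsigned;

std::map<int, Effects> g_cache;  // stands in for loop_effects_[block_id]

Effects ComputeLoopEffects(const Loop& loop,
                           Effects (*block_effects)(int block_id)) {
  std::map<int, Effects>::iterator it = g_cache.find(loop.header_id);
  if (it != g_cache.end()) return it->second;  // already analyzed this loop

  Effects effects = block_effects(loop.header_id);
  for (int nested : loop.nested_headers) {
    // Recursively compute and cache the nested loop, then union it in.
    effects |= ComputeLoopEffects(Loop{nested, {}}, block_effects);
  }
  return g_cache[loop.header_id] = effects;
}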
-#include "hydrogen.h" -#include "hydrogen-gvn.h" -#include "v8.h" +#include "src/hydrogen.h" +#include "src/hydrogen-gvn.h" +#include "src/v8.h" namespace v8 { namespace internal { @@ -83,8 +83,8 @@ class HSideEffectMap V8_FINAL BASE_EMBEDDED { bool IsEmpty() const { return count_ == 0; } inline HInstruction* operator[](int i) const { - ASSERT(0 <= i); - ASSERT(i < kNumberOfTrackedSideEffects); + DCHECK(0 <= i); + DCHECK(i < kNumberOfTrackedSideEffects); return data_[i]; } inline HInstruction* at(int i) const { return operator[](i); } @@ -98,7 +98,7 @@ class HSideEffectMap V8_FINAL BASE_EMBEDDED { void TraceGVN(const char* msg, ...) { va_list arguments; va_start(arguments, msg); - OS::VPrint(msg, arguments); + base::OS::VPrint(msg, arguments); va_end(arguments); } @@ -140,10 +140,10 @@ HInstructionMap::HInstructionMap(Zone* zone, const HInstructionMap* other) lists_(zone->NewArray<HInstructionMapListElement>(other->lists_size_)), free_list_head_(other->free_list_head_), side_effects_tracker_(other->side_effects_tracker_) { - OS::MemCopy( - array_, other->array_, array_size_ * sizeof(HInstructionMapListElement)); - OS::MemCopy( - lists_, other->lists_, lists_size_ * sizeof(HInstructionMapListElement)); + MemCopy(array_, other->array_, + array_size_ * sizeof(HInstructionMapListElement)); + MemCopy(lists_, other->lists_, + lists_size_ * sizeof(HInstructionMapListElement)); } @@ -212,7 +212,7 @@ HInstruction* HInstructionMap::Lookup(HInstruction* instr) const { void HInstructionMap::Resize(int new_size, Zone* zone) { - ASSERT(new_size > count_); + DCHECK(new_size > count_); // Hashing the values into the new array has no more collisions than in the // old hash map, so we can use the existing lists_ array, if we are careful. @@ -252,12 +252,12 @@ void HInstructionMap::Resize(int new_size, Zone* zone) { } } USE(old_count); - ASSERT(count_ == old_count); + DCHECK(count_ == old_count); } void HInstructionMap::ResizeLists(int new_size, Zone* zone) { - ASSERT(new_size > lists_size_); + DCHECK(new_size > lists_size_); HInstructionMapListElement* new_lists = zone->NewArray<HInstructionMapListElement>(new_size); @@ -270,8 +270,7 @@ void HInstructionMap::ResizeLists(int new_size, Zone* zone) { lists_ = new_lists; if (old_lists != NULL) { - OS::MemCopy( - lists_, old_lists, old_size * sizeof(HInstructionMapListElement)); + MemCopy(lists_, old_lists, old_size * sizeof(HInstructionMapListElement)); } for (int i = old_size; i < lists_size_; ++i) { lists_[i].next = free_list_head_; @@ -281,10 +280,10 @@ void HInstructionMap::ResizeLists(int new_size, Zone* zone) { void HInstructionMap::Insert(HInstruction* instr, Zone* zone) { - ASSERT(instr != NULL); + DCHECK(instr != NULL); // Resizing when half of the hashtable is filled up. 
if (count_ >= array_size_ >> 1) Resize(array_size_ << 1, zone); - ASSERT(count_ < array_size_); + DCHECK(count_ < array_size_); count_++; uint32_t pos = Bound(static_cast<uint32_t>(instr->Hashcode())); if (array_[pos].instr == NULL) { @@ -295,11 +294,11 @@ void HInstructionMap::Insert(HInstruction* instr, Zone* zone) { ResizeLists(lists_size_ << 1, zone); } int new_element_pos = free_list_head_; - ASSERT(new_element_pos != kNil); + DCHECK(new_element_pos != kNil); free_list_head_ = lists_[free_list_head_].next; lists_[new_element_pos].instr = instr; lists_[new_element_pos].next = array_[pos].next; - ASSERT(array_[pos].next == kNil || lists_[array_[pos].next].instr != NULL); + DCHECK(array_[pos].next == kNil || lists_[array_[pos].next].instr != NULL); array_[pos].next = new_element_pos; } } @@ -315,9 +314,9 @@ HSideEffectMap::HSideEffectMap(HSideEffectMap* other) : count_(other->count_) { } -HSideEffectMap& HSideEffectMap::operator= (const HSideEffectMap& other) { +HSideEffectMap& HSideEffectMap::operator=(const HSideEffectMap& other) { if (this != &other) { - OS::MemCopy(data_, other.data_, kNumberOfTrackedSideEffects * kPointerSize); + MemCopy(data_, other.data_, kNumberOfTrackedSideEffects * kPointerSize); } return *this; } @@ -401,20 +400,20 @@ SideEffects SideEffectsTracker::ComputeDependsOn(HInstruction* instr) { } -void SideEffectsTracker::PrintSideEffectsTo(StringStream* stream, - SideEffects side_effects) const { +OStream& operator<<(OStream& os, const TrackedEffects& te) { + SideEffectsTracker* t = te.tracker; const char* separator = ""; - stream->Add("["); + os << "["; for (int bit = 0; bit < kNumberOfFlags; ++bit) { GVNFlag flag = GVNFlagFromInt(bit); - if (side_effects.ContainsFlag(flag)) { - stream->Add(separator); + if (te.effects.ContainsFlag(flag)) { + os << separator; separator = ", "; switch (flag) { -#define DECLARE_FLAG(Type) \ - case k##Type: \ - stream->Add(#Type); \ - break; +#define DECLARE_FLAG(Type) \ + case k##Type: \ + os << #Type; \ + break; GVN_TRACKED_FLAG_LIST(DECLARE_FLAG) GVN_UNTRACKED_FLAG_LIST(DECLARE_FLAG) #undef DECLARE_FLAG @@ -423,21 +422,20 @@ GVN_UNTRACKED_FLAG_LIST(DECLARE_FLAG) } } } - for (int index = 0; index < num_global_vars_; ++index) { - if (side_effects.ContainsSpecial(GlobalVar(index))) { - stream->Add(separator); + for (int index = 0; index < t->num_global_vars_; ++index) { + if (te.effects.ContainsSpecial(t->GlobalVar(index))) { + os << separator << "[" << *t->global_vars_[index].handle() << "]"; separator = ", "; - stream->Add("[%p]", *global_vars_[index].handle()); } } - for (int index = 0; index < num_inobject_fields_; ++index) { - if (side_effects.ContainsSpecial(InobjectField(index))) { - stream->Add(separator); + for (int index = 0; index < t->num_inobject_fields_; ++index) { + if (te.effects.ContainsSpecial(t->InobjectField(index))) { + os << separator << t->inobject_fields_[index]; separator = ", "; - inobject_fields_[index].PrintTo(stream); } } - stream->Add("]"); + os << "]"; + return os; } @@ -450,11 +448,9 @@ bool SideEffectsTracker::ComputeGlobalVar(Unique<Cell> cell, int* index) { } if (num_global_vars_ < kNumberOfGlobalVars) { if (FLAG_trace_gvn) { - HeapStringAllocator allocator; - StringStream stream(&allocator); - stream.Add("Tracking global var [%p] (mapped to index %d)\n", - *cell.handle(), num_global_vars_); - stream.OutputToStdOut(); + OFStream os(stdout); + os << "Tracking global var [" << *cell.handle() << "] " + << "(mapped to index " << num_global_vars_ << ")" << endl; } *index = num_global_vars_; 
global_vars_[num_global_vars_++] = cell; @@ -474,12 +470,9 @@ bool SideEffectsTracker::ComputeInobjectField(HObjectAccess access, } if (num_inobject_fields_ < kNumberOfInobjectFields) { if (FLAG_trace_gvn) { - HeapStringAllocator allocator; - StringStream stream(&allocator); - stream.Add("Tracking inobject field access "); - access.PrintTo(&stream); - stream.Add(" (mapped to index %d)\n", num_inobject_fields_); - stream.OutputToStdOut(); + OFStream os(stdout); + os << "Tracking inobject field access " << access << " (mapped to index " + << num_inobject_fields_ << ")" << endl; } *index = num_inobject_fields_; inobject_fields_[num_inobject_fields_++] = access; @@ -495,7 +488,7 @@ HGlobalValueNumberingPhase::HGlobalValueNumberingPhase(HGraph* graph) block_side_effects_(graph->blocks()->length(), zone()), loop_side_effects_(graph->blocks()->length(), zone()), visited_on_paths_(graph->blocks()->length(), zone()) { - ASSERT(!AllowHandleAllocation::IsAllowed()); + DCHECK(!AllowHandleAllocation::IsAllowed()); block_side_effects_.AddBlock( SideEffects(), graph->blocks()->length(), zone()); loop_side_effects_.AddBlock( @@ -504,7 +497,7 @@ HGlobalValueNumberingPhase::HGlobalValueNumberingPhase(HGraph* graph) void HGlobalValueNumberingPhase::Run() { - ASSERT(!removed_side_effects_); + DCHECK(!removed_side_effects_); for (int i = FLAG_gvn_iterations; i > 0; --i) { // Compute the side effects. ComputeBlockSideEffects(); @@ -520,8 +513,8 @@ void HGlobalValueNumberingPhase::Run() { removed_side_effects_ = false; // Clear all side effects. - ASSERT_EQ(block_side_effects_.length(), graph()->blocks()->length()); - ASSERT_EQ(loop_side_effects_.length(), graph()->blocks()->length()); + DCHECK_EQ(block_side_effects_.length(), graph()->blocks()->length()); + DCHECK_EQ(loop_side_effects_.length(), graph()->blocks()->length()); for (int i = 0; i < graph()->blocks()->length(); ++i) { block_side_effects_[i].RemoveAll(); loop_side_effects_[i].RemoveAll(); @@ -572,13 +565,9 @@ void HGlobalValueNumberingPhase::LoopInvariantCodeMotion() { if (block->IsLoopHeader()) { SideEffects side_effects = loop_side_effects_[block->block_id()]; if (FLAG_trace_gvn) { - HeapStringAllocator allocator; - StringStream stream(&allocator); - stream.Add("Try loop invariant motion for block B%d changes ", - block->block_id()); - side_effects_tracker_.PrintSideEffectsTo(&stream, side_effects); - stream.Add("\n"); - stream.OutputToStdOut(); + OFStream os(stdout); + os << "Try loop invariant motion for " << *block << " changes " + << Print(side_effects) << endl; } HBasicBlock* last = block->loop_information()->GetLastBackEdge(); for (int j = block->block_id(); j <= last->block_id(); ++j) { @@ -595,13 +584,9 @@ void HGlobalValueNumberingPhase::ProcessLoopBlock( SideEffects loop_kills) { HBasicBlock* pre_header = loop_header->predecessors()->at(0); if (FLAG_trace_gvn) { - HeapStringAllocator allocator; - StringStream stream(&allocator); - stream.Add("Loop invariant code motion for B%d depends on ", - block->block_id()); - side_effects_tracker_.PrintSideEffectsTo(&stream, loop_kills); - stream.Add("\n"); - stream.OutputToStdOut(); + OFStream os(stdout); + os << "Loop invariant code motion for " << *block << " depends on " + << Print(loop_kills) << endl; } HInstruction* instr = block->first(); while (instr != NULL) { @@ -610,17 +595,11 @@ void HGlobalValueNumberingPhase::ProcessLoopBlock( SideEffects changes = side_effects_tracker_.ComputeChanges(instr); SideEffects depends_on = side_effects_tracker_.ComputeDependsOn(instr); if (FLAG_trace_gvn) { - 
HeapStringAllocator allocator; - StringStream stream(&allocator); - stream.Add("Checking instruction i%d (%s) changes ", - instr->id(), instr->Mnemonic()); - side_effects_tracker_.PrintSideEffectsTo(&stream, changes); - stream.Add(", depends on "); - side_effects_tracker_.PrintSideEffectsTo(&stream, depends_on); - stream.Add(". Loop changes "); - side_effects_tracker_.PrintSideEffectsTo(&stream, loop_kills); - stream.Add("\n"); - stream.OutputToStdOut(); + OFStream os(stdout); + os << "Checking instruction i" << instr->id() << " (" + << instr->Mnemonic() << ") changes " << Print(changes) + << ", depends on " << Print(depends_on) << ". Loop changes " + << Print(loop_kills) << endl; } bool can_hoist = !depends_on.ContainsAnyOf(loop_kills); if (can_hoist && !graph()->use_optimistic_licm()) { @@ -855,19 +834,17 @@ void HGlobalValueNumberingPhase::AnalyzeGraph() { map->Kill(changes); dominators->Store(changes, instr); if (FLAG_trace_gvn) { - HeapStringAllocator allocator; - StringStream stream(&allocator); - stream.Add("Instruction i%d changes ", instr->id()); - side_effects_tracker_.PrintSideEffectsTo(&stream, changes); - stream.Add("\n"); - stream.OutputToStdOut(); + OFStream os(stdout); + os << "Instruction i" << instr->id() << " changes " << Print(changes) + << endl; } } - if (instr->CheckFlag(HValue::kUseGVN)) { - ASSERT(!instr->HasObservableSideEffects()); + if (instr->CheckFlag(HValue::kUseGVN) && + !instr->CheckFlag(HValue::kCantBeReplaced)) { + DCHECK(!instr->HasObservableSideEffects()); HInstruction* other = map->Lookup(instr); if (other != NULL) { - ASSERT(instr->Equals(other) && other->Equals(instr)); + DCHECK(instr->Equals(other) && other->Equals(instr)); TRACE_GVN_4("Replacing instruction i%d (%s) with i%d (%s)\n", instr->id(), instr->Mnemonic(), diff --git a/deps/v8/src/hydrogen-gvn.h b/deps/v8/src/hydrogen-gvn.h index 25a5c24da63..3cab59c7353 100644 --- a/deps/v8/src/hydrogen-gvn.h +++ b/deps/v8/src/hydrogen-gvn.h @@ -5,14 +5,16 @@ #ifndef V8_HYDROGEN_GVN_H_ #define V8_HYDROGEN_GVN_H_ -#include "hydrogen.h" -#include "hydrogen-instructions.h" -#include "compiler.h" -#include "zone.h" +#include "src/compiler.h" +#include "src/hydrogen.h" +#include "src/hydrogen-instructions.h" +#include "src/zone.h" namespace v8 { namespace internal { +class OStream; + // This class extends GVNFlagSet with additional "special" dynamic side effects, // which can be used to represent side effects that cannot be expressed using // the GVNFlags of an HInstruction. 
These special side effects are tracked by a @@ -22,7 +24,7 @@ class SideEffects V8_FINAL { static const int kNumberOfSpecials = 64 - kNumberOfFlags; SideEffects() : bits_(0) { - ASSERT(kNumberOfFlags + kNumberOfSpecials == sizeof(bits_) * CHAR_BIT); + DCHECK(kNumberOfFlags + kNumberOfSpecials == sizeof(bits_) * CHAR_BIT); } explicit SideEffects(GVNFlagSet flags) : bits_(flags.ToIntegral()) {} bool IsEmpty() const { return bits_ == 0; } @@ -38,15 +40,14 @@ class SideEffects V8_FINAL { void RemoveFlag(GVNFlag flag) { bits_ &= ~MaskFlag(flag); } void RemoveAll() { bits_ = 0; } uint64_t ToIntegral() const { return bits_; } - void PrintTo(StringStream* stream) const; private: uint64_t MaskFlag(GVNFlag flag) const { return static_cast<uint64_t>(1) << static_cast<unsigned>(flag); } uint64_t MaskSpecial(int special) const { - ASSERT(special >= 0); - ASSERT(special < kNumberOfSpecials); + DCHECK(special >= 0); + DCHECK(special < kNumberOfSpecials); return static_cast<uint64_t>(1) << static_cast<unsigned>( special + kNumberOfFlags); } @@ -55,6 +56,8 @@ class SideEffects V8_FINAL { }; +struct TrackedEffects; + // Tracks global variable and inobject field loads/stores in a fine grained // fashion, and represents them using the "special" dynamic side effects of the // SideEffects class (see above). This way unrelated global variable/inobject @@ -65,20 +68,20 @@ class SideEffectsTracker V8_FINAL BASE_EMBEDDED { SideEffectsTracker() : num_global_vars_(0), num_inobject_fields_(0) {} SideEffects ComputeChanges(HInstruction* instr); SideEffects ComputeDependsOn(HInstruction* instr); - void PrintSideEffectsTo(StringStream* stream, SideEffects side_effects) const; private: + friend OStream& operator<<(OStream& os, const TrackedEffects& f); bool ComputeGlobalVar(Unique<Cell> cell, int* index); bool ComputeInobjectField(HObjectAccess access, int* index); static int GlobalVar(int index) { - ASSERT(index >= 0); - ASSERT(index < kNumberOfGlobalVars); + DCHECK(index >= 0); + DCHECK(index < kNumberOfGlobalVars); return index; } static int InobjectField(int index) { - ASSERT(index >= 0); - ASSERT(index < kNumberOfInobjectFields); + DCHECK(index >= 0); + DCHECK(index < kNumberOfInobjectFields); return index + kNumberOfGlobalVars; } @@ -95,6 +98,18 @@ class SideEffectsTracker V8_FINAL BASE_EMBEDDED { }; +// Helper class for printing, because the effects don't know their tracker. +struct TrackedEffects { + TrackedEffects(SideEffectsTracker* t, SideEffects e) + : tracker(t), effects(e) {} + SideEffectsTracker* tracker; + SideEffects effects; +}; + + +OStream& operator<<(OStream& os, const TrackedEffects& f); + + // Perform common subexpression elimination and loop-invariant code motion. class HGlobalValueNumberingPhase V8_FINAL : public HPhase { public: @@ -114,6 +129,9 @@ class HGlobalValueNumberingPhase V8_FINAL : public HPhase { SideEffects loop_kills); bool AllowCodeMotion(); bool ShouldMove(HInstruction* instr, HBasicBlock* loop_header); + TrackedEffects Print(SideEffects side_effects) { + return TrackedEffects(&side_effects_tracker_, side_effects); + } SideEffectsTracker side_effects_tracker_; bool removed_side_effects_; diff --git a/deps/v8/src/hydrogen-infer-representation.cc b/deps/v8/src/hydrogen-infer-representation.cc index 3e983476f3f..3815ba514e6 100644 --- a/deps/v8/src/hydrogen-infer-representation.cc +++ b/deps/v8/src/hydrogen-infer-representation.cc @@ -2,7 +2,7 @@ // Use of this source code is governed by a BSD-style license that can be // found in the LICENSE file. 
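// [Editor's note] TrackedEffects above is a print adapter: the SideEffects
// bit set alone cannot name its "special" indices, so Print() pairs it
// with its tracker and operator<< receives both. The same idiom, reduced
// to a hypothetical id/registry pair:
#include <iostream>
#include <string>
#include <vector>

struct Registry { std::vector<std::string> names; };

struct PrettyId {        // analogous to TrackedEffects
  const Registry* registry;
  int id;
};

std::ostream& operator<<(std::ostream& os, const PrettyId& p) {
  return os << p.registry->names.at(p.id);  // context makes the id readable
}

// Usage: given Registry r{{"alpha", "beta"}}, `std::cout << PrettyId{&r, 1}`
// prints "beta" -- mirroring `os << Print(side_effects)` in the GVN phase.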
-#include "hydrogen-infer-representation.h" +#include "src/hydrogen-infer-representation.h" namespace v8 { namespace internal { diff --git a/deps/v8/src/hydrogen-infer-representation.h b/deps/v8/src/hydrogen-infer-representation.h index a2c8466f938..d07f89d973f 100644 --- a/deps/v8/src/hydrogen-infer-representation.h +++ b/deps/v8/src/hydrogen-infer-representation.h @@ -5,7 +5,7 @@ #ifndef V8_HYDROGEN_INFER_REPRESENTATION_H_ #define V8_HYDROGEN_INFER_REPRESENTATION_H_ -#include "hydrogen.h" +#include "src/hydrogen.h" namespace v8 { namespace internal { diff --git a/deps/v8/src/hydrogen-infer-types.cc b/deps/v8/src/hydrogen-infer-types.cc index 7a3208b67f3..e69b4fad206 100644 --- a/deps/v8/src/hydrogen-infer-types.cc +++ b/deps/v8/src/hydrogen-infer-types.cc @@ -2,7 +2,7 @@ // Use of this source code is governed by a BSD-style license that can be // found in the LICENSE file. -#include "hydrogen-infer-types.h" +#include "src/hydrogen-infer-types.h" namespace v8 { namespace internal { @@ -46,7 +46,7 @@ void HInferTypesPhase::InferTypes(int from_inclusive, int to_inclusive) { } } } - ASSERT(in_worklist_.IsEmpty()); + DCHECK(in_worklist_.IsEmpty()); } } } diff --git a/deps/v8/src/hydrogen-infer-types.h b/deps/v8/src/hydrogen-infer-types.h index d1abcdac5a1..41337ac5c0d 100644 --- a/deps/v8/src/hydrogen-infer-types.h +++ b/deps/v8/src/hydrogen-infer-types.h @@ -5,7 +5,7 @@ #ifndef V8_HYDROGEN_INFER_TYPES_H_ #define V8_HYDROGEN_INFER_TYPES_H_ -#include "hydrogen.h" +#include "src/hydrogen.h" namespace v8 { namespace internal { diff --git a/deps/v8/src/hydrogen-instructions.cc b/deps/v8/src/hydrogen-instructions.cc index 56ac7196c37..b75bec0f5ec 100644 --- a/deps/v8/src/hydrogen-instructions.cc +++ b/deps/v8/src/hydrogen-instructions.cc @@ -2,27 +2,33 @@ // Use of this source code is governed by a BSD-style license that can be // found in the LICENSE file. -#include "v8.h" +#include "src/v8.h" -#include "double.h" -#include "factory.h" -#include "hydrogen-infer-representation.h" -#include "property-details-inl.h" +#include "src/double.h" +#include "src/factory.h" +#include "src/hydrogen-infer-representation.h" +#include "src/property-details-inl.h" #if V8_TARGET_ARCH_IA32 -#include "ia32/lithium-ia32.h" +#include "src/ia32/lithium-ia32.h" // NOLINT #elif V8_TARGET_ARCH_X64 -#include "x64/lithium-x64.h" +#include "src/x64/lithium-x64.h" // NOLINT #elif V8_TARGET_ARCH_ARM64 -#include "arm64/lithium-arm64.h" +#include "src/arm64/lithium-arm64.h" // NOLINT #elif V8_TARGET_ARCH_ARM -#include "arm/lithium-arm.h" +#include "src/arm/lithium-arm.h" // NOLINT #elif V8_TARGET_ARCH_MIPS -#include "mips/lithium-mips.h" +#include "src/mips/lithium-mips.h" // NOLINT +#elif V8_TARGET_ARCH_MIPS64 +#include "src/mips64/lithium-mips64.h" // NOLINT +#elif V8_TARGET_ARCH_X87 +#include "src/x87/lithium-x87.h" // NOLINT #else #error Unsupported target architecture. 
#endif +#include "src/base/safe_math.h" + namespace v8 { namespace internal { @@ -35,7 +41,7 @@ HYDROGEN_CONCRETE_INSTRUCTION_LIST(DEFINE_COMPILE) Isolate* HValue::isolate() const { - ASSERT(block() != NULL); + DCHECK(block() != NULL); return block()->isolate(); } @@ -51,7 +57,7 @@ void HValue::AssumeRepresentation(Representation r) { void HValue::InferRepresentation(HInferRepresentationPhase* h_infer) { - ASSERT(CheckFlag(kFlexibleRepresentation)); + DCHECK(CheckFlag(kFlexibleRepresentation)); Representation new_rep = RepresentationFromInputs(); UpdateRepresentation(new_rep, h_infer, "inputs"); new_rep = RepresentationFromUses(); @@ -286,7 +292,7 @@ void Range::KeepOrder() { #ifdef DEBUG void Range::Verify() const { - ASSERT(lower_ <= upper_); + DCHECK(lower_ <= upper_); } #endif @@ -306,46 +312,6 @@ bool Range::MulAndCheckOverflow(const Representation& r, Range* other) { } -const char* HType::ToString() { - // Note: The c1visualizer syntax for locals allows only a sequence of the - // following characters: A-Za-z0-9_-|: - switch (type_) { - case kNone: return "none"; - case kTagged: return "tagged"; - case kTaggedPrimitive: return "primitive"; - case kTaggedNumber: return "number"; - case kSmi: return "smi"; - case kHeapNumber: return "heap-number"; - case kString: return "string"; - case kBoolean: return "boolean"; - case kNonPrimitive: return "non-primitive"; - case kJSArray: return "array"; - case kJSObject: return "object"; - } - UNREACHABLE(); - return "unreachable"; -} - - -HType HType::TypeFromValue(Handle<Object> value) { - HType result = HType::Tagged(); - if (value->IsSmi()) { - result = HType::Smi(); - } else if (value->IsHeapNumber()) { - result = HType::HeapNumber(); - } else if (value->IsString()) { - result = HType::String(); - } else if (value->IsBoolean()) { - result = HType::Boolean(); - } else if (value->IsJSObject()) { - result = HType::JSObject(); - } else if (value->IsJSArray()) { - result = HType::JSArray(); - } - return result; -} - - bool HValue::IsDefinedAfter(HBasicBlock* other) const { return block()->block_id() > other->block_id(); } @@ -455,7 +421,7 @@ bool HValue::Equals(HValue* other) { if (OperandAt(i)->id() != other->OperandAt(i)->id()) return false; } bool result = DataEquals(other); - ASSERT(!result || Hashcode() == other->Hashcode()); + DCHECK(!result || Hashcode() == other->Hashcode()); return result; } @@ -527,7 +493,7 @@ void HValue::ReplaceAllUsesWith(HValue* other) { while (use_list_ != NULL) { HUseListNode* list_node = use_list_; HValue* value = list_node->value(); - ASSERT(!value->block()->IsStartBlock()); + DCHECK(!value->block()->IsStartBlock()); value->InternalSetOperandAt(list_node->index(), other); use_list_ = list_node->tail(); list_node->set_tail(other->use_list_); @@ -553,7 +519,7 @@ void HValue::Kill() { void HValue::SetBlock(HBasicBlock* block) { - ASSERT(block_ == NULL || block == NULL); + DCHECK(block_ == NULL || block == NULL); block_ = block; if (id_ == kNoNumber && block != NULL) { id_ = block->graph()->GetNextValueID(this); @@ -561,36 +527,36 @@ void HValue::SetBlock(HBasicBlock* block) { } -void HValue::PrintTypeTo(StringStream* stream) { - if (!representation().IsTagged() || type().Equals(HType::Tagged())) return; - stream->Add(" type:%s", type().ToString()); +OStream& operator<<(OStream& os, const HValue& v) { return v.PrintTo(os); } + + +OStream& operator<<(OStream& os, const TypeOf& t) { + if (t.value->representation().IsTagged() && + !t.value->type().Equals(HType::Tagged())) + return os; + return os << " type:" << 
t.value->type(); } -void HValue::PrintChangesTo(StringStream* stream) { - GVNFlagSet changes_flags = ChangesFlags(); - if (changes_flags.IsEmpty()) return; - stream->Add(" changes["); - if (changes_flags == AllSideEffectsFlagSet()) { - stream->Add("*"); +OStream& operator<<(OStream& os, const ChangesOf& c) { + GVNFlagSet changes_flags = c.value->ChangesFlags(); + if (changes_flags.IsEmpty()) return os; + os << " changes["; + if (changes_flags == c.value->AllSideEffectsFlagSet()) { + os << "*"; } else { bool add_comma = false; -#define PRINT_DO(Type) \ - if (changes_flags.Contains(k##Type)) { \ - if (add_comma) stream->Add(","); \ - add_comma = true; \ - stream->Add(#Type); \ - } +#define PRINT_DO(Type) \ + if (changes_flags.Contains(k##Type)) { \ + if (add_comma) os << ","; \ + add_comma = true; \ + os << #Type; \ + } GVN_TRACKED_FLAG_LIST(PRINT_DO); GVN_UNTRACKED_FLAG_LIST(PRINT_DO); #undef PRINT_DO } - stream->Add("]"); -} - - -void HValue::PrintNameTo(StringStream* stream) { - stream->Add("%s%d", representation_.Mnemonic(), id()); + return os << "]"; } @@ -631,74 +597,63 @@ void HValue::RegisterUse(int index, HValue* new_value) { void HValue::AddNewRange(Range* r, Zone* zone) { if (!HasRange()) ComputeInitialRange(zone); if (!HasRange()) range_ = new(zone) Range(); - ASSERT(HasRange()); + DCHECK(HasRange()); r->StackUpon(range_); range_ = r; } void HValue::RemoveLastAddedRange() { - ASSERT(HasRange()); - ASSERT(range_->next() != NULL); + DCHECK(HasRange()); + DCHECK(range_->next() != NULL); range_ = range_->next(); } void HValue::ComputeInitialRange(Zone* zone) { - ASSERT(!HasRange()); + DCHECK(!HasRange()); range_ = InferRange(zone); - ASSERT(HasRange()); + DCHECK(HasRange()); } -void HSourcePosition::PrintTo(FILE* out) { - if (IsUnknown()) { - PrintF(out, "<?>"); +OStream& operator<<(OStream& os, const HSourcePosition& p) { + if (p.IsUnknown()) { + return os << "<?>"; + } else if (FLAG_hydrogen_track_positions) { + return os << "<" << p.inlining_id() << ":" << p.position() << ">"; } else { - if (FLAG_hydrogen_track_positions) { - PrintF(out, "<%d:%d>", inlining_id(), position()); - } else { - PrintF(out, "<0:%d>", raw()); - } + return os << "<0:" << p.raw() << ">"; } } -void HInstruction::PrintTo(StringStream* stream) { - PrintMnemonicTo(stream); - PrintDataTo(stream); - PrintChangesTo(stream); - PrintTypeTo(stream); - if (CheckFlag(HValue::kHasNoObservableSideEffects)) { - stream->Add(" [noOSE]"); - } - if (CheckFlag(HValue::kIsDead)) { - stream->Add(" [dead]"); - } +OStream& HInstruction::PrintTo(OStream& os) const { // NOLINT + os << Mnemonic() << " "; + PrintDataTo(os) << ChangesOf(this) << TypeOf(this); + if (CheckFlag(HValue::kHasNoObservableSideEffects)) os << " [noOSE]"; + if (CheckFlag(HValue::kIsDead)) os << " [dead]"; + return os; } -void HInstruction::PrintDataTo(StringStream *stream) { +OStream& HInstruction::PrintDataTo(OStream& os) const { // NOLINT for (int i = 0; i < OperandCount(); ++i) { - if (i > 0) stream->Add(" "); - OperandAt(i)->PrintNameTo(stream); + if (i > 0) os << " "; + os << NameOf(OperandAt(i)); } -} - - -void HInstruction::PrintMnemonicTo(StringStream* stream) { - stream->Add("%s ", Mnemonic()); + return os; } void HInstruction::Unlink() { - ASSERT(IsLinked()); - ASSERT(!IsControlInstruction()); // Must never move control instructions. - ASSERT(!IsBlockEntry()); // Doesn't make sense to delete these. - ASSERT(previous_ != NULL); + DCHECK(IsLinked()); + DCHECK(!IsControlInstruction()); // Must never move control instructions. 
+ DCHECK(!IsBlockEntry()); // Doesn't make sense to delete these. + DCHECK(previous_ != NULL); previous_->next_ = next_; if (next_ == NULL) { - ASSERT(block()->last() == this); + DCHECK(block()->last() == this); block()->set_last(previous_); } else { next_->previous_ = previous_; @@ -708,11 +663,11 @@ void HInstruction::Unlink() { void HInstruction::InsertBefore(HInstruction* next) { - ASSERT(!IsLinked()); - ASSERT(!next->IsBlockEntry()); - ASSERT(!IsControlInstruction()); - ASSERT(!next->block()->IsStartBlock()); - ASSERT(next->previous_ != NULL); + DCHECK(!IsLinked()); + DCHECK(!next->IsBlockEntry()); + DCHECK(!IsControlInstruction()); + DCHECK(!next->block()->IsStartBlock()); + DCHECK(next->previous_ != NULL); HInstruction* prev = next->previous(); prev->next_ = this; next->previous_ = this; @@ -726,14 +681,14 @@ void HInstruction::InsertBefore(HInstruction* next) { void HInstruction::InsertAfter(HInstruction* previous) { - ASSERT(!IsLinked()); - ASSERT(!previous->IsControlInstruction()); - ASSERT(!IsControlInstruction() || previous->next_ == NULL); + DCHECK(!IsLinked()); + DCHECK(!previous->IsControlInstruction()); + DCHECK(!IsControlInstruction() || previous->next_ == NULL); HBasicBlock* block = previous->block(); // Never insert anything except constants into the start block after finishing // it. if (block->IsStartBlock() && block->IsFinished() && !IsConstant()) { - ASSERT(block->end()->SecondSuccessor() == NULL); + DCHECK(block->end()->SecondSuccessor() == NULL); InsertAfter(block->end()->FirstSuccessor()->first()); return; } @@ -743,7 +698,7 @@ void HInstruction::InsertAfter(HInstruction* previous) { // simulate instruction instead. HInstruction* next = previous->next_; if (previous->HasObservableSideEffects() && next != NULL) { - ASSERT(next->IsSimulate()); + DCHECK(next->IsSimulate()); previous = next; next = previous->next_; } @@ -762,6 +717,21 @@ void HInstruction::InsertAfter(HInstruction* previous) { } +bool HInstruction::Dominates(HInstruction* other) { + if (block() != other->block()) { + return block()->Dominates(other->block()); + } + // Both instructions are in the same basic block. This instruction + // should precede the other one in order to dominate it. + for (HInstruction* instr = next(); instr != NULL; instr = instr->next()) { + if (instr == other) { + return true; + } + } + return false; +} + + #ifdef DEBUG void HInstruction::Verify() { // Verify that input operands are defined before use. @@ -778,19 +748,19 @@ void HInstruction::Verify() { cur = cur->previous(); } // Must reach other operand in the same block! - ASSERT(cur == other_operand); + DCHECK(cur == other_operand); } } else { // If the following assert fires, you may have forgotten an // AddInstruction. - ASSERT(other_block->Dominates(cur_block)); + DCHECK(other_block->Dominates(cur_block)); } } // Verify that instructions that may have side-effects are followed // by a simulate instruction. if (HasObservableSideEffects() && !IsOsrEntry()) { - ASSERT(next()->IsSimulate()); + DCHECK(next()->IsSimulate()); } // Verify that instructions that can be eliminated by GVN have overridden @@ -801,7 +771,7 @@ void HInstruction::Verify() { // Verify that all uses are in the graph. 
for (HUseIterator use = uses(); !use.Done(); use.Advance()) { if (use.value()->IsInstruction()) { - ASSERT(HInstruction::cast(use.value())->IsLinked()); + DCHECK(HInstruction::cast(use.value())->IsLinked()); } } } @@ -820,7 +790,6 @@ bool HInstruction::CanDeoptimize() { case HValue::kBlockEntry: case HValue::kBoundsCheckBaseIndexInformation: case HValue::kCallFunction: - case HValue::kCallJSFunction: case HValue::kCallNew: case HValue::kCallNewArray: case HValue::kCallStub: @@ -865,13 +834,12 @@ bool HInstruction::CanDeoptimize() { case HValue::kMathMinMax: case HValue::kParameter: case HValue::kPhi: - case HValue::kPushArgument: + case HValue::kPushArguments: case HValue::kRegExpLiteral: case HValue::kReturn: - case HValue::kRor: - case HValue::kSar: case HValue::kSeqStringGetChar: case HValue::kStoreCodeEntry: + case HValue::kStoreFrameContext: case HValue::kStoreKeyed: case HValue::kStoreNamedField: case HValue::kStoreNamedGeneric: @@ -884,10 +852,12 @@ bool HInstruction::CanDeoptimize() { return false; case HValue::kAdd: + case HValue::kAllocateBlockContext: case HValue::kApplyArguments: case HValue::kBitwise: case HValue::kBoundsCheck: case HValue::kBranch: + case HValue::kCallJSFunction: case HValue::kCallRuntime: case HValue::kChange: case HValue::kCheckHeapObject: @@ -914,6 +884,8 @@ bool HInstruction::CanDeoptimize() { case HValue::kMul: case HValue::kOsrEntry: case HValue::kPower: + case HValue::kRor: + case HValue::kSar: case HValue::kSeqStringSetChar: case HValue::kShl: case HValue::kShr: @@ -938,27 +910,28 @@ bool HInstruction::CanDeoptimize() { } -void HDummyUse::PrintDataTo(StringStream* stream) { - value()->PrintNameTo(stream); +OStream& operator<<(OStream& os, const NameOf& v) { + return os << v.value->representation().Mnemonic() << v.value->id(); +} + +OStream& HDummyUse::PrintDataTo(OStream& os) const { // NOLINT + return os << NameOf(value()); } -void HEnvironmentMarker::PrintDataTo(StringStream* stream) { - stream->Add("%s var[%d]", kind() == BIND ? "bind" : "lookup", index()); +OStream& HEnvironmentMarker::PrintDataTo(OStream& os) const { // NOLINT + return os << (kind() == BIND ? 
"bind" : "lookup") << " var[" << index() + << "]"; } -void HUnaryCall::PrintDataTo(StringStream* stream) { - value()->PrintNameTo(stream); - stream->Add(" "); - stream->Add("#%d", argument_count()); +OStream& HUnaryCall::PrintDataTo(OStream& os) const { // NOLINT + return os << NameOf(value()) << " #" << argument_count(); } -void HCallJSFunction::PrintDataTo(StringStream* stream) { - function()->PrintNameTo(stream); - stream->Add(" "); - stream->Add("#%d", argument_count()); +OStream& HCallJSFunction::PrintDataTo(OStream& os) const { // NOLINT + return os << NameOf(function()) << " #" << argument_count(); } @@ -984,14 +957,9 @@ HCallJSFunction* HCallJSFunction::New( } - - -void HBinaryCall::PrintDataTo(StringStream* stream) { - first()->PrintNameTo(stream); - stream->Add(" "); - second()->PrintNameTo(stream); - stream->Add(" "); - stream->Add("#%d", argument_count()); +OStream& HBinaryCall::PrintDataTo(OStream& os) const { // NOLINT + return os << NameOf(first()) << " " << NameOf(second()) << " #" + << argument_count(); } @@ -1001,7 +969,7 @@ void HBoundsCheck::ApplyIndexChange() { DecompositionResult decomposition; bool index_is_decomposable = index()->TryDecompose(&decomposition); if (index_is_decomposable) { - ASSERT(decomposition.base() == base()); + DCHECK(decomposition.base() == base()); if (decomposition.offset() == offset() && decomposition.scale() == scale()) return; } else { @@ -1045,27 +1013,24 @@ void HBoundsCheck::ApplyIndexChange() { } -void HBoundsCheck::PrintDataTo(StringStream* stream) { - index()->PrintNameTo(stream); - stream->Add(" "); - length()->PrintNameTo(stream); +OStream& HBoundsCheck::PrintDataTo(OStream& os) const { // NOLINT + os << NameOf(index()) << " " << NameOf(length()); if (base() != NULL && (offset() != 0 || scale() != 0)) { - stream->Add(" base: (("); + os << " base: (("; if (base() != index()) { - index()->PrintNameTo(stream); + os << NameOf(index()); } else { - stream->Add("index"); + os << "index"; } - stream->Add(" + %d) >> %d)", offset(), scale()); - } - if (skip_check()) { - stream->Add(" [DISABLED]"); + os << " + " << offset() << ") >> " << scale() << ")"; } + if (skip_check()) os << " [DISABLED]"; + return os; } void HBoundsCheck::InferRepresentation(HInferRepresentationPhase* h_infer) { - ASSERT(CheckFlag(kFlexibleRepresentation)); + DCHECK(CheckFlag(kFlexibleRepresentation)); HValue* actual_index = index()->ActualValue(); HValue* actual_length = length()->ActualValue(); Representation index_rep = actual_index->representation(); @@ -1103,84 +1068,78 @@ Range* HBoundsCheck::InferRange(Zone* zone) { } -void HBoundsCheckBaseIndexInformation::PrintDataTo(StringStream* stream) { - stream->Add("base: "); - base_index()->PrintNameTo(stream); - stream->Add(", check: "); - base_index()->PrintNameTo(stream); +OStream& HBoundsCheckBaseIndexInformation::PrintDataTo( + OStream& os) const { // NOLINT + // TODO(svenpanne) This 2nd base_index() looks wrong... 
+ return os << "base: " << NameOf(base_index()) + << ", check: " << NameOf(base_index()); } -void HCallWithDescriptor::PrintDataTo(StringStream* stream) { +OStream& HCallWithDescriptor::PrintDataTo(OStream& os) const { // NOLINT for (int i = 0; i < OperandCount(); i++) { - OperandAt(i)->PrintNameTo(stream); - stream->Add(" "); + os << NameOf(OperandAt(i)) << " "; } - stream->Add("#%d", argument_count()); + return os << "#" << argument_count(); } -void HCallNewArray::PrintDataTo(StringStream* stream) { - stream->Add(ElementsKindToString(elements_kind())); - stream->Add(" "); - HBinaryCall::PrintDataTo(stream); +OStream& HCallNewArray::PrintDataTo(OStream& os) const { // NOLINT + os << ElementsKindToString(elements_kind()) << " "; + return HBinaryCall::PrintDataTo(os); } -void HCallRuntime::PrintDataTo(StringStream* stream) { - stream->Add("%o ", *name()); - if (save_doubles() == kSaveFPRegs) { - stream->Add("[save doubles] "); - } - stream->Add("#%d", argument_count()); +OStream& HCallRuntime::PrintDataTo(OStream& os) const { // NOLINT + os << name()->ToCString().get() << " "; + if (save_doubles() == kSaveFPRegs) os << "[save doubles] "; + return os << "#" << argument_count(); +} + + +OStream& HClassOfTestAndBranch::PrintDataTo(OStream& os) const { // NOLINT + return os << "class_of_test(" << NameOf(value()) << ", \"" + << class_name()->ToCString().get() << "\")"; } -void HClassOfTestAndBranch::PrintDataTo(StringStream* stream) { - stream->Add("class_of_test("); - value()->PrintNameTo(stream); - stream->Add(", \"%o\")", *class_name()); +OStream& HWrapReceiver::PrintDataTo(OStream& os) const { // NOLINT + return os << NameOf(receiver()) << " " << NameOf(function()); } -void HWrapReceiver::PrintDataTo(StringStream* stream) { - receiver()->PrintNameTo(stream); - stream->Add(" "); - function()->PrintNameTo(stream); +OStream& HAccessArgumentsAt::PrintDataTo(OStream& os) const { // NOLINT + return os << NameOf(arguments()) << "[" << NameOf(index()) << "], length " + << NameOf(length()); } -void HAccessArgumentsAt::PrintDataTo(StringStream* stream) { - arguments()->PrintNameTo(stream); - stream->Add("["); - index()->PrintNameTo(stream); - stream->Add("], length "); - length()->PrintNameTo(stream); +OStream& HAllocateBlockContext::PrintDataTo(OStream& os) const { // NOLINT + return os << NameOf(context()) << " " << NameOf(function()); } -void HControlInstruction::PrintDataTo(StringStream* stream) { - stream->Add(" goto ("); +OStream& HControlInstruction::PrintDataTo(OStream& os) const { // NOLINT + os << " goto ("; bool first_block = true; for (HSuccessorIterator it(this); !it.Done(); it.Advance()) { - stream->Add(first_block ? 
"B%d" : ", B%d", it.Current()->block_id()); + if (!first_block) os << ", "; + os << *it.Current(); first_block = false; } - stream->Add(")"); + return os << ")"; } -void HUnaryControlInstruction::PrintDataTo(StringStream* stream) { - value()->PrintNameTo(stream); - HControlInstruction::PrintDataTo(stream); +OStream& HUnaryControlInstruction::PrintDataTo(OStream& os) const { // NOLINT + os << NameOf(value()); + return HControlInstruction::PrintDataTo(os); } -void HReturn::PrintDataTo(StringStream* stream) { - value()->PrintNameTo(stream); - stream->Add(" (pop "); - parameter_count()->PrintNameTo(stream); - stream->Add(" values)"); +OStream& HReturn::PrintDataTo(OStream& os) const { // NOLINT + return os << NameOf(value()) << " (pop " << NameOf(parameter_count()) + << " values)"; } @@ -1212,8 +1171,8 @@ Representation HBranch::observed_input_representation(int index) { bool HBranch::KnownSuccessorBlock(HBasicBlock** block) { HValue* value = this->value(); if (value->EmitAtUses()) { - ASSERT(value->IsConstant()); - ASSERT(!value->representation().IsDouble()); + DCHECK(value->IsConstant()); + DCHECK(!value->representation().IsDouble()); *block = HConstant::cast(value)->BooleanValue() ? FirstSuccessor() : SecondSuccessor(); @@ -1224,35 +1183,44 @@ bool HBranch::KnownSuccessorBlock(HBasicBlock** block) { } -void HBranch::PrintDataTo(StringStream* stream) { - HUnaryControlInstruction::PrintDataTo(stream); - stream->Add(" "); - expected_input_types().Print(stream); +OStream& HBranch::PrintDataTo(OStream& os) const { // NOLINT + return HUnaryControlInstruction::PrintDataTo(os) << " " + << expected_input_types(); } -void HCompareMap::PrintDataTo(StringStream* stream) { - value()->PrintNameTo(stream); - stream->Add(" (%p)", *map().handle()); - HControlInstruction::PrintDataTo(stream); +OStream& HCompareMap::PrintDataTo(OStream& os) const { // NOLINT + os << NameOf(value()) << " (" << *map().handle() << ")"; + HControlInstruction::PrintDataTo(os); if (known_successor_index() == 0) { - stream->Add(" [true]"); + os << " [true]"; } else if (known_successor_index() == 1) { - stream->Add(" [false]"); + os << " [false]"; } + return os; } const char* HUnaryMathOperation::OpName() const { switch (op()) { - case kMathFloor: return "floor"; - case kMathRound: return "round"; - case kMathAbs: return "abs"; - case kMathLog: return "log"; - case kMathExp: return "exp"; - case kMathSqrt: return "sqrt"; - case kMathPowHalf: return "pow-half"; - case kMathClz32: return "clz32"; + case kMathFloor: + return "floor"; + case kMathFround: + return "fround"; + case kMathRound: + return "round"; + case kMathAbs: + return "abs"; + case kMathLog: + return "log"; + case kMathExp: + return "exp"; + case kMathSqrt: + return "sqrt"; + case kMathPowHalf: + return "pow-half"; + case kMathClz32: + return "clz32"; default: UNREACHABLE(); return NULL; @@ -1285,43 +1253,41 @@ Range* HUnaryMathOperation::InferRange(Zone* zone) { } -void HUnaryMathOperation::PrintDataTo(StringStream* stream) { - const char* name = OpName(); - stream->Add("%s ", name); - value()->PrintNameTo(stream); +OStream& HUnaryMathOperation::PrintDataTo(OStream& os) const { // NOLINT + return os << OpName() << " " << NameOf(value()); } -void HUnaryOperation::PrintDataTo(StringStream* stream) { - value()->PrintNameTo(stream); +OStream& HUnaryOperation::PrintDataTo(OStream& os) const { // NOLINT + return os << NameOf(value()); } -void HHasInstanceTypeAndBranch::PrintDataTo(StringStream* stream) { - value()->PrintNameTo(stream); +OStream& 
HHasInstanceTypeAndBranch::PrintDataTo(OStream& os) const { // NOLINT + os << NameOf(value()); switch (from_) { case FIRST_JS_RECEIVER_TYPE: - if (to_ == LAST_TYPE) stream->Add(" spec_object"); + if (to_ == LAST_TYPE) os << " spec_object"; break; case JS_REGEXP_TYPE: - if (to_ == JS_REGEXP_TYPE) stream->Add(" reg_exp"); + if (to_ == JS_REGEXP_TYPE) os << " reg_exp"; break; case JS_ARRAY_TYPE: - if (to_ == JS_ARRAY_TYPE) stream->Add(" array"); + if (to_ == JS_ARRAY_TYPE) os << " array"; break; case JS_FUNCTION_TYPE: - if (to_ == JS_FUNCTION_TYPE) stream->Add(" function"); + if (to_ == JS_FUNCTION_TYPE) os << " function"; break; default: break; } + return os; } -void HTypeofIsAndBranch::PrintDataTo(StringStream* stream) { - value()->PrintNameTo(stream); - stream->Add(" == %o", *type_literal_.handle()); - HControlInstruction::PrintDataTo(stream); +OStream& HTypeofIsAndBranch::PrintDataTo(OStream& os) const { // NOLINT + os << NameOf(value()) << " == " << type_literal()->ToCString().get(); + return HControlInstruction::PrintDataTo(os); } @@ -1338,10 +1304,9 @@ static String* TypeOfString(HConstant* constant, Isolate* isolate) { return heap->boolean_string(); } if (unique.IsKnownGlobal(heap->null_value())) { - return FLAG_harmony_typeof ? heap->null_string() - : heap->object_string(); + return heap->object_string(); } - ASSERT(unique.IsKnownGlobal(heap->undefined_value())); + DCHECK(unique.IsKnownGlobal(heap->undefined_value())); return heap->undefined_string(); } case SYMBOL_TYPE: @@ -1373,10 +1338,8 @@ bool HTypeofIsAndBranch::KnownSuccessorBlock(HBasicBlock** block) { } -void HCheckMapValue::PrintDataTo(StringStream* stream) { - value()->PrintNameTo(stream); - stream->Add(" "); - map()->PrintNameTo(stream); +OStream& HCheckMapValue::PrintDataTo(OStream& os) const { // NOLINT + return os << NameOf(value()) << " " << NameOf(map()); } @@ -1391,23 +1354,19 @@ HValue* HCheckMapValue::Canonicalize() { } -void HForInPrepareMap::PrintDataTo(StringStream* stream) { - enumerable()->PrintNameTo(stream); +OStream& HForInPrepareMap::PrintDataTo(OStream& os) const { // NOLINT + return os << NameOf(enumerable()); } -void HForInCacheArray::PrintDataTo(StringStream* stream) { - enumerable()->PrintNameTo(stream); - stream->Add(" "); - map()->PrintNameTo(stream); - stream->Add("[%d]", idx_); +OStream& HForInCacheArray::PrintDataTo(OStream& os) const { // NOLINT + return os << NameOf(enumerable()) << " " << NameOf(map()) << "[" << idx_ + << "]"; } -void HLoadFieldByIndex::PrintDataTo(StringStream* stream) { - object()->PrintNameTo(stream); - stream->Add(" "); - index()->PrintNameTo(stream); +OStream& HLoadFieldByIndex::PrintDataTo(OStream& os) const { // NOLINT + return os << NameOf(object()) << " " << NameOf(index()); } @@ -1543,8 +1502,8 @@ HValue* HWrapReceiver::Canonicalize() { } -void HTypeof::PrintDataTo(StringStream* stream) { - value()->PrintNameTo(stream); +OStream& HTypeof::PrintDataTo(OStream& os) const { // NOLINT + return os << NameOf(value()); } @@ -1554,7 +1513,10 @@ HInstruction* HForceRepresentation::New(Zone* zone, HValue* context, HConstant* c = HConstant::cast(value); if (c->HasNumberValue()) { double double_res = c->DoubleValue(); - if (representation.CanContainDouble(double_res)) { + if (representation.IsDouble()) { + return HConstant::New(zone, context, double_res); + + } else if (representation.CanContainDouble(double_res)) { return HConstant::New(zone, context, static_cast<int32_t>(double_res), representation); @@ -1565,20 +1527,20 @@ HInstruction* HForceRepresentation::New(Zone* zone, 
HValue* context, } -void HForceRepresentation::PrintDataTo(StringStream* stream) { - stream->Add("%s ", representation().Mnemonic()); - value()->PrintNameTo(stream); +OStream& HForceRepresentation::PrintDataTo(OStream& os) const { // NOLINT + return os << representation().Mnemonic() << " " << NameOf(value()); } -void HChange::PrintDataTo(StringStream* stream) { - HUnaryOperation::PrintDataTo(stream); - stream->Add(" %s to %s", from().Mnemonic(), to().Mnemonic()); +OStream& HChange::PrintDataTo(OStream& os) const { // NOLINT + HUnaryOperation::PrintDataTo(os); + os << " " << from().Mnemonic() << " to " << to().Mnemonic(); - if (CanTruncateToSmi()) stream->Add(" truncating-smi"); - if (CanTruncateToInt32()) stream->Add(" truncating-int32"); - if (CheckFlag(kBailoutOnMinusZero)) stream->Add(" -0?"); - if (CheckFlag(kAllowUndefinedAsNaN)) stream->Add(" allow-undefined-as-nan"); + if (CanTruncateToSmi()) os << " truncating-smi"; + if (CanTruncateToInt32()) os << " truncating-int32"; + if (CheckFlag(kBailoutOnMinusZero)) os << " -0?"; + if (CheckFlag(kAllowUndefinedAsNaN)) os << " allow-undefined-as-nan"; + return os; } @@ -1592,7 +1554,7 @@ HValue* HUnaryMathOperation::Canonicalize() { val, representation(), false, false)); } } - if (op() == kMathFloor && value()->IsDiv() && value()->UseCount() == 1) { + if (op() == kMathFloor && value()->IsDiv() && value()->HasOneUse()) { HDiv* hdiv = HDiv::cast(value()); HValue* left = hdiv->left(); @@ -1633,7 +1595,9 @@ HValue* HUnaryMathOperation::Canonicalize() { HValue* HCheckInstanceType::Canonicalize() { - if (check_ == IS_STRING && value()->type().IsString()) { + if ((check_ == IS_SPEC_OBJECT && value()->type().IsJSObject()) || + (check_ == IS_JS_ARRAY && value()->type().IsJSArray()) || + (check_ == IS_STRING && value()->type().IsString())) { return value(); } @@ -1648,7 +1612,7 @@ HValue* HCheckInstanceType::Canonicalize() { void HCheckInstanceType::GetCheckInterval(InstanceType* first, InstanceType* last) { - ASSERT(is_interval_check()); + DCHECK(is_interval_check()); switch (check_) { case IS_SPEC_OBJECT: *first = FIRST_SPEC_OBJECT_TYPE; @@ -1664,7 +1628,7 @@ void HCheckInstanceType::GetCheckInterval(InstanceType* first, void HCheckInstanceType::GetCheckMaskAndTag(uint8_t* mask, uint8_t* tag) { - ASSERT(!is_interval_check()); + DCHECK(!is_interval_check()); switch (check_) { case IS_STRING: *mask = kIsNotStringMask; @@ -1680,13 +1644,14 @@ void HCheckInstanceType::GetCheckMaskAndTag(uint8_t* mask, uint8_t* tag) { } -void HCheckMaps::PrintDataTo(StringStream* stream) { - value()->PrintNameTo(stream); - stream->Add(" [%p", *maps()->at(0).handle()); +OStream& HCheckMaps::PrintDataTo(OStream& os) const { // NOLINT + os << NameOf(value()) << " [" << *maps()->at(0).handle(); for (int i = 1; i < maps()->size(); ++i) { - stream->Add(",%p", *maps()->at(i).handle()); + os << "," << *maps()->at(i).handle(); } - stream->Add("]%s", IsStabilityCheck() ? 
"(stability-check)" : ""); + os << "]"; + if (IsStabilityCheck()) os << "(stability-check)"; + return os; } @@ -1710,10 +1675,8 @@ HValue* HCheckMaps::Canonicalize() { } -void HCheckValue::PrintDataTo(StringStream* stream) { - value()->PrintNameTo(stream); - stream->Add(" "); - object().handle()->ShortPrint(stream); +OStream& HCheckValue::PrintDataTo(OStream& os) const { // NOLINT + return os << NameOf(value()) << " " << Brief(*object().handle()); } @@ -1723,7 +1686,7 @@ HValue* HCheckValue::Canonicalize() { } -const char* HCheckInstanceType::GetCheckName() { +const char* HCheckInstanceType::GetCheckName() const { switch (check_) { case IS_SPEC_OBJECT: return "object"; case IS_JS_ARRAY: return "array"; @@ -1735,34 +1698,30 @@ const char* HCheckInstanceType::GetCheckName() { } -void HCheckInstanceType::PrintDataTo(StringStream* stream) { - stream->Add("%s ", GetCheckName()); - HUnaryOperation::PrintDataTo(stream); +OStream& HCheckInstanceType::PrintDataTo(OStream& os) const { // NOLINT + os << GetCheckName() << " "; + return HUnaryOperation::PrintDataTo(os); } -void HCallStub::PrintDataTo(StringStream* stream) { - stream->Add("%s ", - CodeStub::MajorName(major_key_, false)); - HUnaryCall::PrintDataTo(stream); +OStream& HCallStub::PrintDataTo(OStream& os) const { // NOLINT + os << CodeStub::MajorName(major_key_, false) << " "; + return HUnaryCall::PrintDataTo(os); } -void HUnknownOSRValue::PrintDataTo(StringStream *stream) { +OStream& HUnknownOSRValue::PrintDataTo(OStream& os) const { // NOLINT const char* type = "expression"; if (environment_->is_local_index(index_)) type = "local"; if (environment_->is_special_index(index_)) type = "special"; if (environment_->is_parameter_index(index_)) type = "parameter"; - stream->Add("%s @ %d", type, index_); + return os << type << " @ " << index_; } -void HInstanceOf::PrintDataTo(StringStream* stream) { - left()->PrintNameTo(stream); - stream->Add(" "); - right()->PrintNameTo(stream); - stream->Add(" "); - context()->PrintNameTo(stream); +OStream& HInstanceOf::PrintDataTo(OStream& os) const { // NOLINT + return os << NameOf(left()) << " " << NameOf(right()) << " " + << NameOf(context()); } @@ -1972,15 +1931,18 @@ Range* HMathFloorOfDiv::InferRange(Zone* zone) { } +// Returns the absolute value of its argument minus one, avoiding undefined +// behavior at kMinInt. +static int32_t AbsMinus1(int32_t a) { return a < 0 ? -(a + 1) : (a - 1); } + + Range* HMod::InferRange(Zone* zone) { if (representation().IsInteger32()) { Range* a = left()->range(); Range* b = right()->range(); - // The magnitude of the modulus is bounded by the right operand. Note that - // apart for the cases involving kMinInt, the calculation below is the same - // as Max(Abs(b->lower()), Abs(b->upper())) - 1. - int32_t positive_bound = -(Min(NegAbs(b->lower()), NegAbs(b->upper())) + 1); + // The magnitude of the modulus is bounded by the right operand. + int32_t positive_bound = Max(AbsMinus1(b->lower()), AbsMinus1(b->upper())); // The result of the modulo operation has the sign of its left operand. 
bool left_can_be_negative = a->CanBeMinusZero() || a->CanBeNegative(); @@ -2093,7 +2055,7 @@ void InductionVariableData::DecomposeBitwise( void InductionVariableData::AddCheck(HBoundsCheck* check, int32_t upper_limit) { - ASSERT(limit_validity() != NULL); + DCHECK(limit_validity() != NULL); if (limit_validity() != check->block() && !limit_validity()->Dominates(check->block())) return; if (!phi()->block()->current_loop()->IsNestedInThisLoop( @@ -2131,9 +2093,9 @@ void InductionVariableData::ChecksRelatedToLength::UseNewIndexInCurrentBlock( int32_t mask, HValue* index_base, HValue* context) { - ASSERT(first_check_in_block() != NULL); + DCHECK(first_check_in_block() != NULL); HValue* previous_index = first_check_in_block()->index(); - ASSERT(context != NULL); + DCHECK(context != NULL); Zone* zone = index_base->block()->graph()->zone(); set_added_constant(HConstant::New(zone, context, mask)); @@ -2147,18 +2109,18 @@ void InductionVariableData::ChecksRelatedToLength::UseNewIndexInCurrentBlock( first_check_in_block()->ReplaceAllUsesWith(first_check_in_block()->index()); HInstruction* new_index = HBitwise::New(zone, context, token, index_base, added_constant()); - ASSERT(new_index->IsBitwise()); + DCHECK(new_index->IsBitwise()); new_index->ClearAllSideEffects(); new_index->AssumeRepresentation(Representation::Integer32()); set_added_index(HBitwise::cast(new_index)); added_index()->InsertBefore(first_check_in_block()); } - ASSERT(added_index()->op() == token); + DCHECK(added_index()->op() == token); added_index()->SetOperandAt(1, index_base); added_index()->SetOperandAt(2, added_constant()); first_check_in_block()->SetOperandAt(0, added_index()); - if (previous_index->UseCount() == 0) { + if (previous_index->HasNoUses()) { previous_index->DeleteAndReplaceWith(NULL); } } @@ -2263,7 +2225,7 @@ int32_t InductionVariableData::ComputeIncrement(HPhi* phi, */ void InductionVariableData::UpdateAdditionalLimit( InductionVariableLimitUpdate* update) { - ASSERT(update->updated_variable == this); + DCHECK(update->updated_variable == this); if (update->limit_is_upper) { swap(&additional_upper_limit_, &update->limit); swap(&additional_upper_limit_is_included_, &update->limit_is_included); @@ -2394,7 +2356,7 @@ void InductionVariableData::ComputeLimitFromPredecessorBlock( } else { other_target = branch->SuccessorAt(0); token = Token::NegateCompareOp(token); - ASSERT(block == branch->SuccessorAt(1)); + DCHECK(block == branch->SuccessorAt(1)); } InductionVariableData* data; @@ -2459,7 +2421,7 @@ Range* HMathMinMax::InferRange(Zone* zone) { if (operation_ == kMathMax) { res->CombinedMax(b); } else { - ASSERT(operation_ == kMathMin); + DCHECK(operation_ == kMathMin); res->CombinedMin(b); } return res; @@ -2469,22 +2431,23 @@ Range* HMathMinMax::InferRange(Zone* zone) { } -void HPhi::PrintTo(StringStream* stream) { - stream->Add("["); +void HPushArguments::AddInput(HValue* value) { + inputs_.Add(NULL, value->block()->zone()); + SetOperandAt(OperandCount() - 1, value); +} + + +OStream& HPhi::PrintTo(OStream& os) const { // NOLINT + os << "["; for (int i = 0; i < OperandCount(); ++i) { - HValue* value = OperandAt(i); - stream->Add(" "); - value->PrintNameTo(stream); - stream->Add(" "); + os << " " << NameOf(OperandAt(i)) << " "; } - stream->Add(" uses:%d_%ds_%di_%dd_%dt", - UseCount(), - smi_non_phi_uses() + smi_indirect_uses(), - int32_non_phi_uses() + int32_indirect_uses(), - double_non_phi_uses() + double_indirect_uses(), - tagged_non_phi_uses() + tagged_indirect_uses()); - PrintTypeTo(stream); - stream->Add("]"); + 
return os << " uses:" << UseCount() << "_" + << smi_non_phi_uses() + smi_indirect_uses() << "s_" + << int32_non_phi_uses() + int32_indirect_uses() << "i_" + << double_non_phi_uses() + double_indirect_uses() << "d_" + << tagged_non_phi_uses() + tagged_indirect_uses() << "t" + << TypeOf(this) << "]"; } @@ -2518,15 +2481,15 @@ HValue* HPhi::GetRedundantReplacement() { HValue* current = OperandAt(position++); if (current != this && current != candidate) return NULL; } - ASSERT(candidate != this); + DCHECK(candidate != this); return candidate; } void HPhi::DeleteFromGraph() { - ASSERT(block() != NULL); + DCHECK(block() != NULL); block()->RemovePhi(this); - ASSERT(block() == NULL); + DCHECK(block() == NULL); } @@ -2606,27 +2569,28 @@ void HSimulate::MergeWith(ZoneList<HSimulate*>* list) { } -void HSimulate::PrintDataTo(StringStream* stream) { - stream->Add("id=%d", ast_id().ToInt()); - if (pop_count_ > 0) stream->Add(" pop %d", pop_count_); +OStream& HSimulate::PrintDataTo(OStream& os) const { // NOLINT + os << "id=" << ast_id().ToInt(); + if (pop_count_ > 0) os << " pop " << pop_count_; if (values_.length() > 0) { - if (pop_count_ > 0) stream->Add(" /"); + if (pop_count_ > 0) os << " /"; for (int i = values_.length() - 1; i >= 0; --i) { if (HasAssignedIndexAt(i)) { - stream->Add(" var[%d] = ", GetAssignedIndexAt(i)); + os << " var[" << GetAssignedIndexAt(i) << "] = "; } else { - stream->Add(" push "); + os << " push "; } - values_[i]->PrintNameTo(stream); - if (i > 0) stream->Add(","); + os << NameOf(values_[i]); + if (i > 0) os << ","; } } + return os; } void HSimulate::ReplayEnvironment(HEnvironment* env) { if (done_with_replay_) return; - ASSERT(env != NULL); + DCHECK(env != NULL); env->set_ast_id(ast_id()); env->Drop(pop_count()); for (int i = values()->length() - 1; i >= 0; --i) { @@ -2659,7 +2623,7 @@ static void ReplayEnvironmentNested(const ZoneList<HValue*>* values, // Replay captured objects by replacing all captured objects with the // same capture id in the current and all outer environments. 
void HCapturedObject::ReplayEnvironment(HEnvironment* env) { - ASSERT(env != NULL); + DCHECK(env != NULL); while (env != NULL) { ReplayEnvironmentNested(env->values(), this); env = env->outer(); @@ -2667,22 +2631,22 @@ void HCapturedObject::ReplayEnvironment(HEnvironment* env) { } -void HCapturedObject::PrintDataTo(StringStream* stream) { - stream->Add("#%d ", capture_id()); - HDematerializedObject::PrintDataTo(stream); +OStream& HCapturedObject::PrintDataTo(OStream& os) const { // NOLINT + os << "#" << capture_id() << " "; + return HDematerializedObject::PrintDataTo(os); } void HEnterInlined::RegisterReturnTarget(HBasicBlock* return_target, Zone* zone) { - ASSERT(return_target->IsInlineReturnTarget()); + DCHECK(return_target->IsInlineReturnTarget()); return_targets_.Add(return_target, zone); } -void HEnterInlined::PrintDataTo(StringStream* stream) { - SmartArrayPointer<char> name = function()->debug_name()->ToCString(); - stream->Add("%s, id=%d", name.get(), function()->id().ToInt()); +OStream& HEnterInlined::PrintDataTo(OStream& os) const { // NOLINT + return os << function()->debug_name()->ToCString().get() + << ", id=" << function()->id().ToInt(); } @@ -2693,7 +2657,7 @@ static bool IsInteger32(double value) { HConstant::HConstant(Handle<Object> object, Representation r) - : HTemplateInstruction<0>(HType::TypeFromValue(object)), + : HTemplateInstruction<0>(HType::FromValue(object)), object_(Unique<Object>::CreateUninitialized(object)), object_map_(Handle<Map>::null()), has_stable_map_value_(false), @@ -2751,8 +2715,8 @@ HConstant::HConstant(Unique<Object> object, boolean_value_(boolean_value), is_undetectable_(is_undetectable), instance_type_(instance_type) { - ASSERT(!object.handle().is_null()); - ASSERT(!type.IsTaggedNumber()); + DCHECK(!object.handle().is_null()); + DCHECK(!type.IsTaggedNumber() || type.IsNone()); Initialize(r); } @@ -2810,7 +2774,7 @@ HConstant::HConstant(double double_value, HConstant::HConstant(ExternalReference reference) - : HTemplateInstruction<0>(HType::None()), + : HTemplateInstruction<0>(HType::Any()), object_(Unique<Object>(Handle<Object>::null())), object_map_(Handle<Map>::null()), has_stable_map_value_(false), @@ -2868,10 +2832,10 @@ bool HConstant::ImmortalImmovable() const { return false; } - ASSERT(!object_.handle().is_null()); + DCHECK(!object_.handle().is_null()); Heap* heap = isolate()->heap(); - ASSERT(!object_.IsKnownGlobal(heap->minus_zero_value())); - ASSERT(!object_.IsKnownGlobal(heap->nan_value())); + DCHECK(!object_.IsKnownGlobal(heap->minus_zero_value())); + DCHECK(!object_.IsKnownGlobal(heap->nan_value())); return #define IMMORTAL_IMMOVABLE_ROOT(name) \ object_.IsKnownGlobal(heap->name()) || @@ -2890,15 +2854,16 @@ bool HConstant::ImmortalImmovable() const { bool HConstant::EmitAtUses() { - ASSERT(IsLinked()); + DCHECK(IsLinked()); if (block()->graph()->has_osr() && block()->graph()->IsStandardConstant(this)) { // TODO(titzer): this seems like a hack that should be fixed by custom OSR. 
return true; } - if (UseCount() == 0) return true; + if (HasNoUses()) return true; if (IsCell()) return false; if (representation().IsDouble()) return false; + if (representation().IsExternal()) return false; return true; } @@ -2917,7 +2882,7 @@ HConstant* HConstant::CopyToRepresentation(Representation r, Zone* zone) const { if (has_external_reference_value_) { return new(zone) HConstant(external_reference_value_); } - ASSERT(!object_.handle().is_null()); + DCHECK(!object_.handle().is_null()); return new(zone) HConstant(object_, object_map_, has_stable_map_value_, @@ -2954,7 +2919,7 @@ Maybe<HConstant*> HConstant::CopyToTruncatedNumber(Zone* zone) { res = handle->BooleanValue() ? new(zone) HConstant(1) : new(zone) HConstant(0); } else if (handle->IsUndefined()) { - res = new(zone) HConstant(OS::nan_value()); + res = new(zone) HConstant(base::OS::nan_value()); } else if (handle->IsNull()) { res = new(zone) HConstant(0); } @@ -2962,41 +2927,35 @@ Maybe<HConstant*> HConstant::CopyToTruncatedNumber(Zone* zone) { } -void HConstant::PrintDataTo(StringStream* stream) { +OStream& HConstant::PrintDataTo(OStream& os) const { // NOLINT if (has_int32_value_) { - stream->Add("%d ", int32_value_); + os << int32_value_ << " "; } else if (has_double_value_) { - stream->Add("%f ", FmtElm(double_value_)); + os << double_value_ << " "; } else if (has_external_reference_value_) { - stream->Add("%p ", reinterpret_cast<void*>( - external_reference_value_.address())); + os << reinterpret_cast<void*>(external_reference_value_.address()) << " "; } else { - handle(Isolate::Current())->ShortPrint(stream); - stream->Add(" "); - if (HasStableMapValue()) { - stream->Add("[stable-map] "); - } - if (HasObjectMap()) { - stream->Add("[map %p] ", *ObjectMap().handle()); - } - } - if (!is_not_in_new_space_) { - stream->Add("[new space] "); + // The handle() method is silently and lazily mutating the object. + Handle<Object> h = const_cast<HConstant*>(this)->handle(Isolate::Current()); + os << Brief(*h) << " "; + if (HasStableMapValue()) os << "[stable-map] "; + if (HasObjectMap()) os << "[map " << *ObjectMap().handle() << "] "; } + if (!is_not_in_new_space_) os << "[new space] "; + return os; } -void HBinaryOperation::PrintDataTo(StringStream* stream) { - left()->PrintNameTo(stream); - stream->Add(" "); - right()->PrintNameTo(stream); - if (CheckFlag(kCanOverflow)) stream->Add(" !"); - if (CheckFlag(kBailoutOnMinusZero)) stream->Add(" -0?"); +OStream& HBinaryOperation::PrintDataTo(OStream& os) const { // NOLINT + os << NameOf(left()) << " " << NameOf(right()); + if (CheckFlag(kCanOverflow)) os << " !"; + if (CheckFlag(kBailoutOnMinusZero)) os << " -0?"; + return os; } void HBinaryOperation::InferRepresentation(HInferRepresentationPhase* h_infer) { - ASSERT(CheckFlag(kFlexibleRepresentation)); + DCHECK(CheckFlag(kFlexibleRepresentation)); Representation new_rep = RepresentationFromInputs(); UpdateRepresentation(new_rep, h_infer, "inputs"); @@ -3063,7 +3022,7 @@ void HBinaryOperation::AssumeRepresentation(Representation r) { void HMathMinMax::InferRepresentation(HInferRepresentationPhase* h_infer) { - ASSERT(CheckFlag(kFlexibleRepresentation)); + DCHECK(CheckFlag(kFlexibleRepresentation)); Representation new_rep = RepresentationFromInputs(); UpdateRepresentation(new_rep, h_infer, "inputs"); // Do not care about uses. 
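// Editorial note on the printing idiom used throughout this hunk: NameOf,
// TypeOf and ChangesOf are small tag structs with operator<< overloads
// (declared later in hydrogen-instructions.h), so a call site picks which
// aspect of a value to print, e.g.
//   os << NameOf(left()) << " " << NameOf(right());
// instead of paired PrintNameTo(stream) calls. The const_cast in
// HConstant::PrintDataTo is deliberate: handle() lazily materializes the
// backing object, and the cast makes that mutation explicit inside a
// const printer.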
@@ -3214,35 +3173,27 @@ Range* HLoadKeyed::InferRange(Zone* zone) { } -void HCompareGeneric::PrintDataTo(StringStream* stream) { - stream->Add(Token::Name(token())); - stream->Add(" "); - HBinaryOperation::PrintDataTo(stream); +OStream& HCompareGeneric::PrintDataTo(OStream& os) const { // NOLINT + os << Token::Name(token()) << " "; + return HBinaryOperation::PrintDataTo(os); } -void HStringCompareAndBranch::PrintDataTo(StringStream* stream) { - stream->Add(Token::Name(token())); - stream->Add(" "); - HControlInstruction::PrintDataTo(stream); +OStream& HStringCompareAndBranch::PrintDataTo(OStream& os) const { // NOLINT + os << Token::Name(token()) << " "; + return HControlInstruction::PrintDataTo(os); } -void HCompareNumericAndBranch::PrintDataTo(StringStream* stream) { - stream->Add(Token::Name(token())); - stream->Add(" "); - left()->PrintNameTo(stream); - stream->Add(" "); - right()->PrintNameTo(stream); - HControlInstruction::PrintDataTo(stream); +OStream& HCompareNumericAndBranch::PrintDataTo(OStream& os) const { // NOLINT + os << Token::Name(token()) << " " << NameOf(left()) << " " << NameOf(right()); + return HControlInstruction::PrintDataTo(os); } -void HCompareObjectEqAndBranch::PrintDataTo(StringStream* stream) { - left()->PrintNameTo(stream); - stream->Add(" "); - right()->PrintNameTo(stream); - HControlInstruction::PrintDataTo(stream); +OStream& HCompareObjectEqAndBranch::PrintDataTo(OStream& os) const { // NOLINT + os << NameOf(left()) << " " << NameOf(right()); + return HControlInstruction::PrintDataTo(os); } @@ -3285,11 +3236,27 @@ bool HIsObjectAndBranch::KnownSuccessorBlock(HBasicBlock** block) { bool HIsStringAndBranch::KnownSuccessorBlock(HBasicBlock** block) { + if (known_successor_index() != kNoKnownSuccessorIndex) { + *block = SuccessorAt(known_successor_index()); + return true; + } if (FLAG_fold_constants && value()->IsConstant()) { *block = HConstant::cast(value())->HasStringValue() ? 
FirstSuccessor() : SecondSuccessor(); return true; } + if (value()->type().IsString()) { + *block = FirstSuccessor(); + return true; + } + if (value()->type().IsSmi() || + value()->type().IsNull() || + value()->type().IsBoolean() || + value()->type().IsUndefined() || + value()->type().IsJSObject()) { + *block = SecondSuccessor(); + return true; + } *block = NULL; return false; } @@ -3364,9 +3331,8 @@ void HCompareMinusZeroAndBranch::InferRepresentation( } - -void HGoto::PrintDataTo(StringStream* stream) { - stream->Add("B%d", SuccessorAt(0)->block_id()); +OStream& HGoto::PrintDataTo(OStream& os) const { // NOLINT + return os << *SuccessorAt(0); } @@ -3409,64 +3375,65 @@ void HCompareNumericAndBranch::InferRepresentation( } -void HParameter::PrintDataTo(StringStream* stream) { - stream->Add("%u", index()); +OStream& HParameter::PrintDataTo(OStream& os) const { // NOLINT + return os << index(); } -void HLoadNamedField::PrintDataTo(StringStream* stream) { - object()->PrintNameTo(stream); - access_.PrintTo(stream); +OStream& HLoadNamedField::PrintDataTo(OStream& os) const { // NOLINT + os << NameOf(object()) << access_; if (maps() != NULL) { - stream->Add(" [%p", *maps()->at(0).handle()); + os << " [" << *maps()->at(0).handle(); for (int i = 1; i < maps()->size(); ++i) { - stream->Add(",%p", *maps()->at(i).handle()); + os << "," << *maps()->at(i).handle(); } - stream->Add("]"); + os << "]"; } - if (HasDependency()) { - stream->Add(" "); - dependency()->PrintNameTo(stream); - } + if (HasDependency()) os << " " << NameOf(dependency()); + return os; } -void HLoadNamedGeneric::PrintDataTo(StringStream* stream) { - object()->PrintNameTo(stream); - stream->Add("."); - stream->Add(String::cast(*name())->ToCString().get()); +OStream& HLoadNamedGeneric::PrintDataTo(OStream& os) const { // NOLINT + Handle<String> n = Handle<String>::cast(name()); + return os << NameOf(object()) << "." << n->ToCString().get(); } -void HLoadKeyed::PrintDataTo(StringStream* stream) { +OStream& HLoadKeyed::PrintDataTo(OStream& os) const { // NOLINT if (!is_external()) { - elements()->PrintNameTo(stream); + os << NameOf(elements()); } else { - ASSERT(elements_kind() >= FIRST_EXTERNAL_ARRAY_ELEMENTS_KIND && + DCHECK(elements_kind() >= FIRST_EXTERNAL_ARRAY_ELEMENTS_KIND && elements_kind() <= LAST_EXTERNAL_ARRAY_ELEMENTS_KIND); - elements()->PrintNameTo(stream); - stream->Add("."); - stream->Add(ElementsKindToString(elements_kind())); + os << NameOf(elements()) << "." 
<< ElementsKindToString(elements_kind()); } - stream->Add("["); - key()->PrintNameTo(stream); - if (IsDehoisted()) { - stream->Add(" + %d]", index_offset()); - } else { - stream->Add("]"); - } + os << "[" << NameOf(key()); + if (IsDehoisted()) os << " + " << base_offset(); + os << "]"; - if (HasDependency()) { - stream->Add(" "); - dependency()->PrintNameTo(stream); - } + if (HasDependency()) os << " " << NameOf(dependency()); + if (RequiresHoleCheck()) os << " check_hole"; + return os; +} - if (RequiresHoleCheck()) { - stream->Add(" check_hole"); - } + +bool HLoadKeyed::TryIncreaseBaseOffset(uint32_t increase_by_value) { + // The base offset is usually simply the size of the array header, except + // with dehoisting adds an addition offset due to a array index key + // manipulation, in which case it becomes (array header size + + // constant-offset-from-key * kPointerSize) + uint32_t base_offset = BaseOffsetField::decode(bit_field_); + v8::base::internal::CheckedNumeric<uint32_t> addition_result = base_offset; + addition_result += increase_by_value; + if (!addition_result.IsValid()) return false; + base_offset = addition_result.ValueOrDie(); + if (!BaseOffsetField::is_valid(base_offset)) return false; + bit_field_ = BaseOffsetField::update(bit_field_, base_offset); + return true; } @@ -3523,11 +3490,8 @@ bool HLoadKeyed::RequiresHoleCheck() const { } -void HLoadKeyedGeneric::PrintDataTo(StringStream* stream) { - object()->PrintNameTo(stream); - stream->Add("["); - key()->PrintNameTo(stream); - stream->Add("]"); +OStream& HLoadKeyedGeneric::PrintDataTo(OStream& os) const { // NOLINT + return os << NameOf(object()) << "[" << NameOf(key()) << "]"; } @@ -3568,79 +3532,60 @@ HValue* HLoadKeyedGeneric::Canonicalize() { } -void HStoreNamedGeneric::PrintDataTo(StringStream* stream) { - object()->PrintNameTo(stream); - stream->Add("."); - ASSERT(name()->IsString()); - stream->Add(String::cast(*name())->ToCString().get()); - stream->Add(" = "); - value()->PrintNameTo(stream); +OStream& HStoreNamedGeneric::PrintDataTo(OStream& os) const { // NOLINT + Handle<String> n = Handle<String>::cast(name()); + return os << NameOf(object()) << "." << n->ToCString().get() << " = " + << NameOf(value()); } -void HStoreNamedField::PrintDataTo(StringStream* stream) { - object()->PrintNameTo(stream); - access_.PrintTo(stream); - stream->Add(" = "); - value()->PrintNameTo(stream); - if (NeedsWriteBarrier()) { - stream->Add(" (write-barrier)"); - } - if (has_transition()) { - stream->Add(" (transition map %p)", *transition_map()); - } +OStream& HStoreNamedField::PrintDataTo(OStream& os) const { // NOLINT + os << NameOf(object()) << access_ << " = " << NameOf(value()); + if (NeedsWriteBarrier()) os << " (write-barrier)"; + if (has_transition()) os << " (transition map " << *transition_map() << ")"; + return os; } -void HStoreKeyed::PrintDataTo(StringStream* stream) { +OStream& HStoreKeyed::PrintDataTo(OStream& os) const { // NOLINT if (!is_external()) { - elements()->PrintNameTo(stream); + os << NameOf(elements()); } else { - elements()->PrintNameTo(stream); - stream->Add("."); - stream->Add(ElementsKindToString(elements_kind())); - ASSERT(elements_kind() >= FIRST_EXTERNAL_ARRAY_ELEMENTS_KIND && + DCHECK(elements_kind() >= FIRST_EXTERNAL_ARRAY_ELEMENTS_KIND && elements_kind() <= LAST_EXTERNAL_ARRAY_ELEMENTS_KIND); + os << NameOf(elements()) << "." 
<< ElementsKindToString(elements_kind()); } - stream->Add("["); - key()->PrintNameTo(stream); - if (IsDehoisted()) { - stream->Add(" + %d] = ", index_offset()); - } else { - stream->Add("] = "); - } - - value()->PrintNameTo(stream); + os << "[" << NameOf(key()); + if (IsDehoisted()) os << " + " << base_offset(); + return os << "] = " << NameOf(value()); } -void HStoreKeyedGeneric::PrintDataTo(StringStream* stream) { - object()->PrintNameTo(stream); - stream->Add("["); - key()->PrintNameTo(stream); - stream->Add("] = "); - value()->PrintNameTo(stream); +OStream& HStoreKeyedGeneric::PrintDataTo(OStream& os) const { // NOLINT + return os << NameOf(object()) << "[" << NameOf(key()) + << "] = " << NameOf(value()); } -void HTransitionElementsKind::PrintDataTo(StringStream* stream) { - object()->PrintNameTo(stream); +OStream& HTransitionElementsKind::PrintDataTo(OStream& os) const { // NOLINT + os << NameOf(object()); ElementsKind from_kind = original_map().handle()->elements_kind(); ElementsKind to_kind = transitioned_map().handle()->elements_kind(); - stream->Add(" %p [%s] -> %p [%s]", - *original_map().handle(), - ElementsAccessor::ForKind(from_kind)->name(), - *transitioned_map().handle(), - ElementsAccessor::ForKind(to_kind)->name()); - if (IsSimpleMapChangeTransition(from_kind, to_kind)) stream->Add(" (simple)"); + os << " " << *original_map().handle() << " [" + << ElementsAccessor::ForKind(from_kind)->name() << "] -> " + << *transitioned_map().handle() << " [" + << ElementsAccessor::ForKind(to_kind)->name() << "]"; + if (IsSimpleMapChangeTransition(from_kind, to_kind)) os << " (simple)"; + return os; } -void HLoadGlobalCell::PrintDataTo(StringStream* stream) { - stream->Add("[%p]", *cell().handle()); - if (!details_.IsDontDelete()) stream->Add(" (deleteable)"); - if (details_.IsReadOnly()) stream->Add(" (read-only)"); +OStream& HLoadGlobalCell::PrintDataTo(OStream& os) const { // NOLINT + os << "[" << *cell().handle() << "]"; + if (!details_.IsDontDelete()) os << " (deleteable)"; + if (details_.IsReadOnly()) os << " (read-only)"; + return os; } @@ -3654,36 +3599,33 @@ bool HLoadGlobalCell::RequiresHoleCheck() const { } -void HLoadGlobalGeneric::PrintDataTo(StringStream* stream) { - stream->Add("%o ", *name()); +OStream& HLoadGlobalGeneric::PrintDataTo(OStream& os) const { // NOLINT + return os << name()->ToCString().get() << " "; } -void HInnerAllocatedObject::PrintDataTo(StringStream* stream) { - base_object()->PrintNameTo(stream); - stream->Add(" offset "); - offset()->PrintTo(stream); +OStream& HInnerAllocatedObject::PrintDataTo(OStream& os) const { // NOLINT + os << NameOf(base_object()) << " offset "; + return offset()->PrintTo(os); } -void HStoreGlobalCell::PrintDataTo(StringStream* stream) { - stream->Add("[%p] = ", *cell().handle()); - value()->PrintNameTo(stream); - if (!details_.IsDontDelete()) stream->Add(" (deleteable)"); - if (details_.IsReadOnly()) stream->Add(" (read-only)"); +OStream& HStoreGlobalCell::PrintDataTo(OStream& os) const { // NOLINT + os << "[" << *cell().handle() << "] = " << NameOf(value()); + if (!details_.IsDontDelete()) os << " (deleteable)"; + if (details_.IsReadOnly()) os << " (read-only)"; + return os; } -void HLoadContextSlot::PrintDataTo(StringStream* stream) { - value()->PrintNameTo(stream); - stream->Add("[%d]", slot_index()); +OStream& HLoadContextSlot::PrintDataTo(OStream& os) const { // NOLINT + return os << NameOf(value()) << "[" << slot_index() << "]"; } -void HStoreContextSlot::PrintDataTo(StringStream* stream) { - 
context()->PrintNameTo(stream); - stream->Add("[%d] = ", slot_index()); - value()->PrintNameTo(stream); +OStream& HStoreContextSlot::PrintDataTo(OStream& os) const { // NOLINT + return os << NameOf(context()) << "[" << slot_index() + << "] = " << NameOf(value()); } @@ -3732,7 +3674,7 @@ Representation HUnaryMathOperation::RepresentationFromInputs() { bool HAllocate::HandleSideEffectDominator(GVNFlag side_effect, HValue* dominator) { - ASSERT(side_effect == kNewSpacePromotion); + DCHECK(side_effect == kNewSpacePromotion); Zone* zone = block()->zone(); if (!FLAG_use_allocation_folding) return false; @@ -3759,10 +3701,10 @@ bool HAllocate::HandleSideEffectDominator(GVNFlag side_effect, HValue* current_size = size(); // TODO(hpayer): Add support for non-constant allocation in dominator. - if (!current_size->IsInteger32Constant() || - !dominator_size->IsInteger32Constant()) { + if (!dominator_size->IsInteger32Constant()) { if (FLAG_trace_allocation_folding) { - PrintF("#%d (%s) cannot fold into #%d (%s), dynamic allocation size\n", + PrintF("#%d (%s) cannot fold into #%d (%s), " + "dynamic allocation size in dominator\n", id(), Mnemonic(), dominator->id(), dominator->Mnemonic()); } return false; @@ -3773,7 +3715,33 @@ bool HAllocate::HandleSideEffectDominator(GVNFlag side_effect, return false; } - ASSERT((IsNewSpaceAllocation() && + if (!has_size_upper_bound()) { + if (FLAG_trace_allocation_folding) { + PrintF("#%d (%s) cannot fold into #%d (%s), " + "can't estimate total allocation size\n", + id(), Mnemonic(), dominator->id(), dominator->Mnemonic()); + } + return false; + } + + if (!current_size->IsInteger32Constant()) { + // If it's not constant then it is a size_in_bytes calculation graph + // like this: (const_header_size + const_element_size * size). + DCHECK(current_size->IsInstruction()); + + HInstruction* current_instr = HInstruction::cast(current_size); + if (!current_instr->Dominates(dominator_allocate)) { + if (FLAG_trace_allocation_folding) { + PrintF("#%d (%s) cannot fold into #%d (%s), dynamic size " + "value does not dominate target allocation\n", + id(), Mnemonic(), dominator_allocate->id(), + dominator_allocate->Mnemonic()); + } + return false; + } + } + + DCHECK((IsNewSpaceAllocation() && dominator_allocate->IsNewSpaceAllocation()) || (IsOldDataSpaceAllocation() && dominator_allocate->IsOldDataSpaceAllocation()) || @@ -3785,20 +3753,16 @@ bool HAllocate::HandleSideEffectDominator(GVNFlag side_effect, int32_t original_object_size = HConstant::cast(dominator_size)->GetInteger32Constant(); int32_t dominator_size_constant = original_object_size; - int32_t current_size_constant = - HConstant::cast(current_size)->GetInteger32Constant(); - int32_t new_dominator_size = dominator_size_constant + current_size_constant; if (MustAllocateDoubleAligned()) { - if (!dominator_allocate->MustAllocateDoubleAligned()) { - dominator_allocate->MakeDoubleAligned(); - } if ((dominator_size_constant & kDoubleAlignmentMask) != 0) { dominator_size_constant += kDoubleSize / 2; - new_dominator_size += kDoubleSize / 2; } } + int32_t current_size_max_value = size_upper_bound()->GetInteger32Constant(); + int32_t new_dominator_size = dominator_size_constant + current_size_max_value; + // Since we clear the first word after folded memory, we cannot use the // whole Page::kMaxRegularHeapObjectSize memory. 
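  // Editorial worked example (illustrative numbers, not from the patch): if
  // the dominator allocates 24 bytes, needs a +4 double-alignment bump, and
  // the current allocation is non-constant with size_upper_bound() == 48,
  //   new_dominator_size = 24 + 4 + 48 = 76 bytes,
  // and folding is rejected below unless that still fits in
  // Page::kMaxRegularHeapObjectSize - kPointerSize.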
if (new_dominator_size > Page::kMaxRegularHeapObjectSize - kPointerSize) { @@ -3810,27 +3774,54 @@ bool HAllocate::HandleSideEffectDominator(GVNFlag side_effect, return false; } - HInstruction* new_dominator_size_constant = HConstant::CreateAndInsertBefore( - zone, - context(), - new_dominator_size, - Representation::None(), - dominator_allocate); - dominator_allocate->UpdateSize(new_dominator_size_constant); + HInstruction* new_dominator_size_value; + + if (current_size->IsInteger32Constant()) { + new_dominator_size_value = + HConstant::CreateAndInsertBefore(zone, + context(), + new_dominator_size, + Representation::None(), + dominator_allocate); + } else { + HValue* new_dominator_size_constant = + HConstant::CreateAndInsertBefore(zone, + context(), + dominator_size_constant, + Representation::Integer32(), + dominator_allocate); + + // Add old and new size together and insert. + current_size->ChangeRepresentation(Representation::Integer32()); + + new_dominator_size_value = HAdd::New(zone, context(), + new_dominator_size_constant, current_size); + new_dominator_size_value->ClearFlag(HValue::kCanOverflow); + new_dominator_size_value->ChangeRepresentation(Representation::Integer32()); + + new_dominator_size_value->InsertBefore(dominator_allocate); + } + + dominator_allocate->UpdateSize(new_dominator_size_value); + + if (MustAllocateDoubleAligned()) { + if (!dominator_allocate->MustAllocateDoubleAligned()) { + dominator_allocate->MakeDoubleAligned(); + } + } + bool keep_new_space_iterable = FLAG_log_gc || FLAG_heap_stats; #ifdef VERIFY_HEAP - if (FLAG_verify_heap && dominator_allocate->IsNewSpaceAllocation()) { + keep_new_space_iterable = keep_new_space_iterable || FLAG_verify_heap; +#endif + + if (keep_new_space_iterable && dominator_allocate->IsNewSpaceAllocation()) { dominator_allocate->MakePrefillWithFiller(); } else { // TODO(hpayer): This is a short-term hack to make allocation mementos // work again in new space. dominator_allocate->ClearNextMapWord(original_object_size); } -#else - // TODO(hpayer): This is a short-term hack to make allocation mementos - // work again in new space. 
- dominator_allocate->ClearNextMapWord(original_object_size); -#endif dominator_allocate->UpdateClearNextMapWord(MustClearNextMapWord()); @@ -3898,7 +3889,7 @@ HAllocate* HAllocate::GetFoldableDominator(HAllocate* dominator) { return NULL; } - ASSERT((IsOldDataSpaceAllocation() && + DCHECK((IsOldDataSpaceAllocation() && dominator_dominator->IsOldDataSpaceAllocation()) || (IsOldPointerSpaceAllocation() && dominator_dominator->IsOldPointerSpaceAllocation())); @@ -3924,7 +3915,7 @@ HAllocate* HAllocate::GetFoldableDominator(HAllocate* dominator) { void HAllocate::UpdateFreeSpaceFiller(int32_t free_space_size) { - ASSERT(filler_free_space_size_ != NULL); + DCHECK(filler_free_space_size_ != NULL); Zone* zone = block()->zone(); // We must explicitly force Smi representation here because on x64 we // would otherwise automatically choose int32, but the actual store @@ -3941,7 +3932,7 @@ void HAllocate::UpdateFreeSpaceFiller(int32_t free_space_size) { void HAllocate::CreateFreeSpaceFiller(int32_t free_space_size) { - ASSERT(filler_free_space_size_ == NULL); + DCHECK(filler_free_space_size_ == NULL); Zone* zone = block()->zone(); HInstruction* free_space_instr = HInnerAllocatedObject::New(zone, context(), dominating_allocate_, @@ -3949,7 +3940,7 @@ void HAllocate::CreateFreeSpaceFiller(int32_t free_space_size) { free_space_instr->InsertBefore(this); HConstant* filler_map = HConstant::CreateAndInsertAfter( zone, Unique<Map>::CreateImmovable( - isolate()->factory()->free_space_map()), free_space_instr); + isolate()->factory()->free_space_map()), true, free_space_instr); HInstruction* store_map = HStoreNamedField::New(zone, context(), free_space_instr, HObjectAccess::ForMap(), filler_map); store_map->SetFlag(HValue::kHasNoObservableSideEffects); @@ -3987,15 +3978,27 @@ void HAllocate::ClearNextMapWord(int offset) { } -void HAllocate::PrintDataTo(StringStream* stream) { - size()->PrintNameTo(stream); - stream->Add(" ("); - if (IsNewSpaceAllocation()) stream->Add("N"); - if (IsOldPointerSpaceAllocation()) stream->Add("P"); - if (IsOldDataSpaceAllocation()) stream->Add("D"); - if (MustAllocateDoubleAligned()) stream->Add("A"); - if (MustPrefillWithFiller()) stream->Add("F"); - stream->Add(")"); +OStream& HAllocate::PrintDataTo(OStream& os) const { // NOLINT + os << NameOf(size()) << " ("; + if (IsNewSpaceAllocation()) os << "N"; + if (IsOldPointerSpaceAllocation()) os << "P"; + if (IsOldDataSpaceAllocation()) os << "D"; + if (MustAllocateDoubleAligned()) os << "A"; + if (MustPrefillWithFiller()) os << "F"; + return os << ")"; +} + + +bool HStoreKeyed::TryIncreaseBaseOffset(uint32_t increase_by_value) { + // The base offset is usually simply the size of the array header, except + // with dehoisting adds an addition offset due to a array index key + // manipulation, in which case it becomes (array header size + + // constant-offset-from-key * kPointerSize) + v8::base::internal::CheckedNumeric<uint32_t> addition_result = base_offset_; + addition_result += increase_by_value; + if (!addition_result.IsValid()) return false; + base_offset_ = addition_result.ValueOrDie(); + return true; } @@ -4073,10 +4076,9 @@ HInstruction* HStringAdd::New(Zone* zone, Handle<String> right_string = c_right->StringValue(); // Prevent possible exception by invalid string length. 
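        // Editorial note: in TryIncreaseBaseOffset above, CheckedNumeric<uint32_t>
        // reports a wrapped addition through IsValid() instead of silently
        // corrupting the dehoisted base offset. In the string fold below,
        // NewConsString now returns a MaybeHandle<String>, so ToHandleChecked()
        // subsumes the old ASSERT(!concat.is_null()); the length guard itself
        // cannot overflow, assuming both operand lengths are bounded by
        // String::kMaxLength.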
if (left_string->length() + right_string->length() < String::kMaxLength) { - Handle<String> concat = zone->isolate()->factory()->NewFlatConcatString( + MaybeHandle<String> concat = zone->isolate()->factory()->NewConsString( c_left->StringValue(), c_right->StringValue()); - ASSERT(!concat.is_null()); - return HConstant::New(zone, context, concat); + return HConstant::New(zone, context, concat.ToHandleChecked()); } } } @@ -4085,19 +4087,21 @@ HInstruction* HStringAdd::New(Zone* zone, } -void HStringAdd::PrintDataTo(StringStream* stream) { +OStream& HStringAdd::PrintDataTo(OStream& os) const { // NOLINT if ((flags() & STRING_ADD_CHECK_BOTH) == STRING_ADD_CHECK_BOTH) { - stream->Add("_CheckBoth"); + os << "_CheckBoth"; } else if ((flags() & STRING_ADD_CHECK_BOTH) == STRING_ADD_CHECK_LEFT) { - stream->Add("_CheckLeft"); + os << "_CheckLeft"; } else if ((flags() & STRING_ADD_CHECK_BOTH) == STRING_ADD_CHECK_RIGHT) { - stream->Add("_CheckRight"); + os << "_CheckRight"; } - HBinaryOperation::PrintDataTo(stream); - stream->Add(" ("); - if (pretenure_flag() == NOT_TENURED) stream->Add("N"); - else if (pretenure_flag() == TENURED) stream->Add("D"); - stream->Add(")"); + HBinaryOperation::PrintDataTo(os); + os << " ("; + if (pretenure_flag() == NOT_TENURED) + os << "N"; + else if (pretenure_flag() == TENURED) + os << "D"; + return os << ")"; } @@ -4128,7 +4132,7 @@ HInstruction* HUnaryMathOperation::New( if (!constant->HasNumberValue()) break; double d = constant->DoubleValue(); if (std::isnan(d)) { // NaN poisons everything. - return H_CONSTANT_DOUBLE(OS::nan_value()); + return H_CONSTANT_DOUBLE(base::OS::nan_value()); } if (std::isinf(d)) { // +Infinity and -Infinity. switch (op) { @@ -4136,11 +4140,12 @@ HInstruction* HUnaryMathOperation::New( return H_CONSTANT_DOUBLE((d > 0.0) ? d : 0.0); case kMathLog: case kMathSqrt: - return H_CONSTANT_DOUBLE((d > 0.0) ? d : OS::nan_value()); + return H_CONSTANT_DOUBLE((d > 0.0) ? d : base::OS::nan_value()); case kMathPowHalf: case kMathAbs: return H_CONSTANT_DOUBLE((d > 0.0) ? d : -d); case kMathRound: + case kMathFround: case kMathFloor: return H_CONSTANT_DOUBLE(d); case kMathClz32: @@ -4167,9 +4172,11 @@ HInstruction* HUnaryMathOperation::New( // Doubles are represented as Significant * 2 ^ Exponent. If the // Exponent is not negative, the double value is already an integer. if (Double(d).Exponent() >= 0) return H_CONSTANT_DOUBLE(d); - return H_CONSTANT_DOUBLE(std::floor(d + 0.5)); + return H_CONSTANT_DOUBLE(Floor(d + 0.5)); + case kMathFround: + return H_CONSTANT_DOUBLE(static_cast<double>(static_cast<float>(d))); case kMathFloor: - return H_CONSTANT_DOUBLE(std::floor(d)); + return H_CONSTANT_DOUBLE(Floor(d)); case kMathClz32: { uint32_t i = DoubleToUint32(d); return H_CONSTANT_INT( @@ -4231,7 +4238,8 @@ HInstruction* HPower::New(Zone* zone, if (c_left->HasNumberValue() && c_right->HasNumberValue()) { double result = power_helper(c_left->DoubleValue(), c_right->DoubleValue()); - return H_CONSTANT_DOUBLE(std::isnan(result) ? OS::nan_value() : result); + return H_CONSTANT_DOUBLE(std::isnan(result) ? base::OS::nan_value() + : result); } } return new(zone) HPower(left, right); @@ -4264,7 +4272,7 @@ HInstruction* HMathMinMax::New( } } // All comparisons failed, must be NaN. 
- return H_CONSTANT_DOUBLE(OS::nan_value()); + return H_CONSTANT_DOUBLE(base::OS::nan_value()); } } return new(zone) HMathMinMax(context, left, right, op); @@ -4402,8 +4410,8 @@ HInstruction* HSeqStringGetChar::New(Zone* zone, if (c_string->HasStringValue() && c_index->HasInteger32Value()) { Handle<String> s = c_string->StringValue(); int32_t i = c_index->Integer32Value(); - ASSERT_LE(0, i); - ASSERT_LT(i, s->length()); + DCHECK_LE(0, i); + DCHECK_LT(i, s->length()); return H_CONSTANT_INT(s->Get(i)); } } @@ -4415,10 +4423,9 @@ HInstruction* HSeqStringGetChar::New(Zone* zone, #undef H_CONSTANT_DOUBLE -void HBitwise::PrintDataTo(StringStream* stream) { - stream->Add(Token::Name(op_)); - stream->Add(" "); - HBitwiseBinaryOperation::PrintDataTo(stream); +OStream& HBitwise::PrintDataTo(OStream& os) const { // NOLINT + os << Token::Name(op_) << " "; + return HBitwiseBinaryOperation::PrintDataTo(os); } @@ -4459,7 +4466,7 @@ void HPhi::SimplifyConstantInputs() { void HPhi::InferRepresentation(HInferRepresentationPhase* h_infer) { - ASSERT(CheckFlag(kFlexibleRepresentation)); + DCHECK(CheckFlag(kFlexibleRepresentation)); Representation new_rep = RepresentationFromInputs(); UpdateRepresentation(new_rep, h_infer, "inputs"); new_rep = RepresentationFromUses(); @@ -4523,12 +4530,12 @@ bool HValue::HasNonSmiUse() { #ifdef DEBUG void HPhi::Verify() { - ASSERT(OperandCount() == block()->predecessors()->length()); + DCHECK(OperandCount() == block()->predecessors()->length()); for (int i = 0; i < OperandCount(); ++i) { HValue* value = OperandAt(i); HBasicBlock* defining_block = value->block(); HBasicBlock* predecessor_block = block()->predecessors()->at(i); - ASSERT(defining_block == predecessor_block || + DCHECK(defining_block == predecessor_block || defining_block->Dominates(predecessor_block)); } } @@ -4536,27 +4543,27 @@ void HPhi::Verify() { void HSimulate::Verify() { HInstruction::Verify(); - ASSERT(HasAstId() || next()->IsEnterInlined()); + DCHECK(HasAstId() || next()->IsEnterInlined()); } void HCheckHeapObject::Verify() { HInstruction::Verify(); - ASSERT(HasNoUses()); + DCHECK(HasNoUses()); } void HCheckValue::Verify() { HInstruction::Verify(); - ASSERT(HasNoUses()); + DCHECK(HasNoUses()); } #endif HObjectAccess HObjectAccess::ForFixedArrayHeader(int offset) { - ASSERT(offset >= 0); - ASSERT(offset < FixedArray::kHeaderSize); + DCHECK(offset >= 0); + DCHECK(offset < FixedArray::kHeaderSize); if (offset == FixedArray::kLengthOffset) return ForFixedArrayLength(); return HObjectAccess(kInobject, offset); } @@ -4564,7 +4571,7 @@ HObjectAccess HObjectAccess::ForFixedArrayHeader(int offset) { HObjectAccess HObjectAccess::ForMapAndOffset(Handle<Map> map, int offset, Representation representation) { - ASSERT(offset >= 0); + DCHECK(offset >= 0); Portion portion = kInobject; if (offset == JSObject::kElementsOffset) { @@ -4604,16 +4611,16 @@ HObjectAccess HObjectAccess::ForAllocationSiteOffset(int offset) { HObjectAccess HObjectAccess::ForContextSlot(int index) { - ASSERT(index >= 0); + DCHECK(index >= 0); Portion portion = kInobject; int offset = Context::kHeaderSize + index * kPointerSize; - ASSERT_EQ(offset, Context::SlotOffset(index) + kHeapObjectTag); + DCHECK_EQ(offset, Context::SlotOffset(index) + kHeapObjectTag); return HObjectAccess(portion, offset, Representation::Tagged()); } HObjectAccess HObjectAccess::ForJSArrayOffset(int offset) { - ASSERT(offset >= 0); + DCHECK(offset >= 0); Portion portion = kInobject; if (offset == JSObject::kElementsOffset) { @@ -4629,7 +4636,7 @@ HObjectAccess 
HObjectAccess::ForJSArrayOffset(int offset) { HObjectAccess HObjectAccess::ForBackingStoreOffset(int offset, Representation representation) { - ASSERT(offset >= 0); + DCHECK(offset >= 0); return HObjectAccess(kBackingStore, offset, representation, Handle<String>::null(), false, false); } @@ -4638,7 +4645,7 @@ HObjectAccess HObjectAccess::ForBackingStoreOffset(int offset, HObjectAccess HObjectAccess::ForField(Handle<Map> map, LookupResult* lookup, Handle<String> name) { - ASSERT(lookup->IsField() || lookup->IsTransitionToField()); + DCHECK(lookup->IsField() || lookup->IsTransitionToField()); int index; Representation representation; if (lookup->IsField()) { @@ -4747,39 +4754,39 @@ void HObjectAccess::SetGVNFlags(HValue *instr, PropertyAccessType access_type) { } -void HObjectAccess::PrintTo(StringStream* stream) const { - stream->Add("."); +OStream& operator<<(OStream& os, const HObjectAccess& access) { + os << "."; - switch (portion()) { - case kArrayLengths: - case kStringLengths: - stream->Add("%length"); + switch (access.portion()) { + case HObjectAccess::kArrayLengths: + case HObjectAccess::kStringLengths: + os << "%length"; break; - case kElementsPointer: - stream->Add("%elements"); + case HObjectAccess::kElementsPointer: + os << "%elements"; break; - case kMaps: - stream->Add("%map"); + case HObjectAccess::kMaps: + os << "%map"; break; - case kDouble: // fall through - case kInobject: - if (!name_.is_null()) { - stream->Add(String::cast(*name_)->ToCString().get()); + case HObjectAccess::kDouble: // fall through + case HObjectAccess::kInobject: + if (!access.name().is_null()) { + os << Handle<String>::cast(access.name())->ToCString().get(); } - stream->Add("[in-object]"); + os << "[in-object]"; break; - case kBackingStore: - if (!name_.is_null()) { - stream->Add(String::cast(*name_)->ToCString().get()); + case HObjectAccess::kBackingStore: + if (!access.name().is_null()) { + os << Handle<String>::cast(access.name())->ToCString().get(); } - stream->Add("[backing-store]"); + os << "[backing-store]"; break; - case kExternalMemory: - stream->Add("[external-memory]"); + case HObjectAccess::kExternalMemory: + os << "[external-memory]"; break; } - stream->Add("@%d", offset()); + return os << "@" << access.offset(); } } } // namespace v8::internal diff --git a/deps/v8/src/hydrogen-instructions.h b/deps/v8/src/hydrogen-instructions.h index 9ac3bfb59b6..ed1243507c8 100644 --- a/deps/v8/src/hydrogen-instructions.h +++ b/deps/v8/src/hydrogen-instructions.h @@ -5,23 +5,25 @@ #ifndef V8_HYDROGEN_INSTRUCTIONS_H_ #define V8_HYDROGEN_INSTRUCTIONS_H_ -#include "v8.h" - -#include "allocation.h" -#include "code-stubs.h" -#include "conversions.h" -#include "data-flow.h" -#include "deoptimizer.h" -#include "small-pointer-list.h" -#include "string-stream.h" -#include "unique.h" -#include "utils.h" -#include "zone.h" +#include "src/v8.h" + +#include "src/allocation.h" +#include "src/code-stubs.h" +#include "src/conversions.h" +#include "src/data-flow.h" +#include "src/deoptimizer.h" +#include "src/feedback-slots.h" +#include "src/hydrogen-types.h" +#include "src/small-pointer-list.h" +#include "src/unique.h" +#include "src/utils.h" +#include "src/zone.h" namespace v8 { namespace internal { // Forward declarations. 
+struct ChangesOf; class HBasicBlock; class HDiv; class HEnvironment; @@ -32,6 +34,7 @@ class HStoreNamedField; class HValue; class LInstruction; class LChunkBuilder; +class OStream; #define HYDROGEN_ABSTRACT_INSTRUCTION_LIST(V) \ V(ArithmeticBinaryOperation) \ @@ -45,6 +48,7 @@ class LChunkBuilder; V(AbnormalExit) \ V(AccessArgumentsAt) \ V(Add) \ + V(AllocateBlockContext) \ V(Allocate) \ V(ApplyArguments) \ V(ArgumentsElements) \ @@ -126,7 +130,7 @@ class LChunkBuilder; V(OsrEntry) \ V(Parameter) \ V(Power) \ - V(PushArgument) \ + V(PushArguments) \ V(RegExpLiteral) \ V(Return) \ V(Ror) \ @@ -139,6 +143,7 @@ class LChunkBuilder; V(StackCheck) \ V(StoreCodeEntry) \ V(StoreContextSlot) \ + V(StoreFrameContext) \ V(StoreGlobalCell) \ V(StoreKeyed) \ V(StoreKeyedGeneric) \ @@ -186,7 +191,7 @@ class LChunkBuilder; #define DECLARE_ABSTRACT_INSTRUCTION(type) \ virtual bool Is##type() const V8_FINAL V8_OVERRIDE { return true; } \ static H##type* cast(HValue* value) { \ - ASSERT(value->Is##type()); \ + DCHECK(value->Is##type()); \ return reinterpret_cast<H##type*>(value); \ } @@ -195,7 +200,7 @@ class LChunkBuilder; virtual LInstruction* CompileToLithium( \ LChunkBuilder* builder) V8_FINAL V8_OVERRIDE; \ static H##type* cast(HValue* value) { \ - ASSERT(value->Is##type()); \ + DCHECK(value->Is##type()); \ return reinterpret_cast<H##type*>(value); \ } \ virtual Opcode opcode() const V8_FINAL V8_OVERRIDE { \ @@ -281,124 +286,6 @@ class Range V8_FINAL : public ZoneObject { }; -class HType V8_FINAL { - public: - static HType None() { return HType(kNone); } - static HType Tagged() { return HType(kTagged); } - static HType TaggedPrimitive() { return HType(kTaggedPrimitive); } - static HType TaggedNumber() { return HType(kTaggedNumber); } - static HType Smi() { return HType(kSmi); } - static HType HeapNumber() { return HType(kHeapNumber); } - static HType String() { return HType(kString); } - static HType Boolean() { return HType(kBoolean); } - static HType NonPrimitive() { return HType(kNonPrimitive); } - static HType JSArray() { return HType(kJSArray); } - static HType JSObject() { return HType(kJSObject); } - - // Return the weakest (least precise) common type. 
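-  // Editorial worked example of this bit lattice (the class leaves this
-  // header; src/hydrogen-types.h is newly included above): Combine is
-  // bitwise AND, and IsSubtypeOf(other) holds iff this type carries a
-  // superset of other's bits, e.g.
-  //   kSmi (0x1d) & kTaggedNumber (0x0d) == 0x0d  // Smi <= TaggedNumber
-  //   kSmi (0x1d) & kString       (0x45) == 0x05  // join is kTaggedPrimitive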
- HType Combine(HType other) { - return HType(static_cast<Type>(type_ & other.type_)); - } - - bool Equals(const HType& other) const { - return type_ == other.type_; - } - - bool IsSubtypeOf(const HType& other) { - return Combine(other).Equals(other); - } - - bool IsTaggedPrimitive() const { - return ((type_ & kTaggedPrimitive) == kTaggedPrimitive); - } - - bool IsTaggedNumber() const { - return ((type_ & kTaggedNumber) == kTaggedNumber); - } - - bool IsSmi() const { - return ((type_ & kSmi) == kSmi); - } - - bool IsHeapNumber() const { - return ((type_ & kHeapNumber) == kHeapNumber); - } - - bool IsString() const { - return ((type_ & kString) == kString); - } - - bool IsNonString() const { - return IsTaggedPrimitive() || IsSmi() || IsHeapNumber() || - IsBoolean() || IsJSArray(); - } - - bool IsBoolean() const { - return ((type_ & kBoolean) == kBoolean); - } - - bool IsNonPrimitive() const { - return ((type_ & kNonPrimitive) == kNonPrimitive); - } - - bool IsJSArray() const { - return ((type_ & kJSArray) == kJSArray); - } - - bool IsJSObject() const { - return ((type_ & kJSObject) == kJSObject); - } - - bool IsHeapObject() const { - return IsHeapNumber() || IsString() || IsBoolean() || IsNonPrimitive(); - } - - bool ToStringOrToNumberCanBeObserved(Representation representation) { - switch (type_) { - case kTaggedPrimitive: // fallthru - case kTaggedNumber: // fallthru - case kSmi: // fallthru - case kHeapNumber: // fallthru - case kString: // fallthru - case kBoolean: - return false; - case kJSArray: // fallthru - case kJSObject: - return true; - case kTagged: - break; - } - return !representation.IsSmiOrInteger32() && !representation.IsDouble(); - } - - static HType TypeFromValue(Handle<Object> value); - - const char* ToString(); - - private: - enum Type { - kNone = 0x0, // 0000 0000 0000 0000 - kTagged = 0x1, // 0000 0000 0000 0001 - kTaggedPrimitive = 0x5, // 0000 0000 0000 0101 - kTaggedNumber = 0xd, // 0000 0000 0000 1101 - kSmi = 0x1d, // 0000 0000 0001 1101 - kHeapNumber = 0x2d, // 0000 0000 0010 1101 - kString = 0x45, // 0000 0000 0100 0101 - kBoolean = 0x85, // 0000 0000 1000 0101 - kNonPrimitive = 0x101, // 0000 0001 0000 0001 - kJSObject = 0x301, // 0000 0011 0000 0001 - kJSArray = 0x701 // 0000 0111 0000 0001 - }; - - // Make sure type fits in int16. - STATIC_ASSERT(kJSArray < (1 << (2 * kBitsPerByte))); - - explicit HType(Type t) : type_(t) { } - - int16_t type_; -}; - - class HUseListNode: public ZoneObject { public: HUseListNode(HValue* value, int index, HUseListNode* tail) @@ -434,12 +321,12 @@ class HUseIterator V8_FINAL BASE_EMBEDDED { void Advance(); HValue* value() { - ASSERT(!Done()); + DCHECK(!Done()); return value_; } int index() { - ASSERT(!Done()); + DCHECK(!Done()); return index_; } @@ -471,8 +358,8 @@ enum GVNFlag { static inline GVNFlag GVNFlagFromInt(int i) { - ASSERT(i >= 0); - ASSERT(i < kNumberOfFlags); + DCHECK(i >= 0); + DCHECK(i < kNumberOfFlags); return static_cast<GVNFlag>(i); } @@ -560,13 +447,11 @@ class HSourcePosition { int raw() const { return value_; } - void PrintTo(FILE* f); - private: typedef BitField<int, 0, 9> InliningIdField; // Offset from the start of the inlined function. - typedef BitField<int, 9, 22> PositionField; + typedef BitField<int, 9, 23> PositionField; // On HPositionInfo can use this constructor. 
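  // Editorial note on the layout above (conceptual; the exact encode calls
  // are not shown in this hunk): the packed value composes the two BitFields,
  //   value_ = PositionField::encode(pos) | InliningIdField::encode(id),
  // i.e. bits 0-8 hold the inlining id and bits 9-31 the script offset.
  // Widening PositionField from 22 to 23 bits uses the full 32-bit word
  // (9 + 23 == 32) and doubles the representable source offset.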
explicit HSourcePosition(int value) : value_(value) { } @@ -580,6 +465,9 @@ class HSourcePosition { }; +OStream& operator<<(OStream& os, const HSourcePosition& p); + + class HValue : public ZoneObject { public: static const int kNoNumber = -1; @@ -618,6 +506,10 @@ class HValue : public ZoneObject { // flag. kUint32, kHasNoObservableSideEffects, + // Indicates an instruction shouldn't be replaced by optimization, this flag + // is useful to set in cases where recomputing a value is cheaper than + // extending the value's live range and spilling it. + kCantBeReplaced, // Indicates the instruction is live during dead code elimination. kIsLive, @@ -659,7 +551,7 @@ class HValue : public ZoneObject { return IsShl() || IsShr() || IsSar(); } - HValue(HType type = HType::Tagged()) + explicit HValue(HType type = HType::Tagged()) : block_(NULL), id_(kNoNumber), type_(type), @@ -693,8 +585,8 @@ class HValue : public ZoneObject { Representation representation() const { return representation_; } void ChangeRepresentation(Representation r) { - ASSERT(CheckFlag(kFlexibleRepresentation)); - ASSERT(!CheckFlag(kCannotBeTagged) || !r.IsTagged()); + DCHECK(CheckFlag(kFlexibleRepresentation)); + DCHECK(!CheckFlag(kCannotBeTagged) || !r.IsTagged()); RepresentationChanged(r); representation_ = r; if (r.IsTagged()) { @@ -718,14 +610,10 @@ class HValue : public ZoneObject { HType type() const { return type_; } void set_type(HType new_type) { - ASSERT(new_type.IsSubtypeOf(type_)); + DCHECK(new_type.IsSubtypeOf(type_)); type_ = new_type; } - bool IsHeapObject() { - return representation_.IsHeapObject() || type_.IsHeapObject(); - } - // There are HInstructions that do not really change a value, they // only add pieces of information to it (like bounds checks, map checks, // smi checks...). @@ -771,13 +659,16 @@ class HValue : public ZoneObject { bool IsDefinedAfter(HBasicBlock* other) const; // Operands. - virtual int OperandCount() = 0; + virtual int OperandCount() const = 0; virtual HValue* OperandAt(int index) const = 0; void SetOperandAt(int index, HValue* value); void DeleteAndReplaceWith(HValue* other); void ReplaceAllUsesWith(HValue* other); bool HasNoUses() const { return use_list_ == NULL; } + bool HasOneUse() const { + return use_list_ != NULL && use_list_->tail() == NULL; + } bool HasMultipleUses() const { return use_list_ != NULL && use_list_->tail() != NULL; } @@ -839,11 +730,11 @@ class HValue : public ZoneObject { } Range* range() const { - ASSERT(!range_poisoned_); + DCHECK(!range_poisoned_); return range_; } bool HasRange() const { - ASSERT(!range_poisoned_); + DCHECK(!range_poisoned_); return range_ != NULL; } #ifdef DEBUG @@ -877,10 +768,7 @@ class HValue : public ZoneObject { virtual void FinalizeUniqueness() { } // Printing support. - virtual void PrintTo(StringStream* stream) = 0; - void PrintNameTo(StringStream* stream); - void PrintTypeTo(StringStream* stream); - void PrintChangesTo(StringStream* stream); + virtual OStream& PrintTo(OStream& os) const = 0; // NOLINT const char* Mnemonic() const; @@ -928,13 +816,13 @@ class HValue : public ZoneObject { // Returns true conservatively if the program might be able to observe a // ToString() operation on this value. bool ToStringCanBeObserved() const { - return type().ToStringOrToNumberCanBeObserved(representation()); + return ToStringOrToNumberCanBeObserved(); } // Returns true conservatively if the program might be able to observe a // ToNumber() operation on this value. 
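
Both observability predicates here now defer to a shared ToStringOrToNumberCanBeObserved() helper (its body appears a few hunks below): a tagged primitive can never observe a coercion, a JSObject always might (user-defined valueOf/toString can run), and for a plain tagged value it depends on whether the representation is already numeric. Sketched standalone (names hypothetical):

    enum class SketchKind { kTaggedPrimitive, kJSObject, kTaggedUnknown };

    // is_numeric_rep stands in for IsSmiOrInteger32() || IsDouble().
    inline bool CoercionObservable(SketchKind kind, bool is_numeric_rep) {
      if (kind == SketchKind::kTaggedPrimitive) return false;  // no user hooks
      if (kind == SketchKind::kJSObject) return true;  // valueOf/toString may run
      return !is_numeric_rep;  // unknown tagged: numeric reps cannot be observed
    }
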
bool ToNumberCanBeObserved() const { - return type().ToStringOrToNumberCanBeObserved(representation()); + return ToStringOrToNumberCanBeObserved(); } MinusZeroMode GetMinusZeroMode() { @@ -950,6 +838,12 @@ class HValue : public ZoneObject { return false; } + bool ToStringOrToNumberCanBeObserved() const { + if (type().IsTaggedPrimitive()) return false; + if (type().IsJSObject()) return true; + return !representation().IsSmiOrInteger32() && !representation().IsDouble(); + } + virtual Representation RepresentationFromInputs() { return representation(); } @@ -967,12 +861,12 @@ class HValue : public ZoneObject { virtual void DeleteFromGraph() = 0; virtual void InternalSetOperandAt(int index, HValue* value) = 0; void clear_block() { - ASSERT(block_ != NULL); + DCHECK(block_ != NULL); block_ = NULL; } void set_representation(Representation r) { - ASSERT(representation_.IsNone() && !r.IsNone()); + DCHECK(representation_.IsNone() && !r.IsNone()); representation_ = r; } @@ -991,6 +885,7 @@ class HValue : public ZoneObject { result.Remove(kOsrEntries); return result; } + friend OStream& operator<<(OStream& os, const ChangesOf& v); // A flag mask of all side effects that can make observable changes in // an executing program (i.e. are not safe to repeat, move or remove); @@ -1032,6 +927,30 @@ class HValue : public ZoneObject { DISALLOW_COPY_AND_ASSIGN(HValue); }; +// Support for printing various aspects of an HValue. +struct NameOf { + explicit NameOf(const HValue* const v) : value(v) {} + const HValue* value; +}; + + +struct TypeOf { + explicit TypeOf(const HValue* const v) : value(v) {} + const HValue* value; +}; + + +struct ChangesOf { + explicit ChangesOf(const HValue* const v) : value(v) {} + const HValue* value; +}; + + +OStream& operator<<(OStream& os, const HValue& v); +OStream& operator<<(OStream& os, const NameOf& v); +OStream& operator<<(OStream& os, const TypeOf& v); +OStream& operator<<(OStream& os, const ChangesOf& v); + #define DECLARE_INSTRUCTION_FACTORY_P0(I) \ static I* New(Zone* zone, HValue* context) { \ @@ -1074,6 +993,18 @@ class HValue : public ZoneObject { return new(zone) I(p1, p2, p3, p4, p5); \ } +#define DECLARE_INSTRUCTION_FACTORY_P6(I, P1, P2, P3, P4, P5, P6) \ + static I* New(Zone* zone, \ + HValue* context, \ + P1 p1, \ + P2 p2, \ + P3 p3, \ + P4 p4, \ + P5 p5, \ + P6 p6) { \ + return new(zone) I(p1, p2, p3, p4, p5, p6); \ + } + #define DECLARE_INSTRUCTION_WITH_CONTEXT_FACTORY_P0(I) \ static I* New(Zone* zone, HValue* context) { \ return new(zone) I(context); \ @@ -1158,7 +1089,7 @@ class HPositionInfo { data_ = reinterpret_cast<intptr_t>(positions); set_position(pos); - ASSERT(has_operand_positions()); + DCHECK(has_operand_positions()); } HSourcePosition operand_position(int idx) const { @@ -1177,7 +1108,7 @@ class HPositionInfo { static const intptr_t kFirstOperandPosIndex = 1; HSourcePosition* operand_position_slot(int idx) const { - ASSERT(has_operand_positions()); + DCHECK(has_operand_positions()); return &(operand_positions()[kFirstOperandPosIndex + idx]); } @@ -1186,7 +1117,7 @@ class HPositionInfo { } HSourcePosition* operand_positions() const { - ASSERT(has_operand_positions()); + DCHECK(has_operand_positions()); return reinterpret_cast<HSourcePosition*>(data_); } @@ -1196,12 +1127,12 @@ class HPositionInfo { return (val & kPositionTag) != 0; } static intptr_t UntagPosition(intptr_t val) { - ASSERT(IsTaggedPosition(val)); + DCHECK(IsTaggedPosition(val)); return val >> kPositionShift; } static intptr_t TagPosition(intptr_t val) { const intptr_t result = (val << 
kPositionShift) | kPositionTag; - ASSERT(UntagPosition(result) == val); + DCHECK(UntagPosition(result) == val); return result; } @@ -1214,8 +1145,8 @@ class HInstruction : public HValue { HInstruction* next() const { return next_; } HInstruction* previous() const { return previous_; } - virtual void PrintTo(StringStream* stream) V8_OVERRIDE; - virtual void PrintDataTo(StringStream* stream); + virtual OStream& PrintTo(OStream& os) const V8_OVERRIDE; // NOLINT + virtual OStream& PrintDataTo(OStream& os) const; // NOLINT bool IsLinked() const { return block() != NULL; } void Unlink(); @@ -1242,8 +1173,8 @@ class HInstruction : public HValue { return !position().IsUnknown(); } void set_position(HSourcePosition position) { - ASSERT(!has_position()); - ASSERT(!position.IsUnknown()); + DCHECK(!has_position()); + DCHECK(!position.IsUnknown()); position_.set_position(position); } @@ -1252,11 +1183,12 @@ class HInstruction : public HValue { return pos.IsUnknown() ? position() : pos; } void set_operand_position(Zone* zone, int index, HSourcePosition pos) { - ASSERT(0 <= index && index < OperandCount()); + DCHECK(0 <= index && index < OperandCount()); position_.ensure_storage_for_operand_positions(zone, OperandCount()); position_.set_operand_position(index, pos); } + bool Dominates(HInstruction* other); bool CanTruncateToSmi() const { return CheckFlag(kTruncatingToSmi); } bool CanTruncateToInt32() const { return CheckFlag(kTruncatingToInt32); } @@ -1273,7 +1205,7 @@ class HInstruction : public HValue { DECLARE_ABSTRACT_INSTRUCTION(Instruction) protected: - HInstruction(HType type = HType::Tagged()) + explicit HInstruction(HType type = HType::Tagged()) : HValue(type), next_(NULL), previous_(NULL), @@ -1285,12 +1217,10 @@ class HInstruction : public HValue { private: void InitializeAsFirst(HBasicBlock* block) { - ASSERT(!IsLinked()); + DCHECK(!IsLinked()); SetBlock(block); } - void PrintMnemonicTo(StringStream* stream); - HInstruction* next_; HInstruction* previous_; HPositionInfo position_; @@ -1302,13 +1232,14 @@ class HInstruction : public HValue { template<int V> class HTemplateInstruction : public HInstruction { public: - virtual int OperandCount() V8_FINAL V8_OVERRIDE { return V; } + virtual int OperandCount() const V8_FINAL V8_OVERRIDE { return V; } virtual HValue* OperandAt(int i) const V8_FINAL V8_OVERRIDE { return inputs_[i]; } protected: - HTemplateInstruction(HType type = HType::Tagged()) : HInstruction(type) {} + explicit HTemplateInstruction(HType type = HType::Tagged()) + : HInstruction(type) {} virtual void InternalSetOperandAt(int i, HValue* value) V8_FINAL V8_OVERRIDE { inputs_[i] = value; @@ -1321,11 +1252,11 @@ class HTemplateInstruction : public HInstruction { class HControlInstruction : public HInstruction { public: - virtual HBasicBlock* SuccessorAt(int i) = 0; - virtual int SuccessorCount() = 0; + virtual HBasicBlock* SuccessorAt(int i) const = 0; + virtual int SuccessorCount() const = 0; virtual void SetSuccessorAt(int i, HBasicBlock* block) = 0; - virtual void PrintDataTo(StringStream* stream) V8_OVERRIDE; + virtual OStream& PrintDataTo(OStream& os) const V8_OVERRIDE; // NOLINT virtual bool KnownSuccessorBlock(HBasicBlock** block) { *block = NULL; @@ -1351,15 +1282,15 @@ class HControlInstruction : public HInstruction { class HSuccessorIterator V8_FINAL BASE_EMBEDDED { public: - explicit HSuccessorIterator(HControlInstruction* instr) - : instr_(instr), current_(0) { } + explicit HSuccessorIterator(const HControlInstruction* instr) + : instr_(instr), current_(0) {} bool Done() { 
return current_ >= instr_->SuccessorCount(); } HBasicBlock* Current() { return instr_->SuccessorAt(current_); } void Advance() { current_++; } private: - HControlInstruction* instr_; + const HControlInstruction* instr_; int current_; }; @@ -1367,13 +1298,13 @@ class HSuccessorIterator V8_FINAL BASE_EMBEDDED { template<int S, int V> class HTemplateControlInstruction : public HControlInstruction { public: - int SuccessorCount() V8_OVERRIDE { return S; } - HBasicBlock* SuccessorAt(int i) V8_OVERRIDE { return successors_[i]; } + int SuccessorCount() const V8_OVERRIDE { return S; } + HBasicBlock* SuccessorAt(int i) const V8_OVERRIDE { return successors_[i]; } void SetSuccessorAt(int i, HBasicBlock* block) V8_OVERRIDE { successors_[i] = block; } - int OperandCount() V8_OVERRIDE { return V; } + int OperandCount() const V8_OVERRIDE { return V; } HValue* OperandAt(int i) const V8_OVERRIDE { return inputs_[i]; } @@ -1408,14 +1339,14 @@ class HDummyUse V8_FINAL : public HTemplateInstruction<1> { set_representation(Representation::Tagged()); } - HValue* value() { return OperandAt(0); } + HValue* value() const { return OperandAt(0); } virtual bool HasEscapingOperandAt(int index) V8_OVERRIDE { return false; } virtual Representation RequiredInputRepresentation(int index) V8_OVERRIDE { return Representation::None(); } - virtual void PrintDataTo(StringStream* stream) V8_OVERRIDE; + virtual OStream& PrintDataTo(OStream& os) const V8_OVERRIDE; // NOLINT DECLARE_CONCRETE_INSTRUCTION(DummyUse); }; @@ -1449,7 +1380,7 @@ class HGoto V8_FINAL : public HTemplateControlInstruction<1, 0> { return Representation::None(); } - virtual void PrintDataTo(StringStream* stream) V8_OVERRIDE; + virtual OStream& PrintDataTo(OStream& os) const V8_OVERRIDE; // NOLINT DECLARE_CONCRETE_INSTRUCTION(Goto) }; @@ -1502,9 +1433,9 @@ class HUnaryControlInstruction : public HTemplateControlInstruction<2, 1> { SetSuccessorAt(1, false_target); } - virtual void PrintDataTo(StringStream* stream) V8_OVERRIDE; + virtual OStream& PrintDataTo(OStream& os) const V8_OVERRIDE; // NOLINT - HValue* value() { return OperandAt(0); } + HValue* value() const { return OperandAt(0); } }; @@ -1524,7 +1455,7 @@ class HBranch V8_FINAL : public HUnaryControlInstruction { virtual bool KnownSuccessorBlock(HBasicBlock** block) V8_OVERRIDE; - virtual void PrintDataTo(StringStream* stream) V8_OVERRIDE; + virtual OStream& PrintDataTo(OStream& os) const V8_OVERRIDE; // NOLINT ToBooleanStub::Types expected_input_types() const { return expected_input_types_; @@ -1561,7 +1492,7 @@ class HCompareMap V8_FINAL : public HUnaryControlInstruction { return false; } - virtual void PrintDataTo(StringStream* stream) V8_OVERRIDE; + virtual OStream& PrintDataTo(OStream& os) const V8_OVERRIDE; // NOLINT static const int kNoKnownSuccessorIndex = -1; int known_successor_index() const { return known_successor_index_; } @@ -1570,6 +1501,7 @@ class HCompareMap V8_FINAL : public HUnaryControlInstruction { } Unique<Map> map() const { return map_; } + bool map_is_stable() const { return map_is_stable_; } virtual Representation RequiredInputRepresentation(int index) V8_OVERRIDE { return Representation::Tagged(); @@ -1586,12 +1518,14 @@ class HCompareMap V8_FINAL : public HUnaryControlInstruction { HBasicBlock* true_target = NULL, HBasicBlock* false_target = NULL) : HUnaryControlInstruction(value, true_target, false_target), - known_successor_index_(kNoKnownSuccessorIndex), map_(Unique<Map>(map)) { - ASSERT(!map.is_null()); + known_successor_index_(kNoKnownSuccessorIndex), + 
map_is_stable_(map->is_stable()), + map_(Unique<Map>::CreateImmovable(map)) { set_representation(Representation::Tagged()); } - int known_successor_index_; + int known_successor_index_ : 31; + bool map_is_stable_ : 1; Unique<Map> map_; }; @@ -1632,11 +1566,11 @@ class HReturn V8_FINAL : public HTemplateControlInstruction<0, 3> { return Representation::Tagged(); } - virtual void PrintDataTo(StringStream* stream) V8_OVERRIDE; + virtual OStream& PrintDataTo(OStream& os) const V8_OVERRIDE; // NOLINT - HValue* value() { return OperandAt(0); } - HValue* context() { return OperandAt(1); } - HValue* parameter_count() { return OperandAt(2); } + HValue* value() const { return OperandAt(0); } + HValue* context() const { return OperandAt(1); } + HValue* parameter_count() const { return OperandAt(2); } DECLARE_CONCRETE_INSTRUCTION(Return) @@ -1665,7 +1599,7 @@ class HAbnormalExit V8_FINAL : public HTemplateControlInstruction<0, 0> { class HUnaryOperation : public HTemplateInstruction<1> { public: - HUnaryOperation(HValue* value, HType type = HType::Tagged()) + explicit HUnaryOperation(HValue* value, HType type = HType::Tagged()) : HTemplateInstruction<1>(type) { SetOperandAt(0, value); } @@ -1675,7 +1609,7 @@ class HUnaryOperation : public HTemplateInstruction<1> { } HValue* value() const { return OperandAt(0); } - virtual void PrintDataTo(StringStream* stream) V8_OVERRIDE; + virtual OStream& PrintDataTo(OStream& os) const V8_OVERRIDE; // NOLINT }; @@ -1699,13 +1633,13 @@ class HForceRepresentation V8_FINAL : public HTemplateInstruction<1> { static HInstruction* New(Zone* zone, HValue* context, HValue* value, Representation required_representation); - HValue* value() { return OperandAt(0); } + HValue* value() const { return OperandAt(0); } virtual Representation RequiredInputRepresentation(int index) V8_OVERRIDE { return representation(); // Same as the output representation. 
} - virtual void PrintDataTo(StringStream* stream) V8_OVERRIDE; + virtual OStream& PrintDataTo(OStream& os) const V8_OVERRIDE; // NOLINT DECLARE_CONCRETE_INSTRUCTION(ForceRepresentation) @@ -1724,9 +1658,9 @@ class HChange V8_FINAL : public HUnaryOperation { bool is_truncating_to_smi, bool is_truncating_to_int32) : HUnaryOperation(value) { - ASSERT(!value->representation().IsNone()); - ASSERT(!to.IsNone()); - ASSERT(!value->representation().Equals(to)); + DCHECK(!value->representation().IsNone()); + DCHECK(!to.IsNone()); + DCHECK(!value->representation().Equals(to)); set_representation(to); SetFlag(kUseGVN); SetFlag(kCanOverflow); @@ -1761,7 +1695,7 @@ class HChange V8_FINAL : public HUnaryOperation { virtual Range* InferRange(Zone* zone) V8_OVERRIDE; - virtual void PrintDataTo(StringStream* stream) V8_OVERRIDE; + virtual OStream& PrintDataTo(OStream& os) const V8_OVERRIDE; // NOLINT DECLARE_CONCRETE_INSTRUCTION(Change) @@ -1880,19 +1814,19 @@ class HSimulate V8_FINAL : public HInstruction { done_with_replay_(false) {} ~HSimulate() {} - virtual void PrintDataTo(StringStream* stream) V8_OVERRIDE; + virtual OStream& PrintDataTo(OStream& os) const V8_OVERRIDE; // NOLINT bool HasAstId() const { return !ast_id_.IsNone(); } BailoutId ast_id() const { return ast_id_; } void set_ast_id(BailoutId id) { - ASSERT(!HasAstId()); + DCHECK(!HasAstId()); ast_id_ = id; } int pop_count() const { return pop_count_; } const ZoneList<HValue*>* values() const { return &values_; } int GetAssignedIndexAt(int index) const { - ASSERT(HasAssignedIndexAt(index)); + DCHECK(HasAssignedIndexAt(index)); return assigned_indexes_[index]; } bool HasAssignedIndexAt(int index) const { @@ -1910,7 +1844,7 @@ class HSimulate V8_FINAL : public HInstruction { } return -1; } - virtual int OperandCount() V8_OVERRIDE { return values_.length(); } + virtual int OperandCount() const V8_OVERRIDE { return values_.length(); } virtual HValue* OperandAt(int index) const V8_OVERRIDE { return values_[index]; } @@ -1975,8 +1909,8 @@ class HEnvironmentMarker V8_FINAL : public HTemplateInstruction<1> { DECLARE_INSTRUCTION_FACTORY_P2(HEnvironmentMarker, Kind, int); - Kind kind() { return kind_; } - int index() { return index_; } + Kind kind() const { return kind_; } + int index() const { return index_; } HSimulate* next_simulate() { return next_simulate_; } void set_next_simulate(HSimulate* simulate) { next_simulate_ = simulate; @@ -1986,12 +1920,12 @@ class HEnvironmentMarker V8_FINAL : public HTemplateInstruction<1> { return Representation::None(); } - virtual void PrintDataTo(StringStream* stream) V8_OVERRIDE; + virtual OStream& PrintDataTo(OStream& os) const V8_OVERRIDE; // NOLINT #ifdef DEBUG void set_closure(Handle<JSFunction> closure) { - ASSERT(closure_.is_null()); - ASSERT(!closure.is_null()); + DCHECK(closure_.is_null()); + DCHECK(!closure.is_null()); closure_ = closure; } Handle<JSFunction> closure() const { return closure_; } @@ -2081,7 +2015,7 @@ class HEnterInlined V8_FINAL : public HTemplateInstruction<0> { void RegisterReturnTarget(HBasicBlock* return_target, Zone* zone); ZoneList<HBasicBlock*>* return_targets() { return &return_targets_; } - virtual void PrintDataTo(StringStream* stream) V8_OVERRIDE; + virtual OStream& PrintDataTo(OStream& os) const V8_OVERRIDE; // NOLINT Handle<JSFunction> closure() const { return closure_; } int arguments_count() const { return arguments_count_; } @@ -2155,23 +2089,71 @@ class HLeaveInlined V8_FINAL : public HTemplateInstruction<0> { }; -class HPushArgument V8_FINAL : public HUnaryOperation { 
+class HPushArguments V8_FINAL : public HInstruction { public: - DECLARE_INSTRUCTION_FACTORY_P1(HPushArgument, HValue*); + static HPushArguments* New(Zone* zone, HValue* context) { + return new(zone) HPushArguments(zone); + } + static HPushArguments* New(Zone* zone, HValue* context, HValue* arg1) { + HPushArguments* instr = new(zone) HPushArguments(zone); + instr->AddInput(arg1); + return instr; + } + static HPushArguments* New(Zone* zone, HValue* context, HValue* arg1, + HValue* arg2) { + HPushArguments* instr = new(zone) HPushArguments(zone); + instr->AddInput(arg1); + instr->AddInput(arg2); + return instr; + } + static HPushArguments* New(Zone* zone, HValue* context, HValue* arg1, + HValue* arg2, HValue* arg3) { + HPushArguments* instr = new(zone) HPushArguments(zone); + instr->AddInput(arg1); + instr->AddInput(arg2); + instr->AddInput(arg3); + return instr; + } + static HPushArguments* New(Zone* zone, HValue* context, HValue* arg1, + HValue* arg2, HValue* arg3, HValue* arg4) { + HPushArguments* instr = new(zone) HPushArguments(zone); + instr->AddInput(arg1); + instr->AddInput(arg2); + instr->AddInput(arg3); + instr->AddInput(arg4); + return instr; + } virtual Representation RequiredInputRepresentation(int index) V8_OVERRIDE { return Representation::Tagged(); } - virtual int argument_delta() const V8_OVERRIDE { return 1; } - HValue* argument() { return OperandAt(0); } + virtual int argument_delta() const V8_OVERRIDE { return inputs_.length(); } + HValue* argument(int i) { return OperandAt(i); } + + virtual int OperandCount() const V8_FINAL V8_OVERRIDE { + return inputs_.length(); + } + virtual HValue* OperandAt(int i) const V8_FINAL V8_OVERRIDE { + return inputs_[i]; + } + + void AddInput(HValue* value); + + DECLARE_CONCRETE_INSTRUCTION(PushArguments) - DECLARE_CONCRETE_INSTRUCTION(PushArgument) + protected: + virtual void InternalSetOperandAt(int i, HValue* value) V8_FINAL V8_OVERRIDE { + inputs_[i] = value; + } private: - explicit HPushArgument(HValue* value) : HUnaryOperation(value) { + explicit HPushArguments(Zone* zone) + : HInstruction(HType::Tagged()), inputs_(4, zone) { set_representation(Representation::Tagged()); } + + ZoneList<HValue*> inputs_; }; @@ -2268,9 +2250,9 @@ class HUnaryCall : public HCall<1> { return Representation::Tagged(); } - virtual void PrintDataTo(StringStream* stream) V8_OVERRIDE; + virtual OStream& PrintDataTo(OStream& os) const V8_OVERRIDE; // NOLINT - HValue* value() { return OperandAt(0); } + HValue* value() const { return OperandAt(0); } }; @@ -2282,15 +2264,15 @@ class HBinaryCall : public HCall<2> { SetOperandAt(1, second); } - virtual void PrintDataTo(StringStream* stream) V8_OVERRIDE; + virtual OStream& PrintDataTo(OStream& os) const V8_OVERRIDE; // NOLINT virtual Representation RequiredInputRepresentation( int index) V8_FINAL V8_OVERRIDE { return Representation::Tagged(); } - HValue* first() { return OperandAt(0); } - HValue* second() { return OperandAt(1); } + HValue* first() const { return OperandAt(0); } + HValue* second() const { return OperandAt(1); } }; @@ -2302,13 +2284,13 @@ class HCallJSFunction V8_FINAL : public HCall<1> { int argument_count, bool pass_argument_count); - HValue* function() { return OperandAt(0); } + HValue* function() const { return OperandAt(0); } - virtual void PrintDataTo(StringStream* stream) V8_OVERRIDE; + virtual OStream& PrintDataTo(OStream& os) const V8_OVERRIDE; // NOLINT virtual Representation RequiredInputRepresentation( int index) V8_FINAL V8_OVERRIDE { - ASSERT(index == 0); + DCHECK(index == 0); return 
Representation::Tagged(); } @@ -2342,16 +2324,18 @@ class HCallWithDescriptor V8_FINAL : public HInstruction { static HCallWithDescriptor* New(Zone* zone, HValue* context, HValue* target, int argument_count, - const CallInterfaceDescriptor* descriptor, - Vector<HValue*>& operands) { - ASSERT(operands.length() == descriptor->environment_length()); + const InterfaceDescriptor* descriptor, + const Vector<HValue*>& operands) { + DCHECK(operands.length() == descriptor->GetEnvironmentLength()); HCallWithDescriptor* res = new(zone) HCallWithDescriptor(target, argument_count, descriptor, operands, zone); return res; } - virtual int OperandCount() V8_FINAL V8_OVERRIDE { return values_.length(); } + virtual int OperandCount() const V8_FINAL V8_OVERRIDE { + return values_.length(); + } virtual HValue* OperandAt(int index) const V8_FINAL V8_OVERRIDE { return values_[index]; } @@ -2362,7 +2346,7 @@ class HCallWithDescriptor V8_FINAL : public HInstruction { return Representation::Tagged(); } else { int par_index = index - 1; - ASSERT(par_index < descriptor_->environment_length()); + DCHECK(par_index < descriptor_->GetEnvironmentLength()); return descriptor_->GetParameterRepresentation(par_index); } } @@ -2381,7 +2365,7 @@ class HCallWithDescriptor V8_FINAL : public HInstruction { return -argument_count_; } - const CallInterfaceDescriptor* descriptor() const { + const InterfaceDescriptor* descriptor() const { return descriptor_; } @@ -2389,17 +2373,17 @@ class HCallWithDescriptor V8_FINAL : public HInstruction { return OperandAt(0); } - virtual void PrintDataTo(StringStream* stream) V8_OVERRIDE; + virtual OStream& PrintDataTo(OStream& os) const V8_OVERRIDE; // NOLINT private: // The argument count includes the receiver. HCallWithDescriptor(HValue* target, int argument_count, - const CallInterfaceDescriptor* descriptor, - Vector<HValue*>& operands, + const InterfaceDescriptor* descriptor, + const Vector<HValue*>& operands, Zone* zone) : descriptor_(descriptor), - values_(descriptor->environment_length() + 1, zone) { + values_(descriptor->GetEnvironmentLength() + 1, zone) { argument_count_ = argument_count; AddOperand(target, zone); for (int i = 0; i < operands.length(); i++) { @@ -2419,7 +2403,7 @@ class HCallWithDescriptor V8_FINAL : public HInstruction { values_[index] = value; } - const CallInterfaceDescriptor* descriptor_; + const InterfaceDescriptor* descriptor_; ZoneList<HValue*> values_; int argument_count_; }; @@ -2524,7 +2508,7 @@ class HCallNewArray V8_FINAL : public HBinaryCall { HValue* context() { return first(); } HValue* constructor() { return second(); } - virtual void PrintDataTo(StringStream* stream) V8_OVERRIDE; + virtual OStream& PrintDataTo(OStream& os) const V8_OVERRIDE; // NOLINT ElementsKind elements_kind() const { return elements_kind_; } @@ -2547,7 +2531,7 @@ class HCallRuntime V8_FINAL : public HCall<1> { const Runtime::Function*, int); - virtual void PrintDataTo(StringStream* stream) V8_OVERRIDE; + virtual OStream& PrintDataTo(OStream& os) const V8_OVERRIDE; // NOLINT HValue* context() { return OperandAt(0); } const Runtime::Function* function() const { return c_function_; } @@ -2611,10 +2595,10 @@ class HUnaryMathOperation V8_FINAL : public HTemplateInstruction<2> { HValue* value, BuiltinFunctionId op); - HValue* context() { return OperandAt(0); } - HValue* value() { return OperandAt(1); } + HValue* context() const { return OperandAt(0); } + HValue* value() const { return OperandAt(1); } - virtual void PrintDataTo(StringStream* stream) V8_OVERRIDE; + virtual OStream& 
PrintDataTo(OStream& os) const V8_OVERRIDE; // NOLINT virtual Representation RequiredInputRepresentation(int index) V8_OVERRIDE { if (index == 0) { @@ -2623,6 +2607,7 @@ class HUnaryMathOperation V8_FINAL : public HTemplateInstruction<2> { switch (op_) { case kMathFloor: case kMathRound: + case kMathFround: case kMathSqrt: case kMathPowHalf: case kMathLog: @@ -2689,6 +2674,7 @@ class HUnaryMathOperation V8_FINAL : public HTemplateInstruction<2> { // is tagged, and not when it is an unboxed double or unboxed integer. SetChangesFlag(kNewSpacePromotion); break; + case kMathFround: case kMathLog: case kMathExp: case kMathSqrt: @@ -2731,7 +2717,7 @@ class HLoadRoot V8_FINAL : public HTemplateInstruction<0> { } private: - HLoadRoot(Heap::RootListIndex index, HType type = HType::Tagged()) + explicit HLoadRoot(Heap::RootListIndex index, HType type = HType::Tagged()) : HTemplateInstruction<0>(type), index_(index) { SetFlag(kUseGVN); // TODO(bmeurer): We'll need kDependsOnRoots once we add the @@ -2764,6 +2750,7 @@ class HCheckMaps V8_FINAL : public HTemplateInstruction<2> { bool IsStabilityCheck() const { return is_stability_check_; } void MarkAsStabilityCheck() { + maps_are_stable_ = true; has_migration_target_ = false; is_stability_check_ = true; ClearChangesFlag(kNewSpacePromotion); @@ -2775,10 +2762,16 @@ class HCheckMaps V8_FINAL : public HTemplateInstruction<2> { virtual Representation RequiredInputRepresentation(int index) V8_OVERRIDE { return Representation::Tagged(); } - virtual void PrintDataTo(StringStream* stream) V8_OVERRIDE; - HValue* value() { return OperandAt(0); } - HValue* typecheck() { return OperandAt(1); } + virtual HType CalculateInferredType() V8_OVERRIDE { + if (value()->type().IsHeapObject()) return value()->type(); + return HType::HeapObject(); + } + + virtual OStream& PrintDataTo(OStream& os) const V8_OVERRIDE; // NOLINT + + HValue* value() const { return OperandAt(0); } + HValue* typecheck() const { return OperandAt(1); } const UniqueSet<Map>* maps() const { return maps_; } void set_maps(const UniqueSet<Map>* maps) { maps_ = maps; } @@ -2794,16 +2787,16 @@ class HCheckMaps V8_FINAL : public HTemplateInstruction<2> { Unique<Map> map, bool map_is_stable, HInstruction* instr) { - return CreateAndInsertAfter(zone, value, new(zone) UniqueSet<Map>( - map, zone), map_is_stable, instr); + return instr->Append(new(zone) HCheckMaps( + value, new(zone) UniqueSet<Map>(map, zone), map_is_stable)); } - static HCheckMaps* CreateAndInsertAfter(Zone* zone, - HValue* value, - const UniqueSet<Map>* maps, - bool maps_are_stable, - HInstruction* instr) { - return instr->Append(new(zone) HCheckMaps(value, maps, maps_are_stable)); + static HCheckMaps* CreateAndInsertBefore(Zone* zone, + HValue* value, + const UniqueSet<Map>* maps, + bool maps_are_stable, + HInstruction* instr) { + return instr->Prepend(new(zone) HCheckMaps(value, maps, maps_are_stable)); } DECLARE_CONCRETE_INSTRUCTION(CheckMaps) @@ -2817,10 +2810,10 @@ class HCheckMaps V8_FINAL : public HTemplateInstruction<2> { private: HCheckMaps(HValue* value, const UniqueSet<Map>* maps, bool maps_are_stable) - : HTemplateInstruction<2>(value->type()), maps_(maps), + : HTemplateInstruction<2>(HType::HeapObject()), maps_(maps), has_migration_target_(false), is_stability_check_(false), maps_are_stable_(maps_are_stable) { - ASSERT_NE(0, maps->size()); + DCHECK_NE(0, maps->size()); SetOperandAt(0, value); // Use the object value for the dependency. 
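
Note the typing change running through the HCheckMaps/HCheckInstanceType/HCheckHeapObject hunks above: a passed check proves the value is at least a heap object, so CalculateInferredType() keeps any more precise incoming type and otherwise tightens plain Tagged to HeapObject. The rule, as a standalone sketch (SketchType is hypothetical, not V8's HType):

    enum class SketchType { kTagged, kHeapObject, kString };  // String <: HeapObject

    inline bool IsAtLeastHeapObject(SketchType t) {
      return t == SketchType::kHeapObject || t == SketchType::kString;
    }

    // Keep a more precise type; otherwise narrow to HeapObject, as the
    // CalculateInferredType() overrides above do.
    inline SketchType NarrowAfterCheck(SketchType incoming) {
      return IsAtLeastHeapObject(incoming) ? incoming : SketchType::kHeapObject;
    }
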
SetOperandAt(1, value); @@ -2831,10 +2824,10 @@ class HCheckMaps V8_FINAL : public HTemplateInstruction<2> { } HCheckMaps(HValue* value, const UniqueSet<Map>* maps, HValue* typecheck) - : HTemplateInstruction<2>(value->type()), maps_(maps), + : HTemplateInstruction<2>(HType::HeapObject()), maps_(maps), has_migration_target_(false), is_stability_check_(false), maps_are_stable_(true) { - ASSERT_NE(0, maps->size()); + DCHECK_NE(0, maps->size()); SetOperandAt(0, value); // Use the object value for the dependency if NULL is passed. SetOperandAt(1, typecheck ? typecheck : value); @@ -2883,7 +2876,7 @@ class HCheckValue V8_FINAL : public HUnaryOperation { virtual Representation RequiredInputRepresentation(int index) V8_OVERRIDE { return Representation::Tagged(); } - virtual void PrintDataTo(StringStream* stream) V8_OVERRIDE; + virtual OStream& PrintDataTo(OStream& os) const V8_OVERRIDE; // NOLINT virtual HValue* Canonicalize() V8_OVERRIDE; @@ -2929,18 +2922,31 @@ class HCheckInstanceType V8_FINAL : public HUnaryOperation { DECLARE_INSTRUCTION_FACTORY_P2(HCheckInstanceType, HValue*, Check); - virtual void PrintDataTo(StringStream* stream) V8_OVERRIDE; + virtual OStream& PrintDataTo(OStream& os) const V8_OVERRIDE; // NOLINT virtual Representation RequiredInputRepresentation(int index) V8_OVERRIDE { return Representation::Tagged(); } + virtual HType CalculateInferredType() V8_OVERRIDE { + switch (check_) { + case IS_SPEC_OBJECT: return HType::JSObject(); + case IS_JS_ARRAY: return HType::JSArray(); + case IS_STRING: return HType::String(); + case IS_INTERNALIZED_STRING: return HType::String(); + } + UNREACHABLE(); + return HType::Tagged(); + } + virtual HValue* Canonicalize() V8_OVERRIDE; bool is_interval_check() const { return check_ <= LAST_INTERVAL_CHECK; } void GetCheckInterval(InstanceType* first, InstanceType* last); void GetCheckMaskAndTag(uint8_t* mask, uint8_t* tag); + Check check() const { return check_; } + DECLARE_CONCRETE_INSTRUCTION(CheckInstanceType) protected: @@ -2955,10 +2961,10 @@ class HCheckInstanceType V8_FINAL : public HUnaryOperation { virtual int RedefinedOperandIndex() { return 0; } private: - const char* GetCheckName(); + const char* GetCheckName() const; HCheckInstanceType(HValue* value, Check check) - : HUnaryOperation(value), check_(check) { + : HUnaryOperation(value, HType::HeapObject()), check_(check) { set_representation(Representation::Tagged()); SetFlag(kUseGVN); } @@ -3005,6 +3011,11 @@ class HCheckHeapObject V8_FINAL : public HUnaryOperation { return Representation::Tagged(); } + virtual HType CalculateInferredType() V8_OVERRIDE { + if (value()->type().IsHeapObject()) return value()->type(); + return HType::HeapObject(); + } + #ifdef DEBUG virtual void Verify() V8_OVERRIDE; #endif @@ -3019,8 +3030,7 @@ class HCheckHeapObject V8_FINAL : public HUnaryOperation { virtual bool DataEquals(HValue* other) V8_OVERRIDE { return true; } private: - explicit HCheckHeapObject(HValue* value) - : HUnaryOperation(value, HType::NonPrimitive()) { + explicit HCheckHeapObject(HValue* value) : HUnaryOperation(value) { set_representation(Representation::Tagged()); SetFlag(kUseGVN); } @@ -3056,7 +3066,7 @@ class InductionVariableData V8_FINAL : public ZoneObject { InductionVariableCheck* next() { return next_; } bool HasUpperLimit() { return upper_limit_ >= 0; } int32_t upper_limit() { - ASSERT(HasUpperLimit()); + DCHECK(HasUpperLimit()); return upper_limit_; } void set_upper_limit(int32_t upper_limit) { @@ -3259,7 +3269,7 @@ class HPhi V8_FINAL : public HValue { non_phi_uses_[i] = 0; 
indirect_uses_[i] = 0; } - ASSERT(merged_index >= 0 || merged_index == kInvalidMergedIndex); + DCHECK(merged_index >= 0 || merged_index == kInvalidMergedIndex); SetFlag(kFlexibleRepresentation); SetFlag(kAllowUndefinedAsNaN); } @@ -3276,7 +3286,7 @@ class HPhi V8_FINAL : public HValue { return representation(); } virtual HType CalculateInferredType() V8_OVERRIDE; - virtual int OperandCount() V8_OVERRIDE { return inputs_.length(); } + virtual int OperandCount() const V8_OVERRIDE { return inputs_.length(); } virtual HValue* OperandAt(int index) const V8_OVERRIDE { return inputs_[index]; } @@ -3302,11 +3312,11 @@ class HPhi V8_FINAL : public HValue { induction_variable_data_->limit() != NULL; } void DetectInductionVariable() { - ASSERT(induction_variable_data_ == NULL); + DCHECK(induction_variable_data_ == NULL); induction_variable_data_ = InductionVariableData::ExaminePhi(this); } - virtual void PrintTo(StringStream* stream) V8_OVERRIDE; + virtual OStream& PrintTo(OStream& os) const V8_OVERRIDE; // NOLINT #ifdef DEBUG virtual void Verify() V8_OVERRIDE; @@ -3343,7 +3353,7 @@ class HPhi V8_FINAL : public HValue { int phi_id() { return phi_id_; } static HPhi* cast(HValue* value) { - ASSERT(value->IsPhi()); + DCHECK(value->IsPhi()); return reinterpret_cast<HPhi*>(value); } virtual Opcode opcode() const V8_OVERRIDE { return HValue::kPhi; } @@ -3378,7 +3388,9 @@ class HDematerializedObject : public HInstruction { public: HDematerializedObject(int count, Zone* zone) : values_(count, zone) {} - virtual int OperandCount() V8_FINAL V8_OVERRIDE { return values_.length(); } + virtual int OperandCount() const V8_FINAL V8_OVERRIDE { + return values_.length(); + } virtual HValue* OperandAt(int index) const V8_FINAL V8_OVERRIDE { return values_[index]; } @@ -3448,15 +3460,15 @@ class HCapturedObject V8_FINAL : public HDematerializedObject { HValue* map_value() const { return values()->first(); } void ReuseSideEffectsFromStore(HInstruction* store) { - ASSERT(store->HasObservableSideEffects()); - ASSERT(store->IsStoreNamedField()); + DCHECK(store->HasObservableSideEffects()); + DCHECK(store->IsStoreNamedField()); changes_flags_.Add(store->ChangesFlags()); } // Replay effects of this instruction on the given environment. 
void ReplayEnvironment(HEnvironment* env); - virtual void PrintDataTo(StringStream* stream) V8_OVERRIDE; + virtual OStream& PrintDataTo(OStream& os) const V8_OVERRIDE; // NOLINT DECLARE_CONCRETE_INSTRUCTION(CapturedObject) @@ -3497,21 +3509,22 @@ class HConstant V8_FINAL : public HTemplateInstruction<0> { } static HConstant* CreateAndInsertBefore(Zone* zone, - Unique<Object> object, - bool is_not_in_new_space, + Unique<Map> map, + bool map_is_stable, HInstruction* instruction) { return instruction->Prepend(new(zone) HConstant( - object, Unique<Map>(Handle<Map>::null()), false, - Representation::Tagged(), HType::Tagged(), is_not_in_new_space, - false, false, kUnknownInstanceType)); + map, Unique<Map>(Handle<Map>::null()), map_is_stable, + Representation::Tagged(), HType::HeapObject(), true, + false, false, MAP_TYPE)); } static HConstant* CreateAndInsertAfter(Zone* zone, Unique<Map> map, + bool map_is_stable, HInstruction* instruction) { return instruction->Append(new(zone) HConstant( - map, Unique<Map>(Handle<Map>::null()), false, - Representation::Tagged(), HType::Tagged(), true, + map, Unique<Map>(Handle<Map>::null()), map_is_stable, + Representation::Tagged(), HType::HeapObject(), true, false, false, MAP_TYPE)); } @@ -3523,7 +3536,7 @@ class HConstant V8_FINAL : public HTemplateInstruction<0> { isolate->factory()->NewNumber(double_value_, TENURED)); } AllowDeferredHandleDereference smi_check; - ASSERT(has_int32_value_ || !object_.handle()->IsSmi()); + DCHECK(has_int32_value_ || !object_.handle()->IsSmi()); return object_.handle(); } @@ -3544,6 +3557,10 @@ class HConstant V8_FINAL : public HTemplateInstruction<0> { return instance_type_ == CELL_TYPE || instance_type_ == PROPERTY_CELL_TYPE; } + bool IsMap() const { + return instance_type_ == MAP_TYPE; + } + virtual Representation RequiredInputRepresentation(int index) V8_OVERRIDE { return Representation::None(); } @@ -3557,19 +3574,19 @@ class HConstant V8_FINAL : public HTemplateInstruction<0> { } virtual bool EmitAtUses() V8_OVERRIDE; - virtual void PrintDataTo(StringStream* stream) V8_OVERRIDE; + virtual OStream& PrintDataTo(OStream& os) const V8_OVERRIDE; // NOLINT HConstant* CopyToRepresentation(Representation r, Zone* zone) const; Maybe<HConstant*> CopyToTruncatedInt32(Zone* zone); Maybe<HConstant*> CopyToTruncatedNumber(Zone* zone); bool HasInteger32Value() const { return has_int32_value_; } int32_t Integer32Value() const { - ASSERT(HasInteger32Value()); + DCHECK(HasInteger32Value()); return int32_value_; } bool HasSmiValue() const { return has_smi_value_; } bool HasDoubleValue() const { return has_double_value_; } double DoubleValue() const { - ASSERT(HasDoubleValue()); + DCHECK(HasDoubleValue()); return double_value_; } bool IsTheHole() const { @@ -3580,7 +3597,7 @@ class HConstant V8_FINAL : public HTemplateInstruction<0> { } bool HasNumberValue() const { return has_double_value_; } int32_t NumberValueAsInteger32() const { - ASSERT(HasNumberValue()); + DCHECK(HasNumberValue()); // Irrespective of whether a numeric HConstant can be safely // represented as an int32, we store the (in some cases lossy) // representation of the number in int32_value_. 
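
The comment closing the hunk above is worth a worked example: int32_value_ always holds an int32 image of the number, even when that image is lossy. A simplified standalone sketch of such a JS-style truncation (hypothetical helper; V8's actual DoubleToInt32 is defined elsewhere):

    #include <cmath>
    #include <cstdint>

    int32_t SketchToInt32(double value) {
      if (!std::isfinite(value)) return 0;  // NaN and +/-inf map to 0
      double m = std::fmod(std::trunc(value), 4294967296.0);  // mod 2^32
      if (m < 0) m += 4294967296.0;
      // Reinterpret as signed; two's-complement wraparound on common targets.
      return static_cast<int32_t>(static_cast<uint32_t>(m));
    }
    // SketchToInt32(3.7) == 3, but 3.7 is not recoverable from 3 -- lossy.
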
@@ -3588,11 +3605,11 @@ class HConstant V8_FINAL : public HTemplateInstruction<0> { } bool HasStringValue() const { if (has_double_value_ || has_int32_value_) return false; - ASSERT(!object_.handle().is_null()); + DCHECK(!object_.handle().is_null()); return instance_type_ < FIRST_NONSTRING_TYPE; } Handle<String> StringValue() const { - ASSERT(HasStringValue()); + DCHECK(HasStringValue()); return Handle<String>::cast(object_.handle()); } bool HasInternalizedStringValue() const { @@ -3613,17 +3630,17 @@ class HConstant V8_FINAL : public HTemplateInstruction<0> { bool HasMapValue() const { return instance_type_ == MAP_TYPE; } Unique<Map> MapValue() const { - ASSERT(HasMapValue()); + DCHECK(HasMapValue()); return Unique<Map>::cast(GetUnique()); } bool HasStableMapValue() const { - ASSERT(HasMapValue() || !has_stable_map_value_); + DCHECK(HasMapValue() || !has_stable_map_value_); return has_stable_map_value_; } bool HasObjectMap() const { return !object_map_.IsNull(); } Unique<Map> ObjectMap() const { - ASSERT(HasObjectMap()); + DCHECK(HasObjectMap()); return object_map_; } @@ -3635,14 +3652,14 @@ class HConstant V8_FINAL : public HTemplateInstruction<0> { } else if (has_external_reference_value_) { return reinterpret_cast<intptr_t>(external_reference_value_.address()); } else { - ASSERT(!object_.handle().is_null()); + DCHECK(!object_.handle().is_null()); return object_.Hashcode(); } } virtual void FinalizeUniqueness() V8_OVERRIDE { if (!has_double_value_ && !has_external_reference_value_) { - ASSERT(!object_.handle().is_null()); + DCHECK(!object_.handle().is_null()); object_ = Unique<Object>(object_.handle()); } } @@ -3674,7 +3691,7 @@ class HConstant V8_FINAL : public HTemplateInstruction<0> { other_constant->has_external_reference_value_) { return false; } - ASSERT(!object_.handle().is_null()); + DCHECK(!object_.handle().is_null()); return other_constant->object_ == object_; } } @@ -3690,7 +3707,8 @@ class HConstant V8_FINAL : public HTemplateInstruction<0> { private: friend class HGraph; - HConstant(Handle<Object> handle, Representation r = Representation::None()); + explicit HConstant(Handle<Object> handle, + Representation r = Representation::None()); HConstant(int32_t value, Representation r = Representation::None(), bool is_not_in_new_space = true, @@ -3754,7 +3772,7 @@ class HBinaryOperation : public HTemplateInstruction<3> { HType type = HType::Tagged()) : HTemplateInstruction<3>(type), observed_output_representation_(Representation::None()) { - ASSERT(left != NULL && right != NULL); + DCHECK(left != NULL && right != NULL); SetOperandAt(0, context); SetOperandAt(1, left); SetOperandAt(2, right); @@ -3778,7 +3796,7 @@ class HBinaryOperation : public HTemplateInstruction<3> { // Otherwise, if there is only one use of the right operand, it would be // better off on the left for platforms that only have 2-arg arithmetic // ops (e.g ia32, x64) that clobber the left operand. 
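
The change just below swaps UseCount() == 1 for the HasOneUse() predicate added to HValue earlier in this diff. On the singly-linked use list, that replaces a full list walk with a constant-time head check, roughly:

    struct UseNode { UseNode* tail; /* plus value/index payload */ };

    // "Exactly one use" is just "a head node with no tail" -- no counting.
    inline bool HasOneUse(const UseNode* use_list) {
      return use_list != nullptr && use_list->tail == nullptr;
    }
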
- return right()->UseCount() == 1; + return right()->HasOneUse(); } HValue* BetterLeftOperand() { @@ -3790,7 +3808,7 @@ class HBinaryOperation : public HTemplateInstruction<3> { } void set_observed_input_representation(int index, Representation rep) { - ASSERT(index >= 1 && index <= 2); + DCHECK(index >= 1 && index <= 2); observed_input_representation_[index - 1] = rep; } @@ -3819,7 +3837,7 @@ class HBinaryOperation : public HTemplateInstruction<3> { virtual bool IsCommutative() const { return false; } - virtual void PrintDataTo(StringStream* stream) V8_OVERRIDE; + virtual OStream& PrintDataTo(OStream& os) const V8_OVERRIDE; // NOLINT virtual Representation RequiredInputRepresentation(int index) V8_OVERRIDE { if (index == 0) return Representation::Tagged(); @@ -3859,12 +3877,12 @@ class HWrapReceiver V8_FINAL : public HTemplateInstruction<2> { return Representation::Tagged(); } - HValue* receiver() { return OperandAt(0); } - HValue* function() { return OperandAt(1); } + HValue* receiver() const { return OperandAt(0); } + HValue* function() const { return OperandAt(1); } virtual HValue* Canonicalize() V8_OVERRIDE; - virtual void PrintDataTo(StringStream* stream) V8_OVERRIDE; + virtual OStream& PrintDataTo(OStream& os) const V8_OVERRIDE; // NOLINT bool known_function() const { return known_function_; } DECLARE_CONCRETE_INSTRUCTION(WrapReceiver) @@ -3973,7 +3991,7 @@ class HAccessArgumentsAt V8_FINAL : public HTemplateInstruction<3> { public: DECLARE_INSTRUCTION_FACTORY_P3(HAccessArgumentsAt, HValue*, HValue*, HValue*); - virtual void PrintDataTo(StringStream* stream) V8_OVERRIDE; + virtual OStream& PrintDataTo(OStream& os) const V8_OVERRIDE; // NOLINT virtual Representation RequiredInputRepresentation(int index) V8_OVERRIDE { // The arguments elements is considered tagged. 
@@ -3982,9 +4000,9 @@ class HAccessArgumentsAt V8_FINAL : public HTemplateInstruction<3> { : Representation::Integer32(); } - HValue* arguments() { return OperandAt(0); } - HValue* length() { return OperandAt(1); } - HValue* index() { return OperandAt(2); } + HValue* arguments() const { return OperandAt(0); } + HValue* length() const { return OperandAt(1); } + HValue* index() const { return OperandAt(2); } DECLARE_CONCRETE_INSTRUCTION(AccessArgumentsAt) @@ -4011,13 +4029,13 @@ class HBoundsCheck V8_FINAL : public HTemplateInstruction<2> { bool skip_check() const { return skip_check_; } void set_skip_check() { skip_check_ = true; } - HValue* base() { return base_; } - int offset() { return offset_; } - int scale() { return scale_; } + HValue* base() const { return base_; } + int offset() const { return offset_; } + int scale() const { return scale_; } void ApplyIndexChange(); bool DetectCompoundIndex() { - ASSERT(base() == NULL); + DCHECK(base() == NULL); DecompositionResult decomposition; if (index()->TryDecompose(&decomposition)) { @@ -4037,13 +4055,13 @@ class HBoundsCheck V8_FINAL : public HTemplateInstruction<2> { return representation(); } - virtual void PrintDataTo(StringStream* stream) V8_OVERRIDE; + virtual OStream& PrintDataTo(OStream& os) const V8_OVERRIDE; // NOLINT virtual void InferRepresentation( HInferRepresentationPhase* h_infer) V8_OVERRIDE; - HValue* index() { return OperandAt(0); } - HValue* length() { return OperandAt(1); } - bool allow_equality() { return allow_equality_; } + HValue* index() const { return OperandAt(0); } + HValue* length() const { return OperandAt(1); } + bool allow_equality() const { return allow_equality_; } void set_allow_equality(bool v) { allow_equality_ = v; } virtual int RedefinedOperandIndex() V8_OVERRIDE { return 0; } @@ -4099,7 +4117,7 @@ class HBoundsCheckBaseIndexInformation V8_FINAL } } - HValue* base_index() { return OperandAt(0); } + HValue* base_index() const { return OperandAt(0); } HBoundsCheck* bounds_check() { return HBoundsCheck::cast(OperandAt(1)); } DECLARE_CONCRETE_INSTRUCTION(BoundsCheckBaseIndexInformation) @@ -4108,7 +4126,7 @@ class HBoundsCheckBaseIndexInformation V8_FINAL return representation(); } - virtual void PrintDataTo(StringStream* stream) V8_OVERRIDE; + virtual OStream& PrintDataTo(OStream& os) const V8_OVERRIDE; // NOLINT virtual int RedefinedOperandIndex() V8_OVERRIDE { return 0; } virtual bool IsPurelyInformativeDefinition() V8_OVERRIDE { return true; } @@ -4118,7 +4136,7 @@ class HBoundsCheckBaseIndexInformation V8_FINAL class HBitwiseBinaryOperation : public HBinaryOperation { public: HBitwiseBinaryOperation(HValue* context, HValue* left, HValue* right, - HType type = HType::Tagged()) + HType type = HType::TaggedNumber()) : HBinaryOperation(context, left, right, type) { SetFlag(kFlexibleRepresentation); SetFlag(kTruncatingToInt32); @@ -4234,7 +4252,7 @@ class HCompareGeneric V8_FINAL : public HBinaryOperation { } Token::Value token() const { return token_; } - virtual void PrintDataTo(StringStream* stream) V8_OVERRIDE; + virtual OStream& PrintDataTo(OStream& os) const V8_OVERRIDE; // NOLINT DECLARE_CONCRETE_INSTRUCTION(CompareGeneric) @@ -4245,7 +4263,7 @@ class HCompareGeneric V8_FINAL : public HBinaryOperation { Token::Value token) : HBinaryOperation(context, left, right, HType::Boolean()), token_(token) { - ASSERT(Token::IsCompareOp(token)); + DCHECK(Token::IsCompareOp(token)); set_representation(Representation::Tagged()); SetAllSideEffects(); } @@ -4262,8 +4280,8 @@ class HCompareNumericAndBranch : public 
HTemplateControlInstruction<2, 2> { HValue*, HValue*, Token::Value, HBasicBlock*, HBasicBlock*); - HValue* left() { return OperandAt(0); } - HValue* right() { return OperandAt(1); } + HValue* left() const { return OperandAt(0); } + HValue* right() const { return OperandAt(1); } Token::Value token() const { return token_; } void set_observed_input_representation(Representation left, @@ -4284,7 +4302,7 @@ class HCompareNumericAndBranch : public HTemplateControlInstruction<2, 2> { virtual bool KnownSuccessorBlock(HBasicBlock** block) V8_OVERRIDE; - virtual void PrintDataTo(StringStream* stream) V8_OVERRIDE; + virtual OStream& PrintDataTo(OStream& os) const V8_OVERRIDE; // NOLINT void SetOperandPositions(Zone* zone, HSourcePosition left_pos, @@ -4303,7 +4321,7 @@ class HCompareNumericAndBranch : public HTemplateControlInstruction<2, 2> { HBasicBlock* false_target = NULL) : token_(token) { SetFlag(kFlexibleRepresentation); - ASSERT(Token::IsCompareOp(token)); + DCHECK(Token::IsCompareOp(token)); SetOperandAt(0, left); SetOperandAt(1, right); SetSuccessorAt(0, true_target); @@ -4377,10 +4395,10 @@ class HCompareObjectEqAndBranch : public HTemplateControlInstruction<2, 2> { known_successor_index_ = known_successor_index; } - HValue* left() { return OperandAt(0); } - HValue* right() { return OperandAt(1); } + HValue* left() const { return OperandAt(0); } + HValue* right() const { return OperandAt(1); } - virtual void PrintDataTo(StringStream* stream) V8_OVERRIDE; + virtual OStream& PrintDataTo(OStream& os) const V8_OVERRIDE; // NOLINT virtual Representation RequiredInputRepresentation(int index) V8_OVERRIDE { return Representation::Tagged(); @@ -4398,12 +4416,6 @@ class HCompareObjectEqAndBranch : public HTemplateControlInstruction<2, 2> { HBasicBlock* true_target = NULL, HBasicBlock* false_target = NULL) : known_successor_index_(kNoKnownSuccessorIndex) { - ASSERT(!left->IsConstant() || - (!HConstant::cast(left)->HasInteger32Value() || - HConstant::cast(left)->HasSmiValue())); - ASSERT(!right->IsConstant() || - (!HConstant::cast(right)->HasInteger32Value() || - HConstant::cast(right)->HasSmiValue())); SetOperandAt(0, left); SetOperandAt(1, right); SetSuccessorAt(0, true_target); @@ -4448,6 +4460,12 @@ class HIsStringAndBranch V8_FINAL : public HUnaryControlInstruction { virtual bool KnownSuccessorBlock(HBasicBlock** block) V8_OVERRIDE; + static const int kNoKnownSuccessorIndex = -1; + int known_successor_index() const { return known_successor_index_; } + void set_known_successor_index(int known_successor_index) { + known_successor_index_ = known_successor_index; + } + DECLARE_CONCRETE_INSTRUCTION(IsStringAndBranch) protected: @@ -4457,7 +4475,10 @@ class HIsStringAndBranch V8_FINAL : public HUnaryControlInstruction { HIsStringAndBranch(HValue* value, HBasicBlock* true_target = NULL, HBasicBlock* false_target = NULL) - : HUnaryControlInstruction(value, true_target, false_target) {} + : HUnaryControlInstruction(value, true_target, false_target), + known_successor_index_(kNoKnownSuccessorIndex) { } + + int known_successor_index_; }; @@ -4521,7 +4542,7 @@ class HStringCompareAndBranch : public HTemplateControlInstruction<2, 3> { HValue* right() { return OperandAt(2); } Token::Value token() const { return token_; } - virtual void PrintDataTo(StringStream* stream) V8_OVERRIDE; + virtual OStream& PrintDataTo(OStream& os) const V8_OVERRIDE; // NOLINT virtual Representation RequiredInputRepresentation(int index) V8_OVERRIDE { return Representation::Tagged(); @@ -4539,7 +4560,7 @@ class 
HStringCompareAndBranch : public HTemplateControlInstruction<2, 3> { HValue* right, Token::Value token) : token_(token) { - ASSERT(Token::IsCompareOp(token)); + DCHECK(Token::IsCompareOp(token)); SetOperandAt(0, context); SetOperandAt(1, left); SetOperandAt(2, right); @@ -4575,7 +4596,7 @@ class HHasInstanceTypeAndBranch V8_FINAL : public HUnaryControlInstruction { InstanceType from() { return from_; } InstanceType to() { return to_; } - virtual void PrintDataTo(StringStream* stream) V8_OVERRIDE; + virtual OStream& PrintDataTo(OStream& os) const V8_OVERRIDE; // NOLINT virtual Representation RequiredInputRepresentation(int index) V8_OVERRIDE { return Representation::Tagged(); @@ -4590,7 +4611,7 @@ class HHasInstanceTypeAndBranch V8_FINAL : public HUnaryControlInstruction { : HUnaryControlInstruction(value, NULL, NULL), from_(type), to_(type) { } HHasInstanceTypeAndBranch(HValue* value, InstanceType from, InstanceType to) : HUnaryControlInstruction(value, NULL, NULL), from_(from), to_(to) { - ASSERT(to == LAST_TYPE); // Others not implemented yet in backend. + DCHECK(to == LAST_TYPE); // Others not implemented yet in backend. } InstanceType from_; @@ -4647,7 +4668,7 @@ class HClassOfTestAndBranch V8_FINAL : public HUnaryControlInstruction { return Representation::Tagged(); } - virtual void PrintDataTo(StringStream* stream) V8_OVERRIDE; + virtual OStream& PrintDataTo(OStream& os) const V8_OVERRIDE; // NOLINT Handle<String> class_name() const { return class_name_; } @@ -4664,8 +4685,8 @@ class HTypeofIsAndBranch V8_FINAL : public HUnaryControlInstruction { public: DECLARE_INSTRUCTION_FACTORY_P2(HTypeofIsAndBranch, HValue*, Handle<String>); - Handle<String> type_literal() { return type_literal_.handle(); } - virtual void PrintDataTo(StringStream* stream) V8_OVERRIDE; + Handle<String> type_literal() const { return type_literal_.handle(); } + virtual OStream& PrintDataTo(OStream& os) const V8_OVERRIDE; // NOLINT DECLARE_CONCRETE_INSTRUCTION(TypeofIsAndBranch) @@ -4696,7 +4717,7 @@ class HInstanceOf V8_FINAL : public HBinaryOperation { return Representation::Tagged(); } - virtual void PrintDataTo(StringStream* stream) V8_OVERRIDE; + virtual OStream& PrintDataTo(OStream& os) const V8_OVERRIDE; // NOLINT DECLARE_CONCRETE_INSTRUCTION(InstanceOf) @@ -5055,7 +5076,7 @@ class HBitwise V8_FINAL : public HBitwiseBinaryOperation { virtual HValue* Canonicalize() V8_OVERRIDE; - virtual void PrintDataTo(StringStream* stream) V8_OVERRIDE; + virtual OStream& PrintDataTo(OStream& os) const V8_OVERRIDE; // NOLINT DECLARE_CONCRETE_INSTRUCTION(Bitwise) @@ -5071,9 +5092,9 @@ class HBitwise V8_FINAL : public HBitwiseBinaryOperation { Token::Value op, HValue* left, HValue* right) - : HBitwiseBinaryOperation(context, left, right, HType::TaggedNumber()), + : HBitwiseBinaryOperation(context, left, right), op_(op) { - ASSERT(op == Token::BIT_AND || op == Token::BIT_OR || op == Token::BIT_XOR); + DCHECK(op == Token::BIT_AND || op == Token::BIT_OR || op == Token::BIT_XOR); // BIT_AND with a smi-range positive value will always unset the // entire sign-extension of the smi-sign. 
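
A worked instance of the comment above, before the check that relies on it: AND-ing with a non-negative constant clears the sign bit and every bit outside the mask, so if the mask is in smi range the result provably is too (hypothetical helper):

    #include <cstdint>

    inline int32_t AndWithPositiveMask(int32_t x, int32_t positive_mask) {
      return x & positive_mask;  // result is always in [0, positive_mask]
    }
    // e.g. AndWithPositiveMask(-1, 0x3fffffff) == 0x3fffffff: the
    // sign-extension of x is gone, whatever x was.
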
if (op == Token::BIT_AND && @@ -5278,7 +5299,7 @@ class HParameter V8_FINAL : public HTemplateInstruction<0> { unsigned index() const { return index_; } ParameterKind kind() const { return kind_; } - virtual void PrintDataTo(StringStream* stream) V8_OVERRIDE; + virtual OStream& PrintDataTo(OStream& os) const V8_OVERRIDE; // NOLINT virtual Representation RequiredInputRepresentation(int index) V8_OVERRIDE { return Representation::None(); @@ -5314,7 +5335,7 @@ class HCallStub V8_FINAL : public HUnaryCall { HValue* context() { return value(); } - virtual void PrintDataTo(StringStream* stream) V8_OVERRIDE; + virtual OStream& PrintDataTo(OStream& os) const V8_OVERRIDE; // NOLINT DECLARE_CONCRETE_INSTRUCTION(CallStub) @@ -5332,7 +5353,7 @@ class HUnknownOSRValue V8_FINAL : public HTemplateInstruction<0> { public: DECLARE_INSTRUCTION_FACTORY_P2(HUnknownOSRValue, HEnvironment*, int); - virtual void PrintDataTo(StringStream* stream); + virtual OStream& PrintDataTo(OStream& os) const; // NOLINT virtual Representation RequiredInputRepresentation(int index) V8_OVERRIDE { return Representation::None(); @@ -5372,7 +5393,7 @@ class HLoadGlobalCell V8_FINAL : public HTemplateInstruction<0> { Unique<Cell> cell() const { return cell_; } bool RequiresHoleCheck() const; - virtual void PrintDataTo(StringStream* stream) V8_OVERRIDE; + virtual OStream& PrintDataTo(OStream& os) const V8_OVERRIDE; // NOLINT virtual intptr_t Hashcode() V8_OVERRIDE { return cell_.Hashcode(); @@ -5411,14 +5432,25 @@ class HLoadGlobalCell V8_FINAL : public HTemplateInstruction<0> { class HLoadGlobalGeneric V8_FINAL : public HTemplateInstruction<2> { public: DECLARE_INSTRUCTION_WITH_CONTEXT_FACTORY_P3(HLoadGlobalGeneric, HValue*, - Handle<Object>, bool); + Handle<String>, bool); HValue* context() { return OperandAt(0); } HValue* global_object() { return OperandAt(1); } - Handle<Object> name() const { return name_; } + Handle<String> name() const { return name_; } bool for_typeof() const { return for_typeof_; } + int slot() const { + DCHECK(FLAG_vector_ics && + slot_ != FeedbackSlotInterface::kInvalidFeedbackSlot); + return slot_; + } + Handle<FixedArray> feedback_vector() const { return feedback_vector_; } + void SetVectorAndSlot(Handle<FixedArray> vector, int slot) { + DCHECK(FLAG_vector_ics); + feedback_vector_ = vector; + slot_ = slot; + } - virtual void PrintDataTo(StringStream* stream) V8_OVERRIDE; + virtual OStream& PrintDataTo(OStream& os) const V8_OVERRIDE; // NOLINT virtual Representation RequiredInputRepresentation(int index) V8_OVERRIDE { return Representation::Tagged(); @@ -5427,20 +5459,20 @@ class HLoadGlobalGeneric V8_FINAL : public HTemplateInstruction<2> { DECLARE_CONCRETE_INSTRUCTION(LoadGlobalGeneric) private: - HLoadGlobalGeneric(HValue* context, - HValue* global_object, - Handle<Object> name, - bool for_typeof) - : name_(name), - for_typeof_(for_typeof) { + HLoadGlobalGeneric(HValue* context, HValue* global_object, + Handle<String> name, bool for_typeof) + : name_(name), for_typeof_(for_typeof), + slot_(FeedbackSlotInterface::kInvalidFeedbackSlot) { SetOperandAt(0, context); SetOperandAt(1, global_object); set_representation(Representation::Tagged()); SetAllSideEffects(); } - Handle<Object> name_; + Handle<String> name_; bool for_typeof_; + Handle<FixedArray> feedback_vector_; + int slot_; }; @@ -5467,8 +5499,15 @@ class HAllocate V8_FINAL : public HTemplateInstruction<2> { // Maximum instance size for which allocations will be inlined. 
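
For scale, the cap defined just below works out to 256 bytes on 32-bit targets and 512 bytes on 64-bit ones:

    static const int kPointerSize32 = 4, kPointerSize64 = 8;  // illustrative
    static_assert(64 * kPointerSize32 == 256, "32-bit inline-allocation cap");
    static_assert(64 * kPointerSize64 == 512, "64-bit inline-allocation cap");
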
static const int kMaxInlineSize = 64 * kPointerSize; - HValue* context() { return OperandAt(0); } - HValue* size() { return OperandAt(1); } + HValue* context() const { return OperandAt(0); } + HValue* size() const { return OperandAt(1); } + + bool has_size_upper_bound() { return size_upper_bound_ != NULL; } + HConstant* size_upper_bound() { return size_upper_bound_; } + void set_size_upper_bound(HConstant* value) { + DCHECK(size_upper_bound_ == NULL); + size_upper_bound_ = value; + } virtual Representation RequiredInputRepresentation(int index) V8_OVERRIDE { if (index == 0) { @@ -5521,7 +5560,7 @@ class HAllocate V8_FINAL : public HTemplateInstruction<2> { virtual bool HandleSideEffectDominator(GVNFlag side_effect, HValue* dominator) V8_OVERRIDE; - virtual void PrintDataTo(StringStream* stream) V8_OVERRIDE; + virtual OStream& PrintDataTo(OStream& os) const V8_OVERRIDE; // NOLINT DECLARE_CONCRETE_INSTRUCTION(Allocate) @@ -5545,9 +5584,10 @@ class HAllocate V8_FINAL : public HTemplateInstruction<2> { : HTemplateInstruction<2>(type), flags_(ComputeFlags(pretenure_flag, instance_type)), dominating_allocate_(NULL), - filler_free_space_size_(NULL) { + filler_free_space_size_(NULL), + size_upper_bound_(NULL) { SetOperandAt(0, context); - SetOperandAt(1, size); + UpdateSize(size); set_representation(Representation::Tagged()); SetFlag(kTrackSideEffectDominators); SetChangesFlag(kNewSpacePromotion); @@ -5594,6 +5634,11 @@ class HAllocate V8_FINAL : public HTemplateInstruction<2> { void UpdateSize(HValue* size) { SetOperandAt(1, size); + if (size->IsInteger32Constant()) { + size_upper_bound_ = HConstant::cast(size); + } else { + size_upper_bound_ = NULL; + } } HAllocate* GetFoldableDominator(HAllocate* dominator); @@ -5615,6 +5660,7 @@ class HAllocate V8_FINAL : public HTemplateInstruction<2> { Handle<Map> known_initial_map_; HAllocate* dominating_allocate_; HStoreNamedField* filler_free_space_size_; + HConstant* size_upper_bound_; }; @@ -5650,45 +5696,46 @@ class HInnerAllocatedObject V8_FINAL : public HTemplateInstruction<2> { HValue* context, HValue* value, HValue* offset, - HType type = HType::Tagged()) { + HType type) { return new(zone) HInnerAllocatedObject(value, offset, type); } - HValue* base_object() { return OperandAt(0); } - HValue* offset() { return OperandAt(1); } + HValue* base_object() const { return OperandAt(0); } + HValue* offset() const { return OperandAt(1); } virtual Representation RequiredInputRepresentation(int index) V8_OVERRIDE { return index == 0 ? 
Representation::Tagged() : Representation::Integer32(); } - virtual void PrintDataTo(StringStream* stream) V8_OVERRIDE; + virtual OStream& PrintDataTo(OStream& os) const V8_OVERRIDE; // NOLINT DECLARE_CONCRETE_INSTRUCTION(InnerAllocatedObject) private: HInnerAllocatedObject(HValue* value, HValue* offset, - HType type = HType::Tagged()) - : HTemplateInstruction<2>(type) { - ASSERT(value->IsAllocate()); + HType type) : HTemplateInstruction<2>(type) { + DCHECK(value->IsAllocate()); + DCHECK(type.IsHeapObject()); SetOperandAt(0, value); SetOperandAt(1, offset); - set_type(type); set_representation(Representation::Tagged()); } }; inline bool StoringValueNeedsWriteBarrier(HValue* value) { - return !value->type().IsBoolean() - && !value->type().IsSmi() + return !value->type().IsSmi() + && !value->type().IsNull() + && !value->type().IsBoolean() + && !value->type().IsUndefined() && !(value->IsConstant() && HConstant::cast(value)->ImmortalImmovable()); } inline bool ReceiverObjectNeedsWriteBarrier(HValue* object, HValue* value, - HValue* new_space_dominator) { + HValue* dominator) { while (object->IsInnerAllocatedObject()) { object = HInnerAllocatedObject::cast(object)->base_object(); } @@ -5700,24 +5747,46 @@ inline bool ReceiverObjectNeedsWriteBarrier(HValue* object, // Stores to external references require no write barriers return false; } - if (object != new_space_dominator) return true; - if (object->IsAllocate()) { - // Stores to new space allocations require no write barriers if the object - // is the new space dominator. + // We definitely need a write barrier unless the object is the allocation + // dominator. + if (object == dominator && object->IsAllocate()) { + // Stores to new space allocations require no write barriers. if (HAllocate::cast(object)->IsNewSpaceAllocation()) { return false; } - // Likewise we don't need a write barrier if we store a value that - // originates from the same allocation (via allocation folding). + // Stores to old space allocations require no write barriers if the value is + // a constant provably not in new space. + if (value->IsConstant() && HConstant::cast(value)->NotInNewSpace()) { + return false; + } + // Stores to old space allocations require no write barriers if the value is + // an old space allocation. 
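// (Sketch of the overall rule: the generational barrier records old-to-new
// pointers, so it can be skipped only when the receiver is itself the
// dominating new-space allocation, or when the stored value is provably not
// in new space: the constant case above and the allocation case below.)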
while (value->IsInnerAllocatedObject()) { value = HInnerAllocatedObject::cast(value)->base_object(); } - return object != value; + if (value->IsAllocate() && + !HAllocate::cast(value)->IsNewSpaceAllocation()) { + return false; + } } return true; } +inline PointersToHereCheck PointersToHereCheckForObject(HValue* object, + HValue* dominator) { + while (object->IsInnerAllocatedObject()) { + object = HInnerAllocatedObject::cast(object)->base_object(); + } + if (object == dominator && + object->IsAllocate() && + HAllocate::cast(object)->IsNewSpaceAllocation()) { + return kPointersToHereAreAlwaysInteresting; + } + return kPointersToHereMaybeInteresting; +} + + class HStoreGlobalCell V8_FINAL : public HUnaryOperation { public: DECLARE_INSTRUCTION_FACTORY_P3(HStoreGlobalCell, HValue*, @@ -5738,7 +5807,7 @@ class HStoreGlobalCell V8_FINAL : public HUnaryOperation { virtual Representation RequiredInputRepresentation(int index) V8_OVERRIDE { return Representation::Tagged(); } - virtual void PrintDataTo(StringStream* stream) V8_OVERRIDE; + virtual OStream& PrintDataTo(OStream& os) const V8_OVERRIDE; // NOLINT DECLARE_CONCRETE_INSTRUCTION(StoreGlobalCell) @@ -5772,20 +5841,8 @@ class HLoadContextSlot V8_FINAL : public HUnaryOperation { kCheckReturnUndefined }; - HLoadContextSlot(HValue* context, Variable* var) - : HUnaryOperation(context), slot_index_(var->index()) { - ASSERT(var->IsContextSlot()); - switch (var->mode()) { - case LET: - case CONST: - mode_ = kCheckDeoptimize; - break; - case CONST_LEGACY: - mode_ = kCheckReturnUndefined; - break; - default: - mode_ = kNoCheck; - } + HLoadContextSlot(HValue* context, int slot_index, Mode mode) + : HUnaryOperation(context), slot_index_(slot_index), mode_(mode) { set_representation(Representation::Tagged()); SetFlag(kUseGVN); SetDependsOnFlag(kContextSlots); @@ -5806,7 +5863,7 @@ class HLoadContextSlot V8_FINAL : public HUnaryOperation { return Representation::Tagged(); } - virtual void PrintDataTo(StringStream* stream) V8_OVERRIDE; + virtual OStream& PrintDataTo(OStream& os) const V8_OVERRIDE; // NOLINT DECLARE_CONCRETE_INSTRUCTION(LoadContextSlot) @@ -5842,8 +5899,8 @@ class HStoreContextSlot V8_FINAL : public HTemplateInstruction<2> { DECLARE_INSTRUCTION_FACTORY_P4(HStoreContextSlot, HValue*, int, Mode, HValue*); - HValue* context() { return OperandAt(0); } - HValue* value() { return OperandAt(1); } + HValue* context() const { return OperandAt(0); } + HValue* value() const { return OperandAt(1); } int slot_index() const { return slot_index_; } Mode mode() const { return mode_; } @@ -5863,7 +5920,7 @@ class HStoreContextSlot V8_FINAL : public HTemplateInstruction<2> { return Representation::Tagged(); } - virtual void PrintDataTo(StringStream* stream) V8_OVERRIDE; + virtual OStream& PrintDataTo(OStream& os) const V8_OVERRIDE; // NOLINT DECLARE_CONCRETE_INSTRUCTION(StoreContextSlot) @@ -6033,9 +6090,14 @@ class HObjectAccess V8_FINAL { return HObjectAccess(kMaps, JSObject::kMapOffset); } - static HObjectAccess ForMapInstanceSize() { + static HObjectAccess ForMapAsInteger32() { + return HObjectAccess(kMaps, JSObject::kMapOffset, + Representation::Integer32()); + } + + static HObjectAccess ForMapInObjectProperties() { return HObjectAccess(kInobject, - Map::kInstanceSizeOffset, + Map::kInObjectPropertiesOffset, Representation::UInteger8()); } @@ -6045,6 +6107,40 @@ class HObjectAccess V8_FINAL { Representation::UInteger8()); } + static HObjectAccess ForMapInstanceSize() { + return HObjectAccess(kInobject, + Map::kInstanceSizeOffset, + 
Representation::UInteger8()); + } + + static HObjectAccess ForMapBitField() { + return HObjectAccess(kInobject, + Map::kBitFieldOffset, + Representation::UInteger8()); + } + + static HObjectAccess ForMapBitField2() { + return HObjectAccess(kInobject, + Map::kBitField2Offset, + Representation::UInteger8()); + } + + static HObjectAccess ForNameHashField() { + return HObjectAccess(kInobject, + Name::kHashFieldOffset, + Representation::Integer32()); + } + + static HObjectAccess ForMapInstanceTypeAndBitField() { + STATIC_ASSERT((Map::kInstanceTypeAndBitFieldOffset & 1) == 0); + // Ensure the two fields share one 16-bit word, endian-independent. + STATIC_ASSERT((Map::kBitFieldOffset & ~1) == + (Map::kInstanceTypeOffset & ~1)); + return HObjectAccess(kInobject, + Map::kInstanceTypeAndBitFieldOffset, + Representation::UInteger16()); + } + static HObjectAccess ForPropertyCellValue() { return HObjectAccess(kInobject, PropertyCell::kValueOffset); } @@ -6062,6 +6158,11 @@ class HObjectAccess V8_FINAL { Handle<String>::null(), false, false); } + static HObjectAccess ForExternalUInteger8() { + return HObjectAccess(kExternalMemory, 0, Representation::UInteger8(), + Handle<String>::null(), false, false); + } + // Create an access to an offset in a fixed array header. static HObjectAccess ForFixedArrayHeader(int offset); @@ -6146,8 +6247,6 @@ class HObjectAccess V8_FINAL { return HObjectAccess(kInobject, GlobalObject::kNativeContextOffset); } - void PrintTo(StringStream* stream) const; - inline bool Equals(HObjectAccess that) const { return value_ == that.value_; // portion and offset must match } @@ -6183,12 +6282,12 @@ class HObjectAccess V8_FINAL { OffsetField::encode(offset)), name_(name) { // assert that the fields decode correctly - ASSERT(this->offset() == offset); - ASSERT(this->portion() == portion); - ASSERT(this->immutable() == immutable); - ASSERT(this->existing_inobject_property() == existing_inobject_property); - ASSERT(RepresentationField::decode(value_) == representation.kind()); - ASSERT(!this->existing_inobject_property() || IsInobject()); + DCHECK(this->offset() == offset); + DCHECK(this->portion() == portion); + DCHECK(this->immutable() == immutable); + DCHECK(this->existing_inobject_property() == existing_inobject_property); + DCHECK(RepresentationField::decode(value_) == representation.kind()); + DCHECK(!this->existing_inobject_property() || IsInobject()); } class PortionField : public BitField<Portion, 0, 3> {}; @@ -6203,6 +6302,7 @@ class HObjectAccess V8_FINAL { friend class HLoadNamedField; friend class HStoreNamedField; friend class SideEffectsTracker; + friend OStream& operator<<(OStream& os, const HObjectAccess& access); inline Portion portion() const { return PortionField::decode(value_); @@ -6210,6 +6310,9 @@ class HObjectAccess V8_FINAL { }; +OStream& operator<<(OStream& os, const HObjectAccess& access); + + class HLoadNamedField V8_FINAL : public HTemplateInstruction<2> { public: DECLARE_INSTRUCTION_FACTORY_P3(HLoadNamedField, HValue*, @@ -6217,9 +6320,9 @@ class HLoadNamedField V8_FINAL : public HTemplateInstruction<2> { DECLARE_INSTRUCTION_FACTORY_P5(HLoadNamedField, HValue*, HValue*, HObjectAccess, const UniqueSet<Map>*, HType); - HValue* object() { return OperandAt(0); } - HValue* dependency() { - ASSERT(HasDependency()); + HValue* object() const { return OperandAt(0); } + HValue* dependency() const { + DCHECK(HasDependency()); return OperandAt(1); } bool HasDependency() const { return OperandAt(0) != OperandAt(1); } @@ -6242,9 +6345,10 @@ class HLoadNamedField V8_FINAL 
: public HTemplateInstruction<2> { return Representation::Tagged(); } virtual Range* InferRange(Zone* zone) V8_OVERRIDE; - virtual void PrintDataTo(StringStream* stream) V8_OVERRIDE; + virtual OStream& PrintDataTo(OStream& os) const V8_OVERRIDE; // NOLINT bool CanBeReplacedWith(HValue* other) const { + if (!CheckFlag(HValue::kCantBeReplaced)) return false; if (!type().Equals(other->type())) return false; if (!representation().Equals(other->representation())) return false; if (!other->IsLoadNamedField()) return true; @@ -6271,7 +6375,7 @@ class HLoadNamedField V8_FINAL : public HTemplateInstruction<2> { HValue* dependency, HObjectAccess access) : access_(access), maps_(NULL) { - ASSERT_NOT_NULL(object); + DCHECK_NOT_NULL(object); SetOperandAt(0, object); SetOperandAt(1, dependency ? dependency : object); @@ -6293,9 +6397,7 @@ class HLoadNamedField V8_FINAL : public HTemplateInstruction<2> { representation.IsInteger32()) { set_representation(representation); } else if (representation.IsHeapObject()) { - // TODO(bmeurer): This is probably broken. What we actually want to to - // instead is set_representation(Representation::HeapObject()). - set_type(HType::NonPrimitive()); + set_type(HType::HeapObject()); set_representation(Representation::Tagged()); } else { set_representation(Representation::Tagged()); @@ -6309,17 +6411,15 @@ class HLoadNamedField V8_FINAL : public HTemplateInstruction<2> { const UniqueSet<Map>* maps, HType type) : HTemplateInstruction<2>(type), access_(access), maps_(maps) { - ASSERT_NOT_NULL(maps); - ASSERT_NE(0, maps->size()); + DCHECK_NOT_NULL(maps); + DCHECK_NE(0, maps->size()); - ASSERT_NOT_NULL(object); + DCHECK_NOT_NULL(object); SetOperandAt(0, object); SetOperandAt(1, dependency ? dependency : object); - ASSERT(access.representation().IsHeapObject()); - // TODO(bmeurer): This is probably broken. What we actually want to to - // instead is set_representation(Representation::HeapObject()). 
- if (!type.IsHeapObject()) set_type(HType::NonPrimitive()); + DCHECK(access.representation().IsHeapObject()); + DCHECK(type.IsHeapObject()); set_representation(Representation::Tagged()); access.SetGVNFlags(this, LOAD); @@ -6337,21 +6437,34 @@ class HLoadNamedGeneric V8_FINAL : public HTemplateInstruction<2> { DECLARE_INSTRUCTION_WITH_CONTEXT_FACTORY_P2(HLoadNamedGeneric, HValue*, Handle<Object>); - HValue* context() { return OperandAt(0); } - HValue* object() { return OperandAt(1); } + HValue* context() const { return OperandAt(0); } + HValue* object() const { return OperandAt(1); } Handle<Object> name() const { return name_; } + int slot() const { + DCHECK(FLAG_vector_ics && + slot_ != FeedbackSlotInterface::kInvalidFeedbackSlot); + return slot_; + } + Handle<FixedArray> feedback_vector() const { return feedback_vector_; } + void SetVectorAndSlot(Handle<FixedArray> vector, int slot) { + DCHECK(FLAG_vector_ics); + feedback_vector_ = vector; + slot_ = slot; + } + virtual Representation RequiredInputRepresentation(int index) V8_OVERRIDE { return Representation::Tagged(); } - virtual void PrintDataTo(StringStream* stream) V8_OVERRIDE; + virtual OStream& PrintDataTo(OStream& os) const V8_OVERRIDE; // NOLINT DECLARE_CONCRETE_INSTRUCTION(LoadNamedGeneric) private: HLoadNamedGeneric(HValue* context, HValue* object, Handle<Object> name) - : name_(name) { + : name_(name), + slot_(FeedbackSlotInterface::kInvalidFeedbackSlot) { SetOperandAt(0, context); SetOperandAt(1, object); set_representation(Representation::Tagged()); @@ -6359,6 +6472,8 @@ class HLoadNamedGeneric V8_FINAL : public HTemplateInstruction<2> { } Handle<Object> name_; + Handle<FixedArray> feedback_vector_; + int slot_; }; @@ -6390,11 +6505,12 @@ class ArrayInstructionInterface { public: virtual HValue* GetKey() = 0; virtual void SetKey(HValue* key) = 0; - virtual void SetIndexOffset(uint32_t index_offset) = 0; - virtual int MaxIndexOffsetBits() = 0; - virtual bool IsDehoisted() = 0; + virtual ElementsKind elements_kind() const = 0; + // TryIncreaseBaseOffset returns false if overflow would result. 
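// (Sketch of the intended use: dehoisting folds a constant key increment,
// e.g. the 4 in a[i + 4], into the access as 4 << ElementsKindToShiftSize(kind)
// added to the stored base offset; implementations are expected to return
// false when the widened uint32 sum no longer fits the offset field.)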
+ virtual bool TryIncreaseBaseOffset(uint32_t increase_by_value) = 0; + virtual bool IsDehoisted() const = 0; virtual void SetDehoisted(bool is_dehoisted) = 0; - virtual ~ArrayInstructionInterface() { }; + virtual ~ArrayInstructionInterface() { } static Representation KeyedAccessIndexRequirement(Representation r) { return r.IsInteger32() || SmiValuesAre32Bits() @@ -6403,6 +6519,8 @@ class ArrayInstructionInterface { }; +static const int kDefaultKeyedHeaderOffsetSentinel = -1; + enum LoadKeyedHoleMode { NEVER_RETURN_HOLE, ALLOW_RETURN_HOLE @@ -6416,6 +6534,8 @@ class HLoadKeyed V8_FINAL ElementsKind); DECLARE_INSTRUCTION_FACTORY_P5(HLoadKeyed, HValue*, HValue*, HValue*, ElementsKind, LoadKeyedHoleMode); + DECLARE_INSTRUCTION_FACTORY_P6(HLoadKeyed, HValue*, HValue*, HValue*, + ElementsKind, LoadKeyedHoleMode, int); bool is_external() const { return IsExternalArrayElementsKind(elements_kind()); @@ -6426,27 +6546,22 @@ class HLoadKeyed V8_FINAL bool is_typed_elements() const { return is_external() || is_fixed_typed_array(); } - HValue* elements() { return OperandAt(0); } - HValue* key() { return OperandAt(1); } - HValue* dependency() { - ASSERT(HasDependency()); + HValue* elements() const { return OperandAt(0); } + HValue* key() const { return OperandAt(1); } + HValue* dependency() const { + DCHECK(HasDependency()); return OperandAt(2); } bool HasDependency() const { return OperandAt(0) != OperandAt(2); } - uint32_t index_offset() { return IndexOffsetField::decode(bit_field_); } - void SetIndexOffset(uint32_t index_offset) { - bit_field_ = IndexOffsetField::update(bit_field_, index_offset); - } - virtual int MaxIndexOffsetBits() { - return kBitsForIndexOffset; - } + uint32_t base_offset() const { return BaseOffsetField::decode(bit_field_); } + bool TryIncreaseBaseOffset(uint32_t increase_by_value); HValue* GetKey() { return key(); } void SetKey(HValue* key) { SetOperandAt(1, key); } - bool IsDehoisted() { return IsDehoistedField::decode(bit_field_); } + bool IsDehoisted() const { return IsDehoistedField::decode(bit_field_); } void SetDehoisted(bool is_dehoisted) { bit_field_ = IsDehoistedField::update(bit_field_, is_dehoisted); } - ElementsKind elements_kind() const { + virtual ElementsKind elements_kind() const V8_OVERRIDE { return ElementsKindField::decode(bit_field_); } LoadKeyedHoleMode hole_mode() const { @@ -6473,7 +6588,7 @@ class HLoadKeyed V8_FINAL return RequiredInputRepresentation(index); } - virtual void PrintDataTo(StringStream* stream) V8_OVERRIDE; + virtual OStream& PrintDataTo(OStream& os) const V8_OVERRIDE; // NOLINT bool UsesMustHandleHole() const; bool AllUsesCanTreatHoleAsNaN() const; @@ -6488,7 +6603,7 @@ class HLoadKeyed V8_FINAL if (!other->IsLoadKeyed()) return false; HLoadKeyed* other_load = HLoadKeyed::cast(other); - if (IsDehoisted() && index_offset() != other_load->index_offset()) + if (IsDehoisted() && base_offset() != other_load->base_offset()) return false; return elements_kind() == other_load->elements_kind(); } @@ -6498,10 +6613,15 @@ class HLoadKeyed V8_FINAL HValue* key, HValue* dependency, ElementsKind elements_kind, - LoadKeyedHoleMode mode = NEVER_RETURN_HOLE) + LoadKeyedHoleMode mode = NEVER_RETURN_HOLE, + int offset = kDefaultKeyedHeaderOffsetSentinel) : bit_field_(0) { + offset = offset == kDefaultKeyedHeaderOffsetSentinel + ? 
GetDefaultHeaderSizeForElementsKind(elements_kind) + : offset; bit_field_ = ElementsKindField::encode(elements_kind) | - HoleModeField::encode(mode); + HoleModeField::encode(mode) | + BaseOffsetField::encode(offset); SetOperandAt(0, obj); SetOperandAt(1, key); @@ -6510,7 +6630,7 @@ class HLoadKeyed V8_FINAL if (!is_typed_elements()) { // I can detect the case between storing double (holey and fast) and // smi/object by looking at elements_kind_. - ASSERT(IsFastSmiOrObjectElementsKind(elements_kind) || + DCHECK(IsFastSmiOrObjectElementsKind(elements_kind) || IsFastDoubleElementsKind(elements_kind)); if (IsFastSmiOrObjectElementsKind(elements_kind)) { @@ -6564,16 +6684,16 @@ class HLoadKeyed V8_FINAL enum LoadKeyedBits { kBitsForElementsKind = 5, kBitsForHoleMode = 1, - kBitsForIndexOffset = 25, + kBitsForBaseOffset = 25, kBitsForIsDehoisted = 1, kStartElementsKind = 0, kStartHoleMode = kStartElementsKind + kBitsForElementsKind, - kStartIndexOffset = kStartHoleMode + kBitsForHoleMode, - kStartIsDehoisted = kStartIndexOffset + kBitsForIndexOffset + kStartBaseOffset = kStartHoleMode + kBitsForHoleMode, + kStartIsDehoisted = kStartBaseOffset + kBitsForBaseOffset }; - STATIC_ASSERT((kBitsForElementsKind + kBitsForIndexOffset + + STATIC_ASSERT((kBitsForElementsKind + kBitsForBaseOffset + kBitsForIsDehoisted) <= sizeof(uint32_t)*8); STATIC_ASSERT(kElementsKindCount <= (1 << kBitsForElementsKind)); class ElementsKindField: @@ -6582,8 +6702,8 @@ class HLoadKeyed V8_FINAL class HoleModeField: public BitField<LoadKeyedHoleMode, kStartHoleMode, kBitsForHoleMode> {}; // NOLINT - class IndexOffsetField: - public BitField<uint32_t, kStartIndexOffset, kBitsForIndexOffset> + class BaseOffsetField: + public BitField<uint32_t, kStartBaseOffset, kBitsForBaseOffset> {}; // NOLINT class IsDehoistedField: public BitField<bool, kStartIsDehoisted, kBitsForIsDehoisted> @@ -6596,11 +6716,22 @@ class HLoadKeyedGeneric V8_FINAL : public HTemplateInstruction<3> { public: DECLARE_INSTRUCTION_WITH_CONTEXT_FACTORY_P2(HLoadKeyedGeneric, HValue*, HValue*); - HValue* object() { return OperandAt(0); } - HValue* key() { return OperandAt(1); } - HValue* context() { return OperandAt(2); } + HValue* object() const { return OperandAt(0); } + HValue* key() const { return OperandAt(1); } + HValue* context() const { return OperandAt(2); } + int slot() const { + DCHECK(FLAG_vector_ics && + slot_ != FeedbackSlotInterface::kInvalidFeedbackSlot); + return slot_; + } + Handle<FixedArray> feedback_vector() const { return feedback_vector_; } + void SetVectorAndSlot(Handle<FixedArray> vector, int slot) { + DCHECK(FLAG_vector_ics); + feedback_vector_ = vector; + slot_ = slot; + } - virtual void PrintDataTo(StringStream* stream) V8_OVERRIDE; + virtual OStream& PrintDataTo(OStream& os) const V8_OVERRIDE; // NOLINT virtual Representation RequiredInputRepresentation(int index) V8_OVERRIDE { // tagged[tagged] @@ -6612,13 +6743,17 @@ class HLoadKeyedGeneric V8_FINAL : public HTemplateInstruction<3> { DECLARE_CONCRETE_INSTRUCTION(LoadKeyedGeneric) private: - HLoadKeyedGeneric(HValue* context, HValue* obj, HValue* key) { + HLoadKeyedGeneric(HValue* context, HValue* obj, HValue* key) + : slot_(FeedbackSlotInterface::kInvalidFeedbackSlot) { set_representation(Representation::Tagged()); SetOperandAt(0, obj); SetOperandAt(1, key); SetOperandAt(2, context); SetAllSideEffects(); } + + Handle<FixedArray> feedback_vector_; + int slot_; }; @@ -6674,24 +6809,19 @@ class HStoreNamedField V8_FINAL : public HTemplateInstruction<3> { } virtual bool 
HandleSideEffectDominator(GVNFlag side_effect, HValue* dominator) V8_OVERRIDE { - ASSERT(side_effect == kNewSpacePromotion); + DCHECK(side_effect == kNewSpacePromotion); if (!FLAG_use_write_barrier_elimination) return false; - new_space_dominator_ = dominator; + dominator_ = dominator; return false; } - virtual void PrintDataTo(StringStream* stream) V8_OVERRIDE; - - void SkipWriteBarrier() { write_barrier_mode_ = SKIP_WRITE_BARRIER; } - bool IsSkipWriteBarrier() const { - return write_barrier_mode_ == SKIP_WRITE_BARRIER; - } + virtual OStream& PrintDataTo(OStream& os) const V8_OVERRIDE; // NOLINT HValue* object() const { return OperandAt(0); } HValue* value() const { return OperandAt(1); } HValue* transition() const { return OperandAt(2); } HObjectAccess access() const { return access_; } - HValue* new_space_dominator() const { return new_space_dominator_; } + HValue* dominator() const { return dominator_; } bool has_transition() const { return has_transition_; } StoreFieldOrKeyedMode store_mode() const { return store_mode_; } @@ -6705,35 +6835,37 @@ class HStoreNamedField V8_FINAL : public HTemplateInstruction<3> { } void SetTransition(HConstant* transition) { - ASSERT(!has_transition()); // Only set once. + DCHECK(!has_transition()); // Only set once. SetOperandAt(2, transition); has_transition_ = true; + SetChangesFlag(kMaps); } - bool NeedsWriteBarrier() { - ASSERT(!field_representation().IsDouble() || !has_transition()); - if (IsSkipWriteBarrier()) return false; + bool NeedsWriteBarrier() const { + DCHECK(!field_representation().IsDouble() || !has_transition()); if (field_representation().IsDouble()) return false; if (field_representation().IsSmi()) return false; if (field_representation().IsInteger32()) return false; if (field_representation().IsExternal()) return false; return StoringValueNeedsWriteBarrier(value()) && - ReceiverObjectNeedsWriteBarrier(object(), value(), - new_space_dominator()); + ReceiverObjectNeedsWriteBarrier(object(), value(), dominator()); } bool NeedsWriteBarrierForMap() { - if (IsSkipWriteBarrier()) return false; return ReceiverObjectNeedsWriteBarrier(object(), transition(), - new_space_dominator()); + dominator()); } SmiCheck SmiCheckForWriteBarrier() const { if (field_representation().IsHeapObject()) return OMIT_SMI_CHECK; - if (value()->IsHeapObject()) return OMIT_SMI_CHECK; + if (value()->type().IsHeapObject()) return OMIT_SMI_CHECK; return INLINE_SMI_CHECK; } + PointersToHereCheck PointersToHereCheckForValue() const { + return PointersToHereCheckForObject(value(), dominator()); + } + Representation field_representation() const { return access_.representation(); } @@ -6742,19 +6874,31 @@ class HStoreNamedField V8_FINAL : public HTemplateInstruction<3> { SetOperandAt(1, value); } + bool CanBeReplacedWith(HStoreNamedField* that) const { + if (!this->access().Equals(that->access())) return false; + if (SmiValuesAre32Bits() && + this->field_representation().IsSmi() && + this->store_mode() == INITIALIZING_STORE && + that->store_mode() == STORE_TO_INITIALIZED_ENTRY) { + // We cannot replace an initializing store to a smi field with a store to + // an initialized entry on 64-bit architectures (with 32-bit smis). 
+ return false; + } + return true; + } + private: HStoreNamedField(HValue* obj, HObjectAccess access, HValue* val, StoreFieldOrKeyedMode store_mode = INITIALIZING_STORE) : access_(access), - new_space_dominator_(NULL), - write_barrier_mode_(UPDATE_WRITE_BARRIER), + dominator_(NULL), has_transition_(false), store_mode_(store_mode) { // Stores to a non existing in-object property are allowed only to the // newly allocated objects (via HAllocate or HInnerAllocatedObject). - ASSERT(!access.IsInobject() || access.existing_inobject_property() || + DCHECK(!access.IsInobject() || access.existing_inobject_property() || obj->IsAllocate() || obj->IsInnerAllocatedObject()); SetOperandAt(0, obj); SetOperandAt(1, val); @@ -6763,8 +6907,7 @@ class HStoreNamedField V8_FINAL : public HTemplateInstruction<3> { } HObjectAccess access_; - HValue* new_space_dominator_; - WriteBarrierMode write_barrier_mode_ : 1; + HValue* dominator_; bool has_transition_ : 1; StoreFieldOrKeyedMode store_mode_ : 1; }; @@ -6775,13 +6918,13 @@ class HStoreNamedGeneric V8_FINAL : public HTemplateInstruction<3> { DECLARE_INSTRUCTION_WITH_CONTEXT_FACTORY_P4(HStoreNamedGeneric, HValue*, Handle<String>, HValue*, StrictMode); - HValue* object() { return OperandAt(0); } - HValue* value() { return OperandAt(1); } - HValue* context() { return OperandAt(2); } - Handle<String> name() { return name_; } - StrictMode strict_mode() { return strict_mode_; } + HValue* object() const { return OperandAt(0); } + HValue* value() const { return OperandAt(1); } + HValue* context() const { return OperandAt(2); } + Handle<String> name() const { return name_; } + StrictMode strict_mode() const { return strict_mode_; } - virtual void PrintDataTo(StringStream* stream) V8_OVERRIDE; + virtual OStream& PrintDataTo(OStream& os) const V8_OVERRIDE; // NOLINT virtual Representation RequiredInputRepresentation(int index) V8_OVERRIDE { return Representation::Tagged(); @@ -6815,6 +6958,8 @@ class HStoreKeyed V8_FINAL ElementsKind); DECLARE_INSTRUCTION_FACTORY_P5(HStoreKeyed, HValue*, HValue*, HValue*, ElementsKind, StoreFieldOrKeyedMode); + DECLARE_INSTRUCTION_FACTORY_P6(HStoreKeyed, HValue*, HValue*, HValue*, + ElementsKind, StoreFieldOrKeyedMode, int); virtual Representation RequiredInputRepresentation(int index) V8_OVERRIDE { // kind_fast: tagged[int32] = tagged @@ -6830,7 +6975,7 @@ class HStoreKeyed V8_FINAL OperandAt(1)->representation()); } - ASSERT_EQ(index, 2); + DCHECK_EQ(index, 2); return RequiredValueRepresentation(elements_kind_, store_mode_); } @@ -6873,26 +7018,24 @@ class HStoreKeyed V8_FINAL return Representation::None(); } Representation r = RequiredValueRepresentation(elements_kind_, store_mode_); + // For fast object elements kinds, don't assume anything. 
if (r.IsTagged()) return Representation::None(); return r; } - HValue* elements() { return OperandAt(0); } - HValue* key() { return OperandAt(1); } - HValue* value() { return OperandAt(2); } + HValue* elements() const { return OperandAt(0); } + HValue* key() const { return OperandAt(1); } + HValue* value() const { return OperandAt(2); } bool value_is_smi() const { return IsFastSmiElementsKind(elements_kind_); } StoreFieldOrKeyedMode store_mode() const { return store_mode_; } ElementsKind elements_kind() const { return elements_kind_; } - uint32_t index_offset() { return index_offset_; } - void SetIndexOffset(uint32_t index_offset) { index_offset_ = index_offset; } - virtual int MaxIndexOffsetBits() { - return 31 - ElementsKindToShiftSize(elements_kind_); - } + uint32_t base_offset() const { return base_offset_; } + bool TryIncreaseBaseOffset(uint32_t increase_by_value); HValue* GetKey() { return key(); } void SetKey(HValue* key) { SetOperandAt(1, key); } - bool IsDehoisted() { return is_dehoisted_; } + bool IsDehoisted() const { return is_dehoisted_; } void SetDehoisted(bool is_dehoisted) { is_dehoisted_ = is_dehoisted; } bool IsUninitialized() { return is_uninitialized_; } void SetUninitialized(bool is_uninitialized) { @@ -6905,39 +7048,45 @@ class HStoreKeyed V8_FINAL virtual bool HandleSideEffectDominator(GVNFlag side_effect, HValue* dominator) V8_OVERRIDE { - ASSERT(side_effect == kNewSpacePromotion); - new_space_dominator_ = dominator; + DCHECK(side_effect == kNewSpacePromotion); + dominator_ = dominator; return false; } - HValue* new_space_dominator() const { return new_space_dominator_; } + HValue* dominator() const { return dominator_; } bool NeedsWriteBarrier() { if (value_is_smi()) { return false; } else { return StoringValueNeedsWriteBarrier(value()) && - ReceiverObjectNeedsWriteBarrier(elements(), value(), - new_space_dominator()); + ReceiverObjectNeedsWriteBarrier(elements(), value(), dominator()); } } + PointersToHereCheck PointersToHereCheckForValue() const { + return PointersToHereCheckForObject(value(), dominator()); + } + bool NeedsCanonicalization(); - virtual void PrintDataTo(StringStream* stream) V8_OVERRIDE; + virtual OStream& PrintDataTo(OStream& os) const V8_OVERRIDE; // NOLINT DECLARE_CONCRETE_INSTRUCTION(StoreKeyed) private: HStoreKeyed(HValue* obj, HValue* key, HValue* val, ElementsKind elements_kind, - StoreFieldOrKeyedMode store_mode = INITIALIZING_STORE) + StoreFieldOrKeyedMode store_mode = INITIALIZING_STORE, + int offset = kDefaultKeyedHeaderOffsetSentinel) : elements_kind_(elements_kind), - index_offset_(0), + base_offset_(offset == kDefaultKeyedHeaderOffsetSentinel + ? 
GetDefaultHeaderSizeForElementsKind(elements_kind) + : offset), is_dehoisted_(false), is_uninitialized_(false), store_mode_(store_mode), - new_space_dominator_(NULL) { + dominator_(NULL) { SetOperandAt(0, obj); SetOperandAt(1, key); SetOperandAt(2, val); @@ -6970,11 +7119,11 @@ class HStoreKeyed V8_FINAL } ElementsKind elements_kind_; - uint32_t index_offset_; + uint32_t base_offset_; bool is_dehoisted_ : 1; bool is_uninitialized_ : 1; StoreFieldOrKeyedMode store_mode_: 1; - HValue* new_space_dominator_; + HValue* dominator_; }; @@ -6983,18 +7132,18 @@ class HStoreKeyedGeneric V8_FINAL : public HTemplateInstruction<4> { DECLARE_INSTRUCTION_WITH_CONTEXT_FACTORY_P4(HStoreKeyedGeneric, HValue*, HValue*, HValue*, StrictMode); - HValue* object() { return OperandAt(0); } - HValue* key() { return OperandAt(1); } - HValue* value() { return OperandAt(2); } - HValue* context() { return OperandAt(3); } - StrictMode strict_mode() { return strict_mode_; } + HValue* object() const { return OperandAt(0); } + HValue* key() const { return OperandAt(1); } + HValue* value() const { return OperandAt(2); } + HValue* context() const { return OperandAt(3); } + StrictMode strict_mode() const { return strict_mode_; } virtual Representation RequiredInputRepresentation(int index) V8_OVERRIDE { // tagged[tagged] = tagged return Representation::Tagged(); } - virtual void PrintDataTo(StringStream* stream) V8_OVERRIDE; + virtual OStream& PrintDataTo(OStream& os) const V8_OVERRIDE; // NOLINT DECLARE_CONCRETE_INSTRUCTION(StoreKeyedGeneric) @@ -7031,14 +7180,14 @@ class HTransitionElementsKind V8_FINAL : public HTemplateInstruction<2> { return Representation::Tagged(); } - HValue* object() { return OperandAt(0); } - HValue* context() { return OperandAt(1); } - Unique<Map> original_map() { return original_map_; } - Unique<Map> transitioned_map() { return transitioned_map_; } - ElementsKind from_kind() { return from_kind_; } - ElementsKind to_kind() { return to_kind_; } + HValue* object() const { return OperandAt(0); } + HValue* context() const { return OperandAt(1); } + Unique<Map> original_map() const { return original_map_; } + Unique<Map> transitioned_map() const { return transitioned_map_; } + ElementsKind from_kind() const { return from_kind_; } + ElementsKind to_kind() const { return to_kind_; } - virtual void PrintDataTo(StringStream* stream) V8_OVERRIDE; + virtual OStream& PrintDataTo(OStream& os) const V8_OVERRIDE; // NOLINT DECLARE_CONCRETE_INSTRUCTION(TransitionElementsKind) @@ -7096,7 +7245,7 @@ class HStringAdd V8_FINAL : public HBinaryOperation { return Representation::Tagged(); } - virtual void PrintDataTo(StringStream* stream) V8_OVERRIDE; + virtual OStream& PrintDataTo(OStream& os) const V8_OVERRIDE; // NOLINT DECLARE_CONCRETE_INSTRUCTION(StringAdd) @@ -7331,10 +7480,10 @@ class HTypeof V8_FINAL : public HTemplateInstruction<2> { public: DECLARE_INSTRUCTION_WITH_CONTEXT_FACTORY_P1(HTypeof, HValue*); - HValue* context() { return OperandAt(0); } - HValue* value() { return OperandAt(1); } + HValue* context() const { return OperandAt(0); } + HValue* value() const { return OperandAt(1); } - virtual void PrintDataTo(StringStream* stream) V8_OVERRIDE; + virtual OStream& PrintDataTo(OStream& os) const V8_OVERRIDE; // NOLINT virtual Representation RequiredInputRepresentation(int index) V8_OVERRIDE { return Representation::Tagged(); @@ -7390,10 +7539,10 @@ class HToFastProperties V8_FINAL : public HUnaryOperation { // This instruction is not marked as kChangesMaps, but does // change the map of the input operand. 
Use it only when creating // object literals via a runtime call. - ASSERT(value->IsCallRuntime()); + DCHECK(value->IsCallRuntime()); #ifdef DEBUG const Runtime::Function* function = HCallRuntime::cast(value)->function(); - ASSERT(function->function_id == Runtime::kHiddenCreateObjectLiteral); + DCHECK(function->function_id == Runtime::kCreateObjectLiteral); #endif } @@ -7451,7 +7600,7 @@ class HSeqStringGetChar V8_FINAL : public HTemplateInstruction<2> { if (encoding() == String::ONE_BYTE_ENCODING) { return new(zone) Range(0, String::kMaxOneByteCharCode); } else { - ASSERT_EQ(String::TWO_BYTE_ENCODING, encoding()); + DCHECK_EQ(String::TWO_BYTE_ENCODING, encoding()); return new(zone) Range(0, String::kMaxUtf16CodeUnit); } } @@ -7518,14 +7667,15 @@ class HCheckMapValue V8_FINAL : public HTemplateInstruction<2> { return Representation::Tagged(); } - virtual void PrintDataTo(StringStream* stream) V8_OVERRIDE; + virtual OStream& PrintDataTo(OStream& os) const V8_OVERRIDE; // NOLINT virtual HType CalculateInferredType() V8_OVERRIDE { - return HType::Tagged(); + if (value()->type().IsHeapObject()) return value()->type(); + return HType::HeapObject(); } - HValue* value() { return OperandAt(0); } - HValue* map() { return OperandAt(1); } + HValue* value() const { return OperandAt(0); } + HValue* map() const { return OperandAt(1); } virtual HValue* Canonicalize() V8_OVERRIDE; @@ -7539,8 +7689,8 @@ class HCheckMapValue V8_FINAL : public HTemplateInstruction<2> { } private: - HCheckMapValue(HValue* value, - HValue* map) { + HCheckMapValue(HValue* value, HValue* map) + : HTemplateInstruction<2>(HType::HeapObject()) { SetOperandAt(0, value); SetOperandAt(1, map); set_representation(Representation::Tagged()); @@ -7559,10 +7709,10 @@ class HForInPrepareMap V8_FINAL : public HTemplateInstruction<2> { return Representation::Tagged(); } - HValue* context() { return OperandAt(0); } - HValue* enumerable() { return OperandAt(1); } + HValue* context() const { return OperandAt(0); } + HValue* enumerable() const { return OperandAt(1); } - virtual void PrintDataTo(StringStream* stream) V8_OVERRIDE; + virtual OStream& PrintDataTo(OStream& os) const V8_OVERRIDE; // NOLINT virtual HType CalculateInferredType() V8_OVERRIDE { return HType::Tagged(); @@ -7589,9 +7739,9 @@ class HForInCacheArray V8_FINAL : public HTemplateInstruction<2> { return Representation::Tagged(); } - HValue* enumerable() { return OperandAt(0); } - HValue* map() { return OperandAt(1); } - int idx() { return idx_; } + HValue* enumerable() const { return OperandAt(0); } + HValue* map() const { return OperandAt(1); } + int idx() const { return idx_; } HForInCacheArray* index_cache() { return index_cache_; @@ -7601,7 +7751,7 @@ class HForInCacheArray V8_FINAL : public HTemplateInstruction<2> { index_cache_ = index_cache; } - virtual void PrintDataTo(StringStream* stream) V8_OVERRIDE; + virtual OStream& PrintDataTo(OStream& os) const V8_OVERRIDE; // NOLINT virtual HType CalculateInferredType() V8_OVERRIDE { return HType::Tagged(); @@ -7625,6 +7775,8 @@ class HForInCacheArray V8_FINAL : public HTemplateInstruction<2> { class HLoadFieldByIndex V8_FINAL : public HTemplateInstruction<2> { public: + DECLARE_INSTRUCTION_FACTORY_P2(HLoadFieldByIndex, HValue*, HValue*); + HLoadFieldByIndex(HValue* object, HValue* index) { SetOperandAt(0, object); @@ -7634,13 +7786,17 @@ class HLoadFieldByIndex V8_FINAL : public HTemplateInstruction<2> { } virtual Representation RequiredInputRepresentation(int index) V8_OVERRIDE { - return Representation::Tagged(); + if (index == 
1) { + return Representation::Smi(); + } else { + return Representation::Tagged(); + } } - HValue* object() { return OperandAt(0); } - HValue* index() { return OperandAt(1); } + HValue* object() const { return OperandAt(0); } + HValue* index() const { return OperandAt(1); } - virtual void PrintDataTo(StringStream* stream) V8_OVERRIDE; + virtual OStream& PrintDataTo(OStream& os) const V8_OVERRIDE; // NOLINT virtual HType CalculateInferredType() V8_OVERRIDE { return HType::Tagged(); @@ -7653,6 +7809,57 @@ class HLoadFieldByIndex V8_FINAL : public HTemplateInstruction<2> { }; +class HStoreFrameContext: public HUnaryOperation { + public: + DECLARE_INSTRUCTION_FACTORY_P1(HStoreFrameContext, HValue*); + + HValue* context() { return OperandAt(0); } + + virtual Representation RequiredInputRepresentation(int index) { + return Representation::Tagged(); + } + + DECLARE_CONCRETE_INSTRUCTION(StoreFrameContext) + private: + explicit HStoreFrameContext(HValue* context) + : HUnaryOperation(context) { + set_representation(Representation::Tagged()); + SetChangesFlag(kContextSlots); + } +}; + + +class HAllocateBlockContext: public HTemplateInstruction<2> { + public: + DECLARE_INSTRUCTION_FACTORY_P3(HAllocateBlockContext, HValue*, + HValue*, Handle<ScopeInfo>); + HValue* context() const { return OperandAt(0); } + HValue* function() const { return OperandAt(1); } + Handle<ScopeInfo> scope_info() const { return scope_info_; } + + virtual Representation RequiredInputRepresentation(int index) { + return Representation::Tagged(); + } + + virtual OStream& PrintDataTo(OStream& os) const; // NOLINT + + DECLARE_CONCRETE_INSTRUCTION(AllocateBlockContext) + + private: + HAllocateBlockContext(HValue* context, + HValue* function, + Handle<ScopeInfo> scope_info) + : scope_info_(scope_info) { + SetOperandAt(0, context); + SetOperandAt(1, function); + set_representation(Representation::Tagged()); + } + + Handle<ScopeInfo> scope_info_; +}; + + + #undef DECLARE_INSTRUCTION #undef DECLARE_CONCRETE_INSTRUCTION diff --git a/deps/v8/src/hydrogen-load-elimination.cc b/deps/v8/src/hydrogen-load-elimination.cc index 1198d2b7ab5..c5fd88b396b 100644 --- a/deps/v8/src/hydrogen-load-elimination.cc +++ b/deps/v8/src/hydrogen-load-elimination.cc @@ -2,10 +2,10 @@ // Use of this source code is governed by a BSD-style license that can be // found in the LICENSE file. -#include "hydrogen-alias-analysis.h" -#include "hydrogen-load-elimination.h" -#include "hydrogen-instructions.h" -#include "hydrogen-flow-engine.h" +#include "src/hydrogen-alias-analysis.h" +#include "src/hydrogen-flow-engine.h" +#include "src/hydrogen-instructions.h" +#include "src/hydrogen-load-elimination.h" namespace v8 { namespace internal { @@ -25,11 +25,10 @@ class HFieldApproximation : public ZoneObject { // Recursively copy the entire linked list of field approximations. HFieldApproximation* Copy(Zone* zone) { - if (this == NULL) return NULL; HFieldApproximation* copy = new(zone) HFieldApproximation(); copy->object_ = this->object_; copy->last_value_ = this->last_value_; - copy->next_ = this->next_->Copy(zone); + copy->next_ = this->next_ == NULL ? 
NULL : this->next_->Copy(zone); return copy; } }; @@ -123,7 +122,7 @@ class HLoadEliminationTable : public ZoneObject { HLoadEliminationTable* pred_state, HBasicBlock* pred_block, Zone* zone) { - ASSERT(pred_state != NULL); + DCHECK(pred_state != NULL); if (succ_state == NULL) { return pred_state->Copy(succ_block, pred_block, zone); } else { @@ -136,7 +135,7 @@ class HLoadEliminationTable : public ZoneObject { static HLoadEliminationTable* Finish(HLoadEliminationTable* state, HBasicBlock* block, Zone* zone) { - ASSERT(state != NULL); + DCHECK(state != NULL); return state; } @@ -148,7 +147,7 @@ class HLoadEliminationTable : public ZoneObject { new(zone) HLoadEliminationTable(zone, aliasing_); copy->EnsureFields(fields_.length()); for (int i = 0; i < fields_.length(); i++) { - copy->fields_[i] = fields_[i]->Copy(zone); + copy->fields_[i] = fields_[i] == NULL ? NULL : fields_[i]->Copy(zone); } if (FLAG_trace_load_elimination) { TRACE((" copy-to B%d\n", succ->block_id())); @@ -201,7 +200,7 @@ class HLoadEliminationTable : public ZoneObject { // which the load should be replaced. Otherwise, return {instr}. HValue* load(HLoadNamedField* instr) { // There must be no loads from non observable in-object properties. - ASSERT(!instr->access().IsInobject() || + DCHECK(!instr->access().IsInobject() || instr->access().existing_inobject_property()); int field = FieldOf(instr->access()); @@ -383,7 +382,7 @@ class HLoadEliminationTable : public ZoneObject { // farthest away from the current instruction. HFieldApproximation* ReuseLastApproximation(int field) { HFieldApproximation* approx = fields_[field]; - ASSERT(approx != NULL); + DCHECK(approx != NULL); HFieldApproximation* prev = NULL; while (approx->next_ != NULL) { diff --git a/deps/v8/src/hydrogen-load-elimination.h b/deps/v8/src/hydrogen-load-elimination.h index 98cd03d5833..e6b432c6aca 100644 --- a/deps/v8/src/hydrogen-load-elimination.h +++ b/deps/v8/src/hydrogen-load-elimination.h @@ -5,7 +5,7 @@ #ifndef V8_HYDROGEN_LOAD_ELIMINATION_H_ #define V8_HYDROGEN_LOAD_ELIMINATION_H_ -#include "hydrogen.h" +#include "src/hydrogen.h" namespace v8 { namespace internal { diff --git a/deps/v8/src/hydrogen-mark-deoptimize.cc b/deps/v8/src/hydrogen-mark-deoptimize.cc index 338e239e206..47642e45cdd 100644 --- a/deps/v8/src/hydrogen-mark-deoptimize.cc +++ b/deps/v8/src/hydrogen-mark-deoptimize.cc @@ -2,7 +2,7 @@ // Use of this source code is governed by a BSD-style license that can be // found in the LICENSE file. 
-#include "hydrogen-mark-deoptimize.h" +#include "src/hydrogen-mark-deoptimize.h" namespace v8 { namespace internal { @@ -20,8 +20,8 @@ void HMarkDeoptimizeOnUndefinedPhase::Run() { void HMarkDeoptimizeOnUndefinedPhase::ProcessPhi(HPhi* phi) { - ASSERT(phi->CheckFlag(HValue::kAllowUndefinedAsNaN)); - ASSERT(worklist_.is_empty()); + DCHECK(phi->CheckFlag(HValue::kAllowUndefinedAsNaN)); + DCHECK(worklist_.is_empty()); // Push the phi onto the worklist phi->ClearFlag(HValue::kAllowUndefinedAsNaN); diff --git a/deps/v8/src/hydrogen-mark-deoptimize.h b/deps/v8/src/hydrogen-mark-deoptimize.h index 7b302fcc260..52a6ef96c9e 100644 --- a/deps/v8/src/hydrogen-mark-deoptimize.h +++ b/deps/v8/src/hydrogen-mark-deoptimize.h @@ -5,7 +5,7 @@ #ifndef V8_HYDROGEN_MARK_DEOPTIMIZE_H_ #define V8_HYDROGEN_MARK_DEOPTIMIZE_H_ -#include "hydrogen.h" +#include "src/hydrogen.h" namespace v8 { namespace internal { diff --git a/deps/v8/src/hydrogen-mark-unreachable.cc b/deps/v8/src/hydrogen-mark-unreachable.cc index c80de639e2f..05779ca524b 100644 --- a/deps/v8/src/hydrogen-mark-unreachable.cc +++ b/deps/v8/src/hydrogen-mark-unreachable.cc @@ -2,7 +2,7 @@ // Use of this source code is governed by a BSD-style license that can be // found in the LICENSE file. -#include "hydrogen-mark-unreachable.h" +#include "src/hydrogen-mark-unreachable.h" namespace v8 { namespace internal { diff --git a/deps/v8/src/hydrogen-mark-unreachable.h b/deps/v8/src/hydrogen-mark-unreachable.h index a406c5cff1e..d43d22bbba0 100644 --- a/deps/v8/src/hydrogen-mark-unreachable.h +++ b/deps/v8/src/hydrogen-mark-unreachable.h @@ -5,7 +5,7 @@ #ifndef V8_HYDROGEN_MARK_UNREACHABLE_H_ #define V8_HYDROGEN_MARK_UNREACHABLE_H_ -#include "hydrogen.h" +#include "src/hydrogen.h" namespace v8 { namespace internal { diff --git a/deps/v8/src/hydrogen-osr.cc b/deps/v8/src/hydrogen-osr.cc index ff46ea7ebca..89c28acdab9 100644 --- a/deps/v8/src/hydrogen-osr.cc +++ b/deps/v8/src/hydrogen-osr.cc @@ -2,8 +2,8 @@ // Use of this source code is governed by a BSD-style license that can be // found in the LICENSE file. -#include "hydrogen.h" -#include "hydrogen-osr.h" +#include "src/hydrogen.h" +#include "src/hydrogen-osr.h" namespace v8 { namespace internal { @@ -15,13 +15,13 @@ bool HOsrBuilder::HasOsrEntryAt(IterationStatement* statement) { HBasicBlock* HOsrBuilder::BuildOsrLoopEntry(IterationStatement* statement) { - ASSERT(HasOsrEntryAt(statement)); + DCHECK(HasOsrEntryAt(statement)); Zone* zone = builder_->zone(); HGraph* graph = builder_->graph(); // only one OSR point per compile is allowed. - ASSERT(graph->osr() == NULL); + DCHECK(graph->osr() == NULL); // remember this builder as the one OSR builder in the graph. graph->set_osr(this); diff --git a/deps/v8/src/hydrogen-osr.h b/deps/v8/src/hydrogen-osr.h index 00bb2d4e196..433548c1a8e 100644 --- a/deps/v8/src/hydrogen-osr.h +++ b/deps/v8/src/hydrogen-osr.h @@ -5,9 +5,9 @@ #ifndef V8_HYDROGEN_OSR_H_ #define V8_HYDROGEN_OSR_H_ -#include "hydrogen.h" -#include "ast.h" -#include "zone.h" +#include "src/hydrogen.h" +#include "src/ast.h" +#include "src/zone.h" namespace v8 { namespace internal { diff --git a/deps/v8/src/hydrogen-range-analysis.cc b/deps/v8/src/hydrogen-range-analysis.cc index 609dd88fa15..f5c5a9f1be7 100644 --- a/deps/v8/src/hydrogen-range-analysis.cc +++ b/deps/v8/src/hydrogen-range-analysis.cc @@ -2,7 +2,7 @@ // Use of this source code is governed by a BSD-style license that can be // found in the LICENSE file. 
-#include "hydrogen-range-analysis.h" +#include "src/hydrogen-range-analysis.h" namespace v8 { namespace internal { @@ -26,7 +26,7 @@ void HRangeAnalysisPhase::TraceRange(const char* msg, ...) { if (FLAG_trace_range) { va_list arguments; va_start(arguments, msg); - OS::VPrint(msg, arguments); + base::OS::VPrint(msg, arguments); va_end(arguments); } } @@ -64,9 +64,9 @@ void HRangeAnalysisPhase::Run() { // Propagate flags for negative zero checks upwards from conversions // int32-to-tagged and int32-to-double. Representation from = instr->value()->representation(); - ASSERT(from.Equals(instr->from())); + DCHECK(from.Equals(instr->from())); if (from.IsSmiOrInteger32()) { - ASSERT(instr->to().IsTagged() || + DCHECK(instr->to().IsTagged() || instr->to().IsDouble() || instr->to().IsSmiOrInteger32()); PropagateMinusZeroChecks(instr->value()); @@ -121,7 +121,7 @@ void HRangeAnalysisPhase::PoisonRanges() { void HRangeAnalysisPhase::InferControlFlowRange(HCompareNumericAndBranch* test, HBasicBlock* dest) { - ASSERT((test->FirstSuccessor() == dest) == (test->SecondSuccessor() != dest)); + DCHECK((test->FirstSuccessor() == dest) == (test->SecondSuccessor() != dest)); if (test->representation().IsSmiOrInteger32()) { Token::Value op = test->token(); if (test->SecondSuccessor() == dest) { @@ -170,7 +170,7 @@ void HRangeAnalysisPhase::UpdateControlFlowRange(Token::Value op, void HRangeAnalysisPhase::InferRange(HValue* value) { - ASSERT(!value->HasRange()); + DCHECK(!value->HasRange()); if (!value->representation().IsNone()) { value->ComputeInitialRange(graph()->zone()); Range* range = value->range(); @@ -184,7 +184,7 @@ void HRangeAnalysisPhase::InferRange(HValue* value) { void HRangeAnalysisPhase::RollBackTo(int index) { - ASSERT(index <= changed_ranges_.length()); + DCHECK(index <= changed_ranges_.length()); for (int i = index; i < changed_ranges_.length(); ++i) { changed_ranges_[i]->RemoveLastAddedRange(); } @@ -213,8 +213,8 @@ void HRangeAnalysisPhase::AddRange(HValue* value, Range* range) { void HRangeAnalysisPhase::PropagateMinusZeroChecks(HValue* value) { - ASSERT(worklist_.is_empty()); - ASSERT(in_worklist_.IsEmpty()); + DCHECK(worklist_.is_empty()); + DCHECK(in_worklist_.IsEmpty()); AddToWorklist(value); while (!worklist_.is_empty()) { @@ -282,8 +282,8 @@ void HRangeAnalysisPhase::PropagateMinusZeroChecks(HValue* value) { } in_worklist_.Clear(); - ASSERT(in_worklist_.IsEmpty()); - ASSERT(worklist_.is_empty()); + DCHECK(in_worklist_.IsEmpty()); + DCHECK(worklist_.is_empty()); } diff --git a/deps/v8/src/hydrogen-range-analysis.h b/deps/v8/src/hydrogen-range-analysis.h index 2ed2ecb22e0..1269ec7529c 100644 --- a/deps/v8/src/hydrogen-range-analysis.h +++ b/deps/v8/src/hydrogen-range-analysis.h @@ -5,7 +5,7 @@ #ifndef V8_HYDROGEN_RANGE_ANALYSIS_H_ #define V8_HYDROGEN_RANGE_ANALYSIS_H_ -#include "hydrogen.h" +#include "src/hydrogen.h" namespace v8 { namespace internal { diff --git a/deps/v8/src/hydrogen-redundant-phi.cc b/deps/v8/src/hydrogen-redundant-phi.cc index 5757cfb0a3c..0b9b0aaf1dc 100644 --- a/deps/v8/src/hydrogen-redundant-phi.cc +++ b/deps/v8/src/hydrogen-redundant-phi.cc @@ -2,7 +2,7 @@ // Use of this source code is governed by a BSD-style license that can be // found in the LICENSE file. -#include "hydrogen-redundant-phi.h" +#include "src/hydrogen-redundant-phi.h" namespace v8 { namespace internal { @@ -25,7 +25,7 @@ void HRedundantPhiEliminationPhase::Run() { // Make sure that we *really* removed all redundant phis. 
for (int i = 0; i < blocks->length(); ++i) { for (int j = 0; j < blocks->at(i)->phis()->length(); j++) { - ASSERT(blocks->at(i)->phis()->at(j)->GetRedundantReplacement() == NULL); + DCHECK(blocks->at(i)->phis()->at(j)->GetRedundantReplacement() == NULL); } } #endif diff --git a/deps/v8/src/hydrogen-redundant-phi.h b/deps/v8/src/hydrogen-redundant-phi.h index 9e1092b398d..7f5ec4e52dd 100644 --- a/deps/v8/src/hydrogen-redundant-phi.h +++ b/deps/v8/src/hydrogen-redundant-phi.h @@ -5,7 +5,7 @@ #ifndef V8_HYDROGEN_REDUNDANT_PHI_H_ #define V8_HYDROGEN_REDUNDANT_PHI_H_ -#include "hydrogen.h" +#include "src/hydrogen.h" namespace v8 { namespace internal { diff --git a/deps/v8/src/hydrogen-removable-simulates.cc b/deps/v8/src/hydrogen-removable-simulates.cc index 7bbe3cbf284..a28021deb88 100644 --- a/deps/v8/src/hydrogen-removable-simulates.cc +++ b/deps/v8/src/hydrogen-removable-simulates.cc @@ -2,84 +2,179 @@ // Use of this source code is governed by a BSD-style license that can be // found in the LICENSE file. -#include "hydrogen-removable-simulates.h" +#include "src/hydrogen-flow-engine.h" +#include "src/hydrogen-instructions.h" +#include "src/hydrogen-removable-simulates.h" namespace v8 { namespace internal { -void HMergeRemovableSimulatesPhase::Run() { - ZoneList<HSimulate*> mergelist(2, zone()); - for (int i = 0; i < graph()->blocks()->length(); ++i) { - HBasicBlock* block = graph()->blocks()->at(i); - // Make sure the merge list is empty at the start of a block. - ASSERT(mergelist.is_empty()); - // Nasty heuristic: Never remove the first simulate in a block. This - // just so happens to have a beneficial effect on register allocation. - bool first = true; - for (HInstructionIterator it(block); !it.Done(); it.Advance()) { - HInstruction* current = it.Current(); - if (current->IsEnterInlined()) { - // Ensure there's a non-foldable HSimulate before an HEnterInlined to - // avoid folding across HEnterInlined. - ASSERT(!HSimulate::cast(current->previous())-> - is_candidate_for_removal()); - } - if (current->IsLeaveInlined()) { - // Never fold simulates from inlined environments into simulates in the - // outer environment. Simply remove all accumulated simulates without - // merging. This is safe because simulates after instructions with side - // effects are never added to the merge list. - while (!mergelist.is_empty()) { - mergelist.RemoveLast()->DeleteAndReplaceWith(NULL); - } - continue; - } - if (current->IsReturn()) { - // Drop mergeable simulates in the list. This is safe because - // simulates after instructions with side effects are never added - // to the merge list. - while (!mergelist.is_empty()) { - mergelist.RemoveLast()->DeleteAndReplaceWith(NULL); - } - continue; - } - // Skip the non-simulates and the first simulate. 
- if (!current->IsSimulate()) continue; - if (first) { - first = false; - continue; - } - HSimulate* current_simulate = HSimulate::cast(current); - if (!current_simulate->is_candidate_for_removal()) { - current_simulate->MergeWith(&mergelist); - } else if (current_simulate->ast_id().IsNone()) { - ASSERT(current_simulate->next()->IsEnterInlined()); - if (!mergelist.is_empty()) { - HSimulate* last = mergelist.RemoveLast(); - last->MergeWith(&mergelist); - } - } else if (current_simulate->previous()->HasObservableSideEffects()) { - while (current_simulate->next()->IsSimulate()) { - it.Advance(); - HSimulate* next_simulate = HSimulate::cast(it.Current()); - if (next_simulate->ast_id().IsNone()) break; - mergelist.Add(current_simulate, zone()); - current_simulate = next_simulate; - if (!current_simulate->is_candidate_for_removal()) break; +class State : public ZoneObject { + public: + explicit State(Zone* zone) + : zone_(zone), mergelist_(2, zone), first_(true), mode_(NORMAL) { } + + State* Process(HInstruction* instr, Zone* zone) { + if (FLAG_trace_removable_simulates) { + PrintF("[%s with state %p in B%d: #%d %s]\n", + mode_ == NORMAL ? "processing" : "collecting", + reinterpret_cast<void*>(this), instr->block()->block_id(), + instr->id(), instr->Mnemonic()); + } + // Forward-merge "trains" of simulates after an instruction with observable + // side effects to keep live ranges short. + if (mode_ == COLLECT_CONSECUTIVE_SIMULATES) { + if (instr->IsSimulate()) { + HSimulate* current_simulate = HSimulate::cast(instr); + if (current_simulate->is_candidate_for_removal() && + !current_simulate->ast_id().IsNone()) { + Remember(current_simulate); + return this; } - current_simulate->MergeWith(&mergelist); - } else { - // Accumulate this simulate for folding later on. - mergelist.Add(current_simulate, zone()); } + FlushSimulates(); + mode_ = NORMAL; } - - if (!mergelist.is_empty()) { + // Ensure there's a non-foldable HSimulate before an HEnterInlined to avoid + // folding across HEnterInlined. + DCHECK(!(instr->IsEnterInlined() && + HSimulate::cast(instr->previous())->is_candidate_for_removal())); + if (instr->IsLeaveInlined() || instr->IsReturn()) { + // Never fold simulates from inlined environments into simulates in the + // outer environment. Simply remove all accumulated simulates without + // merging. This is safe because simulates after instructions with side + // effects are never added to the merge list. The same reasoning holds for + // return instructions. + RemoveSimulates(); + return this; + } + if (instr->IsControlInstruction()) { // Merge the accumulated simulates at the end of the block. - HSimulate* last = mergelist.RemoveLast(); - last->MergeWith(&mergelist); + FlushSimulates(); + return this; + } + // Skip the non-simulates and the first simulate. 
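// (first_ is re-armed in State::Finish below, so "first simulate" means
// first in each basic block, preserving the old phase's register-allocation
// heuristic.)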
+ if (!instr->IsSimulate()) return this; + if (first_) { + first_ = false; + return this; + } + HSimulate* current_simulate = HSimulate::cast(instr); + if (!current_simulate->is_candidate_for_removal()) { + Remember(current_simulate); + FlushSimulates(); + } else if (current_simulate->ast_id().IsNone()) { + DCHECK(current_simulate->next()->IsEnterInlined()); + FlushSimulates(); + } else if (current_simulate->previous()->HasObservableSideEffects()) { + Remember(current_simulate); + mode_ = COLLECT_CONSECUTIVE_SIMULATES; + } else { + Remember(current_simulate); + } + + return this; + } + + static State* Merge(State* succ_state, + HBasicBlock* succ_block, + State* pred_state, + HBasicBlock* pred_block, + Zone* zone) { + return (succ_state == NULL) + ? pred_state->Copy(succ_block, pred_block, zone) + : succ_state->Merge(succ_block, pred_state, pred_block, zone); + } + + static State* Finish(State* state, HBasicBlock* block, Zone* zone) { + if (FLAG_trace_removable_simulates) { + PrintF("[preparing state %p for B%d]\n", reinterpret_cast<void*>(state), + block->block_id()); } + // For our current local analysis, we should not remember simulates across + // block boundaries. + DCHECK(!state->HasRememberedSimulates()); + // Nasty heuristic: Never remove the first simulate in a block. This + // just so happens to have a beneficial effect on register allocation. + state->first_ = true; + return state; } + + private: + explicit State(const State& other) + : zone_(other.zone_), + mergelist_(other.mergelist_, other.zone_), + first_(other.first_), + mode_(other.mode_) { } + + enum Mode { NORMAL, COLLECT_CONSECUTIVE_SIMULATES }; + + bool HasRememberedSimulates() const { return !mergelist_.is_empty(); } + + void Remember(HSimulate* sim) { + mergelist_.Add(sim, zone_); + } + + void FlushSimulates() { + if (HasRememberedSimulates()) { + mergelist_.RemoveLast()->MergeWith(&mergelist_); + } + } + + void RemoveSimulates() { + while (HasRememberedSimulates()) { + mergelist_.RemoveLast()->DeleteAndReplaceWith(NULL); + } + } + + State* Copy(HBasicBlock* succ_block, HBasicBlock* pred_block, Zone* zone) { + State* copy = new(zone) State(*this); + if (FLAG_trace_removable_simulates) { + PrintF("[copy state %p from B%d to new state %p for B%d]\n", + reinterpret_cast<void*>(this), pred_block->block_id(), + reinterpret_cast<void*>(copy), succ_block->block_id()); + } + return copy; + } + + State* Merge(HBasicBlock* succ_block, + State* pred_state, + HBasicBlock* pred_block, + Zone* zone) { + // For our current local analysis, we should not remember simulates across + // block boundaries. + DCHECK(!pred_state->HasRememberedSimulates()); + DCHECK(!HasRememberedSimulates()); + if (FLAG_trace_removable_simulates) { + PrintF("[merge state %p from B%d into %p for B%d]\n", + reinterpret_cast<void*>(pred_state), pred_block->block_id(), + reinterpret_cast<void*>(this), succ_block->block_id()); + } + return this; + } + + Zone* zone_; + ZoneList<HSimulate*> mergelist_; + bool first_; + Mode mode_; +}; + + +// We don't use effects here. 
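// (HFlowEngine is parameterized over a State and an Effects type; a stub
// whose Disabled() returns true is assumed here to let the engine bypass
// effect accumulation, keeping the analysis purely block-local.)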
+class Effects : public ZoneObject { + public: + explicit Effects(Zone* zone) { } + bool Disabled() { return true; } + void Process(HInstruction* instr, Zone* zone) { } + void Apply(State* state) { } + void Union(Effects* that, Zone* zone) { } +}; + + +void HMergeRemovableSimulatesPhase::Run() { + HFlowEngine<State, Effects> engine(graph(), zone()); + State* state = new(zone()) State(zone()); + engine.AnalyzeDominatedBlocks(graph()->blocks()->at(0), state); } } } // namespace v8::internal diff --git a/deps/v8/src/hydrogen-removable-simulates.h b/deps/v8/src/hydrogen-removable-simulates.h index aec5fce1d22..9bd25056bdf 100644 --- a/deps/v8/src/hydrogen-removable-simulates.h +++ b/deps/v8/src/hydrogen-removable-simulates.h @@ -5,7 +5,7 @@ #ifndef V8_HYDROGEN_REMOVABLE_SIMULATES_H_ #define V8_HYDROGEN_REMOVABLE_SIMULATES_H_ -#include "hydrogen.h" +#include "src/hydrogen.h" namespace v8 { namespace internal { diff --git a/deps/v8/src/hydrogen-representation-changes.cc b/deps/v8/src/hydrogen-representation-changes.cc index 15f1a6e444d..ebb03b503ae 100644 --- a/deps/v8/src/hydrogen-representation-changes.cc +++ b/deps/v8/src/hydrogen-representation-changes.cc @@ -2,7 +2,7 @@ // Use of this source code is governed by a BSD-style license that can be // found in the LICENSE file. -#include "hydrogen-representation-changes.h" +#include "src/hydrogen-representation-changes.h" namespace v8 { namespace internal { @@ -41,7 +41,7 @@ void HRepresentationChangesPhase::InsertRepresentationChangeForUse( if (!use_value->operand_position(use_index).IsUnknown()) { new_value->set_position(use_value->operand_position(use_index)); } else { - ASSERT(!FLAG_hydrogen_track_positions || + DCHECK(!FLAG_hydrogen_track_positions || !graph()->info()->IsOptimizing()); } } @@ -51,6 +51,15 @@ void HRepresentationChangesPhase::InsertRepresentationChangeForUse( } +static bool IsNonDeoptingIntToSmiChange(HChange* change) { + Representation from_rep = change->from(); + Representation to_rep = change->to(); + // Flags indicating Uint32 operations are set in a later Hydrogen phase. + DCHECK(!change->CheckFlag(HValue::kUint32)); + return from_rep.IsInteger32() && to_rep.IsSmi() && SmiValuesAre32Bits(); +} + + void HRepresentationChangesPhase::InsertRepresentationChangesForValue( HValue* value) { Representation r = value->representation(); @@ -65,17 +74,33 @@ void HRepresentationChangesPhase::InsertRepresentationChangesForValue( int use_index = it.index(); Representation req = use_value->RequiredInputRepresentation(use_index); if (req.IsNone() || req.Equals(r)) continue; + + // If this is an HForceRepresentation instruction, and an HChange has been + // inserted above it, examine the input representation of the HChange. If + // that's int32, and this HForceRepresentation use is int32, and int32 to + // smi changes can't cause deoptimisation, set the input of the use to the + // input of the HChange. 
+ if (value->IsForceRepresentation()) { + HValue* input = HForceRepresentation::cast(value)->value(); + if (input->IsChange()) { + HChange* change = HChange::cast(input); + if (change->from().Equals(req) && IsNonDeoptingIntToSmiChange(change)) { + use_value->SetOperandAt(use_index, change->value()); + continue; + } + } + } InsertRepresentationChangeForUse(value, use_value, use_index, req); } if (value->HasNoUses()) { - ASSERT(value->IsConstant()); + DCHECK(value->IsConstant() || value->IsForceRepresentation()); value->DeleteAndReplaceWith(NULL); - } - - // The only purpose of a HForceRepresentation is to represent the value - // after the (possible) HChange instruction. We make it disappear. - if (value->IsForceRepresentation()) { - value->DeleteAndReplaceWith(HForceRepresentation::cast(value)->value()); + } else { + // The only purpose of a HForceRepresentation is to represent the value + // after the (possible) HChange instruction. We make it disappear. + if (value->IsForceRepresentation()) { + value->DeleteAndReplaceWith(HForceRepresentation::cast(value)->value()); + } } } diff --git a/deps/v8/src/hydrogen-representation-changes.h b/deps/v8/src/hydrogen-representation-changes.h index ff57e192922..2f5958a70f3 100644 --- a/deps/v8/src/hydrogen-representation-changes.h +++ b/deps/v8/src/hydrogen-representation-changes.h @@ -5,7 +5,7 @@ #ifndef V8_HYDROGEN_REPRESENTATION_CHANGES_H_ #define V8_HYDROGEN_REPRESENTATION_CHANGES_H_ -#include "hydrogen.h" +#include "src/hydrogen.h" namespace v8 { namespace internal { diff --git a/deps/v8/src/hydrogen-sce.cc b/deps/v8/src/hydrogen-sce.cc index dfb3a7e3ac8..b7ab9fd7db6 100644 --- a/deps/v8/src/hydrogen-sce.cc +++ b/deps/v8/src/hydrogen-sce.cc @@ -2,8 +2,8 @@ // Use of this source code is governed by a BSD-style license that can be // found in the LICENSE file. -#include "hydrogen-sce.h" -#include "v8.h" +#include "src/hydrogen-sce.h" +#include "src/v8.h" namespace v8 { namespace internal { diff --git a/deps/v8/src/hydrogen-sce.h b/deps/v8/src/hydrogen-sce.h index 48ac8e3319f..276d3486764 100644 --- a/deps/v8/src/hydrogen-sce.h +++ b/deps/v8/src/hydrogen-sce.h @@ -5,7 +5,7 @@ #ifndef V8_HYDROGEN_SCE_H_ #define V8_HYDROGEN_SCE_H_ -#include "hydrogen.h" +#include "src/hydrogen.h" namespace v8 { namespace internal { diff --git a/deps/v8/src/hydrogen-store-elimination.cc b/deps/v8/src/hydrogen-store-elimination.cc index cf5f3a15e69..ee718e6407e 100644 --- a/deps/v8/src/hydrogen-store-elimination.cc +++ b/deps/v8/src/hydrogen-store-elimination.cc @@ -2,8 +2,8 @@ // Use of this source code is governed by a BSD-style license that can be // found in the LICENSE file. -#include "hydrogen-store-elimination.h" -#include "hydrogen-instructions.h" +#include "src/hydrogen-instructions.h" +#include "src/hydrogen-store-elimination.h" namespace v8 { namespace internal { @@ -30,8 +30,10 @@ void HStoreEliminationPhase::Run() { for (int i = 0; i < graph()->blocks()->length(); i++) { unobserved_.Rewind(0); HBasicBlock* block = graph()->blocks()->at(i); + if (!block->IsReachable()) continue; for (HInstructionIterator it(block); !it.Done(); it.Advance()) { HInstruction* instr = it.Current(); + if (instr->CheckFlag(HValue::kIsDead)) continue; // TODO(titzer): eliminate unobserved HStoreKeyed instructions too. 
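For orientation before the opcode dispatch below: the phase keeps unobserved_, the list of HStoreNamedField instructions whose stored values nothing has read since they executed. A later store that must alias the same field makes the earlier one dead, while any instruction that can deoptimize, promote to new space, or carry conflicting GVN dependencies flushes the whole list (see the ProcessInstr hunk further down). A reduced standalone sketch of that bookkeeping, using simplified types in place of the Hydrogen classes:

    #include <cstdio>
    #include <string>
    #include <vector>

    struct Store { int object; std::string field; int id; };

    // Stores whose values no later instruction has observed yet
    // (the analogue of unobserved_ in HStoreEliminationPhase).
    static std::vector<Store> unobserved;

    void ProcessStore(const Store& store) {
      for (auto it = unobserved.begin(); it != unobserved.end();) {
        if (it->object == store.object && it->field == store.field) {
          std::printf("++ Unobserved store S%d overwritten by S%d\n",
                      it->id, store.id);
          it = unobserved.erase(it);  // the earlier store is dead
        } else {
          ++it;
        }
      }
      unobserved.push_back(store);
    }

    // Deopt points, allocation, and aliasing reads observe everything.
    void ProcessBarrier() { unobserved.clear(); }

    int main() {
      ProcessStore({1, "x", 1});
      ProcessStore({1, "x", 2});  // S1 was never observed: eliminated
      ProcessBarrier();           // S2 is now observable and must stay
    }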
switch (instr->opcode()) { @@ -58,7 +60,7 @@ void HStoreEliminationPhase::ProcessStore(HStoreNamedField* store) { while (i < unobserved_.length()) { HStoreNamedField* prev = unobserved_.at(i); if (aliasing_->MustAlias(object, prev->object()->ActualValue()) && - store->access().Equals(prev->access())) { + prev->CanBeReplacedWith(store)) { // This store is guaranteed to overwrite the previous store. prev->DeleteAndReplaceWith(NULL); TRACE(("++ Unobserved store S%d overwritten by S%d\n", @@ -97,17 +99,20 @@ void HStoreEliminationPhase::ProcessInstr(HInstruction* instr, GVNFlagSet flags) { if (unobserved_.length() == 0) return; // Nothing to do. if (instr->CanDeoptimize()) { - TRACE(("-- Observed stores at I%d (might deoptimize)\n", instr->id())); + TRACE(("-- Observed stores at I%d (%s might deoptimize)\n", + instr->id(), instr->Mnemonic())); unobserved_.Rewind(0); return; } if (instr->CheckChangesFlag(kNewSpacePromotion)) { - TRACE(("-- Observed stores at I%d (might GC)\n", instr->id())); + TRACE(("-- Observed stores at I%d (%s might GC)\n", + instr->id(), instr->Mnemonic())); unobserved_.Rewind(0); return; } if (instr->DependsOnFlags().ContainsAnyOf(flags)) { - TRACE(("-- Observed stores at I%d (GVN flags)\n", instr->id())); + TRACE(("-- Observed stores at I%d (GVN flags of %s)\n", + instr->id(), instr->Mnemonic())); unobserved_.Rewind(0); return; } diff --git a/deps/v8/src/hydrogen-store-elimination.h b/deps/v8/src/hydrogen-store-elimination.h index e697708c333..35a23a26602 100644 --- a/deps/v8/src/hydrogen-store-elimination.h +++ b/deps/v8/src/hydrogen-store-elimination.h @@ -5,8 +5,8 @@ #ifndef V8_HYDROGEN_STORE_ELIMINATION_H_ #define V8_HYDROGEN_STORE_ELIMINATION_H_ -#include "hydrogen.h" -#include "hydrogen-alias-analysis.h" +#include "src/hydrogen.h" +#include "src/hydrogen-alias-analysis.h" namespace v8 { namespace internal { diff --git a/deps/v8/src/hydrogen-types.cc b/deps/v8/src/hydrogen-types.cc new file mode 100644 index 00000000000..c83ff3cf890 --- /dev/null +++ b/deps/v8/src/hydrogen-types.cc @@ -0,0 +1,70 @@ +// Copyright 2014 the V8 project authors. All rights reserved. +// Use of this source code is governed by a BSD-style license that can be +// found in the LICENSE file. 
+ +#include "src/hydrogen-types.h" + +#include "src/ostreams.h" +#include "src/types-inl.h" + + +namespace v8 { +namespace internal { + +// static +template <class T> +HType HType::FromType(typename T::TypeHandle type) { + if (T::Any()->Is(type)) return HType::Any(); + if (type->Is(T::None())) return HType::None(); + if (type->Is(T::SignedSmall())) return HType::Smi(); + if (type->Is(T::Number())) return HType::TaggedNumber(); + if (type->Is(T::Null())) return HType::Null(); + if (type->Is(T::String())) return HType::String(); + if (type->Is(T::Boolean())) return HType::Boolean(); + if (type->Is(T::Undefined())) return HType::Undefined(); + if (type->Is(T::Array())) return HType::JSArray(); + if (type->Is(T::Object())) return HType::JSObject(); + return HType::Tagged(); +} + + +// static +template +HType HType::FromType<Type>(Type* type); + + +// static +template +HType HType::FromType<HeapType>(Handle<HeapType> type); + + +// static +HType HType::FromValue(Handle<Object> value) { + if (value->IsSmi()) return HType::Smi(); + if (value->IsNull()) return HType::Null(); + if (value->IsHeapNumber()) return HType::HeapNumber(); + if (value->IsString()) return HType::String(); + if (value->IsBoolean()) return HType::Boolean(); + if (value->IsUndefined()) return HType::Undefined(); + if (value->IsJSArray()) return HType::JSArray(); + if (value->IsJSObject()) return HType::JSObject(); + DCHECK(value->IsHeapObject()); + return HType::HeapObject(); +} + + +OStream& operator<<(OStream& os, const HType& t) { + // Note: The c1visualizer syntax for locals allows only a sequence of the + // following characters: A-Za-z0-9_-|: + switch (t.kind_) { +#define DEFINE_CASE(Name, mask) \ + case HType::k##Name: \ + return os << #Name; + HTYPE_LIST(DEFINE_CASE) +#undef DEFINE_CASE + } + UNREACHABLE(); + return os; +} + +} } // namespace v8::internal diff --git a/deps/v8/src/hydrogen-types.h b/deps/v8/src/hydrogen-types.h new file mode 100644 index 00000000000..d662a167b9f --- /dev/null +++ b/deps/v8/src/hydrogen-types.h @@ -0,0 +1,90 @@ +// Copyright 2014 the V8 project authors. All rights reserved. +// Use of this source code is governed by a BSD-style license that can be +// found in the LICENSE file. + +#ifndef HYDROGEN_TYPES_H_ +#define HYDROGEN_TYPES_H_ + +#include <climits> + +#include "src/base/macros.h" + +namespace v8 { +namespace internal { + +// Forward declarations. +template <typename T> class Handle; +class Object; +class OStream; + +#define HTYPE_LIST(V) \ + V(Any, 0x0) /* 0000 0000 0000 0000 */ \ + V(Tagged, 0x1) /* 0000 0000 0000 0001 */ \ + V(TaggedPrimitive, 0x5) /* 0000 0000 0000 0101 */ \ + V(TaggedNumber, 0xd) /* 0000 0000 0000 1101 */ \ + V(Smi, 0x1d) /* 0000 0000 0001 1101 */ \ + V(HeapObject, 0x21) /* 0000 0000 0010 0001 */ \ + V(HeapPrimitive, 0x25) /* 0000 0000 0010 0101 */ \ + V(Null, 0x27) /* 0000 0000 0010 0111 */ \ + V(HeapNumber, 0x2d) /* 0000 0000 0010 1101 */ \ + V(String, 0x65) /* 0000 0000 0110 0101 */ \ + V(Boolean, 0xa5) /* 0000 0000 1010 0101 */ \ + V(Undefined, 0x125) /* 0000 0001 0010 0101 */ \ + V(JSObject, 0x221) /* 0000 0010 0010 0001 */ \ + V(JSArray, 0x621) /* 0000 0110 0010 0001 */ \ + V(None, 0x7ff) /* 0000 0111 1111 1111 */ + +class HType V8_FINAL { + public: + #define DECLARE_CONSTRUCTOR(Name, mask) \ + static HType Name() V8_WARN_UNUSED_RESULT { return HType(k##Name); } + HTYPE_LIST(DECLARE_CONSTRUCTOR) + #undef DECLARE_CONSTRUCTOR + + // Return the weakest (least precise) common type. 
+ HType Combine(HType other) const V8_WARN_UNUSED_RESULT { + return HType(static_cast<Kind>(kind_ & other.kind_)); + } + + bool Equals(HType other) const V8_WARN_UNUSED_RESULT { + return kind_ == other.kind_; + } + + bool IsSubtypeOf(HType other) const V8_WARN_UNUSED_RESULT { + return Combine(other).Equals(other); + } + + #define DECLARE_IS_TYPE(Name, mask) \ + bool Is##Name() const V8_WARN_UNUSED_RESULT { \ + return IsSubtypeOf(HType::Name()); \ + } + HTYPE_LIST(DECLARE_IS_TYPE) + #undef DECLARE_IS_TYPE + + template <class T> + static HType FromType(typename T::TypeHandle type) V8_WARN_UNUSED_RESULT; + static HType FromValue(Handle<Object> value) V8_WARN_UNUSED_RESULT; + + friend OStream& operator<<(OStream& os, const HType& t); + + private: + enum Kind { + #define DECLARE_TYPE(Name, mask) k##Name = mask, + HTYPE_LIST(DECLARE_TYPE) + #undef DECLARE_TYPE + LAST_KIND = kNone + }; + + // Make sure type fits in int16. + STATIC_ASSERT(LAST_KIND < (1 << (CHAR_BIT * sizeof(int16_t)))); + + explicit HType(Kind kind) : kind_(kind) { } + + int16_t kind_; +}; + + +OStream& operator<<(OStream& os, const HType& t); +} } // namespace v8::internal + +#endif // HYDROGEN_TYPES_H_ diff --git a/deps/v8/src/hydrogen-uint32-analysis.cc b/deps/v8/src/hydrogen-uint32-analysis.cc index 21fbec9f336..585a7066f60 100644 --- a/deps/v8/src/hydrogen-uint32-analysis.cc +++ b/deps/v8/src/hydrogen-uint32-analysis.cc @@ -2,12 +2,36 @@ // Use of this source code is governed by a BSD-style license that can be // found in the LICENSE file. -#include "hydrogen-uint32-analysis.h" +#include "src/hydrogen-uint32-analysis.h" namespace v8 { namespace internal { +static bool IsUnsignedLoad(HLoadKeyed* instr) { + switch (instr->elements_kind()) { + case EXTERNAL_UINT8_ELEMENTS: + case EXTERNAL_UINT16_ELEMENTS: + case EXTERNAL_UINT32_ELEMENTS: + case EXTERNAL_UINT8_CLAMPED_ELEMENTS: + case UINT8_ELEMENTS: + case UINT16_ELEMENTS: + case UINT32_ELEMENTS: + case UINT8_CLAMPED_ELEMENTS: + return true; + default: + return false; + } +} + + +static bool IsUint32Operation(HValue* instr) { + return instr->IsShr() || + (instr->IsLoadKeyed() && IsUnsignedLoad(HLoadKeyed::cast(instr))) || + (instr->IsInteger32Constant() && instr->GetInteger32Constant() >= 0); +} + + bool HUint32AnalysisPhase::IsSafeUint32Use(HValue* val, HValue* use) { // Operations that operate on bits are safe. if (use->IsBitwise() || use->IsShl() || use->IsSar() || use->IsShr()) { @@ -17,10 +41,10 @@ bool HUint32AnalysisPhase::IsSafeUint32Use(HValue* val, HValue* use) { return true; } else if (use->IsChange()) { // Conversions have special support for uint32. - // This ASSERT guards that the conversion in question is actually + // This DCHECK guards that the conversion in question is actually // implemented. Do not extend the whitelist without adding // support to LChunkBuilder::DoChange(). - ASSERT(HChange::cast(use)->to().IsDouble() || + DCHECK(HChange::cast(use)->to().IsDouble() || HChange::cast(use)->to().IsSmi() || HChange::cast(use)->to().IsTagged()); return true; @@ -31,12 +55,15 @@ bool HUint32AnalysisPhase::IsSafeUint32Use(HValue* val, HValue* use) { // operation. if (store->value() == val) { // Clamping or a conversion to double should have beed inserted. 
-        ASSERT(store->elements_kind() != EXTERNAL_UINT8_CLAMPED_ELEMENTS);
-        ASSERT(store->elements_kind() != EXTERNAL_FLOAT32_ELEMENTS);
-        ASSERT(store->elements_kind() != EXTERNAL_FLOAT64_ELEMENTS);
+        DCHECK(store->elements_kind() != EXTERNAL_UINT8_CLAMPED_ELEMENTS);
+        DCHECK(store->elements_kind() != EXTERNAL_FLOAT32_ELEMENTS);
+        DCHECK(store->elements_kind() != EXTERNAL_FLOAT64_ELEMENTS);
         return true;
       }
     }
+  } else if (use->IsCompareNumericAndBranch()) {
+    HCompareNumericAndBranch* c = HCompareNumericAndBranch::cast(use);
+    return IsUint32Operation(c->left()) && IsUint32Operation(c->right());
   }
 
   return false;
diff --git a/deps/v8/src/hydrogen-uint32-analysis.h b/deps/v8/src/hydrogen-uint32-analysis.h
index 8d672ac6a6c..4d2797fa3a1 100644
--- a/deps/v8/src/hydrogen-uint32-analysis.h
+++ b/deps/v8/src/hydrogen-uint32-analysis.h
@@ -5,7 +5,7 @@
 #ifndef V8_HYDROGEN_UINT32_ANALYSIS_H_
 #define V8_HYDROGEN_UINT32_ANALYSIS_H_
 
-#include "hydrogen.h"
+#include "src/hydrogen.h"
 
 namespace v8 {
 namespace internal {
diff --git a/deps/v8/src/hydrogen.cc b/deps/v8/src/hydrogen.cc
index 06dfcfc536f..63174aa5db8 100644
--- a/deps/v8/src/hydrogen.cc
+++ b/deps/v8/src/hydrogen.cc
@@ -2,55 +2,60 @@
 // Use of this source code is governed by a BSD-style license that can be
 // found in the LICENSE file.
 
-#include "hydrogen.h"
+#include "src/hydrogen.h"
 
 #include <algorithm>
 
-#include "v8.h"
-#include "allocation-site-scopes.h"
-#include "codegen.h"
-#include "full-codegen.h"
-#include "hashmap.h"
-#include "hydrogen-bce.h"
-#include "hydrogen-bch.h"
-#include "hydrogen-canonicalize.h"
-#include "hydrogen-check-elimination.h"
-#include "hydrogen-dce.h"
-#include "hydrogen-dehoist.h"
-#include "hydrogen-environment-liveness.h"
-#include "hydrogen-escape-analysis.h"
-#include "hydrogen-infer-representation.h"
-#include "hydrogen-infer-types.h"
-#include "hydrogen-load-elimination.h"
-#include "hydrogen-gvn.h"
-#include "hydrogen-mark-deoptimize.h"
-#include "hydrogen-mark-unreachable.h"
-#include "hydrogen-osr.h"
-#include "hydrogen-range-analysis.h"
-#include "hydrogen-redundant-phi.h"
-#include "hydrogen-removable-simulates.h"
-#include "hydrogen-representation-changes.h"
-#include "hydrogen-sce.h"
-#include "hydrogen-store-elimination.h"
-#include "hydrogen-uint32-analysis.h"
-#include "lithium-allocator.h"
-#include "parser.h"
-#include "runtime.h"
-#include "scopeinfo.h"
-#include "scopes.h"
-#include "stub-cache.h"
-#include "typing.h"
+#include "src/v8.h"
+
+#include "src/allocation-site-scopes.h"
+#include "src/codegen.h"
+#include "src/full-codegen.h"
+#include "src/hashmap.h"
+#include "src/hydrogen-bce.h"
+#include "src/hydrogen-bch.h"
+#include "src/hydrogen-canonicalize.h"
+#include "src/hydrogen-check-elimination.h"
+#include "src/hydrogen-dce.h"
+#include "src/hydrogen-dehoist.h"
+#include "src/hydrogen-environment-liveness.h"
+#include "src/hydrogen-escape-analysis.h"
+#include "src/hydrogen-gvn.h"
+#include "src/hydrogen-infer-representation.h"
+#include "src/hydrogen-infer-types.h"
+#include "src/hydrogen-load-elimination.h"
+#include "src/hydrogen-mark-deoptimize.h"
+#include "src/hydrogen-mark-unreachable.h"
+#include "src/hydrogen-osr.h"
+#include "src/hydrogen-range-analysis.h"
+#include "src/hydrogen-redundant-phi.h"
+#include "src/hydrogen-removable-simulates.h"
+#include "src/hydrogen-representation-changes.h"
+#include "src/hydrogen-sce.h"
+#include "src/hydrogen-store-elimination.h"
+#include "src/hydrogen-uint32-analysis.h"
+#include "src/lithium-allocator.h"
+#include "src/parser.h"
+#include "src/runtime.h"
+#include "src/scopeinfo.h"
+#include "src/scopes.h"
+#include "src/stub-cache.h"
+#include "src/typing.h"
 
 #if V8_TARGET_ARCH_IA32
-#include "ia32/lithium-codegen-ia32.h"
+#include "src/ia32/lithium-codegen-ia32.h"  // NOLINT
 #elif V8_TARGET_ARCH_X64
-#include "x64/lithium-codegen-x64.h"
+#include "src/x64/lithium-codegen-x64.h"  // NOLINT
 #elif V8_TARGET_ARCH_ARM64
-#include "arm64/lithium-codegen-arm64.h"
+#include "src/arm64/lithium-codegen-arm64.h"  // NOLINT
 #elif V8_TARGET_ARCH_ARM
-#include "arm/lithium-codegen-arm.h"
+#include "src/arm/lithium-codegen-arm.h"  // NOLINT
 #elif V8_TARGET_ARCH_MIPS
-#include "mips/lithium-codegen-mips.h"
+#include "src/mips/lithium-codegen-mips.h"  // NOLINT
+#elif V8_TARGET_ARCH_MIPS64
+#include "src/mips64/lithium-codegen-mips64.h"  // NOLINT
+#elif V8_TARGET_ARCH_X87
+#include "src/x87/lithium-codegen-x87.h"  // NOLINT
 #else
 #error Unsupported target architecture.
 #endif
@@ -79,7 +84,8 @@ HBasicBlock::HBasicBlock(HGraph* graph)
       is_inline_return_target_(false),
       is_reachable_(true),
       dominates_loop_successors_(false),
-      is_osr_entry_(false) { }
+      is_osr_entry_(false),
+      is_ordered_(false) { }
 
 
 Isolate* HBasicBlock::isolate() const {
@@ -93,27 +99,27 @@ void HBasicBlock::MarkUnreachable() {
 
 
 void HBasicBlock::AttachLoopInformation() {
-  ASSERT(!IsLoopHeader());
+  DCHECK(!IsLoopHeader());
   loop_information_ = new(zone()) HLoopInformation(this, zone());
 }
 
 
 void HBasicBlock::DetachLoopInformation() {
-  ASSERT(IsLoopHeader());
+  DCHECK(IsLoopHeader());
   loop_information_ = NULL;
 }
 
 
 void HBasicBlock::AddPhi(HPhi* phi) {
-  ASSERT(!IsStartBlock());
+  DCHECK(!IsStartBlock());
   phis_.Add(phi, zone());
   phi->SetBlock(this);
 }
 
 
 void HBasicBlock::RemovePhi(HPhi* phi) {
-  ASSERT(phi->block() == this);
-  ASSERT(phis_.Contains(phi));
+  DCHECK(phi->block() == this);
+  DCHECK(phis_.Contains(phi));
   phi->Kill();
   phis_.RemoveElement(phi);
   phi->SetBlock(NULL);
@@ -122,22 +128,22 @@ void HBasicBlock::RemovePhi(HPhi* phi) {
 
 void HBasicBlock::AddInstruction(HInstruction* instr,
                                  HSourcePosition position) {
-  ASSERT(!IsStartBlock() || !IsFinished());
-  ASSERT(!instr->IsLinked());
-  ASSERT(!IsFinished());
+  DCHECK(!IsStartBlock() || !IsFinished());
+  DCHECK(!instr->IsLinked());
+  DCHECK(!IsFinished());
 
   if (!position.IsUnknown()) {
     instr->set_position(position);
   }
   if (first_ == NULL) {
-    ASSERT(last_environment() != NULL);
-    ASSERT(!last_environment()->ast_id().IsNone());
+    DCHECK(last_environment() != NULL);
+    DCHECK(!last_environment()->ast_id().IsNone());
     HBlockEntry* entry = new(zone()) HBlockEntry();
     entry->InitializeAsFirst(this);
     if (!position.IsUnknown()) {
       entry->set_position(position);
     } else {
-      ASSERT(!FLAG_hydrogen_track_positions ||
+      DCHECK(!FLAG_hydrogen_track_positions ||
              !graph()->info()->IsOptimizing());
     }
     first_ = last_ = entry;
@@ -158,9 +164,9 @@ HPhi* HBasicBlock::AddNewPhi(int merged_index) {
 
 HSimulate* HBasicBlock::CreateSimulate(BailoutId ast_id,
                                        RemovableSimulate removable) {
-  ASSERT(HasEnvironment());
+  DCHECK(HasEnvironment());
   HEnvironment* environment = last_environment();
-  ASSERT(ast_id.IsNone() ||
+  DCHECK(ast_id.IsNone() ||
          ast_id == BailoutId::StubEntry() ||
          environment->closure()->shared()->VerifyBailoutId(ast_id));
 
@@ -191,7 +197,7 @@ HSimulate* HBasicBlock::CreateSimulate(BailoutId ast_id,
 
 void HBasicBlock::Finish(HControlInstruction* end, HSourcePosition position) {
-  ASSERT(!IsFinished());
+  DCHECK(!IsFinished());
   AddInstruction(end, position);
   end_ = end;
   for (HSuccessorIterator it(end); !it.Done(); it.Advance()) {
@@ -228,8 +234,8 @@
void HBasicBlock::AddLeaveInlined(HValue* return_value, HBasicBlock* target = state->function_return(); bool drop_extra = state->inlining_kind() == NORMAL_RETURN; - ASSERT(target->IsInlineReturnTarget()); - ASSERT(return_value != NULL); + DCHECK(target->IsInlineReturnTarget()); + DCHECK(return_value != NULL); HEnvironment* env = last_environment(); int argument_count = env->arguments_environment()->parameter_count(); AddInstruction(new(zone()) HLeaveInlined(state->entry(), argument_count), @@ -243,8 +249,8 @@ void HBasicBlock::AddLeaveInlined(HValue* return_value, void HBasicBlock::SetInitialEnvironment(HEnvironment* env) { - ASSERT(!HasEnvironment()); - ASSERT(first() == NULL); + DCHECK(!HasEnvironment()); + DCHECK(first() == NULL); UpdateEnvironment(env); } @@ -257,12 +263,12 @@ void HBasicBlock::UpdateEnvironment(HEnvironment* env) { void HBasicBlock::SetJoinId(BailoutId ast_id) { int length = predecessors_.length(); - ASSERT(length > 0); + DCHECK(length > 0); for (int i = 0; i < length; i++) { HBasicBlock* predecessor = predecessors_[i]; - ASSERT(predecessor->end()->IsGoto()); + DCHECK(predecessor->end()->IsGoto()); HSimulate* simulate = HSimulate::cast(predecessor->end()->previous()); - ASSERT(i != 0 || + DCHECK(i != 0 || (predecessor->last_environment()->closure().is_null() || predecessor->last_environment()->closure()->shared() ->VerifyBailoutId(ast_id))); @@ -300,7 +306,7 @@ int HBasicBlock::LoopNestingDepth() const { void HBasicBlock::PostProcessLoopHeader(IterationStatement* stmt) { - ASSERT(IsLoopHeader()); + DCHECK(IsLoopHeader()); SetJoinId(stmt->EntryId()); if (predecessors()->length() == 1) { @@ -318,10 +324,10 @@ void HBasicBlock::PostProcessLoopHeader(IterationStatement* stmt) { void HBasicBlock::MarkSuccEdgeUnreachable(int succ) { - ASSERT(IsFinished()); + DCHECK(IsFinished()); HBasicBlock* succ_block = end()->SuccessorAt(succ); - ASSERT(succ_block->predecessors()->length() == 1); + DCHECK(succ_block->predecessors()->length() == 1); succ_block->MarkUnreachable(); } @@ -331,10 +337,10 @@ void HBasicBlock::RegisterPredecessor(HBasicBlock* pred) { // Only loop header blocks can have a predecessor added after // instructions have been added to the block (they have phis for all // values in the environment, these phis may be eliminated later). - ASSERT(IsLoopHeader() || first_ == NULL); + DCHECK(IsLoopHeader() || first_ == NULL); HEnvironment* incoming_env = pred->last_environment(); if (IsLoopHeader()) { - ASSERT(phis()->length() == incoming_env->length()); + DCHECK(phis()->length() == incoming_env->length()); for (int i = 0; i < phis_.length(); ++i) { phis_[i]->AddInput(incoming_env->values()->at(i)); } @@ -342,7 +348,7 @@ void HBasicBlock::RegisterPredecessor(HBasicBlock* pred) { last_environment()->AddIncomingEdge(this, pred->last_environment()); } } else if (!HasEnvironment() && !IsFinished()) { - ASSERT(!IsLoopHeader()); + DCHECK(!IsLoopHeader()); SetInitialEnvironment(pred->last_environment()->Copy()); } @@ -351,7 +357,7 @@ void HBasicBlock::RegisterPredecessor(HBasicBlock* pred) { void HBasicBlock::AddDominatedBlock(HBasicBlock* block) { - ASSERT(!dominated_blocks_.Contains(block)); + DCHECK(!dominated_blocks_.Contains(block)); // Keep the list of dominated blocks sorted such that if there is two // succeeding block in this list, the predecessor is before the successor. 
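Restating the invariant from the comment above: dominated_blocks_ stays sorted so that whenever one listed block dominates another, the dominator appears first; in V8 the ordering key is the ascending block id. A small sketch of maintaining such an order, with plain ints standing in for HBasicBlock*:

    #include <cstdio>
    #include <vector>

    // Insert while keeping the list sorted by ascending block id.
    void AddSorted(std::vector<int>* dominated_ids, int block_id) {
      size_t index = 0;
      while (index < dominated_ids->size() &&
             (*dominated_ids)[index] < block_id) {
        ++index;
      }
      dominated_ids->insert(dominated_ids->begin() + index, block_id);
    }

    int main() {
      std::vector<int> ids;
      AddSorted(&ids, 7);
      AddSorted(&ids, 3);
      AddSorted(&ids, 5);
      for (int id : ids) std::printf("B%d ", id);  // prints: B3 B5 B7
      std::printf("\n");
    }

The scan that follows finds the insertion point in the same way.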
int index = 0; @@ -377,11 +383,11 @@ void HBasicBlock::AssignCommonDominator(HBasicBlock* other) { } else { second = second->dominator(); } - ASSERT(first != NULL && second != NULL); + DCHECK(first != NULL && second != NULL); } if (dominator_ != first) { - ASSERT(dominator_->dominated_blocks_.Contains(this)); + DCHECK(dominator_->dominated_blocks_.Contains(this)); dominator_->dominated_blocks_.RemoveElement(this); dominator_ = first; first->AddDominatedBlock(this); @@ -423,7 +429,7 @@ void HBasicBlock::AssignLoopSuccessorDominators() { // dominator information about the current loop that's being processed, // and not nested loops, which will be processed when // AssignLoopSuccessorDominators gets called on their header. - ASSERT(outstanding_successors >= 0); + DCHECK(outstanding_successors >= 0); HBasicBlock* parent_loop_header = dominator_candidate->parent_loop_header(); if (outstanding_successors == 0 && (parent_loop_header == this && !dominator_candidate->IsLoopHeader())) { @@ -437,7 +443,7 @@ void HBasicBlock::AssignLoopSuccessorDominators() { if (successor->block_id() > dominator_candidate->block_id() && successor->block_id() <= last->block_id()) { // Backwards edges must land on loop headers. - ASSERT(successor->block_id() > dominator_candidate->block_id() || + DCHECK(successor->block_id() > dominator_candidate->block_id() || successor->IsLoopHeader()); outstanding_successors++; } @@ -458,13 +464,13 @@ int HBasicBlock::PredecessorIndexOf(HBasicBlock* predecessor) const { #ifdef DEBUG void HBasicBlock::Verify() { // Check that every block is finished. - ASSERT(IsFinished()); - ASSERT(block_id() >= 0); + DCHECK(IsFinished()); + DCHECK(block_id() >= 0); // Check that the incoming edges are in edge split form. if (predecessors_.length() > 1) { for (int i = 0; i < predecessors_.length(); ++i) { - ASSERT(predecessors_[i]->end()->SecondSuccessor() == NULL); + DCHECK(predecessors_[i]->end()->SecondSuccessor() == NULL); } } } @@ -567,10 +573,10 @@ void HGraph::Verify(bool do_full_verify) const { // Check that every block contains at least one node and that only the last // node is a control instruction. HInstruction* current = block->first(); - ASSERT(current != NULL && current->IsBlockEntry()); + DCHECK(current != NULL && current->IsBlockEntry()); while (current != NULL) { - ASSERT((current->next() == NULL) == current->IsControlInstruction()); - ASSERT(current->block() == block); + DCHECK((current->next() == NULL) == current->IsControlInstruction()); + DCHECK(current->block() == block); current->Verify(); current = current->next(); } @@ -578,13 +584,13 @@ void HGraph::Verify(bool do_full_verify) const { // Check that successors are correctly set. HBasicBlock* first = block->end()->FirstSuccessor(); HBasicBlock* second = block->end()->SecondSuccessor(); - ASSERT(second == NULL || first != NULL); + DCHECK(second == NULL || first != NULL); // Check that the predecessor array is correct. 
if (first != NULL) { - ASSERT(first->predecessors()->Contains(block)); + DCHECK(first->predecessors()->Contains(block)); if (second != NULL) { - ASSERT(second->predecessors()->Contains(block)); + DCHECK(second->predecessors()->Contains(block)); } } @@ -601,36 +607,36 @@ void HGraph::Verify(bool do_full_verify) const { block->predecessors()->first()->last_environment()->ast_id(); for (int k = 0; k < block->predecessors()->length(); k++) { HBasicBlock* predecessor = block->predecessors()->at(k); - ASSERT(predecessor->end()->IsGoto() || + DCHECK(predecessor->end()->IsGoto() || predecessor->end()->IsDeoptimize()); - ASSERT(predecessor->last_environment()->ast_id() == id); + DCHECK(predecessor->last_environment()->ast_id() == id); } } } // Check special property of first block to have no predecessors. - ASSERT(blocks_.at(0)->predecessors()->is_empty()); + DCHECK(blocks_.at(0)->predecessors()->is_empty()); if (do_full_verify) { // Check that the graph is fully connected. ReachabilityAnalyzer analyzer(entry_block_, blocks_.length(), NULL); - ASSERT(analyzer.visited_count() == blocks_.length()); + DCHECK(analyzer.visited_count() == blocks_.length()); // Check that entry block dominator is NULL. - ASSERT(entry_block_->dominator() == NULL); + DCHECK(entry_block_->dominator() == NULL); // Check dominators. for (int i = 0; i < blocks_.length(); ++i) { HBasicBlock* block = blocks_.at(i); if (block->dominator() == NULL) { // Only start block may have no dominator assigned to. - ASSERT(i == 0); + DCHECK(i == 0); } else { // Assert that block is unreachable if dominator must not be visited. ReachabilityAnalyzer dominator_analyzer(entry_block_, blocks_.length(), block->dominator()); - ASSERT(!dominator_analyzer.reachable()->Contains(block->block_id())); + DCHECK(!dominator_analyzer.reachable()->Contains(block->block_id())); } } } @@ -698,11 +704,11 @@ HConstant* HGraph::GetConstant##Name() { \ } -DEFINE_GET_CONSTANT(Undefined, undefined, undefined, HType::Tagged(), false) +DEFINE_GET_CONSTANT(Undefined, undefined, undefined, HType::Undefined(), false) DEFINE_GET_CONSTANT(True, true, boolean, HType::Boolean(), true) DEFINE_GET_CONSTANT(False, false, boolean, HType::Boolean(), false) -DEFINE_GET_CONSTANT(Hole, the_hole, the_hole, HType::Tagged(), false) -DEFINE_GET_CONSTANT(Null, null, null, HType::Tagged(), false) +DEFINE_GET_CONSTANT(Hole, the_hole, the_hole, HType::None(), false) +DEFINE_GET_CONSTANT(Null, null, null, HType::Null(), false) #undef DEFINE_GET_CONSTANT @@ -741,54 +747,52 @@ bool HGraph::IsStandardConstant(HConstant* constant) { } +HGraphBuilder::IfBuilder::IfBuilder() : builder_(NULL), needs_compare_(true) {} + + HGraphBuilder::IfBuilder::IfBuilder(HGraphBuilder* builder) - : builder_(builder), - finished_(false), - did_then_(false), - did_else_(false), - did_else_if_(false), - did_and_(false), - did_or_(false), - captured_(false), - needs_compare_(true), - pending_merge_block_(false), - split_edge_merge_block_(NULL), - merge_at_join_blocks_(NULL), - normal_merge_at_join_block_count_(0), - deopt_merge_at_join_block_count_(0) { - HEnvironment* env = builder->environment(); - first_true_block_ = builder->CreateBasicBlock(env->Copy()); - first_false_block_ = builder->CreateBasicBlock(env->Copy()); + : needs_compare_(true) { + Initialize(builder); +} + + +HGraphBuilder::IfBuilder::IfBuilder(HGraphBuilder* builder, + HIfContinuation* continuation) + : needs_compare_(false), first_true_block_(NULL), first_false_block_(NULL) { + InitializeDontCreateBlocks(builder); + 
continuation->Continue(&first_true_block_, &first_false_block_); +} + + +void HGraphBuilder::IfBuilder::InitializeDontCreateBlocks( + HGraphBuilder* builder) { + builder_ = builder; + finished_ = false; + did_then_ = false; + did_else_ = false; + did_else_if_ = false; + did_and_ = false; + did_or_ = false; + captured_ = false; + pending_merge_block_ = false; + split_edge_merge_block_ = NULL; + merge_at_join_blocks_ = NULL; + normal_merge_at_join_block_count_ = 0; + deopt_merge_at_join_block_count_ = 0; } -HGraphBuilder::IfBuilder::IfBuilder( - HGraphBuilder* builder, - HIfContinuation* continuation) - : builder_(builder), - finished_(false), - did_then_(false), - did_else_(false), - did_else_if_(false), - did_and_(false), - did_or_(false), - captured_(false), - needs_compare_(false), - pending_merge_block_(false), - first_true_block_(NULL), - first_false_block_(NULL), - split_edge_merge_block_(NULL), - merge_at_join_blocks_(NULL), - normal_merge_at_join_block_count_(0), - deopt_merge_at_join_block_count_(0) { - continuation->Continue(&first_true_block_, - &first_false_block_); +void HGraphBuilder::IfBuilder::Initialize(HGraphBuilder* builder) { + InitializeDontCreateBlocks(builder); + HEnvironment* env = builder->environment(); + first_true_block_ = builder->CreateBasicBlock(env->Copy()); + first_false_block_ = builder->CreateBasicBlock(env->Copy()); } HControlInstruction* HGraphBuilder::IfBuilder::AddCompare( HControlInstruction* compare) { - ASSERT(did_then_ == did_else_); + DCHECK(did_then_ == did_else_); if (did_else_) { // Handle if-then-elseif did_else_if_ = true; @@ -798,14 +802,13 @@ HControlInstruction* HGraphBuilder::IfBuilder::AddCompare( did_or_ = false; pending_merge_block_ = false; split_edge_merge_block_ = NULL; - HEnvironment* env = builder_->environment(); - first_true_block_ = builder_->CreateBasicBlock(env->Copy()); - first_false_block_ = builder_->CreateBasicBlock(env->Copy()); + HEnvironment* env = builder()->environment(); + first_true_block_ = builder()->CreateBasicBlock(env->Copy()); + first_false_block_ = builder()->CreateBasicBlock(env->Copy()); } if (split_edge_merge_block_ != NULL) { HEnvironment* env = first_false_block_->last_environment(); - HBasicBlock* split_edge = - builder_->CreateBasicBlock(env->Copy()); + HBasicBlock* split_edge = builder()->CreateBasicBlock(env->Copy()); if (did_or_) { compare->SetSuccessorAt(0, split_edge); compare->SetSuccessorAt(1, first_false_block_); @@ -813,81 +816,80 @@ HControlInstruction* HGraphBuilder::IfBuilder::AddCompare( compare->SetSuccessorAt(0, first_true_block_); compare->SetSuccessorAt(1, split_edge); } - builder_->GotoNoSimulate(split_edge, split_edge_merge_block_); + builder()->GotoNoSimulate(split_edge, split_edge_merge_block_); } else { compare->SetSuccessorAt(0, first_true_block_); compare->SetSuccessorAt(1, first_false_block_); } - builder_->FinishCurrentBlock(compare); + builder()->FinishCurrentBlock(compare); needs_compare_ = false; return compare; } void HGraphBuilder::IfBuilder::Or() { - ASSERT(!needs_compare_); - ASSERT(!did_and_); + DCHECK(!needs_compare_); + DCHECK(!did_and_); did_or_ = true; HEnvironment* env = first_false_block_->last_environment(); if (split_edge_merge_block_ == NULL) { - split_edge_merge_block_ = - builder_->CreateBasicBlock(env->Copy()); - builder_->GotoNoSimulate(first_true_block_, split_edge_merge_block_); + split_edge_merge_block_ = builder()->CreateBasicBlock(env->Copy()); + builder()->GotoNoSimulate(first_true_block_, split_edge_merge_block_); first_true_block_ = 
split_edge_merge_block_; } - builder_->set_current_block(first_false_block_); - first_false_block_ = builder_->CreateBasicBlock(env->Copy()); + builder()->set_current_block(first_false_block_); + first_false_block_ = builder()->CreateBasicBlock(env->Copy()); } void HGraphBuilder::IfBuilder::And() { - ASSERT(!needs_compare_); - ASSERT(!did_or_); + DCHECK(!needs_compare_); + DCHECK(!did_or_); did_and_ = true; HEnvironment* env = first_false_block_->last_environment(); if (split_edge_merge_block_ == NULL) { - split_edge_merge_block_ = builder_->CreateBasicBlock(env->Copy()); - builder_->GotoNoSimulate(first_false_block_, split_edge_merge_block_); + split_edge_merge_block_ = builder()->CreateBasicBlock(env->Copy()); + builder()->GotoNoSimulate(first_false_block_, split_edge_merge_block_); first_false_block_ = split_edge_merge_block_; } - builder_->set_current_block(first_true_block_); - first_true_block_ = builder_->CreateBasicBlock(env->Copy()); + builder()->set_current_block(first_true_block_); + first_true_block_ = builder()->CreateBasicBlock(env->Copy()); } void HGraphBuilder::IfBuilder::CaptureContinuation( HIfContinuation* continuation) { - ASSERT(!did_else_if_); - ASSERT(!finished_); - ASSERT(!captured_); + DCHECK(!did_else_if_); + DCHECK(!finished_); + DCHECK(!captured_); HBasicBlock* true_block = NULL; HBasicBlock* false_block = NULL; Finish(&true_block, &false_block); - ASSERT(true_block != NULL); - ASSERT(false_block != NULL); + DCHECK(true_block != NULL); + DCHECK(false_block != NULL); continuation->Capture(true_block, false_block); captured_ = true; - builder_->set_current_block(NULL); + builder()->set_current_block(NULL); End(); } void HGraphBuilder::IfBuilder::JoinContinuation(HIfContinuation* continuation) { - ASSERT(!did_else_if_); - ASSERT(!finished_); - ASSERT(!captured_); + DCHECK(!did_else_if_); + DCHECK(!finished_); + DCHECK(!captured_); HBasicBlock* true_block = NULL; HBasicBlock* false_block = NULL; Finish(&true_block, &false_block); merge_at_join_blocks_ = NULL; if (true_block != NULL && !true_block->IsFinished()) { - ASSERT(continuation->IsTrueReachable()); - builder_->GotoNoSimulate(true_block, continuation->true_branch()); + DCHECK(continuation->IsTrueReachable()); + builder()->GotoNoSimulate(true_block, continuation->true_branch()); } if (false_block != NULL && !false_block->IsFinished()) { - ASSERT(continuation->IsFalseReachable()); - builder_->GotoNoSimulate(false_block, continuation->false_branch()); + DCHECK(continuation->IsFalseReachable()); + builder()->GotoNoSimulate(false_block, continuation->false_branch()); } captured_ = true; End(); @@ -895,75 +897,74 @@ void HGraphBuilder::IfBuilder::JoinContinuation(HIfContinuation* continuation) { void HGraphBuilder::IfBuilder::Then() { - ASSERT(!captured_); - ASSERT(!finished_); + DCHECK(!captured_); + DCHECK(!finished_); did_then_ = true; if (needs_compare_) { // Handle if's without any expressions, they jump directly to the "else" // branch. However, we must pretend that the "then" branch is reachable, // so that the graph builder visits it and sees any live range extending // constructs within it. 
- HConstant* constant_false = builder_->graph()->GetConstantFalse(); + HConstant* constant_false = builder()->graph()->GetConstantFalse(); ToBooleanStub::Types boolean_type = ToBooleanStub::Types(); boolean_type.Add(ToBooleanStub::BOOLEAN); HBranch* branch = builder()->New<HBranch>( constant_false, boolean_type, first_true_block_, first_false_block_); - builder_->FinishCurrentBlock(branch); + builder()->FinishCurrentBlock(branch); } - builder_->set_current_block(first_true_block_); + builder()->set_current_block(first_true_block_); pending_merge_block_ = true; } void HGraphBuilder::IfBuilder::Else() { - ASSERT(did_then_); - ASSERT(!captured_); - ASSERT(!finished_); + DCHECK(did_then_); + DCHECK(!captured_); + DCHECK(!finished_); AddMergeAtJoinBlock(false); - builder_->set_current_block(first_false_block_); + builder()->set_current_block(first_false_block_); pending_merge_block_ = true; did_else_ = true; } void HGraphBuilder::IfBuilder::Deopt(const char* reason) { - ASSERT(did_then_); - builder_->Add<HDeoptimize>(reason, Deoptimizer::EAGER); + DCHECK(did_then_); + builder()->Add<HDeoptimize>(reason, Deoptimizer::EAGER); AddMergeAtJoinBlock(true); } void HGraphBuilder::IfBuilder::Return(HValue* value) { - HValue* parameter_count = builder_->graph()->GetConstantMinus1(); - builder_->FinishExitCurrentBlock( - builder_->New<HReturn>(value, parameter_count)); + HValue* parameter_count = builder()->graph()->GetConstantMinus1(); + builder()->FinishExitCurrentBlock( + builder()->New<HReturn>(value, parameter_count)); AddMergeAtJoinBlock(false); } void HGraphBuilder::IfBuilder::AddMergeAtJoinBlock(bool deopt) { if (!pending_merge_block_) return; - HBasicBlock* block = builder_->current_block(); - ASSERT(block == NULL || !block->IsFinished()); - MergeAtJoinBlock* record = - new(builder_->zone()) MergeAtJoinBlock(block, deopt, - merge_at_join_blocks_); + HBasicBlock* block = builder()->current_block(); + DCHECK(block == NULL || !block->IsFinished()); + MergeAtJoinBlock* record = new (builder()->zone()) + MergeAtJoinBlock(block, deopt, merge_at_join_blocks_); merge_at_join_blocks_ = record; if (block != NULL) { - ASSERT(block->end() == NULL); + DCHECK(block->end() == NULL); if (deopt) { normal_merge_at_join_block_count_++; } else { deopt_merge_at_join_block_count_++; } } - builder_->set_current_block(NULL); + builder()->set_current_block(NULL); pending_merge_block_ = false; } void HGraphBuilder::IfBuilder::Finish() { - ASSERT(!finished_); + DCHECK(!finished_); if (!did_then_) { Then(); } @@ -988,7 +989,7 @@ void HGraphBuilder::IfBuilder::Finish(HBasicBlock** then_continuation, if (then_continuation != NULL) { *then_continuation = then_record->block_; } - ASSERT(then_record->next_ == NULL); + DCHECK(then_record->next_ == NULL); } @@ -998,9 +999,9 @@ void HGraphBuilder::IfBuilder::End() { int total_merged_blocks = normal_merge_at_join_block_count_ + deopt_merge_at_join_block_count_; - ASSERT(total_merged_blocks >= 1); - HBasicBlock* merge_block = total_merged_blocks == 1 - ? NULL : builder_->graph()->CreateBasicBlock(); + DCHECK(total_merged_blocks >= 1); + HBasicBlock* merge_block = + total_merged_blocks == 1 ? NULL : builder()->graph()->CreateBasicBlock(); // Merge non-deopt blocks first to ensure environment has right size for // padding. @@ -1011,10 +1012,10 @@ void HGraphBuilder::IfBuilder::End() { // if, then just set it as the current block and continue rather then // creating an unnecessary merge block. 
if (total_merged_blocks == 1) { - builder_->set_current_block(current->block_); + builder()->set_current_block(current->block_); return; } - builder_->GotoNoSimulate(current->block_, merge_block); + builder()->GotoNoSimulate(current->block_, merge_block); } current = current->next_; } @@ -1023,44 +1024,48 @@ void HGraphBuilder::IfBuilder::End() { current = merge_at_join_blocks_; while (current != NULL) { if (current->deopt_ && current->block_ != NULL) { - current->block_->FinishExit( - HAbnormalExit::New(builder_->zone(), NULL), - HSourcePosition::Unknown()); + current->block_->FinishExit(HAbnormalExit::New(builder()->zone(), NULL), + HSourcePosition::Unknown()); } current = current->next_; } - builder_->set_current_block(merge_block); + builder()->set_current_block(merge_block); } -HGraphBuilder::LoopBuilder::LoopBuilder(HGraphBuilder* builder, - HValue* context, - LoopBuilder::Direction direction) - : builder_(builder), - context_(context), - direction_(direction), - finished_(false) { - header_block_ = builder->CreateLoopHeaderBlock(); - body_block_ = NULL; - exit_block_ = NULL; - exit_trampoline_block_ = NULL; - increment_amount_ = builder_->graph()->GetConstant1(); +HGraphBuilder::LoopBuilder::LoopBuilder(HGraphBuilder* builder) { + Initialize(builder, NULL, kWhileTrue, NULL); } -HGraphBuilder::LoopBuilder::LoopBuilder(HGraphBuilder* builder, - HValue* context, +HGraphBuilder::LoopBuilder::LoopBuilder(HGraphBuilder* builder, HValue* context, + LoopBuilder::Direction direction) { + Initialize(builder, context, direction, builder->graph()->GetConstant1()); +} + + +HGraphBuilder::LoopBuilder::LoopBuilder(HGraphBuilder* builder, HValue* context, LoopBuilder::Direction direction, - HValue* increment_amount) - : builder_(builder), - context_(context), - direction_(direction), - finished_(false) { + HValue* increment_amount) { + Initialize(builder, context, direction, increment_amount); + increment_amount_ = increment_amount; +} + + +void HGraphBuilder::LoopBuilder::Initialize(HGraphBuilder* builder, + HValue* context, + Direction direction, + HValue* increment_amount) { + builder_ = builder; + context_ = context; + direction_ = direction; + increment_amount_ = increment_amount; + + finished_ = false; header_block_ = builder->CreateLoopHeaderBlock(); body_block_ = NULL; exit_block_ = NULL; exit_trampoline_block_ = NULL; - increment_amount_ = increment_amount; } @@ -1068,6 +1073,7 @@ HValue* HGraphBuilder::LoopBuilder::BeginBody( HValue* initial, HValue* terminating, Token::Value token) { + DCHECK(direction_ != kWhileTrue); HEnvironment* env = builder_->environment(); phi_ = header_block_->AddNewPhi(env->values()->length()); phi_->AddInput(initial); @@ -1104,12 +1110,26 @@ HValue* HGraphBuilder::LoopBuilder::BeginBody( } +void HGraphBuilder::LoopBuilder::BeginBody(int drop_count) { + DCHECK(direction_ == kWhileTrue); + HEnvironment* env = builder_->environment(); + builder_->GotoNoSimulate(header_block_); + builder_->set_current_block(header_block_); + env->Drop(drop_count); +} + + void HGraphBuilder::LoopBuilder::Break() { if (exit_trampoline_block_ == NULL) { // Its the first time we saw a break. 
- HEnvironment* env = exit_block_->last_environment()->Copy(); - exit_trampoline_block_ = builder_->CreateBasicBlock(env); - builder_->GotoNoSimulate(exit_block_, exit_trampoline_block_); + if (direction_ == kWhileTrue) { + HEnvironment* env = builder_->environment()->Copy(); + exit_trampoline_block_ = builder_->CreateBasicBlock(env); + } else { + HEnvironment* env = exit_block_->last_environment()->Copy(); + exit_trampoline_block_ = builder_->CreateBasicBlock(env); + builder_->GotoNoSimulate(exit_block_, exit_trampoline_block_); + } } builder_->GotoNoSimulate(exit_trampoline_block_); @@ -1118,7 +1138,7 @@ void HGraphBuilder::LoopBuilder::Break() { void HGraphBuilder::LoopBuilder::EndBody() { - ASSERT(!finished_); + DCHECK(!finished_); if (direction_ == kPostIncrement || direction_ == kPostDecrement) { if (direction_ == kPostIncrement) { @@ -1130,8 +1150,11 @@ void HGraphBuilder::LoopBuilder::EndBody() { builder_->AddInstruction(increment_); } - // Push the new increment value on the expression stack to merge into the phi. - builder_->environment()->Push(increment_); + if (direction_ != kWhileTrue) { + // Push the new increment value on the expression stack to merge into + // the phi. + builder_->environment()->Push(increment_); + } HBasicBlock* last_block = builder_->current_block(); builder_->GotoNoSimulate(last_block, header_block_); header_block_->loop_information()->RegisterBackEdge(last_block); @@ -1157,8 +1180,8 @@ HGraph* HGraphBuilder::CreateGraph() { HInstruction* HGraphBuilder::AddInstruction(HInstruction* instr) { - ASSERT(current_block() != NULL); - ASSERT(!FLAG_hydrogen_track_positions || + DCHECK(current_block() != NULL); + DCHECK(!FLAG_hydrogen_track_positions || !position_.IsUnknown() || !info_->IsOptimizing()); current_block()->AddInstruction(instr, source_position()); @@ -1170,7 +1193,7 @@ HInstruction* HGraphBuilder::AddInstruction(HInstruction* instr) { void HGraphBuilder::FinishCurrentBlock(HControlInstruction* last) { - ASSERT(!FLAG_hydrogen_track_positions || + DCHECK(!FLAG_hydrogen_track_positions || !info_->IsOptimizing() || !position_.IsUnknown()); current_block()->Finish(last, source_position()); @@ -1181,7 +1204,7 @@ void HGraphBuilder::FinishCurrentBlock(HControlInstruction* last) { void HGraphBuilder::FinishExitCurrentBlock(HControlInstruction* instruction) { - ASSERT(!FLAG_hydrogen_track_positions || !info_->IsOptimizing() || + DCHECK(!FLAG_hydrogen_track_positions || !info_->IsOptimizing() || !position_.IsUnknown()); current_block()->FinishExit(instruction, source_position()); if (instruction->IsReturn() || instruction->IsAbnormalExit()) { @@ -1205,8 +1228,8 @@ void HGraphBuilder::AddIncrementCounter(StatsCounter* counter) { void HGraphBuilder::AddSimulate(BailoutId id, RemovableSimulate removable) { - ASSERT(current_block() != NULL); - ASSERT(!graph()->IsInsideNoSideEffectsScope()); + DCHECK(current_block() != NULL); + DCHECK(!graph()->IsInsideNoSideEffectsScope()); current_block()->AddNewSimulate(id, source_position(), removable); } @@ -1227,6 +1250,16 @@ HBasicBlock* HGraphBuilder::CreateLoopHeaderBlock() { } +HValue* HGraphBuilder::BuildGetElementsKind(HValue* object) { + HValue* map = Add<HLoadNamedField>(object, static_cast<HValue*>(NULL), + HObjectAccess::ForMap()); + + HValue* bit_field2 = Add<HLoadNamedField>(map, static_cast<HValue*>(NULL), + HObjectAccess::ForMapBitField2()); + return BuildDecodeField<Map::ElementsKindBits>(bit_field2); +} + + HValue* HGraphBuilder::BuildCheckHeapObject(HValue* obj) { if (obj->type().IsHeapObject()) return obj; 
return Add<HCheckHeapObject>(obj); @@ -1241,7 +1274,7 @@ void HGraphBuilder::FinishExitWithHardDeoptimization(const char* reason) { HValue* HGraphBuilder::BuildCheckString(HValue* string) { if (!string->type().IsString()) { - ASSERT(!string->IsConstant() || + DCHECK(!string->IsConstant() || !HConstant::cast(string)->HasStringValue()); BuildCheckHeapObject(string); return Add<HCheckInstanceType>(string, HCheckInstanceType::IS_STRING); @@ -1360,7 +1393,7 @@ void HGraphBuilder::BuildTransitionElementsKind(HValue* object, ElementsKind from_kind, ElementsKind to_kind, bool is_jsarray) { - ASSERT(!IsFastHoleyElementsKind(from_kind) || + DCHECK(!IsFastHoleyElementsKind(from_kind) || IsFastHoleyElementsKind(to_kind)); if (AllocationSite::GetMode(from_kind, to_kind) == TRACK_ALLOCATION_SITE) { @@ -1396,81 +1429,202 @@ void HGraphBuilder::BuildTransitionElementsKind(HValue* object, } -HValue* HGraphBuilder::BuildUncheckedDictionaryElementLoadHelper( - HValue* elements, - HValue* key, - HValue* hash, - HValue* mask, - int current_probe) { - if (current_probe == kNumberDictionaryProbes) { - return NULL; - } +void HGraphBuilder::BuildJSObjectCheck(HValue* receiver, + int bit_field_mask) { + // Check that the object isn't a smi. + Add<HCheckHeapObject>(receiver); - int32_t offset = SeededNumberDictionary::GetProbeOffset(current_probe); - HValue* raw_index = (current_probe == 0) - ? hash - : AddUncasted<HAdd>(hash, Add<HConstant>(offset)); - raw_index = AddUncasted<HBitwise>(Token::BIT_AND, raw_index, mask); - int32_t entry_size = SeededNumberDictionary::kEntrySize; - raw_index = AddUncasted<HMul>(raw_index, Add<HConstant>(entry_size)); - raw_index->ClearFlag(HValue::kCanOverflow); + // Get the map of the receiver. + HValue* map = Add<HLoadNamedField>(receiver, static_cast<HValue*>(NULL), + HObjectAccess::ForMap()); - int32_t base_offset = SeededNumberDictionary::kElementsStartIndex; - HValue* key_index = AddUncasted<HAdd>(raw_index, Add<HConstant>(base_offset)); - key_index->ClearFlag(HValue::kCanOverflow); + // Check the instance type and if an access check is needed, this can be + // done with a single load, since both bytes are adjacent in the map. + HObjectAccess access(HObjectAccess::ForMapInstanceTypeAndBitField()); + HValue* instance_type_and_bit_field = + Add<HLoadNamedField>(map, static_cast<HValue*>(NULL), access); + + HValue* mask = Add<HConstant>(0x00FF | (bit_field_mask << 8)); + HValue* and_result = AddUncasted<HBitwise>(Token::BIT_AND, + instance_type_and_bit_field, + mask); + HValue* sub_result = AddUncasted<HSub>(and_result, + Add<HConstant>(JS_OBJECT_TYPE)); + Add<HBoundsCheck>(sub_result, + Add<HConstant>(LAST_JS_OBJECT_TYPE + 1 - JS_OBJECT_TYPE)); +} - HValue* candidate_key = Add<HLoadKeyed>(elements, key_index, - static_cast<HValue*>(NULL), - FAST_ELEMENTS); - IfBuilder key_compare(this); - key_compare.IfNot<HCompareObjectEqAndBranch>(key, candidate_key); - key_compare.Then(); +void HGraphBuilder::BuildKeyedIndexCheck(HValue* key, + HIfContinuation* join_continuation) { + // The sometimes unintuitively backward ordering of the ifs below is + // convoluted, but necessary. All of the paths must guarantee that the + // if-true of the continuation returns a smi element index and the if-false of + // the continuation returns either a symbol or a unique string key. All other + // object types cause a deopt to fall back to the runtime. 
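Put differently: every incoming key is normalized to one of two canonical shapes before any dictionary probe happens, either a smi array index (the if-true path) or a unique name, i.e. a symbol or internalized string (the if-false path); everything else bails out. A simplified standalone classification sketch, where the Key type and its fields are hypothetical stand-ins for the tagged values the Hydrogen code inspects:

    #include <cstdio>

    // Hypothetical stand-in for a tagged key value.
    enum class KeyKind { Smi, Symbol, InternalizedString, OtherString,
                         HeapNumber };
    struct Key {
      KeyKind kind;
      bool hash_is_index;  // the string's hash field caches an array index
    };

    enum class Shape { SmiIndex, UniqueName, Deopt };

    Shape Classify(const Key& key) {
      if (key.kind == KeyKind::Smi) return Shape::SmiIndex;
      if (key.kind == KeyKind::Symbol) return Shape::UniqueName;
      if (key.kind == KeyKind::InternalizedString ||
          key.kind == KeyKind::OtherString) {
        // A string whose hash caches an index takes the index path; any other
        // string is internalized (if needed) and becomes a unique name.
        return key.hash_is_index ? Shape::SmiIndex : Shape::UniqueName;
      }
      // e.g. HeapNumber: the real code first retries a smi representation.
      return Shape::Deopt;
    }

    int main() {
      Key key{KeyKind::OtherString, true};
      std::printf("shape %d\n", static_cast<int>(Classify(key)));
    }

The IfBuilder cascade that follows realizes this decision tree in graph form.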
+ + IfBuilder key_smi_if(this); + key_smi_if.If<HIsSmiAndBranch>(key); + key_smi_if.Then(); { - // Key at the current probe doesn't match, try at the next probe. - HValue* result = BuildUncheckedDictionaryElementLoadHelper( - elements, key, hash, mask, current_probe + 1); - if (result == NULL) { - key_compare.Deopt("probes exhausted in keyed load dictionary lookup"); - result = graph()->GetConstantUndefined(); - } else { - Push(result); - } + Push(key); // Nothing to do, just continue to true of continuation. } - key_compare.Else(); + key_smi_if.Else(); { - // Key at current probe matches. Details must be zero, otherwise the - // dictionary element requires special handling. - HValue* details_index = AddUncasted<HAdd>( - raw_index, Add<HConstant>(base_offset + 2)); - details_index->ClearFlag(HValue::kCanOverflow); - - HValue* details = Add<HLoadKeyed>(elements, details_index, - static_cast<HValue*>(NULL), - FAST_ELEMENTS); - IfBuilder details_compare(this); - details_compare.If<HCompareNumericAndBranch>(details, - graph()->GetConstant0(), - Token::NE); - details_compare.ThenDeopt("keyed load dictionary element not fast case"); - - details_compare.Else(); + HValue* map = Add<HLoadNamedField>(key, static_cast<HValue*>(NULL), + HObjectAccess::ForMap()); + HValue* instance_type = + Add<HLoadNamedField>(map, static_cast<HValue*>(NULL), + HObjectAccess::ForMapInstanceType()); + + // Non-unique string, check for a string with a hash code that is actually + // an index. + STATIC_ASSERT(LAST_UNIQUE_NAME_TYPE == FIRST_NONSTRING_TYPE); + IfBuilder not_string_or_name_if(this); + not_string_or_name_if.If<HCompareNumericAndBranch>( + instance_type, + Add<HConstant>(LAST_UNIQUE_NAME_TYPE), + Token::GT); + + not_string_or_name_if.Then(); { - // Key matches and details are zero --> fast case. Load and return the - // value. - HValue* result_index = AddUncasted<HAdd>( - raw_index, Add<HConstant>(base_offset + 1)); - result_index->ClearFlag(HValue::kCanOverflow); - - Push(Add<HLoadKeyed>(elements, result_index, - static_cast<HValue*>(NULL), - FAST_ELEMENTS)); + // Non-smi, non-Name, non-String: Try to convert to smi in case of + // HeapNumber. + // TODO(danno): This could call some variant of ToString + Push(AddUncasted<HForceRepresentation>(key, Representation::Smi())); } - details_compare.End(); + not_string_or_name_if.Else(); + { + // String or Name: check explicitly for Name, they can short-circuit + // directly to unique non-index key path. + IfBuilder not_symbol_if(this); + not_symbol_if.If<HCompareNumericAndBranch>( + instance_type, + Add<HConstant>(SYMBOL_TYPE), + Token::NE); + + not_symbol_if.Then(); + { + // String: check whether the String is a String of an index. If it is, + // extract the index value from the hash. + HValue* hash = + Add<HLoadNamedField>(key, static_cast<HValue*>(NULL), + HObjectAccess::ForNameHashField()); + HValue* not_index_mask = Add<HConstant>(static_cast<int>( + String::kContainsCachedArrayIndexMask)); + + HValue* not_index_test = AddUncasted<HBitwise>( + Token::BIT_AND, hash, not_index_mask); + + IfBuilder string_index_if(this); + string_index_if.If<HCompareNumericAndBranch>(not_index_test, + graph()->GetConstant0(), + Token::EQ); + string_index_if.Then(); + { + // String with index in hash: extract string and merge to index path. + Push(BuildDecodeField<String::ArrayIndexValueBits>(hash)); + } + string_index_if.Else(); + { + // Key is a non-index String, check for uniqueness/internalization. + // If it's not internalized yet, internalize it now. 
+        HValue* not_internalized_bit = AddUncasted<HBitwise>(
+            Token::BIT_AND,
+            instance_type,
+            Add<HConstant>(static_cast<int>(kIsNotInternalizedMask)));
+
+        IfBuilder internalized(this);
+        internalized.If<HCompareNumericAndBranch>(not_internalized_bit,
+                                                  graph()->GetConstant0(),
+                                                  Token::EQ);
+        internalized.Then();
+        Push(key);
+
+        internalized.Else();
+        Add<HPushArguments>(key);
+        HValue* intern_key = Add<HCallRuntime>(
+            isolate()->factory()->empty_string(),
+            Runtime::FunctionForId(Runtime::kInternalizeString), 1);
+        Push(intern_key);
+
+        internalized.End();
+        // Key guaranteed to be a unique string
+      }
+      string_index_if.JoinContinuation(join_continuation);
+    }
+    not_symbol_if.Else();
+    {
+      Push(key);  // Key is symbol
+    }
+    not_symbol_if.JoinContinuation(join_continuation);
+  }
+  not_string_or_name_if.JoinContinuation(join_continuation);
 }
+  key_smi_if.JoinContinuation(join_continuation);
+}
-  return Pop();
+
+void HGraphBuilder::BuildNonGlobalObjectCheck(HValue* receiver) {
+  // Get the instance type of the receiver, and make sure that it is
+  // not one of the global object types.
+  HValue* map = Add<HLoadNamedField>(receiver, static_cast<HValue*>(NULL),
+                                     HObjectAccess::ForMap());
+  HValue* instance_type =
+      Add<HLoadNamedField>(map, static_cast<HValue*>(NULL),
+                           HObjectAccess::ForMapInstanceType());
+  STATIC_ASSERT(JS_BUILTINS_OBJECT_TYPE == JS_GLOBAL_OBJECT_TYPE + 1);
+  HValue* min_global_type = Add<HConstant>(JS_GLOBAL_OBJECT_TYPE);
+  HValue* max_global_type = Add<HConstant>(JS_BUILTINS_OBJECT_TYPE);
+
+  IfBuilder if_global_object(this);
+  if_global_object.If<HCompareNumericAndBranch>(instance_type,
+                                                max_global_type,
+                                                Token::LTE);
+  if_global_object.And();
+  if_global_object.If<HCompareNumericAndBranch>(instance_type,
+                                                min_global_type,
+                                                Token::GTE);
+  if_global_object.ThenDeopt("receiver was a global object");
+  if_global_object.End();
+}
+
+
+void HGraphBuilder::BuildTestForDictionaryProperties(
+    HValue* object,
+    HIfContinuation* continuation) {
+  HValue* properties = Add<HLoadNamedField>(
+      object, static_cast<HValue*>(NULL),
+      HObjectAccess::ForPropertiesPointer());
+  HValue* properties_map =
+      Add<HLoadNamedField>(properties, static_cast<HValue*>(NULL),
+                           HObjectAccess::ForMap());
+  HValue* hash_map = Add<HLoadRoot>(Heap::kHashTableMapRootIndex);
+  IfBuilder builder(this);
+  builder.If<HCompareObjectEqAndBranch>(properties_map, hash_map);
+  builder.CaptureContinuation(continuation);
+}
+
+
+HValue* HGraphBuilder::BuildKeyedLookupCacheHash(HValue* object,
+                                                 HValue* key) {
+  // Load the map of the receiver, compute the keyed lookup cache hash
+  // based on 32 bits of the map pointer and the string hash.
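About the computation this comment introduces: the cache index mixes the low 32 bits of the map pointer with the key's hash field, shifting both so their interesting bits line up, XOR-ing them, and masking down to the cache capacity. A standalone sketch with illustrative constants (V8's actual shift amounts and mask come from KeyedLookupCache and String, not the values below):

    #include <cstdint>
    #include <cstdio>

    // Illustrative constants only; not V8's real values.
    const int kMapHashShift = 5;
    const int kStringHashShift = 2;
    const unsigned kCacheMask = 64 - 1;  // power-of-two cache capacity

    int CacheIndex(uintptr_t map_bits, uint32_t string_hash) {
      uint32_t shifted_map = static_cast<uint32_t>(map_bits) >> kMapHashShift;
      uint32_t shifted_hash = string_hash >> kStringHashShift;
      return static_cast<int>((shifted_map ^ shifted_hash) & kCacheMask);
    }

    int main() {
      std::printf("slot %d\n", CacheIndex(0xdeadbeef, 12345));
    }

The Hydrogen code that follows builds the same expression out of HShr, HBitwise and HConstant nodes.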
+ HValue* object_map = + Add<HLoadNamedField>(object, static_cast<HValue*>(NULL), + HObjectAccess::ForMapAsInteger32()); + HValue* shifted_map = AddUncasted<HShr>( + object_map, Add<HConstant>(KeyedLookupCache::kMapHashShift)); + HValue* string_hash = + Add<HLoadNamedField>(key, static_cast<HValue*>(NULL), + HObjectAccess::ForStringHashField()); + HValue* shifted_hash = AddUncasted<HShr>( + string_hash, Add<HConstant>(String::kHashShift)); + HValue* xor_result = AddUncasted<HBitwise>(Token::BIT_XOR, shifted_map, + shifted_hash); + int mask = (KeyedLookupCache::kCapacityMask & KeyedLookupCache::kHashMask); + return AddUncasted<HBitwise>(Token::BIT_AND, xor_result, + Add<HConstant>(mask)); } @@ -1508,11 +1662,9 @@ HValue* HGraphBuilder::BuildElementIndexHash(HValue* index) { HValue* HGraphBuilder::BuildUncheckedDictionaryElementLoad(HValue* receiver, - HValue* key) { - HValue* elements = AddLoadElements(receiver); - - HValue* hash = BuildElementIndexHash(key); - + HValue* elements, + HValue* key, + HValue* hash) { HValue* capacity = Add<HLoadKeyed>( elements, Add<HConstant>(NameDictionary::kCapacityIndex), @@ -1523,8 +1675,129 @@ HValue* HGraphBuilder::BuildUncheckedDictionaryElementLoad(HValue* receiver, mask->ChangeRepresentation(Representation::Integer32()); mask->ClearFlag(HValue::kCanOverflow); - return BuildUncheckedDictionaryElementLoadHelper(elements, key, - hash, mask, 0); + HValue* entry = hash; + HValue* count = graph()->GetConstant1(); + Push(entry); + Push(count); + + HIfContinuation return_or_loop_continuation(graph()->CreateBasicBlock(), + graph()->CreateBasicBlock()); + HIfContinuation found_key_match_continuation(graph()->CreateBasicBlock(), + graph()->CreateBasicBlock()); + LoopBuilder probe_loop(this); + probe_loop.BeginBody(2); // Drop entry, count from last environment to + // appease live range building without simulates. + + count = Pop(); + entry = Pop(); + entry = AddUncasted<HBitwise>(Token::BIT_AND, entry, mask); + int entry_size = SeededNumberDictionary::kEntrySize; + HValue* base_index = AddUncasted<HMul>(entry, Add<HConstant>(entry_size)); + base_index->ClearFlag(HValue::kCanOverflow); + int start_offset = SeededNumberDictionary::kElementsStartIndex; + HValue* key_index = + AddUncasted<HAdd>(base_index, Add<HConstant>(start_offset)); + key_index->ClearFlag(HValue::kCanOverflow); + + HValue* candidate_key = Add<HLoadKeyed>( + elements, key_index, static_cast<HValue*>(NULL), FAST_ELEMENTS); + IfBuilder if_undefined(this); + if_undefined.If<HCompareObjectEqAndBranch>(candidate_key, + graph()->GetConstantUndefined()); + if_undefined.Then(); + { + // element == undefined means "not found". Call the runtime. + // TODO(jkummerow): walk the prototype chain instead. + Add<HPushArguments>(receiver, key); + Push(Add<HCallRuntime>(isolate()->factory()->empty_string(), + Runtime::FunctionForId(Runtime::kKeyedGetProperty), + 2)); + } + if_undefined.Else(); + { + IfBuilder if_match(this); + if_match.If<HCompareObjectEqAndBranch>(candidate_key, key); + if_match.Then(); + if_match.Else(); + + // Update non-internalized string in the dictionary with internalized key? 
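+    // Replacing the candidate is only safe when it is a heap object
+    // (non-smi), is itself not internalized, is not the hole sentinel, and
+    // compares equal to |key| as a string; those four conditions are chained
+    // below.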
+ IfBuilder if_update_with_internalized(this); + HValue* smi_check = + if_update_with_internalized.IfNot<HIsSmiAndBranch>(candidate_key); + if_update_with_internalized.And(); + HValue* map = AddLoadMap(candidate_key, smi_check); + HValue* instance_type = Add<HLoadNamedField>( + map, static_cast<HValue*>(NULL), HObjectAccess::ForMapInstanceType()); + HValue* not_internalized_bit = AddUncasted<HBitwise>( + Token::BIT_AND, instance_type, + Add<HConstant>(static_cast<int>(kIsNotInternalizedMask))); + if_update_with_internalized.If<HCompareNumericAndBranch>( + not_internalized_bit, graph()->GetConstant0(), Token::NE); + if_update_with_internalized.And(); + if_update_with_internalized.IfNot<HCompareObjectEqAndBranch>( + candidate_key, graph()->GetConstantHole()); + if_update_with_internalized.AndIf<HStringCompareAndBranch>(candidate_key, + key, Token::EQ); + if_update_with_internalized.Then(); + // Replace a key that is a non-internalized string by the equivalent + // internalized string for faster further lookups. + Add<HStoreKeyed>(elements, key_index, key, FAST_ELEMENTS); + if_update_with_internalized.Else(); + + if_update_with_internalized.JoinContinuation(&found_key_match_continuation); + if_match.JoinContinuation(&found_key_match_continuation); + + IfBuilder found_key_match(this, &found_key_match_continuation); + found_key_match.Then(); + // Key at current probe matches. Relevant bits in the |details| field must + // be zero, otherwise the dictionary element requires special handling. + HValue* details_index = + AddUncasted<HAdd>(base_index, Add<HConstant>(start_offset + 2)); + details_index->ClearFlag(HValue::kCanOverflow); + HValue* details = Add<HLoadKeyed>( + elements, details_index, static_cast<HValue*>(NULL), FAST_ELEMENTS); + int details_mask = PropertyDetails::TypeField::kMask | + PropertyDetails::DeletedField::kMask; + details = AddUncasted<HBitwise>(Token::BIT_AND, details, + Add<HConstant>(details_mask)); + IfBuilder details_compare(this); + details_compare.If<HCompareNumericAndBranch>( + details, graph()->GetConstant0(), Token::EQ); + details_compare.Then(); + HValue* result_index = + AddUncasted<HAdd>(base_index, Add<HConstant>(start_offset + 1)); + result_index->ClearFlag(HValue::kCanOverflow); + Push(Add<HLoadKeyed>(elements, result_index, static_cast<HValue*>(NULL), + FAST_ELEMENTS)); + details_compare.Else(); + Add<HPushArguments>(receiver, key); + Push(Add<HCallRuntime>(isolate()->factory()->empty_string(), + Runtime::FunctionForId(Runtime::kKeyedGetProperty), + 2)); + details_compare.End(); + + found_key_match.Else(); + found_key_match.JoinContinuation(&return_or_loop_continuation); + } + if_undefined.JoinContinuation(&return_or_loop_continuation); + + IfBuilder return_or_loop(this, &return_or_loop_continuation); + return_or_loop.Then(); + probe_loop.Break(); + + return_or_loop.Else(); + entry = AddUncasted<HAdd>(entry, count); + entry->ClearFlag(HValue::kCanOverflow); + count = AddUncasted<HAdd>(count, graph()->GetConstant1()); + count->ClearFlag(HValue::kCanOverflow); + Push(entry); + Push(count); + + probe_loop.EndBody(); + + return_or_loop.End(); + + return Pop(); } @@ -1532,23 +1805,18 @@ HValue* HGraphBuilder::BuildRegExpConstructResult(HValue* length, HValue* index, HValue* input) { NoObservableSideEffectsScope scope(this); + HConstant* max_length = Add<HConstant>(JSObject::kInitialMaxFastElementArray); + Add<HBoundsCheck>(length, max_length); - // Compute the size of the RegExpResult followed by FixedArray with length. 
- HValue* size = length; - size = AddUncasted<HShl>(size, Add<HConstant>(kPointerSizeLog2)); - size = AddUncasted<HAdd>(size, Add<HConstant>(static_cast<int32_t>( - JSRegExpResult::kSize + FixedArray::kHeaderSize))); - - // Make sure size does not exceeds max regular heap object size. - Add<HBoundsCheck>(size, Add<HConstant>(Page::kMaxRegularHeapObjectSize)); + // Generate size calculation code here in order to make it dominate + // the JSRegExpResult allocation. + ElementsKind elements_kind = FAST_ELEMENTS; + HValue* size = BuildCalculateElementsSize(elements_kind, length); // Allocate the JSRegExpResult and the FixedArray in one step. HValue* result = Add<HAllocate>( - size, HType::JSArray(), NOT_TENURED, JS_ARRAY_TYPE); - - // Determine the elements FixedArray. - HValue* elements = Add<HInnerAllocatedObject>( - result, Add<HConstant>(JSRegExpResult::kSize)); + Add<HConstant>(JSRegExpResult::kSize), HType::JSArray(), + NOT_TENURED, JS_ARRAY_TYPE); // Initialize the JSRegExpResult header. HValue* global_object = Add<HLoadNamedField>( @@ -1557,15 +1825,19 @@ HValue* HGraphBuilder::BuildRegExpConstructResult(HValue* length, HValue* native_context = Add<HLoadNamedField>( global_object, static_cast<HValue*>(NULL), HObjectAccess::ForGlobalObjectNativeContext()); - AddStoreMapNoWriteBarrier(result, Add<HLoadNamedField>( + Add<HStoreNamedField>( + result, HObjectAccess::ForMap(), + Add<HLoadNamedField>( native_context, static_cast<HValue*>(NULL), HObjectAccess::ForContextSlot(Context::REGEXP_RESULT_MAP_INDEX))); + HConstant* empty_fixed_array = + Add<HConstant>(isolate()->factory()->empty_fixed_array()); Add<HStoreNamedField>( result, HObjectAccess::ForJSArrayOffset(JSArray::kPropertiesOffset), - Add<HConstant>(isolate()->factory()->empty_fixed_array())); + empty_fixed_array); Add<HStoreNamedField>( result, HObjectAccess::ForJSArrayOffset(JSArray::kElementsOffset), - elements); + empty_fixed_array); Add<HStoreNamedField>( result, HObjectAccess::ForJSArrayOffset(JSArray::kLengthOffset), length); @@ -1577,19 +1849,22 @@ HValue* HGraphBuilder::BuildRegExpConstructResult(HValue* length, result, HObjectAccess::ForJSArrayOffset(JSRegExpResult::kInputOffset), input); - // Initialize the elements header. - AddStoreMapConstantNoWriteBarrier(elements, - isolate()->factory()->fixed_array_map()); - Add<HStoreNamedField>(elements, HObjectAccess::ForFixedArrayLength(), length); + // Allocate and initialize the elements header. + HAllocate* elements = BuildAllocateElements(elements_kind, size); + BuildInitializeElementsHeader(elements, elements_kind, length); + + HConstant* size_in_bytes_upper_bound = EstablishElementsAllocationSize( + elements_kind, max_length->Integer32Value()); + elements->set_size_upper_bound(size_in_bytes_upper_bound); + + Add<HStoreNamedField>( + result, HObjectAccess::ForJSArrayOffset(JSArray::kElementsOffset), + elements); // Initialize the elements contents with undefined. 
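+  // (The result array uses packed FAST_ELEMENTS, where the hole is not a
+  // legal value, so unwritten slots are filled with undefined instead.)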
- LoopBuilder loop(this, context(), LoopBuilder::kPostIncrement); - index = loop.BeginBody(graph()->GetConstant0(), length, Token::LT); - { - Add<HStoreKeyed>(elements, index, graph()->GetConstantUndefined(), - FAST_ELEMENTS); - } - loop.EndBody(); + BuildFillElementsWithValue( + elements, elements_kind, graph()->GetConstant0(), length, + graph()->GetConstantUndefined()); return result; } @@ -1671,26 +1946,32 @@ HValue* HGraphBuilder::BuildNumberToString(HValue* object, Type* type) { static_cast<HValue*>(NULL), FAST_ELEMENTS, ALLOW_RETURN_HOLE); - // Check if key is a heap number (the number string cache contains only - // SMIs and heap number, so it is sufficient to do a SMI check here). + // Check if the key is a heap number and compare it with the object. IfBuilder if_keyisnotsmi(this); HValue* keyisnotsmi = if_keyisnotsmi.IfNot<HIsSmiAndBranch>(key); if_keyisnotsmi.Then(); { - // Check if values of key and object match. - IfBuilder if_keyeqobject(this); - if_keyeqobject.If<HCompareNumericAndBranch>( - Add<HLoadNamedField>(key, keyisnotsmi, - HObjectAccess::ForHeapNumberValue()), - Add<HLoadNamedField>(object, objectisnumber, - HObjectAccess::ForHeapNumberValue()), - Token::EQ); - if_keyeqobject.Then(); + IfBuilder if_keyisheapnumber(this); + if_keyisheapnumber.If<HCompareMap>( + key, isolate()->factory()->heap_number_map()); + if_keyisheapnumber.Then(); { - // Make the key_index available. - Push(key_index); + // Check if values of key and object match. + IfBuilder if_keyeqobject(this); + if_keyeqobject.If<HCompareNumericAndBranch>( + Add<HLoadNamedField>(key, keyisnotsmi, + HObjectAccess::ForHeapNumberValue()), + Add<HLoadNamedField>(object, objectisnumber, + HObjectAccess::ForHeapNumberValue()), + Token::EQ); + if_keyeqobject.Then(); + { + // Make the key_index available. + Push(key_index); + } + if_keyeqobject.JoinContinuation(&found); } - if_keyeqobject.JoinContinuation(&found); + if_keyisheapnumber.JoinContinuation(&found); } if_keyisnotsmi.JoinContinuation(&found); } @@ -1722,10 +2003,10 @@ HValue* HGraphBuilder::BuildNumberToString(HValue* object, Type* type) { if_found.Else(); { // Cache miss, fallback to runtime. - Add<HPushArgument>(object); + Add<HPushArguments>(object); Push(Add<HCallRuntime>( isolate()->factory()->empty_string(), - Runtime::FunctionForId(Runtime::kHiddenNumberToStringSkipCache), + Runtime::FunctionForId(Runtime::kNumberToStringSkipCache), 1)); } if_found.End(); @@ -1785,7 +2066,7 @@ HValue* HGraphBuilder::BuildCreateConsString( // pass CONS_STRING_TYPE or CONS_ASCII_STRING_TYPE here, so we just use // CONS_STRING_TYPE here. Below we decide whether the cons string is // one-byte or two-byte and set the appropriate map. - ASSERT(HAllocate::CompatibleInstanceTypes(CONS_STRING_TYPE, + DCHECK(HAllocate::CompatibleInstanceTypes(CONS_STRING_TYPE, CONS_ASCII_STRING_TYPE)); HAllocate* result = BuildAllocate(Add<HConstant>(ConsString::kSize), HType::String(), CONS_STRING_TYPE, @@ -1829,14 +2110,16 @@ HValue* HGraphBuilder::BuildCreateConsString( if_onebyte.Then(); { // We can safely skip the write barrier for storing the map here. - Handle<Map> map = isolate()->factory()->cons_ascii_string_map(); - AddStoreMapConstantNoWriteBarrier(result, map); + Add<HStoreNamedField>( + result, HObjectAccess::ForMap(), + Add<HConstant>(isolate()->factory()->cons_ascii_string_map())); } if_onebyte.Else(); { // We can safely skip the write barrier for storing the map here. 
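+    // (Both cons-string maps are immortal immovable roots, so this store can
+    // never create a pointer the write barrier would have to record.)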
- Handle<Map> map = isolate()->factory()->cons_string_map(); - AddStoreMapConstantNoWriteBarrier(result, map); + Add<HStoreNamedField>( + result, HObjectAccess::ForMap(), + Add<HConstant>(isolate()->factory()->cons_string_map())); } if_onebyte.End(); @@ -1861,7 +2144,7 @@ void HGraphBuilder::BuildCopySeqStringChars(HValue* src, HValue* dst_offset, String::Encoding dst_encoding, HValue* length) { - ASSERT(dst_encoding != String::ONE_BYTE_ENCODING || + DCHECK(dst_encoding != String::ONE_BYTE_ENCODING || src_encoding == String::ONE_BYTE_ENCODING); LoopBuilder loop(this, context(), LoopBuilder::kPostIncrement); HValue* index = loop.BeginBody(graph()->GetConstant0(), length, Token::LT); @@ -1878,7 +2161,7 @@ void HGraphBuilder::BuildCopySeqStringChars(HValue* src, HValue* HGraphBuilder::BuildObjectSizeAlignment( HValue* unaligned_size, int header_size) { - ASSERT((header_size & kObjectAlignmentMask) == 0); + DCHECK((header_size & kObjectAlignmentMask) == 0); HValue* size = AddUncasted<HAdd>( unaligned_size, Add<HConstant>(static_cast<int32_t>( header_size + kObjectAlignmentMask))); @@ -1903,14 +2186,14 @@ HValue* HGraphBuilder::BuildUncheckedStringAdd( // Do some manual constant folding here. if (left_length->IsConstant()) { HConstant* c_left_length = HConstant::cast(left_length); - ASSERT_NE(0, c_left_length->Integer32Value()); + DCHECK_NE(0, c_left_length->Integer32Value()); if (c_left_length->Integer32Value() + 1 >= ConsString::kMinLength) { // The right string contains at least one character. return BuildCreateConsString(length, left, right, allocation_mode); } } else if (right_length->IsConstant()) { HConstant* c_right_length = HConstant::cast(right_length); - ASSERT_NE(0, c_right_length->Integer32Value()); + DCHECK_NE(0, c_right_length->Integer32Value()); if (c_right_length->Integer32Value() + 1 >= ConsString::kMinLength) { // The left string contains at least one character. return BuildCreateConsString(length, left, right, allocation_mode); @@ -1995,9 +2278,7 @@ HValue* HGraphBuilder::BuildUncheckedStringAdd( // STRING_TYPE or ASCII_STRING_TYPE here, so we just use STRING_TYPE here. HAllocate* result = BuildAllocate( size, HType::String(), STRING_TYPE, allocation_mode); - - // We can safely skip the write barrier for storing map here. - AddStoreMapNoWriteBarrier(result, map); + Add<HStoreNamedField>(result, HObjectAccess::ForMap(), map); // Initialize the string fields. Add<HStoreNamedField>(result, HObjectAccess::ForStringHashField(), @@ -2046,11 +2327,10 @@ HValue* HGraphBuilder::BuildUncheckedStringAdd( if_sameencodingandsequential.Else(); { // Fallback to the runtime to add the two strings. 
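+    // (Reached when the two strings differ in encoding or are not both
+    // sequential; Runtime::kStringAdd copes with every remaining case.)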
- Add<HPushArgument>(left); - Add<HPushArgument>(right); + Add<HPushArguments>(left, right); Push(Add<HCallRuntime>( isolate()->factory()->empty_string(), - Runtime::FunctionForId(Runtime::kHiddenStringAdd), + Runtime::FunctionForId(Runtime::kStringAdd), 2)); } if_sameencodingandsequential.End(); @@ -2119,7 +2399,7 @@ HInstruction* HGraphBuilder::BuildUncheckedMonomorphicElementAccess( PropertyAccessType access_type, LoadKeyedHoleMode load_mode, KeyedAccessStoreMode store_mode) { - ASSERT((!IsExternalArrayElementsKind(elements_kind) && + DCHECK((!IsExternalArrayElementsKind(elements_kind) && !IsFixedTypedArrayElementsKind(elements_kind)) || !is_js_array); // No GVNFlag is necessary for ElementsKind if there is an explicit dependency @@ -2178,14 +2458,14 @@ HInstruction* HGraphBuilder::BuildUncheckedMonomorphicElementAccess( length_checker.End(); return result; } else { - ASSERT(store_mode == STANDARD_STORE); + DCHECK(store_mode == STANDARD_STORE); checked_key = Add<HBoundsCheck>(key, length); return AddElementAccess( backing_store, checked_key, val, checked_object, elements_kind, access_type); } } - ASSERT(fast_smi_only_elements || + DCHECK(fast_smi_only_elements || fast_elements || IsFastDoubleElementsKind(elements_kind)); @@ -2226,17 +2506,19 @@ HInstruction* HGraphBuilder::BuildUncheckedMonomorphicElementAccess( } - HValue* HGraphBuilder::BuildAllocateArrayFromLength( JSArrayBuilder* array_builder, HValue* length_argument) { if (length_argument->IsConstant() && HConstant::cast(length_argument)->HasSmiValue()) { int array_length = HConstant::cast(length_argument)->Integer32Value(); - HValue* new_object = array_length == 0 - ? array_builder->AllocateEmptyArray() - : array_builder->AllocateArray(length_argument, length_argument); - return new_object; + if (array_length == 0) { + return array_builder->AllocateEmptyArray(); + } else { + return array_builder->AllocateArray(length_argument, + array_length, + length_argument); + } } HValue* constant_zero = graph()->GetConstant0(); @@ -2266,35 +2548,62 @@ HValue* HGraphBuilder::BuildAllocateArrayFromLength( // Figure out total size HValue* length = Pop(); HValue* capacity = Pop(); - return array_builder->AllocateArray(capacity, length); + return array_builder->AllocateArray(capacity, max_alloc_length, length); } -HValue* HGraphBuilder::BuildAllocateElements(ElementsKind kind, - HValue* capacity) { - int elements_size; - InstanceType instance_type; - if (IsFastDoubleElementsKind(kind)) { - elements_size = kDoubleSize; - instance_type = FIXED_DOUBLE_ARRAY_TYPE; - } else { - elements_size = kPointerSize; - instance_type = FIXED_ARRAY_TYPE; - } +HValue* HGraphBuilder::BuildCalculateElementsSize(ElementsKind kind, + HValue* capacity) { + int elements_size = IsFastDoubleElementsKind(kind) + ? 
kDoubleSize + : kPointerSize; HConstant* elements_size_value = Add<HConstant>(elements_size); - HValue* mul = AddUncasted<HMul>(capacity, elements_size_value); + HInstruction* mul = HMul::NewImul(zone(), context(), + capacity->ActualValue(), + elements_size_value); + AddInstruction(mul); mul->ClearFlag(HValue::kCanOverflow); + STATIC_ASSERT(FixedDoubleArray::kHeaderSize == FixedArray::kHeaderSize); + HConstant* header_size = Add<HConstant>(FixedArray::kHeaderSize); HValue* total_size = AddUncasted<HAdd>(mul, header_size); total_size->ClearFlag(HValue::kCanOverflow); + return total_size; +} + + +HAllocate* HGraphBuilder::AllocateJSArrayObject(AllocationSiteMode mode) { + int base_size = JSArray::kSize; + if (mode == TRACK_ALLOCATION_SITE) { + base_size += AllocationMemento::kSize; + } + HConstant* size_in_bytes = Add<HConstant>(base_size); + return Add<HAllocate>( + size_in_bytes, HType::JSArray(), NOT_TENURED, JS_OBJECT_TYPE); +} - PretenureFlag pretenure_flag = !FLAG_allocation_site_pretenuring ? - isolate()->heap()->GetPretenureMode() : NOT_TENURED; - return Add<HAllocate>(total_size, HType::Tagged(), pretenure_flag, - instance_type); +HConstant* HGraphBuilder::EstablishElementsAllocationSize( + ElementsKind kind, + int capacity) { + int base_size = IsFastDoubleElementsKind(kind) + ? FixedDoubleArray::SizeFor(capacity) + : FixedArray::SizeFor(capacity); + + return Add<HConstant>(base_size); +} + + +HAllocate* HGraphBuilder::BuildAllocateElements(ElementsKind kind, + HValue* size_in_bytes) { + InstanceType instance_type = IsFastDoubleElementsKind(kind) + ? FIXED_DOUBLE_ARRAY_TYPE + : FIXED_ARRAY_TYPE; + + return Add<HAllocate>(size_in_bytes, HType::HeapObject(), NOT_TENURED, + instance_type); } @@ -2306,7 +2615,7 @@ void HGraphBuilder::BuildInitializeElementsHeader(HValue* elements, ? factory->fixed_double_array_map() : factory->fixed_array_map(); - AddStoreMapConstant(elements, map); + Add<HStoreNamedField>(elements, HObjectAccess::ForMap(), Add<HConstant>(map)); Add<HStoreNamedField>(elements, HObjectAccess::ForFixedArrayLength(), capacity); } @@ -2318,43 +2627,39 @@ HValue* HGraphBuilder::BuildAllocateElementsAndInitializeElementsHeader( // The HForceRepresentation is to prevent possible deopt on int-smi // conversion after allocation but before the new object fields are set. 
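+  // Forcing the Smi representation up front means any int32-to-smi deopt
+  // fires before the allocation below, so a GC can never observe an elements
+  // object whose header fields are still unwritten.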
capacity = AddUncasted<HForceRepresentation>(capacity, Representation::Smi()); - HValue* new_elements = BuildAllocateElements(kind, capacity); + HValue* size_in_bytes = BuildCalculateElementsSize(kind, capacity); + HValue* new_elements = BuildAllocateElements(kind, size_in_bytes); BuildInitializeElementsHeader(new_elements, kind, capacity); return new_elements; } -HInnerAllocatedObject* HGraphBuilder::BuildJSArrayHeader(HValue* array, - HValue* array_map, - AllocationSiteMode mode, - ElementsKind elements_kind, - HValue* allocation_site_payload, - HValue* length_field) { - +void HGraphBuilder::BuildJSArrayHeader(HValue* array, + HValue* array_map, + HValue* elements, + AllocationSiteMode mode, + ElementsKind elements_kind, + HValue* allocation_site_payload, + HValue* length_field) { Add<HStoreNamedField>(array, HObjectAccess::ForMap(), array_map); HConstant* empty_fixed_array = Add<HConstant>(isolate()->factory()->empty_fixed_array()); - HObjectAccess access = HObjectAccess::ForPropertiesPointer(); - Add<HStoreNamedField>(array, access, empty_fixed_array); - Add<HStoreNamedField>(array, HObjectAccess::ForArrayLength(elements_kind), - length_field); + Add<HStoreNamedField>( + array, HObjectAccess::ForPropertiesPointer(), empty_fixed_array); + + Add<HStoreNamedField>( + array, HObjectAccess::ForElementsPointer(), + elements != NULL ? elements : empty_fixed_array); + + Add<HStoreNamedField>( + array, HObjectAccess::ForArrayLength(elements_kind), length_field); if (mode == TRACK_ALLOCATION_SITE) { BuildCreateAllocationMemento( array, Add<HConstant>(JSArray::kSize), allocation_site_payload); } - - int elements_location = JSArray::kSize; - if (mode == TRACK_ALLOCATION_SITE) { - elements_location += AllocationMemento::kSize; - } - - HInnerAllocatedObject* elements = Add<HInnerAllocatedObject>( - array, Add<HConstant>(elements_location)); - Add<HStoreNamedField>(array, HObjectAccess::ForElementsPointer(), elements); - return elements; } @@ -2367,7 +2672,7 @@ HInstruction* HGraphBuilder::AddElementAccess( PropertyAccessType access_type, LoadKeyedHoleMode load_mode) { if (access_type == STORE) { - ASSERT(val != NULL); + DCHECK(val != NULL); if (elements_kind == EXTERNAL_UINT8_CLAMPED_ELEMENTS || elements_kind == UINT8_CLAMPED_ELEMENTS) { val = Add<HClampToUint8>(val); @@ -2376,8 +2681,8 @@ HInstruction* HGraphBuilder::AddElementAccess( STORE_TO_INITIALIZED_ENTRY); } - ASSERT(access_type == LOAD); - ASSERT(val == NULL); + DCHECK(access_type == LOAD); + DCHECK(val == NULL); HLoadKeyed* load = Add<HLoadKeyed>( elements, checked_key, dependency, elements_kind, load_mode); if (FLAG_opt_safe_uint32_operations && @@ -2389,15 +2694,32 @@ HInstruction* HGraphBuilder::AddElementAccess( } -HLoadNamedField* HGraphBuilder::AddLoadElements(HValue* object) { +HLoadNamedField* HGraphBuilder::AddLoadMap(HValue* object, + HValue* dependency) { + return Add<HLoadNamedField>(object, dependency, HObjectAccess::ForMap()); +} + + +HLoadNamedField* HGraphBuilder::AddLoadElements(HValue* object, + HValue* dependency) { return Add<HLoadNamedField>( - object, static_cast<HValue*>(NULL), HObjectAccess::ForElementsPointer()); + object, dependency, HObjectAccess::ForElementsPointer()); } -HLoadNamedField* HGraphBuilder::AddLoadFixedArrayLength(HValue* object) { +HLoadNamedField* HGraphBuilder::AddLoadFixedArrayLength( + HValue* array, + HValue* dependency) { return Add<HLoadNamedField>( - object, static_cast<HValue*>(NULL), HObjectAccess::ForFixedArrayLength()); + array, dependency, HObjectAccess::ForFixedArrayLength()); +} + + 
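+// The |dependency| parameter on these load helpers names a value (typically
+// a preceding smi or map check) that the load is control-dependent on; it
+// keeps optimizations such as GVN from moving the load above that check.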
+HLoadNamedField* HGraphBuilder::AddLoadArrayLength(HValue* array,
+                                                   ElementsKind kind,
+                                                   HValue* dependency) {
+  return Add<HLoadNamedField>(
+      array, dependency, HObjectAccess::ForArrayLength(kind));
 }
 
 
@@ -2417,30 +2739,21 @@ HValue* HGraphBuilder::BuildNewElementsCapacity(HValue* old_capacity) {
 }
 
 
-void HGraphBuilder::BuildNewSpaceArrayCheck(HValue* length, ElementsKind kind) {
-  int element_size = IsFastDoubleElementsKind(kind) ? kDoubleSize
-                                                    : kPointerSize;
-  int max_size = Page::kMaxRegularHeapObjectSize / element_size;
-  max_size -= JSArray::kSize / element_size;
-  HConstant* max_size_constant = Add<HConstant>(max_size);
-  Add<HBoundsCheck>(length, max_size_constant);
-}
-
-
 HValue* HGraphBuilder::BuildGrowElementsCapacity(HValue* object,
                                                  HValue* elements,
                                                  ElementsKind kind,
                                                  ElementsKind new_kind,
                                                  HValue* length,
                                                  HValue* new_capacity) {
-  BuildNewSpaceArrayCheck(new_capacity, new_kind);
+  Add<HBoundsCheck>(new_capacity, Add<HConstant>(
+          (Page::kMaxRegularHeapObjectSize - FixedArray::kHeaderSize) >>
+          ElementsKindToShiftSize(new_kind)));
 
   HValue* new_elements = BuildAllocateElementsAndInitializeElementsHeader(
       new_kind, new_capacity);
 
-  BuildCopyElements(elements, kind,
-                    new_elements, new_kind,
-                    length, new_capacity);
+  BuildCopyElements(elements, kind, new_elements,
+                    new_kind, length, new_capacity);
 
   Add<HStoreNamedField>(object, HObjectAccess::ForElementsPointer(),
                         new_elements);
@@ -2449,28 +2762,24 @@ HValue* HGraphBuilder::BuildGrowElementsCapacity(HValue* object,
 }
 
 
-void HGraphBuilder::BuildFillElementsWithHole(HValue* elements,
-                                              ElementsKind elements_kind,
-                                              HValue* from,
-                                              HValue* to) {
-  // Fast elements kinds need to be initialized in case statements below cause
-  // a garbage collection.
-  Factory* factory = isolate()->factory();
-
-  double nan_double = FixedDoubleArray::hole_nan_as_double();
-  HValue* hole = IsFastSmiOrObjectElementsKind(elements_kind)
-      ? Add<HConstant>(factory->the_hole_value())
-      : Add<HConstant>(nan_double);
+void HGraphBuilder::BuildFillElementsWithValue(HValue* elements,
+                                               ElementsKind elements_kind,
+                                               HValue* from,
+                                               HValue* to,
+                                               HValue* value) {
+  if (to == NULL) {
+    to = AddLoadFixedArrayLength(elements);
+  }
 
   // Special loop unfolding case
-  static const int kLoopUnfoldLimit = 8;
-  STATIC_ASSERT(JSArray::kPreallocatedArrayElements <= kLoopUnfoldLimit);
+  STATIC_ASSERT(JSArray::kPreallocatedArrayElements <=
+                kElementLoopUnrollThreshold);
   int initial_capacity = -1;
   if (from->IsInteger32Constant() && to->IsInteger32Constant()) {
    int constant_from = from->GetInteger32Constant();
    int constant_to = to->GetInteger32Constant();

-    if (constant_from == 0 && constant_to <= kLoopUnfoldLimit) {
+    if (constant_from == 0 && constant_to <= kElementLoopUnrollThreshold) {
      initial_capacity = constant_to;
    }
  }
@@ -2484,159 +2793,225 @@ void HGraphBuilder::BuildFillElementsWithHole(HValue* elements,
   if (initial_capacity >= 0) {
     for (int i = 0; i < initial_capacity; i++) {
       HInstruction* key = Add<HConstant>(i);
-      Add<HStoreKeyed>(elements, key, hole, elements_kind);
+      Add<HStoreKeyed>(elements, key, value, elements_kind);
     }
   } else {
-    LoopBuilder builder(this, context(), LoopBuilder::kPostIncrement);
+    // Carefully loop backwards so that the "from" remains live through the
+    // loop rather than the "to". This often corresponds to keeping length
+    // live rather than capacity, which helps register allocation, since
+    // length is used more often than capacity after filling with holes.
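+    // The loop below runs |key| from |to| down while key > |from| and stores
+    // at key - 1, so it fills exactly the half-open range [from, to).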
+    LoopBuilder builder(this, context(), LoopBuilder::kPostDecrement);
+
+    HValue* key = builder.BeginBody(to, from, Token::GT);
-    HValue* key = builder.BeginBody(from, to, Token::LT);
+    HValue* adjusted_key = AddUncasted<HSub>(key, graph()->GetConstant1());
+    adjusted_key->ClearFlag(HValue::kCanOverflow);
-    Add<HStoreKeyed>(elements, key, hole, elements_kind);
+    Add<HStoreKeyed>(elements, adjusted_key, value, elements_kind);
     builder.EndBody();
   }
 }
 
 
+void HGraphBuilder::BuildFillElementsWithHole(HValue* elements,
+                                              ElementsKind elements_kind,
+                                              HValue* from,
+                                              HValue* to) {
+  // Fast elements kinds need to be initialized in case the statements below
+  // trigger a garbage collection.
+  Factory* factory = isolate()->factory();
+
+  double nan_double = FixedDoubleArray::hole_nan_as_double();
+  HValue* hole = IsFastSmiOrObjectElementsKind(elements_kind)
+      ? Add<HConstant>(factory->the_hole_value())
+      : Add<HConstant>(nan_double);
+
+  BuildFillElementsWithValue(elements, elements_kind, from, to, hole);
+}
+
+
 void HGraphBuilder::BuildCopyElements(HValue* from_elements,
                                       ElementsKind from_elements_kind,
                                       HValue* to_elements,
                                       ElementsKind to_elements_kind,
                                       HValue* length,
                                       HValue* capacity) {
-  bool pre_fill_with_holes =
-      IsFastDoubleElementsKind(from_elements_kind) &&
-      IsFastObjectElementsKind(to_elements_kind);
+  int constant_capacity = -1;
+  if (capacity != NULL &&
+      capacity->IsConstant() &&
+      HConstant::cast(capacity)->HasInteger32Value()) {
+    int constant_candidate = HConstant::cast(capacity)->Integer32Value();
+    if (constant_candidate <= kElementLoopUnrollThreshold) {
+      constant_capacity = constant_candidate;
+    }
+  }
 
+  bool pre_fill_with_holes =
+      IsFastDoubleElementsKind(from_elements_kind) &&
+      IsFastObjectElementsKind(to_elements_kind);
  if (pre_fill_with_holes) {
    // If the copy might trigger a GC, make sure that the FixedArray is
-    // pre-initialized with holes to make sure that it's always in a consistent
-    // state.
+    // pre-initialized with holes to make sure that it's always in a
+    // consistent state.
    BuildFillElementsWithHole(to_elements, to_elements_kind,
-                              graph()->GetConstant0(), capacity);
+                              graph()->GetConstant0(), NULL);
  }

-  LoopBuilder builder(this, context(), LoopBuilder::kPostIncrement);
+  if (constant_capacity != -1) {
+    // Unroll the copy loop when the capacity is a small constant.
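+    // E.g. a copy with constant capacity 2 becomes two straight-line
+    // load/store pairs for keys 0 and 1, with no loop control at all.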
+ for (int i = 0; i < constant_capacity; i++) { + HValue* key_constant = Add<HConstant>(i); + HInstruction* value = Add<HLoadKeyed>(from_elements, key_constant, + static_cast<HValue*>(NULL), + from_elements_kind); + Add<HStoreKeyed>(to_elements, key_constant, value, to_elements_kind); + } + } else { + if (!pre_fill_with_holes && + (capacity == NULL || !length->Equals(capacity))) { + BuildFillElementsWithHole(to_elements, to_elements_kind, + length, NULL); + } + + if (capacity == NULL) { + capacity = AddLoadFixedArrayLength(to_elements); + } + + LoopBuilder builder(this, context(), LoopBuilder::kPostDecrement); - HValue* key = builder.BeginBody(graph()->GetConstant0(), length, Token::LT); + HValue* key = builder.BeginBody(length, graph()->GetConstant0(), + Token::GT); - HValue* element = Add<HLoadKeyed>(from_elements, key, - static_cast<HValue*>(NULL), - from_elements_kind, - ALLOW_RETURN_HOLE); + key = AddUncasted<HSub>(key, graph()->GetConstant1()); + key->ClearFlag(HValue::kCanOverflow); - ElementsKind kind = (IsHoleyElementsKind(from_elements_kind) && - IsFastSmiElementsKind(to_elements_kind)) + HValue* element = Add<HLoadKeyed>(from_elements, key, + static_cast<HValue*>(NULL), + from_elements_kind, + ALLOW_RETURN_HOLE); + + ElementsKind kind = (IsHoleyElementsKind(from_elements_kind) && + IsFastSmiElementsKind(to_elements_kind)) ? FAST_HOLEY_ELEMENTS : to_elements_kind; - if (IsHoleyElementsKind(from_elements_kind) && - from_elements_kind != to_elements_kind) { - IfBuilder if_hole(this); - if_hole.If<HCompareHoleAndBranch>(element); - if_hole.Then(); - HConstant* hole_constant = IsFastDoubleElementsKind(to_elements_kind) + if (IsHoleyElementsKind(from_elements_kind) && + from_elements_kind != to_elements_kind) { + IfBuilder if_hole(this); + if_hole.If<HCompareHoleAndBranch>(element); + if_hole.Then(); + HConstant* hole_constant = IsFastDoubleElementsKind(to_elements_kind) ? Add<HConstant>(FixedDoubleArray::hole_nan_as_double()) : graph()->GetConstantHole(); - Add<HStoreKeyed>(to_elements, key, hole_constant, kind); - if_hole.Else(); - HStoreKeyed* store = Add<HStoreKeyed>(to_elements, key, element, kind); - store->SetFlag(HValue::kAllowUndefinedAsNaN); - if_hole.End(); - } else { - HStoreKeyed* store = Add<HStoreKeyed>(to_elements, key, element, kind); - store->SetFlag(HValue::kAllowUndefinedAsNaN); + Add<HStoreKeyed>(to_elements, key, hole_constant, kind); + if_hole.Else(); + HStoreKeyed* store = Add<HStoreKeyed>(to_elements, key, element, kind); + store->SetFlag(HValue::kAllowUndefinedAsNaN); + if_hole.End(); + } else { + HStoreKeyed* store = Add<HStoreKeyed>(to_elements, key, element, kind); + store->SetFlag(HValue::kAllowUndefinedAsNaN); + } + + builder.EndBody(); } - builder.EndBody(); + Counters* counters = isolate()->counters(); + AddIncrementCounter(counters->inlined_copied_elements()); +} - if (!pre_fill_with_holes && length != capacity) { - // Fill unused capacity with the hole. 
- BuildFillElementsWithHole(to_elements, to_elements_kind, - key, capacity); - } + +HValue* HGraphBuilder::BuildCloneShallowArrayCow(HValue* boilerplate, + HValue* allocation_site, + AllocationSiteMode mode, + ElementsKind kind) { + HAllocate* array = AllocateJSArrayObject(mode); + + HValue* map = AddLoadMap(boilerplate); + HValue* elements = AddLoadElements(boilerplate); + HValue* length = AddLoadArrayLength(boilerplate, kind); + + BuildJSArrayHeader(array, + map, + elements, + mode, + FAST_ELEMENTS, + allocation_site, + length); + return array; } -HValue* HGraphBuilder::BuildCloneShallowArray(HValue* boilerplate, - HValue* allocation_site, - AllocationSiteMode mode, - ElementsKind kind, - int length) { - NoObservableSideEffectsScope no_effects(this); +HValue* HGraphBuilder::BuildCloneShallowArrayEmpty(HValue* boilerplate, + HValue* allocation_site, + AllocationSiteMode mode) { + HAllocate* array = AllocateJSArrayObject(mode); - // All sizes here are multiples of kPointerSize. - int size = JSArray::kSize; - if (mode == TRACK_ALLOCATION_SITE) { - size += AllocationMemento::kSize; - } + HValue* map = AddLoadMap(boilerplate); - HValue* size_in_bytes = Add<HConstant>(size); - HInstruction* object = Add<HAllocate>(size_in_bytes, - HType::JSObject(), - NOT_TENURED, - JS_OBJECT_TYPE); + BuildJSArrayHeader(array, + map, + NULL, // set elements to empty fixed array + mode, + FAST_ELEMENTS, + allocation_site, + graph()->GetConstant0()); + return array; +} - // Copy the JS array part. - for (int i = 0; i < JSArray::kSize; i += kPointerSize) { - if ((i != JSArray::kElementsOffset) || (length == 0)) { - HObjectAccess access = HObjectAccess::ForJSArrayOffset(i); - Add<HStoreNamedField>( - object, access, Add<HLoadNamedField>( - boilerplate, static_cast<HValue*>(NULL), access)); - } - } - // Create an allocation site info if requested. - if (mode == TRACK_ALLOCATION_SITE) { - BuildCreateAllocationMemento( - object, Add<HConstant>(JSArray::kSize), allocation_site); - } +HValue* HGraphBuilder::BuildCloneShallowArrayNonEmpty(HValue* boilerplate, + HValue* allocation_site, + AllocationSiteMode mode, + ElementsKind kind) { + HValue* boilerplate_elements = AddLoadElements(boilerplate); + HValue* capacity = AddLoadFixedArrayLength(boilerplate_elements); - if (length > 0) { - // We have to initialize the elements pointer if allocation folding is - // turned off. - if (!FLAG_use_gvn || !FLAG_use_allocation_folding) { - HConstant* empty_fixed_array = Add<HConstant>( - isolate()->factory()->empty_fixed_array()); - Add<HStoreNamedField>(object, HObjectAccess::ForElementsPointer(), - empty_fixed_array, INITIALIZING_STORE); - } + // Generate size calculation code here in order to make it dominate + // the JSArray allocation. + HValue* elements_size = BuildCalculateElementsSize(kind, capacity); - HValue* boilerplate_elements = AddLoadElements(boilerplate); - HValue* object_elements; - if (IsFastDoubleElementsKind(kind)) { - HValue* elems_size = Add<HConstant>(FixedDoubleArray::SizeFor(length)); - object_elements = Add<HAllocate>(elems_size, HType::Tagged(), - NOT_TENURED, FIXED_DOUBLE_ARRAY_TYPE); - } else { - HValue* elems_size = Add<HConstant>(FixedArray::SizeFor(length)); - object_elements = Add<HAllocate>(elems_size, HType::Tagged(), - NOT_TENURED, FIXED_ARRAY_TYPE); - } - Add<HStoreNamedField>(object, HObjectAccess::ForElementsPointer(), - object_elements); - - // Copy the elements array header. 
-    for (int i = 0; i < FixedArrayBase::kHeaderSize; i += kPointerSize) {
-      HObjectAccess access = HObjectAccess::ForFixedArrayHeader(i);
-      Add<HStoreNamedField>(
-          object_elements, access, Add<HLoadNamedField>(
-              boilerplate_elements, static_cast<HValue*>(NULL), access));
-    }
-
-    // Copy the elements array contents.
-    // TODO(mstarzinger): Teach HGraphBuilder::BuildCopyElements to unfold
-    // copying loops with constant length up to a given boundary and use this
-    // helper here instead.
-    for (int i = 0; i < length; i++) {
-      HValue* key_constant = Add<HConstant>(i);
-      HInstruction* value = Add<HLoadKeyed>(boilerplate_elements, key_constant,
-                                            static_cast<HValue*>(NULL), kind);
-      Add<HStoreKeyed>(object_elements, key_constant, value, kind);
-    }
+  // Create an empty JSArray object for now; store elimination should remove
+  // redundant initialization of the elements and length fields, and at the
+  // same time the object will be fully prepared for GC if one happens during
+  // elements allocation.
+  HValue* result = BuildCloneShallowArrayEmpty(
+      boilerplate, allocation_site, mode);
+
+  HAllocate* elements = BuildAllocateElements(kind, elements_size);
+
+  // This function implicitly relies on the fact that the
+  // FastCloneShallowArrayStub is called only for literals shorter than
+  // JSObject::kInitialMaxFastElementArray.
+  // Can't add an HBoundsCheck here, because it would force the stub to
+  // create a stack frame eagerly.
+  HConstant* size_upper_bound = EstablishElementsAllocationSize(
+      kind, JSObject::kInitialMaxFastElementArray);
+  elements->set_size_upper_bound(size_upper_bound);
+
+  Add<HStoreNamedField>(result, HObjectAccess::ForElementsPointer(), elements);
+
+  // The allocation for the cloned array above causes register pressure on
+  // machines with low register counts. Force a reload of the boilerplate
+  // elements here to free up a register for the allocation and avoid
+  // unnecessary spilling.
+  boilerplate_elements = AddLoadElements(boilerplate);
+  boilerplate_elements->SetFlag(HValue::kCantBeReplaced);
+
+  // Copy the elements array header.
+  for (int i = 0; i < FixedArrayBase::kHeaderSize; i += kPointerSize) {
+    HObjectAccess access = HObjectAccess::ForFixedArrayHeader(i);
+    Add<HStoreNamedField>(elements, access,
+                          Add<HLoadNamedField>(boilerplate_elements,
+                              static_cast<HValue*>(NULL), access));
   }
 
-  return object;
+  // Finally, copy the boilerplate's length into the result array.
+  HValue* length = AddLoadArrayLength(boilerplate, kind);
+  Add<HStoreNamedField>(result, HObjectAccess::ForArrayLength(kind), length);
+
+  BuildCopyElements(boilerplate_elements, kind, elements,
+                    kind, length, NULL);
+  return result;
 }
 
 
@@ -2696,9 +3071,9 @@ void HGraphBuilder::BuildCreateAllocationMemento(
     HValue* previous_object,
     HValue* previous_object_size,
     HValue* allocation_site) {
-  ASSERT(allocation_site != NULL);
+  DCHECK(allocation_site != NULL);
   HInnerAllocatedObject* allocation_memento = Add<HInnerAllocatedObject>(
-      previous_object, previous_object_size);
+      previous_object, previous_object_size, HType::HeapObject());
   AddStoreMapConstant(
       allocation_memento, isolate()->factory()->allocation_memento_map());
   Add<HStoreNamedField>(
@@ -2715,11 +3090,9 @@ void HGraphBuilder::BuildCreateAllocationMemento(
     // This smi value is reset to zero after every gc, overflow isn't a problem
     // since the counter is bounded by the new space size.
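+    // (The counter is a smi; smi stores need no write barrier, so the plain
+    // HStoreNamedField below is sufficient.)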
memento_create_count->ClearFlag(HValue::kCanOverflow); - HStoreNamedField* store = Add<HStoreNamedField>( + Add<HStoreNamedField>( allocation_site, HObjectAccess::ForAllocationSiteOffset( AllocationSite::kPretenureCreateCountOffset), memento_create_count); - // No write barrier needed to store a smi. - store->SkipWriteBarrier(); } } @@ -2769,7 +3142,7 @@ HGraphBuilder::JSArrayBuilder::JSArrayBuilder(HGraphBuilder* builder, kind_(kind), allocation_site_payload_(allocation_site_payload), constructor_function_(constructor_function) { - ASSERT(!allocation_site_payload->IsConstant() || + DCHECK(!allocation_site_payload->IsConstant() || HConstant::cast(allocation_site_payload)->handle( builder_->isolate())->IsAllocationSite()); mode_ = override_mode == DISABLE_ALLOCATION_SITES @@ -2832,67 +3205,47 @@ HValue* HGraphBuilder::JSArrayBuilder::EmitInternalMapCode() { } -HValue* HGraphBuilder::JSArrayBuilder::EstablishAllocationSize( - HValue* length_node) { - ASSERT(length_node != NULL); - - int base_size = JSArray::kSize; - if (mode_ == TRACK_ALLOCATION_SITE) { - base_size += AllocationMemento::kSize; - } - - STATIC_ASSERT(FixedDoubleArray::kHeaderSize == FixedArray::kHeaderSize); - base_size += FixedArray::kHeaderSize; - - HInstruction* elements_size_value = - builder()->Add<HConstant>(elements_size()); - HInstruction* mul = HMul::NewImul(builder()->zone(), builder()->context(), - length_node, elements_size_value); - builder()->AddInstruction(mul); - HInstruction* base = builder()->Add<HConstant>(base_size); - HInstruction* total_size = HAdd::New(builder()->zone(), builder()->context(), - base, mul); - total_size->ClearFlag(HValue::kCanOverflow); - builder()->AddInstruction(total_size); - return total_size; +HAllocate* HGraphBuilder::JSArrayBuilder::AllocateEmptyArray() { + HConstant* capacity = builder()->Add<HConstant>(initial_capacity()); + return AllocateArray(capacity, + capacity, + builder()->graph()->GetConstant0()); } -HValue* HGraphBuilder::JSArrayBuilder::EstablishEmptyArrayAllocationSize() { - int base_size = JSArray::kSize; - if (mode_ == TRACK_ALLOCATION_SITE) { - base_size += AllocationMemento::kSize; - } - - base_size += IsFastDoubleElementsKind(kind_) - ? FixedDoubleArray::SizeFor(initial_capacity()) - : FixedArray::SizeFor(initial_capacity()); - - return builder()->Add<HConstant>(base_size); +HAllocate* HGraphBuilder::JSArrayBuilder::AllocateArray( + HValue* capacity, + HConstant* capacity_upper_bound, + HValue* length_field, + FillMode fill_mode) { + return AllocateArray(capacity, + capacity_upper_bound->GetInteger32Constant(), + length_field, + fill_mode); } -HValue* HGraphBuilder::JSArrayBuilder::AllocateEmptyArray() { - HValue* size_in_bytes = EstablishEmptyArrayAllocationSize(); - HConstant* capacity = builder()->Add<HConstant>(initial_capacity()); - return AllocateArray(size_in_bytes, - capacity, - builder()->graph()->GetConstant0()); -} - +HAllocate* HGraphBuilder::JSArrayBuilder::AllocateArray( + HValue* capacity, + int capacity_upper_bound, + HValue* length_field, + FillMode fill_mode) { + HConstant* elememts_size_upper_bound = capacity->IsInteger32Constant() + ? 
HConstant::cast(capacity) + : builder()->EstablishElementsAllocationSize(kind_, capacity_upper_bound); -HValue* HGraphBuilder::JSArrayBuilder::AllocateArray(HValue* capacity, - HValue* length_field, - FillMode fill_mode) { - HValue* size_in_bytes = EstablishAllocationSize(capacity); - return AllocateArray(size_in_bytes, capacity, length_field, fill_mode); + HAllocate* array = AllocateArray(capacity, length_field, fill_mode); + if (!elements_location_->has_size_upper_bound()) { + elements_location_->set_size_upper_bound(elememts_size_upper_bound); + } + return array; } -HValue* HGraphBuilder::JSArrayBuilder::AllocateArray(HValue* size_in_bytes, - HValue* capacity, - HValue* length_field, - FillMode fill_mode) { +HAllocate* HGraphBuilder::JSArrayBuilder::AllocateArray( + HValue* capacity, + HValue* length_field, + FillMode fill_mode) { // These HForceRepresentations are because we store these as fields in the // objects we construct, and an int32-to-smi HChange could deopt. Accept // the deopt possibility now, before allocation occurs. @@ -2902,14 +3255,14 @@ HValue* HGraphBuilder::JSArrayBuilder::AllocateArray(HValue* size_in_bytes, length_field = builder()->AddUncasted<HForceRepresentation>(length_field, Representation::Smi()); - // Allocate (dealing with failure appropriately) - HAllocate* new_object = builder()->Add<HAllocate>(size_in_bytes, - HType::JSArray(), NOT_TENURED, JS_ARRAY_TYPE); - // Folded array allocation should be aligned if it has fast double elements. - if (IsFastDoubleElementsKind(kind_)) { - new_object->MakeDoubleAligned(); - } + // Generate size calculation code here in order to make it dominate + // the JSArray allocation. + HValue* elements_size = + builder()->BuildCalculateElementsSize(kind_, capacity); + + // Allocate (dealing with failure appropriately) + HAllocate* array_object = builder()->AllocateJSArrayObject(mode_); // Fill in the fields: map, properties, length HValue* map; @@ -2918,29 +3271,30 @@ HValue* HGraphBuilder::JSArrayBuilder::AllocateArray(HValue* size_in_bytes, } else { map = EmitMapCode(); } - elements_location_ = builder()->BuildJSArrayHeader(new_object, - map, - mode_, - kind_, - allocation_site_payload_, - length_field); - // Initialize the elements + builder()->BuildJSArrayHeader(array_object, + map, + NULL, // set elements to empty fixed array + mode_, + kind_, + allocation_site_payload_, + length_field); + + // Allocate and initialize the elements + elements_location_ = builder()->BuildAllocateElements(kind_, elements_size); + builder()->BuildInitializeElementsHeader(elements_location_, kind_, capacity); + // Set the elements + builder()->Add<HStoreNamedField>( + array_object, HObjectAccess::ForElementsPointer(), elements_location_); + if (fill_mode == FILL_WITH_HOLE) { builder()->BuildFillElementsWithHole(elements_location_, kind_, graph()->GetConstant0(), capacity); } - return new_object; -} - - -HStoreNamedField* HGraphBuilder::AddStoreMapConstant(HValue *object, - Handle<Map> map) { - return Add<HStoreNamedField>(object, HObjectAccess::ForMap(), - Add<HConstant>(map)); + return array_object; } @@ -3050,6 +3404,11 @@ void HBasicBlock::FinishExit(HControlInstruction* instruction, } +OStream& operator<<(OStream& os, const HBasicBlock& b) { + return os << "B" << b.block_id(); +} + + HGraph::HGraph(CompilationInfo* info) : isolate_(info->isolate()), next_block_id_(0), @@ -3073,8 +3432,8 @@ HGraph::HGraph(CompilationInfo* info) if (info->IsStub()) { HydrogenCodeStub* stub = info->code_stub(); CodeStubInterfaceDescriptor* descriptor = 
stub->GetInterfaceDescriptor(); - start_environment_ = - new(zone_) HEnvironment(zone_, descriptor->environment_length()); + start_environment_ = new(zone_) HEnvironment( + zone_, descriptor->GetEnvironmentParameterCount()); } else { TraceInlinedFunction(info->shared_info(), HSourcePosition::Unknown()); start_environment_ = @@ -3095,7 +3454,7 @@ HBasicBlock* HGraph::CreateBasicBlock() { void HGraph::FinalizeUniqueness() { DisallowHeapAllocation no_gc; - ASSERT(!OptimizingCompilerThread::IsOptimizerThread(isolate())); + DCHECK(!OptimizingCompilerThread::IsOptimizerThread(isolate())); for (int i = 0; i < blocks()->length(); ++i) { for (HInstructionIterator it(blocks()->at(i)); !it.Done(); it.Advance()) { it.Current()->FinalizeUniqueness(); @@ -3124,13 +3483,10 @@ int HGraph::TraceInlinedFunction( if (!shared->script()->IsUndefined()) { Handle<Script> script(Script::cast(shared->script())); if (!script->source()->IsUndefined()) { - CodeTracer::Scope tracing_scope(isolate()->GetCodeTracer()); - PrintF(tracing_scope.file(), - "--- FUNCTION SOURCE (%s) id{%d,%d} ---\n", - shared->DebugName()->ToCString().get(), - info()->optimization_id(), - id); - + CodeTracer::Scope tracing_scopex(isolate()->GetCodeTracer()); + OFStream os(tracing_scopex.file()); + os << "--- FUNCTION SOURCE (" << shared->DebugName()->ToCString().get() + << ") id{" << info()->optimization_id() << "," << id << "} ---\n"; { ConsStringIteratorOp op; StringCharacterStream stream(String::cast(script->source()), @@ -3142,12 +3498,12 @@ int HGraph::TraceInlinedFunction( shared->end_position() - shared->start_position() + 1; for (int i = 0; i < source_len; i++) { if (stream.HasMore()) { - PrintF(tracing_scope.file(), "%c", stream.GetNext()); + os << AsReversiblyEscapedUC16(stream.GetNext()); } } } - PrintF(tracing_scope.file(), "\n--- END ---\n"); + os << "\n--- END ---\n"; } } } @@ -3156,13 +3512,10 @@ int HGraph::TraceInlinedFunction( if (inline_id != 0) { CodeTracer::Scope tracing_scope(isolate()->GetCodeTracer()); - PrintF(tracing_scope.file(), "INLINE (%s) id{%d,%d} AS %d AT ", - shared->DebugName()->ToCString().get(), - info()->optimization_id(), - id, - inline_id); - position.PrintTo(tracing_scope.file()); - PrintF(tracing_scope.file(), "\n"); + OFStream os(tracing_scope.file()); + os << "INLINE (" << shared->DebugName()->ToCString().get() << ") id{" + << info()->optimization_id() << "," << id << "} AS " << inline_id + << " AT " << position << endl; } return inline_id; @@ -3235,21 +3588,19 @@ class PostorderProcessor : public ZoneObject { HBasicBlock* loop_header() { return loop_header_; } static PostorderProcessor* CreateEntryProcessor(Zone* zone, - HBasicBlock* block, - BitVector* visited) { + HBasicBlock* block) { PostorderProcessor* result = new(zone) PostorderProcessor(NULL); - return result->SetupSuccessors(zone, block, NULL, visited); + return result->SetupSuccessors(zone, block, NULL); } PostorderProcessor* PerformStep(Zone* zone, - BitVector* visited, ZoneList<HBasicBlock*>* order) { PostorderProcessor* next = - PerformNonBacktrackingStep(zone, visited, order); + PerformNonBacktrackingStep(zone, order); if (next != NULL) { return next; } else { - return Backtrack(zone, visited, order); + return Backtrack(zone, order); } } @@ -3269,9 +3620,8 @@ class PostorderProcessor : public ZoneObject { // Each "Setup..." method is like a constructor for a cycle state. 
PostorderProcessor* SetupSuccessors(Zone* zone, HBasicBlock* block, - HBasicBlock* loop_header, - BitVector* visited) { - if (block == NULL || visited->Contains(block->block_id()) || + HBasicBlock* loop_header) { + if (block == NULL || block->IsOrdered() || block->parent_loop_header() != loop_header) { kind_ = NONE; block_ = NULL; @@ -3281,7 +3631,7 @@ class PostorderProcessor : public ZoneObject { } else { block_ = block; loop_ = NULL; - visited->Add(block->block_id()); + block->MarkAsOrdered(); if (block->IsLoopHeader()) { kind_ = SUCCESSORS_OF_LOOP_HEADER; @@ -3291,7 +3641,7 @@ class PostorderProcessor : public ZoneObject { return result->SetupLoopMembers(zone, block, block->loop_information(), loop_header); } else { - ASSERT(block->IsFinished()); + DCHECK(block->IsFinished()); kind_ = SUCCESSORS; loop_header_ = loop_header; InitializeSuccessors(); @@ -3333,10 +3683,10 @@ class PostorderProcessor : public ZoneObject { } void ClosePostorder(ZoneList<HBasicBlock*>* order, Zone* zone) { - ASSERT(block_->end()->FirstSuccessor() == NULL || + DCHECK(block_->end()->FirstSuccessor() == NULL || order->Contains(block_->end()->FirstSuccessor()) || block_->end()->FirstSuccessor()->IsLoopHeader()); - ASSERT(block_->end()->SecondSuccessor() == NULL || + DCHECK(block_->end()->SecondSuccessor() == NULL || order->Contains(block_->end()->SecondSuccessor()) || block_->end()->SecondSuccessor()->IsLoopHeader()); order->Add(block_, zone); @@ -3344,7 +3694,6 @@ class PostorderProcessor : public ZoneObject { // This method is the basic block to walk up the stack. PostorderProcessor* Pop(Zone* zone, - BitVector* visited, ZoneList<HBasicBlock*>* order) { switch (kind_) { case SUCCESSORS: @@ -3371,16 +3720,15 @@ class PostorderProcessor : public ZoneObject { // Walks up the stack. 
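+  // Backtracking pops finished states off the stack until it finds a parent
+  // that still has unprocessed successors or loop members, then resumes
+  // there; it returns NULL once the entry state itself is exhausted.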
PostorderProcessor* Backtrack(Zone* zone, - BitVector* visited, ZoneList<HBasicBlock*>* order) { - PostorderProcessor* parent = Pop(zone, visited, order); + PostorderProcessor* parent = Pop(zone, order); while (parent != NULL) { PostorderProcessor* next = - parent->PerformNonBacktrackingStep(zone, visited, order); + parent->PerformNonBacktrackingStep(zone, order); if (next != NULL) { return next; } else { - parent = parent->Pop(zone, visited, order); + parent = parent->Pop(zone, order); } } return NULL; @@ -3388,7 +3736,6 @@ class PostorderProcessor : public ZoneObject { PostorderProcessor* PerformNonBacktrackingStep( Zone* zone, - BitVector* visited, ZoneList<HBasicBlock*>* order) { HBasicBlock* next_block; switch (kind_) { @@ -3396,16 +3743,14 @@ class PostorderProcessor : public ZoneObject { next_block = AdvanceSuccessors(); if (next_block != NULL) { PostorderProcessor* result = Push(zone); - return result->SetupSuccessors(zone, next_block, - loop_header_, visited); + return result->SetupSuccessors(zone, next_block, loop_header_); } break; case SUCCESSORS_OF_LOOP_HEADER: next_block = AdvanceSuccessors(); if (next_block != NULL) { PostorderProcessor* result = Push(zone); - return result->SetupSuccessors(zone, next_block, - block(), visited); + return result->SetupSuccessors(zone, next_block, block()); } break; case LOOP_MEMBERS: @@ -3420,8 +3765,7 @@ class PostorderProcessor : public ZoneObject { next_block = AdvanceSuccessors(); if (next_block != NULL) { PostorderProcessor* result = Push(zone); - return result->SetupSuccessors(zone, next_block, - loop_header_, visited); + return result->SetupSuccessors(zone, next_block, loop_header_); } break; case NONE: @@ -3476,21 +3820,36 @@ class PostorderProcessor : public ZoneObject { void HGraph::OrderBlocks() { CompilationPhase phase("H_Block ordering", info()); - BitVector visited(blocks_.length(), zone()); - ZoneList<HBasicBlock*> reverse_result(8, zone()); - HBasicBlock* start = blocks_[0]; +#ifdef DEBUG + // Initially the blocks must not be ordered. + for (int i = 0; i < blocks_.length(); ++i) { + DCHECK(!blocks_[i]->IsOrdered()); + } +#endif + PostorderProcessor* postorder = - PostorderProcessor::CreateEntryProcessor(zone(), start, &visited); - while (postorder != NULL) { - postorder = postorder->PerformStep(zone(), &visited, &reverse_result); - } + PostorderProcessor::CreateEntryProcessor(zone(), blocks_[0]); blocks_.Rewind(0); - int index = 0; - for (int i = reverse_result.length() - 1; i >= 0; --i) { - HBasicBlock* b = reverse_result[i]; - blocks_.Add(b, zone()); - b->set_block_id(index++); + while (postorder) { + postorder = postorder->PerformStep(zone(), &blocks_); + } + +#ifdef DEBUG + // Now all blocks must be marked as ordered. + for (int i = 0; i < blocks_.length(); ++i) { + DCHECK(blocks_[i]->IsOrdered()); + } +#endif + + // Reverse block list and assign block IDs. + for (int i = 0, j = blocks_.length(); --j >= i; ++i) { + HBasicBlock* bi = blocks_[i]; + HBasicBlock* bj = blocks_[j]; + bi->set_block_id(j); + bj->set_block_id(i); + blocks_[i] = bj; + blocks_[j] = bi; } } @@ -3626,7 +3985,7 @@ AstContext::AstContext(HOptimizedGraphBuilder* owner, Expression::Context kind) for_typeof_(false) { owner->set_ast_context(this); // Push. 
#ifdef DEBUG - ASSERT(owner->environment()->frame_type() == JS_FUNCTION); + DCHECK(owner->environment()->frame_type() == JS_FUNCTION); original_length_ = owner->environment()->length(); #endif } @@ -3638,7 +3997,7 @@ AstContext::~AstContext() { EffectContext::~EffectContext() { - ASSERT(owner()->HasStackOverflow() || + DCHECK(owner()->HasStackOverflow() || owner()->current_block() == NULL || (owner()->environment()->length() == original_length_ && owner()->environment()->frame_type() == JS_FUNCTION)); @@ -3646,7 +4005,7 @@ EffectContext::~EffectContext() { ValueContext::~ValueContext() { - ASSERT(owner()->HasStackOverflow() || + DCHECK(owner()->HasStackOverflow() || owner()->current_block() == NULL || (owner()->environment()->length() == original_length_ + 1 && owner()->environment()->frame_type() == JS_FUNCTION)); @@ -3674,7 +4033,7 @@ void TestContext::ReturnValue(HValue* value) { void EffectContext::ReturnInstruction(HInstruction* instr, BailoutId ast_id) { - ASSERT(!instr->IsControlInstruction()); + DCHECK(!instr->IsControlInstruction()); owner()->AddInstruction(instr); if (instr->HasObservableSideEffects()) { owner()->Add<HSimulate>(ast_id, REMOVABLE_SIMULATE); @@ -3684,7 +4043,7 @@ void EffectContext::ReturnInstruction(HInstruction* instr, BailoutId ast_id) { void EffectContext::ReturnControl(HControlInstruction* instr, BailoutId ast_id) { - ASSERT(!instr->HasObservableSideEffects()); + DCHECK(!instr->HasObservableSideEffects()); HBasicBlock* empty_true = owner()->graph()->CreateBasicBlock(); HBasicBlock* empty_false = owner()->graph()->CreateBasicBlock(); instr->SetSuccessorAt(0, empty_true); @@ -3712,7 +4071,7 @@ void EffectContext::ReturnContinuation(HIfContinuation* continuation, void ValueContext::ReturnInstruction(HInstruction* instr, BailoutId ast_id) { - ASSERT(!instr->IsControlInstruction()); + DCHECK(!instr->IsControlInstruction()); if (!arguments_allowed() && instr->CheckFlag(HValue::kIsArguments)) { return owner()->Bailout(kBadValueContextForArgumentsObjectValue); } @@ -3725,7 +4084,7 @@ void ValueContext::ReturnInstruction(HInstruction* instr, BailoutId ast_id) { void ValueContext::ReturnControl(HControlInstruction* instr, BailoutId ast_id) { - ASSERT(!instr->HasObservableSideEffects()); + DCHECK(!instr->HasObservableSideEffects()); if (!arguments_allowed() && instr->CheckFlag(HValue::kIsArguments)) { return owner()->Bailout(kBadValueContextForArgumentsObjectValue); } @@ -3768,7 +4127,7 @@ void ValueContext::ReturnContinuation(HIfContinuation* continuation, void TestContext::ReturnInstruction(HInstruction* instr, BailoutId ast_id) { - ASSERT(!instr->IsControlInstruction()); + DCHECK(!instr->IsControlInstruction()); HOptimizedGraphBuilder* builder = owner(); builder->AddInstruction(instr); // We expect a simulate after every expression with side effects, though @@ -3783,7 +4142,7 @@ void TestContext::ReturnInstruction(HInstruction* instr, BailoutId ast_id) { void TestContext::ReturnControl(HControlInstruction* instr, BailoutId ast_id) { - ASSERT(!instr->HasObservableSideEffects()); + DCHECK(!instr->HasObservableSideEffects()); HBasicBlock* empty_true = owner()->graph()->CreateBasicBlock(); HBasicBlock* empty_false = owner()->graph()->CreateBasicBlock(); instr->SetSuccessorAt(0, empty_true); @@ -3948,7 +4307,7 @@ bool HOptimizedGraphBuilder::BuildGraph() { // due to missing/inadequate type feedback, but rather too aggressive // optimization. Disable optimistic LICM in that case. 
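+    // (Optimistic LICM hoists instructions that may deoptimize out of loops,
+    // so with unstable type feedback it tends to be the source of the
+    // repeated deopts this heuristic is trying to stop.)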
Handle<Code> unoptimized_code(current_info()->shared_info()->code()); - ASSERT(unoptimized_code->kind() == Code::FUNCTION); + DCHECK(unoptimized_code->kind() == Code::FUNCTION); Handle<TypeFeedbackInfo> type_info( TypeFeedbackInfo::cast(unoptimized_code->type_feedback_info())); int checksum = type_info->own_type_change_checksum(); @@ -4062,7 +4421,7 @@ void HGraph::RestoreActualValues() { #ifdef DEBUG for (int i = 0; i < block->phis()->length(); i++) { HPhi* phi = block->phis()->at(i); - ASSERT(phi->ActualValue() == phi); + DCHECK(phi->ActualValue() == phi); } #endif @@ -4075,7 +4434,7 @@ void HGraph::RestoreActualValues() { // instructions. instruction->DeleteAndReplaceWith(instruction->ActualValue()); } else { - ASSERT(instruction->IsInformativeDefinition()); + DCHECK(instruction->IsInformativeDefinition()); if (instruction->IsPurelyInformativeDefinition()) { instruction->DeleteAndReplaceWith(instruction->RedefinedOperand()); } else { @@ -4093,9 +4452,11 @@ void HOptimizedGraphBuilder::PushArgumentsFromEnvironment(int count) { arguments.Add(Pop(), zone()); } + HPushArguments* push_args = New<HPushArguments>(); while (!arguments.is_empty()) { - Add<HPushArgument>(arguments.RemoveLast()); + push_args->AddInput(arguments.RemoveLast()); } + AddInstruction(push_args); } @@ -4113,7 +4474,7 @@ void HOptimizedGraphBuilder::SetUpScope(Scope* scope) { // Create an arguments object containing the initial parameters. Set the // initial values of parameters including "this" having parameter index 0. - ASSERT_EQ(scope->num_parameters() + 1, environment()->parameter_count()); + DCHECK_EQ(scope->num_parameters() + 1, environment()->parameter_count()); HArgumentsObject* arguments_object = New<HArgumentsObject>(environment()->parameter_count()); for (int i = 0; i < environment()->parameter_count(); ++i) { @@ -4155,16 +4516,55 @@ void HOptimizedGraphBuilder::VisitStatements(ZoneList<Statement*>* statements) { void HOptimizedGraphBuilder::VisitBlock(Block* stmt) { - ASSERT(!HasStackOverflow()); - ASSERT(current_block() != NULL); - ASSERT(current_block()->HasPredecessor()); - if (stmt->scope() != NULL) { - return Bailout(kScopedBlock); - } - BreakAndContinueInfo break_info(stmt); + DCHECK(!HasStackOverflow()); + DCHECK(current_block() != NULL); + DCHECK(current_block()->HasPredecessor()); + + Scope* outer_scope = scope(); + Scope* scope = stmt->scope(); + BreakAndContinueInfo break_info(stmt, outer_scope); + { BreakAndContinueScope push(&break_info, this); + if (scope != NULL) { + // Load the function object. + Scope* declaration_scope = scope->DeclarationScope(); + HInstruction* function; + HValue* outer_context = environment()->context(); + if (declaration_scope->is_global_scope() || + declaration_scope->is_eval_scope()) { + function = new(zone()) HLoadContextSlot( + outer_context, Context::CLOSURE_INDEX, HLoadContextSlot::kNoCheck); + } else { + function = New<HThisFunction>(); + } + AddInstruction(function); + // Allocate a block context and store it to the stack frame. 
+ HInstruction* inner_context = Add<HAllocateBlockContext>( + outer_context, function, scope->GetScopeInfo()); + HInstruction* instr = Add<HStoreFrameContext>(inner_context); + if (instr->HasObservableSideEffects()) { + AddSimulate(stmt->EntryId(), REMOVABLE_SIMULATE); + } + set_scope(scope); + environment()->BindContext(inner_context); + VisitDeclarations(scope->declarations()); + AddSimulate(stmt->DeclsId(), REMOVABLE_SIMULATE); + } CHECK_BAILOUT(VisitStatements(stmt->statements())); } + set_scope(outer_scope); + if (scope != NULL && current_block() != NULL) { + HValue* inner_context = environment()->context(); + HValue* outer_context = Add<HLoadNamedField>( + inner_context, static_cast<HValue*>(NULL), + HObjectAccess::ForContextSlot(Context::PREVIOUS_INDEX)); + + HInstruction* instr = Add<HStoreFrameContext>(outer_context); + if (instr->HasObservableSideEffects()) { + AddSimulate(stmt->ExitId(), REMOVABLE_SIMULATE); + } + environment()->BindContext(outer_context); + } HBasicBlock* break_block = break_info.break_block(); if (break_block != NULL) { if (current_block() != NULL) Goto(break_block); @@ -4176,24 +4576,24 @@ void HOptimizedGraphBuilder::VisitBlock(Block* stmt) { void HOptimizedGraphBuilder::VisitExpressionStatement( ExpressionStatement* stmt) { - ASSERT(!HasStackOverflow()); - ASSERT(current_block() != NULL); - ASSERT(current_block()->HasPredecessor()); + DCHECK(!HasStackOverflow()); + DCHECK(current_block() != NULL); + DCHECK(current_block()->HasPredecessor()); VisitForEffect(stmt->expression()); } void HOptimizedGraphBuilder::VisitEmptyStatement(EmptyStatement* stmt) { - ASSERT(!HasStackOverflow()); - ASSERT(current_block() != NULL); - ASSERT(current_block()->HasPredecessor()); + DCHECK(!HasStackOverflow()); + DCHECK(current_block() != NULL); + DCHECK(current_block()->HasPredecessor()); } void HOptimizedGraphBuilder::VisitIfStatement(IfStatement* stmt) { - ASSERT(!HasStackOverflow()); - ASSERT(current_block() != NULL); - ASSERT(current_block()->HasPredecessor()); + DCHECK(!HasStackOverflow()); + DCHECK(current_block() != NULL); + DCHECK(current_block()->HasPredecessor()); if (stmt->condition()->ToBooleanIsTrue()) { Add<HSimulate>(stmt->ThenId()); Visit(stmt->then_statement()); @@ -4232,6 +4632,7 @@ void HOptimizedGraphBuilder::VisitIfStatement(IfStatement* stmt) { HBasicBlock* HOptimizedGraphBuilder::BreakAndContinueScope::Get( BreakableStatement* stmt, BreakType type, + Scope** scope, int* drop_extra) { *drop_extra = 0; BreakAndContinueScope* current = this; @@ -4239,7 +4640,8 @@ HBasicBlock* HOptimizedGraphBuilder::BreakAndContinueScope::Get( *drop_extra += current->info()->drop_extra(); current = current->next(); } - ASSERT(current != NULL); // Always found (unless stack is malformed). + DCHECK(current != NULL); // Always found (unless stack is malformed). 
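The VisitBlock() changes above, together with the break/continue handling below, maintain a chain of contexts: entering a scoped block pushes a freshly allocated block context whose PREVIOUS slot points at the outer context, and leaving the block (normally or via a jump) walks PREVIOUS links back out and restores the frame context. A rough standalone sketch of that discipline (editorial; stand-in types, not the real V8 classes):

struct Context {
  Context* previous;  // Plays the role of Context::PREVIOUS_INDEX.
};

// Entering a block scope: chain a new context onto the current one.
static Context* PushBlockContext(Context* outer, Context* fresh) {
  fresh->previous = outer;
  return fresh;  // environment()->BindContext(inner_context) in the patch.
}

// break/continue: pop as many contexts as the scope distance requires;
// |n| plays the role of inner_scope->ContextChainLength(outer_scope).
static Context* PopContexts(Context* inner, int n) {
  while (n-- > 0) inner = inner->previous;
  return inner;  // Stored back via HStoreFrameContext + BindContext.
}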
+ *scope = current->info()->scope(); if (type == BREAK) { *drop_extra += current->info()->drop_extra(); @@ -4270,35 +4672,72 @@ HBasicBlock* HOptimizedGraphBuilder::BreakAndContinueScope::Get( void HOptimizedGraphBuilder::VisitContinueStatement( ContinueStatement* stmt) { - ASSERT(!HasStackOverflow()); - ASSERT(current_block() != NULL); - ASSERT(current_block()->HasPredecessor()); + DCHECK(!HasStackOverflow()); + DCHECK(current_block() != NULL); + DCHECK(current_block()->HasPredecessor()); + Scope* outer_scope = NULL; + Scope* inner_scope = scope(); int drop_extra = 0; HBasicBlock* continue_block = break_scope()->Get( - stmt->target(), BreakAndContinueScope::CONTINUE, &drop_extra); + stmt->target(), BreakAndContinueScope::CONTINUE, + &outer_scope, &drop_extra); + HValue* context = environment()->context(); Drop(drop_extra); + int context_pop_count = inner_scope->ContextChainLength(outer_scope); + if (context_pop_count > 0) { + while (context_pop_count-- > 0) { + HInstruction* context_instruction = Add<HLoadNamedField>( + context, static_cast<HValue*>(NULL), + HObjectAccess::ForContextSlot(Context::PREVIOUS_INDEX)); + context = context_instruction; + } + HInstruction* instr = Add<HStoreFrameContext>(context); + if (instr->HasObservableSideEffects()) { + AddSimulate(stmt->target()->EntryId(), REMOVABLE_SIMULATE); + } + environment()->BindContext(context); + } + Goto(continue_block); set_current_block(NULL); } void HOptimizedGraphBuilder::VisitBreakStatement(BreakStatement* stmt) { - ASSERT(!HasStackOverflow()); - ASSERT(current_block() != NULL); - ASSERT(current_block()->HasPredecessor()); + DCHECK(!HasStackOverflow()); + DCHECK(current_block() != NULL); + DCHECK(current_block()->HasPredecessor()); + Scope* outer_scope = NULL; + Scope* inner_scope = scope(); int drop_extra = 0; HBasicBlock* break_block = break_scope()->Get( - stmt->target(), BreakAndContinueScope::BREAK, &drop_extra); + stmt->target(), BreakAndContinueScope::BREAK, + &outer_scope, &drop_extra); + HValue* context = environment()->context(); Drop(drop_extra); + int context_pop_count = inner_scope->ContextChainLength(outer_scope); + if (context_pop_count > 0) { + while (context_pop_count-- > 0) { + HInstruction* context_instruction = Add<HLoadNamedField>( + context, static_cast<HValue*>(NULL), + HObjectAccess::ForContextSlot(Context::PREVIOUS_INDEX)); + context = context_instruction; + } + HInstruction* instr = Add<HStoreFrameContext>(context); + if (instr->HasObservableSideEffects()) { + AddSimulate(stmt->target()->ExitId(), REMOVABLE_SIMULATE); + } + environment()->BindContext(context); + } Goto(break_block); set_current_block(NULL); } void HOptimizedGraphBuilder::VisitReturnStatement(ReturnStatement* stmt) { - ASSERT(!HasStackOverflow()); - ASSERT(current_block() != NULL); - ASSERT(current_block()->HasPredecessor()); + DCHECK(!HasStackOverflow()); + DCHECK(current_block() != NULL); + DCHECK(current_block()->HasPredecessor()); FunctionState* state = function_state(); AstContext* context = call_context(); if (context == NULL) { @@ -4318,7 +4757,7 @@ void HOptimizedGraphBuilder::VisitReturnStatement(ReturnStatement* stmt) { CHECK_ALIVE(VisitForEffect(stmt->expression())); Goto(function_return(), state); } else { - ASSERT(context->IsValue()); + DCHECK(context->IsValue()); CHECK_ALIVE(VisitForValue(stmt->expression())); HValue* return_value = Pop(); HValue* receiver = environment()->arguments_environment()->Lookup(0); @@ -4344,7 +4783,7 @@ void HOptimizedGraphBuilder::VisitReturnStatement(ReturnStatement* stmt) { } else if 
(context->IsEffect()) { Goto(function_return(), state); } else { - ASSERT(context->IsValue()); + DCHECK(context->IsValue()); HValue* rhs = environment()->arguments_environment()->Lookup(1); AddLeaveInlined(rhs, state); } @@ -4363,7 +4802,7 @@ void HOptimizedGraphBuilder::VisitReturnStatement(ReturnStatement* stmt) { Pop(); Goto(function_return(), state); } else { - ASSERT(context->IsValue()); + DCHECK(context->IsValue()); CHECK_ALIVE(VisitForValue(stmt->expression())); AddLeaveInlined(Pop(), state); } @@ -4373,17 +4812,17 @@ void HOptimizedGraphBuilder::VisitReturnStatement(ReturnStatement* stmt) { void HOptimizedGraphBuilder::VisitWithStatement(WithStatement* stmt) { - ASSERT(!HasStackOverflow()); - ASSERT(current_block() != NULL); - ASSERT(current_block()->HasPredecessor()); + DCHECK(!HasStackOverflow()); + DCHECK(current_block() != NULL); + DCHECK(current_block()->HasPredecessor()); return Bailout(kWithStatement); } void HOptimizedGraphBuilder::VisitSwitchStatement(SwitchStatement* stmt) { - ASSERT(!HasStackOverflow()); - ASSERT(current_block() != NULL); - ASSERT(current_block()->HasPredecessor()); + DCHECK(!HasStackOverflow()); + DCHECK(current_block() != NULL); + DCHECK(current_block()->HasPredecessor()); // We only optimize switch statements with a bounded number of clauses. const int kCaseClauseLimit = 128; @@ -4444,7 +4883,7 @@ void HOptimizedGraphBuilder::VisitSwitchStatement(SwitchStatement* stmt) { // translating the clause bodies. HBasicBlock* fall_through_block = NULL; - BreakAndContinueInfo break_info(stmt); + BreakAndContinueInfo break_info(stmt, scope()); { BreakAndContinueScope push(&break_info, this); for (int i = 0; i < clause_count; ++i) { CaseClause* clause = clauses->at(i); @@ -4491,27 +4930,28 @@ void HOptimizedGraphBuilder::VisitSwitchStatement(SwitchStatement* stmt) { void HOptimizedGraphBuilder::VisitLoopBody(IterationStatement* stmt, - HBasicBlock* loop_entry, - BreakAndContinueInfo* break_info) { - BreakAndContinueScope push(break_info, this); + HBasicBlock* loop_entry) { Add<HSimulate>(stmt->StackCheckId()); HStackCheck* stack_check = HStackCheck::cast(Add<HStackCheck>(HStackCheck::kBackwardsBranch)); - ASSERT(loop_entry->IsLoopHeader()); + DCHECK(loop_entry->IsLoopHeader()); loop_entry->loop_information()->set_stack_check(stack_check); CHECK_BAILOUT(Visit(stmt->body())); } void HOptimizedGraphBuilder::VisitDoWhileStatement(DoWhileStatement* stmt) { - ASSERT(!HasStackOverflow()); - ASSERT(current_block() != NULL); - ASSERT(current_block()->HasPredecessor()); - ASSERT(current_block() != NULL); + DCHECK(!HasStackOverflow()); + DCHECK(current_block() != NULL); + DCHECK(current_block()->HasPredecessor()); + DCHECK(current_block() != NULL); HBasicBlock* loop_entry = BuildLoopEntry(stmt); - BreakAndContinueInfo break_info(stmt); - CHECK_BAILOUT(VisitLoopBody(stmt, loop_entry, &break_info)); + BreakAndContinueInfo break_info(stmt, scope()); + { + BreakAndContinueScope push(&break_info, this); + CHECK_BAILOUT(VisitLoopBody(stmt, loop_entry)); + } HBasicBlock* body_exit = JoinContinue(stmt, current_block(), break_info.continue_block()); HBasicBlock* loop_successor = NULL; @@ -4519,6 +4959,7 @@ void HOptimizedGraphBuilder::VisitDoWhileStatement(DoWhileStatement* stmt) { set_current_block(body_exit); loop_successor = graph()->CreateBasicBlock(); if (stmt->cond()->ToBooleanIsFalse()) { + loop_entry->loop_information()->stack_check()->Eliminate(); Goto(loop_successor); body_exit = NULL; } else { @@ -4548,10 +4989,10 @@ void 
HOptimizedGraphBuilder::VisitDoWhileStatement(DoWhileStatement* stmt) { void HOptimizedGraphBuilder::VisitWhileStatement(WhileStatement* stmt) { - ASSERT(!HasStackOverflow()); - ASSERT(current_block() != NULL); - ASSERT(current_block()->HasPredecessor()); - ASSERT(current_block() != NULL); + DCHECK(!HasStackOverflow()); + DCHECK(current_block() != NULL); + DCHECK(current_block()->HasPredecessor()); + DCHECK(current_block() != NULL); HBasicBlock* loop_entry = BuildLoopEntry(stmt); // If the condition is constant true, do not generate a branch. @@ -4571,9 +5012,10 @@ void HOptimizedGraphBuilder::VisitWhileStatement(WhileStatement* stmt) { } } - BreakAndContinueInfo break_info(stmt); + BreakAndContinueInfo break_info(stmt, scope()); if (current_block() != NULL) { - CHECK_BAILOUT(VisitLoopBody(stmt, loop_entry, &break_info)); + BreakAndContinueScope push(&break_info, this); + CHECK_BAILOUT(VisitLoopBody(stmt, loop_entry)); } HBasicBlock* body_exit = JoinContinue(stmt, current_block(), break_info.continue_block()); @@ -4587,13 +5029,13 @@ void HOptimizedGraphBuilder::VisitWhileStatement(WhileStatement* stmt) { void HOptimizedGraphBuilder::VisitForStatement(ForStatement* stmt) { - ASSERT(!HasStackOverflow()); - ASSERT(current_block() != NULL); - ASSERT(current_block()->HasPredecessor()); + DCHECK(!HasStackOverflow()); + DCHECK(current_block() != NULL); + DCHECK(current_block()->HasPredecessor()); if (stmt->init() != NULL) { CHECK_ALIVE(Visit(stmt->init())); } - ASSERT(current_block() != NULL); + DCHECK(current_block() != NULL); HBasicBlock* loop_entry = BuildLoopEntry(stmt); HBasicBlock* loop_successor = NULL; @@ -4612,9 +5054,10 @@ void HOptimizedGraphBuilder::VisitForStatement(ForStatement* stmt) { } } - BreakAndContinueInfo break_info(stmt); + BreakAndContinueInfo break_info(stmt, scope()); if (current_block() != NULL) { - CHECK_BAILOUT(VisitLoopBody(stmt, loop_entry, &break_info)); + BreakAndContinueScope push(&break_info, this); + CHECK_BAILOUT(VisitLoopBody(stmt, loop_entry)); } HBasicBlock* body_exit = JoinContinue(stmt, current_block(), break_info.continue_block()); @@ -4635,9 +5078,9 @@ void HOptimizedGraphBuilder::VisitForStatement(ForStatement* stmt) { void HOptimizedGraphBuilder::VisitForInStatement(ForInStatement* stmt) { - ASSERT(!HasStackOverflow()); - ASSERT(current_block() != NULL); - ASSERT(current_block()->HasPredecessor()); + DCHECK(!HasStackOverflow()); + DCHECK(current_block() != NULL); + DCHECK(current_block()->HasPredecessor()); if (!FLAG_optimize_for_in) { return Bailout(kForInStatementOptimizationIsDisabled); @@ -4713,8 +5156,11 @@ void HOptimizedGraphBuilder::VisitForInStatement(ForInStatement* stmt) { Bind(each_var, key); - BreakAndContinueInfo break_info(stmt, 5); - CHECK_BAILOUT(VisitLoopBody(stmt, loop_entry, &break_info)); + BreakAndContinueInfo break_info(stmt, scope(), 5); + { + BreakAndContinueScope push(&break_info, this); + CHECK_BAILOUT(VisitLoopBody(stmt, loop_entry)); + } HBasicBlock* body_exit = JoinContinue(stmt, current_block(), break_info.continue_block()); @@ -4738,34 +5184,34 @@ void HOptimizedGraphBuilder::VisitForInStatement(ForInStatement* stmt) { void HOptimizedGraphBuilder::VisitForOfStatement(ForOfStatement* stmt) { - ASSERT(!HasStackOverflow()); - ASSERT(current_block() != NULL); - ASSERT(current_block()->HasPredecessor()); + DCHECK(!HasStackOverflow()); + DCHECK(current_block() != NULL); + DCHECK(current_block()->HasPredecessor()); return Bailout(kForOfStatement); } void HOptimizedGraphBuilder::VisitTryCatchStatement(TryCatchStatement* stmt) 
{ - ASSERT(!HasStackOverflow()); - ASSERT(current_block() != NULL); - ASSERT(current_block()->HasPredecessor()); + DCHECK(!HasStackOverflow()); + DCHECK(current_block() != NULL); + DCHECK(current_block()->HasPredecessor()); return Bailout(kTryCatchStatement); } void HOptimizedGraphBuilder::VisitTryFinallyStatement( TryFinallyStatement* stmt) { - ASSERT(!HasStackOverflow()); - ASSERT(current_block() != NULL); - ASSERT(current_block()->HasPredecessor()); + DCHECK(!HasStackOverflow()); + DCHECK(current_block() != NULL); + DCHECK(current_block()->HasPredecessor()); return Bailout(kTryFinallyStatement); } void HOptimizedGraphBuilder::VisitDebuggerStatement(DebuggerStatement* stmt) { - ASSERT(!HasStackOverflow()); - ASSERT(current_block() != NULL); - ASSERT(current_block()->HasPredecessor()); + DCHECK(!HasStackOverflow()); + DCHECK(current_block() != NULL); + DCHECK(current_block()->HasPredecessor()); return Bailout(kDebuggerStatement); } @@ -4776,12 +5222,13 @@ void HOptimizedGraphBuilder::VisitCaseClause(CaseClause* clause) { void HOptimizedGraphBuilder::VisitFunctionLiteral(FunctionLiteral* expr) { - ASSERT(!HasStackOverflow()); - ASSERT(current_block() != NULL); - ASSERT(current_block()->HasPredecessor()); + DCHECK(!HasStackOverflow()); + DCHECK(current_block() != NULL); + DCHECK(current_block()->HasPredecessor()); Handle<SharedFunctionInfo> shared_info = expr->shared_info(); if (shared_info.is_null()) { - shared_info = Compiler::BuildFunctionInfo(expr, current_info()->script()); + shared_info = + Compiler::BuildFunctionInfo(expr, current_info()->script(), top_info()); } // We also have a stack overflow if the recursive compilation did. if (HasStackOverflow()) return; @@ -4793,17 +5240,17 @@ void HOptimizedGraphBuilder::VisitFunctionLiteral(FunctionLiteral* expr) { void HOptimizedGraphBuilder::VisitNativeFunctionLiteral( NativeFunctionLiteral* expr) { - ASSERT(!HasStackOverflow()); - ASSERT(current_block() != NULL); - ASSERT(current_block()->HasPredecessor()); + DCHECK(!HasStackOverflow()); + DCHECK(current_block() != NULL); + DCHECK(current_block()->HasPredecessor()); return Bailout(kNativeFunctionLiteral); } void HOptimizedGraphBuilder::VisitConditional(Conditional* expr) { - ASSERT(!HasStackOverflow()); - ASSERT(current_block() != NULL); - ASSERT(current_block()->HasPredecessor()); + DCHECK(!HasStackOverflow()); + DCHECK(current_block() != NULL); + DCHECK(current_block()->HasPredecessor()); HBasicBlock* cond_true = graph()->CreateBasicBlock(); HBasicBlock* cond_false = graph()->CreateBasicBlock(); CHECK_BAILOUT(VisitForControl(expr->condition(), cond_true, cond_false)); @@ -4857,9 +5304,9 @@ HOptimizedGraphBuilder::GlobalPropertyAccess HValue* HOptimizedGraphBuilder::BuildContextChainWalk(Variable* var) { - ASSERT(var->IsContextSlot()); + DCHECK(var->IsContextSlot()); HValue* context = environment()->context(); - int length = current_info()->scope()->ContextChainLength(var->scope()); + int length = scope()->ContextChainLength(var->scope()); while (length-- > 0) { context = Add<HLoadNamedField>( context, static_cast<HValue*>(NULL), @@ -4874,14 +5321,14 @@ void HOptimizedGraphBuilder::VisitVariableProxy(VariableProxy* expr) { current_info()->set_this_has_uses(true); } - ASSERT(!HasStackOverflow()); - ASSERT(current_block() != NULL); - ASSERT(current_block()->HasPredecessor()); + DCHECK(!HasStackOverflow()); + DCHECK(current_block() != NULL); + DCHECK(current_block()->HasPredecessor()); Variable* variable = expr->var(); switch (variable->location()) { case Variable::UNALLOCATED: { if 
(IsLexicalVariableMode(variable->mode())) { - // TODO(rossberg): should this be an ASSERT? + // TODO(rossberg): should this be a DCHECK? return Bailout(kReferenceToGlobalLexicalVariable); } // Handle known global constants like 'undefined' specially to avoid a @@ -4926,6 +5373,13 @@ void HOptimizedGraphBuilder::VisitVariableProxy(VariableProxy* expr) { New<HLoadGlobalGeneric>(global_object, variable->name(), ast_context()->is_for_typeof()); + if (FLAG_vector_ics) { + Handle<SharedFunctionInfo> current_shared = + function_state()->compilation_info()->shared_info(); + instr->SetVectorAndSlot( + handle(current_shared->feedback_vector(), isolate()), + expr->VariableFeedbackSlot()); + } return ast_context()->ReturnInstruction(instr, expr->id()); } } @@ -4934,7 +5388,7 @@ void HOptimizedGraphBuilder::VisitVariableProxy(VariableProxy* expr) { case Variable::LOCAL: { HValue* value = LookupAndMakeLive(variable); if (value == graph()->GetConstantHole()) { - ASSERT(IsDeclaredVariableMode(variable->mode()) && + DCHECK(IsDeclaredVariableMode(variable->mode()) && variable->mode() != VAR); return Bailout(kReferenceToUninitializedVariable); } @@ -4943,7 +5397,21 @@ void HOptimizedGraphBuilder::VisitVariableProxy(VariableProxy* expr) { case Variable::CONTEXT: { HValue* context = BuildContextChainWalk(variable); - HLoadContextSlot* instr = new(zone()) HLoadContextSlot(context, variable); + HLoadContextSlot::Mode mode; + switch (variable->mode()) { + case LET: + case CONST: + mode = HLoadContextSlot::kCheckDeoptimize; + break; + case CONST_LEGACY: + mode = HLoadContextSlot::kCheckReturnUndefined; + break; + default: + mode = HLoadContextSlot::kNoCheck; + break; + } + HLoadContextSlot* instr = + new(zone()) HLoadContextSlot(context, variable->index(), mode); return ast_context()->ReturnInstruction(instr, expr->id()); } @@ -4954,18 +5422,18 @@ void HOptimizedGraphBuilder::VisitVariableProxy(VariableProxy* expr) { void HOptimizedGraphBuilder::VisitLiteral(Literal* expr) { - ASSERT(!HasStackOverflow()); - ASSERT(current_block() != NULL); - ASSERT(current_block()->HasPredecessor()); + DCHECK(!HasStackOverflow()); + DCHECK(current_block() != NULL); + DCHECK(current_block()->HasPredecessor()); HConstant* instr = New<HConstant>(expr->value()); return ast_context()->ReturnInstruction(instr, expr->id()); } void HOptimizedGraphBuilder::VisitRegExpLiteral(RegExpLiteral* expr) { - ASSERT(!HasStackOverflow()); - ASSERT(current_block() != NULL); - ASSERT(current_block()->HasPredecessor()); + DCHECK(!HasStackOverflow()); + DCHECK(current_block() != NULL); + DCHECK(current_block()->HasPredecessor()); Handle<JSFunction> closure = function_state()->compilation_info()->closure(); Handle<FixedArray> literals(closure->literals()); HRegExpLiteral* instr = New<HRegExpLiteral>(literals, @@ -4997,7 +5465,7 @@ static bool IsFastLiteral(Handle<JSObject> boilerplate, return false; } - ASSERT(max_depth >= 0 && *max_properties >= 0); + DCHECK(max_depth >= 0 && *max_properties >= 0); if (max_depth == 0) return false; Isolate* isolate = boilerplate->GetIsolate(); @@ -5052,9 +5520,9 @@ static bool IsFastLiteral(Handle<JSObject> boilerplate, void HOptimizedGraphBuilder::VisitObjectLiteral(ObjectLiteral* expr) { - ASSERT(!HasStackOverflow()); - ASSERT(current_block() != NULL); - ASSERT(current_block()->HasPredecessor()); + DCHECK(!HasStackOverflow()); + DCHECK(current_block() != NULL); + DCHECK(current_block()->HasPredecessor()); expr->BuildConstantProperties(isolate()); Handle<JSFunction> closure =
function_state()->compilation_info()->closure(); HInstruction* literal; @@ -5088,15 +5556,15 @@ void HOptimizedGraphBuilder::VisitObjectLiteral(ObjectLiteral* expr) { flags |= expr->has_function() ? ObjectLiteral::kHasFunction : ObjectLiteral::kNoFlags; - Add<HPushArgument>(Add<HConstant>(closure_literals)); - Add<HPushArgument>(Add<HConstant>(literal_index)); - Add<HPushArgument>(Add<HConstant>(constant_properties)); - Add<HPushArgument>(Add<HConstant>(flags)); + Add<HPushArguments>(Add<HConstant>(closure_literals), + Add<HConstant>(literal_index), + Add<HConstant>(constant_properties), + Add<HConstant>(flags)); // TODO(mvstanton): Add a flag to turn off creation of any // AllocationMementos for this call: we are in crankshaft and should have // learned enough about transition behavior to stop emitting mementos. - Runtime::FunctionId function_id = Runtime::kHiddenCreateObjectLiteral; + Runtime::FunctionId function_id = Runtime::kCreateObjectLiteral; literal = Add<HCallRuntime>(isolate()->factory()->empty_string(), Runtime::FunctionForId(function_id), 4); @@ -5117,7 +5585,7 @@ void HOptimizedGraphBuilder::VisitObjectLiteral(ObjectLiteral* expr) { switch (property->kind()) { case ObjectLiteral::Property::MATERIALIZED_LITERAL: - ASSERT(!CompileTimeValue::IsCompileTimeValue(value)); + DCHECK(!CompileTimeValue::IsCompileTimeValue(value)); // Fall through. case ObjectLiteral::Property::COMPUTED: if (key->value()->IsInternalizedString()) { @@ -5130,18 +5598,18 @@ void HOptimizedGraphBuilder::VisitObjectLiteral(ObjectLiteral* expr) { if (map.is_null()) { // If we don't know the monomorphic type, do a generic store. CHECK_ALIVE(store = BuildNamedGeneric( - STORE, literal, name, value)); + STORE, NULL, literal, name, value)); } else { PropertyAccessInfo info(this, STORE, ToType(map), name); if (info.CanAccessMonomorphic()) { HValue* checked_literal = Add<HCheckMaps>(literal, map); - ASSERT(!info.lookup()->IsPropertyCallbacks()); + DCHECK(!info.lookup()->IsPropertyCallbacks()); store = BuildMonomorphicAccess( &info, literal, checked_literal, value, BailoutId::None(), BailoutId::None()); } else { CHECK_ALIVE(store = BuildNamedGeneric( - STORE, literal, name, value)); + STORE, NULL, literal, name, value)); } } AddInstruction(store); @@ -5177,9 +5645,9 @@ void HOptimizedGraphBuilder::VisitObjectLiteral(ObjectLiteral* expr) { void HOptimizedGraphBuilder::VisitArrayLiteral(ArrayLiteral* expr) { - ASSERT(!HasStackOverflow()); - ASSERT(current_block() != NULL); - ASSERT(current_block()->HasPredecessor()); + DCHECK(!HasStackOverflow()); + DCHECK(current_block() != NULL); + DCHECK(current_block()->HasPredecessor()); expr->BuildConstantElements(isolate()); ZoneList<Expression*>* subexprs = expr->values(); int length = subexprs->length(); @@ -5214,14 +5682,14 @@ void HOptimizedGraphBuilder::VisitArrayLiteral(ArrayLiteral* expr) { isolate()->counters()->cow_arrays_created_runtime()->Increment(); } } else { - ASSERT(literals_cell->IsAllocationSite()); + DCHECK(literals_cell->IsAllocationSite()); site = Handle<AllocationSite>::cast(literals_cell); boilerplate_object = Handle<JSObject>( JSObject::cast(site->transition_info()), isolate()); } - ASSERT(!boilerplate_object.is_null()); - ASSERT(site->SitePointsToLiteral()); + DCHECK(!boilerplate_object.is_null()); + DCHECK(site->SitePointsToLiteral()); ElementsKind boilerplate_elements_kind = boilerplate_object->GetElementsKind(); @@ -5246,15 +5714,15 @@ void HOptimizedGraphBuilder::VisitArrayLiteral(ArrayLiteral* expr) { : ArrayLiteral::kNoFlags; flags |= 
ArrayLiteral::kDisableMementos; - Add<HPushArgument>(Add<HConstant>(literals)); - Add<HPushArgument>(Add<HConstant>(literal_index)); - Add<HPushArgument>(Add<HConstant>(constants)); - Add<HPushArgument>(Add<HConstant>(flags)); + Add<HPushArguments>(Add<HConstant>(literals), + Add<HConstant>(literal_index), + Add<HConstant>(constants), + Add<HConstant>(flags)); // TODO(mvstanton): Consider a flag to turn off creation of any // AllocationMementos for this call: we are in crankshaft and should have // learned enough about transition behavior to stop emitting mementos. - Runtime::FunctionId function_id = Runtime::kHiddenCreateArrayLiteral; + Runtime::FunctionId function_id = Runtime::kCreateArrayLiteral; literal = Add<HCallRuntime>(isolate()->factory()->empty_string(), Runtime::FunctionForId(function_id), 4); @@ -5333,9 +5801,8 @@ HInstruction* HOptimizedGraphBuilder::BuildLoadNamedField( Handle<JSObject>::cast(object)->Lookup(info->name(), &lookup); Handle<Object> value(lookup.GetLazyValue(), isolate()); - if (!value->IsTheHole()) { - return New<HConstant>(value); - } + DCHECK(!value->IsTheHole()); + return New<HConstant>(value); } } @@ -5345,7 +5812,6 @@ HInstruction* HOptimizedGraphBuilder::BuildLoadNamedField( checked_object = Add<HLoadNamedField>( checked_object, static_cast<HValue*>(NULL), access.WithRepresentation(Representation::Tagged())); - checked_object->set_type(HType::HeapNumber()); // Load the double value from it. access = HObjectAccess::ForHeapNumberValue(); } @@ -5357,13 +5823,7 @@ HInstruction* HOptimizedGraphBuilder::BuildLoadNamedField( UniqueSet<Map>* maps = new(zone()) UniqueSet<Map>(map_list->length(), zone()); for (int i = 0; i < map_list->length(); ++i) { - Handle<Map> map = map_list->at(i); - maps->Add(Unique<Map>::CreateImmovable(map), zone()); - // TODO(bmeurer): Get rid of this shit! - if (map->CanTransition()) { - Map::AddDependentCompilationInfo( - map, DependentCode::kPrototypeCheckGroup, top_info()); - } + maps->Add(Unique<Map>::CreateImmovable(map_list->at(i)), zone()); } return New<HLoadNamedField>( checked_object, checked_object, access, maps, info->field_type()); @@ -5387,14 +5847,13 @@ HInstruction* HOptimizedGraphBuilder::BuildStoreNamedField( NoObservableSideEffectsScope no_side_effects(this); HInstruction* heap_number_size = Add<HConstant>(HeapNumber::kSize); - PretenureFlag pretenure_flag = !FLAG_allocation_site_pretenuring ? - isolate()->heap()->GetPretenureMode() : NOT_TENURED; - + // TODO(hpayer): Allocation site pretenuring support. HInstruction* heap_number = Add<HAllocate>(heap_number_size, - HType::HeapNumber(), - pretenure_flag, - HEAP_NUMBER_TYPE); - AddStoreMapConstant(heap_number, isolate()->factory()->heap_number_map()); + HType::HeapObject(), + NOT_TENURED, + MUTABLE_HEAP_NUMBER_TYPE); + AddStoreMapConstant( + heap_number, isolate()->factory()->mutable_heap_number_map()); Add<HStoreNamedField>(heap_number, HObjectAccess::ForHeapNumberValue(), value); instr = New<HStoreNamedField>(checked_object->ActualValue(), @@ -5404,7 +5863,6 @@ HInstruction* HOptimizedGraphBuilder::BuildStoreNamedField( // Already holds a HeapNumber; load the box and write its value field. 
HInstruction* heap_number = Add<HLoadNamedField>( checked_object, static_cast<HValue*>(NULL), heap_number_access); - heap_number->set_type(HType::HeapNumber()); instr = New<HStoreNamedField>(heap_number, HObjectAccess::ForHeapNumberValue(), value, STORE_TO_INITIALIZED_ENTRY); @@ -5415,7 +5873,7 @@ HInstruction* HOptimizedGraphBuilder::BuildStoreNamedField( } if (!info->field_maps()->is_empty()) { - ASSERT(field_access.representation().IsHeapObject()); + DCHECK(field_access.representation().IsHeapObject()); value = Add<HCheckMaps>(value, info->field_maps()); } @@ -5427,7 +5885,7 @@ HInstruction* HOptimizedGraphBuilder::BuildStoreNamedField( if (transition_to_field) { Handle<Map> transition(info->transition()); - ASSERT(!transition->is_deprecated()); + DCHECK(!transition->is_deprecated()); instr->SetTransition(Add<HConstant>(transition)); } return instr; @@ -5471,7 +5929,7 @@ bool HOptimizedGraphBuilder::PropertyAccessInfo::IsCompatible( return constant_.is_identical_to(info->constant_); } - ASSERT(lookup_.IsField()); + DCHECK(lookup_.IsField()); if (!info->lookup_.IsField()) return false; Representation r = access_.representation(); @@ -5564,7 +6022,7 @@ void HOptimizedGraphBuilder::PropertyAccessInfo::LoadFieldMaps( // Collect the (stable) maps from the field type. int num_field_maps = field_type->NumClasses(); if (num_field_maps == 0) return; - ASSERT(access_.representation().IsHeapObject()); + DCHECK(access_.representation().IsHeapObject()); field_maps_.Reserve(num_field_maps, zone()); HeapType::Iterator<Map> it = field_type->Classes(); while (!it.Done()) { @@ -5577,23 +6035,11 @@ void HOptimizedGraphBuilder::PropertyAccessInfo::LoadFieldMaps( it.Advance(); } field_maps_.Sort(); - ASSERT_EQ(num_field_maps, field_maps_.length()); + DCHECK_EQ(num_field_maps, field_maps_.length()); // Determine field HType from field HeapType. - if (field_type->Is(HeapType::Number())) { - field_type_ = HType::HeapNumber(); - } else if (field_type->Is(HeapType::String())) { - field_type_ = HType::String(); - } else if (field_type->Is(HeapType::Boolean())) { - field_type_ = HType::Boolean(); - } else if (field_type->Is(HeapType::Array())) { - field_type_ = HType::JSArray(); - } else if (field_type->Is(HeapType::Object())) { - field_type_ = HType::JSObject(); - } else if (field_type->Is(HeapType::Null()) || - field_type->Is(HeapType::Undefined())) { - field_type_ = HType::NonPrimitive(); - } + field_type_ = HType::FromType<HeapType>(field_type); + DCHECK(field_type_.IsHeapObject()); // Add dependency on the map that introduced the field. 
Map::AddDependentCompilationInfo( @@ -5626,6 +6072,11 @@ bool HOptimizedGraphBuilder::PropertyAccessInfo::LookupInPrototypes() { bool HOptimizedGraphBuilder::PropertyAccessInfo::CanAccessMonomorphic() { if (!CanInlinePropertyAccess(type_)) return false; if (IsJSObjectFieldAccessor()) return IsLoad(); + if (this->map()->function_with_prototype() && + !this->map()->has_non_instance_prototype() && + name_.is_identical_to(isolate()->factory()->prototype_string())) { + return IsLoad(); + } if (!LookupDescriptor()) return false; if (lookup_.IsFound()) { if (IsLoad()) return true; @@ -5651,7 +6102,7 @@ bool HOptimizedGraphBuilder::PropertyAccessInfo::CanAccessMonomorphic() { bool HOptimizedGraphBuilder::PropertyAccessInfo::CanAccessAsMonomorphic( SmallMapList* types) { - ASSERT(type_->Is(ToType(types->first()))); + DCHECK(type_->Is(ToType(types->first()))); if (!CanAccessMonomorphic()) return false; STATIC_ASSERT(kMaxLoadPolymorphism == kMaxStorePolymorphism); if (types->length() > kMaxLoadPolymorphism) return false; @@ -5674,7 +6125,7 @@ bool HOptimizedGraphBuilder::PropertyAccessInfo::CanAccessAsMonomorphic( if (type_->Is(Type::Number())) return false; // Multiple maps cannot transition to the same target map. - ASSERT(!IsLoad() || !lookup_.IsTransition()); + DCHECK(!IsLoad() || !lookup_.IsTransition()); if (lookup_.IsTransition() && types->length() > 1) return false; for (int i = 1; i < types->length(); ++i) { @@ -5687,6 +6138,14 @@ bool HOptimizedGraphBuilder::PropertyAccessInfo::CanAccessAsMonomorphic( } +Handle<Map> HOptimizedGraphBuilder::PropertyAccessInfo::map() { + JSFunction* ctor = IC::GetRootConstructor( + type_, current_info()->closure()->context()->native_context()); + if (ctor != NULL) return handle(ctor->initial_map()); + return type_->AsClass()->Map(); +} + + static bool NeedsWrappingFor(Type* type, Handle<JSFunction> target) { return type->Is(Type::NumberOrString()) && target->shared()->strict_mode() == SLOPPY && @@ -5705,10 +6164,16 @@ HInstruction* HOptimizedGraphBuilder::BuildMonomorphicAccess( HObjectAccess access = HObjectAccess::ForMap(); // bogus default if (info->GetJSObjectFieldAccess(&access)) { - ASSERT(info->IsLoad()); + DCHECK(info->IsLoad()); return New<HLoadNamedField>(object, checked_object, access); } + if (info->name().is_identical_to(isolate()->factory()->prototype_string()) && + info->map()->function_with_prototype()) { + DCHECK(!info->map()->has_non_instance_prototype()); + return New<HLoadFunctionPrototype>(checked_object); + } + HValue* checked_holder = checked_object; if (info->has_holder()) { Handle<JSObject> prototype(JSObject::cast(info->map()->prototype())); @@ -5716,7 +6181,7 @@ HInstruction* HOptimizedGraphBuilder::BuildMonomorphicAccess( } if (!info->lookup()->IsFound()) { - ASSERT(info->IsLoad()); + DCHECK(info->IsLoad()); return graph()->GetConstantUndefined(); } @@ -5729,7 +6194,7 @@ HInstruction* HOptimizedGraphBuilder::BuildMonomorphicAccess( } if (info->lookup()->IsTransition()) { - ASSERT(!info->IsLoad()); + DCHECK(!info->IsLoad()); return BuildStoreNamedField(info, checked_object, value); } @@ -5757,7 +6222,7 @@ HInstruction* HOptimizedGraphBuilder::BuildMonomorphicAccess( return BuildCallConstantFunction(info->accessor(), argument_count); } - ASSERT(info->lookup()->IsConstant()); + DCHECK(info->lookup()->IsConstant()); if (info->IsLoad()) { return New<HConstant>(info->constant()); } else { @@ -5768,6 +6233,7 @@ HInstruction* HOptimizedGraphBuilder::BuildMonomorphicAccess( void HOptimizedGraphBuilder::HandlePolymorphicNamedFieldAccess( 
PropertyAccessType access_type, + Expression* expr, BailoutId ast_id, BailoutId return_id, HValue* object, @@ -5881,7 +6347,8 @@ void HOptimizedGraphBuilder::HandlePolymorphicNamedFieldAccess( if (count == types->length() && FLAG_deoptimize_uncommon_cases) { FinishExitWithHardDeoptimization("Unknown map in polymorphic access"); } else { - HInstruction* instr = BuildNamedGeneric(access_type, object, name, value); + HInstruction* instr = BuildNamedGeneric(access_type, expr, object, name, + value); AddInstruction(instr); if (!ast_context()->IsEffect()) Push(access_type == LOAD ? instr : value); @@ -5894,7 +6361,7 @@ void HOptimizedGraphBuilder::HandlePolymorphicNamedFieldAccess( } } - ASSERT(join != NULL); + DCHECK(join != NULL); if (join->HasPredecessor()) { join->SetJoinId(ast_id); set_current_block(join); @@ -5955,7 +6422,7 @@ void HOptimizedGraphBuilder::BuildStore(Expression* expr, Literal* key = prop->key()->AsLiteral(); Handle<String> name = Handle<String>::cast(key->value()); - ASSERT(!name.is_null()); + DCHECK(!name.is_null()); HInstruction* instr = BuildNamedAccess(STORE, ast_id, return_id, expr, object, name, value, is_uninitialized); @@ -5973,7 +6440,7 @@ void HOptimizedGraphBuilder::BuildStore(Expression* expr, void HOptimizedGraphBuilder::HandlePropertyAssignment(Assignment* expr) { Property* prop = expr->target()->AsProperty(); - ASSERT(prop != NULL); + DCHECK(prop != NULL); CHECK_ALIVE(VisitForValue(prop->obj())); if (!prop->key()->IsPropertyName()) { CHECK_ALIVE(VisitForValue(prop->key())); @@ -6032,7 +6499,7 @@ void HOptimizedGraphBuilder::HandleGlobalVariableAssignment( Add<HStoreNamedGeneric>(global_object, var->name(), value, function_strict_mode()); USE(instr); - ASSERT(instr->HasObservableSideEffects()); + DCHECK(instr->HasObservableSideEffects()); Add<HSimulate>(ast_id, REMOVABLE_SIMULATE); } } @@ -6042,7 +6509,7 @@ void HOptimizedGraphBuilder::HandleCompoundAssignment(Assignment* expr) { Expression* target = expr->target(); VariableProxy* proxy = target->AsVariableProxy(); Property* prop = target->AsProperty(); - ASSERT(proxy == NULL || prop == NULL); + DCHECK(proxy == NULL || prop == NULL); // We have a second position recorded in the FullCodeGenerator to have // type feedback for the binary operation.
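HandleCompoundAssignment(), whose property case continues below, lowers "target op= value" into an explicit load, a binary operation that carries its own type feedback position, and a store. An editorial sketch of that desugaring over a toy load/store pair instead of Hydrogen instructions:

#include <functional>

// Sketch only: load once, apply the operator, store the result back.
template <typename T>
static void CompoundAssign(const std::function<T()>& load,
                           const std::function<void(const T&)>& store,
                           const std::function<T(const T&, const T&)>& op,
                           const T& rhs) {
  T old_value = load();              // BuildLoad / named or keyed access.
  T new_value = op(old_value, rhs);  // Binary op with its own feedback.
  store(new_value);                  // BuildStore with the original ast ids.
}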
@@ -6121,8 +6588,7 @@ void HOptimizedGraphBuilder::HandleCompoundAssignment(Assignment* expr) { CHECK_ALIVE(VisitForValue(prop->obj())); HValue* object = Top(); HValue* key = NULL; - if ((!prop->IsFunctionPrototype() && !prop->key()->IsPropertyName()) || - prop->IsStringAccess()) { + if (!prop->key()->IsPropertyName() || prop->IsStringAccess()) { CHECK_ALIVE(VisitForValue(prop->key())); key = Top(); } @@ -6144,12 +6610,12 @@ void HOptimizedGraphBuilder::HandleCompoundAssignment(Assignment* expr) { void HOptimizedGraphBuilder::VisitAssignment(Assignment* expr) { - ASSERT(!HasStackOverflow()); - ASSERT(current_block() != NULL); - ASSERT(current_block()->HasPredecessor()); + DCHECK(!HasStackOverflow()); + DCHECK(current_block() != NULL); + DCHECK(current_block()->HasPredecessor()); VariableProxy* proxy = expr->target()->AsVariableProxy(); Property* prop = expr->target()->AsProperty(); - ASSERT(proxy == NULL || prop == NULL); + DCHECK(proxy == NULL || prop == NULL); if (expr->is_compound()) { HandleCompoundAssignment(expr); @@ -6245,7 +6711,7 @@ void HOptimizedGraphBuilder::VisitAssignment(Assignment* expr) { expr->op() == Token::INIT_CONST) { mode = HStoreContextSlot::kNoCheck; } else { - ASSERT(expr->op() == Token::INIT_CONST_LEGACY); + DCHECK(expr->op() == Token::INIT_CONST_LEGACY); mode = HStoreContextSlot::kCheckIgnoreAssignment; } @@ -6275,20 +6741,20 @@ void HOptimizedGraphBuilder::VisitYield(Yield* expr) { void HOptimizedGraphBuilder::VisitThrow(Throw* expr) { - ASSERT(!HasStackOverflow()); - ASSERT(current_block() != NULL); - ASSERT(current_block()->HasPredecessor()); + DCHECK(!HasStackOverflow()); + DCHECK(current_block() != NULL); + DCHECK(current_block()->HasPredecessor()); // We don't optimize functions with invalid left-hand sides in // assignments, count operations, or for-in. Consequently throw can // currently only occur in an effect context. 
- ASSERT(ast_context()->IsEffect()); + DCHECK(ast_context()->IsEffect()); CHECK_ALIVE(VisitForValue(expr->exception())); HValue* value = environment()->Pop(); if (!FLAG_hydrogen_track_positions) SetSourcePosition(expr->position()); - Add<HPushArgument>(value); + Add<HPushArguments>(value); Add<HCallRuntime>(isolate()->factory()->empty_string(), - Runtime::FunctionForId(Runtime::kHiddenThrow), 1); + Runtime::FunctionForId(Runtime::kThrow), 1); Add<HSimulate>(expr->id()); // If the throw definitely exits the function, we can finish with a dummy @@ -6328,6 +6794,7 @@ HInstruction* HGraphBuilder::AddLoadStringLength(HValue* string) { HInstruction* HOptimizedGraphBuilder::BuildNamedGeneric( PropertyAccessType access_type, + Expression* expr, HValue* object, Handle<String> name, HValue* value, @@ -6337,7 +6804,15 @@ HInstruction* HOptimizedGraphBuilder::BuildNamedGeneric( Deoptimizer::SOFT); } if (access_type == LOAD) { - return New<HLoadNamedGeneric>(object, name); + HLoadNamedGeneric* result = New<HLoadNamedGeneric>(object, name); + if (FLAG_vector_ics) { + Handle<SharedFunctionInfo> current_shared = + function_state()->compilation_info()->shared_info(); + result->SetVectorAndSlot( + handle(current_shared->feedback_vector(), isolate()), + expr->AsProperty()->PropertyFeedbackSlot()); + } + return result; } else { return New<HStoreNamedGeneric>(object, name, value, function_strict_mode()); } @@ -6347,11 +6822,20 @@ HInstruction* HOptimizedGraphBuilder::BuildNamedGeneric( HInstruction* HOptimizedGraphBuilder::BuildKeyedGeneric( PropertyAccessType access_type, + Expression* expr, HValue* object, HValue* key, HValue* value) { if (access_type == LOAD) { - return New<HLoadKeyedGeneric>(object, key); + HLoadKeyedGeneric* result = New<HLoadKeyedGeneric>(object, key); + if (FLAG_vector_ics) { + Handle<SharedFunctionInfo> current_shared = + function_state()->compilation_info()->shared_info(); + result->SetVectorAndSlot( + handle(current_shared->feedback_vector(), isolate()), + expr->AsProperty()->PropertyFeedbackSlot()); + } + return result; } else { return New<HStoreKeyedGeneric>(object, key, value, function_strict_mode()); } @@ -6391,15 +6875,16 @@ HInstruction* HOptimizedGraphBuilder::BuildMonomorphicElementAccess( // monomorphic stores need a prototype chain check because shape // changes could allow callbacks on elements in the chain that // aren't compatible with monomorphic keyed stores. 
- Handle<JSObject> prototype(JSObject::cast(map->prototype())); - Object* holder = map->prototype(); - while (holder->GetPrototype(isolate())->IsJSObject()) { - holder = holder->GetPrototype(isolate()); + PrototypeIterator iter(map); + JSObject* holder = NULL; + while (!iter.IsAtEnd()) { + holder = JSObject::cast(*PrototypeIterator::GetCurrent(iter)); + iter.Advance(); } - ASSERT(holder->GetPrototype(isolate())->IsNull()); + DCHECK(holder && holder->IsJSObject()); - BuildCheckPrototypeMaps(prototype, - Handle<JSObject>(JSObject::cast(holder))); + BuildCheckPrototypeMaps(handle(JSObject::cast(map->prototype())), + Handle<JSObject>(holder)); } LoadKeyedHoleMode load_mode = BuildKeyedHoleMode(map); @@ -6478,6 +6963,7 @@ HInstruction* HOptimizedGraphBuilder::TryBuildConsolidatedElementLoad( HValue* HOptimizedGraphBuilder::HandlePolymorphicElementAccess( + Expression* expr, HValue* object, HValue* key, HValue* val, @@ -6509,7 +6995,8 @@ HValue* HOptimizedGraphBuilder::HandlePolymorphicElementAccess( possible_transitioned_maps.Add(map); } if (elements_kind == SLOPPY_ARGUMENTS_ELEMENTS) { - HInstruction* result = BuildKeyedGeneric(access_type, object, key, val); + HInstruction* result = BuildKeyedGeneric(access_type, expr, object, key, + val); *has_side_effects = result->HasObservableSideEffects(); return AddInstruction(result); } @@ -6526,9 +7013,9 @@ HValue* HOptimizedGraphBuilder::HandlePolymorphicElementAccess( HTransitionElementsKind* transition = NULL; for (int i = 0; i < maps->length(); ++i) { Handle<Map> map = maps->at(i); - ASSERT(map->IsMap()); + DCHECK(map->IsMap()); if (!transition_target.at(i).is_null()) { - ASSERT(Map::IsValidElementsTransition( + DCHECK(Map::IsValidElementsTransition( map->elements_kind(), transition_target.at(i)->elements_kind())); transition = Add<HTransitionElementsKind>(object, map, @@ -6540,13 +7027,14 @@ HValue* HOptimizedGraphBuilder::HandlePolymorphicElementAccess( // If only one map is left after transitioning, handle this case // monomorphically. - ASSERT(untransitionable_maps.length() >= 1); + DCHECK(untransitionable_maps.length() >= 1); if (untransitionable_maps.length() == 1) { Handle<Map> untransitionable_map = untransitionable_maps[0]; HInstruction* instr = NULL; if (untransitionable_map->has_slow_elements_kind() || !untransitionable_map->IsJSObjectMap()) { - instr = AddInstruction(BuildKeyedGeneric(access_type, object, key, val)); + instr = AddInstruction(BuildKeyedGeneric(access_type, expr, object, key, + val)); } else { instr = BuildMonomorphicElementAccess( object, key, val, transition, untransitionable_map, access_type, @@ -6571,9 +7059,10 @@ HValue* HOptimizedGraphBuilder::HandlePolymorphicElementAccess( set_current_block(this_map); HInstruction* access = NULL; if (IsDictionaryElementsKind(elements_kind)) { - access = AddInstruction(BuildKeyedGeneric(access_type, object, key, val)); + access = AddInstruction(BuildKeyedGeneric(access_type, expr, object, key, + val)); } else { - ASSERT(IsFastElementsKind(elements_kind) || + DCHECK(IsFastElementsKind(elements_kind) || IsExternalArrayElementsKind(elements_kind) || IsFixedTypedArrayElementsKind(elements_kind)); LoadKeyedHoleMode load_mode = BuildKeyedHoleMode(map); @@ -6600,7 +7089,7 @@ HValue* HOptimizedGraphBuilder::HandlePolymorphicElementAccess( // necessary because FinishExitWithHardDeoptimization does an AbnormalExit // rather than joining the join block. If this becomes an issue, insert a // generic access in the case length() == 0. 
- ASSERT(join->predecessors()->length() > 0); + DCHECK(join->predecessors()->length() > 0); // Deopt if none of the cases matched. NoObservableSideEffectsScope scope(this); FinishExitWithHardDeoptimization("Unknown map in polymorphic element access"); @@ -6616,7 +7105,7 @@ HValue* HOptimizedGraphBuilder::HandleKeyedElementAccess( Expression* expr, PropertyAccessType access_type, bool* has_side_effects) { - ASSERT(!expr->IsPropertyName()); + DCHECK(!expr->IsPropertyName()); HInstruction* instr = NULL; SmallMapList* types; @@ -6642,7 +7131,8 @@ HValue* HOptimizedGraphBuilder::HandleKeyedElementAccess( if (monomorphic) { Handle<Map> map = types->first(); if (map->has_slow_elements_kind() || !map->IsJSObjectMap()) { - instr = AddInstruction(BuildKeyedGeneric(access_type, obj, key, val)); + instr = AddInstruction(BuildKeyedGeneric(access_type, expr, obj, key, + val)); } else { BuildCheckHeapObject(obj); instr = BuildMonomorphicElementAccess( @@ -6650,7 +7140,7 @@ HValue* HOptimizedGraphBuilder::HandleKeyedElementAccess( } } else if (!force_generic && (types != NULL && !types->is_empty())) { return HandlePolymorphicElementAccess( - obj, key, val, types, access_type, + expr, obj, key, val, types, access_type, expr->GetStoreMode(), has_side_effects); } else { if (access_type == STORE) { @@ -6665,7 +7155,7 @@ HValue* HOptimizedGraphBuilder::HandleKeyedElementAccess( Deoptimizer::SOFT); } } - instr = AddInstruction(BuildKeyedGeneric(access_type, obj, key, val)); + instr = AddInstruction(BuildKeyedGeneric(access_type, expr, obj, key, val)); } *has_side_effects = instr->HasObservableSideEffects(); return instr; @@ -6688,7 +7178,7 @@ void HOptimizedGraphBuilder::EnsureArgumentsArePushedForAccess() { HInstruction* insert_after = entry; for (int i = 0; i < arguments_values->length(); i++) { HValue* argument = arguments_values->at(i); - HInstruction* push_argument = New<HPushArgument>(argument); + HInstruction* push_argument = New<HPushArguments>(argument); push_argument->InsertAfter(insert_after); insert_after = push_argument; } @@ -6760,19 +7250,19 @@ HInstruction* HOptimizedGraphBuilder::BuildNamedAccess( bool is_uninitialized) { SmallMapList* types; ComputeReceiverTypes(expr, object, &types, zone()); - ASSERT(types != NULL); + DCHECK(types != NULL); if (types->length() > 0) { PropertyAccessInfo info(this, access, ToType(types->first()), name); if (!info.CanAccessAsMonomorphic(types)) { HandlePolymorphicNamedFieldAccess( - access, ast_id, return_id, object, value, types, name); + access, expr, ast_id, return_id, object, value, types, name); return NULL; } HValue* checked_object; // Type::Number() is only supported by polymorphic load/call handling. 
- ASSERT(!info.type()->Is(Type::Number())); + DCHECK(!info.type()->Is(Type::Number())); BuildCheckHeapObject(object); if (AreStringTypes(types)) { checked_object = @@ -6784,7 +7274,7 @@ HInstruction* HOptimizedGraphBuilder::BuildNamedAccess( &info, object, checked_object, value, ast_id, return_id); } - return BuildNamedGeneric(access, object, name, value, is_uninitialized); + return BuildNamedGeneric(access, expr, object, name, value, is_uninitialized); } @@ -6808,11 +7298,6 @@ void HOptimizedGraphBuilder::BuildLoad(Property* expr, AddInstruction(char_code); instr = NewUncasted<HStringCharFromCode>(char_code); - } else if (expr->IsFunctionPrototype()) { - HValue* function = Pop(); - BuildCheckHeapObject(function); - instr = New<HLoadFunctionPrototype>(function); - } else if (expr->key()->IsPropertyName()) { Handle<String> name = expr->key()->AsLiteral()->AsPropertyName(); HValue* object = Pop(); @@ -6845,15 +7330,14 @@ void HOptimizedGraphBuilder::BuildLoad(Property* expr, void HOptimizedGraphBuilder::VisitProperty(Property* expr) { - ASSERT(!HasStackOverflow()); - ASSERT(current_block() != NULL); - ASSERT(current_block()->HasPredecessor()); + DCHECK(!HasStackOverflow()); + DCHECK(current_block() != NULL); + DCHECK(current_block()->HasPredecessor()); if (TryArgumentsAccess(expr)) return; CHECK_ALIVE(VisitForValue(expr->obj())); - if ((!expr->IsFunctionPrototype() && !expr->key()->IsPropertyName()) || - expr->IsStringAccess()) { + if (!expr->key()->IsPropertyName() || expr->IsStringAccess()) { CHECK_ALIVE(VisitForValue(expr->key())); } @@ -6871,14 +7355,19 @@ HInstruction* HGraphBuilder::BuildConstantMapCheck(Handle<JSObject> constant) { HInstruction* HGraphBuilder::BuildCheckPrototypeMaps(Handle<JSObject> prototype, Handle<JSObject> holder) { - while (holder.is_null() || !prototype.is_identical_to(holder)) { - BuildConstantMapCheck(prototype); - Object* next_prototype = prototype->GetPrototype(); - if (next_prototype->IsNull()) return NULL; - CHECK(next_prototype->IsJSObject()); - prototype = handle(JSObject::cast(next_prototype)); + PrototypeIterator iter(isolate(), prototype, + PrototypeIterator::START_AT_RECEIVER); + while (holder.is_null() || + !PrototypeIterator::GetCurrent(iter).is_identical_to(holder)) { + BuildConstantMapCheck( + Handle<JSObject>::cast(PrototypeIterator::GetCurrent(iter))); + iter.Advance(); + if (iter.IsAtEnd()) { + return NULL; + } } - return BuildConstantMapCheck(prototype); + return BuildConstantMapCheck( + Handle<JSObject>::cast(PrototypeIterator::GetCurrent(iter))); } @@ -6906,7 +7395,7 @@ HInstruction* HOptimizedGraphBuilder::NewArgumentAdaptorCall( HValue* arity = Add<HConstant>(argument_count - 1); - HValue* op_vals[] = { fun, context, arity, expected_param_count }; + HValue* op_vals[] = { context, fun, arity, expected_param_count }; Handle<Code> adaptor = isolate()->builtins()->ArgumentsAdaptorTrampoline(); @@ -6914,7 +7403,7 @@ HInstruction* HOptimizedGraphBuilder::NewArgumentAdaptorCall( return New<HCallWithDescriptor>( adaptor_value, argument_count, descriptor, - Vector<HValue*>(op_vals, descriptor->environment_length())); + Vector<HValue*>(op_vals, descriptor->GetEnvironmentLength())); } @@ -6950,8 +7439,8 @@ HInstruction* HOptimizedGraphBuilder::BuildCallConstantFunction( class FunctionSorter { public: - FunctionSorter(int index = 0, int ticks = 0, int size = 0) - : index_(index), ticks_(ticks), size_(size) { } + explicit FunctionSorter(int index = 0, int ticks = 0, int size = 0) + : index_(index), ticks_(ticks), size_(size) {} int index() const { 
return index_; } int ticks() const { return ticks_; } @@ -7108,7 +7597,7 @@ void HOptimizedGraphBuilder::HandlePolymorphicCallNamed( } else { Property* prop = expr->expression()->AsProperty(); HInstruction* function = BuildNamedGeneric( - LOAD, receiver, name, NULL, prop->IsUninitialized()); + LOAD, prop, receiver, name, NULL, prop->IsUninitialized()); AddInstruction(function); Push(function); AddSimulate(prop->LoadId(), REMOVABLE_SIMULATE); @@ -7138,7 +7627,7 @@ void HOptimizedGraphBuilder::HandlePolymorphicCallNamed( // We assume that control flow is always live after an expression. So // even without predecessors to the join block, we set it as the exit // block and continue by adding instructions there. - ASSERT(join != NULL); + DCHECK(join != NULL); if (join->HasPredecessor()) { set_current_block(join); join->SetJoinId(expr->id()); @@ -7202,7 +7691,7 @@ int HOptimizedGraphBuilder::InliningAstSize(Handle<JSFunction> target) { TraceInline(target, caller, "target not inlineable"); return kNotInlinable; } - if (target_shared->dont_inline() || target_shared->dont_optimize()) { + if (target_shared->DisableOptimizationReason() != kNoReason) { TraceInline(target, caller, "target contains unsupported syntax [early]"); return kNotInlinable; } @@ -7262,6 +7751,9 @@ bool HOptimizedGraphBuilder::TryInline(Handle<JSFunction> target, // Parse and allocate variables. CompilationInfo target_info(target, zone()); + // Use the same AstValueFactory for creating strings in the sub-compilation + // step, but don't transfer ownership to target_info. + target_info.SetAstValueFactory(top_info()->ast_value_factory(), false); Handle<SharedFunctionInfo> target_shared(target->shared()); if (!Parser::Parse(&target_info) || !Scope::Analyze(&target_info)) { if (target_info.isolate()->has_pending_exception()) { @@ -7286,8 +7778,7 @@ bool HOptimizedGraphBuilder::TryInline(Handle<JSFunction> target, TraceInline(target, caller, "target AST is too large [late]"); return false; } - AstProperties::Flags* flags(function->flags()); - if (flags->Contains(kDontInline) || function->dont_optimize()) { + if (function->dont_optimize()) { TraceInline(target, caller, "target contains unsupported syntax [late]"); return false; } @@ -7348,7 +7839,7 @@ bool HOptimizedGraphBuilder::TryInline(Handle<JSFunction> target, // TryInline should always return true). // Type-check the inlined function. - ASSERT(target_shared->has_deoptimization_support()); + DCHECK(target_shared->has_deoptimization_support()); AstTyper::Run(&target_info); int function_id = graph()->TraceInlinedFunction(target_shared, position); @@ -7371,19 +7862,19 @@ bool HOptimizedGraphBuilder::TryInline(Handle<JSFunction> target, HConstant* context = Add<HConstant>(Handle<Context>(target->context())); inner_env->BindContext(context); - HArgumentsObject* arguments_object = NULL; - - // If the function uses arguments object create and bind one, also copy + // Create a dematerialized arguments object for the function, also copy the // current arguments values to use them for materialization. + HEnvironment* arguments_env = inner_env->arguments_environment(); + int parameter_count = arguments_env->parameter_count(); + HArgumentsObject* arguments_object = Add<HArgumentsObject>(parameter_count); + for (int i = 0; i < parameter_count; i++) { + arguments_object->AddArgument(arguments_env->Lookup(i), zone()); + } + + // If the function uses arguments object then bind one.
if (function->scope()->arguments() != NULL) { - ASSERT(function->scope()->arguments()->IsStackAllocated()); - HEnvironment* arguments_env = inner_env->arguments_environment(); - int arguments_count = arguments_env->parameter_count(); - arguments_object = Add<HArgumentsObject>(arguments_count); + DCHECK(function->scope()->arguments()->IsStackAllocated()); inner_env->Bind(function->scope()->arguments(), arguments_object); - for (int i = 0; i < arguments_count; i++) { - arguments_object->AddArgument(arguments_env->Lookup(i), zone()); - } } // Capture the state before invoking the inlined function for deopt in the @@ -7394,7 +7885,8 @@ bool HOptimizedGraphBuilder::TryInline(Handle<JSFunction> target, Add<HSimulate>(BailoutId::None()); current_block()->UpdateEnvironment(inner_env); - + Scope* saved_scope = scope(); + set_scope(target_info.scope()); HEnterInlined* enter_inlined = Add<HEnterInlined>(return_id, target, arguments_count, function, function_state()->inlining_kind(), @@ -7404,6 +7896,7 @@ bool HOptimizedGraphBuilder::TryInline(Handle<JSFunction> target, VisitDeclarations(target_info.scope()->declarations()); VisitStatements(function->body()); + set_scope(saved_scope); if (HasStackOverflow()) { // Bail out if the inline function did, as we cannot residualize a call // instead. @@ -7418,7 +7911,7 @@ bool HOptimizedGraphBuilder::TryInline(Handle<JSFunction> target, inlined_count_ += nodes_added; Handle<Code> unoptimized_code(target_shared->code()); - ASSERT(unoptimized_code->kind() == Code::FUNCTION); + DCHECK(unoptimized_code->kind() == Code::FUNCTION); Handle<TypeFeedbackInfo> type_info( TypeFeedbackInfo::cast(unoptimized_code->type_feedback_info())); graph()->update_type_change_checksum(type_info->own_type_change_checksum()); @@ -7436,7 +7929,7 @@ bool HOptimizedGraphBuilder::TryInline(Handle<JSFunction> target, } else if (call_context()->IsEffect()) { Goto(function_return(), state); } else { - ASSERT(call_context()->IsValue()); + DCHECK(call_context()->IsValue()); AddLeaveInlined(implicit_return_value, state); } } else if (state->inlining_kind() == SETTER_CALL_RETURN) { @@ -7448,7 +7941,7 @@ bool HOptimizedGraphBuilder::TryInline(Handle<JSFunction> target, } else if (call_context()->IsEffect()) { Goto(function_return(), state); } else { - ASSERT(call_context()->IsValue()); + DCHECK(call_context()->IsValue()); AddLeaveInlined(implicit_return_value, state); } } else { @@ -7459,7 +7952,7 @@ bool HOptimizedGraphBuilder::TryInline(Handle<JSFunction> target, } else if (call_context()->IsEffect()) { Goto(function_return(), state); } else { - ASSERT(call_context()->IsValue()); + DCHECK(call_context()->IsValue()); AddLeaveInlined(undefined, state); } } @@ -7473,7 +7966,7 @@ bool HOptimizedGraphBuilder::TryInline(Handle<JSFunction> target, HEnterInlined* entry = function_state()->entry(); // Pop the return test context from the expression context stack. - ASSERT(ast_context() == inlined_test_context()); + DCHECK(ast_context() == inlined_test_context()); ClearInlinedTestContext(); delete target_state; @@ -7579,6 +8072,7 @@ bool HOptimizedGraphBuilder::TryInlineBuiltinFunctionCall(Call* expr) { if (!FLAG_fast_math) break; // Fall through if FLAG_fast_math. case kMathRound: + case kMathFround: case kMathFloor: case kMathAbs: case kMathSqrt: @@ -7650,6 +8144,7 @@ bool HOptimizedGraphBuilder::TryInlineBuiltinMethodCall( if (!FLAG_fast_math) break; // Fall through if FLAG_fast_math. 
case kMathRound: + case kMathFround: case kMathFloor: case kMathAbs: case kMathSqrt: @@ -7680,7 +8175,7 @@ bool HOptimizedGraphBuilder::TryInlineBuiltinMethodCall( left, kMathPowHalf); // MathPowHalf doesn't have side effects so there's no need for // an environment simulation here. - ASSERT(!sqrt->HasObservableSideEffects()); + DCHECK(!sqrt->HasObservableSideEffects()); result = NewUncasted<HDiv>(one, sqrt); } else if (exponent == 2.0) { result = NewUncasted<HMul>(left, left); @@ -7723,7 +8218,7 @@ bool HOptimizedGraphBuilder::TryInlineBuiltinMethodCall( ElementsKind elements_kind = receiver_map->elements_kind(); if (!IsFastElementsKind(elements_kind)) return false; if (receiver_map->is_observed()) return false; - ASSERT(receiver_map->is_extensible()); + DCHECK(receiver_map->is_extensible()); Drop(expr->arguments()->length()); HValue* result; @@ -7787,7 +8282,8 @@ bool HOptimizedGraphBuilder::TryInlineBuiltinMethodCall( ElementsKind elements_kind = receiver_map->elements_kind(); if (!IsFastElementsKind(elements_kind)) return false; if (receiver_map->is_observed()) return false; - ASSERT(receiver_map->is_extensible()); + if (JSArray::IsReadOnlyLengthDescriptor(receiver_map)) return false; + DCHECK(receiver_map->is_extensible()); // If there may be elements accessors in the prototype chain, the fast // inlined version can't be used. @@ -7833,6 +8329,156 @@ bool HOptimizedGraphBuilder::TryInlineBuiltinMethodCall( ast_context()->ReturnValue(new_size); return true; } + case kArrayShift: { + if (receiver_map.is_null()) return false; + if (receiver_map->instance_type() != JS_ARRAY_TYPE) return false; + ElementsKind kind = receiver_map->elements_kind(); + if (!IsFastElementsKind(kind)) return false; + if (receiver_map->is_observed()) return false; + DCHECK(receiver_map->is_extensible()); + + // If there may be elements accessors in the prototype chain, the fast + // inlined version can't be used. + if (receiver_map->DictionaryElementsInPrototypeChainOnly()) return false; + + // If there currently can be no elements accessors on the prototype chain, + // it doesn't mean that there won't be any later. Install a full prototype + // chain check to trap element accessors being installed on the prototype + // chain, which would cause elements to go to dictionary mode and result + // in a map change. + BuildCheckPrototypeMaps( + handle(JSObject::cast(receiver_map->prototype()), isolate()), + Handle<JSObject>::null()); + + // Threshold for fast inlined Array.shift(). + HConstant* inline_threshold = Add<HConstant>(static_cast<int32_t>(16)); + + Drop(expr->arguments()->length()); + HValue* receiver = Pop(); + HValue* function = Pop(); + HValue* result; + + { + NoObservableSideEffectsScope scope(this); + + HValue* length = Add<HLoadNamedField>( + receiver, static_cast<HValue*>(NULL), + HObjectAccess::ForArrayLength(kind)); + + IfBuilder if_lengthiszero(this); + HValue* lengthiszero = if_lengthiszero.If<HCompareNumericAndBranch>( + length, graph()->GetConstant0(), Token::EQ); + if_lengthiszero.Then(); + { + if (!ast_context()->IsEffect()) Push(graph()->GetConstantUndefined()); + } + if_lengthiszero.Else(); + { + HValue* elements = AddLoadElements(receiver); + + // Check if we can use the fast inlined Array.shift(). + IfBuilder if_inline(this); + if_inline.If<HCompareNumericAndBranch>( + length, inline_threshold, Token::LTE); + if (IsFastSmiOrObjectElementsKind(kind)) { + // We cannot handle copy-on-write backing stores here. 
+ if_inline.AndIf<HCompareMap>( + elements, isolate()->factory()->fixed_array_map()); + } + if_inline.Then(); + { + // Remember the result. + if (!ast_context()->IsEffect()) { + Push(AddElementAccess(elements, graph()->GetConstant0(), NULL, + lengthiszero, kind, LOAD)); + } + + // Compute the new length. + HValue* new_length = AddUncasted<HSub>( + length, graph()->GetConstant1()); + new_length->ClearFlag(HValue::kCanOverflow); + + // Copy the remaining elements. + LoopBuilder loop(this, context(), LoopBuilder::kPostIncrement); + { + HValue* new_key = loop.BeginBody( + graph()->GetConstant0(), new_length, Token::LT); + HValue* key = AddUncasted<HAdd>(new_key, graph()->GetConstant1()); + key->ClearFlag(HValue::kCanOverflow); + HValue* element = AddUncasted<HLoadKeyed>( + elements, key, lengthiszero, kind, ALLOW_RETURN_HOLE); + HStoreKeyed* store = Add<HStoreKeyed>( + elements, new_key, element, kind); + store->SetFlag(HValue::kAllowUndefinedAsNaN); + } + loop.EndBody(); + + // Put a hole at the end. + HValue* hole = IsFastSmiOrObjectElementsKind(kind) + ? Add<HConstant>(isolate()->factory()->the_hole_value()) + : Add<HConstant>(FixedDoubleArray::hole_nan_as_double()); + if (IsFastSmiOrObjectElementsKind(kind)) kind = FAST_HOLEY_ELEMENTS; + Add<HStoreKeyed>( + elements, new_length, hole, kind, INITIALIZING_STORE); + + // Remember new length. + Add<HStoreNamedField>( + receiver, HObjectAccess::ForArrayLength(kind), + new_length, STORE_TO_INITIALIZED_ENTRY); + } + if_inline.Else(); + { + Add<HPushArguments>(receiver); + result = Add<HCallJSFunction>(function, 1, true); + if (!ast_context()->IsEffect()) Push(result); + } + if_inline.End(); + } + if_lengthiszero.End(); + } + result = ast_context()->IsEffect() ? graph()->GetConstant0() : Top(); + Add<HSimulate>(expr->id(), REMOVABLE_SIMULATE); + if (!ast_context()->IsEffect()) Drop(1); + ast_context()->ReturnValue(result); + return true; + } + case kArrayIndexOf: + case kArrayLastIndexOf: { + if (receiver_map.is_null()) return false; + if (receiver_map->instance_type() != JS_ARRAY_TYPE) return false; + ElementsKind kind = receiver_map->elements_kind(); + if (!IsFastElementsKind(kind)) return false; + if (receiver_map->is_observed()) return false; + if (argument_count != 2) return false; + DCHECK(receiver_map->is_extensible()); + + // If there may be elements accessors in the prototype chain, the fast + // inlined version can't be used. + if (receiver_map->DictionaryElementsInPrototypeChainOnly()) return false; + + // If there currently can be no elements accessors on the prototype chain, + // it doesn't mean that there won't be any later. Install a full prototype + // chain check to trap element accessors being installed on the prototype + // chain, which would cause elements to go to dictionary mode and result + // in a map change. + BuildCheckPrototypeMaps( + handle(JSObject::cast(receiver_map->prototype()), isolate()), + Handle<JSObject>::null()); + + HValue* search_element = Pop(); + HValue* receiver = Pop(); + Drop(1); // Drop function. + + ArrayIndexOfMode mode = (id == kArrayIndexOf) + ? kFirstIndexOf : kLastIndexOf; + HValue* index = BuildArrayIndexOf(receiver, search_element, kind, mode); + + if (!ast_context()->IsEffect()) Push(index); + Add<HSimulate>(expr->id(), REMOVABLE_SIMULATE); + if (!ast_context()->IsEffect()) Drop(1); + ast_context()->ReturnValue(index); + return true; + } default: // Not yet supported for inlining. 
break; @@ -7910,11 +8556,9 @@ bool HOptimizedGraphBuilder::TryInlineApiCall(Handle<JSFunction> function, if (call_type == kCallApiFunction) { // Cannot embed a direct reference to the global proxy map // as it maybe dropped on deserialization. - CHECK(!Serializer::enabled(isolate())); - ASSERT_EQ(0, receiver_maps->length()); - receiver_maps->Add(handle( - function->context()->global_object()->global_receiver()->map()), - zone()); + CHECK(!isolate()->serializer_enabled()); + DCHECK_EQ(0, receiver_maps->length()); + receiver_maps->Add(handle(function->global_proxy()->map()), zone()); } CallOptimization::HolderLookup holder_lookup = CallOptimization::kHolderNotFound; @@ -7939,7 +8583,7 @@ bool HOptimizedGraphBuilder::TryInlineApiCall(Handle<JSFunction> function, if (holder_lookup == CallOptimization::kHolderFound) { AddCheckPrototypeMaps(api_holder, receiver_maps->first()); } else { - ASSERT_EQ(holder_lookup, CallOptimization::kHolderIsReceiver); + DCHECK_EQ(holder_lookup, CallOptimization::kHolderIsReceiver); } // Includes receiver. PushArgumentsFromEnvironment(argc + 1); @@ -7948,23 +8592,22 @@ bool HOptimizedGraphBuilder::TryInlineApiCall(Handle<JSFunction> function, break; case kCallApiGetter: // Receiver and prototype chain cannot have changed. - ASSERT_EQ(0, argc); - ASSERT_EQ(NULL, receiver); + DCHECK_EQ(0, argc); + DCHECK_EQ(NULL, receiver); // Receiver is on expression stack. receiver = Pop(); - Add<HPushArgument>(receiver); + Add<HPushArguments>(receiver); break; case kCallApiSetter: { is_store = true; // Receiver and prototype chain cannot have changed. - ASSERT_EQ(1, argc); - ASSERT_EQ(NULL, receiver); + DCHECK_EQ(1, argc); + DCHECK_EQ(NULL, receiver); // Receiver and value are on expression stack. HValue* value = Pop(); receiver = Pop(); - Add<HPushArgument>(receiver); - Add<HPushArgument>(value); + Add<HPushArguments>(receiver, value); break; } } @@ -7992,11 +8635,11 @@ bool HOptimizedGraphBuilder::TryInlineApiCall(Handle<JSFunction> function, HValue* api_function_address = Add<HConstant>(ExternalReference(ref)); HValue* op_vals[] = { + context(), Add<HConstant>(function), call_data, holder, - api_function_address, - context() + api_function_address }; CallInterfaceDescriptor* descriptor = @@ -8006,12 +8649,12 @@ bool HOptimizedGraphBuilder::TryInlineApiCall(Handle<JSFunction> function, Handle<Code> code = stub.GetCode(); HConstant* code_value = Add<HConstant>(code); - ASSERT((sizeof(op_vals) / kPointerSize) == - descriptor->environment_length()); + DCHECK((sizeof(op_vals) / kPointerSize) == + descriptor->GetEnvironmentLength()); HInstruction* call = New<HCallWithDescriptor>( code_value, argc + 1, descriptor, - Vector<HValue*>(op_vals, descriptor->environment_length())); + Vector<HValue*>(op_vals, descriptor->GetEnvironmentLength())); if (drop_extra) Drop(1); // Drop function. ast_context()->ReturnInstruction(call, ast_id); @@ -8020,7 +8663,7 @@ bool HOptimizedGraphBuilder::TryInlineApiCall(Handle<JSFunction> function, bool HOptimizedGraphBuilder::TryCallApply(Call* expr) { - ASSERT(expr->expression()->IsProperty()); + DCHECK(expr->expression()->IsProperty()); if (!expr->IsMonomorphic()) { return false; @@ -8063,7 +8706,7 @@ bool HOptimizedGraphBuilder::TryCallApply(Call* expr) { } else { // We are inside inlined function and we know exactly what is inside // arguments object. But we need to be able to materialize at deopt. 
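// ---------------------------------------------------------------------------
// Illustrative sketch (not V8 code): the kArrayShift fast path added above,
// restated as stand-alone C++. The threshold of 16 mirrors the hunk's
// inline_threshold constant; the copy-on-write backing-store check is
// omitted here, and all names are invented for illustration.
#include <cstddef>
#include <vector>

static const size_t kInlineShiftThreshold = 16;

// Returns true when the fast path handled the shift (writing the shifted-off
// element to *result, or leaving it alone for the empty "undefined" case);
// a false return models falling back to the generic Array.prototype.shift.
bool ShiftFastPathSketch(std::vector<double>* elements, double* result) {
  if (elements->empty()) return true;  // length == 0: result is "undefined"
  if (elements->size() > kInlineShiftThreshold) return false;  // generic call
  *result = (*elements)[0];                       // remember elements[0]
  for (size_t i = 0; i + 1 < elements->size(); ++i) {
    (*elements)[i] = (*elements)[i + 1];          // copy remaining elements down
  }
  elements->pop_back();                           // "store a hole", new length
  return true;
}
// ---------------------------------------------------------------------------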
- ASSERT_EQ(environment()->arguments_environment()->parameter_count(), + DCHECK_EQ(environment()->arguments_environment()->parameter_count(), function_state()->entry()->arguments_object()->arguments_count()); HArgumentsObject* args = function_state()->entry()->arguments_object(); const ZoneList<HValue*>* arguments_values = args->arguments_values(); @@ -8099,19 +8742,217 @@ HValue* HOptimizedGraphBuilder::ImplicitReceiverFor(HValue* function, if (shared->strict_mode() == SLOPPY && !shared->native()) { // Cannot embed a direct reference to the global proxy // as is it dropped on deserialization. - CHECK(!Serializer::enabled(isolate())); - Handle<JSObject> global_receiver( - target->context()->global_object()->global_receiver()); - return Add<HConstant>(global_receiver); + CHECK(!isolate()->serializer_enabled()); + Handle<JSObject> global_proxy(target->context()->global_proxy()); + return Add<HConstant>(global_proxy); } return graph()->GetConstantUndefined(); } +void HOptimizedGraphBuilder::BuildArrayCall(Expression* expression, + int arguments_count, + HValue* function, + Handle<AllocationSite> site) { + Add<HCheckValue>(function, array_function()); + + if (IsCallArrayInlineable(arguments_count, site)) { + BuildInlinedCallArray(expression, arguments_count, site); + return; + } + + HInstruction* call = PreProcessCall(New<HCallNewArray>( + function, arguments_count + 1, site->GetElementsKind())); + if (expression->IsCall()) { + Drop(1); + } + ast_context()->ReturnInstruction(call, expression->id()); +} + + +HValue* HOptimizedGraphBuilder::BuildArrayIndexOf(HValue* receiver, + HValue* search_element, + ElementsKind kind, + ArrayIndexOfMode mode) { + DCHECK(IsFastElementsKind(kind)); + + NoObservableSideEffectsScope no_effects(this); + + HValue* elements = AddLoadElements(receiver); + HValue* length = AddLoadArrayLength(receiver, kind); + + HValue* initial; + HValue* terminating; + Token::Value token; + LoopBuilder::Direction direction; + if (mode == kFirstIndexOf) { + initial = graph()->GetConstant0(); + terminating = length; + token = Token::LT; + direction = LoopBuilder::kPostIncrement; + } else { + DCHECK_EQ(kLastIndexOf, mode); + initial = length; + terminating = graph()->GetConstant0(); + token = Token::GT; + direction = LoopBuilder::kPreDecrement; + } + + Push(graph()->GetConstantMinus1()); + if (IsFastDoubleElementsKind(kind) || IsFastSmiElementsKind(kind)) { + LoopBuilder loop(this, context(), direction); + { + HValue* index = loop.BeginBody(initial, terminating, token); + HValue* element = AddUncasted<HLoadKeyed>( + elements, index, static_cast<HValue*>(NULL), + kind, ALLOW_RETURN_HOLE); + IfBuilder if_issame(this); + if (IsFastDoubleElementsKind(kind)) { + if_issame.If<HCompareNumericAndBranch>( + element, search_element, Token::EQ_STRICT); + } else { + if_issame.If<HCompareObjectEqAndBranch>(element, search_element); + } + if_issame.Then(); + { + Drop(1); + Push(index); + loop.Break(); + } + if_issame.End(); + } + loop.EndBody(); + } else { + IfBuilder if_isstring(this); + if_isstring.If<HIsStringAndBranch>(search_element); + if_isstring.Then(); + { + LoopBuilder loop(this, context(), direction); + { + HValue* index = loop.BeginBody(initial, terminating, token); + HValue* element = AddUncasted<HLoadKeyed>( + elements, index, static_cast<HValue*>(NULL), + kind, ALLOW_RETURN_HOLE); + IfBuilder if_issame(this); + if_issame.If<HIsStringAndBranch>(element); + if_issame.AndIf<HStringCompareAndBranch>( + element, search_element, Token::EQ_STRICT); + if_issame.Then(); + { + Drop(1); + 
Push(index); + loop.Break(); + } + if_issame.End(); + } + loop.EndBody(); + } + if_isstring.Else(); + { + IfBuilder if_isnumber(this); + if_isnumber.If<HIsSmiAndBranch>(search_element); + if_isnumber.OrIf<HCompareMap>( + search_element, isolate()->factory()->heap_number_map()); + if_isnumber.Then(); + { + HValue* search_number = + AddUncasted<HForceRepresentation>(search_element, + Representation::Double()); + LoopBuilder loop(this, context(), direction); + { + HValue* index = loop.BeginBody(initial, terminating, token); + HValue* element = AddUncasted<HLoadKeyed>( + elements, index, static_cast<HValue*>(NULL), + kind, ALLOW_RETURN_HOLE); + + IfBuilder if_element_isnumber(this); + if_element_isnumber.If<HIsSmiAndBranch>(element); + if_element_isnumber.OrIf<HCompareMap>( + element, isolate()->factory()->heap_number_map()); + if_element_isnumber.Then(); + { + HValue* number = + AddUncasted<HForceRepresentation>(element, + Representation::Double()); + IfBuilder if_issame(this); + if_issame.If<HCompareNumericAndBranch>( + number, search_number, Token::EQ_STRICT); + if_issame.Then(); + { + Drop(1); + Push(index); + loop.Break(); + } + if_issame.End(); + } + if_element_isnumber.End(); + } + loop.EndBody(); + } + if_isnumber.Else(); + { + LoopBuilder loop(this, context(), direction); + { + HValue* index = loop.BeginBody(initial, terminating, token); + HValue* element = AddUncasted<HLoadKeyed>( + elements, index, static_cast<HValue*>(NULL), + kind, ALLOW_RETURN_HOLE); + IfBuilder if_issame(this); + if_issame.If<HCompareObjectEqAndBranch>( + element, search_element); + if_issame.Then(); + { + Drop(1); + Push(index); + loop.Break(); + } + if_issame.End(); + } + loop.EndBody(); + } + if_isnumber.End(); + } + if_isstring.End(); + } + + return Pop(); +} + + +bool HOptimizedGraphBuilder::TryHandleArrayCall(Call* expr, HValue* function) { + if (!array_function().is_identical_to(expr->target())) { + return false; + } + + Handle<AllocationSite> site = expr->allocation_site(); + if (site.is_null()) return false; + + BuildArrayCall(expr, + expr->arguments()->length(), + function, + site); + return true; +} + + +bool HOptimizedGraphBuilder::TryHandleArrayCallNew(CallNew* expr, + HValue* function) { + if (!array_function().is_identical_to(expr->target())) { + return false; + } + + BuildArrayCall(expr, + expr->arguments()->length(), + function, + expr->allocation_site()); + return true; +} + + void HOptimizedGraphBuilder::VisitCall(Call* expr) { - ASSERT(!HasStackOverflow()); - ASSERT(current_block() != NULL); - ASSERT(current_block()->HasPredecessor()); + DCHECK(!HasStackOverflow()); + DCHECK(current_block() != NULL); + DCHECK(current_block()->HasPredecessor()); Expression* callee = expr->expression(); int argument_count = expr->arguments()->length() + 1; // Plus receiver. HInstruction* call = NULL; @@ -8202,8 +9043,7 @@ void HOptimizedGraphBuilder::VisitCall(Call* expr) { // evaluation of the arguments. 
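// ---------------------------------------------------------------------------
// Illustrative sketch (not V8 code): the loop shapes BuildArrayIndexOf above
// encodes in Hydrogen. kFirstIndexOf scans 0..length-1 with post-increment
// (token LT); kLastIndexOf starts at length and pre-decrements (token GT),
// visiting length-1..0. The -1 models graph()->GetConstantMinus1() pushed
// before the loop as the not-found result.
#include <cstddef>

enum ArrayIndexOfModeSketch { kFirstIndexOfSketch, kLastIndexOfSketch };

long ArrayIndexOfSketch(const double* elements, size_t length, double search,
                        ArrayIndexOfModeSketch mode) {
  long result = -1;                                  // not-found sentinel
  if (mode == kFirstIndexOfSketch) {
    for (size_t i = 0; i < length; ++i) {            // initial=0, LT, post-inc
      if (elements[i] == search) { result = static_cast<long>(i); break; }
    }
  } else {
    for (size_t i = length; i-- > 0;) {              // initial=length, pre-dec
      if (elements[i] == search) { result = static_cast<long>(i); break; }
    }
  }
  return result;
}
// ---------------------------------------------------------------------------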
CHECK_ALIVE(VisitForValue(expr->expression())); HValue* function = Top(); - bool global_call = proxy != NULL && proxy->var()->IsUnallocated(); - if (global_call) { + if (expr->global_call()) { Variable* var = proxy->var(); bool known_global_function = false; // If there is a global property cell for the name at compile time and @@ -8237,6 +9077,7 @@ void HOptimizedGraphBuilder::VisitCall(Call* expr) { return; } if (TryInlineApiFunctionCall(expr, receiver)) return; + if (TryHandleArrayCall(expr, function)) return; if (TryInlineCall(expr)) return; PushArgumentsFromEnvironment(argument_count); @@ -8286,20 +9127,21 @@ void HOptimizedGraphBuilder::VisitCall(Call* expr) { } -void HOptimizedGraphBuilder::BuildInlinedCallNewArray(CallNew* expr) { +void HOptimizedGraphBuilder::BuildInlinedCallArray( + Expression* expression, + int argument_count, + Handle<AllocationSite> site) { + DCHECK(!site.is_null()); + DCHECK(argument_count >= 0 && argument_count <= 1); NoObservableSideEffectsScope no_effects(this); - int argument_count = expr->arguments()->length(); // We should at least have the constructor on the expression stack. HValue* constructor = environment()->ExpressionStackAt(argument_count); - ElementsKind kind = expr->elements_kind(); - Handle<AllocationSite> site = expr->allocation_site(); - ASSERT(!site.is_null()); - // Register on the site for deoptimization if the transition feedback changes. AllocationSite::AddDependentCompilationInfo( site, AllocationSite::TRANSITIONS, top_info()); + ElementsKind kind = site->GetElementsKind(); HInstruction* site_instruction = Add<HConstant>(site); // In the single constant argument case, we may have to adjust elements kind @@ -8308,7 +9150,7 @@ void HOptimizedGraphBuilder::BuildInlinedCallNewArray(CallNew* expr) { HValue* argument = environment()->Top(); if (argument->IsConstant()) { HConstant* constant_argument = HConstant::cast(argument); - ASSERT(constant_argument->HasSmiValue()); + DCHECK(constant_argument->HasSmiValue()); int constant_array_size = constant_argument->Integer32Value(); if (constant_array_size != 0) { kind = GetHoleyElementsKind(kind); @@ -8322,32 +9164,12 @@ void HOptimizedGraphBuilder::BuildInlinedCallNewArray(CallNew* expr) { site_instruction, constructor, DISABLE_ALLOCATION_SITES); - HValue* new_object; - if (argument_count == 0) { - new_object = array_builder.AllocateEmptyArray(); - } else if (argument_count == 1) { - HValue* argument = environment()->Top(); - new_object = BuildAllocateArrayFromLength(&array_builder, argument); - } else { - HValue* length = Add<HConstant>(argument_count); - // Smi arrays need to initialize array elements with the hole because - // bailout could occur if the arguments don't fit in a smi. - // - // TODO(mvstanton): If all the arguments are constants in smi range, then - // we could set fill_with_hole to false and save a few instructions. - JSArrayBuilder::FillMode fill_mode = IsFastSmiElementsKind(kind) - ? JSArrayBuilder::FILL_WITH_HOLE - : JSArrayBuilder::DONT_FILL_WITH_HOLE; - new_object = array_builder.AllocateArray(length, length, fill_mode); - HValue* elements = array_builder.GetElementsLocation(); - for (int i = 0; i < argument_count; i++) { - HValue* value = environment()->ExpressionStackAt(argument_count - i - 1); - HValue* constant_i = Add<HConstant>(i); - Add<HStoreKeyed>(elements, constant_i, value, kind); - } - } - - Drop(argument_count + 1); // drop constructor and args. + HValue* new_object = argument_count == 0 + ? 
array_builder.AllocateEmptyArray() + : BuildAllocateArrayFromLength(&array_builder, Top()); + + int args_to_drop = argument_count + (expression->IsCall() ? 2 : 1); + Drop(args_to_drop); ast_context()->ReturnValue(new_object); } @@ -8361,15 +9183,14 @@ static bool IsAllocationInlineable(Handle<JSFunction> constructor) { } -bool HOptimizedGraphBuilder::IsCallNewArrayInlineable(CallNew* expr) { +bool HOptimizedGraphBuilder::IsCallArrayInlineable( + int argument_count, + Handle<AllocationSite> site) { Handle<JSFunction> caller = current_info()->closure(); - Handle<JSFunction> target(isolate()->native_context()->array_function(), - isolate()); - int argument_count = expr->arguments()->length(); + Handle<JSFunction> target = array_function(); // We should have the function plus array arguments on the environment stack. - ASSERT(environment()->length() >= (argument_count + 1)); - Handle<AllocationSite> site = expr->allocation_site(); - ASSERT(!site.is_null()); + DCHECK(environment()->length() >= (argument_count + 1)); + DCHECK(!site.is_null()); bool inline_ok = false; if (site->CanInlineCall()) { @@ -8378,22 +9199,24 @@ bool HOptimizedGraphBuilder::IsCallNewArrayInlineable(CallNew* expr) { HValue* argument = Top(); if (argument->IsConstant()) { // Do not inline if the constant length argument is not a smi or - // outside the valid range for a fast array. + // outside the valid range for unrolled loop initialization. HConstant* constant_argument = HConstant::cast(argument); if (constant_argument->HasSmiValue()) { int value = constant_argument->Integer32Value(); - inline_ok = value >= 0 && - value < JSObject::kInitialMaxFastElementArray; + inline_ok = value >= 0 && value <= kElementLoopUnrollThreshold; if (!inline_ok) { TraceInline(target, caller, - "Length outside of valid array range"); + "Constant length outside of valid inlining range."); } } } else { - inline_ok = true; + TraceInline(target, caller, + "Dont inline [new] Array(n) where n isn't constant."); } - } else { + } else if (argument_count == 0) { inline_ok = true; + } else { + TraceInline(target, caller, "Too many arguments to inline."); } } else { TraceInline(target, caller, "AllocationSite requested no inlining."); @@ -8407,9 +9230,9 @@ bool HOptimizedGraphBuilder::IsCallNewArrayInlineable(CallNew* expr) { void HOptimizedGraphBuilder::VisitCallNew(CallNew* expr) { - ASSERT(!HasStackOverflow()); - ASSERT(current_block() != NULL); - ASSERT(current_block()->HasPredecessor()); + DCHECK(!HasStackOverflow()); + DCHECK(current_block() != NULL); + DCHECK(current_block()->HasPredecessor()); if (!FLAG_hydrogen_track_positions) SetSourcePosition(expr->position()); int argument_count = expr->arguments()->length() + 1; // Plus constructor. Factory* factory = isolate()->factory(); @@ -8428,15 +9251,15 @@ void HOptimizedGraphBuilder::VisitCallNew(CallNew* expr) { // Force completion of inobject slack tracking before generating // allocation code to finalize instance size. - if (constructor->shared()->IsInobjectSlackTrackingInProgress()) { - constructor->shared()->CompleteInobjectSlackTracking(); + if (constructor->IsInobjectSlackTrackingInProgress()) { + constructor->CompleteInobjectSlackTracking(); } // Calculate instance size from initial map of constructor. 
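// ---------------------------------------------------------------------------
// Illustrative sketch (not V8 code) of the policy in IsCallArrayInlineable
// above, on top of the AllocationSite agreeing to inline at all: Array() is
// always inlinable; Array(n) only when n is a constant Smi within the unroll
// threshold; everything else falls back to the stub. Names are invented.
bool IsCallArrayInlineableSketch(int argument_count, bool length_is_constant,
                                 int constant_length, int unroll_threshold) {
  if (argument_count == 0) return true;    // [new] Array()
  if (argument_count != 1) return false;   // too many arguments to inline
  if (!length_is_constant) return false;   // Array(n) where n isn't constant
  return constant_length >= 0 && constant_length <= unroll_threshold;
}
// ---------------------------------------------------------------------------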
- ASSERT(constructor->has_initial_map()); + DCHECK(constructor->has_initial_map()); Handle<Map> initial_map(constructor->initial_map()); int instance_size = initial_map->instance_size(); - ASSERT(initial_map->InitialPropertiesLength() == 0); + DCHECK(initial_map->InitialPropertiesLength() == 0); // Allocate an instance of the implicit receiver object. HValue* size_in_bytes = Add<HConstant>(instance_size); @@ -8450,31 +9273,19 @@ void HOptimizedGraphBuilder::VisitCallNew(CallNew* expr) { AllocationSite::AddDependentCompilationInfo(allocation_site, AllocationSite::TENURING, top_info()); - } else { - allocation_mode = HAllocationMode( - isolate()->heap()->GetPretenureMode()); } } - HAllocate* receiver = - BuildAllocate(size_in_bytes, HType::JSObject(), JS_OBJECT_TYPE, - allocation_mode); + HAllocate* receiver = BuildAllocate( + size_in_bytes, HType::JSObject(), JS_OBJECT_TYPE, allocation_mode); receiver->set_known_initial_map(initial_map); - // Load the initial map from the constructor. - HValue* constructor_value = Add<HConstant>(constructor); - HValue* initial_map_value = - Add<HLoadNamedField>(constructor_value, static_cast<HValue*>(NULL), - HObjectAccess::ForMapAndOffset( - handle(constructor->map()), - JSFunction::kPrototypeOrInitialMapOffset)); - // Initialize map and fields of the newly allocated object. { NoObservableSideEffectsScope no_effects(this); - ASSERT(initial_map->instance_type() == JS_OBJECT_TYPE); + DCHECK(initial_map->instance_type() == JS_OBJECT_TYPE); Add<HStoreNamedField>(receiver, HObjectAccess::ForMapAndOffset(initial_map, JSObject::kMapOffset), - initial_map_value); + Add<HConstant>(initial_map)); HValue* empty_fixed_array = Add<HConstant>(factory->empty_fixed_array()); Add<HStoreNamedField>(receiver, HObjectAccess::ForMapAndOffset(initial_map, @@ -8498,24 +9309,28 @@ void HOptimizedGraphBuilder::VisitCallNew(CallNew* expr) { // Replace the constructor function with a newly allocated receiver using // the index of the receiver from the top of the expression stack. const int receiver_index = argument_count - 1; - ASSERT(environment()->ExpressionStackAt(receiver_index) == function); + DCHECK(environment()->ExpressionStackAt(receiver_index) == function); environment()->SetExpressionStackAt(receiver_index, receiver); - if (TryInlineConstruct(expr, receiver)) return; + if (TryInlineConstruct(expr, receiver)) { + // Inlining worked, add a dependency on the initial map to make sure that + // this code is deoptimized whenever the initial map of the constructor + // changes. + Map::AddDependentCompilationInfo( + initial_map, DependentCode::kInitialMapChangedGroup, top_info()); + return; + } // TODO(mstarzinger): For now we remove the previous HAllocate and all - // corresponding instructions and instead add HPushArgument for the + // corresponding instructions and instead add HPushArguments for the // arguments in case inlining failed. What we actually should do is for // inlining to try to build a subgraph without mutating the parent graph. 
HInstruction* instr = current_block()->last(); - while (instr != initial_map_value) { + do { HInstruction* prev_instr = instr->previous(); instr->DeleteAndReplaceWith(NULL); instr = prev_instr; - } - initial_map_value->DeleteAndReplaceWith(NULL); - receiver->DeleteAndReplaceWith(NULL); - check->DeleteAndReplaceWith(NULL); + } while (instr != check); environment()->SetExpressionStackAt(receiver_index, function); HInstruction* call = PreProcessCall(New<HCallNew>(function, argument_count)); @@ -8523,25 +9338,10 @@ void HOptimizedGraphBuilder::VisitCallNew(CallNew* expr) { } else { // The constructor function is both an operand to the instruction and an // argument to the construct call. - Handle<JSFunction> array_function( - isolate()->native_context()->array_function(), isolate()); - bool use_call_new_array = expr->target().is_identical_to(array_function); - if (use_call_new_array && IsCallNewArrayInlineable(expr)) { - // Verify we are still calling the array function for our native context. - Add<HCheckValue>(function, array_function); - BuildInlinedCallNewArray(expr); - return; - } + if (TryHandleArrayCallNew(expr, function)) return; - HBinaryCall* call; - if (use_call_new_array) { - Add<HCheckValue>(function, array_function); - call = New<HCallNewArray>(function, argument_count, - expr->elements_kind()); - } else { - call = New<HCallNew>(function, argument_count); - } - PreProcessCall(call); + HInstruction* call = + PreProcessCall(New<HCallNew>(function, argument_count)); return ast_context()->ReturnInstruction(call, expr->id()); } } @@ -8615,8 +9415,7 @@ void HOptimizedGraphBuilder::GenerateDataViewInitialize( CallRuntime* expr) { ZoneList<Expression*>* arguments = expr->arguments(); - NoObservableSideEffectsScope scope(this); - ASSERT(arguments->length()== 4); + DCHECK(arguments->length()== 4); CHECK_ALIVE(VisitForValue(arguments->at(0))); HValue* obj = Pop(); @@ -8629,8 +9428,11 @@ void HOptimizedGraphBuilder::GenerateDataViewInitialize( CHECK_ALIVE(VisitForValue(arguments->at(3))); HValue* byte_length = Pop(); - BuildArrayBufferViewInitialization<JSDataView>( - obj, buffer, byte_offset, byte_length); + { + NoObservableSideEffectsScope scope(this); + BuildArrayBufferViewInitialization<JSDataView>( + obj, buffer, byte_offset, byte_length); + } } @@ -8666,7 +9468,7 @@ HValue* HOptimizedGraphBuilder::BuildAllocateExternalElements( HValue* elements = Add<HAllocate>( Add<HConstant>(ExternalArray::kAlignedSize), - HType::Tagged(), + HType::HeapObject(), NOT_TENURED, external_array_map->instance_type()); @@ -8723,9 +9525,8 @@ HValue* HOptimizedGraphBuilder::BuildAllocateFixedTypedArray( Handle<Map> fixed_typed_array_map( isolate()->heap()->MapForFixedTypedArray(array_type)); HValue* elements = - Add<HAllocate>(total_size, HType::Tagged(), - NOT_TENURED, - fixed_typed_array_map->instance_type()); + Add<HAllocate>(total_size, HType::HeapObject(), + NOT_TENURED, fixed_typed_array_map->instance_type()); AddStoreMapConstant(elements, fixed_typed_array_map); Add<HStoreNamedField>(elements, @@ -8752,23 +9553,32 @@ void HOptimizedGraphBuilder::GenerateTypedArrayInitialize( CallRuntime* expr) { ZoneList<Expression*>* arguments = expr->arguments(); - NoObservableSideEffectsScope scope(this); static const int kObjectArg = 0; static const int kArrayIdArg = 1; static const int kBufferArg = 2; static const int kByteOffsetArg = 3; static const int kByteLengthArg = 4; static const int kArgsLength = 5; - ASSERT(arguments->length() == kArgsLength); + DCHECK(arguments->length() == kArgsLength); 
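// ---------------------------------------------------------------------------
// Illustrative sketch (not V8 code): the DataView hunk above and the
// TypedArray hunk below both narrow NoObservableSideEffectsScope so that
// argument evaluation stays observable and only the object initialization is
// effect-free. The pattern is ordinary RAII; names here are invented.
struct BuilderStateSketch {
  bool suppress_observable_effects = false;
};

struct NoEffectsScopeSketch {
  explicit NoEffectsScopeSketch(BuilderStateSketch* s) : state_(s) {
    state_->suppress_observable_effects = true;
  }
  ~NoEffectsScopeSketch() { state_->suppress_observable_effects = false; }
  BuilderStateSketch* state_;
};

void InitializeViewSketch(BuilderStateSketch* state) {
  // Argument evaluation would happen here, with effects observable.
  {
    NoEffectsScopeSketch scope(state);  // suppression covers only this block
    // ... build the initialization stores ...
  }
  // Effects are observable again once the scope closes.
}
// ---------------------------------------------------------------------------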
CHECK_ALIVE(VisitForValue(arguments->at(kObjectArg))); HValue* obj = Pop(); - ASSERT(arguments->at(kArrayIdArg)->node_type() == AstNode::kLiteral); + if (!arguments->at(kArrayIdArg)->IsLiteral()) { + // This should never happen in real use, but can happen when fuzzing. + // Just bail out. + Bailout(kNeedSmiLiteral); + return; + } Handle<Object> value = static_cast<Literal*>(arguments->at(kArrayIdArg))->value(); - ASSERT(value->IsSmi()); + if (!value->IsSmi()) { + // This should never happen in real use, but can happen when fuzzing. + // Just bail out. + Bailout(kNeedSmiLiteral); + return; + } int array_id = Smi::cast(*value)->value(); HValue* buffer; @@ -8782,7 +9592,7 @@ void HOptimizedGraphBuilder::GenerateTypedArrayInitialize( HValue* byte_offset; bool is_zero_byte_offset; - if (arguments->at(kByteOffsetArg)->node_type() == AstNode::kLiteral + if (arguments->at(kByteOffsetArg)->IsLiteral() && Smi::FromInt(0) == *static_cast<Literal*>(arguments->at(kByteOffsetArg))->value()) { byte_offset = Add<HConstant>(static_cast<int32_t>(0)); @@ -8791,12 +9601,13 @@ CHECK_ALIVE(VisitForValue(arguments->at(kByteOffsetArg))); byte_offset = Pop(); is_zero_byte_offset = false; - ASSERT(buffer != NULL); + DCHECK(buffer != NULL); } CHECK_ALIVE(VisitForValue(arguments->at(kByteLengthArg))); HValue* byte_length = Pop(); + NoObservableSideEffectsScope scope(this); IfBuilder byte_offset_smi(this); if (!is_zero_byte_offset) { @@ -8838,7 +9649,7 @@ void HOptimizedGraphBuilder::GenerateTypedArrayInitialize( isolate(), array_type, external_elements_kind); AddStoreMapConstant(obj, obj_map); } else { - ASSERT(is_zero_byte_offset); + DCHECK(is_zero_byte_offset); elements = BuildAllocateFixedTypedArray( array_type, element_size, fixed_elements_kind, byte_length, length); @@ -8864,7 +9675,7 @@ void HOptimizedGraphBuilder::GenerateTypedArrayInitialize( void HOptimizedGraphBuilder::GenerateMaxSmi(CallRuntime* expr) { - ASSERT(expr->arguments()->length() == 0); + DCHECK(expr->arguments()->length() == 0); HConstant* max_smi = New<HConstant>(static_cast<int32_t>(Smi::kMaxValue)); return ast_context()->ReturnInstruction(max_smi, expr->id()); } @@ -8872,7 +9683,7 @@ void HOptimizedGraphBuilder::GenerateMaxSmi(CallRuntime* expr) { void HOptimizedGraphBuilder::GenerateTypedArrayMaxSizeInHeap( CallRuntime* expr) { - ASSERT(expr->arguments()->length() == 0); + DCHECK(expr->arguments()->length() == 0); HConstant* result = New<HConstant>(static_cast<int32_t>( FLAG_typed_array_max_size_in_heap)); return ast_context()->ReturnInstruction(result, expr->id()); @@ -8881,7 +9692,7 @@ void HOptimizedGraphBuilder::GenerateTypedArrayMaxSizeInHeap( void HOptimizedGraphBuilder::GenerateArrayBufferGetByteLength( CallRuntime* expr) { - ASSERT(expr->arguments()->length() == 1); + DCHECK(expr->arguments()->length() == 1); CHECK_ALIVE(VisitForValue(expr->arguments()->at(0))); HValue* buffer = Pop(); HInstruction* result = New<HLoadNamedField>( @@ -8894,7 +9705,7 @@ void HOptimizedGraphBuilder::GenerateArrayBufferGetByteLength( void HOptimizedGraphBuilder::GenerateArrayBufferViewGetByteLength( CallRuntime* expr) { - ASSERT(expr->arguments()->length() == 1); + DCHECK(expr->arguments()->length() == 1); CHECK_ALIVE(VisitForValue(expr->arguments()->at(0))); HValue* buffer = Pop(); HInstruction* result = New<HLoadNamedField>( @@ -8907,7 +9718,7 @@ void HOptimizedGraphBuilder::GenerateArrayBufferViewGetByteLength( void HOptimizedGraphBuilder::GenerateArrayBufferViewGetByteOffset( CallRuntime*
expr) { - ASSERT(expr->arguments()->length() == 1); + DCHECK(expr->arguments()->length() == 1); CHECK_ALIVE(VisitForValue(expr->arguments()->at(0))); HValue* buffer = Pop(); HInstruction* result = New<HLoadNamedField>( @@ -8920,7 +9731,7 @@ void HOptimizedGraphBuilder::GenerateArrayBufferViewGetByteOffset( void HOptimizedGraphBuilder::GenerateTypedArrayGetLength( CallRuntime* expr) { - ASSERT(expr->arguments()->length() == 1); + DCHECK(expr->arguments()->length() == 1); CHECK_ALIVE(VisitForValue(expr->arguments()->at(0))); HValue* buffer = Pop(); HInstruction* result = New<HLoadNamedField>( @@ -8932,32 +9743,32 @@ void HOptimizedGraphBuilder::GenerateTypedArrayGetLength( void HOptimizedGraphBuilder::VisitCallRuntime(CallRuntime* expr) { - ASSERT(!HasStackOverflow()); - ASSERT(current_block() != NULL); - ASSERT(current_block()->HasPredecessor()); + DCHECK(!HasStackOverflow()); + DCHECK(current_block() != NULL); + DCHECK(current_block()->HasPredecessor()); if (expr->is_jsruntime()) { return Bailout(kCallToAJavaScriptRuntimeFunction); } const Runtime::Function* function = expr->function(); - ASSERT(function != NULL); + DCHECK(function != NULL); if (function->intrinsic_type == Runtime::INLINE || function->intrinsic_type == Runtime::INLINE_OPTIMIZED) { - ASSERT(expr->name()->length() > 0); - ASSERT(expr->name()->Get(0) == '_'); + DCHECK(expr->name()->length() > 0); + DCHECK(expr->name()->Get(0) == '_'); // Call to an inline function. int lookup_index = static_cast<int>(function->function_id) - static_cast<int>(Runtime::kFirstInlineFunction); - ASSERT(lookup_index >= 0); - ASSERT(static_cast<size_t>(lookup_index) < + DCHECK(lookup_index >= 0); + DCHECK(static_cast<size_t>(lookup_index) < ARRAY_SIZE(kInlineFunctionGenerators)); InlineFunctionGenerator generator = kInlineFunctionGenerators[lookup_index]; // Call the inline code generator using the pointer-to-member. (this->*generator)(expr); } else { - ASSERT(function->intrinsic_type == Runtime::RUNTIME); + DCHECK(function->intrinsic_type == Runtime::RUNTIME); Handle<String> name = expr->name(); int argument_count = expr->arguments()->length(); CHECK_ALIVE(VisitExpressions(expr->arguments())); @@ -8970,9 +9781,9 @@ void HOptimizedGraphBuilder::VisitCallRuntime(CallRuntime* expr) { void HOptimizedGraphBuilder::VisitUnaryOperation(UnaryOperation* expr) { - ASSERT(!HasStackOverflow()); - ASSERT(current_block() != NULL); - ASSERT(current_block()->HasPredecessor()); + DCHECK(!HasStackOverflow()); + DCHECK(current_block() != NULL); + DCHECK(current_block()->HasPredecessor()); switch (expr->op()) { case Token::DELETE: return VisitDelete(expr); case Token::VOID: return VisitVoid(expr); @@ -8992,9 +9803,7 @@ void HOptimizedGraphBuilder::VisitDelete(UnaryOperation* expr) { HValue* key = Pop(); HValue* obj = Pop(); HValue* function = AddLoadJSBuiltin(Builtins::DELETE); - Add<HPushArgument>(obj); - Add<HPushArgument>(key); - Add<HPushArgument>(Add<HConstant>(function_strict_mode())); + Add<HPushArguments>(obj, key, Add<HConstant>(function_strict_mode())); // TODO(olivf) InvokeFunction produces a check for the parameter count, // even though we are certain to pass the correct number of arguments here. 
HInstruction* instr = New<HInvokeFunction>(function, 3); @@ -9051,7 +9860,7 @@ void HOptimizedGraphBuilder::VisitNot(UnaryOperation* expr) { return; } - ASSERT(ast_context()->IsValue()); + DCHECK(ast_context()->IsValue()); HBasicBlock* materialize_false = graph()->CreateBasicBlock(); HBasicBlock* materialize_true = graph()->CreateBasicBlock(); CHECK_BAILOUT(VisitForControl(expr->expression(), @@ -9137,9 +9946,9 @@ void HOptimizedGraphBuilder::BuildStoreForEffect(Expression* expr, void HOptimizedGraphBuilder::VisitCountOperation(CountOperation* expr) { - ASSERT(!HasStackOverflow()); - ASSERT(current_block() != NULL); - ASSERT(current_block()->HasPredecessor()); + DCHECK(!HasStackOverflow()); + DCHECK(current_block() != NULL); + DCHECK(current_block()->HasPredecessor()); if (!FLAG_hydrogen_track_positions) SetSourcePosition(expr->position()); Expression* target = expr->expression(); VariableProxy* proxy = target->AsVariableProxy(); @@ -9162,7 +9971,7 @@ void HOptimizedGraphBuilder::VisitCountOperation(CountOperation* expr) { return Bailout(kUnsupportedCountOperationWithConst); } // Argument of the count operation is a variable, not a property. - ASSERT(prop == NULL); + DCHECK(prop == NULL); CHECK_ALIVE(VisitForValue(target)); after = BuildIncrement(returns_original_input, expr); @@ -9217,15 +10026,14 @@ void HOptimizedGraphBuilder::VisitCountOperation(CountOperation* expr) { } // Argument of the count operation is a property. - ASSERT(prop != NULL); + DCHECK(prop != NULL); if (returns_original_input) Push(graph()->GetConstantUndefined()); CHECK_ALIVE(VisitForValue(prop->obj())); HValue* object = Top(); HValue* key = NULL; - if ((!prop->IsFunctionPrototype() && !prop->key()->IsPropertyName()) || - prop->IsStringAccess()) { + if (!prop->key()->IsPropertyName() || prop->IsStringAccess()) { CHECK_ALIVE(VisitForValue(prop->key())); key = Top(); } @@ -9259,7 +10067,7 @@ HInstruction* HOptimizedGraphBuilder::BuildStringCharCodeAt( int32_t i = c_index->NumberValueAsInteger32(); Handle<String> s = c_string->StringValue(); if (i < 0 || i >= s->length()) { - return New<HConstant>(OS::nan_value()); + return New<HConstant>(base::OS::nan_value()); } return New<HConstant>(s->Get(i)); } @@ -9368,13 +10176,13 @@ HValue* HGraphBuilder::TruncateToNumber(HValue* value, Type** expected) { // We expect to get a number. // (We need to check first, since Type::None->Is(Type::Any()) == true. if (expected_obj->Is(Type::None())) { - ASSERT(!expected_number->Is(Type::None(zone()))); + DCHECK(!expected_number->Is(Type::None(zone()))); return value; } if (expected_obj->Is(Type::Undefined(zone()))) { // This is already done by HChange. - *expected = Type::Union(expected_number, Type::Float(zone()), zone()); + *expected = Type::Union(expected_number, Type::Number(zone()), zone()); return value; } @@ -9393,15 +10201,10 @@ HValue* HOptimizedGraphBuilder::BuildBinaryOperation( Maybe<int> fixed_right_arg = expr->fixed_right_arg(); Handle<AllocationSite> allocation_site = expr->allocation_site(); - PretenureFlag pretenure_flag = !FLAG_allocation_site_pretenuring ? - isolate()->heap()->GetPretenureMode() : NOT_TENURED; - - HAllocationMode allocation_mode = - FLAG_allocation_site_pretenuring - ? (allocation_site.is_null() - ? 
HAllocationMode(NOT_TENURED) - : HAllocationMode(allocation_site)) - : HAllocationMode(pretenure_flag); + HAllocationMode allocation_mode; + if (FLAG_allocation_site_pretenuring && !allocation_site.is_null()) { + allocation_mode = HAllocationMode(allocation_site); + } HValue* result = HGraphBuilder::BuildBinaryOperation( expr->op(), left, right, left_type, right_type, result_type, @@ -9437,7 +10240,9 @@ HValue* HGraphBuilder::BuildBinaryOperation( bool maybe_string_add = op == Token::ADD && (left_type->Maybe(Type::String()) || - right_type->Maybe(Type::String())); + left_type->Maybe(Type::Receiver()) || + right_type->Maybe(Type::String()) || + right_type->Maybe(Type::Receiver())); if (left_type->Is(Type::None())) { Add<HDeoptimize>("Insufficient type feedback for LHS of binary operation", @@ -9474,25 +10279,23 @@ HValue* HGraphBuilder::BuildBinaryOperation( // Convert left argument as necessary. if (left_type->Is(Type::Number())) { - ASSERT(right_type->Is(Type::String())); + DCHECK(right_type->Is(Type::String())); left = BuildNumberToString(left, left_type); } else if (!left_type->Is(Type::String())) { - ASSERT(right_type->Is(Type::String())); + DCHECK(right_type->Is(Type::String())); HValue* function = AddLoadJSBuiltin(Builtins::STRING_ADD_RIGHT); - Add<HPushArgument>(left); - Add<HPushArgument>(right); + Add<HPushArguments>(left, right); return AddUncasted<HInvokeFunction>(function, 2); } // Convert right argument as necessary. if (right_type->Is(Type::Number())) { - ASSERT(left_type->Is(Type::String())); + DCHECK(left_type->Is(Type::String())); right = BuildNumberToString(right, right_type); } else if (!right_type->Is(Type::String())) { - ASSERT(left_type->Is(Type::String())); + DCHECK(left_type->Is(Type::String())); HValue* function = AddLoadJSBuiltin(Builtins::STRING_ADD_LEFT); - Add<HPushArgument>(left); - Add<HPushArgument>(right); + Add<HPushArguments>(left, right); return AddUncasted<HInvokeFunction>(function, 2); } @@ -9510,7 +10313,7 @@ HValue* HGraphBuilder::BuildBinaryOperation( // Register the dependent code with the allocation site. if (!allocation_mode.feedback_site().is_null()) { - ASSERT(!graph()->info()->IsStub()); + DCHECK(!graph()->info()->IsStub()); Handle<AllocationSite> site(allocation_mode.feedback_site()); AllocationSite::AddDependentCompilationInfo( site, AllocationSite::TENURING, top_info()); @@ -9554,8 +10357,7 @@ HValue* HGraphBuilder::BuildBinaryOperation( // operation in optimized code, which is more expensive, than a stub call. 
if (graph()->info()->IsStub() && is_non_primitive) { HValue* function = AddLoadJSBuiltin(BinaryOpIC::TokenToJSBuiltin(op)); - Add<HPushArgument>(left); - Add<HPushArgument>(right); + Add<HPushArguments>(left, right); instr = AddUncasted<HInvokeFunction>(function, 2); } else { switch (op) { @@ -9652,15 +10454,15 @@ static bool IsClassOfTest(CompareOperation* expr) { if (!call->name()->IsOneByteEqualTo(STATIC_ASCII_VECTOR("_ClassOf"))) { return false; } - ASSERT(call->arguments()->length() == 1); + DCHECK(call->arguments()->length() == 1); return true; } void HOptimizedGraphBuilder::VisitBinaryOperation(BinaryOperation* expr) { - ASSERT(!HasStackOverflow()); - ASSERT(current_block() != NULL); - ASSERT(current_block()->HasPredecessor()); + DCHECK(!HasStackOverflow()); + DCHECK(current_block() != NULL); + DCHECK(current_block()->HasPredecessor()); switch (expr->op()) { case Token::COMMA: return VisitComma(expr); @@ -9707,7 +10509,7 @@ void HOptimizedGraphBuilder::VisitLogicalExpression(BinaryOperation* expr) { } else if (ast_context()->IsValue()) { CHECK_ALIVE(VisitForValue(expr->left())); - ASSERT(current_block() != NULL); + DCHECK(current_block() != NULL); HValue* left_value = Top(); // Short-circuit left values that always evaluate to the same boolean value. @@ -9742,7 +10544,7 @@ void HOptimizedGraphBuilder::VisitLogicalExpression(BinaryOperation* expr) { return ast_context()->ReturnValue(Pop()); } else { - ASSERT(ast_context()->IsEffect()); + DCHECK(ast_context()->IsEffect()); // In an effect context, we don't need the value of the left subexpression, // only its control flow and side effects. We need an extra block to // maintain edge-split form. @@ -9828,9 +10630,9 @@ static bool IsLiteralCompareBool(Isolate* isolate, void HOptimizedGraphBuilder::VisitCompareOperation(CompareOperation* expr) { - ASSERT(!HasStackOverflow()); - ASSERT(current_block() != NULL); - ASSERT(current_block()->HasPredecessor()); + DCHECK(!HasStackOverflow()); + DCHECK(current_block() != NULL); + DCHECK(current_block()->HasPredecessor()); if (!FLAG_hydrogen_track_positions) SetSourcePosition(expr->position()); @@ -9851,7 +10653,7 @@ void HOptimizedGraphBuilder::VisitCompareOperation(CompareOperation* expr) { if (IsClassOfTest(expr)) { CallRuntime* call = expr->left()->AsCallRuntime(); - ASSERT(call->arguments()->length() == 1); + DCHECK(call->arguments()->length() == 1); CHECK_ALIVE(VisitForValue(call->arguments()->at(0))); HValue* value = Pop(); Literal* literal = expr->right()->AsLiteral(); @@ -9919,8 +10721,7 @@ void HOptimizedGraphBuilder::VisitCompareOperation(CompareOperation* expr) { UNREACHABLE(); } else if (op == Token::IN) { HValue* function = AddLoadJSBuiltin(Builtins::IN); - Add<HPushArgument>(left); - Add<HPushArgument>(right); + Add<HPushArguments>(left, right); // TODO(olivf) InvokeFunction produces a check for the parameter count, // even though we are certain to pass the correct number of arguments here. 
HInstruction* result = New<HInvokeFunction>(function, 2); @@ -10063,10 +10864,10 @@ HControlInstruction* HOptimizedGraphBuilder::BuildCompareInstruction( void HOptimizedGraphBuilder::HandleLiteralCompareNil(CompareOperation* expr, Expression* sub_expr, NilValue nil) { - ASSERT(!HasStackOverflow()); - ASSERT(current_block() != NULL); - ASSERT(current_block()->HasPredecessor()); - ASSERT(expr->op() == Token::EQ || expr->op() == Token::EQ_STRICT); + DCHECK(!HasStackOverflow()); + DCHECK(current_block() != NULL); + DCHECK(current_block()->HasPredecessor()); + DCHECK(expr->op() == Token::EQ || expr->op() == Token::EQ_STRICT); if (!FLAG_hydrogen_track_positions) SetSourcePosition(expr->position()); CHECK_ALIVE(VisitForValue(sub_expr)); HValue* value = Pop(); @@ -10078,7 +10879,7 @@ void HOptimizedGraphBuilder::HandleLiteralCompareNil(CompareOperation* expr, New<HCompareObjectEqAndBranch>(value, nil_constant); return ast_context()->ReturnControl(instr, expr->id()); } else { - ASSERT_EQ(Token::EQ, expr->op()); + DCHECK_EQ(Token::EQ, expr->op()); Type* type = expr->combined_type()->Is(Type::None()) ? Type::Any(zone()) : expr->combined_type(); HIfContinuation continuation; @@ -10105,14 +10906,14 @@ HInstruction* HOptimizedGraphBuilder::BuildFastLiteral( AllocationSiteUsageContext* site_context) { NoObservableSideEffectsScope no_effects(this); InstanceType instance_type = boilerplate_object->map()->instance_type(); - ASSERT(instance_type == JS_ARRAY_TYPE || instance_type == JS_OBJECT_TYPE); + DCHECK(instance_type == JS_ARRAY_TYPE || instance_type == JS_OBJECT_TYPE); HType type = instance_type == JS_ARRAY_TYPE ? HType::JSArray() : HType::JSObject(); HValue* object_size_constant = Add<HConstant>( boilerplate_object->map()->instance_size()); - PretenureFlag pretenure_flag = isolate()->heap()->GetPretenureMode(); + PretenureFlag pretenure_flag = NOT_TENURED; if (FLAG_allocation_site_pretenuring) { pretenure_flag = site_context->current()->GetPretenureMode(); Handle<AllocationSite> site(site_context->current()); @@ -10130,7 +10931,7 @@ HInstruction* HOptimizedGraphBuilder::BuildFastLiteral( HConstant* empty_fixed_array = Add<HConstant>( isolate()->factory()->empty_fixed_array()); Add<HStoreNamedField>(object, HObjectAccess::ForElementsPointer(), - empty_fixed_array, INITIALIZING_STORE); + empty_fixed_array); BuildEmitObjectHeader(boilerplate_object, object); @@ -10154,13 +10955,11 @@ HInstruction* HOptimizedGraphBuilder::BuildFastLiteral( HInstruction* object_elements = NULL; if (elements_size > 0) { HValue* object_elements_size = Add<HConstant>(elements_size); - if (boilerplate_object->HasFastDoubleElements()) { - object_elements = Add<HAllocate>(object_elements_size, HType::Tagged(), - pretenure_flag, FIXED_DOUBLE_ARRAY_TYPE, site_context->current()); - } else { - object_elements = Add<HAllocate>(object_elements_size, HType::Tagged(), - pretenure_flag, FIXED_ARRAY_TYPE, site_context->current()); - } + InstanceType instance_type = boilerplate_object->HasFastDoubleElements() + ? 
FIXED_DOUBLE_ARRAY_TYPE : FIXED_ARRAY_TYPE; + object_elements = Add<HAllocate>( + object_elements_size, HType::HeapObject(), + pretenure_flag, instance_type, site_context->current()); } BuildInitElementsInObjectHeader(boilerplate_object, object, object_elements); @@ -10182,14 +10981,14 @@ HInstruction* HOptimizedGraphBuilder::BuildFastLiteral( void HOptimizedGraphBuilder::BuildEmitObjectHeader( Handle<JSObject> boilerplate_object, HInstruction* object) { - ASSERT(boilerplate_object->properties()->length() == 0); + DCHECK(boilerplate_object->properties()->length() == 0); Handle<Map> boilerplate_object_map(boilerplate_object->map()); AddStoreMapConstant(object, boilerplate_object_map); Handle<Object> properties_field = Handle<Object>(boilerplate_object->properties(), isolate()); - ASSERT(*properties_field == isolate()->heap()->empty_fixed_array()); + DCHECK(*properties_field == isolate()->heap()->empty_fixed_array()); HInstruction* properties = Add<HConstant>(properties_field); HObjectAccess access = HObjectAccess::ForPropertiesPointer(); Add<HStoreNamedField>(object, access, properties); @@ -10201,7 +11000,7 @@ void HOptimizedGraphBuilder::BuildEmitObjectHeader( Handle<Object>(boilerplate_array->length(), isolate()); HInstruction* length = Add<HConstant>(length_field); - ASSERT(boilerplate_array->length()->IsSmi()); + DCHECK(boilerplate_array->length()->IsSmi()); Add<HStoreNamedField>(object, HObjectAccess::ForArrayLength( boilerplate_array->GetElementsKind()), length); } @@ -10212,7 +11011,7 @@ void HOptimizedGraphBuilder::BuildInitElementsInObjectHeader( Handle<JSObject> boilerplate_object, HInstruction* object, HInstruction* object_elements) { - ASSERT(boilerplate_object->properties()->length() == 0); + DCHECK(boilerplate_object->properties()->length() == 0); if (object_elements == NULL) { Handle<Object> elements_field = Handle<Object>(boilerplate_object->elements(), isolate()); @@ -10268,12 +11067,15 @@ void HOptimizedGraphBuilder::BuildEmitInObjectProperties( // 1) it's a child object of another object with a valid allocation site // 2) we can just use the mode of the parent object for pretenuring HInstruction* double_box = - Add<HAllocate>(heap_number_constant, HType::HeapNumber(), - pretenure_flag, HEAP_NUMBER_TYPE); + Add<HAllocate>(heap_number_constant, HType::HeapObject(), + pretenure_flag, MUTABLE_HEAP_NUMBER_TYPE); AddStoreMapConstant(double_box, - isolate()->factory()->heap_number_map()); - Add<HStoreNamedField>(double_box, HObjectAccess::ForHeapNumberValue(), - Add<HConstant>(value)); + isolate()->factory()->mutable_heap_number_map()); + // Unwrap the mutable heap number from the boilerplate. 
+ HValue* double_value = + Add<HConstant>(Handle<HeapNumber>::cast(value)->value()); + Add<HStoreNamedField>( + double_box, HObjectAccess::ForHeapNumberValue(), double_value); value_instruction = double_box; } else if (representation.IsSmi()) { value_instruction = value->IsUninitialized() @@ -10293,7 +11095,7 @@ void HOptimizedGraphBuilder::BuildEmitInObjectProperties( HInstruction* value_instruction = Add<HConstant>(isolate()->factory()->one_pointer_filler_map()); for (int i = copied_fields; i < inobject_properties; i++) { - ASSERT(boilerplate_object->IsJSObject()); + DCHECK(boilerplate_object->IsJSObject()); int property_offset = boilerplate_object->GetInObjectPropertyOffset(i); HObjectAccess access = HObjectAccess::ForMapAndOffset(boilerplate_map, property_offset); @@ -10373,9 +11175,9 @@ void HOptimizedGraphBuilder::BuildEmitFixedArray( void HOptimizedGraphBuilder::VisitThisFunction(ThisFunction* expr) { - ASSERT(!HasStackOverflow()); - ASSERT(current_block() != NULL); - ASSERT(current_block()->HasPredecessor()); + DCHECK(!HasStackOverflow()); + DCHECK(current_block() != NULL); + DCHECK(current_block()->HasPredecessor()); HInstruction* instr = BuildThisFunction(); return ast_context()->ReturnInstruction(instr, expr->id()); } @@ -10383,7 +11185,7 @@ void HOptimizedGraphBuilder::VisitThisFunction(ThisFunction* expr) { void HOptimizedGraphBuilder::VisitDeclarations( ZoneList<Declaration*>* declarations) { - ASSERT(globals_.is_empty()); + DCHECK(globals_.is_empty()); AstVisitor::VisitDeclarations(declarations); if (!globals_.is_empty()) { Handle<FixedArray> array = @@ -10393,7 +11195,7 @@ void HOptimizedGraphBuilder::VisitDeclarations( DeclareGlobalsNativeFlag::encode(current_info()->is_native()) | DeclareGlobalsStrictMode::encode(current_info()->strict_mode()); Add<HDeclareGlobals>(array, flags); - globals_.Clear(); + globals_.Rewind(0); } } @@ -10443,7 +11245,7 @@ void HOptimizedGraphBuilder::VisitFunctionDeclaration( case Variable::UNALLOCATED: { globals_.Add(variable->name(), zone()); Handle<SharedFunctionInfo> function = Compiler::BuildFunctionInfo( - declaration->fun(), current_info()->script()); + declaration->fun(), current_info()->script(), top_info()); // Check for stack-overflow exception. if (function.is_null()) return SetStackOverflow(); globals_.Add(function, zone()); @@ -10519,7 +11321,7 @@ void HOptimizedGraphBuilder::VisitModuleStatement(ModuleStatement* stmt) { // Generators for inline runtime functions. // Support for types. 
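// ---------------------------------------------------------------------------
// Illustrative sketch (not V8 code): the boilerplate-copy hunk above unwraps
// the mutable heap number and allocates a fresh MUTABLE_HEAP_NUMBER box per
// clone, so objects created from one literal never alias a shared double box.
// Stand-alone analogue with invented names:
#include <memory>

struct MutableDoubleBoxSketch { double value; };

struct ObjectWithDoubleFieldSketch {
  std::shared_ptr<MutableDoubleBoxSketch> field;
};

ObjectWithDoubleFieldSketch CloneFromBoilerplateSketch(
    const ObjectWithDoubleFieldSketch& boilerplate) {
  ObjectWithDoubleFieldSketch clone;
  // Copy the raw double into a new box; copying the pointer would alias the
  // boilerplate's box and let one instance mutate another.
  clone.field = std::make_shared<MutableDoubleBoxSketch>(
      MutableDoubleBoxSketch{boilerplate.field->value});
  return clone;
}
// ---------------------------------------------------------------------------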
void HOptimizedGraphBuilder::GenerateIsSmi(CallRuntime* call) { - ASSERT(call->arguments()->length() == 1); + DCHECK(call->arguments()->length() == 1); CHECK_ALIVE(VisitForValue(call->arguments()->at(0))); HValue* value = Pop(); HIsSmiAndBranch* result = New<HIsSmiAndBranch>(value); @@ -10528,7 +11330,7 @@ void HOptimizedGraphBuilder::GenerateIsSmi(CallRuntime* call) { void HOptimizedGraphBuilder::GenerateIsSpecObject(CallRuntime* call) { - ASSERT(call->arguments()->length() == 1); + DCHECK(call->arguments()->length() == 1); CHECK_ALIVE(VisitForValue(call->arguments()->at(0))); HValue* value = Pop(); HHasInstanceTypeAndBranch* result = @@ -10540,7 +11342,7 @@ void HOptimizedGraphBuilder::GenerateIsSpecObject(CallRuntime* call) { void HOptimizedGraphBuilder::GenerateIsFunction(CallRuntime* call) { - ASSERT(call->arguments()->length() == 1); + DCHECK(call->arguments()->length() == 1); CHECK_ALIVE(VisitForValue(call->arguments()->at(0))); HValue* value = Pop(); HHasInstanceTypeAndBranch* result = @@ -10550,7 +11352,7 @@ void HOptimizedGraphBuilder::GenerateIsFunction(CallRuntime* call) { void HOptimizedGraphBuilder::GenerateIsMinusZero(CallRuntime* call) { - ASSERT(call->arguments()->length() == 1); + DCHECK(call->arguments()->length() == 1); CHECK_ALIVE(VisitForValue(call->arguments()->at(0))); HValue* value = Pop(); HCompareMinusZeroAndBranch* result = New<HCompareMinusZeroAndBranch>(value); @@ -10559,7 +11361,7 @@ void HOptimizedGraphBuilder::GenerateIsMinusZero(CallRuntime* call) { void HOptimizedGraphBuilder::GenerateHasCachedArrayIndex(CallRuntime* call) { - ASSERT(call->arguments()->length() == 1); + DCHECK(call->arguments()->length() == 1); CHECK_ALIVE(VisitForValue(call->arguments()->at(0))); HValue* value = Pop(); HHasCachedArrayIndexAndBranch* result = @@ -10569,7 +11371,7 @@ void HOptimizedGraphBuilder::GenerateHasCachedArrayIndex(CallRuntime* call) { void HOptimizedGraphBuilder::GenerateIsArray(CallRuntime* call) { - ASSERT(call->arguments()->length() == 1); + DCHECK(call->arguments()->length() == 1); CHECK_ALIVE(VisitForValue(call->arguments()->at(0))); HValue* value = Pop(); HHasInstanceTypeAndBranch* result = @@ -10579,7 +11381,7 @@ void HOptimizedGraphBuilder::GenerateIsArray(CallRuntime* call) { void HOptimizedGraphBuilder::GenerateIsRegExp(CallRuntime* call) { - ASSERT(call->arguments()->length() == 1); + DCHECK(call->arguments()->length() == 1); CHECK_ALIVE(VisitForValue(call->arguments()->at(0))); HValue* value = Pop(); HHasInstanceTypeAndBranch* result = @@ -10589,7 +11391,7 @@ void HOptimizedGraphBuilder::GenerateIsRegExp(CallRuntime* call) { void HOptimizedGraphBuilder::GenerateIsObject(CallRuntime* call) { - ASSERT(call->arguments()->length() == 1); + DCHECK(call->arguments()->length() == 1); CHECK_ALIVE(VisitForValue(call->arguments()->at(0))); HValue* value = Pop(); HIsObjectAndBranch* result = New<HIsObjectAndBranch>(value); @@ -10603,7 +11405,7 @@ void HOptimizedGraphBuilder::GenerateIsNonNegativeSmi(CallRuntime* call) { void HOptimizedGraphBuilder::GenerateIsUndetectableObject(CallRuntime* call) { - ASSERT(call->arguments()->length() == 1); + DCHECK(call->arguments()->length() == 1); CHECK_ALIVE(VisitForValue(call->arguments()->at(0))); HValue* value = Pop(); HIsUndetectableAndBranch* result = New<HIsUndetectableAndBranch>(value); @@ -10619,7 +11421,7 @@ void HOptimizedGraphBuilder::GenerateIsStringWrapperSafeForDefaultValueOf( // Support for construct call checks. 
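// ---------------------------------------------------------------------------
// Illustrative sketch (not V8 code): the %_ArgumentsLength/%_Arguments hunks
// below no longer require a non-inlined frame. Inside an inlined frame the
// argument count is a compile-time constant (parameter count minus the
// receiver), and the index is still bounds-checked, mirroring
// Add<HBoundsCheck>(index, length). Names here are invented.
#include <stdexcept>

int InlinedArgumentsLengthSketch(int parameter_count_including_receiver) {
  return parameter_count_including_receiver - 1;  // drop the receiver
}

double InlinedArgumentAtSketch(const double* args, int count, int index) {
  if (index < 0 || index >= count) throw std::out_of_range("arguments index");
  return args[index];
}
// ---------------------------------------------------------------------------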
void HOptimizedGraphBuilder::GenerateIsConstructCall(CallRuntime* call) { - ASSERT(call->arguments()->length() == 0); + DCHECK(call->arguments()->length() == 0); if (function_state()->outer() != NULL) { // We are generating graph for inlined function. HValue* value = function_state()->inlining_kind() == CONSTRUCT_CALL_RETURN @@ -10635,30 +11437,42 @@ void HOptimizedGraphBuilder::GenerateIsConstructCall(CallRuntime* call) { // Support for arguments.length and arguments[?]. void HOptimizedGraphBuilder::GenerateArgumentsLength(CallRuntime* call) { - // Our implementation of arguments (based on this stack frame or an - // adapter below it) does not work for inlined functions. This runtime - // function is blacklisted by AstNode::IsInlineable. - ASSERT(function_state()->outer() == NULL); - ASSERT(call->arguments()->length() == 0); - HInstruction* elements = Add<HArgumentsElements>(false); - HArgumentsLength* result = New<HArgumentsLength>(elements); + DCHECK(call->arguments()->length() == 0); + HInstruction* result = NULL; + if (function_state()->outer() == NULL) { + HInstruction* elements = Add<HArgumentsElements>(false); + result = New<HArgumentsLength>(elements); + } else { + // Number of arguments without receiver. + int argument_count = environment()-> + arguments_environment()->parameter_count() - 1; + result = New<HConstant>(argument_count); + } return ast_context()->ReturnInstruction(result, call->id()); } void HOptimizedGraphBuilder::GenerateArguments(CallRuntime* call) { - // Our implementation of arguments (based on this stack frame or an - // adapter below it) does not work for inlined functions. This runtime - // function is blacklisted by AstNode::IsInlineable. - ASSERT(function_state()->outer() == NULL); - ASSERT(call->arguments()->length() == 1); + DCHECK(call->arguments()->length() == 1); CHECK_ALIVE(VisitForValue(call->arguments()->at(0))); HValue* index = Pop(); - HInstruction* elements = Add<HArgumentsElements>(false); - HInstruction* length = Add<HArgumentsLength>(elements); - HInstruction* checked_index = Add<HBoundsCheck>(index, length); - HAccessArgumentsAt* result = New<HAccessArgumentsAt>( - elements, length, checked_index); + HInstruction* result = NULL; + if (function_state()->outer() == NULL) { + HInstruction* elements = Add<HArgumentsElements>(false); + HInstruction* length = Add<HArgumentsLength>(elements); + HInstruction* checked_index = Add<HBoundsCheck>(index, length); + result = New<HAccessArgumentsAt>(elements, length, checked_index); + } else { + EnsureArgumentsArePushedForAccess(); + + // Number of arguments without receiver. 
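+      // (As in GenerateArgumentsLength above: the environment's parameter
+      // count includes the receiver, hence the - 1 below.)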
+ HInstruction* elements = function_state()->arguments_elements(); + int argument_count = environment()-> + arguments_environment()->parameter_count() - 1; + HInstruction* length = Add<HConstant>(argument_count); + HInstruction* checked_key = Add<HBoundsCheck>(index, length); + result = New<HAccessArgumentsAt>(elements, length, checked_key); + } return ast_context()->ReturnInstruction(result, call->id()); } @@ -10672,7 +11486,7 @@ void HOptimizedGraphBuilder::GenerateClassOf(CallRuntime* call) { void HOptimizedGraphBuilder::GenerateValueOf(CallRuntime* call) { - ASSERT(call->arguments()->length() == 1); + DCHECK(call->arguments()->length() == 1); CHECK_ALIVE(VisitForValue(call->arguments()->at(0))); HValue* object = Pop(); @@ -10700,8 +11514,8 @@ void HOptimizedGraphBuilder::GenerateValueOf(CallRuntime* call) { void HOptimizedGraphBuilder::GenerateDateField(CallRuntime* call) { - ASSERT(call->arguments()->length() == 2); - ASSERT_NE(NULL, call->arguments()->at(1)->AsLiteral()); + DCHECK(call->arguments()->length() == 2); + DCHECK_NE(NULL, call->arguments()->at(1)->AsLiteral()); Smi* index = Smi::cast(*(call->arguments()->at(1)->AsLiteral()->value())); CHECK_ALIVE(VisitForValue(call->arguments()->at(0))); HValue* date = Pop(); @@ -10712,7 +11526,7 @@ void HOptimizedGraphBuilder::GenerateDateField(CallRuntime* call) { void HOptimizedGraphBuilder::GenerateOneByteSeqStringSetChar( CallRuntime* call) { - ASSERT(call->arguments()->length() == 3); + DCHECK(call->arguments()->length() == 3); // We need to follow the evaluation order of full codegen. CHECK_ALIVE(VisitForValue(call->arguments()->at(1))); CHECK_ALIVE(VisitForValue(call->arguments()->at(2))); @@ -10729,7 +11543,7 @@ void HOptimizedGraphBuilder::GenerateOneByteSeqStringSetChar( void HOptimizedGraphBuilder::GenerateTwoByteSeqStringSetChar( CallRuntime* call) { - ASSERT(call->arguments()->length() == 3); + DCHECK(call->arguments()->length() == 3); // We need to follow the evaluation order of full codegen. CHECK_ALIVE(VisitForValue(call->arguments()->at(1))); CHECK_ALIVE(VisitForValue(call->arguments()->at(2))); @@ -10745,7 +11559,7 @@ void HOptimizedGraphBuilder::GenerateTwoByteSeqStringSetChar( void HOptimizedGraphBuilder::GenerateSetValueOf(CallRuntime* call) { - ASSERT(call->arguments()->length() == 2); + DCHECK(call->arguments()->length() == 2); CHECK_ALIVE(VisitForValue(call->arguments()->at(0))); CHECK_ALIVE(VisitForValue(call->arguments()->at(1))); HValue* value = Pop(); @@ -10783,7 +11597,7 @@ void HOptimizedGraphBuilder::GenerateSetValueOf(CallRuntime* call) { // Fast support for charCodeAt(n). void HOptimizedGraphBuilder::GenerateStringCharCodeAt(CallRuntime* call) { - ASSERT(call->arguments()->length() == 2); + DCHECK(call->arguments()->length() == 2); CHECK_ALIVE(VisitForValue(call->arguments()->at(0))); CHECK_ALIVE(VisitForValue(call->arguments()->at(1))); HValue* index = Pop(); @@ -10795,7 +11609,7 @@ void HOptimizedGraphBuilder::GenerateStringCharCodeAt(CallRuntime* call) { // Fast support for string.charAt(n) and string[n]. void HOptimizedGraphBuilder::GenerateStringCharFromCode(CallRuntime* call) { - ASSERT(call->arguments()->length() == 1); + DCHECK(call->arguments()->length() == 1); CHECK_ALIVE(VisitForValue(call->arguments()->at(0))); HValue* char_code = Pop(); HInstruction* result = NewUncasted<HStringCharFromCode>(char_code); @@ -10805,7 +11619,7 @@ void HOptimizedGraphBuilder::GenerateStringCharFromCode(CallRuntime* call) { // Fast support for string.charAt(n) and string[n]. 
void HOptimizedGraphBuilder::GenerateStringCharAt(CallRuntime* call) { - ASSERT(call->arguments()->length() == 2); + DCHECK(call->arguments()->length() == 2); CHECK_ALIVE(VisitForValue(call->arguments()->at(0))); CHECK_ALIVE(VisitForValue(call->arguments()->at(1))); HValue* index = Pop(); @@ -10819,7 +11633,7 @@ void HOptimizedGraphBuilder::GenerateStringCharAt(CallRuntime* call) { // Fast support for object equality testing. void HOptimizedGraphBuilder::GenerateObjectEquals(CallRuntime* call) { - ASSERT(call->arguments()->length() == 2); + DCHECK(call->arguments()->length() == 2); CHECK_ALIVE(VisitForValue(call->arguments()->at(0))); CHECK_ALIVE(VisitForValue(call->arguments()->at(1))); HValue* right = Pop(); @@ -10832,7 +11646,7 @@ void HOptimizedGraphBuilder::GenerateObjectEquals(CallRuntime* call) { // Fast support for StringAdd. void HOptimizedGraphBuilder::GenerateStringAdd(CallRuntime* call) { - ASSERT_EQ(2, call->arguments()->length()); + DCHECK_EQ(2, call->arguments()->length()); CHECK_ALIVE(VisitForValue(call->arguments()->at(0))); CHECK_ALIVE(VisitForValue(call->arguments()->at(1))); HValue* right = Pop(); @@ -10844,7 +11658,7 @@ void HOptimizedGraphBuilder::GenerateStringAdd(CallRuntime* call) { // Fast support for SubString. void HOptimizedGraphBuilder::GenerateSubString(CallRuntime* call) { - ASSERT_EQ(3, call->arguments()->length()); + DCHECK_EQ(3, call->arguments()->length()); CHECK_ALIVE(VisitExpressions(call->arguments())); PushArgumentsFromEnvironment(call->arguments()->length()); HCallStub* result = New<HCallStub>(CodeStub::SubString, 3); @@ -10854,7 +11668,7 @@ void HOptimizedGraphBuilder::GenerateSubString(CallRuntime* call) { // Fast support for StringCompare. void HOptimizedGraphBuilder::GenerateStringCompare(CallRuntime* call) { - ASSERT_EQ(2, call->arguments()->length()); + DCHECK_EQ(2, call->arguments()->length()); CHECK_ALIVE(VisitExpressions(call->arguments())); PushArgumentsFromEnvironment(call->arguments()->length()); HCallStub* result = New<HCallStub>(CodeStub::StringCompare, 2); @@ -10864,7 +11678,7 @@ void HOptimizedGraphBuilder::GenerateStringCompare(CallRuntime* call) { // Support for direct calls from JavaScript to native RegExp code. 
void HOptimizedGraphBuilder::GenerateRegExpExec(CallRuntime* call) { - ASSERT_EQ(4, call->arguments()->length()); + DCHECK_EQ(4, call->arguments()->length()); CHECK_ALIVE(VisitExpressions(call->arguments())); PushArgumentsFromEnvironment(call->arguments()->length()); HCallStub* result = New<HCallStub>(CodeStub::RegExpExec, 4); @@ -10873,7 +11687,7 @@ void HOptimizedGraphBuilder::GenerateRegExpExec(CallRuntime* call) { void HOptimizedGraphBuilder::GenerateDoubleLo(CallRuntime* call) { - ASSERT_EQ(1, call->arguments()->length()); + DCHECK_EQ(1, call->arguments()->length()); CHECK_ALIVE(VisitForValue(call->arguments()->at(0))); HValue* value = Pop(); HInstruction* result = NewUncasted<HDoubleBits>(value, HDoubleBits::LOW); @@ -10882,7 +11696,7 @@ void HOptimizedGraphBuilder::GenerateDoubleLo(CallRuntime* call) { void HOptimizedGraphBuilder::GenerateDoubleHi(CallRuntime* call) { - ASSERT_EQ(1, call->arguments()->length()); + DCHECK_EQ(1, call->arguments()->length()); CHECK_ALIVE(VisitForValue(call->arguments()->at(0))); HValue* value = Pop(); HInstruction* result = NewUncasted<HDoubleBits>(value, HDoubleBits::HIGH); @@ -10891,7 +11705,7 @@ void HOptimizedGraphBuilder::GenerateDoubleHi(CallRuntime* call) { void HOptimizedGraphBuilder::GenerateConstructDouble(CallRuntime* call) { - ASSERT_EQ(2, call->arguments()->length()); + DCHECK_EQ(2, call->arguments()->length()); CHECK_ALIVE(VisitForValue(call->arguments()->at(0))); CHECK_ALIVE(VisitForValue(call->arguments()->at(1))); HValue* lo = Pop(); @@ -10903,7 +11717,7 @@ void HOptimizedGraphBuilder::GenerateConstructDouble(CallRuntime* call) { // Construct a RegExp exec result with two in-object properties. void HOptimizedGraphBuilder::GenerateRegExpConstructResult(CallRuntime* call) { - ASSERT_EQ(3, call->arguments()->length()); + DCHECK_EQ(3, call->arguments()->length()); CHECK_ALIVE(VisitForValue(call->arguments()->at(0))); CHECK_ALIVE(VisitForValue(call->arguments()->at(1))); CHECK_ALIVE(VisitForValue(call->arguments()->at(2))); @@ -10923,7 +11737,7 @@ void HOptimizedGraphBuilder::GenerateGetFromCache(CallRuntime* call) { // Fast support for number to string. void HOptimizedGraphBuilder::GenerateNumberToString(CallRuntime* call) { - ASSERT_EQ(1, call->arguments()->length()); + DCHECK_EQ(1, call->arguments()->length()); CHECK_ALIVE(VisitForValue(call->arguments()->at(0))); HValue* number = Pop(); HValue* result = BuildNumberToString(number, Type::Any(zone())); @@ -10935,7 +11749,7 @@ void HOptimizedGraphBuilder::GenerateNumberToString(CallRuntime* call) { void HOptimizedGraphBuilder::GenerateCallFunction(CallRuntime* call) { // 1 ~ The function to call is not itself an argument to the call. int arg_count = call->arguments()->length() - 1; - ASSERT(arg_count >= 1); // There's always at least a receiver. + DCHECK(arg_count >= 1); // There's always at least a receiver. CHECK_ALIVE(VisitExpressions(call->arguments())); // The function is the last argument @@ -10979,7 +11793,7 @@ void HOptimizedGraphBuilder::GenerateCallFunction(CallRuntime* call) { // Fast call to math functions. 
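Two notes on the hunks that follow: GenerateMathLog and GenerateMathSqrt gain an RT suffix, tracking the renamed %_MathLogRT and %_MathSqrtRT intrinsics, and %_DebugCallbackSupportsStepping, which simply produced constant false in optimized code, is replaced by %_DebugIsActive, which reads the debugger's is-active byte through an external reference so optimized code observes the live flag. Roughly what that load amounts to, as a pseudocode sketch rather than actual V8 API:

    #include <cstdint>

    // Sketch: given the flag's per-isolate address (assumed to be what the
    // external reference resolves to), do an unsigned 8-bit load; nonzero
    // means a debugger is active.
    bool DebugIsActiveSketch(const uint8_t* debug_is_active_address) {
      return *debug_is_active_address != 0;
    }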
void HOptimizedGraphBuilder::GenerateMathPow(CallRuntime* call) { - ASSERT_EQ(2, call->arguments()->length()); + DCHECK_EQ(2, call->arguments()->length()); CHECK_ALIVE(VisitForValue(call->arguments()->at(0))); CHECK_ALIVE(VisitForValue(call->arguments()->at(1))); HValue* right = Pop(); @@ -10989,8 +11803,8 @@ void HOptimizedGraphBuilder::GenerateMathPow(CallRuntime* call) { } -void HOptimizedGraphBuilder::GenerateMathLog(CallRuntime* call) { - ASSERT(call->arguments()->length() == 1); +void HOptimizedGraphBuilder::GenerateMathLogRT(CallRuntime* call) { + DCHECK(call->arguments()->length() == 1); CHECK_ALIVE(VisitForValue(call->arguments()->at(0))); HValue* value = Pop(); HInstruction* result = NewUncasted<HUnaryMathOperation>(value, kMathLog); @@ -10998,8 +11812,8 @@ void HOptimizedGraphBuilder::GenerateMathLog(CallRuntime* call) { } -void HOptimizedGraphBuilder::GenerateMathSqrt(CallRuntime* call) { - ASSERT(call->arguments()->length() == 1); +void HOptimizedGraphBuilder::GenerateMathSqrtRT(CallRuntime* call) { + DCHECK(call->arguments()->length() == 1); CHECK_ALIVE(VisitForValue(call->arguments()->at(0))); HValue* value = Pop(); HInstruction* result = NewUncasted<HUnaryMathOperation>(value, kMathSqrt); @@ -11008,7 +11822,7 @@ void HOptimizedGraphBuilder::GenerateMathSqrt(CallRuntime* call) { void HOptimizedGraphBuilder::GenerateGetCachedArrayIndex(CallRuntime* call) { - ASSERT(call->arguments()->length() == 1); + DCHECK(call->arguments()->length() == 1); CHECK_ALIVE(VisitForValue(call->arguments()->at(0))); HValue* value = Pop(); HGetCachedArrayIndex* result = New<HGetCachedArrayIndex>(value); @@ -11039,11 +11853,13 @@ void HOptimizedGraphBuilder::GenerateDebugBreakInOptimizedCode( } -void HOptimizedGraphBuilder::GenerateDebugCallbackSupportsStepping( - CallRuntime* call) { - ASSERT(call->arguments()->length() == 1); - // Debugging is not supported in optimized code. - return ast_context()->ReturnValue(graph()->GetConstantFalse()); +void HOptimizedGraphBuilder::GenerateDebugIsActive(CallRuntime* call) { + DCHECK(call->arguments()->length() == 0); + HValue* ref = + Add<HConstant>(ExternalReference::debug_is_active_address(isolate())); + HValue* value = Add<HLoadNamedField>( + ref, static_cast<HValue*>(NULL), HObjectAccess::ForExternalUInteger8()); + return ast_context()->ReturnValue(value); } @@ -11067,7 +11883,9 @@ HEnvironment::HEnvironment(HEnvironment* outer, push_count_(0), ast_id_(BailoutId::None()), zone_(zone) { - Initialize(scope->num_parameters() + 1, scope->num_stack_slots(), 0); + Scope* declaration_scope = scope->DeclarationScope(); + Initialize(declaration_scope->num_parameters() + 1, + declaration_scope->num_stack_slots(), 0); } @@ -11153,8 +11971,8 @@ void HEnvironment::Initialize(const HEnvironment* other) { void HEnvironment::AddIncomingEdge(HBasicBlock* block, HEnvironment* other) { - ASSERT(!block->IsLoopHeader()); - ASSERT(values_.length() == other->values_.length()); + DCHECK(!block->IsLoopHeader()); + DCHECK(values_.length() == other->values_.length()); int length = values_.length(); for (int i = 0; i < length; ++i) { @@ -11163,12 +11981,12 @@ void HEnvironment::AddIncomingEdge(HBasicBlock* block, HEnvironment* other) { // There is already a phi for the i'th value. HPhi* phi = HPhi::cast(value); // Assert index is correct and that we haven't missed an incoming edge. 
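      // (merged_index() records which environment slot the phi merges, and
      // one operand per existing predecessor means no earlier incoming edge
      // was skipped; the new edge's value is appended just below.)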
- ASSERT(phi->merged_index() == i || !phi->HasMergedIndex()); - ASSERT(phi->OperandCount() == block->predecessors()->length()); + DCHECK(phi->merged_index() == i || !phi->HasMergedIndex()); + DCHECK(phi->OperandCount() == block->predecessors()->length()); phi->AddInput(other->values_[i]); } else if (values_[i] != other->values_[i]) { // There is a fresh value on the incoming edge, a phi is needed. - ASSERT(values_[i] != NULL && other->values_[i] != NULL); + DCHECK(values_[i] != NULL && other->values_[i] != NULL); HPhi* phi = block->AddNewPhi(i); HValue* old_value = values_[i]; for (int j = 0; j < block->predecessors()->length(); j++) { @@ -11182,7 +12000,7 @@ void HEnvironment::AddIncomingEdge(HBasicBlock* block, HEnvironment* other) { void HEnvironment::Bind(int index, HValue* value) { - ASSERT(value != NULL); + DCHECK(value != NULL); assigned_variables_.Add(index, zone()); values_[index] = value; } @@ -11194,7 +12012,7 @@ bool HEnvironment::HasExpressionAt(int index) const { bool HEnvironment::ExpressionStackIsEmpty() const { - ASSERT(length() >= first_expression_index()); + DCHECK(length() >= first_expression_index()); return length() == first_expression_index(); } @@ -11202,7 +12020,7 @@ bool HEnvironment::ExpressionStackIsEmpty() const { void HEnvironment::SetExpressionStackAt(int index_from_top, HValue* value) { int count = index_from_top + 1; int index = values_.length() - count; - ASSERT(HasExpressionAt(index)); + DCHECK(HasExpressionAt(index)); // The push count must include at least the element in question or else // the new value will not be included in this environment's history. if (push_count_ < count) { @@ -11266,7 +12084,7 @@ HEnvironment* HEnvironment::CopyForInlining( FunctionLiteral* function, HConstant* undefined, InliningKind inlining_kind) const { - ASSERT(frame_type() == JS_FUNCTION); + DCHECK(frame_type() == JS_FUNCTION); // Outer environment is a copy of this one without the arguments. 
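  // (Concretely, the copy below drops the receiver and argument values that
  // the call pushed, assuming the usual call layout, so the outer environment
  // matches the caller frame around the call site.)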
int arity = function->scope()->num_parameters(); @@ -11313,32 +12131,24 @@ HEnvironment* HEnvironment::CopyForInlining( } -void HEnvironment::PrintTo(StringStream* stream) { - for (int i = 0; i < length(); i++) { - if (i == 0) stream->Add("parameters\n"); - if (i == parameter_count()) stream->Add("specials\n"); - if (i == parameter_count() + specials_count()) stream->Add("locals\n"); - if (i == parameter_count() + specials_count() + local_count()) { - stream->Add("expressions\n"); +OStream& operator<<(OStream& os, const HEnvironment& env) { + for (int i = 0; i < env.length(); i++) { + if (i == 0) os << "parameters\n"; + if (i == env.parameter_count()) os << "specials\n"; + if (i == env.parameter_count() + env.specials_count()) os << "locals\n"; + if (i == env.parameter_count() + env.specials_count() + env.local_count()) { + os << "expressions\n"; } - HValue* val = values_.at(i); - stream->Add("%d: ", i); + HValue* val = env.values()->at(i); + os << i << ": "; if (val != NULL) { - val->PrintNameTo(stream); + os << val; } else { - stream->Add("NULL"); + os << "NULL"; } - stream->Add("\n"); + os << "\n"; } - PrintF("\n"); -} - - -void HEnvironment::PrintToStd() { - HeapStringAllocator string_allocator; - StringStream trace(&string_allocator); - PrintTo(&trace); - PrintF("%s", trace.ToCString().get()); + return os << "\n"; } @@ -11356,12 +12166,13 @@ void HTracer::TraceCompilation(CompilationInfo* info) { PrintStringProperty("name", CodeStub::MajorName(major_key, false)); PrintStringProperty("method", "stub"); } - PrintLongProperty("date", static_cast<int64_t>(OS::TimeCurrentMillis())); + PrintLongProperty("date", + static_cast<int64_t>(base::OS::TimeCurrentMillis())); } void HTracer::TraceLithium(const char* name, LChunk* chunk) { - ASSERT(!chunk->isolate()->concurrent_recompilation_enabled()); + DCHECK(!chunk->isolate()->concurrent_recompilation_enabled()); AllowHandleDereference allow_deref; AllowDeferredHandleDereference allow_deferred_deref; Trace(name, chunk->graph(), chunk); @@ -11369,7 +12180,7 @@ void HTracer::TraceLithium(const char* name, LChunk* chunk) { void HTracer::TraceHydrogen(const char* name, HGraph* graph) { - ASSERT(!graph->isolate()->concurrent_recompilation_enabled()); + DCHECK(!graph->isolate()->concurrent_recompilation_enabled()); AllowHandleDereference allow_deref; AllowDeferredHandleDereference allow_deferred_deref; Trace(name, graph, NULL); @@ -11452,11 +12263,9 @@ void HTracer::Trace(const char* name, HGraph* graph, LChunk* chunk) { for (int j = 0; j < total; ++j) { HPhi* phi = current->phis()->at(j); PrintIndent(); - trace_.Add("%d ", phi->merged_index()); - phi->PrintNameTo(&trace_); - trace_.Add(" "); - phi->PrintTo(&trace_); - trace_.Add("\n"); + OStringStream os; + os << phi->merged_index() << " " << NameOf(phi) << " " << *phi << "\n"; + trace_.Add(os.c_str()); } } @@ -11466,21 +12275,18 @@ void HTracer::Trace(const char* name, HGraph* graph, LChunk* chunk) { HInstruction* instruction = it.Current(); int uses = instruction->UseCount(); PrintIndent(); - trace_.Add("0 %d ", uses); - instruction->PrintNameTo(&trace_); - trace_.Add(" "); - instruction->PrintTo(&trace_); + OStringStream os; + os << "0 " << uses << " " << NameOf(instruction) << " " << *instruction; if (FLAG_hydrogen_track_positions && instruction->has_position() && instruction->position().raw() != 0) { const HSourcePosition pos = instruction->position(); - trace_.Add(" pos:"); - if (pos.inlining_id() != 0) { - trace_.Add("%d_", pos.inlining_id()); - } - trace_.Add("%d", pos.position()); + os << " 
pos:"; + if (pos.inlining_id() != 0) os << pos.inlining_id() << "_"; + os << pos.position(); } - trace_.Add(" <|@\n"); + os << " <|@\n"; + trace_.Add(os.c_str()); } } @@ -11498,10 +12304,9 @@ void HTracer::Trace(const char* name, HGraph* graph, LChunk* chunk) { trace_.Add("%d ", LifetimePosition::FromInstructionIndex(i).Value()); linstr->PrintTo(&trace_); - trace_.Add(" [hir:"); - linstr->hydrogen_value()->PrintNameTo(&trace_); - trace_.Add("]"); - trace_.Add(" <|@\n"); + OStringStream os; + os << " [hir:" << NameOf(linstr->hydrogen_value()) << "] <|@\n"; + trace_.Add(os.c_str()); } } } @@ -11543,7 +12348,7 @@ void HTracer::TraceLiveRange(LiveRange* range, const char* type, trace_.Add(" \"%s\"", DoubleRegister::AllocationIndexToString(assigned_reg)); } else { - ASSERT(op->IsRegister()); + DCHECK(op->IsRegister()); trace_.Add(" \"%s\"", Register::AllocationIndexToString(assigned_reg)); } } else if (range->IsSpilled()) { @@ -11551,7 +12356,7 @@ void HTracer::TraceLiveRange(LiveRange* range, const char* type, if (op->IsDoubleStackSlot()) { trace_.Add(" \"double_stack:%d\"", op->index()); } else { - ASSERT(op->IsStackSlot()); + DCHECK(op->IsStackSlot()); trace_.Add(" \"stack:%d\"", op->index()); } } @@ -11601,15 +12406,22 @@ void HStatistics::Initialize(CompilationInfo* info) { } -void HStatistics::Print() { - PrintF("Timing results:\n"); - TimeDelta sum; +void HStatistics::Print(const char* stats_name) { + PrintF( + "\n" + "----------------------------------------" + "----------------------------------------\n" + "--- %s timing results:\n" + "----------------------------------------" + "----------------------------------------\n", + stats_name); + base::TimeDelta sum; for (int i = 0; i < times_.length(); ++i) { sum += times_[i]; } for (int i = 0; i < names_.length(); ++i) { - PrintF("%32s", names_[i]); + PrintF("%33s", names_[i]); double ms = times_[i].InMillisecondsF(); double percent = times_[i].PercentOf(sum); PrintF(" %8.3f ms / %4.1f %% ", ms, percent); @@ -11619,26 +12431,22 @@ void HStatistics::Print() { PrintF(" %9u bytes / %4.1f %%\n", size, size_percent); } - PrintF("----------------------------------------" - "---------------------------------------\n"); - TimeDelta total = create_graph_ + optimize_graph_ + generate_code_; - PrintF("%32s %8.3f ms / %4.1f %% \n", - "Create graph", - create_graph_.InMillisecondsF(), - create_graph_.PercentOf(total)); - PrintF("%32s %8.3f ms / %4.1f %% \n", - "Optimize graph", - optimize_graph_.InMillisecondsF(), - optimize_graph_.PercentOf(total)); - PrintF("%32s %8.3f ms / %4.1f %% \n", - "Generate and install code", - generate_code_.InMillisecondsF(), - generate_code_.PercentOf(total)); - PrintF("----------------------------------------" - "---------------------------------------\n"); - PrintF("%32s %8.3f ms (%.1f times slower than full code gen)\n", - "Total", - total.InMillisecondsF(), + PrintF( + "----------------------------------------" + "----------------------------------------\n"); + base::TimeDelta total = create_graph_ + optimize_graph_ + generate_code_; + PrintF("%33s %8.3f ms / %4.1f %% \n", "Create graph", + create_graph_.InMillisecondsF(), create_graph_.PercentOf(total)); + PrintF("%33s %8.3f ms / %4.1f %% \n", "Optimize graph", + optimize_graph_.InMillisecondsF(), optimize_graph_.PercentOf(total)); + PrintF("%33s %8.3f ms / %4.1f %% \n", "Generate and install code", + generate_code_.InMillisecondsF(), generate_code_.PercentOf(total)); + PrintF( + "----------------------------------------" + 
"----------------------------------------\n"); + PrintF("%33s %8.3f ms %9u bytes\n", "Total", + total.InMillisecondsF(), total_size_); + PrintF("%33s (%.1f times slower than full code gen)\n", "", total.TimesOf(full_code_gen_)); double source_size_in_kb = static_cast<double>(source_size_) / 1024; @@ -11648,13 +12456,13 @@ void HStatistics::Print() { double normalized_size_in_kb = source_size_in_kb > 0 ? total_size_ / 1024 / source_size_in_kb : 0; - PrintF("%32s %8.3f ms %7.3f kB allocated\n", - "Average per kB source", - normalized_time, normalized_size_in_kb); + PrintF("%33s %8.3f ms %7.3f kB allocated\n", + "Average per kB source", normalized_time, normalized_size_in_kb); } -void HStatistics::SaveTiming(const char* name, TimeDelta time, unsigned size) { +void HStatistics::SaveTiming(const char* name, base::TimeDelta time, + unsigned size) { total_size_ += size; for (int i = 0; i < names_.length(); ++i) { if (strcmp(names_[i], name) == 0) { diff --git a/deps/v8/src/hydrogen.h b/deps/v8/src/hydrogen.h index d20a81771ed..bc91e191368 100644 --- a/deps/v8/src/hydrogen.h +++ b/deps/v8/src/hydrogen.h @@ -5,15 +5,15 @@ #ifndef V8_HYDROGEN_H_ #define V8_HYDROGEN_H_ -#include "v8.h" +#include "src/v8.h" -#include "accessors.h" -#include "allocation.h" -#include "ast.h" -#include "compiler.h" -#include "hydrogen-instructions.h" -#include "zone.h" -#include "scopes.h" +#include "src/accessors.h" +#include "src/allocation.h" +#include "src/ast.h" +#include "src/compiler.h" +#include "src/hydrogen-instructions.h" +#include "src/scopes.h" +#include "src/zone.h" namespace v8 { namespace internal { @@ -94,8 +94,8 @@ class HBasicBlock V8_FINAL : public ZoneObject { void SetInitialEnvironment(HEnvironment* env); void ClearEnvironment() { - ASSERT(IsFinished()); - ASSERT(end()->SuccessorCount() == 0); + DCHECK(IsFinished()); + DCHECK(end()->SuccessorCount() == 0); last_environment_ = NULL; } bool HasEnvironment() const { return last_environment_ != NULL; } @@ -103,7 +103,7 @@ class HBasicBlock V8_FINAL : public ZoneObject { HBasicBlock* parent_loop_header() const { return parent_loop_header_; } void set_parent_loop_header(HBasicBlock* block) { - ASSERT(parent_loop_header_ == NULL); + DCHECK(parent_loop_header_ == NULL); parent_loop_header_ = block; } @@ -151,6 +151,9 @@ class HBasicBlock V8_FINAL : public ZoneObject { dominates_loop_successors_ = true; } + bool IsOrdered() const { return is_ordered_; } + void MarkAsOrdered() { is_ordered_ = true; } + void MarkSuccEdgeUnreachable(int succ); inline Zone* zone() const; @@ -207,9 +210,13 @@ class HBasicBlock V8_FINAL : public ZoneObject { bool is_reachable_ : 1; bool dominates_loop_successors_ : 1; bool is_osr_entry_ : 1; + bool is_ordered_ : 1; }; +OStream& operator<<(OStream& os, const HBasicBlock& b); + + class HPredecessorIterator V8_FINAL BASE_EMBEDDED { public: explicit HPredecessorIterator(HBasicBlock* block) @@ -354,7 +361,7 @@ class HGraph V8_FINAL : public ZoneObject { int GetMaximumValueID() const { return values_.length(); } int GetNextBlockID() { return next_block_id_++; } int GetNextValueID(HValue* value) { - ASSERT(!disallow_adding_new_values_); + DCHECK(!disallow_adding_new_values_); values_.Add(value, zone()); return values_.length() - 1; } @@ -429,17 +436,17 @@ class HGraph V8_FINAL : public ZoneObject { } bool has_uint32_instructions() { - ASSERT(uint32_instructions_ == NULL || !uint32_instructions_->is_empty()); + DCHECK(uint32_instructions_ == NULL || !uint32_instructions_->is_empty()); return uint32_instructions_ != NULL; } 
ZoneList<HInstruction*>* uint32_instructions() { - ASSERT(uint32_instructions_ == NULL || !uint32_instructions_->is_empty()); + DCHECK(uint32_instructions_ == NULL || !uint32_instructions_->is_empty()); return uint32_instructions_; } void RecordUint32Instruction(HInstruction* instr) { - ASSERT(uint32_instructions_ == NULL || !uint32_instructions_->is_empty()); + DCHECK(uint32_instructions_ == NULL || !uint32_instructions_->is_empty()); if (uint32_instructions_ == NULL) { uint32_instructions_ = new(zone()) ZoneList<HInstruction*>(4, zone()); } @@ -599,7 +606,7 @@ class HEnvironment V8_FINAL : public ZoneObject { HValue* Lookup(int index) const { HValue* result = values_[index]; - ASSERT(result != NULL); + DCHECK(result != NULL); return result; } @@ -609,13 +616,13 @@ class HEnvironment V8_FINAL : public ZoneObject { } void Push(HValue* value) { - ASSERT(value != NULL); + DCHECK(value != NULL); ++push_count_; values_.Add(value, zone()); } HValue* Pop() { - ASSERT(!ExpressionStackIsEmpty()); + DCHECK(!ExpressionStackIsEmpty()); if (push_count_ > 0) { --push_count_; } else { @@ -632,7 +639,7 @@ class HEnvironment V8_FINAL : public ZoneObject { HValue* ExpressionStackAt(int index_from_top) const { int index = length() - index_from_top - 1; - ASSERT(HasExpressionAt(index)); + DCHECK(HasExpressionAt(index)); return values_[index]; } @@ -667,7 +674,7 @@ class HEnvironment V8_FINAL : public ZoneObject { } void SetValueAt(int index, HValue* value) { - ASSERT(index < length()); + DCHECK(index < length()); values_[index] = value; } @@ -675,7 +682,7 @@ class HEnvironment V8_FINAL : public ZoneObject { // by 1 (receiver is parameter index -1 but environment index 0). // Stack-allocated local indices are shifted by the number of parameters. int IndexFor(Variable* variable) const { - ASSERT(variable->IsStackAllocated()); + DCHECK(variable->IsStackAllocated()); int shift = variable->IsParameter() ? 
1 : parameter_count_ + specials_count_; @@ -694,9 +701,6 @@ class HEnvironment V8_FINAL : public ZoneObject { return i >= parameter_count() && i < parameter_count() + specials_count(); } - void PrintTo(StringStream* stream); - void PrintToStd(); - Zone* zone() const { return zone_; } private: @@ -738,6 +742,9 @@ class HEnvironment V8_FINAL : public ZoneObject { }; +OStream& operator<<(OStream& os, const HEnvironment& env); + + class HOptimizedGraphBuilder; enum ArgumentsAllowedFlag { @@ -865,7 +872,7 @@ class TestContext V8_FINAL : public AstContext { BailoutId ast_id) V8_OVERRIDE; static TestContext* cast(AstContext* context) { - ASSERT(context->IsTest()); + DCHECK(context->IsTest()); return reinterpret_cast<TestContext*>(context); } @@ -967,11 +974,11 @@ class HIfContinuation V8_FINAL { HBasicBlock* false_branch) : continuation_captured_(true), true_branch_(true_branch), false_branch_(false_branch) {} - ~HIfContinuation() { ASSERT(!continuation_captured_); } + ~HIfContinuation() { DCHECK(!continuation_captured_); } void Capture(HBasicBlock* true_branch, HBasicBlock* false_branch) { - ASSERT(!continuation_captured_); + DCHECK(!continuation_captured_); true_branch_ = true_branch; false_branch_ = false_branch; continuation_captured_ = true; @@ -979,7 +986,7 @@ class HIfContinuation V8_FINAL { void Continue(HBasicBlock** true_branch, HBasicBlock** false_branch) { - ASSERT(continuation_captured_); + DCHECK(continuation_captured_); *true_branch = true_branch_; *false_branch = false_branch_; continuation_captured_ = false; @@ -1038,10 +1045,14 @@ class HGraphBuilder { : info_(info), graph_(NULL), current_block_(NULL), + scope_(info->scope()), position_(HSourcePosition::Unknown()), start_position_(0) {} virtual ~HGraphBuilder() {} + Scope* scope() const { return scope_; } + void set_scope(Scope* scope) { scope_ = scope; } + HBasicBlock* current_block() const { return current_block_; } void set_current_block(HBasicBlock* block) { current_block_ = block; } HEnvironment* environment() const { @@ -1116,7 +1127,7 @@ class HGraphBuilder { HInstruction* result = AddInstruction(NewUncasted<I>(p1)); // Specializations must have their parameters properly casted // to avoid landing here. - ASSERT(!result->IsReturn() && !result->IsSimulate() && + DCHECK(!result->IsReturn() && !result->IsSimulate() && !result->IsDeoptimize()); return result; } @@ -1126,7 +1137,7 @@ class HGraphBuilder { I* result = AddInstructionTyped(New<I>(p1)); // Specializations must have their parameters properly casted // to avoid landing here. - ASSERT(!result->IsReturn() && !result->IsSimulate() && + DCHECK(!result->IsReturn() && !result->IsSimulate() && !result->IsDeoptimize()); return result; } @@ -1146,7 +1157,7 @@ class HGraphBuilder { HInstruction* result = AddInstruction(NewUncasted<I>(p1, p2)); // Specializations must have their parameters properly casted // to avoid landing here. - ASSERT(!result->IsSimulate()); + DCHECK(!result->IsSimulate()); return result; } @@ -1155,7 +1166,7 @@ class HGraphBuilder { I* result = AddInstructionTyped(New<I>(p1, p2)); // Specializations must have their parameters properly casted // to avoid landing here. - ASSERT(!result->IsSimulate()); + DCHECK(!result->IsSimulate()); return result; } @@ -1291,12 +1302,27 @@ class HGraphBuilder { void AddSimulate(BailoutId id, RemovableSimulate removable = FIXED_SIMULATE); + // When initializing arrays, we'll unfold the loop if the number of elements + // is known at compile time and is <= kElementLoopUnrollThreshold. 
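+  // (That is, at most eight straight-line element stores replace the loop.)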
+ static const int kElementLoopUnrollThreshold = 8; + protected: virtual bool BuildGraph() = 0; HBasicBlock* CreateBasicBlock(HEnvironment* env); HBasicBlock* CreateLoopHeaderBlock(); + template <class BitFieldClass> + HValue* BuildDecodeField(HValue* encoded_field) { + HValue* mask_value = Add<HConstant>(static_cast<int>(BitFieldClass::kMask)); + HValue* masked_field = + AddUncasted<HBitwise>(Token::BIT_AND, encoded_field, mask_value); + return AddUncasted<HShr>(masked_field, + Add<HConstant>(static_cast<int>(BitFieldClass::kShift))); + } + + HValue* BuildGetElementsKind(HValue* object); + HValue* BuildCheckHeapObject(HValue* object); HValue* BuildCheckString(HValue* string); HValue* BuildWrapReceiver(HValue* object, HValue* function); @@ -1323,8 +1349,32 @@ class HGraphBuilder { HValue* BuildNumberToString(HValue* object, Type* type); + void BuildJSObjectCheck(HValue* receiver, + int bit_field_mask); + + // Checks a key value that's being used for a keyed element access context. If + // the key is a index, i.e. a smi or a number in a unique string with a cached + // numeric value, the "true" of the continuation is joined. Otherwise, + // if the key is a name or a unique string, the "false" of the continuation is + // joined. Otherwise, a deoptimization is triggered. In both paths of the + // continuation, the key is pushed on the top of the environment. + void BuildKeyedIndexCheck(HValue* key, + HIfContinuation* join_continuation); + + // Checks the properties of an object if they are in dictionary case, in which + // case "true" of continuation is taken, otherwise the "false" + void BuildTestForDictionaryProperties(HValue* object, + HIfContinuation* continuation); + + void BuildNonGlobalObjectCheck(HValue* receiver); + + HValue* BuildKeyedLookupCacheHash(HValue* object, + HValue* key); + HValue* BuildUncheckedDictionaryElementLoad(HValue* receiver, - HValue* key); + HValue* elements, + HValue* key, + HValue* hash); HValue* BuildRegExpConstructResult(HValue* length, HValue* index, @@ -1384,20 +1434,14 @@ class HGraphBuilder { HInstruction* AddLoadStringInstanceType(HValue* string); HInstruction* AddLoadStringLength(HValue* string); - HStoreNamedField* AddStoreMapNoWriteBarrier(HValue* object, HValue* map) { - HStoreNamedField* store_map = Add<HStoreNamedField>( - object, HObjectAccess::ForMap(), map); - store_map->SkipWriteBarrier(); - return store_map; + HStoreNamedField* AddStoreMapConstant(HValue* object, Handle<Map> map) { + return Add<HStoreNamedField>(object, HObjectAccess::ForMap(), + Add<HConstant>(map)); } - HStoreNamedField* AddStoreMapConstant(HValue* object, Handle<Map> map); - HStoreNamedField* AddStoreMapConstantNoWriteBarrier(HValue* object, - Handle<Map> map) { - HStoreNamedField* store_map = AddStoreMapConstant(object, map); - store_map->SkipWriteBarrier(); - return store_map; - } - HLoadNamedField* AddLoadElements(HValue* object); + HLoadNamedField* AddLoadMap(HValue* object, + HValue* dependency = NULL); + HLoadNamedField* AddLoadElements(HValue* object, + HValue* dependency = NULL); bool MatchRotateRight(HValue* left, HValue* right, @@ -1413,7 +1457,12 @@ class HGraphBuilder { Maybe<int> fixed_right_arg, HAllocationMode allocation_mode); - HLoadNamedField* AddLoadFixedArrayLength(HValue *object); + HLoadNamedField* AddLoadFixedArrayLength(HValue *object, + HValue *dependency = NULL); + + HLoadNamedField* AddLoadArrayLength(HValue *object, + ElementsKind kind, + HValue *dependency = NULL); HValue* AddLoadJSBuiltin(Builtins::JavaScript builtin); @@ -1426,6 +1475,9 @@ 
class HGraphBuilder { class IfBuilder V8_FINAL { public: + // If using this constructor, Initialize() must be called explicitly! + IfBuilder(); + explicit IfBuilder(HGraphBuilder* builder); IfBuilder(HGraphBuilder* builder, HIfContinuation* continuation); @@ -1434,6 +1486,8 @@ class HGraphBuilder { if (!finished_) End(); } + void Initialize(HGraphBuilder* builder); + template<class Condition> Condition* If(HValue *p) { Condition* compare = builder()->New<Condition>(p); @@ -1576,9 +1630,14 @@ class HGraphBuilder { void Return(HValue* value); private: + void InitializeDontCreateBlocks(HGraphBuilder* builder); + HControlInstruction* AddCompare(HControlInstruction* compare); - HGraphBuilder* builder() const { return builder_; } + HGraphBuilder* builder() const { + DCHECK(builder_ != NULL); // Have you called "Initialize"? + return builder_; + } void AddMergeAtJoinBlock(bool deopt); @@ -1623,9 +1682,11 @@ class HGraphBuilder { kPreIncrement, kPostIncrement, kPreDecrement, - kPostDecrement + kPostDecrement, + kWhileTrue }; + explicit LoopBuilder(HGraphBuilder* builder); // while (true) {...} LoopBuilder(HGraphBuilder* builder, HValue* context, Direction direction); @@ -1635,7 +1696,7 @@ class HGraphBuilder { HValue* increment_amount); ~LoopBuilder() { - ASSERT(finished_); + DCHECK(finished_); } HValue* BeginBody( @@ -1643,11 +1704,15 @@ class HGraphBuilder { HValue* terminating, Token::Value token); + void BeginBody(int drop_count); + void Break(); void EndBody(); private: + void Initialize(HGraphBuilder* builder, HValue* context, + Direction direction, HValue* increment_amount); Zone* zone() { return builder_->zone(); } HGraphBuilder* builder_; @@ -1663,10 +1728,28 @@ class HGraphBuilder { bool finished_; }; - HValue* BuildNewElementsCapacity(HValue* old_capacity); + template <class A, class P1> + void DeoptimizeIf(P1 p1, char* const reason) { + IfBuilder builder(this); + builder.If<A>(p1); + builder.ThenDeopt(reason); + } - void BuildNewSpaceArrayCheck(HValue* length, - ElementsKind kind); + template <class A, class P1, class P2> + void DeoptimizeIf(P1 p1, P2 p2, const char* reason) { + IfBuilder builder(this); + builder.If<A>(p1, p2); + builder.ThenDeopt(reason); + } + + template <class A, class P1, class P2, class P3> + void DeoptimizeIf(P1 p1, P2 p2, P3 p3, const char* reason) { + IfBuilder builder(this); + builder.If<A>(p1, p2, p3); + builder.ThenDeopt(reason); + } + + HValue* BuildNewElementsCapacity(HValue* old_capacity); class JSArrayBuilder V8_FINAL { public: @@ -1686,10 +1769,24 @@ class HGraphBuilder { }; ElementsKind kind() { return kind_; } - - HValue* AllocateEmptyArray(); - HValue* AllocateArray(HValue* capacity, HValue* length_field, - FillMode fill_mode = FILL_WITH_HOLE); + HAllocate* elements_location() { return elements_location_; } + + HAllocate* AllocateEmptyArray(); + HAllocate* AllocateArray(HValue* capacity, + HValue* length_field, + FillMode fill_mode = FILL_WITH_HOLE); + // Use these allocators when capacity could be unknown at compile time + // but its limit is known. For constant |capacity| the value of + // |capacity_upper_bound| is ignored and the actual |capacity| + // value is used as an upper bound. 
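+  // (Presumably so the allocation size can still be bounded at compile time
+  // when |capacity| itself is dynamic.)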
+ HAllocate* AllocateArray(HValue* capacity, + int capacity_upper_bound, + HValue* length_field, + FillMode fill_mode = FILL_WITH_HOLE); + HAllocate* AllocateArray(HValue* capacity, + HConstant* capacity_upper_bound, + HValue* length_field, + FillMode fill_mode = FILL_WITH_HOLE); HValue* GetElementsLocation() { return elements_location_; } HValue* EmitMapCode(); @@ -1706,25 +1803,23 @@ class HGraphBuilder { } HValue* EmitInternalMapCode(); - HValue* EstablishEmptyArrayAllocationSize(); - HValue* EstablishAllocationSize(HValue* length_node); - HValue* AllocateArray(HValue* size_in_bytes, HValue* capacity, - HValue* length_field, - FillMode fill_mode = FILL_WITH_HOLE); HGraphBuilder* builder_; ElementsKind kind_; AllocationSiteMode mode_; HValue* allocation_site_payload_; HValue* constructor_function_; - HInnerAllocatedObject* elements_location_; + HAllocate* elements_location_; }; HValue* BuildAllocateArrayFromLength(JSArrayBuilder* array_builder, HValue* length_argument); + HValue* BuildCalculateElementsSize(ElementsKind kind, + HValue* capacity); + HAllocate* AllocateJSArrayObject(AllocationSiteMode mode); + HConstant* EstablishElementsAllocationSize(ElementsKind kind, int capacity); - HValue* BuildAllocateElements(ElementsKind kind, - HValue* capacity); + HAllocate* BuildAllocateElements(ElementsKind kind, HValue* size_in_bytes); void BuildInitializeElementsHeader(HValue* elements, ElementsKind kind, @@ -1733,16 +1828,17 @@ class HGraphBuilder { HValue* BuildAllocateElementsAndInitializeElementsHeader(ElementsKind kind, HValue* capacity); - // array must have been allocated with enough room for - // 1) the JSArray, 2) a AllocationMemento if mode requires it, - // 3) a FixedArray or FixedDoubleArray. - // A pointer to the Fixed(Double)Array is returned. - HInnerAllocatedObject* BuildJSArrayHeader(HValue* array, - HValue* array_map, - AllocationSiteMode mode, - ElementsKind elements_kind, - HValue* allocation_site_payload, - HValue* length_field); + // |array| must have been allocated with enough room for + // 1) the JSArray and 2) an AllocationMemento if mode requires it. + // If the |elements| value provided is NULL then the array elements storage + // is initialized with empty array. 
+ void BuildJSArrayHeader(HValue* array, + HValue* array_map, + HValue* elements, + AllocationSiteMode mode, + ElementsKind elements_kind, + HValue* allocation_site_payload, + HValue* length_field); HValue* BuildGrowElementsCapacity(HValue* object, HValue* elements, @@ -1751,6 +1847,12 @@ class HGraphBuilder { HValue* length, HValue* new_capacity); + void BuildFillElementsWithValue(HValue* elements, + ElementsKind elements_kind, + HValue* from, + HValue* to, + HValue* value); + void BuildFillElementsWithHole(HValue* elements, ElementsKind elements_kind, HValue* from, @@ -1763,11 +1865,19 @@ class HGraphBuilder { HValue* length, HValue* capacity); - HValue* BuildCloneShallowArray(HValue* boilerplate, - HValue* allocation_site, - AllocationSiteMode mode, - ElementsKind kind, - int length); + HValue* BuildCloneShallowArrayCow(HValue* boilerplate, + HValue* allocation_site, + AllocationSiteMode mode, + ElementsKind kind); + + HValue* BuildCloneShallowArrayEmpty(HValue* boilerplate, + HValue* allocation_site, + AllocationSiteMode mode); + + HValue* BuildCloneShallowArrayNonEmpty(HValue* boilerplate, + HValue* allocation_site, + AllocationSiteMode mode, + ElementsKind kind); HValue* BuildElementIndexHash(HValue* index); @@ -1790,7 +1900,7 @@ class HGraphBuilder { protected: void SetSourcePosition(int position) { - ASSERT(position != RelocInfo::kNoPosition); + DCHECK(position != RelocInfo::kNoPosition); position_.set_position(position - start_position_); } @@ -1824,13 +1934,6 @@ class HGraphBuilder { private: HGraphBuilder(); - HValue* BuildUncheckedDictionaryElementLoadHelper( - HValue* elements, - HValue* key, - HValue* hash, - HValue* mask, - int current_probe); - template <class I> I* AddInstructionTyped(I* instr) { return I::cast(AddInstruction(instr)); @@ -1839,6 +1942,7 @@ class HGraphBuilder { CompilationInfo* info_; HGraph* graph_; HBasicBlock* current_block_; + Scope* scope_; HSourcePosition position_; int start_position_; }; @@ -1966,10 +2070,12 @@ class HOptimizedGraphBuilder : public HGraphBuilder, public AstVisitor { class BreakAndContinueInfo V8_FINAL BASE_EMBEDDED { public: explicit BreakAndContinueInfo(BreakableStatement* target, + Scope* scope, int drop_extra = 0) : target_(target), break_block_(NULL), continue_block_(NULL), + scope_(scope), drop_extra_(drop_extra) { } @@ -1978,12 +2084,14 @@ class HOptimizedGraphBuilder : public HGraphBuilder, public AstVisitor { void set_break_block(HBasicBlock* block) { break_block_ = block; } HBasicBlock* continue_block() { return continue_block_; } void set_continue_block(HBasicBlock* block) { continue_block_ = block; } + Scope* scope() { return scope_; } int drop_extra() { return drop_extra_; } private: BreakableStatement* target_; HBasicBlock* break_block_; HBasicBlock* continue_block_; + Scope* scope_; int drop_extra_; }; @@ -2005,7 +2113,8 @@ class HOptimizedGraphBuilder : public HGraphBuilder, public AstVisitor { // Search the break stack for a break or continue target. 
enum BreakType { BREAK, CONTINUE }; - HBasicBlock* Get(BreakableStatement* stmt, BreakType type, int* drop_extra); + HBasicBlock* Get(BreakableStatement* stmt, BreakType type, + Scope** scope, int* drop_extra); private: BreakAndContinueInfo* info_; @@ -2115,8 +2224,7 @@ class HOptimizedGraphBuilder : public HGraphBuilder, public AstVisitor { bool PreProcessOsrEntry(IterationStatement* statement); void VisitLoopBody(IterationStatement* stmt, - HBasicBlock* loop_entry, - BreakAndContinueInfo* break_info); + HBasicBlock* loop_entry); // Create a back edge in the flow graph. body_exit is the predecessor // block and loop_entry is the successor block. loop_successor is the @@ -2229,6 +2337,17 @@ class HOptimizedGraphBuilder : public HGraphBuilder, public AstVisitor { // Try to optimize fun.apply(receiver, arguments) pattern. bool TryCallApply(Call* expr); + bool TryHandleArrayCall(Call* expr, HValue* function); + bool TryHandleArrayCallNew(CallNew* expr, HValue* function); + void BuildArrayCall(Expression* expr, int arguments_count, HValue* function, + Handle<AllocationSite> cell); + + enum ArrayIndexOfMode { kFirstIndexOf, kLastIndexOf }; + HValue* BuildArrayIndexOf(HValue* receiver, + HValue* search_element, + ElementsKind kind, + ArrayIndexOfMode mode); + HValue* ImplicitReceiverFor(HValue* function, Handle<JSFunction> target); @@ -2296,6 +2415,7 @@ class HOptimizedGraphBuilder : public HGraphBuilder, public AstVisitor { void HandlePropertyAssignment(Assignment* expr); void HandleCompoundAssignment(Assignment* expr); void HandlePolymorphicNamedFieldAccess(PropertyAccessType access_type, + Expression* expr, BailoutId ast_id, BailoutId return_id, HValue* object, @@ -2312,8 +2432,13 @@ class HOptimizedGraphBuilder : public HGraphBuilder, public AstVisitor { ElementsKind fixed_elements_kind, HValue* byte_length, HValue* length); - bool IsCallNewArrayInlineable(CallNew* expr); - void BuildInlinedCallNewArray(CallNew* expr); + Handle<JSFunction> array_function() { + return handle(isolate()->native_context()->array_function()); + } + + bool IsCallArrayInlineable(int argument_count, Handle<AllocationSite> site); + void BuildInlinedCallArray(Expression* expression, int argument_count, + Handle<AllocationSite> site); class PropertyAccessInfo { public: @@ -2342,23 +2467,7 @@ class HOptimizedGraphBuilder : public HGraphBuilder, public AstVisitor { // PropertyAccessInfo is built for types->first(). 
bool CanAccessAsMonomorphic(SmallMapList* types); - Handle<Map> map() { - if (type_->Is(Type::Number())) { - Context* context = current_info()->closure()->context(); - context = context->native_context(); - return handle(context->number_function()->initial_map()); - } else if (type_->Is(Type::Boolean())) { - Context* context = current_info()->closure()->context(); - context = context->native_context(); - return handle(context->boolean_function()->initial_map()); - } else if (type_->Is(Type::String())) { - Context* context = current_info()->closure()->context(); - context = context->native_context(); - return handle(context->string_function()->initial_map()); - } else { - return type_->AsClass()->Map(); - } - } + Handle<Map> map(); Type* type() const { return type_; } Handle<String> name() const { return name_; } @@ -2371,10 +2480,10 @@ class HOptimizedGraphBuilder : public HGraphBuilder, public AstVisitor { int offset; if (Accessors::IsJSObjectFieldAccessor<Type>(type_, name_, &offset)) { if (type_->Is(Type::String())) { - ASSERT(String::Equals(isolate()->factory()->length_string(), name_)); + DCHECK(String::Equals(isolate()->factory()->length_string(), name_)); *access = HObjectAccess::ForStringLength(); } else if (type_->Is(Type::Array())) { - ASSERT(String::Equals(isolate()->factory()->length_string(), name_)); + DCHECK(String::Equals(isolate()->factory()->length_string(), name_)); *access = HObjectAccess::ForArrayLength(map()->elements_kind()); } else { *access = HObjectAccess::ForMapAndOffset(map(), offset); @@ -2484,6 +2593,7 @@ class HOptimizedGraphBuilder : public HGraphBuilder, public AstVisitor { HInstruction* BuildIncrement(bool returns_original_input, CountOperation* expr); HInstruction* BuildKeyedGeneric(PropertyAccessType access_type, + Expression* expr, HValue* object, HValue* key, HValue* value); @@ -2503,7 +2613,8 @@ class HOptimizedGraphBuilder : public HGraphBuilder, public AstVisitor { PropertyAccessType access_type, KeyedAccessStoreMode store_mode); - HValue* HandlePolymorphicElementAccess(HValue* object, + HValue* HandlePolymorphicElementAccess(Expression* expr, + HValue* object, HValue* key, HValue* val, SmallMapList* maps, @@ -2519,6 +2630,7 @@ class HOptimizedGraphBuilder : public HGraphBuilder, public AstVisitor { bool* has_side_effects); HInstruction* BuildNamedGeneric(PropertyAccessType access, + Expression* expr, HValue* object, Handle<String> name, HValue* value, @@ -2641,30 +2753,38 @@ class HStatistics V8_FINAL: public Malloced { source_size_(0) { } void Initialize(CompilationInfo* info); - void Print(); - void SaveTiming(const char* name, TimeDelta time, unsigned size); + void Print(const char* stats_name); + void SaveTiming(const char* name, base::TimeDelta time, unsigned size); - void IncrementFullCodeGen(TimeDelta full_code_gen) { + void IncrementFullCodeGen(base::TimeDelta full_code_gen) { full_code_gen_ += full_code_gen; } - void IncrementSubtotals(TimeDelta create_graph, - TimeDelta optimize_graph, - TimeDelta generate_code) { - create_graph_ += create_graph; - optimize_graph_ += optimize_graph; - generate_code_ += generate_code; + void IncrementCreateGraph(base::TimeDelta delta) { create_graph_ += delta; } + + void IncrementOptimizeGraph(base::TimeDelta delta) { + optimize_graph_ += delta; + } + + void IncrementGenerateCode(base::TimeDelta delta) { generate_code_ += delta; } + + void IncrementSubtotals(base::TimeDelta create_graph, + base::TimeDelta optimize_graph, + base::TimeDelta generate_code) { + IncrementCreateGraph(create_graph); + 
IncrementOptimizeGraph(optimize_graph); + IncrementGenerateCode(generate_code); } private: - List<TimeDelta> times_; + List<base::TimeDelta> times_; List<const char*> names_; List<unsigned> sizes_; - TimeDelta create_graph_; - TimeDelta optimize_graph_; - TimeDelta generate_code_; + base::TimeDelta create_graph_; + base::TimeDelta optimize_graph_; + base::TimeDelta generate_code_; unsigned total_size_; - TimeDelta full_code_gen_; + base::TimeDelta full_code_gen_; double source_size_; }; @@ -2691,12 +2811,12 @@ class HTracer V8_FINAL : public Malloced { explicit HTracer(int isolate_id) : trace_(&string_allocator_), indent_(0) { if (FLAG_trace_hydrogen_file == NULL) { - OS::SNPrintF(filename_, - "hydrogen-%d-%d.cfg", - OS::GetCurrentProcessId(), - isolate_id); + SNPrintF(filename_, + "hydrogen-%d-%d.cfg", + base::OS::GetCurrentProcessId(), + isolate_id); } else { - OS::StrNCpy(filename_, FLAG_trace_hydrogen_file, filename_.length()); + StrNCpy(filename_, FLAG_trace_hydrogen_file, filename_.length()); } WriteChars(filename_.start(), "", 0, false); } @@ -2721,7 +2841,7 @@ class HTracer V8_FINAL : public Malloced { tracer_->indent_--; tracer_->PrintIndent(); tracer_->trace_.Add("end_%s\n", name_); - ASSERT(tracer_->indent_ >= 0); + DCHECK(tracer_->indent_ >= 0); tracer_->FlushToFile(); } diff --git a/deps/v8/src/i18n.cc b/deps/v8/src/i18n.cc index 2b6b0fb07ea..2d67cf13eb2 100644 --- a/deps/v8/src/i18n.cc +++ b/deps/v8/src/i18n.cc @@ -3,7 +3,7 @@ // found in the LICENSE file. // limitations under the License. -#include "i18n.h" +#include "src/i18n.h" #include "unicode/brkiter.h" #include "unicode/calendar.h" @@ -137,7 +137,6 @@ void SetResolvedDateSettings(Isolate* isolate, Vector<const uint16_t>( reinterpret_cast<const uint16_t*>(pattern.getBuffer()), pattern.length())).ToHandleChecked(), - NONE, SLOPPY).Assert(); // Set time zone and calendar. @@ -147,7 +146,6 @@ void SetResolvedDateSettings(Isolate* isolate, resolved, factory->NewStringFromStaticAscii("calendar"), factory->NewStringFromAsciiChecked(calendar_name), - NONE, SLOPPY).Assert(); const icu::TimeZone& tz = calendar->getTimeZone(); @@ -162,7 +160,6 @@ void SetResolvedDateSettings(Isolate* isolate, resolved, factory->NewStringFromStaticAscii("timeZone"), factory->NewStringFromStaticAscii("UTC"), - NONE, SLOPPY).Assert(); } else { JSObject::SetProperty( @@ -173,7 +170,6 @@ void SetResolvedDateSettings(Isolate* isolate, reinterpret_cast<const uint16_t*>( canonical_time_zone.getBuffer()), canonical_time_zone.length())).ToHandleChecked(), - NONE, SLOPPY).Assert(); } } @@ -190,14 +186,12 @@ void SetResolvedDateSettings(Isolate* isolate, resolved, factory->NewStringFromStaticAscii("numberingSystem"), factory->NewStringFromAsciiChecked(ns), - NONE, SLOPPY).Assert(); } else { JSObject::SetProperty( resolved, factory->NewStringFromStaticAscii("numberingSystem"), factory->undefined_value(), - NONE, SLOPPY).Assert(); } delete numbering_system; @@ -212,7 +206,6 @@ void SetResolvedDateSettings(Isolate* isolate, resolved, factory->NewStringFromStaticAscii("locale"), factory->NewStringFromAsciiChecked(result), - NONE, SLOPPY).Assert(); } else { // This would never happen, since we got the locale from ICU. 
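Most of the i18n.cc churn above and below is a single signature change rippling through: this JSObject::SetProperty overload no longer takes a PropertyAttributes argument, so the NONE argument drops out of every call site while the receiver, key, value, and SLOPPY language mode stay put. Schematically:

    // Before: JSObject::SetProperty(resolved, key, value, NONE, SLOPPY).Assert();
    // After:  JSObject::SetProperty(resolved, key, value, SLOPPY).Assert();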
@@ -220,7 +213,6 @@ void SetResolvedDateSettings(Isolate* isolate, resolved, factory->NewStringFromStaticAscii("locale"), factory->NewStringFromStaticAscii("und"), - NONE, SLOPPY).Assert(); } } @@ -364,7 +356,6 @@ void SetResolvedNumberSettings(Isolate* isolate, Vector<const uint16_t>( reinterpret_cast<const uint16_t*>(pattern.getBuffer()), pattern.length())).ToHandleChecked(), - NONE, SLOPPY).Assert(); // Set resolved currency code in options.currency if not empty. @@ -377,7 +368,6 @@ void SetResolvedNumberSettings(Isolate* isolate, Vector<const uint16_t>( reinterpret_cast<const uint16_t*>(currency.getBuffer()), currency.length())).ToHandleChecked(), - NONE, SLOPPY).Assert(); } @@ -393,14 +383,12 @@ void SetResolvedNumberSettings(Isolate* isolate, resolved, factory->NewStringFromStaticAscii("numberingSystem"), factory->NewStringFromAsciiChecked(ns), - NONE, SLOPPY).Assert(); } else { JSObject::SetProperty( resolved, factory->NewStringFromStaticAscii("numberingSystem"), factory->undefined_value(), - NONE, SLOPPY).Assert(); } delete numbering_system; @@ -409,48 +397,46 @@ void SetResolvedNumberSettings(Isolate* isolate, resolved, factory->NewStringFromStaticAscii("useGrouping"), factory->ToBoolean(number_format->isGroupingUsed()), - NONE, SLOPPY).Assert(); JSObject::SetProperty( resolved, factory->NewStringFromStaticAscii("minimumIntegerDigits"), factory->NewNumberFromInt(number_format->getMinimumIntegerDigits()), - NONE, SLOPPY).Assert(); JSObject::SetProperty( resolved, factory->NewStringFromStaticAscii("minimumFractionDigits"), factory->NewNumberFromInt(number_format->getMinimumFractionDigits()), - NONE, SLOPPY).Assert(); JSObject::SetProperty( resolved, factory->NewStringFromStaticAscii("maximumFractionDigits"), factory->NewNumberFromInt(number_format->getMaximumFractionDigits()), - NONE, SLOPPY).Assert(); Handle<String> key = factory->NewStringFromStaticAscii("minimumSignificantDigits"); - if (JSReceiver::HasLocalProperty(resolved, key)) { + Maybe<bool> maybe = JSReceiver::HasOwnProperty(resolved, key); + CHECK(maybe.has_value); + if (maybe.value) { JSObject::SetProperty( resolved, factory->NewStringFromStaticAscii("minimumSignificantDigits"), factory->NewNumberFromInt(number_format->getMinimumSignificantDigits()), - NONE, SLOPPY).Assert(); } key = factory->NewStringFromStaticAscii("maximumSignificantDigits"); - if (JSReceiver::HasLocalProperty(resolved, key)) { + maybe = JSReceiver::HasOwnProperty(resolved, key); + CHECK(maybe.has_value); + if (maybe.value) { JSObject::SetProperty( resolved, factory->NewStringFromStaticAscii("maximumSignificantDigits"), factory->NewNumberFromInt(number_format->getMaximumSignificantDigits()), - NONE, SLOPPY).Assert(); } @@ -464,7 +450,6 @@ void SetResolvedNumberSettings(Isolate* isolate, resolved, factory->NewStringFromStaticAscii("locale"), factory->NewStringFromAsciiChecked(result), - NONE, SLOPPY).Assert(); } else { // This would never happen, since we got the locale from ICU. 
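The other recurring i18n.cc change: JSReceiver::HasLocalProperty becomes HasOwnProperty, which now reports through a Maybe<bool>, so callers must confirm the lookup produced an answer at all before branching on it, hence the CHECK(maybe.has_value) / if (maybe.value) pairs above. A sketch of the value shape that pattern assumes; illustrative, not the exact declaration shipped in this tree:

    // A Maybe<T> couples a T with a flag saying whether the operation
    // produced a value; has_value == false typically means the lookup
    // failed (for example, with an exception pending).
    template <class T>
    struct Maybe {
      bool has_value;  // if false, |value| is meaningless
      T value;
    };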
@@ -472,7 +457,6 @@ void SetResolvedNumberSettings(Isolate* isolate, resolved, factory->NewStringFromStaticAscii("locale"), factory->NewStringFromStaticAscii("und"), - NONE, SLOPPY).Assert(); } } @@ -554,7 +538,6 @@ void SetResolvedCollatorSettings(Isolate* isolate, factory->NewStringFromStaticAscii("numeric"), factory->ToBoolean( collator->getAttribute(UCOL_NUMERIC_COLLATION, status) == UCOL_ON), - NONE, SLOPPY).Assert(); switch (collator->getAttribute(UCOL_CASE_FIRST, status)) { @@ -563,7 +546,6 @@ void SetResolvedCollatorSettings(Isolate* isolate, resolved, factory->NewStringFromStaticAscii("caseFirst"), factory->NewStringFromStaticAscii("lower"), - NONE, SLOPPY).Assert(); break; case UCOL_UPPER_FIRST: @@ -571,7 +553,6 @@ void SetResolvedCollatorSettings(Isolate* isolate, resolved, factory->NewStringFromStaticAscii("caseFirst"), factory->NewStringFromStaticAscii("upper"), - NONE, SLOPPY).Assert(); break; default: @@ -579,7 +560,6 @@ void SetResolvedCollatorSettings(Isolate* isolate, resolved, factory->NewStringFromStaticAscii("caseFirst"), factory->NewStringFromStaticAscii("false"), - NONE, SLOPPY).Assert(); } @@ -589,7 +569,6 @@ void SetResolvedCollatorSettings(Isolate* isolate, resolved, factory->NewStringFromStaticAscii("strength"), factory->NewStringFromStaticAscii("primary"), - NONE, SLOPPY).Assert(); // case level: true + s1 -> case, s1 -> base. @@ -598,14 +577,12 @@ void SetResolvedCollatorSettings(Isolate* isolate, resolved, factory->NewStringFromStaticAscii("sensitivity"), factory->NewStringFromStaticAscii("case"), - NONE, SLOPPY).Assert(); } else { JSObject::SetProperty( resolved, factory->NewStringFromStaticAscii("sensitivity"), factory->NewStringFromStaticAscii("base"), - NONE, SLOPPY).Assert(); } break; @@ -615,13 +592,11 @@ void SetResolvedCollatorSettings(Isolate* isolate, resolved, factory->NewStringFromStaticAscii("strength"), factory->NewStringFromStaticAscii("secondary"), - NONE, SLOPPY).Assert(); JSObject::SetProperty( resolved, factory->NewStringFromStaticAscii("sensitivity"), factory->NewStringFromStaticAscii("accent"), - NONE, SLOPPY).Assert(); break; case UCOL_TERTIARY: @@ -629,13 +604,11 @@ void SetResolvedCollatorSettings(Isolate* isolate, resolved, factory->NewStringFromStaticAscii("strength"), factory->NewStringFromStaticAscii("tertiary"), - NONE, SLOPPY).Assert(); JSObject::SetProperty( resolved, factory->NewStringFromStaticAscii("sensitivity"), factory->NewStringFromStaticAscii("variant"), - NONE, SLOPPY).Assert(); break; case UCOL_QUATERNARY: @@ -645,13 +618,11 @@ void SetResolvedCollatorSettings(Isolate* isolate, resolved, factory->NewStringFromStaticAscii("strength"), factory->NewStringFromStaticAscii("quaternary"), - NONE, SLOPPY).Assert(); JSObject::SetProperty( resolved, factory->NewStringFromStaticAscii("sensitivity"), factory->NewStringFromStaticAscii("variant"), - NONE, SLOPPY).Assert(); break; default: @@ -659,13 +630,11 @@ void SetResolvedCollatorSettings(Isolate* isolate, resolved, factory->NewStringFromStaticAscii("strength"), factory->NewStringFromStaticAscii("identical"), - NONE, SLOPPY).Assert(); JSObject::SetProperty( resolved, factory->NewStringFromStaticAscii("sensitivity"), factory->NewStringFromStaticAscii("variant"), - NONE, SLOPPY).Assert(); } @@ -674,7 +643,6 @@ void SetResolvedCollatorSettings(Isolate* isolate, factory->NewStringFromStaticAscii("ignorePunctuation"), factory->ToBoolean(collator->getAttribute( UCOL_ALTERNATE_HANDLING, status) == UCOL_SHIFTED), - NONE, SLOPPY).Assert(); // Set the locale @@ -687,7 +655,6 @@ void 
SetResolvedCollatorSettings(Isolate* isolate, resolved, factory->NewStringFromStaticAscii("locale"), factory->NewStringFromAsciiChecked(result), - NONE, SLOPPY).Assert(); } else { // This would never happen, since we got the locale from ICU. @@ -695,7 +662,6 @@ void SetResolvedCollatorSettings(Isolate* isolate, resolved, factory->NewStringFromStaticAscii("locale"), factory->NewStringFromStaticAscii("und"), - NONE, SLOPPY).Assert(); } } @@ -751,7 +717,6 @@ void SetResolvedBreakIteratorSettings(Isolate* isolate, resolved, factory->NewStringFromStaticAscii("locale"), factory->NewStringFromAsciiChecked(result), - NONE, SLOPPY).Assert(); } else { // This would never happen, since we got the locale from ICU. @@ -759,7 +724,6 @@ void SetResolvedBreakIteratorSettings(Isolate* isolate, resolved, factory->NewStringFromStaticAscii("locale"), factory->NewStringFromStaticAscii("und"), - NONE, SLOPPY).Assert(); } } @@ -823,7 +787,9 @@ icu::SimpleDateFormat* DateFormat::UnpackDateFormat( Handle<JSObject> obj) { Handle<String> key = isolate->factory()->NewStringFromStaticAscii("dateFormat"); - if (JSReceiver::HasLocalProperty(obj, key)) { + Maybe<bool> maybe = JSReceiver::HasOwnProperty(obj, key); + CHECK(maybe.has_value); + if (maybe.value) { return reinterpret_cast<icu::SimpleDateFormat*>( obj->GetInternalField(0)); } @@ -897,7 +863,9 @@ icu::DecimalFormat* NumberFormat::UnpackNumberFormat( Handle<JSObject> obj) { Handle<String> key = isolate->factory()->NewStringFromStaticAscii("numberFormat"); - if (JSReceiver::HasLocalProperty(obj, key)) { + Maybe<bool> maybe = JSReceiver::HasOwnProperty(obj, key); + CHECK(maybe.has_value); + if (maybe.value) { return reinterpret_cast<icu::DecimalFormat*>(obj->GetInternalField(0)); } @@ -952,7 +920,9 @@ icu::Collator* Collator::InitializeCollator( icu::Collator* Collator::UnpackCollator(Isolate* isolate, Handle<JSObject> obj) { Handle<String> key = isolate->factory()->NewStringFromStaticAscii("collator"); - if (JSReceiver::HasLocalProperty(obj, key)) { + Maybe<bool> maybe = JSReceiver::HasOwnProperty(obj, key); + CHECK(maybe.has_value); + if (maybe.value) { return reinterpret_cast<icu::Collator*>(obj->GetInternalField(0)); } @@ -1011,7 +981,9 @@ icu::BreakIterator* BreakIterator::UnpackBreakIterator(Isolate* isolate, Handle<JSObject> obj) { Handle<String> key = isolate->factory()->NewStringFromStaticAscii("breakIterator"); - if (JSReceiver::HasLocalProperty(obj, key)) { + Maybe<bool> maybe = JSReceiver::HasOwnProperty(obj, key); + CHECK(maybe.has_value); + if (maybe.value) { return reinterpret_cast<icu::BreakIterator*>(obj->GetInternalField(0)); } diff --git a/deps/v8/src/i18n.h b/deps/v8/src/i18n.h index e312322b6fc..a50c43a4299 100644 --- a/deps/v8/src/i18n.h +++ b/deps/v8/src/i18n.h @@ -6,8 +6,8 @@ #ifndef V8_I18N_H_ #define V8_I18N_H_ +#include "src/v8.h" #include "unicode/uversion.h" -#include "v8.h" namespace U_ICU_NAMESPACE { class BreakIterator; diff --git a/deps/v8/src/i18n.js b/deps/v8/src/i18n.js index 4fcb02b4408..61e0ac98e5b 100644 --- a/deps/v8/src/i18n.js +++ b/deps/v8/src/i18n.js @@ -1,7 +1,8 @@ // Copyright 2013 the V8 project authors. All rights reserved. // Use of this source code is governed by a BSD-style license that can be // found in the LICENSE file. -// limitations under the License. + +"use strict"; // ECMAScript 402 API implementation. 
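A recurring change in the i18n.cc hunks above: JSReceiver::HasLocalProperty, which returned a bare bool, becomes HasOwnProperty returning Maybe<bool>, so a failed lookup is distinguishable from "property absent" and must be CHECKed before the value is read. A standalone mimic of the protocol — the Maybe struct and the map-based lookup are simplified stand-ins, not V8's real declarations:

#include <cassert>
#include <map>
#include <string>

// Simplified stand-in for V8's Maybe<T>: has_value is false when the
// lookup itself failed, and value is meaningful only when it is true.
template <typename T>
struct Maybe {
  bool has_value;
  T value;
};

// Hypothetical own-property lookup over a plain map.
Maybe<bool> HasOwnProperty(const std::map<std::string, int>& obj,
                           const std::string& key) {
  Maybe<bool> result;
  result.has_value = true;            // the lookup itself succeeded
  result.value = obj.count(key) > 0;  // whether the property exists
  return result;
}

int main() {
  std::map<std::string, int> resolved = {{"minimumSignificantDigits", 1}};
  Maybe<bool> maybe = HasOwnProperty(resolved, "minimumSignificantDigits");
  assert(maybe.has_value);  // mirrors CHECK(maybe.has_value) in the patch
  if (maybe.value) {
    // ... set the resolved option, as the hunks above do ...
  }
  return 0;
}

The CHECK(maybe.has_value) in the patch turns a failed lookup into a hard abort rather than a silent read of an indeterminate bool.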
@@ -11,8 +12,6 @@ */ $Object.defineProperty(global, "Intl", { enumerable: false, value: (function() { -'use strict'; - var Intl = {}; var undefined = global.undefined; @@ -943,7 +942,7 @@ function initializeCollator(collator, locales, options) { * * @constructor */ -%SetProperty(Intl, 'Collator', function() { +%AddNamedProperty(Intl, 'Collator', function() { var locales = %_Arguments(0); var options = %_Arguments(1); @@ -961,7 +960,7 @@ function initializeCollator(collator, locales, options) { /** * Collator resolvedOptions method. */ -%SetProperty(Intl.Collator.prototype, 'resolvedOptions', function() { +%AddNamedProperty(Intl.Collator.prototype, 'resolvedOptions', function() { if (%_IsConstructCall()) { throw new $TypeError(ORDINARY_FUNCTION_CALLED_AS_CONSTRUCTOR); } @@ -998,7 +997,7 @@ function initializeCollator(collator, locales, options) { * order in the returned list as in the input list. * Options are optional parameter. */ -%SetProperty(Intl.Collator, 'supportedLocalesOf', function(locales) { +%AddNamedProperty(Intl.Collator, 'supportedLocalesOf', function(locales) { if (%_IsConstructCall()) { throw new $TypeError(ORDINARY_FUNCTION_CALLED_AS_CONSTRUCTOR); } @@ -1170,7 +1169,7 @@ function initializeNumberFormat(numberFormat, locales, options) { * * @constructor */ -%SetProperty(Intl, 'NumberFormat', function() { +%AddNamedProperty(Intl, 'NumberFormat', function() { var locales = %_Arguments(0); var options = %_Arguments(1); @@ -1188,7 +1187,7 @@ function initializeNumberFormat(numberFormat, locales, options) { /** * NumberFormat resolvedOptions method. */ -%SetProperty(Intl.NumberFormat.prototype, 'resolvedOptions', function() { +%AddNamedProperty(Intl.NumberFormat.prototype, 'resolvedOptions', function() { if (%_IsConstructCall()) { throw new $TypeError(ORDINARY_FUNCTION_CALLED_AS_CONSTRUCTOR); } @@ -1244,7 +1243,7 @@ function initializeNumberFormat(numberFormat, locales, options) { * order in the returned list as in the input list. * Options are optional parameter. */ -%SetProperty(Intl.NumberFormat, 'supportedLocalesOf', function(locales) { +%AddNamedProperty(Intl.NumberFormat, 'supportedLocalesOf', function(locales) { if (%_IsConstructCall()) { throw new $TypeError(ORDINARY_FUNCTION_CALLED_AS_CONSTRUCTOR); } @@ -1563,7 +1562,7 @@ function initializeDateTimeFormat(dateFormat, locales, options) { * * @constructor */ -%SetProperty(Intl, 'DateTimeFormat', function() { +%AddNamedProperty(Intl, 'DateTimeFormat', function() { var locales = %_Arguments(0); var options = %_Arguments(1); @@ -1581,7 +1580,7 @@ function initializeDateTimeFormat(dateFormat, locales, options) { /** * DateTimeFormat resolvedOptions method. */ -%SetProperty(Intl.DateTimeFormat.prototype, 'resolvedOptions', function() { +%AddNamedProperty(Intl.DateTimeFormat.prototype, 'resolvedOptions', function() { if (%_IsConstructCall()) { throw new $TypeError(ORDINARY_FUNCTION_CALLED_AS_CONSTRUCTOR); } @@ -1637,7 +1636,7 @@ function initializeDateTimeFormat(dateFormat, locales, options) { * order in the returned list as in the input list. * Options are optional parameter. 
*/ -%SetProperty(Intl.DateTimeFormat, 'supportedLocalesOf', function(locales) { +%AddNamedProperty(Intl.DateTimeFormat, 'supportedLocalesOf', function(locales) { if (%_IsConstructCall()) { throw new $TypeError(ORDINARY_FUNCTION_CALLED_AS_CONSTRUCTOR); } @@ -1769,7 +1768,7 @@ function initializeBreakIterator(iterator, locales, options) { * * @constructor */ -%SetProperty(Intl, 'v8BreakIterator', function() { +%AddNamedProperty(Intl, 'v8BreakIterator', function() { var locales = %_Arguments(0); var options = %_Arguments(1); @@ -1787,7 +1786,8 @@ function initializeBreakIterator(iterator, locales, options) { /** * BreakIterator resolvedOptions method. */ -%SetProperty(Intl.v8BreakIterator.prototype, 'resolvedOptions', function() { +%AddNamedProperty(Intl.v8BreakIterator.prototype, 'resolvedOptions', + function() { if (%_IsConstructCall()) { throw new $TypeError(ORDINARY_FUNCTION_CALLED_AS_CONSTRUCTOR); } @@ -1820,7 +1820,8 @@ function initializeBreakIterator(iterator, locales, options) { * order in the returned list as in the input list. * Options are optional parameter. */ -%SetProperty(Intl.v8BreakIterator, 'supportedLocalesOf', function(locales) { +%AddNamedProperty(Intl.v8BreakIterator, 'supportedLocalesOf', + function(locales) { if (%_IsConstructCall()) { throw new $TypeError(ORDINARY_FUNCTION_CALLED_AS_CONSTRUCTOR); } diff --git a/deps/v8/src/ia32/assembler-ia32-inl.h b/deps/v8/src/ia32/assembler-ia32-inl.h index 97aeeae72cf..c7ec6d9918c 100644 --- a/deps/v8/src/ia32/assembler-ia32-inl.h +++ b/deps/v8/src/ia32/assembler-ia32-inl.h @@ -37,60 +37,63 @@ #ifndef V8_IA32_ASSEMBLER_IA32_INL_H_ #define V8_IA32_ASSEMBLER_IA32_INL_H_ -#include "ia32/assembler-ia32.h" +#include "src/ia32/assembler-ia32.h" -#include "cpu.h" -#include "debug.h" +#include "src/assembler.h" +#include "src/debug.h" namespace v8 { namespace internal { +bool CpuFeatures::SupportsCrankshaft() { return true; } + static const byte kCallOpcode = 0xE8; static const int kNoCodeAgeSequenceLength = 5; // The modes possibly affected by apply must be in kApplyMask. -void RelocInfo::apply(intptr_t delta) { +void RelocInfo::apply(intptr_t delta, ICacheFlushMode icache_flush_mode) { + bool flush_icache = icache_flush_mode != SKIP_ICACHE_FLUSH; if (IsRuntimeEntry(rmode_) || IsCodeTarget(rmode_)) { int32_t* p = reinterpret_cast<int32_t*>(pc_); *p -= delta; // Relocate entry. - CPU::FlushICache(p, sizeof(uint32_t)); + if (flush_icache) CpuFeatures::FlushICache(p, sizeof(uint32_t)); } else if (rmode_ == CODE_AGE_SEQUENCE) { if (*pc_ == kCallOpcode) { int32_t* p = reinterpret_cast<int32_t*>(pc_ + 1); *p -= delta; // Relocate entry. - CPU::FlushICache(p, sizeof(uint32_t)); + if (flush_icache) CpuFeatures::FlushICache(p, sizeof(uint32_t)); } } else if (rmode_ == JS_RETURN && IsPatchedReturnSequence()) { // Special handling of js_return when a break point is set (call // instruction has been inserted). int32_t* p = reinterpret_cast<int32_t*>(pc_ + 1); *p -= delta; // Relocate entry. - CPU::FlushICache(p, sizeof(uint32_t)); + if (flush_icache) CpuFeatures::FlushICache(p, sizeof(uint32_t)); } else if (rmode_ == DEBUG_BREAK_SLOT && IsPatchedDebugBreakSlotSequence()) { // Special handling of a debug break slot when a break point is set (call // instruction has been inserted). int32_t* p = reinterpret_cast<int32_t*>(pc_ + 1); *p -= delta; // Relocate entry. 
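The RelocInfo::apply hunk here threads a new ICacheFlushMode parameter through so that callers planning one bulk flush (for example after moving a whole code object) can skip the per-write flushes. A simplified standalone model of the pattern — FlushICache is a stub, and the types are assumptions, not the real assembler interfaces:

#include <cstddef>
#include <cstdint>
#include <cstdio>

enum ICacheFlushMode { FLUSH_ICACHE_IF_NEEDED, SKIP_ICACHE_FLUSH };

// Stub: a real implementation issues an architecture-specific cache flush.
void FlushICache(void* start, std::size_t size) {
  std::printf("flush %zu bytes at %p\n", size, start);
}

// Relocate one 32-bit entry, flushing only when the caller asks for it.
void RelocateEntry(int32_t* p, intptr_t delta,
                   ICacheFlushMode mode = FLUSH_ICACHE_IF_NEEDED) {
  *p -= static_cast<int32_t>(delta);
  if (mode != SKIP_ICACHE_FLUSH) FlushICache(p, sizeof(*p));
}

int main() {
  int32_t slot = 100;
  RelocateEntry(&slot, 4);                     // flushes immediately
  RelocateEntry(&slot, 4, SKIP_ICACHE_FLUSH);  // caller flushes in bulk later
  return 0;
}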
- CPU::FlushICache(p, sizeof(uint32_t)); + if (flush_icache) CpuFeatures::FlushICache(p, sizeof(uint32_t)); } else if (IsInternalReference(rmode_)) { // absolute code pointer inside code object moves with the code object. int32_t* p = reinterpret_cast<int32_t*>(pc_); *p += delta; // Relocate entry. - CPU::FlushICache(p, sizeof(uint32_t)); + if (flush_icache) CpuFeatures::FlushICache(p, sizeof(uint32_t)); } } Address RelocInfo::target_address() { - ASSERT(IsCodeTarget(rmode_) || IsRuntimeEntry(rmode_)); + DCHECK(IsCodeTarget(rmode_) || IsRuntimeEntry(rmode_)); return Assembler::target_address_at(pc_, host_); } Address RelocInfo::target_address_address() { - ASSERT(IsCodeTarget(rmode_) || IsRuntimeEntry(rmode_) + DCHECK(IsCodeTarget(rmode_) || IsRuntimeEntry(rmode_) || rmode_ == EMBEDDED_OBJECT || rmode_ == EXTERNAL_REFERENCE); return reinterpret_cast<Address>(pc_); @@ -108,10 +111,13 @@ int RelocInfo::target_address_size() { } -void RelocInfo::set_target_address(Address target, WriteBarrierMode mode) { - Assembler::set_target_address_at(pc_, host_, target); - ASSERT(IsCodeTarget(rmode_) || IsRuntimeEntry(rmode_)); - if (mode == UPDATE_WRITE_BARRIER && host() != NULL && IsCodeTarget(rmode_)) { +void RelocInfo::set_target_address(Address target, + WriteBarrierMode write_barrier_mode, + ICacheFlushMode icache_flush_mode) { + Assembler::set_target_address_at(pc_, host_, target, icache_flush_mode); + DCHECK(IsCodeTarget(rmode_) || IsRuntimeEntry(rmode_)); + if (write_barrier_mode == UPDATE_WRITE_BARRIER && host() != NULL && + IsCodeTarget(rmode_)) { Object* target_code = Code::GetCodeFromTargetAddress(target); host()->GetHeap()->incremental_marking()->RecordWriteIntoCode( host(), this, HeapObject::cast(target_code)); @@ -120,23 +126,26 @@ void RelocInfo::set_target_address(Address target, WriteBarrierMode mode) { Object* RelocInfo::target_object() { - ASSERT(IsCodeTarget(rmode_) || rmode_ == EMBEDDED_OBJECT); + DCHECK(IsCodeTarget(rmode_) || rmode_ == EMBEDDED_OBJECT); return Memory::Object_at(pc_); } Handle<Object> RelocInfo::target_object_handle(Assembler* origin) { - ASSERT(IsCodeTarget(rmode_) || rmode_ == EMBEDDED_OBJECT); + DCHECK(IsCodeTarget(rmode_) || rmode_ == EMBEDDED_OBJECT); return Memory::Object_Handle_at(pc_); } -void RelocInfo::set_target_object(Object* target, WriteBarrierMode mode) { - ASSERT(IsCodeTarget(rmode_) || rmode_ == EMBEDDED_OBJECT); - ASSERT(!target->IsConsString()); +void RelocInfo::set_target_object(Object* target, + WriteBarrierMode write_barrier_mode, + ICacheFlushMode icache_flush_mode) { + DCHECK(IsCodeTarget(rmode_) || rmode_ == EMBEDDED_OBJECT); Memory::Object_at(pc_) = target; - CPU::FlushICache(pc_, sizeof(Address)); - if (mode == UPDATE_WRITE_BARRIER && + if (icache_flush_mode != SKIP_ICACHE_FLUSH) { + CpuFeatures::FlushICache(pc_, sizeof(Address)); + } + if (write_barrier_mode == UPDATE_WRITE_BARRIER && host() != NULL && target->IsHeapObject()) { host()->GetHeap()->incremental_marking()->RecordWrite( @@ -146,43 +155,50 @@ void RelocInfo::set_target_object(Object* target, WriteBarrierMode mode) { Address RelocInfo::target_reference() { - ASSERT(rmode_ == RelocInfo::EXTERNAL_REFERENCE); + DCHECK(rmode_ == RelocInfo::EXTERNAL_REFERENCE); return Memory::Address_at(pc_); } Address RelocInfo::target_runtime_entry(Assembler* origin) { - ASSERT(IsRuntimeEntry(rmode_)); + DCHECK(IsRuntimeEntry(rmode_)); return reinterpret_cast<Address>(*reinterpret_cast<int32_t*>(pc_)); } void RelocInfo::set_target_runtime_entry(Address target, - WriteBarrierMode mode) { - 
ASSERT(IsRuntimeEntry(rmode_)); - if (target_address() != target) set_target_address(target, mode); + WriteBarrierMode write_barrier_mode, + ICacheFlushMode icache_flush_mode) { + DCHECK(IsRuntimeEntry(rmode_)); + if (target_address() != target) { + set_target_address(target, write_barrier_mode, icache_flush_mode); + } } Handle<Cell> RelocInfo::target_cell_handle() { - ASSERT(rmode_ == RelocInfo::CELL); + DCHECK(rmode_ == RelocInfo::CELL); Address address = Memory::Address_at(pc_); return Handle<Cell>(reinterpret_cast<Cell**>(address)); } Cell* RelocInfo::target_cell() { - ASSERT(rmode_ == RelocInfo::CELL); + DCHECK(rmode_ == RelocInfo::CELL); return Cell::FromValueAddress(Memory::Address_at(pc_)); } -void RelocInfo::set_target_cell(Cell* cell, WriteBarrierMode mode) { - ASSERT(rmode_ == RelocInfo::CELL); +void RelocInfo::set_target_cell(Cell* cell, + WriteBarrierMode write_barrier_mode, + ICacheFlushMode icache_flush_mode) { + DCHECK(rmode_ == RelocInfo::CELL); Address address = cell->address() + Cell::kValueOffset; Memory::Address_at(pc_) = address; - CPU::FlushICache(pc_, sizeof(Address)); - if (mode == UPDATE_WRITE_BARRIER && host() != NULL) { + if (icache_flush_mode != SKIP_ICACHE_FLUSH) { + CpuFeatures::FlushICache(pc_, sizeof(Address)); + } + if (write_barrier_mode == UPDATE_WRITE_BARRIER && host() != NULL) { // TODO(1550) We are passing NULL as a slot because cell can never be on // evacuation candidate. host()->GetHeap()->incremental_marking()->RecordWrite( @@ -192,36 +208,38 @@ void RelocInfo::set_target_cell(Cell* cell, WriteBarrierMode mode) { Handle<Object> RelocInfo::code_age_stub_handle(Assembler* origin) { - ASSERT(rmode_ == RelocInfo::CODE_AGE_SEQUENCE); - ASSERT(*pc_ == kCallOpcode); + DCHECK(rmode_ == RelocInfo::CODE_AGE_SEQUENCE); + DCHECK(*pc_ == kCallOpcode); return Memory::Object_Handle_at(pc_ + 1); } Code* RelocInfo::code_age_stub() { - ASSERT(rmode_ == RelocInfo::CODE_AGE_SEQUENCE); - ASSERT(*pc_ == kCallOpcode); + DCHECK(rmode_ == RelocInfo::CODE_AGE_SEQUENCE); + DCHECK(*pc_ == kCallOpcode); return Code::GetCodeFromTargetAddress( Assembler::target_address_at(pc_ + 1, host_)); } -void RelocInfo::set_code_age_stub(Code* stub) { - ASSERT(*pc_ == kCallOpcode); - ASSERT(rmode_ == RelocInfo::CODE_AGE_SEQUENCE); - Assembler::set_target_address_at(pc_ + 1, host_, stub->instruction_start()); +void RelocInfo::set_code_age_stub(Code* stub, + ICacheFlushMode icache_flush_mode) { + DCHECK(*pc_ == kCallOpcode); + DCHECK(rmode_ == RelocInfo::CODE_AGE_SEQUENCE); + Assembler::set_target_address_at(pc_ + 1, host_, stub->instruction_start(), + icache_flush_mode); } Address RelocInfo::call_address() { - ASSERT((IsJSReturn(rmode()) && IsPatchedReturnSequence()) || + DCHECK((IsJSReturn(rmode()) && IsPatchedReturnSequence()) || (IsDebugBreakSlot(rmode()) && IsPatchedDebugBreakSlotSequence())); return Assembler::target_address_at(pc_ + 1, host_); } void RelocInfo::set_call_address(Address target) { - ASSERT((IsJSReturn(rmode()) && IsPatchedReturnSequence()) || + DCHECK((IsJSReturn(rmode()) && IsPatchedReturnSequence()) || (IsDebugBreakSlot(rmode()) && IsPatchedDebugBreakSlotSequence())); Assembler::set_target_address_at(pc_ + 1, host_, target); if (host() != NULL) { @@ -243,7 +261,7 @@ void RelocInfo::set_call_object(Object* target) { Object** RelocInfo::call_object_address() { - ASSERT((IsJSReturn(rmode()) && IsPatchedReturnSequence()) || + DCHECK((IsJSReturn(rmode()) && IsPatchedReturnSequence()) || (IsDebugBreakSlot(rmode()) && IsPatchedDebugBreakSlotSequence())); return 
reinterpret_cast<Object**>(pc_ + 1); } @@ -275,14 +293,14 @@ void RelocInfo::Visit(Isolate* isolate, ObjectVisitor* visitor) { RelocInfo::Mode mode = rmode(); if (mode == RelocInfo::EMBEDDED_OBJECT) { visitor->VisitEmbeddedPointer(this); - CPU::FlushICache(pc_, sizeof(Address)); + CpuFeatures::FlushICache(pc_, sizeof(Address)); } else if (RelocInfo::IsCodeTarget(mode)) { visitor->VisitCodeTarget(this); } else if (mode == RelocInfo::CELL) { visitor->VisitCell(this); } else if (mode == RelocInfo::EXTERNAL_REFERENCE) { visitor->VisitExternalReference(this); - CPU::FlushICache(pc_, sizeof(Address)); + CpuFeatures::FlushICache(pc_, sizeof(Address)); } else if (RelocInfo::IsCodeAgeSequence(mode)) { visitor->VisitCodeAgeSequence(this); } else if (((RelocInfo::IsJSReturn(mode) && @@ -302,14 +320,14 @@ void RelocInfo::Visit(Heap* heap) { RelocInfo::Mode mode = rmode(); if (mode == RelocInfo::EMBEDDED_OBJECT) { StaticVisitor::VisitEmbeddedPointer(heap, this); - CPU::FlushICache(pc_, sizeof(Address)); + CpuFeatures::FlushICache(pc_, sizeof(Address)); } else if (RelocInfo::IsCodeTarget(mode)) { StaticVisitor::VisitCodeTarget(heap, this); } else if (mode == RelocInfo::CELL) { StaticVisitor::VisitCell(heap, this); } else if (mode == RelocInfo::EXTERNAL_REFERENCE) { StaticVisitor::VisitExternalReference(this); - CPU::FlushICache(pc_, sizeof(Address)); + CpuFeatures::FlushICache(pc_, sizeof(Address)); } else if (RelocInfo::IsCodeAgeSequence(mode)) { StaticVisitor::VisitCodeAgeSequence(heap, this); } else if (heap->isolate()->debug()->has_break_points() && @@ -348,7 +366,7 @@ Immediate::Immediate(Handle<Object> handle) { // Verify all Objects referred by code are NOT in new space. Object* obj = *handle; if (obj->IsHeapObject()) { - ASSERT(!HeapObject::cast(obj)->GetHeap()->InNewSpace(obj)); + DCHECK(!HeapObject::cast(obj)->GetHeap()->InNewSpace(obj)); x_ = reinterpret_cast<intptr_t>(handle.location()); rmode_ = RelocInfo::EMBEDDED_OBJECT; } else { @@ -381,7 +399,7 @@ void Assembler::emit(Handle<Object> handle) { AllowDeferredHandleDereference heap_object_check; // Verify all Objects referred by code are NOT in new space. 
Object* obj = *handle; - ASSERT(!isolate()->heap()->InNewSpace(obj)); + DCHECK(!isolate()->heap()->InNewSpace(obj)); if (obj->IsHeapObject()) { emit(reinterpret_cast<intptr_t>(handle.location()), RelocInfo::EMBEDDED_OBJECT); @@ -434,7 +452,7 @@ void Assembler::emit_code_relative_offset(Label* label) { void Assembler::emit_w(const Immediate& x) { - ASSERT(RelocInfo::IsNone(x.rmode_)); + DCHECK(RelocInfo::IsNone(x.rmode_)); uint16_t value = static_cast<uint16_t>(x.x_); reinterpret_cast<uint16_t*>(pc_)[0] = value; pc_ += sizeof(uint16_t); @@ -449,10 +467,13 @@ Address Assembler::target_address_at(Address pc, void Assembler::set_target_address_at(Address pc, ConstantPoolArray* constant_pool, - Address target) { + Address target, + ICacheFlushMode icache_flush_mode) { int32_t* p = reinterpret_cast<int32_t*>(pc); *p = target - (pc + sizeof(int32_t)); - CPU::FlushICache(p, sizeof(int32_t)); + if (icache_flush_mode != SKIP_ICACHE_FLUSH) { + CpuFeatures::FlushICache(p, sizeof(int32_t)); + } } @@ -461,6 +482,11 @@ Address Assembler::target_address_from_return_address(Address pc) { } +Address Assembler::break_address_from_return_address(Address pc) { + return pc - Assembler::kPatchDebugBreakSlotReturnOffset; +} + + Displacement Assembler::disp_at(Label* L) { return Displacement(long_at(L->pos())); } @@ -482,7 +508,7 @@ void Assembler::emit_near_disp(Label* L) { byte disp = 0x00; if (L->is_near_linked()) { int offset = L->near_link_pos() - pc_offset(); - ASSERT(is_int8(offset)); + DCHECK(is_int8(offset)); disp = static_cast<byte>(offset & 0xFF); } L->link_to(pc_offset(), Label::kNear); @@ -491,30 +517,30 @@ void Assembler::emit_near_disp(Label* L) { void Operand::set_modrm(int mod, Register rm) { - ASSERT((mod & -4) == 0); + DCHECK((mod & -4) == 0); buf_[0] = mod << 6 | rm.code(); len_ = 1; } void Operand::set_sib(ScaleFactor scale, Register index, Register base) { - ASSERT(len_ == 1); - ASSERT((scale & -4) == 0); + DCHECK(len_ == 1); + DCHECK((scale & -4) == 0); // Use SIB with no index register only for base esp. - ASSERT(!index.is(esp) || base.is(esp)); + DCHECK(!index.is(esp) || base.is(esp)); buf_[1] = scale << 6 | index.code() << 3 | base.code(); len_ = 2; } void Operand::set_disp8(int8_t disp) { - ASSERT(len_ == 1 || len_ == 2); + DCHECK(len_ == 1 || len_ == 2); *reinterpret_cast<int8_t*>(&buf_[len_++]) = disp; } void Operand::set_dispr(int32_t disp, RelocInfo::Mode rmode) { - ASSERT(len_ == 1 || len_ == 2); + DCHECK(len_ == 1 || len_ == 2); int32_t* p = reinterpret_cast<int32_t*>(&buf_[len_]); *p = disp; len_ += sizeof(int32_t); @@ -539,6 +565,12 @@ Operand::Operand(int32_t disp, RelocInfo::Mode rmode) { set_dispr(disp, rmode); } + +Operand::Operand(Immediate imm) { + // [disp/r] + set_modrm(0, ebp); + set_dispr(imm.x_, imm.rmode_); +} } } // namespace v8::internal #endif // V8_IA32_ASSEMBLER_IA32_INL_H_ diff --git a/deps/v8/src/ia32/assembler-ia32.cc b/deps/v8/src/ia32/assembler-ia32.cc index 7a88e708272..d8cd59cf504 100644 --- a/deps/v8/src/ia32/assembler-ia32.cc +++ b/deps/v8/src/ia32/assembler-ia32.cc @@ -34,13 +34,14 @@ // significantly by Google Inc. // Copyright 2012 the V8 project authors. All rights reserved. 
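The Operand(Immediate) constructor added just above encodes the immediate as an absolute [disp32] operand: on ia32, mod=00 with r/m=101 (ebp's register code) selects the disp32 form rather than [ebp]. A small standalone check of that encoding rule — EncodeModRM is an illustrative helper, not assembler API:

#include <cassert>
#include <cstdint>

// Illustrative helper: pack a ModR/M byte from its three fields.
uint8_t EncodeModRM(int mod, int reg, int rm) {
  assert(mod < 4 && reg < 8 && rm < 8);
  return static_cast<uint8_t>(mod << 6 | reg << 3 | rm);
}

int main() {
  const int kEbpCode = 5;
  // mod=00 with r/m=101 means [disp32], not [ebp]; the reg field (0 here)
  // is the slot emit_operand later fills with a register or "/digit".
  assert(EncodeModRM(0, 0, kEbpCode) == 0x05);
  return 0;
}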
-#include "v8.h" +#include "src/v8.h" #if V8_TARGET_ARCH_IA32 -#include "disassembler.h" -#include "macro-assembler.h" -#include "serialize.h" +#include "src/base/cpu.h" +#include "src/disassembler.h" +#include "src/macro-assembler.h" +#include "src/serialize.h" namespace v8 { namespace internal { @@ -48,95 +49,35 @@ namespace internal { // ----------------------------------------------------------------------------- // Implementation of CpuFeatures -#ifdef DEBUG -bool CpuFeatures::initialized_ = false; -#endif -uint64_t CpuFeatures::supported_ = 0; -uint64_t CpuFeatures::found_by_runtime_probing_only_ = 0; -uint64_t CpuFeatures::cross_compile_ = 0; - - -ExternalReference ExternalReference::cpu_features() { - ASSERT(CpuFeatures::initialized_); - return ExternalReference(&CpuFeatures::supported_); -} - - -int IntelDoubleRegister::NumAllocatableRegisters() { - if (CpuFeatures::IsSupported(SSE2)) { - return XMMRegister::kNumAllocatableRegisters; - } else { - return X87Register::kNumAllocatableRegisters; - } -} +void CpuFeatures::ProbeImpl(bool cross_compile) { + base::CPU cpu; + CHECK(cpu.has_sse2()); // SSE2 support is mandatory. + CHECK(cpu.has_cmov()); // CMOV support is mandatory. + // Only use statically determined features for cross compile (snapshot). + if (cross_compile) return; -int IntelDoubleRegister::NumRegisters() { - if (CpuFeatures::IsSupported(SSE2)) { - return XMMRegister::kNumRegisters; - } else { - return X87Register::kNumRegisters; - } + if (cpu.has_sse41() && FLAG_enable_sse4_1) supported_ |= 1u << SSE4_1; + if (cpu.has_sse3() && FLAG_enable_sse3) supported_ |= 1u << SSE3; } -const char* IntelDoubleRegister::AllocationIndexToString(int index) { - if (CpuFeatures::IsSupported(SSE2)) { - return XMMRegister::AllocationIndexToString(index); - } else { - return X87Register::AllocationIndexToString(index); - } -} - - -void CpuFeatures::Probe(bool serializer_enabled) { - ASSERT(!initialized_); - ASSERT(supported_ == 0); -#ifdef DEBUG - initialized_ = true; -#endif - if (serializer_enabled) { - supported_ |= OS::CpuFeaturesImpliedByPlatform(); - return; // No features if we might serialize. - } - - uint64_t probed_features = 0; - CPU cpu; - if (cpu.has_sse41()) { - probed_features |= static_cast<uint64_t>(1) << SSE4_1; - } - if (cpu.has_sse3()) { - probed_features |= static_cast<uint64_t>(1) << SSE3; - } - if (cpu.has_sse2()) { - probed_features |= static_cast<uint64_t>(1) << SSE2; - } - if (cpu.has_cmov()) { - probed_features |= static_cast<uint64_t>(1) << CMOV; - } - - // SAHF must be available in compat/legacy mode. - ASSERT(cpu.has_sahf()); - probed_features |= static_cast<uint64_t>(1) << SAHF; - - uint64_t platform_features = OS::CpuFeaturesImpliedByPlatform(); - supported_ = probed_features | platform_features; - found_by_runtime_probing_only_ = probed_features & ~platform_features; -} +void CpuFeatures::PrintTarget() { } +void CpuFeatures::PrintFeatures() { } // ----------------------------------------------------------------------------- // Implementation of Displacement void Displacement::init(Label* L, Type type) { - ASSERT(!L->is_bound()); + DCHECK(!L->is_bound()); int next = 0; if (L->is_linked()) { next = L->pos(); - ASSERT(next > 0); // Displacements must be at positions > 0 + DCHECK(next > 0); // Displacements must be at positions > 0 } // Ensure that we _never_ overflow the next field. 
- ASSERT(NextField::is_valid(Assembler::kMaximalBufferSize)); + DCHECK(NextField::is_valid(Assembler::kMaximalBufferSize)); data_ = NextField::encode(next) | TypeField::encode(type); } @@ -172,7 +113,7 @@ void RelocInfo::PatchCode(byte* instructions, int instruction_count) { } // Indicate that code has changed. - CPU::FlushICache(pc_, instruction_count); + CpuFeatures::FlushICache(pc_, instruction_count); } @@ -196,11 +137,11 @@ void RelocInfo::PatchCodeWithCall(Address target, int guard_bytes) { patcher.masm()->call(target, RelocInfo::NONE32); // Check that the size of the code generated is as expected. - ASSERT_EQ(kCallCodeSize, + DCHECK_EQ(kCallCodeSize, patcher.masm()->SizeOfCodeGeneratedSince(&check_codesize)); // Add the requested number of int3 instructions after the call. - ASSERT_GE(guard_bytes, 0); + DCHECK_GE(guard_bytes, 0); for (int i = 0; i < guard_bytes; i++) { patcher.masm()->int3(); } @@ -235,7 +176,7 @@ Operand::Operand(Register base, ScaleFactor scale, int32_t disp, RelocInfo::Mode rmode) { - ASSERT(!index.is(esp)); // illegal addressing mode + DCHECK(!index.is(esp)); // illegal addressing mode // [base + index*scale + disp/r] if (disp == 0 && RelocInfo::IsNone(rmode) && !base.is(ebp)) { // [base + index*scale] @@ -259,7 +200,7 @@ Operand::Operand(Register index, ScaleFactor scale, int32_t disp, RelocInfo::Mode rmode) { - ASSERT(!index.is(esp)); // illegal addressing mode + DCHECK(!index.is(esp)); // illegal addressing mode // [index*scale + disp/r] set_modrm(0, esp); set_sib(scale, index, ebp); @@ -279,7 +220,7 @@ bool Operand::is_reg_only() const { Register Operand::reg() const { - ASSERT(is_reg_only()); + DCHECK(is_reg_only()); return Register::from_code(buf_[0] & 0x07); } @@ -319,7 +260,7 @@ Assembler::Assembler(Isolate* isolate, void* buffer, int buffer_size) void Assembler::GetCode(CodeDesc* desc) { // Finalize code (at this point overflow() may be true, but the gap ensures // that we are still not overlapping instructions and relocation info). - ASSERT(pc_ <= reloc_info_writer.pos()); // No overlap. + DCHECK(pc_ <= reloc_info_writer.pos()); // No overlap. // Set up code descriptor. desc->buffer = buffer_; desc->buffer_size = buffer_size_; @@ -330,7 +271,7 @@ void Assembler::GetCode(CodeDesc* desc) { void Assembler::Align(int m) { - ASSERT(IsPowerOf2(m)); + DCHECK(IsPowerOf2(m)); int mask = m - 1; int addr = pc_offset(); Nop((m - (addr & mask)) & mask); @@ -349,15 +290,6 @@ bool Assembler::IsNop(Address addr) { void Assembler::Nop(int bytes) { EnsureSpace ensure_space(this); - if (!CpuFeatures::IsSupported(SSE2)) { - // Older CPUs that do not support SSE2 may not support multibyte NOP - // instructions. - for (; bytes > 0; bytes--) { - EMIT(0x90); - } - return; - } - // Multi byte nops from http://support.amd.com/us/Processor_TechDocs/40546.pdf while (bytes > 0) { switch (bytes) { @@ -489,7 +421,7 @@ void Assembler::push(const Operand& src) { void Assembler::pop(Register dst) { - ASSERT(reloc_info_writer.last_pc() != NULL); + DCHECK(reloc_info_writer.last_pc() != NULL); EnsureSpace ensure_space(this); EMIT(0x58 | dst.code()); } @@ -657,7 +589,6 @@ void Assembler::movzx_w(Register dst, const Operand& src) { void Assembler::cmov(Condition cc, Register dst, const Operand& src) { - ASSERT(IsEnabled(CMOV)); EnsureSpace ensure_space(this); // Opcode: 0f 40 + cc /r. 
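cmov folds its condition into the second opcode byte (0F 40+cc), and its IsEnabled(CMOV) assert could be dropped because ProbeImpl (earlier in this file's diff) now CHECKs CMOV support at startup. A tiny sanity check of the opcode arithmetic, using an illustrative helper rather than assembler API:

#include <cassert>
#include <cstdint>

// cmovcc is encoded as 0F 40+cc.
uint8_t CmovSecondOpcodeByte(int cc) {
  assert(0 <= cc && cc < 16);  // x86 defines 16 condition codes
  return static_cast<uint8_t>(0x40 | cc);
}

int main() {
  assert(CmovSecondOpcodeByte(0x4) == 0x44);  // cmove (equal/zero)
  assert(CmovSecondOpcodeByte(0xF) == 0x4F);  // cmovg (greater)
  return 0;
}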
EMIT(0x0F); @@ -703,6 +634,13 @@ void Assembler::xchg(Register dst, Register src) { } +void Assembler::xchg(Register dst, const Operand& src) { + EnsureSpace ensure_space(this); + EMIT(0x87); + emit_operand(dst, src); +} + + void Assembler::adc(Register dst, int32_t imm32) { EnsureSpace ensure_space(this); emit_arith(2, Operand(dst), Immediate(imm32)); @@ -731,7 +669,7 @@ void Assembler::add(const Operand& dst, Register src) { void Assembler::add(const Operand& dst, const Immediate& x) { - ASSERT(reloc_info_writer.last_pc() != NULL); + DCHECK(reloc_info_writer.last_pc() != NULL); EnsureSpace ensure_space(this); emit_arith(0, dst, x); } @@ -797,7 +735,7 @@ void Assembler::cmpb(Register reg, const Operand& op) { void Assembler::cmpw(const Operand& op, Immediate imm16) { - ASSERT(imm16.is_int16()); + DCHECK(imm16.is_int16()); EnsureSpace ensure_space(this); EMIT(0x66); EMIT(0x81); @@ -886,10 +824,17 @@ void Assembler::cdq() { } -void Assembler::idiv(Register src) { +void Assembler::idiv(const Operand& src) { EnsureSpace ensure_space(this); EMIT(0xF7); - EMIT(0xF8 | src.code()); + emit_operand(edi, src); +} + + +void Assembler::div(const Operand& src) { + EnsureSpace ensure_space(this); + EMIT(0xF7); + emit_operand(esi, src); } @@ -909,14 +854,19 @@ void Assembler::imul(Register dst, const Operand& src) { void Assembler::imul(Register dst, Register src, int32_t imm32) { + imul(dst, Operand(src), imm32); +} + + +void Assembler::imul(Register dst, const Operand& src, int32_t imm32) { EnsureSpace ensure_space(this); if (is_int8(imm32)) { EMIT(0x6B); - EMIT(0xC0 | dst.code() << 3 | src.code()); + emit_operand(dst, src); EMIT(imm32); } else { EMIT(0x69); - EMIT(0xC0 | dst.code() << 3 | src.code()); + emit_operand(dst, src); emit(imm32); } } @@ -956,6 +906,13 @@ void Assembler::neg(Register dst) { } +void Assembler::neg(const Operand& dst) { + EnsureSpace ensure_space(this); + EMIT(0xF7); + emit_operand(ebx, dst); +} + + void Assembler::not_(Register dst) { EnsureSpace ensure_space(this); EMIT(0xF7); @@ -963,6 +920,13 @@ void Assembler::not_(Register dst) { } +void Assembler::not_(const Operand& dst) { + EnsureSpace ensure_space(this); + EMIT(0xF7); + emit_operand(edx, dst); +} + + void Assembler::or_(Register dst, int32_t imm32) { EnsureSpace ensure_space(this); emit_arith(1, Operand(dst), Immediate(imm32)); @@ -991,7 +955,7 @@ void Assembler::or_(const Operand& dst, Register src) { void Assembler::rcl(Register dst, uint8_t imm8) { EnsureSpace ensure_space(this); - ASSERT(is_uint5(imm8)); // illegal shift count + DCHECK(is_uint5(imm8)); // illegal shift count if (imm8 == 1) { EMIT(0xD1); EMIT(0xD0 | dst.code()); @@ -1005,7 +969,7 @@ void Assembler::rcl(Register dst, uint8_t imm8) { void Assembler::rcr(Register dst, uint8_t imm8) { EnsureSpace ensure_space(this); - ASSERT(is_uint5(imm8)); // illegal shift count + DCHECK(is_uint5(imm8)); // illegal shift count if (imm8 == 1) { EMIT(0xD1); EMIT(0xD8 | dst.code()); @@ -1019,7 +983,7 @@ void Assembler::rcr(Register dst, uint8_t imm8) { void Assembler::ror(Register dst, uint8_t imm8) { EnsureSpace ensure_space(this); - ASSERT(is_uint5(imm8)); // illegal shift count + DCHECK(is_uint5(imm8)); // illegal shift count if (imm8 == 1) { EMIT(0xD1); EMIT(0xC8 | dst.code()); @@ -1038,24 +1002,24 @@ void Assembler::ror_cl(Register dst) { } -void Assembler::sar(Register dst, uint8_t imm8) { +void Assembler::sar(const Operand& dst, uint8_t imm8) { EnsureSpace ensure_space(this); - ASSERT(is_uint5(imm8)); // illegal shift count + DCHECK(is_uint5(imm8)); // illegal 
shift count if (imm8 == 1) { EMIT(0xD1); - EMIT(0xF8 | dst.code()); + emit_operand(edi, dst); } else { EMIT(0xC1); - EMIT(0xF8 | dst.code()); + emit_operand(edi, dst); EMIT(imm8); } } -void Assembler::sar_cl(Register dst) { +void Assembler::sar_cl(const Operand& dst) { EnsureSpace ensure_space(this); EMIT(0xD3); - EMIT(0xF8 | dst.code()); + emit_operand(edi, dst); } @@ -1074,24 +1038,24 @@ void Assembler::shld(Register dst, const Operand& src) { } -void Assembler::shl(Register dst, uint8_t imm8) { +void Assembler::shl(const Operand& dst, uint8_t imm8) { EnsureSpace ensure_space(this); - ASSERT(is_uint5(imm8)); // illegal shift count + DCHECK(is_uint5(imm8)); // illegal shift count if (imm8 == 1) { EMIT(0xD1); - EMIT(0xE0 | dst.code()); + emit_operand(esp, dst); } else { EMIT(0xC1); - EMIT(0xE0 | dst.code()); + emit_operand(esp, dst); EMIT(imm8); } } -void Assembler::shl_cl(Register dst) { +void Assembler::shl_cl(const Operand& dst) { EnsureSpace ensure_space(this); EMIT(0xD3); - EMIT(0xE0 | dst.code()); + emit_operand(esp, dst); } @@ -1103,24 +1067,24 @@ void Assembler::shrd(Register dst, const Operand& src) { } -void Assembler::shr(Register dst, uint8_t imm8) { +void Assembler::shr(const Operand& dst, uint8_t imm8) { EnsureSpace ensure_space(this); - ASSERT(is_uint5(imm8)); // illegal shift count + DCHECK(is_uint5(imm8)); // illegal shift count if (imm8 == 1) { EMIT(0xD1); - EMIT(0xE8 | dst.code()); + emit_operand(ebp, dst); } else { EMIT(0xC1); - EMIT(0xE8 | dst.code()); + emit_operand(ebp, dst); EMIT(imm8); } } -void Assembler::shr_cl(Register dst) { +void Assembler::shr_cl(const Operand& dst) { EnsureSpace ensure_space(this); EMIT(0xD3); - EMIT(0xE8 | dst.code()); + emit_operand(ebp, dst); } @@ -1292,7 +1256,7 @@ void Assembler::nop() { void Assembler::ret(int imm16) { EnsureSpace ensure_space(this); - ASSERT(is_uint16(imm16)); + DCHECK(is_uint16(imm16)); if (imm16 == 0) { EMIT(0xC3); } else { @@ -1337,7 +1301,7 @@ void Assembler::print(Label* L) { void Assembler::bind_to(Label* L, int pos) { EnsureSpace ensure_space(this); - ASSERT(0 <= pos && pos <= pc_offset()); // must have a valid binding position + DCHECK(0 <= pos && pos <= pc_offset()); // must have a valid binding position while (L->is_linked()) { Displacement disp = disp_at(L); int fixup_pos = L->pos(); @@ -1346,7 +1310,7 @@ void Assembler::bind_to(Label* L, int pos) { long_at_put(fixup_pos, pos + Code::kHeaderSize - kHeapObjectTag); } else { if (disp.type() == Displacement::UNCONDITIONAL_JUMP) { - ASSERT(byte_at(fixup_pos - 1) == 0xE9); // jmp expected + DCHECK(byte_at(fixup_pos - 1) == 0xE9); // jmp expected } // Relative address, relative to point after address. int imm32 = pos - (fixup_pos + sizeof(int32_t)); @@ -1358,7 +1322,7 @@ void Assembler::bind_to(Label* L, int pos) { int fixup_pos = L->near_link_pos(); int offset_to_next = static_cast<int>(*reinterpret_cast<int8_t*>(addr_at(fixup_pos))); - ASSERT(offset_to_next <= 0); + DCHECK(offset_to_next <= 0); // Relative address, relative to point after address. 
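The shift hunks above (sar, shl, shr and their _cl forms) widen the Register parameters to Operand. The old register-only encoding falls out as a special case: for a register operand, ModR/M uses mod=11, so emit_operand(edi, Operand(dst)) emits exactly the byte the old EMIT(0xF8 | dst.code()) wrote — sar's opcode extension is /7, the same value as edi's register code (likewise esp/4 for shl and ebp/5 for shr). A standalone check of that equivalence:

#include <cassert>
#include <cstdint>

// For a register operand, ModR/M has mod=11; the middle field carries the
// "/digit" opcode extension for one-operand instructions like sar.
uint8_t ModRMForRegister(int opcode_extension, int reg_code) {
  return static_cast<uint8_t>(0xC0 | opcode_extension << 3 | reg_code);
}

int main() {
  const int kSarExtension = 7;  // sar is /7 -- the same value as edi's code
  for (int reg = 0; reg < 8; ++reg) {
    // Old path: EMIT(0xF8 | dst.code()); new path: emit_operand(edi, ...).
    assert(ModRMForRegister(kSarExtension, reg) == (0xF8 | reg));
  }
  return 0;
}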
int disp = pos - fixup_pos - sizeof(int8_t); CHECK(0 <= disp && disp <= 127); @@ -1375,7 +1339,7 @@ void Assembler::bind_to(Label* L, int pos) { void Assembler::bind(Label* L) { EnsureSpace ensure_space(this); - ASSERT(!L->is_bound()); // label can only be bound once + DCHECK(!L->is_bound()); // label can only be bound once bind_to(L, pc_offset()); } @@ -1386,7 +1350,7 @@ void Assembler::call(Label* L) { if (L->is_bound()) { const int long_size = 5; int offs = L->pos() - pc_offset(); - ASSERT(offs <= 0); + DCHECK(offs <= 0); // 1110 1000 #32-bit disp. EMIT(0xE8); emit(offs - long_size); @@ -1401,7 +1365,7 @@ void Assembler::call(Label* L) { void Assembler::call(byte* entry, RelocInfo::Mode rmode) { positions_recorder()->WriteRecordedPositions(); EnsureSpace ensure_space(this); - ASSERT(!RelocInfo::IsCodeTarget(rmode)); + DCHECK(!RelocInfo::IsCodeTarget(rmode)); EMIT(0xE8); if (RelocInfo::IsRuntimeEntry(rmode)) { emit(reinterpret_cast<uint32_t>(entry), rmode); @@ -1435,7 +1399,7 @@ void Assembler::call(Handle<Code> code, TypeFeedbackId ast_id) { positions_recorder()->WriteRecordedPositions(); EnsureSpace ensure_space(this); - ASSERT(RelocInfo::IsCodeTarget(rmode) + DCHECK(RelocInfo::IsCodeTarget(rmode) || rmode == RelocInfo::CODE_AGE_SEQUENCE); EMIT(0xE8); emit(code, rmode, ast_id); @@ -1448,7 +1412,7 @@ void Assembler::jmp(Label* L, Label::Distance distance) { const int short_size = 2; const int long_size = 5; int offs = L->pos() - pc_offset(); - ASSERT(offs <= 0); + DCHECK(offs <= 0); if (is_int8(offs - short_size)) { // 1110 1011 #8-bit disp. EMIT(0xEB); @@ -1471,7 +1435,7 @@ void Assembler::jmp(Label* L, Label::Distance distance) { void Assembler::jmp(byte* entry, RelocInfo::Mode rmode) { EnsureSpace ensure_space(this); - ASSERT(!RelocInfo::IsCodeTarget(rmode)); + DCHECK(!RelocInfo::IsCodeTarget(rmode)); EMIT(0xE9); if (RelocInfo::IsRuntimeEntry(rmode)) { emit(reinterpret_cast<uint32_t>(entry), rmode); @@ -1490,7 +1454,7 @@ void Assembler::jmp(const Operand& adr) { void Assembler::jmp(Handle<Code> code, RelocInfo::Mode rmode) { EnsureSpace ensure_space(this); - ASSERT(RelocInfo::IsCodeTarget(rmode)); + DCHECK(RelocInfo::IsCodeTarget(rmode)); EMIT(0xE9); emit(code, rmode); } @@ -1498,12 +1462,12 @@ void Assembler::jmp(Handle<Code> code, RelocInfo::Mode rmode) { void Assembler::j(Condition cc, Label* L, Label::Distance distance) { EnsureSpace ensure_space(this); - ASSERT(0 <= cc && static_cast<int>(cc) < 16); + DCHECK(0 <= cc && static_cast<int>(cc) < 16); if (L->is_bound()) { const int short_size = 2; const int long_size = 6; int offs = L->pos() - pc_offset(); - ASSERT(offs <= 0); + DCHECK(offs <= 0); if (is_int8(offs - short_size)) { // 0111 tttn #8-bit disp EMIT(0x70 | cc); @@ -1530,7 +1494,7 @@ void Assembler::j(Condition cc, Label* L, Label::Distance distance) { void Assembler::j(Condition cc, byte* entry, RelocInfo::Mode rmode) { EnsureSpace ensure_space(this); - ASSERT((0 <= cc) && (static_cast<int>(cc) < 16)); + DCHECK((0 <= cc) && (static_cast<int>(cc) < 16)); // 0000 1111 1000 tttn #32-bit disp. 
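The call/jmp/j hunks above keep the two-tier jump encoding: a bound (backward) label gets the 8-bit displacement form when it fits, otherwise the 32-bit form. A simplified model of the size choice, with plain byte offsets standing in for real code positions:

#include <cassert>

bool is_int8(int x) { return x >= -128 && x <= 127; }

// Encoded size of a backward jmp from pc to target (byte offsets),
// mirroring the short/long choice in Assembler::jmp above.
int JmpEncodedSize(int target, int pc) {
  const int kShortSize = 2;  // EB disp8
  const int kLongSize = 5;   // E9 disp32
  int offs = target - pc;
  assert(offs <= 0);  // a bound label is always behind the pc
  return is_int8(offs - kShortSize) ? kShortSize : kLongSize;
}

int main() {
  assert(JmpEncodedSize(100, 110) == 2);  // -12 fits in a signed byte
  assert(JmpEncodedSize(0, 1000) == 5);   // too far for disp8
  return 0;
}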
EMIT(0x0F); EMIT(0x80 | cc); @@ -1657,7 +1621,7 @@ void Assembler::fistp_s(const Operand& adr) { void Assembler::fisttp_s(const Operand& adr) { - ASSERT(IsEnabled(SSE3)); + DCHECK(IsEnabled(SSE3)); EnsureSpace ensure_space(this); EMIT(0xDB); emit_operand(ecx, adr); @@ -1665,7 +1629,7 @@ void Assembler::fisttp_s(const Operand& adr) { void Assembler::fisttp_d(const Operand& adr) { - ASSERT(IsEnabled(SSE3)); + DCHECK(IsEnabled(SSE3)); EnsureSpace ensure_space(this); EMIT(0xDD); emit_operand(ecx, adr); @@ -1942,7 +1906,7 @@ void Assembler::sahf() { void Assembler::setcc(Condition cc, Register reg) { - ASSERT(reg.is_byte_register()); + DCHECK(reg.is_byte_register()); EnsureSpace ensure_space(this); EMIT(0x0F); EMIT(0x90 | cc); @@ -1951,7 +1915,6 @@ void Assembler::setcc(Condition cc, Register reg) { void Assembler::cvttss2si(Register dst, const Operand& src) { - ASSERT(IsEnabled(SSE2)); EnsureSpace ensure_space(this); EMIT(0xF3); EMIT(0x0F); @@ -1961,7 +1924,6 @@ void Assembler::cvttss2si(Register dst, const Operand& src) { void Assembler::cvttsd2si(Register dst, const Operand& src) { - ASSERT(IsEnabled(SSE2)); EnsureSpace ensure_space(this); EMIT(0xF2); EMIT(0x0F); @@ -1971,7 +1933,6 @@ void Assembler::cvttsd2si(Register dst, const Operand& src) { void Assembler::cvtsd2si(Register dst, XMMRegister src) { - ASSERT(IsEnabled(SSE2)); EnsureSpace ensure_space(this); EMIT(0xF2); EMIT(0x0F); @@ -1981,7 +1942,6 @@ void Assembler::cvtsd2si(Register dst, XMMRegister src) { void Assembler::cvtsi2sd(XMMRegister dst, const Operand& src) { - ASSERT(IsEnabled(SSE2)); EnsureSpace ensure_space(this); EMIT(0xF2); EMIT(0x0F); @@ -1991,7 +1951,6 @@ void Assembler::cvtsi2sd(XMMRegister dst, const Operand& src) { void Assembler::cvtss2sd(XMMRegister dst, XMMRegister src) { - ASSERT(IsEnabled(SSE2)); EnsureSpace ensure_space(this); EMIT(0xF3); EMIT(0x0F); @@ -2001,7 +1960,6 @@ void Assembler::cvtss2sd(XMMRegister dst, XMMRegister src) { void Assembler::cvtsd2ss(XMMRegister dst, XMMRegister src) { - ASSERT(IsEnabled(SSE2)); EnsureSpace ensure_space(this); EMIT(0xF2); EMIT(0x0F); @@ -2011,7 +1969,6 @@ void Assembler::cvtsd2ss(XMMRegister dst, XMMRegister src) { void Assembler::addsd(XMMRegister dst, XMMRegister src) { - ASSERT(IsEnabled(SSE2)); EnsureSpace ensure_space(this); EMIT(0xF2); EMIT(0x0F); @@ -2021,7 +1978,6 @@ void Assembler::addsd(XMMRegister dst, XMMRegister src) { void Assembler::addsd(XMMRegister dst, const Operand& src) { - ASSERT(IsEnabled(SSE2)); EnsureSpace ensure_space(this); EMIT(0xF2); EMIT(0x0F); @@ -2031,7 +1987,6 @@ void Assembler::addsd(XMMRegister dst, const Operand& src) { void Assembler::mulsd(XMMRegister dst, XMMRegister src) { - ASSERT(IsEnabled(SSE2)); EnsureSpace ensure_space(this); EMIT(0xF2); EMIT(0x0F); @@ -2041,7 +1996,6 @@ void Assembler::mulsd(XMMRegister dst, XMMRegister src) { void Assembler::mulsd(XMMRegister dst, const Operand& src) { - ASSERT(IsEnabled(SSE2)); EnsureSpace ensure_space(this); EMIT(0xF2); EMIT(0x0F); @@ -2051,7 +2005,6 @@ void Assembler::mulsd(XMMRegister dst, const Operand& src) { void Assembler::subsd(XMMRegister dst, XMMRegister src) { - ASSERT(IsEnabled(SSE2)); EnsureSpace ensure_space(this); EMIT(0xF2); EMIT(0x0F); @@ -2061,7 +2014,6 @@ void Assembler::subsd(XMMRegister dst, XMMRegister src) { void Assembler::divsd(XMMRegister dst, XMMRegister src) { - ASSERT(IsEnabled(SSE2)); EnsureSpace ensure_space(this); EMIT(0xF2); EMIT(0x0F); @@ -2071,7 +2023,6 @@ void Assembler::divsd(XMMRegister dst, XMMRegister src) { void Assembler::xorpd(XMMRegister dst, 
XMMRegister src) { - ASSERT(IsEnabled(SSE2)); EnsureSpace ensure_space(this); EMIT(0x66); EMIT(0x0F); @@ -2081,7 +2032,6 @@ void Assembler::xorpd(XMMRegister dst, XMMRegister src) { void Assembler::andps(XMMRegister dst, const Operand& src) { - ASSERT(IsEnabled(SSE2)); EnsureSpace ensure_space(this); EMIT(0x0F); EMIT(0x54); @@ -2090,7 +2040,6 @@ void Assembler::andps(XMMRegister dst, const Operand& src) { void Assembler::orps(XMMRegister dst, const Operand& src) { - ASSERT(IsEnabled(SSE2)); EnsureSpace ensure_space(this); EMIT(0x0F); EMIT(0x56); @@ -2099,7 +2048,6 @@ void Assembler::orps(XMMRegister dst, const Operand& src) { void Assembler::xorps(XMMRegister dst, const Operand& src) { - ASSERT(IsEnabled(SSE2)); EnsureSpace ensure_space(this); EMIT(0x0F); EMIT(0x57); @@ -2108,7 +2056,6 @@ void Assembler::xorps(XMMRegister dst, const Operand& src) { void Assembler::addps(XMMRegister dst, const Operand& src) { - ASSERT(IsEnabled(SSE2)); EnsureSpace ensure_space(this); EMIT(0x0F); EMIT(0x58); @@ -2117,7 +2064,6 @@ void Assembler::addps(XMMRegister dst, const Operand& src) { void Assembler::subps(XMMRegister dst, const Operand& src) { - ASSERT(IsEnabled(SSE2)); EnsureSpace ensure_space(this); EMIT(0x0F); EMIT(0x5C); @@ -2126,7 +2072,6 @@ void Assembler::subps(XMMRegister dst, const Operand& src) { void Assembler::mulps(XMMRegister dst, const Operand& src) { - ASSERT(IsEnabled(SSE2)); EnsureSpace ensure_space(this); EMIT(0x0F); EMIT(0x59); @@ -2135,7 +2080,6 @@ void Assembler::mulps(XMMRegister dst, const Operand& src) { void Assembler::divps(XMMRegister dst, const Operand& src) { - ASSERT(IsEnabled(SSE2)); EnsureSpace ensure_space(this); EMIT(0x0F); EMIT(0x5E); @@ -2144,7 +2088,15 @@ void Assembler::divps(XMMRegister dst, const Operand& src) { void Assembler::sqrtsd(XMMRegister dst, XMMRegister src) { - ASSERT(IsEnabled(SSE2)); + EnsureSpace ensure_space(this); + EMIT(0xF2); + EMIT(0x0F); + EMIT(0x51); + emit_sse_operand(dst, src); +} + + +void Assembler::sqrtsd(XMMRegister dst, const Operand& src) { EnsureSpace ensure_space(this); EMIT(0xF2); EMIT(0x0F); @@ -2154,7 +2106,6 @@ void Assembler::sqrtsd(XMMRegister dst, XMMRegister src) { void Assembler::andpd(XMMRegister dst, XMMRegister src) { - ASSERT(IsEnabled(SSE2)); EnsureSpace ensure_space(this); EMIT(0x66); EMIT(0x0F); @@ -2164,7 +2115,6 @@ void Assembler::andpd(XMMRegister dst, XMMRegister src) { void Assembler::orpd(XMMRegister dst, XMMRegister src) { - ASSERT(IsEnabled(SSE2)); EnsureSpace ensure_space(this); EMIT(0x66); EMIT(0x0F); @@ -2174,7 +2124,6 @@ void Assembler::orpd(XMMRegister dst, XMMRegister src) { void Assembler::ucomisd(XMMRegister dst, const Operand& src) { - ASSERT(IsEnabled(SSE2)); EnsureSpace ensure_space(this); EMIT(0x66); EMIT(0x0F); @@ -2184,7 +2133,7 @@ void Assembler::ucomisd(XMMRegister dst, const Operand& src) { void Assembler::roundsd(XMMRegister dst, XMMRegister src, RoundingMode mode) { - ASSERT(IsEnabled(SSE4_1)); + DCHECK(IsEnabled(SSE4_1)); EnsureSpace ensure_space(this); EMIT(0x66); EMIT(0x0F); @@ -2197,7 +2146,6 @@ void Assembler::roundsd(XMMRegister dst, XMMRegister src, RoundingMode mode) { void Assembler::movmskpd(Register dst, XMMRegister src) { - ASSERT(IsEnabled(SSE2)); EnsureSpace ensure_space(this); EMIT(0x66); EMIT(0x0F); @@ -2207,7 +2155,6 @@ void Assembler::movmskpd(Register dst, XMMRegister src) { void Assembler::movmskps(Register dst, XMMRegister src) { - ASSERT(IsEnabled(SSE2)); EnsureSpace ensure_space(this); EMIT(0x0F); EMIT(0x50); @@ -2216,7 +2163,6 @@ void Assembler::movmskps(Register 
dst, XMMRegister src) { void Assembler::pcmpeqd(XMMRegister dst, XMMRegister src) { - ASSERT(IsEnabled(SSE2)); EnsureSpace ensure_space(this); EMIT(0x66); EMIT(0x0F); @@ -2226,7 +2172,6 @@ void Assembler::pcmpeqd(XMMRegister dst, XMMRegister src) { void Assembler::cmpltsd(XMMRegister dst, XMMRegister src) { - ASSERT(IsEnabled(SSE2)); EnsureSpace ensure_space(this); EMIT(0xF2); EMIT(0x0F); @@ -2237,7 +2182,6 @@ void Assembler::cmpltsd(XMMRegister dst, XMMRegister src) { void Assembler::movaps(XMMRegister dst, XMMRegister src) { - ASSERT(IsEnabled(SSE2)); EnsureSpace ensure_space(this); EMIT(0x0F); EMIT(0x28); @@ -2246,8 +2190,7 @@ void Assembler::movaps(XMMRegister dst, XMMRegister src) { void Assembler::shufps(XMMRegister dst, XMMRegister src, byte imm8) { - ASSERT(IsEnabled(SSE2)); - ASSERT(is_uint8(imm8)); + DCHECK(is_uint8(imm8)); EnsureSpace ensure_space(this); EMIT(0x0F); EMIT(0xC6); @@ -2257,7 +2200,6 @@ void Assembler::shufps(XMMRegister dst, XMMRegister src, byte imm8) { void Assembler::movdqa(const Operand& dst, XMMRegister src) { - ASSERT(IsEnabled(SSE2)); EnsureSpace ensure_space(this); EMIT(0x66); EMIT(0x0F); @@ -2267,7 +2209,6 @@ void Assembler::movdqa(const Operand& dst, XMMRegister src) { void Assembler::movdqa(XMMRegister dst, const Operand& src) { - ASSERT(IsEnabled(SSE2)); EnsureSpace ensure_space(this); EMIT(0x66); EMIT(0x0F); @@ -2277,7 +2218,6 @@ void Assembler::movdqa(XMMRegister dst, const Operand& src) { void Assembler::movdqu(const Operand& dst, XMMRegister src ) { - ASSERT(IsEnabled(SSE2)); EnsureSpace ensure_space(this); EMIT(0xF3); EMIT(0x0F); @@ -2287,7 +2227,6 @@ void Assembler::movdqu(const Operand& dst, XMMRegister src ) { void Assembler::movdqu(XMMRegister dst, const Operand& src) { - ASSERT(IsEnabled(SSE2)); EnsureSpace ensure_space(this); EMIT(0xF3); EMIT(0x0F); @@ -2297,7 +2236,7 @@ void Assembler::movdqu(XMMRegister dst, const Operand& src) { void Assembler::movntdqa(XMMRegister dst, const Operand& src) { - ASSERT(IsEnabled(SSE4_1)); + DCHECK(IsEnabled(SSE4_1)); EnsureSpace ensure_space(this); EMIT(0x66); EMIT(0x0F); @@ -2308,7 +2247,6 @@ void Assembler::movntdqa(XMMRegister dst, const Operand& src) { void Assembler::movntdq(const Operand& dst, XMMRegister src) { - ASSERT(IsEnabled(SSE2)); EnsureSpace ensure_space(this); EMIT(0x66); EMIT(0x0F); @@ -2318,7 +2256,7 @@ void Assembler::movntdq(const Operand& dst, XMMRegister src) { void Assembler::prefetch(const Operand& src, int level) { - ASSERT(is_uint2(level)); + DCHECK(is_uint2(level)); EnsureSpace ensure_space(this); EMIT(0x0F); EMIT(0x18); @@ -2329,7 +2267,6 @@ void Assembler::prefetch(const Operand& src, int level) { void Assembler::movsd(const Operand& dst, XMMRegister src ) { - ASSERT(IsEnabled(SSE2)); EnsureSpace ensure_space(this); EMIT(0xF2); // double EMIT(0x0F); @@ -2339,7 +2276,6 @@ void Assembler::movsd(const Operand& dst, XMMRegister src ) { void Assembler::movsd(XMMRegister dst, const Operand& src) { - ASSERT(IsEnabled(SSE2)); EnsureSpace ensure_space(this); EMIT(0xF2); // double EMIT(0x0F); @@ -2349,7 +2285,6 @@ void Assembler::movsd(XMMRegister dst, const Operand& src) { void Assembler::movss(const Operand& dst, XMMRegister src ) { - ASSERT(IsEnabled(SSE2)); EnsureSpace ensure_space(this); EMIT(0xF3); // float EMIT(0x0F); @@ -2359,7 +2294,6 @@ void Assembler::movss(const Operand& dst, XMMRegister src ) { void Assembler::movss(XMMRegister dst, const Operand& src) { - ASSERT(IsEnabled(SSE2)); EnsureSpace ensure_space(this); EMIT(0xF3); // float EMIT(0x0F); @@ -2369,7 +2303,6 @@ void 
Assembler::movss(XMMRegister dst, const Operand& src) { void Assembler::movd(XMMRegister dst, const Operand& src) { - ASSERT(IsEnabled(SSE2)); EnsureSpace ensure_space(this); EMIT(0x66); EMIT(0x0F); @@ -2379,7 +2312,6 @@ void Assembler::movd(XMMRegister dst, const Operand& src) { void Assembler::movd(const Operand& dst, XMMRegister src) { - ASSERT(IsEnabled(SSE2)); EnsureSpace ensure_space(this); EMIT(0x66); EMIT(0x0F); @@ -2389,8 +2321,8 @@ void Assembler::movd(const Operand& dst, XMMRegister src) { void Assembler::extractps(Register dst, XMMRegister src, byte imm8) { - ASSERT(IsEnabled(SSE4_1)); - ASSERT(is_uint8(imm8)); + DCHECK(IsEnabled(SSE4_1)); + DCHECK(is_uint8(imm8)); EnsureSpace ensure_space(this); EMIT(0x66); EMIT(0x0F); @@ -2402,7 +2334,6 @@ void Assembler::extractps(Register dst, XMMRegister src, byte imm8) { void Assembler::pand(XMMRegister dst, XMMRegister src) { - ASSERT(IsEnabled(SSE2)); EnsureSpace ensure_space(this); EMIT(0x66); EMIT(0x0F); @@ -2412,7 +2343,6 @@ void Assembler::pand(XMMRegister dst, XMMRegister src) { void Assembler::pxor(XMMRegister dst, XMMRegister src) { - ASSERT(IsEnabled(SSE2)); EnsureSpace ensure_space(this); EMIT(0x66); EMIT(0x0F); @@ -2422,7 +2352,6 @@ void Assembler::pxor(XMMRegister dst, XMMRegister src) { void Assembler::por(XMMRegister dst, XMMRegister src) { - ASSERT(IsEnabled(SSE2)); EnsureSpace ensure_space(this); EMIT(0x66); EMIT(0x0F); @@ -2432,7 +2361,7 @@ void Assembler::por(XMMRegister dst, XMMRegister src) { void Assembler::ptest(XMMRegister dst, XMMRegister src) { - ASSERT(IsEnabled(SSE4_1)); + DCHECK(IsEnabled(SSE4_1)); EnsureSpace ensure_space(this); EMIT(0x66); EMIT(0x0F); @@ -2443,7 +2372,6 @@ void Assembler::ptest(XMMRegister dst, XMMRegister src) { void Assembler::psllq(XMMRegister reg, int8_t shift) { - ASSERT(IsEnabled(SSE2)); EnsureSpace ensure_space(this); EMIT(0x66); EMIT(0x0F); @@ -2454,7 +2382,6 @@ void Assembler::psllq(XMMRegister reg, int8_t shift) { void Assembler::psllq(XMMRegister dst, XMMRegister src) { - ASSERT(IsEnabled(SSE2)); EnsureSpace ensure_space(this); EMIT(0x66); EMIT(0x0F); @@ -2464,7 +2391,6 @@ void Assembler::psllq(XMMRegister dst, XMMRegister src) { void Assembler::psrlq(XMMRegister reg, int8_t shift) { - ASSERT(IsEnabled(SSE2)); EnsureSpace ensure_space(this); EMIT(0x66); EMIT(0x0F); @@ -2475,7 +2401,6 @@ void Assembler::psrlq(XMMRegister reg, int8_t shift) { void Assembler::psrlq(XMMRegister dst, XMMRegister src) { - ASSERT(IsEnabled(SSE2)); EnsureSpace ensure_space(this); EMIT(0x66); EMIT(0x0F); @@ -2485,7 +2410,6 @@ void Assembler::psrlq(XMMRegister dst, XMMRegister src) { void Assembler::pshufd(XMMRegister dst, XMMRegister src, uint8_t shuffle) { - ASSERT(IsEnabled(SSE2)); EnsureSpace ensure_space(this); EMIT(0x66); EMIT(0x0F); @@ -2496,7 +2420,7 @@ void Assembler::pshufd(XMMRegister dst, XMMRegister src, uint8_t shuffle) { void Assembler::pextrd(const Operand& dst, XMMRegister src, int8_t offset) { - ASSERT(IsEnabled(SSE4_1)); + DCHECK(IsEnabled(SSE4_1)); EnsureSpace ensure_space(this); EMIT(0x66); EMIT(0x0F); @@ -2508,7 +2432,7 @@ void Assembler::pextrd(const Operand& dst, XMMRegister src, int8_t offset) { void Assembler::pinsrd(XMMRegister dst, const Operand& src, int8_t offset) { - ASSERT(IsEnabled(SSE4_1)); + DCHECK(IsEnabled(SSE4_1)); EnsureSpace ensure_space(this); EMIT(0x66); EMIT(0x0F); @@ -2568,16 +2492,13 @@ void Assembler::RecordComment(const char* msg, bool force) { void Assembler::GrowBuffer() { - ASSERT(buffer_overflow()); + DCHECK(buffer_overflow()); if (!own_buffer_) 
FATAL("external code buffer is too small"); // Compute new buffer size. CodeDesc desc; // the new buffer - if (buffer_size_ < 4*KB) { - desc.buffer_size = 4*KB; - } else { - desc.buffer_size = 2*buffer_size_; - } + desc.buffer_size = 2 * buffer_size_; + // Some internal data structures overflow for very large buffers, // they must ensure that kMaximalBufferSize is not too large. if ((desc.buffer_size > kMaximalBufferSize) || @@ -2599,17 +2520,12 @@ void Assembler::GrowBuffer() { // Copy the data. int pc_delta = desc.buffer - buffer_; int rc_delta = (desc.buffer + desc.buffer_size) - (buffer_ + buffer_size_); - OS::MemMove(desc.buffer, buffer_, desc.instr_size); - OS::MemMove(rc_delta + reloc_info_writer.pos(), - reloc_info_writer.pos(), desc.reloc_size); + MemMove(desc.buffer, buffer_, desc.instr_size); + MemMove(rc_delta + reloc_info_writer.pos(), reloc_info_writer.pos(), + desc.reloc_size); // Switch buffers. - if (isolate()->assembler_spare_buffer() == NULL && - buffer_size_ == kMinimalBufferSize) { - isolate()->set_assembler_spare_buffer(buffer_); - } else { - DeleteArray(buffer_); - } + DeleteArray(buffer_); buffer_ = desc.buffer; buffer_size_ = desc.buffer_size; pc_ += pc_delta; @@ -2627,14 +2543,14 @@ void Assembler::GrowBuffer() { } } - ASSERT(!buffer_overflow()); + DCHECK(!buffer_overflow()); } void Assembler::emit_arith_b(int op1, int op2, Register dst, int imm8) { - ASSERT(is_uint8(op1) && is_uint8(op2)); // wrong opcode - ASSERT(is_uint8(imm8)); - ASSERT((op1 & 0x01) == 0); // should be 8bit operation + DCHECK(is_uint8(op1) && is_uint8(op2)); // wrong opcode + DCHECK(is_uint8(imm8)); + DCHECK((op1 & 0x01) == 0); // should be 8bit operation EMIT(op1); EMIT(op2 | dst.code()); EMIT(imm8); @@ -2642,7 +2558,7 @@ void Assembler::emit_arith_b(int op1, int op2, Register dst, int imm8) { void Assembler::emit_arith(int sel, Operand dst, const Immediate& x) { - ASSERT((0 <= sel) && (sel <= 7)); + DCHECK((0 <= sel) && (sel <= 7)); Register ireg = { sel }; if (x.is_int8()) { EMIT(0x83); // using a sign-extended 8-bit immediate. @@ -2661,7 +2577,7 @@ void Assembler::emit_arith(int sel, Operand dst, const Immediate& x) { void Assembler::emit_operand(Register reg, const Operand& adr) { const unsigned length = adr.len_; - ASSERT(length > 0); + DCHECK(length > 0); // Emit updated ModRM byte containing the given register. pc_[0] = (adr.buf_[0] & ~0x38) | (reg.code() << 3); @@ -2680,8 +2596,8 @@ void Assembler::emit_operand(Register reg, const Operand& adr) { void Assembler::emit_farith(int b1, int b2, int i) { - ASSERT(is_uint8(b1) && is_uint8(b2)); // wrong opcode - ASSERT(0 <= i && i < 8); // illegal stack offset + DCHECK(is_uint8(b1) && is_uint8(b2)); // wrong opcode + DCHECK(0 <= i && i < 8); // illegal stack offset EMIT(b1); EMIT(b2 + i); } @@ -2700,12 +2616,11 @@ void Assembler::dd(uint32_t data) { void Assembler::RecordRelocInfo(RelocInfo::Mode rmode, intptr_t data) { - ASSERT(!RelocInfo::IsNone(rmode)); + DCHECK(!RelocInfo::IsNone(rmode)); // Don't record external references unless the heap will be serialized. 
- if (rmode == RelocInfo::EXTERNAL_REFERENCE) { - if (!Serializer::enabled(isolate()) && !emit_debug_code()) { - return; - } + if (rmode == RelocInfo::EXTERNAL_REFERENCE && + !serializer_enabled() && !emit_debug_code()) { + return; } RelocInfo rinfo(pc_, rmode, data, NULL); reloc_info_writer.Write(&rinfo); @@ -2714,14 +2629,14 @@ void Assembler::RecordRelocInfo(RelocInfo::Mode rmode, intptr_t data) { Handle<ConstantPoolArray> Assembler::NewConstantPool(Isolate* isolate) { // No out-of-line constant pool support. - ASSERT(!FLAG_enable_ool_constant_pool); + DCHECK(!FLAG_enable_ool_constant_pool); return isolate->factory()->empty_constant_pool_array(); } void Assembler::PopulateConstantPool(ConstantPoolArray* constant_pool) { // No out-of-line constant pool support. - ASSERT(!FLAG_enable_ool_constant_pool); + DCHECK(!FLAG_enable_ool_constant_pool); return; } diff --git a/deps/v8/src/ia32/assembler-ia32.h b/deps/v8/src/ia32/assembler-ia32.h index 3033db936b3..5febffd8c38 100644 --- a/deps/v8/src/ia32/assembler-ia32.h +++ b/deps/v8/src/ia32/assembler-ia32.h @@ -37,8 +37,8 @@ #ifndef V8_IA32_ASSEMBLER_IA32_H_ #define V8_IA32_ASSEMBLER_IA32_H_ -#include "isolate.h" -#include "serialize.h" +#include "src/isolate.h" +#include "src/serialize.h" namespace v8 { namespace internal { @@ -78,8 +78,8 @@ struct Register { static inline Register FromAllocationIndex(int index); static Register from_code(int code) { - ASSERT(code >= 0); - ASSERT(code < kNumRegisters); + DCHECK(code >= 0); + DCHECK(code < kNumRegisters); Register r = { code }; return r; } @@ -88,11 +88,11 @@ struct Register { // eax, ebx, ecx and edx are byte registers, the rest are not. bool is_byte_register() const { return code_ <= 3; } int code() const { - ASSERT(is_valid()); + DCHECK(is_valid()); return code_; } int bit() const { - ASSERT(is_valid()); + DCHECK(is_valid()); return 1 << code_; } @@ -122,7 +122,7 @@ const Register no_reg = { kRegister_no_reg_Code }; inline const char* Register::AllocationIndexToString(int index) { - ASSERT(index >= 0 && index < kMaxNumAllocatableRegisters); + DCHECK(index >= 0 && index < kMaxNumAllocatableRegisters); // This is the mapping of allocation indices to registers. const char* const kNames[] = { "eax", "ecx", "edx", "ebx", "esi", "edi" }; return kNames[index]; @@ -130,82 +130,52 @@ inline const char* Register::AllocationIndexToString(int index) { inline int Register::ToAllocationIndex(Register reg) { - ASSERT(reg.is_valid() && !reg.is(esp) && !reg.is(ebp)); + DCHECK(reg.is_valid() && !reg.is(esp) && !reg.is(ebp)); return (reg.code() >= 6) ? reg.code() - 2 : reg.code(); } inline Register Register::FromAllocationIndex(int index) { - ASSERT(index >= 0 && index < kMaxNumAllocatableRegisters); + DCHECK(index >= 0 && index < kMaxNumAllocatableRegisters); return (index >= 4) ? 
from_code(index + 2) : from_code(index); } -struct IntelDoubleRegister { - static const int kMaxNumRegisters = 8; +struct XMMRegister { static const int kMaxNumAllocatableRegisters = 7; - static int NumAllocatableRegisters(); - static int NumRegisters(); - static const char* AllocationIndexToString(int index); + static const int kMaxNumRegisters = 8; + static int NumAllocatableRegisters() { + return kMaxNumAllocatableRegisters; + } - static int ToAllocationIndex(IntelDoubleRegister reg) { - ASSERT(reg.code() != 0); + static int ToAllocationIndex(XMMRegister reg) { + DCHECK(reg.code() != 0); return reg.code() - 1; } - static IntelDoubleRegister FromAllocationIndex(int index) { - ASSERT(index >= 0 && index < NumAllocatableRegisters()); + static XMMRegister FromAllocationIndex(int index) { + DCHECK(index >= 0 && index < kMaxNumAllocatableRegisters); return from_code(index + 1); } - static IntelDoubleRegister from_code(int code) { - IntelDoubleRegister result = { code }; + static XMMRegister from_code(int code) { + XMMRegister result = { code }; return result; } bool is_valid() const { - return 0 <= code_ && code_ < NumRegisters(); + return 0 <= code_ && code_ < kMaxNumRegisters; } + int code() const { - ASSERT(is_valid()); + DCHECK(is_valid()); return code_; } - int code_; -}; - - -const IntelDoubleRegister double_register_0 = { 0 }; -const IntelDoubleRegister double_register_1 = { 1 }; -const IntelDoubleRegister double_register_2 = { 2 }; -const IntelDoubleRegister double_register_3 = { 3 }; -const IntelDoubleRegister double_register_4 = { 4 }; -const IntelDoubleRegister double_register_5 = { 5 }; -const IntelDoubleRegister double_register_6 = { 6 }; -const IntelDoubleRegister double_register_7 = { 7 }; -const IntelDoubleRegister no_double_reg = { -1 }; - - -struct XMMRegister : IntelDoubleRegister { - static const int kNumAllocatableRegisters = 7; - static const int kNumRegisters = 8; - - static XMMRegister from_code(int code) { - STATIC_ASSERT(sizeof(XMMRegister) == sizeof(IntelDoubleRegister)); - XMMRegister result; - result.code_ = code; - return result; - } - bool is(XMMRegister reg) const { return code_ == reg.code_; } - static XMMRegister FromAllocationIndex(int index) { - ASSERT(index >= 0 && index < NumAllocatableRegisters()); - return from_code(index + 1); - } - static const char* AllocationIndexToString(int index) { - ASSERT(index >= 0 && index < kNumAllocatableRegisters); + DCHECK(index >= 0 && index < kMaxNumAllocatableRegisters); const char* const names[] = { "xmm1", "xmm2", @@ -217,57 +187,23 @@ struct XMMRegister : IntelDoubleRegister { }; return names[index]; } -}; - - -#define xmm0 (static_cast<const XMMRegister&>(double_register_0)) -#define xmm1 (static_cast<const XMMRegister&>(double_register_1)) -#define xmm2 (static_cast<const XMMRegister&>(double_register_2)) -#define xmm3 (static_cast<const XMMRegister&>(double_register_3)) -#define xmm4 (static_cast<const XMMRegister&>(double_register_4)) -#define xmm5 (static_cast<const XMMRegister&>(double_register_5)) -#define xmm6 (static_cast<const XMMRegister&>(double_register_6)) -#define xmm7 (static_cast<const XMMRegister&>(double_register_7)) -#define no_xmm_reg (static_cast<const XMMRegister&>(no_double_reg)) - - -struct X87Register : IntelDoubleRegister { - static const int kNumAllocatableRegisters = 5; - static const int kNumRegisters = 5; - - bool is(X87Register reg) const { - return code_ == reg.code_; - } - - static const char* AllocationIndexToString(int index) { - ASSERT(index >= 0 && index < 
kNumAllocatableRegisters); - const char* const names[] = { - "stX_0", "stX_1", "stX_2", "stX_3", "stX_4" - }; - return names[index]; - } - - static X87Register FromAllocationIndex(int index) { - STATIC_ASSERT(sizeof(X87Register) == sizeof(IntelDoubleRegister)); - ASSERT(index >= 0 && index < NumAllocatableRegisters()); - X87Register result; - result.code_ = index; - return result; - } - static int ToAllocationIndex(X87Register reg) { - return reg.code_; - } + int code_; }; -#define stX_0 static_cast<const X87Register&>(double_register_0) -#define stX_1 static_cast<const X87Register&>(double_register_1) -#define stX_2 static_cast<const X87Register&>(double_register_2) -#define stX_3 static_cast<const X87Register&>(double_register_3) -#define stX_4 static_cast<const X87Register&>(double_register_4) + +typedef XMMRegister DoubleRegister; -typedef IntelDoubleRegister DoubleRegister; +const XMMRegister xmm0 = { 0 }; +const XMMRegister xmm1 = { 1 }; +const XMMRegister xmm2 = { 2 }; +const XMMRegister xmm3 = { 3 }; +const XMMRegister xmm4 = { 4 }; +const XMMRegister xmm5 = { 5 }; +const XMMRegister xmm6 = { 6 }; +const XMMRegister xmm7 = { 7 }; +const XMMRegister no_xmm_reg = { -1 }; enum Condition { @@ -310,8 +246,8 @@ inline Condition NegateCondition(Condition cc) { } -// Corresponds to transposing the operands of a comparison. -inline Condition ReverseCondition(Condition cc) { +// Commute a condition such that {a cond b == b cond' a}. +inline Condition CommuteCondition(Condition cc) { switch (cc) { case below: return above; @@ -331,7 +267,7 @@ inline Condition ReverseCondition(Condition cc) { return greater_equal; default: return cc; - }; + } } @@ -364,6 +300,7 @@ class Immediate BASE_EMBEDDED { int x_; RelocInfo::Mode rmode_; + friend class Operand; friend class Assembler; friend class MacroAssembler; }; @@ -386,12 +323,17 @@ enum ScaleFactor { class Operand BASE_EMBEDDED { public: + // reg + INLINE(explicit Operand(Register reg)); + // XMM reg INLINE(explicit Operand(XMMRegister xmm_reg)); // [disp/r] INLINE(explicit Operand(int32_t disp, RelocInfo::Mode rmode)); - // disp only must always be relocated + + // [disp/r] + INLINE(explicit Operand(Immediate imm)); // [base + disp/r] explicit Operand(Register base, int32_t disp, @@ -428,6 +370,10 @@ class Operand BASE_EMBEDDED { RelocInfo::CELL); } + static Operand ForRegisterPlusImmediate(Register base, Immediate imm) { + return Operand(base, imm.x_, imm.rmode_); + } + // Returns true if this Operand is a wrapper for the specified register. bool is_reg(Register reg) const; @@ -439,9 +385,6 @@ class Operand BASE_EMBEDDED { Register reg() const; private: - // reg - INLINE(explicit Operand(Register reg)); - // Set the ModRM byte without an encoded 'reg' register. The // register is encoded later as part of the emit_operand operation. inline void set_modrm(int mod, Register rm); @@ -458,7 +401,6 @@ class Operand BASE_EMBEDDED { friend class Assembler; friend class MacroAssembler; - friend class LCodeGen; }; @@ -516,75 +458,6 @@ class Displacement BASE_EMBEDDED { }; - -// CpuFeatures keeps track of which features are supported by the target CPU. -// Supported features must be enabled by a CpuFeatureScope before use. -// Example: -// if (assembler->IsSupported(SSE2)) { -// CpuFeatureScope fscope(assembler, SSE2); -// // Generate SSE2 floating point code. -// } else { -// // Generate standard x87 floating point code. -// } -class CpuFeatures : public AllStatic { - public: - // Detect features of the target CPU. 
Set safe defaults if the serializer - // is enabled (snapshots must be portable). - static void Probe(bool serializer_enabled); - - // Check whether a feature is supported by the target CPU. - static bool IsSupported(CpuFeature f) { - ASSERT(initialized_); - if (Check(f, cross_compile_)) return true; - if (f == SSE2 && !FLAG_enable_sse2) return false; - if (f == SSE3 && !FLAG_enable_sse3) return false; - if (f == SSE4_1 && !FLAG_enable_sse4_1) return false; - if (f == CMOV && !FLAG_enable_cmov) return false; - return Check(f, supported_); - } - - static bool IsSafeForSnapshot(Isolate* isolate, CpuFeature f) { - return Check(f, cross_compile_) || - (IsSupported(f) && - !(Serializer::enabled(isolate) && - Check(f, found_by_runtime_probing_only_))); - } - - static bool VerifyCrossCompiling() { - return cross_compile_ == 0; - } - - static bool VerifyCrossCompiling(CpuFeature f) { - uint64_t mask = flag2set(f); - return cross_compile_ == 0 || - (cross_compile_ & mask) == mask; - } - - static bool SupportsCrankshaft() { return IsSupported(SSE2); } - - private: - static bool Check(CpuFeature f, uint64_t set) { - return (set & flag2set(f)) != 0; - } - - static uint64_t flag2set(CpuFeature f) { - return static_cast<uint64_t>(1) << f; - } - -#ifdef DEBUG - static bool initialized_; -#endif - static uint64_t supported_; - static uint64_t found_by_runtime_probing_only_; - - static uint64_t cross_compile_; - - friend class ExternalReference; - friend class PlatformFeatureScope; - DISALLOW_COPY_AND_ASSIGN(CpuFeatures); -}; - - class Assembler : public AssemblerBase { private: // We check before assembling an instruction that there is sufficient @@ -626,14 +499,18 @@ class Assembler : public AssemblerBase { ConstantPoolArray* constant_pool); inline static void set_target_address_at(Address pc, ConstantPoolArray* constant_pool, - Address target); + Address target, + ICacheFlushMode icache_flush_mode = + FLUSH_ICACHE_IF_NEEDED); static inline Address target_address_at(Address pc, Code* code) { ConstantPoolArray* constant_pool = code ? code->constant_pool() : NULL; return target_address_at(pc, constant_pool); } static inline void set_target_address_at(Address pc, Code* code, - Address target) { + Address target, + ICacheFlushMode icache_flush_mode = + FLUSH_ICACHE_IF_NEEDED) { ConstantPoolArray* constant_pool = code ? code->constant_pool() : NULL; set_target_address_at(pc, constant_pool, target); } @@ -642,6 +519,9 @@ class Assembler : public AssemblerBase { // of that call in the instruction stream. inline static Address target_address_from_return_address(Address pc); + // Return the code target address of the patch debug break slot + inline static Address break_address_from_return_address(Address pc); + // This sets the branch destination (which is in the instruction on x86). // This is for calls and branches within generated code. inline static void deserialization_set_special_target_at( @@ -776,8 +656,9 @@ class Assembler : public AssemblerBase { void rep_stos(); void stos(); - // Exchange two registers + // Exchange void xchg(Register dst, Register src); + void xchg(Register dst, const Operand& src); // Arithmetics void adc(Register dst, int32_t imm32); @@ -819,13 +700,17 @@ class Assembler : public AssemblerBase { void cdq(); - void idiv(Register src); + void idiv(Register src) { idiv(Operand(src)); } + void idiv(const Operand& src); + void div(Register src) { div(Operand(src)); } + void div(const Operand& src); // Signed multiply instructions. void imul(Register src); // edx:eax = eax * src. 
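
The assembler-ia32.h hunks above fold IntelDoubleRegister and X87Register into a single XMMRegister type and keep two small allocation-index mappings: general-purpose codes skip esp (4) and ebp (5), and XMM indices skip xmm0, which stays reserved as a scratch register. A self-contained restatement of both mappings, for illustration only:

// eax=0, ecx=1, edx=2, ebx=3 map to themselves; esi (6) -> 4, edi (7) -> 5.
int GpToAllocationIndex(int code) {  // precondition: code is not 4 (esp) or 5 (ebp)
  return (code >= 6) ? code - 2 : code;
}

// xmm0 is excluded, so allocation index i denotes xmm(i + 1); 7 of 8 registers
// remain allocatable (kMaxNumAllocatableRegisters above).
int XmmFromAllocationIndex(int index) {  // precondition: 0 <= index < 7
  return index + 1;
}
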
void imul(Register dst, Register src) { imul(dst, Operand(src)); } void imul(Register dst, const Operand& src); // dst = dst * src. void imul(Register dst, Register src, int32_t imm32); // dst = src * imm32. + void imul(Register dst, const Operand& src, int32_t imm32); void inc(Register dst); void inc(const Operand& dst); @@ -836,8 +721,10 @@ class Assembler : public AssemblerBase { void mul(Register src); // edx:eax = eax * reg. void neg(Register dst); + void neg(const Operand& dst); void not_(Register dst); + void not_(const Operand& dst); void or_(Register dst, int32_t imm32); void or_(Register dst, Register src) { or_(dst, Operand(src)); } @@ -851,22 +738,28 @@ class Assembler : public AssemblerBase { void ror(Register dst, uint8_t imm8); void ror_cl(Register dst); - void sar(Register dst, uint8_t imm8); - void sar_cl(Register dst); + void sar(Register dst, uint8_t imm8) { sar(Operand(dst), imm8); } + void sar(const Operand& dst, uint8_t imm8); + void sar_cl(Register dst) { sar_cl(Operand(dst)); } + void sar_cl(const Operand& dst); void sbb(Register dst, const Operand& src); void shld(Register dst, Register src) { shld(dst, Operand(src)); } void shld(Register dst, const Operand& src); - void shl(Register dst, uint8_t imm8); - void shl_cl(Register dst); + void shl(Register dst, uint8_t imm8) { shl(Operand(dst), imm8); } + void shl(const Operand& dst, uint8_t imm8); + void shl_cl(Register dst) { shl_cl(Operand(dst)); } + void shl_cl(const Operand& dst); void shrd(Register dst, Register src) { shrd(dst, Operand(src)); } void shrd(Register dst, const Operand& src); - void shr(Register dst, uint8_t imm8); - void shr_cl(Register dst); + void shr(Register dst, uint8_t imm8) { shr(Operand(dst), imm8); } + void shr(const Operand& dst, uint8_t imm8); + void shr_cl(Register dst) { shr_cl(Operand(dst)); } + void shr_cl(const Operand& dst); void sub(Register dst, const Immediate& imm) { sub(Operand(dst), imm); } void sub(const Operand& dst, const Immediate& x); @@ -1050,6 +943,9 @@ class Assembler : public AssemblerBase { cvttss2si(dst, Operand(src)); } void cvttsd2si(Register dst, const Operand& src); + void cvttsd2si(Register dst, XMMRegister src) { + cvttsd2si(dst, Operand(src)); + } void cvtsd2si(Register dst, XMMRegister src); void cvtsi2sd(XMMRegister dst, Register src) { cvtsi2sd(dst, Operand(src)); } @@ -1065,6 +961,7 @@ class Assembler : public AssemblerBase { void divsd(XMMRegister dst, XMMRegister src); void xorpd(XMMRegister dst, XMMRegister src); void sqrtsd(XMMRegister dst, XMMRegister src); + void sqrtsd(XMMRegister dst, const Operand& src); void andpd(XMMRegister dst, XMMRegister src); void orpd(XMMRegister dst, XMMRegister src); @@ -1281,7 +1178,7 @@ class EnsureSpace BASE_EMBEDDED { #ifdef DEBUG ~EnsureSpace() { int bytes_generated = space_before_ - assembler_->available_space(); - ASSERT(bytes_generated < assembler_->kGap); + DCHECK(bytes_generated < assembler_->kGap); } #endif diff --git a/deps/v8/src/ia32/builtins-ia32.cc b/deps/v8/src/ia32/builtins-ia32.cc index b3af2b2fede..cca65f47163 100644 --- a/deps/v8/src/ia32/builtins-ia32.cc +++ b/deps/v8/src/ia32/builtins-ia32.cc @@ -2,14 +2,14 @@ // Use of this source code is governed by a BSD-style license that can be // found in the LICENSE file. 
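
One pattern recurs throughout the instruction declarations above (idiv, div, neg, not_, sar, shl, shr, cvttsd2si): the Register overload becomes a one-line shim that wraps the register in an Operand, so only the Operand form needs a real encoder. Reduced sketch with mock types:

#include <cstdint>

struct Register { int code; };
struct Operand { explicit Operand(Register r) { /* register-direct ModRM */ } };

struct Assembler {
  void shr(const Operand& dst, uint8_t imm8);  // the single real encoder
  void shr(Register dst, uint8_t imm8) { shr(Operand(dst), imm8); }  // shim
};
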
-#include "v8.h" +#include "src/v8.h" #if V8_TARGET_ARCH_IA32 -#include "codegen.h" -#include "deoptimizer.h" -#include "full-codegen.h" -#include "stub-cache.h" +#include "src/codegen.h" +#include "src/deoptimizer.h" +#include "src/full-codegen.h" +#include "src/stub-cache.h" namespace v8 { namespace internal { @@ -42,7 +42,7 @@ void Builtins::Generate_Adaptor(MacroAssembler* masm, __ push(edi); __ push(scratch); // Restore return address. } else { - ASSERT(extra_args == NO_EXTRA_ARGUMENTS); + DCHECK(extra_args == NO_EXTRA_ARGUMENTS); } // JumpToExternalReference expects eax to contain the number of arguments @@ -92,7 +92,7 @@ void Builtins::Generate_InOptimizationQueue(MacroAssembler* masm) { __ cmp(esp, Operand::StaticVariable(stack_limit)); __ j(above_equal, &ok, Label::kNear); - CallRuntimePassFunction(masm, Runtime::kHiddenTryInstallOptimizedCode); + CallRuntimePassFunction(masm, Runtime::kTryInstallOptimizedCode); GenerateTailCallToReturnedCode(masm); __ bind(&ok); @@ -102,7 +102,6 @@ void Builtins::Generate_InOptimizationQueue(MacroAssembler* masm) { static void Generate_JSConstructStubHelper(MacroAssembler* masm, bool is_api_function, - bool count_constructions, bool create_memento) { // ----------- S t a t e ------------- // -- eax: number of arguments @@ -110,14 +109,8 @@ static void Generate_JSConstructStubHelper(MacroAssembler* masm, // -- ebx: allocation site or undefined // ----------------------------------- - // Should never count constructions for api objects. - ASSERT(!is_api_function || !count_constructions); - // Should never create mementos for api functions. - ASSERT(!is_api_function || !create_memento); - - // Should never create mementos before slack tracking is finished. - ASSERT(!count_constructions || !create_memento); + DCHECK(!is_api_function || !create_memento); // Enter a construct frame. { @@ -164,23 +157,32 @@ static void Generate_JSConstructStubHelper(MacroAssembler* masm, __ CmpInstanceType(eax, JS_FUNCTION_TYPE); __ j(equal, &rt_call); - if (count_constructions) { + if (!is_api_function) { Label allocate; + // The code below relies on these assumptions. + STATIC_ASSERT(JSFunction::kNoSlackTracking == 0); + STATIC_ASSERT(Map::ConstructionCount::kShift + + Map::ConstructionCount::kSize == 32); + // Check if slack tracking is enabled. + __ mov(esi, FieldOperand(eax, Map::kBitField3Offset)); + __ shr(esi, Map::ConstructionCount::kShift); + __ j(zero, &allocate); // JSFunction::kNoSlackTracking // Decrease generous allocation count. - __ mov(ecx, FieldOperand(edi, JSFunction::kSharedFunctionInfoOffset)); - __ dec_b(FieldOperand(ecx, - SharedFunctionInfo::kConstructionCountOffset)); - __ j(not_zero, &allocate); + __ sub(FieldOperand(eax, Map::kBitField3Offset), + Immediate(1 << Map::ConstructionCount::kShift)); + + __ cmp(esi, JSFunction::kFinishSlackTracking); + __ j(not_equal, &allocate); __ push(eax); __ push(edi); __ push(edi); // constructor - // The call will replace the stub, so the countdown is only done once. 
- __ CallRuntime(Runtime::kHiddenFinalizeInstanceSize, 1); + __ CallRuntime(Runtime::kFinalizeInstanceSize, 1); __ pop(edi); __ pop(eax); + __ xor_(esi, esi); // JSFunction::kNoSlackTracking __ bind(&allocate); } @@ -210,9 +212,17 @@ static void Generate_JSConstructStubHelper(MacroAssembler* masm, // eax: initial map // ebx: JSObject // edi: start of next object (including memento if create_memento) - __ lea(ecx, Operand(ebx, JSObject::kHeaderSize)); + // esi: slack tracking counter (non-API function case) __ mov(edx, factory->undefined_value()); - if (count_constructions) { + __ lea(ecx, Operand(ebx, JSObject::kHeaderSize)); + if (!is_api_function) { + Label no_inobject_slack_tracking; + + // Check if slack tracking is enabled. + __ cmp(esi, JSFunction::kNoSlackTracking); + __ j(equal, &no_inobject_slack_tracking); + + // Allocate object with a slack. __ movzx_b(esi, FieldOperand(eax, Map::kPreAllocatedPropertyFieldsOffset)); __ lea(esi, @@ -225,16 +235,19 @@ static void Generate_JSConstructStubHelper(MacroAssembler* masm, } __ InitializeFieldsWithFiller(ecx, esi, edx); __ mov(edx, factory->one_pointer_filler_map()); - __ InitializeFieldsWithFiller(ecx, edi, edx); - } else if (create_memento) { + // Fill the remaining fields with one pointer filler map. + + __ bind(&no_inobject_slack_tracking); + } + + if (create_memento) { __ lea(esi, Operand(edi, -AllocationMemento::kSize)); __ InitializeFieldsWithFiller(ecx, esi, edx); // Fill in memento fields if necessary. // esi: points to the allocated but uninitialized memento. - Handle<Map> allocation_memento_map = factory->allocation_memento_map(); __ mov(Operand(esi, AllocationMemento::kMapOffset), - allocation_memento_map); + factory->allocation_memento_map()); // Get the cell or undefined. __ mov(edx, Operand(esp, kPointerSize*2)); __ mov(Operand(esi, AllocationMemento::kAllocationSiteOffset), @@ -340,14 +353,15 @@ static void Generate_JSConstructStubHelper(MacroAssembler* masm, offset = kPointerSize; } - // Must restore edi (constructor) before calling runtime. + // Must restore esi (context) and edi (constructor) before calling runtime. + __ mov(esi, Operand(ebp, StandardFrameConstants::kContextOffset)); __ mov(edi, Operand(esp, offset)); // edi: function (constructor) __ push(edi); if (create_memento) { - __ CallRuntime(Runtime::kHiddenNewObjectWithAllocationSite, 2); + __ CallRuntime(Runtime::kNewObjectWithAllocationSite, 2); } else { - __ CallRuntime(Runtime::kHiddenNewObject, 1); + __ CallRuntime(Runtime::kNewObject, 1); } __ mov(ebx, eax); // store result in ebx @@ -413,7 +427,7 @@ static void Generate_JSConstructStubHelper(MacroAssembler* masm, } // Store offset of return address for deoptimizer. 
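
In the construct-stub hunks above, the slack-tracking countdown moves from a byte in SharedFunctionInfo into the top bits of the map's bit field 3 (Map::ConstructionCount) and is decremented in place with a single sub of 1 << kShift. A sketch of that bit-field arithmetic in plain C++ (the shift value is assumed; the diff only guarantees the counter occupies the top bits, kShift + kSize == 32):

#include <cstdint>

const uint32_t kShift = 29;  // assumed 3-bit counter in the top bits

uint32_t ConstructionCount(uint32_t bit_field3) { return bit_field3 >> kShift; }

// Mirrors: sub [map + kBitField3Offset], 1 << kShift. The stub checks for the
// zero "no slack tracking" state first, so the subtraction never borrows into
// the lower fields.
uint32_t DecrementCount(uint32_t bit_field3) {
  return bit_field3 - (uint32_t{1} << kShift);
}
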
- if (!is_api_function && !count_constructions) { + if (!is_api_function) { masm->isolate()->heap()->SetConstructStubDeoptPCOffset(masm->pc_offset()); } @@ -455,18 +469,13 @@ static void Generate_JSConstructStubHelper(MacroAssembler* masm, } -void Builtins::Generate_JSConstructStubCountdown(MacroAssembler* masm) { - Generate_JSConstructStubHelper(masm, false, true, false); -} - - void Builtins::Generate_JSConstructStubGeneric(MacroAssembler* masm) { - Generate_JSConstructStubHelper(masm, false, false, FLAG_pretenuring_call_new); + Generate_JSConstructStubHelper(masm, false, FLAG_pretenuring_call_new); } void Builtins::Generate_JSConstructStubApi(MacroAssembler* masm) { - Generate_JSConstructStubHelper(masm, true, false, false); + Generate_JSConstructStubHelper(masm, true, false); } @@ -542,7 +551,7 @@ void Builtins::Generate_JSConstructEntryTrampoline(MacroAssembler* masm) { void Builtins::Generate_CompileUnoptimized(MacroAssembler* masm) { - CallRuntimePassFunction(masm, Runtime::kHiddenCompileUnoptimized); + CallRuntimePassFunction(masm, Runtime::kCompileUnoptimized); GenerateTailCallToReturnedCode(masm); } @@ -557,7 +566,7 @@ static void CallCompileOptimized(MacroAssembler* masm, bool concurrent) { // Whether to compile in a background thread. __ Push(masm->isolate()->factory()->ToBoolean(concurrent)); - __ CallRuntime(Runtime::kHiddenCompileOptimized, 2); + __ CallRuntime(Runtime::kCompileOptimized, 2); // Restore receiver. __ pop(edi); } @@ -661,7 +670,7 @@ static void Generate_NotifyStubFailureHelper(MacroAssembler* masm, // stubs that tail call the runtime on deopts passing their parameters in // registers. __ pushad(); - __ CallRuntime(Runtime::kHiddenNotifyStubFailure, 0, save_doubles); + __ CallRuntime(Runtime::kNotifyStubFailure, 0, save_doubles); __ popad(); // Tear down internal frame. } @@ -677,12 +686,7 @@ void Builtins::Generate_NotifyStubFailure(MacroAssembler* masm) { void Builtins::Generate_NotifyStubFailureSaveDoubles(MacroAssembler* masm) { - if (Serializer::enabled(masm->isolate())) { - PlatformFeatureScope sse2(masm->isolate(), SSE2); - Generate_NotifyStubFailureHelper(masm, kSaveFPRegs); - } else { - Generate_NotifyStubFailureHelper(masm, kSaveFPRegs); - } + Generate_NotifyStubFailureHelper(masm, kSaveFPRegs); } @@ -693,7 +697,7 @@ static void Generate_NotifyDeoptimizedHelper(MacroAssembler* masm, // Pass deoptimization type to the runtime system. __ push(Immediate(Smi::FromInt(static_cast<int>(type)))); - __ CallRuntime(Runtime::kHiddenNotifyDeoptimized, 1); + __ CallRuntime(Runtime::kNotifyDeoptimized, 1); // Tear down internal frame. } @@ -761,7 +765,7 @@ void Builtins::Generate_FunctionCall(MacroAssembler* masm) { // 3a. Patch the first argument if necessary when calling a function. Label shift_arguments; __ Move(edx, Immediate(0)); // indicate regular JS_FUNCTION - { Label convert_to_object, use_global_receiver, patch_receiver; + { Label convert_to_object, use_global_proxy, patch_receiver; // Change context eagerly in case we need the global receiver. __ mov(esi, FieldOperand(edi, JSFunction::kContextOffset)); @@ -783,9 +787,9 @@ void Builtins::Generate_FunctionCall(MacroAssembler* masm) { // global object if it is null or undefined. 
__ JumpIfSmi(ebx, &convert_to_object); __ cmp(ebx, factory->null_value()); - __ j(equal, &use_global_receiver); + __ j(equal, &use_global_proxy); __ cmp(ebx, factory->undefined_value()); - __ j(equal, &use_global_receiver); + __ j(equal, &use_global_proxy); STATIC_ASSERT(LAST_SPEC_OBJECT_TYPE == LAST_TYPE); __ CmpObjectType(ebx, FIRST_SPEC_OBJECT_TYPE, ecx); __ j(above_equal, &shift_arguments); @@ -810,10 +814,10 @@ void Builtins::Generate_FunctionCall(MacroAssembler* masm) { __ mov(edi, Operand(esp, eax, times_4, 1 * kPointerSize)); __ jmp(&patch_receiver); - __ bind(&use_global_receiver); + __ bind(&use_global_proxy); __ mov(ebx, Operand(esi, Context::SlotOffset(Context::GLOBAL_OBJECT_INDEX))); - __ mov(ebx, FieldOperand(ebx, GlobalObject::kGlobalReceiverOffset)); + __ mov(ebx, FieldOperand(ebx, GlobalObject::kGlobalProxyOffset)); __ bind(&patch_receiver); __ mov(Operand(esp, eax, times_4, 0), ebx); @@ -939,7 +943,7 @@ void Builtins::Generate_FunctionApply(MacroAssembler* masm) { __ mov(ebx, Operand(ebp, kReceiverOffset)); // Check that the function is a JS function (otherwise it must be a proxy). - Label push_receiver, use_global_receiver; + Label push_receiver, use_global_proxy; __ mov(edi, Operand(ebp, kFunctionOffset)); __ CmpObjectType(edi, JS_FUNCTION_TYPE, ecx); __ j(not_equal, &push_receiver); @@ -967,9 +971,9 @@ void Builtins::Generate_FunctionApply(MacroAssembler* masm) { // global object if it is null or undefined. __ JumpIfSmi(ebx, &call_to_object); __ cmp(ebx, factory->null_value()); - __ j(equal, &use_global_receiver); + __ j(equal, &use_global_proxy); __ cmp(ebx, factory->undefined_value()); - __ j(equal, &use_global_receiver); + __ j(equal, &use_global_proxy); STATIC_ASSERT(LAST_SPEC_OBJECT_TYPE == LAST_TYPE); __ CmpObjectType(ebx, FIRST_SPEC_OBJECT_TYPE, ecx); __ j(above_equal, &push_receiver); @@ -980,10 +984,10 @@ void Builtins::Generate_FunctionApply(MacroAssembler* masm) { __ mov(ebx, eax); __ jmp(&push_receiver); - __ bind(&use_global_receiver); + __ bind(&use_global_proxy); __ mov(ebx, Operand(esi, Context::SlotOffset(Context::GLOBAL_OBJECT_INDEX))); - __ mov(ebx, FieldOperand(ebx, GlobalObject::kGlobalReceiverOffset)); + __ mov(ebx, FieldOperand(ebx, GlobalObject::kGlobalProxyOffset)); // Push the receiver. __ bind(&push_receiver); @@ -991,12 +995,17 @@ void Builtins::Generate_FunctionApply(MacroAssembler* masm) { // Copy all arguments from the array to the stack. Label entry, loop; - __ mov(ecx, Operand(ebp, kIndexOffset)); + Register receiver = LoadIC::ReceiverRegister(); + Register key = LoadIC::NameRegister(); + __ mov(key, Operand(ebp, kIndexOffset)); __ jmp(&entry); __ bind(&loop); - __ mov(edx, Operand(ebp, kArgumentsOffset)); // load arguments + __ mov(receiver, Operand(ebp, kArgumentsOffset)); // load arguments // Use inline caching to speed up access to arguments. + if (FLAG_vector_ics) { + __ mov(LoadIC::SlotRegister(), Immediate(Smi::FromInt(0))); + } Handle<Code> ic = masm->isolate()->builtins()->KeyedLoadIC_Initialize(); __ call(ic, RelocInfo::CODE_TARGET); // It is important that we do not have a test instruction after the @@ -1007,19 +1016,19 @@ void Builtins::Generate_FunctionApply(MacroAssembler* masm) { // Push the nth argument. __ push(eax); - // Update the index on the stack and in register eax. - __ mov(ecx, Operand(ebp, kIndexOffset)); - __ add(ecx, Immediate(1 << kSmiTagSize)); - __ mov(Operand(ebp, kIndexOffset), ecx); + // Update the index on the stack and in register key. 
+ __ mov(key, Operand(ebp, kIndexOffset)); + __ add(key, Immediate(1 << kSmiTagSize)); + __ mov(Operand(ebp, kIndexOffset), key); __ bind(&entry); - __ cmp(ecx, Operand(ebp, kLimitOffset)); + __ cmp(key, Operand(ebp, kLimitOffset)); __ j(not_equal, &loop); // Call the function. Label call_proxy; - __ mov(eax, ecx); ParameterCount actual(eax); + __ Move(eax, key); __ SmiUntag(eax); __ mov(edi, Operand(ebp, kFunctionOffset)); __ CmpObjectType(edi, JS_FUNCTION_TYPE, ecx); @@ -1432,7 +1441,7 @@ void Builtins::Generate_OsrAfterStackCheck(MacroAssembler* masm) { __ j(above_equal, &ok, Label::kNear); { FrameScope scope(masm, StackFrame::INTERNAL); - __ CallRuntime(Runtime::kHiddenStackGuard, 0); + __ CallRuntime(Runtime::kStackGuard, 0); } __ jmp(masm->isolate()->builtins()->OnStackReplacement(), RelocInfo::CODE_TARGET); diff --git a/deps/v8/src/ia32/code-stubs-ia32.cc b/deps/v8/src/ia32/code-stubs-ia32.cc index 174ebbbfaf7..104576e64a9 100644 --- a/deps/v8/src/ia32/code-stubs-ia32.cc +++ b/deps/v8/src/ia32/code-stubs-ia32.cc @@ -2,19 +2,18 @@ // Use of this source code is governed by a BSD-style license that can be // found in the LICENSE file. -#include "v8.h" +#include "src/v8.h" #if V8_TARGET_ARCH_IA32 -#include "bootstrapper.h" -#include "code-stubs.h" -#include "isolate.h" -#include "jsregexp.h" -#include "regexp-macro-assembler.h" -#include "runtime.h" -#include "stub-cache.h" -#include "codegen.h" -#include "runtime.h" +#include "src/bootstrapper.h" +#include "src/code-stubs.h" +#include "src/codegen.h" +#include "src/isolate.h" +#include "src/jsregexp.h" +#include "src/regexp-macro-assembler.h" +#include "src/runtime.h" +#include "src/stub-cache.h" namespace v8 { namespace internal { @@ -22,299 +21,232 @@ namespace internal { void FastNewClosureStub::InitializeInterfaceDescriptor( CodeStubInterfaceDescriptor* descriptor) { - static Register registers[] = { ebx }; - descriptor->register_param_count_ = 1; - descriptor->register_params_ = registers; - descriptor->deoptimization_handler_ = - Runtime::FunctionForId(Runtime::kHiddenNewClosureFromStubFailure)->entry; + Register registers[] = { esi, ebx }; + descriptor->Initialize( + MajorKey(), ARRAY_SIZE(registers), registers, + Runtime::FunctionForId(Runtime::kNewClosureFromStubFailure)->entry); } void FastNewContextStub::InitializeInterfaceDescriptor( CodeStubInterfaceDescriptor* descriptor) { - static Register registers[] = { edi }; - descriptor->register_param_count_ = 1; - descriptor->register_params_ = registers; - descriptor->deoptimization_handler_ = NULL; + Register registers[] = { esi, edi }; + descriptor->Initialize(MajorKey(), ARRAY_SIZE(registers), registers); } void ToNumberStub::InitializeInterfaceDescriptor( CodeStubInterfaceDescriptor* descriptor) { - static Register registers[] = { eax }; - descriptor->register_param_count_ = 1; - descriptor->register_params_ = registers; - descriptor->deoptimization_handler_ = NULL; + // ToNumberStub invokes a function, and therefore needs a context. 
+ Register registers[] = { esi, eax }; + descriptor->Initialize(MajorKey(), ARRAY_SIZE(registers), registers); } void NumberToStringStub::InitializeInterfaceDescriptor( CodeStubInterfaceDescriptor* descriptor) { - static Register registers[] = { eax }; - descriptor->register_param_count_ = 1; - descriptor->register_params_ = registers; - descriptor->deoptimization_handler_ = - Runtime::FunctionForId(Runtime::kHiddenNumberToString)->entry; + Register registers[] = { esi, eax }; + descriptor->Initialize( + MajorKey(), ARRAY_SIZE(registers), registers, + Runtime::FunctionForId(Runtime::kNumberToStringRT)->entry); } void FastCloneShallowArrayStub::InitializeInterfaceDescriptor( CodeStubInterfaceDescriptor* descriptor) { - static Register registers[] = { eax, ebx, ecx }; - descriptor->register_param_count_ = 3; - descriptor->register_params_ = registers; - descriptor->deoptimization_handler_ = - Runtime::FunctionForId( - Runtime::kHiddenCreateArrayLiteralStubBailout)->entry; + Register registers[] = { esi, eax, ebx, ecx }; + Representation representations[] = { + Representation::Tagged(), + Representation::Tagged(), + Representation::Smi(), + Representation::Tagged() }; + + descriptor->Initialize( + MajorKey(), ARRAY_SIZE(registers), registers, + Runtime::FunctionForId(Runtime::kCreateArrayLiteralStubBailout)->entry, + representations); } void FastCloneShallowObjectStub::InitializeInterfaceDescriptor( CodeStubInterfaceDescriptor* descriptor) { - static Register registers[] = { eax, ebx, ecx, edx }; - descriptor->register_param_count_ = 4; - descriptor->register_params_ = registers; - descriptor->deoptimization_handler_ = - Runtime::FunctionForId(Runtime::kHiddenCreateObjectLiteral)->entry; + Register registers[] = { esi, eax, ebx, ecx, edx }; + descriptor->Initialize( + MajorKey(), ARRAY_SIZE(registers), registers, + Runtime::FunctionForId(Runtime::kCreateObjectLiteral)->entry); } void CreateAllocationSiteStub::InitializeInterfaceDescriptor( CodeStubInterfaceDescriptor* descriptor) { - static Register registers[] = { ebx, edx }; - descriptor->register_param_count_ = 2; - descriptor->register_params_ = registers; - descriptor->deoptimization_handler_ = NULL; + Register registers[] = { esi, ebx, edx }; + descriptor->Initialize(MajorKey(), ARRAY_SIZE(registers), registers); } -void KeyedLoadFastElementStub::InitializeInterfaceDescriptor( +void CallFunctionStub::InitializeInterfaceDescriptor( CodeStubInterfaceDescriptor* descriptor) { - static Register registers[] = { edx, ecx }; - descriptor->register_param_count_ = 2; - descriptor->register_params_ = registers; - descriptor->deoptimization_handler_ = - FUNCTION_ADDR(KeyedLoadIC_MissFromStubFailure); + Register registers[] = {esi, edi}; + descriptor->Initialize(MajorKey(), ARRAY_SIZE(registers), registers); } -void KeyedLoadDictionaryElementStub::InitializeInterfaceDescriptor( +void CallConstructStub::InitializeInterfaceDescriptor( CodeStubInterfaceDescriptor* descriptor) { - static Register registers[] = { edx, ecx }; - descriptor->register_param_count_ = 2; - descriptor->register_params_ = registers; - descriptor->deoptimization_handler_ = - FUNCTION_ADDR(KeyedLoadIC_MissFromStubFailure); + // eax : number of arguments + // ebx : feedback vector + // edx : (only if ebx is not the megamorphic symbol) slot in feedback + // vector (Smi) + // edi : constructor function + // TODO(turbofan): So far we don't gather type feedback and hence skip the + // slot parameter, but ArrayConstructStub needs the vector to be undefined. 
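
Every stub descriptor in the hunks above changes the same way: esi, the context register, is prepended to the register list, and the three hand-assigned fields (register_param_count_, register_params_, deoptimization_handler_) collapse into one Initialize() call. A mock of the recurring shape (the real Initialize() also accepts representations and stack-parameter arguments):

struct Register { int code; };
#define ARRAY_SIZE(a) static_cast<int>(sizeof(a) / sizeof((a)[0]))

struct CodeStubInterfaceDescriptor {
  void Initialize(int major_key, int register_count, const Register* registers,
                  void* deopt_handler = nullptr) { /* records the state */ }
};

// Usage, per the hunks above:
//   Register registers[] = { esi, eax };
//   descriptor->Initialize(MajorKey(), ARRAY_SIZE(registers), registers,
//                          optional_handler);
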
+ Register registers[] = {esi, eax, edi, ebx}; + descriptor->Initialize(MajorKey(), ARRAY_SIZE(registers), registers); } void RegExpConstructResultStub::InitializeInterfaceDescriptor( CodeStubInterfaceDescriptor* descriptor) { - static Register registers[] = { ecx, ebx, eax }; - descriptor->register_param_count_ = 3; - descriptor->register_params_ = registers; - descriptor->deoptimization_handler_ = - Runtime::FunctionForId(Runtime::kHiddenRegExpConstructResult)->entry; + Register registers[] = { esi, ecx, ebx, eax }; + descriptor->Initialize( + MajorKey(), ARRAY_SIZE(registers), registers, + Runtime::FunctionForId(Runtime::kRegExpConstructResult)->entry); } -void LoadFieldStub::InitializeInterfaceDescriptor( - CodeStubInterfaceDescriptor* descriptor) { - static Register registers[] = { edx }; - descriptor->register_param_count_ = 1; - descriptor->register_params_ = registers; - descriptor->deoptimization_handler_ = NULL; -} - - -void KeyedLoadFieldStub::InitializeInterfaceDescriptor( - CodeStubInterfaceDescriptor* descriptor) { - static Register registers[] = { edx }; - descriptor->register_param_count_ = 1; - descriptor->register_params_ = registers; - descriptor->deoptimization_handler_ = NULL; -} - - -void StringLengthStub::InitializeInterfaceDescriptor( - CodeStubInterfaceDescriptor* descriptor) { - static Register registers[] = { edx, ecx }; - descriptor->register_param_count_ = 2; - descriptor->register_params_ = registers; - descriptor->deoptimization_handler_ = NULL; -} - - -void KeyedStringLengthStub::InitializeInterfaceDescriptor( - CodeStubInterfaceDescriptor* descriptor) { - static Register registers[] = { edx, ecx }; - descriptor->register_param_count_ = 2; - descriptor->register_params_ = registers; - descriptor->deoptimization_handler_ = NULL; -} - - -void KeyedStoreFastElementStub::InitializeInterfaceDescriptor( +void TransitionElementsKindStub::InitializeInterfaceDescriptor( CodeStubInterfaceDescriptor* descriptor) { - static Register registers[] = { edx, ecx, eax }; - descriptor->register_param_count_ = 3; - descriptor->register_params_ = registers; - descriptor->deoptimization_handler_ = - FUNCTION_ADDR(KeyedStoreIC_MissFromStubFailure); + Register registers[] = { esi, eax, ebx }; + descriptor->Initialize( + MajorKey(), ARRAY_SIZE(registers), registers, + Runtime::FunctionForId(Runtime::kTransitionElementsKind)->entry); } -void TransitionElementsKindStub::InitializeInterfaceDescriptor( - CodeStubInterfaceDescriptor* descriptor) { - static Register registers[] = { eax, ebx }; - descriptor->register_param_count_ = 2; - descriptor->register_params_ = registers; - descriptor->deoptimization_handler_ = - Runtime::FunctionForId(Runtime::kTransitionElementsKind)->entry; -} +const Register InterfaceDescriptor::ContextRegister() { return esi; } static void InitializeArrayConstructorDescriptor( - Isolate* isolate, + Isolate* isolate, CodeStub::Major major, CodeStubInterfaceDescriptor* descriptor, int constant_stack_parameter_count) { // register state // eax -- number of arguments // edi -- function // ebx -- allocation site with elements kind - static Register registers_variable_args[] = { edi, ebx, eax }; - static Register registers_no_args[] = { edi, ebx }; + Address deopt_handler = Runtime::FunctionForId( + Runtime::kArrayConstructor)->entry; if (constant_stack_parameter_count == 0) { - descriptor->register_param_count_ = 2; - descriptor->register_params_ = registers_no_args; + Register registers[] = { esi, edi, ebx }; + descriptor->Initialize(major, ARRAY_SIZE(registers), 
registers, + deopt_handler, NULL, constant_stack_parameter_count, + JS_FUNCTION_STUB_MODE); } else { // stack param count needs (constructor pointer, and single argument) - descriptor->handler_arguments_mode_ = PASS_ARGUMENTS; - descriptor->stack_parameter_count_ = eax; - descriptor->register_param_count_ = 3; - descriptor->register_params_ = registers_variable_args; + Register registers[] = { esi, edi, ebx, eax }; + Representation representations[] = { + Representation::Tagged(), + Representation::Tagged(), + Representation::Tagged(), + Representation::Integer32() }; + descriptor->Initialize(major, ARRAY_SIZE(registers), registers, eax, + deopt_handler, representations, + constant_stack_parameter_count, + JS_FUNCTION_STUB_MODE, PASS_ARGUMENTS); } - - descriptor->hint_stack_parameter_count_ = constant_stack_parameter_count; - descriptor->function_mode_ = JS_FUNCTION_STUB_MODE; - descriptor->deoptimization_handler_ = - Runtime::FunctionForId(Runtime::kHiddenArrayConstructor)->entry; } static void InitializeInternalArrayConstructorDescriptor( - CodeStubInterfaceDescriptor* descriptor, + CodeStub::Major major, CodeStubInterfaceDescriptor* descriptor, int constant_stack_parameter_count) { // register state // eax -- number of arguments // edi -- constructor function - static Register registers_variable_args[] = { edi, eax }; - static Register registers_no_args[] = { edi }; + Address deopt_handler = Runtime::FunctionForId( + Runtime::kInternalArrayConstructor)->entry; if (constant_stack_parameter_count == 0) { - descriptor->register_param_count_ = 1; - descriptor->register_params_ = registers_no_args; + Register registers[] = { esi, edi }; + descriptor->Initialize(major, ARRAY_SIZE(registers), registers, + deopt_handler, NULL, constant_stack_parameter_count, + JS_FUNCTION_STUB_MODE); } else { // stack param count needs (constructor pointer, and single argument) - descriptor->handler_arguments_mode_ = PASS_ARGUMENTS; - descriptor->stack_parameter_count_ = eax; - descriptor->register_param_count_ = 2; - descriptor->register_params_ = registers_variable_args; + Register registers[] = { esi, edi, eax }; + Representation representations[] = { + Representation::Tagged(), + Representation::Tagged(), + Representation::Integer32() }; + descriptor->Initialize(major, ARRAY_SIZE(registers), registers, eax, + deopt_handler, representations, + constant_stack_parameter_count, + JS_FUNCTION_STUB_MODE, PASS_ARGUMENTS); } - - descriptor->hint_stack_parameter_count_ = constant_stack_parameter_count; - descriptor->function_mode_ = JS_FUNCTION_STUB_MODE; - descriptor->deoptimization_handler_ = - Runtime::FunctionForId(Runtime::kHiddenInternalArrayConstructor)->entry; } void ArrayNoArgumentConstructorStub::InitializeInterfaceDescriptor( CodeStubInterfaceDescriptor* descriptor) { - InitializeArrayConstructorDescriptor(isolate(), descriptor, 0); + InitializeArrayConstructorDescriptor(isolate(), MajorKey(), descriptor, 0); } void ArraySingleArgumentConstructorStub::InitializeInterfaceDescriptor( CodeStubInterfaceDescriptor* descriptor) { - InitializeArrayConstructorDescriptor(isolate(), descriptor, 1); + InitializeArrayConstructorDescriptor(isolate(), MajorKey(), descriptor, 1); } void ArrayNArgumentsConstructorStub::InitializeInterfaceDescriptor( CodeStubInterfaceDescriptor* descriptor) { - InitializeArrayConstructorDescriptor(isolate(), descriptor, -1); + InitializeArrayConstructorDescriptor(isolate(), MajorKey(), descriptor, -1); } void InternalArrayNoArgumentConstructorStub::InitializeInterfaceDescriptor( 
CodeStubInterfaceDescriptor* descriptor) { - InitializeInternalArrayConstructorDescriptor(descriptor, 0); + InitializeInternalArrayConstructorDescriptor(MajorKey(), descriptor, 0); } void InternalArraySingleArgumentConstructorStub::InitializeInterfaceDescriptor( CodeStubInterfaceDescriptor* descriptor) { - InitializeInternalArrayConstructorDescriptor(descriptor, 1); + InitializeInternalArrayConstructorDescriptor(MajorKey(), descriptor, 1); } void InternalArrayNArgumentsConstructorStub::InitializeInterfaceDescriptor( CodeStubInterfaceDescriptor* descriptor) { - InitializeInternalArrayConstructorDescriptor(descriptor, -1); + InitializeInternalArrayConstructorDescriptor(MajorKey(), descriptor, -1); } void CompareNilICStub::InitializeInterfaceDescriptor( CodeStubInterfaceDescriptor* descriptor) { - static Register registers[] = { eax }; - descriptor->register_param_count_ = 1; - descriptor->register_params_ = registers; - descriptor->deoptimization_handler_ = - FUNCTION_ADDR(CompareNilIC_Miss); + Register registers[] = { esi, eax }; + descriptor->Initialize(MajorKey(), ARRAY_SIZE(registers), registers, + FUNCTION_ADDR(CompareNilIC_Miss)); descriptor->SetMissHandler( ExternalReference(IC_Utility(IC::kCompareNilIC_Miss), isolate())); } void ToBooleanStub::InitializeInterfaceDescriptor( CodeStubInterfaceDescriptor* descriptor) { - static Register registers[] = { eax }; - descriptor->register_param_count_ = 1; - descriptor->register_params_ = registers; - descriptor->deoptimization_handler_ = - FUNCTION_ADDR(ToBooleanIC_Miss); + Register registers[] = { esi, eax }; + descriptor->Initialize(MajorKey(), ARRAY_SIZE(registers), registers, + FUNCTION_ADDR(ToBooleanIC_Miss)); descriptor->SetMissHandler( ExternalReference(IC_Utility(IC::kToBooleanIC_Miss), isolate())); } -void StoreGlobalStub::InitializeInterfaceDescriptor( - CodeStubInterfaceDescriptor* descriptor) { - static Register registers[] = { edx, ecx, eax }; - descriptor->register_param_count_ = 3; - descriptor->register_params_ = registers; - descriptor->deoptimization_handler_ = - FUNCTION_ADDR(StoreIC_MissFromStubFailure); -} - - -void ElementsTransitionAndStoreStub::InitializeInterfaceDescriptor( - CodeStubInterfaceDescriptor* descriptor) { - static Register registers[] = { eax, ebx, ecx, edx }; - descriptor->register_param_count_ = 4; - descriptor->register_params_ = registers; - descriptor->deoptimization_handler_ = - FUNCTION_ADDR(ElementsTransitionAndStoreIC_Miss); -} - - void BinaryOpICStub::InitializeInterfaceDescriptor( CodeStubInterfaceDescriptor* descriptor) { - static Register registers[] = { edx, eax }; - descriptor->register_param_count_ = 2; - descriptor->register_params_ = registers; - descriptor->deoptimization_handler_ = FUNCTION_ADDR(BinaryOpIC_Miss); + Register registers[] = { esi, edx, eax }; + descriptor->Initialize(MajorKey(), ARRAY_SIZE(registers), registers, + FUNCTION_ADDR(BinaryOpIC_Miss)); descriptor->SetMissHandler( ExternalReference(IC_Utility(IC::kBinaryOpIC_Miss), isolate())); } @@ -322,21 +254,17 @@ void BinaryOpICStub::InitializeInterfaceDescriptor( void BinaryOpWithAllocationSiteStub::InitializeInterfaceDescriptor( CodeStubInterfaceDescriptor* descriptor) { - static Register registers[] = { ecx, edx, eax }; - descriptor->register_param_count_ = 3; - descriptor->register_params_ = registers; - descriptor->deoptimization_handler_ = - FUNCTION_ADDR(BinaryOpIC_MissWithAllocationSite); + Register registers[] = { esi, ecx, edx, eax }; + descriptor->Initialize(MajorKey(), ARRAY_SIZE(registers), registers, + 
FUNCTION_ADDR(BinaryOpIC_MissWithAllocationSite)); } void StringAddStub::InitializeInterfaceDescriptor( CodeStubInterfaceDescriptor* descriptor) { - static Register registers[] = { edx, eax }; - descriptor->register_param_count_ = 2; - descriptor->register_params_ = registers; - descriptor->deoptimization_handler_ = - Runtime::FunctionForId(Runtime::kHiddenStringAdd)->entry; + Register registers[] = { esi, edx, eax }; + descriptor->Initialize(MajorKey(), ARRAY_SIZE(registers), registers, + Runtime::FunctionForId(Runtime::kStringAdd)->entry); } @@ -344,82 +272,72 @@ void CallDescriptors::InitializeForIsolate(Isolate* isolate) { { CallInterfaceDescriptor* descriptor = isolate->call_descriptor(Isolate::ArgumentAdaptorCall); - static Register registers[] = { edi, // JSFunction - esi, // context - eax, // actual number of arguments - ebx, // expected number of arguments + Register registers[] = { esi, // context + edi, // JSFunction + eax, // actual number of arguments + ebx, // expected number of arguments }; - static Representation representations[] = { - Representation::Tagged(), // JSFunction + Representation representations[] = { Representation::Tagged(), // context + Representation::Tagged(), // JSFunction Representation::Integer32(), // actual number of arguments Representation::Integer32(), // expected number of arguments }; - descriptor->register_param_count_ = 4; - descriptor->register_params_ = registers; - descriptor->param_representations_ = representations; + descriptor->Initialize(ARRAY_SIZE(registers), registers, representations); } { CallInterfaceDescriptor* descriptor = isolate->call_descriptor(Isolate::KeyedCall); - static Register registers[] = { esi, // context - ecx, // key + Register registers[] = { esi, // context + ecx, // key }; - static Representation representations[] = { + Representation representations[] = { Representation::Tagged(), // context Representation::Tagged(), // key }; - descriptor->register_param_count_ = 2; - descriptor->register_params_ = registers; - descriptor->param_representations_ = representations; + descriptor->Initialize(ARRAY_SIZE(registers), registers, representations); } { CallInterfaceDescriptor* descriptor = isolate->call_descriptor(Isolate::NamedCall); - static Register registers[] = { esi, // context - ecx, // name + Register registers[] = { esi, // context + ecx, // name }; - static Representation representations[] = { + Representation representations[] = { Representation::Tagged(), // context Representation::Tagged(), // name }; - descriptor->register_param_count_ = 2; - descriptor->register_params_ = registers; - descriptor->param_representations_ = representations; + descriptor->Initialize(ARRAY_SIZE(registers), registers, representations); } { CallInterfaceDescriptor* descriptor = isolate->call_descriptor(Isolate::CallHandler); - static Register registers[] = { esi, // context - edx, // receiver + Register registers[] = { esi, // context + edx, // name }; - static Representation representations[] = { - Representation::Tagged(), // context - Representation::Tagged(), // receiver + Representation representations[] = { + Representation::Tagged(), // context + Representation::Tagged(), // receiver }; - descriptor->register_param_count_ = 2; - descriptor->register_params_ = registers; - descriptor->param_representations_ = representations; + descriptor->Initialize(ARRAY_SIZE(registers), registers, representations); } { CallInterfaceDescriptor* descriptor = isolate->call_descriptor(Isolate::ApiFunctionCall); - static Register registers[] 
= { eax, // callee - ebx, // call_data - ecx, // holder - edx, // api_function_address - esi, // context + Register registers[] = { esi, // context + eax, // callee + ebx, // call_data + ecx, // holder + edx, // api_function_address }; - static Representation representations[] = { + Representation representations[] = { + Representation::Tagged(), // context Representation::Tagged(), // callee Representation::Tagged(), // call_data Representation::Tagged(), // holder Representation::External(), // api_function_address - Representation::Tagged(), // context }; - descriptor->register_param_count_ = 5; - descriptor->register_params_ = registers; - descriptor->param_representations_ = representations; + descriptor->Initialize(ARRAY_SIZE(registers), registers, representations); } } @@ -432,18 +350,19 @@ void HydrogenCodeStub::GenerateLightweightMiss(MacroAssembler* masm) { isolate()->counters()->code_stubs()->Increment(); CodeStubInterfaceDescriptor* descriptor = GetInterfaceDescriptor(); - int param_count = descriptor->register_param_count_; + int param_count = descriptor->GetEnvironmentParameterCount(); { // Call the runtime system in a fresh internal frame. FrameScope scope(masm, StackFrame::INTERNAL); - ASSERT(descriptor->register_param_count_ == 0 || - eax.is(descriptor->register_params_[param_count - 1])); + DCHECK(param_count == 0 || + eax.is(descriptor->GetEnvironmentParameterRegister( + param_count - 1))); // Push arguments for (int i = 0; i < param_count; ++i) { - __ push(descriptor->register_params_[i]); + __ push(descriptor->GetEnvironmentParameterRegister(i)); } ExternalReference miss = descriptor->miss_handler(); - __ CallExternalReference(miss, descriptor->register_param_count_); + __ CallExternalReference(miss, param_count); } __ ret(0); @@ -456,9 +375,8 @@ void StoreBufferOverflowStub::Generate(MacroAssembler* masm) { // restore them. 
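
The CallDescriptors hunks above reorder every register list so the context comes first and keep a Representation array in lock-step with it. A compile-checked mock of that invariant (types reduced for illustration):

struct Register { int code; };
enum class Rep { Tagged, Integer32, External };

const Register esi{6}, edi{7}, eax{0}, ebx{3};

// ArgumentAdaptorCall, per the hunk above: context first, then JSFunction,
// then the actual and expected argument counts.
const Register kRegs[] = { esi, edi, eax, ebx };
const Rep kReps[] = { Rep::Tagged, Rep::Tagged, Rep::Integer32, Rep::Integer32 };

static_assert(sizeof(kRegs) / sizeof(kRegs[0]) == sizeof(kReps) / sizeof(kReps[0]),
              "one representation per register");
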
__ pushad(); if (save_doubles_ == kSaveFPRegs) { - CpuFeatureScope scope(masm, SSE2); - __ sub(esp, Immediate(kDoubleSize * XMMRegister::kNumRegisters)); - for (int i = 0; i < XMMRegister::kNumRegisters; i++) { + __ sub(esp, Immediate(kDoubleSize * XMMRegister::kMaxNumRegisters)); + for (int i = 0; i < XMMRegister::kMaxNumRegisters; i++) { XMMRegister reg = XMMRegister::from_code(i); __ movsd(Operand(esp, i * kDoubleSize), reg); } @@ -473,12 +391,11 @@ void StoreBufferOverflowStub::Generate(MacroAssembler* masm) { ExternalReference::store_buffer_overflow_function(isolate()), argument_count); if (save_doubles_ == kSaveFPRegs) { - CpuFeatureScope scope(masm, SSE2); - for (int i = 0; i < XMMRegister::kNumRegisters; i++) { + for (int i = 0; i < XMMRegister::kMaxNumRegisters; i++) { XMMRegister reg = XMMRegister::from_code(i); __ movsd(reg, Operand(esp, i * kDoubleSize)); } - __ add(esp, Immediate(kDoubleSize * XMMRegister::kNumRegisters)); + __ add(esp, Immediate(kDoubleSize * XMMRegister::kMaxNumRegisters)); } __ popad(); __ ret(0); @@ -516,7 +433,7 @@ class FloatingPointHelper : public AllStatic { void DoubleToIStub::Generate(MacroAssembler* masm) { Register input_reg = this->source(); Register final_result_reg = this->destination(); - ASSERT(is_truncating()); + DCHECK(is_truncating()); Label check_negative, process_64_bits, done, done_no_stash; @@ -607,15 +524,7 @@ void DoubleToIStub::Generate(MacroAssembler* masm) { __ shrd(result_reg, scratch1); __ shr_cl(result_reg); __ test(ecx, Immediate(32)); - if (CpuFeatures::IsSupported(CMOV)) { - CpuFeatureScope use_cmov(masm, CMOV); - __ cmov(not_equal, scratch1, result_reg); - } else { - Label skip_mov; - __ j(equal, &skip_mov, Label::kNear); - __ mov(scratch1, result_reg); - __ bind(&skip_mov); - } + __ cmov(not_equal, scratch1, result_reg); } // If the double was negative, negate the integer result. @@ -627,15 +536,7 @@ void DoubleToIStub::Generate(MacroAssembler* masm) { } else { __ cmp(exponent_operand, Immediate(0)); } - if (CpuFeatures::IsSupported(CMOV)) { - CpuFeatureScope use_cmov(masm, CMOV); __ cmov(greater, result_reg, scratch1); - } else { - Label skip_mov; - __ j(less_equal, &skip_mov, Label::kNear); - __ mov(result_reg, scratch1); - __ bind(&skip_mov); - } // Restore registers __ bind(&done); @@ -644,7 +545,7 @@ void DoubleToIStub::Generate(MacroAssembler* masm) { } __ bind(&done_no_stash); if (!final_result_reg.is(result_reg)) { - ASSERT(final_result_reg.is(ecx)); + DCHECK(final_result_reg.is(ecx)); __ mov(final_result_reg, result_reg); } __ pop(save_reg); @@ -726,7 +627,6 @@ void FloatingPointHelper::CheckFloatOperands(MacroAssembler* masm, void MathPowStub::Generate(MacroAssembler* masm) { - CpuFeatureScope use_sse2(masm, SSE2); Factory* factory = isolate()->factory(); const Register exponent = eax; const Register base = edx; @@ -960,7 +860,7 @@ void MathPowStub::Generate(MacroAssembler* masm) { if (exponent_type_ == ON_STACK) { // The arguments are still on the stack. __ bind(&call_runtime); - __ TailCallRuntime(Runtime::kHiddenMathPow, 2, 1); + __ TailCallRuntime(Runtime::kMathPowRT, 2, 1); // The stub is called from non-optimized code, which expects the result // as heap number in exponent. 
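
With SSE2 unconditional, StoreBufferOverflowStub above spills and refills all eight XMM registers around the C call instead of guarding the loop with a CpuFeatureScope. A C++ analogue of the spill/refill bracket, for illustration only (the real code does this with sub esp / movsd / add esp):

#include <cstring>

const int kNumXmmRegisters = 8;  // XMMRegister::kMaxNumRegisters above

// Copy the register file to a scratch area, run the call, copy it back.
void WithXmmSaved(double xmm[kNumXmmRegisters], void (*call)()) {
  double slots[kNumXmmRegisters];
  std::memcpy(slots, xmm, sizeof(slots));  // movsd [esp + i*kDoubleSize], xmm_i
  call();                                  // the C callee may clobber XMM state
  std::memcpy(xmm, slots, sizeof(slots));  // movsd xmm_i, [esp + i*kDoubleSize]
}
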
@@ -994,22 +894,14 @@ void MathPowStub::Generate(MacroAssembler* masm) { void FunctionPrototypeStub::Generate(MacroAssembler* masm) { - // ----------- S t a t e ------------- - // -- ecx : name - // -- edx : receiver - // -- esp[0] : return address - // ----------------------------------- Label miss; + Register receiver = LoadIC::ReceiverRegister(); - if (kind() == Code::KEYED_LOAD_IC) { - __ cmp(ecx, Immediate(isolate()->factory()->prototype_string())); - __ j(not_equal, &miss); - } - - StubCompiler::GenerateLoadFunctionPrototype(masm, edx, eax, ebx, &miss); + NamedLoadHandlerCompiler::GenerateLoadFunctionPrototype(masm, receiver, eax, + ebx, &miss); __ bind(&miss); - StubCompiler::TailCallBuiltin( - masm, BaseLoadStoreStubCompiler::MissBuiltin(kind())); + PropertyAccessCompiler::TailCallBuiltin( + masm, PropertyAccessCompiler::MissBuiltin(Code::LOAD_IC)); } @@ -1093,7 +985,7 @@ void ArgumentsAccessStub::GenerateNewSloppySlow(MacroAssembler* masm) { __ mov(Operand(esp, 2 * kPointerSize), edx); __ bind(&runtime); - __ TailCallRuntime(Runtime::kHiddenNewArgumentsFast, 3, 1); + __ TailCallRuntime(Runtime::kNewSloppyArguments, 3, 1); } @@ -1128,7 +1020,7 @@ void ArgumentsAccessStub::GenerateNewSloppyFast(MacroAssembler* masm) { __ mov(Operand(esp, 2 * kPointerSize), edx); // ebx = parameter count (tagged) - // ecx = argument count (tagged) + // ecx = argument count (smi-tagged) // esp[4] = parameter count (tagged) // esp[8] = address of receiver argument // Compute the mapped parameter count = min(ebx, ecx) in ebx. @@ -1161,47 +1053,52 @@ void ArgumentsAccessStub::GenerateNewSloppyFast(MacroAssembler* masm) { __ Allocate(ebx, eax, edx, edi, &runtime, TAG_OBJECT); // eax = address of new object(s) (tagged) - // ecx = argument count (tagged) + // ecx = argument count (smi-tagged) // esp[0] = mapped parameter count (tagged) // esp[8] = parameter count (tagged) // esp[12] = address of receiver argument - // Get the arguments boilerplate from the current native context into edi. - Label has_mapped_parameters, copy; + // Get the arguments map from the current native context into edi. + Label has_mapped_parameters, instantiate; __ mov(edi, Operand(esi, Context::SlotOffset(Context::GLOBAL_OBJECT_INDEX))); __ mov(edi, FieldOperand(edi, GlobalObject::kNativeContextOffset)); __ mov(ebx, Operand(esp, 0 * kPointerSize)); __ test(ebx, ebx); __ j(not_zero, &has_mapped_parameters, Label::kNear); - __ mov(edi, Operand(edi, - Context::SlotOffset(Context::SLOPPY_ARGUMENTS_BOILERPLATE_INDEX))); - __ jmp(©, Label::kNear); + __ mov( + edi, + Operand(edi, Context::SlotOffset(Context::SLOPPY_ARGUMENTS_MAP_INDEX))); + __ jmp(&instantiate, Label::kNear); __ bind(&has_mapped_parameters); - __ mov(edi, Operand(edi, - Context::SlotOffset(Context::ALIASED_ARGUMENTS_BOILERPLATE_INDEX))); - __ bind(©); + __ mov( + edi, + Operand(edi, Context::SlotOffset(Context::ALIASED_ARGUMENTS_MAP_INDEX))); + __ bind(&instantiate); // eax = address of new object (tagged) // ebx = mapped parameter count (tagged) - // ecx = argument count (tagged) - // edi = address of boilerplate object (tagged) + // ecx = argument count (smi-tagged) + // edi = address of arguments map (tagged) // esp[0] = mapped parameter count (tagged) // esp[8] = parameter count (tagged) // esp[12] = address of receiver argument // Copy the JS object part. 
- for (int i = 0; i < JSObject::kHeaderSize; i += kPointerSize) { - __ mov(edx, FieldOperand(edi, i)); - __ mov(FieldOperand(eax, i), edx); - } + __ mov(FieldOperand(eax, JSObject::kMapOffset), edi); + __ mov(FieldOperand(eax, JSObject::kPropertiesOffset), + masm->isolate()->factory()->empty_fixed_array()); + __ mov(FieldOperand(eax, JSObject::kElementsOffset), + masm->isolate()->factory()->empty_fixed_array()); // Set up the callee in-object property. STATIC_ASSERT(Heap::kArgumentsCalleeIndex == 1); __ mov(edx, Operand(esp, 4 * kPointerSize)); + __ AssertNotSmi(edx); __ mov(FieldOperand(eax, JSObject::kHeaderSize + Heap::kArgumentsCalleeIndex * kPointerSize), edx); // Use the length (smi tagged) and set that as an in-object property too. + __ AssertSmi(ecx); STATIC_ASSERT(Heap::kArgumentsLengthIndex == 0); __ mov(FieldOperand(eax, JSObject::kHeaderSize + Heap::kArgumentsLengthIndex * kPointerSize), @@ -1316,7 +1213,7 @@ void ArgumentsAccessStub::GenerateNewSloppyFast(MacroAssembler* masm) { __ bind(&runtime); __ pop(eax); // Remove saved parameter count. __ mov(Operand(esp, 1 * kPointerSize), ecx); // Patch argument count. - __ TailCallRuntime(Runtime::kHiddenNewArgumentsFast, 3, 1); + __ TailCallRuntime(Runtime::kNewSloppyArguments, 3, 1); } @@ -1358,22 +1255,22 @@ void ArgumentsAccessStub::GenerateNewStrict(MacroAssembler* masm) { // Do the allocation of both objects in one go. __ Allocate(ecx, eax, edx, ebx, &runtime, TAG_OBJECT); - // Get the arguments boilerplate from the current native context. + // Get the arguments map from the current native context. __ mov(edi, Operand(esi, Context::SlotOffset(Context::GLOBAL_OBJECT_INDEX))); __ mov(edi, FieldOperand(edi, GlobalObject::kNativeContextOffset)); - const int offset = - Context::SlotOffset(Context::STRICT_ARGUMENTS_BOILERPLATE_INDEX); + const int offset = Context::SlotOffset(Context::STRICT_ARGUMENTS_MAP_INDEX); __ mov(edi, Operand(edi, offset)); - // Copy the JS object part. - for (int i = 0; i < JSObject::kHeaderSize; i += kPointerSize) { - __ mov(ebx, FieldOperand(edi, i)); - __ mov(FieldOperand(eax, i), ebx); - } + __ mov(FieldOperand(eax, JSObject::kMapOffset), edi); + __ mov(FieldOperand(eax, JSObject::kPropertiesOffset), + masm->isolate()->factory()->empty_fixed_array()); + __ mov(FieldOperand(eax, JSObject::kElementsOffset), + masm->isolate()->factory()->empty_fixed_array()); // Get the length (smi tagged) and set that as an in-object property too. STATIC_ASSERT(Heap::kArgumentsLengthIndex == 0); __ mov(ecx, Operand(esp, 1 * kPointerSize)); + __ AssertSmi(ecx); __ mov(FieldOperand(eax, JSObject::kHeaderSize + Heap::kArgumentsLengthIndex * kPointerSize), ecx); @@ -1413,7 +1310,7 @@ void ArgumentsAccessStub::GenerateNewStrict(MacroAssembler* masm) { // Do the runtime call to allocate the arguments object. __ bind(&runtime); - __ TailCallRuntime(Runtime::kHiddenNewStrictArgumentsFast, 3, 1); + __ TailCallRuntime(Runtime::kNewStrictArguments, 3, 1); } @@ -1422,7 +1319,7 @@ void RegExpExecStub::Generate(MacroAssembler* masm) { // time or if regexp entry in generated code is turned off runtime switch or // at compilation. #ifdef V8_INTERPRETED_REGEXP - __ TailCallRuntime(Runtime::kHiddenRegExpExec, 4, 1); + __ TailCallRuntime(Runtime::kRegExpExecRT, 4, 1); #else // V8_INTERPRETED_REGEXP // Stack frame on entry. @@ -1561,8 +1458,8 @@ void RegExpExecStub::Generate(MacroAssembler* masm) { // (5b) Is subject external? If yes, go to (8). 
__ test_b(ebx, kStringRepresentationMask); // The underlying external string is never a short external string. - STATIC_CHECK(ExternalString::kMaxShortLength < ConsString::kMinLength); - STATIC_CHECK(ExternalString::kMaxShortLength < SlicedString::kMinLength); + STATIC_ASSERT(ExternalString::kMaxShortLength < ConsString::kMinLength); + STATIC_ASSERT(ExternalString::kMaxShortLength < SlicedString::kMinLength); __ j(not_zero, &external_string); // Go to (8). // eax: sequential subject string (or look-alike, external string) @@ -1805,7 +1702,7 @@ void RegExpExecStub::Generate(MacroAssembler* masm) { // Do the runtime call to execute the regexp. __ bind(&runtime); - __ TailCallRuntime(Runtime::kHiddenRegExpExec, 4, 1); + __ TailCallRuntime(Runtime::kRegExpExecRT, 4, 1); // Deferred code for string handling. // (7) Not a long external string? If yes, go to (10). @@ -1866,8 +1763,8 @@ void RegExpExecStub::Generate(MacroAssembler* masm) { static int NegativeComparisonResult(Condition cc) { - ASSERT(cc != equal); - ASSERT((cc == less) || (cc == less_equal) + DCHECK(cc != equal); + DCHECK((cc == less) || (cc == less_equal) || (cc == greater) || (cc == greater_equal)); return (cc == greater || cc == greater_equal) ? LESS : GREATER; } @@ -1977,7 +1874,7 @@ void ICCompareStub::GenerateGeneric(MacroAssembler* masm) { // If either is a Smi (we know that not both are), then they can only // be equal if the other is a HeapNumber. If so, use the slow case. STATIC_ASSERT(kSmiTag == 0); - ASSERT_EQ(0, Smi::FromInt(0)); + DCHECK_EQ(0, Smi::FromInt(0)); __ mov(ecx, Immediate(kSmiTagMask)); __ and_(ecx, eax); __ test(ecx, edx); @@ -2041,53 +1938,23 @@ void ICCompareStub::GenerateGeneric(MacroAssembler* masm) { Label non_number_comparison; Label unordered; __ bind(&generic_heap_number_comparison); - if (CpuFeatures::IsSupported(SSE2)) { - CpuFeatureScope use_sse2(masm, SSE2); - CpuFeatureScope use_cmov(masm, CMOV); - - FloatingPointHelper::LoadSSE2Operands(masm, &non_number_comparison); - __ ucomisd(xmm0, xmm1); - - // Don't base result on EFLAGS when a NaN is involved. - __ j(parity_even, &unordered, Label::kNear); - // Return a result of -1, 0, or 1, based on EFLAGS. - __ mov(eax, 0); // equal - __ mov(ecx, Immediate(Smi::FromInt(1))); - __ cmov(above, eax, ecx); - __ mov(ecx, Immediate(Smi::FromInt(-1))); - __ cmov(below, eax, ecx); - __ ret(0); - } else { - FloatingPointHelper::CheckFloatOperands( - masm, &non_number_comparison, ebx); - FloatingPointHelper::LoadFloatOperand(masm, eax); - FloatingPointHelper::LoadFloatOperand(masm, edx); - __ FCmp(); - - // Don't base result on EFLAGS when a NaN is involved. - __ j(parity_even, &unordered, Label::kNear); - - Label below_label, above_label; - // Return a result of -1, 0, or 1, based on EFLAGS. - __ j(below, &below_label, Label::kNear); - __ j(above, &above_label, Label::kNear); - __ Move(eax, Immediate(0)); - __ ret(0); - - __ bind(&below_label); - __ mov(eax, Immediate(Smi::FromInt(-1))); - __ ret(0); + FloatingPointHelper::LoadSSE2Operands(masm, &non_number_comparison); + __ ucomisd(xmm0, xmm1); + // Don't base result on EFLAGS when a NaN is involved. + __ j(parity_even, &unordered, Label::kNear); - __ bind(&above_label); - __ mov(eax, Immediate(Smi::FromInt(1))); - __ ret(0); - } + __ mov(eax, 0); // equal + __ mov(ecx, Immediate(Smi::FromInt(1))); + __ cmov(above, eax, ecx); + __ mov(ecx, Immediate(Smi::FromInt(-1))); + __ cmov(below, eax, ecx); + __ ret(0); // If one of the numbers was NaN, then the result is always false. // The cc is never not-equal. 
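With the x87 fallback deleted, the SSE2 sequence above is the only comparison path; before the unordered label is bound below, here is a hedged C++ rendering of what it computes (NaN operands set the parity flag under ucomisd and divert, otherwise cmov builds -1/0/1 without branching):

    #include <cmath>

    static int CompareDoubles(double left, double right, bool* unordered) {
      *unordered = std::isnan(left) || std::isnan(right);  // parity_even path
      if (*unordered) return 0;      // caller substitutes the NaN answer
      if (left > right) return 1;    // cmov(above, eax, Smi(1))
      if (left < right) return -1;   // cmov(below, eax, Smi(-1))
      return 0;                      // mov(eax, 0): equal
    }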
__ bind(&unordered); - ASSERT(cc != not_equal); + DCHECK(cc != not_equal); if (cc == less || cc == less_equal) { __ mov(eax, Immediate(Smi::FromInt(1))); } else { @@ -2360,11 +2227,13 @@ static void EmitWrapCase(MacroAssembler* masm, int argc, Label* cont) { } -void CallFunctionStub::Generate(MacroAssembler* masm) { +static void CallFunctionNoFeedback(MacroAssembler* masm, + int argc, bool needs_checks, + bool call_as_method) { // edi : the function to call Label slow, non_function, wrap, cont; - if (NeedsChecks()) { + if (needs_checks) { // Check that the function really is a JavaScript function. __ JumpIfSmi(edi, &non_function); @@ -2374,17 +2243,17 @@ void CallFunctionStub::Generate(MacroAssembler* masm) { } // Fast-case: Just invoke the function. - ParameterCount actual(argc_); + ParameterCount actual(argc); - if (CallAsMethod()) { - if (NeedsChecks()) { + if (call_as_method) { + if (needs_checks) { EmitContinueIfStrictOrNative(masm, &cont); } // Load the receiver from the stack. - __ mov(eax, Operand(esp, (argc_ + 1) * kPointerSize)); + __ mov(eax, Operand(esp, (argc + 1) * kPointerSize)); - if (NeedsChecks()) { + if (needs_checks) { __ JumpIfSmi(eax, &wrap); __ CmpObjectType(eax, FIRST_SPEC_OBJECT_TYPE, ecx); @@ -2398,20 +2267,25 @@ void CallFunctionStub::Generate(MacroAssembler* masm) { __ InvokeFunction(edi, actual, JUMP_FUNCTION, NullCallWrapper()); - if (NeedsChecks()) { + if (needs_checks) { // Slow-case: Non-function called. __ bind(&slow); // (non_function is bound in EmitSlowCase) - EmitSlowCase(isolate(), masm, argc_, &non_function); + EmitSlowCase(masm->isolate(), masm, argc, &non_function); } - if (CallAsMethod()) { + if (call_as_method) { __ bind(&wrap); - EmitWrapCase(masm, argc_, &cont); + EmitWrapCase(masm, argc, &cont); } } +void CallFunctionStub::Generate(MacroAssembler* masm) { + CallFunctionNoFeedback(masm, argc_, NeedsChecks(), CallAsMethod()); +} + + void CallConstructStub::Generate(MacroAssembler* masm) { // eax : number of arguments // ebx : feedback vector @@ -2488,6 +2362,47 @@ static void EmitLoadTypeFeedbackVector(MacroAssembler* masm, Register vector) { } +void CallIC_ArrayStub::Generate(MacroAssembler* masm) { + // edi - function + // edx - slot id + Label miss; + int argc = state_.arg_count(); + ParameterCount actual(argc); + + EmitLoadTypeFeedbackVector(masm, ebx); + + __ LoadGlobalFunction(Context::ARRAY_FUNCTION_INDEX, ecx); + __ cmp(edi, ecx); + __ j(not_equal, &miss); + + __ mov(eax, arg_count()); + __ mov(ecx, FieldOperand(ebx, edx, times_half_pointer_size, + FixedArray::kHeaderSize)); + + // Verify that ecx contains an AllocationSite + Factory* factory = masm->isolate()->factory(); + __ cmp(FieldOperand(ecx, HeapObject::kMapOffset), + factory->allocation_site_map()); + __ j(not_equal, &miss); + + __ mov(ebx, ecx); + ArrayConstructorStub stub(masm->isolate(), arg_count()); + __ TailCallStub(&stub); + + __ bind(&miss); + GenerateMiss(masm, IC::kCallIC_Customization_Miss); + + // The slow case, we need this no matter what to complete a call after a miss. + CallFunctionNoFeedback(masm, + arg_count(), + true, + CallAsMethod()); + + // Unreachable. + __ int3(); +} + + void CallICStub::Generate(MacroAssembler* masm) { // edi - function // edx - slot id @@ -2541,7 +2456,11 @@ void CallICStub::Generate(MacroAssembler* masm) { __ j(equal, &miss); if (!FLAG_trace_ic) { - // We are going megamorphic, and we don't want to visit the runtime. + // We are going megamorphic. If the feedback is a JSFunction, it is fine + // to handle it here. 
More complex cases are dealt with in the runtime. + __ AssertNotSmi(ecx); + __ CmpObjectType(ecx, JS_FUNCTION_TYPE, ecx); + __ j(not_equal, &miss); __ mov(FieldOperand(ebx, edx, times_half_pointer_size, FixedArray::kHeaderSize), Immediate(TypeFeedbackInfo::MegamorphicSentinel(isolate))); @@ -2550,7 +2469,7 @@ void CallICStub::Generate(MacroAssembler* masm) { // We are here because tracing is on or we are going monomorphic. __ bind(&miss); - GenerateMiss(masm); + GenerateMiss(masm, IC::kCallIC_Miss); // the slow case __ bind(&slow_start); @@ -2568,7 +2487,7 @@ void CallICStub::Generate(MacroAssembler* masm) { } -void CallICStub::GenerateMiss(MacroAssembler* masm) { +void CallICStub::GenerateMiss(MacroAssembler* masm, IC::UtilityId id) { // Get the receiver of the function from the stack; 1 ~ return address. __ mov(ecx, Operand(esp, (state_.arg_count() + 1) * kPointerSize)); @@ -2582,7 +2501,7 @@ void CallICStub::GenerateMiss(MacroAssembler* masm) { __ push(edx); // Call the entry. - ExternalReference miss = ExternalReference(IC_Utility(IC::kCallIC_Miss), + ExternalReference miss = ExternalReference(IC_Utility(id), masm->isolate()); __ CallExternalReference(miss, 4); @@ -2604,28 +2523,20 @@ void CodeStub::GenerateStubsAheadOfTime(Isolate* isolate) { // It is important that the store buffer overflow stubs are generated first. ArrayConstructorStubBase::GenerateStubsAheadOfTime(isolate); CreateAllocationSiteStub::GenerateAheadOfTime(isolate); - if (Serializer::enabled(isolate)) { - PlatformFeatureScope sse2(isolate, SSE2); - BinaryOpICStub::GenerateAheadOfTime(isolate); - BinaryOpICWithAllocationSiteStub::GenerateAheadOfTime(isolate); - } else { - BinaryOpICStub::GenerateAheadOfTime(isolate); - BinaryOpICWithAllocationSiteStub::GenerateAheadOfTime(isolate); - } + BinaryOpICStub::GenerateAheadOfTime(isolate); + BinaryOpICWithAllocationSiteStub::GenerateAheadOfTime(isolate); } void CodeStub::GenerateFPStubs(Isolate* isolate) { - if (CpuFeatures::IsSupported(SSE2)) { - CEntryStub save_doubles(isolate, 1, kSaveFPRegs); - // Stubs might already be in the snapshot, detect that and don't regenerate, - // which would lead to code stub initialization state being messed up. - Code* save_doubles_code; - if (!save_doubles.FindCodeInCache(&save_doubles_code)) { - save_doubles_code = *(save_doubles.GetCode()); - } - isolate->set_fp_stubs_generated(true); + CEntryStub save_doubles(isolate, 1, kSaveFPRegs); + // Stubs might already be in the snapshot, detect that and don't regenerate, + // which would lead to code stub initialization state being messed up. + Code* save_doubles_code; + if (!save_doubles.FindCodeInCache(&save_doubles_code)) { + save_doubles_code = *(save_doubles.GetCode()); } + isolate->set_fp_stubs_generated(true); } @@ -2848,7 +2759,7 @@ void JSEntryStub::GenerateBody(MacroAssembler* masm, bool is_construct) { // void InstanceofStub::Generate(MacroAssembler* masm) { // Call site inlining and patching implies arguments in registers. - ASSERT(HasArgsInRegisters() || !HasCallSiteInlineCheck()); + DCHECK(HasArgsInRegisters() || !HasCallSiteInlineCheck()); // Fixed register usage throughout the stub. Register object = eax; // Object (lhs). 
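The file-wide ASSERT-to-DCHECK rename shows up in every remaining hunk. For readers outside V8, DCHECK is the debug-only assertion; a minimal sketch of the idea (V8's real macros at this revision live in src/base/logging.h and add message formatting plus typed _EQ/_NE variants):

    #include <cstdio>
    #include <cstdlib>

    #ifdef DEBUG
    #define DCHECK(condition)                                           \
      do {                                                              \
        if (!(condition)) {                                             \
          std::fprintf(stderr, "Debug check failed: %s\n", #condition); \
          std::abort();                                                 \
        }                                                               \
      } while (false)
    #else
    #define DCHECK(condition) ((void)0)  // compiled out of release builds
    #endif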
@@ -2865,8 +2776,8 @@ void InstanceofStub::Generate(MacroAssembler* masm) { static const int8_t kCmpEdiOperandByte2 = BitCast<int8_t, uint8_t>(0x3d); static const int8_t kMovEaxImmediateByte = BitCast<int8_t, uint8_t>(0xb8); - ASSERT_EQ(object.code(), InstanceofStub::left().code()); - ASSERT_EQ(function.code(), InstanceofStub::right().code()); + DCHECK_EQ(object.code(), InstanceofStub::left().code()); + DCHECK_EQ(function.code(), InstanceofStub::right().code()); // Get the object and function - they are always both needed. Label slow, not_js_object; @@ -2881,7 +2792,7 @@ void InstanceofStub::Generate(MacroAssembler* masm) { // If there is a call site cache don't look in the global cache, but do the // real lookup and update the call site cache. - if (!HasCallSiteInlineCheck()) { + if (!HasCallSiteInlineCheck() && !ReturnTrueFalseObject()) { // Look up the function and the map in the instanceof cache. Label miss; __ CompareRoot(function, scratch, Heap::kInstanceofCacheFunctionRootIndex); @@ -2908,7 +2819,7 @@ void InstanceofStub::Generate(MacroAssembler* masm) { } else { // The constants for the code patching are based on no push instructions // at the call site. - ASSERT(HasArgsInRegisters()); + DCHECK(HasArgsInRegisters()); // Get return address and delta to inlined map check. __ mov(scratch, Operand(esp, 0 * kPointerSize)); __ sub(scratch, Operand(esp, 1 * kPointerSize)); @@ -2940,6 +2851,9 @@ void InstanceofStub::Generate(MacroAssembler* masm) { if (!HasCallSiteInlineCheck()) { __ mov(eax, Immediate(0)); __ StoreRoot(eax, scratch, Heap::kInstanceofCacheAnswerRootIndex); + if (ReturnTrueFalseObject()) { + __ mov(eax, factory->true_value()); + } } else { // Get return address and delta to inlined map check. __ mov(eax, factory->true_value()); @@ -2960,6 +2874,9 @@ void InstanceofStub::Generate(MacroAssembler* masm) { if (!HasCallSiteInlineCheck()) { __ mov(eax, Immediate(Smi::FromInt(1))); __ StoreRoot(eax, scratch, Heap::kInstanceofCacheAnswerRootIndex); + if (ReturnTrueFalseObject()) { + __ mov(eax, factory->false_value()); + } } else { // Get return address and delta to inlined map check. __ mov(eax, factory->false_value()); @@ -2987,20 +2904,32 @@ void InstanceofStub::Generate(MacroAssembler* masm) { // Null is not instance of anything. __ cmp(object, factory->null_value()); __ j(not_equal, &object_not_null, Label::kNear); - __ Move(eax, Immediate(Smi::FromInt(1))); + if (ReturnTrueFalseObject()) { + __ mov(eax, factory->false_value()); + } else { + __ Move(eax, Immediate(Smi::FromInt(1))); + } __ ret((HasArgsInRegisters() ? 0 : 2) * kPointerSize); __ bind(&object_not_null); // Smi values is not instance of anything. __ JumpIfNotSmi(object, &object_not_null_or_smi, Label::kNear); - __ Move(eax, Immediate(Smi::FromInt(1))); + if (ReturnTrueFalseObject()) { + __ mov(eax, factory->false_value()); + } else { + __ Move(eax, Immediate(Smi::FromInt(1))); + } __ ret((HasArgsInRegisters() ? 0 : 2) * kPointerSize); __ bind(&object_not_null_or_smi); // String values is not instance of anything. Condition is_string = masm->IsObjectStringType(object, scratch, scratch); __ j(NegateCondition(is_string), &slow, Label::kNear); - __ Move(eax, Immediate(Smi::FromInt(1))); + if (ReturnTrueFalseObject()) { + __ mov(eax, factory->false_value()); + } else { + __ Move(eax, Immediate(Smi::FromInt(1))); + } __ ret((HasArgsInRegisters() ? 0 : 2) * kPointerSize); // Slow-case: Go through the JavaScript implementation. 
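Summarizing the ReturnTrueFalseObject() branches threaded through InstanceofStub above: the historical protocol answers through a smi, where 0 means "is an instance" and 1 means "is not", while the new mode materializes the JS boolean objects directly. A small sketch of the two encodings (names local to this example):

    enum class Instanceof { kYes, kNo };

    static int SmiResult(Instanceof r) {       // legacy protocol
      return r == Instanceof::kYes ? 0 : 1;    // Smi::FromInt(0) / FromInt(1)
    }

    static bool BooleanResult(Instanceof r) {  // ReturnTrueFalseObject() mode
      return r == Instanceof::kYes;            // true_value() / false_value()
    }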
@@ -3095,9 +3024,9 @@ void StringCharCodeAtGenerator::GenerateSlow( if (index_flags_ == STRING_INDEX_IS_NUMBER) { __ CallRuntime(Runtime::kNumberToIntegerMapMinusZero, 1); } else { - ASSERT(index_flags_ == STRING_INDEX_IS_ARRAY_INDEX); + DCHECK(index_flags_ == STRING_INDEX_IS_ARRAY_INDEX); // NumberToSmi discards numbers that are not exact integers. - __ CallRuntime(Runtime::kHiddenNumberToSmi, 1); + __ CallRuntime(Runtime::kNumberToSmi, 1); } if (!index_.is(eax)) { // Save the conversion result before the pop instructions below @@ -3123,7 +3052,7 @@ void StringCharCodeAtGenerator::GenerateSlow( __ push(object_); __ SmiTag(index_); __ push(index_); - __ CallRuntime(Runtime::kHiddenStringCharCodeAt, 2); + __ CallRuntime(Runtime::kStringCharCodeAtRT, 2); if (!result_.is(eax)) { __ mov(result_, eax); } @@ -3141,7 +3070,7 @@ void StringCharFromCodeGenerator::GenerateFast(MacroAssembler* masm) { // Fast case of Heap::LookupSingleCharacterStringFromCode. STATIC_ASSERT(kSmiTag == 0); STATIC_ASSERT(kSmiShiftSize == 0); - ASSERT(IsPowerOf2(String::kMaxOneByteCharCode + 1)); + DCHECK(IsPowerOf2(String::kMaxOneByteCharCode + 1)); __ test(code_, Immediate(kSmiTagMask | ((~String::kMaxOneByteCharCode) << kSmiTagSize))); @@ -3181,21 +3110,15 @@ void StringCharFromCodeGenerator::GenerateSlow( } -void StringHelper::GenerateCopyCharactersREP(MacroAssembler* masm, - Register dest, - Register src, - Register count, - Register scratch, - bool ascii) { - // Copy characters using rep movs of doublewords. - // The destination is aligned on a 4 byte boundary because we are - // copying to the beginning of a newly allocated string. - ASSERT(dest.is(edi)); // rep movs destination - ASSERT(src.is(esi)); // rep movs source - ASSERT(count.is(ecx)); // rep movs count - ASSERT(!scratch.is(dest)); - ASSERT(!scratch.is(src)); - ASSERT(!scratch.is(count)); +void StringHelper::GenerateCopyCharacters(MacroAssembler* masm, + Register dest, + Register src, + Register count, + Register scratch, + String::Encoding encoding) { + DCHECK(!scratch.is(dest)); + DCHECK(!scratch.is(src)); + DCHECK(!scratch.is(count)); // Nothing to do for zero characters. Label done; @@ -3203,38 +3126,17 @@ void StringHelper::GenerateCopyCharactersREP(MacroAssembler* masm, __ j(zero, &done); // Make count the number of bytes to copy. - if (!ascii) { + if (encoding == String::TWO_BYTE_ENCODING) { __ shl(count, 1); } - // Don't enter the rep movs if there are less than 4 bytes to copy. - Label last_bytes; - __ test(count, Immediate(~3)); - __ j(zero, &last_bytes, Label::kNear); - - // Copy from edi to esi using rep movs instruction. - __ mov(scratch, count); - __ sar(count, 2); // Number of doublewords to copy. - __ cld(); - __ rep_movs(); - - // Find number of bytes left. - __ mov(count, scratch); - __ and_(count, 3); - - // Check if there are more bytes to copy. - __ bind(&last_bytes); - __ test(count, count); - __ j(zero, &done); - - // Copy remaining characters. 
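The rep-movs fast path deleted above pinned destination, source, and count to edi, esi, and ecx (the instruction's implicit operands), which is why SubString had to shuffle esi, V8's context register on ia32, around every call. What survives below is a plain byte loop over arbitrary registers; in C terms (count is already in bytes after the optional shl for two-byte strings):

    #include <cstddef>
    #include <cstdint>

    // Rough C equivalent of the code GenerateCopyCharacters now emits.
    static void CopyBytes(uint8_t* dst, const uint8_t* src, size_t count) {
      while (count != 0) {  // test(count, count); j(zero, &done)
        *dst++ = *src++;    // mov_b pair plus the inc/inc below
        count--;            // dec(count); j(not_zero, &loop)
      }
    }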
Label loop; __ bind(&loop); __ mov_b(scratch, Operand(src, 0)); __ mov_b(Operand(dest, 0), scratch); - __ add(src, Immediate(1)); - __ add(dest, Immediate(1)); - __ sub(count, Immediate(1)); + __ inc(src); + __ inc(dest); + __ dec(count); __ j(not_zero, &loop); __ bind(&done); @@ -3246,7 +3148,7 @@ void StringHelper::GenerateHashInit(MacroAssembler* masm, Register character, Register scratch) { // hash = (seed + character) + ((seed + character) << 10); - if (Serializer::enabled(masm->isolate())) { + if (masm->serializer_enabled()) { __ LoadRoot(scratch, Heap::kHashSeedRootIndex); __ SmiUntag(scratch); __ add(scratch, character); @@ -3441,7 +3343,7 @@ void SubStringStub::Generate(MacroAssembler* masm) { // Handle external string. // Rule out short external strings. - STATIC_CHECK(kShortExternalStringTag != 0); + STATIC_ASSERT(kShortExternalStringTag != 0); __ test_b(ebx, kShortExternalStringMask); __ j(not_zero, &runtime); __ mov(edi, FieldOperand(edi, ExternalString::kResourceDataOffset)); @@ -3463,23 +3365,21 @@ void SubStringStub::Generate(MacroAssembler* masm) { // eax: result string // ecx: result string length - __ mov(edx, esi); // esi used by following code. // Locate first character of result. __ mov(edi, eax); __ add(edi, Immediate(SeqOneByteString::kHeaderSize - kHeapObjectTag)); // Load string argument and locate character of sub string start. - __ pop(esi); + __ pop(edx); __ pop(ebx); __ SmiUntag(ebx); - __ lea(esi, FieldOperand(esi, ebx, times_1, SeqOneByteString::kHeaderSize)); + __ lea(edx, FieldOperand(edx, ebx, times_1, SeqOneByteString::kHeaderSize)); // eax: result string // ecx: result length - // edx: original value of esi // edi: first character of result - // esi: character of sub string start - StringHelper::GenerateCopyCharactersREP(masm, edi, esi, ecx, ebx, true); - __ mov(esi, edx); // Restore esi. + // edx: character of sub string start + StringHelper::GenerateCopyCharacters( + masm, edi, edx, ecx, ebx, String::ONE_BYTE_ENCODING); __ IncrementCounter(counters->sub_string_native(), 1); __ ret(3 * kPointerSize); @@ -3489,27 +3389,25 @@ void SubStringStub::Generate(MacroAssembler* masm) { // eax: result string // ecx: result string length - __ mov(edx, esi); // esi used by following code. // Locate first character of result. __ mov(edi, eax); __ add(edi, Immediate(SeqTwoByteString::kHeaderSize - kHeapObjectTag)); // Load string argument and locate character of sub string start. - __ pop(esi); + __ pop(edx); __ pop(ebx); // As from is a smi it is 2 times the value which matches the size of a two // byte character. STATIC_ASSERT(kSmiTag == 0); STATIC_ASSERT(kSmiTagSize + kSmiShiftSize == 1); - __ lea(esi, FieldOperand(esi, ebx, times_1, SeqTwoByteString::kHeaderSize)); + __ lea(edx, FieldOperand(edx, ebx, times_1, SeqTwoByteString::kHeaderSize)); // eax: result string // ecx: result length - // edx: original value of esi // edi: first character of result - // esi: character of sub string start - StringHelper::GenerateCopyCharactersREP(masm, edi, esi, ecx, ebx, false); - __ mov(esi, edx); // Restore esi. + // edx: character of sub string start + StringHelper::GenerateCopyCharacters( + masm, edi, edx, ecx, ebx, String::TWO_BYTE_ENCODING); __ IncrementCounter(counters->sub_string_native(), 1); __ ret(3 * kPointerSize); @@ -3519,7 +3417,7 @@ void SubStringStub::Generate(MacroAssembler* masm) { // Just jump to runtime to create the sub string. 
__ bind(&runtime); - __ TailCallRuntime(Runtime::kHiddenSubString, 3, 1); + __ TailCallRuntime(Runtime::kSubString, 3, 1); __ bind(&single_char); // eax: string @@ -3701,7 +3599,7 @@ void StringCompareStub::Generate(MacroAssembler* masm) { // Call the runtime; it returns -1 (less), 0 (equal), or 1 (greater) // tagged as a small integer. __ bind(&runtime); - __ TailCallRuntime(Runtime::kHiddenStringCompare, 2, 1); + __ TailCallRuntime(Runtime::kStringCompare, 2, 1); } @@ -3734,7 +3632,7 @@ void BinaryOpICWithAllocationSiteStub::Generate(MacroAssembler* masm) { void ICCompareStub::GenerateSmis(MacroAssembler* masm) { - ASSERT(state_ == CompareIC::SMI); + DCHECK(state_ == CompareIC::SMI); Label miss; __ mov(ecx, edx); __ or_(ecx, eax); @@ -3760,7 +3658,7 @@ void ICCompareStub::GenerateSmis(MacroAssembler* masm) { void ICCompareStub::GenerateNumbers(MacroAssembler* masm) { - ASSERT(state_ == CompareIC::NUMBER); + DCHECK(state_ == CompareIC::NUMBER); Label generic_stub; Label unordered, maybe_undefined1, maybe_undefined2; @@ -3773,64 +3671,46 @@ void ICCompareStub::GenerateNumbers(MacroAssembler* masm) { __ JumpIfNotSmi(eax, &miss); } - // Inlining the double comparison and falling back to the general compare - // stub if NaN is involved or SSE2 or CMOV is unsupported. - if (CpuFeatures::IsSupported(SSE2) && CpuFeatures::IsSupported(CMOV)) { - CpuFeatureScope scope1(masm, SSE2); - CpuFeatureScope scope2(masm, CMOV); - - // Load left and right operand. - Label done, left, left_smi, right_smi; - __ JumpIfSmi(eax, &right_smi, Label::kNear); - __ cmp(FieldOperand(eax, HeapObject::kMapOffset), - isolate()->factory()->heap_number_map()); - __ j(not_equal, &maybe_undefined1, Label::kNear); - __ movsd(xmm1, FieldOperand(eax, HeapNumber::kValueOffset)); - __ jmp(&left, Label::kNear); - __ bind(&right_smi); - __ mov(ecx, eax); // Can't clobber eax because we can still jump away. - __ SmiUntag(ecx); - __ Cvtsi2sd(xmm1, ecx); - - __ bind(&left); - __ JumpIfSmi(edx, &left_smi, Label::kNear); - __ cmp(FieldOperand(edx, HeapObject::kMapOffset), - isolate()->factory()->heap_number_map()); - __ j(not_equal, &maybe_undefined2, Label::kNear); - __ movsd(xmm0, FieldOperand(edx, HeapNumber::kValueOffset)); - __ jmp(&done); - __ bind(&left_smi); - __ mov(ecx, edx); // Can't clobber edx because we can still jump away. - __ SmiUntag(ecx); - __ Cvtsi2sd(xmm0, ecx); + // Load left and right operand. + Label done, left, left_smi, right_smi; + __ JumpIfSmi(eax, &right_smi, Label::kNear); + __ cmp(FieldOperand(eax, HeapObject::kMapOffset), + isolate()->factory()->heap_number_map()); + __ j(not_equal, &maybe_undefined1, Label::kNear); + __ movsd(xmm1, FieldOperand(eax, HeapNumber::kValueOffset)); + __ jmp(&left, Label::kNear); + __ bind(&right_smi); + __ mov(ecx, eax); // Can't clobber eax because we can still jump away. + __ SmiUntag(ecx); + __ Cvtsi2sd(xmm1, ecx); - __ bind(&done); - // Compare operands. - __ ucomisd(xmm0, xmm1); - - // Don't base result on EFLAGS when a NaN is involved. - __ j(parity_even, &unordered, Label::kNear); - - // Return a result of -1, 0, or 1, based on EFLAGS. - // Performing mov, because xor would destroy the flag register. 
- __ mov(eax, 0); // equal - __ mov(ecx, Immediate(Smi::FromInt(1))); - __ cmov(above, eax, ecx); - __ mov(ecx, Immediate(Smi::FromInt(-1))); - __ cmov(below, eax, ecx); - __ ret(0); - } else { - __ mov(ecx, edx); - __ and_(ecx, eax); - __ JumpIfSmi(ecx, &generic_stub, Label::kNear); + __ bind(&left); + __ JumpIfSmi(edx, &left_smi, Label::kNear); + __ cmp(FieldOperand(edx, HeapObject::kMapOffset), + isolate()->factory()->heap_number_map()); + __ j(not_equal, &maybe_undefined2, Label::kNear); + __ movsd(xmm0, FieldOperand(edx, HeapNumber::kValueOffset)); + __ jmp(&done); + __ bind(&left_smi); + __ mov(ecx, edx); // Can't clobber edx because we can still jump away. + __ SmiUntag(ecx); + __ Cvtsi2sd(xmm0, ecx); - __ cmp(FieldOperand(eax, HeapObject::kMapOffset), - isolate()->factory()->heap_number_map()); - __ j(not_equal, &maybe_undefined1, Label::kNear); - __ cmp(FieldOperand(edx, HeapObject::kMapOffset), - isolate()->factory()->heap_number_map()); - __ j(not_equal, &maybe_undefined2, Label::kNear); - } + __ bind(&done); + // Compare operands. + __ ucomisd(xmm0, xmm1); + + // Don't base result on EFLAGS when a NaN is involved. + __ j(parity_even, &unordered, Label::kNear); + + // Return a result of -1, 0, or 1, based on EFLAGS. + // Performing mov, because xor would destroy the flag register. + __ mov(eax, 0); // equal + __ mov(ecx, Immediate(Smi::FromInt(1))); + __ cmov(above, eax, ecx); + __ mov(ecx, Immediate(Smi::FromInt(-1))); + __ cmov(below, eax, ecx); + __ ret(0); __ bind(&unordered); __ bind(&generic_stub); @@ -3860,8 +3740,8 @@ void ICCompareStub::GenerateNumbers(MacroAssembler* masm) { void ICCompareStub::GenerateInternalizedStrings(MacroAssembler* masm) { - ASSERT(state_ == CompareIC::INTERNALIZED_STRING); - ASSERT(GetCondition() == equal); + DCHECK(state_ == CompareIC::INTERNALIZED_STRING); + DCHECK(GetCondition() == equal); // Registers containing left and right operands respectively. Register left = edx; @@ -3891,7 +3771,7 @@ void ICCompareStub::GenerateInternalizedStrings(MacroAssembler* masm) { __ cmp(left, right); // Make sure eax is non-zero. At this point input operands are // guaranteed to be non-zero. - ASSERT(right.is(eax)); + DCHECK(right.is(eax)); __ j(not_equal, &done, Label::kNear); STATIC_ASSERT(EQUAL == 0); STATIC_ASSERT(kSmiTag == 0); @@ -3905,8 +3785,8 @@ void ICCompareStub::GenerateInternalizedStrings(MacroAssembler* masm) { void ICCompareStub::GenerateUniqueNames(MacroAssembler* masm) { - ASSERT(state_ == CompareIC::UNIQUE_NAME); - ASSERT(GetCondition() == equal); + DCHECK(state_ == CompareIC::UNIQUE_NAME); + DCHECK(GetCondition() == equal); // Registers containing left and right operands respectively. Register left = edx; @@ -3936,7 +3816,7 @@ void ICCompareStub::GenerateUniqueNames(MacroAssembler* masm) { __ cmp(left, right); // Make sure eax is non-zero. At this point input operands are // guaranteed to be non-zero. - ASSERT(right.is(eax)); + DCHECK(right.is(eax)); __ j(not_equal, &done, Label::kNear); STATIC_ASSERT(EQUAL == 0); STATIC_ASSERT(kSmiTag == 0); @@ -3950,7 +3830,7 @@ void ICCompareStub::GenerateUniqueNames(MacroAssembler* masm) { void ICCompareStub::GenerateStrings(MacroAssembler* masm) { - ASSERT(state_ == CompareIC::STRING); + DCHECK(state_ == CompareIC::STRING); Label miss; bool equality = Token::IsEqualityOp(op_); @@ -4004,7 +3884,7 @@ void ICCompareStub::GenerateStrings(MacroAssembler* masm) { __ j(not_zero, &do_compare, Label::kNear); // Make sure eax is non-zero. At this point input operands are // guaranteed to be non-zero. 
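// Note on the two comment lines above: the compare IC's contract encodes
// EQUAL as eax == 0, and on this fall-through path eax still holds the right
// operand -- a tagged heap pointer, hence never zero -- so the register
// already spells "not equal" with no extra instruction.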
- ASSERT(right.is(eax)); + DCHECK(right.is(eax)); __ ret(0); __ bind(&do_compare); } @@ -4031,7 +3911,7 @@ void ICCompareStub::GenerateStrings(MacroAssembler* masm) { if (equality) { __ TailCallRuntime(Runtime::kStringEquals, 2, 1); } else { - __ TailCallRuntime(Runtime::kHiddenStringCompare, 2, 1); + __ TailCallRuntime(Runtime::kStringCompare, 2, 1); } __ bind(&miss); @@ -4040,7 +3920,7 @@ void ICCompareStub::GenerateStrings(MacroAssembler* masm) { void ICCompareStub::GenerateObjects(MacroAssembler* masm) { - ASSERT(state_ == CompareIC::OBJECT); + DCHECK(state_ == CompareIC::OBJECT); Label miss; __ mov(ecx, edx); __ and_(ecx, eax); @@ -4051,7 +3931,7 @@ void ICCompareStub::GenerateObjects(MacroAssembler* masm) { __ CmpObjectType(edx, JS_OBJECT_TYPE, ecx); __ j(not_equal, &miss, Label::kNear); - ASSERT(GetCondition() == equal); + DCHECK(GetCondition() == equal); __ sub(eax, edx); __ ret(0); @@ -4115,7 +3995,7 @@ void NameDictionaryLookupStub::GenerateNegativeLookup(MacroAssembler* masm, Register properties, Handle<Name> name, Register r0) { - ASSERT(name->IsUniqueName()); + DCHECK(name->IsUniqueName()); // If names of slots in range from 1 to kProbes - 1 for the hash value are // not equal to the name and kProbes-th slot is not used (its name is the @@ -4133,11 +4013,11 @@ void NameDictionaryLookupStub::GenerateNegativeLookup(MacroAssembler* masm, NameDictionary::GetProbeOffset(i)))); // Scale the index by multiplying by the entry size. - ASSERT(NameDictionary::kEntrySize == 3); + DCHECK(NameDictionary::kEntrySize == 3); __ lea(index, Operand(index, index, times_2, 0)); // index *= 3. Register entity_name = r0; // Having undefined at this place means the name is not contained. - ASSERT_EQ(kSmiTagSize, 1); + DCHECK_EQ(kSmiTagSize, 1); __ mov(entity_name, Operand(properties, index, times_half_pointer_size, kElementsStartOffset - kHeapObjectTag)); __ cmp(entity_name, masm->isolate()->factory()->undefined_value()); @@ -4181,10 +4061,10 @@ void NameDictionaryLookupStub::GeneratePositiveLookup(MacroAssembler* masm, Register name, Register r0, Register r1) { - ASSERT(!elements.is(r0)); - ASSERT(!elements.is(r1)); - ASSERT(!name.is(r0)); - ASSERT(!name.is(r1)); + DCHECK(!elements.is(r0)); + DCHECK(!elements.is(r1)); + DCHECK(!name.is(r0)); + DCHECK(!name.is(r1)); __ AssertName(name); @@ -4205,7 +4085,7 @@ void NameDictionaryLookupStub::GeneratePositiveLookup(MacroAssembler* masm, __ and_(r0, r1); // Scale the index by multiplying by the entry size. - ASSERT(NameDictionary::kEntrySize == 3); + DCHECK(NameDictionary::kEntrySize == 3); __ lea(r0, Operand(r0, r0, times_2, 0)); // r0 = r0 * 3 // Check if the key is identical to the name. @@ -4268,11 +4148,11 @@ void NameDictionaryLookupStub::Generate(MacroAssembler* masm) { __ and_(scratch, Operand(esp, 0)); // Scale the index by multiplying by the entry size. - ASSERT(NameDictionary::kEntrySize == 3); + DCHECK(NameDictionary::kEntrySize == 3); __ lea(index_, Operand(scratch, scratch, times_2, 0)); // index *= 3. // Having undefined at this place means the name is not contained. 
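The masked add and the lea r, [r + r*2] scaling above form the probe step shared by all three lookup routines in this stub; a standalone sketch of the arithmetic (kEntrySize == 3, as the DCHECKs pin down):

    #include <cstdint>

    static uint32_t ProbeEntryIndex(uint32_t hash, uint32_t probe_offset,
                                    uint32_t capacity_mask) {
      uint32_t bucket = (hash + probe_offset) & capacity_mask;
      return bucket * 3;  // one entry = key, value, property details
    }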
- ASSERT_EQ(kSmiTagSize, 1); + DCHECK_EQ(kSmiTagSize, 1); __ mov(scratch, Operand(dictionary_, index_, times_pointer_size, @@ -4322,15 +4202,8 @@ void StoreBufferOverflowStub::GenerateFixedRegStubsAheadOfTime( Isolate* isolate) { StoreBufferOverflowStub stub(isolate, kDontSaveFPRegs); stub.GetCode(); - if (CpuFeatures::IsSafeForSnapshot(isolate, SSE2)) { - StoreBufferOverflowStub stub2(isolate, kSaveFPRegs); - stub2.GetCode(); - } -} - - -bool CodeStub::CanUseFPRegisters() { - return CpuFeatures::IsSupported(SSE2); + StoreBufferOverflowStub stub2(isolate, kSaveFPRegs); + stub2.GetCode(); } @@ -4606,15 +4479,14 @@ void StoreArrayLiteralElementStub::Generate(MacroAssembler* masm) { ecx, edi, xmm0, - &slow_elements_from_double, - false); + &slow_elements_from_double); __ pop(edx); __ ret(0); } void StubFailureTrampolineStub::Generate(MacroAssembler* masm) { - CEntryStub ces(isolate(), 1, fp_registers_ ? kSaveFPRegs : kDontSaveFPRegs); + CEntryStub ces(isolate(), 1, kSaveFPRegs); __ call(ces.GetCode(), RelocInfo::CODE_TARGET); int parameter_count_offset = StubFailureTrampolineFrame::kCallerStackParameterCountFrameOffset; @@ -4655,7 +4527,7 @@ void ProfileEntryHookStub::Generate(MacroAssembler* masm) { __ push(eax); // Call the entry hook. - ASSERT(isolate()->function_entry_hook() != NULL); + DCHECK(isolate()->function_entry_hook() != NULL); __ call(FUNCTION_ADDR(isolate()->function_entry_hook()), RelocInfo::RUNTIME_ENTRY); __ add(esp, Immediate(2 * kPointerSize)); @@ -4708,12 +4580,12 @@ static void CreateArrayDispatchOneArgument(MacroAssembler* masm, // esp[4] - last argument Label normal_sequence; if (mode == DONT_OVERRIDE) { - ASSERT(FAST_SMI_ELEMENTS == 0); - ASSERT(FAST_HOLEY_SMI_ELEMENTS == 1); - ASSERT(FAST_ELEMENTS == 2); - ASSERT(FAST_HOLEY_ELEMENTS == 3); - ASSERT(FAST_DOUBLE_ELEMENTS == 4); - ASSERT(FAST_HOLEY_DOUBLE_ELEMENTS == 5); + DCHECK(FAST_SMI_ELEMENTS == 0); + DCHECK(FAST_HOLEY_SMI_ELEMENTS == 1); + DCHECK(FAST_ELEMENTS == 2); + DCHECK(FAST_HOLEY_ELEMENTS == 3); + DCHECK(FAST_DOUBLE_ELEMENTS == 4); + DCHECK(FAST_HOLEY_DOUBLE_ELEMENTS == 5); // is the low bit set? If so, we are holey and that is good. __ test_b(edx, 1); @@ -4954,8 +4826,7 @@ void InternalArrayConstructorStub::Generate(MacroAssembler* masm) { // but the following masking takes care of that anyway. __ mov(ecx, FieldOperand(ecx, Map::kBitField2Offset)); // Retrieve elements_kind from bit field 2. 
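The replacement just below folds the explicit mask-and-shift pair into the generic DecodeField&lt;Map::ElementsKindBits&gt; helper; schematically (field parameters are placeholders, not V8's actual values):

    #include <cstdint>

    struct ElementsKindBits {                        // assumed descriptor shape
      static const uint32_t kShift = 3;              // placeholder
      static const uint32_t kMask = 0x1F << kShift;  // placeholder
    };

    template <typename BitField>
    static uint32_t DecodeField(uint32_t value) {
      return (value & BitField::kMask) >> BitField::kShift;
    }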
- __ and_(ecx, Map::kElementsKindMask); - __ shr(ecx, Map::kElementsKindShift); + __ DecodeField<Map::ElementsKindBits>(ecx); if (FLAG_debug_code) { Label done; diff --git a/deps/v8/src/ia32/code-stubs-ia32.h b/deps/v8/src/ia32/code-stubs-ia32.h index 1d55ec3c028..b72b6dd089b 100644 --- a/deps/v8/src/ia32/code-stubs-ia32.h +++ b/deps/v8/src/ia32/code-stubs-ia32.h @@ -5,8 +5,8 @@ #ifndef V8_IA32_CODE_STUBS_IA32_H_ #define V8_IA32_CODE_STUBS_IA32_H_ -#include "macro-assembler.h" -#include "ic-inl.h" +#include "src/ic-inl.h" +#include "src/macro-assembler.h" namespace v8 { namespace internal { @@ -20,10 +20,7 @@ void ArrayNativeCode(MacroAssembler* masm, class StoreBufferOverflowStub: public PlatformCodeStub { public: StoreBufferOverflowStub(Isolate* isolate, SaveFPRegsMode save_fp) - : PlatformCodeStub(isolate), save_doubles_(save_fp) { - ASSERT(CpuFeatures::IsSafeForSnapshot(isolate, SSE2) || - save_fp == kDontSaveFPRegs); - } + : PlatformCodeStub(isolate), save_doubles_(save_fp) { } void Generate(MacroAssembler* masm); @@ -33,8 +30,8 @@ class StoreBufferOverflowStub: public PlatformCodeStub { private: SaveFPRegsMode save_doubles_; - Major MajorKey() { return StoreBufferOverflow; } - int MinorKey() { return (save_doubles_ == kSaveFPRegs) ? 1 : 0; } + Major MajorKey() const { return StoreBufferOverflow; } + int MinorKey() const { return (save_doubles_ == kSaveFPRegs) ? 1 : 0; } }; @@ -43,12 +40,12 @@ class StringHelper : public AllStatic { // Generate code for copying characters using the rep movs instruction. // Copies ecx characters from esi to edi. Copying of overlapping regions is // not supported. - static void GenerateCopyCharactersREP(MacroAssembler* masm, - Register dest, // Must be edi. - Register src, // Must be esi. - Register count, // Must be ecx. - Register scratch, // Neither of above. - bool ascii); + static void GenerateCopyCharacters(MacroAssembler* masm, + Register dest, + Register src, + Register count, + Register scratch, + String::Encoding encoding); // Generate string hash. static void GenerateHashInit(MacroAssembler* masm, @@ -73,8 +70,8 @@ class SubStringStub: public PlatformCodeStub { explicit SubStringStub(Isolate* isolate) : PlatformCodeStub(isolate) {} private: - Major MajorKey() { return SubString; } - int MinorKey() { return 0; } + Major MajorKey() const { return SubString; } + int MinorKey() const { return 0; } void Generate(MacroAssembler* masm); }; @@ -101,8 +98,8 @@ class StringCompareStub: public PlatformCodeStub { Register scratch2); private: - virtual Major MajorKey() { return StringCompare; } - virtual int MinorKey() { return 0; } + virtual Major MajorKey() const { return StringCompare; } + virtual int MinorKey() const { return 0; } virtual void Generate(MacroAssembler* masm); static void GenerateAsciiCharsCompareLoop( @@ -159,9 +156,9 @@ class NameDictionaryLookupStub: public PlatformCodeStub { NameDictionary::kHeaderSize + NameDictionary::kElementsStartIndex * kPointerSize; - Major MajorKey() { return NameDictionaryLookup; } + Major MajorKey() const { return NameDictionaryLookup; } - int MinorKey() { + int MinorKey() const { return DictionaryBits::encode(dictionary_.code()) | ResultBits::encode(result_.code()) | IndexBits::encode(index_.code()) | @@ -197,8 +194,6 @@ class RecordWriteStub: public PlatformCodeStub { regs_(object, // An input reg. address, // An input reg. value) { // One scratch reg. 
- ASSERT(CpuFeatures::IsSafeForSnapshot(isolate, SSE2) || - fp_mode == kDontSaveFPRegs); } enum Mode { @@ -223,13 +218,13 @@ class RecordWriteStub: public PlatformCodeStub { return INCREMENTAL; } - ASSERT(first_instruction == kTwoByteNopInstruction); + DCHECK(first_instruction == kTwoByteNopInstruction); if (second_instruction == kFiveByteJumpInstruction) { return INCREMENTAL_COMPACTION; } - ASSERT(second_instruction == kFiveByteNopInstruction); + DCHECK(second_instruction == kFiveByteNopInstruction); return STORE_BUFFER_ONLY; } @@ -237,23 +232,23 @@ class RecordWriteStub: public PlatformCodeStub { static void Patch(Code* stub, Mode mode) { switch (mode) { case STORE_BUFFER_ONLY: - ASSERT(GetMode(stub) == INCREMENTAL || + DCHECK(GetMode(stub) == INCREMENTAL || GetMode(stub) == INCREMENTAL_COMPACTION); stub->instruction_start()[0] = kTwoByteNopInstruction; stub->instruction_start()[2] = kFiveByteNopInstruction; break; case INCREMENTAL: - ASSERT(GetMode(stub) == STORE_BUFFER_ONLY); + DCHECK(GetMode(stub) == STORE_BUFFER_ONLY); stub->instruction_start()[0] = kTwoByteJumpInstruction; break; case INCREMENTAL_COMPACTION: - ASSERT(GetMode(stub) == STORE_BUFFER_ONLY); + DCHECK(GetMode(stub) == STORE_BUFFER_ONLY); stub->instruction_start()[0] = kTwoByteNopInstruction; stub->instruction_start()[2] = kFiveByteJumpInstruction; break; } - ASSERT(GetMode(stub) == mode); - CPU::FlushICache(stub->instruction_start(), 7); + DCHECK(GetMode(stub) == mode); + CpuFeatures::FlushICache(stub->instruction_start(), 7); } private: @@ -271,7 +266,7 @@ class RecordWriteStub: public PlatformCodeStub { object_(object), address_(address), scratch0_(scratch0) { - ASSERT(!AreAliased(scratch0, object, address, no_reg)); + DCHECK(!AreAliased(scratch0, object, address, no_reg)); scratch1_ = GetRegThatIsNotEcxOr(object_, address_, scratch0_); if (scratch0.is(ecx)) { scratch0_ = GetRegThatIsNotEcxOr(object_, address_, scratch1_); @@ -282,15 +277,15 @@ class RecordWriteStub: public PlatformCodeStub { if (address.is(ecx)) { address_ = GetRegThatIsNotEcxOr(object_, scratch0_, scratch1_); } - ASSERT(!AreAliased(scratch0_, object_, address_, ecx)); + DCHECK(!AreAliased(scratch0_, object_, address_, ecx)); } void Save(MacroAssembler* masm) { - ASSERT(!address_orig_.is(object_)); - ASSERT(object_.is(object_orig_) || address_.is(address_orig_)); - ASSERT(!AreAliased(object_, address_, scratch1_, scratch0_)); - ASSERT(!AreAliased(object_orig_, address_, scratch1_, scratch0_)); - ASSERT(!AreAliased(object_, address_orig_, scratch1_, scratch0_)); + DCHECK(!address_orig_.is(object_)); + DCHECK(object_.is(object_orig_) || address_.is(address_orig_)); + DCHECK(!AreAliased(object_, address_, scratch1_, scratch0_)); + DCHECK(!AreAliased(object_orig_, address_, scratch1_, scratch0_)); + DCHECK(!AreAliased(object_, address_orig_, scratch1_, scratch0_)); // We don't have to save scratch0_orig_ because it was given to us as // a scratch register. But if we had to switch to a different reg then // we should save the new scratch0_. @@ -340,11 +335,10 @@ class RecordWriteStub: public PlatformCodeStub { if (!scratch0_.is(eax) && !scratch1_.is(eax)) masm->push(eax); if (!scratch0_.is(edx) && !scratch1_.is(edx)) masm->push(edx); if (mode == kSaveFPRegs) { - CpuFeatureScope scope(masm, SSE2); masm->sub(esp, - Immediate(kDoubleSize * (XMMRegister::kNumRegisters - 1))); + Immediate(kDoubleSize * (XMMRegister::kMaxNumRegisters - 1))); // Save all XMM registers except XMM0. 
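With SSE2 now an unconditional baseline, the CpuFeatureScope guards disappear and the spill area is sized directly from the register count; on ia32 that is seven 8-byte slots, since xmm0 is excluded per the comment above:

    // Spill-area arithmetic behind the sub(esp, ...) above (ia32 values).
    static const int kDoubleSize = 8;
    static const int kMaxNumRegisters = 8;  // xmm0..xmm7
    static const int kSpillBytes = kDoubleSize * (kMaxNumRegisters - 1);  // 56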
- for (int i = XMMRegister::kNumRegisters - 1; i > 0; i--) { + for (int i = XMMRegister::kMaxNumRegisters - 1; i > 0; i--) { XMMRegister reg = XMMRegister::from_code(i); masm->movsd(Operand(esp, (i - 1) * kDoubleSize), reg); } @@ -354,14 +348,13 @@ class RecordWriteStub: public PlatformCodeStub { inline void RestoreCallerSaveRegisters(MacroAssembler*masm, SaveFPRegsMode mode) { if (mode == kSaveFPRegs) { - CpuFeatureScope scope(masm, SSE2); // Restore all XMM registers except XMM0. - for (int i = XMMRegister::kNumRegisters - 1; i > 0; i--) { + for (int i = XMMRegister::kMaxNumRegisters - 1; i > 0; i--) { XMMRegister reg = XMMRegister::from_code(i); masm->movsd(reg, Operand(esp, (i - 1) * kDoubleSize)); } masm->add(esp, - Immediate(kDoubleSize * (XMMRegister::kNumRegisters - 1))); + Immediate(kDoubleSize * (XMMRegister::kMaxNumRegisters - 1))); } if (!scratch0_.is(edx) && !scratch1_.is(edx)) masm->pop(edx); if (!scratch0_.is(eax) && !scratch1_.is(eax)) masm->pop(eax); @@ -412,9 +405,9 @@ class RecordWriteStub: public PlatformCodeStub { Mode mode); void InformIncrementalMarker(MacroAssembler* masm); - Major MajorKey() { return RecordWrite; } + Major MajorKey() const { return RecordWrite; } - int MinorKey() { + int MinorKey() const { return ObjectBits::encode(object_.code()) | ValueBits::encode(value_.code()) | AddressBits::encode(address_.code()) | diff --git a/deps/v8/src/ia32/codegen-ia32.cc b/deps/v8/src/ia32/codegen-ia32.cc index 19b66aee3b5..444f98b16a2 100644 --- a/deps/v8/src/ia32/codegen-ia32.cc +++ b/deps/v8/src/ia32/codegen-ia32.cc @@ -2,13 +2,13 @@ // Use of this source code is governed by a BSD-style license that can be // found in the LICENSE file. -#include "v8.h" +#include "src/v8.h" #if V8_TARGET_ARCH_IA32 -#include "codegen.h" -#include "heap.h" -#include "macro-assembler.h" +#include "src/codegen.h" +#include "src/heap/heap.h" +#include "src/macro-assembler.h" namespace v8 { namespace internal { @@ -19,14 +19,14 @@ namespace internal { void StubRuntimeCallHelper::BeforeCall(MacroAssembler* masm) const { masm->EnterFrame(StackFrame::INTERNAL); - ASSERT(!masm->has_frame()); + DCHECK(!masm->has_frame()); masm->set_has_frame(true); } void StubRuntimeCallHelper::AfterCall(MacroAssembler* masm) const { masm->LeaveFrame(StackFrame::INTERNAL); - ASSERT(masm->has_frame()); + DCHECK(masm->has_frame()); masm->set_has_frame(false); } @@ -35,10 +35,10 @@ void StubRuntimeCallHelper::AfterCall(MacroAssembler* masm) const { UnaryMathFunction CreateExpFunction() { - if (!CpuFeatures::IsSupported(SSE2)) return &std::exp; if (!FLAG_fast_math) return &std::exp; size_t actual_size; - byte* buffer = static_cast<byte*>(OS::Allocate(1 * KB, &actual_size, true)); + byte* buffer = + static_cast<byte*>(base::OS::Allocate(1 * KB, &actual_size, true)); if (buffer == NULL) return &std::exp; ExternalReference::InitializeMathExpData(); @@ -46,7 +46,6 @@ UnaryMathFunction CreateExpFunction() { // esp[1 * kPointerSize]: raw double input // esp[0 * kPointerSize]: return address { - CpuFeatureScope use_sse2(&masm, SSE2); XMMRegister input = xmm1; XMMRegister result = xmm2; __ movsd(input, Operand(esp, 1 * kPointerSize)); @@ -64,10 +63,10 @@ UnaryMathFunction CreateExpFunction() { CodeDesc desc; masm.GetCode(&desc); - ASSERT(!RelocInfo::RequiresRelocation(desc)); + DCHECK(!RelocInfo::RequiresRelocation(desc)); - CPU::FlushICache(buffer, actual_size); - OS::ProtectCode(buffer, actual_size); + CpuFeatures::FlushICache(buffer, actual_size); + base::OS::ProtectCode(buffer, actual_size); return 
FUNCTION_CAST<UnaryMathFunction>(buffer); } @@ -75,18 +74,14 @@ UnaryMathFunction CreateExpFunction() { UnaryMathFunction CreateSqrtFunction() { size_t actual_size; // Allocate buffer in executable space. - byte* buffer = static_cast<byte*>(OS::Allocate(1 * KB, - &actual_size, - true)); - // If SSE2 is not available, we can use libc's implementation to ensure - // consistency since code by fullcodegen's calls into runtime in that case. - if (buffer == NULL || !CpuFeatures::IsSupported(SSE2)) return &std::sqrt; + byte* buffer = + static_cast<byte*>(base::OS::Allocate(1 * KB, &actual_size, true)); + if (buffer == NULL) return &std::sqrt; MacroAssembler masm(NULL, buffer, static_cast<int>(actual_size)); // esp[1 * kPointerSize]: raw double input // esp[0 * kPointerSize]: return address // Move double input into registers. { - CpuFeatureScope use_sse2(&masm, SSE2); __ movsd(xmm0, Operand(esp, 1 * kPointerSize)); __ sqrtsd(xmm0, xmm0); __ movsd(Operand(esp, 1 * kPointerSize), xmm0); @@ -97,10 +92,10 @@ UnaryMathFunction CreateSqrtFunction() { CodeDesc desc; masm.GetCode(&desc); - ASSERT(!RelocInfo::RequiresRelocation(desc)); + DCHECK(!RelocInfo::RequiresRelocation(desc)); - CPU::FlushICache(buffer, actual_size); - OS::ProtectCode(buffer, actual_size); + CpuFeatures::FlushICache(buffer, actual_size); + base::OS::ProtectCode(buffer, actual_size); return FUNCTION_CAST<UnaryMathFunction>(buffer); } @@ -191,10 +186,11 @@ class LabelConverter { }; -OS::MemMoveFunction CreateMemMoveFunction() { +MemMoveFunction CreateMemMoveFunction() { size_t actual_size; // Allocate buffer in executable space. - byte* buffer = static_cast<byte*>(OS::Allocate(1 * KB, &actual_size, true)); + byte* buffer = + static_cast<byte*>(base::OS::Allocate(1 * KB, &actual_size, true)); if (buffer == NULL) return NULL; MacroAssembler masm(NULL, buffer, static_cast<int>(actual_size)); LabelConverter conv(buffer); @@ -243,325 +239,264 @@ OS::MemMoveFunction CreateMemMoveFunction() { __ cmp(dst, src); __ j(equal, &pop_and_return); - if (CpuFeatures::IsSupported(SSE2)) { - CpuFeatureScope sse2_scope(&masm, SSE2); - __ prefetch(Operand(src, 0), 1); + __ prefetch(Operand(src, 0), 1); + __ cmp(count, kSmallCopySize); + __ j(below_equal, &small_size); + __ cmp(count, kMediumCopySize); + __ j(below_equal, &medium_size); + __ cmp(dst, src); + __ j(above, &backward); + + { + // |dst| is a lower address than |src|. Copy front-to-back. + Label unaligned_source, move_last_15, skip_last_move; + __ mov(eax, src); + __ sub(eax, dst); + __ cmp(eax, kMinMoveDistance); + __ j(below, &forward_much_overlap); + // Copy first 16 bytes. + __ movdqu(xmm0, Operand(src, 0)); + __ movdqu(Operand(dst, 0), xmm0); + // Determine distance to alignment: 16 - (dst & 0xF). + __ mov(edx, dst); + __ and_(edx, 0xF); + __ neg(edx); + __ add(edx, Immediate(16)); + __ add(dst, edx); + __ add(src, edx); + __ sub(count, edx); + // dst is now aligned. Main copy loop. + __ mov(loop_count, count); + __ shr(loop_count, 6); + // Check if src is also aligned. + __ test(src, Immediate(0xF)); + __ j(not_zero, &unaligned_source); + // Copy loop for aligned source and destination. + MemMoveEmitMainLoop(&masm, &move_last_15, FORWARD, MOVE_ALIGNED); + // At most 15 bytes to copy. Copy 16 bytes at end of string. 
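That comment is the heart of the tail trick used throughout this memmove: rather than a byte loop for the final 1 to 15 bytes, a single unaligned 16-byte load/store pair ending exactly at the buffer ends rewrites a few already-copied bytes, which is harmless. A C sketch under the stub's preconditions (total size at least 16, so the window never underruns the buffers):

    #include <cstdint>
    #include <cstring>

    // count holds the 1..15 leftover bytes; dst/src point at the leftover
    // region, with at least 16 valid, already-copied bytes behind them.
    static void CopyTail(uint8_t* dst, const uint8_t* src, size_t count) {
      if (count == 0) return;  // j(zero, &skip_last_move)
      uint8_t tmp[16];
      std::memcpy(tmp, src + count - 16, 16);  // movdqu xmm0, [src+count-16]
      std::memcpy(dst + count - 16, tmp, 16);  // movdqu [dst+count-16], xmm0
    }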
+ __ bind(&move_last_15); + __ and_(count, 0xF); + __ j(zero, &skip_last_move, Label::kNear); + __ movdqu(xmm0, Operand(src, count, times_1, -0x10)); + __ movdqu(Operand(dst, count, times_1, -0x10), xmm0); + __ bind(&skip_last_move); + MemMoveEmitPopAndReturn(&masm); + + // Copy loop for unaligned source and aligned destination. + __ bind(&unaligned_source); + MemMoveEmitMainLoop(&masm, &move_last_15, FORWARD, MOVE_UNALIGNED); + __ jmp(&move_last_15); + + // Less than kMinMoveDistance offset between dst and src. + Label loop_until_aligned, last_15_much_overlap; + __ bind(&loop_until_aligned); + __ mov_b(eax, Operand(src, 0)); + __ inc(src); + __ mov_b(Operand(dst, 0), eax); + __ inc(dst); + __ dec(count); + __ bind(&forward_much_overlap); // Entry point into this block. + __ test(dst, Immediate(0xF)); + __ j(not_zero, &loop_until_aligned); + // dst is now aligned, src can't be. Main copy loop. + __ mov(loop_count, count); + __ shr(loop_count, 6); + MemMoveEmitMainLoop(&masm, &last_15_much_overlap, + FORWARD, MOVE_UNALIGNED); + __ bind(&last_15_much_overlap); + __ and_(count, 0xF); + __ j(zero, &pop_and_return); __ cmp(count, kSmallCopySize); __ j(below_equal, &small_size); - __ cmp(count, kMediumCopySize); - __ j(below_equal, &medium_size); - __ cmp(dst, src); - __ j(above, &backward); - - { - // |dst| is a lower address than |src|. Copy front-to-back. - Label unaligned_source, move_last_15, skip_last_move; - __ mov(eax, src); - __ sub(eax, dst); - __ cmp(eax, kMinMoveDistance); - __ j(below, &forward_much_overlap); - // Copy first 16 bytes. - __ movdqu(xmm0, Operand(src, 0)); - __ movdqu(Operand(dst, 0), xmm0); - // Determine distance to alignment: 16 - (dst & 0xF). - __ mov(edx, dst); - __ and_(edx, 0xF); - __ neg(edx); - __ add(edx, Immediate(16)); - __ add(dst, edx); - __ add(src, edx); - __ sub(count, edx); - // dst is now aligned. Main copy loop. - __ mov(loop_count, count); - __ shr(loop_count, 6); - // Check if src is also aligned. - __ test(src, Immediate(0xF)); - __ j(not_zero, &unaligned_source); - // Copy loop for aligned source and destination. - MemMoveEmitMainLoop(&masm, &move_last_15, FORWARD, MOVE_ALIGNED); - // At most 15 bytes to copy. Copy 16 bytes at end of string. - __ bind(&move_last_15); - __ and_(count, 0xF); - __ j(zero, &skip_last_move, Label::kNear); - __ movdqu(xmm0, Operand(src, count, times_1, -0x10)); - __ movdqu(Operand(dst, count, times_1, -0x10), xmm0); - __ bind(&skip_last_move); - MemMoveEmitPopAndReturn(&masm); - - // Copy loop for unaligned source and aligned destination. - __ bind(&unaligned_source); - MemMoveEmitMainLoop(&masm, &move_last_15, FORWARD, MOVE_UNALIGNED); - __ jmp(&move_last_15); - - // Less than kMinMoveDistance offset between dst and src. - Label loop_until_aligned, last_15_much_overlap; - __ bind(&loop_until_aligned); - __ mov_b(eax, Operand(src, 0)); - __ inc(src); - __ mov_b(Operand(dst, 0), eax); - __ inc(dst); - __ dec(count); - __ bind(&forward_much_overlap); // Entry point into this block. - __ test(dst, Immediate(0xF)); - __ j(not_zero, &loop_until_aligned); - // dst is now aligned, src can't be. Main copy loop. - __ mov(loop_count, count); - __ shr(loop_count, 6); - MemMoveEmitMainLoop(&masm, &last_15_much_overlap, - FORWARD, MOVE_UNALIGNED); - __ bind(&last_15_much_overlap); - __ and_(count, 0xF); - __ j(zero, &pop_and_return); - __ cmp(count, kSmallCopySize); - __ j(below_equal, &small_size); - __ jmp(&medium_size); - } + __ jmp(&medium_size); + } - { - // |dst| is a higher address than |src|. Copy backwards. 
- Label unaligned_source, move_first_15, skip_last_move; - __ bind(&backward); - // |dst| and |src| always point to the end of what's left to copy. - __ add(dst, count); - __ add(src, count); - __ mov(eax, dst); - __ sub(eax, src); - __ cmp(eax, kMinMoveDistance); - __ j(below, &backward_much_overlap); - // Copy last 16 bytes. - __ movdqu(xmm0, Operand(src, -0x10)); - __ movdqu(Operand(dst, -0x10), xmm0); - // Find distance to alignment: dst & 0xF - __ mov(edx, dst); - __ and_(edx, 0xF); - __ sub(dst, edx); - __ sub(src, edx); - __ sub(count, edx); - // dst is now aligned. Main copy loop. - __ mov(loop_count, count); - __ shr(loop_count, 6); - // Check if src is also aligned. - __ test(src, Immediate(0xF)); - __ j(not_zero, &unaligned_source); - // Copy loop for aligned source and destination. - MemMoveEmitMainLoop(&masm, &move_first_15, BACKWARD, MOVE_ALIGNED); - // At most 15 bytes to copy. Copy 16 bytes at beginning of string. - __ bind(&move_first_15); - __ and_(count, 0xF); - __ j(zero, &skip_last_move, Label::kNear); - __ sub(src, count); - __ sub(dst, count); - __ movdqu(xmm0, Operand(src, 0)); - __ movdqu(Operand(dst, 0), xmm0); - __ bind(&skip_last_move); - MemMoveEmitPopAndReturn(&masm); - - // Copy loop for unaligned source and aligned destination. - __ bind(&unaligned_source); - MemMoveEmitMainLoop(&masm, &move_first_15, BACKWARD, MOVE_UNALIGNED); - __ jmp(&move_first_15); - - // Less than kMinMoveDistance offset between dst and src. - Label loop_until_aligned, first_15_much_overlap; - __ bind(&loop_until_aligned); - __ dec(src); - __ dec(dst); - __ mov_b(eax, Operand(src, 0)); - __ mov_b(Operand(dst, 0), eax); - __ dec(count); - __ bind(&backward_much_overlap); // Entry point into this block. - __ test(dst, Immediate(0xF)); - __ j(not_zero, &loop_until_aligned); - // dst is now aligned, src can't be. Main copy loop. - __ mov(loop_count, count); - __ shr(loop_count, 6); - MemMoveEmitMainLoop(&masm, &first_15_much_overlap, - BACKWARD, MOVE_UNALIGNED); - __ bind(&first_15_much_overlap); - __ and_(count, 0xF); - __ j(zero, &pop_and_return); - // Small/medium handlers expect dst/src to point to the beginning. - __ sub(dst, count); - __ sub(src, count); - __ cmp(count, kSmallCopySize); - __ j(below_equal, &small_size); - __ jmp(&medium_size); - } - { - // Special handlers for 9 <= copy_size < 64. No assumptions about - // alignment or move distance, so all reads must be unaligned and - // must happen before any writes. 
- Label medium_handlers, f9_16, f17_32, f33_48, f49_63; - - __ bind(&f9_16); - __ movsd(xmm0, Operand(src, 0)); - __ movsd(xmm1, Operand(src, count, times_1, -8)); - __ movsd(Operand(dst, 0), xmm0); - __ movsd(Operand(dst, count, times_1, -8), xmm1); - MemMoveEmitPopAndReturn(&masm); - - __ bind(&f17_32); - __ movdqu(xmm0, Operand(src, 0)); - __ movdqu(xmm1, Operand(src, count, times_1, -0x10)); - __ movdqu(Operand(dst, 0x00), xmm0); - __ movdqu(Operand(dst, count, times_1, -0x10), xmm1); - MemMoveEmitPopAndReturn(&masm); - - __ bind(&f33_48); - __ movdqu(xmm0, Operand(src, 0x00)); - __ movdqu(xmm1, Operand(src, 0x10)); - __ movdqu(xmm2, Operand(src, count, times_1, -0x10)); - __ movdqu(Operand(dst, 0x00), xmm0); - __ movdqu(Operand(dst, 0x10), xmm1); - __ movdqu(Operand(dst, count, times_1, -0x10), xmm2); - MemMoveEmitPopAndReturn(&masm); - - __ bind(&f49_63); - __ movdqu(xmm0, Operand(src, 0x00)); - __ movdqu(xmm1, Operand(src, 0x10)); - __ movdqu(xmm2, Operand(src, 0x20)); - __ movdqu(xmm3, Operand(src, count, times_1, -0x10)); - __ movdqu(Operand(dst, 0x00), xmm0); - __ movdqu(Operand(dst, 0x10), xmm1); - __ movdqu(Operand(dst, 0x20), xmm2); - __ movdqu(Operand(dst, count, times_1, -0x10), xmm3); - MemMoveEmitPopAndReturn(&masm); - - __ bind(&medium_handlers); - __ dd(conv.address(&f9_16)); - __ dd(conv.address(&f17_32)); - __ dd(conv.address(&f33_48)); - __ dd(conv.address(&f49_63)); - - __ bind(&medium_size); // Entry point into this block. - __ mov(eax, count); - __ dec(eax); - __ shr(eax, 4); - if (FLAG_debug_code) { - Label ok; - __ cmp(eax, 3); - __ j(below_equal, &ok); - __ int3(); - __ bind(&ok); - } - __ mov(eax, Operand(eax, times_4, conv.address(&medium_handlers))); - __ jmp(eax); - } - { - // Specialized copiers for copy_size <= 8 bytes. - Label small_handlers, f0, f1, f2, f3, f4, f5_8; - __ bind(&f0); - MemMoveEmitPopAndReturn(&masm); - - __ bind(&f1); - __ mov_b(eax, Operand(src, 0)); - __ mov_b(Operand(dst, 0), eax); - MemMoveEmitPopAndReturn(&masm); - - __ bind(&f2); - __ mov_w(eax, Operand(src, 0)); - __ mov_w(Operand(dst, 0), eax); - MemMoveEmitPopAndReturn(&masm); - - __ bind(&f3); - __ mov_w(eax, Operand(src, 0)); - __ mov_b(edx, Operand(src, 2)); - __ mov_w(Operand(dst, 0), eax); - __ mov_b(Operand(dst, 2), edx); - MemMoveEmitPopAndReturn(&masm); - - __ bind(&f4); - __ mov(eax, Operand(src, 0)); - __ mov(Operand(dst, 0), eax); - MemMoveEmitPopAndReturn(&masm); - - __ bind(&f5_8); - __ mov(eax, Operand(src, 0)); - __ mov(edx, Operand(src, count, times_1, -4)); - __ mov(Operand(dst, 0), eax); - __ mov(Operand(dst, count, times_1, -4), edx); - MemMoveEmitPopAndReturn(&masm); - - __ bind(&small_handlers); - __ dd(conv.address(&f0)); - __ dd(conv.address(&f1)); - __ dd(conv.address(&f2)); - __ dd(conv.address(&f3)); - __ dd(conv.address(&f4)); - __ dd(conv.address(&f5_8)); - __ dd(conv.address(&f5_8)); - __ dd(conv.address(&f5_8)); - __ dd(conv.address(&f5_8)); - - __ bind(&small_size); // Entry point into this block. - if (FLAG_debug_code) { - Label ok; - __ cmp(count, 8); - __ j(below_equal, &ok); - __ int3(); - __ bind(&ok); - } - __ mov(eax, Operand(count, times_4, conv.address(&small_handlers))); - __ jmp(eax); - } - } else { - // No SSE2. - Label forward; - __ cmp(count, 0); - __ j(equal, &pop_and_return); - __ cmp(dst, src); - __ j(above, &backward); - __ jmp(&forward); - { - // Simple forward copier. 
- Label forward_loop_1byte, forward_loop_4byte; - __ bind(&forward_loop_4byte); - __ mov(eax, Operand(src, 0)); - __ sub(count, Immediate(4)); - __ add(src, Immediate(4)); - __ mov(Operand(dst, 0), eax); - __ add(dst, Immediate(4)); - __ bind(&forward); // Entry point. - __ cmp(count, 3); - __ j(above, &forward_loop_4byte); - __ bind(&forward_loop_1byte); - __ cmp(count, 0); - __ j(below_equal, &pop_and_return); - __ mov_b(eax, Operand(src, 0)); - __ dec(count); - __ inc(src); - __ mov_b(Operand(dst, 0), eax); - __ inc(dst); - __ jmp(&forward_loop_1byte); + { + // |dst| is a higher address than |src|. Copy backwards. + Label unaligned_source, move_first_15, skip_last_move; + __ bind(&backward); + // |dst| and |src| always point to the end of what's left to copy. + __ add(dst, count); + __ add(src, count); + __ mov(eax, dst); + __ sub(eax, src); + __ cmp(eax, kMinMoveDistance); + __ j(below, &backward_much_overlap); + // Copy last 16 bytes. + __ movdqu(xmm0, Operand(src, -0x10)); + __ movdqu(Operand(dst, -0x10), xmm0); + // Find distance to alignment: dst & 0xF + __ mov(edx, dst); + __ and_(edx, 0xF); + __ sub(dst, edx); + __ sub(src, edx); + __ sub(count, edx); + // dst is now aligned. Main copy loop. + __ mov(loop_count, count); + __ shr(loop_count, 6); + // Check if src is also aligned. + __ test(src, Immediate(0xF)); + __ j(not_zero, &unaligned_source); + // Copy loop for aligned source and destination. + MemMoveEmitMainLoop(&masm, &move_first_15, BACKWARD, MOVE_ALIGNED); + // At most 15 bytes to copy. Copy 16 bytes at beginning of string. + __ bind(&move_first_15); + __ and_(count, 0xF); + __ j(zero, &skip_last_move, Label::kNear); + __ sub(src, count); + __ sub(dst, count); + __ movdqu(xmm0, Operand(src, 0)); + __ movdqu(Operand(dst, 0), xmm0); + __ bind(&skip_last_move); + MemMoveEmitPopAndReturn(&masm); + + // Copy loop for unaligned source and aligned destination. + __ bind(&unaligned_source); + MemMoveEmitMainLoop(&masm, &move_first_15, BACKWARD, MOVE_UNALIGNED); + __ jmp(&move_first_15); + + // Less than kMinMoveDistance offset between dst and src. + Label loop_until_aligned, first_15_much_overlap; + __ bind(&loop_until_aligned); + __ dec(src); + __ dec(dst); + __ mov_b(eax, Operand(src, 0)); + __ mov_b(Operand(dst, 0), eax); + __ dec(count); + __ bind(&backward_much_overlap); // Entry point into this block. + __ test(dst, Immediate(0xF)); + __ j(not_zero, &loop_until_aligned); + // dst is now aligned, src can't be. Main copy loop. + __ mov(loop_count, count); + __ shr(loop_count, 6); + MemMoveEmitMainLoop(&masm, &first_15_much_overlap, + BACKWARD, MOVE_UNALIGNED); + __ bind(&first_15_much_overlap); + __ and_(count, 0xF); + __ j(zero, &pop_and_return); + // Small/medium handlers expect dst/src to point to the beginning. + __ sub(dst, count); + __ sub(src, count); + __ cmp(count, kSmallCopySize); + __ j(below_equal, &small_size); + __ jmp(&medium_size); + } + { + // Special handlers for 9 <= copy_size < 64. No assumptions about + // alignment or move distance, so all reads must be unaligned and + // must happen before any writes. 
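The read-before-write rule stated above is what makes these handlers overlap-safe; the 9-to-16-byte case (f9_16 below) loads both halves before storing either, with the halves deliberately overlapping when count < 16:

    #include <cstdint>
    #include <cstring>

    // C rendering of the f9_16 handler (9 <= count <= 16).
    static void Move9To16(uint8_t* dst, const uint8_t* src, size_t count) {
      uint64_t head, tail;
      std::memcpy(&head, src, 8);              // movsd xmm0, [src]
      std::memcpy(&tail, src + count - 8, 8);  // movsd xmm1, [src+count-8]
      std::memcpy(dst, &head, 8);              // movsd [dst], xmm0
      std::memcpy(dst + count - 8, &tail, 8);  // movsd [dst+count-8], xmm1
    }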
+ Label medium_handlers, f9_16, f17_32, f33_48, f49_63; + + __ bind(&f9_16); + __ movsd(xmm0, Operand(src, 0)); + __ movsd(xmm1, Operand(src, count, times_1, -8)); + __ movsd(Operand(dst, 0), xmm0); + __ movsd(Operand(dst, count, times_1, -8), xmm1); + MemMoveEmitPopAndReturn(&masm); + + __ bind(&f17_32); + __ movdqu(xmm0, Operand(src, 0)); + __ movdqu(xmm1, Operand(src, count, times_1, -0x10)); + __ movdqu(Operand(dst, 0x00), xmm0); + __ movdqu(Operand(dst, count, times_1, -0x10), xmm1); + MemMoveEmitPopAndReturn(&masm); + + __ bind(&f33_48); + __ movdqu(xmm0, Operand(src, 0x00)); + __ movdqu(xmm1, Operand(src, 0x10)); + __ movdqu(xmm2, Operand(src, count, times_1, -0x10)); + __ movdqu(Operand(dst, 0x00), xmm0); + __ movdqu(Operand(dst, 0x10), xmm1); + __ movdqu(Operand(dst, count, times_1, -0x10), xmm2); + MemMoveEmitPopAndReturn(&masm); + + __ bind(&f49_63); + __ movdqu(xmm0, Operand(src, 0x00)); + __ movdqu(xmm1, Operand(src, 0x10)); + __ movdqu(xmm2, Operand(src, 0x20)); + __ movdqu(xmm3, Operand(src, count, times_1, -0x10)); + __ movdqu(Operand(dst, 0x00), xmm0); + __ movdqu(Operand(dst, 0x10), xmm1); + __ movdqu(Operand(dst, 0x20), xmm2); + __ movdqu(Operand(dst, count, times_1, -0x10), xmm3); + MemMoveEmitPopAndReturn(&masm); + + __ bind(&medium_handlers); + __ dd(conv.address(&f9_16)); + __ dd(conv.address(&f17_32)); + __ dd(conv.address(&f33_48)); + __ dd(conv.address(&f49_63)); + + __ bind(&medium_size); // Entry point into this block. + __ mov(eax, count); + __ dec(eax); + __ shr(eax, 4); + if (FLAG_debug_code) { + Label ok; + __ cmp(eax, 3); + __ j(below_equal, &ok); + __ int3(); + __ bind(&ok); } - { - // Simple backward copier. - Label backward_loop_1byte, backward_loop_4byte, entry_shortcut; - __ bind(&backward); - __ add(src, count); - __ add(dst, count); - __ cmp(count, 3); - __ j(below_equal, &entry_shortcut); - - __ bind(&backward_loop_4byte); - __ sub(src, Immediate(4)); - __ sub(count, Immediate(4)); - __ mov(eax, Operand(src, 0)); - __ sub(dst, Immediate(4)); - __ mov(Operand(dst, 0), eax); - __ cmp(count, 3); - __ j(above, &backward_loop_4byte); - __ bind(&backward_loop_1byte); - __ cmp(count, 0); - __ j(below_equal, &pop_and_return); - __ bind(&entry_shortcut); - __ dec(src); - __ dec(count); - __ mov_b(eax, Operand(src, 0)); - __ dec(dst); - __ mov_b(Operand(dst, 0), eax); - __ jmp(&backward_loop_1byte); + __ mov(eax, Operand(eax, times_4, conv.address(&medium_handlers))); + __ jmp(eax); + } + { + // Specialized copiers for copy_size <= 8 bytes. 
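Both the medium handlers above and the small handlers that follow are reached through an inline jump table: the medium path buckets count via (count - 1) >> 4 into four entries (9-16, 17-32, 33-48, 49-63 bytes), while the small path indexes on count directly, with entries 5 through 8 all sharing f5_8. A hypothetical C++ equivalent of the dispatch (illustrative only, not part of the patch):

#include <cstddef>
#include <cstdint>

using Handler = void (*)(uint8_t* dst, const uint8_t* src, size_t n);

// medium[4] holds the 9-16, 17-32, 33-48 and 49-63 byte handlers;
// small[9] holds f0..f4 plus four entries that all point at f5_8.
static void Dispatch(const Handler medium[4], const Handler small[9],
                     uint8_t* dst, const uint8_t* src, size_t n) {
  if (n <= 8) {
    small[n](dst, src, n);              // direct index, n in [0, 8]
  } else {
    medium[(n - 1) >> 4](dst, src, n);  // bucket in [0, 3] for n in [9, 63]
  }
}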
+ Label small_handlers, f0, f1, f2, f3, f4, f5_8; + __ bind(&f0); + MemMoveEmitPopAndReturn(&masm); + + __ bind(&f1); + __ mov_b(eax, Operand(src, 0)); + __ mov_b(Operand(dst, 0), eax); + MemMoveEmitPopAndReturn(&masm); + + __ bind(&f2); + __ mov_w(eax, Operand(src, 0)); + __ mov_w(Operand(dst, 0), eax); + MemMoveEmitPopAndReturn(&masm); + + __ bind(&f3); + __ mov_w(eax, Operand(src, 0)); + __ mov_b(edx, Operand(src, 2)); + __ mov_w(Operand(dst, 0), eax); + __ mov_b(Operand(dst, 2), edx); + MemMoveEmitPopAndReturn(&masm); + + __ bind(&f4); + __ mov(eax, Operand(src, 0)); + __ mov(Operand(dst, 0), eax); + MemMoveEmitPopAndReturn(&masm); + + __ bind(&f5_8); + __ mov(eax, Operand(src, 0)); + __ mov(edx, Operand(src, count, times_1, -4)); + __ mov(Operand(dst, 0), eax); + __ mov(Operand(dst, count, times_1, -4), edx); + MemMoveEmitPopAndReturn(&masm); + + __ bind(&small_handlers); + __ dd(conv.address(&f0)); + __ dd(conv.address(&f1)); + __ dd(conv.address(&f2)); + __ dd(conv.address(&f3)); + __ dd(conv.address(&f4)); + __ dd(conv.address(&f5_8)); + __ dd(conv.address(&f5_8)); + __ dd(conv.address(&f5_8)); + __ dd(conv.address(&f5_8)); + + __ bind(&small_size); // Entry point into this block. + if (FLAG_debug_code) { + Label ok; + __ cmp(count, 8); + __ j(below_equal, &ok); + __ int3(); + __ bind(&ok); } + __ mov(eax, Operand(count, times_4, conv.address(&small_handlers))); + __ jmp(eax); } __ bind(&pop_and_return); @@ -569,12 +504,12 @@ OS::MemMoveFunction CreateMemMoveFunction() { CodeDesc desc; masm.GetCode(&desc); - ASSERT(!RelocInfo::RequiresRelocation(desc)); - CPU::FlushICache(buffer, actual_size); - OS::ProtectCode(buffer, actual_size); + DCHECK(!RelocInfo::RequiresRelocation(desc)); + CpuFeatures::FlushICache(buffer, actual_size); + base::OS::ProtectCode(buffer, actual_size); // TODO(jkummerow): It would be nice to register this code creation event // with the PROFILE / GDBJIT system. - return FUNCTION_CAST<OS::MemMoveFunction>(buffer); + return FUNCTION_CAST<MemMoveFunction>(buffer); } @@ -587,26 +522,28 @@ OS::MemMoveFunction CreateMemMoveFunction() { void ElementsTransitionGenerator::GenerateMapChangeElementsTransition( - MacroAssembler* masm, AllocationSiteMode mode, + MacroAssembler* masm, + Register receiver, + Register key, + Register value, + Register target_map, + AllocationSiteMode mode, Label* allocation_memento_found) { - // ----------- S t a t e ------------- - // -- eax : value - // -- ebx : target map - // -- ecx : key - // -- edx : receiver - // -- esp[0] : return address - // ----------------------------------- + Register scratch = edi; + DCHECK(!AreAliased(receiver, key, value, target_map, scratch)); + if (mode == TRACK_ALLOCATION_SITE) { - ASSERT(allocation_memento_found != NULL); - __ JumpIfJSArrayHasAllocationMemento(edx, edi, allocation_memento_found); + DCHECK(allocation_memento_found != NULL); + __ JumpIfJSArrayHasAllocationMemento( + receiver, scratch, allocation_memento_found); } // Set transitioned map. 
- __ mov(FieldOperand(edx, HeapObject::kMapOffset), ebx); - __ RecordWriteField(edx, + __ mov(FieldOperand(receiver, HeapObject::kMapOffset), target_map); + __ RecordWriteField(receiver, HeapObject::kMapOffset, - ebx, - edi, + target_map, + scratch, kDontSaveFPRegs, EMIT_REMEMBERED_SET, OMIT_SMI_CHECK); @@ -614,14 +551,19 @@ void ElementsTransitionGenerator::GenerateMapChangeElementsTransition( void ElementsTransitionGenerator::GenerateSmiToDouble( - MacroAssembler* masm, AllocationSiteMode mode, Label* fail) { - // ----------- S t a t e ------------- - // -- eax : value - // -- ebx : target map - // -- ecx : key - // -- edx : receiver - // -- esp[0] : return address - // ----------------------------------- + MacroAssembler* masm, + Register receiver, + Register key, + Register value, + Register target_map, + AllocationSiteMode mode, + Label* fail) { + // Return address is on the stack. + DCHECK(receiver.is(edx)); + DCHECK(key.is(ecx)); + DCHECK(value.is(eax)); + DCHECK(target_map.is(ebx)); + Label loop, entry, convert_hole, gc_required, only_change_map; if (mode == TRACK_ALLOCATION_SITE) { @@ -671,11 +613,8 @@ void ElementsTransitionGenerator::GenerateSmiToDouble( ExternalReference canonical_the_hole_nan_reference = ExternalReference::address_of_the_hole_nan(); XMMRegister the_hole_nan = xmm1; - if (CpuFeatures::IsSupported(SSE2)) { - CpuFeatureScope use_sse2(masm, SSE2); - __ movsd(the_hole_nan, - Operand::StaticVariable(canonical_the_hole_nan_reference)); - } + __ movsd(the_hole_nan, + Operand::StaticVariable(canonical_the_hole_nan_reference)); __ jmp(&entry); // Call into runtime if GC is required. @@ -696,17 +635,9 @@ void ElementsTransitionGenerator::GenerateSmiToDouble( // Normal smi, convert it to double and store. __ SmiUntag(ebx); - if (CpuFeatures::IsSupported(SSE2)) { - CpuFeatureScope fscope(masm, SSE2); - __ Cvtsi2sd(xmm0, ebx); - __ movsd(FieldOperand(eax, edi, times_4, FixedDoubleArray::kHeaderSize), - xmm0); - } else { - __ push(ebx); - __ fild_s(Operand(esp, 0)); - __ pop(ebx); - __ fstp_d(FieldOperand(eax, edi, times_4, FixedDoubleArray::kHeaderSize)); - } + __ Cvtsi2sd(xmm0, ebx); + __ movsd(FieldOperand(eax, edi, times_4, FixedDoubleArray::kHeaderSize), + xmm0); __ jmp(&entry); // Found hole, store hole_nan_as_double instead. @@ -717,14 +648,8 @@ void ElementsTransitionGenerator::GenerateSmiToDouble( __ Assert(equal, kObjectFoundInSmiOnlyArray); } - if (CpuFeatures::IsSupported(SSE2)) { - CpuFeatureScope use_sse2(masm, SSE2); - __ movsd(FieldOperand(eax, edi, times_4, FixedDoubleArray::kHeaderSize), - the_hole_nan); - } else { - __ fld_d(Operand::StaticVariable(canonical_the_hole_nan_reference)); - __ fstp_d(FieldOperand(eax, edi, times_4, FixedDoubleArray::kHeaderSize)); - } + __ movsd(FieldOperand(eax, edi, times_4, FixedDoubleArray::kHeaderSize), + the_hole_nan); __ bind(&entry); __ sub(edi, Immediate(Smi::FromInt(1))); @@ -752,14 +677,19 @@ void ElementsTransitionGenerator::GenerateSmiToDouble( void ElementsTransitionGenerator::GenerateDoubleToObject( - MacroAssembler* masm, AllocationSiteMode mode, Label* fail) { - // ----------- S t a t e ------------- - // -- eax : value - // -- ebx : target map - // -- ecx : key - // -- edx : receiver - // -- esp[0] : return address - // ----------------------------------- + MacroAssembler* masm, + Register receiver, + Register key, + Register value, + Register target_map, + AllocationSiteMode mode, + Label* fail) { + // Return address is on the stack. 
+ DCHECK(receiver.is(edx)); + DCHECK(key.is(ecx)); + DCHECK(value.is(eax)); + DCHECK(target_map.is(ebx)); + Label loop, entry, convert_hole, gc_required, only_change_map, success; if (mode == TRACK_ALLOCATION_SITE) { @@ -826,17 +756,9 @@ void ElementsTransitionGenerator::GenerateDoubleToObject( // Non-hole double, copy value into a heap number. __ AllocateHeapNumber(edx, esi, no_reg, &gc_required); // edx: new heap number - if (CpuFeatures::IsSupported(SSE2)) { - CpuFeatureScope fscope(masm, SSE2); - __ movsd(xmm0, - FieldOperand(edi, ebx, times_4, FixedDoubleArray::kHeaderSize)); - __ movsd(FieldOperand(edx, HeapNumber::kValueOffset), xmm0); - } else { - __ mov(esi, FieldOperand(edi, ebx, times_4, FixedDoubleArray::kHeaderSize)); - __ mov(FieldOperand(edx, HeapNumber::kValueOffset), esi); - __ mov(esi, FieldOperand(edi, ebx, times_4, offset)); - __ mov(FieldOperand(edx, HeapNumber::kValueOffset + kPointerSize), esi); - } + __ movsd(xmm0, + FieldOperand(edi, ebx, times_4, FixedDoubleArray::kHeaderSize)); + __ movsd(FieldOperand(edx, HeapNumber::kValueOffset), xmm0); __ mov(FieldOperand(eax, ebx, times_2, FixedArray::kHeaderSize), edx); __ mov(esi, ebx); __ RecordWriteArray(eax, @@ -948,7 +870,7 @@ void StringCharLoadGenerator::Generate(MacroAssembler* masm, __ Assert(zero, kExternalStringExpectedButNotFound); } // Rule out short external strings. - STATIC_CHECK(kShortExternalStringTag != 0); + STATIC_ASSERT(kShortExternalStringTag != 0); __ test_b(result, kShortExternalStringMask); __ j(not_zero, call_runtime); // Check encoding. @@ -1002,11 +924,12 @@ void MathExpGenerator::EmitMathExp(MacroAssembler* masm, XMMRegister double_scratch, Register temp1, Register temp2) { - ASSERT(!input.is(double_scratch)); - ASSERT(!input.is(result)); - ASSERT(!result.is(double_scratch)); - ASSERT(!temp1.is(temp2)); - ASSERT(ExternalReference::math_exp_constants(0).address() != NULL); + DCHECK(!input.is(double_scratch)); + DCHECK(!input.is(result)); + DCHECK(!result.is(double_scratch)); + DCHECK(!temp1.is(temp2)); + DCHECK(ExternalReference::math_exp_constants(0).address() != NULL); + DCHECK(!masm->serializer_enabled()); // External references not serializable. 
Label done; @@ -1051,7 +974,7 @@ void MathExpGenerator::EmitMathExp(MacroAssembler* masm, CodeAgingHelper::CodeAgingHelper() { - ASSERT(young_sequence_.length() == kNoCodeAgeSequenceLength); + DCHECK(young_sequence_.length() == kNoCodeAgeSequenceLength); CodePatcher patcher(young_sequence_.start(), young_sequence_.length()); patcher.masm()->push(ebp); patcher.masm()->mov(ebp, esp); @@ -1069,7 +992,7 @@ bool CodeAgingHelper::IsOld(byte* candidate) const { bool Code::IsYoungSequence(Isolate* isolate, byte* sequence) { bool result = isolate->code_aging_helper()->IsYoung(sequence); - ASSERT(result || isolate->code_aging_helper()->IsOld(sequence)); + DCHECK(result || isolate->code_aging_helper()->IsOld(sequence)); return result; } @@ -1096,7 +1019,7 @@ void Code::PatchPlatformCodeAge(Isolate* isolate, uint32_t young_length = isolate->code_aging_helper()->young_sequence_length(); if (age == kNoAgeCodeAge) { isolate->code_aging_helper()->CopyYoungSequenceTo(sequence); - CPU::FlushICache(sequence, young_length); + CpuFeatures::FlushICache(sequence, young_length); } else { Code* stub = GetCodeAgeStub(isolate, age, parity); CodePatcher patcher(sequence, young_length); diff --git a/deps/v8/src/ia32/codegen-ia32.h b/deps/v8/src/ia32/codegen-ia32.h index eda92b0a8c2..3f59c2cb2fe 100644 --- a/deps/v8/src/ia32/codegen-ia32.h +++ b/deps/v8/src/ia32/codegen-ia32.h @@ -5,8 +5,8 @@ #ifndef V8_IA32_CODEGEN_IA32_H_ #define V8_IA32_CODEGEN_IA32_H_ -#include "ast.h" -#include "ic-inl.h" +#include "src/ast.h" +#include "src/ic-inl.h" namespace v8 { namespace internal { diff --git a/deps/v8/src/ia32/cpu-ia32.cc b/deps/v8/src/ia32/cpu-ia32.cc index 7f87a624e30..00c20437bf9 100644 --- a/deps/v8/src/ia32/cpu-ia32.cc +++ b/deps/v8/src/ia32/cpu-ia32.cc @@ -5,20 +5,20 @@ // CPU specific code for ia32 independent of OS goes here. #ifdef __GNUC__ -#include "third_party/valgrind/valgrind.h" +#include "src/third_party/valgrind/valgrind.h" #endif -#include "v8.h" +#include "src/v8.h" #if V8_TARGET_ARCH_IA32 -#include "cpu.h" -#include "macro-assembler.h" +#include "src/assembler.h" +#include "src/macro-assembler.h" namespace v8 { namespace internal { -void CPU::FlushICache(void* start, size_t size) { +void CpuFeatures::FlushICache(void* start, size_t size) { // No need to flush the instruction cache on Intel. On Intel instruction // cache flushing is only necessary when multiple cores running the same // code simultaneously. V8 (and JavaScript) is single threaded and when code diff --git a/deps/v8/src/ia32/debug-ia32.cc b/deps/v8/src/ia32/debug-ia32.cc index e7a7b605815..c7a10d47cc2 100644 --- a/deps/v8/src/ia32/debug-ia32.cc +++ b/deps/v8/src/ia32/debug-ia32.cc @@ -2,12 +2,12 @@ // Use of this source code is governed by a BSD-style license that can be // found in the LICENSE file. -#include "v8.h" +#include "src/v8.h" #if V8_TARGET_ARCH_IA32 -#include "codegen.h" -#include "debug.h" +#include "src/codegen.h" +#include "src/debug.h" namespace v8 { @@ -22,7 +22,7 @@ bool BreakLocationIterator::IsDebugBreakAtReturn() { // CodeGenerator::VisitReturnStatement and VirtualFrame::Exit in codegen-ia32.cc // for the precise return instructions sequence. 
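The ASSERT-to-DCHECK rename that runs through this file (and the rest of these hunks) tracks upstream V8's move to Chromium-style checking macros: DCHECK is active in debug builds and compiles to nothing in release builds, and the real V8 macros route failures through V8_Fatal and add variants such as DCHECK_EQ and DCHECK_GE. A heavily simplified, self-contained sketch of the idea (not V8's actual definition):

#include <cstdio>
#include <cstdlib>

#ifdef DEBUG
#define DCHECK(condition)                                \
  do {                                                   \
    if (!(condition)) {                                  \
      std::fprintf(stderr, "%s:%d: Check failed: %s\n",  \
                   __FILE__, __LINE__, #condition);      \
      std::abort();                                      \
    }                                                    \
  } while (0)
#else
#define DCHECK(condition) ((void)0)  // no-op in release builds
#endif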
void BreakLocationIterator::SetDebugBreakAtReturn() { - ASSERT(Assembler::kJSReturnSequenceLength >= + DCHECK(Assembler::kJSReturnSequenceLength >= Assembler::kCallInstructionLength); rinfo()->PatchCodeWithCall( debug_info_->GetIsolate()->builtins()->Return_DebugBreak()->entry(), @@ -40,20 +40,20 @@ void BreakLocationIterator::ClearDebugBreakAtReturn() { // A debug break in the frame exit code is identified by the JS frame exit code // having been patched with a call instruction. bool Debug::IsDebugBreakAtReturn(RelocInfo* rinfo) { - ASSERT(RelocInfo::IsJSReturn(rinfo->rmode())); + DCHECK(RelocInfo::IsJSReturn(rinfo->rmode())); return rinfo->IsPatchedReturnSequence(); } bool BreakLocationIterator::IsDebugBreakAtSlot() { - ASSERT(IsDebugBreakSlot()); + DCHECK(IsDebugBreakSlot()); // Check whether the debug break slot instructions have been patched. return rinfo()->IsPatchedDebugBreakSlotSequence(); } void BreakLocationIterator::SetDebugBreakAtSlot() { - ASSERT(IsDebugBreakSlot()); + DCHECK(IsDebugBreakSlot()); Isolate* isolate = debug_info_->GetIsolate(); rinfo()->PatchCodeWithCall( isolate->builtins()->Slot_DebugBreak()->entry(), @@ -62,15 +62,11 @@ void BreakLocationIterator::SetDebugBreakAtSlot() { void BreakLocationIterator::ClearDebugBreakAtSlot() { - ASSERT(IsDebugBreakSlot()); + DCHECK(IsDebugBreakSlot()); rinfo()->PatchCode(original_rinfo()->pc(), Assembler::kDebugBreakSlotLength); } -// All debug break stubs support padding for LiveEdit. -const bool Debug::FramePaddingLayout::kIsSupported = true; - - #define __ ACCESS_MASM(masm) static void Generate_DebugBreakCallHelper(MacroAssembler* masm, @@ -82,18 +78,17 @@ static void Generate_DebugBreakCallHelper(MacroAssembler* masm, FrameScope scope(masm, StackFrame::INTERNAL); // Load padding words on stack. - for (int i = 0; i < Debug::FramePaddingLayout::kInitialSize; i++) { - __ push(Immediate(Smi::FromInt( - Debug::FramePaddingLayout::kPaddingValue))); + for (int i = 0; i < LiveEdit::kFramePaddingInitialSize; i++) { + __ push(Immediate(Smi::FromInt(LiveEdit::kFramePaddingValue))); } - __ push(Immediate(Smi::FromInt(Debug::FramePaddingLayout::kInitialSize))); + __ push(Immediate(Smi::FromInt(LiveEdit::kFramePaddingInitialSize))); // Store the registers containing live values on the expression stack to // make sure that these are correctly updated during GC. Non object values // are stored as a smi causing it to be untouched by GC. - ASSERT((object_regs & ~kJSCallerSaved) == 0); - ASSERT((non_object_regs & ~kJSCallerSaved) == 0); - ASSERT((object_regs & non_object_regs) == 0); + DCHECK((object_regs & ~kJSCallerSaved) == 0); + DCHECK((non_object_regs & ~kJSCallerSaved) == 0); + DCHECK((object_regs & non_object_regs) == 0); for (int i = 0; i < kNumJSCallerSaved; i++) { int r = JSCallerSavedCode(i); Register reg = { r }; @@ -146,7 +141,7 @@ static void Generate_DebugBreakCallHelper(MacroAssembler* masm, } } - ASSERT(unused_reg.code() != -1); + DCHECK(unused_reg.code() != -1); // Read current padding counter and skip corresponding number of words. __ pop(unused_reg); @@ -167,12 +162,12 @@ static void Generate_DebugBreakCallHelper(MacroAssembler* masm, // jumping to the target address intended by the caller and that was // overwritten by the address of DebugBreakXXX. 
ExternalReference after_break_target = - ExternalReference(Debug_Address::AfterBreakTarget(), masm->isolate()); + ExternalReference::debug_after_break_target_address(masm->isolate()); __ jmp(Operand::StaticVariable(after_break_target)); } -void Debug::GenerateCallICStubDebugBreak(MacroAssembler* masm) { +void DebugCodegen::GenerateCallICStubDebugBreak(MacroAssembler* masm) { // Register state for CallICStub // ----------- S t a t e ------------- // -- edx : type feedback slot (smi) @@ -183,51 +178,41 @@ void Debug::GenerateCallICStubDebugBreak(MacroAssembler* masm) { } -void Debug::GenerateLoadICDebugBreak(MacroAssembler* masm) { +void DebugCodegen::GenerateLoadICDebugBreak(MacroAssembler* masm) { // Register state for IC load call (from ic-ia32.cc). - // ----------- S t a t e ------------- - // -- ecx : name - // -- edx : receiver - // ----------------------------------- - Generate_DebugBreakCallHelper(masm, ecx.bit() | edx.bit(), 0, false); + Register receiver = LoadIC::ReceiverRegister(); + Register name = LoadIC::NameRegister(); + Generate_DebugBreakCallHelper(masm, receiver.bit() | name.bit(), 0, false); } -void Debug::GenerateStoreICDebugBreak(MacroAssembler* masm) { +void DebugCodegen::GenerateStoreICDebugBreak(MacroAssembler* masm) { // Register state for IC store call (from ic-ia32.cc). - // ----------- S t a t e ------------- - // -- eax : value - // -- ecx : name - // -- edx : receiver - // ----------------------------------- + Register receiver = StoreIC::ReceiverRegister(); + Register name = StoreIC::NameRegister(); + Register value = StoreIC::ValueRegister(); Generate_DebugBreakCallHelper( - masm, eax.bit() | ecx.bit() | edx.bit(), 0, false); + masm, receiver.bit() | name.bit() | value.bit(), 0, false); } -void Debug::GenerateKeyedLoadICDebugBreak(MacroAssembler* masm) { +void DebugCodegen::GenerateKeyedLoadICDebugBreak(MacroAssembler* masm) { // Register state for keyed IC load call (from ic-ia32.cc). - // ----------- S t a t e ------------- - // -- ecx : key - // -- edx : receiver - // ----------------------------------- - Generate_DebugBreakCallHelper(masm, ecx.bit() | edx.bit(), 0, false); + GenerateLoadICDebugBreak(masm); } -void Debug::GenerateKeyedStoreICDebugBreak(MacroAssembler* masm) { - // Register state for keyed IC load call (from ic-ia32.cc). - // ----------- S t a t e ------------- - // -- eax : value - // -- ecx : key - // -- edx : receiver - // ----------------------------------- +void DebugCodegen::GenerateKeyedStoreICDebugBreak(MacroAssembler* masm) { + // Register state for keyed IC store call (from ic-ia32.cc). + Register receiver = KeyedStoreIC::ReceiverRegister(); + Register name = KeyedStoreIC::NameRegister(); + Register value = KeyedStoreIC::ValueRegister(); Generate_DebugBreakCallHelper( - masm, eax.bit() | ecx.bit() | edx.bit(), 0, false); + masm, receiver.bit() | name.bit() | value.bit(), 0, false); } -void Debug::GenerateCompareNilICDebugBreak(MacroAssembler* masm) { +void DebugCodegen::GenerateCompareNilICDebugBreak(MacroAssembler* masm) { // Register state for CompareNil IC // ----------- S t a t e ------------- // -- eax : value @@ -236,7 +221,7 @@ void Debug::GenerateCompareNilICDebugBreak(MacroAssembler* masm) { } -void Debug::GenerateReturnDebugBreak(MacroAssembler* masm) { +void DebugCodegen::GenerateReturnDebugBreak(MacroAssembler* masm) { // Register state just before return from JS function (from codegen-ia32.cc). 
// ----------- S t a t e ------------- // -- eax: return value @@ -245,7 +230,7 @@ void Debug::GenerateReturnDebugBreak(MacroAssembler* masm) { } -void Debug::GenerateCallFunctionStubDebugBreak(MacroAssembler* masm) { +void DebugCodegen::GenerateCallFunctionStubDebugBreak(MacroAssembler* masm) { // Register state for CallFunctionStub (from code-stubs-ia32.cc). // ----------- S t a t e ------------- // -- edi: function @@ -254,7 +239,7 @@ void Debug::GenerateCallFunctionStubDebugBreak(MacroAssembler* masm) { } -void Debug::GenerateCallConstructStubDebugBreak(MacroAssembler* masm) { +void DebugCodegen::GenerateCallConstructStubDebugBreak(MacroAssembler* masm) { // Register state for CallConstructStub (from code-stubs-ia32.cc). // eax is the actual number of arguments not encoded as a smi see comment // above IC call. @@ -267,7 +252,8 @@ void Debug::GenerateCallConstructStubDebugBreak(MacroAssembler* masm) { } -void Debug::GenerateCallConstructStubRecordDebugBreak(MacroAssembler* masm) { +void DebugCodegen::GenerateCallConstructStubRecordDebugBreak( + MacroAssembler* masm) { // Register state for CallConstructStub (from code-stubs-ia32.cc). // eax is the actual number of arguments not encoded as a smi see comment // above IC call. @@ -283,33 +269,33 @@ void Debug::GenerateCallConstructStubRecordDebugBreak(MacroAssembler* masm) { } -void Debug::GenerateSlot(MacroAssembler* masm) { +void DebugCodegen::GenerateSlot(MacroAssembler* masm) { // Generate enough nop's to make space for a call instruction. Label check_codesize; __ bind(&check_codesize); __ RecordDebugBreakSlot(); __ Nop(Assembler::kDebugBreakSlotLength); - ASSERT_EQ(Assembler::kDebugBreakSlotLength, + DCHECK_EQ(Assembler::kDebugBreakSlotLength, masm->SizeOfCodeGeneratedSince(&check_codesize)); } -void Debug::GenerateSlotDebugBreak(MacroAssembler* masm) { +void DebugCodegen::GenerateSlotDebugBreak(MacroAssembler* masm) { // In the places where a debug break slot is inserted no registers can contain // object pointers. Generate_DebugBreakCallHelper(masm, 0, 0, true); } -void Debug::GeneratePlainReturnLiveEdit(MacroAssembler* masm) { +void DebugCodegen::GeneratePlainReturnLiveEdit(MacroAssembler* masm) { masm->ret(0); } -void Debug::GenerateFrameDropperLiveEdit(MacroAssembler* masm) { +void DebugCodegen::GenerateFrameDropperLiveEdit(MacroAssembler* masm) { ExternalReference restarter_frame_function_slot = - ExternalReference(Debug_Address::RestarterFrameFunctionPointer(), - masm->isolate()); + ExternalReference::debug_restarter_frame_function_pointer_address( + masm->isolate()); __ mov(Operand::StaticVariable(restarter_frame_function_slot), Immediate(0)); // We do not know our frame height, but set esp based on ebp. @@ -330,7 +316,8 @@ void Debug::GenerateFrameDropperLiveEdit(MacroAssembler* masm) { __ jmp(edx); } -const bool Debug::kFrameDropperSupported = true; + +const bool LiveEdit::kFrameDropperSupported = true; #undef __ diff --git a/deps/v8/src/ia32/deoptimizer-ia32.cc b/deps/v8/src/ia32/deoptimizer-ia32.cc index 6db045079da..5fac8859d56 100644 --- a/deps/v8/src/ia32/deoptimizer-ia32.cc +++ b/deps/v8/src/ia32/deoptimizer-ia32.cc @@ -2,14 +2,14 @@ // Use of this source code is governed by a BSD-style license that can be // found in the LICENSE file. 
-#include "v8.h" +#include "src/v8.h" #if V8_TARGET_ARCH_IA32 -#include "codegen.h" -#include "deoptimizer.h" -#include "full-codegen.h" -#include "safepoint-table.h" +#include "src/codegen.h" +#include "src/deoptimizer.h" +#include "src/full-codegen.h" +#include "src/safepoint-table.h" namespace v8 { namespace internal { @@ -35,7 +35,7 @@ void Deoptimizer::EnsureRelocSpaceForLazyDeoptimization(Handle<Code> code) { for (int i = 0; i < deopt_data->DeoptCount(); i++) { int pc_offset = deopt_data->Pc(i)->value(); if (pc_offset == -1) continue; - ASSERT_GE(pc_offset, prev_pc_offset); + DCHECK_GE(pc_offset, prev_pc_offset); int pc_delta = pc_offset - prev_pc_offset; // We use RUNTIME_ENTRY reloc info which has a size of 2 bytes // if encodable with small pc delta encoding and up to 6 bytes @@ -67,9 +67,8 @@ void Deoptimizer::EnsureRelocSpaceForLazyDeoptimization(Handle<Code> code) { Factory* factory = isolate->factory(); Handle<ByteArray> new_reloc = factory->NewByteArray(reloc_length + padding, TENURED); - OS::MemCopy(new_reloc->GetDataStartAddress() + padding, - code->relocation_info()->GetDataStartAddress(), - reloc_length); + MemCopy(new_reloc->GetDataStartAddress() + padding, + code->relocation_info()->GetDataStartAddress(), reloc_length); // Create a relocation writer to write the comments in the padding // space. Use position 0 for everything to ensure short encoding. RelocInfoWriter reloc_info_writer( @@ -82,7 +81,7 @@ void Deoptimizer::EnsureRelocSpaceForLazyDeoptimization(Handle<Code> code) { byte* pos_before = reloc_info_writer.pos(); #endif reloc_info_writer.Write(&rinfo); - ASSERT(RelocInfo::kMinRelocCommentSize == + DCHECK(RelocInfo::kMinRelocCommentSize == pos_before - reloc_info_writer.pos()); } // Replace relocation information on the code object. @@ -129,9 +128,6 @@ void Deoptimizer::PatchCodeForDeoptimization(Isolate* isolate, Code* code) { // Emit call to lazy deoptimization at all lazy deopt points. DeoptimizationInputData* deopt_data = DeoptimizationInputData::cast(code->deoptimization_data()); - SharedFunctionInfo* shared = - SharedFunctionInfo::cast(deopt_data->SharedFunctionInfo()); - shared->EvictFromOptimizedCodeMap(code, "deoptimized code"); #ifdef DEBUG Address prev_call_address = NULL; #endif @@ -150,11 +146,11 @@ void Deoptimizer::PatchCodeForDeoptimization(Isolate* isolate, Code* code) { reinterpret_cast<intptr_t>(deopt_entry), NULL); reloc_info_writer.Write(&rinfo); - ASSERT_GE(reloc_info_writer.pos(), + DCHECK_GE(reloc_info_writer.pos(), reloc_info->address() + ByteArray::kHeaderSize); - ASSERT(prev_call_address == NULL || + DCHECK(prev_call_address == NULL || call_address >= prev_call_address + patch_size()); - ASSERT(call_address + patch_size() <= code->instruction_end()); + DCHECK(call_address + patch_size() <= code->instruction_end()); #ifdef DEBUG prev_call_address = call_address; #endif @@ -162,8 +158,7 @@ void Deoptimizer::PatchCodeForDeoptimization(Isolate* isolate, Code* code) { // Move the relocation info to the beginning of the byte array. int new_reloc_size = reloc_end_address - reloc_info_writer.pos(); - OS::MemMove( - code->relocation_start(), reloc_info_writer.pos(), new_reloc_size); + MemMove(code->relocation_start(), reloc_info_writer.pos(), new_reloc_size); // The relocation info is in place, update the size. reloc_info->set_length(new_reloc_size); @@ -171,7 +166,7 @@ void Deoptimizer::PatchCodeForDeoptimization(Isolate* isolate, Code* code) { // Handle the junk part after the new relocation info. 
We will create // a non-live object in the extra space at the end of the former reloc info. Address junk_address = reloc_info->address() + reloc_info->Size(); - ASSERT(junk_address <= reloc_end_address); + DCHECK(junk_address <= reloc_end_address); isolate->heap()->CreateFillerObjectAt(junk_address, reloc_end_address - junk_address); } @@ -187,7 +182,7 @@ void Deoptimizer::FillInputFrame(Address tos, JavaScriptFrame* frame) { } input_->SetRegister(esp.code(), reinterpret_cast<intptr_t>(frame->sp())); input_->SetRegister(ebp.code(), reinterpret_cast<intptr_t>(frame->fp())); - for (int i = 0; i < DoubleRegister::NumAllocatableRegisters(); i++) { + for (int i = 0; i < XMMRegister::kMaxNumAllocatableRegisters; i++) { input_->SetDoubleRegister(i, 0.0); } @@ -201,7 +196,7 @@ void Deoptimizer::FillInputFrame(Address tos, JavaScriptFrame* frame) { void Deoptimizer::SetPlatformCompiledStubRegisters( FrameDescription* output_frame, CodeStubInterfaceDescriptor* descriptor) { intptr_t handler = - reinterpret_cast<intptr_t>(descriptor->deoptimization_handler_); + reinterpret_cast<intptr_t>(descriptor->deoptimization_handler()); int params = descriptor->GetHandlerParameterCount(); output_frame->SetRegister(eax.code(), params); output_frame->SetRegister(ebx.code(), handler); @@ -209,8 +204,7 @@ void Deoptimizer::SetPlatformCompiledStubRegisters( void Deoptimizer::CopyDoubleRegisters(FrameDescription* output_frame) { - if (!CpuFeatures::IsSupported(SSE2)) return; - for (int i = 0; i < XMMRegister::kNumAllocatableRegisters; ++i) { + for (int i = 0; i < XMMRegister::kMaxNumAllocatableRegisters; ++i) { double double_value = input_->GetDoubleRegister(i); output_frame->SetDoubleRegister(i, double_value); } @@ -224,20 +218,13 @@ bool Deoptimizer::HasAlignmentPadding(JSFunction* function) { input_frame_size - parameter_count * kPointerSize - StandardFrameConstants::kFixedFrameSize - kPointerSize; - ASSERT(JavaScriptFrameConstants::kDynamicAlignmentStateOffset == + DCHECK(JavaScriptFrameConstants::kDynamicAlignmentStateOffset == JavaScriptFrameConstants::kLocal0Offset); int32_t alignment_state = input_->GetFrameSlot(alignment_state_offset); return (alignment_state == kAlignmentPaddingPushed); } -Code* Deoptimizer::NotifyStubFailureBuiltin() { - Builtins::Name name = CpuFeatures::IsSupported(SSE2) ? 
- Builtins::kNotifyStubFailureSaveDoubles : Builtins::kNotifyStubFailure; - return isolate_->builtins()->builtin(name); -} - - #define __ masm()-> void Deoptimizer::EntryGenerator::Generate() { @@ -247,15 +234,12 @@ void Deoptimizer::EntryGenerator::Generate() { const int kNumberOfRegisters = Register::kNumRegisters; const int kDoubleRegsSize = kDoubleSize * - XMMRegister::kNumAllocatableRegisters; + XMMRegister::kMaxNumAllocatableRegisters; __ sub(esp, Immediate(kDoubleRegsSize)); - if (CpuFeatures::IsSupported(SSE2)) { - CpuFeatureScope scope(masm(), SSE2); - for (int i = 0; i < XMMRegister::kNumAllocatableRegisters; ++i) { - XMMRegister xmm_reg = XMMRegister::FromAllocationIndex(i); - int offset = i * kDoubleSize; - __ movsd(Operand(esp, offset), xmm_reg); - } + for (int i = 0; i < XMMRegister::kMaxNumAllocatableRegisters; ++i) { + XMMRegister xmm_reg = XMMRegister::FromAllocationIndex(i); + int offset = i * kDoubleSize; + __ movsd(Operand(esp, offset), xmm_reg); } __ pushad(); @@ -300,15 +284,12 @@ void Deoptimizer::EntryGenerator::Generate() { } int double_regs_offset = FrameDescription::double_registers_offset(); - if (CpuFeatures::IsSupported(SSE2)) { - CpuFeatureScope scope(masm(), SSE2); - // Fill in the double input registers. - for (int i = 0; i < XMMRegister::kNumAllocatableRegisters; ++i) { - int dst_offset = i * kDoubleSize + double_regs_offset; - int src_offset = i * kDoubleSize; - __ movsd(xmm0, Operand(esp, src_offset)); - __ movsd(Operand(ebx, dst_offset), xmm0); - } + // Fill in the double input registers. + for (int i = 0; i < XMMRegister::kMaxNumAllocatableRegisters; ++i) { + int dst_offset = i * kDoubleSize + double_regs_offset; + int src_offset = i * kDoubleSize; + __ movsd(xmm0, Operand(esp, src_offset)); + __ movsd(Operand(ebx, dst_offset), xmm0); } // Clear FPU all exceptions. @@ -387,13 +368,10 @@ void Deoptimizer::EntryGenerator::Generate() { __ j(below, &outer_push_loop); // In case of a failed STUB, we have to restore the XMM registers. - if (CpuFeatures::IsSupported(SSE2)) { - CpuFeatureScope scope(masm(), SSE2); - for (int i = 0; i < XMMRegister::kNumAllocatableRegisters; ++i) { - XMMRegister xmm_reg = XMMRegister::FromAllocationIndex(i); - int src_offset = i * kDoubleSize + double_regs_offset; - __ movsd(xmm_reg, Operand(ebx, src_offset)); - } + for (int i = 0; i < XMMRegister::kMaxNumAllocatableRegisters; ++i) { + XMMRegister xmm_reg = XMMRegister::FromAllocationIndex(i); + int src_offset = i * kDoubleSize + double_regs_offset; + __ movsd(xmm_reg, Operand(ebx, src_offset)); } // Push state, pc, and continuation from the last output frame. @@ -424,7 +402,7 @@ void Deoptimizer::TableEntryGenerator::GeneratePrologue() { USE(start); __ push_imm32(i); __ jmp(&done); - ASSERT(masm()->pc_offset() - start == table_entry_size_); + DCHECK(masm()->pc_offset() - start == table_entry_size_); } __ bind(&done); } diff --git a/deps/v8/src/ia32/disasm-ia32.cc b/deps/v8/src/ia32/disasm-ia32.cc index 721b0bb429e..22c2a55c17b 100644 --- a/deps/v8/src/ia32/disasm-ia32.cc +++ b/deps/v8/src/ia32/disasm-ia32.cc @@ -3,14 +3,14 @@ // found in the LICENSE file. 
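The disassembler hunks below rework decoding of the 0xF7 group and the shift/rotate group (0xC1/0xD1/0xD3) so memory operands go through PrintRightOperand instead of being limited to register forms. All of them key off the ModR/M byte; for reference (an illustrative sketch, not taken from the patch), its three fields unpack as:

#include <cstdint>

// ModR/M packs: mod (bits 7-6, addressing mode), regop (bits 5-3, the
// opcode extension for groups like 0xF7 and the shifts), and rm
// (bits 2-0, the register or memory operand selector).
static inline void UnpackModRM(uint8_t modrm, int* mod, int* regop, int* rm) {
  *mod = (modrm >> 6) & 0x3;
  *regop = (modrm >> 3) & 0x7;
  *rm = modrm & 0x7;
}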
#include <assert.h> -#include <stdio.h> #include <stdarg.h> +#include <stdio.h> -#include "v8.h" +#include "src/v8.h" #if V8_TARGET_ARCH_IA32 -#include "disasm.h" +#include "src/disasm.h" namespace disasm { @@ -211,7 +211,7 @@ void InstructionTable::CopyTable(const ByteMnemonic bm[], InstructionDesc* id = &instructions_[bm[i].b]; id->mnem = bm[i].mnem; id->op_order_ = bm[i].op_order_; - ASSERT_EQ(NO_INSTR, id->type); // Information not already entered. + DCHECK_EQ(NO_INSTR, id->type); // Information not already entered. id->type = type; } } @@ -223,7 +223,7 @@ void InstructionTable::SetTableRange(InstructionType type, const char* mnem) { for (byte b = start; b <= end; b++) { InstructionDesc* id = &instructions_[b]; - ASSERT_EQ(NO_INSTR, id->type); // Information not already entered. + DCHECK_EQ(NO_INSTR, id->type); // Information not already entered. id->mnem = mnem; id->type = type; } @@ -233,7 +233,7 @@ void InstructionTable::SetTableRange(InstructionType type, void InstructionTable::AddJumpConditionalShort() { for (byte b = 0x70; b <= 0x7F; b++) { InstructionDesc* id = &instructions_[b]; - ASSERT_EQ(NO_INSTR, id->type); // Information not already entered. + DCHECK_EQ(NO_INSTR, id->type); // Information not already entered. id->mnem = jump_conditional_mnem[b & 0x0F]; id->type = JUMP_CONDITIONAL_SHORT_INSTR; } @@ -357,7 +357,7 @@ void DisassemblerIA32::AppendToBuffer(const char* format, ...) { v8::internal::Vector<char> buf = tmp_buffer_ + tmp_buffer_pos_; va_list args; va_start(args, format); - int result = v8::internal::OS::VSNPrintF(buf, format, args); + int result = v8::internal::VSNPrintF(buf, format, args); va_end(args); tmp_buffer_pos_ += result; } @@ -528,84 +528,101 @@ int DisassemblerIA32::PrintImmediateOp(byte* data) { // Returns number of bytes used, including *data. 
int DisassemblerIA32::F7Instruction(byte* data) { - ASSERT_EQ(0xF7, *data); - byte modrm = *(data+1); + DCHECK_EQ(0xF7, *data); + byte modrm = *++data; int mod, regop, rm; get_modrm(modrm, &mod, ®op, &rm); - if (mod == 3 && regop != 0) { - const char* mnem = NULL; - switch (regop) { - case 2: mnem = "not"; break; - case 3: mnem = "neg"; break; - case 4: mnem = "mul"; break; - case 5: mnem = "imul"; break; - case 7: mnem = "idiv"; break; - default: UnimplementedInstruction(); - } - AppendToBuffer("%s %s", mnem, NameOfCPURegister(rm)); - return 2; - } else if (mod == 3 && regop == eax) { - int32_t imm = *reinterpret_cast<int32_t*>(data+2); - AppendToBuffer("test %s,0x%x", NameOfCPURegister(rm), imm); - return 6; - } else if (regop == eax) { - AppendToBuffer("test "); - int count = PrintRightOperand(data+1); - int32_t imm = *reinterpret_cast<int32_t*>(data+1+count); - AppendToBuffer(",0x%x", imm); - return 1+count+4 /*int32_t*/; - } else { - UnimplementedInstruction(); - return 2; + const char* mnem = NULL; + switch (regop) { + case 0: + mnem = "test"; + break; + case 2: + mnem = "not"; + break; + case 3: + mnem = "neg"; + break; + case 4: + mnem = "mul"; + break; + case 5: + mnem = "imul"; + break; + case 6: + mnem = "div"; + break; + case 7: + mnem = "idiv"; + break; + default: + UnimplementedInstruction(); + } + AppendToBuffer("%s ", mnem); + int count = PrintRightOperand(data); + if (regop == 0) { + AppendToBuffer(",0x%x", *reinterpret_cast<int32_t*>(data + count)); + count += 4; } + return 1 + count; } int DisassemblerIA32::D1D3C1Instruction(byte* data) { byte op = *data; - ASSERT(op == 0xD1 || op == 0xD3 || op == 0xC1); - byte modrm = *(data+1); + DCHECK(op == 0xD1 || op == 0xD3 || op == 0xC1); + byte modrm = *++data; int mod, regop, rm; get_modrm(modrm, &mod, ®op, &rm); int imm8 = -1; - int num_bytes = 2; - if (mod == 3) { - const char* mnem = NULL; - switch (regop) { - case kROL: mnem = "rol"; break; - case kROR: mnem = "ror"; break; - case kRCL: mnem = "rcl"; break; - case kRCR: mnem = "rcr"; break; - case kSHL: mnem = "shl"; break; - case KSHR: mnem = "shr"; break; - case kSAR: mnem = "sar"; break; - default: UnimplementedInstruction(); - } - if (op == 0xD1) { - imm8 = 1; - } else if (op == 0xC1) { - imm8 = *(data+2); - num_bytes = 3; - } else if (op == 0xD3) { - // Shift/rotate by cl. - } - ASSERT_NE(NULL, mnem); - AppendToBuffer("%s %s,", mnem, NameOfCPURegister(rm)); - if (imm8 >= 0) { - AppendToBuffer("%d", imm8); - } else { - AppendToBuffer("cl"); - } + const char* mnem = NULL; + switch (regop) { + case kROL: + mnem = "rol"; + break; + case kROR: + mnem = "ror"; + break; + case kRCL: + mnem = "rcl"; + break; + case kRCR: + mnem = "rcr"; + break; + case kSHL: + mnem = "shl"; + break; + case KSHR: + mnem = "shr"; + break; + case kSAR: + mnem = "sar"; + break; + default: + UnimplementedInstruction(); + } + AppendToBuffer("%s ", mnem); + int count = PrintRightOperand(data); + if (op == 0xD1) { + imm8 = 1; + } else if (op == 0xC1) { + imm8 = *(data + 1); + count++; + } else if (op == 0xD3) { + // Shift/rotate by cl. + } + if (imm8 >= 0) { + AppendToBuffer(",%d", imm8); } else { - UnimplementedInstruction(); + AppendToBuffer(",cl"); } - return num_bytes; + return 1 + count; } // Returns number of bytes used, including *data. 
int DisassemblerIA32::JumpShort(byte* data) { - ASSERT_EQ(0xEB, *data); + DCHECK_EQ(0xEB, *data); byte b = *(data+1); byte* dest = data + static_cast<int8_t>(b) + 2; AppendToBuffer("jmp %s", NameOfAddress(dest)); @@ -615,7 +632,7 @@ int DisassemblerIA32::JumpShort(byte* data) { // Returns number of bytes used, including *data. int DisassemblerIA32::JumpConditional(byte* data, const char* comment) { - ASSERT_EQ(0x0F, *data); + DCHECK_EQ(0x0F, *data); byte cond = *(data+1) & 0x0F; byte* dest = data + *reinterpret_cast<int32_t*>(data+2) + 6; const char* mnem = jump_conditional_mnem[cond]; @@ -643,7 +660,7 @@ int DisassemblerIA32::JumpConditionalShort(byte* data, const char* comment) { // Returns number of bytes used, including *data. int DisassemblerIA32::SetCC(byte* data) { - ASSERT_EQ(0x0F, *data); + DCHECK_EQ(0x0F, *data); byte cond = *(data+1) & 0x0F; const char* mnem = set_conditional_mnem[cond]; AppendToBuffer("%s ", mnem); @@ -654,7 +671,7 @@ int DisassemblerIA32::SetCC(byte* data) { // Returns number of bytes used, including *data. int DisassemblerIA32::CMov(byte* data) { - ASSERT_EQ(0x0F, *data); + DCHECK_EQ(0x0F, *data); byte cond = *(data + 1) & 0x0F; const char* mnem = conditional_move_mnem[cond]; int op_size = PrintOperands(mnem, REG_OPER_OP_ORDER, data + 2); @@ -665,7 +682,7 @@ int DisassemblerIA32::CMov(byte* data) { // Returns number of bytes used, including *data. int DisassemblerIA32::FPUInstruction(byte* data) { byte escape_opcode = *data; - ASSERT_EQ(0xD8, escape_opcode & 0xF8); + DCHECK_EQ(0xD8, escape_opcode & 0xF8); byte modrm_byte = *(data+1); if (modrm_byte >= 0xC0) { @@ -954,17 +971,18 @@ int DisassemblerIA32::InstructionDecode(v8::internal::Vector<char> out_buffer, data += 3; break; - case 0x69: // fall through - case 0x6B: - { int mod, regop, rm; - get_modrm(*(data+1), &mod, ®op, &rm); - int32_t imm = - *data == 0x6B ? *(data+2) : *reinterpret_cast<int32_t*>(data+2); - AppendToBuffer("imul %s,%s,0x%x", - NameOfCPURegister(regop), - NameOfCPURegister(rm), - imm); - data += 2 + (*data == 0x6B ? 1 : 4); + case 0x6B: { + data++; + data += PrintOperands("imul", REG_OPER_OP_ORDER, data); + AppendToBuffer(",%d", *data); + data++; + } break; + + case 0x69: { + data++; + data += PrintOperands("imul", REG_OPER_OP_ORDER, data); + AppendToBuffer(",%d", *reinterpret_cast<int32_t*>(data)); + data += 4; } break; @@ -1373,7 +1391,7 @@ int DisassemblerIA32::InstructionDecode(v8::internal::Vector<char> out_buffer, int mod, regop, rm; get_modrm(*data, &mod, ®op, &rm); int8_t imm8 = static_cast<int8_t>(data[1]); - ASSERT(regop == esi || regop == edx); + DCHECK(regop == esi || regop == edx); AppendToBuffer("%s %s,%d", (regop == esi) ? "psllq" : "psrlq", NameOfXMMRegister(rm), @@ -1640,23 +1658,22 @@ int DisassemblerIA32::InstructionDecode(v8::internal::Vector<char> out_buffer, if (instr_len == 0) { printf("%02x", *data); } - ASSERT(instr_len > 0); // Ensure progress. + DCHECK(instr_len > 0); // Ensure progress. int outp = 0; // Instruction bytes. 
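The hunk that follows swaps OS::SNPrintF for the free function v8::internal::SNPrintF, matching the OS::MemCopy and OS::MemMove changes earlier in this diff: upstream moved these utilities out of the OS class. A hypothetical sketch of such a wrapper over the C library (illustrative only; the real SNPrintF takes a Vector<char> rather than a raw buffer):

#include <cstdarg>
#include <cstddef>
#include <cstdio>

// Printf into a fixed buffer, returning the number of characters
// written; a thin wrapper so callers need not touch va_list directly.
static int SNPrintFSketch(char* buf, size_t size, const char* format, ...) {
  va_list args;
  va_start(args, format);
  int result = std::vsnprintf(buf, size, format, args);
  va_end(args);
  return result;
}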
for (byte* bp = instr; bp < data; bp++) { - outp += v8::internal::OS::SNPrintF(out_buffer + outp, - "%02x", - *bp); + outp += v8::internal::SNPrintF(out_buffer + outp, + "%02x", + *bp); } for (int i = 6 - instr_len; i >= 0; i--) { - outp += v8::internal::OS::SNPrintF(out_buffer + outp, - " "); + outp += v8::internal::SNPrintF(out_buffer + outp, " "); } - outp += v8::internal::OS::SNPrintF(out_buffer + outp, - " %s", - tmp_buffer_.start()); + outp += v8::internal::SNPrintF(out_buffer + outp, + " %s", + tmp_buffer_.start()); return instr_len; } // NOLINT (function is too long) @@ -1680,7 +1697,7 @@ static const char* xmm_regs[8] = { const char* NameConverter::NameOfAddress(byte* addr) const { - v8::internal::OS::SNPrintF(tmp_buffer_, "%p", addr); + v8::internal::SNPrintF(tmp_buffer_, "%p", addr); return tmp_buffer_.start(); } diff --git a/deps/v8/src/ia32/frames-ia32.cc b/deps/v8/src/ia32/frames-ia32.cc index 55671ba8fda..18f19605588 100644 --- a/deps/v8/src/ia32/frames-ia32.cc +++ b/deps/v8/src/ia32/frames-ia32.cc @@ -2,14 +2,14 @@ // Use of this source code is governed by a BSD-style license that can be // found in the LICENSE file. -#include "v8.h" +#include "src/v8.h" #if V8_TARGET_ARCH_IA32 -#include "assembler.h" -#include "assembler-ia32.h" -#include "assembler-ia32-inl.h" -#include "frames.h" +#include "src/assembler.h" +#include "src/frames.h" +#include "src/ia32/assembler-ia32-inl.h" +#include "src/ia32/assembler-ia32.h" namespace v8 { namespace internal { diff --git a/deps/v8/src/ia32/frames-ia32.h b/deps/v8/src/ia32/frames-ia32.h index fcfabda8b6b..1290ad6e09d 100644 --- a/deps/v8/src/ia32/frames-ia32.h +++ b/deps/v8/src/ia32/frames-ia32.h @@ -24,8 +24,6 @@ const RegList kJSCallerSaved = const int kNumJSCallerSaved = 5; -typedef Object* JSCallerSavedBuffer[kNumJSCallerSaved]; - // Number of registers for which space is reserved in safepoints. const int kNumSafepointRegisters = 8; diff --git a/deps/v8/src/ia32/full-codegen-ia32.cc b/deps/v8/src/ia32/full-codegen-ia32.cc index 63c3ee6014c..aacaeeb6a6b 100644 --- a/deps/v8/src/ia32/full-codegen-ia32.cc +++ b/deps/v8/src/ia32/full-codegen-ia32.cc @@ -2,19 +2,19 @@ // Use of this source code is governed by a BSD-style license that can be // found in the LICENSE file. -#include "v8.h" +#include "src/v8.h" #if V8_TARGET_ARCH_IA32 -#include "code-stubs.h" -#include "codegen.h" -#include "compiler.h" -#include "debug.h" -#include "full-codegen.h" -#include "isolate-inl.h" -#include "parser.h" -#include "scopes.h" -#include "stub-cache.h" +#include "src/code-stubs.h" +#include "src/codegen.h" +#include "src/compiler.h" +#include "src/debug.h" +#include "src/full-codegen.h" +#include "src/isolate-inl.h" +#include "src/parser.h" +#include "src/scopes.h" +#include "src/stub-cache.h" namespace v8 { namespace internal { @@ -31,7 +31,7 @@ class JumpPatchSite BASE_EMBEDDED { } ~JumpPatchSite() { - ASSERT(patch_site_.is_bound() == info_emitted_); + DCHECK(patch_site_.is_bound() == info_emitted_); } void EmitJumpIfNotSmi(Register reg, @@ -51,7 +51,7 @@ class JumpPatchSite BASE_EMBEDDED { void EmitPatchInfo() { if (patch_site_.is_bound()) { int delta_to_patch_site = masm_->SizeOfCodeGeneratedSince(&patch_site_); - ASSERT(is_uint8(delta_to_patch_site)); + DCHECK(is_uint8(delta_to_patch_site)); __ test(eax, Immediate(delta_to_patch_site)); #ifdef DEBUG info_emitted_ = true; @@ -64,8 +64,8 @@ class JumpPatchSite BASE_EMBEDDED { private: // jc will be patched with jz, jnc will become jnz. 
void EmitJump(Condition cc, Label* target, Label::Distance distance) { - ASSERT(!patch_site_.is_bound() && !info_emitted_); - ASSERT(cc == carry || cc == not_carry); + DCHECK(!patch_site_.is_bound() && !info_emitted_); + DCHECK(cc == carry || cc == not_carry); __ bind(&patch_site_); __ j(cc, target, distance); } @@ -78,27 +78,6 @@ class JumpPatchSite BASE_EMBEDDED { }; -static void EmitStackCheck(MacroAssembler* masm_, - int pointers = 0, - Register scratch = esp) { - Label ok; - Isolate* isolate = masm_->isolate(); - ASSERT(scratch.is(esp) == (pointers == 0)); - ExternalReference stack_limit; - if (pointers != 0) { - __ mov(scratch, esp); - __ sub(scratch, Immediate(pointers * kPointerSize)); - stack_limit = ExternalReference::address_of_real_stack_limit(isolate); - } else { - stack_limit = ExternalReference::address_of_stack_limit(isolate); - } - __ cmp(scratch, Operand::StaticVariable(stack_limit)); - __ j(above_equal, &ok, Label::kNear); - __ call(isolate->builtins()->StackCheck(), RelocInfo::CODE_TARGET); - __ bind(&ok); -} - - // Generate code for a JS function. On entry to the function the receiver // and arguments have been pushed on the stack left to right, with the // return address on top of them. The actual argument count matches the @@ -144,7 +123,7 @@ void FullCodeGenerator::Generate() { __ j(not_equal, &ok, Label::kNear); __ mov(ecx, GlobalObjectOperand()); - __ mov(ecx, FieldOperand(ecx, GlobalObject::kGlobalReceiverOffset)); + __ mov(ecx, FieldOperand(ecx, GlobalObject::kGlobalProxyOffset)); __ mov(Operand(esp, receiver_offset), ecx); @@ -157,18 +136,26 @@ void FullCodeGenerator::Generate() { FrameScope frame_scope(masm_, StackFrame::MANUAL); info->set_prologue_offset(masm_->pc_offset()); - __ Prologue(BUILD_FUNCTION_FRAME); + __ Prologue(info->IsCodePreAgingActive()); info->AddNoFrameRange(0, masm_->pc_offset()); { Comment cmnt(masm_, "[ Allocate locals"); int locals_count = info->scope()->num_stack_slots(); // Generators allocate locals, if any, in context slots. - ASSERT(!info->function()->is_generator() || locals_count == 0); + DCHECK(!info->function()->is_generator() || locals_count == 0); if (locals_count == 1) { __ push(Immediate(isolate()->factory()->undefined_value())); } else if (locals_count > 1) { if (locals_count >= 128) { - EmitStackCheck(masm_, locals_count, ecx); + Label ok; + __ mov(ecx, esp); + __ sub(ecx, Immediate(locals_count * kPointerSize)); + ExternalReference stack_limit = + ExternalReference::address_of_real_stack_limit(isolate()); + __ cmp(ecx, Operand::StaticVariable(stack_limit)); + __ j(above_equal, &ok, Label::kNear); + __ InvokeBuiltin(Builtins::STACK_OVERFLOW, CALL_FUNCTION); + __ bind(&ok); } __ mov(eax, Immediate(isolate()->factory()->undefined_value())); const int kMaxPushes = 32; @@ -198,17 +185,20 @@ void FullCodeGenerator::Generate() { int heap_slots = info->scope()->num_heap_slots() - Context::MIN_CONTEXT_SLOTS; if (heap_slots > 0) { Comment cmnt(masm_, "[ Allocate context"); + bool need_write_barrier = true; // Argument to NewContext is the function, which is still in edi. if (FLAG_harmony_scoping && info->scope()->is_global_scope()) { __ push(edi); __ Push(info->scope()->GetScopeInfo()); - __ CallRuntime(Runtime::kHiddenNewGlobalContext, 2); + __ CallRuntime(Runtime::kNewGlobalContext, 2); } else if (heap_slots <= FastNewContextStub::kMaximumSlots) { FastNewContextStub stub(isolate(), heap_slots); __ CallStub(&stub); + // Result of FastNewContextStub is always in new space. 
+ need_write_barrier = false; } else { __ push(edi); - __ CallRuntime(Runtime::kHiddenNewFunctionContext, 1); + __ CallRuntime(Runtime::kNewFunctionContext, 1); } function_in_register = false; // Context is returned in eax. It replaces the context passed to us. @@ -229,11 +219,18 @@ void FullCodeGenerator::Generate() { int context_offset = Context::SlotOffset(var->index()); __ mov(Operand(esi, context_offset), eax); // Update the write barrier. This clobbers eax and ebx. - __ RecordWriteContextSlot(esi, - context_offset, - eax, - ebx, - kDontSaveFPRegs); + if (need_write_barrier) { + __ RecordWriteContextSlot(esi, + context_offset, + eax, + ebx, + kDontSaveFPRegs); + } else if (FLAG_debug_code) { + Label done; + __ JumpIfInNewSpace(esi, eax, &done, Label::kNear); + __ Abort(kExpectedNewSpaceObject); + __ bind(&done); + } } } } @@ -289,9 +286,9 @@ void FullCodeGenerator::Generate() { // constant. if (scope()->is_function_scope() && scope()->function() != NULL) { VariableDeclaration* function = scope()->function(); - ASSERT(function->proxy()->var()->mode() == CONST || + DCHECK(function->proxy()->var()->mode() == CONST || function->proxy()->var()->mode() == CONST_LEGACY); - ASSERT(function->proxy()->var()->location() != Variable::UNALLOCATED); + DCHECK(function->proxy()->var()->location() != Variable::UNALLOCATED); VisitVariableDeclaration(function); } VisitDeclarations(scope()->declarations()); @@ -299,13 +296,19 @@ void FullCodeGenerator::Generate() { { Comment cmnt(masm_, "[ Stack check"); PrepareForBailoutForId(BailoutId::Declarations(), NO_REGISTERS); - EmitStackCheck(masm_); + Label ok; + ExternalReference stack_limit + = ExternalReference::address_of_stack_limit(isolate()); + __ cmp(esp, Operand::StaticVariable(stack_limit)); + __ j(above_equal, &ok, Label::kNear); + __ call(isolate()->builtins()->StackCheck(), RelocInfo::CODE_TARGET); + __ bind(&ok); } { Comment cmnt(masm_, "[ Body"); - ASSERT(loop_depth() == 0); + DCHECK(loop_depth() == 0); VisitStatements(function()->body()); - ASSERT(loop_depth() == 0); + DCHECK(loop_depth() == 0); } } @@ -343,7 +346,7 @@ void FullCodeGenerator::EmitBackEdgeBookkeeping(IterationStatement* stmt, Comment cmnt(masm_, "[ Back edge bookkeeping"); Label ok; - ASSERT(back_edge_target->is_bound()); + DCHECK(back_edge_target->is_bound()); int distance = masm_->SizeOfCodeGeneratedSince(back_edge_target); int weight = Min(kMaxBackEdgeWeight, Max(1, distance / kCodeSizeMultiplier)); @@ -413,7 +416,7 @@ void FullCodeGenerator::EmitReturnSequence() { __ Ret(arguments_bytes, ecx); // Check that the size of the code used for returning is large enough // for the debugger's requirements. 
- ASSERT(Assembler::kJSReturnSequenceLength <= + DCHECK(Assembler::kJSReturnSequenceLength <= masm_->SizeOfCodeGeneratedSince(&check_exit_codesize)); info_->AddNoFrameRange(no_frame_start, masm_->pc_offset()); } @@ -421,18 +424,18 @@ void FullCodeGenerator::EmitReturnSequence() { void FullCodeGenerator::EffectContext::Plug(Variable* var) const { - ASSERT(var->IsStackAllocated() || var->IsContextSlot()); + DCHECK(var->IsStackAllocated() || var->IsContextSlot()); } void FullCodeGenerator::AccumulatorValueContext::Plug(Variable* var) const { - ASSERT(var->IsStackAllocated() || var->IsContextSlot()); + DCHECK(var->IsStackAllocated() || var->IsContextSlot()); codegen()->GetVar(result_register(), var); } void FullCodeGenerator::StackValueContext::Plug(Variable* var) const { - ASSERT(var->IsStackAllocated() || var->IsContextSlot()); + DCHECK(var->IsStackAllocated() || var->IsContextSlot()); MemOperand operand = codegen()->VarOperand(var, result_register()); // Memory operands can be pushed directly. __ push(operand); @@ -497,7 +500,7 @@ void FullCodeGenerator::TestContext::Plug(Handle<Object> lit) const { true, true_label_, false_label_); - ASSERT(!lit->IsUndetectableObject()); // There are no undetectable literals. + DCHECK(!lit->IsUndetectableObject()); // There are no undetectable literals. if (lit->IsUndefined() || lit->IsNull() || lit->IsFalse()) { if (false_label_ != fall_through_) __ jmp(false_label_); } else if (lit->IsTrue() || lit->IsJSObject()) { @@ -524,7 +527,7 @@ void FullCodeGenerator::TestContext::Plug(Handle<Object> lit) const { void FullCodeGenerator::EffectContext::DropAndPlug(int count, Register reg) const { - ASSERT(count > 0); + DCHECK(count > 0); __ Drop(count); } @@ -532,7 +535,7 @@ void FullCodeGenerator::EffectContext::DropAndPlug(int count, void FullCodeGenerator::AccumulatorValueContext::DropAndPlug( int count, Register reg) const { - ASSERT(count > 0); + DCHECK(count > 0); __ Drop(count); __ Move(result_register(), reg); } @@ -540,7 +543,7 @@ void FullCodeGenerator::AccumulatorValueContext::DropAndPlug( void FullCodeGenerator::StackValueContext::DropAndPlug(int count, Register reg) const { - ASSERT(count > 0); + DCHECK(count > 0); if (count > 1) __ Drop(count - 1); __ mov(Operand(esp, 0), reg); } @@ -548,7 +551,7 @@ void FullCodeGenerator::StackValueContext::DropAndPlug(int count, void FullCodeGenerator::TestContext::DropAndPlug(int count, Register reg) const { - ASSERT(count > 0); + DCHECK(count > 0); // For simplicity we always test the accumulator register. __ Drop(count); __ Move(result_register(), reg); @@ -559,7 +562,7 @@ void FullCodeGenerator::TestContext::DropAndPlug(int count, void FullCodeGenerator::EffectContext::Plug(Label* materialize_true, Label* materialize_false) const { - ASSERT(materialize_true == materialize_false); + DCHECK(materialize_true == materialize_false); __ bind(materialize_true); } @@ -592,8 +595,8 @@ void FullCodeGenerator::StackValueContext::Plug( void FullCodeGenerator::TestContext::Plug(Label* materialize_true, Label* materialize_false) const { - ASSERT(materialize_true == true_label_); - ASSERT(materialize_false == false_label_); + DCHECK(materialize_true == true_label_); + DCHECK(materialize_false == false_label_); } @@ -658,7 +661,7 @@ void FullCodeGenerator::Split(Condition cc, MemOperand FullCodeGenerator::StackOperand(Variable* var) { - ASSERT(var->IsStackAllocated()); + DCHECK(var->IsStackAllocated()); // Offset is negative because higher indexes are at lower addresses. 
int offset = -var->index() * kPointerSize; // Adjust by a (parameter or local) base offset. @@ -672,7 +675,7 @@ MemOperand FullCodeGenerator::StackOperand(Variable* var) { MemOperand FullCodeGenerator::VarOperand(Variable* var, Register scratch) { - ASSERT(var->IsContextSlot() || var->IsStackAllocated()); + DCHECK(var->IsContextSlot() || var->IsStackAllocated()); if (var->IsContextSlot()) { int context_chain_length = scope()->ContextChainLength(var->scope()); __ LoadContext(scratch, context_chain_length); @@ -684,7 +687,7 @@ MemOperand FullCodeGenerator::VarOperand(Variable* var, Register scratch) { void FullCodeGenerator::GetVar(Register dest, Variable* var) { - ASSERT(var->IsContextSlot() || var->IsStackAllocated()); + DCHECK(var->IsContextSlot() || var->IsStackAllocated()); MemOperand location = VarOperand(var, dest); __ mov(dest, location); } @@ -694,17 +697,17 @@ void FullCodeGenerator::SetVar(Variable* var, Register src, Register scratch0, Register scratch1) { - ASSERT(var->IsContextSlot() || var->IsStackAllocated()); - ASSERT(!scratch0.is(src)); - ASSERT(!scratch0.is(scratch1)); - ASSERT(!scratch1.is(src)); + DCHECK(var->IsContextSlot() || var->IsStackAllocated()); + DCHECK(!scratch0.is(src)); + DCHECK(!scratch0.is(scratch1)); + DCHECK(!scratch1.is(src)); MemOperand location = VarOperand(var, scratch0); __ mov(location, src); // Emit the write barrier code if the location is in the heap. if (var->IsContextSlot()) { int offset = Context::SlotOffset(var->index()); - ASSERT(!scratch0.is(esi) && !src.is(esi) && !scratch1.is(esi)); + DCHECK(!scratch0.is(esi) && !src.is(esi) && !scratch1.is(esi)); __ RecordWriteContextSlot(scratch0, offset, src, scratch1, kDontSaveFPRegs); } } @@ -732,7 +735,7 @@ void FullCodeGenerator::PrepareForBailoutBeforeSplit(Expression* expr, void FullCodeGenerator::EmitDebugCheckDeclarationContext(Variable* variable) { // The variable in the declaration always resides in the current context. - ASSERT_EQ(0, scope()->ContextChainLength(variable->scope())); + DCHECK_EQ(0, scope()->ContextChainLength(variable->scope())); if (generate_debug_code_) { // Check that we're not inside a with or catch context. __ mov(ebx, FieldOperand(esi, HeapObject::kMapOffset)); @@ -786,7 +789,7 @@ void FullCodeGenerator::VisitVariableDeclaration( __ push(esi); __ push(Immediate(variable->name())); // VariableDeclaration nodes are always introduced in one of four modes. - ASSERT(IsDeclaredVariableMode(mode)); + DCHECK(IsDeclaredVariableMode(mode)); PropertyAttributes attr = IsImmutableVariableMode(mode) ? READ_ONLY : NONE; __ push(Immediate(Smi::FromInt(attr))); @@ -799,7 +802,7 @@ void FullCodeGenerator::VisitVariableDeclaration( } else { __ push(Immediate(Smi::FromInt(0))); // Indicates no initial value. } - __ CallRuntime(Runtime::kHiddenDeclareContextSlot, 4); + __ CallRuntime(Runtime::kDeclareLookupSlot, 4); break; } } @@ -814,7 +817,7 @@ void FullCodeGenerator::VisitFunctionDeclaration( case Variable::UNALLOCATED: { globals_->Add(variable->name(), zone()); Handle<SharedFunctionInfo> function = - Compiler::BuildFunctionInfo(declaration->fun(), script()); + Compiler::BuildFunctionInfo(declaration->fun(), script(), info_); // Check for stack-overflow exception. 
if (function.is_null()) return SetStackOverflow(); globals_->Add(function, zone()); @@ -852,7 +855,7 @@ void FullCodeGenerator::VisitFunctionDeclaration( __ push(Immediate(variable->name())); __ push(Immediate(Smi::FromInt(NONE))); VisitForStackValue(declaration->fun()); - __ CallRuntime(Runtime::kHiddenDeclareContextSlot, 4); + __ CallRuntime(Runtime::kDeclareLookupSlot, 4); break; } } @@ -861,8 +864,8 @@ void FullCodeGenerator::VisitFunctionDeclaration( void FullCodeGenerator::VisitModuleDeclaration(ModuleDeclaration* declaration) { Variable* variable = declaration->proxy()->var(); - ASSERT(variable->location() == Variable::CONTEXT); - ASSERT(variable->interface()->IsFrozen()); + DCHECK(variable->location() == Variable::CONTEXT); + DCHECK(variable->interface()->IsFrozen()); Comment cmnt(masm_, "[ ModuleDeclaration"); EmitDebugCheckDeclarationContext(variable); @@ -922,7 +925,7 @@ void FullCodeGenerator::DeclareGlobals(Handle<FixedArray> pairs) { __ push(esi); // The context is the first argument. __ Push(pairs); __ Push(Smi::FromInt(DeclareGlobalsFlags())); - __ CallRuntime(Runtime::kHiddenDeclareGlobals, 3); + __ CallRuntime(Runtime::kDeclareGlobals, 3); // Return value is ignored. } @@ -930,7 +933,7 @@ void FullCodeGenerator::DeclareGlobals(Handle<FixedArray> pairs) { void FullCodeGenerator::DeclareModules(Handle<FixedArray> descriptions) { // Call the runtime to declare the modules. __ Push(descriptions); - __ CallRuntime(Runtime::kHiddenDeclareModules, 1); + __ CallRuntime(Runtime::kDeclareModules, 1); // Return value is ignored. } @@ -1152,7 +1155,7 @@ void FullCodeGenerator::VisitForInStatement(ForInStatement* stmt) { // For proxies, no filtering is done. // TODO(rossberg): What if only a prototype is a proxy? Not specified yet. - ASSERT(Smi::FromInt(0) == 0); + DCHECK(Smi::FromInt(0) == 0); __ test(edx, edx); __ j(zero, &update_each); @@ -1204,24 +1207,8 @@ void FullCodeGenerator::VisitForOfStatement(ForOfStatement* stmt) { Iteration loop_statement(this, stmt); increment_loop_depth(); - // var iterator = iterable[@@iterator]() - VisitForAccumulatorValue(stmt->assign_iterator()); - - // As with for-in, skip the loop if the iterator is null or undefined. - __ CompareRoot(eax, Heap::kUndefinedValueRootIndex); - __ j(equal, loop_statement.break_label()); - __ CompareRoot(eax, Heap::kNullValueRootIndex); - __ j(equal, loop_statement.break_label()); - - // Convert the iterator to a JS object. - Label convert, done_convert; - __ JumpIfSmi(eax, &convert); - __ CmpObjectType(eax, FIRST_SPEC_OBJECT_TYPE, ecx); - __ j(above_equal, &done_convert); - __ bind(&convert); - __ push(eax); - __ InvokeBuiltin(Builtins::TO_OBJECT, CALL_FUNCTION); - __ bind(&done_convert); + // var iterator = iterable[Symbol.iterator](); + VisitForEffect(stmt->assign_iterator()); // Loop entry. __ bind(loop_statement.continue_label()); @@ -1279,7 +1266,7 @@ void FullCodeGenerator::EmitNewClosure(Handle<SharedFunctionInfo> info, __ push(Immediate(pretenure ? 
isolate()->factory()->true_value() : isolate()->factory()->false_value())); - __ CallRuntime(Runtime::kHiddenNewClosure, 3); + __ CallRuntime(Runtime::kNewClosure, 3); } context()->Plug(eax); } @@ -1291,7 +1278,7 @@ void FullCodeGenerator::VisitVariableProxy(VariableProxy* expr) { } -void FullCodeGenerator::EmitLoadGlobalCheckExtensions(Variable* var, +void FullCodeGenerator::EmitLoadGlobalCheckExtensions(VariableProxy* proxy, TypeofState typeof_state, Label* slow) { Register context = esi; @@ -1341,8 +1328,13 @@ void FullCodeGenerator::EmitLoadGlobalCheckExtensions(Variable* var, // All extension objects were empty and it is safe to use a global // load IC call. - __ mov(edx, GlobalObjectOperand()); - __ mov(ecx, var->name()); + __ mov(LoadIC::ReceiverRegister(), GlobalObjectOperand()); + __ mov(LoadIC::NameRegister(), proxy->var()->name()); + if (FLAG_vector_ics) { + __ mov(LoadIC::SlotRegister(), + Immediate(Smi::FromInt(proxy->VariableFeedbackSlot()))); + } + ContextualMode mode = (typeof_state == INSIDE_TYPEOF) ? NOT_CONTEXTUAL : CONTEXTUAL; @@ -1353,7 +1345,7 @@ void FullCodeGenerator::EmitLoadGlobalCheckExtensions(Variable* var, MemOperand FullCodeGenerator::ContextSlotOperandCheckExtensions(Variable* var, Label* slow) { - ASSERT(var->IsContextSlot()); + DCHECK(var->IsContextSlot()); Register context = esi; Register temp = ebx; @@ -1381,7 +1373,7 @@ MemOperand FullCodeGenerator::ContextSlotOperandCheckExtensions(Variable* var, } -void FullCodeGenerator::EmitDynamicLookupFastCase(Variable* var, +void FullCodeGenerator::EmitDynamicLookupFastCase(VariableProxy* proxy, TypeofState typeof_state, Label* slow, Label* done) { @@ -1390,8 +1382,9 @@ void FullCodeGenerator::EmitDynamicLookupFastCase(Variable* var, // introducing variables. In those cases, we do not want to // perform a runtime call for all variables in the scope // containing the eval. + Variable* var = proxy->var(); if (var->mode() == DYNAMIC_GLOBAL) { - EmitLoadGlobalCheckExtensions(var, typeof_state, slow); + EmitLoadGlobalCheckExtensions(proxy, typeof_state, slow); __ jmp(done); } else if (var->mode() == DYNAMIC_LOCAL) { Variable* local = var->local_if_not_shadowed(); @@ -1404,7 +1397,7 @@ void FullCodeGenerator::EmitDynamicLookupFastCase(Variable* var, __ mov(eax, isolate()->factory()->undefined_value()); } else { // LET || CONST __ push(Immediate(var->name())); - __ CallRuntime(Runtime::kHiddenThrowReferenceError, 1); + __ CallRuntime(Runtime::kThrowReferenceError, 1); } } __ jmp(done); @@ -1422,10 +1415,12 @@ void FullCodeGenerator::EmitVariableLoad(VariableProxy* proxy) { switch (var->location()) { case Variable::UNALLOCATED: { Comment cmnt(masm_, "[ Global variable"); - // Use inline caching. Variable name is passed in ecx and the global - // object in eax. - __ mov(edx, GlobalObjectOperand()); - __ mov(ecx, var->name()); + __ mov(LoadIC::ReceiverRegister(), GlobalObjectOperand()); + __ mov(LoadIC::NameRegister(), var->name()); + if (FLAG_vector_ics) { + __ mov(LoadIC::SlotRegister(), + Immediate(Smi::FromInt(proxy->VariableFeedbackSlot()))); + } CallLoadIC(CONTEXTUAL); context()->Plug(eax); break; @@ -1442,7 +1437,7 @@ void FullCodeGenerator::EmitVariableLoad(VariableProxy* proxy) { // always looked up dynamically, i.e. in that case // var->location() == LOOKUP. // always holds. - ASSERT(var->scope() != NULL); + DCHECK(var->scope() != NULL); // Check if the binding really needs an initialization check. 
The check // can be skipped in the following situation: we have a LET or CONST @@ -1465,8 +1460,8 @@ void FullCodeGenerator::EmitVariableLoad(VariableProxy* proxy) { skip_init_check = false; } else { // Check that we always have valid source position. - ASSERT(var->initializer_position() != RelocInfo::kNoPosition); - ASSERT(proxy->position() != RelocInfo::kNoPosition); + DCHECK(var->initializer_position() != RelocInfo::kNoPosition); + DCHECK(proxy->position() != RelocInfo::kNoPosition); skip_init_check = var->mode() != CONST_LEGACY && var->initializer_position() < proxy->position(); } @@ -1481,10 +1476,10 @@ void FullCodeGenerator::EmitVariableLoad(VariableProxy* proxy) { // Throw a reference error when using an uninitialized let/const // binding in harmony mode. __ push(Immediate(var->name())); - __ CallRuntime(Runtime::kHiddenThrowReferenceError, 1); + __ CallRuntime(Runtime::kThrowReferenceError, 1); } else { // Uninitialized const bindings outside of harmony mode are unholed. - ASSERT(var->mode() == CONST_LEGACY); + DCHECK(var->mode() == CONST_LEGACY); __ mov(eax, isolate()->factory()->undefined_value()); } __ bind(&done); @@ -1501,11 +1496,11 @@ void FullCodeGenerator::EmitVariableLoad(VariableProxy* proxy) { Label done, slow; // Generate code for loading from variables potentially shadowed // by eval-introduced variables. - EmitDynamicLookupFastCase(var, NOT_INSIDE_TYPEOF, &slow, &done); + EmitDynamicLookupFastCase(proxy, NOT_INSIDE_TYPEOF, &slow, &done); __ bind(&slow); __ push(esi); // Context. __ push(Immediate(var->name())); - __ CallRuntime(Runtime::kHiddenLoadContextSlot, 2); + __ CallRuntime(Runtime::kLoadLookupSlot, 2); __ bind(&done); context()->Plug(eax); break; @@ -1536,7 +1531,7 @@ void FullCodeGenerator::VisitRegExpLiteral(RegExpLiteral* expr) { __ push(Immediate(Smi::FromInt(expr->literal_index()))); __ push(Immediate(expr->pattern())); __ push(Immediate(expr->flags())); - __ CallRuntime(Runtime::kHiddenMaterializeRegExpLiteral, 4); + __ CallRuntime(Runtime::kMaterializeRegExpLiteral, 4); __ mov(ebx, eax); __ bind(&materialized); @@ -1548,7 +1543,7 @@ void FullCodeGenerator::VisitRegExpLiteral(RegExpLiteral* expr) { __ bind(&runtime_allocate); __ push(ebx); __ push(Immediate(Smi::FromInt(size))); - __ CallRuntime(Runtime::kHiddenAllocateInNewSpace, 1); + __ CallRuntime(Runtime::kAllocateInNewSpace, 1); __ pop(ebx); __ bind(&allocated); @@ -1590,7 +1585,7 @@ void FullCodeGenerator::VisitObjectLiteral(ObjectLiteral* expr) { : ObjectLiteral::kNoFlags; int properties_count = constant_properties->length() / 2; if (expr->may_store_doubles() || expr->depth() > 1 || - Serializer::enabled(isolate()) || + masm()->serializer_enabled() || flags != ObjectLiteral::kFastElements || properties_count > FastCloneShallowObjectStub::kMaximumClonedProperties) { __ mov(edi, Operand(ebp, JavaScriptFrameConstants::kFunctionOffset)); @@ -1598,7 +1593,7 @@ void FullCodeGenerator::VisitObjectLiteral(ObjectLiteral* expr) { __ push(Immediate(Smi::FromInt(expr->literal_index()))); __ push(Immediate(constant_properties)); __ push(Immediate(Smi::FromInt(flags))); - __ CallRuntime(Runtime::kHiddenCreateObjectLiteral, 4); + __ CallRuntime(Runtime::kCreateObjectLiteral, 4); } else { __ mov(edi, Operand(ebp, JavaScriptFrameConstants::kFunctionOffset)); __ mov(eax, FieldOperand(edi, JSFunction::kLiteralsOffset)); @@ -1633,14 +1628,15 @@ void FullCodeGenerator::VisitObjectLiteral(ObjectLiteral* expr) { case ObjectLiteral::Property::CONSTANT: UNREACHABLE(); case ObjectLiteral::Property::MATERIALIZED_LITERAL:
- ASSERT(!CompileTimeValue::IsCompileTimeValue(value)); + DCHECK(!CompileTimeValue::IsCompileTimeValue(value)); // Fall through. case ObjectLiteral::Property::COMPUTED: if (key->value()->IsInternalizedString()) { if (property->emit_store()) { VisitForAccumulatorValue(value); - __ mov(ecx, Immediate(key->value())); - __ mov(edx, Operand(esp, 0)); + DCHECK(StoreIC::ValueRegister().is(eax)); + __ mov(StoreIC::NameRegister(), Immediate(key->value())); + __ mov(StoreIC::ReceiverRegister(), Operand(esp, 0)); CallStoreIC(key->LiteralFeedbackId()); PrepareForBailoutForId(key->id(), NO_REGISTERS); } else { @@ -1652,7 +1648,7 @@ void FullCodeGenerator::VisitObjectLiteral(ObjectLiteral* expr) { VisitForStackValue(key); VisitForStackValue(value); if (property->emit_store()) { - __ push(Immediate(Smi::FromInt(NONE))); // PropertyAttributes + __ push(Immediate(Smi::FromInt(SLOPPY))); // Strict mode __ CallRuntime(Runtime::kSetProperty, 4); } else { __ Drop(3); @@ -1686,11 +1682,11 @@ void FullCodeGenerator::VisitObjectLiteral(ObjectLiteral* expr) { EmitAccessor(it->second->getter); EmitAccessor(it->second->setter); __ push(Immediate(Smi::FromInt(NONE))); - __ CallRuntime(Runtime::kDefineOrRedefineAccessorProperty, 5); + __ CallRuntime(Runtime::kDefineAccessorPropertyUnchecked, 5); } if (expr->has_function()) { - ASSERT(result_saved); + DCHECK(result_saved); __ push(Operand(esp, 0)); __ CallRuntime(Runtime::kToFastProperties, 1); } @@ -1714,7 +1710,7 @@ void FullCodeGenerator::VisitArrayLiteral(ArrayLiteral* expr) { ZoneList<Expression*>* subexprs = expr->values(); int length = subexprs->length(); Handle<FixedArray> constant_elements = expr->constant_elements(); - ASSERT_EQ(2, constant_elements->length()); + DCHECK_EQ(2, constant_elements->length()); ElementsKind constant_elements_kind = static_cast<ElementsKind>(Smi::cast(constant_elements->get(0))->value()); bool has_constant_fast_elements = @@ -1729,50 +1725,19 @@ void FullCodeGenerator::VisitArrayLiteral(ArrayLiteral* expr) { allocation_site_mode = DONT_TRACK_ALLOCATION_SITE; } - Heap* heap = isolate()->heap(); - if (has_constant_fast_elements && - constant_elements_values->map() == heap->fixed_cow_array_map()) { - // If the elements are already FAST_*_ELEMENTS, the boilerplate cannot - // change, so it's possible to specialize the stub in advance. 
- __ IncrementCounter(isolate()->counters()->cow_arrays_created_stub(), 1); - __ mov(ebx, Operand(ebp, JavaScriptFrameConstants::kFunctionOffset)); - __ mov(eax, FieldOperand(ebx, JSFunction::kLiteralsOffset)); - __ mov(ebx, Immediate(Smi::FromInt(expr->literal_index()))); - __ mov(ecx, Immediate(constant_elements)); - FastCloneShallowArrayStub stub( - isolate(), - FastCloneShallowArrayStub::COPY_ON_WRITE_ELEMENTS, - allocation_site_mode, - length); - __ CallStub(&stub); - } else if (expr->depth() > 1 || Serializer::enabled(isolate()) || - length > FastCloneShallowArrayStub::kMaximumClonedLength) { + if (expr->depth() > 1 || length > JSObject::kInitialMaxFastElementArray) { __ mov(ebx, Operand(ebp, JavaScriptFrameConstants::kFunctionOffset)); __ push(FieldOperand(ebx, JSFunction::kLiteralsOffset)); __ push(Immediate(Smi::FromInt(expr->literal_index()))); __ push(Immediate(constant_elements)); __ push(Immediate(Smi::FromInt(flags))); - __ CallRuntime(Runtime::kHiddenCreateArrayLiteral, 4); + __ CallRuntime(Runtime::kCreateArrayLiteral, 4); } else { - ASSERT(IsFastSmiOrObjectElementsKind(constant_elements_kind) || - FLAG_smi_only_arrays); - FastCloneShallowArrayStub::Mode mode = - FastCloneShallowArrayStub::CLONE_ANY_ELEMENTS; - - // If the elements are already FAST_*_ELEMENTS, the boilerplate cannot - // change, so it's possible to specialize the stub in advance. - if (has_constant_fast_elements) { - mode = FastCloneShallowArrayStub::CLONE_ELEMENTS; - } - __ mov(ebx, Operand(ebp, JavaScriptFrameConstants::kFunctionOffset)); __ mov(eax, FieldOperand(ebx, JSFunction::kLiteralsOffset)); __ mov(ebx, Immediate(Smi::FromInt(expr->literal_index()))); __ mov(ecx, Immediate(constant_elements)); - FastCloneShallowArrayStub stub(isolate(), - mode, - allocation_site_mode, - length); + FastCloneShallowArrayStub stub(isolate(), allocation_site_mode); __ CallStub(&stub); } @@ -1826,7 +1791,7 @@ void FullCodeGenerator::VisitArrayLiteral(ArrayLiteral* expr) { void FullCodeGenerator::VisitAssignment(Assignment* expr) { - ASSERT(expr->target()->IsValidReferenceExpression()); + DCHECK(expr->target()->IsValidReferenceExpression()); Comment cmnt(masm_, "[ Assignment"); @@ -1848,9 +1813,9 @@ void FullCodeGenerator::VisitAssignment(Assignment* expr) { break; case NAMED_PROPERTY: if (expr->is_compound()) { - // We need the receiver both on the stack and in edx. + // We need the receiver both on the stack and in the register. VisitForStackValue(property->obj()); - __ mov(edx, Operand(esp, 0)); + __ mov(LoadIC::ReceiverRegister(), Operand(esp, 0)); } else { VisitForStackValue(property->obj()); } @@ -1859,8 +1824,8 @@ void FullCodeGenerator::VisitAssignment(Assignment* expr) { if (expr->is_compound()) { VisitForStackValue(property->obj()); VisitForStackValue(property->key()); - __ mov(edx, Operand(esp, kPointerSize)); // Object. - __ mov(ecx, Operand(esp, 0)); // Key. 
+ __ mov(LoadIC::ReceiverRegister(), Operand(esp, kPointerSize)); + __ mov(LoadIC::NameRegister(), Operand(esp, 0)); } else { VisitForStackValue(property->obj()); VisitForStackValue(property->key()); @@ -1957,7 +1922,7 @@ void FullCodeGenerator::VisitYield(Yield* expr) { __ bind(&suspend); VisitForAccumulatorValue(expr->generator_object()); - ASSERT(continuation.pos() > 0 && Smi::IsValid(continuation.pos())); + DCHECK(continuation.pos() > 0 && Smi::IsValid(continuation.pos())); __ mov(FieldOperand(eax, JSGeneratorObject::kContinuationOffset), Immediate(Smi::FromInt(continuation.pos()))); __ mov(FieldOperand(eax, JSGeneratorObject::kContextOffset), esi); @@ -1968,7 +1933,7 @@ void FullCodeGenerator::VisitYield(Yield* expr) { __ cmp(esp, ebx); __ j(equal, &post_runtime); __ push(eax); // generator object - __ CallRuntime(Runtime::kHiddenSuspendJSGeneratorObject, 1); + __ CallRuntime(Runtime::kSuspendJSGeneratorObject, 1); __ mov(context_register(), Operand(ebp, StandardFrameConstants::kContextOffset)); __ bind(&post_runtime); @@ -2001,6 +1966,9 @@ void FullCodeGenerator::VisitYield(Yield* expr) { Label l_catch, l_try, l_suspend, l_continuation, l_resume; Label l_next, l_call, l_loop; + Register load_receiver = LoadIC::ReceiverRegister(); + Register load_name = LoadIC::NameRegister(); + // Initial send value is undefined. __ mov(eax, isolate()->factory()->undefined_value()); __ jmp(&l_next); @@ -2008,10 +1976,10 @@ void FullCodeGenerator::VisitYield(Yield* expr) { // catch (e) { receiver = iter; f = 'throw'; arg = e; goto l_call; } __ bind(&l_catch); handler_table()->set(expr->index(), Smi::FromInt(l_catch.pos())); - __ mov(ecx, isolate()->factory()->throw_string()); // "throw" - __ push(ecx); // "throw" - __ push(Operand(esp, 2 * kPointerSize)); // iter - __ push(eax); // exception + __ mov(load_name, isolate()->factory()->throw_string()); // "throw" + __ push(load_name); // "throw" + __ push(Operand(esp, 2 * kPointerSize)); // iter + __ push(eax); // exception __ jmp(&l_call); // try { received = %yield result } @@ -2029,14 +1997,14 @@ void FullCodeGenerator::VisitYield(Yield* expr) { const int generator_object_depth = kPointerSize + handler_size; __ mov(eax, Operand(esp, generator_object_depth)); __ push(eax); // g - ASSERT(l_continuation.pos() > 0 && Smi::IsValid(l_continuation.pos())); + DCHECK(l_continuation.pos() > 0 && Smi::IsValid(l_continuation.pos())); __ mov(FieldOperand(eax, JSGeneratorObject::kContinuationOffset), Immediate(Smi::FromInt(l_continuation.pos()))); __ mov(FieldOperand(eax, JSGeneratorObject::kContextOffset), esi); __ mov(ecx, esi); __ RecordWriteField(eax, JSGeneratorObject::kContextOffset, ecx, edx, kDontSaveFPRegs); - __ CallRuntime(Runtime::kHiddenSuspendJSGeneratorObject, 1); + __ CallRuntime(Runtime::kSuspendJSGeneratorObject, 1); __ mov(context_register(), Operand(ebp, StandardFrameConstants::kContextOffset)); __ pop(eax); // result @@ -2046,14 +2014,19 @@ void FullCodeGenerator::VisitYield(Yield* expr) { // receiver = iter; f = iter.next; arg = received; __ bind(&l_next); - __ mov(ecx, isolate()->factory()->next_string()); // "next" - __ push(ecx); - __ push(Operand(esp, 2 * kPointerSize)); // iter - __ push(eax); // received + + __ mov(load_name, isolate()->factory()->next_string()); + __ push(load_name); // "next" + __ push(Operand(esp, 2 * kPointerSize)); // iter + __ push(eax); // received // result = receiver[f](arg); __ bind(&l_call); - __ mov(edx, Operand(esp, kPointerSize)); + __ mov(load_receiver, Operand(esp, kPointerSize)); + if (FLAG_vector_ics) { 
+ __ mov(LoadIC::SlotRegister(), + Immediate(Smi::FromInt(expr->KeyedLoadFeedbackSlot()))); + } Handle<Code> ic = isolate()->builtins()->KeyedLoadIC_Initialize(); CallIC(ic, TypeFeedbackId::None()); __ mov(edi, eax); @@ -2067,8 +2040,13 @@ void FullCodeGenerator::VisitYield(Yield* expr) { // if (!result.done) goto l_try; __ bind(&l_loop); __ push(eax); // save result - __ mov(edx, eax); // result - __ mov(ecx, isolate()->factory()->done_string()); // "done" + __ Move(load_receiver, eax); // result + __ mov(load_name, + isolate()->factory()->done_string()); // "done" + if (FLAG_vector_ics) { + __ mov(LoadIC::SlotRegister(), + Immediate(Smi::FromInt(expr->DoneFeedbackSlot()))); + } CallLoadIC(NOT_CONTEXTUAL); // result.done in eax Handle<Code> bool_ic = ToBooleanStub::GetUninitialized(isolate()); CallIC(bool_ic); @@ -2076,8 +2054,13 @@ void FullCodeGenerator::VisitYield(Yield* expr) { __ j(zero, &l_try); // result.value - __ pop(edx); // result - __ mov(ecx, isolate()->factory()->value_string()); // "value" + __ pop(load_receiver); // result + __ mov(load_name, + isolate()->factory()->value_string()); // "value" + if (FLAG_vector_ics) { + __ mov(LoadIC::SlotRegister(), + Immediate(Smi::FromInt(expr->ValueFeedbackSlot()))); + } CallLoadIC(NOT_CONTEXTUAL); // result.value in eax context()->DropAndPlug(2, eax); // drop iter and g break; @@ -2090,7 +2073,7 @@ void FullCodeGenerator::EmitGeneratorResume(Expression *generator, Expression *value, JSGeneratorObject::ResumeMode resume_mode) { // The value stays in eax, and is ultimately read by the resumed generator, as - // if CallRuntime(Runtime::kHiddenSuspendJSGeneratorObject) returned it. Or it + // if CallRuntime(Runtime::kSuspendJSGeneratorObject) returned it. Or it // is read to throw the value when the resumed generator is already closed. // ebx will hold the generator object until the activation has been resumed. VisitForStackValue(generator); @@ -2170,7 +2153,7 @@ void FullCodeGenerator::EmitGeneratorResume(Expression *generator, __ push(ebx); __ push(result_register()); __ Push(Smi::FromInt(resume_mode)); - __ CallRuntime(Runtime::kHiddenResumeJSGeneratorObject, 3); + __ CallRuntime(Runtime::kResumeJSGeneratorObject, 3); // Not reached: the runtime call returns elsewhere. __ Abort(kGeneratorFailedToResume); @@ -2184,14 +2167,14 @@ void FullCodeGenerator::EmitGeneratorResume(Expression *generator, } else { // Throw the provided value. __ push(eax); - __ CallRuntime(Runtime::kHiddenThrow, 1); + __ CallRuntime(Runtime::kThrow, 1); } __ jmp(&done); // Throw error if we attempt to operate on a running generator. 
__ bind(&wrong_state); __ push(ebx); - __ CallRuntime(Runtime::kHiddenThrowGeneratorStateError, 1); + __ CallRuntime(Runtime::kThrowGeneratorStateError, 1); __ bind(&done); context()->Plug(result_register()); @@ -2209,7 +2192,7 @@ void FullCodeGenerator::EmitCreateIteratorResult(bool done) { __ bind(&gc_required); __ Push(Smi::FromInt(map->instance_size())); - __ CallRuntime(Runtime::kHiddenAllocateInNewSpace, 1); + __ CallRuntime(Runtime::kAllocateInNewSpace, 1); __ mov(context_register(), Operand(ebp, StandardFrameConstants::kContextOffset)); @@ -2217,7 +2200,7 @@ void FullCodeGenerator::EmitCreateIteratorResult(bool done) { __ mov(ebx, map); __ pop(ecx); __ mov(edx, isolate()->factory()->ToBoolean(done)); - ASSERT_EQ(map->instance_size(), 5 * kPointerSize); + DCHECK_EQ(map->instance_size(), 5 * kPointerSize); __ mov(FieldOperand(eax, HeapObject::kMapOffset), ebx); __ mov(FieldOperand(eax, JSObject::kPropertiesOffset), isolate()->factory()->empty_fixed_array()); @@ -2236,16 +2219,28 @@ void FullCodeGenerator::EmitCreateIteratorResult(bool done) { void FullCodeGenerator::EmitNamedPropertyLoad(Property* prop) { SetSourcePosition(prop->position()); Literal* key = prop->key()->AsLiteral(); - ASSERT(!key->value()->IsSmi()); - __ mov(ecx, Immediate(key->value())); - CallLoadIC(NOT_CONTEXTUAL, prop->PropertyFeedbackId()); + DCHECK(!key->value()->IsSmi()); + __ mov(LoadIC::NameRegister(), Immediate(key->value())); + if (FLAG_vector_ics) { + __ mov(LoadIC::SlotRegister(), + Immediate(Smi::FromInt(prop->PropertyFeedbackSlot()))); + CallLoadIC(NOT_CONTEXTUAL); + } else { + CallLoadIC(NOT_CONTEXTUAL, prop->PropertyFeedbackId()); + } } void FullCodeGenerator::EmitKeyedPropertyLoad(Property* prop) { SetSourcePosition(prop->position()); Handle<Code> ic = isolate()->builtins()->KeyedLoadIC_Initialize(); - CallIC(ic, prop->PropertyFeedbackId()); + if (FLAG_vector_ics) { + __ mov(LoadIC::SlotRegister(), + Immediate(Smi::FromInt(prop->PropertyFeedbackSlot()))); + CallIC(ic); + } else { + CallIC(ic, prop->PropertyFeedbackId()); + } } @@ -2357,7 +2352,7 @@ void FullCodeGenerator::EmitBinaryOp(BinaryOperation* expr, void FullCodeGenerator::EmitAssignment(Expression* expr) { - ASSERT(expr->IsValidReferenceExpression()); + DCHECK(expr->IsValidReferenceExpression()); // Left-hand side can only be a property, a global or a (parameter or local) // slot. @@ -2380,9 +2375,9 @@ void FullCodeGenerator::EmitAssignment(Expression* expr) { case NAMED_PROPERTY: { __ push(eax); // Preserve value. VisitForAccumulatorValue(prop->obj()); - __ mov(edx, eax); - __ pop(eax); // Restore value. - __ mov(ecx, prop->key()->AsLiteral()->value()); + __ Move(StoreIC::ReceiverRegister(), eax); + __ pop(StoreIC::ValueRegister()); // Restore value. + __ mov(StoreIC::NameRegister(), prop->key()->AsLiteral()->value()); CallStoreIC(); break; } @@ -2390,9 +2385,9 @@ void FullCodeGenerator::EmitAssignment(Expression* expr) { __ push(eax); // Preserve value. VisitForStackValue(prop->obj()); VisitForAccumulatorValue(prop->key()); - __ mov(ecx, eax); - __ pop(edx); // Receiver. - __ pop(eax); // Restore value. + __ Move(KeyedStoreIC::NameRegister(), eax); + __ pop(KeyedStoreIC::ReceiverRegister()); // Receiver. + __ pop(KeyedStoreIC::ValueRegister()); // Restore value. Handle<Code> ic = strict_mode() == SLOPPY ? 
isolate()->builtins()->KeyedStoreIC_Initialize() : isolate()->builtins()->KeyedStoreIC_Initialize_Strict(); @@ -2415,34 +2410,24 @@ void FullCodeGenerator::EmitStoreToStackLocalOrContextSlot( } -void FullCodeGenerator::EmitCallStoreContextSlot( - Handle<String> name, StrictMode strict_mode) { - __ push(eax); // Value. - __ push(esi); // Context. - __ push(Immediate(name)); - __ push(Immediate(Smi::FromInt(strict_mode))); - __ CallRuntime(Runtime::kHiddenStoreContextSlot, 4); -} - - void FullCodeGenerator::EmitVariableAssignment(Variable* var, Token::Value op) { if (var->IsUnallocated()) { // Global var, const, or let. - __ mov(ecx, var->name()); - __ mov(edx, GlobalObjectOperand()); + __ mov(StoreIC::NameRegister(), var->name()); + __ mov(StoreIC::ReceiverRegister(), GlobalObjectOperand()); CallStoreIC(); } else if (op == Token::INIT_CONST_LEGACY) { // Const initializers need a write barrier. - ASSERT(!var->IsParameter()); // No const parameters. + DCHECK(!var->IsParameter()); // No const parameters. if (var->IsLookupSlot()) { __ push(eax); __ push(esi); __ push(Immediate(var->name())); - __ CallRuntime(Runtime::kHiddenInitializeConstContextSlot, 3); + __ CallRuntime(Runtime::kInitializeLegacyConstLookupSlot, 3); } else { - ASSERT(var->IsStackLocal() || var->IsContextSlot()); + DCHECK(var->IsStackLocal() || var->IsContextSlot()); Label skip; MemOperand location = VarOperand(var, ecx); __ mov(edx, location); @@ -2454,28 +2439,30 @@ void FullCodeGenerator::EmitVariableAssignment(Variable* var, } else if (var->mode() == LET && op != Token::INIT_LET) { // Non-initializing assignment to let variable needs a write barrier. - if (var->IsLookupSlot()) { - EmitCallStoreContextSlot(var->name(), strict_mode()); - } else { - ASSERT(var->IsStackAllocated() || var->IsContextSlot()); - Label assign; - MemOperand location = VarOperand(var, ecx); - __ mov(edx, location); - __ cmp(edx, isolate()->factory()->the_hole_value()); - __ j(not_equal, &assign, Label::kNear); - __ push(Immediate(var->name())); - __ CallRuntime(Runtime::kHiddenThrowReferenceError, 1); - __ bind(&assign); - EmitStoreToStackLocalOrContextSlot(var, location); - } + DCHECK(!var->IsLookupSlot()); + DCHECK(var->IsStackAllocated() || var->IsContextSlot()); + Label assign; + MemOperand location = VarOperand(var, ecx); + __ mov(edx, location); + __ cmp(edx, isolate()->factory()->the_hole_value()); + __ j(not_equal, &assign, Label::kNear); + __ push(Immediate(var->name())); + __ CallRuntime(Runtime::kThrowReferenceError, 1); + __ bind(&assign); + EmitStoreToStackLocalOrContextSlot(var, location); } else if (!var->is_const_mode() || op == Token::INIT_CONST) { - // Assignment to var or initializing assignment to let/const - // in harmony mode. if (var->IsLookupSlot()) { - EmitCallStoreContextSlot(var->name(), strict_mode()); + // Assignment to var. + __ push(eax); // Value. + __ push(esi); // Context. + __ push(Immediate(var->name())); + __ push(Immediate(Smi::FromInt(strict_mode()))); + __ CallRuntime(Runtime::kStoreLookupSlot, 4); } else { - ASSERT(var->IsStackAllocated() || var->IsContextSlot()); + // Assignment to var or initializing assignment to let/const in harmony + // mode. + DCHECK(var->IsStackAllocated() || var->IsContextSlot()); MemOperand location = VarOperand(var, ecx); if (generate_debug_code_ && op == Token::INIT_LET) { // Check for an uninitialized let binding. 
@@ -2496,13 +2483,13 @@ void FullCodeGenerator::EmitNamedPropertyAssignment(Assignment* expr) { // esp[0] : receiver Property* prop = expr->target()->AsProperty(); - ASSERT(prop != NULL); - ASSERT(prop->key()->AsLiteral() != NULL); + DCHECK(prop != NULL); + DCHECK(prop->key()->IsLiteral()); // Record source code position before IC call. SetSourcePosition(expr->position()); - __ mov(ecx, prop->key()->AsLiteral()->value()); - __ pop(edx); + __ mov(StoreIC::NameRegister(), prop->key()->AsLiteral()->value()); + __ pop(StoreIC::ReceiverRegister()); CallStoreIC(expr->AssignmentFeedbackId()); PrepareForBailoutForId(expr->AssignmentId(), TOS_REG); context()->Plug(eax); @@ -2515,8 +2502,9 @@ void FullCodeGenerator::EmitKeyedPropertyAssignment(Assignment* expr) { // esp[0] : key // esp[kPointerSize] : receiver - __ pop(ecx); // Key. - __ pop(edx); + __ pop(KeyedStoreIC::NameRegister()); // Key. + __ pop(KeyedStoreIC::ReceiverRegister()); + DCHECK(KeyedStoreIC::ValueRegister().is(eax)); // Record source code position before IC call. SetSourcePosition(expr->position()); Handle<Code> ic = strict_mode() == SLOPPY @@ -2535,15 +2523,15 @@ void FullCodeGenerator::VisitProperty(Property* expr) { if (key->IsPropertyName()) { VisitForAccumulatorValue(expr->obj()); - __ mov(edx, result_register()); + __ Move(LoadIC::ReceiverRegister(), result_register()); EmitNamedPropertyLoad(expr); PrepareForBailoutForId(expr->LoadId(), TOS_REG); context()->Plug(eax); } else { VisitForStackValue(expr->obj()); VisitForAccumulatorValue(expr->key()); - __ pop(edx); // Object. - __ mov(ecx, result_register()); // Key. + __ pop(LoadIC::ReceiverRegister()); // Object. + __ Move(LoadIC::NameRegister(), result_register()); // Key. EmitKeyedPropertyLoad(expr); context()->Plug(eax); } @@ -2575,8 +2563,8 @@ void FullCodeGenerator::EmitCallWithLoadIC(Call* expr) { __ push(Immediate(isolate()->factory()->undefined_value())); } else { // Load the function from the receiver. - ASSERT(callee->IsProperty()); - __ mov(edx, Operand(esp, 0)); + DCHECK(callee->IsProperty()); + __ mov(LoadIC::ReceiverRegister(), Operand(esp, 0)); EmitNamedPropertyLoad(callee->AsProperty()); PrepareForBailoutForId(callee->AsProperty()->LoadId(), TOS_REG); // Push the target function under the receiver. @@ -2597,10 +2585,9 @@ void FullCodeGenerator::EmitKeyedCallWithLoadIC(Call* expr, Expression* callee = expr->expression(); // Load the function from the receiver. - ASSERT(callee->IsProperty()); - __ mov(edx, Operand(esp, 0)); - // Move the key into the right register for the keyed load IC. - __ mov(ecx, eax); + DCHECK(callee->IsProperty()); + __ mov(LoadIC::ReceiverRegister(), Operand(esp, 0)); + __ mov(LoadIC::NameRegister(), eax); EmitKeyedPropertyLoad(callee->AsProperty()); PrepareForBailoutForId(callee->AsProperty()->LoadId(), TOS_REG); @@ -2658,7 +2645,7 @@ void FullCodeGenerator::EmitResolvePossiblyDirectEval(int arg_count) { __ push(Immediate(Smi::FromInt(scope()->start_position()))); // Do the runtime call. - __ CallRuntime(Runtime::kHiddenResolvePossiblyDirectEval, 5); + __ CallRuntime(Runtime::kResolvePossiblyDirectEval, 5); } @@ -2718,14 +2705,14 @@ void FullCodeGenerator::VisitCall(Call* expr) { { PreservePositionScope scope(masm()->positions_recorder()); // Generate code for loading from variables potentially shadowed by // eval-introduced variables. 
- EmitDynamicLookupFastCase(proxy->var(), NOT_INSIDE_TYPEOF, &slow, &done); + EmitDynamicLookupFastCase(proxy, NOT_INSIDE_TYPEOF, &slow, &done); } __ bind(&slow); // Call the runtime to find the function to call (returned in eax) and // the object holding it (returned in edx). __ push(context_register()); __ push(Immediate(proxy->name())); - __ CallRuntime(Runtime::kHiddenLoadContextSlot, 2); + __ CallRuntime(Runtime::kLoadLookupSlot, 2); __ push(eax); // Function. __ push(edx); // Receiver. @@ -2759,7 +2746,7 @@ void FullCodeGenerator::VisitCall(Call* expr) { } } else { - ASSERT(call_type == Call::OTHER_CALL); + DCHECK(call_type == Call::OTHER_CALL); // Call to an arbitrary expression not handled specially above. { PreservePositionScope scope(masm()->positions_recorder()); VisitForStackValue(callee); @@ -2771,7 +2758,7 @@ void FullCodeGenerator::VisitCall(Call* expr) { #ifdef DEBUG // RecordJSReturnSite should have been called. - ASSERT(expr->return_is_recorded_); + DCHECK(expr->return_is_recorded_); #endif } @@ -2805,7 +2792,7 @@ void FullCodeGenerator::VisitCallNew(CallNew* expr) { // Record call targets in unoptimized code. if (FLAG_pretenuring_call_new) { EnsureSlotContainsAllocationSite(expr->AllocationSiteFeedbackSlot()); - ASSERT(expr->AllocationSiteFeedbackSlot() == + DCHECK(expr->AllocationSiteFeedbackSlot() == expr->CallNewFeedbackSlot() + 1); } @@ -2821,7 +2808,7 @@ void FullCodeGenerator::VisitCallNew(CallNew* expr) { void FullCodeGenerator::EmitIsSmi(CallRuntime* expr) { ZoneList<Expression*>* args = expr->arguments(); - ASSERT(args->length() == 1); + DCHECK(args->length() == 1); VisitForAccumulatorValue(args->at(0)); @@ -2842,7 +2829,7 @@ void FullCodeGenerator::EmitIsSmi(CallRuntime* expr) { void FullCodeGenerator::EmitIsNonNegativeSmi(CallRuntime* expr) { ZoneList<Expression*>* args = expr->arguments(); - ASSERT(args->length() == 1); + DCHECK(args->length() == 1); VisitForAccumulatorValue(args->at(0)); @@ -2863,7 +2850,7 @@ void FullCodeGenerator::EmitIsNonNegativeSmi(CallRuntime* expr) { void FullCodeGenerator::EmitIsObject(CallRuntime* expr) { ZoneList<Expression*>* args = expr->arguments(); - ASSERT(args->length() == 1); + DCHECK(args->length() == 1); VisitForAccumulatorValue(args->at(0)); @@ -2895,7 +2882,7 @@ void FullCodeGenerator::EmitIsObject(CallRuntime* expr) { void FullCodeGenerator::EmitIsSpecObject(CallRuntime* expr) { ZoneList<Expression*>* args = expr->arguments(); - ASSERT(args->length() == 1); + DCHECK(args->length() == 1); VisitForAccumulatorValue(args->at(0)); @@ -2917,7 +2904,7 @@ void FullCodeGenerator::EmitIsSpecObject(CallRuntime* expr) { void FullCodeGenerator::EmitIsUndetectableObject(CallRuntime* expr) { ZoneList<Expression*>* args = expr->arguments(); - ASSERT(args->length() == 1); + DCHECK(args->length() == 1); VisitForAccumulatorValue(args->at(0)); @@ -2942,7 +2929,7 @@ void FullCodeGenerator::EmitIsUndetectableObject(CallRuntime* expr) { void FullCodeGenerator::EmitIsStringWrapperSafeForDefaultValueOf( CallRuntime* expr) { ZoneList<Expression*>* args = expr->arguments(); - ASSERT(args->length() == 1); + DCHECK(args->length() == 1); VisitForAccumulatorValue(args->at(0)); @@ -2986,7 +2973,7 @@ void FullCodeGenerator::EmitIsStringWrapperSafeForDefaultValueOf( STATIC_ASSERT(kSmiTagSize == 1); STATIC_ASSERT(kPointerSize == 4); __ imul(ecx, ecx, DescriptorArray::kDescriptorSize); - __ lea(ecx, Operand(ebx, ecx, times_2, DescriptorArray::kFirstOffset)); + __ lea(ecx, Operand(ebx, ecx, times_4, DescriptorArray::kFirstOffset)); // Calculate location of 
the first key name. __ add(ebx, Immediate(DescriptorArray::kFirstOffset)); // Loop through all the keys in the descriptor array. If one of these is the @@ -3032,7 +3019,7 @@ void FullCodeGenerator::EmitIsStringWrapperSafeForDefaultValueOf( void FullCodeGenerator::EmitIsFunction(CallRuntime* expr) { ZoneList<Expression*>* args = expr->arguments(); - ASSERT(args->length() == 1); + DCHECK(args->length() == 1); VisitForAccumulatorValue(args->at(0)); @@ -3054,7 +3041,7 @@ void FullCodeGenerator::EmitIsFunction(CallRuntime* expr) { void FullCodeGenerator::EmitIsMinusZero(CallRuntime* expr) { ZoneList<Expression*>* args = expr->arguments(); - ASSERT(args->length() == 1); + DCHECK(args->length() == 1); VisitForAccumulatorValue(args->at(0)); @@ -3082,7 +3069,7 @@ void FullCodeGenerator::EmitIsMinusZero(CallRuntime* expr) { void FullCodeGenerator::EmitIsArray(CallRuntime* expr) { ZoneList<Expression*>* args = expr->arguments(); - ASSERT(args->length() == 1); + DCHECK(args->length() == 1); VisitForAccumulatorValue(args->at(0)); @@ -3104,7 +3091,7 @@ void FullCodeGenerator::EmitIsArray(CallRuntime* expr) { void FullCodeGenerator::EmitIsRegExp(CallRuntime* expr) { ZoneList<Expression*>* args = expr->arguments(); - ASSERT(args->length() == 1); + DCHECK(args->length() == 1); VisitForAccumulatorValue(args->at(0)); @@ -3126,7 +3113,7 @@ void FullCodeGenerator::EmitIsRegExp(CallRuntime* expr) { void FullCodeGenerator::EmitIsConstructCall(CallRuntime* expr) { - ASSERT(expr->arguments()->length() == 0); + DCHECK(expr->arguments()->length() == 0); Label materialize_true, materialize_false; Label* if_true = NULL; @@ -3158,7 +3145,7 @@ void FullCodeGenerator::EmitIsConstructCall(CallRuntime* expr) { void FullCodeGenerator::EmitObjectEquals(CallRuntime* expr) { ZoneList<Expression*>* args = expr->arguments(); - ASSERT(args->length() == 2); + DCHECK(args->length() == 2); // Load the two objects into registers and perform the comparison. VisitForStackValue(args->at(0)); @@ -3182,7 +3169,7 @@ void FullCodeGenerator::EmitObjectEquals(CallRuntime* expr) { void FullCodeGenerator::EmitArguments(CallRuntime* expr) { ZoneList<Expression*>* args = expr->arguments(); - ASSERT(args->length() == 1); + DCHECK(args->length() == 1); // ArgumentsAccessStub expects the key in edx and the formal // parameter count in eax. @@ -3196,7 +3183,7 @@ void FullCodeGenerator::EmitArguments(CallRuntime* expr) { void FullCodeGenerator::EmitArgumentsLength(CallRuntime* expr) { - ASSERT(expr->arguments()->length() == 0); + DCHECK(expr->arguments()->length() == 0); Label exit; // Get the number of formal parameters. @@ -3220,7 +3207,7 @@ void FullCodeGenerator::EmitArgumentsLength(CallRuntime* expr) { void FullCodeGenerator::EmitClassOf(CallRuntime* expr) { ZoneList<Expression*>* args = expr->arguments(); - ASSERT(args->length() == 1); + DCHECK(args->length() == 1); Label done, null, function, non_function_constructor; VisitForAccumulatorValue(args->at(0)); @@ -3283,7 +3270,7 @@ void FullCodeGenerator::EmitSubString(CallRuntime* expr) { // Load the arguments on the stack and call the stub. SubStringStub stub(isolate()); ZoneList<Expression*>* args = expr->arguments(); - ASSERT(args->length() == 3); + DCHECK(args->length() == 3); VisitForStackValue(args->at(0)); VisitForStackValue(args->at(1)); VisitForStackValue(args->at(2)); @@ -3296,7 +3283,7 @@ void FullCodeGenerator::EmitRegExpExec(CallRuntime* expr) { // Load the arguments on the stack and call the stub. 
RegExpExecStub stub(isolate()); ZoneList<Expression*>* args = expr->arguments(); - ASSERT(args->length() == 4); + DCHECK(args->length() == 4); VisitForStackValue(args->at(0)); VisitForStackValue(args->at(1)); VisitForStackValue(args->at(2)); @@ -3308,7 +3295,7 @@ void FullCodeGenerator::EmitRegExpExec(CallRuntime* expr) { void FullCodeGenerator::EmitValueOf(CallRuntime* expr) { ZoneList<Expression*>* args = expr->arguments(); - ASSERT(args->length() == 1); + DCHECK(args->length() == 1); VisitForAccumulatorValue(args->at(0)); // Load the object. @@ -3327,8 +3314,8 @@ void FullCodeGenerator::EmitValueOf(CallRuntime* expr) { void FullCodeGenerator::EmitDateField(CallRuntime* expr) { ZoneList<Expression*>* args = expr->arguments(); - ASSERT(args->length() == 2); - ASSERT_NE(NULL, args->at(1)->AsLiteral()); + DCHECK(args->length() == 2); + DCHECK_NE(NULL, args->at(1)->AsLiteral()); Smi* index = Smi::cast(*(args->at(1)->AsLiteral()->value())); VisitForAccumulatorValue(args->at(0)); // Load the object. @@ -3364,7 +3351,7 @@ void FullCodeGenerator::EmitDateField(CallRuntime* expr) { } __ bind(&not_date_object); - __ CallRuntime(Runtime::kHiddenThrowNotDateError, 0); + __ CallRuntime(Runtime::kThrowNotDateError, 0); __ bind(&done); context()->Plug(result); } @@ -3372,7 +3359,7 @@ void FullCodeGenerator::EmitDateField(CallRuntime* expr) { void FullCodeGenerator::EmitOneByteSeqStringSetChar(CallRuntime* expr) { ZoneList<Expression*>* args = expr->arguments(); - ASSERT_EQ(3, args->length()); + DCHECK_EQ(3, args->length()); Register string = eax; Register index = ebx; @@ -3408,7 +3395,7 @@ void FullCodeGenerator::EmitOneByteSeqStringSetChar(CallRuntime* expr) { void FullCodeGenerator::EmitTwoByteSeqStringSetChar(CallRuntime* expr) { ZoneList<Expression*>* args = expr->arguments(); - ASSERT_EQ(3, args->length()); + DCHECK_EQ(3, args->length()); Register string = eax; Register index = ebx; @@ -3442,23 +3429,19 @@ void FullCodeGenerator::EmitTwoByteSeqStringSetChar(CallRuntime* expr) { void FullCodeGenerator::EmitMathPow(CallRuntime* expr) { // Load the arguments on the stack and call the runtime function. ZoneList<Expression*>* args = expr->arguments(); - ASSERT(args->length() == 2); + DCHECK(args->length() == 2); VisitForStackValue(args->at(0)); VisitForStackValue(args->at(1)); - if (CpuFeatures::IsSupported(SSE2)) { - MathPowStub stub(isolate(), MathPowStub::ON_STACK); - __ CallStub(&stub); - } else { - __ CallRuntime(Runtime::kHiddenMathPowSlow, 2); - } + MathPowStub stub(isolate(), MathPowStub::ON_STACK); + __ CallStub(&stub); context()->Plug(eax); } void FullCodeGenerator::EmitSetValueOf(CallRuntime* expr) { ZoneList<Expression*>* args = expr->arguments(); - ASSERT(args->length() == 2); + DCHECK(args->length() == 2); VisitForStackValue(args->at(0)); // Load the object. VisitForAccumulatorValue(args->at(1)); // Load the value. @@ -3487,7 +3470,7 @@ void FullCodeGenerator::EmitSetValueOf(CallRuntime* expr) { void FullCodeGenerator::EmitNumberToString(CallRuntime* expr) { ZoneList<Expression*>* args = expr->arguments(); - ASSERT_EQ(args->length(), 1); + DCHECK_EQ(args->length(), 1); // Load the argument into eax and call the stub.
VisitForAccumulatorValue(args->at(0)); @@ -3500,7 +3483,7 @@ void FullCodeGenerator::EmitNumberToString(CallRuntime* expr) { void FullCodeGenerator::EmitStringCharFromCode(CallRuntime* expr) { ZoneList<Expression*>* args = expr->arguments(); - ASSERT(args->length() == 1); + DCHECK(args->length() == 1); VisitForAccumulatorValue(args->at(0)); @@ -3519,7 +3502,7 @@ void FullCodeGenerator::EmitStringCharFromCode(CallRuntime* expr) { void FullCodeGenerator::EmitStringCharCodeAt(CallRuntime* expr) { ZoneList<Expression*>* args = expr->arguments(); - ASSERT(args->length() == 2); + DCHECK(args->length() == 2); VisitForStackValue(args->at(0)); VisitForAccumulatorValue(args->at(1)); @@ -3565,7 +3548,7 @@ void FullCodeGenerator::EmitStringCharCodeAt(CallRuntime* expr) { void FullCodeGenerator::EmitStringCharAt(CallRuntime* expr) { ZoneList<Expression*>* args = expr->arguments(); - ASSERT(args->length() == 2); + DCHECK(args->length() == 2); VisitForStackValue(args->at(0)); VisitForAccumulatorValue(args->at(1)); @@ -3613,7 +3596,7 @@ void FullCodeGenerator::EmitStringCharAt(CallRuntime* expr) { void FullCodeGenerator::EmitStringAdd(CallRuntime* expr) { ZoneList<Expression*>* args = expr->arguments(); - ASSERT_EQ(2, args->length()); + DCHECK_EQ(2, args->length()); VisitForStackValue(args->at(0)); VisitForAccumulatorValue(args->at(1)); @@ -3626,7 +3609,7 @@ void FullCodeGenerator::EmitStringAdd(CallRuntime* expr) { void FullCodeGenerator::EmitStringCompare(CallRuntime* expr) { ZoneList<Expression*>* args = expr->arguments(); - ASSERT_EQ(2, args->length()); + DCHECK_EQ(2, args->length()); VisitForStackValue(args->at(0)); VisitForStackValue(args->at(1)); @@ -3639,7 +3622,7 @@ void FullCodeGenerator::EmitStringCompare(CallRuntime* expr) { void FullCodeGenerator::EmitCallFunction(CallRuntime* expr) { ZoneList<Expression*>* args = expr->arguments(); - ASSERT(args->length() >= 2); + DCHECK(args->length() >= 2); int arg_count = args->length() - 2; // 2 ~ receiver and function. for (int i = 0; i < arg_count + 1; ++i) { @@ -3673,7 +3656,7 @@ void FullCodeGenerator::EmitRegExpConstructResult(CallRuntime* expr) { // Load the arguments on the stack and call the stub. RegExpConstructResultStub stub(isolate()); ZoneList<Expression*>* args = expr->arguments(); - ASSERT(args->length() == 3); + DCHECK(args->length() == 3); VisitForStackValue(args->at(0)); VisitForStackValue(args->at(1)); VisitForAccumulatorValue(args->at(2)); @@ -3686,9 +3669,9 @@ void FullCodeGenerator::EmitRegExpConstructResult(CallRuntime* expr) { void FullCodeGenerator::EmitGetFromCache(CallRuntime* expr) { ZoneList<Expression*>* args = expr->arguments(); - ASSERT_EQ(2, args->length()); + DCHECK_EQ(2, args->length()); - ASSERT_NE(NULL, args->at(0)->AsLiteral()); + DCHECK_NE(NULL, args->at(0)->AsLiteral()); int cache_id = Smi::cast(*(args->at(0)->AsLiteral()->value()))->value(); Handle<FixedArray> jsfunction_result_caches( @@ -3726,7 +3709,7 @@ void FullCodeGenerator::EmitGetFromCache(CallRuntime* expr) { // Call runtime to perform the lookup. 
__ push(cache); __ push(key); - __ CallRuntime(Runtime::kHiddenGetFromCache, 2); + __ CallRuntime(Runtime::kGetFromCache, 2); __ bind(&done); context()->Plug(eax); @@ -3735,7 +3718,7 @@ void FullCodeGenerator::EmitGetFromCache(CallRuntime* expr) { void FullCodeGenerator::EmitHasCachedArrayIndex(CallRuntime* expr) { ZoneList<Expression*>* args = expr->arguments(); - ASSERT(args->length() == 1); + DCHECK(args->length() == 1); VisitForAccumulatorValue(args->at(0)); @@ -3759,7 +3742,7 @@ void FullCodeGenerator::EmitHasCachedArrayIndex(CallRuntime* expr) { void FullCodeGenerator::EmitGetCachedArrayIndex(CallRuntime* expr) { ZoneList<Expression*>* args = expr->arguments(); - ASSERT(args->length() == 1); + DCHECK(args->length() == 1); VisitForAccumulatorValue(args->at(0)); __ AssertString(eax); @@ -3777,7 +3760,7 @@ void FullCodeGenerator::EmitFastAsciiArrayJoin(CallRuntime* expr) { loop_1, loop_1_condition, loop_2, loop_2_entry, loop_3, loop_3_entry; ZoneList<Expression*>* args = expr->arguments(); - ASSERT(args->length() == 2); + DCHECK(args->length() == 2); // We will leave the separator on the stack until the end of the function. VisitForStackValue(args->at(1)); // Load this to eax (= array) @@ -4035,6 +4018,16 @@ void FullCodeGenerator::EmitFastAsciiArrayJoin(CallRuntime* expr) { } +void FullCodeGenerator::EmitDebugIsActive(CallRuntime* expr) { + DCHECK(expr->arguments()->length() == 0); + ExternalReference debug_is_active = + ExternalReference::debug_is_active_address(isolate()); + __ movzx_b(eax, Operand::StaticVariable(debug_is_active)); + __ SmiTag(eax); + context()->Plug(eax); +} + + void FullCodeGenerator::VisitCallRuntime(CallRuntime* expr) { if (expr->function() != NULL && expr->function()->intrinsic_type == Runtime::INLINE) { @@ -4052,9 +4045,15 @@ void FullCodeGenerator::VisitCallRuntime(CallRuntime* expr) { __ push(FieldOperand(eax, GlobalObject::kBuiltinsOffset)); // Load the function from the receiver. - __ mov(edx, Operand(esp, 0)); - __ mov(ecx, Immediate(expr->name())); - CallLoadIC(NOT_CONTEXTUAL, expr->CallRuntimeFeedbackId()); + __ mov(LoadIC::ReceiverRegister(), Operand(esp, 0)); + __ mov(LoadIC::NameRegister(), Immediate(expr->name())); + if (FLAG_vector_ics) { + __ mov(LoadIC::SlotRegister(), + Immediate(Smi::FromInt(expr->CallRuntimeFeedbackSlot()))); + CallLoadIC(NOT_CONTEXTUAL); + } else { + CallLoadIC(NOT_CONTEXTUAL, expr->CallRuntimeFeedbackId()); + } // Push the target function under the receiver. __ push(Operand(esp, 0)); @@ -4108,7 +4107,7 @@ void FullCodeGenerator::VisitUnaryOperation(UnaryOperation* expr) { Variable* var = proxy->var(); // Delete of an unqualified identifier is disallowed in strict mode // but "delete this" is allowed. - ASSERT(strict_mode() == SLOPPY || var->is_this()); + DCHECK(strict_mode() == SLOPPY || var->is_this()); if (var->IsUnallocated()) { __ push(GlobalObjectOperand()); __ push(Immediate(var->name())); @@ -4125,7 +4124,7 @@ void FullCodeGenerator::VisitUnaryOperation(UnaryOperation* expr) { // context where the variable was introduced. __ push(context_register()); __ push(Immediate(var->name())); - __ CallRuntime(Runtime::kHiddenDeleteContextSlot, 2); + __ CallRuntime(Runtime::kDeleteLookupSlot, 2); context()->Plug(eax); } } else { @@ -4163,7 +4162,7 @@ void FullCodeGenerator::VisitUnaryOperation(UnaryOperation* expr) { // for control and plugging the control flow into the context, // because we need to prepare a pair of extra administrative AST ids // for the optimizing compiler. 
- ASSERT(context()->IsAccumulatorValue() || context()->IsStackValue()); + DCHECK(context()->IsAccumulatorValue() || context()->IsStackValue()); Label materialize_true, materialize_false, done; VisitForControl(expr->expression(), &materialize_false, @@ -4206,7 +4205,7 @@ void FullCodeGenerator::VisitUnaryOperation(UnaryOperation* expr) { void FullCodeGenerator::VisitCountOperation(CountOperation* expr) { - ASSERT(expr->expression()->IsValidReferenceExpression()); + DCHECK(expr->expression()->IsValidReferenceExpression()); Comment cmnt(masm_, "[ CountOperation"); SetSourcePosition(expr->position()); @@ -4225,7 +4224,7 @@ void FullCodeGenerator::VisitCountOperation(CountOperation* expr) { // Evaluate expression and get value. if (assign_type == VARIABLE) { - ASSERT(expr->expression()->AsVariableProxy()->var() != NULL); + DCHECK(expr->expression()->AsVariableProxy()->var() != NULL); AccumulatorValueContext context(this); EmitVariableLoad(expr->expression()->AsVariableProxy()); } else { @@ -4234,16 +4233,16 @@ void FullCodeGenerator::VisitCountOperation(CountOperation* expr) { __ push(Immediate(Smi::FromInt(0))); } if (assign_type == NAMED_PROPERTY) { - // Put the object both on the stack and in edx. - VisitForAccumulatorValue(prop->obj()); - __ push(eax); - __ mov(edx, eax); + // Put the object both on the stack and in the register. + VisitForStackValue(prop->obj()); + __ mov(LoadIC::ReceiverRegister(), Operand(esp, 0)); EmitNamedPropertyLoad(prop); } else { VisitForStackValue(prop->obj()); VisitForStackValue(prop->key()); - __ mov(edx, Operand(esp, kPointerSize)); // Object. - __ mov(ecx, Operand(esp, 0)); // Key. + __ mov(LoadIC::ReceiverRegister(), + Operand(esp, kPointerSize)); // Object. + __ mov(LoadIC::NameRegister(), Operand(esp, 0)); // Key. EmitKeyedPropertyLoad(prop); } } @@ -4358,8 +4357,8 @@ void FullCodeGenerator::VisitCountOperation(CountOperation* expr) { } break; case NAMED_PROPERTY: { - __ mov(ecx, prop->key()->AsLiteral()->value()); - __ pop(edx); + __ mov(StoreIC::NameRegister(), prop->key()->AsLiteral()->value()); + __ pop(StoreIC::ReceiverRegister()); CallStoreIC(expr->CountStoreFeedbackId()); PrepareForBailoutForId(expr->AssignmentId(), TOS_REG); if (expr->is_postfix()) { @@ -4372,8 +4371,8 @@ void FullCodeGenerator::VisitCountOperation(CountOperation* expr) { break; } case KEYED_PROPERTY: { - __ pop(ecx); - __ pop(edx); + __ pop(KeyedStoreIC::NameRegister()); + __ pop(KeyedStoreIC::ReceiverRegister()); Handle<Code> ic = strict_mode() == SLOPPY ? isolate()->builtins()->KeyedStoreIC_Initialize() : isolate()->builtins()->KeyedStoreIC_Initialize_Strict(); @@ -4395,13 +4394,17 @@ void FullCodeGenerator::VisitCountOperation(CountOperation* expr) { void FullCodeGenerator::VisitForTypeofValue(Expression* expr) { VariableProxy* proxy = expr->AsVariableProxy(); - ASSERT(!context()->IsEffect()); - ASSERT(!context()->IsTest()); + DCHECK(!context()->IsEffect()); + DCHECK(!context()->IsTest()); if (proxy != NULL && proxy->var()->IsUnallocated()) { Comment cmnt(masm_, "[ Global variable"); - __ mov(edx, GlobalObjectOperand()); - __ mov(ecx, Immediate(proxy->name())); + __ mov(LoadIC::ReceiverRegister(), GlobalObjectOperand()); + __ mov(LoadIC::NameRegister(), Immediate(proxy->name())); + if (FLAG_vector_ics) { + __ mov(LoadIC::SlotRegister(), + Immediate(Smi::FromInt(proxy->VariableFeedbackSlot()))); + } // Use a regular load, not a contextual load, to avoid a reference // error. 
CallLoadIC(NOT_CONTEXTUAL); @@ -4413,12 +4416,12 @@ void FullCodeGenerator::VisitForTypeofValue(Expression* expr) { // Generate code for loading from variables potentially shadowed // by eval-introduced variables. - EmitDynamicLookupFastCase(proxy->var(), INSIDE_TYPEOF, &slow, &done); + EmitDynamicLookupFastCase(proxy, INSIDE_TYPEOF, &slow, &done); __ bind(&slow); __ push(esi); __ push(Immediate(proxy->name())); - __ CallRuntime(Runtime::kHiddenLoadContextSlotNoReferenceError, 2); + __ CallRuntime(Runtime::kLoadLookupSlotNoReferenceError, 2); PrepareForBailout(expr, TOS_REG); __ bind(&done); @@ -4468,10 +4471,6 @@ void FullCodeGenerator::EmitLiteralCompareTypeof(Expression* expr, __ j(equal, if_true); __ cmp(eax, isolate()->factory()->false_value()); Split(equal, if_true, if_false, fall_through); - } else if (FLAG_harmony_typeof && - String::Equals(check, factory->null_string())) { - __ cmp(eax, isolate()->factory()->null_value()); - Split(equal, if_true, if_false, fall_through); } else if (String::Equals(check, factory->undefined_string())) { __ cmp(eax, isolate()->factory()->undefined_value()); __ j(equal, if_true); @@ -4490,10 +4489,8 @@ void FullCodeGenerator::EmitLiteralCompareTypeof(Expression* expr, Split(equal, if_true, if_false, fall_through); } else if (String::Equals(check, factory->object_string())) { __ JumpIfSmi(eax, if_false); - if (!FLAG_harmony_typeof) { - __ cmp(eax, isolate()->factory()->null_value()); - __ j(equal, if_true); - } + __ cmp(eax, isolate()->factory()->null_value()); + __ j(equal, if_true); __ CmpObjectType(eax, FIRST_NONCALLABLE_SPEC_OBJECT_TYPE, edx); __ j(below, if_false); __ CmpInstanceType(edx, LAST_NONCALLABLE_SPEC_OBJECT_TYPE); @@ -4629,7 +4626,7 @@ Register FullCodeGenerator::context_register() { void FullCodeGenerator::StoreToFrameField(int frame_offset, Register value) { - ASSERT_EQ(POINTER_SIZE_ALIGN(frame_offset), frame_offset); + DCHECK_EQ(POINTER_SIZE_ALIGN(frame_offset), frame_offset); __ mov(Operand(ebp, frame_offset), value); } @@ -4654,7 +4651,7 @@ void FullCodeGenerator::PushFunctionArgumentForContextAllocation() { // Fetch it from the context. __ push(ContextOperand(esi, Context::CLOSURE_INDEX)); } else { - ASSERT(declaration_scope->is_function_scope()); + DCHECK(declaration_scope->is_function_scope()); __ push(Operand(ebp, JavaScriptFrameConstants::kFunctionOffset)); } } @@ -4665,7 +4662,7 @@ void FullCodeGenerator::PushFunctionArgumentForContextAllocation() { void FullCodeGenerator::EnterFinallyBlock() { // Cook return address on top of stack (smi encoded Code* delta) - ASSERT(!result_register().is(edx)); + DCHECK(!result_register().is(edx)); __ pop(edx); __ sub(edx, Immediate(masm_->CodeObject())); STATIC_ASSERT(kSmiTagSize + kSmiShiftSize == 1); @@ -4696,7 +4693,7 @@ void FullCodeGenerator::EnterFinallyBlock() { void FullCodeGenerator::ExitFinallyBlock() { - ASSERT(!result_register().is(edx)); + DCHECK(!result_register().is(edx)); // Restore pending message from stack. 
__ pop(edx); ExternalReference pending_message_script = @@ -4807,25 +4804,25 @@ BackEdgeTable::BackEdgeState BackEdgeTable::GetBackEdgeState( Address pc) { Address call_target_address = pc - kIntSize; Address jns_instr_address = call_target_address - 3; - ASSERT_EQ(kCallInstruction, *(call_target_address - 1)); + DCHECK_EQ(kCallInstruction, *(call_target_address - 1)); if (*jns_instr_address == kJnsInstruction) { - ASSERT_EQ(kJnsOffset, *(call_target_address - 2)); - ASSERT_EQ(isolate->builtins()->InterruptCheck()->entry(), + DCHECK_EQ(kJnsOffset, *(call_target_address - 2)); + DCHECK_EQ(isolate->builtins()->InterruptCheck()->entry(), Assembler::target_address_at(call_target_address, unoptimized_code)); return INTERRUPT; } - ASSERT_EQ(kNopByteOne, *jns_instr_address); - ASSERT_EQ(kNopByteTwo, *(call_target_address - 2)); + DCHECK_EQ(kNopByteOne, *jns_instr_address); + DCHECK_EQ(kNopByteTwo, *(call_target_address - 2)); if (Assembler::target_address_at(call_target_address, unoptimized_code) == isolate->builtins()->OnStackReplacement()->entry()) { return ON_STACK_REPLACEMENT; } - ASSERT_EQ(isolate->builtins()->OsrAfterStackCheck()->entry(), + DCHECK_EQ(isolate->builtins()->OsrAfterStackCheck()->entry(), Assembler::target_address_at(call_target_address, unoptimized_code)); return OSR_AFTER_STACK_CHECK; diff --git a/deps/v8/src/ia32/ic-ia32.cc b/deps/v8/src/ia32/ic-ia32.cc index 52aa0ea1102..62e845eb27e 100644 --- a/deps/v8/src/ia32/ic-ia32.cc +++ b/deps/v8/src/ia32/ic-ia32.cc @@ -2,14 +2,14 @@ // Use of this source code is governed by a BSD-style license that can be // found in the LICENSE file. -#include "v8.h" +#include "src/v8.h" #if V8_TARGET_ARCH_IA32 -#include "codegen.h" -#include "ic-inl.h" -#include "runtime.h" -#include "stub-cache.h" +#include "src/codegen.h" +#include "src/ic-inl.h" +#include "src/runtime.h" +#include "src/stub-cache.h" namespace v8 { namespace internal { @@ -35,45 +35,6 @@ static void GenerateGlobalInstanceTypeCheck(MacroAssembler* masm, } -// Generated code falls through if the receiver is a regular non-global -// JS object with slow properties and no interceptors. -static void GenerateNameDictionaryReceiverCheck(MacroAssembler* masm, - Register receiver, - Register r0, - Register r1, - Label* miss) { - // Register usage: - // receiver: holds the receiver on entry and is unchanged. - // r0: used to hold receiver instance type. - // Holds the property dictionary on fall through. - // r1: used to hold receivers map. - - // Check that the receiver isn't a smi. - __ JumpIfSmi(receiver, miss); - - // Check that the receiver is a valid JS object. - __ mov(r1, FieldOperand(receiver, HeapObject::kMapOffset)); - __ movzx_b(r0, FieldOperand(r1, Map::kInstanceTypeOffset)); - __ cmp(r0, FIRST_SPEC_OBJECT_TYPE); - __ j(below, miss); - - // If this assert fails, we have to check upper bound too. - STATIC_ASSERT(LAST_TYPE == LAST_SPEC_OBJECT_TYPE); - - GenerateGlobalInstanceTypeCheck(masm, r0, miss); - - // Check for non-global object that requires access check. - __ test_b(FieldOperand(r1, Map::kBitFieldOffset), - (1 << Map::kIsAccessCheckNeeded) | - (1 << Map::kHasNamedInterceptor)); - __ j(not_zero, miss); - - __ mov(r0, FieldOperand(receiver, JSObject::kPropertiesOffset)); - __ CheckMap(r0, masm->isolate()->factory()->hash_table_map(), miss, - DONT_DO_SMI_CHECK); -} - - // Helper function used to load a property from a dictionary backing // storage. 
This function may fail to load a property even though it is // in the dictionary, so code at miss_label must always call a backup @@ -220,7 +181,7 @@ static void GenerateKeyedLoadReceiverCheck(MacroAssembler* masm, // In the case that the object is a value-wrapper object, // we enter the runtime system to make sure that indexing // into string objects works as intended. - ASSERT(JS_OBJECT_TYPE > JS_VALUE_TYPE); + DCHECK(JS_OBJECT_TYPE > JS_VALUE_TYPE); __ CmpInstanceType(map, JS_OBJECT_TYPE); __ j(below, slow); @@ -383,41 +344,40 @@ static Operand GenerateUnmappedArgumentsLookup(MacroAssembler* masm, void KeyedLoadIC::GenerateGeneric(MacroAssembler* masm) { - // ----------- S t a t e ------------- - // -- ecx : key - // -- edx : receiver - // -- esp[0] : return address - // ----------------------------------- + // The return address is on the stack. Label slow, check_name, index_smi, index_name, property_array_property; Label probe_dictionary, check_number_dictionary; + Register receiver = ReceiverRegister(); + Register key = NameRegister(); + DCHECK(receiver.is(edx)); + DCHECK(key.is(ecx)); + // Check that the key is a smi. - __ JumpIfNotSmi(ecx, &check_name); + __ JumpIfNotSmi(key, &check_name); __ bind(&index_smi); // Now the key is known to be a smi. This place is also jumped to from // where a numeric string is converted to a smi. GenerateKeyedLoadReceiverCheck( - masm, edx, eax, Map::kHasIndexedInterceptor, &slow); + masm, receiver, eax, Map::kHasIndexedInterceptor, &slow); // Check the receiver's map to see if it has fast elements. __ CheckFastElements(eax, &check_number_dictionary); - GenerateFastArrayLoad(masm, edx, ecx, eax, eax, NULL, &slow); + GenerateFastArrayLoad(masm, receiver, key, eax, eax, NULL, &slow); Isolate* isolate = masm->isolate(); Counters* counters = isolate->counters(); __ IncrementCounter(counters->keyed_load_generic_smi(), 1); __ ret(0); __ bind(&check_number_dictionary); - __ mov(ebx, ecx); + __ mov(ebx, key); __ SmiUntag(ebx); - __ mov(eax, FieldOperand(edx, JSObject::kElementsOffset)); + __ mov(eax, FieldOperand(receiver, JSObject::kElementsOffset)); // Check whether the elements is a number dictionary. - // edx: receiver // ebx: untagged index - // ecx: key // eax: elements __ CheckMap(eax, isolate->factory()->hash_table_map(), @@ -426,32 +386,30 @@ void KeyedLoadIC::GenerateGeneric(MacroAssembler* masm) { Label slow_pop_receiver; // Push receiver on the stack to free up a register for the dictionary // probing. - __ push(edx); - __ LoadFromNumberDictionary(&slow_pop_receiver, eax, ecx, ebx, edx, edi, eax); + __ push(receiver); + __ LoadFromNumberDictionary(&slow_pop_receiver, eax, key, ebx, edx, edi, eax); // Pop receiver before returning. - __ pop(edx); + __ pop(receiver); __ ret(0); __ bind(&slow_pop_receiver); // Pop the receiver from the stack and jump to runtime. - __ pop(edx); + __ pop(receiver); __ bind(&slow); // Slow case: jump to runtime. - // edx: receiver - // ecx: key __ IncrementCounter(counters->keyed_load_generic_slow(), 1); GenerateRuntimeGetProperty(masm); __ bind(&check_name); - GenerateKeyNameCheck(masm, ecx, eax, ebx, &index_name, &slow); + GenerateKeyNameCheck(masm, key, eax, ebx, &index_name, &slow); GenerateKeyedLoadReceiverCheck( - masm, edx, eax, Map::kHasNamedInterceptor, &slow); + masm, receiver, eax, Map::kHasNamedInterceptor, &slow); // If the receiver is a fast-case object, check the keyed lookup // cache. Otherwise probe the dictionary. 
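Before falling back to the property dictionary, the code below probes the keyed lookup cache, a small table mapping (map, name) pairs to field offsets; the index is derived by mixing bits of the map pointer with the name's hash, which is the shr/xor/and sequence in the following hunk. A C++ sketch of that computation, with placeholder constants standing in for the real KeyedLookupCache values:

#include <cstdint>

// Mix the low bits of the map pointer with the name's hash field so that
// (map, name) pairs spread across a power-of-two cache table. The shift
// amounts and capacity here are illustrative, not V8's actual constants.
static const int kMapHashShift = 5;
static const int kNameHashShift = 2;
static const uint32_t kCapacityMask = 128 - 1;

uint32_t LookupCacheIndex(uintptr_t map_bits, uint32_t name_hash_field) {
  uint32_t map_part = static_cast<uint32_t>(map_bits) >> kMapHashShift;
  uint32_t name_part = name_hash_field >> kNameHashShift;
  return (map_part ^ name_part) & kCapacityMask;
}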
- __ mov(ebx, FieldOperand(edx, JSObject::kPropertiesOffset)); + __ mov(ebx, FieldOperand(receiver, JSObject::kPropertiesOffset)); __ cmp(FieldOperand(ebx, HeapObject::kMapOffset), Immediate(isolate->factory()->hash_table_map())); __ j(equal, &probe_dictionary); @@ -459,12 +417,12 @@ void KeyedLoadIC::GenerateGeneric(MacroAssembler* masm) { // The receiver's map is still in eax, compute the keyed lookup cache hash // based on 32 bits of the map pointer and the string hash. if (FLAG_debug_code) { - __ cmp(eax, FieldOperand(edx, HeapObject::kMapOffset)); + __ cmp(eax, FieldOperand(receiver, HeapObject::kMapOffset)); __ Check(equal, kMapIsNoLongerInEax); } __ mov(ebx, eax); // Keep the map around for later. __ shr(eax, KeyedLookupCache::kMapHashShift); - __ mov(edi, FieldOperand(ecx, String::kHashFieldOffset)); + __ mov(edi, FieldOperand(key, String::kHashFieldOffset)); __ shr(edi, String::kHashShift); __ xor_(eax, edi); __ and_(eax, KeyedLookupCache::kCapacityMask & KeyedLookupCache::kHashMask); @@ -487,7 +445,7 @@ void KeyedLoadIC::GenerateGeneric(MacroAssembler* masm) { __ cmp(ebx, Operand::StaticArray(edi, times_1, cache_keys)); __ j(not_equal, &try_next_entry); __ add(edi, Immediate(kPointerSize)); - __ cmp(ecx, Operand::StaticArray(edi, times_1, cache_keys)); + __ cmp(key, Operand::StaticArray(edi, times_1, cache_keys)); __ j(equal, &hit_on_nth_entry[i]); __ bind(&try_next_entry); } @@ -498,14 +456,12 @@ void KeyedLoadIC::GenerateGeneric(MacroAssembler* masm) { __ cmp(ebx, Operand::StaticArray(edi, times_1, cache_keys)); __ j(not_equal, &slow); __ add(edi, Immediate(kPointerSize)); - __ cmp(ecx, Operand::StaticArray(edi, times_1, cache_keys)); + __ cmp(key, Operand::StaticArray(edi, times_1, cache_keys)); __ j(not_equal, &slow); // Get field offset. - // edx : receiver - // ebx : receiver's map - // ecx : key - // eax : lookup cache index + // ebx : receiver's map + // eax : lookup cache index ExternalReference cache_field_offsets = ExternalReference::keyed_lookup_cache_field_offsets(masm->isolate()); @@ -529,13 +485,13 @@ void KeyedLoadIC::GenerateGeneric(MacroAssembler* masm) { __ bind(&load_in_object_property); __ movzx_b(eax, FieldOperand(ebx, Map::kInstanceSizeOffset)); __ add(eax, edi); - __ mov(eax, FieldOperand(edx, eax, times_pointer_size, 0)); + __ mov(eax, FieldOperand(receiver, eax, times_pointer_size, 0)); __ IncrementCounter(counters->keyed_load_generic_lookup_cache(), 1); __ ret(0); // Load property array property. __ bind(&property_array_property); - __ mov(eax, FieldOperand(edx, JSObject::kPropertiesOffset)); + __ mov(eax, FieldOperand(receiver, JSObject::kPropertiesOffset)); __ mov(eax, FieldOperand(eax, edi, times_pointer_size, FixedArray::kHeaderSize)); __ IncrementCounter(counters->keyed_load_generic_lookup_cache(), 1); @@ -545,33 +501,31 @@ void KeyedLoadIC::GenerateGeneric(MacroAssembler* masm) { // exists. __ bind(&probe_dictionary); - __ mov(eax, FieldOperand(edx, JSObject::kMapOffset)); + __ mov(eax, FieldOperand(receiver, JSObject::kMapOffset)); __ movzx_b(eax, FieldOperand(eax, Map::kInstanceTypeOffset)); GenerateGlobalInstanceTypeCheck(masm, eax, &slow); - GenerateDictionaryLoad(masm, &slow, ebx, ecx, eax, edi, eax); + GenerateDictionaryLoad(masm, &slow, ebx, key, eax, edi, eax); __ IncrementCounter(counters->keyed_load_generic_symbol(), 1); __ ret(0); __ bind(&index_name); - __ IndexFromHash(ebx, ecx); + __ IndexFromHash(ebx, key); // Now jump to the place where smi keys are handled. 
__ jmp(&index_smi); } void KeyedLoadIC::GenerateString(MacroAssembler* masm) { - // ----------- S t a t e ------------- - // -- ecx : key (index) - // -- edx : receiver - // -- esp[0] : return address - // ----------------------------------- + // Return address is on the stack. Label miss; - Register receiver = edx; - Register index = ecx; + Register receiver = ReceiverRegister(); + Register index = NameRegister(); Register scratch = ebx; + DCHECK(!scratch.is(receiver) && !scratch.is(index)); Register result = eax; + DCHECK(!result.is(scratch)); StringCharAtGenerator char_at_generator(receiver, index, @@ -593,40 +547,40 @@ void KeyedLoadIC::GenerateString(MacroAssembler* masm) { void KeyedLoadIC::GenerateIndexedInterceptor(MacroAssembler* masm) { - // ----------- S t a t e ------------- - // -- ecx : key - // -- edx : receiver - // -- esp[0] : return address - // ----------------------------------- + // Return address is on the stack. Label slow; + Register receiver = ReceiverRegister(); + Register key = NameRegister(); + Register scratch = eax; + DCHECK(!scratch.is(receiver) && !scratch.is(key)); + // Check that the receiver isn't a smi. - __ JumpIfSmi(edx, &slow); + __ JumpIfSmi(receiver, &slow); // Check that the key is an array index, that is Uint32. - __ test(ecx, Immediate(kSmiTagMask | kSmiSignMask)); + __ test(key, Immediate(kSmiTagMask | kSmiSignMask)); __ j(not_zero, &slow); // Get the map of the receiver. - __ mov(eax, FieldOperand(edx, HeapObject::kMapOffset)); + __ mov(scratch, FieldOperand(receiver, HeapObject::kMapOffset)); // Check that it has indexed interceptor and access checks // are not enabled for this object. - __ movzx_b(eax, FieldOperand(eax, Map::kBitFieldOffset)); - __ and_(eax, Immediate(kSlowCaseBitFieldMask)); - __ cmp(eax, Immediate(1 << Map::kHasIndexedInterceptor)); + __ movzx_b(scratch, FieldOperand(scratch, Map::kBitFieldOffset)); + __ and_(scratch, Immediate(kSlowCaseBitFieldMask)); + __ cmp(scratch, Immediate(1 << Map::kHasIndexedInterceptor)); __ j(not_zero, &slow); // Everything is fine, call runtime. - __ pop(eax); - __ push(edx); // receiver - __ push(ecx); // key - __ push(eax); // return address + __ pop(scratch); + __ push(receiver); // receiver + __ push(key); // key + __ push(scratch); // return address // Perform tail call to the entry. - ExternalReference ref = - ExternalReference(IC_Utility(kKeyedLoadPropertyWithInterceptor), - masm->isolate()); + ExternalReference ref = ExternalReference( + IC_Utility(kLoadElementWithInterceptor), masm->isolate()); __ TailCallExternalReference(ref, 2, 1); __ bind(&slow); @@ -635,21 +589,23 @@ void KeyedLoadIC::GenerateIndexedInterceptor(MacroAssembler* masm) { void KeyedLoadIC::GenerateSloppyArguments(MacroAssembler* masm) { - // ----------- S t a t e ------------- - // -- ecx : key - // -- edx : receiver - // -- esp[0] : return address - // ----------------------------------- + // The return address is on the stack. + Register receiver = ReceiverRegister(); + Register key = NameRegister(); + DCHECK(receiver.is(edx)); + DCHECK(key.is(ecx)); + Label slow, notin; Factory* factory = masm->isolate()->factory(); Operand mapped_location = - GenerateMappedArgumentsLookup(masm, edx, ecx, ebx, eax, ¬in, &slow); + GenerateMappedArgumentsLookup( + masm, receiver, key, ebx, eax, ¬in, &slow); __ mov(eax, mapped_location); __ Ret(); __ bind(¬in); // The unmapped lookup expects that the parameter map is in ebx. 
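GenerateSloppyArguments relies on the two lookup helpers for aliased arguments objects. A toy model, not V8's actual object layout, of the structure they traverse: each parameter-map slot holds either a context index (the argument still aliases the formal parameter, whose value lives in the context) or a hole marker (the value lives in an ordinary backing store instead):

#include <cstddef>
#include <vector>

// Illustrative model of an aliased (sloppy-mode) arguments object.
struct AliasedArguments {
  static const int kHole = -1;
  std::vector<int> parameter_map;  // context index per slot, or kHole
  std::vector<int> context;        // storage shared with the formals
  std::vector<int> backing_store;  // storage for unmapped elements

  int Load(size_t key) const {
    if (key < parameter_map.size() && parameter_map[key] != kHole) {
      return context[parameter_map[key]];  // mapped: read the formal
    }
    return backing_store[key];             // unmapped: plain element
  }
};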
Operand unmapped_location = - GenerateUnmappedArgumentsLookup(masm, ecx, ebx, eax, &slow); + GenerateUnmappedArgumentsLookup(masm, key, ebx, eax, &slow); __ cmp(unmapped_location, factory->the_hole_value()); __ j(equal, &slow); __ mov(eax, unmapped_location); @@ -660,27 +616,30 @@ void KeyedLoadIC::GenerateSloppyArguments(MacroAssembler* masm) { void KeyedStoreIC::GenerateSloppyArguments(MacroAssembler* masm) { - // ----------- S t a t e ------------- - // -- eax : value - // -- ecx : key - // -- edx : receiver - // -- esp[0] : return address - // ----------------------------------- + // Return address is on the stack. Label slow, notin; + Register receiver = ReceiverRegister(); + Register name = NameRegister(); + Register value = ValueRegister(); + DCHECK(receiver.is(edx)); + DCHECK(name.is(ecx)); + DCHECK(value.is(eax)); + Operand mapped_location = - GenerateMappedArgumentsLookup(masm, edx, ecx, ebx, edi, ¬in, &slow); - __ mov(mapped_location, eax); + GenerateMappedArgumentsLookup(masm, receiver, name, ebx, edi, ¬in, + &slow); + __ mov(mapped_location, value); __ lea(ecx, mapped_location); - __ mov(edx, eax); + __ mov(edx, value); __ RecordWrite(ebx, ecx, edx, kDontSaveFPRegs); __ Ret(); __ bind(¬in); // The unmapped lookup expects that the parameter map is in ebx. Operand unmapped_location = - GenerateUnmappedArgumentsLookup(masm, ecx, ebx, edi, &slow); - __ mov(unmapped_location, eax); + GenerateUnmappedArgumentsLookup(masm, name, ebx, edi, &slow); + __ mov(unmapped_location, value); __ lea(edi, unmapped_location); - __ mov(edx, eax); + __ mov(edx, value); __ RecordWrite(ebx, edi, edx, kDontSaveFPRegs); __ Ret(); __ bind(&slow); @@ -698,9 +657,13 @@ static void KeyedStoreGenerateGenericHelper( Label transition_smi_elements; Label finish_object_store, non_double_value, transition_double_elements; Label fast_double_without_map_check; - // eax: value - // ecx: key (a smi) - // edx: receiver + Register receiver = KeyedStoreIC::ReceiverRegister(); + Register key = KeyedStoreIC::NameRegister(); + Register value = KeyedStoreIC::ValueRegister(); + DCHECK(receiver.is(edx)); + DCHECK(key.is(ecx)); + DCHECK(value.is(eax)); + // key is a smi. // ebx: FixedArray receiver->elements // edi: receiver map // Fast case: Do the store, could either Object or double. @@ -715,43 +678,43 @@ static void KeyedStoreGenerateGenericHelper( // We have to go to the runtime if the current value is the hole because // there may be a callback on the element Label holecheck_passed1; - __ cmp(FixedArrayElementOperand(ebx, ecx), + __ cmp(FixedArrayElementOperand(ebx, key), masm->isolate()->factory()->the_hole_value()); __ j(not_equal, &holecheck_passed1); - __ JumpIfDictionaryInPrototypeChain(edx, ebx, edi, slow); - __ mov(ebx, FieldOperand(edx, JSObject::kElementsOffset)); + __ JumpIfDictionaryInPrototypeChain(receiver, ebx, edi, slow); + __ mov(ebx, FieldOperand(receiver, JSObject::kElementsOffset)); __ bind(&holecheck_passed1); // Smi stores don't require further checks. Label non_smi_value; - __ JumpIfNotSmi(eax, &non_smi_value); + __ JumpIfNotSmi(value, &non_smi_value); if (increment_length == kIncrementLength) { // Add 1 to receiver->length. - __ add(FieldOperand(edx, JSArray::kLengthOffset), + __ add(FieldOperand(receiver, JSArray::kLengthOffset), Immediate(Smi::FromInt(1))); } // It's irrelevant whether array is smi-only or not when writing a smi. 
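The smi fast path completed just below stores the value and returns without a write barrier; this is sound because of how smis are encoded on ia32 (the STATIC_ASSERT(kSmiTagSize + kSmiShiftSize == 1) earlier in this file states the same invariant). A short sketch of the tagging scheme:

#include <cstdint>

// An ia32 smi is its 31-bit payload shifted left once: the low bit of a
// smi is always 0, while heap object pointers carry a low tag bit of 1.
inline int32_t SmiTag(int32_t value) {
  return static_cast<int32_t>(static_cast<uint32_t>(value) << 1);
}
inline bool IsSmi(int32_t tagged) { return (tagged & 1) == 0; }

// A smi is an immediate, not a pointer into the heap, so storing one
// into a FixedArray can never create an old-to-new-space reference and
// RecordWrite can be skipped regardless of the array's elements kind.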
- __ mov(FixedArrayElementOperand(ebx, ecx), eax); + __ mov(FixedArrayElementOperand(ebx, key), value); __ ret(0); __ bind(&non_smi_value); // Escape to elements kind transition case. - __ mov(edi, FieldOperand(edx, HeapObject::kMapOffset)); + __ mov(edi, FieldOperand(receiver, HeapObject::kMapOffset)); __ CheckFastObjectElements(edi, &transition_smi_elements); // Fast elements array, store the value to the elements backing store. __ bind(&finish_object_store); if (increment_length == kIncrementLength) { // Add 1 to receiver->length. - __ add(FieldOperand(edx, JSArray::kLengthOffset), + __ add(FieldOperand(receiver, JSArray::kLengthOffset), Immediate(Smi::FromInt(1))); } - __ mov(FixedArrayElementOperand(ebx, ecx), eax); + __ mov(FixedArrayElementOperand(ebx, key), value); // Update write barrier for the elements array address. - __ mov(edx, eax); // Preserve the value which is returned. + __ mov(edx, value); // Preserve the value which is returned. __ RecordWriteArray( - ebx, edx, ecx, kDontSaveFPRegs, EMIT_REMEMBERED_SET, OMIT_SMI_CHECK); + ebx, edx, key, kDontSaveFPRegs, EMIT_REMEMBERED_SET, OMIT_SMI_CHECK); __ ret(0); __ bind(fast_double); @@ -768,26 +731,26 @@ static void KeyedStoreGenerateGenericHelper( // We have to see if the double version of the hole is present. If so // go to the runtime. uint32_t offset = FixedDoubleArray::kHeaderSize + sizeof(kHoleNanLower32); - __ cmp(FieldOperand(ebx, ecx, times_4, offset), Immediate(kHoleNanUpper32)); + __ cmp(FieldOperand(ebx, key, times_4, offset), Immediate(kHoleNanUpper32)); __ j(not_equal, &fast_double_without_map_check); - __ JumpIfDictionaryInPrototypeChain(edx, ebx, edi, slow); - __ mov(ebx, FieldOperand(edx, JSObject::kElementsOffset)); + __ JumpIfDictionaryInPrototypeChain(receiver, ebx, edi, slow); + __ mov(ebx, FieldOperand(receiver, JSObject::kElementsOffset)); __ bind(&fast_double_without_map_check); - __ StoreNumberToDoubleElements(eax, ebx, ecx, edi, xmm0, - &transition_double_elements, false); + __ StoreNumberToDoubleElements(value, ebx, key, edi, xmm0, + &transition_double_elements); if (increment_length == kIncrementLength) { // Add 1 to receiver->length. - __ add(FieldOperand(edx, JSArray::kLengthOffset), + __ add(FieldOperand(receiver, JSArray::kLengthOffset), Immediate(Smi::FromInt(1))); } __ ret(0); __ bind(&transition_smi_elements); - __ mov(ebx, FieldOperand(edx, HeapObject::kMapOffset)); + __ mov(ebx, FieldOperand(receiver, HeapObject::kMapOffset)); // Transition the array appropriately depending on the value type. 
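The CheckMap against the heap-number map just below selects between two elements-kind transitions. A sketch of the decision (the ElementsKind enumerators are V8's; the helper itself is illustrative):

enum ElementsKind { FAST_SMI_ELEMENTS, FAST_DOUBLE_ELEMENTS, FAST_ELEMENTS };

// A smi-only array receiving a HeapNumber stays unboxed by moving to
// FAST_DOUBLE_ELEMENTS; any other non-smi value forces the generic
// FAST_ELEMENTS representation (the &non_double_value path).
ElementsKind TargetKindForStore(bool value_is_smi, bool value_is_heap_number) {
  if (value_is_smi) return FAST_SMI_ELEMENTS;  // no transition needed
  if (value_is_heap_number) return FAST_DOUBLE_ELEMENTS;
  return FAST_ELEMENTS;
}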
- __ CheckMap(eax, + __ CheckMap(value, masm->isolate()->factory()->heap_number_map(), &non_double_value, DONT_DO_SMI_CHECK); @@ -801,8 +764,9 @@ static void KeyedStoreGenerateGenericHelper( slow); AllocationSiteMode mode = AllocationSite::GetMode(FAST_SMI_ELEMENTS, FAST_DOUBLE_ELEMENTS); - ElementsTransitionGenerator::GenerateSmiToDouble(masm, mode, slow); - __ mov(ebx, FieldOperand(edx, JSObject::kElementsOffset)); + ElementsTransitionGenerator::GenerateSmiToDouble( + masm, receiver, key, value, ebx, mode, slow); + __ mov(ebx, FieldOperand(receiver, JSObject::kElementsOffset)); __ jmp(&fast_double_without_map_check); __ bind(&non_double_value); @@ -813,51 +777,51 @@ static void KeyedStoreGenerateGenericHelper( edi, slow); mode = AllocationSite::GetMode(FAST_SMI_ELEMENTS, FAST_ELEMENTS); - ElementsTransitionGenerator::GenerateMapChangeElementsTransition(masm, mode, - slow); - __ mov(ebx, FieldOperand(edx, JSObject::kElementsOffset)); + ElementsTransitionGenerator::GenerateMapChangeElementsTransition( + masm, receiver, key, value, ebx, mode, slow); + __ mov(ebx, FieldOperand(receiver, JSObject::kElementsOffset)); __ jmp(&finish_object_store); __ bind(&transition_double_elements); // Elements are FAST_DOUBLE_ELEMENTS, but value is an Object that's not a // HeapNumber. Make sure that the receiver is a Array with FAST_ELEMENTS and // transition array from FAST_DOUBLE_ELEMENTS to FAST_ELEMENTS - __ mov(ebx, FieldOperand(edx, HeapObject::kMapOffset)); + __ mov(ebx, FieldOperand(receiver, HeapObject::kMapOffset)); __ LoadTransitionedArrayMapConditional(FAST_DOUBLE_ELEMENTS, FAST_ELEMENTS, ebx, edi, slow); mode = AllocationSite::GetMode(FAST_DOUBLE_ELEMENTS, FAST_ELEMENTS); - ElementsTransitionGenerator::GenerateDoubleToObject(masm, mode, slow); - __ mov(ebx, FieldOperand(edx, JSObject::kElementsOffset)); + ElementsTransitionGenerator::GenerateDoubleToObject( + masm, receiver, key, value, ebx, mode, slow); + __ mov(ebx, FieldOperand(receiver, JSObject::kElementsOffset)); __ jmp(&finish_object_store); } void KeyedStoreIC::GenerateGeneric(MacroAssembler* masm, StrictMode strict_mode) { - // ----------- S t a t e ------------- - // -- eax : value - // -- ecx : key - // -- edx : receiver - // -- esp[0] : return address - // ----------------------------------- + // Return address is on the stack. Label slow, fast_object, fast_object_grow; Label fast_double, fast_double_grow; Label array, extra, check_if_double_array; + Register receiver = ReceiverRegister(); + Register key = NameRegister(); + DCHECK(receiver.is(edx)); + DCHECK(key.is(ecx)); // Check that the object isn't a smi. - __ JumpIfSmi(edx, &slow); + __ JumpIfSmi(receiver, &slow); // Get the map from the receiver. - __ mov(edi, FieldOperand(edx, HeapObject::kMapOffset)); + __ mov(edi, FieldOperand(receiver, HeapObject::kMapOffset)); // Check that the receiver does not require access checks and is not observed. // The generic stub does not perform map checks or handle observed objects. __ test_b(FieldOperand(edi, Map::kBitFieldOffset), 1 << Map::kIsAccessCheckNeeded | 1 << Map::kIsObserved); __ j(not_zero, &slow); // Check that the key is a smi. - __ JumpIfNotSmi(ecx, &slow); + __ JumpIfNotSmi(key, &slow); __ CmpInstanceType(edi, JS_ARRAY_TYPE); __ j(equal, &array); // Check that the object is some kind of JSObject. @@ -865,13 +829,11 @@ void KeyedStoreIC::GenerateGeneric(MacroAssembler* masm, __ j(below, &slow); // Object case: Check key against length in the elements array. 
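The bounds check that follows compares the key and the array length while both are still tagged. That works because smi tagging is a uniform left shift, which preserves order; a one-line check of the invariant:

// SmiTag is strictly monotonic, so tagged comparison equals untagged
// comparison: SmiTag(a) < SmiTag(b) if and only if a < b (for the
// non-negative smis a valid array index and length can be).
static_assert((5 << 1) < (7 << 1), "smi order matches integer order");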
- // eax: value - // edx: JSObject - // ecx: key (a smi) + // Key is a smi. // edi: receiver map - __ mov(ebx, FieldOperand(edx, JSObject::kElementsOffset)); + __ mov(ebx, FieldOperand(receiver, JSObject::kElementsOffset)); // Check array bounds. Both the key and the length of FixedArray are smis. - __ cmp(ecx, FieldOperand(ebx, FixedArray::kLengthOffset)); + __ cmp(key, FieldOperand(ebx, FixedArray::kLengthOffset)); __ j(below, &fast_object); // Slow case: call runtime. @@ -882,15 +844,14 @@ void KeyedStoreIC::GenerateGeneric(MacroAssembler* masm, // perform the store and update the length. Used for adding one // element to the array by writing to array[array.length]. __ bind(&extra); - // eax: value - // edx: receiver, a JSArray - // ecx: key, a smi. + // receiver is a JSArray. + // key is a smi. // ebx: receiver->elements, a FixedArray // edi: receiver map - // flags: compare (ecx, edx.length()) + // flags: compare (key, receiver.length()) // do not leave holes in the array: __ j(not_equal, &slow); - __ cmp(ecx, FieldOperand(ebx, FixedArray::kLengthOffset)); + __ cmp(key, FieldOperand(ebx, FixedArray::kLengthOffset)); __ j(above_equal, &slow); __ mov(edi, FieldOperand(ebx, HeapObject::kMapOffset)); __ cmp(edi, masm->isolate()->factory()->fixed_array_map()); @@ -906,15 +867,14 @@ void KeyedStoreIC::GenerateGeneric(MacroAssembler* masm, // array. Check that the array is in fast mode (and writable); if it // is the length is always a smi. __ bind(&array); - // eax: value - // edx: receiver, a JSArray - // ecx: key, a smi. + // receiver is a JSArray. + // key is a smi. // edi: receiver map - __ mov(ebx, FieldOperand(edx, JSObject::kElementsOffset)); + __ mov(ebx, FieldOperand(receiver, JSObject::kElementsOffset)); // Check the key against the length in the array and fall through to the // common store code. - __ cmp(ecx, FieldOperand(edx, JSArray::kLengthOffset)); // Compare smis. + __ cmp(key, FieldOperand(receiver, JSArray::kLengthOffset)); // Compare smis. __ j(above_equal, &extra); KeyedStoreGenerateGenericHelper(masm, &fast_object, &fast_double, @@ -925,16 +885,17 @@ void KeyedStoreIC::GenerateGeneric(MacroAssembler* masm, void LoadIC::GenerateMegamorphic(MacroAssembler* masm) { - // ----------- S t a t e ------------- - // -- ecx : name - // -- edx : receiver - // -- esp[0] : return address - // ----------------------------------- + // The return address is on the stack. + Register receiver = ReceiverRegister(); + Register name = NameRegister(); + DCHECK(receiver.is(edx)); + DCHECK(name.is(ecx)); // Probe the stub cache. - Code::Flags flags = Code::ComputeHandlerFlags(Code::LOAD_IC); + Code::Flags flags = Code::RemoveTypeAndHolderFromFlags( + Code::ComputeHandlerFlags(Code::LOAD_IC)); masm->isolate()->stub_cache()->GenerateProbe( - masm, flags, edx, ecx, ebx, eax); + masm, flags, receiver, name, ebx, eax); // Cache miss: Jump to runtime. GenerateMiss(masm); @@ -942,39 +903,41 @@ void LoadIC::GenerateMegamorphic(MacroAssembler* masm) { void LoadIC::GenerateNormal(MacroAssembler* masm) { - // ----------- S t a t e ------------- - // -- ecx : name - // -- edx : receiver - // -- esp[0] : return address - // ----------------------------------- - Label miss; + Register dictionary = eax; + DCHECK(!dictionary.is(ReceiverRegister())); + DCHECK(!dictionary.is(NameRegister())); - GenerateNameDictionaryReceiverCheck(masm, edx, eax, ebx, &miss); + Label slow; - // eax: elements - // Search the dictionary placing the result in eax. 
- GenerateDictionaryLoad(masm, &miss, eax, ecx, edi, ebx, eax); + __ mov(dictionary, + FieldOperand(ReceiverRegister(), JSObject::kPropertiesOffset)); + GenerateDictionaryLoad(masm, &slow, dictionary, NameRegister(), edi, ebx, + eax); __ ret(0); - // Cache miss: Jump to runtime. - __ bind(&miss); - GenerateMiss(masm); + // Dictionary load failed, go slow (but don't miss). + __ bind(&slow); + GenerateRuntimeGetProperty(masm); } -void LoadIC::GenerateMiss(MacroAssembler* masm) { - // ----------- S t a t e ------------- - // -- ecx : name - // -- edx : receiver - // -- esp[0] : return address - // ----------------------------------- +static void LoadIC_PushArgs(MacroAssembler* masm) { + Register receiver = LoadIC::ReceiverRegister(); + Register name = LoadIC::NameRegister(); + DCHECK(!ebx.is(receiver) && !ebx.is(name)); + __ pop(ebx); + __ push(receiver); + __ push(name); + __ push(ebx); +} + + +void LoadIC::GenerateMiss(MacroAssembler* masm) { + // Return address is on the stack. __ IncrementCounter(masm->isolate()->counters()->load_miss(), 1); - __ pop(ebx); - __ push(edx); // receiver - __ push(ecx); // name - __ push(ebx); // return address + LoadIC_PushArgs(masm); // Perform tail call to the entry. ExternalReference ref = @@ -984,16 +947,8 @@ void LoadIC::GenerateMiss(MacroAssembler* masm) { void LoadIC::GenerateRuntimeGetProperty(MacroAssembler* masm) { - // ----------- S t a t e ------------- - // -- ecx : key - // -- edx : receiver - // -- esp[0] : return address - // ----------------------------------- - - __ pop(ebx); - __ push(edx); // receiver - __ push(ecx); // name - __ push(ebx); // return address + // Return address is on the stack. + LoadIC_PushArgs(masm); // Perform tail call to the entry. __ TailCallRuntime(Runtime::kGetProperty, 2, 1); @@ -1001,18 +956,10 @@ void LoadIC::GenerateRuntimeGetProperty(MacroAssembler* masm) { void KeyedLoadIC::GenerateMiss(MacroAssembler* masm) { - // ----------- S t a t e ------------- - // -- ecx : key - // -- edx : receiver - // -- esp[0] : return address - // ----------------------------------- - + // Return address is on the stack. __ IncrementCounter(masm->isolate()->counters()->keyed_load_miss(), 1); - __ pop(ebx); - __ push(edx); // receiver - __ push(ecx); // name - __ push(ebx); // return address + LoadIC_PushArgs(masm); // Perform tail call to the entry. ExternalReference ref = @@ -1021,17 +968,36 @@ void KeyedLoadIC::GenerateMiss(MacroAssembler* masm) { } -void KeyedLoadIC::GenerateRuntimeGetProperty(MacroAssembler* masm) { - // ----------- S t a t e ------------- - // -- ecx : key - // -- edx : receiver - // -- esp[0] : return address - // ----------------------------------- +// IC register specifications +const Register LoadIC::ReceiverRegister() { return edx; } +const Register LoadIC::NameRegister() { return ecx; } - __ pop(ebx); - __ push(edx); // receiver - __ push(ecx); // name - __ push(ebx); // return address + +const Register LoadIC::SlotRegister() { + DCHECK(FLAG_vector_ics); + return eax; +} + + +const Register LoadIC::VectorRegister() { + DCHECK(FLAG_vector_ics); + return ebx; +} + + +const Register StoreIC::ReceiverRegister() { return edx; } +const Register StoreIC::NameRegister() { return ecx; } +const Register StoreIC::ValueRegister() { return eax; } + + +const Register KeyedStoreIC::MapRegister() { + return ebx; +} + + +void KeyedLoadIC::GenerateRuntimeGetProperty(MacroAssembler* masm) { + // Return address is on the stack. + LoadIC_PushArgs(masm); // Perform tail call to the entry. 
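The newly factored LoadIC_PushArgs (and StoreIC_PushArgs further down) captures a recurring ia32 idiom: the caller's return address sits on top of the stack, so the stub pops it into a scratch register, pushes the IC's arguments, and pushes it back, producing the layout a tail-called runtime entry expects. A toy model of the shuffle, using a vector as the stack with the top at the back:

#include <cstdint>
#include <vector>

// After the shuffle, the arguments sit beneath the return address again,
// as if the caller had pushed them itself before the call.
void PushArgsBelowReturnAddress(std::vector<uintptr_t>* stack,
                                uintptr_t receiver, uintptr_t name) {
  uintptr_t return_address = stack->back();  // __ pop(ebx)
  stack->pop_back();
  stack->push_back(receiver);                // __ push(receiver)
  stack->push_back(name);                    // __ push(name)
  stack->push_back(return_address);          // __ push(ebx)
}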
__ TailCallRuntime(Runtime::kKeyedGetProperty, 2, 1); @@ -1039,34 +1005,36 @@ void KeyedLoadIC::GenerateRuntimeGetProperty(MacroAssembler* masm) { void StoreIC::GenerateMegamorphic(MacroAssembler* masm) { - // ----------- S t a t e ------------- - // -- eax : value - // -- ecx : name - // -- edx : receiver - // -- esp[0] : return address - // ----------------------------------- - Code::Flags flags = Code::ComputeHandlerFlags(Code::STORE_IC); + // Return address is on the stack. + Code::Flags flags = Code::RemoveTypeAndHolderFromFlags( + Code::ComputeHandlerFlags(Code::STORE_IC)); masm->isolate()->stub_cache()->GenerateProbe( - masm, flags, edx, ecx, ebx, no_reg); + masm, flags, ReceiverRegister(), NameRegister(), + ebx, no_reg); // Cache miss: Jump to runtime. GenerateMiss(masm); } -void StoreIC::GenerateMiss(MacroAssembler* masm) { - // ----------- S t a t e ------------- - // -- eax : value - // -- ecx : name - // -- edx : receiver - // -- esp[0] : return address - // ----------------------------------- +static void StoreIC_PushArgs(MacroAssembler* masm) { + Register receiver = StoreIC::ReceiverRegister(); + Register name = StoreIC::NameRegister(); + Register value = StoreIC::ValueRegister(); + + DCHECK(!ebx.is(receiver) && !ebx.is(name) && !ebx.is(value)); __ pop(ebx); - __ push(edx); - __ push(ecx); - __ push(eax); + __ push(receiver); + __ push(name); + __ push(value); __ push(ebx); +} + + +void StoreIC::GenerateMiss(MacroAssembler* masm) { + // Return address is on the stack. + StoreIC_PushArgs(masm); // Perform tail call to the entry. ExternalReference ref = @@ -1076,31 +1044,27 @@ void StoreIC::GenerateMiss(MacroAssembler* masm) { void StoreIC::GenerateNormal(MacroAssembler* masm) { - // ----------- S t a t e ------------- - // -- eax : value - // -- ecx : name - // -- edx : receiver - // -- esp[0] : return address - // ----------------------------------- - - Label miss, restore_miss; + Label restore_miss; + Register receiver = ReceiverRegister(); + Register name = NameRegister(); + Register value = ValueRegister(); + Register dictionary = ebx; - GenerateNameDictionaryReceiverCheck(masm, edx, ebx, edi, &miss); + __ mov(dictionary, FieldOperand(receiver, JSObject::kPropertiesOffset)); // A lot of registers are needed for storing to slow case // objects. Push and restore receiver but rely on // GenerateDictionaryStore preserving the value and name. - __ push(edx); - GenerateDictionaryStore(masm, &restore_miss, ebx, ecx, eax, edx, edi); + __ push(receiver); + GenerateDictionaryStore(masm, &restore_miss, dictionary, name, value, + receiver, edi); __ Drop(1); Counters* counters = masm->isolate()->counters(); __ IncrementCounter(counters->store_normal_hit(), 1); __ ret(0); __ bind(&restore_miss); - __ pop(edx); - - __ bind(&miss); + __ pop(receiver); __ IncrementCounter(counters->store_normal_miss(), 1); GenerateMiss(masm); } @@ -1108,60 +1072,41 @@ void StoreIC::GenerateNormal(MacroAssembler* masm) { void StoreIC::GenerateRuntimeSetProperty(MacroAssembler* masm, StrictMode strict_mode) { - // ----------- S t a t e ------------- - // -- eax : value - // -- ecx : name - // -- edx : receiver - // -- esp[0] : return address - // ----------------------------------- + // Return address is on the stack. 
+ DCHECK(!ebx.is(ReceiverRegister()) && !ebx.is(NameRegister()) && + !ebx.is(ValueRegister())); __ pop(ebx); - __ push(edx); - __ push(ecx); - __ push(eax); - __ push(Immediate(Smi::FromInt(NONE))); // PropertyAttributes + __ push(ReceiverRegister()); + __ push(NameRegister()); + __ push(ValueRegister()); __ push(Immediate(Smi::FromInt(strict_mode))); __ push(ebx); // return address // Do tail-call to runtime routine. - __ TailCallRuntime(Runtime::kSetProperty, 5, 1); + __ TailCallRuntime(Runtime::kSetProperty, 4, 1); } void KeyedStoreIC::GenerateRuntimeSetProperty(MacroAssembler* masm, StrictMode strict_mode) { - // ----------- S t a t e ------------- - // -- eax : value - // -- ecx : key - // -- edx : receiver - // -- esp[0] : return address - // ----------------------------------- - + // Return address is on the stack. + DCHECK(!ebx.is(ReceiverRegister()) && !ebx.is(NameRegister()) && + !ebx.is(ValueRegister())); __ pop(ebx); - __ push(edx); - __ push(ecx); - __ push(eax); - __ push(Immediate(Smi::FromInt(NONE))); // PropertyAttributes - __ push(Immediate(Smi::FromInt(strict_mode))); // Strict mode. - __ push(ebx); // return address + __ push(ReceiverRegister()); + __ push(NameRegister()); + __ push(ValueRegister()); + __ push(Immediate(Smi::FromInt(strict_mode))); + __ push(ebx); // return address // Do tail-call to runtime routine. - __ TailCallRuntime(Runtime::kSetProperty, 5, 1); + __ TailCallRuntime(Runtime::kSetProperty, 4, 1); } void KeyedStoreIC::GenerateMiss(MacroAssembler* masm) { - // ----------- S t a t e ------------- - // -- eax : value - // -- ecx : key - // -- edx : receiver - // -- esp[0] : return address - // ----------------------------------- - - __ pop(ebx); - __ push(edx); - __ push(ecx); - __ push(eax); - __ push(ebx); + // Return address is on the stack. + StoreIC_PushArgs(masm); // Do tail-call to runtime routine. ExternalReference ref = @@ -1171,18 +1116,8 @@ void KeyedStoreIC::GenerateMiss(MacroAssembler* masm) { void StoreIC::GenerateSlow(MacroAssembler* masm) { - // ----------- S t a t e ------------- - // -- eax : value - // -- ecx : key - // -- edx : receiver - // -- esp[0] : return address - // ----------------------------------- - - __ pop(ebx); - __ push(edx); - __ push(ecx); - __ push(eax); - __ push(ebx); // return address + // Return address is on the stack. + StoreIC_PushArgs(masm); // Do tail-call to runtime routine. ExternalReference ref(IC_Utility(kStoreIC_Slow), masm->isolate()); @@ -1191,18 +1126,8 @@ void StoreIC::GenerateSlow(MacroAssembler* masm) { void KeyedStoreIC::GenerateSlow(MacroAssembler* masm) { - // ----------- S t a t e ------------- - // -- eax : value - // -- ecx : key - // -- edx : receiver - // -- esp[0] : return address - // ----------------------------------- - - __ pop(ebx); - __ push(edx); - __ push(ecx); - __ push(eax); - __ push(ebx); // return address + // Return address is on the stack. + StoreIC_PushArgs(masm); // Do tail-call to runtime routine. ExternalReference ref(IC_Utility(kKeyedStoreIC_Slow), masm->isolate()); @@ -1252,7 +1177,7 @@ void PatchInlinedSmiCode(Address address, InlinedSmiCheck check) { // If the instruction following the call is not a test al, nothing // was inlined. 
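PatchInlinedSmiCode, continued below, enables or disables an inlined smi check by rewriting a single short-jump opcode in place, turning a carry-based jump into a zero-based one or back. A sketch of the flip using the real one-byte x86 short-jump opcodes (jc 0x72, jnc 0x73, jz 0x74, jnz 0x75); locating jmp_address follows the delta arithmetic shown in the hunk:

#include <cstdint>

const uint8_t kJcShortOpcode = 0x72;
const uint8_t kJncShortOpcode = 0x73;
const uint8_t kJzShortOpcode = 0x74;
const uint8_t kJnzShortOpcode = 0x75;

// Enabling the check maps jc -> jz and jnc -> jnz; disabling inverts the
// mapping. Only the opcode byte changes; the jump offset is untouched.
void FlipSmiCheck(uint8_t* jmp_address, bool enable) {
  if (enable) {
    *jmp_address = (*jmp_address == kJcShortOpcode) ? kJzShortOpcode
                                                    : kJnzShortOpcode;
  } else {
    *jmp_address = (*jmp_address == kJzShortOpcode) ? kJcShortOpcode
                                                    : kJncShortOpcode;
  }
}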
if (*test_instruction_address != Assembler::kTestAlByte) { - ASSERT(*test_instruction_address == Assembler::kNopByte); + DCHECK(*test_instruction_address == Assembler::kNopByte); return; } @@ -1269,7 +1194,7 @@ void PatchInlinedSmiCode(Address address, InlinedSmiCheck check) { // jump-if-carry/not-carry to jump-if-zero/not-zero, whereas disabling is the // reverse operation of that. Address jmp_address = test_instruction_address - delta; - ASSERT((check == ENABLE_INLINED_SMI_CHECK) + DCHECK((check == ENABLE_INLINED_SMI_CHECK) ? (*jmp_address == Assembler::kJncShortOpcode || *jmp_address == Assembler::kJcShortOpcode) : (*jmp_address == Assembler::kJnzShortOpcode || diff --git a/deps/v8/src/ia32/lithium-codegen-ia32.cc b/deps/v8/src/ia32/lithium-codegen-ia32.cc index d2b4f2f7db2..245dcdc482a 100644 --- a/deps/v8/src/ia32/lithium-codegen-ia32.cc +++ b/deps/v8/src/ia32/lithium-codegen-ia32.cc @@ -2,29 +2,21 @@ // Use of this source code is governed by a BSD-style license that can be // found in the LICENSE file. -#include "v8.h" +#include "src/v8.h" #if V8_TARGET_ARCH_IA32 -#include "ia32/lithium-codegen-ia32.h" -#include "ic.h" -#include "code-stubs.h" -#include "deoptimizer.h" -#include "stub-cache.h" -#include "codegen.h" -#include "hydrogen-osr.h" +#include "src/code-stubs.h" +#include "src/codegen.h" +#include "src/deoptimizer.h" +#include "src/hydrogen-osr.h" +#include "src/ia32/lithium-codegen-ia32.h" +#include "src/ic.h" +#include "src/stub-cache.h" namespace v8 { namespace internal { - -static SaveFPRegsMode GetSaveFPRegsMode(Isolate* isolate) { - // We don't need to save floating point regs when generating the snapshot - return CpuFeatures::IsSafeForSnapshot(isolate, SSE2) ? kSaveFPRegs - : kDontSaveFPRegs; -} - - // When invoking builtins, we need to record the safepoint in the middle of // the invoke instruction sequence generated by the macro assembler. class SafepointGenerator V8_FINAL : public CallWrapper { @@ -54,7 +46,7 @@ class SafepointGenerator V8_FINAL : public CallWrapper { bool LCodeGen::GenerateCode() { LPhase phase("Z_Code generation", chunk()); - ASSERT(is_unused()); + DCHECK(is_unused()); status_ = GENERATING; // Open a frame scope to indicate that there is a frame on the stack. 
The @@ -78,7 +70,7 @@ bool LCodeGen::GenerateCode() { void LCodeGen::FinishCode(Handle<Code> code) { - ASSERT(is_done()); + DCHECK(is_done()); code->set_stack_slots(GetStackSlotCount()); code->set_safepoint_table_offset(safepoints_.GetCodeOffset()); if (code->is_optimized_code()) RegisterWeakObjectsInOptimizedCode(code); @@ -100,10 +92,9 @@ void LCodeGen::MakeSureStackPagesMapped(int offset) { void LCodeGen::SaveCallerDoubles() { - ASSERT(info()->saves_caller_doubles()); - ASSERT(NeedsEagerFrame()); + DCHECK(info()->saves_caller_doubles()); + DCHECK(NeedsEagerFrame()); Comment(";;; Save clobbered callee double registers"); - CpuFeatureScope scope(masm(), SSE2); int count = 0; BitVector* doubles = chunk()->allocated_double_registers(); BitVector::Iterator save_iterator(doubles); @@ -117,10 +108,9 @@ void LCodeGen::SaveCallerDoubles() { void LCodeGen::RestoreCallerDoubles() { - ASSERT(info()->saves_caller_doubles()); - ASSERT(NeedsEagerFrame()); + DCHECK(info()->saves_caller_doubles()); + DCHECK(NeedsEagerFrame()); Comment(";;; Restore clobbered callee double registers"); - CpuFeatureScope scope(masm(), SSE2); BitVector* doubles = chunk()->allocated_double_registers(); BitVector::Iterator save_iterator(doubles); int count = 0; @@ -134,7 +124,7 @@ void LCodeGen::RestoreCallerDoubles() { bool LCodeGen::GeneratePrologue() { - ASSERT(is_generating()); + DCHECK(is_generating()); if (info()->IsOptimizing()) { ProfileEntryHookStub::MaybeCallEntryHook(masm_); @@ -161,7 +151,7 @@ bool LCodeGen::GeneratePrologue() { __ j(not_equal, &ok, Label::kNear); __ mov(ecx, GlobalObjectOperand()); - __ mov(ecx, FieldOperand(ecx, GlobalObject::kGlobalReceiverOffset)); + __ mov(ecx, FieldOperand(ecx, GlobalObject::kGlobalProxyOffset)); __ mov(Operand(esp, receiver_offset), ecx); @@ -196,9 +186,13 @@ bool LCodeGen::GeneratePrologue() { info()->set_prologue_offset(masm_->pc_offset()); if (NeedsEagerFrame()) { - ASSERT(!frame_is_built_); + DCHECK(!frame_is_built_); frame_is_built_ = true; - __ Prologue(info()->IsStub() ? BUILD_STUB_FRAME : BUILD_FUNCTION_FRAME); + if (info()->IsStub()) { + __ StubPrologue(); + } else { + __ Prologue(info()->IsCodePreAgingActive()); + } info()->AddNoFrameRange(0, masm_->pc_offset()); } @@ -211,7 +205,7 @@ bool LCodeGen::GeneratePrologue() { // Reserve space for the stack slots needed by the code. int slots = GetStackSlotCount(); - ASSERT(slots != 0 || !info()->IsOptimizing()); + DCHECK(slots != 0 || !info()->IsOptimizing()); if (slots > 0) { if (slots == 1) { if (dynamic_frame_alignment_) { @@ -253,22 +247,23 @@ bool LCodeGen::GeneratePrologue() { } } - if (info()->saves_caller_doubles() && CpuFeatures::IsSupported(SSE2)) { - SaveCallerDoubles(); - } + if (info()->saves_caller_doubles()) SaveCallerDoubles(); } // Possibly allocate a local context. int heap_slots = info_->num_heap_slots() - Context::MIN_CONTEXT_SLOTS; if (heap_slots > 0) { Comment(";;; Allocate local context"); + bool need_write_barrier = true; // Argument to NewContext is the function, which is still in edi. if (heap_slots <= FastNewContextStub::kMaximumSlots) { FastNewContextStub stub(isolate(), heap_slots); __ CallStub(&stub); + // Result of FastNewContextStub is always in new space. + need_write_barrier = false; } else { __ push(edi); - __ CallRuntime(Runtime::kHiddenNewFunctionContext, 1); + __ CallRuntime(Runtime::kNewFunctionContext, 1); } RecordSafepoint(Safepoint::kNoLazyDeopt); // Context is returned in eax. It replaces the context passed to us. 
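The hunk that follows introduces need_write_barrier: a context freshly allocated by FastNewContextStub is guaranteed to be in new space, and the generational write barrier only has to record stores that create old-to-new pointers, so such stores can skip RecordWriteContextSlot. A sketch of the underlying filter, built on an assumed toy InNewSpace predicate:

#include <cstdint>

// Toy heap model: addresses below a boundary count as new space.
struct Heap {
  uintptr_t new_space_limit;
  bool InNewSpace(const void* p) const {
    return reinterpret_cast<uintptr_t>(p) < new_space_limit;
  }
};

// Only an old-space host pointing at a new-space value needs to be
// remembered; when the host itself is in new space, the barrier is a
// no-op, which is what need_write_barrier == false exploits.
bool NeedsRememberedSetEntry(const Heap& heap, const void* host,
                             const void* value) {
  return !heap.InNewSpace(host) && heap.InNewSpace(value);
}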
@@ -289,11 +284,18 @@ bool LCodeGen::GeneratePrologue() { int context_offset = Context::SlotOffset(var->index()); __ mov(Operand(esi, context_offset), eax); // Update the write barrier. This clobbers eax and ebx. - __ RecordWriteContextSlot(esi, - context_offset, - eax, - ebx, - kDontSaveFPRegs); + if (need_write_barrier) { + __ RecordWriteContextSlot(esi, + context_offset, + eax, + ebx, + kDontSaveFPRegs); + } else if (FLAG_debug_code) { + Label done; + __ JumpIfInNewSpace(esi, eax, &done, Label::kNear); + __ Abort(kExpectedNewSpaceObject); + __ bind(&done); + } } } Comment(";;; End allocate local context"); @@ -355,7 +357,7 @@ void LCodeGen::GenerateOsrPrologue() { // Adjust the frame size, subsuming the unoptimized frame into the // optimized frame. int slots = GetStackSlotCount() - graph()->osr()->UnoptimizedFrameSlots(); - ASSERT(slots >= 1); + DCHECK(slots >= 1); __ sub(esp, Immediate((slots - 1) * kPointerSize)); } @@ -367,27 +369,10 @@ void LCodeGen::GenerateBodyInstructionPre(LInstruction* instr) { if (!instr->IsLazyBailout() && !instr->IsGap()) { safepoints_.BumpLastLazySafepointIndex(); } - if (!CpuFeatures::IsSupported(SSE2)) FlushX87StackIfNecessary(instr); } -void LCodeGen::GenerateBodyInstructionPost(LInstruction* instr) { - if (!CpuFeatures::IsSupported(SSE2)) { - if (instr->IsGoto()) { - x87_stack_.LeavingBlock(current_block_, LGoto::cast(instr)); - } else if (FLAG_debug_code && FLAG_enable_slow_asserts && - !instr->IsGap() && !instr->IsReturn()) { - if (instr->ClobbersDoubleRegisters(isolate())) { - if (instr->HasDoubleRegisterResult()) { - ASSERT_EQ(1, x87_stack_.depth()); - } else { - ASSERT_EQ(0, x87_stack_.depth()); - } - } - __ VerifyX87StackDepth(x87_stack_.depth()); - } - } -} +void LCodeGen::GenerateBodyInstructionPost(LInstruction* instr) { } bool LCodeGen::GenerateJumpTable() { @@ -406,7 +391,7 @@ bool LCodeGen::GenerateJumpTable() { Comment(";;; jump table entry %d: deoptimization bailout %d.", i, id); } if (jump_table_[i].needs_frame) { - ASSERT(!info()->saves_caller_doubles()); + DCHECK(!info()->saves_caller_doubles()); __ push(Immediate(ExternalReference::ForDeoptEntry(entry))); if (needs_frame.is_bound()) { __ jmp(&needs_frame); @@ -416,7 +401,7 @@ bool LCodeGen::GenerateJumpTable() { // This variant of deopt can only be used with stubs. Since we don't // have a function pointer to install in the stack frame that we're // building, install a special marker there instead. - ASSERT(info()->IsStub()); + DCHECK(info()->IsStub()); __ push(Immediate(Smi::FromInt(StackFrame::STUB))); // Push a PC inside the function so that the deopt code can find where // the deopt comes from. It doesn't have to be the precise return @@ -433,9 +418,7 @@ bool LCodeGen::GenerateJumpTable() { __ ret(0); // Call the continuation without clobbering registers. 
} } else { - if (info()->saves_caller_doubles() && CpuFeatures::IsSupported(SSE2)) { - RestoreCallerDoubles(); - } + if (info()->saves_caller_doubles()) RestoreCallerDoubles(); __ call(entry, RelocInfo::RUNTIME_ENTRY); } } @@ -444,12 +427,10 @@ bool LCodeGen::GenerateJumpTable() { bool LCodeGen::GenerateDeferredCode() { - ASSERT(is_generating()); + DCHECK(is_generating()); if (deferred_.length() > 0) { for (int i = 0; !is_aborted() && i < deferred_.length(); i++) { LDeferredCode* code = deferred_[i]; - X87Stack copy(code->x87_stack()); - x87_stack_ = copy; HValue* value = instructions_->at(code->instruction_index())->hydrogen_value(); @@ -464,8 +445,8 @@ bool LCodeGen::GenerateDeferredCode() { __ bind(code->entry()); if (NeedsDeferredFrame()) { Comment(";;; Build frame"); - ASSERT(!frame_is_built_); - ASSERT(info()->IsStub()); + DCHECK(!frame_is_built_); + DCHECK(info()->IsStub()); frame_is_built_ = true; // Build the frame in such a way that esi isn't trashed. __ push(ebp); // Caller's frame pointer. @@ -478,7 +459,7 @@ bool LCodeGen::GenerateDeferredCode() { if (NeedsDeferredFrame()) { __ bind(code->done()); Comment(";;; Destroy frame"); - ASSERT(frame_is_built_); + DCHECK(frame_is_built_); frame_is_built_ = false; __ mov(esp, ebp); __ pop(ebp); @@ -495,7 +476,7 @@ bool LCodeGen::GenerateDeferredCode() { bool LCodeGen::GenerateSafepointTable() { - ASSERT(is_done()); + DCHECK(is_done()); if (!info()->IsStub()) { // For lazy deoptimization we need space to patch a call after every call. // Ensure there is always space for such patching, even if the code ends @@ -515,234 +496,19 @@ Register LCodeGen::ToRegister(int index) const { } -X87Register LCodeGen::ToX87Register(int index) const { - return X87Register::FromAllocationIndex(index); -} - - XMMRegister LCodeGen::ToDoubleRegister(int index) const { return XMMRegister::FromAllocationIndex(index); } -void LCodeGen::X87LoadForUsage(X87Register reg) { - ASSERT(x87_stack_.Contains(reg)); - x87_stack_.Fxch(reg); - x87_stack_.pop(); -} - - -void LCodeGen::X87LoadForUsage(X87Register reg1, X87Register reg2) { - ASSERT(x87_stack_.Contains(reg1)); - ASSERT(x87_stack_.Contains(reg2)); - x87_stack_.Fxch(reg1, 1); - x87_stack_.Fxch(reg2); - x87_stack_.pop(); - x87_stack_.pop(); -} - - -void LCodeGen::X87Stack::Fxch(X87Register reg, int other_slot) { - ASSERT(is_mutable_); - ASSERT(Contains(reg) && stack_depth_ > other_slot); - int i = ArrayIndex(reg); - int st = st2idx(i); - if (st != other_slot) { - int other_i = st2idx(other_slot); - X87Register other = stack_[other_i]; - stack_[other_i] = reg; - stack_[i] = other; - if (st == 0) { - __ fxch(other_slot); - } else if (other_slot == 0) { - __ fxch(st); - } else { - __ fxch(st); - __ fxch(other_slot); - __ fxch(st); - } - } -} - - -int LCodeGen::X87Stack::st2idx(int pos) { - return stack_depth_ - pos - 1; -} - - -int LCodeGen::X87Stack::ArrayIndex(X87Register reg) { - for (int i = 0; i < stack_depth_; i++) { - if (stack_[i].is(reg)) return i; - } - UNREACHABLE(); - return -1; -} - - -bool LCodeGen::X87Stack::Contains(X87Register reg) { - for (int i = 0; i < stack_depth_; i++) { - if (stack_[i].is(reg)) return true; - } - return false; -} - - -void LCodeGen::X87Stack::Free(X87Register reg) { - ASSERT(is_mutable_); - ASSERT(Contains(reg)); - int i = ArrayIndex(reg); - int st = st2idx(i); - if (st > 0) { - // keep track of how fstp(i) changes the order of elements - int tos_i = st2idx(0); - stack_[i] = stack_[tos_i]; - } - pop(); - __ fstp(st); -} - - -void LCodeGen::X87Mov(X87Register dst, Operand src, 
X87OperandType opts) { - if (x87_stack_.Contains(dst)) { - x87_stack_.Fxch(dst); - __ fstp(0); - } else { - x87_stack_.push(dst); - } - X87Fld(src, opts); -} - - -void LCodeGen::X87Fld(Operand src, X87OperandType opts) { - ASSERT(!src.is_reg_only()); - switch (opts) { - case kX87DoubleOperand: - __ fld_d(src); - break; - case kX87FloatOperand: - __ fld_s(src); - break; - case kX87IntOperand: - __ fild_s(src); - break; - default: - UNREACHABLE(); - } -} - - -void LCodeGen::X87Mov(Operand dst, X87Register src, X87OperandType opts) { - ASSERT(!dst.is_reg_only()); - x87_stack_.Fxch(src); - switch (opts) { - case kX87DoubleOperand: - __ fst_d(dst); - break; - case kX87IntOperand: - __ fist_s(dst); - break; - default: - UNREACHABLE(); - } -} - - -void LCodeGen::X87Stack::PrepareToWrite(X87Register reg) { - ASSERT(is_mutable_); - if (Contains(reg)) { - Free(reg); - } - // Mark this register as the next register to write to - stack_[stack_depth_] = reg; -} - - -void LCodeGen::X87Stack::CommitWrite(X87Register reg) { - ASSERT(is_mutable_); - // Assert the reg is prepared to write, but not on the virtual stack yet - ASSERT(!Contains(reg) && stack_[stack_depth_].is(reg) && - stack_depth_ < X87Register::kNumAllocatableRegisters); - stack_depth_++; -} - - -void LCodeGen::X87PrepareBinaryOp( - X87Register left, X87Register right, X87Register result) { - // You need to use DefineSameAsFirst for x87 instructions - ASSERT(result.is(left)); - x87_stack_.Fxch(right, 1); - x87_stack_.Fxch(left); -} - - -void LCodeGen::X87Stack::FlushIfNecessary(LInstruction* instr, LCodeGen* cgen) { - if (stack_depth_ > 0 && instr->ClobbersDoubleRegisters(isolate())) { - bool double_inputs = instr->HasDoubleRegisterInput(); - - // Flush stack from tos down, since FreeX87() will mess with tos - for (int i = stack_depth_-1; i >= 0; i--) { - X87Register reg = stack_[i]; - // Skip registers which contain the inputs for the next instruction - // when flushing the stack - if (double_inputs && instr->IsDoubleInput(reg, cgen)) { - continue; - } - Free(reg); - if (i < stack_depth_-1) i++; - } - } - if (instr->IsReturn()) { - while (stack_depth_ > 0) { - __ fstp(0); - stack_depth_--; - } - if (FLAG_debug_code && FLAG_enable_slow_asserts) __ VerifyX87StackDepth(0); - } -} - - -void LCodeGen::X87Stack::LeavingBlock(int current_block_id, LGoto* goto_instr) { - ASSERT(stack_depth_ <= 1); - // If ever used for new stubs producing two pairs of doubles joined into two - // phis this assert hits. That situation is not handled, since the two stacks - // might have st0 and st1 swapped. - if (current_block_id + 1 != goto_instr->block_id()) { - // If we have a value on the x87 stack on leaving a block, it must be a - // phi input. If the next block we compile is not the join block, we have - // to discard the stack state. - stack_depth_ = 0; - } -} - - -void LCodeGen::EmitFlushX87ForDeopt() { - // The deoptimizer does not support X87 Registers. But as long as we - // deopt from a stub its not a problem, since we will re-materialize the - // original stub inputs, which can't be double registers. 
- ASSERT(info()->IsStub()); - if (FLAG_debug_code && FLAG_enable_slow_asserts) { - __ pushfd(); - __ VerifyX87StackDepth(x87_stack_.depth()); - __ popfd(); - } - for (int i = 0; i < x87_stack_.depth(); i++) __ fstp(0); -} - - Register LCodeGen::ToRegister(LOperand* op) const { - ASSERT(op->IsRegister()); + DCHECK(op->IsRegister()); return ToRegister(op->index()); } -X87Register LCodeGen::ToX87Register(LOperand* op) const { - ASSERT(op->IsDoubleRegister()); - return ToX87Register(op->index()); -} - - XMMRegister LCodeGen::ToDoubleRegister(LOperand* op) const { - ASSERT(op->IsDoubleRegister()); + DCHECK(op->IsDoubleRegister()); return ToDoubleRegister(op->index()); } @@ -757,28 +523,28 @@ int32_t LCodeGen::ToRepresentation(LConstantOperand* op, HConstant* constant = chunk_->LookupConstant(op); int32_t value = constant->Integer32Value(); if (r.IsInteger32()) return value; - ASSERT(r.IsSmiOrTagged()); + DCHECK(r.IsSmiOrTagged()); return reinterpret_cast<int32_t>(Smi::FromInt(value)); } Handle<Object> LCodeGen::ToHandle(LConstantOperand* op) const { HConstant* constant = chunk_->LookupConstant(op); - ASSERT(chunk_->LookupLiteralRepresentation(op).IsSmiOrTagged()); + DCHECK(chunk_->LookupLiteralRepresentation(op).IsSmiOrTagged()); return constant->handle(isolate()); } double LCodeGen::ToDouble(LConstantOperand* op) const { HConstant* constant = chunk_->LookupConstant(op); - ASSERT(constant->HasDoubleValue()); + DCHECK(constant->HasDoubleValue()); return constant->DoubleValue(); } ExternalReference LCodeGen::ToExternalReference(LConstantOperand* op) const { HConstant* constant = chunk_->LookupConstant(op); - ASSERT(constant->HasExternalReferenceValue()); + DCHECK(constant->HasExternalReferenceValue()); return constant->ExternalReferenceValue(); } @@ -794,7 +560,7 @@ bool LCodeGen::IsSmi(LConstantOperand* op) const { static int ArgumentsOffsetWithoutFrame(int index) { - ASSERT(index < 0); + DCHECK(index < 0); return -(index + 1) * kPointerSize + kPCOnStackSize; } @@ -802,7 +568,7 @@ static int ArgumentsOffsetWithoutFrame(int index) { Operand LCodeGen::ToOperand(LOperand* op) const { if (op->IsRegister()) return Operand(ToRegister(op)); if (op->IsDoubleRegister()) return Operand(ToDoubleRegister(op)); - ASSERT(op->IsStackSlot() || op->IsDoubleStackSlot()); + DCHECK(op->IsStackSlot() || op->IsDoubleStackSlot()); if (NeedsEagerFrame()) { return Operand(ebp, StackSlotOffset(op->index())); } else { @@ -814,7 +580,7 @@ Operand LCodeGen::ToOperand(LOperand* op) const { Operand LCodeGen::HighOperand(LOperand* op) { - ASSERT(op->IsDoubleStackSlot()); + DCHECK(op->IsDoubleStackSlot()); if (NeedsEagerFrame()) { return Operand(ebp, StackSlotOffset(op->index()) + kPointerSize); } else { @@ -849,13 +615,13 @@ void LCodeGen::WriteTranslation(LEnvironment* environment, translation->BeginConstructStubFrame(closure_id, translation_size); break; case JS_GETTER: - ASSERT(translation_size == 1); - ASSERT(height == 0); + DCHECK(translation_size == 1); + DCHECK(height == 0); translation->BeginGetterStubFrame(closure_id); break; case JS_SETTER: - ASSERT(translation_size == 2); - ASSERT(height == 0); + DCHECK(translation_size == 2); + DCHECK(height == 0); translation->BeginSetterStubFrame(closure_id); break; case ARGUMENTS_ADAPTOR: @@ -955,7 +721,7 @@ void LCodeGen::CallCodeGeneric(Handle<Code> code, RelocInfo::Mode mode, LInstruction* instr, SafepointMode safepoint_mode) { - ASSERT(instr != NULL); + DCHECK(instr != NULL); __ call(code, mode); RecordSafepointWithLazyDeopt(instr, safepoint_mode); @@ -979,14 +745,14 @@ void 
LCodeGen::CallRuntime(const Runtime::Function* fun, int argc, LInstruction* instr, SaveFPRegsMode save_doubles) { - ASSERT(instr != NULL); - ASSERT(instr->HasPointerMap()); + DCHECK(instr != NULL); + DCHECK(instr->HasPointerMap()); __ CallRuntime(fun, argc, save_doubles); RecordSafepointWithLazyDeopt(instr, RECORD_SIMPLE_SAFEPOINT); - ASSERT(info()->is_calling()); + DCHECK(info()->is_calling()); } @@ -1016,7 +782,7 @@ void LCodeGen::CallRuntimeFromDeferred(Runtime::FunctionId id, RecordSafepointWithRegisters( instr->pointer_map(), argc, Safepoint::kNoLazyDeopt); - ASSERT(info()->is_calling()); + DCHECK(info()->is_calling()); } @@ -1061,9 +827,9 @@ void LCodeGen::DeoptimizeIf(Condition cc, LEnvironment* environment, Deoptimizer::BailoutType bailout_type) { RegisterEnvironmentForDeoptimization(environment, Safepoint::kNoLazyDeopt); - ASSERT(environment->HasBeenRegistered()); + DCHECK(environment->HasBeenRegistered()); int id = environment->deoptimization_index(); - ASSERT(info()->IsOptimizing() || info()->IsStub()); + DCHECK(info()->IsOptimizing() || info()->IsStub()); Address entry = Deoptimizer::GetDeoptimizationEntry(isolate(), id, bailout_type); if (entry == NULL) { @@ -1084,7 +850,7 @@ void LCodeGen::DeoptimizeIf(Condition cc, __ mov(Operand::StaticVariable(count), eax); __ pop(eax); __ popfd(); - ASSERT(frame_is_built_); + DCHECK(frame_is_built_); __ call(entry, RelocInfo::RUNTIME_ENTRY); __ bind(&no_deopt); __ mov(Operand::StaticVariable(count), eax); @@ -1092,17 +858,6 @@ void LCodeGen::DeoptimizeIf(Condition cc, __ popfd(); } - // Before Instructions which can deopt, we normally flush the x87 stack. But - // we can have inputs or outputs of the current instruction on the stack, - // thus we need to flush them here from the physical stack to leave it in a - // consistent state. 
- if (x87_stack_.depth() > 0) { - Label done; - if (cc != no_condition) __ j(NegateCondition(cc), &done, Label::kNear); - EmitFlushX87ForDeopt(); - __ bind(&done); - } - if (info()->ShouldTrapOnDeopt()) { Label done; if (cc != no_condition) __ j(NegateCondition(cc), &done, Label::kNear); @@ -1110,7 +865,7 @@ void LCodeGen::DeoptimizeIf(Condition cc, __ bind(&done); } - ASSERT(info()->IsStub() || frame_is_built_); + DCHECK(info()->IsStub() || frame_is_built_); if (cc == no_condition && frame_is_built_) { __ call(entry, RelocInfo::RUNTIME_ENTRY); } else { @@ -1147,7 +902,7 @@ void LCodeGen::PopulateDeoptimizationData(Handle<Code> code) { int length = deoptimizations_.length(); if (length == 0) return; Handle<DeoptimizationInputData> data = - DeoptimizationInputData::New(isolate(), length, TENURED); + DeoptimizationInputData::New(isolate(), length, 0, TENURED); Handle<ByteArray> translations = translations_.CreateByteArray(isolate()->factory()); @@ -1198,7 +953,7 @@ int LCodeGen::DefineDeoptimizationLiteral(Handle<Object> literal) { void LCodeGen::PopulateDeoptimizationLiteralsWithInlinedFunctions() { - ASSERT(deoptimization_literals_.length() == 0); + DCHECK(deoptimization_literals_.length() == 0); const ZoneList<Handle<JSFunction> >* inlined_closures = chunk()->inlined_closures(); @@ -1218,7 +973,7 @@ void LCodeGen::RecordSafepointWithLazyDeopt( if (safepoint_mode == RECORD_SIMPLE_SAFEPOINT) { RecordSafepoint(instr->pointer_map(), Safepoint::kLazyDeopt); } else { - ASSERT(safepoint_mode == RECORD_SAFEPOINT_WITH_REGISTERS_AND_NO_ARGUMENTS); + DCHECK(safepoint_mode == RECORD_SAFEPOINT_WITH_REGISTERS_AND_NO_ARGUMENTS); RecordSafepointWithRegisters( instr->pointer_map(), 0, Safepoint::kLazyDeopt); } @@ -1230,7 +985,7 @@ void LCodeGen::RecordSafepoint( Safepoint::Kind kind, int arguments, Safepoint::DeoptMode deopt_mode) { - ASSERT(kind == expected_safepoint_kind_); + DCHECK(kind == expected_safepoint_kind_); const ZoneList<LOperand*>* operands = pointers->GetNormalizedOperands(); Safepoint safepoint = safepoints_.DefineSafepoint(masm(), kind, arguments, deopt_mode); @@ -1317,8 +1072,8 @@ void LCodeGen::DoParameter(LParameter* instr) { void LCodeGen::DoCallStub(LCallStub* instr) { - ASSERT(ToRegister(instr->context()).is(esi)); - ASSERT(ToRegister(instr->result()).is(eax)); + DCHECK(ToRegister(instr->context()).is(esi)); + DCHECK(ToRegister(instr->result()).is(eax)); switch (instr->hydrogen()->major_key()) { case CodeStub::RegExpExec: { RegExpExecStub stub(isolate()); @@ -1349,7 +1104,7 @@ void LCodeGen::DoUnknownOSRValue(LUnknownOSRValue* instr) { void LCodeGen::DoModByPowerOf2I(LModByPowerOf2I* instr) { Register dividend = ToRegister(instr->dividend()); int32_t divisor = instr->divisor(); - ASSERT(dividend.is(ToRegister(instr->result()))); + DCHECK(dividend.is(ToRegister(instr->result()))); // Theoretically, a variation of the branch-free code for integer division by // a power of 2 (calculating the remainder via an additional multiplication @@ -1382,7 +1137,7 @@ void LCodeGen::DoModByPowerOf2I(LModByPowerOf2I* instr) { void LCodeGen::DoModByConstI(LModByConstI* instr) { Register dividend = ToRegister(instr->dividend()); int32_t divisor = instr->divisor(); - ASSERT(ToRegister(instr->result()).is(eax)); + DCHECK(ToRegister(instr->result()).is(eax)); if (divisor == 0) { DeoptimizeIf(no_condition, instr->environment()); @@ -1410,12 +1165,12 @@ void LCodeGen::DoModI(LModI* instr) { HMod* hmod = instr->hydrogen(); Register left_reg = ToRegister(instr->left()); - ASSERT(left_reg.is(eax)); + 
DCHECK(left_reg.is(eax)); Register right_reg = ToRegister(instr->right()); - ASSERT(!right_reg.is(eax)); - ASSERT(!right_reg.is(edx)); + DCHECK(!right_reg.is(eax)); + DCHECK(!right_reg.is(edx)); Register result_reg = ToRegister(instr->result()); - ASSERT(result_reg.is(edx)); + DCHECK(result_reg.is(edx)); Label done; // Check for x % 0, idiv would signal a divide error. We have to @@ -1465,8 +1220,8 @@ void LCodeGen::DoDivByPowerOf2I(LDivByPowerOf2I* instr) { Register dividend = ToRegister(instr->dividend()); int32_t divisor = instr->divisor(); Register result = ToRegister(instr->result()); - ASSERT(divisor == kMinInt || IsPowerOf2(Abs(divisor))); - ASSERT(!result.is(dividend)); + DCHECK(divisor == kMinInt || IsPowerOf2(Abs(divisor))); + DCHECK(!result.is(dividend)); // Check for (0 / -x) that will produce negative zero. HDiv* hdiv = instr->hydrogen(); @@ -1502,7 +1257,7 @@ void LCodeGen::DoDivByPowerOf2I(LDivByPowerOf2I* instr) { void LCodeGen::DoDivByConstI(LDivByConstI* instr) { Register dividend = ToRegister(instr->dividend()); int32_t divisor = instr->divisor(); - ASSERT(ToRegister(instr->result()).is(edx)); + DCHECK(ToRegister(instr->result()).is(edx)); if (divisor == 0) { DeoptimizeIf(no_condition, instr->environment()); @@ -1534,11 +1289,11 @@ void LCodeGen::DoDivI(LDivI* instr) { Register dividend = ToRegister(instr->dividend()); Register divisor = ToRegister(instr->divisor()); Register remainder = ToRegister(instr->temp()); - ASSERT(dividend.is(eax)); - ASSERT(remainder.is(edx)); - ASSERT(ToRegister(instr->result()).is(eax)); - ASSERT(!divisor.is(eax)); - ASSERT(!divisor.is(edx)); + DCHECK(dividend.is(eax)); + DCHECK(remainder.is(edx)); + DCHECK(ToRegister(instr->result()).is(eax)); + DCHECK(!divisor.is(eax)); + DCHECK(!divisor.is(edx)); // Check for x / 0. if (hdiv->CheckFlag(HValue::kCanBeDivByZero)) { @@ -1581,7 +1336,7 @@ void LCodeGen::DoDivI(LDivI* instr) { void LCodeGen::DoFlooringDivByPowerOf2I(LFlooringDivByPowerOf2I* instr) { Register dividend = ToRegister(instr->dividend()); int32_t divisor = instr->divisor(); - ASSERT(dividend.is(ToRegister(instr->result()))); + DCHECK(dividend.is(ToRegister(instr->result()))); // If the divisor is positive, things are easy: There can be no deopts and we // can simply do an arithmetic right shift. @@ -1598,14 +1353,17 @@ void LCodeGen::DoFlooringDivByPowerOf2I(LFlooringDivByPowerOf2I* instr) { DeoptimizeIf(zero, instr->environment()); } - if (!instr->hydrogen()->CheckFlag(HValue::kLeftCanBeMinInt)) { - __ sar(dividend, shift); + // Dividing by -1 is basically negation, unless we overflow. + if (divisor == -1) { + if (instr->hydrogen()->CheckFlag(HValue::kLeftCanBeMinInt)) { + DeoptimizeIf(overflow, instr->environment()); + } return; } - // Dividing by -1 is basically negation, unless we overflow. - if (divisor == -1) { - DeoptimizeIf(overflow, instr->environment()); + // If the negation could not overflow, simply shifting is OK. 
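// The DoModByPowerOf2I hunk above and the DoFlooringDivByPowerOf2I fast
// path just below rest on two bit tricks: remainder by 2^k is a mask
// (negate-mask-negate for negative dividends, so the sign follows the
// dividend as JS '%' requires), and *flooring* division by 2^k is a plain
// arithmetic right shift, which rounds toward -infinity where C's '/'
// truncates. A minimal sketch, assuming arithmetic shifts for signed ints:
#include <cassert>
#include <cstdint>

int32_t ModPowerOf2(int32_t dividend, uint32_t mask) {
  if (dividend >= 0) {
    return static_cast<int32_t>(static_cast<uint32_t>(dividend) & mask);
  }
  uint32_t magnitude = static_cast<uint32_t>(-static_cast<int64_t>(dividend));
  return -static_cast<int32_t>(magnitude & mask);  // Sign of the dividend.
}

int32_t FlooringDivPowerOf2(int32_t dividend, int shift) {
  return dividend >> shift;  // sar: floors instead of truncating.
}

int main() {
  assert(ModPowerOf2(13, 3) == 1 && ModPowerOf2(-13, 3) == -1);
  assert(FlooringDivPowerOf2(-7, 1) == -4);  // C's -7 / 2 is -3.
  assert(-7 / 2 == -3);                      // Truncating, for contrast.
  return 0;
}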
+ if (!instr->hydrogen()->CheckFlag(HValue::kLeftCanBeMinInt)) { + __ sar(dividend, shift); return; } @@ -1622,7 +1380,7 @@ void LCodeGen::DoFlooringDivByPowerOf2I(LFlooringDivByPowerOf2I* instr) { void LCodeGen::DoFlooringDivByConstI(LFlooringDivByConstI* instr) { Register dividend = ToRegister(instr->dividend()); int32_t divisor = instr->divisor(); - ASSERT(ToRegister(instr->result()).is(edx)); + DCHECK(ToRegister(instr->result()).is(edx)); if (divisor == 0) { DeoptimizeIf(no_condition, instr->environment()); @@ -1648,7 +1406,7 @@ void LCodeGen::DoFlooringDivByConstI(LFlooringDivByConstI* instr) { // In the general case we may need to adjust before and after the truncating // division to get a flooring division. Register temp = ToRegister(instr->temp3()); - ASSERT(!temp.is(dividend) && !temp.is(eax) && !temp.is(edx)); + DCHECK(!temp.is(dividend) && !temp.is(eax) && !temp.is(edx)); Label needs_adjustment, done; __ cmp(dividend, Immediate(0)); __ j(divisor > 0 ? less : greater, &needs_adjustment, Label::kNear); @@ -1671,11 +1429,11 @@ void LCodeGen::DoFlooringDivI(LFlooringDivI* instr) { Register divisor = ToRegister(instr->divisor()); Register remainder = ToRegister(instr->temp()); Register result = ToRegister(instr->result()); - ASSERT(dividend.is(eax)); - ASSERT(remainder.is(edx)); - ASSERT(result.is(eax)); - ASSERT(!divisor.is(eax)); - ASSERT(!divisor.is(edx)); + DCHECK(dividend.is(eax)); + DCHECK(remainder.is(edx)); + DCHECK(result.is(eax)); + DCHECK(!divisor.is(eax)); + DCHECK(!divisor.is(edx)); // Check for x / 0. if (hdiv->CheckFlag(HValue::kCanBeDivByZero)) { @@ -1805,8 +1563,8 @@ void LCodeGen::DoMulI(LMulI* instr) { void LCodeGen::DoBitI(LBitI* instr) { LOperand* left = instr->left(); LOperand* right = instr->right(); - ASSERT(left->Equals(instr->result())); - ASSERT(left->IsRegister()); + DCHECK(left->Equals(instr->result())); + DCHECK(left->IsRegister()); if (right->IsConstantOperand()) { int32_t right_operand = @@ -1852,10 +1610,10 @@ void LCodeGen::DoBitI(LBitI* instr) { void LCodeGen::DoShiftI(LShiftI* instr) { LOperand* left = instr->left(); LOperand* right = instr->right(); - ASSERT(left->Equals(instr->result())); - ASSERT(left->IsRegister()); + DCHECK(left->Equals(instr->result())); + DCHECK(left->IsRegister()); if (right->IsRegister()) { - ASSERT(ToRegister(right).is(ecx)); + DCHECK(ToRegister(right).is(ecx)); switch (instr->op()) { case Token::ROR: @@ -1900,11 +1658,11 @@ void LCodeGen::DoShiftI(LShiftI* instr) { } break; case Token::SHR: - if (shift_count == 0 && instr->can_deopt()) { + if (shift_count != 0) { + __ shr(ToRegister(left), shift_count); + } else if (instr->can_deopt()) { __ test(ToRegister(left), ToRegister(left)); DeoptimizeIf(sign, instr->environment()); - } else { - __ shr(ToRegister(left), shift_count); } break; case Token::SHL: @@ -1932,7 +1690,7 @@ void LCodeGen::DoShiftI(LShiftI* instr) { void LCodeGen::DoSubI(LSubI* instr) { LOperand* left = instr->left(); LOperand* right = instr->right(); - ASSERT(left->Equals(instr->result())); + DCHECK(left->Equals(instr->result())); if (right->IsConstantOperand()) { __ sub(ToOperand(left), @@ -1961,43 +1719,34 @@ void LCodeGen::DoConstantD(LConstantD* instr) { uint64_t int_val = BitCast<uint64_t, double>(v); int32_t lower = static_cast<int32_t>(int_val); int32_t upper = static_cast<int32_t>(int_val >> (kBitsPerInt)); - ASSERT(instr->result()->IsDoubleRegister()); - - if (!CpuFeatures::IsSafeForSnapshot(isolate(), SSE2)) { - __ push(Immediate(upper)); - __ push(Immediate(lower)); - X87Register reg = 
ToX87Register(instr->result()); - X87Mov(reg, Operand(esp, 0)); - __ add(Operand(esp), Immediate(kDoubleSize)); + DCHECK(instr->result()->IsDoubleRegister()); + + XMMRegister res = ToDoubleRegister(instr->result()); + if (int_val == 0) { + __ xorps(res, res); } else { - CpuFeatureScope scope1(masm(), SSE2); - XMMRegister res = ToDoubleRegister(instr->result()); - if (int_val == 0) { - __ xorps(res, res); - } else { - Register temp = ToRegister(instr->temp()); - if (CpuFeatures::IsSupported(SSE4_1)) { - CpuFeatureScope scope2(masm(), SSE4_1); - if (lower != 0) { - __ Move(temp, Immediate(lower)); - __ movd(res, Operand(temp)); - __ Move(temp, Immediate(upper)); - __ pinsrd(res, Operand(temp), 1); - } else { - __ xorps(res, res); - __ Move(temp, Immediate(upper)); - __ pinsrd(res, Operand(temp), 1); - } + Register temp = ToRegister(instr->temp()); + if (CpuFeatures::IsSupported(SSE4_1)) { + CpuFeatureScope scope2(masm(), SSE4_1); + if (lower != 0) { + __ Move(temp, Immediate(lower)); + __ movd(res, Operand(temp)); + __ Move(temp, Immediate(upper)); + __ pinsrd(res, Operand(temp), 1); } else { + __ xorps(res, res); __ Move(temp, Immediate(upper)); - __ movd(res, Operand(temp)); - __ psllq(res, 32); - if (lower != 0) { - XMMRegister xmm_scratch = double_scratch0(); - __ Move(temp, Immediate(lower)); - __ movd(xmm_scratch, Operand(temp)); - __ orps(res, xmm_scratch); - } + __ pinsrd(res, Operand(temp), 1); + } + } else { + __ Move(temp, Immediate(upper)); + __ movd(res, Operand(temp)); + __ psllq(res, 32); + if (lower != 0) { + XMMRegister xmm_scratch = double_scratch0(); + __ Move(temp, Immediate(lower)); + __ movd(xmm_scratch, Operand(temp)); + __ orps(res, xmm_scratch); } } } @@ -2013,13 +1762,6 @@ void LCodeGen::DoConstantT(LConstantT* instr) { Register reg = ToRegister(instr->result()); Handle<Object> object = instr->value(isolate()); AllowDeferredHandleDereference smi_check; - if (instr->hydrogen()->HasObjectMap()) { - Handle<Map> object_map = instr->hydrogen()->ObjectMap().handle(); - ASSERT(object->IsHeapObject()); - ASSERT(!object_map->is_stable() || - *object_map == Handle<HeapObject>::cast(object)->map()); - USE(object_map); - } __ LoadObject(reg, object); } @@ -2037,8 +1779,8 @@ void LCodeGen::DoDateField(LDateField* instr) { Register scratch = ToRegister(instr->temp()); Smi* index = instr->index(); Label runtime, done; - ASSERT(object.is(result)); - ASSERT(object.is(eax)); + DCHECK(object.is(result)); + DCHECK(object.is(eax)); __ test(object, Immediate(kSmiTagMask)); DeoptimizeIf(zero, instr->environment()); @@ -2133,12 +1875,12 @@ void LCodeGen::DoSeqStringSetChar(LSeqStringSetChar* instr) { if (instr->value()->IsConstantOperand()) { int value = ToRepresentation(LConstantOperand::cast(instr->value()), Representation::Integer32()); - ASSERT_LE(0, value); + DCHECK_LE(0, value); if (encoding == String::ONE_BYTE_ENCODING) { - ASSERT_LE(value, String::kMaxOneByteCharCode); + DCHECK_LE(value, String::kMaxOneByteCharCode); __ mov_b(operand, static_cast<int8_t>(value)); } else { - ASSERT_LE(value, String::kMaxUtf16CodeUnit); + DCHECK_LE(value, String::kMaxUtf16CodeUnit); __ mov_w(operand, static_cast<int16_t>(value)); } } else { @@ -2180,10 +1922,9 @@ void LCodeGen::DoAddI(LAddI* instr) { void LCodeGen::DoMathMinMax(LMathMinMax* instr) { - CpuFeatureScope scope(masm(), SSE2); LOperand* left = instr->left(); LOperand* right = instr->right(); - ASSERT(left->Equals(instr->result())); + DCHECK(left->Equals(instr->result())); HMathMinMax::Operation operation = instr->hydrogen()->operation(); 
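// DoConstantD above materializes a double constant by splitting its
// IEEE-754 bit pattern into 32-bit halves and reassembling them in an XMM
// register (movd + pinsrd on SSE4.1, otherwise movd/psllq/orps). A portable
// sketch of that split-and-rebuild, using memcpy where the original uses
// BitCast:
#include <cassert>
#include <cstdint>
#include <cstring>

int main() {
  double v = 1.5;
  uint64_t int_val;
  std::memcpy(&int_val, &v, sizeof v);
  uint32_t lower = static_cast<uint32_t>(int_val);
  uint32_t upper = static_cast<uint32_t>(int_val >> 32);

  // The psllq/orps path in effect computes (upper << 32) | lower.
  uint64_t rebuilt = (static_cast<uint64_t>(upper) << 32) | lower;
  double out;
  std::memcpy(&out, &rebuilt, sizeof out);
  assert(out == v);
  return 0;
}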
if (instr->hydrogen()->representation().IsSmiOrInteger32()) { Label return_left; @@ -2206,7 +1947,7 @@ void LCodeGen::DoMathMinMax(LMathMinMax* instr) { } __ bind(&return_left); } else { - ASSERT(instr->hydrogen()->representation().IsDouble()); + DCHECK(instr->hydrogen()->representation().IsDouble()); Label check_nan_left, check_zero, return_left, return_right; Condition condition = (operation == HMathMinMax::kMathMin) ? below : above; XMMRegister left_reg = ToDoubleRegister(left); @@ -2243,97 +1984,54 @@ void LCodeGen::DoMathMinMax(LMathMinMax* instr) { void LCodeGen::DoArithmeticD(LArithmeticD* instr) { - if (CpuFeatures::IsSafeForSnapshot(isolate(), SSE2)) { - CpuFeatureScope scope(masm(), SSE2); - XMMRegister left = ToDoubleRegister(instr->left()); - XMMRegister right = ToDoubleRegister(instr->right()); - XMMRegister result = ToDoubleRegister(instr->result()); - switch (instr->op()) { - case Token::ADD: - __ addsd(left, right); - break; - case Token::SUB: - __ subsd(left, right); - break; - case Token::MUL: - __ mulsd(left, right); - break; - case Token::DIV: - __ divsd(left, right); - // Don't delete this mov. It may improve performance on some CPUs, - // when there is a mulsd depending on the result - __ movaps(left, left); - break; - case Token::MOD: { - // Pass two doubles as arguments on the stack. - __ PrepareCallCFunction(4, eax); - __ movsd(Operand(esp, 0 * kDoubleSize), left); - __ movsd(Operand(esp, 1 * kDoubleSize), right); - __ CallCFunction( - ExternalReference::mod_two_doubles_operation(isolate()), - 4); - - // Return value is in st(0) on ia32. - // Store it into the result register. - __ sub(Operand(esp), Immediate(kDoubleSize)); - __ fstp_d(Operand(esp, 0)); - __ movsd(result, Operand(esp, 0)); - __ add(Operand(esp), Immediate(kDoubleSize)); - break; - } - default: - UNREACHABLE(); - break; - } - } else { - X87Register left = ToX87Register(instr->left()); - X87Register right = ToX87Register(instr->right()); - X87Register result = ToX87Register(instr->result()); - if (instr->op() != Token::MOD) { - X87PrepareBinaryOp(left, right, result); - } - switch (instr->op()) { - case Token::ADD: - __ fadd_i(1); - break; - case Token::SUB: - __ fsub_i(1); - break; - case Token::MUL: - __ fmul_i(1); - break; - case Token::DIV: - __ fdiv_i(1); - break; - case Token::MOD: { - // Pass two doubles as arguments on the stack. - __ PrepareCallCFunction(4, eax); - X87Mov(Operand(esp, 1 * kDoubleSize), right); - X87Mov(Operand(esp, 0), left); - X87Free(right); - ASSERT(left.is(result)); - X87PrepareToWrite(result); - __ CallCFunction( - ExternalReference::mod_two_doubles_operation(isolate()), - 4); - - // Return value is in st(0) on ia32. - X87CommitWrite(result); - break; - } - default: - UNREACHABLE(); - break; + XMMRegister left = ToDoubleRegister(instr->left()); + XMMRegister right = ToDoubleRegister(instr->right()); + XMMRegister result = ToDoubleRegister(instr->result()); + switch (instr->op()) { + case Token::ADD: + __ addsd(left, right); + break; + case Token::SUB: + __ subsd(left, right); + break; + case Token::MUL: + __ mulsd(left, right); + break; + case Token::DIV: + __ divsd(left, right); + // Don't delete this mov. It may improve performance on some CPUs, + // when there is a mulsd depending on the result + __ movaps(left, left); + break; + case Token::MOD: { + // Pass two doubles as arguments on the stack. 
+ __ PrepareCallCFunction(4, eax); + __ movsd(Operand(esp, 0 * kDoubleSize), left); + __ movsd(Operand(esp, 1 * kDoubleSize), right); + __ CallCFunction( + ExternalReference::mod_two_doubles_operation(isolate()), + 4); + + // Return value is in st(0) on ia32. + // Store it into the result register. + __ sub(Operand(esp), Immediate(kDoubleSize)); + __ fstp_d(Operand(esp, 0)); + __ movsd(result, Operand(esp, 0)); + __ add(Operand(esp), Immediate(kDoubleSize)); + break; } + default: + UNREACHABLE(); + break; } } void LCodeGen::DoArithmeticT(LArithmeticT* instr) { - ASSERT(ToRegister(instr->context()).is(esi)); - ASSERT(ToRegister(instr->left()).is(edx)); - ASSERT(ToRegister(instr->right()).is(eax)); - ASSERT(ToRegister(instr->result()).is(eax)); + DCHECK(ToRegister(instr->context()).is(esi)); + DCHECK(ToRegister(instr->left()).is(edx)); + DCHECK(ToRegister(instr->right()).is(eax)); + DCHECK(ToRegister(instr->result()).is(eax)); BinaryOpICStub stub(isolate(), instr->op(), NO_OVERWRITE); CallCode(stub.GetCode(), RelocInfo::CODE_TARGET, instr); @@ -2378,37 +2076,35 @@ void LCodeGen::DoBranch(LBranch* instr) { __ test(reg, Operand(reg)); EmitBranch(instr, not_zero); } else if (r.IsDouble()) { - ASSERT(!info()->IsStub()); - CpuFeatureScope scope(masm(), SSE2); + DCHECK(!info()->IsStub()); XMMRegister reg = ToDoubleRegister(instr->value()); XMMRegister xmm_scratch = double_scratch0(); __ xorps(xmm_scratch, xmm_scratch); __ ucomisd(reg, xmm_scratch); EmitBranch(instr, not_equal); } else { - ASSERT(r.IsTagged()); + DCHECK(r.IsTagged()); Register reg = ToRegister(instr->value()); HType type = instr->hydrogen()->value()->type(); if (type.IsBoolean()) { - ASSERT(!info()->IsStub()); + DCHECK(!info()->IsStub()); __ cmp(reg, factory()->true_value()); EmitBranch(instr, equal); } else if (type.IsSmi()) { - ASSERT(!info()->IsStub()); + DCHECK(!info()->IsStub()); __ test(reg, Operand(reg)); EmitBranch(instr, not_equal); } else if (type.IsJSArray()) { - ASSERT(!info()->IsStub()); + DCHECK(!info()->IsStub()); EmitBranch(instr, no_condition); } else if (type.IsHeapNumber()) { - ASSERT(!info()->IsStub()); - CpuFeatureScope scope(masm(), SSE2); + DCHECK(!info()->IsStub()); XMMRegister xmm_scratch = double_scratch0(); __ xorps(xmm_scratch, xmm_scratch); __ ucomisd(xmm_scratch, FieldOperand(reg, HeapNumber::kValueOffset)); EmitBranch(instr, not_equal); } else if (type.IsString()) { - ASSERT(!info()->IsStub()); + DCHECK(!info()->IsStub()); __ cmp(FieldOperand(reg, String::kLengthOffset), Immediate(0)); EmitBranch(instr, not_equal); } else { @@ -2448,7 +2144,7 @@ void LCodeGen::DoBranch(LBranch* instr) { Register map = no_reg; // Keep the compiler happy. 
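// The Token::MOD arm above has no single SSE2 instruction, so the generated
// code calls out to a C helper (ExternalReference::mod_two_doubles_operation)
// whose ia32 ABI returns the result in st(0) -- hence the fstp_d shuffle
// through the stack into an XMM register. Semantically the helper behaves
// like std::fmod; a sketch of that behavior (an assumption about the helper,
// not its source):
#include <cassert>
#include <cmath>

double ModTwoDoubles(double left, double right) {
  return std::fmod(left, right);  // Result takes the dividend's sign, like JS %.
}

int main() {
  assert(ModTwoDoubles(5.5, 2.0) == 1.5);
  assert(ModTwoDoubles(-5.5, 2.0) == -1.5);
  return 0;
}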
if (expected.NeedsMap()) { map = ToRegister(instr->temp()); - ASSERT(!map.is(reg)); + DCHECK(!map.is(reg)); __ mov(map, FieldOperand(reg, HeapObject::kMapOffset)); if (expected.CanBeUndetectable()) { @@ -2488,16 +2184,9 @@ void LCodeGen::DoBranch(LBranch* instr) { __ cmp(FieldOperand(reg, HeapObject::kMapOffset), factory()->heap_number_map()); __ j(not_equal, ¬_heap_number, Label::kNear); - if (CpuFeatures::IsSafeForSnapshot(isolate(), SSE2)) { - CpuFeatureScope scope(masm(), SSE2); - XMMRegister xmm_scratch = double_scratch0(); - __ xorps(xmm_scratch, xmm_scratch); - __ ucomisd(xmm_scratch, FieldOperand(reg, HeapNumber::kValueOffset)); - } else { - __ fldz(); - __ fld_d(FieldOperand(reg, HeapNumber::kValueOffset)); - __ FCmp(); - } + XMMRegister xmm_scratch = double_scratch0(); + __ xorps(xmm_scratch, xmm_scratch); + __ ucomisd(xmm_scratch, FieldOperand(reg, HeapNumber::kValueOffset)); __ j(zero, instr->FalseLabel(chunk_)); __ jmp(instr->TrueLabel(chunk_)); __ bind(¬_heap_number); @@ -2520,10 +2209,6 @@ void LCodeGen::EmitGoto(int block) { } -void LCodeGen::DoClobberDoubles(LClobberDoubles* instr) { -} - - void LCodeGen::DoGoto(LGoto* instr) { EmitGoto(instr->block_id()); } @@ -2564,7 +2249,11 @@ Condition LCodeGen::TokenToCondition(Token::Value op, bool is_unsigned) { void LCodeGen::DoCompareNumericAndBranch(LCompareNumericAndBranch* instr) { LOperand* left = instr->left(); LOperand* right = instr->right(); - Condition cc = TokenToCondition(instr->op(), instr->is_double()); + bool is_unsigned = + instr->is_double() || + instr->hydrogen()->left()->CheckFlag(HInstruction::kUint32) || + instr->hydrogen()->right()->CheckFlag(HInstruction::kUint32); + Condition cc = TokenToCondition(instr->op(), is_unsigned); if (left->IsConstantOperand() && right->IsConstantOperand()) { // We can statically evaluate the comparison. @@ -2575,13 +2264,7 @@ void LCodeGen::DoCompareNumericAndBranch(LCompareNumericAndBranch* instr) { EmitGoto(next_block); } else { if (instr->is_double()) { - if (CpuFeatures::IsSafeForSnapshot(isolate(), SSE2)) { - CpuFeatureScope scope(masm(), SSE2); - __ ucomisd(ToDoubleRegister(left), ToDoubleRegister(right)); - } else { - X87LoadForUsage(ToX87Register(right), ToX87Register(left)); - __ FCmp(); - } + __ ucomisd(ToDoubleRegister(left), ToDoubleRegister(right)); // Don't base result on EFLAGS when a NaN is involved. Instead // jump to the false block. __ j(parity_even, instr->FalseLabel(chunk_)); @@ -2592,8 +2275,8 @@ void LCodeGen::DoCompareNumericAndBranch(LCompareNumericAndBranch* instr) { } else if (left->IsConstantOperand()) { __ cmp(ToOperand(right), ToImmediate(left, instr->hydrogen()->representation())); - // We transposed the operands. Reverse the condition. - cc = ReverseCondition(cc); + // We commuted the operands, so commute the condition. 
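// The switch from ReverseCondition to CommuteCondition here (comment above,
// call just below) is a rename that makes the intent precise: after swapping
// the operands of a compare, the condition must be commuted
// (less <-> greater), which is not the same as negating it
// (less <-> greater_equal). A two-line illustration:
#include <cassert>

int main() {
  int a = 1, b = 2;
  assert((a < b) == (b > a));   // Commuted condition: the same predicate.
  assert((a < b) != (a >= b));  // Negated condition: the opposite predicate.
  return 0;
}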
+ cc = CommuteCondition(cc); } else { __ cmp(ToRegister(left), ToOperand(right)); } @@ -2625,35 +2308,12 @@ void LCodeGen::DoCmpHoleAndBranch(LCmpHoleAndBranch* instr) { return; } - bool use_sse2 = CpuFeatures::IsSupported(SSE2); - if (use_sse2) { - CpuFeatureScope scope(masm(), SSE2); - XMMRegister input_reg = ToDoubleRegister(instr->object()); - __ ucomisd(input_reg, input_reg); - EmitFalseBranch(instr, parity_odd); - } else { - // Put the value to the top of stack - X87Register src = ToX87Register(instr->object()); - X87LoadForUsage(src); - __ fld(0); - __ fld(0); - __ FCmp(); - Label ok; - __ j(parity_even, &ok, Label::kNear); - __ fstp(0); - EmitFalseBranch(instr, no_condition); - __ bind(&ok); - } - + XMMRegister input_reg = ToDoubleRegister(instr->object()); + __ ucomisd(input_reg, input_reg); + EmitFalseBranch(instr, parity_odd); __ sub(esp, Immediate(kDoubleSize)); - if (use_sse2) { - CpuFeatureScope scope(masm(), SSE2); - XMMRegister input_reg = ToDoubleRegister(instr->object()); - __ movsd(MemOperand(esp, 0), input_reg); - } else { - __ fstp_d(MemOperand(esp, 0)); - } + __ movsd(MemOperand(esp, 0), input_reg); __ add(esp, Immediate(kDoubleSize)); int offset = sizeof(kHoleNanUpper32); @@ -2664,11 +2324,10 @@ void LCodeGen::DoCmpHoleAndBranch(LCmpHoleAndBranch* instr) { void LCodeGen::DoCompareMinusZeroAndBranch(LCompareMinusZeroAndBranch* instr) { Representation rep = instr->hydrogen()->value()->representation(); - ASSERT(!rep.IsInteger32()); + DCHECK(!rep.IsInteger32()); Register scratch = ToRegister(instr->temp()); if (rep.IsDouble()) { - CpuFeatureScope use_sse2(masm(), SSE2); XMMRegister value = ToDoubleRegister(instr->value()); XMMRegister xmm_scratch = double_scratch0(); __ xorps(xmm_scratch, xmm_scratch); @@ -2744,7 +2403,7 @@ void LCodeGen::DoIsStringAndBranch(LIsStringAndBranch* instr) { Register temp = ToRegister(instr->temp()); SmiCheck check_needed = - instr->hydrogen()->value()->IsHeapObject() + instr->hydrogen()->value()->type().IsHeapObject() ? 
OMIT_SMI_CHECK : INLINE_SMI_CHECK; Condition true_cond = EmitIsString( @@ -2766,7 +2425,7 @@ void LCodeGen::DoIsUndetectableAndBranch(LIsUndetectableAndBranch* instr) { Register input = ToRegister(instr->value()); Register temp = ToRegister(instr->temp()); - if (!instr->hydrogen()->value()->IsHeapObject()) { + if (!instr->hydrogen()->value()->type().IsHeapObject()) { STATIC_ASSERT(kSmiTag == 0); __ JumpIfSmi(input, instr->FalseLabel(chunk_)); } @@ -2814,7 +2473,7 @@ static InstanceType TestType(HHasInstanceTypeAndBranch* instr) { InstanceType from = instr->from(); InstanceType to = instr->to(); if (from == FIRST_TYPE) return to; - ASSERT(from == to || to == LAST_TYPE); + DCHECK(from == to || to == LAST_TYPE); return from; } @@ -2834,7 +2493,7 @@ void LCodeGen::DoHasInstanceTypeAndBranch(LHasInstanceTypeAndBranch* instr) { Register input = ToRegister(instr->value()); Register temp = ToRegister(instr->temp()); - if (!instr->hydrogen()->value()->IsHeapObject()) { + if (!instr->hydrogen()->value()->type().IsHeapObject()) { __ JumpIfSmi(input, instr->FalseLabel(chunk_)); } @@ -2872,9 +2531,9 @@ void LCodeGen::EmitClassOfTest(Label* is_true, Register input, Register temp, Register temp2) { - ASSERT(!input.is(temp)); - ASSERT(!input.is(temp2)); - ASSERT(!temp.is(temp2)); + DCHECK(!input.is(temp)); + DCHECK(!input.is(temp2)); + DCHECK(!temp.is(temp2)); __ JumpIfSmi(input, is_false); if (class_name->IsOneByteEqualTo(STATIC_ASCII_VECTOR("Function"))) { @@ -2952,7 +2611,7 @@ void LCodeGen::DoCmpMapAndBranch(LCmpMapAndBranch* instr) { void LCodeGen::DoInstanceOf(LInstanceOf* instr) { // Object and function are in fixed registers defined by the stub. - ASSERT(ToRegister(instr->context()).is(esi)); + DCHECK(ToRegister(instr->context()).is(esi)); InstanceofStub stub(isolate(), InstanceofStub::kArgsInRegisters); CallCode(stub.GetCode(), RelocInfo::CODE_TARGET, instr); @@ -2971,9 +2630,8 @@ void LCodeGen::DoInstanceOfKnownGlobal(LInstanceOfKnownGlobal* instr) { class DeferredInstanceOfKnownGlobal V8_FINAL : public LDeferredCode { public: DeferredInstanceOfKnownGlobal(LCodeGen* codegen, - LInstanceOfKnownGlobal* instr, - const X87Stack& x87_stack) - : LDeferredCode(codegen, x87_stack), instr_(instr) { } + LInstanceOfKnownGlobal* instr) + : LDeferredCode(codegen), instr_(instr) { } virtual void Generate() V8_OVERRIDE { codegen()->DoDeferredInstanceOfKnownGlobal(instr_, &map_check_); } @@ -2985,7 +2643,7 @@ void LCodeGen::DoInstanceOfKnownGlobal(LInstanceOfKnownGlobal* instr) { }; DeferredInstanceOfKnownGlobal* deferred; - deferred = new(zone()) DeferredInstanceOfKnownGlobal(this, instr, x87_stack_); + deferred = new(zone()) DeferredInstanceOfKnownGlobal(this, instr); Label done, false_result; Register object = ToRegister(instr->value()); @@ -3049,7 +2707,7 @@ void LCodeGen::DoDeferredInstanceOfKnownGlobal(LInstanceOfKnownGlobal* instr, // stack is used to pass the offset to the location of the map check to // the stub. Register temp = ToRegister(instr->temp()); - ASSERT(MacroAssembler::SafepointRegisterStackIndex(temp) == 0); + DCHECK(MacroAssembler::SafepointRegisterStackIndex(temp) == 0); __ LoadHeapObject(InstanceofStub::right(), instr->function()); static const int kAdditionalDelta = 13; int delta = masm_->SizeOfCodeGeneratedSince(map_check) + kAdditionalDelta; @@ -3105,7 +2763,7 @@ void LCodeGen::EmitReturn(LReturn* instr, bool dynamic_frame_alignment) { __ SmiUntag(reg); Register return_addr_reg = reg.is(ecx) ? 
ebx : ecx; if (dynamic_frame_alignment && FLAG_debug_code) { - ASSERT(extra_value_count == 2); + DCHECK(extra_value_count == 2); __ cmp(Operand(esp, reg, times_pointer_size, extra_value_count * kPointerSize), Immediate(kAlignmentZapValue)); @@ -3134,9 +2792,7 @@ void LCodeGen::DoReturn(LReturn* instr) { __ mov(esi, Operand(ebp, StandardFrameConstants::kContextOffset)); __ CallRuntime(Runtime::kTraceExit, 1); } - if (info()->saves_caller_doubles() && CpuFeatures::IsSupported(SSE2)) { - RestoreCallerDoubles(); - } + if (info()->saves_caller_doubles()) RestoreCallerDoubles(); if (dynamic_frame_alignment_) { // Fetch the state of the dynamic frame alignment. __ mov(edx, Operand(ebp, @@ -3175,11 +2831,20 @@ void LCodeGen::DoLoadGlobalCell(LLoadGlobalCell* instr) { void LCodeGen::DoLoadGlobalGeneric(LLoadGlobalGeneric* instr) { - ASSERT(ToRegister(instr->context()).is(esi)); - ASSERT(ToRegister(instr->global_object()).is(edx)); - ASSERT(ToRegister(instr->result()).is(eax)); - - __ mov(ecx, instr->name()); + DCHECK(ToRegister(instr->context()).is(esi)); + DCHECK(ToRegister(instr->global_object()).is(LoadIC::ReceiverRegister())); + DCHECK(ToRegister(instr->result()).is(eax)); + + __ mov(LoadIC::NameRegister(), instr->name()); + if (FLAG_vector_ics) { + Register vector = ToRegister(instr->temp_vector()); + DCHECK(vector.is(LoadIC::VectorRegister())); + __ mov(vector, instr->hydrogen()->feedback_vector()); + // No need to allocate this register. + DCHECK(LoadIC::SlotRegister().is(eax)); + __ mov(LoadIC::SlotRegister(), + Immediate(Smi::FromInt(instr->hydrogen()->slot()))); + } ContextualMode mode = instr->for_typeof() ? NOT_CONTEXTUAL : CONTEXTUAL; Handle<Code> ic = LoadIC::initialize_stub(isolate(), mode); CallCode(ic, RelocInfo::CODE_TARGET, instr); @@ -3243,7 +2908,7 @@ void LCodeGen::DoStoreContextSlot(LStoreContextSlot* instr) { __ mov(target, value); if (instr->hydrogen()->NeedsWriteBarrier()) { SmiCheck check_needed = - instr->hydrogen()->value()->IsHeapObject() + instr->hydrogen()->value()->type().IsHeapObject() ? 
OMIT_SMI_CHECK : INLINE_SMI_CHECK; Register temp = ToRegister(instr->temp()); int offset = Context::SlotOffset(instr->slot_index()); @@ -3251,7 +2916,7 @@ void LCodeGen::DoStoreContextSlot(LStoreContextSlot* instr) { offset, value, temp, - GetSaveFPRegsMode(isolate()), + kSaveFPRegs, EMIT_REMEMBERED_SET, check_needed); } @@ -3276,13 +2941,8 @@ void LCodeGen::DoLoadNamedField(LLoadNamedField* instr) { Register object = ToRegister(instr->object()); if (instr->hydrogen()->representation().IsDouble()) { - if (CpuFeatures::IsSupported(SSE2)) { - CpuFeatureScope scope(masm(), SSE2); - XMMRegister result = ToDoubleRegister(instr->result()); - __ movsd(result, FieldOperand(object, offset)); - } else { - X87Mov(ToX87Register(instr->result()), FieldOperand(object, offset)); - } + XMMRegister result = ToDoubleRegister(instr->result()); + __ movsd(result, FieldOperand(object, offset)); return; } @@ -3296,7 +2956,7 @@ void LCodeGen::DoLoadNamedField(LLoadNamedField* instr) { void LCodeGen::EmitPushTaggedOperand(LOperand* operand) { - ASSERT(!operand->IsDoubleRegister()); + DCHECK(!operand->IsDoubleRegister()); if (operand->IsConstantOperand()) { Handle<Object> object = ToHandle(LConstantOperand::cast(operand)); AllowDeferredHandleDereference smi_check; @@ -3314,11 +2974,20 @@ void LCodeGen::EmitPushTaggedOperand(LOperand* operand) { void LCodeGen::DoLoadNamedGeneric(LLoadNamedGeneric* instr) { - ASSERT(ToRegister(instr->context()).is(esi)); - ASSERT(ToRegister(instr->object()).is(edx)); - ASSERT(ToRegister(instr->result()).is(eax)); - - __ mov(ecx, instr->name()); + DCHECK(ToRegister(instr->context()).is(esi)); + DCHECK(ToRegister(instr->object()).is(LoadIC::ReceiverRegister())); + DCHECK(ToRegister(instr->result()).is(eax)); + + __ mov(LoadIC::NameRegister(), instr->name()); + if (FLAG_vector_ics) { + Register vector = ToRegister(instr->temp_vector()); + DCHECK(vector.is(LoadIC::VectorRegister())); + __ mov(vector, instr->hydrogen()->feedback_vector()); + // No need to allocate this register. + DCHECK(LoadIC::SlotRegister().is(eax)); + __ mov(LoadIC::SlotRegister(), + Immediate(Smi::FromInt(instr->hydrogen()->slot()))); + } Handle<Code> ic = LoadIC::initialize_stub(isolate(), NOT_CONTEXTUAL); CallCode(ic, RelocInfo::CODE_TARGET, instr); } @@ -3329,16 +2998,6 @@ void LCodeGen::DoLoadFunctionPrototype(LLoadFunctionPrototype* instr) { Register temp = ToRegister(instr->temp()); Register result = ToRegister(instr->result()); - // Check that the function really is a function. - __ CmpObjectType(function, JS_FUNCTION_TYPE, result); - DeoptimizeIf(not_equal, instr->environment()); - - // Check whether the function has an instance prototype. - Label non_instance; - __ test_b(FieldOperand(result, Map::kBitFieldOffset), - 1 << Map::kHasNonInstancePrototype); - __ j(not_zero, &non_instance, Label::kNear); - // Get the prototype or initial map from the function. __ mov(result, FieldOperand(function, JSFunction::kPrototypeOrInitialMapOffset)); @@ -3354,12 +3013,6 @@ void LCodeGen::DoLoadFunctionPrototype(LLoadFunctionPrototype* instr) { // Get the prototype from the initial map. __ mov(result, FieldOperand(result, Map::kPrototypeOffset)); - __ jmp(&done, Label::kNear); - - // Non-instance prototype: Fetch prototype from constructor field - // in the function's map. - __ bind(&non_instance); - __ mov(result, FieldOperand(result, Map::kConstructorOffset)); // All done. 
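// The keyed-load hunks just below test for the FixedDoubleArray hole by
// comparing only the upper 32 bits of the stored double against
// kHoleNanUpper32: the hole is encoded as a NaN with a distinguished high
// word, so no full 64-bit compare is needed. A sketch of that encoding; the
// bit value below is a placeholder assumption, not V8's exact constant:
#include <cassert>
#include <cstdint>
#include <cstring>

const uint32_t kSketchHoleNanUpper32 = 0x7FF7FFFF;  // Hypothetical sentinel.

double MakeHole() {
  uint64_t bits =
      (static_cast<uint64_t>(kSketchHoleNanUpper32) << 32) | 0xFFFFFFFFu;
  double d;
  std::memcpy(&d, &bits, sizeof d);
  return d;  // A NaN: exponent all ones, non-zero mantissa.
}

bool IsHole(double d) {
  uint64_t bits;
  std::memcpy(&bits, &d, sizeof d);
  return static_cast<uint32_t>(bits >> 32) == kSketchHoleNanUpper32;
}

int main() {
  assert(IsHole(MakeHole()));
  assert(!IsHole(1.0));
  return 0;
}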
__ bind(&done); @@ -3405,26 +3058,15 @@ void LCodeGen::DoLoadKeyedExternalArray(LLoadKeyed* instr) { key, instr->hydrogen()->key()->representation(), elements_kind, - 0, - instr->additional_index())); + instr->base_offset())); if (elements_kind == EXTERNAL_FLOAT32_ELEMENTS || elements_kind == FLOAT32_ELEMENTS) { - if (CpuFeatures::IsSupported(SSE2)) { - CpuFeatureScope scope(masm(), SSE2); - XMMRegister result(ToDoubleRegister(instr->result())); - __ movss(result, operand); - __ cvtss2sd(result, result); - } else { - X87Mov(ToX87Register(instr->result()), operand, kX87FloatOperand); - } + XMMRegister result(ToDoubleRegister(instr->result())); + __ movss(result, operand); + __ cvtss2sd(result, result); } else if (elements_kind == EXTERNAL_FLOAT64_ELEMENTS || elements_kind == FLOAT64_ELEMENTS) { - if (CpuFeatures::IsSupported(SSE2)) { - CpuFeatureScope scope(masm(), SSE2); - __ movsd(ToDoubleRegister(instr->result()), operand); - } else { - X87Mov(ToX87Register(instr->result()), operand); - } + __ movsd(ToDoubleRegister(instr->result()), operand); } else { Register result(ToRegister(instr->result())); switch (elements_kind) { @@ -3479,14 +3121,11 @@ void LCodeGen::DoLoadKeyedExternalArray(LLoadKeyed* instr) { void LCodeGen::DoLoadKeyedFixedDoubleArray(LLoadKeyed* instr) { if (instr->hydrogen()->RequiresHoleCheck()) { - int offset = FixedDoubleArray::kHeaderSize - kHeapObjectTag + - sizeof(kHoleNanLower32); Operand hole_check_operand = BuildFastArrayOperand( instr->elements(), instr->key(), instr->hydrogen()->key()->representation(), FAST_DOUBLE_ELEMENTS, - offset, - instr->additional_index()); + instr->base_offset() + sizeof(kHoleNanLower32)); __ cmp(hole_check_operand, Immediate(kHoleNanUpper32)); DeoptimizeIf(equal, instr->environment()); } @@ -3496,15 +3135,9 @@ void LCodeGen::DoLoadKeyedFixedDoubleArray(LLoadKeyed* instr) { instr->key(), instr->hydrogen()->key()->representation(), FAST_DOUBLE_ELEMENTS, - FixedDoubleArray::kHeaderSize - kHeapObjectTag, - instr->additional_index()); - if (CpuFeatures::IsSupported(SSE2)) { - CpuFeatureScope scope(masm(), SSE2); - XMMRegister result = ToDoubleRegister(instr->result()); - __ movsd(result, double_load_operand); - } else { - X87Mov(ToX87Register(instr->result()), double_load_operand); - } + instr->base_offset()); + XMMRegister result = ToDoubleRegister(instr->result()); + __ movsd(result, double_load_operand); } @@ -3517,8 +3150,7 @@ void LCodeGen::DoLoadKeyedFixedArray(LLoadKeyed* instr) { instr->key(), instr->hydrogen()->key()->representation(), FAST_ELEMENTS, - FixedArray::kHeaderSize - kHeapObjectTag, - instr->additional_index())); + instr->base_offset())); // Check for the hole value. 
if (instr->hydrogen()->RequiresHoleCheck()) { @@ -3549,13 +3181,9 @@ Operand LCodeGen::BuildFastArrayOperand( LOperand* key, Representation key_representation, ElementsKind elements_kind, - uint32_t offset, - uint32_t additional_index) { + uint32_t base_offset) { Register elements_pointer_reg = ToRegister(elements_pointer); int element_shift_size = ElementsKindToShiftSize(elements_kind); - if (IsFixedTypedArrayElementsKind(elements_kind)) { - offset += FixedTypedArrayBase::kDataOffset - kHeapObjectTag; - } int shift_size = element_shift_size; if (key->IsConstantOperand()) { int constant_value = ToInteger32(LConstantOperand::cast(key)); @@ -3563,8 +3191,8 @@ Operand LCodeGen::BuildFastArrayOperand( Abort(kArrayIndexConstantValueTooBig); } return Operand(elements_pointer_reg, - ((constant_value + additional_index) << shift_size) - + offset); + ((constant_value) << shift_size) + + base_offset); } else { // Take the tag bit into account while computing the shift size. if (key_representation.IsSmi() && (shift_size >= 1)) { @@ -3574,15 +3202,25 @@ Operand LCodeGen::BuildFastArrayOperand( return Operand(elements_pointer_reg, ToRegister(key), scale_factor, - offset + (additional_index << element_shift_size)); + base_offset); } } void LCodeGen::DoLoadKeyedGeneric(LLoadKeyedGeneric* instr) { - ASSERT(ToRegister(instr->context()).is(esi)); - ASSERT(ToRegister(instr->object()).is(edx)); - ASSERT(ToRegister(instr->key()).is(ecx)); + DCHECK(ToRegister(instr->context()).is(esi)); + DCHECK(ToRegister(instr->object()).is(LoadIC::ReceiverRegister())); + DCHECK(ToRegister(instr->key()).is(LoadIC::NameRegister())); + + if (FLAG_vector_ics) { + Register vector = ToRegister(instr->temp_vector()); + DCHECK(vector.is(LoadIC::VectorRegister())); + __ mov(vector, instr->hydrogen()->feedback_vector()); + // No need to allocate this register. + DCHECK(LoadIC::SlotRegister().is(eax)); + __ mov(LoadIC::SlotRegister(), + Immediate(Smi::FromInt(instr->hydrogen()->slot()))); + } Handle<Code> ic = isolate()->builtins()->KeyedLoadIC_Initialize(); CallCode(ic, RelocInfo::CODE_TARGET, instr); @@ -3683,8 +3321,8 @@ void LCodeGen::DoWrapReceiver(LWrapReceiver* instr) { __ mov(receiver, FieldOperand(function, JSFunction::kContextOffset)); const int global_offset = Context::SlotOffset(Context::GLOBAL_OBJECT_INDEX); __ mov(receiver, Operand(receiver, global_offset)); - const int receiver_offset = GlobalObject::kGlobalReceiverOffset; - __ mov(receiver, FieldOperand(receiver, receiver_offset)); + const int proxy_offset = GlobalObject::kGlobalProxyOffset; + __ mov(receiver, FieldOperand(receiver, proxy_offset)); __ bind(&receiver_ok); } @@ -3694,9 +3332,9 @@ void LCodeGen::DoApplyArguments(LApplyArguments* instr) { Register function = ToRegister(instr->function()); Register length = ToRegister(instr->length()); Register elements = ToRegister(instr->elements()); - ASSERT(receiver.is(eax)); // Used for parameter count. - ASSERT(function.is(edi)); // Required by InvokeFunction. - ASSERT(ToRegister(instr->result()).is(eax)); + DCHECK(receiver.is(eax)); // Used for parameter count. + DCHECK(function.is(edi)); // Required by InvokeFunction. + DCHECK(ToRegister(instr->result()).is(eax)); // Copy the arguments to this function possibly from the // adaptor frame below it. @@ -3720,7 +3358,7 @@ void LCodeGen::DoApplyArguments(LApplyArguments* instr) { // Invoke the function. 
__ bind(&invoke); - ASSERT(instr->HasPointerMap()); + DCHECK(instr->HasPointerMap()); LPointerMap* pointers = instr->pointer_map(); SafepointGenerator safepoint_generator( this, pointers, Safepoint::kLazyDeopt); @@ -3757,17 +3395,17 @@ void LCodeGen::DoContext(LContext* instr) { __ mov(result, Operand(ebp, StandardFrameConstants::kContextOffset)); } else { // If there is no frame, the context must be in esi. - ASSERT(result.is(esi)); + DCHECK(result.is(esi)); } } void LCodeGen::DoDeclareGlobals(LDeclareGlobals* instr) { - ASSERT(ToRegister(instr->context()).is(esi)); + DCHECK(ToRegister(instr->context()).is(esi)); __ push(esi); // The context is the first argument. __ push(Immediate(instr->hydrogen()->pairs())); __ push(Immediate(Smi::FromInt(instr->hydrogen()->flags()))); - CallRuntime(Runtime::kHiddenDeclareGlobals, 3, instr); + CallRuntime(Runtime::kDeclareGlobals, 3, instr); } @@ -3815,7 +3453,7 @@ void LCodeGen::CallKnownFunction(Handle<JSFunction> function, void LCodeGen::DoCallWithDescriptor(LCallWithDescriptor* instr) { - ASSERT(ToRegister(instr->result()).is(eax)); + DCHECK(ToRegister(instr->result()).is(eax)); LPointerMap* pointers = instr->pointer_map(); SafepointGenerator generator(this, pointers, Safepoint::kLazyDeopt); @@ -3826,7 +3464,7 @@ void LCodeGen::DoCallWithDescriptor(LCallWithDescriptor* instr) { generator.BeforeCall(__ CallSize(code, RelocInfo::CODE_TARGET)); __ call(code, RelocInfo::CODE_TARGET); } else { - ASSERT(instr->target()->IsRegister()); + DCHECK(instr->target()->IsRegister()); Register target = ToRegister(instr->target()); generator.BeforeCall(__ CallSize(Operand(target))); __ add(target, Immediate(Code::kHeaderSize - kHeapObjectTag)); @@ -3837,8 +3475,8 @@ void LCodeGen::DoCallWithDescriptor(LCallWithDescriptor* instr) { void LCodeGen::DoCallJSFunction(LCallJSFunction* instr) { - ASSERT(ToRegister(instr->function()).is(edi)); - ASSERT(ToRegister(instr->result()).is(eax)); + DCHECK(ToRegister(instr->function()).is(edi)); + DCHECK(ToRegister(instr->result()).is(eax)); if (instr->hydrogen()->pass_argument_count()) { __ mov(eax, instr->arity()); @@ -3891,7 +3529,7 @@ void LCodeGen::DoDeferredMathAbsTaggedHeapNumber(LMathAbs* instr) { // Slow case: Call the runtime system to do the number allocation. __ bind(&slow); - CallRuntimeFromDeferred(Runtime::kHiddenAllocateHeapNumber, 0, + CallRuntimeFromDeferred(Runtime::kAllocateHeapNumber, 0, instr, instr->context()); // Set the pointer to the new heap number in tmp. if (!tmp.is(eax)) __ mov(tmp, eax); @@ -3926,9 +3564,8 @@ void LCodeGen::DoMathAbs(LMathAbs* instr) { class DeferredMathAbsTaggedHeapNumber V8_FINAL : public LDeferredCode { public: DeferredMathAbsTaggedHeapNumber(LCodeGen* codegen, - LMathAbs* instr, - const X87Stack& x87_stack) - : LDeferredCode(codegen, x87_stack), instr_(instr) { } + LMathAbs* instr) + : LDeferredCode(codegen), instr_(instr) { } virtual void Generate() V8_OVERRIDE { codegen()->DoDeferredMathAbsTaggedHeapNumber(instr_); } @@ -3937,10 +3574,9 @@ void LCodeGen::DoMathAbs(LMathAbs* instr) { LMathAbs* instr_; }; - ASSERT(instr->value()->Equals(instr->result())); + DCHECK(instr->value()->Equals(instr->result())); Representation r = instr->hydrogen()->value()->representation(); - CpuFeatureScope scope(masm(), SSE2); if (r.IsDouble()) { XMMRegister scratch = double_scratch0(); XMMRegister input_reg = ToDoubleRegister(instr->value()); @@ -3951,7 +3587,7 @@ void LCodeGen::DoMathAbs(LMathAbs* instr) { EmitIntegerMathAbs(instr); } else { // Tagged case. 
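// EmitIntegerMathAbs (called in the integer branch above) conventionally
// lowers to the branchless cdq/xor/sub pattern on ia32; shown portably
// below as an assumption about the emitted code rather than a quote of it.
// Note that abs(INT32_MIN) still overflows, which is what the generated
// code's deopt-on-overflow guards against.
#include <cassert>
#include <cstdint>

int32_t BranchlessAbs(int32_t x) {
  int32_t mask = x >> 31;    // 0 if x >= 0, -1 if x < 0 (arithmetic shift).
  return (x ^ mask) - mask;  // Conditionally negates without a branch.
}

int main() {
  assert(BranchlessAbs(7) == 7);
  assert(BranchlessAbs(-7) == 7);
  return 0;
}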
DeferredMathAbsTaggedHeapNumber* deferred = - new(zone()) DeferredMathAbsTaggedHeapNumber(this, instr, x87_stack_); + new(zone()) DeferredMathAbsTaggedHeapNumber(this, instr); Register input_reg = ToRegister(instr->value()); // Smi check. __ JumpIfNotSmi(input_reg, deferred->entry()); @@ -3962,7 +3598,6 @@ void LCodeGen::DoMathAbs(LMathAbs* instr) { void LCodeGen::DoMathFloor(LMathFloor* instr) { - CpuFeatureScope scope(masm(), SSE2); XMMRegister xmm_scratch = double_scratch0(); Register output_reg = ToRegister(instr->result()); XMMRegister input_reg = ToDoubleRegister(instr->value()); @@ -4028,7 +3663,6 @@ void LCodeGen::DoMathFloor(LMathFloor* instr) { void LCodeGen::DoMathRound(LMathRound* instr) { - CpuFeatureScope scope(masm(), SSE2); Register output_reg = ToRegister(instr->result()); XMMRegister input_reg = ToDoubleRegister(instr->value()); XMMRegister xmm_scratch = double_scratch0(); @@ -4090,20 +3724,26 @@ void LCodeGen::DoMathRound(LMathRound* instr) { } -void LCodeGen::DoMathSqrt(LMathSqrt* instr) { - CpuFeatureScope scope(masm(), SSE2); +void LCodeGen::DoMathFround(LMathFround* instr) { XMMRegister input_reg = ToDoubleRegister(instr->value()); - ASSERT(ToDoubleRegister(instr->result()).is(input_reg)); - __ sqrtsd(input_reg, input_reg); + XMMRegister output_reg = ToDoubleRegister(instr->result()); + __ cvtsd2ss(output_reg, input_reg); + __ cvtss2sd(output_reg, output_reg); +} + + +void LCodeGen::DoMathSqrt(LMathSqrt* instr) { + Operand input = ToOperand(instr->value()); + XMMRegister output = ToDoubleRegister(instr->result()); + __ sqrtsd(output, input); } void LCodeGen::DoMathPowHalf(LMathPowHalf* instr) { - CpuFeatureScope scope(masm(), SSE2); XMMRegister xmm_scratch = double_scratch0(); XMMRegister input_reg = ToDoubleRegister(instr->value()); Register scratch = ToRegister(instr->temp()); - ASSERT(ToDoubleRegister(instr->result()).is(input_reg)); + DCHECK(ToDoubleRegister(instr->result()).is(input_reg)); // Note that according to ECMA-262 15.8.2.13: // Math.pow(-Infinity, 0.5) == Infinity @@ -4137,12 +3777,12 @@ void LCodeGen::DoPower(LPower* instr) { Representation exponent_type = instr->hydrogen()->right()->representation(); // Having marked this as a call, we can use any registers. // Just make sure that the input/output registers are the expected ones. 
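// DoMathFround, added above, implements Math.fround: round the double to
// the nearest float32 and widen back (cvtsd2ss then cvtss2sd). The whole
// operation in portable C++ is a pair of casts:
#include <cassert>

double Fround(double x) {
  return static_cast<double>(static_cast<float>(x));
}

int main() {
  assert(Fround(1.5) == 1.5);  // Exactly representable as float32.
  assert(Fround(0.1) != 0.1);  // 0.1 is not; float32 rounding shows through.
  return 0;
}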
- ASSERT(!instr->right()->IsDoubleRegister() || + DCHECK(!instr->right()->IsDoubleRegister() || ToDoubleRegister(instr->right()).is(xmm1)); - ASSERT(!instr->right()->IsRegister() || + DCHECK(!instr->right()->IsRegister() || ToRegister(instr->right()).is(eax)); - ASSERT(ToDoubleRegister(instr->left()).is(xmm2)); - ASSERT(ToDoubleRegister(instr->result()).is(xmm3)); + DCHECK(ToDoubleRegister(instr->left()).is(xmm2)); + DCHECK(ToDoubleRegister(instr->result()).is(xmm3)); if (exponent_type.IsSmi()) { MathPowStub stub(isolate(), MathPowStub::TAGGED); @@ -4159,7 +3799,7 @@ void LCodeGen::DoPower(LPower* instr) { MathPowStub stub(isolate(), MathPowStub::INTEGER); __ CallStub(&stub); } else { - ASSERT(exponent_type.IsDouble()); + DCHECK(exponent_type.IsDouble()); MathPowStub stub(isolate(), MathPowStub::DOUBLE); __ CallStub(&stub); } @@ -4167,8 +3807,7 @@ void LCodeGen::DoPower(LPower* instr) { void LCodeGen::DoMathLog(LMathLog* instr) { - CpuFeatureScope scope(masm(), SSE2); - ASSERT(instr->value()->Equals(instr->result())); + DCHECK(instr->value()->Equals(instr->result())); XMMRegister input_reg = ToDoubleRegister(instr->value()); XMMRegister xmm_scratch = double_scratch0(); Label positive, done, zero; @@ -4199,7 +3838,6 @@ void LCodeGen::DoMathLog(LMathLog* instr) { void LCodeGen::DoMathClz32(LMathClz32* instr) { - CpuFeatureScope scope(masm(), SSE2); Register input = ToRegister(instr->value()); Register result = ToRegister(instr->result()); Label not_zero_input; @@ -4214,7 +3852,6 @@ void LCodeGen::DoMathClz32(LMathClz32* instr) { void LCodeGen::DoMathExp(LMathExp* instr) { - CpuFeatureScope scope(masm(), SSE2); XMMRegister input = ToDoubleRegister(instr->value()); XMMRegister result = ToDoubleRegister(instr->result()); XMMRegister temp0 = double_scratch0(); @@ -4226,9 +3863,9 @@ void LCodeGen::DoMathExp(LMathExp* instr) { void LCodeGen::DoInvokeFunction(LInvokeFunction* instr) { - ASSERT(ToRegister(instr->context()).is(esi)); - ASSERT(ToRegister(instr->function()).is(edi)); - ASSERT(instr->HasPointerMap()); + DCHECK(ToRegister(instr->context()).is(esi)); + DCHECK(ToRegister(instr->function()).is(edi)); + DCHECK(instr->HasPointerMap()); Handle<JSFunction> known_function = instr->hydrogen()->known_function(); if (known_function.is_null()) { @@ -4248,9 +3885,9 @@ void LCodeGen::DoInvokeFunction(LInvokeFunction* instr) { void LCodeGen::DoCallFunction(LCallFunction* instr) { - ASSERT(ToRegister(instr->context()).is(esi)); - ASSERT(ToRegister(instr->function()).is(edi)); - ASSERT(ToRegister(instr->result()).is(eax)); + DCHECK(ToRegister(instr->context()).is(esi)); + DCHECK(ToRegister(instr->function()).is(edi)); + DCHECK(ToRegister(instr->result()).is(eax)); int arity = instr->arity(); CallFunctionStub stub(isolate(), arity, instr->hydrogen()->function_flags()); @@ -4259,9 +3896,9 @@ void LCodeGen::DoCallFunction(LCallFunction* instr) { void LCodeGen::DoCallNew(LCallNew* instr) { - ASSERT(ToRegister(instr->context()).is(esi)); - ASSERT(ToRegister(instr->constructor()).is(edi)); - ASSERT(ToRegister(instr->result()).is(eax)); + DCHECK(ToRegister(instr->context()).is(esi)); + DCHECK(ToRegister(instr->constructor()).is(edi)); + DCHECK(ToRegister(instr->result()).is(eax)); // No cell in ebx for construct type feedback in optimized code __ mov(ebx, isolate()->factory()->undefined_value()); @@ -4272,9 +3909,9 @@ void LCodeGen::DoCallNew(LCallNew* instr) { void LCodeGen::DoCallNewArray(LCallNewArray* instr) { - ASSERT(ToRegister(instr->context()).is(esi)); - 
ASSERT(ToRegister(instr->constructor()).is(edi)); - ASSERT(ToRegister(instr->result()).is(eax)); + DCHECK(ToRegister(instr->context()).is(esi)); + DCHECK(ToRegister(instr->constructor()).is(edi)); + DCHECK(ToRegister(instr->result()).is(eax)); __ Move(eax, Immediate(instr->arity())); __ mov(ebx, isolate()->factory()->undefined_value()); @@ -4317,7 +3954,7 @@ void LCodeGen::DoCallNewArray(LCallNewArray* instr) { void LCodeGen::DoCallRuntime(LCallRuntime* instr) { - ASSERT(ToRegister(instr->context()).is(esi)); + DCHECK(ToRegister(instr->context()).is(esi)); CallRuntime(instr->function(), instr->arity(), instr, instr->save_doubles()); } @@ -4350,7 +3987,7 @@ void LCodeGen::DoStoreNamedField(LStoreNamedField* instr) { int offset = access.offset(); if (access.IsExternalMemory()) { - ASSERT(!instr->hydrogen()->NeedsWriteBarrier()); + DCHECK(!instr->hydrogen()->NeedsWriteBarrier()); MemOperand operand = instr->object()->IsConstantOperand() ? MemOperand::StaticVariable( ToExternalReference(LConstantOperand::cast(instr->object()))) @@ -4368,42 +4005,27 @@ void LCodeGen::DoStoreNamedField(LStoreNamedField* instr) { Register object = ToRegister(instr->object()); __ AssertNotSmi(object); - ASSERT(!representation.IsSmi() || + DCHECK(!representation.IsSmi() || !instr->value()->IsConstantOperand() || IsSmi(LConstantOperand::cast(instr->value()))); if (representation.IsDouble()) { - ASSERT(access.IsInobject()); - ASSERT(!instr->hydrogen()->has_transition()); - ASSERT(!instr->hydrogen()->NeedsWriteBarrier()); - if (CpuFeatures::IsSupported(SSE2)) { - CpuFeatureScope scope(masm(), SSE2); - XMMRegister value = ToDoubleRegister(instr->value()); - __ movsd(FieldOperand(object, offset), value); - } else { - X87Register value = ToX87Register(instr->value()); - X87Mov(FieldOperand(object, offset), value); - } + DCHECK(access.IsInobject()); + DCHECK(!instr->hydrogen()->has_transition()); + DCHECK(!instr->hydrogen()->NeedsWriteBarrier()); + XMMRegister value = ToDoubleRegister(instr->value()); + __ movsd(FieldOperand(object, offset), value); return; } if (instr->hydrogen()->has_transition()) { Handle<Map> transition = instr->hydrogen()->transition_map(); AddDeprecationDependency(transition); - if (!instr->hydrogen()->NeedsWriteBarrierForMap()) { - __ mov(FieldOperand(object, HeapObject::kMapOffset), transition); - } else { + __ mov(FieldOperand(object, HeapObject::kMapOffset), transition); + if (instr->hydrogen()->NeedsWriteBarrierForMap()) { Register temp = ToRegister(instr->temp()); Register temp_map = ToRegister(instr->temp_map()); - __ mov(temp_map, transition); - __ mov(FieldOperand(object, HeapObject::kMapOffset), temp_map); // Update the write barrier for the map field. 
- __ RecordWriteField(object, - HeapObject::kMapOffset, - temp_map, - temp, - GetSaveFPRegsMode(isolate()), - OMIT_REMEMBERED_SET, - OMIT_SMI_CHECK); + __ RecordWriteForMap(object, transition, temp_map, temp, kSaveFPRegs); } } @@ -4422,11 +4044,11 @@ void LCodeGen::DoStoreNamedField(LStoreNamedField* instr) { __ Store(value, operand, representation); } else if (representation.IsInteger32()) { Immediate immediate = ToImmediate(operand_value, representation); - ASSERT(!instr->hydrogen()->NeedsWriteBarrier()); + DCHECK(!instr->hydrogen()->NeedsWriteBarrier()); __ mov(operand, immediate); } else { Handle<Object> handle_value = ToHandle(operand_value); - ASSERT(!instr->hydrogen()->NeedsWriteBarrier()); + DCHECK(!instr->hydrogen()->NeedsWriteBarrier()); __ mov(operand, handle_value); } } else { @@ -4442,19 +4064,20 @@ void LCodeGen::DoStoreNamedField(LStoreNamedField* instr) { offset, value, temp, - GetSaveFPRegsMode(isolate()), + kSaveFPRegs, EMIT_REMEMBERED_SET, - instr->hydrogen()->SmiCheckForWriteBarrier()); + instr->hydrogen()->SmiCheckForWriteBarrier(), + instr->hydrogen()->PointersToHereCheckForValue()); } } void LCodeGen::DoStoreNamedGeneric(LStoreNamedGeneric* instr) { - ASSERT(ToRegister(instr->context()).is(esi)); - ASSERT(ToRegister(instr->object()).is(edx)); - ASSERT(ToRegister(instr->value()).is(eax)); + DCHECK(ToRegister(instr->context()).is(esi)); + DCHECK(ToRegister(instr->object()).is(StoreIC::ReceiverRegister())); + DCHECK(ToRegister(instr->value()).is(StoreIC::ValueRegister())); - __ mov(ecx, instr->name()); + __ mov(StoreIC::NameRegister(), instr->name()); Handle<Code> ic = StoreIC::initialize_stub(isolate(), instr->strict_mode()); CallCode(ic, RelocInfo::CODE_TARGET, instr); } @@ -4466,7 +4089,7 @@ void LCodeGen::DoBoundsCheck(LBoundsCheck* instr) { __ cmp(ToOperand(instr->length()), ToImmediate(LConstantOperand::cast(instr->index()), instr->hydrogen()->length()->representation())); - cc = ReverseCondition(cc); + cc = CommuteCondition(cc); } else if (instr->length()->IsConstantOperand()) { __ cmp(ToOperand(instr->index()), ToImmediate(LConstantOperand::cast(instr->length()), @@ -4498,27 +4121,15 @@ void LCodeGen::DoStoreKeyedExternalArray(LStoreKeyed* instr) { key, instr->hydrogen()->key()->representation(), elements_kind, - 0, - instr->additional_index())); + instr->base_offset())); if (elements_kind == EXTERNAL_FLOAT32_ELEMENTS || elements_kind == FLOAT32_ELEMENTS) { - if (CpuFeatures::IsSafeForSnapshot(isolate(), SSE2)) { - CpuFeatureScope scope(masm(), SSE2); - XMMRegister xmm_scratch = double_scratch0(); - __ cvtsd2ss(xmm_scratch, ToDoubleRegister(instr->value())); - __ movss(operand, xmm_scratch); - } else { - __ fld(0); - __ fstp_s(operand); - } + XMMRegister xmm_scratch = double_scratch0(); + __ cvtsd2ss(xmm_scratch, ToDoubleRegister(instr->value())); + __ movss(operand, xmm_scratch); } else if (elements_kind == EXTERNAL_FLOAT64_ELEMENTS || elements_kind == FLOAT64_ELEMENTS) { - if (CpuFeatures::IsSafeForSnapshot(isolate(), SSE2)) { - CpuFeatureScope scope(masm(), SSE2); - __ movsd(operand, ToDoubleRegister(instr->value())); - } else { - X87Mov(operand, ToX87Register(instr->value())); - } + __ movsd(operand, ToDoubleRegister(instr->value())); } else { Register value = ToRegister(instr->value()); switch (elements_kind) { @@ -4569,71 +4180,21 @@ void LCodeGen::DoStoreKeyedFixedDoubleArray(LStoreKeyed* instr) { instr->key(), instr->hydrogen()->key()->representation(), FAST_DOUBLE_ELEMENTS, - FixedDoubleArray::kHeaderSize - kHeapObjectTag, - instr->additional_index()); 
+ instr->base_offset()); - if (CpuFeatures::IsSafeForSnapshot(isolate(), SSE2)) { - CpuFeatureScope scope(masm(), SSE2); - XMMRegister value = ToDoubleRegister(instr->value()); - - if (instr->NeedsCanonicalization()) { - Label have_value; + XMMRegister value = ToDoubleRegister(instr->value()); - __ ucomisd(value, value); - __ j(parity_odd, &have_value, Label::kNear); // NaN. + if (instr->NeedsCanonicalization()) { + Label have_value; - __ movsd(value, Operand::StaticVariable(canonical_nan_reference)); - __ bind(&have_value); - } + __ ucomisd(value, value); + __ j(parity_odd, &have_value, Label::kNear); // NaN. - __ movsd(double_store_operand, value); - } else { - // Can't use SSE2 in the serializer - if (instr->hydrogen()->IsConstantHoleStore()) { - // This means we should store the (double) hole. No floating point - // registers required. - double nan_double = FixedDoubleArray::hole_nan_as_double(); - uint64_t int_val = BitCast<uint64_t, double>(nan_double); - int32_t lower = static_cast<int32_t>(int_val); - int32_t upper = static_cast<int32_t>(int_val >> (kBitsPerInt)); - - __ mov(double_store_operand, Immediate(lower)); - Operand double_store_operand2 = BuildFastArrayOperand( - instr->elements(), - instr->key(), - instr->hydrogen()->key()->representation(), - FAST_DOUBLE_ELEMENTS, - FixedDoubleArray::kHeaderSize - kHeapObjectTag + kPointerSize, - instr->additional_index()); - __ mov(double_store_operand2, Immediate(upper)); - } else { - Label no_special_nan_handling; - X87Register value = ToX87Register(instr->value()); - X87Fxch(value); - - if (instr->NeedsCanonicalization()) { - __ fld(0); - __ fld(0); - __ FCmp(); - - __ j(parity_odd, &no_special_nan_handling, Label::kNear); - __ sub(esp, Immediate(kDoubleSize)); - __ fst_d(MemOperand(esp, 0)); - __ cmp(MemOperand(esp, sizeof(kHoleNanLower32)), - Immediate(kHoleNanUpper32)); - __ add(esp, Immediate(kDoubleSize)); - Label canonicalize; - __ j(not_equal, &canonicalize, Label::kNear); - __ jmp(&no_special_nan_handling, Label::kNear); - __ bind(&canonicalize); - __ fstp(0); - __ fld_d(Operand::StaticVariable(canonical_nan_reference)); - } - - __ bind(&no_special_nan_handling); - __ fst_d(double_store_operand); - } + __ movsd(value, Operand::StaticVariable(canonical_nan_reference)); + __ bind(&have_value); } + + __ movsd(double_store_operand, value); } @@ -4646,8 +4207,7 @@ void LCodeGen::DoStoreKeyedFixedArray(LStoreKeyed* instr) { instr->key(), instr->hydrogen()->key()->representation(), FAST_ELEMENTS, - FixedArray::kHeaderSize - kHeapObjectTag, - instr->additional_index()); + instr->base_offset()); if (instr->value()->IsRegister()) { __ mov(operand, ToRegister(instr->value())); } else { @@ -4656,27 +4216,28 @@ void LCodeGen::DoStoreKeyedFixedArray(LStoreKeyed* instr) { Immediate immediate = ToImmediate(operand_value, Representation::Smi()); __ mov(operand, immediate); } else { - ASSERT(!IsInteger32(operand_value)); + DCHECK(!IsInteger32(operand_value)); Handle<Object> handle_value = ToHandle(operand_value); __ mov(operand, handle_value); } } if (instr->hydrogen()->NeedsWriteBarrier()) { - ASSERT(instr->value()->IsRegister()); + DCHECK(instr->value()->IsRegister()); Register value = ToRegister(instr->value()); - ASSERT(!instr->key()->IsConstantOperand()); + DCHECK(!instr->key()->IsConstantOperand()); SmiCheck check_needed = - instr->hydrogen()->value()->IsHeapObject() + instr->hydrogen()->value()->type().IsHeapObject() ? OMIT_SMI_CHECK : INLINE_SMI_CHECK; // Compute address of modified element and store it into key register. 
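// The NeedsCanonicalization path above stores one fixed NaN bit pattern
// (canonical_nan_reference) in place of whatever NaN arrived, so arbitrary
// NaN payloads can never alias the hole encoding; ucomisd(value, value)
// plus the parity test is the classic self-compare NaN check. A sketch,
// assuming the canonical pattern is the standard quiet NaN (V8 keeps its
// own constant):
#include <cassert>
#include <cmath>
#include <limits>

double CanonicalizeNaN(double value) {
  if (value != value) {  // Only NaN compares unequal to itself.
    return std::numeric_limits<double>::quiet_NaN();
  }
  return value;
}

int main() {
  assert(std::isnan(CanonicalizeNaN(std::nan("1"))));
  assert(CanonicalizeNaN(2.5) == 2.5);  // Non-NaNs pass through unchanged.
  return 0;
}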
__ lea(key, operand); __ RecordWrite(elements, key, value, - GetSaveFPRegsMode(isolate()), + kSaveFPRegs, EMIT_REMEMBERED_SET, - check_needed); + check_needed, + instr->hydrogen()->PointersToHereCheckForValue()); } } @@ -4694,10 +4255,10 @@ void LCodeGen::DoStoreKeyed(LStoreKeyed* instr) { void LCodeGen::DoStoreKeyedGeneric(LStoreKeyedGeneric* instr) { - ASSERT(ToRegister(instr->context()).is(esi)); - ASSERT(ToRegister(instr->object()).is(edx)); - ASSERT(ToRegister(instr->key()).is(ecx)); - ASSERT(ToRegister(instr->value()).is(eax)); + DCHECK(ToRegister(instr->context()).is(esi)); + DCHECK(ToRegister(instr->object()).is(KeyedStoreIC::ReceiverRegister())); + DCHECK(ToRegister(instr->key()).is(KeyedStoreIC::NameRegister())); + DCHECK(ToRegister(instr->value()).is(KeyedStoreIC::ValueRegister())); Handle<Code> ic = instr->strict_mode() == STRICT ? isolate()->builtins()->KeyedStoreIC_Initialize_Strict() @@ -4736,13 +4297,13 @@ void LCodeGen::DoTransitionElementsKind(LTransitionElementsKind* instr) { __ mov(FieldOperand(object_reg, HeapObject::kMapOffset), Immediate(to_map)); // Write barrier. - ASSERT_NE(instr->temp(), NULL); + DCHECK_NE(instr->temp(), NULL); __ RecordWriteForMap(object_reg, to_map, new_map_reg, ToRegister(instr->temp()), kDontSaveFPRegs); } else { - ASSERT(ToRegister(instr->context()).is(esi)); - ASSERT(object_reg.is(eax)); + DCHECK(ToRegister(instr->context()).is(esi)); + DCHECK(object_reg.is(eax)); PushSafepointRegistersScope scope(this); __ mov(ebx, to_map); bool is_js_array = from_map->instance_type() == JS_ARRAY_TYPE; @@ -4759,9 +4320,8 @@ void LCodeGen::DoStringCharCodeAt(LStringCharCodeAt* instr) { class DeferredStringCharCodeAt V8_FINAL : public LDeferredCode { public: DeferredStringCharCodeAt(LCodeGen* codegen, - LStringCharCodeAt* instr, - const X87Stack& x87_stack) - : LDeferredCode(codegen, x87_stack), instr_(instr) { } + LStringCharCodeAt* instr) + : LDeferredCode(codegen), instr_(instr) { } virtual void Generate() V8_OVERRIDE { codegen()->DoDeferredStringCharCodeAt(instr_); } @@ -4771,7 +4331,7 @@ void LCodeGen::DoStringCharCodeAt(LStringCharCodeAt* instr) { }; DeferredStringCharCodeAt* deferred = - new(zone()) DeferredStringCharCodeAt(this, instr, x87_stack_); + new(zone()) DeferredStringCharCodeAt(this, instr); StringCharLoadGenerator::Generate(masm(), factory(), @@ -4806,7 +4366,7 @@ void LCodeGen::DoDeferredStringCharCodeAt(LStringCharCodeAt* instr) { __ SmiTag(index); __ push(index); } - CallRuntimeFromDeferred(Runtime::kHiddenStringCharCodeAt, 2, + CallRuntimeFromDeferred(Runtime::kStringCharCodeAtRT, 2, instr, instr->context()); __ AssertSmi(eax); __ SmiUntag(eax); @@ -4818,9 +4378,8 @@ void LCodeGen::DoStringCharFromCode(LStringCharFromCode* instr) { class DeferredStringCharFromCode V8_FINAL : public LDeferredCode { public: DeferredStringCharFromCode(LCodeGen* codegen, - LStringCharFromCode* instr, - const X87Stack& x87_stack) - : LDeferredCode(codegen, x87_stack), instr_(instr) { } + LStringCharFromCode* instr) + : LDeferredCode(codegen), instr_(instr) { } virtual void Generate() V8_OVERRIDE { codegen()->DoDeferredStringCharFromCode(instr_); } @@ -4830,12 +4389,12 @@ void LCodeGen::DoStringCharFromCode(LStringCharFromCode* instr) { }; DeferredStringCharFromCode* deferred = - new(zone()) DeferredStringCharFromCode(this, instr, x87_stack_); + new(zone()) DeferredStringCharFromCode(this, instr); - ASSERT(instr->hydrogen()->value()->representation().IsInteger32()); + DCHECK(instr->hydrogen()->value()->representation().IsInteger32()); Register char_code = 
ToRegister(instr->char_code()); Register result = ToRegister(instr->result()); - ASSERT(!char_code.is(result)); + DCHECK(!char_code.is(result)); __ cmp(char_code, String::kMaxOneByteCharCode); __ j(above, deferred->entry()); @@ -4867,9 +4426,9 @@ void LCodeGen::DoDeferredStringCharFromCode(LStringCharFromCode* instr) { void LCodeGen::DoStringAdd(LStringAdd* instr) { - ASSERT(ToRegister(instr->context()).is(esi)); - ASSERT(ToRegister(instr->left()).is(edx)); - ASSERT(ToRegister(instr->right()).is(eax)); + DCHECK(ToRegister(instr->context()).is(esi)); + DCHECK(ToRegister(instr->left()).is(edx)); + DCHECK(ToRegister(instr->right()).is(eax)); StringAddStub stub(isolate(), instr->hydrogen()->flags(), instr->hydrogen()->pretenure_flag()); @@ -4880,38 +4439,16 @@ void LCodeGen::DoStringAdd(LStringAdd* instr) { void LCodeGen::DoInteger32ToDouble(LInteger32ToDouble* instr) { LOperand* input = instr->value(); LOperand* output = instr->result(); - ASSERT(input->IsRegister() || input->IsStackSlot()); - ASSERT(output->IsDoubleRegister()); - if (CpuFeatures::IsSupported(SSE2)) { - CpuFeatureScope scope(masm(), SSE2); - __ Cvtsi2sd(ToDoubleRegister(output), ToOperand(input)); - } else if (input->IsRegister()) { - Register input_reg = ToRegister(input); - __ push(input_reg); - X87Mov(ToX87Register(output), Operand(esp, 0), kX87IntOperand); - __ pop(input_reg); - } else { - X87Mov(ToX87Register(output), ToOperand(input), kX87IntOperand); - } + DCHECK(input->IsRegister() || input->IsStackSlot()); + DCHECK(output->IsDoubleRegister()); + __ Cvtsi2sd(ToDoubleRegister(output), ToOperand(input)); } void LCodeGen::DoUint32ToDouble(LUint32ToDouble* instr) { LOperand* input = instr->value(); LOperand* output = instr->result(); - if (CpuFeatures::IsSupported(SSE2)) { - CpuFeatureScope scope(masm(), SSE2); - LOperand* temp = instr->temp(); - - __ LoadUint32(ToDoubleRegister(output), - ToRegister(input), - ToDoubleRegister(temp)); - } else { - X87Register res = ToX87Register(output); - X87PrepareToWrite(res); - __ LoadUint32NoSSE2(ToRegister(input)); - X87CommitWrite(res); - } + __ LoadUint32(ToDoubleRegister(output), ToRegister(input)); } @@ -4919,12 +4456,11 @@ void LCodeGen::DoNumberTagI(LNumberTagI* instr) { class DeferredNumberTagI V8_FINAL : public LDeferredCode { public: DeferredNumberTagI(LCodeGen* codegen, - LNumberTagI* instr, - const X87Stack& x87_stack) - : LDeferredCode(codegen, x87_stack), instr_(instr) { } + LNumberTagI* instr) + : LDeferredCode(codegen), instr_(instr) { } virtual void Generate() V8_OVERRIDE { - codegen()->DoDeferredNumberTagIU(instr_, instr_->value(), instr_->temp(), - NULL, SIGNED_INT32); + codegen()->DoDeferredNumberTagIU( + instr_, instr_->value(), instr_->temp(), SIGNED_INT32); } virtual LInstruction* instr() V8_OVERRIDE { return instr_; } private: @@ -4932,11 +4468,11 @@ void LCodeGen::DoNumberTagI(LNumberTagI* instr) { }; LOperand* input = instr->value(); - ASSERT(input->IsRegister() && input->Equals(instr->result())); + DCHECK(input->IsRegister() && input->Equals(instr->result())); Register reg = ToRegister(input); DeferredNumberTagI* deferred = - new(zone()) DeferredNumberTagI(this, instr, x87_stack_); + new(zone()) DeferredNumberTagI(this, instr); __ SmiTag(reg); __ j(overflow, deferred->entry()); __ bind(deferred->exit()); @@ -4946,13 +4482,11 @@ void LCodeGen::DoNumberTagI(LNumberTagI* instr) { void LCodeGen::DoNumberTagU(LNumberTagU* instr) { class DeferredNumberTagU V8_FINAL : public LDeferredCode { public: - DeferredNumberTagU(LCodeGen* codegen, - LNumberTagU* instr, - 
const X87Stack& x87_stack) - : LDeferredCode(codegen, x87_stack), instr_(instr) { } + DeferredNumberTagU(LCodeGen* codegen, LNumberTagU* instr) + : LDeferredCode(codegen), instr_(instr) { } virtual void Generate() V8_OVERRIDE { - codegen()->DoDeferredNumberTagIU(instr_, instr_->value(), instr_->temp1(), - instr_->temp2(), UNSIGNED_INT32); + codegen()->DoDeferredNumberTagIU( + instr_, instr_->value(), instr_->temp(), UNSIGNED_INT32); } virtual LInstruction* instr() V8_OVERRIDE { return instr_; } private: @@ -4960,11 +4494,11 @@ void LCodeGen::DoNumberTagU(LNumberTagU* instr) { }; LOperand* input = instr->value(); - ASSERT(input->IsRegister() && input->Equals(instr->result())); + DCHECK(input->IsRegister() && input->Equals(instr->result())); Register reg = ToRegister(input); DeferredNumberTagU* deferred = - new(zone()) DeferredNumberTagU(this, instr, x87_stack_); + new(zone()) DeferredNumberTagU(this, instr); __ cmp(reg, Immediate(Smi::kMaxValue)); __ j(above, deferred->entry()); __ SmiTag(reg); @@ -4974,12 +4508,11 @@ void LCodeGen::DoNumberTagU(LNumberTagU* instr) { void LCodeGen::DoDeferredNumberTagIU(LInstruction* instr, LOperand* value, - LOperand* temp1, - LOperand* temp2, + LOperand* temp, IntegerSignedness signedness) { Label done, slow; Register reg = ToRegister(value); - Register tmp = ToRegister(temp1); + Register tmp = ToRegister(temp); XMMRegister xmm_scratch = double_scratch0(); if (signedness == SIGNED_INT32) { @@ -4988,27 +4521,9 @@ void LCodeGen::DoDeferredNumberTagIU(LInstruction* instr, // the value in there. If that fails, call the runtime system. __ SmiUntag(reg); __ xor_(reg, 0x80000000); - if (CpuFeatures::IsSupported(SSE2)) { - CpuFeatureScope feature_scope(masm(), SSE2); - __ Cvtsi2sd(xmm_scratch, Operand(reg)); - } else { - __ push(reg); - __ fild_s(Operand(esp, 0)); - __ pop(reg); - } + __ Cvtsi2sd(xmm_scratch, Operand(reg)); } else { - if (CpuFeatures::IsSupported(SSE2)) { - CpuFeatureScope feature_scope(masm(), SSE2); - __ LoadUint32(xmm_scratch, reg, ToDoubleRegister(temp2)); - } else { - // There's no fild variant for unsigned values, so zero-extend to a 64-bit - // int manually. - __ push(Immediate(0)); - __ push(reg); - __ fild_d(Operand(esp, 0)); - __ pop(reg); - __ pop(reg); - } + __ LoadUint32(xmm_scratch, reg); } if (FLAG_inline_new) { @@ -5029,11 +4544,11 @@ void LCodeGen::DoDeferredNumberTagIU(LInstruction* instr, // NumberTagI and NumberTagD use the context from the frame, rather than // the environment's HContext or HInlinedContext value. - // They only call Runtime::kHiddenAllocateHeapNumber. + // They only call Runtime::kAllocateHeapNumber. // The corresponding HChange instructions are added in a phase that does // not have easy access to the local context. __ mov(esi, Operand(ebp, StandardFrameConstants::kContextOffset)); - __ CallRuntimeSaveDoubles(Runtime::kHiddenAllocateHeapNumber); + __ CallRuntimeSaveDoubles(Runtime::kAllocateHeapNumber); RecordSafepointWithRegisters( instr->pointer_map(), 0, Safepoint::kNoLazyDeopt); __ StoreToSafepointRegisterSlot(reg, eax); @@ -5042,22 +4557,15 @@ void LCodeGen::DoDeferredNumberTagIU(LInstruction* instr, // Done. Put the value in xmm_scratch into the value of the allocated heap // number. 
__ bind(&done); - if (CpuFeatures::IsSupported(SSE2)) { - CpuFeatureScope feature_scope(masm(), SSE2); - __ movsd(FieldOperand(reg, HeapNumber::kValueOffset), xmm_scratch); - } else { - __ fstp_d(FieldOperand(reg, HeapNumber::kValueOffset)); - } + __ movsd(FieldOperand(reg, HeapNumber::kValueOffset), xmm_scratch); } void LCodeGen::DoNumberTagD(LNumberTagD* instr) { class DeferredNumberTagD V8_FINAL : public LDeferredCode { public: - DeferredNumberTagD(LCodeGen* codegen, - LNumberTagD* instr, - const X87Stack& x87_stack) - : LDeferredCode(codegen, x87_stack), instr_(instr) { } + DeferredNumberTagD(LCodeGen* codegen, LNumberTagD* instr) + : LDeferredCode(codegen), instr_(instr) { } virtual void Generate() V8_OVERRIDE { codegen()->DoDeferredNumberTagD(instr_); } @@ -5068,15 +4576,8 @@ void LCodeGen::DoNumberTagD(LNumberTagD* instr) { Register reg = ToRegister(instr->result()); - bool use_sse2 = CpuFeatures::IsSupported(SSE2); - if (!use_sse2) { - // Put the value to the top of stack - X87Register src = ToX87Register(instr->value()); - X87LoadForUsage(src); - } - DeferredNumberTagD* deferred = - new(zone()) DeferredNumberTagD(this, instr, x87_stack_); + new(zone()) DeferredNumberTagD(this, instr); if (FLAG_inline_new) { Register tmp = ToRegister(instr->temp()); __ AllocateHeapNumber(reg, tmp, no_reg, deferred->entry()); @@ -5084,13 +4585,8 @@ void LCodeGen::DoNumberTagD(LNumberTagD* instr) { __ jmp(deferred->entry()); } __ bind(deferred->exit()); - if (use_sse2) { - CpuFeatureScope scope(masm(), SSE2); - XMMRegister input_reg = ToDoubleRegister(instr->value()); - __ movsd(FieldOperand(reg, HeapNumber::kValueOffset), input_reg); - } else { - __ fstp_d(FieldOperand(reg, HeapNumber::kValueOffset)); - } + XMMRegister input_reg = ToDoubleRegister(instr->value()); + __ movsd(FieldOperand(reg, HeapNumber::kValueOffset), input_reg); } @@ -5104,11 +4600,11 @@ void LCodeGen::DoDeferredNumberTagD(LNumberTagD* instr) { PushSafepointRegistersScope scope(this); // NumberTagI and NumberTagD use the context from the frame, rather than // the environment's HContext or HInlinedContext value. - // They only call Runtime::kHiddenAllocateHeapNumber. + // They only call Runtime::kAllocateHeapNumber. // The corresponding HChange instructions are added in a phase that does // not have easy access to the local context. __ mov(esi, Operand(ebp, StandardFrameConstants::kContextOffset)); - __ CallRuntimeSaveDoubles(Runtime::kHiddenAllocateHeapNumber); + __ CallRuntimeSaveDoubles(Runtime::kAllocateHeapNumber); RecordSafepointWithRegisters( instr->pointer_map(), 0, Safepoint::kNoLazyDeopt); __ StoreToSafepointRegisterSlot(reg, eax); @@ -5134,7 +4630,7 @@ void LCodeGen::DoSmiTag(LSmiTag* instr) { void LCodeGen::DoSmiUntag(LSmiUntag* instr) { LOperand* input = instr->value(); Register result = ToRegister(input); - ASSERT(input->IsRegister() && input->Equals(instr->result())); + DCHECK(input->IsRegister() && input->Equals(instr->result())); if (instr->needs_check()) { __ test(result, Immediate(kSmiTagMask)); DeoptimizeIf(not_zero, instr->environment()); @@ -5145,76 +4641,6 @@ void LCodeGen::DoSmiUntag(LSmiUntag* instr) { } -void LCodeGen::EmitNumberUntagDNoSSE2(Register input_reg, - Register temp_reg, - X87Register res_reg, - bool can_convert_undefined_to_nan, - bool deoptimize_on_minus_zero, - LEnvironment* env, - NumberUntagDMode mode) { - Label load_smi, done; - - X87PrepareToWrite(res_reg); - if (mode == NUMBER_CANDIDATE_IS_ANY_TAGGED) { - // Smi check. 
- __ JumpIfSmi(input_reg, &load_smi, Label::kNear); - - // Heap number map check. - __ cmp(FieldOperand(input_reg, HeapObject::kMapOffset), - factory()->heap_number_map()); - if (!can_convert_undefined_to_nan) { - DeoptimizeIf(not_equal, env); - } else { - Label heap_number, convert; - __ j(equal, &heap_number, Label::kNear); - - // Convert undefined (or hole) to NaN. - __ cmp(input_reg, factory()->undefined_value()); - DeoptimizeIf(not_equal, env); - - __ bind(&convert); - ExternalReference nan = - ExternalReference::address_of_canonical_non_hole_nan(); - __ fld_d(Operand::StaticVariable(nan)); - __ jmp(&done, Label::kNear); - - __ bind(&heap_number); - } - // Heap number to x87 conversion. - __ fld_d(FieldOperand(input_reg, HeapNumber::kValueOffset)); - if (deoptimize_on_minus_zero) { - __ fldz(); - __ FCmp(); - __ fld_d(FieldOperand(input_reg, HeapNumber::kValueOffset)); - __ j(not_zero, &done, Label::kNear); - - // Use general purpose registers to check if we have -0.0 - __ mov(temp_reg, FieldOperand(input_reg, HeapNumber::kExponentOffset)); - __ test(temp_reg, Immediate(HeapNumber::kSignMask)); - __ j(zero, &done, Label::kNear); - - // Pop FPU stack before deoptimizing. - __ fstp(0); - DeoptimizeIf(not_zero, env); - } - __ jmp(&done, Label::kNear); - } else { - ASSERT(mode == NUMBER_CANDIDATE_IS_SMI); - } - - __ bind(&load_smi); - // Clobbering a temp is faster than re-tagging the - // input register since we avoid dependencies. - __ mov(temp_reg, input_reg); - __ SmiUntag(temp_reg); // Untag smi before converting to float. - __ push(temp_reg); - __ fild_s(Operand(esp, 0)); - __ add(esp, Immediate(kPointerSize)); - __ bind(&done); - X87CommitWrite(res_reg); -} - - void LCodeGen::EmitNumberUntagD(Register input_reg, Register temp_reg, XMMRegister result_reg, @@ -5264,7 +4690,7 @@ void LCodeGen::EmitNumberUntagD(Register input_reg, __ jmp(&done, Label::kNear); } } else { - ASSERT(mode == NUMBER_CANDIDATE_IS_SMI); + DCHECK(mode == NUMBER_CANDIDATE_IS_SMI); } __ bind(&load_smi); @@ -5330,10 +4756,8 @@ void LCodeGen::DoDeferredTaggedToI(LTaggedToI* instr, Label* done) { void LCodeGen::DoTaggedToI(LTaggedToI* instr) { class DeferredTaggedToI V8_FINAL : public LDeferredCode { public: - DeferredTaggedToI(LCodeGen* codegen, - LTaggedToI* instr, - const X87Stack& x87_stack) - : LDeferredCode(codegen, x87_stack), instr_(instr) { } + DeferredTaggedToI(LCodeGen* codegen, LTaggedToI* instr) + : LDeferredCode(codegen), instr_(instr) { } virtual void Generate() V8_OVERRIDE { codegen()->DoDeferredTaggedToI(instr_, done()); } @@ -5343,15 +4767,15 @@ void LCodeGen::DoTaggedToI(LTaggedToI* instr) { }; LOperand* input = instr->value(); - ASSERT(input->IsRegister()); + DCHECK(input->IsRegister()); Register input_reg = ToRegister(input); - ASSERT(input_reg.is(ToRegister(instr->result()))); + DCHECK(input_reg.is(ToRegister(instr->result()))); if (instr->hydrogen()->value()->representation().IsSmi()) { __ SmiUntag(input_reg); } else { DeferredTaggedToI* deferred = - new(zone()) DeferredTaggedToI(this, instr, x87_stack_); + new(zone()) DeferredTaggedToI(this, instr); // Optimistically untag the input. // If the input is a HeapObject, SmiUntag will set the carry flag. 
STATIC_ASSERT(kSmiTagSize == 1 && kSmiTag == 0); @@ -5366,11 +4790,11 @@ void LCodeGen::DoTaggedToI(LTaggedToI* instr) { void LCodeGen::DoNumberUntagD(LNumberUntagD* instr) { LOperand* input = instr->value(); - ASSERT(input->IsRegister()); + DCHECK(input->IsRegister()); LOperand* temp = instr->temp(); - ASSERT(temp->IsRegister()); + DCHECK(temp->IsRegister()); LOperand* result = instr->result(); - ASSERT(result->IsDoubleRegister()); + DCHECK(result->IsDoubleRegister()); Register input_reg = ToRegister(input); bool deoptimize_on_minus_zero = @@ -5381,59 +4805,33 @@ void LCodeGen::DoNumberUntagD(LNumberUntagD* instr) { NumberUntagDMode mode = value->representation().IsSmi() ? NUMBER_CANDIDATE_IS_SMI : NUMBER_CANDIDATE_IS_ANY_TAGGED; - if (CpuFeatures::IsSupported(SSE2)) { - CpuFeatureScope scope(masm(), SSE2); - XMMRegister result_reg = ToDoubleRegister(result); - EmitNumberUntagD(input_reg, - temp_reg, - result_reg, - instr->hydrogen()->can_convert_undefined_to_nan(), - deoptimize_on_minus_zero, - instr->environment(), - mode); - } else { - EmitNumberUntagDNoSSE2(input_reg, - temp_reg, - ToX87Register(instr->result()), - instr->hydrogen()->can_convert_undefined_to_nan(), - deoptimize_on_minus_zero, - instr->environment(), - mode); - } + XMMRegister result_reg = ToDoubleRegister(result); + EmitNumberUntagD(input_reg, + temp_reg, + result_reg, + instr->hydrogen()->can_convert_undefined_to_nan(), + deoptimize_on_minus_zero, + instr->environment(), + mode); } void LCodeGen::DoDoubleToI(LDoubleToI* instr) { LOperand* input = instr->value(); - ASSERT(input->IsDoubleRegister()); + DCHECK(input->IsDoubleRegister()); LOperand* result = instr->result(); - ASSERT(result->IsRegister()); + DCHECK(result->IsRegister()); Register result_reg = ToRegister(result); if (instr->truncating()) { - if (CpuFeatures::IsSafeForSnapshot(isolate(), SSE2)) { - CpuFeatureScope scope(masm(), SSE2); - XMMRegister input_reg = ToDoubleRegister(input); - __ TruncateDoubleToI(result_reg, input_reg); - } else { - X87Register input_reg = ToX87Register(input); - X87Fxch(input_reg); - __ TruncateX87TOSToI(result_reg); - } + XMMRegister input_reg = ToDoubleRegister(input); + __ TruncateDoubleToI(result_reg, input_reg); } else { Label bailout, done; - if (CpuFeatures::IsSafeForSnapshot(isolate(), SSE2)) { - CpuFeatureScope scope(masm(), SSE2); - XMMRegister input_reg = ToDoubleRegister(input); - XMMRegister xmm_scratch = double_scratch0(); - __ DoubleToI(result_reg, input_reg, xmm_scratch, - instr->hydrogen()->GetMinusZeroMode(), &bailout, Label::kNear); - } else { - X87Register input_reg = ToX87Register(input); - X87Fxch(input_reg); - __ X87TOSToI(result_reg, instr->hydrogen()->GetMinusZeroMode(), - &bailout, Label::kNear); - } + XMMRegister input_reg = ToDoubleRegister(input); + XMMRegister xmm_scratch = double_scratch0(); + __ DoubleToI(result_reg, input_reg, xmm_scratch, + instr->hydrogen()->GetMinusZeroMode(), &bailout, Label::kNear); __ jmp(&done, Label::kNear); __ bind(&bailout); DeoptimizeIf(no_condition, instr->environment()); @@ -5444,24 +4842,16 @@ void LCodeGen::DoDoubleToI(LDoubleToI* instr) { void LCodeGen::DoDoubleToSmi(LDoubleToSmi* instr) { LOperand* input = instr->value(); - ASSERT(input->IsDoubleRegister()); + DCHECK(input->IsDoubleRegister()); LOperand* result = instr->result(); - ASSERT(result->IsRegister()); + DCHECK(result->IsRegister()); Register result_reg = ToRegister(result); Label bailout, done; - if (CpuFeatures::IsSafeForSnapshot(isolate(), SSE2)) { - CpuFeatureScope scope(masm(), SSE2); - XMMRegister 
input_reg = ToDoubleRegister(input); - XMMRegister xmm_scratch = double_scratch0(); - __ DoubleToI(result_reg, input_reg, xmm_scratch, - instr->hydrogen()->GetMinusZeroMode(), &bailout, Label::kNear); - } else { - X87Register input_reg = ToX87Register(input); - X87Fxch(input_reg); - __ X87TOSToI(result_reg, instr->hydrogen()->GetMinusZeroMode(), - &bailout, Label::kNear); - } + XMMRegister input_reg = ToDoubleRegister(input); + XMMRegister xmm_scratch = double_scratch0(); + __ DoubleToI(result_reg, input_reg, xmm_scratch, + instr->hydrogen()->GetMinusZeroMode(), &bailout, Label::kNear); __ jmp(&done, Label::kNear); __ bind(&bailout); DeoptimizeIf(no_condition, instr->environment()); @@ -5480,7 +4870,7 @@ void LCodeGen::DoCheckSmi(LCheckSmi* instr) { void LCodeGen::DoCheckNonSmi(LCheckNonSmi* instr) { - if (!instr->hydrogen()->value()->IsHeapObject()) { + if (!instr->hydrogen()->value()->type().IsHeapObject()) { LOperand* input = instr->value(); __ test(ToOperand(input), Immediate(kSmiTagMask)); DeoptimizeIf(zero, instr->environment()); @@ -5520,7 +4910,7 @@ void LCodeGen::DoCheckInstanceType(LCheckInstanceType* instr) { instr->hydrogen()->GetCheckMaskAndTag(&mask, &tag); if (IsPowerOf2(mask)) { - ASSERT(tag == 0 || IsPowerOf2(tag)); + DCHECK(tag == 0 || IsPowerOf2(tag)); __ test_b(FieldOperand(temp, Map::kInstanceTypeOffset), mask); DeoptimizeIf(tag == 0 ? not_zero : zero, instr->environment()); } else { @@ -5565,11 +4955,8 @@ void LCodeGen::DoDeferredInstanceMigration(LCheckMaps* instr, Register object) { void LCodeGen::DoCheckMaps(LCheckMaps* instr) { class DeferredCheckMaps V8_FINAL : public LDeferredCode { public: - DeferredCheckMaps(LCodeGen* codegen, - LCheckMaps* instr, - Register object, - const X87Stack& x87_stack) - : LDeferredCode(codegen, x87_stack), instr_(instr), object_(object) { + DeferredCheckMaps(LCodeGen* codegen, LCheckMaps* instr, Register object) + : LDeferredCode(codegen), instr_(instr), object_(object) { SetExit(check_maps()); } virtual void Generate() V8_OVERRIDE { @@ -5592,12 +4979,12 @@ void LCodeGen::DoCheckMaps(LCheckMaps* instr) { } LOperand* input = instr->value(); - ASSERT(input->IsRegister()); + DCHECK(input->IsRegister()); Register reg = ToRegister(input); DeferredCheckMaps* deferred = NULL; if (instr->hydrogen()->HasMigrationTarget()) { - deferred = new(zone()) DeferredCheckMaps(this, instr, reg, x87_stack_); + deferred = new(zone()) DeferredCheckMaps(this, instr, reg); __ bind(deferred->check_maps()); } @@ -5622,7 +5009,6 @@ void LCodeGen::DoCheckMaps(LCheckMaps* instr) { void LCodeGen::DoClampDToUint8(LClampDToUint8* instr) { - CpuFeatureScope scope(masm(), SSE2); XMMRegister value_reg = ToDoubleRegister(instr->unclamped()); XMMRegister xmm_scratch = double_scratch0(); Register result_reg = ToRegister(instr->result()); @@ -5631,16 +5017,14 @@ void LCodeGen::DoClampDToUint8(LClampDToUint8* instr) { void LCodeGen::DoClampIToUint8(LClampIToUint8* instr) { - ASSERT(instr->unclamped()->Equals(instr->result())); + DCHECK(instr->unclamped()->Equals(instr->result())); Register value_reg = ToRegister(instr->result()); __ ClampUint8(value_reg); } void LCodeGen::DoClampTToUint8(LClampTToUint8* instr) { - CpuFeatureScope scope(masm(), SSE2); - - ASSERT(instr->unclamped()->Equals(instr->result())); + DCHECK(instr->unclamped()->Equals(instr->result())); Register input_reg = ToRegister(instr->unclamped()); XMMRegister temp_xmm_reg = ToDoubleRegister(instr->temp_xmm()); XMMRegister xmm_scratch = double_scratch0(); @@ -5674,130 +5058,7 @@ void 
LCodeGen::DoClampTToUint8(LClampTToUint8* instr) { } -void LCodeGen::DoClampTToUint8NoSSE2(LClampTToUint8NoSSE2* instr) { - Register input_reg = ToRegister(instr->unclamped()); - Register result_reg = ToRegister(instr->result()); - Register scratch = ToRegister(instr->scratch()); - Register scratch2 = ToRegister(instr->scratch2()); - Register scratch3 = ToRegister(instr->scratch3()); - Label is_smi, done, heap_number, valid_exponent, - largest_value, zero_result, maybe_nan_or_infinity; - - __ JumpIfSmi(input_reg, &is_smi); - - // Check for heap number - __ cmp(FieldOperand(input_reg, HeapObject::kMapOffset), - factory()->heap_number_map()); - __ j(equal, &heap_number, Label::kNear); - - // Check for undefined. Undefined is converted to zero for clamping - // conversions. - __ cmp(input_reg, factory()->undefined_value()); - DeoptimizeIf(not_equal, instr->environment()); - __ jmp(&zero_result, Label::kNear); - - // Heap number - __ bind(&heap_number); - - // Surprisingly, all of the hand-crafted bit-manipulations below are much - // faster than the x86 FPU built-in instruction, especially since "banker's - // rounding" would be additionally very expensive - - // Get exponent word. - __ mov(scratch, FieldOperand(input_reg, HeapNumber::kExponentOffset)); - __ mov(scratch3, FieldOperand(input_reg, HeapNumber::kMantissaOffset)); - - // Test for negative values --> clamp to zero - __ test(scratch, scratch); - __ j(negative, &zero_result, Label::kNear); - - // Get exponent alone in scratch2. - __ mov(scratch2, scratch); - __ and_(scratch2, HeapNumber::kExponentMask); - __ shr(scratch2, HeapNumber::kExponentShift); - __ j(zero, &zero_result, Label::kNear); - __ sub(scratch2, Immediate(HeapNumber::kExponentBias - 1)); - __ j(negative, &zero_result, Label::kNear); - - const uint32_t non_int8_exponent = 7; - __ cmp(scratch2, Immediate(non_int8_exponent + 1)); - // If the exponent is too big, check for special values. - __ j(greater, &maybe_nan_or_infinity, Label::kNear); - - __ bind(&valid_exponent); - // Exponent word in scratch, exponent in scratch2. We know that 0 <= exponent - // < 7. The shift bias is the number of bits to shift the mantissa such that - // with an exponent of 7 such the that top-most one is in bit 30, allowing - // detection the rounding overflow of a 255.5 to 256 (bit 31 goes from 0 to - // 1). - int shift_bias = (30 - HeapNumber::kExponentShift) - 7 - 1; - __ lea(result_reg, MemOperand(scratch2, shift_bias)); - // Here result_reg (ecx) is the shift, scratch is the exponent word. Get the - // top bits of the mantissa. - __ and_(scratch, HeapNumber::kMantissaMask); - // Put back the implicit 1 of the mantissa - __ or_(scratch, 1 << HeapNumber::kExponentShift); - // Shift up to round - __ shl_cl(scratch); - // Use "banker's rounding" to spec: If fractional part of number is 0.5, then - // use the bit in the "ones" place and add it to the "halves" place, which has - // the effect of rounding to even. 
- __ mov(scratch2, scratch); - const uint32_t one_half_bit_shift = 30 - sizeof(uint8_t) * 8; - const uint32_t one_bit_shift = one_half_bit_shift + 1; - __ and_(scratch2, Immediate((1 << one_bit_shift) - 1)); - __ cmp(scratch2, Immediate(1 << one_half_bit_shift)); - Label no_round; - __ j(less, &no_round, Label::kNear); - Label round_up; - __ mov(scratch2, Immediate(1 << one_half_bit_shift)); - __ j(greater, &round_up, Label::kNear); - __ test(scratch3, scratch3); - __ j(not_zero, &round_up, Label::kNear); - __ mov(scratch2, scratch); - __ and_(scratch2, Immediate(1 << one_bit_shift)); - __ shr(scratch2, 1); - __ bind(&round_up); - __ add(scratch, scratch2); - __ j(overflow, &largest_value, Label::kNear); - __ bind(&no_round); - __ shr(scratch, 23); - __ mov(result_reg, scratch); - __ jmp(&done, Label::kNear); - - __ bind(&maybe_nan_or_infinity); - // Check for NaN/Infinity, all other values map to 255 - __ cmp(scratch2, Immediate(HeapNumber::kInfinityOrNanExponent + 1)); - __ j(not_equal, &largest_value, Label::kNear); - - // Check for NaN, which differs from Infinity in that at least one mantissa - // bit is set. - __ and_(scratch, HeapNumber::kMantissaMask); - __ or_(scratch, FieldOperand(input_reg, HeapNumber::kMantissaOffset)); - __ j(not_zero, &zero_result, Label::kNear); // M!=0 --> NaN - // Infinity -> Fall through to map to 255. - - __ bind(&largest_value); - __ mov(result_reg, Immediate(255)); - __ jmp(&done, Label::kNear); - - __ bind(&zero_result); - __ xor_(result_reg, result_reg); - __ jmp(&done, Label::kNear); - - // smi - __ bind(&is_smi); - if (!input_reg.is(result_reg)) { - __ mov(result_reg, input_reg); - } - __ SmiUntag(result_reg); - __ ClampUint8(result_reg); - __ bind(&done); -} - - void LCodeGen::DoDoubleBits(LDoubleBits* instr) { - CpuFeatureScope scope(masm(), SSE2); XMMRegister value_reg = ToDoubleRegister(instr->value()); Register result_reg = ToRegister(instr->result()); if (instr->hydrogen()->bits() == HDoubleBits::HIGH) { @@ -5819,7 +5080,6 @@ void LCodeGen::DoConstructDouble(LConstructDouble* instr) { Register hi_reg = ToRegister(instr->hi()); Register lo_reg = ToRegister(instr->lo()); XMMRegister result_reg = ToDoubleRegister(instr->result()); - CpuFeatureScope scope(masm(), SSE2); if (CpuFeatures::IsSupported(SSE4_1)) { CpuFeatureScope scope2(masm(), SSE4_1); @@ -5838,10 +5098,8 @@ void LCodeGen::DoConstructDouble(LConstructDouble* instr) { void LCodeGen::DoAllocate(LAllocate* instr) { class DeferredAllocate V8_FINAL : public LDeferredCode { public: - DeferredAllocate(LCodeGen* codegen, - LAllocate* instr, - const X87Stack& x87_stack) - : LDeferredCode(codegen, x87_stack), instr_(instr) { } + DeferredAllocate(LCodeGen* codegen, LAllocate* instr) + : LDeferredCode(codegen), instr_(instr) { } virtual void Generate() V8_OVERRIDE { codegen()->DoDeferredAllocate(instr_); } @@ -5850,8 +5108,7 @@ void LCodeGen::DoAllocate(LAllocate* instr) { LAllocate* instr_; }; - DeferredAllocate* deferred = - new(zone()) DeferredAllocate(this, instr, x87_stack_); + DeferredAllocate* deferred = new(zone()) DeferredAllocate(this, instr); Register result = ToRegister(instr->result()); Register temp = ToRegister(instr->temp()); @@ -5862,11 +5119,11 @@ void LCodeGen::DoAllocate(LAllocate* instr) { flags = static_cast<AllocationFlags>(flags | DOUBLE_ALIGNMENT); } if (instr->hydrogen()->IsOldPointerSpaceAllocation()) { - ASSERT(!instr->hydrogen()->IsOldDataSpaceAllocation()); - ASSERT(!instr->hydrogen()->IsNewSpaceAllocation()); + 
DCHECK(!instr->hydrogen()->IsOldDataSpaceAllocation()); + DCHECK(!instr->hydrogen()->IsNewSpaceAllocation()); flags = static_cast<AllocationFlags>(flags | PRETENURE_OLD_POINTER_SPACE); } else if (instr->hydrogen()->IsOldDataSpaceAllocation()) { - ASSERT(!instr->hydrogen()->IsNewSpaceAllocation()); + DCHECK(!instr->hydrogen()->IsNewSpaceAllocation()); flags = static_cast<AllocationFlags>(flags | PRETENURE_OLD_DATA_SPACE); } @@ -5914,7 +5171,7 @@ void LCodeGen::DoDeferredAllocate(LAllocate* instr) { PushSafepointRegistersScope scope(this); if (instr->size()->IsRegister()) { Register size = ToRegister(instr->size()); - ASSERT(!size.is(result)); + DCHECK(!size.is(result)); __ SmiTag(ToRegister(instr->size())); __ push(size); } else { @@ -5931,11 +5188,11 @@ void LCodeGen::DoDeferredAllocate(LAllocate* instr) { int flags = AllocateDoubleAlignFlag::encode( instr->hydrogen()->MustAllocateDoubleAligned()); if (instr->hydrogen()->IsOldPointerSpaceAllocation()) { - ASSERT(!instr->hydrogen()->IsOldDataSpaceAllocation()); - ASSERT(!instr->hydrogen()->IsNewSpaceAllocation()); + DCHECK(!instr->hydrogen()->IsOldDataSpaceAllocation()); + DCHECK(!instr->hydrogen()->IsNewSpaceAllocation()); flags = AllocateTargetSpace::update(flags, OLD_POINTER_SPACE); } else if (instr->hydrogen()->IsOldDataSpaceAllocation()) { - ASSERT(!instr->hydrogen()->IsNewSpaceAllocation()); + DCHECK(!instr->hydrogen()->IsNewSpaceAllocation()); flags = AllocateTargetSpace::update(flags, OLD_DATA_SPACE); } else { flags = AllocateTargetSpace::update(flags, NEW_SPACE); @@ -5943,20 +5200,20 @@ void LCodeGen::DoDeferredAllocate(LAllocate* instr) { __ push(Immediate(Smi::FromInt(flags))); CallRuntimeFromDeferred( - Runtime::kHiddenAllocateInTargetSpace, 2, instr, instr->context()); + Runtime::kAllocateInTargetSpace, 2, instr, instr->context()); __ StoreToSafepointRegisterSlot(result, eax); } void LCodeGen::DoToFastProperties(LToFastProperties* instr) { - ASSERT(ToRegister(instr->value()).is(eax)); + DCHECK(ToRegister(instr->value()).is(eax)); __ push(eax); CallRuntime(Runtime::kToFastProperties, 1, instr); } void LCodeGen::DoRegExpLiteral(LRegExpLiteral* instr) { - ASSERT(ToRegister(instr->context()).is(esi)); + DCHECK(ToRegister(instr->context()).is(esi)); Label materialized; // Registers will be used as follows: // ecx = literals array. @@ -5976,7 +5233,7 @@ void LCodeGen::DoRegExpLiteral(LRegExpLiteral* instr) { __ push(Immediate(Smi::FromInt(instr->hydrogen()->literal_index()))); __ push(Immediate(instr->hydrogen()->pattern())); __ push(Immediate(instr->hydrogen()->flags())); - CallRuntime(Runtime::kHiddenMaterializeRegExpLiteral, 4, instr); + CallRuntime(Runtime::kMaterializeRegExpLiteral, 4, instr); __ mov(ebx, eax); __ bind(&materialized); @@ -5988,7 +5245,7 @@ void LCodeGen::DoRegExpLiteral(LRegExpLiteral* instr) { __ bind(&runtime_allocate); __ push(ebx); __ push(Immediate(Smi::FromInt(size))); - CallRuntime(Runtime::kHiddenAllocateInNewSpace, 1, instr); + CallRuntime(Runtime::kAllocateInNewSpace, 1, instr); __ pop(ebx); __ bind(&allocated); @@ -6008,7 +5265,7 @@ void LCodeGen::DoRegExpLiteral(LRegExpLiteral* instr) { void LCodeGen::DoFunctionLiteral(LFunctionLiteral* instr) { - ASSERT(ToRegister(instr->context()).is(esi)); + DCHECK(ToRegister(instr->context()).is(esi)); // Use the fast case closure allocation code that allocates in new // space for nested functions that don't need literals cloning. 
bool pretenure = instr->hydrogen()->pretenure(); @@ -6023,13 +5280,13 @@ void LCodeGen::DoFunctionLiteral(LFunctionLiteral* instr) { __ push(Immediate(instr->hydrogen()->shared_info())); __ push(Immediate(pretenure ? factory()->true_value() : factory()->false_value())); - CallRuntime(Runtime::kHiddenNewClosure, 3, instr); + CallRuntime(Runtime::kNewClosure, 3, instr); } } void LCodeGen::DoTypeof(LTypeof* instr) { - ASSERT(ToRegister(instr->context()).is(esi)); + DCHECK(ToRegister(instr->context()).is(esi)); LOperand* input = instr->value(); EmitPushTaggedOperand(input); CallRuntime(Runtime::kTypeof, 1, instr); @@ -6083,11 +5340,6 @@ Condition LCodeGen::EmitTypeofIs(LTypeofIsAndBranch* instr, Register input) { __ cmp(input, factory()->false_value()); final_branch_condition = equal; - } else if (FLAG_harmony_typeof && - String::Equals(type_name, factory()->null_string())) { - __ cmp(input, factory()->null_value()); - final_branch_condition = equal; - } else if (String::Equals(type_name, factory()->undefined_string())) { __ cmp(input, factory()->undefined_value()); __ j(equal, true_label, true_distance); @@ -6108,10 +5360,8 @@ Condition LCodeGen::EmitTypeofIs(LTypeofIsAndBranch* instr, Register input) { } else if (String::Equals(type_name, factory()->object_string())) { __ JumpIfSmi(input, false_label, false_distance); - if (!FLAG_harmony_typeof) { - __ cmp(input, factory()->null_value()); - __ j(equal, true_label, true_distance); - } + __ cmp(input, factory()->null_value()); + __ j(equal, true_label, true_distance); __ CmpObjectType(input, FIRST_NONCALLABLE_SPEC_OBJECT_TYPE, input); __ j(below, false_label, false_distance); __ CmpInstanceType(input, LAST_NONCALLABLE_SPEC_OBJECT_TYPE); @@ -6170,7 +5420,7 @@ void LCodeGen::EnsureSpaceForLazyDeopt(int space_needed) { void LCodeGen::DoLazyBailout(LLazyBailout* instr) { last_lazy_deopt_pc_ = masm()->pc_offset(); - ASSERT(instr->HasEnvironment()); + DCHECK(instr->HasEnvironment()); LEnvironment* env = instr->environment(); RegisterEnvironmentForDeoptimization(env, Safepoint::kLazyDeopt); safepoints_.RecordLazyDeoptimizationIndex(env->deoptimization_index()); @@ -6204,10 +5454,10 @@ void LCodeGen::DoDummyUse(LDummyUse* instr) { void LCodeGen::DoDeferredStackCheck(LStackCheck* instr) { PushSafepointRegistersScope scope(this); __ mov(esi, Operand(ebp, StandardFrameConstants::kContextOffset)); - __ CallRuntimeSaveDoubles(Runtime::kHiddenStackGuard); + __ CallRuntimeSaveDoubles(Runtime::kStackGuard); RecordSafepointWithLazyDeopt( instr, RECORD_SAFEPOINT_WITH_REGISTERS_AND_NO_ARGUMENTS); - ASSERT(instr->HasEnvironment()); + DCHECK(instr->HasEnvironment()); LEnvironment* env = instr->environment(); safepoints_.RecordLazyDeoptimizationIndex(env->deoptimization_index()); } @@ -6216,10 +5466,8 @@ void LCodeGen::DoDeferredStackCheck(LStackCheck* instr) { void LCodeGen::DoStackCheck(LStackCheck* instr) { class DeferredStackCheck V8_FINAL : public LDeferredCode { public: - DeferredStackCheck(LCodeGen* codegen, - LStackCheck* instr, - const X87Stack& x87_stack) - : LDeferredCode(codegen, x87_stack), instr_(instr) { } + DeferredStackCheck(LCodeGen* codegen, LStackCheck* instr) + : LDeferredCode(codegen), instr_(instr) { } virtual void Generate() V8_OVERRIDE { codegen()->DoDeferredStackCheck(instr_); } @@ -6228,7 +5476,7 @@ void LCodeGen::DoStackCheck(LStackCheck* instr) { LStackCheck* instr_; }; - ASSERT(instr->HasEnvironment()); + DCHECK(instr->HasEnvironment()); LEnvironment* env = instr->environment(); // There is no LLazyBailout instruction for stack-checks. 
We have to // prepare for lazy deoptimization explicitly here. @@ -6240,17 +5488,17 @@ void LCodeGen::DoStackCheck(LStackCheck* instr) { __ cmp(esp, Operand::StaticVariable(stack_limit)); __ j(above_equal, &done, Label::kNear); - ASSERT(instr->context()->IsRegister()); - ASSERT(ToRegister(instr->context()).is(esi)); + DCHECK(instr->context()->IsRegister()); + DCHECK(ToRegister(instr->context()).is(esi)); CallCode(isolate()->builtins()->StackCheck(), RelocInfo::CODE_TARGET, instr); __ bind(&done); } else { - ASSERT(instr->hydrogen()->is_backwards_branch()); + DCHECK(instr->hydrogen()->is_backwards_branch()); // Perform stack overflow check if this goto needs it before jumping. DeferredStackCheck* deferred_stack_check = - new(zone()) DeferredStackCheck(this, instr, x87_stack_); + new(zone()) DeferredStackCheck(this, instr); ExternalReference stack_limit = ExternalReference::address_of_stack_limit(isolate()); __ cmp(esp, Operand::StaticVariable(stack_limit)); @@ -6274,7 +5522,7 @@ void LCodeGen::DoOsrEntry(LOsrEntry* instr) { // If the environment were already registered, we would have no way of // backpatching it with the spill slot operands. - ASSERT(!environment->HasBeenRegistered()); + DCHECK(!environment->HasBeenRegistered()); RegisterEnvironmentForDeoptimization(environment, Safepoint::kNoLazyDeopt); GenerateOsrPrologue(); @@ -6282,7 +5530,7 @@ void LCodeGen::DoOsrEntry(LOsrEntry* instr) { void LCodeGen::DoForInPrepareMap(LForInPrepareMap* instr) { - ASSERT(ToRegister(instr->context()).is(esi)); + DCHECK(ToRegister(instr->context()).is(esi)); __ cmp(eax, isolate()->factory()->undefined_value()); DeoptimizeIf(equal, instr->environment()); @@ -6364,9 +5612,8 @@ void LCodeGen::DoLoadFieldByIndex(LLoadFieldByIndex* instr) { DeferredLoadMutableDouble(LCodeGen* codegen, LLoadFieldByIndex* instr, Register object, - Register index, - const X87Stack& x87_stack) - : LDeferredCode(codegen, x87_stack), + Register index) + : LDeferredCode(codegen), instr_(instr), object_(object), index_(index) { @@ -6386,7 +5633,7 @@ void LCodeGen::DoLoadFieldByIndex(LLoadFieldByIndex* instr) { DeferredLoadMutableDouble* deferred; deferred = new(zone()) DeferredLoadMutableDouble( - this, instr, object, index, x87_stack_); + this, instr, object, index); Label out_of_object, done; __ test(index, Immediate(Smi::FromInt(1))); @@ -6415,6 +5662,21 @@ void LCodeGen::DoLoadFieldByIndex(LLoadFieldByIndex* instr) { } +void LCodeGen::DoStoreFrameContext(LStoreFrameContext* instr) { + Register context = ToRegister(instr->context()); + __ mov(Operand(ebp, StandardFrameConstants::kContextOffset), context); +} + + +void LCodeGen::DoAllocateBlockContext(LAllocateBlockContext* instr) { + Handle<ScopeInfo> scope_info = instr->scope_info(); + __ Push(scope_info); + __ push(ToRegister(instr->function())); + CallRuntime(Runtime::kPushBlockContext, 2, instr); + RecordSafepoint(Safepoint::kNoLazyDeopt); +} + + #undef __ } } // namespace v8::internal diff --git a/deps/v8/src/ia32/lithium-codegen-ia32.h b/deps/v8/src/ia32/lithium-codegen-ia32.h index f4542eecddb..d2f85f1279f 100644 --- a/deps/v8/src/ia32/lithium-codegen-ia32.h +++ b/deps/v8/src/ia32/lithium-codegen-ia32.h @@ -5,15 +5,15 @@ #ifndef V8_IA32_LITHIUM_CODEGEN_IA32_H_ #define V8_IA32_LITHIUM_CODEGEN_IA32_H_ -#include "ia32/lithium-ia32.h" +#include "src/ia32/lithium-ia32.h" -#include "checks.h" -#include "deoptimizer.h" -#include "ia32/lithium-gap-resolver-ia32.h" -#include "lithium-codegen.h" -#include "safepoint-table.h" -#include "scopes.h" -#include "utils.h" +#include 
"src/base/logging.h" +#include "src/deoptimizer.h" +#include "src/ia32/lithium-gap-resolver-ia32.h" +#include "src/lithium-codegen.h" +#include "src/safepoint-table.h" +#include "src/scopes.h" +#include "src/utils.h" namespace v8 { namespace internal { @@ -38,7 +38,6 @@ class LCodeGen: public LCodeGenBase { support_aligned_spilled_doubles_(false), osr_pc_offset_(-1), frame_is_built_(false), - x87_stack_(assembler), safepoints_(info->zone()), resolver_(this), expected_safepoint_kind_(Safepoint::kSimple) { @@ -67,7 +66,6 @@ class LCodeGen: public LCodeGenBase { Operand ToOperand(LOperand* op) const; Register ToRegister(LOperand* op) const; XMMRegister ToDoubleRegister(LOperand* op) const; - X87Register ToX87Register(LOperand* op) const; bool IsInteger32(LConstantOperand* op) const; bool IsSmi(LConstantOperand* op) const; @@ -76,36 +74,6 @@ class LCodeGen: public LCodeGenBase { } double ToDouble(LConstantOperand* op) const; - // Support for non-sse2 (x87) floating point stack handling. - // These functions maintain the mapping of physical stack registers to our - // virtual registers between instructions. - enum X87OperandType { kX87DoubleOperand, kX87FloatOperand, kX87IntOperand }; - - void X87Mov(X87Register reg, Operand src, - X87OperandType operand = kX87DoubleOperand); - void X87Mov(Operand src, X87Register reg, - X87OperandType operand = kX87DoubleOperand); - - void X87PrepareBinaryOp( - X87Register left, X87Register right, X87Register result); - - void X87LoadForUsage(X87Register reg); - void X87LoadForUsage(X87Register reg1, X87Register reg2); - void X87PrepareToWrite(X87Register reg) { x87_stack_.PrepareToWrite(reg); } - void X87CommitWrite(X87Register reg) { x87_stack_.CommitWrite(reg); } - - void X87Fxch(X87Register reg, int other_slot = 0) { - x87_stack_.Fxch(reg, other_slot); - } - void X87Free(X87Register reg) { - x87_stack_.Free(reg); - } - - - bool X87StackEmpty() { - return x87_stack_.depth() == 0; - } - Handle<Object> ToHandle(LConstantOperand* op) const; // The operand denoting the second word (the one with a higher address) of @@ -127,8 +95,7 @@ class LCodeGen: public LCodeGenBase { enum IntegerSignedness { SIGNED_INT32, UNSIGNED_INT32 }; void DoDeferredNumberTagIU(LInstruction* instr, LOperand* value, - LOperand* temp1, - LOperand* temp2, + LOperand* temp, IntegerSignedness signedness); void DoDeferredTaggedToI(LTaggedToI* instr, Label* done); @@ -265,7 +232,6 @@ class LCodeGen: public LCodeGenBase { Register ToRegister(int index) const; XMMRegister ToDoubleRegister(int index) const; - X87Register ToX87Register(int index) const; int32_t ToRepresentation(LConstantOperand* op, const Representation& r) const; int32_t ToInteger32(LConstantOperand* op) const; ExternalReference ToExternalReference(LConstantOperand* op) const; @@ -274,8 +240,7 @@ class LCodeGen: public LCodeGenBase { LOperand* key, Representation key_representation, ElementsKind elements_kind, - uint32_t offset, - uint32_t additional_index = 0); + uint32_t base_offset); Operand BuildSeqStringOperand(Register string, LOperand* index, @@ -313,15 +278,6 @@ class LCodeGen: public LCodeGenBase { LEnvironment* env, NumberUntagDMode mode = NUMBER_CANDIDATE_IS_ANY_TAGGED); - void EmitNumberUntagDNoSSE2( - Register input, - Register temp, - X87Register res_reg, - bool allow_undefined_as_nan, - bool deoptimize_on_minus_zero, - LEnvironment* env, - NumberUntagDMode mode = NUMBER_CANDIDATE_IS_ANY_TAGGED); - // Emits optimized code for typeof x == "y". Modifies input register. 
// Returns the condition on which a final split to // true and false label should be made, to optimize fallthrough. @@ -369,12 +325,6 @@ class LCodeGen: public LCodeGenBase { // register, or a stack slot operand. void EmitPushTaggedOperand(LOperand* operand); - void X87Fld(Operand src, X87OperandType opts); - - void EmitFlushX87ForDeopt(); - void FlushX87StackIfNecessary(LInstruction* instr) { - x87_stack_.FlushIfNecessary(instr, this); - } friend class LGapResolver; #ifdef _MSC_VER @@ -397,56 +347,6 @@ class LCodeGen: public LCodeGenBase { int osr_pc_offset_; bool frame_is_built_; - class X87Stack { - public: - explicit X87Stack(MacroAssembler* masm) - : stack_depth_(0), is_mutable_(true), masm_(masm) { } - explicit X87Stack(const X87Stack& other) - : stack_depth_(other.stack_depth_), is_mutable_(false), masm_(masm()) { - for (int i = 0; i < stack_depth_; i++) { - stack_[i] = other.stack_[i]; - } - } - bool operator==(const X87Stack& other) const { - if (stack_depth_ != other.stack_depth_) return false; - for (int i = 0; i < stack_depth_; i++) { - if (!stack_[i].is(other.stack_[i])) return false; - } - return true; - } - bool Contains(X87Register reg); - void Fxch(X87Register reg, int other_slot = 0); - void Free(X87Register reg); - void PrepareToWrite(X87Register reg); - void CommitWrite(X87Register reg); - void FlushIfNecessary(LInstruction* instr, LCodeGen* cgen); - void LeavingBlock(int current_block_id, LGoto* goto_instr); - int depth() const { return stack_depth_; } - void pop() { - ASSERT(is_mutable_); - stack_depth_--; - } - void push(X87Register reg) { - ASSERT(is_mutable_); - ASSERT(stack_depth_ < X87Register::kNumAllocatableRegisters); - stack_[stack_depth_] = reg; - stack_depth_++; - } - - MacroAssembler* masm() const { return masm_; } - Isolate* isolate() const { return masm_->isolate(); } - - private: - int ArrayIndex(X87Register reg); - int st2idx(int pos); - - X87Register stack_[X87Register::kNumAllocatableRegisters]; - int stack_depth_; - bool is_mutable_; - MacroAssembler* masm_; - }; - X87Stack x87_stack_; - // Builder that keeps track of safepoints in the code. The table // itself is emitted at the end of the generated code. SafepointTableBuilder safepoints_; @@ -460,14 +360,14 @@ class LCodeGen: public LCodeGenBase { public: explicit PushSafepointRegistersScope(LCodeGen* codegen) : codegen_(codegen) { - ASSERT(codegen_->expected_safepoint_kind_ == Safepoint::kSimple); + DCHECK(codegen_->expected_safepoint_kind_ == Safepoint::kSimple); codegen_->masm_->PushSafepointRegisters(); codegen_->expected_safepoint_kind_ = Safepoint::kWithRegisters; - ASSERT(codegen_->info()->is_calling()); + DCHECK(codegen_->info()->is_calling()); } ~PushSafepointRegistersScope() { - ASSERT(codegen_->expected_safepoint_kind_ == Safepoint::kWithRegisters); + DCHECK(codegen_->expected_safepoint_kind_ == Safepoint::kWithRegisters); codegen_->masm_->PopSafepointRegisters(); codegen_->expected_safepoint_kind_ = Safepoint::kSimple; } @@ -485,11 +385,10 @@ class LCodeGen: public LCodeGenBase { class LDeferredCode : public ZoneObject { public: - explicit LDeferredCode(LCodeGen* codegen, const LCodeGen::X87Stack& x87_stack) + explicit LDeferredCode(LCodeGen* codegen) : codegen_(codegen), external_exit_(NULL), - instruction_index_(codegen->current_instruction_), - x87_stack_(x87_stack) { + instruction_index_(codegen->current_instruction_) { codegen->AddDeferredCode(this); } @@ -502,7 +401,6 @@ class LDeferredCode : public ZoneObject { Label* exit() { return external_exit_ != NULL ? 
external_exit_ : &exit_; } Label* done() { return codegen_->NeedsDeferredFrame() ? &done_ : exit(); } int instruction_index() const { return instruction_index_; } - const LCodeGen::X87Stack& x87_stack() const { return x87_stack_; } protected: LCodeGen* codegen() const { return codegen_; } @@ -515,7 +413,6 @@ class LDeferredCode : public ZoneObject { Label* external_exit_; Label done_; int instruction_index_; - LCodeGen::X87Stack x87_stack_; }; } } // namespace v8::internal diff --git a/deps/v8/src/ia32/lithium-gap-resolver-ia32.cc b/deps/v8/src/ia32/lithium-gap-resolver-ia32.cc index 34b949085a1..1e590fd4ff4 100644 --- a/deps/v8/src/ia32/lithium-gap-resolver-ia32.cc +++ b/deps/v8/src/ia32/lithium-gap-resolver-ia32.cc @@ -2,12 +2,12 @@ // Use of this source code is governed by a BSD-style license that can be // found in the LICENSE file. -#include "v8.h" +#include "src/v8.h" #if V8_TARGET_ARCH_IA32 -#include "ia32/lithium-gap-resolver-ia32.h" -#include "ia32/lithium-codegen-ia32.h" +#include "src/ia32/lithium-codegen-ia32.h" +#include "src/ia32/lithium-gap-resolver-ia32.h" namespace v8 { namespace internal { @@ -21,7 +21,7 @@ LGapResolver::LGapResolver(LCodeGen* owner) void LGapResolver::Resolve(LParallelMove* parallel_move) { - ASSERT(HasBeenReset()); + DCHECK(HasBeenReset()); // Build up a worklist of moves. BuildInitialMoveList(parallel_move); @@ -38,13 +38,13 @@ void LGapResolver::Resolve(LParallelMove* parallel_move) { // Perform the moves with constant sources. for (int i = 0; i < moves_.length(); ++i) { if (!moves_[i].IsEliminated()) { - ASSERT(moves_[i].source()->IsConstantOperand()); + DCHECK(moves_[i].source()->IsConstantOperand()); EmitMove(i); } } Finish(); - ASSERT(HasBeenReset()); + DCHECK(HasBeenReset()); } @@ -70,12 +70,12 @@ void LGapResolver::PerformMove(int index) { // which means that a call to PerformMove could change any source operand // in the move graph. - ASSERT(!moves_[index].IsPending()); - ASSERT(!moves_[index].IsRedundant()); + DCHECK(!moves_[index].IsPending()); + DCHECK(!moves_[index].IsRedundant()); // Clear this move's destination to indicate a pending move. The actual // destination is saved on the side. - ASSERT(moves_[index].source() != NULL); // Or else it will look eliminated. + DCHECK(moves_[index].source() != NULL); // Or else it will look eliminated. LOperand* destination = moves_[index].destination(); moves_[index].set_destination(NULL); @@ -116,7 +116,7 @@ void LGapResolver::PerformMove(int index) { for (int i = 0; i < moves_.length(); ++i) { LMoveOperands other_move = moves_[i]; if (other_move.Blocks(destination)) { - ASSERT(other_move.IsPending()); + DCHECK(other_move.IsPending()); EmitSwap(index); return; } @@ -142,13 +142,13 @@ void LGapResolver::RemoveMove(int index) { LOperand* source = moves_[index].source(); if (source->IsRegister()) { --source_uses_[source->index()]; - ASSERT(source_uses_[source->index()] >= 0); + DCHECK(source_uses_[source->index()] >= 0); } LOperand* destination = moves_[index].destination(); if (destination->IsRegister()) { --destination_uses_[destination->index()]; - ASSERT(destination_uses_[destination->index()] >= 0); + DCHECK(destination_uses_[destination->index()] >= 0); } moves_[index].Eliminate(); @@ -190,12 +190,12 @@ bool LGapResolver::HasBeenReset() { void LGapResolver::Verify() { -#ifdef ENABLE_SLOW_ASSERTS +#ifdef ENABLE_SLOW_DCHECKS // No operand should be the destination for more than one move. 
for (int i = 0; i < moves_.length(); ++i) { LOperand* destination = moves_[i].destination(); for (int j = i + 1; j < moves_.length(); ++j) { - SLOW_ASSERT(!destination->Equals(moves_[j].destination())); + SLOW_DCHECK(!destination->Equals(moves_[j].destination())); } } #endif @@ -259,13 +259,13 @@ void LGapResolver::EmitMove(int index) { // Dispatch on the source and destination operand kinds. Not all // combinations are possible. if (source->IsRegister()) { - ASSERT(destination->IsRegister() || destination->IsStackSlot()); + DCHECK(destination->IsRegister() || destination->IsStackSlot()); Register src = cgen_->ToRegister(source); Operand dst = cgen_->ToOperand(destination); __ mov(dst, src); } else if (source->IsStackSlot()) { - ASSERT(destination->IsRegister() || destination->IsStackSlot()); + DCHECK(destination->IsRegister() || destination->IsStackSlot()); Operand src = cgen_->ToOperand(source); if (destination->IsRegister()) { Register dst = cgen_->ToRegister(destination); @@ -295,26 +295,17 @@ void LGapResolver::EmitMove(int index) { uint64_t int_val = BitCast<uint64_t, double>(v); int32_t lower = static_cast<int32_t>(int_val); int32_t upper = static_cast<int32_t>(int_val >> kBitsPerInt); - if (CpuFeatures::IsSupported(SSE2)) { - CpuFeatureScope scope(cgen_->masm(), SSE2); - XMMRegister dst = cgen_->ToDoubleRegister(destination); - if (int_val == 0) { - __ xorps(dst, dst); - } else { - __ push(Immediate(upper)); - __ push(Immediate(lower)); - __ movsd(dst, Operand(esp, 0)); - __ add(esp, Immediate(kDoubleSize)); - } + XMMRegister dst = cgen_->ToDoubleRegister(destination); + if (int_val == 0) { + __ xorps(dst, dst); } else { __ push(Immediate(upper)); __ push(Immediate(lower)); - X87Register dst = cgen_->ToX87Register(destination); - cgen_->X87Mov(dst, MemOperand(esp, 0)); + __ movsd(dst, Operand(esp, 0)); __ add(esp, Immediate(kDoubleSize)); } } else { - ASSERT(destination->IsStackSlot()); + DCHECK(destination->IsStackSlot()); Operand dst = cgen_->ToOperand(destination); Representation r = cgen_->IsSmi(constant_source) ? Representation::Smi() : Representation::Integer32(); @@ -328,59 +319,27 @@ void LGapResolver::EmitMove(int index) { } } else if (source->IsDoubleRegister()) { - if (CpuFeatures::IsSupported(SSE2)) { - CpuFeatureScope scope(cgen_->masm(), SSE2); - XMMRegister src = cgen_->ToDoubleRegister(source); - if (destination->IsDoubleRegister()) { - XMMRegister dst = cgen_->ToDoubleRegister(destination); - __ movaps(dst, src); - } else { - ASSERT(destination->IsDoubleStackSlot()); - Operand dst = cgen_->ToOperand(destination); - __ movsd(dst, src); - } + XMMRegister src = cgen_->ToDoubleRegister(source); + if (destination->IsDoubleRegister()) { + XMMRegister dst = cgen_->ToDoubleRegister(destination); + __ movaps(dst, src); } else { - // load from the register onto the stack, store in destination, which must - // be a double stack slot in the non-SSE2 case. 
- ASSERT(destination->IsDoubleStackSlot()); + DCHECK(destination->IsDoubleStackSlot()); Operand dst = cgen_->ToOperand(destination); - X87Register src = cgen_->ToX87Register(source); - cgen_->X87Mov(dst, src); + __ movsd(dst, src); } } else if (source->IsDoubleStackSlot()) { - if (CpuFeatures::IsSupported(SSE2)) { - CpuFeatureScope scope(cgen_->masm(), SSE2); - ASSERT(destination->IsDoubleRegister() || - destination->IsDoubleStackSlot()); - Operand src = cgen_->ToOperand(source); - if (destination->IsDoubleRegister()) { - XMMRegister dst = cgen_->ToDoubleRegister(destination); - __ movsd(dst, src); - } else { - // We rely on having xmm0 available as a fixed scratch register. - Operand dst = cgen_->ToOperand(destination); - __ movsd(xmm0, src); - __ movsd(dst, xmm0); - } + DCHECK(destination->IsDoubleRegister() || + destination->IsDoubleStackSlot()); + Operand src = cgen_->ToOperand(source); + if (destination->IsDoubleRegister()) { + XMMRegister dst = cgen_->ToDoubleRegister(destination); + __ movsd(dst, src); } else { - // load from the stack slot on top of the floating point stack, and then - // store in destination. If destination is a double register, then it - // represents the top of the stack and nothing needs to be done. - if (destination->IsDoubleStackSlot()) { - Register tmp = EnsureTempRegister(); - Operand src0 = cgen_->ToOperand(source); - Operand src1 = cgen_->HighOperand(source); - Operand dst0 = cgen_->ToOperand(destination); - Operand dst1 = cgen_->HighOperand(destination); - __ mov(tmp, src0); // Then use tmp to copy source to destination. - __ mov(dst0, tmp); - __ mov(tmp, src1); - __ mov(dst1, tmp); - } else { - Operand src = cgen_->ToOperand(source); - X87Register dst = cgen_->ToX87Register(destination); - cgen_->X87Mov(dst, src); - } + // We rely on having xmm0 available as a fixed scratch register. + Operand dst = cgen_->ToOperand(destination); + __ movsd(xmm0, src); + __ movsd(dst, xmm0); } } else { UNREACHABLE(); @@ -445,7 +404,6 @@ void LGapResolver::EmitSwap(int index) { __ mov(src, tmp0); } } else if (source->IsDoubleRegister() && destination->IsDoubleRegister()) { - CpuFeatureScope scope(cgen_->masm(), SSE2); // XMM register-register swap. We rely on having xmm0 // available as a fixed scratch register. XMMRegister src = cgen_->ToDoubleRegister(source); @@ -454,10 +412,9 @@ void LGapResolver::EmitSwap(int index) { __ movaps(src, dst); __ movaps(dst, xmm0); } else if (source->IsDoubleRegister() || destination->IsDoubleRegister()) { - CpuFeatureScope scope(cgen_->masm(), SSE2); // XMM register-memory swap. We rely on having xmm0 // available as a fixed scratch register. - ASSERT(source->IsDoubleStackSlot() || destination->IsDoubleStackSlot()); + DCHECK(source->IsDoubleStackSlot() || destination->IsDoubleStackSlot()); XMMRegister reg = cgen_->ToDoubleRegister(source->IsDoubleRegister() ? source : destination); @@ -467,7 +424,6 @@ void LGapResolver::EmitSwap(int index) { __ movsd(other, reg); __ movaps(reg, xmm0); } else if (source->IsDoubleStackSlot() && destination->IsDoubleStackSlot()) { - CpuFeatureScope scope(cgen_->masm(), SSE2); // Double-width memory-to-memory. Spill on demand to use a general // purpose temporary register and also rely on having xmm0 available as // a fixed scratch register. 
diff --git a/deps/v8/src/ia32/lithium-gap-resolver-ia32.h b/deps/v8/src/ia32/lithium-gap-resolver-ia32.h
index 0eca35b39ca..87549d00bbe 100644
--- a/deps/v8/src/ia32/lithium-gap-resolver-ia32.h
+++ b/deps/v8/src/ia32/lithium-gap-resolver-ia32.h
@@ -5,9 +5,9 @@
 #ifndef V8_IA32_LITHIUM_GAP_RESOLVER_IA32_H_
 #define V8_IA32_LITHIUM_GAP_RESOLVER_IA32_H_
 
-#include "v8.h"
+#include "src/v8.h"
 
-#include "lithium.h"
+#include "src/lithium.h"
 
 namespace v8 {
 namespace internal {
diff --git a/deps/v8/src/ia32/lithium-ia32.cc b/deps/v8/src/ia32/lithium-ia32.cc
index 8c2687ee9c6..e02b65e30f9 100644
--- a/deps/v8/src/ia32/lithium-ia32.cc
+++ b/deps/v8/src/ia32/lithium-ia32.cc
@@ -2,14 +2,13 @@
 // Use of this source code is governed by a BSD-style license that can be
 // found in the LICENSE file.
 
-#include "v8.h"
+#include "src/v8.h"
 
 #if V8_TARGET_ARCH_IA32
 
-#include "lithium-allocator-inl.h"
-#include "ia32/lithium-ia32.h"
-#include "ia32/lithium-codegen-ia32.h"
-#include "hydrogen-osr.h"
+#include "src/hydrogen-osr.h"
+#include "src/ia32/lithium-codegen-ia32.h"
+#include "src/lithium-inl.h"
 
 namespace v8 {
 namespace internal {
@@ -28,17 +27,17 @@ void LInstruction::VerifyCall() {
   // outputs because all registers are blocked by the calling convention.
   // Inputs operands must use a fixed register or use-at-start policy or
   // a non-register policy.
-  ASSERT(Output() == NULL ||
+  DCHECK(Output() == NULL ||
          LUnallocated::cast(Output())->HasFixedPolicy() ||
          !LUnallocated::cast(Output())->HasRegisterPolicy());
   for (UseIterator it(this); !it.Done(); it.Advance()) {
     LUnallocated* operand = LUnallocated::cast(it.Current());
-    ASSERT(operand->HasFixedPolicy() ||
+    DCHECK(operand->HasFixedPolicy() ||
            operand->IsUsedAtStart());
   }
   for (TempIterator it(this); !it.Done(); it.Advance()) {
     LUnallocated* operand = LUnallocated::cast(it.Current());
-    ASSERT(operand->HasFixedPolicy() ||!operand->HasRegisterPolicy());
+    DCHECK(operand->HasFixedPolicy() ||!operand->HasRegisterPolicy());
   }
 }
 #endif
@@ -60,17 +59,6 @@ bool LInstruction::HasDoubleRegisterInput() {
 }
 
 
-bool LInstruction::IsDoubleInput(X87Register reg, LCodeGen* cgen) {
-  for (int i = 0; i < InputCount(); i++) {
-    LOperand* op = InputAt(i);
-    if (op != NULL && op->IsDoubleRegister()) {
-      if (cgen->ToX87Register(op).is(reg)) return true;
-    }
-  }
-  return false;
-}
-
-
 void LInstruction::PrintTo(StringStream* stream) {
   stream->Add("%s ", this->Mnemonic());
 
@@ -369,7 +357,7 @@ LOperand* LPlatformChunk::GetNextSpillSlot(RegisterKind kind) {
   if (kind == DOUBLE_REGISTERS) {
     return LDoubleStackSlot::Create(index, zone());
   } else {
-    ASSERT(kind == GENERAL_REGISTERS);
+    DCHECK(kind == GENERAL_REGISTERS);
     return LStackSlot::Create(index, zone());
   }
 }
@@ -377,8 +365,9 @@ LOperand* LPlatformChunk::GetNextSpillSlot(RegisterKind kind) {
 
 void LStoreNamedField::PrintDataTo(StringStream* stream) {
   object()->PrintTo(stream);
-  hydrogen()->access().PrintTo(stream);
-  stream->Add(" <- ");
+  OStringStream os;
+  os << hydrogen()->access() << " <- ";
+  stream->Add(os.c_str());
   value()->PrintTo(stream);
 }
 
@@ -397,7 +386,7 @@ void LLoadKeyed::PrintDataTo(StringStream* stream) {
   stream->Add("[");
   key()->PrintTo(stream);
   if (hydrogen()->IsDehoisted()) {
-    stream->Add(" + %d]", additional_index());
+    stream->Add(" + %d]", base_offset());
   } else {
     stream->Add("]");
   }
@@ -409,13 +398,13 @@ void LStoreKeyed::PrintDataTo(StringStream* stream) {
   stream->Add("[");
   key()->PrintTo(stream);
   if (hydrogen()->IsDehoisted()) {
-    stream->Add(" + %d] <-", additional_index());
+    stream->Add(" + %d] <-", base_offset());
   } else {
     stream->Add("] <- ");
   }
 
   if (value() == NULL) {
-    ASSERT(hydrogen()->IsConstantHoleStore() &&
+    DCHECK(hydrogen()->IsConstantHoleStore() &&
            hydrogen()->value()->representation().IsDouble());
     stream->Add("<the hole(nan)>");
   } else {
@@ -440,7 +429,7 @@ void LTransitionElementsKind::PrintDataTo(StringStream* stream) {
 
 
 LPlatformChunk* LChunkBuilder::Build() {
-  ASSERT(is_unused());
+  DCHECK(is_unused());
   chunk_ = new(zone()) LPlatformChunk(info(), graph());
   LPhase phase("L_Building chunk", chunk_);
   status_ = BUILDING;
@@ -448,7 +437,7 @@ LPlatformChunk* LChunkBuilder::Build() {
   // Reserve the first spill slot for the state of dynamic alignment.
   if (info()->IsOptimizing()) {
     int alignment_state_index = chunk_->GetNextSpillIndex(GENERAL_REGISTERS);
-    ASSERT_EQ(alignment_state_index, 0);
+    DCHECK_EQ(alignment_state_index, 0);
     USE(alignment_state_index);
   }
 
@@ -674,7 +663,7 @@ LInstruction* LChunkBuilder::MarkAsCall(LInstruction* instr,
 
 
 LInstruction* LChunkBuilder::AssignPointerMap(LInstruction* instr) {
-  ASSERT(!instr->HasPointerMap());
+  DCHECK(!instr->HasPointerMap());
   instr->set_pointer_map(new(zone()) LPointerMap(zone()));
   return instr;
 }
@@ -695,14 +684,14 @@ LUnallocated* LChunkBuilder::TempRegister() {
 
 LOperand* LChunkBuilder::FixedTemp(Register reg) {
   LUnallocated* operand = ToUnallocated(reg);
-  ASSERT(operand->HasFixedPolicy());
+  DCHECK(operand->HasFixedPolicy());
   return operand;
 }
 
 
 LOperand* LChunkBuilder::FixedTemp(XMMRegister reg) {
   LUnallocated* operand = ToUnallocated(reg);
-  ASSERT(operand->HasFixedPolicy());
+  DCHECK(operand->HasFixedPolicy());
   return operand;
}
@@ -731,8 +720,8 @@ LInstruction* LChunkBuilder::DoDeoptimize(HDeoptimize* instr) {
 LInstruction* LChunkBuilder::DoShift(Token::Value op,
                                      HBitwiseBinaryOperation* instr) {
   if (instr->representation().IsSmiOrInteger32()) {
-    ASSERT(instr->left()->representation().Equals(instr->representation()));
-    ASSERT(instr->right()->representation().Equals(instr->representation()));
+    DCHECK(instr->left()->representation().Equals(instr->representation()));
+    DCHECK(instr->right()->representation().Equals(instr->representation()));
     LOperand* left = UseRegisterAtStart(instr->left());
 
     HValue* right_value = instr->right();
@@ -773,9 +762,9 @@ LInstruction* LChunkBuilder::DoShift(Token::Value op,
 
 LInstruction* LChunkBuilder::DoArithmeticD(Token::Value op,
                                            HArithmeticBinaryOperation* instr) {
-  ASSERT(instr->representation().IsDouble());
-  ASSERT(instr->left()->representation().IsDouble());
-  ASSERT(instr->right()->representation().IsDouble());
+  DCHECK(instr->representation().IsDouble());
+  DCHECK(instr->left()->representation().IsDouble());
+  DCHECK(instr->right()->representation().IsDouble());
   if (op == Token::MOD) {
     LOperand* left = UseRegisterAtStart(instr->BetterLeftOperand());
     LOperand* right = UseRegisterAtStart(instr->BetterRightOperand());
@@ -794,8 +783,8 @@ LInstruction* LChunkBuilder::DoArithmeticT(Token::Value op,
                                            HBinaryOperation* instr) {
   HValue* left = instr->left();
   HValue* right = instr->right();
-  ASSERT(left->representation().IsTagged());
-  ASSERT(right->representation().IsTagged());
+  DCHECK(left->representation().IsTagged());
+  DCHECK(right->representation().IsTagged());
   LOperand* context = UseFixed(instr->context(), esi);
   LOperand* left_operand = UseFixed(left, edx);
   LOperand* right_operand = UseFixed(right, eax);
@@ -806,7 +795,7 @@ LInstruction* LChunkBuilder::DoArithmeticT(Token::Value op,
 
 
 void LChunkBuilder::DoBasicBlock(HBasicBlock* block, HBasicBlock* next_block) {
-  ASSERT(is_building());
+  DCHECK(is_building());
   current_block_ = block;
   next_block_ = next_block;
   if (block->IsStartBlock()) {
@@ -815,13 +804,13 @@ void LChunkBuilder::DoBasicBlock(HBasicBlock* block, HBasicBlock* next_block) {
   } else if (block->predecessors()->length() == 1) {
     // We have a single predecessor => copy environment and outgoing
     // argument count from the predecessor.
-    ASSERT(block->phis()->length() == 0);
+    DCHECK(block->phis()->length() == 0);
     HBasicBlock* pred = block->predecessors()->at(0);
     HEnvironment* last_environment = pred->last_environment();
-    ASSERT(last_environment != NULL);
+    DCHECK(last_environment != NULL);
     // Only copy the environment, if it is later used again.
     if (pred->end()->SecondSuccessor() == NULL) {
-      ASSERT(pred->end()->FirstSuccessor() == block);
+      DCHECK(pred->end()->FirstSuccessor() == block);
     } else {
       if (pred->end()->FirstSuccessor()->block_id() > block->block_id() ||
           pred->end()->SecondSuccessor()->block_id() > block->block_id()) {
@@ -829,7 +818,7 @@ void LChunkBuilder::DoBasicBlock(HBasicBlock* block, HBasicBlock* next_block) {
       }
     }
     block->UpdateEnvironment(last_environment);
-    ASSERT(pred->argument_count() >= 0);
+    DCHECK(pred->argument_count() >= 0);
     argument_count_ = pred->argument_count();
   } else {
     // We are at a state join => process phis.
@@ -881,7 +870,7 @@ void LChunkBuilder::VisitInstruction(HInstruction* current) {
     if (current->OperandCount() == 0) {
       instr = DefineAsRegister(new(zone()) LDummy());
     } else {
-      ASSERT(!current->OperandAt(0)->IsControlInstruction());
+      DCHECK(!current->OperandAt(0)->IsControlInstruction());
       instr = DefineAsRegister(new(zone())
           LDummyUse(UseAny(current->OperandAt(0))));
     }
@@ -893,86 +882,90 @@ void LChunkBuilder::VisitInstruction(HInstruction* current) {
       chunk_->AddInstruction(dummy, current_block_);
     }
   } else {
-    instr = current->CompileToLithium(this);
+    HBasicBlock* successor;
+    if (current->IsControlInstruction() &&
+        HControlInstruction::cast(current)->KnownSuccessorBlock(&successor) &&
+        successor != NULL) {
+      instr = new(zone()) LGoto(successor);
+    } else {
+      instr = current->CompileToLithium(this);
+    }
   }
 
   argument_count_ += current->argument_delta();
-  ASSERT(argument_count_ >= 0);
+  DCHECK(argument_count_ >= 0);
 
   if (instr != NULL) {
-    // Associate the hydrogen instruction first, since we may need it for
-    // the ClobbersRegisters() or ClobbersDoubleRegisters() calls below.
-    instr->set_hydrogen_value(current);
+    AddInstruction(instr, current);
+  }
+
+  current_instruction_ = old_current;
+}
+
+
+void LChunkBuilder::AddInstruction(LInstruction* instr,
+                                   HInstruction* hydrogen_val) {
+  // Associate the hydrogen instruction first, since we may need it for
+  // the ClobbersRegisters() or ClobbersDoubleRegisters() calls below.
+  instr->set_hydrogen_value(hydrogen_val);
 
 #if DEBUG
-    // Make sure that the lithium instruction has either no fixed register
-    // constraints in temps or the result OR no uses that are only used at
-    // start. If this invariant doesn't hold, the register allocator can decide
-    // to insert a split of a range immediately before the instruction due to an
-    // already allocated register needing to be used for the instruction's fixed
-    // register constraint. In this case, The register allocator won't see an
-    // interference between the split child and the use-at-start (it would if
-    // the it was just a plain use), so it is free to move the split child into
-    // the same register that is used for the use-at-start.
- // See https://code.google.com/p/chromium/issues/detail?id=201590 - if (!(instr->ClobbersRegisters() && - instr->ClobbersDoubleRegisters(isolate()))) { - int fixed = 0; - int used_at_start = 0; - for (UseIterator it(instr); !it.Done(); it.Advance()) { - LUnallocated* operand = LUnallocated::cast(it.Current()); - if (operand->IsUsedAtStart()) ++used_at_start; - } - if (instr->Output() != NULL) { - if (LUnallocated::cast(instr->Output())->HasFixedPolicy()) ++fixed; - } - for (TempIterator it(instr); !it.Done(); it.Advance()) { - LUnallocated* operand = LUnallocated::cast(it.Current()); - if (operand->HasFixedPolicy()) ++fixed; - } - ASSERT(fixed == 0 || used_at_start == 0); + // Make sure that the lithium instruction has either no fixed register + // constraints in temps or the result OR no uses that are only used at + // start. If this invariant doesn't hold, the register allocator can decide + // to insert a split of a range immediately before the instruction due to an + // already allocated register needing to be used for the instruction's fixed + // register constraint. In this case, the register allocator won't see an + // interference between the split child and the use-at-start (it would if + // it was just a plain use), so it is free to move the split child into + // the same register that is used for the use-at-start. + // See https://code.google.com/p/chromium/issues/detail?id=201590 + if (!(instr->ClobbersRegisters() && + instr->ClobbersDoubleRegisters(isolate()))) { + int fixed = 0; + int used_at_start = 0; + for (UseIterator it(instr); !it.Done(); it.Advance()) { + LUnallocated* operand = LUnallocated::cast(it.Current()); + if (operand->IsUsedAtStart()) ++used_at_start; } -#endif - - if (FLAG_stress_pointer_maps && !instr->HasPointerMap()) { - instr = AssignPointerMap(instr); + if (instr->Output() != NULL) { + if (LUnallocated::cast(instr->Output())->HasFixedPolicy()) ++fixed; } - if (FLAG_stress_environments && !instr->HasEnvironment()) { - instr = AssignEnvironment(instr); + for (TempIterator it(instr); !it.Done(); it.Advance()) { + LUnallocated* operand = LUnallocated::cast(it.Current()); + if (operand->HasFixedPolicy()) ++fixed; } - if (!CpuFeatures::IsSafeForSnapshot(isolate(), SSE2) && instr->IsGoto() && - LGoto::cast(instr)->jumps_to_join()) { - // TODO(olivf) Since phis of spilled values are joined as registers - // (not in the stack slot), we need to allow the goto gaps to keep one - // x87 register alive. To ensure all other values are still spilled, we - // insert a fpu register barrier right before.
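Every ASSERT in these hunks becomes DCHECK; the rename is mechanical and the semantics stay the same, a check compiled in only for debug builds. A minimal sketch of that shape, assuming a simple abort-on-failure reporter (V8's real macros live in its logging headers and carry more plumbing):

    #include <cstdio>
    #include <cstdlib>

    #ifdef DEBUG
    #define DCHECK(condition)                                   \
      do {                                                      \
        if (!(condition)) {                                     \
          std::fprintf(stderr, "%s:%d: check failed: %s\n",     \
                       __FILE__, __LINE__, #condition);         \
          std::abort();                                         \
        }                                                       \
      } while (0)
    #else
    #define DCHECK(condition) ((void)0)
    #endif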
- LClobberDoubles* clobber = new(zone()) LClobberDoubles(isolate()); - clobber->set_hydrogen_value(current); - chunk_->AddInstruction(clobber, current_block_); + DCHECK(fixed == 0 || used_at_start == 0); + } +#endif + + if (FLAG_stress_pointer_maps && !instr->HasPointerMap()) { + instr = AssignPointerMap(instr); + } + if (FLAG_stress_environments && !instr->HasEnvironment()) { + instr = AssignEnvironment(instr); + } + chunk_->AddInstruction(instr, current_block_); + + if (instr->IsCall()) { + HValue* hydrogen_value_for_lazy_bailout = hydrogen_val; + LInstruction* instruction_needing_environment = NULL; + if (hydrogen_val->HasObservableSideEffects()) { + HSimulate* sim = HSimulate::cast(hydrogen_val->next()); + instruction_needing_environment = instr; + sim->ReplayEnvironment(current_block_->last_environment()); + hydrogen_value_for_lazy_bailout = sim; } - chunk_->AddInstruction(instr, current_block_); - - if (instr->IsCall()) { - HValue* hydrogen_value_for_lazy_bailout = current; - LInstruction* instruction_needing_environment = NULL; - if (current->HasObservableSideEffects()) { - HSimulate* sim = HSimulate::cast(current->next()); - instruction_needing_environment = instr; - sim->ReplayEnvironment(current_block_->last_environment()); - hydrogen_value_for_lazy_bailout = sim; - } - LInstruction* bailout = AssignEnvironment(new(zone()) LLazyBailout()); - bailout->set_hydrogen_value(hydrogen_value_for_lazy_bailout); - chunk_->AddInstruction(bailout, current_block_); - if (instruction_needing_environment != NULL) { - // Store the lazy deopt environment with the instruction if needed. - // Right now it is only used for LInstanceOfKnownGlobal. - instruction_needing_environment-> - SetDeferredLazyDeoptimizationEnvironment(bailout->environment()); - } + LInstruction* bailout = AssignEnvironment(new(zone()) LLazyBailout()); + bailout->set_hydrogen_value(hydrogen_value_for_lazy_bailout); + chunk_->AddInstruction(bailout, current_block_); + if (instruction_needing_environment != NULL) { + // Store the lazy deopt environment with the instruction if needed. + // Right now it is only used for LInstanceOfKnownGlobal. 
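The call-handling branch above pairs every clobbering call with an LLazyBailout whose environment comes from replaying the HSimulate that follows the call. A loose standalone analogy of the data being recorded (hypothetical names, not V8 API): each optimized call site carries the state needed to resume unoptimized code if a lazy deopt fires right after the call returns.

    #include <map>
    #include <string>

    // Stand-in for the environment an HSimulate describes.
    struct Environment { std::map<std::string, int> slots; };

    // What AssignEnvironment(LLazyBailout) conceptually attaches to a
    // call: an id plus the resumable state captured after the call.
    struct CallSite {
      int deopt_id;
      Environment lazy_env;
    };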
+ instruction_needing_environment-> + SetDeferredLazyDeoptimizationEnvironment(bailout->environment()); } } - current_instruction_ = old_current; } @@ -982,9 +975,6 @@ LInstruction* LChunkBuilder::DoGoto(HGoto* instr) { LInstruction* LChunkBuilder::DoBranch(HBranch* instr) { - LInstruction* goto_instr = CheckElideControlInstruction(instr); - if (goto_instr != NULL) return goto_instr; - HValue* value = instr->value(); Representation r = value->representation(); HType type = value->type(); @@ -1010,10 +1000,7 @@ LInstruction* LChunkBuilder::DoDebugBreak(HDebugBreak* instr) { LInstruction* LChunkBuilder::DoCompareMap(HCompareMap* instr) { - LInstruction* goto_instr = CheckElideControlInstruction(instr); - if (goto_instr != NULL) return goto_instr; - - ASSERT(instr->value()->representation().IsTagged()); + DCHECK(instr->value()->representation().IsTagged()); LOperand* value = UseRegisterAtStart(instr->value()); return new(zone()) LCmpMapAndBranch(value); } @@ -1074,9 +1061,13 @@ LInstruction* LChunkBuilder::DoApplyArguments(HApplyArguments* instr) { } -LInstruction* LChunkBuilder::DoPushArgument(HPushArgument* instr) { - LOperand* argument = UseAny(instr->argument()); - return new(zone()) LPushArgument(argument); +LInstruction* LChunkBuilder::DoPushArguments(HPushArguments* instr) { + int argc = instr->OperandCount(); + for (int i = 0; i < argc; ++i) { + LOperand* argument = UseAny(instr->argument(i)); + AddInstruction(new(zone()) LPushArgument(argument), instr); + } + return NULL; } @@ -1133,8 +1124,7 @@ LInstruction* LChunkBuilder::DoCallJSFunction( LInstruction* LChunkBuilder::DoCallWithDescriptor( HCallWithDescriptor* instr) { - const CallInterfaceDescriptor* descriptor = instr->descriptor(); - + const InterfaceDescriptor* descriptor = instr->descriptor(); LOperand* target = UseRegisterOrConstantAtStart(instr->target()); ZoneList<LOperand*> ops(instr->OperandCount(), zone()); ops.Add(target, zone()); @@ -1160,14 +1150,24 @@ LInstruction* LChunkBuilder::DoInvokeFunction(HInvokeFunction* instr) { LInstruction* LChunkBuilder::DoUnaryMathOperation(HUnaryMathOperation* instr) { switch (instr->op()) { - case kMathFloor: return DoMathFloor(instr); - case kMathRound: return DoMathRound(instr); - case kMathAbs: return DoMathAbs(instr); - case kMathLog: return DoMathLog(instr); - case kMathExp: return DoMathExp(instr); - case kMathSqrt: return DoMathSqrt(instr); - case kMathPowHalf: return DoMathPowHalf(instr); - case kMathClz32: return DoMathClz32(instr); + case kMathFloor: + return DoMathFloor(instr); + case kMathRound: + return DoMathRound(instr); + case kMathFround: + return DoMathFround(instr); + case kMathAbs: + return DoMathAbs(instr); + case kMathLog: + return DoMathLog(instr); + case kMathExp: + return DoMathExp(instr); + case kMathSqrt: + return DoMathSqrt(instr); + case kMathPowHalf: + return DoMathPowHalf(instr); + case kMathClz32: + return DoMathClz32(instr); default: UNREACHABLE(); return NULL; @@ -1190,6 +1190,13 @@ LInstruction* LChunkBuilder::DoMathRound(HUnaryMathOperation* instr) { } +LInstruction* LChunkBuilder::DoMathFround(HUnaryMathOperation* instr) { + LOperand* input = UseRegister(instr->value()); + LMathFround* result = new (zone()) LMathFround(input); + return DefineAsRegister(result); +} + + LInstruction* LChunkBuilder::DoMathAbs(HUnaryMathOperation* instr) { LOperand* context = UseAny(instr->context()); // Deferred use. 
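DoMathFround is new in this file (Math.fround lowering reached ia32 alongside the kMathFround case above). Independent of the lithium plumbing, its contract is a single round-trip through binary32; a portable C++ restatement:

    #include <cstdio>

    // ES6 Math.fround: round the double to the nearest float32 value,
    // then widen back. One narrowing cast performs the rounding step.
    double Fround(double x) { return static_cast<float>(x); }

    int main() {
      std::printf("%.17g\n", Fround(0.1));  // prints 0.10000000149011612
    }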
LOperand* input = UseRegisterAtStart(instr->value()); @@ -1203,8 +1210,8 @@ LInstruction* LChunkBuilder::DoMathAbs(HUnaryMathOperation* instr) { LInstruction* LChunkBuilder::DoMathLog(HUnaryMathOperation* instr) { - ASSERT(instr->representation().IsDouble()); - ASSERT(instr->value()->representation().IsDouble()); + DCHECK(instr->representation().IsDouble()); + DCHECK(instr->value()->representation().IsDouble()); LOperand* input = UseRegisterAtStart(instr->value()); return MarkAsCall(DefineSameAsFirst(new(zone()) LMathLog(input)), instr); } @@ -1218,8 +1225,8 @@ LInstruction* LChunkBuilder::DoMathClz32(HUnaryMathOperation* instr) { LInstruction* LChunkBuilder::DoMathExp(HUnaryMathOperation* instr) { - ASSERT(instr->representation().IsDouble()); - ASSERT(instr->value()->representation().IsDouble()); + DCHECK(instr->representation().IsDouble()); + DCHECK(instr->value()->representation().IsDouble()); LOperand* value = UseTempRegister(instr->value()); LOperand* temp1 = TempRegister(); LOperand* temp2 = TempRegister(); @@ -1229,9 +1236,8 @@ LInstruction* LChunkBuilder::DoMathExp(HUnaryMathOperation* instr) { LInstruction* LChunkBuilder::DoMathSqrt(HUnaryMathOperation* instr) { - LOperand* input = UseRegisterAtStart(instr->value()); - LMathSqrt* result = new(zone()) LMathSqrt(input); - return DefineSameAsFirst(result); + LOperand* input = UseAtStart(instr->value()); + return DefineAsRegister(new(zone()) LMathSqrt(input)); } @@ -1295,9 +1301,9 @@ LInstruction* LChunkBuilder::DoShl(HShl* instr) { LInstruction* LChunkBuilder::DoBitwise(HBitwise* instr) { if (instr->representation().IsSmiOrInteger32()) { - ASSERT(instr->left()->representation().Equals(instr->representation())); - ASSERT(instr->right()->representation().Equals(instr->representation())); - ASSERT(instr->CheckFlag(HValue::kTruncatingToInt32)); + DCHECK(instr->left()->representation().Equals(instr->representation())); + DCHECK(instr->right()->representation().Equals(instr->representation())); + DCHECK(instr->CheckFlag(HValue::kTruncatingToInt32)); LOperand* left = UseRegisterAtStart(instr->BetterLeftOperand()); LOperand* right = UseOrConstantAtStart(instr->BetterRightOperand()); @@ -1309,9 +1315,9 @@ LInstruction* LChunkBuilder::DoBitwise(HBitwise* instr) { LInstruction* LChunkBuilder::DoDivByPowerOf2I(HDiv* instr) { - ASSERT(instr->representation().IsSmiOrInteger32()); - ASSERT(instr->left()->representation().Equals(instr->representation())); - ASSERT(instr->right()->representation().Equals(instr->representation())); + DCHECK(instr->representation().IsSmiOrInteger32()); + DCHECK(instr->left()->representation().Equals(instr->representation())); + DCHECK(instr->right()->representation().Equals(instr->representation())); LOperand* dividend = UseRegister(instr->left()); int32_t divisor = instr->right()->GetInteger32Constant(); LInstruction* result = DefineAsRegister(new(zone()) LDivByPowerOf2I( @@ -1327,9 +1333,9 @@ LInstruction* LChunkBuilder::DoDivByPowerOf2I(HDiv* instr) { LInstruction* LChunkBuilder::DoDivByConstI(HDiv* instr) { - ASSERT(instr->representation().IsInteger32()); - ASSERT(instr->left()->representation().Equals(instr->representation())); - ASSERT(instr->right()->representation().Equals(instr->representation())); + DCHECK(instr->representation().IsInteger32()); + DCHECK(instr->left()->representation().Equals(instr->representation())); + DCHECK(instr->right()->representation().Equals(instr->representation())); LOperand* dividend = UseRegister(instr->left()); int32_t divisor = instr->right()->GetInteger32Constant(); LOperand* 
temp1 = FixedTemp(eax); @@ -1346,9 +1352,9 @@ LInstruction* LChunkBuilder::DoDivByConstI(HDiv* instr) { LInstruction* LChunkBuilder::DoDivI(HDiv* instr) { - ASSERT(instr->representation().IsSmiOrInteger32()); - ASSERT(instr->left()->representation().Equals(instr->representation())); - ASSERT(instr->right()->representation().Equals(instr->representation())); + DCHECK(instr->representation().IsSmiOrInteger32()); + DCHECK(instr->left()->representation().Equals(instr->representation())); + DCHECK(instr->right()->representation().Equals(instr->representation())); LOperand* dividend = UseFixed(instr->left(), eax); LOperand* divisor = UseRegister(instr->right()); LOperand* temp = FixedTemp(edx); @@ -1395,9 +1401,9 @@ LInstruction* LChunkBuilder::DoFlooringDivByPowerOf2I(HMathFloorOfDiv* instr) { LInstruction* LChunkBuilder::DoFlooringDivByConstI(HMathFloorOfDiv* instr) { - ASSERT(instr->representation().IsInteger32()); - ASSERT(instr->left()->representation().Equals(instr->representation())); - ASSERT(instr->right()->representation().Equals(instr->representation())); + DCHECK(instr->representation().IsInteger32()); + DCHECK(instr->left()->representation().Equals(instr->representation())); + DCHECK(instr->right()->representation().Equals(instr->representation())); LOperand* dividend = UseRegister(instr->left()); int32_t divisor = instr->right()->GetInteger32Constant(); LOperand* temp1 = FixedTemp(eax); @@ -1422,9 +1428,9 @@ LInstruction* LChunkBuilder::DoFlooringDivByConstI(HMathFloorOfDiv* instr) { LInstruction* LChunkBuilder::DoFlooringDivI(HMathFloorOfDiv* instr) { - ASSERT(instr->representation().IsSmiOrInteger32()); - ASSERT(instr->left()->representation().Equals(instr->representation())); - ASSERT(instr->right()->representation().Equals(instr->representation())); + DCHECK(instr->representation().IsSmiOrInteger32()); + DCHECK(instr->left()->representation().Equals(instr->representation())); + DCHECK(instr->right()->representation().Equals(instr->representation())); LOperand* dividend = UseFixed(instr->left(), eax); LOperand* divisor = UseRegister(instr->right()); LOperand* temp = FixedTemp(edx); @@ -1451,14 +1457,15 @@ LInstruction* LChunkBuilder::DoMathFloorOfDiv(HMathFloorOfDiv* instr) { LInstruction* LChunkBuilder::DoModByPowerOf2I(HMod* instr) { - ASSERT(instr->representation().IsSmiOrInteger32()); - ASSERT(instr->left()->representation().Equals(instr->representation())); - ASSERT(instr->right()->representation().Equals(instr->representation())); + DCHECK(instr->representation().IsSmiOrInteger32()); + DCHECK(instr->left()->representation().Equals(instr->representation())); + DCHECK(instr->right()->representation().Equals(instr->representation())); LOperand* dividend = UseRegisterAtStart(instr->left()); int32_t divisor = instr->right()->GetInteger32Constant(); LInstruction* result = DefineSameAsFirst(new(zone()) LModByPowerOf2I( dividend, divisor)); - if (instr->CheckFlag(HValue::kBailoutOnMinusZero)) { + if (instr->CheckFlag(HValue::kLeftCanBeNegative) && + instr->CheckFlag(HValue::kBailoutOnMinusZero)) { result = AssignEnvironment(result); } return result; @@ -1466,9 +1473,9 @@ LInstruction* LChunkBuilder::DoModByPowerOf2I(HMod* instr) { LInstruction* LChunkBuilder::DoModByConstI(HMod* instr) { - ASSERT(instr->representation().IsSmiOrInteger32()); - ASSERT(instr->left()->representation().Equals(instr->representation())); - ASSERT(instr->right()->representation().Equals(instr->representation())); + DCHECK(instr->representation().IsSmiOrInteger32()); + 
DCHECK(instr->left()->representation().Equals(instr->representation())); + DCHECK(instr->right()->representation().Equals(instr->representation())); LOperand* dividend = UseRegister(instr->left()); int32_t divisor = instr->right()->GetInteger32Constant(); LOperand* temp1 = FixedTemp(eax); @@ -1483,9 +1490,9 @@ LInstruction* LChunkBuilder::DoModByConstI(HMod* instr) { LInstruction* LChunkBuilder::DoModI(HMod* instr) { - ASSERT(instr->representation().IsSmiOrInteger32()); - ASSERT(instr->left()->representation().Equals(instr->representation())); - ASSERT(instr->right()->representation().Equals(instr->representation())); + DCHECK(instr->representation().IsSmiOrInteger32()); + DCHECK(instr->left()->representation().Equals(instr->representation())); + DCHECK(instr->right()->representation().Equals(instr->representation())); LOperand* dividend = UseFixed(instr->left(), eax); LOperand* divisor = UseRegister(instr->right()); LOperand* temp = FixedTemp(edx); @@ -1518,8 +1525,8 @@ LInstruction* LChunkBuilder::DoMod(HMod* instr) { LInstruction* LChunkBuilder::DoMul(HMul* instr) { if (instr->representation().IsSmiOrInteger32()) { - ASSERT(instr->left()->representation().Equals(instr->representation())); - ASSERT(instr->right()->representation().Equals(instr->representation())); + DCHECK(instr->left()->representation().Equals(instr->representation())); + DCHECK(instr->right()->representation().Equals(instr->representation())); LOperand* left = UseRegisterAtStart(instr->BetterLeftOperand()); LOperand* right = UseOrConstant(instr->BetterRightOperand()); LOperand* temp = NULL; @@ -1542,8 +1549,8 @@ LInstruction* LChunkBuilder::DoMul(HMul* instr) { LInstruction* LChunkBuilder::DoSub(HSub* instr) { if (instr->representation().IsSmiOrInteger32()) { - ASSERT(instr->left()->representation().Equals(instr->representation())); - ASSERT(instr->right()->representation().Equals(instr->representation())); + DCHECK(instr->left()->representation().Equals(instr->representation())); + DCHECK(instr->right()->representation().Equals(instr->representation())); LOperand* left = UseRegisterAtStart(instr->left()); LOperand* right = UseOrConstantAtStart(instr->right()); LSubI* sub = new(zone()) LSubI(left, right); @@ -1562,8 +1569,8 @@ LInstruction* LChunkBuilder::DoSub(HSub* instr) { LInstruction* LChunkBuilder::DoAdd(HAdd* instr) { if (instr->representation().IsSmiOrInteger32()) { - ASSERT(instr->left()->representation().Equals(instr->representation())); - ASSERT(instr->right()->representation().Equals(instr->representation())); + DCHECK(instr->left()->representation().Equals(instr->representation())); + DCHECK(instr->right()->representation().Equals(instr->representation())); // Check to see if it would be advantageous to use an lea instruction rather // than an add. 
This is the case when no overflow check is needed and there // are multiple uses of the add's inputs, so using a 3-register add will @@ -1586,9 +1593,9 @@ LInstruction* LChunkBuilder::DoAdd(HAdd* instr) { } else if (instr->representation().IsDouble()) { return DoArithmeticD(Token::ADD, instr); } else if (instr->representation().IsExternal()) { - ASSERT(instr->left()->representation().IsExternal()); - ASSERT(instr->right()->representation().IsInteger32()); - ASSERT(!instr->CheckFlag(HValue::kCanOverflow)); + DCHECK(instr->left()->representation().IsExternal()); + DCHECK(instr->right()->representation().IsInteger32()); + DCHECK(!instr->CheckFlag(HValue::kCanOverflow)); bool use_lea = LAddI::UseLea(instr); LOperand* left = UseRegisterAtStart(instr->left()); HValue* right_candidate = instr->right(); @@ -1610,14 +1617,14 @@ LInstruction* LChunkBuilder::DoMathMinMax(HMathMinMax* instr) { LOperand* left = NULL; LOperand* right = NULL; if (instr->representation().IsSmiOrInteger32()) { - ASSERT(instr->left()->representation().Equals(instr->representation())); - ASSERT(instr->right()->representation().Equals(instr->representation())); + DCHECK(instr->left()->representation().Equals(instr->representation())); + DCHECK(instr->right()->representation().Equals(instr->representation())); left = UseRegisterAtStart(instr->BetterLeftOperand()); right = UseOrConstantAtStart(instr->BetterRightOperand()); } else { - ASSERT(instr->representation().IsDouble()); - ASSERT(instr->left()->representation().IsDouble()); - ASSERT(instr->right()->representation().IsDouble()); + DCHECK(instr->representation().IsDouble()); + DCHECK(instr->left()->representation().IsDouble()); + DCHECK(instr->right()->representation().IsDouble()); left = UseRegisterAtStart(instr->left()); right = UseRegisterAtStart(instr->right()); } @@ -1627,11 +1634,11 @@ LInstruction* LChunkBuilder::DoMathMinMax(HMathMinMax* instr) { LInstruction* LChunkBuilder::DoPower(HPower* instr) { - ASSERT(instr->representation().IsDouble()); + DCHECK(instr->representation().IsDouble()); // We call a C function for double power. It can't trigger a GC. // We need to use fixed result register for the call. Representation exponent_type = instr->right()->representation(); - ASSERT(instr->left()->representation().IsDouble()); + DCHECK(instr->left()->representation().IsDouble()); LOperand* left = UseFixedDouble(instr->left(), xmm2); LOperand* right = exponent_type.IsDouble() ? 
UseFixedDouble(instr->right(), xmm1) : @@ -1643,8 +1650,8 @@ LInstruction* LChunkBuilder::DoPower(HPower* instr) { LInstruction* LChunkBuilder::DoCompareGeneric(HCompareGeneric* instr) { - ASSERT(instr->left()->representation().IsSmiOrTagged()); - ASSERT(instr->right()->representation().IsSmiOrTagged()); + DCHECK(instr->left()->representation().IsSmiOrTagged()); + DCHECK(instr->right()->representation().IsSmiOrTagged()); LOperand* context = UseFixed(instr->context(), esi); LOperand* left = UseFixed(instr->left(), edx); LOperand* right = UseFixed(instr->right(), eax); @@ -1655,19 +1662,17 @@ LInstruction* LChunkBuilder::DoCompareGeneric(HCompareGeneric* instr) { LInstruction* LChunkBuilder::DoCompareNumericAndBranch( HCompareNumericAndBranch* instr) { - LInstruction* goto_instr = CheckElideControlInstruction(instr); - if (goto_instr != NULL) return goto_instr; Representation r = instr->representation(); if (r.IsSmiOrInteger32()) { - ASSERT(instr->left()->representation().Equals(r)); - ASSERT(instr->right()->representation().Equals(r)); + DCHECK(instr->left()->representation().Equals(r)); + DCHECK(instr->right()->representation().Equals(r)); LOperand* left = UseRegisterOrConstantAtStart(instr->left()); LOperand* right = UseOrConstantAtStart(instr->right()); return new(zone()) LCompareNumericAndBranch(left, right); } else { - ASSERT(r.IsDouble()); - ASSERT(instr->left()->representation().IsDouble()); - ASSERT(instr->right()->representation().IsDouble()); + DCHECK(r.IsDouble()); + DCHECK(instr->left()->representation().IsDouble()); + DCHECK(instr->right()->representation().IsDouble()); LOperand* left; LOperand* right; if (CanBeImmediateConstant(instr->left()) && @@ -1687,8 +1692,6 @@ LInstruction* LChunkBuilder::DoCompareNumericAndBranch( LInstruction* LChunkBuilder::DoCompareObjectEqAndBranch( HCompareObjectEqAndBranch* instr) { - LInstruction* goto_instr = CheckElideControlInstruction(instr); - if (goto_instr != NULL) return goto_instr; LOperand* left = UseRegisterAtStart(instr->left()); LOperand* right = UseOrConstantAtStart(instr->right()); return new(zone()) LCmpObjectEqAndBranch(left, right); @@ -1704,8 +1707,6 @@ LInstruction* LChunkBuilder::DoCompareHoleAndBranch( LInstruction* LChunkBuilder::DoCompareMinusZeroAndBranch( HCompareMinusZeroAndBranch* instr) { - LInstruction* goto_instr = CheckElideControlInstruction(instr); - if (goto_instr != NULL) return goto_instr; LOperand* value = UseRegister(instr->value()); LOperand* scratch = TempRegister(); return new(zone()) LCompareMinusZeroAndBranch(value, scratch); @@ -1713,28 +1714,28 @@ LInstruction* LChunkBuilder::DoCompareMinusZeroAndBranch( LInstruction* LChunkBuilder::DoIsObjectAndBranch(HIsObjectAndBranch* instr) { - ASSERT(instr->value()->representation().IsSmiOrTagged()); + DCHECK(instr->value()->representation().IsSmiOrTagged()); LOperand* temp = TempRegister(); return new(zone()) LIsObjectAndBranch(UseRegister(instr->value()), temp); } LInstruction* LChunkBuilder::DoIsStringAndBranch(HIsStringAndBranch* instr) { - ASSERT(instr->value()->representation().IsTagged()); + DCHECK(instr->value()->representation().IsTagged()); LOperand* temp = TempRegister(); return new(zone()) LIsStringAndBranch(UseRegister(instr->value()), temp); } LInstruction* LChunkBuilder::DoIsSmiAndBranch(HIsSmiAndBranch* instr) { - ASSERT(instr->value()->representation().IsTagged()); + DCHECK(instr->value()->representation().IsTagged()); return new(zone()) LIsSmiAndBranch(Use(instr->value())); } LInstruction* LChunkBuilder::DoIsUndetectableAndBranch( 
HIsUndetectableAndBranch* instr) { - ASSERT(instr->value()->representation().IsTagged()); + DCHECK(instr->value()->representation().IsTagged()); return new(zone()) LIsUndetectableAndBranch( UseRegisterAtStart(instr->value()), TempRegister()); } @@ -1742,8 +1743,8 @@ LInstruction* LChunkBuilder::DoIsUndetectableAndBranch( LInstruction* LChunkBuilder::DoStringCompareAndBranch( HStringCompareAndBranch* instr) { - ASSERT(instr->left()->representation().IsTagged()); - ASSERT(instr->right()->representation().IsTagged()); + DCHECK(instr->left()->representation().IsTagged()); + DCHECK(instr->right()->representation().IsTagged()); LOperand* context = UseFixed(instr->context(), esi); LOperand* left = UseFixed(instr->left(), edx); LOperand* right = UseFixed(instr->right(), eax); @@ -1757,7 +1758,7 @@ LInstruction* LChunkBuilder::DoStringCompareAndBranch( LInstruction* LChunkBuilder::DoHasInstanceTypeAndBranch( HHasInstanceTypeAndBranch* instr) { - ASSERT(instr->value()->representation().IsTagged()); + DCHECK(instr->value()->representation().IsTagged()); return new(zone()) LHasInstanceTypeAndBranch( UseRegisterAtStart(instr->value()), TempRegister()); @@ -1766,7 +1767,7 @@ LInstruction* LChunkBuilder::DoHasInstanceTypeAndBranch( LInstruction* LChunkBuilder::DoGetCachedArrayIndex( HGetCachedArrayIndex* instr) { - ASSERT(instr->value()->representation().IsTagged()); + DCHECK(instr->value()->representation().IsTagged()); LOperand* value = UseRegisterAtStart(instr->value()); return DefineAsRegister(new(zone()) LGetCachedArrayIndex(value)); @@ -1775,7 +1776,7 @@ LInstruction* LChunkBuilder::DoGetCachedArrayIndex( LInstruction* LChunkBuilder::DoHasCachedArrayIndexAndBranch( HHasCachedArrayIndexAndBranch* instr) { - ASSERT(instr->value()->representation().IsTagged()); + DCHECK(instr->value()->representation().IsTagged()); return new(zone()) LHasCachedArrayIndexAndBranch( UseRegisterAtStart(instr->value())); } @@ -1783,7 +1784,7 @@ LInstruction* LChunkBuilder::DoHasCachedArrayIndexAndBranch( LInstruction* LChunkBuilder::DoClassOfTestAndBranch( HClassOfTestAndBranch* instr) { - ASSERT(instr->value()->representation().IsTagged()); + DCHECK(instr->value()->representation().IsTagged()); return new(zone()) LClassOfTestAndBranch(UseRegister(instr->value()), TempRegister(), TempRegister()); @@ -1911,16 +1912,14 @@ LInstruction* LChunkBuilder::DoChange(HChange* instr) { } return AssignEnvironment(DefineSameAsFirst(new(zone()) LCheckSmi(value))); } else { - ASSERT(to.IsInteger32()); + DCHECK(to.IsInteger32()); if (val->type().IsSmi() || val->representation().IsSmi()) { LOperand* value = UseRegister(val); return DefineSameAsFirst(new(zone()) LSmiUntag(value, false)); } else { LOperand* value = UseRegister(val); bool truncating = instr->CanTruncateToInt32(); - LOperand* xmm_temp = - (CpuFeatures::IsSafeForSnapshot(isolate(), SSE2) && !truncating) - ? FixedTemp(xmm1) : NULL; + LOperand* xmm_temp = !truncating ? FixedTemp(xmm1) : NULL; LInstruction* result = DefineSameAsFirst(new(zone()) LTaggedToI(value, xmm_temp)); if (!val->representation().IsSmi()) result = AssignEnvironment(result); @@ -1940,10 +1939,9 @@ LInstruction* LChunkBuilder::DoChange(HChange* instr) { return AssignEnvironment( DefineAsRegister(new(zone()) LDoubleToSmi(value))); } else { - ASSERT(to.IsInteger32()); + DCHECK(to.IsInteger32()); bool truncating = instr->CanTruncateToInt32(); - bool needs_temp = - CpuFeatures::IsSafeForSnapshot(isolate(), SSE2) && !truncating; + bool needs_temp = !truncating; LOperand* value = needs_temp ? 
UseTempRegister(val) : UseRegister(val); LOperand* temp = needs_temp ? TempRegister() : NULL; LInstruction* result = @@ -1954,18 +1952,14 @@ LInstruction* LChunkBuilder::DoChange(HChange* instr) { } else if (from.IsInteger32()) { info()->MarkAsDeferredCalling(); if (to.IsTagged()) { + LOperand* value = UseRegister(val); if (!instr->CheckFlag(HValue::kCanOverflow)) { - LOperand* value = UseRegister(val); return DefineSameAsFirst(new(zone()) LSmiTag(value)); } else if (val->CheckFlag(HInstruction::kUint32)) { - LOperand* value = UseRegister(val); - LOperand* temp1 = TempRegister(); - LOperand* temp2 = - CpuFeatures::IsSupported(SSE2) ? FixedTemp(xmm1) : NULL; - LNumberTagU* result = new(zone()) LNumberTagU(value, temp1, temp2); + LOperand* temp = TempRegister(); + LNumberTagU* result = new(zone()) LNumberTagU(value, temp); return AssignPointerMap(DefineSameAsFirst(result)); } else { - LOperand* value = UseRegister(val); LOperand* temp = TempRegister(); LNumberTagI* result = new(zone()) LNumberTagI(value, temp); return AssignPointerMap(DefineSameAsFirst(result)); @@ -1978,11 +1972,9 @@ LInstruction* LChunkBuilder::DoChange(HChange* instr) { } return result; } else { - ASSERT(to.IsDouble()); + DCHECK(to.IsDouble()); if (val->CheckFlag(HInstruction::kUint32)) { - LOperand* temp = FixedTemp(xmm1); - return DefineAsRegister( - new(zone()) LUint32ToDouble(UseRegister(val), temp)); + return DefineAsRegister(new(zone()) LUint32ToDouble(UseRegister(val))); } else { return DefineAsRegister(new(zone()) LInteger32ToDouble(Use(val))); } @@ -1996,7 +1988,9 @@ LInstruction* LChunkBuilder::DoChange(HChange* instr) { LInstruction* LChunkBuilder::DoCheckHeapObject(HCheckHeapObject* instr) { LOperand* value = UseAtStart(instr->value()); LInstruction* result = new(zone()) LCheckNonSmi(value); - if (!instr->value()->IsHeapObject()) result = AssignEnvironment(result); + if (!instr->value()->type().IsHeapObject()) { + result = AssignEnvironment(result); + } return result; } @@ -2048,28 +2042,20 @@ LInstruction* LChunkBuilder::DoClampToUint8(HClampToUint8* instr) { LOperand* reg = UseFixed(value, eax); return DefineFixed(new(zone()) LClampIToUint8(reg), eax); } else { - ASSERT(input_rep.IsSmiOrTagged()); - if (CpuFeatures::IsSupported(SSE2)) { - LOperand* reg = UseFixed(value, eax); - // Register allocator doesn't (yet) support allocation of double - // temps. Reserve xmm1 explicitly. - LOperand* temp = FixedTemp(xmm1); - LClampTToUint8* result = new(zone()) LClampTToUint8(reg, temp); - return AssignEnvironment(DefineFixed(result, eax)); - } else { - LOperand* value = UseRegister(instr->value()); - LClampTToUint8NoSSE2* res = - new(zone()) LClampTToUint8NoSSE2(value, TempRegister(), - TempRegister(), TempRegister()); - return AssignEnvironment(DefineFixed(res, ecx)); - } + DCHECK(input_rep.IsSmiOrTagged()); + LOperand* reg = UseFixed(value, eax); + // Register allocator doesn't (yet) support allocation of double + // temps. Reserve xmm1 explicitly. 
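The DoClampToUint8 hunk above drops the NoSSE2 variant, but the operation's contract (used for Uint8ClampedArray stores) is unchanged: NaN and negative inputs map to 0, values at or above 255 saturate, and ties round to even. A portable restatement of that contract (my summary, not V8's helper):

    #include <cmath>
    #include <cstdint>

    uint8_t ClampDoubleToUint8(double d) {
      if (!(d > 0.0)) return 0;    // NaN and non-positives clamp to 0
      if (d >= 255.0) return 255;  // saturate high
      // nearbyint() rounds ties to even under the default rounding
      // mode, matching the spec for clamped typed-array conversion.
      return static_cast<uint8_t>(std::nearbyint(d));
    }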
+ LOperand* temp = FixedTemp(xmm1); + LClampTToUint8* result = new(zone()) LClampTToUint8(reg, temp); + return AssignEnvironment(DefineFixed(result, eax)); } } LInstruction* LChunkBuilder::DoDoubleBits(HDoubleBits* instr) { HValue* value = instr->value(); - ASSERT(value->representation().IsDouble()); + DCHECK(value->representation().IsDouble()); return DefineAsRegister(new(zone()) LDoubleBits(UseRegister(value))); } @@ -2121,9 +2107,15 @@ LInstruction* LChunkBuilder::DoLoadGlobalCell(HLoadGlobalCell* instr) { LInstruction* LChunkBuilder::DoLoadGlobalGeneric(HLoadGlobalGeneric* instr) { LOperand* context = UseFixed(instr->context(), esi); - LOperand* global_object = UseFixed(instr->global_object(), edx); + LOperand* global_object = UseFixed(instr->global_object(), + LoadIC::ReceiverRegister()); + LOperand* vector = NULL; + if (FLAG_vector_ics) { + vector = FixedTemp(LoadIC::VectorRegister()); + } + LLoadGlobalGeneric* result = - new(zone()) LLoadGlobalGeneric(context, global_object); + new(zone()) LLoadGlobalGeneric(context, global_object, vector); return MarkAsCall(DefineFixed(result, eax), instr); } @@ -2176,8 +2168,13 @@ LInstruction* LChunkBuilder::DoLoadNamedField(HLoadNamedField* instr) { LInstruction* LChunkBuilder::DoLoadNamedGeneric(HLoadNamedGeneric* instr) { LOperand* context = UseFixed(instr->context(), esi); - LOperand* object = UseFixed(instr->object(), edx); - LLoadNamedGeneric* result = new(zone()) LLoadNamedGeneric(context, object); + LOperand* object = UseFixed(instr->object(), LoadIC::ReceiverRegister()); + LOperand* vector = NULL; + if (FLAG_vector_ics) { + vector = FixedTemp(LoadIC::VectorRegister()); + } + LLoadNamedGeneric* result = new(zone()) LLoadNamedGeneric( + context, object, vector); return MarkAsCall(DefineFixed(result, eax), instr); } @@ -2196,7 +2193,7 @@ LInstruction* LChunkBuilder::DoLoadRoot(HLoadRoot* instr) { LInstruction* LChunkBuilder::DoLoadKeyed(HLoadKeyed* instr) { - ASSERT(instr->key()->representation().IsSmiOrInteger32()); + DCHECK(instr->key()->representation().IsSmiOrInteger32()); ElementsKind elements_kind = instr->elements_kind(); bool clobbers_key = ExternalArrayOpRequiresTemp( instr->key()->representation(), elements_kind); @@ -2209,7 +2206,7 @@ LInstruction* LChunkBuilder::DoLoadKeyed(HLoadKeyed* instr) { LOperand* obj = UseRegisterAtStart(instr->elements()); result = DefineAsRegister(new(zone()) LLoadKeyed(obj, key)); } else { - ASSERT( + DCHECK( (instr->representation().IsInteger32() && !(IsDoubleOrFloatElementsKind(instr->elements_kind()))) || (instr->representation().IsDouble() && @@ -2234,11 +2231,14 @@ LInstruction* LChunkBuilder::DoLoadKeyed(HLoadKeyed* instr) { LInstruction* LChunkBuilder::DoLoadKeyedGeneric(HLoadKeyedGeneric* instr) { LOperand* context = UseFixed(instr->context(), esi); - LOperand* object = UseFixed(instr->object(), edx); - LOperand* key = UseFixed(instr->key(), ecx); - + LOperand* object = UseFixed(instr->object(), LoadIC::ReceiverRegister()); + LOperand* key = UseFixed(instr->key(), LoadIC::NameRegister()); + LOperand* vector = NULL; + if (FLAG_vector_ics) { + vector = FixedTemp(LoadIC::VectorRegister()); + } LLoadKeyedGeneric* result = - new(zone()) LLoadKeyedGeneric(context, object, key); + new(zone()) LLoadKeyedGeneric(context, object, key, vector); return MarkAsCall(DefineFixed(result, eax), instr); } @@ -2258,19 +2258,14 @@ LOperand* LChunkBuilder::GetStoreKeyedValueOperand(HStoreKeyed* instr) { return UseFixed(instr->value(), eax); } - if (!CpuFeatures::IsSafeForSnapshot(isolate(), SSE2) && - 
IsDoubleOrFloatElementsKind(elements_kind)) { - return UseRegisterAtStart(instr->value()); - } - return UseRegister(instr->value()); } LInstruction* LChunkBuilder::DoStoreKeyed(HStoreKeyed* instr) { if (!instr->is_typed_elements()) { - ASSERT(instr->elements()->representation().IsTagged()); - ASSERT(instr->key()->representation().IsInteger32() || + DCHECK(instr->elements()->representation().IsTagged()); + DCHECK(instr->key()->representation().IsInteger32() || instr->key()->representation().IsSmi()); if (instr->value()->representation().IsDouble()) { @@ -2280,7 +2275,7 @@ LInstruction* LChunkBuilder::DoStoreKeyed(HStoreKeyed* instr) { LOperand* key = UseRegisterOrConstantAtStart(instr->key()); return new(zone()) LStoreKeyed(object, key, val); } else { - ASSERT(instr->value()->representation().IsSmiOrTagged()); + DCHECK(instr->value()->representation().IsSmiOrTagged()); bool needs_write_barrier = instr->NeedsWriteBarrier(); LOperand* obj = UseRegister(instr->elements()); @@ -2298,12 +2293,12 @@ LInstruction* LChunkBuilder::DoStoreKeyed(HStoreKeyed* instr) { } ElementsKind elements_kind = instr->elements_kind(); - ASSERT( + DCHECK( (instr->value()->representation().IsInteger32() && !IsDoubleOrFloatElementsKind(elements_kind)) || (instr->value()->representation().IsDouble() && IsDoubleOrFloatElementsKind(elements_kind))); - ASSERT((instr->is_fixed_typed_array() && + DCHECK((instr->is_fixed_typed_array() && instr->elements()->representation().IsTagged()) || (instr->is_external() && instr->elements()->representation().IsExternal())); @@ -2321,13 +2316,14 @@ LInstruction* LChunkBuilder::DoStoreKeyed(HStoreKeyed* instr) { LInstruction* LChunkBuilder::DoStoreKeyedGeneric(HStoreKeyedGeneric* instr) { LOperand* context = UseFixed(instr->context(), esi); - LOperand* object = UseFixed(instr->object(), edx); - LOperand* key = UseFixed(instr->key(), ecx); - LOperand* value = UseFixed(instr->value(), eax); + LOperand* object = UseFixed(instr->object(), + KeyedStoreIC::ReceiverRegister()); + LOperand* key = UseFixed(instr->key(), KeyedStoreIC::NameRegister()); + LOperand* value = UseFixed(instr->value(), KeyedStoreIC::ValueRegister()); - ASSERT(instr->object()->representation().IsTagged()); - ASSERT(instr->key()->representation().IsTagged()); - ASSERT(instr->value()->representation().IsTagged()); + DCHECK(instr->object()->representation().IsTagged()); + DCHECK(instr->key()->representation().IsTagged()); + DCHECK(instr->value()->representation().IsTagged()); LStoreKeyedGeneric* result = new(zone()) LStoreKeyedGeneric(context, object, key, value); @@ -2379,9 +2375,9 @@ LInstruction* LChunkBuilder::DoStoreNamedField(HStoreNamedField* instr) { ? 
UseRegister(instr->object()) : UseTempRegister(instr->object()); } else if (is_external_location) { - ASSERT(!is_in_object); - ASSERT(!needs_write_barrier); - ASSERT(!needs_write_barrier_for_map); + DCHECK(!is_in_object); + DCHECK(!needs_write_barrier); + DCHECK(!needs_write_barrier_for_map); obj = UseRegisterOrConstant(instr->object()); } else { obj = needs_write_barrier_for_map @@ -2425,8 +2421,8 @@ LInstruction* LChunkBuilder::DoStoreNamedField(HStoreNamedField* instr) { LInstruction* LChunkBuilder::DoStoreNamedGeneric(HStoreNamedGeneric* instr) { LOperand* context = UseFixed(instr->context(), esi); - LOperand* object = UseFixed(instr->object(), edx); - LOperand* value = UseFixed(instr->value(), eax); + LOperand* object = UseFixed(instr->object(), StoreIC::ReceiverRegister()); + LOperand* value = UseFixed(instr->value(), StoreIC::ValueRegister()); LStoreNamedGeneric* result = new(zone()) LStoreNamedGeneric(context, object, value); @@ -2489,7 +2485,7 @@ LInstruction* LChunkBuilder::DoFunctionLiteral(HFunctionLiteral* instr) { LInstruction* LChunkBuilder::DoOsrEntry(HOsrEntry* instr) { - ASSERT(argument_count_ == 0); + DCHECK(argument_count_ == 0); allocator_->MarkAsOsrEntry(); current_block_->last_environment()->set_ast_id(instr->ast_id()); return AssignEnvironment(new(zone()) LOsrEntry); @@ -2502,11 +2498,11 @@ LInstruction* LChunkBuilder::DoParameter(HParameter* instr) { int spill_index = chunk()->GetParameterStackSlot(instr->index()); return DefineAsSpilled(result, spill_index); } else { - ASSERT(info()->IsStub()); + DCHECK(info()->IsStub()); CodeStubInterfaceDescriptor* descriptor = info()->code_stub()->GetInterfaceDescriptor(); int index = static_cast<int>(instr->index()); - Register reg = descriptor->GetParameterRegister(index); + Register reg = descriptor->GetEnvironmentParameterRegister(index); return DefineFixed(result, reg); } } @@ -2591,8 +2587,6 @@ LInstruction* LChunkBuilder::DoTypeof(HTypeof* instr) { LInstruction* LChunkBuilder::DoTypeofIsAndBranch(HTypeofIsAndBranch* instr) { - LInstruction* goto_instr = CheckElideControlInstruction(instr); - if (goto_instr != NULL) return goto_instr; return new(zone()) LTypeofIsAndBranch(UseTempRegister(instr->value())); } @@ -2615,7 +2609,7 @@ LInstruction* LChunkBuilder::DoStackCheck(HStackCheck* instr) { LOperand* context = UseFixed(instr->context(), esi); return MarkAsCall(new(zone()) LStackCheck(context), instr); } else { - ASSERT(instr->is_backwards_branch()); + DCHECK(instr->is_backwards_branch()); LOperand* context = UseAny(instr->context()); return AssignEnvironment( AssignPointerMap(new(zone()) LStackCheck(context))); @@ -2651,7 +2645,7 @@ LInstruction* LChunkBuilder::DoLeaveInlined(HLeaveInlined* instr) { if (env->entry()->arguments_pushed()) { int argument_count = env->arguments_environment()->parameter_count(); pop = new(zone()) LDrop(argument_count); - ASSERT(instr->argument_delta() == -argument_count); + DCHECK(instr->argument_delta() == -argument_count); } HEnvironment* outer = current_block_->last_environment()-> @@ -2692,6 +2686,22 @@ LInstruction* LChunkBuilder::DoLoadFieldByIndex(HLoadFieldByIndex* instr) { } +LInstruction* LChunkBuilder::DoStoreFrameContext(HStoreFrameContext* instr) { + LOperand* context = UseRegisterAtStart(instr->context()); + return new(zone()) LStoreFrameContext(context); +} + + +LInstruction* LChunkBuilder::DoAllocateBlockContext( + HAllocateBlockContext* instr) { + LOperand* context = UseFixed(instr->context(), esi); + LOperand* function = UseRegisterAtStart(instr->function()); + 
LAllocateBlockContext* result = + new(zone()) LAllocateBlockContext(context, function); + return MarkAsCall(DefineFixed(result, esi), instr); +} + + } } // namespace v8::internal #endif // V8_TARGET_ARCH_IA32 diff --git a/deps/v8/src/ia32/lithium-ia32.h b/deps/v8/src/ia32/lithium-ia32.h index fe6f79463e0..4206482de75 100644 --- a/deps/v8/src/ia32/lithium-ia32.h +++ b/deps/v8/src/ia32/lithium-ia32.h @@ -5,159 +5,164 @@ #ifndef V8_IA32_LITHIUM_IA32_H_ #define V8_IA32_LITHIUM_IA32_H_ -#include "hydrogen.h" -#include "lithium-allocator.h" -#include "lithium.h" -#include "safepoint-table.h" -#include "utils.h" +#include "src/hydrogen.h" +#include "src/lithium.h" +#include "src/lithium-allocator.h" +#include "src/safepoint-table.h" +#include "src/utils.h" namespace v8 { namespace internal { +namespace compiler { +class RCodeVisualizer; +} + // Forward declarations. class LCodeGen; -#define LITHIUM_CONCRETE_INSTRUCTION_LIST(V) \ - V(AccessArgumentsAt) \ - V(AddI) \ - V(Allocate) \ - V(ApplyArguments) \ - V(ArgumentsElements) \ - V(ArgumentsLength) \ - V(ArithmeticD) \ - V(ArithmeticT) \ - V(BitI) \ - V(BoundsCheck) \ - V(Branch) \ - V(CallJSFunction) \ - V(CallWithDescriptor) \ - V(CallFunction) \ - V(CallNew) \ - V(CallNewArray) \ - V(CallRuntime) \ - V(CallStub) \ - V(CheckInstanceType) \ - V(CheckMaps) \ - V(CheckMapValue) \ - V(CheckNonSmi) \ - V(CheckSmi) \ - V(CheckValue) \ - V(ClampDToUint8) \ - V(ClampIToUint8) \ - V(ClampTToUint8) \ - V(ClampTToUint8NoSSE2) \ - V(ClassOfTestAndBranch) \ - V(ClobberDoubles) \ - V(CompareMinusZeroAndBranch) \ - V(CompareNumericAndBranch) \ - V(CmpObjectEqAndBranch) \ - V(CmpHoleAndBranch) \ - V(CmpMapAndBranch) \ - V(CmpT) \ - V(ConstantD) \ - V(ConstantE) \ - V(ConstantI) \ - V(ConstantS) \ - V(ConstantT) \ - V(ConstructDouble) \ - V(Context) \ - V(DateField) \ - V(DebugBreak) \ - V(DeclareGlobals) \ - V(Deoptimize) \ - V(DivByConstI) \ - V(DivByPowerOf2I) \ - V(DivI) \ - V(DoubleBits) \ - V(DoubleToI) \ - V(DoubleToSmi) \ - V(Drop) \ - V(Dummy) \ - V(DummyUse) \ - V(FlooringDivByConstI) \ - V(FlooringDivByPowerOf2I) \ - V(FlooringDivI) \ - V(ForInCacheArray) \ - V(ForInPrepareMap) \ - V(FunctionLiteral) \ - V(GetCachedArrayIndex) \ - V(Goto) \ - V(HasCachedArrayIndexAndBranch) \ - V(HasInstanceTypeAndBranch) \ - V(InnerAllocatedObject) \ - V(InstanceOf) \ - V(InstanceOfKnownGlobal) \ - V(InstructionGap) \ - V(Integer32ToDouble) \ - V(InvokeFunction) \ - V(IsConstructCallAndBranch) \ - V(IsObjectAndBranch) \ - V(IsStringAndBranch) \ - V(IsSmiAndBranch) \ - V(IsUndetectableAndBranch) \ - V(Label) \ - V(LazyBailout) \ - V(LoadContextSlot) \ - V(LoadFieldByIndex) \ - V(LoadFunctionPrototype) \ - V(LoadGlobalCell) \ - V(LoadGlobalGeneric) \ - V(LoadKeyed) \ - V(LoadKeyedGeneric) \ - V(LoadNamedField) \ - V(LoadNamedGeneric) \ - V(LoadRoot) \ - V(MapEnumLength) \ - V(MathAbs) \ - V(MathClz32) \ - V(MathExp) \ - V(MathFloor) \ - V(MathLog) \ - V(MathMinMax) \ - V(MathPowHalf) \ - V(MathRound) \ - V(MathSqrt) \ - V(ModByConstI) \ - V(ModByPowerOf2I) \ - V(ModI) \ - V(MulI) \ - V(NumberTagD) \ - V(NumberTagI) \ - V(NumberTagU) \ - V(NumberUntagD) \ - V(OsrEntry) \ - V(Parameter) \ - V(Power) \ - V(PushArgument) \ - V(RegExpLiteral) \ - V(Return) \ - V(SeqStringGetChar) \ - V(SeqStringSetChar) \ - V(ShiftI) \ - V(SmiTag) \ - V(SmiUntag) \ - V(StackCheck) \ - V(StoreCodeEntry) \ - V(StoreContextSlot) \ - V(StoreGlobalCell) \ - V(StoreKeyed) \ - V(StoreKeyedGeneric) \ - V(StoreNamedField) \ - V(StoreNamedGeneric) \ - V(StringAdd) \ - V(StringCharCodeAt) \ - 
V(StringCharFromCode) \ - V(StringCompareAndBranch) \ - V(SubI) \ - V(TaggedToI) \ - V(ThisFunction) \ - V(ToFastProperties) \ - V(TransitionElementsKind) \ - V(TrapAllocationMemento) \ - V(Typeof) \ - V(TypeofIsAndBranch) \ - V(Uint32ToDouble) \ - V(UnknownOSRValue) \ +#define LITHIUM_CONCRETE_INSTRUCTION_LIST(V) \ + V(AccessArgumentsAt) \ + V(AddI) \ + V(AllocateBlockContext) \ + V(Allocate) \ + V(ApplyArguments) \ + V(ArgumentsElements) \ + V(ArgumentsLength) \ + V(ArithmeticD) \ + V(ArithmeticT) \ + V(BitI) \ + V(BoundsCheck) \ + V(Branch) \ + V(CallJSFunction) \ + V(CallWithDescriptor) \ + V(CallFunction) \ + V(CallNew) \ + V(CallNewArray) \ + V(CallRuntime) \ + V(CallStub) \ + V(CheckInstanceType) \ + V(CheckMaps) \ + V(CheckMapValue) \ + V(CheckNonSmi) \ + V(CheckSmi) \ + V(CheckValue) \ + V(ClampDToUint8) \ + V(ClampIToUint8) \ + V(ClampTToUint8) \ + V(ClassOfTestAndBranch) \ + V(CompareMinusZeroAndBranch) \ + V(CompareNumericAndBranch) \ + V(CmpObjectEqAndBranch) \ + V(CmpHoleAndBranch) \ + V(CmpMapAndBranch) \ + V(CmpT) \ + V(ConstantD) \ + V(ConstantE) \ + V(ConstantI) \ + V(ConstantS) \ + V(ConstantT) \ + V(ConstructDouble) \ + V(Context) \ + V(DateField) \ + V(DebugBreak) \ + V(DeclareGlobals) \ + V(Deoptimize) \ + V(DivByConstI) \ + V(DivByPowerOf2I) \ + V(DivI) \ + V(DoubleBits) \ + V(DoubleToI) \ + V(DoubleToSmi) \ + V(Drop) \ + V(Dummy) \ + V(DummyUse) \ + V(FlooringDivByConstI) \ + V(FlooringDivByPowerOf2I) \ + V(FlooringDivI) \ + V(ForInCacheArray) \ + V(ForInPrepareMap) \ + V(FunctionLiteral) \ + V(GetCachedArrayIndex) \ + V(Goto) \ + V(HasCachedArrayIndexAndBranch) \ + V(HasInstanceTypeAndBranch) \ + V(InnerAllocatedObject) \ + V(InstanceOf) \ + V(InstanceOfKnownGlobal) \ + V(InstructionGap) \ + V(Integer32ToDouble) \ + V(InvokeFunction) \ + V(IsConstructCallAndBranch) \ + V(IsObjectAndBranch) \ + V(IsStringAndBranch) \ + V(IsSmiAndBranch) \ + V(IsUndetectableAndBranch) \ + V(Label) \ + V(LazyBailout) \ + V(LoadContextSlot) \ + V(LoadFieldByIndex) \ + V(LoadFunctionPrototype) \ + V(LoadGlobalCell) \ + V(LoadGlobalGeneric) \ + V(LoadKeyed) \ + V(LoadKeyedGeneric) \ + V(LoadNamedField) \ + V(LoadNamedGeneric) \ + V(LoadRoot) \ + V(MapEnumLength) \ + V(MathAbs) \ + V(MathClz32) \ + V(MathExp) \ + V(MathFloor) \ + V(MathFround) \ + V(MathLog) \ + V(MathMinMax) \ + V(MathPowHalf) \ + V(MathRound) \ + V(MathSqrt) \ + V(ModByConstI) \ + V(ModByPowerOf2I) \ + V(ModI) \ + V(MulI) \ + V(NumberTagD) \ + V(NumberTagI) \ + V(NumberTagU) \ + V(NumberUntagD) \ + V(OsrEntry) \ + V(Parameter) \ + V(Power) \ + V(PushArgument) \ + V(RegExpLiteral) \ + V(Return) \ + V(SeqStringGetChar) \ + V(SeqStringSetChar) \ + V(ShiftI) \ + V(SmiTag) \ + V(SmiUntag) \ + V(StackCheck) \ + V(StoreCodeEntry) \ + V(StoreContextSlot) \ + V(StoreFrameContext) \ + V(StoreGlobalCell) \ + V(StoreKeyed) \ + V(StoreKeyedGeneric) \ + V(StoreNamedField) \ + V(StoreNamedGeneric) \ + V(StringAdd) \ + V(StringCharCodeAt) \ + V(StringCharFromCode) \ + V(StringCompareAndBranch) \ + V(SubI) \ + V(TaggedToI) \ + V(ThisFunction) \ + V(ToFastProperties) \ + V(TransitionElementsKind) \ + V(TrapAllocationMemento) \ + V(Typeof) \ + V(TypeofIsAndBranch) \ + V(Uint32ToDouble) \ + V(UnknownOSRValue) \ V(WrapReceiver) @@ -170,7 +175,7 @@ class LCodeGen; return mnemonic; \ } \ static L##type* cast(LInstruction* instr) { \ - ASSERT(instr->Is##type()); \ + DCHECK(instr->Is##type()); \ return reinterpret_cast<L##type*>(instr); \ } @@ -200,7 +205,7 @@ class LInstruction : public ZoneObject { enum Opcode { // Declare a unique enum value 
for each instruction. #define DECLARE_OPCODE(type) k##type, - LITHIUM_CONCRETE_INSTRUCTION_LIST(DECLARE_OPCODE) + LITHIUM_CONCRETE_INSTRUCTION_LIST(DECLARE_OPCODE) kAdapter, kNumberOfInstructions #undef DECLARE_OPCODE }; @@ -219,6 +224,9 @@ class LInstruction : public ZoneObject { virtual bool IsControl() const { return false; } + // Try deleting this instruction if possible. + virtual bool TryDelete() { return false; } + void set_environment(LEnvironment* env) { environment_ = env; } LEnvironment* environment() const { return environment_; } bool HasEnvironment() const { return environment_ != NULL; } @@ -239,10 +247,7 @@ class LInstruction : public ZoneObject { bool ClobbersTemps() const { return IsCall(); } bool ClobbersRegisters() const { return IsCall(); } virtual bool ClobbersDoubleRegisters(Isolate* isolate) const { - return IsCall() || - // We only have rudimentary X87Stack tracking, thus in general - // cannot handle phi-nodes. - (!CpuFeatures::IsSafeForSnapshot(isolate, SSE2) && IsControl()); + return IsCall(); } virtual bool HasResult() const = 0; @@ -250,7 +255,6 @@ class LInstruction : public ZoneObject { bool HasDoubleRegisterResult(); bool HasDoubleRegisterInput(); - bool IsDoubleInput(X87Register reg, LCodeGen* cgen); LOperand* FirstInput() { return InputAt(0); } LOperand* Output() { return HasResult() ? result() : NULL; } @@ -261,11 +265,12 @@ class LInstruction : public ZoneObject { void VerifyCall(); #endif + virtual int InputCount() = 0; + virtual LOperand* InputAt(int i) = 0; + private: // Iterator support. friend class InputIterator; - virtual int InputCount() = 0; - virtual LOperand* InputAt(int i) = 0; friend class TempIterator; virtual int TempCount() = 0; @@ -329,7 +334,7 @@ class LGap : public LTemplateInstruction<0, 0, 0> { virtual bool IsGap() const V8_FINAL V8_OVERRIDE { return true; } virtual void PrintDataTo(StringStream* stream) V8_OVERRIDE; static LGap* cast(LInstruction* instr) { - ASSERT(instr->IsGap()); + DCHECK(instr->IsGap()); return reinterpret_cast<LGap*>(instr); } @@ -375,20 +380,6 @@ class LInstructionGap V8_FINAL : public LGap { }; -class LClobberDoubles V8_FINAL : public LTemplateInstruction<0, 0, 0> { - public: - explicit LClobberDoubles(Isolate* isolate) { - ASSERT(!CpuFeatures::IsSafeForSnapshot(isolate, SSE2)); - } - - virtual bool ClobbersDoubleRegisters(Isolate* isolate) const V8_OVERRIDE { - return true; - } - - DECLARE_CONCRETE_INSTRUCTION(ClobberDoubles, "clobber-d") -}; - - class LGoto V8_FINAL : public LTemplateInstruction<0, 0, 0> { public: explicit LGoto(HBasicBlock* block) : block_(block) { } @@ -418,7 +409,7 @@ class LLazyBailout V8_FINAL : public LTemplateInstruction<0, 0, 0> { class LDummy V8_FINAL : public LTemplateInstruction<1, 0, 0> { public: - explicit LDummy() { } + LDummy() {} DECLARE_CONCRETE_INSTRUCTION(Dummy, "dummy") }; @@ -434,6 +425,7 @@ class LDummyUse V8_FINAL : public LTemplateInstruction<1, 1, 0> { class LDeoptimize V8_FINAL : public LTemplateInstruction<0, 0, 0> { public: + virtual bool IsControl() const V8_OVERRIDE { return true; } DECLARE_CONCRETE_INSTRUCTION(Deoptimize, "deoptimize") DECLARE_HYDROGEN_ACCESSOR(Deoptimize) }; @@ -867,14 +859,24 @@ class LMathRound V8_FINAL : public LTemplateInstruction<1, 1, 1> { temps_[0] = temp; } - LOperand* value() { return inputs_[0]; } LOperand* temp() { return temps_[0]; } + LOperand* value() { return inputs_[0]; } DECLARE_CONCRETE_INSTRUCTION(MathRound, "math-round") DECLARE_HYDROGEN_ACCESSOR(UnaryMathOperation) }; +class LMathFround V8_FINAL : public 
LTemplateInstruction<1, 1, 0> { + public: + explicit LMathFround(LOperand* value) { inputs_[0] = value; } + + LOperand* value() { return inputs_[0]; } + + DECLARE_CONCRETE_INSTRUCTION(MathFround, "math-fround") +}; + + class LMathAbs V8_FINAL : public LTemplateInstruction<1, 2, 0> { public: LMathAbs(LOperand* context, LOperand* value) { @@ -1572,7 +1574,7 @@ class LReturn V8_FINAL : public LTemplateInstruction<0, 3, 0> { return parameter_count()->IsConstantOperand(); } LConstantOperand* constant_parameter_count() { - ASSERT(has_constant_parameter_count()); + DCHECK(has_constant_parameter_count()); return LConstantOperand::cast(parameter_count()); } LOperand* parameter_count() { return inputs_[2]; } @@ -1595,15 +1597,17 @@ class LLoadNamedField V8_FINAL : public LTemplateInstruction<1, 1, 0> { }; -class LLoadNamedGeneric V8_FINAL : public LTemplateInstruction<1, 2, 0> { +class LLoadNamedGeneric V8_FINAL : public LTemplateInstruction<1, 2, 1> { public: - LLoadNamedGeneric(LOperand* context, LOperand* object) { + LLoadNamedGeneric(LOperand* context, LOperand* object, LOperand* vector) { inputs_[0] = context; inputs_[1] = object; + temps_[0] = vector; } LOperand* context() { return inputs_[0]; } LOperand* object() { return inputs_[1]; } + LOperand* temp_vector() { return temps_[0]; } DECLARE_CONCRETE_INSTRUCTION(LoadNamedGeneric, "load-named-generic") DECLARE_HYDROGEN_ACCESSOR(LoadNamedGeneric) @@ -1661,7 +1665,7 @@ class LLoadKeyed V8_FINAL : public LTemplateInstruction<1, 2, 0> { DECLARE_HYDROGEN_ACCESSOR(LoadKeyed) virtual void PrintDataTo(StringStream* stream) V8_OVERRIDE; - uint32_t additional_index() const { return hydrogen()->index_offset(); } + uint32_t base_offset() const { return hydrogen()->base_offset(); } bool key_is_smi() { return hydrogen()->key()->representation().IsTagged(); } @@ -1684,19 +1688,23 @@ inline static bool ExternalArrayOpRequiresTemp( } -class LLoadKeyedGeneric V8_FINAL : public LTemplateInstruction<1, 3, 0> { +class LLoadKeyedGeneric V8_FINAL : public LTemplateInstruction<1, 3, 1> { public: - LLoadKeyedGeneric(LOperand* context, LOperand* obj, LOperand* key) { + LLoadKeyedGeneric(LOperand* context, LOperand* obj, LOperand* key, + LOperand* vector) { inputs_[0] = context; inputs_[1] = obj; inputs_[2] = key; + temps_[0] = vector; } LOperand* context() { return inputs_[0]; } LOperand* object() { return inputs_[1]; } LOperand* key() { return inputs_[2]; } + LOperand* temp_vector() { return temps_[0]; } DECLARE_CONCRETE_INSTRUCTION(LoadKeyedGeneric, "load-keyed-generic") + DECLARE_HYDROGEN_ACCESSOR(LoadKeyedGeneric) }; @@ -1707,15 +1715,18 @@ class LLoadGlobalCell V8_FINAL : public LTemplateInstruction<1, 0, 0> { }; -class LLoadGlobalGeneric V8_FINAL : public LTemplateInstruction<1, 2, 0> { +class LLoadGlobalGeneric V8_FINAL : public LTemplateInstruction<1, 2, 1> { public: - LLoadGlobalGeneric(LOperand* context, LOperand* global_object) { + LLoadGlobalGeneric(LOperand* context, LOperand* global_object, + LOperand* vector) { inputs_[0] = context; inputs_[1] = global_object; + temps_[0] = vector; } LOperand* context() { return inputs_[0]; } LOperand* global_object() { return inputs_[1]; } + LOperand* temp_vector() { return temps_[0]; } DECLARE_CONCRETE_INSTRUCTION(LoadGlobalGeneric, "load-global-generic") DECLARE_HYDROGEN_ACCESSOR(LoadGlobalGeneric) @@ -1801,15 +1812,15 @@ class LDrop V8_FINAL : public LTemplateInstruction<0, 0, 0> { }; -class LStoreCodeEntry V8_FINAL: public LTemplateInstruction<0, 1, 1> { +class LStoreCodeEntry V8_FINAL: public LTemplateInstruction<0, 
2, 0> { public: LStoreCodeEntry(LOperand* function, LOperand* code_object) { inputs_[0] = function; - temps_[0] = code_object; + inputs_[1] = code_object; } LOperand* function() { return inputs_[0]; } - LOperand* code_object() { return temps_[0]; } + LOperand* code_object() { return inputs_[1]; } virtual void PrintDataTo(StringStream* stream); @@ -1880,11 +1891,11 @@ class LCallJSFunction V8_FINAL : public LTemplateInstruction<1, 1, 0> { class LCallWithDescriptor V8_FINAL : public LTemplateResultInstruction<1> { public: - LCallWithDescriptor(const CallInterfaceDescriptor* descriptor, - ZoneList<LOperand*>& operands, + LCallWithDescriptor(const InterfaceDescriptor* descriptor, + const ZoneList<LOperand*>& operands, Zone* zone) - : inputs_(descriptor->environment_length() + 1, zone) { - ASSERT(descriptor->environment_length() + 1 == operands.length()); + : inputs_(descriptor->GetRegisterParameterCount() + 1, zone) { + DCHECK(descriptor->GetRegisterParameterCount() + 1 == operands.length()); inputs_.AddAll(operands, zone); } @@ -2016,15 +2027,13 @@ class LInteger32ToDouble V8_FINAL : public LTemplateInstruction<1, 1, 0> { }; -class LUint32ToDouble V8_FINAL : public LTemplateInstruction<1, 1, 1> { +class LUint32ToDouble V8_FINAL : public LTemplateInstruction<1, 1, 0> { public: - explicit LUint32ToDouble(LOperand* value, LOperand* temp) { + explicit LUint32ToDouble(LOperand* value) { inputs_[0] = value; - temps_[0] = temp; } LOperand* value() { return inputs_[0]; } - LOperand* temp() { return temps_[0]; } DECLARE_CONCRETE_INSTRUCTION(Uint32ToDouble, "uint32-to-double") }; @@ -2044,17 +2053,15 @@ class LNumberTagI V8_FINAL : public LTemplateInstruction<1, 1, 1> { }; -class LNumberTagU V8_FINAL : public LTemplateInstruction<1, 1, 2> { +class LNumberTagU V8_FINAL : public LTemplateInstruction<1, 1, 1> { public: - LNumberTagU(LOperand* value, LOperand* temp1, LOperand* temp2) { + LNumberTagU(LOperand* value, LOperand* temp) { inputs_[0] = value; - temps_[0] = temp1; - temps_[1] = temp2; + temps_[0] = temp; } LOperand* value() { return inputs_[0]; } - LOperand* temp1() { return temps_[0]; } - LOperand* temp2() { return temps_[1]; } + LOperand* temp() { return temps_[0]; } DECLARE_CONCRETE_INSTRUCTION(NumberTagU, "number-tag-u") }; @@ -2241,7 +2248,7 @@ class LStoreKeyed V8_FINAL : public LTemplateInstruction<0, 3, 0> { DECLARE_HYDROGEN_ACCESSOR(StoreKeyed) virtual void PrintDataTo(StringStream* stream) V8_OVERRIDE; - uint32_t additional_index() const { return hydrogen()->index_offset(); } + uint32_t base_offset() const { return hydrogen()->base_offset(); } bool NeedsCanonicalization() { return hydrogen()->NeedsCanonicalization(); } }; @@ -2460,30 +2467,6 @@ class LClampTToUint8 V8_FINAL : public LTemplateInstruction<1, 1, 1> { }; -// Truncating conversion from a tagged value to an int32. 
-class LClampTToUint8NoSSE2 V8_FINAL : public LTemplateInstruction<1, 1, 3> { - public: - LClampTToUint8NoSSE2(LOperand* unclamped, - LOperand* temp1, - LOperand* temp2, - LOperand* temp3) { - inputs_[0] = unclamped; - temps_[0] = temp1; - temps_[1] = temp2; - temps_[2] = temp3; - } - - LOperand* unclamped() { return inputs_[0]; } - LOperand* scratch() { return temps_[0]; } - LOperand* scratch2() { return temps_[1]; } - LOperand* scratch3() { return temps_[2]; } - - DECLARE_CONCRETE_INSTRUCTION(ClampTToUint8NoSSE2, - "clamp-t-to-uint8-nosse2") - DECLARE_HYDROGEN_ACCESSOR(UnaryOperation) -}; - - class LCheckNonSmi V8_FINAL : public LTemplateInstruction<0, 1, 0> { public: explicit LCheckNonSmi(LOperand* value) { @@ -2696,6 +2679,35 @@ class LLoadFieldByIndex V8_FINAL : public LTemplateInstruction<1, 2, 0> { }; +class LStoreFrameContext: public LTemplateInstruction<0, 1, 0> { + public: + explicit LStoreFrameContext(LOperand* context) { + inputs_[0] = context; + } + + LOperand* context() { return inputs_[0]; } + + DECLARE_CONCRETE_INSTRUCTION(StoreFrameContext, "store-frame-context") +}; + + +class LAllocateBlockContext: public LTemplateInstruction<1, 2, 0> { + public: + LAllocateBlockContext(LOperand* context, LOperand* function) { + inputs_[0] = context; + inputs_[1] = function; + } + + LOperand* context() { return inputs_[0]; } + LOperand* function() { return inputs_[1]; } + + Handle<ScopeInfo> scope_info() { return hydrogen()->scope_info(); } + + DECLARE_CONCRETE_INSTRUCTION(AllocateBlockContext, "allocate-block-context") + DECLARE_HYDROGEN_ACCESSOR(AllocateBlockContext) +}; + + class LChunkBuilder; class LPlatformChunk V8_FINAL : public LChunk { public: @@ -2731,8 +2743,6 @@ class LChunkBuilder V8_FINAL : public LChunkBuilderBase { // Build the sequence for the graph. LPlatformChunk* Build(); - LInstruction* CheckElideControlInstruction(HControlInstruction* instr); - // Declare methods that deal with the individual node types. #define DECLARE_DO(type) LInstruction* Do##type(H##type* node); HYDROGEN_CONCRETE_INSTRUCTION_LIST(DECLARE_DO) @@ -2740,6 +2750,7 @@ class LChunkBuilder V8_FINAL : public LChunkBuilderBase { LInstruction* DoMathFloor(HUnaryMathOperation* instr); LInstruction* DoMathRound(HUnaryMathOperation* instr); + LInstruction* DoMathFround(HUnaryMathOperation* instr); LInstruction* DoMathAbs(HUnaryMathOperation* instr); LInstruction* DoMathLog(HUnaryMathOperation* instr); LInstruction* DoMathExp(HUnaryMathOperation* instr); @@ -2778,7 +2789,6 @@ class LChunkBuilder V8_FINAL : public LChunkBuilderBase { // Methods for getting operands for Use / Define / Temp. LUnallocated* ToUnallocated(Register reg); LUnallocated* ToUnallocated(XMMRegister reg); - LUnallocated* ToUnallocated(X87Register reg); // Methods for setting up define-use relationships. MUST_USE_RESULT LOperand* Use(HValue* value, LUnallocated* operand); @@ -2840,7 +2850,6 @@ class LChunkBuilder V8_FINAL : public LChunkBuilderBase { Register reg); LInstruction* DefineFixedDouble(LTemplateResultInstruction<1>* instr, XMMRegister reg); - LInstruction* DefineX87TOS(LTemplateResultInstruction<1>* instr); // Assigns an environment to an instruction. An instruction which can // deoptimize must have an environment. 
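// Review sketch: every class above follows the same Lithium pattern, which is easy to
// miss in diff form. LTemplateInstruction<R, I, T> fixes the number of results, inputs,
// and temps at compile time as plain arrays, and DECLARE_CONCRETE_INSTRUCTION ties each
// class to an entry in LITHIUM_CONCRETE_INSTRUCTION_LIST, the X-macro that also
// generated the k##type opcode enum near the top of this file. A minimal illustration
// of the idea, inert under #if 0 (names here are illustrative, not the real
// declarations):
#if 0
#define INSTRUCTION_LIST(V) V(AddI) V(SubI)  // one V(...) entry per instruction
enum OpcodeSketch {
#define DECLARE_OPCODE(type) k##type,
  INSTRUCTION_LIST(DECLARE_OPCODE)  // expands to kAddI, kSubI,
#undef DECLARE_OPCODE
  kNumberOfInstructions
};
template <int R, int I, int T>  // assumes R, I, T > 0; V8 specializes the 0 case
class LTemplateInstructionSketch {
 protected:
  LOperand* results_[R];  // operands the instruction defines
  LOperand* inputs_[I];   // operands it uses, e.g. inputs_[0] = value
  LOperand* temps_[T];    // scratch registers the allocator must provide
};
#endif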
LInstruction* AssignEnvironment(LInstruction* instr); @@ -2861,6 +2870,7 @@ class LChunkBuilder V8_FINAL : public LChunkBuilderBase { CanDeoptimize can_deoptimize = CANNOT_DEOPTIMIZE_EAGERLY); void VisitInstruction(HInstruction* current); + void AddInstruction(LInstruction* instr, HInstruction* current); void DoBasicBlock(HBasicBlock* block, HBasicBlock* next_block); LInstruction* DoShift(Token::Value op, HBitwiseBinaryOperation* instr); diff --git a/deps/v8/src/ia32/macro-assembler-ia32.cc b/deps/v8/src/ia32/macro-assembler-ia32.cc index 8b17baa2a7a..7e05e674dd9 100644 --- a/deps/v8/src/ia32/macro-assembler-ia32.cc +++ b/deps/v8/src/ia32/macro-assembler-ia32.cc @@ -2,17 +2,17 @@ // Use of this source code is governed by a BSD-style license that can be // found in the LICENSE file. -#include "v8.h" +#include "src/v8.h" #if V8_TARGET_ARCH_IA32 -#include "bootstrapper.h" -#include "codegen.h" -#include "cpu-profiler.h" -#include "debug.h" -#include "isolate-inl.h" -#include "runtime.h" -#include "serialize.h" +#include "src/bootstrapper.h" +#include "src/codegen.h" +#include "src/cpu-profiler.h" +#include "src/debug.h" +#include "src/isolate-inl.h" +#include "src/runtime.h" +#include "src/serialize.h" namespace v8 { namespace internal { @@ -33,7 +33,7 @@ MacroAssembler::MacroAssembler(Isolate* arg_isolate, void* buffer, int size) void MacroAssembler::Load(Register dst, const Operand& src, Representation r) { - ASSERT(!r.IsDouble()); + DCHECK(!r.IsDouble()); if (r.IsInteger8()) { movsx_b(dst, src); } else if (r.IsUInteger8()) { @@ -49,7 +49,7 @@ void MacroAssembler::Load(Register dst, const Operand& src, Representation r) { void MacroAssembler::Store(Register src, const Operand& dst, Representation r) { - ASSERT(!r.IsDouble()); + DCHECK(!r.IsDouble()); if (r.IsInteger8() || r.IsUInteger8()) { mov_b(dst, src); } else if (r.IsInteger16() || r.IsUInteger16()) { @@ -83,7 +83,7 @@ void MacroAssembler::LoadRoot(Register destination, Heap::RootListIndex index) { void MacroAssembler::StoreRoot(Register source, Register scratch, Heap::RootListIndex index) { - ASSERT(Heap::RootCanBeWrittenAfterInitialization(index)); + DCHECK(Heap::RootCanBeWrittenAfterInitialization(index)); ExternalReference roots_array_start = ExternalReference::roots_array_start(isolate()); mov(scratch, Immediate(index)); @@ -105,7 +105,7 @@ void MacroAssembler::CompareRoot(Register with, void MacroAssembler::CompareRoot(Register with, Heap::RootListIndex index) { - ASSERT(isolate()->heap()->RootCanBeTreatedAsConstant(index)); + DCHECK(isolate()->heap()->RootCanBeTreatedAsConstant(index)); Handle<Object> value(&isolate()->heap()->roots_array_start()[index]); cmp(with, value); } @@ -113,7 +113,7 @@ void MacroAssembler::CompareRoot(Register with, Heap::RootListIndex index) { void MacroAssembler::CompareRoot(const Operand& with, Heap::RootListIndex index) { - ASSERT(isolate()->heap()->RootCanBeTreatedAsConstant(index)); + DCHECK(isolate()->heap()->RootCanBeTreatedAsConstant(index)); Handle<Object> value(&isolate()->heap()->roots_array_start()[index]); cmp(with, value); } @@ -125,7 +125,7 @@ void MacroAssembler::InNewSpace( Condition cc, Label* condition_met, Label::Distance condition_met_distance) { - ASSERT(cc == equal || cc == not_equal); + DCHECK(cc == equal || cc == not_equal); if (scratch.is(object)) { and_(scratch, Immediate(~Page::kPageAlignmentMask)); } else { @@ -133,8 +133,8 @@ void MacroAssembler::InNewSpace( and_(scratch, object); } // Check that we can use a test_b. 
- ASSERT(MemoryChunk::IN_FROM_SPACE < 8); - ASSERT(MemoryChunk::IN_TO_SPACE < 8); + DCHECK(MemoryChunk::IN_FROM_SPACE < 8); + DCHECK(MemoryChunk::IN_TO_SPACE < 8); int mask = (1 << MemoryChunk::IN_FROM_SPACE) | (1 << MemoryChunk::IN_TO_SPACE); // If non-zero, the page belongs to new-space. @@ -176,7 +176,7 @@ void MacroAssembler::RememberedSetHelper( ret(0); bind(&buffer_overflowed); } else { - ASSERT(and_then == kFallThroughAtEnd); + DCHECK(and_then == kFallThroughAtEnd); j(equal, &done, Label::kNear); } StoreBufferOverflowStub store_buffer_overflow = @@ -185,7 +185,7 @@ void MacroAssembler::RememberedSetHelper( if (and_then == kReturnAtEnd) { ret(0); } else { - ASSERT(and_then == kFallThroughAtEnd); + DCHECK(and_then == kFallThroughAtEnd); bind(&done); } } @@ -249,49 +249,13 @@ void MacroAssembler::TruncateDoubleToI(Register result_reg, } -void MacroAssembler::TruncateX87TOSToI(Register result_reg) { - sub(esp, Immediate(kDoubleSize)); - fst_d(MemOperand(esp, 0)); - SlowTruncateToI(result_reg, esp, 0); - add(esp, Immediate(kDoubleSize)); -} - - -void MacroAssembler::X87TOSToI(Register result_reg, - MinusZeroMode minus_zero_mode, - Label* conversion_failed, - Label::Distance dst) { - Label done; - sub(esp, Immediate(kPointerSize)); - fld(0); - fist_s(MemOperand(esp, 0)); - fild_s(MemOperand(esp, 0)); - pop(result_reg); - FCmp(); - j(not_equal, conversion_failed, dst); - j(parity_even, conversion_failed, dst); - if (minus_zero_mode == FAIL_ON_MINUS_ZERO) { - test(result_reg, Operand(result_reg)); - j(not_zero, &done, Label::kNear); - // To check for minus zero, we load the value again as float, and check - // if that is still 0. - sub(esp, Immediate(kPointerSize)); - fst_s(MemOperand(esp, 0)); - pop(result_reg); - test(result_reg, Operand(result_reg)); - j(not_zero, conversion_failed, dst); - } - bind(&done); -} - - void MacroAssembler::DoubleToI(Register result_reg, XMMRegister input_reg, XMMRegister scratch, MinusZeroMode minus_zero_mode, Label* conversion_failed, Label::Distance dst) { - ASSERT(!input_reg.is(scratch)); + DCHECK(!input_reg.is(scratch)); cvttsd2si(result_reg, Operand(input_reg)); Cvtsi2sd(scratch, Operand(result_reg)); ucomisd(scratch, input_reg); @@ -352,8 +316,7 @@ void MacroAssembler::TruncateHeapNumberToI(Register result_reg, fstp(0); SlowTruncateToI(result_reg, input_reg); } - } else if (CpuFeatures::IsSupported(SSE2)) { - CpuFeatureScope scope(this, SSE2); + } else { movsd(xmm0, FieldOperand(input_reg, HeapNumber::kValueOffset)); cvttsd2si(result_reg, Operand(xmm0)); cmp(result_reg, 0x1); @@ -377,8 +340,6 @@ void MacroAssembler::TruncateHeapNumberToI(Register result_reg, } else { SlowTruncateToI(result_reg, input_reg); } - } else { - SlowTruncateToI(result_reg, input_reg); } bind(&done); } @@ -390,114 +351,62 @@ void MacroAssembler::TaggedToI(Register result_reg, MinusZeroMode minus_zero_mode, Label* lost_precision) { Label done; - ASSERT(!temp.is(xmm0)); + DCHECK(!temp.is(xmm0)); cmp(FieldOperand(input_reg, HeapObject::kMapOffset), isolate()->factory()->heap_number_map()); j(not_equal, lost_precision, Label::kNear); - if (CpuFeatures::IsSafeForSnapshot(isolate(), SSE2)) { - ASSERT(!temp.is(no_xmm_reg)); - CpuFeatureScope scope(this, SSE2); + DCHECK(!temp.is(no_xmm_reg)); - movsd(xmm0, FieldOperand(input_reg, HeapNumber::kValueOffset)); - cvttsd2si(result_reg, Operand(xmm0)); - Cvtsi2sd(temp, Operand(result_reg)); - ucomisd(xmm0, temp); - RecordComment("Deferred TaggedToI: lost precision"); - j(not_equal, lost_precision, Label::kNear); - RecordComment("Deferred 
TaggedToI: NaN"); - j(parity_even, lost_precision, Label::kNear); - if (minus_zero_mode == FAIL_ON_MINUS_ZERO) { - test(result_reg, Operand(result_reg)); - j(not_zero, &done, Label::kNear); - movmskpd(result_reg, xmm0); - and_(result_reg, 1); - RecordComment("Deferred TaggedToI: minus zero"); - j(not_zero, lost_precision, Label::kNear); - } - } else { - // TODO(olivf) Converting a number on the fpu is actually quite slow. We - // should first try a fast conversion and then bailout to this slow case. - Label lost_precision_pop, zero_check; - Label* lost_precision_int = (minus_zero_mode == FAIL_ON_MINUS_ZERO) - ? &lost_precision_pop : lost_precision; - sub(esp, Immediate(kPointerSize)); - fld_d(FieldOperand(input_reg, HeapNumber::kValueOffset)); - if (minus_zero_mode == FAIL_ON_MINUS_ZERO) fld(0); - fist_s(MemOperand(esp, 0)); - fild_s(MemOperand(esp, 0)); - FCmp(); - pop(result_reg); - j(not_equal, lost_precision_int, Label::kNear); - j(parity_even, lost_precision_int, Label::kNear); // NaN. - if (minus_zero_mode == FAIL_ON_MINUS_ZERO) { - test(result_reg, Operand(result_reg)); - j(zero, &zero_check, Label::kNear); - fstp(0); - jmp(&done, Label::kNear); - bind(&zero_check); - // To check for minus zero, we load the value again as float, and check - // if that is still 0. - sub(esp, Immediate(kPointerSize)); - fstp_s(Operand(esp, 0)); - pop(result_reg); - test(result_reg, Operand(result_reg)); - j(zero, &done, Label::kNear); - jmp(lost_precision, Label::kNear); - - bind(&lost_precision_pop); - fstp(0); - jmp(lost_precision, Label::kNear); - } + movsd(xmm0, FieldOperand(input_reg, HeapNumber::kValueOffset)); + cvttsd2si(result_reg, Operand(xmm0)); + Cvtsi2sd(temp, Operand(result_reg)); + ucomisd(xmm0, temp); + RecordComment("Deferred TaggedToI: lost precision"); + j(not_equal, lost_precision, Label::kNear); + RecordComment("Deferred TaggedToI: NaN"); + j(parity_even, lost_precision, Label::kNear); + if (minus_zero_mode == FAIL_ON_MINUS_ZERO) { + test(result_reg, Operand(result_reg)); + j(not_zero, &done, Label::kNear); + movmskpd(result_reg, xmm0); + and_(result_reg, 1); + RecordComment("Deferred TaggedToI: minus zero"); + j(not_zero, lost_precision, Label::kNear); } bind(&done); } void MacroAssembler::LoadUint32(XMMRegister dst, - Register src, - XMMRegister scratch) { + Register src) { Label done; cmp(src, Immediate(0)); ExternalReference uint32_bias = ExternalReference::address_of_uint32_bias(); - movsd(scratch, Operand::StaticVariable(uint32_bias)); Cvtsi2sd(dst, src); j(not_sign, &done, Label::kNear); - addsd(dst, scratch); - bind(&done); -} - - -void MacroAssembler::LoadUint32NoSSE2(Register src) { - Label done; - push(src); - fild_s(Operand(esp, 0)); - cmp(src, Immediate(0)); - j(not_sign, &done, Label::kNear); - ExternalReference uint32_bias = - ExternalReference::address_of_uint32_bias(); - fld_d(Operand::StaticVariable(uint32_bias)); - faddp(1); + addsd(dst, Operand::StaticVariable(uint32_bias)); bind(&done); - add(esp, Immediate(kPointerSize)); } -void MacroAssembler::RecordWriteArray(Register object, - Register value, - Register index, - SaveFPRegsMode save_fp, - RememberedSetAction remembered_set_action, - SmiCheck smi_check) { +void MacroAssembler::RecordWriteArray( + Register object, + Register value, + Register index, + SaveFPRegsMode save_fp, + RememberedSetAction remembered_set_action, + SmiCheck smi_check, + PointersToHereCheck pointers_to_here_check_for_value) { // First, check if a write barrier is even needed. The tests below // catch stores of Smis. 
Label done; // Skip barrier if writing a smi. if (smi_check == INLINE_SMI_CHECK) { - ASSERT_EQ(0, kSmiTag); + DCHECK_EQ(0, kSmiTag); test(value, Immediate(kSmiTagMask)); j(zero, &done); } @@ -509,8 +418,8 @@ void MacroAssembler::RecordWriteArray(Register object, lea(dst, Operand(object, index, times_half_pointer_size, FixedArray::kHeaderSize - kHeapObjectTag)); - RecordWrite( - object, dst, value, save_fp, remembered_set_action, OMIT_SMI_CHECK); + RecordWrite(object, dst, value, save_fp, remembered_set_action, + OMIT_SMI_CHECK, pointers_to_here_check_for_value); bind(&done); @@ -530,7 +439,8 @@ Register dst, SaveFPRegsMode save_fp, RememberedSetAction remembered_set_action, - SmiCheck smi_check) { + SmiCheck smi_check, + PointersToHereCheck pointers_to_here_check_for_value) { // First, check if a write barrier is even needed. The tests below // catch stores of Smis. Label done; @@ -542,7 +452,7 @@ // Although the object register is tagged, the offset is relative to the start // of the object, so the offset must be a multiple of kPointerSize. - ASSERT(IsAligned(offset, kPointerSize)); + DCHECK(IsAligned(offset, kPointerSize)); lea(dst, FieldOperand(object, offset)); if (emit_debug_code()) { @@ -553,8 +463,8 @@ bind(&ok); } - RecordWrite( - object, dst, value, save_fp, remembered_set_action, OMIT_SMI_CHECK); + RecordWrite(object, dst, value, save_fp, remembered_set_action, + OMIT_SMI_CHECK, pointers_to_here_check_for_value); bind(&done); @@ -586,42 +496,39 @@ bind(&ok); } - ASSERT(!object.is(value)); - ASSERT(!object.is(address)); - ASSERT(!value.is(address)); + DCHECK(!object.is(value)); + DCHECK(!object.is(address)); + DCHECK(!value.is(address)); AssertNotSmi(object); if (!FLAG_incremental_marking) { return; } - // Count number of write barriers in generated code. - isolate()->counters()->write_barriers_static()->Increment(); - IncrementCounter(isolate()->counters()->write_barriers_dynamic(), 1); + // Compute the address. + lea(address, FieldOperand(object, HeapObject::kMapOffset)); // A single check of the map's pages interesting flag suffices, since it is // only set during incremental collection, and then it's also guaranteed that // the from object's page's interesting flag is also set. This optimization // relies on the fact that maps can never be in new space. - ASSERT(!isolate()->heap()->InNewSpace(*map)); + DCHECK(!isolate()->heap()->InNewSpace(*map)); CheckPageFlagForMap(map, MemoryChunk::kPointersToHereAreInterestingMask, zero, &done, Label::kNear); - // Delay the initialization of |address| and |value| for the stub until it's - // known that the will be needed. Up until this point their values are not - // needed since they are embedded in the operands of instructions that need - // them. - lea(address, FieldOperand(object, HeapObject::kMapOffset)); - mov(value, Immediate(map)); RecordWriteStub stub(isolate(), object, value, address, OMIT_REMEMBERED_SET, save_fp); CallStub(&stub); bind(&done); + // Count number of write barriers in generated code. + isolate()->counters()->write_barriers_static()->Increment(); + IncrementCounter(isolate()->counters()->write_barriers_dynamic(), 1); + // Clobber clobbered input registers when running with the debug-code flag // turned on to provoke errors.
if (emit_debug_code()) { @@ -632,15 +539,17 @@ void MacroAssembler::RecordWriteForMap( } -void MacroAssembler::RecordWrite(Register object, - Register address, - Register value, - SaveFPRegsMode fp_mode, - RememberedSetAction remembered_set_action, - SmiCheck smi_check) { - ASSERT(!object.is(value)); - ASSERT(!object.is(address)); - ASSERT(!value.is(address)); +void MacroAssembler::RecordWrite( + Register object, + Register address, + Register value, + SaveFPRegsMode fp_mode, + RememberedSetAction remembered_set_action, + SmiCheck smi_check, + PointersToHereCheck pointers_to_here_check_for_value) { + DCHECK(!object.is(value)); + DCHECK(!object.is(address)); + DCHECK(!value.is(address)); AssertNotSmi(object); if (remembered_set_action == OMIT_REMEMBERED_SET && @@ -656,10 +565,6 @@ void MacroAssembler::RecordWrite(Register object, bind(&ok); } - // Count number of write barriers in generated code. - isolate()->counters()->write_barriers_static()->Increment(); - IncrementCounter(isolate()->counters()->write_barriers_dynamic(), 1); - // First, check if a write barrier is even needed. The tests below // catch stores of Smis and stores into young gen. Label done; @@ -669,12 +574,14 @@ void MacroAssembler::RecordWrite(Register object, JumpIfSmi(value, &done, Label::kNear); } - CheckPageFlag(value, - value, // Used as scratch. - MemoryChunk::kPointersToHereAreInterestingMask, - zero, - &done, - Label::kNear); + if (pointers_to_here_check_for_value != kPointersToHereAreAlwaysInteresting) { + CheckPageFlag(value, + value, // Used as scratch. + MemoryChunk::kPointersToHereAreInterestingMask, + zero, + &done, + Label::kNear); + } CheckPageFlag(object, value, // Used as scratch. MemoryChunk::kPointersFromHereAreInterestingMask, @@ -688,6 +595,10 @@ void MacroAssembler::RecordWrite(Register object, bind(&done); + // Count number of write barriers in generated code. + isolate()->counters()->write_barriers_static()->Increment(); + IncrementCounter(isolate()->counters()->write_barriers_dynamic(), 1); + // Clobber clobbered registers when running with the debug-code flag // turned on to provoke errors. 
if (emit_debug_code()) { @@ -799,7 +710,6 @@ void MacroAssembler::StoreNumberToDoubleElements( Register scratch1, XMMRegister scratch2, Label* fail, - bool specialize_for_processor, int elements_offset) { Label smi_value, done, maybe_nan, not_nan, is_nan, have_double_value; JumpIfSmi(maybe_number, &smi_value, Label::kNear); @@ -818,19 +728,11 @@ void MacroAssembler::StoreNumberToDoubleElements( bind(¬_nan); ExternalReference canonical_nan_reference = ExternalReference::address_of_canonical_non_hole_nan(); - if (CpuFeatures::IsSupported(SSE2) && specialize_for_processor) { - CpuFeatureScope use_sse2(this, SSE2); - movsd(scratch2, FieldOperand(maybe_number, HeapNumber::kValueOffset)); - bind(&have_double_value); - movsd(FieldOperand(elements, key, times_4, - FixedDoubleArray::kHeaderSize - elements_offset), - scratch2); - } else { - fld_d(FieldOperand(maybe_number, HeapNumber::kValueOffset)); - bind(&have_double_value); - fstp_d(FieldOperand(elements, key, times_4, - FixedDoubleArray::kHeaderSize - elements_offset)); - } + movsd(scratch2, FieldOperand(maybe_number, HeapNumber::kValueOffset)); + bind(&have_double_value); + movsd(FieldOperand(elements, key, times_4, + FixedDoubleArray::kHeaderSize - elements_offset), + scratch2); jmp(&done); bind(&maybe_nan); @@ -840,12 +742,7 @@ void MacroAssembler::StoreNumberToDoubleElements( cmp(FieldOperand(maybe_number, HeapNumber::kValueOffset), Immediate(0)); j(zero, ¬_nan); bind(&is_nan); - if (CpuFeatures::IsSupported(SSE2) && specialize_for_processor) { - CpuFeatureScope use_sse2(this, SSE2); - movsd(scratch2, Operand::StaticVariable(canonical_nan_reference)); - } else { - fld_d(Operand::StaticVariable(canonical_nan_reference)); - } + movsd(scratch2, Operand::StaticVariable(canonical_nan_reference)); jmp(&have_double_value, Label::kNear); bind(&smi_value); @@ -853,19 +750,10 @@ void MacroAssembler::StoreNumberToDoubleElements( // Preserve original value. mov(scratch1, maybe_number); SmiUntag(scratch1); - if (CpuFeatures::IsSupported(SSE2) && specialize_for_processor) { - CpuFeatureScope fscope(this, SSE2); - Cvtsi2sd(scratch2, scratch1); - movsd(FieldOperand(elements, key, times_4, - FixedDoubleArray::kHeaderSize - elements_offset), - scratch2); - } else { - push(scratch1); - fild_s(Operand(esp, 0)); - pop(scratch1); - fstp_d(FieldOperand(elements, key, times_4, - FixedDoubleArray::kHeaderSize - elements_offset)); - } + Cvtsi2sd(scratch2, scratch1); + movsd(FieldOperand(elements, key, times_4, + FixedDoubleArray::kHeaderSize - elements_offset), + scratch2); bind(&done); } @@ -946,16 +834,8 @@ void MacroAssembler::IsInstanceJSObjectType(Register map, void MacroAssembler::FCmp() { - if (CpuFeatures::IsSupported(CMOV)) { - fucomip(); - fstp(0); - } else { - fucompp(); - push(eax); - fnstsw_ax(); - sahf(); - pop(eax); - } + fucomip(); + fstp(0); } @@ -1027,26 +907,27 @@ void MacroAssembler::AssertNotSmi(Register object) { } -void MacroAssembler::Prologue(PrologueFrameMode frame_mode) { - if (frame_mode == BUILD_STUB_FRAME) { +void MacroAssembler::StubPrologue() { + push(ebp); // Caller's frame pointer. + mov(ebp, esp); + push(esi); // Callee's context. + push(Immediate(Smi::FromInt(StackFrame::STUB))); +} + + +void MacroAssembler::Prologue(bool code_pre_aging) { + PredictableCodeSizeScope predictible_code_size_scope(this, + kNoCodeAgeSequenceLength); + if (code_pre_aging) { + // Pre-age the code. 
+ call(isolate()->builtins()->MarkCodeAsExecutedOnce(), + RelocInfo::CODE_AGE_SEQUENCE); + Nop(kNoCodeAgeSequenceLength - Assembler::kCallInstructionLength); + } else { push(ebp); // Caller's frame pointer. mov(ebp, esp); push(esi); // Callee's context. - push(Immediate(Smi::FromInt(StackFrame::STUB))); - } else { - PredictableCodeSizeScope predictible_code_size_scope(this, - kNoCodeAgeSequenceLength); - if (isolate()->IsCodePreAgingActive()) { - // Pre-age the code. - call(isolate()->builtins()->MarkCodeAsExecutedOnce(), - RelocInfo::CODE_AGE_SEQUENCE); - Nop(kNoCodeAgeSequenceLength - Assembler::kCallInstructionLength); - } else { - push(ebp); // Caller's frame pointer. - mov(ebp, esp); - push(esi); // Callee's context. - push(edi); // Callee's JS function. - } + push(edi); // Callee's JS function. } } @@ -1076,14 +957,14 @@ void MacroAssembler::LeaveFrame(StackFrame::Type type) { void MacroAssembler::EnterExitFramePrologue() { // Set up the frame structure on the stack. - ASSERT(ExitFrameConstants::kCallerSPDisplacement == +2 * kPointerSize); - ASSERT(ExitFrameConstants::kCallerPCOffset == +1 * kPointerSize); - ASSERT(ExitFrameConstants::kCallerFPOffset == 0 * kPointerSize); + DCHECK(ExitFrameConstants::kCallerSPDisplacement == +2 * kPointerSize); + DCHECK(ExitFrameConstants::kCallerPCOffset == +1 * kPointerSize); + DCHECK(ExitFrameConstants::kCallerFPOffset == 0 * kPointerSize); push(ebp); mov(ebp, esp); // Reserve room for entry stack pointer and push the code object. - ASSERT(ExitFrameConstants::kSPOffset == -1 * kPointerSize); + DCHECK(ExitFrameConstants::kSPOffset == -1 * kPointerSize); push(Immediate(0)); // Saved entry sp, patched before call. push(Immediate(CodeObject())); // Accessed from ExitFrame::code_slot. @@ -1098,11 +979,11 @@ void MacroAssembler::EnterExitFramePrologue() { void MacroAssembler::EnterExitFrameEpilogue(int argc, bool save_doubles) { // Optionally save all XMM registers. if (save_doubles) { - CpuFeatureScope scope(this, SSE2); - int space = XMMRegister::kNumRegisters * kDoubleSize + argc * kPointerSize; + int space = XMMRegister::kMaxNumRegisters * kDoubleSize + + argc * kPointerSize; sub(esp, Immediate(space)); const int offset = -2 * kPointerSize; - for (int i = 0; i < XMMRegister::kNumRegisters; i++) { + for (int i = 0; i < XMMRegister::kMaxNumRegisters; i++) { XMMRegister reg = XMMRegister::from_code(i); movsd(Operand(ebp, offset - ((i + 1) * kDoubleSize)), reg); } @@ -1111,9 +992,9 @@ void MacroAssembler::EnterExitFrameEpilogue(int argc, bool save_doubles) { } // Get the required frame alignment for the OS. - const int kFrameAlignment = OS::ActivationFrameAlignment(); + const int kFrameAlignment = base::OS::ActivationFrameAlignment(); if (kFrameAlignment > 0) { - ASSERT(IsPowerOf2(kFrameAlignment)); + DCHECK(IsPowerOf2(kFrameAlignment)); and_(esp, -kFrameAlignment); } @@ -1144,9 +1025,8 @@ void MacroAssembler::EnterApiExitFrame(int argc) { void MacroAssembler::LeaveExitFrame(bool save_doubles) { // Optionally restore all XMM registers. 
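// Review sketch: the save loop in EnterExitFrameEpilogue above and the restore loop
// below share one frame layout: two pointer-sized header slots (saved entry sp and the
// code object) sit at ebp-4 and ebp-8, and XMM register i is spilled to the 8-byte slot
// at ebp + offset - (i + 1) * kDoubleSize with offset == -2 * kPointerSize. A hedged
// sketch of that slot computation (inert under #if 0; XmmSaveSlotOffset is
// illustrative):
#if 0
inline int XmmSaveSlotOffset(int reg_code) {
  const int offset = -2 * kPointerSize;          // skip the two header slots
  return offset - (reg_code + 1) * kDoubleSize;  // xmm0 -> ebp-16, xmm1 -> ebp-24, ...
}
#endif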
if (save_doubles) { - CpuFeatureScope scope(this, SSE2); const int offset = -2 * kPointerSize; - for (int i = 0; i < XMMRegister::kNumRegisters; i++) { + for (int i = 0; i < XMMRegister::kMaxNumRegisters; i++) { XMMRegister reg = XMMRegister::from_code(i); movsd(reg, Operand(ebp, offset - ((i + 1) * kDoubleSize))); } @@ -1339,9 +1219,9 @@ void MacroAssembler::CheckAccessGlobalProxy(Register holder_reg, Label* miss) { Label same_contexts; - ASSERT(!holder_reg.is(scratch1)); - ASSERT(!holder_reg.is(scratch2)); - ASSERT(!scratch1.is(scratch2)); + DCHECK(!holder_reg.is(scratch1)); + DCHECK(!holder_reg.is(scratch2)); + DCHECK(!scratch1.is(scratch2)); // Load current lexical context from the stack frame. mov(scratch1, Operand(ebp, StandardFrameConstants::kContextOffset)); @@ -1400,13 +1280,13 @@ // Compute the hash code from the untagged key. This must be kept in sync with -// ComputeIntegerHash in utils.h and KeyedLoadGenericElementStub in +// ComputeIntegerHash in utils.h and KeyedLoadGenericStub in // code-stub-hydrogen.cc // // Note: r0 will contain hash code void MacroAssembler::GetNumberHash(Register r0, Register scratch) { // Xor original key with a seed. - if (Serializer::enabled(isolate())) { + if (serializer_enabled()) { ExternalReference roots_array_start = ExternalReference::roots_array_start(isolate()); mov(scratch, Immediate(Heap::kHashSeedRootIndex)); @@ -1487,7 +1367,7 @@ void MacroAssembler::LoadFromNumberDictionary(Label* miss, and_(r2, r1); // Scale the index by multiplying by the entry size. - ASSERT(SeededNumberDictionary::kEntrySize == 3); + DCHECK(SeededNumberDictionary::kEntrySize == 3); lea(r2, Operand(r2, r2, times_2, 0)); // r2 = r2 * 3 // Check if the key matches. @@ -1506,7 +1386,7 @@ // Check that the value is a normal property. const int kDetailsOffset = SeededNumberDictionary::kElementsStartOffset + 2 * kPointerSize; - ASSERT_EQ(NORMAL, 0); + DCHECK_EQ(NORMAL, 0); test(FieldOperand(elements, r2, times_pointer_size, kDetailsOffset), Immediate(PropertyDetails::TypeField::kMask << kSmiTagSize)); j(not_zero, miss); @@ -1527,7 +1407,7 @@ void MacroAssembler::LoadAllocationTopHelper(Register result, // Just return if allocation top is already known. if ((flags & RESULT_CONTAINS_TOP) != 0) { // No use of scratch if allocation top is provided. - ASSERT(scratch.is(no_reg)); + DCHECK(scratch.is(no_reg)); #ifdef DEBUG // Assert that result actually contains top on entry. cmp(result, Operand::StaticVariable(allocation_top)); @@ -1572,8 +1452,8 @@ void MacroAssembler::Allocate(int object_size, Register scratch, Label* gc_required, AllocationFlags flags) { - ASSERT((flags & (RESULT_CONTAINS_TOP | SIZE_IN_WORDS)) == 0); - ASSERT(object_size <= Page::kMaxRegularHeapObjectSize); + DCHECK((flags & (RESULT_CONTAINS_TOP | SIZE_IN_WORDS)) == 0); + DCHECK(object_size <= Page::kMaxRegularHeapObjectSize); if (!FLAG_inline_new) { if (emit_debug_code()) { // Trash the registers to simulate an allocation failure. @@ -1588,7 +1468,7 @@ jmp(gc_required); return; } - ASSERT(!result.is(result_end)); + DCHECK(!result.is(result_end)); // Load address of new object into result. LoadAllocationTopHelper(result, scratch, flags); @@ -1599,8 +1479,8 @@ // Align the next allocation.
Storing the filler map without checking top is // safe in new-space because the limit of the heap is aligned there. if ((flags & DOUBLE_ALIGNMENT) != 0) { - ASSERT((flags & PRETENURE_OLD_POINTER_SPACE) == 0); - ASSERT(kPointerAlignment * 2 == kDoubleAlignment); + DCHECK((flags & PRETENURE_OLD_POINTER_SPACE) == 0); + DCHECK(kPointerAlignment * 2 == kDoubleAlignment); Label aligned; test(result, Immediate(kDoubleAlignmentMask)); j(zero, &aligned, Label::kNear); @@ -1636,7 +1516,7 @@ void MacroAssembler::Allocate(int object_size, sub(result, Immediate(object_size)); } } else if (tag_result) { - ASSERT(kHeapObjectTag == 1); + DCHECK(kHeapObjectTag == 1); inc(result); } } @@ -1651,7 +1531,7 @@ void MacroAssembler::Allocate(int header_size, Register scratch, Label* gc_required, AllocationFlags flags) { - ASSERT((flags & SIZE_IN_WORDS) == 0); + DCHECK((flags & SIZE_IN_WORDS) == 0); if (!FLAG_inline_new) { if (emit_debug_code()) { // Trash the registers to simulate an allocation failure. @@ -1665,7 +1545,7 @@ void MacroAssembler::Allocate(int header_size, jmp(gc_required); return; } - ASSERT(!result.is(result_end)); + DCHECK(!result.is(result_end)); // Load address of new object into result. LoadAllocationTopHelper(result, scratch, flags); @@ -1676,8 +1556,8 @@ void MacroAssembler::Allocate(int header_size, // Align the next allocation. Storing the filler map without checking top is // safe in new-space because the limit of the heap is aligned there. if ((flags & DOUBLE_ALIGNMENT) != 0) { - ASSERT((flags & PRETENURE_OLD_POINTER_SPACE) == 0); - ASSERT(kPointerAlignment * 2 == kDoubleAlignment); + DCHECK((flags & PRETENURE_OLD_POINTER_SPACE) == 0); + DCHECK(kPointerAlignment * 2 == kDoubleAlignment); Label aligned; test(result, Immediate(kDoubleAlignmentMask)); j(zero, &aligned, Label::kNear); @@ -1698,11 +1578,11 @@ void MacroAssembler::Allocate(int header_size, STATIC_ASSERT(static_cast<ScaleFactor>(times_2 - 1) == times_1); STATIC_ASSERT(static_cast<ScaleFactor>(times_4 - 1) == times_2); STATIC_ASSERT(static_cast<ScaleFactor>(times_8 - 1) == times_4); - ASSERT(element_size >= times_2); - ASSERT(kSmiTagSize == 1); + DCHECK(element_size >= times_2); + DCHECK(kSmiTagSize == 1); element_size = static_cast<ScaleFactor>(element_size - 1); } else { - ASSERT(element_count_type == REGISTER_VALUE_IS_INT32); + DCHECK(element_count_type == REGISTER_VALUE_IS_INT32); } lea(result_end, Operand(element_count, element_size, header_size)); add(result_end, result); @@ -1711,7 +1591,7 @@ void MacroAssembler::Allocate(int header_size, j(above, gc_required); if ((flags & TAG_OBJECT) != 0) { - ASSERT(kHeapObjectTag == 1); + DCHECK(kHeapObjectTag == 1); inc(result); } @@ -1726,7 +1606,7 @@ void MacroAssembler::Allocate(Register object_size, Register scratch, Label* gc_required, AllocationFlags flags) { - ASSERT((flags & (RESULT_CONTAINS_TOP | SIZE_IN_WORDS)) == 0); + DCHECK((flags & (RESULT_CONTAINS_TOP | SIZE_IN_WORDS)) == 0); if (!FLAG_inline_new) { if (emit_debug_code()) { // Trash the registers to simulate an allocation failure. @@ -1740,7 +1620,7 @@ void MacroAssembler::Allocate(Register object_size, jmp(gc_required); return; } - ASSERT(!result.is(result_end)); + DCHECK(!result.is(result_end)); // Load address of new object into result. LoadAllocationTopHelper(result, scratch, flags); @@ -1751,8 +1631,8 @@ void MacroAssembler::Allocate(Register object_size, // Align the next allocation. Storing the filler map without checking top is // safe in new-space because the limit of the heap is aligned there. 
if ((flags & DOUBLE_ALIGNMENT) != 0) { - ASSERT((flags & PRETENURE_OLD_POINTER_SPACE) == 0); - ASSERT(kPointerAlignment * 2 == kDoubleAlignment); + DCHECK((flags & PRETENURE_OLD_POINTER_SPACE) == 0); + DCHECK(kPointerAlignment * 2 == kDoubleAlignment); Label aligned; test(result, Immediate(kDoubleAlignmentMask)); j(zero, &aligned, Label::kNear); @@ -1777,7 +1657,7 @@ void MacroAssembler::Allocate(Register object_size, // Tag result if requested. if ((flags & TAG_OBJECT) != 0) { - ASSERT(kHeapObjectTag == 1); + DCHECK(kHeapObjectTag == 1); inc(result); } @@ -1803,14 +1683,18 @@ void MacroAssembler::UndoAllocationInNewSpace(Register object) { void MacroAssembler::AllocateHeapNumber(Register result, Register scratch1, Register scratch2, - Label* gc_required) { + Label* gc_required, + MutableMode mode) { // Allocate heap number in new space. Allocate(HeapNumber::kSize, result, scratch1, scratch2, gc_required, TAG_OBJECT); + Handle<Map> map = mode == MUTABLE + ? isolate()->factory()->mutable_heap_number_map() + : isolate()->factory()->heap_number_map(); + // Set the map. - mov(FieldOperand(result, HeapObject::kMapOffset), - Immediate(isolate()->factory()->heap_number_map())); + mov(FieldOperand(result, HeapObject::kMapOffset), Immediate(map)); } @@ -1822,8 +1706,8 @@ void MacroAssembler::AllocateTwoByteString(Register result, Label* gc_required) { // Calculate the number of bytes needed for the characters in the string while // observing object alignment. - ASSERT((SeqTwoByteString::kHeaderSize & kObjectAlignmentMask) == 0); - ASSERT(kShortSize == 2); + DCHECK((SeqTwoByteString::kHeaderSize & kObjectAlignmentMask) == 0); + DCHECK(kShortSize == 2); // scratch1 = length * 2 + kObjectAlignmentMask. lea(scratch1, Operand(length, length, times_1, kObjectAlignmentMask)); and_(scratch1, Immediate(~kObjectAlignmentMask)); @@ -1858,9 +1742,9 @@ void MacroAssembler::AllocateAsciiString(Register result, Label* gc_required) { // Calculate the number of bytes needed for the characters in the string while // observing object alignment. - ASSERT((SeqOneByteString::kHeaderSize & kObjectAlignmentMask) == 0); + DCHECK((SeqOneByteString::kHeaderSize & kObjectAlignmentMask) == 0); mov(scratch1, length); - ASSERT(kCharSize == 1); + DCHECK(kCharSize == 1); add(scratch1, Immediate(kObjectAlignmentMask)); and_(scratch1, Immediate(~kObjectAlignmentMask)); @@ -1891,7 +1775,7 @@ void MacroAssembler::AllocateAsciiString(Register result, Register scratch1, Register scratch2, Label* gc_required) { - ASSERT(length > 0); + DCHECK(length > 0); // Allocate ASCII string in new space. Allocate(SeqOneByteString::SizeFor(length), result, scratch1, scratch2, @@ -1925,32 +1809,13 @@ void MacroAssembler::AllocateAsciiConsString(Register result, Register scratch1, Register scratch2, Label* gc_required) { - Label allocate_new_space, install_map; - AllocationFlags flags = TAG_OBJECT; - - ExternalReference high_promotion_mode = ExternalReference:: - new_space_high_promotion_mode_active_address(isolate()); - - test(Operand::StaticVariable(high_promotion_mode), Immediate(1)); - j(zero, &allocate_new_space); - - Allocate(ConsString::kSize, - result, - scratch1, - scratch2, - gc_required, - static_cast<AllocationFlags>(flags | PRETENURE_OLD_POINTER_SPACE)); - jmp(&install_map); - - bind(&allocate_new_space); Allocate(ConsString::kSize, result, scratch1, scratch2, gc_required, - flags); + TAG_OBJECT); - bind(&install_map); // Set the map. The other fields are left uninitialized. 
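// Review sketch: the Allocate() paths above tag a raw allocation address with a bare
// inc(result), which works because kHeapObjectTag == 1 and allocation keeps addresses
// pointer-aligned (low bit zero); the DOUBLE_ALIGNMENT case additionally bumps a
// 4-mod-8 address by one pointer slot, filling that slot with the one-pointer filler
// map. Hedged sketch of both computations (inert under #if 0; helper names are
// illustrative):
#if 0
inline intptr_t TagHeapObjectSketch(intptr_t addr) {
  return addr + kHeapObjectTag;  // equivalent to addr | 1 on an aligned address
}
inline intptr_t AlignForDoubleSketch(intptr_t addr) {
  // Mirrors the test(result, Immediate(kDoubleAlignmentMask)) check above.
  return (addr & kDoubleAlignmentMask) ? addr + kPointerSize : addr;
}
#endif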
mov(FieldOperand(result, HeapObject::kMapOffset), Immediate(isolate()->factory()->cons_ascii_string_map())); @@ -1998,9 +1863,9 @@ void MacroAssembler::CopyBytes(Register source, Register length, Register scratch) { Label short_loop, len4, len8, len12, done, short_string; - ASSERT(source.is(esi)); - ASSERT(destination.is(edi)); - ASSERT(length.is(ecx)); + DCHECK(source.is(esi)); + DCHECK(destination.is(edi)); + DCHECK(length.is(ecx)); cmp(length, Immediate(4)); j(below, &short_string, Label::kNear); @@ -2070,7 +1935,7 @@ void MacroAssembler::BooleanBitTest(Register object, int field_offset, int bit_index) { bit_index += kSmiTagSize + kSmiShiftSize; - ASSERT(IsPowerOf2(kBitsPerByte)); + DCHECK(IsPowerOf2(kBitsPerByte)); int byte_index = bit_index / kBitsPerByte; int byte_bit_index = bit_index & (kBitsPerByte - 1); test_b(FieldOperand(object, field_offset + byte_index), @@ -2111,27 +1976,27 @@ void MacroAssembler::TryGetFunctionPrototype(Register function, Register scratch, Label* miss, bool miss_on_bound_function) { - // Check that the receiver isn't a smi. - JumpIfSmi(function, miss); + Label non_instance; + if (miss_on_bound_function) { + // Check that the receiver isn't a smi. + JumpIfSmi(function, miss); - // Check that the function really is a function. - CmpObjectType(function, JS_FUNCTION_TYPE, result); - j(not_equal, miss); + // Check that the function really is a function. + CmpObjectType(function, JS_FUNCTION_TYPE, result); + j(not_equal, miss); - if (miss_on_bound_function) { // If a bound function, go to miss label. mov(scratch, FieldOperand(function, JSFunction::kSharedFunctionInfoOffset)); BooleanBitTest(scratch, SharedFunctionInfo::kCompilerHintsOffset, SharedFunctionInfo::kBoundFunction); j(not_zero, miss); - } - // Make sure that the function has an instance prototype. - Label non_instance; - movzx_b(scratch, FieldOperand(result, Map::kBitFieldOffset)); - test(scratch, Immediate(1 << Map::kHasNonInstancePrototype)); - j(not_zero, &non_instance); + // Make sure that the function has an instance prototype. + movzx_b(scratch, FieldOperand(result, Map::kBitFieldOffset)); + test(scratch, Immediate(1 << Map::kHasNonInstancePrototype)); + j(not_zero, &non_instance); + } // Get the prototype or initial map from the function. mov(result, @@ -2150,12 +2015,15 @@ void MacroAssembler::TryGetFunctionPrototype(Register function, // Get the prototype from the initial map. mov(result, FieldOperand(result, Map::kPrototypeOffset)); - jmp(&done); - // Non-instance prototype: Fetch prototype from constructor field - // in initial map. - bind(&non_instance); - mov(result, FieldOperand(result, Map::kConstructorOffset)); + if (miss_on_bound_function) { + jmp(&done); + + // Non-instance prototype: Fetch prototype from constructor field + // in initial map. + bind(&non_instance); + mov(result, FieldOperand(result, Map::kConstructorOffset)); + } // All done. bind(&done); @@ -2163,7 +2031,7 @@ void MacroAssembler::TryGetFunctionPrototype(Register function, void MacroAssembler::CallStub(CodeStub* stub, TypeFeedbackId ast_id) { - ASSERT(AllowThisStubCall(stub)); // Calls are not allowed in some stubs. + DCHECK(AllowThisStubCall(stub)); // Calls are not allowed in some stubs. 
call(stub->GetCode(), RelocInfo::CODE_TARGET, ast_id); } @@ -2174,7 +2042,7 @@ void MacroAssembler::TailCallStub(CodeStub* stub) { void MacroAssembler::StubReturn(int argc) { - ASSERT(argc >= 1 && generating_stub()); + DCHECK(argc >= 1 && generating_stub()); ret((argc - 1) * kPointerSize); } @@ -2188,18 +2056,12 @@ void MacroAssembler::IndexFromHash(Register hash, Register index) { // The assert checks that the constants for the maximum number of digits // for an array index cached in the hash field and the number of bits // reserved for it do not conflict. - ASSERT(TenToThe(String::kMaxCachedArrayIndexLength) < + DCHECK(TenToThe(String::kMaxCachedArrayIndexLength) < (1 << String::kArrayIndexValueBits)); - // We want the smi-tagged index in key. kArrayIndexValueMask has zeros in - // the low kHashShift bits. - and_(hash, String::kArrayIndexValueMask); - STATIC_ASSERT(String::kHashShift >= kSmiTagSize && kSmiTag == 0); - if (String::kHashShift > kSmiTagSize) { - shr(hash, String::kHashShift - kSmiTagSize); - } if (!index.is(hash)) { mov(index, hash); } + DecodeFieldToSmi<String::ArrayIndexValueBits>(index); } @@ -2217,10 +2079,7 @@ void MacroAssembler::CallRuntime(const Runtime::Function* f, // smarter. Move(eax, Immediate(num_arguments)); mov(ebx, Immediate(ExternalReference(f, isolate()))); - CEntryStub ces(isolate(), - 1, - CpuFeatures::IsSupported(SSE2) ? save_doubles - : kDontSaveFPRegs); + CEntryStub ces(isolate(), 1, save_doubles); CallStub(&ces); } @@ -2283,7 +2142,7 @@ void MacroAssembler::CallApiFunctionAndReturn( ExternalReference level_address = ExternalReference::handle_scope_level_address(isolate()); - ASSERT(edx.is(function_address)); + DCHECK(edx.is(function_address)); // Allocate HandleScope in callee-save registers. mov(ebx, Operand::StaticVariable(next_address)); mov(edi, Operand::StaticVariable(limit_address)); @@ -2400,7 +2259,7 @@ void MacroAssembler::CallApiFunctionAndReturn( bind(&promote_scheduled_exception); { FrameScope frame(this, StackFrame::INTERNAL); - CallRuntime(Runtime::kHiddenPromoteScheduledException, 0); + CallRuntime(Runtime::kPromoteScheduledException, 0); } jmp(&exception_handled); @@ -2440,7 +2299,7 @@ void MacroAssembler::InvokePrologue(const ParameterCount& expected, *definitely_mismatches = false; Label invoke; if (expected.is_immediate()) { - ASSERT(actual.is_immediate()); + DCHECK(actual.is_immediate()); if (expected.immediate() == actual.immediate()) { definitely_matches = true; } else { @@ -2464,15 +2323,15 @@ // IC mechanism. cmp(expected.reg(), actual.immediate()); j(equal, &invoke); - ASSERT(expected.reg().is(ebx)); + DCHECK(expected.reg().is(ebx)); mov(eax, actual.immediate()); } else if (!expected.reg().is(actual.reg())) { // Both expected and actual are in (different) registers. This // is the case when we invoke functions using call and apply. cmp(expected.reg(), actual.reg()); j(equal, &invoke); - ASSERT(actual.reg().is(eax)); - ASSERT(expected.reg().is(ebx)); + DCHECK(actual.reg().is(eax)); + DCHECK(expected.reg().is(ebx)); } } @@ -2507,7 +2366,7 @@ void MacroAssembler::InvokeCode(const Operand& code, InvokeFlag flag, const CallWrapper& call_wrapper) { // You can't call a function without a valid frame.
- ASSERT(flag == JUMP_FUNCTION || has_frame()); + DCHECK(flag == JUMP_FUNCTION || has_frame()); Label done; bool definitely_mismatches = false; @@ -2520,7 +2379,7 @@ void MacroAssembler::InvokeCode(const Operand& code, call(code); call_wrapper.AfterCall(); } else { - ASSERT(flag == JUMP_FUNCTION); + DCHECK(flag == JUMP_FUNCTION); jmp(code); } bind(&done); @@ -2533,9 +2392,9 @@ void MacroAssembler::InvokeFunction(Register fun, InvokeFlag flag, const CallWrapper& call_wrapper) { // You can't call a function without a valid frame. - ASSERT(flag == JUMP_FUNCTION || has_frame()); + DCHECK(flag == JUMP_FUNCTION || has_frame()); - ASSERT(fun.is(edi)); + DCHECK(fun.is(edi)); mov(edx, FieldOperand(edi, JSFunction::kSharedFunctionInfoOffset)); mov(esi, FieldOperand(edi, JSFunction::kContextOffset)); mov(ebx, FieldOperand(edx, SharedFunctionInfo::kFormalParameterCountOffset)); @@ -2553,9 +2412,9 @@ void MacroAssembler::InvokeFunction(Register fun, InvokeFlag flag, const CallWrapper& call_wrapper) { // You can't call a function without a valid frame. - ASSERT(flag == JUMP_FUNCTION || has_frame()); + DCHECK(flag == JUMP_FUNCTION || has_frame()); - ASSERT(fun.is(edi)); + DCHECK(fun.is(edi)); mov(esi, FieldOperand(edi, JSFunction::kContextOffset)); InvokeCode(FieldOperand(edi, JSFunction::kCodeEntryOffset), @@ -2577,7 +2436,7 @@ void MacroAssembler::InvokeBuiltin(Builtins::JavaScript id, InvokeFlag flag, const CallWrapper& call_wrapper) { // You can't call a builtin without a valid frame. - ASSERT(flag == JUMP_FUNCTION || has_frame()); + DCHECK(flag == JUMP_FUNCTION || has_frame()); // Rely on the assertion to check that the number of provided // arguments match the expected number of arguments. Fake a @@ -2600,7 +2459,7 @@ void MacroAssembler::GetBuiltinFunction(Register target, void MacroAssembler::GetBuiltinEntry(Register target, Builtins::JavaScript id) { - ASSERT(!target.is(edi)); + DCHECK(!target.is(edi)); // Load the JavaScript builtin function from the builtins object. GetBuiltinFunction(edi, id); // Load the code entry point from the function into the target register. @@ -2713,7 +2572,7 @@ int MacroAssembler::SafepointRegisterStackIndex(int reg_code) { // The registers are pushed starting with the lowest encoding, // which means that lowest encodings are furthest away from // the stack pointer. - ASSERT(reg_code >= 0 && reg_code < kNumSafepointRegisters); + DCHECK(reg_code >= 0 && reg_code < kNumSafepointRegisters); return kNumSafepointRegisters - reg_code - 1; } @@ -2769,27 +2628,6 @@ void MacroAssembler::Ret(int bytes_dropped, Register scratch) { } -void MacroAssembler::VerifyX87StackDepth(uint32_t depth) { - // Make sure the floating point stack is either empty or has depth items. - ASSERT(depth <= 7); - // This is very expensive. - ASSERT(FLAG_debug_code && FLAG_enable_slow_asserts); - - // The top-of-stack (tos) is 7 if there is one item pushed. - int tos = (8 - depth) % 8; - const int kTopMask = 0x3800; - push(eax); - fwait(); - fnstsw_ax(); - and_(eax, kTopMask); - shr(eax, 11); - cmp(eax, Immediate(tos)); - Check(equal, kUnexpectedFPUStackDepthAfterInstruction); - fnclex(); - pop(eax); -} - - void MacroAssembler::Drop(int stack_elements) { if (stack_elements > 0) { add(esp, Immediate(stack_elements * kPointerSize)); @@ -2820,7 +2658,6 @@ void MacroAssembler::Move(const Operand& dst, const Immediate& x) { void MacroAssembler::Move(XMMRegister dst, double val) { // TODO(titzer): recognize double constants with ExternalReferences. 
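// Review sketch: Move() below branches on the raw bit pattern of val rather than its
// numeric value, so +0.0 (all bits zero) takes the cheap xorps path while -0.0 (sign
// bit set) does not. A hedged sketch of what BitCast<uint64_t, double> amounts to,
// written with memcpy to stay strict-aliasing safe (inert under #if 0;
// DoubleBitsSketch is illustrative):
#if 0
#include <cstdint>
#include <cstring>
inline uint64_t DoubleBitsSketch(double value) {
  uint64_t bits;
  std::memcpy(&bits, &value, sizeof bits);  // reinterpret the bits, do not convert
  return bits;  // DoubleBitsSketch(0.0) == 0; -0.0 yields 0x8000000000000000
}
#endif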
- CpuFeatureScope scope(this, SSE2); uint64_t int_val = BitCast<uint64_t, double>(val); if (int_val == 0) { xorps(dst, dst); @@ -2843,7 +2680,7 @@ void MacroAssembler::SetCounter(StatsCounter* counter, int value) { void MacroAssembler::IncrementCounter(StatsCounter* counter, int value) { - ASSERT(value > 0); + DCHECK(value > 0); if (FLAG_native_code_counters && counter->Enabled()) { Operand operand = Operand::StaticVariable(ExternalReference(counter)); if (value == 1) { @@ -2856,7 +2693,7 @@ void MacroAssembler::IncrementCounter(StatsCounter* counter, int value) { void MacroAssembler::DecrementCounter(StatsCounter* counter, int value) { - ASSERT(value > 0); + DCHECK(value > 0); if (FLAG_native_code_counters && counter->Enabled()) { Operand operand = Operand::StaticVariable(ExternalReference(counter)); if (value == 1) { @@ -2871,7 +2708,7 @@ void MacroAssembler::DecrementCounter(StatsCounter* counter, int value) { void MacroAssembler::IncrementCounter(Condition cc, StatsCounter* counter, int value) { - ASSERT(value > 0); + DCHECK(value > 0); if (FLAG_native_code_counters && counter->Enabled()) { Label skip; j(NegateCondition(cc), &skip); @@ -2886,7 +2723,7 @@ void MacroAssembler::IncrementCounter(Condition cc, void MacroAssembler::DecrementCounter(Condition cc, StatsCounter* counter, int value) { - ASSERT(value > 0); + DCHECK(value > 0); if (FLAG_native_code_counters && counter->Enabled()) { Label skip; j(NegateCondition(cc), &skip); @@ -2932,10 +2769,10 @@ void MacroAssembler::Check(Condition cc, BailoutReason reason) { void MacroAssembler::CheckStackAlignment() { - int frame_alignment = OS::ActivationFrameAlignment(); + int frame_alignment = base::OS::ActivationFrameAlignment(); int frame_alignment_mask = frame_alignment - 1; if (frame_alignment > kPointerSize) { - ASSERT(IsPowerOf2(frame_alignment)); + DCHECK(IsPowerOf2(frame_alignment)); Label alignment_as_expected; test(esp, Immediate(frame_alignment_mask)); j(zero, &alignment_as_expected); @@ -2960,7 +2797,6 @@ void MacroAssembler::Abort(BailoutReason reason) { } #endif - push(eax); push(Immediate(reinterpret_cast<intptr_t>(Smi::FromInt(reason)))); // Disable stub call restrictions to always allow calls to abort. if (!has_frame_) { @@ -2976,40 +2812,6 @@ void MacroAssembler::Abort(BailoutReason reason) { } -void MacroAssembler::Throw(BailoutReason reason) { -#ifdef DEBUG - const char* msg = GetBailoutReason(reason); - if (msg != NULL) { - RecordComment("Throw message: "); - RecordComment(msg); - } -#endif - - push(eax); - push(Immediate(Smi::FromInt(reason))); - // Disable stub call restrictions to always allow calls to throw. - if (!has_frame_) { - // We don't actually want to generate a pile of code for this, so just - // claim there is a stack frame, without generating one. 
- FrameScope scope(this, StackFrame::NONE); - CallRuntime(Runtime::kHiddenThrowMessage, 1); - } else { - CallRuntime(Runtime::kHiddenThrowMessage, 1); - } - // will not return here - int3(); -} - - -void MacroAssembler::ThrowIf(Condition cc, BailoutReason reason) { - Label L; - j(NegateCondition(cc), &L); - Throw(reason); - // will not return here - bind(&L); -} - - void MacroAssembler::LoadInstanceDescriptors(Register map, Register descriptors) { mov(descriptors, FieldOperand(map, Map::kDescriptorsOffset)); @@ -3025,7 +2827,7 @@ void MacroAssembler::NumberOfOwnDescriptors(Register dst, Register map) { void MacroAssembler::LoadPowerOf2(XMMRegister dst, Register scratch, int power) { - ASSERT(is_uintn(power + HeapNumber::kExponentBias, + DCHECK(is_uintn(power + HeapNumber::kExponentBias, HeapNumber::kExponentBits)); mov(scratch, Immediate(power + HeapNumber::kExponentBias)); movd(dst, scratch); @@ -3080,15 +2882,8 @@ void MacroAssembler::LookupNumberStringCache(Register object, times_twice_pointer_size, FixedArray::kHeaderSize)); JumpIfSmi(probe, not_found); - if (CpuFeatures::IsSupported(SSE2)) { - CpuFeatureScope fscope(this, SSE2); - movsd(xmm0, FieldOperand(object, HeapNumber::kValueOffset)); - ucomisd(xmm0, FieldOperand(probe, HeapNumber::kValueOffset)); - } else { - fld_d(FieldOperand(object, HeapNumber::kValueOffset)); - fld_d(FieldOperand(probe, HeapNumber::kValueOffset)); - FCmp(); - } + movsd(xmm0, FieldOperand(object, HeapNumber::kValueOffset)); + ucomisd(xmm0, FieldOperand(probe, HeapNumber::kValueOffset)); j(parity_even, not_found); // Bail out if NaN is involved. j(not_equal, not_found); // The cache did not contain this value. jmp(&load_result_from_cache, Label::kNear); @@ -3152,7 +2947,7 @@ void MacroAssembler::JumpIfNotBothSequentialAsciiStrings(Register object1, const int kFlatAsciiStringTag = kStringTag | kOneByteStringTag | kSeqStringTag; // Interleave bits from both instance types and compare them in one check. - ASSERT_EQ(0, kFlatAsciiStringMask & (kFlatAsciiStringMask << 3)); + DCHECK_EQ(0, kFlatAsciiStringMask & (kFlatAsciiStringMask << 3)); and_(scratch1, kFlatAsciiStringMask); and_(scratch2, kFlatAsciiStringMask); lea(scratch1, Operand(scratch1, scratch2, times_8, 0)); @@ -3211,13 +3006,13 @@ void MacroAssembler::EmitSeqStringSetCharCheck(Register string, void MacroAssembler::PrepareCallCFunction(int num_arguments, Register scratch) { - int frame_alignment = OS::ActivationFrameAlignment(); + int frame_alignment = base::OS::ActivationFrameAlignment(); if (frame_alignment != 0) { // Make stack end at alignment and make room for num_arguments words // and the original value of esp. mov(scratch, esp); sub(esp, Immediate((num_arguments + 1) * kPointerSize)); - ASSERT(IsPowerOf2(frame_alignment)); + DCHECK(IsPowerOf2(frame_alignment)); and_(esp, -frame_alignment); mov(Operand(esp, num_arguments * kPointerSize), scratch); } else { @@ -3236,14 +3031,14 @@ void MacroAssembler::CallCFunction(ExternalReference function, void MacroAssembler::CallCFunction(Register function, int num_arguments) { - ASSERT(has_frame()); + DCHECK(has_frame()); // Check stack alignment. 
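// Review sketch: PrepareCallCFunction above aligns the stack with
// and_(esp, -frame_alignment). This works because the alignment is a power of two:
// -N in two's complement has the low log2(N) bits clear, so the AND rounds esp down,
// staying inside the just-reserved area on a downward-growing stack. Hedged sketch
// (inert under #if 0; the helper name is illustrative):
#if 0
inline uintptr_t AlignStackDownSketch(uintptr_t sp, uintptr_t alignment) {
  return sp & ~(alignment - 1);  // same as sp & -alignment for powers of two
}
#endif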
if (emit_debug_code()) { CheckStackAlignment(); } call(function); - if (OS::ActivationFrameAlignment() != 0) { + if (base::OS::ActivationFrameAlignment() != 0) { mov(esp, Operand(esp, num_arguments * kPointerSize)); } else { add(esp, Immediate(num_arguments * kPointerSize)); @@ -3251,15 +3046,33 @@ void MacroAssembler::CallCFunction(Register function, } -bool AreAliased(Register r1, Register r2, Register r3, Register r4) { - if (r1.is(r2)) return true; - if (r1.is(r3)) return true; - if (r1.is(r4)) return true; - if (r2.is(r3)) return true; - if (r2.is(r4)) return true; - if (r3.is(r4)) return true; - return false; +#ifdef DEBUG +bool AreAliased(Register reg1, + Register reg2, + Register reg3, + Register reg4, + Register reg5, + Register reg6, + Register reg7, + Register reg8) { + int n_of_valid_regs = reg1.is_valid() + reg2.is_valid() + + reg3.is_valid() + reg4.is_valid() + reg5.is_valid() + reg6.is_valid() + + reg7.is_valid() + reg8.is_valid(); + + RegList regs = 0; + if (reg1.is_valid()) regs |= reg1.bit(); + if (reg2.is_valid()) regs |= reg2.bit(); + if (reg3.is_valid()) regs |= reg3.bit(); + if (reg4.is_valid()) regs |= reg4.bit(); + if (reg5.is_valid()) regs |= reg5.bit(); + if (reg6.is_valid()) regs |= reg6.bit(); + if (reg7.is_valid()) regs |= reg7.bit(); + if (reg8.is_valid()) regs |= reg8.bit(); + int n_of_non_aliasing_regs = NumRegs(regs); + + return n_of_valid_regs != n_of_non_aliasing_regs; } +#endif CodePatcher::CodePatcher(byte* address, int size) @@ -3269,17 +3082,17 @@ CodePatcher::CodePatcher(byte* address, int size) // Create a new macro assembler pointing to the address of the code to patch. // The size is adjusted with kGap in order for the assembler to generate size // bytes of instructions without failing with buffer size constraints. - ASSERT(masm_.reloc_info_writer.pos() == address_ + size_ + Assembler::kGap); + DCHECK(masm_.reloc_info_writer.pos() == address_ + size_ + Assembler::kGap); } CodePatcher::~CodePatcher() { // Indicate that code has changed. - CPU::FlushICache(address_, size_); + CpuFeatures::FlushICache(address_, size_); // Check that the code was patched as expected. - ASSERT(masm_.pc_ == address_ + size_); - ASSERT(masm_.reloc_info_writer.pos() == address_ + size_ + Assembler::kGap); + DCHECK(masm_.pc_ == address_ + size_); + DCHECK(masm_.reloc_info_writer.pos() == address_ + size_ + Assembler::kGap); } @@ -3290,7 +3103,7 @@ void MacroAssembler::CheckPageFlag( Condition cc, Label* condition_met, Label::Distance condition_met_distance) { - ASSERT(cc == zero || cc == not_zero); + DCHECK(cc == zero || cc == not_zero); if (scratch.is(object)) { and_(scratch, Immediate(~Page::kPageAlignmentMask)); } else { @@ -3313,12 +3126,13 @@ void MacroAssembler::CheckPageFlagForMap( Condition cc, Label* condition_met, Label::Distance condition_met_distance) { - ASSERT(cc == zero || cc == not_zero); + DCHECK(cc == zero || cc == not_zero); Page* page = Page::FromAddress(map->address()); + DCHECK(!serializer_enabled()); // Serializer cannot match page_flags. ExternalReference reference(ExternalReference::page_flags(page)); // The inlined static address check of the page's flags relies // on maps never being compacted.
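// Aside: a standalone sketch (not part of this patch) of the counting trick
// in the rewritten AreAliased() above. Each valid register contributes one
// bit to a mask; if the number of valid registers differs from the number of
// distinct bits, at least two of them alias. Register codes are assumed to
// be < 32 here, mirroring the RegList bit set.
#include <bitset>

static bool AreAliasedSketch(const int* codes, int n, int invalid = -1) {
  int n_valid = 0;
  unsigned mask = 0;
  for (int i = 0; i < n; i++) {
    if (codes[i] == invalid) continue;  // plays the role of no_reg
    n_valid++;
    mask |= 1u << codes[i];
  }
  // std::bitset::count() stands in for NumRegs(regs).
  return n_valid != static_cast<int>(std::bitset<32>(mask).count());
}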
- ASSERT(!isolate()->heap()->mark_compact_collector()-> + DCHECK(!isolate()->heap()->mark_compact_collector()-> IsOnEvacuationCandidate(*map)); if (mask < (1 << kBitsPerByte)) { test_b(Operand::StaticVariable(reference), static_cast<uint8_t>(mask)); @@ -3335,7 +3149,7 @@ void MacroAssembler::CheckMapDeprecated(Handle<Map> map, if (map->CanBeDeprecated()) { mov(scratch, map); mov(scratch, FieldOperand(scratch, Map::kBitField3Offset)); - and_(scratch, Immediate(Smi::FromInt(Map::Deprecated::kMask))); + and_(scratch, Immediate(Map::Deprecated::kMask)); j(not_zero, if_deprecated); } } @@ -3349,7 +3163,7 @@ void MacroAssembler::JumpIfBlack(Register object, HasColor(object, scratch0, scratch1, on_black, on_black_near, 1, 0); // kBlackBitPattern. - ASSERT(strcmp(Marking::kBlackBitPattern, "10") == 0); + DCHECK(strcmp(Marking::kBlackBitPattern, "10") == 0); } @@ -3360,7 +3174,7 @@ void MacroAssembler::HasColor(Register object, Label::Distance has_color_distance, int first_bit, int second_bit) { - ASSERT(!AreAliased(object, bitmap_scratch, mask_scratch, ecx)); + DCHECK(!AreAliased(object, bitmap_scratch, mask_scratch, ecx)); GetMarkBits(object, bitmap_scratch, mask_scratch); @@ -3384,7 +3198,7 @@ void MacroAssembler::HasColor(Register object, void MacroAssembler::GetMarkBits(Register addr_reg, Register bitmap_reg, Register mask_reg) { - ASSERT(!AreAliased(addr_reg, mask_reg, bitmap_reg, ecx)); + DCHECK(!AreAliased(addr_reg, mask_reg, bitmap_reg, ecx)); mov(bitmap_reg, Immediate(~Page::kPageAlignmentMask)); and_(bitmap_reg, addr_reg); mov(ecx, addr_reg); @@ -3409,14 +3223,14 @@ void MacroAssembler::EnsureNotWhite( Register mask_scratch, Label* value_is_white_and_not_data, Label::Distance distance) { - ASSERT(!AreAliased(value, bitmap_scratch, mask_scratch, ecx)); + DCHECK(!AreAliased(value, bitmap_scratch, mask_scratch, ecx)); GetMarkBits(value, bitmap_scratch, mask_scratch); // If the value is black or grey we don't need to do anything. - ASSERT(strcmp(Marking::kWhiteBitPattern, "00") == 0); - ASSERT(strcmp(Marking::kBlackBitPattern, "10") == 0); - ASSERT(strcmp(Marking::kGreyBitPattern, "11") == 0); - ASSERT(strcmp(Marking::kImpossibleBitPattern, "01") == 0); + DCHECK(strcmp(Marking::kWhiteBitPattern, "00") == 0); + DCHECK(strcmp(Marking::kBlackBitPattern, "10") == 0); + DCHECK(strcmp(Marking::kGreyBitPattern, "11") == 0); + DCHECK(strcmp(Marking::kImpossibleBitPattern, "01") == 0); Label done; @@ -3454,8 +3268,8 @@ void MacroAssembler::EnsureNotWhite( bind(¬_heap_number); // Check for strings. - ASSERT(kIsIndirectStringTag == 1 && kIsIndirectStringMask == 1); - ASSERT(kNotStringTag == 0x80 && kIsNotStringMask == 0x80); + DCHECK(kIsIndirectStringTag == 1 && kIsIndirectStringMask == 1); + DCHECK(kNotStringTag == 0x80 && kIsNotStringMask == 0x80); // If it's a string and it's not a cons string then it's an object containing // no GC pointers. Register instance_type = ecx; @@ -3468,8 +3282,8 @@ void MacroAssembler::EnsureNotWhite( Label not_external; // External strings are the only ones with the kExternalStringTag bit // set. - ASSERT_EQ(0, kSeqStringTag & kExternalStringTag); - ASSERT_EQ(0, kConsStringTag & kExternalStringTag); + DCHECK_EQ(0, kSeqStringTag & kExternalStringTag); + DCHECK_EQ(0, kConsStringTag & kExternalStringTag); test_b(instance_type, kExternalStringTag); j(zero, ¬_external, Label::kNear); mov(length, Immediate(ExternalString::kSize)); @@ -3477,15 +3291,15 @@ void MacroAssembler::EnsureNotWhite( bind(¬_external); // Sequential string, either ASCII or UC16. 
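// Aside: a sketch (not from this patch) of the branch-free trick used just
// below to turn a string instance type into 4 (one-byte) or 8 (two-byte),
// i.e. the character size shifted left by 2. It relies on the encoding bit
// being 0x04, which the DCHECK below pins down.
static int CharSizeTimesFourSketch(int instance_type) {
  const int kEncodingMaskSketch = 0x04;         // per the DCHECK below
  int v = instance_type & kEncodingMaskSketch;  // 4 if one-byte, else 0
  v ^= kEncodingMaskSketch;                     // 0 if one-byte, else 4
  return v + 0x04;                              // 4 if ASCII, 8 if UC16
}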
- ASSERT(kOneByteStringTag == 0x04); + DCHECK(kOneByteStringTag == 0x04); and_(length, Immediate(kStringEncodingMask)); xor_(length, Immediate(kStringEncodingMask)); add(length, Immediate(0x04)); // Value now either 4 (if ASCII) or 8 (if UC16), i.e., char-size shifted // by 2. If we multiply the string length as smi by this, it still // won't overflow a 32-bit value. - ASSERT_EQ(SeqOneByteString::kMaxSize, SeqTwoByteString::kMaxSize); - ASSERT(SeqOneByteString::kMaxSize <= + DCHECK_EQ(SeqOneByteString::kMaxSize, SeqTwoByteString::kMaxSize); + DCHECK(SeqOneByteString::kMaxSize <= static_cast<int>(0xffffffffu >> (2 + kSmiTagSize))); imul(length, FieldOperand(value, String::kLengthOffset)); shr(length, 2 + kSmiTagSize + kSmiShiftSize); @@ -3513,7 +3327,8 @@ void MacroAssembler::EnsureNotWhite( void MacroAssembler::EnumLength(Register dst, Register map) { STATIC_ASSERT(Map::EnumLengthBits::kShift == 0); mov(dst, FieldOperand(map, Map::kBitField3Offset)); - and_(dst, Immediate(Smi::FromInt(Map::EnumLengthBits::kMask))); + and_(dst, Immediate(Map::EnumLengthBits::kMask)); + SmiTag(dst); } @@ -3584,7 +3399,7 @@ void MacroAssembler::JumpIfDictionaryInPrototypeChain( Register scratch0, Register scratch1, Label* found) { - ASSERT(!scratch1.is(scratch0)); + DCHECK(!scratch1.is(scratch0)); Factory* factory = isolate()->factory(); Register current = scratch0; Label loop_again; @@ -3596,8 +3411,7 @@ void MacroAssembler::JumpIfDictionaryInPrototypeChain( bind(&loop_again); mov(current, FieldOperand(current, HeapObject::kMapOffset)); mov(scratch1, FieldOperand(current, Map::kBitField2Offset)); - and_(scratch1, Map::kElementsKindMask); - shr(scratch1, Map::kElementsKindShift); + DecodeField<Map::ElementsKindBits>(scratch1); cmp(scratch1, Immediate(DICTIONARY_ELEMENTS)); j(equal, found); mov(current, FieldOperand(current, Map::kPrototypeOffset)); @@ -3607,8 +3421,8 @@ void MacroAssembler::JumpIfDictionaryInPrototypeChain( void MacroAssembler::TruncatingDiv(Register dividend, int32_t divisor) { - ASSERT(!dividend.is(eax)); - ASSERT(!dividend.is(edx)); + DCHECK(!dividend.is(eax)); + DCHECK(!dividend.is(edx)); MultiplierAndShift ms(divisor); mov(eax, Immediate(ms.multiplier())); imul(dividend); diff --git a/deps/v8/src/ia32/macro-assembler-ia32.h b/deps/v8/src/ia32/macro-assembler-ia32.h index f8c2401323b..3b2051f231d 100644 --- a/deps/v8/src/ia32/macro-assembler-ia32.h +++ b/deps/v8/src/ia32/macro-assembler-ia32.h @@ -5,9 +5,9 @@ #ifndef V8_IA32_MACRO_ASSEMBLER_IA32_H_ #define V8_IA32_MACRO_ASSEMBLER_IA32_H_ -#include "assembler.h" -#include "frames.h" -#include "v8globals.h" +#include "src/assembler.h" +#include "src/frames.h" +#include "src/globals.h" namespace v8 { namespace internal { @@ -18,6 +18,10 @@ typedef Operand MemOperand; enum RememberedSetAction { EMIT_REMEMBERED_SET, OMIT_REMEMBERED_SET }; enum SmiCheck { INLINE_SMI_CHECK, OMIT_SMI_CHECK }; +enum PointersToHereCheck { + kPointersToHereMaybeInteresting, + kPointersToHereAreAlwaysInteresting +}; enum RegisterValueType { @@ -26,7 +30,16 @@ enum RegisterValueType { }; -bool AreAliased(Register r1, Register r2, Register r3, Register r4); +#ifdef DEBUG +bool AreAliased(Register reg1, + Register reg2, + Register reg3 = no_reg, + Register reg4 = no_reg, + Register reg5 = no_reg, + Register reg6 = no_reg, + Register reg7 = no_reg, + Register reg8 = no_reg); +#endif // MacroAssembler implements a collection of frequently used macros. 
@@ -140,7 +153,9 @@ class MacroAssembler: public Assembler { Register scratch, SaveFPRegsMode save_fp, RememberedSetAction remembered_set_action = EMIT_REMEMBERED_SET, - SmiCheck smi_check = INLINE_SMI_CHECK); + SmiCheck smi_check = INLINE_SMI_CHECK, + PointersToHereCheck pointers_to_here_check_for_value = + kPointersToHereMaybeInteresting); // As above, but the offset has the tag presubtracted. For use with // Operand(reg, off). @@ -151,14 +166,17 @@ class MacroAssembler: public Assembler { Register scratch, SaveFPRegsMode save_fp, RememberedSetAction remembered_set_action = EMIT_REMEMBERED_SET, - SmiCheck smi_check = INLINE_SMI_CHECK) { + SmiCheck smi_check = INLINE_SMI_CHECK, + PointersToHereCheck pointers_to_here_check_for_value = + kPointersToHereMaybeInteresting) { RecordWriteField(context, offset + kHeapObjectTag, value, scratch, save_fp, remembered_set_action, - smi_check); + smi_check, + pointers_to_here_check_for_value); } // Notify the garbage collector that we wrote a pointer into a fixed array. @@ -173,7 +191,9 @@ class MacroAssembler: public Assembler { Register index, SaveFPRegsMode save_fp, RememberedSetAction remembered_set_action = EMIT_REMEMBERED_SET, - SmiCheck smi_check = INLINE_SMI_CHECK); + SmiCheck smi_check = INLINE_SMI_CHECK, + PointersToHereCheck pointers_to_here_check_for_value = + kPointersToHereMaybeInteresting); // For page containing |object| mark region covering |address| // dirty. |object| is the object being stored into, |value| is the @@ -186,7 +206,9 @@ class MacroAssembler: public Assembler { Register value, SaveFPRegsMode save_fp, RememberedSetAction remembered_set_action = EMIT_REMEMBERED_SET, - SmiCheck smi_check = INLINE_SMI_CHECK); + SmiCheck smi_check = INLINE_SMI_CHECK, + PointersToHereCheck pointers_to_here_check_for_value = + kPointersToHereMaybeInteresting); // For page containing |object| mark the region covering the object's map // dirty. |object| is the object being stored into, |map| is the Map object @@ -204,7 +226,8 @@ class MacroAssembler: public Assembler { void DebugBreak(); // Generates function and stub prologue code. - void Prologue(PrologueFrameMode frame_mode); + void StubPrologue(); + void Prologue(bool code_pre_aging); // Enter specific kind of exit frame. Expects the number of // arguments in register eax and sets up the number of arguments in @@ -370,7 +393,6 @@ class MacroAssembler: public Assembler { Register scratch1, XMMRegister scratch2, Label* fail, - bool specialize_for_processor, int offset = 0); // Compare an object's map with the specified map. @@ -439,13 +461,10 @@ class MacroAssembler: public Assembler { void TruncateHeapNumberToI(Register result_reg, Register input_reg); void TruncateDoubleToI(Register result_reg, XMMRegister input_reg); - void TruncateX87TOSToI(Register result_reg); void DoubleToI(Register result_reg, XMMRegister input_reg, XMMRegister scratch, MinusZeroMode minus_zero_mode, Label* conversion_failed, Label::Distance dst = Label::kFar); - void X87TOSToI(Register result_reg, MinusZeroMode minus_zero_mode, - Label* conversion_failed, Label::Distance dst = Label::kFar); void TaggedToI(Register result_reg, Register input_reg, XMMRegister temp, MinusZeroMode minus_zero_mode, Label* lost_precision); @@ -468,8 +487,7 @@ class MacroAssembler: public Assembler { j(not_carry, is_smi); } - void LoadUint32(XMMRegister dst, Register src, XMMRegister scratch); - void LoadUint32NoSSE2(Register src); + void LoadUint32(XMMRegister dst, Register src); // Jump if the register contains a smi.
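// Aside: a standalone sketch (not part of this patch) of the
// multiply-and-shift division behind TruncatingDiv() earlier in this hunk.
// Division by a constant becomes a widening multiply by a precomputed
// "magic" multiplier plus a sign fix-up; MultiplierAndShift derives such
// pairs for arbitrary divisors. The constant below is the standard one for
// divisor 3 (shift 0).
#include <cstdint>

static int32_t TruncatingDivBy3Sketch(int32_t n) {
  const int64_t kMagic = 0x55555556;  // well-known multiplier for /3
  int32_t q = static_cast<int32_t>((kMagic * n) >> 32);  // high 32 bits
  q += static_cast<uint32_t>(n) >> 31;  // +1 for negative n (truncation)
  return q;  // equals n / 3 with C truncation semantics
}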
inline void JumpIfSmi(Register value, @@ -499,11 +517,28 @@ class MacroAssembler: public Assembler { template<typename Field> void DecodeField(Register reg) { + static const int shift = Field::kShift; + static const int mask = Field::kMask >> Field::kShift; + if (shift != 0) { + sar(reg, shift); + } + and_(reg, Immediate(mask)); + } + + template<typename Field> + void DecodeFieldToSmi(Register reg) { static const int shift = Field::kShift; static const int mask = (Field::kMask >> Field::kShift) << kSmiTagSize; - sar(reg, shift); + STATIC_ASSERT((mask & (0x80000000u >> (kSmiTagSize - 1))) == 0); + STATIC_ASSERT(kSmiTag == 0); + if (shift < kSmiTagSize) { + shl(reg, kSmiTagSize - shift); + } else if (shift > kSmiTagSize) { + sar(reg, shift - kSmiTagSize); + } and_(reg, Immediate(mask)); } + void LoadPowerOf2(XMMRegister dst, Register scratch, int power); // Abort execution if argument is not a number, enabled via --debug-code. @@ -540,12 +575,6 @@ class MacroAssembler: public Assembler { // Throw past all JS frames to the top JS entry frame. void ThrowUncatchable(Register value); - // Throw a message string as an exception. - void Throw(BailoutReason reason); - - // Throw a message string as an exception if a condition is not true. - void ThrowIf(Condition cc, BailoutReason reason); - // --------------------------------------------------------------------------- // Inline caching support @@ -618,7 +647,8 @@ class MacroAssembler: public Assembler { void AllocateHeapNumber(Register result, Register scratch1, Register scratch2, - Label* gc_required); + Label* gc_required, + MutableMode mode = IMMUTABLE); // Allocate a sequential string. All the header fields of the string object // are initialized. @@ -827,13 +857,10 @@ class MacroAssembler: public Assembler { void Push(Smi* smi) { Push(Handle<Smi>(smi, isolate())); } Handle<Object> CodeObject() { - ASSERT(!code_object_.is_null()); + DCHECK(!code_object_.is_null()); return code_object_; } - // Insert code to verify that the x87 stack has the specified depth (0-7) - void VerifyX87StackDepth(uint32_t depth); - // Emit code for a truncating division by a constant. The dividend register is // unchanged, the result is in edx, and eax gets clobbered. void TruncatingDiv(Register dividend, int32_t divisor); diff --git a/deps/v8/src/ia32/regexp-macro-assembler-ia32.cc b/deps/v8/src/ia32/regexp-macro-assembler-ia32.cc index 22c620e7c89..5f31298c9ab 100644 --- a/deps/v8/src/ia32/regexp-macro-assembler-ia32.cc +++ b/deps/v8/src/ia32/regexp-macro-assembler-ia32.cc @@ -2,17 +2,18 @@ // Use of this source code is governed by a BSD-style license that can be // found in the LICENSE file. -#include "v8.h" +#include "src/v8.h" #if V8_TARGET_ARCH_IA32 -#include "cpu-profiler.h" -#include "unicode.h" -#include "log.h" -#include "regexp-stack.h" -#include "macro-assembler.h" -#include "regexp-macro-assembler.h" -#include "ia32/regexp-macro-assembler-ia32.h" +#include "src/cpu-profiler.h" +#include "src/log.h" +#include "src/macro-assembler.h" +#include "src/regexp-macro-assembler.h" +#include "src/regexp-stack.h" +#include "src/unicode.h" + +#include "src/ia32/regexp-macro-assembler-ia32.h" namespace v8 { namespace internal { @@ -91,7 +92,7 @@ RegExpMacroAssemblerIA32::RegExpMacroAssemblerIA32( success_label_(), backtrack_label_(), exit_label_() { - ASSERT_EQ(0, registers_to_save % 2); + DCHECK_EQ(0, registers_to_save % 2); __ jmp(&entry_label_); // We'll write the entry code later. __ bind(&start_label_); // And then continue from here. 
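// Aside: what the DecodeField<Field> templates added above compute, as a
// plain-C++ sketch (not from this patch). The field layout constants here
// are hypothetical; real Field types supply kShift and kMask.
static const int kFieldShiftSketch = 3;
static const int kFieldMaskSketch = 0x7 << kFieldShiftSketch;
static const int kSmiTagSizeSketch = 1;  // ia32 smi tag size

static int DecodeFieldSketch(int word) {
  // Right-align the field: equivalent to sar-then-and in the template.
  return (word & kFieldMaskSketch) >> kFieldShiftSketch;
}

static int DecodeFieldToSmiSketch(int word) {
  // Same field value, but left in the smi-tagged position (value << 1).
  return DecodeFieldSketch(word) << kSmiTagSizeSketch;
}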
} @@ -123,8 +124,8 @@ void RegExpMacroAssemblerIA32::AdvanceCurrentPosition(int by) { void RegExpMacroAssemblerIA32::AdvanceRegister(int reg, int by) { - ASSERT(reg >= 0); - ASSERT(reg < num_registers_); + DCHECK(reg >= 0); + DCHECK(reg < num_registers_); if (by != 0) { __ add(register_location(reg), Immediate(by)); } @@ -281,7 +282,7 @@ void RegExpMacroAssemblerIA32::CheckNotBackReferenceIgnoreCase( // Compute new value of character position after the matched part. __ sub(edi, esi); } else { - ASSERT(mode_ == UC16); + DCHECK(mode_ == UC16); // Save registers before calling C function. __ push(esi); __ push(edi); @@ -369,7 +370,7 @@ void RegExpMacroAssemblerIA32::CheckNotBackReference( __ movzx_b(eax, Operand(edx, 0)); __ cmpb_al(Operand(ebx, 0)); } else { - ASSERT(mode_ == UC16); + DCHECK(mode_ == UC16); __ movzx_w(eax, Operand(edx, 0)); __ cmpw_ax(Operand(ebx, 0)); } @@ -438,7 +439,7 @@ void RegExpMacroAssemblerIA32::CheckNotCharacterAfterMinusAnd( uc16 minus, uc16 mask, Label* on_not_equal) { - ASSERT(minus < String::kMaxUtf16CodeUnit); + DCHECK(minus < String::kMaxUtf16CodeUnit); __ lea(eax, Operand(current_character(), -minus)); if (c == 0) { __ test(eax, Immediate(mask)); @@ -547,7 +548,7 @@ bool RegExpMacroAssemblerIA32::CheckSpecialCharacterClass(uc16 type, __ cmp(current_character(), Immediate('z')); BranchOrBacktrack(above, on_no_match); } - ASSERT_EQ(0, word_character_map[0]); // Character '\0' is not a word char. + DCHECK_EQ(0, word_character_map[0]); // Character '\0' is not a word char. ExternalReference word_map = ExternalReference::re_word_character_map(); __ test_b(current_character(), Operand::StaticArray(current_character(), times_1, word_map)); @@ -561,7 +562,7 @@ bool RegExpMacroAssemblerIA32::CheckSpecialCharacterClass(uc16 type, __ cmp(current_character(), Immediate('z')); __ j(above, &done); } - ASSERT_EQ(0, word_character_map[0]); // Character '\0' is not a word char. + DCHECK_EQ(0, word_character_map[0]); // Character '\0' is not a word char. ExternalReference word_map = ExternalReference::re_word_character_map(); __ test_b(current_character(), Operand::StaticArray(current_character(), times_1, word_map)); @@ -588,7 +589,7 @@ bool RegExpMacroAssemblerIA32::CheckSpecialCharacterClass(uc16 type, } else { Label done; BranchOrBacktrack(below_equal, &done); - ASSERT_EQ(UC16, mode_); + DCHECK_EQ(UC16, mode_); // Compare original value to 0x2028 and 0x2029, using the already // computed (current_char ^ 0x01 - 0x0b). I.e., check for // 0x201d (0x2028 - 0x0b) or 0x201e. @@ -946,8 +947,8 @@ void RegExpMacroAssemblerIA32::LoadCurrentCharacter(int cp_offset, Label* on_end_of_input, bool check_bounds, int characters) { - ASSERT(cp_offset >= -1); // ^ and \b can look behind one character. - ASSERT(cp_offset < (1<<30)); // Be sane! (And ensure negation works) + DCHECK(cp_offset >= -1); // ^ and \b can look behind one character. + DCHECK(cp_offset < (1<<30)); // Be sane! (And ensure negation works) if (check_bounds) { CheckPosition(cp_offset + characters - 1, on_end_of_input); } @@ -1009,7 +1010,7 @@ void RegExpMacroAssemblerIA32::SetCurrentPositionFromEnd(int by) { void RegExpMacroAssemblerIA32::SetRegister(int register_index, int to) { - ASSERT(register_index >= num_saved_registers_); // Reserved for positions! + DCHECK(register_index >= num_saved_registers_); // Reserved for positions! 
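// Aside: a sketch (not part of this patch) of the one-load word-character
// test used by the \w class checks above: a 256-entry byte table indexed by
// the character, with entry 0 kept false so that '\0' is not a word
// character (the DCHECK_EQ(0, word_character_map[0]) above).
#include <cctype>

static bool IsWordCharSketch(unsigned char c) {
  static bool table[256];
  static bool initialized = false;
  if (!initialized) {
    for (int i = 0; i < 256; i++)
      table[i] = (std::isalnum(i) != 0) || i == '_';
    initialized = true;
  }
  return table[c];  // plays the role of word_character_map[c] != 0
}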
__ mov(register_location(register_index), Immediate(to)); } @@ -1032,7 +1033,7 @@ void RegExpMacroAssemblerIA32::WriteCurrentPositionToRegister(int reg, void RegExpMacroAssemblerIA32::ClearRegisters(int reg_from, int reg_to) { - ASSERT(reg_from <= reg_to); + DCHECK(reg_from <= reg_to); __ mov(eax, Operand(ebp, kInputStartMinusOne)); for (int reg = reg_from; reg <= reg_to; reg++) { __ mov(register_location(reg), eax); @@ -1076,7 +1077,8 @@ int RegExpMacroAssemblerIA32::CheckStackGuardState(Address* return_address, Code* re_code, Address re_frame) { Isolate* isolate = frame_entry<Isolate*>(re_frame, kIsolate); - if (isolate->stack_guard()->IsStackOverflow()) { + StackLimitCheck check(isolate); + if (check.JsHasOverflowed()) { isolate->StackOverflow(); return EXCEPTION; } @@ -1099,11 +1101,11 @@ int RegExpMacroAssemblerIA32::CheckStackGuardState(Address* return_address, // Current string. bool is_ascii = subject->IsOneByteRepresentationUnderneath(); - ASSERT(re_code->instruction_start() <= *return_address); - ASSERT(*return_address <= + DCHECK(re_code->instruction_start() <= *return_address); + DCHECK(*return_address <= re_code->instruction_start() + re_code->instruction_size()); - Object* result = Execution::HandleStackGuardInterrupt(isolate); + Object* result = isolate->stack_guard()->HandleInterrupts(); if (*code_handle != re_code) { // Return address no longer valid int delta = code_handle->address() - re_code->address(); @@ -1139,7 +1141,7 @@ int RegExpMacroAssemblerIA32::CheckStackGuardState(Address* return_address, // be a sequential or external string with the same content. // Update the start and end pointers in the stack frame to the current // location (whether it has actually moved or not). - ASSERT(StringShape(*subject_tmp).IsSequential() || + DCHECK(StringShape(*subject_tmp).IsSequential() || StringShape(*subject_tmp).IsExternal()); // The original start address of the characters to match. @@ -1171,7 +1173,7 @@ int RegExpMacroAssemblerIA32::CheckStackGuardState(Address* return_address, Operand RegExpMacroAssemblerIA32::register_location(int register_index) { - ASSERT(register_index < (1<<30)); + DCHECK(register_index < (1<<30)); if (num_registers_ <= register_index) { num_registers_ = register_index + 1; } @@ -1225,7 +1227,7 @@ void RegExpMacroAssemblerIA32::SafeCallTarget(Label* name) { void RegExpMacroAssemblerIA32::Push(Register source) { - ASSERT(!source.is(backtrack_stackpointer())); + DCHECK(!source.is(backtrack_stackpointer())); // Notice: This updates flags, unlike normal Push. __ sub(backtrack_stackpointer(), Immediate(kPointerSize)); __ mov(Operand(backtrack_stackpointer(), 0), source); @@ -1240,7 +1242,7 @@ void RegExpMacroAssemblerIA32::Push(Immediate value) { void RegExpMacroAssemblerIA32::Pop(Register target) { - ASSERT(!target.is(backtrack_stackpointer())); + DCHECK(!target.is(backtrack_stackpointer())); __ mov(target, Operand(backtrack_stackpointer(), 0)); // Notice: This updates flags, unlike normal Pop. 
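// Aside: the backtrack stack discipline behind Push/Pop above, sketched
// standalone (not from this patch): a dedicated pointer register grows a
// stack downward in kPointerSize steps, independent of the machine stack.
#include <cstdint>

struct BacktrackStackSketch {
  intptr_t* sp;  // plays the role of backtrack_stackpointer()
  void Push(intptr_t v) { *--sp = v; }  // sub, then store
  intptr_t Pop() { return *sp++; }      // load, then add
};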
__ add(backtrack_stackpointer(), Immediate(kPointerSize)); @@ -1282,16 +1284,16 @@ void RegExpMacroAssemblerIA32::LoadCurrentCharacterUnchecked(int cp_offset, } else if (characters == 2) { __ movzx_w(current_character(), Operand(esi, edi, times_1, cp_offset)); } else { - ASSERT(characters == 1); + DCHECK(characters == 1); __ movzx_b(current_character(), Operand(esi, edi, times_1, cp_offset)); } } else { - ASSERT(mode_ == UC16); + DCHECK(mode_ == UC16); if (characters == 2) { __ mov(current_character(), Operand(esi, edi, times_1, cp_offset * sizeof(uc16))); } else { - ASSERT(characters == 1); + DCHECK(characters == 1); __ movzx_w(current_character(), Operand(esi, edi, times_1, cp_offset * sizeof(uc16))); } diff --git a/deps/v8/src/ia32/regexp-macro-assembler-ia32.h b/deps/v8/src/ia32/regexp-macro-assembler-ia32.h index ab5b75b0904..e04a8ef4b62 100644 --- a/deps/v8/src/ia32/regexp-macro-assembler-ia32.h +++ b/deps/v8/src/ia32/regexp-macro-assembler-ia32.h @@ -5,9 +5,9 @@ #ifndef V8_IA32_REGEXP_MACRO_ASSEMBLER_IA32_H_ #define V8_IA32_REGEXP_MACRO_ASSEMBLER_IA32_H_ -#include "ia32/assembler-ia32.h" -#include "ia32/assembler-ia32-inl.h" -#include "macro-assembler.h" +#include "src/ia32/assembler-ia32.h" +#include "src/ia32/assembler-ia32-inl.h" +#include "src/macro-assembler.h" namespace v8 { namespace internal { diff --git a/deps/v8/src/ia32/simulator-ia32.h b/deps/v8/src/ia32/simulator-ia32.h index 10356284ec6..02a8e9c03a4 100644 --- a/deps/v8/src/ia32/simulator-ia32.h +++ b/deps/v8/src/ia32/simulator-ia32.h @@ -5,7 +5,7 @@ #ifndef V8_IA32_SIMULATOR_IA32_H_ #define V8_IA32_SIMULATOR_IA32_H_ -#include "allocation.h" +#include "src/allocation.h" namespace v8 { namespace internal { @@ -25,9 +25,6 @@ typedef int (*regexp_matcher)(String*, int, const byte*, (FUNCTION_CAST<regexp_matcher>(entry)(p0, p1, p2, p3, p4, p5, p6, p7, p8)) -#define TRY_CATCH_FROM_ADDRESS(try_catch_address) \ - (reinterpret_cast<TryCatch*>(try_catch_address)) - // The stack limit beyond which we will throw stack overflow errors in // generated code. Because generated code on ia32 uses the C stack, we // just use the C stack limit. diff --git a/deps/v8/src/ia32/stub-cache-ia32.cc b/deps/v8/src/ia32/stub-cache-ia32.cc index adc8cd59aa9..4db24742fe7 100644 --- a/deps/v8/src/ia32/stub-cache-ia32.cc +++ b/deps/v8/src/ia32/stub-cache-ia32.cc @@ -2,13 +2,13 @@ // Use of this source code is governed by a BSD-style license that can be // found in the LICENSE file. -#include "v8.h" +#include "src/v8.h" #if V8_TARGET_ARCH_IA32 -#include "ic-inl.h" -#include "codegen.h" -#include "stub-cache.h" +#include "src/codegen.h" +#include "src/ic-inl.h" +#include "src/stub-cache.h" namespace v8 { namespace internal { @@ -114,14 +114,11 @@ static void ProbeTable(Isolate* isolate, } -void StubCompiler::GenerateDictionaryNegativeLookup(MacroAssembler* masm, - Label* miss_label, - Register receiver, - Handle<Name> name, - Register scratch0, - Register scratch1) { - ASSERT(name->IsUniqueName()); - ASSERT(!receiver.is(scratch0)); +void PropertyHandlerCompiler::GenerateDictionaryNegativeLookup( + MacroAssembler* masm, Label* miss_label, Register receiver, + Handle<Name> name, Register scratch0, Register scratch1) { + DCHECK(name->IsUniqueName()); + DCHECK(!receiver.is(scratch0)); Counters* counters = masm->isolate()->counters(); __ IncrementCounter(counters->negative_lookups(), 1); __ IncrementCounter(counters->negative_lookups_miss(), 1); @@ -173,22 +170,22 @@ void StubCache::GenerateProbe(MacroAssembler* masm, // Assert that code is valid. 
The multiplying code relies on the entry size // being 12. - ASSERT(sizeof(Entry) == 12); + DCHECK(sizeof(Entry) == 12); // Assert the flags do not name a specific type. - ASSERT(Code::ExtractTypeFromFlags(flags) == 0); + DCHECK(Code::ExtractTypeFromFlags(flags) == 0); // Assert that there are no register conflicts. - ASSERT(!scratch.is(receiver)); - ASSERT(!scratch.is(name)); - ASSERT(!extra.is(receiver)); - ASSERT(!extra.is(name)); - ASSERT(!extra.is(scratch)); + DCHECK(!scratch.is(receiver)); + DCHECK(!scratch.is(name)); + DCHECK(!extra.is(receiver)); + DCHECK(!extra.is(name)); + DCHECK(!extra.is(scratch)); // Assert scratch and extra registers are valid, and extra2/3 are unused. - ASSERT(!scratch.is(no_reg)); - ASSERT(extra2.is(no_reg)); - ASSERT(extra3.is(no_reg)); + DCHECK(!scratch.is(no_reg)); + DCHECK(extra2.is(no_reg)); + DCHECK(extra3.is(no_reg)); Register offset = scratch; scratch = no_reg; @@ -205,10 +202,10 @@ void StubCache::GenerateProbe(MacroAssembler* masm, __ xor_(offset, flags); // We mask out the last two bits because they are not part of the hash and // they are always 01 for maps. Also in the two 'and' instructions below. - __ and_(offset, (kPrimaryTableSize - 1) << kHeapObjectTagSize); + __ and_(offset, (kPrimaryTableSize - 1) << kCacheIndexShift); // ProbeTable expects the offset to be pointer scaled, which it is, because // the heap object tag size is 2 and the pointer size log 2 is also 2. - ASSERT(kHeapObjectTagSize == kPointerSizeLog2); + DCHECK(kCacheIndexShift == kPointerSizeLog2); // Probe the primary table. ProbeTable(isolate(), masm, flags, kPrimary, name, receiver, offset, extra); @@ -217,10 +214,10 @@ void StubCache::GenerateProbe(MacroAssembler* masm, __ mov(offset, FieldOperand(name, Name::kHashFieldOffset)); __ add(offset, FieldOperand(receiver, HeapObject::kMapOffset)); __ xor_(offset, flags); - __ and_(offset, (kPrimaryTableSize - 1) << kHeapObjectTagSize); + __ and_(offset, (kPrimaryTableSize - 1) << kCacheIndexShift); __ sub(offset, name); __ add(offset, Immediate(flags)); - __ and_(offset, (kSecondaryTableSize - 1) << kHeapObjectTagSize); + __ and_(offset, (kSecondaryTableSize - 1) << kCacheIndexShift); // Probe the secondary table. ProbeTable( @@ -233,21 +230,8 @@ void StubCache::GenerateProbe(MacroAssembler* masm, } -void StubCompiler::GenerateLoadGlobalFunctionPrototype(MacroAssembler* masm, - int index, - Register prototype) { - __ LoadGlobalFunction(index, prototype); - __ LoadGlobalFunctionInitialMap(prototype, prototype); - // Load the prototype from the initial map. - __ mov(prototype, FieldOperand(prototype, Map::kPrototypeOffset)); -} - - -void StubCompiler::GenerateDirectLoadGlobalFunctionPrototype( - MacroAssembler* masm, - int index, - Register prototype, - Label* miss) { +void NamedLoadHandlerCompiler::GenerateDirectLoadGlobalFunctionPrototype( + MacroAssembler* masm, int index, Register prototype, Label* miss) { // Get the global function with the given index. Handle<JSFunction> function( JSFunction::cast(masm->isolate()->native_context()->get(index))); @@ -266,65 +250,28 @@ void StubCompiler::GenerateDirectLoadGlobalFunctionPrototype( } -void StubCompiler::GenerateLoadArrayLength(MacroAssembler* masm, - Register receiver, - Register scratch, - Label* miss_label) { - // Check that the receiver isn't a smi. - __ JumpIfSmi(receiver, miss_label); - - // Check that the object is a JS array. - __ CmpObjectType(receiver, JS_ARRAY_TYPE, scratch); - __ j(not_equal, miss_label); - - // Load length directly from the JS array. 
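// Aside: a sketch (not part of this patch) of the two probe offsets
// computed by StubCache::GenerateProbe above. The table sizes here are
// hypothetical; masking with (size - 1) << kCacheIndexShift keeps the
// offset both in range and pointer-scaled. In the generated code the
// secondary probe subtracts the raw name pointer, not its hash.
#include <cstdint>

static const uint32_t kCacheIndexShiftSketch = 2;  // == kPointerSizeLog2
static const uint32_t kPrimarySizeSketch = 2048;
static const uint32_t kSecondarySizeSketch = 512;

static uint32_t PrimaryOffsetSketch(uint32_t hash, uint32_t map,
                                    uint32_t flags) {
  return ((hash + map) ^ flags) &
         ((kPrimarySizeSketch - 1) << kCacheIndexShiftSketch);
}

static uint32_t SecondaryOffsetSketch(uint32_t primary, uint32_t name,
                                      uint32_t flags) {
  return ((primary - name) + flags) &
         ((kSecondarySizeSketch - 1) << kCacheIndexShiftSketch);
}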
- __ mov(eax, FieldOperand(receiver, JSArray::kLengthOffset)); - __ ret(0); -} - - -void StubCompiler::GenerateLoadFunctionPrototype(MacroAssembler* masm, - Register receiver, - Register scratch1, - Register scratch2, - Label* miss_label) { +void NamedLoadHandlerCompiler::GenerateLoadFunctionPrototype( + MacroAssembler* masm, Register receiver, Register scratch1, + Register scratch2, Label* miss_label) { __ TryGetFunctionPrototype(receiver, scratch1, scratch2, miss_label); __ mov(eax, scratch1); __ ret(0); } -void StubCompiler::GenerateFastPropertyLoad(MacroAssembler* masm, - Register dst, - Register src, - bool inobject, - int index, - Representation representation) { - ASSERT(!representation.IsDouble()); - int offset = index * kPointerSize; - if (!inobject) { - // Calculate the offset into the properties array. - offset = offset + FixedArray::kHeaderSize; - __ mov(dst, FieldOperand(src, JSObject::kPropertiesOffset)); - src = dst; - } - __ mov(dst, FieldOperand(src, offset)); -} - - static void PushInterceptorArguments(MacroAssembler* masm, Register receiver, Register holder, Register name, Handle<JSObject> holder_obj) { - STATIC_ASSERT(StubCache::kInterceptorArgsNameIndex == 0); - STATIC_ASSERT(StubCache::kInterceptorArgsInfoIndex == 1); - STATIC_ASSERT(StubCache::kInterceptorArgsThisIndex == 2); - STATIC_ASSERT(StubCache::kInterceptorArgsHolderIndex == 3); - STATIC_ASSERT(StubCache::kInterceptorArgsLength == 4); + STATIC_ASSERT(NamedLoadHandlerCompiler::kInterceptorArgsNameIndex == 0); + STATIC_ASSERT(NamedLoadHandlerCompiler::kInterceptorArgsInfoIndex == 1); + STATIC_ASSERT(NamedLoadHandlerCompiler::kInterceptorArgsThisIndex == 2); + STATIC_ASSERT(NamedLoadHandlerCompiler::kInterceptorArgsHolderIndex == 3); + STATIC_ASSERT(NamedLoadHandlerCompiler::kInterceptorArgsLength == 4); __ push(name); Handle<InterceptorInfo> interceptor(holder_obj->GetNamedInterceptor()); - ASSERT(!masm->isolate()->heap()->InNewSpace(*interceptor)); + DCHECK(!masm->isolate()->heap()->InNewSpace(*interceptor)); Register scratch = name; __ mov(scratch, Immediate(interceptor)); __ push(scratch); @@ -341,9 +288,8 @@ static void CompileCallLoadPropertyWithInterceptor( Handle<JSObject> holder_obj, IC::UtilityId id) { PushInterceptorArguments(masm, receiver, holder, name, holder_obj); - __ CallExternalReference( - ExternalReference(IC_Utility(id), masm->isolate()), - StubCache::kInterceptorArgsLength); + __ CallExternalReference(ExternalReference(IC_Utility(id), masm->isolate()), + NamedLoadHandlerCompiler::kInterceptorArgsLength); } @@ -351,14 +297,10 @@ static void CompileCallLoadPropertyWithInterceptor( // This function uses push() to generate smaller, faster code than // the version above. It is an optimization that will be removed // when api call ICs are generated in hydrogen. -void StubCompiler::GenerateFastApiCall(MacroAssembler* masm, - const CallOptimization& optimization, - Handle<Map> receiver_map, - Register receiver, - Register scratch_in, - bool is_store, - int argc, - Register* values) { +void PropertyHandlerCompiler::GenerateFastApiCall( + MacroAssembler* masm, const CallOptimization& optimization, + Handle<Map> receiver_map, Register receiver, Register scratch_in, + bool is_store, int argc, Register* values) { // Copy return value. __ pop(scratch_in); // receiver @@ -366,13 +308,13 @@ void StubCompiler::GenerateFastApiCall(MacroAssembler* masm, // Write the arguments to stack frame.
for (int i = 0; i < argc; i++) { Register arg = values[argc-1-i]; - ASSERT(!receiver.is(arg)); - ASSERT(!scratch_in.is(arg)); + DCHECK(!receiver.is(arg)); + DCHECK(!scratch_in.is(arg)); __ push(arg); } __ push(scratch_in); // Stack now matches JSFunction abi. - ASSERT(optimization.is_simple_api_call()); + DCHECK(optimization.is_simple_api_call()); // Abi for CallApiFunctionStub. Register callee = eax; @@ -428,29 +370,17 @@ void StubCompiler::GenerateFastApiCall(MacroAssembler* masm, } -void StoreStubCompiler::GenerateRestoreName(MacroAssembler* masm, - Label* label, - Handle<Name> name) { - if (!label->is_unused()) { - __ bind(label); - __ mov(this->name(), Immediate(name)); - } -} - - // Generate code to check that a global property cell is empty. Create // the property cell at compilation time if no cell exists for the // property. -void StubCompiler::GenerateCheckPropertyCell(MacroAssembler* masm, - Handle<JSGlobalObject> global, - Handle<Name> name, - Register scratch, - Label* miss) { +void PropertyHandlerCompiler::GenerateCheckPropertyCell( + MacroAssembler* masm, Handle<JSGlobalObject> global, Handle<Name> name, + Register scratch, Label* miss) { Handle<PropertyCell> cell = JSGlobalObject::EnsurePropertyCell(global, name); - ASSERT(cell->value()->IsTheHole()); + DCHECK(cell->value()->IsTheHole()); Handle<Oddball> the_hole = masm->isolate()->factory()->the_hole_value(); - if (Serializer::enabled(masm->isolate())) { + if (masm->serializer_enabled()) { __ mov(scratch, Immediate(cell)); __ cmp(FieldOperand(scratch, PropertyCell::kValueOffset), Immediate(the_hole)); @@ -461,45 +391,39 @@ void StubCompiler::GenerateCheckPropertyCell(MacroAssembler* masm, } -void StoreStubCompiler::GenerateNegativeHolderLookup( - MacroAssembler* masm, - Handle<JSObject> holder, - Register holder_reg, - Handle<Name> name, - Label* miss) { - if (holder->IsJSGlobalObject()) { - GenerateCheckPropertyCell( - masm, Handle<JSGlobalObject>::cast(holder), name, scratch1(), miss); - } else if (!holder->HasFastProperties() && !holder->IsJSGlobalProxy()) { - GenerateDictionaryNegativeLookup( - masm, miss, holder_reg, name, scratch1(), scratch2()); +void PropertyAccessCompiler::GenerateTailCall(MacroAssembler* masm, + Handle<Code> code) { + __ jmp(code, RelocInfo::CODE_TARGET); +} + + +#undef __ +#define __ ACCESS_MASM(masm()) + + +void NamedStoreHandlerCompiler::GenerateRestoreName(Label* label, + Handle<Name> name) { + if (!label->is_unused()) { + __ bind(label); + __ mov(this->name(), Immediate(name)); } } // Receiver_reg is preserved on jumps to miss_label, but may be destroyed if // store is successful. 
-void StoreStubCompiler::GenerateStoreTransition(MacroAssembler* masm, - Handle<JSObject> object, - LookupResult* lookup, - Handle<Map> transition, - Handle<Name> name, - Register receiver_reg, - Register storage_reg, - Register value_reg, - Register scratch1, - Register scratch2, - Register unused, - Label* miss_label, - Label* slow) { +void NamedStoreHandlerCompiler::GenerateStoreTransition( + Handle<Map> transition, Handle<Name> name, Register receiver_reg, + Register storage_reg, Register value_reg, Register scratch1, + Register scratch2, Register unused, Label* miss_label, Label* slow) { int descriptor = transition->LastAdded(); DescriptorArray* descriptors = transition->instance_descriptors(); PropertyDetails details = descriptors->GetDetails(descriptor); Representation representation = details.representation(); - ASSERT(!representation.IsNone()); + DCHECK(!representation.IsNone()); if (details.type() == CONSTANT) { - Handle<Object> constant(descriptors->GetValue(descriptor), masm->isolate()); + Handle<Object> constant(descriptors->GetValue(descriptor), isolate()); __ CmpObject(value_reg, constant); __ j(not_equal, miss_label); } else if (representation.IsSmi()) { @@ -523,47 +447,29 @@ void StoreStubCompiler::GenerateStoreTransition(MacroAssembler* masm, } } else if (representation.IsDouble()) { Label do_store, heap_number; - __ AllocateHeapNumber(storage_reg, scratch1, scratch2, slow); + __ AllocateHeapNumber(storage_reg, scratch1, scratch2, slow, MUTABLE); __ JumpIfNotSmi(value_reg, &heap_number); __ SmiUntag(value_reg); - if (CpuFeatures::IsSupported(SSE2)) { - CpuFeatureScope use_sse2(masm, SSE2); - __ Cvtsi2sd(xmm0, value_reg); - } else { - __ push(value_reg); - __ fild_s(Operand(esp, 0)); - __ pop(value_reg); - } + __ Cvtsi2sd(xmm0, value_reg); __ SmiTag(value_reg); __ jmp(&do_store); __ bind(&heap_number); - __ CheckMap(value_reg, masm->isolate()->factory()->heap_number_map(), - miss_label, DONT_DO_SMI_CHECK); - if (CpuFeatures::IsSupported(SSE2)) { - CpuFeatureScope use_sse2(masm, SSE2); - __ movsd(xmm0, FieldOperand(value_reg, HeapNumber::kValueOffset)); - } else { - __ fld_d(FieldOperand(value_reg, HeapNumber::kValueOffset)); - } + __ CheckMap(value_reg, isolate()->factory()->heap_number_map(), miss_label, + DONT_DO_SMI_CHECK); + __ movsd(xmm0, FieldOperand(value_reg, HeapNumber::kValueOffset)); __ bind(&do_store); - if (CpuFeatures::IsSupported(SSE2)) { - CpuFeatureScope use_sse2(masm, SSE2); - __ movsd(FieldOperand(storage_reg, HeapNumber::kValueOffset), xmm0); - } else { - __ fstp_d(FieldOperand(storage_reg, HeapNumber::kValueOffset)); - } + __ movsd(FieldOperand(storage_reg, HeapNumber::kValueOffset), xmm0); } - // Stub never generated for non-global objects that require access - // checks. - ASSERT(object->IsJSGlobalProxy() || !object->IsAccessCheckNeeded()); + // Stub never generated for objects that require access checks. + DCHECK(!transition->is_access_check_needed()); // Perform map transition for the receiver if necessary. if (details.type() == FIELD && - object->map()->unused_property_fields() == 0) { + Map::cast(transition->GetBackPointer())->unused_property_fields() == 0) { // The properties must be extended before we can store the value. // We jump to a runtime call that extends the properties array. __ pop(scratch1); // Return address. 
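// Aside: a sketch (not from this patch) of the smi branch in the
// double-field store path above: an ia32 smi is the integer shifted left
// by one with a zero tag bit, so untagging is an arithmetic right shift
// before the cvtsi2sd conversion.
#include <cstdint>

static double SmiToDoubleSketch(int32_t tagged) {
  // Precondition: (tagged & 1) == 0, i.e. the value really is a smi.
  int32_t untagged = tagged >> 1;        // SmiUntag
  return static_cast<double>(untagged);  // Cvtsi2sd
}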
@@ -573,9 +479,8 @@ void StoreStubCompiler::GenerateStoreTransition(MacroAssembler* masm, __ push(scratch1); __ TailCallExternalReference( ExternalReference(IC_Utility(IC::kSharedStoreIC_ExtendStorage), - masm->isolate()), - 3, - 1); + isolate()), + 3, 1); return; } @@ -593,7 +498,7 @@ void StoreStubCompiler::GenerateStoreTransition(MacroAssembler* masm, OMIT_SMI_CHECK); if (details.type() == CONSTANT) { - ASSERT(value_reg.is(eax)); + DCHECK(value_reg.is(eax)); __ ret(0); return; } @@ -604,14 +509,14 @@ void StoreStubCompiler::GenerateStoreTransition(MacroAssembler* masm, // Adjust for the number of properties stored in the object. Even in the // face of a transition we can use the old map here because the size of the // object and the number of in-object properties is not going to change. - index -= object->map()->inobject_properties(); + index -= transition->inobject_properties(); SmiCheck smi_check = representation.IsTagged() ? INLINE_SMI_CHECK : OMIT_SMI_CHECK; // TODO(verwaest): Share this code as a code stub. if (index < 0) { // Set the property straight into the object. - int offset = object->map()->instance_size() + (index * kPointerSize); + int offset = transition->instance_size() + (index * kPointerSize); if (representation.IsDouble()) { __ mov(FieldOperand(receiver_reg, offset), storage_reg); } else { @@ -658,172 +563,44 @@ void StoreStubCompiler::GenerateStoreTransition(MacroAssembler* masm, } // Return the value (register eax). - ASSERT(value_reg.is(eax)); + DCHECK(value_reg.is(eax)); __ ret(0); } -// Both name_reg and receiver_reg are preserved on jumps to miss_label, -// but may be destroyed if store is successful. -void StoreStubCompiler::GenerateStoreField(MacroAssembler* masm, - Handle<JSObject> object, - LookupResult* lookup, - Register receiver_reg, - Register name_reg, - Register value_reg, - Register scratch1, - Register scratch2, - Label* miss_label) { - // Stub never generated for non-global objects that require access - // checks. - ASSERT(object->IsJSGlobalProxy() || !object->IsAccessCheckNeeded()); - - int index = lookup->GetFieldIndex().field_index(); - - // Adjust for the number of properties stored in the object. Even in the - // face of a transition we can use the old map here because the size of the - // object and the number of in-object properties is not going to change. - index -= object->map()->inobject_properties(); - - Representation representation = lookup->representation(); - ASSERT(!representation.IsNone()); - if (representation.IsSmi()) { - __ JumpIfNotSmi(value_reg, miss_label); - } else if (representation.IsHeapObject()) { - __ JumpIfSmi(value_reg, miss_label); - HeapType* field_type = lookup->GetFieldType(); - HeapType::Iterator<Map> it = field_type->Classes(); - if (!it.Done()) { - Label do_store; - while (true) { - __ CompareMap(value_reg, it.Current()); - it.Advance(); - if (it.Done()) { - __ j(not_equal, miss_label); - break; - } - __ j(equal, &do_store, Label::kNear); - } - __ bind(&do_store); - } - } else if (representation.IsDouble()) { - // Load the double storage. - if (index < 0) { - int offset = object->map()->instance_size() + (index * kPointerSize); - __ mov(scratch1, FieldOperand(receiver_reg, offset)); - } else { - __ mov(scratch1, FieldOperand(receiver_reg, JSObject::kPropertiesOffset)); - int offset = index * kPointerSize + FixedArray::kHeaderSize; - __ mov(scratch1, FieldOperand(scratch1, offset)); - } - - // Store the value into the storage. 
- Label do_store, heap_number; - __ JumpIfNotSmi(value_reg, &heap_number); - __ SmiUntag(value_reg); - if (CpuFeatures::IsSupported(SSE2)) { - CpuFeatureScope use_sse2(masm, SSE2); - __ Cvtsi2sd(xmm0, value_reg); - } else { - __ push(value_reg); - __ fild_s(Operand(esp, 0)); - __ pop(value_reg); - } - __ SmiTag(value_reg); - __ jmp(&do_store); - __ bind(&heap_number); - __ CheckMap(value_reg, masm->isolate()->factory()->heap_number_map(), - miss_label, DONT_DO_SMI_CHECK); - if (CpuFeatures::IsSupported(SSE2)) { - CpuFeatureScope use_sse2(masm, SSE2); - __ movsd(xmm0, FieldOperand(value_reg, HeapNumber::kValueOffset)); - } else { - __ fld_d(FieldOperand(value_reg, HeapNumber::kValueOffset)); - } - __ bind(&do_store); - if (CpuFeatures::IsSupported(SSE2)) { - CpuFeatureScope use_sse2(masm, SSE2); - __ movsd(FieldOperand(scratch1, HeapNumber::kValueOffset), xmm0); - } else { - __ fstp_d(FieldOperand(scratch1, HeapNumber::kValueOffset)); - } - // Return the value (register eax). - ASSERT(value_reg.is(eax)); - __ ret(0); - return; - } - - ASSERT(!representation.IsDouble()); - // TODO(verwaest): Share this code as a code stub. - SmiCheck smi_check = representation.IsTagged() - ? INLINE_SMI_CHECK : OMIT_SMI_CHECK; - if (index < 0) { - // Set the property straight into the object. - int offset = object->map()->instance_size() + (index * kPointerSize); - __ mov(FieldOperand(receiver_reg, offset), value_reg); - - if (!representation.IsSmi()) { - // Update the write barrier for the array address. - // Pass the value being stored in the now unused name_reg. - __ mov(name_reg, value_reg); - __ RecordWriteField(receiver_reg, - offset, - name_reg, - scratch1, - kDontSaveFPRegs, - EMIT_REMEMBERED_SET, - smi_check); - } - } else { - // Write to the properties array. - int offset = index * kPointerSize + FixedArray::kHeaderSize; - // Get the properties array (optimistically). - __ mov(scratch1, FieldOperand(receiver_reg, JSObject::kPropertiesOffset)); - __ mov(FieldOperand(scratch1, offset), value_reg); - - if (!representation.IsSmi()) { - // Update the write barrier for the array address. - // Pass the value being stored in the now unused name_reg. - __ mov(name_reg, value_reg); - __ RecordWriteField(scratch1, - offset, - name_reg, - receiver_reg, - kDontSaveFPRegs, - EMIT_REMEMBERED_SET, - smi_check); +void NamedStoreHandlerCompiler::GenerateStoreField(LookupResult* lookup, + Register value_reg, + Label* miss_label) { + DCHECK(lookup->representation().IsHeapObject()); + __ JumpIfSmi(value_reg, miss_label); + HeapType::Iterator<Map> it = lookup->GetFieldType()->Classes(); + Label do_store; + while (true) { + __ CompareMap(value_reg, it.Current()); + it.Advance(); + if (it.Done()) { + __ j(not_equal, miss_label); + break; } + __ j(equal, &do_store, Label::kNear); } + __ bind(&do_store); - // Return the value (register eax). 
- ASSERT(value_reg.is(eax)); - __ ret(0); -} - - -void StubCompiler::GenerateTailCall(MacroAssembler* masm, Handle<Code> code) { - __ jmp(code, RelocInfo::CODE_TARGET); + StoreFieldStub stub(isolate(), lookup->GetFieldIndex(), + lookup->representation()); + GenerateTailCall(masm(), stub.GetCode()); } -#undef __ -#define __ ACCESS_MASM(masm()) - - -Register StubCompiler::CheckPrototypes(Handle<HeapType> type, - Register object_reg, - Handle<JSObject> holder, - Register holder_reg, - Register scratch1, - Register scratch2, - Handle<Name> name, - Label* miss, - PrototypeCheckType check) { - Handle<Map> receiver_map(IC::TypeToMap(*type, isolate())); +Register PropertyHandlerCompiler::CheckPrototypes( + Register object_reg, Register holder_reg, Register scratch1, + Register scratch2, Handle<Name> name, Label* miss, + PrototypeCheckType check) { + Handle<Map> receiver_map(IC::TypeToMap(*type(), isolate())); // Make sure there's no overlap between holder and object registers. - ASSERT(!scratch1.is(object_reg) && !scratch1.is(holder_reg)); - ASSERT(!scratch2.is(object_reg) && !scratch2.is(holder_reg) + DCHECK(!scratch1.is(object_reg) && !scratch1.is(holder_reg)); + DCHECK(!scratch2.is(object_reg) && !scratch2.is(holder_reg) && !scratch2.is(scratch1)); // Keep track of the current object in register reg. @@ -831,11 +608,11 @@ Register StubCompiler::CheckPrototypes(Handle<HeapType> type, int depth = 0; Handle<JSObject> current = Handle<JSObject>::null(); - if (type->IsConstant()) current = - Handle<JSObject>::cast(type->AsConstant()->Value()); + if (type()->IsConstant()) + current = Handle<JSObject>::cast(type()->AsConstant()->Value()); Handle<JSObject> prototype = Handle<JSObject>::null(); Handle<Map> current_map = receiver_map; - Handle<Map> holder_map(holder->map()); + Handle<Map> holder_map(holder()->map()); // Traverse the prototype chain and check the maps in the prototype chain for // fast and global objects or do negative lookup for normal objects. while (!current_map.is_identical_to(holder_map)) { @@ -843,18 +620,18 @@ Register StubCompiler::CheckPrototypes(Handle<HeapType> type, // Only global objects and objects that do not require access // checks are allowed in stubs. - ASSERT(current_map->IsJSGlobalProxyMap() || + DCHECK(current_map->IsJSGlobalProxyMap() || !current_map->is_access_check_needed()); prototype = handle(JSObject::cast(current_map->prototype())); if (current_map->is_dictionary_map() && - !current_map->IsJSGlobalObjectMap() && - !current_map->IsJSGlobalProxyMap()) { + !current_map->IsJSGlobalObjectMap()) { + DCHECK(!current_map->IsJSGlobalProxyMap()); // Proxy maps are fast. if (!name->IsUniqueName()) { - ASSERT(name->IsString()); + DCHECK(name->IsString()); name = factory()->InternalizeString(Handle<String>::cast(name)); } - ASSERT(current.is_null() || + DCHECK(current.is_null() || current->property_dictionary()->FindEntry(name) == NameDictionary::kNotFound); @@ -866,6 +643,11 @@ Register StubCompiler::CheckPrototypes(Handle<HeapType> type, __ mov(reg, FieldOperand(scratch1, Map::kPrototypeOffset)); } else { bool in_new_space = heap()->InNewSpace(*prototype); + // Two possible reasons for loading the prototype from the map: + // (1) Can't store references to new space in code. + // (2) Handler is shared for all receivers with the same prototype + // map (but not necessarily the same prototype instance). 
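// Aside: a standalone sketch (not part of this patch) of the walk that
// CheckPrototypes performs: follow maps up the prototype chain from the
// receiver to the holder, missing as soon as the chain deviates. The
// load_prototype_from_map flag just below then decides whether each hop
// reads the prototype out of the map or uses an embedded instance.
#include <cstddef>

struct ProtoMapSketch {
  const ProtoMapSketch* prototype_map;  // next map on the chain, or NULL
};

static bool ChainReachesHolderSketch(const ProtoMapSketch* receiver_map,
                                     const ProtoMapSketch* holder_map) {
  for (const ProtoMapSketch* m = receiver_map; m != NULL;
       m = m->prototype_map) {
    if (m == holder_map) return true;  // chain intact up to the holder
  }
  return false;  // corresponds to jumping to the miss label
}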
+ bool load_prototype_from_map = in_new_space || depth == 1; if (depth != 1 || check == CHECK_ALL_MAPS) { __ CheckMap(reg, current_map, miss, DONT_DO_SMI_CHECK); } @@ -873,6 +655,9 @@ Register StubCompiler::CheckPrototypes(Handle<HeapType> type, // Check access rights to the global object. This has to happen after // the map check so that we know that the object is actually a global // object. + // This allows us to install generated handlers for accesses to the + // global proxy (as opposed to using slow ICs). See corresponding code + // in LookupForRead(). if (current_map->IsJSGlobalProxyMap()) { __ CheckAccessGlobalProxy(reg, scratch1, scratch2, miss); } else if (current_map->IsJSGlobalObjectMap()) { @@ -881,19 +666,16 @@ Register StubCompiler::CheckPrototypes(Handle<HeapType> type, scratch2, miss); } - if (in_new_space) { + if (load_prototype_from_map) { // Save the map in scratch1 for later. __ mov(scratch1, FieldOperand(reg, HeapObject::kMapOffset)); } reg = holder_reg; // From now on the object will be in holder_reg. - if (in_new_space) { - // The prototype is in new space; we cannot store a reference to it - // in the code. Load it from the map. + if (load_prototype_from_map) { __ mov(reg, FieldOperand(scratch1, Map::kPrototypeOffset)); } else { - // The prototype is in old space; load it directly. __ mov(reg, prototype); } } @@ -912,7 +694,7 @@ Register StubCompiler::CheckPrototypes(Handle<HeapType> type, } // Perform security check for access to the global object. - ASSERT(current_map->IsJSGlobalProxyMap() || + DCHECK(current_map->IsJSGlobalProxyMap() || !current_map->is_access_check_needed()); if (current_map->IsJSGlobalProxyMap()) { __ CheckAccessGlobalProxy(reg, scratch1, scratch2, miss); @@ -923,7 +705,7 @@ Register StubCompiler::CheckPrototypes(Handle<HeapType> type, } -void LoadStubCompiler::HandlerFrontendFooter(Handle<Name> name, Label* miss) { +void NamedLoadHandlerCompiler::FrontendFooter(Handle<Name> name, Label* miss) { if (!miss->is_unused()) { Label success; __ jmp(&success); @@ -934,102 +716,21 @@ void LoadStubCompiler::HandlerFrontendFooter(Handle<Name> name, Label* miss) { } -void StoreStubCompiler::HandlerFrontendFooter(Handle<Name> name, Label* miss) { +void NamedStoreHandlerCompiler::FrontendFooter(Handle<Name> name, Label* miss) { if (!miss->is_unused()) { Label success; __ jmp(&success); - GenerateRestoreName(masm(), miss, name); + GenerateRestoreName(miss, name); TailCallBuiltin(masm(), MissBuiltin(kind())); __ bind(&success); } } -Register LoadStubCompiler::CallbackHandlerFrontend( - Handle<HeapType> type, - Register object_reg, - Handle<JSObject> holder, - Handle<Name> name, - Handle<Object> callback) { - Label miss; - - Register reg = HandlerFrontendHeader(type, object_reg, holder, name, &miss); - - if (!holder->HasFastProperties() && !holder->IsJSGlobalObject()) { - ASSERT(!reg.is(scratch2())); - ASSERT(!reg.is(scratch3())); - Register dictionary = scratch1(); - bool must_preserve_dictionary_reg = reg.is(dictionary); - - // Load the properties dictionary. - if (must_preserve_dictionary_reg) { - __ push(dictionary); - } - __ mov(dictionary, FieldOperand(reg, JSObject::kPropertiesOffset)); - - // Probe the dictionary. 
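// Aside: a sketch (not from this patch) of the slot arithmetic in the
// removed dictionary probe below: NameDictionary entries are (key, value,
// details) triples, so once the lookup stub yields an entry index, the
// value slot sits one pointer past the entry's start. All constants here
// are hypothetical stand-ins for the real layout.
static const int kElementsStartSketch = 3 * 4;  // header words * pointer size
static const int kEntrySizeSketch = 3;          // key, value, details
static const int kPointerSizeSketch = 4;

static int ValueSlotOffsetSketch(int entry_index) {
  return kElementsStartSketch +
         (entry_index * kEntrySizeSketch + 1) * kPointerSizeSketch;
}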
- Label probe_done, pop_and_miss; - NameDictionaryLookupStub::GeneratePositiveLookup(masm(), - &pop_and_miss, - &probe_done, - dictionary, - this->name(), - scratch2(), - scratch3()); - __ bind(&pop_and_miss); - if (must_preserve_dictionary_reg) { - __ pop(dictionary); - } - __ jmp(&miss); - __ bind(&probe_done); - - // If probing finds an entry in the dictionary, scratch2 contains the - // index into the dictionary. Check that the value is the callback. - Register index = scratch2(); - const int kElementsStartOffset = - NameDictionary::kHeaderSize + - NameDictionary::kElementsStartIndex * kPointerSize; - const int kValueOffset = kElementsStartOffset + kPointerSize; - __ mov(scratch3(), - Operand(dictionary, index, times_4, kValueOffset - kHeapObjectTag)); - if (must_preserve_dictionary_reg) { - __ pop(dictionary); - } - __ cmp(scratch3(), callback); - __ j(not_equal, &miss); - } - - HandlerFrontendFooter(name, &miss); - return reg; -} - - -void LoadStubCompiler::GenerateLoadField(Register reg, - Handle<JSObject> holder, - PropertyIndex field, - Representation representation) { - if (!reg.is(receiver())) __ mov(receiver(), reg); - if (kind() == Code::LOAD_IC) { - LoadFieldStub stub(isolate(), - field.is_inobject(holder), - field.translate(holder), - representation); - GenerateTailCall(masm(), stub.GetCode()); - } else { - KeyedLoadFieldStub stub(isolate(), - field.is_inobject(holder), - field.translate(holder), - representation); - GenerateTailCall(masm(), stub.GetCode()); - } -} - - -void LoadStubCompiler::GenerateLoadCallback( - Register reg, - Handle<ExecutableAccessorInfo> callback) { +void NamedLoadHandlerCompiler::GenerateLoadCallback( + Register reg, Handle<ExecutableAccessorInfo> callback) { // Insert additional parameters into the stack frame above return address. - ASSERT(!scratch3().is(reg)); + DCHECK(!scratch3().is(reg)); __ pop(scratch3()); // Get return address to place it below. STATIC_ASSERT(PropertyCallbackArguments::kHolderIndex == 0); @@ -1041,7 +742,7 @@ void LoadStubCompiler::GenerateLoadCallback( __ push(receiver()); // receiver // Push data from ExecutableAccessorInfo. if (isolate()->heap()->InNewSpace(callback->data())) { - ASSERT(!scratch2().is(reg)); + DCHECK(!scratch2().is(reg)); __ mov(scratch2(), Immediate(callback)); __ push(FieldOperand(scratch2(), ExecutableAccessorInfo::kDataOffset)); } else { @@ -1071,21 +772,18 @@ void LoadStubCompiler::GenerateLoadCallback( } -void LoadStubCompiler::GenerateLoadConstant(Handle<Object> value) { +void NamedLoadHandlerCompiler::GenerateLoadConstant(Handle<Object> value) { // Return the constant value. 
__ LoadObject(eax, value); __ ret(0); } -void LoadStubCompiler::GenerateLoadInterceptor( - Register holder_reg, - Handle<Object> object, - Handle<JSObject> interceptor_holder, - LookupResult* lookup, - Handle<Name> name) { - ASSERT(interceptor_holder->HasNamedInterceptor()); - ASSERT(!interceptor_holder->GetNamedInterceptor()->getter()->IsUndefined()); +void NamedLoadHandlerCompiler::GenerateLoadInterceptor(Register holder_reg, + LookupResult* lookup, + Handle<Name> name) { + DCHECK(holder()->HasNamedInterceptor()); + DCHECK(!holder()->GetNamedInterceptor()->getter()->IsUndefined()); // So far the most popular follow ups for interceptor loads are FIELD // and CALLBACKS, so inline only them, other cases may be added @@ -1096,10 +794,12 @@ void LoadStubCompiler::GenerateLoadInterceptor( compile_followup_inline = true; } else if (lookup->type() == CALLBACKS && lookup->GetCallbackObject()->IsExecutableAccessorInfo()) { - ExecutableAccessorInfo* callback = - ExecutableAccessorInfo::cast(lookup->GetCallbackObject()); - compile_followup_inline = callback->getter() != NULL && - callback->IsCompatibleReceiver(*object); + Handle<ExecutableAccessorInfo> callback( + ExecutableAccessorInfo::cast(lookup->GetCallbackObject())); + compile_followup_inline = + callback->getter() != NULL && + ExecutableAccessorInfo::IsCompatibleReceiverType(isolate(), callback, + type()); } } @@ -1107,13 +807,13 @@ void LoadStubCompiler::GenerateLoadInterceptor( // Compile the interceptor call, followed by inline code to load the // property from further up the prototype chain if the call fails. // Check that the maps haven't changed. - ASSERT(holder_reg.is(receiver()) || holder_reg.is(scratch1())); + DCHECK(holder_reg.is(receiver()) || holder_reg.is(scratch1())); // Preserve the receiver register explicitly whenever it is different from // the holder and it is needed should the interceptor return without any // result. The CALLBACKS case needs the receiver to be passed into C++ code, // the FIELD case might cause a miss during the prototype check. - bool must_perfrom_prototype_check = *interceptor_holder != lookup->holder(); + bool must_perfrom_prototype_check = *holder() != lookup->holder(); bool must_preserve_receiver_reg = !receiver().is(holder_reg) && (lookup->type() == CALLBACKS || must_perfrom_prototype_check); @@ -1132,7 +832,7 @@ void LoadStubCompiler::GenerateLoadInterceptor( // interceptor's holder has been compiled before (see a caller // of this method.) CompileCallLoadPropertyWithInterceptor( - masm(), receiver(), holder_reg, this->name(), interceptor_holder, + masm(), receiver(), holder_reg, this->name(), holder(), IC::kLoadPropertyWithInterceptorOnly); // Check if interceptor provided a value for property. If it's @@ -1160,30 +860,28 @@ void LoadStubCompiler::GenerateLoadInterceptor( // Leave the internal frame. } - GenerateLoadPostInterceptor(holder_reg, interceptor_holder, name, lookup); + GenerateLoadPostInterceptor(holder_reg, name, lookup); } else { // !compile_followup_inline // Call the runtime system to load the interceptor. // Check that the maps haven't changed. 
     __ pop(scratch2());  // save old return address
-    PushInterceptorArguments(masm(), receiver(), holder_reg,
-                             this->name(), interceptor_holder);
+    PushInterceptorArguments(masm(), receiver(), holder_reg, this->name(),
+                             holder());
     __ push(scratch2());  // restore old return address

     ExternalReference ref =
-        ExternalReference(IC_Utility(IC::kLoadPropertyWithInterceptorForLoad),
+        ExternalReference(IC_Utility(IC::kLoadPropertyWithInterceptor),
                           isolate());
-    __ TailCallExternalReference(ref, StubCache::kInterceptorArgsLength, 1);
+    __ TailCallExternalReference(
+        ref, NamedLoadHandlerCompiler::kInterceptorArgsLength, 1);
   }
 }


-Handle<Code> StoreStubCompiler::CompileStoreCallback(
-    Handle<JSObject> object,
-    Handle<JSObject> holder,
-    Handle<Name> name,
+Handle<Code> NamedStoreHandlerCompiler::CompileStoreCallback(
+    Handle<JSObject> object, Handle<Name> name,
     Handle<ExecutableAccessorInfo> callback) {
-  Register holder_reg = HandlerFrontend(
-      IC::CurrentTypeOf(object, isolate()), receiver(), holder, name);
+  Register holder_reg = Frontend(receiver(), name);

   __ pop(scratch1());  // remove the return address
   __ push(receiver());
@@ -1207,10 +905,8 @@ Handle<Code> StoreStubCompiler::CompileStoreCallback(
 #define __ ACCESS_MASM(masm)


-void StoreStubCompiler::GenerateStoreViaSetter(
-    MacroAssembler* masm,
-    Handle<HeapType> type,
-    Register receiver,
+void NamedStoreHandlerCompiler::GenerateStoreViaSetter(
+    MacroAssembler* masm, Handle<HeapType> type, Register receiver,
     Handle<JSFunction> setter) {
   // ----------- S t a t e -------------
   //  -- esp[0] : return address
@@ -1226,7 +922,7 @@ void StoreStubCompiler::GenerateStoreViaSetter(
     if (IC::TypeToMap(*type, masm->isolate())->IsJSGlobalObjectMap()) {
       // Swap in the global receiver.
       __ mov(receiver,
-             FieldOperand(receiver, JSGlobalObject::kGlobalReceiverOffset));
+             FieldOperand(receiver, JSGlobalObject::kGlobalProxyOffset));
     }
     __ push(receiver);
     __ push(value());
@@ -1254,8 +950,7 @@ void StoreStubCompiler::GenerateStoreViaSetter(
 #define __ ACCESS_MASM(masm())


-Handle<Code> StoreStubCompiler::CompileStoreInterceptor(
-    Handle<JSObject> object,
+Handle<Code> NamedStoreHandlerCompiler::CompileStoreInterceptor(
     Handle<Name> name) {
   __ pop(scratch1());  // remove the return address
   __ push(receiver());
@@ -1264,8 +959,8 @@ Handle<Code> StoreStubCompiler::CompileStoreInterceptor(
   __ push(scratch1());  // restore return address

   // Do tail-call to the runtime system.
-  ExternalReference store_ic_property =
-      ExternalReference(IC_Utility(IC::kStoreInterceptorProperty), isolate());
+  ExternalReference store_ic_property = ExternalReference(
+      IC_Utility(IC::kStorePropertyWithInterceptor), isolate());
   __ TailCallExternalReference(store_ic_property, 3, 1);

   // Return the generated code.
@@ -1273,23 +968,8 @@ Handle<Code> StoreStubCompiler::CompileStoreInterceptor(
 }


-void StoreStubCompiler::GenerateStoreArrayLength() {
-  // Prepare tail call to StoreIC_ArrayLength.
-  __ pop(scratch1());  // remove the return address
-  __ push(receiver());
-  __ push(value());
-  __ push(scratch1());  // restore return address
-
-  ExternalReference ref =
-      ExternalReference(IC_Utility(IC::kStoreIC_ArrayLength),
-                        masm()->isolate());
-  __ TailCallExternalReference(ref, 2, 1);
-}
-
-
-Handle<Code> KeyedStoreStubCompiler::CompileStorePolymorphic(
-    MapHandleList* receiver_maps,
-    CodeHandleList* handler_stubs,
+Handle<Code> PropertyICCompiler::CompileKeyedStorePolymorphic(
+    MapHandleList* receiver_maps, CodeHandleList* handler_stubs,
     MapHandleList* transitioned_maps) {
   Label miss;
   __ JumpIfSmi(receiver(), &miss, Label::kNear);
@@ -1310,67 +990,39 @@ Handle<Code> KeyedStoreStubCompiler::CompileStorePolymorphic(
   TailCallBuiltin(masm(), MissBuiltin(kind()));

   // Return the generated code.
-  return GetICCode(
-      kind(), Code::NORMAL, factory()->empty_string(), POLYMORPHIC);
+  return GetCode(kind(), Code::NORMAL, factory()->empty_string(), POLYMORPHIC);
 }


-Handle<Code> LoadStubCompiler::CompileLoadNonexistent(Handle<HeapType> type,
-                                                      Handle<JSObject> last,
-                                                      Handle<Name> name) {
-  NonexistentHandlerFrontend(type, last, name);
-
-  // Return undefined if maps of the full prototype chain are still the
-  // same and no global property with this name contains a value.
-  __ mov(eax, isolate()->factory()->undefined_value());
-  __ ret(0);
-
-  // Return the generated code.
-  return GetCode(kind(), Code::FAST, name);
-}
-
-
-Register* LoadStubCompiler::registers() {
-  // receiver, name, scratch1, scratch2, scratch3, scratch4.
-  static Register registers[] = { edx, ecx, ebx, eax, edi, no_reg };
-  return registers;
-}
-
-
-Register* KeyedLoadStubCompiler::registers() {
+Register* PropertyAccessCompiler::load_calling_convention() {
   // receiver, name, scratch1, scratch2, scratch3, scratch4.
-  static Register registers[] = { edx, ecx, ebx, eax, edi, no_reg };
+  Register receiver = LoadIC::ReceiverRegister();
+  Register name = LoadIC::NameRegister();
+  static Register registers[] = { receiver, name, ebx, eax, edi, no_reg };
   return registers;
 }


-Register StoreStubCompiler::value() {
-  return eax;
-}
-
-
-Register* StoreStubCompiler::registers() {
+Register* PropertyAccessCompiler::store_calling_convention() {
   // receiver, name, scratch1, scratch2, scratch3.
-  static Register registers[] = { edx, ecx, ebx, edi, no_reg };
+  Register receiver = StoreIC::ReceiverRegister();
+  Register name = StoreIC::NameRegister();
+  DCHECK(ebx.is(KeyedStoreIC::MapRegister()));
+  static Register registers[] = { receiver, name, ebx, edi, no_reg };
   return registers;
 }


-Register* KeyedStoreStubCompiler::registers() {
-  // receiver, name, scratch1, scratch2, scratch3.
-  static Register registers[] = { edx, ecx, ebx, edi, no_reg };
-  return registers;
-}
+Register NamedStoreHandlerCompiler::value() { return StoreIC::ValueRegister(); }


 #undef __
 #define __ ACCESS_MASM(masm)
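
The register arrays above stop hard-coding ia32 registers and instead derive the receiver and name slots from the IC calling convention accessors. A minimal sketch of a generator written against that convention (illustrative only; GenerateExampleHandler is not part of this patch; per the arrays above, the accessors resolve to edx and ecx on ia32):

    // Hypothetical handler body: query the calling convention instead of
    // naming edx/ecx directly, so the code survives a convention change.
    static void GenerateExampleHandler(MacroAssembler* masm) {
      Register receiver = LoadIC::ReceiverRegister();  // edx on ia32
      Register name = LoadIC::NameRegister();          // ecx on ia32
      __ mov(eax, FieldOperand(receiver, HeapObject::kMapOffset));
      // ... dispatch on the map and |name| as the real handlers do ...
    }
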
-void LoadStubCompiler::GenerateLoadViaGetter(MacroAssembler* masm,
-                                             Handle<HeapType> type,
-                                             Register receiver,
-                                             Handle<JSFunction> getter) {
+void NamedLoadHandlerCompiler::GenerateLoadViaGetter(
+    MacroAssembler* masm, Handle<HeapType> type, Register receiver,
+    Handle<JSFunction> getter) {
   {
     FrameScope scope(masm, StackFrame::INTERNAL);
@@ -1379,7 +1031,7 @@ void LoadStubCompiler::GenerateLoadViaGetter(MacroAssembler* masm,
     if (IC::TypeToMap(*type, masm->isolate())->IsJSGlobalObjectMap()) {
       // Swap in the global receiver.
       __ mov(receiver,
-             FieldOperand(receiver, JSGlobalObject::kGlobalReceiverOffset));
+             FieldOperand(receiver, JSGlobalObject::kGlobalProxyOffset));
     }
     __ push(receiver);
     ParameterCount actual(0);
@@ -1403,29 +1055,26 @@ void LoadStubCompiler::GenerateLoadViaGetter(MacroAssembler* masm,
 #define __ ACCESS_MASM(masm())


-Handle<Code> LoadStubCompiler::CompileLoadGlobal(
-    Handle<HeapType> type,
-    Handle<GlobalObject> global,
-    Handle<PropertyCell> cell,
-    Handle<Name> name,
-    bool is_dont_delete) {
+Handle<Code> NamedLoadHandlerCompiler::CompileLoadGlobal(
+    Handle<PropertyCell> cell, Handle<Name> name, bool is_configurable) {
   Label miss;
-  HandlerFrontendHeader(type, receiver(), global, name, &miss);
+  FrontendHeader(receiver(), name, &miss);

   // Get the value from the cell.
-  if (Serializer::enabled(isolate())) {
-    __ mov(eax, Immediate(cell));
-    __ mov(eax, FieldOperand(eax, PropertyCell::kValueOffset));
+  Register result = StoreIC::ValueRegister();
+  if (masm()->serializer_enabled()) {
+    __ mov(result, Immediate(cell));
+    __ mov(result, FieldOperand(result, PropertyCell::kValueOffset));
   } else {
-    __ mov(eax, Operand::ForCell(cell));
+    __ mov(result, Operand::ForCell(cell));
   }

   // Check for deleted property if property can actually be deleted.
-  if (!is_dont_delete) {
-    __ cmp(eax, factory()->the_hole_value());
+  if (is_configurable) {
+    __ cmp(result, factory()->the_hole_value());
     __ j(equal, &miss);
   } else if (FLAG_debug_code) {
-    __ cmp(eax, factory()->the_hole_value());
+    __ cmp(result, factory()->the_hole_value());
     __ Check(not_equal, kDontDeleteCellsCannotContainTheHole);
   }
@@ -1434,32 +1083,40 @@ Handle<Code> LoadStubCompiler::CompileLoadGlobal(
   // The code above already loads the result into the return register.
   __ ret(0);

-  HandlerFrontendFooter(name, &miss);
+  FrontendFooter(name, &miss);

   // Return the generated code.
   return GetCode(kind(), Code::NORMAL, name);
 }


-Handle<Code> BaseLoadStoreStubCompiler::CompilePolymorphicIC(
-    TypeHandleList* types,
-    CodeHandleList* handlers,
-    Handle<Name> name,
-    Code::StubType type,
-    IcCheckType check) {
+Handle<Code> PropertyICCompiler::CompilePolymorphic(TypeHandleList* types,
+                                                    CodeHandleList* handlers,
+                                                    Handle<Name> name,
+                                                    Code::StubType type,
+                                                    IcCheckType check) {
   Label miss;

   if (check == PROPERTY &&
       (kind() == Code::KEYED_LOAD_IC || kind() == Code::KEYED_STORE_IC)) {
-    __ cmp(this->name(), Immediate(name));
-    __ j(not_equal, &miss);
+    // In case we are compiling an IC for dictionary loads and stores, just
+    // check whether the name is unique.
+    if (name.is_identical_to(isolate()->factory()->normal_ic_symbol())) {
+      __ JumpIfNotUniqueName(this->name(), &miss);
+    } else {
+      __ cmp(this->name(), Immediate(name));
+      __ j(not_equal, &miss);
+    }
   }

   Label number_case;
   Label* smi_target = IncludesNumberType(types) ? &number_case : &miss;
   __ JumpIfSmi(receiver(), smi_target);

+  // Polymorphic keyed stores may use the map register
   Register map_reg = scratch1();
+  DCHECK(kind() != Code::KEYED_STORE_IC ||
+         map_reg.is(KeyedStoreIC::MapRegister()));
   __ mov(map_reg, FieldOperand(receiver(), HeapObject::kMapOffset));
   int receiver_count = types->length();
   int number_of_handled_maps = 0;
@@ -1470,13 +1127,13 @@ Handle<Code> BaseLoadStoreStubCompiler::CompilePolymorphicIC(
       number_of_handled_maps++;
       __ cmp(map_reg, map);
       if (type->Is(HeapType::Number())) {
-        ASSERT(!number_case.is_unused());
+        DCHECK(!number_case.is_unused());
         __ bind(&number_case);
       }
       __ j(equal, handlers->at(current));
     }
   }
-  ASSERT(number_of_handled_maps != 0);
+  DCHECK(number_of_handled_maps != 0);

   __ bind(&miss);
   TailCallBuiltin(masm(), MissBuiltin(kind()));
@@ -1484,7 +1141,7 @@ Handle<Code> BaseLoadStoreStubCompiler::CompilePolymorphicIC(
   // Return the generated code.
   InlineCacheState state =
       number_of_handled_maps > 1 ? POLYMORPHIC : MONOMORPHIC;
-  return GetICCode(kind(), type, name, state);
+  return GetCode(kind(), type, name, state);
 }

@@ -1492,13 +1149,15 @@ Handle<Code> BaseLoadStoreStubCompiler::CompilePolymorphicIC(
 #define __ ACCESS_MASM(masm)


-void KeyedLoadStubCompiler::GenerateLoadDictionaryElement(
+void ElementHandlerCompiler::GenerateLoadDictionaryElement(
     MacroAssembler* masm) {
   // ----------- S t a t e -------------
   //  -- ecx    : key
   //  -- edx    : receiver
   //  -- esp[0] : return address
   // -----------------------------------
+  DCHECK(edx.is(LoadIC::ReceiverRegister()));
+  DCHECK(ecx.is(LoadIC::NameRegister()));
   Label slow, miss;

   // This stub is meant to be tail-jumped to, the receiver must already
diff --git a/deps/v8/src/ic-inl.h b/deps/v8/src/ic-inl.h
index 010df08c664..c7954ce13a3 100644
--- a/deps/v8/src/ic-inl.h
+++ b/deps/v8/src/ic-inl.h
@@ -5,11 +5,12 @@
 #ifndef V8_IC_INL_H_
 #define V8_IC_INL_H_

-#include "ic.h"
+#include "src/ic.h"

-#include "compiler.h"
-#include "debug.h"
-#include "macro-assembler.h"
+#include "src/compiler.h"
+#include "src/debug.h"
+#include "src/macro-assembler.h"
+#include "src/prototype.h"

 namespace v8 {
 namespace internal {
@@ -87,7 +88,7 @@ Code* IC::GetTargetAtAddress(Address address,
   // Convert target address to the code object. Code::GetCodeFromTargetAddress
   // is safe for use during GC where the map might be marked.
   Code* result = Code::GetCodeFromTargetAddress(target);
-  ASSERT(result->is_inline_cache_stub());
+  DCHECK(result->is_inline_cache_stub());
   return result;
 }

@@ -95,7 +96,7 @@ Code* IC::GetTargetAtAddress(Address address,
 void IC::SetTargetAtAddress(Address address,
                             Code* target,
                             ConstantPoolArray* constant_pool) {
-  ASSERT(target->is_inline_cache_stub() || target->is_compare_ic_stub());
+  DCHECK(target->is_inline_cache_stub() || target->is_compare_ic_stub());
   Heap* heap = target->GetHeap();
   Code* old_target = GetTargetAtAddress(address, constant_pool);
 #ifdef DEBUG
@@ -103,7 +104,7 @@ void IC::SetTargetAtAddress(Address address,
   // ICs as strict mode. The strict-ness of the IC must be preserved.
   if (old_target->kind() == Code::STORE_IC ||
       old_target->kind() == Code::KEYED_STORE_IC) {
-    ASSERT(StoreIC::GetStrictMode(old_target->extra_ic_state()) ==
+    DCHECK(StoreIC::GetStrictMode(old_target->extra_ic_state()) ==
            StoreIC::GetStrictMode(target->extra_ic_state()));
   }
 #endif
@@ -118,59 +119,71 @@ void IC::SetTargetAtAddress(Address address,
 }


-InlineCacheHolderFlag IC::GetCodeCacheForObject(Object* object) {
-  if (object->IsJSObject()) return OWN_MAP;
-
-  // If the object is a value, we use the prototype map for the cache.
-  ASSERT(object->IsString() || object->IsSymbol() ||
-         object->IsNumber() || object->IsBoolean());
-  return PROTOTYPE_MAP;
-}
-
-
-HeapObject* IC::GetCodeCacheHolder(Isolate* isolate,
-                                   Object* object,
-                                   InlineCacheHolderFlag holder) {
-  if (object->IsSmi()) holder = PROTOTYPE_MAP;
-  Object* map_owner = holder == OWN_MAP
-      ? object : object->GetPrototype(isolate);
-  return HeapObject::cast(map_owner);
+template <class TypeClass>
+JSFunction* IC::GetRootConstructor(TypeClass* type, Context* native_context) {
+  if (type->Is(TypeClass::Boolean())) {
+    return native_context->boolean_function();
+  } else if (type->Is(TypeClass::Number())) {
+    return native_context->number_function();
+  } else if (type->Is(TypeClass::String())) {
+    return native_context->string_function();
+  } else if (type->Is(TypeClass::Symbol())) {
+    return native_context->symbol_function();
+  } else {
+    return NULL;
+  }
 }


-InlineCacheHolderFlag IC::GetCodeCacheFlag(HeapType* type) {
-  if (type->Is(HeapType::Boolean()) ||
-      type->Is(HeapType::Number()) ||
-      type->Is(HeapType::String()) ||
-      type->Is(HeapType::Symbol())) {
-    return PROTOTYPE_MAP;
+Handle<Map> IC::GetHandlerCacheHolder(HeapType* type, bool receiver_is_holder,
+                                      Isolate* isolate, CacheHolderFlag* flag) {
+  Handle<Map> receiver_map = TypeToMap(type, isolate);
+  if (receiver_is_holder) {
+    *flag = kCacheOnReceiver;
+    return receiver_map;
   }
-  return OWN_MAP;
+  Context* native_context = *isolate->native_context();
+  JSFunction* builtin_ctor = GetRootConstructor(type, native_context);
+  if (builtin_ctor != NULL) {
+    *flag = kCacheOnPrototypeReceiverIsPrimitive;
+    return handle(HeapObject::cast(builtin_ctor->instance_prototype())->map());
+  }
+  *flag = receiver_map->is_dictionary_map()
+              ? kCacheOnPrototypeReceiverIsDictionary
+              : kCacheOnPrototype;
+  // Callers must ensure that the prototype is non-null.
+  return handle(JSObject::cast(receiver_map->prototype())->map());
 }
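
GetHandlerCacheHolder collapses the old GetCodeCacheForObject / GetCodeCacheHolder / GetCodeCacheFlag trio into one four-way decision. Restated as a quick reference (same logic as the function above, comment form only, nothing new):

    // Where does a handler get cached, and under which CacheHolderFlag?
    //   receiver is the holder        -> kCacheOnReceiver,
    //                                    on the receiver map
    //   primitive receiver (Boolean/  -> kCacheOnPrototypeReceiverIsPrimitive,
    //     Number/String/Symbol, via      on the map of the root constructor's
    //     GetRootConstructor)            instance_prototype()
    //   dictionary-mode receiver map  -> kCacheOnPrototypeReceiverIsDictionary,
    //                                    on the receiver's prototype map
    //   anything else                 -> kCacheOnPrototype,
    //                                    on the receiver's prototype map
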
-Handle<Map> IC::GetCodeCacheHolder(InlineCacheHolderFlag flag,
-                                   HeapType* type,
-                                   Isolate* isolate) {
-  if (flag == PROTOTYPE_MAP) {
-    Context* context = isolate->context()->native_context();
-    JSFunction* constructor;
-    if (type->Is(HeapType::Boolean())) {
-      constructor = context->boolean_function();
-    } else if (type->Is(HeapType::Number())) {
-      constructor = context->number_function();
-    } else if (type->Is(HeapType::String())) {
-      constructor = context->string_function();
-    } else {
-      ASSERT(type->Is(HeapType::Symbol()));
-      constructor = context->symbol_function();
-    }
-    return handle(JSObject::cast(constructor->instance_prototype())->map());
+Handle<Map> IC::GetICCacheHolder(HeapType* type, Isolate* isolate,
+                                 CacheHolderFlag* flag) {
+  Context* native_context = *isolate->native_context();
+  JSFunction* builtin_ctor = GetRootConstructor(type, native_context);
+  if (builtin_ctor != NULL) {
+    *flag = kCacheOnPrototype;
+    return handle(builtin_ctor->initial_map());
   }
+  *flag = kCacheOnReceiver;
   return TypeToMap(type, isolate);
 }


+IC::State CallIC::FeedbackToState(Handle<FixedArray> vector,
+                                  Handle<Smi> slot) const {
+  IC::State state = UNINITIALIZED;
+  Object* feedback = vector->get(slot->value());
+
+  if (feedback == *TypeFeedbackInfo::MegamorphicSentinel(isolate())) {
+    state = GENERIC;
+  } else if (feedback->IsAllocationSite() || feedback->IsJSFunction()) {
+    state = MONOMORPHIC;
+  } else {
+    CHECK(feedback == *TypeFeedbackInfo::UninitializedSentinel(isolate()));
+  }
+
+  return state;
+}
 } }  // namespace v8::internal

 #endif  // V8_IC_INL_H_
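
CallIC::FeedbackToState is the first IC whose state is reconstructed from a type-feedback vector slot rather than read off patched code. The mapping, condensed from the function above as a reference:

    // vector->get(slot) value                  -> reconstructed CallIC state
    // TypeFeedbackInfo::MegamorphicSentinel    -> GENERIC
    // an AllocationSite or a JSFunction        -> MONOMORPHIC
    // TypeFeedbackInfo::UninitializedSentinel  -> UNINITIALIZED
    //                                             (CHECKed: nothing else is legal)
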
diff --git a/deps/v8/src/ic.cc b/deps/v8/src/ic.cc
index 3897f884500..db82bf52388 100644
--- a/deps/v8/src/ic.cc
+++ b/deps/v8/src/ic.cc
@@ -2,28 +2,29 @@
 // Use of this source code is governed by a BSD-style license that can be
 // found in the LICENSE file.

-#include "v8.h"
-
-#include "accessors.h"
-#include "api.h"
-#include "arguments.h"
-#include "codegen.h"
-#include "conversions.h"
-#include "execution.h"
-#include "ic-inl.h"
-#include "runtime.h"
-#include "stub-cache.h"
+#include "src/v8.h"
+
+#include "src/accessors.h"
+#include "src/api.h"
+#include "src/arguments.h"
+#include "src/codegen.h"
+#include "src/conversions.h"
+#include "src/execution.h"
+#include "src/ic-inl.h"
+#include "src/prototype.h"
+#include "src/runtime.h"
+#include "src/stub-cache.h"

 namespace v8 {
 namespace internal {

-#ifdef DEBUG
 char IC::TransitionMarkFromState(IC::State state) {
   switch (state) {
     case UNINITIALIZED: return '0';
     case PREMONOMORPHIC: return '.';
     case MONOMORPHIC: return '1';
-    case MONOMORPHIC_PROTOTYPE_FAILURE: return '^';
+    case PROTOTYPE_FAILURE:
+      return '^';
     case POLYMORPHIC: return 'P';
     case MEGAMORPHIC: return 'N';
     case GENERIC: return 'G';
@@ -32,6 +33,9 @@ char IC::TransitionMarkFromState(IC::State state) {
     // computed from the original code - not the patched code. Let
     // these cases fall through to the unreachable code below.
     case DEBUG_STUB: break;
+    // Type-vector-based ICs resolve state to one of the above.
+    case DEFAULT:
+      break;
   }
   UNREACHABLE();
   return 0;
 }
@@ -48,55 +52,74 @@ const char* GetTransitionMarkModifier(KeyedAccessStoreMode mode) {
 }


-void IC::TraceIC(const char* type,
-                 Handle<Object> name) {
+#ifdef DEBUG
+
+#define TRACE_GENERIC_IC(isolate, type, reason)                \
+  do {                                                         \
+    if (FLAG_trace_ic) {                                       \
+      PrintF("[%s patching generic stub in ", type);           \
+      JavaScriptFrame::PrintTop(isolate, stdout, false, true); \
+      PrintF(" (%s)]\n", reason);                              \
+    }                                                          \
+  } while (false)
+
+#else
+
+#define TRACE_GENERIC_IC(isolate, type, reason)
+
+#endif  // DEBUG
+
+
+void IC::TraceIC(const char* type, Handle<Object> name) {
   if (FLAG_trace_ic) {
     Code* new_target = raw_target();
     State new_state = new_target->ic_state();
+    TraceIC(type, name, state(), new_state);
+  }
+}
+
+
+void IC::TraceIC(const char* type, Handle<Object> name, State old_state,
+                 State new_state) {
+  if (FLAG_trace_ic) {
+    Code* new_target = raw_target();
     PrintF("[%s%s in ", new_target->is_keyed_stub() ? "Keyed" : "", type);
-    StackFrameIterator it(isolate());
-    while (it.frame()->fp() != this->fp()) it.Advance();
-    StackFrame* raw_frame = it.frame();
-    if (raw_frame->is_internal()) {
-      Code* apply_builtin = isolate()->builtins()->builtin(
-          Builtins::kFunctionApply);
-      if (raw_frame->unchecked_code() == apply_builtin) {
-        PrintF("apply from ");
-        it.Advance();
-        raw_frame = it.frame();
-      }
+
+    // TODO(jkummerow): Add support for "apply". The logic is roughly:
+    // marker = [fp_ + kMarkerOffset];
+    // if marker is smi and marker.value == INTERNAL and
+    //     the frame's code == builtin(Builtins::kFunctionApply):
+    //   then print "apply from" and advance one frame
+
+    Object* maybe_function =
+        Memory::Object_at(fp_ + JavaScriptFrameConstants::kFunctionOffset);
+    if (maybe_function->IsJSFunction()) {
+      JSFunction* function = JSFunction::cast(maybe_function);
+      JavaScriptFrame::PrintFunctionAndOffset(function, function->code(), pc(),
+                                              stdout, true);
     }
-    JavaScriptFrame::PrintTop(isolate(), stdout, false, true);
+
     ExtraICState extra_state = new_target->extra_ic_state();
     const char* modifier = "";
     if (new_target->kind() == Code::KEYED_STORE_IC) {
       modifier = GetTransitionMarkModifier(
           KeyedStoreIC::GetKeyedAccessStoreMode(extra_state));
     }
-    PrintF(" (%c->%c%s)",
-           TransitionMarkFromState(state()),
-           TransitionMarkFromState(new_state),
-           modifier);
-    name->Print();
+    PrintF(" (%c->%c%s)", TransitionMarkFromState(old_state),
+           TransitionMarkFromState(new_state), modifier);
+#ifdef OBJECT_PRINT
+    OFStream os(stdout);
+    name->Print(os);
+#else
+    name->ShortPrint(stdout);
+#endif
     PrintF("]\n");
   }
 }


-#define TRACE_GENERIC_IC(isolate, type, reason) \
-  do { \
-    if (FLAG_trace_ic) { \
-      PrintF("[%s patching generic stub in ", type); \
-      JavaScriptFrame::PrintTop(isolate, stdout, false, true); \
-      PrintF(" (%s)]\n", reason); \
-    } \
-  } while (false)
-
-#else
-#define TRACE_GENERIC_IC(isolate, type, reason)
-#endif  // DEBUG
-
-#define TRACE_IC(type, name) \
-  ASSERT((TraceIC(type, name), true))
+#define TRACE_IC(type, name) TraceIC(type, name)
+#define TRACE_VECTOR_IC(type, name, old_state, new_state) \
+  TraceIC(type, name, old_state, new_state)

 IC::IC(FrameDepth depth, Isolate* isolate)
     : isolate_(isolate),
@@ -131,7 +154,7 @@ IC::IC(FrameDepth depth, Isolate* isolate)
   StackFrameIterator it(isolate);
   for (int i = 0; i < depth + 1; i++) it.Advance();
   StackFrame* frame = it.frame();
-  ASSERT(fp == frame->fp() && pc_address == frame->pc_address());
+  DCHECK(fp == frame->fp() && pc_address == frame->pc_address());
 #endif
   fp_ = fp;
   if (FLAG_enable_ool_constant_pool) {
@@ -142,6 +165,7 @@ IC::IC(FrameDepth depth, Isolate* isolate)
   pc_address_ = StackFrame::ResolveReturnAddressLocation(pc_address);
   target_ = handle(raw_target(), isolate);
   state_ = target_->ic_state();
+  kind_ = target_->kind();
   extra_ic_state_ = target_->extra_ic_state();
 }

@@ -171,145 +195,102 @@ Code* IC::GetCode() const {
 Code* IC::GetOriginalCode() const {
   HandleScope scope(isolate());
   Handle<SharedFunctionInfo> shared(GetSharedFunctionInfo(), isolate());
-  ASSERT(Debug::HasDebugInfo(shared));
+  DCHECK(Debug::HasDebugInfo(shared));
   Code* original_code = Debug::GetDebugInfo(shared)->original_code();
-  ASSERT(original_code->IsCode());
+  DCHECK(original_code->IsCode());
   return original_code;
 }


-static bool HasInterceptorGetter(JSObject* object) {
-  return !object->GetNamedInterceptor()->getter()->IsUndefined();
-}
-
-
 static bool HasInterceptorSetter(JSObject* object) {
   return !object->GetNamedInterceptor()->setter()->IsUndefined();
 }


-static void LookupForRead(Handle<Object> object,
-                          Handle<String> name,
-                          LookupResult* lookup) {
-  // Skip all the objects with named interceptors, but
-  // without actual getter.
-  while (true) {
-    object->Lookup(name, lookup);
-    // Besides normal conditions (property not found or it's not
-    // an interceptor), bail out if lookup is not cacheable: we won't
-    // be able to IC it anyway and regular lookup should work fine.
-    if (!lookup->IsInterceptor() || !lookup->IsCacheable()) {
-      return;
-    }
-
-    Handle<JSObject> holder(lookup->holder(), lookup->isolate());
-    if (HasInterceptorGetter(*holder)) {
-      return;
-    }
-
-    holder->LocalLookupRealNamedProperty(name, lookup);
-    if (lookup->IsFound()) {
-      ASSERT(!lookup->IsInterceptor());
-      return;
-    }
-
-    Handle<Object> proto(holder->GetPrototype(), lookup->isolate());
-    if (proto->IsNull()) {
-      ASSERT(!lookup->IsFound());
-      return;
+static void LookupForRead(LookupIterator* it) {
+  for (; it->IsFound(); it->Next()) {
+    switch (it->state()) {
+      case LookupIterator::NOT_FOUND:
+        UNREACHABLE();
+      case LookupIterator::JSPROXY:
+        return;
+      case LookupIterator::INTERCEPTOR: {
+        // If there is a getter, return; otherwise loop to perform the lookup.
+        Handle<JSObject> holder = it->GetHolder<JSObject>();
+        if (!holder->GetNamedInterceptor()->getter()->IsUndefined()) {
+          return;
+        }
+        break;
+      }
+      case LookupIterator::ACCESS_CHECK:
+        // PropertyHandlerCompiler::CheckPrototypes() knows how to emit
+        // access checks for global proxies.
+        if (it->GetHolder<JSObject>()->IsJSGlobalProxy() &&
+            it->HasAccess(v8::ACCESS_GET)) {
+          break;
+        }
+        return;
+      case LookupIterator::PROPERTY:
+        if (it->HasProperty()) return;  // Yay!
+        break;
     }
-
-    object = proto;
   }
 }
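
The rewritten LookupForRead walks the chain with a single LookupIterator instead of restarting a LookupResult at every prototype. A minimal sketch of how a caller drives it, matching the way LoadIC::Load uses it later in this patch (no APIs beyond those visible here):

    LookupIterator it(object, name);
    LookupForRead(&it);
    if (!it.IsFound()) {
      // Missed the whole chain; the caller chooses between returning
      // undefined and raising a ReferenceError for undeclared globals.
    } else {
      // PROPERTY is the cacheable hit; INTERCEPTOR and JSPROXY holders
      // are routed to slower handlers.
    }
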
 bool IC::TryRemoveInvalidPrototypeDependentStub(Handle<Object> receiver,
                                                 Handle<String> name) {
-  if (!IsNameCompatibleWithMonomorphicPrototypeFailure(name)) return false;
-
-  InlineCacheHolderFlag cache_holder =
-      Code::ExtractCacheHolderFromFlags(target()->flags());
-
-  switch (cache_holder) {
-    case OWN_MAP:
-      // The stub was generated for JSObject but called for non-JSObject.
-      // IC::GetCodeCacheHolder is not applicable.
-      if (!receiver->IsJSObject()) return false;
-      break;
-    case PROTOTYPE_MAP:
-      // IC::GetCodeCacheHolder is not applicable.
-      if (receiver->GetPrototype(isolate())->IsNull()) return false;
-      break;
+  if (!IsNameCompatibleWithPrototypeFailure(name)) return false;
+  Handle<Map> receiver_map = TypeToMap(*receiver_type(), isolate());
+  maybe_handler_ = target()->FindHandlerForMap(*receiver_map);
+
+  // The current map wasn't handled yet. There's no reason to stay monomorphic,
+  // *unless* we're moving from a deprecated map to its replacement, or
+  // to a more general elements kind.
+  // TODO(verwaest): Check if the current map is actually what the old map
+  // would transition to.
+  if (maybe_handler_.is_null()) {
+    if (!receiver_map->IsJSObjectMap()) return false;
+    Map* first_map = FirstTargetMap();
+    if (first_map == NULL) return false;
+    Handle<Map> old_map(first_map);
+    if (old_map->is_deprecated()) return true;
+    if (IsMoreGeneralElementsKindTransition(old_map->elements_kind(),
+                                            receiver_map->elements_kind())) {
+      return true;
+    }
+    return false;
   }
-  Handle<Map> map(
-      IC::GetCodeCacheHolder(isolate(), *receiver, cache_holder)->map());
-
-  // Decide whether the inline cache failed because of changes to the
-  // receiver itself or changes to one of its prototypes.
-  //
-  // If there are changes to the receiver itself, the map of the
-  // receiver will have changed and the current target will not be in
-  // the receiver map's code cache. Therefore, if the current target
-  // is in the receiver map's code cache, the inline cache failed due
-  // to prototype check failure.
-  int index = map->IndexInCodeCache(*name, *target());
-  if (index >= 0) {
-    map->RemoveFromCodeCache(*name, *target(), index);
-    // Handlers are stored in addition to the ICs on the map. Remove those, too.
-    TryRemoveInvalidHandlers(map, name);
-    return true;
-  }
+  CacheHolderFlag flag;
+  Handle<Map> ic_holder_map(
+      GetICCacheHolder(*receiver_type(), isolate(), &flag));

-  // The stub is not in the cache. We've ruled out all other kinds of failure
-  // except for proptotype chain changes, a deprecated map, a map that's
-  // different from the one that the stub expects, elements kind changes, or a
-  // constant global property that will become mutable. Threat all those
-  // situations as prototype failures (stay monomorphic if possible).
-
-  // If the IC is shared between multiple receivers (slow dictionary mode), then
-  // the map cannot be deprecated and the stub invalidated.
-  if (cache_holder == OWN_MAP) {
-    Map* old_map = FirstTargetMap();
-    if (old_map == *map) return true;
-    if (old_map != NULL) {
-      if (old_map->is_deprecated()) return true;
-      if (IsMoreGeneralElementsKindTransition(old_map->elements_kind(),
-                                              map->elements_kind())) {
-        return true;
-      }
+  DCHECK(flag != kCacheOnReceiver || receiver->IsJSObject());
+  DCHECK(flag != kCacheOnPrototype || !receiver->IsJSReceiver());
+  DCHECK(flag != kCacheOnPrototypeReceiverIsDictionary);
+
+  if (state() == MONOMORPHIC) {
+    int index = ic_holder_map->IndexInCodeCache(*name, *target());
+    if (index >= 0) {
+      ic_holder_map->RemoveFromCodeCache(*name, *target(), index);
     }
   }

   if (receiver->IsGlobalObject()) {
     LookupResult lookup(isolate());
     GlobalObject* global = GlobalObject::cast(*receiver);
-    global->LocalLookupRealNamedProperty(name, &lookup);
+    global->LookupOwnRealNamedProperty(name, &lookup);
     if (!lookup.IsFound()) return false;
     PropertyCell* cell = global->GetPropertyCell(&lookup);
     return cell->type()->IsConstant();
   }

-  return false;
-}
-
-
-void IC::TryRemoveInvalidHandlers(Handle<Map> map, Handle<String> name) {
-  CodeHandleList handlers;
-  target()->FindHandlers(&handlers);
-  for (int i = 0; i < handlers.length(); i++) {
-    Handle<Code> handler = handlers.at(i);
-    int index = map->IndexInCodeCache(*name, *handler);
-    if (index >= 0) {
-      map->RemoveFromCodeCache(*name, *handler, index);
-      return;
-    }
-  }
+  return true;
 }


-bool IC::IsNameCompatibleWithMonomorphicPrototypeFailure(Handle<Object> name) {
+bool IC::IsNameCompatibleWithPrototypeFailure(Handle<Object> name) {
   if (target()->is_keyed_stub()) {
     // Determine whether the failure is due to a name failure.
     if (!name->IsName()) return false;
@@ -322,23 +303,17 @@ bool IC::IsNameCompatibleWithMonomorphicPrototypeFailure(Handle<Object> name) {

 void IC::UpdateState(Handle<Object> receiver, Handle<Object> name) {
+  receiver_type_ = CurrentTypeOf(receiver, isolate());
   if (!name->IsString()) return;
-  if (state() != MONOMORPHIC) {
-    if (state() == POLYMORPHIC && receiver->IsHeapObject()) {
-      TryRemoveInvalidHandlers(
-          handle(Handle<HeapObject>::cast(receiver)->map()),
-          Handle<String>::cast(name));
-    }
-    return;
-  }
+  if (state() != MONOMORPHIC && state() != POLYMORPHIC) return;
   if (receiver->IsUndefined() || receiver->IsNull()) return;

   // Remove the target from the code cache if it became invalid
   // because of changes in the prototype chain to avoid hitting it
   // again.
-  if (TryRemoveInvalidPrototypeDependentStub(
-          receiver, Handle<String>::cast(name)) &&
-      TryMarkMonomorphicPrototypeFailure(name)) {
+  if (TryRemoveInvalidPrototypeDependentStub(receiver,
+                                             Handle<String>::cast(name))) {
+    MarkPrototypeFailure(name);
     return;
   }

@@ -363,7 +338,7 @@ MaybeHandle<Object> IC::TypeError(const char* type,
 }


-MaybeHandle<Object> IC::ReferenceError(const char* type, Handle<String> name) {
+MaybeHandle<Object> IC::ReferenceError(const char* type, Handle<Name> name) {
   HandleScope scope(isolate());
   Handle<Object> error = isolate()->factory()->NewReferenceError(
       type, HandleVector(&name, 1));
@@ -371,37 +346,60 @@ MaybeHandle<Object> IC::ReferenceError(const char* type, Handle<String> name) {
 }


-static int ComputeTypeInfoCountDelta(IC::State old_state, IC::State new_state) {
-  bool was_uninitialized =
-      old_state == UNINITIALIZED || old_state == PREMONOMORPHIC;
-  bool is_uninitialized =
-      new_state == UNINITIALIZED || new_state == PREMONOMORPHIC;
-  return (was_uninitialized && !is_uninitialized) ? 1 :
-         (!was_uninitialized && is_uninitialized) ? -1 : 0;
+static void ComputeTypeInfoCountDelta(IC::State old_state, IC::State new_state,
+                                      int* polymorphic_delta,
+                                      int* generic_delta) {
+  switch (old_state) {
+    case UNINITIALIZED:
+    case PREMONOMORPHIC:
+      if (new_state == UNINITIALIZED || new_state == PREMONOMORPHIC) break;
+      if (new_state == MONOMORPHIC || new_state == POLYMORPHIC) {
+        *polymorphic_delta = 1;
+      } else if (new_state == MEGAMORPHIC || new_state == GENERIC) {
+        *generic_delta = 1;
+      }
+      break;
+    case MONOMORPHIC:
+    case POLYMORPHIC:
+      if (new_state == MONOMORPHIC || new_state == POLYMORPHIC) break;
+      *polymorphic_delta = -1;
+      if (new_state == MEGAMORPHIC || new_state == GENERIC) {
+        *generic_delta = 1;
+      }
+      break;
+    case MEGAMORPHIC:
+    case GENERIC:
+      if (new_state == MEGAMORPHIC || new_state == GENERIC) break;
+      *generic_delta = -1;
+      if (new_state == MONOMORPHIC || new_state == POLYMORPHIC) {
+        *polymorphic_delta = 1;
+      }
+      break;
+    case PROTOTYPE_FAILURE:
+    case DEBUG_STUB:
+    case DEFAULT:
+      UNREACHABLE();
+  }
 }
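
Where the old helper returned a single +1/-1 for "typed vs. uninitialized", the new one splits the bookkeeping into a polymorphic counter (mono- and polymorphic ICs) and a generic counter (mega-morphic and generic ICs). A worked example under that scheme (illustrative call; the function is file-local):

    int polymorphic_delta = 0;
    int generic_delta = 0;
    // A monomorphic IC going megamorphic: one fewer typed IC, one more
    // generic one.
    ComputeTypeInfoCountDelta(MONOMORPHIC, MEGAMORPHIC,
                              &polymorphic_delta, &generic_delta);
    // Now polymorphic_delta == -1 and generic_delta == 1, which
    // OnTypeFeedbackChanged feeds into change_ic_with_type_info_count()
    // and change_ic_generic_count() respectively.
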
-void IC::PostPatching(Address address, Code* target, Code* old_target) {
-  Isolate* isolate = target->GetHeap()->isolate();
+void IC::OnTypeFeedbackChanged(Isolate* isolate, Address address,
+                               State old_state, State new_state,
+                               bool target_remains_ic_stub) {
   Code* host = isolate->
       inner_pointer_to_code_cache()->GetCacheEntry(address)->code;
   if (host->kind() != Code::FUNCTION) return;

-  if (FLAG_type_info_threshold > 0 &&
-      old_target->is_inline_cache_stub() &&
-      target->is_inline_cache_stub()) {
-    int delta = ComputeTypeInfoCountDelta(old_target->ic_state(),
-                                          target->ic_state());
-    // Call ICs don't have interesting state changes from this point
-    // of view.
-    ASSERT(target->kind() != Code::CALL_IC || delta == 0);
-
-    // Not all Code objects have TypeFeedbackInfo.
-    if (host->type_feedback_info()->IsTypeFeedbackInfo() && delta != 0) {
-      TypeFeedbackInfo* info =
-          TypeFeedbackInfo::cast(host->type_feedback_info());
-      info->change_ic_with_type_info_count(delta);
-    }
+  if (FLAG_type_info_threshold > 0 && target_remains_ic_stub &&
+      // Not all Code objects have TypeFeedbackInfo.
+      host->type_feedback_info()->IsTypeFeedbackInfo()) {
+    int polymorphic_delta = 0;  // "Polymorphic" here includes monomorphic.
+    int generic_delta = 0;      // "Generic" here includes megamorphic.
+    ComputeTypeInfoCountDelta(old_state, new_state, &polymorphic_delta,
+                              &generic_delta);
+    TypeFeedbackInfo* info = TypeFeedbackInfo::cast(host->type_feedback_info());
+    info->change_ic_with_type_info_count(polymorphic_delta);
+    info->change_ic_generic_count(generic_delta);
   }
   if (host->type_feedback_info()->IsTypeFeedbackInfo()) {
     TypeFeedbackInfo* info =
@@ -416,10 +414,30 @@ void IC::PostPatching(Address address, Code* target, Code* old_target) {
 }


+void IC::PostPatching(Address address, Code* target, Code* old_target) {
+  // Type vector based ICs update these statistics at a different time because
+  // they don't always patch on state change.
+  if (target->kind() == Code::CALL_IC) return;
+
+  Isolate* isolate = target->GetHeap()->isolate();
+  State old_state = UNINITIALIZED;
+  State new_state = UNINITIALIZED;
+  bool target_remains_ic_stub = false;
+  if (old_target->is_inline_cache_stub() && target->is_inline_cache_stub()) {
+    old_state = old_target->ic_state();
+    new_state = target->ic_state();
+    target_remains_ic_stub = true;
+  }
+
+  OnTypeFeedbackChanged(isolate, address, old_state, new_state,
+                        target_remains_ic_stub);
+}
+
+
 void IC::RegisterWeakMapDependency(Handle<Code> stub) {
   if (FLAG_collect_maps && FLAG_weak_embedded_maps_in_ic &&
       stub->CanBeWeakStub()) {
-    ASSERT(!stub->is_weak_stub());
+    DCHECK(!stub->is_weak_stub());
     MapHandleList maps;
     stub->FindAllMaps(&maps);
     if (maps.length() == 1 && stub->IsWeakObjectInIC(*maps.at(0))) {
@@ -435,7 +453,7 @@ void IC::RegisterWeakMapDependency(Handle<Code> stub) {

 void IC::InvalidateMaps(Code* stub) {
-  ASSERT(stub->is_weak_stub());
+  DCHECK(stub->is_weak_stub());
   stub->mark_as_invalidated_weak_stub();
   Isolate* isolate = stub->GetIsolate();
   Heap* heap = isolate->heap();
@@ -448,7 +466,7 @@ void IC::InvalidateMaps(Code* stub) {
       it.rinfo()->set_target_object(undefined, SKIP_WRITE_BARRIER);
     }
   }
-  CPU::FlushICache(stub->instruction_start(), stub->instruction_size());
+  CpuFeatures::FlushICache(stub->instruction_start(), stub->instruction_size());
 }

@@ -501,7 +519,6 @@ void CallIC::Clear(Isolate* isolate,
                    Code* target,
                    ConstantPoolArray* constant_pool) {
   // Currently, CallIC doesn't have state changes.
-  ASSERT(target->ic_state() == v8::internal::GENERIC);
 }

@@ -510,8 +527,8 @@ void LoadIC::Clear(Isolate* isolate,
                    Code* target,
                    ConstantPoolArray* constant_pool) {
   if (IsCleared(target)) return;
-  Code* code = target->GetIsolate()->stub_cache()->FindPreMonomorphicIC(
-      Code::LOAD_IC, target->extra_ic_state());
+  Code* code = PropertyICCompiler::FindPreMonomorphic(isolate, Code::LOAD_IC,
+                                                      target->extra_ic_state());
   SetTargetAtAddress(address, code, constant_pool);
 }

@@ -521,8 +538,8 @@ void StoreIC::Clear(Isolate* isolate,
                     Code* target,
                     ConstantPoolArray* constant_pool) {
   if (IsCleared(target)) return;
-  Code* code = target->GetIsolate()->stub_cache()->FindPreMonomorphicIC(
-      Code::STORE_IC, target->extra_ic_state());
+  Code* code = PropertyICCompiler::FindPreMonomorphic(isolate, Code::STORE_IC,
+                                                      target->extra_ic_state());
   SetTargetAtAddress(address, code, constant_pool);
 }

@@ -543,11 +560,10 @@ void CompareIC::Clear(Isolate* isolate,
                       Address address,
                       Code* target,
                       ConstantPoolArray* constant_pool) {
-  ASSERT(target->major_key() == CodeStub::CompareIC);
+  DCHECK(CodeStub::GetMajorKey(target) == CodeStub::CompareIC);
   CompareIC::State handler_state;
   Token::Value op;
-  ICCompareStub::DecodeMinorKey(target->stub_info(), NULL, NULL,
-                                &handler_state, &op);
+  ICCompareStub::DecodeKey(target->stub_key(), NULL, NULL, &handler_state, &op);
   // Only clear CompareICs that can retain objects.
   if (handler_state != KNOWN_OBJECT) return;
   SetTargetAtAddress(address, GetRawUninitialized(isolate, op), constant_pool);
@@ -555,6 +571,16 @@ void CompareIC::Clear(Isolate* isolate,
 }


+// static
+Handle<Code> KeyedLoadIC::generic_stub(Isolate* isolate) {
+  if (FLAG_compiled_keyed_generic_loads) {
+    return KeyedLoadGenericStub(isolate).GetCode();
+  } else {
+    return isolate->builtins()->KeyedLoadIC_Generic();
+  }
+}
+
+
 static bool MigrateDeprecated(Handle<Object> object) {
   if (!object->IsJSObject()) return false;
   Handle<JSObject> receiver = Handle<JSObject>::cast(object);
@@ -564,42 +590,23 @@ static bool MigrateDeprecated(Handle<Object> object) {
 }


-MaybeHandle<Object> LoadIC::Load(Handle<Object> object, Handle<String> name) {
+MaybeHandle<Object> LoadIC::Load(Handle<Object> object, Handle<Name> name) {
   // If the object is undefined or null it's illegal to try to get any
   // of its properties; throw a TypeError in that case.
   if (object->IsUndefined() || object->IsNull()) {
     return TypeError("non_object_property_load", object, name);
   }

-  if (FLAG_use_ic) {
-    // Use specialized code for getting prototype of functions.
-    if (object->IsJSFunction() &&
-        String::Equals(isolate()->factory()->prototype_string(), name) &&
-        Handle<JSFunction>::cast(object)->should_have_prototype()) {
-      Handle<Code> stub;
-      if (state() == UNINITIALIZED) {
-        stub = pre_monomorphic_stub();
-      } else if (state() == PREMONOMORPHIC) {
-        FunctionPrototypeStub function_prototype_stub(isolate(), kind());
-        stub = function_prototype_stub.GetCode();
-      } else if (state() != MEGAMORPHIC) {
-        ASSERT(state() != GENERIC);
-        stub = megamorphic_stub();
-      }
-      if (!stub.is_null()) {
-        set_target(*stub);
-        if (FLAG_trace_ic) PrintF("[LoadIC : +#prototype /function]\n");
-      }
-      return Accessors::FunctionGetPrototype(Handle<JSFunction>::cast(object));
-    }
-  }
-
   // Check if the name is trivially convertible to an index and get
   // the element or char if so.
   uint32_t index;
   if (kind() == Code::KEYED_LOAD_IC && name->AsArrayIndex(&index)) {
     // Rewrite to the generic keyed load stub.
-    if (FLAG_use_ic) set_target(*generic_stub());
+    if (FLAG_use_ic) {
+      set_target(*KeyedLoadIC::generic_stub(isolate()));
+      TRACE_IC("LoadIC", name);
+      TRACE_GENERIC_IC(isolate(), "LoadIC", "name as array index");
+    }
     Handle<Object> result;
     ASSIGN_RETURN_ON_EXCEPTION(
         isolate(),
@@ -612,11 +619,11 @@ MaybeHandle<Object> LoadIC::Load(Handle<Object> object, Handle<String> name) {
   bool use_ic = MigrateDeprecated(object) ? false : FLAG_use_ic;

   // Named lookup in the object.
-  LookupResult lookup(isolate());
-  LookupForRead(object, name, &lookup);
+  LookupIterator it(object, name);
+  LookupForRead(&it);

   // If we did not find a property, check if we need to throw an exception.
-  if (!lookup.IsFound()) {
+  if (!it.IsFound()) {
     if (IsUndeclaredGlobal(object)) {
       return ReferenceError("not_defined", name);
     }
@@ -624,19 +631,14 @@ MaybeHandle<Object> LoadIC::Load(Handle<Object> object, Handle<String> name) {
   }

   // Update inline cache and stub cache.
-  if (use_ic) UpdateCaches(&lookup, object, name);
+  if (use_ic) UpdateCaches(&it, object, name);

-  PropertyAttributes attr;
   // Get the property.
   Handle<Object> result;
   ASSIGN_RETURN_ON_EXCEPTION(
-      isolate(),
-      result,
-      Object::GetProperty(object, object, &lookup, name, &attr),
-      Object);
+      isolate(), result, Object::GetProperty(&it), Object);

   // If the property is not present, check if we need to throw an exception.
-  if ((lookup.IsInterceptor() || lookup.IsHandler()) &&
-      attr == ABSENT && IsUndeclaredGlobal(object)) {
+  if (!it.IsFound() && IsUndeclaredGlobal(object)) {
     return ReferenceError("not_defined", name);
   }
@@ -646,7 +648,7 @@ MaybeHandle<Object> LoadIC::Load(Handle<Object> object, Handle<String> name) {

 static bool AddOneReceiverMapIfMissing(MapHandleList* receiver_maps,
                                        Handle<Map> new_receiver_map) {
-  ASSERT(!new_receiver_map.is_null());
+  DCHECK(!new_receiver_map.is_null());
   for (int current = 0; current < receiver_maps->length(); ++current) {
     if (!receiver_maps->at(current).is_null() &&
         receiver_maps->at(current).is_identical_to(new_receiver_map)) {
@@ -658,10 +660,10 @@ static bool AddOneReceiverMapIfMissing(MapHandleList* receiver_maps,
 }


-bool IC::UpdatePolymorphicIC(Handle<HeapType> type,
-                             Handle<String> name,
-                             Handle<Code> code) {
+bool IC::UpdatePolymorphicIC(Handle<Name> name, Handle<Code> code) {
   if (!code->is_handler()) return false;
+  if (target()->is_keyed_stub() && state() != PROTOTYPE_FAILURE) return false;
+  Handle<HeapType> type = receiver_type();
   TypeHandleList types;
   CodeHandleList handlers;
@@ -698,18 +700,25 @@ bool IC::UpdatePolymorphicIC(Handle<HeapType> type,
   if (!target()->FindHandlers(&handlers, types.length())) return false;

   number_of_valid_types++;
-  if (handler_to_overwrite >= 0) {
-    handlers.Set(handler_to_overwrite, code);
-    if (!type->NowIs(types.at(handler_to_overwrite))) {
-      types.Set(handler_to_overwrite, type);
-    }
+  if (number_of_valid_types > 1 && target()->is_keyed_stub()) return false;
+  Handle<Code> ic;
+  if (number_of_valid_types == 1) {
+    ic = PropertyICCompiler::ComputeMonomorphic(kind(), name, type, code,
+                                                extra_ic_state());
   } else {
-    types.Add(type);
-    handlers.Add(code);
+    if (handler_to_overwrite >= 0) {
+      handlers.Set(handler_to_overwrite, code);
+      if (!type->NowIs(types.at(handler_to_overwrite))) {
+        types.Set(handler_to_overwrite, type);
+      }
+    } else {
+      types.Add(type);
+      handlers.Add(code);
+    }
+    ic = PropertyICCompiler::ComputePolymorphic(kind(), &types, &handlers,
+                                                number_of_valid_types, name,
+                                                extra_ic_state());
   }
-
-  Handle<Code> ic = isolate()->stub_cache()->ComputePolymorphicIC(
-      kind(), &types, &handlers, number_of_valid_types, name, extra_ic_state());
   set_target(*ic);
   return true;
 }

@@ -730,7 +739,7 @@ Handle<Map> IC::TypeToMap(HeapType* type, Isolate* isolate) {
     return handle(
         Handle<JSGlobalObject>::cast(type->AsConstant()->Value())->map());
   }
-  ASSERT(type->IsClass());
+  DCHECK(type->IsClass());
   return type->AsClass()->Map();
 }

@@ -757,17 +766,15 @@ template Handle<HeapType> IC::MapToType<HeapType>(Handle<Map> map,
                                                   Isolate* region);


-void IC::UpdateMonomorphicIC(Handle<HeapType> type,
-                             Handle<Code> handler,
-                             Handle<String> name) {
-  if (!handler->is_handler()) return set_target(*handler);
-  Handle<Code> ic = isolate()->stub_cache()->ComputeMonomorphicIC(
-      kind(), name, type, handler, extra_ic_state());
+void IC::UpdateMonomorphicIC(Handle<Code> handler, Handle<Name> name) {
+  DCHECK(handler->is_handler());
+  Handle<Code> ic = PropertyICCompiler::ComputeMonomorphic(
+      kind(), name, receiver_type(), handler, extra_ic_state());
   set_target(*ic);
 }


-void IC::CopyICToMegamorphicCache(Handle<String> name) {
+void IC::CopyICToMegamorphicCache(Handle<Name> name) {
   TypeHandleList types;
   CodeHandleList handlers;
   TargetTypes(&types);
@@ -793,28 +800,27 @@ bool IC::IsTransitionOfMonomorphicTarget(Map* source_map, Map* target_map) {
 }


-void IC::PatchCache(Handle<HeapType> type,
-                    Handle<String> name,
-                    Handle<Code> code) {
+void IC::PatchCache(Handle<Name> name, Handle<Code> code) {
   switch (state()) {
     case UNINITIALIZED:
     case PREMONOMORPHIC:
-    case MONOMORPHIC_PROTOTYPE_FAILURE:
-      UpdateMonomorphicIC(type, code, name);
+      UpdateMonomorphicIC(code, name);
       break;
-    case MONOMORPHIC:  // Fall through.
+    case PROTOTYPE_FAILURE:
+    case MONOMORPHIC:
     case POLYMORPHIC:
-      if (!target()->is_keyed_stub()) {
-        if (UpdatePolymorphicIC(type, name, code)) break;
+      if (!target()->is_keyed_stub() || state() == PROTOTYPE_FAILURE) {
+        if (UpdatePolymorphicIC(name, code)) break;
         CopyICToMegamorphicCache(name);
       }
       set_target(*megamorphic_stub());
       // Fall through.
     case MEGAMORPHIC:
-      UpdateMegamorphicCache(*type, *name, *code);
+      UpdateMegamorphicCache(*receiver_type(), *name, *code);
       break;
     case DEBUG_STUB:
       break;
+    case DEFAULT:
     case GENERIC:
       UNREACHABLE();
       break;
@@ -824,37 +830,50 @@ void IC::PatchCache(Handle<HeapType> type,

 Handle<Code> LoadIC::initialize_stub(Isolate* isolate,
                                      ExtraICState extra_state) {
-  return isolate->stub_cache()->ComputeLoad(UNINITIALIZED, extra_state);
+  return PropertyICCompiler::ComputeLoad(isolate, UNINITIALIZED, extra_state);
+}
+
+
+Handle<Code> LoadIC::megamorphic_stub() {
+  if (kind() == Code::LOAD_IC) {
+    return PropertyICCompiler::ComputeLoad(isolate(), MEGAMORPHIC,
+                                           extra_ic_state());
+  } else {
+    DCHECK_EQ(Code::KEYED_LOAD_IC, kind());
+    return KeyedLoadIC::generic_stub(isolate());
+  }
 }


 Handle<Code> LoadIC::pre_monomorphic_stub(Isolate* isolate,
                                           ExtraICState extra_state) {
-  return isolate->stub_cache()->ComputeLoad(PREMONOMORPHIC, extra_state);
+  return PropertyICCompiler::ComputeLoad(isolate, PREMONOMORPHIC, extra_state);
 }


-Handle<Code> LoadIC::megamorphic_stub() {
-  return isolate()->stub_cache()->ComputeLoad(MEGAMORPHIC, extra_ic_state());
+Handle<Code> KeyedLoadIC::pre_monomorphic_stub(Isolate* isolate) {
+  return isolate->builtins()->KeyedLoadIC_PreMonomorphic();
 }


-Handle<Code> LoadIC::SimpleFieldLoad(int offset,
-                                     bool inobject,
-                                     Representation representation) {
+Handle<Code> LoadIC::pre_monomorphic_stub() const {
   if (kind() == Code::LOAD_IC) {
-    LoadFieldStub stub(isolate(), inobject, offset, representation);
-    return stub.GetCode();
+    return LoadIC::pre_monomorphic_stub(isolate(), extra_ic_state());
   } else {
-    KeyedLoadFieldStub stub(isolate(), inobject, offset, representation);
-    return stub.GetCode();
+    DCHECK_EQ(Code::KEYED_LOAD_IC, kind());
+    return KeyedLoadIC::pre_monomorphic_stub(isolate());
   }
 }


-void LoadIC::UpdateCaches(LookupResult* lookup,
-                          Handle<Object> object,
-                          Handle<String> name) {
+Handle<Code> LoadIC::SimpleFieldLoad(FieldIndex index) {
+  LoadFieldStub stub(isolate(), index);
+  return stub.GetCode();
+}
+
+
+void LoadIC::UpdateCaches(LookupIterator* lookup, Handle<Object> object,
+                          Handle<Name> name) {
   if (state() == UNINITIALIZED) {
     // This is the first time we execute this inline cache.
     // Set the target to the pre monomorphic stub to delay
@@ -864,14 +883,16 @@ void LoadIC::UpdateCaches(LookupResult* lookup,
     return;
   }

-  Handle<HeapType> type = CurrentTypeOf(object, isolate());
   Handle<Code> code;
-  if (!lookup->IsCacheable()) {
-    // Bail out if the result is not cacheable.
+  if (lookup->state() == LookupIterator::JSPROXY ||
+      lookup->state() == LookupIterator::ACCESS_CHECK) {
     code = slow_stub();
-  } else if (!lookup->IsProperty()) {
+  } else if (!lookup->IsFound()) {
     if (kind() == Code::LOAD_IC) {
-      code = isolate()->stub_cache()->ComputeLoadNonexistent(name, type);
+      code = NamedLoadHandlerCompiler::ComputeLoadNonexistent(name,
+                                                              receiver_type());
+      // TODO(jkummerow/verwaest): Introduce a builtin that handles this case.
+      if (code.is_null()) code = slow_stub();
     } else {
       code = slow_stub();
     }
@@ -879,163 +900,245 @@ void LoadIC::UpdateCaches(LookupResult* lookup,
     code = ComputeHandler(lookup, object, name);
   }

-  PatchCache(type, name, code);
+  PatchCache(name, code);
   TRACE_IC("LoadIC", name);
 }


 void IC::UpdateMegamorphicCache(HeapType* type, Name* name, Code* code) {
-  // Cache code holding map should be consistent with
-  // GenerateMonomorphicCacheProbe.
+  if (kind() == Code::KEYED_LOAD_IC || kind() == Code::KEYED_STORE_IC) return;
   Map* map = *TypeToMap(type, isolate());
   isolate()->stub_cache()->Set(name, map, code);
 }


-Handle<Code> IC::ComputeHandler(LookupResult* lookup,
-                                Handle<Object> object,
-                                Handle<String> name,
-                                Handle<Object> value) {
-  InlineCacheHolderFlag cache_holder = GetCodeCacheForObject(*object);
-  Handle<HeapObject> stub_holder(GetCodeCacheHolder(
-      isolate(), *object, cache_holder));
+Handle<Code> IC::ComputeHandler(LookupIterator* lookup, Handle<Object> object,
+                                Handle<Name> name, Handle<Object> value) {
+  bool receiver_is_holder =
+      object.is_identical_to(lookup->GetHolder<JSObject>());
+  CacheHolderFlag flag;
+  Handle<Map> stub_holder_map = IC::GetHandlerCacheHolder(
+      *receiver_type(), receiver_is_holder, isolate(), &flag);

-  Handle<Code> code = isolate()->stub_cache()->FindHandler(
-      name, handle(stub_holder->map()), kind(), cache_holder,
+  Handle<Code> code = PropertyHandlerCompiler::Find(
+      name, stub_holder_map, kind(), flag,
+      lookup->holder_map()->is_dictionary_map() ? Code::NORMAL : Code::FAST);
+  // Use the cached value if it exists, and if it is different from the
+  // handler that just missed.
+  if (!code.is_null()) {
+    if (!maybe_handler_.is_null() &&
+        !maybe_handler_.ToHandleChecked().is_identical_to(code)) {
+      return code;
+    }
+    if (maybe_handler_.is_null()) {
+      // maybe_handler_ is only populated for MONOMORPHIC and POLYMORPHIC ICs.
+      // In MEGAMORPHIC case, check if the handler in the megamorphic stub
+      // cache (which just missed) is different from the cached handler.
+      if (state() == MEGAMORPHIC && object->IsHeapObject()) {
+        Map* map = Handle<HeapObject>::cast(object)->map();
+        Code* megamorphic_cached_code =
+            isolate()->stub_cache()->Get(*name, map, code->flags());
+        if (megamorphic_cached_code != *code) return code;
+      } else {
+        return code;
+      }
+    }
+  }
+
+  code = CompileHandler(lookup, object, name, value, flag);
+  DCHECK(code->is_handler());
+
+  if (code->type() != Code::NORMAL) {
+    Map::UpdateCodeCache(stub_holder_map, name, code);
+  }
+
+  return code;
+}
+
+
+Handle<Code> IC::ComputeStoreHandler(LookupResult* lookup,
+                                     Handle<Object> object, Handle<Name> name,
+                                     Handle<Object> value) {
+  bool receiver_is_holder = lookup->ReceiverIsHolder(object);
+  CacheHolderFlag flag;
+  Handle<Map> stub_holder_map = IC::GetHandlerCacheHolder(
+      *receiver_type(), receiver_is_holder, isolate(), &flag);
+
+  Handle<Code> code = PropertyHandlerCompiler::Find(
+      name, stub_holder_map, handler_kind(), flag,
      lookup->holder()->HasFastProperties() ? Code::FAST : Code::NORMAL);
+  // Use the cached value if it exists, and if it is different from the
+  // handler that just missed.
   if (!code.is_null()) {
-    return code;
+    if (!maybe_handler_.is_null() &&
+        !maybe_handler_.ToHandleChecked().is_identical_to(code)) {
+      return code;
+    }
+    if (maybe_handler_.is_null()) {
+      // maybe_handler_ is only populated for MONOMORPHIC and POLYMORPHIC ICs.
+      // In MEGAMORPHIC case, check if the handler in the megamorphic stub
+      // cache (which just missed) is different from the cached handler.
+      if (state() == MEGAMORPHIC && object->IsHeapObject()) {
+        Map* map = Handle<HeapObject>::cast(object)->map();
+        Code* megamorphic_cached_code =
+            isolate()->stub_cache()->Get(*name, map, code->flags());
+        if (megamorphic_cached_code != *code) return code;
+      } else {
+        return code;
+      }
+    }
   }

-  code = CompileHandler(lookup, object, name, value, cache_holder);
-  ASSERT(code->is_handler());
+  code = CompileStoreHandler(lookup, object, name, value, flag);
+  DCHECK(code->is_handler());

   if (code->type() != Code::NORMAL) {
-    HeapObject::UpdateMapCodeCache(stub_holder, name, code);
+    Map::UpdateCodeCache(stub_holder_map, name, code);
   }

   return code;
 }
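
ComputeHandler and ComputeStoreHandler now share the same probe protocol around PropertyHandlerCompiler::Find. Condensed, restating the code above with no new behavior:

    // 1. Probe the stub holder map's code cache for a handler under
    //    (name, kind, cache-holder flag).
    // 2. If one is found, reuse it unless it is exactly the handler that
    //    just missed: for MONOMORPHIC/POLYMORPHIC that is maybe_handler_,
    //    for MEGAMORPHIC it is the megamorphic stub cache entry for this map.
    // 3. Otherwise compile a fresh handler; non-NORMAL handlers go back into
    //    the map's code cache for the next probe (NORMAL ones are shared).
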
-Handle<Code> LoadIC::CompileHandler(LookupResult* lookup,
-                                    Handle<Object> object,
-                                    Handle<String> name,
+Handle<Code> LoadIC::CompileHandler(LookupIterator* lookup,
+                                    Handle<Object> object, Handle<Name> name,
                                     Handle<Object> unused,
-                                    InlineCacheHolderFlag cache_holder) {
+                                    CacheHolderFlag cache_holder) {
   if (object->IsString() &&
-      String::Equals(isolate()->factory()->length_string(), name)) {
-    int length_index = String::kLengthOffset / kPointerSize;
-    return SimpleFieldLoad(length_index);
+      Name::Equals(isolate()->factory()->length_string(), name)) {
+    FieldIndex index = FieldIndex::ForInObjectOffset(String::kLengthOffset);
+    return SimpleFieldLoad(index);
   }

   if (object->IsStringWrapper() &&
-      String::Equals(isolate()->factory()->length_string(), name)) {
-    if (kind() == Code::LOAD_IC) {
-      StringLengthStub string_length_stub(isolate());
-      return string_length_stub.GetCode();
-    } else {
-      KeyedStringLengthStub string_length_stub(isolate());
-      return string_length_stub.GetCode();
+      Name::Equals(isolate()->factory()->length_string(), name)) {
+    StringLengthStub string_length_stub(isolate());
+    return string_length_stub.GetCode();
+  }
+
+  // Use specialized code for getting prototype of functions.
+  if (object->IsJSFunction() &&
+      Name::Equals(isolate()->factory()->prototype_string(), name) &&
+      Handle<JSFunction>::cast(object)->should_have_prototype() &&
+      !Handle<JSFunction>::cast(object)->map()->has_non_instance_prototype()) {
+    Handle<Code> stub;
+    FunctionPrototypeStub function_prototype_stub(isolate());
+    return function_prototype_stub.GetCode();
+  }
+
+  Handle<HeapType> type = receiver_type();
+  Handle<JSObject> holder = lookup->GetHolder<JSObject>();
+  bool receiver_is_holder = object.is_identical_to(holder);
+  // -------------- Interceptors --------------
+  if (lookup->state() == LookupIterator::INTERCEPTOR) {
+    DCHECK(!holder->GetNamedInterceptor()->getter()->IsUndefined());
+    NamedLoadHandlerCompiler compiler(isolate(), receiver_type(), holder,
+                                      cache_holder);
+    return compiler.CompileLoadInterceptor(name);
+  }
+  DCHECK(lookup->state() == LookupIterator::PROPERTY);
+
+  // -------------- Accessors --------------
+  if (lookup->property_kind() == LookupIterator::ACCESSOR) {
+    // Use simple field loads for some well-known callback properties.
+    if (receiver_is_holder) {
+      DCHECK(object->IsJSObject());
+      Handle<JSObject> receiver = Handle<JSObject>::cast(object);
+      int object_offset;
+      if (Accessors::IsJSObjectFieldAccessor<HeapType>(type, name,
+                                                       &object_offset)) {
+        FieldIndex index =
+            FieldIndex::ForInObjectOffset(object_offset, receiver->map());
+        return SimpleFieldLoad(index);
+      }
     }
-  }

-  Handle<HeapType> type = CurrentTypeOf(object, isolate());
-  Handle<JSObject> holder(lookup->holder());
-  LoadStubCompiler compiler(isolate(), kNoExtraICState, cache_holder, kind());
-
-  switch (lookup->type()) {
-    case FIELD: {
-      PropertyIndex field = lookup->GetFieldIndex();
-      if (object.is_identical_to(holder)) {
-        return SimpleFieldLoad(field.translate(holder),
-                               field.is_inobject(holder),
-                               lookup->representation());
+    Handle<Object> accessors = lookup->GetAccessors();
+    if (accessors->IsExecutableAccessorInfo()) {
+      Handle<ExecutableAccessorInfo> info =
+          Handle<ExecutableAccessorInfo>::cast(accessors);
+      if (v8::ToCData<Address>(info->getter()) == 0) return slow_stub();
+      if (!ExecutableAccessorInfo::IsCompatibleReceiverType(isolate(), info,
+                                                            type)) {
+        return slow_stub();
       }
-      return compiler.CompileLoadField(
-          type, holder, name, field, lookup->representation());
+      if (!holder->HasFastProperties()) return slow_stub();
+      NamedLoadHandlerCompiler compiler(isolate(), receiver_type(), holder,
+                                        cache_holder);
+      return compiler.CompileLoadCallback(name, info);
     }
-    case CONSTANT: {
-      Handle<Object> constant(lookup->GetConstant(), isolate());
-      // TODO(2803): Don't compute a stub for cons strings because they cannot
-      // be embedded into code.
-      if (constant->IsConsString()) break;
-      return compiler.CompileLoadConstant(type, holder, name, constant);
-    }
-    case NORMAL:
-      if (kind() != Code::LOAD_IC) break;
-      if (holder->IsGlobalObject()) {
-        Handle<GlobalObject> global = Handle<GlobalObject>::cast(holder);
-        Handle<PropertyCell> cell(
-            global->GetPropertyCell(lookup), isolate());
-        Handle<Code> code = compiler.CompileLoadGlobal(
-            type, global, cell, name, lookup->IsDontDelete());
-        // TODO(verwaest): Move caching of these NORMAL stubs outside as well.
-        Handle<HeapObject> stub_holder(GetCodeCacheHolder(
-            isolate(), *object, cache_holder));
-        HeapObject::UpdateMapCodeCache(stub_holder, name, code);
-        return code;
+    if (accessors->IsAccessorPair()) {
+      Handle<Object> getter(Handle<AccessorPair>::cast(accessors)->getter(),
+                            isolate());
+      if (!getter->IsJSFunction()) return slow_stub();
+      if (!holder->HasFastProperties()) return slow_stub();
+      Handle<JSFunction> function = Handle<JSFunction>::cast(getter);
+      if (!object->IsJSObject() && !function->IsBuiltin() &&
+          function->shared()->strict_mode() == SLOPPY) {
+        // Calling sloppy non-builtins with a value as the receiver
+        // requires boxing.
+        return slow_stub();
       }
-      // There is only one shared stub for loading normalized
-      // properties. It does not traverse the prototype chain, so the
-      // property must be found in the object for the stub to be
-      // applicable.
-      if (!object.is_identical_to(holder)) break;
-      return isolate()->builtins()->LoadIC_Normal();
-    case CALLBACKS: {
-      // Use simple field loads for some well-known callback properties.
-      if (object->IsJSObject()) {
-        Handle<JSObject> receiver = Handle<JSObject>::cast(object);
-        Handle<HeapType> type = IC::MapToType<HeapType>(
-            handle(receiver->map()), isolate());
-        int object_offset;
-        if (Accessors::IsJSObjectFieldAccessor<HeapType>(
-                type, name, &object_offset)) {
-          return SimpleFieldLoad(object_offset / kPointerSize);
-        }
+      CallOptimization call_optimization(function);
+      NamedLoadHandlerCompiler compiler(isolate(), receiver_type(), holder,
+                                        cache_holder);
+      if (call_optimization.is_simple_api_call() &&
+          call_optimization.IsCompatibleReceiver(object, holder)) {
+        return compiler.CompileLoadCallback(name, call_optimization);
       }
-
-      Handle<Object> callback(lookup->GetCallbackObject(), isolate());
-      if (callback->IsExecutableAccessorInfo()) {
-        Handle<ExecutableAccessorInfo> info =
-            Handle<ExecutableAccessorInfo>::cast(callback);
-        if (v8::ToCData<Address>(info->getter()) == 0) break;
-        if (!info->IsCompatibleReceiver(*object)) break;
-        return compiler.CompileLoadCallback(type, holder, name, info);
-      } else if (callback->IsAccessorPair()) {
-        Handle<Object> getter(Handle<AccessorPair>::cast(callback)->getter(),
-                              isolate());
-        if (!getter->IsJSFunction()) break;
-        if (holder->IsGlobalObject()) break;
-        if (!holder->HasFastProperties()) break;
-        Handle<JSFunction> function = Handle<JSFunction>::cast(getter);
-        if (!object->IsJSObject() &&
-            !function->IsBuiltin() &&
-            function->shared()->strict_mode() == SLOPPY) {
-          // Calling sloppy non-builtins with a value as the receiver
-          // requires boxing.
-          break;
-        }
-        CallOptimization call_optimization(function);
-        if (call_optimization.is_simple_api_call() &&
-            call_optimization.IsCompatibleReceiver(object, holder)) {
-          return compiler.CompileLoadCallback(
-              type, holder, name, call_optimization);
-        }
-        return compiler.CompileLoadViaGetter(type, holder, name, function);
-      }
-      // TODO(dcarney): Handle correctly.
-      ASSERT(callback->IsDeclaredAccessorInfo());
-      break;
+      return compiler.CompileLoadViaGetter(name, function);
     }
-    case INTERCEPTOR:
-      ASSERT(HasInterceptorGetter(*holder));
-      return compiler.CompileLoadInterceptor(type, holder, name);
-    default:
-      break;
+    // TODO(dcarney): Handle correctly.
+    DCHECK(accessors->IsDeclaredAccessorInfo());
+    return slow_stub();
+  }
+
+  // -------------- Dictionary properties --------------
+  DCHECK(lookup->property_kind() == LookupIterator::DATA);
+  if (lookup->property_encoding() == LookupIterator::DICTIONARY) {
+    if (kind() != Code::LOAD_IC) return slow_stub();
+    if (holder->IsGlobalObject()) {
+      NamedLoadHandlerCompiler compiler(isolate(), receiver_type(), holder,
+                                        cache_holder);
+      Handle<PropertyCell> cell = lookup->GetPropertyCell();
+      Handle<Code> code =
+          compiler.CompileLoadGlobal(cell, name, lookup->IsConfigurable());
+      // TODO(verwaest): Move caching of these NORMAL stubs outside as well.
+      CacheHolderFlag flag;
+      Handle<Map> stub_holder_map =
+          GetHandlerCacheHolder(*type, receiver_is_holder, isolate(), &flag);
+      Map::UpdateCodeCache(stub_holder_map, name, code);
+      return code;
+    }
+    // There is only one shared stub for loading normalized
+    // properties. It does not traverse the prototype chain, so the
+    // property must be found in the object for the stub to be
+    // applicable.
+ if (!receiver_is_holder) return slow_stub(); + return isolate()->builtins()->LoadIC_Normal(); + } + + // -------------- Fields -------------- + DCHECK(lookup->property_encoding() == LookupIterator::DESCRIPTOR); + if (lookup->property_details().type() == FIELD) { + FieldIndex field = lookup->GetFieldIndex(); + if (receiver_is_holder) { + return SimpleFieldLoad(field); + } + NamedLoadHandlerCompiler compiler(isolate(), receiver_type(), holder, + cache_holder); + return compiler.CompileLoadField(name, field); } - return slow_stub(); + // -------------- Constant properties -------------- + DCHECK(lookup->property_details().type() == CONSTANT); + if (receiver_is_holder) { + LoadConstantStub stub(isolate(), lookup->GetConstantIndex()); + return stub.GetCode(); + } + NamedLoadHandlerCompiler compiler(isolate(), receiver_type(), holder, + cache_holder); + return compiler.CompileLoadConstant(name, lookup->GetConstantIndex()); } @@ -1076,7 +1179,7 @@ Handle<Code> KeyedLoadIC::LoadElementStub(Handle<JSObject> receiver) { TargetMaps(&target_receiver_maps); } if (target_receiver_maps.length() == 0) { - return isolate()->stub_cache()->ComputeKeyedLoadElement(receiver_map); + return PropertyICCompiler::ComputeKeyedLoadMonomorphic(receiver_map); } // The first time a receiver is seen that is a transitioned version of the @@ -1090,10 +1193,10 @@ Handle<Code> KeyedLoadIC::LoadElementStub(Handle<JSObject> receiver) { IsMoreGeneralElementsKindTransition( target_receiver_maps.at(0)->elements_kind(), receiver->GetElementsKind())) { - return isolate()->stub_cache()->ComputeKeyedLoadElement(receiver_map); + return PropertyICCompiler::ComputeKeyedLoadMonomorphic(receiver_map); } - ASSERT(state() != GENERIC); + DCHECK(state() != GENERIC); // Determine the list of receiver maps that this call site has seen, // adding the map that was just encountered. @@ -1111,8 +1214,7 @@ Handle<Code> KeyedLoadIC::LoadElementStub(Handle<JSObject> receiver) { return generic_stub(); } - return isolate()->stub_cache()->ComputeLoadElementPolymorphic( - &target_receiver_maps); + return PropertyICCompiler::ComputeKeyedLoadPolymorphic(&target_receiver_maps); } @@ -1135,11 +1237,11 @@ MaybeHandle<Object> KeyedLoadIC::Load(Handle<Object> object, // internalized string directly or is representable as a smi. key = TryConvertKey(key, isolate()); - if (key->IsInternalizedString()) { + if (key->IsInternalizedString() || key->IsSymbol()) { ASSIGN_RETURN_ON_EXCEPTION( isolate(), load_handle, - LoadIC::Load(object, Handle<String>::cast(key)), + LoadIC::Load(object, Handle<Name>::cast(key)), Object); } else if (FLAG_use_ic && !object->IsAccessCheckNeeded()) { if (object->IsString() && key->IsNumber()) { @@ -1159,7 +1261,8 @@ MaybeHandle<Object> KeyedLoadIC::Load(Handle<Object> object, } if (!is_target_set()) { - if (*stub == *generic_stub()) { + Code* generic = *generic_stub(); + if (*stub == generic) { TRACE_GENERIC_IC(isolate(), "KeyedLoadIC", "set generic"); } set_target(*stub); @@ -1177,16 +1280,17 @@ MaybeHandle<Object> KeyedLoadIC::Load(Handle<Object> object, } -static bool LookupForWrite(Handle<JSObject> receiver, - Handle<String> name, - Handle<Object> value, - LookupResult* lookup, - IC* ic) { +static bool LookupForWrite(Handle<Object> object, Handle<Name> name, + Handle<Object> value, LookupResult* lookup, IC* ic) { + // Disable ICs for non-JSObjects for now. 
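// A sketch of the progression behind ComputeKeyedLoadMonomorphic /
// ComputeKeyedLoadPolymorphic above: the call site collects receiver maps up
// to a small bound (kMaxKeyedPolymorphism is 4 in this file) and falls back
// to the generic stub beyond that. Types are invented for illustration.
#include <vector>

struct Map;
const int kMaxMaps = 4;                     // mirrors kMaxKeyedPolymorphism

struct KeyedSite {
  std::vector<const Map*> seen;             // maps handled by this site
  bool generic = false;

  void Record(const Map* m) {
    if (generic) return;
    for (const Map* s : seen)
      if (s == m) return;                   // already covered
    if (static_cast<int>(seen.size()) >= kMaxMaps) {
      generic = true;                       // too many shapes: go generic
      seen.clear();                         // and never come back
      return;
    }
    seen.push_back(m);                      // monomorphic -> polymorphic
  }
};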
+ if (!object->IsJSObject()) return false; + Handle<JSObject> receiver = Handle<JSObject>::cast(object); + Handle<JSObject> holder = receiver; receiver->Lookup(name, lookup); if (lookup->IsFound()) { if (lookup->IsInterceptor() && !HasInterceptorSetter(lookup->holder())) { - receiver->LocalLookupRealNamedProperty(name, lookup); + receiver->LookupOwnRealNamedProperty(name, lookup); if (!lookup->IsFound()) return false; } @@ -1197,7 +1301,8 @@ static bool LookupForWrite(Handle<JSObject> receiver, // goes into the runtime if access checks are needed, so this is always // safe. if (receiver->IsJSGlobalProxy()) { - return lookup->holder() == receiver->GetPrototype(); + PrototypeIterator iter(lookup->isolate(), receiver); + return lookup->holder() == *PrototypeIterator::GetCurrent(iter); } // Currently normal holders in the prototype chain are not supported. They // would require a runtime positive lookup and verification that the details @@ -1220,7 +1325,7 @@ static bool LookupForWrite(Handle<JSObject> receiver, // transition target. // Ensure the instance and its map were migrated before trying to update the // transition target. - ASSERT(!receiver->map()->is_deprecated()); + DCHECK(!receiver->map()->is_deprecated()); if (!lookup->CanHoldValue(value)) { Handle<Map> target(lookup->GetTransitionTarget()); Representation field_representation = value->OptimalRepresentation(); @@ -1233,7 +1338,9 @@ static bool LookupForWrite(Handle<JSObject> receiver, // entirely by the migration above. receiver->map()->LookupTransition(*holder, *name, lookup); if (!lookup->IsTransition()) return false; - return ic->TryMarkMonomorphicPrototypeFailure(name); + if (!ic->IsNameCompatibleWithPrototypeFailure(name)) return false; + ic->MarkPrototypeFailure(name); + return true; } return true; @@ -1241,17 +1348,16 @@ static bool LookupForWrite(Handle<JSObject> receiver, MaybeHandle<Object> StoreIC::Store(Handle<Object> object, - Handle<String> name, + Handle<Name> name, Handle<Object> value, JSReceiver::StoreFromKeyed store_mode) { + // TODO(verwaest): Let SetProperty do the migration, since storing a property + // might deprecate the current map again, if value does not fit. if (MigrateDeprecated(object) || object->IsJSProxy()) { - Handle<JSReceiver> receiver = Handle<JSReceiver>::cast(object); Handle<Object> result; ASSIGN_RETURN_ON_EXCEPTION( - isolate(), - result, - JSReceiver::SetProperty(receiver, name, value, NONE, strict_mode()), - Object); + isolate(), result, + Object::SetProperty(object, name, value, strict_mode()), Object); return result; } @@ -1261,21 +1367,14 @@ MaybeHandle<Object> StoreIC::Store(Handle<Object> object, return TypeError("non_object_property_store", object, name); } - // The length property of string values is read-only. Throw in strict mode. - if (strict_mode() == STRICT && object->IsString() && - String::Equals(isolate()->factory()->length_string(), name)) { - return TypeError("strict_read_only_property", object, name); - } - - // Ignore other stores where the receiver is not a JSObject. - // TODO(1475): Must check prototype chains of object wrappers. - if (!object->IsJSObject()) return value; - - Handle<JSObject> receiver = Handle<JSObject>::cast(object); - // Check if the given name is an array index. uint32_t index; if (name->AsArrayIndex(&index)) { + // Ignore other stores where the receiver is not a JSObject. + // TODO(1475): Must check prototype chains of object wrappers. 
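// A simplified model of the transition handling in LookupForWrite above:
// adding a property moves an object to a successor hidden class, and a
// transitioning store handler effectively caches the (old map, new map,
// field index) triple so later adds of the same property skip the lookup.
// Shape below is an invented stand-in for V8's Map.
#include <map>
#include <string>

struct Shape {
  std::map<std::string, int> offsets;           // property -> field index
  std::map<std::string, Shape*> transitions;    // property -> successor
};

// Find or create the shape reached by adding `name`.
Shape* TransitionOnAdd(Shape* from, const std::string& name) {
  auto it = from->transitions.find(name);
  if (it != from->transitions.end()) return it->second;
  Shape* next = new Shape(*from);               // copy, then extend
  int index = static_cast<int>(next->offsets.size());
  next->offsets[name] = index;                  // new field goes at the end
  from->transitions[name] = next;               // (leaked in this sketch)
  return next;
}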
+ if (!object->IsJSObject()) return value; + Handle<JSObject> receiver = Handle<JSObject>::cast(object); + Handle<Object> result; ASSIGN_RETURN_ON_EXCEPTION( isolate(), @@ -1286,19 +1385,18 @@ MaybeHandle<Object> StoreIC::Store(Handle<Object> object, } // Observed objects are always modified through the runtime. - if (receiver->map()->is_observed()) { + if (object->IsHeapObject() && + Handle<HeapObject>::cast(object)->map()->is_observed()) { Handle<Object> result; ASSIGN_RETURN_ON_EXCEPTION( - isolate(), - result, - JSReceiver::SetProperty( - receiver, name, value, NONE, strict_mode(), store_mode), + isolate(), result, + Object::SetProperty(object, name, value, strict_mode(), store_mode), Object); return result; } LookupResult lookup(isolate()); - bool can_store = LookupForWrite(receiver, name, value, &lookup, this); + bool can_store = LookupForWrite(object, name, value, &lookup, this); if (!can_store && strict_mode() == STRICT && !(lookup.IsProperty() && lookup.IsReadOnly()) && @@ -1312,9 +1410,8 @@ MaybeHandle<Object> StoreIC::Store(Handle<Object> object, set_target(*stub); TRACE_IC("StoreIC", name); } else if (can_store) { - UpdateCaches(&lookup, receiver, name, value); - } else if (!name->IsCacheable(isolate()) || - lookup.IsNormal() || + UpdateCaches(&lookup, Handle<JSObject>::cast(object), name, value); + } else if (lookup.IsNormal() || (lookup.IsField() && lookup.CanHoldValue(value))) { Handle<Code> stub = generic_stub(); set_target(*stub); @@ -1324,27 +1421,24 @@ MaybeHandle<Object> StoreIC::Store(Handle<Object> object, // Set the property. Handle<Object> result; ASSIGN_RETURN_ON_EXCEPTION( - isolate(), - result, - JSReceiver::SetProperty( - receiver, name, value, NONE, strict_mode(), store_mode), + isolate(), result, + Object::SetProperty(object, name, value, strict_mode(), store_mode), Object); return result; } -void CallIC::State::Print(StringStream* stream) const { - stream->Add("(args(%d), ", - argc_); - stream->Add("%s, ", - call_type_ == CallIC::METHOD ? "METHOD" : "FUNCTION"); +OStream& operator<<(OStream& os, const CallIC::State& s) { + return os << "(args(" << s.arg_count() << "), " + << (s.call_type() == CallIC::METHOD ? 
"METHOD" : "FUNCTION") + << ", "; } Handle<Code> CallIC::initialize_stub(Isolate* isolate, int argc, CallType call_type) { - CallICStub stub(isolate, State::DefaultCallState(argc, call_type)); + CallICStub stub(isolate, State(argc, call_type)); Handle<Code> code = stub.GetCode(); return code; } @@ -1353,81 +1447,99 @@ Handle<Code> CallIC::initialize_stub(Isolate* isolate, Handle<Code> StoreIC::initialize_stub(Isolate* isolate, StrictMode strict_mode) { ExtraICState extra_state = ComputeExtraICState(strict_mode); - Handle<Code> ic = isolate->stub_cache()->ComputeStore( - UNINITIALIZED, extra_state); + Handle<Code> ic = + PropertyICCompiler::ComputeStore(isolate, UNINITIALIZED, extra_state); return ic; } Handle<Code> StoreIC::megamorphic_stub() { - return isolate()->stub_cache()->ComputeStore(MEGAMORPHIC, extra_ic_state()); + return PropertyICCompiler::ComputeStore(isolate(), MEGAMORPHIC, + extra_ic_state()); } Handle<Code> StoreIC::generic_stub() const { - return isolate()->stub_cache()->ComputeStore(GENERIC, extra_ic_state()); + return PropertyICCompiler::ComputeStore(isolate(), GENERIC, extra_ic_state()); } Handle<Code> StoreIC::pre_monomorphic_stub(Isolate* isolate, StrictMode strict_mode) { ExtraICState state = ComputeExtraICState(strict_mode); - return isolate->stub_cache()->ComputeStore(PREMONOMORPHIC, state); + return PropertyICCompiler::ComputeStore(isolate, PREMONOMORPHIC, state); } void StoreIC::UpdateCaches(LookupResult* lookup, Handle<JSObject> receiver, - Handle<String> name, + Handle<Name> name, Handle<Object> value) { - ASSERT(lookup->IsFound()); + DCHECK(lookup->IsFound()); // These are not cacheable, so we never see such LookupResults here. - ASSERT(!lookup->IsHandler()); + DCHECK(!lookup->IsHandler()); - Handle<Code> code = ComputeHandler(lookup, receiver, name, value); + Handle<Code> code = ComputeStoreHandler(lookup, receiver, name, value); - PatchCache(CurrentTypeOf(receiver, isolate()), name, code); + PatchCache(name, code); TRACE_IC("StoreIC", name); } -Handle<Code> StoreIC::CompileHandler(LookupResult* lookup, - Handle<Object> object, - Handle<String> name, - Handle<Object> value, - InlineCacheHolderFlag cache_holder) { +Handle<Code> StoreIC::CompileStoreHandler(LookupResult* lookup, + Handle<Object> object, + Handle<Name> name, + Handle<Object> value, + CacheHolderFlag cache_holder) { if (object->IsAccessCheckNeeded()) return slow_stub(); - ASSERT(cache_holder == OWN_MAP); + DCHECK(cache_holder == kCacheOnReceiver || lookup->type() == CALLBACKS || + (object->IsJSGlobalProxy() && lookup->holder()->IsJSGlobalObject())); // This is currently guaranteed by checks in StoreIC::Store. Handle<JSObject> receiver = Handle<JSObject>::cast(object); Handle<JSObject> holder(lookup->holder()); - // Handlers do not use strict mode. - StoreStubCompiler compiler(isolate(), SLOPPY, kind()); + if (lookup->IsTransition()) { // Explicitly pass in the receiver map since LookupForWrite may have // stored something else than the receiver in the holder. 
Handle<Map> transition(lookup->GetTransitionTarget()); PropertyDetails details = lookup->GetPropertyDetails(); - if (details.type() != CALLBACKS && details.attributes() == NONE) { - return compiler.CompileStoreTransition( - receiver, lookup, transition, name); + if (details.type() != CALLBACKS && details.attributes() == NONE && + holder->HasFastProperties()) { + NamedStoreHandlerCompiler compiler(isolate(), receiver_type(), holder); + return compiler.CompileStoreTransition(transition, name); } } else { switch (lookup->type()) { - case FIELD: - return compiler.CompileStoreField(receiver, lookup, name); + case FIELD: { + bool use_stub = true; + if (lookup->representation().IsHeapObject()) { + // Only use a generic stub if no types need to be tracked. + HeapType* field_type = lookup->GetFieldType(); + HeapType::Iterator<Map> it = field_type->Classes(); + use_stub = it.Done(); + } + if (use_stub) { + StoreFieldStub stub(isolate(), lookup->GetFieldIndex(), + lookup->representation()); + return stub.GetCode(); + } + NamedStoreHandlerCompiler compiler(isolate(), receiver_type(), holder); + return compiler.CompileStoreField(lookup, name); + } case NORMAL: - if (kind() == Code::KEYED_STORE_IC) break; if (receiver->IsJSGlobalProxy() || receiver->IsGlobalObject()) { // The stub generated for the global object picks the value directly // from the property cell. So the property must be directly on the // global object. - Handle<GlobalObject> global = receiver->IsJSGlobalProxy() - ? handle(GlobalObject::cast(receiver->GetPrototype())) - : Handle<GlobalObject>::cast(receiver); + PrototypeIterator iter(isolate(), receiver); + Handle<GlobalObject> global = + receiver->IsJSGlobalProxy() + ? Handle<GlobalObject>::cast( + PrototypeIterator::GetCurrent(iter)) + : Handle<GlobalObject>::cast(receiver); Handle<PropertyCell> cell(global->GetPropertyCell(lookup), isolate()); Handle<HeapType> union_type = PropertyCell::UpdatedType(cell, value); StoreGlobalStub stub( @@ -1437,7 +1549,7 @@ Handle<Code> StoreIC::CompileHandler(LookupResult* lookup, HeapObject::UpdateMapCodeCache(receiver, name, code); return code; } - ASSERT(holder.is_identical_to(receiver)); + DCHECK(holder.is_identical_to(receiver)); return isolate()->builtins()->StoreIC_Normal(); case CALLBACKS: { Handle<Object> callback(lookup->GetCallbackObject(), isolate()); @@ -1446,8 +1558,13 @@ Handle<Code> StoreIC::CompileHandler(LookupResult* lookup, Handle<ExecutableAccessorInfo>::cast(callback); if (v8::ToCData<Address>(info->setter()) == 0) break; if (!holder->HasFastProperties()) break; - if (!info->IsCompatibleReceiver(*receiver)) break; - return compiler.CompileStoreCallback(receiver, holder, name, info); + if (!ExecutableAccessorInfo::IsCompatibleReceiverType( + isolate(), info, receiver_type())) { + break; + } + NamedStoreHandlerCompiler compiler(isolate(), receiver_type(), + holder); + return compiler.CompileStoreCallback(receiver, name, info); } else if (callback->IsAccessorPair()) { Handle<Object> setter( Handle<AccessorPair>::cast(callback)->setter(), isolate()); @@ -1456,22 +1573,25 @@ Handle<Code> StoreIC::CompileHandler(LookupResult* lookup, if (!holder->HasFastProperties()) break; Handle<JSFunction> function = Handle<JSFunction>::cast(setter); CallOptimization call_optimization(function); + NamedStoreHandlerCompiler compiler(isolate(), receiver_type(), + holder); if (call_optimization.is_simple_api_call() && call_optimization.IsCompatibleReceiver(receiver, holder)) { - return compiler.CompileStoreCallback( - receiver, holder, name, 
call_optimization); + return compiler.CompileStoreCallback(receiver, name, + call_optimization); } return compiler.CompileStoreViaSetter( - receiver, holder, name, Handle<JSFunction>::cast(setter)); + receiver, name, Handle<JSFunction>::cast(setter)); } // TODO(dcarney): Handle correctly. - ASSERT(callback->IsDeclaredAccessorInfo()); + DCHECK(callback->IsDeclaredAccessorInfo()); break; } - case INTERCEPTOR: - if (kind() == Code::KEYED_STORE_IC) break; - ASSERT(HasInterceptorSetter(*holder)); - return compiler.CompileStoreInterceptor(receiver, name); + case INTERCEPTOR: { + DCHECK(HasInterceptorSetter(*holder)); + NamedStoreHandlerCompiler compiler(isolate(), receiver_type(), holder); + return compiler.CompileStoreInterceptor(name); + } case CONSTANT: break; case NONEXISTENT: @@ -1501,7 +1621,7 @@ Handle<Code> KeyedStoreIC::StoreElementStub(Handle<JSObject> receiver, Handle<Map> monomorphic_map = ComputeTransitionedMap(receiver_map, store_mode); store_mode = GetNonTransitioningStoreMode(store_mode); - return isolate()->stub_cache()->ComputeKeyedStoreElement( + return PropertyICCompiler::ComputeKeyedStoreMonomorphic( monomorphic_map, strict_mode(), store_mode); } @@ -1526,7 +1646,7 @@ Handle<Code> KeyedStoreIC::StoreElementStub(Handle<JSObject> receiver, // if they at least come from the same origin for a transitioning store, // stay MONOMORPHIC and use the map for the most generic ElementsKind. store_mode = GetNonTransitioningStoreMode(store_mode); - return isolate()->stub_cache()->ComputeKeyedStoreElement( + return PropertyICCompiler::ComputeKeyedStoreMonomorphic( transitioned_receiver_map, strict_mode(), store_mode); } else if (*previous_receiver_map == receiver->map() && old_store_mode == STANDARD_STORE && @@ -1536,12 +1656,12 @@ Handle<Code> KeyedStoreIC::StoreElementStub(Handle<JSObject> receiver, // A "normal" IC that handles stores can switch to a version that can // grow at the end of the array, handle OOB accesses or copy COW arrays // and still stay MONOMORPHIC. - return isolate()->stub_cache()->ComputeKeyedStoreElement( + return PropertyICCompiler::ComputeKeyedStoreMonomorphic( receiver_map, strict_mode(), store_mode); } } - ASSERT(state() != GENERIC); + DCHECK(state() != GENERIC); bool map_added = AddOneReceiverMapIfMissing(&target_receiver_maps, receiver_map); @@ -1598,7 +1718,7 @@ Handle<Code> KeyedStoreIC::StoreElementStub(Handle<JSObject> receiver, } } - return isolate()->stub_cache()->ComputeStoreElementPolymorphic( + return PropertyICCompiler::ComputeKeyedStorePolymorphic( &target_receiver_maps, store_mode, strict_mode()); } @@ -1624,7 +1744,7 @@ Handle<Map> KeyedStoreIC::ComputeTransitionedMap( case STORE_AND_GROW_TRANSITION_HOLEY_SMI_TO_DOUBLE: return Map::TransitionElementsTo(map, FAST_HOLEY_DOUBLE_ELEMENTS); case STORE_NO_TRANSITION_IGNORE_OUT_OF_BOUNDS: - ASSERT(map->has_external_array_elements()); + DCHECK(map->has_external_array_elements()); // Fall through case STORE_NO_TRANSITION_HANDLE_COW: case STANDARD_STORE: @@ -1725,13 +1845,15 @@ KeyedAccessStoreMode KeyedStoreIC::GetStoreMode(Handle<JSObject> receiver, MaybeHandle<Object> KeyedStoreIC::Store(Handle<Object> object, Handle<Object> key, Handle<Object> value) { + // TODO(verwaest): Let SetProperty do the migration, since storing a property + // might deprecate the current map again, if value does not fit. 
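// A sketch of the store-mode split that StoreElementStub above preserves
// across polymorphism: a keyed store is either in-bounds, a grow at the end
// of the array, or an elements-kind transition. The classification below is
// invented and heavily simplified relative to GetStoreMode.
enum StoreModeSketch { STANDARD, GROW, TRANSITION };

struct ArrayLike {
  int length;
  bool value_fits_elements_kind;   // e.g. storing a double into smi elements
};

StoreModeSketch ClassifyStore(const ArrayLike& a, int index) {
  if (!a.value_fits_elements_kind) return TRANSITION;  // change elements kind
  if (index == a.length) return GROW;                  // append one past end
  return STANDARD;                                     // plain in-bounds store
}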
if (MigrateDeprecated(object)) { Handle<Object> result; ASSIGN_RETURN_ON_EXCEPTION( isolate(), result, Runtime::SetObjectProperty( - isolate(), object, key, value, NONE, strict_mode()), + isolate(), object, key, value, strict_mode()), Object); return result; } @@ -1752,66 +1874,67 @@ MaybeHandle<Object> KeyedStoreIC::Store(Handle<Object> object, value, JSReceiver::MAY_BE_STORE_FROM_KEYED), Object); - } else { - bool use_ic = FLAG_use_ic && - !object->IsStringWrapper() && - !object->IsAccessCheckNeeded() && - !object->IsJSGlobalProxy() && - !(object->IsJSObject() && - JSObject::cast(*object)->map()->is_observed()); - if (use_ic && !object->IsSmi()) { - // Don't use ICs for maps of the objects in Array's prototype chain. We - // expect to be able to trap element sets to objects with those maps in - // the runtime to enable optimization of element hole access. - Handle<HeapObject> heap_object = Handle<HeapObject>::cast(object); - if (heap_object->map()->IsMapInArrayPrototypeChain()) use_ic = false; - } + TRACE_GENERIC_IC(isolate(), "KeyedStoreIC", "set generic"); + set_target(*stub); + return store_handle; + } - if (use_ic) { - ASSERT(!object->IsAccessCheckNeeded()); + bool use_ic = + FLAG_use_ic && !object->IsStringWrapper() && + !object->IsAccessCheckNeeded() && !object->IsJSGlobalProxy() && + !(object->IsJSObject() && JSObject::cast(*object)->map()->is_observed()); + if (use_ic && !object->IsSmi()) { + // Don't use ICs for maps of the objects in Array's prototype chain. We + // expect to be able to trap element sets to objects with those maps in + // the runtime to enable optimization of element hole access. + Handle<HeapObject> heap_object = Handle<HeapObject>::cast(object); + if (heap_object->map()->IsMapInArrayPrototypeChain()) use_ic = false; + } - if (object->IsJSObject()) { - Handle<JSObject> receiver = Handle<JSObject>::cast(object); - bool key_is_smi_like = !Object::ToSmi(isolate(), key).is_null(); - if (receiver->elements()->map() == - isolate()->heap()->sloppy_arguments_elements_map()) { - if (strict_mode() == SLOPPY) { - stub = sloppy_arguments_stub(); - } - } else if (key_is_smi_like && - !(target().is_identical_to(sloppy_arguments_stub()))) { - // We should go generic if receiver isn't a dictionary, but our - // prototype chain does have dictionary elements. This ensures that - // other non-dictionary receivers in the polymorphic case benefit - // from fast path keyed stores. - if (!(receiver->map()->DictionaryElementsInPrototypeChainOnly())) { - KeyedAccessStoreMode store_mode = - GetStoreMode(receiver, key, value); - stub = StoreElementStub(receiver, store_mode); - } + if (use_ic) { + DCHECK(!object->IsAccessCheckNeeded()); + + if (object->IsJSObject()) { + Handle<JSObject> receiver = Handle<JSObject>::cast(object); + bool key_is_smi_like = !Object::ToSmi(isolate(), key).is_null(); + if (receiver->elements()->map() == + isolate()->heap()->sloppy_arguments_elements_map()) { + if (strict_mode() == SLOPPY) { + stub = sloppy_arguments_stub(); + } + } else if (key_is_smi_like && + !(target().is_identical_to(sloppy_arguments_stub()))) { + // We should go generic if receiver isn't a dictionary, but our + // prototype chain does have dictionary elements. This ensures that + // other non-dictionary receivers in the polymorphic case benefit + // from fast path keyed stores. 
+ if (!(receiver->map()->DictionaryElementsInPrototypeChainOnly())) { + KeyedAccessStoreMode store_mode = GetStoreMode(receiver, key, value); + stub = StoreElementStub(receiver, store_mode); } } } } - if (!is_target_set()) { - if (*stub == *generic_stub()) { - TRACE_GENERIC_IC(isolate(), "KeyedStoreIC", "set generic"); - } - ASSERT(!stub.is_null()); - set_target(*stub); - TRACE_IC("StoreIC", key); + if (store_handle.is_null()) { + ASSIGN_RETURN_ON_EXCEPTION( + isolate(), + store_handle, + Runtime::SetObjectProperty( + isolate(), object, key, value, strict_mode()), + Object); } - if (!store_handle.is_null()) return store_handle; - Handle<Object> result; - ASSIGN_RETURN_ON_EXCEPTION( - isolate(), - result, - Runtime::SetObjectProperty( - isolate(), object, key, value, NONE, strict_mode()), - Object); - return result; + DCHECK(!is_target_set()); + Code* generic = *generic_stub(); + if (*stub == generic) { + TRACE_GENERIC_IC(isolate(), "KeyedStoreIC", "set generic"); + } + DCHECK(!stub.is_null()); + set_target(*stub); + TRACE_IC("StoreIC", key); + + return store_handle; } @@ -1829,30 +1952,113 @@ ExtraICState CallIC::State::GetExtraICState() const { } +bool CallIC::DoCustomHandler(Handle<Object> receiver, + Handle<Object> function, + Handle<FixedArray> vector, + Handle<Smi> slot, + const State& state) { + DCHECK(FLAG_use_ic && function->IsJSFunction()); + + // Are we the array function? + Handle<JSFunction> array_function = Handle<JSFunction>( + isolate()->native_context()->array_function()); + if (array_function.is_identical_to(Handle<JSFunction>::cast(function))) { + // Alter the slot. + IC::State old_state = FeedbackToState(vector, slot); + Object* feedback = vector->get(slot->value()); + if (!feedback->IsAllocationSite()) { + Handle<AllocationSite> new_site = + isolate()->factory()->NewAllocationSite(); + vector->set(slot->value(), *new_site); + } + + CallIC_ArrayStub stub(isolate(), state); + set_target(*stub.GetCode()); + Handle<String> name; + if (array_function->shared()->name()->IsString()) { + name = Handle<String>(String::cast(array_function->shared()->name()), + isolate()); + } + + IC::State new_state = FeedbackToState(vector, slot); + OnTypeFeedbackChanged(isolate(), address(), old_state, new_state, true); + TRACE_VECTOR_IC("CallIC (custom handler)", name, old_state, new_state); + return true; + } + return false; +} + + +void CallIC::PatchMegamorphic(Handle<Object> function, + Handle<FixedArray> vector, Handle<Smi> slot) { + State state(target()->extra_ic_state()); + IC::State old_state = FeedbackToState(vector, slot); + + // We are going generic. 
+ vector->set(slot->value(), + *TypeFeedbackInfo::MegamorphicSentinel(isolate()), + SKIP_WRITE_BARRIER); + + CallICStub stub(isolate(), state); + Handle<Code> code = stub.GetCode(); + set_target(*code); + + Handle<Object> name = isolate()->factory()->empty_string(); + if (function->IsJSFunction()) { + Handle<JSFunction> js_function = Handle<JSFunction>::cast(function); + name = handle(js_function->shared()->name(), isolate()); + } + + IC::State new_state = FeedbackToState(vector, slot); + OnTypeFeedbackChanged(isolate(), address(), old_state, new_state, true); + TRACE_VECTOR_IC("CallIC", name, old_state, new_state); +} + + void CallIC::HandleMiss(Handle<Object> receiver, Handle<Object> function, Handle<FixedArray> vector, Handle<Smi> slot) { State state(target()->extra_ic_state()); + IC::State old_state = FeedbackToState(vector, slot); + Handle<Object> name = isolate()->factory()->empty_string(); Object* feedback = vector->get(slot->value()); + // Hand-coded MISS handling is easier if CallIC slots don't contain smis. + DCHECK(!feedback->IsSmi()); + if (feedback->IsJSFunction() || !function->IsJSFunction()) { // We are going generic. - ASSERT(!function->IsJSFunction() || *function != feedback); - vector->set(slot->value(), *TypeFeedbackInfo::MegamorphicSentinel(isolate()), SKIP_WRITE_BARRIER); - TRACE_GENERIC_IC(isolate(), "CallIC", "megamorphic"); } else { - // If we came here feedback must be the uninitialized sentinel, - // and we are going monomorphic. - ASSERT(feedback == *TypeFeedbackInfo::UninitializedSentinel(isolate())); - Handle<JSFunction> js_function = Handle<JSFunction>::cast(function); - Handle<Object> name(js_function->shared()->name(), isolate()); - TRACE_IC("CallIC", name); + // The feedback is either uninitialized or an allocation site. + // It might be an allocation site because if we re-compile the full code + // to add deoptimization support, we call with the default call-ic, and + // merely need to patch the target to match the feedback. + // TODO(mvstanton): the better approach is to dispense with patching + // altogether, which is in progress. + DCHECK(feedback == *TypeFeedbackInfo::UninitializedSentinel(isolate()) || + feedback->IsAllocationSite()); + + // Do we want to install a custom handler? + if (FLAG_use_ic && + DoCustomHandler(receiver, function, vector, slot, state)) { + return; + } + vector->set(slot->value(), *function); } + + if (function->IsJSFunction()) { + Handle<JSFunction> js_function = Handle<JSFunction>::cast(function); + name = handle(js_function->shared()->name(), isolate()); + } + + IC::State new_state = FeedbackToState(vector, slot); + OnTypeFeedbackChanged(isolate(), address(), old_state, new_state, true); + TRACE_VECTOR_IC("CallIC", name, old_state, new_state); } @@ -1865,8 +2071,9 @@ void CallIC::HandleMiss(Handle<Object> receiver, // Used from ic-<arch>.cc. RUNTIME_FUNCTION(CallIC_Miss) { + TimerEventScope<TimerEventIcMiss> timer(isolate); HandleScope scope(isolate); - ASSERT(args.length() == 4); + DCHECK(args.length() == 4); CallIC ic(isolate); Handle<Object> receiver = args.at<Object>(0); Handle<Object> function = args.at<Object>(1); @@ -1877,10 +2084,25 @@ RUNTIME_FUNCTION(CallIC_Miss) { } +RUNTIME_FUNCTION(CallIC_Customization_Miss) { + TimerEventScope<TimerEventIcMiss> timer(isolate); + HandleScope scope(isolate); + DCHECK(args.length() == 4); + // A miss on a custom call ic always results in going megamorphic. 
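// PatchMegamorphic and HandleMiss above treat the feedback vector slot as a
// tiny lattice: uninitialized sentinel -> one JSFunction (or an
// AllocationSite for the Array function) -> megamorphic sentinel. A sketch
// with invented tag values standing in for the sentinels:
#include <cstdint>

const intptr_t kUninitialized = 0;   // stand-in for UninitializedSentinel
const intptr_t kMegamorphic = -1;    // stand-in for MegamorphicSentinel

// Record a call target in a slot; returns true if the site is megamorphic.
bool RecordCallFeedback(intptr_t* slot, intptr_t callee) {
  if (*slot == kUninitialized) {     // first call: go monomorphic
    *slot = callee;
    return false;
  }
  if (*slot == callee) return false;           // same target, stay put
  if (*slot == kMegamorphic) return true;      // already gave up
  *slot = kMegamorphic;                        // second distinct target
  return true;
}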
+ CallIC ic(isolate); + Handle<Object> function = args.at<Object>(1); + Handle<FixedArray> vector = args.at<FixedArray>(2); + Handle<Smi> slot = args.at<Smi>(3); + ic.PatchMegamorphic(function, vector, slot); + return *function; +} + + // Used from ic-<arch>.cc. RUNTIME_FUNCTION(LoadIC_Miss) { + TimerEventScope<TimerEventIcMiss> timer(isolate); HandleScope scope(isolate); - ASSERT(args.length() == 2); + DCHECK(args.length() == 2); LoadIC ic(IC::NO_EXTRA_FRAME, isolate); Handle<Object> receiver = args.at<Object>(0); Handle<String> key = args.at<String>(1); @@ -1893,8 +2115,9 @@ RUNTIME_FUNCTION(LoadIC_Miss) { // Used from ic-<arch>.cc RUNTIME_FUNCTION(KeyedLoadIC_Miss) { + TimerEventScope<TimerEventIcMiss> timer(isolate); HandleScope scope(isolate); - ASSERT(args.length() == 2); + DCHECK(args.length() == 2); KeyedLoadIC ic(IC::NO_EXTRA_FRAME, isolate); Handle<Object> receiver = args.at<Object>(0); Handle<Object> key = args.at<Object>(1); @@ -1906,8 +2129,9 @@ RUNTIME_FUNCTION(KeyedLoadIC_Miss) { RUNTIME_FUNCTION(KeyedLoadIC_MissFromStubFailure) { + TimerEventScope<TimerEventIcMiss> timer(isolate); HandleScope scope(isolate); - ASSERT(args.length() == 2); + DCHECK(args.length() == 2); KeyedLoadIC ic(IC::EXTRA_CALL_FRAME, isolate); Handle<Object> receiver = args.at<Object>(0); Handle<Object> key = args.at<Object>(1); @@ -1920,8 +2144,9 @@ RUNTIME_FUNCTION(KeyedLoadIC_MissFromStubFailure) { // Used from ic-<arch>.cc. RUNTIME_FUNCTION(StoreIC_Miss) { + TimerEventScope<TimerEventIcMiss> timer(isolate); HandleScope scope(isolate); - ASSERT(args.length() == 3); + DCHECK(args.length() == 3); StoreIC ic(IC::NO_EXTRA_FRAME, isolate); Handle<Object> receiver = args.at<Object>(0); Handle<String> key = args.at<String>(1); @@ -1936,8 +2161,9 @@ RUNTIME_FUNCTION(StoreIC_Miss) { RUNTIME_FUNCTION(StoreIC_MissFromStubFailure) { + TimerEventScope<TimerEventIcMiss> timer(isolate); HandleScope scope(isolate); - ASSERT(args.length() == 3); + DCHECK(args.length() == 3); StoreIC ic(IC::EXTRA_CALL_FRAME, isolate); Handle<Object> receiver = args.at<Object>(0); Handle<String> key = args.at<String>(1); @@ -1951,35 +2177,13 @@ RUNTIME_FUNCTION(StoreIC_MissFromStubFailure) { } -RUNTIME_FUNCTION(StoreIC_ArrayLength) { - HandleScope scope(isolate); - - ASSERT(args.length() == 2); - Handle<JSArray> receiver = args.at<JSArray>(0); - Handle<Object> len = args.at<Object>(1); - - // The generated code should filter out non-Smis before we get here. - ASSERT(len->IsSmi()); - -#ifdef DEBUG - // The length property has to be a writable callback property. - LookupResult debug_lookup(isolate); - receiver->LocalLookup(isolate->factory()->length_string(), &debug_lookup); - ASSERT(debug_lookup.IsPropertyCallbacks() && !debug_lookup.IsReadOnly()); -#endif - - RETURN_FAILURE_ON_EXCEPTION( - isolate, JSArray::SetElementsLength(receiver, len)); - return *len; -} - - // Extend storage is called in a store inline cache when // it is necessary to extend the properties array of a // JSObject. RUNTIME_FUNCTION(SharedStoreIC_ExtendStorage) { + TimerEventScope<TimerEventIcMiss> timer(isolate); HandleScope shs(isolate); - ASSERT(args.length() == 3); + DCHECK(args.length() == 3); // Convert the parameters Handle<JSObject> object = args.at<JSObject>(0); @@ -1987,29 +2191,10 @@ RUNTIME_FUNCTION(SharedStoreIC_ExtendStorage) { Handle<Object> value = args.at<Object>(2); // Check the object has run out out property space. - ASSERT(object->HasFastProperties()); - ASSERT(object->map()->unused_property_fields() == 0); - - // Expand the properties array. 
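// The deleted lines just below expanded the properties array by hand; the
// hunk replaces them with a single JSObject::MigrateToNewProperty call. In
// outline, both do roughly the following. FixedStore is an invented
// stand-in for the out-of-object properties backing store:
#include <vector>

struct FixedStore { std::vector<long> slots; };

// Grow the backing store by the transition's unused fields plus one, store
// the new value in the first fresh slot, then (in V8) swap in the new map.
void ExtendStorage(FixedStore& props, int new_unused, long value) {
  size_t old_len = props.slots.size();
  props.slots.resize(old_len + new_unused + 1);  // new_size = old + unused + 1
  props.slots[old_len] = value;                  // write the migrated value
}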
- Handle<FixedArray> old_storage = handle(object->properties(), isolate); - int new_unused = transition->unused_property_fields(); - int new_size = old_storage->length() + new_unused + 1; + DCHECK(object->HasFastProperties()); + DCHECK(object->map()->unused_property_fields() == 0); - Handle<FixedArray> new_storage = FixedArray::CopySize(old_storage, new_size); - - Handle<Object> to_store = value; - - PropertyDetails details = transition->instance_descriptors()->GetDetails( - transition->LastAdded()); - if (details.representation().IsDouble()) { - to_store = isolate->factory()->NewHeapNumber(value->Number()); - } - - new_storage->set(old_storage->length(), *to_store); - - // Set the new property value and do the map transition. - object->set_properties(*new_storage); - object->set_map(*transition); + JSObject::MigrateToNewProperty(object, transition, value); // Return the stored value. return *value; @@ -2018,8 +2203,9 @@ RUNTIME_FUNCTION(SharedStoreIC_ExtendStorage) { // Used from ic-<arch>.cc. RUNTIME_FUNCTION(KeyedStoreIC_Miss) { + TimerEventScope<TimerEventIcMiss> timer(isolate); HandleScope scope(isolate); - ASSERT(args.length() == 3); + DCHECK(args.length() == 3); KeyedStoreIC ic(IC::NO_EXTRA_FRAME, isolate); Handle<Object> receiver = args.at<Object>(0); Handle<Object> key = args.at<Object>(1); @@ -2034,8 +2220,9 @@ RUNTIME_FUNCTION(KeyedStoreIC_Miss) { RUNTIME_FUNCTION(KeyedStoreIC_MissFromStubFailure) { + TimerEventScope<TimerEventIcMiss> timer(isolate); HandleScope scope(isolate); - ASSERT(args.length() == 3); + DCHECK(args.length() == 3); KeyedStoreIC ic(IC::EXTRA_CALL_FRAME, isolate); Handle<Object> receiver = args.at<Object>(0); Handle<Object> key = args.at<Object>(1); @@ -2051,7 +2238,7 @@ RUNTIME_FUNCTION(KeyedStoreIC_MissFromStubFailure) { RUNTIME_FUNCTION(StoreIC_Slow) { HandleScope scope(isolate); - ASSERT(args.length() == 3); + DCHECK(args.length() == 3); StoreIC ic(IC::NO_EXTRA_FRAME, isolate); Handle<Object> object = args.at<Object>(0); Handle<Object> key = args.at<Object>(1); @@ -2061,14 +2248,14 @@ RUNTIME_FUNCTION(StoreIC_Slow) { ASSIGN_RETURN_FAILURE_ON_EXCEPTION( isolate, result, Runtime::SetObjectProperty( - isolate, object, key, value, NONE, strict_mode)); + isolate, object, key, value, strict_mode)); return *result; } RUNTIME_FUNCTION(KeyedStoreIC_Slow) { HandleScope scope(isolate); - ASSERT(args.length() == 3); + DCHECK(args.length() == 3); KeyedStoreIC ic(IC::NO_EXTRA_FRAME, isolate); Handle<Object> object = args.at<Object>(0); Handle<Object> key = args.at<Object>(1); @@ -2078,14 +2265,15 @@ RUNTIME_FUNCTION(KeyedStoreIC_Slow) { ASSIGN_RETURN_FAILURE_ON_EXCEPTION( isolate, result, Runtime::SetObjectProperty( - isolate, object, key, value, NONE, strict_mode)); + isolate, object, key, value, strict_mode)); return *result; } RUNTIME_FUNCTION(ElementsTransitionAndStoreIC_Miss) { + TimerEventScope<TimerEventIcMiss> timer(isolate); HandleScope scope(isolate); - ASSERT(args.length() == 4); + DCHECK(args.length() == 4); KeyedStoreIC ic(IC::EXTRA_CALL_FRAME, isolate); Handle<Object> value = args.at<Object>(0); Handle<Map> map = args.at<Map>(1); @@ -2100,16 +2288,13 @@ RUNTIME_FUNCTION(ElementsTransitionAndStoreIC_Miss) { ASSIGN_RETURN_FAILURE_ON_EXCEPTION( isolate, result, Runtime::SetObjectProperty( - isolate, object, key, value, NONE, strict_mode)); + isolate, object, key, value, strict_mode)); return *result; } BinaryOpIC::State::State(Isolate* isolate, ExtraICState extra_ic_state) : isolate_(isolate) { - // We don't deserialize the SSE2 Field, since this is only 
used to be able - // to include SSE2 as well as non-SSE2 versions in the snapshot. For code - // generation we always want it to reflect the current state. op_ = static_cast<Token::Value>( FIRST_TOKEN + OpField::decode(extra_ic_state)); mode_ = OverwriteModeField::decode(extra_ic_state); @@ -2123,16 +2308,13 @@ BinaryOpIC::State::State(Isolate* isolate, ExtraICState extra_ic_state) right_kind_ = RightKindField::decode(extra_ic_state); } result_kind_ = ResultKindField::decode(extra_ic_state); - ASSERT_LE(FIRST_TOKEN, op_); - ASSERT_LE(op_, LAST_TOKEN); + DCHECK_LE(FIRST_TOKEN, op_); + DCHECK_LE(op_, LAST_TOKEN); } ExtraICState BinaryOpIC::State::GetExtraICState() const { - bool sse2 = (Max(result_kind_, Max(left_kind_, right_kind_)) > SMI && - CpuFeatures::IsSafeForSnapshot(isolate(), SSE2)); ExtraICState extra_ic_state = - SSE2Field::encode(sse2) | OpField::encode(op_ - FIRST_TOKEN) | OverwriteModeField::encode(mode_) | LeftKindField::encode(left_kind_) | @@ -2380,23 +2562,25 @@ Type* BinaryOpIC::State::GetResultType(Zone* zone) const { } else if (result_kind == NUMBER && op_ == Token::SHR) { return Type::Unsigned32(zone); } - ASSERT_NE(GENERIC, result_kind); + DCHECK_NE(GENERIC, result_kind); return KindToType(result_kind, zone); } -void BinaryOpIC::State::Print(StringStream* stream) const { - stream->Add("(%s", Token::Name(op_)); - if (mode_ == OVERWRITE_LEFT) stream->Add("_ReuseLeft"); - else if (mode_ == OVERWRITE_RIGHT) stream->Add("_ReuseRight"); - if (CouldCreateAllocationMementos()) stream->Add("_CreateAllocationMementos"); - stream->Add(":%s*", KindToString(left_kind_)); - if (fixed_right_arg_.has_value) { - stream->Add("%d", fixed_right_arg_.value); +OStream& operator<<(OStream& os, const BinaryOpIC::State& s) { + os << "(" << Token::Name(s.op_); + if (s.mode_ == OVERWRITE_LEFT) + os << "_ReuseLeft"; + else if (s.mode_ == OVERWRITE_RIGHT) + os << "_ReuseRight"; + if (s.CouldCreateAllocationMementos()) os << "_CreateAllocationMementos"; + os << ":" << BinaryOpIC::State::KindToString(s.left_kind_) << "*"; + if (s.fixed_right_arg_.has_value) { + os << s.fixed_right_arg_.value; } else { - stream->Add("%s", KindToString(right_kind_)); + os << BinaryOpIC::State::KindToString(s.right_kind_); } - stream->Add("->%s)", KindToString(result_kind_)); + return os << "->" << BinaryOpIC::State::KindToString(s.result_kind_) << ")"; } @@ -2432,12 +2616,12 @@ void BinaryOpIC::State::Update(Handle<Object> left, // We don't want to distinguish INT32 and NUMBER for string add (because // NumberToString can't make use of this anyway). if (left_kind_ == STRING && right_kind_ == INT32) { - ASSERT_EQ(STRING, result_kind_); - ASSERT_EQ(Token::ADD, op_); + DCHECK_EQ(STRING, result_kind_); + DCHECK_EQ(Token::ADD, op_); right_kind_ = NUMBER; } else if (right_kind_ == STRING && left_kind_ == INT32) { - ASSERT_EQ(STRING, result_kind_); - ASSERT_EQ(Token::ADD, op_); + DCHECK_EQ(STRING, result_kind_); + DCHECK_EQ(Token::ADD, op_); left_kind_ = NUMBER; } @@ -2453,14 +2637,9 @@ void BinaryOpIC::State::Update(Handle<Object> left, // Tagged operations can lead to non-truncating HChanges if (left->IsUndefined() || left->IsBoolean()) { left_kind_ = GENERIC; - } else if (right->IsUndefined() || right->IsBoolean()) { - right_kind_ = GENERIC; } else { - // Since the X87 is too precise, we might bail out on numbers which - // actually would truncate with 64 bit precision. 
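// BinaryOpIC::State above round-trips through ExtraICState using BitField
// encoders (OpField, OverwriteModeField, the Kind fields). A minimal
// re-implementation of that pattern, independent of V8's BitField header;
// the example field widths below are made up:
#include <cstdint>

template <typename T, int kShift, int kBits>
struct BitFieldSketch {
  static const uint32_t kMask = ((1u << kBits) - 1u) << kShift;
  static uint32_t encode(T value) {
    return (static_cast<uint32_t>(value) << kShift) & kMask;
  }
  static T decode(uint32_t packed) {
    return static_cast<T>((packed & kMask) >> kShift);
  }
};

typedef BitFieldSketch<int, 0, 4> OpBits;        // token, biased by FIRST_TOKEN
typedef BitFieldSketch<int, 4, 2> ModeBits;      // overwrite mode
typedef BitFieldSketch<int, 6, 3> LeftKindBits;  // NONE/SMI/INT32/NUMBER/...

// uint32_t state = OpBits::encode(op) | ModeBits::encode(mode) | ...;
// int op = OpBits::decode(state);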
- ASSERT(!CpuFeatures::IsSupported(SSE2)); - ASSERT(result_kind_ < NUMBER); - result_kind_ = NUMBER; + DCHECK(right->IsUndefined() || right->IsBoolean()); + right_kind_ = GENERIC; } } } @@ -2563,33 +2742,26 @@ MaybeHandle<Object> BinaryOpIC::Transition( target = stub.GetCodeCopyFromTemplate(allocation_site); // Sanity check the trampoline stub. - ASSERT_EQ(*allocation_site, target->FindFirstAllocationSite()); + DCHECK_EQ(*allocation_site, target->FindFirstAllocationSite()); } else { // Install the generic stub. BinaryOpICStub stub(isolate(), state); target = stub.GetCode(); // Sanity check the generic stub. - ASSERT_EQ(NULL, target->FindFirstAllocationSite()); + DCHECK_EQ(NULL, target->FindFirstAllocationSite()); } set_target(*target); if (FLAG_trace_ic) { - char buffer[150]; - NoAllocationStringAllocator allocator( - buffer, static_cast<unsigned>(sizeof(buffer))); - StringStream stream(&allocator); - stream.Add("[BinaryOpIC"); - old_state.Print(&stream); - stream.Add(" => "); - state.Print(&stream); - stream.Add(" @ %p <- ", static_cast<void*>(*target)); - stream.OutputToStdOut(); + OFStream os(stdout); + os << "[BinaryOpIC" << old_state << " => " << state << " @ " + << static_cast<void*>(*target) << " <- "; JavaScriptFrame::PrintTop(isolate(), stdout, false, true); if (!allocation_site.is_null()) { - PrintF(" using allocation site %p", static_cast<void*>(*allocation_site)); + os << " using allocation site " << static_cast<void*>(*allocation_site); } - PrintF("]\n"); + os << "]" << endl; } // Patch the inlined smi code as necessary. @@ -2604,8 +2776,9 @@ MaybeHandle<Object> BinaryOpIC::Transition( RUNTIME_FUNCTION(BinaryOpIC_Miss) { + TimerEventScope<TimerEventIcMiss> timer(isolate); HandleScope scope(isolate); - ASSERT_EQ(2, args.length()); + DCHECK_EQ(2, args.length()); Handle<Object> left = args.at<Object>(BinaryOpICStub::kLeft); Handle<Object> right = args.at<Object>(BinaryOpICStub::kRight); BinaryOpIC ic(isolate); @@ -2619,8 +2792,9 @@ RUNTIME_FUNCTION(BinaryOpIC_Miss) { RUNTIME_FUNCTION(BinaryOpIC_MissWithAllocationSite) { + TimerEventScope<TimerEventIcMiss> timer(isolate); HandleScope scope(isolate); - ASSERT_EQ(3, args.length()); + DCHECK_EQ(3, args.length()); Handle<AllocationSite> allocation_site = args.at<AllocationSite>( BinaryOpWithAllocationSiteStub::kAllocationSite); Handle<Object> left = args.at<Object>( @@ -2689,15 +2863,12 @@ Type* CompareIC::StateToType( } -void CompareIC::StubInfoToType(int stub_minor_key, - Type** left_type, - Type** right_type, - Type** overall_type, - Handle<Map> map, - Zone* zone) { +void CompareIC::StubInfoToType(uint32_t stub_key, Type** left_type, + Type** right_type, Type** overall_type, + Handle<Map> map, Zone* zone) { State left_state, right_state, handler_state; - ICCompareStub::DecodeMinorKey(stub_minor_key, &left_state, &right_state, - &handler_state, NULL); + ICCompareStub::DecodeKey(stub_key, &left_state, &right_state, &handler_state, + NULL); *left_type = StateToType(zone, left_state); *right_type = StateToType(zone, right_state); *overall_type = StateToType(zone, handler_state, map); @@ -2784,7 +2955,7 @@ CompareIC::State CompareIC::TargetState(State old_state, case SMI: return x->IsNumber() && y->IsNumber() ? 
NUMBER : GENERIC; case INTERNALIZED_STRING: - ASSERT(Token::IsEqualityOp(op_)); + DCHECK(Token::IsEqualityOp(op_)); if (x->IsString() && y->IsString()) return STRING; if (x->IsUniqueName() && y->IsUniqueName()) return UNIQUE_NAME; return GENERIC; @@ -2796,7 +2967,7 @@ CompareIC::State CompareIC::TargetState(State old_state, if (old_right == SMI && y->IsHeapNumber()) return NUMBER; return GENERIC; case KNOWN_OBJECT: - ASSERT(Token::IsEqualityOp(op_)); + DCHECK(Token::IsEqualityOp(op_)); if (x->IsJSObject() && y->IsJSObject()) return OBJECT; return GENERIC; case STRING: @@ -2813,8 +2984,8 @@ CompareIC::State CompareIC::TargetState(State old_state, Code* CompareIC::UpdateCaches(Handle<Object> x, Handle<Object> y) { HandleScope scope(isolate()); State previous_left, previous_right, previous_state; - ICCompareStub::DecodeMinorKey(target()->stub_info(), &previous_left, - &previous_right, &previous_state, NULL); + ICCompareStub::DecodeKey(target()->stub_key(), &previous_left, + &previous_right, &previous_state, NULL); State new_left = NewInputState(previous_left, x); State new_right = NewInputState(previous_right, y); State state = TargetState(previous_state, previous_left, previous_right, @@ -2852,8 +3023,9 @@ Code* CompareIC::UpdateCaches(Handle<Object> x, Handle<Object> y) { // Used from ICCompareStub::GenerateMiss in code-stubs-<arch>.cc. RUNTIME_FUNCTION(CompareIC_Miss) { + TimerEventScope<TimerEventIcMiss> timer(isolate); HandleScope scope(isolate); - ASSERT(args.length() == 3); + DCHECK(args.length() == 3); CompareIC ic(isolate, static_cast<Token::Value>(args.smi_at(2))); return ic.UpdateCaches(args.at<Object>(0), args.at<Object>(1)); } @@ -2906,7 +3078,7 @@ Handle<Object> CompareNilIC::CompareNil(Handle<Object> object) { Handle<Map> monomorphic_map(already_monomorphic && FirstTargetMap() != NULL ? FirstTargetMap() : HeapObject::cast(*object)->map()); - code = isolate()->stub_cache()->ComputeCompareNil(monomorphic_map, stub); + code = PropertyICCompiler::ComputeCompareNil(monomorphic_map, &stub); } else { code = stub.GetCode(); } @@ -2916,6 +3088,7 @@ Handle<Object> CompareNilIC::CompareNil(Handle<Object> object) { RUNTIME_FUNCTION(CompareNilIC_Miss) { + TimerEventScope<TimerEventIcMiss> timer(isolate); HandleScope scope(isolate); Handle<Object> object = args.at<Object>(0); CompareNilIC ic(isolate); @@ -2981,7 +3154,8 @@ Handle<Object> ToBooleanIC::ToBoolean(Handle<Object> object) { RUNTIME_FUNCTION(ToBooleanIC_Miss) { - ASSERT(args.length() == 1); + TimerEventScope<TimerEventIcMiss> timer(isolate); + DCHECK(args.length() == 1); HandleScope scope(isolate); Handle<Object> object = args.at<Object>(0); ToBooleanIC ic(isolate); diff --git a/deps/v8/src/ic.h b/deps/v8/src/ic.h index 895c21e736c..eb844cf7471 100644 --- a/deps/v8/src/ic.h +++ b/deps/v8/src/ic.h @@ -5,7 +5,7 @@ #ifndef V8_IC_H_ #define V8_IC_H_ -#include "macro-assembler.h" +#include "src/macro-assembler.h" namespace v8 { namespace internal { @@ -16,27 +16,26 @@ const int kMaxKeyedPolymorphism = 4; // IC_UTIL_LIST defines all utility functions called from generated // inline caching code. The argument for the macro, ICU, is the function name. -#define IC_UTIL_LIST(ICU) \ - ICU(LoadIC_Miss) \ - ICU(KeyedLoadIC_Miss) \ - ICU(CallIC_Miss) \ - ICU(StoreIC_Miss) \ - ICU(StoreIC_ArrayLength) \ - ICU(StoreIC_Slow) \ - ICU(SharedStoreIC_ExtendStorage) \ - ICU(KeyedStoreIC_Miss) \ - ICU(KeyedStoreIC_Slow) \ - /* Utilities for IC stubs. 
*/ \ - ICU(StoreCallbackProperty) \ - ICU(LoadPropertyWithInterceptorOnly) \ - ICU(LoadPropertyWithInterceptorForLoad) \ - ICU(LoadPropertyWithInterceptorForCall) \ - ICU(KeyedLoadPropertyWithInterceptor) \ - ICU(StoreInterceptorProperty) \ - ICU(CompareIC_Miss) \ - ICU(BinaryOpIC_Miss) \ - ICU(CompareNilIC_Miss) \ - ICU(Unreachable) \ +#define IC_UTIL_LIST(ICU) \ + ICU(LoadIC_Miss) \ + ICU(KeyedLoadIC_Miss) \ + ICU(CallIC_Miss) \ + ICU(CallIC_Customization_Miss) \ + ICU(StoreIC_Miss) \ + ICU(StoreIC_Slow) \ + ICU(SharedStoreIC_ExtendStorage) \ + ICU(KeyedStoreIC_Miss) \ + ICU(KeyedStoreIC_Slow) \ + /* Utilities for IC stubs. */ \ + ICU(StoreCallbackProperty) \ + ICU(LoadPropertyWithInterceptorOnly) \ + ICU(LoadPropertyWithInterceptor) \ + ICU(LoadElementWithInterceptor) \ + ICU(StorePropertyWithInterceptor) \ + ICU(CompareIC_Miss) \ + ICU(BinaryOpIC_Miss) \ + ICU(CompareNilIC_Miss) \ + ICU(Unreachable) \ ICU(ToBooleanIC_Miss) // // IC is the base class for LoadIC, StoreIC, KeyedLoadIC, and KeyedStoreIC. @@ -75,13 +74,10 @@ class IC { // Compute the current IC state based on the target stub, receiver and name. void UpdateState(Handle<Object> receiver, Handle<Object> name); - bool IsNameCompatibleWithMonomorphicPrototypeFailure(Handle<Object> name); - bool TryMarkMonomorphicPrototypeFailure(Handle<Object> name) { - if (IsNameCompatibleWithMonomorphicPrototypeFailure(name)) { - state_ = MONOMORPHIC_PROTOTYPE_FAILURE; - return true; - } - return false; + bool IsNameCompatibleWithPrototypeFailure(Handle<Object> name); + void MarkPrototypeFailure(Handle<Object> name) { + DCHECK(IsNameCompatibleWithPrototypeFailure(name)); + state_ = PROTOTYPE_FAILURE; } // If the stub contains weak maps then this function adds the stub to @@ -111,20 +107,15 @@ class IC { } #endif - // Determines which map must be used for keeping the code stub. - // These methods should not be called with undefined or null. - static inline InlineCacheHolderFlag GetCodeCacheForObject(Object* object); - // TODO(verwaest): This currently returns a HeapObject rather than JSObject* - // since loading the IC for loading the length from strings are stored on - // the string map directly, rather than on the JSObject-typed prototype. 
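// IC_UTIL_LIST above is a classic X-macro: each ICU(Name) row expands once
// per use site, so the miss handlers stay declared, defined, and enumerated
// from a single list. A self-contained sketch of the pattern with made-up
// entries:
#define DEMO_LIST(F) \
  F(LoadMiss)        \
  F(StoreMiss)       \
  F(CompareMiss)

// Expansion 1: forward declarations, one per row.
#define DECLARE(name) void name();
DEMO_LIST(DECLARE)
#undef DECLARE

// Expansion 2: an enum with one entry per utility.
#define ENUM_ENTRY(name) k##name,
enum DemoId { DEMO_LIST(ENUM_ENTRY) kCount };
#undef ENUM_ENTRY
// Adding a row to DEMO_LIST updates every expansion at once.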
- static inline HeapObject* GetCodeCacheHolder(Isolate* isolate, - Object* object, - InlineCacheHolderFlag holder); - - static inline InlineCacheHolderFlag GetCodeCacheFlag(HeapType* type); - static inline Handle<Map> GetCodeCacheHolder(InlineCacheHolderFlag flag, - HeapType* type, - Isolate* isolate); + template <class TypeClass> + static JSFunction* GetRootConstructor(TypeClass* type, + Context* native_context); + static inline Handle<Map> GetHandlerCacheHolder(HeapType* type, + bool receiver_is_holder, + Isolate* isolate, + CacheHolderFlag* flag); + static inline Handle<Map> GetICCacheHolder(HeapType* type, Isolate* isolate, + CacheHolderFlag* flag); static bool IsCleared(Code* code) { InlineCacheState state = code->ic_state(); @@ -170,16 +161,15 @@ class IC { bool is_target_set() { return target_set_; } -#ifdef DEBUG char TransitionMarkFromState(IC::State state); - void TraceIC(const char* type, Handle<Object> name); -#endif + void TraceIC(const char* type, Handle<Object> name, State old_state, + State new_state); MaybeHandle<Object> TypeError(const char* type, Handle<Object> object, Handle<Object> key); - MaybeHandle<Object> ReferenceError(const char* type, Handle<String> name); + MaybeHandle<Object> ReferenceError(const char* type, Handle<Name> name); // Access the target code for the given IC address. static inline Code* GetTargetAtAddress(Address address, @@ -187,63 +177,65 @@ class IC { static inline void SetTargetAtAddress(Address address, Code* target, ConstantPoolArray* constant_pool); + static void OnTypeFeedbackChanged(Isolate* isolate, Address address, + State old_state, State new_state, + bool target_remains_ic_stub); static void PostPatching(Address address, Code* target, Code* old_target); // Compute the handler either by compiling or by retrieving a cached version. - Handle<Code> ComputeHandler(LookupResult* lookup, - Handle<Object> object, - Handle<String> name, + Handle<Code> ComputeHandler(LookupIterator* lookup, Handle<Object> object, + Handle<Name> name, Handle<Object> value = Handle<Code>::null()); - virtual Handle<Code> CompileHandler(LookupResult* lookup, + virtual Handle<Code> CompileHandler(LookupIterator* lookup, Handle<Object> object, - Handle<String> name, - Handle<Object> value, - InlineCacheHolderFlag cache_holder) { + Handle<Name> name, Handle<Object> value, + CacheHolderFlag cache_holder) { + UNREACHABLE(); + return Handle<Code>::null(); + } + // Temporary copy of the above, but using a LookupResult. + // TODO(jkummerow): Migrate callers to LookupIterator and delete these. 
+ Handle<Code> ComputeStoreHandler(LookupResult* lookup, Handle<Object> object, + Handle<Name> name, + Handle<Object> value = Handle<Code>::null()); + virtual Handle<Code> CompileStoreHandler(LookupResult* lookup, + Handle<Object> object, + Handle<Name> name, + Handle<Object> value, + CacheHolderFlag cache_holder) { UNREACHABLE(); return Handle<Code>::null(); } - void UpdateMonomorphicIC(Handle<HeapType> type, - Handle<Code> handler, - Handle<String> name); - - bool UpdatePolymorphicIC(Handle<HeapType> type, - Handle<String> name, - Handle<Code> code); - - virtual void UpdateMegamorphicCache(HeapType* type, Name* name, Code* code); + void UpdateMonomorphicIC(Handle<Code> handler, Handle<Name> name); + bool UpdatePolymorphicIC(Handle<Name> name, Handle<Code> code); + void UpdateMegamorphicCache(HeapType* type, Name* name, Code* code); - void CopyICToMegamorphicCache(Handle<String> name); + void CopyICToMegamorphicCache(Handle<Name> name); bool IsTransitionOfMonomorphicTarget(Map* source_map, Map* target_map); - void PatchCache(Handle<HeapType> type, - Handle<String> name, - Handle<Code> code); - virtual Code::Kind kind() const { - UNREACHABLE(); - return Code::STUB; - } - virtual Handle<Code> slow_stub() const { - UNREACHABLE(); - return Handle<Code>::null(); + void PatchCache(Handle<Name> name, Handle<Code> code); + Code::Kind kind() const { return kind_; } + Code::Kind handler_kind() const { + if (kind_ == Code::KEYED_LOAD_IC) return Code::LOAD_IC; + DCHECK(kind_ == Code::LOAD_IC || kind_ == Code::STORE_IC || + kind_ == Code::KEYED_STORE_IC); + return kind_; } virtual Handle<Code> megamorphic_stub() { UNREACHABLE(); return Handle<Code>::null(); } - virtual Handle<Code> generic_stub() const { - UNREACHABLE(); - return Handle<Code>::null(); - } bool TryRemoveInvalidPrototypeDependentStub(Handle<Object> receiver, Handle<String> name); - void TryRemoveInvalidHandlers(Handle<Map> map, Handle<String> name); ExtraICState extra_ic_state() const { return extra_ic_state_; } void set_extra_ic_state(ExtraICState state) { extra_ic_state_ = state; } + Handle<HeapType> receiver_type() { return receiver_type_; } + void TargetMaps(MapHandleList* list) { FindTargetMaps(); for (int i = 0; i < target_maps_.length(); i++) { @@ -303,8 +295,11 @@ class IC { // The original code target that missed. 
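// UpdateMegamorphicCache / CopyICToMegamorphicCache above spill handlers
// into a cache shared by all megamorphic sites, keyed by receiver map and
// property name. A toy version of that table; real V8 uses a fixed-size
// probing table rather than std::unordered_map:
#include <string>
#include <unordered_map>

struct Map;
typedef void* Handler;                 // opaque compiled-handler stand-in

struct StubCacheSketch {
  std::unordered_map<const Map*,
                     std::unordered_map<std::string, Handler> > table;

  void Set(const Map* map, const std::string& name, Handler h) {
    table[map][name] = h;
  }
  Handler Get(const Map* map, const std::string& name) {
    auto it = table.find(map);
    if (it == table.end()) return nullptr;
    auto jt = it->second.find(name);
    return jt == it->second.end() ? nullptr : jt->second;
  }
};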
Handle<Code> target_; - State state_; bool target_set_; + State state_; + Code::Kind kind_; + Handle<HeapType> receiver_type_; + MaybeHandle<Code> maybe_handler_; ExtraICState extra_ic_state_; MapHandleList target_maps_; @@ -338,16 +333,10 @@ class CallIC: public IC { public: explicit State(ExtraICState extra_ic_state); - static State DefaultCallState(int argc, CallType call_type) { - return State(argc, call_type); + State(int argc, CallType call_type) + : argc_(argc), call_type_(call_type) { } - static State MegamorphicCallState(int argc, CallType call_type) { - return State(argc, call_type); - } - - InlineCacheState GetICState() const { return ::v8::internal::GENERIC; } - ExtraICState GetExtraICState() const; static void GenerateAheadOfTime( @@ -358,24 +347,7 @@ class CallIC: public IC { bool CallAsMethod() const { return call_type_ == METHOD; } - void Print(StringStream* stream) const; - - bool operator==(const State& other_state) const { - return (argc_ == other_state.argc_ && - call_type_ == other_state.call_type_); - } - - bool operator!=(const State& other_state) const { - return !(*this == other_state); - } - private: - State(int argc, - CallType call_type) - : argc_(argc), - call_type_(call_type) { - } - class ArgcBits: public BitField<int, 0, Code::kArgumentsBits> {}; class CallTypeBits: public BitField<CallType, Code::kArgumentsBits, 1> {}; @@ -387,11 +359,21 @@ class CallIC: public IC { : IC(EXTRA_CALL_FRAME, isolate) { } + void PatchMegamorphic(Handle<Object> function, Handle<FixedArray> vector, + Handle<Smi> slot); + void HandleMiss(Handle<Object> receiver, Handle<Object> function, Handle<FixedArray> vector, Handle<Smi> slot); + // Returns true if a custom handler was installed. + bool DoCustomHandler(Handle<Object> receiver, + Handle<Object> function, + Handle<FixedArray> vector, + Handle<Smi> slot, + const State& state); + // Code generator routines. static Handle<Code> initialize_stub(Isolate* isolate, int argc, @@ -399,30 +381,67 @@ class CallIC: public IC { static void Clear(Isolate* isolate, Address address, Code* target, ConstantPoolArray* constant_pool); + + private: + inline IC::State FeedbackToState(Handle<FixedArray> vector, + Handle<Smi> slot) const; }; +OStream& operator<<(OStream& os, const CallIC::State& s); + + class LoadIC: public IC { public: - // ExtraICState bits - class ContextualModeBits: public BitField<ContextualMode, 0, 1> {}; - STATIC_ASSERT(static_cast<int>(NOT_CONTEXTUAL) == 0); + enum ParameterIndices { + kReceiverIndex, + kNameIndex, + kParameterCount + }; + static const Register ReceiverRegister(); + static const Register NameRegister(); + + // With flag vector-ics, there is an additional argument. And for calls from + // crankshaft, yet another. 
+ static const Register SlotRegister(); + static const Register VectorRegister(); + + class State V8_FINAL BASE_EMBEDDED { + public: + explicit State(ExtraICState extra_ic_state) + : state_(extra_ic_state) {} + + explicit State(ContextualMode mode) + : state_(ContextualModeBits::encode(mode)) {} + + ExtraICState GetExtraICState() const { return state_; } + + ContextualMode contextual_mode() const { + return ContextualModeBits::decode(state_); + } + + private: + class ContextualModeBits: public BitField<ContextualMode, 0, 1> {}; + STATIC_ASSERT(static_cast<int>(NOT_CONTEXTUAL) == 0); + + const ExtraICState state_; + }; static ExtraICState ComputeExtraICState(ContextualMode contextual_mode) { - return ContextualModeBits::encode(contextual_mode); + return State(contextual_mode).GetExtraICState(); } static ContextualMode GetContextualMode(ExtraICState state) { - return ContextualModeBits::decode(state); + return State(state).contextual_mode(); } ContextualMode contextual_mode() const { - return ContextualModeBits::decode(extra_ic_state()); + return GetContextualMode(extra_ic_state()); } explicit LoadIC(FrameDepth depth, Isolate* isolate) : IC(depth, isolate) { - ASSERT(IsLoadStub()); + DCHECK(IsLoadStub()); } // Returns if this IC is for contextual (no explicit receiver) @@ -431,7 +450,7 @@ class LoadIC: public IC { if (receiver->IsGlobalObject()) { return contextual_mode() == CONTEXTUAL; } else { - ASSERT(contextual_mode() != CONTEXTUAL); + DCHECK(contextual_mode() != CONTEXTUAL); return false; } } @@ -450,50 +469,45 @@ class LoadIC: public IC { ExtraICState extra_state); MUST_USE_RESULT MaybeHandle<Object> Load(Handle<Object> object, - Handle<String> name); + Handle<Name> name); protected: - virtual Code::Kind kind() const { return Code::LOAD_IC; } - void set_target(Code* code) { // The contextual mode must be preserved across IC patching. - ASSERT(GetContextualMode(code->extra_ic_state()) == + DCHECK(GetContextualMode(code->extra_ic_state()) == GetContextualMode(target()->extra_ic_state())); IC::set_target(code); } - virtual Handle<Code> slow_stub() const { - return isolate()->builtins()->LoadIC_Slow(); + Handle<Code> slow_stub() const { + if (kind() == Code::LOAD_IC) { + return isolate()->builtins()->LoadIC_Slow(); + } else { + DCHECK_EQ(Code::KEYED_LOAD_IC, kind()); + return isolate()->builtins()->KeyedLoadIC_Slow(); + } } virtual Handle<Code> megamorphic_stub(); // Update the inline cache and the global stub cache based on the // lookup result. - void UpdateCaches(LookupResult* lookup, - Handle<Object> object, - Handle<String> name); + void UpdateCaches(LookupIterator* lookup, Handle<Object> object, + Handle<Name> name); - virtual Handle<Code> CompileHandler(LookupResult* lookup, + virtual Handle<Code> CompileHandler(LookupIterator* lookup, Handle<Object> object, - Handle<String> name, + Handle<Name> name, Handle<Object> unused, - InlineCacheHolderFlag cache_holder); + CacheHolderFlag cache_holder); private: - // Stub accessors. 
+ virtual Handle<Code> pre_monomorphic_stub() const; static Handle<Code> pre_monomorphic_stub(Isolate* isolate, - ExtraICState exstra_state); - - virtual Handle<Code> pre_monomorphic_stub() { - return pre_monomorphic_stub(isolate(), extra_ic_state()); - } + ExtraICState extra_state); - Handle<Code> SimpleFieldLoad(int offset, - bool inobject = true, - Representation representation = - Representation::Tagged()); + Handle<Code> SimpleFieldLoad(FieldIndex index); static void Clear(Isolate* isolate, Address address, @@ -508,7 +522,7 @@ class KeyedLoadIC: public LoadIC { public: explicit KeyedLoadIC(FrameDepth depth, Isolate* isolate) : LoadIC(depth, isolate) { - ASSERT(target()->is_keyed_load_stub()); + DCHECK(target()->is_keyed_load_stub()); } MUST_USE_RESULT MaybeHandle<Object> Load(Handle<Object> object, @@ -533,31 +547,17 @@ class KeyedLoadIC: public LoadIC { static const int kSlowCaseBitFieldMask = (1 << Map::kIsAccessCheckNeeded) | (1 << Map::kHasIndexedInterceptor); - protected: - virtual Code::Kind kind() const { return Code::KEYED_LOAD_IC; } + static Handle<Code> generic_stub(Isolate* isolate); + static Handle<Code> pre_monomorphic_stub(Isolate* isolate); + protected: Handle<Code> LoadElementStub(Handle<JSObject> receiver); - - virtual Handle<Code> megamorphic_stub() { - return isolate()->builtins()->KeyedLoadIC_Generic(); - } - virtual Handle<Code> generic_stub() const { - return isolate()->builtins()->KeyedLoadIC_Generic(); - } - virtual Handle<Code> slow_stub() const { - return isolate()->builtins()->KeyedLoadIC_Slow(); + virtual Handle<Code> pre_monomorphic_stub() const { + return pre_monomorphic_stub(isolate()); } - virtual void UpdateMegamorphicCache(HeapType* type, Name* name, Code* code) {} - private: - // Stub accessors. - static Handle<Code> pre_monomorphic_stub(Isolate* isolate) { - return isolate->builtins()->KeyedLoadIC_PreMonomorphic(); - } - virtual Handle<Code> pre_monomorphic_stub() { - return pre_monomorphic_stub(isolate()); - } + Handle<Code> generic_stub() const { return generic_stub(isolate()); } Handle<Code> indexed_interceptor_stub() { return isolate()->builtins()->KeyedLoadIC_IndexedInterceptor(); } @@ -592,9 +592,19 @@ class StoreIC: public IC { static const ExtraICState kStrictModeState = 1 << StrictModeState::kShift; + enum ParameterIndices { + kReceiverIndex, + kNameIndex, + kValueIndex, + kParameterCount + }; + static const Register ReceiverRegister(); + static const Register NameRegister(); + static const Register ValueRegister(); + StoreIC(FrameDepth depth, Isolate* isolate) : IC(depth, isolate) { - ASSERT(IsStoreStub()); + DCHECK(IsStoreStub()); } StrictMode strict_mode() const { @@ -618,13 +628,12 @@ class StoreIC: public IC { MUST_USE_RESULT MaybeHandle<Object> Store( Handle<Object> object, - Handle<String> name, + Handle<Name> name, Handle<Object> value, JSReceiver::StoreFromKeyed store_mode = JSReceiver::CERTAINLY_NOT_STORE_FROM_KEYED); protected: - virtual Code::Kind kind() const { return Code::STORE_IC; } virtual Handle<Code> megamorphic_stub(); // Stub accessors. @@ -634,7 +643,7 @@ class StoreIC: public IC { return isolate()->builtins()->StoreIC_Slow(); } - virtual Handle<Code> pre_monomorphic_stub() { + virtual Handle<Code> pre_monomorphic_stub() const { return pre_monomorphic_stub(isolate(), strict_mode()); } @@ -645,18 +654,18 @@ class StoreIC: public IC { // lookup result. 
void UpdateCaches(LookupResult* lookup, Handle<JSObject> receiver, - Handle<String> name, + Handle<Name> name, Handle<Object> value); - virtual Handle<Code> CompileHandler(LookupResult* lookup, - Handle<Object> object, - Handle<String> name, - Handle<Object> value, - InlineCacheHolderFlag cache_holder); + virtual Handle<Code> CompileStoreHandler(LookupResult* lookup, + Handle<Object> object, + Handle<Name> name, + Handle<Object> value, + CacheHolderFlag cache_holder); private: void set_target(Code* code) { // Strict mode must be preserved across IC patching. - ASSERT(GetStrictMode(code->extra_ic_state()) == + DCHECK(GetStrictMode(code->extra_ic_state()) == GetStrictMode(target()->extra_ic_state())); IC::set_target(code); } @@ -700,9 +709,14 @@ class KeyedStoreIC: public StoreIC { return ExtraICStateKeyedAccessStoreMode::decode(extra_state); } + // The map register isn't part of the normal call specification, but + // ElementsTransitionAndStoreStub, used in polymorphic keyed store + // stub implementations, requires it to be initialized. + static const Register MapRegister(); + KeyedStoreIC(FrameDepth depth, Isolate* isolate) : StoreIC(depth, isolate) { - ASSERT(target()->is_keyed_store_stub()); + DCHECK(target()->is_keyed_store_stub()); } MUST_USE_RESULT MaybeHandle<Object> Store(Handle<Object> object, @@ -722,11 +736,7 @@ class KeyedStoreIC: public StoreIC { static void GenerateSloppyArguments(MacroAssembler* masm); protected: - virtual Code::Kind kind() const { return Code::KEYED_STORE_IC; } - - virtual void UpdateMegamorphicCache(HeapType* type, Name* name, Code* code) {} - - virtual Handle<Code> pre_monomorphic_stub() { + virtual Handle<Code> pre_monomorphic_stub() const { return pre_monomorphic_stub(isolate(), strict_mode()); } static Handle<Code> pre_monomorphic_stub(Isolate* isolate, @@ -754,7 +764,7 @@ class KeyedStoreIC: public StoreIC { private: void set_target(Code* code) { // Strict mode must be preserved across IC patching. - ASSERT(GetStrictMode(code->extra_ic_state()) == strict_mode()); + DCHECK(GetStrictMode(code->extra_ic_state()) == strict_mode()); IC::set_target(code); } @@ -800,8 +810,8 @@ class BinaryOpIC: public IC { State(Isolate* isolate, Token::Value op, OverwriteMode mode) : op_(op), mode_(mode), left_kind_(NONE), right_kind_(NONE), result_kind_(NONE), isolate_(isolate) { - ASSERT_LE(FIRST_TOKEN, op); - ASSERT_LE(op, LAST_TOKEN); + DCHECK_LE(FIRST_TOKEN, op); + DCHECK_LE(op, LAST_TOKEN); } InlineCacheState GetICState() const {
bool CouldCreateAllocationMementos() const { if (left_kind_ == STRING || right_kind_ == STRING) { - ASSERT_EQ(Token::ADD, op_); + DCHECK_EQ(Token::ADD, op_); return true; } return false; @@ -870,8 +880,6 @@ class BinaryOpIC: public IC { } Type* GetResultType(Zone* zone) const; - void Print(StringStream* stream) const; - void Update(Handle<Object> left, Handle<Object> right, Handle<Object> result); @@ -879,6 +887,8 @@ class BinaryOpIC: public IC { Isolate* isolate() const { return isolate_; } private: + friend OStream& operator<<(OStream& os, const BinaryOpIC::State& s); + enum Kind { NONE, SMI, INT32, NUMBER, STRING, GENERIC }; Kind UpdateKind(Handle<Object> object, Kind kind) const; @@ -893,14 +903,13 @@ class BinaryOpIC: public IC { STATIC_ASSERT(LAST_TOKEN - FIRST_TOKEN < (1 << 4)); class OpField: public BitField<int, 0, 4> {}; class OverwriteModeField: public BitField<OverwriteMode, 4, 2> {}; - class SSE2Field: public BitField<bool, 6, 1> {}; - class ResultKindField: public BitField<Kind, 7, 3> {}; - class LeftKindField: public BitField<Kind, 10, 3> {}; + class ResultKindField: public BitField<Kind, 6, 3> {}; + class LeftKindField: public BitField<Kind, 9, 3> {}; // When fixed right arg is set, we don't need to store the right kind. // Thus the two fields can overlap. - class HasFixedRightArgField: public BitField<bool, 13, 1> {}; - class FixedRightArgValueField: public BitField<int, 14, 4> {}; - class RightKindField: public BitField<Kind, 14, 3> {}; + class HasFixedRightArgField: public BitField<bool, 12, 1> {}; + class FixedRightArgValueField: public BitField<int, 13, 4> {}; + class RightKindField: public BitField<Kind, 13, 3> {}; Token::Value op_; OverwriteMode mode_; @@ -921,6 +930,9 @@ class BinaryOpIC: public IC { }; +OStream& operator<<(OStream& os, const BinaryOpIC::State& s); + + class CompareIC: public IC { public: // The type/state lattice is defined by the following inequations: @@ -947,12 +959,9 @@ class CompareIC: public IC { State state, Handle<Map> map = Handle<Map>()); - static void StubInfoToType(int stub_minor_key, - Type** left_type, - Type** right_type, - Type** overall_type, - Handle<Map> map, - Zone* zone); + static void StubInfoToType(uint32_t stub_key, Type** left_type, + Type** right_type, Type** overall_type, + Handle<Map> map, Zone* zone); CompareIC(Isolate* isolate, Token::Value op) : IC(EXTRA_CALL_FRAME, isolate), op_(op) { } diff --git a/deps/v8/src/icu_util.cc b/deps/v8/src/icu_util.cc index b036ef4eef3..b323942d02a 100644 --- a/deps/v8/src/icu_util.cc +++ b/deps/v8/src/icu_util.cc @@ -2,7 +2,7 @@ // Use of this source code is governed by a BSD-style license that can be // found in the LICENSE file. -#include "icu_util.h" +#include "src/icu_util.h" #if defined(_WIN32) #include <windows.h> diff --git a/deps/v8/src/interface.cc b/deps/v8/src/interface.cc index bd50c61ea73..62169f597b1 100644 --- a/deps/v8/src/interface.cc +++ b/deps/v8/src/interface.cc @@ -2,32 +2,23 @@ // Use of this source code is governed by a BSD-style license that can be // found in the LICENSE file. 
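// The ic.h hunks above drop the ad-hoc State::Print(StringStream*) methods in
// favour of free stream-insertion operators (OStream& operator<<), declared
// next to each class and befriended for access to private members. A short
// sketch of that shape, with std::ostream standing in for v8's OStream:

#include <iostream>

class CallState {
 public:
  CallState(int argc, bool method) : argc_(argc), method_(method) {}

 private:
  // The operator is a friend so it can format private members, matching the
  // friend declaration added to BinaryOpIC::State above.
  friend std::ostream& operator<<(std::ostream& os, const CallState& s);
  int argc_;
  bool method_;
};

std::ostream& operator<<(std::ostream& os, const CallState& s) {
  return os << "CallState(argc=" << s.argc_
            << (s.method_ ? ", METHOD)" : ", FUNCTION)");
}

int main() {
  std::cout << CallState(2, true) << std::endl;  // CallState(argc=2, METHOD)
  return 0;
}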
-#include "v8.h" +#include "src/v8.h" -#include "interface.h" +#include "src/interface.h" namespace v8 { namespace internal { -static bool Match(void* key1, void* key2) { - String* name1 = *static_cast<String**>(key1); - String* name2 = *static_cast<String**>(key2); - ASSERT(name1->IsInternalizedString()); - ASSERT(name2->IsInternalizedString()); - return name1 == name2; -} - - Interface* Interface::Lookup(Handle<String> name, Zone* zone) { - ASSERT(IsModule()); + DCHECK(IsModule()); ZoneHashMap* map = Chase()->exports_; if (map == NULL) return NULL; ZoneAllocationPolicy allocator(zone); ZoneHashMap::Entry* p = map->Lookup(name.location(), name->Hash(), false, allocator); if (p == NULL) return NULL; - ASSERT(*static_cast<String**>(p->key) == *name); - ASSERT(p->value != NULL); + DCHECK(*static_cast<String**>(p->key) == *name); + DCHECK(p->value != NULL); return static_cast<Interface*>(p->value); } @@ -47,8 +38,8 @@ int Nesting::current_ = 0; #endif -void Interface::DoAdd( - void* name, uint32_t hash, Interface* interface, Zone* zone, bool* ok) { +void Interface::DoAdd(const void* name, uint32_t hash, Interface* interface, + Zone* zone, bool* ok) { MakeModule(ok); if (!*ok) return; @@ -57,8 +48,9 @@ void Interface::DoAdd( PrintF("%*s# Adding...\n", Nesting::current(), ""); PrintF("%*sthis = ", Nesting::current(), ""); this->Print(Nesting::current()); - PrintF("%*s%s : ", Nesting::current(), "", - (*static_cast<String**>(name))->ToAsciiArray()); + const AstRawString* symbol = static_cast<const AstRawString*>(name); + PrintF("%*s%.*s : ", Nesting::current(), "", symbol->length(), + symbol->raw_data()); interface->Print(Nesting::current()); } #endif @@ -68,10 +60,12 @@ void Interface::DoAdd( if (*map == NULL) { *map = new(zone->New(sizeof(ZoneHashMap))) - ZoneHashMap(Match, ZoneHashMap::kDefaultHashMapCapacity, allocator); + ZoneHashMap(ZoneHashMap::PointersMatch, + ZoneHashMap::kDefaultHashMapCapacity, allocator); } - ZoneHashMap::Entry* p = (*map)->Lookup(name, hash, !IsFrozen(), allocator); + ZoneHashMap::Entry* p = + (*map)->Lookup(const_cast<void*>(name), hash, !IsFrozen(), allocator); if (p == NULL) { // This didn't have name but was frozen already, that's an error. *ok = false; @@ -97,8 +91,8 @@ void Interface::DoAdd( void Interface::Unify(Interface* that, Zone* zone, bool* ok) { if (this->forward_) return this->Chase()->Unify(that, zone, ok); if (that->forward_) return this->Unify(that->Chase(), zone, ok); - ASSERT(this->forward_ == NULL); - ASSERT(that->forward_ == NULL); + DCHECK(this->forward_ == NULL); + DCHECK(that->forward_ == NULL); *ok = true; if (this == that) return; @@ -144,13 +138,13 @@ void Interface::Unify(Interface* that, Zone* zone, bool* ok) { void Interface::DoUnify(Interface* that, bool* ok, Zone* zone) { - ASSERT(this->forward_ == NULL); - ASSERT(that->forward_ == NULL); - ASSERT(!this->IsValue()); - ASSERT(!that->IsValue()); - ASSERT(this->index_ == -1); - ASSERT(that->index_ == -1); - ASSERT(*ok); + DCHECK(this->forward_ == NULL); + DCHECK(that->forward_ == NULL); + DCHECK(!this->IsValue()); + DCHECK(!that->IsValue()); + DCHECK(this->index_ == -1); + DCHECK(that->index_ == -1); + DCHECK(*ok); #ifdef DEBUG Nesting nested; diff --git a/deps/v8/src/interface.h b/deps/v8/src/interface.h index 31a9fa0c4d8..598d0381427 100644 --- a/deps/v8/src/interface.h +++ b/deps/v8/src/interface.h @@ -5,7 +5,8 @@ #ifndef V8_INTERFACE_H_ #define V8_INTERFACE_H_ -#include "zone-inl.h" // For operator new. 
+#include "src/ast-value-factory.h" +#include "src/zone-inl.h" // For operator new. namespace v8 { namespace internal { @@ -59,8 +60,9 @@ class Interface : public ZoneObject { // Add a name to the list of exports. If it already exists, unify with // interface, otherwise insert unless this is closed. - void Add(Handle<String> name, Interface* interface, Zone* zone, bool* ok) { - DoAdd(name.location(), name->Hash(), interface, zone, ok); + void Add(const AstRawString* name, Interface* interface, Zone* zone, + bool* ok) { + DoAdd(name, name->hash(), interface, zone, ok); } // Unify with another interface. If successful, both interface objects will @@ -93,7 +95,7 @@ class Interface : public ZoneObject { // Assign an index. void Allocate(int index) { - ASSERT(IsModule() && IsFrozen() && Chase()->index_ == -1); + DCHECK(IsModule() && IsFrozen() && Chase()->index_ == -1); Chase()->index_ = index; } @@ -122,14 +124,14 @@ class Interface : public ZoneObject { } int Length() { - ASSERT(IsModule() && IsFrozen()); + DCHECK(IsModule() && IsFrozen()); ZoneHashMap* exports = Chase()->exports_; return exports ? exports->occupancy() : 0; } // The context slot in the hosting global context pointing to this module. int Index() { - ASSERT(IsModule() && IsFrozen()); + DCHECK(IsModule() && IsFrozen()); return Chase()->index_; } @@ -146,12 +148,12 @@ class Interface : public ZoneObject { class Iterator { public: bool done() const { return entry_ == NULL; } - Handle<String> name() const { - ASSERT(!done()); - return Handle<String>(*static_cast<String**>(entry_->key)); + const AstRawString* name() const { + DCHECK(!done()); + return static_cast<const AstRawString*>(entry_->key); } Interface* interface() const { - ASSERT(!done()); + DCHECK(!done()); return static_cast<Interface*>(entry_->value); } void Advance() { entry_ = exports_->Next(entry_); } @@ -207,7 +209,7 @@ class Interface : public ZoneObject { return result; } - void DoAdd(void* name, uint32_t hash, Interface* interface, Zone* zone, + void DoAdd(const void* name, uint32_t hash, Interface* interface, Zone* zone, bool* ok); void DoUnify(Interface* that, bool* ok, Zone* zone); }; diff --git a/deps/v8/src/interpreter-irregexp.cc b/deps/v8/src/interpreter-irregexp.cc index 4c7c04c13cf..7f51d5e410a 100644 --- a/deps/v8/src/interpreter-irregexp.cc +++ b/deps/v8/src/interpreter-irregexp.cc @@ -5,14 +5,15 @@ // A simple interpreter for the Irregexp byte code. -#include "v8.h" -#include "unicode.h" -#include "utils.h" -#include "ast.h" -#include "bytecodes-irregexp.h" -#include "interpreter-irregexp.h" -#include "jsregexp.h" -#include "regexp-macro-assembler.h" +#include "src/v8.h" + +#include "src/ast.h" +#include "src/bytecodes-irregexp.h" +#include "src/interpreter-irregexp.h" +#include "src/jsregexp.h" +#include "src/regexp-macro-assembler.h" +#include "src/unicode.h" +#include "src/utils.h" namespace v8 { namespace internal { @@ -118,13 +119,13 @@ static void TraceInterpreter(const byte* code_base, static int32_t Load32Aligned(const byte* pc) { - ASSERT((reinterpret_cast<intptr_t>(pc) & 3) == 0); + DCHECK((reinterpret_cast<intptr_t>(pc) & 3) == 0); return *reinterpret_cast<const int32_t *>(pc); } static int32_t Load16Aligned(const byte* pc) { - ASSERT((reinterpret_cast<intptr_t>(pc) & 1) == 0); + DCHECK((reinterpret_cast<intptr_t>(pc) & 1) == 0); return *reinterpret_cast<const uint16_t *>(pc); } @@ -135,9 +136,7 @@ static int32_t Load16Aligned(const byte* pc) { // matching terminates. 
class BacktrackStack { public: - explicit BacktrackStack() { - data_ = NewArray<int>(kBacktrackStackSize); - } + BacktrackStack() { data_ = NewArray<int>(kBacktrackStackSize); } ~BacktrackStack() { DeleteArray(data_); @@ -307,7 +306,7 @@ static RegExpImpl::IrregexpResult RawMatch(Isolate* isolate, break; } BYTECODE(LOAD_4_CURRENT_CHARS) { - ASSERT(sizeof(Char) == 1); + DCHECK(sizeof(Char) == 1); int pos = current + (insn >> BYTECODE_SHIFT); if (pos + 4 > subject.length()) { pc = code_base + Load32Aligned(pc + 4); @@ -324,7 +323,7 @@ static RegExpImpl::IrregexpResult RawMatch(Isolate* isolate, break; } BYTECODE(LOAD_4_CURRENT_CHARS_UNCHECKED) { - ASSERT(sizeof(Char) == 1); + DCHECK(sizeof(Char) == 1); int pos = current + (insn >> BYTECODE_SHIFT); Char next1 = subject[pos + 1]; Char next2 = subject[pos + 2]; @@ -579,7 +578,7 @@ RegExpImpl::IrregexpResult IrregexpInterpreter::Match( Handle<String> subject, int* registers, int start_position) { - ASSERT(subject->IsFlat()); + DCHECK(subject->IsFlat()); DisallowHeapAllocation no_gc; const byte* code_base = code_array->GetDataStartAddress(); @@ -595,7 +594,7 @@ RegExpImpl::IrregexpResult IrregexpInterpreter::Match( start_position, previous_char); } else { - ASSERT(subject_content.IsTwoByte()); + DCHECK(subject_content.IsTwoByte()); Vector<const uc16> subject_vector = subject_content.ToUC16Vector(); if (start_position != 0) previous_char = subject_vector[start_position - 1]; return RawMatch(isolate, diff --git a/deps/v8/src/isolate-inl.h b/deps/v8/src/isolate-inl.h index eebdcee9bdc..b44c4d6d723 100644 --- a/deps/v8/src/isolate-inl.h +++ b/deps/v8/src/isolate-inl.h @@ -5,9 +5,9 @@ #ifndef V8_ISOLATE_INL_H_ #define V8_ISOLATE_INL_H_ -#include "debug.h" -#include "isolate.h" -#include "utils/random-number-generator.h" +#include "src/base/utils/random-number-generator.h" +#include "src/debug.h" +#include "src/isolate.h" namespace v8 { namespace internal { @@ -25,24 +25,19 @@ SaveContext::SaveContext(Isolate* isolate) } -bool Isolate::IsCodePreAgingActive() { - return FLAG_optimize_for_size && FLAG_age_code && !IsDebuggerActive(); -} - - -bool Isolate::IsDebuggerActive() { - return debugger()->IsDebuggerActive(); -} - - bool Isolate::DebuggerHasBreakPoints() { return debug()->has_break_points(); } -RandomNumberGenerator* Isolate::random_number_generator() { +base::RandomNumberGenerator* Isolate::random_number_generator() { if (random_number_generator_ == NULL) { - random_number_generator_ = new RandomNumberGenerator; + if (FLAG_random_seed != 0) { + random_number_generator_ = + new base::RandomNumberGenerator(FLAG_random_seed); + } else { + random_number_generator_ = new base::RandomNumberGenerator(); + } } return random_number_generator_; } diff --git a/deps/v8/src/isolate.cc b/deps/v8/src/isolate.cc index d93639b5d08..215296d735c 100644 --- a/deps/v8/src/isolate.cc +++ b/deps/v8/src/isolate.cc @@ -4,52 +4,54 @@ #include <stdlib.h> -#include "v8.h" - -#include "ast.h" -#include "bootstrapper.h" -#include "codegen.h" -#include "compilation-cache.h" -#include "cpu-profiler.h" -#include "debug.h" -#include "deoptimizer.h" -#include "heap-profiler.h" -#include "hydrogen.h" -#include "isolate-inl.h" -#include "lithium-allocator.h" -#include "log.h" -#include "messages.h" -#include "platform.h" -#include "regexp-stack.h" -#include "runtime-profiler.h" -#include "sampler.h" -#include "scopeinfo.h" -#include "serialize.h" -#include "simulator.h" -#include "spaces.h" -#include "stub-cache.h" -#include "sweeper-thread.h" -#include 
"utils/random-number-generator.h" -#include "version.h" -#include "vm-state-inl.h" +#include "src/v8.h" + +#include "src/ast.h" +#include "src/base/platform/platform.h" +#include "src/base/utils/random-number-generator.h" +#include "src/bootstrapper.h" +#include "src/codegen.h" +#include "src/compilation-cache.h" +#include "src/cpu-profiler.h" +#include "src/debug.h" +#include "src/deoptimizer.h" +#include "src/heap/spaces.h" +#include "src/heap/sweeper-thread.h" +#include "src/heap-profiler.h" +#include "src/hydrogen.h" +#include "src/isolate-inl.h" +#include "src/lithium-allocator.h" +#include "src/log.h" +#include "src/messages.h" +#include "src/prototype.h" +#include "src/regexp-stack.h" +#include "src/runtime-profiler.h" +#include "src/sampler.h" +#include "src/scopeinfo.h" +#include "src/serialize.h" +#include "src/simulator.h" +#include "src/stub-cache.h" +#include "src/version.h" +#include "src/vm-state-inl.h" namespace v8 { namespace internal { -Atomic32 ThreadId::highest_thread_id_ = 0; +base::Atomic32 ThreadId::highest_thread_id_ = 0; int ThreadId::AllocateThreadId() { - int new_id = NoBarrier_AtomicIncrement(&highest_thread_id_, 1); + int new_id = base::NoBarrier_AtomicIncrement(&highest_thread_id_, 1); return new_id; } int ThreadId::GetCurrentThreadId() { - int thread_id = Thread::GetThreadLocalInt(Isolate::thread_id_key_); + Isolate::EnsureInitialized(); + int thread_id = base::Thread::GetThreadLocalInt(Isolate::thread_id_key_); if (thread_id == 0) { thread_id = AllocateThreadId(); - Thread::SetThreadLocalInt(Isolate::thread_id_key_, thread_id); + base::Thread::SetThreadLocalInt(Isolate::thread_id_key_, thread_id); } return thread_id; } @@ -69,7 +71,7 @@ void ThreadLocalTop::InitializeInternal() { js_entry_sp_ = NULL; external_callback_scope_ = NULL; current_vm_state_ = EXTERNAL; - try_catch_handler_address_ = NULL; + try_catch_handler_ = NULL; context_ = NULL; thread_id_ = ThreadId::Invalid(); external_caught_exception_ = false; @@ -98,42 +100,29 @@ void ThreadLocalTop::Initialize() { } -v8::TryCatch* ThreadLocalTop::TryCatchHandler() { - return TRY_CATCH_FROM_ADDRESS(try_catch_handler_address()); -} - - -Isolate* Isolate::default_isolate_ = NULL; -Thread::LocalStorageKey Isolate::isolate_key_; -Thread::LocalStorageKey Isolate::thread_id_key_; -Thread::LocalStorageKey Isolate::per_isolate_thread_data_key_; +base::Thread::LocalStorageKey Isolate::isolate_key_; +base::Thread::LocalStorageKey Isolate::thread_id_key_; +base::Thread::LocalStorageKey Isolate::per_isolate_thread_data_key_; #ifdef DEBUG -Thread::LocalStorageKey PerThreadAssertScopeBase::thread_local_key; +base::Thread::LocalStorageKey PerThreadAssertScopeBase::thread_local_key; #endif // DEBUG -Mutex Isolate::process_wide_mutex_; -// TODO(dcarney): Remove with default isolate. 
-enum DefaultIsolateStatus { - kDefaultIsolateUninitialized, - kDefaultIsolateInitialized, - kDefaultIsolateCrashIfInitialized -}; -static DefaultIsolateStatus default_isolate_status_ - = kDefaultIsolateUninitialized; +base::LazyMutex Isolate::process_wide_mutex_ = LAZY_MUTEX_INITIALIZER; Isolate::ThreadDataTable* Isolate::thread_data_table_ = NULL; -Atomic32 Isolate::isolate_counter_ = 0; +base::Atomic32 Isolate::isolate_counter_ = 0; Isolate::PerIsolateThreadData* Isolate::FindOrAllocatePerThreadDataForThisThread() { + EnsureInitialized(); ThreadId thread_id = ThreadId::Current(); PerIsolateThreadData* per_thread = NULL; { - LockGuard<Mutex> lock_guard(&process_wide_mutex_); + base::LockGuard<base::Mutex> lock_guard(process_wide_mutex_.Pointer()); per_thread = thread_data_table_->Lookup(this, thread_id); if (per_thread == NULL) { per_thread = new PerIsolateThreadData(this, thread_id); thread_data_table_->Insert(per_thread); } - ASSERT(thread_data_table_->Lookup(this, thread_id) == per_thread); + DCHECK(thread_data_table_->Lookup(this, thread_id) == per_thread); } return per_thread; } @@ -147,48 +136,30 @@ Isolate::PerIsolateThreadData* Isolate::FindPerThreadDataForThisThread() { Isolate::PerIsolateThreadData* Isolate::FindPerThreadDataForThread( ThreadId thread_id) { + EnsureInitialized(); PerIsolateThreadData* per_thread = NULL; { - LockGuard<Mutex> lock_guard(&process_wide_mutex_); + base::LockGuard<base::Mutex> lock_guard(process_wide_mutex_.Pointer()); per_thread = thread_data_table_->Lookup(this, thread_id); } return per_thread; } -void Isolate::SetCrashIfDefaultIsolateInitialized() { - LockGuard<Mutex> lock_guard(&process_wide_mutex_); - CHECK(default_isolate_status_ != kDefaultIsolateInitialized); - default_isolate_status_ = kDefaultIsolateCrashIfInitialized; -} - - -void Isolate::EnsureDefaultIsolate() { - LockGuard<Mutex> lock_guard(&process_wide_mutex_); - CHECK(default_isolate_status_ != kDefaultIsolateCrashIfInitialized); - if (default_isolate_ == NULL) { - isolate_key_ = Thread::CreateThreadLocalKey(); - thread_id_key_ = Thread::CreateThreadLocalKey(); - per_isolate_thread_data_key_ = Thread::CreateThreadLocalKey(); +void Isolate::EnsureInitialized() { + base::LockGuard<base::Mutex> lock_guard(process_wide_mutex_.Pointer()); + if (thread_data_table_ == NULL) { + isolate_key_ = base::Thread::CreateThreadLocalKey(); + thread_id_key_ = base::Thread::CreateThreadLocalKey(); + per_isolate_thread_data_key_ = base::Thread::CreateThreadLocalKey(); #ifdef DEBUG - PerThreadAssertScopeBase::thread_local_key = Thread::CreateThreadLocalKey(); + PerThreadAssertScopeBase::thread_local_key = + base::Thread::CreateThreadLocalKey(); #endif // DEBUG thread_data_table_ = new Isolate::ThreadDataTable(); - default_isolate_ = new Isolate(); - } - // Can't use SetIsolateThreadLocals(default_isolate_, NULL) here - // because a non-null thread data may be already set. 
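// EnsureInitialized above replaces the old static initializer for the default
// isolate: process-wide state is now built on first use, under a lazily
// created mutex. A compact sketch of the same idiom, with std::mutex standing
// in for base::LazyMutex (std::call_once is an alternative shape for the same
// guarantee):

#include <mutex>

struct ThreadDataTable {};  // stand-in for Isolate::ThreadDataTable

static std::mutex process_wide_mutex;
static ThreadDataTable* thread_data_table = nullptr;

void EnsureInitialized() {
  std::lock_guard<std::mutex> lock(process_wide_mutex);
  if (thread_data_table == nullptr) {
    // The first caller creates the table (v8 also creates its TLS keys
    // here); every later caller sees it already set and returns at once.
    thread_data_table = new ThreadDataTable();
  }
}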
- if (Thread::GetThreadLocal(isolate_key_) == NULL) { - Thread::SetThreadLocal(isolate_key_, default_isolate_); } } -struct StaticInitializer { - StaticInitializer() { - Isolate::EnsureDefaultIsolate(); - } -} static_initializer; - Address Isolate::get_address_from_id(Isolate::AddressId id) { return isolate_addresses_[id]; @@ -216,9 +187,9 @@ void Isolate::Iterate(ObjectVisitor* v, ThreadLocalTop* thread) { v->VisitPointer(BitCast<Object**>(&(thread->context_))); v->VisitPointer(&thread->scheduled_exception_); - for (v8::TryCatch* block = thread->TryCatchHandler(); + for (v8::TryCatch* block = thread->try_catch_handler(); block != NULL; - block = TRY_CATCH_FROM_ADDRESS(block->next_)) { + block = block->next_) { v->VisitPointer(BitCast<Object**>(&(block->exception_))); v->VisitPointer(BitCast<Object**>(&(block->message_obj_))); v->VisitPointer(BitCast<Object**>(&(block->message_script_))); @@ -273,23 +244,14 @@ bool Isolate::IsDeferredHandle(Object** handle) { void Isolate::RegisterTryCatchHandler(v8::TryCatch* that) { - // The ARM simulator has a separate JS stack. We therefore register - // the C++ try catch handler with the simulator and get back an - // address that can be used for comparisons with addresses into the - // JS stack. When running without the simulator, the address - // returned will be the address of the C++ try catch handler itself. - Address address = reinterpret_cast<Address>( - SimulatorStack::RegisterCTryCatch(reinterpret_cast<uintptr_t>(that))); - thread_local_top()->set_try_catch_handler_address(address); + thread_local_top()->set_try_catch_handler(that); } void Isolate::UnregisterTryCatchHandler(v8::TryCatch* that) { - ASSERT(thread_local_top()->TryCatchHandler() == that); - thread_local_top()->set_try_catch_handler_address( - reinterpret_cast<Address>(that->next_)); + DCHECK(thread_local_top()->try_catch_handler() == that); + thread_local_top()->set_try_catch_handler(that->next_); thread_local_top()->catcher_ = NULL; - SimulatorStack::UnregisterCTryCatch(); } @@ -307,14 +269,14 @@ Handle<String> Isolate::StackTraceString() { return stack_trace; } else if (stack_trace_nesting_level_ == 1) { stack_trace_nesting_level_++; - OS::PrintError( + base::OS::PrintError( "\n\nAttempt to print stack while printing stack (double fault)\n"); - OS::PrintError( + base::OS::PrintError( "If you are lucky you may find a partial stack dump on stdout.\n\n"); incomplete_message_->OutputToStdOut(); return factory()->empty_string(); } else { - OS::Abort(); + base::OS::Abort(); // Unreachable return factory()->empty_string(); } @@ -332,11 +294,10 @@ void Isolate::PushStackTraceAndDie(unsigned int magic, String::WriteToFlat(*trace, buffer, 0, length); buffer[length] = '\0'; // TODO(dcarney): convert buffer to utf8? - OS::PrintError("Stacktrace (%x-%x) %p %p: %s\n", - magic, magic2, - static_cast<void*>(object), static_cast<void*>(map), - reinterpret_cast<char*>(buffer)); - OS::Abort(); + base::OS::PrintError("Stacktrace (%x-%x) %p %p: %s\n", magic, magic2, + static_cast<void*>(object), static_cast<void*>(map), + reinterpret_cast<char*>(buffer)); + base::OS::Abort(); } @@ -346,13 +307,10 @@ void Isolate::PushStackTraceAndDie(unsigned int magic, // call to this function is encountered it is skipped. The seen_caller // in/out parameter is used to remember if the caller has been seen // yet. -static bool IsVisibleInStackTrace(StackFrame* raw_frame, +static bool IsVisibleInStackTrace(JSFunction* fun, Object* caller, + Object* receiver, bool* seen_caller) { - // Only display JS frames. 
- if (!raw_frame->is_java_script()) return false; - JavaScriptFrame* frame = JavaScriptFrame::cast(raw_frame); - JSFunction* fun = frame->function(); if ((fun == caller) && !(*seen_caller)) { *seen_caller = true; return false; @@ -365,8 +323,10 @@ static bool IsVisibleInStackTrace(StackFrame* raw_frame, // The --builtins-in-stack-traces command line flag allows including // internal call sites in the stack trace for debugging purposes. if (!FLAG_builtins_in_stack_traces) { - if (frame->receiver()->IsJSBuiltinsObject() || - (fun->IsBuiltin() && !fun->shared()->native())) { + if (receiver->IsJSBuiltinsObject()) return false; + if (fun->IsBuiltin()) { + return fun->shared()->native(); + } else if (fun->IsFromNativeScript() || fun->IsFromExtensionScript()) { return false; } } @@ -374,10 +334,23 @@ static bool IsVisibleInStackTrace(StackFrame* raw_frame, } -Handle<JSArray> Isolate::CaptureSimpleStackTrace(Handle<JSObject> error_object, - Handle<Object> caller, - int limit) { +Handle<Object> Isolate::CaptureSimpleStackTrace(Handle<JSObject> error_object, + Handle<Object> caller) { + // Get stack trace limit. + Handle<Object> error = Object::GetProperty( + this, js_builtins_object(), "$Error").ToHandleChecked(); + if (!error->IsJSObject()) return factory()->undefined_value(); + + Handle<String> stackTraceLimit = + factory()->InternalizeUtf8String("stackTraceLimit"); + DCHECK(!stackTraceLimit.is_null()); + Handle<Object> stack_trace_limit = + JSObject::GetDataProperty(Handle<JSObject>::cast(error), + stackTraceLimit); + if (!stack_trace_limit->IsNumber()) return factory()->undefined_value(); + int limit = FastD2IChecked(stack_trace_limit->Number()); limit = Max(limit, 0); // Ensure that limit is not negative. + int initial_size = Min(limit, 10); Handle<FixedArray> elements = factory()->NewFixedArrayWithHoles(initial_size * 4 + 1); @@ -390,49 +363,51 @@ Handle<JSArray> Isolate::CaptureSimpleStackTrace(Handle<JSObject> error_object, int frames_seen = 0; int sloppy_frames = 0; bool encountered_strict_function = false; - for (StackFrameIterator iter(this); + for (JavaScriptFrameIterator iter(this); !iter.done() && frames_seen < limit; iter.Advance()) { - StackFrame* raw_frame = iter.frame(); - if (IsVisibleInStackTrace(raw_frame, *caller, &seen_caller)) { - frames_seen++; - JavaScriptFrame* frame = JavaScriptFrame::cast(raw_frame); - // Set initial size to the maximum inlining level + 1 for the outermost - // function. - List<FrameSummary> frames(FLAG_max_inlining_levels + 1); - frame->Summarize(&frames); - for (int i = frames.length() - 1; i >= 0; i--) { - if (cursor + 4 > elements->length()) { - int new_capacity = JSObject::NewElementsCapacity(elements->length()); - Handle<FixedArray> new_elements = - factory()->NewFixedArrayWithHoles(new_capacity); - for (int i = 0; i < cursor; i++) { - new_elements->set(i, elements->get(i)); - } - elements = new_elements; + JavaScriptFrame* frame = iter.frame(); + // Set initial size to the maximum inlining level + 1 for the outermost + // function. + List<FrameSummary> frames(FLAG_max_inlining_levels + 1); + frame->Summarize(&frames); + for (int i = frames.length() - 1; i >= 0; i--) { + Handle<JSFunction> fun = frames[i].function(); + Handle<Object> recv = frames[i].receiver(); + // Filter out internal frames that we do not want to show. + if (!IsVisibleInStackTrace(*fun, *caller, *recv, &seen_caller)) continue; + // Filter out frames from other security contexts. 
+ if (!this->context()->HasSameSecurityTokenAs(fun->context())) continue; + if (cursor + 4 > elements->length()) { + int new_capacity = JSObject::NewElementsCapacity(elements->length()); + Handle<FixedArray> new_elements = + factory()->NewFixedArrayWithHoles(new_capacity); + for (int i = 0; i < cursor; i++) { + new_elements->set(i, elements->get(i)); } - ASSERT(cursor + 4 <= elements->length()); - - Handle<Object> recv = frames[i].receiver(); - Handle<JSFunction> fun = frames[i].function(); - Handle<Code> code = frames[i].code(); - Handle<Smi> offset(Smi::FromInt(frames[i].offset()), this); - // The stack trace API should not expose receivers and function - // objects on frames deeper than the top-most one with a strict - // mode function. The number of sloppy frames is stored as - // first element in the result array. - if (!encountered_strict_function) { - if (fun->shared()->strict_mode() == STRICT) { - encountered_strict_function = true; - } else { - sloppy_frames++; - } + elements = new_elements; + } + DCHECK(cursor + 4 <= elements->length()); + + + Handle<Code> code = frames[i].code(); + Handle<Smi> offset(Smi::FromInt(frames[i].offset()), this); + // The stack trace API should not expose receivers and function + // objects on frames deeper than the top-most one with a strict + // mode function. The number of sloppy frames is stored as + // first element in the result array. + if (!encountered_strict_function) { + if (fun->shared()->strict_mode() == STRICT) { + encountered_strict_function = true; + } else { + sloppy_frames++; } - elements->set(cursor++, *recv); - elements->set(cursor++, *fun); - elements->set(cursor++, *code); - elements->set(cursor++, *offset); } + elements->set(cursor++, *recv); + elements->set(cursor++, *fun); + elements->set(cursor++, *code); + elements->set(cursor++, *offset); + frames_seen++; } } elements->set(0, Smi::FromInt(sloppy_frames)); @@ -445,15 +420,24 @@ Handle<JSArray> Isolate::CaptureSimpleStackTrace(Handle<JSObject> error_object, void Isolate::CaptureAndSetDetailedStackTrace(Handle<JSObject> error_object) { if (capture_stack_trace_for_uncaught_exceptions_) { // Capture stack trace for a detailed exception message. - Handle<String> key = factory()->hidden_stack_trace_string(); + Handle<Name> key = factory()->detailed_stack_trace_symbol(); Handle<JSArray> stack_trace = CaptureCurrentStackTrace( stack_trace_for_uncaught_exceptions_frame_limit_, stack_trace_for_uncaught_exceptions_options_); - JSObject::SetHiddenProperty(error_object, key, stack_trace); + JSObject::SetProperty(error_object, key, stack_trace, STRICT).Assert(); } } +void Isolate::CaptureAndSetSimpleStackTrace(Handle<JSObject> error_object, + Handle<Object> caller) { + // Capture stack trace for simple stack trace string formatting. + Handle<Name> key = factory()->stack_trace_symbol(); + Handle<Object> stack_trace = CaptureSimpleStackTrace(error_object, caller); + JSObject::SetProperty(error_object, key, stack_trace, STRICT).Assert(); +} + + Handle<JSArray> Isolate::CaptureCurrentStackTrace( int frame_limit, StackTrace::StackTraceOptions options) { // Ensure no negative values. @@ -487,10 +471,14 @@ Handle<JSArray> Isolate::CaptureCurrentStackTrace( List<FrameSummary> frames(FLAG_max_inlining_levels + 1); frame->Summarize(&frames); for (int i = frames.length() - 1; i >= 0 && frames_seen < limit; i--) { + Handle<JSFunction> fun = frames[i].function(); + // Filter frames from other security contexts. 
+ if (!(options & StackTrace::kExposeFramesAcrossSecurityOrigins) && + !this->context()->HasSameSecurityTokenAs(fun->context())) continue; + // Create a JSObject to hold the information for the StackFrame. Handle<JSObject> stack_frame = factory()->NewJSObject(object_function()); - Handle<JSFunction> fun = frames[i].function(); Handle<Script> script(Script::cast(fun->shared()->script())); if (options & StackTrace::kLineNumber) { @@ -509,55 +497,48 @@ Handle<JSArray> Isolate::CaptureCurrentStackTrace( // tag. column_offset += script->column_offset()->value(); } - JSObject::SetLocalPropertyIgnoreAttributes( + JSObject::AddProperty( stack_frame, column_key, - Handle<Smi>(Smi::FromInt(column_offset + 1), this), NONE).Check(); + handle(Smi::FromInt(column_offset + 1), this), NONE); } - JSObject::SetLocalPropertyIgnoreAttributes( + JSObject::AddProperty( stack_frame, line_key, - Handle<Smi>(Smi::FromInt(line_number + 1), this), NONE).Check(); + handle(Smi::FromInt(line_number + 1), this), NONE); } if (options & StackTrace::kScriptId) { - Handle<Smi> script_id(script->id(), this); - JSObject::SetLocalPropertyIgnoreAttributes( - stack_frame, script_id_key, script_id, NONE).Check(); + JSObject::AddProperty( + stack_frame, script_id_key, handle(script->id(), this), NONE); } if (options & StackTrace::kScriptName) { - Handle<Object> script_name(script->name(), this); - JSObject::SetLocalPropertyIgnoreAttributes( - stack_frame, script_name_key, script_name, NONE).Check(); + JSObject::AddProperty( + stack_frame, script_name_key, handle(script->name(), this), NONE); } if (options & StackTrace::kScriptNameOrSourceURL) { Handle<Object> result = Script::GetNameOrSourceURL(script); - JSObject::SetLocalPropertyIgnoreAttributes( - stack_frame, script_name_or_source_url_key, result, NONE).Check(); + JSObject::AddProperty( + stack_frame, script_name_or_source_url_key, result, NONE); } if (options & StackTrace::kFunctionName) { - Handle<Object> fun_name(fun->shared()->name(), this); - if (!fun_name->BooleanValue()) { - fun_name = Handle<Object>(fun->shared()->inferred_name(), this); - } - JSObject::SetLocalPropertyIgnoreAttributes( - stack_frame, function_key, fun_name, NONE).Check(); + Handle<Object> fun_name(fun->shared()->DebugName(), this); + JSObject::AddProperty(stack_frame, function_key, fun_name, NONE); } if (options & StackTrace::kIsEval) { Handle<Object> is_eval = script->compilation_type() == Script::COMPILATION_TYPE_EVAL ? factory()->true_value() : factory()->false_value(); - JSObject::SetLocalPropertyIgnoreAttributes( - stack_frame, eval_key, is_eval, NONE).Check(); + JSObject::AddProperty(stack_frame, eval_key, is_eval, NONE); } if (options & StackTrace::kIsConstructor) { Handle<Object> is_constructor = (frames[i].is_constructor()) ? 
factory()->true_value() : factory()->false_value(); - JSObject::SetLocalPropertyIgnoreAttributes( - stack_frame, constructor_key, is_constructor, NONE).Check(); + JSObject::AddProperty( + stack_frame, constructor_key, is_constructor, NONE); } FixedArray::cast(stack_trace->elements())->set(frames_seen, *stack_frame); @@ -586,9 +567,9 @@ void Isolate::PrintStack(FILE* out) { stack_trace_nesting_level_ = 0; } else if (stack_trace_nesting_level_ == 1) { stack_trace_nesting_level_++; - OS::PrintError( + base::OS::PrintError( "\n\nAttempt to print stack while printing stack (double fault)\n"); - OS::PrintError( + base::OS::PrintError( "If you are lucky you may find a partial stack dump on stdout.\n\n"); incomplete_message_->OutputToFile(out); } @@ -615,7 +596,7 @@ void Isolate::PrintStack(StringStream* accumulator) { } // The MentionedObjectCache is not GC-proof at the moment. DisallowHeapAllocation no_gc; - ASSERT(StringStream::IsMentionedObjectCacheClear(this)); + DCHECK(StringStream::IsMentionedObjectCacheClear(this)); // Avoid printing anything if there are no frames. if (c_entry_fp(thread_local_top()) == 0) return; @@ -654,10 +635,14 @@ static inline AccessCheckInfo* GetAccessCheckInfo(Isolate* isolate, void Isolate::ReportFailedAccessCheck(Handle<JSObject> receiver, v8::AccessType type) { - if (!thread_local_top()->failed_access_check_callback_) return; + if (!thread_local_top()->failed_access_check_callback_) { + Handle<String> message = factory()->InternalizeUtf8String("no access"); + ScheduleThrow(*factory()->NewTypeError(message)); + return; + } - ASSERT(receiver->IsAccessCheckNeeded()); - ASSERT(context()); + DCHECK(receiver->IsAccessCheckNeeded()); + DCHECK(context()); // Get the data object from access check info. HandleScope scope(this); @@ -711,7 +696,7 @@ static MayAccessDecision MayAccessPreCheck(Isolate* isolate, bool Isolate::MayNamedAccess(Handle<JSObject> receiver, Handle<Object> key, v8::AccessType type) { - ASSERT(receiver->IsJSGlobalProxy() || receiver->IsAccessCheckNeeded()); + DCHECK(receiver->IsJSGlobalProxy() || receiver->IsAccessCheckNeeded()); // Skip checks for hidden properties access. Note, we do not // require existence of a context in this case. @@ -719,7 +704,7 @@ bool Isolate::MayNamedAccess(Handle<JSObject> receiver, // Check for compatibility between the security tokens in the // current lexical context and the accessed object. - ASSERT(context()); + DCHECK(context()); MayAccessDecision decision = MayAccessPreCheck(this, receiver, type); if (decision != UNKNOWN) return decision == YES; @@ -750,10 +735,10 @@ bool Isolate::MayNamedAccess(Handle<JSObject> receiver, bool Isolate::MayIndexedAccess(Handle<JSObject> receiver, uint32_t index, v8::AccessType type) { - ASSERT(receiver->IsJSGlobalProxy() || receiver->IsAccessCheckNeeded()); + DCHECK(receiver->IsJSGlobalProxy() || receiver->IsAccessCheckNeeded()); // Check for compatibility between the security tokens in the // current lexical context and the accessed object. - ASSERT(context()); + DCHECK(context()); MayAccessDecision decision = MayAccessPreCheck(this, receiver, type); if (decision != UNKNOWN) return decision == YES; @@ -795,26 +780,7 @@ Object* Isolate::StackOverflow() { Handle<JSObject> exception = factory()->CopyJSObject(boilerplate); DoThrow(*exception, NULL); - // Get stack trace limit. 
- Handle<Object> error = Object::GetProperty( - this, js_builtins_object(), "$Error").ToHandleChecked(); - if (!error->IsJSObject()) return heap()->exception(); - - Handle<String> stackTraceLimit = - factory()->InternalizeUtf8String("stackTraceLimit"); - ASSERT(!stackTraceLimit.is_null()); - Handle<Object> stack_trace_limit = - JSObject::GetDataProperty(Handle<JSObject>::cast(error), - stackTraceLimit); - if (!stack_trace_limit->IsNumber()) return heap()->exception(); - double dlimit = stack_trace_limit->Number(); - int limit = std::isnan(dlimit) ? 0 : static_cast<int>(dlimit); - - Handle<JSArray> stack_trace = CaptureSimpleStackTrace( - exception, factory()->undefined_value(), limit); - JSObject::SetHiddenProperty(exception, - factory()->hidden_stack_trace_string(), - stack_trace); + CaptureAndSetSimpleStackTrace(exception, factory()->undefined_value()); return heap()->exception(); } @@ -842,6 +808,26 @@ void Isolate::CancelTerminateExecution() { } +void Isolate::InvokeApiInterruptCallback() { + // Note: callback below should be called outside of execution access lock. + InterruptCallback callback = NULL; + void* data = NULL; + { + ExecutionAccess access(this); + callback = api_interrupt_callback_; + data = api_interrupt_callback_data_; + api_interrupt_callback_ = NULL; + api_interrupt_callback_data_ = NULL; + } + + if (callback != NULL) { + VMState<EXTERNAL> state(this); + HandleScope handle_scope(this); + callback(reinterpret_cast<v8::Isolate*>(this), data); + } +} + + Object* Isolate::Throw(Object* exception, MessageLocation* location) { DoThrow(exception, location); return heap()->exception(); @@ -888,14 +874,14 @@ void Isolate::ScheduleThrow(Object* exception) { void Isolate::RestorePendingMessageFromTryCatch(v8::TryCatch* handler) { - ASSERT(handler == try_catch_handler()); - ASSERT(handler->HasCaught()); - ASSERT(handler->rethrow_); - ASSERT(handler->capture_message_); + DCHECK(handler == try_catch_handler()); + DCHECK(handler->HasCaught()); + DCHECK(handler->rethrow_); + DCHECK(handler->capture_message_); Object* message = reinterpret_cast<Object*>(handler->message_obj_); Object* script = reinterpret_cast<Object*>(handler->message_script_); - ASSERT(message->IsJSMessageObject() || message->IsTheHole()); - ASSERT(script->IsScript() || script->IsTheHole()); + DCHECK(message->IsJSMessageObject() || message->IsTheHole()); + DCHECK(script->IsScript() || script->IsTheHole()); thread_local_top()->pending_message_obj_ = message; thread_local_top()->pending_message_script_ = script; thread_local_top()->pending_message_start_pos_ = handler->message_start_pos_; @@ -903,6 +889,15 @@ void Isolate::RestorePendingMessageFromTryCatch(v8::TryCatch* handler) { } +void Isolate::CancelScheduledExceptionFromTryCatch(v8::TryCatch* handler) { + DCHECK(has_scheduled_exception()); + if (scheduled_exception() == handler->exception_) { + DCHECK(scheduled_exception() != heap()->termination_exception()); + clear_scheduled_exception(); + } +} + + Object* Isolate::PromoteScheduledException() { Object* thrown = scheduled_exception(); clear_scheduled_exception(); @@ -997,10 +992,10 @@ bool Isolate::IsErrorObject(Handle<Object> obj) { js_builtins_object(), error_key).ToHandleChecked(); DisallowHeapAllocation no_gc; - for (Object* prototype = *obj; !prototype->IsNull(); - prototype = prototype->GetPrototype(this)) { - if (!prototype->IsJSObject()) return false; - if (JSObject::cast(prototype)->map()->constructor() == + for (PrototypeIterator iter(this, *obj, PrototypeIterator::START_AT_RECEIVER); + !iter.IsAtEnd(); 
iter.Advance()) { + if (iter.GetCurrent()->IsJSProxy()) return false; + if (JSObject::cast(iter.GetCurrent())->map()->constructor() == *error_constructor) { return true; } @@ -1011,7 +1006,7 @@ bool Isolate::IsErrorObject(Handle<Object> obj) { static int fatal_exception_depth = 0; void Isolate::DoThrow(Object* exception, MessageLocation* location) { - ASSERT(!has_pending_exception()); + DCHECK(!has_pending_exception()); HandleScope scope(this); Handle<Object> exception_handle(exception, this); @@ -1031,7 +1026,7 @@ void Isolate::DoThrow(Object* exception, MessageLocation* location) { // Notify debugger of exception. if (catchable_by_javascript) { - debugger_->OnException(exception_handle, report_exception); + debug()->OnThrow(exception_handle, report_exception); } // Generate the message if required. @@ -1050,13 +1045,16 @@ void Isolate::DoThrow(Object* exception, MessageLocation* location) { if (capture_stack_trace_for_uncaught_exceptions_) { if (IsErrorObject(exception_handle)) { // We fetch the stack trace that corresponds to this error object. - Handle<String> key = factory()->hidden_stack_trace_string(); - Object* stack_property = - JSObject::cast(*exception_handle)->GetHiddenProperty(key); - // Property lookup may have failed. In this case it's probably not - // a valid Error object. - if (stack_property->IsJSArray()) { - stack_trace_object = Handle<JSArray>(JSArray::cast(stack_property)); + Handle<Name> key = factory()->detailed_stack_trace_symbol(); + // Look up as own property. If the lookup fails, the exception is + // probably not a valid Error object. In that case, we fall through + // and capture the stack trace at this throw site. + LookupIterator lookup( + exception_handle, key, LookupIterator::CHECK_OWN_REAL); + Handle<Object> stack_trace_property; + if (Object::GetProperty(&lookup).ToHandle(&stack_trace_property) && + stack_trace_property->IsJSArray()) { + stack_trace_object = Handle<JSArray>::cast(stack_trace_property); } } if (stack_trace_object.is_null()) { @@ -1104,7 +1102,7 @@ void Isolate::DoThrow(Object* exception, MessageLocation* location) { "%s\n\nFROM\n", MessageHandler::GetLocalizedMessage(this, message_obj).get()); PrintCurrentStackTrace(stderr); - OS::Abort(); + base::OS::Abort(); } } else if (location != NULL && !location->script().is_null()) { // We are bootstrapping and caught an error where the location is set @@ -1115,19 +1113,37 @@ void Isolate::DoThrow(Object* exception, MessageLocation* location) { int line_number = location->script()->GetLineNumber(location->start_pos()) + 1; if (exception->IsString() && location->script()->name()->IsString()) { - OS::PrintError( + base::OS::PrintError( "Extension or internal compilation error: %s in %s at line %d.\n", String::cast(exception)->ToCString().get(), String::cast(location->script()->name())->ToCString().get(), line_number); } else if (location->script()->name()->IsString()) { - OS::PrintError( + base::OS::PrintError( "Extension or internal compilation error in %s at line %d.\n", String::cast(location->script()->name())->ToCString().get(), line_number); } else { - OS::PrintError("Extension or internal compilation error.\n"); + base::OS::PrintError("Extension or internal compilation error.\n"); } +#ifdef OBJECT_PRINT + // Since comments and empty lines have been stripped from the source of + // builtins, print the actual source here so that line numbers match. 
+ if (location->script()->source()->IsString()) { + Handle<String> src(String::cast(location->script()->source())); + PrintF("Failing script:\n"); + int len = src->length(); + int line_number = 1; + PrintF("%5d: ", line_number); + for (int i = 0; i < len; i++) { + uint16_t character = src->Get(i); + PrintF("%c", character); + if (character == '\n' && i < len - 2) { + PrintF("%5d: ", ++line_number); + } + } + } +#endif } } @@ -1143,25 +1159,20 @@ void Isolate::DoThrow(Object* exception, MessageLocation* location) { } -bool Isolate::IsExternallyCaught() { - ASSERT(has_pending_exception()); +bool Isolate::HasExternalTryCatch() { + DCHECK(has_pending_exception()); - if ((thread_local_top()->catcher_ == NULL) || - (try_catch_handler() != thread_local_top()->catcher_)) { - // When throwing the exception, we found no v8::TryCatch - // which should care about this exception. - return false; - } + return (thread_local_top()->catcher_ != NULL) && + (try_catch_handler() == thread_local_top()->catcher_); +} - if (!is_catchable_by_javascript(pending_exception())) { - return true; - } +bool Isolate::IsFinallyOnTop() { // Get the address of the external handler so we can compare the address to // determine which one is closer to the top of the stack. Address external_handler_address = thread_local_top()->try_catch_handler_address(); - ASSERT(external_handler_address != NULL); + DCHECK(external_handler_address != NULL); // The exception has been externally caught if and only if there is // an external handler which is on top of the top-most try-finally @@ -1175,23 +1186,22 @@ bool Isolate::IsExternallyCaught() { StackHandler* handler = StackHandler::FromAddress(Isolate::handler(thread_local_top())); while (handler != NULL && handler->address() < external_handler_address) { - ASSERT(!handler->is_catch()); - if (handler->is_finally()) return false; + DCHECK(!handler->is_catch()); + if (handler->is_finally()) return true; handler = handler->next(); } - return true; + return false; } void Isolate::ReportPendingMessages() { - ASSERT(has_pending_exception()); - PropagatePendingExceptionToExternalTryCatch(); + DCHECK(has_pending_exception()); + bool can_clear_message = PropagatePendingExceptionToExternalTryCatch(); HandleScope scope(this); - if (thread_local_top_.pending_exception_ == - heap()->termination_exception()) { + if (thread_local_top_.pending_exception_ == heap()->termination_exception()) { // Do nothing: if needed, the exception has been already propagated to // v8::TryCatch. } else { @@ -1214,12 +1224,12 @@ void Isolate::ReportPendingMessages() { } } } - clear_pending_message(); + if (can_clear_message) clear_pending_message(); } MessageLocation Isolate::GetMessageLocation() { - ASSERT(has_pending_exception()); + DCHECK(has_pending_exception()); if (thread_local_top_.pending_exception_ != heap()->termination_exception() && thread_local_top_.has_pending_message_ && @@ -1237,7 +1247,7 @@ MessageLocation Isolate::GetMessageLocation() { bool Isolate::OptionalRescheduleException(bool is_bottom_call) { - ASSERT(has_pending_exception()); + DCHECK(has_pending_exception()); PropagatePendingExceptionToExternalTryCatch(); bool is_termination_exception = @@ -1256,7 +1266,7 @@ bool Isolate::OptionalRescheduleException(bool is_bottom_call) { // If the exception is externally caught, clear it if there are no // JavaScript frames on the way to the C++ frame that has the // external handler. 
- ASSERT(thread_local_top()->try_catch_handler_address() != NULL); + DCHECK(thread_local_top()->try_catch_handler_address() != NULL); Address external_handler_address = thread_local_top()->try_catch_handler_address(); JavaScriptFrameIterator it(this); @@ -1290,18 +1300,18 @@ void Isolate::SetCaptureStackTraceForUncaughtExceptions( Handle<Context> Isolate::native_context() { - return Handle<Context>(context()->global_object()->native_context()); + return handle(context()->native_context()); } Handle<Context> Isolate::global_context() { - return Handle<Context>(context()->global_object()->global_context()); + return handle(context()->global_object()->global_context()); } Handle<Context> Isolate::GetCallingNativeContext() { JavaScriptFrameIterator it(this); - if (debug_->InDebugger()) { + if (debug_->in_debug_scope()) { while (!it.done()) { JavaScriptFrame* frame = it.frame(); Context* context = Context::cast(frame->context()); @@ -1320,8 +1330,8 @@ Handle<Context> Isolate::GetCallingNativeContext() { char* Isolate::ArchiveThread(char* to) { - OS::MemCopy(to, reinterpret_cast<char*>(thread_local_top()), - sizeof(ThreadLocalTop)); + MemCopy(to, reinterpret_cast<char*>(thread_local_top()), + sizeof(ThreadLocalTop)); InitializeThreadLocal(); clear_pending_exception(); clear_pending_message(); @@ -1331,14 +1341,14 @@ char* Isolate::ArchiveThread(char* to) { char* Isolate::RestoreThread(char* from) { - OS::MemCopy(reinterpret_cast<char*>(thread_local_top()), from, - sizeof(ThreadLocalTop)); - // This might be just paranoia, but it seems to be needed in case a - // thread_local_top_ is restored on a separate OS thread. + MemCopy(reinterpret_cast<char*>(thread_local_top()), from, + sizeof(ThreadLocalTop)); +// This might be just paranoia, but it seems to be needed in case a +// thread_local_top_ is restored on a separate OS thread. #ifdef USE_SIMULATOR thread_local_top()->simulator_ = Simulator::current(this); #endif - ASSERT(context() == NULL || context()->IsContext()); + DCHECK(context() == NULL || context()->IsContext()); return from + sizeof(ThreadLocalTop); } @@ -1352,7 +1362,7 @@ Isolate::ThreadDataTable::~ThreadDataTable() { // TODO(svenpanne) The assertion below would fire if an embedder does not // cleanly dispose all Isolates before disposing v8, so we are conservative // and leave it out for now. - // ASSERT_EQ(NULL, list_); + // DCHECK_EQ(NULL, list_); } @@ -1452,8 +1462,8 @@ Isolate::Isolate() // TODO(bmeurer) Initialized lazily because it depends on flags; can // be fixed once the default isolate cleanup is done. 
random_number_generator_(NULL), + serializer_enabled_(false), has_fatal_error_(false), - use_crankshaft_(true), initialized_from_snapshot_(false), cpu_profiler_(NULL), heap_profiler_(NULL), @@ -1463,8 +1473,9 @@ Isolate::Isolate() sweeper_thread_(NULL), num_sweeper_threads_(0), stress_deopt_count_(0), - next_optimization_id_(0) { - id_ = NoBarrier_AtomicIncrement(&isolate_counter_, 1); + next_optimization_id_(0), + use_counter_callback_(NULL) { + id_ = base::NoBarrier_AtomicIncrement(&isolate_counter_, 1); TRACE_ISOLATE(constructor); memset(isolate_addresses_, 0, @@ -1497,7 +1508,6 @@ Isolate::Isolate() InitializeLoggingAndCounters(); debug_ = new Debug(this); - debugger_ = new Debugger(this); } @@ -1514,7 +1524,8 @@ void Isolate::TearDown() { Deinit(); - { LockGuard<Mutex> lock_guard(&process_wide_mutex_); + { + base::LockGuard<base::Mutex> lock_guard(process_wide_mutex_.Pointer()); thread_data_table_->RemoveAllThreads(this); } @@ -1523,9 +1534,7 @@ void Isolate::TearDown() { serialize_partial_snapshot_cache_ = NULL; } - if (!IsDefaultIsolate()) { - delete this; - } + delete this; // Restore the previous current isolate. SetIsolateThreadLocals(saved_isolate, saved_data); @@ -1541,7 +1550,7 @@ void Isolate::Deinit() { if (state_ == INITIALIZED) { TRACE_ISOLATE(deinit); - debugger()->UnloadDebugger(); + debug()->Unload(); if (concurrent_recompilation_enabled()) { optimizing_compiler_thread_->Stop(); @@ -1558,11 +1567,12 @@ void Isolate::Deinit() { sweeper_thread_ = NULL; if (FLAG_job_based_sweeping && - heap_.mark_compact_collector()->IsConcurrentSweepingInProgress()) { - heap_.mark_compact_collector()->WaitUntilSweepingCompleted(); + heap_.mark_compact_collector()->sweeping_in_progress()) { + heap_.mark_compact_collector()->EnsureSweepingCompleted(); } - if (FLAG_hydrogen_stats) GetHStatistics()->Print(); + if (FLAG_turbo_stats) GetTStatistics()->Print("TurboFan"); + if (FLAG_hydrogen_stats) GetHStatistics()->Print("Hydrogen"); if (FLAG_print_deopt_stress) { PrintF(stdout, "=== Stress deopt counter: %u\n", stress_deopt_count_); @@ -1617,8 +1627,9 @@ void Isolate::PushToPartialSnapshotCache(Object* obj) { void Isolate::SetIsolateThreadLocals(Isolate* isolate, PerIsolateThreadData* data) { - Thread::SetThreadLocal(isolate_key_, isolate); - Thread::SetThreadLocal(per_isolate_thread_data_key_, data); + EnsureInitialized(); + base::Thread::SetThreadLocal(isolate_key_, isolate); + base::Thread::SetThreadLocal(per_isolate_thread_data_key_, data); } @@ -1629,14 +1640,11 @@ Isolate::~Isolate() { runtime_zone_.DeleteKeptSegment(); // The entry stack must be empty when we get here. 
- ASSERT(entry_stack_ == NULL || entry_stack_->previous_item == NULL); + DCHECK(entry_stack_ == NULL || entry_stack_->previous_item == NULL); delete entry_stack_; entry_stack_ = NULL; - delete[] assembler_spare_buffer_; - assembler_spare_buffer_ = NULL; - delete unicode_cache_; unicode_cache_ = NULL; @@ -1711,8 +1719,6 @@ Isolate::~Isolate() { delete random_number_generator_; random_number_generator_ = NULL; - delete debugger_; - debugger_ = NULL; delete debug_; debug_ = NULL; } @@ -1724,36 +1730,44 @@ void Isolate::InitializeThreadLocal() { } -void Isolate::PropagatePendingExceptionToExternalTryCatch() { - ASSERT(has_pending_exception()); +bool Isolate::PropagatePendingExceptionToExternalTryCatch() { + DCHECK(has_pending_exception()); - bool external_caught = IsExternallyCaught(); - thread_local_top_.external_caught_exception_ = external_caught; + bool has_external_try_catch = HasExternalTryCatch(); + if (!has_external_try_catch) { + thread_local_top_.external_caught_exception_ = false; + return true; + } - if (!external_caught) return; + bool catchable_by_js = is_catchable_by_javascript(pending_exception()); + if (catchable_by_js && IsFinallyOnTop()) { + thread_local_top_.external_caught_exception_ = false; + return false; + } - if (thread_local_top_.pending_exception_ == - heap()->termination_exception()) { + thread_local_top_.external_caught_exception_ = true; + if (thread_local_top_.pending_exception_ == heap()->termination_exception()) { try_catch_handler()->can_continue_ = false; try_catch_handler()->has_terminated_ = true; try_catch_handler()->exception_ = heap()->null_value(); } else { v8::TryCatch* handler = try_catch_handler(); - ASSERT(thread_local_top_.pending_message_obj_->IsJSMessageObject() || + DCHECK(thread_local_top_.pending_message_obj_->IsJSMessageObject() || thread_local_top_.pending_message_obj_->IsTheHole()); - ASSERT(thread_local_top_.pending_message_script_->IsScript() || + DCHECK(thread_local_top_.pending_message_script_->IsScript() || thread_local_top_.pending_message_script_->IsTheHole()); handler->can_continue_ = true; handler->has_terminated_ = false; handler->exception_ = pending_exception(); // Propagate to the external try-catch only if we got an actual message. - if (thread_local_top_.pending_message_obj_->IsTheHole()) return; + if (thread_local_top_.pending_message_obj_->IsTheHole()) return true; handler->message_obj_ = thread_local_top_.pending_message_obj_; handler->message_script_ = thread_local_top_.pending_message_script_; handler->message_start_pos_ = thread_local_top_.pending_message_start_pos_; handler->message_end_pos_ = thread_local_top_.pending_message_end_pos_; } + return true; } @@ -1768,23 +1782,19 @@ void Isolate::InitializeLoggingAndCounters() { bool Isolate::Init(Deserializer* des) { - ASSERT(state_ != INITIALIZED); + DCHECK(state_ != INITIALIZED); TRACE_ISOLATE(init); stress_deopt_count_ = FLAG_deopt_every_n_times; has_fatal_error_ = false; - use_crankshaft_ = FLAG_crankshaft - && !Serializer::enabled(this) - && CpuFeatures::SupportsCrankshaft(); - if (function_entry_hook() != NULL) { // When function entry hooking is in effect, we have to create the code // stubs from scratch to get entry hooks, rather than loading the previously // generated stubs from disk. // If this assert fires, the initialization path has regressed. - ASSERT(des == NULL); + DCHECK(des == NULL); } // The initialization process does not handle memory exhaustion. 
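// What the new bool return of PropagatePendingExceptionToExternalTryCatch()
// above means for an embedder: if a JS finally block is on top, the
// exception is *not* delivered to the external handler yet. A sketch of the
// consumer side using only the public TryCatch API of this era (the helper
// name is invented):

#include <v8.h>
#include <cstdio>

void RunAndReport(v8::Local<v8::Script> script) {
  v8::TryCatch try_catch;
  v8::Local<v8::Value> result = script->Run();
  if (result.IsEmpty() && try_catch.HasCaught()) {
    // Propagation succeeded: the message object was copied into the handler.
    v8::String::Utf8Value error(try_catch.Exception());
    fprintf(stderr, "uncaught: %s\n", *error);
  }
}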
@@ -1832,7 +1842,8 @@ bool Isolate::Init(Deserializer* des) { // Initialize other runtime facilities #if defined(USE_SIMULATOR) -#if V8_TARGET_ARCH_ARM || V8_TARGET_ARCH_ARM64 || V8_TARGET_ARCH_MIPS +#if V8_TARGET_ARCH_ARM || V8_TARGET_ARCH_ARM64 || \ + V8_TARGET_ARCH_MIPS || V8_TARGET_ARCH_MIPS64 Simulator::Initialize(this); #endif #endif @@ -1848,7 +1859,7 @@ bool Isolate::Init(Deserializer* des) { } // SetUp the object heap. - ASSERT(!heap_.HasBeenSetUp()); + DCHECK(!heap_.HasBeenSetUp()); if (!heap_.SetUp()) { V8::FatalProcessOutOfMemory("heap setup"); return false; @@ -1873,9 +1884,9 @@ bool Isolate::Init(Deserializer* des) { builtins_.SetUp(this, create_heap_objects); if (FLAG_log_internal_timer_events) { - set_event_logger(Logger::LogInternalEvents); + set_event_logger(Logger::DefaultTimerEventsLogger); } else { - set_event_logger(Logger::EmptyLogInternalEvents); + set_event_logger(Logger::EmptyTimerEventsLogger); } // Set default value if not yet set. @@ -1883,7 +1894,8 @@ bool Isolate::Init(Deserializer* des) { // once ResourceConstraints becomes an argument to the Isolate constructor. if (max_available_threads_ < 1) { // Choose the default between 1 and 4. - max_available_threads_ = Max(Min(CPU::NumberOfProcessorsOnline(), 4), 1); + max_available_threads_ = + Max(Min(base::OS::NumberOfProcessorsOnline(), 4), 1); } if (!FLAG_job_based_sweeping) { @@ -1939,19 +1951,20 @@ bool Isolate::Init(Deserializer* des) { LOG(this, LogCompiledFunctions()); } - // If we are profiling with the Linux perf tool, we need to disable - // code relocation. - if (FLAG_perf_jit_prof || FLAG_perf_basic_prof) { - FLAG_compact_code_space = false; - } - CHECK_EQ(static_cast<int>(OFFSET_OF(Isolate, embedder_data_)), Internals::kIsolateEmbedderDataOffset); CHECK_EQ(static_cast<int>(OFFSET_OF(Isolate, heap_.roots_)), Internals::kIsolateRootsOffset); + CHECK_EQ(static_cast<int>( + OFFSET_OF(Isolate, heap_.amount_of_external_allocated_memory_)), + Internals::kAmountOfExternalAllocatedMemoryOffset); + CHECK_EQ(static_cast<int>(OFFSET_OF( + Isolate, + heap_.amount_of_external_allocated_memory_at_last_global_gc_)), + Internals::kAmountOfExternalAllocatedMemoryAtLastGlobalGCOffset); state_ = INITIALIZED; - time_millis_at_init_ = OS::TimeCurrentMillis(); + time_millis_at_init_ = base::OS::TimeCurrentMillis(); if (!create_heap_objects) { // Now that the heap is consistent, it's OK to generate the code for the @@ -1964,7 +1977,7 @@ bool Isolate::Init(Deserializer* des) { kDeoptTableSerializeEntryCount - 1); } - if (!Serializer::enabled(this)) { + if (!serializer_enabled()) { // Ensure that all stubs which need to be generated ahead of time, but // cannot be serialized into the snapshot have been generated. 
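// The CHECK_EQ(OFFSET_OF(...), Internals::k...Offset) calls just above pin
// the Isolate/Heap layout to the constants published in include/v8.h, which
// the inline API fast paths use to read isolate memory directly. Generic
// shape of that technique (illustrative names, modern static_assert):

#include <cstddef>

struct Layout {
  void* embedder_data[4];
  char roots[1];
};
const size_t kEmbedderDataOffset = 0;  // mirrored "public" constant

// Breaks the build instead of corrupting memory if the layout drifts.
static_assert(offsetof(Layout, embedder_data) == kEmbedderDataOffset,
              "layout must match the published offset");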
HandleScope scope(this); @@ -1986,6 +1999,8 @@ bool Isolate::Init(Deserializer* des) { NumberToStringStub::InstallDescriptors(this); StringAddStub::InstallDescriptors(this); RegExpConstructResultStub::InstallDescriptors(this); + KeyedLoadGenericStub::InstallDescriptors(this); + StoreFieldStub::InstallDescriptors(this); } CallDescriptors::InitializeForIsolate(this); @@ -2011,11 +2026,11 @@ void Isolate::Enter() { PerIsolateThreadData* current_data = CurrentPerIsolateThreadData(); if (current_data != NULL) { current_isolate = current_data->isolate_; - ASSERT(current_isolate != NULL); + DCHECK(current_isolate != NULL); if (current_isolate == this) { - ASSERT(Current() == this); - ASSERT(entry_stack_ != NULL); - ASSERT(entry_stack_->previous_thread_data == NULL || + DCHECK(Current() == this); + DCHECK(entry_stack_ != NULL); + DCHECK(entry_stack_->previous_thread_data == NULL || entry_stack_->previous_thread_data->thread_id().Equals( ThreadId::Current())); // Same thread re-enters the isolate, no need to re-init anything. @@ -2024,19 +2039,9 @@ void Isolate::Enter() { } } - // Threads can have default isolate set into TLS as Current but not yet have - // PerIsolateThreadData for it, as it requires more advanced phase of the - // initialization. For example, a thread might be the one that system used for - // static initializers - in this case the default isolate is set in TLS but - // the thread did not yet Enter the isolate. If PerisolateThreadData is not - // there, use the isolate set in TLS. - if (current_isolate == NULL) { - current_isolate = Isolate::UncheckedCurrent(); - } - PerIsolateThreadData* data = FindOrAllocatePerThreadDataForThisThread(); - ASSERT(data != NULL); - ASSERT(data->isolate_ == this); + DCHECK(data != NULL); + DCHECK(data->isolate_ == this); EntryStackItem* item = new EntryStackItem(current_data, current_isolate, @@ -2051,15 +2056,15 @@ void Isolate::Enter() { void Isolate::Exit() { - ASSERT(entry_stack_ != NULL); - ASSERT(entry_stack_->previous_thread_data == NULL || + DCHECK(entry_stack_ != NULL); + DCHECK(entry_stack_->previous_thread_data == NULL || entry_stack_->previous_thread_data->thread_id().Equals( ThreadId::Current())); if (--entry_stack_->entry_count > 0) return; - ASSERT(CurrentPerIsolateThreadData() != NULL); - ASSERT(CurrentPerIsolateThreadData()->isolate_ == this); + DCHECK(CurrentPerIsolateThreadData() != NULL); + DCHECK(CurrentPerIsolateThreadData()->isolate_ == this); // Pop the stack. 
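// Enter()/Exit() above and below implement re-entrant isolate entry: a
// same-thread re-entry only bumps entry_count on the top EntryStackItem,
// and Exit() pops the item only when the count reaches zero. Stripped-down
// sketch of that pattern (names illustrative):

struct EntryItem {
  int entry_count = 1;
  EntryItem* previous = nullptr;
};

void Enter(EntryItem*& top, bool same_thread_reentry) {
  if (top != nullptr && same_thread_reentry) {
    top->entry_count++;  // nested Enter() on the same thread
    return;
  }
  EntryItem* item = new EntryItem();
  item->previous = top;
  top = item;
}

void Exit(EntryItem*& top) {
  if (--top->entry_count > 0) return;  // still nested: keep the item
  EntryItem* item = top;
  top = item->previous;
  delete item;
}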
EntryStackItem* item = entry_stack_; @@ -2091,7 +2096,7 @@ void Isolate::UnlinkDeferredHandles(DeferredHandles* deferred) { while (deferred_iterator->previous_ != NULL) { deferred_iterator = deferred_iterator->previous_; } - ASSERT(deferred_handles_head_ == deferred_iterator); + DCHECK(deferred_handles_head_ == deferred_iterator); #endif if (deferred_handles_head_ == deferred) { deferred_handles_head_ = deferred_handles_head_->next_; @@ -2111,6 +2116,12 @@ HStatistics* Isolate::GetHStatistics() { } +HStatistics* Isolate::GetTStatistics() { + if (tstatistics() == NULL) set_tstatistics(new HStatistics()); + return tstatistics(); +} + + HTracer* Isolate::GetHTracer() { if (htracer() == NULL) set_htracer(new HTracer(id())); return htracer(); @@ -2137,10 +2148,17 @@ Map* Isolate::get_initial_js_array_map(ElementsKind kind) { } +bool Isolate::use_crankshaft() const { + return FLAG_crankshaft && + !serializer_enabled_ && + CpuFeatures::SupportsCrankshaft(); +} + + bool Isolate::IsFastArrayConstructorPrototypeChainIntact() { Map* root_array_map = get_initial_js_array_map(GetInitialFastElementsKind()); - ASSERT(root_array_map != NULL); + DCHECK(root_array_map != NULL); JSObject* initial_array_proto = JSObject::cast(*initial_array_prototype()); // Check that the array prototype hasn't been altered WRT empty elements. @@ -2151,13 +2169,16 @@ bool Isolate::IsFastArrayConstructorPrototypeChainIntact() { // Check that the object prototype hasn't been altered WRT empty elements. JSObject* initial_object_proto = JSObject::cast(*initial_object_prototype()); - Object* root_array_map_proto = initial_array_proto->GetPrototype(); - if (root_array_map_proto != initial_object_proto) return false; + PrototypeIterator iter(this, initial_array_proto); + if (iter.IsAtEnd() || iter.GetCurrent() != initial_object_proto) { + return false; + } if (initial_object_proto->elements() != heap()->empty_fixed_array()) { return false; } - return initial_object_proto->GetPrototype()->IsNull(); + iter.Advance(); + return iter.IsAtEnd(); } @@ -2169,7 +2190,7 @@ CodeStubInterfaceDescriptor* CallInterfaceDescriptor* Isolate::call_descriptor(CallDescriptorKey index) { - ASSERT(0 <= index && index < NUMBER_OF_CALL_DESCRIPTORS); + DCHECK(0 <= index && index < NUMBER_OF_CALL_DESCRIPTORS); return &call_descriptors_[index]; } @@ -2201,7 +2222,7 @@ Handle<JSObject> Isolate::GetSymbolRegistry() { Handle<String> name = factory()->InternalizeUtf8String(nested[i]); Handle<JSObject> obj = factory()->NewJSObjectFromMap(map); JSObject::NormalizeProperties(obj, KEEP_INOBJECT_PROPERTIES, 8); - JSObject::SetProperty(registry, name, obj, NONE, STRICT).Assert(); + JSObject::SetProperty(registry, name, obj, STRICT).Assert(); } } return Handle<JSObject>::cast(factory()->symbol_registry()); @@ -2227,31 +2248,126 @@ void Isolate::RemoveCallCompletedCallback(CallCompletedCallback callback) { void Isolate::FireCallCompletedCallback() { bool has_call_completed_callbacks = !call_completed_callbacks_.is_empty(); - bool run_microtasks = autorun_microtasks() && microtask_pending(); + bool run_microtasks = autorun_microtasks() && pending_microtask_count(); if (!has_call_completed_callbacks && !run_microtasks) return; if (!handle_scope_implementer()->CallDepthIsZero()) return; + if (run_microtasks) RunMicrotasks(); // Fire callbacks. Increase call depth to prevent recursive callbacks. 
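// Context for the microtask machinery introduced below: pending tasks live
// in a FixedArray that starts at 8 slots and doubles via CopySize when
// full, and RunMicrotasks() detaches the queue before running it so tasks
// enqueued by a running task land in a fresh batch. The same shape with
// std::function (illustrative; V8's queue is a heap object):

#include <functional>
#include <utility>
#include <vector>

class MicrotaskQueue {
 public:
  void Enqueue(std::function<void()> task) {
    tasks_.push_back(std::move(task));  // vector growth ~ CopySize(n * 2)
  }
  void Run() {
    while (!tasks_.empty()) {
      std::vector<std::function<void()>> batch;
      batch.swap(tasks_);  // detach first, like swapping in the empty array
      for (std::function<void()>& task : batch) task();
    }
  }
 private:
  std::vector<std::function<void()>> tasks_;
};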
- handle_scope_implementer()->IncrementCallDepth(); - if (run_microtasks) Execution::RunMicrotasks(this); + v8::Isolate::SuppressMicrotaskExecutionScope suppress( + reinterpret_cast<v8::Isolate*>(this)); for (int i = 0; i < call_completed_callbacks_.length(); i++) { call_completed_callbacks_.at(i)(); } - handle_scope_implementer()->DecrementCallDepth(); } -void Isolate::RunMicrotasks() { - if (!microtask_pending()) - return; +void Isolate::EnqueueMicrotask(Handle<Object> microtask) { + DCHECK(microtask->IsJSFunction() || microtask->IsCallHandlerInfo()); + Handle<FixedArray> queue(heap()->microtask_queue(), this); + int num_tasks = pending_microtask_count(); + DCHECK(num_tasks <= queue->length()); + if (num_tasks == 0) { + queue = factory()->NewFixedArray(8); + heap()->set_microtask_queue(*queue); + } else if (num_tasks == queue->length()) { + queue = FixedArray::CopySize(queue, num_tasks * 2); + heap()->set_microtask_queue(*queue); + } + DCHECK(queue->get(num_tasks)->IsUndefined()); + queue->set(num_tasks, *microtask); + set_pending_microtask_count(num_tasks + 1); +} + - ASSERT(handle_scope_implementer()->CallDepthIsZero()); +void Isolate::RunMicrotasks() { + // %RunMicrotasks may be called in mjsunit tests, which violates + // this assertion, hence the check for --allow-natives-syntax. + // TODO(adamk): However, this also fails some layout tests. + // + // DCHECK(FLAG_allow_natives_syntax || + // handle_scope_implementer()->CallDepthIsZero()); // Increase call depth to prevent recursive callbacks. - handle_scope_implementer()->IncrementCallDepth(); - Execution::RunMicrotasks(this); - handle_scope_implementer()->DecrementCallDepth(); + v8::Isolate::SuppressMicrotaskExecutionScope suppress( + reinterpret_cast<v8::Isolate*>(this)); + + while (pending_microtask_count() > 0) { + HandleScope scope(this); + int num_tasks = pending_microtask_count(); + Handle<FixedArray> queue(heap()->microtask_queue(), this); + DCHECK(num_tasks <= queue->length()); + set_pending_microtask_count(0); + heap()->set_microtask_queue(heap()->empty_fixed_array()); + + for (int i = 0; i < num_tasks; i++) { + HandleScope scope(this); + Handle<Object> microtask(queue->get(i), this); + if (microtask->IsJSFunction()) { + Handle<JSFunction> microtask_function = + Handle<JSFunction>::cast(microtask); + SaveContext save(this); + set_context(microtask_function->context()->native_context()); + Handle<Object> exception; + MaybeHandle<Object> result = Execution::TryCall( + microtask_function, factory()->undefined_value(), + 0, NULL, &exception); + // If execution is terminating, just bail out. + if (result.is_null() && + !exception.is_null() && + *exception == heap()->termination_exception()) { + // Clear out any remaining callbacks in the queue. 
+ heap()->set_microtask_queue(heap()->empty_fixed_array()); + set_pending_microtask_count(0); + return; + } + } else { + Handle<CallHandlerInfo> callback_info = + Handle<CallHandlerInfo>::cast(microtask); + v8::MicrotaskCallback callback = + v8::ToCData<v8::MicrotaskCallback>(callback_info->callback()); + void* data = v8::ToCData<void*>(callback_info->data()); + callback(data); + } + } + } } +void Isolate::SetUseCounterCallback(v8::Isolate::UseCounterCallback callback) { + DCHECK(!use_counter_callback_); + use_counter_callback_ = callback; +} + + +void Isolate::CountUsage(v8::Isolate::UseCounterFeature feature) { + if (use_counter_callback_) { + use_counter_callback_(reinterpret_cast<v8::Isolate*>(this), feature); + } +} + + +bool StackLimitCheck::JsHasOverflowed() const { + StackGuard* stack_guard = isolate_->stack_guard(); +#ifdef USE_SIMULATOR + // The simulator uses a separate JS stack. + Address jssp_address = Simulator::current(isolate_)->get_sp(); + uintptr_t jssp = reinterpret_cast<uintptr_t>(jssp_address); + if (jssp < stack_guard->real_jslimit()) return true; +#endif // USE_SIMULATOR + return GetCurrentStackPosition() < stack_guard->real_climit(); +} + + +bool PostponeInterruptsScope::Intercept(StackGuard::InterruptFlag flag) { + // First check whether the previous scope intercepts. + if (prev_ && prev_->Intercept(flag)) return true; + // Then check whether this scope intercepts. + if ((flag & intercept_mask_)) { + intercepted_flags_ |= flag; + return true; + } + return false; +} + } } // namespace v8::internal diff --git a/deps/v8/src/isolate.h b/deps/v8/src/isolate.h index 5b1e77f20e1..9ef6fc732a9 100644 --- a/deps/v8/src/isolate.h +++ b/deps/v8/src/isolate.h @@ -5,33 +5,38 @@ #ifndef V8_ISOLATE_H_ #define V8_ISOLATE_H_ -#include "../include/v8-debug.h" -#include "allocation.h" -#include "assert-scope.h" -#include "atomicops.h" -#include "builtins.h" -#include "contexts.h" -#include "execution.h" -#include "frames.h" -#include "date.h" -#include "global-handles.h" -#include "handles.h" -#include "hashmap.h" -#include "heap.h" -#include "optimizing-compiler-thread.h" -#include "regexp-stack.h" -#include "runtime-profiler.h" -#include "runtime.h" -#include "zone.h" +#include "include/v8-debug.h" +#include "src/allocation.h" +#include "src/assert-scope.h" +#include "src/base/atomicops.h" +#include "src/builtins.h" +#include "src/contexts.h" +#include "src/date.h" +#include "src/execution.h" +#include "src/frames.h" +#include "src/global-handles.h" +#include "src/handles.h" +#include "src/hashmap.h" +#include "src/heap/heap.h" +#include "src/optimizing-compiler-thread.h" +#include "src/regexp-stack.h" +#include "src/runtime.h" +#include "src/runtime-profiler.h" +#include "src/zone.h" namespace v8 { + +namespace base { +class RandomNumberGenerator; +} + namespace internal { class Bootstrapper; -struct CallInterfaceDescriptor; +class CallInterfaceDescriptor; class CodeGenerator; class CodeRange; -struct CodeStubInterfaceDescriptor; +class CodeStubInterfaceDescriptor; class CodeTracer; class CompilationCache; class ConsStringIteratorOp; @@ -53,9 +58,7 @@ class HTracer; class InlineRuntimeFunctionsTable; class InnerPointerToCodeCache; class MaterializedObjectStore; -class NoAllocationStringAllocator; class CodeAgingHelper; -class RandomNumberGenerator; class RegExpStack; class SaveContext; class StringTracker; @@ -75,11 +78,11 @@ typedef void* ExternalReferenceRedirectorPointer(); class Debug; class Debugger; -class DebuggerAgent; #if !defined(__arm__) && V8_TARGET_ARCH_ARM || \ 
!defined(__aarch64__) && V8_TARGET_ARCH_ARM64 || \ - !defined(__mips__) && V8_TARGET_ARCH_MIPS + !defined(__mips__) && V8_TARGET_ARCH_MIPS || \ + !defined(__mips__) && V8_TARGET_ARCH_MIPS64 class Redirection; class Simulator; #endif @@ -103,19 +106,22 @@ typedef ZoneList<Handle<Object> > ZoneObjectList; // Macros for MaybeHandle. -#define RETURN_EXCEPTION_IF_SCHEDULED_EXCEPTION(isolate, T) \ - do { \ - Isolate* __isolate__ = (isolate); \ - if (__isolate__->has_scheduled_exception()) { \ - __isolate__->PromoteScheduledException(); \ - return MaybeHandle<T>(); \ - } \ +#define RETURN_VALUE_IF_SCHEDULED_EXCEPTION(isolate, value) \ + do { \ + Isolate* __isolate__ = (isolate); \ + if (__isolate__->has_scheduled_exception()) { \ + __isolate__->PromoteScheduledException(); \ + return value; \ + } \ } while (false) +#define RETURN_EXCEPTION_IF_SCHEDULED_EXCEPTION(isolate, T) \ + RETURN_VALUE_IF_SCHEDULED_EXCEPTION(isolate, MaybeHandle<T>()) + #define ASSIGN_RETURN_ON_EXCEPTION_VALUE(isolate, dst, call, value) \ do { \ if (!(call).ToHandle(&dst)) { \ - ASSERT((isolate)->has_pending_exception()); \ + DCHECK((isolate)->has_pending_exception()); \ return value; \ } \ } while (false) @@ -130,7 +136,7 @@ typedef ZoneList<Handle<Object> > ZoneObjectList; #define RETURN_ON_EXCEPTION_VALUE(isolate, call, value) \ do { \ if ((call).is_null()) { \ - ASSERT((isolate)->has_pending_exception()); \ + DCHECK((isolate)->has_pending_exception()); \ return value; \ } \ } while (false) @@ -192,7 +198,7 @@ class ThreadId { int id_; - static Atomic32 highest_thread_id_; + static base::Atomic32 highest_thread_id_; friend class Isolate; }; @@ -214,10 +220,10 @@ class ThreadLocalTop BASE_EMBEDDED { // Get the top C++ try catch handler or NULL if none are registered. // - // This method is not guarenteed to return an address that can be + // This method is not guaranteed to return an address that can be // used for comparison with addresses into the JS stack. If such an // address is needed, use try_catch_handler_address. - v8::TryCatch* TryCatchHandler(); + FIELD_ACCESSOR(v8::TryCatch*, try_catch_handler) // Get the address of the top C++ try catch handler or NULL if // none are registered. @@ -229,12 +235,15 @@ class ThreadLocalTop BASE_EMBEDDED { // stack, try_catch_handler_address returns a JS stack address that // corresponds to the place on the JS stack where the C++ handler // would have been if the stack were not separate. 
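// The change below stores the v8::TryCatch pointer itself and computes the
// JS-stack-comparable address on demand via JSStackComparableAddress(),
// instead of caching an Address at registration time. Only the ordering
// property matters; sketched here for a downward-growing stack (names
// illustrative):

#include <cstdint>

// Whether a handler was entered more recently than a given frame reduces to
// comparing the two addresses; on a downward-growing stack the more recent
// entry has the lower address. Both values must come from the same
// comparable address space.
bool PushedMoreRecently(uintptr_t handler_address, uintptr_t frame_address) {
  return handler_address < frame_address;
}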
- FIELD_ACCESSOR(Address, try_catch_handler_address) + Address try_catch_handler_address() { + return reinterpret_cast<Address>( + v8::TryCatch::JSStackComparableAddress(try_catch_handler())); + } void Free() { - ASSERT(!has_pending_message_); - ASSERT(!external_caught_exception_); - ASSERT(try_catch_handler_address_ == NULL); + DCHECK(!has_pending_message_); + DCHECK(!external_caught_exception_); + DCHECK(try_catch_handler_ == NULL); } Isolate* isolate_; @@ -282,13 +291,14 @@ class ThreadLocalTop BASE_EMBEDDED { private: void InitializeInternal(); - Address try_catch_handler_address_; + v8::TryCatch* try_catch_handler_; }; #if V8_TARGET_ARCH_ARM && !defined(__arm__) || \ V8_TARGET_ARCH_ARM64 && !defined(__aarch64__) || \ - V8_TARGET_ARCH_MIPS && !defined(__mips__) + V8_TARGET_ARCH_MIPS && !defined(__mips__) || \ + V8_TARGET_ARCH_MIPS64 && !defined(__mips__) #define ISOLATE_INIT_SIMULATOR_LIST(V) \ V(bool, simulator_initialized, false) \ @@ -330,8 +340,6 @@ typedef List<HeapObject*> DebugObjectCache; V(int, serialize_partial_snapshot_cache_capacity, 0) \ V(Object**, serialize_partial_snapshot_cache, NULL) \ /* Assembler state. */ \ - /* A previously allocated buffer of kMinimalBufferSize bytes, or NULL. */ \ - V(byte*, assembler_spare_buffer, NULL) \ V(FatalErrorCallback, exception_behavior, NULL) \ V(LogEventCallback, event_logger, NULL) \ V(AllowCodeGenerationFromStringsCallback, allow_code_gen_callback, NULL) \ @@ -350,15 +358,17 @@ typedef List<HeapObject*> DebugObjectCache; /* AstNode state. */ \ V(int, ast_node_id, 0) \ V(unsigned, ast_node_count, 0) \ - V(bool, microtask_pending, false) \ + V(int, pending_microtask_count, 0) \ V(bool, autorun_microtasks, true) \ V(HStatistics*, hstatistics, NULL) \ + V(HStatistics*, tstatistics, NULL) \ V(HTracer*, htracer, NULL) \ V(CodeTracer*, code_tracer, NULL) \ V(bool, fp_stubs_generated, false) \ V(int, max_available_threads, 0) \ V(uint32_t, per_isolate_assert_data, 0xFFFFFFFFu) \ - V(DebuggerAgent*, debugger_agent_instance, NULL) \ + V(InterruptCallback, api_interrupt_callback, NULL) \ + V(void*, api_interrupt_callback_data, NULL) \ ISOLATE_INIT_SIMULATOR_LIST(V) #define THREAD_LOCAL_TOP_ACCESSOR(type, name) \ @@ -386,7 +396,8 @@ class Isolate { thread_state_(NULL), #if !defined(__arm__) && V8_TARGET_ARCH_ARM || \ !defined(__aarch64__) && V8_TARGET_ARCH_ARM64 || \ - !defined(__mips__) && V8_TARGET_ARCH_MIPS + !defined(__mips__) && V8_TARGET_ARCH_MIPS || \ + !defined(__mips__) && V8_TARGET_ARCH_MIPS64 simulator_(NULL), #endif next_(NULL), @@ -400,7 +411,8 @@ class Isolate { #if !defined(__arm__) && V8_TARGET_ARCH_ARM || \ !defined(__aarch64__) && V8_TARGET_ARCH_ARM64 || \ - !defined(__mips__) && V8_TARGET_ARCH_MIPS + !defined(__mips__) && V8_TARGET_ARCH_MIPS || \ + !defined(__mips__) && V8_TARGET_ARCH_MIPS64 FIELD_ACCESSOR(Simulator*, simulator) #endif @@ -416,7 +428,8 @@ class Isolate { #if !defined(__arm__) && V8_TARGET_ARCH_ARM || \ !defined(__aarch64__) && V8_TARGET_ARCH_ARM64 || \ - !defined(__mips__) && V8_TARGET_ARCH_MIPS + !defined(__mips__) && V8_TARGET_ARCH_MIPS || \ + !defined(__mips__) && V8_TARGET_ARCH_MIPS64 Simulator* simulator_; #endif @@ -441,20 +454,31 @@ class Isolate { // Returns the PerIsolateThreadData for the current thread (or NULL if one is // not currently set). 
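// The accessors below now call EnsureInitialized() before every TLS access:
// the storage keys are created lazily under a mutex instead of by a static
// initializer. Sketch of that idiom with C++11/pthread primitives
// (illustrative; V8 wraps this in base::Thread::LocalStorageKey):

#include <mutex>
#include <pthread.h>

static pthread_key_t g_isolate_key;
static std::once_flag g_key_once;

static void EnsureKeyCreated() {
  std::call_once(g_key_once,
                 [] { pthread_key_create(&g_isolate_key, nullptr); });
}

static void* CurrentIsolateSlot() {
  EnsureKeyCreated();  // safe to call from any thread at any time
  return pthread_getspecific(g_isolate_key);
}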
static PerIsolateThreadData* CurrentPerIsolateThreadData() { + EnsureInitialized(); return reinterpret_cast<PerIsolateThreadData*>( - Thread::GetThreadLocal(per_isolate_thread_data_key_)); + base::Thread::GetThreadLocal(per_isolate_thread_data_key_)); } // Returns the isolate inside which the current thread is running. INLINE(static Isolate* Current()) { + EnsureInitialized(); Isolate* isolate = reinterpret_cast<Isolate*>( - Thread::GetExistingThreadLocal(isolate_key_)); - ASSERT(isolate != NULL); + base::Thread::GetExistingThreadLocal(isolate_key_)); + DCHECK(isolate != NULL); return isolate; } INLINE(static Isolate* UncheckedCurrent()) { - return reinterpret_cast<Isolate*>(Thread::GetThreadLocal(isolate_key_)); + EnsureInitialized(); + return reinterpret_cast<Isolate*>( + base::Thread::GetThreadLocal(isolate_key_)); + } + + // Like UncheckedCurrent, but skips the check that |isolate_key_| was + // initialized. Callers have to ensure that themselves. + INLINE(static Isolate* UnsafeCurrent()) { + return reinterpret_cast<Isolate*>( + base::Thread::GetThreadLocal(isolate_key_)); } // Usually called by Init(), but can be called early e.g. to allow @@ -478,15 +502,6 @@ class Isolate { static void GlobalTearDown(); - bool IsDefaultIsolate() const { return this == default_isolate_; } - - static void SetCrashIfDefaultIsolateInitialized(); - // Ensures that process-wide resources and the default isolate have been - // allocated. It is only necessary to call this method in rare cases, for - // example if you are using V8 from within the body of a static initializer. - // Safe to call multiple times. - static void EnsureDefaultIsolate(); - // Find the PerThread for this particular (isolate, thread) combination // If one does not yet exist, return null. PerIsolateThreadData* FindPerThreadDataForThisThread(); @@ -498,29 +513,28 @@ class Isolate { // Returns the key used to store the pointer to the current isolate. // Used internally for V8 threads that do not execute JavaScript but still // are part of the domain of an isolate (like the context switcher). - static Thread::LocalStorageKey isolate_key() { + static base::Thread::LocalStorageKey isolate_key() { + EnsureInitialized(); return isolate_key_; } // Returns the key used to store process-wide thread IDs. - static Thread::LocalStorageKey thread_id_key() { + static base::Thread::LocalStorageKey thread_id_key() { + EnsureInitialized(); return thread_id_key_; } - static Thread::LocalStorageKey per_isolate_thread_data_key(); + static base::Thread::LocalStorageKey per_isolate_thread_data_key(); // Mutex for serializing access to break control structures. - RecursiveMutex* break_access() { return &break_access_; } - - // Mutex for serializing access to debugger. - RecursiveMutex* debugger_access() { return &debugger_access_; } + base::RecursiveMutex* break_access() { return &break_access_; } Address get_address_from_id(AddressId id); // Access to top context (where the current function object was created). Context* context() { return thread_local_top_.context_; } void set_context(Context* context) { - ASSERT(context == NULL || context->IsContext()); + DCHECK(context == NULL || context->IsContext()); thread_local_top_.context_ = context; } Context** context_address() { return &thread_local_top_.context_; } @@ -532,18 +546,18 @@ class Isolate { // Interface to pending exception. 
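// The exception accessors below encode "no pending exception" as the heap's
// the_hole sentinel rather than a separate flag, keeping has/set/clear down
// to single loads and stores. The same shape with an explicit sentinel
// (illustrative):

#include <cassert>

int* const kNoException = nullptr;  // stand-in for the_hole_value()

struct ExceptionSlot {
  int* pending = kNoException;
  bool has_pending() const { return pending != kNoException; }
  void set_pending(int* e) { assert(e != kNoException); pending = e; }
  void clear_pending() { pending = kNoException; }
};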
Object* pending_exception() { - ASSERT(has_pending_exception()); - ASSERT(!thread_local_top_.pending_exception_->IsException()); + DCHECK(has_pending_exception()); + DCHECK(!thread_local_top_.pending_exception_->IsException()); return thread_local_top_.pending_exception_; } void set_pending_exception(Object* exception_obj) { - ASSERT(!exception_obj->IsException()); + DCHECK(!exception_obj->IsException()); thread_local_top_.pending_exception_ = exception_obj; } void clear_pending_exception() { - ASSERT(!thread_local_top_.pending_exception_->IsException()); + DCHECK(!thread_local_top_.pending_exception_->IsException()); thread_local_top_.pending_exception_ = heap_.the_hole_value(); } @@ -552,7 +566,7 @@ class Isolate { } bool has_pending_exception() { - ASSERT(!thread_local_top_.pending_exception_->IsException()); + DCHECK(!thread_local_top_.pending_exception_->IsException()); return !thread_local_top_.pending_exception_->IsTheHole(); } @@ -564,7 +578,7 @@ class Isolate { thread_local_top_.pending_message_script_ = heap_.the_hole_value(); } v8::TryCatch* try_catch_handler() { - return thread_local_top_.TryCatchHandler(); + return thread_local_top_.try_catch_handler(); } Address try_catch_handler_address() { return thread_local_top_.try_catch_handler_address(); @@ -593,20 +607,21 @@ class Isolate { } Object* scheduled_exception() { - ASSERT(has_scheduled_exception()); - ASSERT(!thread_local_top_.scheduled_exception_->IsException()); + DCHECK(has_scheduled_exception()); + DCHECK(!thread_local_top_.scheduled_exception_->IsException()); return thread_local_top_.scheduled_exception_; } bool has_scheduled_exception() { - ASSERT(!thread_local_top_.scheduled_exception_->IsException()); + DCHECK(!thread_local_top_.scheduled_exception_->IsException()); return thread_local_top_.scheduled_exception_ != heap_.the_hole_value(); } void clear_scheduled_exception() { - ASSERT(!thread_local_top_.scheduled_exception_->IsException()); + DCHECK(!thread_local_top_.scheduled_exception_->IsException()); thread_local_top_.scheduled_exception_ = heap_.the_hole_value(); } - bool IsExternallyCaught(); + bool HasExternalTryCatch(); + bool IsFinallyOnTop(); bool is_catchable_by_javascript(Object* exception) { return exception != heap()->termination_exception(); @@ -644,7 +659,7 @@ class Isolate { } // Returns the global proxy object of the current context. - Object* global_proxy() { + JSObject* global_proxy() { return context()->global_proxy(); } @@ -698,11 +713,11 @@ class Isolate { Handle<JSArray> CaptureCurrentStackTrace( int frame_limit, StackTrace::StackTraceOptions options); - - Handle<JSArray> CaptureSimpleStackTrace(Handle<JSObject> error_object, - Handle<Object> caller, - int limit); + Handle<Object> CaptureSimpleStackTrace(Handle<JSObject> error_object, + Handle<Object> caller); void CaptureAndSetDetailedStackTrace(Handle<JSObject> error_object); + void CaptureAndSetSimpleStackTrace(Handle<JSObject> error_object, + Handle<Object> caller); // Returns if the top context may access the given global object. If // the result is false, the pending exception is guaranteed to be @@ -737,6 +752,8 @@ class Isolate { // Re-set pending message, script and positions reported to the TryCatch // back to the TLS for re-use when rethrowing. void RestorePendingMessageFromTryCatch(v8::TryCatch* handler); + // Un-schedule an exception that was caught by a TryCatch handler. + void CancelScheduledExceptionFromTryCatch(v8::TryCatch* handler); void ReportPendingMessages(); // Return pending location if any or unfilled structure. 
MessageLocation GetMessageLocation(); @@ -760,6 +777,8 @@ class Isolate { Object* TerminateExecution(); void CancelTerminateExecution(); + void InvokeApiInterruptCallback(); + // Administration void Iterate(ObjectVisitor* v); void Iterate(ObjectVisitor* v, ThreadLocalTop* t); @@ -789,11 +808,11 @@ class Isolate { // Accessors. #define GLOBAL_ACCESSOR(type, name, initialvalue) \ inline type name() const { \ - ASSERT(OFFSET_OF(Isolate, name##_) == name##_debug_offset_); \ + DCHECK(OFFSET_OF(Isolate, name##_) == name##_debug_offset_); \ return name##_; \ } \ inline void set_##name(type value) { \ - ASSERT(OFFSET_OF(Isolate, name##_) == name##_debug_offset_); \ + DCHECK(OFFSET_OF(Isolate, name##_) == name##_debug_offset_); \ name##_ = value; \ } ISOLATE_INIT_LIST(GLOBAL_ACCESSOR) @@ -801,7 +820,7 @@ class Isolate { #define GLOBAL_ARRAY_ACCESSOR(type, name, length) \ inline type* name() { \ - ASSERT(OFFSET_OF(Isolate, name##_) == name##_debug_offset_); \ + DCHECK(OFFSET_OF(Isolate, name##_) == name##_debug_offset_); \ return &(name##_)[0]; \ } ISOLATE_INIT_ARRAY_LIST(GLOBAL_ARRAY_ACCESSOR) @@ -809,10 +828,10 @@ class Isolate { #define NATIVE_CONTEXT_FIELD_ACCESSOR(index, type, name) \ Handle<type> name() { \ - return Handle<type>(context()->native_context()->name(), this); \ + return Handle<type>(native_context()->name(), this); \ } \ bool is_##name(type* value) { \ - return context()->native_context()->is_##name(value); \ + return native_context()->is_##name(value); \ } NATIVE_CONTEXT_FIELDS(NATIVE_CONTEXT_FIELD_ACCESSOR) #undef NATIVE_CONTEXT_FIELD_ACCESSOR @@ -821,7 +840,7 @@ class Isolate { Counters* counters() { // Call InitializeLoggingAndCounters() if logging is needed before // the isolate is fully initialized. - ASSERT(counters_ != NULL); + DCHECK(counters_ != NULL); return counters_; } CodeRange* code_range() { return code_range_; } @@ -830,7 +849,7 @@ class Isolate { Logger* logger() { // Call InitializeLoggingAndCounters() if logging is needed before // the isolate is fully initialized. - ASSERT(logger_ != NULL); + DCHECK(logger_ != NULL); return logger_; } StackGuard* stack_guard() { return &stack_guard_; } @@ -863,7 +882,7 @@ class Isolate { HandleScopeData* handle_scope_data() { return &handle_scope_data_; } HandleScopeImplementer* handle_scope_implementer() { - ASSERT(handle_scope_implementer_); + DCHECK(handle_scope_implementer_); return handle_scope_implementer_; } Zone* runtime_zone() { return &runtime_zone_; } @@ -928,12 +947,8 @@ class Isolate { return &interp_canonicalize_mapping_; } - inline bool IsCodePreAgingActive(); - - Debugger* debugger() { return debugger_; } Debug* debug() { return debug_; } - inline bool IsDebuggerActive(); inline bool DebuggerHasBreakPoints(); CpuProfiler* cpu_profiler() const { return cpu_profiler_; } @@ -956,25 +971,33 @@ class Isolate { THREAD_LOCAL_TOP_ACCESSOR(StateTag, current_vm_state) void SetData(uint32_t slot, void* data) { - ASSERT(slot < Internals::kNumIsolateDataSlots); + DCHECK(slot < Internals::kNumIsolateDataSlots); embedder_data_[slot] = data; } void* GetData(uint32_t slot) { - ASSERT(slot < Internals::kNumIsolateDataSlots); + DCHECK(slot < Internals::kNumIsolateDataSlots); return embedder_data_[slot]; } THREAD_LOCAL_TOP_ACCESSOR(LookupResult*, top_lookup_result) + void enable_serializer() { + // The serializer can only be enabled before the isolate init. 
+ DCHECK(state_ != INITIALIZED); + serializer_enabled_ = true; + } + + bool serializer_enabled() const { return serializer_enabled_; } + bool IsDead() { return has_fatal_error_; } void SignalFatalError() { has_fatal_error_ = true; } - bool use_crankshaft() const { return use_crankshaft_; } + bool use_crankshaft() const; bool initialized_from_snapshot() { return initialized_from_snapshot_; } double time_millis_since_init() { - return OS::TimeCurrentMillis() - time_millis_at_init_; + return base::OS::TimeCurrentMillis() - time_millis_at_init_; } DateCache* date_cache() { @@ -1016,14 +1039,14 @@ class Isolate { bool concurrent_recompilation_enabled() { // Thread is only available with flag enabled. - ASSERT(optimizing_compiler_thread_ == NULL || + DCHECK(optimizing_compiler_thread_ == NULL || FLAG_concurrent_recompilation); return optimizing_compiler_thread_ != NULL; } bool concurrent_osr_enabled() const { // Thread is only available with flag enabled. - ASSERT(optimizing_compiler_thread_ == NULL || + DCHECK(optimizing_compiler_thread_ == NULL || FLAG_concurrent_recompilation); return optimizing_compiler_thread_ != NULL && FLAG_concurrent_osr; } @@ -1043,6 +1066,7 @@ class Isolate { int id() const { return static_cast<int>(id_); } HStatistics* GetHStatistics(); + HStatistics* GetTStatistics(); HTracer* GetHTracer(); CodeTracer* GetCodeTracer(); @@ -1053,7 +1077,7 @@ class Isolate { void* stress_deopt_count_address() { return &stress_deopt_count_; } - inline RandomNumberGenerator* random_number_generator(); + inline base::RandomNumberGenerator* random_number_generator(); // Given an address occupied by a live code object, return that object. Object* FindCodeObject(Address a); @@ -1073,9 +1097,15 @@ class Isolate { void RemoveCallCompletedCallback(CallCompletedCallback callback); void FireCallCompletedCallback(); + void EnqueueMicrotask(Handle<Object> microtask); void RunMicrotasks(); + void SetUseCounterCallback(v8::Isolate::UseCounterCallback callback); + void CountUsage(v8::Isolate::UseCounterFeature feature); + private: + static void EnsureInitialized(); + Isolate(); friend struct GlobalState; @@ -1134,18 +1164,16 @@ class Isolate { DISALLOW_COPY_AND_ASSIGN(EntryStackItem); }; - // This mutex protects highest_thread_id_, thread_data_table_ and - // default_isolate_. - static Mutex process_wide_mutex_; + // This mutex protects highest_thread_id_ and thread_data_table_. + static base::LazyMutex process_wide_mutex_; - static Thread::LocalStorageKey per_isolate_thread_data_key_; - static Thread::LocalStorageKey isolate_key_; - static Thread::LocalStorageKey thread_id_key_; - static Isolate* default_isolate_; + static base::Thread::LocalStorageKey per_isolate_thread_data_key_; + static base::Thread::LocalStorageKey isolate_key_; + static base::Thread::LocalStorageKey thread_id_key_; static ThreadDataTable* thread_data_table_; // A global counter for all generated Isolates, might overflow. - static Atomic32 isolate_counter_; + static base::Atomic32 isolate_counter_; void Deinit(); @@ -1176,13 +1204,16 @@ class Isolate { void FillCache(); - void PropagatePendingExceptionToExternalTryCatch(); + // Propagate pending exception message to the v8::TryCatch. + // If there is no external try-catch or message was successfully propagated, + // then return true. + bool PropagatePendingExceptionToExternalTryCatch(); // Traverse prototype chain to find out whether the object is derived from // the Error object. 
bool IsErrorObject(Handle<Object> obj); - Atomic32 id_; + base::Atomic32 id_; EntryStackItem* entry_stack_; int stack_trace_nesting_level_; StringStream* incomplete_message_; @@ -1192,9 +1223,8 @@ class Isolate { CompilationCache* compilation_cache_; Counters* counters_; CodeRange* code_range_; - RecursiveMutex break_access_; - Atomic32 debugger_initialized_; - RecursiveMutex debugger_access_; + base::RecursiveMutex break_access_; + base::Atomic32 debugger_initialized_; Logger* logger_; StackGuard stack_guard_; StatsTable* stats_table_; @@ -1235,14 +1265,14 @@ class Isolate { unibrow::Mapping<unibrow::Ecma262Canonicalize> interp_canonicalize_mapping_; CodeStubInterfaceDescriptor* code_stub_interface_descriptors_; CallInterfaceDescriptor* call_descriptors_; - RandomNumberGenerator* random_number_generator_; + base::RandomNumberGenerator* random_number_generator_; + + // Whether the isolate has been created for snapshotting. + bool serializer_enabled_; // True if fatal error has been signaled for this isolate. bool has_fatal_error_; - // True if we are using the Crankshaft optimizing compiler. - bool use_crankshaft_; - // True if this isolate was initialized from a snapshot. bool initialized_from_snapshot_; @@ -1255,7 +1285,6 @@ class Isolate { JSObject::SpillInformation js_spill_information_; #endif - Debugger* debugger_; Debug* debug_; CpuProfiler* cpu_profiler_; HeapProfiler* heap_profiler_; @@ -1295,6 +1324,8 @@ class Isolate { // List of callbacks when a Call completes. List<CallCompletedCallback> call_completed_callbacks_; + v8::Isolate::UseCounterCallback use_counter_callback_; + friend class ExecutionAccess; friend class HandleScopeImplementer; friend class IsolateInitializer; @@ -1353,7 +1384,7 @@ class AssertNoContextChange BASE_EMBEDDED { : isolate_(isolate), context_(isolate->context(), isolate) { } ~AssertNoContextChange() { - ASSERT(isolate_->context() == *context_); + DCHECK(isolate_->context() == *context_); } private: @@ -1385,15 +1416,20 @@ class ExecutionAccess BASE_EMBEDDED { }; -// Support for checking for stack-overflows in C++ code. +// Support for checking for stack-overflows. class StackLimitCheck BASE_EMBEDDED { public: explicit StackLimitCheck(Isolate* isolate) : isolate_(isolate) { } - bool HasOverflowed() const { + // Use this to check for stack-overflows in C++ code. + inline bool HasOverflowed() const { StackGuard* stack_guard = isolate_->stack_guard(); - return (reinterpret_cast<uintptr_t>(this) < stack_guard->real_climit()); + return GetCurrentStackPosition() < stack_guard->real_climit(); } + + // Use this to check for stack-overflow when entering runtime from JS code. + bool JsHasOverflowed() const; + private: Isolate* isolate_; }; @@ -1405,22 +1441,29 @@ class StackLimitCheck BASE_EMBEDDED { // account. 
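// The reworked scope below no longer bumps a global nesting counter:
// scopes form a chain via prev_, each with an intercept_mask_, and an
// interrupt is recorded by the bottom-most scope whose mask matches (see
// Intercept() in isolate.cc above). Minimal sketch of that walk:

struct ScopeLink {
  int intercept_mask;
  int intercepted_flags = 0;
  ScopeLink* prev = nullptr;

  // True if this scope or an earlier-pushed one swallows the flag; the
  // earliest matching scope records it, mirroring the recursion above.
  bool Intercept(int flag) {
    if (prev != nullptr && prev->Intercept(flag)) return true;
    if ((flag & intercept_mask) != 0) {
      intercepted_flags |= flag;
      return true;
    }
    return false;
  }
};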
class PostponeInterruptsScope BASE_EMBEDDED { public: - explicit PostponeInterruptsScope(Isolate* isolate) - : stack_guard_(isolate->stack_guard()), isolate_(isolate) { - ExecutionAccess access(isolate_); - stack_guard_->thread_local_.postpone_interrupts_nesting_++; - stack_guard_->DisableInterrupts(); + PostponeInterruptsScope(Isolate* isolate, + int intercept_mask = StackGuard::ALL_INTERRUPTS) + : stack_guard_(isolate->stack_guard()), + intercept_mask_(intercept_mask), + intercepted_flags_(0) { + stack_guard_->PushPostponeInterruptsScope(this); } ~PostponeInterruptsScope() { - ExecutionAccess access(isolate_); - if (--stack_guard_->thread_local_.postpone_interrupts_nesting_ == 0) { - stack_guard_->EnableInterrupts(); - } + stack_guard_->PopPostponeInterruptsScope(); } + + // Find the bottom-most scope that intercepts this interrupt. + // Return whether the interrupt has been intercepted. + bool Intercept(StackGuard::InterruptFlag flag); + private: StackGuard* stack_guard_; - Isolate* isolate_; + int intercept_mask_; + int intercepted_flags_; + PostponeInterruptsScope* prev_; + + friend class StackGuard; }; @@ -1435,12 +1478,12 @@ class CodeTracer V8_FINAL : public Malloced { } if (FLAG_redirect_code_traces_to == NULL) { - OS::SNPrintF(filename_, - "code-%d-%d.asm", - OS::GetCurrentProcessId(), - isolate_id); + SNPrintF(filename_, + "code-%d-%d.asm", + base::OS::GetCurrentProcessId(), + isolate_id); } else { - OS::StrNCpy(filename_, FLAG_redirect_code_traces_to, filename_.length()); + StrNCpy(filename_, FLAG_redirect_code_traces_to, filename_.length()); } WriteChars(filename_.start(), "", 0, false); @@ -1463,7 +1506,7 @@ class CodeTracer V8_FINAL : public Malloced { } if (file_ == NULL) { - file_ = OS::FOpen(filename_.start(), "a"); + file_ = base::OS::FOpen(filename_.start(), "a"); } scope_depth_++; diff --git a/deps/v8/src/json-parser.h b/deps/v8/src/json-parser.h index f3017784bcd..c23e50dbb4d 100644 --- a/deps/v8/src/json-parser.h +++ b/deps/v8/src/json-parser.h @@ -5,13 +5,13 @@ #ifndef V8_JSON_PARSER_H_ #define V8_JSON_PARSER_H_ -#include "v8.h" +#include "src/v8.h" -#include "char-predicates-inl.h" -#include "conversions.h" -#include "messages.h" -#include "spaces-inl.h" -#include "token.h" +#include "src/char-predicates-inl.h" +#include "src/conversions.h" +#include "src/heap/spaces-inl.h" +#include "src/messages.h" +#include "src/token.h" namespace v8 { namespace internal { @@ -104,7 +104,7 @@ class JsonParser BASE_EMBEDDED { DisallowHeapAllocation no_gc; String::FlatContent content = expected->GetFlatContent(); if (content.IsAscii()) { - ASSERT_EQ('"', c0_); + DCHECK_EQ('"', c0_); const uint8_t* input_chars = seq_source_->GetChars() + position_ + 1; const uint8_t* expected_chars = content.ToOneByteVector().start(); for (int i = 0; i < length; i++) { @@ -300,7 +300,7 @@ Handle<Object> JsonParser<seq_ascii>::ParseJsonObject() { factory()->NewJSObject(object_constructor(), pretenure_); Handle<Map> map(json_object->map()); ZoneList<Handle<Object> > properties(8, zone()); - ASSERT_EQ(c0_, '{'); + DCHECK_EQ(c0_, '{'); bool transitioning = true; @@ -358,19 +358,19 @@ Handle<Object> JsonParser<seq_ascii>::ParseJsonObject() { bool follow_expected = false; Handle<Map> target; if (seq_ascii) { - key = JSObject::ExpectedTransitionKey(map); + key = Map::ExpectedTransitionKey(map); follow_expected = !key.is_null() && ParseJsonString(key); } // If the expected transition hits, follow it. 
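// Context for the fast path taken here: while parsing a series of object
// literals with the same keys, the parser first tries the map's *expected*
// transition (the key most recently taken from this shape) and only falls
// back to a full transition search when that misses. A toy version of the
// lookup (illustrative, not V8's maps):

#include <map>
#include <string>

struct Shape {
  std::string expected_key;  // last transition taken from this shape
  std::map<std::string, Shape*> transitions;

  Shape* Advance(const std::string& key) {
    if (!expected_key.empty() && key == expected_key) {
      return transitions[key];  // fast path: predicted transition hits
    }
    std::map<std::string, Shape*>::iterator it = transitions.find(key);
    return it == transitions.end() ? nullptr : it->second;  // slow search
  }
};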
if (follow_expected) { - target = JSObject::ExpectedTransitionTarget(map); + target = Map::ExpectedTransitionTarget(map); } else { // If the expected transition failed, parse an internalized string and // try to find a matching transition. key = ParseJsonInternalizedString(); if (key.is_null()) return ReportUnexpectedCharacter(); - target = JSObject::FindTransitionToField(map, key); + target = Map::FindTransitionToField(map, key); // If a transition was found, follow it and continue. transitioning = !target.is_null(); } @@ -387,11 +387,9 @@ Handle<Object> JsonParser<seq_ascii>::ParseJsonObject() { Representation expected_representation = details.representation(); if (value->FitsRepresentation(expected_representation)) { - // If the target representation is double and the value is already - // double, use the existing box. - if (value->IsSmi() && expected_representation.IsDouble()) { - value = factory()->NewHeapNumber( - Handle<Smi>::cast(value)->value()); + if (expected_representation.IsDouble()) { + value = Object::NewStorageFor(isolate(), value, + expected_representation); } else if (expected_representation.IsHeapObject() && !target->instance_descriptors()->GetFieldType( descriptor)->NowContains(value)) { @@ -399,7 +397,7 @@ Handle<Object> JsonParser<seq_ascii>::ParseJsonObject() { isolate(), expected_representation)); Map::GeneralizeFieldType(target, descriptor, value_type); } - ASSERT(target->instance_descriptors()->GetFieldType( + DCHECK(target->instance_descriptors()->GetFieldType( descriptor)->NowContains(value)); properties.Add(value, zone()); map = target; @@ -414,7 +412,8 @@ Handle<Object> JsonParser<seq_ascii>::ParseJsonObject() { int length = properties.length(); for (int i = 0; i < length; i++) { Handle<Object> value = properties[i]; - json_object->FastPropertyAtPut(i, *value); + FieldIndex index = FieldIndex::ForPropertyIndex(*map, i); + json_object->FastPropertyAtPut(index, *value); } } else { key = ParseJsonInternalizedString(); @@ -425,7 +424,7 @@ Handle<Object> JsonParser<seq_ascii>::ParseJsonObject() { if (value.is_null()) return ReportUnexpectedCharacter(); } - JSObject::SetLocalPropertyIgnoreAttributes( + JSObject::SetOwnPropertyIgnoreAttributes( json_object, key, value, NONE).Assert(); } while (MatchSkipWhiteSpace(',')); if (c0_ != '}') { @@ -438,7 +437,8 @@ Handle<Object> JsonParser<seq_ascii>::ParseJsonObject() { int length = properties.length(); for (int i = 0; i < length; i++) { Handle<Object> value = properties[i]; - json_object->FastPropertyAtPut(i, *value); + FieldIndex index = FieldIndex::ForPropertyIndex(*map, i); + json_object->FastPropertyAtPut(index, *value); } } } @@ -451,7 +451,7 @@ template <bool seq_ascii> Handle<Object> JsonParser<seq_ascii>::ParseJsonArray() { HandleScope scope(isolate()); ZoneList<Handle<Object> > elements(4, zone()); - ASSERT_EQ(c0_, '['); + DCHECK_EQ(c0_, '['); AdvanceSkipWhitespace(); if (c0_ != ']') { @@ -526,7 +526,7 @@ Handle<Object> JsonParser<seq_ascii>::ParseJsonNumber() { number = StringToDouble(isolate()->unicode_cache(), chars, NO_FLAGS, // Hex, octal or trailing junk. - OS::nan_value()); + base::OS::nan_value()); } else { Vector<uint8_t> buffer = Vector<uint8_t>::New(length); String::WriteToFlat(*source_, buffer.start(), beg_pos, position_); @@ -666,7 +666,7 @@ Handle<String> JsonParser<seq_ascii>::SlowScanJsonString( } } - ASSERT_EQ('"', c0_); + DCHECK_EQ('"', c0_); // Advance past the last '"'. 
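// ScanJsonString() below folds the internalization hash into the scan loop
// (running_hash is updated per character) so short keys can be probed in
// the string table without a second pass over the characters. The one-pass
// shape, with an illustrative hash that is not V8's:

#include <cstddef>
#include <cstdint>

uint32_t ScanAndHash(const uint8_t* chars, size_t length) {
  uint32_t hash = 2166136261u;  // FNV-1a offset basis
  for (size_t i = 0; i < length; i++) {
    hash = (hash ^ chars[i]) * 16777619u;  // hash while scanning
  }
  return hash;
}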
AdvanceSkipWhitespace(); @@ -678,7 +678,7 @@ Handle<String> JsonParser<seq_ascii>::SlowScanJsonString( template <bool seq_ascii> template <bool is_internalized> Handle<String> JsonParser<seq_ascii>::ScanJsonString() { - ASSERT_EQ('"', c0_); + DCHECK_EQ('"', c0_); Advance(); if (c0_ == '"') { AdvanceSkipWhitespace(); @@ -719,7 +719,8 @@ Handle<String> JsonParser<seq_ascii>::ScanJsonString() { } while (c0 != '"'); int length = position - position_; uint32_t hash = (length <= String::kMaxHashCalcLength) - ? StringHasher::GetHashCore(running_hash) : length; + ? StringHasher::GetHashCore(running_hash) + : static_cast<uint32_t>(length); Vector<const uint8_t> string_vector( seq_source_->GetChars() + position_, length); StringTable* string_table = isolate()->heap()->string_table(); @@ -741,7 +742,7 @@ Handle<String> JsonParser<seq_ascii>::ScanJsonString() { #ifdef DEBUG uint32_t hash_field = (hash << String::kHashShift) | String::kIsNotArrayIndexMask; - ASSERT_EQ(static_cast<int>(result->Hash()), + DCHECK_EQ(static_cast<int>(result->Hash()), static_cast<int>(hash_field >> String::kHashShift)); #endif break; @@ -779,7 +780,7 @@ Handle<String> JsonParser<seq_ascii>::ScanJsonString() { uint8_t* dest = SeqOneByteString::cast(*result)->GetChars(); String::WriteToFlat(*source_, dest, beg_pos, position_); - ASSERT_EQ('"', c0_); + DCHECK_EQ('"', c0_); // Advance past the last '"'. AdvanceSkipWhitespace(); return result; diff --git a/deps/v8/src/json-stringifier.h b/deps/v8/src/json-stringifier.h index 7eb6746dfbe..81249a7d4a8 100644 --- a/deps/v8/src/json-stringifier.h +++ b/deps/v8/src/json-stringifier.h @@ -5,9 +5,10 @@ #ifndef V8_JSON_STRINGIFIER_H_ #define V8_JSON_STRINGIFIER_H_ -#include "v8.h" -#include "conversions.h" -#include "utils.h" +#include "src/v8.h" + +#include "src/conversions.h" +#include "src/utils.h" namespace v8 { namespace internal { @@ -94,7 +95,7 @@ class BasicJsonStringifier BASE_EMBEDDED { INLINE(Result SerializeProperty(Handle<Object> object, bool deferred_comma, Handle<String> deferred_key)) { - ASSERT(!deferred_key.is_null()); + DCHECK(!deferred_key.is_null()); return Serialize_<true>(object, deferred_comma, deferred_key); } @@ -262,7 +263,7 @@ MaybeHandle<Object> BasicJsonStringifier::Stringify(Handle<Object> object) { } return accumulator(); } - ASSERT(result == EXCEPTION); + DCHECK(result == EXCEPTION); return MaybeHandle<Object>(); } @@ -280,7 +281,7 @@ MaybeHandle<Object> BasicJsonStringifier::StringifyString( } object = String::Flatten(object); - ASSERT(object->IsFlat()); + DCHECK(object->IsFlat()); if (object->IsOneByteRepresentationUnderneath()) { Handle<String> result = isolate->factory()->NewRawOneByteString( worst_case_length).ToHandleChecked(); @@ -338,15 +339,9 @@ void BasicJsonStringifier::Append_(const Char* chars) { MaybeHandle<Object> BasicJsonStringifier::ApplyToJsonFunction( Handle<Object> object, Handle<Object> key) { - LookupResult lookup(isolate_); - JSObject::cast(*object)->LookupRealNamedProperty(tojson_string_, &lookup); - if (!lookup.IsProperty()) return object; - PropertyAttributes attr; + LookupIterator it(object, tojson_string_, LookupIterator::SKIP_INTERCEPTOR); Handle<Object> fun; - ASSIGN_RETURN_ON_EXCEPTION( - isolate_, fun, - Object::GetProperty(object, object, &lookup, tojson_string_, &attr), - Object); + ASSIGN_RETURN_ON_EXCEPTION(isolate_, fun, Object::GetProperty(&it), Object); if (!fun->IsJSFunction()) return object; // Call toJSON function. 
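// The rewrite above replaces the LookupResult two-step (lookup, then
// GetProperty with attributes) with one LookupIterator walk that skips
// interceptors; the observable JSON.stringify behavior is unchanged. Its
// control flow, reduced to a self-contained sketch (types invented):

#include <functional>

struct Obj {
  std::function<Obj*(Obj*)> to_json;  // empty if no callable toJSON
};

Obj* MaybeApplyToJson(Obj* object) {
  if (!object->to_json) return object;  // not a function: serialize as-is
  return object->to_json(object);       // serialize the toJSON() result
}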
@@ -412,6 +407,7 @@ BasicJsonStringifier::Result BasicJsonStringifier::Serialize_( switch (HeapObject::cast(*object)->map()->instance_type()) { case HEAP_NUMBER_TYPE: + case MUTABLE_HEAP_NUMBER_TYPE: if (deferred_string_key) SerializeDeferredKey(comma, key); return SerializeHeapNumber(Handle<HeapNumber>::cast(object)); case ODDBALL_TYPE: @@ -446,7 +442,8 @@ BasicJsonStringifier::Result BasicJsonStringifier::Serialize_( SerializeString(Handle<String>::cast(object)); return SUCCESS; } else if (object->IsJSObject()) { - if (object->IsAccessCheckNeeded()) break; + // Go to slow path for global proxy and objects requiring access checks. + if (object->IsAccessCheckNeeded() || object->IsJSGlobalProxy()) break; if (deferred_string_key) SerializeDeferredKey(comma, key); return SerializeJSObject(Handle<JSObject>::cast(object)); } @@ -509,9 +506,9 @@ BasicJsonStringifier::Result BasicJsonStringifier::SerializeJSValue( if (value->IsSmi()) return SerializeSmi(Smi::cast(*value)); SerializeHeapNumber(Handle<HeapNumber>::cast(value)); } else { - ASSERT(class_name == isolate_->heap()->Boolean_string()); + DCHECK(class_name == isolate_->heap()->Boolean_string()); Object* value = JSValue::cast(*object)->value(); - ASSERT(value->IsBoolean()); + DCHECK(value->IsBoolean()); AppendAscii(value->IsTrue() ? "true" : "false"); } return SUCCESS; @@ -634,11 +631,7 @@ BasicJsonStringifier::Result BasicJsonStringifier::SerializeJSObject( HandleScope handle_scope(isolate_); Result stack_push = StackPush(object); if (stack_push != SUCCESS) return stack_push; - if (object->IsJSGlobalProxy()) { - object = Handle<JSObject>( - JSObject::cast(object->GetPrototype()), isolate_); - ASSERT(object->IsGlobalObject()); - } + DCHECK(!object->IsJSGlobalProxy() && !object->IsGlobalObject()); Append('{'); bool comma = false; @@ -657,10 +650,8 @@ BasicJsonStringifier::Result BasicJsonStringifier::SerializeJSObject( if (details.IsDontEnum()) continue; Handle<Object> property; if (details.type() == FIELD && *map == object->map()) { - property = Handle<Object>( - object->RawFastPropertyAt( - map->instance_descriptors()->GetFieldIndex(i)), - isolate_); + property = Handle<Object>(object->RawFastPropertyAt( + FieldIndex::ForDescriptor(*map, i)), isolate_); } else { ASSIGN_RETURN_ON_EXCEPTION_VALUE( isolate_, property, @@ -675,7 +666,7 @@ BasicJsonStringifier::Result BasicJsonStringifier::SerializeJSObject( Handle<FixedArray> contents; ASSIGN_RETURN_ON_EXCEPTION_VALUE( isolate_, contents, - JSReceiver::GetKeys(object, JSReceiver::LOCAL_ONLY), + JSReceiver::GetKeys(object, JSReceiver::OWN_ONLY), EXCEPTION); for (int i = 0; i < contents->length(); i++) { @@ -686,7 +677,7 @@ BasicJsonStringifier::Result BasicJsonStringifier::SerializeJSObject( key_handle = Handle<String>(String::cast(key), isolate_); maybe_property = Object::GetPropertyOrElement(object, key_handle); } else { - ASSERT(key->IsNumber()); + DCHECK(key->IsNumber()); key_handle = factory_->NumberToString(Handle<Object>(key, isolate_)); uint32_t index; if (key->IsSmi()) { @@ -715,7 +706,7 @@ BasicJsonStringifier::Result BasicJsonStringifier::SerializeJSObject( void BasicJsonStringifier::ShrinkCurrentPart() { - ASSERT(current_index_ < part_length_); + DCHECK(current_index_ < part_length_); current_part_ = SeqString::Truncate(Handle<SeqString>::cast(current_part_), current_index_); } @@ -745,7 +736,7 @@ void BasicJsonStringifier::Extend() { current_part_ = factory_->NewRawTwoByteString(part_length_).ToHandleChecked(); } - ASSERT(!current_part_.is_null()); + 
DCHECK(!current_part_.is_null()); current_index_ = 0; } @@ -755,7 +746,7 @@ void BasicJsonStringifier::ChangeEncoding() { Accumulate(); current_part_ = factory_->NewRawTwoByteString(part_length_).ToHandleChecked(); - ASSERT(!current_part_.is_null()); + DCHECK(!current_part_.is_null()); current_index_ = 0; is_ascii_ = false; } @@ -769,7 +760,7 @@ int BasicJsonStringifier::SerializeStringUnchecked_(const SrcChar* src, // Assert that uc16 character is not truncated down to 8 bit. // The <uc16, char> version of this method must not be called. - ASSERT(sizeof(*dest) >= sizeof(*src)); + DCHECK(sizeof(*dest) >= sizeof(*src)); for (int i = 0; i < length; i++) { SrcChar c = src[i]; @@ -850,7 +841,7 @@ template <> Vector<const uint8_t> BasicJsonStringifier::GetCharVector( Handle<String> string) { String::FlatContent flat = string->GetFlatContent(); - ASSERT(flat.IsAscii()); + DCHECK(flat.IsAscii()); return flat.ToOneByteVector(); } @@ -858,7 +849,7 @@ Vector<const uint8_t> BasicJsonStringifier::GetCharVector( template <> Vector<const uc16> BasicJsonStringifier::GetCharVector(Handle<String> string) { String::FlatContent flat = string->GetFlatContent(); - ASSERT(flat.IsTwoByte()); + DCHECK(flat.IsTwoByte()); return flat.ToUC16Vector(); } diff --git a/deps/v8/src/json.js b/deps/v8/src/json.js index 93e38b0dba7..f767f4a195e 100644 --- a/deps/v8/src/json.js +++ b/deps/v8/src/json.js @@ -2,6 +2,8 @@ // Use of this source code is governed by a BSD-style license that can be // found in the LICENSE file. +"use strict"; + // This file relies on the fact that the following declarations have been made // in runtime.js: // var $Array = global.Array; diff --git a/deps/v8/src/jsregexp-inl.h b/deps/v8/src/jsregexp-inl.h index 34e60fa0abb..1ab70b8c4b8 100644 --- a/deps/v8/src/jsregexp-inl.h +++ b/deps/v8/src/jsregexp-inl.h @@ -6,11 +6,11 @@ #ifndef V8_JSREGEXP_INL_H_ #define V8_JSREGEXP_INL_H_ -#include "allocation.h" -#include "handles.h" -#include "heap.h" -#include "jsregexp.h" -#include "objects.h" +#include "src/allocation.h" +#include "src/handles.h" +#include "src/heap/heap.h" +#include "src/jsregexp.h" +#include "src/objects.h" namespace v8 { namespace internal { diff --git a/deps/v8/src/jsregexp.cc b/deps/v8/src/jsregexp.cc index 7284c778cb6..27b8699cf21 100644 --- a/deps/v8/src/jsregexp.cc +++ b/deps/v8/src/jsregexp.cc @@ -2,42 +2,46 @@ // Use of this source code is governed by a BSD-style license that can be // found in the LICENSE file. 
-#include "v8.h" - -#include "ast.h" -#include "compiler.h" -#include "execution.h" -#include "factory.h" -#include "jsregexp.h" -#include "jsregexp-inl.h" -#include "platform.h" -#include "string-search.h" -#include "runtime.h" -#include "compilation-cache.h" -#include "string-stream.h" -#include "parser.h" -#include "regexp-macro-assembler.h" -#include "regexp-macro-assembler-tracer.h" -#include "regexp-macro-assembler-irregexp.h" -#include "regexp-stack.h" +#include "src/v8.h" + +#include "src/ast.h" +#include "src/base/platform/platform.h" +#include "src/compilation-cache.h" +#include "src/compiler.h" +#include "src/execution.h" +#include "src/factory.h" +#include "src/jsregexp-inl.h" +#include "src/jsregexp.h" +#include "src/ostreams.h" +#include "src/parser.h" +#include "src/regexp-macro-assembler.h" +#include "src/regexp-macro-assembler-irregexp.h" +#include "src/regexp-macro-assembler-tracer.h" +#include "src/regexp-stack.h" +#include "src/runtime.h" +#include "src/string-search.h" #ifndef V8_INTERPRETED_REGEXP #if V8_TARGET_ARCH_IA32 -#include "ia32/regexp-macro-assembler-ia32.h" +#include "src/ia32/regexp-macro-assembler-ia32.h" // NOLINT #elif V8_TARGET_ARCH_X64 -#include "x64/regexp-macro-assembler-x64.h" +#include "src/x64/regexp-macro-assembler-x64.h" // NOLINT #elif V8_TARGET_ARCH_ARM64 -#include "arm64/regexp-macro-assembler-arm64.h" +#include "src/arm64/regexp-macro-assembler-arm64.h" // NOLINT #elif V8_TARGET_ARCH_ARM -#include "arm/regexp-macro-assembler-arm.h" +#include "src/arm/regexp-macro-assembler-arm.h" // NOLINT #elif V8_TARGET_ARCH_MIPS -#include "mips/regexp-macro-assembler-mips.h" +#include "src/mips/regexp-macro-assembler-mips.h" // NOLINT +#elif V8_TARGET_ARCH_MIPS64 +#include "src/mips64/regexp-macro-assembler-mips64.h" // NOLINT +#elif V8_TARGET_ARCH_X87 +#include "src/x87/regexp-macro-assembler-x87.h" // NOLINT #else #error Unsupported target architecture. #endif #endif -#include "interpreter-irregexp.h" +#include "src/interpreter-irregexp.h" namespace v8 { @@ -93,8 +97,8 @@ ContainedInLattice AddRange(ContainedInLattice containment, const int* ranges, int ranges_length, Interval new_range) { - ASSERT((ranges_length & 1) == 1); - ASSERT(ranges[ranges_length - 1] == String::kMaxUtf16CodeUnit + 1); + DCHECK((ranges_length & 1) == 1); + DCHECK(ranges[ranges_length - 1] == String::kMaxUtf16CodeUnit + 1); if (containment == kLatticeUnknown) return containment; bool inside = false; int last = 0; @@ -203,7 +207,7 @@ MaybeHandle<Object> RegExpImpl::Compile(Handle<JSRegExp> re, if (!has_been_compiled) { IrregexpInitialize(re, pattern, flags, parse_result.capture_count); } - ASSERT(re->data()->IsFixedArray()); + DCHECK(re->data()->IsFixedArray()); // Compilation succeeded so the data is set on the regexp // and we can store it in the cache. 
Handle<FixedArray> data(FixedArray::cast(re->data())); @@ -265,16 +269,16 @@ int RegExpImpl::AtomExecRaw(Handle<JSRegExp> regexp, int output_size) { Isolate* isolate = regexp->GetIsolate(); - ASSERT(0 <= index); - ASSERT(index <= subject->length()); + DCHECK(0 <= index); + DCHECK(index <= subject->length()); subject = String::Flatten(subject); DisallowHeapAllocation no_gc; // ensure vectors stay valid String* needle = String::cast(regexp->DataAt(JSRegExp::kAtomPatternIndex)); int needle_len = needle->length(); - ASSERT(needle->IsFlat()); - ASSERT_LT(0, needle_len); + DCHECK(needle->IsFlat()); + DCHECK_LT(0, needle_len); if (index + needle_len > subject->length()) { return RegExpImpl::RE_FAILURE; @@ -283,8 +287,8 @@ int RegExpImpl::AtomExecRaw(Handle<JSRegExp> regexp, for (int i = 0; i < output_size; i += 2) { String::FlatContent needle_content = needle->GetFlatContent(); String::FlatContent subject_content = subject->GetFlatContent(); - ASSERT(needle_content.IsFlat()); - ASSERT(subject_content.IsFlat()); + DCHECK(needle_content.IsFlat()); + DCHECK(subject_content.IsFlat()); // dispatch on type of strings index = (needle_content.IsAscii() ? (subject_content.IsAscii() @@ -331,7 +335,7 @@ Handle<Object> RegExpImpl::AtomExec(Handle<JSRegExp> re, if (res == RegExpImpl::RE_FAILURE) return isolate->factory()->null_value(); - ASSERT_EQ(res, RegExpImpl::RE_SUCCESS); + DCHECK_EQ(res, RegExpImpl::RE_SUCCESS); SealHandleScope shs(isolate); FixedArray* array = FixedArray::cast(last_match_info->elements()); SetAtomLastCapture(array, *subject, output_registers[0], output_registers[1]); @@ -361,7 +365,7 @@ bool RegExpImpl::EnsureCompiledIrregexp( if (saved_code->IsCode()) { // Reinstate the code in the original place. re->SetDataAt(JSRegExp::code_index(is_ascii), saved_code); - ASSERT(compiled_code->IsSmi()); + DCHECK(compiled_code->IsSmi()); return true; } return CompileIrregexp(re, sample_subject, is_ascii); @@ -397,9 +401,9 @@ bool RegExpImpl::CompileIrregexp(Handle<JSRegExp> re, // When arriving here entry can only be a smi, either representing an // uncompiled regexp, a previous compilation error, or code that has // been flushed. - ASSERT(entry->IsSmi()); + DCHECK(entry->IsSmi()); int entry_value = Smi::cast(entry)->value(); - ASSERT(entry_value == JSRegExp::kUninitializedValue || + DCHECK(entry_value == JSRegExp::kUninitializedValue || entry_value == JSRegExp::kCompilationErrorValue || (entry_value < JSRegExp::kCodeAgeMask && entry_value >= 0)); @@ -408,7 +412,7 @@ bool RegExpImpl::CompileIrregexp(Handle<JSRegExp> re, // the saved code index (we store the error message, not the actual // error). Recreate the error object and throw it. 
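The AtomExecRaw path above is the fast case: a JSRegExp whose pattern contains no regexp operators is stored as a plain "atom" string, and execution reduces to a substring search from the given index. A rough sketch of the idea, with std::string::find standing in for V8's specialized one-byte/two-byte SearchString dispatch:

#include <iostream>
#include <string>

// Sketch of the atom fast path (illustrative only): return the index of
// the first occurrence of needle at or after index, or -1 on failure.
int AtomExec(const std::string& subject, const std::string& needle,
             size_t index) {
  size_t found = subject.find(needle, index);
  return found == std::string::npos ? -1 : static_cast<int>(found);
}

int main() {
  std::cout << AtomExec("haystack with a needle", "needle", 0) << "\n";  // 16
  return 0;
}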
Object* error_string = re->DataAt(JSRegExp::saved_code_index(is_ascii)); - ASSERT(error_string->IsString()); + DCHECK(error_string->IsString()); Handle<String> error_message(String::cast(error_string)); CreateRegExpErrorObjectAndThrow(re, is_ascii, error_message, isolate); return false; @@ -535,14 +539,14 @@ int RegExpImpl::IrregexpExecRaw(Handle<JSRegExp> regexp, Handle<FixedArray> irregexp(FixedArray::cast(regexp->data()), isolate); - ASSERT(index >= 0); - ASSERT(index <= subject->length()); - ASSERT(subject->IsFlat()); + DCHECK(index >= 0); + DCHECK(index <= subject->length()); + DCHECK(subject->IsFlat()); bool is_ascii = subject->IsOneByteRepresentationUnderneath(); #ifndef V8_INTERPRETED_REGEXP - ASSERT(output_size >= (IrregexpNumberOfCaptures(*irregexp) + 1) * 2); + DCHECK(output_size >= (IrregexpNumberOfCaptures(*irregexp) + 1) * 2); do { EnsureCompiledIrregexp(regexp, subject, is_ascii); Handle<Code> code(IrregexpNativeCode(*irregexp, is_ascii), isolate); @@ -558,7 +562,7 @@ int RegExpImpl::IrregexpExecRaw(Handle<JSRegExp> regexp, index, isolate); if (res != NativeRegExpMacroAssembler::RETRY) { - ASSERT(res != NativeRegExpMacroAssembler::EXCEPTION || + DCHECK(res != NativeRegExpMacroAssembler::EXCEPTION || isolate->has_pending_exception()); STATIC_ASSERT( static_cast<int>(NativeRegExpMacroAssembler::SUCCESS) == RE_SUCCESS); @@ -581,7 +585,7 @@ int RegExpImpl::IrregexpExecRaw(Handle<JSRegExp> regexp, return RE_EXCEPTION; #else // V8_INTERPRETED_REGEXP - ASSERT(output_size >= IrregexpNumberOfRegisters(*irregexp)); + DCHECK(output_size >= IrregexpNumberOfRegisters(*irregexp)); // We must have done EnsureCompiledIrregexp, so we can get the number of // registers. int number_of_capture_registers = @@ -602,11 +606,10 @@ int RegExpImpl::IrregexpExecRaw(Handle<JSRegExp> regexp, index); if (result == RE_SUCCESS) { // Copy capture results to the start of the registers array. - OS::MemCopy( - output, raw_output, number_of_capture_registers * sizeof(int32_t)); + MemCopy(output, raw_output, number_of_capture_registers * sizeof(int32_t)); } if (result == RE_EXCEPTION) { - ASSERT(!isolate->has_pending_exception()); + DCHECK(!isolate->has_pending_exception()); isolate->StackOverflow(); } return result; @@ -619,7 +622,7 @@ MaybeHandle<Object> RegExpImpl::IrregexpExec(Handle<JSRegExp> regexp, int previous_index, Handle<JSArray> last_match_info) { Isolate* isolate = regexp->GetIsolate(); - ASSERT_EQ(regexp->TypeTag(), JSRegExp::IRREGEXP); + DCHECK_EQ(regexp->TypeTag(), JSRegExp::IRREGEXP); // Prepare space for the return values. #if defined(V8_INTERPRETED_REGEXP) && defined(DEBUG) @@ -632,7 +635,7 @@ MaybeHandle<Object> RegExpImpl::IrregexpExec(Handle<JSRegExp> regexp, int required_registers = RegExpImpl::IrregexpPrepare(regexp, subject); if (required_registers < 0) { // Compiling failed with an exception. 
- ASSERT(isolate->has_pending_exception()); + DCHECK(isolate->has_pending_exception()); return MaybeHandle<Object>(); } @@ -654,10 +657,10 @@ MaybeHandle<Object> RegExpImpl::IrregexpExec(Handle<JSRegExp> regexp, last_match_info, subject, capture_count, output_registers); } if (res == RE_EXCEPTION) { - ASSERT(isolate->has_pending_exception()); + DCHECK(isolate->has_pending_exception()); return MaybeHandle<Object>(); } - ASSERT(res == RE_FAILURE); + DCHECK(res == RE_FAILURE); return isolate->factory()->null_value(); } @@ -666,7 +669,7 @@ Handle<JSArray> RegExpImpl::SetLastMatchInfo(Handle<JSArray> last_match_info, Handle<String> subject, int capture_count, int32_t* match) { - ASSERT(last_match_info->HasFastObjectElements()); + DCHECK(last_match_info->HasFastObjectElements()); int capture_register_count = (capture_count + 1) * 2; JSArray::EnsureSize(last_match_info, capture_register_count + kLastMatchOverhead); @@ -733,8 +736,8 @@ RegExpImpl::GlobalCache::GlobalCache(Handle<JSRegExp> regexp, // to the compiled regexp. current_match_index_ = max_matches_ - 1; num_matches_ = max_matches_; - ASSERT(registers_per_match_ >= 2); // Each match has at least one capture. - ASSERT_GE(register_array_size_, registers_per_match_); + DCHECK(registers_per_match_ >= 2); // Each match has at least one capture. + DCHECK_GE(register_array_size_, registers_per_match_); int32_t* last_match = &register_array_[current_match_index_ * registers_per_match_]; last_match[0] = -1; @@ -963,7 +966,7 @@ class FrequencyCollator { // Does not measure in percent, but rather per-128 (the table size from the // regexp macro assembler). int Frequency(int in_character) { - ASSERT((in_character & RegExpMacroAssembler::kTableMask) == in_character); + DCHECK((in_character & RegExpMacroAssembler::kTableMask) == in_character); if (total_samples_ < 1) return 1; // Division by zero. 
int freq_in_per128 = (frequencies_[in_character].counter() * 128) / total_samples_; @@ -1085,7 +1088,7 @@ RegExpCompiler::RegExpCompiler(int capture_count, bool ignore_case, bool ascii, frequency_collator_(), zone_(zone) { accept_ = new(zone) EndNode(EndNode::ACCEPT, zone); - ASSERT(next_register_ - 1 <= RegExpMacroAssembler::kMaxRegister); + DCHECK(next_register_ - 1 <= RegExpMacroAssembler::kMaxRegister); } @@ -1132,8 +1135,8 @@ RegExpEngine::CompilationResult RegExpCompiler::Assemble( #ifdef DEBUG if (FLAG_print_code) { CodeTracer::Scope trace_scope(heap->isolate()->GetCodeTracer()); - Handle<Code>::cast(code)->Disassemble(pattern->ToCString().get(), - trace_scope.file()); + OFStream os(trace_scope.file()); + Handle<Code>::cast(code)->Disassemble(pattern->ToCString().get(), os); } if (FLAG_trace_regexp_assembler) { delete macro_assembler_; @@ -1165,7 +1168,7 @@ bool Trace::mentions_reg(int reg) { bool Trace::GetStoredPosition(int reg, int* cp_offset) { - ASSERT_EQ(0, *cp_offset); + DCHECK_EQ(0, *cp_offset); for (DeferredAction* action = actions_; action != NULL; action = action->next()) { @@ -1204,11 +1207,12 @@ int Trace::FindAffectedRegisters(OutSet* affected_registers, void Trace::RestoreAffectedRegisters(RegExpMacroAssembler* assembler, int max_register, - OutSet& registers_to_pop, - OutSet& registers_to_clear) { + const OutSet& registers_to_pop, + const OutSet& registers_to_clear) { for (int reg = max_register; reg >= 0; reg--) { - if (registers_to_pop.Get(reg)) assembler->PopRegister(reg); - else if (registers_to_clear.Get(reg)) { + if (registers_to_pop.Get(reg)) { + assembler->PopRegister(reg); + } else if (registers_to_clear.Get(reg)) { int clear_to = reg; while (reg > 0 && registers_to_clear.Get(reg - 1)) { reg--; @@ -1221,7 +1225,7 @@ void Trace::RestoreAffectedRegisters(RegExpMacroAssembler* assembler, void Trace::PerformDeferredActions(RegExpMacroAssembler* assembler, int max_register, - OutSet& affected_registers, + const OutSet& affected_registers, OutSet* registers_to_pop, OutSet* registers_to_clear, Zone* zone) { @@ -1266,16 +1270,16 @@ void Trace::PerformDeferredActions(RegExpMacroAssembler* assembler, // we can set undo_action to IGNORE if we know there is no value to // restore. undo_action = RESTORE; - ASSERT_EQ(store_position, -1); - ASSERT(!clear); + DCHECK_EQ(store_position, -1); + DCHECK(!clear); break; } case ActionNode::INCREMENT_REGISTER: if (!absolute) { value++; } - ASSERT_EQ(store_position, -1); - ASSERT(!clear); + DCHECK_EQ(store_position, -1); + DCHECK(!clear); undo_action = RESTORE; break; case ActionNode::STORE_POSITION: { @@ -1297,8 +1301,8 @@ void Trace::PerformDeferredActions(RegExpMacroAssembler* assembler, } else { undo_action = pc->is_capture() ? 
CLEAR : RESTORE; } - ASSERT(!absolute); - ASSERT_EQ(value, 0); + DCHECK(!absolute); + DCHECK_EQ(value, 0); break; } case ActionNode::CLEAR_CAPTURES: { @@ -1309,8 +1313,8 @@ void Trace::PerformDeferredActions(RegExpMacroAssembler* assembler, clear = true; } undo_action = RESTORE; - ASSERT(!absolute); - ASSERT_EQ(value, 0); + DCHECK(!absolute); + DCHECK_EQ(value, 0); break; } default: @@ -1355,7 +1359,7 @@ void Trace::PerformDeferredActions(RegExpMacroAssembler* assembler, void Trace::Flush(RegExpCompiler* compiler, RegExpNode* successor) { RegExpMacroAssembler* assembler = compiler->macro_assembler(); - ASSERT(!is_trivial()); + DCHECK(!is_trivial()); if (actions_ == NULL && backtrack() == NULL) { // Here we just have some deferred cp advances to fix and we are back to @@ -1572,13 +1576,13 @@ void ChoiceNode::GenerateGuard(RegExpMacroAssembler* macro_assembler, Trace* trace) { switch (guard->op()) { case Guard::LT: - ASSERT(!trace->mentions_reg(guard->reg())); + DCHECK(!trace->mentions_reg(guard->reg())); macro_assembler->IfRegisterGE(guard->reg(), guard->value(), trace->backtrack()); break; case Guard::GEQ: - ASSERT(!trace->mentions_reg(guard->reg())); + DCHECK(!trace->mentions_reg(guard->reg())); macro_assembler->IfRegisterLT(guard->reg(), guard->value(), trace->backtrack()); @@ -1682,12 +1686,12 @@ static bool ShortCutEmitCharacterPair(RegExpMacroAssembler* macro_assembler, if (((exor - 1) & exor) == 0) { // If c1 and c2 differ only by one bit. // Ecma262UnCanonicalize always gives the highest number last. - ASSERT(c2 > c1); + DCHECK(c2 > c1); uc16 mask = char_mask ^ exor; macro_assembler->CheckNotCharacterAfterAnd(c1, mask, on_failure); return true; } - ASSERT(c2 > c1); + DCHECK(c2 > c1); uc16 diff = c2 - c1; if (((diff - 1) & diff) == 0 && c1 >= diff) { // If the characters differ by 2^n but don't differ by one bit then @@ -1733,7 +1737,7 @@ static inline bool EmitAtomLetter(Isolate* isolate, macro_assembler->LoadCurrentCharacter(cp_offset, on_failure, check); } Label ok; - ASSERT(unibrow::Ecma262UnCanonicalize::kMaxWidth == 4); + DCHECK(unibrow::Ecma262UnCanonicalize::kMaxWidth == 4); switch (length) { case 2: { if (ShortCutEmitCharacterPair(macro_assembler, @@ -1821,9 +1825,9 @@ static void EmitUseLookupTable( // Assert that everything is on one kTableSize page. for (int i = start_index; i <= end_index; i++) { - ASSERT_EQ(ranges->at(i) & ~kMask, base); + DCHECK_EQ(ranges->at(i) & ~kMask, base); } - ASSERT(start_index == 0 || (ranges->at(start_index - 1) & ~kMask) <= base); + DCHECK(start_index == 0 || (ranges->at(start_index - 1) & ~kMask) <= base); char templ[kSize]; Label* on_bit_set; @@ -1879,7 +1883,7 @@ static void CutOutRange(RegExpMacroAssembler* masm, &dummy, in_range_label, &dummy); - ASSERT(!dummy.is_linked()); + DCHECK(!dummy.is_linked()); // Cut out the single range by rewriting the array. This creates a new // range that is a merger of the two ranges on either side of the one we // are cutting out. The oddity of the labels is preserved. 
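ShortCutEmitCharacterPair, in the hunks above, relies on a bit trick: when two case-equivalent characters differ in exactly one bit (the test ((exor - 1) & exor) == 0 checks that the XOR is a power of two), that bit can be masked away and a single AND-plus-compare matches either character. A standalone illustration, using a hypothetical helper name rather than V8 code:

#include <cassert>
#include <cstdint>
#include <iostream>

// If c1 and c2 differ in exactly one bit, clearing that bit from both
// makes them compare equal, so one masked compare covers both cases.
bool MatchesEitherCase(uint16_t c, uint16_t c1, uint16_t c2) {
  uint16_t exor = c1 ^ c2;
  assert(exor != 0 && ((exor - 1) & exor) == 0);  // exactly one bit differs
  uint16_t mask = static_cast<uint16_t>(~exor);   // drop the differing bit
  return (c & mask) == (c1 & mask);
}

int main() {
  // 'a' (0x61) and 'A' (0x41) differ only in bit 0x20.
  std::cout << MatchesEitherCase('a', 'a', 'A')          // 1
            << MatchesEitherCase('A', 'a', 'A')          // 1
            << MatchesEitherCase('b', 'a', 'A') << "\n"; // 0
  return 0;
}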
@@ -1946,7 +1950,7 @@ static void SplitSearchSpace(ZoneList<int>* ranges, } } - ASSERT(*new_start_index > start_index); + DCHECK(*new_start_index > start_index); *new_end_index = *new_start_index - 1; if (ranges->at(*new_end_index) == *border) { (*new_end_index)--; @@ -1977,7 +1981,7 @@ static void GenerateBranches(RegExpMacroAssembler* masm, int first = ranges->at(start_index); int last = ranges->at(end_index) - 1; - ASSERT_LT(min_char, first); + DCHECK_LT(min_char, first); // Just need to test if the character is before or on-or-after // a particular character. @@ -2010,7 +2014,7 @@ static void GenerateBranches(RegExpMacroAssembler* masm, if (cut == kNoCutIndex) cut = start_index; CutOutRange( masm, ranges, start_index, end_index, cut, even_label, odd_label); - ASSERT_GE(end_index - start_index, 2); + DCHECK_GE(end_index - start_index, 2); GenerateBranches(masm, ranges, start_index + 1, @@ -2070,25 +2074,25 @@ static void GenerateBranches(RegExpMacroAssembler* masm, // We didn't find any section that started after the limit, so everything // above the border is one of the terminal labels. above = (end_index & 1) != (start_index & 1) ? odd_label : even_label; - ASSERT(new_end_index == end_index - 1); + DCHECK(new_end_index == end_index - 1); } - ASSERT_LE(start_index, new_end_index); - ASSERT_LE(new_start_index, end_index); - ASSERT_LT(start_index, new_start_index); - ASSERT_LT(new_end_index, end_index); - ASSERT(new_end_index + 1 == new_start_index || + DCHECK_LE(start_index, new_end_index); + DCHECK_LE(new_start_index, end_index); + DCHECK_LT(start_index, new_start_index); + DCHECK_LT(new_end_index, end_index); + DCHECK(new_end_index + 1 == new_start_index || (new_end_index + 2 == new_start_index && border == ranges->at(new_end_index + 1))); - ASSERT_LT(min_char, border - 1); - ASSERT_LT(border, max_char); - ASSERT_LT(ranges->at(new_end_index), border); - ASSERT(border < ranges->at(new_start_index) || + DCHECK_LT(min_char, border - 1); + DCHECK_LT(border, max_char); + DCHECK_LT(ranges->at(new_end_index), border); + DCHECK(border < ranges->at(new_start_index) || (border == ranges->at(new_start_index) && new_start_index == end_index && new_end_index == end_index - 1 && border == last + 1)); - ASSERT(new_start_index == 0 || border >= ranges->at(new_start_index - 1)); + DCHECK(new_start_index == 0 || border >= ranges->at(new_start_index - 1)); masm->CheckCharacterGT(border - 1, above); Label dummy; @@ -2205,7 +2209,7 @@ static void EmitCharClass(RegExpMacroAssembler* macro_assembler, for (int i = 0; i <= last_valid_range; i++) { CharacterRange& range = ranges->at(i); if (range.from() == 0) { - ASSERT_EQ(i, 0); + DCHECK_EQ(i, 0); zeroth_entry_is_failure = !zeroth_entry_is_failure; } else { range_boundaries->Add(range.from(), zone); @@ -2462,7 +2466,7 @@ bool RegExpNode::EmitQuickCheck(RegExpCompiler* compiler, details, compiler, 0, trace->at_start() == Trace::FALSE_VALUE); if (details->cannot_match()) return false; if (!details->Rationalize(compiler->ascii())) return false; - ASSERT(details->characters() == 1 || + DCHECK(details->characters() == 1 || compiler->macro_assembler()->CanReadUnaligned()); uint32_t mask = details->mask(); uint32_t value = details->value(); @@ -2532,7 +2536,7 @@ void TextNode::GetQuickCheckDetails(QuickCheckDetails* details, int characters_filled_in, bool not_at_start) { Isolate* isolate = compiler->macro_assembler()->zone()->isolate(); - ASSERT(characters_filled_in < details->characters()); + DCHECK(characters_filled_in < details->characters()); int characters = 
details->characters(); int char_mask; if (compiler->ascii()) { @@ -2561,7 +2565,7 @@ void TextNode::GetQuickCheckDetails(QuickCheckDetails* details, unibrow::uchar chars[unibrow::Ecma262UnCanonicalize::kMaxWidth]; int length = GetCaseIndependentLetters(isolate, c, compiler->ascii(), chars); - ASSERT(length != 0); // Can only happen if c > char_mask (see above). + DCHECK(length != 0); // Can only happen if c > char_mask (see above). if (length == 1) { // This letter has no case equivalents, so it's nice and simple // and the mask-compare will determine definitely whether we have @@ -2597,7 +2601,7 @@ void TextNode::GetQuickCheckDetails(QuickCheckDetails* details, pos->determines_perfectly = true; } characters_filled_in++; - ASSERT(characters_filled_in <= details->characters()); + DCHECK(characters_filled_in <= details->characters()); if (characters_filled_in == details->characters()) { return; } @@ -2663,13 +2667,13 @@ void TextNode::GetQuickCheckDetails(QuickCheckDetails* details, pos->value = bits; } characters_filled_in++; - ASSERT(characters_filled_in <= details->characters()); + DCHECK(characters_filled_in <= details->characters()); if (characters_filled_in == details->characters()) { return; } } } - ASSERT(characters_filled_in != details->characters()); + DCHECK(characters_filled_in != details->characters()); if (!details->cannot_match()) { on_success()-> GetQuickCheckDetails(details, compiler, @@ -2690,7 +2694,7 @@ void QuickCheckDetails::Clear() { void QuickCheckDetails::Advance(int by, bool ascii) { - ASSERT(by >= 0); + DCHECK(by >= 0); if (by >= characters_) { Clear(); return; @@ -2711,7 +2715,7 @@ void QuickCheckDetails::Advance(int by, bool ascii) { void QuickCheckDetails::Merge(QuickCheckDetails* other, int from_index) { - ASSERT(characters_ == other->characters_); + DCHECK(characters_ == other->characters_); if (other->cannot_match_) { return; } @@ -2742,7 +2746,7 @@ void QuickCheckDetails::Merge(QuickCheckDetails* other, int from_index) { class VisitMarker { public: explicit VisitMarker(NodeInfo* info) : info_(info) { - ASSERT(!info->visited); + DCHECK(!info->visited); info->visited = true; } ~VisitMarker() { @@ -2756,7 +2760,7 @@ class VisitMarker { RegExpNode* SeqRegExpNode::FilterASCII(int depth, bool ignore_case) { if (info()->replacement_calculated) return replacement(); if (depth < 0) return this; - ASSERT(!info()->visited); + DCHECK(!info()->visited); VisitMarker marker(info()); return FilterSuccessor(depth - 1, ignore_case); } @@ -2790,7 +2794,7 @@ static bool RangesContainLatin1Equivalents(ZoneList<CharacterRange>* ranges) { RegExpNode* TextNode::FilterASCII(int depth, bool ignore_case) { if (info()->replacement_calculated) return replacement(); if (depth < 0) return this; - ASSERT(!info()->visited); + DCHECK(!info()->visited); VisitMarker marker(info()); int element_count = elms_->length(); for (int i = 0; i < element_count; i++) { @@ -2811,7 +2815,7 @@ RegExpNode* TextNode::FilterASCII(int depth, bool ignore_case) { copy[j] = converted; } } else { - ASSERT(elm.text_type() == TextElement::CHAR_CLASS); + DCHECK(elm.text_type() == TextElement::CHAR_CLASS); RegExpCharacterClass* cc = elm.char_class(); ZoneList<CharacterRange>* ranges = cc->ranges(zone()); if (!CharacterRange::IsCanonical(ranges)) { @@ -2880,7 +2884,7 @@ RegExpNode* ChoiceNode::FilterASCII(int depth, bool ignore_case) { GuardedAlternative alternative = alternatives_->at(i); RegExpNode* replacement = alternative.node()->FilterASCII(depth - 1, ignore_case); - ASSERT(replacement != this); // No missing 
EMPTY_MATCH_CHECK. + DCHECK(replacement != this); // No missing EMPTY_MATCH_CHECK. if (replacement != NULL) { alternatives_->at(i).set_node(replacement); surviving++; @@ -2966,7 +2970,7 @@ void ChoiceNode::GetQuickCheckDetails(QuickCheckDetails* details, bool not_at_start) { not_at_start = (not_at_start || not_at_start_); int choice_count = alternatives_->length(); - ASSERT(choice_count > 0); + DCHECK(choice_count > 0); alternatives_->at(0).node()->GetQuickCheckDetails(details, compiler, characters_filled_in, @@ -3090,7 +3094,7 @@ void AssertionNode::EmitBoundaryCheck(RegExpCompiler* compiler, Trace* trace) { } else if (next_is_word_character == Trace::TRUE_VALUE) { BacktrackIfPrevious(compiler, trace, at_boundary ? kIsWord : kIsNonWord); } else { - ASSERT(next_is_word_character == Trace::FALSE_VALUE); + DCHECK(next_is_word_character == Trace::FALSE_VALUE); BacktrackIfPrevious(compiler, trace, at_boundary ? kIsNonWord : kIsWord); } } @@ -3246,7 +3250,7 @@ void TextNode::TextEmitPass(RegExpCompiler* compiler, EmitCharacterFunction* emit_function = NULL; switch (pass) { case NON_ASCII_MATCH: - ASSERT(ascii); + DCHECK(ascii); if (quarks[j] > String::kMaxOneByteCharCode) { assembler->GoTo(backtrack); return; @@ -3276,7 +3280,7 @@ void TextNode::TextEmitPass(RegExpCompiler* compiler, } } } else { - ASSERT_EQ(TextElement::CHAR_CLASS, elm.text_type()); + DCHECK_EQ(TextElement::CHAR_CLASS, elm.text_type()); if (pass == CHARACTER_CLASS_MATCH) { if (first_element_checked && i == 0) continue; if (DeterminedAlready(quick_check, elm.cp_offset())) continue; @@ -3298,7 +3302,7 @@ void TextNode::TextEmitPass(RegExpCompiler* compiler, int TextNode::Length() { TextElement elm = elms_->last(); - ASSERT(elm.cp_offset() >= 0); + DCHECK(elm.cp_offset() >= 0); return elm.cp_offset() + elm.length(); } @@ -3322,7 +3326,7 @@ bool TextNode::SkipPass(int int_pass, bool ignore_case) { void TextNode::Emit(RegExpCompiler* compiler, Trace* trace) { LimitResult limit_result = LimitVersions(compiler, trace); if (limit_result == DONE) return; - ASSERT(limit_result == CONTINUE); + DCHECK(limit_result == CONTINUE); if (trace->cp_offset() + Length() > RegExpMacroAssembler::kMaxCPOffset) { compiler->SetRegExpTooBig(); @@ -3379,7 +3383,7 @@ void Trace::InvalidateCurrentCharacter() { void Trace::AdvanceCurrentPositionInTrace(int by, RegExpCompiler* compiler) { - ASSERT(by > 0); + DCHECK(by > 0); // We don't have an instruction for shifting the current character register // down or for using a shifted value for anything so lets just forget that // we preloaded any characters into it. @@ -3474,14 +3478,14 @@ int ChoiceNode::GreedyLoopTextLengthForAlternative( void LoopChoiceNode::AddLoopAlternative(GuardedAlternative alt) { - ASSERT_EQ(loop_node_, NULL); + DCHECK_EQ(loop_node_, NULL); AddAlternative(alt); loop_node_ = alt.node(); } void LoopChoiceNode::AddContinueAlternative(GuardedAlternative alt) { - ASSERT_EQ(continue_node_, NULL); + DCHECK_EQ(continue_node_, NULL); AddAlternative(alt); continue_node_ = alt.node(); } @@ -3492,15 +3496,15 @@ void LoopChoiceNode::Emit(RegExpCompiler* compiler, Trace* trace) { if (trace->stop_node() == this) { int text_length = GreedyLoopTextLengthForAlternative(&(alternatives_->at(0))); - ASSERT(text_length != kNodeIsTooComplexForGreedyLoops); + DCHECK(text_length != kNodeIsTooComplexForGreedyLoops); // Update the counter-based backtracking info on the stack. This is an // optimization for greedy loops (see below). 
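The GetQuickCheckDetails machinery above accumulates, per preloaded character position, a (mask, value) pair such that a character can match only if (c & mask) == value; the generated code uses this one masked compare to reject most positions before running the full node. A sketch of the idea with illustrative names (not V8's types):

#include <cstdint>
#include <iostream>

// A character can only match if (c & mask) == value; false from
// MightMatch means the position is definitely not a match.
struct QuickCheck {
  uint32_t mask;
  uint32_t value;
};

// Quick check for one case-insensitive ASCII letter: leave the case bit
// out of the mask, keep the bits both case variants share.
QuickCheck ForLetter(char lower, char upper) {
  uint32_t exor = static_cast<uint32_t>(lower ^ upper);
  QuickCheck qc;
  qc.mask = 0x7f & ~exor;
  qc.value = static_cast<uint32_t>(lower) & qc.mask;
  return qc;
}

bool MightMatch(const QuickCheck& qc, uint32_t c) {
  return (c & qc.mask) == qc.value;
}

int main() {
  QuickCheck qc = ForLetter('a', 'A');
  std::cout << MightMatch(qc, 'a') << MightMatch(qc, 'A')
            << MightMatch(qc, 'b') << "\n";  // 110
  return 0;
}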
- ASSERT(trace->cp_offset() == text_length); + DCHECK(trace->cp_offset() == text_length); macro_assembler->AdvanceCurrentPosition(text_length); macro_assembler->GoTo(trace->loop_label()); return; } - ASSERT(trace->stop_node() == NULL); + DCHECK(trace->stop_node() == NULL); if (!trace->is_trivial()) { trace->Flush(compiler, this); return; @@ -3810,7 +3814,7 @@ bool BoyerMooreLookahead::EmitSkipInstructions(RegExpMacroAssembler* masm) { Handle<ByteArray> boolean_skip_table = factory->NewByteArray(kSize, TENURED); int skip_distance = GetSkipTable( min_lookahead, max_lookahead, boolean_skip_table); - ASSERT(skip_distance != 0); + DCHECK(skip_distance != 0); Label cont, again; masm->Bind(&again); @@ -3911,14 +3915,14 @@ void ChoiceNode::Emit(RegExpCompiler* compiler, Trace* trace) { ZoneList<Guard*>* guards = alternative.guards(); int guard_count = (guards == NULL) ? 0 : guards->length(); for (int j = 0; j < guard_count; j++) { - ASSERT(!trace->mentions_reg(guards->at(j)->reg())); + DCHECK(!trace->mentions_reg(guards->at(j)->reg())); } } #endif LimitResult limit_result = LimitVersions(compiler, trace); if (limit_result == DONE) return; - ASSERT(limit_result == CONTINUE); + DCHECK(limit_result == CONTINUE); int new_flush_budget = trace->flush_budget() / choice_count; if (trace->flush_budget() == 0 && trace->actions() != NULL) { @@ -3946,7 +3950,7 @@ void ChoiceNode::Emit(RegExpCompiler* compiler, Trace* trace) { // information for each iteration of the loop, which could take up a lot of // space. greedy_loop = true; - ASSERT(trace->stop_node() == NULL); + DCHECK(trace->stop_node() == NULL); macro_assembler->PushCurrentPosition(); current_trace = &counter_backtrack_trace; Label greedy_match_failed; @@ -3985,7 +3989,7 @@ void ChoiceNode::Emit(RegExpCompiler* compiler, Trace* trace) { // and step forwards 3 if the character is not one of abc. Abc need // not be atoms, they can be any reasonably limited character class or // small alternation. - ASSERT(trace->is_trivial()); // This is the case on LoopChoiceNodes. + DCHECK(trace->is_trivial()); // This is the case on LoopChoiceNodes. 
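BoyerMooreLookahead::EmitSkipInstructions, touched above, emits a skip loop driven by a boolean table: if the character at the far end of the lookahead window cannot occur anywhere in the window, no match can start inside it and the whole window is skipped. A much-simplified sketch of that loop (the real table is a ByteArray indexed modulo its size, and the emitted code is assembler, not C++):

#include <cstddef>
#include <cstdint>
#include <iostream>
#include <string>

// can_occur[c] says whether character c can appear anywhere in the
// lookahead window; while the probe character cannot, hop forward by
// skip_distance.
size_t SkipAhead(const std::string& subject, size_t pos, size_t lookahead,
                 const bool (&can_occur)[128], size_t skip_distance) {
  while (pos + lookahead < subject.size() &&
         !can_occur[static_cast<uint8_t>(subject[pos + lookahead]) & 0x7f]) {
    pos += skip_distance;
  }
  return pos;
}

int main() {
  bool can_occur[128] = {false};
  can_occur['x'] = true;  // only pattern characters are marked
  std::string subject = "aaaaaaaaxaaa";
  std::cout << SkipAhead(subject, 0, 0, can_occur, 1) << "\n";  // 8
  return 0;
}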
BoyerMooreLookahead* lookahead = bm_info(not_at_start); if (lookahead == NULL) { eats_at_least = Min(kMaxLookaheadForBoyerMoore, @@ -4170,7 +4174,7 @@ void ActionNode::Emit(RegExpCompiler* compiler, Trace* trace) { RegExpMacroAssembler* assembler = compiler->macro_assembler(); LimitResult limit_result = LimitVersions(compiler, trace); if (limit_result == DONE) return; - ASSERT(limit_result == CONTINUE); + DCHECK(limit_result == CONTINUE); RecursionCheck rc(compiler); @@ -4278,7 +4282,7 @@ void ActionNode::Emit(RegExpCompiler* compiler, Trace* trace) { int clear_registers_to = clear_registers_from + clear_register_count - 1; assembler->ClearRegisters(clear_registers_from, clear_registers_to); - ASSERT(trace->backtrack() == NULL); + DCHECK(trace->backtrack() == NULL); assembler->Backtrack(); return; } @@ -4297,11 +4301,11 @@ void BackReferenceNode::Emit(RegExpCompiler* compiler, Trace* trace) { LimitResult limit_result = LimitVersions(compiler, trace); if (limit_result == DONE) return; - ASSERT(limit_result == CONTINUE); + DCHECK(limit_result == CONTINUE); RecursionCheck rc(compiler); - ASSERT_EQ(start_reg_ + 1, end_reg_); + DCHECK_EQ(start_reg_ + 1, end_reg_); if (compiler->ignore_case()) { assembler->CheckNotBackReferenceIgnoreCase(start_reg_, trace->backtrack()); @@ -4321,44 +4325,41 @@ void BackReferenceNode::Emit(RegExpCompiler* compiler, Trace* trace) { class DotPrinter: public NodeVisitor { public: - explicit DotPrinter(bool ignore_case) - : ignore_case_(ignore_case), - stream_(&alloc_) { } + DotPrinter(OStream& os, bool ignore_case) // NOLINT + : os_(os), + ignore_case_(ignore_case) {} void PrintNode(const char* label, RegExpNode* node); void Visit(RegExpNode* node); void PrintAttributes(RegExpNode* from); - StringStream* stream() { return &stream_; } void PrintOnFailure(RegExpNode* from, RegExpNode* to); #define DECLARE_VISIT(Type) \ virtual void Visit##Type(Type##Node* that); FOR_EACH_NODE_TYPE(DECLARE_VISIT) #undef DECLARE_VISIT private: + OStream& os_; bool ignore_case_; - HeapStringAllocator alloc_; - StringStream stream_; }; void DotPrinter::PrintNode(const char* label, RegExpNode* node) { - stream()->Add("digraph G {\n graph [label=\""); + os_ << "digraph G {\n graph [label=\""; for (int i = 0; label[i]; i++) { switch (label[i]) { case '\\': - stream()->Add("\\\\"); + os_ << "\\\\"; break; case '"': - stream()->Add("\""); + os_ << "\""; break; default: - stream()->Put(label[i]); + os_ << label[i]; break; } } - stream()->Add("\"];\n"); + os_ << "\"];\n"; Visit(node); - stream()->Add("}\n"); - printf("%s", stream()->ToCString().get()); + os_ << "}" << endl; } @@ -4370,97 +4371,95 @@ void DotPrinter::Visit(RegExpNode* node) { void DotPrinter::PrintOnFailure(RegExpNode* from, RegExpNode* on_failure) { - stream()->Add(" n%p -> n%p [style=dotted];\n", from, on_failure); + os_ << " n" << from << " -> n" << on_failure << " [style=dotted];\n"; Visit(on_failure); } class TableEntryBodyPrinter { public: - TableEntryBodyPrinter(StringStream* stream, ChoiceNode* choice) - : stream_(stream), choice_(choice) { } + TableEntryBodyPrinter(OStream& os, ChoiceNode* choice) // NOLINT + : os_(os), + choice_(choice) {} void Call(uc16 from, DispatchTable::Entry entry) { OutSet* out_set = entry.out_set(); for (unsigned i = 0; i < OutSet::kFirstLimit; i++) { if (out_set->Get(i)) { - stream()->Add(" n%p:s%io%i -> n%p;\n", - choice(), - from, - i, - choice()->alternatives()->at(i).node()); + os_ << " n" << choice() << ":s" << from << "o" << i << " -> n" + << choice()->alternatives()->at(i).node() << 
";\n"; } } } private: - StringStream* stream() { return stream_; } ChoiceNode* choice() { return choice_; } - StringStream* stream_; + OStream& os_; ChoiceNode* choice_; }; class TableEntryHeaderPrinter { public: - explicit TableEntryHeaderPrinter(StringStream* stream) - : first_(true), stream_(stream) { } + explicit TableEntryHeaderPrinter(OStream& os) // NOLINT + : first_(true), + os_(os) {} void Call(uc16 from, DispatchTable::Entry entry) { if (first_) { first_ = false; } else { - stream()->Add("|"); + os_ << "|"; } - stream()->Add("{\\%k-\\%k|{", from, entry.to()); + os_ << "{\\" << AsUC16(from) << "-\\" << AsUC16(entry.to()) << "|{"; OutSet* out_set = entry.out_set(); int priority = 0; for (unsigned i = 0; i < OutSet::kFirstLimit; i++) { if (out_set->Get(i)) { - if (priority > 0) stream()->Add("|"); - stream()->Add("<s%io%i> %i", from, i, priority); + if (priority > 0) os_ << "|"; + os_ << "<s" << from << "o" << i << "> " << priority; priority++; } } - stream()->Add("}}"); + os_ << "}}"; } private: bool first_; - StringStream* stream() { return stream_; } - StringStream* stream_; + OStream& os_; }; class AttributePrinter { public: - explicit AttributePrinter(DotPrinter* out) - : out_(out), first_(true) { } + explicit AttributePrinter(OStream& os) // NOLINT + : os_(os), + first_(true) {} void PrintSeparator() { if (first_) { first_ = false; } else { - out_->stream()->Add("|"); + os_ << "|"; } } void PrintBit(const char* name, bool value) { if (!value) return; PrintSeparator(); - out_->stream()->Add("{%s}", name); + os_ << "{" << name << "}"; } void PrintPositive(const char* name, int value) { if (value < 0) return; PrintSeparator(); - out_->stream()->Add("{%s|%x}", name, value); + os_ << "{" << name << "|" << value << "}"; } + private: - DotPrinter* out_; + OStream& os_; bool first_; }; void DotPrinter::PrintAttributes(RegExpNode* that) { - stream()->Add(" a%p [shape=Mrecord, color=grey, fontcolor=grey, " - "margin=0.1, fontsize=10, label=\"{", - that); - AttributePrinter printer(this); + os_ << " a" << that << " [shape=Mrecord, color=grey, fontcolor=grey, " + << "margin=0.1, fontsize=10, label=\"{"; + AttributePrinter printer(os_); NodeInfo* info = that->info(); printer.PrintBit("NI", info->follows_newline_interest); printer.PrintBit("WI", info->follows_word_interest); @@ -4468,27 +4467,27 @@ void DotPrinter::PrintAttributes(RegExpNode* that) { Label* label = that->label(); if (label->is_bound()) printer.PrintPositive("@", label->pos()); - stream()->Add("}\"];\n"); - stream()->Add(" a%p -> n%p [style=dashed, color=grey, " - "arrowhead=none];\n", that, that); + os_ << "}\"];\n" + << " a" << that << " -> n" << that + << " [style=dashed, color=grey, arrowhead=none];\n"; } static const bool kPrintDispatchTable = false; void DotPrinter::VisitChoice(ChoiceNode* that) { if (kPrintDispatchTable) { - stream()->Add(" n%p [shape=Mrecord, label=\"", that); - TableEntryHeaderPrinter header_printer(stream()); + os_ << " n" << that << " [shape=Mrecord, label=\""; + TableEntryHeaderPrinter header_printer(os_); that->GetTable(ignore_case_)->ForEach(&header_printer); - stream()->Add("\"]\n", that); + os_ << "\"]\n"; PrintAttributes(that); - TableEntryBodyPrinter body_printer(stream(), that); + TableEntryBodyPrinter body_printer(os_, that); that->GetTable(ignore_case_)->ForEach(&body_printer); } else { - stream()->Add(" n%p [shape=Mrecord, label=\"?\"];\n", that); + os_ << " n" << that << " [shape=Mrecord, label=\"?\"];\n"; for (int i = 0; i < that->alternatives()->length(); i++) { GuardedAlternative 
alt = that->alternatives()->at(i); - stream()->Add(" n%p -> n%p;\n", that, alt.node()); + os_ << " n" << that << " -> n" << alt.node(); } } for (int i = 0; i < that->alternatives()->length(); i++) { @@ -4500,138 +4499,136 @@ void DotPrinter::VisitChoice(ChoiceNode* that) { void DotPrinter::VisitText(TextNode* that) { Zone* zone = that->zone(); - stream()->Add(" n%p [label=\"", that); + os_ << " n" << that << " [label=\""; for (int i = 0; i < that->elements()->length(); i++) { - if (i > 0) stream()->Add(" "); + if (i > 0) os_ << " "; TextElement elm = that->elements()->at(i); switch (elm.text_type()) { case TextElement::ATOM: { - stream()->Add("'%w'", elm.atom()->data()); + Vector<const uc16> data = elm.atom()->data(); + for (int i = 0; i < data.length(); i++) { + os_ << static_cast<char>(data[i]); + } break; } case TextElement::CHAR_CLASS: { RegExpCharacterClass* node = elm.char_class(); - stream()->Add("["); - if (node->is_negated()) - stream()->Add("^"); + os_ << "["; + if (node->is_negated()) os_ << "^"; for (int j = 0; j < node->ranges(zone)->length(); j++) { CharacterRange range = node->ranges(zone)->at(j); - stream()->Add("%k-%k", range.from(), range.to()); + os_ << AsUC16(range.from()) << "-" << AsUC16(range.to()); } - stream()->Add("]"); + os_ << "]"; break; } default: UNREACHABLE(); } } - stream()->Add("\", shape=box, peripheries=2];\n"); + os_ << "\", shape=box, peripheries=2];\n"; PrintAttributes(that); - stream()->Add(" n%p -> n%p;\n", that, that->on_success()); + os_ << " n" << that << " -> n" << that->on_success() << ";\n"; Visit(that->on_success()); } void DotPrinter::VisitBackReference(BackReferenceNode* that) { - stream()->Add(" n%p [label=\"$%i..$%i\", shape=doubleoctagon];\n", - that, - that->start_register(), - that->end_register()); + os_ << " n" << that << " [label=\"$" << that->start_register() << "..$" + << that->end_register() << "\", shape=doubleoctagon];\n"; PrintAttributes(that); - stream()->Add(" n%p -> n%p;\n", that, that->on_success()); + os_ << " n" << that << " -> n" << that->on_success() << ";\n"; Visit(that->on_success()); } void DotPrinter::VisitEnd(EndNode* that) { - stream()->Add(" n%p [style=bold, shape=point];\n", that); + os_ << " n" << that << " [style=bold, shape=point];\n"; PrintAttributes(that); } void DotPrinter::VisitAssertion(AssertionNode* that) { - stream()->Add(" n%p [", that); + os_ << " n" << that << " ["; switch (that->assertion_type()) { case AssertionNode::AT_END: - stream()->Add("label=\"$\", shape=septagon"); + os_ << "label=\"$\", shape=septagon"; break; case AssertionNode::AT_START: - stream()->Add("label=\"^\", shape=septagon"); + os_ << "label=\"^\", shape=septagon"; break; case AssertionNode::AT_BOUNDARY: - stream()->Add("label=\"\\b\", shape=septagon"); + os_ << "label=\"\\b\", shape=septagon"; break; case AssertionNode::AT_NON_BOUNDARY: - stream()->Add("label=\"\\B\", shape=septagon"); + os_ << "label=\"\\B\", shape=septagon"; break; case AssertionNode::AFTER_NEWLINE: - stream()->Add("label=\"(?<=\\n)\", shape=septagon"); + os_ << "label=\"(?<=\\n)\", shape=septagon"; break; } - stream()->Add("];\n"); + os_ << "];\n"; PrintAttributes(that); RegExpNode* successor = that->on_success(); - stream()->Add(" n%p -> n%p;\n", that, successor); + os_ << " n" << that << " -> n" << successor << ";\n"; Visit(successor); } void DotPrinter::VisitAction(ActionNode* that) { - stream()->Add(" n%p [", that); + os_ << " n" << that << " ["; switch (that->action_type_) { case ActionNode::SET_REGISTER: - stream()->Add("label=\"$%i:=%i\", 
shape=octagon", - that->data_.u_store_register.reg, - that->data_.u_store_register.value); + os_ << "label=\"$" << that->data_.u_store_register.reg + << ":=" << that->data_.u_store_register.value << "\", shape=octagon"; break; case ActionNode::INCREMENT_REGISTER: - stream()->Add("label=\"$%i++\", shape=octagon", - that->data_.u_increment_register.reg); + os_ << "label=\"$" << that->data_.u_increment_register.reg + << "++\", shape=octagon"; break; case ActionNode::STORE_POSITION: - stream()->Add("label=\"$%i:=$pos\", shape=octagon", - that->data_.u_position_register.reg); + os_ << "label=\"$" << that->data_.u_position_register.reg + << ":=$pos\", shape=octagon"; break; case ActionNode::BEGIN_SUBMATCH: - stream()->Add("label=\"$%i:=$pos,begin\", shape=septagon", - that->data_.u_submatch.current_position_register); + os_ << "label=\"$" << that->data_.u_submatch.current_position_register + << ":=$pos,begin\", shape=septagon"; break; case ActionNode::POSITIVE_SUBMATCH_SUCCESS: - stream()->Add("label=\"escape\", shape=septagon"); + os_ << "label=\"escape\", shape=septagon"; break; case ActionNode::EMPTY_MATCH_CHECK: - stream()->Add("label=\"$%i=$pos?,$%i<%i?\", shape=septagon", - that->data_.u_empty_match_check.start_register, - that->data_.u_empty_match_check.repetition_register, - that->data_.u_empty_match_check.repetition_limit); + os_ << "label=\"$" << that->data_.u_empty_match_check.start_register + << "=$pos?,$" << that->data_.u_empty_match_check.repetition_register + << "<" << that->data_.u_empty_match_check.repetition_limit + << "?\", shape=septagon"; break; case ActionNode::CLEAR_CAPTURES: { - stream()->Add("label=\"clear $%i to $%i\", shape=septagon", - that->data_.u_clear_captures.range_from, - that->data_.u_clear_captures.range_to); + os_ << "label=\"clear $" << that->data_.u_clear_captures.range_from + << " to $" << that->data_.u_clear_captures.range_to + << "\", shape=septagon"; break; } } - stream()->Add("];\n"); + os_ << "];\n"; PrintAttributes(that); RegExpNode* successor = that->on_success(); - stream()->Add(" n%p -> n%p;\n", that, successor); + os_ << " n" << that << " -> n" << successor << ";\n"; Visit(successor); } class DispatchTableDumper { public: - explicit DispatchTableDumper(StringStream* stream) : stream_(stream) { } + explicit DispatchTableDumper(OStream& os) : os_(os) {} void Call(uc16 key, DispatchTable::Entry entry); - StringStream* stream() { return stream_; } private: - StringStream* stream_; + OStream& os_; }; void DispatchTableDumper::Call(uc16 key, DispatchTable::Entry entry) { - stream()->Add("[%k-%k]: {", key, entry.to()); + os_ << "[" << AsUC16(key) << "-" << AsUC16(entry.to()) << "]: {"; OutSet* set = entry.out_set(); bool first = true; for (unsigned i = 0; i < OutSet::kFirstLimit; i++) { @@ -4639,28 +4636,27 @@ void DispatchTableDumper::Call(uc16 key, DispatchTable::Entry entry) { if (first) { first = false; } else { - stream()->Add(", "); + os_ << ", "; } - stream()->Add("%i", i); + os_ << i; } } - stream()->Add("}\n"); + os_ << "}\n"; } void DispatchTable::Dump() { - HeapStringAllocator alloc; - StringStream stream(&alloc); - DispatchTableDumper dumper(&stream); + OFStream os(stderr); + DispatchTableDumper dumper(os); tree()->ForEach(&dumper); - OS::PrintError("%s", stream.ToCString().get()); } void RegExpEngine::DotPrint(const char* label, RegExpNode* node, bool ignore_case) { - DotPrinter printer(ignore_case); + OFStream os(stdout); + DotPrinter printer(os, ignore_case); printer.PrintNode(label, node); } @@ -4690,10 +4686,10 @@ static bool 
CompareInverseRanges(ZoneList<CharacterRange>* ranges, const int* special_class, int length) { length--; // Remove final 0x10000. - ASSERT(special_class[length] == 0x10000); - ASSERT(ranges->length() != 0); - ASSERT(length != 0); - ASSERT(special_class[0] != 0); + DCHECK(special_class[length] == 0x10000); + DCHECK(ranges->length() != 0); + DCHECK(length != 0); + DCHECK(special_class[0] != 0); if (ranges->length() != (length >> 1) + 1) { return false; } @@ -4721,7 +4717,7 @@ static bool CompareRanges(ZoneList<CharacterRange>* ranges, const int* special_class, int length) { length--; // Remove final 0x10000. - ASSERT(special_class[length] == 0x10000); + DCHECK(special_class[length] == 0x10000); if (ranges->length() * 2 != length) { return false; } @@ -4818,7 +4814,7 @@ class RegExpExpansionLimiter { : compiler_(compiler), saved_expansion_factor_(compiler->current_expansion_factor()), ok_to_expand_(saved_expansion_factor_ <= kMaxExpansionFactor) { - ASSERT(factor > 0); + DCHECK(factor > 0); if (ok_to_expand_) { if (factor > kMaxExpansionFactor) { // Avoid integer overflow of the current expansion factor. @@ -4907,7 +4903,7 @@ RegExpNode* RegExpQuantifier::ToNode(int min, } } if (max <= kMaxUnrolledMaxMatches && min == 0) { - ASSERT(max > 0); // Due to the 'if' above. + DCHECK(max > 0); // Due to the 'if' above. RegExpExpansionLimiter limiter(compiler, max); if (limiter.ok_to_expand()) { // Unroll the optional matches up to max. @@ -5146,9 +5142,9 @@ static void AddClass(const int* elmv, ZoneList<CharacterRange>* ranges, Zone* zone) { elmc--; - ASSERT(elmv[elmc] == 0x10000); + DCHECK(elmv[elmc] == 0x10000); for (int i = 0; i < elmc; i += 2) { - ASSERT(elmv[i] < elmv[i + 1]); + DCHECK(elmv[i] < elmv[i + 1]); ranges->Add(CharacterRange(elmv[i], elmv[i + 1] - 1), zone); } } @@ -5159,13 +5155,13 @@ static void AddClassNegated(const int *elmv, ZoneList<CharacterRange>* ranges, Zone* zone) { elmc--; - ASSERT(elmv[elmc] == 0x10000); - ASSERT(elmv[0] != 0x0000); - ASSERT(elmv[elmc-1] != String::kMaxUtf16CodeUnit); + DCHECK(elmv[elmc] == 0x10000); + DCHECK(elmv[0] != 0x0000); + DCHECK(elmv[elmc-1] != String::kMaxUtf16CodeUnit); uc16 last = 0x0000; for (int i = 0; i < elmc; i += 2) { - ASSERT(last <= elmv[i] - 1); - ASSERT(elmv[i] < elmv[i + 1]); + DCHECK(last <= elmv[i] - 1); + DCHECK(elmv[i] < elmv[i + 1]); ranges->Add(CharacterRange(last, elmv[i] - 1), zone); last = elmv[i + 1]; } @@ -5261,8 +5257,8 @@ void CharacterRange::Split(ZoneList<CharacterRange>* base, ZoneList<CharacterRange>** included, ZoneList<CharacterRange>** excluded, Zone* zone) { - ASSERT_EQ(NULL, *included); - ASSERT_EQ(NULL, *excluded); + DCHECK_EQ(NULL, *included); + DCHECK_EQ(NULL, *excluded); DispatchTable table(zone); for (int i = 0; i < base->length(); i++) table.AddRange(base->at(i), CharacterRangeSplitter::kInBase, zone); @@ -5322,7 +5318,7 @@ void CharacterRange::AddCaseEquivalents(ZoneList<CharacterRange>* ranges, if (length == 0) { block_end = pos; } else { - ASSERT_EQ(1, length); + DCHECK_EQ(1, length); block_end = range[0]; } int end = (block_end > top) ? 
top : block_end; @@ -5342,7 +5338,7 @@ void CharacterRange::AddCaseEquivalents(ZoneList<CharacterRange>* ranges, bool CharacterRange::IsCanonical(ZoneList<CharacterRange>* ranges) { - ASSERT_NOT_NULL(ranges); + DCHECK_NOT_NULL(ranges); int n = ranges->length(); if (n <= 1) return true; int max = ranges->at(0).to(); @@ -5482,15 +5478,15 @@ void CharacterRange::Canonicalize(ZoneList<CharacterRange>* character_ranges) { } while (read < n); character_ranges->Rewind(num_canonical); - ASSERT(CharacterRange::IsCanonical(character_ranges)); + DCHECK(CharacterRange::IsCanonical(character_ranges)); } void CharacterRange::Negate(ZoneList<CharacterRange>* ranges, ZoneList<CharacterRange>* negated_ranges, Zone* zone) { - ASSERT(CharacterRange::IsCanonical(ranges)); - ASSERT_EQ(0, negated_ranges->length()); + DCHECK(CharacterRange::IsCanonical(ranges)); + DCHECK_EQ(0, negated_ranges->length()); int range_count = ranges->length(); uc16 from = 0; int i = 0; @@ -5546,7 +5542,7 @@ void OutSet::Set(unsigned value, Zone *zone) { } -bool OutSet::Get(unsigned value) { +bool OutSet::Get(unsigned value) const { if (value < kFirstLimit) { return (first_ & (1 << value)) != 0; } else if (remaining_ == NULL) { @@ -5566,7 +5562,7 @@ void DispatchTable::AddRange(CharacterRange full_range, int value, if (tree()->is_empty()) { // If this is the first range we just insert into the table. ZoneSplayTree<Config>::Locator loc; - ASSERT_RESULT(tree()->Insert(current.from(), &loc)); + DCHECK_RESULT(tree()->Insert(current.from(), &loc)); loc.set_value(Entry(current.from(), current.to(), empty()->Extend(value, zone))); return; @@ -5592,7 +5588,7 @@ void DispatchTable::AddRange(CharacterRange full_range, int value, // to the map and let the next step deal with merging it with // the range we're adding. ZoneSplayTree<Config>::Locator loc; - ASSERT_RESULT(tree()->Insert(right.from(), &loc)); + DCHECK_RESULT(tree()->Insert(right.from(), &loc)); loc.set_value(Entry(right.from(), right.to(), entry->out_set())); @@ -5608,24 +5604,24 @@ void DispatchTable::AddRange(CharacterRange full_range, int value, // then we have to add a range covering just that space. if (current.from() < entry->from()) { ZoneSplayTree<Config>::Locator ins; - ASSERT_RESULT(tree()->Insert(current.from(), &ins)); + DCHECK_RESULT(tree()->Insert(current.from(), &ins)); ins.set_value(Entry(current.from(), entry->from() - 1, empty()->Extend(value, zone))); current.set_from(entry->from()); } - ASSERT_EQ(current.from(), entry->from()); + DCHECK_EQ(current.from(), entry->from()); // If the overlapping range extends beyond the one we want to add // we have to snap the right part off and add it separately. if (entry->to() > current.to()) { ZoneSplayTree<Config>::Locator ins; - ASSERT_RESULT(tree()->Insert(current.to() + 1, &ins)); + DCHECK_RESULT(tree()->Insert(current.to() + 1, &ins)); ins.set_value(Entry(current.to() + 1, entry->to(), entry->out_set())); entry->set_to(current.to()); } - ASSERT(entry->to() <= current.to()); + DCHECK(entry->to() <= current.to()); // The overlapping range is now completely contained by the range // we're adding so we can just update it and move the start point // of the range we're adding just past it. @@ -5634,12 +5630,12 @@ void DispatchTable::AddRange(CharacterRange full_range, int value, // adding 1 will wrap around to 0. 
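The DotPrinter rewrite in the hunks above trades StringStream's printf-style Add("n%p -> n%p", ...) for a stream and chained operator<<: the compiler now selects the formatting per operand type, so a stale %p/%i in a format string can no longer corrupt the dot output. A sketch of the same style using std::ostream (V8 defines its own OStream/OFStream hierarchy):

#include <iostream>
#include <sstream>

// Type-checked, stream-based emission of one dot-graph edge.
void PrintEdge(std::ostream& os, const void* from, const void* to) {
  os << "  n" << from << " -> n" << to << ";\n";
}

int main() {
  int a = 0, b = 0;  // stand-ins for two graph nodes
  std::ostringstream graph;
  graph << "digraph G {\n";
  PrintEdge(graph, &a, &b);
  graph << "}\n";
  std::cout << graph.str();
  return 0;
}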
if (entry->to() == String::kMaxUtf16CodeUnit) break; - ASSERT(entry->to() + 1 > current.from()); + DCHECK(entry->to() + 1 > current.from()); current.set_from(entry->to() + 1); } else { // There is no overlap so we can just add the range ZoneSplayTree<Config>::Locator ins; - ASSERT_RESULT(tree()->Insert(current.from(), &ins)); + DCHECK_RESULT(tree()->Insert(current.from(), &ins)); ins.set_value(Entry(current.from(), current.to(), empty()->Extend(value, zone))); @@ -5832,7 +5828,7 @@ void TextNode::FillInBMInfo(int initial_offset, } } } else { - ASSERT_EQ(TextElement::CHAR_CLASS, text.text_type()); + DCHECK_EQ(TextElement::CHAR_CLASS, text.text_type()); RegExpCharacterClass* char_class = text.char_class(); ZoneList<CharacterRange>* ranges = char_class->ranges(zone()); if (char_class->is_negated()) { @@ -6075,6 +6071,12 @@ RegExpEngine::CompilationResult RegExpEngine::Compile( #elif V8_TARGET_ARCH_MIPS RegExpMacroAssemblerMIPS macro_assembler(mode, (data->capture_count + 1) * 2, zone); +#elif V8_TARGET_ARCH_MIPS64 + RegExpMacroAssemblerMIPS macro_assembler(mode, (data->capture_count + 1) * 2, + zone); +#elif V8_TARGET_ARCH_X87 + RegExpMacroAssemblerX87 macro_assembler(mode, (data->capture_count + 1) * 2, + zone); #else #error "Unsupported architecture" #endif diff --git a/deps/v8/src/jsregexp.h b/deps/v8/src/jsregexp.h index 5366d6e132e..11bad24d4b5 100644 --- a/deps/v8/src/jsregexp.h +++ b/deps/v8/src/jsregexp.h @@ -5,9 +5,9 @@ #ifndef V8_JSREGEXP_H_ #define V8_JSREGEXP_H_ -#include "allocation.h" -#include "assembler.h" -#include "zone-inl.h" +#include "src/allocation.h" +#include "src/assembler.h" +#include "src/zone-inl.h" namespace v8 { namespace internal { @@ -239,7 +239,7 @@ class CharacterRange { public: CharacterRange() : from_(0), to_(0) { } // For compatibility with the CHECK_OK macro - CharacterRange(void* null) { ASSERT_EQ(NULL, null); } //NOLINT + CharacterRange(void* null) { DCHECK_EQ(NULL, null); } //NOLINT CharacterRange(uc16 from, uc16 to) : from_(from), to_(to) { } static void AddClassEscape(uc16 type, ZoneList<CharacterRange>* ranges, Zone* zone); @@ -248,7 +248,7 @@ class CharacterRange { return CharacterRange(value, value); } static inline CharacterRange Range(uc16 from, uc16 to) { - ASSERT(from <= to); + DCHECK(from <= to); return CharacterRange(from, to); } static inline CharacterRange Everything() { @@ -296,7 +296,7 @@ class OutSet: public ZoneObject { public: OutSet() : first_(0), remaining_(NULL), successors_(NULL) { } OutSet* Extend(unsigned value, Zone* zone); - bool Get(unsigned value); + bool Get(unsigned value) const; static const unsigned kFirstLimit = 32; private: @@ -425,12 +425,12 @@ class TextElement V8_FINAL BASE_EMBEDDED { RegExpTree* tree() const { return tree_; } RegExpAtom* atom() const { - ASSERT(text_type() == ATOM); + DCHECK(text_type() == ATOM); return reinterpret_cast<RegExpAtom*>(tree()); } RegExpCharacterClass* char_class() const { - ASSERT(text_type() == CHAR_CLASS); + DCHECK(text_type() == CHAR_CLASS); return reinterpret_cast<RegExpCharacterClass*>(tree()); } @@ -541,8 +541,8 @@ class QuickCheckDetails { int characters() { return characters_; } void set_characters(int characters) { characters_ = characters; } Position* positions(int index) { - ASSERT(index >= 0); - ASSERT(index < characters_); + DCHECK(index >= 0); + DCHECK(index < characters_); return positions_ + index; } uint32_t mask() { return mask_; } @@ -628,7 +628,7 @@ class RegExpNode: public ZoneObject { virtual RegExpNode* FilterASCII(int depth, bool ignore_case) { return this; } 
// Helper for FilterASCII. RegExpNode* replacement() { - ASSERT(info()->replacement_calculated); + DCHECK(info()->replacement_calculated); return replacement_; } RegExpNode* set_replacement(RegExpNode* replacement) { @@ -1445,7 +1445,7 @@ class Trace { // These set methods and AdvanceCurrentPositionInTrace should be used only on // new traces - the intention is that traces are immutable after creation. void add_action(DeferredAction* new_action) { - ASSERT(new_action->next_ == NULL); + DCHECK(new_action->next_ == NULL); new_action->next_ = actions_; actions_ = new_action; } @@ -1465,14 +1465,14 @@ class Trace { int FindAffectedRegisters(OutSet* affected_registers, Zone* zone); void PerformDeferredActions(RegExpMacroAssembler* macro, int max_register, - OutSet& affected_registers, + const OutSet& affected_registers, OutSet* registers_to_pop, OutSet* registers_to_clear, Zone* zone); void RestoreAffectedRegisters(RegExpMacroAssembler* macro, int max_register, - OutSet& registers_to_pop, - OutSet& registers_to_clear); + const OutSet& registers_to_pop, + const OutSet& registers_to_clear); int cp_offset_; DeferredAction* actions_; Label* backtrack_; @@ -1560,7 +1560,7 @@ FOR_EACH_NODE_TYPE(DECLARE_VISIT) bool has_failed() { return error_message_ != NULL; } const char* error_message() { - ASSERT(error_message_ != NULL); + DCHECK(error_message_ != NULL); return error_message_; } void fail(const char* error_message) { diff --git a/deps/v8/src/libplatform/DEPS b/deps/v8/src/libplatform/DEPS new file mode 100644 index 00000000000..2ea335990fb --- /dev/null +++ b/deps/v8/src/libplatform/DEPS @@ -0,0 +1,8 @@ +include_rules = [ + "-include", + "+include/libplatform", + "+include/v8-platform.h", + "-src", + "+src/base", + "+src/libplatform", +] diff --git a/deps/v8/src/libplatform/default-platform.cc b/deps/v8/src/libplatform/default-platform.cc index 6ff8830fb26..9a503bc34c1 100644 --- a/deps/v8/src/libplatform/default-platform.cc +++ b/deps/v8/src/libplatform/default-platform.cc @@ -2,19 +2,30 @@ // Use of this source code is governed by a BSD-style license that can be // found in the LICENSE file. -#include "default-platform.h" +#include "src/libplatform/default-platform.h" #include <algorithm> #include <queue> -// TODO(jochen): We should have our own version of checks.h. -#include "../checks.h" -// TODO(jochen): Why is cpu.h not in platform/? 
-#include "../cpu.h" -#include "worker-thread.h" +#include "src/base/logging.h" +#include "src/base/platform/platform.h" +#include "src/libplatform/worker-thread.h" namespace v8 { -namespace internal { +namespace platform { + + +v8::Platform* CreateDefaultPlatform(int thread_pool_size) { + DefaultPlatform* platform = new DefaultPlatform(); + platform->SetThreadPoolSize(thread_pool_size); + platform->EnsureInitialized(); + return platform; +} + + +bool PumpMessageLoop(v8::Platform* platform, v8::Isolate* isolate) { + return reinterpret_cast<DefaultPlatform*>(platform)->PumpMessageLoop(isolate); +} const int DefaultPlatform::kMaxThreadPoolSize = 4; @@ -25,7 +36,7 @@ DefaultPlatform::DefaultPlatform() DefaultPlatform::~DefaultPlatform() { - LockGuard<Mutex> guard(&lock_); + base::LockGuard<base::Mutex> guard(&lock_); queue_.Terminate(); if (initialized_) { for (std::vector<WorkerThread*>::iterator i = thread_pool_.begin(); @@ -33,21 +44,29 @@ DefaultPlatform::~DefaultPlatform() { delete *i; } } + for (std::map<v8::Isolate*, std::queue<Task*> >::iterator i = + main_thread_queue_.begin(); + i != main_thread_queue_.end(); ++i) { + while (!i->second.empty()) { + delete i->second.front(); + i->second.pop(); + } + } } void DefaultPlatform::SetThreadPoolSize(int thread_pool_size) { - LockGuard<Mutex> guard(&lock_); - ASSERT(thread_pool_size >= 0); + base::LockGuard<base::Mutex> guard(&lock_); + DCHECK(thread_pool_size >= 0); if (thread_pool_size < 1) - thread_pool_size = CPU::NumberOfProcessorsOnline(); + thread_pool_size = base::OS::NumberOfProcessorsOnline(); thread_pool_size_ = std::max(std::min(thread_pool_size, kMaxThreadPoolSize), 1); } void DefaultPlatform::EnsureInitialized() { - LockGuard<Mutex> guard(&lock_); + base::LockGuard<base::Mutex> guard(&lock_); if (initialized_) return; initialized_ = true; @@ -55,6 +74,24 @@ void DefaultPlatform::EnsureInitialized() { thread_pool_.push_back(new WorkerThread(&queue_)); } + +bool DefaultPlatform::PumpMessageLoop(v8::Isolate* isolate) { + Task* task = NULL; + { + base::LockGuard<base::Mutex> guard(&lock_); + std::map<v8::Isolate*, std::queue<Task*> >::iterator it = + main_thread_queue_.find(isolate); + if (it == main_thread_queue_.end() || it->second.empty()) { + return false; + } + task = it->second.front(); + it->second.pop(); + } + task->Run(); + delete task; + return true; +} + void DefaultPlatform::CallOnBackgroundThread(Task *task, ExpectedRuntime expected_runtime) { EnsureInitialized(); @@ -63,9 +100,8 @@ void DefaultPlatform::CallOnBackgroundThread(Task *task, void DefaultPlatform::CallOnForegroundThread(v8::Isolate* isolate, Task* task) { - // TODO(jochen): implement. 
- task->Run(); - delete task; + base::LockGuard<base::Mutex> guard(&lock_); + main_thread_queue_[isolate].push(task); } -} } // namespace v8::internal +} } // namespace v8::platform diff --git a/deps/v8/src/libplatform/default-platform.h b/deps/v8/src/libplatform/default-platform.h index f887eb32e8a..fcbb14c36c4 100644 --- a/deps/v8/src/libplatform/default-platform.h +++ b/deps/v8/src/libplatform/default-platform.h @@ -5,15 +5,17 @@ #ifndef V8_LIBPLATFORM_DEFAULT_PLATFORM_H_ #define V8_LIBPLATFORM_DEFAULT_PLATFORM_H_ +#include <map> +#include <queue> #include <vector> -#include "../../include/v8-platform.h" -#include "../base/macros.h" -#include "../platform/mutex.h" -#include "task-queue.h" +#include "include/v8-platform.h" +#include "src/base/macros.h" +#include "src/base/platform/mutex.h" +#include "src/libplatform/task-queue.h" namespace v8 { -namespace internal { +namespace platform { class TaskQueue; class Thread; @@ -28,26 +30,29 @@ class DefaultPlatform : public Platform { void EnsureInitialized(); + bool PumpMessageLoop(v8::Isolate* isolate); + // v8::Platform implementation. virtual void CallOnBackgroundThread( - Task *task, ExpectedRuntime expected_runtime) V8_OVERRIDE; - virtual void CallOnForegroundThread(v8::Isolate *isolate, - Task *task) V8_OVERRIDE; + Task* task, ExpectedRuntime expected_runtime) V8_OVERRIDE; + virtual void CallOnForegroundThread(v8::Isolate* isolate, + Task* task) V8_OVERRIDE; private: static const int kMaxThreadPoolSize; - Mutex lock_; + base::Mutex lock_; bool initialized_; int thread_pool_size_; std::vector<WorkerThread*> thread_pool_; TaskQueue queue_; + std::map<v8::Isolate*, std::queue<Task*> > main_thread_queue_; DISALLOW_COPY_AND_ASSIGN(DefaultPlatform); }; -} } // namespace v8::internal +} } // namespace v8::platform #endif // V8_LIBPLATFORM_DEFAULT_PLATFORM_H_ diff --git a/deps/v8/src/libplatform/task-queue.cc b/deps/v8/src/libplatform/task-queue.cc index 37cf1353e54..7a9071f3620 100644 --- a/deps/v8/src/libplatform/task-queue.cc +++ b/deps/v8/src/libplatform/task-queue.cc @@ -2,27 +2,26 @@ // Use of this source code is governed by a BSD-style license that can be // found in the LICENSE file. -#include "task-queue.h" +#include "src/libplatform/task-queue.h" -// TODO(jochen): We should have our own version of checks.h. 
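The default-platform changes above add two entry points to the new v8::platform namespace: CreateDefaultPlatform(), which builds the worker-thread pool, and PumpMessageLoop(), which drains the per-isolate foreground queue that CallOnForegroundThread() now feeds instead of running tasks inline. A rough embedder-side sketch; the header path and the InitializePlatform() call are assumptions based on this V8 line, not shown in the diff:

    #include "include/libplatform/libplatform.h"  // assumed public header
    #include "include/v8.h"

    int main() {
      // Install the default platform before initializing V8.
      v8::Platform* platform = v8::platform::CreateDefaultPlatform();
      v8::V8::InitializePlatform(platform);
      v8::V8::Initialize();

      v8::Isolate* isolate = v8::Isolate::New();
      // ... compile and run scripts; foreground tasks accumulate in the
      // per-isolate queue introduced above ...

      // PumpMessageLoop() returns false once the isolate's foreground
      // queue is empty.
      while (v8::platform::PumpMessageLoop(platform, isolate)) {
      }

      isolate->Dispose();
      v8::V8::Dispose();
      delete platform;
      return 0;
    }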
-#include "../checks.h" +#include "src/base/logging.h" namespace v8 { -namespace internal { +namespace platform { TaskQueue::TaskQueue() : process_queue_semaphore_(0), terminated_(false) {} TaskQueue::~TaskQueue() { - LockGuard<Mutex> guard(&lock_); - ASSERT(terminated_); - ASSERT(task_queue_.empty()); + base::LockGuard<base::Mutex> guard(&lock_); + DCHECK(terminated_); + DCHECK(task_queue_.empty()); } void TaskQueue::Append(Task* task) { - LockGuard<Mutex> guard(&lock_); - ASSERT(!terminated_); + base::LockGuard<base::Mutex> guard(&lock_); + DCHECK(!terminated_); task_queue_.push(task); process_queue_semaphore_.Signal(); } @@ -31,7 +30,7 @@ void TaskQueue::Append(Task* task) { Task* TaskQueue::GetNext() { for (;;) { { - LockGuard<Mutex> guard(&lock_); + base::LockGuard<base::Mutex> guard(&lock_); if (!task_queue_.empty()) { Task* result = task_queue_.front(); task_queue_.pop(); @@ -48,10 +47,10 @@ Task* TaskQueue::GetNext() { void TaskQueue::Terminate() { - LockGuard<Mutex> guard(&lock_); - ASSERT(!terminated_); + base::LockGuard<base::Mutex> guard(&lock_); + DCHECK(!terminated_); terminated_ = true; process_queue_semaphore_.Signal(); } -} } // namespace v8::internal +} } // namespace v8::platform diff --git a/deps/v8/src/libplatform/task-queue.h b/deps/v8/src/libplatform/task-queue.h index 8b9137b0329..eb9d6987e95 100644 --- a/deps/v8/src/libplatform/task-queue.h +++ b/deps/v8/src/libplatform/task-queue.h @@ -7,15 +7,15 @@ #include <queue> -#include "../base/macros.h" -#include "../platform/mutex.h" -#include "../platform/semaphore.h" +#include "src/base/macros.h" +#include "src/base/platform/mutex.h" +#include "src/base/platform/semaphore.h" namespace v8 { class Task; -namespace internal { +namespace platform { class TaskQueue { public: @@ -33,15 +33,15 @@ class TaskQueue { void Terminate(); private: - Mutex lock_; - Semaphore process_queue_semaphore_; + base::Mutex lock_; + base::Semaphore process_queue_semaphore_; std::queue<Task*> task_queue_; bool terminated_; DISALLOW_COPY_AND_ASSIGN(TaskQueue); }; -} } // namespace v8::internal +} } // namespace v8::platform #endif // V8_LIBPLATFORM_TASK_QUEUE_H_ diff --git a/deps/v8/src/libplatform/worker-thread.cc b/deps/v8/src/libplatform/worker-thread.cc index e7d8ec76345..99637151e2f 100644 --- a/deps/v8/src/libplatform/worker-thread.cc +++ b/deps/v8/src/libplatform/worker-thread.cc @@ -2,18 +2,16 @@ // Use of this source code is governed by a BSD-style license that can be // found in the LICENSE file. -#include "worker-thread.h" +#include "src/libplatform/worker-thread.h" -// TODO(jochen): We should have our own version of checks.h. 
-#include "../checks.h" -#include "../../include/v8-platform.h" -#include "task-queue.h" +#include "include/v8-platform.h" +#include "src/libplatform/task-queue.h" namespace v8 { -namespace internal { +namespace platform { WorkerThread::WorkerThread(TaskQueue* queue) - : Thread("V8 WorkerThread"), queue_(queue) { + : Thread(Options("V8 WorkerThread")), queue_(queue) { Start(); } @@ -30,4 +28,4 @@ void WorkerThread::Run() { } } -} } // namespace v8::internal +} } // namespace v8::platform diff --git a/deps/v8/src/libplatform/worker-thread.h b/deps/v8/src/libplatform/worker-thread.h index b9d7fdabe11..5550f16a7f6 100644 --- a/deps/v8/src/libplatform/worker-thread.h +++ b/deps/v8/src/libplatform/worker-thread.h @@ -7,16 +7,16 @@ #include <queue> -#include "../base/macros.h" -#include "../platform.h" +#include "src/base/macros.h" +#include "src/base/platform/platform.h" namespace v8 { -namespace internal { +namespace platform { class TaskQueue; -class WorkerThread : public Thread { +class WorkerThread : public base::Thread { public: explicit WorkerThread(TaskQueue* queue); virtual ~WorkerThread(); @@ -32,7 +32,7 @@ class WorkerThread : public Thread { DISALLOW_COPY_AND_ASSIGN(WorkerThread); }; -} } // namespace v8::internal +} } // namespace v8::platform #endif // V8_LIBPLATFORM_WORKER_THREAD_H_ diff --git a/deps/v8/src/list-inl.h b/deps/v8/src/list-inl.h index 4a18d982051..60e8fab6f6f 100644 --- a/deps/v8/src/list-inl.h +++ b/deps/v8/src/list-inl.h @@ -5,8 +5,9 @@ #ifndef V8_LIST_INL_H_ #define V8_LIST_INL_H_ -#include "list.h" -#include "platform.h" +#include "src/list.h" + +#include "src/base/platform/platform.h" namespace v8 { namespace internal { @@ -49,7 +50,7 @@ void List<T, P>::ResizeAdd(const T& element, P alloc) { template<typename T, class P> void List<T, P>::ResizeAddInternal(const T& element, P alloc) { - ASSERT(length_ >= capacity_); + DCHECK(length_ >= capacity_); // Grow the list capacity by 100%, but make sure to let it grow // even when the capacity is zero (possible initial case). 
int new_capacity = 1 + 2 * capacity_; @@ -63,9 +64,9 @@ void List<T, P>::ResizeAddInternal(const T& element, P alloc) { template<typename T, class P> void List<T, P>::Resize(int new_capacity, P alloc) { - ASSERT_LE(length_, new_capacity); + DCHECK_LE(length_, new_capacity); T* new_data = NewData(new_capacity, alloc); - OS::MemCopy(new_data, data_, length_ * sizeof(T)); + MemCopy(new_data, data_, length_ * sizeof(T)); List<T, P>::DeleteData(data_); data_ = new_data; capacity_ = new_capacity; @@ -82,14 +83,14 @@ Vector<T> List<T, P>::AddBlock(T value, int count, P alloc) { template<typename T, class P> void List<T, P>::Set(int index, const T& elm) { - ASSERT(index >= 0 && index <= length_); + DCHECK(index >= 0 && index <= length_); data_[index] = elm; } template<typename T, class P> void List<T, P>::InsertAt(int index, const T& elm, P alloc) { - ASSERT(index >= 0 && index <= length_); + DCHECK(index >= 0 && index <= length_); Add(elm, alloc); for (int i = length_ - 1; i > index; --i) { data_[i] = data_[i - 1]; @@ -143,7 +144,7 @@ void List<T, P>::Clear() { template<typename T, class P> void List<T, P>::Rewind(int pos) { - ASSERT(0 <= pos && pos <= length_); + DCHECK(0 <= pos && pos <= length_); length_ = pos; } @@ -194,7 +195,7 @@ void List<T, P>::Sort(int (*cmp)(const T* x, const T* y)) { ToVector().Sort(cmp); #ifdef DEBUG for (int i = 1; i < length_; i++) - ASSERT(cmp(&data_[i - 1], &data_[i]) <= 0); + DCHECK(cmp(&data_[i - 1], &data_[i]) <= 0); #endif } @@ -207,7 +208,7 @@ void List<T, P>::Sort() { template<typename T, class P> void List<T, P>::Initialize(int capacity, P allocator) { - ASSERT(capacity >= 0); + DCHECK(capacity >= 0); data_ = (capacity > 0) ? NewData(capacity, allocator) : NULL; capacity_ = capacity; length_ = 0; diff --git a/deps/v8/src/list.h b/deps/v8/src/list.h index 1029f493f19..ea5fd1e0cac 100644 --- a/deps/v8/src/list.h +++ b/deps/v8/src/list.h @@ -5,11 +5,13 @@ #ifndef V8_LIST_H_ #define V8_LIST_H_ -#include "utils.h" +#include "src/checks.h" +#include "src/utils.h" namespace v8 { namespace internal { +template<typename T> class Vector; // ---------------------------------------------------------------------------- // The list is a template for very light-weight lists. We are not @@ -60,8 +62,8 @@ class List { // not safe to use after operations that can change the list's // backing store (e.g. Add). 
inline T& operator[](int i) const { - ASSERT(0 <= i); - SLOW_ASSERT(i < length_); + DCHECK(0 <= i); + SLOW_DCHECK(i < length_); return data_[i]; } inline T& at(int i) const { return operator[](i); } diff --git a/deps/v8/src/lithium-allocator-inl.h b/deps/v8/src/lithium-allocator-inl.h index 1b9de0eede2..bafa00f07b1 100644 --- a/deps/v8/src/lithium-allocator-inl.h +++ b/deps/v8/src/lithium-allocator-inl.h @@ -5,18 +5,22 @@ #ifndef V8_LITHIUM_ALLOCATOR_INL_H_ #define V8_LITHIUM_ALLOCATOR_INL_H_ -#include "lithium-allocator.h" +#include "src/lithium-allocator.h" #if V8_TARGET_ARCH_IA32 -#include "ia32/lithium-ia32.h" +#include "src/ia32/lithium-ia32.h" // NOLINT #elif V8_TARGET_ARCH_X64 -#include "x64/lithium-x64.h" +#include "src/x64/lithium-x64.h" // NOLINT #elif V8_TARGET_ARCH_ARM64 -#include "arm64/lithium-arm64.h" +#include "src/arm64/lithium-arm64.h" // NOLINT #elif V8_TARGET_ARCH_ARM -#include "arm/lithium-arm.h" +#include "src/arm/lithium-arm.h" // NOLINT #elif V8_TARGET_ARCH_MIPS -#include "mips/lithium-mips.h" +#include "src/mips/lithium-mips.h" // NOLINT +#elif V8_TARGET_ARCH_MIPS64 +#include "src/mips64/lithium-mips64.h" // NOLINT +#elif V8_TARGET_ARCH_X87 +#include "src/x87/lithium-x87.h" // NOLINT #else #error "Unknown architecture." #endif @@ -37,98 +41,11 @@ LGap* LAllocator::GapAt(int index) { } -TempIterator::TempIterator(LInstruction* instr) - : instr_(instr), - limit_(instr->TempCount()), - current_(0) { - SkipUninteresting(); -} - - -bool TempIterator::Done() { return current_ >= limit_; } - - -LOperand* TempIterator::Current() { - ASSERT(!Done()); - return instr_->TempAt(current_); -} - - -void TempIterator::SkipUninteresting() { - while (current_ < limit_ && instr_->TempAt(current_) == NULL) ++current_; -} - - -void TempIterator::Advance() { - ++current_; - SkipUninteresting(); -} - - -InputIterator::InputIterator(LInstruction* instr) - : instr_(instr), - limit_(instr->InputCount()), - current_(0) { - SkipUninteresting(); -} - - -bool InputIterator::Done() { return current_ >= limit_; } - - -LOperand* InputIterator::Current() { - ASSERT(!Done()); - ASSERT(instr_->InputAt(current_) != NULL); - return instr_->InputAt(current_); -} - - -void InputIterator::Advance() { - ++current_; - SkipUninteresting(); -} - - -void InputIterator::SkipUninteresting() { - while (current_ < limit_) { - LOperand* current = instr_->InputAt(current_); - if (current != NULL && !current->IsConstantOperand()) break; - ++current_; - } -} - - -UseIterator::UseIterator(LInstruction* instr) - : input_iterator_(instr), env_iterator_(instr->environment()) { } - - -bool UseIterator::Done() { - return input_iterator_.Done() && env_iterator_.Done(); -} - - -LOperand* UseIterator::Current() { - ASSERT(!Done()); - LOperand* result = input_iterator_.Done() - ? env_iterator_.Current() - : input_iterator_.Current(); - ASSERT(result != NULL); - return result; -} - - -void UseIterator::Advance() { - input_iterator_.Done() - ? 
env_iterator_.Advance() - : input_iterator_.Advance(); -} - - void LAllocator::SetLiveRangeAssignedRegister(LiveRange* range, int reg) { if (range->Kind() == DOUBLE_REGISTERS) { assigned_double_registers_->Add(reg); } else { - ASSERT(range->Kind() == GENERAL_REGISTERS); + DCHECK(range->Kind() == GENERAL_REGISTERS); assigned_registers_->Add(reg); } range->set_assigned_register(reg, chunk()->zone()); diff --git a/deps/v8/src/lithium-allocator.cc b/deps/v8/src/lithium-allocator.cc index c6e52ed824f..8350c80bbfd 100644 --- a/deps/v8/src/lithium-allocator.cc +++ b/deps/v8/src/lithium-allocator.cc @@ -2,25 +2,12 @@ // Use of this source code is governed by a BSD-style license that can be // found in the LICENSE file. -#include "v8.h" -#include "lithium-allocator-inl.h" - -#include "hydrogen.h" -#include "string-stream.h" - -#if V8_TARGET_ARCH_IA32 -#include "ia32/lithium-ia32.h" -#elif V8_TARGET_ARCH_X64 -#include "x64/lithium-x64.h" -#elif V8_TARGET_ARCH_ARM64 -#include "arm64/lithium-arm64.h" -#elif V8_TARGET_ARCH_ARM -#include "arm/lithium-arm.h" -#elif V8_TARGET_ARCH_MIPS -#include "mips/lithium-mips.h" -#else -#error "Unknown architecture." -#endif +#include "src/v8.h" + +#include "src/hydrogen.h" +#include "src/lithium-inl.h" +#include "src/lithium-allocator-inl.h" +#include "src/string-stream.h" namespace v8 { namespace internal { @@ -50,7 +37,7 @@ UsePosition::UsePosition(LifetimePosition pos, unalloc->HasDoubleRegisterPolicy(); register_beneficial_ = !unalloc->HasAnyPolicy(); } - ASSERT(pos_.IsValid()); + DCHECK(pos_.IsValid()); } @@ -70,7 +57,7 @@ bool UsePosition::RegisterIsBeneficial() const { void UseInterval::SplitAt(LifetimePosition pos, Zone* zone) { - ASSERT(Contains(pos) && pos.Value() != start().Value()); + DCHECK(Contains(pos) && pos.Value() != start().Value()); UseInterval* after = new(zone) UseInterval(pos, end_); after->next_ = next_; next_ = after; @@ -84,7 +71,7 @@ void UseInterval::SplitAt(LifetimePosition pos, Zone* zone) { void LiveRange::Verify() const { UsePosition* cur = first_pos_; while (cur != NULL) { - ASSERT(Start().Value() <= cur->pos().Value() && + DCHECK(Start().Value() <= cur->pos().Value() && cur->pos().Value() <= End().Value()); cur = cur->next(); } @@ -126,15 +113,15 @@ LiveRange::LiveRange(int id, Zone* zone) void LiveRange::set_assigned_register(int reg, Zone* zone) { - ASSERT(!HasRegisterAssigned() && !IsSpilled()); + DCHECK(!HasRegisterAssigned() && !IsSpilled()); assigned_register_ = reg; ConvertOperands(zone); } void LiveRange::MakeSpilled(Zone* zone) { - ASSERT(!IsSpilled()); - ASSERT(TopLevel()->HasAllocatedSpillOperand()); + DCHECK(!IsSpilled()); + DCHECK(TopLevel()->HasAllocatedSpillOperand()); spilled_ = true; assigned_register_ = kInvalidAssignment; ConvertOperands(zone); @@ -142,15 +129,15 @@ void LiveRange::MakeSpilled(Zone* zone) { bool LiveRange::HasAllocatedSpillOperand() const { - ASSERT(spill_operand_ != NULL); + DCHECK(spill_operand_ != NULL); return !spill_operand_->IsIgnored(); } void LiveRange::SetSpillOperand(LOperand* operand) { - ASSERT(!operand->IsUnallocated()); - ASSERT(spill_operand_ != NULL); - ASSERT(spill_operand_->IsIgnored()); + DCHECK(!operand->IsUnallocated()); + DCHECK(spill_operand_ != NULL); + DCHECK(spill_operand_->IsIgnored()); spill_operand_->ConvertTo(operand->kind(), operand->index()); } @@ -210,7 +197,7 @@ bool LiveRange::CanBeSpilled(LifetimePosition pos) { LOperand* LiveRange::CreateAssignedOperand(Zone* zone) { LOperand* op = NULL; if (HasRegisterAssigned()) { - ASSERT(!IsSpilled()); + 
DCHECK(!IsSpilled()); switch (Kind()) { case GENERAL_REGISTERS: op = LRegister::Create(assigned_register(), zone); @@ -222,9 +209,9 @@ LOperand* LiveRange::CreateAssignedOperand(Zone* zone) { UNREACHABLE(); } } else if (IsSpilled()) { - ASSERT(!HasRegisterAssigned()); + DCHECK(!HasRegisterAssigned()); op = TopLevel()->GetSpillOperand(); - ASSERT(!op->IsUnallocated()); + DCHECK(!op->IsUnallocated()); } else { LUnallocated* unalloc = new(zone) LUnallocated(LUnallocated::NONE); unalloc->set_virtual_register(id_); @@ -261,8 +248,8 @@ void LiveRange::AdvanceLastProcessedMarker( void LiveRange::SplitAt(LifetimePosition position, LiveRange* result, Zone* zone) { - ASSERT(Start().Value() < position.Value()); - ASSERT(result->IsEmpty()); + DCHECK(Start().Value() < position.Value()); + DCHECK(result->IsEmpty()); // Find the last interval that ends before the position. If the // position is contained in one of the intervals in the chain, we // split that interval and use the first part. @@ -366,9 +353,9 @@ bool LiveRange::ShouldBeAllocatedBefore(const LiveRange* other) const { void LiveRange::ShortenTo(LifetimePosition start) { LAllocator::TraceAlloc("Shorten live range %d to [%d\n", id_, start.Value()); - ASSERT(first_interval_ != NULL); - ASSERT(first_interval_->start().Value() <= start.Value()); - ASSERT(start.Value() < first_interval_->end().Value()); + DCHECK(first_interval_ != NULL); + DCHECK(first_interval_->start().Value() <= start.Value()); + DCHECK(start.Value() < first_interval_->end().Value()); first_interval_->set_start(start); } @@ -420,7 +407,7 @@ void LiveRange::AddUseInterval(LifetimePosition start, // Order of instruction's processing (see ProcessInstructions) guarantees // that each new use interval either precedes or intersects with // last added interval. 
- ASSERT(start.Value() < first_interval_->end().Value()); + DCHECK(start.Value() < first_interval_->end().Value()); first_interval_->start_ = Min(start, first_interval_->start_); first_interval_->end_ = Max(end, first_interval_->end_); } @@ -463,11 +450,11 @@ void LiveRange::ConvertOperands(Zone* zone) { LOperand* op = CreateAssignedOperand(zone); UsePosition* use_pos = first_pos(); while (use_pos != NULL) { - ASSERT(Start().Value() <= use_pos->pos().Value() && + DCHECK(Start().Value() <= use_pos->pos().Value() && use_pos->pos().Value() <= End().Value()); if (use_pos->HasOperand()) { - ASSERT(op->IsRegister() || op->IsDoubleRegister() || + DCHECK(op->IsRegister() || op->IsDoubleRegister() || !use_pos->RequiresRegister()); use_pos->operand()->ConvertTo(op->kind(), op->index()); } @@ -489,7 +476,7 @@ bool LiveRange::Covers(LifetimePosition position) { for (UseInterval* interval = start_search; interval != NULL; interval = interval->next()) { - ASSERT(interval->next() == NULL || + DCHECK(interval->next() == NULL || interval->next()->start().Value() >= interval->start().Value()); AdvanceLastProcessedMarker(interval, position); if (interval->Contains(position)) return true; @@ -607,7 +594,7 @@ LOperand* LAllocator::AllocateFixed(LUnallocated* operand, int pos, bool is_tagged) { TraceAlloc("Allocating fixed reg for op %d\n", operand->virtual_register()); - ASSERT(operand->HasFixedPolicy()); + DCHECK(operand->HasFixedPolicy()); if (operand->HasFixedSlotPolicy()) { operand->ConvertTo(LOperand::STACK_SLOT, operand->fixed_slot_index()); } else if (operand->HasFixedRegisterPolicy()) { @@ -631,11 +618,11 @@ LOperand* LAllocator::AllocateFixed(LUnallocated* operand, LiveRange* LAllocator::FixedLiveRangeFor(int index) { - ASSERT(index < Register::kMaxNumAllocatableRegisters); + DCHECK(index < Register::kMaxNumAllocatableRegisters); LiveRange* result = fixed_live_ranges_[index]; if (result == NULL) { result = new(zone()) LiveRange(FixedLiveRangeID(index), chunk()->zone()); - ASSERT(result->IsFixed()); + DCHECK(result->IsFixed()); result->kind_ = GENERAL_REGISTERS; SetLiveRangeAssignedRegister(result, index); fixed_live_ranges_[index] = result; @@ -645,12 +632,12 @@ LiveRange* LAllocator::FixedLiveRangeFor(int index) { LiveRange* LAllocator::FixedDoubleLiveRangeFor(int index) { - ASSERT(index < DoubleRegister::NumAllocatableRegisters()); + DCHECK(index < DoubleRegister::NumAllocatableRegisters()); LiveRange* result = fixed_double_live_ranges_[index]; if (result == NULL) { result = new(zone()) LiveRange(FixedDoubleLiveRangeID(index), chunk()->zone()); - ASSERT(result->IsFixed()); + DCHECK(result->IsFixed()); result->kind_ = DOUBLE_REGISTERS; SetLiveRangeAssignedRegister(result, index); fixed_double_live_ranges_[index] = result; @@ -840,7 +827,7 @@ void LAllocator::MeetConstraintsBetween(LInstruction* first, } else if (cur_input->HasWritableRegisterPolicy()) { // The live range of writable input registers always goes until the end // of the instruction. 
- ASSERT(!cur_input->IsUsedAtStart()); + DCHECK(!cur_input->IsUsedAtStart()); LUnallocated* input_copy = cur_input->CopyUnconstrained( chunk()->zone()); @@ -940,7 +927,7 @@ void LAllocator::ProcessInstructions(HBasicBlock* block, BitVector* live) { } } } else { - ASSERT(!IsGapAt(index)); + DCHECK(!IsGapAt(index)); LInstruction* instr = InstructionAt(index); if (instr != NULL) { @@ -1038,7 +1025,7 @@ void LAllocator::ResolvePhis(HBasicBlock* block) { HConstant* constant = HConstant::cast(op); operand = chunk_->DefineConstantOperand(constant); } else { - ASSERT(!op->EmitAtUses()); + DCHECK(!op->EmitAtUses()); LUnallocated* unalloc = new(chunk()->zone()) LUnallocated(LUnallocated::ANY); unalloc->set_virtual_register(op->id()); @@ -1080,7 +1067,7 @@ void LAllocator::ResolvePhis(HBasicBlock* block) { bool LAllocator::Allocate(LChunk* chunk) { - ASSERT(chunk_ == NULL); + DCHECK(chunk_ == NULL); chunk_ = static_cast<LPlatformChunk*>(chunk); assigned_registers_ = new(chunk->zone()) BitVector(Register::NumAllocatableRegisters(), @@ -1138,18 +1125,18 @@ void LAllocator::ResolveControlFlow(LiveRange* range, LiveRange* cur_range = range; while (cur_range != NULL && (cur_cover == NULL || pred_cover == NULL)) { if (cur_range->CanCover(cur_start)) { - ASSERT(cur_cover == NULL); + DCHECK(cur_cover == NULL); cur_cover = cur_range; } if (cur_range->CanCover(pred_end)) { - ASSERT(pred_cover == NULL); + DCHECK(pred_cover == NULL); pred_cover = cur_range; } cur_range = cur_range->next(); } if (cur_cover->IsSpilled()) return; - ASSERT(pred_cover != NULL && cur_cover != NULL); + DCHECK(pred_cover != NULL && cur_cover != NULL); if (pred_cover != cur_cover) { LOperand* pred_op = pred_cover->CreateAssignedOperand(chunk()->zone()); LOperand* cur_op = cur_cover->CreateAssignedOperand(chunk()->zone()); @@ -1158,7 +1145,7 @@ void LAllocator::ResolveControlFlow(LiveRange* range, if (block->predecessors()->length() == 1) { gap = GapAt(block->first_instruction_index()); } else { - ASSERT(pred->end()->SecondSuccessor() == NULL); + DCHECK(pred->end()->SecondSuccessor() == NULL); gap = GetLastGap(pred); // We are going to insert a move before the branch instruction. @@ -1307,7 +1294,7 @@ void LAllocator::BuildLiveRanges() { break; } } - ASSERT(hint != NULL); + DCHECK(hint != NULL); LifetimePosition block_start = LifetimePosition::FromInstructionIndex( block->first_instruction_index()); @@ -1354,7 +1341,7 @@ void LAllocator::BuildLiveRanges() { CodeStub::Major major_key = chunk_->info()->code_stub()->MajorKey(); PrintF("Function: %s\n", CodeStub::MajorName(major_key, false)); } else { - ASSERT(chunk_->info()->IsOptimizing()); + DCHECK(chunk_->info()->IsOptimizing()); AllowHandleDereference allow_deref; PrintF("Function: %s\n", chunk_->info()->function()->debug_name()->ToCString().get()); @@ -1364,7 +1351,7 @@ void LAllocator::BuildLiveRanges() { PrintF("First use is at %d\n", range->first_pos()->pos().Value()); iterator.Advance(); } - ASSERT(!found); + DCHECK(!found); } #endif } @@ -1393,7 +1380,7 @@ void LAllocator::PopulatePointerMaps() { LAllocatorPhase phase("L_Populate pointer maps", this); const ZoneList<LPointerMap*>* pointer_maps = chunk_->pointer_maps(); - ASSERT(SafePointsAreInOrder()); + DCHECK(SafePointsAreInOrder()); // Iterate over all safe point positions and record a pointer // for all spilled live ranges at this point. 
@@ -1415,7 +1402,7 @@ void LAllocator::PopulatePointerMaps() { for (LiveRange* cur = range; cur != NULL; cur = cur->next()) { LifetimePosition this_end = cur->End(); if (this_end.InstructionIndex() > end) end = this_end.InstructionIndex(); - ASSERT(cur->Start().InstructionIndex() >= start); + DCHECK(cur->Start().InstructionIndex() >= start); } // Most of the ranges are in order, but not all. Keep an eye on when @@ -1469,7 +1456,7 @@ void LAllocator::PopulatePointerMaps() { "at safe point %d\n", cur->id(), cur->Start().Value(), safe_point); LOperand* operand = cur->CreateAssignedOperand(chunk()->zone()); - ASSERT(!operand->IsStackSlot()); + DCHECK(!operand->IsStackSlot()); map->RecordPointer(operand, chunk()->zone()); } } @@ -1494,7 +1481,7 @@ void LAllocator::AllocateDoubleRegisters() { void LAllocator::AllocateRegisters() { - ASSERT(unhandled_live_ranges_.is_empty()); + DCHECK(unhandled_live_ranges_.is_empty()); for (int i = 0; i < live_ranges_.length(); ++i) { if (live_ranges_[i] != NULL) { @@ -1504,11 +1491,11 @@ void LAllocator::AllocateRegisters() { } } SortUnhandled(); - ASSERT(UnhandledIsSorted()); + DCHECK(UnhandledIsSorted()); - ASSERT(reusable_slots_.is_empty()); - ASSERT(active_live_ranges_.is_empty()); - ASSERT(inactive_live_ranges_.is_empty()); + DCHECK(reusable_slots_.is_empty()); + DCHECK(active_live_ranges_.is_empty()); + DCHECK(inactive_live_ranges_.is_empty()); if (mode_ == DOUBLE_REGISTERS) { for (int i = 0; i < DoubleRegister::NumAllocatableRegisters(); ++i) { @@ -1518,7 +1505,7 @@ void LAllocator::AllocateRegisters() { } } } else { - ASSERT(mode_ == GENERAL_REGISTERS); + DCHECK(mode_ == GENERAL_REGISTERS); for (int i = 0; i < fixed_live_ranges_.length(); ++i) { LiveRange* current = fixed_live_ranges_.at(i); if (current != NULL) { @@ -1528,9 +1515,9 @@ void LAllocator::AllocateRegisters() { } while (!unhandled_live_ranges_.is_empty()) { - ASSERT(UnhandledIsSorted()); + DCHECK(UnhandledIsSorted()); LiveRange* current = unhandled_live_ranges_.RemoveLast(); - ASSERT(UnhandledIsSorted()); + DCHECK(UnhandledIsSorted()); LifetimePosition position = current->Start(); #ifdef DEBUG allocation_finger_ = position; @@ -1557,7 +1544,7 @@ void LAllocator::AllocateRegisters() { // the register is too close to the start of live range. SpillBetween(current, current->Start(), pos->pos()); if (!AllocationOk()) return; - ASSERT(UnhandledIsSorted()); + DCHECK(UnhandledIsSorted()); continue; } } @@ -1584,7 +1571,7 @@ void LAllocator::AllocateRegisters() { } } - ASSERT(!current->HasRegisterAssigned() && !current->IsSpilled()); + DCHECK(!current->HasRegisterAssigned() && !current->IsSpilled()); bool result = TryAllocateFreeReg(current); if (!AllocationOk()) return; @@ -1616,7 +1603,7 @@ void LAllocator::TraceAlloc(const char* msg, ...) 
{ if (FLAG_trace_alloc) { va_list arguments; va_start(arguments, msg); - OS::VPrint(msg, arguments); + base::OS::VPrint(msg, arguments); va_end(arguments); } } @@ -1658,33 +1645,33 @@ void LAllocator::AddToInactive(LiveRange* range) { void LAllocator::AddToUnhandledSorted(LiveRange* range) { if (range == NULL || range->IsEmpty()) return; - ASSERT(!range->HasRegisterAssigned() && !range->IsSpilled()); - ASSERT(allocation_finger_.Value() <= range->Start().Value()); + DCHECK(!range->HasRegisterAssigned() && !range->IsSpilled()); + DCHECK(allocation_finger_.Value() <= range->Start().Value()); for (int i = unhandled_live_ranges_.length() - 1; i >= 0; --i) { LiveRange* cur_range = unhandled_live_ranges_.at(i); if (range->ShouldBeAllocatedBefore(cur_range)) { TraceAlloc("Add live range %d to unhandled at %d\n", range->id(), i + 1); unhandled_live_ranges_.InsertAt(i + 1, range, zone()); - ASSERT(UnhandledIsSorted()); + DCHECK(UnhandledIsSorted()); return; } } TraceAlloc("Add live range %d to unhandled at start\n", range->id()); unhandled_live_ranges_.InsertAt(0, range, zone()); - ASSERT(UnhandledIsSorted()); + DCHECK(UnhandledIsSorted()); } void LAllocator::AddToUnhandledUnsorted(LiveRange* range) { if (range == NULL || range->IsEmpty()) return; - ASSERT(!range->HasRegisterAssigned() && !range->IsSpilled()); + DCHECK(!range->HasRegisterAssigned() && !range->IsSpilled()); TraceAlloc("Add live range %d to unhandled unsorted at end\n", range->id()); unhandled_live_ranges_.Add(range, zone()); } static int UnhandledSortHelper(LiveRange* const* a, LiveRange* const* b) { - ASSERT(!(*a)->ShouldBeAllocatedBefore(*b) || + DCHECK(!(*a)->ShouldBeAllocatedBefore(*b) || !(*b)->ShouldBeAllocatedBefore(*a)); if ((*a)->ShouldBeAllocatedBefore(*b)) return 1; if ((*b)->ShouldBeAllocatedBefore(*a)) return -1; @@ -1738,7 +1725,7 @@ LOperand* LAllocator::TryReuseSpillSlot(LiveRange* range) { void LAllocator::ActiveToHandled(LiveRange* range) { - ASSERT(active_live_ranges_.Contains(range)); + DCHECK(active_live_ranges_.Contains(range)); active_live_ranges_.RemoveElement(range); TraceAlloc("Moving live range %d from active to handled\n", range->id()); FreeSpillSlot(range); @@ -1746,7 +1733,7 @@ void LAllocator::ActiveToHandled(LiveRange* range) { void LAllocator::ActiveToInactive(LiveRange* range) { - ASSERT(active_live_ranges_.Contains(range)); + DCHECK(active_live_ranges_.Contains(range)); active_live_ranges_.RemoveElement(range); inactive_live_ranges_.Add(range, zone()); TraceAlloc("Moving live range %d from active to inactive\n", range->id()); @@ -1754,7 +1741,7 @@ void LAllocator::ActiveToInactive(LiveRange* range) { void LAllocator::InactiveToHandled(LiveRange* range) { - ASSERT(inactive_live_ranges_.Contains(range)); + DCHECK(inactive_live_ranges_.Contains(range)); inactive_live_ranges_.RemoveElement(range); TraceAlloc("Moving live range %d from inactive to handled\n", range->id()); FreeSpillSlot(range); @@ -1762,7 +1749,7 @@ void LAllocator::InactiveToHandled(LiveRange* range) { void LAllocator::InactiveToActive(LiveRange* range) { - ASSERT(inactive_live_ranges_.Contains(range)); + DCHECK(inactive_live_ranges_.Contains(range)); inactive_live_ranges_.RemoveElement(range); active_live_ranges_.Add(range, zone()); TraceAlloc("Moving live range %d from inactive to active\n", range->id()); @@ -1790,7 +1777,7 @@ bool LAllocator::TryAllocateFreeReg(LiveRange* current) { for (int i = 0; i < inactive_live_ranges_.length(); ++i) { LiveRange* cur_inactive = inactive_live_ranges_.at(i); - ASSERT(cur_inactive->End().Value() > 
current->Start().Value()); + DCHECK(cur_inactive->End().Value() > current->Start().Value()); LifetimePosition next_intersection = cur_inactive->FirstIntersection(current); if (!next_intersection.IsValid()) continue; @@ -1844,7 +1831,7 @@ bool LAllocator::TryAllocateFreeReg(LiveRange* current) { // Register reg is available at the range start and is free until // the range end. - ASSERT(pos.Value() >= current->End().Value()); + DCHECK(pos.Value() >= current->End().Value()); TraceAlloc("Assigning free reg %s to live range %d\n", RegisterName(reg), current->id()); @@ -1890,7 +1877,7 @@ void LAllocator::AllocateBlockedReg(LiveRange* current) { for (int i = 0; i < inactive_live_ranges_.length(); ++i) { LiveRange* range = inactive_live_ranges_.at(i); - ASSERT(range->End().Value() > current->Start().Value()); + DCHECK(range->End().Value() > current->Start().Value()); LifetimePosition next_intersection = range->FirstIntersection(current); if (!next_intersection.IsValid()) continue; int cur_reg = range->assigned_register(); @@ -1929,7 +1916,7 @@ void LAllocator::AllocateBlockedReg(LiveRange* current) { } // Register reg is not blocked for the whole range. - ASSERT(block_pos[reg].Value() >= current->End().Value()); + DCHECK(block_pos[reg].Value() >= current->End().Value()); TraceAlloc("Assigning blocked reg %s to live range %d\n", RegisterName(reg), current->id()); @@ -1976,7 +1963,7 @@ LifetimePosition LAllocator::FindOptimalSpillingPos(LiveRange* range, void LAllocator::SplitAndSpillIntersecting(LiveRange* current) { - ASSERT(current->HasRegisterAssigned()); + DCHECK(current->HasRegisterAssigned()); int reg = current->assigned_register(); LifetimePosition split_pos = current->Start(); for (int i = 0; i < active_live_ranges_.length(); ++i) { @@ -2005,7 +1992,7 @@ void LAllocator::SplitAndSpillIntersecting(LiveRange* current) { for (int i = 0; i < inactive_live_ranges_.length(); ++i) { LiveRange* range = inactive_live_ranges_[i]; - ASSERT(range->End().Value() > current->Start().Value()); + DCHECK(range->End().Value() > current->Start().Value()); if (range->assigned_register() == reg && !range->IsFixed()) { LifetimePosition next_intersection = range->FirstIntersection(current); if (next_intersection.IsValid()) { @@ -2032,14 +2019,14 @@ bool LAllocator::IsBlockBoundary(LifetimePosition pos) { LiveRange* LAllocator::SplitRangeAt(LiveRange* range, LifetimePosition pos) { - ASSERT(!range->IsFixed()); + DCHECK(!range->IsFixed()); TraceAlloc("Splitting live range %d at %d\n", range->id(), pos.Value()); if (pos.Value() <= range->Start().Value()) return range; // We can't properly connect liveranges if split occured at the end // of control instruction. 
- ASSERT(pos.IsInstructionStart() || + DCHECK(pos.IsInstructionStart() || !chunk_->instructions()->at(pos.InstructionIndex())->IsControl()); int vreg = GetVirtualRegister(); @@ -2053,14 +2040,14 @@ LiveRange* LAllocator::SplitRangeAt(LiveRange* range, LifetimePosition pos) { LiveRange* LAllocator::SplitBetween(LiveRange* range, LifetimePosition start, LifetimePosition end) { - ASSERT(!range->IsFixed()); + DCHECK(!range->IsFixed()); TraceAlloc("Splitting live range %d in position between [%d, %d]\n", range->id(), start.Value(), end.Value()); LifetimePosition split_pos = FindOptimalSplitPos(start, end); - ASSERT(split_pos.Value() >= start.Value()); + DCHECK(split_pos.Value() >= start.Value()); return SplitRangeAt(range, split_pos); } @@ -2069,7 +2056,7 @@ LifetimePosition LAllocator::FindOptimalSplitPos(LifetimePosition start, LifetimePosition end) { int start_instr = start.InstructionIndex(); int end_instr = end.InstructionIndex(); - ASSERT(start_instr <= end_instr); + DCHECK(start_instr <= end_instr); // We have no choice if (start_instr == end_instr) return end; @@ -2131,7 +2118,7 @@ void LAllocator::SpillBetweenUntil(LiveRange* range, end.PrevInstruction().InstructionEnd()); if (!AllocationOk()) return; - ASSERT(third_part != second_part); + DCHECK(third_part != second_part); Spill(second_part); AddToUnhandledSorted(third_part); @@ -2144,7 +2131,7 @@ void LAllocator::SpillBetweenUntil(LiveRange* range, void LAllocator::Spill(LiveRange* range) { - ASSERT(!range->IsSpilled()); + DCHECK(!range->IsSpilled()); TraceAlloc("Spilling live range %d\n", range->id()); LiveRange* first = range->TopLevel(); @@ -2190,7 +2177,7 @@ LAllocatorPhase::~LAllocatorPhase() { if (FLAG_hydrogen_stats) { unsigned size = allocator_->zone()->allocation_size() - allocator_zone_start_allocation_size_; - isolate()->GetHStatistics()->SaveTiming(name(), TimeDelta(), size); + isolate()->GetHStatistics()->SaveTiming(name(), base::TimeDelta(), size); } if (ShouldProduceTraceOutput()) { diff --git a/deps/v8/src/lithium-allocator.h b/deps/v8/src/lithium-allocator.h index 83ba9afb68c..f63077e19d7 100644 --- a/deps/v8/src/lithium-allocator.h +++ b/deps/v8/src/lithium-allocator.h @@ -5,11 +5,11 @@ #ifndef V8_LITHIUM_ALLOCATOR_H_ #define V8_LITHIUM_ALLOCATOR_H_ -#include "v8.h" +#include "src/v8.h" -#include "allocation.h" -#include "lithium.h" -#include "zone.h" +#include "src/allocation.h" +#include "src/lithium.h" +#include "src/zone.h" namespace v8 { namespace internal { @@ -17,7 +17,6 @@ namespace internal { // Forward declarations. class HBasicBlock; class HGraph; -class HInstruction; class HPhi; class HTracer; class HValue; @@ -52,7 +51,7 @@ class LifetimePosition { // Returns the index of the instruction to which this lifetime position // corresponds. int InstructionIndex() const { - ASSERT(IsValid()); + DCHECK(IsValid()); return value_ / kStep; } @@ -65,28 +64,28 @@ class LifetimePosition { // Returns the lifetime position for the start of the instruction which // corresponds to this lifetime position. LifetimePosition InstructionStart() const { - ASSERT(IsValid()); + DCHECK(IsValid()); return LifetimePosition(value_ & ~(kStep - 1)); } // Returns the lifetime position for the end of the instruction which // corresponds to this lifetime position. LifetimePosition InstructionEnd() const { - ASSERT(IsValid()); + DCHECK(IsValid()); return LifetimePosition(InstructionStart().Value() + kStep/2); } // Returns the lifetime position for the beginning of the next instruction. 
LifetimePosition NextInstruction() const { - ASSERT(IsValid()); + DCHECK(IsValid()); return LifetimePosition(InstructionStart().Value() + kStep); } // Returns the lifetime position for the beginning of the previous // instruction. LifetimePosition PrevInstruction() const { - ASSERT(IsValid()); - ASSERT(value_ > 1); + DCHECK(IsValid()); + DCHECK(value_ > 1); return LifetimePosition(InstructionStart().Value() - kStep); } @@ -118,70 +117,12 @@ class LifetimePosition { }; -enum RegisterKind { - UNALLOCATED_REGISTERS, - GENERAL_REGISTERS, - DOUBLE_REGISTERS -}; - - -// A register-allocator view of a Lithium instruction. It contains the id of -// the output operand and a list of input operand uses. - -class LInstruction; -class LEnvironment; - -// Iterator for non-null temp operands. -class TempIterator BASE_EMBEDDED { - public: - inline explicit TempIterator(LInstruction* instr); - inline bool Done(); - inline LOperand* Current(); - inline void Advance(); - - private: - inline void SkipUninteresting(); - LInstruction* instr_; - int limit_; - int current_; -}; - - -// Iterator for non-constant input operands. -class InputIterator BASE_EMBEDDED { - public: - inline explicit InputIterator(LInstruction* instr); - inline bool Done(); - inline LOperand* Current(); - inline void Advance(); - - private: - inline void SkipUninteresting(); - LInstruction* instr_; - int limit_; - int current_; -}; - - -class UseIterator BASE_EMBEDDED { - public: - inline explicit UseIterator(LInstruction* instr); - inline bool Done(); - inline LOperand* Current(); - inline void Advance(); - - private: - InputIterator input_iterator_; - DeepIterator env_iterator_; -}; - - // Representation of the non-empty interval [start,end[. class UseInterval: public ZoneObject { public: UseInterval(LifetimePosition start, LifetimePosition end) : start_(start), end_(end), next_(NULL) { - ASSERT(start.Value() < end.Value()); + DCHECK(start.Value() < end.Value()); } LifetimePosition start() const { return start_; } @@ -302,7 +243,7 @@ class LiveRange: public ZoneObject { bool IsSpilled() const { return spilled_; } LOperand* current_hint_operand() const { - ASSERT(current_hint_operand_ == FirstHint()); + DCHECK(current_hint_operand_ == FirstHint()); return current_hint_operand_; } LOperand* FirstHint() const { @@ -313,12 +254,12 @@ class LiveRange: public ZoneObject { } LifetimePosition Start() const { - ASSERT(!IsEmpty()); + DCHECK(!IsEmpty()); return first_interval()->start(); } LifetimePosition End() const { - ASSERT(!IsEmpty()); + DCHECK(!IsEmpty()); return last_interval_->end(); } @@ -423,7 +364,7 @@ class LAllocator BASE_EMBEDDED { void MarkAsOsrEntry() { // There can be only one. - ASSERT(!has_osr_entry_); + DCHECK(!has_osr_entry_); // Simply set a flag to find and process instruction later. has_osr_entry_ = true; } diff --git a/deps/v8/src/lithium-codegen.cc b/deps/v8/src/lithium-codegen.cc index 0d841b7e80a..8b6444db58b 100644 --- a/deps/v8/src/lithium-codegen.cc +++ b/deps/v8/src/lithium-codegen.cc @@ -2,25 +2,31 @@ // Use of this source code is governed by a BSD-style license that can be // found in the LICENSE file. 
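The LifetimePosition hunks above pack an instruction index plus a start/end half into a single integer: value_ is index * kStep (kStep is 2 in this allocator), InstructionStart() clears the low bit, and InstructionEnd() adds kStep/2. A standalone mirror of the arithmetic, with illustrative names only, makes the bit manipulation easy to verify:

    #include <cassert>

    struct LifetimePos {  // hypothetical mirror of LifetimePosition
      static const int kStep = 2;
      int value_;
      explicit LifetimePos(int v) : value_(v) {}
      int InstructionIndex() const { return value_ / kStep; }
      LifetimePos InstructionStart() const {
        return LifetimePos(value_ & ~(kStep - 1));  // clear the "end" bit
      }
      LifetimePos InstructionEnd() const {
        return LifetimePos(InstructionStart().value_ + kStep / 2);
      }
    };

    int main() {
      LifetimePos p(7);  // the "end" half of instruction 3
      assert(p.InstructionIndex() == 3);
      assert(p.InstructionStart().value_ == 6);
      assert(p.InstructionEnd().value_ == 7);
      return 0;
    }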
-#include "v8.h" +#include "src/v8.h" -#include "lithium-codegen.h" +#include "src/lithium-codegen.h" #if V8_TARGET_ARCH_IA32 -#include "ia32/lithium-ia32.h" -#include "ia32/lithium-codegen-ia32.h" +#include "src/ia32/lithium-ia32.h" // NOLINT +#include "src/ia32/lithium-codegen-ia32.h" // NOLINT #elif V8_TARGET_ARCH_X64 -#include "x64/lithium-x64.h" -#include "x64/lithium-codegen-x64.h" +#include "src/x64/lithium-x64.h" // NOLINT +#include "src/x64/lithium-codegen-x64.h" // NOLINT #elif V8_TARGET_ARCH_ARM -#include "arm/lithium-arm.h" -#include "arm/lithium-codegen-arm.h" +#include "src/arm/lithium-arm.h" // NOLINT +#include "src/arm/lithium-codegen-arm.h" // NOLINT #elif V8_TARGET_ARCH_ARM64 -#include "arm64/lithium-arm64.h" -#include "arm64/lithium-codegen-arm64.h" +#include "src/arm64/lithium-arm64.h" // NOLINT +#include "src/arm64/lithium-codegen-arm64.h" // NOLINT #elif V8_TARGET_ARCH_MIPS -#include "mips/lithium-mips.h" -#include "mips/lithium-codegen-mips.h" +#include "src/mips/lithium-mips.h" // NOLINT +#include "src/mips/lithium-codegen-mips.h" // NOLINT +#elif V8_TARGET_ARCH_MIPS64 +#include "src/mips64/lithium-mips64.h" // NOLINT +#include "src/mips64/lithium-codegen-mips64.h" // NOLINT +#elif V8_TARGET_ARCH_X87 +#include "src/x87/lithium-x87.h" // NOLINT +#include "src/x87/lithium-codegen-x87.h" // NOLINT #else #error Unsupported target architecture. #endif @@ -50,7 +56,7 @@ LCodeGenBase::LCodeGenBase(LChunk* chunk, bool LCodeGenBase::GenerateBody() { - ASSERT(is_generating()); + DCHECK(is_generating()); bool emit_instructions = true; LCodeGen* codegen = static_cast<LCodeGen*>(this); for (current_instruction_ = 0; @@ -110,12 +116,12 @@ void LCodeGenBase::CheckEnvironmentUsage() { HInstruction* hinstr = HInstruction::cast(hval); if (!hinstr->CanDeoptimize() && instr->HasEnvironment()) { - V8_Fatal(__FILE__, __LINE__, "CanDeoptimize is wrong for %s (%s)\n", + V8_Fatal(__FILE__, __LINE__, "CanDeoptimize is wrong for %s (%s)", hinstr->Mnemonic(), instr->Mnemonic()); } if (instr->HasEnvironment() && !instr->environment()->has_been_used()) { - V8_Fatal(__FILE__, __LINE__, "unused environment for %s (%s)\n", + V8_Fatal(__FILE__, __LINE__, "unused environment for %s (%s)", hinstr->Mnemonic(), instr->Mnemonic()); } } @@ -136,7 +142,7 @@ void LCodeGenBase::Comment(const char* format, ...) { // issues when the stack allocated buffer goes out of scope. 
size_t length = builder.position(); Vector<char> copy = Vector<char>::New(static_cast<int>(length) + 1); - OS::MemCopy(copy.start(), builder.Finalize(), copy.length()); + MemCopy(copy.start(), builder.Finalize(), copy.length()); masm()->RecordComment(copy.start()); } @@ -162,7 +168,7 @@ static void AddWeakObjectToCodeDependency(Isolate* isolate, void LCodeGenBase::RegisterWeakObjectsInOptimizedCode(Handle<Code> code) { - ASSERT(code->is_optimized_code()); + DCHECK(code->is_optimized_code()); ZoneList<Handle<Map> > maps(1, zone()); ZoneList<Handle<JSObject> > objects(1, zone()); ZoneList<Handle<Cell> > cells(1, zone()); diff --git a/deps/v8/src/lithium-codegen.h b/deps/v8/src/lithium-codegen.h index 28a5ab16e0c..1eb963e6faa 100644 --- a/deps/v8/src/lithium-codegen.h +++ b/deps/v8/src/lithium-codegen.h @@ -5,9 +5,9 @@ #ifndef V8_LITHIUM_CODEGEN_H_ #define V8_LITHIUM_CODEGEN_H_ -#include "v8.h" +#include "src/v8.h" -#include "compiler.h" +#include "src/compiler.h" namespace v8 { namespace internal { diff --git a/deps/v8/src/lithium-inl.h b/deps/v8/src/lithium-inl.h new file mode 100644 index 00000000000..36e166e9260 --- /dev/null +++ b/deps/v8/src/lithium-inl.h @@ -0,0 +1,112 @@ +// Copyright 2012 the V8 project authors. All rights reserved. +// Use of this source code is governed by a BSD-style license that can be +// found in the LICENSE file. + +#ifndef V8_LITHIUM_INL_H_ +#define V8_LITHIUM_INL_H_ + +#include "src/lithium.h" + +#if V8_TARGET_ARCH_IA32 +#include "src/ia32/lithium-ia32.h" // NOLINT +#elif V8_TARGET_ARCH_X64 +#include "src/x64/lithium-x64.h" // NOLINT +#elif V8_TARGET_ARCH_ARM64 +#include "src/arm64/lithium-arm64.h" // NOLINT +#elif V8_TARGET_ARCH_ARM +#include "src/arm/lithium-arm.h" // NOLINT +#elif V8_TARGET_ARCH_MIPS +#include "src/mips/lithium-mips.h" // NOLINT +#elif V8_TARGET_ARCH_MIPS64 +#include "src/mips64/lithium-mips64.h" // NOLINT +#elif V8_TARGET_ARCH_X87 +#include "src/x87/lithium-x87.h" // NOLINT +#else +#error "Unknown architecture." +#endif + +namespace v8 { +namespace internal { + +TempIterator::TempIterator(LInstruction* instr) + : instr_(instr), limit_(instr->TempCount()), current_(0) { + SkipUninteresting(); +} + + +bool TempIterator::Done() { return current_ >= limit_; } + + +LOperand* TempIterator::Current() { + DCHECK(!Done()); + return instr_->TempAt(current_); +} + + +void TempIterator::SkipUninteresting() { + while (current_ < limit_ && instr_->TempAt(current_) == NULL) ++current_; +} + + +void TempIterator::Advance() { + ++current_; + SkipUninteresting(); +} + + +InputIterator::InputIterator(LInstruction* instr) + : instr_(instr), limit_(instr->InputCount()), current_(0) { + SkipUninteresting(); +} + + +bool InputIterator::Done() { return current_ >= limit_; } + + +LOperand* InputIterator::Current() { + DCHECK(!Done()); + DCHECK(instr_->InputAt(current_) != NULL); + return instr_->InputAt(current_); +} + + +void InputIterator::Advance() { + ++current_; + SkipUninteresting(); +} + + +void InputIterator::SkipUninteresting() { + while (current_ < limit_) { + LOperand* current = instr_->InputAt(current_); + if (current != NULL && !current->IsConstantOperand()) break; + ++current_; + } +} + + +UseIterator::UseIterator(LInstruction* instr) + : input_iterator_(instr), env_iterator_(instr->environment()) {} + + +bool UseIterator::Done() { + return input_iterator_.Done() && env_iterator_.Done(); +} + + +LOperand* UseIterator::Current() { + DCHECK(!Done()); + LOperand* result = input_iterator_.Done() ? 
env_iterator_.Current() + : input_iterator_.Current(); + DCHECK(result != NULL); + return result; +} + + +void UseIterator::Advance() { + input_iterator_.Done() ? env_iterator_.Advance() : input_iterator_.Advance(); +} +} +} // namespace v8::internal + +#endif // V8_LITHIUM_INL_H_ diff --git a/deps/v8/src/lithium.cc b/deps/v8/src/lithium.cc index 2265353f473..a8d4d22ab5c 100644 --- a/deps/v8/src/lithium.cc +++ b/deps/v8/src/lithium.cc @@ -2,25 +2,33 @@ // Use of this source code is governed by a BSD-style license that can be // found in the LICENSE file. -#include "v8.h" -#include "lithium.h" -#include "scopes.h" +#include "src/v8.h" + +#include "src/lithium.h" +#include "src/scopes.h" +#include "src/serialize.h" #if V8_TARGET_ARCH_IA32 -#include "ia32/lithium-ia32.h" -#include "ia32/lithium-codegen-ia32.h" +#include "src/ia32/lithium-ia32.h" // NOLINT +#include "src/ia32/lithium-codegen-ia32.h" // NOLINT #elif V8_TARGET_ARCH_X64 -#include "x64/lithium-x64.h" -#include "x64/lithium-codegen-x64.h" +#include "src/x64/lithium-x64.h" // NOLINT +#include "src/x64/lithium-codegen-x64.h" // NOLINT #elif V8_TARGET_ARCH_ARM -#include "arm/lithium-arm.h" -#include "arm/lithium-codegen-arm.h" +#include "src/arm/lithium-arm.h" // NOLINT +#include "src/arm/lithium-codegen-arm.h" // NOLINT #elif V8_TARGET_ARCH_MIPS -#include "mips/lithium-mips.h" -#include "mips/lithium-codegen-mips.h" +#include "src/mips/lithium-mips.h" // NOLINT +#include "src/mips/lithium-codegen-mips.h" // NOLINT #elif V8_TARGET_ARCH_ARM64 -#include "arm64/lithium-arm64.h" -#include "arm64/lithium-codegen-arm64.h" +#include "src/arm64/lithium-arm64.h" // NOLINT +#include "src/arm64/lithium-codegen-arm64.h" // NOLINT +#elif V8_TARGET_ARCH_MIPS64 +#include "src/mips64/lithium-mips64.h" // NOLINT +#include "src/mips64/lithium-codegen-mips64.h" // NOLINT +#elif V8_TARGET_ARCH_X87 +#include "src/x87/lithium-x87.h" // NOLINT +#include "src/x87/lithium-codegen-x87.h" // NOLINT #else #error "Unknown architecture." 
#endif @@ -47,16 +55,26 @@ void LOperand::PrintTo(StringStream* stream) { break; case LUnallocated::FIXED_REGISTER: { int reg_index = unalloc->fixed_register_index(); - const char* register_name = - Register::AllocationIndexToString(reg_index); - stream->Add("(=%s)", register_name); + if (reg_index < 0 || + reg_index >= Register::kMaxNumAllocatableRegisters) { + stream->Add("(=invalid_reg#%d)", reg_index); + } else { + const char* register_name = + Register::AllocationIndexToString(reg_index); + stream->Add("(=%s)", register_name); + } break; } case LUnallocated::FIXED_DOUBLE_REGISTER: { int reg_index = unalloc->fixed_register_index(); - const char* double_register_name = - DoubleRegister::AllocationIndexToString(reg_index); - stream->Add("(=%s)", double_register_name); + if (reg_index < 0 || + reg_index >= DoubleRegister::kMaxNumAllocatableRegisters) { + stream->Add("(=invalid_double_reg#%d)", reg_index); + } else { + const char* double_register_name = + DoubleRegister::AllocationIndexToString(reg_index); + stream->Add("(=%s)", double_register_name); + } break; } case LUnallocated::MUST_HAVE_REGISTER: @@ -85,12 +103,26 @@ void LOperand::PrintTo(StringStream* stream) { case DOUBLE_STACK_SLOT: stream->Add("[double_stack:%d]", index()); break; - case REGISTER: - stream->Add("[%s|R]", Register::AllocationIndexToString(index())); + case REGISTER: { + int reg_index = index(); + if (reg_index < 0 || reg_index >= Register::kMaxNumAllocatableRegisters) { + stream->Add("(=invalid_reg#%d|R)", reg_index); + } else { + stream->Add("[%s|R]", Register::AllocationIndexToString(reg_index)); + } break; - case DOUBLE_REGISTER: - stream->Add("[%s|R]", DoubleRegister::AllocationIndexToString(index())); + } + case DOUBLE_REGISTER: { + int reg_index = index(); + if (reg_index < 0 || + reg_index >= DoubleRegister::kMaxNumAllocatableRegisters) { + stream->Add("(=invalid_double_reg#%d|R)", reg_index); + } else { + stream->Add("[%s|R]", + DoubleRegister::AllocationIndexToString(reg_index)); + } break; + } } } @@ -181,7 +213,7 @@ void LEnvironment::PrintTo(StringStream* stream) { void LPointerMap::RecordPointer(LOperand* op, Zone* zone) { // Do not record arguments as pointers. if (op->IsStackSlot() && op->index() < 0) return; - ASSERT(!op->IsDoubleRegister() && !op->IsDoubleStackSlot()); + DCHECK(!op->IsDoubleRegister() && !op->IsDoubleStackSlot()); pointer_operands_.Add(op, zone); } @@ -189,7 +221,7 @@ void LPointerMap::RecordPointer(LOperand* op, Zone* zone) { void LPointerMap::RemovePointer(LOperand* op) { // Do not record arguments as pointers. if (op->IsStackSlot() && op->index() < 0) return; - ASSERT(!op->IsDoubleRegister() && !op->IsDoubleStackSlot()); + DCHECK(!op->IsDoubleRegister() && !op->IsDoubleStackSlot()); for (int i = 0; i < pointer_operands_.length(); ++i) { if (pointer_operands_[i]->Equals(op)) { pointer_operands_.Remove(i); @@ -202,7 +234,7 @@ void LPointerMap::RemovePointer(LOperand* op) { void LPointerMap::RecordUntagged(LOperand* op, Zone* zone) { // Do not record arguments as pointers. 
if (op->IsStackSlot() && op->index() < 0) return; - ASSERT(!op->IsDoubleRegister() && !op->IsDoubleStackSlot()); + DCHECK(!op->IsDoubleRegister() && !op->IsDoubleStackSlot()); untagged_operands_.Add(op, zone); } @@ -234,12 +266,11 @@ LChunk::LChunk(CompilationInfo* info, HGraph* graph) : spill_slot_count_(0), info_(info), graph_(graph), - instructions_(32, graph->zone()), - pointer_maps_(8, graph->zone()), - inlined_closures_(1, graph->zone()), - deprecation_dependencies_(MapLess(), MapAllocator(graph->zone())), - stability_dependencies_(MapLess(), MapAllocator(graph->zone())) { -} + instructions_(32, info->zone()), + pointer_maps_(8, info->zone()), + inlined_closures_(1, info->zone()), + deprecation_dependencies_(MapLess(), MapAllocator(info->zone())), + stability_dependencies_(MapLess(), MapAllocator(info->zone())) {} LLabel* LChunk::GetLabel(int block_id) const { @@ -259,7 +290,7 @@ int LChunk::LookupDestination(int block_id) const { Label* LChunk::GetAssemblyLabel(int block_id) const { LLabel* label = GetLabel(block_id); - ASSERT(!label->HasReplacement()); + DCHECK(!label->HasReplacement()); return label->label(); } @@ -300,7 +331,7 @@ void LChunk::MarkEmptyBlocks() { void LChunk::AddInstruction(LInstruction* instr, HBasicBlock* block) { - LInstructionGap* gap = new(graph_->zone()) LInstructionGap(block); + LInstructionGap* gap = new (zone()) LInstructionGap(block); gap->set_hydrogen_value(instr->hydrogen_value()); int index = -1; if (instr->IsControl()) { @@ -331,14 +362,14 @@ int LChunk::GetParameterStackSlot(int index) const { // spill slots. int result = index - info()->num_parameters() - 1; - ASSERT(result < 0); + DCHECK(result < 0); return result; } // A parameter relative to ebp in the arguments stub. int LChunk::ParameterAt(int index) { - ASSERT(-1 <= index); // -1 is the receiver. + DCHECK(-1 <= index); // -1 is the receiver. return (1 + info()->scope()->num_parameters() - index) * kPointerSize; } @@ -381,16 +412,16 @@ void LChunk::CommitDependencies(Handle<Code> code) const { for (MapSet::const_iterator it = deprecation_dependencies_.begin(), iend = deprecation_dependencies_.end(); it != iend; ++it) { Handle<Map> map = *it; - ASSERT(!map->is_deprecated()); - ASSERT(map->CanBeDeprecated()); + DCHECK(!map->is_deprecated()); + DCHECK(map->CanBeDeprecated()); Map::AddDependentCode(map, DependentCode::kTransitionGroup, code); } for (MapSet::const_iterator it = stability_dependencies_.begin(), iend = stability_dependencies_.end(); it != iend; ++it) { Handle<Map> map = *it; - ASSERT(map->is_stable()); - ASSERT(map->CanTransition()); + DCHECK(map->is_stable()); + DCHECK(map->CanTransition()); Map::AddDependentCode(map, DependentCode::kPrototypeCheckGroup, code); } @@ -430,6 +461,8 @@ Handle<Code> LChunk::Codegen() { LOG_CODE_EVENT(info()->isolate(), CodeStartLinePosInfoRecordEvent( assembler.positions_recorder())); + // TODO(yangguo) remove this once the code serializer handles code stubs. 
+ if (info()->will_serialize()) assembler.enable_serializer(); LCodeGen generator(this, &assembler, info()); MarkEmptyBlocks(); @@ -449,6 +482,9 @@ Handle<Code> LChunk::Codegen() { CodeEndLinePosInfoRecordEvent(*code, jit_handler_data)); CodeGenerator::PrintCode(code, info()); + DCHECK(!(info()->isolate()->serializer_enabled() && + info()->GetMustNotHaveEagerFrame() && + generator.NeedsEagerFrame())); return code; } assembler.AbortedCodeGeneration(); @@ -483,7 +519,7 @@ LEnvironment* LChunkBuilderBase::CreateEnvironment( argument_index_accumulator, objects_to_materialize); BailoutId ast_id = hydrogen_env->ast_id(); - ASSERT(!ast_id.IsNone() || + DCHECK(!ast_id.IsNone() || hydrogen_env->frame_type() != JS_FUNCTION); int value_count = hydrogen_env->length() - hydrogen_env->specials_count(); LEnvironment* result = @@ -505,7 +541,7 @@ LEnvironment* LChunkBuilderBase::CreateEnvironment( LOperand* op; HValue* value = hydrogen_env->values()->at(i); - CHECK(!value->IsPushArgument()); // Do not deopt outgoing arguments + CHECK(!value->IsPushArguments()); // Do not deopt outgoing arguments if (value->IsArgumentsObject() || value->IsCapturedObject()) { op = LEnvironment::materialization_marker(); } else { @@ -586,7 +622,7 @@ void LChunkBuilderBase::AddObjectToMaterialize(HValue* value, // Insert a hole for nested objects op = LEnvironment::materialization_marker(); } else { - ASSERT(!arg_value->IsPushArgument()); + DCHECK(!arg_value->IsPushArguments()); // For ordinary values, tell the register allocator we need the value // to be alive here op = UseAny(arg_value); @@ -605,14 +641,6 @@ void LChunkBuilderBase::AddObjectToMaterialize(HValue* value, } -LInstruction* LChunkBuilder::CheckElideControlInstruction( - HControlInstruction* instr) { - HBasicBlock* successor; - if (!instr->KnownSuccessorBlock(&successor)) return NULL; - return new(zone()) LGoto(successor); -} - - LPhase::~LPhase() { if (ShouldProduceTraceOutput()) { isolate()->GetHTracer()->TraceLithium(name(), chunk_); diff --git a/deps/v8/src/lithium.h b/deps/v8/src/lithium.h index 650bae69235..032c1d4290c 100644 --- a/deps/v8/src/lithium.h +++ b/deps/v8/src/lithium.h @@ -7,10 +7,10 @@ #include <set> -#include "allocation.h" -#include "hydrogen.h" -#include "safepoint-table.h" -#include "zone-allocator.h" +#include "src/allocation.h" +#include "src/hydrogen.h" +#include "src/safepoint-table.h" +#include "src/zone-allocator.h" namespace v8 { namespace internal { @@ -22,7 +22,6 @@ namespace internal { V(Register, REGISTER, 16) \ V(DoubleRegister, DOUBLE_REGISTER, 16) - class LOperand : public ZoneObject { public: enum Kind { @@ -49,9 +48,10 @@ class LOperand : public ZoneObject { void PrintTo(StringStream* stream); void ConvertTo(Kind kind, int index) { + if (kind == REGISTER) DCHECK(index >= 0); value_ = KindField::encode(kind); value_ |= index << kKindFieldWidth; - ASSERT(this->index() == index); + DCHECK(this->index() == index); } // Calls SetUpCache()/TearDownCache() for each subclass. 
@@ -107,14 +107,14 @@ class LUnallocated : public LOperand { } LUnallocated(BasicPolicy policy, int index) : LOperand(UNALLOCATED, 0) { - ASSERT(policy == FIXED_SLOT); + DCHECK(policy == FIXED_SLOT); value_ |= BasicPolicyField::encode(policy); value_ |= index << FixedSlotIndexField::kShift; - ASSERT(this->fixed_slot_index() == index); + DCHECK(this->fixed_slot_index() == index); } LUnallocated(ExtendedPolicy policy, int index) : LOperand(UNALLOCATED, 0) { - ASSERT(policy == FIXED_REGISTER || policy == FIXED_DOUBLE_REGISTER); + DCHECK(policy == FIXED_REGISTER || policy == FIXED_DOUBLE_REGISTER); value_ |= BasicPolicyField::encode(EXTENDED_POLICY); value_ |= ExtendedPolicyField::encode(policy); value_ |= LifetimeField::encode(USED_AT_END); @@ -135,7 +135,7 @@ class LUnallocated : public LOperand { } static LUnallocated* cast(LOperand* op) { - ASSERT(op->IsUnallocated()); + DCHECK(op->IsUnallocated()); return reinterpret_cast<LUnallocated*>(op); } @@ -222,19 +222,19 @@ class LUnallocated : public LOperand { // [extended_policy]: Only for non-FIXED_SLOT. The finer-grained policy. ExtendedPolicy extended_policy() const { - ASSERT(basic_policy() == EXTENDED_POLICY); + DCHECK(basic_policy() == EXTENDED_POLICY); return ExtendedPolicyField::decode(value_); } // [fixed_slot_index]: Only for FIXED_SLOT. int fixed_slot_index() const { - ASSERT(HasFixedSlotPolicy()); + DCHECK(HasFixedSlotPolicy()); return static_cast<int>(value_) >> FixedSlotIndexField::kShift; } // [fixed_register_index]: Only for FIXED_REGISTER or FIXED_DOUBLE_REGISTER. int fixed_register_index() const { - ASSERT(HasFixedRegisterPolicy() || HasFixedDoubleRegisterPolicy()); + DCHECK(HasFixedRegisterPolicy() || HasFixedDoubleRegisterPolicy()); return FixedRegisterField::decode(value_); } @@ -248,7 +248,7 @@ class LUnallocated : public LOperand { // [lifetime]: Only for non-FIXED_SLOT. bool IsUsedAtStart() { - ASSERT(basic_policy() == EXTENDED_POLICY); + DCHECK(basic_policy() == EXTENDED_POLICY); return LifetimeField::decode(value_) == USED_AT_START; } }; @@ -278,9 +278,10 @@ class LMoveOperands V8_FINAL BASE_EMBEDDED { } // A move is redundant if it's been eliminated, if its source and - // destination are the same, or if its destination is unneeded. + // destination are the same, or if its destination is unneeded or constant. bool IsRedundant() const { - return IsEliminated() || source_->Equals(destination_) || IsIgnored(); + return IsEliminated() || source_->Equals(destination_) || IsIgnored() || + (destination_ != NULL && destination_->IsConstantOperand()); } bool IsIgnored() const { @@ -290,7 +291,7 @@ class LMoveOperands V8_FINAL BASE_EMBEDDED { // We clear both operands to indicate move that's been eliminated. 
void Eliminate() { source_ = destination_ = NULL; } bool IsEliminated() const { - ASSERT(source_ != NULL || destination_ == NULL); + DCHECK(source_ != NULL || destination_ == NULL); return source_ == NULL; } @@ -304,13 +305,13 @@ template<LOperand::Kind kOperandKind, int kNumCachedOperands> class LSubKindOperand V8_FINAL : public LOperand { public: static LSubKindOperand* Create(int index, Zone* zone) { - ASSERT(index >= 0); + DCHECK(index >= 0); if (index < kNumCachedOperands) return &cache[index]; return new(zone) LSubKindOperand(index); } static LSubKindOperand* cast(LOperand* op) { - ASSERT(op->kind() == kOperandKind); + DCHECK(op->kind() == kOperandKind); return reinterpret_cast<LSubKindOperand*>(op); } @@ -341,9 +342,7 @@ class LParallelMove V8_FINAL : public ZoneObject { bool IsRedundant() const; - const ZoneList<LMoveOperands>* move_operands() const { - return &move_operands_; - } + ZoneList<LMoveOperands>* move_operands() { return &move_operands_; } void PrintDataTo(StringStream* stream) const; @@ -369,7 +368,7 @@ class LPointerMap V8_FINAL : public ZoneObject { int lithium_position() const { return lithium_position_; } void set_lithium_position(int pos) { - ASSERT(lithium_position_ == -1); + DCHECK(lithium_position_ == -1); lithium_position_ = pos; } @@ -436,7 +435,7 @@ class LEnvironment V8_FINAL : public ZoneObject { bool is_uint32) { values_.Add(operand, zone()); if (representation.IsSmiOrTagged()) { - ASSERT(!is_uint32); + DCHECK(!is_uint32); is_tagged_.Add(values_.length() - 1, zone()); } @@ -467,17 +466,17 @@ class LEnvironment V8_FINAL : public ZoneObject { } int ObjectDuplicateOfAt(int index) { - ASSERT(ObjectIsDuplicateAt(index)); + DCHECK(ObjectIsDuplicateAt(index)); return LengthOrDupeField::decode(object_mapping_[index]); } int ObjectLengthAt(int index) { - ASSERT(!ObjectIsDuplicateAt(index)); + DCHECK(!ObjectIsDuplicateAt(index)); return LengthOrDupeField::decode(object_mapping_[index]); } bool ObjectIsArgumentsAt(int index) { - ASSERT(!ObjectIsDuplicateAt(index)); + DCHECK(!ObjectIsDuplicateAt(index)); return IsArgumentsField::decode(object_mapping_[index]); } @@ -488,7 +487,7 @@ class LEnvironment V8_FINAL : public ZoneObject { void Register(int deoptimization_index, int translation_index, int pc_offset) { - ASSERT(!HasBeenRegistered()); + DCHECK(!HasBeenRegistered()); deoptimization_index_ = deoptimization_index; translation_index_ = translation_index; pc_offset_ = pc_offset; @@ -547,13 +546,13 @@ class ShallowIterator V8_FINAL BASE_EMBEDDED { bool Done() { return current_ >= limit_; } LOperand* Current() { - ASSERT(!Done()); - ASSERT(env_->values()->at(current_) != NULL); + DCHECK(!Done()); + DCHECK(env_->values()->at(current_) != NULL); return env_->values()->at(current_); } void Advance() { - ASSERT(!Done()); + DCHECK(!Done()); ++current_; SkipUninteresting(); } @@ -589,8 +588,8 @@ class DeepIterator V8_FINAL BASE_EMBEDDED { bool Done() { return current_iterator_.Done(); } LOperand* Current() { - ASSERT(!current_iterator_.Done()); - ASSERT(current_iterator_.Current() != NULL); + DCHECK(!current_iterator_.Done()); + DCHECK(current_iterator_.Current() != NULL); return current_iterator_.Current(); } @@ -651,16 +650,16 @@ class LChunk : public ZoneObject { } void AddDeprecationDependency(Handle<Map> map) { - ASSERT(!map->is_deprecated()); + DCHECK(!map->is_deprecated()); if (!map->CanBeDeprecated()) return; - ASSERT(!info_->IsStub()); + DCHECK(!info_->IsStub()); deprecation_dependencies_.insert(map); } void AddStabilityDependency(Handle<Map> map) { - 
ASSERT(map->is_stable()); + DCHECK(map->is_stable()); if (!map->CanTransition()) return; - ASSERT(!info_->IsStub()); + DCHECK(!info_->IsStub()); stability_dependencies_.insert(map); } @@ -747,6 +746,61 @@ class LPhase : public CompilationPhase { }; +// A register-allocator view of a Lithium instruction. It contains the id of +// the output operand and a list of input operand uses. + +enum RegisterKind { + UNALLOCATED_REGISTERS, + GENERAL_REGISTERS, + DOUBLE_REGISTERS +}; + +// Iterator for non-null temp operands. +class TempIterator BASE_EMBEDDED { + public: + inline explicit TempIterator(LInstruction* instr); + inline bool Done(); + inline LOperand* Current(); + inline void Advance(); + + private: + inline void SkipUninteresting(); + LInstruction* instr_; + int limit_; + int current_; +}; + + +// Iterator for non-constant input operands. +class InputIterator BASE_EMBEDDED { + public: + inline explicit InputIterator(LInstruction* instr); + inline bool Done(); + inline LOperand* Current(); + inline void Advance(); + + private: + inline void SkipUninteresting(); + LInstruction* instr_; + int limit_; + int current_; +}; + + +class UseIterator BASE_EMBEDDED { + public: + inline explicit UseIterator(LInstruction* instr); + inline bool Done(); + inline LOperand* Current(); + inline void Advance(); + + private: + InputIterator input_iterator_; + DeepIterator env_iterator_; +}; + +class LInstruction; +class LCodeGen; } } // namespace v8::internal #endif // V8_LITHIUM_H_ diff --git a/deps/v8/src/liveedit-debugger.js b/deps/v8/src/liveedit-debugger.js index 021a4f052a7..07214f9657c 100644 --- a/deps/v8/src/liveedit-debugger.js +++ b/deps/v8/src/liveedit-debugger.js @@ -946,7 +946,9 @@ Debug.LiveEdit = new function() { BLOCKED_ON_ACTIVE_STACK: 2, BLOCKED_ON_OTHER_STACK: 3, BLOCKED_UNDER_NATIVE_CODE: 4, - REPLACED_ON_ACTIVE_STACK: 5 + REPLACED_ON_ACTIVE_STACK: 5, + BLOCKED_UNDER_GENERATOR: 6, + BLOCKED_ACTIVE_GENERATOR: 7 }; FunctionPatchabilityStatus.SymbolName = function(code) { diff --git a/deps/v8/src/liveedit.cc b/deps/v8/src/liveedit.cc index e6bb4b29a63..57258b0c513 100644 --- a/deps/v8/src/liveedit.cc +++ b/deps/v8/src/liveedit.cc @@ -3,21 +3,21 @@ // found in the LICENSE file. -#include "v8.h" - -#include "liveedit.h" - -#include "code-stubs.h" -#include "compilation-cache.h" -#include "compiler.h" -#include "debug.h" -#include "deoptimizer.h" -#include "global-handles.h" -#include "messages.h" -#include "parser.h" -#include "scopeinfo.h" -#include "scopes.h" -#include "v8memory.h" +#include "src/v8.h" + +#include "src/liveedit.h" + +#include "src/code-stubs.h" +#include "src/compilation-cache.h" +#include "src/compiler.h" +#include "src/debug.h" +#include "src/deoptimizer.h" +#include "src/global-handles.h" +#include "src/messages.h" +#include "src/parser.h" +#include "src/scopeinfo.h" +#include "src/scopes.h" +#include "src/v8memory.h" namespace v8 { namespace internal { @@ -161,7 +161,7 @@ class Differencer { // Each cell keeps a value plus direction. Value is multiplied by 4. 
void set_value4_and_dir(int i1, int i2, int value4, Direction dir) { - ASSERT((value4 & kDirectionMask) == 0); + DCHECK((value4 & kDirectionMask) == 0); get_cell(i1, i2) = value4 | dir; } @@ -174,7 +174,7 @@ class Differencer { static const int kDirectionSizeBits = 2; static const int kDirectionMask = (1 << kDirectionSizeBits) - 1; - static const int kEmptyCellValue = -1 << kDirectionSizeBits; + static const int kEmptyCellValue = ~0u << kDirectionSizeBits; // This method only holds static assert statement (unfortunately you cannot // place one in class scope). @@ -805,6 +805,41 @@ class FunctionInfoListener { }; +void LiveEdit::InitializeThreadLocal(Debug* debug) { + debug->thread_local_.frame_drop_mode_ = LiveEdit::FRAMES_UNTOUCHED; +} + + +bool LiveEdit::SetAfterBreakTarget(Debug* debug) { + Code* code = NULL; + Isolate* isolate = debug->isolate_; + switch (debug->thread_local_.frame_drop_mode_) { + case FRAMES_UNTOUCHED: + return false; + case FRAME_DROPPED_IN_IC_CALL: + // We must have been calling IC stub. Do not go there anymore. + code = isolate->builtins()->builtin(Builtins::kPlainReturn_LiveEdit); + break; + case FRAME_DROPPED_IN_DEBUG_SLOT_CALL: + // Debug break slot stub does not return normally, instead it manually + // cleans the stack and jumps. We should patch the jump address. + code = isolate->builtins()->builtin(Builtins::kFrameDropper_LiveEdit); + break; + case FRAME_DROPPED_IN_DIRECT_CALL: + // Nothing to do, after_break_target is not used here. + return true; + case FRAME_DROPPED_IN_RETURN_CALL: + code = isolate->builtins()->builtin(Builtins::kFrameDropper_LiveEdit); + break; + case CURRENTLY_SET_MODE: + UNREACHABLE(); + break; + } + debug->after_break_target_ = code->entry(); + return true; +} + + MaybeHandle<JSArray> LiveEdit::GatherCompileInfo(Handle<Script> script, Handle<String> source) { Isolate* isolate = script->GetIsolate(); @@ -850,12 +885,12 @@ MaybeHandle<JSArray> LiveEdit::GatherCompileInfo(Handle<Script> script, Handle<Smi> end_pos(Smi::FromInt(message_location.end_pos()), isolate); Handle<JSObject> script_obj = Script::GetWrapper(message_location.script()); - JSReceiver::SetProperty( - rethrow_exception, start_pos_key, start_pos, NONE, SLOPPY).Assert(); - JSReceiver::SetProperty( - rethrow_exception, end_pos_key, end_pos, NONE, SLOPPY).Assert(); - JSReceiver::SetProperty( - rethrow_exception, script_obj_key, script_obj, NONE, SLOPPY).Assert(); + Object::SetProperty(rethrow_exception, start_pos_key, start_pos, SLOPPY) + .Assert(); + Object::SetProperty(rethrow_exception, end_pos_key, end_pos, SLOPPY) + .Assert(); + Object::SetProperty(rethrow_exception, script_obj_key, script_obj, SLOPPY) + .Assert(); } } @@ -939,12 +974,9 @@ static void ReplaceCodeObject(Handle<Code> original, // to code objects (that are never in new space) without worrying about // write barriers. Heap* heap = original->GetHeap(); - heap->CollectAllGarbage(Heap::kMakeHeapIterableMask, - "liveedit.cc ReplaceCodeObject"); - - ASSERT(!heap->InNewSpace(*substitution)); + HeapIterator iterator(heap); - DisallowHeapAllocation no_allocation; + DCHECK(!heap->InNewSpace(*substitution)); ReplacingVisitor visitor(*original, *substitution); @@ -955,7 +987,6 @@ static void ReplaceCodeObject(Handle<Code> original, // Now iterate over all pointers of all objects, including code_target // implicit pointers. 
- HeapIterator iterator(heap); for (HeapObject* obj = iterator.next(); obj != NULL; obj = iterator.next()) { obj->Iterate(&visitor); } @@ -982,7 +1013,7 @@ class LiteralFixer { // If literal count didn't change, simply go over all functions // and clear literal arrays. ClearValuesVisitor visitor; - IterateJSFunctions(*shared_info, &visitor); + IterateJSFunctions(shared_info, &visitor); } else { // When literal count changes, we have to create new array instances. // Since we cannot create instances when iterating heap, we should first @@ -1017,16 +1048,14 @@ class LiteralFixer { // Iterates all function instances in the HEAP that refers to the // provided shared_info. template<typename Visitor> - static void IterateJSFunctions(SharedFunctionInfo* shared_info, + static void IterateJSFunctions(Handle<SharedFunctionInfo> shared_info, Visitor* visitor) { - DisallowHeapAllocation no_allocation; - HeapIterator iterator(shared_info->GetHeap()); for (HeapObject* obj = iterator.next(); obj != NULL; obj = iterator.next()) { if (obj->IsJSFunction()) { JSFunction* function = JSFunction::cast(obj); - if (function->shared() == shared_info) { + if (function->shared() == *shared_info) { visitor->visit(function); } } @@ -1039,13 +1068,13 @@ class LiteralFixer { Handle<SharedFunctionInfo> shared_info, Isolate* isolate) { CountVisitor count_visitor; count_visitor.count = 0; - IterateJSFunctions(*shared_info, &count_visitor); + IterateJSFunctions(shared_info, &count_visitor); int size = count_visitor.count; Handle<FixedArray> result = isolate->factory()->NewFixedArray(size); if (size > 0) { CollectVisitor collect_visitor(result); - IterateJSFunctions(*shared_info, &collect_visitor); + IterateJSFunctions(shared_info, &collect_visitor); } return result; } @@ -1131,7 +1160,7 @@ class DependentFunctionMarker: public OptimizedFunctionVisitor { virtual void LeaveContext(Context* context) { } // Don't care. virtual void VisitFunction(JSFunction* function) { // It should be guaranteed by the iterator that everything is optimized. - ASSERT(function->code()->kind() == Code::OPTIMIZED_FUNCTION); + DCHECK(function->code()->kind() == Code::OPTIMIZED_FUNCTION); if (shared_info_ == function->shared() || IsInlined(function, shared_info_)) { // Mark the code for deoptimization. @@ -1165,8 +1194,6 @@ void LiveEdit::ReplaceFunctionCode( Handle<SharedFunctionInfo> shared_info = shared_info_wrapper.GetInfo(); - isolate->heap()->EnsureHeapIsIterable(); - if (IsJSFunctionCode(shared_info->code())) { Handle<Code> code = compile_info_wrapper.GetFunctionCode(); ReplaceCodeObject(Handle<Code>(shared_info->code()), code); @@ -1252,7 +1279,7 @@ static int TranslatePosition(int original_position, CHECK(element->IsSmi()); int chunk_end = Handle<Smi>::cast(element)->value(); // Position mustn't be inside a chunk. - ASSERT(original_position >= chunk_end); + DCHECK(original_position >= chunk_end); element = Object::GetElement( isolate, position_change_array, i + 2).ToHandleChecked(); CHECK(element->IsSmi()); @@ -1319,8 +1346,8 @@ class RelocInfoBuffer { // Copy the data. 
int curently_used_size = static_cast<int>(buffer_ + buffer_size_ - reloc_info_writer_.pos()); - OS::MemMove(new_buffer + new_buffer_size - curently_used_size, - reloc_info_writer_.pos(), curently_used_size); + MemMove(new_buffer + new_buffer_size - curently_used_size, + reloc_info_writer_.pos(), curently_used_size); reloc_info_writer_.Reposition( new_buffer + new_buffer_size - curently_used_size, @@ -1373,7 +1400,7 @@ static Handle<Code> PatchPositionsInCode( if (buffer.length() == code->relocation_size()) { // Simply patch relocation area of code. - OS::MemCopy(code->relocation_start(), buffer.start(), buffer.length()); + MemCopy(code->relocation_start(), buffer.start(), buffer.length()); return code; } else { // Relocation info section now has different size. We cannot simply @@ -1402,8 +1429,6 @@ void LiveEdit::PatchFunctionPositions(Handle<JSArray> shared_info_array, info->set_end_position(new_function_end); info->set_function_token_position(new_function_token_pos); - info->GetIsolate()->heap()->EnsureHeapIsIterable(); - if (IsJSFunctionCode(info->code())) { // Patch relocation info section of the code. Handle<Code> patched_code = PatchPositionsInCode(Handle<Code>(info->code()), @@ -1452,8 +1477,7 @@ Handle<Object> LiveEdit::ChangeScriptSource(Handle<Script> original_script, Handle<Script> old_script = CreateScriptCopy(original_script); old_script->set_name(String::cast(*old_script_name)); old_script_object = old_script; - isolate->debugger()->OnAfterCompile( - old_script, Debugger::SEND_WHEN_DEBUGGING); + isolate->debug()->OnAfterCompile(old_script); } else { old_script_object = isolate->factory()->null_value(); } @@ -1540,6 +1564,38 @@ static bool FixTryCatchHandler(StackFrame* top_frame, } + +// Initializes an artificial stack frame. The data it contains is used for: +// a. correct operation of the frame dropper code, which eventually gets control, +// b. being compatible with regular stack structure for various stack +// iterators. +// Returns the address of a stack-allocated pointer to the restarted function, +// the value that is called 'restarter_frame_function_pointer'. The value +// at this address (possibly updated by GC) may be used later when preparing +// 'step in' operation. +// Frame structure (conforms to the InternalFrame structure): +// -- code +// -- SMI marker +// -- function (slot is called "context") +// -- frame base +static Object** SetUpFrameDropperFrame(StackFrame* bottom_js_frame, + Handle<Code> code) { + DCHECK(bottom_js_frame->is_java_script()); + + Address fp = bottom_js_frame->fp(); + + // Move function pointer into "context" slot. + Memory::Object_at(fp + StandardFrameConstants::kContextOffset) = + Memory::Object_at(fp + JavaScriptFrameConstants::kFunctionOffset); + + Memory::Object_at(fp + InternalFrameConstants::kCodeOffset) = *code; + Memory::Object_at(fp + StandardFrameConstants::kMarkerOffset) = + Smi::FromInt(StackFrame::INTERNAL); + + return reinterpret_cast<Object**>(&Memory::Object_at( + fp + StandardFrameConstants::kContextOffset)); } + + // Removes the specified range of frames from the stack. There may be 1 or more // frames in the range. In any case, the bottom frame is restarted rather than dropped, // and therefore has to be a JavaScript frame.
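// A minimal sketch of the slot rewriting that SetUpFrameDropperFrame() above
// performs, with a plain array standing in for the real stack and tagged
// pointers reduced to intptr_t; the slot layout and all names here are
// assumptions for illustration, not the real frame constants:
#include <cstdint>

enum FakeSlot { kCallerFp, kContextSlot, kMarkerSlot, kCodeSlot, kSlotCount };

struct FakeFrame { intptr_t slots[kSlotCount]; };

// Returns the address of the slot now holding the restarted function --
// the analogue of 'restarter_frame_function_pointer' in the hunk above.
inline intptr_t* SetUpFakeDropperFrame(FakeFrame* frame, intptr_t function,
                                       intptr_t dropper_code,
                                       intptr_t internal_marker) {
  frame->slots[kContextSlot] = function;        // park function in "context" slot
  frame->slots[kCodeSlot] = dropper_code;       // frame dropper builtin runs next
  frame->slots[kMarkerSlot] = internal_marker;  // frame now reads as INTERNAL
  return &frame->slots[kContextSlot];
}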
@@ -1547,9 +1603,9 @@ static bool FixTryCatchHandler(StackFrame* top_frame, static const char* DropFrames(Vector<StackFrame*> frames, int top_frame_index, int bottom_js_frame_index, - Debug::FrameDropMode* mode, + LiveEdit::FrameDropMode* mode, Object*** restarter_frame_function_pointer) { - if (!Debug::kFrameDropperSupported) { + if (!LiveEdit::kFrameDropperSupported) { return "Stack manipulations are not supported in this architecture."; } @@ -1557,39 +1613,35 @@ static const char* DropFrames(Vector<StackFrame*> frames, StackFrame* top_frame = frames[top_frame_index]; StackFrame* bottom_js_frame = frames[bottom_js_frame_index]; - ASSERT(bottom_js_frame->is_java_script()); + DCHECK(bottom_js_frame->is_java_script()); // Check the nature of the top frame. Isolate* isolate = bottom_js_frame->isolate(); Code* pre_top_frame_code = pre_top_frame->LookupCode(); - bool frame_has_padding; + bool frame_has_padding = true; if (pre_top_frame_code->is_inline_cache_stub() && pre_top_frame_code->is_debug_stub()) { // OK, we can drop inline cache calls. - *mode = Debug::FRAME_DROPPED_IN_IC_CALL; - frame_has_padding = Debug::FramePaddingLayout::kIsSupported; + *mode = LiveEdit::FRAME_DROPPED_IN_IC_CALL; } else if (pre_top_frame_code == isolate->builtins()->builtin(Builtins::kSlot_DebugBreak)) { // OK, we can drop debug break slot. - *mode = Debug::FRAME_DROPPED_IN_DEBUG_SLOT_CALL; - frame_has_padding = Debug::FramePaddingLayout::kIsSupported; + *mode = LiveEdit::FRAME_DROPPED_IN_DEBUG_SLOT_CALL; } else if (pre_top_frame_code == - isolate->builtins()->builtin( - Builtins::kFrameDropper_LiveEdit)) { + isolate->builtins()->builtin(Builtins::kFrameDropper_LiveEdit)) { // OK, we can drop our own code. pre_top_frame = frames[top_frame_index - 2]; top_frame = frames[top_frame_index - 1]; - *mode = Debug::CURRENTLY_SET_MODE; + *mode = LiveEdit::CURRENTLY_SET_MODE; frame_has_padding = false; } else if (pre_top_frame_code == - isolate->builtins()->builtin(Builtins::kReturn_DebugBreak)) { - *mode = Debug::FRAME_DROPPED_IN_RETURN_CALL; - frame_has_padding = Debug::FramePaddingLayout::kIsSupported; + isolate->builtins()->builtin(Builtins::kReturn_DebugBreak)) { + *mode = LiveEdit::FRAME_DROPPED_IN_RETURN_CALL; } else if (pre_top_frame_code->kind() == Code::STUB && - pre_top_frame_code->major_key() == CodeStub::CEntry) { + CodeStub::GetMajorKey(pre_top_frame_code) == CodeStub::CEntry) { // Entry from our unit tests on 'debugger' statement. // It's fine, we support this case. - *mode = Debug::FRAME_DROPPED_IN_DIRECT_CALL; + *mode = LiveEdit::FRAME_DROPPED_IN_DIRECT_CALL; // We don't have padding from a 'debugger' statement call. // Here the stub is CEntry, it's not debug-only and can't be padded. // If anyone complains, a proxy padded stub could be added. @@ -1597,25 +1649,25 @@ static const char* DropFrames(Vector<StackFrame*> frames, } else if (pre_top_frame->type() == StackFrame::ARGUMENTS_ADAPTOR) { // This must be an adaptor that remains from the frame dropping that // is still on the stack. A frame dropper frame must be above it.
- ASSERT(frames[top_frame_index - 2]->LookupCode() == - isolate->builtins()->builtin(Builtins::kFrameDropper_LiveEdit)); + DCHECK(frames[top_frame_index - 2]->LookupCode() == + isolate->builtins()->builtin(Builtins::kFrameDropper_LiveEdit)); pre_top_frame = frames[top_frame_index - 3]; top_frame = frames[top_frame_index - 2]; - *mode = Debug::CURRENTLY_SET_MODE; + *mode = LiveEdit::CURRENTLY_SET_MODE; frame_has_padding = false; } else { return "Unknown structure of stack above changing function"; } Address unused_stack_top = top_frame->sp(); + int new_frame_size = LiveEdit::kFrameDropperFrameSize * kPointerSize; Address unused_stack_bottom = bottom_js_frame->fp() - - Debug::kFrameDropperFrameSize * kPointerSize // Size of the new frame. - + kPointerSize; // Bigger address end is exclusive. + - new_frame_size + kPointerSize; // Bigger address end is exclusive. Address* top_frame_pc_address = top_frame->pc_address(); // top_frame may be damaged below this point. Do not use it. - ASSERT(!(top_frame = NULL)); + DCHECK(!(top_frame = NULL)); if (unused_stack_top > unused_stack_bottom) { if (frame_has_padding) { @@ -1623,11 +1675,10 @@ static const char* DropFrames(Vector<StackFrame*> frames, static_cast<int>(unused_stack_top - unused_stack_bottom); Address padding_start = pre_top_frame->fp() - - Debug::FramePaddingLayout::kFrameBaseSize * kPointerSize; + LiveEdit::kFrameDropperFrameSize * kPointerSize; Address padding_pointer = padding_start; - Smi* padding_object = - Smi::FromInt(Debug::FramePaddingLayout::kPaddingValue); + Smi* padding_object = Smi::FromInt(LiveEdit::kFramePaddingValue); while (Memory::Object_at(padding_pointer) == padding_object) { padding_pointer -= kPointerSize; } @@ -1642,9 +1693,9 @@ static const char* DropFrames(Vector<StackFrame*> frames, StackFrame* pre_pre_frame = frames[top_frame_index - 2]; - OS::MemMove(padding_start + kPointerSize - shortage_bytes, - padding_start + kPointerSize, - Debug::FramePaddingLayout::kFrameBaseSize * kPointerSize); + MemMove(padding_start + kPointerSize - shortage_bytes, + padding_start + kPointerSize, + LiveEdit::kFrameDropperFrameSize * kPointerSize); pre_top_frame->UpdateFp(pre_top_frame->fp() - shortage_bytes); pre_pre_frame->SetCallerFp(pre_top_frame->fp()); @@ -1661,16 +1712,16 @@ static const char* DropFrames(Vector<StackFrame*> frames, FixTryCatchHandler(pre_top_frame, bottom_js_frame); // Make sure FixTryCatchHandler is idempotent. - ASSERT(!FixTryCatchHandler(pre_top_frame, bottom_js_frame)); + DCHECK(!FixTryCatchHandler(pre_top_frame, bottom_js_frame)); Handle<Code> code = isolate->builtins()->FrameDropper_LiveEdit(); *top_frame_pc_address = code->entry(); pre_top_frame->SetCallerFp(bottom_js_frame->fp()); *restarter_frame_function_pointer = - Debug::SetUpFrameDropperFrame(bottom_js_frame, code); + SetUpFrameDropperFrame(bottom_js_frame, code); - ASSERT((**restarter_frame_function_pointer)->IsJSFunction()); + DCHECK((**restarter_frame_function_pointer)->IsJSFunction()); for (Address a = unused_stack_top; a < unused_stack_bottom; @@ -1682,11 +1733,6 @@ static const char* DropFrames(Vector<StackFrame*> frames, } -static bool IsDropableFrame(StackFrame* frame) { - return !frame->is_exit(); -} - - // Describes a set of call frames that execute any of the listed functions. // Finding no such frames does not mean an error.
class MultipleFunctionTarget { @@ -1699,7 +1745,7 @@ class MultipleFunctionTarget { LiveEdit::FunctionPatchabilityStatus status) { return CheckActivation(m_shared_info_array, m_result, frame, status); } - const char* GetNotFoundMessage() { + const char* GetNotFoundMessage() const { return NULL; } private: @@ -1711,7 +1757,9 @@ class MultipleFunctionTarget { // Drops all call frame matched by target and all frames above them. template<typename TARGET> static const char* DropActivationsInActiveThreadImpl( - Isolate* isolate, TARGET& target, bool do_drop) { + Isolate* isolate, + TARGET& target, // NOLINT + bool do_drop) { Debug* debug = isolate->debug(); Zone zone(isolate); Vector<StackFrame*> frames = CreateStackMap(isolate, &zone); @@ -1740,12 +1788,20 @@ static const char* DropActivationsInActiveThreadImpl( bool target_frame_found = false; int bottom_js_frame_index = top_frame_index; - bool c_code_found = false; + bool non_droppable_frame_found = false; + LiveEdit::FunctionPatchabilityStatus non_droppable_reason; for (; frame_index < frames.length(); frame_index++) { StackFrame* frame = frames[frame_index]; - if (!IsDropableFrame(frame)) { - c_code_found = true; + if (frame->is_exit()) { + non_droppable_frame_found = true; + non_droppable_reason = LiveEdit::FUNCTION_BLOCKED_UNDER_NATIVE_CODE; + break; + } + if (frame->is_java_script() && + JavaScriptFrame::cast(frame)->function()->shared()->is_generator()) { + non_droppable_frame_found = true; + non_droppable_reason = LiveEdit::FUNCTION_BLOCKED_UNDER_GENERATOR; break; } if (target.MatchActivation( @@ -1755,15 +1811,15 @@ static const char* DropActivationsInActiveThreadImpl( } } - if (c_code_found) { - // There is a C frames on stack. Check that there are no target frames - // below them. + if (non_droppable_frame_found) { + // There is a C or generator frame on stack. We can't drop C frames, and we + // can't restart generators. Check that there are no target frames below + // them. for (; frame_index < frames.length(); frame_index++) { StackFrame* frame = frames[frame_index]; if (frame->is_java_script()) { - if (target.MatchActivation( - frame, LiveEdit::FUNCTION_BLOCKED_UNDER_NATIVE_CODE)) { - // Cannot drop frame under C frames. + if (target.MatchActivation(frame, non_droppable_reason)) { + // Fail. 
return NULL; } } @@ -1780,7 +1836,7 @@ static const char* DropActivationsInActiveThreadImpl( return target.GetNotFoundMessage(); } - Debug::FrameDropMode drop_mode = Debug::FRAMES_UNTOUCHED; + LiveEdit::FrameDropMode drop_mode = LiveEdit::FRAMES_UNTOUCHED; Object** restarter_frame_function_pointer = NULL; const char* error_message = DropFrames(frames, top_frame_index, bottom_js_frame_index, &drop_mode, @@ -1798,8 +1854,8 @@ static const char* DropActivationsInActiveThreadImpl( break; } } - debug->FramesHaveBeenDropped(new_id, drop_mode, - restarter_frame_function_pointer); + debug->FramesHaveBeenDropped( + new_id, drop_mode, restarter_frame_function_pointer); return NULL; } @@ -1833,6 +1889,44 @@ static const char* DropActivationsInActiveThread( } +bool LiveEdit::FindActiveGenerators(Handle<FixedArray> shared_info_array, + Handle<FixedArray> result, + int len) { + Isolate* isolate = shared_info_array->GetIsolate(); + bool found_suspended_activations = false; + + DCHECK_LE(len, result->length()); + + FunctionPatchabilityStatus active = FUNCTION_BLOCKED_ACTIVE_GENERATOR; + + Heap* heap = isolate->heap(); + HeapIterator iterator(heap); + HeapObject* obj = NULL; + while ((obj = iterator.next()) != NULL) { + if (!obj->IsJSGeneratorObject()) continue; + + JSGeneratorObject* gen = JSGeneratorObject::cast(obj); + if (gen->is_closed()) continue; + + HandleScope scope(isolate); + + for (int i = 0; i < len; i++) { + Handle<JSValue> jsvalue = + Handle<JSValue>::cast(FixedArray::get(shared_info_array, i)); + Handle<SharedFunctionInfo> shared = + UnwrapSharedFunctionInfoFromJSValue(jsvalue); + + if (gen->function()->shared() == *shared) { + result->set(i, Smi::FromInt(active)); + found_suspended_activations = true; + } + } + } + + return found_suspended_activations; +} + + class InactiveThreadActivationsChecker : public ThreadVisitor { public: InactiveThreadActivationsChecker(Handle<JSArray> shared_info_array, @@ -1863,18 +1957,29 @@ Handle<JSArray> LiveEdit::CheckAndDropActivations( Isolate* isolate = shared_info_array->GetIsolate(); int len = GetArrayLength(shared_info_array); + DCHECK(shared_info_array->HasFastElements()); + Handle<FixedArray> shared_info_array_elements( + FixedArray::cast(shared_info_array->elements())); + Handle<JSArray> result = isolate->factory()->NewJSArray(len); + Handle<FixedArray> result_elements = + JSObject::EnsureWritableFastElements(result); // Fill the default values. for (int i = 0; i < len; i++) { - SetElementSloppy( - result, - i, - Handle<Smi>(Smi::FromInt(FUNCTION_AVAILABLE_FOR_PATCH), isolate)); + FunctionPatchabilityStatus status = FUNCTION_AVAILABLE_FOR_PATCH; + result_elements->set(i, Smi::FromInt(status)); } + // Scan the heap for active generators -- those that are either currently + // running (as we wouldn't want to restart them, because we don't know where + // to restart them from) or suspended. Fail if any one corresponds to the set + // of functions being edited. + if (FindActiveGenerators(shared_info_array_elements, result_elements, len)) { + return result; + } - // First check inactive threads. Fail if some functions are blocked there. + // Check inactive threads. Fail if some functions are blocked there. 
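// The FindActiveGenerators() loop above boils down to a membership test over
// shared-function identity, as the checks that follow also rely on. A
// standalone sketch of that core with plain pointers in place of heap handles
// (all types and names hypothetical):
#include <cstddef>
#include <vector>

struct SharedInfo {};
struct GeneratorObject { SharedInfo* shared; bool is_closed; };

// Marks result[i] for every edited function that still has an open (running
// or suspended) generator activation; returns whether any were found.
inline bool MarkActiveGenerators(const std::vector<GeneratorObject*>& heap,
                                 const std::vector<SharedInfo*>& edited,
                                 std::vector<bool>* result) {
  bool found = false;
  for (GeneratorObject* gen : heap) {
    if (gen->is_closed) continue;  // closed generators can never resume
    for (size_t i = 0; i < edited.size(); ++i) {
      if (gen->shared == edited[i]) {
        (*result)[i] = true;
        found = true;
      }
    }
  }
  return found;
}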
InactiveThreadActivationsChecker inactive_threads_checker(shared_info_array, result); isolate->thread_manager()->IterateArchivedThreads( @@ -1912,7 +2017,7 @@ class SingleFrameTarget { } return false; } - const char* GetNotFoundMessage() { + const char* GetNotFoundMessage() const { return "Failed to find requested frame"; } LiveEdit::FunctionPatchabilityStatus saved_status() { @@ -1937,6 +2042,9 @@ const char* LiveEdit::RestartFrame(JavaScriptFrame* frame) { if (target.saved_status() == LiveEdit::FUNCTION_BLOCKED_UNDER_NATIVE_CODE) { return "Function is blocked under native code"; } + if (target.saved_status() == LiveEdit::FUNCTION_BLOCKED_UNDER_GENERATOR) { + return "Function is blocked under a generator activation"; + } return NULL; } diff --git a/deps/v8/src/liveedit.h b/deps/v8/src/liveedit.h index 5be63ac0a12..3465d886d77 100644 --- a/deps/v8/src/liveedit.h +++ b/deps/v8/src/liveedit.h @@ -26,8 +26,8 @@ // instantiate newly compiled functions. -#include "allocation.h" -#include "compiler.h" +#include "src/allocation.h" +#include "src/compiler.h" namespace v8 { namespace internal { @@ -58,6 +58,26 @@ class LiveEditFunctionTracker { class LiveEdit : AllStatic { public: + // Describes how exactly a frame has been dropped from the stack. + enum FrameDropMode { + // No frame has been dropped. + FRAMES_UNTOUCHED, + // The top JS frame had been calling an IC stub. The IC stub mustn't be called now. + FRAME_DROPPED_IN_IC_CALL, + // The top JS frame had been calling the debug break slot stub. Patch the + // address this stub jumps to in the end. + FRAME_DROPPED_IN_DEBUG_SLOT_CALL, + // The top JS frame had been calling some C++ function. The return address + // gets patched automatically. + FRAME_DROPPED_IN_DIRECT_CALL, + FRAME_DROPPED_IN_RETURN_CALL, + CURRENTLY_SET_MODE + }; + + static void InitializeThreadLocal(Debug* debug); + + static bool SetAfterBreakTarget(Debug* debug); + MUST_USE_RESULT static MaybeHandle<JSArray> GatherCompileInfo( Handle<Script> script, Handle<String> source); @@ -89,6 +109,11 @@ class LiveEdit : AllStatic { Handle<JSValue> orig_function_shared, Handle<JSValue> subst_function_shared); + // Find open generator activations, and set corresponding "result" elements to + // FUNCTION_BLOCKED_ACTIVE_GENERATOR. + static bool FindActiveGenerators(Handle<FixedArray> shared_info_array, + Handle<FixedArray> result, int len); + // Checks listed functions on the stack and returns an array with corresponding // FunctionPatchabilityStatus statuses; an extra array element may // contain a general error message. Modifies the current stack and @@ -107,7 +132,9 @@ class LiveEdit : AllStatic { FUNCTION_BLOCKED_ON_ACTIVE_STACK = 2, FUNCTION_BLOCKED_ON_OTHER_STACK = 3, FUNCTION_BLOCKED_UNDER_NATIVE_CODE = 4, - FUNCTION_REPLACED_ON_ACTIVE_STACK = 5 + FUNCTION_REPLACED_ON_ACTIVE_STACK = 5, + FUNCTION_BLOCKED_UNDER_GENERATOR = 6, + FUNCTION_BLOCKED_ACTIVE_GENERATOR = 7 }; // Compares 2 strings line-by-line, then token-wise and returns a diff in form @@ -115,6 +142,46 @@ class LiveEdit : AllStatic { // of diff chunks. static Handle<JSArray> CompareStrings(Handle<String> s1, Handle<String> s2); + + // Architecture-specific constant. + static const bool kFrameDropperSupported; + + /** + * Defines the layout of a stack frame that supports padding. This is a regular + * internal frame that has a flexible stack structure. LiveEdit can shift + * its lower part up the stack, taking up the 'padding' space when additional + * stack memory is required. + * Such a frame is expected immediately above the topmost JavaScript frame.
+ * + * Stack Layout: + * --- Top + * LiveEdit routine frames + * --- + * C frames of debug handler + * --- + * ... + * --- + * An internal frame that has n padding words: + * - any number of words as needed by code -- upper part of frame + * - padding size: a Smi storing n -- current size of padding + * - padding: n words filled with kPaddingValue in form of Smi + * - 3 context/type words of a regular InternalFrame + * - fp + * --- + * Topmost JavaScript frame + * --- + * ... + * --- Bottom + */ + // A size of frame base including fp. Padding words starts right above + // the base. + static const int kFrameDropperFrameSize = 4; + // A number of words that should be reserved on stack for the LiveEdit use. + // Stored on stack in form of Smi. + static const int kFramePaddingInitialSize = 1; + // A value that padding words are filled with (in form of Smi). Going + // bottom-top, the first word not having this value is a counter word. + static const int kFramePaddingValue = kFramePaddingInitialSize + 1; }; @@ -278,9 +345,13 @@ class FunctionInfoWrapper : public JSArrayBasedStruct<FunctionInfoWrapper> { class SharedInfoWrapper : public JSArrayBasedStruct<SharedInfoWrapper> { public: static bool IsInstance(Handle<JSArray> array) { - return array->length() == Smi::FromInt(kSize_) && - Object::GetElement(array->GetIsolate(), array, kSharedInfoOffset_) - .ToHandleChecked()->IsJSValue(); + if (array->length() != Smi::FromInt(kSize_)) return false; + Handle<Object> element( + Object::GetElement(array->GetIsolate(), + array, + kSharedInfoOffset_).ToHandleChecked()); + if (!element->IsJSValue()) return false; + return Handle<JSValue>::cast(element)->value()->IsSharedFunctionInfo(); } explicit SharedInfoWrapper(Handle<JSArray> array) diff --git a/deps/v8/src/log-inl.h b/deps/v8/src/log-inl.h index d6781678c76..28677ad235d 100644 --- a/deps/v8/src/log-inl.h +++ b/deps/v8/src/log-inl.h @@ -5,7 +5,7 @@ #ifndef V8_LOG_INL_H_ #define V8_LOG_INL_H_ -#include "log.h" +#include "src/log.h" namespace v8 { namespace internal { diff --git a/deps/v8/src/log-utils.cc b/deps/v8/src/log-utils.cc index 687578847da..c94d07a9f29 100644 --- a/deps/v8/src/log-utils.cc +++ b/deps/v8/src/log-utils.cc @@ -2,10 +2,10 @@ // Use of this source code is governed by a BSD-style license that can be // found in the LICENSE file. -#include "v8.h" +#include "src/v8.h" -#include "log-utils.h" -#include "string-stream.h" +#include "src/log-utils.h" +#include "src/string-stream.h" namespace v8 { namespace internal { @@ -54,20 +54,20 @@ void Log::Initialize(const char* log_file_name) { void Log::OpenStdout() { - ASSERT(!IsEnabled()); + DCHECK(!IsEnabled()); output_handle_ = stdout; } void Log::OpenTemporaryFile() { - ASSERT(!IsEnabled()); - output_handle_ = i::OS::OpenTemporaryFile(); + DCHECK(!IsEnabled()); + output_handle_ = base::OS::OpenTemporaryFile(); } void Log::OpenFile(const char* name) { - ASSERT(!IsEnabled()); - output_handle_ = OS::FOpen(name, OS::LogFileOpenMode); + DCHECK(!IsEnabled()); + output_handle_ = base::OS::FOpen(name, base::OS::LogFileOpenMode); } @@ -94,7 +94,7 @@ Log::MessageBuilder::MessageBuilder(Log* log) : log_(log), lock_guard_(&log_->mutex_), pos_(0) { - ASSERT(log_->message_buffer_ != NULL); + DCHECK(log_->message_buffer_ != NULL); } @@ -105,14 +105,14 @@ void Log::MessageBuilder::Append(const char* format, ...) 
{ va_start(args, format); AppendVA(format, args); va_end(args); - ASSERT(pos_ <= Log::kMessageBufferSize); + DCHECK(pos_ <= Log::kMessageBufferSize); } void Log::MessageBuilder::AppendVA(const char* format, va_list args) { Vector<char> buf(log_->message_buffer_ + pos_, Log::kMessageBufferSize - pos_); - int result = v8::internal::OS::VSNPrintF(buf, format, args); + int result = v8::internal::VSNPrintF(buf, format, args); // Result is -1 if output was truncated. if (result >= 0) { @@ -120,7 +120,7 @@ void Log::MessageBuilder::AppendVA(const char* format, va_list args) { } else { pos_ = Log::kMessageBufferSize; } - ASSERT(pos_ <= Log::kMessageBufferSize); + DCHECK(pos_ <= Log::kMessageBufferSize); } @@ -128,7 +128,7 @@ void Log::MessageBuilder::Append(const char c) { if (pos_ < Log::kMessageBufferSize) { log_->message_buffer_[pos_++] = c; } - ASSERT(pos_ <= Log::kMessageBufferSize); + DCHECK(pos_ <= Log::kMessageBufferSize); } @@ -159,7 +159,7 @@ void Log::MessageBuilder::AppendAddress(Address addr) { void Log::MessageBuilder::AppendSymbolName(Symbol* symbol) { - ASSERT(symbol); + DCHECK(symbol); Append("symbol("); if (!symbol->name()->IsUndefined()) { Append("\""); @@ -206,19 +206,23 @@ void Log::MessageBuilder::AppendDetailed(String* str, bool show_impl_info) { void Log::MessageBuilder::AppendStringPart(const char* str, int len) { if (pos_ + len > Log::kMessageBufferSize) { len = Log::kMessageBufferSize - pos_; - ASSERT(len >= 0); + DCHECK(len >= 0); if (len == 0) return; } Vector<char> buf(log_->message_buffer_ + pos_, Log::kMessageBufferSize - pos_); - OS::StrNCpy(buf, str, len); + StrNCpy(buf, str, len); pos_ += len; - ASSERT(pos_ <= Log::kMessageBufferSize); + DCHECK(pos_ <= Log::kMessageBufferSize); } void Log::MessageBuilder::WriteToLogFile() { - ASSERT(pos_ <= Log::kMessageBufferSize); + DCHECK(pos_ <= Log::kMessageBufferSize); + // Assert that we do not already have a new line at the end. + DCHECK(pos_ == 0 || log_->message_buffer_[pos_ - 1] != '\n'); + if (pos_ == Log::kMessageBufferSize) pos_--; + log_->message_buffer_[pos_++] = '\n'; const int written = log_->WriteToFile(log_->message_buffer_, pos_); if (written != pos_) { log_->stop(); diff --git a/deps/v8/src/log-utils.h b/deps/v8/src/log-utils.h index deb3f7c4830..ef285e6d68c 100644 --- a/deps/v8/src/log-utils.h +++ b/deps/v8/src/log-utils.h @@ -5,7 +5,7 @@ #ifndef V8_LOG_UTILS_H_ #define V8_LOG_UTILS_H_ -#include "allocation.h" +#include "src/allocation.h" namespace v8 { namespace internal { @@ -85,7 +85,7 @@ class Log { private: Log* log_; - LockGuard<Mutex> lock_guard_; + base::LockGuard<base::Mutex> lock_guard_; int pos_; }; @@ -103,9 +103,9 @@ class Log { // Implementation of writing to a log file. int WriteToFile(const char* msg, int length) { - ASSERT(output_handle_ != NULL); + DCHECK(output_handle_ != NULL); size_t rv = fwrite(msg, 1, length, output_handle_); - ASSERT(static_cast<size_t>(length) == rv); + DCHECK(static_cast<size_t>(length) == rv); USE(rv); fflush(output_handle_); return length; @@ -120,7 +120,7 @@ class Log { // mutex_ is a Mutex used for enforcing exclusive // access to the formatting buffer and the log file or log memory buffer. - Mutex mutex_; + base::Mutex mutex_; // Buffer used for formatting log messages. This is a singleton buffer and // mutex_ should be acquired before using it. 
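// The WriteToLogFile() hunk above makes the trailing newline the writer's
// job, which is why the log.cc hunks below strip "\n" from every
// msg.Append() format string. A minimal sketch of the new invariant, with
// buffer handling simplified and all names hypothetical:
#include <cassert>
#include <cstdio>

inline void WriteLogLine(char* buf, int* pos, int capacity, FILE* out) {
  // Callers must no longer append the newline themselves...
  assert(*pos == 0 || buf[*pos - 1] != '\n');
  // ...because it is added exactly once here, trimming the last character
  // instead of overflowing when the message filled the whole buffer.
  if (*pos == capacity) --*pos;
  buf[(*pos)++] = '\n';
  fwrite(buf, 1, static_cast<size_t>(*pos), out);
  *pos = 0;  // reset for the next message
}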
diff --git a/deps/v8/src/log.cc b/deps/v8/src/log.cc index 88dae56b7b5..0c6c4355aad 100644 --- a/deps/v8/src/log.cc +++ b/deps/v8/src/log.cc @@ -4,21 +4,22 @@ #include <stdarg.h> -#include "v8.h" - -#include "bootstrapper.h" -#include "code-stubs.h" -#include "cpu-profiler.h" -#include "deoptimizer.h" -#include "global-handles.h" -#include "log.h" -#include "log-utils.h" -#include "macro-assembler.h" -#include "platform.h" -#include "runtime-profiler.h" -#include "serialize.h" -#include "string-stream.h" -#include "vm-state-inl.h" +#include "src/v8.h" + +#include "src/base/platform/platform.h" +#include "src/bootstrapper.h" +#include "src/code-stubs.h" +#include "src/cpu-profiler.h" +#include "src/deoptimizer.h" +#include "src/global-handles.h" +#include "src/log.h" +#include "src/log-utils.h" +#include "src/macro-assembler.h" +#include "src/perf-jit.h" +#include "src/runtime-profiler.h" +#include "src/serialize.h" +#include "src/string-stream.h" +#include "src/vm-state-inl.h" namespace v8 { namespace internal { @@ -106,7 +107,7 @@ class CodeEventLogger::NameBuffer { void AppendBytes(const char* bytes, int size) { size = Min(size, kUtf8BufferSize - utf8_pos_); - OS::MemCopy(utf8_buffer_ + utf8_pos_, bytes, size); + MemCopy(utf8_buffer_ + utf8_pos_, bytes, size); utf8_pos_ += size; } @@ -122,7 +123,7 @@ class CodeEventLogger::NameBuffer { void AppendInt(int n) { Vector<char> buffer(utf8_buffer_ + utf8_pos_, kUtf8BufferSize - utf8_pos_); - int size = OS::SNPrintF(buffer, "%d", n); + int size = SNPrintF(buffer, "%d", n); if (size > 0 && utf8_pos_ + size <= kUtf8BufferSize) { utf8_pos_ += size; } @@ -131,7 +132,7 @@ class CodeEventLogger::NameBuffer { void AppendHex(uint32_t n) { Vector<char> buffer(utf8_buffer_ + utf8_pos_, kUtf8BufferSize - utf8_pos_); - int size = OS::SNPrintF(buffer, "%x", n); + int size = SNPrintF(buffer, "%x", n); if (size > 0 && utf8_pos_ + size <= kUtf8BufferSize) { utf8_pos_ += size; } @@ -230,6 +231,7 @@ class PerfBasicLogger : public CodeEventLogger { virtual ~PerfBasicLogger(); virtual void CodeMoveEvent(Address from, Address to) { } + virtual void CodeDisableOptEvent(Code* code, SharedFunctionInfo* shared) { } virtual void CodeDeleteEvent(Address from) { } private: @@ -258,12 +260,13 @@ PerfBasicLogger::PerfBasicLogger() // Open the perf JIT dump file. 
int bufferSize = sizeof(kFilenameFormatString) + kFilenameBufferPadding; ScopedVector<char> perf_dump_name(bufferSize); - int size = OS::SNPrintF( + int size = SNPrintF( perf_dump_name, kFilenameFormatString, - OS::GetCurrentProcessId()); + base::OS::GetCurrentProcessId()); CHECK_NE(size, -1); - perf_output_handle_ = OS::FOpen(perf_dump_name.start(), OS::LogFileOpenMode); + perf_output_handle_ = + base::OS::FOpen(perf_dump_name.start(), base::OS::LogFileOpenMode); CHECK_NE(perf_output_handle_, NULL); setvbuf(perf_output_handle_, NULL, _IOFBF, kLogBufferSize); } @@ -279,172 +282,11 @@ void PerfBasicLogger::LogRecordedBuffer(Code* code, SharedFunctionInfo*, const char* name, int length) { - ASSERT(code->instruction_start() == code->address() + Code::kHeaderSize); + DCHECK(code->instruction_start() == code->address() + Code::kHeaderSize); - OS::FPrint(perf_output_handle_, "%llx %x %.*s\n", - reinterpret_cast<uint64_t>(code->instruction_start()), - code->instruction_size(), - length, name); -} - - -// Linux perf tool logging support -class PerfJitLogger : public CodeEventLogger { - public: - PerfJitLogger(); - virtual ~PerfJitLogger(); - - virtual void CodeMoveEvent(Address from, Address to) { } - virtual void CodeDeleteEvent(Address from) { } - - private: - virtual void LogRecordedBuffer(Code* code, - SharedFunctionInfo* shared, - const char* name, - int length); - - // Extension added to V8 log file name to get the low-level log name. - static const char kFilenameFormatString[]; - static const int kFilenameBufferPadding; - - // File buffer size of the low-level log. We don't use the default to - // minimize the associated overhead. - static const int kLogBufferSize = 2 * MB; - - void LogWriteBytes(const char* bytes, int size); - void LogWriteHeader(); - - static const uint32_t kJitHeaderMagic = 0x4F74496A; - static const uint32_t kJitHeaderVersion = 0x2; - static const uint32_t kElfMachIA32 = 3; - static const uint32_t kElfMachX64 = 62; - static const uint32_t kElfMachARM = 40; - static const uint32_t kElfMachMIPS = 10; - - struct jitheader { - uint32_t magic; - uint32_t version; - uint32_t total_size; - uint32_t elf_mach; - uint32_t pad1; - uint32_t pid; - uint64_t timestamp; - }; - - enum jit_record_type { - JIT_CODE_LOAD = 0 - // JIT_CODE_UNLOAD = 1, - // JIT_CODE_CLOSE = 2, - // JIT_CODE_DEBUG_INFO = 3, - // JIT_CODE_PAGE_MAP = 4, - // JIT_CODE_MAX = 5 - }; - - struct jr_code_load { - uint32_t id; - uint32_t total_size; - uint64_t timestamp; - uint64_t vma; - uint64_t code_addr; - uint32_t code_size; - uint32_t align; - }; - - uint32_t GetElfMach() { -#if V8_TARGET_ARCH_IA32 - return kElfMachIA32; -#elif V8_TARGET_ARCH_X64 - return kElfMachX64; -#elif V8_TARGET_ARCH_ARM - return kElfMachARM; -#elif V8_TARGET_ARCH_MIPS - return kElfMachMIPS; -#else - UNIMPLEMENTED(); - return 0; -#endif - } - - FILE* perf_output_handle_; -}; - -const char PerfJitLogger::kFilenameFormatString[] = "/tmp/jit-%d.dump"; - -// Extra padding for the PID in the filename -const int PerfJitLogger::kFilenameBufferPadding = 16; - -PerfJitLogger::PerfJitLogger() - : perf_output_handle_(NULL) { - // Open the perf JIT dump file. 
- int bufferSize = sizeof(kFilenameFormatString) + kFilenameBufferPadding; - ScopedVector<char> perf_dump_name(bufferSize); - int size = OS::SNPrintF( - perf_dump_name, - kFilenameFormatString, - OS::GetCurrentProcessId()); - CHECK_NE(size, -1); - perf_output_handle_ = OS::FOpen(perf_dump_name.start(), OS::LogFileOpenMode); - CHECK_NE(perf_output_handle_, NULL); - setvbuf(perf_output_handle_, NULL, _IOFBF, kLogBufferSize); - - LogWriteHeader(); -} - - -PerfJitLogger::~PerfJitLogger() { - fclose(perf_output_handle_); - perf_output_handle_ = NULL; -} - - -void PerfJitLogger::LogRecordedBuffer(Code* code, - SharedFunctionInfo*, - const char* name, - int length) { - ASSERT(code->instruction_start() == code->address() + Code::kHeaderSize); - ASSERT(perf_output_handle_ != NULL); - - const char* code_name = name; - uint8_t* code_pointer = reinterpret_cast<uint8_t*>(code->instruction_start()); - uint32_t code_size = code->instruction_size(); - - static const char string_terminator[] = "\0"; - - jr_code_load code_load; - code_load.id = JIT_CODE_LOAD; - code_load.total_size = sizeof(code_load) + length + 1 + code_size; - code_load.timestamp = - static_cast<uint64_t>(OS::TimeCurrentMillis() * 1000.0); - code_load.vma = 0x0; // Our addresses are absolute. - code_load.code_addr = reinterpret_cast<uint64_t>(code->instruction_start()); - code_load.code_size = code_size; - code_load.align = 0; - - LogWriteBytes(reinterpret_cast<const char*>(&code_load), sizeof(code_load)); - LogWriteBytes(code_name, length); - LogWriteBytes(string_terminator, 1); - LogWriteBytes(reinterpret_cast<const char*>(code_pointer), code_size); -} - - -void PerfJitLogger::LogWriteBytes(const char* bytes, int size) { - size_t rv = fwrite(bytes, 1, size, perf_output_handle_); - ASSERT(static_cast<size_t>(size) == rv); - USE(rv); -} - - -void PerfJitLogger::LogWriteHeader() { - ASSERT(perf_output_handle_ != NULL); - jitheader header; - header.magic = kJitHeaderMagic; - header.version = kJitHeaderVersion; - header.total_size = sizeof(jitheader); - header.pad1 = 0xdeadbeef; - header.elf_mach = GetElfMach(); - header.pid = OS::GetCurrentProcessId(); - header.timestamp = static_cast<uint64_t>(OS::TimeCurrentMillis() * 1000.0); - LogWriteBytes(reinterpret_cast<const char*>(&header), sizeof(header)); + base::OS::FPrint(perf_output_handle_, "%llx %x %.*s\n", + reinterpret_cast<uint64_t>(code->instruction_start()), + code->instruction_size(), length, name); } @@ -457,6 +299,7 @@ class LowLevelLogger : public CodeEventLogger { virtual ~LowLevelLogger(); virtual void CodeMoveEvent(Address from, Address to); + virtual void CodeDisableOptEvent(Code* code, SharedFunctionInfo* shared) { } virtual void CodeDeleteEvent(Address from); virtual void SnapshotPositionEvent(Address addr, int pos); virtual void CodeMovingGCEvent(); @@ -530,9 +373,10 @@ LowLevelLogger::LowLevelLogger(const char* name) // Open the low-level log file. 
size_t len = strlen(name); ScopedVector<char> ll_name(static_cast<int>(len + sizeof(kLogExt))); - OS::MemCopy(ll_name.start(), name, len); - OS::MemCopy(ll_name.start() + len, kLogExt, sizeof(kLogExt)); - ll_output_handle_ = OS::FOpen(ll_name.start(), OS::LogFileOpenMode); + MemCopy(ll_name.start(), name, len); + MemCopy(ll_name.start() + len, kLogExt, sizeof(kLogExt)); + ll_output_handle_ = + base::OS::FOpen(ll_name.start(), base::OS::LogFileOpenMode); setvbuf(ll_output_handle_, NULL, _IOFBF, kLogBufferSize); LogCodeInfo(); @@ -548,12 +392,18 @@ LowLevelLogger::~LowLevelLogger() { void LowLevelLogger::LogCodeInfo() { #if V8_TARGET_ARCH_IA32 const char arch[] = "ia32"; -#elif V8_TARGET_ARCH_X64 +#elif V8_TARGET_ARCH_X64 && V8_TARGET_ARCH_64_BIT const char arch[] = "x64"; +#elif V8_TARGET_ARCH_X64 && V8_TARGET_ARCH_32_BIT + const char arch[] = "x32"; #elif V8_TARGET_ARCH_ARM const char arch[] = "arm"; #elif V8_TARGET_ARCH_MIPS const char arch[] = "mips"; +#elif V8_TARGET_ARCH_X87 + const char arch[] = "x87"; +#elif V8_TARGET_ARCH_ARM64 + const char arch[] = "arm64"; #else const char arch[] = "unknown"; #endif @@ -568,7 +418,7 @@ void LowLevelLogger::LogRecordedBuffer(Code* code, CodeCreateStruct event; event.name_size = length; event.code_address = code->instruction_start(); - ASSERT(event.code_address == code->address() + Code::kHeaderSize); + DCHECK(event.code_address == code->address() + Code::kHeaderSize); event.code_size = code->instruction_size(); LogWriteStruct(event); LogWriteBytes(name, length); @@ -603,7 +453,7 @@ void LowLevelLogger::SnapshotPositionEvent(Address addr, int pos) { void LowLevelLogger::LogWriteBytes(const char* bytes, int size) { size_t rv = fwrite(bytes, 1, size, ll_output_handle_); - ASSERT(static_cast<size_t>(size) == rv); + DCHECK(static_cast<size_t>(size) == rv); USE(rv); } @@ -623,6 +473,7 @@ class JitLogger : public CodeEventLogger { explicit JitLogger(JitCodeEventHandler code_event_handler); virtual void CodeMoveEvent(Address from, Address to); + virtual void CodeDisableOptEvent(Code* code, SharedFunctionInfo* shared) { } virtual void CodeDeleteEvent(Address from); virtual void AddCodeLinePosInfoEvent( void* jit_handler_data, @@ -657,11 +508,11 @@ void JitLogger::LogRecordedBuffer(Code* code, event.type = JitCodeEvent::CODE_ADDED; event.code_start = code->instruction_start(); event.code_len = code->instruction_size(); - Handle<Script> script_handle; + Handle<SharedFunctionInfo> shared_function_handle; if (shared && shared->script()->IsScript()) { - script_handle = Handle<Script>(Script::cast(shared->script())); + shared_function_handle = Handle<SharedFunctionInfo>(shared); } - event.script = ToApiHandle<v8::Script>(script_handle); + event.script = ToApiHandle<v8::UnboundScript>(shared_function_handle); event.name.str = name; event.name.len = length; code_event_handler_(&event); @@ -742,7 +593,7 @@ void JitLogger::EndCodePosInfoEvent(Code* code, void* jit_handler_data) { // An independent thread removes data and writes it to the log. // This design minimizes the time spent in the sampler. // -class Profiler: public Thread { +class Profiler: public base::Thread { public: explicit Profiler(Isolate* isolate); void Engage(); @@ -791,7 +642,7 @@ class Profiler: public Thread { int tail_; // Index to the buffer tail. bool overflow_; // Tell whether a buffer overflow has occurred. // Sempahore used for buffer synchronization. 
- Semaphore buffer_semaphore_; + base::Semaphore buffer_semaphore_; // Tells whether profiler is engaged, that is, processing thread is stated. bool engaged_; @@ -821,7 +672,7 @@ class Ticker: public Sampler { } void SetProfiler(Profiler* profiler) { - ASSERT(profiler_ == NULL); + DCHECK(profiler_ == NULL); profiler_ = profiler; IncreaseProfilingDepth(); if (!IsActive()) Start(); @@ -842,7 +693,7 @@ class Ticker: public Sampler { // Profiler implementation. // Profiler::Profiler(Isolate* isolate) - : Thread("v8:Profiler"), + : base::Thread(Options("v8:Profiler")), isolate_(isolate), head_(0), tail_(0), @@ -850,15 +701,19 @@ Profiler::Profiler(Isolate* isolate) buffer_semaphore_(0), engaged_(false), running_(false), - paused_(false) { -} + paused_(false) {} void Profiler::Engage() { if (engaged_) return; engaged_ = true; - OS::LogSharedLibraryAddresses(isolate_); + std::vector<base::OS::SharedLibraryAddress> addresses = + base::OS::GetSharedLibraryAddresses(); + for (size_t i = 0; i < addresses.size(); ++i) { + LOG(isolate_, SharedLibraryEvent( + addresses[i].library_path, addresses[i].start, addresses[i].end)); + } // Start thread processing the profiler buffer. running_ = true; @@ -928,13 +783,13 @@ Logger::~Logger() { void Logger::addCodeEventListener(CodeEventListener* listener) { - ASSERT(!hasCodeEventListener(listener)); + DCHECK(!hasCodeEventListener(listener)); listeners_.Add(listener); } void Logger::removeCodeEventListener(CodeEventListener* listener) { - ASSERT(hasCodeEventListener(listener)); + DCHECK(hasCodeEventListener(listener)); listeners_.RemoveElement(listener); } @@ -947,7 +802,7 @@ bool Logger::hasCodeEventListener(CodeEventListener* listener) { void Logger::ProfilerBeginEvent() { if (!log_->IsEnabled()) return; Log::MessageBuilder msg(log_); - msg.Append("profiler,\"begin\",%d\n", kSamplingIntervalMs); + msg.Append("profiler,\"begin\",%d", kSamplingIntervalMs); msg.WriteToLogFile(); } @@ -960,7 +815,7 @@ void Logger::StringEvent(const char* name, const char* value) { void Logger::UncheckedStringEvent(const char* name, const char* value) { if (!log_->IsEnabled()) return; Log::MessageBuilder msg(log_); - msg.Append("%s,\"%s\"\n", name, value); + msg.Append("%s,\"%s\"", name, value); msg.WriteToLogFile(); } @@ -978,7 +833,7 @@ void Logger::IntPtrTEvent(const char* name, intptr_t value) { void Logger::UncheckedIntEvent(const char* name, int value) { if (!log_->IsEnabled()) return; Log::MessageBuilder msg(log_); - msg.Append("%s,%d\n", name, value); + msg.Append("%s,%d", name, value); msg.WriteToLogFile(); } @@ -986,7 +841,7 @@ void Logger::UncheckedIntEvent(const char* name, int value) { void Logger::UncheckedIntPtrTEvent(const char* name, intptr_t value) { if (!log_->IsEnabled()) return; Log::MessageBuilder msg(log_); - msg.Append("%s,%" V8_PTR_PREFIX "d\n", name, value); + msg.Append("%s,%" V8_PTR_PREFIX "d", name, value); msg.WriteToLogFile(); } @@ -994,7 +849,7 @@ void Logger::UncheckedIntPtrTEvent(const char* name, intptr_t value) { void Logger::HandleEvent(const char* name, Object** location) { if (!log_->IsEnabled() || !FLAG_log_handles) return; Log::MessageBuilder msg(log_); - msg.Append("%s,0x%" V8PRIxPTR "\n", name, location); + msg.Append("%s,0x%" V8PRIxPTR, name, location); msg.WriteToLogFile(); } @@ -1003,7 +858,7 @@ void Logger::HandleEvent(const char* name, Object** location) { // caller's responsibility to ensure that log is enabled and that // FLAG_log_api is true. void Logger::ApiEvent(const char* format, ...) 
{ - ASSERT(log_->IsEnabled() && FLAG_log_api); + DCHECK(log_->IsEnabled() && FLAG_log_api); Log::MessageBuilder msg(log_); va_list ap; va_start(ap, format); @@ -1018,108 +873,103 @@ void Logger::ApiNamedSecurityCheck(Object* key) { if (key->IsString()) { SmartArrayPointer<char> str = String::cast(key)->ToCString(DISALLOW_NULLS, ROBUST_STRING_TRAVERSAL); - ApiEvent("api,check-security,\"%s\"\n", str.get()); + ApiEvent("api,check-security,\"%s\"", str.get()); } else if (key->IsSymbol()) { Symbol* symbol = Symbol::cast(key); if (symbol->name()->IsUndefined()) { - ApiEvent("api,check-security,symbol(hash %x)\n", - Symbol::cast(key)->Hash()); + ApiEvent("api,check-security,symbol(hash %x)", Symbol::cast(key)->Hash()); } else { SmartArrayPointer<char> str = String::cast(symbol->name())->ToCString( DISALLOW_NULLS, ROBUST_STRING_TRAVERSAL); - ApiEvent("api,check-security,symbol(\"%s\" hash %x)\n", - str.get(), + ApiEvent("api,check-security,symbol(\"%s\" hash %x)", str.get(), Symbol::cast(key)->Hash()); } } else if (key->IsUndefined()) { - ApiEvent("api,check-security,undefined\n"); + ApiEvent("api,check-security,undefined"); } else { - ApiEvent("api,check-security,['no-name']\n"); + ApiEvent("api,check-security,['no-name']"); } } -void Logger::SharedLibraryEvent(const char* library_path, +void Logger::SharedLibraryEvent(const std::string& library_path, uintptr_t start, uintptr_t end) { if (!log_->IsEnabled() || !FLAG_prof) return; Log::MessageBuilder msg(log_); - msg.Append("shared-library,\"%s\",0x%08" V8PRIxPTR ",0x%08" V8PRIxPTR "\n", - library_path, - start, - end); + msg.Append("shared-library,\"%s\",0x%08" V8PRIxPTR ",0x%08" V8PRIxPTR, + library_path.c_str(), start, end); msg.WriteToLogFile(); } -void Logger::SharedLibraryEvent(const wchar_t* library_path, - uintptr_t start, - uintptr_t end) { - if (!log_->IsEnabled() || !FLAG_prof) return; +void Logger::CodeDeoptEvent(Code* code) { + if (!log_->IsEnabled()) return; + DCHECK(FLAG_log_internal_timer_events); Log::MessageBuilder msg(log_); - msg.Append("shared-library,\"%ls\",0x%08" V8PRIxPTR ",0x%08" V8PRIxPTR "\n", - library_path, - start, - end); + int since_epoch = static_cast<int>(timer_.Elapsed().InMicroseconds()); + msg.Append("code-deopt,%ld,%d", since_epoch, code->CodeSize()); msg.WriteToLogFile(); } -void Logger::CodeDeoptEvent(Code* code) { +void Logger::CurrentTimeEvent() { if (!log_->IsEnabled()) return; - ASSERT(FLAG_log_internal_timer_events); + DCHECK(FLAG_log_internal_timer_events); Log::MessageBuilder msg(log_); int since_epoch = static_cast<int>(timer_.Elapsed().InMicroseconds()); - msg.Append("code-deopt,%ld,%d\n", since_epoch, code->CodeSize()); + msg.Append("current-time,%ld", since_epoch); msg.WriteToLogFile(); } -void Logger::TimerEvent(StartEnd se, const char* name) { +void Logger::TimerEvent(Logger::StartEnd se, const char* name) { if (!log_->IsEnabled()) return; - ASSERT(FLAG_log_internal_timer_events); + DCHECK(FLAG_log_internal_timer_events); Log::MessageBuilder msg(log_); int since_epoch = static_cast<int>(timer_.Elapsed().InMicroseconds()); - const char* format = (se == START) ? "timer-event-start,\"%s\",%ld\n" - : "timer-event-end,\"%s\",%ld\n"; + const char* format = (se == START) ? 
"timer-event-start,\"%s\",%ld" + : "timer-event-end,\"%s\",%ld"; msg.Append(format, name, since_epoch); msg.WriteToLogFile(); } void Logger::EnterExternal(Isolate* isolate) { - LOG(isolate, TimerEvent(START, TimerEventScope::v8_external)); - ASSERT(isolate->current_vm_state() == JS); + LOG(isolate, TimerEvent(START, TimerEventExternal::name())); + DCHECK(isolate->current_vm_state() == JS); isolate->set_current_vm_state(EXTERNAL); } void Logger::LeaveExternal(Isolate* isolate) { - LOG(isolate, TimerEvent(END, TimerEventScope::v8_external)); - ASSERT(isolate->current_vm_state() == EXTERNAL); + LOG(isolate, TimerEvent(END, TimerEventExternal::name())); + DCHECK(isolate->current_vm_state() == EXTERNAL); isolate->set_current_vm_state(JS); } -void Logger::LogInternalEvents(const char* name, int se) { +void Logger::DefaultTimerEventsLogger(const char* name, int se) { Isolate* isolate = Isolate::Current(); LOG(isolate, TimerEvent(static_cast<StartEnd>(se), name)); } -void Logger::TimerEventScope::LogTimerEvent(StartEnd se) { - isolate_->event_logger()(name_, se); +template <class TimerEvent> +void TimerEventScope<TimerEvent>::LogTimerEvent(Logger::StartEnd se) { + if (TimerEvent::expose_to_api() || + isolate_->event_logger() == Logger::DefaultTimerEventsLogger) { + isolate_->event_logger()(TimerEvent::name(), se); + } } -const char* Logger::TimerEventScope::v8_recompile_synchronous = - "V8.RecompileSynchronous"; -const char* Logger::TimerEventScope::v8_recompile_concurrent = - "V8.RecompileConcurrent"; -const char* Logger::TimerEventScope::v8_compile_full_code = - "V8.CompileFullCode"; -const char* Logger::TimerEventScope::v8_execute = "V8.Execute"; -const char* Logger::TimerEventScope::v8_external = "V8.External"; +// Instantiate template methods. +#define V(TimerName, expose) \ + template void TimerEventScope<TimerEvent##TimerName>::LogTimerEvent( \ + Logger::StartEnd se); +TIMER_EVENTS_LIST(V) +#undef V void Logger::LogRegExpSource(Handle<JSRegExp> regexp) { @@ -1173,21 +1023,21 @@ void Logger::RegExpCompileEvent(Handle<JSRegExp> regexp, bool in_cache) { Log::MessageBuilder msg(log_); msg.Append("regexp-compile,"); LogRegExpSource(regexp); - msg.Append(in_cache ? ",hit\n" : ",miss\n"); + msg.Append(in_cache ? 
",hit" : ",miss"); msg.WriteToLogFile(); } void Logger::ApiIndexedSecurityCheck(uint32_t index) { if (!log_->IsEnabled() || !FLAG_log_api) return; - ApiEvent("api,check-security,%u\n", index); + ApiEvent("api,check-security,%u", index); } void Logger::ApiNamedPropertyAccess(const char* tag, JSObject* holder, Object* name) { - ASSERT(name->IsName()); + DCHECK(name->IsName()); if (!log_->IsEnabled() || !FLAG_log_api) return; String* class_name_obj = holder->class_name(); SmartArrayPointer<char> class_name = @@ -1195,18 +1045,18 @@ void Logger::ApiNamedPropertyAccess(const char* tag, if (name->IsString()) { SmartArrayPointer<char> property_name = String::cast(name)->ToCString(DISALLOW_NULLS, ROBUST_STRING_TRAVERSAL); - ApiEvent("api,%s,\"%s\",\"%s\"\n", tag, class_name.get(), + ApiEvent("api,%s,\"%s\",\"%s\"", tag, class_name.get(), property_name.get()); } else { Symbol* symbol = Symbol::cast(name); uint32_t hash = symbol->Hash(); if (symbol->name()->IsUndefined()) { - ApiEvent("api,%s,\"%s\",symbol(hash %x)\n", tag, class_name.get(), hash); + ApiEvent("api,%s,\"%s\",symbol(hash %x)", tag, class_name.get(), hash); } else { SmartArrayPointer<char> str = String::cast(symbol->name())->ToCString( DISALLOW_NULLS, ROBUST_STRING_TRAVERSAL); - ApiEvent("api,%s,\"%s\",symbol(\"%s\" hash %x)\n", - tag, class_name.get(), str.get(), hash); + ApiEvent("api,%s,\"%s\",symbol(\"%s\" hash %x)", tag, class_name.get(), + str.get(), hash); } } } @@ -1218,7 +1068,7 @@ void Logger::ApiIndexedPropertyAccess(const char* tag, String* class_name_obj = holder->class_name(); SmartArrayPointer<char> class_name = class_name_obj->ToCString(DISALLOW_NULLS, ROBUST_STRING_TRAVERSAL); - ApiEvent("api,%s,\"%s\",%u\n", tag, class_name.get(), index); + ApiEvent("api,%s,\"%s\",%u", tag, class_name.get(), index); } @@ -1227,20 +1077,20 @@ void Logger::ApiObjectAccess(const char* tag, JSObject* object) { String* class_name_obj = object->class_name(); SmartArrayPointer<char> class_name = class_name_obj->ToCString(DISALLOW_NULLS, ROBUST_STRING_TRAVERSAL); - ApiEvent("api,%s,\"%s\"\n", tag, class_name.get()); + ApiEvent("api,%s,\"%s\"", tag, class_name.get()); } void Logger::ApiEntryCall(const char* name) { if (!log_->IsEnabled() || !FLAG_log_api) return; - ApiEvent("api,%s\n", name); + ApiEvent("api,%s", name); } void Logger::NewEvent(const char* name, void* object, size_t size) { if (!log_->IsEnabled() || !FLAG_log) return; Log::MessageBuilder msg(log_); - msg.Append("new,%s,0x%" V8PRIxPTR ",%u\n", name, object, + msg.Append("new,%s,0x%" V8PRIxPTR ",%u", name, object, static_cast<unsigned int>(size)); msg.WriteToLogFile(); } @@ -1249,7 +1099,7 @@ void Logger::NewEvent(const char* name, void* object, size_t size) { void Logger::DeleteEvent(const char* name, void* object) { if (!log_->IsEnabled() || !FLAG_log) return; Log::MessageBuilder msg(log_); - msg.Append("delete,%s,0x%" V8PRIxPTR "\n", name, object); + msg.Append("delete,%s,0x%" V8PRIxPTR, name, object); msg.WriteToLogFile(); } @@ -1287,7 +1137,6 @@ void Logger::CallbackEventInternal(const char* prefix, Name* name, symbol->Hash()); } } - msg.Append('\n'); msg.WriteToLogFile(); } @@ -1313,7 +1162,7 @@ void Logger::SetterCallbackEvent(Name* name, Address entry_point) { static void AppendCodeCreateHeader(Log::MessageBuilder* msg, Logger::LogEventsAndTags tag, Code* code) { - ASSERT(msg); + DCHECK(msg); msg->Append("%s,%s,%d,", kLogEventsNames[Logger::CODE_CREATION_EVENT], kLogEventsNames[tag], @@ -1335,7 +1184,6 @@ void Logger::CodeCreateEvent(LogEventsAndTags tag, 
Log::MessageBuilder msg(log_); AppendCodeCreateHeader(&msg, tag, code); msg.AppendDoubleQuotedString(comment); - msg.Append('\n'); msg.WriteToLogFile(); } @@ -1358,7 +1206,6 @@ void Logger::CodeCreateEvent(LogEventsAndTags tag, } else { msg.AppendSymbolName(Symbol::cast(name)); } - msg.Append('\n'); msg.WriteToLogFile(); } @@ -1389,7 +1236,6 @@ void Logger::CodeCreateEvent(LogEventsAndTags tag, msg.Append(','); msg.AppendAddress(shared->address()); msg.Append(",%s", ComputeMarker(code)); - msg.Append('\n'); msg.WriteToLogFile(); } @@ -1424,7 +1270,6 @@ void Logger::CodeCreateEvent(LogEventsAndTags tag, msg.Append(":%d:%d\",", line, column); msg.AppendAddress(shared->address()); msg.Append(",%s", ComputeMarker(code)); - msg.Append('\n'); msg.WriteToLogFile(); } @@ -1441,7 +1286,24 @@ void Logger::CodeCreateEvent(LogEventsAndTags tag, Log::MessageBuilder msg(log_); AppendCodeCreateHeader(&msg, tag, code); msg.Append("\"args_count: %d\"", args_count); - msg.Append('\n'); + msg.WriteToLogFile(); +} + + +void Logger::CodeDisableOptEvent(Code* code, + SharedFunctionInfo* shared) { + PROFILER_LOG(CodeDisableOptEvent(code, shared)); + + if (!is_logging_code_events()) return; + CALL_LISTENERS(CodeDisableOptEvent(code, shared)); + + if (!FLAG_log_code || !log_->IsEnabled()) return; + Log::MessageBuilder msg(log_); + msg.Append("%s,", kLogEventsNames[CODE_DISABLE_OPT_EVENT]); + SmartArrayPointer<char> name = + shared->DebugName()->ToCString(DISALLOW_NULLS, ROBUST_STRING_TRAVERSAL); + msg.Append("\"%s\",", name.get()); + msg.Append("\"%s\"", GetBailoutReason(shared->DisableOptimizationReason())); msg.WriteToLogFile(); } @@ -1452,7 +1314,7 @@ void Logger::CodeMovingGCEvent() { if (!is_logging_code_events()) return; if (!log_->IsEnabled() || !FLAG_ll_prof) return; CALL_LISTENERS(CodeMovingGCEvent()); - OS::SignalCodeMovingGC(); + base::OS::SignalCodeMovingGC(); } @@ -1468,7 +1330,6 @@ void Logger::RegExpCodeCreateEvent(Code* code, String* source) { msg.Append('"'); msg.AppendDetailed(source, false); msg.Append('"'); - msg.Append('\n'); msg.WriteToLogFile(); } @@ -1492,7 +1353,6 @@ void Logger::CodeDeleteEvent(Address from) { Log::MessageBuilder msg(log_); msg.Append("%s,", kLogEventsNames[CODE_DELETE_EVENT]); msg.AppendAddress(from); - msg.Append('\n'); msg.WriteToLogFile(); } @@ -1535,7 +1395,6 @@ void Logger::CodeNameEvent(Address addr, int pos, const char* code_name) { Log::MessageBuilder msg(log_); msg.Append("%s,%d,", kLogEventsNames[SNAPSHOT_CODE_NAME_EVENT], pos); msg.AppendDoubleQuotedString(code_name); - msg.Append("\n"); msg.WriteToLogFile(); } @@ -1548,7 +1407,6 @@ void Logger::SnapshotPositionEvent(Address addr, int pos) { msg.Append("%s,", kLogEventsNames[SNAPSHOT_POSITION_EVENT]); msg.AppendAddress(addr); msg.Append(",%d", pos); - msg.Append('\n'); msg.WriteToLogFile(); } @@ -1570,7 +1428,6 @@ void Logger::MoveEventInternal(LogEventsAndTags event, msg.AppendAddress(from); msg.Append(','); msg.AppendAddress(to); - msg.Append('\n'); msg.WriteToLogFile(); } @@ -1581,12 +1438,10 @@ void Logger::ResourceEvent(const char* name, const char* tag) { msg.Append("%s,%s,", name, tag); uint32_t sec, usec; - if (OS::GetUserTime(&sec, &usec) != -1) { + if (base::OS::GetUserTime(&sec, &usec) != -1) { msg.Append("%d,%d,", sec, usec); } - msg.Append("%.0f", OS::TimeCurrentMillis()); - - msg.Append('\n'); + msg.Append("%.0f", base::OS::TimeCurrentMillis()); msg.WriteToLogFile(); } @@ -1607,7 +1462,6 @@ void Logger::SuspectReadEvent(Name* name, Object* obj) { } else { msg.AppendSymbolName(Symbol::cast(name)); 
} - msg.Append('\n'); msg.WriteToLogFile(); } @@ -1617,8 +1471,8 @@ void Logger::HeapSampleBeginEvent(const char* space, const char* kind) { Log::MessageBuilder msg(log_); // Using non-relative system time in order to be able to synchronize with // external memory profiling events (e.g. DOM memory size). - msg.Append("heap-sample-begin,\"%s\",\"%s\",%.0f\n", - space, kind, OS::TimeCurrentMillis()); + msg.Append("heap-sample-begin,\"%s\",\"%s\",%.0f", space, kind, + base::OS::TimeCurrentMillis()); msg.WriteToLogFile(); } @@ -1626,7 +1480,7 @@ void Logger::HeapSampleBeginEvent(const char* space, const char* kind) { void Logger::HeapSampleEndEvent(const char* space, const char* kind) { if (!log_->IsEnabled() || !FLAG_log_gc) return; Log::MessageBuilder msg(log_); - msg.Append("heap-sample-end,\"%s\",\"%s\"\n", space, kind); + msg.Append("heap-sample-end,\"%s\",\"%s\"", space, kind); msg.WriteToLogFile(); } @@ -1634,7 +1488,7 @@ void Logger::HeapSampleEndEvent(const char* space, const char* kind) { void Logger::HeapSampleItemEvent(const char* type, int number, int bytes) { if (!log_->IsEnabled() || !FLAG_log_gc) return; Log::MessageBuilder msg(log_); - msg.Append("heap-sample-item,%s,%d,%d\n", type, number, bytes); + msg.Append("heap-sample-item,%s,%d,%d", type, number, bytes); msg.WriteToLogFile(); } @@ -1642,7 +1496,7 @@ void Logger::HeapSampleItemEvent(const char* type, int number, int bytes) { void Logger::DebugTag(const char* call_site_tag) { if (!log_->IsEnabled() || !FLAG_log) return; Log::MessageBuilder msg(log_); - msg.Append("debug-tag,%s\n", call_site_tag); + msg.Append("debug-tag,%s", call_site_tag); msg.WriteToLogFile(); } @@ -1655,10 +1509,8 @@ void Logger::DebugEvent(const char* event_type, Vector<uint16_t> parameter) { } char* parameter_string = s.Finalize(); Log::MessageBuilder msg(log_); - msg.Append("debug-queue-event,%s,%15.3f,%s\n", - event_type, - OS::TimeCurrentMillis(), - parameter_string); + msg.Append("debug-queue-event,%s,%15.3f,%s", event_type, + base::OS::TimeCurrentMillis(), parameter_string); DeleteArray(parameter_string); msg.WriteToLogFile(); } @@ -1681,11 +1533,10 @@ void Logger::TickEvent(TickSample* sample, bool overflow) { if (overflow) { msg.Append(",overflow"); } - for (int i = 0; i < sample->frames_count; ++i) { + for (unsigned i = 0; i < sample->frames_count; ++i) { msg.Append(','); msg.AppendAddress(sample->stack[i]); } - msg.Append('\n'); msg.WriteToLogFile(); } @@ -1725,7 +1576,7 @@ class EnumerateOptimizedFunctionsVisitor: public OptimizedFunctionVisitor { sfis_[*count_] = Handle<SharedFunctionInfo>(sfi); } if (code_objects_ != NULL) { - ASSERT(function->code()->kind() == Code::OPTIMIZED_FUNCTION); + DCHECK(function->code()->kind() == Code::OPTIMIZED_FUNCTION); code_objects_[*count_] = Handle<Code>(function->code()); } *count_ = *count_ + 1; @@ -1935,59 +1786,44 @@ void Logger::LogAccessorCallbacks() { } -static void AddIsolateIdIfNeeded(Isolate* isolate, StringStream* stream) { - if (isolate->IsDefaultIsolate() || !FLAG_logfile_per_isolate) return; - stream->Add("isolate-%p-", isolate); -} - - -static SmartArrayPointer<const char> PrepareLogFileName( - Isolate* isolate, const char* file_name) { - if (strchr(file_name, '%') != NULL || !isolate->IsDefaultIsolate()) { - // If there's a '%' in the log file name we have to expand - // placeholders. 
- HeapStringAllocator allocator; - StringStream stream(&allocator); - AddIsolateIdIfNeeded(isolate, &stream); - for (const char* p = file_name; *p; p++) { - if (*p == '%') { - p++; - switch (*p) { - case '\0': - // If there's a % at the end of the string we back up - // one character so we can escape the loop properly. - p--; - break; - case 'p': - stream.Add("%d", OS::GetCurrentProcessId()); - break; - case 't': { - // %t expands to the current time in milliseconds. - double time = OS::TimeCurrentMillis(); - stream.Add("%.0f", FmtElm(time)); - break; - } - case '%': - // %% expands (contracts really) to %. - stream.Put('%'); - break; - default: - // All other %'s expand to themselves. - stream.Put('%'); - stream.Put(*p); - break; - } - } else { - stream.Put(*p); +static void AddIsolateIdIfNeeded(OStream& os, // NOLINT + Isolate* isolate) { + if (FLAG_logfile_per_isolate) os << "isolate-" << isolate << "-"; +} + + +static void PrepareLogFileName(OStream& os, // NOLINT + Isolate* isolate, const char* file_name) { + AddIsolateIdIfNeeded(os, isolate); + for (const char* p = file_name; *p; p++) { + if (*p == '%') { + p++; + switch (*p) { + case '\0': + // If there's a % at the end of the string we back up + // one character so we can escape the loop properly. + p--; + break; + case 'p': + os << base::OS::GetCurrentProcessId(); + break; + case 't': + // %t expands to the current time in milliseconds. + os << static_cast<int64_t>(base::OS::TimeCurrentMillis()); + break; + case '%': + // %% expands (contracts really) to %. + os << '%'; + break; + default: + // All other %'s expand to themselves. + os << '%' << *p; + break; } + } else { + os << *p; } - return SmartArrayPointer<const char>(stream.ToCString()); } - int length = StrLength(file_name); - char* str = NewArray<char>(length + 1); - OS::MemCopy(str, file_name, length); - str[length] = '\0'; - return SmartArrayPointer<const char>(str); } @@ -2001,9 +1837,9 @@ bool Logger::SetUp(Isolate* isolate) { FLAG_log_snapshot_positions = true; } - SmartArrayPointer<const char> log_file_name = - PrepareLogFileName(isolate, FLAG_logfile); - log_->Initialize(log_file_name.get()); + OStringStream log_file_name; + PrepareLogFileName(log_file_name, isolate, FLAG_logfile); + log_->Initialize(log_file_name.c_str()); if (FLAG_perf_basic_prof) { @@ -2017,7 +1853,7 @@ bool Logger::SetUp(Isolate* isolate) { } if (FLAG_ll_prof) { - ll_logger_ = new LowLevelLogger(log_file_name.get()); + ll_logger_ = new LowLevelLogger(log_file_name.c_str()); addCodeEventListener(ll_logger_); } diff --git a/deps/v8/src/log.h b/deps/v8/src/log.h index b1a41e949f4..51597ddda93 100644 --- a/deps/v8/src/log.h +++ b/deps/v8/src/log.h @@ -5,12 +5,19 @@ #ifndef V8_LOG_H_ #define V8_LOG_H_ -#include "allocation.h" -#include "objects.h" -#include "platform.h" -#include "platform/elapsed-timer.h" +#include <string> + +#include "src/allocation.h" +#include "src/base/platform/elapsed-timer.h" +#include "src/base/platform/platform.h" +#include "src/objects.h" namespace v8 { + +namespace base { +class Semaphore; +} + namespace internal { // Logger is used for collecting logging information from V8 during @@ -55,7 +62,6 @@ class Isolate; class Log; class PositionsRecorder; class Profiler; -class Semaphore; class Ticker; struct TickSample; @@ -79,6 +85,7 @@ struct TickSample; #define LOG_EVENTS_AND_TAGS_LIST(V) \ V(CODE_CREATION_EVENT, "code-creation") \ + V(CODE_DISABLE_OPT_EVENT, "code-disable-optimization") \ V(CODE_MOVE_EVENT, "code-move") \ V(CODE_DELETE_EVENT, "code-delete") \ 
V(CODE_MOVING_GC, "code-moving-gc") \ @@ -144,6 +151,8 @@ class Sampler; class Logger { public: + enum StartEnd { START = 0, END = 1 }; + #define DECLARE_ENUM(enum_item, ignore) enum_item, enum LogEventsAndTags { LOG_EVENTS_AND_TAGS_LIST(DECLARE_ENUM) @@ -237,6 +246,8 @@ class Logger { CompilationInfo* info, Name* source, int line, int column); void CodeCreateEvent(LogEventsAndTags tag, Code* code, int args_count); + // Emits a code disable optimization event. + void CodeDisableOptEvent(Code* code, SharedFunctionInfo* shared); void CodeMovingGCEvent(); // Emits a code create event for a RegExp. void RegExpCodeCreateEvent(Code* code, String* source); @@ -277,49 +288,20 @@ class Logger { void HeapSampleStats(const char* space, const char* kind, intptr_t capacity, intptr_t used); - void SharedLibraryEvent(const char* library_path, - uintptr_t start, - uintptr_t end); - void SharedLibraryEvent(const wchar_t* library_path, + void SharedLibraryEvent(const std::string& library_path, uintptr_t start, uintptr_t end); - // ==== Events logged by --log-timer-events. ==== - enum StartEnd { START, END }; - void CodeDeoptEvent(Code* code); + void CurrentTimeEvent(); void TimerEvent(StartEnd se, const char* name); static void EnterExternal(Isolate* isolate); static void LeaveExternal(Isolate* isolate); - static void EmptyLogInternalEvents(const char* name, int se) { } - static void LogInternalEvents(const char* name, int se); - - class TimerEventScope { - public: - TimerEventScope(Isolate* isolate, const char* name) - : isolate_(isolate), name_(name) { - LogTimerEvent(START); - } - - ~TimerEventScope() { - LogTimerEvent(END); - } - - void LogTimerEvent(StartEnd se); - - static const char* v8_recompile_synchronous; - static const char* v8_recompile_concurrent; - static const char* v8_compile_full_code; - static const char* v8_execute; - static const char* v8_external; - - private: - Isolate* isolate_; - const char* name_; - }; + static void EmptyTimerEventsLogger(const char* name, int se) {} + static void DefaultTimerEventsLogger(const char* name, int se); // ==== Events logged by --log-regexp ==== // Regexp compilation and execution events. @@ -432,12 +414,46 @@ class Logger { // 'true' between SetUp() and TearDown(). bool is_initialized_; - ElapsedTimer timer_; + base::ElapsedTimer timer_; friend class CpuProfiler; }; +#define TIMER_EVENTS_LIST(V) \ + V(RecompileSynchronous, true) \ + V(RecompileConcurrent, true) \ + V(CompileFullCode, true) \ + V(Execute, true) \ + V(External, true) \ + V(IcMiss, false) + +#define V(TimerName, expose) \ + class TimerEvent##TimerName : public AllStatic { \ + public: \ + static const char* name(void* unused = NULL) { return "V8."
#TimerName; } \ + static bool expose_to_api() { return expose; } \ + }; +TIMER_EVENTS_LIST(V) +#undef V + + +template <class TimerEvent> +class TimerEventScope { + public: + explicit TimerEventScope(Isolate* isolate) : isolate_(isolate) { + LogTimerEvent(Logger::START); + } + + ~TimerEventScope() { LogTimerEvent(Logger::END); } + + void LogTimerEvent(Logger::StartEnd se); + + private: + Isolate* isolate_; +}; + + class CodeEventListener { public: virtual ~CodeEventListener() {} @@ -470,6 +486,7 @@ class CodeEventListener { virtual void CodeDeleteEvent(Address from) = 0; virtual void SharedFunctionInfoMoveEvent(Address from, Address to) = 0; virtual void CodeMovingGCEvent() = 0; + virtual void CodeDisableOptEvent(Code* code, SharedFunctionInfo* shared) = 0; }; diff --git a/deps/v8/src/lookup-inl.h b/deps/v8/src/lookup-inl.h new file mode 100644 index 00000000000..ccd1fbb14d0 --- /dev/null +++ b/deps/v8/src/lookup-inl.h @@ -0,0 +1,68 @@ +// Copyright 2014 the V8 project authors. All rights reserved. +// Use of this source code is governed by a BSD-style license that can be +// found in the LICENSE file. + +#ifndef V8_LOOKUP_INL_H_ +#define V8_LOOKUP_INL_H_ + +#include "src/lookup.h" + +namespace v8 { +namespace internal { + + +JSReceiver* LookupIterator::NextHolder(Map* map) { + DisallowHeapAllocation no_gc; + if (map->prototype()->IsNull()) return NULL; + + JSReceiver* next = JSReceiver::cast(map->prototype()); + DCHECK(!next->map()->IsGlobalObjectMap() || + next->map()->is_hidden_prototype()); + + if (!check_derived() && + !(check_hidden() && next->map()->is_hidden_prototype())) { + return NULL; + } + + return next; +} + + +LookupIterator::State LookupIterator::LookupInHolder(Map* map) { + DisallowHeapAllocation no_gc; + switch (state_) { + case NOT_FOUND: + if (map->IsJSProxyMap()) { + return JSPROXY; + } + if (check_access_check() && map->is_access_check_needed()) { + return ACCESS_CHECK; + } + // Fall through. + case ACCESS_CHECK: + if (check_interceptor() && map->has_named_interceptor()) { + return INTERCEPTOR; + } + // Fall through. + case INTERCEPTOR: + if (map->is_dictionary_map()) { + property_encoding_ = DICTIONARY; + } else { + DescriptorArray* descriptors = map->instance_descriptors(); + number_ = descriptors->SearchWithCache(*name_, map); + if (number_ == DescriptorArray::kNotFound) return NOT_FOUND; + property_encoding_ = DESCRIPTOR; + } + return PROPERTY; + case PROPERTY: + return NOT_FOUND; + case JSPROXY: + UNREACHABLE(); + } + UNREACHABLE(); + return state_; +} +} +} // namespace v8::internal + +#endif // V8_LOOKUP_INL_H_ diff --git a/deps/v8/src/lookup.cc b/deps/v8/src/lookup.cc new file mode 100644 index 00000000000..967ce498ac1 --- /dev/null +++ b/deps/v8/src/lookup.cc @@ -0,0 +1,275 @@ +// Copyright 2014 the V8 project authors. All rights reserved. +// Use of this source code is governed by a BSD-style license that can be +// found in the LICENSE file. + +#include "src/v8.h" + +#include "src/bootstrapper.h" +#include "src/lookup.h" +#include "src/lookup-inl.h" + +namespace v8 { +namespace internal { + + +void LookupIterator::Next() { + DisallowHeapAllocation no_gc; + has_property_ = false; + + JSReceiver* holder = NULL; + Map* map = *holder_map_; + + // Perform lookup on current holder. + state_ = LookupInHolder(map); + + // Continue lookup if lookup on current holder failed. 
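+  // (Editorial note: the loop below ends when NextHolder() returns NULL,
+  // that is, at the end of the prototype chain or when the configuration
+  // forbids walking it, or as soon as LookupInHolder() reports any state
+  // other than NOT_FOUND.)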
+ while (!IsFound()) { + JSReceiver* maybe_holder = NextHolder(map); + if (maybe_holder == NULL) break; + holder = maybe_holder; + map = holder->map(); + state_ = LookupInHolder(map); + } + + // Either was found in the receiver, or the receiver has no prototype. + if (holder == NULL) return; + + maybe_holder_ = handle(holder); + holder_map_ = handle(map); +} + + +Handle<JSReceiver> LookupIterator::GetRoot() const { + Handle<Object> receiver = GetReceiver(); + if (receiver->IsJSReceiver()) return Handle<JSReceiver>::cast(receiver); + Handle<Object> root = + handle(receiver->GetRootMap(isolate_)->prototype(), isolate_); + CHECK(!root->IsNull()); + return Handle<JSReceiver>::cast(root); +} + + +Handle<Map> LookupIterator::GetReceiverMap() const { + Handle<Object> receiver = GetReceiver(); + if (receiver->IsNumber()) return isolate_->factory()->heap_number_map(); + return handle(Handle<HeapObject>::cast(receiver)->map()); +} + + +bool LookupIterator::IsBootstrapping() const { + return isolate_->bootstrapper()->IsActive(); +} + + +bool LookupIterator::HasAccess(v8::AccessType access_type) const { + DCHECK_EQ(ACCESS_CHECK, state_); + DCHECK(is_guaranteed_to_have_holder()); + return isolate_->MayNamedAccess(GetHolder<JSObject>(), name_, access_type); +} + + +bool LookupIterator::HasProperty() { + DCHECK_EQ(PROPERTY, state_); + DCHECK(is_guaranteed_to_have_holder()); + + if (property_encoding_ == DICTIONARY) { + Handle<JSObject> holder = GetHolder<JSObject>(); + number_ = holder->property_dictionary()->FindEntry(name_); + if (number_ == NameDictionary::kNotFound) return false; + + property_details_ = holder->property_dictionary()->DetailsAt(number_); + // Holes in dictionary cells are absent values. + if (holder->IsGlobalObject() && + (property_details_.IsDeleted() || FetchValue()->IsTheHole())) { + return false; + } + } else { + // Can't use descriptor_number() yet because has_property_ is still false. + property_details_ = + holder_map_->instance_descriptors()->GetDetails(number_); + } + + switch (property_details_.type()) { + case v8::internal::FIELD: + case v8::internal::NORMAL: + case v8::internal::CONSTANT: + property_kind_ = DATA; + break; + case v8::internal::CALLBACKS: + property_kind_ = ACCESSOR; + break; + case v8::internal::HANDLER: + case v8::internal::NONEXISTENT: + case v8::internal::INTERCEPTOR: + UNREACHABLE(); + } + + has_property_ = true; + return true; +} + + +void LookupIterator::PrepareForDataProperty(Handle<Object> value) { + DCHECK(has_property_); + DCHECK(HolderIsReceiverOrHiddenPrototype()); + if (property_encoding_ == DICTIONARY) return; + holder_map_ = + Map::PrepareForDataProperty(holder_map_, descriptor_number(), value); + JSObject::MigrateToMap(GetHolder<JSObject>(), holder_map_); + // Reload property information. + if (holder_map_->is_dictionary_map()) { + property_encoding_ = DICTIONARY; + } else { + property_encoding_ = DESCRIPTOR; + } + CHECK(HasProperty()); +} + + +void LookupIterator::TransitionToDataProperty( + Handle<Object> value, PropertyAttributes attributes, + Object::StoreFromKeyed store_mode) { + DCHECK(!has_property_ || !HolderIsReceiverOrHiddenPrototype()); + + // Can only be called when the receiver is a JSObject. JSProxy has to be + // handled via a trap. Adding properties to primitive values is not + // observable. + Handle<JSObject> receiver = Handle<JSObject>::cast(GetReceiver()); + + // Properties have to be added to context extension objects through + // SetOwnPropertyIgnoreAttributes. 
+ DCHECK(!receiver->IsJSContextExtensionObject()); + + if (receiver->IsJSGlobalProxy()) { + PrototypeIterator iter(isolate(), receiver); + receiver = + Handle<JSGlobalObject>::cast(PrototypeIterator::GetCurrent(iter)); + } + + maybe_holder_ = receiver; + holder_map_ = Map::TransitionToDataProperty(handle(receiver->map()), name_, + value, attributes, store_mode); + JSObject::MigrateToMap(receiver, holder_map_); + + // Reload the information. + state_ = NOT_FOUND; + configuration_ = CHECK_OWN_REAL; + state_ = LookupInHolder(*holder_map_); + DCHECK(IsFound()); + HasProperty(); +} + + +bool LookupIterator::HolderIsReceiverOrHiddenPrototype() const { + DCHECK(has_property_ || state_ == INTERCEPTOR || state_ == JSPROXY); + DisallowHeapAllocation no_gc; + Handle<Object> receiver = GetReceiver(); + if (!receiver->IsJSReceiver()) return false; + Object* current = *receiver; + JSReceiver* holder = *maybe_holder_.ToHandleChecked(); + // JSProxy do not occur as hidden prototypes. + if (current->IsJSProxy()) { + return JSReceiver::cast(current) == holder; + } + PrototypeIterator iter(isolate(), current, + PrototypeIterator::START_AT_RECEIVER); + do { + if (JSReceiver::cast(iter.GetCurrent()) == holder) return true; + DCHECK(!current->IsJSProxy()); + iter.Advance(); + } while (!iter.IsAtEnd(PrototypeIterator::END_AT_NON_HIDDEN)); + return false; +} + + +Handle<Object> LookupIterator::FetchValue() const { + Object* result = NULL; + Handle<JSObject> holder = GetHolder<JSObject>(); + switch (property_encoding_) { + case DICTIONARY: + result = holder->property_dictionary()->ValueAt(number_); + if (holder->IsGlobalObject()) { + result = PropertyCell::cast(result)->value(); + } + break; + case DESCRIPTOR: + if (property_details_.type() == v8::internal::FIELD) { + FieldIndex field_index = + FieldIndex::ForDescriptor(*holder_map_, number_); + return JSObject::FastPropertyAt( + holder, property_details_.representation(), field_index); + } + result = holder_map_->instance_descriptors()->GetValue(number_); + } + return handle(result, isolate_); +} + + +int LookupIterator::GetConstantIndex() const { + DCHECK(has_property_); + DCHECK_EQ(DESCRIPTOR, property_encoding_); + DCHECK_EQ(v8::internal::CONSTANT, property_details_.type()); + return descriptor_number(); +} + + +FieldIndex LookupIterator::GetFieldIndex() const { + DCHECK(has_property_); + DCHECK_EQ(DESCRIPTOR, property_encoding_); + DCHECK_EQ(v8::internal::FIELD, property_details_.type()); + int index = + holder_map()->instance_descriptors()->GetFieldIndex(descriptor_number()); + bool is_double = representation().IsDouble(); + return FieldIndex::ForPropertyIndex(*holder_map(), index, is_double); +} + + +Handle<PropertyCell> LookupIterator::GetPropertyCell() const { + Handle<JSObject> holder = GetHolder<JSObject>(); + Handle<GlobalObject> global = Handle<GlobalObject>::cast(holder); + Object* value = global->property_dictionary()->ValueAt(dictionary_entry()); + return Handle<PropertyCell>(PropertyCell::cast(value)); +} + + +Handle<Object> LookupIterator::GetAccessors() const { + DCHECK(has_property_); + DCHECK_EQ(ACCESSOR, property_kind_); + return FetchValue(); +} + + +Handle<Object> LookupIterator::GetDataValue() const { + DCHECK(has_property_); + DCHECK_EQ(DATA, property_kind_); + Handle<Object> value = FetchValue(); + return value; +} + + +void LookupIterator::WriteDataValue(Handle<Object> value) { + DCHECK(is_guaranteed_to_have_holder()); + DCHECK(has_property_); + Handle<JSObject> holder = GetHolder<JSObject>(); + if (property_encoding_ == DICTIONARY) { + 
NameDictionary* property_dictionary = holder->property_dictionary(); + if (holder->IsGlobalObject()) { + Handle<PropertyCell> cell( + PropertyCell::cast(property_dictionary->ValueAt(dictionary_entry()))); + PropertyCell::SetValueInferType(cell, value); + } else { + property_dictionary->ValueAtPut(dictionary_entry(), *value); + } + } else if (property_details_.type() == v8::internal::FIELD) { + holder->WriteToField(descriptor_number(), *value); + } else { + DCHECK_EQ(v8::internal::CONSTANT, property_details_.type()); + } +} + + +void LookupIterator::InternalizeName() { + if (name_->IsUniqueName()) return; + name_ = factory()->InternalizeString(Handle<String>::cast(name_)); +} +} } // namespace v8::internal diff --git a/deps/v8/src/lookup.h b/deps/v8/src/lookup.h new file mode 100644 index 00000000000..2d609c5f666 --- /dev/null +++ b/deps/v8/src/lookup.h @@ -0,0 +1,217 @@ +// Copyright 2014 the V8 project authors. All rights reserved. +// Use of this source code is governed by a BSD-style license that can be +// found in the LICENSE file. + +#ifndef V8_LOOKUP_H_ +#define V8_LOOKUP_H_ + +#include "src/factory.h" +#include "src/isolate.h" +#include "src/objects.h" + +namespace v8 { +namespace internal { + +class LookupIterator V8_FINAL BASE_EMBEDDED { + public: + enum Configuration { + CHECK_OWN_REAL = 0, + CHECK_HIDDEN = 1 << 0, + CHECK_DERIVED = 1 << 1, + CHECK_INTERCEPTOR = 1 << 2, + CHECK_ACCESS_CHECK = 1 << 3, + CHECK_ALL = CHECK_HIDDEN | CHECK_DERIVED | + CHECK_INTERCEPTOR | CHECK_ACCESS_CHECK, + SKIP_INTERCEPTOR = CHECK_ALL ^ CHECK_INTERCEPTOR, + CHECK_OWN = CHECK_ALL ^ CHECK_DERIVED + }; + + enum State { + NOT_FOUND, + PROPERTY, + INTERCEPTOR, + ACCESS_CHECK, + JSPROXY + }; + + enum PropertyKind { + DATA, + ACCESSOR + }; + + enum PropertyEncoding { + DICTIONARY, + DESCRIPTOR + }; + + LookupIterator(Handle<Object> receiver, + Handle<Name> name, + Configuration configuration = CHECK_ALL) + : configuration_(ComputeConfiguration(configuration, name)), + state_(NOT_FOUND), + property_kind_(DATA), + property_encoding_(DESCRIPTOR), + property_details_(NONE, NONEXISTENT, Representation::None()), + isolate_(name->GetIsolate()), + name_(name), + maybe_receiver_(receiver), + number_(DescriptorArray::kNotFound) { + Handle<JSReceiver> root = GetRoot(); + holder_map_ = handle(root->map()); + maybe_holder_ = root; + Next(); + } + + LookupIterator(Handle<Object> receiver, + Handle<Name> name, + Handle<JSReceiver> holder, + Configuration configuration = CHECK_ALL) + : configuration_(ComputeConfiguration(configuration, name)), + state_(NOT_FOUND), + property_kind_(DATA), + property_encoding_(DESCRIPTOR), + property_details_(NONE, NONEXISTENT, Representation::None()), + isolate_(name->GetIsolate()), + name_(name), + holder_map_(holder->map()), + maybe_receiver_(receiver), + maybe_holder_(holder), + number_(DescriptorArray::kNotFound) { + Next(); + } + + Isolate* isolate() const { return isolate_; } + State state() const { return state_; } + Handle<Name> name() const { return name_; } + + bool IsFound() const { return state_ != NOT_FOUND; } + void Next(); + + Heap* heap() const { return isolate_->heap(); } + Factory* factory() const { return isolate_->factory(); } + Handle<Object> GetReceiver() const { + return Handle<Object>::cast(maybe_receiver_.ToHandleChecked()); + } + Handle<Map> holder_map() const { return holder_map_; } + template <class T> + Handle<T> GetHolder() const { + DCHECK(IsFound()); + return Handle<T>::cast(maybe_holder_.ToHandleChecked()); + } + Handle<JSReceiver> GetRoot() const; + bool 
HolderIsReceiverOrHiddenPrototype() const; + + /* Dynamically reduce the trapped types. */ + void skip_interceptor() { + configuration_ = static_cast<Configuration>( + configuration_ & ~CHECK_INTERCEPTOR); + } + void skip_access_check() { + configuration_ = static_cast<Configuration>( + configuration_ & ~CHECK_ACCESS_CHECK); + } + + /* ACCESS_CHECK */ + bool HasAccess(v8::AccessType access_type) const; + + /* PROPERTY */ + // HasProperty needs to be called before any of the other PROPERTY methods + // below can be used. It ensures that we are able to provide a definite + // answer, and loads extra information about the property. + bool HasProperty(); + void PrepareForDataProperty(Handle<Object> value); + void TransitionToDataProperty(Handle<Object> value, + PropertyAttributes attributes, + Object::StoreFromKeyed store_mode); + PropertyKind property_kind() const { + DCHECK(has_property_); + return property_kind_; + } + PropertyEncoding property_encoding() const { + DCHECK(has_property_); + return property_encoding_; + } + PropertyDetails property_details() const { + DCHECK(has_property_); + return property_details_; + } + bool IsConfigurable() const { return !property_details().IsDontDelete(); } + Representation representation() const { + return property_details().representation(); + } + FieldIndex GetFieldIndex() const; + int GetConstantIndex() const; + Handle<PropertyCell> GetPropertyCell() const; + Handle<Object> GetAccessors() const; + Handle<Object> GetDataValue() const; + void WriteDataValue(Handle<Object> value); + + void InternalizeName(); + + private: + Handle<Map> GetReceiverMap() const; + + MUST_USE_RESULT inline JSReceiver* NextHolder(Map* map); + inline State LookupInHolder(Map* map); + Handle<Object> FetchValue() const; + + bool IsBootstrapping() const; + + // Methods that fetch data from the holder ensure they always have a holder. + // This means the receiver needs to be present as opposed to just the receiver + // map. Other objects in the prototype chain are transitively guaranteed to be + // present via the receiver map. 
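+  // (Editorial note: concretely, the check below is simply "was a receiver
+  // handle supplied", i.e. maybe_receiver_ is non-null.)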
+ bool is_guaranteed_to_have_holder() const { + return !maybe_receiver_.is_null(); + } + bool check_interceptor() const { + return !IsBootstrapping() && (configuration_ & CHECK_INTERCEPTOR) != 0; + } + bool check_derived() const { + return (configuration_ & CHECK_DERIVED) != 0; + } + bool check_hidden() const { + return (configuration_ & CHECK_HIDDEN) != 0; + } + bool check_access_check() const { + return (configuration_ & CHECK_ACCESS_CHECK) != 0; + } + int descriptor_number() const { + DCHECK(has_property_); + DCHECK_EQ(DESCRIPTOR, property_encoding_); + return number_; + } + int dictionary_entry() const { + DCHECK(has_property_); + DCHECK_EQ(DICTIONARY, property_encoding_); + return number_; + } + + static Configuration ComputeConfiguration( + Configuration configuration, Handle<Name> name) { + if (name->IsOwn()) { + return static_cast<Configuration>(configuration & CHECK_OWN); + } else { + return configuration; + } + } + + Configuration configuration_; + State state_; + bool has_property_; + PropertyKind property_kind_; + PropertyEncoding property_encoding_; + PropertyDetails property_details_; + Isolate* isolate_; + Handle<Name> name_; + Handle<Map> holder_map_; + MaybeHandle<Object> maybe_receiver_; + MaybeHandle<JSReceiver> maybe_holder_; + + int number_; +}; + + +} } // namespace v8::internal + +#endif // V8_LOOKUP_H_ diff --git a/deps/v8/src/macro-assembler.h b/deps/v8/src/macro-assembler.h index bd22b93c729..54cebca90f8 100644 --- a/deps/v8/src/macro-assembler.h +++ b/deps/v8/src/macro-assembler.h @@ -38,39 +38,52 @@ enum AllocationFlags { const int kInvalidProtoDepth = -1; #if V8_TARGET_ARCH_IA32 -#include "assembler.h" -#include "ia32/assembler-ia32.h" -#include "ia32/assembler-ia32-inl.h" -#include "code.h" // must be after assembler_*.h -#include "ia32/macro-assembler-ia32.h" +#include "src/assembler.h" +#include "src/ia32/assembler-ia32.h" +#include "src/ia32/assembler-ia32-inl.h" +#include "src/code.h" // NOLINT, must be after assembler_*.h +#include "src/ia32/macro-assembler-ia32.h" #elif V8_TARGET_ARCH_X64 -#include "assembler.h" -#include "x64/assembler-x64.h" -#include "x64/assembler-x64-inl.h" -#include "code.h" // must be after assembler_*.h -#include "x64/macro-assembler-x64.h" +#include "src/assembler.h" +#include "src/x64/assembler-x64.h" +#include "src/x64/assembler-x64-inl.h" +#include "src/code.h" // NOLINT, must be after assembler_*.h +#include "src/x64/macro-assembler-x64.h" #elif V8_TARGET_ARCH_ARM64 -#include "arm64/constants-arm64.h" -#include "assembler.h" -#include "arm64/assembler-arm64.h" -#include "arm64/assembler-arm64-inl.h" -#include "code.h" // must be after assembler_*.h -#include "arm64/macro-assembler-arm64.h" -#include "arm64/macro-assembler-arm64-inl.h" +#include "src/arm64/constants-arm64.h" +#include "src/assembler.h" +#include "src/arm64/assembler-arm64.h" // NOLINT +#include "src/arm64/assembler-arm64-inl.h" +#include "src/code.h" // NOLINT, must be after assembler_*.h +#include "src/arm64/macro-assembler-arm64.h" // NOLINT +#include "src/arm64/macro-assembler-arm64-inl.h" #elif V8_TARGET_ARCH_ARM -#include "arm/constants-arm.h" -#include "assembler.h" -#include "arm/assembler-arm.h" -#include "arm/assembler-arm-inl.h" -#include "code.h" // must be after assembler_*.h -#include "arm/macro-assembler-arm.h" +#include "src/arm/constants-arm.h" +#include "src/assembler.h" +#include "src/arm/assembler-arm.h" // NOLINT +#include "src/arm/assembler-arm-inl.h" +#include "src/code.h" // NOLINT, must be after assembler_*.h +#include 
"src/arm/macro-assembler-arm.h" // NOLINT #elif V8_TARGET_ARCH_MIPS -#include "mips/constants-mips.h" -#include "assembler.h" -#include "mips/assembler-mips.h" -#include "mips/assembler-mips-inl.h" -#include "code.h" // must be after assembler_*.h -#include "mips/macro-assembler-mips.h" +#include "src/mips/constants-mips.h" +#include "src/assembler.h" // NOLINT +#include "src/mips/assembler-mips.h" // NOLINT +#include "src/mips/assembler-mips-inl.h" +#include "src/code.h" // NOLINT, must be after assembler_*.h +#include "src/mips/macro-assembler-mips.h" +#elif V8_TARGET_ARCH_MIPS64 +#include "src/mips64/constants-mips64.h" +#include "src/assembler.h" // NOLINT +#include "src/mips64/assembler-mips64.h" // NOLINT +#include "src/mips64/assembler-mips64-inl.h" +#include "src/code.h" // NOLINT, must be after assembler_*.h +#include "src/mips64/macro-assembler-mips64.h" +#elif V8_TARGET_ARCH_X87 +#include "src/assembler.h" +#include "src/x87/assembler-x87.h" +#include "src/x87/assembler-x87-inl.h" +#include "src/code.h" // NOLINT, must be after assembler_*.h +#include "src/x87/macro-assembler-x87.h" #else #error Unsupported target architecture. #endif @@ -101,7 +114,7 @@ class FrameScope { // scope, the MacroAssembler is still marked as being in a frame scope, and // the code will be generated again when it goes out of scope. void GenerateLeaveFrame() { - ASSERT(type_ != StackFrame::MANUAL && type_ != StackFrame::NONE); + DCHECK(type_ != StackFrame::MANUAL && type_ != StackFrame::NONE); masm_->LeaveFrame(type_); } diff --git a/deps/v8/src/macros.py b/deps/v8/src/macros.py index 3eb906f74d6..131df878b5d 100644 --- a/deps/v8/src/macros.py +++ b/deps/v8/src/macros.py @@ -126,6 +126,8 @@ macro IS_ARRAYBUFFER(arg) = (%_ClassOf(arg) === 'ArrayBuffer'); macro IS_DATAVIEW(arg) = (%_ClassOf(arg) === 'DataView'); macro IS_GENERATOR(arg) = (%_ClassOf(arg) === 'Generator'); +macro IS_SET_ITERATOR(arg) = (%_ClassOf(arg) === 'Set Iterator'); +macro IS_MAP_ITERATOR(arg) = (%_ClassOf(arg) === 'Map Iterator'); macro IS_UNDETECTABLE(arg) = (%_IsUndetectableObject(arg)); macro FLOOR(arg) = $floor(arg); @@ -166,10 +168,12 @@ macro JSON_NUMBER_TO_STRING(arg) = ((%_IsSmi(%IS_VAR(arg)) || arg - arg == 0) ? %_NumberToString(arg) : "null"); # Private names. +# GET_PRIVATE should only be used if the property is known to exists on obj +# itself (it should really use %GetOwnProperty, but that would be way slower). macro GLOBAL_PRIVATE(name) = (%CreateGlobalPrivateSymbol(name)); macro NEW_PRIVATE(name) = (%CreatePrivateSymbol(name)); macro IS_PRIVATE(sym) = (%SymbolIsPrivate(sym)); -macro HAS_PRIVATE(obj, sym) = (sym in obj); +macro HAS_PRIVATE(obj, sym) = (%HasOwnProperty(obj, sym)); macro GET_PRIVATE(obj, sym) = (obj[sym]); macro SET_PRIVATE(obj, sym, val) = (obj[sym] = val); macro DELETE_PRIVATE(obj, sym) = (delete obj[sym]); @@ -281,3 +285,6 @@ const ITERATOR_KIND_KEYS = 1; const ITERATOR_KIND_VALUES = 2; const ITERATOR_KIND_ENTRIES = 3; + +# Check whether debug is active. +const DEBUG_IS_ACTIVE = (%_DebugIsActive() != 0); diff --git a/deps/v8/src/math.js b/deps/v8/src/math.js index f8738b5f858..13cdb31cdcf 100644 --- a/deps/v8/src/math.js +++ b/deps/v8/src/math.js @@ -2,6 +2,8 @@ // Use of this source code is governed by a BSD-style license that can be // found in the LICENSE file. 
+"use strict"; + // This file relies on the fact that the following declarations have been made // in runtime.js: // var $Object = global.Object; @@ -28,24 +30,24 @@ function MathAbs(x) { } // ECMA 262 - 15.8.2.2 -function MathAcos(x) { +function MathAcosJS(x) { return %MathAcos(TO_NUMBER_INLINE(x)); } // ECMA 262 - 15.8.2.3 -function MathAsin(x) { +function MathAsinJS(x) { return %MathAsin(TO_NUMBER_INLINE(x)); } // ECMA 262 - 15.8.2.4 -function MathAtan(x) { +function MathAtanJS(x) { return %MathAtan(TO_NUMBER_INLINE(x)); } // ECMA 262 - 15.8.2.5 // The naming of y and x matches the spec, as does the order in which // ToNumber (valueOf) is called. -function MathAtan2(y, x) { +function MathAtan2JS(y, x) { return %MathAtan2(TO_NUMBER_INLINE(y), TO_NUMBER_INLINE(x)); } @@ -54,15 +56,9 @@ function MathCeil(x) { return -MathFloor(-x); } -// ECMA 262 - 15.8.2.7 -function MathCos(x) { - x = MathAbs(x); // Convert to number and get rid of -0. - return TrigonometricInterpolation(x, 1); -} - // ECMA 262 - 15.8.2.8 function MathExp(x) { - return %MathExp(TO_NUMBER_INLINE(x)); + return %MathExpRT(TO_NUMBER_INLINE(x)); } // ECMA 262 - 15.8.2.9 @@ -77,13 +73,13 @@ function MathFloor(x) { // has to be -0, which wouldn't be the case with the shift. return TO_UINT32(x); } else { - return %MathFloor(x); + return %MathFloorRT(x); } } // ECMA 262 - 15.8.2.10 function MathLog(x) { - return %_MathLog(TO_NUMBER_INLINE(x)); + return %_MathLogRT(TO_NUMBER_INLINE(x)); } // ECMA 262 - 15.8.2.11 @@ -162,21 +158,9 @@ function MathRound(x) { return %RoundNumber(TO_NUMBER_INLINE(x)); } -// ECMA 262 - 15.8.2.16 -function MathSin(x) { - x = x * 1; // Convert to number and deal with -0. - if (%_IsMinusZero(x)) return x; - return TrigonometricInterpolation(x, 0); -} - // ECMA 262 - 15.8.2.17 function MathSqrt(x) { - return %_MathSqrt(TO_NUMBER_INLINE(x)); -} - -// ECMA 262 - 15.8.2.18 -function MathTan(x) { - return MathSin(x) / MathCos(x); + return %_MathSqrtRT(TO_NUMBER_INLINE(x)); } // Non-standard extension. @@ -184,72 +168,183 @@ function MathImul(x, y) { return %NumberImul(TO_NUMBER_INLINE(x), TO_NUMBER_INLINE(y)); } +// ES6 draft 09-27-13, section 20.2.2.28. +function MathSign(x) { + x = TO_NUMBER_INLINE(x); + if (x > 0) return 1; + if (x < 0) return -1; + if (x === 0) return x; + return NAN; +} -var kInversePiHalf = 0.636619772367581343; // 2 / pi -var kInversePiHalfS26 = 9.48637384723993156e-9; // 2 / pi / (2^26) -var kS26 = 1 << 26; -var kTwoStepThreshold = 1 << 27; -// pi / 2 rounded up -var kPiHalf = 1.570796326794896780; // 0x192d4454fb21f93f -// We use two parts for pi/2 to emulate a higher precision. -// pi_half_1 only has 26 significant bits for mantissa. -// Note that pi_half > pi_half_1 + pi_half_2 -var kPiHalf1 = 1.570796325802803040; // 0x00000054fb21f93f -var kPiHalf2 = 9.920935796805404252e-10; // 0x3326a611460b113e - -var kSamples; // Initialized to a number during genesis. -var kIndexConvert; // Initialized to kSamples / (pi/2) during genesis. -var kSinTable; // Initialized to a Float64Array during genesis. -var kCosXIntervalTable; // Initialized to a Float64Array during genesis. - -// This implements sine using the following algorithm. -// 1) Multiplication takes care of to-number conversion. -// 2) Reduce x to the first quadrant [0, pi/2]. -// Conveniently enough, in case of +/-Infinity, we get NaN. -// Note that we try to use only 26 instead of 52 significant bits for -// mantissa to avoid rounding errors when multiplying. For very large -// input we therefore have additional steps. 
-// 3) Replace x by (pi/2-x) if x was in the 2nd or 4th quadrant. -// 4) Do a table lookup for the closest samples to the left and right of x. -// 5) Find the derivatives at those sampling points by table lookup: -// dsin(x)/dx = cos(x) = sin(pi/2-x) for x in [0, pi/2]. -// 6) Use cubic spline interpolation to approximate sin(x). -// 7) Negate the result if x was in the 3rd or 4th quadrant. -// 8) Get rid of -0 by adding 0. -function TrigonometricInterpolation(x, phase) { - if (x < 0 || x > kPiHalf) { - var multiple; - while (x < -kTwoStepThreshold || x > kTwoStepThreshold) { - // Let's assume this loop does not terminate. - // All numbers x in each loop forms a set S. - // (1) abs(x) > 2^27 for all x in S. - // (2) abs(multiple) != 0 since (2^27 * inverse_pi_half_s26) > 1 - // (3) multiple is rounded down in 2^26 steps, so the rounding error is - // at most max(ulp, 2^26). - // (4) so for x > 2^27, we subtract at most (1+pi/4)x and at least - // (1-pi/4)x - // (5) The subtraction results in x' so that abs(x') <= abs(x)*pi/4. - // Note that this difference cannot be simply rounded off. - // Set S cannot exist since (5) violates (1). Loop must terminate. - multiple = MathFloor(x * kInversePiHalfS26) * kS26; - x = x - multiple * kPiHalf1 - multiple * kPiHalf2; - } - multiple = MathFloor(x * kInversePiHalf); - x = x - multiple * kPiHalf1 - multiple * kPiHalf2; - phase += multiple; +// ES6 draft 09-27-13, section 20.2.2.34. +function MathTrunc(x) { + x = TO_NUMBER_INLINE(x); + if (x > 0) return MathFloor(x); + if (x < 0) return MathCeil(x); + if (x === 0) return x; + return NAN; +} + +// ES6 draft 09-27-13, section 20.2.2.30. +function MathSinh(x) { + if (!IS_NUMBER(x)) x = NonNumberToNumber(x); + // Idempotent for NaN, +/-0 and +/-Infinity. + if (x === 0 || !NUMBER_IS_FINITE(x)) return x; + return (MathExp(x) - MathExp(-x)) / 2; +} + +// ES6 draft 09-27-13, section 20.2.2.12. +function MathCosh(x) { + if (!IS_NUMBER(x)) x = NonNumberToNumber(x); + if (!NUMBER_IS_FINITE(x)) return MathAbs(x); + return (MathExp(x) + MathExp(-x)) / 2; +} + +// ES6 draft 09-27-13, section 20.2.2.33. +function MathTanh(x) { + if (!IS_NUMBER(x)) x = NonNumberToNumber(x); + // Idempotent for +/-0. + if (x === 0) return x; + // Returns +/-1 for +/-Infinity. + if (!NUMBER_IS_FINITE(x)) return MathSign(x); + var exp1 = MathExp(x); + var exp2 = MathExp(-x); + return (exp1 - exp2) / (exp1 + exp2); +} + +// ES6 draft 09-27-13, section 20.2.2.5. +function MathAsinh(x) { + if (!IS_NUMBER(x)) x = NonNumberToNumber(x); + // Idempotent for NaN, +/-0 and +/-Infinity. + if (x === 0 || !NUMBER_IS_FINITE(x)) return x; + if (x > 0) return MathLog(x + MathSqrt(x * x + 1)); + // This is to prevent numerical errors caused by large negative x. + return -MathLog(-x + MathSqrt(x * x + 1)); +} + +// ES6 draft 09-27-13, section 20.2.2.3. +function MathAcosh(x) { + if (!IS_NUMBER(x)) x = NonNumberToNumber(x); + if (x < 1) return NAN; + // Idempotent for NaN and +Infinity. + if (!NUMBER_IS_FINITE(x)) return x; + return MathLog(x + MathSqrt(x + 1) * MathSqrt(x - 1)); +} + +// ES6 draft 09-27-13, section 20.2.2.7. +function MathAtanh(x) { + if (!IS_NUMBER(x)) x = NonNumberToNumber(x); + // Idempotent for +/-0. + if (x === 0) return x; + // Returns NaN for NaN and +/- Infinity. + if (!NUMBER_IS_FINITE(x)) return NAN; + return 0.5 * MathLog((1 + x) / (1 - x)); +} + +// ES6 draft 09-27-13, section 20.2.2.21. +function MathLog10(x) { + return MathLog(x) * 0.434294481903251828; // log10(x) = log(x)/log(10). 
+} + + +// ES6 draft 09-27-13, section 20.2.2.22. +function MathLog2(x) { + return MathLog(x) * 1.442695040888963407; // log2(x) = log(x)/log(2). +} + +// ES6 draft 09-27-13, section 20.2.2.17. +function MathHypot(x, y) { // Function length is 2. + // We may want to introduce fast paths for two arguments and when + // normalization to avoid overflow is not necessary. For now, we + // simply assume the general case. + var length = %_ArgumentsLength(); + var args = new InternalArray(length); + var max = 0; + for (var i = 0; i < length; i++) { + var n = %_Arguments(i); + if (!IS_NUMBER(n)) n = NonNumberToNumber(n); + if (n === INFINITY || n === -INFINITY) return INFINITY; + n = MathAbs(n); + if (n > max) max = n; + args[i] = n; + } + + // Kahan summation to avoid rounding errors. + // Normalize the numbers to the largest one to avoid overflow. + if (max === 0) max = 1; + var sum = 0; + var compensation = 0; + for (var i = 0; i < length; i++) { + var n = args[i] / max; + var summand = n * n - compensation; + var preliminary = sum + summand; + compensation = (preliminary - sum) - summand; + sum = preliminary; + } + return MathSqrt(sum) * max; +} + +// ES6 draft 09-27-13, section 20.2.2.16. +function MathFroundJS(x) { + return %MathFround(TO_NUMBER_INLINE(x)); +} + +// ES6 draft 07-18-14, section 20.2.2.11 +function MathClz32(x) { + x = ToUint32(TO_NUMBER_INLINE(x)); + if (x == 0) return 32; + var result = 0; + // Binary search. + if ((x & 0xFFFF0000) === 0) { x <<= 16; result += 16; }; + if ((x & 0xFF000000) === 0) { x <<= 8; result += 8; }; + if ((x & 0xF0000000) === 0) { x <<= 4; result += 4; }; + if ((x & 0xC0000000) === 0) { x <<= 2; result += 2; }; + if ((x & 0x80000000) === 0) { x <<= 1; result += 1; }; + return result; +} + +// ES6 draft 09-27-13, section 20.2.2.9. +// Cube root approximation, refer to: http://metamerist.com/cbrt/cbrt.htm +// Using initial approximation adapted from Kahan's cbrt and 4 iterations +// of Newton's method. +function MathCbrt(x) { + if (!IS_NUMBER(x)) x = NonNumberToNumber(x); + if (x == 0 || !NUMBER_IS_FINITE(x)) return x; + return x >= 0 ? CubeRoot(x) : -CubeRoot(-x); +} + +macro NEWTON_ITERATION_CBRT(x, approx) + (1.0 / 3.0) * (x / (approx * approx) + 2 * approx); +endmacro + +function CubeRoot(x) { + var approx_hi = MathFloor(%_DoubleHi(x) / 3) + 0x2A9F7893; + var approx = %_ConstructDouble(approx_hi, 0); + approx = NEWTON_ITERATION_CBRT(x, approx); + approx = NEWTON_ITERATION_CBRT(x, approx); + approx = NEWTON_ITERATION_CBRT(x, approx); + return NEWTON_ITERATION_CBRT(x, approx); +} + +// ES6 draft 09-27-13, section 20.2.2.14. +// Use Taylor series to approximate. +// exp(x) - 1 at 0 == -1 + exp(0) + exp'(0)*x/1! + exp''(0)*x^2/2! + ... +// == x/1! + x^2/2! + x^3/3! + ... +// The closer x is to 0, the fewer terms are required. +function MathExpm1(x) { + if (!IS_NUMBER(x)) x = NonNumberToNumber(x); + var xabs = MathAbs(x); + if (xabs < 2E-7) { + return x * (1 + x * (1/2)); + } else if (xabs < 6E-5) { + return x * (1 + x * (1/2 + x * (1/6))); + } else if (xabs < 2E-2) { + return x * (1 + x * (1/2 + x * (1/6 + + x * (1/24 + x * (1/120 + x * (1/720)))))); + } else { // Use regular exp if not close enough to 0. 
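+    // (Editorial note: at |x| >= 2e-2 the subtraction below loses only a few
+    // bits to cancellation, so computing exp(x) - 1 directly is acceptable.)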
+ return MathExp(x) - 1; } - var double_index = x * kIndexConvert; - if (phase & 1) double_index = kSamples - double_index; - var index = double_index | 0; - var t1 = double_index - index; - var t2 = 1 - t1; - var y1 = kSinTable[index]; - var y2 = kSinTable[index + 1]; - var dy = y2 - y1; - return (t2 * y1 + t1 * y2 + - t1 * t2 * ((kCosXIntervalTable[index] - dy) * t2 + - (dy - kCosXIntervalTable[index + 1]) * t1)) - * (1 - (phase & 2)) + 0; } // ------------------------------------------------------------------- @@ -257,8 +352,8 @@ function TrigonometricInterpolation(x, phase) { function SetUpMath() { %CheckIsBootstrapping(); - %SetPrototype($Math, $Object.prototype); - %SetProperty(global, "Math", $Math, DONT_ENUM); + %InternalSetPrototype($Math, $Object.prototype); + %AddNamedProperty(global, "Math", $Math, DONT_ENUM); %FunctionSetInstanceClassName(MathConstructor, 'Math'); // Set up math constants. @@ -282,31 +377,45 @@ function SetUpMath() { InstallFunctions($Math, DONT_ENUM, $Array( "random", MathRandom, "abs", MathAbs, - "acos", MathAcos, - "asin", MathAsin, - "atan", MathAtan, + "acos", MathAcosJS, + "asin", MathAsinJS, + "atan", MathAtanJS, "ceil", MathCeil, - "cos", MathCos, + "cos", MathCos, // implemented by third_party/fdlibm "exp", MathExp, "floor", MathFloor, "log", MathLog, "round", MathRound, - "sin", MathSin, + "sin", MathSin, // implemented by third_party/fdlibm "sqrt", MathSqrt, - "tan", MathTan, - "atan2", MathAtan2, + "tan", MathTan, // implemented by third_party/fdlibm + "atan2", MathAtan2JS, "pow", MathPow, "max", MathMax, "min", MathMin, - "imul", MathImul + "imul", MathImul, + "sign", MathSign, + "trunc", MathTrunc, + "sinh", MathSinh, + "cosh", MathCosh, + "tanh", MathTanh, + "asinh", MathAsinh, + "acosh", MathAcosh, + "atanh", MathAtanh, + "log10", MathLog10, + "log2", MathLog2, + "hypot", MathHypot, + "fround", MathFroundJS, + "clz32", MathClz32, + "cbrt", MathCbrt, + "log1p", MathLog1p, // implemented by third_party/fdlibm + "expm1", MathExpm1 )); %SetInlineBuiltinFlag(MathCeil); %SetInlineBuiltinFlag(MathRandom); %SetInlineBuiltinFlag(MathSin); %SetInlineBuiltinFlag(MathCos); - %SetInlineBuiltinFlag(MathTan); - %SetInlineBuiltinFlag(TrigonometricInterpolation); } SetUpMath(); diff --git a/deps/v8/src/messages.cc b/deps/v8/src/messages.cc index 8c084e266f8..865bdca8fa6 100644 --- a/deps/v8/src/messages.cc +++ b/deps/v8/src/messages.cc @@ -2,12 +2,12 @@ // Use of this source code is governed by a BSD-style license that can be // found in the LICENSE file. -#include "v8.h" +#include "src/v8.h" -#include "api.h" -#include "execution.h" -#include "messages.h" -#include "spaces-inl.h" +#include "src/api.h" +#include "src/execution.h" +#include "src/heap/spaces-inl.h" +#include "src/messages.h" namespace v8 { namespace internal { diff --git a/deps/v8/src/messages.h b/deps/v8/src/messages.h index 297160d988f..aec34690352 100644 --- a/deps/v8/src/messages.h +++ b/deps/v8/src/messages.h @@ -10,7 +10,7 @@ #ifndef V8_MESSAGES_H_ #define V8_MESSAGES_H_ -#include "handles-inl.h" +#include "src/handles-inl.h" // Forward declaration of MessageLocation. 
namespace v8 { diff --git a/deps/v8/src/messages.js b/deps/v8/src/messages.js index 1965da104e3..eba1e16ece0 100644 --- a/deps/v8/src/messages.js +++ b/deps/v8/src/messages.js @@ -26,6 +26,7 @@ var kMessages = { newline_after_throw: ["Illegal newline after throw"], label_redeclaration: ["Label '", "%0", "' has already been declared"], var_redeclaration: ["Identifier '", "%0", "' has already been declared"], + duplicate_template_property: ["Object template has duplicate property '", "%0", "'"], no_catch_or_finally: ["Missing catch or finally after try"], unknown_label: ["Undefined label '", "%0", "'"], uncaught_exception: ["Uncaught ", "%0"], @@ -89,6 +90,10 @@ var kMessages = { array_functions_on_frozen: ["Cannot modify frozen array elements"], array_functions_change_sealed: ["Cannot add/remove sealed array elements"], first_argument_not_regexp: ["First argument to ", "%0", " must not be a regular expression"], + not_iterable: ["%0", " is not iterable"], + not_an_iterator: ["%0", " is not an iterator"], + iterator_result_not_an_object: ["Iterator result ", "%0", " is not an object"], + iterator_value_not_an_object: ["Iterator value ", "%0", " is not an entry object"], // RangeError invalid_array_length: ["Invalid array length"], invalid_array_buffer_length: ["Invalid array buffer length"], @@ -108,6 +113,7 @@ var kMessages = { stack_overflow: ["Maximum call stack size exceeded"], invalid_time_value: ["Invalid time value"], invalid_count_value: ["Invalid count value"], + invalid_code_point: ["Invalid code point ", "%0"], // ReferenceError invalid_lhs_in_assignment: ["Invalid left-hand side in assignment"], invalid_lhs_in_for: ["Invalid left-hand side in for-loop"], @@ -122,7 +128,6 @@ var kMessages = { illegal_break: ["Illegal break statement"], illegal_continue: ["Illegal continue statement"], illegal_return: ["Illegal return statement"], - illegal_let: ["Illegal let declaration outside extended mode"], error_loading_debugger: ["Error loading debugger"], no_input_to_regexp: ["No input to ", "%0"], invalid_json: ["String '", "%0", "' is not valid JSON"], @@ -132,8 +137,6 @@ var kMessages = { array_indexof_not_defined: ["Array.getIndexOf: Argument undefined"], object_not_extensible: ["Can't add property ", "%0", ", object is not extensible"], illegal_access: ["Illegal access"], - invalid_cached_data_function: ["Invalid cached data for function ", "%0"], - invalid_cached_data: ["Invalid cached data"], strict_mode_with: ["Strict mode code may not include a with statement"], strict_eval_arguments: ["Unexpected eval or arguments in strict mode"], too_many_arguments: ["Too many arguments in function call (only 65535 allowed)"], @@ -152,6 +155,8 @@ var kMessages = { strict_cannot_assign: ["Cannot assign to read only '", "%0", "' in strict mode"], strict_poison_pill: ["'caller', 'callee', and 'arguments' properties may not be accessed on strict mode functions or the arguments objects for calls to them"], strict_caller: ["Illegal access to a strict mode caller function."], + malformed_arrow_function_parameter_list: ["Malformed arrow function parameter list"], + generator_poison_pill: ["'caller' and 'arguments' properties may not be accessed on generator functions."], unprotected_let: ["Illegal let declaration in unprotected statement context."], unprotected_const: ["Illegal const declaration in unprotected statement context."], cant_prevent_ext_external_array_elements: ["Cannot prevent extension of an object with external array elements"], @@ -159,6 +164,7 @@ var kMessages = { harmony_const_assign: 
["Assignment to constant variable."], symbol_to_string: ["Cannot convert a Symbol value to a string"], symbol_to_primitive: ["Cannot convert a Symbol wrapper object to a primitive value"], + symbol_to_number: ["Cannot convert a Symbol value to a number"], invalid_module_path: ["Module does not export '", "%0", "', or export is not itself a module"], module_type_error: ["Module '", "%0", "' used improperly"], module_export_undefined: ["Export '", "%0", "' is not defined in module"] @@ -177,10 +183,6 @@ function FormatString(format, args) { // str is one of %0, %1, %2 or %3. try { str = NoSideEffectToString(args[arg_num]); - if (str.length > 256) { - str = %_SubString(str, 0, 239) + "...<omitted>..." + - %_SubString(str, str.length - 2, str.length); - } } catch (e) { if (%IsJSModule(args[arg_num])) str = "module"; @@ -200,10 +202,18 @@ function FormatString(format, args) { function NoSideEffectToString(obj) { if (IS_STRING(obj)) return obj; if (IS_NUMBER(obj)) return %_NumberToString(obj); - if (IS_BOOLEAN(obj)) return x ? 'true' : 'false'; + if (IS_BOOLEAN(obj)) return obj ? 'true' : 'false'; if (IS_UNDEFINED(obj)) return 'undefined'; if (IS_NULL(obj)) return 'null'; - if (IS_FUNCTION(obj)) return %_CallFunction(obj, FunctionToString); + if (IS_FUNCTION(obj)) { + var str = %_CallFunction(obj, FunctionToString); + if (str.length > 128) { + str = %_SubString(str, 0, 111) + "...<omitted>..." + + %_SubString(str, str.length - 2, str.length); + } + return str; + } + if (IS_SYMBOL(obj)) return %_CallFunction(obj, SymbolToString); if (IS_OBJECT(obj) && %GetDataProperty(obj, "toString") === ObjectToString) { var constructor = %GetDataProperty(obj, "constructor"); if (typeof constructor == "function") { @@ -278,8 +288,8 @@ function MakeGenericError(constructor, type, args) { * Set up the Script function and constructor. */ %FunctionSetInstanceClassName(Script, 'Script'); -%SetProperty(Script.prototype, 'constructor', Script, - DONT_ENUM | DONT_DELETE | READ_ONLY); +%AddNamedProperty(Script.prototype, 'constructor', Script, + DONT_ENUM | DONT_DELETE | READ_ONLY); %SetCode(Script, function(x) { // Script objects can only be created by the VM. throw new $Error("Not supported"); @@ -551,44 +561,16 @@ function ScriptNameOrSourceURL() { if (this.line_offset > 0 || this.column_offset > 0) { return this.name; } - - // The result is cached as on long scripts it takes noticable time to search - // for the sourceURL. - if (this.hasCachedNameOrSourceURL) { - return this.cachedNameOrSourceURL; + if (this.source_url) { + return this.source_url; } - this.hasCachedNameOrSourceURL = true; - - // TODO(608): the spaces in a regexp below had to be escaped as \040 - // because this file is being processed by js2c whose handling of spaces - // in regexps is broken. Also, ['"] are excluded from allowed URLs to - // avoid matches against sources that invoke evals with sourceURL. - // A better solution would be to detect these special comments in - // the scanner/parser. - var source = ToString(this.source); - var sourceUrlPos = %StringIndexOf(source, "sourceURL=", 0); - this.cachedNameOrSourceURL = this.name; - if (sourceUrlPos > 4) { - var sourceUrlPattern = - /\/\/[#@][\040\t]sourceURL=[\040\t]*([^\s\'\"]*)[\040\t]*$/gm; - // Don't reuse lastMatchInfo here, so we create a new array with room - // for four captures (array with length one longer than the index - // of the fourth capture, where the numbering is zero-based). 
- var matchInfo = new InternalArray(CAPTURE(3) + 1); - var match = - %_RegExpExec(sourceUrlPattern, source, sourceUrlPos - 4, matchInfo); - if (match) { - this.cachedNameOrSourceURL = - %_SubString(source, matchInfo[CAPTURE(2)], matchInfo[CAPTURE(3)]); - } - } - return this.cachedNameOrSourceURL; + return this.name; } SetUpLockedPrototype(Script, - $Array("source", "name", "line_ends", "line_offset", "column_offset", - "cachedNameOrSourceURL", "hasCachedNameOrSourceURL" ), + $Array("source", "name", "source_url", "source_mapping_url", "line_ends", + "line_offset", "column_offset"), $Array( "lineFromPosition", ScriptLineFromPosition, "locationFromPosition", ScriptLocationFromPosition, @@ -954,12 +936,12 @@ function CallSiteToString() { var methodName = this.getMethodName(); if (functionName) { if (typeName && - %_CallFunction(functionName, typeName, StringIndexOf) != 0) { + %_CallFunction(functionName, typeName, StringIndexOfJS) != 0) { line += typeName + "."; } line += functionName; if (methodName && - (%_CallFunction(functionName, "." + methodName, StringIndexOf) != + (%_CallFunction(functionName, "." + methodName, StringIndexOfJS) != functionName.length - methodName.length - 1)) { line += " [as " + methodName + "]"; } @@ -1072,7 +1054,8 @@ function GetStackFrames(raw_stack) { var formatting_custom_stack_trace = false; -function FormatStackTrace(obj, error_string, frames) { +function FormatStackTrace(obj, raw_stack) { + var frames = GetStackFrames(raw_stack); if (IS_FUNCTION($Error.prepareStackTrace) && !formatting_custom_stack_trace) { var array = []; %MoveArrayContents(frames, array); @@ -1089,7 +1072,7 @@ function FormatStackTrace(obj, error_string, frames) { } var lines = new InternalArray(); - lines.push(error_string); + lines.push(FormatErrorString(obj)); for (var i = 0; i < frames.length; i++) { var frame = frames[i]; var line; @@ -1124,45 +1107,48 @@ function GetTypeName(receiver, requireConstructor) { } -function captureStackTrace(obj, cons_opt) { - var stackTraceLimit = $Error.stackTraceLimit; - if (!stackTraceLimit || !IS_NUMBER(stackTraceLimit)) return; - if (stackTraceLimit < 0 || stackTraceLimit > 10000) { - stackTraceLimit = 10000; - } - var stack = %CollectStackTrace(obj, - cons_opt ? cons_opt : captureStackTrace, - stackTraceLimit); - - var error_string = FormatErrorString(obj); - - // Set the 'stack' property on the receiver. If the receiver is the same as - // holder of this setter, the accessor pair is turned into a data property. - var setter = function(v) { - // Set data property on the receiver (not necessarily holder). - %DefineOrRedefineDataProperty(this, 'stack', v, NONE); - if (this === obj) { - // Release context values if holder is the same as the receiver. - stack = error_string = UNDEFINED; +var stack_trace_symbol; // Set during bootstrapping. +var formatted_stack_trace_symbol = NEW_PRIVATE("formatted stack trace"); + + +// Format the stack trace if not yet done, and return it. +// Cache the formatted stack trace on the holder. 
+var StackTraceGetter = function() { + var formatted_stack_trace = GET_PRIVATE(this, formatted_stack_trace_symbol); + if (IS_UNDEFINED(formatted_stack_trace)) { + var holder = this; + while (!HAS_PRIVATE(holder, stack_trace_symbol)) { + holder = %GetPrototype(holder); + if (!holder) return UNDEFINED; } - }; + var stack_trace = GET_PRIVATE(holder, stack_trace_symbol); + if (IS_UNDEFINED(stack_trace)) return UNDEFINED; + formatted_stack_trace = FormatStackTrace(holder, stack_trace); + SET_PRIVATE(holder, stack_trace_symbol, UNDEFINED); + SET_PRIVATE(holder, formatted_stack_trace_symbol, formatted_stack_trace); + } + return formatted_stack_trace; +}; - // The holder of this getter ('obj') may not be the receiver ('this'). - // When this getter is called the first time, we use the context values to - // format a stack trace string and turn this accessor pair into a data - // property (on the holder). - var getter = function() { - // Stack is still a raw array awaiting to be formatted. - var result = FormatStackTrace(obj, error_string, GetStackFrames(stack)); - // Replace this accessor to return result directly. - %DefineOrRedefineAccessorProperty( - obj, 'stack', function() { return result }, setter, DONT_ENUM); - // Release context values. - stack = error_string = UNDEFINED; - return result; - }; - %DefineOrRedefineAccessorProperty(obj, 'stack', getter, setter, DONT_ENUM); +// If the receiver equals the holder, set the formatted stack trace that the +// getter returns. +var StackTraceSetter = function(v) { + if (HAS_PRIVATE(this, stack_trace_symbol)) { + SET_PRIVATE(this, stack_trace_symbol, UNDEFINED); + SET_PRIVATE(this, formatted_stack_trace_symbol, v); + } +}; + + +// Use a dummy function since we do not actually want to capture a stack trace +// when constructing the initial Error prototypes. +var captureStackTrace = function captureStackTrace(obj, cons_opt) { + // Define accessors first, as this may fail and throw. + ObjectDefineProperty(obj, 'stack', { get: StackTraceGetter, + set: StackTraceSetter, + configurable: true }); + %CollectStackTrace(obj, cons_opt ? cons_opt : captureStackTrace); } @@ -1177,8 +1163,9 @@ function SetUpError() { // effects when overwriting the error functions from // user code. var name = f.name; - %SetProperty(global, name, f, DONT_ENUM); - %SetProperty(builtins, '$' + name, f, DONT_ENUM | DONT_DELETE | READ_ONLY); + %AddNamedProperty(global, name, f, DONT_ENUM); + %AddNamedProperty(builtins, '$' + name, f, + DONT_ENUM | DONT_DELETE | READ_ONLY); // Configure the error function. if (name == 'Error') { // The prototype of the Error object must itself be an error. @@ -1193,19 +1180,18 @@ %FunctionSetPrototype(f, new $Error()); } %FunctionSetInstanceClassName(f, 'Error'); - %SetProperty(f.prototype, 'constructor', f, DONT_ENUM); - %SetProperty(f.prototype, "name", name, DONT_ENUM); + %AddNamedProperty(f.prototype, 'constructor', f, DONT_ENUM); + %AddNamedProperty(f.prototype, "name", name, DONT_ENUM); %SetCode(f, function(m) { if (%_IsConstructCall()) { // Define all the expected properties directly on the error // object. This avoids going through getters and setters defined // on prototype objects.
- %IgnoreAttributesAndSetProperty(this, 'stack', UNDEFINED, DONT_ENUM); + %AddNamedProperty(this, 'stack', UNDEFINED, DONT_ENUM); if (!IS_UNDEFINED(m)) { - %IgnoreAttributesAndSetProperty( - this, 'message', ToString(m), DONT_ENUM); + %AddNamedProperty(this, 'message', ToString(m), DONT_ENUM); } - captureStackTrace(this, f); + try { captureStackTrace(this, f); } catch (e) { } } else { return new f(m); } @@ -1226,7 +1212,7 @@ SetUpError(); $Error.captureStackTrace = captureStackTrace; -%SetProperty($Error.prototype, 'message', '', DONT_ENUM); +%AddNamedProperty($Error.prototype, 'message', '', DONT_ENUM); // Global list of error objects visited during ErrorToString. This is // used to detect cycles in error toString formatting. @@ -1236,7 +1222,7 @@ var cyclic_error_marker = new $Object(); function GetPropertyWithoutInvokingMonkeyGetters(error, name) { var current = error; // Climb the prototype chain until we find the holder. - while (current && !%HasLocalProperty(current, name)) { + while (current && !%HasOwnProperty(current, name)) { current = %GetPrototype(current); } if (IS_NULL(current)) return UNDEFINED; @@ -1298,40 +1284,8 @@ InstallFunctions($Error.prototype, DONT_ENUM, ['toString', ErrorToString]); function SetUpStackOverflowBoilerplate() { var boilerplate = MakeRangeError('stack_overflow', []); - var error_string = boilerplate.name + ": " + boilerplate.message; - - // Set the 'stack' property on the receiver. If the receiver is the same as - // holder of this setter, the accessor pair is turned into a data property. - var setter = function(v) { - %DefineOrRedefineDataProperty(this, 'stack', v, NONE); - // Tentatively clear the hidden property. If the receiver is the same as - // holder, we release the raw stack trace this way. - %GetAndClearOverflowedStackTrace(this); - }; - - // The raw stack trace is stored as a hidden property on the holder of this - // getter, which may not be the same as the receiver. Find the holder to - // retrieve the raw stack trace and then turn this accessor pair into a - // data property. - var getter = function() { - var holder = this; - while (!IS_ERROR(holder)) { - holder = %GetPrototype(holder); - if (IS_NULL(holder)) return MakeSyntaxError('illegal_access', []); - } - var stack = %GetAndClearOverflowedStackTrace(holder); - // We may not have captured any stack trace. - if (IS_UNDEFINED(stack)) return stack; - - var result = FormatStackTrace(holder, error_string, GetStackFrames(stack)); - // Replace this accessor to return result directly. 
- %DefineOrRedefineAccessorProperty( - holder, 'stack', function() { return result }, setter, DONT_ENUM); - return result; - }; - - %DefineOrRedefineAccessorProperty( - boilerplate, 'stack', getter, setter, DONT_ENUM); + %DefineAccessorPropertyUnchecked( + boilerplate, 'stack', StackTraceGetter, StackTraceSetter, DONT_ENUM); return boilerplate; } diff --git a/deps/v8/src/mips/OWNERS b/deps/v8/src/mips/OWNERS index 38473b56d1f..8d6807d2672 100644 --- a/deps/v8/src/mips/OWNERS +++ b/deps/v8/src/mips/OWNERS @@ -1,2 +1,10 @@ plind44@gmail.com +paul.lind@imgtec.com gergely@homejinni.com +gergely.kis@imgtec.com +palfia@homejinni.com +akos.palfi@imgtec.com +kilvadyb@homejinni.com +balazs.kilvady@imgtec.com +Dusan.Milosavljevic@rt-rk.com +dusan.milosavljevic@imgtec.com diff --git a/deps/v8/src/mips/assembler-mips-inl.h b/deps/v8/src/mips/assembler-mips-inl.h index ba4c4f1348a..2666f6ada7c 100644 --- a/deps/v8/src/mips/assembler-mips-inl.h +++ b/deps/v8/src/mips/assembler-mips-inl.h @@ -37,15 +37,19 @@ #ifndef V8_MIPS_ASSEMBLER_MIPS_INL_H_ #define V8_MIPS_ASSEMBLER_MIPS_INL_H_ -#include "mips/assembler-mips.h" +#include "src/mips/assembler-mips.h" -#include "cpu.h" -#include "debug.h" +#include "src/assembler.h" +#include "src/debug.h" namespace v8 { namespace internal { + +bool CpuFeatures::SupportsCrankshaft() { return IsSupported(FPU); } + + // ----------------------------------------------------------------------------- // Operand and MemOperand. @@ -96,11 +100,11 @@ int DoubleRegister::NumAllocatableRegisters() { int FPURegister::ToAllocationIndex(FPURegister reg) { - ASSERT(reg.code() % 2 == 0); - ASSERT(reg.code() / 2 < kMaxNumAllocatableRegisters); - ASSERT(reg.is_valid()); - ASSERT(!reg.is(kDoubleRegZero)); - ASSERT(!reg.is(kLithiumScratchDouble)); + DCHECK(reg.code() % 2 == 0); + DCHECK(reg.code() / 2 < kMaxNumAllocatableRegisters); + DCHECK(reg.is_valid()); + DCHECK(!reg.is(kDoubleRegZero)); + DCHECK(!reg.is(kLithiumScratchDouble)); return (reg.code() / 2); } @@ -108,7 +112,7 @@ int FPURegister::ToAllocationIndex(FPURegister reg) { // ----------------------------------------------------------------------------- // RelocInfo. -void RelocInfo::apply(intptr_t delta) { +void RelocInfo::apply(intptr_t delta, ICacheFlushMode icache_flush_mode) { if (IsCodeTarget(rmode_)) { uint32_t scope1 = (uint32_t) target_address() & ~kImm28Mask; uint32_t scope2 = reinterpret_cast<uint32_t>(pc_) & ~kImm28Mask; @@ -121,19 +125,19 @@ void RelocInfo::apply(intptr_t delta) { // Absolute code pointer inside code object moves with the code object. 
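Before the diff moves fully into the MIPS port, the messages.js rework above deserves a concrete illustration. The old code built a fresh getter/setter closure pair for every error object's 'stack' property; the new code shares a single StackTraceGetter/StackTraceSetter pair keyed off private symbols, still formats the trace lazily on first read, caches the result, and honors Error.prepareStackTrace. Below is a minimal, hypothetical sketch of that user-visible contract, assuming a node built from this tree (the file name stack-demo.js and the custom formatter are illustrative only; Error.captureStackTrace and the CallSite accessors are the existing V8 API):

// stack-demo.js -- illustrative sketch, not part of this patch.
// 'stack' is installed as a configurable accessor; nothing is formatted
// until the property is first read.
function trap() {
  var holder = {};                       // works on any object, not just Errors
  Error.captureStackTrace(holder, trap); // frames above trap() are skipped
  return holder;
}

// Custom formatting: FormatStackTrace hands prepareStackTrace an array of
// structured CallSite objects instead of a prebuilt string.
Error.prepareStackTrace = function (error, frames) {
  return frames.map(function (frame) {
    return (frame.getFunctionName() || '<top>') + ' (' +
           frame.getFileName() + ':' + frame.getLineNumber() + ')';
  }).join(' <- ');
};

var obj = trap();
console.log(obj.stack);    // first read runs the getter and caches the result
obj.stack = 'overwritten'; // StackTraceSetter lets plain assignment win
console.log(obj.stack);    // 'overwritten'
Error.prepareStackTrace = undefined; // restore the default formatter

The same shared getter/setter pair now also backs the stack-overflow boilerplate in SetUpStackOverflowBoilerplate above, replacing the bespoke per-object closures it previously installed.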
byte* p = reinterpret_cast<byte*>(pc_); int count = Assembler::RelocateInternalReference(p, delta); - CPU::FlushICache(p, count * sizeof(uint32_t)); + CpuFeatures::FlushICache(p, count * sizeof(uint32_t)); } } Address RelocInfo::target_address() { - ASSERT(IsCodeTarget(rmode_) || IsRuntimeEntry(rmode_)); + DCHECK(IsCodeTarget(rmode_) || IsRuntimeEntry(rmode_)); return Assembler::target_address_at(pc_, host_); } Address RelocInfo::target_address_address() { - ASSERT(IsCodeTarget(rmode_) || + DCHECK(IsCodeTarget(rmode_) || IsRuntimeEntry(rmode_) || rmode_ == EMBEDDED_OBJECT || rmode_ == EXTERNAL_REFERENCE); @@ -167,10 +171,13 @@ int RelocInfo::target_address_size() { } -void RelocInfo::set_target_address(Address target, WriteBarrierMode mode) { - ASSERT(IsCodeTarget(rmode_) || IsRuntimeEntry(rmode_)); - Assembler::set_target_address_at(pc_, host_, target); - if (mode == UPDATE_WRITE_BARRIER && host() != NULL && IsCodeTarget(rmode_)) { +void RelocInfo::set_target_address(Address target, + WriteBarrierMode write_barrier_mode, + ICacheFlushMode icache_flush_mode) { + DCHECK(IsCodeTarget(rmode_) || IsRuntimeEntry(rmode_)); + Assembler::set_target_address_at(pc_, host_, target, icache_flush_mode); + if (write_barrier_mode == UPDATE_WRITE_BARRIER && + host() != NULL && IsCodeTarget(rmode_)) { Object* target_code = Code::GetCodeFromTargetAddress(target); host()->GetHeap()->incremental_marking()->RecordWriteIntoCode( host(), this, HeapObject::cast(target_code)); @@ -183,25 +190,32 @@ Address Assembler::target_address_from_return_address(Address pc) { } +Address Assembler::break_address_from_return_address(Address pc) { + return pc - Assembler::kPatchDebugBreakSlotReturnOffset; +} + + Object* RelocInfo::target_object() { - ASSERT(IsCodeTarget(rmode_) || rmode_ == EMBEDDED_OBJECT); + DCHECK(IsCodeTarget(rmode_) || rmode_ == EMBEDDED_OBJECT); return reinterpret_cast<Object*>(Assembler::target_address_at(pc_, host_)); } Handle<Object> RelocInfo::target_object_handle(Assembler* origin) { - ASSERT(IsCodeTarget(rmode_) || rmode_ == EMBEDDED_OBJECT); + DCHECK(IsCodeTarget(rmode_) || rmode_ == EMBEDDED_OBJECT); return Handle<Object>(reinterpret_cast<Object**>( Assembler::target_address_at(pc_, host_))); } -void RelocInfo::set_target_object(Object* target, WriteBarrierMode mode) { - ASSERT(IsCodeTarget(rmode_) || rmode_ == EMBEDDED_OBJECT); - ASSERT(!target->IsConsString()); +void RelocInfo::set_target_object(Object* target, + WriteBarrierMode write_barrier_mode, + ICacheFlushMode icache_flush_mode) { + DCHECK(IsCodeTarget(rmode_) || rmode_ == EMBEDDED_OBJECT); Assembler::set_target_address_at(pc_, host_, - reinterpret_cast<Address>(target)); - if (mode == UPDATE_WRITE_BARRIER && + reinterpret_cast<Address>(target), + icache_flush_mode); + if (write_barrier_mode == UPDATE_WRITE_BARRIER && host() != NULL && target->IsHeapObject()) { host()->GetHeap()->incremental_marking()->RecordWrite( @@ -211,42 +225,46 @@ void RelocInfo::set_target_object(Object* target, WriteBarrierMode mode) { Address RelocInfo::target_reference() { - ASSERT(rmode_ == EXTERNAL_REFERENCE); + DCHECK(rmode_ == EXTERNAL_REFERENCE); return Assembler::target_address_at(pc_, host_); } Address RelocInfo::target_runtime_entry(Assembler* origin) { - ASSERT(IsRuntimeEntry(rmode_)); + DCHECK(IsRuntimeEntry(rmode_)); return target_address(); } void RelocInfo::set_target_runtime_entry(Address target, - WriteBarrierMode mode) { - ASSERT(IsRuntimeEntry(rmode_)); - if (target_address() != target) set_target_address(target, mode); + WriteBarrierMode 
write_barrier_mode, + ICacheFlushMode icache_flush_mode) { + DCHECK(IsRuntimeEntry(rmode_)); + if (target_address() != target) + set_target_address(target, write_barrier_mode, icache_flush_mode); } Handle<Cell> RelocInfo::target_cell_handle() { - ASSERT(rmode_ == RelocInfo::CELL); + DCHECK(rmode_ == RelocInfo::CELL); Address address = Memory::Address_at(pc_); return Handle<Cell>(reinterpret_cast<Cell**>(address)); } Cell* RelocInfo::target_cell() { - ASSERT(rmode_ == RelocInfo::CELL); + DCHECK(rmode_ == RelocInfo::CELL); return Cell::FromValueAddress(Memory::Address_at(pc_)); } -void RelocInfo::set_target_cell(Cell* cell, WriteBarrierMode mode) { - ASSERT(rmode_ == RelocInfo::CELL); +void RelocInfo::set_target_cell(Cell* cell, + WriteBarrierMode write_barrier_mode, + ICacheFlushMode icache_flush_mode) { + DCHECK(rmode_ == RelocInfo::CELL); Address address = cell->address() + Cell::kValueOffset; Memory::Address_at(pc_) = address; - if (mode == UPDATE_WRITE_BARRIER && host() != NULL) { + if (write_barrier_mode == UPDATE_WRITE_BARRIER && host() != NULL) { // TODO(1550) We are passing NULL as a slot because cell can never be on // evacuation candidate. host()->GetHeap()->incremental_marking()->RecordWrite( @@ -265,14 +283,15 @@ Handle<Object> RelocInfo::code_age_stub_handle(Assembler* origin) { Code* RelocInfo::code_age_stub() { - ASSERT(rmode_ == RelocInfo::CODE_AGE_SEQUENCE); + DCHECK(rmode_ == RelocInfo::CODE_AGE_SEQUENCE); return Code::GetCodeFromTargetAddress( Assembler::target_address_at(pc_ + Assembler::kInstrSize, host_)); } -void RelocInfo::set_code_age_stub(Code* stub) { - ASSERT(rmode_ == RelocInfo::CODE_AGE_SEQUENCE); +void RelocInfo::set_code_age_stub(Code* stub, + ICacheFlushMode icache_flush_mode) { + DCHECK(rmode_ == RelocInfo::CODE_AGE_SEQUENCE); Assembler::set_target_address_at(pc_ + Assembler::kInstrSize, host_, stub->instruction_start()); @@ -280,7 +299,7 @@ void RelocInfo::set_code_age_stub(Code* stub) { Address RelocInfo::call_address() { - ASSERT((IsJSReturn(rmode()) && IsPatchedReturnSequence()) || + DCHECK((IsJSReturn(rmode()) && IsPatchedReturnSequence()) || (IsDebugBreakSlot(rmode()) && IsPatchedDebugBreakSlotSequence())); // The pc_ offset of 0 assumes mips patched return sequence per // debug-mips.cc BreakLocationIterator::SetDebugBreakAtReturn(), or @@ -290,7 +309,7 @@ Address RelocInfo::call_address() { void RelocInfo::set_call_address(Address target) { - ASSERT((IsJSReturn(rmode()) && IsPatchedReturnSequence()) || + DCHECK((IsJSReturn(rmode()) && IsPatchedReturnSequence()) || (IsDebugBreakSlot(rmode()) && IsPatchedDebugBreakSlotSequence())); // The pc_ offset of 0 assumes mips patched return sequence per // debug-mips.cc BreakLocationIterator::SetDebugBreakAtReturn(), or @@ -310,7 +329,7 @@ Object* RelocInfo::call_object() { Object** RelocInfo::call_object_address() { - ASSERT((IsJSReturn(rmode()) && IsPatchedReturnSequence()) || + DCHECK((IsJSReturn(rmode()) && IsPatchedReturnSequence()) || (IsDebugBreakSlot(rmode()) && IsPatchedDebugBreakSlotSequence())); return reinterpret_cast<Object**>(pc_ + 2 * Assembler::kInstrSize); } @@ -322,7 +341,7 @@ void RelocInfo::set_call_object(Object* target) { void RelocInfo::WipeOut() { - ASSERT(IsEmbeddedObject(rmode_) || + DCHECK(IsEmbeddedObject(rmode_) || IsCodeTarget(rmode_) || IsRuntimeEntry(rmode_) || IsExternalReference(rmode_)); diff --git a/deps/v8/src/mips/assembler-mips.cc b/deps/v8/src/mips/assembler-mips.cc index e629868e4ed..936a73b5f9e 100644 --- a/deps/v8/src/mips/assembler-mips.cc +++ 
b/deps/v8/src/mips/assembler-mips.cc @@ -33,55 +33,40 @@ // Copyright 2012 the V8 project authors. All rights reserved. -#include "v8.h" +#include "src/v8.h" #if V8_TARGET_ARCH_MIPS -#include "mips/assembler-mips-inl.h" -#include "serialize.h" +#include "src/base/cpu.h" +#include "src/mips/assembler-mips-inl.h" +#include "src/serialize.h" namespace v8 { namespace internal { -#ifdef DEBUG -bool CpuFeatures::initialized_ = false; -#endif -unsigned CpuFeatures::supported_ = 0; -unsigned CpuFeatures::found_by_runtime_probing_only_ = 0; -unsigned CpuFeatures::cross_compile_ = 0; - - -ExternalReference ExternalReference::cpu_features() { - ASSERT(CpuFeatures::initialized_); - return ExternalReference(&CpuFeatures::supported_); -} - - // Get the CPU features enabled by the build. For cross compilation the // preprocessor symbols CAN_USE_FPU_INSTRUCTIONS // can be defined to enable FPU instructions when building the // snapshot. -static uint64_t CpuFeaturesImpliedByCompiler() { - uint64_t answer = 0; +static unsigned CpuFeaturesImpliedByCompiler() { + unsigned answer = 0; #ifdef CAN_USE_FPU_INSTRUCTIONS - answer |= static_cast<uint64_t>(1) << FPU; + answer |= 1u << FPU; #endif // def CAN_USE_FPU_INSTRUCTIONS -#ifdef __mips__ // If the compiler is allowed to use FPU then we can use FPU too in our code // generation even when generating snapshots. This won't work for cross // compilation. -#if(defined(__mips_hard_float) && __mips_hard_float != 0) - answer |= static_cast<uint64_t>(1) << FPU; -#endif // defined(__mips_hard_float) && __mips_hard_float != 0 -#endif // def __mips__ +#if defined(__mips__) && defined(__mips_hard_float) && __mips_hard_float != 0 + answer |= 1u << FPU; +#endif return answer; } const char* DoubleRegister::AllocationIndexToString(int index) { - ASSERT(index >= 0 && index < kMaxNumAllocatableRegisters); + DCHECK(index >= 0 && index < kMaxNumAllocatableRegisters); const char* const names[] = { "f0", "f2", @@ -102,45 +87,31 @@ const char* DoubleRegister::AllocationIndexToString(int index) { } -void CpuFeatures::Probe(bool serializer_enabled) { - unsigned standard_features = (OS::CpuFeaturesImpliedByPlatform() | - CpuFeaturesImpliedByCompiler()); - ASSERT(supported_ == 0 || - (supported_ & standard_features) == standard_features); -#ifdef DEBUG - initialized_ = true; -#endif - - // Get the features implied by the OS and the compiler settings. This is the - // minimal set of features which is also allowed for generated code in the - // snapshot. - supported_ |= standard_features; +void CpuFeatures::ProbeImpl(bool cross_compile) { + supported_ |= CpuFeaturesImpliedByCompiler(); - if (serializer_enabled) { - // No probing for features if we might serialize (generate snapshot). - return; - } + // Only use statically determined features for cross compile (snapshot). + if (cross_compile) return; // If the compiler is allowed to use fpu then we can use fpu too in our // code generation. -#if !defined(__mips__) +#ifndef __mips__ // For the simulator build, use FPU. - supported_ |= static_cast<uint64_t>(1) << FPU; + supported_ |= 1u << FPU; #else - // Probe for additional features not already known to be available. - CPU cpu; - if (cpu.has_fpu()) { - // This implementation also sets the FPU flags if - // runtime detection of FPU returns true. - supported_ |= static_cast<uint64_t>(1) << FPU; - found_by_runtime_probing_only_ |= static_cast<uint64_t>(1) << FPU; - } + // Probe for additional features at runtime. 
+ base::CPU cpu; + if (cpu.has_fpu()) supported_ |= 1u << FPU; #endif } +void CpuFeatures::PrintTarget() { } +void CpuFeatures::PrintFeatures() { } + + int ToNumber(Register reg) { - ASSERT(reg.is_valid()); + DCHECK(reg.is_valid()); const int kNumbers[] = { 0, // zero_reg 1, // at @@ -180,7 +151,7 @@ int ToNumber(Register reg) { Register ToRegister(int num) { - ASSERT(num >= 0 && num < kNumRegisters); + DCHECK(num >= 0 && num < kNumRegisters); const Register kRegisters[] = { zero_reg, at, @@ -228,7 +199,7 @@ void RelocInfo::PatchCode(byte* instructions, int instruction_count) { } // Indicate that code has changed. - CPU::FlushICache(pc_, instruction_count * Assembler::kInstrSize); + CpuFeatures::FlushICache(pc_, instruction_count * Assembler::kInstrSize); } @@ -250,7 +221,7 @@ Operand::Operand(Handle<Object> handle) { // Verify all Objects referred by code are NOT in new space. Object* obj = *handle; if (obj->IsHeapObject()) { - ASSERT(!HeapObject::cast(obj)->GetHeap()->InNewSpace(obj)); + DCHECK(!HeapObject::cast(obj)->GetHeap()->InNewSpace(obj)); imm32_ = reinterpret_cast<intptr_t>(handle.location()); rmode_ = RelocInfo::EMBEDDED_OBJECT; } else { @@ -279,28 +250,30 @@ static const int kNegOffset = 0x00008000; // addiu(sp, sp, 4) aka Pop() operation or part of Pop(r) // operations as post-increment of sp. const Instr kPopInstruction = ADDIU | (kRegister_sp_Code << kRsShift) - | (kRegister_sp_Code << kRtShift) | (kPointerSize & kImm16Mask); + | (kRegister_sp_Code << kRtShift) + | (kPointerSize & kImm16Mask); // NOLINT // addiu(sp, sp, -4) part of Push(r) operation as pre-decrement of sp. const Instr kPushInstruction = ADDIU | (kRegister_sp_Code << kRsShift) - | (kRegister_sp_Code << kRtShift) | (-kPointerSize & kImm16Mask); + | (kRegister_sp_Code << kRtShift) + | (-kPointerSize & kImm16Mask); // NOLINT // sw(r, MemOperand(sp, 0)) const Instr kPushRegPattern = SW | (kRegister_sp_Code << kRsShift) - | (0 & kImm16Mask); + | (0 & kImm16Mask); // NOLINT // lw(r, MemOperand(sp, 0)) const Instr kPopRegPattern = LW | (kRegister_sp_Code << kRsShift) - | (0 & kImm16Mask); + | (0 & kImm16Mask); // NOLINT const Instr kLwRegFpOffsetPattern = LW | (kRegister_fp_Code << kRsShift) - | (0 & kImm16Mask); + | (0 & kImm16Mask); // NOLINT const Instr kSwRegFpOffsetPattern = SW | (kRegister_fp_Code << kRsShift) - | (0 & kImm16Mask); + | (0 & kImm16Mask); // NOLINT const Instr kLwRegFpNegOffsetPattern = LW | (kRegister_fp_Code << kRsShift) - | (kNegOffset & kImm16Mask); + | (kNegOffset & kImm16Mask); // NOLINT const Instr kSwRegFpNegOffsetPattern = SW | (kRegister_fp_Code << kRsShift) - | (kNegOffset & kImm16Mask); + | (kNegOffset & kImm16Mask); // NOLINT // A mask for the Rt register for push, pop, lw, sw instructions. const Instr kRtMask = kRtFieldMask; const Instr kLwSwInstrTypeMask = 0xffe00000; @@ -333,7 +306,7 @@ Assembler::Assembler(Isolate* isolate, void* buffer, int buffer_size) void Assembler::GetCode(CodeDesc* desc) { - ASSERT(pc_ <= reloc_info_writer.pos()); // No overlap. + DCHECK(pc_ <= reloc_info_writer.pos()); // No overlap. // Set up code descriptor. desc->buffer = buffer_; desc->buffer_size = buffer_size_; @@ -344,7 +317,7 @@ void Assembler::GetCode(CodeDesc* desc) { void Assembler::Align(int m) { - ASSERT(m >= 4 && IsPowerOf2(m)); + DCHECK(m >= 4 && IsPowerOf2(m)); while ((pc_offset() & (m - 1)) != 0) { nop(); } @@ -581,7 +554,7 @@ bool Assembler::IsOri(Instr instr) { bool Assembler::IsNop(Instr instr, unsigned int type) { // See Assembler::nop(type). 
- ASSERT(type < 32); + DCHECK(type < 32); uint32_t opcode = GetOpcodeField(instr); uint32_t function = GetFunctionField(instr); uint32_t rt = GetRt(instr); @@ -604,7 +577,7 @@ bool Assembler::IsNop(Instr instr, unsigned int type) { int32_t Assembler::GetBranchOffset(Instr instr) { - ASSERT(IsBranch(instr)); + DCHECK(IsBranch(instr)); return (static_cast<int16_t>(instr & kImm16Mask)) << 2; } @@ -615,13 +588,13 @@ bool Assembler::IsLw(Instr instr) { int16_t Assembler::GetLwOffset(Instr instr) { - ASSERT(IsLw(instr)); + DCHECK(IsLw(instr)); return ((instr & kImm16Mask)); } Instr Assembler::SetLwOffset(Instr instr, int16_t offset) { - ASSERT(IsLw(instr)); + DCHECK(IsLw(instr)); // We actually create a new lw instruction based on the original one. Instr temp_instr = LW | (instr & kRsFieldMask) | (instr & kRtFieldMask) @@ -637,7 +610,7 @@ bool Assembler::IsSw(Instr instr) { Instr Assembler::SetSwOffset(Instr instr, int16_t offset) { - ASSERT(IsSw(instr)); + DCHECK(IsSw(instr)); return ((instr & ~kImm16Mask) | (offset & kImm16Mask)); } @@ -648,7 +621,7 @@ bool Assembler::IsAddImmediate(Instr instr) { Instr Assembler::SetAddImmediateOffset(Instr instr, int16_t offset) { - ASSERT(IsAddImmediate(instr)); + DCHECK(IsAddImmediate(instr)); return ((instr & ~kImm16Mask) | (offset & kImm16Mask)); } @@ -670,7 +643,7 @@ int Assembler::target_at(int32_t pos) { } } // Check we have a branch or jump instruction. - ASSERT(IsBranch(instr) || IsJ(instr) || IsLui(instr)); + DCHECK(IsBranch(instr) || IsJ(instr) || IsLui(instr)); // Do NOT change this to <<2. We rely on arithmetic shifts here, assuming // the compiler uses arithmetic shifts for signed integers. if (IsBranch(instr)) { @@ -685,7 +658,7 @@ } else if (IsLui(instr)) { Instr instr_lui = instr_at(pos + 0 * Assembler::kInstrSize); Instr instr_ori = instr_at(pos + 1 * Assembler::kInstrSize); - ASSERT(IsOri(instr_ori)); + DCHECK(IsOri(instr_ori)); int32_t imm = (instr_lui & static_cast<int32_t>(kImm16Mask)) << kLuiShift; imm |= (instr_ori & static_cast<int32_t>(kImm16Mask)); @@ -695,7 +668,7 @@ } else { uint32_t instr_address = reinterpret_cast<int32_t>(buffer_ + pos); int32_t delta = instr_address - imm; - ASSERT(pos > delta); + DCHECK(pos > delta); return pos - delta; } } else { @@ -707,7 +680,7 @@ uint32_t instr_address = reinterpret_cast<int32_t>(buffer_ + pos); instr_address &= kImm28Mask; int32_t delta = instr_address - imm28; - ASSERT(pos > delta); + DCHECK(pos > delta); return pos - delta; } } @@ -717,29 +690,29 @@ int Assembler::target_at(int32_t pos) { void Assembler::target_at_put(int32_t pos, int32_t target_pos) { Instr instr = instr_at(pos); if ((instr & ~kImm16Mask) == 0) { - ASSERT(target_pos == kEndOfChain || target_pos >= 0); + DCHECK(target_pos == kEndOfChain || target_pos >= 0); // Emitted label constant, not part of a branch. // Make label relative to Code* of generated Code object.
instr_at_put(pos, target_pos + (Code::kHeaderSize - kHeapObjectTag)); return; } - ASSERT(IsBranch(instr) || IsJ(instr) || IsLui(instr)); + DCHECK(IsBranch(instr) || IsJ(instr) || IsLui(instr)); if (IsBranch(instr)) { int32_t imm18 = target_pos - (pos + kBranchPCOffset); - ASSERT((imm18 & 3) == 0); + DCHECK((imm18 & 3) == 0); instr &= ~kImm16Mask; int32_t imm16 = imm18 >> 2; - ASSERT(is_int16(imm16)); + DCHECK(is_int16(imm16)); instr_at_put(pos, instr | (imm16 & kImm16Mask)); } else if (IsLui(instr)) { Instr instr_lui = instr_at(pos + 0 * Assembler::kInstrSize); Instr instr_ori = instr_at(pos + 1 * Assembler::kInstrSize); - ASSERT(IsOri(instr_ori)); + DCHECK(IsOri(instr_ori)); uint32_t imm = reinterpret_cast<uint32_t>(buffer_) + target_pos; - ASSERT((imm & 3) == 0); + DCHECK((imm & 3) == 0); instr_lui &= ~kImm16Mask; instr_ori &= ~kImm16Mask; @@ -751,11 +724,11 @@ void Assembler::target_at_put(int32_t pos, int32_t target_pos) { } else { uint32_t imm28 = reinterpret_cast<uint32_t>(buffer_) + target_pos; imm28 &= kImm28Mask; - ASSERT((imm28 & 3) == 0); + DCHECK((imm28 & 3) == 0); instr &= ~kImm26Mask; uint32_t imm26 = imm28 >> 2; - ASSERT(is_uint26(imm26)); + DCHECK(is_uint26(imm26)); instr_at_put(pos, instr | (imm26 & kImm26Mask)); } @@ -787,7 +760,7 @@ void Assembler::print(Label* L) { void Assembler::bind_to(Label* L, int pos) { - ASSERT(0 <= pos && pos <= pc_offset()); // Must have valid binding position. + DCHECK(0 <= pos && pos <= pc_offset()); // Must have valid binding position. int32_t trampoline_pos = kInvalidSlotPos; if (L->is_linked() && !trampoline_emitted_) { unbound_labels_count_--; @@ -805,14 +778,14 @@ void Assembler::bind_to(Label* L, int pos) { trampoline_pos = get_trampoline_entry(fixup_pos); CHECK(trampoline_pos != kInvalidSlotPos); } - ASSERT((trampoline_pos - fixup_pos) <= kMaxBranchOffset); + DCHECK((trampoline_pos - fixup_pos) <= kMaxBranchOffset); target_at_put(fixup_pos, trampoline_pos); fixup_pos = trampoline_pos; dist = pos - fixup_pos; } target_at_put(fixup_pos, pos); } else { - ASSERT(IsJ(instr) || IsLui(instr) || IsEmittedConstant(instr)); + DCHECK(IsJ(instr) || IsLui(instr) || IsEmittedConstant(instr)); target_at_put(fixup_pos, pos); } } @@ -826,18 +799,18 @@ void Assembler::bind_to(Label* L, int pos) { void Assembler::bind(Label* L) { - ASSERT(!L->is_bound()); // Label can only be bound once. + DCHECK(!L->is_bound()); // Label can only be bound once. 
bind_to(L, pc_offset()); } void Assembler::next(Label* L) { - ASSERT(L->is_linked()); + DCHECK(L->is_linked()); int link = target_at(L->pos()); if (link == kEndOfChain) { L->Unuse(); } else { - ASSERT(link >= 0); + DCHECK(link >= 0); L->link_to(link); } } @@ -865,7 +838,7 @@ void Assembler::GenInstrRegister(Opcode opcode, Register rd, uint16_t sa, SecondaryField func) { - ASSERT(rd.is_valid() && rs.is_valid() && rt.is_valid() && is_uint5(sa)); + DCHECK(rd.is_valid() && rs.is_valid() && rt.is_valid() && is_uint5(sa)); Instr instr = opcode | (rs.code() << kRsShift) | (rt.code() << kRtShift) | (rd.code() << kRdShift) | (sa << kSaShift) | func; emit(instr); @@ -878,7 +851,7 @@ void Assembler::GenInstrRegister(Opcode opcode, uint16_t msb, uint16_t lsb, SecondaryField func) { - ASSERT(rs.is_valid() && rt.is_valid() && is_uint5(msb) && is_uint5(lsb)); + DCHECK(rs.is_valid() && rt.is_valid() && is_uint5(msb) && is_uint5(lsb)); Instr instr = opcode | (rs.code() << kRsShift) | (rt.code() << kRtShift) | (msb << kRdShift) | (lsb << kSaShift) | func; emit(instr); @@ -891,7 +864,7 @@ void Assembler::GenInstrRegister(Opcode opcode, FPURegister fs, FPURegister fd, SecondaryField func) { - ASSERT(fd.is_valid() && fs.is_valid() && ft.is_valid()); + DCHECK(fd.is_valid() && fs.is_valid() && ft.is_valid()); Instr instr = opcode | fmt | (ft.code() << kFtShift) | (fs.code() << kFsShift) | (fd.code() << kFdShift) | func; emit(instr); @@ -904,7 +877,7 @@ void Assembler::GenInstrRegister(Opcode opcode, FPURegister fs, FPURegister fd, SecondaryField func) { - ASSERT(fd.is_valid() && fr.is_valid() && fs.is_valid() && ft.is_valid()); + DCHECK(fd.is_valid() && fr.is_valid() && fs.is_valid() && ft.is_valid()); Instr instr = opcode | (fr.code() << kFrShift) | (ft.code() << kFtShift) | (fs.code() << kFsShift) | (fd.code() << kFdShift) | func; emit(instr); @@ -917,7 +890,7 @@ void Assembler::GenInstrRegister(Opcode opcode, FPURegister fs, FPURegister fd, SecondaryField func) { - ASSERT(fd.is_valid() && fs.is_valid() && rt.is_valid()); + DCHECK(fd.is_valid() && fs.is_valid() && rt.is_valid()); Instr instr = opcode | fmt | (rt.code() << kRtShift) | (fs.code() << kFsShift) | (fd.code() << kFdShift) | func; emit(instr); @@ -929,7 +902,7 @@ void Assembler::GenInstrRegister(Opcode opcode, Register rt, FPUControlRegister fs, SecondaryField func) { - ASSERT(fs.is_valid() && rt.is_valid()); + DCHECK(fs.is_valid() && rt.is_valid()); Instr instr = opcode | fmt | (rt.code() << kRtShift) | (fs.code() << kFsShift) | func; emit(instr); @@ -942,7 +915,7 @@ void Assembler::GenInstrImmediate(Opcode opcode, Register rs, Register rt, int32_t j) { - ASSERT(rs.is_valid() && rt.is_valid() && (is_int16(j) || is_uint16(j))); + DCHECK(rs.is_valid() && rt.is_valid() && (is_int16(j) || is_uint16(j))); Instr instr = opcode | (rs.code() << kRsShift) | (rt.code() << kRtShift) | (j & kImm16Mask); emit(instr); @@ -953,7 +926,7 @@ void Assembler::GenInstrImmediate(Opcode opcode, Register rs, SecondaryField SF, int32_t j) { - ASSERT(rs.is_valid() && (is_int16(j) || is_uint16(j))); + DCHECK(rs.is_valid() && (is_int16(j) || is_uint16(j))); Instr instr = opcode | (rs.code() << kRsShift) | SF | (j & kImm16Mask); emit(instr); } @@ -963,7 +936,7 @@ void Assembler::GenInstrImmediate(Opcode opcode, Register rs, FPURegister ft, int32_t j) { - ASSERT(rs.is_valid() && ft.is_valid() && (is_int16(j) || is_uint16(j))); + DCHECK(rs.is_valid() && ft.is_valid() && (is_int16(j) || is_uint16(j))); Instr instr = opcode | (rs.code() << kRsShift) | (ft.code() << kFtShift) | (j & 
kImm16Mask); emit(instr); @@ -973,7 +946,7 @@ void Assembler::GenInstrImmediate(Opcode opcode, void Assembler::GenInstrJump(Opcode opcode, uint32_t address) { BlockTrampolinePoolScope block_trampoline_pool(this); - ASSERT(is_uint26(address)); + DCHECK(is_uint26(address)); Instr instr = opcode | address; emit(instr); BlockTrampolinePoolFor(1); // For associated delay slot. @@ -1013,7 +986,7 @@ uint32_t Assembler::jump_address(Label* L) { } uint32_t imm = reinterpret_cast<uint32_t>(buffer_) + target_pos; - ASSERT((imm & 3) == 0); + DCHECK((imm & 3) == 0); return imm; } @@ -1039,8 +1012,8 @@ int32_t Assembler::branch_offset(Label* L, bool jump_elimination_allowed) { } int32_t offset = target_pos - (pc_offset() + kBranchPCOffset); - ASSERT((offset & 3) == 0); - ASSERT(is_int16(offset >> 2)); + DCHECK((offset & 3) == 0); + DCHECK(is_int16(offset >> 2)); return offset; } @@ -1055,9 +1028,9 @@ void Assembler::label_at_put(Label* L, int at_offset) { if (L->is_linked()) { target_pos = L->pos(); // L's link. int32_t imm18 = target_pos - at_offset; - ASSERT((imm18 & 3) == 0); + DCHECK((imm18 & 3) == 0); int32_t imm16 = imm18 >> 2; - ASSERT(is_int16(imm16)); + DCHECK(is_int16(imm16)); instr_at_put(at_offset, (imm16 & kImm16Mask)); } else { target_pos = kEndOfChain; @@ -1149,7 +1122,7 @@ void Assembler::j(int32_t target) { uint32_t ipc = reinterpret_cast<uint32_t>(pc_ + 1 * kInstrSize); bool in_range = (ipc ^ static_cast<uint32_t>(target) >> (kImm26Bits + kImmFieldShift)) == 0; - ASSERT(in_range && ((target & 3) == 0)); + DCHECK(in_range && ((target & 3) == 0)); #endif GenInstrJump(J, target >> 2); } @@ -1171,7 +1144,7 @@ void Assembler::jal(int32_t target) { uint32_t ipc = reinterpret_cast<uint32_t>(pc_ + 1 * kInstrSize); bool in_range = (ipc ^ static_cast<uint32_t>(target) >> (kImm26Bits + kImmFieldShift)) == 0; - ASSERT(in_range && ((target & 3) == 0)); + DCHECK(in_range && ((target & 3) == 0)); #endif positions_recorder()->WriteRecordedPositions(); GenInstrJump(JAL, target >> 2); @@ -1212,7 +1185,7 @@ void Assembler::jal_or_jalr(int32_t target, Register rs) { } -//-------Data-processing-instructions--------- +// -------Data-processing-instructions--------- // Arithmetic. @@ -1264,7 +1237,7 @@ void Assembler::and_(Register rd, Register rs, Register rt) { void Assembler::andi(Register rt, Register rs, int32_t j) { - ASSERT(is_uint16(j)); + DCHECK(is_uint16(j)); GenInstrImmediate(ANDI, rs, rt, j); } @@ -1275,7 +1248,7 @@ void Assembler::or_(Register rd, Register rs, Register rt) { void Assembler::ori(Register rt, Register rs, int32_t j) { - ASSERT(is_uint16(j)); + DCHECK(is_uint16(j)); GenInstrImmediate(ORI, rs, rt, j); } @@ -1286,7 +1259,7 @@ void Assembler::xor_(Register rd, Register rs, Register rt) { void Assembler::xori(Register rt, Register rs, int32_t j) { - ASSERT(is_uint16(j)); + DCHECK(is_uint16(j)); GenInstrImmediate(XORI, rs, rt, j); } @@ -1305,7 +1278,7 @@ void Assembler::sll(Register rd, // generated using the sll instruction. They must be generated using // nop(int/NopMarkerTypes) or MarkCode(int/NopMarkerTypes) pseudo // instructions. - ASSERT(coming_from_nop || !(rd.is(zero_reg) && rt.is(zero_reg))); + DCHECK(coming_from_nop || !(rd.is(zero_reg) && rt.is(zero_reg))); GenInstrRegister(SPECIAL, zero_reg, rt, rd, sa, SLL); } @@ -1337,8 +1310,8 @@ void Assembler::srav(Register rd, Register rt, Register rs) { void Assembler::rotr(Register rd, Register rt, uint16_t sa) { // Should be called via MacroAssembler::Ror. 
- ASSERT(rd.is_valid() && rt.is_valid() && is_uint5(sa)); - ASSERT(kArchVariant == kMips32r2); + DCHECK(rd.is_valid() && rt.is_valid() && is_uint5(sa)); + DCHECK(kArchVariant == kMips32r2); Instr instr = SPECIAL | (1 << kRsShift) | (rt.code() << kRtShift) | (rd.code() << kRdShift) | (sa << kSaShift) | SRL; emit(instr); @@ -1347,19 +1320,19 @@ void Assembler::rotr(Register rd, Register rt, uint16_t sa) { void Assembler::rotrv(Register rd, Register rt, Register rs) { // Should be called via MacroAssembler::Ror. - ASSERT(rd.is_valid() && rt.is_valid() && rs.is_valid() ); - ASSERT(kArchVariant == kMips32r2); + DCHECK(rd.is_valid() && rt.is_valid() && rs.is_valid() ); + DCHECK(kArchVariant == kMips32r2); Instr instr = SPECIAL | (rs.code() << kRsShift) | (rt.code() << kRtShift) | (rd.code() << kRdShift) | (1 << kSaShift) | SRLV; emit(instr); } -//------------Memory-instructions------------- +// ------------Memory-instructions------------- // Helper for base-reg + offset, when offset is larger than int16. void Assembler::LoadRegPlusOffsetToAt(const MemOperand& src) { - ASSERT(!src.rm().is(at)); + DCHECK(!src.rm().is(at)); lui(at, (src.offset_ >> kLuiShift) & kImm16Mask); ori(at, at, src.offset_ & kImm16Mask); // Load 32-bit offset. addu(at, at, src.rm()); // Add base register. @@ -1467,20 +1440,20 @@ void Assembler::swr(Register rd, const MemOperand& rs) { void Assembler::lui(Register rd, int32_t j) { - ASSERT(is_uint16(j)); + DCHECK(is_uint16(j)); GenInstrImmediate(LUI, zero_reg, rd, j); } -//-------------Misc-instructions-------------- +// -------------Misc-instructions-------------- // Break / Trap instructions. void Assembler::break_(uint32_t code, bool break_as_stop) { - ASSERT((code & ~0xfffff) == 0); + DCHECK((code & ~0xfffff) == 0); // We need to invalidate breaks that could be stops as well because the // simulator expects a char pointer after the stop instruction. // See constants-mips.h for explanation. 
- ASSERT((break_as_stop && + DCHECK((break_as_stop && code <= kMaxStopCode && code > kMaxWatchpointCode) || (!break_as_stop && @@ -1492,8 +1465,8 @@ void Assembler::break_(uint32_t code, bool break_as_stop) { void Assembler::stop(const char* msg, uint32_t code) { - ASSERT(code > kMaxWatchpointCode); - ASSERT(code <= kMaxStopCode); + DCHECK(code > kMaxWatchpointCode); + DCHECK(code <= kMaxStopCode); #if V8_HOST_ARCH_MIPS break_(0x54321); #else // V8_HOST_ARCH_MIPS @@ -1507,7 +1480,7 @@ void Assembler::stop(const char* msg, uint32_t code) { void Assembler::tge(Register rs, Register rt, uint16_t code) { - ASSERT(is_uint10(code)); + DCHECK(is_uint10(code)); Instr instr = SPECIAL | TGE | rs.code() << kRsShift | rt.code() << kRtShift | code << 6; emit(instr); @@ -1515,7 +1488,7 @@ void Assembler::tge(Register rs, Register rt, uint16_t code) { void Assembler::tgeu(Register rs, Register rt, uint16_t code) { - ASSERT(is_uint10(code)); + DCHECK(is_uint10(code)); Instr instr = SPECIAL | TGEU | rs.code() << kRsShift | rt.code() << kRtShift | code << 6; emit(instr); @@ -1523,7 +1496,7 @@ void Assembler::tgeu(Register rs, Register rt, uint16_t code) { void Assembler::tlt(Register rs, Register rt, uint16_t code) { - ASSERT(is_uint10(code)); + DCHECK(is_uint10(code)); Instr instr = SPECIAL | TLT | rs.code() << kRsShift | rt.code() << kRtShift | code << 6; emit(instr); @@ -1531,7 +1504,7 @@ void Assembler::tlt(Register rs, Register rt, uint16_t code) { void Assembler::tltu(Register rs, Register rt, uint16_t code) { - ASSERT(is_uint10(code)); + DCHECK(is_uint10(code)); Instr instr = SPECIAL | TLTU | rs.code() << kRsShift | rt.code() << kRtShift | code << 6; @@ -1540,7 +1513,7 @@ void Assembler::tltu(Register rs, Register rt, uint16_t code) { void Assembler::teq(Register rs, Register rt, uint16_t code) { - ASSERT(is_uint10(code)); + DCHECK(is_uint10(code)); Instr instr = SPECIAL | TEQ | rs.code() << kRsShift | rt.code() << kRtShift | code << 6; emit(instr); @@ -1548,7 +1521,7 @@ void Assembler::teq(Register rs, Register rt, uint16_t code) { void Assembler::tne(Register rs, Register rt, uint16_t code) { - ASSERT(is_uint10(code)); + DCHECK(is_uint10(code)); Instr instr = SPECIAL | TNE | rs.code() << kRsShift | rt.code() << kRtShift | code << 6; emit(instr); @@ -1623,7 +1596,7 @@ void Assembler::clz(Register rd, Register rs) { void Assembler::ins_(Register rt, Register rs, uint16_t pos, uint16_t size) { // Should be called via MacroAssembler::Ins. // Ins instr has 'rt' field as dest, and two uint5: msb, lsb. - ASSERT(kArchVariant == kMips32r2); + DCHECK(kArchVariant == kMips32r2); GenInstrRegister(SPECIAL3, rs, rt, pos + size - 1, pos, INS); } @@ -1631,21 +1604,21 @@ void Assembler::ins_(Register rt, Register rs, uint16_t pos, uint16_t size) { void Assembler::ext_(Register rt, Register rs, uint16_t pos, uint16_t size) { // Should be called via MacroAssembler::Ext. // Ext instr has 'rt' field as dest, and two uint5: msb, lsb. - ASSERT(kArchVariant == kMips32r2); + DCHECK(kArchVariant == kMips32r2); GenInstrRegister(SPECIAL3, rs, rt, size - 1, pos, EXT); } void Assembler::pref(int32_t hint, const MemOperand& rs) { - ASSERT(kArchVariant != kLoongson); - ASSERT(is_uint5(hint) && is_uint16(rs.offset_)); + DCHECK(kArchVariant != kLoongson); + DCHECK(is_uint5(hint) && is_uint16(rs.offset_)); Instr instr = PREF | (rs.rm().code() << kRsShift) | (hint << kRtShift) | (rs.offset_); emit(instr); } -//--------Coprocessor-instructions---------------- +// --------Coprocessor-instructions---------------- // Load, store, move. 
void Assembler::lwc1(FPURegister fd, const MemOperand& src) { @@ -1704,7 +1677,7 @@ void Assembler::cfc1(Register rt, FPUControlRegister fs) { void Assembler::DoubleAsTwoUInt32(double d, uint32_t* lo, uint32_t* hi) { uint64_t i; - OS::MemCopy(&i, &d, 8); + memcpy(&i, &d, 8); *lo = i & 0xffffffff; *hi = i >> 32; @@ -1812,25 +1785,25 @@ void Assembler::ceil_w_d(FPURegister fd, FPURegister fs) { void Assembler::cvt_l_s(FPURegister fd, FPURegister fs) { - ASSERT(kArchVariant == kMips32r2); + DCHECK(kArchVariant == kMips32r2); GenInstrRegister(COP1, S, f0, fs, fd, CVT_L_S); } void Assembler::cvt_l_d(FPURegister fd, FPURegister fs) { - ASSERT(kArchVariant == kMips32r2); + DCHECK(kArchVariant == kMips32r2); GenInstrRegister(COP1, D, f0, fs, fd, CVT_L_D); } void Assembler::trunc_l_s(FPURegister fd, FPURegister fs) { - ASSERT(kArchVariant == kMips32r2); + DCHECK(kArchVariant == kMips32r2); GenInstrRegister(COP1, S, f0, fs, fd, TRUNC_L_S); } void Assembler::trunc_l_d(FPURegister fd, FPURegister fs) { - ASSERT(kArchVariant == kMips32r2); + DCHECK(kArchVariant == kMips32r2); GenInstrRegister(COP1, D, f0, fs, fd, TRUNC_L_D); } @@ -1871,7 +1844,7 @@ void Assembler::cvt_s_w(FPURegister fd, FPURegister fs) { void Assembler::cvt_s_l(FPURegister fd, FPURegister fs) { - ASSERT(kArchVariant == kMips32r2); + DCHECK(kArchVariant == kMips32r2); GenInstrRegister(COP1, L, f0, fs, fd, CVT_S_L); } @@ -1887,7 +1860,7 @@ void Assembler::cvt_d_w(FPURegister fd, FPURegister fs) { void Assembler::cvt_d_l(FPURegister fd, FPURegister fs) { - ASSERT(kArchVariant == kMips32r2); + DCHECK(kArchVariant == kMips32r2); GenInstrRegister(COP1, L, f0, fs, fd, CVT_D_L); } @@ -1900,8 +1873,8 @@ void Assembler::cvt_d_s(FPURegister fd, FPURegister fs) { // Conditions. void Assembler::c(FPUCondition cond, SecondaryField fmt, FPURegister fs, FPURegister ft, uint16_t cc) { - ASSERT(is_uint3(cc)); - ASSERT((fmt & ~(31 << kRsShift)) == 0); + DCHECK(is_uint3(cc)); + DCHECK((fmt & ~(31 << kRsShift)) == 0); Instr instr = COP1 | fmt | ft.code() << 16 | fs.code() << kFsShift | cc << 8 | 3 << 4 | cond; emit(instr); @@ -1910,7 +1883,7 @@ void Assembler::c(FPUCondition cond, SecondaryField fmt, void Assembler::fcmp(FPURegister src1, const double src2, FPUCondition cond) { - ASSERT(src2 == 0.0); + DCHECK(src2 == 0.0); mtc1(zero_reg, f14); cvt_d_w(f14, f14); c(cond, D, src1, f14, 0); @@ -1918,14 +1891,14 @@ void Assembler::fcmp(FPURegister src1, const double src2, void Assembler::bc1f(int16_t offset, uint16_t cc) { - ASSERT(is_uint3(cc)); + DCHECK(is_uint3(cc)); Instr instr = COP1 | BC1 | cc << 18 | 0 << 16 | (offset & kImm16Mask); emit(instr); } void Assembler::bc1t(int16_t offset, uint16_t cc) { - ASSERT(is_uint3(cc)); + DCHECK(is_uint3(cc)); Instr instr = COP1 | BC1 | cc << 18 | 1 << 16 | (offset & kImm16Mask); emit(instr); } @@ -1956,18 +1929,18 @@ void Assembler::RecordComment(const char* msg) { int Assembler::RelocateInternalReference(byte* pc, intptr_t pc_delta) { Instr instr = instr_at(pc); - ASSERT(IsJ(instr) || IsLui(instr)); + DCHECK(IsJ(instr) || IsLui(instr)); if (IsLui(instr)) { Instr instr_lui = instr_at(pc + 0 * Assembler::kInstrSize); Instr instr_ori = instr_at(pc + 1 * Assembler::kInstrSize); - ASSERT(IsOri(instr_ori)); + DCHECK(IsOri(instr_ori)); int32_t imm = (instr_lui & static_cast<int32_t>(kImm16Mask)) << kLuiShift; imm |= (instr_ori & static_cast<int32_t>(kImm16Mask)); if (imm == kEndOfJumpChain) { return 0; // Number of instructions patched. 
} imm += pc_delta; - ASSERT((imm & 3) == 0); + DCHECK((imm & 3) == 0); instr_lui &= ~kImm16Mask; instr_ori &= ~kImm16Mask; @@ -1984,11 +1957,11 @@ int Assembler::RelocateInternalReference(byte* pc, intptr_t pc_delta) { } imm28 += pc_delta; imm28 &= kImm28Mask; - ASSERT((imm28 & 3) == 0); + DCHECK((imm28 & 3) == 0); instr &= ~kImm26Mask; uint32_t imm26 = imm28 >> 2; - ASSERT(is_uint26(imm26)); + DCHECK(is_uint26(imm26)); instr_at_put(pc, instr | (imm26 & kImm26Mask)); return 1; // Number of instructions patched. @@ -2001,9 +1974,7 @@ void Assembler::GrowBuffer() { // Compute new buffer size. CodeDesc desc; // The new buffer. - if (buffer_size_ < 4*KB) { - desc.buffer_size = 4*KB; - } else if (buffer_size_ < 1*MB) { + if (buffer_size_ < 1 * MB) { desc.buffer_size = 2*buffer_size_; } else { desc.buffer_size = buffer_size_ + 1*MB; @@ -2019,9 +1990,9 @@ void Assembler::GrowBuffer() { // Copy the data. int pc_delta = desc.buffer - buffer_; int rc_delta = (desc.buffer + desc.buffer_size) - (buffer_ + buffer_size_); - OS::MemMove(desc.buffer, buffer_, desc.instr_size); - OS::MemMove(reloc_info_writer.pos() + rc_delta, - reloc_info_writer.pos(), desc.reloc_size); + MemMove(desc.buffer, buffer_, desc.instr_size); + MemMove(reloc_info_writer.pos() + rc_delta, reloc_info_writer.pos(), + desc.reloc_size); // Switch buffers. DeleteArray(buffer_); @@ -2040,7 +2011,7 @@ void Assembler::GrowBuffer() { } } - ASSERT(!overflow()); + DCHECK(!overflow()); } @@ -2071,7 +2042,7 @@ void Assembler::RecordRelocInfo(RelocInfo::Mode rmode, intptr_t data) { RelocInfo rinfo(pc_, rmode, data, NULL); if (rmode >= RelocInfo::JS_RETURN && rmode <= RelocInfo::DEBUG_BREAK_SLOT) { // Adjust code for new modes. - ASSERT(RelocInfo::IsDebugBreakSlot(rmode) + DCHECK(RelocInfo::IsDebugBreakSlot(rmode) || RelocInfo::IsJSReturn(rmode) || RelocInfo::IsComment(rmode) || RelocInfo::IsPosition(rmode)); @@ -2079,12 +2050,11 @@ void Assembler::RecordRelocInfo(RelocInfo::Mode rmode, intptr_t data) { } if (!RelocInfo::IsNone(rinfo.rmode())) { // Don't record external references unless the heap will be serialized. - if (rmode == RelocInfo::EXTERNAL_REFERENCE) { - if (!Serializer::enabled(isolate()) && !emit_debug_code()) { - return; - } + if (rmode == RelocInfo::EXTERNAL_REFERENCE && + !serializer_enabled() && !emit_debug_code()) { + return; } - ASSERT(buffer_space() >= kMaxRelocSize); // Too late to grow buffer here. + DCHECK(buffer_space() >= kMaxRelocSize); // Too late to grow buffer here. if (rmode == RelocInfo::CODE_TARGET_WITH_ID) { RelocInfo reloc_info_with_ast_id(pc_, rmode, @@ -2122,8 +2092,8 @@ void Assembler::CheckTrampolinePool() { return; } - ASSERT(!trampoline_emitted_); - ASSERT(unbound_labels_count_ >= 0); + DCHECK(!trampoline_emitted_); + DCHECK(unbound_labels_count_ >= 0); if (unbound_labels_count_ > 0) { // First we emit jump (2 instructions), then we emit trampoline pool. { BlockTrampolinePoolScope block_trampoline_pool(this); @@ -2185,7 +2155,7 @@ Address Assembler::target_address_at(Address pc) { // snapshot generated on ia32, the resulting MIPS sNaN must be quieted. // OS::nan_value() returns a qNaN. void Assembler::QuietNaN(HeapObject* object) { - HeapNumber::cast(object)->set_value(OS::nan_value()); + HeapNumber::cast(object)->set_value(base::OS::nan_value()); } @@ -2196,7 +2166,9 @@ void Assembler::QuietNaN(HeapObject* object) { // There is an optimization below, which emits a nop when the address // fits in just 16 bits. This is unlikely to help, and should be benchmarked, // and possibly removed. 
-void Assembler::set_target_address_at(Address pc, Address target) { +void Assembler::set_target_address_at(Address pc, + Address target, + ICacheFlushMode icache_flush_mode) { Instr instr2 = instr_at(pc + kInstrSize); uint32_t rt_code = GetRtField(instr2); uint32_t* p = reinterpret_cast<uint32_t*>(pc); @@ -2290,7 +2262,9 @@ void Assembler::set_target_address_at(Address pc, Address target) { patched_jump = true; } - CPU::FlushICache(pc, (patched_jump ? 3 : 2) * sizeof(int32_t)); + if (icache_flush_mode != SKIP_ICACHE_FLUSH) { + CpuFeatures::FlushICache(pc, (patched_jump ? 3 : 2) * sizeof(int32_t)); + } } @@ -2306,16 +2280,16 @@ void Assembler::JumpLabelToJumpRegister(Address pc) { bool patched = false; if (IsJal(instr3)) { - ASSERT(GetOpcodeField(instr1) == LUI); - ASSERT(GetOpcodeField(instr2) == ORI); + DCHECK(GetOpcodeField(instr1) == LUI); + DCHECK(GetOpcodeField(instr2) == ORI); uint32_t rs_field = GetRt(instr2) << kRsShift; uint32_t rd_field = ra.code() << kRdShift; // Return-address (ra) reg. *(p+2) = SPECIAL | rs_field | rd_field | JALR; patched = true; } else if (IsJ(instr3)) { - ASSERT(GetOpcodeField(instr1) == LUI); - ASSERT(GetOpcodeField(instr2) == ORI); + DCHECK(GetOpcodeField(instr1) == LUI); + DCHECK(GetOpcodeField(instr2) == ORI); uint32_t rs_field = GetRt(instr2) << kRsShift; *(p+2) = SPECIAL | rs_field | JR; @@ -2323,21 +2297,21 @@ void Assembler::JumpLabelToJumpRegister(Address pc) { } if (patched) { - CPU::FlushICache(pc+2, sizeof(Address)); + CpuFeatures::FlushICache(pc+2, sizeof(Address)); } } Handle<ConstantPoolArray> Assembler::NewConstantPool(Isolate* isolate) { // No out-of-line constant pool support. - ASSERT(!FLAG_enable_ool_constant_pool); + DCHECK(!FLAG_enable_ool_constant_pool); return isolate->factory()->empty_constant_pool_array(); } void Assembler::PopulateConstantPool(ConstantPoolArray* constant_pool) { // No out-of-line constant pool support. - ASSERT(!FLAG_enable_ool_constant_pool); + DCHECK(!FLAG_enable_ool_constant_pool); return; } diff --git a/deps/v8/src/mips/assembler-mips.h b/deps/v8/src/mips/assembler-mips.h index 860097c8aff..8469c1ca182 100644 --- a/deps/v8/src/mips/assembler-mips.h +++ b/deps/v8/src/mips/assembler-mips.h @@ -38,9 +38,9 @@ #include <stdio.h> -#include "assembler.h" -#include "constants-mips.h" -#include "serialize.h" +#include "src/assembler.h" +#include "src/mips/constants-mips.h" +#include "src/serialize.h" namespace v8 { namespace internal { @@ -90,7 +90,7 @@ struct Register { inline static int NumAllocatableRegisters(); static int ToAllocationIndex(Register reg) { - ASSERT((reg.code() - 2) < (kMaxNumAllocatableRegisters - 1) || + DCHECK((reg.code() - 2) < (kMaxNumAllocatableRegisters - 1) || reg.is(from_code(kCpRegister))); return reg.is(from_code(kCpRegister)) ? kMaxNumAllocatableRegisters - 1 : // Return last index for 'cp'. @@ -98,14 +98,14 @@ struct Register { } static Register FromAllocationIndex(int index) { - ASSERT(index >= 0 && index < kMaxNumAllocatableRegisters); + DCHECK(index >= 0 && index < kMaxNumAllocatableRegisters); return index == kMaxNumAllocatableRegisters - 1 ? from_code(kCpRegister) : // Last index is always the 'cp' register. from_code(index + 2); // zero_reg and 'at' are skipped. 
} static const char* AllocationIndexToString(int index) { - ASSERT(index >= 0 && index < kMaxNumAllocatableRegisters); + DCHECK(index >= 0 && index < kMaxNumAllocatableRegisters); const char* const names[] = { "v0", "v1", @@ -133,11 +133,11 @@ struct Register { bool is_valid() const { return 0 <= code_ && code_ < kNumRegisters; } bool is(Register reg) const { return code_ == reg.code_; } int code() const { - ASSERT(is_valid()); + DCHECK(is_valid()); return code_; } int bit() const { - ASSERT(is_valid()); + DCHECK(is_valid()); return 1 << code_; } @@ -226,7 +226,7 @@ struct FPURegister { static const char* AllocationIndexToString(int index); static FPURegister FromAllocationIndex(int index) { - ASSERT(index >= 0 && index < kMaxNumAllocatableRegisters); + DCHECK(index >= 0 && index < kMaxNumAllocatableRegisters); return from_code(index * 2); } @@ -239,32 +239,32 @@ struct FPURegister { bool is(FPURegister creg) const { return code_ == creg.code_; } FPURegister low() const { // Find low reg of a Double-reg pair, which is the reg itself. - ASSERT(code_ % 2 == 0); // Specified Double reg must be even. + DCHECK(code_ % 2 == 0); // Specified Double reg must be even. FPURegister reg; reg.code_ = code_; - ASSERT(reg.is_valid()); + DCHECK(reg.is_valid()); return reg; } FPURegister high() const { // Find high reg of a Doubel-reg pair, which is reg + 1. - ASSERT(code_ % 2 == 0); // Specified Double reg must be even. + DCHECK(code_ % 2 == 0); // Specified Double reg must be even. FPURegister reg; reg.code_ = code_ + 1; - ASSERT(reg.is_valid()); + DCHECK(reg.is_valid()); return reg; } int code() const { - ASSERT(is_valid()); + DCHECK(is_valid()); return code_; } int bit() const { - ASSERT(is_valid()); + DCHECK(is_valid()); return 1 << code_; } void setcode(int f) { code_ = f; - ASSERT(is_valid()); + DCHECK(is_valid()); } // Unfortunately we can't make this private in a struct. int code_; @@ -335,16 +335,16 @@ struct FPUControlRegister { bool is_valid() const { return code_ == kFCSRRegister; } bool is(FPUControlRegister creg) const { return code_ == creg.code_; } int code() const { - ASSERT(is_valid()); + DCHECK(is_valid()); return code_; } int bit() const { - ASSERT(is_valid()); + DCHECK(is_valid()); return 1 << code_; } void setcode(int f) { code_ = f; - ASSERT(is_valid()); + DCHECK(is_valid()); } // Unfortunately we can't make this private in a struct. int code_; @@ -377,7 +377,7 @@ class Operand BASE_EMBEDDED { INLINE(bool is_reg() const); inline int32_t immediate() const { - ASSERT(!is_reg()); + DCHECK(!is_reg()); return imm32_; } @@ -419,65 +419,6 @@ class MemOperand : public Operand { }; -// CpuFeatures keeps track of which features are supported by the target CPU. -// Supported features must be enabled by a CpuFeatureScope before use. -class CpuFeatures : public AllStatic { - public: - // Detect features of the target CPU. Set safe defaults if the serializer - // is enabled (snapshots must be portable). - static void Probe(bool serializer_enabled); - - // A special case for printing target and features, which we want to do - // before initializing the isolate - - // Check whether a feature is supported by the target CPU. 
- static bool IsSupported(CpuFeature f) { - ASSERT(initialized_); - return Check(f, supported_); - } - - static bool IsSafeForSnapshot(Isolate* isolate, CpuFeature f) { - return Check(f, cross_compile_) || - (IsSupported(f) && - !(Serializer::enabled(isolate) && - Check(f, found_by_runtime_probing_only_))); - } - - static bool VerifyCrossCompiling() { - return cross_compile_ == 0; - } - - static bool VerifyCrossCompiling(CpuFeature f) { - unsigned mask = flag2set(f); - return cross_compile_ == 0 || - (cross_compile_ & mask) == mask; - } - - static bool SupportsCrankshaft() { return CpuFeatures::IsSupported(FPU); } - - private: - static bool Check(CpuFeature f, unsigned set) { - return (set & flag2set(f)) != 0; - } - - static unsigned flag2set(CpuFeature f) { - return 1u << f; - } - -#ifdef DEBUG - static bool initialized_; -#endif - static unsigned supported_; - static unsigned found_by_runtime_probing_only_; - - static unsigned cross_compile_; - - friend class ExternalReference; - friend class PlatformFeatureScope; - DISALLOW_COPY_AND_ASSIGN(CpuFeatures); -}; - - class Assembler : public AssemblerBase { public: // Create an assembler. Instructions and relocation information are emitted @@ -526,7 +467,7 @@ class Assembler : public AssemblerBase { int32_t branch_offset(Label* L, bool jump_elimination_allowed); int32_t shifted_branch_offset(Label* L, bool jump_elimination_allowed) { int32_t o = branch_offset(L, jump_elimination_allowed); - ASSERT((o & 3) == 0); // Assert the offset is aligned. + DCHECK((o & 3) == 0); // Assert the offset is aligned. return o >> 2; } uint32_t jump_address(Label* L); @@ -537,7 +478,10 @@ class Assembler : public AssemblerBase { // Read/Modify the code target address in the branch/call instruction at pc. static Address target_address_at(Address pc); - static void set_target_address_at(Address pc, Address target); + static void set_target_address_at(Address pc, + Address target, + ICacheFlushMode icache_flush_mode = + FLUSH_ICACHE_IF_NEEDED); // On MIPS there is no Constant Pool so we skip that parameter. INLINE(static Address target_address_at(Address pc, ConstantPoolArray* constant_pool)) { @@ -545,8 +489,10 @@ class Assembler : public AssemblerBase { } INLINE(static void set_target_address_at(Address pc, ConstantPoolArray* constant_pool, - Address target)) { - set_target_address_at(pc, target); + Address target, + ICacheFlushMode icache_flush_mode = + FLUSH_ICACHE_IF_NEEDED)) { + set_target_address_at(pc, target, icache_flush_mode); } INLINE(static Address target_address_at(Address pc, Code* code)) { ConstantPoolArray* constant_pool = code ? code->constant_pool() : NULL; @@ -554,15 +500,20 @@ class Assembler : public AssemblerBase { } INLINE(static void set_target_address_at(Address pc, Code* code, - Address target)) { + Address target, + ICacheFlushMode icache_flush_mode = + FLUSH_ICACHE_IF_NEEDED)) { ConstantPoolArray* constant_pool = code ? code->constant_pool() : NULL; - set_target_address_at(pc, constant_pool, target); + set_target_address_at(pc, constant_pool, target, icache_flush_mode); } // Return the code target address at a call site from the return address // of that call in the instruction stream. 
inline static Address target_address_from_return_address(Address pc); + // Return the code target address of the patch debug break slot + inline static Address break_address_from_return_address(Address pc); + static void JumpLabelToJumpRegister(Address pc); static void QuietNaN(HeapObject* nan); @@ -658,7 +609,7 @@ class Assembler : public AssemblerBase { // sll(zero_reg, zero_reg, 0). We use rt_reg == at for non-zero // marking, to avoid conflict with ssnop and ehb instructions. void nop(unsigned int type = 0) { - ASSERT(type < 32); + DCHECK(type < 32); Register nop_rt_reg = (type == 0) ? zero_reg : at; sll(zero_reg, nop_rt_reg, type, true); } @@ -698,7 +649,7 @@ class Assembler : public AssemblerBase { void jal_or_jalr(int32_t target, Register rs); - //-------Data-processing-instructions--------- + // -------Data-processing-instructions--------- // Arithmetic. void addu(Register rd, Register rs, Register rt); @@ -736,7 +687,7 @@ class Assembler : public AssemblerBase { void rotrv(Register rd, Register rt, Register rs); - //------------Memory-instructions------------- + // ------------Memory-instructions------------- void lb(Register rd, const MemOperand& rs); void lbu(Register rd, const MemOperand& rs); @@ -752,12 +703,12 @@ class Assembler : public AssemblerBase { void swr(Register rd, const MemOperand& rs); - //----------------Prefetch-------------------- + // ----------------Prefetch-------------------- void pref(int32_t hint, const MemOperand& rs); - //-------------Misc-instructions-------------- + // -------------Misc-instructions-------------- // Break / Trap instructions. void break_(uint32_t code, bool break_as_stop = false); @@ -790,7 +741,7 @@ class Assembler : public AssemblerBase { void ins_(Register rt, Register rs, uint16_t pos, uint16_t size); void ext_(Register rt, Register rs, uint16_t pos, uint16_t size); - //--------Coprocessor-instructions---------------- + // --------Coprocessor-instructions---------------- // Load, store, and move. void lwc1(FPURegister fd, const MemOperand& src); @@ -896,10 +847,10 @@ class Assembler : public AssemblerBase { assem_->EndBlockGrowBuffer(); } - private: - Assembler* assem_; + private: + Assembler* assem_; - DISALLOW_IMPLICIT_CONSTRUCTORS(BlockGrowBufferScope); + DISALLOW_IMPLICIT_CONSTRUCTORS(BlockGrowBufferScope); }; // Debugging. @@ -913,12 +864,12 @@ class Assembler : public AssemblerBase { // Record the AST id of the CallIC being compiled, so that it can be placed // in the relocation information. void SetRecordedAstId(TypeFeedbackId ast_id) { - ASSERT(recorded_ast_id_.IsNone()); + DCHECK(recorded_ast_id_.IsNone()); recorded_ast_id_ = ast_id; } TypeFeedbackId RecordedAstId() { - ASSERT(!recorded_ast_id_.IsNone()); + DCHECK(!recorded_ast_id_.IsNone()); return recorded_ast_id_; } @@ -1073,12 +1024,12 @@ class Assembler : public AssemblerBase { // Temporarily block automatic assembly buffer growth. void StartBlockGrowBuffer() { - ASSERT(!block_buffer_growth_); + DCHECK(!block_buffer_growth_); block_buffer_growth_ = true; } void EndBlockGrowBuffer() { - ASSERT(block_buffer_growth_); + DCHECK(block_buffer_growth_); block_buffer_growth_ = false; } @@ -1240,7 +1191,7 @@ class Assembler : public AssemblerBase { // We have run out of space on trampolines. // Make sure we fail in debug mode, so we become aware of each case // when this happens. - ASSERT(0); + DCHECK(0); // Internal exception will be caught. 
} else { trampoline_slot = next_slot_; diff --git a/deps/v8/src/mips/builtins-mips.cc b/deps/v8/src/mips/builtins-mips.cc index fdd062b6b69..462ef675c95 100644 --- a/deps/v8/src/mips/builtins-mips.cc +++ b/deps/v8/src/mips/builtins-mips.cc @@ -4,16 +4,16 @@ -#include "v8.h" +#include "src/v8.h" #if V8_TARGET_ARCH_MIPS -#include "codegen.h" -#include "debug.h" -#include "deoptimizer.h" -#include "full-codegen.h" -#include "runtime.h" -#include "stub-cache.h" +#include "src/codegen.h" +#include "src/debug.h" +#include "src/deoptimizer.h" +#include "src/full-codegen.h" +#include "src/runtime.h" +#include "src/stub-cache.h" namespace v8 { namespace internal { @@ -42,7 +42,7 @@ void Builtins::Generate_Adaptor(MacroAssembler* masm, num_extra_args = 1; __ push(a1); } else { - ASSERT(extra_args == NO_EXTRA_ARGUMENTS); + DCHECK(extra_args == NO_EXTRA_ARGUMENTS); } // JumpToExternalReference expects s0 to contain the number of arguments @@ -309,7 +309,7 @@ void Builtins::Generate_InOptimizationQueue(MacroAssembler* masm) { __ LoadRoot(t0, Heap::kStackLimitRootIndex); __ Branch(&ok, hs, sp, Operand(t0)); - CallRuntimePassFunction(masm, Runtime::kHiddenTryInstallOptimizedCode); + CallRuntimePassFunction(masm, Runtime::kTryInstallOptimizedCode); GenerateTailCallToReturnedCode(masm); __ bind(&ok); @@ -319,7 +319,6 @@ void Builtins::Generate_InOptimizationQueue(MacroAssembler* masm) { static void Generate_JSConstructStubHelper(MacroAssembler* masm, bool is_api_function, - bool count_constructions, bool create_memento) { // ----------- S t a t e ------------- // -- a0 : number of arguments @@ -329,14 +328,8 @@ static void Generate_JSConstructStubHelper(MacroAssembler* masm, // -- sp[...]: constructor arguments // ----------------------------------- - // Should never count constructions for api objects. - ASSERT(!is_api_function || !count_constructions); - // Should never create mementos for api functions. - ASSERT(!is_api_function || !create_memento); - - // Should never create mementos before slack tracking is finished. - ASSERT(!count_constructions || !create_memento); + DCHECK(!is_api_function || !create_memento); Isolate* isolate = masm->isolate(); @@ -360,9 +353,6 @@ static void Generate_JSConstructStubHelper(MacroAssembler* masm, __ sll(a0, a0, kSmiTagSize); // Tag arguments count. __ MultiPushReversed(a0.bit() | a1.bit()); - // Use t7 to hold undefined, which is used in several places below. - __ LoadRoot(t7, Heap::kUndefinedValueRootIndex); - Label rt_call, allocated; // Try to allocate the object without transitioning into C code. If any of // the preconditions is not met, the code bails out to the runtime call. @@ -389,22 +379,26 @@ static void Generate_JSConstructStubHelper(MacroAssembler* masm, __ lbu(a3, FieldMemOperand(a2, Map::kInstanceTypeOffset)); __ Branch(&rt_call, eq, a3, Operand(JS_FUNCTION_TYPE)); - if (count_constructions) { + if (!is_api_function) { Label allocate; + MemOperand bit_field3 = FieldMemOperand(a2, Map::kBitField3Offset); + // Check if slack tracking is enabled. + __ lw(t0, bit_field3); + __ DecodeField<Map::ConstructionCount>(t2, t0); + __ Branch(&allocate, eq, t2, Operand(JSFunction::kNoSlackTracking)); // Decrease generous allocation count. 
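Note the data-structure change here: the old code kept a one-byte construction counter on the SharedFunctionInfo (kConstructionCountOffset, removed in the hunk below), while the new code packs the slack-tracking state into the map's bit field 3 and reads it with DecodeField<Map::ConstructionCount>. A sketch of the decode and decrement, assuming V8's usual BitField encoding where kMask is already shifted into position (the map->bit_field3() accessor name is likewise an assumption):

    // Read the counter out of the packed word.
    uint32_t bit_field3 = map->bit_field3();
    int count = (bit_field3 & Map::ConstructionCount::kMask) >>
                Map::ConstructionCount::kShift;
    // Decrement it in place; this is what the Subu of
    // 1 << Map::ConstructionCount::kShift below does on t0.
    bit_field3 -= 1 << Map::ConstructionCount::kShift;

kNoSlackTracking and kFinishSlackTracking act as sentinel counter values, so a single compare decides whether tracking is disabled or has just finished.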
- __ lw(a3, FieldMemOperand(a1, JSFunction::kSharedFunctionInfoOffset)); - MemOperand constructor_count = - FieldMemOperand(a3, SharedFunctionInfo::kConstructionCountOffset); - __ lbu(t0, constructor_count); - __ Subu(t0, t0, Operand(1)); - __ sb(t0, constructor_count); - __ Branch(&allocate, ne, t0, Operand(zero_reg)); + __ Subu(t0, t0, Operand(1 << Map::ConstructionCount::kShift)); + __ Branch(USE_DELAY_SLOT, + &allocate, ne, t2, Operand(JSFunction::kFinishSlackTracking)); + __ sw(t0, bit_field3); // In delay slot. __ Push(a1, a2, a1); // a1 = Constructor. - // The call will replace the stub, so the countdown is only done once. - __ CallRuntime(Runtime::kHiddenFinalizeInstanceSize, 1); + __ CallRuntime(Runtime::kFinalizeInstanceSize, 1); __ Pop(a1, a2); + // Slack tracking counter is kNoSlackTracking after runtime call. + DCHECK(JSFunction::kNoSlackTracking == 0); + __ mov(t2, zero_reg); __ bind(&allocate); } @@ -431,9 +425,9 @@ static void Generate_JSConstructStubHelper(MacroAssembler* masm, __ sw(t6, MemOperand(t5, JSObject::kPropertiesOffset)); __ sw(t6, MemOperand(t5, JSObject::kElementsOffset)); __ Addu(t5, t5, Operand(3*kPointerSize)); - ASSERT_EQ(0 * kPointerSize, JSObject::kMapOffset); - ASSERT_EQ(1 * kPointerSize, JSObject::kPropertiesOffset); - ASSERT_EQ(2 * kPointerSize, JSObject::kElementsOffset); + DCHECK_EQ(0 * kPointerSize, JSObject::kMapOffset); + DCHECK_EQ(1 * kPointerSize, JSObject::kPropertiesOffset); + DCHECK_EQ(2 * kPointerSize, JSObject::kElementsOffset); // Fill all the in-object properties with appropriate filler. // a1: constructor function @@ -441,44 +435,56 @@ static void Generate_JSConstructStubHelper(MacroAssembler* masm, // a3: object size (in words, including memento if create_memento) // t4: JSObject (not tagged) // t5: First in-object property of JSObject (not tagged) - ASSERT_EQ(3 * kPointerSize, JSObject::kHeaderSize); + // t2: slack tracking counter (non-API function case) + DCHECK_EQ(3 * kPointerSize, JSObject::kHeaderSize); + + // Use t7 to hold undefined, which is used in several places below. + __ LoadRoot(t7, Heap::kUndefinedValueRootIndex); + + if (!is_api_function) { + Label no_inobject_slack_tracking; - if (count_constructions) { - __ LoadRoot(t7, Heap::kUndefinedValueRootIndex); + // Check if slack tracking is enabled. + __ Branch(&no_inobject_slack_tracking, + eq, t2, Operand(JSFunction::kNoSlackTracking)); + + // Allocate object with a slack. __ lbu(a0, FieldMemOperand(a2, Map::kPreAllocatedPropertyFieldsOffset)); __ sll(at, a0, kPointerSizeLog2); __ addu(a0, t5, at); - __ sll(at, a3, kPointerSizeLog2); - __ Addu(t6, t4, Operand(at)); // End of object. // a0: offset of first field after pre-allocated fields if (FLAG_debug_code) { + __ sll(at, a3, kPointerSizeLog2); + __ Addu(t6, t4, Operand(at)); // End of object. __ Assert(le, kUnexpectedNumberOfPreAllocatedPropertyFields, a0, Operand(t6)); } __ InitializeFieldsWithFiller(t5, a0, t7); // To allow for truncation. __ LoadRoot(t7, Heap::kOnePointerFillerMapRootIndex); - __ InitializeFieldsWithFiller(t5, t6, t7); - } else if (create_memento) { - __ Subu(t7, a3, Operand(AllocationMemento::kSize / kPointerSize)); - __ sll(at, t7, kPointerSizeLog2); - __ Addu(a0, t4, Operand(at)); // End of object. - __ LoadRoot(t7, Heap::kUndefinedValueRootIndex); + // Fill the remaining fields with one pointer filler map. 
+ + __ bind(&no_inobject_slack_tracking); + } + + if (create_memento) { + __ Subu(a0, a3, Operand(AllocationMemento::kSize / kPointerSize)); + __ sll(a0, a0, kPointerSizeLog2); + __ Addu(a0, t4, Operand(a0)); // End of object. __ InitializeFieldsWithFiller(t5, a0, t7); // Fill in memento fields. // t5: points to the allocated but uninitialized memento. __ LoadRoot(t7, Heap::kAllocationMementoMapRootIndex); - ASSERT_EQ(0 * kPointerSize, AllocationMemento::kMapOffset); + DCHECK_EQ(0 * kPointerSize, AllocationMemento::kMapOffset); __ sw(t7, MemOperand(t5)); __ Addu(t5, t5, kPointerSize); // Load the AllocationSite. __ lw(t7, MemOperand(sp, 2 * kPointerSize)); - ASSERT_EQ(1 * kPointerSize, AllocationMemento::kAllocationSiteOffset); + DCHECK_EQ(1 * kPointerSize, AllocationMemento::kAllocationSiteOffset); __ sw(t7, MemOperand(t5)); __ Addu(t5, t5, kPointerSize); } else { - __ LoadRoot(t7, Heap::kUndefinedValueRootIndex); __ sll(at, a3, kPointerSizeLog2); __ Addu(a0, t4, Operand(at)); // End of object. __ InitializeFieldsWithFiller(t5, a0, t7); @@ -535,8 +541,8 @@ static void Generate_JSConstructStubHelper(MacroAssembler* masm, __ sw(a0, MemOperand(a2, FixedArray::kLengthOffset)); __ Addu(a2, a2, Operand(2 * kPointerSize)); - ASSERT_EQ(0 * kPointerSize, JSObject::kMapOffset); - ASSERT_EQ(1 * kPointerSize, FixedArray::kLengthOffset); + DCHECK_EQ(0 * kPointerSize, JSObject::kMapOffset); + DCHECK_EQ(1 * kPointerSize, FixedArray::kLengthOffset); // Initialize the fields to undefined. // a1: constructor @@ -546,13 +552,13 @@ static void Generate_JSConstructStubHelper(MacroAssembler* masm, // t5: FixedArray (not tagged) __ sll(t3, a3, kPointerSizeLog2); __ addu(t6, a2, t3); // End of object. - ASSERT_EQ(2 * kPointerSize, FixedArray::kHeaderSize); + DCHECK_EQ(2 * kPointerSize, FixedArray::kHeaderSize); { Label loop, entry; - if (count_constructions) { + if (!is_api_function || create_memento) { __ LoadRoot(t7, Heap::kUndefinedValueRootIndex); } else if (FLAG_debug_code) { - __ LoadRoot(t8, Heap::kUndefinedValueRootIndex); - __ Assert(eq, kUndefinedValueNotLoaded, t7, Operand(t8)); + __ LoadRoot(t2, Heap::kUndefinedValueRootIndex); + __ Assert(eq, kUndefinedValueNotLoaded, t7, Operand(t2)); } __ jmp(&entry); __ bind(&loop); @@ -594,9 +600,9 @@ static void Generate_JSConstructStubHelper(MacroAssembler* masm, __ push(a1); // Argument for Runtime_NewObject. if (create_memento) { - __ CallRuntime(Runtime::kHiddenNewObjectWithAllocationSite, 2); + __ CallRuntime(Runtime::kNewObjectWithAllocationSite, 2); } else { - __ CallRuntime(Runtime::kHiddenNewObject, 1); + __ CallRuntime(Runtime::kNewObject, 1); } __ mov(t4, v0); @@ -610,6 +616,7 @@ static void Generate_JSConstructStubHelper(MacroAssembler* masm, // Receiver for constructor call allocated. // t4: JSObject + __ bind(&allocated); if (create_memento) { __ lw(a2, MemOperand(sp, kPointerSize * 2)); @@ -625,7 +632,6 @@ static void Generate_JSConstructStubHelper(MacroAssembler* masm, __ bind(&count_incremented); } - __ bind(&allocated); __ Push(t4, t4); // Reload the number of arguments from the stack. @@ -676,7 +682,7 @@ static void Generate_JSConstructStubHelper(MacroAssembler* masm, } // Store offset of return address for deoptimizer. 
- if (!is_api_function && !count_constructions) { + if (!is_api_function) { masm->isolate()->heap()->SetConstructStubDeoptPCOffset(masm->pc_offset()); } @@ -725,18 +731,13 @@ static void Generate_JSConstructStubHelper(MacroAssembler* masm, } -void Builtins::Generate_JSConstructStubCountdown(MacroAssembler* masm) { - Generate_JSConstructStubHelper(masm, false, true, false); -} - - void Builtins::Generate_JSConstructStubGeneric(MacroAssembler* masm) { - Generate_JSConstructStubHelper(masm, false, false, FLAG_pretenuring_call_new); + Generate_JSConstructStubHelper(masm, false, FLAG_pretenuring_call_new); } void Builtins::Generate_JSConstructStubApi(MacroAssembler* masm) { - Generate_JSConstructStubHelper(masm, true, false, false); + Generate_JSConstructStubHelper(masm, true, false); } @@ -824,7 +825,7 @@ void Builtins::Generate_JSConstructEntryTrampoline(MacroAssembler* masm) { void Builtins::Generate_CompileUnoptimized(MacroAssembler* masm) { - CallRuntimePassFunction(masm, Runtime::kHiddenCompileUnoptimized); + CallRuntimePassFunction(masm, Runtime::kCompileUnoptimized); GenerateTailCallToReturnedCode(masm); } @@ -837,7 +838,7 @@ static void CallCompileOptimized(MacroAssembler* masm, bool concurrent) { // Whether to compile in a background thread. __ Push(masm->isolate()->factory()->ToBoolean(concurrent)); - __ CallRuntime(Runtime::kHiddenCompileOptimized, 2); + __ CallRuntime(Runtime::kCompileOptimized, 2); // Restore receiver. __ Pop(a1); } @@ -946,7 +947,7 @@ static void Generate_NotifyStubFailureHelper(MacroAssembler* masm, // registers. __ MultiPush(kJSCallerSaved | kCalleeSaved); // Pass the function and deoptimization type to the runtime system. - __ CallRuntime(Runtime::kHiddenNotifyStubFailure, 0, save_doubles); + __ CallRuntime(Runtime::kNotifyStubFailure, 0, save_doubles); __ MultiPop(kJSCallerSaved | kCalleeSaved); } @@ -972,7 +973,7 @@ static void Generate_NotifyDeoptimizedHelper(MacroAssembler* masm, // Pass the function and deoptimization type to the runtime system. __ li(a0, Operand(Smi::FromInt(static_cast<int>(type)))); __ push(a0); - __ CallRuntime(Runtime::kHiddenNotifyDeoptimized, 1); + __ CallRuntime(Runtime::kNotifyDeoptimized, 1); } // Get the full codegen state from the stack and untag it -> t2. @@ -1054,7 +1055,7 @@ void Builtins::Generate_OsrAfterStackCheck(MacroAssembler* masm) { __ Branch(&ok, hs, sp, Operand(at)); { FrameScope scope(masm, StackFrame::INTERNAL); - __ CallRuntime(Runtime::kHiddenStackGuard, 0); + __ CallRuntime(Runtime::kStackGuard, 0); } __ Jump(masm->isolate()->builtins()->OnStackReplacement(), RelocInfo::CODE_TARGET); @@ -1091,7 +1092,7 @@ void Builtins::Generate_FunctionCall(MacroAssembler* masm) { // a1: function Label shift_arguments; __ li(t0, Operand(0, RelocInfo::NONE32)); // Indicate regular JS_FUNCTION. - { Label convert_to_object, use_global_receiver, patch_receiver; + { Label convert_to_object, use_global_proxy, patch_receiver; // Change context eagerly in case we need the global receiver. 
__ lw(cp, FieldMemOperand(a1, JSFunction::kContextOffset)); @@ -1117,9 +1118,9 @@ void Builtins::Generate_FunctionCall(MacroAssembler* masm) { __ JumpIfSmi(a2, &convert_to_object, t2); __ LoadRoot(a3, Heap::kUndefinedValueRootIndex); - __ Branch(&use_global_receiver, eq, a2, Operand(a3)); + __ Branch(&use_global_proxy, eq, a2, Operand(a3)); __ LoadRoot(a3, Heap::kNullValueRootIndex); - __ Branch(&use_global_receiver, eq, a2, Operand(a3)); + __ Branch(&use_global_proxy, eq, a2, Operand(a3)); STATIC_ASSERT(LAST_SPEC_OBJECT_TYPE == LAST_TYPE); __ GetObjectType(a2, a3, a3); @@ -1138,16 +1139,17 @@ void Builtins::Generate_FunctionCall(MacroAssembler* masm) { __ sra(a0, a0, kSmiTagSize); // Un-tag. // Leave internal frame. } + // Restore the function to a1, and the flag to t0. __ sll(at, a0, kPointerSizeLog2); __ addu(at, sp, at); __ lw(a1, MemOperand(at)); - __ li(t0, Operand(0, RelocInfo::NONE32)); - __ Branch(&patch_receiver); + __ Branch(USE_DELAY_SLOT, &patch_receiver); + __ li(t0, Operand(0, RelocInfo::NONE32)); // In delay slot. - __ bind(&use_global_receiver); + __ bind(&use_global_proxy); __ lw(a2, ContextOperand(cp, Context::GLOBAL_OBJECT_INDEX)); - __ lw(a2, FieldMemOperand(a2, GlobalObject::kGlobalReceiverOffset)); + __ lw(a2, FieldMemOperand(a2, GlobalObject::kGlobalProxyOffset)); __ bind(&patch_receiver); __ sll(at, a0, kPointerSizeLog2); @@ -1299,7 +1301,7 @@ void Builtins::Generate_FunctionApply(MacroAssembler* masm) { // Compute the receiver. // Do not transform the receiver for strict mode functions. - Label call_to_object, use_global_receiver; + Label call_to_object, use_global_proxy; __ lw(a2, FieldMemOperand(a2, SharedFunctionInfo::kCompilerHintsOffset)); __ And(t3, a2, Operand(1 << (SharedFunctionInfo::kStrictModeFunction + kSmiTagSize))); @@ -1312,9 +1314,9 @@ void Builtins::Generate_FunctionApply(MacroAssembler* masm) { // Compute the receiver in sloppy mode. __ JumpIfSmi(a0, &call_to_object); __ LoadRoot(a1, Heap::kNullValueRootIndex); - __ Branch(&use_global_receiver, eq, a0, Operand(a1)); + __ Branch(&use_global_proxy, eq, a0, Operand(a1)); __ LoadRoot(a2, Heap::kUndefinedValueRootIndex); - __ Branch(&use_global_receiver, eq, a0, Operand(a2)); + __ Branch(&use_global_proxy, eq, a0, Operand(a2)); // Check if the receiver is already a JavaScript object. // a0: receiver @@ -1330,9 +1332,9 @@ void Builtins::Generate_FunctionApply(MacroAssembler* masm) { __ mov(a0, v0); // Put object in a0 to match other paths to push_receiver. __ Branch(&push_receiver); - __ bind(&use_global_receiver); + __ bind(&use_global_proxy); __ lw(a0, ContextOperand(cp, Context::GLOBAL_OBJECT_INDEX)); - __ lw(a0, FieldMemOperand(a0, GlobalObject::kGlobalReceiverOffset)); + __ lw(a0, FieldMemOperand(a0, GlobalObject::kGlobalProxyOffset)); // Push the receiver. // a0: receiver diff --git a/deps/v8/src/mips/code-stubs-mips.cc b/deps/v8/src/mips/code-stubs-mips.cc index 79af21940cf..2e8e6074b57 100644 --- a/deps/v8/src/mips/code-stubs-mips.cc +++ b/deps/v8/src/mips/code-stubs-mips.cc @@ -2,15 +2,15 @@ // Use of this source code is governed by a BSD-style license that can be // found in the LICENSE file. 
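Most of the churn in code-stubs-mips.cc follows a single pattern: stub descriptors stop assigning register_param_count_, register_params_ and deoptimization_handler_ by hand and instead make one Initialize() call, with the context register cp now listed explicitly as the first parameter. Schematically, using the ToNumberStub hunk below as the example:

    // Before: three fields written individually, context implicit.
    static Register registers[] = { a0 };
    descriptor->register_param_count_ = 1;
    descriptor->register_params_ = registers;
    descriptor->deoptimization_handler_ = NULL;

    // After: one call; cp is explicit and the count is computed.
    Register registers[] = { cp, a0 };
    descriptor->Initialize(MajorKey(), ARRAY_SIZE(registers), registers);

The same rewrite also drops the Runtime::kHidden prefix wherever a deoptimization handler names a runtime function (kHiddenNumberToString becomes kNumberToStringRT, and so on).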
-#include "v8.h" +#include "src/v8.h" #if V8_TARGET_ARCH_MIPS -#include "bootstrapper.h" -#include "code-stubs.h" -#include "codegen.h" -#include "regexp-macro-assembler.h" -#include "stub-cache.h" +#include "src/bootstrapper.h" +#include "src/code-stubs.h" +#include "src/codegen.h" +#include "src/regexp-macro-assembler.h" +#include "src/stub-cache.h" namespace v8 { namespace internal { @@ -18,251 +18,196 @@ namespace internal { void FastNewClosureStub::InitializeInterfaceDescriptor( CodeStubInterfaceDescriptor* descriptor) { - static Register registers[] = { a2 }; - descriptor->register_param_count_ = 1; - descriptor->register_params_ = registers; - descriptor->deoptimization_handler_ = - Runtime::FunctionForId(Runtime::kHiddenNewClosureFromStubFailure)->entry; + Register registers[] = { cp, a2 }; + descriptor->Initialize( + MajorKey(), ARRAY_SIZE(registers), registers, + Runtime::FunctionForId(Runtime::kNewClosureFromStubFailure)->entry); } void FastNewContextStub::InitializeInterfaceDescriptor( CodeStubInterfaceDescriptor* descriptor) { - static Register registers[] = { a1 }; - descriptor->register_param_count_ = 1; - descriptor->register_params_ = registers; - descriptor->deoptimization_handler_ = NULL; + Register registers[] = { cp, a1 }; + descriptor->Initialize(MajorKey(), ARRAY_SIZE(registers), registers); } void ToNumberStub::InitializeInterfaceDescriptor( CodeStubInterfaceDescriptor* descriptor) { - static Register registers[] = { a0 }; - descriptor->register_param_count_ = 1; - descriptor->register_params_ = registers; - descriptor->deoptimization_handler_ = NULL; + Register registers[] = { cp, a0 }; + descriptor->Initialize(MajorKey(), ARRAY_SIZE(registers), registers); } void NumberToStringStub::InitializeInterfaceDescriptor( CodeStubInterfaceDescriptor* descriptor) { - static Register registers[] = { a0 }; - descriptor->register_param_count_ = 1; - descriptor->register_params_ = registers; - descriptor->deoptimization_handler_ = - Runtime::FunctionForId(Runtime::kHiddenNumberToString)->entry; + Register registers[] = { cp, a0 }; + descriptor->Initialize( + MajorKey(), ARRAY_SIZE(registers), registers, + Runtime::FunctionForId(Runtime::kNumberToStringRT)->entry); } void FastCloneShallowArrayStub::InitializeInterfaceDescriptor( CodeStubInterfaceDescriptor* descriptor) { - static Register registers[] = { a3, a2, a1 }; - descriptor->register_param_count_ = 3; - descriptor->register_params_ = registers; - descriptor->deoptimization_handler_ = - Runtime::FunctionForId( - Runtime::kHiddenCreateArrayLiteralStubBailout)->entry; + Register registers[] = { cp, a3, a2, a1 }; + Representation representations[] = { + Representation::Tagged(), + Representation::Tagged(), + Representation::Smi(), + Representation::Tagged() }; + descriptor->Initialize( + MajorKey(), ARRAY_SIZE(registers), registers, + Runtime::FunctionForId(Runtime::kCreateArrayLiteralStubBailout)->entry, + representations); } void FastCloneShallowObjectStub::InitializeInterfaceDescriptor( CodeStubInterfaceDescriptor* descriptor) { - static Register registers[] = { a3, a2, a1, a0 }; - descriptor->register_param_count_ = 4; - descriptor->register_params_ = registers; - descriptor->deoptimization_handler_ = - Runtime::FunctionForId(Runtime::kHiddenCreateObjectLiteral)->entry; + Register registers[] = { cp, a3, a2, a1, a0 }; + descriptor->Initialize( + MajorKey(), ARRAY_SIZE(registers), registers, + Runtime::FunctionForId(Runtime::kCreateObjectLiteral)->entry); } void CreateAllocationSiteStub::InitializeInterfaceDescriptor( 
CodeStubInterfaceDescriptor* descriptor) { - static Register registers[] = { a2, a3 }; - descriptor->register_param_count_ = 2; - descriptor->register_params_ = registers; - descriptor->deoptimization_handler_ = NULL; + Register registers[] = { cp, a2, a3 }; + descriptor->Initialize(MajorKey(), ARRAY_SIZE(registers), registers); } -void KeyedLoadFastElementStub::InitializeInterfaceDescriptor( +void CallFunctionStub::InitializeInterfaceDescriptor( CodeStubInterfaceDescriptor* descriptor) { - static Register registers[] = { a1, a0 }; - descriptor->register_param_count_ = 2; - descriptor->register_params_ = registers; - descriptor->deoptimization_handler_ = - FUNCTION_ADDR(KeyedLoadIC_MissFromStubFailure); + UNIMPLEMENTED(); } -void KeyedLoadDictionaryElementStub::InitializeInterfaceDescriptor( +void CallConstructStub::InitializeInterfaceDescriptor( CodeStubInterfaceDescriptor* descriptor) { - static Register registers[] = {a1, a0 }; - descriptor->register_param_count_ = 2; - descriptor->register_params_ = registers; - descriptor->deoptimization_handler_ = - FUNCTION_ADDR(KeyedLoadIC_MissFromStubFailure); + UNIMPLEMENTED(); } void RegExpConstructResultStub::InitializeInterfaceDescriptor( CodeStubInterfaceDescriptor* descriptor) { - static Register registers[] = { a2, a1, a0 }; - descriptor->register_param_count_ = 3; - descriptor->register_params_ = registers; - descriptor->deoptimization_handler_ = - Runtime::FunctionForId(Runtime::kHiddenRegExpConstructResult)->entry; -} - - -void LoadFieldStub::InitializeInterfaceDescriptor( - CodeStubInterfaceDescriptor* descriptor) { - static Register registers[] = { a0 }; - descriptor->register_param_count_ = 1; - descriptor->register_params_ = registers; - descriptor->deoptimization_handler_ = NULL; -} - - -void KeyedLoadFieldStub::InitializeInterfaceDescriptor( - CodeStubInterfaceDescriptor* descriptor) { - static Register registers[] = { a1 }; - descriptor->register_param_count_ = 1; - descriptor->register_params_ = registers; - descriptor->deoptimization_handler_ = NULL; -} - - -void StringLengthStub::InitializeInterfaceDescriptor( - CodeStubInterfaceDescriptor* descriptor) { - static Register registers[] = { a0, a2 }; - descriptor->register_param_count_ = 2; - descriptor->register_params_ = registers; - descriptor->deoptimization_handler_ = NULL; -} - - -void KeyedStringLengthStub::InitializeInterfaceDescriptor( - CodeStubInterfaceDescriptor* descriptor) { - static Register registers[] = { a1, a0 }; - descriptor->register_param_count_ = 2; - descriptor->register_params_ = registers; - descriptor->deoptimization_handler_ = NULL; -} - - -void KeyedStoreFastElementStub::InitializeInterfaceDescriptor( - CodeStubInterfaceDescriptor* descriptor) { - static Register registers[] = { a2, a1, a0 }; - descriptor->register_param_count_ = 3; - descriptor->register_params_ = registers; - descriptor->deoptimization_handler_ = - FUNCTION_ADDR(KeyedStoreIC_MissFromStubFailure); + Register registers[] = { cp, a2, a1, a0 }; + descriptor->Initialize( + MajorKey(), ARRAY_SIZE(registers), registers, + Runtime::FunctionForId(Runtime::kRegExpConstructResult)->entry); } void TransitionElementsKindStub::InitializeInterfaceDescriptor( CodeStubInterfaceDescriptor* descriptor) { - static Register registers[] = { a0, a1 }; - descriptor->register_param_count_ = 2; - descriptor->register_params_ = registers; + Register registers[] = { cp, a0, a1 }; Address entry = Runtime::FunctionForId(Runtime::kTransitionElementsKind)->entry; - descriptor->deoptimization_handler_ = 
FUNCTION_ADDR(entry); + descriptor->Initialize(MajorKey(), ARRAY_SIZE(registers), registers, + FUNCTION_ADDR(entry)); } void CompareNilICStub::InitializeInterfaceDescriptor( CodeStubInterfaceDescriptor* descriptor) { - static Register registers[] = { a0 }; - descriptor->register_param_count_ = 1; - descriptor->register_params_ = registers; - descriptor->deoptimization_handler_ = - FUNCTION_ADDR(CompareNilIC_Miss); + Register registers[] = { cp, a0 }; + descriptor->Initialize(MajorKey(), ARRAY_SIZE(registers), registers, + FUNCTION_ADDR(CompareNilIC_Miss)); descriptor->SetMissHandler( ExternalReference(IC_Utility(IC::kCompareNilIC_Miss), isolate())); } +const Register InterfaceDescriptor::ContextRegister() { return cp; } + + static void InitializeArrayConstructorDescriptor( - CodeStubInterfaceDescriptor* descriptor, + CodeStub::Major major, CodeStubInterfaceDescriptor* descriptor, int constant_stack_parameter_count) { // register state + // cp -- context // a0 -- number of arguments // a1 -- function // a2 -- allocation site with elements kind - static Register registers_variable_args[] = { a1, a2, a0 }; - static Register registers_no_args[] = { a1, a2 }; + Address deopt_handler = Runtime::FunctionForId( + Runtime::kArrayConstructor)->entry; if (constant_stack_parameter_count == 0) { - descriptor->register_param_count_ = 2; - descriptor->register_params_ = registers_no_args; + Register registers[] = { cp, a1, a2 }; + descriptor->Initialize(major, ARRAY_SIZE(registers), registers, + deopt_handler, NULL, constant_stack_parameter_count, + JS_FUNCTION_STUB_MODE); } else { // stack param count needs (constructor pointer, and single argument) - descriptor->handler_arguments_mode_ = PASS_ARGUMENTS; - descriptor->stack_parameter_count_ = a0; - descriptor->register_param_count_ = 3; - descriptor->register_params_ = registers_variable_args; + Register registers[] = { cp, a1, a2, a0 }; + Representation representations[] = { + Representation::Tagged(), + Representation::Tagged(), + Representation::Tagged(), + Representation::Integer32() }; + descriptor->Initialize(major, ARRAY_SIZE(registers), registers, a0, + deopt_handler, representations, + constant_stack_parameter_count, + JS_FUNCTION_STUB_MODE, PASS_ARGUMENTS); } - - descriptor->hint_stack_parameter_count_ = constant_stack_parameter_count; - descriptor->function_mode_ = JS_FUNCTION_STUB_MODE; - descriptor->deoptimization_handler_ = - Runtime::FunctionForId(Runtime::kHiddenArrayConstructor)->entry; } static void InitializeInternalArrayConstructorDescriptor( - CodeStubInterfaceDescriptor* descriptor, + CodeStub::Major major, CodeStubInterfaceDescriptor* descriptor, int constant_stack_parameter_count) { // register state + // cp -- context // a0 -- number of arguments // a1 -- constructor function - static Register registers_variable_args[] = { a1, a0 }; - static Register registers_no_args[] = { a1 }; + Address deopt_handler = Runtime::FunctionForId( + Runtime::kInternalArrayConstructor)->entry; if (constant_stack_parameter_count == 0) { - descriptor->register_param_count_ = 1; - descriptor->register_params_ = registers_no_args; + Register registers[] = { cp, a1 }; + descriptor->Initialize(major, ARRAY_SIZE(registers), registers, + deopt_handler, NULL, constant_stack_parameter_count, + JS_FUNCTION_STUB_MODE); } else { // stack param count needs (constructor pointer, and single argument) - descriptor->handler_arguments_mode_ = PASS_ARGUMENTS; - descriptor->stack_parameter_count_ = a0; - descriptor->register_param_count_ = 2; - 
descriptor->register_params_ = registers_variable_args; + Register registers[] = { cp, a1, a0 }; + Representation representations[] = { + Representation::Tagged(), + Representation::Tagged(), + Representation::Integer32() }; + descriptor->Initialize(major, ARRAY_SIZE(registers), registers, a0, + deopt_handler, representations, + constant_stack_parameter_count, + JS_FUNCTION_STUB_MODE, PASS_ARGUMENTS); } - - descriptor->hint_stack_parameter_count_ = constant_stack_parameter_count; - descriptor->function_mode_ = JS_FUNCTION_STUB_MODE; - descriptor->deoptimization_handler_ = - Runtime::FunctionForId(Runtime::kHiddenInternalArrayConstructor)->entry; } void ArrayNoArgumentConstructorStub::InitializeInterfaceDescriptor( CodeStubInterfaceDescriptor* descriptor) { - InitializeArrayConstructorDescriptor(descriptor, 0); + InitializeArrayConstructorDescriptor(MajorKey(), descriptor, 0); } void ArraySingleArgumentConstructorStub::InitializeInterfaceDescriptor( CodeStubInterfaceDescriptor* descriptor) { - InitializeArrayConstructorDescriptor(descriptor, 1); + InitializeArrayConstructorDescriptor(MajorKey(), descriptor, 1); } void ArrayNArgumentsConstructorStub::InitializeInterfaceDescriptor( CodeStubInterfaceDescriptor* descriptor) { - InitializeArrayConstructorDescriptor(descriptor, -1); + InitializeArrayConstructorDescriptor(MajorKey(), descriptor, -1); } void ToBooleanStub::InitializeInterfaceDescriptor( CodeStubInterfaceDescriptor* descriptor) { - static Register registers[] = { a0 }; - descriptor->register_param_count_ = 1; - descriptor->register_params_ = registers; - descriptor->deoptimization_handler_ = - FUNCTION_ADDR(ToBooleanIC_Miss); + Register registers[] = { cp, a0 }; + descriptor->Initialize(MajorKey(), ARRAY_SIZE(registers), registers, + FUNCTION_ADDR(ToBooleanIC_Miss)); descriptor->SetMissHandler( ExternalReference(IC_Utility(IC::kToBooleanIC_Miss), isolate())); } @@ -270,48 +215,27 @@ void ToBooleanStub::InitializeInterfaceDescriptor( void InternalArrayNoArgumentConstructorStub::InitializeInterfaceDescriptor( CodeStubInterfaceDescriptor* descriptor) { - InitializeInternalArrayConstructorDescriptor(descriptor, 0); + InitializeInternalArrayConstructorDescriptor(MajorKey(), descriptor, 0); } void InternalArraySingleArgumentConstructorStub::InitializeInterfaceDescriptor( CodeStubInterfaceDescriptor* descriptor) { - InitializeInternalArrayConstructorDescriptor(descriptor, 1); + InitializeInternalArrayConstructorDescriptor(MajorKey(), descriptor, 1); } void InternalArrayNArgumentsConstructorStub::InitializeInterfaceDescriptor( CodeStubInterfaceDescriptor* descriptor) { - InitializeInternalArrayConstructorDescriptor(descriptor, -1); -} - - -void StoreGlobalStub::InitializeInterfaceDescriptor( - CodeStubInterfaceDescriptor* descriptor) { - static Register registers[] = { a1, a2, a0 }; - descriptor->register_param_count_ = 3; - descriptor->register_params_ = registers; - descriptor->deoptimization_handler_ = - FUNCTION_ADDR(StoreIC_MissFromStubFailure); -} - - -void ElementsTransitionAndStoreStub::InitializeInterfaceDescriptor( - CodeStubInterfaceDescriptor* descriptor) { - static Register registers[] = { a0, a3, a1, a2 }; - descriptor->register_param_count_ = 4; - descriptor->register_params_ = registers; - descriptor->deoptimization_handler_ = - FUNCTION_ADDR(ElementsTransitionAndStoreIC_Miss); + InitializeInternalArrayConstructorDescriptor(MajorKey(), descriptor, -1); } void BinaryOpICStub::InitializeInterfaceDescriptor( CodeStubInterfaceDescriptor* descriptor) { - static Register 
registers[] = { a1, a0 }; - descriptor->register_param_count_ = 2; - descriptor->register_params_ = registers; - descriptor->deoptimization_handler_ = FUNCTION_ADDR(BinaryOpIC_Miss); + Register registers[] = { cp, a1, a0 }; + descriptor->Initialize(MajorKey(), ARRAY_SIZE(registers), registers, + FUNCTION_ADDR(BinaryOpIC_Miss)); descriptor->SetMissHandler( ExternalReference(IC_Utility(IC::kBinaryOpIC_Miss), isolate())); } @@ -319,21 +243,17 @@ void BinaryOpICStub::InitializeInterfaceDescriptor( void BinaryOpWithAllocationSiteStub::InitializeInterfaceDescriptor( CodeStubInterfaceDescriptor* descriptor) { - static Register registers[] = { a2, a1, a0 }; - descriptor->register_param_count_ = 3; - descriptor->register_params_ = registers; - descriptor->deoptimization_handler_ = - FUNCTION_ADDR(BinaryOpIC_MissWithAllocationSite); + Register registers[] = { cp, a2, a1, a0 }; + descriptor->Initialize(MajorKey(), ARRAY_SIZE(registers), registers, + FUNCTION_ADDR(BinaryOpIC_MissWithAllocationSite)); } void StringAddStub::InitializeInterfaceDescriptor( CodeStubInterfaceDescriptor* descriptor) { - static Register registers[] = { a1, a0 }; - descriptor->register_param_count_ = 2; - descriptor->register_params_ = registers; - descriptor->deoptimization_handler_ = - Runtime::FunctionForId(Runtime::kHiddenStringAdd)->entry; + Register registers[] = { cp, a1, a0 }; + descriptor->Initialize(MajorKey(), ARRAY_SIZE(registers), registers, + Runtime::FunctionForId(Runtime::kStringAdd)->entry); } @@ -341,82 +261,72 @@ void CallDescriptors::InitializeForIsolate(Isolate* isolate) { { CallInterfaceDescriptor* descriptor = isolate->call_descriptor(Isolate::ArgumentAdaptorCall); - static Register registers[] = { a1, // JSFunction - cp, // context - a0, // actual number of arguments - a2, // expected number of arguments + Register registers[] = { cp, // context, + a1, // JSFunction + a0, // actual number of arguments + a2, // expected number of arguments }; - static Representation representations[] = { - Representation::Tagged(), // JSFunction + Representation representations[] = { Representation::Tagged(), // context + Representation::Tagged(), // JSFunction Representation::Integer32(), // actual number of arguments Representation::Integer32(), // expected number of arguments }; - descriptor->register_param_count_ = 4; - descriptor->register_params_ = registers; - descriptor->param_representations_ = representations; + descriptor->Initialize(ARRAY_SIZE(registers), registers, representations); } { CallInterfaceDescriptor* descriptor = isolate->call_descriptor(Isolate::KeyedCall); - static Register registers[] = { cp, // context - a2, // key + Register registers[] = { cp, // context + a2, // key }; - static Representation representations[] = { + Representation representations[] = { Representation::Tagged(), // context Representation::Tagged(), // key }; - descriptor->register_param_count_ = 2; - descriptor->register_params_ = registers; - descriptor->param_representations_ = representations; + descriptor->Initialize(ARRAY_SIZE(registers), registers, representations); } { CallInterfaceDescriptor* descriptor = isolate->call_descriptor(Isolate::NamedCall); - static Register registers[] = { cp, // context - a2, // name + Register registers[] = { cp, // context + a2, // name }; - static Representation representations[] = { + Representation representations[] = { Representation::Tagged(), // context Representation::Tagged(), // name }; - descriptor->register_param_count_ = 2; - descriptor->register_params_ = registers; - 
descriptor->param_representations_ = representations; + descriptor->Initialize(ARRAY_SIZE(registers), registers, representations); } { CallInterfaceDescriptor* descriptor = isolate->call_descriptor(Isolate::CallHandler); - static Register registers[] = { cp, // context - a0, // receiver + Register registers[] = { cp, // context + a0, // receiver }; - static Representation representations[] = { + Representation representations[] = { Representation::Tagged(), // context Representation::Tagged(), // receiver }; - descriptor->register_param_count_ = 2; - descriptor->register_params_ = registers; - descriptor->param_representations_ = representations; + descriptor->Initialize(ARRAY_SIZE(registers), registers, representations); } { CallInterfaceDescriptor* descriptor = isolate->call_descriptor(Isolate::ApiFunctionCall); - static Register registers[] = { a0, // callee - t0, // call_data - a2, // holder - a1, // api_function_address - cp, // context + Register registers[] = { cp, // context + a0, // callee + t0, // call_data + a2, // holder + a1, // api_function_address }; - static Representation representations[] = { + Representation representations[] = { + Representation::Tagged(), // context Representation::Tagged(), // callee Representation::Tagged(), // call_data Representation::Tagged(), // holder Representation::External(), // api_function_address - Representation::Tagged(), // context }; - descriptor->register_param_count_ = 5; - descriptor->register_params_ = registers; - descriptor->param_representations_ = representations; + descriptor->Initialize(ARRAY_SIZE(registers), registers, representations); } } @@ -443,21 +353,22 @@ void HydrogenCodeStub::GenerateLightweightMiss(MacroAssembler* masm) { isolate()->counters()->code_stubs()->Increment(); CodeStubInterfaceDescriptor* descriptor = GetInterfaceDescriptor(); - int param_count = descriptor->register_param_count_; + int param_count = descriptor->GetEnvironmentParameterCount(); { // Call the runtime system in a fresh internal frame. FrameScope scope(masm, StackFrame::INTERNAL); - ASSERT(descriptor->register_param_count_ == 0 || - a0.is(descriptor->register_params_[param_count - 1])); + DCHECK(param_count == 0 || + a0.is(descriptor->GetEnvironmentParameterRegister( + param_count - 1))); // Push arguments, adjust sp. __ Subu(sp, sp, Operand(param_count * kPointerSize)); for (int i = 0; i < param_count; ++i) { // Store argument to stack. - __ sw(descriptor->register_params_[i], + __ sw(descriptor->GetEnvironmentParameterRegister(i), MemOperand(sp, (param_count-1-i) * kPointerSize)); } ExternalReference miss = descriptor->miss_handler(); - __ CallExternalReference(miss, descriptor->register_param_count_); + __ CallExternalReference(miss, param_count); } __ Ret(); @@ -492,8 +403,8 @@ class ConvertToDoubleStub : public PlatformCodeStub { class ModeBits: public BitField<OverwriteMode, 0, 2> {}; class OpBits: public BitField<Token::Value, 2, 14> {}; - Major MajorKey() { return ConvertToDouble; } - int MinorKey() { + Major MajorKey() const { return ConvertToDouble; } + int MinorKey() const { // Encode the parameters in a unique 16 bit value. return result1_.code() + (result2_.code() << 4) + @@ -739,7 +650,7 @@ void WriteInt32ToHeapNumberStub::Generate(MacroAssembler* masm) { // but it just ends up combining harmlessly with the last digit of the // exponent that happens to be 1. The sign bit is 0 so we shift 10 to get // the most significant 1 to hit the last bit of the 12 bit sign and exponent. 
- ASSERT(((1 << HeapNumber::kExponentShift) & non_smi_exponent) != 0); + DCHECK(((1 << HeapNumber::kExponentShift) & non_smi_exponent) != 0); const int shift_distance = HeapNumber::kNonMantissaBitsInTopWord - 2; __ srl(at, the_int_, shift_distance); __ or_(scratch_, scratch_, at); @@ -800,7 +711,7 @@ static void EmitIdenticalObjectComparison(MacroAssembler* masm, __ Branch(&return_equal, ne, t4, Operand(ODDBALL_TYPE)); __ LoadRoot(t2, Heap::kUndefinedValueRootIndex); __ Branch(&return_equal, ne, a0, Operand(t2)); - ASSERT(is_int16(GREATER) && is_int16(LESS)); + DCHECK(is_int16(GREATER) && is_int16(LESS)); __ Ret(USE_DELAY_SLOT); if (cc == le) { // undefined <= undefined should fail. @@ -814,7 +725,7 @@ static void EmitIdenticalObjectComparison(MacroAssembler* masm, } __ bind(&return_equal); - ASSERT(is_int16(GREATER) && is_int16(LESS)); + DCHECK(is_int16(GREATER) && is_int16(LESS)); __ Ret(USE_DELAY_SLOT); if (cc == less) { __ li(v0, Operand(GREATER)); // Things aren't less than themselves. @@ -853,7 +764,7 @@ static void EmitIdenticalObjectComparison(MacroAssembler* masm, if (cc != eq) { // All-zero means Infinity means equal. __ Ret(eq, v0, Operand(zero_reg)); - ASSERT(is_int16(GREATER) && is_int16(LESS)); + DCHECK(is_int16(GREATER) && is_int16(LESS)); __ Ret(USE_DELAY_SLOT); if (cc == le) { __ li(v0, Operand(GREATER)); // NaN <= NaN should fail. @@ -874,7 +785,7 @@ static void EmitSmiNonsmiComparison(MacroAssembler* masm, Label* both_loaded_as_doubles, Label* slow, bool strict) { - ASSERT((lhs.is(a0) && rhs.is(a1)) || + DCHECK((lhs.is(a0) && rhs.is(a1)) || (lhs.is(a1) && rhs.is(a0))); Label lhs_is_smi; @@ -992,7 +903,7 @@ static void EmitCheckForInternalizedStringsOrObjects(MacroAssembler* masm, Register rhs, Label* possible_strings, Label* not_both_strings) { - ASSERT((lhs.is(a0) && rhs.is(a1)) || + DCHECK((lhs.is(a0) && rhs.is(a1)) || (lhs.is(a1) && rhs.is(a0))); // a2 is object type of rhs. @@ -1083,7 +994,7 @@ void ICCompareStub::GenerateGeneric(MacroAssembler* masm) { // If either is a Smi (we know that not both are), then they can only // be strictly equal if the other is a HeapNumber. STATIC_ASSERT(kSmiTag == 0); - ASSERT_EQ(0, Smi::FromInt(0)); + DCHECK_EQ(0, Smi::FromInt(0)); __ And(t2, lhs, Operand(rhs)); __ JumpIfNotSmi(t2, ¬_smis, t0); // One operand is a smi. EmitSmiNonsmiComparison generates code that can: @@ -1127,7 +1038,7 @@ void ICCompareStub::GenerateGeneric(MacroAssembler* masm) { __ bind(&nan); // NaN comparisons always fail. // Load whatever we need in v0 to make the comparison fail. - ASSERT(is_int16(GREATER) && is_int16(LESS)); + DCHECK(is_int16(GREATER) && is_int16(LESS)); __ Ret(USE_DELAY_SLOT); if (cc == lt || cc == le) { __ li(v0, Operand(GREATER)); @@ -1209,7 +1120,7 @@ void ICCompareStub::GenerateGeneric(MacroAssembler* masm) { if (cc == lt || cc == le) { ncr = GREATER; } else { - ASSERT(cc == gt || cc == ge); // Remaining cases. + DCHECK(cc == gt || cc == ge); // Remaining cases. 
ncr = LESS; } __ li(a0, Operand(Smi::FromInt(ncr))); @@ -1228,11 +1139,7 @@ void ICCompareStub::GenerateGeneric(MacroAssembler* masm) { void StoreRegistersStateStub::Generate(MacroAssembler* masm) { __ mov(t9, ra); __ pop(ra); - if (save_doubles_ == kSaveFPRegs) { - __ PushSafepointRegistersAndDoubles(); - } else { - __ PushSafepointRegisters(); - } + __ PushSafepointRegisters(); __ Jump(t9); } @@ -1240,12 +1147,7 @@ void StoreRegistersStateStub::Generate(MacroAssembler* masm) { void RestoreRegistersStateStub::Generate(MacroAssembler* masm) { __ mov(t9, ra); __ pop(ra); - __ StoreToSafepointRegisterSlot(t9, t9); - if (save_doubles_ == kSaveFPRegs) { - __ PopSafepointRegistersAndDoubles(); - } else { - __ PopSafepointRegisters(); - } + __ PopSafepointRegisters(); __ Jump(t9); } @@ -1460,7 +1362,7 @@ void MathPowStub::Generate(MacroAssembler* masm) { if (exponent_type_ == ON_STACK) { // The arguments are still on the stack. __ bind(&call_runtime); - __ TailCallRuntime(Runtime::kHiddenMathPow, 2, 1); + __ TailCallRuntime(Runtime::kMathPowRT, 2, 1); // The stub is called from non-optimized code, which expects the result // as heap number in exponent. @@ -1469,7 +1371,7 @@ void MathPowStub::Generate(MacroAssembler* masm) { heapnumber, scratch, scratch2, heapnumbermap, &call_runtime); __ sdc1(double_result, FieldMemOperand(heapnumber, HeapNumber::kValueOffset)); - ASSERT(heapnumber.is(v0)); + DCHECK(heapnumber.is(v0)); __ IncrementCounter(counters->math_pow(), 1, scratch, scratch2); __ DropAndRet(2); } else { @@ -1511,23 +1413,15 @@ void CodeStub::GenerateStubsAheadOfTime(Isolate* isolate) { } -void StoreRegistersStateStub::GenerateAheadOfTime( - Isolate* isolate) { - StoreRegistersStateStub stub1(isolate, kDontSaveFPRegs); - stub1.GetCode(); - // Hydrogen code stubs need stub2 at snapshot time. - StoreRegistersStateStub stub2(isolate, kSaveFPRegs); - stub2.GetCode(); +void StoreRegistersStateStub::GenerateAheadOfTime(Isolate* isolate) { + StoreRegistersStateStub stub(isolate); + stub.GetCode(); } -void RestoreRegistersStateStub::GenerateAheadOfTime( - Isolate* isolate) { - RestoreRegistersStateStub stub1(isolate, kDontSaveFPRegs); - stub1.GetCode(); - // Hydrogen code stubs need stub2 at snapshot time. - RestoreRegistersStateStub stub2(isolate, kSaveFPRegs); - stub2.GetCode(); +void RestoreRegistersStateStub::GenerateAheadOfTime(Isolate* isolate) { + RestoreRegistersStateStub stub(isolate); + stub.GetCode(); } @@ -1624,7 +1518,7 @@ void CEntryStub::Generate(MacroAssembler* masm) { // Set up sp in the delay slot. masm->addiu(sp, sp, -kCArgsSlotsSize); // Make sure the stored 'ra' points to this position. - ASSERT_EQ(kNumInstructionsToJump, + DCHECK_EQ(kNumInstructionsToJump, masm->InstructionsGeneratedSince(&find_ra)); } @@ -1875,9 +1769,9 @@ void JSEntryStub::GenerateBody(MacroAssembler* masm, bool is_construct) { // in the safepoint slot for register t0. void InstanceofStub::Generate(MacroAssembler* masm) { // Call site inlining and patching implies arguments in registers. - ASSERT(HasArgsInRegisters() || !HasCallSiteInlineCheck()); + DCHECK(HasArgsInRegisters() || !HasCallSiteInlineCheck()); // ReturnTrueFalse is only implemented for inlined call sites. - ASSERT(!ReturnTrueFalseObject() || HasCallSiteInlineCheck()); + DCHECK(!ReturnTrueFalseObject() || HasCallSiteInlineCheck()); // Fixed register usage throughout the stub: const Register object = a0; // Object (lhs). 
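The hunks just below lean on V8's smi encoding. On this 32-bit target a smi is the integer shifted left by the tag size with a zero tag bit, so Smi::FromInt(0) is the machine word 0 (which is what DCHECK(Smi::FromInt(0) == 0) asserts), and the stub can materialize the "is instance" result without loading a constant:

    // kSmiTag == 0 and kSmiTagSize == 1, so FromInt(n) is n << 1
    // and FromInt(0) == 0; zero_reg already holds the answer.
    __ mov(v0, zero_reg);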
@@ -1927,7 +1821,7 @@ void InstanceofStub::Generate(MacroAssembler* masm) { __ StoreRoot(function, Heap::kInstanceofCacheFunctionRootIndex); __ StoreRoot(map, Heap::kInstanceofCacheMapRootIndex); } else { - ASSERT(HasArgsInRegisters()); + DCHECK(HasArgsInRegisters()); // Patch the (relocated) inlined map check. // The offset was stored in t0 safepoint slot. @@ -1957,7 +1851,7 @@ void InstanceofStub::Generate(MacroAssembler* masm) { __ Branch(&loop); __ bind(&is_instance); - ASSERT(Smi::FromInt(0) == 0); + DCHECK(Smi::FromInt(0) == 0); if (!HasCallSiteInlineCheck()) { __ mov(v0, zero_reg); __ StoreRoot(v0, Heap::kInstanceofCacheAnswerRootIndex); @@ -1969,7 +1863,7 @@ void InstanceofStub::Generate(MacroAssembler* masm) { __ PatchRelocatedValue(inline_site, scratch, v0); if (!ReturnTrueFalseObject()) { - ASSERT_EQ(Smi::FromInt(0), 0); + DCHECK_EQ(Smi::FromInt(0), 0); __ mov(v0, zero_reg); } } @@ -2045,31 +1939,12 @@ void InstanceofStub::Generate(MacroAssembler* masm) { void FunctionPrototypeStub::Generate(MacroAssembler* masm) { Label miss; - Register receiver; - if (kind() == Code::KEYED_LOAD_IC) { - // ----------- S t a t e ------------- - // -- ra : return address - // -- a0 : key - // -- a1 : receiver - // ----------------------------------- - __ Branch(&miss, ne, a0, - Operand(isolate()->factory()->prototype_string())); - receiver = a1; - } else { - ASSERT(kind() == Code::LOAD_IC); - // ----------- S t a t e ------------- - // -- a2 : name - // -- ra : return address - // -- a0 : receiver - // -- sp[0] : receiver - // ----------------------------------- - receiver = a0; - } - - StubCompiler::GenerateLoadFunctionPrototype(masm, receiver, a3, t0, &miss); + Register receiver = LoadIC::ReceiverRegister(); + NamedLoadHandlerCompiler::GenerateLoadFunctionPrototype(masm, receiver, a3, + t0, &miss); __ bind(&miss); - StubCompiler::TailCallBuiltin( - masm, BaseLoadStoreStubCompiler::MissBuiltin(kind())); + PropertyAccessCompiler::TailCallBuiltin( + masm, PropertyAccessCompiler::MissBuiltin(Code::LOAD_IC)); } @@ -2154,7 +2029,7 @@ void ArgumentsAccessStub::GenerateNewSloppySlow(MacroAssembler* masm) { __ sw(a3, MemOperand(sp, 1 * kPointerSize)); __ bind(&runtime); - __ TailCallRuntime(Runtime::kHiddenNewArgumentsFast, 3, 1); + __ TailCallRuntime(Runtime::kNewSloppyArguments, 3, 1); } @@ -2209,7 +2084,7 @@ void ArgumentsAccessStub::GenerateNewSloppyFast(MacroAssembler* masm) { FixedArray::kHeaderSize + 2 * kPointerSize; // If there are no mapped parameters, we do not need the parameter_map. Label param_map_size; - ASSERT_EQ(0, Smi::FromInt(0)); + DCHECK_EQ(0, Smi::FromInt(0)); __ Branch(USE_DELAY_SLOT, ¶m_map_size, eq, a1, Operand(zero_reg)); __ mov(t5, zero_reg); // In delay slot: param map size = 0 when a1 == 0. __ sll(t5, a1, 1); @@ -2228,12 +2103,12 @@ void ArgumentsAccessStub::GenerateNewSloppyFast(MacroAssembler* masm) { __ Allocate(t5, v0, a3, t0, &runtime, TAG_OBJECT); // v0 = address of new object(s) (tagged) - // a2 = argument count (tagged) + // a2 = argument count (smi-tagged) // Get the arguments boilerplate from the current native context into t0. 
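From here on, the sloppy- and strict-arguments stubs stop cloning a boilerplate object: the BOILERPLATE context slots become MAP slots, and the object header that used to be copied word by word is now written out explicitly. Both versions appear verbatim in the hunks below:

    // Before: clone the boilerplate's header word by word.
    for (int i = 0; i < JSObject::kHeaderSize; i += kPointerSize) {
      __ lw(a3, FieldMemOperand(t0, i));
      __ sw(a3, FieldMemOperand(v0, i));
    }
    // After: install the map and the empty fixed array directly.
    __ sw(t0, FieldMemOperand(v0, JSObject::kMapOffset));
    __ LoadRoot(a3, Heap::kEmptyFixedArrayRootIndex);
    __ sw(a3, FieldMemOperand(v0, JSObject::kPropertiesOffset));
    __ sw(a3, FieldMemOperand(v0, JSObject::kElementsOffset));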
const int kNormalOffset = - Context::SlotOffset(Context::SLOPPY_ARGUMENTS_BOILERPLATE_INDEX); + Context::SlotOffset(Context::SLOPPY_ARGUMENTS_MAP_INDEX); const int kAliasedOffset = - Context::SlotOffset(Context::ALIASED_ARGUMENTS_BOILERPLATE_INDEX); + Context::SlotOffset(Context::ALIASED_ARGUMENTS_MAP_INDEX); __ lw(t0, MemOperand(cp, Context::SlotOffset(Context::GLOBAL_OBJECT_INDEX))); __ lw(t0, FieldMemOperand(t0, GlobalObject::kNativeContextOffset)); @@ -2248,22 +2123,23 @@ void ArgumentsAccessStub::GenerateNewSloppyFast(MacroAssembler* masm) { // v0 = address of new object (tagged) // a1 = mapped parameter count (tagged) - // a2 = argument count (tagged) - // t0 = address of boilerplate object (tagged) - // Copy the JS object part. - for (int i = 0; i < JSObject::kHeaderSize; i += kPointerSize) { - __ lw(a3, FieldMemOperand(t0, i)); - __ sw(a3, FieldMemOperand(v0, i)); - } + // a2 = argument count (smi-tagged) + // t0 = address of arguments map (tagged) + __ sw(t0, FieldMemOperand(v0, JSObject::kMapOffset)); + __ LoadRoot(a3, Heap::kEmptyFixedArrayRootIndex); + __ sw(a3, FieldMemOperand(v0, JSObject::kPropertiesOffset)); + __ sw(a3, FieldMemOperand(v0, JSObject::kElementsOffset)); // Set up the callee in-object property. STATIC_ASSERT(Heap::kArgumentsCalleeIndex == 1); __ lw(a3, MemOperand(sp, 2 * kPointerSize)); + __ AssertNotSmi(a3); const int kCalleeOffset = JSObject::kHeaderSize + Heap::kArgumentsCalleeIndex * kPointerSize; __ sw(a3, FieldMemOperand(v0, kCalleeOffset)); // Use the length (smi tagged) and set that as an in-object property too. + __ AssertSmi(a2); STATIC_ASSERT(Heap::kArgumentsLengthIndex == 0); const int kLengthOffset = JSObject::kHeaderSize + Heap::kArgumentsLengthIndex * kPointerSize; @@ -2373,7 +2249,7 @@ void ArgumentsAccessStub::GenerateNewSloppyFast(MacroAssembler* masm) { // a2 = argument count (tagged) __ bind(&runtime); __ sw(a2, MemOperand(sp, 0 * kPointerSize)); // Patch argument count. - __ TailCallRuntime(Runtime::kHiddenNewArgumentsFast, 3, 1); + __ TailCallRuntime(Runtime::kNewSloppyArguments, 3, 1); } @@ -2422,15 +2298,18 @@ void ArgumentsAccessStub::GenerateNewStrict(MacroAssembler* masm) { // Get the arguments boilerplate from the current native context. __ lw(t0, MemOperand(cp, Context::SlotOffset(Context::GLOBAL_OBJECT_INDEX))); __ lw(t0, FieldMemOperand(t0, GlobalObject::kNativeContextOffset)); - __ lw(t0, MemOperand(t0, Context::SlotOffset( - Context::STRICT_ARGUMENTS_BOILERPLATE_INDEX))); + __ lw(t0, MemOperand( + t0, Context::SlotOffset(Context::STRICT_ARGUMENTS_MAP_INDEX))); - // Copy the JS object part. - __ CopyFields(v0, t0, a3.bit(), JSObject::kHeaderSize / kPointerSize); + __ sw(t0, FieldMemOperand(v0, JSObject::kMapOffset)); + __ LoadRoot(a3, Heap::kEmptyFixedArrayRootIndex); + __ sw(a3, FieldMemOperand(v0, JSObject::kPropertiesOffset)); + __ sw(a3, FieldMemOperand(v0, JSObject::kElementsOffset)); // Get the length (smi tagged) and set that as an in-object property too. STATIC_ASSERT(Heap::kArgumentsLengthIndex == 0); __ lw(a1, MemOperand(sp, 0 * kPointerSize)); + __ AssertSmi(a1); __ sw(a1, FieldMemOperand(v0, JSObject::kHeaderSize + Heap::kArgumentsLengthIndex * kPointerSize)); @@ -2471,7 +2350,7 @@ void ArgumentsAccessStub::GenerateNewStrict(MacroAssembler* masm) { // Do the runtime call to allocate the arguments object. 
__ bind(&runtime); - __ TailCallRuntime(Runtime::kHiddenNewStrictArgumentsFast, 3, 1); + __ TailCallRuntime(Runtime::kNewStrictArguments, 3, 1); } @@ -2480,7 +2359,7 @@ void RegExpExecStub::Generate(MacroAssembler* masm) { // time or if regexp entry in generated code is turned off runtime switch or // at compilation. #ifdef V8_INTERPRETED_REGEXP - __ TailCallRuntime(Runtime::kHiddenRegExpExec, 4, 1); + __ TailCallRuntime(Runtime::kRegExpExecRT, 4, 1); #else // V8_INTERPRETED_REGEXP // Stack frame on entry. @@ -2618,8 +2497,8 @@ void RegExpExecStub::Generate(MacroAssembler* masm) { STATIC_ASSERT(kSeqStringTag == 0); __ And(at, a0, Operand(kStringRepresentationMask)); // The underlying external string is never a short external string. - STATIC_CHECK(ExternalString::kMaxShortLength < ConsString::kMinLength); - STATIC_CHECK(ExternalString::kMaxShortLength < SlicedString::kMinLength); + STATIC_ASSERT(ExternalString::kMaxShortLength < ConsString::kMinLength); + STATIC_ASSERT(ExternalString::kMaxShortLength < SlicedString::kMinLength); __ Branch(&external_string, ne, at, Operand(zero_reg)); // Go to (7). // (5) Sequential string. Load regexp code according to encoding. @@ -2870,7 +2749,7 @@ void RegExpExecStub::Generate(MacroAssembler* masm) { // Do the runtime call to execute the regexp. __ bind(&runtime); - __ TailCallRuntime(Runtime::kHiddenRegExpExec, 4, 1); + __ TailCallRuntime(Runtime::kRegExpExecRT, 4, 1); // Deferred code for string handling. // (6) Not a long external string? If yes, go to (8). @@ -2926,9 +2805,9 @@ static void GenerateRecordCallTarget(MacroAssembler* masm) { // a3 : slot in feedback vector (Smi) Label initialize, done, miss, megamorphic, not_array_function; - ASSERT_EQ(*TypeFeedbackInfo::MegamorphicSentinel(masm->isolate()), + DCHECK_EQ(*TypeFeedbackInfo::MegamorphicSentinel(masm->isolate()), masm->isolate()->heap()->megamorphic_symbol()); - ASSERT_EQ(*TypeFeedbackInfo::UninitializedSentinel(masm->isolate()), + DCHECK_EQ(*TypeFeedbackInfo::UninitializedSentinel(masm->isolate()), masm->isolate()->heap()->uninitialized_symbol()); // Load the cache state into t0. @@ -3070,11 +2949,13 @@ static void EmitWrapCase(MacroAssembler* masm, int argc, Label* cont) { } -void CallFunctionStub::Generate(MacroAssembler* masm) { +static void CallFunctionNoFeedback(MacroAssembler* masm, + int argc, bool needs_checks, + bool call_as_method) { // a1 : the function to call Label slow, non_function, wrap, cont; - if (NeedsChecks()) { + if (needs_checks) { // Check that the function is really a JavaScript function. // a1: pushed function (to be verified) __ JumpIfSmi(a1, &non_function); @@ -3086,18 +2967,17 @@ void CallFunctionStub::Generate(MacroAssembler* masm) { // Fast-case: Invoke the function now. // a1: pushed function - int argc = argc_; ParameterCount actual(argc); - if (CallAsMethod()) { - if (NeedsChecks()) { + if (call_as_method) { + if (needs_checks) { EmitContinueIfStrictOrNative(masm, &cont); } // Compute the receiver in sloppy mode. __ lw(a3, MemOperand(sp, argc * kPointerSize)); - if (NeedsChecks()) { + if (needs_checks) { __ JumpIfSmi(a3, &wrap); __ GetObjectType(a3, t0, t0); __ Branch(&wrap, lt, t0, Operand(FIRST_SPEC_OBJECT_TYPE)); @@ -3110,13 +2990,13 @@ void CallFunctionStub::Generate(MacroAssembler* masm) { __ InvokeFunction(a1, actual, JUMP_FUNCTION, NullCallWrapper()); - if (NeedsChecks()) { + if (needs_checks) { // Slow-case: Non-function called. 
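// NOTE: not patch code; a plain-C++ restatement of the two assertion families
// being renamed above (STATIC_CHECK -> STATIC_ASSERT, ASSERT -> DCHECK).
// MY_DCHECK is a hypothetical stand-in for V8's macro.
#include <cstdint>
#include <cassert>

// Compile-time: evaluated by the compiler, emits no instructions. This is
// what STATIC_ASSERT provides for layout facts such as
// ExternalString::kMaxShortLength < ConsString::kMinLength above.
static_assert(sizeof(int32_t) == 4, "checked during compilation");

// Debug-only runtime check: present in debug builds, compiled out of release
// builds -- which is why the rename to DCHECK ("debug check") is more honest
// than the old ASSERT name.
#ifdef DEBUG
#define MY_DCHECK(condition) assert(condition)
#else
#define MY_DCHECK(condition) ((void)0)
#endif

int Divide(int a, int b) {
  MY_DCHECK(b != 0);  // caught while developing, free in production
  return a / b;
}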
__ bind(&slow); EmitSlowCase(masm, argc, &non_function); } - if (CallAsMethod()) { + if (call_as_method) { __ bind(&wrap); // Wrap the receiver and patch it back onto the stack. EmitWrapCase(masm, argc, &cont); @@ -3124,6 +3004,11 @@ void CallFunctionStub::Generate(MacroAssembler* masm) { } +void CallFunctionStub::Generate(MacroAssembler* masm) { + CallFunctionNoFeedback(masm, argc_, NeedsChecks(), CallAsMethod()); +} + + void CallConstructStub::Generate(MacroAssembler* masm) { // a0 : number of arguments // a1 : the function to call @@ -3183,8 +3068,8 @@ void CallConstructStub::Generate(MacroAssembler* masm) { __ bind(&do_call); // Set expected number of arguments to zero (not changing r0). __ li(a2, Operand(0, RelocInfo::NONE32)); - __ Jump(isolate()->builtins()->ArgumentsAdaptorTrampoline(), - RelocInfo::CODE_TARGET); + __ Jump(masm->isolate()->builtins()->ArgumentsAdaptorTrampoline(), + RelocInfo::CODE_TARGET); } @@ -3197,6 +3082,44 @@ static void EmitLoadTypeFeedbackVector(MacroAssembler* masm, Register vector) { } +void CallIC_ArrayStub::Generate(MacroAssembler* masm) { + // a1 - function + // a3 - slot id + Label miss; + + EmitLoadTypeFeedbackVector(masm, a2); + + __ LoadGlobalFunction(Context::ARRAY_FUNCTION_INDEX, at); + __ Branch(&miss, ne, a1, Operand(at)); + + __ li(a0, Operand(arg_count())); + __ sll(at, a3, kPointerSizeLog2 - kSmiTagSize); + __ Addu(at, a2, Operand(at)); + __ lw(t0, FieldMemOperand(at, FixedArray::kHeaderSize)); + + // Verify that t0 contains an AllocationSite + __ lw(t1, FieldMemOperand(t0, HeapObject::kMapOffset)); + __ LoadRoot(at, Heap::kAllocationSiteMapRootIndex); + __ Branch(&miss, ne, t1, Operand(at)); + + __ mov(a2, t0); + ArrayConstructorStub stub(masm->isolate(), arg_count()); + __ TailCallStub(&stub); + + __ bind(&miss); + GenerateMiss(masm, IC::kCallIC_Customization_Miss); + + // The slow case, we need this no matter what to complete a call after a miss. + CallFunctionNoFeedback(masm, + arg_count(), + true, + CallAsMethod()); + + // Unreachable. + __ stop("Unexpected code address"); +} + + void CallICStub::Generate(MacroAssembler* masm) { // r1 - function // r3 - slot id (Smi) @@ -3246,7 +3169,11 @@ void CallICStub::Generate(MacroAssembler* masm) { __ Branch(&miss, eq, t0, Operand(at)); if (!FLAG_trace_ic) { - // We are going megamorphic, and we don't want to visit the runtime. + // We are going megamorphic. If the feedback is a JSFunction, it is fine + // to handle it here. More complex cases are dealt with in the runtime. + __ AssertNotSmi(t0); + __ GetObjectType(t0, t1, t1); + __ Branch(&miss, ne, t1, Operand(JS_FUNCTION_TYPE)); __ sll(t0, a3, kPointerSizeLog2 - kSmiTagSize); __ Addu(t0, a2, Operand(t0)); __ LoadRoot(at, Heap::kMegamorphicSymbolRootIndex); @@ -3256,7 +3183,7 @@ void CallICStub::Generate(MacroAssembler* masm) { // We are here because tracing is on or we are going monomorphic. __ bind(&miss); - GenerateMiss(masm); + GenerateMiss(masm, IC::kCallIC_Miss); // the slow case __ bind(&slow_start); @@ -3271,7 +3198,7 @@ void CallICStub::Generate(MacroAssembler* masm) { } -void CallICStub::GenerateMiss(MacroAssembler* masm) { +void CallICStub::GenerateMiss(MacroAssembler* masm, IC::UtilityId id) { // Get the receiver of the function from the stack; 1 ~ return address. __ lw(t0, MemOperand(sp, (state_.arg_count() + 1) * kPointerSize)); @@ -3282,7 +3209,7 @@ void CallICStub::GenerateMiss(MacroAssembler* masm) { __ Push(t0, a1, a2, a3); // Call the entry. 
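// NOTE: sketch only, not from the patch. The change above threads an
// IC::UtilityId through GenerateMiss() so one emitter can target either the
// generic CallIC miss handler or the new Array-specialized handler used by
// CallIC_ArrayStub. The shape of that refactoring, with hypothetical names:
#include <cstdio>

enum UtilityId { kCallIC_Miss, kCallIC_Customization_Miss };

void GenerateMiss(UtilityId id) {
  // Same code path; the runtime entry is selected by the caller.
  const char* entry = (id == kCallIC_Miss) ? "CallIC_Miss"
                                           : "CallIC_Customization_Miss";
  std::printf("call runtime entry %s with 4 args\n", entry);
}

int main() {
  GenerateMiss(kCallIC_Miss);                // from CallICStub::Generate
  GenerateMiss(kCallIC_Customization_Miss);  // from CallIC_ArrayStub::Generate
}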
- ExternalReference miss = ExternalReference(IC_Utility(IC::kCallIC_Miss), + ExternalReference miss = ExternalReference(IC_Utility(id), masm->isolate()); __ CallExternalReference(miss, 4); @@ -3299,9 +3226,9 @@ void StringCharCodeAtGenerator::GenerateFast(MacroAssembler* masm) { Label got_char_code; Label sliced_string; - ASSERT(!t0.is(index_)); - ASSERT(!t0.is(result_)); - ASSERT(!t0.is(object_)); + DCHECK(!t0.is(index_)); + DCHECK(!t0.is(result_)); + DCHECK(!t0.is(object_)); // If the receiver is a smi trigger the non-string case. __ JumpIfSmi(object_, receiver_not_string_); @@ -3354,9 +3281,9 @@ void StringCharCodeAtGenerator::GenerateSlow( if (index_flags_ == STRING_INDEX_IS_NUMBER) { __ CallRuntime(Runtime::kNumberToIntegerMapMinusZero, 1); } else { - ASSERT(index_flags_ == STRING_INDEX_IS_ARRAY_INDEX); + DCHECK(index_flags_ == STRING_INDEX_IS_ARRAY_INDEX); // NumberToSmi discards numbers that are not exact integers. - __ CallRuntime(Runtime::kHiddenNumberToSmi, 1); + __ CallRuntime(Runtime::kNumberToSmi, 1); } // Save the conversion result before the pop instructions below @@ -3380,7 +3307,7 @@ void StringCharCodeAtGenerator::GenerateSlow( call_helper.BeforeCall(masm); __ sll(index_, index_, kSmiTagSize); __ Push(object_, index_); - __ CallRuntime(Runtime::kHiddenStringCharCodeAt, 2); + __ CallRuntime(Runtime::kStringCharCodeAtRT, 2); __ Move(result_, v0); @@ -3397,12 +3324,12 @@ void StringCharCodeAtGenerator::GenerateSlow( void StringCharFromCodeGenerator::GenerateFast(MacroAssembler* masm) { // Fast case of Heap::LookupSingleCharacterStringFromCode. - ASSERT(!t0.is(result_)); - ASSERT(!t0.is(code_)); + DCHECK(!t0.is(result_)); + DCHECK(!t0.is(code_)); STATIC_ASSERT(kSmiTag == 0); STATIC_ASSERT(kSmiShiftSize == 0); - ASSERT(IsPowerOf2(String::kMaxOneByteCharCode + 1)); + DCHECK(IsPowerOf2(String::kMaxOneByteCharCode + 1)); __ And(t0, code_, Operand(kSmiTagMask | @@ -3445,119 +3372,42 @@ enum CopyCharactersFlags { }; -void StringHelper::GenerateCopyCharactersLong(MacroAssembler* masm, - Register dest, - Register src, - Register count, - Register scratch1, - Register scratch2, - Register scratch3, - Register scratch4, - Register scratch5, - int flags) { - bool ascii = (flags & COPY_ASCII) != 0; - bool dest_always_aligned = (flags & DEST_ALWAYS_ALIGNED) != 0; - - if (dest_always_aligned && FLAG_debug_code) { - // Check that destination is actually word aligned if the flag says - // that it is. - __ And(scratch4, dest, Operand(kPointerAlignmentMask)); +void StringHelper::GenerateCopyCharacters(MacroAssembler* masm, + Register dest, + Register src, + Register count, + Register scratch, + String::Encoding encoding) { + if (FLAG_debug_code) { + // Check that destination is word aligned. + __ And(scratch, dest, Operand(kPointerAlignmentMask)); __ Check(eq, kDestinationOfCopyNotAligned, - scratch4, + scratch, Operand(zero_reg)); } - const int kReadAlignment = 4; - const int kReadAlignmentMask = kReadAlignment - 1; - // Ensure that reading an entire aligned word containing the last character - // of a string will not read outside the allocated area (because we pad up - // to kObjectAlignment). - STATIC_ASSERT(kObjectAlignment >= kReadAlignment); // Assumes word reads and writes are little endian. // Nothing to do for zero characters. Label done; - if (!ascii) { - __ addu(count, count, count); - } - __ Branch(&done, eq, count, Operand(zero_reg)); - - Label byte_loop; - // Must copy at least eight bytes, otherwise just do it one byte at a time. 
- __ Subu(scratch1, count, Operand(8)); - __ Addu(count, dest, Operand(count)); - Register limit = count; // Read until src equals this. - __ Branch(&byte_loop, lt, scratch1, Operand(zero_reg)); - - if (!dest_always_aligned) { - // Align dest by byte copying. Copies between zero and three bytes. - __ And(scratch4, dest, Operand(kReadAlignmentMask)); - Label dest_aligned; - __ Branch(&dest_aligned, eq, scratch4, Operand(zero_reg)); - Label aligned_loop; - __ bind(&aligned_loop); - __ lbu(scratch1, MemOperand(src)); - __ addiu(src, src, 1); - __ sb(scratch1, MemOperand(dest)); - __ addiu(dest, dest, 1); - __ addiu(scratch4, scratch4, 1); - __ Branch(&aligned_loop, le, scratch4, Operand(kReadAlignmentMask)); - __ bind(&dest_aligned); - } - - Label simple_loop; - - __ And(scratch4, src, Operand(kReadAlignmentMask)); - __ Branch(&simple_loop, eq, scratch4, Operand(zero_reg)); - - // Loop for src/dst that are not aligned the same way. - // This loop uses lwl and lwr instructions. These instructions - // depend on the endianness, and the implementation assumes little-endian. - { - Label loop; - __ bind(&loop); - if (kArchEndian == kBig) { - __ lwl(scratch1, MemOperand(src)); - __ Addu(src, src, Operand(kReadAlignment)); - __ lwr(scratch1, MemOperand(src, -1)); - } else { - __ lwr(scratch1, MemOperand(src)); - __ Addu(src, src, Operand(kReadAlignment)); - __ lwl(scratch1, MemOperand(src, -1)); - } - __ sw(scratch1, MemOperand(dest)); - __ Addu(dest, dest, Operand(kReadAlignment)); - __ Subu(scratch2, limit, dest); - __ Branch(&loop, ge, scratch2, Operand(kReadAlignment)); + if (encoding == String::TWO_BYTE_ENCODING) { + __ Addu(count, count, count); } - __ Branch(&byte_loop); - - // Simple loop. - // Copy words from src to dest, until less than four bytes left. - // Both src and dest are word aligned. - __ bind(&simple_loop); - { - Label loop; - __ bind(&loop); - __ lw(scratch1, MemOperand(src)); - __ Addu(src, src, Operand(kReadAlignment)); - __ sw(scratch1, MemOperand(dest)); - __ Addu(dest, dest, Operand(kReadAlignment)); - __ Subu(scratch2, limit, dest); - __ Branch(&loop, ge, scratch2, Operand(kReadAlignment)); - } + Register limit = count; // Read until dest equals this. + __ Addu(limit, dest, Operand(count)); + Label loop_entry, loop; // Copy bytes from src to dest until dest hits limit. - __ bind(&byte_loop); - // Test if dest has already reached the limit. - __ Branch(&done, ge, dest, Operand(limit)); - __ lbu(scratch1, MemOperand(src)); - __ addiu(src, src, 1); - __ sb(scratch1, MemOperand(dest)); - __ addiu(dest, dest, 1); - __ Branch(&byte_loop); + __ Branch(&loop_entry); + __ bind(&loop); + __ lbu(scratch, MemOperand(src)); + __ Addu(src, src, Operand(1)); + __ sb(scratch, MemOperand(dest)); + __ Addu(dest, dest, Operand(1)); + __ bind(&loop_entry); + __ Branch(&loop, lt, dest, Operand(limit)); __ bind(&done); } @@ -3759,7 +3609,7 @@ void SubStringStub::Generate(MacroAssembler* masm) { // Handle external string. // Rule out short external strings. 
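// NOTE: not part of the patch: a C++ rendering of what the new, simplified
// GenerateCopyCharacters above emits. The old word-aligned lwl/lwr fast paths
// are deleted; the stub now copies one byte at a time, doubling the count
// first for two-byte strings, with the loop test at the bottom and an initial
// branch to it (mirroring the emitted control flow).
#include <cstddef>
#include <cstdint>

enum Encoding { ONE_BYTE_ENCODING, TWO_BYTE_ENCODING };

void CopyCharacters(uint8_t* dest, const uint8_t* src, size_t count,
                    Encoding encoding) {
  if (encoding == TWO_BYTE_ENCODING) count += count;  // bytes, not chars
  const uint8_t* limit = dest + count;  // copy until dest reaches this
  while (dest < limit) {                // entered via the loop test, as above
    *dest++ = *src++;
  }
  // dest now points one past the last byte written, matching the header
  // comment's contract for the stub.
}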
- STATIC_CHECK(kShortExternalStringTag != 0); + STATIC_ASSERT(kShortExternalStringTag != 0); __ And(t0, a1, Operand(kShortExternalStringTag)); __ Branch(&runtime, ne, t0, Operand(zero_reg)); __ lw(t1, FieldMemOperand(t1, ExternalString::kResourceDataOffset)); @@ -3791,8 +3641,8 @@ void SubStringStub::Generate(MacroAssembler* masm) { // a2: result string length // t1: first character of substring to copy STATIC_ASSERT((SeqOneByteString::kHeaderSize & kObjectAlignmentMask) == 0); - StringHelper::GenerateCopyCharactersLong( - masm, a1, t1, a2, a3, t0, t2, t3, t4, COPY_ASCII | DEST_ALWAYS_ALIGNED); + StringHelper::GenerateCopyCharacters( + masm, a1, t1, a2, a3, String::ONE_BYTE_ENCODING); __ jmp(&return_v0); // Allocate and copy the resulting two-byte string. @@ -3811,8 +3661,8 @@ void SubStringStub::Generate(MacroAssembler* masm) { // a2: result length. // t1: first character of substring to copy. STATIC_ASSERT((SeqTwoByteString::kHeaderSize & kObjectAlignmentMask) == 0); - StringHelper::GenerateCopyCharactersLong( - masm, a1, t1, a2, a3, t0, t2, t3, t4, DEST_ALWAYS_ALIGNED); + StringHelper::GenerateCopyCharacters( + masm, a1, t1, a2, a3, String::TWO_BYTE_ENCODING); __ bind(&return_v0); Counters* counters = isolate()->counters(); @@ -3821,7 +3671,7 @@ void SubStringStub::Generate(MacroAssembler* masm) { // Just jump to runtime to create the sub string. __ bind(&runtime); - __ TailCallRuntime(Runtime::kHiddenSubString, 3, 1); + __ TailCallRuntime(Runtime::kSubString, 3, 1); __ bind(&single_char); // v0: original string @@ -3851,7 +3701,7 @@ void StringCompareStub::GenerateFlatAsciiStringEquals(MacroAssembler* masm, __ lw(scratch2, FieldMemOperand(right, String::kLengthOffset)); __ Branch(&check_zero_length, eq, length, Operand(scratch2)); __ bind(&strings_not_equal); - ASSERT(is_int16(NOT_EQUAL)); + DCHECK(is_int16(NOT_EQUAL)); __ Ret(USE_DELAY_SLOT); __ li(v0, Operand(Smi::FromInt(NOT_EQUAL))); @@ -3860,7 +3710,7 @@ void StringCompareStub::GenerateFlatAsciiStringEquals(MacroAssembler* masm, __ bind(&check_zero_length); STATIC_ASSERT(kSmiTag == 0); __ Branch(&compare_chars, ne, length, Operand(zero_reg)); - ASSERT(is_int16(EQUAL)); + DCHECK(is_int16(EQUAL)); __ Ret(USE_DELAY_SLOT); __ li(v0, Operand(Smi::FromInt(EQUAL))); @@ -3903,7 +3753,7 @@ void StringCompareStub::GenerateCompareFlatAsciiStrings(MacroAssembler* masm, // Compare lengths - strings up to min-length are equal. __ bind(&compare_lengths); - ASSERT(Smi::FromInt(EQUAL) == static_cast<Smi*>(0)); + DCHECK(Smi::FromInt(EQUAL) == static_cast<Smi*>(0)); // Use length_delta as result if it's zero. 
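// NOTE: illustration, not patch code: the contract the flat-string compare
// stubs around here implement. EQUAL is 0 (hence the DCHECK that
// Smi::FromInt(EQUAL) is the zero smi), so the length delta can be returned
// directly when the common prefix matches; LESS/GREATER are -1/+1.
#include <cstddef>

enum { LESS = -1, EQUAL = 0, GREATER = 1 };

int CompareFlatStrings(const char* a, size_t len_a,
                       const char* b, size_t len_b) {
  size_t min_len = len_a < len_b ? len_a : len_b;
  for (size_t i = 0; i < min_len; i++) {
    if (a[i] != b[i]) return a[i] < b[i] ? LESS : GREATER;
  }
  // Equal up to min_len: the length difference decides, and a zero delta
  // doubles as EQUAL -- exactly what "use length_delta as result if it's
  // zero" means above.
  if (len_a == len_b) return EQUAL;
  return len_a < len_b ? LESS : GREATER;
}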
__ mov(scratch2, length_delta); __ mov(scratch4, zero_reg); @@ -3986,7 +3836,7 @@ void StringCompareStub::Generate(MacroAssembler* masm) { GenerateCompareFlatAsciiStrings(masm, a1, a0, a2, a3, t0, t1); __ bind(&runtime); - __ TailCallRuntime(Runtime::kHiddenStringCompare, 2, 1); + __ TailCallRuntime(Runtime::kStringCompare, 2, 1); } @@ -4019,7 +3869,7 @@ void BinaryOpICWithAllocationSiteStub::Generate(MacroAssembler* masm) { void ICCompareStub::GenerateSmis(MacroAssembler* masm) { - ASSERT(state_ == CompareIC::SMI); + DCHECK(state_ == CompareIC::SMI); Label miss; __ Or(a2, a1, a0); __ JumpIfNotSmi(a2, &miss); @@ -4042,7 +3892,7 @@ void ICCompareStub::GenerateSmis(MacroAssembler* masm) { void ICCompareStub::GenerateNumbers(MacroAssembler* masm) { - ASSERT(state_ == CompareIC::NUMBER); + DCHECK(state_ == CompareIC::NUMBER); Label generic_stub; Label unordered, maybe_undefined1, maybe_undefined2; @@ -4095,7 +3945,7 @@ void ICCompareStub::GenerateNumbers(MacroAssembler* masm) { __ BranchF(&fpu_lt, NULL, lt, f0, f2); // Otherwise it's greater, so just fall thru, and return. - ASSERT(is_int16(GREATER) && is_int16(EQUAL) && is_int16(LESS)); + DCHECK(is_int16(GREATER) && is_int16(EQUAL) && is_int16(LESS)); __ Ret(USE_DELAY_SLOT); __ li(v0, Operand(GREATER)); @@ -4135,7 +3985,7 @@ void ICCompareStub::GenerateNumbers(MacroAssembler* masm) { void ICCompareStub::GenerateInternalizedStrings(MacroAssembler* masm) { - ASSERT(state_ == CompareIC::INTERNALIZED_STRING); + DCHECK(state_ == CompareIC::INTERNALIZED_STRING); Label miss; // Registers containing left and right operands respectively. @@ -4159,13 +4009,13 @@ void ICCompareStub::GenerateInternalizedStrings(MacroAssembler* masm) { // Make sure a0 is non-zero. At this point input operands are // guaranteed to be non-zero. - ASSERT(right.is(a0)); + DCHECK(right.is(a0)); STATIC_ASSERT(EQUAL == 0); STATIC_ASSERT(kSmiTag == 0); __ mov(v0, right); // Internalized strings are compared by identity. __ Ret(ne, left, Operand(right)); - ASSERT(is_int16(EQUAL)); + DCHECK(is_int16(EQUAL)); __ Ret(USE_DELAY_SLOT); __ li(v0, Operand(Smi::FromInt(EQUAL))); @@ -4175,8 +4025,8 @@ void ICCompareStub::GenerateInternalizedStrings(MacroAssembler* masm) { void ICCompareStub::GenerateUniqueNames(MacroAssembler* masm) { - ASSERT(state_ == CompareIC::UNIQUE_NAME); - ASSERT(GetCondition() == eq); + DCHECK(state_ == CompareIC::UNIQUE_NAME); + DCHECK(GetCondition() == eq); Label miss; // Registers containing left and right operands respectively. @@ -4206,7 +4056,7 @@ void ICCompareStub::GenerateUniqueNames(MacroAssembler* masm) { __ Branch(&done, ne, left, Operand(right)); // Make sure a0 is non-zero. At this point input operands are // guaranteed to be non-zero. - ASSERT(right.is(a0)); + DCHECK(right.is(a0)); STATIC_ASSERT(EQUAL == 0); STATIC_ASSERT(kSmiTag == 0); __ li(v0, Operand(Smi::FromInt(EQUAL))); @@ -4219,7 +4069,7 @@ void ICCompareStub::GenerateUniqueNames(MacroAssembler* masm) { void ICCompareStub::GenerateStrings(MacroAssembler* masm) { - ASSERT(state_ == CompareIC::STRING); + DCHECK(state_ == CompareIC::STRING); Label miss; bool equality = Token::IsEqualityOp(op_); @@ -4262,7 +4112,7 @@ void ICCompareStub::GenerateStrings(MacroAssembler* masm) { // because we already know they are not identical. We know they are both // strings. 
if (equality) { - ASSERT(GetCondition() == eq); + DCHECK(GetCondition() == eq); STATIC_ASSERT(kInternalizedTag == 0); __ Or(tmp3, tmp1, Operand(tmp2)); __ And(tmp5, tmp3, Operand(kIsNotInternalizedMask)); @@ -4270,7 +4120,7 @@ void ICCompareStub::GenerateStrings(MacroAssembler* masm) { __ Branch(&is_symbol, ne, tmp5, Operand(zero_reg)); // Make sure a0 is non-zero. At this point input operands are // guaranteed to be non-zero. - ASSERT(right.is(a0)); + DCHECK(right.is(a0)); __ Ret(USE_DELAY_SLOT); __ mov(v0, a0); // In the delay slot. __ bind(&is_symbol); @@ -4296,7 +4146,7 @@ void ICCompareStub::GenerateStrings(MacroAssembler* masm) { if (equality) { __ TailCallRuntime(Runtime::kStringEquals, 2, 1); } else { - __ TailCallRuntime(Runtime::kHiddenStringCompare, 2, 1); + __ TailCallRuntime(Runtime::kStringCompare, 2, 1); } __ bind(&miss); @@ -4305,7 +4155,7 @@ void ICCompareStub::GenerateStrings(MacroAssembler* masm) { void ICCompareStub::GenerateObjects(MacroAssembler* masm) { - ASSERT(state_ == CompareIC::OBJECT); + DCHECK(state_ == CompareIC::OBJECT); Label miss; __ And(a2, a1, Operand(a0)); __ JumpIfSmi(a2, &miss); @@ -4315,7 +4165,7 @@ void ICCompareStub::GenerateObjects(MacroAssembler* masm) { __ GetObjectType(a1, a2, a2); __ Branch(&miss, ne, a2, Operand(JS_OBJECT_TYPE)); - ASSERT(GetCondition() == eq); + DCHECK(GetCondition() == eq); __ Ret(USE_DELAY_SLOT); __ subu(v0, a0, a1); @@ -4404,7 +4254,7 @@ void NameDictionaryLookupStub::GenerateNegativeLookup(MacroAssembler* masm, Register properties, Handle<Name> name, Register scratch0) { - ASSERT(name->IsUniqueName()); + DCHECK(name->IsUniqueName()); // If names of slots in range from 1 to kProbes - 1 for the hash value are // not equal to the name and kProbes-th slot is not used (its name is the // undefined value), it guarantees the hash table doesn't contain the @@ -4421,19 +4271,19 @@ void NameDictionaryLookupStub::GenerateNegativeLookup(MacroAssembler* masm, Smi::FromInt(name->Hash() + NameDictionary::GetProbeOffset(i)))); // Scale the index by multiplying by the entry size. - ASSERT(NameDictionary::kEntrySize == 3); + DCHECK(NameDictionary::kEntrySize == 3); __ sll(at, index, 1); __ Addu(index, index, at); Register entity_name = scratch0; // Having undefined at this place means the name is not contained. - ASSERT_EQ(kSmiTagSize, 1); + DCHECK_EQ(kSmiTagSize, 1); Register tmp = properties; __ sll(scratch0, index, 1); __ Addu(tmp, properties, scratch0); __ lw(entity_name, FieldMemOperand(tmp, kElementsStartOffset)); - ASSERT(!tmp.is(entity_name)); + DCHECK(!tmp.is(entity_name)); __ LoadRoot(tmp, Heap::kUndefinedValueRootIndex); __ Branch(done, eq, entity_name, Operand(tmp)); @@ -4486,10 +4336,10 @@ void NameDictionaryLookupStub::GeneratePositiveLookup(MacroAssembler* masm, Register name, Register scratch1, Register scratch2) { - ASSERT(!elements.is(scratch1)); - ASSERT(!elements.is(scratch2)); - ASSERT(!name.is(scratch1)); - ASSERT(!name.is(scratch2)); + DCHECK(!elements.is(scratch1)); + DCHECK(!elements.is(scratch2)); + DCHECK(!name.is(scratch1)); + DCHECK(!name.is(scratch2)); __ AssertName(name); @@ -4508,7 +4358,7 @@ void NameDictionaryLookupStub::GeneratePositiveLookup(MacroAssembler* masm, // Add the probe offset (i + i * i) left shifted to avoid right shifting // the hash in a separate instruction. The value hash + i + i * i is right // shifted in the following and instruction. 
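// NOTE: sketch of the dictionary probe sequence used by the lookup stubs
// above; not patch code. GetProbeOffset(i) is the triangular number
// (i + i*i)/2, and with a power-of-two capacity that quadratic sequence
// eventually visits every bucket. Entries are 3 words wide, so the stub
// scales the index by 3 with a shift and an add
// (index * 3 == (index << 1) + index), just like the sll/Addu pairs above.
#include <cstdint>

uint32_t ProbeSlot(uint32_t hash, uint32_t i, uint32_t capacity) {
  uint32_t mask = capacity - 1;          // capacity is a power of two
  uint32_t probe = (i + i * i) / 2;      // triangular probe offset
  uint32_t index = (hash + probe) & mask;
  const uint32_t kEntrySize = 3;         // key, value, details
  return index * kEntrySize;             // word index into the elements array
}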
- ASSERT(NameDictionary::GetProbeOffset(i) < + DCHECK(NameDictionary::GetProbeOffset(i) < 1 << (32 - Name::kHashFieldOffset)); __ Addu(scratch2, scratch2, Operand( NameDictionary::GetProbeOffset(i) << Name::kHashShift)); @@ -4517,7 +4367,7 @@ void NameDictionaryLookupStub::GeneratePositiveLookup(MacroAssembler* masm, __ And(scratch2, scratch1, scratch2); // Scale the index by multiplying by the element size. - ASSERT(NameDictionary::kEntrySize == 3); + DCHECK(NameDictionary::kEntrySize == 3); // scratch2 = scratch2 * 3. __ sll(at, scratch2, 1); @@ -4537,7 +4387,7 @@ void NameDictionaryLookupStub::GeneratePositiveLookup(MacroAssembler* masm, __ MultiPush(spill_mask); if (name.is(a0)) { - ASSERT(!elements.is(a1)); + DCHECK(!elements.is(a1)); __ Move(a1, name); __ Move(a0, elements); } else { @@ -4593,7 +4443,7 @@ void NameDictionaryLookupStub::Generate(MacroAssembler* masm) { // Add the probe offset (i + i * i) left shifted to avoid right shifting // the hash in a separate instruction. The value hash + i + i * i is right // shifted in the following and instruction. - ASSERT(NameDictionary::GetProbeOffset(i) < + DCHECK(NameDictionary::GetProbeOffset(i) < 1 << (32 - Name::kHashFieldOffset)); __ Addu(index, hash, Operand( NameDictionary::GetProbeOffset(i) << Name::kHashShift)); @@ -4604,14 +4454,14 @@ void NameDictionaryLookupStub::Generate(MacroAssembler* masm) { __ And(index, mask, index); // Scale the index by multiplying by the entry size. - ASSERT(NameDictionary::kEntrySize == 3); + DCHECK(NameDictionary::kEntrySize == 3); // index *= 3. __ mov(at, index); __ sll(index, index, 1); __ Addu(index, index, at); - ASSERT_EQ(kSmiTagSize, 1); + DCHECK_EQ(kSmiTagSize, 1); __ sll(index, index, 2); __ Addu(index, index, dictionary); __ lw(entry_key, FieldMemOperand(index, kElementsStartOffset)); @@ -4660,11 +4510,6 @@ void StoreBufferOverflowStub::GenerateFixedRegStubsAheadOfTime( } -bool CodeStub::CanUseFPRegisters() { - return true; // FPU is a base requirement for V8. -} - - // Takes the input in 3 registers: address_ value_ and object_. A pointer to // the value has just been written into the object, now this stub makes sure // we keep the GC informed. The word in the object where the value has been @@ -4753,8 +4598,8 @@ void RecordWriteStub::InformIncrementalMarker(MacroAssembler* masm) { __ PrepareCallCFunction(argument_count, regs_.scratch0()); Register address = a0.is(regs_.address()) ? regs_.scratch0() : regs_.address(); - ASSERT(!address.is(regs_.object())); - ASSERT(!address.is(a0)); + DCHECK(!address.is(regs_.object())); + DCHECK(!address.is(a0)); __ Move(address, regs_.address()); __ Move(a0, regs_.object()); __ Move(a1, address); @@ -4922,7 +4767,7 @@ void StoreArrayLiteralElementStub::Generate(MacroAssembler* masm) { void StubFailureTrampolineStub::Generate(MacroAssembler* masm) { - CEntryStub ces(isolate(), 1, fp_registers_ ? 
kSaveFPRegs : kDontSaveFPRegs); + CEntryStub ces(isolate(), 1, kSaveFPRegs); __ Call(ces.GetCode(), RelocInfo::CODE_TARGET); int parameter_count_offset = StubFailureTrampolineFrame::kCallerStackParameterCountFrameOffset; @@ -4975,7 +4820,7 @@ void ProfileEntryHookStub::Generate(MacroAssembler* masm) { int frame_alignment = masm->ActivationFrameAlignment(); if (frame_alignment > kPointerSize) { __ mov(s5, sp); - ASSERT(IsPowerOf2(frame_alignment)); + DCHECK(IsPowerOf2(frame_alignment)); __ And(sp, sp, Operand(-frame_alignment)); } __ Subu(sp, sp, kCArgsSlotsSize); @@ -5042,12 +4887,12 @@ static void CreateArrayDispatchOneArgument(MacroAssembler* masm, // sp[0] - last argument Label normal_sequence; if (mode == DONT_OVERRIDE) { - ASSERT(FAST_SMI_ELEMENTS == 0); - ASSERT(FAST_HOLEY_SMI_ELEMENTS == 1); - ASSERT(FAST_ELEMENTS == 2); - ASSERT(FAST_HOLEY_ELEMENTS == 3); - ASSERT(FAST_DOUBLE_ELEMENTS == 4); - ASSERT(FAST_HOLEY_DOUBLE_ELEMENTS == 5); + DCHECK(FAST_SMI_ELEMENTS == 0); + DCHECK(FAST_HOLEY_SMI_ELEMENTS == 1); + DCHECK(FAST_ELEMENTS == 2); + DCHECK(FAST_HOLEY_ELEMENTS == 3); + DCHECK(FAST_DOUBLE_ELEMENTS == 4); + DCHECK(FAST_HOLEY_DOUBLE_ELEMENTS == 5); // is the low bit set? If so, we are holey and that is good. __ And(at, a3, Operand(1)); @@ -5274,7 +5119,7 @@ void InternalArrayConstructorStub::Generate(MacroAssembler* masm) { // but the following bit field extraction takes care of that anyway. __ lbu(a3, FieldMemOperand(a3, Map::kBitField2Offset)); // Retrieve elements_kind from bit field 2. - __ Ext(a3, a3, Map::kElementsKindShift, Map::kElementsKindBitCount); + __ DecodeField<Map::ElementsKindBits>(a3); if (FLAG_debug_code) { Label done; @@ -5355,7 +5200,7 @@ void CallApiFunctionStub::Generate(MacroAssembler* masm) { FrameScope frame_scope(masm, StackFrame::MANUAL); __ EnterExitFrame(false, kApiStackSpace); - ASSERT(!api_function_address.is(a0) && !scratch.is(a0)); + DCHECK(!api_function_address.is(a0) && !scratch.is(a0)); // a0 = FunctionCallbackInfo& // Arguments is after the return address. __ Addu(a0, sp, Operand(1 * kPointerSize)); diff --git a/deps/v8/src/mips/code-stubs-mips.h b/deps/v8/src/mips/code-stubs-mips.h index 64577f903dc..0e3f1c3fa65 100644 --- a/deps/v8/src/mips/code-stubs-mips.h +++ b/deps/v8/src/mips/code-stubs-mips.h @@ -5,7 +5,7 @@ #ifndef V8_MIPS_CODE_STUBS_ARM_H_ #define V8_MIPS_CODE_STUBS_ARM_H_ -#include "ic-inl.h" +#include "src/ic-inl.h" namespace v8 { @@ -28,8 +28,8 @@ class StoreBufferOverflowStub: public PlatformCodeStub { private: SaveFPRegsMode save_doubles_; - Major MajorKey() { return StoreBufferOverflow; } - int MinorKey() { return (save_doubles_ == kSaveFPRegs) ? 1 : 0; } + Major MajorKey() const { return StoreBufferOverflow; } + int MinorKey() const { return (save_doubles_ == kSaveFPRegs) ? 1 : 0; } }; @@ -39,16 +39,12 @@ class StringHelper : public AllStatic { // is allowed to spend extra time setting up conditions to make copying // faster. Copying of overlapping regions is not supported. // Dest register ends at the position after the last character written. - static void GenerateCopyCharactersLong(MacroAssembler* masm, - Register dest, - Register src, - Register count, - Register scratch1, - Register scratch2, - Register scratch3, - Register scratch4, - Register scratch5, - int flags); + static void GenerateCopyCharacters(MacroAssembler* masm, + Register dest, + Register src, + Register count, + Register scratch, + String::Encoding encoding); // Generate string hash. 
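// NOTE: sketch, not patch code: the ElementsKind numbering that the DCHECKs
// in CreateArrayDispatchOneArgument above pin down. Packed and holey kinds
// alternate, so "is this kind holey?" and "the holey sibling of a packed
// kind" are single-bit operations -- which is what the stub's
// And(at, a3, Operand(1)) exploits.
enum ElementsKind {
  FAST_SMI_ELEMENTS = 0,
  FAST_HOLEY_SMI_ELEMENTS = 1,
  FAST_ELEMENTS = 2,
  FAST_HOLEY_ELEMENTS = 3,
  FAST_DOUBLE_ELEMENTS = 4,
  FAST_HOLEY_DOUBLE_ELEMENTS = 5
};

bool IsHoley(ElementsKind kind) { return (kind & 1) != 0; }

ElementsKind GetHoleyVariant(ElementsKind kind) {
  return static_cast<ElementsKind>(kind | 1);  // set the low "holey" bit
}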
@@ -73,36 +69,35 @@ class SubStringStub: public PlatformCodeStub { explicit SubStringStub(Isolate* isolate) : PlatformCodeStub(isolate) {} private: - Major MajorKey() { return SubString; } - int MinorKey() { return 0; } + Major MajorKey() const { return SubString; } + int MinorKey() const { return 0; } void Generate(MacroAssembler* masm); }; + class StoreRegistersStateStub: public PlatformCodeStub { public: - explicit StoreRegistersStateStub(Isolate* isolate, SaveFPRegsMode with_fp) - : PlatformCodeStub(isolate), save_doubles_(with_fp) {} + explicit StoreRegistersStateStub(Isolate* isolate) + : PlatformCodeStub(isolate) {} static void GenerateAheadOfTime(Isolate* isolate); private: - Major MajorKey() { return StoreRegistersState; } - int MinorKey() { return (save_doubles_ == kSaveFPRegs) ? 1 : 0; } - SaveFPRegsMode save_doubles_; + Major MajorKey() const { return StoreRegistersState; } + int MinorKey() const { return 0; } void Generate(MacroAssembler* masm); }; class RestoreRegistersStateStub: public PlatformCodeStub { public: - explicit RestoreRegistersStateStub(Isolate* isolate, SaveFPRegsMode with_fp) - : PlatformCodeStub(isolate), save_doubles_(with_fp) {} + explicit RestoreRegistersStateStub(Isolate* isolate) + : PlatformCodeStub(isolate) {} static void GenerateAheadOfTime(Isolate* isolate); private: - Major MajorKey() { return RestoreRegistersState; } - int MinorKey() { return (save_doubles_ == kSaveFPRegs) ? 1 : 0; } - SaveFPRegsMode save_doubles_; + Major MajorKey() const { return RestoreRegistersState; } + int MinorKey() const { return 0; } void Generate(MacroAssembler* masm); }; @@ -130,8 +125,8 @@ class StringCompareStub: public PlatformCodeStub { Register scratch3); private: - virtual Major MajorKey() { return StringCompare; } - virtual int MinorKey() { return 0; } + virtual Major MajorKey() const { return StringCompare; } + virtual int MinorKey() const { return 0; } virtual void Generate(MacroAssembler* masm); static void GenerateAsciiCharsCompareLoop(MacroAssembler* masm, @@ -160,10 +155,10 @@ class WriteInt32ToHeapNumberStub : public PlatformCodeStub { the_heap_number_(the_heap_number), scratch_(scratch), sign_(scratch2) { - ASSERT(IntRegisterBits::is_valid(the_int_.code())); - ASSERT(HeapNumberRegisterBits::is_valid(the_heap_number_.code())); - ASSERT(ScratchRegisterBits::is_valid(scratch_.code())); - ASSERT(SignRegisterBits::is_valid(sign_.code())); + DCHECK(IntRegisterBits::is_valid(the_int_.code())); + DCHECK(HeapNumberRegisterBits::is_valid(the_heap_number_.code())); + DCHECK(ScratchRegisterBits::is_valid(scratch_.code())); + DCHECK(SignRegisterBits::is_valid(sign_.code())); } static void GenerateFixedRegStubsAheadOfTime(Isolate* isolate); @@ -180,8 +175,8 @@ class WriteInt32ToHeapNumberStub : public PlatformCodeStub { class ScratchRegisterBits: public BitField<int, 8, 4> {}; class SignRegisterBits: public BitField<int, 12, 4> {}; - Major MajorKey() { return WriteInt32ToHeapNumber; } - int MinorKey() { + Major MajorKey() const { return WriteInt32ToHeapNumber; } + int MinorKey() const { // Encode the parameters in a unique 16 bit value. 
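// NOTE: not from the patch: a trimmed-down version of the BitField helper
// that WriteInt32ToHeapNumberStub's MinorKey() uses below to pack four
// register codes into one 16-bit stub key. The shifts for the scratch and
// sign fields (8 and 12) are visible in the diff; the first two (0 and 4)
// are an assumption for this sketch.
#include <cstdint>

template <class T, int shift, int size>
struct BitField {
  static const uint32_t kMask = ((1u << size) - 1) << shift;
  static bool is_valid(T value) {
    return (static_cast<uint32_t>(value) & ~((1u << size) - 1)) == 0;
  }
  static uint32_t encode(T value) {
    return static_cast<uint32_t>(value) << shift;
  }
  static T decode(uint32_t packed) {
    return static_cast<T>((packed & kMask) >> shift);
  }
};

typedef BitField<int, 0, 4> IntRegisterBits;          // assumed shift
typedef BitField<int, 4, 4> HeapNumberRegisterBits;   // assumed shift
typedef BitField<int, 8, 4> ScratchRegisterBits;
typedef BitField<int, 12, 4> SignRegisterBits;

uint32_t MinorKey(int the_int, int heap_number, int scratch, int sign) {
  return IntRegisterBits::encode(the_int) |
         HeapNumberRegisterBits::encode(heap_number) |
         ScratchRegisterBits::encode(scratch) |
         SignRegisterBits::encode(sign);
}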
return IntRegisterBits::encode(the_int_.code()) | HeapNumberRegisterBits::encode(the_heap_number_.code()) @@ -224,14 +219,14 @@ class RecordWriteStub: public PlatformCodeStub { const unsigned offset = masm->instr_at(pos) & kImm16Mask; masm->instr_at_put(pos, BNE | (zero_reg.code() << kRsShift) | (zero_reg.code() << kRtShift) | (offset & kImm16Mask)); - ASSERT(Assembler::IsBne(masm->instr_at(pos))); + DCHECK(Assembler::IsBne(masm->instr_at(pos))); } static void PatchNopIntoBranch(MacroAssembler* masm, int pos) { const unsigned offset = masm->instr_at(pos) & kImm16Mask; masm->instr_at_put(pos, BEQ | (zero_reg.code() << kRsShift) | (zero_reg.code() << kRtShift) | (offset & kImm16Mask)); - ASSERT(Assembler::IsBeq(masm->instr_at(pos))); + DCHECK(Assembler::IsBeq(masm->instr_at(pos))); } static Mode GetMode(Code* stub) { @@ -243,13 +238,13 @@ class RecordWriteStub: public PlatformCodeStub { return INCREMENTAL; } - ASSERT(Assembler::IsBne(first_instruction)); + DCHECK(Assembler::IsBne(first_instruction)); if (Assembler::IsBeq(second_instruction)) { return INCREMENTAL_COMPACTION; } - ASSERT(Assembler::IsBne(second_instruction)); + DCHECK(Assembler::IsBne(second_instruction)); return STORE_BUFFER_ONLY; } @@ -260,22 +255,23 @@ class RecordWriteStub: public PlatformCodeStub { stub->instruction_size()); switch (mode) { case STORE_BUFFER_ONLY: - ASSERT(GetMode(stub) == INCREMENTAL || + DCHECK(GetMode(stub) == INCREMENTAL || GetMode(stub) == INCREMENTAL_COMPACTION); PatchBranchIntoNop(&masm, 0); PatchBranchIntoNop(&masm, 2 * Assembler::kInstrSize); break; case INCREMENTAL: - ASSERT(GetMode(stub) == STORE_BUFFER_ONLY); + DCHECK(GetMode(stub) == STORE_BUFFER_ONLY); PatchNopIntoBranch(&masm, 0); break; case INCREMENTAL_COMPACTION: - ASSERT(GetMode(stub) == STORE_BUFFER_ONLY); + DCHECK(GetMode(stub) == STORE_BUFFER_ONLY); PatchNopIntoBranch(&masm, 2 * Assembler::kInstrSize); break; } - ASSERT(GetMode(stub) == mode); - CPU::FlushICache(stub->instruction_start(), 4 * Assembler::kInstrSize); + DCHECK(GetMode(stub) == mode); + CpuFeatures::FlushICache(stub->instruction_start(), + 4 * Assembler::kInstrSize); } private: @@ -290,12 +286,12 @@ class RecordWriteStub: public PlatformCodeStub { : object_(object), address_(address), scratch0_(scratch0) { - ASSERT(!AreAliased(scratch0, object, address, no_reg)); + DCHECK(!AreAliased(scratch0, object, address, no_reg)); scratch1_ = GetRegisterThatIsNotOneOf(object_, address_, scratch0_); } void Save(MacroAssembler* masm) { - ASSERT(!AreAliased(object_, address_, scratch1_, scratch0_)); + DCHECK(!AreAliased(object_, address_, scratch1_, scratch0_)); // We don't have to save scratch0_ because it was given to us as // a scratch register. 
masm->push(scratch1_); @@ -350,9 +346,9 @@ class RecordWriteStub: public PlatformCodeStub { Mode mode); void InformIncrementalMarker(MacroAssembler* masm); - Major MajorKey() { return RecordWrite; } + Major MajorKey() const { return RecordWrite; } - int MinorKey() { + int MinorKey() const { return ObjectBits::encode(object_.code()) | ValueBits::encode(value_.code()) | AddressBits::encode(address_.code()) | @@ -392,8 +388,8 @@ class DirectCEntryStub: public PlatformCodeStub { void GenerateCall(MacroAssembler* masm, Register target); private: - Major MajorKey() { return DirectCEntry; } - int MinorKey() { return 0; } + Major MajorKey() const { return DirectCEntry; } + int MinorKey() const { return 0; } bool NeedsImmovableCode() { return true; } }; @@ -438,11 +434,9 @@ class NameDictionaryLookupStub: public PlatformCodeStub { NameDictionary::kHeaderSize + NameDictionary::kElementsStartIndex * kPointerSize; - Major MajorKey() { return NameDictionaryLookup; } + Major MajorKey() const { return NameDictionaryLookup; } - int MinorKey() { - return LookupModeBits::encode(mode_); - } + int MinorKey() const { return LookupModeBits::encode(mode_); } class LookupModeBits: public BitField<LookupMode, 0, 1> {}; diff --git a/deps/v8/src/mips/codegen-mips.cc b/deps/v8/src/mips/codegen-mips.cc index adf6d37d895..c413665771e 100644 --- a/deps/v8/src/mips/codegen-mips.cc +++ b/deps/v8/src/mips/codegen-mips.cc @@ -2,13 +2,13 @@ // Use of this source code is governed by a BSD-style license that can be // found in the LICENSE file. -#include "v8.h" +#include "src/v8.h" #if V8_TARGET_ARCH_MIPS -#include "codegen.h" -#include "macro-assembler.h" -#include "simulator-mips.h" +#include "src/codegen.h" +#include "src/macro-assembler.h" +#include "src/mips/simulator-mips.h" namespace v8 { namespace internal { @@ -29,7 +29,8 @@ double fast_exp_simulator(double x) { UnaryMathFunction CreateExpFunction() { if (!FLAG_fast_math) return &std::exp; size_t actual_size; - byte* buffer = static_cast<byte*>(OS::Allocate(1 * KB, &actual_size, true)); + byte* buffer = + static_cast<byte*>(base::OS::Allocate(1 * KB, &actual_size, true)); if (buffer == NULL) return &std::exp; ExternalReference::InitializeMathExpData(); @@ -56,10 +57,10 @@ UnaryMathFunction CreateExpFunction() { CodeDesc desc; masm.GetCode(&desc); - ASSERT(!RelocInfo::RequiresRelocation(desc)); + DCHECK(!RelocInfo::RequiresRelocation(desc)); - CPU::FlushICache(buffer, actual_size); - OS::ProtectCode(buffer, actual_size); + CpuFeatures::FlushICache(buffer, actual_size); + base::OS::ProtectCode(buffer, actual_size); #if !defined(USE_SIMULATOR) return FUNCTION_CAST<UnaryMathFunction>(buffer); @@ -71,13 +72,13 @@ UnaryMathFunction CreateExpFunction() { #if defined(V8_HOST_ARCH_MIPS) -OS::MemCopyUint8Function CreateMemCopyUint8Function( - OS::MemCopyUint8Function stub) { +MemCopyUint8Function CreateMemCopyUint8Function(MemCopyUint8Function stub) { #if defined(USE_SIMULATOR) return stub; #else size_t actual_size; - byte* buffer = static_cast<byte*>(OS::Allocate(3 * KB, &actual_size, true)); + byte* buffer = + static_cast<byte*>(base::OS::Allocate(3 * KB, &actual_size, true)); if (buffer == NULL) return stub; // This code assumes that cache lines are 32 bytes and if the cache line is @@ -97,7 +98,7 @@ OS::MemCopyUint8Function CreateMemCopyUint8Function( // the kPrefHintPrepareForStore hint is used, the code will not work // correctly. 
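// NOTE: illustrative only: the software-prefetch pattern the generated MIPS
// memcpy above is built around, in portable C++. The real stub uses MIPS
// `pref` hints (including prepare-for-store); __builtin_prefetch (GCC/Clang)
// is the closest portable stand-in, and the 32-byte chunk matches the cache
// line size the stub assumes.
#include <cstddef>
#include <cstdint>

void CopyWithPrefetch(uint8_t* dst, const uint8_t* src, size_t n) {
  const size_t kChunk = 32;  // assumed cache-line size, as in the stub
  size_t i = 0;
  for (; i + kChunk <= n; i += kChunk) {
    // Pull later chunks toward the caches while copying this one.
    __builtin_prefetch(src + i + 4 * kChunk, /*rw=*/0, /*locality=*/0);
    __builtin_prefetch(dst + i + 4 * kChunk, /*rw=*/1, /*locality=*/0);
    for (size_t j = 0; j < kChunk; j++) dst[i + j] = src[i + j];
  }
  for (; i < n; i++) dst[i] = src[i];  // byte tail, like the stub's lastb path
}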
uint32_t max_pref_size = 128; - ASSERT(pref_chunk < max_pref_size); + DCHECK(pref_chunk < max_pref_size); // pref_limit is set based on the fact that we never use an offset // greater then 5 on a store pref and that a single pref can @@ -110,7 +111,7 @@ OS::MemCopyUint8Function CreateMemCopyUint8Function( // The initial prefetches may fetch bytes that are before the buffer being // copied. Start copies with an offset of 4 so avoid this situation when // using kPrefHintPrepareForStore. - ASSERT(pref_hint_store != kPrefHintPrepareForStore || + DCHECK(pref_hint_store != kPrefHintPrepareForStore || pref_chunk * 4 >= max_pref_size); // If the size is less than 8, go to lastb. Regardless of size, @@ -593,11 +594,11 @@ OS::MemCopyUint8Function CreateMemCopyUint8Function( } CodeDesc desc; masm.GetCode(&desc); - ASSERT(!RelocInfo::RequiresRelocation(desc)); + DCHECK(!RelocInfo::RequiresRelocation(desc)); - CPU::FlushICache(buffer, actual_size); - OS::ProtectCode(buffer, actual_size); - return FUNCTION_CAST<OS::MemCopyUint8Function>(buffer); + CpuFeatures::FlushICache(buffer, actual_size); + base::OS::ProtectCode(buffer, actual_size); + return FUNCTION_CAST<MemCopyUint8Function>(buffer); #endif } #endif @@ -607,7 +608,8 @@ UnaryMathFunction CreateSqrtFunction() { return &std::sqrt; #else size_t actual_size; - byte* buffer = static_cast<byte*>(OS::Allocate(1 * KB, &actual_size, true)); + byte* buffer = + static_cast<byte*>(base::OS::Allocate(1 * KB, &actual_size, true)); if (buffer == NULL) return &std::sqrt; MacroAssembler masm(NULL, buffer, static_cast<int>(actual_size)); @@ -619,10 +621,10 @@ UnaryMathFunction CreateSqrtFunction() { CodeDesc desc; masm.GetCode(&desc); - ASSERT(!RelocInfo::RequiresRelocation(desc)); + DCHECK(!RelocInfo::RequiresRelocation(desc)); - CPU::FlushICache(buffer, actual_size); - OS::ProtectCode(buffer, actual_size); + CpuFeatures::FlushICache(buffer, actual_size); + base::OS::ProtectCode(buffer, actual_size); return FUNCTION_CAST<UnaryMathFunction>(buffer); #endif } @@ -635,14 +637,14 @@ UnaryMathFunction CreateSqrtFunction() { void StubRuntimeCallHelper::BeforeCall(MacroAssembler* masm) const { masm->EnterFrame(StackFrame::INTERNAL); - ASSERT(!masm->has_frame()); + DCHECK(!masm->has_frame()); masm->set_has_frame(true); } void StubRuntimeCallHelper::AfterCall(MacroAssembler* masm) const { masm->LeaveFrame(StackFrame::INTERNAL); - ASSERT(masm->has_frame()); + DCHECK(masm->has_frame()); masm->set_has_frame(false); } @@ -653,26 +655,28 @@ void StubRuntimeCallHelper::AfterCall(MacroAssembler* masm) const { #define __ ACCESS_MASM(masm) void ElementsTransitionGenerator::GenerateMapChangeElementsTransition( - MacroAssembler* masm, AllocationSiteMode mode, + MacroAssembler* masm, + Register receiver, + Register key, + Register value, + Register target_map, + AllocationSiteMode mode, Label* allocation_memento_found) { - // ----------- S t a t e ------------- - // -- a0 : value - // -- a1 : key - // -- a2 : receiver - // -- ra : return address - // -- a3 : target map, scratch for subsequent call - // -- t0 : scratch (elements) - // ----------------------------------- + Register scratch_elements = t0; + DCHECK(!AreAliased(receiver, key, value, target_map, + scratch_elements)); + if (mode == TRACK_ALLOCATION_SITE) { - ASSERT(allocation_memento_found != NULL); - __ JumpIfJSArrayHasAllocationMemento(a2, t0, allocation_memento_found); + DCHECK(allocation_memento_found != NULL); + __ JumpIfJSArrayHasAllocationMemento( + receiver, scratch_elements, allocation_memento_found); } // Set 
transitioned map. - __ sw(a3, FieldMemOperand(a2, HeapObject::kMapOffset)); - __ RecordWriteField(a2, + __ sw(target_map, FieldMemOperand(receiver, HeapObject::kMapOffset)); + __ RecordWriteField(receiver, HeapObject::kMapOffset, - a3, + target_map, t5, kRAHasNotBeenSaved, kDontSaveFPRegs, @@ -682,62 +686,74 @@ void ElementsTransitionGenerator::GenerateMapChangeElementsTransition( void ElementsTransitionGenerator::GenerateSmiToDouble( - MacroAssembler* masm, AllocationSiteMode mode, Label* fail) { - // ----------- S t a t e ------------- - // -- a0 : value - // -- a1 : key - // -- a2 : receiver - // -- ra : return address - // -- a3 : target map, scratch for subsequent call - // -- t0 : scratch (elements) - // ----------------------------------- + MacroAssembler* masm, + Register receiver, + Register key, + Register value, + Register target_map, + AllocationSiteMode mode, + Label* fail) { + // Register ra contains the return address. Label loop, entry, convert_hole, gc_required, only_change_map, done; + Register elements = t0; + Register length = t1; + Register array = t2; + Register array_end = array; + + // target_map parameter can be clobbered. + Register scratch1 = target_map; + Register scratch2 = t5; + Register scratch3 = t3; + + // Verify input registers don't conflict with locals. + DCHECK(!AreAliased(receiver, key, value, target_map, + elements, length, array, scratch2)); Register scratch = t6; if (mode == TRACK_ALLOCATION_SITE) { - __ JumpIfJSArrayHasAllocationMemento(a2, t0, fail); + __ JumpIfJSArrayHasAllocationMemento(receiver, elements, fail); } // Check for empty arrays, which only require a map transition and no changes // to the backing store. - __ lw(t0, FieldMemOperand(a2, JSObject::kElementsOffset)); + __ lw(elements, FieldMemOperand(receiver, JSObject::kElementsOffset)); __ LoadRoot(at, Heap::kEmptyFixedArrayRootIndex); - __ Branch(&only_change_map, eq, at, Operand(t0)); + __ Branch(&only_change_map, eq, at, Operand(elements)); __ push(ra); - __ lw(t1, FieldMemOperand(t0, FixedArray::kLengthOffset)); - // t0: source FixedArray - // t1: number of elements (smi-tagged) + __ lw(length, FieldMemOperand(elements, FixedArray::kLengthOffset)); + // elements: source FixedArray + // length: number of elements (smi-tagged) // Allocate new FixedDoubleArray. - __ sll(scratch, t1, 2); + __ sll(scratch, length, 2); __ Addu(scratch, scratch, FixedDoubleArray::kHeaderSize); - __ Allocate(scratch, t2, t3, t5, &gc_required, DOUBLE_ALIGNMENT); - // t2: destination FixedDoubleArray, not tagged as heap object + __ Allocate(scratch, array, t3, scratch2, &gc_required, DOUBLE_ALIGNMENT); + // array: destination FixedDoubleArray, not tagged as heap object // Set destination FixedDoubleArray's length and map. - __ LoadRoot(t5, Heap::kFixedDoubleArrayMapRootIndex); - __ sw(t1, MemOperand(t2, FixedDoubleArray::kLengthOffset)); - __ sw(t5, MemOperand(t2, HeapObject::kMapOffset)); + __ LoadRoot(scratch2, Heap::kFixedDoubleArrayMapRootIndex); + __ sw(length, MemOperand(array, FixedDoubleArray::kLengthOffset)); // Update receiver's map. + __ sw(scratch2, MemOperand(array, HeapObject::kMapOffset)); - __ sw(a3, FieldMemOperand(a2, HeapObject::kMapOffset)); - __ RecordWriteField(a2, + __ sw(target_map, FieldMemOperand(receiver, HeapObject::kMapOffset)); + __ RecordWriteField(receiver, HeapObject::kMapOffset, - a3, - t5, + target_map, + scratch2, kRAHasBeenSaved, kDontSaveFPRegs, OMIT_REMEMBERED_SET, OMIT_SMI_CHECK); // Replace receiver's backing store with newly created FixedDoubleArray. 
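// NOTE: sketch only, not from the patch. Now that the transition generators
// above take receiver/key/value/target_map as parameters instead of hardwired
// a0-a3, each one guards itself with DCHECK(!AreAliased(...)). A simplified
// model of that guard (Register and the helper are stand-ins):
#include <cassert>

struct Register { int code; };

bool AreAliased(Register a, Register b, Register c, Register d) {
  int codes[] = {a.code, b.code, c.code, d.code};
  for (int i = 0; i < 4; i++)
    for (int j = i + 1; j < 4; j++)
      if (codes[i] == codes[j]) return true;  // two names, one register
  return false;
}

void GenerateSmiToDoubleChecked(Register receiver, Register key,
                                Register value, Register target_map) {
  Register scratch = {8};  // e.g. t0, picked internally by the generator
  // Scratches chosen by the generator must not clobber caller inputs.
  assert(!AreAliased(receiver, key, target_map, scratch));
  (void)value;
}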
- __ Addu(a3, t2, Operand(kHeapObjectTag)); - __ sw(a3, FieldMemOperand(a2, JSObject::kElementsOffset)); - __ RecordWriteField(a2, + __ Addu(scratch1, array, Operand(kHeapObjectTag)); + __ sw(scratch1, FieldMemOperand(a2, JSObject::kElementsOffset)); + __ RecordWriteField(receiver, JSObject::kElementsOffset, - a3, - t5, + scratch1, + scratch2, kRAHasBeenSaved, kDontSaveFPRegs, EMIT_REMEMBERED_SET, @@ -745,26 +761,32 @@ void ElementsTransitionGenerator::GenerateSmiToDouble( // Prepare for conversion loop. - __ Addu(a3, t0, Operand(FixedArray::kHeaderSize - kHeapObjectTag)); - __ Addu(t3, t2, Operand(FixedDoubleArray::kHeaderSize)); - __ sll(t2, t1, 2); - __ Addu(t2, t2, t3); - __ li(t0, Operand(kHoleNanLower32)); - __ li(t1, Operand(kHoleNanUpper32)); - // t0: kHoleNanLower32 - // t1: kHoleNanUpper32 - // t2: end of destination FixedDoubleArray, not tagged - // t3: begin of FixedDoubleArray element fields, not tagged - - __ Branch(&entry); + __ Addu(scratch1, elements, + Operand(FixedArray::kHeaderSize - kHeapObjectTag)); + __ Addu(scratch3, array, Operand(FixedDoubleArray::kHeaderSize)); + __ sll(at, length, 2); + __ Addu(array_end, scratch3, at); + + // Repurpose registers no longer in use. + Register hole_lower = elements; + Register hole_upper = length; + + __ li(hole_lower, Operand(kHoleNanLower32)); + // scratch1: begin of source FixedArray element fields, not tagged + // hole_lower: kHoleNanLower32 + // hole_upper: kHoleNanUpper32 + // array_end: end of destination FixedDoubleArray, not tagged + // scratch3: begin of FixedDoubleArray element fields, not tagged + __ Branch(USE_DELAY_SLOT, &entry); + __ li(hole_upper, Operand(kHoleNanUpper32)); // In delay slot. __ bind(&only_change_map); - __ sw(a3, FieldMemOperand(a2, HeapObject::kMapOffset)); - __ RecordWriteField(a2, + __ sw(target_map, FieldMemOperand(receiver, HeapObject::kMapOffset)); + __ RecordWriteField(receiver, HeapObject::kMapOffset, - a3, - t5, - kRAHasNotBeenSaved, + target_map, + scratch2, + kRAHasBeenSaved, kDontSaveFPRegs, OMIT_REMEMBERED_SET, OMIT_SMI_CHECK); @@ -772,130 +794,154 @@ void ElementsTransitionGenerator::GenerateSmiToDouble( // Call into runtime if GC is required. __ bind(&gc_required); - __ pop(ra); - __ Branch(fail); + __ lw(ra, MemOperand(sp, 0)); + __ Branch(USE_DELAY_SLOT, fail); + __ addiu(sp, sp, kPointerSize); // In delay slot. // Convert and copy elements. __ bind(&loop); - __ lw(t5, MemOperand(a3)); - __ Addu(a3, a3, kIntSize); - // t5: current element - __ UntagAndJumpIfNotSmi(t5, t5, &convert_hole); + __ lw(scratch2, MemOperand(scratch1)); + __ Addu(scratch1, scratch1, kIntSize); + // scratch2: current element + __ UntagAndJumpIfNotSmi(scratch2, scratch2, &convert_hole); // Normal smi, convert to double and store. - __ mtc1(t5, f0); + __ mtc1(scratch2, f0); __ cvt_d_w(f0, f0); - __ sdc1(f0, MemOperand(t3)); - __ Addu(t3, t3, kDoubleSize); - - __ Branch(&entry); + __ sdc1(f0, MemOperand(scratch3)); + __ Branch(USE_DELAY_SLOT, &entry); + __ addiu(scratch3, scratch3, kDoubleSize); // In delay slot. // Hole found, store the-hole NaN. __ bind(&convert_hole); if (FLAG_debug_code) { // Restore a "smi-untagged" heap object. 
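// NOTE: not patch code: one iteration of the conversion loop above, written
// out in C++. A holey smi array marks missing entries with "the hole"; in the
// double backing store the hole becomes a specific NaN bit pattern, stored as
// two 32-bit halves (kHoleNanLower32/kHoleNanUpper32, now kept in the
// repurposed hole_lower/hole_upper registers). The constants below are
// illustrative, not V8's actual values.
#include <cstdint>
#include <cstring>

const uint32_t kHoleNanUpper32 = 0x7FF7FFFF;  // assumed pattern
const uint32_t kHoleNanLower32 = 0xFFFFFFFF;  // assumed pattern

void ConvertElement(int32_t tagged, bool is_hole, double* out) {
  if (is_hole) {
    // Store mantissa then exponent word, little-endian, as the stub does.
    uint32_t halves[2] = {kHoleNanLower32, kHoleNanUpper32};
    std::memcpy(out, halves, sizeof(*out));
  } else {
    *out = static_cast<double>(tagged >> 1);  // untag smi, then cvt_d_w
  }
}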
- __ SmiTag(t5); - __ Or(t5, t5, Operand(1)); + __ SmiTag(scratch2); + __ Or(scratch2, scratch2, Operand(1)); __ LoadRoot(at, Heap::kTheHoleValueRootIndex); - __ Assert(eq, kObjectFoundInSmiOnlyArray, at, Operand(t5)); + __ Assert(eq, kObjectFoundInSmiOnlyArray, at, Operand(scratch2)); } - __ sw(t0, MemOperand(t3, Register::kMantissaOffset)); // mantissa - __ sw(t1, MemOperand(t3, Register::kExponentOffset)); // exponent - __ Addu(t3, t3, kDoubleSize); - + // mantissa + __ sw(hole_lower, MemOperand(scratch3, Register::kMantissaOffset)); + // exponent + __ sw(hole_upper, MemOperand(scratch3, Register::kExponentOffset)); __ bind(&entry); - __ Branch(&loop, lt, t3, Operand(t2)); + __ addiu(scratch3, scratch3, kDoubleSize); + + __ Branch(&loop, lt, scratch3, Operand(array_end)); - __ pop(ra); __ bind(&done); + __ pop(ra); } void ElementsTransitionGenerator::GenerateDoubleToObject( - MacroAssembler* masm, AllocationSiteMode mode, Label* fail) { - // ----------- S t a t e ------------- - // -- a0 : value - // -- a1 : key - // -- a2 : receiver - // -- ra : return address - // -- a3 : target map, scratch for subsequent call - // -- t0 : scratch (elements) - // ----------------------------------- + MacroAssembler* masm, + Register receiver, + Register key, + Register value, + Register target_map, + AllocationSiteMode mode, + Label* fail) { + // Register ra contains the return address. Label entry, loop, convert_hole, gc_required, only_change_map; + Register elements = t0; + Register array = t2; + Register length = t1; + Register scratch = t5; + + // Verify input registers don't conflict with locals. + DCHECK(!AreAliased(receiver, key, value, target_map, + elements, array, length, scratch)); if (mode == TRACK_ALLOCATION_SITE) { - __ JumpIfJSArrayHasAllocationMemento(a2, t0, fail); + __ JumpIfJSArrayHasAllocationMemento(receiver, elements, fail); } // Check for empty arrays, which only require a map transition and no changes // to the backing store. - __ lw(t0, FieldMemOperand(a2, JSObject::kElementsOffset)); + __ lw(elements, FieldMemOperand(receiver, JSObject::kElementsOffset)); __ LoadRoot(at, Heap::kEmptyFixedArrayRootIndex); - __ Branch(&only_change_map, eq, at, Operand(t0)); + __ Branch(&only_change_map, eq, at, Operand(elements)); - __ MultiPush(a0.bit() | a1.bit() | a2.bit() | a3.bit() | ra.bit()); + __ MultiPush( + value.bit() | key.bit() | receiver.bit() | target_map.bit() | ra.bit()); - __ lw(t1, FieldMemOperand(t0, FixedArray::kLengthOffset)); - // t0: source FixedArray - // t1: number of elements (smi-tagged) + __ lw(length, FieldMemOperand(elements, FixedArray::kLengthOffset)); + // elements: source FixedArray + // length: number of elements (smi-tagged) // Allocate new FixedArray. - __ sll(a0, t1, 1); - __ Addu(a0, a0, FixedDoubleArray::kHeaderSize); - __ Allocate(a0, t2, t3, t5, &gc_required, NO_ALLOCATION_FLAGS); - // t2: destination FixedArray, not tagged as heap object + // Re-use value and target_map registers, as they have been saved on the + // stack. + Register array_size = value; + Register allocate_scratch = target_map; + __ sll(array_size, length, 1); + __ Addu(array_size, array_size, FixedDoubleArray::kHeaderSize); + __ Allocate(array_size, array, allocate_scratch, scratch, &gc_required, + NO_ALLOCATION_FLAGS); + // array: destination FixedArray, not tagged as heap object // Set destination FixedDoubleArray's length and map. 
- __ LoadRoot(t5, Heap::kFixedArrayMapRootIndex); - __ sw(t1, MemOperand(t2, FixedDoubleArray::kLengthOffset)); - __ sw(t5, MemOperand(t2, HeapObject::kMapOffset)); + __ LoadRoot(scratch, Heap::kFixedArrayMapRootIndex); + __ sw(length, MemOperand(array, FixedDoubleArray::kLengthOffset)); + __ sw(scratch, MemOperand(array, HeapObject::kMapOffset)); // Prepare for conversion loop. - __ Addu(t0, t0, Operand( + Register src_elements = elements; + Register dst_elements = target_map; + Register dst_end = length; + Register heap_number_map = scratch; + __ Addu(src_elements, src_elements, Operand( FixedDoubleArray::kHeaderSize - kHeapObjectTag + Register::kExponentOffset)); - __ Addu(a3, t2, Operand(FixedArray::kHeaderSize)); - __ Addu(t2, t2, Operand(kHeapObjectTag)); - __ sll(t1, t1, 1); - __ Addu(t1, a3, t1); - __ LoadRoot(t3, Heap::kTheHoleValueRootIndex); - __ LoadRoot(t5, Heap::kHeapNumberMapRootIndex); + __ Addu(dst_elements, array, Operand(FixedArray::kHeaderSize)); + __ Addu(array, array, Operand(kHeapObjectTag)); + __ sll(dst_end, dst_end, 1); + __ Addu(dst_end, dst_elements, dst_end); + __ LoadRoot(heap_number_map, Heap::kHeapNumberMapRootIndex); // Using offsetted addresses. - // a3: begin of destination FixedArray element fields, not tagged - // t0: begin of source FixedDoubleArray element fields, not tagged, - // points to the exponent - // t1: end of destination FixedArray, not tagged - // t2: destination FixedArray - // t3: the-hole pointer - // t5: heap number map + // dst_elements: begin of destination FixedArray element fields, not tagged + // src_elements: begin of source FixedDoubleArray element fields, not tagged, + // points to the exponent + // dst_end: end of destination FixedArray, not tagged + // array: destination FixedArray + // heap_number_map: heap number map __ Branch(&entry); // Call into runtime if GC is required. __ bind(&gc_required); - __ MultiPop(a0.bit() | a1.bit() | a2.bit() | a3.bit() | ra.bit()); + __ MultiPop( + value.bit() | key.bit() | receiver.bit() | target_map.bit() | ra.bit()); __ Branch(fail); __ bind(&loop); - __ lw(a1, MemOperand(t0)); - __ Addu(t0, t0, kDoubleSize); - // a1: current element's upper 32 bit - // t0: address of next element's upper 32 bit + Register upper_bits = key; + __ lw(upper_bits, MemOperand(src_elements)); + __ Addu(src_elements, src_elements, kDoubleSize); + // upper_bits: current element's upper 32 bit + // src_elements: address of next element's upper 32 bit __ Branch(&convert_hole, eq, a1, Operand(kHoleNanUpper32)); // Non-hole double, copy value into a heap number. - __ AllocateHeapNumber(a2, a0, t6, t5, &gc_required); - // a2: new heap number - // Load mantissa of current element, t0 point to exponent of next element. - __ lw(a0, MemOperand(t0, (Register::kMantissaOffset + Register heap_number = receiver; + Register scratch2 = value; + Register scratch3 = t6; + __ AllocateHeapNumber(heap_number, scratch2, scratch3, heap_number_map, + &gc_required); + // heap_number: new heap number + // Load mantissa of current element, src_elements + // point to exponent of next element. 
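// NOTE: illustration, not patch code. The loop above walks only the exponent
// (upper) words of the source doubles because comparing the upper 32 bits
// against kHoleNanUpper32 is enough to recognize the hole; a heap number is
// allocated only for real values. Same assumed constant as in the earlier
// sketch.
#include <cstdint>
#include <cstring>

const uint32_t kHoleNanUpper32 = 0x7FF7FFFF;  // assumed sentinel

bool IsHoleDouble(double element) {
  uint32_t halves[2];
  std::memcpy(halves, &element, sizeof(element));
  return halves[1] == kHoleNanUpper32;  // upper word on little-endian MIPS
}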
+ __ lw(scratch2, MemOperand(src_elements, (Register::kMantissaOffset - Register::kExponentOffset - kDoubleSize))); - __ sw(a0, FieldMemOperand(a2, HeapNumber::kMantissaOffset)); - __ sw(a1, FieldMemOperand(a2, HeapNumber::kExponentOffset)); - __ mov(a0, a3); - __ sw(a2, MemOperand(a3)); - __ Addu(a3, a3, kIntSize); - __ RecordWrite(t2, - a0, - a2, + __ sw(scratch2, FieldMemOperand(heap_number, HeapNumber::kMantissaOffset)); + __ sw(upper_bits, FieldMemOperand(heap_number, HeapNumber::kExponentOffset)); + __ mov(scratch2, dst_elements); + __ sw(heap_number, MemOperand(dst_elements)); + __ Addu(dst_elements, dst_elements, kIntSize); + __ RecordWrite(array, + scratch2, + heap_number, kRAHasBeenSaved, kDontSaveFPRegs, EMIT_REMEMBERED_SET, @@ -904,19 +950,20 @@ void ElementsTransitionGenerator::GenerateDoubleToObject( // Replace the-hole NaN with the-hole pointer. __ bind(&convert_hole); - __ sw(t3, MemOperand(a3)); - __ Addu(a3, a3, kIntSize); + __ LoadRoot(scratch2, Heap::kTheHoleValueRootIndex); + __ sw(scratch2, MemOperand(dst_elements)); + __ Addu(dst_elements, dst_elements, kIntSize); __ bind(&entry); - __ Branch(&loop, lt, a3, Operand(t1)); + __ Branch(&loop, lt, dst_elements, Operand(dst_end)); - __ MultiPop(a2.bit() | a3.bit() | a0.bit() | a1.bit()); + __ MultiPop(receiver.bit() | target_map.bit() | value.bit() | key.bit()); // Replace receiver's backing store with newly created and filled FixedArray. - __ sw(t2, FieldMemOperand(a2, JSObject::kElementsOffset)); - __ RecordWriteField(a2, + __ sw(array, FieldMemOperand(receiver, JSObject::kElementsOffset)); + __ RecordWriteField(receiver, JSObject::kElementsOffset, - t2, - t5, + array, + scratch, kRAHasBeenSaved, kDontSaveFPRegs, EMIT_REMEMBERED_SET, @@ -925,11 +972,11 @@ void ElementsTransitionGenerator::GenerateDoubleToObject( __ bind(&only_change_map); // Update receiver's map. - __ sw(a3, FieldMemOperand(a2, HeapObject::kMapOffset)); - __ RecordWriteField(a2, + __ sw(target_map, FieldMemOperand(receiver, HeapObject::kMapOffset)); + __ RecordWriteField(receiver, HeapObject::kMapOffset, - a3, - t5, + target_map, + scratch, kRAHasNotBeenSaved, kDontSaveFPRegs, OMIT_REMEMBERED_SET, @@ -1006,7 +1053,7 @@ void StringCharLoadGenerator::Generate(MacroAssembler* masm, at, Operand(zero_reg)); } // Rule out short external strings. 
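// NOTE: not patch code: the job of the RecordWrite/RecordWriteField calls
// above, in miniature. After a pointer is stored into a heap object, the slot
// is recorded so the generational/incremental GC can find the reference
// later. Everything here is a deliberately simplified model.
#include <unordered_set>

struct HeapObject { HeapObject* slots[4]; };

std::unordered_set<HeapObject**> remembered_set;

void RecordWrite(HeapObject* host, int index, HeapObject* value) {
  host->slots[index] = value;                  // the store itself
  remembered_set.insert(&host->slots[index]);  // tell the GC about the slot
}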
- STATIC_CHECK(kShortExternalStringTag != 0); + STATIC_ASSERT(kShortExternalStringTag != 0); __ And(at, result, Operand(kShortExternalStringMask)); __ Branch(call_runtime, ne, at, Operand(zero_reg)); __ lw(string, FieldMemOperand(string, ExternalString::kResourceDataOffset)); @@ -1042,16 +1089,17 @@ void MathExpGenerator::EmitMathExp(MacroAssembler* masm, Register temp1, Register temp2, Register temp3) { - ASSERT(!input.is(result)); - ASSERT(!input.is(double_scratch1)); - ASSERT(!input.is(double_scratch2)); - ASSERT(!result.is(double_scratch1)); - ASSERT(!result.is(double_scratch2)); - ASSERT(!double_scratch1.is(double_scratch2)); - ASSERT(!temp1.is(temp2)); - ASSERT(!temp1.is(temp3)); - ASSERT(!temp2.is(temp3)); - ASSERT(ExternalReference::math_exp_constants(0).address() != NULL); + DCHECK(!input.is(result)); + DCHECK(!input.is(double_scratch1)); + DCHECK(!input.is(double_scratch2)); + DCHECK(!result.is(double_scratch1)); + DCHECK(!result.is(double_scratch2)); + DCHECK(!double_scratch1.is(double_scratch2)); + DCHECK(!temp1.is(temp2)); + DCHECK(!temp1.is(temp3)); + DCHECK(!temp2.is(temp3)); + DCHECK(ExternalReference::math_exp_constants(0).address() != NULL); + DCHECK(!masm->serializer_enabled()); // External references not serializable. Label zero, infinity, done; @@ -1080,7 +1128,7 @@ void MathExpGenerator::EmitMathExp(MacroAssembler* masm, __ mul_d(result, result, double_scratch2); __ sub_d(result, result, double_scratch1); // Mov 1 in double_scratch2 as math_exp_constants_array[8] == 1. - ASSERT(*reinterpret_cast<double*> + DCHECK(*reinterpret_cast<double*> (ExternalReference::math_exp_constants(8).address()) == 1); __ Move(double_scratch2, 1); __ add_d(result, result, double_scratch2); @@ -1124,7 +1172,7 @@ static const uint32_t kCodeAgePatchFirstInstruction = 0x00010180; CodeAgingHelper::CodeAgingHelper() { - ASSERT(young_sequence_.length() == kNoCodeAgeSequenceLength); + DCHECK(young_sequence_.length() == kNoCodeAgeSequenceLength); // Since patcher is a large object, allocate it dynamically when needed, // to avoid overloading the stack in stress conditions. 
// DONT_FLUSH is used because the CodeAgingHelper is initialized early in @@ -1150,7 +1198,7 @@ bool CodeAgingHelper::IsOld(byte* candidate) const { bool Code::IsYoungSequence(Isolate* isolate, byte* sequence) { bool result = isolate->code_aging_helper()->IsYoung(sequence); - ASSERT(result || isolate->code_aging_helper()->IsOld(sequence)); + DCHECK(result || isolate->code_aging_helper()->IsOld(sequence)); return result; } @@ -1176,7 +1224,7 @@ void Code::PatchPlatformCodeAge(Isolate* isolate, uint32_t young_length = isolate->code_aging_helper()->young_sequence_length(); if (age == kNoAgeCodeAge) { isolate->code_aging_helper()->CopyYoungSequenceTo(sequence); - CPU::FlushICache(sequence, young_length); + CpuFeatures::FlushICache(sequence, young_length); } else { Code* stub = GetCodeAgeStub(isolate, age, parity); CodePatcher patcher(sequence, young_length / Assembler::kInstrSize); diff --git a/deps/v8/src/mips/codegen-mips.h b/deps/v8/src/mips/codegen-mips.h index 23b8fe339cd..82a410ec235 100644 --- a/deps/v8/src/mips/codegen-mips.h +++ b/deps/v8/src/mips/codegen-mips.h @@ -7,8 +7,8 @@ #define V8_MIPS_CODEGEN_MIPS_H_ -#include "ast.h" -#include "ic-inl.h" +#include "src/ast.h" +#include "src/ic-inl.h" namespace v8 { namespace internal { diff --git a/deps/v8/src/mips/constants-mips.cc b/deps/v8/src/mips/constants-mips.cc index db6e9c63b79..f14992719db 100644 --- a/deps/v8/src/mips/constants-mips.cc +++ b/deps/v8/src/mips/constants-mips.cc @@ -2,11 +2,11 @@ // Use of this source code is governed by a BSD-style license that can be // found in the LICENSE file. -#include "v8.h" +#include "src/v8.h" #if V8_TARGET_ARCH_MIPS -#include "constants-mips.h" +#include "src/mips/constants-mips.h" namespace v8 { namespace internal { @@ -151,7 +151,7 @@ bool Instruction::IsForbiddenInBranchDelay() const { return true; default: return false; - }; + } break; case SPECIAL: switch (FunctionFieldRaw()) { @@ -160,11 +160,11 @@ bool Instruction::IsForbiddenInBranchDelay() const { return true; default: return false; - }; + } break; default: return false; - }; + } } @@ -180,17 +180,17 @@ bool Instruction::IsLinkingInstruction() const { return true; default: return false; - }; + } case SPECIAL: switch (FunctionFieldRaw()) { case JALR: return true; default: return false; - }; + } default: return false; - }; + } } @@ -209,7 +209,7 @@ bool Instruction::IsTrap() const { return true; default: return false; - }; + } } } @@ -255,7 +255,7 @@ Instruction::Type Instruction::InstructionType() const { return kRegisterType; default: return kUnsupported; - }; + } break; case SPECIAL2: switch (FunctionFieldRaw()) { @@ -264,7 +264,7 @@ Instruction::Type Instruction::InstructionType() const { return kRegisterType; default: return kUnsupported; - }; + } break; case SPECIAL3: switch (FunctionFieldRaw()) { @@ -273,7 +273,7 @@ Instruction::Type Instruction::InstructionType() const { return kRegisterType; default: return kUnsupported; - }; + } break; case COP1: // Coprocessor instructions. 
switch (RsFieldRawNoAssert()) { @@ -281,7 +281,7 @@ Instruction::Type Instruction::InstructionType() const { return kImmediateType; default: return kRegisterType; - }; + } break; case COP1X: return kRegisterType; @@ -326,7 +326,7 @@ Instruction::Type Instruction::InstructionType() const { return kJumpType; default: return kUnsupported; - }; + } return kUnsupported; } diff --git a/deps/v8/src/mips/constants-mips.h b/deps/v8/src/mips/constants-mips.h index 6aeb1195bb4..b2cbea734e2 100644 --- a/deps/v8/src/mips/constants-mips.h +++ b/deps/v8/src/mips/constants-mips.h @@ -499,12 +499,13 @@ enum Condition { // no_condition value (-2). As long as tests for no_condition check // for condition < 0, this will work as expected. inline Condition NegateCondition(Condition cc) { - ASSERT(cc != cc_always); + DCHECK(cc != cc_always); return static_cast<Condition>(cc ^ 1); } -inline Condition ReverseCondition(Condition cc) { +// Commute a condition such that {a cond b == b cond' a}. +inline Condition CommuteCondition(Condition cc) { switch (cc) { case Uless: return Ugreater; @@ -524,7 +525,7 @@ inline Condition ReverseCondition(Condition cc) { return greater_equal; default: return cc; - }; + } } @@ -659,29 +660,29 @@ class Instruction { } inline int RsValue() const { - ASSERT(InstructionType() == kRegisterType || + DCHECK(InstructionType() == kRegisterType || InstructionType() == kImmediateType); return Bits(kRsShift + kRsBits - 1, kRsShift); } inline int RtValue() const { - ASSERT(InstructionType() == kRegisterType || + DCHECK(InstructionType() == kRegisterType || InstructionType() == kImmediateType); return Bits(kRtShift + kRtBits - 1, kRtShift); } inline int RdValue() const { - ASSERT(InstructionType() == kRegisterType); + DCHECK(InstructionType() == kRegisterType); return Bits(kRdShift + kRdBits - 1, kRdShift); } inline int SaValue() const { - ASSERT(InstructionType() == kRegisterType); + DCHECK(InstructionType() == kRegisterType); return Bits(kSaShift + kSaBits - 1, kSaShift); } inline int FunctionValue() const { - ASSERT(InstructionType() == kRegisterType || + DCHECK(InstructionType() == kRegisterType || InstructionType() == kImmediateType); return Bits(kFunctionShift + kFunctionBits - 1, kFunctionShift); } @@ -723,7 +724,7 @@ class Instruction { } inline int RsFieldRaw() const { - ASSERT(InstructionType() == kRegisterType || + DCHECK(InstructionType() == kRegisterType || InstructionType() == kImmediateType); return InstructionBits() & kRsFieldMask; } @@ -734,18 +735,18 @@ class Instruction { } inline int RtFieldRaw() const { - ASSERT(InstructionType() == kRegisterType || + DCHECK(InstructionType() == kRegisterType || InstructionType() == kImmediateType); return InstructionBits() & kRtFieldMask; } inline int RdFieldRaw() const { - ASSERT(InstructionType() == kRegisterType); + DCHECK(InstructionType() == kRegisterType); return InstructionBits() & kRdFieldMask; } inline int SaFieldRaw() const { - ASSERT(InstructionType() == kRegisterType); + DCHECK(InstructionType() == kRegisterType); return InstructionBits() & kSaFieldMask; } @@ -770,12 +771,12 @@ class Instruction { } inline int32_t Imm16Value() const { - ASSERT(InstructionType() == kImmediateType); + DCHECK(InstructionType() == kImmediateType); return Bits(kImm16Shift + kImm16Bits - 1, kImm16Shift); } inline int32_t Imm26Value() const { - ASSERT(InstructionType() == kJumpType); + DCHECK(InstructionType() == kJumpType); return Bits(kImm26Shift + kImm26Bits - 1, kImm26Shift); } diff --git a/deps/v8/src/mips/cpu-mips.cc b/deps/v8/src/mips/cpu-mips.cc 
index 71e3ddc79fc..f2d50650b02 100644 --- a/deps/v8/src/mips/cpu-mips.cc +++ b/deps/v8/src/mips/cpu-mips.cc @@ -11,20 +11,20 @@ #include <asm/cachectl.h> #endif // #ifdef __mips -#include "v8.h" +#include "src/v8.h" #if V8_TARGET_ARCH_MIPS -#include "cpu.h" -#include "macro-assembler.h" +#include "src/assembler.h" +#include "src/macro-assembler.h" -#include "simulator.h" // For cache flushing. +#include "src/simulator.h" // For cache flushing. namespace v8 { namespace internal { -void CPU::FlushICache(void* start, size_t size) { +void CpuFeatures::FlushICache(void* start, size_t size) { // Nothing to do, flushing no instructions. if (size == 0) { return; diff --git a/deps/v8/src/mips/debug-mips.cc b/deps/v8/src/mips/debug-mips.cc index fcb5643d157..c421c727d1d 100644 --- a/deps/v8/src/mips/debug-mips.cc +++ b/deps/v8/src/mips/debug-mips.cc @@ -4,12 +4,12 @@ -#include "v8.h" +#include "src/v8.h" #if V8_TARGET_ARCH_MIPS -#include "codegen.h" -#include "debug.h" +#include "src/codegen.h" +#include "src/debug.h" namespace v8 { namespace internal { @@ -30,7 +30,7 @@ void BreakLocationIterator::SetDebugBreakAtReturn() { // nop (in branch delay slot) // Make sure this constant matches the number of instructions we emit. - ASSERT(Assembler::kJSReturnSequenceInstructions == 7); + DCHECK(Assembler::kJSReturnSequenceInstructions == 7); CodePatcher patcher(rinfo()->pc(), Assembler::kJSReturnSequenceInstructions); // li and Call pseudo-instructions emit two instructions each. patcher.masm()->li(v8::internal::t9, Operand(reinterpret_cast<int32_t>( @@ -55,20 +55,20 @@ void BreakLocationIterator::ClearDebugBreakAtReturn() { // A debug break in the exit code is identified by the JS frame exit code // having been patched with li/call pseudo-instructions (lui/ori/jalr). bool Debug::IsDebugBreakAtReturn(RelocInfo* rinfo) { - ASSERT(RelocInfo::IsJSReturn(rinfo->rmode())); + DCHECK(RelocInfo::IsJSReturn(rinfo->rmode())); return rinfo->IsPatchedReturnSequence(); } bool BreakLocationIterator::IsDebugBreakAtSlot() { - ASSERT(IsDebugBreakSlot()); + DCHECK(IsDebugBreakSlot()); // Check whether the debug break slot instructions have been patched. return rinfo()->IsPatchedDebugBreakSlotSequence(); } void BreakLocationIterator::SetDebugBreakAtSlot() { - ASSERT(IsDebugBreakSlot()); + DCHECK(IsDebugBreakSlot()); // Patch the code changing the debug break slot code from: // nop(DEBUG_BREAK_NOP) - nop(1) is sll(zero_reg, zero_reg, 1) // nop(DEBUG_BREAK_NOP) @@ -85,13 +85,11 @@ void BreakLocationIterator::SetDebugBreakAtSlot() { void BreakLocationIterator::ClearDebugBreakAtSlot() { - ASSERT(IsDebugBreakSlot()); + DCHECK(IsDebugBreakSlot()); rinfo()->PatchCode(original_rinfo()->pc(), Assembler::kDebugBreakSlotInstructions); } -const bool Debug::FramePaddingLayout::kIsSupported = false; - #define __ ACCESS_MASM(masm) @@ -103,12 +101,22 @@ static void Generate_DebugBreakCallHelper(MacroAssembler* masm, { FrameScope scope(masm, StackFrame::INTERNAL); + // Load padding words on stack. + __ li(at, Operand(Smi::FromInt(LiveEdit::kFramePaddingValue))); + __ Subu(sp, sp, + Operand(kPointerSize * LiveEdit::kFramePaddingInitialSize)); + for (int i = LiveEdit::kFramePaddingInitialSize - 1; i >= 0; i--) { + __ sw(at, MemOperand(sp, kPointerSize * i)); + } + __ li(at, Operand(Smi::FromInt(LiveEdit::kFramePaddingInitialSize))); + __ push(at); + // Store the registers containing live values on the expression stack to // make sure that these are correctly updated during GC.
Non-object values // are stored as smis, causing them to be untouched by GC. - ASSERT((object_regs & ~kJSCallerSaved) == 0); - ASSERT((non_object_regs & ~kJSCallerSaved) == 0); - ASSERT((object_regs & non_object_regs) == 0); + DCHECK((object_regs & ~kJSCallerSaved) == 0); + DCHECK((non_object_regs & ~kJSCallerSaved) == 0); + DCHECK((object_regs & non_object_regs) == 0); if ((object_regs | non_object_regs) != 0) { for (int i = 0; i < kNumJSCallerSaved; i++) { int r = JSCallerSavedCode(i); @@ -149,20 +157,24 @@ static void Generate_DebugBreakCallHelper(MacroAssembler* masm, } } + // Don't bother removing padding bytes pushed on the stack + // as the frame is going to be restored right away. + // Leave the internal frame. } // Now that the break point has been handled, resume normal execution by // jumping to the target address intended by the caller and that was // overwritten by the address of DebugBreakXXX. - __ li(t9, Operand( - ExternalReference(Debug_Address::AfterBreakTarget(), masm->isolate()))); + ExternalReference after_break_target = + ExternalReference::debug_after_break_target_address(masm->isolate()); + __ li(t9, Operand(after_break_target)); __ lw(t9, MemOperand(t9)); __ Jump(t9); } -void Debug::GenerateCallICStubDebugBreak(MacroAssembler* masm) { +void DebugCodegen::GenerateCallICStubDebugBreak(MacroAssembler* masm) { // Register state for CallICStub // ----------- S t a t e ------------- // -- a1 : function @@ -172,54 +184,40 @@ } -void Debug::GenerateLoadICDebugBreak(MacroAssembler* masm) { - // Calling convention for IC load (from ic-mips.cc). - // ----------- S t a t e ------------- - // -- a2 : name - // -- ra : return address - // -- a0 : receiver - // -- [sp] : receiver - // ----------------------------------- - // Registers a0 and a2 contain objects that need to be pushed on the - // expression stack of the fake JS frame. - Generate_DebugBreakCallHelper(masm, a0.bit() | a2.bit(), 0); +void DebugCodegen::GenerateLoadICDebugBreak(MacroAssembler* masm) { + Register receiver = LoadIC::ReceiverRegister(); + Register name = LoadIC::NameRegister(); + Generate_DebugBreakCallHelper(masm, receiver.bit() | name.bit(), 0); } -void Debug::GenerateStoreICDebugBreak(MacroAssembler* masm) { +void DebugCodegen::GenerateStoreICDebugBreak(MacroAssembler* masm) { // Calling convention for IC store (from ic-mips.cc). - // ----------- S t a t e ------------- - // -- a0 : value - // -- a1 : receiver - // -- a2 : name - // -- ra : return address - // ----------------------------------- - // Registers a0, a1, and a2 contain objects that need to be pushed on the - // expression stack of the fake JS frame. - Generate_DebugBreakCallHelper(masm, a0.bit() | a1.bit() | a2.bit(), 0); + Register receiver = StoreIC::ReceiverRegister(); + Register name = StoreIC::NameRegister(); + Register value = StoreIC::ValueRegister(); + Generate_DebugBreakCallHelper( + masm, receiver.bit() | name.bit() | value.bit(), 0); } -void Debug::GenerateKeyedLoadICDebugBreak(MacroAssembler* masm) { - // ---------- S t a t e -------------- - // -- ra : return address - // -- a0 : key - // -- a1 : receiver - Generate_DebugBreakCallHelper(masm, a0.bit() | a1.bit(), 0); +void DebugCodegen::GenerateKeyedLoadICDebugBreak(MacroAssembler* masm) { + // Calling convention for keyed IC load (from ic-mips.cc).
+ GenerateLoadICDebugBreak(masm); } -void Debug::GenerateKeyedStoreICDebugBreak(MacroAssembler* masm) { - // ---------- S t a t e -------------- - // -- a0 : value - // -- a1 : key - // -- a2 : receiver - // -- ra : return address - Generate_DebugBreakCallHelper(masm, a0.bit() | a1.bit() | a2.bit(), 0); +void DebugCodegen::GenerateKeyedStoreICDebugBreak(MacroAssembler* masm) { + // Calling convention for IC keyed store call (from ic-mips.cc). + Register receiver = KeyedStoreIC::ReceiverRegister(); + Register name = KeyedStoreIC::NameRegister(); + Register value = KeyedStoreIC::ValueRegister(); + Generate_DebugBreakCallHelper( + masm, receiver.bit() | name.bit() | value.bit(), 0); } -void Debug::GenerateCompareNilICDebugBreak(MacroAssembler* masm) { +void DebugCodegen::GenerateCompareNilICDebugBreak(MacroAssembler* masm) { // Register state for CompareNil IC // ----------- S t a t e ------------- // -- a0 : value @@ -228,7 +226,7 @@ void Debug::GenerateCompareNilICDebugBreak(MacroAssembler* masm) { } -void Debug::GenerateReturnDebugBreak(MacroAssembler* masm) { +void DebugCodegen::GenerateReturnDebugBreak(MacroAssembler* masm) { // In places other than IC call sites it is expected that v0 is TOS which // is an object - this is not generally the case so this should be used with // care. @@ -236,7 +234,7 @@ void Debug::GenerateReturnDebugBreak(MacroAssembler* masm) { } -void Debug::GenerateCallFunctionStubDebugBreak(MacroAssembler* masm) { +void DebugCodegen::GenerateCallFunctionStubDebugBreak(MacroAssembler* masm) { // Register state for CallFunctionStub (from code-stubs-mips.cc). // ----------- S t a t e ------------- // -- a1 : function @@ -245,7 +243,7 @@ void Debug::GenerateCallFunctionStubDebugBreak(MacroAssembler* masm) { } -void Debug::GenerateCallConstructStubDebugBreak(MacroAssembler* masm) { +void DebugCodegen::GenerateCallConstructStubDebugBreak(MacroAssembler* masm) { // Calling convention for CallConstructStub (from code-stubs-mips.cc). // ----------- S t a t e ------------- // -- a0 : number of arguments (not smi) @@ -255,7 +253,8 @@ void Debug::GenerateCallConstructStubDebugBreak(MacroAssembler* masm) { } -void Debug::GenerateCallConstructStubRecordDebugBreak(MacroAssembler* masm) { +void DebugCodegen::GenerateCallConstructStubRecordDebugBreak( + MacroAssembler* masm) { // Calling convention for CallConstructStub (from code-stubs-mips.cc). // ----------- S t a t e ------------- // -- a0 : number of arguments (not smi) @@ -267,7 +266,7 @@ void Debug::GenerateCallConstructStubRecordDebugBreak(MacroAssembler* masm) { } -void Debug::GenerateSlot(MacroAssembler* masm) { +void DebugCodegen::GenerateSlot(MacroAssembler* masm) { // Generate enough nop's to make space for a call instruction. Avoid emitting // the trampoline pool in the debug break slot code. Assembler::BlockTrampolinePoolScope block_trampoline_pool(masm); @@ -277,29 +276,49 @@ void Debug::GenerateSlot(MacroAssembler* masm) { for (int i = 0; i < Assembler::kDebugBreakSlotInstructions; i++) { __ nop(MacroAssembler::DEBUG_BREAK_NOP); } - ASSERT_EQ(Assembler::kDebugBreakSlotInstructions, + DCHECK_EQ(Assembler::kDebugBreakSlotInstructions, masm->InstructionsGeneratedSince(&check_codesize)); } -void Debug::GenerateSlotDebugBreak(MacroAssembler* masm) { +void DebugCodegen::GenerateSlotDebugBreak(MacroAssembler* masm) { // In the places where a debug break slot is inserted no registers can contain // object pointers. 
Generate_DebugBreakCallHelper(masm, 0, 0); } -void Debug::GeneratePlainReturnLiveEdit(MacroAssembler* masm) { - masm->Abort(kLiveEditFrameDroppingIsNotSupportedOnMips); +void DebugCodegen::GeneratePlainReturnLiveEdit(MacroAssembler* masm) { + __ Ret(); } -void Debug::GenerateFrameDropperLiveEdit(MacroAssembler* masm) { - masm->Abort(kLiveEditFrameDroppingIsNotSupportedOnMips); +void DebugCodegen::GenerateFrameDropperLiveEdit(MacroAssembler* masm) { + ExternalReference restarter_frame_function_slot = + ExternalReference::debug_restarter_frame_function_pointer_address( + masm->isolate()); + __ li(at, Operand(restarter_frame_function_slot)); + __ sw(zero_reg, MemOperand(at, 0)); + + // We do not know our frame height, but set sp based on fp. + __ Subu(sp, fp, Operand(kPointerSize)); + + __ Pop(ra, fp, a1); // Return address, Frame, Function. + + // Load context from the function. + __ lw(cp, FieldMemOperand(a1, JSFunction::kContextOffset)); + + // Get function code. + __ lw(at, FieldMemOperand(a1, JSFunction::kSharedFunctionInfoOffset)); + __ lw(at, FieldMemOperand(at, SharedFunctionInfo::kCodeOffset)); + __ Addu(t9, at, Operand(Code::kHeaderSize - kHeapObjectTag)); + + // Re-run JSFunction, a1 is function, cp is context. + __ Jump(t9); } -const bool Debug::kFrameDropperSupported = false; +const bool LiveEdit::kFrameDropperSupported = true; #undef __ diff --git a/deps/v8/src/mips/deoptimizer-mips.cc b/deps/v8/src/mips/deoptimizer-mips.cc index 4297ad12789..1e88e62b213 100644 --- a/deps/v8/src/mips/deoptimizer-mips.cc +++ b/deps/v8/src/mips/deoptimizer-mips.cc @@ -3,12 +3,12 @@ // Use of this source code is governed by a BSD-style license that can be // found in the LICENSE file. -#include "v8.h" +#include "src/v8.h" -#include "codegen.h" -#include "deoptimizer.h" -#include "full-codegen.h" -#include "safepoint-table.h" +#include "src/codegen.h" +#include "src/deoptimizer.h" +#include "src/full-codegen.h" +#include "src/safepoint-table.h" namespace v8 { namespace internal { @@ -48,9 +48,6 @@ void Deoptimizer::PatchCodeForDeoptimization(Isolate* isolate, Code* code) { DeoptimizationInputData* deopt_data = DeoptimizationInputData::cast(code->deoptimization_data()); - SharedFunctionInfo* shared = - SharedFunctionInfo::cast(deopt_data->SharedFunctionInfo()); - shared->EvictFromOptimizedCodeMap(code, "deoptimized code"); #ifdef DEBUG Address prev_call_address = NULL; #endif @@ -63,13 +60,13 @@ void Deoptimizer::PatchCodeForDeoptimization(Isolate* isolate, Code* code) { int call_size_in_bytes = MacroAssembler::CallSize(deopt_entry, RelocInfo::NONE32); int call_size_in_words = call_size_in_bytes / Assembler::kInstrSize; - ASSERT(call_size_in_bytes % Assembler::kInstrSize == 0); - ASSERT(call_size_in_bytes <= patch_size()); + DCHECK(call_size_in_bytes % Assembler::kInstrSize == 0); + DCHECK(call_size_in_bytes <= patch_size()); CodePatcher patcher(call_address, call_size_in_words); patcher.masm()->Call(deopt_entry, RelocInfo::NONE32); - ASSERT(prev_call_address == NULL || + DCHECK(prev_call_address == NULL || call_address >= prev_call_address + patch_size()); - ASSERT(call_address + patch_size() <= code->instruction_end()); + DCHECK(call_address + patch_size() <= code->instruction_end()); #ifdef DEBUG prev_call_address = call_address; @@ -101,7 +98,7 @@ void Deoptimizer::FillInputFrame(Address tos, JavaScriptFrame* frame) { void Deoptimizer::SetPlatformCompiledStubRegisters( FrameDescription* output_frame, CodeStubInterfaceDescriptor* descriptor) { - ApiFunction 
function(descriptor->deoptimization_handler_); + ApiFunction function(descriptor->deoptimization_handler()); ExternalReference xref(&function, ExternalReference::BUILTIN_CALL, isolate_); intptr_t handler = reinterpret_cast<intptr_t>(xref.address()); int params = descriptor->GetHandlerParameterCount(); @@ -125,11 +122,6 @@ bool Deoptimizer::HasAlignmentPadding(JSFunction* function) { } -Code* Deoptimizer::NotifyStubFailureBuiltin() { - return isolate_->builtins()->builtin(Builtins::kNotifyStubFailureSaveDoubles); -} - - #define __ masm()-> @@ -203,7 +195,7 @@ void Deoptimizer::EntryGenerator::Generate() { __ lw(a1, MemOperand(v0, Deoptimizer::input_offset())); // Copy core registers into FrameDescription::registers_[kNumRegisters]. - ASSERT(Register::kNumRegisters == kNumberOfRegisters); + DCHECK(Register::kNumRegisters == kNumberOfRegisters); for (int i = 0; i < kNumberOfRegisters; i++) { int offset = (i * kPointerSize) + FrameDescription::registers_offset(); if ((saved_regs & (1 << i)) != 0) { @@ -305,7 +297,7 @@ void Deoptimizer::EntryGenerator::Generate() { // Technically restoring 'at' should work unless zero_reg is also restored // but it's safer to check for this. - ASSERT(!(at.bit() & restored_regs)); + DCHECK(!(at.bit() & restored_regs)); // Restore the registers from the last output frame. __ mov(at, a2); for (int i = kNumberOfRegisters - 1; i >= 0; i--) { @@ -325,39 +317,29 @@ void Deoptimizer::EntryGenerator::Generate() { // Maximum size of a table entry generated below. -const int Deoptimizer::table_entry_size_ = 7 * Assembler::kInstrSize; +const int Deoptimizer::table_entry_size_ = 2 * Assembler::kInstrSize; void Deoptimizer::TableEntryGenerator::GeneratePrologue() { Assembler::BlockTrampolinePoolScope block_trampoline_pool(masm()); // Create a sequence of deoptimization entries. // Note that registers are still live when jumping to an entry. - Label table_start; + Label table_start, done; __ bind(&table_start); for (int i = 0; i < count(); i++) { Label start; __ bind(&start); - __ addiu(sp, sp, -1 * kPointerSize); - // Jump over the remaining deopt entries (including this one). - // This code is always reached by calling Jump, which puts the target (label - // start) into t9. - const int remaining_entries = (count() - i) * table_entry_size_; - __ Addu(t9, t9, remaining_entries); - // 'at' was clobbered so we can only load the current entry value here. - __ li(at, i); - __ jr(t9); // Expose delay slot. - __ sw(at, MemOperand(sp, 0 * kPointerSize)); // In the delay slot. - - // Pad the rest of the code. - while (table_entry_size_ > (masm()->SizeOfCodeGeneratedSince(&start))) { - __ nop(); - } + DCHECK(is_int16(i)); + __ Branch(USE_DELAY_SLOT, &done); // Expose delay slot. + __ li(at, i); // In the delay slot. 
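A note on the table layout this hunk produces: each deoptimization entry shrinks from seven instructions (bump sp, compute the target from t9, load the id, jr, store in the delay slot, plus nop padding) to two, with one shared tail that pushes the id for whichever entry was taken. A minimal sketch reconstructed from the added lines above and just below (illustration only, not the verbatim generator):

    for (int i = 0; i < count(); i++) {
      // Entry i: exactly 2 * Assembler::kInstrSize.
      __ Branch(USE_DELAY_SLOT, &done);  // Jump to the shared tail...
      __ li(at, i);                      // ...loading the id in the delay slot.
    }
    __ bind(&done);
    __ Push(at);  // Shared tail: one push replaces each entry's sp bump/store.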
- ASSERT_EQ(table_entry_size_, masm()->SizeOfCodeGeneratedSince(&start)); + DCHECK_EQ(table_entry_size_, masm()->SizeOfCodeGeneratedSince(&start)); } - ASSERT_EQ(masm()->SizeOfCodeGeneratedSince(&table_start), + DCHECK_EQ(masm()->SizeOfCodeGeneratedSince(&table_start), count() * table_entry_size_); + __ bind(&done); + __ Push(at); } diff --git a/deps/v8/src/mips/disasm-mips.cc b/deps/v8/src/mips/disasm-mips.cc index 52f33d25d24..4a8fe657772 100644 --- a/deps/v8/src/mips/disasm-mips.cc +++ b/deps/v8/src/mips/disasm-mips.cc @@ -24,18 +24,18 @@ #include <assert.h> -#include <stdio.h> #include <stdarg.h> +#include <stdio.h> #include <string.h> -#include "v8.h" +#include "src/v8.h" #if V8_TARGET_ARCH_MIPS -#include "mips/constants-mips.h" -#include "disasm.h" -#include "macro-assembler.h" -#include "platform.h" +#include "src/base/platform/platform.h" +#include "src/disasm.h" +#include "src/macro-assembler.h" +#include "src/mips/constants-mips.h" namespace v8 { namespace internal { @@ -184,21 +184,21 @@ void Decoder::PrintFd(Instruction* instr) { // Print the integer value of the sa field. void Decoder::PrintSa(Instruction* instr) { int sa = instr->SaValue(); - out_buffer_pos_ += OS::SNPrintF(out_buffer_ + out_buffer_pos_, "%d", sa); + out_buffer_pos_ += SNPrintF(out_buffer_ + out_buffer_pos_, "%d", sa); } // Print the integer value of the rd field, when it is not used as reg. void Decoder::PrintSd(Instruction* instr) { int sd = instr->RdValue(); - out_buffer_pos_ += OS::SNPrintF(out_buffer_ + out_buffer_pos_, "%d", sd); + out_buffer_pos_ += SNPrintF(out_buffer_ + out_buffer_pos_, "%d", sd); } // Print the integer value of the rd field, when used as 'ext' size. void Decoder::PrintSs1(Instruction* instr) { int ss = instr->RdValue(); - out_buffer_pos_ += OS::SNPrintF(out_buffer_ + out_buffer_pos_, "%d", ss + 1); + out_buffer_pos_ += SNPrintF(out_buffer_ + out_buffer_pos_, "%d", ss + 1); } @@ -207,49 +207,49 @@ void Decoder::PrintSs2(Instruction* instr) { int ss = instr->RdValue(); int pos = instr->SaValue(); out_buffer_pos_ += - OS::SNPrintF(out_buffer_ + out_buffer_pos_, "%d", ss - pos + 1); + SNPrintF(out_buffer_ + out_buffer_pos_, "%d", ss - pos + 1); } // Print the integer value of the cc field for the bc1t/f instructions. void Decoder::PrintBc(Instruction* instr) { int cc = instr->FBccValue(); - out_buffer_pos_ += OS::SNPrintF(out_buffer_ + out_buffer_pos_, "%d", cc); + out_buffer_pos_ += SNPrintF(out_buffer_ + out_buffer_pos_, "%d", cc); } // Print the integer value of the cc field for the FP compare instructions. void Decoder::PrintCc(Instruction* instr) { int cc = instr->FCccValue(); - out_buffer_pos_ += OS::SNPrintF(out_buffer_ + out_buffer_pos_, "cc(%d)", cc); + out_buffer_pos_ += SNPrintF(out_buffer_ + out_buffer_pos_, "cc(%d)", cc); } // Print 16-bit unsigned immediate value. void Decoder::PrintUImm16(Instruction* instr) { int32_t imm = instr->Imm16Value(); - out_buffer_pos_ += OS::SNPrintF(out_buffer_ + out_buffer_pos_, "%u", imm); + out_buffer_pos_ += SNPrintF(out_buffer_ + out_buffer_pos_, "%u", imm); } // Print 16-bit signed immediate value. void Decoder::PrintSImm16(Instruction* instr) { int32_t imm = ((instr->Imm16Value()) << 16) >> 16; - out_buffer_pos_ += OS::SNPrintF(out_buffer_ + out_buffer_pos_, "%d", imm); + out_buffer_pos_ += SNPrintF(out_buffer_ + out_buffer_pos_, "%d", imm); } // Print 16-bit hexa immediate value. 
void Decoder::PrintXImm16(Instruction* instr) { int32_t imm = instr->Imm16Value(); - out_buffer_pos_ += OS::SNPrintF(out_buffer_ + out_buffer_pos_, "0x%x", imm); + out_buffer_pos_ += SNPrintF(out_buffer_ + out_buffer_pos_, "0x%x", imm); } // Print 26-bit immediate value. void Decoder::PrintXImm26(Instruction* instr) { uint32_t imm = instr->Imm26Value() << kImmFieldShift; - out_buffer_pos_ += OS::SNPrintF(out_buffer_ + out_buffer_pos_, "0x%x", imm); + out_buffer_pos_ += SNPrintF(out_buffer_ + out_buffer_pos_, "0x%x", imm); } @@ -260,8 +260,8 @@ void Decoder::PrintCode(Instruction* instr) { switch (instr->FunctionFieldRaw()) { case BREAK: { int32_t code = instr->Bits(25, 6); - out_buffer_pos_ += OS::SNPrintF(out_buffer_ + out_buffer_pos_, - "0x%05x (%d)", code, code); + out_buffer_pos_ += SNPrintF(out_buffer_ + out_buffer_pos_, + "0x%05x (%d)", code, code); break; } case TGE: @@ -272,12 +272,12 @@ void Decoder::PrintCode(Instruction* instr) { case TNE: { int32_t code = instr->Bits(15, 6); out_buffer_pos_ += - OS::SNPrintF(out_buffer_ + out_buffer_pos_, "0x%03x", code); + SNPrintF(out_buffer_ + out_buffer_pos_, "0x%03x", code); break; } default: // Not a break or trap instruction. break; - }; + } } @@ -289,7 +289,7 @@ void Decoder::PrintInstructionName(Instruction* instr) { // Handle all register based formatting in this function to reduce the // complexity of FormatOption. int Decoder::FormatRegister(Instruction* instr, const char* format) { - ASSERT(format[0] == 'r'); + DCHECK(format[0] == 'r'); if (format[1] == 's') { // 'rs: Rs register. int reg = instr->RsValue(); PrintRegister(reg); @@ -311,7 +311,7 @@ int Decoder::FormatRegister(Instruction* instr, const char* format) { // Handle all FPUregister based formatting in this function to reduce the // complexity of FormatOption. int Decoder::FormatFPURegister(Instruction* instr, const char* format) { - ASSERT(format[0] == 'f'); + DCHECK(format[0] == 'f'); if (format[1] == 's') { // 'fs: fs register. int reg = instr->FsValue(); PrintFPURegister(reg); @@ -342,26 +342,26 @@ int Decoder::FormatFPURegister(Instruction* instr, const char* format) { int Decoder::FormatOption(Instruction* instr, const char* format) { switch (format[0]) { case 'c': { // 'code for break or trap instructions. - ASSERT(STRING_STARTS_WITH(format, "code")); + DCHECK(STRING_STARTS_WITH(format, "code")); PrintCode(instr); return 4; } case 'i': { // 'imm16u or 'imm26. if (format[3] == '1') { - ASSERT(STRING_STARTS_WITH(format, "imm16")); + DCHECK(STRING_STARTS_WITH(format, "imm16")); if (format[5] == 's') { - ASSERT(STRING_STARTS_WITH(format, "imm16s")); + DCHECK(STRING_STARTS_WITH(format, "imm16s")); PrintSImm16(instr); } else if (format[5] == 'u') { - ASSERT(STRING_STARTS_WITH(format, "imm16u")); + DCHECK(STRING_STARTS_WITH(format, "imm16u")); PrintSImm16(instr); } else { - ASSERT(STRING_STARTS_WITH(format, "imm16x")); + DCHECK(STRING_STARTS_WITH(format, "imm16x")); PrintXImm16(instr); } return 6; } else { - ASSERT(STRING_STARTS_WITH(format, "imm26x")); + DCHECK(STRING_STARTS_WITH(format, "imm26x")); PrintXImm26(instr); return 6; } @@ -375,22 +375,22 @@ int Decoder::FormatOption(Instruction* instr, const char* format) { case 's': { // 'sa. 
switch (format[1]) { case 'a': { - ASSERT(STRING_STARTS_WITH(format, "sa")); + DCHECK(STRING_STARTS_WITH(format, "sa")); PrintSa(instr); return 2; } case 'd': { - ASSERT(STRING_STARTS_WITH(format, "sd")); + DCHECK(STRING_STARTS_WITH(format, "sd")); PrintSd(instr); return 2; } case 's': { if (format[2] == '1') { - ASSERT(STRING_STARTS_WITH(format, "ss1")); /* ext size */ + DCHECK(STRING_STARTS_WITH(format, "ss1")); /* ext size */ PrintSs1(instr); return 3; } else { - ASSERT(STRING_STARTS_WITH(format, "ss2")); /* ins size */ + DCHECK(STRING_STARTS_WITH(format, "ss2")); /* ins size */ PrintSs2(instr); return 3; } @@ -398,16 +398,16 @@ int Decoder::FormatOption(Instruction* instr, const char* format) { } } case 'b': { // 'bc - Special for bc1 cc field. - ASSERT(STRING_STARTS_WITH(format, "bc")); + DCHECK(STRING_STARTS_WITH(format, "bc")); PrintBc(instr); return 2; } case 'C': { // 'Cc - Special for c.xx.d cc field. - ASSERT(STRING_STARTS_WITH(format, "Cc")); + DCHECK(STRING_STARTS_WITH(format, "Cc")); PrintCc(instr); return 2; } - }; + } UNREACHABLE(); return -1; } @@ -603,7 +603,7 @@ void Decoder::DecodeTypeRegister(Instruction* instr) { break; default: UNREACHABLE(); - }; + } break; case SPECIAL: switch (instr->FunctionFieldRaw()) { @@ -796,7 +796,7 @@ void Decoder::DecodeTypeImmediate(Instruction* instr) { break; default: UNREACHABLE(); - }; + } break; // Case COP1. case REGIMM: switch (instr->RtFieldRaw()) { @@ -909,7 +909,7 @@ void Decoder::DecodeTypeImmediate(Instruction* instr) { default: UNREACHABLE(); break; - }; + } } @@ -931,9 +931,9 @@ void Decoder::DecodeTypeJump(Instruction* instr) { int Decoder::InstructionDecode(byte* instr_ptr) { Instruction* instr = Instruction::At(instr_ptr); // Print raw instruction bytes. - out_buffer_pos_ += OS::SNPrintF(out_buffer_ + out_buffer_pos_, - "%08x ", - instr->InstructionBits()); + out_buffer_pos_ += SNPrintF(out_buffer_ + out_buffer_pos_, + "%08x ", + instr->InstructionBits()); switch (instr->InstructionType()) { case Instruction::kRegisterType: { DecodeTypeRegister(instr); @@ -965,7 +965,7 @@ int Decoder::InstructionDecode(byte* instr_ptr) { namespace disasm { const char* NameConverter::NameOfAddress(byte* addr) const { - v8::internal::OS::SNPrintF(tmp_buffer_, "%p", addr); + v8::internal::SNPrintF(tmp_buffer_, "%p", addr); return tmp_buffer_.start(); } diff --git a/deps/v8/src/mips/frames-mips.cc b/deps/v8/src/mips/frames-mips.cc index 205e9bcf479..b65f1bff952 100644 --- a/deps/v8/src/mips/frames-mips.cc +++ b/deps/v8/src/mips/frames-mips.cc @@ -2,14 +2,14 @@ // Use of this source code is governed by a BSD-style license that can be // found in the LICENSE file. -#include "v8.h" +#include "src/v8.h" #if V8_TARGET_ARCH_MIPS -#include "assembler.h" -#include "assembler-mips.h" -#include "assembler-mips-inl.h" -#include "frames.h" +#include "src/assembler.h" +#include "src/frames.h" +#include "src/mips/assembler-mips-inl.h" +#include "src/mips/assembler-mips.h" namespace v8 { namespace internal { diff --git a/deps/v8/src/mips/frames-mips.h b/deps/v8/src/mips/frames-mips.h index c7b88aa5cce..5666f642f99 100644 --- a/deps/v8/src/mips/frames-mips.h +++ b/deps/v8/src/mips/frames-mips.h @@ -87,8 +87,6 @@ const RegList kSafepointSavedRegisters = kJSCallerSaved | kCalleeSaved; const int kNumSafepointSavedRegisters = kNumJSCallerSaved + kNumCalleeSaved; -typedef Object* JSCallerSavedBuffer[kNumJSCallerSaved]; - const int kUndefIndex = -1; // Map with indexes on stack that corresponds to codes of saved registers. 
const int kSafepointRegisterStackIndexMap[kNumRegs] = { diff --git a/deps/v8/src/mips/full-codegen-mips.cc b/deps/v8/src/mips/full-codegen-mips.cc index ff280ce76c1..639f57fd63d 100644 --- a/deps/v8/src/mips/full-codegen-mips.cc +++ b/deps/v8/src/mips/full-codegen-mips.cc @@ -2,7 +2,7 @@ // Use of this source code is governed by a BSD-style license that can be // found in the LICENSE file. -#include "v8.h" +#include "src/v8.h" #if V8_TARGET_ARCH_MIPS @@ -14,18 +14,18 @@ // places where we have to move a previous result in v0 to a0 for the // next call: mov(a0, v0). This is not needed on the other architectures. -#include "code-stubs.h" -#include "codegen.h" -#include "compiler.h" -#include "debug.h" -#include "full-codegen.h" -#include "isolate-inl.h" -#include "parser.h" -#include "scopes.h" -#include "stub-cache.h" +#include "src/code-stubs.h" +#include "src/codegen.h" +#include "src/compiler.h" +#include "src/debug.h" +#include "src/full-codegen.h" +#include "src/isolate-inl.h" +#include "src/parser.h" +#include "src/scopes.h" +#include "src/stub-cache.h" -#include "mips/code-stubs-mips.h" -#include "mips/macro-assembler-mips.h" +#include "src/mips/code-stubs-mips.h" +#include "src/mips/macro-assembler-mips.h" namespace v8 { namespace internal { @@ -50,13 +50,13 @@ class JumpPatchSite BASE_EMBEDDED { } ~JumpPatchSite() { - ASSERT(patch_site_.is_bound() == info_emitted_); + DCHECK(patch_site_.is_bound() == info_emitted_); } // When initially emitting this ensure that a jump is always generated to skip // the inlined smi code. void EmitJumpIfNotSmi(Register reg, Label* target) { - ASSERT(!patch_site_.is_bound() && !info_emitted_); + DCHECK(!patch_site_.is_bound() && !info_emitted_); Assembler::BlockTrampolinePoolScope block_trampoline_pool(masm_); __ bind(&patch_site_); __ andi(at, reg, 0); @@ -68,7 +68,7 @@ class JumpPatchSite BASE_EMBEDDED { // the inlined smi code. void EmitJumpIfSmi(Register reg, Label* target) { Assembler::BlockTrampolinePoolScope block_trampoline_pool(masm_); - ASSERT(!patch_site_.is_bound() && !info_emitted_); + DCHECK(!patch_site_.is_bound() && !info_emitted_); __ bind(&patch_site_); __ andi(at, reg, 0); // Never taken before patched. @@ -97,28 +97,6 @@ class JumpPatchSite BASE_EMBEDDED { }; -static void EmitStackCheck(MacroAssembler* masm_, - Register stack_limit_scratch, - int pointers = 0, - Register scratch = sp) { - Isolate* isolate = masm_->isolate(); - Label ok; - ASSERT(scratch.is(sp) == (pointers == 0)); - Heap::RootListIndex index; - if (pointers != 0) { - __ Subu(scratch, sp, Operand(pointers * kPointerSize)); - index = Heap::kRealStackLimitRootIndex; - } else { - index = Heap::kStackLimitRootIndex; - } - __ LoadRoot(stack_limit_scratch, index); - __ Branch(&ok, hs, scratch, Operand(stack_limit_scratch)); - PredictableCodeSizeScope predictable(masm_, 4 * Assembler::kInstrSize); - __ Call(isolate->builtins()->StackCheck(), RelocInfo::CODE_TARGET); - __ bind(&ok); -} - - // Generate code for a JS function. On entry to the function the receiver // and arguments have been pushed on the stack left to right. 
The actual // argument count matches the formal parameter count expected by the @@ -163,7 +141,7 @@ void FullCodeGenerator::Generate() { __ Branch(&ok, ne, a2, Operand(at)); __ lw(a2, GlobalObjectOperand()); - __ lw(a2, FieldMemOperand(a2, GlobalObject::kGlobalReceiverOffset)); + __ lw(a2, FieldMemOperand(a2, GlobalObject::kGlobalProxyOffset)); __ sw(a2, MemOperand(sp, receiver_offset)); @@ -176,16 +154,21 @@ void FullCodeGenerator::Generate() { FrameScope frame_scope(masm_, StackFrame::MANUAL); info->set_prologue_offset(masm_->pc_offset()); - __ Prologue(BUILD_FUNCTION_FRAME); + __ Prologue(info->IsCodePreAgingActive()); info->AddNoFrameRange(0, masm_->pc_offset()); { Comment cmnt(masm_, "[ Allocate locals"); int locals_count = info->scope()->num_stack_slots(); // Generators allocate locals, if any, in context slots. - ASSERT(!info->function()->is_generator() || locals_count == 0); + DCHECK(!info->function()->is_generator() || locals_count == 0); if (locals_count > 0) { if (locals_count >= 128) { - EmitStackCheck(masm_, a2, locals_count, t5); + Label ok; + __ Subu(t5, sp, Operand(locals_count * kPointerSize)); + __ LoadRoot(a2, Heap::kRealStackLimitRootIndex); + __ Branch(&ok, hs, t5, Operand(a2)); + __ InvokeBuiltin(Builtins::STACK_OVERFLOW, CALL_FUNCTION); + __ bind(&ok); } __ LoadRoot(t5, Heap::kUndefinedValueRootIndex); int kMaxPushes = FLAG_optimize_for_size ? 4 : 32; @@ -219,16 +202,19 @@ void FullCodeGenerator::Generate() { if (heap_slots > 0) { Comment cmnt(masm_, "[ Allocate context"); // Argument to NewContext is the function, which is still in a1. + bool need_write_barrier = true; if (FLAG_harmony_scoping && info->scope()->is_global_scope()) { __ push(a1); __ Push(info->scope()->GetScopeInfo()); - __ CallRuntime(Runtime::kHiddenNewGlobalContext, 2); + __ CallRuntime(Runtime::kNewGlobalContext, 2); } else if (heap_slots <= FastNewContextStub::kMaximumSlots) { FastNewContextStub stub(isolate(), heap_slots); __ CallStub(&stub); + // Result of FastNewContextStub is always in new space. + need_write_barrier = false; } else { __ push(a1); - __ CallRuntime(Runtime::kHiddenNewFunctionContext, 1); + __ CallRuntime(Runtime::kNewFunctionContext, 1); } function_in_register = false; // Context is returned in v0. It replaces the context passed to us. @@ -249,8 +235,15 @@ void FullCodeGenerator::Generate() { __ sw(a0, target); // Update the write barrier. - __ RecordWriteContextSlot( - cp, target.offset(), a0, a3, kRAHasBeenSaved, kDontSaveFPRegs); + if (need_write_barrier) { + __ RecordWriteContextSlot( + cp, target.offset(), a0, a3, kRAHasBeenSaved, kDontSaveFPRegs); + } else if (FLAG_debug_code) { + Label done; + __ JumpIfInNewSpace(cp, a0, &done); + __ Abort(kExpectedNewSpaceObject); + __ bind(&done); + } } } } @@ -308,9 +301,9 @@ void FullCodeGenerator::Generate() { // constant. 
if (scope()->is_function_scope() && scope()->function() != NULL) { VariableDeclaration* function = scope()->function(); - ASSERT(function->proxy()->var()->mode() == CONST || + DCHECK(function->proxy()->var()->mode() == CONST || function->proxy()->var()->mode() == CONST_LEGACY); - ASSERT(function->proxy()->var()->location() != Variable::UNALLOCATED); + DCHECK(function->proxy()->var()->location() != Variable::UNALLOCATED); VisitVariableDeclaration(function); } VisitDeclarations(scope()->declarations()); @@ -318,13 +311,20 @@ void FullCodeGenerator::Generate() { { Comment cmnt(masm_, "[ Stack check"); PrepareForBailoutForId(BailoutId::Declarations(), NO_REGISTERS); - EmitStackCheck(masm_, at); + Label ok; + __ LoadRoot(at, Heap::kStackLimitRootIndex); + __ Branch(&ok, hs, sp, Operand(at)); + Handle<Code> stack_check = isolate()->builtins()->StackCheck(); + PredictableCodeSizeScope predictable(masm_, + masm_->CallSize(stack_check, RelocInfo::CODE_TARGET)); + __ Call(stack_check, RelocInfo::CODE_TARGET); + __ bind(&ok); } { Comment cmnt(masm_, "[ Body"); - ASSERT(loop_depth() == 0); + DCHECK(loop_depth() == 0); VisitStatements(function()->body()); - ASSERT(loop_depth() == 0); + DCHECK(loop_depth() == 0); } } @@ -338,7 +338,7 @@ void FullCodeGenerator::Generate() { void FullCodeGenerator::ClearAccumulator() { - ASSERT(Smi::FromInt(0) == 0); + DCHECK(Smi::FromInt(0) == 0); __ mov(v0, zero_reg); } @@ -353,7 +353,7 @@ void FullCodeGenerator::EmitProfilingCounterDecrement(int delta) { void FullCodeGenerator::EmitProfilingCounterReset() { int reset_value = FLAG_interrupt_budget; - if (isolate()->IsDebuggerActive()) { + if (info_->is_debug()) { // Detect debug break requests as soon as possible. reset_value = FLAG_interrupt_budget >> 4; } @@ -373,7 +373,7 @@ void FullCodeGenerator::EmitBackEdgeBookkeeping(IterationStatement* stmt, Assembler::BlockTrampolinePoolScope block_trampoline_pool(masm_); Comment cmnt(masm_, "[ Back edge bookkeeping"); Label ok; - ASSERT(back_edge_target->is_bound()); + DCHECK(back_edge_target->is_bound()); int distance = masm_->SizeOfCodeGeneratedSince(back_edge_target); int weight = Min(kMaxBackEdgeWeight, Max(1, distance / kCodeSizeMultiplier)); @@ -452,7 +452,7 @@ void FullCodeGenerator::EmitReturnSequence() { #ifdef DEBUG // Check that the size of the code used for returning is large enough // for the debugger's requirements. - ASSERT(Assembler::kJSReturnSequenceInstructions <= + DCHECK(Assembler::kJSReturnSequenceInstructions <= masm_->InstructionsGeneratedSince(&check_exit_codesize)); #endif } @@ -460,18 +460,18 @@ void FullCodeGenerator::EmitReturnSequence() { void FullCodeGenerator::EffectContext::Plug(Variable* var) const { - ASSERT(var->IsStackAllocated() || var->IsContextSlot()); + DCHECK(var->IsStackAllocated() || var->IsContextSlot()); } void FullCodeGenerator::AccumulatorValueContext::Plug(Variable* var) const { - ASSERT(var->IsStackAllocated() || var->IsContextSlot()); + DCHECK(var->IsStackAllocated() || var->IsContextSlot()); codegen()->GetVar(result_register(), var); } void FullCodeGenerator::StackValueContext::Plug(Variable* var) const { - ASSERT(var->IsStackAllocated() || var->IsContextSlot()); + DCHECK(var->IsStackAllocated() || var->IsContextSlot()); codegen()->GetVar(result_register(), var); __ push(result_register()); } @@ -542,7 +542,7 @@ void FullCodeGenerator::TestContext::Plug(Handle<Object> lit) const { true, true_label_, false_label_); - ASSERT(!lit->IsUndetectableObject()); // There are no undetectable literals. 
+ DCHECK(!lit->IsUndetectableObject()); // There are no undetectable literals. if (lit->IsUndefined() || lit->IsNull() || lit->IsFalse()) { if (false_label_ != fall_through_) __ Branch(false_label_); } else if (lit->IsTrue() || lit->IsJSObject()) { @@ -569,7 +569,7 @@ void FullCodeGenerator::TestContext::Plug(Handle<Object> lit) const { void FullCodeGenerator::EffectContext::DropAndPlug(int count, Register reg) const { - ASSERT(count > 0); + DCHECK(count > 0); __ Drop(count); } @@ -577,7 +577,7 @@ void FullCodeGenerator::EffectContext::DropAndPlug(int count, void FullCodeGenerator::AccumulatorValueContext::DropAndPlug( int count, Register reg) const { - ASSERT(count > 0); + DCHECK(count > 0); __ Drop(count); __ Move(result_register(), reg); } @@ -585,7 +585,7 @@ void FullCodeGenerator::AccumulatorValueContext::DropAndPlug( void FullCodeGenerator::StackValueContext::DropAndPlug(int count, Register reg) const { - ASSERT(count > 0); + DCHECK(count > 0); if (count > 1) __ Drop(count - 1); __ sw(reg, MemOperand(sp, 0)); } @@ -593,7 +593,7 @@ void FullCodeGenerator::StackValueContext::DropAndPlug(int count, void FullCodeGenerator::TestContext::DropAndPlug(int count, Register reg) const { - ASSERT(count > 0); + DCHECK(count > 0); // For simplicity we always test the accumulator register. __ Drop(count); __ Move(result_register(), reg); @@ -604,7 +604,7 @@ void FullCodeGenerator::TestContext::DropAndPlug(int count, void FullCodeGenerator::EffectContext::Plug(Label* materialize_true, Label* materialize_false) const { - ASSERT(materialize_true == materialize_false); + DCHECK(materialize_true == materialize_false); __ bind(materialize_true); } @@ -640,8 +640,8 @@ void FullCodeGenerator::StackValueContext::Plug( void FullCodeGenerator::TestContext::Plug(Label* materialize_true, Label* materialize_false) const { - ASSERT(materialize_true == true_label_); - ASSERT(materialize_false == false_label_); + DCHECK(materialize_true == true_label_); + DCHECK(materialize_false == false_label_); } @@ -707,7 +707,7 @@ void FullCodeGenerator::Split(Condition cc, MemOperand FullCodeGenerator::StackOperand(Variable* var) { - ASSERT(var->IsStackAllocated()); + DCHECK(var->IsStackAllocated()); // Offset is negative because higher indexes are at lower addresses. int offset = -var->index() * kPointerSize; // Adjust by a (parameter or local) base offset. @@ -721,7 +721,7 @@ MemOperand FullCodeGenerator::StackOperand(Variable* var) { MemOperand FullCodeGenerator::VarOperand(Variable* var, Register scratch) { - ASSERT(var->IsContextSlot() || var->IsStackAllocated()); + DCHECK(var->IsContextSlot() || var->IsStackAllocated()); if (var->IsContextSlot()) { int context_chain_length = scope()->ContextChainLength(var->scope()); __ LoadContext(scratch, context_chain_length); @@ -743,10 +743,10 @@ void FullCodeGenerator::SetVar(Variable* var, Register src, Register scratch0, Register scratch1) { - ASSERT(var->IsContextSlot() || var->IsStackAllocated()); - ASSERT(!scratch0.is(src)); - ASSERT(!scratch0.is(scratch1)); - ASSERT(!scratch1.is(src)); + DCHECK(var->IsContextSlot() || var->IsStackAllocated()); + DCHECK(!scratch0.is(src)); + DCHECK(!scratch0.is(scratch1)); + DCHECK(!scratch1.is(src)); MemOperand location = VarOperand(var, scratch0); __ sw(src, location); // Emit the write barrier code if the location is in the heap. 
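The comment above introduces the store-plus-write-barrier shape that this patch threads through full-codegen; with the new FastNewContextStub fast path (whose result is always in new space) the barrier can be skipped and merely asserted in debug builds. A condensed sketch using the helpers and registers exactly as they appear in the hunks above (a sketch, not the verbatim generated sequence):

    __ sw(a0, target);  // Store the value into the context slot.
    if (need_write_barrier) {
      // The holder may be in old space: record the store for the GC.
      __ RecordWriteContextSlot(
          cp, target.offset(), a0, a3, kRAHasBeenSaved, kDontSaveFPRegs);
    } else if (FLAG_debug_code) {
      Label done;
      __ JumpIfInNewSpace(cp, a0, &done);  // New-space holders need no barrier.
      __ Abort(kExpectedNewSpaceObject);
      __ bind(&done);
    }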
@@ -784,7 +784,7 @@ void FullCodeGenerator::PrepareForBailoutBeforeSplit(Expression* expr, void FullCodeGenerator::EmitDebugCheckDeclarationContext(Variable* variable) { // The variable in the declaration always resides in the current function // context. - ASSERT_EQ(0, scope()->ContextChainLength(variable->scope())); + DCHECK_EQ(0, scope()->ContextChainLength(variable->scope())); if (generate_debug_code_) { // Check that we're not inside a with or catch context. __ lw(a1, FieldMemOperand(cp, HeapObject::kMapOffset)); @@ -840,7 +840,7 @@ void FullCodeGenerator::VisitVariableDeclaration( Comment cmnt(masm_, "[ VariableDeclaration"); __ li(a2, Operand(variable->name())); // Declaration nodes are always introduced in one of four modes. - ASSERT(IsDeclaredVariableMode(mode)); + DCHECK(IsDeclaredVariableMode(mode)); PropertyAttributes attr = IsImmutableVariableMode(mode) ? READ_ONLY : NONE; __ li(a1, Operand(Smi::FromInt(attr))); @@ -852,11 +852,11 @@ void FullCodeGenerator::VisitVariableDeclaration( __ LoadRoot(a0, Heap::kTheHoleValueRootIndex); __ Push(cp, a2, a1, a0); } else { - ASSERT(Smi::FromInt(0) == 0); + DCHECK(Smi::FromInt(0) == 0); __ mov(a0, zero_reg); // Smi::FromInt(0) indicates no initial value. __ Push(cp, a2, a1, a0); } - __ CallRuntime(Runtime::kHiddenDeclareContextSlot, 4); + __ CallRuntime(Runtime::kDeclareLookupSlot, 4); break; } } @@ -871,7 +871,7 @@ void FullCodeGenerator::VisitFunctionDeclaration( case Variable::UNALLOCATED: { globals_->Add(variable->name(), zone()); Handle<SharedFunctionInfo> function = - Compiler::BuildFunctionInfo(declaration->fun(), script()); + Compiler::BuildFunctionInfo(declaration->fun(), script(), info_); // Check for stack-overflow exception. if (function.is_null()) return SetStackOverflow(); globals_->Add(function, zone()); @@ -912,7 +912,7 @@ void FullCodeGenerator::VisitFunctionDeclaration( __ Push(cp, a2, a1); // Push initial value for function declaration. VisitForStackValue(declaration->fun()); - __ CallRuntime(Runtime::kHiddenDeclareContextSlot, 4); + __ CallRuntime(Runtime::kDeclareLookupSlot, 4); break; } } @@ -921,8 +921,8 @@ void FullCodeGenerator::VisitFunctionDeclaration( void FullCodeGenerator::VisitModuleDeclaration(ModuleDeclaration* declaration) { Variable* variable = declaration->proxy()->var(); - ASSERT(variable->location() == Variable::CONTEXT); - ASSERT(variable->interface()->IsFrozen()); + DCHECK(variable->location() == Variable::CONTEXT); + DCHECK(variable->interface()->IsFrozen()); Comment cmnt(masm_, "[ ModuleDeclaration"); EmitDebugCheckDeclarationContext(variable); @@ -984,7 +984,7 @@ void FullCodeGenerator::DeclareGlobals(Handle<FixedArray> pairs) { __ li(a1, Operand(pairs)); __ li(a0, Operand(Smi::FromInt(DeclareGlobalsFlags()))); __ Push(cp, a1, a0); - __ CallRuntime(Runtime::kHiddenDeclareGlobals, 3); + __ CallRuntime(Runtime::kDeclareGlobals, 3); // Return value is ignored. } @@ -992,7 +992,7 @@ void FullCodeGenerator::DeclareGlobals(Handle<FixedArray> pairs) { void FullCodeGenerator::DeclareModules(Handle<FixedArray> descriptions) { // Call the runtime to declare the modules. __ Push(descriptions); - __ CallRuntime(Runtime::kHiddenDeclareModules, 1); + __ CallRuntime(Runtime::kDeclareModules, 1); // Return value is ignored. } @@ -1221,7 +1221,7 @@ void FullCodeGenerator::VisitForInStatement(ForInStatement* stmt) { // For proxies, no filtering is done. // TODO(rossberg): What if only a prototype is a proxy? Not specified yet. 
- ASSERT_EQ(Smi::FromInt(0), 0); + DCHECK_EQ(Smi::FromInt(0), 0); __ Branch(&update_each, eq, a2, Operand(zero_reg)); // Convert the entry to a string or (smi) 0 if it isn't a property @@ -1272,27 +1272,8 @@ void FullCodeGenerator::VisitForOfStatement(ForOfStatement* stmt) { Iteration loop_statement(this, stmt); increment_loop_depth(); - // var iterator = iterable[@@iterator]() - VisitForAccumulatorValue(stmt->assign_iterator()); - __ mov(a0, v0); - - // As with for-in, skip the loop if the iterator is null or undefined. - __ LoadRoot(at, Heap::kUndefinedValueRootIndex); - __ Branch(loop_statement.break_label(), eq, a0, Operand(at)); - __ LoadRoot(at, Heap::kNullValueRootIndex); - __ Branch(loop_statement.break_label(), eq, a0, Operand(at)); - - // Convert the iterator to a JS object. - Label convert, done_convert; - __ JumpIfSmi(a0, &convert); - __ GetObjectType(a0, a1, a1); - __ Branch(&done_convert, ge, a1, Operand(FIRST_SPEC_OBJECT_TYPE)); - __ bind(&convert); - __ push(a0); - __ InvokeBuiltin(Builtins::TO_OBJECT, CALL_FUNCTION); - __ mov(a0, v0); - __ bind(&done_convert); - __ push(a0); + // var iterator = iterable[Symbol.iterator](); + VisitForEffect(stmt->assign_iterator()); // Loop entry. __ bind(loop_statement.continue_label()); @@ -1349,7 +1330,7 @@ void FullCodeGenerator::EmitNewClosure(Handle<SharedFunctionInfo> info, __ LoadRoot(a1, pretenure ? Heap::kTrueValueRootIndex : Heap::kFalseValueRootIndex); __ Push(cp, a0, a1); - __ CallRuntime(Runtime::kHiddenNewClosure, 3); + __ CallRuntime(Runtime::kNewClosure, 3); } context()->Plug(v0); } @@ -1361,7 +1342,7 @@ void FullCodeGenerator::VisitVariableProxy(VariableProxy* expr) { } -void FullCodeGenerator::EmitLoadGlobalCheckExtensions(Variable* var, +void FullCodeGenerator::EmitLoadGlobalCheckExtensions(VariableProxy* proxy, TypeofState typeof_state, Label* slow) { Register current = cp; @@ -1406,8 +1387,13 @@ void FullCodeGenerator::EmitLoadGlobalCheckExtensions(Variable* var, __ bind(&fast); } - __ lw(a0, GlobalObjectOperand()); - __ li(a2, Operand(var->name())); + __ lw(LoadIC::ReceiverRegister(), GlobalObjectOperand()); + __ li(LoadIC::NameRegister(), Operand(proxy->var()->name())); + if (FLAG_vector_ics) { + __ li(LoadIC::SlotRegister(), + Operand(Smi::FromInt(proxy->VariableFeedbackSlot()))); + } + ContextualMode mode = (typeof_state == INSIDE_TYPEOF) ? NOT_CONTEXTUAL : CONTEXTUAL; @@ -1417,7 +1403,7 @@ void FullCodeGenerator::EmitLoadGlobalCheckExtensions(Variable* var, MemOperand FullCodeGenerator::ContextSlotOperandCheckExtensions(Variable* var, Label* slow) { - ASSERT(var->IsContextSlot()); + DCHECK(var->IsContextSlot()); Register context = cp; Register next = a3; Register temp = t0; @@ -1445,7 +1431,7 @@ MemOperand FullCodeGenerator::ContextSlotOperandCheckExtensions(Variable* var, } -void FullCodeGenerator::EmitDynamicLookupFastCase(Variable* var, +void FullCodeGenerator::EmitDynamicLookupFastCase(VariableProxy* proxy, TypeofState typeof_state, Label* slow, Label* done) { @@ -1454,8 +1440,9 @@ void FullCodeGenerator::EmitDynamicLookupFastCase(Variable* var, // introducing variables. In those cases, we do not want to // perform a runtime call for all variables in the scope // containing the eval. 
+ Variable* var = proxy->var(); if (var->mode() == DYNAMIC_GLOBAL) { - EmitLoadGlobalCheckExtensions(var, typeof_state, slow); + EmitLoadGlobalCheckExtensions(proxy, typeof_state, slow); __ Branch(done); } else if (var->mode() == DYNAMIC_LOCAL) { Variable* local = var->local_if_not_shadowed(); @@ -1471,7 +1458,7 @@ __ Branch(done, ne, at, Operand(zero_reg)); __ li(a0, Operand(var->name())); __ push(a0); - __ CallRuntime(Runtime::kHiddenThrowReferenceError, 1); + __ CallRuntime(Runtime::kThrowReferenceError, 1); } } __ Branch(done); @@ -1489,10 +1476,12 @@ void FullCodeGenerator::EmitVariableLoad(VariableProxy* proxy) { switch (var->location()) { case Variable::UNALLOCATED: { Comment cmnt(masm_, "[ Global variable"); - // Use inline caching. Variable name is passed in a2 and the global - // object (receiver) in a0. - __ lw(a0, GlobalObjectOperand()); - __ li(a2, Operand(var->name())); + __ lw(LoadIC::ReceiverRegister(), GlobalObjectOperand()); + __ li(LoadIC::NameRegister(), Operand(var->name())); + if (FLAG_vector_ics) { + __ li(LoadIC::SlotRegister(), + Operand(Smi::FromInt(proxy->VariableFeedbackSlot()))); + } CallLoadIC(CONTEXTUAL); context()->Plug(v0); break; @@ -1509,7 +1498,7 @@ void FullCodeGenerator::EmitVariableLoad(VariableProxy* proxy) { // always looked up dynamically, i.e. in that case // var->location() == LOOKUP. // always holds. - ASSERT(var->scope() != NULL); + DCHECK(var->scope() != NULL); // Check if the binding really needs an initialization check. The check // can be skipped in the following situation: we have a LET or CONST @@ -1532,8 +1521,8 @@ void FullCodeGenerator::EmitVariableLoad(VariableProxy* proxy) { skip_init_check = false; } else { // Check that we always have valid source position. - ASSERT(var->initializer_position() != RelocInfo::kNoPosition); - ASSERT(proxy->position() != RelocInfo::kNoPosition); + DCHECK(var->initializer_position() != RelocInfo::kNoPosition); + DCHECK(proxy->position() != RelocInfo::kNoPosition); skip_init_check = var->mode() != CONST_LEGACY && var->initializer_position() < proxy->position(); } @@ -1550,11 +1539,11 @@ void FullCodeGenerator::EmitVariableLoad(VariableProxy* proxy) { __ Branch(&done, ne, at, Operand(zero_reg)); __ li(a0, Operand(var->name())); __ push(a0); - __ CallRuntime(Runtime::kHiddenThrowReferenceError, 1); + __ CallRuntime(Runtime::kThrowReferenceError, 1); __ bind(&done); } else { // Uninitialized const bindings outside of harmony mode are unholed. - ASSERT(var->mode() == CONST_LEGACY); + DCHECK(var->mode() == CONST_LEGACY); __ LoadRoot(a0, Heap::kUndefinedValueRootIndex); __ Movz(v0, a0, at); // Conditional move: Undefined if TheHole. } @@ -1571,11 +1560,11 @@ void FullCodeGenerator::EmitVariableLoad(VariableProxy* proxy) { Label done, slow; // Generate code for loading from variables potentially shadowed // by eval-introduced variables. - EmitDynamicLookupFastCase(var, NOT_INSIDE_TYPEOF, &slow, &done); + EmitDynamicLookupFastCase(proxy, NOT_INSIDE_TYPEOF, &slow, &done); __ bind(&slow); __ li(a1, Operand(var->name())); __ Push(cp, a1); // Context and name.
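The EmitVariableLoad hunk above illustrates the IC calling convention this patch adopts throughout full-codegen: hard-coded argument registers (a0/a2) are replaced by the IC's named register accessors, and under --vector-ics the feedback-vector slot travels as an extra Smi operand. Condensed from the lines above (a sketch of the pattern, not the verbatim emitted code):

    __ lw(LoadIC::ReceiverRegister(), GlobalObjectOperand());
    __ li(LoadIC::NameRegister(), Operand(var->name()));
    if (FLAG_vector_ics) {
      // Type-feedback slot for this load, passed as a Smi.
      __ li(LoadIC::SlotRegister(),
            Operand(Smi::FromInt(proxy->VariableFeedbackSlot())));
    }
    CallLoadIC(CONTEXTUAL);  // CONTEXTUAL: the receiver is the global object.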
- __ CallRuntime(Runtime::kHiddenLoadContextSlot, 2); + __ CallRuntime(Runtime::kLoadLookupSlot, 2); __ bind(&done); context()->Plug(v0); } @@ -1607,7 +1596,7 @@ void FullCodeGenerator::VisitRegExpLiteral(RegExpLiteral* expr) { __ li(a2, Operand(expr->pattern())); __ li(a1, Operand(expr->flags())); __ Push(t0, a3, a2, a1); - __ CallRuntime(Runtime::kHiddenMaterializeRegExpLiteral, 4); + __ CallRuntime(Runtime::kMaterializeRegExpLiteral, 4); __ mov(t1, v0); __ bind(&materialized); @@ -1619,7 +1608,7 @@ void FullCodeGenerator::VisitRegExpLiteral(RegExpLiteral* expr) { __ bind(&runtime_allocate); __ li(a0, Operand(Smi::FromInt(size))); __ Push(t1, a0); - __ CallRuntime(Runtime::kHiddenAllocateInNewSpace, 1); + __ CallRuntime(Runtime::kAllocateInNewSpace, 1); __ pop(t1); __ bind(&allocated); @@ -1661,10 +1650,10 @@ void FullCodeGenerator::VisitObjectLiteral(ObjectLiteral* expr) { __ li(a0, Operand(Smi::FromInt(flags))); int properties_count = constant_properties->length() / 2; if (expr->may_store_doubles() || expr->depth() > 1 || - Serializer::enabled(isolate()) || flags != ObjectLiteral::kFastElements || + masm()->serializer_enabled() || flags != ObjectLiteral::kFastElements || properties_count > FastCloneShallowObjectStub::kMaximumClonedProperties) { __ Push(a3, a2, a1, a0); - __ CallRuntime(Runtime::kHiddenCreateObjectLiteral, 4); + __ CallRuntime(Runtime::kCreateObjectLiteral, 4); } else { FastCloneShallowObjectStub stub(isolate(), properties_count); __ CallStub(&stub); @@ -1694,15 +1683,16 @@ void FullCodeGenerator::VisitObjectLiteral(ObjectLiteral* expr) { case ObjectLiteral::Property::CONSTANT: UNREACHABLE(); case ObjectLiteral::Property::MATERIALIZED_LITERAL: - ASSERT(!CompileTimeValue::IsCompileTimeValue(property->value())); + DCHECK(!CompileTimeValue::IsCompileTimeValue(property->value())); // Fall through. case ObjectLiteral::Property::COMPUTED: if (key->value()->IsInternalizedString()) { if (property->emit_store()) { VisitForAccumulatorValue(value); - __ mov(a0, result_register()); - __ li(a2, Operand(key->value())); - __ lw(a1, MemOperand(sp)); + __ mov(StoreIC::ValueRegister(), result_register()); + DCHECK(StoreIC::ValueRegister().is(a0)); + __ li(StoreIC::NameRegister(), Operand(key->value())); + __ lw(StoreIC::ReceiverRegister(), MemOperand(sp)); CallStoreIC(key->LiteralFeedbackId()); PrepareForBailoutForId(key->id(), NO_REGISTERS); } else { @@ -1716,7 +1706,7 @@ void FullCodeGenerator::VisitObjectLiteral(ObjectLiteral* expr) { VisitForStackValue(key); VisitForStackValue(value); if (property->emit_store()) { - __ li(a0, Operand(Smi::FromInt(NONE))); // PropertyAttributes. + __ li(a0, Operand(Smi::FromInt(SLOPPY))); // PropertyAttributes. 
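
Stores move to the same named-register scheme. The MIPS assignments, also declared later in this patch in ic-mips.cc:

const Register StoreIC::ReceiverRegister() { return a1; }
const Register StoreIC::NameRegister()     { return a2; }
const Register StoreIC::ValueRegister()    { return a0; }

KeyedStoreIC appears to share these three (a DCHECK further down in this file pins its value register to a0) and additionally reserves a3 via KeyedStoreIC::MapRegister(); that is why DCHECK(StoreIC::ValueRegister().is(a0)) above is a safe guard on the literal-key store path.
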
__ push(a0); __ CallRuntime(Runtime::kSetProperty, 4); } else { @@ -1755,11 +1745,11 @@ void FullCodeGenerator::VisitObjectLiteral(ObjectLiteral* expr) { EmitAccessor(it->second->setter); __ li(a0, Operand(Smi::FromInt(NONE))); __ push(a0); - __ CallRuntime(Runtime::kDefineOrRedefineAccessorProperty, 5); + __ CallRuntime(Runtime::kDefineAccessorPropertyUnchecked, 5); } if (expr->has_function()) { - ASSERT(result_saved); + DCHECK(result_saved); __ lw(a0, MemOperand(sp)); __ push(a0); __ CallRuntime(Runtime::kToFastProperties, 1); @@ -1785,7 +1775,7 @@ void FullCodeGenerator::VisitArrayLiteral(ArrayLiteral* expr) { int length = subexprs->length(); Handle<FixedArray> constant_elements = expr->constant_elements(); - ASSERT_EQ(2, constant_elements->length()); + DCHECK_EQ(2, constant_elements->length()); ElementsKind constant_elements_kind = static_cast<ElementsKind>(Smi::cast(constant_elements->get(0))->value()); bool has_fast_elements = @@ -1805,33 +1795,12 @@ void FullCodeGenerator::VisitArrayLiteral(ArrayLiteral* expr) { __ lw(a3, FieldMemOperand(a3, JSFunction::kLiteralsOffset)); __ li(a2, Operand(Smi::FromInt(expr->literal_index()))); __ li(a1, Operand(constant_elements)); - if (has_fast_elements && constant_elements_values->map() == - isolate()->heap()->fixed_cow_array_map()) { - FastCloneShallowArrayStub stub( - isolate(), - FastCloneShallowArrayStub::COPY_ON_WRITE_ELEMENTS, - allocation_site_mode, - length); - __ CallStub(&stub); - __ IncrementCounter(isolate()->counters()->cow_arrays_created_stub(), - 1, a1, a2); - } else if (expr->depth() > 1 || Serializer::enabled(isolate()) || - length > FastCloneShallowArrayStub::kMaximumClonedLength) { + if (expr->depth() > 1 || length > JSObject::kInitialMaxFastElementArray) { __ li(a0, Operand(Smi::FromInt(flags))); __ Push(a3, a2, a1, a0); - __ CallRuntime(Runtime::kHiddenCreateArrayLiteral, 4); + __ CallRuntime(Runtime::kCreateArrayLiteral, 4); } else { - ASSERT(IsFastSmiOrObjectElementsKind(constant_elements_kind) || - FLAG_smi_only_arrays); - FastCloneShallowArrayStub::Mode mode = - FastCloneShallowArrayStub::CLONE_ANY_ELEMENTS; - - if (has_fast_elements) { - mode = FastCloneShallowArrayStub::CLONE_ELEMENTS; - } - - FastCloneShallowArrayStub stub(isolate(), mode, allocation_site_mode, - length); + FastCloneShallowArrayStub stub(isolate(), allocation_site_mode); __ CallStub(&stub); } @@ -1881,7 +1850,7 @@ void FullCodeGenerator::VisitArrayLiteral(ArrayLiteral* expr) { void FullCodeGenerator::VisitAssignment(Assignment* expr) { - ASSERT(expr->target()->IsValidReferenceExpression()); + DCHECK(expr->target()->IsValidReferenceExpression()); Comment cmnt(masm_, "[ Assignment"); @@ -1903,9 +1872,9 @@ void FullCodeGenerator::VisitAssignment(Assignment* expr) { break; case NAMED_PROPERTY: if (expr->is_compound()) { - // We need the receiver both on the stack and in the accumulator. - VisitForAccumulatorValue(property->obj()); - __ push(result_register()); + // We need the receiver both on the stack and in the register. + VisitForStackValue(property->obj()); + __ lw(LoadIC::ReceiverRegister(), MemOperand(sp, 0)); } else { VisitForStackValue(property->obj()); } @@ -1914,9 +1883,9 @@ void FullCodeGenerator::VisitAssignment(Assignment* expr) { // We need the key and receiver on both the stack and in v0 and a1. 
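
The Runtime::kHidden* identifiers on the removed lines are gone as a category: the formerly "hidden" runtime entries (kept out of the %-intrinsic namespace) are folded into the ordinary Runtime::k* enumeration, so only the enum constants change, not the call sequence. Roughly, the MIPS CallRuntime path looks like the following sketch, reconstructed from the macro assembler and simplified, so treat the names and signatures as approximate:

void MacroAssembler::CallRuntime(Runtime::FunctionId id, int num_arguments) {
  // Resolve the enum constant to its table entry (C++ entry point + arity).
  const Runtime::Function* f = Runtime::FunctionForId(id);
  // Every runtime call funnels through CEntryStub: on MIPS, a0 carries the
  // argument count and a1 the entry address.
  PrepareCEntryArgs(num_arguments);
  PrepareCEntryFunction(ExternalReference(f, isolate()));
  CEntryStub stub(isolate(), 1);
  CallStub(&stub);
}
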
if (expr->is_compound()) { VisitForStackValue(property->obj()); - VisitForAccumulatorValue(property->key()); - __ lw(a1, MemOperand(sp, 0)); - __ push(v0); + VisitForStackValue(property->key()); + __ lw(LoadIC::ReceiverRegister(), MemOperand(sp, 1 * kPointerSize)); + __ lw(LoadIC::NameRegister(), MemOperand(sp, 0)); } else { VisitForStackValue(property->obj()); VisitForStackValue(property->key()); @@ -2012,7 +1981,7 @@ void FullCodeGenerator::VisitYield(Yield* expr) { __ bind(&suspend); VisitForAccumulatorValue(expr->generator_object()); - ASSERT(continuation.pos() > 0 && Smi::IsValid(continuation.pos())); + DCHECK(continuation.pos() > 0 && Smi::IsValid(continuation.pos())); __ li(a1, Operand(Smi::FromInt(continuation.pos()))); __ sw(a1, FieldMemOperand(v0, JSGeneratorObject::kContinuationOffset)); __ sw(cp, FieldMemOperand(v0, JSGeneratorObject::kContextOffset)); @@ -2022,7 +1991,7 @@ void FullCodeGenerator::VisitYield(Yield* expr) { __ Addu(a1, fp, Operand(StandardFrameConstants::kExpressionsOffset)); __ Branch(&post_runtime, eq, sp, Operand(a1)); __ push(v0); // generator object - __ CallRuntime(Runtime::kHiddenSuspendJSGeneratorObject, 1); + __ CallRuntime(Runtime::kSuspendJSGeneratorObject, 1); __ lw(cp, MemOperand(fp, StandardFrameConstants::kContextOffset)); __ bind(&post_runtime); __ pop(result_register()); @@ -2053,7 +2022,10 @@ void FullCodeGenerator::VisitYield(Yield* expr) { // [sp + 0 * kPointerSize] g Label l_catch, l_try, l_suspend, l_continuation, l_resume; - Label l_next, l_call, l_loop; + Label l_next, l_call; + Register load_receiver = LoadIC::ReceiverRegister(); + Register load_name = LoadIC::NameRegister(); + // Initial send value is undefined. __ LoadRoot(a0, Heap::kUndefinedValueRootIndex); __ Branch(&l_next); @@ -2062,9 +2034,9 @@ void FullCodeGenerator::VisitYield(Yield* expr) { __ bind(&l_catch); __ mov(a0, v0); handler_table()->set(expr->index(), Smi::FromInt(l_catch.pos())); - __ LoadRoot(a2, Heap::kthrow_stringRootIndex); // "throw" - __ lw(a3, MemOperand(sp, 1 * kPointerSize)); // iter - __ Push(a2, a3, a0); // "throw", iter, except + __ LoadRoot(load_name, Heap::kthrow_stringRootIndex); // "throw" + __ lw(a3, MemOperand(sp, 1 * kPointerSize)); // iter + __ Push(load_name, a3, a0); // "throw", iter, except __ jmp(&l_call); // try { received = %yield result } @@ -2083,14 +2055,14 @@ void FullCodeGenerator::VisitYield(Yield* expr) { const int generator_object_depth = kPointerSize + handler_size; __ lw(a0, MemOperand(sp, generator_object_depth)); __ push(a0); // g - ASSERT(l_continuation.pos() > 0 && Smi::IsValid(l_continuation.pos())); + DCHECK(l_continuation.pos() > 0 && Smi::IsValid(l_continuation.pos())); __ li(a1, Operand(Smi::FromInt(l_continuation.pos()))); __ sw(a1, FieldMemOperand(a0, JSGeneratorObject::kContinuationOffset)); __ sw(cp, FieldMemOperand(a0, JSGeneratorObject::kContextOffset)); __ mov(a1, cp); __ RecordWriteField(a0, JSGeneratorObject::kContextOffset, a1, a2, kRAHasBeenSaved, kDontSaveFPRegs); - __ CallRuntime(Runtime::kHiddenSuspendJSGeneratorObject, 1); + __ CallRuntime(Runtime::kSuspendJSGeneratorObject, 1); __ lw(cp, MemOperand(fp, StandardFrameConstants::kContextOffset)); __ pop(v0); // result EmitReturnSequence(); @@ -2100,14 +2072,19 @@ void FullCodeGenerator::VisitYield(Yield* expr) { // receiver = iter; f = 'next'; arg = received; __ bind(&l_next); - __ LoadRoot(a2, Heap::knext_stringRootIndex); // "next" - __ lw(a3, MemOperand(sp, 1 * kPointerSize)); // iter - __ Push(a2, a3, a0); // "next", iter, received + + __ 
LoadRoot(load_name, Heap::knext_stringRootIndex); // "next" + __ lw(a3, MemOperand(sp, 1 * kPointerSize)); // iter + __ Push(load_name, a3, a0); // "next", iter, received // result = receiver[f](arg); __ bind(&l_call); - __ lw(a1, MemOperand(sp, kPointerSize)); - __ lw(a0, MemOperand(sp, 2 * kPointerSize)); + __ lw(load_receiver, MemOperand(sp, kPointerSize)); + __ lw(load_name, MemOperand(sp, 2 * kPointerSize)); + if (FLAG_vector_ics) { + __ li(LoadIC::SlotRegister(), + Operand(Smi::FromInt(expr->KeyedLoadFeedbackSlot()))); + } Handle<Code> ic = isolate()->builtins()->KeyedLoadIC_Initialize(); CallIC(ic, TypeFeedbackId::None()); __ mov(a0, v0); @@ -2120,21 +2097,29 @@ void FullCodeGenerator::VisitYield(Yield* expr) { __ Drop(1); // The function is still on the stack; drop it. // if (!result.done) goto l_try; - __ bind(&l_loop); - __ mov(a0, v0); - __ push(a0); // save result - __ LoadRoot(a2, Heap::kdone_stringRootIndex); // "done" - CallLoadIC(NOT_CONTEXTUAL); // result.done in v0 + __ Move(load_receiver, v0); + + __ push(load_receiver); // save result + __ LoadRoot(load_name, Heap::kdone_stringRootIndex); // "done" + if (FLAG_vector_ics) { + __ li(LoadIC::SlotRegister(), + Operand(Smi::FromInt(expr->DoneFeedbackSlot()))); + } + CallLoadIC(NOT_CONTEXTUAL); // v0=result.done __ mov(a0, v0); Handle<Code> bool_ic = ToBooleanStub::GetUninitialized(isolate()); CallIC(bool_ic); __ Branch(&l_try, eq, v0, Operand(zero_reg)); // result.value - __ pop(a0); // result - __ LoadRoot(a2, Heap::kvalue_stringRootIndex); // "value" - CallLoadIC(NOT_CONTEXTUAL); // result.value in v0 - context()->DropAndPlug(2, v0); // drop iter and g + __ pop(load_receiver); // result + __ LoadRoot(load_name, Heap::kvalue_stringRootIndex); // "value" + if (FLAG_vector_ics) { + __ li(LoadIC::SlotRegister(), + Operand(Smi::FromInt(expr->ValueFeedbackSlot()))); + } + CallLoadIC(NOT_CONTEXTUAL); // v0=result.value + context()->DropAndPlug(2, v0); // drop iter and g break; } } @@ -2145,7 +2130,7 @@ void FullCodeGenerator::EmitGeneratorResume(Expression *generator, Expression *value, JSGeneratorObject::ResumeMode resume_mode) { // The value stays in a0, and is ultimately read by the resumed generator, as - // if CallRuntime(Runtime::kHiddenSuspendJSGeneratorObject) returned it. Or it + // if CallRuntime(Runtime::kSuspendJSGeneratorObject) returned it. Or it // is read to throw the value when the resumed generator is already closed. // a1 will hold the generator object until the activation has been resumed. VisitForStackValue(generator); @@ -2224,10 +2209,10 @@ void FullCodeGenerator::EmitGeneratorResume(Expression *generator, __ push(a2); __ Branch(&push_operand_holes); __ bind(&call_resume); - ASSERT(!result_register().is(a1)); + DCHECK(!result_register().is(a1)); __ Push(a1, result_register()); __ Push(Smi::FromInt(resume_mode)); - __ CallRuntime(Runtime::kHiddenResumeJSGeneratorObject, 3); + __ CallRuntime(Runtime::kResumeJSGeneratorObject, 3); // Not reached: the runtime call returns elsewhere. __ stop("not-reached"); @@ -2242,14 +2227,14 @@ void FullCodeGenerator::EmitGeneratorResume(Expression *generator, } else { // Throw the provided value. __ push(a0); - __ CallRuntime(Runtime::kHiddenThrow, 1); + __ CallRuntime(Runtime::kThrow, 1); } __ jmp(&done); // Throw error if we attempt to operate on a running generator. 
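
For orientation, the l_next/l_call/l_try/l_catch plumbing in the two hunks above encodes delegated yield. Collapsed into the outline that the original comments spell out (pseudocode, not source):

// received = undefined;                                  // initial send value
// loop:  receiver = iter; f = "next"; arg = received;    // l_next
//        result = receiver[f](arg);                      // l_call: keyed load IC + call
//        if (!result.done) {                             // LoadIC "done" + ToBoolean stub
//          received = %yield result;  goto loop;         // l_try / l_suspend
//        }
//        return result.value;                            // LoadIC "value", ends in v0
// on an exception thrown into the generator:
//        receiver = iter; f = "throw"; arg = e; goto l_call;   // l_catch
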
__ bind(&wrong_state); __ push(a1); - __ CallRuntime(Runtime::kHiddenThrowGeneratorStateError, 1); + __ CallRuntime(Runtime::kThrowGeneratorStateError, 1); __ bind(&done); context()->Plug(result_register()); @@ -2267,7 +2252,7 @@ void FullCodeGenerator::EmitCreateIteratorResult(bool done) { __ bind(&gc_required); __ Push(Smi::FromInt(map->instance_size())); - __ CallRuntime(Runtime::kHiddenAllocateInNewSpace, 1); + __ CallRuntime(Runtime::kAllocateInNewSpace, 1); __ lw(context_register(), MemOperand(fp, StandardFrameConstants::kContextOffset)); @@ -2276,7 +2261,7 @@ void FullCodeGenerator::EmitCreateIteratorResult(bool done) { __ pop(a2); __ li(a3, Operand(isolate()->factory()->ToBoolean(done))); __ li(t0, Operand(isolate()->factory()->empty_fixed_array())); - ASSERT_EQ(map->instance_size(), 5 * kPointerSize); + DCHECK_EQ(map->instance_size(), 5 * kPointerSize); __ sw(a1, FieldMemOperand(v0, HeapObject::kMapOffset)); __ sw(t0, FieldMemOperand(v0, JSObject::kPropertiesOffset)); __ sw(t0, FieldMemOperand(v0, JSObject::kElementsOffset)); @@ -2295,19 +2280,27 @@ void FullCodeGenerator::EmitCreateIteratorResult(bool done) { void FullCodeGenerator::EmitNamedPropertyLoad(Property* prop) { SetSourcePosition(prop->position()); Literal* key = prop->key()->AsLiteral(); - __ mov(a0, result_register()); - __ li(a2, Operand(key->value())); - // Call load IC. It has arguments receiver and property name a0 and a2. - CallLoadIC(NOT_CONTEXTUAL, prop->PropertyFeedbackId()); + __ li(LoadIC::NameRegister(), Operand(key->value())); + if (FLAG_vector_ics) { + __ li(LoadIC::SlotRegister(), + Operand(Smi::FromInt(prop->PropertyFeedbackSlot()))); + CallLoadIC(NOT_CONTEXTUAL); + } else { + CallLoadIC(NOT_CONTEXTUAL, prop->PropertyFeedbackId()); + } } void FullCodeGenerator::EmitKeyedPropertyLoad(Property* prop) { SetSourcePosition(prop->position()); - __ mov(a0, result_register()); - // Call keyed load IC. It has arguments key and receiver in a0 and a1. Handle<Code> ic = isolate()->builtins()->KeyedLoadIC_Initialize(); - CallIC(ic, prop->PropertyFeedbackId()); + if (FLAG_vector_ics) { + __ li(LoadIC::SlotRegister(), + Operand(Smi::FromInt(prop->PropertyFeedbackSlot()))); + CallIC(ic); + } else { + CallIC(ic, prop->PropertyFeedbackId()); + } } @@ -2385,7 +2378,7 @@ void FullCodeGenerator::EmitInlineSmiBinaryOp(BinaryOperation* expr, __ Branch(&done, ne, v0, Operand(zero_reg)); __ Addu(scratch2, right, left); __ Branch(&stub_call, lt, scratch2, Operand(zero_reg)); - ASSERT(Smi::FromInt(0) == 0); + DCHECK(Smi::FromInt(0) == 0); __ mov(v0, zero_reg); break; } @@ -2421,7 +2414,7 @@ void FullCodeGenerator::EmitBinaryOp(BinaryOperation* expr, void FullCodeGenerator::EmitAssignment(Expression* expr) { - ASSERT(expr->IsValidReferenceExpression()); + DCHECK(expr->IsValidReferenceExpression()); // Left-hand side can only be a property, a global or a (parameter or local) // slot. @@ -2444,9 +2437,10 @@ void FullCodeGenerator::EmitAssignment(Expression* expr) { case NAMED_PROPERTY: { __ push(result_register()); // Preserve value. VisitForAccumulatorValue(prop->obj()); - __ mov(a1, result_register()); - __ pop(a0); // Restore value. - __ li(a2, Operand(prop->key()->AsLiteral()->value())); + __ mov(StoreIC::ReceiverRegister(), result_register()); + __ pop(StoreIC::ValueRegister()); // Restore value. + __ li(StoreIC::NameRegister(), + Operand(prop->key()->AsLiteral()->value())); CallStoreIC(); break; } @@ -2454,8 +2448,8 @@ void FullCodeGenerator::EmitAssignment(Expression* expr) { __ push(result_register()); // Preserve value. 
VisitForStackValue(prop->obj()); VisitForAccumulatorValue(prop->key()); - __ mov(a1, result_register()); - __ Pop(a0, a2); // a0 = restored value. + __ mov(KeyedStoreIC::NameRegister(), result_register()); + __ Pop(KeyedStoreIC::ValueRegister(), KeyedStoreIC::ReceiverRegister()); Handle<Code> ic = strict_mode() == SLOPPY ? isolate()->builtins()->KeyedStoreIC_Initialize() : isolate()->builtins()->KeyedStoreIC_Initialize_Strict(); @@ -2480,32 +2474,23 @@ void FullCodeGenerator::EmitStoreToStackLocalOrContextSlot( } -void FullCodeGenerator::EmitCallStoreContextSlot( - Handle<String> name, StrictMode strict_mode) { - __ li(a1, Operand(name)); - __ li(a0, Operand(Smi::FromInt(strict_mode))); - __ Push(v0, cp, a1, a0); // Value, context, name, strict mode. - __ CallRuntime(Runtime::kHiddenStoreContextSlot, 4); -} - - void FullCodeGenerator::EmitVariableAssignment(Variable* var, Token::Value op) { if (var->IsUnallocated()) { // Global var, const, or let. - __ mov(a0, result_register()); - __ li(a2, Operand(var->name())); - __ lw(a1, GlobalObjectOperand()); + __ mov(StoreIC::ValueRegister(), result_register()); + __ li(StoreIC::NameRegister(), Operand(var->name())); + __ lw(StoreIC::ReceiverRegister(), GlobalObjectOperand()); CallStoreIC(); } else if (op == Token::INIT_CONST_LEGACY) { // Const initializers need a write barrier. - ASSERT(!var->IsParameter()); // No const parameters. + DCHECK(!var->IsParameter()); // No const parameters. if (var->IsLookupSlot()) { __ li(a0, Operand(var->name())); __ Push(v0, cp, a0); // Context and name. - __ CallRuntime(Runtime::kHiddenInitializeConstContextSlot, 3); + __ CallRuntime(Runtime::kInitializeLegacyConstLookupSlot, 3); } else { - ASSERT(var->IsStackAllocated() || var->IsContextSlot()); + DCHECK(var->IsStackAllocated() || var->IsContextSlot()); Label skip; MemOperand location = VarOperand(var, a1); __ lw(a2, location); @@ -2517,30 +2502,31 @@ void FullCodeGenerator::EmitVariableAssignment(Variable* var, Token::Value op) { } else if (var->mode() == LET && op != Token::INIT_LET) { // Non-initializing assignment to let variable needs a write barrier. - if (var->IsLookupSlot()) { - EmitCallStoreContextSlot(var->name(), strict_mode()); - } else { - ASSERT(var->IsStackAllocated() || var->IsContextSlot()); - Label assign; - MemOperand location = VarOperand(var, a1); - __ lw(a3, location); - __ LoadRoot(t0, Heap::kTheHoleValueRootIndex); - __ Branch(&assign, ne, a3, Operand(t0)); - __ li(a3, Operand(var->name())); - __ push(a3); - __ CallRuntime(Runtime::kHiddenThrowReferenceError, 1); - // Perform the assignment. - __ bind(&assign); - EmitStoreToStackLocalOrContextSlot(var, location); - } + DCHECK(!var->IsLookupSlot()); + DCHECK(var->IsStackAllocated() || var->IsContextSlot()); + Label assign; + MemOperand location = VarOperand(var, a1); + __ lw(a3, location); + __ LoadRoot(t0, Heap::kTheHoleValueRootIndex); + __ Branch(&assign, ne, a3, Operand(t0)); + __ li(a3, Operand(var->name())); + __ push(a3); + __ CallRuntime(Runtime::kThrowReferenceError, 1); + // Perform the assignment. + __ bind(&assign); + EmitStoreToStackLocalOrContextSlot(var, location); } else if (!var->is_const_mode() || op == Token::INIT_CONST) { - // Assignment to var or initializing assignment to let/const - // in harmony mode. if (var->IsLookupSlot()) { - EmitCallStoreContextSlot(var->name(), strict_mode()); + // Assignment to var. + __ li(a1, Operand(var->name())); + __ li(a0, Operand(Smi::FromInt(strict_mode()))); + __ Push(v0, cp, a1, a0); // Value, context, name, strict mode. 
+ __ CallRuntime(Runtime::kStoreLookupSlot, 4); } else { - ASSERT((var->IsStackAllocated() || var->IsContextSlot())); + // Assignment to var or initializing assignment to let/const in harmony + // mode. + DCHECK((var->IsStackAllocated() || var->IsContextSlot())); MemOperand location = VarOperand(var, a1); if (generate_debug_code_ && op == Token::INIT_LET) { // Check for an uninitialized let binding. @@ -2558,15 +2544,14 @@ void FullCodeGenerator::EmitVariableAssignment(Variable* var, Token::Value op) { void FullCodeGenerator::EmitNamedPropertyAssignment(Assignment* expr) { // Assignment to a property, using a named store IC. Property* prop = expr->target()->AsProperty(); - ASSERT(prop != NULL); - ASSERT(prop->key()->AsLiteral() != NULL); + DCHECK(prop != NULL); + DCHECK(prop->key()->IsLiteral()); // Record source code position before IC call. SetSourcePosition(expr->position()); - __ mov(a0, result_register()); // Load the value. - __ li(a2, Operand(prop->key()->AsLiteral()->value())); - __ pop(a1); - + __ mov(StoreIC::ValueRegister(), result_register()); + __ li(StoreIC::NameRegister(), Operand(prop->key()->AsLiteral()->value())); + __ pop(StoreIC::ReceiverRegister()); CallStoreIC(expr->AssignmentFeedbackId()); PrepareForBailoutForId(expr->AssignmentId(), TOS_REG); @@ -2584,8 +2569,9 @@ void FullCodeGenerator::EmitKeyedPropertyAssignment(Assignment* expr) { // - a0 is the value, // - a1 is the key, // - a2 is the receiver. - __ mov(a0, result_register()); - __ Pop(a2, a1); // a1 = key. + __ mov(KeyedStoreIC::ValueRegister(), result_register()); + __ Pop(KeyedStoreIC::ReceiverRegister(), KeyedStoreIC::NameRegister()); + DCHECK(KeyedStoreIC::ValueRegister().is(a0)); Handle<Code> ic = strict_mode() == SLOPPY ? isolate()->builtins()->KeyedStoreIC_Initialize() @@ -2603,13 +2589,15 @@ void FullCodeGenerator::VisitProperty(Property* expr) { if (key->IsPropertyName()) { VisitForAccumulatorValue(expr->obj()); + __ Move(LoadIC::ReceiverRegister(), v0); EmitNamedPropertyLoad(expr); PrepareForBailoutForId(expr->LoadId(), TOS_REG); context()->Plug(v0); } else { VisitForStackValue(expr->obj()); VisitForAccumulatorValue(expr->key()); - __ pop(a1); + __ Move(LoadIC::NameRegister(), v0); + __ pop(LoadIC::ReceiverRegister()); EmitKeyedPropertyLoad(expr); context()->Plug(v0); } @@ -2642,8 +2630,8 @@ void FullCodeGenerator::EmitCallWithLoadIC(Call* expr) { __ Push(isolate()->factory()->undefined_value()); } else { // Load the function from the receiver. - ASSERT(callee->IsProperty()); - __ lw(v0, MemOperand(sp, 0)); + DCHECK(callee->IsProperty()); + __ lw(LoadIC::ReceiverRegister(), MemOperand(sp, 0)); EmitNamedPropertyLoad(callee->AsProperty()); PrepareForBailoutForId(callee->AsProperty()->LoadId(), TOS_REG); // Push the target function under the receiver. @@ -2665,8 +2653,9 @@ void FullCodeGenerator::EmitKeyedCallWithLoadIC(Call* expr, Expression* callee = expr->expression(); // Load the function from the receiver. - ASSERT(callee->IsProperty()); - __ lw(a1, MemOperand(sp, 0)); + DCHECK(callee->IsProperty()); + __ lw(LoadIC::ReceiverRegister(), MemOperand(sp, 0)); + __ Move(LoadIC::NameRegister(), v0); EmitKeyedPropertyLoad(callee->AsProperty()); PrepareForBailoutForId(callee->AsProperty()->LoadId(), TOS_REG); @@ -2726,7 +2715,7 @@ void FullCodeGenerator::EmitResolvePossiblyDirectEval(int arg_count) { // Do the runtime call. 
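
The context-slot runtime helpers are renamed along with everything else; the mapping used throughout this file is:

// Runtime::kHiddenLoadContextSlot                 -> Runtime::kLoadLookupSlot
// Runtime::kHiddenLoadContextSlotNoReferenceError -> Runtime::kLoadLookupSlotNoReferenceError
// Runtime::kHiddenStoreContextSlot                -> Runtime::kStoreLookupSlot
// Runtime::kHiddenInitializeConstContextSlot      -> Runtime::kInitializeLegacyConstLookupSlot
// Runtime::kHiddenDeleteContextSlot               -> Runtime::kDeleteLookupSlot
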
__ Push(t2, t1, t0, a1); - __ CallRuntime(Runtime::kHiddenResolvePossiblyDirectEval, 5); + __ CallRuntime(Runtime::kResolvePossiblyDirectEval, 5); } @@ -2789,16 +2778,16 @@ void FullCodeGenerator::VisitCall(Call* expr) { { PreservePositionScope scope(masm()->positions_recorder()); // Generate code for loading from variables potentially shadowed // by eval-introduced variables. - EmitDynamicLookupFastCase(proxy->var(), NOT_INSIDE_TYPEOF, &slow, &done); + EmitDynamicLookupFastCase(proxy, NOT_INSIDE_TYPEOF, &slow, &done); } __ bind(&slow); // Call the runtime to find the function to call (returned in v0) // and the object holding it (returned in v1). - ASSERT(!context_register().is(a2)); + DCHECK(!context_register().is(a2)); __ li(a2, Operand(proxy->name())); __ Push(context_register(), a2); - __ CallRuntime(Runtime::kHiddenLoadContextSlot, 2); + __ CallRuntime(Runtime::kLoadLookupSlot, 2); __ Push(v0, v1); // Function, receiver. // If fast case code has been generated, emit code to push the @@ -2831,7 +2820,7 @@ void FullCodeGenerator::VisitCall(Call* expr) { EmitKeyedCallWithLoadIC(expr, property->key()); } } else { - ASSERT(call_type == Call::OTHER_CALL); + DCHECK(call_type == Call::OTHER_CALL); // Call to an arbitrary expression not handled specially above. { PreservePositionScope scope(masm()->positions_recorder()); VisitForStackValue(callee); @@ -2844,7 +2833,7 @@ void FullCodeGenerator::VisitCall(Call* expr) { #ifdef DEBUG // RecordJSReturnSite should have been called. - ASSERT(expr->return_is_recorded_); + DCHECK(expr->return_is_recorded_); #endif } @@ -2878,7 +2867,7 @@ void FullCodeGenerator::VisitCallNew(CallNew* expr) { // Record call targets in unoptimized code. if (FLAG_pretenuring_call_new) { EnsureSlotContainsAllocationSite(expr->AllocationSiteFeedbackSlot()); - ASSERT(expr->AllocationSiteFeedbackSlot() == + DCHECK(expr->AllocationSiteFeedbackSlot() == expr->CallNewFeedbackSlot() + 1); } @@ -2894,7 +2883,7 @@ void FullCodeGenerator::VisitCallNew(CallNew* expr) { void FullCodeGenerator::EmitIsSmi(CallRuntime* expr) { ZoneList<Expression*>* args = expr->arguments(); - ASSERT(args->length() == 1); + DCHECK(args->length() == 1); VisitForAccumulatorValue(args->at(0)); @@ -2915,7 +2904,7 @@ void FullCodeGenerator::EmitIsSmi(CallRuntime* expr) { void FullCodeGenerator::EmitIsNonNegativeSmi(CallRuntime* expr) { ZoneList<Expression*>* args = expr->arguments(); - ASSERT(args->length() == 1); + DCHECK(args->length() == 1); VisitForAccumulatorValue(args->at(0)); @@ -2936,7 +2925,7 @@ void FullCodeGenerator::EmitIsNonNegativeSmi(CallRuntime* expr) { void FullCodeGenerator::EmitIsObject(CallRuntime* expr) { ZoneList<Expression*>* args = expr->arguments(); - ASSERT(args->length() == 1); + DCHECK(args->length() == 1); VisitForAccumulatorValue(args->at(0)); @@ -2967,7 +2956,7 @@ void FullCodeGenerator::EmitIsObject(CallRuntime* expr) { void FullCodeGenerator::EmitIsSpecObject(CallRuntime* expr) { ZoneList<Expression*>* args = expr->arguments(); - ASSERT(args->length() == 1); + DCHECK(args->length() == 1); VisitForAccumulatorValue(args->at(0)); @@ -2990,7 +2979,7 @@ void FullCodeGenerator::EmitIsSpecObject(CallRuntime* expr) { void FullCodeGenerator::EmitIsUndetectableObject(CallRuntime* expr) { ZoneList<Expression*>* args = expr->arguments(); - ASSERT(args->length() == 1); + DCHECK(args->length() == 1); VisitForAccumulatorValue(args->at(0)); @@ -3015,7 +3004,7 @@ void FullCodeGenerator::EmitIsUndetectableObject(CallRuntime* expr) { void 
FullCodeGenerator::EmitIsStringWrapperSafeForDefaultValueOf( CallRuntime* expr) { ZoneList<Expression*>* args = expr->arguments(); - ASSERT(args->length() == 1); + DCHECK(args->length() == 1); VisitForAccumulatorValue(args->at(0)); @@ -3060,7 +3049,7 @@ void FullCodeGenerator::EmitIsStringWrapperSafeForDefaultValueOf( __ Addu(t0, t0, Operand(DescriptorArray::kFirstOffset - kHeapObjectTag)); // Calculate the end of the descriptor array. __ mov(a2, t0); - __ sll(t1, a3, kPointerSizeLog2 - kSmiTagSize); + __ sll(t1, a3, kPointerSizeLog2); __ Addu(a2, a2, t1); // Loop through all the keys in the descriptor array. If one of these is the @@ -3102,7 +3091,7 @@ void FullCodeGenerator::EmitIsStringWrapperSafeForDefaultValueOf( void FullCodeGenerator::EmitIsFunction(CallRuntime* expr) { ZoneList<Expression*>* args = expr->arguments(); - ASSERT(args->length() == 1); + DCHECK(args->length() == 1); VisitForAccumulatorValue(args->at(0)); @@ -3125,7 +3114,7 @@ void FullCodeGenerator::EmitIsFunction(CallRuntime* expr) { void FullCodeGenerator::EmitIsMinusZero(CallRuntime* expr) { ZoneList<Expression*>* args = expr->arguments(); - ASSERT(args->length() == 1); + DCHECK(args->length() == 1); VisitForAccumulatorValue(args->at(0)); @@ -3155,7 +3144,7 @@ void FullCodeGenerator::EmitIsMinusZero(CallRuntime* expr) { void FullCodeGenerator::EmitIsArray(CallRuntime* expr) { ZoneList<Expression*>* args = expr->arguments(); - ASSERT(args->length() == 1); + DCHECK(args->length() == 1); VisitForAccumulatorValue(args->at(0)); @@ -3178,7 +3167,7 @@ void FullCodeGenerator::EmitIsArray(CallRuntime* expr) { void FullCodeGenerator::EmitIsRegExp(CallRuntime* expr) { ZoneList<Expression*>* args = expr->arguments(); - ASSERT(args->length() == 1); + DCHECK(args->length() == 1); VisitForAccumulatorValue(args->at(0)); @@ -3199,7 +3188,7 @@ void FullCodeGenerator::EmitIsRegExp(CallRuntime* expr) { void FullCodeGenerator::EmitIsConstructCall(CallRuntime* expr) { - ASSERT(expr->arguments()->length() == 0); + DCHECK(expr->arguments()->length() == 0); Label materialize_true, materialize_false; Label* if_true = NULL; @@ -3231,7 +3220,7 @@ void FullCodeGenerator::EmitIsConstructCall(CallRuntime* expr) { void FullCodeGenerator::EmitObjectEquals(CallRuntime* expr) { ZoneList<Expression*>* args = expr->arguments(); - ASSERT(args->length() == 2); + DCHECK(args->length() == 2); // Load the two objects into registers and perform the comparison. VisitForStackValue(args->at(0)); @@ -3254,7 +3243,7 @@ void FullCodeGenerator::EmitObjectEquals(CallRuntime* expr) { void FullCodeGenerator::EmitArguments(CallRuntime* expr) { ZoneList<Expression*>* args = expr->arguments(); - ASSERT(args->length() == 1); + DCHECK(args->length() == 1); // ArgumentsAccessStub expects the key in a1 and the formal // parameter count in a0. @@ -3268,7 +3257,7 @@ void FullCodeGenerator::EmitArguments(CallRuntime* expr) { void FullCodeGenerator::EmitArgumentsLength(CallRuntime* expr) { - ASSERT(expr->arguments()->length() == 0); + DCHECK(expr->arguments()->length() == 0); Label exit; // Get the number of formal parameters. 
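
One substantive change hides among the renames above: the descriptor-array scan in EmitIsStringWrapperSafeForDefaultValueOf now shifts the descriptor count by kPointerSizeLog2 alone, consistent with the count in a3 being kept as an untagged integer rather than a smi. For reference, the scaling arithmetic on 32-bit MIPS (constants per V8's 32-bit configuration):

// A smi stores the integer n as (n << kSmiTagSize), with kSmiTagSize == 1 and
// kSmiShiftSize == 0 on 32-bit targets, so scaling a smi count to a byte
// offset can fold the untagging into the shift:
//   offset = smi_count << (kPointerSizeLog2 - kSmiTagSize);   // n * 4
// An untagged count needs the full pointer-size shift:
//   offset = count << kPointerSizeLog2;                       // n * 4
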
__ li(v0, Operand(Smi::FromInt(info_->scope()->num_parameters()))); @@ -3290,7 +3279,7 @@ void FullCodeGenerator::EmitArgumentsLength(CallRuntime* expr) { void FullCodeGenerator::EmitClassOf(CallRuntime* expr) { ZoneList<Expression*>* args = expr->arguments(); - ASSERT(args->length() == 1); + DCHECK(args->length() == 1); Label done, null, function, non_function_constructor; VisitForAccumulatorValue(args->at(0)); @@ -3352,7 +3341,7 @@ void FullCodeGenerator::EmitSubString(CallRuntime* expr) { // Load the arguments on the stack and call the stub. SubStringStub stub(isolate()); ZoneList<Expression*>* args = expr->arguments(); - ASSERT(args->length() == 3); + DCHECK(args->length() == 3); VisitForStackValue(args->at(0)); VisitForStackValue(args->at(1)); VisitForStackValue(args->at(2)); @@ -3365,7 +3354,7 @@ void FullCodeGenerator::EmitRegExpExec(CallRuntime* expr) { // Load the arguments on the stack and call the stub. RegExpExecStub stub(isolate()); ZoneList<Expression*>* args = expr->arguments(); - ASSERT(args->length() == 4); + DCHECK(args->length() == 4); VisitForStackValue(args->at(0)); VisitForStackValue(args->at(1)); VisitForStackValue(args->at(2)); @@ -3377,7 +3366,7 @@ void FullCodeGenerator::EmitRegExpExec(CallRuntime* expr) { void FullCodeGenerator::EmitValueOf(CallRuntime* expr) { ZoneList<Expression*>* args = expr->arguments(); - ASSERT(args->length() == 1); + DCHECK(args->length() == 1); VisitForAccumulatorValue(args->at(0)); // Load the object. @@ -3397,8 +3386,8 @@ void FullCodeGenerator::EmitValueOf(CallRuntime* expr) { void FullCodeGenerator::EmitDateField(CallRuntime* expr) { ZoneList<Expression*>* args = expr->arguments(); - ASSERT(args->length() == 2); - ASSERT_NE(NULL, args->at(1)->AsLiteral()); + DCHECK(args->length() == 2); + DCHECK_NE(NULL, args->at(1)->AsLiteral()); Smi* index = Smi::cast(*(args->at(1)->AsLiteral()->value())); VisitForAccumulatorValue(args->at(0)); // Load the object. @@ -3436,7 +3425,7 @@ void FullCodeGenerator::EmitDateField(CallRuntime* expr) { } __ bind(¬_date_object); - __ CallRuntime(Runtime::kHiddenThrowNotDateError, 0); + __ CallRuntime(Runtime::kThrowNotDateError, 0); __ bind(&done); context()->Plug(v0); } @@ -3444,7 +3433,7 @@ void FullCodeGenerator::EmitDateField(CallRuntime* expr) { void FullCodeGenerator::EmitOneByteSeqStringSetChar(CallRuntime* expr) { ZoneList<Expression*>* args = expr->arguments(); - ASSERT_EQ(3, args->length()); + DCHECK_EQ(3, args->length()); Register string = v0; Register index = a1; @@ -3481,7 +3470,7 @@ void FullCodeGenerator::EmitOneByteSeqStringSetChar(CallRuntime* expr) { void FullCodeGenerator::EmitTwoByteSeqStringSetChar(CallRuntime* expr) { ZoneList<Expression*>* args = expr->arguments(); - ASSERT_EQ(3, args->length()); + DCHECK_EQ(3, args->length()); Register string = v0; Register index = a1; @@ -3519,7 +3508,7 @@ void FullCodeGenerator::EmitTwoByteSeqStringSetChar(CallRuntime* expr) { void FullCodeGenerator::EmitMathPow(CallRuntime* expr) { // Load the arguments on the stack and call the runtime function. 
ZoneList<Expression*>* args = expr->arguments(); - ASSERT(args->length() == 2); + DCHECK(args->length() == 2); VisitForStackValue(args->at(0)); VisitForStackValue(args->at(1)); MathPowStub stub(isolate(), MathPowStub::ON_STACK); @@ -3530,7 +3519,7 @@ void FullCodeGenerator::EmitMathPow(CallRuntime* expr) { void FullCodeGenerator::EmitSetValueOf(CallRuntime* expr) { ZoneList<Expression*>* args = expr->arguments(); - ASSERT(args->length() == 2); + DCHECK(args->length() == 2); VisitForStackValue(args->at(0)); // Load the object. VisitForAccumulatorValue(args->at(1)); // Load the value. @@ -3559,7 +3548,7 @@ void FullCodeGenerator::EmitSetValueOf(CallRuntime* expr) { void FullCodeGenerator::EmitNumberToString(CallRuntime* expr) { ZoneList<Expression*>* args = expr->arguments(); - ASSERT_EQ(args->length(), 1); + DCHECK_EQ(args->length(), 1); // Load the argument into a0 and call the stub. VisitForAccumulatorValue(args->at(0)); @@ -3573,7 +3562,7 @@ void FullCodeGenerator::EmitNumberToString(CallRuntime* expr) { void FullCodeGenerator::EmitStringCharFromCode(CallRuntime* expr) { ZoneList<Expression*>* args = expr->arguments(); - ASSERT(args->length() == 1); + DCHECK(args->length() == 1); VisitForAccumulatorValue(args->at(0)); @@ -3592,7 +3581,7 @@ void FullCodeGenerator::EmitStringCharFromCode(CallRuntime* expr) { void FullCodeGenerator::EmitStringCharCodeAt(CallRuntime* expr) { ZoneList<Expression*>* args = expr->arguments(); - ASSERT(args->length() == 2); + DCHECK(args->length() == 2); VisitForStackValue(args->at(0)); VisitForAccumulatorValue(args->at(1)); @@ -3639,7 +3628,7 @@ void FullCodeGenerator::EmitStringCharCodeAt(CallRuntime* expr) { void FullCodeGenerator::EmitStringCharAt(CallRuntime* expr) { ZoneList<Expression*>* args = expr->arguments(); - ASSERT(args->length() == 2); + DCHECK(args->length() == 2); VisitForStackValue(args->at(0)); VisitForAccumulatorValue(args->at(1)); @@ -3688,7 +3677,7 @@ void FullCodeGenerator::EmitStringCharAt(CallRuntime* expr) { void FullCodeGenerator::EmitStringAdd(CallRuntime* expr) { ZoneList<Expression*>* args = expr->arguments(); - ASSERT_EQ(2, args->length()); + DCHECK_EQ(2, args->length()); VisitForStackValue(args->at(0)); VisitForAccumulatorValue(args->at(1)); @@ -3702,7 +3691,7 @@ void FullCodeGenerator::EmitStringAdd(CallRuntime* expr) { void FullCodeGenerator::EmitStringCompare(CallRuntime* expr) { ZoneList<Expression*>* args = expr->arguments(); - ASSERT_EQ(2, args->length()); + DCHECK_EQ(2, args->length()); VisitForStackValue(args->at(0)); VisitForStackValue(args->at(1)); @@ -3715,7 +3704,7 @@ void FullCodeGenerator::EmitStringCompare(CallRuntime* expr) { void FullCodeGenerator::EmitCallFunction(CallRuntime* expr) { ZoneList<Expression*>* args = expr->arguments(); - ASSERT(args->length() >= 2); + DCHECK(args->length() >= 2); int arg_count = args->length() - 2; // 2 ~ receiver and function. 
for (int i = 0; i < arg_count + 1; i++) { @@ -3748,7 +3737,7 @@ void FullCodeGenerator::EmitCallFunction(CallRuntime* expr) { void FullCodeGenerator::EmitRegExpConstructResult(CallRuntime* expr) { RegExpConstructResultStub stub(isolate()); ZoneList<Expression*>* args = expr->arguments(); - ASSERT(args->length() == 3); + DCHECK(args->length() == 3); VisitForStackValue(args->at(0)); VisitForStackValue(args->at(1)); VisitForAccumulatorValue(args->at(2)); @@ -3762,9 +3751,9 @@ void FullCodeGenerator::EmitRegExpConstructResult(CallRuntime* expr) { void FullCodeGenerator::EmitGetFromCache(CallRuntime* expr) { ZoneList<Expression*>* args = expr->arguments(); - ASSERT_EQ(2, args->length()); + DCHECK_EQ(2, args->length()); - ASSERT_NE(NULL, args->at(0)->AsLiteral()); + DCHECK_NE(NULL, args->at(0)->AsLiteral()); int cache_id = Smi::cast(*(args->at(0)->AsLiteral()->value()))->value(); Handle<FixedArray> jsfunction_result_caches( @@ -3807,7 +3796,7 @@ void FullCodeGenerator::EmitGetFromCache(CallRuntime* expr) { __ bind(¬_found); // Call runtime to perform the lookup. __ Push(cache, key); - __ CallRuntime(Runtime::kHiddenGetFromCache, 2); + __ CallRuntime(Runtime::kGetFromCache, 2); __ bind(&done); context()->Plug(v0); @@ -3837,7 +3826,7 @@ void FullCodeGenerator::EmitHasCachedArrayIndex(CallRuntime* expr) { void FullCodeGenerator::EmitGetCachedArrayIndex(CallRuntime* expr) { ZoneList<Expression*>* args = expr->arguments(); - ASSERT(args->length() == 1); + DCHECK(args->length() == 1); VisitForAccumulatorValue(args->at(0)); __ AssertString(v0); @@ -3855,7 +3844,7 @@ void FullCodeGenerator::EmitFastAsciiArrayJoin(CallRuntime* expr) { empty_separator_loop, one_char_separator_loop, one_char_separator_loop_entry, long_separator_loop; ZoneList<Expression*>* args = expr->arguments(); - ASSERT(args->length() == 2); + DCHECK(args->length() == 2); VisitForStackValue(args->at(1)); VisitForAccumulatorValue(args->at(0)); @@ -4016,7 +4005,7 @@ void FullCodeGenerator::EmitFastAsciiArrayJoin(CallRuntime* expr) { __ CopyBytes(string, result_pos, string_length, scratch1); // End while (element < elements_end). __ Branch(&empty_separator_loop, lt, element, Operand(elements_end)); - ASSERT(result.is(v0)); + DCHECK(result.is(v0)); __ Branch(&done); // One-character separator case. @@ -4048,7 +4037,7 @@ void FullCodeGenerator::EmitFastAsciiArrayJoin(CallRuntime* expr) { __ CopyBytes(string, result_pos, string_length, scratch1); // End while (element < elements_end). __ Branch(&one_char_separator_loop, lt, element, Operand(elements_end)); - ASSERT(result.is(v0)); + DCHECK(result.is(v0)); __ Branch(&done); // Long separator case (separator is more than one character). Entry is at the @@ -4077,7 +4066,7 @@ void FullCodeGenerator::EmitFastAsciiArrayJoin(CallRuntime* expr) { __ CopyBytes(string, result_pos, string_length, scratch1); // End while (element < elements_end). 
__ Branch(&long_separator_loop, lt, element, Operand(elements_end)); - ASSERT(result.is(v0)); + DCHECK(result.is(v0)); __ Branch(&done); __ bind(&bailout); @@ -4087,6 +4076,17 @@ void FullCodeGenerator::EmitFastAsciiArrayJoin(CallRuntime* expr) { } +void FullCodeGenerator::EmitDebugIsActive(CallRuntime* expr) { + DCHECK(expr->arguments()->length() == 0); + ExternalReference debug_is_active = + ExternalReference::debug_is_active_address(isolate()); + __ li(at, Operand(debug_is_active)); + __ lb(v0, MemOperand(at)); + __ SmiTag(v0); + context()->Plug(v0); +} + + void FullCodeGenerator::VisitCallRuntime(CallRuntime* expr) { if (expr->function() != NULL && expr->function()->intrinsic_type == Runtime::INLINE) { @@ -4101,12 +4101,20 @@ void FullCodeGenerator::VisitCallRuntime(CallRuntime* expr) { if (expr->is_jsruntime()) { // Push the builtins object as the receiver. - __ lw(a0, GlobalObjectOperand()); - __ lw(a0, FieldMemOperand(a0, GlobalObject::kBuiltinsOffset)); - __ push(a0); + Register receiver = LoadIC::ReceiverRegister(); + __ lw(receiver, GlobalObjectOperand()); + __ lw(receiver, FieldMemOperand(receiver, GlobalObject::kBuiltinsOffset)); + __ push(receiver); + // Load the function from the receiver. - __ li(a2, Operand(expr->name())); - CallLoadIC(NOT_CONTEXTUAL, expr->CallRuntimeFeedbackId()); + __ li(LoadIC::NameRegister(), Operand(expr->name())); + if (FLAG_vector_ics) { + __ li(LoadIC::SlotRegister(), + Operand(Smi::FromInt(expr->CallRuntimeFeedbackSlot()))); + CallLoadIC(NOT_CONTEXTUAL); + } else { + CallLoadIC(NOT_CONTEXTUAL, expr->CallRuntimeFeedbackId()); + } // Push the target function under the receiver. __ lw(at, MemOperand(sp, 0)); @@ -4160,7 +4168,7 @@ void FullCodeGenerator::VisitUnaryOperation(UnaryOperation* expr) { Variable* var = proxy->var(); // Delete of an unqualified identifier is disallowed in strict mode // but "delete this" is allowed. - ASSERT(strict_mode() == SLOPPY || var->is_this()); + DCHECK(strict_mode() == SLOPPY || var->is_this()); if (var->IsUnallocated()) { __ lw(a2, GlobalObjectOperand()); __ li(a1, Operand(var->name())); @@ -4175,10 +4183,10 @@ void FullCodeGenerator::VisitUnaryOperation(UnaryOperation* expr) { } else { // Non-global variable. Call the runtime to try to delete from the // context where the variable was introduced. - ASSERT(!context_register().is(a2)); + DCHECK(!context_register().is(a2)); __ li(a2, Operand(var->name())); __ Push(context_register(), a2); - __ CallRuntime(Runtime::kHiddenDeleteContextSlot, 2); + __ CallRuntime(Runtime::kDeleteLookupSlot, 2); context()->Plug(v0); } } else { @@ -4216,7 +4224,7 @@ void FullCodeGenerator::VisitUnaryOperation(UnaryOperation* expr) { // for control and plugging the control flow into the context, // because we need to prepare a pair of extra administrative AST ids // for the optimizing compiler. 
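
Among the hunks above, EmitDebugIsActive is new: it inlines the %_DebugIsActive intrinsic, letting generated code test whether the debugger is attached without making a runtime call. The emitted sequence simply loads one byte from an isolate-global flag and smi-tags it:

// v0 = SmiTag(*ExternalReference::debug_is_active_address(isolate))
//    = 0 or 1, as a smi
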
- ASSERT(context()->IsAccumulatorValue() || context()->IsStackValue()); + DCHECK(context()->IsAccumulatorValue() || context()->IsStackValue()); Label materialize_true, materialize_false, done; VisitForControl(expr->expression(), &materialize_false, @@ -4253,7 +4261,7 @@ void FullCodeGenerator::VisitUnaryOperation(UnaryOperation* expr) { void FullCodeGenerator::VisitCountOperation(CountOperation* expr) { - ASSERT(expr->expression()->IsValidReferenceExpression()); + DCHECK(expr->expression()->IsValidReferenceExpression()); Comment cmnt(masm_, "[ CountOperation"); SetSourcePosition(expr->position()); @@ -4272,7 +4280,7 @@ void FullCodeGenerator::VisitCountOperation(CountOperation* expr) { // Evaluate expression and get value. if (assign_type == VARIABLE) { - ASSERT(expr->expression()->AsVariableProxy()->var() != NULL); + DCHECK(expr->expression()->AsVariableProxy()->var() != NULL); AccumulatorValueContext context(this); EmitVariableLoad(expr->expression()->AsVariableProxy()); } else { @@ -4282,15 +4290,15 @@ void FullCodeGenerator::VisitCountOperation(CountOperation* expr) { __ push(at); } if (assign_type == NAMED_PROPERTY) { - // Put the object both on the stack and in the accumulator. - VisitForAccumulatorValue(prop->obj()); - __ push(v0); + // Put the object both on the stack and in the register. + VisitForStackValue(prop->obj()); + __ lw(LoadIC::ReceiverRegister(), MemOperand(sp, 0)); EmitNamedPropertyLoad(prop); } else { VisitForStackValue(prop->obj()); - VisitForAccumulatorValue(prop->key()); - __ lw(a1, MemOperand(sp, 0)); - __ push(v0); + VisitForStackValue(prop->key()); + __ lw(LoadIC::ReceiverRegister(), MemOperand(sp, 1 * kPointerSize)); + __ lw(LoadIC::NameRegister(), MemOperand(sp, 0)); EmitKeyedPropertyLoad(prop); } } @@ -4401,9 +4409,10 @@ void FullCodeGenerator::VisitCountOperation(CountOperation* expr) { } break; case NAMED_PROPERTY: { - __ mov(a0, result_register()); // Value. - __ li(a2, Operand(prop->key()->AsLiteral()->value())); // Name. - __ pop(a1); // Receiver. + __ mov(StoreIC::ValueRegister(), result_register()); + __ li(StoreIC::NameRegister(), + Operand(prop->key()->AsLiteral()->value())); + __ pop(StoreIC::ReceiverRegister()); CallStoreIC(expr->CountStoreFeedbackId()); PrepareForBailoutForId(expr->AssignmentId(), TOS_REG); if (expr->is_postfix()) { @@ -4416,8 +4425,8 @@ void FullCodeGenerator::VisitCountOperation(CountOperation* expr) { break; } case KEYED_PROPERTY: { - __ mov(a0, result_register()); // Value. - __ Pop(a2, a1); // a1 = key, a2 = receiver. + __ mov(KeyedStoreIC::ValueRegister(), result_register()); + __ Pop(KeyedStoreIC::ReceiverRegister(), KeyedStoreIC::NameRegister()); Handle<Code> ic = strict_mode() == SLOPPY ? 
isolate()->builtins()->KeyedStoreIC_Initialize() : isolate()->builtins()->KeyedStoreIC_Initialize_Strict(); @@ -4437,13 +4446,17 @@ void FullCodeGenerator::VisitCountOperation(CountOperation* expr) { void FullCodeGenerator::VisitForTypeofValue(Expression* expr) { - ASSERT(!context()->IsEffect()); - ASSERT(!context()->IsTest()); + DCHECK(!context()->IsEffect()); + DCHECK(!context()->IsTest()); VariableProxy* proxy = expr->AsVariableProxy(); if (proxy != NULL && proxy->var()->IsUnallocated()) { Comment cmnt(masm_, "[ Global variable"); - __ lw(a0, GlobalObjectOperand()); - __ li(a2, Operand(proxy->name())); + __ lw(LoadIC::ReceiverRegister(), GlobalObjectOperand()); + __ li(LoadIC::NameRegister(), Operand(proxy->name())); + if (FLAG_vector_ics) { + __ li(LoadIC::SlotRegister(), + Operand(Smi::FromInt(proxy->VariableFeedbackSlot()))); + } // Use a regular load, not a contextual load, to avoid a reference // error. CallLoadIC(NOT_CONTEXTUAL); @@ -4455,12 +4468,12 @@ void FullCodeGenerator::VisitForTypeofValue(Expression* expr) { // Generate code for loading from variables potentially shadowed // by eval-introduced variables. - EmitDynamicLookupFastCase(proxy->var(), INSIDE_TYPEOF, &slow, &done); + EmitDynamicLookupFastCase(proxy, INSIDE_TYPEOF, &slow, &done); __ bind(&slow); __ li(a0, Operand(proxy->name())); __ Push(cp, a0); - __ CallRuntime(Runtime::kHiddenLoadContextSlotNoReferenceError, 2); + __ CallRuntime(Runtime::kLoadLookupSlotNoReferenceError, 2); PrepareForBailout(expr, TOS_REG); __ bind(&done); @@ -4510,10 +4523,6 @@ void FullCodeGenerator::EmitLiteralCompareTypeof(Expression* expr, __ Branch(if_true, eq, v0, Operand(at)); __ LoadRoot(at, Heap::kFalseValueRootIndex); Split(eq, v0, Operand(at), if_true, if_false, fall_through); - } else if (FLAG_harmony_typeof && - String::Equals(check, factory->null_string())) { - __ LoadRoot(at, Heap::kNullValueRootIndex); - Split(eq, v0, Operand(at), if_true, if_false, fall_through); } else if (String::Equals(check, factory->undefined_string())) { __ LoadRoot(at, Heap::kUndefinedValueRootIndex); __ Branch(if_true, eq, v0, Operand(at)); @@ -4532,10 +4541,8 @@ void FullCodeGenerator::EmitLiteralCompareTypeof(Expression* expr, if_true, if_false, fall_through); } else if (String::Equals(check, factory->object_string())) { __ JumpIfSmi(v0, if_false); - if (!FLAG_harmony_typeof) { - __ LoadRoot(at, Heap::kNullValueRootIndex); - __ Branch(if_true, eq, v0, Operand(at)); - } + __ LoadRoot(at, Heap::kNullValueRootIndex); + __ Branch(if_true, eq, v0, Operand(at)); // Check for JS objects => true. __ GetObjectType(v0, v0, a1); __ Branch(if_false, lt, a1, Operand(FIRST_NONCALLABLE_SPEC_OBJECT_TYPE)); @@ -4666,7 +4673,7 @@ Register FullCodeGenerator::context_register() { void FullCodeGenerator::StoreToFrameField(int frame_offset, Register value) { - ASSERT_EQ(POINTER_SIZE_ALIGN(frame_offset), frame_offset); + DCHECK_EQ(POINTER_SIZE_ALIGN(frame_offset), frame_offset); __ sw(value, MemOperand(fp, frame_offset)); } @@ -4691,7 +4698,7 @@ void FullCodeGenerator::PushFunctionArgumentForContextAllocation() { // code. Fetch it from the context. __ lw(at, ContextOperand(cp, Context::CLOSURE_INDEX)); } else { - ASSERT(declaration_scope->is_function_scope()); + DCHECK(declaration_scope->is_function_scope()); __ lw(at, MemOperand(fp, JavaScriptFrameConstants::kFunctionOffset)); } __ push(at); @@ -4702,12 +4709,12 @@ void FullCodeGenerator::PushFunctionArgumentForContextAllocation() { // Non-local control flow support. 
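
The FLAG_harmony_typeof branches removed above drop the experiment that made typeof null evaluate to "null"; with them gone, the comparison code again implements the classic table unconditionally:

// typeof undefined       -> "undefined"   (also undetectable objects)
// typeof null            -> "object"      (no longer flag-dependent)
// typeof true            -> "boolean"
// typeof 42              -> "number"
// typeof "s"             -> "string"
// typeof function () {}  -> "function"
// typeof {}              -> "object"
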
void FullCodeGenerator::EnterFinallyBlock() { - ASSERT(!result_register().is(a1)); + DCHECK(!result_register().is(a1)); // Store result register while executing finally block. __ push(result_register()); // Cook return address in link register to stack (smi encoded Code* delta). __ Subu(a1, ra, Operand(masm_->CodeObject())); - ASSERT_EQ(1, kSmiTagSize + kSmiShiftSize); + DCHECK_EQ(1, kSmiTagSize + kSmiShiftSize); STATIC_ASSERT(0 == kSmiTag); __ Addu(a1, a1, Operand(a1)); // Convert to smi. @@ -4737,7 +4744,7 @@ void FullCodeGenerator::EnterFinallyBlock() { void FullCodeGenerator::ExitFinallyBlock() { - ASSERT(!result_register().is(a1)); + DCHECK(!result_register().is(a1)); // Restore pending message from stack. __ pop(a1); ExternalReference pending_message_script = @@ -4763,7 +4770,7 @@ void FullCodeGenerator::ExitFinallyBlock() { // Uncook return address and return. __ pop(result_register()); - ASSERT_EQ(1, kSmiTagSize + kSmiShiftSize); + DCHECK_EQ(1, kSmiTagSize + kSmiShiftSize); __ sra(a1, a1, 1); // Un-smi-tag value. __ Addu(at, a1, Operand(masm_->CodeObject())); __ Jump(at); @@ -4851,16 +4858,16 @@ BackEdgeTable::BackEdgeState BackEdgeTable::GetBackEdgeState( Address branch_address = pc - 6 * kInstrSize; Address pc_immediate_load_address = pc - 4 * kInstrSize; - ASSERT(Assembler::IsBeq(Assembler::instr_at(pc - 5 * kInstrSize))); + DCHECK(Assembler::IsBeq(Assembler::instr_at(pc - 5 * kInstrSize))); if (!Assembler::IsAddImmediate(Assembler::instr_at(branch_address))) { - ASSERT(reinterpret_cast<uint32_t>( + DCHECK(reinterpret_cast<uint32_t>( Assembler::target_address_at(pc_immediate_load_address)) == reinterpret_cast<uint32_t>( isolate->builtins()->InterruptCheck()->entry())); return INTERRUPT; } - ASSERT(Assembler::IsAddImmediate(Assembler::instr_at(branch_address))); + DCHECK(Assembler::IsAddImmediate(Assembler::instr_at(branch_address))); if (reinterpret_cast<uint32_t>( Assembler::target_address_at(pc_immediate_load_address)) == @@ -4869,7 +4876,7 @@ BackEdgeTable::BackEdgeState BackEdgeTable::GetBackEdgeState( return ON_STACK_REPLACEMENT; } - ASSERT(reinterpret_cast<uint32_t>( + DCHECK(reinterpret_cast<uint32_t>( Assembler::target_address_at(pc_immediate_load_address)) == reinterpret_cast<uint32_t>( isolate->builtins()->OsrAfterStackCheck()->entry())); diff --git a/deps/v8/src/mips/ic-mips.cc b/deps/v8/src/mips/ic-mips.cc index d78fdc6437c..1f4a7bea2dd 100644 --- a/deps/v8/src/mips/ic-mips.cc +++ b/deps/v8/src/mips/ic-mips.cc @@ -4,15 +4,15 @@ -#include "v8.h" +#include "src/v8.h" #if V8_TARGET_ARCH_MIPS -#include "codegen.h" -#include "code-stubs.h" -#include "ic-inl.h" -#include "runtime.h" -#include "stub-cache.h" +#include "src/code-stubs.h" +#include "src/codegen.h" +#include "src/ic-inl.h" +#include "src/runtime.h" +#include "src/stub-cache.h" namespace v8 { namespace internal { @@ -36,47 +36,6 @@ static void GenerateGlobalInstanceTypeCheck(MacroAssembler* masm, } -// Generated code falls through if the receiver is a regular non-global -// JS object with slow properties and no interceptors. -static void GenerateNameDictionaryReceiverCheck(MacroAssembler* masm, - Register receiver, - Register elements, - Register scratch0, - Register scratch1, - Label* miss) { - // Register usage: - // receiver: holds the receiver on entry and is unchanged. - // elements: holds the property dictionary on fall through. - // Scratch registers: - // scratch0: used to holds the receiver map. - // scratch1: used to holds the receiver instance type, receiver bit mask - // and elements map. 
- - // Check that the receiver isn't a smi. - __ JumpIfSmi(receiver, miss); - - // Check that the receiver is a valid JS object. - __ GetObjectType(receiver, scratch0, scratch1); - __ Branch(miss, lt, scratch1, Operand(FIRST_SPEC_OBJECT_TYPE)); - - // If this assert fails, we have to check upper bound too. - STATIC_ASSERT(LAST_TYPE == LAST_SPEC_OBJECT_TYPE); - - GenerateGlobalInstanceTypeCheck(masm, scratch1, miss); - - // Check that the global object does not require access checks. - __ lbu(scratch1, FieldMemOperand(scratch0, Map::kBitFieldOffset)); - __ And(scratch1, scratch1, Operand((1 << Map::kIsAccessCheckNeeded) | - (1 << Map::kHasNamedInterceptor))); - __ Branch(miss, ne, scratch1, Operand(zero_reg)); - - __ lw(elements, FieldMemOperand(receiver, JSObject::kPropertiesOffset)); - __ lw(scratch1, FieldMemOperand(elements, HeapObject::kMapOffset)); - __ LoadRoot(scratch0, Heap::kHashTableMapRootIndex); - __ Branch(miss, ne, scratch1, Operand(scratch0)); -} - - // Helper function used from LoadIC GenerateNormal. // // elements: Property dictionary. It is not clobbered if a jump to the miss @@ -213,7 +172,7 @@ static void GenerateKeyedLoadReceiverCheck(MacroAssembler* masm, // In the case that the object is a value-wrapper object, // we enter the runtime system to make sure that indexing into string // objects work as intended. - ASSERT(JS_OBJECT_TYPE > JS_VALUE_TYPE); + DCHECK(JS_OBJECT_TYPE > JS_VALUE_TYPE); __ lbu(scratch, FieldMemOperand(map, Map::kInstanceTypeOffset)); __ Branch(slow, lt, scratch, Operand(JS_OBJECT_TYPE)); } @@ -317,16 +276,17 @@ static void GenerateKeyNameCheck(MacroAssembler* masm, void LoadIC::GenerateMegamorphic(MacroAssembler* masm) { - // ----------- S t a t e ------------- - // -- a2 : name - // -- ra : return address - // -- a0 : receiver - // ----------------------------------- + // The return address is in lr. + Register receiver = ReceiverRegister(); + Register name = NameRegister(); + DCHECK(receiver.is(a1)); + DCHECK(name.is(a2)); // Probe the stub cache. - Code::Flags flags = Code::ComputeHandlerFlags(Code::LOAD_IC); + Code::Flags flags = Code::RemoveTypeAndHolderFromFlags( + Code::ComputeHandlerFlags(Code::LOAD_IC)); masm->isolate()->stub_cache()->GenerateProbe( - masm, flags, a0, a2, a3, t0, t1, t2); + masm, flags, receiver, name, a3, t0, t1, t2); // Cache miss: Jump to runtime. GenerateMiss(masm); @@ -334,37 +294,35 @@ void LoadIC::GenerateMegamorphic(MacroAssembler* masm) { void LoadIC::GenerateNormal(MacroAssembler* masm) { - // ----------- S t a t e ------------- - // -- a2 : name - // -- lr : return address - // -- a0 : receiver - // ----------------------------------- - Label miss; + Register dictionary = a0; + DCHECK(!dictionary.is(ReceiverRegister())); + DCHECK(!dictionary.is(NameRegister())); - GenerateNameDictionaryReceiverCheck(masm, a0, a1, a3, t0, &miss); + Label slow; - // a1: elements - GenerateDictionaryLoad(masm, &miss, a1, a2, v0, a3, t0); + __ lw(dictionary, + FieldMemOperand(ReceiverRegister(), JSObject::kPropertiesOffset)); + GenerateDictionaryLoad(masm, &slow, dictionary, NameRegister(), v0, a3, t0); __ Ret(); - // Cache miss: Jump to runtime. - __ bind(&miss); - GenerateMiss(masm); + // Dictionary load failed, go slow (but don't miss). + __ bind(&slow); + GenerateRuntimeGetProperty(masm); } +// A register that isn't one of the parameters to the load ic. 
+static const Register LoadIC_TempRegister() { return a3; } + + void LoadIC::GenerateMiss(MacroAssembler* masm) { - // ----------- S t a t e ------------- - // -- a2 : name - // -- ra : return address - // -- a0 : receiver - // ----------------------------------- + // The return address is in ra. Isolate* isolate = masm->isolate(); __ IncrementCounter(isolate->counters()->keyed_load_miss(), 1, a3, t0); - __ mov(a3, a0); - __ Push(a3, a2); + __ mov(LoadIC_TempRegister(), ReceiverRegister()); + __ Push(LoadIC_TempRegister(), NameRegister()); // Perform tail call to the entry. ExternalReference ref = ExternalReference(IC_Utility(kLoadIC_Miss), isolate); @@ -373,14 +331,10 @@ void LoadIC::GenerateMiss(MacroAssembler* masm) { void LoadIC::GenerateRuntimeGetProperty(MacroAssembler* masm) { - // ---------- S t a t e -------------- - // -- a2 : name - // -- ra : return address - // -- a0 : receiver - // ----------------------------------- + // The return address is in ra. - __ mov(a3, a0); - __ Push(a3, a2); + __ mov(LoadIC_TempRegister(), ReceiverRegister()); + __ Push(LoadIC_TempRegister(), NameRegister()); __ TailCallRuntime(Runtime::kGetProperty, 2, 1); } @@ -477,56 +431,57 @@ static MemOperand GenerateUnmappedArgumentsLookup(MacroAssembler* masm, void KeyedLoadIC::GenerateSloppyArguments(MacroAssembler* masm) { - // ---------- S t a t e -------------- - // -- lr : return address - // -- a0 : key - // -- a1 : receiver - // ----------------------------------- + // The return address is in ra. + Register receiver = ReceiverRegister(); + Register key = NameRegister(); + DCHECK(receiver.is(a1)); + DCHECK(key.is(a2)); + Label slow, notin; MemOperand mapped_location = - GenerateMappedArgumentsLookup(masm, a1, a0, a2, a3, t0, ¬in, &slow); + GenerateMappedArgumentsLookup( + masm, receiver, key, a0, a3, t0, ¬in, &slow); __ Ret(USE_DELAY_SLOT); __ lw(v0, mapped_location); __ bind(¬in); - // The unmapped lookup expects that the parameter map is in a2. + // The unmapped lookup expects that the parameter map is in a0. MemOperand unmapped_location = - GenerateUnmappedArgumentsLookup(masm, a0, a2, a3, &slow); - __ lw(a2, unmapped_location); + GenerateUnmappedArgumentsLookup(masm, key, a0, a3, &slow); + __ lw(a0, unmapped_location); __ LoadRoot(a3, Heap::kTheHoleValueRootIndex); - __ Branch(&slow, eq, a2, Operand(a3)); + __ Branch(&slow, eq, a0, Operand(a3)); __ Ret(USE_DELAY_SLOT); - __ mov(v0, a2); + __ mov(v0, a0); __ bind(&slow); GenerateMiss(masm); } void KeyedStoreIC::GenerateSloppyArguments(MacroAssembler* masm) { - // ---------- S t a t e -------------- - // -- a0 : value - // -- a1 : key - // -- a2 : receiver - // -- lr : return address - // ----------------------------------- + Register receiver = ReceiverRegister(); + Register key = NameRegister(); + Register value = ValueRegister(); + DCHECK(value.is(a0)); + Label slow, notin; // Store address is returned in register (of MemOperand) mapped_location. - MemOperand mapped_location = - GenerateMappedArgumentsLookup(masm, a2, a1, a3, t0, t1, ¬in, &slow); - __ sw(a0, mapped_location); - __ mov(t5, a0); - ASSERT_EQ(mapped_location.offset(), 0); + MemOperand mapped_location = GenerateMappedArgumentsLookup( + masm, receiver, key, a3, t0, t1, ¬in, &slow); + __ sw(value, mapped_location); + __ mov(t5, value); + DCHECK_EQ(mapped_location.offset(), 0); __ RecordWrite(a3, mapped_location.rm(), t5, kRAHasNotBeenSaved, kDontSaveFPRegs); __ Ret(USE_DELAY_SLOT); - __ mov(v0, a0); // (In delay slot) return the value stored in v0. 
+ __ mov(v0, value); // (In delay slot) return the value stored in v0. __ bind(¬in); // The unmapped lookup expects that the parameter map is in a3. // Store address is returned in register (of MemOperand) unmapped_location. MemOperand unmapped_location = - GenerateUnmappedArgumentsLookup(masm, a1, a3, t0, &slow); - __ sw(a0, unmapped_location); - __ mov(t5, a0); - ASSERT_EQ(unmapped_location.offset(), 0); + GenerateUnmappedArgumentsLookup(masm, key, a3, t0, &slow); + __ sw(value, unmapped_location); + __ mov(t5, value); + DCHECK_EQ(unmapped_location.offset(), 0); __ RecordWrite(a3, unmapped_location.rm(), t5, kRAHasNotBeenSaved, kDontSaveFPRegs); __ Ret(USE_DELAY_SLOT); @@ -537,16 +492,12 @@ void KeyedStoreIC::GenerateSloppyArguments(MacroAssembler* masm) { void KeyedLoadIC::GenerateMiss(MacroAssembler* masm) { - // ---------- S t a t e -------------- - // -- ra : return address - // -- a0 : key - // -- a1 : receiver - // ----------------------------------- + // The return address is in ra. Isolate* isolate = masm->isolate(); __ IncrementCounter(isolate->counters()->keyed_load_miss(), 1, a3, t0); - __ Push(a1, a0); + __ Push(ReceiverRegister(), NameRegister()); // Perform tail call to the entry. ExternalReference ref = @@ -556,30 +507,51 @@ void KeyedLoadIC::GenerateMiss(MacroAssembler* masm) { } +// IC register specifications +const Register LoadIC::ReceiverRegister() { return a1; } +const Register LoadIC::NameRegister() { return a2; } + + +const Register LoadIC::SlotRegister() { + DCHECK(FLAG_vector_ics); + return a0; +} + + +const Register LoadIC::VectorRegister() { + DCHECK(FLAG_vector_ics); + return a3; +} + + +const Register StoreIC::ReceiverRegister() { return a1; } +const Register StoreIC::NameRegister() { return a2; } +const Register StoreIC::ValueRegister() { return a0; } + + +const Register KeyedStoreIC::MapRegister() { + return a3; +} + + void KeyedLoadIC::GenerateRuntimeGetProperty(MacroAssembler* masm) { - // ---------- S t a t e -------------- - // -- ra : return address - // -- a0 : key - // -- a1 : receiver - // ----------------------------------- + // The return address is in ra. - __ Push(a1, a0); + __ Push(ReceiverRegister(), NameRegister()); __ TailCallRuntime(Runtime::kKeyedGetProperty, 2, 1); } void KeyedLoadIC::GenerateGeneric(MacroAssembler* masm) { - // ---------- S t a t e -------------- - // -- ra : return address - // -- a0 : key - // -- a1 : receiver - // ----------------------------------- + // The return address is in ra. Label slow, check_name, index_smi, index_name, property_array_property; Label probe_dictionary, check_number_dictionary; - Register key = a0; - Register receiver = a1; + Register key = NameRegister(); + Register receiver = ReceiverRegister(); + DCHECK(key.is(a2)); + DCHECK(receiver.is(a1)); Isolate* isolate = masm->isolate(); @@ -590,15 +562,14 @@ void KeyedLoadIC::GenerateGeneric(MacroAssembler* masm) { // where a numeric string is converted to a smi. GenerateKeyedLoadReceiverCheck( - masm, receiver, a2, a3, Map::kHasIndexedInterceptor, &slow); + masm, receiver, a0, a3, Map::kHasIndexedInterceptor, &slow); // Check the receiver's map to see if it has fast elements. 
- __ CheckFastElements(a2, a3, &check_number_dictionary); + __ CheckFastElements(a0, a3, &check_number_dictionary); GenerateFastArrayLoad( - masm, receiver, key, t0, a3, a2, v0, NULL, &slow); - - __ IncrementCounter(isolate->counters()->keyed_load_generic_smi(), 1, a2, a3); + masm, receiver, key, a0, a3, t0, v0, NULL, &slow); + __ IncrementCounter(isolate->counters()->keyed_load_generic_smi(), 1, t0, a3); __ Ret(); __ bind(&check_number_dictionary); @@ -606,42 +577,41 @@ void KeyedLoadIC::GenerateGeneric(MacroAssembler* masm) { __ lw(a3, FieldMemOperand(t0, JSObject::kMapOffset)); // Check whether the elements is a number dictionary. - // a0: key // a3: elements map // t0: elements __ LoadRoot(at, Heap::kHashTableMapRootIndex); __ Branch(&slow, ne, a3, Operand(at)); - __ sra(a2, a0, kSmiTagSize); - __ LoadFromNumberDictionary(&slow, t0, a0, v0, a2, a3, t1); + __ sra(a0, key, kSmiTagSize); + __ LoadFromNumberDictionary(&slow, t0, key, v0, a0, a3, t1); __ Ret(); - // Slow case, key and receiver still in a0 and a1. + // Slow case, key and receiver still in a2 and a1. __ bind(&slow); __ IncrementCounter(isolate->counters()->keyed_load_generic_slow(), 1, - a2, + t0, a3); GenerateRuntimeGetProperty(masm); __ bind(&check_name); - GenerateKeyNameCheck(masm, key, a2, a3, &index_name, &slow); + GenerateKeyNameCheck(masm, key, a0, a3, &index_name, &slow); GenerateKeyedLoadReceiverCheck( - masm, receiver, a2, a3, Map::kHasNamedInterceptor, &slow); + masm, receiver, a0, a3, Map::kHasNamedInterceptor, &slow); // If the receiver is a fast-case object, check the keyed lookup // cache. Otherwise probe the dictionary. - __ lw(a3, FieldMemOperand(a1, JSObject::kPropertiesOffset)); + __ lw(a3, FieldMemOperand(receiver, JSObject::kPropertiesOffset)); __ lw(t0, FieldMemOperand(a3, HeapObject::kMapOffset)); __ LoadRoot(at, Heap::kHashTableMapRootIndex); __ Branch(&probe_dictionary, eq, t0, Operand(at)); // Load the map of the receiver, compute the keyed lookup cache hash // based on 32 bits of the map pointer and the name hash. - __ lw(a2, FieldMemOperand(a1, HeapObject::kMapOffset)); - __ sra(a3, a2, KeyedLookupCache::kMapHashShift); - __ lw(t0, FieldMemOperand(a0, Name::kHashFieldOffset)); + __ lw(a0, FieldMemOperand(receiver, HeapObject::kMapOffset)); + __ sra(a3, a0, KeyedLookupCache::kMapHashShift); + __ lw(t0, FieldMemOperand(key, Name::kHashFieldOffset)); __ sra(at, t0, Name::kHashShift); __ xor_(a3, a3, at); int mask = KeyedLookupCache::kCapacityMask & KeyedLookupCache::kHashMask; @@ -661,21 +631,19 @@ void KeyedLoadIC::GenerateGeneric(MacroAssembler* masm) { for (int i = 0; i < kEntriesPerBucket - 1; i++) { Label try_next_entry; __ lw(t1, MemOperand(t0, kPointerSize * i * 2)); - __ Branch(&try_next_entry, ne, a2, Operand(t1)); + __ Branch(&try_next_entry, ne, a0, Operand(t1)); __ lw(t1, MemOperand(t0, kPointerSize * (i * 2 + 1))); - __ Branch(&hit_on_nth_entry[i], eq, a0, Operand(t1)); + __ Branch(&hit_on_nth_entry[i], eq, key, Operand(t1)); __ bind(&try_next_entry); } __ lw(t1, MemOperand(t0, kPointerSize * (kEntriesPerBucket - 1) * 2)); - __ Branch(&slow, ne, a2, Operand(t1)); - __ lw(t1, MemOperand(t0, kPointerSize * ((kEntriesPerBucket - 1) * 2 + 1))); __ Branch(&slow, ne, a0, Operand(t1)); + __ lw(t1, MemOperand(t0, kPointerSize * ((kEntriesPerBucket - 1) * 2 + 1))); + __ Branch(&slow, ne, key, Operand(t1)); // Get field offset. 
- // a0 : key - // a1 : receiver - // a2 : receiver's map + // a0 : receiver's map // a3 : lookup cache index ExternalReference cache_field_offsets = ExternalReference::keyed_lookup_cache_field_offsets(isolate); @@ -687,7 +655,7 @@ void KeyedLoadIC::GenerateGeneric(MacroAssembler* masm) { __ sll(at, a3, kPointerSizeLog2); __ addu(at, t0, at); __ lw(t1, MemOperand(at, kPointerSize * i)); - __ lbu(t2, FieldMemOperand(a2, Map::kInObjectPropertiesOffset)); + __ lbu(t2, FieldMemOperand(a0, Map::kInObjectPropertiesOffset)); __ Subu(t1, t1, t2); __ Branch(&property_array_property, ge, t1, Operand(zero_reg)); if (i != 0) { @@ -697,28 +665,28 @@ void KeyedLoadIC::GenerateGeneric(MacroAssembler* masm) { // Load in-object property. __ bind(&load_in_object_property); - __ lbu(t2, FieldMemOperand(a2, Map::kInstanceSizeOffset)); + __ lbu(t2, FieldMemOperand(a0, Map::kInstanceSizeOffset)); __ addu(t2, t2, t1); // Index from start of object. - __ Subu(a1, a1, Operand(kHeapObjectTag)); // Remove the heap tag. + __ Subu(receiver, receiver, Operand(kHeapObjectTag)); // Remove the heap tag. __ sll(at, t2, kPointerSizeLog2); - __ addu(at, a1, at); + __ addu(at, receiver, at); __ lw(v0, MemOperand(at)); __ IncrementCounter(isolate->counters()->keyed_load_generic_lookup_cache(), 1, - a2, + t0, a3); __ Ret(); // Load property array property. __ bind(&property_array_property); - __ lw(a1, FieldMemOperand(a1, JSObject::kPropertiesOffset)); - __ Addu(a1, a1, FixedArray::kHeaderSize - kHeapObjectTag); - __ sll(t0, t1, kPointerSizeLog2); - __ Addu(t0, t0, a1); - __ lw(v0, MemOperand(t0)); + __ lw(receiver, FieldMemOperand(receiver, JSObject::kPropertiesOffset)); + __ Addu(receiver, receiver, FixedArray::kHeaderSize - kHeapObjectTag); + __ sll(v0, t1, kPointerSizeLog2); + __ Addu(v0, v0, receiver); + __ lw(v0, MemOperand(v0)); __ IncrementCounter(isolate->counters()->keyed_load_generic_lookup_cache(), 1, - a2, + t0, a3); __ Ret(); @@ -726,17 +694,15 @@ void KeyedLoadIC::GenerateGeneric(MacroAssembler* masm) { // Do a quick inline probe of the receiver's dictionary, if it // exists. __ bind(&probe_dictionary); - // a1: receiver - // a0: key // a3: elements - __ lw(a2, FieldMemOperand(a1, HeapObject::kMapOffset)); - __ lbu(a2, FieldMemOperand(a2, Map::kInstanceTypeOffset)); - GenerateGlobalInstanceTypeCheck(masm, a2, &slow); + __ lw(a0, FieldMemOperand(receiver, HeapObject::kMapOffset)); + __ lbu(a0, FieldMemOperand(a0, Map::kInstanceTypeOffset)); + GenerateGlobalInstanceTypeCheck(masm, a0, &slow); // Load the property to v0. - GenerateDictionaryLoad(masm, &slow, a3, a0, v0, a2, t0); + GenerateDictionaryLoad(masm, &slow, a3, key, v0, t1, t0); __ IncrementCounter(isolate->counters()->keyed_load_generic_symbol(), 1, - a2, + t0, a3); __ Ret(); @@ -748,17 +714,14 @@ void KeyedLoadIC::GenerateGeneric(MacroAssembler* masm) { void KeyedLoadIC::GenerateString(MacroAssembler* masm) { - // ---------- S t a t e -------------- - // -- ra : return address - // -- a0 : key (index) - // -- a1 : receiver - // ----------------------------------- + // Return address is in ra. 
Label miss; - Register receiver = a1; - Register index = a0; + Register receiver = ReceiverRegister(); + Register index = NameRegister(); Register scratch = a3; Register result = v0; + DCHECK(!scratch.is(receiver) && !scratch.is(index)); StringCharAtGenerator char_at_generator(receiver, index, @@ -781,20 +744,12 @@ void KeyedLoadIC::GenerateString(MacroAssembler* masm) { void KeyedStoreIC::GenerateRuntimeSetProperty(MacroAssembler* masm, StrictMode strict_mode) { - // ---------- S t a t e -------------- - // -- a0 : value - // -- a1 : key - // -- a2 : receiver - // -- ra : return address - // ----------------------------------- - // Push receiver, key and value for runtime call. - __ Push(a2, a1, a0); - __ li(a1, Operand(Smi::FromInt(NONE))); // PropertyAttributes. + __ Push(ReceiverRegister(), NameRegister(), ValueRegister()); __ li(a0, Operand(Smi::FromInt(strict_mode))); // Strict mode. - __ Push(a1, a0); + __ Push(a0); - __ TailCallRuntime(Runtime::kSetProperty, 5, 1); + __ TailCallRuntime(Runtime::kSetProperty, 4, 1); } @@ -933,10 +888,10 @@ static void KeyedStoreGenerateGenericHelper( receiver_map, t0, slow); - ASSERT(receiver_map.is(a3)); // Transition code expects map in a3 AllocationSiteMode mode = AllocationSite::GetMode(FAST_SMI_ELEMENTS, FAST_DOUBLE_ELEMENTS); - ElementsTransitionGenerator::GenerateSmiToDouble(masm, mode, slow); + ElementsTransitionGenerator::GenerateSmiToDouble( + masm, receiver, key, value, receiver_map, mode, slow); __ lw(elements, FieldMemOperand(receiver, JSObject::kElementsOffset)); __ jmp(&fast_double_without_map_check); @@ -947,10 +902,9 @@ static void KeyedStoreGenerateGenericHelper( receiver_map, t0, slow); - ASSERT(receiver_map.is(a3)); // Transition code expects map in a3 mode = AllocationSite::GetMode(FAST_SMI_ELEMENTS, FAST_ELEMENTS); - ElementsTransitionGenerator::GenerateMapChangeElementsTransition(masm, mode, - slow); + ElementsTransitionGenerator::GenerateMapChangeElementsTransition( + masm, receiver, key, value, receiver_map, mode, slow); __ lw(elements, FieldMemOperand(receiver, JSObject::kElementsOffset)); __ jmp(&finish_object_store); @@ -963,9 +917,9 @@ static void KeyedStoreGenerateGenericHelper( receiver_map, t0, slow); - ASSERT(receiver_map.is(a3)); // Transition code expects map in a3 mode = AllocationSite::GetMode(FAST_DOUBLE_ELEMENTS, FAST_ELEMENTS); - ElementsTransitionGenerator::GenerateDoubleToObject(masm, mode, slow); + ElementsTransitionGenerator::GenerateDoubleToObject( + masm, receiver, key, value, receiver_map, mode, slow); __ lw(elements, FieldMemOperand(receiver, JSObject::kElementsOffset)); __ jmp(&finish_object_store); } @@ -984,9 +938,10 @@ void KeyedStoreIC::GenerateGeneric(MacroAssembler* masm, Label array, extra, check_if_double_array; // Register usage. - Register value = a0; - Register key = a1; - Register receiver = a2; + Register value = ValueRegister(); + Register key = NameRegister(); + Register receiver = ReceiverRegister(); + DCHECK(value.is(a0)); Register receiver_map = a3; Register elements_map = t2; Register elements = t3; // Elements array of the receiver. @@ -1067,34 +1022,37 @@ void KeyedStoreIC::GenerateGeneric(MacroAssembler* masm, void KeyedLoadIC::GenerateIndexedInterceptor(MacroAssembler* masm) { - // ---------- S t a t e -------------- - // -- ra : return address - // -- a0 : key - // -- a1 : receiver - // ----------------------------------- + // Return address is in ra. 
Label slow; + Register receiver = ReceiverRegister(); + Register key = NameRegister(); + Register scratch1 = a3; + Register scratch2 = t0; + DCHECK(!scratch1.is(receiver) && !scratch1.is(key)); + DCHECK(!scratch2.is(receiver) && !scratch2.is(key)); + // Check that the receiver isn't a smi. - __ JumpIfSmi(a1, &slow); + __ JumpIfSmi(receiver, &slow); // Check that the key is an array index, that is Uint32. - __ And(t0, a0, Operand(kSmiTagMask | kSmiSignMask)); + __ And(t0, key, Operand(kSmiTagMask | kSmiSignMask)); __ Branch(&slow, ne, t0, Operand(zero_reg)); // Get the map of the receiver. - __ lw(a2, FieldMemOperand(a1, HeapObject::kMapOffset)); + __ lw(scratch1, FieldMemOperand(receiver, HeapObject::kMapOffset)); // Check that it has indexed interceptor and access checks // are not enabled for this object. - __ lbu(a3, FieldMemOperand(a2, Map::kBitFieldOffset)); - __ And(a3, a3, Operand(kSlowCaseBitFieldMask)); - __ Branch(&slow, ne, a3, Operand(1 << Map::kHasIndexedInterceptor)); + __ lbu(scratch2, FieldMemOperand(scratch1, Map::kBitFieldOffset)); + __ And(scratch2, scratch2, Operand(kSlowCaseBitFieldMask)); + __ Branch(&slow, ne, scratch2, Operand(1 << Map::kHasIndexedInterceptor)); // Everything is fine, call runtime. - __ Push(a1, a0); // Receiver, key. + __ Push(receiver, key); // Receiver, key. // Perform tail call to the entry. __ TailCallExternalReference(ExternalReference( - IC_Utility(kKeyedLoadPropertyWithInterceptor), masm->isolate()), 2, 1); + IC_Utility(kLoadElementWithInterceptor), masm->isolate()), 2, 1); __ bind(&slow); GenerateMiss(masm); @@ -1102,15 +1060,8 @@ void KeyedLoadIC::GenerateIndexedInterceptor(MacroAssembler* masm) { void KeyedStoreIC::GenerateMiss(MacroAssembler* masm) { - // ---------- S t a t e -------------- - // -- a0 : value - // -- a1 : key - // -- a2 : receiver - // -- ra : return address - // ----------------------------------- - // Push receiver, key and value for runtime call. - __ Push(a2, a1, a0); + __ Push(ReceiverRegister(), NameRegister(), ValueRegister()); ExternalReference ref = ExternalReference(IC_Utility(kKeyedStoreIC_Miss), masm->isolate()); @@ -1119,15 +1070,8 @@ void KeyedStoreIC::GenerateMiss(MacroAssembler* masm) { void StoreIC::GenerateSlow(MacroAssembler* masm) { - // ---------- S t a t e -------------- - // -- a0 : value - // -- a2 : key - // -- a1 : receiver - // -- ra : return address - // ----------------------------------- - // Push receiver, key and value for runtime call. - __ Push(a1, a2, a0); + __ Push(ReceiverRegister(), NameRegister(), ValueRegister()); // The slow case calls into the runtime to complete the store without causing // an IC miss that would otherwise cause a transition to the generic stub. @@ -1138,16 +1082,9 @@ void StoreIC::GenerateSlow(MacroAssembler* masm) { void KeyedStoreIC::GenerateSlow(MacroAssembler* masm) { - // ---------- S t a t e -------------- - // -- a0 : value - // -- a1 : key - // -- a2 : receiver - // -- ra : return address - // ----------------------------------- - // Push receiver, key and value for runtime call. // We can't use MultiPush as the order of the registers is important. - __ Push(a2, a1, a0); + __ Push(ReceiverRegister(), NameRegister(), ValueRegister()); // The slow case calls into the runtime to complete the store without causing // an IC miss that would otherwise cause a transition to the generic stub. 
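The hunks above all make the same mechanical substitution: hand-maintained register lists such as __ Push(a2, a1, a0) are replaced by calls through the IC calling-convention accessors. The following is a condensed sketch of the MIPS convention those accessors pin down, based on the ReceiverRegister()/NameRegister()/ValueRegister() definitions introduced earlier in this diff; it is an illustrative summary, not part of the patch:

    // MIPS IC calling convention after this change:
    //
    //                          receiver  name/key  value  slot   vector
    //   LoadIC / KeyedLoadIC     a1        a2       --    a0(*)  a3(*)   (*) FLAG_vector_ics only
    //   StoreIC / KeyedStoreIC   a1        a2       a0     --     --
    //
    // KeyedStoreIC additionally reserves a3 as MapRegister().
    //
    // A call site that used to hard-code the order, e.g.
    //   __ Push(a2, a1, a0);  // receiver, key, value
    // now states its intent and survives a future reassignment:
    //   __ Push(ReceiverRegister(), NameRegister(), ValueRegister());

The DCHECKs kept beside each accessor use document the expected assignment during the transition and cost nothing in release builds, where DCHECK compiles away.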
@@ -1159,17 +1096,17 @@ void KeyedStoreIC::GenerateSlow(MacroAssembler* masm) { void StoreIC::GenerateMegamorphic(MacroAssembler* masm) { - // ----------- S t a t e ------------- - // -- a0 : value - // -- a1 : receiver - // -- a2 : name - // -- ra : return address - // ----------------------------------- + Register receiver = ReceiverRegister(); + Register name = NameRegister(); + DCHECK(receiver.is(a1)); + DCHECK(name.is(a2)); + DCHECK(ValueRegister().is(a0)); // Get the receiver from the stack and probe the stub cache. - Code::Flags flags = Code::ComputeHandlerFlags(Code::STORE_IC); + Code::Flags flags = Code::RemoveTypeAndHolderFromFlags( + Code::ComputeHandlerFlags(Code::STORE_IC)); masm->isolate()->stub_cache()->GenerateProbe( - masm, flags, a1, a2, a3, t0, t1, t2); + masm, flags, receiver, name, a3, t0, t1, t2); // Cache miss: Jump to runtime. GenerateMiss(masm); @@ -1177,14 +1114,7 @@ void StoreIC::GenerateMegamorphic(MacroAssembler* masm) { void StoreIC::GenerateMiss(MacroAssembler* masm) { - // ----------- S t a t e ------------- - // -- a0 : value - // -- a1 : receiver - // -- a2 : name - // -- ra : return address - // ----------------------------------- - - __ Push(a1, a2, a0); + __ Push(ReceiverRegister(), NameRegister(), ValueRegister()); // Perform tail call to the entry. ExternalReference ref = ExternalReference(IC_Utility(kStoreIC_Miss), masm->isolate()); @@ -1193,17 +1123,18 @@ void StoreIC::GenerateMiss(MacroAssembler* masm) { void StoreIC::GenerateNormal(MacroAssembler* masm) { - // ----------- S t a t e ------------- - // -- a0 : value - // -- a1 : receiver - // -- a2 : name - // -- ra : return address - // ----------------------------------- Label miss; + Register receiver = ReceiverRegister(); + Register name = NameRegister(); + Register value = ValueRegister(); + Register dictionary = a3; + DCHECK(receiver.is(a1)); + DCHECK(name.is(a2)); + DCHECK(value.is(a0)); - GenerateNameDictionaryReceiverCheck(masm, a1, a3, t0, t1, &miss); + __ lw(dictionary, FieldMemOperand(receiver, JSObject::kPropertiesOffset)); - GenerateDictionaryStore(masm, &miss, a3, a2, a0, t0, t1); + GenerateDictionaryStore(masm, &miss, dictionary, name, value, t0, t1); Counters* counters = masm->isolate()->counters(); __ IncrementCounter(counters->store_normal_hit(), 1, t0, t1); __ Ret(); @@ -1216,21 +1147,13 @@ void StoreIC::GenerateNormal(MacroAssembler* masm) { void StoreIC::GenerateRuntimeSetProperty(MacroAssembler* masm, StrictMode strict_mode) { - // ----------- S t a t e ------------- - // -- a0 : value - // -- a1 : receiver - // -- a2 : name - // -- ra : return address - // ----------------------------------- - - __ Push(a1, a2, a0); + __ Push(ReceiverRegister(), NameRegister(), ValueRegister()); - __ li(a1, Operand(Smi::FromInt(NONE))); // PropertyAttributes. __ li(a0, Operand(Smi::FromInt(strict_mode))); - __ Push(a1, a0); + __ Push(a0); // Do tail-call to runtime routine. 
- __ TailCallRuntime(Runtime::kSetProperty, 5, 1); + __ TailCallRuntime(Runtime::kSetProperty, 4, 1); } @@ -1313,19 +1236,19 @@ void PatchInlinedSmiCode(Address address, InlinedSmiCheck check) { CodePatcher patcher(patch_address, 2); Register reg = Register::from_code(Assembler::GetRs(instr_at_patch)); if (check == ENABLE_INLINED_SMI_CHECK) { - ASSERT(Assembler::IsAndImmediate(instr_at_patch)); - ASSERT_EQ(0, Assembler::GetImmediate16(instr_at_patch)); + DCHECK(Assembler::IsAndImmediate(instr_at_patch)); + DCHECK_EQ(0, Assembler::GetImmediate16(instr_at_patch)); patcher.masm()->andi(at, reg, kSmiTagMask); } else { - ASSERT(check == DISABLE_INLINED_SMI_CHECK); - ASSERT(Assembler::IsAndImmediate(instr_at_patch)); + DCHECK(check == DISABLE_INLINED_SMI_CHECK); + DCHECK(Assembler::IsAndImmediate(instr_at_patch)); patcher.masm()->andi(at, reg, 0); } - ASSERT(Assembler::IsBranch(branch_instr)); + DCHECK(Assembler::IsBranch(branch_instr)); if (Assembler::IsBeq(branch_instr)) { patcher.ChangeBranchCondition(ne); } else { - ASSERT(Assembler::IsBne(branch_instr)); + DCHECK(Assembler::IsBne(branch_instr)); patcher.ChangeBranchCondition(eq); } } diff --git a/deps/v8/src/mips/lithium-codegen-mips.cc b/deps/v8/src/mips/lithium-codegen-mips.cc index fe3e349c2d3..8cf77607ad0 100644 --- a/deps/v8/src/mips/lithium-codegen-mips.cc +++ b/deps/v8/src/mips/lithium-codegen-mips.cc @@ -25,13 +25,13 @@ // (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE // OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. -#include "v8.h" +#include "src/v8.h" -#include "mips/lithium-codegen-mips.h" -#include "mips/lithium-gap-resolver-mips.h" -#include "code-stubs.h" -#include "stub-cache.h" -#include "hydrogen-osr.h" +#include "src/code-stubs.h" +#include "src/hydrogen-osr.h" +#include "src/mips/lithium-codegen-mips.h" +#include "src/mips/lithium-gap-resolver-mips.h" +#include "src/stub-cache.h" namespace v8 { namespace internal { @@ -64,7 +64,7 @@ class SafepointGenerator V8_FINAL : public CallWrapper { bool LCodeGen::GenerateCode() { LPhase phase("Z_Code generation", chunk()); - ASSERT(is_unused()); + DCHECK(is_unused()); status_ = GENERATING; // Open a frame scope to indicate that there is a frame on the stack. 
The @@ -81,7 +81,7 @@ bool LCodeGen::GenerateCode() { void LCodeGen::FinishCode(Handle<Code> code) { - ASSERT(is_done()); + DCHECK(is_done()); code->set_stack_slots(GetStackSlotCount()); code->set_safepoint_table_offset(safepoints_.GetCodeOffset()); if (code->is_optimized_code()) RegisterWeakObjectsInOptimizedCode(code); @@ -90,8 +90,8 @@ void LCodeGen::FinishCode(Handle<Code> code) { void LCodeGen::SaveCallerDoubles() { - ASSERT(info()->saves_caller_doubles()); - ASSERT(NeedsEagerFrame()); + DCHECK(info()->saves_caller_doubles()); + DCHECK(NeedsEagerFrame()); Comment(";;; Save clobbered callee double registers"); int count = 0; BitVector* doubles = chunk()->allocated_double_registers(); @@ -106,8 +106,8 @@ void LCodeGen::SaveCallerDoubles() { void LCodeGen::RestoreCallerDoubles() { - ASSERT(info()->saves_caller_doubles()); - ASSERT(NeedsEagerFrame()); + DCHECK(info()->saves_caller_doubles()); + DCHECK(NeedsEagerFrame()); Comment(";;; Restore clobbered callee double registers"); BitVector* doubles = chunk()->allocated_double_registers(); BitVector::Iterator save_iterator(doubles); @@ -122,7 +122,7 @@ void LCodeGen::RestoreCallerDoubles() { bool LCodeGen::GeneratePrologue() { - ASSERT(is_generating()); + DCHECK(is_generating()); if (info()->IsOptimizing()) { ProfileEntryHookStub::MaybeCallEntryHook(masm_); @@ -152,7 +152,7 @@ bool LCodeGen::GeneratePrologue() { __ Branch(&ok, ne, a2, Operand(at)); __ lw(a2, GlobalObjectOperand()); - __ lw(a2, FieldMemOperand(a2, GlobalObject::kGlobalReceiverOffset)); + __ lw(a2, FieldMemOperand(a2, GlobalObject::kGlobalProxyOffset)); __ sw(a2, MemOperand(sp, receiver_offset)); @@ -162,7 +162,11 @@ bool LCodeGen::GeneratePrologue() { info()->set_prologue_offset(masm_->pc_offset()); if (NeedsEagerFrame()) { - __ Prologue(info()->IsStub() ? BUILD_STUB_FRAME : BUILD_FUNCTION_FRAME); + if (info()->IsStub()) { + __ StubPrologue(); + } else { + __ Prologue(info()->IsCodePreAgingActive()); + } frame_is_built_ = true; info_->AddNoFrameRange(0, masm_->pc_offset()); } @@ -194,13 +198,16 @@ bool LCodeGen::GeneratePrologue() { int heap_slots = info()->num_heap_slots() - Context::MIN_CONTEXT_SLOTS; if (heap_slots > 0) { Comment(";;; Allocate local context"); + bool need_write_barrier = true; // Argument to NewContext is the function, which is in a1. if (heap_slots <= FastNewContextStub::kMaximumSlots) { FastNewContextStub stub(isolate(), heap_slots); __ CallStub(&stub); + // Result of FastNewContextStub is always in new space. + need_write_barrier = false; } else { __ push(a1); - __ CallRuntime(Runtime::kHiddenNewFunctionContext, 1); + __ CallRuntime(Runtime::kNewFunctionContext, 1); } RecordSafepoint(Safepoint::kNoLazyDeopt); // Context is returned in both v0. It replaces the context passed to us. @@ -220,8 +227,15 @@ bool LCodeGen::GeneratePrologue() { MemOperand target = ContextOperand(cp, var->index()); __ sw(a0, target); // Update the write barrier. This clobbers a3 and a0. - __ RecordWriteContextSlot( - cp, target.offset(), a0, a3, GetRAState(), kSaveFPRegs); + if (need_write_barrier) { + __ RecordWriteContextSlot( + cp, target.offset(), a0, a3, GetRAState(), kSaveFPRegs); + } else if (FLAG_debug_code) { + Label done; + __ JumpIfInNewSpace(cp, a0, &done); + __ Abort(kExpectedNewSpaceObject); + __ bind(&done); + } } } Comment(";;; End allocate local context"); @@ -247,7 +261,7 @@ void LCodeGen::GenerateOsrPrologue() { // Adjust the frame size, subsuming the unoptimized frame into the // optimized frame. 
int slots = GetStackSlotCount() - graph()->osr()->UnoptimizedFrameSlots(); - ASSERT(slots >= 0); + DCHECK(slots >= 0); __ Subu(sp, sp, Operand(slots * kPointerSize)); } @@ -263,7 +277,7 @@ void LCodeGen::GenerateBodyInstructionPre(LInstruction* instr) { bool LCodeGen::GenerateDeferredCode() { - ASSERT(is_generating()); + DCHECK(is_generating()); if (deferred_.length() > 0) { for (int i = 0; !is_aborted() && i < deferred_.length(); i++) { LDeferredCode* code = deferred_[i]; @@ -281,8 +295,8 @@ bool LCodeGen::GenerateDeferredCode() { __ bind(code->entry()); if (NeedsDeferredFrame()) { Comment(";;; Build frame"); - ASSERT(!frame_is_built_); - ASSERT(info()->IsStub()); + DCHECK(!frame_is_built_); + DCHECK(info()->IsStub()); frame_is_built_ = true; __ MultiPush(cp.bit() | fp.bit() | ra.bit()); __ li(scratch0(), Operand(Smi::FromInt(StackFrame::STUB))); @@ -293,7 +307,7 @@ bool LCodeGen::GenerateDeferredCode() { code->Generate(); if (NeedsDeferredFrame()) { Comment(";;; Destroy frame"); - ASSERT(frame_is_built_); + DCHECK(frame_is_built_); __ pop(at); __ MultiPop(cp.bit() | fp.bit() | ra.bit()); frame_is_built_ = false; @@ -310,45 +324,74 @@ bool LCodeGen::GenerateDeferredCode() { bool LCodeGen::GenerateDeoptJumpTable() { if (deopt_jump_table_.length() > 0) { + Label needs_frame, call_deopt_entry; + Comment(";;; -------------------- Jump table --------------------"); - } - Assembler::BlockTrampolinePoolScope block_trampoline_pool(masm_); - Label table_start; - __ bind(&table_start); - Label needs_frame; - for (int i = 0; i < deopt_jump_table_.length(); i++) { - __ bind(&deopt_jump_table_[i].label); - Address entry = deopt_jump_table_[i].address; - Deoptimizer::BailoutType type = deopt_jump_table_[i].bailout_type; - int id = Deoptimizer::GetDeoptimizationId(isolate(), entry, type); - if (id == Deoptimizer::kNotDeoptimizationEntry) { - Comment(";;; jump table entry %d.", i); - } else { + Address base = deopt_jump_table_[0].address; + + Register entry_offset = t9; + + int length = deopt_jump_table_.length(); + for (int i = 0; i < length; i++) { + __ bind(&deopt_jump_table_[i].label); + + Deoptimizer::BailoutType type = deopt_jump_table_[i].bailout_type; + DCHECK(type == deopt_jump_table_[0].bailout_type); + Address entry = deopt_jump_table_[i].address; + int id = Deoptimizer::GetDeoptimizationId(isolate(), entry, type); + DCHECK(id != Deoptimizer::kNotDeoptimizationEntry); Comment(";;; jump table entry %d: deoptimization bailout %d.", i, id); - } - __ li(t9, Operand(ExternalReference::ForDeoptEntry(entry))); - if (deopt_jump_table_[i].needs_frame) { - ASSERT(!info()->saves_caller_doubles()); - if (needs_frame.is_bound()) { - __ Branch(&needs_frame); + + // Second-level deopt table entries are contiguous and small, so instead + // of loading the full, absolute address of each one, load an immediate + // offset which will be added to the base address later. + __ li(entry_offset, Operand(entry - base)); + + if (deopt_jump_table_[i].needs_frame) { + DCHECK(!info()->saves_caller_doubles()); + if (needs_frame.is_bound()) { + __ Branch(&needs_frame); + } else { + __ bind(&needs_frame); + Comment(";;; call deopt with frame"); + __ MultiPush(cp.bit() | fp.bit() | ra.bit()); + // This variant of deopt can only be used with stubs. Since we don't + // have a function pointer to install in the stack frame that we're + // building, install a special marker there instead. 
+ DCHECK(info()->IsStub()); + __ li(at, Operand(Smi::FromInt(StackFrame::STUB))); + __ push(at); + __ Addu(fp, sp, + Operand(StandardFrameConstants::kFixedFrameSizeFromFp)); + __ bind(&call_deopt_entry); + // Add the base address to the offset previously loaded in + // entry_offset. + __ Addu(entry_offset, entry_offset, + Operand(ExternalReference::ForDeoptEntry(base))); + __ Call(entry_offset); + } } else { - __ bind(&needs_frame); - __ MultiPush(cp.bit() | fp.bit() | ra.bit()); - // This variant of deopt can only be used with stubs. Since we don't - // have a function pointer to install in the stack frame that we're - // building, install a special marker there instead. - ASSERT(info()->IsStub()); - __ li(scratch0(), Operand(Smi::FromInt(StackFrame::STUB))); - __ push(scratch0()); - __ Addu(fp, sp, Operand(StandardFrameConstants::kFixedFrameSizeFromFp)); - __ Call(t9); + // The last entry can fall through into `call_deopt_entry`, avoiding a + // branch. + bool need_branch = ((i + 1) != length) || call_deopt_entry.is_bound(); + + if (need_branch) __ Branch(&call_deopt_entry); } - } else { + } + + if (!call_deopt_entry.is_bound()) { + Comment(";;; call deopt"); + __ bind(&call_deopt_entry); + if (info()->saves_caller_doubles()) { - ASSERT(info()->IsStub()); + DCHECK(info()->IsStub()); RestoreCallerDoubles(); } - __ Call(t9); + + // Add the base address to the offset previously loaded in entry_offset. + __ Addu(entry_offset, entry_offset, + Operand(ExternalReference::ForDeoptEntry(base))); + __ Call(entry_offset); } } __ RecordComment("]"); @@ -361,7 +404,7 @@ bool LCodeGen::GenerateDeoptJumpTable() { bool LCodeGen::GenerateSafepointTable() { - ASSERT(is_done()); + DCHECK(is_done()); safepoints_.Emit(masm(), GetStackSlotCount()); return !is_aborted(); } @@ -378,7 +421,7 @@ DoubleRegister LCodeGen::ToDoubleRegister(int index) const { Register LCodeGen::ToRegister(LOperand* op) const { - ASSERT(op->IsRegister()); + DCHECK(op->IsRegister()); return ToRegister(op->index()); } @@ -392,15 +435,15 @@ Register LCodeGen::EmitLoadRegister(LOperand* op, Register scratch) { Handle<Object> literal = constant->handle(isolate()); Representation r = chunk_->LookupLiteralRepresentation(const_op); if (r.IsInteger32()) { - ASSERT(literal->IsNumber()); + DCHECK(literal->IsNumber()); __ li(scratch, Operand(static_cast<int32_t>(literal->Number()))); } else if (r.IsSmi()) { - ASSERT(constant->HasSmiValue()); + DCHECK(constant->HasSmiValue()); __ li(scratch, Operand(Smi::FromInt(constant->Integer32Value()))); } else if (r.IsDouble()) { Abort(kEmitLoadRegisterUnsupportedDoubleImmediate); } else { - ASSERT(r.IsSmiOrTagged()); + DCHECK(r.IsSmiOrTagged()); __ li(scratch, literal); } return scratch; @@ -414,7 +457,7 @@ Register LCodeGen::EmitLoadRegister(LOperand* op, Register scratch) { DoubleRegister LCodeGen::ToDoubleRegister(LOperand* op) const { - ASSERT(op->IsDoubleRegister()); + DCHECK(op->IsDoubleRegister()); return ToDoubleRegister(op->index()); } @@ -430,7 +473,7 @@ DoubleRegister LCodeGen::EmitLoadDoubleRegister(LOperand* op, Handle<Object> literal = constant->handle(isolate()); Representation r = chunk_->LookupLiteralRepresentation(const_op); if (r.IsInteger32()) { - ASSERT(literal->IsNumber()); + DCHECK(literal->IsNumber()); __ li(at, Operand(static_cast<int32_t>(literal->Number()))); __ mtc1(at, flt_scratch); __ cvt_d_w(dbl_scratch, flt_scratch); @@ -452,7 +495,7 @@ DoubleRegister LCodeGen::EmitLoadDoubleRegister(LOperand* op, Handle<Object> LCodeGen::ToHandle(LConstantOperand* op) const { HConstant* 
constant = chunk_->LookupConstant(op); - ASSERT(chunk_->LookupLiteralRepresentation(op).IsSmiOrTagged()); + DCHECK(chunk_->LookupLiteralRepresentation(op).IsSmiOrTagged()); return constant->handle(isolate()); } @@ -477,7 +520,7 @@ int32_t LCodeGen::ToRepresentation(LConstantOperand* op, HConstant* constant = chunk_->LookupConstant(op); int32_t value = constant->Integer32Value(); if (r.IsInteger32()) return value; - ASSERT(r.IsSmiOrTagged()); + DCHECK(r.IsSmiOrTagged()); return reinterpret_cast<int32_t>(Smi::FromInt(value)); } @@ -490,7 +533,7 @@ Smi* LCodeGen::ToSmi(LConstantOperand* op) const { double LCodeGen::ToDouble(LConstantOperand* op) const { HConstant* constant = chunk_->LookupConstant(op); - ASSERT(constant->HasDoubleValue()); + DCHECK(constant->HasDoubleValue()); return constant->DoubleValue(); } @@ -501,15 +544,15 @@ Operand LCodeGen::ToOperand(LOperand* op) { HConstant* constant = chunk()->LookupConstant(const_op); Representation r = chunk_->LookupLiteralRepresentation(const_op); if (r.IsSmi()) { - ASSERT(constant->HasSmiValue()); + DCHECK(constant->HasSmiValue()); return Operand(Smi::FromInt(constant->Integer32Value())); } else if (r.IsInteger32()) { - ASSERT(constant->HasInteger32Value()); + DCHECK(constant->HasInteger32Value()); return Operand(constant->Integer32Value()); } else if (r.IsDouble()) { Abort(kToOperandUnsupportedDoubleImmediate); } - ASSERT(r.IsTagged()); + DCHECK(r.IsTagged()); return Operand(constant->handle(isolate())); } else if (op->IsRegister()) { return Operand(ToRegister(op)); @@ -524,15 +567,15 @@ Operand LCodeGen::ToOperand(LOperand* op) { static int ArgumentsOffsetWithoutFrame(int index) { - ASSERT(index < 0); + DCHECK(index < 0); return -(index + 1) * kPointerSize; } MemOperand LCodeGen::ToMemOperand(LOperand* op) const { - ASSERT(!op->IsRegister()); - ASSERT(!op->IsDoubleRegister()); - ASSERT(op->IsStackSlot() || op->IsDoubleStackSlot()); + DCHECK(!op->IsRegister()); + DCHECK(!op->IsDoubleRegister()); + DCHECK(op->IsStackSlot() || op->IsDoubleStackSlot()); if (NeedsEagerFrame()) { return MemOperand(fp, StackSlotOffset(op->index())); } else { @@ -544,7 +587,7 @@ MemOperand LCodeGen::ToMemOperand(LOperand* op) const { MemOperand LCodeGen::ToHighMemOperand(LOperand* op) const { - ASSERT(op->IsDoubleStackSlot()); + DCHECK(op->IsDoubleStackSlot()); if (NeedsEagerFrame()) { return MemOperand(fp, StackSlotOffset(op->index()) + kPointerSize); } else { @@ -580,13 +623,13 @@ void LCodeGen::WriteTranslation(LEnvironment* environment, translation->BeginConstructStubFrame(closure_id, translation_size); break; case JS_GETTER: - ASSERT(translation_size == 1); - ASSERT(height == 0); + DCHECK(translation_size == 1); + DCHECK(height == 0); translation->BeginGetterStubFrame(closure_id); break; case JS_SETTER: - ASSERT(translation_size == 2); - ASSERT(height == 0); + DCHECK(translation_size == 2); + DCHECK(height == 0); translation->BeginSetterStubFrame(closure_id); break; case STUB: @@ -691,7 +734,7 @@ void LCodeGen::CallCodeGeneric(Handle<Code> code, RelocInfo::Mode mode, LInstruction* instr, SafepointMode safepoint_mode) { - ASSERT(instr != NULL); + DCHECK(instr != NULL); __ Call(code, mode); RecordSafepointWithLazyDeopt(instr, safepoint_mode); } @@ -701,7 +744,7 @@ void LCodeGen::CallRuntime(const Runtime::Function* function, int num_arguments, LInstruction* instr, SaveFPRegsMode save_doubles) { - ASSERT(instr != NULL); + DCHECK(instr != NULL); __ CallRuntime(function, num_arguments, save_doubles); @@ -778,9 +821,9 @@ void LCodeGen::DeoptimizeIf(Condition 
condition, Register src1, const Operand& src2) { RegisterEnvironmentForDeoptimization(environment, Safepoint::kNoLazyDeopt); - ASSERT(environment->HasBeenRegistered()); + DCHECK(environment->HasBeenRegistered()); int id = environment->deoptimization_index(); - ASSERT(info()->IsOptimizing() || info()->IsStub()); + DCHECK(info()->IsOptimizing() || info()->IsStub()); Address entry = Deoptimizer::GetDeoptimizationEntry(isolate(), id, bailout_type); if (entry == NULL) { @@ -816,7 +859,7 @@ void LCodeGen::DeoptimizeIf(Condition condition, __ bind(&skip); } - ASSERT(info()->IsStub() || frame_is_built_); + DCHECK(info()->IsStub() || frame_is_built_); // Go through jump table if we need to handle condition, build frame, or // restore caller doubles. if (condition == al && frame_is_built_ && @@ -854,7 +897,7 @@ void LCodeGen::PopulateDeoptimizationData(Handle<Code> code) { int length = deoptimizations_.length(); if (length == 0) return; Handle<DeoptimizationInputData> data = - DeoptimizationInputData::New(isolate(), length, TENURED); + DeoptimizationInputData::New(isolate(), length, 0, TENURED); Handle<ByteArray> translations = translations_.CreateByteArray(isolate()->factory()); @@ -905,7 +948,7 @@ int LCodeGen::DefineDeoptimizationLiteral(Handle<Object> literal) { void LCodeGen::PopulateDeoptimizationLiteralsWithInlinedFunctions() { - ASSERT(deoptimization_literals_.length() == 0); + DCHECK(deoptimization_literals_.length() == 0); const ZoneList<Handle<JSFunction> >* inlined_closures = chunk()->inlined_closures(); @@ -925,7 +968,7 @@ void LCodeGen::RecordSafepointWithLazyDeopt( if (safepoint_mode == RECORD_SIMPLE_SAFEPOINT) { RecordSafepoint(instr->pointer_map(), Safepoint::kLazyDeopt); } else { - ASSERT(safepoint_mode == RECORD_SAFEPOINT_WITH_REGISTERS_AND_NO_ARGUMENTS); + DCHECK(safepoint_mode == RECORD_SAFEPOINT_WITH_REGISTERS_AND_NO_ARGUMENTS); RecordSafepointWithRegisters( instr->pointer_map(), 0, Safepoint::kLazyDeopt); } @@ -937,7 +980,7 @@ void LCodeGen::RecordSafepoint( Safepoint::Kind kind, int arguments, Safepoint::DeoptMode deopt_mode) { - ASSERT(expected_safepoint_kind_ == kind); + DCHECK(expected_safepoint_kind_ == kind); const ZoneList<LOperand*>* operands = pointers->GetNormalizedOperands(); Safepoint safepoint = safepoints_.DefineSafepoint(masm(), @@ -973,15 +1016,6 @@ void LCodeGen::RecordSafepointWithRegisters(LPointerMap* pointers, } -void LCodeGen::RecordSafepointWithRegistersAndDoubles( - LPointerMap* pointers, - int arguments, - Safepoint::DeoptMode deopt_mode) { - RecordSafepoint( - pointers, Safepoint::kWithRegistersAndDoubles, arguments, deopt_mode); -} - - void LCodeGen::RecordAndWritePosition(int position) { if (position == RelocInfo::kNoPosition) return; masm()->positions_recorder()->RecordPosition(position); @@ -1035,8 +1069,8 @@ void LCodeGen::DoParameter(LParameter* instr) { void LCodeGen::DoCallStub(LCallStub* instr) { - ASSERT(ToRegister(instr->context()).is(cp)); - ASSERT(ToRegister(instr->result()).is(v0)); + DCHECK(ToRegister(instr->context()).is(cp)); + DCHECK(ToRegister(instr->result()).is(v0)); switch (instr->hydrogen()->major_key()) { case CodeStub::RegExpExec: { RegExpExecStub stub(isolate()); @@ -1067,7 +1101,7 @@ void LCodeGen::DoUnknownOSRValue(LUnknownOSRValue* instr) { void LCodeGen::DoModByPowerOf2I(LModByPowerOf2I* instr) { Register dividend = ToRegister(instr->dividend()); int32_t divisor = instr->divisor(); - ASSERT(dividend.is(ToRegister(instr->result()))); + DCHECK(dividend.is(ToRegister(instr->result()))); // Theoretically, a variation of the 
branch-free code for integer division by // a power of 2 (calculating the remainder via an additional multiplication @@ -1101,7 +1135,7 @@ void LCodeGen::DoModByConstI(LModByConstI* instr) { Register dividend = ToRegister(instr->dividend()); int32_t divisor = instr->divisor(); Register result = ToRegister(instr->result()); - ASSERT(!dividend.is(result)); + DCHECK(!dividend.is(result)); if (divisor == 0) { DeoptimizeIf(al, instr->environment()); @@ -1168,8 +1202,8 @@ void LCodeGen::DoDivByPowerOf2I(LDivByPowerOf2I* instr) { Register dividend = ToRegister(instr->dividend()); int32_t divisor = instr->divisor(); Register result = ToRegister(instr->result()); - ASSERT(divisor == kMinInt || IsPowerOf2(Abs(divisor))); - ASSERT(!result.is(dividend)); + DCHECK(divisor == kMinInt || IsPowerOf2(Abs(divisor))); + DCHECK(!result.is(dividend)); // Check for (0 / -x) that will produce negative zero. HDiv* hdiv = instr->hydrogen(); @@ -1212,7 +1246,7 @@ void LCodeGen::DoDivByConstI(LDivByConstI* instr) { Register dividend = ToRegister(instr->dividend()); int32_t divisor = instr->divisor(); Register result = ToRegister(instr->result()); - ASSERT(!dividend.is(result)); + DCHECK(!dividend.is(result)); if (divisor == 0) { DeoptimizeIf(al, instr->environment()); @@ -1285,7 +1319,7 @@ void LCodeGen::DoMultiplyAddD(LMultiplyAddD* instr) { DoubleRegister multiplicand = ToDoubleRegister(instr->multiplicand()); // This is computed in-place. - ASSERT(addend.is(ToDoubleRegister(instr->result()))); + DCHECK(addend.is(ToDoubleRegister(instr->result()))); __ madd_d(addend, addend, multiplier, multiplicand); } @@ -1295,12 +1329,17 @@ void LCodeGen::DoFlooringDivByPowerOf2I(LFlooringDivByPowerOf2I* instr) { Register dividend = ToRegister(instr->dividend()); Register result = ToRegister(instr->result()); int32_t divisor = instr->divisor(); - Register scratch = scratch0(); - ASSERT(!scratch.is(dividend)); + Register scratch = result.is(dividend) ? scratch0() : dividend; + DCHECK(!result.is(dividend) || !scratch.is(dividend)); + + // If the divisor is 1, return the dividend. + if (divisor == 1) { + __ Move(result, dividend); + return; + } // If the divisor is positive, things are easy: There can be no deopts and we // can simply do an arithmetic right shift. - if (divisor == 1) return; uint16_t shift = WhichPowerOf2Abs(divisor); if (divisor > 1) { __ sra(result, dividend, shift); @@ -1308,33 +1347,37 @@ void LCodeGen::DoFlooringDivByPowerOf2I(LFlooringDivByPowerOf2I* instr) { } // If the divisor is negative, we have to negate and handle edge cases. - if (instr->hydrogen()->CheckFlag(HValue::kLeftCanBeMinInt)) { - __ Move(scratch, dividend); - } + + // dividend can be the same register as result so save the value of it + // for checking overflow. + __ Move(scratch, dividend); + __ Subu(result, zero_reg, dividend); if (instr->hydrogen()->CheckFlag(HValue::kBailoutOnMinusZero)) { DeoptimizeIf(eq, instr->environment(), result, Operand(zero_reg)); } - // If the negation could not overflow, simply shifting is OK. - if (!instr->hydrogen()->CheckFlag(HValue::kLeftCanBeMinInt)) { - __ sra(result, dividend, shift); + // Dividing by -1 is basically negation, unless we overflow. + __ Xor(scratch, scratch, result); + if (divisor == -1) { + if (instr->hydrogen()->CheckFlag(HValue::kLeftCanBeMinInt)) { + DeoptimizeIf(ge, instr->environment(), scratch, Operand(zero_reg)); + } return; } - // Dividing by -1 is basically negation, unless we overflow. 
- __ Xor(at, scratch, result); - if (divisor == -1) { - DeoptimizeIf(ge, instr->environment(), at, Operand(zero_reg)); + // If the negation could not overflow, simply shifting is OK. + if (!instr->hydrogen()->CheckFlag(HValue::kLeftCanBeMinInt)) { + __ sra(result, result, shift); return; } Label no_overflow, done; - __ Branch(&no_overflow, lt, at, Operand(zero_reg)); + __ Branch(&no_overflow, lt, scratch, Operand(zero_reg)); __ li(result, Operand(kMinInt / divisor)); __ Branch(&done); __ bind(&no_overflow); - __ sra(result, dividend, shift); + __ sra(result, result, shift); __ bind(&done); } @@ -1343,7 +1386,7 @@ void LCodeGen::DoFlooringDivByConstI(LFlooringDivByConstI* instr) { Register dividend = ToRegister(instr->dividend()); int32_t divisor = instr->divisor(); Register result = ToRegister(instr->result()); - ASSERT(!dividend.is(result)); + DCHECK(!dividend.is(result)); if (divisor == 0) { DeoptimizeIf(al, instr->environment()); @@ -1368,7 +1411,7 @@ void LCodeGen::DoFlooringDivByConstI(LFlooringDivByConstI* instr) { // In the general case we may need to adjust before and after the truncating // division to get a flooring division. Register temp = ToRegister(instr->temp()); - ASSERT(!temp.is(dividend) && !temp.is(result)); + DCHECK(!temp.is(dividend) && !temp.is(result)); Label needs_adjustment, done; __ Branch(&needs_adjustment, divisor > 0 ? lt : gt, dividend, Operand(zero_reg)); @@ -1503,7 +1546,7 @@ void LCodeGen::DoMulI(LMulI* instr) { } } else { - ASSERT(right_op->IsRegister()); + DCHECK(right_op->IsRegister()); Register right = ToRegister(right_op); if (overflow) { @@ -1547,7 +1590,7 @@ void LCodeGen::DoMulI(LMulI* instr) { void LCodeGen::DoBitI(LBitI* instr) { LOperand* left_op = instr->left(); LOperand* right_op = instr->right(); - ASSERT(left_op->IsRegister()); + DCHECK(left_op->IsRegister()); Register left = ToRegister(left_op); Register result = ToRegister(instr->result()); Operand right(no_reg); @@ -1555,7 +1598,7 @@ void LCodeGen::DoBitI(LBitI* instr) { if (right_op->IsStackSlot()) { right = Operand(EmitLoadRegister(right_op, at)); } else { - ASSERT(right_op->IsRegister() || right_op->IsConstantOperand()); + DCHECK(right_op->IsRegister() || right_op->IsConstantOperand()); right = ToOperand(right_op); } @@ -1678,7 +1721,7 @@ void LCodeGen::DoSubI(LSubI* instr) { Register right_reg = EmitLoadRegister(right, at); __ Subu(ToRegister(result), ToRegister(left), Operand(right_reg)); } else { - ASSERT(right->IsRegister() || right->IsConstantOperand()); + DCHECK(right->IsRegister() || right->IsConstantOperand()); __ Subu(ToRegister(result), ToRegister(left), ToOperand(right)); } } else { // can_overflow. @@ -1691,7 +1734,7 @@ void LCodeGen::DoSubI(LSubI* instr) { right_reg, overflow); // Reg at also used as scratch. } else { - ASSERT(right->IsRegister()); + DCHECK(right->IsRegister()); // Due to overflow check macros not supporting constant operands, // handling the IsConstantOperand case was moved to prev if clause. 
__ SubuAndCheckForOverflow(ToRegister(result), @@ -1715,7 +1758,7 @@ void LCodeGen::DoConstantS(LConstantS* instr) { void LCodeGen::DoConstantD(LConstantD* instr) { - ASSERT(instr->result()->IsDoubleRegister()); + DCHECK(instr->result()->IsDoubleRegister()); DoubleRegister result = ToDoubleRegister(instr->result()); double v = instr->value(); __ Move(result, v); @@ -1730,13 +1773,6 @@ void LCodeGen::DoConstantE(LConstantE* instr) { void LCodeGen::DoConstantT(LConstantT* instr) { Handle<Object> object = instr->value(isolate()); AllowDeferredHandleDereference smi_check; - if (instr->hydrogen()->HasObjectMap()) { - Handle<Map> object_map = instr->hydrogen()->ObjectMap().handle(); - ASSERT(object->IsHeapObject()); - ASSERT(!object_map->is_stable() || - *object_map == Handle<HeapObject>::cast(object)->map()); - USE(object_map); - } __ li(ToRegister(instr->result()), object); } @@ -1754,10 +1790,10 @@ void LCodeGen::DoDateField(LDateField* instr) { Register scratch = ToRegister(instr->temp()); Smi* index = instr->index(); Label runtime, done; - ASSERT(object.is(a0)); - ASSERT(result.is(v0)); - ASSERT(!scratch.is(scratch0())); - ASSERT(!scratch.is(object)); + DCHECK(object.is(a0)); + DCHECK(result.is(v0)); + DCHECK(!scratch.is(scratch0())); + DCHECK(!scratch.is(object)); __ SmiTst(object, at); DeoptimizeIf(eq, instr->environment(), at, Operand(zero_reg)); @@ -1798,8 +1834,8 @@ MemOperand LCodeGen::BuildSeqStringOperand(Register string, return FieldMemOperand(string, SeqString::kHeaderSize + offset); } Register scratch = scratch0(); - ASSERT(!scratch.is(string)); - ASSERT(!scratch.is(ToRegister(index))); + DCHECK(!scratch.is(string)); + DCHECK(!scratch.is(ToRegister(index))); if (encoding == String::ONE_BYTE_ENCODING) { __ Addu(scratch, string, ToRegister(index)); } else { @@ -1875,7 +1911,7 @@ void LCodeGen::DoAddI(LAddI* instr) { Register right_reg = EmitLoadRegister(right, at); __ Addu(ToRegister(result), ToRegister(left), Operand(right_reg)); } else { - ASSERT(right->IsRegister() || right->IsConstantOperand()); + DCHECK(right->IsRegister() || right->IsConstantOperand()); __ Addu(ToRegister(result), ToRegister(left), ToOperand(right)); } } else { // can_overflow. @@ -1889,7 +1925,7 @@ void LCodeGen::DoAddI(LAddI* instr) { right_reg, overflow); // Reg at also used as scratch. } else { - ASSERT(right->IsRegister()); + DCHECK(right->IsRegister()); // Due to overflow check macros not supporting constant operands, // handling the IsConstantOperand case was moved to prev if clause. __ AdduAndCheckForOverflow(ToRegister(result), @@ -1909,22 +1945,21 @@ void LCodeGen::DoMathMinMax(LMathMinMax* instr) { Condition condition = (operation == HMathMinMax::kMathMin) ? le : ge; if (instr->hydrogen()->representation().IsSmiOrInteger32()) { Register left_reg = ToRegister(left); - Operand right_op = (right->IsRegister() || right->IsConstantOperand()) - ? 
ToOperand(right) - : Operand(EmitLoadRegister(right, at)); + Register right_reg = EmitLoadRegister(right, scratch0()); Register result_reg = ToRegister(instr->result()); Label return_right, done; - if (!result_reg.is(left_reg)) { - __ Branch(&return_right, NegateCondition(condition), left_reg, right_op); - __ mov(result_reg, left_reg); - __ Branch(&done); + Register scratch = scratch1(); + __ Slt(scratch, left_reg, Operand(right_reg)); + if (condition == ge) { + __ Movz(result_reg, left_reg, scratch); + __ Movn(result_reg, right_reg, scratch); + } else { + DCHECK(condition == le); + __ Movn(result_reg, left_reg, scratch); + __ Movz(result_reg, right_reg, scratch); } - __ Branch(&done, condition, left_reg, right_op); - __ bind(&return_right); - __ Addu(result_reg, zero_reg, right_op); - __ bind(&done); } else { - ASSERT(instr->hydrogen()->representation().IsDouble()); + DCHECK(instr->hydrogen()->representation().IsDouble()); FPURegister left_reg = ToDoubleRegister(left); FPURegister right_reg = ToDoubleRegister(right); FPURegister result_reg = ToDoubleRegister(instr->result()); @@ -2006,10 +2041,10 @@ void LCodeGen::DoArithmeticD(LArithmeticD* instr) { void LCodeGen::DoArithmeticT(LArithmeticT* instr) { - ASSERT(ToRegister(instr->context()).is(cp)); - ASSERT(ToRegister(instr->left()).is(a1)); - ASSERT(ToRegister(instr->right()).is(a0)); - ASSERT(ToRegister(instr->result()).is(v0)); + DCHECK(ToRegister(instr->context()).is(cp)); + DCHECK(ToRegister(instr->left()).is(a1)); + DCHECK(ToRegister(instr->right()).is(a0)); + DCHECK(ToRegister(instr->result()).is(v0)); BinaryOpICStub stub(isolate(), instr->op(), NO_OVERWRITE); CallCode(stub.GetCode(), RelocInfo::CODE_TARGET, instr); @@ -2096,36 +2131,36 @@ void LCodeGen::DoDebugBreak(LDebugBreak* instr) { void LCodeGen::DoBranch(LBranch* instr) { Representation r = instr->hydrogen()->value()->representation(); if (r.IsInteger32() || r.IsSmi()) { - ASSERT(!info()->IsStub()); + DCHECK(!info()->IsStub()); Register reg = ToRegister(instr->value()); EmitBranch(instr, ne, reg, Operand(zero_reg)); } else if (r.IsDouble()) { - ASSERT(!info()->IsStub()); + DCHECK(!info()->IsStub()); DoubleRegister reg = ToDoubleRegister(instr->value()); // Test the double value. Zero and NaN are false. EmitBranchF(instr, nue, reg, kDoubleRegZero); } else { - ASSERT(r.IsTagged()); + DCHECK(r.IsTagged()); Register reg = ToRegister(instr->value()); HType type = instr->hydrogen()->value()->type(); if (type.IsBoolean()) { - ASSERT(!info()->IsStub()); + DCHECK(!info()->IsStub()); __ LoadRoot(at, Heap::kTrueValueRootIndex); EmitBranch(instr, eq, reg, Operand(at)); } else if (type.IsSmi()) { - ASSERT(!info()->IsStub()); + DCHECK(!info()->IsStub()); EmitBranch(instr, ne, reg, Operand(zero_reg)); } else if (type.IsJSArray()) { - ASSERT(!info()->IsStub()); + DCHECK(!info()->IsStub()); EmitBranch(instr, al, zero_reg, Operand(zero_reg)); } else if (type.IsHeapNumber()) { - ASSERT(!info()->IsStub()); + DCHECK(!info()->IsStub()); DoubleRegister dbl_scratch = double_scratch0(); __ ldc1(dbl_scratch, FieldMemOperand(reg, HeapNumber::kValueOffset)); // Test the double value. Zero and NaN are false. 
EmitBranchF(instr, nue, dbl_scratch, kDoubleRegZero); } else if (type.IsString()) { - ASSERT(!info()->IsStub()); + DCHECK(!info()->IsStub()); __ lw(at, FieldMemOperand(reg, String::kLengthOffset)); EmitBranch(instr, ne, at, Operand(zero_reg)); } else { @@ -2268,7 +2303,10 @@ Condition LCodeGen::TokenToCondition(Token::Value op, bool is_unsigned) { void LCodeGen::DoCompareNumericAndBranch(LCompareNumericAndBranch* instr) { LOperand* left = instr->left(); LOperand* right = instr->right(); - Condition cond = TokenToCondition(instr->op(), false); + bool is_unsigned = + instr->hydrogen()->left()->CheckFlag(HInstruction::kUint32) || + instr->hydrogen()->right()->CheckFlag(HInstruction::kUint32); + Condition cond = TokenToCondition(instr->op(), is_unsigned); if (left->IsConstantOperand() && right->IsConstantOperand()) { // We can statically evaluate the comparison. @@ -2312,8 +2350,8 @@ void LCodeGen::DoCompareNumericAndBranch(LCompareNumericAndBranch* instr) { cmp_left = ToRegister(right); cmp_right = Operand(value); } - // We transposed the operands. Reverse the condition. - cond = ReverseCondition(cond); + // We commuted the operands, so commute the condition. + cond = CommuteCondition(cond); } else { cmp_left = ToRegister(left); cmp_right = Operand(ToRegister(right)); @@ -2352,7 +2390,7 @@ void LCodeGen::DoCmpHoleAndBranch(LCmpHoleAndBranch* instr) { void LCodeGen::DoCompareMinusZeroAndBranch(LCompareMinusZeroAndBranch* instr) { Representation rep = instr->hydrogen()->value()->representation(); - ASSERT(!rep.IsInteger32()); + DCHECK(!rep.IsInteger32()); Register scratch = ToRegister(instr->temp()); if (rep.IsDouble()) { @@ -2434,7 +2472,7 @@ void LCodeGen::DoIsStringAndBranch(LIsStringAndBranch* instr) { Register temp1 = ToRegister(instr->temp()); SmiCheck check_needed = - instr->hydrogen()->value()->IsHeapObject() + instr->hydrogen()->value()->type().IsHeapObject() ? 
OMIT_SMI_CHECK : INLINE_SMI_CHECK; Condition true_cond = EmitIsString(reg, temp1, instr->FalseLabel(chunk_), check_needed); @@ -2455,7 +2493,7 @@ void LCodeGen::DoIsUndetectableAndBranch(LIsUndetectableAndBranch* instr) { Register input = ToRegister(instr->value()); Register temp = ToRegister(instr->temp()); - if (!instr->hydrogen()->value()->IsHeapObject()) { + if (!instr->hydrogen()->value()->type().IsHeapObject()) { __ JumpIfSmi(input, instr->FalseLabel(chunk_)); } __ lw(temp, FieldMemOperand(input, HeapObject::kMapOffset)); @@ -2486,7 +2524,7 @@ static Condition ComputeCompareCondition(Token::Value op) { void LCodeGen::DoStringCompareAndBranch(LStringCompareAndBranch* instr) { - ASSERT(ToRegister(instr->context()).is(cp)); + DCHECK(ToRegister(instr->context()).is(cp)); Token::Value op = instr->op(); Handle<Code> ic = CompareIC::GetUninitialized(isolate(), op); @@ -2502,7 +2540,7 @@ static InstanceType TestType(HHasInstanceTypeAndBranch* instr) { InstanceType from = instr->from(); InstanceType to = instr->to(); if (from == FIRST_TYPE) return to; - ASSERT(from == to || to == LAST_TYPE); + DCHECK(from == to || to == LAST_TYPE); return from; } @@ -2522,7 +2560,7 @@ void LCodeGen::DoHasInstanceTypeAndBranch(LHasInstanceTypeAndBranch* instr) { Register scratch = scratch0(); Register input = ToRegister(instr->value()); - if (!instr->hydrogen()->value()->IsHeapObject()) { + if (!instr->hydrogen()->value()->type().IsHeapObject()) { __ JumpIfSmi(input, instr->FalseLabel(chunk_)); } @@ -2565,9 +2603,9 @@ void LCodeGen::EmitClassOfTest(Label* is_true, Register input, Register temp, Register temp2) { - ASSERT(!input.is(temp)); - ASSERT(!input.is(temp2)); - ASSERT(!temp.is(temp2)); + DCHECK(!input.is(temp)); + DCHECK(!input.is(temp2)); + DCHECK(!temp.is(temp2)); __ JumpIfSmi(input, is_false); @@ -2646,12 +2684,12 @@ void LCodeGen::DoCmpMapAndBranch(LCmpMapAndBranch* instr) { void LCodeGen::DoInstanceOf(LInstanceOf* instr) { - ASSERT(ToRegister(instr->context()).is(cp)); + DCHECK(ToRegister(instr->context()).is(cp)); Label true_label, done; - ASSERT(ToRegister(instr->left()).is(a0)); // Object is in a0. - ASSERT(ToRegister(instr->right()).is(a1)); // Function is in a1. + DCHECK(ToRegister(instr->left()).is(a0)); // Object is in a0. + DCHECK(ToRegister(instr->right()).is(a1)); // Function is in a1. Register result = ToRegister(instr->result()); - ASSERT(result.is(v0)); + DCHECK(result.is(v0)); InstanceofStub stub(isolate(), InstanceofStub::kArgsInRegisters); CallCode(stub.GetCode(), RelocInfo::CODE_TARGET, instr); @@ -2690,8 +2728,8 @@ void LCodeGen::DoInstanceOfKnownGlobal(LInstanceOfKnownGlobal* instr) { Register temp = ToRegister(instr->temp()); Register result = ToRegister(instr->result()); - ASSERT(object.is(a0)); - ASSERT(result.is(v0)); + DCHECK(object.is(a0)); + DCHECK(result.is(v0)); // A Smi is not instance of anything. 
__ JumpIfSmi(object, &false_result); @@ -2745,7 +2783,7 @@ void LCodeGen::DoInstanceOfKnownGlobal(LInstanceOfKnownGlobal* instr) { void LCodeGen::DoDeferredInstanceOfKnownGlobal(LInstanceOfKnownGlobal* instr, Label* map_check) { Register result = ToRegister(instr->result()); - ASSERT(result.is(v0)); + DCHECK(result.is(v0)); InstanceofStub::Flags flags = InstanceofStub::kNoFlags; flags = static_cast<InstanceofStub::Flags>( @@ -2756,14 +2794,14 @@ void LCodeGen::DoDeferredInstanceOfKnownGlobal(LInstanceOfKnownGlobal* instr, flags | InstanceofStub::kReturnTrueFalseObject); InstanceofStub stub(isolate(), flags); - PushSafepointRegistersScope scope(this, Safepoint::kWithRegisters); + PushSafepointRegistersScope scope(this); LoadContextFromDeferred(instr->context()); // Get the temp register reserved by the instruction. This needs to be t0 as // its slot of the pushing of safepoint registers is used to communicate the // offset to the location of the map check. Register temp = ToRegister(instr->temp()); - ASSERT(temp.is(t0)); + DCHECK(temp.is(t0)); __ li(InstanceofStub::right(), instr->function()); static const int kAdditionalDelta = 7; int delta = masm_->InstructionsGeneratedSince(map_check) + kAdditionalDelta; @@ -2787,7 +2825,7 @@ void LCodeGen::DoDeferredInstanceOfKnownGlobal(LInstanceOfKnownGlobal* instr, void LCodeGen::DoCmpT(LCmpT* instr) { - ASSERT(ToRegister(instr->context()).is(cp)); + DCHECK(ToRegister(instr->context()).is(cp)); Token::Value op = instr->op(); Handle<Code> ic = CompareIC::GetUninitialized(isolate(), op); @@ -2802,7 +2840,7 @@ void LCodeGen::DoCmpT(LCmpT* instr) { __ Branch(USE_DELAY_SLOT, &done, condition, v0, Operand(zero_reg)); __ bind(&check); __ LoadRoot(ToRegister(instr->result()), Heap::kTrueValueRootIndex); - ASSERT_EQ(1, masm()->InstructionsGeneratedSince(&check)); + DCHECK_EQ(1, masm()->InstructionsGeneratedSince(&check)); __ LoadRoot(ToRegister(instr->result()), Heap::kFalseValueRootIndex); __ bind(&done); } @@ -2861,11 +2899,20 @@ void LCodeGen::DoLoadGlobalCell(LLoadGlobalCell* instr) { void LCodeGen::DoLoadGlobalGeneric(LLoadGlobalGeneric* instr) { - ASSERT(ToRegister(instr->context()).is(cp)); - ASSERT(ToRegister(instr->global_object()).is(a0)); - ASSERT(ToRegister(instr->result()).is(v0)); - - __ li(a2, Operand(instr->name())); + DCHECK(ToRegister(instr->context()).is(cp)); + DCHECK(ToRegister(instr->global_object()).is(LoadIC::ReceiverRegister())); + DCHECK(ToRegister(instr->result()).is(v0)); + + __ li(LoadIC::NameRegister(), Operand(instr->name())); + if (FLAG_vector_ics) { + Register vector = ToRegister(instr->temp_vector()); + DCHECK(vector.is(LoadIC::VectorRegister())); + __ li(vector, instr->hydrogen()->feedback_vector()); + // No need to allocate this register. + DCHECK(LoadIC::SlotRegister().is(a0)); + __ li(LoadIC::SlotRegister(), + Operand(Smi::FromInt(instr->hydrogen()->slot()))); + } ContextualMode mode = instr->for_typeof() ? NOT_CONTEXTUAL : CONTEXTUAL; Handle<Code> ic = LoadIC::initialize_stub(isolate(), mode); CallCode(ic, RelocInfo::CODE_TARGET, instr); @@ -2940,7 +2987,7 @@ void LCodeGen::DoStoreContextSlot(LStoreContextSlot* instr) { __ sw(value, target); if (instr->hydrogen()->NeedsWriteBarrier()) { SmiCheck check_needed = - instr->hydrogen()->value()->IsHeapObject() + instr->hydrogen()->value()->type().IsHeapObject() ? 
OMIT_SMI_CHECK : INLINE_SMI_CHECK; __ RecordWriteContextSlot(context, target.offset(), @@ -2985,12 +3032,21 @@ void LCodeGen::DoLoadNamedField(LLoadNamedField* instr) { void LCodeGen::DoLoadNamedGeneric(LLoadNamedGeneric* instr) { - ASSERT(ToRegister(instr->context()).is(cp)); - ASSERT(ToRegister(instr->object()).is(a0)); - ASSERT(ToRegister(instr->result()).is(v0)); + DCHECK(ToRegister(instr->context()).is(cp)); + DCHECK(ToRegister(instr->object()).is(LoadIC::ReceiverRegister())); + DCHECK(ToRegister(instr->result()).is(v0)); // Name is always in a2. - __ li(a2, Operand(instr->name())); + __ li(LoadIC::NameRegister(), Operand(instr->name())); + if (FLAG_vector_ics) { + Register vector = ToRegister(instr->temp_vector()); + DCHECK(vector.is(LoadIC::VectorRegister())); + __ li(vector, instr->hydrogen()->feedback_vector()); + // No need to allocate this register. + DCHECK(LoadIC::SlotRegister().is(a0)); + __ li(LoadIC::SlotRegister(), + Operand(Smi::FromInt(instr->hydrogen()->slot()))); + } Handle<Code> ic = LoadIC::initialize_stub(isolate(), NOT_CONTEXTUAL); CallCode(ic, RelocInfo::CODE_TARGET, instr); } @@ -3001,17 +3057,6 @@ void LCodeGen::DoLoadFunctionPrototype(LLoadFunctionPrototype* instr) { Register function = ToRegister(instr->function()); Register result = ToRegister(instr->result()); - // Check that the function really is a function. Load map into the - // result register. - __ GetObjectType(function, result, scratch); - DeoptimizeIf(ne, instr->environment(), scratch, Operand(JS_FUNCTION_TYPE)); - - // Make sure that the function has an instance prototype. - Label non_instance; - __ lbu(scratch, FieldMemOperand(result, Map::kBitFieldOffset)); - __ And(scratch, scratch, Operand(1 << Map::kHasNonInstancePrototype)); - __ Branch(&non_instance, ne, scratch, Operand(zero_reg)); - // Get the prototype or initial map from the function. __ lw(result, FieldMemOperand(function, JSFunction::kPrototypeOrInitialMapOffset)); @@ -3027,12 +3072,6 @@ void LCodeGen::DoLoadFunctionPrototype(LLoadFunctionPrototype* instr) { // Get the prototype from the initial map. __ lw(result, FieldMemOperand(result, Map::kPrototypeOffset)); - __ Branch(&done); - - // Non-instance prototype: Fetch prototype from constructor field - // in initial map. - __ bind(&non_instance); - __ lw(result, FieldMemOperand(result, Map::kConstructorOffset)); // All done. __ bind(&done); @@ -3107,16 +3146,13 @@ void LCodeGen::DoLoadKeyedExternalArray(LLoadKeyed* instr) { int element_size_shift = ElementsKindToShiftSize(elements_kind); int shift_size = (instr->hydrogen()->key()->representation().IsSmi()) ? (element_size_shift - kSmiTagSize) : element_size_shift; - int additional_offset = IsFixedTypedArrayElementsKind(elements_kind) - ? FixedTypedArrayBase::kDataOffset - kHeapObjectTag - : 0; + int base_offset = instr->base_offset(); if (elements_kind == EXTERNAL_FLOAT32_ELEMENTS || elements_kind == FLOAT32_ELEMENTS || elements_kind == EXTERNAL_FLOAT64_ELEMENTS || elements_kind == FLOAT64_ELEMENTS) { - int base_offset = - (instr->additional_index() << element_size_shift) + additional_offset; + int base_offset = instr->base_offset(); FPURegister result = ToDoubleRegister(instr->result()); if (key_is_constant) { __ Addu(scratch0(), external_pointer, constant_key << element_size_shift); @@ -3128,15 +3164,14 @@ void LCodeGen::DoLoadKeyedExternalArray(LLoadKeyed* instr) { elements_kind == FLOAT32_ELEMENTS) { __ lwc1(result, MemOperand(scratch0(), base_offset)); __ cvt_d_s(result, result); - } else { // loading doubles, not floats. 
+ } else { // i.e. elements_kind == EXTERNAL_DOUBLE_ELEMENTS __ ldc1(result, MemOperand(scratch0(), base_offset)); } } else { Register result = ToRegister(instr->result()); MemOperand mem_operand = PrepareKeyedOperand( key, external_pointer, key_is_constant, constant_key, - element_size_shift, shift_size, - instr->additional_index(), additional_offset); + element_size_shift, shift_size, base_offset); switch (elements_kind) { case EXTERNAL_INT8_ELEMENTS: case INT8_ELEMENTS: @@ -3196,15 +3231,13 @@ void LCodeGen::DoLoadKeyedFixedDoubleArray(LLoadKeyed* instr) { int element_size_shift = ElementsKindToShiftSize(FAST_DOUBLE_ELEMENTS); - int base_offset = - FixedDoubleArray::kHeaderSize - kHeapObjectTag + - (instr->additional_index() << element_size_shift); + int base_offset = instr->base_offset(); if (key_is_constant) { int constant_key = ToInteger32(LConstantOperand::cast(instr->key())); if (constant_key & 0xF0000000) { Abort(kArrayIndexConstantValueTooBig); } - base_offset += constant_key << element_size_shift; + base_offset += constant_key * kDoubleSize; } __ Addu(scratch, elements, Operand(base_offset)); @@ -3230,12 +3263,11 @@ void LCodeGen::DoLoadKeyedFixedArray(LLoadKeyed* instr) { Register result = ToRegister(instr->result()); Register scratch = scratch0(); Register store_base = scratch; - int offset = 0; + int offset = instr->base_offset(); if (instr->key()->IsConstantOperand()) { LConstantOperand* const_operand = LConstantOperand::cast(instr->key()); - offset = FixedArray::OffsetOfElementAt(ToInteger32(const_operand) + - instr->additional_index()); + offset += ToInteger32(const_operand) * kPointerSize; store_base = elements; } else { Register key = ToRegister(instr->key()); @@ -3250,9 +3282,8 @@ void LCodeGen::DoLoadKeyedFixedArray(LLoadKeyed* instr) { __ sll(scratch, key, kPointerSizeLog2); __ addu(scratch, elements, scratch); } - offset = FixedArray::OffsetOfElementAt(instr->additional_index()); } - __ lw(result, FieldMemOperand(store_base, offset)); + __ lw(result, MemOperand(store_base, offset)); // Check for the hole value. if (instr->hydrogen()->RequiresHoleCheck()) { @@ -3284,40 +3315,18 @@ MemOperand LCodeGen::PrepareKeyedOperand(Register key, int constant_key, int element_size, int shift_size, - int additional_index, - int additional_offset) { - int base_offset = (additional_index << element_size) + additional_offset; + int base_offset) { if (key_is_constant) { - return MemOperand(base, - base_offset + (constant_key << element_size)); + return MemOperand(base, (constant_key << element_size) + base_offset); } - if (additional_offset != 0) { - if (shift_size >= 0) { - __ sll(scratch0(), key, shift_size); - __ Addu(scratch0(), scratch0(), Operand(base_offset)); - } else { - ASSERT_EQ(-1, shift_size); - // Key can be negative, so using sra here. 
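// Elaborating the comment above: on this 32-bit port a Smi stores its integer shifted left by one (kSmiTagSize == 1), so for byte-sized elements the combined scale is 0 - 1 = -1, and a single arithmetic shift right both untags and scales the key. Sketch of the arithmetic (illustration only):
int32_t SmiKeyToByteIndex(int32_t smi_key) {
  // 10 encodes index 5; -6 encodes index -3. An arithmetic shift ('sra')
  // keeps the sign, which is why sra rather than srl handles negative keys.
  return smi_key >> 1;
}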
- __ sra(scratch0(), key, 1); - __ Addu(scratch0(), scratch0(), Operand(base_offset)); - } - __ Addu(scratch0(), base, scratch0()); - return MemOperand(scratch0()); - } - - if (additional_index != 0) { - additional_index *= 1 << (element_size - shift_size); - __ Addu(scratch0(), key, Operand(additional_index)); - } - - if (additional_index == 0) { + if (base_offset == 0) { if (shift_size >= 0) { __ sll(scratch0(), key, shift_size); __ Addu(scratch0(), base, scratch0()); return MemOperand(scratch0()); } else { - ASSERT_EQ(-1, shift_size); + DCHECK_EQ(-1, shift_size); __ srl(scratch0(), key, 1); __ Addu(scratch0(), base, scratch0()); return MemOperand(scratch0()); @@ -3325,22 +3334,32 @@ MemOperand LCodeGen::PrepareKeyedOperand(Register key, } if (shift_size >= 0) { - __ sll(scratch0(), scratch0(), shift_size); + __ sll(scratch0(), key, shift_size); __ Addu(scratch0(), base, scratch0()); - return MemOperand(scratch0()); + return MemOperand(scratch0(), base_offset); } else { - ASSERT_EQ(-1, shift_size); - __ srl(scratch0(), scratch0(), 1); + DCHECK_EQ(-1, shift_size); + __ sra(scratch0(), key, 1); __ Addu(scratch0(), base, scratch0()); - return MemOperand(scratch0()); + return MemOperand(scratch0(), base_offset); } } void LCodeGen::DoLoadKeyedGeneric(LLoadKeyedGeneric* instr) { - ASSERT(ToRegister(instr->context()).is(cp)); - ASSERT(ToRegister(instr->object()).is(a1)); - ASSERT(ToRegister(instr->key()).is(a0)); + DCHECK(ToRegister(instr->context()).is(cp)); + DCHECK(ToRegister(instr->object()).is(LoadIC::ReceiverRegister())); + DCHECK(ToRegister(instr->key()).is(LoadIC::NameRegister())); + + if (FLAG_vector_ics) { + Register vector = ToRegister(instr->temp_vector()); + DCHECK(vector.is(LoadIC::VectorRegister())); + __ li(vector, instr->hydrogen()->feedback_vector()); + // No need to allocate this register. + DCHECK(LoadIC::SlotRegister().is(a0)); + __ li(LoadIC::SlotRegister(), + Operand(Smi::FromInt(instr->hydrogen()->slot()))); + } Handle<Code> ic = isolate()->builtins()->KeyedLoadIC_Initialize(); CallCode(ic, RelocInfo::CODE_TARGET, instr); @@ -3437,7 +3456,7 @@ void LCodeGen::DoWrapReceiver(LWrapReceiver* instr) { __ lw(result, ContextOperand(result, Context::GLOBAL_OBJECT_INDEX)); __ lw(result, - FieldMemOperand(result, GlobalObject::kGlobalReceiverOffset)); + FieldMemOperand(result, GlobalObject::kGlobalProxyOffset)); if (result.is(receiver)) { __ bind(&result_in_receiver); @@ -3457,9 +3476,9 @@ void LCodeGen::DoApplyArguments(LApplyArguments* instr) { Register length = ToRegister(instr->length()); Register elements = ToRegister(instr->elements()); Register scratch = scratch0(); - ASSERT(receiver.is(a0)); // Used for parameter count. - ASSERT(function.is(a1)); // Required by InvokeFunction. - ASSERT(ToRegister(instr->result()).is(v0)); + DCHECK(receiver.is(a0)); // Used for parameter count. + DCHECK(function.is(a1)); // Required by InvokeFunction. + DCHECK(ToRegister(instr->result()).is(v0)); // Copy the arguments to this function possibly from the // adaptor frame below it. @@ -3488,7 +3507,7 @@ void LCodeGen::DoApplyArguments(LApplyArguments* instr) { __ sll(scratch, length, 2); __ bind(&invoke); - ASSERT(instr->HasPointerMap()); + DCHECK(instr->HasPointerMap()); LPointerMap* pointers = instr->pointer_map(); SafepointGenerator safepoint_generator( this, pointers, Safepoint::kLazyDeopt); @@ -3528,18 +3547,18 @@ void LCodeGen::DoContext(LContext* instr) { __ lw(result, MemOperand(fp, StandardFrameConstants::kContextOffset)); } else { // If there is no frame, the context must be in cp. 
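// DoLoadGlobalGeneric, DoLoadNamedGeneric and DoLoadKeyedGeneric above now share a FLAG_vector_ics path: besides the name, the load IC receives the feedback vector and a Smi-tagged slot index through the fixed registers published by LoadIC::VectorRegister()/SlotRegister(). Conceptually the IC uses the pair to read and update per-site type feedback; a rough sketch of the idea (hypothetical types, not V8's implementation):
struct FeedbackCell { const void* expected_map; const void* handler; };
const void* DispatchLoad(FeedbackCell* vector, int slot, const void* receiver_map) {
  FeedbackCell& cell = vector[slot];  // one cell per property-access site
  return (cell.expected_map == receiver_map)
             ? cell.handler           // cache hit: run the stored handler
             : nullptr;               // miss: fall back to runtime, update the cell
}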
- ASSERT(result.is(cp)); + DCHECK(result.is(cp)); } } void LCodeGen::DoDeclareGlobals(LDeclareGlobals* instr) { - ASSERT(ToRegister(instr->context()).is(cp)); + DCHECK(ToRegister(instr->context()).is(cp)); __ li(scratch0(), instr->hydrogen()->pairs()); __ li(scratch1(), Operand(Smi::FromInt(instr->hydrogen()->flags()))); // The context is the first argument. __ Push(cp, scratch0(), scratch1()); - CallRuntime(Runtime::kHiddenDeclareGlobals, 3, instr); + CallRuntime(Runtime::kDeclareGlobals, 3, instr); } @@ -3585,8 +3604,8 @@ void LCodeGen::CallKnownFunction(Handle<JSFunction> function, void LCodeGen::DoDeferredMathAbsTaggedHeapNumber(LMathAbs* instr) { - ASSERT(instr->context() != NULL); - ASSERT(ToRegister(instr->context()).is(cp)); + DCHECK(instr->context() != NULL); + DCHECK(ToRegister(instr->context()).is(cp)); Register input = ToRegister(instr->value()); Register result = ToRegister(instr->result()); Register scratch = scratch0(); @@ -3609,7 +3628,7 @@ void LCodeGen::DoDeferredMathAbsTaggedHeapNumber(LMathAbs* instr) { // Input is negative. Reverse its sign. // Preserve the value of all registers. { - PushSafepointRegistersScope scope(this, Safepoint::kWithRegisters); + PushSafepointRegistersScope scope(this); // Registers were saved at the safepoint, so we can use // many scratch registers. @@ -3628,7 +3647,7 @@ void LCodeGen::DoDeferredMathAbsTaggedHeapNumber(LMathAbs* instr) { // Slow case: Call the runtime system to do the number allocation. __ bind(&slow); - CallRuntimeFromDeferred(Runtime::kHiddenAllocateHeapNumber, 0, instr, + CallRuntimeFromDeferred(Runtime::kAllocateHeapNumber, 0, instr, instr->context()); // Set the pointer to the new heap number in tmp. if (!tmp1.is(v0)) @@ -3805,6 +3824,14 @@ void LCodeGen::DoMathRound(LMathRound* instr) { } +void LCodeGen::DoMathFround(LMathFround* instr) { + DoubleRegister input = ToDoubleRegister(instr->value()); + DoubleRegister result = ToDoubleRegister(instr->result()); + __ cvt_s_d(result.low(), input); + __ cvt_d_s(result, result.low()); +} + + void LCodeGen::DoMathSqrt(LMathSqrt* instr) { DoubleRegister input = ToDoubleRegister(instr->value()); DoubleRegister result = ToDoubleRegister(instr->result()); @@ -3817,7 +3844,7 @@ void LCodeGen::DoMathPowHalf(LMathPowHalf* instr) { DoubleRegister result = ToDoubleRegister(instr->result()); DoubleRegister temp = ToDoubleRegister(instr->temp()); - ASSERT(!input.is(result)); + DCHECK(!input.is(result)); // Note that according to ECMA-262 15.8.2.13: // Math.pow(-Infinity, 0.5) == Infinity @@ -3840,12 +3867,12 @@ void LCodeGen::DoPower(LPower* instr) { Representation exponent_type = instr->hydrogen()->right()->representation(); // Having marked this as a call, we can use any registers. // Just make sure that the input/output registers are the expected ones. 
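// The new DoMathFround above implements Math.fround: round a double to the nearest float32, then widen back. The cvt_s_d/cvt_d_s pair is exactly a float round-trip, which in portable C++ is:
double Fround(double x) {
  return static_cast<double>(static_cast<float>(x));  // e.g. 1.1 -> 1.10000002384185791015625
}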
- ASSERT(!instr->right()->IsDoubleRegister() || + DCHECK(!instr->right()->IsDoubleRegister() || ToDoubleRegister(instr->right()).is(f4)); - ASSERT(!instr->right()->IsRegister() || + DCHECK(!instr->right()->IsRegister() || ToRegister(instr->right()).is(a2)); - ASSERT(ToDoubleRegister(instr->left()).is(f2)); - ASSERT(ToDoubleRegister(instr->result()).is(f0)); + DCHECK(ToDoubleRegister(instr->left()).is(f2)); + DCHECK(ToDoubleRegister(instr->result()).is(f0)); if (exponent_type.IsSmi()) { MathPowStub stub(isolate(), MathPowStub::TAGGED); @@ -3863,7 +3890,7 @@ void LCodeGen::DoPower(LPower* instr) { MathPowStub stub(isolate(), MathPowStub::INTEGER); __ CallStub(&stub); } else { - ASSERT(exponent_type.IsDouble()); + DCHECK(exponent_type.IsDouble()); MathPowStub stub(isolate(), MathPowStub::DOUBLE); __ CallStub(&stub); } @@ -3901,9 +3928,9 @@ void LCodeGen::DoMathClz32(LMathClz32* instr) { void LCodeGen::DoInvokeFunction(LInvokeFunction* instr) { - ASSERT(ToRegister(instr->context()).is(cp)); - ASSERT(ToRegister(instr->function()).is(a1)); - ASSERT(instr->HasPointerMap()); + DCHECK(ToRegister(instr->context()).is(cp)); + DCHECK(ToRegister(instr->function()).is(a1)); + DCHECK(instr->HasPointerMap()); Handle<JSFunction> known_function = instr->hydrogen()->known_function(); if (known_function.is_null()) { @@ -3922,7 +3949,7 @@ void LCodeGen::DoInvokeFunction(LInvokeFunction* instr) { void LCodeGen::DoCallWithDescriptor(LCallWithDescriptor* instr) { - ASSERT(ToRegister(instr->result()).is(v0)); + DCHECK(ToRegister(instr->result()).is(v0)); LPointerMap* pointers = instr->pointer_map(); SafepointGenerator generator(this, pointers, Safepoint::kLazyDeopt); @@ -3933,7 +3960,7 @@ void LCodeGen::DoCallWithDescriptor(LCallWithDescriptor* instr) { generator.BeforeCall(__ CallSize(code, RelocInfo::CODE_TARGET)); __ Call(code, RelocInfo::CODE_TARGET); } else { - ASSERT(instr->target()->IsRegister()); + DCHECK(instr->target()->IsRegister()); Register target = ToRegister(instr->target()); generator.BeforeCall(__ CallSize(target)); __ Addu(target, target, Operand(Code::kHeaderSize - kHeapObjectTag)); @@ -3944,8 +3971,8 @@ void LCodeGen::DoCallWithDescriptor(LCallWithDescriptor* instr) { void LCodeGen::DoCallJSFunction(LCallJSFunction* instr) { - ASSERT(ToRegister(instr->function()).is(a1)); - ASSERT(ToRegister(instr->result()).is(v0)); + DCHECK(ToRegister(instr->function()).is(a1)); + DCHECK(ToRegister(instr->result()).is(v0)); if (instr->hydrogen()->pass_argument_count()) { __ li(a0, Operand(instr->arity())); @@ -3963,9 +3990,9 @@ void LCodeGen::DoCallJSFunction(LCallJSFunction* instr) { void LCodeGen::DoCallFunction(LCallFunction* instr) { - ASSERT(ToRegister(instr->context()).is(cp)); - ASSERT(ToRegister(instr->function()).is(a1)); - ASSERT(ToRegister(instr->result()).is(v0)); + DCHECK(ToRegister(instr->context()).is(cp)); + DCHECK(ToRegister(instr->function()).is(a1)); + DCHECK(ToRegister(instr->result()).is(v0)); int arity = instr->arity(); CallFunctionStub stub(isolate(), arity, instr->hydrogen()->function_flags()); @@ -3974,9 +4001,9 @@ void LCodeGen::DoCallFunction(LCallFunction* instr) { void LCodeGen::DoCallNew(LCallNew* instr) { - ASSERT(ToRegister(instr->context()).is(cp)); - ASSERT(ToRegister(instr->constructor()).is(a1)); - ASSERT(ToRegister(instr->result()).is(v0)); + DCHECK(ToRegister(instr->context()).is(cp)); + DCHECK(ToRegister(instr->constructor()).is(a1)); + DCHECK(ToRegister(instr->result()).is(v0)); __ li(a0, Operand(instr->arity())); // No cell in a2 for construct type feedback in 
optimized code @@ -3987,9 +4014,9 @@ void LCodeGen::DoCallNew(LCallNew* instr) { void LCodeGen::DoCallNewArray(LCallNewArray* instr) { - ASSERT(ToRegister(instr->context()).is(cp)); - ASSERT(ToRegister(instr->constructor()).is(a1)); - ASSERT(ToRegister(instr->result()).is(v0)); + DCHECK(ToRegister(instr->context()).is(cp)); + DCHECK(ToRegister(instr->constructor()).is(a1)); + DCHECK(ToRegister(instr->result()).is(v0)); __ li(a0, Operand(instr->arity())); __ LoadRoot(a2, Heap::kUndefinedValueRootIndex); @@ -4075,13 +4102,13 @@ void LCodeGen::DoStoreNamedField(LStoreNamedField* instr) { __ AssertNotSmi(object); - ASSERT(!representation.IsSmi() || + DCHECK(!representation.IsSmi() || !instr->value()->IsConstantOperand() || IsSmi(LConstantOperand::cast(instr->value()))); if (representation.IsDouble()) { - ASSERT(access.IsInobject()); - ASSERT(!instr->hydrogen()->has_transition()); - ASSERT(!instr->hydrogen()->NeedsWriteBarrier()); + DCHECK(access.IsInobject()); + DCHECK(!instr->hydrogen()->has_transition()); + DCHECK(!instr->hydrogen()->NeedsWriteBarrier()); DoubleRegister value = ToDoubleRegister(instr->value()); __ sdc1(value, FieldMemOperand(object, offset)); return; @@ -4095,14 +4122,11 @@ void LCodeGen::DoStoreNamedField(LStoreNamedField* instr) { if (instr->hydrogen()->NeedsWriteBarrierForMap()) { Register temp = ToRegister(instr->temp()); // Update the write barrier for the map field. - __ RecordWriteField(object, - HeapObject::kMapOffset, - scratch, - temp, - GetRAState(), - kSaveFPRegs, - OMIT_REMEMBERED_SET, - OMIT_SMI_CHECK); + __ RecordWriteForMap(object, + scratch, + temp, + GetRAState(), + kSaveFPRegs); } } @@ -4120,7 +4144,8 @@ void LCodeGen::DoStoreNamedField(LStoreNamedField* instr) { GetRAState(), kSaveFPRegs, EMIT_REMEMBERED_SET, - instr->hydrogen()->SmiCheckForWriteBarrier()); + instr->hydrogen()->SmiCheckForWriteBarrier(), + instr->hydrogen()->PointersToHereCheckForValue()); } } else { __ lw(scratch, FieldMemOperand(object, JSObject::kPropertiesOffset)); @@ -4136,19 +4161,19 @@ void LCodeGen::DoStoreNamedField(LStoreNamedField* instr) { GetRAState(), kSaveFPRegs, EMIT_REMEMBERED_SET, - instr->hydrogen()->SmiCheckForWriteBarrier()); + instr->hydrogen()->SmiCheckForWriteBarrier(), + instr->hydrogen()->PointersToHereCheckForValue()); } } } void LCodeGen::DoStoreNamedGeneric(LStoreNamedGeneric* instr) { - ASSERT(ToRegister(instr->context()).is(cp)); - ASSERT(ToRegister(instr->object()).is(a1)); - ASSERT(ToRegister(instr->value()).is(a0)); + DCHECK(ToRegister(instr->context()).is(cp)); + DCHECK(ToRegister(instr->object()).is(StoreIC::ReceiverRegister())); + DCHECK(ToRegister(instr->value()).is(StoreIC::ValueRegister())); - // Name is always in a2. - __ li(a2, Operand(instr->name())); + __ li(StoreIC::NameRegister(), Operand(instr->name())); Handle<Code> ic = StoreIC::initialize_stub(isolate(), instr->strict_mode()); CallCode(ic, RelocInfo::CODE_TARGET, instr); } @@ -4161,7 +4186,7 @@ void LCodeGen::DoBoundsCheck(LBoundsCheck* instr) { if (instr->index()->IsConstantOperand()) { operand = ToOperand(instr->index()); reg = ToRegister(instr->length()); - cc = ReverseCondition(cc); + cc = CommuteCondition(cc); } else { reg = ToRegister(instr->index()); operand = ToOperand(instr->length()); @@ -4194,16 +4219,12 @@ void LCodeGen::DoStoreKeyedExternalArray(LStoreKeyed* instr) { int element_size_shift = ElementsKindToShiftSize(elements_kind); int shift_size = (instr->hydrogen()->key()->representation().IsSmi()) ? 
(element_size_shift - kSmiTagSize) : element_size_shift; - int additional_offset = IsFixedTypedArrayElementsKind(elements_kind) - ? FixedTypedArrayBase::kDataOffset - kHeapObjectTag - : 0; + int base_offset = instr->base_offset(); if (elements_kind == EXTERNAL_FLOAT32_ELEMENTS || elements_kind == FLOAT32_ELEMENTS || elements_kind == EXTERNAL_FLOAT64_ELEMENTS || elements_kind == FLOAT64_ELEMENTS) { - int base_offset = - (instr->additional_index() << element_size_shift) + additional_offset; Register address = scratch0(); FPURegister value(ToDoubleRegister(instr->value())); if (key_is_constant) { @@ -4230,7 +4251,7 @@ void LCodeGen::DoStoreKeyedExternalArray(LStoreKeyed* instr) { MemOperand mem_operand = PrepareKeyedOperand( key, external_pointer, key_is_constant, constant_key, element_size_shift, shift_size, - instr->additional_index(), additional_offset); + base_offset); switch (elements_kind) { case EXTERNAL_UINT8_CLAMPED_ELEMENTS: case EXTERNAL_INT8_ELEMENTS: @@ -4277,6 +4298,7 @@ void LCodeGen::DoStoreKeyedFixedDoubleArray(LStoreKeyed* instr) { Register scratch = scratch0(); DoubleRegister double_scratch = double_scratch0(); bool key_is_constant = instr->key()->IsConstantOperand(); + int base_offset = instr->base_offset(); Label not_nan, done; // Calculate the effective address of the slot in the array to store the @@ -4288,13 +4310,11 @@ void LCodeGen::DoStoreKeyedFixedDoubleArray(LStoreKeyed* instr) { Abort(kArrayIndexConstantValueTooBig); } __ Addu(scratch, elements, - Operand((constant_key << element_size_shift) + - FixedDoubleArray::kHeaderSize - kHeapObjectTag)); + Operand((constant_key << element_size_shift) + base_offset)); } else { int shift_size = (instr->hydrogen()->key()->representation().IsSmi()) ? (element_size_shift - kSmiTagSize) : element_size_shift; - __ Addu(scratch, elements, - Operand(FixedDoubleArray::kHeaderSize - kHeapObjectTag)); + __ Addu(scratch, elements, Operand(base_offset)); __ sll(at, ToRegister(instr->key()), shift_size); __ Addu(scratch, scratch, at); } @@ -4309,14 +4329,12 @@ void LCodeGen::DoStoreKeyedFixedDoubleArray(LStoreKeyed* instr) { __ bind(&is_nan); __ LoadRoot(at, Heap::kNanValueRootIndex); __ ldc1(double_scratch, FieldMemOperand(at, HeapNumber::kValueOffset)); - __ sdc1(double_scratch, MemOperand(scratch, instr->additional_index() << - element_size_shift)); + __ sdc1(double_scratch, MemOperand(scratch, 0)); __ Branch(&done); } __ bind(&not_nan); - __ sdc1(value, MemOperand(scratch, instr->additional_index() << - element_size_shift)); + __ sdc1(value, MemOperand(scratch, 0)); __ bind(&done); } @@ -4328,14 +4346,13 @@ void LCodeGen::DoStoreKeyedFixedArray(LStoreKeyed* instr) { : no_reg; Register scratch = scratch0(); Register store_base = scratch; - int offset = 0; + int offset = instr->base_offset(); // Do the store.
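// The switch from FieldMemOperand to MemOperand in these keyed accesses goes hand in hand with base_offset: the offset now already folds in the array header minus the heap-object tag, so no further tag adjustment is applied at the use site. Worked sketch, assuming this port's 32-bit layout (4-byte pointers, 8-byte FixedArray header, kHeapObjectTag == 1):
const int kPtr = 4, kHeader = 8, kTag = 1;
int ElementOffset(int index) {
  // old: FieldMemOperand(base, kHeader + index * kPtr), which subtracts kTag itself
  // new: MemOperand(base, base_offset + index * kPtr), base_offset == kHeader - kTag
  return (kHeader - kTag) + index * kPtr;  // element 0 -> 7, element 2 -> 15
}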
if (instr->key()->IsConstantOperand()) { - ASSERT(!instr->hydrogen()->NeedsWriteBarrier()); + DCHECK(!instr->hydrogen()->NeedsWriteBarrier()); LConstantOperand* const_operand = LConstantOperand::cast(instr->key()); - offset = FixedArray::OffsetOfElementAt(ToInteger32(const_operand) + - instr->additional_index()); + offset += ToInteger32(const_operand) * kPointerSize; store_base = elements; } else { // Even though the HLoadKeyed instruction forces the input @@ -4349,23 +4366,23 @@ void LCodeGen::DoStoreKeyedFixedArray(LStoreKeyed* instr) { __ sll(scratch, key, kPointerSizeLog2); __ addu(scratch, elements, scratch); } - offset = FixedArray::OffsetOfElementAt(instr->additional_index()); } - __ sw(value, FieldMemOperand(store_base, offset)); + __ sw(value, MemOperand(store_base, offset)); if (instr->hydrogen()->NeedsWriteBarrier()) { SmiCheck check_needed = - instr->hydrogen()->value()->IsHeapObject() + instr->hydrogen()->value()->type().IsHeapObject() ? OMIT_SMI_CHECK : INLINE_SMI_CHECK; // Compute address of modified element and store it into key register. - __ Addu(key, store_base, Operand(offset - kHeapObjectTag)); + __ Addu(key, store_base, Operand(offset)); __ RecordWrite(elements, key, value, GetRAState(), kSaveFPRegs, EMIT_REMEMBERED_SET, - check_needed); + check_needed, + instr->hydrogen()->PointersToHereCheckForValue()); } } @@ -4383,10 +4400,10 @@ void LCodeGen::DoStoreKeyed(LStoreKeyed* instr) { void LCodeGen::DoStoreKeyedGeneric(LStoreKeyedGeneric* instr) { - ASSERT(ToRegister(instr->context()).is(cp)); - ASSERT(ToRegister(instr->object()).is(a2)); - ASSERT(ToRegister(instr->key()).is(a1)); - ASSERT(ToRegister(instr->value()).is(a0)); + DCHECK(ToRegister(instr->context()).is(cp)); + DCHECK(ToRegister(instr->object()).is(KeyedStoreIC::ReceiverRegister())); + DCHECK(ToRegister(instr->key()).is(KeyedStoreIC::NameRegister())); + DCHECK(ToRegister(instr->value()).is(KeyedStoreIC::ValueRegister())); Handle<Code> ic = (instr->strict_mode() == STRICT) ? isolate()->builtins()->KeyedStoreIC_Initialize_Strict() @@ -4413,18 +4430,20 @@ void LCodeGen::DoTransitionElementsKind(LTransitionElementsKind* instr) { __ li(new_map_reg, Operand(to_map)); __ sw(new_map_reg, FieldMemOperand(object_reg, HeapObject::kMapOffset)); // Write barrier. 
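// RecordWrite/RecordWriteForMap here are V8's generational write barrier: after a pointer store, the slot may have to be added to the remembered set so a minor GC can still find old-to-new pointers. A conceptual sketch only (hypothetical helpers, not V8's API):
struct Object;
bool InOldSpace(Object** slot);           // assumed predicates, for illustration
bool InNewSpace(Object* value);
void RememberedSetInsert(Object** slot);
void StoreWithBarrier(Object** slot, Object* value) {
  *slot = value;
  if (InOldSpace(slot) && InNewSpace(value)) {
    RememberedSetInsert(slot);  // minor GCs will rescan this slot
  }
}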
- __ RecordWriteField(object_reg, HeapObject::kMapOffset, new_map_reg, - scratch, GetRAState(), kDontSaveFPRegs); + __ RecordWriteForMap(object_reg, + new_map_reg, + scratch, + GetRAState(), + kDontSaveFPRegs); } else { - ASSERT(object_reg.is(a0)); - ASSERT(ToRegister(instr->context()).is(cp)); - PushSafepointRegistersScope scope( - this, Safepoint::kWithRegistersAndDoubles); + DCHECK(object_reg.is(a0)); + DCHECK(ToRegister(instr->context()).is(cp)); + PushSafepointRegistersScope scope(this); __ li(a1, Operand(to_map)); bool is_js_array = from_map->instance_type() == JS_ARRAY_TYPE; TransitionElementsKindStub stub(isolate(), from_kind, to_kind, is_js_array); __ CallStub(&stub); - RecordSafepointWithRegistersAndDoubles( + RecordSafepointWithRegisters( instr->pointer_map(), 0, Safepoint::kLazyDeopt); } __ bind(&not_applicable); @@ -4443,9 +4462,9 @@ void LCodeGen::DoTrapAllocationMemento(LTrapAllocationMemento* instr) { void LCodeGen::DoStringAdd(LStringAdd* instr) { - ASSERT(ToRegister(instr->context()).is(cp)); - ASSERT(ToRegister(instr->left()).is(a1)); - ASSERT(ToRegister(instr->right()).is(a0)); + DCHECK(ToRegister(instr->context()).is(cp)); + DCHECK(ToRegister(instr->left()).is(a1)); + DCHECK(ToRegister(instr->right()).is(a0)); StringAddStub stub(isolate(), instr->hydrogen()->flags(), instr->hydrogen()->pretenure_flag()); @@ -4487,7 +4506,7 @@ void LCodeGen::DoDeferredStringCharCodeAt(LStringCharCodeAt* instr) { // contained in the register pointer map. __ mov(result, zero_reg); - PushSafepointRegistersScope scope(this, Safepoint::kWithRegisters); + PushSafepointRegistersScope scope(this); __ push(string); // Push the index as a smi. This is safe because of the checks in // DoStringCharCodeAt above. @@ -4500,7 +4519,7 @@ void LCodeGen::DoDeferredStringCharCodeAt(LStringCharCodeAt* instr) { __ SmiTag(index); __ push(index); } - CallRuntimeFromDeferred(Runtime::kHiddenStringCharCodeAt, 2, instr, + CallRuntimeFromDeferred(Runtime::kStringCharCodeAtRT, 2, instr, instr->context()); __ AssertSmi(v0); __ SmiUntag(v0); @@ -4524,11 +4543,11 @@ void LCodeGen::DoStringCharFromCode(LStringCharFromCode* instr) { DeferredStringCharFromCode* deferred = new(zone()) DeferredStringCharFromCode(this, instr); - ASSERT(instr->hydrogen()->value()->representation().IsInteger32()); + DCHECK(instr->hydrogen()->value()->representation().IsInteger32()); Register char_code = ToRegister(instr->char_code()); Register result = ToRegister(instr->result()); Register scratch = scratch0(); - ASSERT(!char_code.is(result)); + DCHECK(!char_code.is(result)); __ Branch(deferred->entry(), hi, char_code, Operand(String::kMaxOneByteCharCode)); @@ -4551,7 +4570,7 @@ void LCodeGen::DoDeferredStringCharFromCode(LStringCharFromCode* instr) { // contained in the register pointer map.
__ mov(result, zero_reg); - PushSafepointRegistersScope scope(this, Safepoint::kWithRegisters); + PushSafepointRegistersScope scope(this); __ SmiTag(char_code); __ push(char_code); CallRuntimeFromDeferred(Runtime::kCharFromCode, 1, instr, instr->context()); @@ -4561,9 +4580,9 @@ void LCodeGen::DoDeferredStringCharFromCode(LStringCharFromCode* instr) { void LCodeGen::DoInteger32ToDouble(LInteger32ToDouble* instr) { LOperand* input = instr->value(); - ASSERT(input->IsRegister() || input->IsStackSlot()); + DCHECK(input->IsRegister() || input->IsStackSlot()); LOperand* output = instr->result(); - ASSERT(output->IsDoubleRegister()); + DCHECK(output->IsDoubleRegister()); FPURegister single_scratch = double_scratch0().low(); if (input->IsStackSlot()) { Register scratch = scratch0(); @@ -4684,15 +4703,15 @@ void LCodeGen::DoDeferredNumberTagIU(LInstruction* instr, __ mov(dst, zero_reg); // Preserve the value of all registers. - PushSafepointRegistersScope scope(this, Safepoint::kWithRegisters); + PushSafepointRegistersScope scope(this); // NumberTagI and NumberTagD use the context from the frame, rather than // the environment's HContext or HInlinedContext value. - // They only call Runtime::kHiddenAllocateHeapNumber. + // They only call Runtime::kAllocateHeapNumber. // The corresponding HChange instructions are added in a phase that does // not have easy access to the local context. __ lw(cp, MemOperand(fp, StandardFrameConstants::kContextOffset)); - __ CallRuntimeSaveDoubles(Runtime::kHiddenAllocateHeapNumber); + __ CallRuntimeSaveDoubles(Runtime::kAllocateHeapNumber); RecordSafepointWithRegisters( instr->pointer_map(), 0, Safepoint::kNoLazyDeopt); __ Subu(v0, v0, kHeapObjectTag); @@ -4750,14 +4769,14 @@ void LCodeGen::DoDeferredNumberTagD(LNumberTagD* instr) { Register reg = ToRegister(instr->result()); __ mov(reg, zero_reg); - PushSafepointRegistersScope scope(this, Safepoint::kWithRegisters); + PushSafepointRegistersScope scope(this); // NumberTagI and NumberTagD use the context from the frame, rather than // the environment's HContext or HInlinedContext value. - // They only call Runtime::kHiddenAllocateHeapNumber. + // They only call Runtime::kAllocateHeapNumber. // The corresponding HChange instructions are added in a phase that does // not have easy access to the local context. 
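// SmiTag/SmiUntag in these deferred paths are plain shifts on the 32-bit port: a Smi is the integer doubled, leaving the tag bit clear. Sketch:
int32_t SmiTag32(int32_t value) { return value << 1; }  // 5  -> 10
int32_t SmiUntag32(int32_t smi) { return smi >> 1; }    // 10 -> 5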
__ lw(cp, MemOperand(fp, StandardFrameConstants::kContextOffset)); - __ CallRuntimeSaveDoubles(Runtime::kHiddenAllocateHeapNumber); + __ CallRuntimeSaveDoubles(Runtime::kAllocateHeapNumber); RecordSafepointWithRegisters( instr->pointer_map(), 0, Safepoint::kNoLazyDeopt); __ Subu(v0, v0, kHeapObjectTag); @@ -4839,7 +4858,7 @@ void LCodeGen::EmitNumberUntagD(Register input_reg, } } else { __ SmiUntag(scratch, input_reg); - ASSERT(mode == NUMBER_CANDIDATE_IS_SMI); + DCHECK(mode == NUMBER_CANDIDATE_IS_SMI); } // Smi to double register conversion __ bind(&load_smi); @@ -4857,8 +4876,8 @@ void LCodeGen::DoDeferredTaggedToI(LTaggedToI* instr) { DoubleRegister double_scratch = double_scratch0(); DoubleRegister double_scratch2 = ToDoubleRegister(instr->temp2()); - ASSERT(!scratch1.is(input_reg) && !scratch1.is(scratch2)); - ASSERT(!scratch2.is(input_reg) && !scratch2.is(scratch1)); + DCHECK(!scratch1.is(input_reg) && !scratch1.is(scratch2)); + DCHECK(!scratch2.is(input_reg) && !scratch2.is(scratch1)); Label done; @@ -4884,7 +4903,7 @@ void LCodeGen::DoDeferredTaggedToI(LTaggedToI* instr) { __ bind(&no_heap_number); __ LoadRoot(at, Heap::kUndefinedValueRootIndex); __ Branch(&check_bools, ne, input_reg, Operand(at)); - ASSERT(ToRegister(instr->result()).is(input_reg)); + DCHECK(ToRegister(instr->result()).is(input_reg)); __ Branch(USE_DELAY_SLOT, &done); __ mov(input_reg, zero_reg); // In delay slot. @@ -4945,8 +4964,8 @@ void LCodeGen::DoTaggedToI(LTaggedToI* instr) { }; LOperand* input = instr->value(); - ASSERT(input->IsRegister()); - ASSERT(input->Equals(instr->result())); + DCHECK(input->IsRegister()); + DCHECK(input->Equals(instr->result())); Register input_reg = ToRegister(input); @@ -4967,9 +4986,9 @@ void LCodeGen::DoTaggedToI(LTaggedToI* instr) { void LCodeGen::DoNumberUntagD(LNumberUntagD* instr) { LOperand* input = instr->value(); - ASSERT(input->IsRegister()); + DCHECK(input->IsRegister()); LOperand* result = instr->result(); - ASSERT(result->IsDoubleRegister()); + DCHECK(result->IsDoubleRegister()); Register input_reg = ToRegister(input); DoubleRegister result_reg = ToDoubleRegister(result); @@ -5062,7 +5081,7 @@ void LCodeGen::DoCheckSmi(LCheckSmi* instr) { void LCodeGen::DoCheckNonSmi(LCheckNonSmi* instr) { - if (!instr->hydrogen()->value()->IsHeapObject()) { + if (!instr->hydrogen()->value()->type().IsHeapObject()) { LOperand* input = instr->value(); __ SmiTst(ToRegister(input), at); DeoptimizeIf(eq, instr->environment(), at, Operand(zero_reg)); @@ -5097,7 +5116,7 @@ void LCodeGen::DoCheckInstanceType(LCheckInstanceType* instr) { instr->hydrogen()->GetCheckMaskAndTag(&mask, &tag); if (IsPowerOf2(mask)) { - ASSERT(tag == 0 || IsPowerOf2(tag)); + DCHECK(tag == 0 || IsPowerOf2(tag)); __ And(at, scratch, mask); DeoptimizeIf(tag == 0 ? 
ne : eq, instr->environment(), at, Operand(zero_reg)); @@ -5129,7 +5148,7 @@ void LCodeGen::DoCheckValue(LCheckValue* instr) { void LCodeGen::DoDeferredInstanceMigration(LCheckMaps* instr, Register object) { { - PushSafepointRegistersScope scope(this, Safepoint::kWithRegisters); + PushSafepointRegistersScope scope(this); __ push(object); __ mov(cp, zero_reg); __ CallRuntimeSaveDoubles(Runtime::kTryMigrateInstance); @@ -5170,7 +5189,7 @@ void LCodeGen::DoCheckMaps(LCheckMaps* instr) { Register map_reg = scratch0(); LOperand* input = instr->value(); - ASSERT(input->IsRegister()); + DCHECK(input->IsRegister()); Register reg = ToRegister(input); __ lw(map_reg, FieldMemOperand(reg, HeapObject::kMapOffset)); @@ -5293,11 +5312,11 @@ void LCodeGen::DoAllocate(LAllocate* instr) { flags = static_cast<AllocationFlags>(flags | DOUBLE_ALIGNMENT); } if (instr->hydrogen()->IsOldPointerSpaceAllocation()) { - ASSERT(!instr->hydrogen()->IsOldDataSpaceAllocation()); - ASSERT(!instr->hydrogen()->IsNewSpaceAllocation()); + DCHECK(!instr->hydrogen()->IsOldDataSpaceAllocation()); + DCHECK(!instr->hydrogen()->IsNewSpaceAllocation()); flags = static_cast<AllocationFlags>(flags | PRETENURE_OLD_POINTER_SPACE); } else if (instr->hydrogen()->IsOldDataSpaceAllocation()) { - ASSERT(!instr->hydrogen()->IsNewSpaceAllocation()); + DCHECK(!instr->hydrogen()->IsNewSpaceAllocation()); flags = static_cast<AllocationFlags>(flags | PRETENURE_OLD_DATA_SPACE); } if (instr->size()->IsConstantOperand()) { @@ -5309,33 +5328,26 @@ void LCodeGen::DoAllocate(LAllocate* instr) { } } else { Register size = ToRegister(instr->size()); - __ Allocate(size, - result, - scratch, - scratch2, - deferred->entry(), - flags); + __ Allocate(size, result, scratch, scratch2, deferred->entry(), flags); } __ bind(deferred->exit()); if (instr->hydrogen()->MustPrefillWithFiller()) { + STATIC_ASSERT(kHeapObjectTag == 1); if (instr->size()->IsConstantOperand()) { int32_t size = ToInteger32(LConstantOperand::cast(instr->size())); - __ li(scratch, Operand(size)); + __ li(scratch, Operand(size - kHeapObjectTag)); } else { - scratch = ToRegister(instr->size()); + __ Subu(scratch, ToRegister(instr->size()), Operand(kHeapObjectTag)); } - __ Subu(scratch, scratch, Operand(kPointerSize)); - __ Subu(result, result, Operand(kHeapObjectTag)); + __ li(scratch2, Operand(isolate()->factory()->one_pointer_filler_map())); Label loop; __ bind(&loop); - __ li(scratch2, Operand(isolate()->factory()->one_pointer_filler_map())); + __ Subu(scratch, scratch, Operand(kPointerSize)); __ Addu(at, result, Operand(scratch)); __ sw(scratch2, MemOperand(at)); - __ Subu(scratch, scratch, Operand(kPointerSize)); __ Branch(&loop, ge, scratch, Operand(zero_reg)); - __ Addu(result, result, Operand(kHeapObjectTag)); } } @@ -5348,10 +5360,10 @@ void LCodeGen::DoDeferredAllocate(LAllocate* instr) { // contained in the register pointer map. 
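// The reworked prefill loop in DoAllocate above walks the new object from the top word down, storing the one-pointer filler map so the heap stays iterable before the object is initialized. STATIC_ASSERT(kHeapObjectTag == 1) is what lets "size - 1" double as the untagged end offset relative to the still-tagged result pointer. Equivalent logic, sketched in plain C++ (32-bit layout assumed):
void PrefillWithFiller(char* tagged_result, int size, void* filler_map) {
  const int kTag = 1, kPointerSize = 4;
  int off = size - kTag;    // matches the li/Subu before the loop
  do {
    off -= kPointerSize;    // the Subu inside the loop
    // tagged_result is base + 1, so off == -1 lands on untagged offset 0.
    *reinterpret_cast<void**>(tagged_result + off) = filler_map;
  } while (off >= 0);       // the Branch ge against zero_reg
}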
__ mov(result, zero_reg); - PushSafepointRegistersScope scope(this, Safepoint::kWithRegisters); + PushSafepointRegistersScope scope(this); if (instr->size()->IsRegister()) { Register size = ToRegister(instr->size()); - ASSERT(!size.is(result)); + DCHECK(!size.is(result)); __ SmiTag(size); __ push(size); } else { @@ -5368,11 +5380,11 @@ void LCodeGen::DoDeferredAllocate(LAllocate* instr) { int flags = AllocateDoubleAlignFlag::encode( instr->hydrogen()->MustAllocateDoubleAligned()); if (instr->hydrogen()->IsOldPointerSpaceAllocation()) { - ASSERT(!instr->hydrogen()->IsOldDataSpaceAllocation()); - ASSERT(!instr->hydrogen()->IsNewSpaceAllocation()); + DCHECK(!instr->hydrogen()->IsOldDataSpaceAllocation()); + DCHECK(!instr->hydrogen()->IsNewSpaceAllocation()); flags = AllocateTargetSpace::update(flags, OLD_POINTER_SPACE); } else if (instr->hydrogen()->IsOldDataSpaceAllocation()) { - ASSERT(!instr->hydrogen()->IsNewSpaceAllocation()); + DCHECK(!instr->hydrogen()->IsNewSpaceAllocation()); flags = AllocateTargetSpace::update(flags, OLD_DATA_SPACE); } else { flags = AllocateTargetSpace::update(flags, NEW_SPACE); @@ -5380,21 +5392,21 @@ void LCodeGen::DoDeferredAllocate(LAllocate* instr) { __ Push(Smi::FromInt(flags)); CallRuntimeFromDeferred( - Runtime::kHiddenAllocateInTargetSpace, 2, instr, instr->context()); + Runtime::kAllocateInTargetSpace, 2, instr, instr->context()); __ StoreToSafepointRegisterSlot(v0, result); } void LCodeGen::DoToFastProperties(LToFastProperties* instr) { - ASSERT(ToRegister(instr->value()).is(a0)); - ASSERT(ToRegister(instr->result()).is(v0)); + DCHECK(ToRegister(instr->value()).is(a0)); + DCHECK(ToRegister(instr->result()).is(v0)); __ push(a0); CallRuntime(Runtime::kToFastProperties, 1, instr); } void LCodeGen::DoRegExpLiteral(LRegExpLiteral* instr) { - ASSERT(ToRegister(instr->context()).is(cp)); + DCHECK(ToRegister(instr->context()).is(cp)); Label materialized; // Registers will be used as follows: // t3 = literals array. @@ -5414,7 +5426,7 @@ void LCodeGen::DoRegExpLiteral(LRegExpLiteral* instr) { __ li(t1, Operand(instr->hydrogen()->pattern())); __ li(t0, Operand(instr->hydrogen()->flags())); __ Push(t3, t2, t1, t0); - CallRuntime(Runtime::kHiddenMaterializeRegExpLiteral, 4, instr); + CallRuntime(Runtime::kMaterializeRegExpLiteral, 4, instr); __ mov(a1, v0); __ bind(&materialized); @@ -5427,7 +5439,7 @@ void LCodeGen::DoRegExpLiteral(LRegExpLiteral* instr) { __ bind(&runtime_allocate); __ li(a0, Operand(Smi::FromInt(size))); __ Push(a1, a0); - CallRuntime(Runtime::kHiddenAllocateInNewSpace, 1, instr); + CallRuntime(Runtime::kAllocateInNewSpace, 1, instr); __ pop(a1); __ bind(&allocated); @@ -5447,7 +5459,7 @@ void LCodeGen::DoRegExpLiteral(LRegExpLiteral* instr) { void LCodeGen::DoFunctionLiteral(LFunctionLiteral* instr) { - ASSERT(ToRegister(instr->context()).is(cp)); + DCHECK(ToRegister(instr->context()).is(cp)); // Use the fast case closure allocation code that allocates in new // space for nested functions that don't need literals cloning. bool pretenure = instr->hydrogen()->pretenure(); @@ -5462,13 +5474,13 @@ void LCodeGen::DoFunctionLiteral(LFunctionLiteral* instr) { __ li(a1, Operand(pretenure ? 
factory()->true_value() : factory()->false_value())); __ Push(cp, a2, a1); - CallRuntime(Runtime::kHiddenNewClosure, 3, instr); + CallRuntime(Runtime::kNewClosure, 3, instr); } } void LCodeGen::DoTypeof(LTypeof* instr) { - ASSERT(ToRegister(instr->result()).is(v0)); + DCHECK(ToRegister(instr->result()).is(v0)); Register input = ToRegister(instr->value()); __ push(input); CallRuntime(Runtime::kTypeof, 1, instr); @@ -5485,11 +5497,11 @@ void LCodeGen::DoTypeofIsAndBranch(LTypeofIsAndBranch* instr) { instr->FalseLabel(chunk_), input, instr->type_literal(), - cmp1, - cmp2); + &cmp1, + &cmp2); - ASSERT(cmp1.is_valid()); - ASSERT(!cmp2.is_reg() || cmp2.rm().is_valid()); + DCHECK(cmp1.is_valid()); + DCHECK(!cmp2.is_reg() || cmp2.rm().is_valid()); if (final_branch_condition != kNoCondition) { EmitBranch(instr, final_branch_condition, cmp1, cmp2); @@ -5501,8 +5513,8 @@ Condition LCodeGen::EmitTypeofIs(Label* true_label, Label* false_label, Register input, Handle<String> type_name, - Register& cmp1, - Operand& cmp2) { + Register* cmp1, + Operand* cmp2) { // This function utilizes the delay slot heavily. This is used to load // values that are always usable without depending on the type of the input // register. @@ -5513,8 +5525,8 @@ Condition LCodeGen::EmitTypeofIs(Label* true_label, __ JumpIfSmi(input, true_label); __ lw(input, FieldMemOperand(input, HeapObject::kMapOffset)); __ LoadRoot(at, Heap::kHeapNumberMapRootIndex); - cmp1 = input; - cmp2 = Operand(at); + *cmp1 = input; + *cmp2 = Operand(at); final_branch_condition = eq; } else if (String::Equals(type_name, factory->string_string())) { @@ -5526,30 +5538,23 @@ Condition LCodeGen::EmitTypeofIs(Label* true_label, // other branch. __ lbu(at, FieldMemOperand(input, Map::kBitFieldOffset)); __ And(at, at, 1 << Map::kIsUndetectable); - cmp1 = at; - cmp2 = Operand(zero_reg); + *cmp1 = at; + *cmp2 = Operand(zero_reg); final_branch_condition = eq; } else if (String::Equals(type_name, factory->symbol_string())) { __ JumpIfSmi(input, false_label); __ GetObjectType(input, input, scratch); - cmp1 = scratch; - cmp2 = Operand(SYMBOL_TYPE); + *cmp1 = scratch; + *cmp2 = Operand(SYMBOL_TYPE); final_branch_condition = eq; } else if (String::Equals(type_name, factory->boolean_string())) { __ LoadRoot(at, Heap::kTrueValueRootIndex); __ Branch(USE_DELAY_SLOT, true_label, eq, at, Operand(input)); __ LoadRoot(at, Heap::kFalseValueRootIndex); - cmp1 = at; - cmp2 = Operand(input); - final_branch_condition = eq; - - } else if (FLAG_harmony_typeof && - String::Equals(type_name, factory->null_string())) { - __ LoadRoot(at, Heap::kNullValueRootIndex); - cmp1 = at; - cmp2 = Operand(input); + *cmp1 = at; + *cmp2 = Operand(input); final_branch_condition = eq; } else if (String::Equals(type_name, factory->undefined_string())) { @@ -5562,8 +5567,8 @@ Condition LCodeGen::EmitTypeofIs(Label* true_label, __ lw(input, FieldMemOperand(input, HeapObject::kMapOffset)); __ lbu(at, FieldMemOperand(input, Map::kBitFieldOffset)); __ And(at, at, 1 << Map::kIsUndetectable); - cmp1 = at; - cmp2 = Operand(zero_reg); + *cmp1 = at; + *cmp2 = Operand(zero_reg); final_branch_condition = ne; } else if (String::Equals(type_name, factory->function_string())) { @@ -5571,16 +5576,14 @@ Condition LCodeGen::EmitTypeofIs(Label* true_label, __ JumpIfSmi(input, false_label); __ GetObjectType(input, scratch, input); __ Branch(true_label, eq, input, Operand(JS_FUNCTION_TYPE)); - cmp1 = input; - cmp2 = Operand(JS_FUNCTION_PROXY_TYPE); + *cmp1 = input; + *cmp2 = Operand(JS_FUNCTION_PROXY_TYPE); 
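// The EmitTypeofIs signature change (Register& -> Register*, mirrored in the header below) follows the style rule that mutated out-parameters be passed by pointer, making the writes visible at the call site. The pattern in miniature (illustrative names):
void ComputeOperands(int* cmp1, int* cmp2) {
  *cmp1 = 1;  // callee writes through the pointers
  *cmp2 = 2;
}
// call site: ComputeOperands(&cmp1, &cmp2);  // the '&' flags the mutation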
final_branch_condition = eq; } else if (String::Equals(type_name, factory->object_string())) { __ JumpIfSmi(input, false_label); - if (!FLAG_harmony_typeof) { - __ LoadRoot(at, Heap::kNullValueRootIndex); - __ Branch(USE_DELAY_SLOT, true_label, eq, at, Operand(input)); - } + __ LoadRoot(at, Heap::kNullValueRootIndex); + __ Branch(USE_DELAY_SLOT, true_label, eq, at, Operand(input)); Register map = input; __ GetObjectType(input, map, scratch); __ Branch(false_label, @@ -5591,13 +5594,13 @@ Condition LCodeGen::EmitTypeofIs(Label* true_label, // Check for undetectable objects => false. __ lbu(at, FieldMemOperand(map, Map::kBitFieldOffset)); __ And(at, at, 1 << Map::kIsUndetectable); - cmp1 = at; - cmp2 = Operand(zero_reg); + *cmp1 = at; + *cmp2 = Operand(zero_reg); final_branch_condition = eq; } else { - cmp1 = at; - cmp2 = Operand(zero_reg); // Set to valid regs, to avoid caller assertion. + *cmp1 = at; + *cmp2 = Operand(zero_reg); // Set to valid regs, to avoid caller assertion. __ Branch(false_label); } @@ -5616,7 +5619,7 @@ void LCodeGen::DoIsConstructCallAndBranch(LIsConstructCallAndBranch* instr) { void LCodeGen::EmitIsConstructCall(Register temp1, Register temp2) { - ASSERT(!temp1.is(temp2)); + DCHECK(!temp1.is(temp2)); // Get the frame pointer for the calling frame. __ lw(temp1, MemOperand(fp, StandardFrameConstants::kCallerFPOffset)); @@ -5640,7 +5643,7 @@ void LCodeGen::EnsureSpaceForLazyDeopt(int space_needed) { int current_pc = masm()->pc_offset(); if (current_pc < last_lazy_deopt_pc_ + space_needed) { int padding_size = last_lazy_deopt_pc_ + space_needed - current_pc; - ASSERT_EQ(0, padding_size % Assembler::kInstrSize); + DCHECK_EQ(0, padding_size % Assembler::kInstrSize); while (padding_size > 0) { __ nop(); padding_size -= Assembler::kInstrSize; @@ -5653,7 +5656,7 @@ void LCodeGen::EnsureSpaceForLazyDeopt(int space_needed) { void LCodeGen::DoLazyBailout(LLazyBailout* instr) { last_lazy_deopt_pc_ = masm()->pc_offset(); - ASSERT(instr->HasEnvironment()); + DCHECK(instr->HasEnvironment()); LEnvironment* env = instr->environment(); RegisterEnvironmentForDeoptimization(env, Safepoint::kLazyDeopt); safepoints_.RecordLazyDeoptimizationIndex(env->deoptimization_index()); @@ -5686,12 +5689,12 @@ void LCodeGen::DoDummyUse(LDummyUse* instr) { void LCodeGen::DoDeferredStackCheck(LStackCheck* instr) { - PushSafepointRegistersScope scope(this, Safepoint::kWithRegisters); + PushSafepointRegistersScope scope(this); LoadContextFromDeferred(instr->context()); - __ CallRuntimeSaveDoubles(Runtime::kHiddenStackGuard); + __ CallRuntimeSaveDoubles(Runtime::kStackGuard); RecordSafepointWithLazyDeopt( instr, RECORD_SAFEPOINT_WITH_REGISTERS_AND_NO_ARGUMENTS); - ASSERT(instr->HasEnvironment()); + DCHECK(instr->HasEnvironment()); LEnvironment* env = instr->environment(); safepoints_.RecordLazyDeoptimizationIndex(env->deoptimization_index()); } @@ -5710,7 +5713,7 @@ void LCodeGen::DoStackCheck(LStackCheck* instr) { LStackCheck* instr_; }; - ASSERT(instr->HasEnvironment()); + DCHECK(instr->HasEnvironment()); LEnvironment* env = instr->environment(); // There is no LLazyBailout instruction for stack-checks. We have to // prepare for lazy deoptimization explicitly here. 
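// EnsureSpaceForLazyDeopt above guarantees room for the patchable lazy-deopt call by padding with nops, each one MIPS instruction (4 bytes). The arithmetic it performs, sketched:
int PaddingInstructions(int last_lazy_deopt_pc, int space_needed, int current_pc) {
  int padding_size = last_lazy_deopt_pc + space_needed - current_pc;
  return padding_size > 0 ? padding_size / 4 : 0;  // number of nops emitted
}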
@@ -5719,14 +5722,14 @@ void LCodeGen::DoStackCheck(LStackCheck* instr) { Label done; __ LoadRoot(at, Heap::kStackLimitRootIndex); __ Branch(&done, hs, sp, Operand(at)); - ASSERT(instr->context()->IsRegister()); - ASSERT(ToRegister(instr->context()).is(cp)); + DCHECK(instr->context()->IsRegister()); + DCHECK(ToRegister(instr->context()).is(cp)); CallCode(isolate()->builtins()->StackCheck(), RelocInfo::CODE_TARGET, instr); __ bind(&done); } else { - ASSERT(instr->hydrogen()->is_backwards_branch()); + DCHECK(instr->hydrogen()->is_backwards_branch()); // Perform stack overflow check if this goto needs it before jumping. DeferredStackCheck* deferred_stack_check = new(zone()) DeferredStackCheck(this, instr); @@ -5751,7 +5754,7 @@ void LCodeGen::DoOsrEntry(LOsrEntry* instr) { // If the environment were already registered, we would have no way of // backpatching it with the spill slot operands. - ASSERT(!environment->HasBeenRegistered()); + DCHECK(!environment->HasBeenRegistered()); RegisterEnvironmentForDeoptimization(environment, Safepoint::kNoLazyDeopt); GenerateOsrPrologue(); @@ -5776,7 +5779,7 @@ void LCodeGen::DoForInPrepareMap(LForInPrepareMap* instr) { DeoptimizeIf(le, instr->environment(), a1, Operand(LAST_JS_PROXY_TYPE)); Label use_cache, call_runtime; - ASSERT(object.is(a0)); + DCHECK(object.is(a0)); __ CheckEnumCache(null_value, &call_runtime); __ lw(result, FieldMemOperand(object, HeapObject::kMapOffset)); @@ -5788,7 +5791,7 @@ void LCodeGen::DoForInPrepareMap(LForInPrepareMap* instr) { CallRuntime(Runtime::kGetPropertyNamesFast, 1, instr); __ lw(a1, FieldMemOperand(v0, HeapObject::kMapOffset)); - ASSERT(result.is(v0)); + DCHECK(result.is(v0)); __ LoadRoot(at, Heap::kMetaMapRootIndex); DeoptimizeIf(ne, instr->environment(), a1, Operand(at)); __ bind(&use_cache); @@ -5828,7 +5831,7 @@ void LCodeGen::DoDeferredLoadMutableDouble(LLoadFieldByIndex* instr, Register result, Register object, Register index) { - PushSafepointRegistersScope scope(this, Safepoint::kWithRegisters); + PushSafepointRegistersScope scope(this); __ Push(object, index); __ mov(cp, zero_reg); __ CallRuntimeSaveDoubles(Runtime::kLoadMutableDouble); @@ -5898,6 +5901,21 @@ void LCodeGen::DoLoadFieldByIndex(LLoadFieldByIndex* instr) { } +void LCodeGen::DoStoreFrameContext(LStoreFrameContext* instr) { + Register context = ToRegister(instr->context()); + __ sw(context, MemOperand(fp, StandardFrameConstants::kContextOffset)); +} + + +void LCodeGen::DoAllocateBlockContext(LAllocateBlockContext* instr) { + Handle<ScopeInfo> scope_info = instr->scope_info(); + __ li(at, scope_info); + __ Push(at, ToRegister(instr->function())); + CallRuntime(Runtime::kPushBlockContext, 2, instr); + RecordSafepoint(Safepoint::kNoLazyDeopt); +} + + #undef __ } } // namespace v8::internal diff --git a/deps/v8/src/mips/lithium-codegen-mips.h b/deps/v8/src/mips/lithium-codegen-mips.h index 7c52d8182e2..5c19e0d3ac3 100644 --- a/deps/v8/src/mips/lithium-codegen-mips.h +++ b/deps/v8/src/mips/lithium-codegen-mips.h @@ -5,13 +5,13 @@ #ifndef V8_MIPS_LITHIUM_CODEGEN_MIPS_H_ #define V8_MIPS_LITHIUM_CODEGEN_MIPS_H_ -#include "deoptimizer.h" -#include "mips/lithium-gap-resolver-mips.h" -#include "mips/lithium-mips.h" -#include "lithium-codegen.h" -#include "safepoint-table.h" -#include "scopes.h" -#include "utils.h" +#include "src/deoptimizer.h" +#include "src/lithium-codegen.h" +#include "src/mips/lithium-gap-resolver-mips.h" +#include "src/mips/lithium-mips.h" +#include "src/safepoint-table.h" +#include "src/scopes.h" +#include "src/utils.h" namespace v8 { 
namespace internal { @@ -132,8 +132,7 @@ class LCodeGen: public LCodeGenBase { int constant_key, int element_size, int shift_size, - int additional_index, - int additional_offset); + int base_offset); // Emit frame translation commands for an environment. void WriteTranslation(LEnvironment* environment, Translation* translation); @@ -270,9 +269,6 @@ class LCodeGen: public LCodeGenBase { void RecordSafepointWithRegisters(LPointerMap* pointers, int arguments, Safepoint::DeoptMode mode); - void RecordSafepointWithRegistersAndDoubles(LPointerMap* pointers, - int arguments, - Safepoint::DeoptMode mode); void RecordAndWritePosition(int position) V8_OVERRIDE; @@ -317,8 +313,8 @@ class LCodeGen: public LCodeGenBase { Label* false_label, Register input, Handle<String> type_name, - Register& cmp1, - Operand& cmp2); + Register* cmp1, + Operand* cmp2); // Emits optimized code for %_IsObject(x). Preserves input register. // Returns the condition on which a final split to @@ -387,56 +383,24 @@ class LCodeGen: public LCodeGenBase { Safepoint::Kind expected_safepoint_kind_; - class PushSafepointRegistersScope V8_FINAL BASE_EMBEDDED { + class PushSafepointRegistersScope V8_FINAL BASE_EMBEDDED { public: - PushSafepointRegistersScope(LCodeGen* codegen, - Safepoint::Kind kind) + explicit PushSafepointRegistersScope(LCodeGen* codegen) : codegen_(codegen) { - ASSERT(codegen_->info()->is_calling()); - ASSERT(codegen_->expected_safepoint_kind_ == Safepoint::kSimple); - codegen_->expected_safepoint_kind_ = kind; - - switch (codegen_->expected_safepoint_kind_) { - case Safepoint::kWithRegisters: { - StoreRegistersStateStub stub1(codegen_->masm_->isolate(), - kDontSaveFPRegs); - codegen_->masm_->push(ra); - codegen_->masm_->CallStub(&stub1); - break; - } - case Safepoint::kWithRegistersAndDoubles: { - StoreRegistersStateStub stub2(codegen_->masm_->isolate(), - kSaveFPRegs); - codegen_->masm_->push(ra); - codegen_->masm_->CallStub(&stub2); - break; - } - default: - UNREACHABLE(); - } + DCHECK(codegen_->info()->is_calling()); + DCHECK(codegen_->expected_safepoint_kind_ == Safepoint::kSimple); + codegen_->expected_safepoint_kind_ = Safepoint::kWithRegisters; + + StoreRegistersStateStub stub(codegen_->isolate()); + codegen_->masm_->push(ra); + codegen_->masm_->CallStub(&stub); } ~PushSafepointRegistersScope() { - Safepoint::Kind kind = codegen_->expected_safepoint_kind_; - ASSERT((kind & Safepoint::kWithRegisters) != 0); - switch (kind) { - case Safepoint::kWithRegisters: { - RestoreRegistersStateStub stub1(codegen_->masm_->isolate(), - kDontSaveFPRegs); - codegen_->masm_->push(ra); - codegen_->masm_->CallStub(&stub1); - break; - } - case Safepoint::kWithRegistersAndDoubles: { - RestoreRegistersStateStub stub2(codegen_->masm_->isolate(), - kSaveFPRegs); - codegen_->masm_->push(ra); - codegen_->masm_->CallStub(&stub2); - break; - } - default: - UNREACHABLE(); - } + DCHECK(codegen_->expected_safepoint_kind_ == Safepoint::kWithRegisters); + RestoreRegistersStateStub stub(codegen_->isolate()); + codegen_->masm_->push(ra); + codegen_->masm_->CallStub(&stub); codegen_->expected_safepoint_kind_ = Safepoint::kSimple; } diff --git a/deps/v8/src/mips/lithium-gap-resolver-mips.cc b/deps/v8/src/mips/lithium-gap-resolver-mips.cc index 69af8b7ee3e..1bec0c8cda9 100644 --- a/deps/v8/src/mips/lithium-gap-resolver-mips.cc +++ b/deps/v8/src/mips/lithium-gap-resolver-mips.cc @@ -2,10 +2,10 @@ // Use of this source code is governed by a BSD-style license that can be // found in the LICENSE file. 
-#include "v8.h" +#include "src/v8.h" -#include "mips/lithium-gap-resolver-mips.h" -#include "mips/lithium-codegen-mips.h" +#include "src/mips/lithium-codegen-mips.h" +#include "src/mips/lithium-gap-resolver-mips.h" namespace v8 { namespace internal { @@ -19,7 +19,7 @@ LGapResolver::LGapResolver(LCodeGen* owner) void LGapResolver::Resolve(LParallelMove* parallel_move) { - ASSERT(moves_.is_empty()); + DCHECK(moves_.is_empty()); // Build up a worklist of moves. BuildInitialMoveList(parallel_move); @@ -40,7 +40,7 @@ void LGapResolver::Resolve(LParallelMove* parallel_move) { // Perform the moves with constant sources. for (int i = 0; i < moves_.length(); ++i) { if (!moves_[i].IsEliminated()) { - ASSERT(moves_[i].source()->IsConstantOperand()); + DCHECK(moves_[i].source()->IsConstantOperand()); EmitMove(i); } } @@ -78,13 +78,13 @@ void LGapResolver::PerformMove(int index) { // An additional complication is that moves to MemOperands with large // offsets (more than 1K or 4K) require us to spill this spilled value to // the stack, to free up the register. - ASSERT(!moves_[index].IsPending()); - ASSERT(!moves_[index].IsRedundant()); + DCHECK(!moves_[index].IsPending()); + DCHECK(!moves_[index].IsRedundant()); // Clear this move's destination to indicate a pending move. The actual // destination is saved in a stack allocated local. Multiple moves can // be pending because this function is recursive. - ASSERT(moves_[index].source() != NULL); // Or else it will look eliminated. + DCHECK(moves_[index].source() != NULL); // Or else it will look eliminated. LOperand* destination = moves_[index].destination(); moves_[index].set_destination(NULL); @@ -111,7 +111,7 @@ void LGapResolver::PerformMove(int index) { // a scratch register to break it. LMoveOperands other_move = moves_[root_index_]; if (other_move.Blocks(destination)) { - ASSERT(other_move.IsPending()); + DCHECK(other_move.IsPending()); BreakCycle(index); return; } @@ -122,12 +122,12 @@ void LGapResolver::PerformMove(int index) { void LGapResolver::Verify() { -#ifdef ENABLE_SLOW_ASSERTS +#ifdef ENABLE_SLOW_DCHECKS // No operand should be the destination for more than one move. for (int i = 0; i < moves_.length(); ++i) { LOperand* destination = moves_[i].destination(); for (int j = i + 1; j < moves_.length(); ++j) { - SLOW_ASSERT(!destination->Equals(moves_[j].destination())); + SLOW_DCHECK(!destination->Equals(moves_[j].destination())); } } #endif @@ -139,8 +139,8 @@ void LGapResolver::BreakCycle(int index) { // We save in a register the value that should end up in the source of // moves_[root_index]. After performing all moves in the tree rooted // in that move, we save the value to that source. - ASSERT(moves_[index].destination()->Equals(moves_[root_index_].source())); - ASSERT(!in_cycle_); + DCHECK(moves_[index].destination()->Equals(moves_[root_index_].source())); + DCHECK(!in_cycle_); in_cycle_ = true; LOperand* source = moves_[index].source(); saved_destination_ = moves_[index].destination(); @@ -161,8 +161,8 @@ void LGapResolver::BreakCycle(int index) { void LGapResolver::RestoreValue() { - ASSERT(in_cycle_); - ASSERT(saved_destination_ != NULL); + DCHECK(in_cycle_); + DCHECK(saved_destination_ != NULL); // Spilled value is in kLithiumScratchReg or kLithiumScratchDouble. 
if (saved_destination_->IsRegister()) { @@ -196,7 +196,7 @@ void LGapResolver::EmitMove(int index) { if (destination->IsRegister()) { __ mov(cgen_->ToRegister(destination), source_register); } else { - ASSERT(destination->IsStackSlot()); + DCHECK(destination->IsStackSlot()); __ sw(source_register, cgen_->ToMemOperand(destination)); } } else if (source->IsStackSlot()) { @@ -204,7 +204,7 @@ void LGapResolver::EmitMove(int index) { if (destination->IsRegister()) { __ lw(cgen_->ToRegister(destination), source_operand); } else { - ASSERT(destination->IsStackSlot()); + DCHECK(destination->IsStackSlot()); MemOperand destination_operand = cgen_->ToMemOperand(destination); if (in_cycle_) { if (!destination_operand.OffsetIsInt16Encodable()) { @@ -240,8 +240,8 @@ void LGapResolver::EmitMove(int index) { double v = cgen_->ToDouble(constant_source); __ Move(result, v); } else { - ASSERT(destination->IsStackSlot()); - ASSERT(!in_cycle_); // Constant moves happen after all cycles are gone. + DCHECK(destination->IsStackSlot()); + DCHECK(!in_cycle_); // Constant moves happen after all cycles are gone. Representation r = cgen_->IsSmi(constant_source) ? Representation::Smi() : Representation::Integer32(); if (cgen_->IsInteger32(constant_source)) { @@ -258,7 +258,7 @@ void LGapResolver::EmitMove(int index) { if (destination->IsDoubleRegister()) { __ mov_d(cgen_->ToDoubleRegister(destination), source_register); } else { - ASSERT(destination->IsDoubleStackSlot()); + DCHECK(destination->IsDoubleStackSlot()); MemOperand destination_operand = cgen_->ToMemOperand(destination); __ sdc1(source_register, destination_operand); } @@ -268,7 +268,7 @@ void LGapResolver::EmitMove(int index) { if (destination->IsDoubleRegister()) { __ ldc1(cgen_->ToDoubleRegister(destination), source_operand); } else { - ASSERT(destination->IsDoubleStackSlot()); + DCHECK(destination->IsDoubleStackSlot()); MemOperand destination_operand = cgen_->ToMemOperand(destination); if (in_cycle_) { // kLithiumScratchDouble was used to break the cycle, diff --git a/deps/v8/src/mips/lithium-gap-resolver-mips.h b/deps/v8/src/mips/lithium-gap-resolver-mips.h index f3f6b7d61e9..0072e526cb1 100644 --- a/deps/v8/src/mips/lithium-gap-resolver-mips.h +++ b/deps/v8/src/mips/lithium-gap-resolver-mips.h @@ -5,9 +5,9 @@ #ifndef V8_MIPS_LITHIUM_GAP_RESOLVER_MIPS_H_ #define V8_MIPS_LITHIUM_GAP_RESOLVER_MIPS_H_ -#include "v8.h" +#include "src/v8.h" -#include "lithium.h" +#include "src/lithium.h" namespace v8 { namespace internal { diff --git a/deps/v8/src/mips/lithium-mips.cc b/deps/v8/src/mips/lithium-mips.cc index 56e15119434..5ff73db8135 100644 --- a/deps/v8/src/mips/lithium-mips.cc +++ b/deps/v8/src/mips/lithium-mips.cc @@ -2,12 +2,13 @@ // Use of this source code is governed by a BSD-style license that can be // found in the LICENSE file. -#include "v8.h" +#include "src/v8.h" -#include "lithium-allocator-inl.h" -#include "mips/lithium-mips.h" -#include "mips/lithium-codegen-mips.h" -#include "hydrogen-osr.h" +#if V8_TARGET_ARCH_MIPS + +#include "src/hydrogen-osr.h" +#include "src/lithium-inl.h" +#include "src/mips/lithium-codegen-mips.h" namespace v8 { namespace internal { @@ -25,17 +26,17 @@ void LInstruction::VerifyCall() { // outputs because all registers are blocked by the calling convention. // Inputs operands must use a fixed register or use-at-start policy or // a non-register policy. 
@@ -25,17 +26,17 @@ void LInstruction::VerifyCall() {
   // outputs because all registers are blocked by the calling convention.
   // Inputs operands must use a fixed register or use-at-start policy or
   // a non-register policy.
-  ASSERT(Output() == NULL ||
+  DCHECK(Output() == NULL ||
          LUnallocated::cast(Output())->HasFixedPolicy() ||
          !LUnallocated::cast(Output())->HasRegisterPolicy());
   for (UseIterator it(this); !it.Done(); it.Advance()) {
     LUnallocated* operand = LUnallocated::cast(it.Current());
-    ASSERT(operand->HasFixedPolicy() ||
+    DCHECK(operand->HasFixedPolicy() ||
            operand->IsUsedAtStart());
   }
   for (TempIterator it(this); !it.Done(); it.Advance()) {
     LUnallocated* operand = LUnallocated::cast(it.Current());
-    ASSERT(operand->HasFixedPolicy() ||!operand->HasRegisterPolicy());
+    DCHECK(operand->HasFixedPolicy() ||!operand->HasRegisterPolicy());
   }
 }
 #endif

@@ -322,8 +323,9 @@ void LAccessArgumentsAt::PrintDataTo(StringStream* stream) {

 void LStoreNamedField::PrintDataTo(StringStream* stream) {
   object()->PrintTo(stream);
-  hydrogen()->access().PrintTo(stream);
-  stream->Add(" <- ");
+  OStringStream os;
+  os << hydrogen()->access() << " <- ";
+  stream->Add(os.c_str());
   value()->PrintTo(stream);
 }

@@ -342,7 +344,7 @@ void LLoadKeyed::PrintDataTo(StringStream* stream) {
   stream->Add("[");
   key()->PrintTo(stream);
   if (hydrogen()->IsDehoisted()) {
-    stream->Add(" + %d]", additional_index());
+    stream->Add(" + %d]", base_offset());
   } else {
     stream->Add("]");
   }

@@ -354,13 +356,13 @@ void LStoreKeyed::PrintDataTo(StringStream* stream) {
   stream->Add("[");
   key()->PrintTo(stream);
   if (hydrogen()->IsDehoisted()) {
-    stream->Add(" + %d] <-", additional_index());
+    stream->Add(" + %d] <-", base_offset());
   } else {
     stream->Add("] <- ");
   }

   if (value() == NULL) {
-    ASSERT(hydrogen()->IsConstantHoleStore() &&
+    DCHECK(hydrogen()->IsConstantHoleStore() &&
            hydrogen()->value()->representation().IsDouble());
     stream->Add("<the hole(nan)>");
   } else {

@@ -396,14 +398,14 @@ LOperand* LPlatformChunk::GetNextSpillSlot(RegisterKind kind) {
   if (kind == DOUBLE_REGISTERS) {
     return LDoubleStackSlot::Create(index, zone());
   } else {
-    ASSERT(kind == GENERAL_REGISTERS);
+    DCHECK(kind == GENERAL_REGISTERS);
     return LStackSlot::Create(index, zone());
   }
 }

 LPlatformChunk* LChunkBuilder::Build() {
-  ASSERT(is_unused());
+  DCHECK(is_unused());
   chunk_ = new(zone()) LPlatformChunk(info(), graph());
   LPhase phase("L_Building chunk", chunk_);
   status_ = BUILDING;

@@ -614,7 +616,7 @@ LInstruction* LChunkBuilder::MarkAsCall(LInstruction* instr,

 LInstruction* LChunkBuilder::AssignPointerMap(LInstruction* instr) {
-  ASSERT(!instr->HasPointerMap());
+  DCHECK(!instr->HasPointerMap());
   instr->set_pointer_map(new(zone()) LPointerMap(zone()));
   return instr;
 }

@@ -633,16 +635,29 @@ LUnallocated* LChunkBuilder::TempRegister() {
 }

+LUnallocated* LChunkBuilder::TempDoubleRegister() {
+  LUnallocated* operand =
+      new(zone()) LUnallocated(LUnallocated::MUST_HAVE_DOUBLE_REGISTER);
+  int vreg = allocator_->GetVirtualRegister();
+  if (!allocator_->AllocationOk()) {
+    Abort(kOutOfVirtualRegistersWhileTryingToAllocateTempRegister);
+    vreg = 0;
+  }
+  operand->set_virtual_register(vreg);
+  return operand;
+}
+
+
 LOperand* LChunkBuilder::FixedTemp(Register reg) {
   LUnallocated* operand = ToUnallocated(reg);
-  ASSERT(operand->HasFixedPolicy());
+  DCHECK(operand->HasFixedPolicy());
   return operand;
 }

 LOperand* LChunkBuilder::FixedTemp(DoubleRegister reg) {
   LUnallocated* operand = ToUnallocated(reg);
-  ASSERT(operand->HasFixedPolicy());
+  DCHECK(operand->HasFixedPolicy());
   return operand;
 }

@@ -671,8 +686,8 @@ LInstruction* LChunkBuilder::DoDeoptimize(HDeoptimize* instr) {
 LInstruction* LChunkBuilder::DoShift(Token::Value op,
                                      HBitwiseBinaryOperation* instr) {
   if (instr->representation().IsSmiOrInteger32()) {
-    ASSERT(instr->left()->representation().Equals(instr->representation()));
-    ASSERT(instr->right()->representation().Equals(instr->representation()));
+    DCHECK(instr->left()->representation().Equals(instr->representation()));
+    DCHECK(instr->right()->representation().Equals(instr->representation()));
     LOperand* left = UseRegisterAtStart(instr->left());

     HValue* right_value = instr->right();

@@ -713,9 +728,9 @@ LInstruction* LChunkBuilder::DoShift(Token::Value op,

 LInstruction* LChunkBuilder::DoArithmeticD(Token::Value op,
                                            HArithmeticBinaryOperation* instr) {
-  ASSERT(instr->representation().IsDouble());
-  ASSERT(instr->left()->representation().IsDouble());
-  ASSERT(instr->right()->representation().IsDouble());
+  DCHECK(instr->representation().IsDouble());
+  DCHECK(instr->left()->representation().IsDouble());
+  DCHECK(instr->right()->representation().IsDouble());
   if (op == Token::MOD) {
     LOperand* left = UseFixedDouble(instr->left(), f2);
     LOperand* right = UseFixedDouble(instr->right(), f4);

@@ -737,8 +752,8 @@ LInstruction* LChunkBuilder::DoArithmeticT(Token::Value op,
                                            HBinaryOperation* instr) {
   HValue* left = instr->left();
   HValue* right = instr->right();
-  ASSERT(left->representation().IsTagged());
-  ASSERT(right->representation().IsTagged());
+  DCHECK(left->representation().IsTagged());
+  DCHECK(right->representation().IsTagged());
   LOperand* context = UseFixed(instr->context(), cp);
   LOperand* left_operand = UseFixed(left, a1);
   LOperand* right_operand = UseFixed(right, a0);

@@ -749,7 +764,7 @@ LInstruction* LChunkBuilder::DoArithmeticT(Token::Value op,

 void LChunkBuilder::DoBasicBlock(HBasicBlock* block, HBasicBlock* next_block) {
-  ASSERT(is_building());
+  DCHECK(is_building());
   current_block_ = block;
   next_block_ = next_block;
   if (block->IsStartBlock()) {

@@ -758,13 +773,13 @@ void LChunkBuilder::DoBasicBlock(HBasicBlock* block, HBasicBlock* next_block) {
   } else if (block->predecessors()->length() == 1) {
     // We have a single predecessor => copy environment and outgoing
     // argument count from the predecessor.
-    ASSERT(block->phis()->length() == 0);
+    DCHECK(block->phis()->length() == 0);
     HBasicBlock* pred = block->predecessors()->at(0);
     HEnvironment* last_environment = pred->last_environment();
-    ASSERT(last_environment != NULL);
+    DCHECK(last_environment != NULL);
     // Only copy the environment, if it is later used again.
     if (pred->end()->SecondSuccessor() == NULL) {
-      ASSERT(pred->end()->FirstSuccessor() == block);
+      DCHECK(pred->end()->FirstSuccessor() == block);
     } else {
       if (pred->end()->FirstSuccessor()->block_id() > block->block_id() ||
           pred->end()->SecondSuccessor()->block_id() > block->block_id()) {

@@ -772,7 +787,7 @@ void LChunkBuilder::DoBasicBlock(HBasicBlock* block, HBasicBlock* next_block) {
     }
     block->UpdateEnvironment(last_environment);
-    ASSERT(pred->argument_count() >= 0);
+    DCHECK(pred->argument_count() >= 0);
     argument_count_ = pred->argument_count();
   } else {
     // We are at a state join => process phis.
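The ASSERT-to-DCHECK rename that dominates these hunks is mechanical, but the semantics matter: both forms compile away outside debug builds. A minimal sketch of a debug-only check macro, assuming a DEBUG build flag; V8's real definitions live elsewhere in the tree and differ in detail:

    #include <cstdio>
    #include <cstdlib>

    #ifdef DEBUG
    #define DCHECK(condition)                                             \
      do {                                                                \
        if (!(condition)) {                                               \
          std::fprintf(stderr, "Debug check failed: %s\n", #condition);   \
          std::abort();                                                   \
        }                                                                 \
      } while (false)
    #else
    #define DCHECK(condition) ((void)0)  // compiled out in release builds
    #endif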
@@ -824,7 +839,7 @@ void LChunkBuilder::VisitInstruction(HInstruction* current) {
     if (current->OperandCount() == 0) {
       instr = DefineAsRegister(new(zone()) LDummy());
     } else {
-      ASSERT(!current->OperandAt(0)->IsControlInstruction());
+      DCHECK(!current->OperandAt(0)->IsControlInstruction());
       instr = DefineAsRegister(new(zone())
           LDummyUse(UseAny(current->OperandAt(0))));
     }
@@ -836,76 +851,90 @@ void LChunkBuilder::VisitInstruction(HInstruction* current) {
       chunk_->AddInstruction(dummy, current_block_);
     }
   } else {
-    instr = current->CompileToLithium(this);
+    HBasicBlock* successor;
+    if (current->IsControlInstruction() &&
+        HControlInstruction::cast(current)->KnownSuccessorBlock(&successor) &&
+        successor != NULL) {
+      instr = new(zone()) LGoto(successor);
+    } else {
+      instr = current->CompileToLithium(this);
+    }
   }

   argument_count_ += current->argument_delta();
-  ASSERT(argument_count_ >= 0);
+  DCHECK(argument_count_ >= 0);

   if (instr != NULL) {
-    // Associate the hydrogen instruction first, since we may need it for
-    // the ClobbersRegisters() or ClobbersDoubleRegisters() calls below.
-    instr->set_hydrogen_value(current);
+    AddInstruction(instr, current);
+  }
+
+  current_instruction_ = old_current;
+}
+
+
+void LChunkBuilder::AddInstruction(LInstruction* instr,
+                                   HInstruction* hydrogen_val) {
+// Associate the hydrogen instruction first, since we may need it for
+  // the ClobbersRegisters() or ClobbersDoubleRegisters() calls below.
+  instr->set_hydrogen_value(hydrogen_val);

 #if DEBUG
-    // Make sure that the lithium instruction has either no fixed register
-    // constraints in temps or the result OR no uses that are only used at
-    // start. If this invariant doesn't hold, the register allocator can decide
-    // to insert a split of a range immediately before the instruction due to an
-    // already allocated register needing to be used for the instruction's fixed
-    // register constraint. In this case, The register allocator won't see an
-    // interference between the split child and the use-at-start (it would if
-    // the it was just a plain use), so it is free to move the split child into
-    // the same register that is used for the use-at-start.
-    // See https://code.google.com/p/chromium/issues/detail?id=201590
-    if (!(instr->ClobbersRegisters() &&
-          instr->ClobbersDoubleRegisters(isolate()))) {
-      int fixed = 0;
-      int used_at_start = 0;
-      for (UseIterator it(instr); !it.Done(); it.Advance()) {
-        LUnallocated* operand = LUnallocated::cast(it.Current());
-        if (operand->IsUsedAtStart()) ++used_at_start;
-      }
-      if (instr->Output() != NULL) {
-        if (LUnallocated::cast(instr->Output())->HasFixedPolicy()) ++fixed;
-      }
-      for (TempIterator it(instr); !it.Done(); it.Advance()) {
-        LUnallocated* operand = LUnallocated::cast(it.Current());
-        if (operand->HasFixedPolicy()) ++fixed;
-      }
-      ASSERT(fixed == 0 || used_at_start == 0);
+  // Make sure that the lithium instruction has either no fixed register
+  // constraints in temps or the result OR no uses that are only used at
+  // start. If this invariant doesn't hold, the register allocator can decide
+  // to insert a split of a range immediately before the instruction due to an
+  // already allocated register needing to be used for the instruction's fixed
+  // register constraint. In this case, The register allocator won't see an
+  // interference between the split child and the use-at-start (it would if
+  // the it was just a plain use), so it is free to move the split child into
+  // the same register that is used for the use-at-start.
+  // See https://code.google.com/p/chromium/issues/detail?id=201590
+  if (!(instr->ClobbersRegisters() &&
+        instr->ClobbersDoubleRegisters(isolate()))) {
+    int fixed = 0;
+    int used_at_start = 0;
+    for (UseIterator it(instr); !it.Done(); it.Advance()) {
+      LUnallocated* operand = LUnallocated::cast(it.Current());
+      if (operand->IsUsedAtStart()) ++used_at_start;
     }
+    if (instr->Output() != NULL) {
+      if (LUnallocated::cast(instr->Output())->HasFixedPolicy()) ++fixed;
+    }
+    for (TempIterator it(instr); !it.Done(); it.Advance()) {
+      LUnallocated* operand = LUnallocated::cast(it.Current());
+      if (operand->HasFixedPolicy()) ++fixed;
+    }
+    DCHECK(fixed == 0 || used_at_start == 0);
+  }
 #endif

-    if (FLAG_stress_pointer_maps && !instr->HasPointerMap()) {
-      instr = AssignPointerMap(instr);
-    }
-    if (FLAG_stress_environments && !instr->HasEnvironment()) {
-      instr = AssignEnvironment(instr);
+  if (FLAG_stress_pointer_maps && !instr->HasPointerMap()) {
+    instr = AssignPointerMap(instr);
+  }
+  if (FLAG_stress_environments && !instr->HasEnvironment()) {
+    instr = AssignEnvironment(instr);
+  }
+  chunk_->AddInstruction(instr, current_block_);
+
+  if (instr->IsCall()) {
+    HValue* hydrogen_value_for_lazy_bailout = hydrogen_val;
+    LInstruction* instruction_needing_environment = NULL;
+    if (hydrogen_val->HasObservableSideEffects()) {
+      HSimulate* sim = HSimulate::cast(hydrogen_val->next());
+      instruction_needing_environment = instr;
+      sim->ReplayEnvironment(current_block_->last_environment());
+      hydrogen_value_for_lazy_bailout = sim;
     }
-    chunk_->AddInstruction(instr, current_block_);
-
-    if (instr->IsCall()) {
-      HValue* hydrogen_value_for_lazy_bailout = current;
-      LInstruction* instruction_needing_environment = NULL;
-      if (current->HasObservableSideEffects()) {
-        HSimulate* sim = HSimulate::cast(current->next());
-        instruction_needing_environment = instr;
-        sim->ReplayEnvironment(current_block_->last_environment());
-        hydrogen_value_for_lazy_bailout = sim;
-      }
-      LInstruction* bailout = AssignEnvironment(new(zone()) LLazyBailout());
-      bailout->set_hydrogen_value(hydrogen_value_for_lazy_bailout);
-      chunk_->AddInstruction(bailout, current_block_);
-      if (instruction_needing_environment != NULL) {
-        // Store the lazy deopt environment with the instruction if needed.
-        // Right now it is only used for LInstanceOfKnownGlobal.
+    LInstruction* bailout = AssignEnvironment(new(zone()) LLazyBailout());
+    bailout->set_hydrogen_value(hydrogen_value_for_lazy_bailout);
+    chunk_->AddInstruction(bailout, current_block_);
+    if (instruction_needing_environment != NULL) {
+      // Store the lazy deopt environment with the instruction if needed.
+      // Right now it is only used for LInstanceOfKnownGlobal.
+      instruction_needing_environment->
+          SetDeferredLazyDeoptimizationEnvironment(bailout->environment());
     }
   }
-
-  current_instruction_ = old_current;
 }

@@ -915,9 +944,6 @@ LInstruction* LChunkBuilder::DoGoto(HGoto* instr) {

 LInstruction* LChunkBuilder::DoBranch(HBranch* instr) {
-  LInstruction* goto_instr = CheckElideControlInstruction(instr);
-  if (goto_instr != NULL) return goto_instr;
-
   HValue* value = instr->value();
   Representation r = value->representation();
   HType type = value->type();

@@ -937,10 +963,7 @@ LInstruction* LChunkBuilder::DoBranch(HBranch* instr) {

 LInstruction* LChunkBuilder::DoCompareMap(HCompareMap* instr) {
-  LInstruction* goto_instr = CheckElideControlInstruction(instr);
-  if (goto_instr != NULL) return goto_instr;
-
-  ASSERT(instr->value()->representation().IsTagged());
+  DCHECK(instr->value()->representation().IsTagged());
   LOperand* value = UseRegisterAtStart(instr->value());
   LOperand* temp = TempRegister();
   return new(zone()) LCmpMapAndBranch(value, temp);

@@ -1001,9 +1024,13 @@ LInstruction* LChunkBuilder::DoApplyArguments(HApplyArguments* instr) {
 }

-LInstruction* LChunkBuilder::DoPushArgument(HPushArgument* instr) {
-  LOperand* argument = Use(instr->argument());
-  return new(zone()) LPushArgument(argument);
+LInstruction* LChunkBuilder::DoPushArguments(HPushArguments* instr) {
+  int argc = instr->OperandCount();
+  for (int i = 0; i < argc; ++i) {
+    LOperand* argument = Use(instr->argument(i));
+    AddInstruction(new(zone()) LPushArgument(argument), instr);
+  }
+  return NULL;
 }

@@ -1060,7 +1087,7 @@ LInstruction* LChunkBuilder::DoCallJSFunction(

 LInstruction* LChunkBuilder::DoCallWithDescriptor(
     HCallWithDescriptor* instr) {
-  const CallInterfaceDescriptor* descriptor = instr->descriptor();
+  const InterfaceDescriptor* descriptor = instr->descriptor();

   LOperand* target = UseRegisterOrConstantAtStart(instr->target());
   ZoneList<LOperand*> ops(instr->OperandCount(), zone());

@@ -1087,14 +1114,24 @@ LInstruction* LChunkBuilder::DoInvokeFunction(HInvokeFunction* instr) {

 LInstruction* LChunkBuilder::DoUnaryMathOperation(HUnaryMathOperation* instr) {
   switch (instr->op()) {
-    case kMathFloor: return DoMathFloor(instr);
-    case kMathRound: return DoMathRound(instr);
-    case kMathAbs: return DoMathAbs(instr);
-    case kMathLog: return DoMathLog(instr);
-    case kMathExp: return DoMathExp(instr);
-    case kMathSqrt: return DoMathSqrt(instr);
-    case kMathPowHalf: return DoMathPowHalf(instr);
-    case kMathClz32: return DoMathClz32(instr);
+    case kMathFloor:
+      return DoMathFloor(instr);
+    case kMathRound:
+      return DoMathRound(instr);
+    case kMathFround:
+      return DoMathFround(instr);
+    case kMathAbs:
+      return DoMathAbs(instr);
+    case kMathLog:
+      return DoMathLog(instr);
+    case kMathExp:
+      return DoMathExp(instr);
+    case kMathSqrt:
+      return DoMathSqrt(instr);
+    case kMathPowHalf:
+      return DoMathPowHalf(instr);
+    case kMathClz32:
+      return DoMathClz32(instr);
     default:
       UNREACHABLE();
       return NULL;

@@ -1103,8 +1140,8 @@ LInstruction* LChunkBuilder::DoUnaryMathOperation(HUnaryMathOperation* instr) {

 LInstruction* LChunkBuilder::DoMathLog(HUnaryMathOperation* instr) {
-  ASSERT(instr->representation().IsDouble());
-  ASSERT(instr->value()->representation().IsDouble());
+  DCHECK(instr->representation().IsDouble());
+  DCHECK(instr->value()->representation().IsDouble());
   LOperand* input = UseFixedDouble(instr->value(), f4);
   return MarkAsCall(DefineFixedDouble(new(zone()) LMathLog(input), f4), instr);
 }
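The VisitInstruction hunk above replaces per-builder CheckElideControlInstruction calls with one central check: when a control instruction's successor is already known, an unconditional LGoto is emitted instead of lowering the branch. A self-contained sketch of that idea, with all types invented for illustration:

    struct Block;
    struct Instr { virtual ~Instr() {} };
    struct GotoInstr : Instr {
      explicit GotoInstr(Block* b) : target(b) {}
      Block* target;
    };

    struct ControlInstr {
      // Mirrors KnownSuccessorBlock(): reports a statically known successor.
      bool KnownSuccessor(Block** succ) const {
        if (!condition_is_constant) return false;
        *succ = condition_value ? true_block : false_block;
        return true;
      }
      bool condition_is_constant;
      bool condition_value;
      Block* true_block;
      Block* false_block;
    };

    Instr* Lower(const ControlInstr& c) {
      Block* succ = 0;
      if (c.KnownSuccessor(&succ) && succ != 0) return new GotoInstr(succ);
      return 0;  // caller falls back to the normal lowering path
    }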
@@ -1118,12 +1155,12 @@ LInstruction* LChunkBuilder::DoMathClz32(HUnaryMathOperation* instr) {

 LInstruction* LChunkBuilder::DoMathExp(HUnaryMathOperation* instr) {
-  ASSERT(instr->representation().IsDouble());
-  ASSERT(instr->value()->representation().IsDouble());
+  DCHECK(instr->representation().IsDouble());
+  DCHECK(instr->value()->representation().IsDouble());
   LOperand* input = UseRegister(instr->value());
   LOperand* temp1 = TempRegister();
   LOperand* temp2 = TempRegister();
-  LOperand* double_temp = FixedTemp(f6);  // Chosen by fair dice roll.
+  LOperand* double_temp = TempDoubleRegister();
   LMathExp* result = new(zone()) LMathExp(input, double_temp, temp1, temp2);
   return DefineAsRegister(result);
 }

@@ -1132,12 +1169,19 @@ LInstruction* LChunkBuilder::DoMathExp(HUnaryMathOperation* instr) {

 LInstruction* LChunkBuilder::DoMathPowHalf(HUnaryMathOperation* instr) {
   // Input cannot be the same as the result, see LCodeGen::DoMathPowHalf.
   LOperand* input = UseFixedDouble(instr->value(), f8);
-  LOperand* temp = FixedTemp(f6);
+  LOperand* temp = TempDoubleRegister();
   LMathPowHalf* result = new(zone()) LMathPowHalf(input, temp);
   return DefineFixedDouble(result, f4);
 }

+LInstruction* LChunkBuilder::DoMathFround(HUnaryMathOperation* instr) {
+  LOperand* input = UseRegister(instr->value());
+  LMathFround* result = new (zone()) LMathFround(input);
+  return DefineAsRegister(result);
+}
+
+
 LInstruction* LChunkBuilder::DoMathAbs(HUnaryMathOperation* instr) {
   Representation r = instr->value()->representation();
   LOperand* context = (r.IsDouble() || r.IsSmiOrInteger32())

@@ -1169,7 +1213,7 @@ LInstruction* LChunkBuilder::DoMathSqrt(HUnaryMathOperation* instr) {

 LInstruction* LChunkBuilder::DoMathRound(HUnaryMathOperation* instr) {
   LOperand* input = UseRegister(instr->value());
-  LOperand* temp = FixedTemp(f6);
+  LOperand* temp = TempDoubleRegister();
   LMathRound* result = new(zone()) LMathRound(input, temp);
   return AssignEnvironment(DefineAsRegister(result));
 }

@@ -1227,9 +1271,9 @@ LInstruction* LChunkBuilder::DoShl(HShl* instr) {

 LInstruction* LChunkBuilder::DoBitwise(HBitwise* instr) {
   if (instr->representation().IsSmiOrInteger32()) {
-    ASSERT(instr->left()->representation().Equals(instr->representation()));
-    ASSERT(instr->right()->representation().Equals(instr->representation()));
-    ASSERT(instr->CheckFlag(HValue::kTruncatingToInt32));
+    DCHECK(instr->left()->representation().Equals(instr->representation()));
+    DCHECK(instr->right()->representation().Equals(instr->representation()));
+    DCHECK(instr->CheckFlag(HValue::kTruncatingToInt32));

     LOperand* left = UseRegisterAtStart(instr->BetterLeftOperand());
     LOperand* right = UseOrConstantAtStart(instr->BetterRightOperand());

@@ -1241,9 +1285,9 @@ LInstruction* LChunkBuilder::DoBitwise(HBitwise* instr) {

 LInstruction* LChunkBuilder::DoDivByPowerOf2I(HDiv* instr) {
-  ASSERT(instr->representation().IsSmiOrInteger32());
-  ASSERT(instr->left()->representation().Equals(instr->representation()));
-  ASSERT(instr->right()->representation().Equals(instr->representation()));
+  DCHECK(instr->representation().IsSmiOrInteger32());
+  DCHECK(instr->left()->representation().Equals(instr->representation()));
+  DCHECK(instr->right()->representation().Equals(instr->representation()));
   LOperand* dividend = UseRegister(instr->left());
   int32_t divisor = instr->right()->GetInteger32Constant();
   LInstruction* result = DefineAsRegister(new(zone()) LDivByPowerOf2I(

@@ -1259,9 +1303,9 @@ LInstruction* LChunkBuilder::DoDivByPowerOf2I(HDiv* instr) {

 LInstruction* LChunkBuilder::DoDivByConstI(HDiv* instr) {
-  ASSERT(instr->representation().IsInteger32());
-  ASSERT(instr->left()->representation().Equals(instr->representation()));
-  ASSERT(instr->right()->representation().Equals(instr->representation()));
+  DCHECK(instr->representation().IsInteger32());
+  DCHECK(instr->left()->representation().Equals(instr->representation()));
+  DCHECK(instr->right()->representation().Equals(instr->representation()));
   LOperand* dividend = UseRegister(instr->left());
   int32_t divisor = instr->right()->GetInteger32Constant();
   LInstruction* result = DefineAsRegister(new(zone()) LDivByConstI(

@@ -1276,9 +1320,9 @@ LInstruction* LChunkBuilder::DoDivByConstI(HDiv* instr) {

 LInstruction* LChunkBuilder::DoDivI(HDiv* instr) {
-  ASSERT(instr->representation().IsSmiOrInteger32());
-  ASSERT(instr->left()->representation().Equals(instr->representation()));
-  ASSERT(instr->right()->representation().Equals(instr->representation()));
+  DCHECK(instr->representation().IsSmiOrInteger32());
+  DCHECK(instr->left()->representation().Equals(instr->representation()));
+  DCHECK(instr->right()->representation().Equals(instr->representation()));
   LOperand* dividend = UseRegister(instr->left());
   LOperand* divisor = UseRegister(instr->right());
   LInstruction* result =

@@ -1326,9 +1370,9 @@ LInstruction* LChunkBuilder::DoFlooringDivByPowerOf2I(HMathFloorOfDiv* instr) {

 LInstruction* LChunkBuilder::DoFlooringDivByConstI(HMathFloorOfDiv* instr) {
-  ASSERT(instr->representation().IsInteger32());
-  ASSERT(instr->left()->representation().Equals(instr->representation()));
-  ASSERT(instr->right()->representation().Equals(instr->representation()));
+  DCHECK(instr->representation().IsInteger32());
+  DCHECK(instr->left()->representation().Equals(instr->representation()));
+  DCHECK(instr->right()->representation().Equals(instr->representation()));
   LOperand* dividend = UseRegister(instr->left());
   int32_t divisor = instr->right()->GetInteger32Constant();
   LOperand* temp =

@@ -1346,9 +1390,9 @@ LInstruction* LChunkBuilder::DoFlooringDivByConstI(HMathFloorOfDiv* instr) {

 LInstruction* LChunkBuilder::DoFlooringDivI(HMathFloorOfDiv* instr) {
-  ASSERT(instr->representation().IsSmiOrInteger32());
-  ASSERT(instr->left()->representation().Equals(instr->representation()));
-  ASSERT(instr->right()->representation().Equals(instr->representation()));
+  DCHECK(instr->representation().IsSmiOrInteger32());
+  DCHECK(instr->left()->representation().Equals(instr->representation()));
+  DCHECK(instr->right()->representation().Equals(instr->representation()));
   LOperand* dividend = UseRegister(instr->left());
   LOperand* divisor = UseRegister(instr->right());
   LFlooringDivI* div = new(zone()) LFlooringDivI(dividend, divisor);

@@ -1368,14 +1412,15 @@ LInstruction* LChunkBuilder::DoMathFloorOfDiv(HMathFloorOfDiv* instr) {

 LInstruction* LChunkBuilder::DoModByPowerOf2I(HMod* instr) {
-  ASSERT(instr->representation().IsSmiOrInteger32());
-  ASSERT(instr->left()->representation().Equals(instr->representation()));
-  ASSERT(instr->right()->representation().Equals(instr->representation()));
+  DCHECK(instr->representation().IsSmiOrInteger32());
+  DCHECK(instr->left()->representation().Equals(instr->representation()));
+  DCHECK(instr->right()->representation().Equals(instr->representation()));
   LOperand* dividend = UseRegisterAtStart(instr->left());
   int32_t divisor = instr->right()->GetInteger32Constant();
   LInstruction* result = DefineSameAsFirst(new(zone()) LModByPowerOf2I(
       dividend, divisor));
-  if (instr->CheckFlag(HValue::kBailoutOnMinusZero)) {
+  if (instr->CheckFlag(HValue::kLeftCanBeNegative) &&
+      instr->CheckFlag(HValue::kBailoutOnMinusZero)) {
     result = AssignEnvironment(result);
   }
   return result;

@@ -1383,9 +1428,9 @@ LInstruction* LChunkBuilder::DoModByPowerOf2I(HMod* instr) {

 LInstruction* LChunkBuilder::DoModByConstI(HMod* instr) {
-  ASSERT(instr->representation().IsSmiOrInteger32());
-  ASSERT(instr->left()->representation().Equals(instr->representation()));
-  ASSERT(instr->right()->representation().Equals(instr->representation()));
+  DCHECK(instr->representation().IsSmiOrInteger32());
+  DCHECK(instr->left()->representation().Equals(instr->representation()));
+  DCHECK(instr->right()->representation().Equals(instr->representation()));
   LOperand* dividend = UseRegister(instr->left());
   int32_t divisor = instr->right()->GetInteger32Constant();
   LInstruction* result = DefineAsRegister(new(zone()) LModByConstI(

@@ -1398,9 +1443,9 @@ LInstruction* LChunkBuilder::DoModByConstI(HMod* instr) {

 LInstruction* LChunkBuilder::DoModI(HMod* instr) {
-  ASSERT(instr->representation().IsSmiOrInteger32());
-  ASSERT(instr->left()->representation().Equals(instr->representation()));
-  ASSERT(instr->right()->representation().Equals(instr->representation()));
+  DCHECK(instr->representation().IsSmiOrInteger32());
+  DCHECK(instr->left()->representation().Equals(instr->representation()));
+  DCHECK(instr->right()->representation().Equals(instr->representation()));
   LOperand* dividend = UseRegister(instr->left());
   LOperand* divisor = UseRegister(instr->right());
   LInstruction* result = DefineAsRegister(new(zone()) LModI(
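Context for the DoModByPowerOf2I hunk above: a remainder by 2^k can be computed with a mask, and the minus-zero deoptimization check is only needed when the dividend can be negative, which is what the new kLeftCanBeNegative test encodes. A standalone sketch of the underlying arithmetic:

    #include <cassert>
    #include <cstdint>

    // Computes x % divisor for divisor == 2^k, matching C++ truncated '%'.
    int32_t ModByPowerOf2(int32_t x, int32_t divisor) {
      assert(divisor > 0 && (divisor & (divisor - 1)) == 0);
      assert(x > INT32_MIN);  // -x below would overflow for INT32_MIN
      int32_t mask = divisor - 1;
      if (x >= 0) return x & mask;  // non-negative dividend: a plain mask
      return -(-x & mask);          // negative dividend: result is zero or
                                    // negative, hence the minus-zero concern
    }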
@@ -1426,8 +1471,8 @@ LInstruction* LChunkBuilder::DoMod(HMod* instr) {

 LInstruction* LChunkBuilder::DoMul(HMul* instr) {
   if (instr->representation().IsSmiOrInteger32()) {
-    ASSERT(instr->left()->representation().Equals(instr->representation()));
-    ASSERT(instr->right()->representation().Equals(instr->representation()));
+    DCHECK(instr->left()->representation().Equals(instr->representation()));
+    DCHECK(instr->right()->representation().Equals(instr->representation()));
     HValue* left = instr->BetterLeftOperand();
     HValue* right = instr->BetterRightOperand();
     LOperand* left_op;

@@ -1467,7 +1512,7 @@ LInstruction* LChunkBuilder::DoMul(HMul* instr) {

   } else if (instr->representation().IsDouble()) {
     if (kArchVariant == kMips32r2) {
-      if (instr->UseCount() == 1 && instr->uses().value()->IsAdd()) {
+      if (instr->HasOneUse() && instr->uses().value()->IsAdd()) {
         HAdd* add = HAdd::cast(instr->uses().value());
         if (instr == add->left()) {
           // This mul is the lhs of an add. The add and mul will be folded

@@ -1490,8 +1535,8 @@ LInstruction* LChunkBuilder::DoMul(HMul* instr) {

 LInstruction* LChunkBuilder::DoSub(HSub* instr) {
   if (instr->representation().IsSmiOrInteger32()) {
-    ASSERT(instr->left()->representation().Equals(instr->representation()));
-    ASSERT(instr->right()->representation().Equals(instr->representation()));
+    DCHECK(instr->left()->representation().Equals(instr->representation()));
+    DCHECK(instr->right()->representation().Equals(instr->representation()));
     LOperand* left = UseRegisterAtStart(instr->left());
     LOperand* right = UseOrConstantAtStart(instr->right());
     LSubI* sub = new(zone()) LSubI(left, right);

@@ -1519,8 +1564,8 @@ LInstruction* LChunkBuilder::DoMultiplyAdd(HMul* mul, HValue* addend) {

 LInstruction* LChunkBuilder::DoAdd(HAdd* instr) {
   if (instr->representation().IsSmiOrInteger32()) {
-    ASSERT(instr->left()->representation().Equals(instr->representation()));
-    ASSERT(instr->right()->representation().Equals(instr->representation()));
+    DCHECK(instr->left()->representation().Equals(instr->representation()));
+    DCHECK(instr->right()->representation().Equals(instr->representation()));
     LOperand* left = UseRegisterAtStart(instr->BetterLeftOperand());
     LOperand* right = UseOrConstantAtStart(instr->BetterRightOperand());
     LAddI* add = new(zone()) LAddI(left, right);

@@ -1530,9 +1575,9 @@ LInstruction* LChunkBuilder::DoAdd(HAdd* instr) {
     }
     return result;
   } else if (instr->representation().IsExternal()) {
-    ASSERT(instr->left()->representation().IsExternal());
-    ASSERT(instr->right()->representation().IsInteger32());
-    ASSERT(!instr->CheckFlag(HValue::kCanOverflow));
+    DCHECK(instr->left()->representation().IsExternal());
+    DCHECK(instr->right()->representation().IsInteger32());
+    DCHECK(!instr->CheckFlag(HValue::kCanOverflow));
    LOperand* left = UseRegisterAtStart(instr->left());
     LOperand* right = UseOrConstantAtStart(instr->right());
     LAddI* add = new(zone()) LAddI(left, right);

@@ -1544,7 +1589,7 @@ LInstruction* LChunkBuilder::DoAdd(HAdd* instr) {
       return DoMultiplyAdd(HMul::cast(instr->left()), instr->right());

     if (instr->right()->IsMul()) {
-      ASSERT(!instr->left()->IsMul());
+      DCHECK(!instr->left()->IsMul());
       return DoMultiplyAdd(HMul::cast(instr->right()), instr->left());
     }
   }

@@ -1559,14 +1604,14 @@ LInstruction* LChunkBuilder::DoMathMinMax(HMathMinMax* instr) {
   LOperand* left = NULL;
   LOperand* right = NULL;
   if (instr->representation().IsSmiOrInteger32()) {
-    ASSERT(instr->left()->representation().Equals(instr->representation()));
-    ASSERT(instr->right()->representation().Equals(instr->representation()));
+    DCHECK(instr->left()->representation().Equals(instr->representation()));
+    DCHECK(instr->right()->representation().Equals(instr->representation()));
     left = UseRegisterAtStart(instr->BetterLeftOperand());
     right = UseOrConstantAtStart(instr->BetterRightOperand());
   } else {
-    ASSERT(instr->representation().IsDouble());
-    ASSERT(instr->left()->representation().IsDouble());
-    ASSERT(instr->right()->representation().IsDouble());
+    DCHECK(instr->representation().IsDouble());
+    DCHECK(instr->left()->representation().IsDouble());
+    DCHECK(instr->right()->representation().IsDouble());
     left = UseRegisterAtStart(instr->left());
     right = UseRegisterAtStart(instr->right());
   }

@@ -1575,11 +1620,11 @@ LInstruction* LChunkBuilder::DoMathMinMax(HMathMinMax* instr) {

 LInstruction* LChunkBuilder::DoPower(HPower* instr) {
-  ASSERT(instr->representation().IsDouble());
+  DCHECK(instr->representation().IsDouble());
   // We call a C function for double power. It can't trigger a GC.
   // We need to use fixed result register for the call.
   Representation exponent_type = instr->right()->representation();
-  ASSERT(instr->left()->representation().IsDouble());
+  DCHECK(instr->left()->representation().IsDouble());
   LOperand* left = UseFixedDouble(instr->left(), f2);
   LOperand* right = exponent_type.IsDouble() ?
       UseFixedDouble(instr->right(), f4) :

@@ -1592,8 +1637,8 @@ LInstruction* LChunkBuilder::DoPower(HPower* instr) {

 LInstruction* LChunkBuilder::DoCompareGeneric(HCompareGeneric* instr) {
-  ASSERT(instr->left()->representation().IsTagged());
-  ASSERT(instr->right()->representation().IsTagged());
+  DCHECK(instr->left()->representation().IsTagged());
+  DCHECK(instr->right()->representation().IsTagged());
   LOperand* context = UseFixed(instr->context(), cp);
   LOperand* left = UseFixed(instr->left(), a1);
   LOperand* right = UseFixed(instr->right(), a0);

@@ -1604,19 +1649,17 @@ LInstruction* LChunkBuilder::DoCompareGeneric(HCompareGeneric* instr) {

 LInstruction* LChunkBuilder::DoCompareNumericAndBranch(
     HCompareNumericAndBranch* instr) {
-  LInstruction* goto_instr = CheckElideControlInstruction(instr);
-  if (goto_instr != NULL) return goto_instr;
   Representation r = instr->representation();
   if (r.IsSmiOrInteger32()) {
-    ASSERT(instr->left()->representation().Equals(r));
-    ASSERT(instr->right()->representation().Equals(r));
+    DCHECK(instr->left()->representation().Equals(r));
+    DCHECK(instr->right()->representation().Equals(r));
     LOperand* left = UseRegisterOrConstantAtStart(instr->left());
     LOperand* right = UseRegisterOrConstantAtStart(instr->right());
     return new(zone()) LCompareNumericAndBranch(left, right);
   } else {
-    ASSERT(r.IsDouble());
-    ASSERT(instr->left()->representation().IsDouble());
-    ASSERT(instr->right()->representation().IsDouble());
+    DCHECK(r.IsDouble());
+    DCHECK(instr->left()->representation().IsDouble());
+    DCHECK(instr->right()->representation().IsDouble());
     LOperand* left = UseRegisterAtStart(instr->left());
     LOperand* right = UseRegisterAtStart(instr->right());
     return new(zone()) LCompareNumericAndBranch(left, right);

@@ -1626,8 +1669,6 @@ LInstruction* LChunkBuilder::DoCompareNumericAndBranch(

 LInstruction* LChunkBuilder::DoCompareObjectEqAndBranch(
     HCompareObjectEqAndBranch* instr) {
-  LInstruction* goto_instr = CheckElideControlInstruction(instr);
-  if (goto_instr != NULL) return goto_instr;
   LOperand* left = UseRegisterAtStart(instr->left());
   LOperand* right = UseRegisterAtStart(instr->right());
   return new(zone()) LCmpObjectEqAndBranch(left, right);

@@ -1643,8 +1684,6 @@ LInstruction* LChunkBuilder::DoCompareHoleAndBranch(

 LInstruction* LChunkBuilder::DoCompareMinusZeroAndBranch(
     HCompareMinusZeroAndBranch* instr) {
-  LInstruction* goto_instr = CheckElideControlInstruction(instr);
-  if (goto_instr != NULL) return goto_instr;
   LOperand* value = UseRegister(instr->value());
   LOperand* scratch = TempRegister();
   return new(zone()) LCompareMinusZeroAndBranch(value, scratch);

@@ -1652,7 +1691,7 @@ LInstruction* LChunkBuilder::DoCompareMinusZeroAndBranch(

 LInstruction* LChunkBuilder::DoIsObjectAndBranch(HIsObjectAndBranch* instr) {
-  ASSERT(instr->value()->representation().IsTagged());
+  DCHECK(instr->value()->representation().IsTagged());
   LOperand* temp = TempRegister();
   return new(zone()) LIsObjectAndBranch(UseRegisterAtStart(instr->value()),
                                         temp);

@@ -1660,7 +1699,7 @@ LInstruction* LChunkBuilder::DoIsObjectAndBranch(HIsObjectAndBranch* instr) {

 LInstruction* LChunkBuilder::DoIsStringAndBranch(HIsStringAndBranch* instr) {
-  ASSERT(instr->value()->representation().IsTagged());
+  DCHECK(instr->value()->representation().IsTagged());
   LOperand* temp = TempRegister();
   return new(zone()) LIsStringAndBranch(UseRegisterAtStart(instr->value()),
                                         temp);

@@ -1668,14 +1707,14 @@ LInstruction* LChunkBuilder::DoIsStringAndBranch(HIsStringAndBranch* instr) {

 LInstruction* LChunkBuilder::DoIsSmiAndBranch(HIsSmiAndBranch* instr) {
-  ASSERT(instr->value()->representation().IsTagged());
+  DCHECK(instr->value()->representation().IsTagged());
   return new(zone()) LIsSmiAndBranch(Use(instr->value()));
 }

 LInstruction* LChunkBuilder::DoIsUndetectableAndBranch(
     HIsUndetectableAndBranch* instr) {
-  ASSERT(instr->value()->representation().IsTagged());
+  DCHECK(instr->value()->representation().IsTagged());
   return new(zone()) LIsUndetectableAndBranch(
       UseRegisterAtStart(instr->value()), TempRegister());
 }

@@ -1683,8 +1722,8 @@ LInstruction* LChunkBuilder::DoIsUndetectableAndBranch(

 LInstruction* LChunkBuilder::DoStringCompareAndBranch(
     HStringCompareAndBranch* instr) {
-  ASSERT(instr->left()->representation().IsTagged());
-  ASSERT(instr->right()->representation().IsTagged());
+  DCHECK(instr->left()->representation().IsTagged());
+  DCHECK(instr->right()->representation().IsTagged());
   LOperand* context = UseFixed(instr->context(), cp);
   LOperand* left = UseFixed(instr->left(), a1);
   LOperand* right = UseFixed(instr->right(), a0);

@@ -1696,7 +1735,7 @@ LInstruction* LChunkBuilder::DoStringCompareAndBranch(

 LInstruction* LChunkBuilder::DoHasInstanceTypeAndBranch(
     HHasInstanceTypeAndBranch* instr) {
-  ASSERT(instr->value()->representation().IsTagged());
+  DCHECK(instr->value()->representation().IsTagged());
   LOperand* value = UseRegisterAtStart(instr->value());
   return new(zone()) LHasInstanceTypeAndBranch(value);
 }

@@ -1704,7 +1743,7 @@ LInstruction* LChunkBuilder::DoHasInstanceTypeAndBranch(

 LInstruction* LChunkBuilder::DoGetCachedArrayIndex(
     HGetCachedArrayIndex* instr) {
-  ASSERT(instr->value()->representation().IsTagged());
+  DCHECK(instr->value()->representation().IsTagged());
   LOperand* value = UseRegisterAtStart(instr->value());

   return DefineAsRegister(new(zone()) LGetCachedArrayIndex(value));

@@ -1713,7 +1752,7 @@ LInstruction* LChunkBuilder::DoGetCachedArrayIndex(

 LInstruction* LChunkBuilder::DoHasCachedArrayIndexAndBranch(
     HHasCachedArrayIndexAndBranch* instr) {
-  ASSERT(instr->value()->representation().IsTagged());
+  DCHECK(instr->value()->representation().IsTagged());
   return new(zone()) LHasCachedArrayIndexAndBranch(
       UseRegisterAtStart(instr->value()));
 }

@@ -1721,7 +1760,7 @@ LInstruction* LChunkBuilder::DoHasCachedArrayIndexAndBranch(

 LInstruction* LChunkBuilder::DoClassOfTestAndBranch(
     HClassOfTestAndBranch* instr) {
-  ASSERT(instr->value()->representation().IsTagged());
+  DCHECK(instr->value()->representation().IsTagged());
   return new(zone()) LClassOfTestAndBranch(UseRegister(instr->value()),
                                            TempRegister());
 }

@@ -1824,14 +1863,14 @@ LInstruction* LChunkBuilder::DoChange(HChange* instr) {
       }
       return AssignEnvironment(DefineSameAsFirst(new(zone()) LCheckSmi(value)));
     } else {
-      ASSERT(to.IsInteger32());
+      DCHECK(to.IsInteger32());
       if (val->type().IsSmi() || val->representation().IsSmi()) {
         LOperand* value = UseRegisterAtStart(val);
         return DefineAsRegister(new(zone()) LSmiUntag(value, false));
       } else {
         LOperand* value = UseRegister(val);
         LOperand* temp1 = TempRegister();
-        LOperand* temp2 = FixedTemp(f22);
+        LOperand* temp2 = TempDoubleRegister();
         LInstruction* result =
             DefineSameAsFirst(new(zone()) LTaggedToI(value, temp1, temp2));
         if (!val->representation().IsSmi()) result = AssignEnvironment(result);

@@ -1852,7 +1891,7 @@ LInstruction* LChunkBuilder::DoChange(HChange* instr) {
       return AssignEnvironment(
           DefineAsRegister(new(zone()) LDoubleToSmi(value)));
     } else {
-      ASSERT(to.IsInteger32());
+      DCHECK(to.IsInteger32());
       LOperand* value = UseRegister(val);
       LInstruction* result = DefineAsRegister(new(zone()) LDoubleToI(value));
       if (!instr->CanTruncateToInt32()) result = AssignEnvironment(result);

@@ -1885,7 +1924,7 @@ LInstruction* LChunkBuilder::DoChange(HChange* instr) {
       }
       return result;
     } else {
-      ASSERT(to.IsDouble());
+      DCHECK(to.IsDouble());
       if (val->CheckFlag(HInstruction::kUint32)) {
         return DefineAsRegister(new(zone()) LUint32ToDouble(UseRegister(val)));
       } else {

@@ -1901,7 +1940,9 @@ LInstruction* LChunkBuilder::DoChange(HChange* instr) {
 LInstruction* LChunkBuilder::DoCheckHeapObject(HCheckHeapObject* instr) {
   LOperand* value = UseRegisterAtStart(instr->value());
   LInstruction* result = new(zone()) LCheckNonSmi(value);
-  if (!instr->value()->IsHeapObject()) result = AssignEnvironment(result);
+  if (!instr->value()->type().IsHeapObject()) {
+    result = AssignEnvironment(result);
+  }
   return result;
 }

@@ -1943,14 +1984,14 @@ LInstruction* LChunkBuilder::DoClampToUint8(HClampToUint8* instr) {
   LOperand* reg = UseRegister(value);
   if (input_rep.IsDouble()) {
     // Revisit this decision, here and 8 lines below.
-    return DefineAsRegister(new(zone()) LClampDToUint8(reg, FixedTemp(f22)));
+    return DefineAsRegister(new(zone()) LClampDToUint8(reg,
+        TempDoubleRegister()));
   } else if (input_rep.IsInteger32()) {
     return DefineAsRegister(new(zone()) LClampIToUint8(reg));
   } else {
-    ASSERT(input_rep.IsSmiOrTagged());
-    // Register allocator doesn't (yet) support allocation of double
-    // temps. Reserve f22 explicitly.
-    LClampTToUint8* result = new(zone()) LClampTToUint8(reg, FixedTemp(f22));
+    DCHECK(input_rep.IsSmiOrTagged());
+    LClampTToUint8* result =
+        new(zone()) LClampTToUint8(reg, TempDoubleRegister());
     return AssignEnvironment(DefineAsRegister(result));
   }
 }

@@ -1958,7 +1999,7 @@ LInstruction* LChunkBuilder::DoClampToUint8(HClampToUint8* instr) {

 LInstruction* LChunkBuilder::DoDoubleBits(HDoubleBits* instr) {
   HValue* value = instr->value();
-  ASSERT(value->representation().IsDouble());
+  DCHECK(value->representation().IsDouble());
   return DefineAsRegister(new(zone()) LDoubleBits(UseRegister(value)));
 }

@@ -2009,9 +2050,14 @@ LInstruction* LChunkBuilder::DoLoadGlobalCell(HLoadGlobalCell* instr) {

 LInstruction* LChunkBuilder::DoLoadGlobalGeneric(HLoadGlobalGeneric* instr) {
   LOperand* context = UseFixed(instr->context(), cp);
-  LOperand* global_object = UseFixed(instr->global_object(), a0);
+  LOperand* global_object = UseFixed(instr->global_object(),
+                                     LoadIC::ReceiverRegister());
+  LOperand* vector = NULL;
+  if (FLAG_vector_ics) {
+    vector = FixedTemp(LoadIC::VectorRegister());
+  }
   LLoadGlobalGeneric* result =
-      new(zone()) LLoadGlobalGeneric(context, global_object);
+      new(zone()) LLoadGlobalGeneric(context, global_object, vector);
   return MarkAsCall(DefineFixed(result, v0), instr);
 }

@@ -2063,9 +2109,14 @@ LInstruction* LChunkBuilder::DoLoadNamedField(HLoadNamedField* instr) {

 LInstruction* LChunkBuilder::DoLoadNamedGeneric(HLoadNamedGeneric* instr) {
   LOperand* context = UseFixed(instr->context(), cp);
-  LOperand* object = UseFixed(instr->object(), a0);
+  LOperand* object = UseFixed(instr->object(), LoadIC::ReceiverRegister());
+  LOperand* vector = NULL;
+  if (FLAG_vector_ics) {
+    vector = FixedTemp(LoadIC::VectorRegister());
+  }
+
   LInstruction* result =
-      DefineFixed(new(zone()) LLoadNamedGeneric(context, object), v0);
+      DefineFixed(new(zone()) LLoadNamedGeneric(context, object, vector), v0);
   return MarkAsCall(result, instr);
 }

@@ -2083,7 +2134,7 @@ LInstruction* LChunkBuilder::DoLoadRoot(HLoadRoot* instr) {

 LInstruction* LChunkBuilder::DoLoadKeyed(HLoadKeyed* instr) {
-  ASSERT(instr->key()->representation().IsSmiOrInteger32());
+  DCHECK(instr->key()->representation().IsSmiOrInteger32());
   ElementsKind elements_kind = instr->elements_kind();
   LOperand* key = UseRegisterOrConstantAtStart(instr->key());
   LInstruction* result = NULL;

@@ -2093,12 +2144,12 @@ LInstruction* LChunkBuilder::DoLoadKeyed(HLoadKeyed* instr) {
     if (instr->representation().IsDouble()) {
       obj = UseRegister(instr->elements());
     } else {
-      ASSERT(instr->representation().IsSmiOrTagged());
+      DCHECK(instr->representation().IsSmiOrTagged());
       obj = UseRegisterAtStart(instr->elements());
     }
     result = DefineAsRegister(new(zone()) LLoadKeyed(obj, key));
   } else {
-    ASSERT(
+    DCHECK(
         (instr->representation().IsInteger32() &&
          !IsDoubleOrFloatElementsKind(elements_kind)) ||
         (instr->representation().IsDouble() &&

@@ -2123,18 +2174,23 @@ LInstruction* LChunkBuilder::DoLoadKeyed(HLoadKeyed* instr) {

 LInstruction* LChunkBuilder::DoLoadKeyedGeneric(HLoadKeyedGeneric* instr) {
   LOperand* context = UseFixed(instr->context(), cp);
-  LOperand* object = UseFixed(instr->object(), a1);
-  LOperand* key = UseFixed(instr->key(), a0);
+  LOperand* object = UseFixed(instr->object(), LoadIC::ReceiverRegister());
+  LOperand* key = UseFixed(instr->key(), LoadIC::NameRegister());
+  LOperand* vector = NULL;
+  if (FLAG_vector_ics) {
+    vector = FixedTemp(LoadIC::VectorRegister());
+  }

   LInstruction* result =
-      DefineFixed(new(zone()) LLoadKeyedGeneric(context, object, key), v0);
+      DefineFixed(new(zone()) LLoadKeyedGeneric(context, object, key, vector),
+                  v0);
   return MarkAsCall(result, instr);
 }

 LInstruction* LChunkBuilder::DoStoreKeyed(HStoreKeyed* instr) {
   if (!instr->is_typed_elements()) {
-    ASSERT(instr->elements()->representation().IsTagged());
+    DCHECK(instr->elements()->representation().IsTagged());
     bool needs_write_barrier = instr->NeedsWriteBarrier();
     LOperand* object = NULL;
     LOperand* val = NULL;

@@ -2145,7 +2201,7 @@ LInstruction* LChunkBuilder::DoStoreKeyed(HStoreKeyed* instr) {
       key = UseRegisterOrConstantAtStart(instr->key());
       val = UseRegister(instr->value());
     } else {
-      ASSERT(instr->value()->representation().IsSmiOrTagged());
+      DCHECK(instr->value()->representation().IsSmiOrTagged());
       if (needs_write_barrier) {
         object = UseTempRegister(instr->elements());
         val = UseTempRegister(instr->value());

@@ -2160,12 +2216,12 @@ LInstruction* LChunkBuilder::DoStoreKeyed(HStoreKeyed* instr) {
     return new(zone()) LStoreKeyed(object, key, val);
   }

-  ASSERT(
+  DCHECK(
       (instr->value()->representation().IsInteger32() &&
        !IsDoubleOrFloatElementsKind(instr->elements_kind())) ||
       (instr->value()->representation().IsDouble() &&
        IsDoubleOrFloatElementsKind(instr->elements_kind())));
-  ASSERT((instr->is_fixed_typed_array() &&
+  DCHECK((instr->is_fixed_typed_array() &&
           instr->elements()->representation().IsTagged()) ||
          (instr->is_external() &&
           instr->elements()->representation().IsExternal()));

@@ -2178,13 +2234,13 @@ LInstruction* LChunkBuilder::DoStoreKeyed(HStoreKeyed* instr) {

 LInstruction* LChunkBuilder::DoStoreKeyedGeneric(HStoreKeyedGeneric* instr) {
   LOperand* context = UseFixed(instr->context(), cp);
-  LOperand* obj = UseFixed(instr->object(), a2);
-  LOperand* key = UseFixed(instr->key(), a1);
-  LOperand* val = UseFixed(instr->value(), a0);
+  LOperand* obj = UseFixed(instr->object(), KeyedStoreIC::ReceiverRegister());
+  LOperand* key = UseFixed(instr->key(), KeyedStoreIC::NameRegister());
+  LOperand* val = UseFixed(instr->value(), KeyedStoreIC::ValueRegister());

-  ASSERT(instr->object()->representation().IsTagged());
-  ASSERT(instr->key()->representation().IsTagged());
-  ASSERT(instr->value()->representation().IsTagged());
+  DCHECK(instr->object()->representation().IsTagged());
+  DCHECK(instr->key()->representation().IsTagged());
+  DCHECK(instr->value()->representation().IsTagged());

   return MarkAsCall(
       new(zone()) LStoreKeyedGeneric(context, obj, key, val), instr);

@@ -2254,8 +2310,8 @@ LInstruction* LChunkBuilder::DoStoreNamedField(HStoreNamedField* instr) {

 LInstruction* LChunkBuilder::DoStoreNamedGeneric(HStoreNamedGeneric* instr) {
   LOperand* context = UseFixed(instr->context(), cp);
-  LOperand* obj = UseFixed(instr->object(), a1);
-  LOperand* val = UseFixed(instr->value(), a0);
+  LOperand* obj = UseFixed(instr->object(), StoreIC::ReceiverRegister());
+  LOperand* val = UseFixed(instr->value(), StoreIC::ValueRegister());

   LInstruction* result = new(zone()) LStoreNamedGeneric(context, obj, val);
   return MarkAsCall(result, instr);
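The generic-load hunks above all follow one pattern: operands move from hard-coded argument registers to the IC's named register accessors, and a type-feedback-vector temp exists only behind FLAG_vector_ics. A self-contained sketch of that optional-operand shape, with stand-in types rather than V8's:

    #include <cstddef>

    struct LOperand {};
    static bool FLAG_vector_ics = false;
    static LOperand* FixedTemp(int /*register_code*/) { return new LOperand(); }

    // Returns the vector operand, or NULL when vector ICs are disabled,
    // mirroring how the builders above thread 'vector' through.
    LOperand* MaybeVectorOperand(int vector_register) {
      LOperand* vector = NULL;
      if (FLAG_vector_ics) vector = FixedTemp(vector_register);
      return vector;
    }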
@@ -2294,9 +2350,7 @@ LInstruction* LChunkBuilder::DoStringCharFromCode(HStringCharFromCode* instr) {
 LInstruction* LChunkBuilder::DoAllocate(HAllocate* instr) {
   info()->MarkAsDeferredCalling();
   LOperand* context = UseAny(instr->context());
-  LOperand* size = instr->size()->IsConstant()
-      ? UseConstant(instr->size())
-      : UseTempRegister(instr->size());
+  LOperand* size = UseRegisterOrConstant(instr->size());
   LOperand* temp1 = TempRegister();
   LOperand* temp2 = TempRegister();
   LAllocate* result = new(zone()) LAllocate(context, size, temp1, temp2);

@@ -2319,7 +2373,7 @@ LInstruction* LChunkBuilder::DoFunctionLiteral(HFunctionLiteral* instr) {

 LInstruction* LChunkBuilder::DoOsrEntry(HOsrEntry* instr) {
-  ASSERT(argument_count_ == 0);
+  DCHECK(argument_count_ == 0);
   allocator_->MarkAsOsrEntry();
   current_block_->last_environment()->set_ast_id(instr->ast_id());
   return AssignEnvironment(new(zone()) LOsrEntry);

@@ -2332,11 +2386,11 @@ LInstruction* LChunkBuilder::DoParameter(HParameter* instr) {
     int spill_index = chunk()->GetParameterStackSlot(instr->index());
     return DefineAsSpilled(result, spill_index);
   } else {
-    ASSERT(info()->IsStub());
+    DCHECK(info()->IsStub());
     CodeStubInterfaceDescriptor* descriptor =
         info()->code_stub()->GetInterfaceDescriptor();
     int index = static_cast<int>(instr->index());
-    Register reg = descriptor->GetParameterRegister(index);
+    Register reg = descriptor->GetEnvironmentParameterRegister(index);
     return DefineFixed(result, reg);
   }
 }

@@ -2407,9 +2461,6 @@ LInstruction* LChunkBuilder::DoTypeof(HTypeof* instr) {

 LInstruction* LChunkBuilder::DoTypeofIsAndBranch(HTypeofIsAndBranch* instr) {
-  LInstruction* goto_instr = CheckElideControlInstruction(instr);
-  if (goto_instr != NULL) return goto_instr;
-
   return new(zone()) LTypeofIsAndBranch(UseTempRegister(instr->value()));
 }

@@ -2431,7 +2482,7 @@ LInstruction* LChunkBuilder::DoStackCheck(HStackCheck* instr) {
     LOperand* context = UseFixed(instr->context(), cp);
     return MarkAsCall(new(zone()) LStackCheck(context), instr);
   } else {
-    ASSERT(instr->is_backwards_branch());
+    DCHECK(instr->is_backwards_branch());
     LOperand* context = UseAny(instr->context());
     return AssignEnvironment(
         AssignPointerMap(new(zone()) LStackCheck(context)));

@@ -2467,7 +2518,7 @@ LInstruction* LChunkBuilder::DoLeaveInlined(HLeaveInlined* instr) {
   if (env->entry()->arguments_pushed()) {
     int argument_count = env->arguments_environment()->parameter_count();
     pop = new(zone()) LDrop(argument_count);
-    ASSERT(instr->argument_delta() == -argument_count);
+    DCHECK(instr->argument_delta() == -argument_count);
   }

   HEnvironment* outer = current_block_->last_environment()->

@@ -2508,4 +2559,22 @@ LInstruction* LChunkBuilder::DoLoadFieldByIndex(HLoadFieldByIndex* instr) {
 }

+
+LInstruction* LChunkBuilder::DoStoreFrameContext(HStoreFrameContext* instr) {
+  LOperand* context = UseRegisterAtStart(instr->context());
+  return new(zone()) LStoreFrameContext(context);
+}
+
+
+LInstruction* LChunkBuilder::DoAllocateBlockContext(
+    HAllocateBlockContext* instr) {
+  LOperand* context = UseFixed(instr->context(), cp);
+  LOperand* function = UseRegisterAtStart(instr->function());
+  LAllocateBlockContext* result =
+      new(zone()) LAllocateBlockContext(context, function);
+  return MarkAsCall(DefineFixed(result, cp), instr);
+}
+
 } }  // namespace v8::internal
+
+#endif  // V8_TARGET_ARCH_MIPS
diff --git a/deps/v8/src/mips/lithium-mips.h b/deps/v8/src/mips/lithium-mips.h
index 14ac0a48232..9578955600d 100644
--- a/deps/v8/src/mips/lithium-mips.h
+++ b/deps/v8/src/mips/lithium-mips.h
@@ -5,11 +5,11 @@
 #ifndef V8_MIPS_LITHIUM_MIPS_H_
 #define V8_MIPS_LITHIUM_MIPS_H_

-#include "hydrogen.h"
-#include "lithium-allocator.h"
-#include "lithium.h"
-#include "safepoint-table.h"
-#include "utils.h"
+#include "src/hydrogen.h"
+#include "src/lithium.h"
+#include "src/lithium-allocator.h"
+#include "src/safepoint-table.h"
+#include "src/utils.h"

 namespace v8 {
 namespace internal {

@@ -17,146 +17,149 @@ namespace internal {
 // Forward declarations.
 class LCodeGen;

-#define LITHIUM_CONCRETE_INSTRUCTION_LIST(V) \
-  V(AccessArgumentsAt) \
-  V(AddI) \
-  V(Allocate) \
-  V(ApplyArguments) \
-  V(ArgumentsElements) \
-  V(ArgumentsLength) \
-  V(ArithmeticD) \
-  V(ArithmeticT) \
-  V(BitI) \
-  V(BoundsCheck) \
-  V(Branch) \
-  V(CallJSFunction) \
-  V(CallWithDescriptor) \
-  V(CallFunction) \
-  V(CallNew) \
-  V(CallNewArray) \
-  V(CallRuntime) \
-  V(CallStub) \
-  V(CheckInstanceType) \
-  V(CheckMaps) \
-  V(CheckMapValue) \
-  V(CheckNonSmi) \
-  V(CheckSmi) \
-  V(CheckValue) \
-  V(ClampDToUint8) \
-  V(ClampIToUint8) \
-  V(ClampTToUint8) \
-  V(ClassOfTestAndBranch) \
-  V(CompareMinusZeroAndBranch) \
-  V(CompareNumericAndBranch) \
-  V(CmpObjectEqAndBranch) \
-  V(CmpHoleAndBranch) \
-  V(CmpMapAndBranch) \
-  V(CmpT) \
-  V(ConstantD) \
-  V(ConstantE) \
-  V(ConstantI) \
-  V(ConstantS) \
-  V(ConstantT) \
-  V(ConstructDouble) \
-  V(Context) \
-  V(DateField) \
-  V(DebugBreak) \
-  V(DeclareGlobals) \
-  V(Deoptimize) \
-  V(DivByConstI) \
-  V(DivByPowerOf2I) \
-  V(DivI) \
-  V(DoubleToI) \
-  V(DoubleBits) \
-  V(DoubleToSmi) \
-  V(Drop) \
-  V(Dummy) \
-  V(DummyUse) \
-  V(FlooringDivByConstI) \
-  V(FlooringDivByPowerOf2I) \
-  V(FlooringDivI) \
-  V(ForInCacheArray) \
-  V(ForInPrepareMap) \
-  V(FunctionLiteral) \
-  V(GetCachedArrayIndex) \
-  V(Goto) \
-  V(HasCachedArrayIndexAndBranch) \
-  V(HasInstanceTypeAndBranch) \
-  V(InnerAllocatedObject) \
-  V(InstanceOf) \
-  V(InstanceOfKnownGlobal) \
-  V(InstructionGap) \
-  V(Integer32ToDouble) \
-  V(InvokeFunction) \
-  V(IsConstructCallAndBranch) \
-  V(IsObjectAndBranch) \
-  V(IsStringAndBranch) \
-  V(IsSmiAndBranch) \
-  V(IsUndetectableAndBranch) \
-  V(Label) \
-  V(LazyBailout) \
-  V(LoadContextSlot) \
-  V(LoadRoot) \
-  V(LoadFieldByIndex) \
-  V(LoadFunctionPrototype) \
-  V(LoadGlobalCell) \
-  V(LoadGlobalGeneric) \
-  V(LoadKeyed) \
-  V(LoadKeyedGeneric) \
-  V(LoadNamedField) \
-  V(LoadNamedGeneric) \
-  V(MapEnumLength) \
-  V(MathAbs) \
-  V(MathExp) \
-  V(MathClz32) \
-  V(MathFloor) \
-  V(MathLog) \
-  V(MathMinMax) \
-  V(MathPowHalf) \
-  V(MathRound) \
-  V(MathSqrt) \
-  V(ModByConstI) \
-  V(ModByPowerOf2I) \
-  V(ModI) \
-  V(MulI) \
-  V(MultiplyAddD) \
-  V(NumberTagD) \
-  V(NumberTagI) \
-  V(NumberTagU) \
-  V(NumberUntagD) \
-  V(OsrEntry) \
-  V(Parameter) \
-  V(Power) \
-  V(PushArgument) \
-  V(RegExpLiteral) \
-  V(Return) \
-  V(SeqStringGetChar) \
-  V(SeqStringSetChar) \
-  V(ShiftI) \
-  V(SmiTag) \
-  V(SmiUntag) \
-  V(StackCheck) \
-  V(StoreCodeEntry) \
-  V(StoreContextSlot) \
-  V(StoreGlobalCell) \
-  V(StoreKeyed) \
-  V(StoreKeyedGeneric) \
-  V(StoreNamedField) \
-  V(StoreNamedGeneric) \
-  V(StringAdd) \
-  V(StringCharCodeAt) \
-  V(StringCharFromCode) \
-  V(StringCompareAndBranch) \
-  V(SubI) \
-  V(TaggedToI) \
-  V(ThisFunction) \
-  V(ToFastProperties) \
-  V(TransitionElementsKind) \
-  V(TrapAllocationMemento) \
-  V(Typeof) \
-  V(TypeofIsAndBranch) \
-  V(Uint32ToDouble) \
-  V(UnknownOSRValue) \
+#define LITHIUM_CONCRETE_INSTRUCTION_LIST(V) \
+  V(AccessArgumentsAt) \
+  V(AddI) \
+  V(Allocate) \
+  V(AllocateBlockContext) \
+  V(ApplyArguments) \
+  V(ArgumentsElements) \
+  V(ArgumentsLength) \
+  V(ArithmeticD) \
+  V(ArithmeticT) \
+  V(BitI) \
+  V(BoundsCheck) \
+  V(Branch) \
+  V(CallJSFunction) \
+  V(CallWithDescriptor) \
+  V(CallFunction) \
+  V(CallNew) \
+  V(CallNewArray) \
+  V(CallRuntime) \
+  V(CallStub) \
+  V(CheckInstanceType) \
+  V(CheckMaps) \
+  V(CheckMapValue) \
+  V(CheckNonSmi) \
+  V(CheckSmi) \
+  V(CheckValue) \
+  V(ClampDToUint8) \
+  V(ClampIToUint8) \
+  V(ClampTToUint8) \
+  V(ClassOfTestAndBranch) \
+  V(CompareMinusZeroAndBranch) \
+  V(CompareNumericAndBranch) \
+  V(CmpObjectEqAndBranch) \
+  V(CmpHoleAndBranch) \
+  V(CmpMapAndBranch) \
+  V(CmpT) \
+  V(ConstantD) \
+  V(ConstantE) \
+  V(ConstantI) \
+  V(ConstantS) \
+  V(ConstantT) \
+  V(ConstructDouble) \
+  V(Context) \
+  V(DateField) \
+  V(DebugBreak) \
+  V(DeclareGlobals) \
+  V(Deoptimize) \
+  V(DivByConstI) \
+  V(DivByPowerOf2I) \
+  V(DivI) \
+  V(DoubleToI) \
+  V(DoubleBits) \
+  V(DoubleToSmi) \
+  V(Drop) \
+  V(Dummy) \
+  V(DummyUse) \
+  V(FlooringDivByConstI) \
+  V(FlooringDivByPowerOf2I) \
+  V(FlooringDivI) \
+  V(ForInCacheArray) \
+  V(ForInPrepareMap) \
+  V(FunctionLiteral) \
+  V(GetCachedArrayIndex) \
+  V(Goto) \
+  V(HasCachedArrayIndexAndBranch) \
+  V(HasInstanceTypeAndBranch) \
+  V(InnerAllocatedObject) \
+  V(InstanceOf) \
+  V(InstanceOfKnownGlobal) \
+  V(InstructionGap) \
+  V(Integer32ToDouble) \
+  V(InvokeFunction) \
+  V(IsConstructCallAndBranch) \
+  V(IsObjectAndBranch) \
+  V(IsStringAndBranch) \
+  V(IsSmiAndBranch) \
+  V(IsUndetectableAndBranch) \
+  V(Label) \
+  V(LazyBailout) \
+  V(LoadContextSlot) \
+  V(LoadRoot) \
+  V(LoadFieldByIndex) \
+  V(LoadFunctionPrototype) \
+  V(LoadGlobalCell) \
+  V(LoadGlobalGeneric) \
+  V(LoadKeyed) \
+  V(LoadKeyedGeneric) \
+  V(LoadNamedField) \
+  V(LoadNamedGeneric) \
+  V(MapEnumLength) \
+  V(MathAbs) \
+  V(MathExp) \
+  V(MathClz32) \
+  V(MathFloor) \
+  V(MathFround) \
+  V(MathLog) \
+  V(MathMinMax) \
+  V(MathPowHalf) \
+  V(MathRound) \
+  V(MathSqrt) \
+  V(ModByConstI) \
+  V(ModByPowerOf2I) \
+  V(ModI) \
+  V(MulI) \
+  V(MultiplyAddD) \
+  V(NumberTagD) \
+  V(NumberTagI) \
+  V(NumberTagU) \
+  V(NumberUntagD) \
+  V(OsrEntry) \
+  V(Parameter) \
+  V(Power) \
+  V(PushArgument) \
+  V(RegExpLiteral) \
+  V(Return) \
+  V(SeqStringGetChar) \
+  V(SeqStringSetChar) \
+  V(ShiftI) \
+  V(SmiTag) \
+  V(SmiUntag) \
+  V(StackCheck) \
+  V(StoreCodeEntry) \
+  V(StoreContextSlot) \
+  V(StoreFrameContext) \
+  V(StoreGlobalCell) \
+  V(StoreKeyed) \
+  V(StoreKeyedGeneric) \
+  V(StoreNamedField) \
+  V(StoreNamedGeneric) \
+  V(StringAdd) \
+  V(StringCharCodeAt) \
+  V(StringCharFromCode) \
+  V(StringCompareAndBranch) \
+  V(SubI) \
+  V(TaggedToI) \
+  V(ThisFunction) \
+  V(ToFastProperties) \
+  V(TransitionElementsKind) \
+  V(TrapAllocationMemento) \
+  V(Typeof) \
+  V(TypeofIsAndBranch) \
+  V(Uint32ToDouble) \
+  V(UnknownOSRValue) \
   V(WrapReceiver)

 #define DECLARE_CONCRETE_INSTRUCTION(type, mnemonic) \
@@ -168,7 +171,7 @@ class LCodeGen;
     return mnemonic; \
   } \
   static L##type* cast(LInstruction* instr) { \
-    ASSERT(instr->Is##type()); \
+    DCHECK(instr->Is##type()); \
     return reinterpret_cast<L##type*>(instr); \
   }

@@ -217,6 +220,9 @@ class LInstruction : public ZoneObject {

   virtual bool IsControl() const { return false; }

+  // Try deleting this instruction if possible.
+  virtual bool TryDelete() { return false; }
+
   void set_environment(LEnvironment* env) { environment_ = env; }
   LEnvironment* environment() const { return environment_; }
   bool HasEnvironment() const { return environment_ != NULL; }
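The V(...) list rewritten above is an X-macro: one list of instruction names expanded several ways so the enums, cast helpers, and mnemonics never drift apart. A runnable miniature of the pattern:

    #include <cstdio>

    #define INSTRUCTION_LIST(V) \
      V(AddI)                   \
      V(SubI)                   \
      V(Goto)

    // Expansion 1: an enum with one entry per instruction.
    #define DECLARE_ENUM(type) k##type,
    enum Opcode { INSTRUCTION_LIST(DECLARE_ENUM) kNumOpcodes };
    #undef DECLARE_ENUM

    // Expansion 2: a mnemonic table kept in sync automatically.
    #define DECLARE_NAME(type) #type,
    static const char* kOpcodeNames[] = { INSTRUCTION_LIST(DECLARE_NAME) };
    #undef DECLARE_NAME

    int main() {
      for (int i = 0; i < kNumOpcodes; ++i) std::printf("%s\n", kOpcodeNames[i]);
      return 0;
    }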
@@ -255,11 +261,12 @@ class LInstruction : public ZoneObject {
   void VerifyCall();
 #endif

+  virtual int InputCount() = 0;
+  virtual LOperand* InputAt(int i) = 0;
+
  private:
   // Iterator interface.
   friend class InputIterator;
-  virtual int InputCount() = 0;
-  virtual LOperand* InputAt(int i) = 0;

   friend class TempIterator;
   virtual int TempCount() = 0;

@@ -324,7 +331,7 @@ class LGap : public LTemplateInstruction<0, 0, 0> {
   virtual bool IsGap() const V8_FINAL V8_OVERRIDE { return true; }
   virtual void PrintDataTo(StringStream* stream) V8_OVERRIDE;
   static LGap* cast(LInstruction* instr) {
-    ASSERT(instr->IsGap());
+    DCHECK(instr->IsGap());
     return reinterpret_cast<LGap*>(instr);
   }

@@ -404,7 +411,7 @@ class LLazyBailout V8_FINAL : public LTemplateInstruction<0, 0, 0> {

 class LDummy V8_FINAL : public LTemplateInstruction<1, 0, 0> {
  public:
-  explicit LDummy() { }
+  LDummy() {}
   DECLARE_CONCRETE_INSTRUCTION(Dummy, "dummy")
 };

@@ -420,6 +427,7 @@ class LDummyUse V8_FINAL : public LTemplateInstruction<1, 1, 0> {

 class LDeoptimize V8_FINAL : public LTemplateInstruction<0, 0, 0> {
  public:
+  virtual bool IsControl() const V8_OVERRIDE { return true; }
   DECLARE_CONCRETE_INSTRUCTION(Deoptimize, "deoptimize")
   DECLARE_HYDROGEN_ACCESSOR(Deoptimize)
 };

@@ -848,6 +856,16 @@ class LMathRound V8_FINAL : public LTemplateInstruction<1, 1, 1> {
 };

+class LMathFround V8_FINAL : public LTemplateInstruction<1, 1, 0> {
+ public:
+  explicit LMathFround(LOperand* value) { inputs_[0] = value; }
+
+  LOperand* value() { return inputs_[0]; }
+
+  DECLARE_CONCRETE_INSTRUCTION(MathFround, "math-fround")
+};
+
+
 class LMathAbs V8_FINAL : public LTemplateInstruction<1, 2, 0> {
  public:
   LMathAbs(LOperand* context, LOperand* value) {

@@ -1521,7 +1539,7 @@ class LReturn V8_FINAL : public LTemplateInstruction<0, 3, 0> {
     return parameter_count()->IsConstantOperand();
   }
   LConstantOperand* constant_parameter_count() {
-    ASSERT(has_constant_parameter_count());
+    DCHECK(has_constant_parameter_count());
     return LConstantOperand::cast(parameter_count());
   }
   LOperand* parameter_count() { return inputs_[2]; }

@@ -1543,15 +1561,17 @@ class LLoadNamedField V8_FINAL : public LTemplateInstruction<1, 1, 0> {
 };

-class LLoadNamedGeneric V8_FINAL : public LTemplateInstruction<1, 2, 0> {
+class LLoadNamedGeneric V8_FINAL : public LTemplateInstruction<1, 2, 1> {
  public:
-  LLoadNamedGeneric(LOperand* context, LOperand* object) {
+  LLoadNamedGeneric(LOperand* context, LOperand* object, LOperand* vector) {
     inputs_[0] = context;
     inputs_[1] = object;
+    temps_[0] = vector;
   }

   LOperand* context() { return inputs_[0]; }
   LOperand* object() { return inputs_[1]; }
+  LOperand* temp_vector() { return temps_[0]; }

   DECLARE_CONCRETE_INSTRUCTION(LoadNamedGeneric, "load-named-generic")
   DECLARE_HYDROGEN_ACCESSOR(LoadNamedGeneric)

@@ -1608,23 +1628,27 @@ class LLoadKeyed V8_FINAL : public LTemplateInstruction<1, 2, 0> {
   DECLARE_HYDROGEN_ACCESSOR(LoadKeyed)

   virtual void PrintDataTo(StringStream* stream);
-  uint32_t additional_index() const { return hydrogen()->index_offset(); }
+  uint32_t base_offset() const { return hydrogen()->base_offset(); }
 };

-class LLoadKeyedGeneric V8_FINAL : public LTemplateInstruction<1, 3, 0> {
+class LLoadKeyedGeneric V8_FINAL : public LTemplateInstruction<1, 3, 1> {
  public:
-  LLoadKeyedGeneric(LOperand* context, LOperand* object, LOperand* key) {
+  LLoadKeyedGeneric(LOperand* context, LOperand* object, LOperand* key,
+                    LOperand* vector) {
     inputs_[0] = context;
     inputs_[1] = object;
     inputs_[2] = key;
+    temps_[0] = vector;
   }

   LOperand* context() { return inputs_[0]; }
   LOperand* object() { return inputs_[1]; }
   LOperand* key() { return inputs_[2]; }
+  LOperand* temp_vector() { return temps_[0]; }

   DECLARE_CONCRETE_INSTRUCTION(LoadKeyedGeneric, "load-keyed-generic")
+  DECLARE_HYDROGEN_ACCESSOR(LoadKeyedGeneric)
 };

@@ -1635,15 +1659,18 @@ class LLoadGlobalCell V8_FINAL : public LTemplateInstruction<1, 0, 0> {
 };

-class LLoadGlobalGeneric V8_FINAL : public LTemplateInstruction<1, 2, 0> {
+class LLoadGlobalGeneric V8_FINAL : public LTemplateInstruction<1, 2, 1> {
  public:
-  LLoadGlobalGeneric(LOperand* context, LOperand* global_object) {
+  LLoadGlobalGeneric(LOperand* context, LOperand* global_object,
+                     LOperand* vector) {
     inputs_[0] = context;
     inputs_[1] = global_object;
+    temps_[0] = vector;
   }

   LOperand* context() { return inputs_[0]; }
   LOperand* global_object() { return inputs_[1]; }
+  LOperand* temp_vector() { return temps_[0]; }

   DECLARE_CONCRETE_INSTRUCTION(LoadGlobalGeneric, "load-global-generic")
   DECLARE_HYDROGEN_ACCESSOR(LoadGlobalGeneric)

@@ -1729,15 +1756,15 @@ class LDrop V8_FINAL : public LTemplateInstruction<0, 0, 0> {
 };

-class LStoreCodeEntry V8_FINAL: public LTemplateInstruction<0, 1, 1> {
+class LStoreCodeEntry V8_FINAL: public LTemplateInstruction<0, 2, 0> {
  public:
   LStoreCodeEntry(LOperand* function, LOperand* code_object) {
     inputs_[0] = function;
-    temps_[0] = code_object;
+    inputs_[1] = code_object;
   }

   LOperand* function() { return inputs_[0]; }
-  LOperand* code_object() { return temps_[0]; }
+  LOperand* code_object() { return inputs_[1]; }

   virtual void PrintDataTo(StringStream* stream);

@@ -1808,18 +1835,18 @@ class LCallJSFunction V8_FINAL : public LTemplateInstruction<1, 1, 0> {

 class LCallWithDescriptor V8_FINAL : public LTemplateResultInstruction<1> {
  public:
-  LCallWithDescriptor(const CallInterfaceDescriptor* descriptor,
-                      ZoneList<LOperand*>& operands,
+  LCallWithDescriptor(const InterfaceDescriptor* descriptor,
+                      const ZoneList<LOperand*>& operands,
                       Zone* zone)
     : descriptor_(descriptor),
-      inputs_(descriptor->environment_length() + 1, zone) {
-    ASSERT(descriptor->environment_length() + 1 == operands.length());
+      inputs_(descriptor->GetRegisterParameterCount() + 1, zone) {
+    DCHECK(descriptor->GetRegisterParameterCount() + 1 == operands.length());
     inputs_.AddAll(operands, zone);
   }

   LOperand* target() const { return inputs_[0]; }

-  const CallInterfaceDescriptor* descriptor() { return descriptor_; }
+  const InterfaceDescriptor* descriptor() { return descriptor_; }

  private:
   DECLARE_CONCRETE_INSTRUCTION(CallWithDescriptor, "call-with-descriptor")

@@ -1829,7 +1856,7 @@ class LCallWithDescriptor V8_FINAL : public LTemplateResultInstruction<1> {

   int arity() const { return hydrogen()->argument_count() - 1; }

-  const CallInterfaceDescriptor* descriptor_;
+  const InterfaceDescriptor* descriptor_;
   ZoneList<LOperand*> inputs_;

   // Iterator support.
@@ -2177,7 +2204,7 @@ class LStoreKeyed V8_FINAL : public LTemplateInstruction<0, 3, 0> { virtual void PrintDataTo(StringStream* stream) V8_OVERRIDE; bool NeedsCanonicalization() { return hydrogen()->NeedsCanonicalization(); } - uint32_t additional_index() const { return hydrogen()->index_offset(); } + uint32_t base_offset() const { return hydrogen()->base_offset(); } }; @@ -2625,6 +2652,35 @@ class LLoadFieldByIndex V8_FINAL : public LTemplateInstruction<1, 2, 0> { }; +class LStoreFrameContext: public LTemplateInstruction<0, 1, 0> { + public: + explicit LStoreFrameContext(LOperand* context) { + inputs_[0] = context; + } + + LOperand* context() { return inputs_[0]; } + + DECLARE_CONCRETE_INSTRUCTION(StoreFrameContext, "store-frame-context") +}; + + +class LAllocateBlockContext: public LTemplateInstruction<1, 2, 0> { + public: + LAllocateBlockContext(LOperand* context, LOperand* function) { + inputs_[0] = context; + inputs_[1] = function; + } + + LOperand* context() { return inputs_[0]; } + LOperand* function() { return inputs_[1]; } + + Handle<ScopeInfo> scope_info() { return hydrogen()->scope_info(); } + + DECLARE_CONCRETE_INSTRUCTION(AllocateBlockContext, "allocate-block-context") + DECLARE_HYDROGEN_ACCESSOR(AllocateBlockContext) +}; + + class LChunkBuilder; class LPlatformChunk V8_FINAL : public LChunk { public: @@ -2654,8 +2710,6 @@ class LChunkBuilder V8_FINAL : public LChunkBuilderBase { // Build the sequence for the graph. LPlatformChunk* Build(); - LInstruction* CheckElideControlInstruction(HControlInstruction* instr); - // Declare methods that deal with the individual node types. #define DECLARE_DO(type) LInstruction* Do##type(H##type* node); HYDROGEN_CONCRETE_INSTRUCTION_LIST(DECLARE_DO) @@ -2667,6 +2721,7 @@ class LChunkBuilder V8_FINAL : public LChunkBuilderBase { LInstruction* DoMathFloor(HUnaryMathOperation* instr); LInstruction* DoMathRound(HUnaryMathOperation* instr); + LInstruction* DoMathFround(HUnaryMathOperation* instr); LInstruction* DoMathAbs(HUnaryMathOperation* instr); LInstruction* DoMathLog(HUnaryMathOperation* instr); LInstruction* DoMathExp(HUnaryMathOperation* instr); @@ -2747,6 +2802,7 @@ class LChunkBuilder V8_FINAL : public LChunkBuilderBase { // Temporary operand that must be in a register. MUST_USE_RESULT LUnallocated* TempRegister(); + MUST_USE_RESULT LUnallocated* TempDoubleRegister(); MUST_USE_RESULT LOperand* FixedTemp(Register reg); MUST_USE_RESULT LOperand* FixedTemp(DoubleRegister reg); @@ -2776,6 +2832,7 @@ class LChunkBuilder V8_FINAL : public LChunkBuilderBase { CanDeoptimize can_deoptimize = CANNOT_DEOPTIMIZE_EAGERLY); void VisitInstruction(HInstruction* current); + void AddInstruction(LInstruction* instr, HInstruction* current); void DoBasicBlock(HBasicBlock* block, HBasicBlock* next_block); LInstruction* DoBit(Token::Value op, HBitwiseBinaryOperation* instr); diff --git a/deps/v8/src/mips/macro-assembler-mips.cc b/deps/v8/src/mips/macro-assembler-mips.cc index 62067a26dcb..69159227291 100644 --- a/deps/v8/src/mips/macro-assembler-mips.cc +++ b/deps/v8/src/mips/macro-assembler-mips.cc @@ -4,16 +4,16 @@ #include <limits.h> // For LONG_MIN, LONG_MAX. 
-#include "v8.h" +#include "src/v8.h" #if V8_TARGET_ARCH_MIPS -#include "bootstrapper.h" -#include "codegen.h" -#include "cpu-profiler.h" -#include "debug.h" -#include "isolate-inl.h" -#include "runtime.h" +#include "src/bootstrapper.h" +#include "src/codegen.h" +#include "src/cpu-profiler.h" +#include "src/debug.h" +#include "src/isolate-inl.h" +#include "src/runtime.h" namespace v8 { namespace internal { @@ -32,7 +32,7 @@ MacroAssembler::MacroAssembler(Isolate* arg_isolate, void* buffer, int size) void MacroAssembler::Load(Register dst, const MemOperand& src, Representation r) { - ASSERT(!r.IsDouble()); + DCHECK(!r.IsDouble()); if (r.IsInteger8()) { lb(dst, src); } else if (r.IsUInteger8()) { @@ -50,7 +50,7 @@ void MacroAssembler::Load(Register dst, void MacroAssembler::Store(Register src, const MemOperand& dst, Representation r) { - ASSERT(!r.IsDouble()); + DCHECK(!r.IsDouble()); if (r.IsInteger8() || r.IsUInteger8()) { sb(src, dst); } else if (r.IsInteger16() || r.IsUInteger16()) { @@ -101,7 +101,7 @@ void MacroAssembler::PushSafepointRegisters() { // Safepoints expect a block of kNumSafepointRegisters values on the // stack, so adjust the stack for unsaved registers. const int num_unsaved = kNumSafepointRegisters - kNumSafepointSavedRegisters; - ASSERT(num_unsaved >= 0); + DCHECK(num_unsaved >= 0); if (num_unsaved > 0) { Subu(sp, sp, Operand(num_unsaved * kPointerSize)); } @@ -118,32 +118,6 @@ void MacroAssembler::PopSafepointRegisters() { } -void MacroAssembler::PushSafepointRegistersAndDoubles() { - PushSafepointRegisters(); - Subu(sp, sp, Operand(FPURegister::NumAllocatableRegisters() * kDoubleSize)); - for (int i = 0; i < FPURegister::NumAllocatableRegisters(); i+=2) { - FPURegister reg = FPURegister::FromAllocationIndex(i); - sdc1(reg, MemOperand(sp, i * kDoubleSize)); - } -} - - -void MacroAssembler::PopSafepointRegistersAndDoubles() { - for (int i = 0; i < FPURegister::NumAllocatableRegisters(); i+=2) { - FPURegister reg = FPURegister::FromAllocationIndex(i); - ldc1(reg, MemOperand(sp, i * kDoubleSize)); - } - Addu(sp, sp, Operand(FPURegister::NumAllocatableRegisters() * kDoubleSize)); - PopSafepointRegisters(); -} - - -void MacroAssembler::StoreToSafepointRegistersAndDoublesSlot(Register src, - Register dst) { - sw(src, SafepointRegistersAndDoublesSlot(dst)); -} - - void MacroAssembler::StoreToSafepointRegisterSlot(Register src, Register dst) { sw(src, SafepointRegisterSlot(dst)); } @@ -179,7 +153,7 @@ void MacroAssembler::InNewSpace(Register object, Register scratch, Condition cc, Label* branch) { - ASSERT(cc == eq || cc == ne); + DCHECK(cc == eq || cc == ne); And(scratch, object, Operand(ExternalReference::new_space_mask(isolate()))); Branch(branch, cc, scratch, Operand(ExternalReference::new_space_start(isolate()))); @@ -194,8 +168,9 @@ void MacroAssembler::RecordWriteField( RAStatus ra_status, SaveFPRegsMode save_fp, RememberedSetAction remembered_set_action, - SmiCheck smi_check) { - ASSERT(!AreAliased(value, dst, t8, object)); + SmiCheck smi_check, + PointersToHereCheck pointers_to_here_check_for_value) { + DCHECK(!AreAliased(value, dst, t8, object)); // First, check if a write barrier is even needed. The tests below // catch stores of Smis. Label done; @@ -207,7 +182,7 @@ void MacroAssembler::RecordWriteField( // Although the object register is tagged, the offset is relative to the start // of the object, so so offset must be a multiple of kPointerSize. 
- ASSERT(IsAligned(offset, kPointerSize)); + DCHECK(IsAligned(offset, kPointerSize)); Addu(dst, object, Operand(offset - kHeapObjectTag)); if (emit_debug_code()) { @@ -224,7 +199,8 @@ void MacroAssembler::RecordWriteField( ra_status, save_fp, remembered_set_action, - OMIT_SMI_CHECK); + OMIT_SMI_CHECK, + pointers_to_here_check_for_value); bind(&done); @@ -237,18 +213,95 @@ void MacroAssembler::RecordWriteField( } +// Will clobber 4 registers: object, map, dst, ip. The +// register 'object' contains a heap object pointer. +void MacroAssembler::RecordWriteForMap(Register object, + Register map, + Register dst, + RAStatus ra_status, + SaveFPRegsMode fp_mode) { + if (emit_debug_code()) { + DCHECK(!dst.is(at)); + lw(dst, FieldMemOperand(map, HeapObject::kMapOffset)); + Check(eq, + kWrongAddressOrValuePassedToRecordWrite, + dst, + Operand(isolate()->factory()->meta_map())); + } + + if (!FLAG_incremental_marking) { + return; + } + + if (emit_debug_code()) { + lw(at, FieldMemOperand(object, HeapObject::kMapOffset)); + Check(eq, + kWrongAddressOrValuePassedToRecordWrite, + map, + Operand(at)); + } + + Label done; + + // A single check of the map's pages interesting flag suffices, since it is + // only set during incremental collection, and then it's also guaranteed that + // the from object's page's interesting flag is also set. This optimization + // relies on the fact that maps can never be in new space. + CheckPageFlag(map, + map, // Used as scratch. + MemoryChunk::kPointersToHereAreInterestingMask, + eq, + &done); + + Addu(dst, object, Operand(HeapObject::kMapOffset - kHeapObjectTag)); + if (emit_debug_code()) { + Label ok; + And(at, dst, Operand((1 << kPointerSizeLog2) - 1)); + Branch(&ok, eq, at, Operand(zero_reg)); + stop("Unaligned cell in write barrier"); + bind(&ok); + } + + // Record the actual write. + if (ra_status == kRAHasNotBeenSaved) { + push(ra); + } + RecordWriteStub stub(isolate(), object, map, dst, OMIT_REMEMBERED_SET, + fp_mode); + CallStub(&stub); + if (ra_status == kRAHasNotBeenSaved) { + pop(ra); + } + + bind(&done); + + // Count number of write barriers in generated code. + isolate()->counters()->write_barriers_static()->Increment(); + IncrementCounter(isolate()->counters()->write_barriers_dynamic(), 1, at, dst); + + // Clobber clobbered registers when running with the debug-code flag + // turned on to provoke errors. + if (emit_debug_code()) { + li(dst, Operand(BitCast<int32_t>(kZapValue + 12))); + li(map, Operand(BitCast<int32_t>(kZapValue + 16))); + } +} + + // Will clobber 4 registers: object, address, scratch, ip. The // register 'object' contains a heap object pointer. The heap object // tag is shifted away. 
-void MacroAssembler::RecordWrite(Register object, - Register address, - Register value, - RAStatus ra_status, - SaveFPRegsMode fp_mode, - RememberedSetAction remembered_set_action, - SmiCheck smi_check) { - ASSERT(!AreAliased(object, address, value, t8)); - ASSERT(!AreAliased(object, address, value, t9)); +void MacroAssembler::RecordWrite( + Register object, + Register address, + Register value, + RAStatus ra_status, + SaveFPRegsMode fp_mode, + RememberedSetAction remembered_set_action, + SmiCheck smi_check, + PointersToHereCheck pointers_to_here_check_for_value) { + DCHECK(!AreAliased(object, address, value, t8)); + DCHECK(!AreAliased(object, address, value, t9)); if (emit_debug_code()) { lw(at, MemOperand(address)); @@ -256,24 +309,27 @@ void MacroAssembler::RecordWrite(Register object, eq, kWrongAddressOrValuePassedToRecordWrite, at, Operand(value)); } - // Count number of write barriers in generated code. - isolate()->counters()->write_barriers_static()->Increment(); - // TODO(mstarzinger): Dynamic counter missing. + if (remembered_set_action == OMIT_REMEMBERED_SET && + !FLAG_incremental_marking) { + return; + } // First, check if a write barrier is even needed. The tests below // catch stores of smis and stores into the young generation. Label done; if (smi_check == INLINE_SMI_CHECK) { - ASSERT_EQ(0, kSmiTag); + DCHECK_EQ(0, kSmiTag); JumpIfSmi(value, &done); } - CheckPageFlag(value, - value, // Used as scratch. - MemoryChunk::kPointersToHereAreInterestingMask, - eq, - &done); + if (pointers_to_here_check_for_value != kPointersToHereAreAlwaysInteresting) { + CheckPageFlag(value, + value, // Used as scratch. + MemoryChunk::kPointersToHereAreInterestingMask, + eq, + &done); + } CheckPageFlag(object, value, // Used as scratch. MemoryChunk::kPointersFromHereAreInterestingMask, @@ -293,6 +349,11 @@ void MacroAssembler::RecordWrite(Register object, bind(&done); + // Count number of write barriers in generated code. + isolate()->counters()->write_barriers_static()->Increment(); + IncrementCounter(isolate()->counters()->write_barriers_dynamic(), 1, at, + value); + // Clobber clobbered registers when running with the debug-code flag // turned on to provoke errors. if (emit_debug_code()) { @@ -330,7 +391,7 @@ void MacroAssembler::RememberedSetHelper(Register object, // For debug tests. if (and_then == kFallThroughAtEnd) { Branch(&done, eq, t8, Operand(zero_reg)); } else { - ASSERT(and_then == kReturnAtEnd); + DCHECK(and_then == kReturnAtEnd); Ret(eq, t8, Operand(zero_reg)); } push(ra); @@ -354,9 +415,9 @@ void MacroAssembler::CheckAccessGlobalProxy(Register holder_reg, Label* miss) { Label same_contexts; - ASSERT(!holder_reg.is(scratch)); - ASSERT(!holder_reg.is(at)); - ASSERT(!scratch.is(at)); + DCHECK(!holder_reg.is(scratch)); + DCHECK(!holder_reg.is(at)); + DCHECK(!scratch.is(at)); // Load current lexical context from the stack frame. lw(scratch, MemOperand(fp, StandardFrameConstants::kContextOffset)); @@ -419,6 +480,9 @@ void MacroAssembler::CheckAccessGlobalProxy(Register holder_reg, } +// Compute the hash code from the untagged key. This must be kept in sync with +// ComputeIntegerHash in utils.h and KeyedLoadGenericStub in +// code-stub-hydrogen.cc void MacroAssembler::GetNumberHash(Register reg0, Register scratch) { // First of all we assign the hash seed to scratch. LoadRoot(scratch, Heap::kHashSeedRootIndex); @@ -508,7 +572,7 @@ void MacroAssembler::LoadFromNumberDictionary(Label* miss, and_(reg2, reg2, reg1); // Scale the index by multiplying by the element size. 
- ASSERT(SeededNumberDictionary::kEntrySize == 3); + DCHECK(SeededNumberDictionary::kEntrySize == 3); sll(at, reg2, 1); // 2x. addu(reg2, reg2, at); // reg2 = reg2 * 3. @@ -551,7 +615,7 @@ void MacroAssembler::Addu(Register rd, Register rs, const Operand& rt) { addiu(rd, rs, rt.imm32_); } else { // li handles the relocation. - ASSERT(!rs.is(at)); + DCHECK(!rs.is(at)); li(at, rt); addu(rd, rs, at); } @@ -567,7 +631,7 @@ void MacroAssembler::Subu(Register rd, Register rs, const Operand& rt) { addiu(rd, rs, -rt.imm32_); // No subiu instr, use addiu(x, y, -imm). } else { // li handles the relocation. - ASSERT(!rs.is(at)); + DCHECK(!rs.is(at)); li(at, rt); subu(rd, rs, at); } @@ -585,7 +649,7 @@ void MacroAssembler::Mul(Register rd, Register rs, const Operand& rt) { } } else { // li handles the relocation. - ASSERT(!rs.is(at)); + DCHECK(!rs.is(at)); li(at, rt); if (kArchVariant == kLoongson) { mult(rs, at); @@ -602,7 +666,7 @@ void MacroAssembler::Mult(Register rs, const Operand& rt) { mult(rs, rt.rm()); } else { // li handles the relocation. - ASSERT(!rs.is(at)); + DCHECK(!rs.is(at)); li(at, rt); mult(rs, at); } @@ -614,7 +678,7 @@ void MacroAssembler::Multu(Register rs, const Operand& rt) { multu(rs, rt.rm()); } else { // li handles the relocation. - ASSERT(!rs.is(at)); + DCHECK(!rs.is(at)); li(at, rt); multu(rs, at); } @@ -626,7 +690,7 @@ void MacroAssembler::Div(Register rs, const Operand& rt) { div(rs, rt.rm()); } else { // li handles the relocation. - ASSERT(!rs.is(at)); + DCHECK(!rs.is(at)); li(at, rt); div(rs, at); } @@ -638,7 +702,7 @@ void MacroAssembler::Divu(Register rs, const Operand& rt) { divu(rs, rt.rm()); } else { // li handles the relocation. - ASSERT(!rs.is(at)); + DCHECK(!rs.is(at)); li(at, rt); divu(rs, at); } @@ -653,7 +717,7 @@ void MacroAssembler::And(Register rd, Register rs, const Operand& rt) { andi(rd, rs, rt.imm32_); } else { // li handles the relocation. - ASSERT(!rs.is(at)); + DCHECK(!rs.is(at)); li(at, rt); and_(rd, rs, at); } @@ -669,7 +733,7 @@ void MacroAssembler::Or(Register rd, Register rs, const Operand& rt) { ori(rd, rs, rt.imm32_); } else { // li handles the relocation. - ASSERT(!rs.is(at)); + DCHECK(!rs.is(at)); li(at, rt); or_(rd, rs, at); } @@ -685,7 +749,7 @@ void MacroAssembler::Xor(Register rd, Register rs, const Operand& rt) { xori(rd, rs, rt.imm32_); } else { // li handles the relocation. - ASSERT(!rs.is(at)); + DCHECK(!rs.is(at)); li(at, rt); xor_(rd, rs, at); } @@ -698,7 +762,7 @@ void MacroAssembler::Nor(Register rd, Register rs, const Operand& rt) { nor(rd, rs, rt.rm()); } else { // li handles the relocation. - ASSERT(!rs.is(at)); + DCHECK(!rs.is(at)); li(at, rt); nor(rd, rs, at); } @@ -706,9 +770,9 @@ void MacroAssembler::Nor(Register rd, Register rs, const Operand& rt) { void MacroAssembler::Neg(Register rs, const Operand& rt) { - ASSERT(rt.is_reg()); - ASSERT(!at.is(rs)); - ASSERT(!at.is(rt.rm())); + DCHECK(rt.is_reg()); + DCHECK(!at.is(rs)); + DCHECK(!at.is(rt.rm())); li(at, -1); xor_(rs, rt.rm(), at); } @@ -722,7 +786,7 @@ void MacroAssembler::Slt(Register rd, Register rs, const Operand& rt) { slti(rd, rs, rt.imm32_); } else { // li handles the relocation. - ASSERT(!rs.is(at)); + DCHECK(!rs.is(at)); li(at, rt); slt(rd, rs, at); } @@ -738,7 +802,7 @@ void MacroAssembler::Sltu(Register rd, Register rs, const Operand& rt) { sltiu(rd, rs, rt.imm32_); } else { // li handles the relocation. 
- ASSERT(!rs.is(at)); + DCHECK(!rs.is(at)); li(at, rt); sltu(rd, rs, at); } @@ -781,7 +845,7 @@ void MacroAssembler::Pref(int32_t hint, const MemOperand& rs) { } -//------------Pseudo-instructions------------- +// ------------Pseudo-instructions------------- void MacroAssembler::Ulw(Register rd, const MemOperand& rs) { lwr(rd, rs); @@ -800,7 +864,7 @@ void MacroAssembler::li(Register dst, Handle<Object> value, LiFlags mode) { if (value->IsSmi()) { li(dst, Operand(value), mode); } else { - ASSERT(value->IsHeapObject()); + DCHECK(value->IsHeapObject()); if (isolate()->heap()->InNewSpace(*value)) { Handle<Cell> cell = isolate()->factory()->NewCell(value); li(dst, Operand(cell)); @@ -813,7 +877,7 @@ void MacroAssembler::li(Register dst, Handle<Object> value, LiFlags mode) { void MacroAssembler::li(Register rd, Operand j, LiFlags mode) { - ASSERT(!j.is_reg()); + DCHECK(!j.is_reg()); BlockTrampolinePoolScope block_trampoline_pool(this); if (!MustUseReg(j.rmode_) && mode == OPTIMIZE_SIZE) { // Normal load of an immediate value which does not need Relocation Info. @@ -966,8 +1030,8 @@ void MacroAssembler::Ext(Register rt, Register rs, uint16_t pos, uint16_t size) { - ASSERT(pos < 32); - ASSERT(pos + size < 33); + DCHECK(pos < 32); + DCHECK(pos + size < 33); if (kArchVariant == kMips32r2) { ext_(rt, rs, pos, size); @@ -989,14 +1053,14 @@ void MacroAssembler::Ins(Register rt, Register rs, uint16_t pos, uint16_t size) { - ASSERT(pos < 32); - ASSERT(pos + size <= 32); - ASSERT(size != 0); + DCHECK(pos < 32); + DCHECK(pos + size <= 32); + DCHECK(size != 0); if (kArchVariant == kMips32r2) { ins_(rt, rs, pos, size); } else { - ASSERT(!rt.is(t8) && !rs.is(t8)); + DCHECK(!rt.is(t8) && !rs.is(t8)); Subu(at, zero_reg, Operand(1)); srl(at, at, 32 - size); and_(t8, rs, at); @@ -1025,9 +1089,9 @@ void MacroAssembler::Cvt_d_uw(FPURegister fd, // We do this by converting rs minus the MSB to avoid sign conversion, // then adding 2^31 to the result (if needed). - ASSERT(!fd.is(scratch)); - ASSERT(!rs.is(t9)); - ASSERT(!rs.is(at)); + DCHECK(!fd.is(scratch)); + DCHECK(!rs.is(t9)); + DCHECK(!rs.is(at)); // Save rs's MSB to t9. Ext(t9, rs, 31, 1); @@ -1111,8 +1175,8 @@ void MacroAssembler::Ceil_w_d(FPURegister fd, FPURegister fs) { void MacroAssembler::Trunc_uw_d(FPURegister fd, Register rs, FPURegister scratch) { - ASSERT(!fd.is(scratch)); - ASSERT(!rs.is(at)); + DCHECK(!fd.is(scratch)); + DCHECK(!rs.is(at)); // Load 2^31 into scratch as its float representation. li(at, 0x41E00000); @@ -1153,7 +1217,7 @@ void MacroAssembler::BranchF(Label* target, return; } - ASSERT(nan || target); + DCHECK(nan || target); // Check for unordered (NaN) cases. if (nan) { c(UN, D, cmp1, cmp2); @@ -1199,7 +1263,7 @@ void MacroAssembler::BranchF(Label* target, break; default: CHECK(0); - }; + } } if (bd == PROTECT) { @@ -1269,8 +1333,8 @@ void MacroAssembler::Movt(Register rd, Register rs, uint16_t cc) { if (kArchVariant == kLoongson) { // Tests an FP condition code and then conditionally move rs to rd. // We do not currently use any FPU cc bit other than bit 0. - ASSERT(cc == 0); - ASSERT(!(rs.is(t8) || rd.is(t8))); + DCHECK(cc == 0); + DCHECK(!(rs.is(t8) || rd.is(t8))); Label done; Register scratch = t8; // For testing purposes we need to fetch content of the FCSR register and @@ -1295,8 +1359,8 @@ void MacroAssembler::Movf(Register rd, Register rs, uint16_t cc) { if (kArchVariant == kLoongson) { // Tests an FP condition code and then conditionally move rs to rd. // We do not currently use any FPU cc bit other than bit 0. 
- ASSERT(cc == 0); - ASSERT(!(rs.is(t8) || rd.is(t8))); + DCHECK(cc == 0); + DCHECK(!(rs.is(t8) || rd.is(t8))); Label done; Register scratch = t8; // For testing purposes we need to fetch content of the FCSR register and @@ -1319,7 +1383,7 @@ void MacroAssembler::Movf(Register rd, Register rs, uint16_t cc) { void MacroAssembler::Clz(Register rd, Register rs) { if (kArchVariant == kLoongson) { - ASSERT(!(rd.is(t8) || rd.is(t9)) && !(rs.is(t8) || rs.is(t9))); + DCHECK(!(rd.is(t8) || rd.is(t9)) && !(rs.is(t8) || rs.is(t9))); Register mask = t8; Register scratch = t9; Label loop, end; @@ -1346,9 +1410,9 @@ void MacroAssembler::EmitFPUTruncate(FPURoundingMode rounding_mode, DoubleRegister double_scratch, Register except_flag, CheckForInexactConversion check_inexact) { - ASSERT(!result.is(scratch)); - ASSERT(!double_input.is(double_scratch)); - ASSERT(!except_flag.is(scratch)); + DCHECK(!result.is(scratch)); + DCHECK(!double_input.is(double_scratch)); + DCHECK(!except_flag.is(scratch)); Label done; @@ -1452,7 +1516,7 @@ void MacroAssembler::TruncateDoubleToI(Register result, void MacroAssembler::TruncateHeapNumberToI(Register result, Register object) { Label done; DoubleRegister double_scratch = f12; - ASSERT(!result.is(object)); + DCHECK(!result.is(object)); ldc1(double_scratch, MemOperand(object, HeapNumber::kValueOffset - kHeapObjectTag)); @@ -1479,7 +1543,7 @@ void MacroAssembler::TruncateNumberToI(Register object, Register scratch, Label* not_number) { Label done; - ASSERT(!result.is(object)); + DCHECK(!result.is(object)); UntagAndJumpIfSmi(result, object, &done); JumpIfNotHeapNumber(object, heap_number_map, scratch, not_number); @@ -1506,7 +1570,7 @@ void MacroAssembler::GetLeastBitsFromInt32(Register dst, // Emulated condtional branches do not emit a nop in the branch delay slot. // // BRANCH_ARGS_CHECK checks that conditional jump arguments are correct. -#define BRANCH_ARGS_CHECK(cond, rs, rt) ASSERT( \ +#define BRANCH_ARGS_CHECK(cond, rs, rt) DCHECK( \ (cond == cc_always && rs.is(zero_reg) && rt.rm().is(zero_reg)) || \ (cond != cc_always && (!rs.is(zero_reg) || !rt.rm().is(zero_reg)))) @@ -1598,7 +1662,7 @@ void MacroAssembler::BranchShort(int16_t offset, Condition cond, Register rs, const Operand& rt, BranchDelaySlot bdslot) { BRANCH_ARGS_CHECK(cond, rs, rt); - ASSERT(!rs.is(zero_reg)); + DCHECK(!rs.is(zero_reg)); Register r2 = no_reg; Register scratch = at; @@ -1698,14 +1762,14 @@ void MacroAssembler::BranchShort(int16_t offset, Condition cond, Register rs, break; case eq: // We don't want any other register but scratch clobbered. - ASSERT(!scratch.is(rs)); + DCHECK(!scratch.is(rs)); r2 = scratch; li(r2, rt); beq(rs, r2, offset); break; case ne: // We don't want any other register but scratch clobbered. 
- ASSERT(!scratch.is(rs)); + DCHECK(!scratch.is(rs)); r2 = scratch; li(r2, rt); bne(rs, r2, offset); @@ -1950,14 +2014,14 @@ void MacroAssembler::BranchShort(Label* L, Condition cond, Register rs, b(offset); break; case eq: - ASSERT(!scratch.is(rs)); + DCHECK(!scratch.is(rs)); r2 = scratch; li(r2, rt); offset = shifted_branch_offset(L, false); beq(rs, r2, offset); break; case ne: - ASSERT(!scratch.is(rs)); + DCHECK(!scratch.is(rs)); r2 = scratch; li(r2, rt); offset = shifted_branch_offset(L, false); @@ -1969,7 +2033,7 @@ void MacroAssembler::BranchShort(Label* L, Condition cond, Register rs, offset = shifted_branch_offset(L, false); bgtz(rs, offset); } else { - ASSERT(!scratch.is(rs)); + DCHECK(!scratch.is(rs)); r2 = scratch; li(r2, rt); slt(scratch, r2, rs); @@ -1986,7 +2050,7 @@ void MacroAssembler::BranchShort(Label* L, Condition cond, Register rs, offset = shifted_branch_offset(L, false); beq(scratch, zero_reg, offset); } else { - ASSERT(!scratch.is(rs)); + DCHECK(!scratch.is(rs)); r2 = scratch; li(r2, rt); slt(scratch, rs, r2); @@ -2003,7 +2067,7 @@ void MacroAssembler::BranchShort(Label* L, Condition cond, Register rs, offset = shifted_branch_offset(L, false); bne(scratch, zero_reg, offset); } else { - ASSERT(!scratch.is(rs)); + DCHECK(!scratch.is(rs)); r2 = scratch; li(r2, rt); slt(scratch, rs, r2); @@ -2016,7 +2080,7 @@ void MacroAssembler::BranchShort(Label* L, Condition cond, Register rs, offset = shifted_branch_offset(L, false); blez(rs, offset); } else { - ASSERT(!scratch.is(rs)); + DCHECK(!scratch.is(rs)); r2 = scratch; li(r2, rt); slt(scratch, r2, rs); @@ -2028,9 +2092,9 @@ void MacroAssembler::BranchShort(Label* L, Condition cond, Register rs, case Ugreater: if (rt.imm32_ == 0) { offset = shifted_branch_offset(L, false); - bgtz(rs, offset); + bne(rs, zero_reg, offset); } else { - ASSERT(!scratch.is(rs)); + DCHECK(!scratch.is(rs)); r2 = scratch; li(r2, rt); sltu(scratch, r2, rs); @@ -2047,7 +2111,7 @@ void MacroAssembler::BranchShort(Label* L, Condition cond, Register rs, offset = shifted_branch_offset(L, false); beq(scratch, zero_reg, offset); } else { - ASSERT(!scratch.is(rs)); + DCHECK(!scratch.is(rs)); r2 = scratch; li(r2, rt); sltu(scratch, rs, r2); @@ -2064,7 +2128,7 @@ void MacroAssembler::BranchShort(Label* L, Condition cond, Register rs, offset = shifted_branch_offset(L, false); bne(scratch, zero_reg, offset); } else { - ASSERT(!scratch.is(rs)); + DCHECK(!scratch.is(rs)); r2 = scratch; li(r2, rt); sltu(scratch, rs, r2); @@ -2077,7 +2141,7 @@ void MacroAssembler::BranchShort(Label* L, Condition cond, Register rs, offset = shifted_branch_offset(L, false); beq(rs, zero_reg, offset); } else { - ASSERT(!scratch.is(rs)); + DCHECK(!scratch.is(rs)); r2 = scratch; li(r2, rt); sltu(scratch, r2, rs); @@ -2090,7 +2154,7 @@ void MacroAssembler::BranchShort(Label* L, Condition cond, Register rs, } } // Check that offset could actually hold on an int16_t. - ASSERT(is_int16(offset)); + DCHECK(is_int16(offset)); // Emit a nop in the branch delay slot if required. if (bdslot == PROTECT) nop(); @@ -2352,7 +2416,7 @@ void MacroAssembler::BranchAndLinkShort(Label* L, Condition cond, Register rs, } } // Check that offset could actually hold on an int16_t. - ASSERT(is_int16(offset)); + DCHECK(is_int16(offset)); // Emit a nop in the branch delay slot if required. 
if (bdslot == PROTECT) @@ -2403,7 +2467,7 @@ void MacroAssembler::Jump(Address target, Register rs, const Operand& rt, BranchDelaySlot bd) { - ASSERT(!RelocInfo::IsCodeTarget(rmode)); + DCHECK(!RelocInfo::IsCodeTarget(rmode)); Jump(reinterpret_cast<intptr_t>(target), rmode, cond, rs, rt, bd); } @@ -2414,7 +2478,7 @@ void MacroAssembler::Jump(Handle<Code> code, Register rs, const Operand& rt, BranchDelaySlot bd) { - ASSERT(RelocInfo::IsCodeTarget(rmode)); + DCHECK(RelocInfo::IsCodeTarget(rmode)); AllowDeferredHandleDereference embedding_raw_address; Jump(reinterpret_cast<intptr_t>(code.location()), rmode, cond, rs, rt, bd); } @@ -2460,7 +2524,7 @@ void MacroAssembler::Call(Register target, if (bd == PROTECT) nop(); - ASSERT_EQ(CallSize(target, cond, rs, rt, bd), + DCHECK_EQ(CallSize(target, cond, rs, rt, bd), SizeOfCodeGeneratedSince(&start)); } @@ -2491,7 +2555,7 @@ void MacroAssembler::Call(Address target, positions_recorder()->WriteRecordedPositions(); li(t9, Operand(target_int, rmode), CONSTANT_SIZE); Call(t9, cond, rs, rt, bd); - ASSERT_EQ(CallSize(target, rmode, cond, rs, rt, bd), + DCHECK_EQ(CallSize(target, rmode, cond, rs, rt, bd), SizeOfCodeGeneratedSince(&start)); } @@ -2519,14 +2583,14 @@ void MacroAssembler::Call(Handle<Code> code, BlockTrampolinePoolScope block_trampoline_pool(this); Label start; bind(&start); - ASSERT(RelocInfo::IsCodeTarget(rmode)); + DCHECK(RelocInfo::IsCodeTarget(rmode)); if (rmode == RelocInfo::CODE_TARGET && !ast_id.IsNone()) { SetRecordedAstId(ast_id); rmode = RelocInfo::CODE_TARGET_WITH_ID; } AllowDeferredHandleDereference embedding_raw_address; Call(reinterpret_cast<Address>(code.location()), rmode, cond, rs, rt, bd); - ASSERT_EQ(CallSize(code, rmode, ast_id, cond, rs, rt, bd), + DCHECK_EQ(CallSize(code, rmode, ast_id, cond, rs, rt, bd), SizeOfCodeGeneratedSince(&start)); } @@ -2674,7 +2738,7 @@ void MacroAssembler::DebugBreak() { PrepareCEntryArgs(0); PrepareCEntryFunction(ExternalReference(Runtime::kDebugBreak, isolate())); CEntryStub ces(isolate(), 1); - ASSERT(AllowThisStubCall(&ces)); + DCHECK(AllowThisStubCall(&ces)); Call(ces.GetCode(), RelocInfo::DEBUG_BREAK); } @@ -2704,7 +2768,7 @@ void MacroAssembler::PushTryHandler(StackHandler::Kind kind, // Push the frame pointer, context, state, and code object. if (kind == StackHandler::JS_ENTRY) { - ASSERT_EQ(Smi::FromInt(0), 0); + DCHECK_EQ(Smi::FromInt(0), 0); // The second zero_reg indicates no context. // The first zero_reg is the NULL frame pointer. // The operands are reversed to match the order of MultiPush/Pop. @@ -2832,7 +2896,7 @@ void MacroAssembler::Allocate(int object_size, Register scratch2, Label* gc_required, AllocationFlags flags) { - ASSERT(object_size <= Page::kMaxRegularHeapObjectSize); + DCHECK(object_size <= Page::kMaxRegularHeapObjectSize); if (!FLAG_inline_new) { if (emit_debug_code()) { // Trash the registers to simulate an allocation failure. @@ -2844,18 +2908,18 @@ void MacroAssembler::Allocate(int object_size, return; } - ASSERT(!result.is(scratch1)); - ASSERT(!result.is(scratch2)); - ASSERT(!scratch1.is(scratch2)); - ASSERT(!scratch1.is(t9)); - ASSERT(!scratch2.is(t9)); - ASSERT(!result.is(t9)); + DCHECK(!result.is(scratch1)); + DCHECK(!result.is(scratch2)); + DCHECK(!scratch1.is(scratch2)); + DCHECK(!scratch1.is(t9)); + DCHECK(!scratch2.is(t9)); + DCHECK(!result.is(t9)); // Make object size into bytes. 
if ((flags & SIZE_IN_WORDS) != 0) { object_size *= kPointerSize; } - ASSERT_EQ(0, object_size & kObjectAlignmentMask); + DCHECK_EQ(0, object_size & kObjectAlignmentMask); // Check relative positions of allocation top and limit addresses. // ARM adds additional checks to make sure the ldm instruction can be @@ -2869,7 +2933,7 @@ void MacroAssembler::Allocate(int object_size, reinterpret_cast<intptr_t>(allocation_top.address()); intptr_t limit = reinterpret_cast<intptr_t>(allocation_limit.address()); - ASSERT((limit - top) == kPointerSize); + DCHECK((limit - top) == kPointerSize); // Set up allocation top address and object size registers. Register topaddr = scratch1; @@ -2895,8 +2959,8 @@ void MacroAssembler::Allocate(int object_size, if ((flags & DOUBLE_ALIGNMENT) != 0) { // Align the next allocation. Storing the filler map without checking top is // safe in new-space because the limit of the heap is aligned there. - ASSERT((flags & PRETENURE_OLD_POINTER_SPACE) == 0); - ASSERT(kPointerAlignment * 2 == kDoubleAlignment); + DCHECK((flags & PRETENURE_OLD_POINTER_SPACE) == 0); + DCHECK(kPointerAlignment * 2 == kDoubleAlignment); And(scratch2, result, Operand(kDoubleAlignmentMask)); Label aligned; Branch(&aligned, eq, scratch2, Operand(zero_reg)); @@ -2939,11 +3003,11 @@ void MacroAssembler::Allocate(Register object_size, return; } - ASSERT(!result.is(scratch1)); - ASSERT(!result.is(scratch2)); - ASSERT(!scratch1.is(scratch2)); - ASSERT(!object_size.is(t9)); - ASSERT(!scratch1.is(t9) && !scratch2.is(t9) && !result.is(t9)); + DCHECK(!result.is(scratch1)); + DCHECK(!result.is(scratch2)); + DCHECK(!scratch1.is(scratch2)); + DCHECK(!object_size.is(t9)); + DCHECK(!scratch1.is(t9) && !scratch2.is(t9) && !result.is(t9)); // Check relative positions of allocation top and limit addresses. // ARM adds additional checks to make sure the ldm instruction can be @@ -2956,7 +3020,7 @@ void MacroAssembler::Allocate(Register object_size, reinterpret_cast<intptr_t>(allocation_top.address()); intptr_t limit = reinterpret_cast<intptr_t>(allocation_limit.address()); - ASSERT((limit - top) == kPointerSize); + DCHECK((limit - top) == kPointerSize); // Set up allocation top address and object size registers. Register topaddr = scratch1; @@ -2982,8 +3046,8 @@ void MacroAssembler::Allocate(Register object_size, if ((flags & DOUBLE_ALIGNMENT) != 0) { // Align the next allocation. Storing the filler map without checking top is // safe in new-space because the limit of the heap is aligned there. - ASSERT((flags & PRETENURE_OLD_POINTER_SPACE) == 0); - ASSERT(kPointerAlignment * 2 == kDoubleAlignment); + DCHECK((flags & PRETENURE_OLD_POINTER_SPACE) == 0); + DCHECK(kPointerAlignment * 2 == kDoubleAlignment); And(scratch2, result, Operand(kDoubleAlignmentMask)); Label aligned; Branch(&aligned, eq, scratch2, Operand(zero_reg)); @@ -3049,7 +3113,7 @@ void MacroAssembler::AllocateTwoByteString(Register result, Label* gc_required) { // Calculate the number of bytes needed for the characters in the string while // observing object alignment. - ASSERT((SeqTwoByteString::kHeaderSize & kObjectAlignmentMask) == 0); + DCHECK((SeqTwoByteString::kHeaderSize & kObjectAlignmentMask) == 0); sll(scratch1, length, 1); // Length in bytes, not chars. addiu(scratch1, scratch1, kObjectAlignmentMask + SeqTwoByteString::kHeaderSize); @@ -3080,8 +3144,8 @@ void MacroAssembler::AllocateAsciiString(Register result, Label* gc_required) { // Calculate the number of bytes needed for the characters in the string // while observing object alignment. 
- ASSERT((SeqOneByteString::kHeaderSize & kObjectAlignmentMask) == 0); - ASSERT(kCharSize == 1); + DCHECK((SeqOneByteString::kHeaderSize & kObjectAlignmentMask) == 0); + DCHECK(kCharSize == 1); addiu(scratch1, length, kObjectAlignmentMask + SeqOneByteString::kHeaderSize); And(scratch1, scratch1, Operand(~kObjectAlignmentMask)); @@ -3122,33 +3186,12 @@ void MacroAssembler::AllocateAsciiConsString(Register result, Register scratch1, Register scratch2, Label* gc_required) { - Label allocate_new_space, install_map; - AllocationFlags flags = TAG_OBJECT; - - ExternalReference high_promotion_mode = ExternalReference:: - new_space_high_promotion_mode_active_address(isolate()); - li(scratch1, Operand(high_promotion_mode)); - lw(scratch1, MemOperand(scratch1, 0)); - Branch(&allocate_new_space, eq, scratch1, Operand(zero_reg)); - - Allocate(ConsString::kSize, - result, - scratch1, - scratch2, - gc_required, - static_cast<AllocationFlags>(flags | PRETENURE_OLD_POINTER_SPACE)); - - jmp(&install_map); - - bind(&allocate_new_space); Allocate(ConsString::kSize, result, scratch1, scratch2, gc_required, - flags); - - bind(&install_map); + TAG_OBJECT); InitializeNewString(result, length, @@ -3209,14 +3252,19 @@ void MacroAssembler::AllocateHeapNumber(Register result, Register scratch2, Register heap_number_map, Label* need_gc, - TaggingMode tagging_mode) { + TaggingMode tagging_mode, + MutableMode mode) { // Allocate an object in the heap for the heap number and tag it as a heap // object. Allocate(HeapNumber::kSize, result, scratch1, scratch2, need_gc, tagging_mode == TAG_RESULT ? TAG_OBJECT : NO_ALLOCATION_FLAGS); + Heap::RootListIndex map_index = mode == MUTABLE + ? Heap::kMutableHeapNumberMapRootIndex + : Heap::kHeapNumberMapRootIndex; + AssertIsRoot(heap_number_map, map_index); + // Store heap number map in the allocated object. - AssertIsRoot(heap_number_map, Heap::kHeapNumberMapRootIndex); if (tagging_mode == TAG_RESULT) { sw(heap_number_map, FieldMemOperand(result, HeapObject::kMapOffset)); } else { @@ -3241,8 +3289,8 @@ void MacroAssembler::CopyFields(Register dst, Register src, RegList temps, int field_count) { - ASSERT((temps & dst.bit()) == 0); - ASSERT((temps & src.bit()) == 0); + DCHECK((temps & dst.bit()) == 0); + DCHECK((temps & src.bit()) == 0); // Primitive implementation using only one temporary register. Register tmp = no_reg; @@ -3253,7 +3301,7 @@ void MacroAssembler::CopyFields(Register dst, break; } } - ASSERT(!tmp.is(no_reg)); + DCHECK(!tmp.is(no_reg)); for (int i = 0; i < field_count; i++) { lw(tmp, FieldMemOperand(src, i * kPointerSize)); @@ -3572,7 +3620,7 @@ void MacroAssembler::MovToFloatParameters(DoubleRegister src1, DoubleRegister src2) { if (!IsMipsSoftFloatABI) { if (src2.is(f12)) { - ASSERT(!src1.is(f14)); + DCHECK(!src1.is(f14)); Move(f14, src2); Move(f12, src1); } else { @@ -3615,12 +3663,12 @@ void MacroAssembler::InvokePrologue(const ParameterCount& expected, // The code below is made a lot easier because the calling code already sets // up actual and expected registers according to the contract if values are // passed in registers. 
- ASSERT(actual.is_immediate() || actual.reg().is(a0)); - ASSERT(expected.is_immediate() || expected.reg().is(a2)); - ASSERT((!code_constant.is_null() && code_reg.is(no_reg)) || code_reg.is(a3)); + DCHECK(actual.is_immediate() || actual.reg().is(a0)); + DCHECK(expected.is_immediate() || expected.reg().is(a2)); + DCHECK((!code_constant.is_null() && code_reg.is(no_reg)) || code_reg.is(a3)); if (expected.is_immediate()) { - ASSERT(actual.is_immediate()); + DCHECK(actual.is_immediate()); if (expected.immediate() == actual.immediate()) { definitely_matches = true; } else { @@ -3673,7 +3721,7 @@ void MacroAssembler::InvokeCode(Register code, InvokeFlag flag, const CallWrapper& call_wrapper) { // You can't call a function without a valid frame. - ASSERT(flag == JUMP_FUNCTION || has_frame()); + DCHECK(flag == JUMP_FUNCTION || has_frame()); Label done; @@ -3687,7 +3735,7 @@ void MacroAssembler::InvokeCode(Register code, Call(code); call_wrapper.AfterCall(); } else { - ASSERT(flag == JUMP_FUNCTION); + DCHECK(flag == JUMP_FUNCTION); Jump(code); } // Continue here if InvokePrologue does handle the invocation due to @@ -3702,10 +3750,10 @@ void MacroAssembler::InvokeFunction(Register function, InvokeFlag flag, const CallWrapper& call_wrapper) { // You can't call a function without a valid frame. - ASSERT(flag == JUMP_FUNCTION || has_frame()); + DCHECK(flag == JUMP_FUNCTION || has_frame()); // Contract with called JS functions requires that function is passed in a1. - ASSERT(function.is(a1)); + DCHECK(function.is(a1)); Register expected_reg = a2; Register code_reg = a3; @@ -3728,10 +3776,10 @@ void MacroAssembler::InvokeFunction(Register function, InvokeFlag flag, const CallWrapper& call_wrapper) { // You can't call a function without a valid frame. - ASSERT(flag == JUMP_FUNCTION || has_frame()); + DCHECK(flag == JUMP_FUNCTION || has_frame()); // Contract with called JS functions requires that function is passed in a1. - ASSERT(function.is(a1)); + DCHECK(function.is(a1)); // Get the function and setup the context. lw(cp, FieldMemOperand(a1, JSFunction::kContextOffset)); @@ -3775,7 +3823,7 @@ void MacroAssembler::IsInstanceJSObjectType(Register map, void MacroAssembler::IsObjectJSStringType(Register object, Register scratch, Label* fail) { - ASSERT(kNotStringTag != 0); + DCHECK(kNotStringTag != 0); lw(scratch, FieldMemOperand(object, HeapObject::kMapOffset)); lbu(scratch, FieldMemOperand(scratch, Map::kInstanceTypeOffset)); @@ -3802,14 +3850,15 @@ void MacroAssembler::TryGetFunctionPrototype(Register function, Register scratch, Label* miss, bool miss_on_bound_function) { - // Check that the receiver isn't a smi. - JumpIfSmi(function, miss); + Label non_instance; + if (miss_on_bound_function) { + // Check that the receiver isn't a smi. + JumpIfSmi(function, miss); - // Check that the function really is a function. Load map into result reg. - GetObjectType(function, result, scratch); - Branch(miss, ne, scratch, Operand(JS_FUNCTION_TYPE)); + // Check that the function really is a function. Load map into result reg. 
+ GetObjectType(function, result, scratch); + Branch(miss, ne, scratch, Operand(JS_FUNCTION_TYPE)); - if (miss_on_bound_function) { lw(scratch, FieldMemOperand(function, JSFunction::kSharedFunctionInfoOffset)); lw(scratch, @@ -3817,13 +3866,12 @@ void MacroAssembler::TryGetFunctionPrototype(Register function, And(scratch, scratch, Operand(Smi::FromInt(1 << SharedFunctionInfo::kBoundFunction))); Branch(miss, ne, scratch, Operand(zero_reg)); - } - // Make sure that the function has an instance prototype. - Label non_instance; - lbu(scratch, FieldMemOperand(result, Map::kBitFieldOffset)); - And(scratch, scratch, Operand(1 << Map::kHasNonInstancePrototype)); - Branch(&non_instance, ne, scratch, Operand(zero_reg)); + // Make sure that the function has an instance prototype. + lbu(scratch, FieldMemOperand(result, Map::kBitFieldOffset)); + And(scratch, scratch, Operand(1 << Map::kHasNonInstancePrototype)); + Branch(&non_instance, ne, scratch, Operand(zero_reg)); + } // Get the prototype or initial map from the function. lw(result, @@ -3842,12 +3890,15 @@ void MacroAssembler::TryGetFunctionPrototype(Register function, // Get the prototype from the initial map. lw(result, FieldMemOperand(result, Map::kPrototypeOffset)); - jmp(&done); - // Non-instance prototype: Fetch prototype from constructor field - // in initial map. - bind(&non_instance); - lw(result, FieldMemOperand(result, Map::kConstructorOffset)); + if (miss_on_bound_function) { + jmp(&done); + + // Non-instance prototype: Fetch prototype from constructor field + // in initial map. + bind(&non_instance); + lw(result, FieldMemOperand(result, Map::kConstructorOffset)); + } // All done. bind(&done); @@ -3871,7 +3922,7 @@ void MacroAssembler::CallStub(CodeStub* stub, Register r1, const Operand& r2, BranchDelaySlot bd) { - ASSERT(AllowThisStubCall(stub)); // Stub calls are not allowed in some stubs. + DCHECK(AllowThisStubCall(stub)); // Stub calls are not allowed in some stubs. Call(stub->GetCode(), RelocInfo::CODE_TARGET, ast_id, cond, r1, r2, bd); } @@ -3907,7 +3958,7 @@ void MacroAssembler::CallApiFunctionAndReturn( ExternalReference::handle_scope_level_address(isolate()), next_address); - ASSERT(function_address.is(a1) || function_address.is(a2)); + DCHECK(function_address.is(a1) || function_address.is(a2)); Label profiler_disabled; Label end_profiler_check; @@ -3996,7 +4047,7 @@ void MacroAssembler::CallApiFunctionAndReturn( { FrameScope frame(this, StackFrame::INTERNAL); CallExternalReference( - ExternalReference(Runtime::kHiddenPromoteScheduledException, isolate()), + ExternalReference(Runtime::kPromoteScheduledException, isolate()), 0); } jmp(&exception_handled); @@ -4020,19 +4071,14 @@ bool MacroAssembler::AllowThisStubCall(CodeStub* stub) { } -void MacroAssembler::IndexFromHash(Register hash, - Register index) { +void MacroAssembler::IndexFromHash(Register hash, Register index) { // If the hash field contains an array index pick it out. The assert checks // that the constants for the maximum number of digits for an array index // cached in the hash field and the number of bits reserved for it does not // conflict. - ASSERT(TenToThe(String::kMaxCachedArrayIndexLength) < + DCHECK(TenToThe(String::kMaxCachedArrayIndexLength) < (1 << String::kArrayIndexValueBits)); - // We want the smi-tagged index in key. kArrayIndexValueMask has zeros in - // the low kHashShift bits. 
- STATIC_ASSERT(kSmiTag == 0); - Ext(hash, hash, String::kHashShift, String::kArrayIndexValueBits); - sll(index, hash, kSmiTagSize); + DecodeFieldToSmi<String::ArrayIndexValueBits>(index, hash); } @@ -4087,18 +4133,18 @@ void MacroAssembler::AdduAndCheckForOverflow(Register dst, Register right, Register overflow_dst, Register scratch) { - ASSERT(!dst.is(overflow_dst)); - ASSERT(!dst.is(scratch)); - ASSERT(!overflow_dst.is(scratch)); - ASSERT(!overflow_dst.is(left)); - ASSERT(!overflow_dst.is(right)); + DCHECK(!dst.is(overflow_dst)); + DCHECK(!dst.is(scratch)); + DCHECK(!overflow_dst.is(scratch)); + DCHECK(!overflow_dst.is(left)); + DCHECK(!overflow_dst.is(right)); if (left.is(right) && dst.is(left)) { - ASSERT(!dst.is(t9)); - ASSERT(!scratch.is(t9)); - ASSERT(!left.is(t9)); - ASSERT(!right.is(t9)); - ASSERT(!overflow_dst.is(t9)); + DCHECK(!dst.is(t9)); + DCHECK(!scratch.is(t9)); + DCHECK(!left.is(t9)); + DCHECK(!right.is(t9)); + DCHECK(!overflow_dst.is(t9)); mov(t9, right); right = t9; } @@ -4129,13 +4175,13 @@ void MacroAssembler::SubuAndCheckForOverflow(Register dst, Register right, Register overflow_dst, Register scratch) { - ASSERT(!dst.is(overflow_dst)); - ASSERT(!dst.is(scratch)); - ASSERT(!overflow_dst.is(scratch)); - ASSERT(!overflow_dst.is(left)); - ASSERT(!overflow_dst.is(right)); - ASSERT(!scratch.is(left)); - ASSERT(!scratch.is(right)); + DCHECK(!dst.is(overflow_dst)); + DCHECK(!dst.is(scratch)); + DCHECK(!overflow_dst.is(scratch)); + DCHECK(!overflow_dst.is(left)); + DCHECK(!overflow_dst.is(right)); + DCHECK(!scratch.is(left)); + DCHECK(!scratch.is(right)); // This happens with some crankshaft code. Since Subu works fine if // left == right, let's not make that restriction here. @@ -4236,7 +4282,7 @@ void MacroAssembler::InvokeBuiltin(Builtins::JavaScript id, InvokeFlag flag, const CallWrapper& call_wrapper) { // You can't call a builtin without a valid frame. - ASSERT(flag == JUMP_FUNCTION || has_frame()); + DCHECK(flag == JUMP_FUNCTION || has_frame()); GetBuiltinEntry(t9, id); if (flag == CALL_FUNCTION) { @@ -4244,7 +4290,7 @@ void MacroAssembler::InvokeBuiltin(Builtins::JavaScript id, Call(t9); call_wrapper.AfterCall(); } else { - ASSERT(flag == JUMP_FUNCTION); + DCHECK(flag == JUMP_FUNCTION); Jump(t9); } } @@ -4262,7 +4308,7 @@ void MacroAssembler::GetBuiltinFunction(Register target, void MacroAssembler::GetBuiltinEntry(Register target, Builtins::JavaScript id) { - ASSERT(!target.is(a1)); + DCHECK(!target.is(a1)); GetBuiltinFunction(a1, id); // Load the code entry point from the builtins object. 
lw(target, FieldMemOperand(a1, JSFunction::kCodeEntryOffset)); @@ -4281,7 +4327,7 @@ void MacroAssembler::SetCounter(StatsCounter* counter, int value, void MacroAssembler::IncrementCounter(StatsCounter* counter, int value, Register scratch1, Register scratch2) { - ASSERT(value > 0); + DCHECK(value > 0); if (FLAG_native_code_counters && counter->Enabled()) { li(scratch2, Operand(ExternalReference(counter))); lw(scratch1, MemOperand(scratch2)); @@ -4293,7 +4339,7 @@ void MacroAssembler::IncrementCounter(StatsCounter* counter, int value, void MacroAssembler::DecrementCounter(StatsCounter* counter, int value, Register scratch1, Register scratch2) { - ASSERT(value > 0); + DCHECK(value > 0); if (FLAG_native_code_counters && counter->Enabled()) { li(scratch2, Operand(ExternalReference(counter))); lw(scratch1, MemOperand(scratch2)); @@ -4315,7 +4361,7 @@ void MacroAssembler::Assert(Condition cc, BailoutReason reason, void MacroAssembler::AssertFastElements(Register elements) { if (emit_debug_code()) { - ASSERT(!elements.is(at)); + DCHECK(!elements.is(at)); Label ok; push(elements); lw(elements, FieldMemOperand(elements, HeapObject::kMapOffset)); @@ -4378,7 +4424,7 @@ void MacroAssembler::Abort(BailoutReason reason) { // generated instructions is 10, so we use this as a maximum value. static const int kExpectedAbortInstructions = 10; int abort_instructions = InstructionsGeneratedSince(&abort_start); - ASSERT(abort_instructions <= kExpectedAbortInstructions); + DCHECK(abort_instructions <= kExpectedAbortInstructions); while (abort_instructions++ < kExpectedAbortInstructions) { nop(); } @@ -4457,36 +4503,37 @@ void MacroAssembler::LoadGlobalFunctionInitialMap(Register function, } -void MacroAssembler::Prologue(PrologueFrameMode frame_mode) { - if (frame_mode == BUILD_STUB_FRAME) { +void MacroAssembler::StubPrologue() { Push(ra, fp, cp); Push(Smi::FromInt(StackFrame::STUB)); // Adjust FP to point to saved FP. Addu(fp, sp, Operand(StandardFrameConstants::kFixedFrameSizeFromFp)); - } else { - PredictableCodeSizeScope predictible_code_size_scope( +} + + +void MacroAssembler::Prologue(bool code_pre_aging) { + PredictableCodeSizeScope predictible_code_size_scope( this, kNoCodeAgeSequenceLength); - // The following three instructions must remain together and unmodified - // for code aging to work properly. - if (isolate()->IsCodePreAgingActive()) { - // Pre-age the code. - Code* stub = Code::GetPreAgedCodeAgeStub(isolate()); - nop(Assembler::CODE_AGE_MARKER_NOP); - // Load the stub address to t9 and call it, - // GetCodeAgeAndParity() extracts the stub address from this instruction. - li(t9, - Operand(reinterpret_cast<uint32_t>(stub->instruction_start())), - CONSTANT_SIZE); - nop(); // Prevent jalr to jal optimization. - jalr(t9, a0); - nop(); // Branch delay slot nop. - nop(); // Pad the empty space. - } else { - Push(ra, fp, cp, a1); - nop(Assembler::CODE_AGE_SEQUENCE_NOP); - // Adjust fp to point to caller's fp. - Addu(fp, sp, Operand(StandardFrameConstants::kFixedFrameSizeFromFp)); - } + // The following three instructions must remain together and unmodified + // for code aging to work properly. + if (code_pre_aging) { + // Pre-age the code. + Code* stub = Code::GetPreAgedCodeAgeStub(isolate()); + nop(Assembler::CODE_AGE_MARKER_NOP); + // Load the stub address to t9 and call it, + // GetCodeAgeAndParity() extracts the stub address from this instruction. + li(t9, + Operand(reinterpret_cast<uint32_t>(stub->instruction_start())), + CONSTANT_SIZE); + nop(); // Prevent jalr to jal optimization. 
+ jalr(t9, a0); + nop(); // Branch delay slot nop. + nop(); // Pad the empty space. + } else { + Push(ra, fp, cp, a1); + nop(Assembler::CODE_AGE_SEQUENCE_NOP); + // Adjust fp to point to caller's fp. + Addu(fp, sp, Operand(StandardFrameConstants::kFixedFrameSizeFromFp)); } } @@ -4553,9 +4600,9 @@ void MacroAssembler::EnterExitFrame(bool save_doubles, const int frame_alignment = MacroAssembler::ActivationFrameAlignment(); if (save_doubles) { // The stack must be allign to 0 modulo 8 for stores with sdc1. - ASSERT(kDoubleSize == frame_alignment); + DCHECK(kDoubleSize == frame_alignment); if (frame_alignment > 0) { - ASSERT(IsPowerOf2(frame_alignment)); + DCHECK(IsPowerOf2(frame_alignment)); And(sp, sp, Operand(-frame_alignment)); // Align stack. } int space = FPURegister::kMaxNumRegisters * kDoubleSize; @@ -4570,10 +4617,10 @@ void MacroAssembler::EnterExitFrame(bool save_doubles, // Reserve place for the return address, stack space and an optional slot // (used by the DirectCEntryStub to hold the return value if a struct is // returned) and align the frame preparing for calling the runtime function. - ASSERT(stack_space >= 0); + DCHECK(stack_space >= 0); Subu(sp, sp, Operand((stack_space + 2) * kPointerSize)); if (frame_alignment > 0) { - ASSERT(IsPowerOf2(frame_alignment)); + DCHECK(IsPowerOf2(frame_alignment)); And(sp, sp, Operand(-frame_alignment)); // Align stack. } @@ -4650,7 +4697,7 @@ int MacroAssembler::ActivationFrameAlignment() { // environment. // Note: This will break if we ever start generating snapshots on one Mips // platform for another Mips platform with a different alignment. - return OS::ActivationFrameAlignment(); + return base::OS::ActivationFrameAlignment(); #else // V8_HOST_ARCH_MIPS // If we are using the simulator then we should always align to the expected // alignment. As the simulator is used to generate snapshots we do not know @@ -4668,7 +4715,7 @@ void MacroAssembler::AssertStackIsAligned() { if (frame_alignment > kPointerSize) { Label alignment_as_expected; - ASSERT(IsPowerOf2(frame_alignment)); + DCHECK(IsPowerOf2(frame_alignment)); andi(at, sp, frame_alignment_mask); Branch(&alignment_as_expected, eq, at, Operand(zero_reg)); // Don't use Check here, as it will call Runtime_Abort re-entering here. @@ -4692,7 +4739,7 @@ void MacroAssembler::JumpIfNotPowerOfTwoOrZero( void MacroAssembler::SmiTagCheckOverflow(Register reg, Register overflow) { - ASSERT(!reg.is(overflow)); + DCHECK(!reg.is(overflow)); mov(overflow, reg); // Save original value. SmiTag(reg); xor_(overflow, overflow, reg); // Overflow if (value ^ 2 * value) < 0. @@ -4706,9 +4753,9 @@ void MacroAssembler::SmiTagCheckOverflow(Register dst, // Fall back to slower case. SmiTagCheckOverflow(dst, overflow); } else { - ASSERT(!dst.is(src)); - ASSERT(!dst.is(overflow)); - ASSERT(!src.is(overflow)); + DCHECK(!dst.is(src)); + DCHECK(!dst.is(overflow)); + DCHECK(!src.is(overflow)); SmiTag(dst, src); xor_(overflow, dst, src); // Overflow if (value ^ 2 * value) < 0. 
} @@ -4734,7 +4781,7 @@ void MacroAssembler::JumpIfSmi(Register value, Label* smi_label, Register scratch, BranchDelaySlot bd) { - ASSERT_EQ(0, kSmiTag); + DCHECK_EQ(0, kSmiTag); andi(scratch, value, kSmiTagMask); Branch(bd, smi_label, eq, scratch, Operand(zero_reg)); } @@ -4743,7 +4790,7 @@ void MacroAssembler::JumpIfNotSmi(Register value, Label* not_smi_label, Register scratch, BranchDelaySlot bd) { - ASSERT_EQ(0, kSmiTag); + DCHECK_EQ(0, kSmiTag); andi(scratch, value, kSmiTagMask); Branch(bd, not_smi_label, ne, scratch, Operand(zero_reg)); } @@ -4753,7 +4800,7 @@ void MacroAssembler::JumpIfNotBothSmi(Register reg1, Register reg2, Label* on_not_both_smi) { STATIC_ASSERT(kSmiTag == 0); - ASSERT_EQ(1, kSmiTagMask); + DCHECK_EQ(1, kSmiTagMask); or_(at, reg1, reg2); JumpIfNotSmi(at, on_not_both_smi); } @@ -4763,7 +4810,7 @@ void MacroAssembler::JumpIfEitherSmi(Register reg1, Register reg2, Label* on_either_smi) { STATIC_ASSERT(kSmiTag == 0); - ASSERT_EQ(1, kSmiTagMask); + DCHECK_EQ(1, kSmiTagMask); // Both Smi tags must be 1 (not Smi). and_(at, reg1, reg2); JumpIfSmi(at, on_either_smi); @@ -4835,7 +4882,7 @@ void MacroAssembler::AssertUndefinedOrAllocationSite(Register object, void MacroAssembler::AssertIsRoot(Register reg, Heap::RootListIndex index) { if (emit_debug_code()) { - ASSERT(!reg.is(at)); + DCHECK(!reg.is(at)); LoadRoot(at, index); Check(eq, kHeapNumberMapRegisterClobbered, reg, Operand(at)); } @@ -4980,7 +5027,7 @@ void MacroAssembler::JumpIfBothInstanceTypesAreNotSequentialAscii( kIsNotStringMask | kStringEncodingMask | kStringRepresentationMask; const int kFlatAsciiStringTag = kStringTag | kOneByteStringTag | kSeqStringTag; - ASSERT(kFlatAsciiStringTag <= 0xffff); // Ensure this fits 16-bit immed. + DCHECK(kFlatAsciiStringTag <= 0xffff); // Ensure this fits 16-bit immed. andi(scratch1, first, kFlatAsciiStringMask); Branch(failure, ne, scratch1, Operand(kFlatAsciiStringTag)); andi(scratch2, second, kFlatAsciiStringMask); @@ -5045,7 +5092,7 @@ void MacroAssembler::EmitSeqStringSetCharCheck(Register string, lw(at, FieldMemOperand(string, String::kLengthOffset)); Check(lt, kIndexIsTooLarge, index, Operand(at)); - ASSERT(Smi::FromInt(0) == 0); + DCHECK(Smi::FromInt(0) == 0); Check(ge, kIndexIsNegative, index, Operand(zero_reg)); SmiUntag(index, index); @@ -5069,7 +5116,7 @@ void MacroAssembler::PrepareCallCFunction(int num_reg_arguments, // and the original value of sp. mov(scratch, sp); Subu(sp, sp, Operand((stack_passed_arguments + 1) * kPointerSize)); - ASSERT(IsPowerOf2(frame_alignment)); + DCHECK(IsPowerOf2(frame_alignment)); And(sp, sp, Operand(-frame_alignment)); sw(scratch, MemOperand(sp, stack_passed_arguments * kPointerSize)); } else { @@ -5114,7 +5161,7 @@ void MacroAssembler::CallCFunction(Register function, void MacroAssembler::CallCFunctionHelper(Register function, int num_reg_arguments, int num_double_arguments) { - ASSERT(has_frame()); + DCHECK(has_frame()); // Make sure that the stack is aligned before calling a C function unless // running in the simulator. The simulator has its own alignment check which // provides more information. 
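Several of the macro-assembler hunks above lean on the bit trick called out in SmiTagCheckOverflow: on 32-bit MIPS a smi is the value shifted left by one (the asserts above pin kSmiTag == 0 and kSmiTagSize == 1), and that shift overflows exactly when it flips the sign bit, hence the comment "Overflow if (value ^ 2 * value) < 0". A standalone C++ sketch of the same test (hypothetical helper name, not V8 code):

#include <cstdint>

// True when tagging value as a smi (value << 1) would not fit in 32 bits.
// Doubling overflows exactly when bits 31 and 30 of value differ, which is
// the sign bit of value ^ (value << 1).
bool SmiTagOverflows(int32_t value) {
  // Shift as unsigned to avoid undefined behaviour on signed overflow.
  int32_t tagged = static_cast<int32_t>(static_cast<uint32_t>(value) << 1);
  return (value ^ tagged) < 0;
}

MIPS has no condition-flags register, so the generated code materializes the same xor in a register (xor_(overflow, dst, src)) and branches on its sign, rather than testing an overflow flag as an x86 or ARM backend would.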
@@ -5123,10 +5170,10 @@ void MacroAssembler::CallCFunctionHelper(Register function, #if V8_HOST_ARCH_MIPS if (emit_debug_code()) { - int frame_alignment = OS::ActivationFrameAlignment(); + int frame_alignment = base::OS::ActivationFrameAlignment(); int frame_alignment_mask = frame_alignment - 1; if (frame_alignment > kPointerSize) { - ASSERT(IsPowerOf2(frame_alignment)); + DCHECK(IsPowerOf2(frame_alignment)); Label alignment_as_expected; And(at, sp, Operand(frame_alignment_mask)); Branch(&alignment_as_expected, eq, at, Operand(zero_reg)); @@ -5152,7 +5199,7 @@ void MacroAssembler::CallCFunctionHelper(Register function, int stack_passed_arguments = CalculateStackPassedWords( num_reg_arguments, num_double_arguments); - if (OS::ActivationFrameAlignment() > kPointerSize) { + if (base::OS::ActivationFrameAlignment() > kPointerSize) { lw(sp, MemOperand(sp, stack_passed_arguments * kPointerSize)); } else { Addu(sp, sp, Operand(stack_passed_arguments * sizeof(kPointerSize))); @@ -5241,7 +5288,7 @@ void MacroAssembler::CheckMapDeprecated(Handle<Map> map, if (map->CanBeDeprecated()) { li(scratch, Operand(map)); lw(scratch, FieldMemOperand(scratch, Map::kBitField3Offset)); - And(scratch, scratch, Operand(Smi::FromInt(Map::Deprecated::kMask))); + And(scratch, scratch, Operand(Map::Deprecated::kMask)); Branch(if_deprecated, ne, scratch, Operand(zero_reg)); } } @@ -5252,7 +5299,7 @@ void MacroAssembler::JumpIfBlack(Register object, Register scratch1, Label* on_black) { HasColor(object, scratch0, scratch1, on_black, 1, 0); // kBlackBitPattern. - ASSERT(strcmp(Marking::kBlackBitPattern, "10") == 0); + DCHECK(strcmp(Marking::kBlackBitPattern, "10") == 0); } @@ -5262,8 +5309,8 @@ void MacroAssembler::HasColor(Register object, Label* has_color, int first_bit, int second_bit) { - ASSERT(!AreAliased(object, bitmap_scratch, mask_scratch, t8)); - ASSERT(!AreAliased(object, bitmap_scratch, mask_scratch, t9)); + DCHECK(!AreAliased(object, bitmap_scratch, mask_scratch, t8)); + DCHECK(!AreAliased(object, bitmap_scratch, mask_scratch, t9)); GetMarkBits(object, bitmap_scratch, mask_scratch); @@ -5292,13 +5339,13 @@ void MacroAssembler::HasColor(Register object, void MacroAssembler::JumpIfDataObject(Register value, Register scratch, Label* not_data_object) { - ASSERT(!AreAliased(value, scratch, t8, no_reg)); + DCHECK(!AreAliased(value, scratch, t8, no_reg)); Label is_data_object; lw(scratch, FieldMemOperand(value, HeapObject::kMapOffset)); LoadRoot(t8, Heap::kHeapNumberMapRootIndex); Branch(&is_data_object, eq, t8, Operand(scratch)); - ASSERT(kIsIndirectStringTag == 1 && kIsIndirectStringMask == 1); - ASSERT(kNotStringTag == 0x80 && kIsNotStringMask == 0x80); + DCHECK(kIsIndirectStringTag == 1 && kIsIndirectStringMask == 1); + DCHECK(kNotStringTag == 0x80 && kIsNotStringMask == 0x80); // If it's a string and it's not a cons string then it's an object containing // no GC pointers. 
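
[editor's note] As context for HasColor and JumpIfBlack above: each object owns two adjacent bits in its page's marking bitmap, and the DCHECKed patterns read first bit then second bit, so "10" is black, "00" white, "11" grey. A standalone sketch of the test, assuming (consistent with the generated code) that the second bit sits at mask << 1:

#include <cstdint>

// Mirrors HasColor: check the object's two mark bits inside a bitmap cell.
// JumpIfBlack calls this with (first_bit, second_bit) == (1, 0).
bool HasColor(uint32_t cell, uint32_t mask, bool first_bit, bool second_bit) {
  bool b1 = (cell & mask) != 0;         // First mark bit.
  bool b2 = (cell & (mask << 1)) != 0;  // Second (adjacent) mark bit.
  return b1 == first_bit && b2 == second_bit;
}
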
lbu(scratch, FieldMemOperand(scratch, Map::kInstanceTypeOffset)); @@ -5311,7 +5358,7 @@ void MacroAssembler::JumpIfDataObject(Register value, void MacroAssembler::GetMarkBits(Register addr_reg, Register bitmap_reg, Register mask_reg) { - ASSERT(!AreAliased(addr_reg, bitmap_reg, mask_reg, no_reg)); + DCHECK(!AreAliased(addr_reg, bitmap_reg, mask_reg, no_reg)); And(bitmap_reg, addr_reg, Operand(~Page::kPageAlignmentMask)); Ext(mask_reg, addr_reg, kPointerSizeLog2, Bitmap::kBitsPerCellLog2); const int kLowBits = kPointerSizeLog2 + Bitmap::kBitsPerCellLog2; @@ -5329,14 +5376,14 @@ void MacroAssembler::EnsureNotWhite( Register mask_scratch, Register load_scratch, Label* value_is_white_and_not_data) { - ASSERT(!AreAliased(value, bitmap_scratch, mask_scratch, t8)); + DCHECK(!AreAliased(value, bitmap_scratch, mask_scratch, t8)); GetMarkBits(value, bitmap_scratch, mask_scratch); // If the value is black or grey we don't need to do anything. - ASSERT(strcmp(Marking::kWhiteBitPattern, "00") == 0); - ASSERT(strcmp(Marking::kBlackBitPattern, "10") == 0); - ASSERT(strcmp(Marking::kGreyBitPattern, "11") == 0); - ASSERT(strcmp(Marking::kImpossibleBitPattern, "01") == 0); + DCHECK(strcmp(Marking::kWhiteBitPattern, "00") == 0); + DCHECK(strcmp(Marking::kBlackBitPattern, "10") == 0); + DCHECK(strcmp(Marking::kGreyBitPattern, "11") == 0); + DCHECK(strcmp(Marking::kImpossibleBitPattern, "01") == 0); Label done; @@ -5375,8 +5422,8 @@ void MacroAssembler::EnsureNotWhite( } // Check for strings. - ASSERT(kIsIndirectStringTag == 1 && kIsIndirectStringMask == 1); - ASSERT(kNotStringTag == 0x80 && kIsNotStringMask == 0x80); + DCHECK(kIsIndirectStringTag == 1 && kIsIndirectStringMask == 1); + DCHECK(kNotStringTag == 0x80 && kIsNotStringMask == 0x80); // If it's a string and it's not a cons string then it's an object containing // no GC pointers. Register instance_type = load_scratch; @@ -5388,8 +5435,8 @@ void MacroAssembler::EnsureNotWhite( // Otherwise it's String::kHeaderSize + string->length() * (1 or 2). // External strings are the only ones with the kExternalStringTag bit // set. - ASSERT_EQ(0, kSeqStringTag & kExternalStringTag); - ASSERT_EQ(0, kConsStringTag & kExternalStringTag); + DCHECK_EQ(0, kSeqStringTag & kExternalStringTag); + DCHECK_EQ(0, kConsStringTag & kExternalStringTag); And(t8, instance_type, Operand(kExternalStringTag)); { Label skip; @@ -5403,8 +5450,8 @@ void MacroAssembler::EnsureNotWhite( // For ASCII (char-size of 1) we shift the smi tag away to get the length. // For UC16 (char-size of 2) we just leave the smi tag in place, thereby // getting the length multiplied by 2. - ASSERT(kOneByteStringTag == 4 && kStringEncodingMask == 4); - ASSERT(kSmiTag == 0 && kSmiTagSize == 1); + DCHECK(kOneByteStringTag == 4 && kStringEncodingMask == 4); + DCHECK(kSmiTag == 0 && kSmiTagSize == 1); lw(t9, FieldMemOperand(value, String::kLengthOffset)); And(t8, instance_type, Operand(kStringEncodingMask)); { @@ -5432,57 +5479,6 @@ void MacroAssembler::EnsureNotWhite( } -void MacroAssembler::Throw(BailoutReason reason) { - Label throw_start; - bind(&throw_start); -#ifdef DEBUG - const char* msg = GetBailoutReason(reason); - if (msg != NULL) { - RecordComment("Throw message: "); - RecordComment(msg); - } -#endif - - li(a0, Operand(Smi::FromInt(reason))); - push(a0); - // Disable stub call restrictions to always allow calls to throw. - if (!has_frame_) { - // We don't actually want to generate a pile of code for this, so just - // claim there is a stack frame, without generating one. 
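
[editor's note] The ASCII/UC16 length computation in EnsureNotWhite above exploits the smi representation (length << 1): shifting the tag away yields the character count, while keeping the tag yields the byte count of a two-byte string for free. A sketch, assuming kSmiTagSize == 1 as on 32-bit MIPS here:

#include <cstdint>

// Given a string length stored as a smi (length * 2), return the string's
// size in bytes for one-byte (ASCII) or two-byte (UC16) encodings.
int32_t BytesFromSmiLength(int32_t smi_length, bool one_byte) {
  return one_byte ? (smi_length >> 1)  // 1 byte per char: drop the tag.
                  : smi_length;        // 2 bytes per char: the tag is the x2.
}
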
- FrameScope scope(this, StackFrame::NONE); - CallRuntime(Runtime::kHiddenThrowMessage, 1); - } else { - CallRuntime(Runtime::kHiddenThrowMessage, 1); - } - // will not return here - if (is_trampoline_pool_blocked()) { - // If the calling code cares throw the exact number of - // instructions generated, we insert padding here to keep the size - // of the ThrowMessage macro constant. - // Currently in debug mode with debug_code enabled the number of - // generated instructions is 14, so we use this as a maximum value. - static const int kExpectedThrowMessageInstructions = 14; - int throw_instructions = InstructionsGeneratedSince(&throw_start); - ASSERT(throw_instructions <= kExpectedThrowMessageInstructions); - while (throw_instructions++ < kExpectedThrowMessageInstructions) { - nop(); - } - } -} - - -void MacroAssembler::ThrowIf(Condition cc, - BailoutReason reason, - Register rs, - Operand rt) { - Label L; - Branch(&L, NegateCondition(cc), rs, rt); - Throw(reason); - // will not return here - bind(&L); -} - - void MacroAssembler::LoadInstanceDescriptors(Register map, Register descriptors) { lw(descriptors, FieldMemOperand(map, Map::kDescriptorsOffset)); @@ -5498,7 +5494,8 @@ void MacroAssembler::NumberOfOwnDescriptors(Register dst, Register map) { void MacroAssembler::EnumLength(Register dst, Register map) { STATIC_ASSERT(Map::EnumLengthBits::kShift == 0); lw(dst, FieldMemOperand(map, Map::kBitField3Offset)); - And(dst, dst, Operand(Smi::FromInt(Map::EnumLengthBits::kMask))); + And(dst, dst, Operand(Map::EnumLengthBits::kMask)); + SmiTag(dst); } @@ -5544,7 +5541,7 @@ void MacroAssembler::CheckEnumCache(Register null_value, Label* call_runtime) { void MacroAssembler::ClampUint8(Register output_reg, Register input_reg) { - ASSERT(!output_reg.is(input_reg)); + DCHECK(!output_reg.is(input_reg)); Label done; li(output_reg, Operand(255)); // Normal branch: nop in delay slot. 
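
[editor's note] ClampUint8, whose opening instructions appear above, saturates a 32-bit value into the 0..255 range; the full body is not shown in this hunk, so the following is the assumed behavior, restated as plain C++:

#include <cstdint>

// Assumed semantics of MacroAssembler::ClampUint8: saturate to [0, 255].
uint8_t ClampUint8(int32_t v) {
  if (v < 0) return 0;
  if (v > 255) return 255;
  return static_cast<uint8_t>(v);
}
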
@@ -5639,7 +5636,7 @@ void MacroAssembler::JumpIfDictionaryInPrototypeChain( Register scratch0, Register scratch1, Label* found) { - ASSERT(!scratch1.is(scratch0)); + DCHECK(!scratch1.is(scratch0)); Factory* factory = isolate()->factory(); Register current = scratch0; Label loop_again; @@ -5651,21 +5648,37 @@ void MacroAssembler::JumpIfDictionaryInPrototypeChain( bind(&loop_again); lw(current, FieldMemOperand(current, HeapObject::kMapOffset)); lb(scratch1, FieldMemOperand(current, Map::kBitField2Offset)); - Ext(scratch1, scratch1, Map::kElementsKindShift, Map::kElementsKindBitCount); + DecodeField<Map::ElementsKindBits>(scratch1); Branch(found, eq, scratch1, Operand(DICTIONARY_ELEMENTS)); lw(current, FieldMemOperand(current, Map::kPrototypeOffset)); Branch(&loop_again, ne, current, Operand(factory->null_value())); } -bool AreAliased(Register r1, Register r2, Register r3, Register r4) { - if (r1.is(r2)) return true; - if (r1.is(r3)) return true; - if (r1.is(r4)) return true; - if (r2.is(r3)) return true; - if (r2.is(r4)) return true; - if (r3.is(r4)) return true; - return false; +bool AreAliased(Register reg1, + Register reg2, + Register reg3, + Register reg4, + Register reg5, + Register reg6, + Register reg7, + Register reg8) { + int n_of_valid_regs = reg1.is_valid() + reg2.is_valid() + + reg3.is_valid() + reg4.is_valid() + reg5.is_valid() + reg6.is_valid() + + reg7.is_valid() + reg8.is_valid(); + + RegList regs = 0; + if (reg1.is_valid()) regs |= reg1.bit(); + if (reg2.is_valid()) regs |= reg2.bit(); + if (reg3.is_valid()) regs |= reg3.bit(); + if (reg4.is_valid()) regs |= reg4.bit(); + if (reg5.is_valid()) regs |= reg5.bit(); + if (reg6.is_valid()) regs |= reg6.bit(); + if (reg7.is_valid()) regs |= reg7.bit(); + if (reg8.is_valid()) regs |= reg8.bit(); + int n_of_non_aliasing_regs = NumRegs(regs); + + return n_of_valid_regs != n_of_non_aliasing_regs; } @@ -5679,19 +5692,19 @@ CodePatcher::CodePatcher(byte* address, // Create a new macro assembler pointing to the address of the code to patch. // The size is adjusted with kGap on order for the assembler to generate size // bytes of instructions without failing with buffer size constraints. - ASSERT(masm_.reloc_info_writer.pos() == address_ + size_ + Assembler::kGap); + DCHECK(masm_.reloc_info_writer.pos() == address_ + size_ + Assembler::kGap); } CodePatcher::~CodePatcher() { // Indicate that code has changed. if (flush_cache_ == FLUSH) { - CPU::FlushICache(address_, size_); + CpuFeatures::FlushICache(address_, size_); } // Check that the code was patched as expected. - ASSERT(masm_.pc_ == address_ + size_); - ASSERT(masm_.reloc_info_writer.pos() == address_ + size_ + Assembler::kGap); + DCHECK(masm_.pc_ == address_ + size_); + DCHECK(masm_.reloc_info_writer.pos() == address_ + size_ + Assembler::kGap); } @@ -5707,13 +5720,13 @@ void CodePatcher::Emit(Address addr) { void CodePatcher::ChangeBranchCondition(Condition cond) { Instr instr = Assembler::instr_at(masm_.pc_); - ASSERT(Assembler::IsBranch(instr)); + DCHECK(Assembler::IsBranch(instr)); uint32_t opcode = Assembler::GetOpcodeField(instr); // Currently only the 'eq' and 'ne' cond values are supported and the simple // branch instructions (with opcode being the branch type). // There are some special cases (see Assembler::IsBranch()) so extending this // would be tricky. 
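
[editor's note] The rewritten AreAliased above is the standard popcount trick: OR each valid register's bit into a RegList and compare the number of distinct bits with the number of valid registers. A self-contained sketch of the same logic:

#include <cstdint>
#include <initializer_list>

// Population count, as NumRegs does for a RegList.
int NumRegs(uint32_t regs) {
  int n = 0;
  while (regs) { regs &= regs - 1; ++n; }  // Clear lowest set bit.
  return n;
}

// Registers are represented here by their one-hot bit; 0 stands for no_reg.
bool AreAliased(std::initializer_list<uint32_t> reg_bits) {
  uint32_t mask = 0;
  int valid = 0;
  for (uint32_t bit : reg_bits) {
    if (bit == 0) continue;  // no_reg defaults are skipped.
    ++valid;
    mask |= bit;
  }
  return NumRegs(mask) != valid;  // Fewer distinct bits => aliasing.
}
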
- ASSERT(opcode == BEQ || + DCHECK(opcode == BEQ || opcode == BNE || opcode == BLEZ || opcode == BGTZ || @@ -5730,9 +5743,9 @@ void CodePatcher::ChangeBranchCondition(Condition cond) { void MacroAssembler::TruncatingDiv(Register result, Register dividend, int32_t divisor) { - ASSERT(!dividend.is(result)); - ASSERT(!dividend.is(at)); - ASSERT(!result.is(at)); + DCHECK(!dividend.is(result)); + DCHECK(!dividend.is(at)); + DCHECK(!result.is(at)); MultiplierAndShift ms(divisor); li(at, Operand(ms.multiplier())); Mult(dividend, Operand(at)); diff --git a/deps/v8/src/mips/macro-assembler-mips.h b/deps/v8/src/mips/macro-assembler-mips.h index 774449cab84..c67d7fe149b 100644 --- a/deps/v8/src/mips/macro-assembler-mips.h +++ b/deps/v8/src/mips/macro-assembler-mips.h @@ -5,9 +5,9 @@ #ifndef V8_MIPS_MACRO_ASSEMBLER_MIPS_H_ #define V8_MIPS_MACRO_ASSEMBLER_MIPS_H_ -#include "assembler.h" -#include "mips/assembler-mips.h" -#include "v8globals.h" +#include "src/assembler.h" +#include "src/globals.h" +#include "src/mips/assembler-mips.h" namespace v8 { namespace internal { @@ -71,6 +71,10 @@ enum LiFlags { enum RememberedSetAction { EMIT_REMEMBERED_SET, OMIT_REMEMBERED_SET }; enum SmiCheck { INLINE_SMI_CHECK, OMIT_SMI_CHECK }; +enum PointersToHereCheck { + kPointersToHereMaybeInteresting, + kPointersToHereAreAlwaysInteresting +}; enum RAStatus { kRAHasNotBeenSaved, kRAHasBeenSaved }; Register GetRegisterThatIsNotOneOf(Register reg1, @@ -80,7 +84,14 @@ Register GetRegisterThatIsNotOneOf(Register reg1, Register reg5 = no_reg, Register reg6 = no_reg); -bool AreAliased(Register r1, Register r2, Register r3, Register r4); +bool AreAliased(Register reg1, + Register reg2, + Register reg3 = no_reg, + Register reg4 = no_reg, + Register reg5 = no_reg, + Register reg6 = no_reg, + Register reg7 = no_reg, + Register reg8 = no_reg); // ----------------------------------------------------------------------------- @@ -105,7 +116,7 @@ inline MemOperand FieldMemOperand(Register object, int offset) { // Generate a MemOperand for storing arguments 5..N on the stack // when calling CallCFunction(). inline MemOperand CFunctionArgumentOperand(int index) { - ASSERT(index > kCArgSlotCount); + DCHECK(index > kCArgSlotCount); // Argument 5 takes the slot just past the four Arg-slots. int offset = (index - 5) * kPointerSize + kCArgsSlotsSize; return MemOperand(sp, offset); @@ -365,7 +376,9 @@ class MacroAssembler: public Assembler { RAStatus ra_status, SaveFPRegsMode save_fp, RememberedSetAction remembered_set_action = EMIT_REMEMBERED_SET, - SmiCheck smi_check = INLINE_SMI_CHECK); + SmiCheck smi_check = INLINE_SMI_CHECK, + PointersToHereCheck pointers_to_here_check_for_value = + kPointersToHereMaybeInteresting); // As above, but the offset has the tag presubtracted. For use with // MemOperand(reg, off). 
@@ -377,7 +390,9 @@ class MacroAssembler: public Assembler { RAStatus ra_status, SaveFPRegsMode save_fp, RememberedSetAction remembered_set_action = EMIT_REMEMBERED_SET, - SmiCheck smi_check = INLINE_SMI_CHECK) { + SmiCheck smi_check = INLINE_SMI_CHECK, + PointersToHereCheck pointers_to_here_check_for_value = + kPointersToHereMaybeInteresting) { RecordWriteField(context, offset + kHeapObjectTag, value, @@ -385,9 +400,17 @@ class MacroAssembler: public Assembler { ra_status, save_fp, remembered_set_action, - smi_check); + smi_check, + pointers_to_here_check_for_value); } + void RecordWriteForMap( + Register object, + Register map, + Register dst, + RAStatus ra_status, + SaveFPRegsMode save_fp); + // For a given |object| notify the garbage collector that the slot |address| // has been written. |value| is the object being stored. The value and // address registers are clobbered by the operation. @@ -398,7 +421,9 @@ class MacroAssembler: public Assembler { RAStatus ra_status, SaveFPRegsMode save_fp, RememberedSetAction remembered_set_action = EMIT_REMEMBERED_SET, - SmiCheck smi_check = INLINE_SMI_CHECK); + SmiCheck smi_check = INLINE_SMI_CHECK, + PointersToHereCheck pointers_to_here_check_for_value = + kPointersToHereMaybeInteresting); // --------------------------------------------------------------------------- @@ -431,7 +456,7 @@ class MacroAssembler: public Assembler { // nop(type)). These instructions are generated to mark special location in // the code, like some special IC code. static inline bool IsMarkedCode(Instr instr, int type) { - ASSERT((FIRST_IC_MARKER <= type) && (type < LAST_CODE_MARKER)); + DCHECK((FIRST_IC_MARKER <= type) && (type < LAST_CODE_MARKER)); return IsNop(instr, type); } @@ -449,7 +474,7 @@ class MacroAssembler: public Assembler { rs == static_cast<uint32_t>(ToNumber(zero_reg))); int type = (sllzz && FIRST_IC_MARKER <= sa && sa < LAST_CODE_MARKER) ? sa : -1; - ASSERT((type == -1) || + DCHECK((type == -1) || ((FIRST_IC_MARKER <= type) && (type < LAST_CODE_MARKER))); return type; } @@ -528,7 +553,8 @@ class MacroAssembler: public Assembler { Register scratch2, Register heap_number_map, Label* gc_required, - TaggingMode tagging_mode = TAG_RESULT); + TaggingMode tagging_mode = TAG_RESULT, + MutableMode mode = IMMUTABLE); void AllocateHeapNumberWithValue(Register result, FPURegister value, Register scratch1, @@ -663,7 +689,7 @@ class MacroAssembler: public Assembler { // Pop two registers. Pops rightmost register first (from lower address). void Pop(Register src1, Register src2) { - ASSERT(!src1.is(src2)); + DCHECK(!src1.is(src2)); lw(src2, MemOperand(sp, 0 * kPointerSize)); lw(src1, MemOperand(sp, 1 * kPointerSize)); Addu(sp, sp, 2 * kPointerSize); @@ -685,17 +711,15 @@ class MacroAssembler: public Assembler { // RegList constant kSafepointSavedRegisters. void PushSafepointRegisters(); void PopSafepointRegisters(); - void PushSafepointRegistersAndDoubles(); - void PopSafepointRegistersAndDoubles(); // Store value in register src in the safepoint stack slot for // register dst. void StoreToSafepointRegisterSlot(Register src, Register dst); - void StoreToSafepointRegistersAndDoublesSlot(Register src, Register dst); // Load the value of the src register from its safepoint stack slot // into register dst. void LoadFromSafepointRegisterSlot(Register dst, Register src); - // Flush the I-cache from asm code. You should use CPU::FlushICache from C. + // Flush the I-cache from asm code. You should use CpuFeatures::FlushICache + // from C. // Does not handle errors. 
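
[editor's note] The two-register Pop above reads src2 from the lower address and src1 from the higher one, so it exactly undoes the matching two-register Push. A sketch of the pairing; the Push layout is an assumption consistent with the Pop shown:

#include <cstdint>

// Word-indexed toy stack mirroring Push(src1, src2) / Pop(src1, src2).
struct Stack {
  int32_t mem[16];
  int sp = 16;
  void Push2(int32_t src1, int32_t src2) {
    sp -= 2;
    mem[sp + 1] = src1;  // src1 at the higher address.
    mem[sp + 0] = src2;  // src2 at the lower address.
  }
  void Pop2(int32_t* src1, int32_t* src2) {
    *src2 = mem[sp + 0];  // Rightmost register first (lower address).
    *src1 = mem[sp + 1];
    sp += 2;
  }
};
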
void FlushICache(Register address, unsigned instructions); @@ -734,7 +758,7 @@ class MacroAssembler: public Assembler { FPURegister cmp1, FPURegister cmp2) { BranchF(target, nan, cc, cmp1, cmp2, bd); - }; + } // Truncates a double using a specific rounding mode, and writes the value // to the result register. @@ -931,12 +955,6 @@ class MacroAssembler: public Assembler { // handler chain. void ThrowUncatchable(Register value); - // Throw a message string as an exception. - void Throw(BailoutReason reason); - - // Throw a message string as an exception if a condition is not true. - void ThrowIf(Condition cc, BailoutReason reason, Register rs, Operand rt); - // Copies a fixed number of fields of heap objects from src to dst. void CopyFields(Register dst, Register src, RegList temps, int field_count); @@ -1058,7 +1076,7 @@ class MacroAssembler: public Assembler { lw(type, FieldMemOperand(obj, HeapObject::kMapOffset)); lbu(type, FieldMemOperand(type, Map::kInstanceTypeOffset)); And(type, type, Operand(kIsNotStringMask)); - ASSERT_EQ(0, kStringTag); + DCHECK_EQ(0, kStringTag); return eq; } @@ -1270,7 +1288,7 @@ const Operand& rt = Operand(zero_reg), BranchDelaySlot bd = PROTECT }; Handle<Object> CodeObject() { - ASSERT(!code_object_.is_null()); + DCHECK(!code_object_.is_null()); return code_object_; } @@ -1483,16 +1501,41 @@ const Operand& rt = Operand(zero_reg), BranchDelaySlot bd = PROTECT void EnumLength(Register dst, Register map); void NumberOfOwnDescriptors(Register dst, Register map); + template<typename Field> + void DecodeField(Register dst, Register src) { + Ext(dst, src, Field::kShift, Field::kSize); + } + template<typename Field> void DecodeField(Register reg) { + DecodeField<Field>(reg, reg); + } + + template<typename Field> + void DecodeFieldToSmi(Register dst, Register src) { static const int shift = Field::kShift; - static const int mask = (Field::kMask >> shift) << kSmiTagSize; - srl(reg, reg, shift); - And(reg, reg, Operand(mask)); + static const int mask = Field::kMask >> shift << kSmiTagSize; + STATIC_ASSERT((mask & (0x80000000u >> (kSmiTagSize - 1))) == 0); + STATIC_ASSERT(kSmiTag == 0); + if (shift < kSmiTagSize) { + sll(dst, src, kSmiTagSize - shift); + And(dst, dst, Operand(mask)); + } else if (shift > kSmiTagSize) { + srl(dst, src, shift - kSmiTagSize); + And(dst, dst, Operand(mask)); + } else { + And(dst, src, Operand(mask)); + } + } + + template<typename Field> + void DecodeFieldToSmi(Register reg) { + DecodeField<Field>(reg, reg); } // Generates function and stub prologue code. - void Prologue(PrologueFrameMode frame_mode); + void StubPrologue(); + void Prologue(bool code_pre_aging); // Activation support. void EnterFrame(StackFrame::Type type); diff --git a/deps/v8/src/mips/regexp-macro-assembler-mips.cc b/deps/v8/src/mips/regexp-macro-assembler-mips.cc index 7c8fde900f7..2bc66ecd25a 100644 --- a/deps/v8/src/mips/regexp-macro-assembler-mips.cc +++ b/deps/v8/src/mips/regexp-macro-assembler-mips.cc @@ -2,17 +2,18 @@ // Use of this source code is governed by a BSD-style license that can be // found in the LICENSE file. 
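
[editor's note] The new DecodeFieldToSmi template above moves a bit field from position Field::kShift to position kSmiTagSize in a single shift, then masks, so the result is already a valid smi. The same arithmetic in standalone form:

#include <cstdint>

// Decode a bit field at `shift` of width implied by `field_mask`, producing
// the field value pre-shifted into smi position (kSmiTag == 0 assumed).
int32_t DecodeFieldToSmi(uint32_t word, int shift, uint32_t field_mask,
                         int smi_tag_size = 1) {
  uint32_t mask = (field_mask >> shift) << smi_tag_size;
  uint32_t dst;
  if (shift < smi_tag_size) {
    dst = word << (smi_tag_size - shift);
  } else if (shift > smi_tag_size) {
    dst = word >> (shift - smi_tag_size);
  } else {
    dst = word;  // Field already sits at the smi tag position.
  }
  return static_cast<int32_t>(dst & mask);
}
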
-#include "v8.h" +#include "src/v8.h" #if V8_TARGET_ARCH_MIPS -#include "unicode.h" -#include "log.h" -#include "code-stubs.h" -#include "regexp-stack.h" -#include "macro-assembler.h" -#include "regexp-macro-assembler.h" -#include "mips/regexp-macro-assembler-mips.h" +#include "src/code-stubs.h" +#include "src/log.h" +#include "src/macro-assembler.h" +#include "src/regexp-macro-assembler.h" +#include "src/regexp-stack.h" +#include "src/unicode.h" + +#include "src/mips/regexp-macro-assembler-mips.h" namespace v8 { namespace internal { @@ -109,7 +110,7 @@ RegExpMacroAssemblerMIPS::RegExpMacroAssemblerMIPS( backtrack_label_(), exit_label_(), internal_failure_label_() { - ASSERT_EQ(0, registers_to_save % 2); + DCHECK_EQ(0, registers_to_save % 2); __ jmp(&entry_label_); // We'll write the entry code later. // If the code gets too big or corrupted, an internal exception will be // raised, and we will exit right away. @@ -148,8 +149,8 @@ void RegExpMacroAssemblerMIPS::AdvanceCurrentPosition(int by) { void RegExpMacroAssemblerMIPS::AdvanceRegister(int reg, int by) { - ASSERT(reg >= 0); - ASSERT(reg < num_registers_); + DCHECK(reg >= 0); + DCHECK(reg < num_registers_); if (by != 0) { __ lw(a0, register_location(reg)); __ Addu(a0, a0, Operand(by)); @@ -288,7 +289,7 @@ void RegExpMacroAssemblerMIPS::CheckNotBackReferenceIgnoreCase( // Compute new value of character position after the matched part. __ Subu(current_input_offset(), a2, end_of_input_address()); } else { - ASSERT(mode_ == UC16); + DCHECK(mode_ == UC16); // Put regexp engine registers on stack. RegList regexp_registers_to_retain = current_input_offset().bit() | current_character().bit() | backtrack_stackpointer().bit(); @@ -370,7 +371,7 @@ void RegExpMacroAssemblerMIPS::CheckNotBackReference( __ lbu(t0, MemOperand(a2, 0)); __ addiu(a2, a2, char_size()); } else { - ASSERT(mode_ == UC16); + DCHECK(mode_ == UC16); __ lhu(a3, MemOperand(a0, 0)); __ addiu(a0, a0, char_size()); __ lhu(t0, MemOperand(a2, 0)); @@ -414,7 +415,7 @@ void RegExpMacroAssemblerMIPS::CheckNotCharacterAfterMinusAnd( uc16 minus, uc16 mask, Label* on_not_equal) { - ASSERT(minus < String::kMaxUtf16CodeUnit); + DCHECK(minus < String::kMaxUtf16CodeUnit); __ Subu(a0, current_character(), Operand(minus)); __ And(a0, a0, Operand(mask)); BranchOrBacktrack(on_not_equal, ne, a0, Operand(c)); @@ -705,7 +706,7 @@ Handle<HeapObject> RegExpMacroAssemblerMIPS::GetCode(Handle<String> source) { __ Addu(a1, a1, Operand(a2)); // a1 is length of string in characters. - ASSERT_EQ(0, num_saved_registers_ % 2); + DCHECK_EQ(0, num_saved_registers_ % 2); // Always an even number of capture registers. This allows us to // unroll the loop once to add an operation between a load of a register // and the following use of that register. @@ -907,8 +908,8 @@ void RegExpMacroAssemblerMIPS::LoadCurrentCharacter(int cp_offset, Label* on_end_of_input, bool check_bounds, int characters) { - ASSERT(cp_offset >= -1); // ^ and \b can look behind one character. - ASSERT(cp_offset < (1<<30)); // Be sane! (And ensure negation works). + DCHECK(cp_offset >= -1); // ^ and \b can look behind one character. + DCHECK(cp_offset < (1<<30)); // Be sane! (And ensure negation works). if (check_bounds) { CheckPosition(cp_offset + characters - 1, on_end_of_input); } @@ -993,7 +994,7 @@ void RegExpMacroAssemblerMIPS::SetCurrentPositionFromEnd(int by) { void RegExpMacroAssemblerMIPS::SetRegister(int register_index, int to) { - ASSERT(register_index >= num_saved_registers_); // Reserved for positions! 
+ DCHECK(register_index >= num_saved_registers_); // Reserved for positions! __ li(a0, Operand(to)); __ sw(a0, register_location(register_index)); } @@ -1017,7 +1018,7 @@ void RegExpMacroAssemblerMIPS::WriteCurrentPositionToRegister(int reg, void RegExpMacroAssemblerMIPS::ClearRegisters(int reg_from, int reg_to) { - ASSERT(reg_from <= reg_to); + DCHECK(reg_from <= reg_to); __ lw(a0, MemOperand(frame_pointer(), kInputStartMinusOne)); for (int reg = reg_from; reg <= reg_to; reg++) { __ sw(a0, register_location(reg)); @@ -1040,12 +1041,12 @@ bool RegExpMacroAssemblerMIPS::CanReadUnaligned() { // Private methods: void RegExpMacroAssemblerMIPS::CallCheckStackGuardState(Register scratch) { - int stack_alignment = OS::ActivationFrameAlignment(); + int stack_alignment = base::OS::ActivationFrameAlignment(); // Align the stack pointer and save the original sp value on the stack. __ mov(scratch, sp); __ Subu(sp, sp, Operand(kPointerSize)); - ASSERT(IsPowerOf2(stack_alignment)); + DCHECK(IsPowerOf2(stack_alignment)); __ And(sp, sp, Operand(-stack_alignment)); __ sw(scratch, MemOperand(sp)); @@ -1054,7 +1055,7 @@ void RegExpMacroAssemblerMIPS::CallCheckStackGuardState(Register scratch) { __ li(a1, Operand(masm_->CodeObject()), CONSTANT_SIZE); // We need to make room for the return address on the stack. - ASSERT(IsAligned(stack_alignment, kPointerSize)); + DCHECK(IsAligned(stack_alignment, kPointerSize)); __ Subu(sp, sp, Operand(stack_alignment)); // Stack pointer now points to cell where return address is to be written. @@ -1104,7 +1105,8 @@ int RegExpMacroAssemblerMIPS::CheckStackGuardState(Address* return_address, Code* re_code, Address re_frame) { Isolate* isolate = frame_entry<Isolate*>(re_frame, kIsolate); - if (isolate->stack_guard()->IsStackOverflow()) { + StackLimitCheck check(isolate); + if (check.JsHasOverflowed()) { isolate->StackOverflow(); return EXCEPTION; } @@ -1126,11 +1128,11 @@ int RegExpMacroAssemblerMIPS::CheckStackGuardState(Address* return_address, // Current string. bool is_ascii = subject->IsOneByteRepresentationUnderneath(); - ASSERT(re_code->instruction_start() <= *return_address); - ASSERT(*return_address <= + DCHECK(re_code->instruction_start() <= *return_address); + DCHECK(*return_address <= re_code->instruction_start() + re_code->instruction_size()); - Object* result = Execution::HandleStackGuardInterrupt(isolate); + Object* result = isolate->stack_guard()->HandleInterrupts(); if (*code_handle != re_code) { // Return address no longer valid. int delta = code_handle->address() - re_code->address(); @@ -1166,7 +1168,7 @@ int RegExpMacroAssemblerMIPS::CheckStackGuardState(Address* return_address, // be a sequential or external string with the same content. // Update the start and end pointers in the stack frame to the current // location (whether it has actually moved or not). - ASSERT(StringShape(*subject_tmp).IsSequential() || + DCHECK(StringShape(*subject_tmp).IsSequential() || StringShape(*subject_tmp).IsExternal()); // The original start address of the characters to match. 
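
[editor's note] The CheckNotCharacterAfterMinusAnd helper a little earlier in this file branches unless ((current_char - minus) & mask) == c, a cheap test for small character classes such as case folds. As a plain predicate:

#include <cstdint>

// True when the character passes the subtract-then-mask class test; the
// generated code backtracks on the opposite (not-equal) condition.
bool MatchesMinusAnd(uint16_t current, uint16_t minus, uint16_t mask,
                     uint16_t c) {
  return (static_cast<uint16_t>(current - minus) & mask) == c;
}
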
@@ -1198,7 +1200,7 @@ int RegExpMacroAssemblerMIPS::CheckStackGuardState(Address* return_address, MemOperand RegExpMacroAssemblerMIPS::register_location(int register_index) { - ASSERT(register_index < (1<<30)); + DCHECK(register_index < (1<<30)); if (num_registers_ <= register_index) { num_registers_ = register_index + 1; } @@ -1259,7 +1261,7 @@ void RegExpMacroAssemblerMIPS::SafeCallTarget(Label* name) { void RegExpMacroAssemblerMIPS::Push(Register source) { - ASSERT(!source.is(backtrack_stackpointer())); + DCHECK(!source.is(backtrack_stackpointer())); __ Addu(backtrack_stackpointer(), backtrack_stackpointer(), Operand(-kPointerSize)); @@ -1268,7 +1270,7 @@ void RegExpMacroAssemblerMIPS::Push(Register source) { void RegExpMacroAssemblerMIPS::Pop(Register target) { - ASSERT(!target.is(backtrack_stackpointer())); + DCHECK(!target.is(backtrack_stackpointer())); __ lw(target, MemOperand(backtrack_stackpointer())); __ Addu(backtrack_stackpointer(), backtrack_stackpointer(), kPointerSize); } @@ -1304,12 +1306,12 @@ void RegExpMacroAssemblerMIPS::LoadCurrentCharacterUnchecked(int cp_offset, } // We assume that we cannot do unaligned loads on MIPS, so this function // must only be used to load a single character at a time. - ASSERT(characters == 1); + DCHECK(characters == 1); __ Addu(t5, end_of_input_address(), Operand(offset)); if (mode_ == ASCII) { __ lbu(current_character(), MemOperand(t5, 0)); } else { - ASSERT(mode_ == UC16); + DCHECK(mode_ == UC16); __ lhu(current_character(), MemOperand(t5, 0)); } } diff --git a/deps/v8/src/mips/regexp-macro-assembler-mips.h b/deps/v8/src/mips/regexp-macro-assembler-mips.h index f0aba07dc2c..ddf484cbbe8 100644 --- a/deps/v8/src/mips/regexp-macro-assembler-mips.h +++ b/deps/v8/src/mips/regexp-macro-assembler-mips.h @@ -6,11 +6,10 @@ #ifndef V8_MIPS_REGEXP_MACRO_ASSEMBLER_MIPS_H_ #define V8_MIPS_REGEXP_MACRO_ASSEMBLER_MIPS_H_ -#include "mips/assembler-mips.h" -#include "mips/assembler-mips-inl.h" -#include "macro-assembler.h" -#include "code.h" -#include "mips/macro-assembler-mips.h" +#include "src/macro-assembler.h" +#include "src/mips/assembler-mips-inl.h" +#include "src/mips/assembler-mips.h" +#include "src/mips/macro-assembler-mips.h" namespace v8 { namespace internal { diff --git a/deps/v8/src/mips/simulator-mips.cc b/deps/v8/src/mips/simulator-mips.cc index 51f679bdc19..30924569bc0 100644 --- a/deps/v8/src/mips/simulator-mips.cc +++ b/deps/v8/src/mips/simulator-mips.cc @@ -7,16 +7,16 @@ #include <stdlib.h> #include <cmath> -#include "v8.h" +#include "src/v8.h" #if V8_TARGET_ARCH_MIPS -#include "cpu.h" -#include "disasm.h" -#include "assembler.h" -#include "globals.h" // Need the BitCast. -#include "mips/constants-mips.h" -#include "mips/simulator-mips.h" +#include "src/assembler.h" +#include "src/disasm.h" +#include "src/globals.h" // Need the BitCast. +#include "src/mips/constants-mips.h" +#include "src/mips/simulator-mips.h" +#include "src/ostreams.h" // Only build the simulator if not compiling for real MIPS hardware. @@ -107,7 +107,7 @@ void MipsDebugger::Stop(Instruction* instr) { char** msg_address = reinterpret_cast<char**>(sim_->get_pc() + Instr::kInstrSize); char* msg = *msg_address; - ASSERT(msg != NULL); + DCHECK(msg != NULL); // Update this stop description. 
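
[editor's note] The regexp Push/Pop pair just above implements a backtrack stack that grows toward lower addresses: Push pre-decrements the stack pointer by one word, Pop loads and post-increments. In miniature:

#include <cstdint>

// Downward-growing backtrack stack, as maintained in
// backtrack_stackpointer() by the generated code.
struct BacktrackStack {
  int32_t* sp;
  void Push(int32_t v) { *--sp = v; }   // Pre-decrement, then store.
  int32_t Pop() { return *sp++; }       // Load, then post-increment.
};
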
if (!watched_stops_[code].desc) { @@ -456,17 +456,18 @@ void MipsDebugger::Debug() { || (strcmp(cmd, "printobject") == 0)) { if (argc == 2) { int32_t value; + OFStream os(stdout); if (GetValue(arg1, &value)) { Object* obj = reinterpret_cast<Object*>(value); - PrintF("%s: \n", arg1); + os << arg1 << ": \n"; #ifdef DEBUG - obj->PrintLn(); + obj->Print(os); + os << "\n"; #else - obj->ShortPrint(); - PrintF("\n"); + os << Brief(obj) << "\n"; #endif } else { - PrintF("%s unrecognized\n", arg1); + os << arg1 << " unrecognized\n"; } } else { PrintF("printobject <value>\n"); @@ -567,7 +568,7 @@ void MipsDebugger::Debug() { } } else if (strcmp(cmd, "gdb") == 0) { PrintF("relinquishing control to gdb\n"); - v8::internal::OS::DebugBreak(); + v8::base::OS::DebugBreak(); PrintF("regaining control from gdb\n"); } else if (strcmp(cmd, "break") == 0) { if (argc == 2) { @@ -753,8 +754,8 @@ void MipsDebugger::Debug() { static bool ICacheMatch(void* one, void* two) { - ASSERT((reinterpret_cast<intptr_t>(one) & CachePage::kPageMask) == 0); - ASSERT((reinterpret_cast<intptr_t>(two) & CachePage::kPageMask) == 0); + DCHECK((reinterpret_cast<intptr_t>(one) & CachePage::kPageMask) == 0); + DCHECK((reinterpret_cast<intptr_t>(two) & CachePage::kPageMask) == 0); return one == two; } @@ -791,7 +792,7 @@ void Simulator::FlushICache(v8::internal::HashMap* i_cache, FlushOnePage(i_cache, start, bytes_to_flush); start += bytes_to_flush; size -= bytes_to_flush; - ASSERT_EQ(0, start & CachePage::kPageMask); + DCHECK_EQ(0, start & CachePage::kPageMask); offset = 0; } if (size != 0) { @@ -816,10 +817,10 @@ CachePage* Simulator::GetCachePage(v8::internal::HashMap* i_cache, void* page) { void Simulator::FlushOnePage(v8::internal::HashMap* i_cache, intptr_t start, int size) { - ASSERT(size <= CachePage::kPageSize); - ASSERT(AllOnOnePage(start, size - 1)); - ASSERT((start & CachePage::kLineMask) == 0); - ASSERT((size & CachePage::kLineMask) == 0); + DCHECK(size <= CachePage::kPageSize); + DCHECK(AllOnOnePage(start, size - 1)); + DCHECK((start & CachePage::kLineMask) == 0); + DCHECK((size & CachePage::kLineMask) == 0); void* page = reinterpret_cast<void*>(start & (~CachePage::kPageMask)); int offset = (start & CachePage::kPageMask); CachePage* cache_page = GetCachePage(i_cache, page); @@ -840,12 +841,12 @@ void Simulator::CheckICache(v8::internal::HashMap* i_cache, char* cached_line = cache_page->CachedData(offset & ~CachePage::kLineMask); if (cache_hit) { // Check that the data in memory matches the contents of the I-cache. - CHECK(memcmp(reinterpret_cast<void*>(instr), - cache_page->CachedData(offset), - Instruction::kInstrSize) == 0); + CHECK_EQ(0, memcmp(reinterpret_cast<void*>(instr), + cache_page->CachedData(offset), + Instruction::kInstrSize)); } else { // Cache miss. Load memory into the cache. - OS::MemCopy(cached_line, line, CachePage::kLineLength); + memcpy(cached_line, line, CachePage::kLineLength); *cache_valid_byte = CachePage::LINE_VALID; } } @@ -978,8 +979,8 @@ void* Simulator::RedirectExternalReference(void* external_function, Simulator* Simulator::current(Isolate* isolate) { v8::internal::Isolate::PerIsolateThreadData* isolate_data = isolate->FindOrAllocatePerThreadDataForThisThread(); - ASSERT(isolate_data != NULL); - ASSERT(isolate_data != NULL); + DCHECK(isolate_data != NULL); + DCHECK(isolate_data != NULL); Simulator* sim = isolate_data->simulator(); if (sim == NULL) { @@ -994,7 +995,7 @@ Simulator* Simulator::current(Isolate* isolate) { // Sets the register in the architecture state. 
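
[editor's note] Simulator::FlushICache above walks an arbitrary byte range one cache page at a time, since validity bookkeeping is kept per page. A sketch of the loop structure, with the page size assumed to be a power of two:

#include <algorithm>
#include <cstdint>

// Split [start, start + size) into per-page chunks and flush each one,
// mirroring the FlushOnePage loop in the simulator.
void FlushRange(intptr_t start, int size, int page_size,
                void (*flush_one_page)(intptr_t, int)) {
  intptr_t offset = start & (page_size - 1);  // Offset within the first page.
  while (size > 0) {
    int bytes = std::min<int>(size, page_size - static_cast<int>(offset));
    flush_one_page(start, bytes);
    start += bytes;
    size -= bytes;
    offset = 0;  // Every later chunk starts at a page boundary.
  }
}
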
It will also deal with updating // Simulator internal state for special registers such as PC. void Simulator::set_register(int reg, int32_t value) { - ASSERT((reg >= 0) && (reg < kNumSimuRegisters)); + DCHECK((reg >= 0) && (reg < kNumSimuRegisters)); if (reg == pc) { pc_modified_ = true; } @@ -1005,26 +1006,26 @@ void Simulator::set_register(int reg, int32_t value) { void Simulator::set_dw_register(int reg, const int* dbl) { - ASSERT((reg >= 0) && (reg < kNumSimuRegisters)); + DCHECK((reg >= 0) && (reg < kNumSimuRegisters)); registers_[reg] = dbl[0]; registers_[reg + 1] = dbl[1]; } void Simulator::set_fpu_register(int fpureg, int32_t value) { - ASSERT((fpureg >= 0) && (fpureg < kNumFPURegisters)); + DCHECK((fpureg >= 0) && (fpureg < kNumFPURegisters)); FPUregisters_[fpureg] = value; } void Simulator::set_fpu_register_float(int fpureg, float value) { - ASSERT((fpureg >= 0) && (fpureg < kNumFPURegisters)); + DCHECK((fpureg >= 0) && (fpureg < kNumFPURegisters)); *BitCast<float*>(&FPUregisters_[fpureg]) = value; } void Simulator::set_fpu_register_double(int fpureg, double value) { - ASSERT((fpureg >= 0) && (fpureg < kNumFPURegisters) && ((fpureg % 2) == 0)); + DCHECK((fpureg >= 0) && (fpureg < kNumFPURegisters) && ((fpureg % 2) == 0)); *BitCast<double*>(&FPUregisters_[fpureg]) = value; } @@ -1032,7 +1033,7 @@ void Simulator::set_fpu_register_double(int fpureg, double value) { // Get the register from the architecture state. This function does handle // the special case of accessing the PC register. int32_t Simulator::get_register(int reg) const { - ASSERT((reg >= 0) && (reg < kNumSimuRegisters)); + DCHECK((reg >= 0) && (reg < kNumSimuRegisters)); if (reg == 0) return 0; else @@ -1041,40 +1042,40 @@ int32_t Simulator::get_register(int reg) const { double Simulator::get_double_from_register_pair(int reg) { - ASSERT((reg >= 0) && (reg < kNumSimuRegisters) && ((reg % 2) == 0)); + DCHECK((reg >= 0) && (reg < kNumSimuRegisters) && ((reg % 2) == 0)); double dm_val = 0.0; // Read the bits from the unsigned integer register_[] array // into the double precision floating point value and return it. 
char buffer[2 * sizeof(registers_[0])]; - OS::MemCopy(buffer, &registers_[reg], 2 * sizeof(registers_[0])); - OS::MemCopy(&dm_val, buffer, 2 * sizeof(registers_[0])); + memcpy(buffer, &registers_[reg], 2 * sizeof(registers_[0])); + memcpy(&dm_val, buffer, 2 * sizeof(registers_[0])); return(dm_val); } int32_t Simulator::get_fpu_register(int fpureg) const { - ASSERT((fpureg >= 0) && (fpureg < kNumFPURegisters)); + DCHECK((fpureg >= 0) && (fpureg < kNumFPURegisters)); return FPUregisters_[fpureg]; } int64_t Simulator::get_fpu_register_long(int fpureg) const { - ASSERT((fpureg >= 0) && (fpureg < kNumFPURegisters) && ((fpureg % 2) == 0)); + DCHECK((fpureg >= 0) && (fpureg < kNumFPURegisters) && ((fpureg % 2) == 0)); return *BitCast<int64_t*>( const_cast<int32_t*>(&FPUregisters_[fpureg])); } float Simulator::get_fpu_register_float(int fpureg) const { - ASSERT((fpureg >= 0) && (fpureg < kNumFPURegisters)); + DCHECK((fpureg >= 0) && (fpureg < kNumFPURegisters)); return *BitCast<float*>( const_cast<int32_t*>(&FPUregisters_[fpureg])); } double Simulator::get_fpu_register_double(int fpureg) const { - ASSERT((fpureg >= 0) && (fpureg < kNumFPURegisters) && ((fpureg % 2) == 0)); + DCHECK((fpureg >= 0) && (fpureg < kNumFPURegisters) && ((fpureg % 2) == 0)); return *BitCast<double*>(const_cast<int32_t*>(&FPUregisters_[fpureg])); } @@ -1096,14 +1097,14 @@ void Simulator::GetFpArgs(double* x, double* y, int32_t* z) { // Registers a0 and a1 -> x. reg_buffer[0] = get_register(a0); reg_buffer[1] = get_register(a1); - OS::MemCopy(x, buffer, sizeof(buffer)); + memcpy(x, buffer, sizeof(buffer)); // Registers a2 and a3 -> y. reg_buffer[0] = get_register(a2); reg_buffer[1] = get_register(a3); - OS::MemCopy(y, buffer, sizeof(buffer)); + memcpy(y, buffer, sizeof(buffer)); // Register 2 -> z. reg_buffer[0] = get_register(a2); - OS::MemCopy(z, buffer, sizeof(*z)); + memcpy(z, buffer, sizeof(*z)); } } @@ -1115,7 +1116,7 @@ void Simulator::SetFpResult(const double& result) { } else { char buffer[2 * sizeof(registers_[0])]; int32_t* reg_buffer = reinterpret_cast<int32_t*>(buffer); - OS::MemCopy(buffer, &result, sizeof(buffer)); + memcpy(buffer, &result, sizeof(buffer)); // Copy result to v0 and v1. 
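
[editor's note] Reading a double out of a 32-bit register pair, as get_double_from_register_pair does above, is byte reinterpretation; the patch's switch from OS::MemCopy to plain memcpy keeps it portable and free of strict-aliasing problems. A standalone equivalent:

#include <cstdint>
#include <cstring>

// Reassemble a double from two 32-bit registers via memcpy; which register
// holds the low word depends on the platform's endianness.
double DoubleFromRegisterPair(int32_t lo, int32_t hi) {
  int32_t regs[2] = {lo, hi};
  double d;
  static_assert(sizeof(regs) == sizeof(d), "pair must span a double");
  std::memcpy(&d, regs, sizeof(d));
  return d;
}
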
set_register(v0, reg_buffer[0]); set_register(v1, reg_buffer[1]); @@ -1244,7 +1245,7 @@ double Simulator::ReadD(int32_t addr, Instruction* instr) { PrintF("Unaligned (double) read at 0x%08x, pc=0x%08" V8PRIxPTR "\n", addr, reinterpret_cast<intptr_t>(instr)); - OS::Abort(); + base::OS::Abort(); return 0; } @@ -1258,7 +1259,7 @@ void Simulator::WriteD(int32_t addr, double value, Instruction* instr) { PrintF("Unaligned (double) write at 0x%08x, pc=0x%08" V8PRIxPTR "\n", addr, reinterpret_cast<intptr_t>(instr)); - OS::Abort(); + base::OS::Abort(); } @@ -1270,7 +1271,7 @@ uint16_t Simulator::ReadHU(int32_t addr, Instruction* instr) { PrintF("Unaligned unsigned halfword read at 0x%08x, pc=0x%08" V8PRIxPTR "\n", addr, reinterpret_cast<intptr_t>(instr)); - OS::Abort(); + base::OS::Abort(); return 0; } @@ -1283,7 +1284,7 @@ int16_t Simulator::ReadH(int32_t addr, Instruction* instr) { PrintF("Unaligned signed halfword read at 0x%08x, pc=0x%08" V8PRIxPTR "\n", addr, reinterpret_cast<intptr_t>(instr)); - OS::Abort(); + base::OS::Abort(); return 0; } @@ -1297,7 +1298,7 @@ void Simulator::WriteH(int32_t addr, uint16_t value, Instruction* instr) { PrintF("Unaligned unsigned halfword write at 0x%08x, pc=0x%08" V8PRIxPTR "\n", addr, reinterpret_cast<intptr_t>(instr)); - OS::Abort(); + base::OS::Abort(); } @@ -1310,7 +1311,7 @@ void Simulator::WriteH(int32_t addr, int16_t value, Instruction* instr) { PrintF("Unaligned halfword write at 0x%08x, pc=0x%08" V8PRIxPTR "\n", addr, reinterpret_cast<intptr_t>(instr)); - OS::Abort(); + base::OS::Abort(); } @@ -1637,8 +1638,8 @@ bool Simulator::IsStopInstruction(Instruction* instr) { bool Simulator::IsEnabledStop(uint32_t code) { - ASSERT(code <= kMaxStopCode); - ASSERT(code > kMaxWatchpointCode); + DCHECK(code <= kMaxStopCode); + DCHECK(code > kMaxWatchpointCode); return !(watched_stops_[code].count & kStopDisabledBit); } @@ -1658,7 +1659,7 @@ void Simulator::DisableStop(uint32_t code) { void Simulator::IncreaseStopCounter(uint32_t code) { - ASSERT(code <= kMaxStopCode); + DCHECK(code <= kMaxStopCode); if ((watched_stops_[code].count & ~(1 << 31)) == 0x7fffffff) { PrintF("Stop counter for code %i has overflowed.\n" "Enabling this code and reseting the counter to 0.\n", code); @@ -1706,12 +1707,12 @@ void Simulator::SignalExceptions() { // Handle execution based on instruction types. void Simulator::ConfigureTypeRegister(Instruction* instr, - int32_t& alu_out, - int64_t& i64hilo, - uint64_t& u64hilo, - int32_t& next_pc, - int32_t& return_addr_reg, - bool& do_interrupt) { + int32_t* alu_out, + int64_t* i64hilo, + uint64_t* u64hilo, + int32_t* next_pc, + int32_t* return_addr_reg, + bool* do_interrupt) { // Every local variable declared here needs to be const. // This is to make sure that changed values are sent back to // DecodeTypeRegister correctly. @@ -1739,11 +1740,11 @@ void Simulator::ConfigureTypeRegister(Instruction* instr, break; case CFC1: // At the moment only FCSR is supported. 
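
[editor's note] In the stop bookkeeping a little earlier, the top bit of a stop's counter doubles as its disabled flag, which is why IncreaseStopCounter masks with ~(1 << 31) and caps the low 31 bits at 0x7fffffff. A sketch of that encoding:

#include <cstdint>

const uint32_t kStopDisabledBit = 1u << 31;  // Inferred from IsEnabledStop().

struct StopInfo { uint32_t count; };  // High bit: disabled; low 31: hits.

bool IsEnabledStop(const StopInfo& s) { return !(s.count & kStopDisabledBit); }
void DisableStop(StopInfo* s) { s->count |= kStopDisabledBit; }
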
- ASSERT(fs_reg == kFCSRRegister); - alu_out = FCSR_; + DCHECK(fs_reg == kFCSRRegister); + *alu_out = FCSR_; break; case MFC1: - alu_out = get_fpu_register(fs_reg); + *alu_out = get_fpu_register(fs_reg); break; case MFHC1: UNIMPLEMENTED_MIPS(); @@ -1762,7 +1763,7 @@ void Simulator::ConfigureTypeRegister(Instruction* instr, break; default: UNIMPLEMENTED_MIPS(); - }; + } break; case COP1X: break; @@ -1770,56 +1771,56 @@ void Simulator::ConfigureTypeRegister(Instruction* instr, switch (instr->FunctionFieldRaw()) { case JR: case JALR: - next_pc = get_register(instr->RsValue()); - return_addr_reg = instr->RdValue(); + *next_pc = get_register(instr->RsValue()); + *return_addr_reg = instr->RdValue(); break; case SLL: - alu_out = rt << sa; + *alu_out = rt << sa; break; case SRL: if (rs_reg == 0) { // Regular logical right shift of a word by a fixed number of // bits instruction. RS field is always equal to 0. - alu_out = rt_u >> sa; + *alu_out = rt_u >> sa; } else { // Logical right-rotate of a word by a fixed number of bits. This // is special case of SRL instruction, added in MIPS32 Release 2. // RS field is equal to 00001. - alu_out = (rt_u >> sa) | (rt_u << (32 - sa)); + *alu_out = (rt_u >> sa) | (rt_u << (32 - sa)); } break; case SRA: - alu_out = rt >> sa; + *alu_out = rt >> sa; break; case SLLV: - alu_out = rt << rs; + *alu_out = rt << rs; break; case SRLV: if (sa == 0) { // Regular logical right-shift of a word by a variable number of // bits instruction. SA field is always equal to 0. - alu_out = rt_u >> rs; + *alu_out = rt_u >> rs; } else { // Logical right-rotate of a word by a variable number of bits. // This is special case od SRLV instruction, added in MIPS32 // Release 2. SA field is equal to 00001. - alu_out = (rt_u >> rs_u) | (rt_u << (32 - rs_u)); + *alu_out = (rt_u >> rs_u) | (rt_u << (32 - rs_u)); } break; case SRAV: - alu_out = rt >> rs; + *alu_out = rt >> rs; break; case MFHI: - alu_out = get_register(HI); + *alu_out = get_register(HI); break; case MFLO: - alu_out = get_register(LO); + *alu_out = get_register(LO); break; case MULT: - i64hilo = static_cast<int64_t>(rs) * static_cast<int64_t>(rt); + *i64hilo = static_cast<int64_t>(rs) * static_cast<int64_t>(rt); break; case MULTU: - u64hilo = static_cast<uint64_t>(rs_u) * static_cast<uint64_t>(rt_u); + *u64hilo = static_cast<uint64_t>(rs_u) * static_cast<uint64_t>(rt_u); break; case ADD: if (HaveSameSign(rs, rt)) { @@ -1829,10 +1830,10 @@ void Simulator::ConfigureTypeRegister(Instruction* instr, exceptions[kIntegerUnderflow] = rs < (Registers::kMinValue - rt); } } - alu_out = rs + rt; + *alu_out = rs + rt; break; case ADDU: - alu_out = rs + rt; + *alu_out = rs + rt; break; case SUB: if (!HaveSameSign(rs, rt)) { @@ -1842,51 +1843,50 @@ void Simulator::ConfigureTypeRegister(Instruction* instr, exceptions[kIntegerUnderflow] = rs < (Registers::kMinValue + rt); } } - alu_out = rs - rt; + *alu_out = rs - rt; break; case SUBU: - alu_out = rs - rt; + *alu_out = rs - rt; break; case AND: - alu_out = rs & rt; + *alu_out = rs & rt; break; case OR: - alu_out = rs | rt; + *alu_out = rs | rt; break; case XOR: - alu_out = rs ^ rt; + *alu_out = rs ^ rt; break; case NOR: - alu_out = ~(rs | rt); + *alu_out = ~(rs | rt); break; case SLT: - alu_out = rs < rt ? 1 : 0; + *alu_out = rs < rt ? 1 : 0; break; case SLTU: - alu_out = rs_u < rt_u ? 1 : 0; + *alu_out = rs_u < rt_u ? 1 : 0; break; // Break and trap instructions. 
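
[editor's note] Two pieces of arithmetic from the SPECIAL decode above, restated standalone: the SRL variant with a nonzero RS field is a 32-bit rotate right, and the ADD trap test detects signed overflow without ever computing an overflowing sum. The rotate is written here so an amount of 0 stays well defined:

#include <cstdint>

// SRL with RS field == 1: logical right-rotate of a 32-bit word.
uint32_t RotateRight(uint32_t v, unsigned sa) {
  sa &= 31;
  if (sa == 0) return v;  // Avoid the undefined shift v << 32.
  return (v >> sa) | (v << (32 - sa));
}

// ADD's trap condition: with operands of the same sign, the sum overflows
// iff one operand exceeds the distance from the other to INT32_MAX/INT32_MIN.
bool AddOverflows(int32_t rs, int32_t rt) {
  if ((rs >= 0) == (rt >= 0)) {              // HaveSameSign(rs, rt)
    if (rs > 0) return rs > INT32_MAX - rt;  // kIntegerOverflow
    if (rs < 0) return rs < INT32_MIN - rt;  // kIntegerUnderflow
  }
  return false;  // Mixed signs can never overflow.
}
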
case BREAK: - - do_interrupt = true; + *do_interrupt = true; break; case TGE: - do_interrupt = rs >= rt; + *do_interrupt = rs >= rt; break; case TGEU: - do_interrupt = rs_u >= rt_u; + *do_interrupt = rs_u >= rt_u; break; case TLT: - do_interrupt = rs < rt; + *do_interrupt = rs < rt; break; case TLTU: - do_interrupt = rs_u < rt_u; + *do_interrupt = rs_u < rt_u; break; case TEQ: - do_interrupt = rs == rt; + *do_interrupt = rs == rt; break; case TNE: - do_interrupt = rs != rt; + *do_interrupt = rs != rt; break; case MOVN: case MOVZ: @@ -1899,23 +1899,23 @@ void Simulator::ConfigureTypeRegister(Instruction* instr, break; default: UNREACHABLE(); - }; + } break; case SPECIAL2: switch (instr->FunctionFieldRaw()) { case MUL: - alu_out = rs_u * rt_u; // Only the lower 32 bits are kept. + *alu_out = rs_u * rt_u; // Only the lower 32 bits are kept. break; case CLZ: // MIPS32 spec: If no bits were set in GPR rs, the result written to // GPR rd is 32. // GCC __builtin_clz: If input is 0, the result is undefined. - alu_out = + *alu_out = rs_u == 0 ? 32 : CompilerIntrinsics::CountLeadingZeros(rs_u); break; default: UNREACHABLE(); - }; + } break; case SPECIAL3: switch (instr->FunctionFieldRaw()) { @@ -1926,7 +1926,7 @@ void Simulator::ConfigureTypeRegister(Instruction* instr, uint16_t lsb = sa; uint16_t size = msb - lsb + 1; uint32_t mask = (1 << size) - 1; - alu_out = (rt_u & ~(mask << lsb)) | ((rs_u & mask) << lsb); + *alu_out = (rt_u & ~(mask << lsb)) | ((rs_u & mask) << lsb); break; } case EXT: { // Mips32r2 instruction. @@ -1936,16 +1936,16 @@ void Simulator::ConfigureTypeRegister(Instruction* instr, uint16_t lsb = sa; uint16_t size = msb + 1; uint32_t mask = (1 << size) - 1; - alu_out = (rs_u & (mask << lsb)) >> lsb; + *alu_out = (rs_u & (mask << lsb)) >> lsb; break; } default: UNREACHABLE(); - }; + } break; default: UNREACHABLE(); - }; + } } @@ -1984,12 +1984,12 @@ void Simulator::DecodeTypeRegister(Instruction* instr) { // Set up the variables if needed before executing the instruction. ConfigureTypeRegister(instr, - alu_out, - i64hilo, - u64hilo, - next_pc, - return_addr_reg, - do_interrupt); + &alu_out, + &i64hilo, + &u64hilo, + &next_pc, + &return_addr_reg, + &do_interrupt); // ---------- Raise exceptions triggered. SignalExceptions(); @@ -2011,7 +2011,7 @@ void Simulator::DecodeTypeRegister(Instruction* instr) { break; case CTC1: // At the moment only FCSR is supported. - ASSERT(fs_reg == kFCSRRegister); + DCHECK(fs_reg == kFCSRRegister); FCSR_ = registers_[rt_reg]; break; case MTC1: @@ -2103,7 +2103,7 @@ void Simulator::DecodeTypeRegister(Instruction* instr) { break; case CVT_W_D: // Convert double to word. // Rounding modes are not yet supported. - ASSERT((FCSR_ & 3) == 0); + DCHECK((FCSR_ & 3) == 0); // In rounding mode 0 it should behave like ROUND. case ROUND_W_D: // Round double to word (round half to even). { @@ -2204,7 +2204,7 @@ void Simulator::DecodeTypeRegister(Instruction* instr) { break; default: UNREACHABLE(); - }; + } break; case L: switch (instr->FunctionFieldRaw()) { @@ -2226,7 +2226,7 @@ void Simulator::DecodeTypeRegister(Instruction* instr) { break; default: UNREACHABLE(); - }; + } break; case COP1X: switch (instr->FunctionFieldRaw()) { @@ -2239,7 +2239,7 @@ void Simulator::DecodeTypeRegister(Instruction* instr) { break; default: UNREACHABLE(); - }; + } break; case SPECIAL: switch (instr->FunctionFieldRaw()) { @@ -2320,7 +2320,7 @@ void Simulator::DecodeTypeRegister(Instruction* instr) { break; default: // For other special opcodes we do the default operation. 
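
[editor's note] The INS and EXT cases above are the MIPS32r2 bit-field insert and extract instructions. Their semantics in plain C++, guarded so a 32-bit field cannot shift by 32:

#include <cstdint>

// EXT: extract `size` bits of rs starting at bit `lsb`.
uint32_t Ext(uint32_t rs, int lsb, int size) {
  uint32_t mask = (size < 32) ? ((1u << size) - 1) : ~0u;
  return (rs >> lsb) & mask;
}

// INS: replace `size` bits of rt at bit `lsb` with the low bits of rs,
// leaving all other bits of rt untouched.
uint32_t Ins(uint32_t rt, uint32_t rs, int lsb, int size) {
  uint32_t mask = (size < 32) ? ((1u << size) - 1) : ~0u;
  return (rt & ~(mask << lsb)) | ((rs & mask) << lsb);
}
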
set_register(rd_reg, alu_out); - }; + } break; case SPECIAL2: switch (instr->FunctionFieldRaw()) { @@ -2346,14 +2346,14 @@ void Simulator::DecodeTypeRegister(Instruction* instr) { break; default: UNREACHABLE(); - }; + } break; // Unimplemented opcodes raised an error in the configuration step before, // so we can use the default here to set the destination register in common // cases. default: set_register(rd_reg, alu_out); - }; + } } @@ -2414,7 +2414,7 @@ void Simulator::DecodeTypeImmediate(Instruction* instr) { break; default: UNREACHABLE(); - }; + } break; // ------------- REGIMM class. case REGIMM: @@ -2433,7 +2433,7 @@ void Simulator::DecodeTypeImmediate(Instruction* instr) { break; default: UNREACHABLE(); - }; + } switch (instr->RtFieldRaw()) { case BLTZ: case BLTZAL: @@ -2452,7 +2452,7 @@ void Simulator::DecodeTypeImmediate(Instruction* instr) { } default: break; - }; + } break; // case REGIMM. // ------------- Branch instructions. // When comparing to zero, the encoding of rt field is always 0, so we don't @@ -2585,7 +2585,7 @@ void Simulator::DecodeTypeImmediate(Instruction* instr) { break; default: UNREACHABLE(); - }; + } // ---------- Raise exceptions triggered. SignalExceptions(); @@ -2661,7 +2661,7 @@ void Simulator::DecodeTypeImmediate(Instruction* instr) { break; default: break; - }; + } if (execute_branch_delay_instruction) { @@ -2847,7 +2847,7 @@ int32_t Simulator::Call(byte* entry, int argument_count, ...) { // Set up arguments. // First four arguments passed in registers. - ASSERT(argument_count >= 4); + DCHECK(argument_count >= 4); set_register(a0, va_arg(parameters, int32_t)); set_register(a1, va_arg(parameters, int32_t)); set_register(a2, va_arg(parameters, int32_t)); @@ -2858,8 +2858,8 @@ int32_t Simulator::Call(byte* entry, int argument_count, ...) { // Compute position of stack on entry to generated code. int entry_stack = (original_stack - (argument_count - 4) * sizeof(int32_t) - kCArgsSlotsSize); - if (OS::ActivationFrameAlignment() != 0) { - entry_stack &= -OS::ActivationFrameAlignment(); + if (base::OS::ActivationFrameAlignment() != 0) { + entry_stack &= -base::OS::ActivationFrameAlignment(); } // Store remaining arguments on stack, from low to high memory. intptr_t* stack_argument = reinterpret_cast<intptr_t*>(entry_stack); @@ -2886,10 +2886,10 @@ double Simulator::CallFP(byte* entry, double d0, double d1) { set_fpu_register_double(f14, d1); } else { int buffer[2]; - ASSERT(sizeof(buffer[0]) * 2 == sizeof(d0)); - OS::MemCopy(buffer, &d0, sizeof(d0)); + DCHECK(sizeof(buffer[0]) * 2 == sizeof(d0)); + memcpy(buffer, &d0, sizeof(d0)); set_dw_register(a0, buffer); - OS::MemCopy(buffer, &d1, sizeof(d1)); + memcpy(buffer, &d1, sizeof(d1)); set_dw_register(a2, buffer); } CallInternal(entry); diff --git a/deps/v8/src/mips/simulator-mips.h b/deps/v8/src/mips/simulator-mips.h index feeb7bcfc15..4c84b86db65 100644 --- a/deps/v8/src/mips/simulator-mips.h +++ b/deps/v8/src/mips/simulator-mips.h @@ -13,8 +13,8 @@ #ifndef V8_MIPS_SIMULATOR_MIPS_H_ #define V8_MIPS_SIMULATOR_MIPS_H_ -#include "allocation.h" -#include "constants-mips.h" +#include "src/allocation.h" +#include "src/mips/constants-mips.h" #if !defined(USE_SIMULATOR) // Running without a simulator on a native mips platform. 
@@ -38,9 +38,6 @@ typedef int (*mips_regexp_matcher)(String*, int, const byte*, const byte*, (FUNCTION_CAST<mips_regexp_matcher>(entry)( \ p0, p1, p2, p3, NULL, p4, p5, p6, p7, p8)) -#define TRY_CATCH_FROM_ADDRESS(try_catch_address) \ - reinterpret_cast<TryCatch*>(try_catch_address) - // The stack limit beyond which we will throw stack overflow errors in // generated code. Because generated code on mips uses the C stack, we // just use the C stack limit. @@ -73,8 +70,8 @@ class SimulatorStack : public v8::internal::AllStatic { #else // !defined(USE_SIMULATOR) // Running with a simulator. -#include "hashmap.h" -#include "assembler.h" +#include "src/assembler.h" +#include "src/hashmap.h" namespace v8 { namespace internal { @@ -266,12 +263,12 @@ class Simulator { // Helper function for DecodeTypeRegister. void ConfigureTypeRegister(Instruction* instr, - int32_t& alu_out, - int64_t& i64hilo, - uint64_t& u64hilo, - int32_t& next_pc, - int32_t& return_addr_reg, - bool& do_interrupt); + int32_t* alu_out, + int64_t* i64hilo, + uint64_t* u64hilo, + int32_t* next_pc, + int32_t* return_addr_reg, + bool* do_interrupt); void DecodeTypeImmediate(Instruction* instr); void DecodeTypeJump(Instruction* instr); @@ -390,10 +387,6 @@ class Simulator { Simulator::current(Isolate::Current())->Call( \ entry, 10, p0, p1, p2, p3, NULL, p4, p5, p6, p7, p8) -#define TRY_CATCH_FROM_ADDRESS(try_catch_address) \ - try_catch_address == NULL ? \ - NULL : *(reinterpret_cast<TryCatch**>(try_catch_address)) - // The simulator has its own stack. Thus it has a different stack limit from // the C-based native code. Setting the c_limit to indicate a very small diff --git a/deps/v8/src/mips/stub-cache-mips.cc b/deps/v8/src/mips/stub-cache-mips.cc index abccc9496b8..8f6af7a7fd6 100644 --- a/deps/v8/src/mips/stub-cache-mips.cc +++ b/deps/v8/src/mips/stub-cache-mips.cc @@ -2,13 +2,13 @@ // Use of this source code is governed by a BSD-style license that can be // found in the LICENSE file. -#include "v8.h" +#include "src/v8.h" #if V8_TARGET_ARCH_MIPS -#include "ic-inl.h" -#include "codegen.h" -#include "stub-cache.h" +#include "src/codegen.h" +#include "src/ic-inl.h" +#include "src/stub-cache.h" namespace v8 { namespace internal { @@ -36,12 +36,12 @@ static void ProbeTable(Isolate* isolate, uint32_t map_off_addr = reinterpret_cast<uint32_t>(map_offset.address()); // Check the relative positions of the address fields. 
- ASSERT(value_off_addr > key_off_addr); - ASSERT((value_off_addr - key_off_addr) % 4 == 0); - ASSERT((value_off_addr - key_off_addr) < (256 * 4)); - ASSERT(map_off_addr > key_off_addr); - ASSERT((map_off_addr - key_off_addr) % 4 == 0); - ASSERT((map_off_addr - key_off_addr) < (256 * 4)); + DCHECK(value_off_addr > key_off_addr); + DCHECK((value_off_addr - key_off_addr) % 4 == 0); + DCHECK((value_off_addr - key_off_addr) < (256 * 4)); + DCHECK(map_off_addr > key_off_addr); + DCHECK((map_off_addr - key_off_addr) % 4 == 0); + DCHECK((map_off_addr - key_off_addr) < (256 * 4)); Label miss; Register base_addr = scratch; @@ -94,14 +94,11 @@ static void ProbeTable(Isolate* isolate, } -void StubCompiler::GenerateDictionaryNegativeLookup(MacroAssembler* masm, - Label* miss_label, - Register receiver, - Handle<Name> name, - Register scratch0, - Register scratch1) { - ASSERT(name->IsUniqueName()); - ASSERT(!receiver.is(scratch0)); +void PropertyHandlerCompiler::GenerateDictionaryNegativeLookup( + MacroAssembler* masm, Label* miss_label, Register receiver, + Handle<Name> name, Register scratch0, Register scratch1) { + DCHECK(name->IsUniqueName()); + DCHECK(!receiver.is(scratch0)); Counters* counters = masm->isolate()->counters(); __ IncrementCounter(counters->negative_lookups(), 1, scratch0, scratch1); __ IncrementCounter(counters->negative_lookups_miss(), 1, scratch0, scratch1); @@ -160,27 +157,27 @@ void StubCache::GenerateProbe(MacroAssembler* masm, // Make sure that code is valid. The multiplying code relies on the // entry size being 12. - ASSERT(sizeof(Entry) == 12); + DCHECK(sizeof(Entry) == 12); // Make sure the flags does not name a specific type. - ASSERT(Code::ExtractTypeFromFlags(flags) == 0); + DCHECK(Code::ExtractTypeFromFlags(flags) == 0); // Make sure that there are no register conflicts. - ASSERT(!scratch.is(receiver)); - ASSERT(!scratch.is(name)); - ASSERT(!extra.is(receiver)); - ASSERT(!extra.is(name)); - ASSERT(!extra.is(scratch)); - ASSERT(!extra2.is(receiver)); - ASSERT(!extra2.is(name)); - ASSERT(!extra2.is(scratch)); - ASSERT(!extra2.is(extra)); + DCHECK(!scratch.is(receiver)); + DCHECK(!scratch.is(name)); + DCHECK(!extra.is(receiver)); + DCHECK(!extra.is(name)); + DCHECK(!extra.is(scratch)); + DCHECK(!extra2.is(receiver)); + DCHECK(!extra2.is(name)); + DCHECK(!extra2.is(scratch)); + DCHECK(!extra2.is(extra)); // Check register validity. - ASSERT(!scratch.is(no_reg)); - ASSERT(!extra.is(no_reg)); - ASSERT(!extra2.is(no_reg)); - ASSERT(!extra3.is(no_reg)); + DCHECK(!scratch.is(no_reg)); + DCHECK(!extra.is(no_reg)); + DCHECK(!extra2.is(no_reg)); + DCHECK(!extra3.is(no_reg)); Counters* counters = masm->isolate()->counters(); __ IncrementCounter(counters->megamorphic_stub_cache_probes(), 1, @@ -196,8 +193,8 @@ void StubCache::GenerateProbe(MacroAssembler* masm, uint32_t mask = kPrimaryTableSize - 1; // We shift out the last two bits because they are not part of the hash and // they are always 01 for maps. - __ srl(scratch, scratch, kHeapObjectTagSize); - __ Xor(scratch, scratch, Operand((flags >> kHeapObjectTagSize) & mask)); + __ srl(scratch, scratch, kCacheIndexShift); + __ Xor(scratch, scratch, Operand((flags >> kCacheIndexShift) & mask)); __ And(scratch, scratch, Operand(mask)); // Probe the primary table. @@ -213,10 +210,10 @@ void StubCache::GenerateProbe(MacroAssembler* masm, extra3); // Primary miss: Compute hash for secondary probe. 
- __ srl(at, name, kHeapObjectTagSize); + __ srl(at, name, kCacheIndexShift); __ Subu(scratch, scratch, at); uint32_t mask2 = kSecondaryTableSize - 1; - __ Addu(scratch, scratch, Operand((flags >> kHeapObjectTagSize) & mask2)); + __ Addu(scratch, scratch, Operand((flags >> kCacheIndexShift) & mask2)); __ And(scratch, scratch, Operand(mask2)); // Probe the secondary table. @@ -239,30 +236,8 @@ void StubCache::GenerateProbe(MacroAssembler* masm, } -void StubCompiler::GenerateLoadGlobalFunctionPrototype(MacroAssembler* masm, - int index, - Register prototype) { - // Load the global or builtins object from the current context. - __ lw(prototype, - MemOperand(cp, Context::SlotOffset(Context::GLOBAL_OBJECT_INDEX))); - // Load the native context from the global or builtins object. - __ lw(prototype, - FieldMemOperand(prototype, GlobalObject::kNativeContextOffset)); - // Load the function from the native context. - __ lw(prototype, MemOperand(prototype, Context::SlotOffset(index))); - // Load the initial map. The global functions all have initial maps. - __ lw(prototype, - FieldMemOperand(prototype, JSFunction::kPrototypeOrInitialMapOffset)); - // Load the prototype from the initial map. - __ lw(prototype, FieldMemOperand(prototype, Map::kPrototypeOffset)); -} - - -void StubCompiler::GenerateDirectLoadGlobalFunctionPrototype( - MacroAssembler* masm, - int index, - Register prototype, - Label* miss) { +void NamedLoadHandlerCompiler::GenerateDirectLoadGlobalFunctionPrototype( + MacroAssembler* masm, int index, Register prototype, Label* miss) { Isolate* isolate = masm->isolate(); // Get the global function with the given index. Handle<JSFunction> function( @@ -284,59 +259,20 @@ void StubCompiler::GenerateDirectLoadGlobalFunctionPrototype( } -void StubCompiler::GenerateFastPropertyLoad(MacroAssembler* masm, - Register dst, - Register src, - bool inobject, - int index, - Representation representation) { - ASSERT(!representation.IsDouble()); - int offset = index * kPointerSize; - if (!inobject) { - // Calculate the offset into the properties array. - offset = offset + FixedArray::kHeaderSize; - __ lw(dst, FieldMemOperand(src, JSObject::kPropertiesOffset)); - src = dst; - } - __ lw(dst, FieldMemOperand(src, offset)); -} - - -void StubCompiler::GenerateLoadArrayLength(MacroAssembler* masm, - Register receiver, - Register scratch, - Label* miss_label) { - // Check that the receiver isn't a smi. - __ JumpIfSmi(receiver, miss_label); - - // Check that the object is a JS array. - __ GetObjectType(receiver, scratch, scratch); - __ Branch(miss_label, ne, scratch, Operand(JS_ARRAY_TYPE)); - - // Load length directly from the JS array. 
- __ Ret(USE_DELAY_SLOT); - __ lw(v0, FieldMemOperand(receiver, JSArray::kLengthOffset)); -} - - -void StubCompiler::GenerateLoadFunctionPrototype(MacroAssembler* masm, - Register receiver, - Register scratch1, - Register scratch2, - Label* miss_label) { +void NamedLoadHandlerCompiler::GenerateLoadFunctionPrototype( + MacroAssembler* masm, Register receiver, Register scratch1, + Register scratch2, Label* miss_label) { __ TryGetFunctionPrototype(receiver, scratch1, scratch2, miss_label); __ Ret(USE_DELAY_SLOT); __ mov(v0, scratch1); } -void StubCompiler::GenerateCheckPropertyCell(MacroAssembler* masm, - Handle<JSGlobalObject> global, - Handle<Name> name, - Register scratch, - Label* miss) { +void PropertyHandlerCompiler::GenerateCheckPropertyCell( + MacroAssembler* masm, Handle<JSGlobalObject> global, Handle<Name> name, + Register scratch, Label* miss) { Handle<Cell> cell = JSGlobalObject::EnsurePropertyCell(global, name); - ASSERT(cell->value()->IsTheHole()); + DCHECK(cell->value()->IsTheHole()); __ li(scratch, Operand(cell)); __ lw(scratch, FieldMemOperand(scratch, Cell::kValueOffset)); __ LoadRoot(at, Heap::kTheHoleValueRootIndex); @@ -344,18 +280,126 @@ void StubCompiler::GenerateCheckPropertyCell(MacroAssembler* masm, } -void StoreStubCompiler::GenerateNegativeHolderLookup( +static void PushInterceptorArguments(MacroAssembler* masm, + Register receiver, + Register holder, + Register name, + Handle<JSObject> holder_obj) { + STATIC_ASSERT(NamedLoadHandlerCompiler::kInterceptorArgsNameIndex == 0); + STATIC_ASSERT(NamedLoadHandlerCompiler::kInterceptorArgsInfoIndex == 1); + STATIC_ASSERT(NamedLoadHandlerCompiler::kInterceptorArgsThisIndex == 2); + STATIC_ASSERT(NamedLoadHandlerCompiler::kInterceptorArgsHolderIndex == 3); + STATIC_ASSERT(NamedLoadHandlerCompiler::kInterceptorArgsLength == 4); + __ push(name); + Handle<InterceptorInfo> interceptor(holder_obj->GetNamedInterceptor()); + DCHECK(!masm->isolate()->heap()->InNewSpace(*interceptor)); + Register scratch = name; + __ li(scratch, Operand(interceptor)); + __ Push(scratch, receiver, holder); +} + + +static void CompileCallLoadPropertyWithInterceptor( MacroAssembler* masm, - Handle<JSObject> holder, - Register holder_reg, - Handle<Name> name, - Label* miss) { - if (holder->IsJSGlobalObject()) { - GenerateCheckPropertyCell( - masm, Handle<JSGlobalObject>::cast(holder), name, scratch1(), miss); - } else if (!holder->HasFastProperties() && !holder->IsJSGlobalProxy()) { - GenerateDictionaryNegativeLookup( - masm, miss, holder_reg, name, scratch1(), scratch2()); + Register receiver, + Register holder, + Register name, + Handle<JSObject> holder_obj, + IC::UtilityId id) { + PushInterceptorArguments(masm, receiver, holder, name, holder_obj); + __ CallExternalReference(ExternalReference(IC_Utility(id), masm->isolate()), + NamedLoadHandlerCompiler::kInterceptorArgsLength); +} + + +// Generate call to api function. +void PropertyHandlerCompiler::GenerateFastApiCall( + MacroAssembler* masm, const CallOptimization& optimization, + Handle<Map> receiver_map, Register receiver, Register scratch_in, + bool is_store, int argc, Register* values) { + DCHECK(!receiver.is(scratch_in)); + // Preparing to push, adjust sp. + __ Subu(sp, sp, Operand((argc + 1) * kPointerSize)); + __ sw(receiver, MemOperand(sp, argc * kPointerSize)); // Push receiver. + // Write the arguments to stack frame. 
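GenerateFastApiCall above reserves argc + 1 stack slots and stores the receiver in the highest one; the loop that follows fills the remaining slots so that values[0] ends up closest to sp. A host-side model of the resulting frame layout (the types and function name are stand-ins; nothing here runs on the simulated stack):

#include <cstddef>
#include <cstdint>
#include <vector>

// Slot j holds values[j]; the receiver sits one slot above the last
// argument, at sp + argc * kPointerSize in the generated code.
std::vector<intptr_t> BuildApiFrame(intptr_t receiver,
                                    const std::vector<intptr_t>& values) {
  std::vector<intptr_t> frame(values.size() + 1);
  for (std::size_t j = 0; j < values.size(); ++j) {
    frame[j] = values[j];           // __ sw(arg, MemOperand(sp, j * kPointerSize))
  }
  frame[values.size()] = receiver;  // __ sw(receiver, MemOperand(sp, argc * ...))
  return frame;
}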
+ for (int i = 0; i < argc; i++) { + Register arg = values[argc-1-i]; + DCHECK(!receiver.is(arg)); + DCHECK(!scratch_in.is(arg)); + __ sw(arg, MemOperand(sp, (argc-1-i) * kPointerSize)); // Push arg. + } + DCHECK(optimization.is_simple_api_call()); + + // Abi for CallApiFunctionStub. + Register callee = a0; + Register call_data = t0; + Register holder = a2; + Register api_function_address = a1; + + // Put holder in place. + CallOptimization::HolderLookup holder_lookup; + Handle<JSObject> api_holder = optimization.LookupHolderOfExpectedType( + receiver_map, + &holder_lookup); + switch (holder_lookup) { + case CallOptimization::kHolderIsReceiver: + __ Move(holder, receiver); + break; + case CallOptimization::kHolderFound: + __ li(holder, api_holder); + break; + case CallOptimization::kHolderNotFound: + UNREACHABLE(); + break; + } + + Isolate* isolate = masm->isolate(); + Handle<JSFunction> function = optimization.constant_function(); + Handle<CallHandlerInfo> api_call_info = optimization.api_call_info(); + Handle<Object> call_data_obj(api_call_info->data(), isolate); + + // Put callee in place. + __ li(callee, function); + + bool call_data_undefined = false; + // Put call_data in place. + if (isolate->heap()->InNewSpace(*call_data_obj)) { + __ li(call_data, api_call_info); + __ lw(call_data, FieldMemOperand(call_data, CallHandlerInfo::kDataOffset)); + } else if (call_data_obj->IsUndefined()) { + call_data_undefined = true; + __ LoadRoot(call_data, Heap::kUndefinedValueRootIndex); + } else { + __ li(call_data, call_data_obj); + } + // Put api_function_address in place. + Address function_address = v8::ToCData<Address>(api_call_info->callback()); + ApiFunction fun(function_address); + ExternalReference::Type type = ExternalReference::DIRECT_API_CALL; + ExternalReference ref = ExternalReference(&fun, type, masm->isolate()); + __ li(api_function_address, Operand(ref)); + + // Jump to stub. + CallApiFunctionStub stub(isolate, is_store, call_data_undefined, argc); + __ TailCallStub(&stub); +} + + +void PropertyAccessCompiler::GenerateTailCall(MacroAssembler* masm, + Handle<Code> code) { + __ Jump(code, RelocInfo::CODE_TARGET); +} + + +#undef __ +#define __ ACCESS_MASM(masm()) + + +void NamedStoreHandlerCompiler::GenerateRestoreName(Label* label, + Handle<Name> name) { + if (!label->is_unused()) { + __ bind(label); + __ li(this->name(), Operand(name)); } } @@ -363,19 +407,10 @@ void StoreStubCompiler::GenerateNegativeHolderLookup( // Generate StoreTransition code, value is passed in a0 register. // After executing generated code, the receiver_reg and name_reg // may be clobbered. -void StoreStubCompiler::GenerateStoreTransition(MacroAssembler* masm, - Handle<JSObject> object, - LookupResult* lookup, - Handle<Map> transition, - Handle<Name> name, - Register receiver_reg, - Register storage_reg, - Register value_reg, - Register scratch1, - Register scratch2, - Register scratch3, - Label* miss_label, - Label* slow) { +void NamedStoreHandlerCompiler::GenerateStoreTransition( + Handle<Map> transition, Handle<Name> name, Register receiver_reg, + Register storage_reg, Register value_reg, Register scratch1, + Register scratch2, Register scratch3, Label* miss_label, Label* slow) { // a0 : value. 
Label exit; @@ -383,10 +418,10 @@ void StoreStubCompiler::GenerateStoreTransition(MacroAssembler* masm, DescriptorArray* descriptors = transition->instance_descriptors(); PropertyDetails details = descriptors->GetDetails(descriptor); Representation representation = details.representation(); - ASSERT(!representation.IsNone()); + DCHECK(!representation.IsNone()); if (details.type() == CONSTANT) { - Handle<Object> constant(descriptors->GetValue(descriptor), masm->isolate()); + Handle<Object> constant(descriptors->GetValue(descriptor), isolate()); __ li(scratch1, constant); __ Branch(miss_label, ne, value_reg, Operand(scratch1)); } else if (representation.IsSmi()) { @@ -413,8 +448,9 @@ void StoreStubCompiler::GenerateStoreTransition(MacroAssembler* masm, } } else if (representation.IsDouble()) { Label do_store, heap_number; - __ LoadRoot(scratch3, Heap::kHeapNumberMapRootIndex); - __ AllocateHeapNumber(storage_reg, scratch1, scratch2, scratch3, slow); + __ LoadRoot(scratch3, Heap::kMutableHeapNumberMapRootIndex); + __ AllocateHeapNumber(storage_reg, scratch1, scratch2, scratch3, slow, + TAG_RESULT, MUTABLE); __ JumpIfNotSmi(value_reg, &heap_number); __ SmiUntag(scratch1, value_reg); @@ -431,13 +467,12 @@ void StoreStubCompiler::GenerateStoreTransition(MacroAssembler* masm, __ sdc1(f4, FieldMemOperand(storage_reg, HeapNumber::kValueOffset)); } - // Stub never generated for non-global objects that require access - // checks. - ASSERT(object->IsJSGlobalProxy() || !object->IsAccessCheckNeeded()); + // Stub never generated for objects that require access checks. + DCHECK(!transition->is_access_check_needed()); // Perform map transition for the receiver if necessary. if (details.type() == FIELD && - object->map()->unused_property_fields() == 0) { + Map::cast(transition->GetBackPointer())->unused_property_fields() == 0) { // The properties must be extended before we can store the value. // We jump to a runtime call that extends the properties array. __ push(receiver_reg); @@ -445,7 +480,7 @@ void StoreStubCompiler::GenerateStoreTransition(MacroAssembler* masm, __ Push(a2, a0); __ TailCallExternalReference( ExternalReference(IC_Utility(IC::kSharedStoreIC_ExtendStorage), - masm->isolate()), + isolate()), 3, 1); return; } @@ -465,7 +500,7 @@ void StoreStubCompiler::GenerateStoreTransition(MacroAssembler* masm, OMIT_SMI_CHECK); if (details.type() == CONSTANT) { - ASSERT(value_reg.is(a0)); + DCHECK(value_reg.is(a0)); __ Ret(USE_DELAY_SLOT); __ mov(v0, a0); return; @@ -477,14 +512,14 @@ void StoreStubCompiler::GenerateStoreTransition(MacroAssembler* masm, // Adjust for the number of properties stored in the object. Even in the // face of a transition we can use the old map here because the size of the // object and the number of in-object properties is not going to change. - index -= object->map()->inobject_properties(); + index -= transition->inobject_properties(); // TODO(verwaest): Share this code as a code stub. SmiCheck smi_check = representation.IsTagged() ? INLINE_SMI_CHECK : OMIT_SMI_CHECK; if (index < 0) { // Set the property straight into the object. - int offset = object->map()->instance_size() + (index * kPointerSize); + int offset = transition->instance_size() + (index * kPointerSize); if (representation.IsDouble()) { __ sw(storage_reg, FieldMemOperand(receiver_reg, offset)); } else { @@ -534,302 +569,49 @@ void StoreStubCompiler::GenerateStoreTransition(MacroAssembler* masm, } // Return the value (register v0). 
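The double-representation branch above converts the incoming value before boxing it in a (now mutable) heap number: a smi is untagged and widened, while a heap number's payload is loaded directly. The same decision in host C++, with stand-in tagging helpers (this sketch assumes V8's 32-bit one-bit smi tag; the function name is mine):

#include <cstdint>

const int kSmiTagSize = 1;  // low bit 0 = smi, 1 = heap object

// Returns the double that ends up in the field's backing store.
double StoredDouble(int32_t tagged, double heap_number_payload) {
  if ((tagged & 1) == 0) {
    // Smi path: SmiUntag + mtc1/cvt_d_w in the generated code.
    return static_cast<double>(tagged >> kSmiTagSize);
  }
  // Heap-number path: ldc1 from HeapNumber::kValueOffset.
  return heap_number_payload;
}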
- ASSERT(value_reg.is(a0)); - __ bind(&exit); - __ Ret(USE_DELAY_SLOT); - __ mov(v0, a0); -} - - -// Generate StoreField code, value is passed in a0 register. -// When leaving generated code after success, the receiver_reg and name_reg -// may be clobbered. Upon branch to miss_label, the receiver and name -// registers have their original values. -void StoreStubCompiler::GenerateStoreField(MacroAssembler* masm, - Handle<JSObject> object, - LookupResult* lookup, - Register receiver_reg, - Register name_reg, - Register value_reg, - Register scratch1, - Register scratch2, - Label* miss_label) { - // a0 : value - Label exit; - - // Stub never generated for non-global objects that require access - // checks. - ASSERT(object->IsJSGlobalProxy() || !object->IsAccessCheckNeeded()); - - int index = lookup->GetFieldIndex().field_index(); - - // Adjust for the number of properties stored in the object. Even in the - // face of a transition we can use the old map here because the size of the - // object and the number of in-object properties is not going to change. - index -= object->map()->inobject_properties(); - - Representation representation = lookup->representation(); - ASSERT(!representation.IsNone()); - if (representation.IsSmi()) { - __ JumpIfNotSmi(value_reg, miss_label); - } else if (representation.IsHeapObject()) { - __ JumpIfSmi(value_reg, miss_label); - HeapType* field_type = lookup->GetFieldType(); - HeapType::Iterator<Map> it = field_type->Classes(); - if (!it.Done()) { - __ lw(scratch1, FieldMemOperand(value_reg, HeapObject::kMapOffset)); - Label do_store; - Handle<Map> current; - while (true) { - // Do the CompareMap() directly within the Branch() functions. - current = it.Current(); - it.Advance(); - if (it.Done()) { - __ Branch(miss_label, ne, scratch1, Operand(current)); - break; - } - __ Branch(&do_store, eq, scratch1, Operand(current)); - } - __ bind(&do_store); - } - } else if (representation.IsDouble()) { - // Load the double storage. - if (index < 0) { - int offset = object->map()->instance_size() + (index * kPointerSize); - __ lw(scratch1, FieldMemOperand(receiver_reg, offset)); - } else { - __ lw(scratch1, - FieldMemOperand(receiver_reg, JSObject::kPropertiesOffset)); - int offset = index * kPointerSize + FixedArray::kHeaderSize; - __ lw(scratch1, FieldMemOperand(scratch1, offset)); - } - - // Store the value into the storage. - Label do_store, heap_number; - __ JumpIfNotSmi(value_reg, &heap_number); - __ SmiUntag(scratch2, value_reg); - __ mtc1(scratch2, f6); - __ cvt_d_w(f4, f6); - __ jmp(&do_store); - - __ bind(&heap_number); - __ CheckMap(value_reg, scratch2, Heap::kHeapNumberMapRootIndex, - miss_label, DONT_DO_SMI_CHECK); - __ ldc1(f4, FieldMemOperand(value_reg, HeapNumber::kValueOffset)); - - __ bind(&do_store); - __ sdc1(f4, FieldMemOperand(scratch1, HeapNumber::kValueOffset)); - // Return the value (register v0). - ASSERT(value_reg.is(a0)); - __ Ret(USE_DELAY_SLOT); - __ mov(v0, a0); - return; - } - - // TODO(verwaest): Share this code as a code stub. - SmiCheck smi_check = representation.IsTagged() - ? INLINE_SMI_CHECK : OMIT_SMI_CHECK; - if (index < 0) { - // Set the property straight into the object. - int offset = object->map()->instance_size() + (index * kPointerSize); - __ sw(value_reg, FieldMemOperand(receiver_reg, offset)); - - if (!representation.IsSmi()) { - // Skip updating write barrier if storing a smi. - __ JumpIfSmi(value_reg, &exit); - - // Update the write barrier for the array address. - // Pass the now unused name_reg as a scratch register. 
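The deleted GenerateStoreField above skips the write barrier whenever the stored value is a smi (its JumpIfSmi hops straight to the exit). The reason, as standalone C++ (tag constant per V8's pointer-tagging scheme; the predicate name is mine):

#include <cstdint>

const intptr_t kSmiTagMask = 1;  // low bit 0 = smi, 1 = heap pointer

// A smi is an immediate integer encoded in the word itself, so storing
// one never introduces a pointer the incremental marker must track.
bool NeedsWriteBarrier(intptr_t tagged_value) {
  return (tagged_value & kSmiTagMask) != 0;
}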
- __ mov(name_reg, value_reg); - __ RecordWriteField(receiver_reg, - offset, - name_reg, - scratch1, - kRAHasNotBeenSaved, - kDontSaveFPRegs, - EMIT_REMEMBERED_SET, - smi_check); - } - } else { - // Write to the properties array. - int offset = index * kPointerSize + FixedArray::kHeaderSize; - // Get the properties array. - __ lw(scratch1, - FieldMemOperand(receiver_reg, JSObject::kPropertiesOffset)); - __ sw(value_reg, FieldMemOperand(scratch1, offset)); - - if (!representation.IsSmi()) { - // Skip updating write barrier if storing a smi. - __ JumpIfSmi(value_reg, &exit); - - // Update the write barrier for the array address. - // Ok to clobber receiver_reg and name_reg, since we return. - __ mov(name_reg, value_reg); - __ RecordWriteField(scratch1, - offset, - name_reg, - receiver_reg, - kRAHasNotBeenSaved, - kDontSaveFPRegs, - EMIT_REMEMBERED_SET, - smi_check); - } - } - - // Return the value (register v0). - ASSERT(value_reg.is(a0)); + DCHECK(value_reg.is(a0)); __ bind(&exit); __ Ret(USE_DELAY_SLOT); __ mov(v0, a0); } -void StoreStubCompiler::GenerateRestoreName(MacroAssembler* masm, - Label* label, - Handle<Name> name) { - if (!label->is_unused()) { - __ bind(label); - __ li(this->name(), Operand(name)); - } -} - - -static void PushInterceptorArguments(MacroAssembler* masm, - Register receiver, - Register holder, - Register name, - Handle<JSObject> holder_obj) { - STATIC_ASSERT(StubCache::kInterceptorArgsNameIndex == 0); - STATIC_ASSERT(StubCache::kInterceptorArgsInfoIndex == 1); - STATIC_ASSERT(StubCache::kInterceptorArgsThisIndex == 2); - STATIC_ASSERT(StubCache::kInterceptorArgsHolderIndex == 3); - STATIC_ASSERT(StubCache::kInterceptorArgsLength == 4); - __ push(name); - Handle<InterceptorInfo> interceptor(holder_obj->GetNamedInterceptor()); - ASSERT(!masm->isolate()->heap()->InNewSpace(*interceptor)); - Register scratch = name; - __ li(scratch, Operand(interceptor)); - __ Push(scratch, receiver, holder); -} - - -static void CompileCallLoadPropertyWithInterceptor( - MacroAssembler* masm, - Register receiver, - Register holder, - Register name, - Handle<JSObject> holder_obj, - IC::UtilityId id) { - PushInterceptorArguments(masm, receiver, holder, name, holder_obj); - __ CallExternalReference( - ExternalReference(IC_Utility(id), masm->isolate()), - StubCache::kInterceptorArgsLength); -} - - -// Generate call to api function. -void StubCompiler::GenerateFastApiCall(MacroAssembler* masm, - const CallOptimization& optimization, - Handle<Map> receiver_map, - Register receiver, - Register scratch_in, - bool is_store, - int argc, - Register* values) { - ASSERT(!receiver.is(scratch_in)); - // Preparing to push, adjust sp. - __ Subu(sp, sp, Operand((argc + 1) * kPointerSize)); - __ sw(receiver, MemOperand(sp, argc * kPointerSize)); // Push receiver. - // Write the arguments to stack frame. - for (int i = 0; i < argc; i++) { - Register arg = values[argc-1-i]; - ASSERT(!receiver.is(arg)); - ASSERT(!scratch_in.is(arg)); - __ sw(arg, MemOperand(sp, (argc-1-i) * kPointerSize)); // Push arg. - } - ASSERT(optimization.is_simple_api_call()); - - // Abi for CallApiFunctionStub. - Register callee = a0; - Register call_data = t0; - Register holder = a2; - Register api_function_address = a1; - - // Put holder in place. 
- CallOptimization::HolderLookup holder_lookup; - Handle<JSObject> api_holder = optimization.LookupHolderOfExpectedType( - receiver_map, - &holder_lookup); - switch (holder_lookup) { - case CallOptimization::kHolderIsReceiver: - __ Move(holder, receiver); - break; - case CallOptimization::kHolderFound: - __ li(holder, api_holder); - break; - case CallOptimization::kHolderNotFound: - UNREACHABLE(); +void NamedStoreHandlerCompiler::GenerateStoreField(LookupResult* lookup, + Register value_reg, + Label* miss_label) { + DCHECK(lookup->representation().IsHeapObject()); + __ JumpIfSmi(value_reg, miss_label); + HeapType::Iterator<Map> it = lookup->GetFieldType()->Classes(); + __ lw(scratch1(), FieldMemOperand(value_reg, HeapObject::kMapOffset)); + Label do_store; + Handle<Map> current; + while (true) { + // Do the CompareMap() directly within the Branch() functions. + current = it.Current(); + it.Advance(); + if (it.Done()) { + __ Branch(miss_label, ne, scratch1(), Operand(current)); break; + } + __ Branch(&do_store, eq, scratch1(), Operand(current)); } + __ bind(&do_store); - Isolate* isolate = masm->isolate(); - Handle<JSFunction> function = optimization.constant_function(); - Handle<CallHandlerInfo> api_call_info = optimization.api_call_info(); - Handle<Object> call_data_obj(api_call_info->data(), isolate); - - // Put callee in place. - __ li(callee, function); - - bool call_data_undefined = false; - // Put call_data in place. - if (isolate->heap()->InNewSpace(*call_data_obj)) { - __ li(call_data, api_call_info); - __ lw(call_data, FieldMemOperand(call_data, CallHandlerInfo::kDataOffset)); - } else if (call_data_obj->IsUndefined()) { - call_data_undefined = true; - __ LoadRoot(call_data, Heap::kUndefinedValueRootIndex); - } else { - __ li(call_data, call_data_obj); - } - // Put api_function_address in place. - Address function_address = v8::ToCData<Address>(api_call_info->callback()); - ApiFunction fun(function_address); - ExternalReference::Type type = ExternalReference::DIRECT_API_CALL; - ExternalReference ref = - ExternalReference(&fun, - type, - masm->isolate()); - __ li(api_function_address, Operand(ref)); - - // Jump to stub. - CallApiFunctionStub stub(isolate, is_store, call_data_undefined, argc); - __ TailCallStub(&stub); -} - - -void StubCompiler::GenerateTailCall(MacroAssembler* masm, Handle<Code> code) { - __ Jump(code, RelocInfo::CODE_TARGET); + StoreFieldStub stub(isolate(), lookup->GetFieldIndex(), + lookup->representation()); + GenerateTailCall(masm(), stub.GetCode()); } -#undef __ -#define __ ACCESS_MASM(masm()) - - -Register StubCompiler::CheckPrototypes(Handle<HeapType> type, - Register object_reg, - Handle<JSObject> holder, - Register holder_reg, - Register scratch1, - Register scratch2, - Handle<Name> name, - Label* miss, - PrototypeCheckType check) { - Handle<Map> receiver_map(IC::TypeToMap(*type, isolate())); +Register PropertyHandlerCompiler::CheckPrototypes( + Register object_reg, Register holder_reg, Register scratch1, + Register scratch2, Handle<Name> name, Label* miss, + PrototypeCheckType check) { + Handle<Map> receiver_map(IC::TypeToMap(*type(), isolate())); // Make sure there's no overlap between holder and object registers. - ASSERT(!scratch1.is(object_reg) && !scratch1.is(holder_reg)); - ASSERT(!scratch2.is(object_reg) && !scratch2.is(holder_reg) + DCHECK(!scratch1.is(object_reg) && !scratch1.is(holder_reg)); + DCHECK(!scratch2.is(object_reg) && !scratch2.is(holder_reg) && !scratch2.is(scratch1)); // Keep track of the current object in register reg. 
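CheckPrototypes, whose body continues in the next hunk, walks the prototype chain and compares each object's map against the maps recorded when the handler was compiled; the new depth == 1 case in that hunk additionally loads the prototype out of the map itself, so one handler can serve every receiver whose prototype shares that map. The emitted check, modelled as host C++ (the struct and names are stand-ins for V8's types):

#include <cstddef>

struct Obj {
  const void* map;   // stand-in for the hidden class
  const Obj* proto;  // next object on the prototype chain
};

// Returns true when every map on the chain still matches; the generated
// code instead branches to the miss label at the first mismatch.
bool ChainMatches(const Obj* receiver, const void* const* expected_maps,
                  std::size_t depth) {
  const Obj* current = receiver;
  for (std::size_t i = 0; i < depth; ++i) {
    if (current->map != expected_maps[i]) return false;
    current = current->proto;
  }
  return true;  // falls through to the fast-path load/store
}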
@@ -837,12 +619,12 @@ Register StubCompiler::CheckPrototypes(Handle<HeapType> type, int depth = 0; Handle<JSObject> current = Handle<JSObject>::null(); - if (type->IsConstant()) { - current = Handle<JSObject>::cast(type->AsConstant()->Value()); + if (type()->IsConstant()) { + current = Handle<JSObject>::cast(type()->AsConstant()->Value()); } Handle<JSObject> prototype = Handle<JSObject>::null(); Handle<Map> current_map = receiver_map; - Handle<Map> holder_map(holder->map()); + Handle<Map> holder_map(holder()->map()); // Traverse the prototype chain and check the maps in the prototype chain for // fast and global objects or do negative lookup for normal objects. while (!current_map.is_identical_to(holder_map)) { @@ -850,18 +632,18 @@ Register StubCompiler::CheckPrototypes(Handle<HeapType> type, // Only global objects and objects that do not require access // checks are allowed in stubs. - ASSERT(current_map->IsJSGlobalProxyMap() || + DCHECK(current_map->IsJSGlobalProxyMap() || !current_map->is_access_check_needed()); prototype = handle(JSObject::cast(current_map->prototype())); if (current_map->is_dictionary_map() && - !current_map->IsJSGlobalObjectMap() && - !current_map->IsJSGlobalProxyMap()) { + !current_map->IsJSGlobalObjectMap()) { + DCHECK(!current_map->IsJSGlobalProxyMap()); // Proxy maps are fast. if (!name->IsUniqueName()) { - ASSERT(name->IsString()); + DCHECK(name->IsString()); name = factory()->InternalizeString(Handle<String>::cast(name)); } - ASSERT(current.is_null() || + DCHECK(current.is_null() || current->property_dictionary()->FindEntry(name) == NameDictionary::kNotFound); @@ -883,6 +665,9 @@ Register StubCompiler::CheckPrototypes(Handle<HeapType> type, // Check access rights to the global object. This has to happen after // the map check so that we know that the object is actually a global // object. + // This allows us to install generated handlers for accesses to the + // global proxy (as opposed to using slow ICs). See corresponding code + // in LookupForRead(). if (current_map->IsJSGlobalProxyMap()) { __ CheckAccessGlobalProxy(reg, scratch2, miss); } else if (current_map->IsJSGlobalObjectMap()) { @@ -893,12 +678,15 @@ Register StubCompiler::CheckPrototypes(Handle<HeapType> type, reg = holder_reg; // From now on the object will be in holder_reg. - if (heap()->InNewSpace(*prototype)) { - // The prototype is in new space; we cannot store a reference to it - // in the code. Load it from the map. + // Two possible reasons for loading the prototype from the map: + // (1) Can't store references to new space in code. + // (2) Handler is shared for all receivers with the same prototype + // map (but not necessarily the same prototype instance). + bool load_prototype_from_map = + heap()->InNewSpace(*prototype) || depth == 1; + if (load_prototype_from_map) { __ lw(reg, FieldMemOperand(map_reg, Map::kPrototypeOffset)); } else { - // The prototype is in old space; load it directly. __ li(reg, Operand(prototype)); } } @@ -917,7 +705,7 @@ Register StubCompiler::CheckPrototypes(Handle<HeapType> type, } // Perform security check for access to the global object. 
- ASSERT(current_map->IsJSGlobalProxyMap() || + DCHECK(current_map->IsJSGlobalProxyMap() || !current_map->is_access_check_needed()); if (current_map->IsJSGlobalProxyMap()) { __ CheckAccessGlobalProxy(reg, scratch1, miss); @@ -928,7 +716,7 @@ Register StubCompiler::CheckPrototypes(Handle<HeapType> type, } -void LoadStubCompiler::HandlerFrontendFooter(Handle<Name> name, Label* miss) { +void NamedLoadHandlerCompiler::FrontendFooter(Handle<Name> name, Label* miss) { if (!miss->is_unused()) { Label success; __ Branch(&success); @@ -939,93 +727,26 @@ void LoadStubCompiler::HandlerFrontendFooter(Handle<Name> name, Label* miss) { } -void StoreStubCompiler::HandlerFrontendFooter(Handle<Name> name, Label* miss) { +void NamedStoreHandlerCompiler::FrontendFooter(Handle<Name> name, Label* miss) { if (!miss->is_unused()) { Label success; __ Branch(&success); - GenerateRestoreName(masm(), miss, name); + GenerateRestoreName(miss, name); TailCallBuiltin(masm(), MissBuiltin(kind())); __ bind(&success); } } -Register LoadStubCompiler::CallbackHandlerFrontend( - Handle<HeapType> type, - Register object_reg, - Handle<JSObject> holder, - Handle<Name> name, - Handle<Object> callback) { - Label miss; - - Register reg = HandlerFrontendHeader(type, object_reg, holder, name, &miss); - - if (!holder->HasFastProperties() && !holder->IsJSGlobalObject()) { - ASSERT(!reg.is(scratch2())); - ASSERT(!reg.is(scratch3())); - ASSERT(!reg.is(scratch4())); - - // Load the properties dictionary. - Register dictionary = scratch4(); - __ lw(dictionary, FieldMemOperand(reg, JSObject::kPropertiesOffset)); - - // Probe the dictionary. - Label probe_done; - NameDictionaryLookupStub::GeneratePositiveLookup(masm(), - &miss, - &probe_done, - dictionary, - this->name(), - scratch2(), - scratch3()); - __ bind(&probe_done); - - // If probing finds an entry in the dictionary, scratch3 contains the - // pointer into the dictionary. Check that the value is the callback. - Register pointer = scratch3(); - const int kElementsStartOffset = NameDictionary::kHeaderSize + - NameDictionary::kElementsStartIndex * kPointerSize; - const int kValueOffset = kElementsStartOffset + kPointerSize; - __ lw(scratch2(), FieldMemOperand(pointer, kValueOffset)); - __ Branch(&miss, ne, scratch2(), Operand(callback)); - } - - HandlerFrontendFooter(name, &miss); - return reg; -} - - -void LoadStubCompiler::GenerateLoadField(Register reg, - Handle<JSObject> holder, - PropertyIndex field, - Representation representation) { - if (!reg.is(receiver())) __ mov(receiver(), reg); - if (kind() == Code::LOAD_IC) { - LoadFieldStub stub(isolate(), - field.is_inobject(holder), - field.translate(holder), - representation); - GenerateTailCall(masm(), stub.GetCode()); - } else { - KeyedLoadFieldStub stub(isolate(), - field.is_inobject(holder), - field.translate(holder), - representation); - GenerateTailCall(masm(), stub.GetCode()); - } -} - - -void LoadStubCompiler::GenerateLoadConstant(Handle<Object> value) { +void NamedLoadHandlerCompiler::GenerateLoadConstant(Handle<Object> value) { // Return the constant value. __ li(v0, value); __ Ret(); } -void LoadStubCompiler::GenerateLoadCallback( - Register reg, - Handle<ExecutableAccessorInfo> callback) { +void NamedLoadHandlerCompiler::GenerateLoadCallback( + Register reg, Handle<ExecutableAccessorInfo> callback) { // Build AccessorInfo::args_ list on the stack and push property name below // the exit frame to make GC aware of them and store pointers to them. 
STATIC_ASSERT(PropertyCallbackArguments::kHolderIndex == 0); @@ -1035,9 +756,9 @@ void LoadStubCompiler::GenerateLoadCallback( STATIC_ASSERT(PropertyCallbackArguments::kDataIndex == 4); STATIC_ASSERT(PropertyCallbackArguments::kThisIndex == 5); STATIC_ASSERT(PropertyCallbackArguments::kArgsLength == 6); - ASSERT(!scratch2().is(reg)); - ASSERT(!scratch3().is(reg)); - ASSERT(!scratch4().is(reg)); + DCHECK(!scratch2().is(reg)); + DCHECK(!scratch3().is(reg)); + DCHECK(!scratch4().is(reg)); __ push(receiver()); if (heap()->InNewSpace(callback->data())) { __ li(scratch3(), callback); @@ -1073,14 +794,11 @@ void LoadStubCompiler::GenerateLoadCallback( } -void LoadStubCompiler::GenerateLoadInterceptor( - Register holder_reg, - Handle<Object> object, - Handle<JSObject> interceptor_holder, - LookupResult* lookup, - Handle<Name> name) { - ASSERT(interceptor_holder->HasNamedInterceptor()); - ASSERT(!interceptor_holder->GetNamedInterceptor()->getter()->IsUndefined()); +void NamedLoadHandlerCompiler::GenerateLoadInterceptor(Register holder_reg, + LookupResult* lookup, + Handle<Name> name) { + DCHECK(holder()->HasNamedInterceptor()); + DCHECK(!holder()->GetNamedInterceptor()->getter()->IsUndefined()); // So far the most popular follow ups for interceptor loads are FIELD // and CALLBACKS, so inline only them, other cases may be added @@ -1091,10 +809,12 @@ void LoadStubCompiler::GenerateLoadInterceptor( compile_followup_inline = true; } else if (lookup->type() == CALLBACKS && lookup->GetCallbackObject()->IsExecutableAccessorInfo()) { - ExecutableAccessorInfo* callback = - ExecutableAccessorInfo::cast(lookup->GetCallbackObject()); - compile_followup_inline = callback->getter() != NULL && - callback->IsCompatibleReceiver(*object); + Handle<ExecutableAccessorInfo> callback( + ExecutableAccessorInfo::cast(lookup->GetCallbackObject())); + compile_followup_inline = + callback->getter() != NULL && + ExecutableAccessorInfo::IsCompatibleReceiverType(isolate(), callback, + type()); } } @@ -1102,13 +822,13 @@ void LoadStubCompiler::GenerateLoadInterceptor( // Compile the interceptor call, followed by inline code to load the // property from further up the prototype chain if the call fails. // Check that the maps haven't changed. - ASSERT(holder_reg.is(receiver()) || holder_reg.is(scratch1())); + DCHECK(holder_reg.is(receiver()) || holder_reg.is(scratch1())); // Preserve the receiver register explicitly whenever it is different from // the holder and it is needed should the interceptor return without any // result. The CALLBACKS case needs the receiver to be passed into C++ code, // the FIELD case might cause a miss during the prototype check. - bool must_perfrom_prototype_check = *interceptor_holder != lookup->holder(); + bool must_perfrom_prototype_check = *holder() != lookup->holder(); bool must_preserve_receiver_reg = !receiver().is(holder_reg) && (lookup->type() == CALLBACKS || must_perfrom_prototype_check); @@ -1125,7 +845,7 @@ void LoadStubCompiler::GenerateLoadInterceptor( // interceptor's holder has been compiled before (see a caller // of this method). CompileCallLoadPropertyWithInterceptor( - masm(), receiver(), holder_reg, this->name(), interceptor_holder, + masm(), receiver(), holder_reg, this->name(), holder(), IC::kLoadPropertyWithInterceptorOnly); // Check if interceptor provided a value for property. If it's @@ -1144,31 +864,25 @@ void LoadStubCompiler::GenerateLoadInterceptor( } // Leave the internal frame. 
} - GenerateLoadPostInterceptor(holder_reg, interceptor_holder, name, lookup); + GenerateLoadPostInterceptor(holder_reg, name, lookup); } else { // !compile_followup_inline // Call the runtime system to load the interceptor. // Check that the maps haven't changed. - PushInterceptorArguments(masm(), receiver(), holder_reg, - this->name(), interceptor_holder); + PushInterceptorArguments(masm(), receiver(), holder_reg, this->name(), + holder()); ExternalReference ref = ExternalReference( - IC_Utility(IC::kLoadPropertyWithInterceptorForLoad), isolate()); - __ TailCallExternalReference(ref, StubCache::kInterceptorArgsLength, 1); + IC_Utility(IC::kLoadPropertyWithInterceptor), isolate()); + __ TailCallExternalReference( + ref, NamedLoadHandlerCompiler::kInterceptorArgsLength, 1); } } -Handle<Code> StoreStubCompiler::CompileStoreCallback( - Handle<JSObject> object, - Handle<JSObject> holder, - Handle<Name> name, +Handle<Code> NamedStoreHandlerCompiler::CompileStoreCallback( + Handle<JSObject> object, Handle<Name> name, Handle<ExecutableAccessorInfo> callback) { - Register holder_reg = HandlerFrontend( - IC::CurrentTypeOf(object, isolate()), receiver(), holder, name); - - // Stub never generated for non-global objects that require access - // checks. - ASSERT(holder->IsJSGlobalProxy() || !holder->IsAccessCheckNeeded()); + Register holder_reg = Frontend(receiver(), name); __ Push(receiver(), holder_reg); // Receiver. __ li(at, Operand(callback)); // Callback info. @@ -1190,10 +904,8 @@ Handle<Code> StoreStubCompiler::CompileStoreCallback( #define __ ACCESS_MASM(masm) -void StoreStubCompiler::GenerateStoreViaSetter( - MacroAssembler* masm, - Handle<HeapType> type, - Register receiver, +void NamedStoreHandlerCompiler::GenerateStoreViaSetter( + MacroAssembler* masm, Handle<HeapType> type, Register receiver, Handle<JSFunction> setter) { // ----------- S t a t e ------------- // -- ra : return address @@ -1209,8 +921,7 @@ void StoreStubCompiler::GenerateStoreViaSetter( if (IC::TypeToMap(*type, masm->isolate())->IsJSGlobalObjectMap()) { // Swap in the global receiver. __ lw(receiver, - FieldMemOperand( - receiver, JSGlobalObject::kGlobalReceiverOffset)); + FieldMemOperand(receiver, JSGlobalObject::kGlobalProxyOffset)); } __ Push(receiver, value()); ParameterCount actual(1); @@ -1237,14 +948,13 @@ void StoreStubCompiler::GenerateStoreViaSetter( #define __ ACCESS_MASM(masm()) -Handle<Code> StoreStubCompiler::CompileStoreInterceptor( - Handle<JSObject> object, +Handle<Code> NamedStoreHandlerCompiler::CompileStoreInterceptor( Handle<Name> name) { __ Push(receiver(), this->name(), value()); // Do tail-call to the runtime system. - ExternalReference store_ic_property = - ExternalReference(IC_Utility(IC::kStoreInterceptorProperty), isolate()); + ExternalReference store_ic_property = ExternalReference( + IC_Utility(IC::kStorePropertyWithInterceptor), isolate()); __ TailCallExternalReference(store_ic_property, 3, 1); // Return the generated code. @@ -1252,61 +962,35 @@ Handle<Code> StoreStubCompiler::CompileStoreInterceptor( } -Handle<Code> LoadStubCompiler::CompileLoadNonexistent(Handle<HeapType> type, - Handle<JSObject> last, - Handle<Name> name) { - NonexistentHandlerFrontend(type, last, name); - - // Return undefined if maps of the full prototype chain is still the same. - __ LoadRoot(v0, Heap::kUndefinedValueRootIndex); - __ Ret(); - - // Return the generated code. 
- return GetCode(kind(), Code::FAST, name); -} - - -Register* LoadStubCompiler::registers() { +Register* PropertyAccessCompiler::load_calling_convention() { // receiver, name, scratch1, scratch2, scratch3, scratch4. - static Register registers[] = { a0, a2, a3, a1, t0, t1 }; + Register receiver = LoadIC::ReceiverRegister(); + Register name = LoadIC::NameRegister(); + static Register registers[] = { receiver, name, a3, a0, t0, t1 }; return registers; } -Register* KeyedLoadStubCompiler::registers() { - // receiver, name, scratch1, scratch2, scratch3, scratch4. - static Register registers[] = { a1, a0, a2, a3, t0, t1 }; - return registers; -} - - -Register StoreStubCompiler::value() { - return a0; -} - - -Register* StoreStubCompiler::registers() { +Register* PropertyAccessCompiler::store_calling_convention() { // receiver, name, scratch1, scratch2, scratch3. - static Register registers[] = { a1, a2, a3, t0, t1 }; + Register receiver = StoreIC::ReceiverRegister(); + Register name = StoreIC::NameRegister(); + DCHECK(a3.is(KeyedStoreIC::MapRegister())); + static Register registers[] = { receiver, name, a3, t0, t1 }; return registers; } -Register* KeyedStoreStubCompiler::registers() { - // receiver, name, scratch1, scratch2, scratch3. - static Register registers[] = { a2, a1, a3, t0, t1 }; - return registers; -} +Register NamedStoreHandlerCompiler::value() { return StoreIC::ValueRegister(); } #undef __ #define __ ACCESS_MASM(masm) -void LoadStubCompiler::GenerateLoadViaGetter(MacroAssembler* masm, - Handle<HeapType> type, - Register receiver, - Handle<JSFunction> getter) { +void NamedLoadHandlerCompiler::GenerateLoadViaGetter( + MacroAssembler* masm, Handle<HeapType> type, Register receiver, + Handle<JSFunction> getter) { // ----------- S t a t e ------------- // -- a0 : receiver // -- a2 : name @@ -1320,8 +1004,7 @@ void LoadStubCompiler::GenerateLoadViaGetter(MacroAssembler* masm, if (IC::TypeToMap(*type, masm->isolate())->IsJSGlobalObjectMap()) { // Swap in the global receiver. __ lw(receiver, - FieldMemOperand( - receiver, JSGlobalObject::kGlobalReceiverOffset)); + FieldMemOperand(receiver, JSGlobalObject::kGlobalProxyOffset)); } __ push(receiver); ParameterCount actual(0); @@ -1345,57 +1028,62 @@ void LoadStubCompiler::GenerateLoadViaGetter(MacroAssembler* masm, #define __ ACCESS_MASM(masm()) -Handle<Code> LoadStubCompiler::CompileLoadGlobal( - Handle<HeapType> type, - Handle<GlobalObject> global, - Handle<PropertyCell> cell, - Handle<Name> name, - bool is_dont_delete) { +Handle<Code> NamedLoadHandlerCompiler::CompileLoadGlobal( + Handle<PropertyCell> cell, Handle<Name> name, bool is_configurable) { Label miss; - HandlerFrontendHeader(type, receiver(), global, name, &miss); + FrontendHeader(receiver(), name, &miss); // Get the value from the cell. - __ li(a3, Operand(cell)); - __ lw(t0, FieldMemOperand(a3, Cell::kValueOffset)); + Register result = StoreIC::ValueRegister(); + __ li(result, Operand(cell)); + __ lw(result, FieldMemOperand(result, Cell::kValueOffset)); // Check for deleted property if property can actually be deleted. - if (!is_dont_delete) { + if (is_configurable) { __ LoadRoot(at, Heap::kTheHoleValueRootIndex); - __ Branch(&miss, eq, t0, Operand(at)); + __ Branch(&miss, eq, result, Operand(at)); } Counters* counters = isolate()->counters(); __ IncrementCounter(counters->named_load_global_stub(), 1, a1, a3); __ Ret(USE_DELAY_SLOT); - __ mov(v0, t0); + __ mov(v0, result); - HandlerFrontendFooter(name, &miss); + FrontendFooter(name, &miss); // Return the generated code. 
return GetCode(kind(), Code::NORMAL, name); } -Handle<Code> BaseLoadStoreStubCompiler::CompilePolymorphicIC( - TypeHandleList* types, - CodeHandleList* handlers, - Handle<Name> name, - Code::StubType type, - IcCheckType check) { +Handle<Code> PropertyICCompiler::CompilePolymorphic(TypeHandleList* types, + CodeHandleList* handlers, + Handle<Name> name, + Code::StubType type, + IcCheckType check) { Label miss; if (check == PROPERTY && (kind() == Code::KEYED_LOAD_IC || kind() == Code::KEYED_STORE_IC)) { - __ Branch(&miss, ne, this->name(), Operand(name)); + // In case we are compiling an IC for dictionary loads and stores, just + // check whether the name is unique. + if (name.is_identical_to(isolate()->factory()->normal_ic_symbol())) { + __ JumpIfNotUniqueName(this->name(), &miss); + } else { + __ Branch(&miss, ne, this->name(), Operand(name)); + } } Label number_case; - Register match = scratch1(); + Register match = scratch2(); Label* smi_target = IncludesNumberType(types) ? &number_case : &miss; __ JumpIfSmi(receiver(), smi_target, match); // Reg match is 0 if Smi. - Register map_reg = scratch2(); + // Polymorphic keyed stores may use the map register + Register map_reg = scratch1(); + DCHECK(kind() != Code::KEYED_STORE_IC || + map_reg.is(KeyedStoreIC::MapRegister())); int receiver_count = types->length(); int number_of_handled_maps = 0; @@ -1409,14 +1097,14 @@ Handle<Code> BaseLoadStoreStubCompiler::CompilePolymorphicIC( // Separate compare from branch, to provide path for above JumpIfSmi(). __ Subu(match, map_reg, Operand(map)); if (type->Is(HeapType::Number())) { - ASSERT(!number_case.is_unused()); + DCHECK(!number_case.is_unused()); __ bind(&number_case); } __ Jump(handlers->at(current), RelocInfo::CODE_TARGET, eq, match, Operand(zero_reg)); } } - ASSERT(number_of_handled_maps != 0); + DCHECK(number_of_handled_maps != 0); __ bind(&miss); TailCallBuiltin(masm(), MissBuiltin(kind())); @@ -1424,24 +1112,12 @@ Handle<Code> BaseLoadStoreStubCompiler::CompilePolymorphicIC( // Return the generated code. InlineCacheState state = number_of_handled_maps > 1 ? POLYMORPHIC : MONOMORPHIC; - return GetICCode(kind(), type, name, state); + return GetCode(kind(), type, name, state); } -void StoreStubCompiler::GenerateStoreArrayLength() { - // Prepare tail call to StoreIC_ArrayLength. - __ Push(receiver(), value()); - - ExternalReference ref = - ExternalReference(IC_Utility(IC::kStoreIC_ArrayLength), - masm()->isolate()); - __ TailCallExternalReference(ref, 2, 1); -} - - -Handle<Code> KeyedStoreStubCompiler::CompileStorePolymorphic( - MapHandleList* receiver_maps, - CodeHandleList* handler_stubs, +Handle<Code> PropertyICCompiler::CompileKeyedStorePolymorphic( + MapHandleList* receiver_maps, CodeHandleList* handler_stubs, MapHandleList* transitioned_maps) { Label miss; __ JumpIfSmi(receiver(), &miss); @@ -1465,8 +1141,7 @@ Handle<Code> KeyedStoreStubCompiler::CompileStorePolymorphic( TailCallBuiltin(masm(), MissBuiltin(kind())); // Return the generated code. 
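CompilePolymorphic above boils down to a chain of map comparisons, one per handled receiver type, each guarding a tail call into the corresponding handler. The control flow as a host-side model (the function-pointer plumbing is mine; the real stub jumps to code objects):

#include <cstddef>

typedef void (*Handler)();

// One comparison per handled map; fall through to the miss builtin.
void PolymorphicDispatch(const void* receiver_map,
                         const void* const* maps, const Handler* handlers,
                         std::size_t count, Handler miss) {
  for (std::size_t i = 0; i < count; ++i) {
    if (receiver_map == maps[i]) {
      handlers[i]();  // __ Jump(handlers->at(current), ..., eq, ...)
      return;
    }
  }
  miss();             // TailCallBuiltin(masm(), MissBuiltin(kind()))
}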
- return GetICCode( - kind(), Code::NORMAL, factory()->empty_string(), POLYMORPHIC); + return GetCode(kind(), Code::NORMAL, factory()->empty_string(), POLYMORPHIC); } @@ -1474,45 +1149,32 @@ Handle<Code> KeyedStoreStubCompiler::CompileStorePolymorphic( #define __ ACCESS_MASM(masm) -void KeyedLoadStubCompiler::GenerateLoadDictionaryElement( +void ElementHandlerCompiler::GenerateLoadDictionaryElement( MacroAssembler* masm) { - // ---------- S t a t e -------------- - // -- ra : return address - // -- a0 : key - // -- a1 : receiver - // ----------------------------------- + // The return address is in ra. Label slow, miss; - Register key = a0; - Register receiver = a1; + Register key = LoadIC::NameRegister(); + Register receiver = LoadIC::ReceiverRegister(); + DCHECK(receiver.is(a1)); + DCHECK(key.is(a2)); - __ JumpIfNotSmi(key, &miss); + __ UntagAndJumpIfNotSmi(t2, key, &miss); __ lw(t0, FieldMemOperand(receiver, JSObject::kElementsOffset)); - __ sra(a2, a0, kSmiTagSize); - __ LoadFromNumberDictionary(&slow, t0, a0, v0, a2, a3, t1); + __ LoadFromNumberDictionary(&slow, t0, key, v0, t2, a3, t1); __ Ret(); - // Slow case, key and receiver still in a0 and a1. + // Slow case, key and receiver still unmodified. __ bind(&slow); __ IncrementCounter( masm->isolate()->counters()->keyed_load_external_array_slow(), 1, a2, a3); - // Entry registers are intact. - // ---------- S t a t e -------------- - // -- ra : return address - // -- a0 : key - // -- a1 : receiver - // ----------------------------------- + TailCallBuiltin(masm, Builtins::kKeyedLoadIC_Slow); // Miss case, call the runtime. __ bind(&miss); - // ---------- S t a t e -------------- - // -- ra : return address - // -- a0 : key - // -- a1 : receiver - // ----------------------------------- TailCallBuiltin(masm, Builtins::kKeyedLoadIC_Miss); } diff --git a/deps/v8/src/mips64/OWNERS b/deps/v8/src/mips64/OWNERS new file mode 100644 index 00000000000..8d6807d2672 --- /dev/null +++ b/deps/v8/src/mips64/OWNERS @@ -0,0 +1,10 @@ +plind44@gmail.com +paul.lind@imgtec.com +gergely@homejinni.com +gergely.kis@imgtec.com +palfia@homejinni.com +akos.palfi@imgtec.com +kilvadyb@homejinni.com +balazs.kilvady@imgtec.com +Dusan.Milosavljevic@rt-rk.com +dusan.milosavljevic@imgtec.com diff --git a/deps/v8/src/mips64/assembler-mips64-inl.h b/deps/v8/src/mips64/assembler-mips64-inl.h new file mode 100644 index 00000000000..de294ee6653 --- /dev/null +++ b/deps/v8/src/mips64/assembler-mips64-inl.h @@ -0,0 +1,457 @@ + +// Copyright (c) 1994-2006 Sun Microsystems Inc. +// All Rights Reserved. +// +// Redistribution and use in source and binary forms, with or without +// modification, are permitted provided that the following conditions are +// met: +// +// - Redistributions of source code must retain the above copyright notice, +// this list of conditions and the following disclaimer. +// +// - Redistribution in binary form must reproduce the above copyright +// notice, this list of conditions and the following disclaimer in the +// documentation and/or other materials provided with the distribution. +// +// - Neither the name of Sun Microsystems or the names of contributors may +// be used to endorse or promote products derived from this software without +// specific prior written permission. +// +// THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS +// IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, +// THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR +// PURPOSE ARE DISCLAIMED. 
IN NO EVENT SHALL THE COPYRIGHT OWNER OR +// CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, +// EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, +// PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR +// PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF +// LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING +// NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS +// SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. + +// The original source code covered by the above license above has been +// modified significantly by Google Inc. +// Copyright 2012 the V8 project authors. All rights reserved. + + +#ifndef V8_MIPS_ASSEMBLER_MIPS_INL_H_ +#define V8_MIPS_ASSEMBLER_MIPS_INL_H_ + +#include "src/mips64/assembler-mips64.h" + +#include "src/assembler.h" +#include "src/debug.h" + + +namespace v8 { +namespace internal { + + +bool CpuFeatures::SupportsCrankshaft() { return IsSupported(FPU); } + + +// ----------------------------------------------------------------------------- +// Operand and MemOperand. + +Operand::Operand(int64_t immediate, RelocInfo::Mode rmode) { + rm_ = no_reg; + imm64_ = immediate; + rmode_ = rmode; +} + + +Operand::Operand(const ExternalReference& f) { + rm_ = no_reg; + imm64_ = reinterpret_cast<int64_t>(f.address()); + rmode_ = RelocInfo::EXTERNAL_REFERENCE; +} + + +Operand::Operand(Smi* value) { + rm_ = no_reg; + imm64_ = reinterpret_cast<intptr_t>(value); + rmode_ = RelocInfo::NONE32; +} + + +Operand::Operand(Register rm) { + rm_ = rm; +} + + +bool Operand::is_reg() const { + return rm_.is_valid(); +} + + +int Register::NumAllocatableRegisters() { + return kMaxNumAllocatableRegisters; +} + + +int DoubleRegister::NumRegisters() { + return FPURegister::kMaxNumRegisters; +} + + +int DoubleRegister::NumAllocatableRegisters() { + return FPURegister::kMaxNumAllocatableRegisters; +} + + +int FPURegister::ToAllocationIndex(FPURegister reg) { + DCHECK(reg.code() % 2 == 0); + DCHECK(reg.code() / 2 < kMaxNumAllocatableRegisters); + DCHECK(reg.is_valid()); + DCHECK(!reg.is(kDoubleRegZero)); + DCHECK(!reg.is(kLithiumScratchDouble)); + return (reg.code() / 2); +} + + +// ----------------------------------------------------------------------------- +// RelocInfo. + +void RelocInfo::apply(intptr_t delta, ICacheFlushMode icache_flush_mode) { + if (IsInternalReference(rmode_)) { + // Absolute code pointer inside code object moves with the code object. + byte* p = reinterpret_cast<byte*>(pc_); + int count = Assembler::RelocateInternalReference(p, delta); + CpuFeatures::FlushICache(p, count * sizeof(uint32_t)); + } +} + + +Address RelocInfo::target_address() { + DCHECK(IsCodeTarget(rmode_) || IsRuntimeEntry(rmode_)); + return Assembler::target_address_at(pc_, host_); +} + + +Address RelocInfo::target_address_address() { + DCHECK(IsCodeTarget(rmode_) || + IsRuntimeEntry(rmode_) || + rmode_ == EMBEDDED_OBJECT || + rmode_ == EXTERNAL_REFERENCE); + // Read the address of the word containing the target_address in an + // instruction stream. + // The only architecture-independent user of this function is the serializer. + // The serializer uses it to find out how many raw bytes of instruction to + // output before the next target. + // For an instruction like LUI/ORI where the target bits are mixed into the + // instruction bits, the size of the target will be zero, indicating that the + // serializer should not step forward in memory after a target is resolved + // and written. 
In this case the target_address_address function should + // return the end of the instructions to be patched, allowing the + // deserializer to deserialize the instructions as raw bytes and put them in + // place, ready to be patched with the target. After jump optimization, + // that is the address of the instruction that follows J/JAL/JR/JALR + // instruction. + // return reinterpret_cast<Address>( + // pc_ + Assembler::kInstructionsFor32BitConstant * Assembler::kInstrSize); + return reinterpret_cast<Address>( + pc_ + Assembler::kInstructionsFor64BitConstant * Assembler::kInstrSize); +} + + +Address RelocInfo::constant_pool_entry_address() { + UNREACHABLE(); + return NULL; +} + + +int RelocInfo::target_address_size() { + return Assembler::kSpecialTargetSize; +} + + +void RelocInfo::set_target_address(Address target, + WriteBarrierMode write_barrier_mode, + ICacheFlushMode icache_flush_mode) { + DCHECK(IsCodeTarget(rmode_) || IsRuntimeEntry(rmode_)); + Assembler::set_target_address_at(pc_, host_, target, icache_flush_mode); + if (write_barrier_mode == UPDATE_WRITE_BARRIER && + host() != NULL && IsCodeTarget(rmode_)) { + Object* target_code = Code::GetCodeFromTargetAddress(target); + host()->GetHeap()->incremental_marking()->RecordWriteIntoCode( + host(), this, HeapObject::cast(target_code)); + } +} + + +Address Assembler::target_address_from_return_address(Address pc) { + return pc - kCallTargetAddressOffset; +} + + +Address Assembler::break_address_from_return_address(Address pc) { + return pc - Assembler::kPatchDebugBreakSlotReturnOffset; +} + + +Object* RelocInfo::target_object() { + DCHECK(IsCodeTarget(rmode_) || rmode_ == EMBEDDED_OBJECT); + return reinterpret_cast<Object*>(Assembler::target_address_at(pc_, host_)); +} + + +Handle<Object> RelocInfo::target_object_handle(Assembler* origin) { + DCHECK(IsCodeTarget(rmode_) || rmode_ == EMBEDDED_OBJECT); + return Handle<Object>(reinterpret_cast<Object**>( + Assembler::target_address_at(pc_, host_))); +} + + +void RelocInfo::set_target_object(Object* target, + WriteBarrierMode write_barrier_mode, + ICacheFlushMode icache_flush_mode) { + DCHECK(IsCodeTarget(rmode_) || rmode_ == EMBEDDED_OBJECT); + Assembler::set_target_address_at(pc_, host_, + reinterpret_cast<Address>(target), + icache_flush_mode); + if (write_barrier_mode == UPDATE_WRITE_BARRIER && + host() != NULL && + target->IsHeapObject()) { + host()->GetHeap()->incremental_marking()->RecordWrite( + host(), &Memory::Object_at(pc_), HeapObject::cast(target)); + } +} + + +Address RelocInfo::target_reference() { + DCHECK(rmode_ == EXTERNAL_REFERENCE); + return Assembler::target_address_at(pc_, host_); +} + + +Address RelocInfo::target_runtime_entry(Assembler* origin) { + DCHECK(IsRuntimeEntry(rmode_)); + return target_address(); +} + + +void RelocInfo::set_target_runtime_entry(Address target, + WriteBarrierMode write_barrier_mode, + ICacheFlushMode icache_flush_mode) { + DCHECK(IsRuntimeEntry(rmode_)); + if (target_address() != target) + set_target_address(target, write_barrier_mode, icache_flush_mode); +} + + +Handle<Cell> RelocInfo::target_cell_handle() { + DCHECK(rmode_ == RelocInfo::CELL); + Address address = Memory::Address_at(pc_); + return Handle<Cell>(reinterpret_cast<Cell**>(address)); +} + + +Cell* RelocInfo::target_cell() { + DCHECK(rmode_ == RelocInfo::CELL); + return Cell::FromValueAddress(Memory::Address_at(pc_)); +} + + +void RelocInfo::set_target_cell(Cell* cell, + WriteBarrierMode write_barrier_mode, + ICacheFlushMode icache_flush_mode) { + DCHECK(rmode_ == 
RelocInfo::CELL); + Address address = cell->address() + Cell::kValueOffset; + Memory::Address_at(pc_) = address; + if (write_barrier_mode == UPDATE_WRITE_BARRIER && host() != NULL) { + // TODO(1550) We are passing NULL as a slot because cell can never be on + // evacuation candidate. + host()->GetHeap()->incremental_marking()->RecordWrite( + host(), NULL, cell); + } +} + + +static const int kNoCodeAgeSequenceLength = 9 * Assembler::kInstrSize; + + +Handle<Object> RelocInfo::code_age_stub_handle(Assembler* origin) { + UNREACHABLE(); // This should never be reached on Arm. + return Handle<Object>(); +} + + +Code* RelocInfo::code_age_stub() { + DCHECK(rmode_ == RelocInfo::CODE_AGE_SEQUENCE); + return Code::GetCodeFromTargetAddress( + Assembler::target_address_at(pc_ + Assembler::kInstrSize, host_)); +} + + +void RelocInfo::set_code_age_stub(Code* stub, + ICacheFlushMode icache_flush_mode) { + DCHECK(rmode_ == RelocInfo::CODE_AGE_SEQUENCE); + Assembler::set_target_address_at(pc_ + Assembler::kInstrSize, + host_, + stub->instruction_start()); +} + + +Address RelocInfo::call_address() { + DCHECK((IsJSReturn(rmode()) && IsPatchedReturnSequence()) || + (IsDebugBreakSlot(rmode()) && IsPatchedDebugBreakSlotSequence())); + // The pc_ offset of 0 assumes mips patched return sequence per + // debug-mips.cc BreakLocationIterator::SetDebugBreakAtReturn(), or + // debug break slot per BreakLocationIterator::SetDebugBreakAtSlot(). + return Assembler::target_address_at(pc_, host_); +} + + +void RelocInfo::set_call_address(Address target) { + DCHECK((IsJSReturn(rmode()) && IsPatchedReturnSequence()) || + (IsDebugBreakSlot(rmode()) && IsPatchedDebugBreakSlotSequence())); + // The pc_ offset of 0 assumes mips patched return sequence per + // debug-mips.cc BreakLocationIterator::SetDebugBreakAtReturn(), or + // debug break slot per BreakLocationIterator::SetDebugBreakAtSlot(). + Assembler::set_target_address_at(pc_, host_, target); + if (host() != NULL) { + Object* target_code = Code::GetCodeFromTargetAddress(target); + host()->GetHeap()->incremental_marking()->RecordWriteIntoCode( + host(), this, HeapObject::cast(target_code)); + } +} + + +Object* RelocInfo::call_object() { + return *call_object_address(); +} + + +Object** RelocInfo::call_object_address() { + DCHECK((IsJSReturn(rmode()) && IsPatchedReturnSequence()) || + (IsDebugBreakSlot(rmode()) && IsPatchedDebugBreakSlotSequence())); + return reinterpret_cast<Object**>(pc_ + 6 * Assembler::kInstrSize); +} + + +void RelocInfo::set_call_object(Object* target) { + *call_object_address() = target; +} + + +void RelocInfo::WipeOut() { + DCHECK(IsEmbeddedObject(rmode_) || + IsCodeTarget(rmode_) || + IsRuntimeEntry(rmode_) || + IsExternalReference(rmode_)); + Assembler::set_target_address_at(pc_, host_, NULL); +} + + +bool RelocInfo::IsPatchedReturnSequence() { + Instr instr0 = Assembler::instr_at(pc_); // lui. + Instr instr1 = Assembler::instr_at(pc_ + 1 * Assembler::kInstrSize); // ori. + Instr instr2 = Assembler::instr_at(pc_ + 2 * Assembler::kInstrSize); // dsll. + Instr instr3 = Assembler::instr_at(pc_ + 3 * Assembler::kInstrSize); // ori. + Instr instr4 = Assembler::instr_at(pc_ + 4 * Assembler::kInstrSize); // jalr. 
+ + bool patched_return = ((instr0 & kOpcodeMask) == LUI && + (instr1 & kOpcodeMask) == ORI && + (instr2 & kFunctionFieldMask) == DSLL && + (instr3 & kOpcodeMask) == ORI && + (instr4 & kFunctionFieldMask) == JALR); + return patched_return; +} + + +bool RelocInfo::IsPatchedDebugBreakSlotSequence() { + Instr current_instr = Assembler::instr_at(pc_); + return !Assembler::IsNop(current_instr, Assembler::DEBUG_BREAK_NOP); +} + + +void RelocInfo::Visit(Isolate* isolate, ObjectVisitor* visitor) { + RelocInfo::Mode mode = rmode(); + if (mode == RelocInfo::EMBEDDED_OBJECT) { + visitor->VisitEmbeddedPointer(this); + } else if (RelocInfo::IsCodeTarget(mode)) { + visitor->VisitCodeTarget(this); + } else if (mode == RelocInfo::CELL) { + visitor->VisitCell(this); + } else if (mode == RelocInfo::EXTERNAL_REFERENCE) { + visitor->VisitExternalReference(this); + } else if (RelocInfo::IsCodeAgeSequence(mode)) { + visitor->VisitCodeAgeSequence(this); + } else if (((RelocInfo::IsJSReturn(mode) && + IsPatchedReturnSequence()) || + (RelocInfo::IsDebugBreakSlot(mode) && + IsPatchedDebugBreakSlotSequence())) && + isolate->debug()->has_break_points()) { + visitor->VisitDebugTarget(this); + } else if (RelocInfo::IsRuntimeEntry(mode)) { + visitor->VisitRuntimeEntry(this); + } +} + + +template<typename StaticVisitor> +void RelocInfo::Visit(Heap* heap) { + RelocInfo::Mode mode = rmode(); + if (mode == RelocInfo::EMBEDDED_OBJECT) { + StaticVisitor::VisitEmbeddedPointer(heap, this); + } else if (RelocInfo::IsCodeTarget(mode)) { + StaticVisitor::VisitCodeTarget(heap, this); + } else if (mode == RelocInfo::CELL) { + StaticVisitor::VisitCell(heap, this); + } else if (mode == RelocInfo::EXTERNAL_REFERENCE) { + StaticVisitor::VisitExternalReference(this); + } else if (RelocInfo::IsCodeAgeSequence(mode)) { + StaticVisitor::VisitCodeAgeSequence(heap, this); + } else if (heap->isolate()->debug()->has_break_points() && + ((RelocInfo::IsJSReturn(mode) && + IsPatchedReturnSequence()) || + (RelocInfo::IsDebugBreakSlot(mode) && + IsPatchedDebugBreakSlotSequence()))) { + StaticVisitor::VisitDebugTarget(heap, this); + } else if (RelocInfo::IsRuntimeEntry(mode)) { + StaticVisitor::VisitRuntimeEntry(this); + } +} + + +// ----------------------------------------------------------------------------- +// Assembler. + + +void Assembler::CheckBuffer() { + if (buffer_space() <= kGap) { + GrowBuffer(); + } +} + + +void Assembler::CheckTrampolinePoolQuick() { + if (pc_offset() >= next_buffer_check_) { + CheckTrampolinePool(); + } +} + + +void Assembler::emit(Instr x) { + if (!is_buffer_growth_blocked()) { + CheckBuffer(); + } + *reinterpret_cast<Instr*>(pc_) = x; + pc_ += kInstrSize; + CheckTrampolinePoolQuick(); +} + + +void Assembler::emit(uint64_t x) { + if (!is_buffer_growth_blocked()) { + CheckBuffer(); + } + *reinterpret_cast<uint64_t*>(pc_) = x; + pc_ += kInstrSize * 2; + CheckTrampolinePoolQuick(); +} + + +} } // namespace v8::internal + +#endif // V8_MIPS_ASSEMBLER_MIPS_INL_H_ diff --git a/deps/v8/src/mips64/assembler-mips64.cc b/deps/v8/src/mips64/assembler-mips64.cc new file mode 100644 index 00000000000..36e9c2d1057 --- /dev/null +++ b/deps/v8/src/mips64/assembler-mips64.cc @@ -0,0 +1,2933 @@ +// Copyright (c) 1994-2006 Sun Microsystems Inc. +// All Rights Reserved. 
+// +// Redistribution and use in source and binary forms, with or without +// modification, are permitted provided that the following conditions are +// met: +// +// - Redistributions of source code must retain the above copyright notice, +// this list of conditions and the following disclaimer. +// +// - Redistribution in binary form must reproduce the above copyright +// notice, this list of conditions and the following disclaimer in the +// documentation and/or other materials provided with the distribution. +// +// - Neither the name of Sun Microsystems or the names of contributors may +// be used to endorse or promote products derived from this software without +// specific prior written permission. +// +// THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS +// IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, +// THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR +// PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR +// CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, +// EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, +// PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR +// PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF +// LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING +// NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS +// SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. + +// The original source code covered by the above license above has been +// modified significantly by Google Inc. +// Copyright 2012 the V8 project authors. All rights reserved. + + +#include "src/v8.h" + +#if V8_TARGET_ARCH_MIPS64 + +#include "src/base/cpu.h" +#include "src/mips64/assembler-mips64-inl.h" +#include "src/serialize.h" + +namespace v8 { +namespace internal { + + +// Get the CPU features enabled by the build. For cross compilation the +// preprocessor symbols CAN_USE_FPU_INSTRUCTIONS +// can be defined to enable FPU instructions when building the +// snapshot. +static unsigned CpuFeaturesImpliedByCompiler() { + unsigned answer = 0; +#ifdef CAN_USE_FPU_INSTRUCTIONS + answer |= 1u << FPU; +#endif // def CAN_USE_FPU_INSTRUCTIONS + + // If the compiler is allowed to use FPU then we can use FPU too in our code + // generation even when generating snapshots. This won't work for cross + // compilation. +#if defined(__mips__) && defined(__mips_hard_float) && __mips_hard_float != 0 + answer |= 1u << FPU; +#endif + + return answer; +} + + +const char* DoubleRegister::AllocationIndexToString(int index) { + DCHECK(index >= 0 && index < kMaxNumAllocatableRegisters); + const char* const names[] = { + "f0", + "f2", + "f4", + "f6", + "f8", + "f10", + "f12", + "f14", + "f16", + "f18", + "f20", + "f22", + "f24", + "f26" + }; + return names[index]; +} + + +void CpuFeatures::ProbeImpl(bool cross_compile) { + supported_ |= CpuFeaturesImpliedByCompiler(); + + // Only use statically determined features for cross compile (snapshot). + if (cross_compile) return; + + // If the compiler is allowed to use fpu then we can use fpu too in our + // code generation. +#ifndef __mips__ + // For the simulator build, use FPU. + supported_ |= 1u << FPU; +#else + // Probe for additional features at runtime. 
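+  // On hardware builds the FPU bit is taken from base::CPU's runtime
+  // detection below; the simulator path above simply forces FPU on.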
+ base::CPU cpu; + if (cpu.has_fpu()) supported_ |= 1u << FPU; +#endif +} + + +void CpuFeatures::PrintTarget() { } +void CpuFeatures::PrintFeatures() { } + + +int ToNumber(Register reg) { + DCHECK(reg.is_valid()); + const int kNumbers[] = { + 0, // zero_reg + 1, // at + 2, // v0 + 3, // v1 + 4, // a0 + 5, // a1 + 6, // a2 + 7, // a3 + 8, // a4 + 9, // a5 + 10, // a6 + 11, // a7 + 12, // t0 + 13, // t1 + 14, // t2 + 15, // t3 + 16, // s0 + 17, // s1 + 18, // s2 + 19, // s3 + 20, // s4 + 21, // s5 + 22, // s6 + 23, // s7 + 24, // t8 + 25, // t9 + 26, // k0 + 27, // k1 + 28, // gp + 29, // sp + 30, // fp + 31, // ra + }; + return kNumbers[reg.code()]; +} + + +Register ToRegister(int num) { + DCHECK(num >= 0 && num < kNumRegisters); + const Register kRegisters[] = { + zero_reg, + at, + v0, v1, + a0, a1, a2, a3, a4, a5, a6, a7, + t0, t1, t2, t3, + s0, s1, s2, s3, s4, s5, s6, s7, + t8, t9, + k0, k1, + gp, + sp, + fp, + ra + }; + return kRegisters[num]; +} + + +// ----------------------------------------------------------------------------- +// Implementation of RelocInfo. + +const int RelocInfo::kApplyMask = RelocInfo::kCodeTargetMask | + 1 << RelocInfo::INTERNAL_REFERENCE; + + +bool RelocInfo::IsCodedSpecially() { + // The deserializer needs to know whether a pointer is specially coded. Being + // specially coded on MIPS means that it is a lui/ori instruction, and that is + // always the case inside code objects. + return true; +} + + +bool RelocInfo::IsInConstantPool() { + return false; +} + + +// Patch the code at the current address with the supplied instructions. +void RelocInfo::PatchCode(byte* instructions, int instruction_count) { + Instr* pc = reinterpret_cast<Instr*>(pc_); + Instr* instr = reinterpret_cast<Instr*>(instructions); + for (int i = 0; i < instruction_count; i++) { + *(pc + i) = *(instr + i); + } + + // Indicate that code has changed. + CpuFeatures::FlushICache(pc_, instruction_count * Assembler::kInstrSize); +} + + +// Patch the code at the current PC with a call to the target address. +// Additional guard instructions can be added if required. +void RelocInfo::PatchCodeWithCall(Address target, int guard_bytes) { + // Patch the code at the current address with a call to the target. + UNIMPLEMENTED_MIPS(); +} + + +// ----------------------------------------------------------------------------- +// Implementation of Operand and MemOperand. +// See assembler-mips-inl.h for inlined constructors. + +Operand::Operand(Handle<Object> handle) { + AllowDeferredHandleDereference using_raw_address; + rm_ = no_reg; + // Verify all Objects referred by code are NOT in new space. + Object* obj = *handle; + if (obj->IsHeapObject()) { + DCHECK(!HeapObject::cast(obj)->GetHeap()->InNewSpace(obj)); + imm64_ = reinterpret_cast<intptr_t>(handle.location()); + rmode_ = RelocInfo::EMBEDDED_OBJECT; + } else { + // No relocation needed. + imm64_ = reinterpret_cast<intptr_t>(obj); + rmode_ = RelocInfo::NONE64; + } +} + + +MemOperand::MemOperand(Register rm, int64_t offset) : Operand(rm) { + offset_ = offset; +} + + +MemOperand::MemOperand(Register rm, int64_t unit, int64_t multiplier, + OffsetAddend offset_addend) : Operand(rm) { + offset_ = unit * multiplier + offset_addend; +} + + +// ----------------------------------------------------------------------------- +// Specific instructions, constants, and masks. + +static const int kNegOffset = 0x00008000; +// daddiu(sp, sp, 8) aka Pop() operation or part of Pop(r) +// operations as post-increment of sp. 
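+// On MIPS64 kPointerSize is 8, so push and pop adjust sp by one 64-bit slot.
+// The masked patterns below let IsPush()/IsPop() match any register operand
+// by ignoring the rt field.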
+const Instr kPopInstruction = DADDIU | (kRegister_sp_Code << kRsShift) + | (kRegister_sp_Code << kRtShift) + | (kPointerSize & kImm16Mask); // NOLINT +// daddiu(sp, sp, -8) part of Push(r) operation as pre-decrement of sp. +const Instr kPushInstruction = DADDIU | (kRegister_sp_Code << kRsShift) + | (kRegister_sp_Code << kRtShift) + | (-kPointerSize & kImm16Mask); // NOLINT +// sd(r, MemOperand(sp, 0)) +const Instr kPushRegPattern = SD | (kRegister_sp_Code << kRsShift) + | (0 & kImm16Mask); // NOLINT +// ld(r, MemOperand(sp, 0)) +const Instr kPopRegPattern = LD | (kRegister_sp_Code << kRsShift) + | (0 & kImm16Mask); // NOLINT + +const Instr kLwRegFpOffsetPattern = LW | (kRegister_fp_Code << kRsShift) + | (0 & kImm16Mask); // NOLINT + +const Instr kSwRegFpOffsetPattern = SW | (kRegister_fp_Code << kRsShift) + | (0 & kImm16Mask); // NOLINT + +const Instr kLwRegFpNegOffsetPattern = LW | (kRegister_fp_Code << kRsShift) + | (kNegOffset & kImm16Mask); // NOLINT + +const Instr kSwRegFpNegOffsetPattern = SW | (kRegister_fp_Code << kRsShift) + | (kNegOffset & kImm16Mask); // NOLINT +// A mask for the Rt register for push, pop, lw, sw instructions. +const Instr kRtMask = kRtFieldMask; +const Instr kLwSwInstrTypeMask = 0xffe00000; +const Instr kLwSwInstrArgumentMask = ~kLwSwInstrTypeMask; +const Instr kLwSwOffsetMask = kImm16Mask; + + +Assembler::Assembler(Isolate* isolate, void* buffer, int buffer_size) + : AssemblerBase(isolate, buffer, buffer_size), + recorded_ast_id_(TypeFeedbackId::None()), + positions_recorder_(this) { + reloc_info_writer.Reposition(buffer_ + buffer_size_, pc_); + + last_trampoline_pool_end_ = 0; + no_trampoline_pool_before_ = 0; + trampoline_pool_blocked_nesting_ = 0; + // We leave space (16 * kTrampolineSlotsSize) + // for BlockTrampolinePoolScope buffer. + next_buffer_check_ = FLAG_force_long_branches + ? kMaxInt : kMaxBranchOffset - kTrampolineSlotsSize * 16; + internal_trampoline_exception_ = false; + last_bound_pos_ = 0; + + trampoline_emitted_ = FLAG_force_long_branches; + unbound_labels_count_ = 0; + block_buffer_growth_ = false; + + ClearRecordedAstId(); +} + + +void Assembler::GetCode(CodeDesc* desc) { + DCHECK(pc_ <= reloc_info_writer.pos()); // No overlap. + // Set up code descriptor. + desc->buffer = buffer_; + desc->buffer_size = buffer_size_; + desc->instr_size = pc_offset(); + desc->reloc_size = (buffer_ + buffer_size_) - reloc_info_writer.pos(); + desc->origin = this; +} + + +void Assembler::Align(int m) { + DCHECK(m >= 4 && IsPowerOf2(m)); + while ((pc_offset() & (m - 1)) != 0) { + nop(); + } +} + + +void Assembler::CodeTargetAlign() { + // No advantage to aligning branch/call targets to more than + // single instruction, that I am aware of. 
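+  // All MIPS instructions are 4 bytes long, so aligning to the instruction
+  // size is sufficient here.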
+ Align(4); +} + + +Register Assembler::GetRtReg(Instr instr) { + Register rt; + rt.code_ = (instr & kRtFieldMask) >> kRtShift; + return rt; +} + + +Register Assembler::GetRsReg(Instr instr) { + Register rs; + rs.code_ = (instr & kRsFieldMask) >> kRsShift; + return rs; +} + + +Register Assembler::GetRdReg(Instr instr) { + Register rd; + rd.code_ = (instr & kRdFieldMask) >> kRdShift; + return rd; +} + + +uint32_t Assembler::GetRt(Instr instr) { + return (instr & kRtFieldMask) >> kRtShift; +} + + +uint32_t Assembler::GetRtField(Instr instr) { + return instr & kRtFieldMask; +} + + +uint32_t Assembler::GetRs(Instr instr) { + return (instr & kRsFieldMask) >> kRsShift; +} + + +uint32_t Assembler::GetRsField(Instr instr) { + return instr & kRsFieldMask; +} + + +uint32_t Assembler::GetRd(Instr instr) { + return (instr & kRdFieldMask) >> kRdShift; +} + + +uint32_t Assembler::GetRdField(Instr instr) { + return instr & kRdFieldMask; +} + + +uint32_t Assembler::GetSa(Instr instr) { + return (instr & kSaFieldMask) >> kSaShift; +} + + +uint32_t Assembler::GetSaField(Instr instr) { + return instr & kSaFieldMask; +} + + +uint32_t Assembler::GetOpcodeField(Instr instr) { + return instr & kOpcodeMask; +} + + +uint32_t Assembler::GetFunction(Instr instr) { + return (instr & kFunctionFieldMask) >> kFunctionShift; +} + + +uint32_t Assembler::GetFunctionField(Instr instr) { + return instr & kFunctionFieldMask; +} + + +uint32_t Assembler::GetImmediate16(Instr instr) { + return instr & kImm16Mask; +} + + +uint32_t Assembler::GetLabelConst(Instr instr) { + return instr & ~kImm16Mask; +} + + +bool Assembler::IsPop(Instr instr) { + return (instr & ~kRtMask) == kPopRegPattern; +} + + +bool Assembler::IsPush(Instr instr) { + return (instr & ~kRtMask) == kPushRegPattern; +} + + +bool Assembler::IsSwRegFpOffset(Instr instr) { + return ((instr & kLwSwInstrTypeMask) == kSwRegFpOffsetPattern); +} + + +bool Assembler::IsLwRegFpOffset(Instr instr) { + return ((instr & kLwSwInstrTypeMask) == kLwRegFpOffsetPattern); +} + + +bool Assembler::IsSwRegFpNegOffset(Instr instr) { + return ((instr & (kLwSwInstrTypeMask | kNegOffset)) == + kSwRegFpNegOffsetPattern); +} + + +bool Assembler::IsLwRegFpNegOffset(Instr instr) { + return ((instr & (kLwSwInstrTypeMask | kNegOffset)) == + kLwRegFpNegOffsetPattern); +} + + +// Labels refer to positions in the (to be) generated code. +// There are bound, linked, and unused labels. +// +// Bound labels refer to known positions in the already +// generated code. pos() is the position the label refers to. +// +// Linked labels refer to unknown positions in the code +// to be generated; pos() is the position of the last +// instruction using the label. + +// The link chain is terminated by a value in the instruction of -1, +// which is an otherwise illegal value (branch -1 is inf loop). +// The instruction 16-bit offset field addresses 32-bit words, but in +// code is conv to an 18-bit value addressing bytes, hence the -4 value. + +const int kEndOfChain = -4; +// Determines the end of the Jump chain (a subset of the label link chain). +const int kEndOfJumpChain = 0; + + +bool Assembler::IsBranch(Instr instr) { + uint32_t opcode = GetOpcodeField(instr); + uint32_t rt_field = GetRtField(instr); + uint32_t rs_field = GetRsField(instr); + // Checks if the instruction is a branch. 
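+  // Covered below: the I-type compare branches (BEQ/BNE/BLEZ/BGTZ and their
+  // -likely variants), the REGIMM branches selected by the rt field, and
+  // the COP1 coprocessor branches selected by the rs field.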
+  return opcode == BEQ ||
+      opcode == BNE ||
+      opcode == BLEZ ||
+      opcode == BGTZ ||
+      opcode == BEQL ||
+      opcode == BNEL ||
+      opcode == BLEZL ||
+      opcode == BGTZL ||
+      (opcode == REGIMM && (rt_field == BLTZ || rt_field == BGEZ ||
+                            rt_field == BLTZAL || rt_field == BGEZAL)) ||
+      (opcode == COP1 && rs_field == BC1) ||  // Coprocessor branch.
+      (opcode == COP1 && rs_field == BC1EQZ) ||
+      (opcode == COP1 && rs_field == BC1NEZ);
+}
+
+
+bool Assembler::IsEmittedConstant(Instr instr) {
+  uint32_t label_constant = GetLabelConst(instr);
+  return label_constant == 0;  // Emitted label const in reg-exp engine.
+}
+
+
+bool Assembler::IsBeq(Instr instr) {
+  return GetOpcodeField(instr) == BEQ;
+}
+
+
+bool Assembler::IsBne(Instr instr) {
+  return GetOpcodeField(instr) == BNE;
+}
+
+
+bool Assembler::IsJump(Instr instr) {
+  uint32_t opcode = GetOpcodeField(instr);
+  uint32_t rt_field = GetRtField(instr);
+  uint32_t rd_field = GetRdField(instr);
+  uint32_t function_field = GetFunctionField(instr);
+  // Checks if the instruction is a jump.
+  return opcode == J || opcode == JAL ||
+      (opcode == SPECIAL && rt_field == 0 &&
+      ((function_field == JALR) || (rd_field == 0 && (function_field == JR))));
+}
+
+
+bool Assembler::IsJ(Instr instr) {
+  uint32_t opcode = GetOpcodeField(instr);
+  // Checks if the instruction is a jump.
+  return opcode == J;
+}
+
+
+bool Assembler::IsJal(Instr instr) {
+  return GetOpcodeField(instr) == JAL;
+}
+
+
+bool Assembler::IsJr(Instr instr) {
+  return GetOpcodeField(instr) == SPECIAL && GetFunctionField(instr) == JR;
+}
+
+
+bool Assembler::IsJalr(Instr instr) {
+  return GetOpcodeField(instr) == SPECIAL && GetFunctionField(instr) == JALR;
+}
+
+
+bool Assembler::IsLui(Instr instr) {
+  uint32_t opcode = GetOpcodeField(instr);
+  // Checks if the instruction is a load upper immediate.
+  return opcode == LUI;
+}
+
+
+bool Assembler::IsOri(Instr instr) {
+  uint32_t opcode = GetOpcodeField(instr);
+  // Checks if the instruction is an or immediate.
+  return opcode == ORI;
+}
+
+
+bool Assembler::IsNop(Instr instr, unsigned int type) {
+  // See Assembler::nop(type).
+  DCHECK(type < 32);
+  uint32_t opcode = GetOpcodeField(instr);
+  uint32_t function = GetFunctionField(instr);
+  uint32_t rt = GetRt(instr);
+  uint32_t rd = GetRd(instr);
+  uint32_t sa = GetSa(instr);
+
+  // Traditional mips nop == sll(zero_reg, zero_reg, 0)
+  // When marking non-zero type, use sll(zero_reg, at, type)
+  // to avoid use of mips ssnop and ehb special encodings
+  // of the sll instruction.
+
+  Register nop_rt_reg = (type == 0) ? zero_reg : at;
+  bool ret = (opcode == SPECIAL && function == SLL &&
+              rd == static_cast<uint32_t>(ToNumber(zero_reg)) &&
+              rt == static_cast<uint32_t>(ToNumber(nop_rt_reg)) &&
+              sa == type);
+
+  return ret;
+}
+
+
+int32_t Assembler::GetBranchOffset(Instr instr) {
+  DCHECK(IsBranch(instr));
+  return (static_cast<int16_t>(instr & kImm16Mask)) << 2;
+}
+
+
+bool Assembler::IsLw(Instr instr) {
+  return ((instr & kOpcodeMask) == LW);
+}
+
+
+int16_t Assembler::GetLwOffset(Instr instr) {
+  DCHECK(IsLw(instr));
+  return ((instr & kImm16Mask));
+}
+
+
+Instr Assembler::SetLwOffset(Instr instr, int16_t offset) {
+  DCHECK(IsLw(instr));
+
+  // We actually create a new lw instruction based on the original one.
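+  // Keep the original rs and rt fields, but take the opcode and the 16-bit
+  // offset fresh so no stale offset bits survive.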
+ Instr temp_instr = LW | (instr & kRsFieldMask) | (instr & kRtFieldMask) + | (offset & kImm16Mask); + + return temp_instr; +} + + +bool Assembler::IsSw(Instr instr) { + return ((instr & kOpcodeMask) == SW); +} + + +Instr Assembler::SetSwOffset(Instr instr, int16_t offset) { + DCHECK(IsSw(instr)); + return ((instr & ~kImm16Mask) | (offset & kImm16Mask)); +} + + +bool Assembler::IsAddImmediate(Instr instr) { + return ((instr & kOpcodeMask) == ADDIU || (instr & kOpcodeMask) == DADDIU); +} + + +Instr Assembler::SetAddImmediateOffset(Instr instr, int16_t offset) { + DCHECK(IsAddImmediate(instr)); + return ((instr & ~kImm16Mask) | (offset & kImm16Mask)); +} + + +bool Assembler::IsAndImmediate(Instr instr) { + return GetOpcodeField(instr) == ANDI; +} + + +int64_t Assembler::target_at(int64_t pos) { + Instr instr = instr_at(pos); + if ((instr & ~kImm16Mask) == 0) { + // Emitted label constant, not part of a branch. + if (instr == 0) { + return kEndOfChain; + } else { + int32_t imm18 =((instr & static_cast<int32_t>(kImm16Mask)) << 16) >> 14; + return (imm18 + pos); + } + } + // Check we have a branch or jump instruction. + DCHECK(IsBranch(instr) || IsJ(instr) || IsLui(instr)); + // Do NOT change this to <<2. We rely on arithmetic shifts here, assuming + // the compiler uses arithmetic shifts for signed integers. + if (IsBranch(instr)) { + int32_t imm18 = ((instr & static_cast<int32_t>(kImm16Mask)) << 16) >> 14; + if (imm18 == kEndOfChain) { + // EndOfChain sentinel is returned directly, not relative to pc or pos. + return kEndOfChain; + } else { + return pos + kBranchPCOffset + imm18; + } + } else if (IsLui(instr)) { + Instr instr_lui = instr_at(pos + 0 * Assembler::kInstrSize); + Instr instr_ori = instr_at(pos + 1 * Assembler::kInstrSize); + Instr instr_ori2 = instr_at(pos + 3 * Assembler::kInstrSize); + DCHECK(IsOri(instr_ori)); + DCHECK(IsOri(instr_ori2)); + + // TODO(plind) create named constants for shift values. + int64_t imm = static_cast<int64_t>(instr_lui & kImm16Mask) << 48; + imm |= static_cast<int64_t>(instr_ori & kImm16Mask) << 32; + imm |= static_cast<int64_t>(instr_ori2 & kImm16Mask) << 16; + // Sign extend address; + imm >>= 16; + + if (imm == kEndOfJumpChain) { + // EndOfChain sentinel is returned directly, not relative to pc or pos. + return kEndOfChain; + } else { + uint64_t instr_address = reinterpret_cast<int64_t>(buffer_ + pos); + int64_t delta = instr_address - imm; + DCHECK(pos > delta); + return pos - delta; + } + } else { + int32_t imm28 = (instr & static_cast<int32_t>(kImm26Mask)) << 2; + if (imm28 == kEndOfJumpChain) { + // EndOfChain sentinel is returned directly, not relative to pc or pos. + return kEndOfChain; + } else { + uint64_t instr_address = reinterpret_cast<int64_t>(buffer_ + pos); + instr_address &= kImm28Mask; + int64_t delta = instr_address - imm28; + DCHECK(pos > delta); + return pos - delta; + } + } +} + + +void Assembler::target_at_put(int64_t pos, int64_t target_pos) { + Instr instr = instr_at(pos); + if ((instr & ~kImm16Mask) == 0) { + DCHECK(target_pos == kEndOfChain || target_pos >= 0); + // Emitted label constant, not part of a branch. + // Make label relative to Code* of generated Code object. 
+ instr_at_put(pos, target_pos + (Code::kHeaderSize - kHeapObjectTag)); + return; + } + + DCHECK(IsBranch(instr) || IsJ(instr) || IsLui(instr)); + if (IsBranch(instr)) { + int32_t imm18 = target_pos - (pos + kBranchPCOffset); + DCHECK((imm18 & 3) == 0); + + instr &= ~kImm16Mask; + int32_t imm16 = imm18 >> 2; + DCHECK(is_int16(imm16)); + + instr_at_put(pos, instr | (imm16 & kImm16Mask)); + } else if (IsLui(instr)) { + Instr instr_lui = instr_at(pos + 0 * Assembler::kInstrSize); + Instr instr_ori = instr_at(pos + 1 * Assembler::kInstrSize); + Instr instr_ori2 = instr_at(pos + 3 * Assembler::kInstrSize); + DCHECK(IsOri(instr_ori)); + DCHECK(IsOri(instr_ori2)); + + uint64_t imm = reinterpret_cast<uint64_t>(buffer_) + target_pos; + DCHECK((imm & 3) == 0); + + instr_lui &= ~kImm16Mask; + instr_ori &= ~kImm16Mask; + instr_ori2 &= ~kImm16Mask; + + instr_at_put(pos + 0 * Assembler::kInstrSize, + instr_lui | ((imm >> 32) & kImm16Mask)); + instr_at_put(pos + 1 * Assembler::kInstrSize, + instr_ori | ((imm >> 16) & kImm16Mask)); + instr_at_put(pos + 3 * Assembler::kInstrSize, + instr_ori2 | (imm & kImm16Mask)); + } else { + uint64_t imm28 = reinterpret_cast<uint64_t>(buffer_) + target_pos; + imm28 &= kImm28Mask; + DCHECK((imm28 & 3) == 0); + + instr &= ~kImm26Mask; + uint32_t imm26 = imm28 >> 2; + DCHECK(is_uint26(imm26)); + + instr_at_put(pos, instr | (imm26 & kImm26Mask)); + } +} + + +void Assembler::print(Label* L) { + if (L->is_unused()) { + PrintF("unused label\n"); + } else if (L->is_bound()) { + PrintF("bound label to %d\n", L->pos()); + } else if (L->is_linked()) { + Label l = *L; + PrintF("unbound label"); + while (l.is_linked()) { + PrintF("@ %d ", l.pos()); + Instr instr = instr_at(l.pos()); + if ((instr & ~kImm16Mask) == 0) { + PrintF("value\n"); + } else { + PrintF("%d\n", instr); + } + next(&l); + } + } else { + PrintF("label in inconsistent state (pos = %d)\n", L->pos_); + } +} + + +void Assembler::bind_to(Label* L, int pos) { + DCHECK(0 <= pos && pos <= pc_offset()); // Must have valid binding position. + int32_t trampoline_pos = kInvalidSlotPos; + if (L->is_linked() && !trampoline_emitted_) { + unbound_labels_count_--; + next_buffer_check_ += kTrampolineSlotsSize; + } + + while (L->is_linked()) { + int32_t fixup_pos = L->pos(); + int32_t dist = pos - fixup_pos; + next(L); // Call next before overwriting link with target at fixup_pos. + Instr instr = instr_at(fixup_pos); + if (IsBranch(instr)) { + if (dist > kMaxBranchOffset) { + if (trampoline_pos == kInvalidSlotPos) { + trampoline_pos = get_trampoline_entry(fixup_pos); + CHECK(trampoline_pos != kInvalidSlotPos); + } + DCHECK((trampoline_pos - fixup_pos) <= kMaxBranchOffset); + target_at_put(fixup_pos, trampoline_pos); + fixup_pos = trampoline_pos; + dist = pos - fixup_pos; + } + target_at_put(fixup_pos, pos); + } else { + DCHECK(IsJ(instr) || IsLui(instr) || IsEmittedConstant(instr)); + target_at_put(fixup_pos, pos); + } + } + L->bind_to(pos); + + // Keep track of the last bound label so we don't eliminate any instructions + // before a bound label. + if (pos > last_bound_pos_) + last_bound_pos_ = pos; +} + + +void Assembler::bind(Label* L) { + DCHECK(!L->is_bound()); // Label can only be bound once. 
+ bind_to(L, pc_offset()); +} + + +void Assembler::next(Label* L) { + DCHECK(L->is_linked()); + int link = target_at(L->pos()); + if (link == kEndOfChain) { + L->Unuse(); + } else { + DCHECK(link >= 0); + L->link_to(link); + } +} + + +bool Assembler::is_near(Label* L) { + if (L->is_bound()) { + return ((pc_offset() - L->pos()) < kMaxBranchOffset - 4 * kInstrSize); + } + return false; +} + + +// We have to use a temporary register for things that can be relocated even +// if they can be encoded in the MIPS's 16 bits of immediate-offset instruction +// space. There is no guarantee that the relocated location can be similarly +// encoded. +bool Assembler::MustUseReg(RelocInfo::Mode rmode) { + return !RelocInfo::IsNone(rmode); +} + +void Assembler::GenInstrRegister(Opcode opcode, + Register rs, + Register rt, + Register rd, + uint16_t sa, + SecondaryField func) { + DCHECK(rd.is_valid() && rs.is_valid() && rt.is_valid() && is_uint5(sa)); + Instr instr = opcode | (rs.code() << kRsShift) | (rt.code() << kRtShift) + | (rd.code() << kRdShift) | (sa << kSaShift) | func; + emit(instr); +} + + +void Assembler::GenInstrRegister(Opcode opcode, + Register rs, + Register rt, + uint16_t msb, + uint16_t lsb, + SecondaryField func) { + DCHECK(rs.is_valid() && rt.is_valid() && is_uint5(msb) && is_uint5(lsb)); + Instr instr = opcode | (rs.code() << kRsShift) | (rt.code() << kRtShift) + | (msb << kRdShift) | (lsb << kSaShift) | func; + emit(instr); +} + + +void Assembler::GenInstrRegister(Opcode opcode, + SecondaryField fmt, + FPURegister ft, + FPURegister fs, + FPURegister fd, + SecondaryField func) { + DCHECK(fd.is_valid() && fs.is_valid() && ft.is_valid()); + Instr instr = opcode | fmt | (ft.code() << kFtShift) | (fs.code() << kFsShift) + | (fd.code() << kFdShift) | func; + emit(instr); +} + + +void Assembler::GenInstrRegister(Opcode opcode, + FPURegister fr, + FPURegister ft, + FPURegister fs, + FPURegister fd, + SecondaryField func) { + DCHECK(fd.is_valid() && fr.is_valid() && fs.is_valid() && ft.is_valid()); + Instr instr = opcode | (fr.code() << kFrShift) | (ft.code() << kFtShift) + | (fs.code() << kFsShift) | (fd.code() << kFdShift) | func; + emit(instr); +} + + +void Assembler::GenInstrRegister(Opcode opcode, + SecondaryField fmt, + Register rt, + FPURegister fs, + FPURegister fd, + SecondaryField func) { + DCHECK(fd.is_valid() && fs.is_valid() && rt.is_valid()); + Instr instr = opcode | fmt | (rt.code() << kRtShift) + | (fs.code() << kFsShift) | (fd.code() << kFdShift) | func; + emit(instr); +} + + +void Assembler::GenInstrRegister(Opcode opcode, + SecondaryField fmt, + Register rt, + FPUControlRegister fs, + SecondaryField func) { + DCHECK(fs.is_valid() && rt.is_valid()); + Instr instr = + opcode | fmt | (rt.code() << kRtShift) | (fs.code() << kFsShift) | func; + emit(instr); +} + + +// Instructions with immediate value. +// Registers are in the order of the instruction encoding, from left to right. 
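As a standalone illustration of the I-type packing these helpers perform, here is a minimal sketch. It assumes the standard MIPS field layout used by this file (rs at bit 21, rt at bit 16, 16-bit immediate) and the usual pre-shifted DADDIU opcode 0x19; EncodeIType and the main() driver are illustrative only, not V8 code.

#include <cstdint>
#include <cstdio>

// Pack an I-type MIPS instruction: opcode | rs | rt | imm16, mirroring
// GenInstrImmediate below (the opcode is assumed to be pre-shifted into the
// top six bits, as in V8's Opcode enum).
static uint32_t EncodeIType(uint32_t opcode, uint32_t rs, uint32_t rt,
                            int32_t imm16) {
  return opcode | (rs << 21) | (rt << 16) |
         (static_cast<uint32_t>(imm16) & 0xffff);
}

int main() {
  // daddiu sp, sp, -8 (sp is register 29): the same shape as the
  // kPushInstruction pattern defined earlier in this file.
  uint32_t instr = EncodeIType(0x19u << 26, 29, 29, -8);
  std::printf("0x%08x\n", instr);  // Prints 0x67bdfff8.
  return 0;
}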
+void Assembler::GenInstrImmediate(Opcode opcode, + Register rs, + Register rt, + int32_t j) { + DCHECK(rs.is_valid() && rt.is_valid() && (is_int16(j) || is_uint16(j))); + Instr instr = opcode | (rs.code() << kRsShift) | (rt.code() << kRtShift) + | (j & kImm16Mask); + emit(instr); +} + + +void Assembler::GenInstrImmediate(Opcode opcode, + Register rs, + SecondaryField SF, + int32_t j) { + DCHECK(rs.is_valid() && (is_int16(j) || is_uint16(j))); + Instr instr = opcode | (rs.code() << kRsShift) | SF | (j & kImm16Mask); + emit(instr); +} + + +void Assembler::GenInstrImmediate(Opcode opcode, + Register rs, + FPURegister ft, + int32_t j) { + DCHECK(rs.is_valid() && ft.is_valid() && (is_int16(j) || is_uint16(j))); + Instr instr = opcode | (rs.code() << kRsShift) | (ft.code() << kFtShift) + | (j & kImm16Mask); + emit(instr); +} + + +void Assembler::GenInstrJump(Opcode opcode, + uint32_t address) { + BlockTrampolinePoolScope block_trampoline_pool(this); + DCHECK(is_uint26(address)); + Instr instr = opcode | address; + emit(instr); + BlockTrampolinePoolFor(1); // For associated delay slot. +} + + +// Returns the next free trampoline entry. +int32_t Assembler::get_trampoline_entry(int32_t pos) { + int32_t trampoline_entry = kInvalidSlotPos; + if (!internal_trampoline_exception_) { + if (trampoline_.start() > pos) { + trampoline_entry = trampoline_.take_slot(); + } + + if (kInvalidSlotPos == trampoline_entry) { + internal_trampoline_exception_ = true; + } + } + return trampoline_entry; +} + + +uint64_t Assembler::jump_address(Label* L) { + int64_t target_pos; + if (L->is_bound()) { + target_pos = L->pos(); + } else { + if (L->is_linked()) { + target_pos = L->pos(); // L's link. + L->link_to(pc_offset()); + } else { + L->link_to(pc_offset()); + return kEndOfJumpChain; + } + } + + uint64_t imm = reinterpret_cast<uint64_t>(buffer_) + target_pos; + DCHECK((imm & 3) == 0); + + return imm; +} + + +int32_t Assembler::branch_offset(Label* L, bool jump_elimination_allowed) { + int32_t target_pos; + if (L->is_bound()) { + target_pos = L->pos(); + } else { + if (L->is_linked()) { + target_pos = L->pos(); + L->link_to(pc_offset()); + } else { + L->link_to(pc_offset()); + if (!trampoline_emitted_) { + unbound_labels_count_++; + next_buffer_check_ -= kTrampolineSlotsSize; + } + return kEndOfChain; + } + } + + int32_t offset = target_pos - (pc_offset() + kBranchPCOffset); + DCHECK((offset & 3) == 0); + DCHECK(is_int16(offset >> 2)); + + return offset; +} + + +int32_t Assembler::branch_offset_compact(Label* L, + bool jump_elimination_allowed) { + int32_t target_pos; + if (L->is_bound()) { + target_pos = L->pos(); + } else { + if (L->is_linked()) { + target_pos = L->pos(); + L->link_to(pc_offset()); + } else { + L->link_to(pc_offset()); + if (!trampoline_emitted_) { + unbound_labels_count_++; + next_buffer_check_ -= kTrampolineSlotsSize; + } + return kEndOfChain; + } + } + + int32_t offset = target_pos - pc_offset(); + DCHECK((offset & 3) == 0); + DCHECK(is_int16(offset >> 2)); + + return offset; +} + + +int32_t Assembler::branch_offset21(Label* L, bool jump_elimination_allowed) { + int32_t target_pos; + if (L->is_bound()) { + target_pos = L->pos(); + } else { + if (L->is_linked()) { + target_pos = L->pos(); + L->link_to(pc_offset()); + } else { + L->link_to(pc_offset()); + if (!trampoline_emitted_) { + unbound_labels_count_++; + next_buffer_check_ -= kTrampolineSlotsSize; + } + return kEndOfChain; + } + } + + int32_t offset = target_pos - (pc_offset() + kBranchPCOffset); + DCHECK((offset & 3) == 0); + DCHECK(((offset 
>> 2) & 0xFFE00000) == 0); // Offset is 21bit width. + + return offset; +} + + +int32_t Assembler::branch_offset21_compact(Label* L, + bool jump_elimination_allowed) { + int32_t target_pos; + if (L->is_bound()) { + target_pos = L->pos(); + } else { + if (L->is_linked()) { + target_pos = L->pos(); + L->link_to(pc_offset()); + } else { + L->link_to(pc_offset()); + if (!trampoline_emitted_) { + unbound_labels_count_++; + next_buffer_check_ -= kTrampolineSlotsSize; + } + return kEndOfChain; + } + } + + int32_t offset = target_pos - pc_offset(); + DCHECK((offset & 3) == 0); + DCHECK(((offset >> 2) & 0xFFE00000) == 0); // Offset is 21bit width. + + return offset; +} + + +void Assembler::label_at_put(Label* L, int at_offset) { + int target_pos; + if (L->is_bound()) { + target_pos = L->pos(); + instr_at_put(at_offset, target_pos + (Code::kHeaderSize - kHeapObjectTag)); + } else { + if (L->is_linked()) { + target_pos = L->pos(); // L's link. + int32_t imm18 = target_pos - at_offset; + DCHECK((imm18 & 3) == 0); + int32_t imm16 = imm18 >> 2; + DCHECK(is_int16(imm16)); + instr_at_put(at_offset, (imm16 & kImm16Mask)); + } else { + target_pos = kEndOfChain; + instr_at_put(at_offset, 0); + if (!trampoline_emitted_) { + unbound_labels_count_++; + next_buffer_check_ -= kTrampolineSlotsSize; + } + } + L->link_to(at_offset); + } +} + + +//------- Branch and jump instructions -------- + +void Assembler::b(int16_t offset) { + beq(zero_reg, zero_reg, offset); +} + + +void Assembler::bal(int16_t offset) { + positions_recorder()->WriteRecordedPositions(); + bgezal(zero_reg, offset); +} + + +void Assembler::beq(Register rs, Register rt, int16_t offset) { + BlockTrampolinePoolScope block_trampoline_pool(this); + GenInstrImmediate(BEQ, rs, rt, offset); + BlockTrampolinePoolFor(1); // For associated delay slot. +} + + +void Assembler::bgez(Register rs, int16_t offset) { + BlockTrampolinePoolScope block_trampoline_pool(this); + GenInstrImmediate(REGIMM, rs, BGEZ, offset); + BlockTrampolinePoolFor(1); // For associated delay slot. +} + + +void Assembler::bgezc(Register rt, int16_t offset) { + DCHECK(kArchVariant == kMips64r6); + DCHECK(!(rt.is(zero_reg))); + GenInstrImmediate(BLEZL, rt, rt, offset); +} + + +void Assembler::bgeuc(Register rs, Register rt, int16_t offset) { + DCHECK(kArchVariant == kMips64r6); + DCHECK(!(rs.is(zero_reg))); + DCHECK(!(rt.is(zero_reg))); + DCHECK(rs.code() != rt.code()); + GenInstrImmediate(BLEZ, rs, rt, offset); +} + + +void Assembler::bgec(Register rs, Register rt, int16_t offset) { + DCHECK(kArchVariant == kMips64r6); + DCHECK(!(rs.is(zero_reg))); + DCHECK(!(rt.is(zero_reg))); + DCHECK(rs.code() != rt.code()); + GenInstrImmediate(BLEZL, rs, rt, offset); +} + + +void Assembler::bgezal(Register rs, int16_t offset) { + DCHECK(kArchVariant != kMips64r6 || rs.is(zero_reg)); + BlockTrampolinePoolScope block_trampoline_pool(this); + positions_recorder()->WriteRecordedPositions(); + GenInstrImmediate(REGIMM, rs, BGEZAL, offset); + BlockTrampolinePoolFor(1); // For associated delay slot. +} + + +void Assembler::bgtz(Register rs, int16_t offset) { + BlockTrampolinePoolScope block_trampoline_pool(this); + GenInstrImmediate(BGTZ, rs, zero_reg, offset); + BlockTrampolinePoolFor(1); // For associated delay slot. 
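+  // Blocking the pool for one extra instruction keeps the trampoline from
+  // being emitted into the branch delay slot.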
+} + + +void Assembler::bgtzc(Register rt, int16_t offset) { + DCHECK(kArchVariant == kMips64r6); + DCHECK(!(rt.is(zero_reg))); + GenInstrImmediate(BGTZL, zero_reg, rt, offset); +} + + +void Assembler::blez(Register rs, int16_t offset) { + BlockTrampolinePoolScope block_trampoline_pool(this); + GenInstrImmediate(BLEZ, rs, zero_reg, offset); + BlockTrampolinePoolFor(1); // For associated delay slot. +} + + +void Assembler::blezc(Register rt, int16_t offset) { + DCHECK(kArchVariant == kMips64r6); + DCHECK(!(rt.is(zero_reg))); + GenInstrImmediate(BLEZL, zero_reg, rt, offset); +} + + +void Assembler::bltzc(Register rt, int16_t offset) { + DCHECK(kArchVariant == kMips64r6); + DCHECK(!(rt.is(zero_reg))); + GenInstrImmediate(BGTZL, rt, rt, offset); +} + + +void Assembler::bltuc(Register rs, Register rt, int16_t offset) { + DCHECK(kArchVariant == kMips64r6); + DCHECK(!(rs.is(zero_reg))); + DCHECK(!(rt.is(zero_reg))); + DCHECK(rs.code() != rt.code()); + GenInstrImmediate(BGTZ, rs, rt, offset); +} + + +void Assembler::bltc(Register rs, Register rt, int16_t offset) { + DCHECK(kArchVariant == kMips64r6); + DCHECK(!(rs.is(zero_reg))); + DCHECK(!(rt.is(zero_reg))); + DCHECK(rs.code() != rt.code()); + GenInstrImmediate(BGTZL, rs, rt, offset); +} + + +void Assembler::bltz(Register rs, int16_t offset) { + BlockTrampolinePoolScope block_trampoline_pool(this); + GenInstrImmediate(REGIMM, rs, BLTZ, offset); + BlockTrampolinePoolFor(1); // For associated delay slot. +} + + +void Assembler::bltzal(Register rs, int16_t offset) { + DCHECK(kArchVariant != kMips64r6 || rs.is(zero_reg)); + BlockTrampolinePoolScope block_trampoline_pool(this); + positions_recorder()->WriteRecordedPositions(); + GenInstrImmediate(REGIMM, rs, BLTZAL, offset); + BlockTrampolinePoolFor(1); // For associated delay slot. +} + + +void Assembler::bne(Register rs, Register rt, int16_t offset) { + BlockTrampolinePoolScope block_trampoline_pool(this); + GenInstrImmediate(BNE, rs, rt, offset); + BlockTrampolinePoolFor(1); // For associated delay slot. 
+}
+
+
+void Assembler::bovc(Register rs, Register rt, int16_t offset) {
+  DCHECK(kArchVariant == kMips64r6);
+  DCHECK(!(rs.is(zero_reg)));
+  DCHECK(rs.code() >= rt.code());
+  GenInstrImmediate(ADDI, rs, rt, offset);
+}
+
+
+void Assembler::bnvc(Register rs, Register rt, int16_t offset) {
+  DCHECK(kArchVariant == kMips64r6);
+  DCHECK(!(rs.is(zero_reg)));
+  DCHECK(rs.code() >= rt.code());
+  GenInstrImmediate(DADDI, rs, rt, offset);
+}
+
+
+void Assembler::blezalc(Register rt, int16_t offset) {
+  DCHECK(kArchVariant == kMips64r6);
+  DCHECK(!(rt.is(zero_reg)));
+  GenInstrImmediate(BLEZ, zero_reg, rt, offset);
+}
+
+
+void Assembler::bgezalc(Register rt, int16_t offset) {
+  DCHECK(kArchVariant == kMips64r6);
+  DCHECK(!(rt.is(zero_reg)));
+  GenInstrImmediate(BLEZ, rt, rt, offset);
+}
+
+
+void Assembler::bgezall(Register rs, int16_t offset) {
+  DCHECK(kArchVariant == kMips64r6);
+  DCHECK(!(rs.is(zero_reg)));
+  GenInstrImmediate(REGIMM, rs, BGEZALL, offset);
+}
+
+
+void Assembler::bltzalc(Register rt, int16_t offset) {
+  DCHECK(kArchVariant == kMips64r6);
+  DCHECK(!(rt.is(zero_reg)));
+  GenInstrImmediate(BGTZ, rt, rt, offset);
+}
+
+
+void Assembler::bgtzalc(Register rt, int16_t offset) {
+  DCHECK(kArchVariant == kMips64r6);
+  DCHECK(!(rt.is(zero_reg)));
+  GenInstrImmediate(BGTZ, zero_reg, rt, offset);
+}
+
+
+void Assembler::beqzalc(Register rt, int16_t offset) {
+  DCHECK(kArchVariant == kMips64r6);
+  DCHECK(!(rt.is(zero_reg)));
+  GenInstrImmediate(ADDI, zero_reg, rt, offset);
+}
+
+
+void Assembler::bnezalc(Register rt, int16_t offset) {
+  DCHECK(kArchVariant == kMips64r6);
+  DCHECK(!(rt.is(zero_reg)));
+  GenInstrImmediate(DADDI, zero_reg, rt, offset);
+}
+
+
+void Assembler::beqc(Register rs, Register rt, int16_t offset) {
+  DCHECK(kArchVariant == kMips64r6);
+  DCHECK(rs.code() < rt.code());
+  GenInstrImmediate(ADDI, rs, rt, offset);
+}
+
+
+void Assembler::beqzc(Register rs, int32_t offset) {
+  DCHECK(kArchVariant == kMips64r6);
+  DCHECK(!(rs.is(zero_reg)));
+  Instr instr = BEQZC | (rs.code() << kRsShift) | offset;
+  emit(instr);
+}
+
+
+void Assembler::bnec(Register rs, Register rt, int16_t offset) {
+  DCHECK(kArchVariant == kMips64r6);
+  DCHECK(rs.code() < rt.code());
+  GenInstrImmediate(DADDI, rs, rt, offset);
+}
+
+
+void Assembler::bnezc(Register rs, int32_t offset) {
+  DCHECK(kArchVariant == kMips64r6);
+  DCHECK(!(rs.is(zero_reg)));
+  Instr instr = BNEZC | (rs.code() << kRsShift) | offset;
+  emit(instr);
+}
+
+
+void Assembler::j(int64_t target) {
+#ifdef DEBUG
+  // Get pc of delay slot.
+  uint64_t ipc = reinterpret_cast<uint64_t>(pc_ + 1 * kInstrSize);
+  bool in_range = ((ipc ^ static_cast<uint64_t>(target)) >>
+                   (kImm26Bits + kImmFieldShift)) == 0;
+  DCHECK(in_range && ((target & 3) == 0));
+#endif
+  GenInstrJump(J, target >> 2);
+}
+
+
+void Assembler::jr(Register rs) {
+  if (kArchVariant != kMips64r6) {
+    BlockTrampolinePoolScope block_trampoline_pool(this);
+    if (rs.is(ra)) {
+      positions_recorder()->WriteRecordedPositions();
+    }
+    GenInstrRegister(SPECIAL, rs, zero_reg, zero_reg, 0, JR);
+    BlockTrampolinePoolFor(1);  // For associated delay slot.
+  } else {
+    jalr(rs, zero_reg);
+  }
+}
+
+
+void Assembler::jal(int64_t target) {
+#ifdef DEBUG
+  // Get pc of delay slot.
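+  // J/JAL can only reach targets whose upper bits match those of the delay
+  // slot pc: the 26-bit index shifted by 2 spans a 256 MB region, so the
+  // check below requires both addresses to lie in the same region.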
+  uint64_t ipc = reinterpret_cast<uint64_t>(pc_ + 1 * kInstrSize);
+  bool in_range = ((ipc ^ static_cast<uint64_t>(target)) >>
+                   (kImm26Bits + kImmFieldShift)) == 0;
+  DCHECK(in_range && ((target & 3) == 0));
+#endif
+  positions_recorder()->WriteRecordedPositions();
+  GenInstrJump(JAL, target >> 2);
+}
+
+
+void Assembler::jalr(Register rs, Register rd) {
+  BlockTrampolinePoolScope block_trampoline_pool(this);
+  positions_recorder()->WriteRecordedPositions();
+  GenInstrRegister(SPECIAL, rs, zero_reg, rd, 0, JALR);
+  BlockTrampolinePoolFor(1);  // For associated delay slot.
+}
+
+
+void Assembler::j_or_jr(int64_t target, Register rs) {
+  // Get pc of delay slot.
+  uint64_t ipc = reinterpret_cast<uint64_t>(pc_ + 1 * kInstrSize);
+  bool in_range = ((ipc ^ static_cast<uint64_t>(target)) >>
+                   (kImm26Bits + kImmFieldShift)) == 0;
+  if (in_range) {
+    j(target);
+  } else {
+    jr(t9);
+  }
+}
+
+
+void Assembler::jal_or_jalr(int64_t target, Register rs) {
+  // Get pc of delay slot.
+  uint64_t ipc = reinterpret_cast<uint64_t>(pc_ + 1 * kInstrSize);
+  bool in_range = ((ipc ^ static_cast<uint64_t>(target)) >>
+                   (kImm26Bits + kImmFieldShift)) == 0;
+  if (in_range) {
+    jal(target);
+  } else {
+    jalr(t9);
+  }
+}
+
+
+// -------Data-processing-instructions---------
+
+// Arithmetic.
+
+void Assembler::addu(Register rd, Register rs, Register rt) {
+  GenInstrRegister(SPECIAL, rs, rt, rd, 0, ADDU);
+}
+
+
+void Assembler::addiu(Register rd, Register rs, int32_t j) {
+  GenInstrImmediate(ADDIU, rs, rd, j);
+}
+
+
+void Assembler::subu(Register rd, Register rs, Register rt) {
+  GenInstrRegister(SPECIAL, rs, rt, rd, 0, SUBU);
+}
+
+
+void Assembler::mul(Register rd, Register rs, Register rt) {
+  if (kArchVariant == kMips64r6) {
+    GenInstrRegister(SPECIAL, rs, rt, rd, MUL_OP, MUL_MUH);
+  } else {
+    GenInstrRegister(SPECIAL2, rs, rt, rd, 0, MUL);
+  }
+}
+
+
+void Assembler::muh(Register rd, Register rs, Register rt) {
+  DCHECK(kArchVariant == kMips64r6);
+  GenInstrRegister(SPECIAL, rs, rt, rd, MUH_OP, MUL_MUH);
+}
+
+
+void Assembler::mulu(Register rd, Register rs, Register rt) {
+  DCHECK(kArchVariant == kMips64r6);
+  GenInstrRegister(SPECIAL, rs, rt, rd, MUL_OP, MUL_MUH_U);
+}
+
+
+void Assembler::muhu(Register rd, Register rs, Register rt) {
+  DCHECK(kArchVariant == kMips64r6);
+  GenInstrRegister(SPECIAL, rs, rt, rd, MUH_OP, MUL_MUH_U);
+}
+
+
+void Assembler::dmul(Register rd, Register rs, Register rt) {
+  DCHECK(kArchVariant == kMips64r6);
+  GenInstrRegister(SPECIAL, rs, rt, rd, MUL_OP, D_MUL_MUH);
+}
+
+
+void Assembler::dmuh(Register rd, Register rs, Register rt) {
+  DCHECK(kArchVariant == kMips64r6);
+  GenInstrRegister(SPECIAL, rs, rt, rd, MUH_OP, D_MUL_MUH);
+}
+
+
+void Assembler::dmulu(Register rd, Register rs, Register rt) {
+  DCHECK(kArchVariant == kMips64r6);
+  GenInstrRegister(SPECIAL, rs, rt, rd, MUL_OP, D_MUL_MUH_U);
+}
+
+
+void Assembler::dmuhu(Register rd, Register rs, Register rt) {
+  DCHECK(kArchVariant == kMips64r6);
+  GenInstrRegister(SPECIAL, rs, rt, rd, MUH_OP, D_MUL_MUH_U);
+}
+
+
+void Assembler::mult(Register rs, Register rt) {
+  DCHECK(kArchVariant != kMips64r6);
+  GenInstrRegister(SPECIAL, rs, rt, zero_reg, 0, MULT);
+}
+
+
+void Assembler::multu(Register rs, Register rt) {
+  DCHECK(kArchVariant != kMips64r6);
+  GenInstrRegister(SPECIAL, rs, rt, zero_reg, 0, MULTU);
+}
+
+
+void Assembler::daddiu(Register rd, Register rs, int32_t j) {
+  GenInstrImmediate(DADDIU, rs, rd, j);
+}
+
+
+void Assembler::div(Register rs, Register rt) {
+  GenInstrRegister(SPECIAL, rs, rt, zero_reg, 0, DIV);
+}
+
+
+void
Assembler::div(Register rd, Register rs, Register rt) { + DCHECK(kArchVariant == kMips64r6); + GenInstrRegister(SPECIAL, rs, rt, rd, DIV_OP, DIV_MOD); +} + + +void Assembler::mod(Register rd, Register rs, Register rt) { + DCHECK(kArchVariant == kMips64r6); + GenInstrRegister(SPECIAL, rs, rt, rd, MOD_OP, DIV_MOD); +} + + +void Assembler::divu(Register rs, Register rt) { + GenInstrRegister(SPECIAL, rs, rt, zero_reg, 0, DIVU); +} + + +void Assembler::divu(Register rd, Register rs, Register rt) { + DCHECK(kArchVariant == kMips64r6); + GenInstrRegister(SPECIAL, rs, rt, rd, DIV_OP, DIV_MOD_U); +} + + +void Assembler::modu(Register rd, Register rs, Register rt) { + DCHECK(kArchVariant == kMips64r6); + GenInstrRegister(SPECIAL, rs, rt, rd, MOD_OP, DIV_MOD_U); +} + + +void Assembler::daddu(Register rd, Register rs, Register rt) { + GenInstrRegister(SPECIAL, rs, rt, rd, 0, DADDU); +} + + +void Assembler::dsubu(Register rd, Register rs, Register rt) { + GenInstrRegister(SPECIAL, rs, rt, rd, 0, DSUBU); +} + + +void Assembler::dmult(Register rs, Register rt) { + GenInstrRegister(SPECIAL, rs, rt, zero_reg, 0, DMULT); +} + + +void Assembler::dmultu(Register rs, Register rt) { + GenInstrRegister(SPECIAL, rs, rt, zero_reg, 0, DMULTU); +} + + +void Assembler::ddiv(Register rs, Register rt) { + GenInstrRegister(SPECIAL, rs, rt, zero_reg, 0, DDIV); +} + + +void Assembler::ddiv(Register rd, Register rs, Register rt) { + DCHECK(kArchVariant == kMips64r6); + GenInstrRegister(SPECIAL, rs, rt, rd, DIV_OP, D_DIV_MOD); +} + + +void Assembler::dmod(Register rd, Register rs, Register rt) { + DCHECK(kArchVariant == kMips64r6); + GenInstrRegister(SPECIAL, rs, rt, rd, MOD_OP, D_DIV_MOD); +} + + +void Assembler::ddivu(Register rs, Register rt) { + GenInstrRegister(SPECIAL, rs, rt, zero_reg, 0, DDIVU); +} + + +void Assembler::ddivu(Register rd, Register rs, Register rt) { + DCHECK(kArchVariant == kMips64r6); + GenInstrRegister(SPECIAL, rs, rt, rd, DIV_OP, D_DIV_MOD_U); +} + + +void Assembler::dmodu(Register rd, Register rs, Register rt) { + DCHECK(kArchVariant == kMips64r6); + GenInstrRegister(SPECIAL, rs, rt, rd, MOD_OP, D_DIV_MOD_U); +} + + +// Logical. + +void Assembler::and_(Register rd, Register rs, Register rt) { + GenInstrRegister(SPECIAL, rs, rt, rd, 0, AND); +} + + +void Assembler::andi(Register rt, Register rs, int32_t j) { + DCHECK(is_uint16(j)); + GenInstrImmediate(ANDI, rs, rt, j); +} + + +void Assembler::or_(Register rd, Register rs, Register rt) { + GenInstrRegister(SPECIAL, rs, rt, rd, 0, OR); +} + + +void Assembler::ori(Register rt, Register rs, int32_t j) { + DCHECK(is_uint16(j)); + GenInstrImmediate(ORI, rs, rt, j); +} + + +void Assembler::xor_(Register rd, Register rs, Register rt) { + GenInstrRegister(SPECIAL, rs, rt, rd, 0, XOR); +} + + +void Assembler::xori(Register rt, Register rs, int32_t j) { + DCHECK(is_uint16(j)); + GenInstrImmediate(XORI, rs, rt, j); +} + + +void Assembler::nor(Register rd, Register rs, Register rt) { + GenInstrRegister(SPECIAL, rs, rt, rd, 0, NOR); +} + + +// Shifts. +void Assembler::sll(Register rd, + Register rt, + uint16_t sa, + bool coming_from_nop) { + // Don't allow nop instructions in the form sll zero_reg, zero_reg to be + // generated using the sll instruction. They must be generated using + // nop(int/NopMarkerTypes) or MarkCode(int/NopMarkerTypes) pseudo + // instructions. 
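+  // A marked nop keeps rd == zero_reg but uses 'at' in the rt field and the
+  // marker type in the sa field; Assembler::IsNop() is the matching decoder.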
+ DCHECK(coming_from_nop || !(rd.is(zero_reg) && rt.is(zero_reg))); + GenInstrRegister(SPECIAL, zero_reg, rt, rd, sa, SLL); +} + + +void Assembler::sllv(Register rd, Register rt, Register rs) { + GenInstrRegister(SPECIAL, rs, rt, rd, 0, SLLV); +} + + +void Assembler::srl(Register rd, Register rt, uint16_t sa) { + GenInstrRegister(SPECIAL, zero_reg, rt, rd, sa, SRL); +} + + +void Assembler::srlv(Register rd, Register rt, Register rs) { + GenInstrRegister(SPECIAL, rs, rt, rd, 0, SRLV); +} + + +void Assembler::sra(Register rd, Register rt, uint16_t sa) { + GenInstrRegister(SPECIAL, zero_reg, rt, rd, sa, SRA); +} + + +void Assembler::srav(Register rd, Register rt, Register rs) { + GenInstrRegister(SPECIAL, rs, rt, rd, 0, SRAV); +} + + +void Assembler::rotr(Register rd, Register rt, uint16_t sa) { + // Should be called via MacroAssembler::Ror. + DCHECK(rd.is_valid() && rt.is_valid() && is_uint5(sa)); + DCHECK(kArchVariant == kMips64r2); + Instr instr = SPECIAL | (1 << kRsShift) | (rt.code() << kRtShift) + | (rd.code() << kRdShift) | (sa << kSaShift) | SRL; + emit(instr); +} + + +void Assembler::rotrv(Register rd, Register rt, Register rs) { + // Should be called via MacroAssembler::Ror. + DCHECK(rd.is_valid() && rt.is_valid() && rs.is_valid() ); + DCHECK(kArchVariant == kMips64r2); + Instr instr = SPECIAL | (rs.code() << kRsShift) | (rt.code() << kRtShift) + | (rd.code() << kRdShift) | (1 << kSaShift) | SRLV; + emit(instr); +} + + +void Assembler::dsll(Register rd, Register rt, uint16_t sa) { + GenInstrRegister(SPECIAL, zero_reg, rt, rd, sa, DSLL); +} + + +void Assembler::dsllv(Register rd, Register rt, Register rs) { + GenInstrRegister(SPECIAL, rs, rt, rd, 0, DSLLV); +} + + +void Assembler::dsrl(Register rd, Register rt, uint16_t sa) { + GenInstrRegister(SPECIAL, zero_reg, rt, rd, sa, DSRL); +} + + +void Assembler::dsrlv(Register rd, Register rt, Register rs) { + GenInstrRegister(SPECIAL, rs, rt, rd, 0, DSRLV); +} + + +void Assembler::drotr(Register rd, Register rt, uint16_t sa) { + DCHECK(rd.is_valid() && rt.is_valid() && is_uint5(sa)); + Instr instr = SPECIAL | (1 << kRsShift) | (rt.code() << kRtShift) + | (rd.code() << kRdShift) | (sa << kSaShift) | DSRL; + emit(instr); +} + + +void Assembler::drotrv(Register rd, Register rt, Register rs) { + DCHECK(rd.is_valid() && rt.is_valid() && rs.is_valid() ); + Instr instr = SPECIAL | (rs.code() << kRsShift) | (rt.code() << kRtShift) + | (rd.code() << kRdShift) | (1 << kSaShift) | DSRLV; + emit(instr); +} + + +void Assembler::dsra(Register rd, Register rt, uint16_t sa) { + GenInstrRegister(SPECIAL, zero_reg, rt, rd, sa, DSRA); +} + + +void Assembler::dsrav(Register rd, Register rt, Register rs) { + GenInstrRegister(SPECIAL, rs, rt, rd, 0, DSRAV); +} + + +void Assembler::dsll32(Register rd, Register rt, uint16_t sa) { + GenInstrRegister(SPECIAL, zero_reg, rt, rd, sa, DSLL32); +} + + +void Assembler::dsrl32(Register rd, Register rt, uint16_t sa) { + GenInstrRegister(SPECIAL, zero_reg, rt, rd, sa, DSRL32); +} + + +void Assembler::dsra32(Register rd, Register rt, uint16_t sa) { + GenInstrRegister(SPECIAL, zero_reg, rt, rd, sa, DSRA32); +} + + +// ------------Memory-instructions------------- + +// Helper for base-reg + offset, when offset is larger than int16. +void Assembler::LoadRegPlusOffsetToAt(const MemOperand& src) { + DCHECK(!src.rm().is(at)); + DCHECK(is_int32(src.offset_)); + daddiu(at, zero_reg, (src.offset_ >> kLuiShift) & kImm16Mask); + dsll(at, at, kLuiShift); + ori(at, at, src.offset_ & kImm16Mask); // Load 32-bit offset. 
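+  // At this point 'at' holds the full 32-bit offset: the upper half came
+  // from the daddiu/dsll pair, the lower half from the ori above.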
+ daddu(at, at, src.rm()); // Add base register. +} + + +void Assembler::lb(Register rd, const MemOperand& rs) { + if (is_int16(rs.offset_)) { + GenInstrImmediate(LB, rs.rm(), rd, rs.offset_); + } else { // Offset > 16 bits, use multiple instructions to load. + LoadRegPlusOffsetToAt(rs); + GenInstrImmediate(LB, at, rd, 0); // Equiv to lb(rd, MemOperand(at, 0)); + } +} + + +void Assembler::lbu(Register rd, const MemOperand& rs) { + if (is_int16(rs.offset_)) { + GenInstrImmediate(LBU, rs.rm(), rd, rs.offset_); + } else { // Offset > 16 bits, use multiple instructions to load. + LoadRegPlusOffsetToAt(rs); + GenInstrImmediate(LBU, at, rd, 0); // Equiv to lbu(rd, MemOperand(at, 0)); + } +} + + +void Assembler::lh(Register rd, const MemOperand& rs) { + if (is_int16(rs.offset_)) { + GenInstrImmediate(LH, rs.rm(), rd, rs.offset_); + } else { // Offset > 16 bits, use multiple instructions to load. + LoadRegPlusOffsetToAt(rs); + GenInstrImmediate(LH, at, rd, 0); // Equiv to lh(rd, MemOperand(at, 0)); + } +} + + +void Assembler::lhu(Register rd, const MemOperand& rs) { + if (is_int16(rs.offset_)) { + GenInstrImmediate(LHU, rs.rm(), rd, rs.offset_); + } else { // Offset > 16 bits, use multiple instructions to load. + LoadRegPlusOffsetToAt(rs); + GenInstrImmediate(LHU, at, rd, 0); // Equiv to lhu(rd, MemOperand(at, 0)); + } +} + + +void Assembler::lw(Register rd, const MemOperand& rs) { + if (is_int16(rs.offset_)) { + GenInstrImmediate(LW, rs.rm(), rd, rs.offset_); + } else { // Offset > 16 bits, use multiple instructions to load. + LoadRegPlusOffsetToAt(rs); + GenInstrImmediate(LW, at, rd, 0); // Equiv to lw(rd, MemOperand(at, 0)); + } +} + + +void Assembler::lwu(Register rd, const MemOperand& rs) { + if (is_int16(rs.offset_)) { + GenInstrImmediate(LWU, rs.rm(), rd, rs.offset_); + } else { // Offset > 16 bits, use multiple instructions to load. + LoadRegPlusOffsetToAt(rs); + GenInstrImmediate(LWU, at, rd, 0); // Equiv to lwu(rd, MemOperand(at, 0)); + } +} + + +void Assembler::lwl(Register rd, const MemOperand& rs) { + GenInstrImmediate(LWL, rs.rm(), rd, rs.offset_); +} + + +void Assembler::lwr(Register rd, const MemOperand& rs) { + GenInstrImmediate(LWR, rs.rm(), rd, rs.offset_); +} + + +void Assembler::sb(Register rd, const MemOperand& rs) { + if (is_int16(rs.offset_)) { + GenInstrImmediate(SB, rs.rm(), rd, rs.offset_); + } else { // Offset > 16 bits, use multiple instructions to store. + LoadRegPlusOffsetToAt(rs); + GenInstrImmediate(SB, at, rd, 0); // Equiv to sb(rd, MemOperand(at, 0)); + } +} + + +void Assembler::sh(Register rd, const MemOperand& rs) { + if (is_int16(rs.offset_)) { + GenInstrImmediate(SH, rs.rm(), rd, rs.offset_); + } else { // Offset > 16 bits, use multiple instructions to store. + LoadRegPlusOffsetToAt(rs); + GenInstrImmediate(SH, at, rd, 0); // Equiv to sh(rd, MemOperand(at, 0)); + } +} + + +void Assembler::sw(Register rd, const MemOperand& rs) { + if (is_int16(rs.offset_)) { + GenInstrImmediate(SW, rs.rm(), rd, rs.offset_); + } else { // Offset > 16 bits, use multiple instructions to store. 
+    LoadRegPlusOffsetToAt(rs);
+    GenInstrImmediate(SW, at, rd, 0);  // Equiv to sw(rd, MemOperand(at, 0));
+  }
+}
+
+
+void Assembler::swl(Register rd, const MemOperand& rs) {
+  GenInstrImmediate(SWL, rs.rm(), rd, rs.offset_);
+}
+
+
+void Assembler::swr(Register rd, const MemOperand& rs) {
+  GenInstrImmediate(SWR, rs.rm(), rd, rs.offset_);
+}
+
+
+void Assembler::lui(Register rd, int32_t j) {
+  DCHECK(is_uint16(j));
+  GenInstrImmediate(LUI, zero_reg, rd, j);
+}
+
+
+void Assembler::aui(Register rs, Register rt, int32_t j) {
+  // This instruction uses the same opcode as 'lui'. The difference in
+  // encoding is that 'lui' has the zero register in its rs field.
+  DCHECK(is_uint16(j));
+  GenInstrImmediate(LUI, rs, rt, j);
+}
+
+
+void Assembler::daui(Register rs, Register rt, int32_t j) {
+  DCHECK(is_uint16(j));
+  GenInstrImmediate(DAUI, rs, rt, j);
+}
+
+
+void Assembler::dahi(Register rs, int32_t j) {
+  DCHECK(is_uint16(j));
+  GenInstrImmediate(REGIMM, rs, DAHI, j);
+}
+
+
+void Assembler::dati(Register rs, int32_t j) {
+  DCHECK(is_uint16(j));
+  GenInstrImmediate(REGIMM, rs, DATI, j);
+}
+
+
+void Assembler::ldl(Register rd, const MemOperand& rs) {
+  GenInstrImmediate(LDL, rs.rm(), rd, rs.offset_);
+}
+
+
+void Assembler::ldr(Register rd, const MemOperand& rs) {
+  GenInstrImmediate(LDR, rs.rm(), rd, rs.offset_);
+}
+
+
+void Assembler::sdl(Register rd, const MemOperand& rs) {
+  GenInstrImmediate(SDL, rs.rm(), rd, rs.offset_);
+}
+
+
+void Assembler::sdr(Register rd, const MemOperand& rs) {
+  GenInstrImmediate(SDR, rs.rm(), rd, rs.offset_);
+}
+
+
+void Assembler::ld(Register rd, const MemOperand& rs) {
+  if (is_int16(rs.offset_)) {
+    GenInstrImmediate(LD, rs.rm(), rd, rs.offset_);
+  } else {  // Offset > 16 bits, use multiple instructions to load.
+    LoadRegPlusOffsetToAt(rs);
+    GenInstrImmediate(LD, at, rd, 0);  // Equiv to ld(rd, MemOperand(at, 0));
+  }
+}
+
+
+void Assembler::sd(Register rd, const MemOperand& rs) {
+  if (is_int16(rs.offset_)) {
+    GenInstrImmediate(SD, rs.rm(), rd, rs.offset_);
+  } else {  // Offset > 16 bits, use multiple instructions to store.
+    LoadRegPlusOffsetToAt(rs);
+    GenInstrImmediate(SD, at, rd, 0);  // Equiv to sd(rd, MemOperand(at, 0));
+  }
+}
+
+
+// -------------Misc-instructions--------------
+
+// Break / Trap instructions.
+void Assembler::break_(uint32_t code, bool break_as_stop) {
+  DCHECK((code & ~0xfffff) == 0);
+  // We need to invalidate breaks that could be stops as well because the
+  // simulator expects a char pointer after the stop instruction.
+  // See constants-mips.h for explanation.
+  DCHECK((break_as_stop &&
+          code <= kMaxStopCode &&
+          code > kMaxWatchpointCode) ||
+         (!break_as_stop &&
+          (code > kMaxStopCode ||
+           code <= kMaxWatchpointCode)));
+  Instr break_instr = SPECIAL | BREAK | (code << 6);
+  emit(break_instr);
+}
+
+
+void Assembler::stop(const char* msg, uint32_t code) {
+  DCHECK(code > kMaxWatchpointCode);
+  DCHECK(code <= kMaxStopCode);
+#if defined(V8_HOST_ARCH_MIPS) || defined(V8_HOST_ARCH_MIPS64)
+  break_(0x54321);
+#else  // V8_HOST_ARCH_MIPS
+  BlockTrampolinePoolFor(3);
+  // The Simulator will handle the stop instruction and get the message
+  // address. On MIPS stop() is just a special kind of break_().
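+  // The stop code goes into the break instruction and the 64-bit message
+  // pointer is emitted inline right after it, which is why three
+  // instruction slots were blocked off above.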
+ break_(code, true);
+ emit(reinterpret_cast<uint64_t>(msg));
+#endif
+}
+
+
+void Assembler::tge(Register rs, Register rt, uint16_t code) {
+ DCHECK(is_uint10(code));
+ Instr instr = SPECIAL | TGE | rs.code() << kRsShift
+ | rt.code() << kRtShift | code << 6;
+ emit(instr);
+}
+
+
+void Assembler::tgeu(Register rs, Register rt, uint16_t code) {
+ DCHECK(is_uint10(code));
+ Instr instr = SPECIAL | TGEU | rs.code() << kRsShift
+ | rt.code() << kRtShift | code << 6;
+ emit(instr);
+}
+
+
+void Assembler::tlt(Register rs, Register rt, uint16_t code) {
+ DCHECK(is_uint10(code));
+ Instr instr =
+ SPECIAL | TLT | rs.code() << kRsShift | rt.code() << kRtShift | code << 6;
+ emit(instr);
+}
+
+
+void Assembler::tltu(Register rs, Register rt, uint16_t code) {
+ DCHECK(is_uint10(code));
+ Instr instr =
+ SPECIAL | TLTU | rs.code() << kRsShift
+ | rt.code() << kRtShift | code << 6;
+ emit(instr);
+}
+
+
+void Assembler::teq(Register rs, Register rt, uint16_t code) {
+ DCHECK(is_uint10(code));
+ Instr instr =
+ SPECIAL | TEQ | rs.code() << kRsShift | rt.code() << kRtShift | code << 6;
+ emit(instr);
+}
+
+
+void Assembler::tne(Register rs, Register rt, uint16_t code) {
+ DCHECK(is_uint10(code));
+ Instr instr =
+ SPECIAL | TNE | rs.code() << kRsShift | rt.code() << kRtShift | code << 6;
+ emit(instr);
+}
+
+
+// Move from HI/LO register.
+
+void Assembler::mfhi(Register rd) {
+ GenInstrRegister(SPECIAL, zero_reg, zero_reg, rd, 0, MFHI);
+}
+
+
+void Assembler::mflo(Register rd) {
+ GenInstrRegister(SPECIAL, zero_reg, zero_reg, rd, 0, MFLO);
+}
+
+
+// Set on less than instructions.
+void Assembler::slt(Register rd, Register rs, Register rt) {
+ GenInstrRegister(SPECIAL, rs, rt, rd, 0, SLT);
+}
+
+
+void Assembler::sltu(Register rd, Register rs, Register rt) {
+ GenInstrRegister(SPECIAL, rs, rt, rd, 0, SLTU);
+}
+
+
+void Assembler::slti(Register rt, Register rs, int32_t j) {
+ GenInstrImmediate(SLTI, rs, rt, j);
+}
+
+
+void Assembler::sltiu(Register rt, Register rs, int32_t j) {
+ GenInstrImmediate(SLTIU, rs, rt, j);
+}
+
+
+// Conditional move.
+void Assembler::movz(Register rd, Register rs, Register rt) {
+ GenInstrRegister(SPECIAL, rs, rt, rd, 0, MOVZ);
+}
+
+
+void Assembler::movn(Register rd, Register rs, Register rt) {
+ GenInstrRegister(SPECIAL, rs, rt, rd, 0, MOVN);
+}
+
+
+void Assembler::movt(Register rd, Register rs, uint16_t cc) {
+ Register rt;
+ rt.code_ = (cc & 0x0007) << 2 | 1;
+ GenInstrRegister(SPECIAL, rs, rt, rd, 0, MOVCI);
+}
+
+
+void Assembler::movf(Register rd, Register rs, uint16_t cc) {
+ Register rt;
+ rt.code_ = (cc & 0x0007) << 2 | 0;
+ GenInstrRegister(SPECIAL, rs, rt, rd, 0, MOVCI);
+}
+
+
+void Assembler::sel(SecondaryField fmt, FPURegister fd,
+ FPURegister ft, FPURegister fs, uint8_t sel) {
+ DCHECK(kArchVariant == kMips64r6);
+ DCHECK((fmt == D) || (fmt == S));
+
+ Instr instr = COP1 | fmt << kRsShift | ft.code() << kFtShift |
+ fs.code() << kFsShift | fd.code() << kFdShift | SEL;
+ emit(instr);
+}
+
+
+// GPR.
+void Assembler::seleqz(Register rs, Register rt, Register rd) {
+ DCHECK(kArchVariant == kMips64r6);
+ GenInstrRegister(SPECIAL, rs, rt, rd, 0, SELEQZ_S);
+}
+
+
+// FPR.
+void Assembler::seleqz(SecondaryField fmt, FPURegister fd,
+ FPURegister ft, FPURegister fs) {
+ DCHECK(kArchVariant == kMips64r6);
+ DCHECK((fmt == D) || (fmt == S));
+
+ Instr instr = COP1 | fmt << kRsShift | ft.code() << kFtShift |
+ fs.code() << kFsShift | fd.code() << kFdShift | SELEQZ_C;
+ emit(instr);
+}
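+
+
+// An illustrative aside, not part of the original patch: the trap emitters
+// above all share one SPECIAL-group layout -- rs at kRsShift, rt at
+// kRtShift, a 10-bit trap code at bit 6, and the function field in the low
+// six bits. A minimal sketch of that packing, assuming the standard MIPS
+// field positions (kRsShift == 21, kRtShift == 16) and a hypothetical
+// helper name:
+//
+// static uint32_t EncodeTrap(uint32_t funct, uint32_t rs_code,
+// uint32_t rt_code, uint32_t trap_code) {
+// // The SPECIAL opcode is all zero bits, so only these fields contribute.
+// return (rs_code << 21) | (rt_code << 16) | (trap_code << 6) | funct;
+// }
+
+
+// GPR.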
+void Assembler::selnez(Register rs, Register rt, Register rd) {
+ DCHECK(kArchVariant == kMips64r6);
+ GenInstrRegister(SPECIAL, rs, rt, rd, 0, SELNEZ_S);
+}
+
+
+// FPR.
+void Assembler::selnez(SecondaryField fmt, FPURegister fd,
+ FPURegister ft, FPURegister fs) {
+ DCHECK(kArchVariant == kMips64r6);
+ DCHECK((fmt == D) || (fmt == S));
+
+ Instr instr = COP1 | fmt << kRsShift | ft.code() << kFtShift |
+ fs.code() << kFsShift | fd.code() << kFdShift | SELNEZ_C;
+ emit(instr);
+}
+
+
+// Bit twiddling.
+void Assembler::clz(Register rd, Register rs) {
+ if (kArchVariant != kMips64r6) {
+ // Clz instr requires same GPR number in 'rd' and 'rt' fields.
+ GenInstrRegister(SPECIAL2, rs, rd, rd, 0, CLZ);
+ } else {
+ GenInstrRegister(SPECIAL, rs, zero_reg, rd, 1, CLZ_R6);
+ }
+}
+
+
+void Assembler::ins_(Register rt, Register rs, uint16_t pos, uint16_t size) {
+ // Should be called via MacroAssembler::Ins.
+ // Ins instr has 'rt' field as dest, and two uint5: msb, lsb.
+ DCHECK((kArchVariant == kMips64r2) || (kArchVariant == kMips64r6));
+ GenInstrRegister(SPECIAL3, rs, rt, pos + size - 1, pos, INS);
+}
+
+
+void Assembler::ext_(Register rt, Register rs, uint16_t pos, uint16_t size) {
+ // Should be called via MacroAssembler::Ext.
+ // Ext instr has 'rt' field as dest, and two uint5: msb, lsb.
+ DCHECK(kArchVariant == kMips64r2 || kArchVariant == kMips64r6);
+ GenInstrRegister(SPECIAL3, rs, rt, size - 1, pos, EXT);
+}
+
+
+void Assembler::pref(int32_t hint, const MemOperand& rs) {
+ DCHECK(is_uint5(hint) && is_uint16(rs.offset_));
+ Instr instr = PREF | (rs.rm().code() << kRsShift) | (hint << kRtShift)
+ | (rs.offset_);
+ emit(instr);
+}
+
+
+// --------Coprocessor-instructions----------------
+
+// Load, store, move.
+void Assembler::lwc1(FPURegister fd, const MemOperand& src) {
+ GenInstrImmediate(LWC1, src.rm(), fd, src.offset_);
+}
+
+
+void Assembler::ldc1(FPURegister fd, const MemOperand& src) {
+ GenInstrImmediate(LDC1, src.rm(), fd, src.offset_);
+}
+
+
+void Assembler::swc1(FPURegister fd, const MemOperand& src) {
+ GenInstrImmediate(SWC1, src.rm(), fd, src.offset_);
+}
+
+
+void Assembler::sdc1(FPURegister fd, const MemOperand& src) {
+ GenInstrImmediate(SDC1, src.rm(), fd, src.offset_);
+}
+
+
+void Assembler::mtc1(Register rt, FPURegister fs) {
+ GenInstrRegister(COP1, MTC1, rt, fs, f0);
+}
+
+
+void Assembler::mthc1(Register rt, FPURegister fs) {
+ GenInstrRegister(COP1, MTHC1, rt, fs, f0);
+}
+
+
+void Assembler::dmtc1(Register rt, FPURegister fs) {
+ GenInstrRegister(COP1, DMTC1, rt, fs, f0);
+}
+
+
+void Assembler::mfc1(Register rt, FPURegister fs) {
+ GenInstrRegister(COP1, MFC1, rt, fs, f0);
+}
+
+
+void Assembler::mfhc1(Register rt, FPURegister fs) {
+ GenInstrRegister(COP1, MFHC1, rt, fs, f0);
+}
+
+
+void Assembler::dmfc1(Register rt, FPURegister fs) {
+ GenInstrRegister(COP1, DMFC1, rt, fs, f0);
+}
+
+
+void Assembler::ctc1(Register rt, FPUControlRegister fs) {
+ GenInstrRegister(COP1, CTC1, rt, fs);
+}
+
+
+void Assembler::cfc1(Register rt, FPUControlRegister fs) {
+ GenInstrRegister(COP1, CFC1, rt, fs);
+}
+
+
+void Assembler::DoubleAsTwoUInt32(double d, uint32_t* lo, uint32_t* hi) {
+ uint64_t i;
+ memcpy(&i, &d, 8);
+
+ *lo = i & 0xffffffff;
+ *hi = i >> 32;
+}
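+
+
+// A worked example for DoubleAsTwoUInt32() above (an illustrative note, not
+// part of the original patch): 1.0 has the IEEE-754 bit pattern
+// 0x3FF0000000000000, so the split yields *hi == 0x3FF00000 and
+// *lo == 0x00000000. The memcpy is the well-defined way to reinterpret the
+// bits without breaking strict-aliasing rules.
+
+
+// Arithmetic.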
+ +void Assembler::add_d(FPURegister fd, FPURegister fs, FPURegister ft) { + GenInstrRegister(COP1, D, ft, fs, fd, ADD_D); +} + + +void Assembler::sub_d(FPURegister fd, FPURegister fs, FPURegister ft) { + GenInstrRegister(COP1, D, ft, fs, fd, SUB_D); +} + + +void Assembler::mul_d(FPURegister fd, FPURegister fs, FPURegister ft) { + GenInstrRegister(COP1, D, ft, fs, fd, MUL_D); +} + + +void Assembler::madd_d(FPURegister fd, FPURegister fr, FPURegister fs, + FPURegister ft) { + GenInstrRegister(COP1X, fr, ft, fs, fd, MADD_D); +} + + +void Assembler::div_d(FPURegister fd, FPURegister fs, FPURegister ft) { + GenInstrRegister(COP1, D, ft, fs, fd, DIV_D); +} + + +void Assembler::abs_d(FPURegister fd, FPURegister fs) { + GenInstrRegister(COP1, D, f0, fs, fd, ABS_D); +} + + +void Assembler::mov_d(FPURegister fd, FPURegister fs) { + GenInstrRegister(COP1, D, f0, fs, fd, MOV_D); +} + + +void Assembler::neg_d(FPURegister fd, FPURegister fs) { + GenInstrRegister(COP1, D, f0, fs, fd, NEG_D); +} + + +void Assembler::sqrt_d(FPURegister fd, FPURegister fs) { + GenInstrRegister(COP1, D, f0, fs, fd, SQRT_D); +} + + +// Conversions. + +void Assembler::cvt_w_s(FPURegister fd, FPURegister fs) { + GenInstrRegister(COP1, S, f0, fs, fd, CVT_W_S); +} + + +void Assembler::cvt_w_d(FPURegister fd, FPURegister fs) { + GenInstrRegister(COP1, D, f0, fs, fd, CVT_W_D); +} + + +void Assembler::trunc_w_s(FPURegister fd, FPURegister fs) { + GenInstrRegister(COP1, S, f0, fs, fd, TRUNC_W_S); +} + + +void Assembler::trunc_w_d(FPURegister fd, FPURegister fs) { + GenInstrRegister(COP1, D, f0, fs, fd, TRUNC_W_D); +} + + +void Assembler::round_w_s(FPURegister fd, FPURegister fs) { + GenInstrRegister(COP1, S, f0, fs, fd, ROUND_W_S); +} + + +void Assembler::round_w_d(FPURegister fd, FPURegister fs) { + GenInstrRegister(COP1, D, f0, fs, fd, ROUND_W_D); +} + + +void Assembler::floor_w_s(FPURegister fd, FPURegister fs) { + GenInstrRegister(COP1, S, f0, fs, fd, FLOOR_W_S); +} + + +void Assembler::floor_w_d(FPURegister fd, FPURegister fs) { + GenInstrRegister(COP1, D, f0, fs, fd, FLOOR_W_D); +} + + +void Assembler::ceil_w_s(FPURegister fd, FPURegister fs) { + GenInstrRegister(COP1, S, f0, fs, fd, CEIL_W_S); +} + + +void Assembler::ceil_w_d(FPURegister fd, FPURegister fs) { + GenInstrRegister(COP1, D, f0, fs, fd, CEIL_W_D); +} + + +void Assembler::cvt_l_s(FPURegister fd, FPURegister fs) { + DCHECK(kArchVariant == kMips64r2); + GenInstrRegister(COP1, S, f0, fs, fd, CVT_L_S); +} + + +void Assembler::cvt_l_d(FPURegister fd, FPURegister fs) { + DCHECK(kArchVariant == kMips64r2); + GenInstrRegister(COP1, D, f0, fs, fd, CVT_L_D); +} + + +void Assembler::trunc_l_s(FPURegister fd, FPURegister fs) { + DCHECK(kArchVariant == kMips64r2); + GenInstrRegister(COP1, S, f0, fs, fd, TRUNC_L_S); +} + + +void Assembler::trunc_l_d(FPURegister fd, FPURegister fs) { + DCHECK(kArchVariant == kMips64r2); + GenInstrRegister(COP1, D, f0, fs, fd, TRUNC_L_D); +} + + +void Assembler::round_l_s(FPURegister fd, FPURegister fs) { + GenInstrRegister(COP1, S, f0, fs, fd, ROUND_L_S); +} + + +void Assembler::round_l_d(FPURegister fd, FPURegister fs) { + GenInstrRegister(COP1, D, f0, fs, fd, ROUND_L_D); +} + + +void Assembler::floor_l_s(FPURegister fd, FPURegister fs) { + GenInstrRegister(COP1, S, f0, fs, fd, FLOOR_L_S); +} + + +void Assembler::floor_l_d(FPURegister fd, FPURegister fs) { + GenInstrRegister(COP1, D, f0, fs, fd, FLOOR_L_D); +} + + +void Assembler::ceil_l_s(FPURegister fd, FPURegister fs) { + GenInstrRegister(COP1, S, f0, fs, fd, CEIL_L_S); +} + + +void 
Assembler::ceil_l_d(FPURegister fd, FPURegister fs) { + GenInstrRegister(COP1, D, f0, fs, fd, CEIL_L_D); +} + + +void Assembler::min(SecondaryField fmt, FPURegister fd, FPURegister ft, + FPURegister fs) { + DCHECK(kArchVariant == kMips64r6); + DCHECK((fmt == D) || (fmt == S)); + GenInstrRegister(COP1, fmt, ft, fs, fd, MIN); +} + + +void Assembler::mina(SecondaryField fmt, FPURegister fd, FPURegister ft, + FPURegister fs) { + DCHECK(kArchVariant == kMips64r6); + DCHECK((fmt == D) || (fmt == S)); + GenInstrRegister(COP1, fmt, ft, fs, fd, MINA); +} + + +void Assembler::max(SecondaryField fmt, FPURegister fd, FPURegister ft, + FPURegister fs) { + DCHECK(kArchVariant == kMips64r6); + DCHECK((fmt == D) || (fmt == S)); + GenInstrRegister(COP1, fmt, ft, fs, fd, MAX); +} + + +void Assembler::maxa(SecondaryField fmt, FPURegister fd, FPURegister ft, + FPURegister fs) { + DCHECK(kArchVariant == kMips64r6); + DCHECK((fmt == D) || (fmt == S)); + GenInstrRegister(COP1, fmt, ft, fs, fd, MAXA); +} + + +void Assembler::cvt_s_w(FPURegister fd, FPURegister fs) { + GenInstrRegister(COP1, W, f0, fs, fd, CVT_S_W); +} + + +void Assembler::cvt_s_l(FPURegister fd, FPURegister fs) { + DCHECK(kArchVariant == kMips64r2); + GenInstrRegister(COP1, L, f0, fs, fd, CVT_S_L); +} + + +void Assembler::cvt_s_d(FPURegister fd, FPURegister fs) { + GenInstrRegister(COP1, D, f0, fs, fd, CVT_S_D); +} + + +void Assembler::cvt_d_w(FPURegister fd, FPURegister fs) { + GenInstrRegister(COP1, W, f0, fs, fd, CVT_D_W); +} + + +void Assembler::cvt_d_l(FPURegister fd, FPURegister fs) { + DCHECK(kArchVariant == kMips64r2); + GenInstrRegister(COP1, L, f0, fs, fd, CVT_D_L); +} + + +void Assembler::cvt_d_s(FPURegister fd, FPURegister fs) { + GenInstrRegister(COP1, S, f0, fs, fd, CVT_D_S); +} + + +// Conditions for >= MIPSr6. +void Assembler::cmp(FPUCondition cond, SecondaryField fmt, + FPURegister fd, FPURegister fs, FPURegister ft) { + DCHECK(kArchVariant == kMips64r6); + DCHECK((fmt & ~(31 << kRsShift)) == 0); + Instr instr = COP1 | fmt | ft.code() << kFtShift | + fs.code() << kFsShift | fd.code() << kFdShift | (0 << 5) | cond; + emit(instr); +} + + +void Assembler::bc1eqz(int16_t offset, FPURegister ft) { + DCHECK(kArchVariant == kMips64r6); + Instr instr = COP1 | BC1EQZ | ft.code() << kFtShift | (offset & kImm16Mask); + emit(instr); +} + + +void Assembler::bc1nez(int16_t offset, FPURegister ft) { + DCHECK(kArchVariant == kMips64r6); + Instr instr = COP1 | BC1NEZ | ft.code() << kFtShift | (offset & kImm16Mask); + emit(instr); +} + + +// Conditions for < MIPSr6. +void Assembler::c(FPUCondition cond, SecondaryField fmt, + FPURegister fs, FPURegister ft, uint16_t cc) { + DCHECK(kArchVariant != kMips64r6); + DCHECK(is_uint3(cc)); + DCHECK((fmt & ~(31 << kRsShift)) == 0); + Instr instr = COP1 | fmt | ft.code() << kFtShift | fs.code() << kFsShift + | cc << 8 | 3 << 4 | cond; + emit(instr); +} + + +void Assembler::fcmp(FPURegister src1, const double src2, + FPUCondition cond) { + DCHECK(src2 == 0.0); + mtc1(zero_reg, f14); + cvt_d_w(f14, f14); + c(cond, D, src1, f14, 0); +} + + +void Assembler::bc1f(int16_t offset, uint16_t cc) { + DCHECK(is_uint3(cc)); + Instr instr = COP1 | BC1 | cc << 18 | 0 << 16 | (offset & kImm16Mask); + emit(instr); +} + + +void Assembler::bc1t(int16_t offset, uint16_t cc) { + DCHECK(is_uint3(cc)); + Instr instr = COP1 | BC1 | cc << 18 | 1 << 16 | (offset & kImm16Mask); + emit(instr); +} + + +// Debugging. 
+void Assembler::RecordJSReturn() { + positions_recorder()->WriteRecordedPositions(); + CheckBuffer(); + RecordRelocInfo(RelocInfo::JS_RETURN); +} + + +void Assembler::RecordDebugBreakSlot() { + positions_recorder()->WriteRecordedPositions(); + CheckBuffer(); + RecordRelocInfo(RelocInfo::DEBUG_BREAK_SLOT); +} + + +void Assembler::RecordComment(const char* msg) { + if (FLAG_code_comments) { + CheckBuffer(); + RecordRelocInfo(RelocInfo::COMMENT, reinterpret_cast<intptr_t>(msg)); + } +} + + +int Assembler::RelocateInternalReference(byte* pc, intptr_t pc_delta) { + Instr instr = instr_at(pc); + DCHECK(IsJ(instr) || IsLui(instr)); + if (IsLui(instr)) { + Instr instr_lui = instr_at(pc + 0 * Assembler::kInstrSize); + Instr instr_ori = instr_at(pc + 1 * Assembler::kInstrSize); + Instr instr_ori2 = instr_at(pc + 3 * Assembler::kInstrSize); + DCHECK(IsOri(instr_ori)); + DCHECK(IsOri(instr_ori2)); + // TODO(plind): symbolic names for the shifts. + int64_t imm = (instr_lui & static_cast<int64_t>(kImm16Mask)) << 48; + imm |= (instr_ori & static_cast<int64_t>(kImm16Mask)) << 32; + imm |= (instr_ori2 & static_cast<int64_t>(kImm16Mask)) << 16; + // Sign extend address. + imm >>= 16; + + if (imm == kEndOfJumpChain) { + return 0; // Number of instructions patched. + } + imm += pc_delta; + DCHECK((imm & 3) == 0); + + instr_lui &= ~kImm16Mask; + instr_ori &= ~kImm16Mask; + instr_ori2 &= ~kImm16Mask; + + instr_at_put(pc + 0 * Assembler::kInstrSize, + instr_lui | ((imm >> 32) & kImm16Mask)); + instr_at_put(pc + 1 * Assembler::kInstrSize, + instr_ori | (imm >> 16 & kImm16Mask)); + instr_at_put(pc + 3 * Assembler::kInstrSize, + instr_ori2 | (imm & kImm16Mask)); + return 4; // Number of instructions patched. + } else { + uint32_t imm28 = (instr & static_cast<int32_t>(kImm26Mask)) << 2; + if (static_cast<int32_t>(imm28) == kEndOfJumpChain) { + return 0; // Number of instructions patched. + } + + imm28 += pc_delta; + imm28 &= kImm28Mask; + DCHECK((imm28 & 3) == 0); + + instr &= ~kImm26Mask; + uint32_t imm26 = imm28 >> 2; + DCHECK(is_uint26(imm26)); + + instr_at_put(pc, instr | (imm26 & kImm26Mask)); + return 1; // Number of instructions patched. + } +} + + +void Assembler::GrowBuffer() { + if (!own_buffer_) FATAL("external code buffer is too small"); + + // Compute new buffer size. + CodeDesc desc; // The new buffer. + if (buffer_size_ < 1 * MB) { + desc.buffer_size = 2*buffer_size_; + } else { + desc.buffer_size = buffer_size_ + 1*MB; + } + CHECK_GT(desc.buffer_size, 0); // No overflow. + + // Set up new buffer. + desc.buffer = NewArray<byte>(desc.buffer_size); + + desc.instr_size = pc_offset(); + desc.reloc_size = (buffer_ + buffer_size_) - reloc_info_writer.pos(); + + // Copy the data. + intptr_t pc_delta = desc.buffer - buffer_; + intptr_t rc_delta = (desc.buffer + desc.buffer_size) - + (buffer_ + buffer_size_); + MemMove(desc.buffer, buffer_, desc.instr_size); + MemMove(reloc_info_writer.pos() + rc_delta, + reloc_info_writer.pos(), desc.reloc_size); + + // Switch buffers. + DeleteArray(buffer_); + buffer_ = desc.buffer; + buffer_size_ = desc.buffer_size; + pc_ += pc_delta; + reloc_info_writer.Reposition(reloc_info_writer.pos() + rc_delta, + reloc_info_writer.last_pc() + pc_delta); + + // Relocate runtime entries. 
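+ // Internal references embed absolute positions in the instruction stream,
+ // so each one must be re-based by pc_delta now that the buffer has moved.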
+ for (RelocIterator it(desc); !it.done(); it.next()) { + RelocInfo::Mode rmode = it.rinfo()->rmode(); + if (rmode == RelocInfo::INTERNAL_REFERENCE) { + byte* p = reinterpret_cast<byte*>(it.rinfo()->pc()); + RelocateInternalReference(p, pc_delta); + } + } + + DCHECK(!overflow()); +} + + +void Assembler::db(uint8_t data) { + CheckBuffer(); + *reinterpret_cast<uint8_t*>(pc_) = data; + pc_ += sizeof(uint8_t); +} + + +void Assembler::dd(uint32_t data) { + CheckBuffer(); + *reinterpret_cast<uint32_t*>(pc_) = data; + pc_ += sizeof(uint32_t); +} + + +void Assembler::emit_code_stub_address(Code* stub) { + CheckBuffer(); + *reinterpret_cast<uint64_t*>(pc_) = + reinterpret_cast<uint64_t>(stub->instruction_start()); + pc_ += sizeof(uint64_t); +} + + +void Assembler::RecordRelocInfo(RelocInfo::Mode rmode, intptr_t data) { + // We do not try to reuse pool constants. + RelocInfo rinfo(pc_, rmode, data, NULL); + if (rmode >= RelocInfo::JS_RETURN && rmode <= RelocInfo::DEBUG_BREAK_SLOT) { + // Adjust code for new modes. + DCHECK(RelocInfo::IsDebugBreakSlot(rmode) + || RelocInfo::IsJSReturn(rmode) + || RelocInfo::IsComment(rmode) + || RelocInfo::IsPosition(rmode)); + // These modes do not need an entry in the constant pool. + } + if (!RelocInfo::IsNone(rinfo.rmode())) { + // Don't record external references unless the heap will be serialized. + if (rmode == RelocInfo::EXTERNAL_REFERENCE && + !serializer_enabled() && !emit_debug_code()) { + return; + } + DCHECK(buffer_space() >= kMaxRelocSize); // Too late to grow buffer here. + if (rmode == RelocInfo::CODE_TARGET_WITH_ID) { + RelocInfo reloc_info_with_ast_id(pc_, + rmode, + RecordedAstId().ToInt(), + NULL); + ClearRecordedAstId(); + reloc_info_writer.Write(&reloc_info_with_ast_id); + } else { + reloc_info_writer.Write(&rinfo); + } + } +} + + +void Assembler::BlockTrampolinePoolFor(int instructions) { + BlockTrampolinePoolBefore(pc_offset() + instructions * kInstrSize); +} + + +void Assembler::CheckTrampolinePool() { + // Some small sequences of instructions must not be broken up by the + // insertion of a trampoline pool; such sequences are protected by setting + // either trampoline_pool_blocked_nesting_ or no_trampoline_pool_before_, + // which are both checked here. Also, recursive calls to CheckTrampolinePool + // are blocked by trampoline_pool_blocked_nesting_. + if ((trampoline_pool_blocked_nesting_ > 0) || + (pc_offset() < no_trampoline_pool_before_)) { + // Emission is currently blocked; make sure we try again as soon as + // possible. + if (trampoline_pool_blocked_nesting_ > 0) { + next_buffer_check_ = pc_offset() + kInstrSize; + } else { + next_buffer_check_ = no_trampoline_pool_before_; + } + return; + } + + DCHECK(!trampoline_emitted_); + DCHECK(unbound_labels_count_ >= 0); + if (unbound_labels_count_ > 0) { + // First we emit jump (2 instructions), then we emit trampoline pool. + { BlockTrampolinePoolScope block_trampoline_pool(this); + Label after_pool; + b(&after_pool); + nop(); + + int pool_start = pc_offset(); + for (int i = 0; i < unbound_labels_count_; i++) { + uint64_t imm64; + imm64 = jump_address(&after_pool); + { BlockGrowBufferScope block_buf_growth(this); + // Buffer growth (and relocation) must be blocked for internal + // references until associated instructions are emitted and available + // to be patched. + RecordRelocInfo(RelocInfo::INTERNAL_REFERENCE); + // TODO(plind): Verify this, presume I cannot use macro-assembler + // here. 
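+ // The next four instructions materialize the 48-bit address of the label
+ // 'after_pool' in 'at': lui/ori build bits [47:16], dsll shifts them up
+ // by 16, and the final ori fills in bits [15:0].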
+ lui(at, (imm64 >> 32) & kImm16Mask);
+ ori(at, at, (imm64 >> 16) & kImm16Mask);
+ dsll(at, at, 16);
+ ori(at, at, imm64 & kImm16Mask);
+ }
+ jr(at);
+ nop();
+ }
+ bind(&after_pool);
+ trampoline_ = Trampoline(pool_start, unbound_labels_count_);
+
+ trampoline_emitted_ = true;
+ // As we are only going to emit the trampoline once, we need to prevent
+ // any further emission.
+ next_buffer_check_ = kMaxInt;
+ }
+ } else {
+ // Number of branches to unbound label at this point is zero, so we can
+ // move next buffer check to maximum.
+ next_buffer_check_ = pc_offset() +
+ kMaxBranchOffset - kTrampolineSlotsSize * 16;
+ }
+ return;
+}
+
+
+Address Assembler::target_address_at(Address pc) {
+ Instr instr0 = instr_at(pc);
+ Instr instr1 = instr_at(pc + 1 * kInstrSize);
+ Instr instr3 = instr_at(pc + 3 * kInstrSize);
+
+ // Interpret 4 instructions for address generated by li: See listing in
+ // Assembler::set_target_address_at() just below.
+ if ((GetOpcodeField(instr0) == LUI) && (GetOpcodeField(instr1) == ORI) &&
+ (GetOpcodeField(instr3) == ORI)) {
+ // Assemble the 48 bit value.
+ int64_t addr = static_cast<int64_t>(
+ ((uint64_t)(GetImmediate16(instr0)) << 32) |
+ ((uint64_t)(GetImmediate16(instr1)) << 16) |
+ ((uint64_t)(GetImmediate16(instr3))));
+
+ // Sign extend to get canonical address.
+ addr = (addr << 16) >> 16;
+ return reinterpret_cast<Address>(addr);
+ }
+ // We should never get here, force a bad address if we do.
+ UNREACHABLE();
+ return (Address)0x0;
+}
+
+
+// MIPS and ia32 use opposite encodings for qNaN and sNaN, such that an ia32
+// qNaN is a MIPS sNaN, and an ia32 sNaN is a MIPS qNaN. If running from a
+// heap snapshot generated on ia32, the resulting MIPS sNaN must be quieted.
+// OS::nan_value() returns a qNaN.
+void Assembler::QuietNaN(HeapObject* object) {
+ HeapNumber::cast(object)->set_value(base::OS::nan_value());
+}
+
+
+// On Mips64, a target address is stored in a 4-instruction sequence:
+// 0: lui(rd, (j.imm64_ >> 32) & kImm16Mask);
+// 1: ori(rd, rd, (j.imm64_ >> 16) & kImm16Mask);
+// 2: dsll(rd, rd, 16);
+// 3: ori(rd, rd, j.imm64_ & kImm16Mask);
+//
+// Patching the address must replace all the lui & ori instructions,
+// and flush the i-cache.
+//
+// There is an optimization below, which emits a nop when the address
+// fits in just 16 bits. This is unlikely to help, and should be benchmarked,
+// and possibly removed.
+void Assembler::set_target_address_at(Address pc,
+ Address target,
+ ICacheFlushMode icache_flush_mode) {
+// There is an optimization where only 4 instructions are used to load the
+// address in code on MIPS64 because only 48 bits of the address are
+// effectively used. It relies on the fact that the upper [63:48] bits are
+// not used for virtual address translation, and have to be set according to
+// the value of bit 47 in order to get a canonical address.
+ Instr instr1 = instr_at(pc + kInstrSize);
+ uint32_t rt_code = GetRt(instr1);
+ uint32_t* p = reinterpret_cast<uint32_t*>(pc);
+ uint64_t itarget = reinterpret_cast<uint64_t>(target);
+
+#ifdef DEBUG
+ // Check we have the result from a li macro-instruction.
+ Instr instr0 = instr_at(pc);
+ Instr instr3 = instr_at(pc + kInstrSize * 3);
+ CHECK((GetOpcodeField(instr0) == LUI && GetOpcodeField(instr1) == ORI &&
+ GetOpcodeField(instr3) == ORI));
+#endif
+
+ // Must use 4 instructions to ensure patchable code.
+ // lui rt, upper-16.
+ // ori rt, rt, middle-16.
+ // dsll rt, rt, 16.
+ // ori rt, rt, lower-16.
+ *p = LUI | (rt_code << kRtShift) | ((itarget >> 32) & kImm16Mask);
+ *(p + 1) = ORI | (rt_code << kRtShift) | (rt_code << kRsShift)
+ | ((itarget >> 16) & kImm16Mask);
+ *(p + 3) = ORI | (rt_code << kRsShift) | (rt_code << kRtShift)
+ | (itarget & kImm16Mask);
+
+ if (icache_flush_mode != SKIP_ICACHE_FLUSH) {
+ CpuFeatures::FlushICache(pc, 4 * Assembler::kInstrSize);
+ }
+}
+
+
+void Assembler::JumpLabelToJumpRegister(Address pc) {
+ // Address pc points to lui/ori instructions.
+ // Jump to label may follow at pc + 6 * kInstrSize.
+ uint32_t* p = reinterpret_cast<uint32_t*>(pc);
+#ifdef DEBUG
+ Instr instr1 = instr_at(pc);
+#endif
+ Instr instr2 = instr_at(pc + 1 * kInstrSize);
+ Instr instr3 = instr_at(pc + 6 * kInstrSize);
+ bool patched = false;
+
+ if (IsJal(instr3)) {
+ DCHECK(GetOpcodeField(instr1) == LUI);
+ DCHECK(GetOpcodeField(instr2) == ORI);
+
+ uint32_t rs_field = GetRt(instr2) << kRsShift;
+ uint32_t rd_field = ra.code() << kRdShift; // Return-address (ra) reg.
+ *(p+6) = SPECIAL | rs_field | rd_field | JALR;
+ patched = true;
+ } else if (IsJ(instr3)) {
+ DCHECK(GetOpcodeField(instr1) == LUI);
+ DCHECK(GetOpcodeField(instr2) == ORI);
+
+ uint32_t rs_field = GetRt(instr2) << kRsShift;
+ *(p+6) = SPECIAL | rs_field | JR;
+ patched = true;
+ }
+
+ if (patched) {
+ CpuFeatures::FlushICache(pc+6, sizeof(int32_t));
+ }
+}
+
+
+Handle<ConstantPoolArray> Assembler::NewConstantPool(Isolate* isolate) {
+ // No out-of-line constant pool support.
+ DCHECK(!FLAG_enable_ool_constant_pool);
+ return isolate->factory()->empty_constant_pool_array();
+}
+
+
+void Assembler::PopulateConstantPool(ConstantPoolArray* constant_pool) {
+ // No out-of-line constant pool support.
+ DCHECK(!FLAG_enable_ool_constant_pool);
+ return;
+}
+
+
+} } // namespace v8::internal
+
+#endif // V8_TARGET_ARCH_MIPS64
diff --git a/deps/v8/src/mips64/assembler-mips64.h b/deps/v8/src/mips64/assembler-mips64.h
new file mode 100644
index 00000000000..5c754f49505
--- /dev/null
+++ b/deps/v8/src/mips64/assembler-mips64.h
@@ -0,0 +1,1416 @@
+// Copyright (c) 1994-2006 Sun Microsystems Inc.
+// All Rights Reserved.
+//
+// Redistribution and use in source and binary forms, with or without
+// modification, are permitted provided that the following conditions are
+// met:
+//
+// - Redistributions of source code must retain the above copyright notice,
+// this list of conditions and the following disclaimer.
+//
+// - Redistribution in binary form must reproduce the above copyright
+// notice, this list of conditions and the following disclaimer in the
+// documentation and/or other materials provided with the distribution.
+//
+// - Neither the name of Sun Microsystems or the names of contributors may
+// be used to endorse or promote products derived from this software without
+// specific prior written permission.
+//
+// THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS
+// IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO,
+// THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
+// PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR
+// CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL,
+// EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO,
+// PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR
+// PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF
+// LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING
+// NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS
+// SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+
+// The original source code covered by the above license has been
+// modified significantly by Google Inc.
+// Copyright 2012 the V8 project authors. All rights reserved.
+
+
+#ifndef V8_MIPS_ASSEMBLER_MIPS_H_
+#define V8_MIPS_ASSEMBLER_MIPS_H_
+
+#include <stdio.h>
+#include "src/assembler.h"
+#include "src/mips64/constants-mips64.h"
+#include "src/serialize.h"
+
+namespace v8 {
+namespace internal {
+
+// CPU Registers.
+//
+// 1) We would prefer to use an enum, but enum values are assignment-
+// compatible with int, which has caused code-generation bugs.
+//
+// 2) We would prefer to use a class instead of a struct but we don't like
+// the register initialization to depend on the particular initialization
+// order (which appears to be different on OS X, Linux, and Windows for the
+// installed versions of C++ we tried). Using a struct permits C-style
+// "initialization". Also, the Register objects cannot be const as this
+// forces initialization stubs in MSVC, making us dependent on initialization
+// order.
+//
+// 3) By not using an enum, we are possibly preventing the compiler from
+// doing certain constant folds, which may significantly reduce the
+// code generated for some assembly instructions (because they boil down
+// to a few constants). If this is a problem, we could change the code
+// such that we use an enum in optimized mode, and the struct in debug
+// mode. This way we get the compile-time error checking in debug mode
+// and best performance in optimized code.
+
+
+// -----------------------------------------------------------------------------
+// Implementation of Register and FPURegister.
+
+// Core register.
+struct Register {
+ static const int kNumRegisters = v8::internal::kNumRegisters;
+ static const int kMaxNumAllocatableRegisters = 14; // v0 through t2 and cp.
+ static const int kSizeInBytes = 8;
+ static const int kCpRegister = 23; // cp (s7) is the 23rd register.
+
+ inline static int NumAllocatableRegisters();
+
+ static int ToAllocationIndex(Register reg) {
+ DCHECK((reg.code() - 2) < (kMaxNumAllocatableRegisters - 1) ||
+ reg.is(from_code(kCpRegister)));
+ return reg.is(from_code(kCpRegister)) ?
+ kMaxNumAllocatableRegisters - 1 : // Return last index for 'cp'.
+ reg.code() - 2; // zero_reg and 'at' are skipped.
+ }
+
+ static Register FromAllocationIndex(int index) {
+ DCHECK(index >= 0 && index < kMaxNumAllocatableRegisters);
+ return index == kMaxNumAllocatableRegisters - 1 ?
+ from_code(kCpRegister) : // Last index is always the 'cp' register.
+ from_code(index + 2); // zero_reg and 'at' are skipped.
+ } + + static const char* AllocationIndexToString(int index) { + DCHECK(index >= 0 && index < kMaxNumAllocatableRegisters); + const char* const names[] = { + "v0", + "v1", + "a0", + "a1", + "a2", + "a3", + "a4", + "a5", + "a6", + "a7", + "t0", + "t1", + "t2", + "s7", + }; + return names[index]; + } + + static Register from_code(int code) { + Register r = { code }; + return r; + } + + bool is_valid() const { return 0 <= code_ && code_ < kNumRegisters; } + bool is(Register reg) const { return code_ == reg.code_; } + int code() const { + DCHECK(is_valid()); + return code_; + } + int bit() const { + DCHECK(is_valid()); + return 1 << code_; + } + + // Unfortunately we can't make this private in a struct. + int code_; +}; + +#define REGISTER(N, C) \ + const int kRegister_ ## N ## _Code = C; \ + const Register N = { C } + +REGISTER(no_reg, -1); +// Always zero. +REGISTER(zero_reg, 0); +// at: Reserved for synthetic instructions. +REGISTER(at, 1); +// v0, v1: Used when returning multiple values from subroutines. +REGISTER(v0, 2); +REGISTER(v1, 3); +// a0 - a4: Used to pass non-FP parameters. +REGISTER(a0, 4); +REGISTER(a1, 5); +REGISTER(a2, 6); +REGISTER(a3, 7); +// a4 - a7 t0 - t3: Can be used without reservation, act as temporary registers +// and are allowed to be destroyed by subroutines. +REGISTER(a4, 8); +REGISTER(a5, 9); +REGISTER(a6, 10); +REGISTER(a7, 11); +REGISTER(t0, 12); +REGISTER(t1, 13); +REGISTER(t2, 14); +REGISTER(t3, 15); +// s0 - s7: Subroutine register variables. Subroutines that write to these +// registers must restore their values before exiting so that the caller can +// expect the values to be preserved. +REGISTER(s0, 16); +REGISTER(s1, 17); +REGISTER(s2, 18); +REGISTER(s3, 19); +REGISTER(s4, 20); +REGISTER(s5, 21); +REGISTER(s6, 22); +REGISTER(s7, 23); +REGISTER(t8, 24); +REGISTER(t9, 25); +// k0, k1: Reserved for system calls and interrupt handlers. +REGISTER(k0, 26); +REGISTER(k1, 27); +// gp: Reserved. +REGISTER(gp, 28); +// sp: Stack pointer. +REGISTER(sp, 29); +// fp: Frame pointer. +REGISTER(fp, 30); +// ra: Return address pointer. +REGISTER(ra, 31); + +#undef REGISTER + + +int ToNumber(Register reg); + +Register ToRegister(int num); + +// Coprocessor register. +struct FPURegister { + static const int kMaxNumRegisters = v8::internal::kNumFPURegisters; + + // TODO(plind): Warning, inconsistent numbering here. kNumFPURegisters refers + // to number of 32-bit FPU regs, but kNumAllocatableRegisters refers to + // number of Double regs (64-bit regs, or FPU-reg-pairs). + + // A few double registers are reserved: one as a scratch register and one to + // hold 0.0. + // f28: 0.0 + // f30: scratch register. + static const int kNumReservedRegisters = 2; + static const int kMaxNumAllocatableRegisters = kMaxNumRegisters / 2 - + kNumReservedRegisters; + + inline static int NumRegisters(); + inline static int NumAllocatableRegisters(); + inline static int ToAllocationIndex(FPURegister reg); + static const char* AllocationIndexToString(int index); + + static FPURegister FromAllocationIndex(int index) { + DCHECK(index >= 0 && index < kMaxNumAllocatableRegisters); + return from_code(index * 2); + } + + static FPURegister from_code(int code) { + FPURegister r = { code }; + return r; + } + + bool is_valid() const { return 0 <= code_ && code_ < kMaxNumRegisters ; } + bool is(FPURegister creg) const { return code_ == creg.code_; } + FPURegister low() const { + // TODO(plind): Create DCHECK for FR=0 mode. This usage suspect for FR=1. 
+ // Find low reg of a Double-reg pair, which is the reg itself.
+ DCHECK(code_ % 2 == 0); // Specified Double reg must be even.
+ FPURegister reg;
+ reg.code_ = code_;
+ DCHECK(reg.is_valid());
+ return reg;
+ }
+ FPURegister high() const {
+ // TODO(plind): Create DCHECK for FR=0 mode. This usage is illegal in FR=1.
+ // Find high reg of a Double-reg pair, which is reg + 1.
+ DCHECK(code_ % 2 == 0); // Specified Double reg must be even.
+ FPURegister reg;
+ reg.code_ = code_ + 1;
+ DCHECK(reg.is_valid());
+ return reg;
+ }
+
+ int code() const {
+ DCHECK(is_valid());
+ return code_;
+ }
+ int bit() const {
+ DCHECK(is_valid());
+ return 1 << code_;
+ }
+ void setcode(int f) {
+ code_ = f;
+ DCHECK(is_valid());
+ }
+ // Unfortunately we can't make this private in a struct.
+ int code_;
+};
+
+// V8 now supports the O32 ABI, and the FPU Registers are organized as 32
+// 32-bit registers, f0 through f31. When used as 'double' they are used
+// in pairs, starting with the even numbered register. So a double operation
+// on f0 really uses f0 and f1.
+// (Modern mips hardware also supports 32 64-bit registers, via setting
+// (privileged) Status Register FR bit to 1. This is used by the N32 ABI,
+// but it is not in common use. Someday we will want to support this in v8.)
+
+// For O32 ABI, Floats and Doubles refer to same set of 32 32-bit registers.
+typedef FPURegister DoubleRegister;
+typedef FPURegister FloatRegister;
+
+const FPURegister no_freg = { -1 };
+
+const FPURegister f0 = { 0 }; // Return value in hard float mode.
+const FPURegister f1 = { 1 };
+const FPURegister f2 = { 2 };
+const FPURegister f3 = { 3 };
+const FPURegister f4 = { 4 };
+const FPURegister f5 = { 5 };
+const FPURegister f6 = { 6 };
+const FPURegister f7 = { 7 };
+const FPURegister f8 = { 8 };
+const FPURegister f9 = { 9 };
+const FPURegister f10 = { 10 };
+const FPURegister f11 = { 11 };
+const FPURegister f12 = { 12 }; // Arg 0 in hard float mode.
+const FPURegister f13 = { 13 };
+const FPURegister f14 = { 14 }; // Arg 1 in hard float mode.
+const FPURegister f15 = { 15 };
+const FPURegister f16 = { 16 };
+const FPURegister f17 = { 17 };
+const FPURegister f18 = { 18 };
+const FPURegister f19 = { 19 };
+const FPURegister f20 = { 20 };
+const FPURegister f21 = { 21 };
+const FPURegister f22 = { 22 };
+const FPURegister f23 = { 23 };
+const FPURegister f24 = { 24 };
+const FPURegister f25 = { 25 };
+const FPURegister f26 = { 26 };
+const FPURegister f27 = { 27 };
+const FPURegister f28 = { 28 };
+const FPURegister f29 = { 29 };
+const FPURegister f30 = { 30 };
+const FPURegister f31 = { 31 };
+
+// Register aliases.
+// cp is assumed to be a callee saved register.
+// Defined using #define instead of "static const Register&" because Clang
+// complains otherwise when a compilation unit that includes this header
+// doesn't use the variables.
+#define kRootRegister s6
+#define cp s7
+#define kLithiumScratchReg s3
+#define kLithiumScratchReg2 s4
+#define kLithiumScratchDouble f30
+#define kDoubleRegZero f28
+
+// FPU (coprocessor 1) control registers.
+// Currently only FCSR (#31) is implemented.
+struct FPUControlRegister {
+ bool is_valid() const { return code_ == kFCSRRegister; }
+ bool is(FPUControlRegister creg) const { return code_ == creg.code_; }
+ int code() const {
+ DCHECK(is_valid());
+ return code_;
+ }
+ int bit() const {
+ DCHECK(is_valid());
+ return 1 << code_;
+ }
+ void setcode(int f) {
+ code_ = f;
+ DCHECK(is_valid());
+ }
+ // Unfortunately we can't make this private in a struct.
+ int code_; +}; + +const FPUControlRegister no_fpucreg = { kInvalidFPUControlRegister }; +const FPUControlRegister FCSR = { kFCSRRegister }; + + +// ----------------------------------------------------------------------------- +// Machine instruction Operands. +const int kSmiShift = kSmiTagSize + kSmiShiftSize; +const uint64_t kSmiShiftMask = (1UL << kSmiShift) - 1; +// Class Operand represents a shifter operand in data processing instructions. +class Operand BASE_EMBEDDED { + public: + // Immediate. + INLINE(explicit Operand(int64_t immediate, + RelocInfo::Mode rmode = RelocInfo::NONE64)); + INLINE(explicit Operand(const ExternalReference& f)); + INLINE(explicit Operand(const char* s)); + INLINE(explicit Operand(Object** opp)); + INLINE(explicit Operand(Context** cpp)); + explicit Operand(Handle<Object> handle); + INLINE(explicit Operand(Smi* value)); + + // Register. + INLINE(explicit Operand(Register rm)); + + // Return true if this is a register operand. + INLINE(bool is_reg() const); + + inline int64_t immediate() const { + DCHECK(!is_reg()); + return imm64_; + } + + Register rm() const { return rm_; } + + private: + Register rm_; + int64_t imm64_; // Valid if rm_ == no_reg. + RelocInfo::Mode rmode_; + + friend class Assembler; + friend class MacroAssembler; +}; + + +// On MIPS we have only one adressing mode with base_reg + offset. +// Class MemOperand represents a memory operand in load and store instructions. +class MemOperand : public Operand { + public: + // Immediate value attached to offset. + enum OffsetAddend { + offset_minus_one = -1, + offset_zero = 0 + }; + + explicit MemOperand(Register rn, int64_t offset = 0); + explicit MemOperand(Register rn, int64_t unit, int64_t multiplier, + OffsetAddend offset_addend = offset_zero); + int32_t offset() const { return offset_; } + + bool OffsetIsInt16Encodable() const { + return is_int16(offset_); + } + + private: + int32_t offset_; + + friend class Assembler; +}; + + +class Assembler : public AssemblerBase { + public: + // Create an assembler. Instructions and relocation information are emitted + // into a buffer, with the instructions starting from the beginning and the + // relocation information starting from the end of the buffer. See CodeDesc + // for a detailed comment on the layout (globals.h). + // + // If the provided buffer is NULL, the assembler allocates and grows its own + // buffer, and buffer_size determines the initial buffer size. The buffer is + // owned by the assembler and deallocated upon destruction of the assembler. + // + // If the provided buffer is not NULL, the assembler uses the provided buffer + // for code generation and assumes its size to be buffer_size. If the buffer + // is too small, a fatal error occurs. No deallocation of the buffer is done + // upon destruction of the assembler. + Assembler(Isolate* isolate, void* buffer, int buffer_size); + virtual ~Assembler() { } + + // GetCode emits any pending (non-emitted) code and fills the descriptor + // desc. GetCode() is idempotent; it returns the same result if no other + // Assembler functions are invoked in between GetCode() calls. + void GetCode(CodeDesc* desc); + + // Label operations & relative jumps (PPUM Appendix D). + // + // Takes a branch opcode (cc) and a label (L) and generates + // either a backward branch or a forward branch and links it + // to the label fixup chain. 
Usage: + // + // Label L; // unbound label + // j(cc, &L); // forward branch to unbound label + // bind(&L); // bind label to the current pc + // j(cc, &L); // backward branch to bound label + // bind(&L); // illegal: a label may be bound only once + // + // Note: The same Label can be used for forward and backward branches + // but it may be bound only once. + void bind(Label* L); // Binds an unbound label L to current code position. + // Determines if Label is bound and near enough so that branch instruction + // can be used to reach it, instead of jump instruction. + bool is_near(Label* L); + + // Returns the branch offset to the given label from the current code + // position. Links the label to the current position if it is still unbound. + // Manages the jump elimination optimization if the second parameter is true. + int32_t branch_offset(Label* L, bool jump_elimination_allowed); + int32_t branch_offset_compact(Label* L, bool jump_elimination_allowed); + int32_t branch_offset21(Label* L, bool jump_elimination_allowed); + int32_t branch_offset21_compact(Label* L, bool jump_elimination_allowed); + int32_t shifted_branch_offset(Label* L, bool jump_elimination_allowed) { + int32_t o = branch_offset(L, jump_elimination_allowed); + DCHECK((o & 3) == 0); // Assert the offset is aligned. + return o >> 2; + } + int32_t shifted_branch_offset_compact(Label* L, + bool jump_elimination_allowed) { + int32_t o = branch_offset_compact(L, jump_elimination_allowed); + DCHECK((o & 3) == 0); // Assert the offset is aligned. + return o >> 2; + } + uint64_t jump_address(Label* L); + + // Puts a labels target address at the given position. + // The high 8 bits are set to zero. + void label_at_put(Label* L, int at_offset); + + // Read/Modify the code target address in the branch/call instruction at pc. + static Address target_address_at(Address pc); + static void set_target_address_at(Address pc, + Address target, + ICacheFlushMode icache_flush_mode = + FLUSH_ICACHE_IF_NEEDED); + // On MIPS there is no Constant Pool so we skip that parameter. + INLINE(static Address target_address_at(Address pc, + ConstantPoolArray* constant_pool)) { + return target_address_at(pc); + } + INLINE(static void set_target_address_at(Address pc, + ConstantPoolArray* constant_pool, + Address target, + ICacheFlushMode icache_flush_mode = + FLUSH_ICACHE_IF_NEEDED)) { + set_target_address_at(pc, target, icache_flush_mode); + } + INLINE(static Address target_address_at(Address pc, Code* code)) { + ConstantPoolArray* constant_pool = code ? code->constant_pool() : NULL; + return target_address_at(pc, constant_pool); + } + INLINE(static void set_target_address_at(Address pc, + Code* code, + Address target, + ICacheFlushMode icache_flush_mode = + FLUSH_ICACHE_IF_NEEDED)) { + ConstantPoolArray* constant_pool = code ? code->constant_pool() : NULL; + set_target_address_at(pc, constant_pool, target, icache_flush_mode); + } + + // Return the code target address at a call site from the return address + // of that call in the instruction stream. + inline static Address target_address_from_return_address(Address pc); + + // Return the code target address of the patch debug break slot + inline static Address break_address_from_return_address(Address pc); + + static void JumpLabelToJumpRegister(Address pc); + + static void QuietNaN(HeapObject* nan); + + // This sets the branch destination (which gets loaded at the call address). + // This is for calls and branches within generated code. 
The serializer + // has already deserialized the lui/ori instructions etc. + inline static void deserialization_set_special_target_at( + Address instruction_payload, Code* code, Address target) { + set_target_address_at( + instruction_payload - kInstructionsFor64BitConstant * kInstrSize, + code, + target); + } + + // Size of an instruction. + static const int kInstrSize = sizeof(Instr); + + // Difference between address of current opcode and target address offset. + static const int kBranchPCOffset = 4; + + // Here we are patching the address in the LUI/ORI instruction pair. + // These values are used in the serialization process and must be zero for + // MIPS platform, as Code, Embedded Object or External-reference pointers + // are split across two consecutive instructions and don't exist separately + // in the code, so the serializer should not step forwards in memory after + // a target is resolved and written. + static const int kSpecialTargetSize = 0; + + // Number of consecutive instructions used to store 32bit/64bit constant. + // Before jump-optimizations, this constant was used in + // RelocInfo::target_address_address() function to tell serializer address of + // the instruction that follows LUI/ORI instruction pair. Now, with new jump + // optimization, where jump-through-register instruction that usually + // follows LUI/ORI pair is substituted with J/JAL, this constant equals + // to 3 instructions (LUI+ORI+J/JAL/JR/JALR). + static const int kInstructionsFor32BitConstant = 3; + static const int kInstructionsFor64BitConstant = 5; + + // Distance between the instruction referring to the address of the call + // target and the return address. + static const int kCallTargetAddressOffset = 6 * kInstrSize; + + // Distance between start of patched return sequence and the emitted address + // to jump to. + static const int kPatchReturnSequenceAddressOffset = 0; + + // Distance between start of patched debug break slot and the emitted address + // to jump to. + static const int kPatchDebugBreakSlotAddressOffset = 0 * kInstrSize; + + // Difference between address of current opcode and value read from pc + // register. + static const int kPcLoadDelta = 4; + + static const int kPatchDebugBreakSlotReturnOffset = 6 * kInstrSize; + + // Number of instructions used for the JS return sequence. The constant is + // used by the debugger to patch the JS return sequence. + static const int kJSReturnSequenceInstructions = 7; + static const int kDebugBreakSlotInstructions = 6; + static const int kDebugBreakSlotLength = + kDebugBreakSlotInstructions * kInstrSize; + + + // --------------------------------------------------------------------------- + // Code generation. + + // Insert the smallest number of nop instructions + // possible to align the pc offset to a multiple + // of m. m must be a power of 2 (>= 4). + void Align(int m); + // Aligns code to something that's optimal for a jump target for the platform. + void CodeTargetAlign(); + + // Different nop operations are used by the code generator to detect certain + // states of the generated code. + enum NopMarkerTypes { + NON_MARKING_NOP = 0, + DEBUG_BREAK_NOP, + // IC markers. + PROPERTY_ACCESS_INLINED, + PROPERTY_ACCESS_INLINED_CONTEXT, + PROPERTY_ACCESS_INLINED_CONTEXT_DONT_DELETE, + // Helper values. + LAST_CODE_MARKER, + FIRST_IC_MARKER = PROPERTY_ACCESS_INLINED, + // Code aging + CODE_AGE_MARKER_NOP = 6, + CODE_AGE_SEQUENCE_NOP + }; + + // Type == 0 is the default non-marking nop. For mips this is a + // sll(zero_reg, zero_reg, 0). 
We use rt_reg == at for non-zero + // marking, to avoid conflict with ssnop and ehb instructions. + void nop(unsigned int type = 0) { + DCHECK(type < 32); + Register nop_rt_reg = (type == 0) ? zero_reg : at; + sll(zero_reg, nop_rt_reg, type, true); + } + + + // --------Branch-and-jump-instructions---------- + // We don't use likely variant of instructions. + void b(int16_t offset); + void b(Label* L) { b(branch_offset(L, false)>>2); } + void bal(int16_t offset); + void bal(Label* L) { bal(branch_offset(L, false)>>2); } + + void beq(Register rs, Register rt, int16_t offset); + void beq(Register rs, Register rt, Label* L) { + beq(rs, rt, branch_offset(L, false) >> 2); + } + void bgez(Register rs, int16_t offset); + void bgezc(Register rt, int16_t offset); + void bgezc(Register rt, Label* L) { + bgezc(rt, branch_offset_compact(L, false)>>2); + } + void bgeuc(Register rs, Register rt, int16_t offset); + void bgeuc(Register rs, Register rt, Label* L) { + bgeuc(rs, rt, branch_offset_compact(L, false)>>2); + } + void bgec(Register rs, Register rt, int16_t offset); + void bgec(Register rs, Register rt, Label* L) { + bgec(rs, rt, branch_offset_compact(L, false)>>2); + } + void bgezal(Register rs, int16_t offset); + void bgezalc(Register rt, int16_t offset); + void bgezalc(Register rt, Label* L) { + bgezalc(rt, branch_offset_compact(L, false)>>2); + } + void bgezall(Register rs, int16_t offset); + void bgezall(Register rs, Label* L) { + bgezall(rs, branch_offset(L, false)>>2); + } + void bgtz(Register rs, int16_t offset); + void bgtzc(Register rt, int16_t offset); + void bgtzc(Register rt, Label* L) { + bgtzc(rt, branch_offset_compact(L, false)>>2); + } + void blez(Register rs, int16_t offset); + void blezc(Register rt, int16_t offset); + void blezc(Register rt, Label* L) { + blezc(rt, branch_offset_compact(L, false)>>2); + } + void bltz(Register rs, int16_t offset); + void bltzc(Register rt, int16_t offset); + void bltzc(Register rt, Label* L) { + bltzc(rt, branch_offset_compact(L, false)>>2); + } + void bltuc(Register rs, Register rt, int16_t offset); + void bltuc(Register rs, Register rt, Label* L) { + bltuc(rs, rt, branch_offset_compact(L, false)>>2); + } + void bltc(Register rs, Register rt, int16_t offset); + void bltc(Register rs, Register rt, Label* L) { + bltc(rs, rt, branch_offset_compact(L, false)>>2); + } + + void bltzal(Register rs, int16_t offset); + void blezalc(Register rt, int16_t offset); + void blezalc(Register rt, Label* L) { + blezalc(rt, branch_offset_compact(L, false)>>2); + } + void bltzalc(Register rt, int16_t offset); + void bltzalc(Register rt, Label* L) { + bltzalc(rt, branch_offset_compact(L, false)>>2); + } + void bgtzalc(Register rt, int16_t offset); + void bgtzalc(Register rt, Label* L) { + bgtzalc(rt, branch_offset_compact(L, false)>>2); + } + void beqzalc(Register rt, int16_t offset); + void beqzalc(Register rt, Label* L) { + beqzalc(rt, branch_offset_compact(L, false)>>2); + } + void beqc(Register rs, Register rt, int16_t offset); + void beqc(Register rs, Register rt, Label* L) { + beqc(rs, rt, branch_offset_compact(L, false)>>2); + } + void beqzc(Register rs, int32_t offset); + void beqzc(Register rs, Label* L) { + beqzc(rs, branch_offset21_compact(L, false)>>2); + } + void bnezalc(Register rt, int16_t offset); + void bnezalc(Register rt, Label* L) { + bnezalc(rt, branch_offset_compact(L, false)>>2); + } + void bnec(Register rs, Register rt, int16_t offset); + void bnec(Register rs, Register rt, Label* L) { + bnec(rs, rt, branch_offset_compact(L, false)>>2); + } + 
void bnezc(Register rt, int32_t offset); + void bnezc(Register rt, Label* L) { + bnezc(rt, branch_offset21_compact(L, false)>>2); + } + void bne(Register rs, Register rt, int16_t offset); + void bne(Register rs, Register rt, Label* L) { + bne(rs, rt, branch_offset(L, false)>>2); + } + void bovc(Register rs, Register rt, int16_t offset); + void bovc(Register rs, Register rt, Label* L) { + bovc(rs, rt, branch_offset_compact(L, false)>>2); + } + void bnvc(Register rs, Register rt, int16_t offset); + void bnvc(Register rs, Register rt, Label* L) { + bnvc(rs, rt, branch_offset_compact(L, false)>>2); + } + + // Never use the int16_t b(l)cond version with a branch offset + // instead of using the Label* version. + + // Jump targets must be in the current 256 MB-aligned region. i.e. 28 bits. + void j(int64_t target); + void jal(int64_t target); + void jalr(Register rs, Register rd = ra); + void jr(Register target); + void j_or_jr(int64_t target, Register rs); + void jal_or_jalr(int64_t target, Register rs); + + + // -------Data-processing-instructions--------- + + // Arithmetic. + void addu(Register rd, Register rs, Register rt); + void subu(Register rd, Register rs, Register rt); + + void div(Register rs, Register rt); + void divu(Register rs, Register rt); + void ddiv(Register rs, Register rt); + void ddivu(Register rs, Register rt); + void div(Register rd, Register rs, Register rt); + void divu(Register rd, Register rs, Register rt); + void ddiv(Register rd, Register rs, Register rt); + void ddivu(Register rd, Register rs, Register rt); + void mod(Register rd, Register rs, Register rt); + void modu(Register rd, Register rs, Register rt); + void dmod(Register rd, Register rs, Register rt); + void dmodu(Register rd, Register rs, Register rt); + + void mul(Register rd, Register rs, Register rt); + void muh(Register rd, Register rs, Register rt); + void mulu(Register rd, Register rs, Register rt); + void muhu(Register rd, Register rs, Register rt); + void mult(Register rs, Register rt); + void multu(Register rs, Register rt); + void dmul(Register rd, Register rs, Register rt); + void dmuh(Register rd, Register rs, Register rt); + void dmulu(Register rd, Register rs, Register rt); + void dmuhu(Register rd, Register rs, Register rt); + void daddu(Register rd, Register rs, Register rt); + void dsubu(Register rd, Register rs, Register rt); + void dmult(Register rs, Register rt); + void dmultu(Register rs, Register rt); + + void addiu(Register rd, Register rs, int32_t j); + void daddiu(Register rd, Register rs, int32_t j); + + // Logical. + void and_(Register rd, Register rs, Register rt); + void or_(Register rd, Register rs, Register rt); + void xor_(Register rd, Register rs, Register rt); + void nor(Register rd, Register rs, Register rt); + + void andi(Register rd, Register rs, int32_t j); + void ori(Register rd, Register rs, int32_t j); + void xori(Register rd, Register rs, int32_t j); + void lui(Register rd, int32_t j); + void aui(Register rs, Register rt, int32_t j); + void daui(Register rs, Register rt, int32_t j); + void dahi(Register rs, int32_t j); + void dati(Register rs, int32_t j); + + // Shifts. + // Please note: sll(zero_reg, zero_reg, x) instructions are reserved as nop + // and may cause problems in normal code. coming_from_nop makes sure this + // doesn't happen. 
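+ // (nop() above relies on this: it always uses zero_reg as the destination
+ // and passes coming_from_nop == true.)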
+ void sll(Register rd, Register rt, uint16_t sa, bool coming_from_nop = false); + void sllv(Register rd, Register rt, Register rs); + void srl(Register rd, Register rt, uint16_t sa); + void srlv(Register rd, Register rt, Register rs); + void sra(Register rt, Register rd, uint16_t sa); + void srav(Register rt, Register rd, Register rs); + void rotr(Register rd, Register rt, uint16_t sa); + void rotrv(Register rd, Register rt, Register rs); + void dsll(Register rd, Register rt, uint16_t sa); + void dsllv(Register rd, Register rt, Register rs); + void dsrl(Register rd, Register rt, uint16_t sa); + void dsrlv(Register rd, Register rt, Register rs); + void drotr(Register rd, Register rt, uint16_t sa); + void drotrv(Register rd, Register rt, Register rs); + void dsra(Register rt, Register rd, uint16_t sa); + void dsrav(Register rd, Register rt, Register rs); + void dsll32(Register rt, Register rd, uint16_t sa); + void dsrl32(Register rt, Register rd, uint16_t sa); + void dsra32(Register rt, Register rd, uint16_t sa); + + + // ------------Memory-instructions------------- + + void lb(Register rd, const MemOperand& rs); + void lbu(Register rd, const MemOperand& rs); + void lh(Register rd, const MemOperand& rs); + void lhu(Register rd, const MemOperand& rs); + void lw(Register rd, const MemOperand& rs); + void lwu(Register rd, const MemOperand& rs); + void lwl(Register rd, const MemOperand& rs); + void lwr(Register rd, const MemOperand& rs); + void sb(Register rd, const MemOperand& rs); + void sh(Register rd, const MemOperand& rs); + void sw(Register rd, const MemOperand& rs); + void swl(Register rd, const MemOperand& rs); + void swr(Register rd, const MemOperand& rs); + void ldl(Register rd, const MemOperand& rs); + void ldr(Register rd, const MemOperand& rs); + void sdl(Register rd, const MemOperand& rs); + void sdr(Register rd, const MemOperand& rs); + void ld(Register rd, const MemOperand& rs); + void sd(Register rd, const MemOperand& rs); + + + // ----------------Prefetch-------------------- + + void pref(int32_t hint, const MemOperand& rs); + + + // -------------Misc-instructions-------------- + + // Break / Trap instructions. + void break_(uint32_t code, bool break_as_stop = false); + void stop(const char* msg, uint32_t code = kMaxStopCode); + void tge(Register rs, Register rt, uint16_t code); + void tgeu(Register rs, Register rt, uint16_t code); + void tlt(Register rs, Register rt, uint16_t code); + void tltu(Register rs, Register rt, uint16_t code); + void teq(Register rs, Register rt, uint16_t code); + void tne(Register rs, Register rt, uint16_t code); + + // Move from HI/LO register. + void mfhi(Register rd); + void mflo(Register rd); + + // Set on less than. + void slt(Register rd, Register rs, Register rt); + void sltu(Register rd, Register rs, Register rt); + void slti(Register rd, Register rs, int32_t j); + void sltiu(Register rd, Register rs, int32_t j); + + // Conditional move. + void movz(Register rd, Register rs, Register rt); + void movn(Register rd, Register rs, Register rt); + void movt(Register rd, Register rs, uint16_t cc = 0); + void movf(Register rd, Register rs, uint16_t cc = 0); + + void sel(SecondaryField fmt, FPURegister fd, FPURegister ft, + FPURegister fs, uint8_t sel); + void seleqz(Register rs, Register rt, Register rd); + void seleqz(SecondaryField fmt, FPURegister fd, FPURegister ft, + FPURegister fs); + void selnez(Register rs, Register rt, Register rd); + void selnez(SecondaryField fmt, FPURegister fd, FPURegister ft, + FPURegister fs); + + // Bit twiddling. 
+ void clz(Register rd, Register rs); + void ins_(Register rt, Register rs, uint16_t pos, uint16_t size); + void ext_(Register rt, Register rs, uint16_t pos, uint16_t size); + + // --------Coprocessor-instructions---------------- + + // Load, store, and move. + void lwc1(FPURegister fd, const MemOperand& src); + void ldc1(FPURegister fd, const MemOperand& src); + + void swc1(FPURegister fs, const MemOperand& dst); + void sdc1(FPURegister fs, const MemOperand& dst); + + void mtc1(Register rt, FPURegister fs); + void mthc1(Register rt, FPURegister fs); + void dmtc1(Register rt, FPURegister fs); + + void mfc1(Register rt, FPURegister fs); + void mfhc1(Register rt, FPURegister fs); + void dmfc1(Register rt, FPURegister fs); + + void ctc1(Register rt, FPUControlRegister fs); + void cfc1(Register rt, FPUControlRegister fs); + + // Arithmetic. + void add_d(FPURegister fd, FPURegister fs, FPURegister ft); + void sub_d(FPURegister fd, FPURegister fs, FPURegister ft); + void mul_d(FPURegister fd, FPURegister fs, FPURegister ft); + void madd_d(FPURegister fd, FPURegister fr, FPURegister fs, FPURegister ft); + void div_d(FPURegister fd, FPURegister fs, FPURegister ft); + void abs_d(FPURegister fd, FPURegister fs); + void mov_d(FPURegister fd, FPURegister fs); + void neg_d(FPURegister fd, FPURegister fs); + void sqrt_d(FPURegister fd, FPURegister fs); + + // Conversion. + void cvt_w_s(FPURegister fd, FPURegister fs); + void cvt_w_d(FPURegister fd, FPURegister fs); + void trunc_w_s(FPURegister fd, FPURegister fs); + void trunc_w_d(FPURegister fd, FPURegister fs); + void round_w_s(FPURegister fd, FPURegister fs); + void round_w_d(FPURegister fd, FPURegister fs); + void floor_w_s(FPURegister fd, FPURegister fs); + void floor_w_d(FPURegister fd, FPURegister fs); + void ceil_w_s(FPURegister fd, FPURegister fs); + void ceil_w_d(FPURegister fd, FPURegister fs); + + void cvt_l_s(FPURegister fd, FPURegister fs); + void cvt_l_d(FPURegister fd, FPURegister fs); + void trunc_l_s(FPURegister fd, FPURegister fs); + void trunc_l_d(FPURegister fd, FPURegister fs); + void round_l_s(FPURegister fd, FPURegister fs); + void round_l_d(FPURegister fd, FPURegister fs); + void floor_l_s(FPURegister fd, FPURegister fs); + void floor_l_d(FPURegister fd, FPURegister fs); + void ceil_l_s(FPURegister fd, FPURegister fs); + void ceil_l_d(FPURegister fd, FPURegister fs); + + void min(SecondaryField fmt, FPURegister fd, FPURegister ft, FPURegister fs); + void mina(SecondaryField fmt, FPURegister fd, FPURegister ft, FPURegister fs); + void max(SecondaryField fmt, FPURegister fd, FPURegister ft, FPURegister fs); + void maxa(SecondaryField fmt, FPURegister fd, FPURegister ft, FPURegister fs); + + void cvt_s_w(FPURegister fd, FPURegister fs); + void cvt_s_l(FPURegister fd, FPURegister fs); + void cvt_s_d(FPURegister fd, FPURegister fs); + + void cvt_d_w(FPURegister fd, FPURegister fs); + void cvt_d_l(FPURegister fd, FPURegister fs); + void cvt_d_s(FPURegister fd, FPURegister fs); + + // Conditions and branches for MIPSr6. + void cmp(FPUCondition cond, SecondaryField fmt, + FPURegister fd, FPURegister ft, FPURegister fs); + + void bc1eqz(int16_t offset, FPURegister ft); + void bc1eqz(Label* L, FPURegister ft) { + bc1eqz(branch_offset(L, false)>>2, ft); + } + void bc1nez(int16_t offset, FPURegister ft); + void bc1nez(Label* L, FPURegister ft) { + bc1nez(branch_offset(L, false)>>2, ft); + } + + // Conditions and branches for non MIPSr6. 
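+  // c() compares two FPU registers and records the result in condition
+  // code cc; bc1f/bc1t then branch if that condition code is false or true.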
+ void c(FPUCondition cond, SecondaryField fmt, + FPURegister ft, FPURegister fs, uint16_t cc = 0); + + void bc1f(int16_t offset, uint16_t cc = 0); + void bc1f(Label* L, uint16_t cc = 0) { + bc1f(branch_offset(L, false)>>2, cc); + } + void bc1t(int16_t offset, uint16_t cc = 0); + void bc1t(Label* L, uint16_t cc = 0) { + bc1t(branch_offset(L, false)>>2, cc); + } + void fcmp(FPURegister src1, const double src2, FPUCondition cond); + + // Check the code size generated from label to here. + int SizeOfCodeGeneratedSince(Label* label) { + return pc_offset() - label->pos(); + } + + // Check the number of instructions generated from label to here. + int InstructionsGeneratedSince(Label* label) { + return SizeOfCodeGeneratedSince(label) / kInstrSize; + } + + // Class for scoping postponing the trampoline pool generation. + class BlockTrampolinePoolScope { + public: + explicit BlockTrampolinePoolScope(Assembler* assem) : assem_(assem) { + assem_->StartBlockTrampolinePool(); + } + ~BlockTrampolinePoolScope() { + assem_->EndBlockTrampolinePool(); + } + + private: + Assembler* assem_; + + DISALLOW_IMPLICIT_CONSTRUCTORS(BlockTrampolinePoolScope); + }; + + // Class for postponing the assembly buffer growth. Typically used for + // sequences of instructions that must be emitted as a unit, before + // buffer growth (and relocation) can occur. + // This blocking scope is not nestable. + class BlockGrowBufferScope { + public: + explicit BlockGrowBufferScope(Assembler* assem) : assem_(assem) { + assem_->StartBlockGrowBuffer(); + } + ~BlockGrowBufferScope() { + assem_->EndBlockGrowBuffer(); + } + + private: + Assembler* assem_; + + DISALLOW_IMPLICIT_CONSTRUCTORS(BlockGrowBufferScope); + }; + + // Debugging. + + // Mark address of the ExitJSFrame code. + void RecordJSReturn(); + + // Mark address of a debug break slot. + void RecordDebugBreakSlot(); + + // Record the AST id of the CallIC being compiled, so that it can be placed + // in the relocation information. + void SetRecordedAstId(TypeFeedbackId ast_id) { + DCHECK(recorded_ast_id_.IsNone()); + recorded_ast_id_ = ast_id; + } + + TypeFeedbackId RecordedAstId() { + DCHECK(!recorded_ast_id_.IsNone()); + return recorded_ast_id_; + } + + void ClearRecordedAstId() { recorded_ast_id_ = TypeFeedbackId::None(); } + + // Record a comment relocation entry that can be used by a disassembler. + // Use --code-comments to enable. + void RecordComment(const char* msg); + + static int RelocateInternalReference(byte* pc, intptr_t pc_delta); + + // Writes a single byte or word of data in the code stream. Used for + // inline tables, e.g., jump-tables. + void db(uint8_t data); + void dd(uint32_t data); + + // Emits the address of the code stub's first instruction. + void emit_code_stub_address(Code* stub); + + PositionsRecorder* positions_recorder() { return &positions_recorder_; } + + // Postpone the generation of the trampoline pool for the specified number of + // instructions. + void BlockTrampolinePoolFor(int instructions); + + // Check if there is less than kGap bytes available in the buffer. + // If this is the case, we need to grow the buffer before emitting + // an instruction or relocation information. + inline bool overflow() const { return pc_ >= reloc_info_writer.pos() - kGap; } + + // Get the number of bytes available in the buffer. + inline int available_space() const { return reloc_info_writer.pos() - pc_; } + + // Read/patch instructions. 
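+  // The static overloads below operate on an absolute pc, while the
+  // instance overloads index into this assembler's own buffer.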
+ static Instr instr_at(byte* pc) { return *reinterpret_cast<Instr*>(pc); } + static void instr_at_put(byte* pc, Instr instr) { + *reinterpret_cast<Instr*>(pc) = instr; + } + Instr instr_at(int pos) { return *reinterpret_cast<Instr*>(buffer_ + pos); } + void instr_at_put(int pos, Instr instr) { + *reinterpret_cast<Instr*>(buffer_ + pos) = instr; + } + + // Check if an instruction is a branch of some kind. + static bool IsBranch(Instr instr); + static bool IsBeq(Instr instr); + static bool IsBne(Instr instr); + + static bool IsJump(Instr instr); + static bool IsJ(Instr instr); + static bool IsLui(Instr instr); + static bool IsOri(Instr instr); + + static bool IsJal(Instr instr); + static bool IsJr(Instr instr); + static bool IsJalr(Instr instr); + + static bool IsNop(Instr instr, unsigned int type); + static bool IsPop(Instr instr); + static bool IsPush(Instr instr); + static bool IsLwRegFpOffset(Instr instr); + static bool IsSwRegFpOffset(Instr instr); + static bool IsLwRegFpNegOffset(Instr instr); + static bool IsSwRegFpNegOffset(Instr instr); + + static Register GetRtReg(Instr instr); + static Register GetRsReg(Instr instr); + static Register GetRdReg(Instr instr); + + static uint32_t GetRt(Instr instr); + static uint32_t GetRtField(Instr instr); + static uint32_t GetRs(Instr instr); + static uint32_t GetRsField(Instr instr); + static uint32_t GetRd(Instr instr); + static uint32_t GetRdField(Instr instr); + static uint32_t GetSa(Instr instr); + static uint32_t GetSaField(Instr instr); + static uint32_t GetOpcodeField(Instr instr); + static uint32_t GetFunction(Instr instr); + static uint32_t GetFunctionField(Instr instr); + static uint32_t GetImmediate16(Instr instr); + static uint32_t GetLabelConst(Instr instr); + + static int32_t GetBranchOffset(Instr instr); + static bool IsLw(Instr instr); + static int16_t GetLwOffset(Instr instr); + static Instr SetLwOffset(Instr instr, int16_t offset); + + static bool IsSw(Instr instr); + static Instr SetSwOffset(Instr instr, int16_t offset); + static bool IsAddImmediate(Instr instr); + static Instr SetAddImmediateOffset(Instr instr, int16_t offset); + + static bool IsAndImmediate(Instr instr); + static bool IsEmittedConstant(Instr instr); + + void CheckTrampolinePool(); + + // Allocate a constant pool of the correct size for the generated code. + Handle<ConstantPoolArray> NewConstantPool(Isolate* isolate); + + // Generate the constant pool for the generated code. + void PopulateConstantPool(ConstantPoolArray* constant_pool); + + protected: + // Relocation for a type-recording IC has the AST id added to it. This + // member variable is a way to pass the information from the call site to + // the relocation info. + TypeFeedbackId recorded_ast_id_; + + int64_t buffer_space() const { return reloc_info_writer.pos() - pc_; } + + // Decode branch instruction at pos and return branch target pos. + int64_t target_at(int64_t pos); + + // Patch branch instruction at pos to branch to given branch target pos. + void target_at_put(int64_t pos, int64_t target_pos); + + // Say if we need to relocate with this mode. + bool MustUseReg(RelocInfo::Mode rmode); + + // Record reloc info for current pc_. + void RecordRelocInfo(RelocInfo::Mode rmode, intptr_t data = 0); + + // Block the emission of the trampoline pool before pc_offset. 
+ void BlockTrampolinePoolBefore(int pc_offset) { + if (no_trampoline_pool_before_ < pc_offset) + no_trampoline_pool_before_ = pc_offset; + } + + void StartBlockTrampolinePool() { + trampoline_pool_blocked_nesting_++; + } + + void EndBlockTrampolinePool() { + trampoline_pool_blocked_nesting_--; + } + + bool is_trampoline_pool_blocked() const { + return trampoline_pool_blocked_nesting_ > 0; + } + + bool has_exception() const { + return internal_trampoline_exception_; + } + + void DoubleAsTwoUInt32(double d, uint32_t* lo, uint32_t* hi); + + bool is_trampoline_emitted() const { + return trampoline_emitted_; + } + + // Temporarily block automatic assembly buffer growth. + void StartBlockGrowBuffer() { + DCHECK(!block_buffer_growth_); + block_buffer_growth_ = true; + } + + void EndBlockGrowBuffer() { + DCHECK(block_buffer_growth_); + block_buffer_growth_ = false; + } + + bool is_buffer_growth_blocked() const { + return block_buffer_growth_; + } + + private: + // Buffer size and constant pool distance are checked together at regular + // intervals of kBufferCheckInterval emitted bytes. + static const int kBufferCheckInterval = 1*KB/2; + + // Code generation. + // The relocation writer's position is at least kGap bytes below the end of + // the generated instructions. This is so that multi-instruction sequences do + // not have to check for overflow. The same is true for writes of large + // relocation info entries. + static const int kGap = 32; + + + // Repeated checking whether the trampoline pool should be emitted is rather + // expensive. By default we only check again once a number of instructions + // has been generated. + static const int kCheckConstIntervalInst = 32; + static const int kCheckConstInterval = kCheckConstIntervalInst * kInstrSize; + + int next_buffer_check_; // pc offset of next buffer check. + + // Emission of the trampoline pool may be blocked in some code sequences. + int trampoline_pool_blocked_nesting_; // Block emission if this is not zero. + int no_trampoline_pool_before_; // Block emission before this pc offset. + + // Keep track of the last emitted pool to guarantee a maximal distance. + int last_trampoline_pool_end_; // pc offset of the end of the last pool. + + // Automatic growth of the assembly buffer may be blocked for some sequences. + bool block_buffer_growth_; // Block growth when true. + + // Relocation information generation. + // Each relocation is encoded as a variable size value. + static const int kMaxRelocSize = RelocInfoWriter::kMaxSize; + RelocInfoWriter reloc_info_writer; + + // The bound position, before this we cannot do instruction elimination. + int last_bound_pos_; + + // Code emission. + inline void CheckBuffer(); + void GrowBuffer(); + inline void emit(Instr x); + inline void emit(uint64_t x); + inline void CheckTrampolinePoolQuick(); + + // Instruction generation. + // We have 3 different kind of encoding layout on MIPS. + // However due to many different types of objects encoded in the same fields + // we have quite a few aliases for each mode. + // Using the same structure to refer to Register and FPURegister would spare a + // few aliases, but mixing both does not look clean to me. + // Anyway we could surely implement this differently. 
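+  // (The three layouts are the standard MIPS R-type, I-type and J-type
+  // encodings, emitted by the GenInstrRegister, GenInstrImmediate and
+  // GenInstrJump helpers below.)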
+
+  void GenInstrRegister(Opcode opcode,
+                        Register rs,
+                        Register rt,
+                        Register rd,
+                        uint16_t sa = 0,
+                        SecondaryField func = NULLSF);
+
+  void GenInstrRegister(Opcode opcode,
+                        Register rs,
+                        Register rt,
+                        uint16_t msb,
+                        uint16_t lsb,
+                        SecondaryField func);
+
+  void GenInstrRegister(Opcode opcode,
+                        SecondaryField fmt,
+                        FPURegister ft,
+                        FPURegister fs,
+                        FPURegister fd,
+                        SecondaryField func = NULLSF);
+
+  void GenInstrRegister(Opcode opcode,
+                        FPURegister fr,
+                        FPURegister ft,
+                        FPURegister fs,
+                        FPURegister fd,
+                        SecondaryField func = NULLSF);
+
+  void GenInstrRegister(Opcode opcode,
+                        SecondaryField fmt,
+                        Register rt,
+                        FPURegister fs,
+                        FPURegister fd,
+                        SecondaryField func = NULLSF);
+
+  void GenInstrRegister(Opcode opcode,
+                        SecondaryField fmt,
+                        Register rt,
+                        FPUControlRegister fs,
+                        SecondaryField func = NULLSF);
+
+
+  void GenInstrImmediate(Opcode opcode,
+                         Register rs,
+                         Register rt,
+                         int32_t j);
+  void GenInstrImmediate(Opcode opcode,
+                         Register rs,
+                         SecondaryField SF,
+                         int32_t j);
+  void GenInstrImmediate(Opcode opcode,
+                         Register r1,
+                         FPURegister r2,
+                         int32_t j);
+
+
+  void GenInstrJump(Opcode opcode,
+                    uint32_t address);
+
+  // Helpers.
+  void LoadRegPlusOffsetToAt(const MemOperand& src);
+
+  // Labels.
+  void print(Label* L);
+  void bind_to(Label* L, int pos);
+  void next(Label* L);
+
+  // One trampoline consists of:
+  // - space for trampoline slots,
+  // - space for labels.
+  //
+  // Space for trampoline slots is equal to slot_count * kTrampolineSlotsSize.
+  // Space for trampoline slots precedes space for labels. Each label is of one
+  // instruction size, so total amount for labels is equal to
+  // label_count * kInstrSize.
+  class Trampoline {
+   public:
+    Trampoline() {
+      start_ = 0;
+      next_slot_ = 0;
+      free_slot_count_ = 0;
+      end_ = 0;
+    }
+    Trampoline(int start, int slot_count) {
+      start_ = start;
+      next_slot_ = start;
+      free_slot_count_ = slot_count;
+      end_ = start + slot_count * kTrampolineSlotsSize;
+    }
+    int start() {
+      return start_;
+    }
+    int end() {
+      return end_;
+    }
+    int take_slot() {
+      int trampoline_slot = kInvalidSlotPos;
+      if (free_slot_count_ <= 0) {
+        // We have run out of space on trampolines.
+        // Make sure we fail in debug mode, so we become aware of each case
+        // when this happens.
+        DCHECK(0);
+        // Internal exception will be caught.
+      } else {
+        trampoline_slot = next_slot_;
+        free_slot_count_--;
+        next_slot_ += kTrampolineSlotsSize;
+      }
+      return trampoline_slot;
+    }
+
+   private:
+    int start_;
+    int end_;
+    int next_slot_;
+    int free_slot_count_;
+  };
+
+  int32_t get_trampoline_entry(int32_t pos);
+  int unbound_labels_count_;
+  // If a trampoline is emitted, the generated code has become large. As this
+  // is already a slow case that can break code generation in the extreme
+  // case, we use this information to switch to a different mode of branch
+  // instruction generation, where we use jump instructions rather than
+  // regular branch instructions.
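+  // (A regular branch only reaches about +/-128 KB, see kMaxBranchOffset
+  // below, while a jump covers the whole 256 MB region.)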
+  bool trampoline_emitted_;
+  static const int kTrampolineSlotsSize = 6 * kInstrSize;
+  static const int kMaxBranchOffset = (1 << (18 - 1)) - 1;
+  static const int kInvalidSlotPos = -1;
+
+  Trampoline trampoline_;
+  bool internal_trampoline_exception_;
+
+  friend class RegExpMacroAssemblerMIPS;
+  friend class RelocInfo;
+  friend class CodePatcher;
+  friend class BlockTrampolinePoolScope;
+
+  PositionsRecorder positions_recorder_;
+  friend class PositionsRecorder;
+  friend class EnsureSpace;
+};
+
+
+class EnsureSpace BASE_EMBEDDED {
+ public:
+  explicit EnsureSpace(Assembler* assembler) {
+    assembler->CheckBuffer();
+  }
+};
+
+} }  // namespace v8::internal
+
+#endif  // V8_MIPS64_ASSEMBLER_MIPS64_H_
diff --git a/deps/v8/src/mips64/builtins-mips64.cc b/deps/v8/src/mips64/builtins-mips64.cc
new file mode 100644
index 00000000000..cbbcc054ff9
--- /dev/null
+++ b/deps/v8/src/mips64/builtins-mips64.cc
@@ -0,0 +1,1597 @@
+// Copyright 2012 the V8 project authors. All rights reserved.
+// Use of this source code is governed by a BSD-style license that can be
+// found in the LICENSE file.
+
+
+
+#include "src/v8.h"
+
+#if V8_TARGET_ARCH_MIPS64
+
+#include "src/codegen.h"
+#include "src/debug.h"
+#include "src/deoptimizer.h"
+#include "src/full-codegen.h"
+#include "src/runtime.h"
+#include "src/stub-cache.h"
+
+namespace v8 {
+namespace internal {
+
+
+#define __ ACCESS_MASM(masm)
+
+
+void Builtins::Generate_Adaptor(MacroAssembler* masm,
+                                CFunctionId id,
+                                BuiltinExtraArguments extra_args) {
+  // ----------- S t a t e -------------
+  //  -- a0                 : number of arguments excluding receiver
+  //  -- a1                 : called function (only guaranteed when
+  //  --                      extra_args requires it)
+  //  -- cp                 : context
+  //  -- sp[0]              : last argument
+  //  -- ...
+  //  -- sp[8 * (argc - 1)] : first argument
+  //  -- sp[8 * argc]       : receiver
+  // -----------------------------------
+
+  // Insert extra arguments.
+  int num_extra_args = 0;
+  if (extra_args == NEEDS_CALLED_FUNCTION) {
+    num_extra_args = 1;
+    __ push(a1);
+  } else {
+    DCHECK(extra_args == NO_EXTRA_ARGUMENTS);
+  }
+
+  // JumpToExternalReference expects s0 to contain the number of arguments
+  // including the receiver and the extra arguments.
+  __ Daddu(s0, a0, num_extra_args + 1);
+  __ dsll(s1, s0, kPointerSizeLog2);
+  __ Dsubu(s1, s1, kPointerSize);
+  __ JumpToExternalReference(ExternalReference(id, masm->isolate()));
+}
+
+
+// Load the built-in InternalArray function from the current context.
+static void GenerateLoadInternalArrayFunction(MacroAssembler* masm,
+                                              Register result) {
+  // Load the native context.
+
+  __ ld(result,
+        MemOperand(cp, Context::SlotOffset(Context::GLOBAL_OBJECT_INDEX)));
+  __ ld(result,
+        FieldMemOperand(result, GlobalObject::kNativeContextOffset));
+  // Load the InternalArray function from the native context.
+  __ ld(result,
+        MemOperand(result,
+                   Context::SlotOffset(
+                       Context::INTERNAL_ARRAY_FUNCTION_INDEX)));
+}
+
+
+// Load the built-in Array function from the current context.
+static void GenerateLoadArrayFunction(MacroAssembler* masm, Register result) {
+  // Load the native context.
+
+  __ ld(result,
+        MemOperand(cp, Context::SlotOffset(Context::GLOBAL_OBJECT_INDEX)));
+  __ ld(result,
+        FieldMemOperand(result, GlobalObject::kNativeContextOffset));
+  // Load the Array function from the native context.
+  __ ld(result,
+        MemOperand(result,
+                   Context::SlotOffset(Context::ARRAY_FUNCTION_INDEX)));
+}
+
+
+void Builtins::Generate_InternalArrayCode(MacroAssembler* masm) {
+  // ----------- S t a t e -------------
+  //  -- a0     : number of arguments
+  //  -- ra     : return address
+  //  -- sp[...]: constructor arguments
+  // -----------------------------------
+  Label generic_array_code, one_or_more_arguments, two_or_more_arguments;
+
+  // Get the InternalArray function.
+  GenerateLoadInternalArrayFunction(masm, a1);
+
+  if (FLAG_debug_code) {
+    // Initial map for the builtin InternalArray functions should be maps.
+    __ ld(a2, FieldMemOperand(a1, JSFunction::kPrototypeOrInitialMapOffset));
+    __ SmiTst(a2, a4);
+    __ Assert(ne, kUnexpectedInitialMapForInternalArrayFunction,
+              a4, Operand(zero_reg));
+    __ GetObjectType(a2, a3, a4);
+    __ Assert(eq, kUnexpectedInitialMapForInternalArrayFunction,
+              a4, Operand(MAP_TYPE));
+  }
+
+  // Run the native code for the InternalArray function called as a normal
+  // function.
+  // Tail call a stub.
+  InternalArrayConstructorStub stub(masm->isolate());
+  __ TailCallStub(&stub);
+}
+
+
+void Builtins::Generate_ArrayCode(MacroAssembler* masm) {
+  // ----------- S t a t e -------------
+  //  -- a0     : number of arguments
+  //  -- ra     : return address
+  //  -- sp[...]: constructor arguments
+  // -----------------------------------
+  Label generic_array_code;
+
+  // Get the Array function.
+  GenerateLoadArrayFunction(masm, a1);
+
+  if (FLAG_debug_code) {
+    // Initial map for the builtin Array functions should be maps.
+    __ ld(a2, FieldMemOperand(a1, JSFunction::kPrototypeOrInitialMapOffset));
+    __ SmiTst(a2, a4);
+    __ Assert(ne, kUnexpectedInitialMapForArrayFunction1,
+              a4, Operand(zero_reg));
+    __ GetObjectType(a2, a3, a4);
+    __ Assert(eq, kUnexpectedInitialMapForArrayFunction2,
+              a4, Operand(MAP_TYPE));
+  }
+
+  // Run the native code for the Array function called as a normal function.
+  // Tail call a stub.
+  __ LoadRoot(a2, Heap::kUndefinedValueRootIndex);
+  ArrayConstructorStub stub(masm->isolate());
+  __ TailCallStub(&stub);
+}
+
+
+void Builtins::Generate_StringConstructCode(MacroAssembler* masm) {
+  // ----------- S t a t e -------------
+  //  -- a0                     : number of arguments
+  //  -- a1                     : constructor function
+  //  -- ra                     : return address
+  //  -- sp[(argc - n - 1) * 8] : arg[n] (zero based)
+  //  -- sp[argc * 8]           : receiver
+  // -----------------------------------
+  Counters* counters = masm->isolate()->counters();
+  __ IncrementCounter(counters->string_ctor_calls(), 1, a2, a3);
+
+  Register function = a1;
+  if (FLAG_debug_code) {
+    __ LoadGlobalFunction(Context::STRING_FUNCTION_INDEX, a2);
+    __ Assert(eq, kUnexpectedStringFunction, function, Operand(a2));
+  }
+
+  // Load the first argument into a0 and get rid of the rest.
+  Label no_arguments;
+  __ Branch(&no_arguments, eq, a0, Operand(zero_reg));
+  // First arg = sp[(argc - 1) * 8].
+  __ Dsubu(a0, a0, Operand(1));
+  __ dsll(a0, a0, kPointerSizeLog2);
+  __ Daddu(sp, a0, sp);
+  __ ld(a0, MemOperand(sp));
+  // sp now points to args[0]; drop args[0] + receiver.
+  __ Drop(2);
+
+  Register argument = a2;
+  Label not_cached, argument_is_string;
+  __ LookupNumberStringCache(a0,        // Input.
+                             argument,  // Result.
+                             a3,        // Scratch.
+                             a4,        // Scratch.
+                             a5,        // Scratch.
+                             &not_cached);
+  __ IncrementCounter(counters->string_ctor_cached_number(), 1, a3, a4);
+  __ bind(&argument_is_string);
+
+  // ----------- S t a t e -------------
+  //  -- a2     : argument converted to string
+  //  -- a1     : constructor function
+  //  -- ra     : return address
+  // -----------------------------------
+
+  Label gc_required;
+  __ Allocate(JSValue::kSize,
+              v0,  // Result.
+              a3,  // Scratch.
+              a4,  // Scratch.
+              &gc_required,
+              TAG_OBJECT);
+
+  // Initialising the String Object.
+  Register map = a3;
+  __ LoadGlobalFunctionInitialMap(function, map, a4);
+  if (FLAG_debug_code) {
+    __ lbu(a4, FieldMemOperand(map, Map::kInstanceSizeOffset));
+    __ Assert(eq, kUnexpectedStringWrapperInstanceSize,
+              a4, Operand(JSValue::kSize >> kPointerSizeLog2));
+    __ lbu(a4, FieldMemOperand(map, Map::kUnusedPropertyFieldsOffset));
+    __ Assert(eq, kUnexpectedUnusedPropertiesOfStringWrapper,
+              a4, Operand(zero_reg));
+  }
+  __ sd(map, FieldMemOperand(v0, HeapObject::kMapOffset));
+
+  __ LoadRoot(a3, Heap::kEmptyFixedArrayRootIndex);
+  __ sd(a3, FieldMemOperand(v0, JSObject::kPropertiesOffset));
+  __ sd(a3, FieldMemOperand(v0, JSObject::kElementsOffset));
+
+  __ sd(argument, FieldMemOperand(v0, JSValue::kValueOffset));
+
+  // Ensure the object is fully initialized.
+  STATIC_ASSERT(JSValue::kSize == 4 * kPointerSize);
+
+  __ Ret();
+
+  // The argument was not found in the number to string cache. Check
+  // if it's a string already before calling the conversion builtin.
+  Label convert_argument;
+  __ bind(&not_cached);
+  __ JumpIfSmi(a0, &convert_argument);
+
+  // Is it a String?
+  __ ld(a2, FieldMemOperand(a0, HeapObject::kMapOffset));
+  __ lbu(a3, FieldMemOperand(a2, Map::kInstanceTypeOffset));
+  STATIC_ASSERT(kNotStringTag != 0);
+  __ And(a4, a3, Operand(kIsNotStringMask));
+  __ Branch(&convert_argument, ne, a4, Operand(zero_reg));
+  __ mov(argument, a0);
+  __ IncrementCounter(counters->string_ctor_conversions(), 1, a3, a4);
+  __ Branch(&argument_is_string);
+
+  // Invoke the conversion builtin and put the result into a2.
+  __ bind(&convert_argument);
+  __ push(function);  // Preserve the function.
+  __ IncrementCounter(counters->string_ctor_conversions(), 1, a3, a4);
+  {
+    FrameScope scope(masm, StackFrame::INTERNAL);
+    __ push(a0);
+    __ InvokeBuiltin(Builtins::TO_STRING, CALL_FUNCTION);
+  }
+  __ pop(function);
+  __ mov(argument, v0);
+  __ Branch(&argument_is_string);
+
+  // Load the empty string into a2, remove the receiver from the
+  // stack, and jump back to the case where the argument is a string.
+  __ bind(&no_arguments);
+  __ LoadRoot(argument, Heap::kempty_stringRootIndex);
+  __ Drop(1);
+  __ Branch(&argument_is_string);
+
+  // At this point the argument is already a string. Call runtime to
+  // create a string wrapper.
+  __ bind(&gc_required);
+  __ IncrementCounter(counters->string_ctor_gc_required(), 1, a3, a4);
+  {
+    FrameScope scope(masm, StackFrame::INTERNAL);
+    __ push(argument);
+    __ CallRuntime(Runtime::kNewStringWrapper, 1);
+  }
+  __ Ret();
+}
+
+
+static void CallRuntimePassFunction(
+    MacroAssembler* masm, Runtime::FunctionId function_id) {
+  FrameScope scope(masm, StackFrame::INTERNAL);
+  // Push a copy of the function onto the stack, and push the function again
+  // as a parameter to the runtime call.
+  __ Push(a1, a1);
+
+  __ CallRuntime(function_id, 1);
+  // Restore the function.
+ __ Pop(a1); +} + + +static void GenerateTailCallToSharedCode(MacroAssembler* masm) { + __ ld(a2, FieldMemOperand(a1, JSFunction::kSharedFunctionInfoOffset)); + __ ld(a2, FieldMemOperand(a2, SharedFunctionInfo::kCodeOffset)); + __ Daddu(at, a2, Operand(Code::kHeaderSize - kHeapObjectTag)); + __ Jump(at); +} + + +static void GenerateTailCallToReturnedCode(MacroAssembler* masm) { + __ Daddu(at, v0, Operand(Code::kHeaderSize - kHeapObjectTag)); + __ Jump(at); +} + + +void Builtins::Generate_InOptimizationQueue(MacroAssembler* masm) { + // Checking whether the queued function is ready for install is optional, + // since we come across interrupts and stack checks elsewhere. However, + // not checking may delay installing ready functions, and always checking + // would be quite expensive. A good compromise is to first check against + // stack limit as a cue for an interrupt signal. + Label ok; + __ LoadRoot(a4, Heap::kStackLimitRootIndex); + __ Branch(&ok, hs, sp, Operand(a4)); + + CallRuntimePassFunction(masm, Runtime::kTryInstallOptimizedCode); + GenerateTailCallToReturnedCode(masm); + + __ bind(&ok); + GenerateTailCallToSharedCode(masm); +} + + +static void Generate_JSConstructStubHelper(MacroAssembler* masm, + bool is_api_function, + bool create_memento) { + // ----------- S t a t e ------------- + // -- a0 : number of arguments + // -- a1 : constructor function + // -- a2 : allocation site or undefined + // -- ra : return address + // -- sp[...]: constructor arguments + // ----------------------------------- + + // Should never create mementos for api functions. + DCHECK(!is_api_function || !create_memento); + + Isolate* isolate = masm->isolate(); + + // ----------- S t a t e ------------- + // -- a0 : number of arguments + // -- a1 : constructor function + // -- ra : return address + // -- sp[...]: constructor arguments + // ----------------------------------- + + // Enter a construct frame. + { + FrameScope scope(masm, StackFrame::CONSTRUCT); + + if (create_memento) { + __ AssertUndefinedOrAllocationSite(a2, a3); + __ push(a2); + } + + // Preserve the two incoming parameters on the stack. + // Tag arguments count. + __ dsll32(a0, a0, 0); + __ MultiPushReversed(a0.bit() | a1.bit()); + + Label rt_call, allocated; + // Try to allocate the object without transitioning into C code. If any of + // the preconditions is not met, the code bails out to the runtime call. + if (FLAG_inline_new) { + Label undo_allocation; + ExternalReference debug_step_in_fp = + ExternalReference::debug_step_in_fp_address(isolate); + __ li(a2, Operand(debug_step_in_fp)); + __ ld(a2, MemOperand(a2)); + __ Branch(&rt_call, ne, a2, Operand(zero_reg)); + + // Load the initial map and verify that it is in fact a map. + // a1: constructor function + __ ld(a2, FieldMemOperand(a1, JSFunction::kPrototypeOrInitialMapOffset)); + __ JumpIfSmi(a2, &rt_call); + __ GetObjectType(a2, a3, t0); + __ Branch(&rt_call, ne, t0, Operand(MAP_TYPE)); + + // Check that the constructor is not constructing a JSFunction (see + // comments in Runtime_NewObject in runtime.cc). In which case the + // initial map's instance type would be JS_FUNCTION_TYPE. + // a1: constructor function + // a2: initial map + __ lbu(a3, FieldMemOperand(a2, Map::kInstanceTypeOffset)); + __ Branch(&rt_call, eq, a3, Operand(JS_FUNCTION_TYPE)); + + if (!is_api_function) { + Label allocate; + MemOperand bit_field3 = FieldMemOperand(a2, Map::kBitField3Offset); + // Check if slack tracking is enabled. 
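+        // The construction counter is decoded from the map's bit field 3
+        // (Map::ConstructionCount) below.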
+ __ lwu(a4, bit_field3); + __ DecodeField<Map::ConstructionCount>(a6, a4); + __ Branch(&allocate, + eq, + a6, + Operand(static_cast<int64_t>(JSFunction::kNoSlackTracking))); + // Decrease generous allocation count. + __ Dsubu(a4, a4, Operand(1 << Map::ConstructionCount::kShift)); + __ Branch(USE_DELAY_SLOT, + &allocate, ne, a6, Operand(JSFunction::kFinishSlackTracking)); + __ sw(a4, bit_field3); // In delay slot. + + __ Push(a1, a2, a1); // a1 = Constructor. + __ CallRuntime(Runtime::kFinalizeInstanceSize, 1); + + __ Pop(a1, a2); + // Slack tracking counter is kNoSlackTracking after runtime call. + DCHECK(JSFunction::kNoSlackTracking == 0); + __ mov(a6, zero_reg); + + __ bind(&allocate); + } + + // Now allocate the JSObject on the heap. + // a1: constructor function + // a2: initial map + __ lbu(a3, FieldMemOperand(a2, Map::kInstanceSizeOffset)); + if (create_memento) { + __ Daddu(a3, a3, Operand(AllocationMemento::kSize / kPointerSize)); + } + + __ Allocate(a3, t0, t1, t2, &rt_call, SIZE_IN_WORDS); + + // Allocated the JSObject, now initialize the fields. Map is set to + // initial map and properties and elements are set to empty fixed array. + // a1: constructor function + // a2: initial map + // a3: object size (not including memento if create_memento) + // t0: JSObject (not tagged) + __ LoadRoot(t2, Heap::kEmptyFixedArrayRootIndex); + __ mov(t1, t0); + __ sd(a2, MemOperand(t1, JSObject::kMapOffset)); + __ sd(t2, MemOperand(t1, JSObject::kPropertiesOffset)); + __ sd(t2, MemOperand(t1, JSObject::kElementsOffset)); + __ Daddu(t1, t1, Operand(3*kPointerSize)); + DCHECK_EQ(0 * kPointerSize, JSObject::kMapOffset); + DCHECK_EQ(1 * kPointerSize, JSObject::kPropertiesOffset); + DCHECK_EQ(2 * kPointerSize, JSObject::kElementsOffset); + + // Fill all the in-object properties with appropriate filler. + // a1: constructor function + // a2: initial map + // a3: object size (in words, including memento if create_memento) + // t0: JSObject (not tagged) + // t1: First in-object property of JSObject (not tagged) + // a6: slack tracking counter (non-API function case) + DCHECK_EQ(3 * kPointerSize, JSObject::kHeaderSize); + + // Use t3 to hold undefined, which is used in several places below. + __ LoadRoot(t3, Heap::kUndefinedValueRootIndex); + + if (!is_api_function) { + Label no_inobject_slack_tracking; + + // Check if slack tracking is enabled. + __ Branch(&no_inobject_slack_tracking, + eq, + a6, + Operand(static_cast<int64_t>(JSFunction::kNoSlackTracking))); + + // Allocate object with a slack. + __ lwu(a0, FieldMemOperand(a2, Map::kInstanceSizesOffset)); + __ Ext(a0, a0, Map::kPreAllocatedPropertyFieldsByte * kBitsPerByte, + kBitsPerByte); + __ dsll(at, a0, kPointerSizeLog2); + __ daddu(a0, t1, at); + // a0: offset of first field after pre-allocated fields + if (FLAG_debug_code) { + __ dsll(at, a3, kPointerSizeLog2); + __ Daddu(t2, t0, Operand(at)); // End of object. + __ Assert(le, kUnexpectedNumberOfPreAllocatedPropertyFields, + a0, Operand(t2)); + } + __ InitializeFieldsWithFiller(t1, a0, t3); + // To allow for truncation. + __ LoadRoot(t3, Heap::kOnePointerFillerMapRootIndex); + // Fill the remaining fields with one pointer filler map. + + __ bind(&no_inobject_slack_tracking); + } + + if (create_memento) { + __ Dsubu(a0, a3, Operand(AllocationMemento::kSize / kPointerSize)); + __ dsll(a0, a0, kPointerSizeLog2); + __ Daddu(a0, t0, Operand(a0)); // End of object. + __ InitializeFieldsWithFiller(t1, a0, t3); + + // Fill in memento fields. + // t1: points to the allocated but uninitialized memento. 
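+        // A memento is two words: its map, then the AllocationSite it
+        // belongs to; both are stored below.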
+        __ LoadRoot(t3, Heap::kAllocationMementoMapRootIndex);
+        DCHECK_EQ(0 * kPointerSize, AllocationMemento::kMapOffset);
+        __ sd(t3, MemOperand(t1));
+        __ Daddu(t1, t1, kPointerSize);
+        // Load the AllocationSite.
+        __ ld(t3, MemOperand(sp, 2 * kPointerSize));
+        DCHECK_EQ(1 * kPointerSize, AllocationMemento::kAllocationSiteOffset);
+        __ sd(t3, MemOperand(t1));
+        __ Daddu(t1, t1, kPointerSize);
+      } else {
+        __ dsll(at, a3, kPointerSizeLog2);
+        __ Daddu(a0, t0, Operand(at));  // End of object.
+        __ InitializeFieldsWithFiller(t1, a0, t3);
+      }
+
+      // Add the object tag to make the JSObject real, so that we can continue
+      // and jump into the continuation code at any time from now on. Any
+      // failures need to undo the allocation, so that the heap is in a
+      // consistent state and verifiable.
+      __ Daddu(t0, t0, Operand(kHeapObjectTag));
+
+      // Check if a non-empty properties array is needed. Continue with the
+      // allocated object if not; fall through to the runtime call if it is.
+      // a1: constructor function
+      // t0: JSObject
+      // t1: start of next object (not tagged)
+      __ lbu(a3, FieldMemOperand(a2, Map::kUnusedPropertyFieldsOffset));
+      // The instance sizes field contains both the pre-allocated property
+      // fields and the in-object properties.
+      __ lw(a0, FieldMemOperand(a2, Map::kInstanceSizesOffset));
+      __ Ext(t2, a0, Map::kPreAllocatedPropertyFieldsByte * kBitsPerByte,
+             kBitsPerByte);
+      __ Daddu(a3, a3, Operand(t2));
+      __ Ext(t2, a0, Map::kInObjectPropertiesByte * kBitsPerByte,
+             kBitsPerByte);
+      __ dsubu(a3, a3, t2);
+
+      // Done if no extra properties are to be allocated.
+      __ Branch(&allocated, eq, a3, Operand(zero_reg));
+      __ Assert(greater_equal, kPropertyAllocationCountFailed,
+                a3, Operand(zero_reg));
+
+      // Scale the number of elements by pointer size and add the header for
+      // FixedArrays to the start of the next object calculation from above.
+      // a1: constructor
+      // a3: number of elements in properties array
+      // t0: JSObject
+      // t1: start of next object
+      __ Daddu(a0, a3, Operand(FixedArray::kHeaderSize / kPointerSize));
+      __ Allocate(
+          a0,
+          t1,
+          t2,
+          a2,
+          &undo_allocation,
+          static_cast<AllocationFlags>(RESULT_CONTAINS_TOP | SIZE_IN_WORDS));
+
+      // Initialize the FixedArray.
+      // a1: constructor
+      // a3: number of elements in properties array (untagged)
+      // t0: JSObject
+      // t1: start of next object
+      __ LoadRoot(t2, Heap::kFixedArrayMapRootIndex);
+      __ mov(a2, t1);
+      __ sd(t2, MemOperand(a2, JSObject::kMapOffset));
+      // Tag number of elements.
+      __ dsll32(a0, a3, 0);
+      __ sd(a0, MemOperand(a2, FixedArray::kLengthOffset));
+      __ Daddu(a2, a2, Operand(2 * kPointerSize));
+
+      DCHECK_EQ(0 * kPointerSize, JSObject::kMapOffset);
+      DCHECK_EQ(1 * kPointerSize, FixedArray::kLengthOffset);
+
+      // Initialize the fields to undefined.
+      // a1: constructor
+      // a2: First element of FixedArray (not tagged)
+      // a3: number of elements in properties array
+      // t0: JSObject
+      // t1: FixedArray (not tagged)
+      __ dsll(a7, a3, kPointerSizeLog2);
+      __ daddu(t2, a2, a7);  // End of object.
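+      // The loop below fills the new FixedArray's elements with undefined
+      // (kept in t3).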
+      DCHECK_EQ(2 * kPointerSize, FixedArray::kHeaderSize);
+      { Label loop, entry;
+        if (!is_api_function || create_memento) {
+          __ LoadRoot(t3, Heap::kUndefinedValueRootIndex);
+        } else if (FLAG_debug_code) {
+          __ LoadRoot(a6, Heap::kUndefinedValueRootIndex);
+          __ Assert(eq, kUndefinedValueNotLoaded, t3, Operand(a6));
+        }
+        __ jmp(&entry);
+        __ bind(&loop);
+        __ sd(t3, MemOperand(a2));
+        __ daddiu(a2, a2, kPointerSize);
+        __ bind(&entry);
+        __ Branch(&loop, less, a2, Operand(t2));
+      }
+
+      // Store the initialized FixedArray into the properties field of
+      // the JSObject.
+      // a1: constructor function
+      // t0: JSObject
+      // t1: FixedArray (not tagged)
+      __ Daddu(t1, t1, Operand(kHeapObjectTag));  // Add the heap tag.
+      __ sd(t1, FieldMemOperand(t0, JSObject::kPropertiesOffset));
+
+      // Continue with JSObject being successfully allocated.
+      // a1: constructor function
+      // a4: JSObject
+      __ jmp(&allocated);
+
+      // Undo the setting of the new top so that the heap is verifiable. For
+      // example, the map's unused properties potentially do not match the
+      // allocated object's unused properties.
+      // t0: JSObject (previous new top)
+      __ bind(&undo_allocation);
+      __ UndoAllocationInNewSpace(t0, t1);
+    }
+
+    // Allocate the new receiver object using the runtime call.
+    // a1: constructor function
+    __ bind(&rt_call);
+    if (create_memento) {
+      // Get the cell or allocation site.
+      __ ld(a2, MemOperand(sp, 2 * kPointerSize));
+      __ push(a2);
+    }
+
+    __ push(a1);  // Argument for Runtime_NewObject.
+    if (create_memento) {
+      __ CallRuntime(Runtime::kNewObjectWithAllocationSite, 2);
+    } else {
+      __ CallRuntime(Runtime::kNewObject, 1);
+    }
+    __ mov(t0, v0);
+
+    // If we ended up using the runtime, and we want a memento, then the
+    // runtime call made it for us, and we should not increment the memento
+    // create count.
+    Label count_incremented;
+    if (create_memento) {
+      __ jmp(&count_incremented);
+    }
+
+    // Receiver for constructor call allocated.
+    // t0: JSObject
+    __ bind(&allocated);
+
+    if (create_memento) {
+      __ ld(a2, MemOperand(sp, kPointerSize * 2));
+      __ LoadRoot(t1, Heap::kUndefinedValueRootIndex);
+      __ Branch(&count_incremented, eq, a2, Operand(t1));
+      // a2 is an AllocationSite. We are creating a memento from it, so we
+      // need to increment the memento create count.
+      __ ld(a3, FieldMemOperand(a2,
+                                AllocationSite::kPretenureCreateCountOffset));
+      __ Daddu(a3, a3, Operand(Smi::FromInt(1)));
+      __ sd(a3, FieldMemOperand(a2,
+                                AllocationSite::kPretenureCreateCountOffset));
+      __ bind(&count_incremented);
+    }
+
+    __ Push(t0, t0);
+
+    // Reload the number of arguments from the stack.
+    // sp[0]: receiver
+    // sp[1]: receiver
+    // sp[2]: constructor function
+    // sp[3]: number of arguments (smi-tagged)
+    __ ld(a1, MemOperand(sp, 2 * kPointerSize));
+    __ ld(a3, MemOperand(sp, 3 * kPointerSize));
+
+    // Set up pointer to last argument.
+    __ Daddu(a2, fp, Operand(StandardFrameConstants::kCallerSPOffset));
+
+    // Set up number of arguments for function call below.
+    __ SmiUntag(a0, a3);
+
+    // Copy arguments and receiver to the expression stack.
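+    // Each argument is re-read from the caller's frame (addressed via a2)
+    // and pushed onto the construct frame's expression stack.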
+ // a0: number of arguments + // a1: constructor function + // a2: address of last argument (caller sp) + // a3: number of arguments (smi-tagged) + // sp[0]: receiver + // sp[1]: receiver + // sp[2]: constructor function + // sp[3]: number of arguments (smi-tagged) + Label loop, entry; + __ SmiUntag(a3); + __ jmp(&entry); + __ bind(&loop); + __ dsll(a4, a3, kPointerSizeLog2); + __ Daddu(a4, a2, Operand(a4)); + __ ld(a5, MemOperand(a4)); + __ push(a5); + __ bind(&entry); + __ Daddu(a3, a3, Operand(-1)); + __ Branch(&loop, greater_equal, a3, Operand(zero_reg)); + + // Call the function. + // a0: number of arguments + // a1: constructor function + if (is_api_function) { + __ ld(cp, FieldMemOperand(a1, JSFunction::kContextOffset)); + Handle<Code> code = + masm->isolate()->builtins()->HandleApiCallConstruct(); + __ Call(code, RelocInfo::CODE_TARGET); + } else { + ParameterCount actual(a0); + __ InvokeFunction(a1, actual, CALL_FUNCTION, NullCallWrapper()); + } + + // Store offset of return address for deoptimizer. + if (!is_api_function) { + masm->isolate()->heap()->SetConstructStubDeoptPCOffset(masm->pc_offset()); + } + + // Restore context from the frame. + __ ld(cp, MemOperand(fp, StandardFrameConstants::kContextOffset)); + + // If the result is an object (in the ECMA sense), we should get rid + // of the receiver and use the result; see ECMA-262 section 13.2.2-7 + // on page 74. + Label use_receiver, exit; + + // If the result is a smi, it is *not* an object in the ECMA sense. + // v0: result + // sp[0]: receiver (newly allocated object) + // sp[1]: constructor function + // sp[2]: number of arguments (smi-tagged) + __ JumpIfSmi(v0, &use_receiver); + + // If the type of the result (stored in its map) is less than + // FIRST_SPEC_OBJECT_TYPE, it is not an object in the ECMA sense. + __ GetObjectType(v0, a1, a3); + __ Branch(&exit, greater_equal, a3, Operand(FIRST_SPEC_OBJECT_TYPE)); + + // Throw away the result of the constructor invocation and use the + // on-stack receiver as the result. + __ bind(&use_receiver); + __ ld(v0, MemOperand(sp)); + + // Remove receiver from the stack, remove caller arguments, and + // return. + __ bind(&exit); + // v0: result + // sp[0]: receiver (newly allocated object) + // sp[1]: constructor function + // sp[2]: number of arguments (smi-tagged) + __ ld(a1, MemOperand(sp, 2 * kPointerSize)); + + // Leave construct frame. + } + + __ SmiScale(a4, a1, kPointerSizeLog2); + __ Daddu(sp, sp, a4); + __ Daddu(sp, sp, kPointerSize); + __ IncrementCounter(isolate->counters()->constructed_objects(), 1, a1, a2); + __ Ret(); +} + + +void Builtins::Generate_JSConstructStubGeneric(MacroAssembler* masm) { + Generate_JSConstructStubHelper(masm, false, FLAG_pretenuring_call_new); +} + + +void Builtins::Generate_JSConstructStubApi(MacroAssembler* masm) { + Generate_JSConstructStubHelper(masm, true, false); +} + + +static void Generate_JSEntryTrampolineHelper(MacroAssembler* masm, + bool is_construct) { + // Called from JSEntryStub::GenerateBody + + // ----------- S t a t e ------------- + // -- a0: code entry + // -- a1: function + // -- a2: receiver_pointer + // -- a3: argc + // -- s0: argv + // ----------------------------------- + ProfileEntryHookStub::MaybeCallEntryHook(masm); + // Clear the context before we push it when entering the JS frame. + __ mov(cp, zero_reg); + + // Enter an internal frame. + { + FrameScope scope(masm, StackFrame::INTERNAL); + + // Set up the context from the function argument. 
+    __ ld(cp, FieldMemOperand(a1, JSFunction::kContextOffset));
+
+    // Push the function and the receiver onto the stack.
+    __ Push(a1, a2);
+
+    // Copy arguments to the stack in a loop.
+    // a3: argc
+    // s0: argv, i.e. points to first arg
+    Label loop, entry;
+    // TODO(plind): At least on simulator, argc in a3 is an int32_t with junk
+    // in the upper bits. Should fix the root cause, rather than using the
+    // workaround below to clear the upper bits.
+    __ dsll32(a3, a3, 0);  // int32_t -> int64_t.
+    __ dsrl32(a3, a3, 0);
+    __ dsll(a4, a3, kPointerSizeLog2);
+    __ daddu(a6, s0, a4);
+    __ b(&entry);
+    __ nop();  // Branch delay slot nop.
+    // a6 points past last arg.
+    __ bind(&loop);
+    __ ld(a4, MemOperand(s0));  // Read next parameter.
+    __ daddiu(s0, s0, kPointerSize);
+    __ ld(a4, MemOperand(a4));  // Dereference handle.
+    __ push(a4);  // Push parameter.
+    __ bind(&entry);
+    __ Branch(&loop, ne, s0, Operand(a6));
+
+    // Initialize all JavaScript callee-saved registers, since they will be seen
+    // by the garbage collector as part of handlers.
+    __ LoadRoot(a4, Heap::kUndefinedValueRootIndex);
+    __ mov(s1, a4);
+    __ mov(s2, a4);
+    __ mov(s3, a4);
+    __ mov(s4, a4);
+    __ mov(s5, a4);
+    // s6 holds the root address. Do not clobber.
+    // s7 is cp. Do not init.
+
+    // Invoke the code and pass argc as a0.
+    __ mov(a0, a3);
+    if (is_construct) {
+      // No type feedback cell is available.
+      __ LoadRoot(a2, Heap::kUndefinedValueRootIndex);
+      CallConstructStub stub(masm->isolate(), NO_CALL_CONSTRUCTOR_FLAGS);
+      __ CallStub(&stub);
+    } else {
+      ParameterCount actual(a0);
+      __ InvokeFunction(a1, actual, CALL_FUNCTION, NullCallWrapper());
+    }
+
+    // Leave internal frame.
+  }
+  __ Jump(ra);
+}
+
+
+void Builtins::Generate_JSEntryTrampoline(MacroAssembler* masm) {
+  Generate_JSEntryTrampolineHelper(masm, false);
+}
+
+
+void Builtins::Generate_JSConstructEntryTrampoline(MacroAssembler* masm) {
+  Generate_JSEntryTrampolineHelper(masm, true);
+}
+
+
+void Builtins::Generate_CompileUnoptimized(MacroAssembler* masm) {
+  CallRuntimePassFunction(masm, Runtime::kCompileUnoptimized);
+  GenerateTailCallToReturnedCode(masm);
+}
+
+
+static void CallCompileOptimized(MacroAssembler* masm, bool concurrent) {
+  FrameScope scope(masm, StackFrame::INTERNAL);
+  // Push a copy of the function onto the stack.
+  // Push function as parameter to the runtime call.
+  __ Push(a1, a1);
+  // Whether to compile in a background thread.
+  __ Push(masm->isolate()->factory()->ToBoolean(concurrent));
+
+  __ CallRuntime(Runtime::kCompileOptimized, 2);
+  // Restore the function.
+  __ Pop(a1);
+}
+
+
+void Builtins::Generate_CompileOptimized(MacroAssembler* masm) {
+  CallCompileOptimized(masm, false);
+  GenerateTailCallToReturnedCode(masm);
+}
+
+
+void Builtins::Generate_CompileOptimizedConcurrent(MacroAssembler* masm) {
+  CallCompileOptimized(masm, true);
+  GenerateTailCallToReturnedCode(masm);
+}
+
+
+static void GenerateMakeCodeYoungAgainCommon(MacroAssembler* masm) {
+  // For now, we are relying on the fact that make_code_young doesn't do any
+  // garbage collection, which allows us to save/restore the registers without
+  // worrying about which of them contain pointers. We also don't build an
+  // internal frame to make the code faster, since we shouldn't have to do stack
+  // crawls in MakeCodeYoung. This seems a bit fragile.
+
+  // Set a0 to point to the head of the PlatformCodeAge sequence.
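+  // On entry a0 holds the return address of the call inside the code-age
+  // sequence; the subtraction below rewinds it to the head of the sequence.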
+  __ Dsubu(a0, a0,
+      Operand(kNoCodeAgeSequenceLength - Assembler::kInstrSize));
+
+  // The following registers must be saved and restored when calling through to
+  // the runtime:
+  //   a0 - contains return address (beginning of patch sequence)
+  //   a1 - isolate
+  RegList saved_regs =
+      (a0.bit() | a1.bit() | ra.bit() | fp.bit()) & ~sp.bit();
+  FrameScope scope(masm, StackFrame::MANUAL);
+  __ MultiPush(saved_regs);
+  __ PrepareCallCFunction(2, 0, a2);
+  __ li(a1, Operand(ExternalReference::isolate_address(masm->isolate())));
+  __ CallCFunction(
+      ExternalReference::get_make_code_young_function(masm->isolate()), 2);
+  __ MultiPop(saved_regs);
+  __ Jump(a0);
+}
+
+#define DEFINE_CODE_AGE_BUILTIN_GENERATOR(C)                 \
+void Builtins::Generate_Make##C##CodeYoungAgainEvenMarking(  \
+    MacroAssembler* masm) {                                  \
+  GenerateMakeCodeYoungAgainCommon(masm);                    \
+}                                                            \
+void Builtins::Generate_Make##C##CodeYoungAgainOddMarking(   \
+    MacroAssembler* masm) {                                  \
+  GenerateMakeCodeYoungAgainCommon(masm);                    \
+}
+CODE_AGE_LIST(DEFINE_CODE_AGE_BUILTIN_GENERATOR)
+#undef DEFINE_CODE_AGE_BUILTIN_GENERATOR
+
+
+void Builtins::Generate_MarkCodeAsExecutedOnce(MacroAssembler* masm) {
+  // For now, as in GenerateMakeCodeYoungAgainCommon, we are relying on the
+  // fact that make_code_young doesn't do any garbage collection, which allows
+  // us to save/restore the registers without worrying about which of them
+  // contain pointers.
+
+  // Set a0 to point to the head of the PlatformCodeAge sequence.
+  __ Dsubu(a0, a0,
+      Operand(kNoCodeAgeSequenceLength - Assembler::kInstrSize));
+
+  // The following registers must be saved and restored when calling through to
+  // the runtime:
+  //   a0 - contains return address (beginning of patch sequence)
+  //   a1 - isolate
+  RegList saved_regs =
+      (a0.bit() | a1.bit() | ra.bit() | fp.bit()) & ~sp.bit();
+  FrameScope scope(masm, StackFrame::MANUAL);
+  __ MultiPush(saved_regs);
+  __ PrepareCallCFunction(2, 0, a2);
+  __ li(a1, Operand(ExternalReference::isolate_address(masm->isolate())));
+  __ CallCFunction(
+      ExternalReference::get_mark_code_as_executed_function(masm->isolate()),
+      2);
+  __ MultiPop(saved_regs);
+
+  // Perform prologue operations usually performed by the young code stub.
+  __ Push(ra, fp, cp, a1);
+  __ Daddu(fp, sp, Operand(StandardFrameConstants::kFixedFrameSizeFromFp));
+
+  // Jump to point after the code-age stub.
+  __ Daddu(a0, a0, Operand((kNoCodeAgeSequenceLength)));
+  __ Jump(a0);
+}
+
+
+void Builtins::Generate_MarkCodeAsExecutedTwice(MacroAssembler* masm) {
+  GenerateMakeCodeYoungAgainCommon(masm);
+}
+
+
+static void Generate_NotifyStubFailureHelper(MacroAssembler* masm,
+                                             SaveFPRegsMode save_doubles) {
+  {
+    FrameScope scope(masm, StackFrame::INTERNAL);
+
+    // Preserve registers across notification; this is important for compiled
+    // stubs that tail call the runtime on deopts, passing their parameters in
+    // registers.
+    __ MultiPush(kJSCallerSaved | kCalleeSaved);
+    // Notify the runtime of the stub failure; no arguments are passed.
+    __ CallRuntime(Runtime::kNotifyStubFailure, 0, save_doubles);
+    __ MultiPop(kJSCallerSaved | kCalleeSaved);
+  }
+
+  __ Daddu(sp, sp, Operand(kPointerSize));  // Ignore state.
+  __ Jump(ra);  // Jump to the miss handler.
+}
+
+
+void Builtins::Generate_NotifyStubFailure(MacroAssembler* masm) {
+  Generate_NotifyStubFailureHelper(masm, kDontSaveFPRegs);
+}
+
+
+void Builtins::Generate_NotifyStubFailureSaveDoubles(MacroAssembler* masm) {
+  Generate_NotifyStubFailureHelper(masm, kSaveFPRegs);
+}
+
+
+static void Generate_NotifyDeoptimizedHelper(MacroAssembler* masm,
+                                             Deoptimizer::BailoutType type) {
+  {
+    FrameScope scope(masm, StackFrame::INTERNAL);
+    // Pass the function and deoptimization type to the runtime system.
+    __ li(a0, Operand(Smi::FromInt(static_cast<int>(type))));
+    __ push(a0);
+    __ CallRuntime(Runtime::kNotifyDeoptimized, 1);
+  }
+
+  // Get the full codegen state from the stack and untag it -> a6.
+  __ ld(a6, MemOperand(sp, 0 * kPointerSize));
+  __ SmiUntag(a6);
+  // Switch on the state.
+  Label with_tos_register, unknown_state;
+  __ Branch(&with_tos_register,
+            ne, a6, Operand(FullCodeGenerator::NO_REGISTERS));
+  __ Ret(USE_DELAY_SLOT);
+  // Safe to fill the delay slot; Daddu will emit one instruction.
+  __ Daddu(sp, sp, Operand(1 * kPointerSize));  // Remove state.
+
+  __ bind(&with_tos_register);
+  __ ld(v0, MemOperand(sp, 1 * kPointerSize));
+  __ Branch(&unknown_state, ne, a6, Operand(FullCodeGenerator::TOS_REG));
+
+  __ Ret(USE_DELAY_SLOT);
+  // Safe to fill the delay slot; Daddu will emit one instruction.
+  __ Daddu(sp, sp, Operand(2 * kPointerSize));  // Remove state.
+
+  __ bind(&unknown_state);
+  __ stop("no cases left");
+}
+
+
+void Builtins::Generate_NotifyDeoptimized(MacroAssembler* masm) {
+  Generate_NotifyDeoptimizedHelper(masm, Deoptimizer::EAGER);
+}
+
+
+void Builtins::Generate_NotifySoftDeoptimized(MacroAssembler* masm) {
+  Generate_NotifyDeoptimizedHelper(masm, Deoptimizer::SOFT);
+}
+
+
+void Builtins::Generate_NotifyLazyDeoptimized(MacroAssembler* masm) {
+  Generate_NotifyDeoptimizedHelper(masm, Deoptimizer::LAZY);
+}
+
+
+void Builtins::Generate_OnStackReplacement(MacroAssembler* masm) {
+  // Lookup the function in the JavaScript frame.
+  __ ld(a0, MemOperand(fp, JavaScriptFrameConstants::kFunctionOffset));
+  {
+    FrameScope scope(masm, StackFrame::INTERNAL);
+    // Pass function as argument.
+    __ push(a0);
+    __ CallRuntime(Runtime::kCompileForOnStackReplacement, 1);
+  }
+
+  // If the code object is null, just return to the unoptimized code.
+  __ Ret(eq, v0, Operand(Smi::FromInt(0)));
+
+  // Load deoptimization data from the code object.
+  // <deopt_data> = <code>[#deoptimization_data_offset]
+  __ Uld(a1, MemOperand(v0, Code::kDeoptimizationDataOffset - kHeapObjectTag));
+
+  // Load the OSR entrypoint offset from the deoptimization data.
+  // <osr_offset> = <deopt_data>[#header_size + #osr_pc_offset]
+  __ ld(a1, MemOperand(a1, FixedArray::OffsetOfElementAt(
+      DeoptimizationInputData::kOsrPcOffsetIndex) - kHeapObjectTag));
+  __ SmiUntag(a1);
+
+  // Compute the target address = code_obj + header_size + osr_offset
+  // <entry_addr> = <code_obj> + #header_size + <osr_offset>
+  __ daddu(v0, v0, a1);
+  __ daddiu(ra, v0, Code::kHeaderSize - kHeapObjectTag);
+
+  // And "return" to the OSR entry point of the function.
+  __ Ret();
+}
+
+
+void Builtins::Generate_OsrAfterStackCheck(MacroAssembler* masm) {
+  // We check the stack limit as an indicator that recompilation might be done.
+  Label ok;
+  __ LoadRoot(at, Heap::kStackLimitRootIndex);
+  __ Branch(&ok, hs, sp, Operand(at));
+  {
+    FrameScope scope(masm, StackFrame::INTERNAL);
+    __ CallRuntime(Runtime::kStackGuard, 0);
+  }
+  __ Jump(masm->isolate()->builtins()->OnStackReplacement(),
+          RelocInfo::CODE_TARGET);
+
+  __ bind(&ok);
+  __ Ret();
+}
+
+
+void Builtins::Generate_FunctionCall(MacroAssembler* masm) {
+  // 1. Make sure we have at least one argument.
+  // a0: actual number of arguments
+  { Label done;
+    __ Branch(&done, ne, a0, Operand(zero_reg));
+    __ LoadRoot(a6, Heap::kUndefinedValueRootIndex);
+    __ push(a6);
+    __ Daddu(a0, a0, Operand(1));
+    __ bind(&done);
+  }
+
+  // 2. Get the function to call (passed as receiver) from the stack, check
+  //    if it is a function.
+  // a0: actual number of arguments
+  Label slow, non_function;
+  __ dsll(at, a0, kPointerSizeLog2);
+  __ daddu(at, sp, at);
+  __ ld(a1, MemOperand(at));
+  __ JumpIfSmi(a1, &non_function);
+  __ GetObjectType(a1, a2, a2);
+  __ Branch(&slow, ne, a2, Operand(JS_FUNCTION_TYPE));
+
+  // 3a. Patch the first argument if necessary when calling a function.
+  // a0: actual number of arguments
+  // a1: function
+  Label shift_arguments;
+  __ li(a4, Operand(0, RelocInfo::NONE32));  // Indicate regular JS_FUNCTION.
+  { Label convert_to_object, use_global_proxy, patch_receiver;
+    // Change context eagerly in case we need the global receiver.
+    __ ld(cp, FieldMemOperand(a1, JSFunction::kContextOffset));
+
+    // Do not transform the receiver for strict mode functions.
+    __ ld(a2, FieldMemOperand(a1, JSFunction::kSharedFunctionInfoOffset));
+    __ lbu(a3, FieldMemOperand(a2, SharedFunctionInfo::kStrictModeByteOffset));
+    __ And(a7, a3, Operand(1 << SharedFunctionInfo::kStrictModeBitWithinByte));
+    __ Branch(&shift_arguments, ne, a7, Operand(zero_reg));
+
+    // Do not transform the receiver for native (Compilerhints already in a3).
+    __ lbu(a3, FieldMemOperand(a2, SharedFunctionInfo::kNativeByteOffset));
+    __ And(a7, a3, Operand(1 << SharedFunctionInfo::kNativeBitWithinByte));
+    __ Branch(&shift_arguments, ne, a7, Operand(zero_reg));
+
+    // Compute the receiver in sloppy mode.
+    // Load the first argument into a2. a2 = -kPointerSize(sp + n_args << 3).
+    __ dsll(at, a0, kPointerSizeLog2);
+    __ daddu(a2, sp, at);
+    __ ld(a2, MemOperand(a2, -kPointerSize));
+    // a0: actual number of arguments
+    // a1: function
+    // a2: first argument
+    __ JumpIfSmi(a2, &convert_to_object, a6);
+
+    __ LoadRoot(a3, Heap::kUndefinedValueRootIndex);
+    __ Branch(&use_global_proxy, eq, a2, Operand(a3));
+    __ LoadRoot(a3, Heap::kNullValueRootIndex);
+    __ Branch(&use_global_proxy, eq, a2, Operand(a3));
+
+    STATIC_ASSERT(LAST_SPEC_OBJECT_TYPE == LAST_TYPE);
+    __ GetObjectType(a2, a3, a3);
+    __ Branch(&shift_arguments, ge, a3, Operand(FIRST_SPEC_OBJECT_TYPE));
+
+    __ bind(&convert_to_object);
+    // Enter an internal frame in order to preserve argument count.
+    {
+      FrameScope scope(masm, StackFrame::INTERNAL);
+      __ SmiTag(a0);
+      __ Push(a0, a2);
+      __ InvokeBuiltin(Builtins::TO_OBJECT, CALL_FUNCTION);
+      __ mov(a2, v0);
+
+      __ pop(a0);
+      __ SmiUntag(a0);
+      // Leave internal frame.
+    }
+    // Restore the function to a1, and the flag to a4.
+    __ dsll(at, a0, kPointerSizeLog2);
+    __ daddu(at, sp, at);
+    __ ld(a1, MemOperand(at));
+    __ Branch(USE_DELAY_SLOT, &patch_receiver);
+    __ li(a4, Operand(0, RelocInfo::NONE32));
+
+    __ bind(&use_global_proxy);
+    __ ld(a2, ContextOperand(cp, Context::GLOBAL_OBJECT_INDEX));
+    __ ld(a2, FieldMemOperand(a2, GlobalObject::kGlobalProxyOffset));
+
+    __ bind(&patch_receiver);
+    __ dsll(at, a0, kPointerSizeLog2);
+    __ daddu(a3, sp, at);
+    __ sd(a2, MemOperand(a3, -kPointerSize));
+
+    __ Branch(&shift_arguments);
+  }
+
+  // 3b. Check for function proxy.
+  __ bind(&slow);
+  __ li(a4, Operand(1, RelocInfo::NONE32));  // Indicate function proxy.
+  __ Branch(&shift_arguments, eq, a2, Operand(JS_FUNCTION_PROXY_TYPE));
+
+  __ bind(&non_function);
+  __ li(a4, Operand(2, RelocInfo::NONE32));  // Indicate non-function.
+
+  // 3c. Patch the first argument when calling a non-function. The
+  //     CALL_NON_FUNCTION builtin expects the non-function callee as
+  //     receiver, so overwrite the first argument which will ultimately
+  //     become the receiver.
+  // a0: actual number of arguments
+  // a1: function
+  // a4: call type (0: JS function, 1: function proxy, 2: non-function)
+  __ dsll(at, a0, kPointerSizeLog2);
+  __ daddu(a2, sp, at);
+  __ sd(a1, MemOperand(a2, -kPointerSize));
+
+  // 4. Shift arguments and return address one slot down on the stack
+  //    (overwriting the original receiver). Adjust argument count to make
+  //    the original first argument the new receiver.
+  // a0: actual number of arguments
+  // a1: function
+  // a4: call type (0: JS function, 1: function proxy, 2: non-function)
+  __ bind(&shift_arguments);
+  { Label loop;
+    // Calculate the copy start address (destination). Copy end address is sp.
+    __ dsll(at, a0, kPointerSizeLog2);
+    __ daddu(a2, sp, at);
+
+    __ bind(&loop);
+    __ ld(at, MemOperand(a2, -kPointerSize));
+    __ sd(at, MemOperand(a2));
+    __ Dsubu(a2, a2, Operand(kPointerSize));
+    __ Branch(&loop, ne, a2, Operand(sp));
+    // Adjust the actual number of arguments and remove the top element
+    // (which is a copy of the last argument).
+    __ Dsubu(a0, a0, Operand(1));
+    __ Pop();
+  }
+
+  // 5a. Call non-function via tail call to CALL_NON_FUNCTION builtin,
+  //     or a function proxy via CALL_FUNCTION_PROXY.
+  // a0: actual number of arguments
+  // a1: function
+  // a4: call type (0: JS function, 1: function proxy, 2: non-function)
+  { Label function, non_proxy;
+    __ Branch(&function, eq, a4, Operand(zero_reg));
+    // Expected number of arguments is 0 for CALL_NON_FUNCTION.
+    __ mov(a2, zero_reg);
+    __ Branch(&non_proxy, ne, a4, Operand(1));
+
+    __ push(a1);  // Re-add proxy object as additional argument.
+    __ Daddu(a0, a0, Operand(1));
+    __ GetBuiltinFunction(a1, Builtins::CALL_FUNCTION_PROXY);
+    __ Jump(masm->isolate()->builtins()->ArgumentsAdaptorTrampoline(),
+            RelocInfo::CODE_TARGET);
+
+    __ bind(&non_proxy);
+    __ GetBuiltinFunction(a1, Builtins::CALL_NON_FUNCTION);
+    __ Jump(masm->isolate()->builtins()->ArgumentsAdaptorTrampoline(),
+            RelocInfo::CODE_TARGET);
+    __ bind(&function);
+  }
+
+  // 5b. Get the code to call from the function and check that the number of
+  //     expected arguments matches what we're providing. If so, jump
+  //     (tail-call) to the code in register a3 without checking arguments.
+  // a0: actual number of arguments
+  // a1: function
+  __ ld(a3, FieldMemOperand(a1, JSFunction::kSharedFunctionInfoOffset));
+  // The argument count is stored as int32_t on 64-bit platforms.
+  // TODO(plind): Smi on 32-bit platforms.
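+  // (Hence the 32-bit lw load below rather than ld.)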
+ __ lw(a2, + FieldMemOperand(a3, SharedFunctionInfo::kFormalParameterCountOffset)); + // Check formal and actual parameter counts. + __ Jump(masm->isolate()->builtins()->ArgumentsAdaptorTrampoline(), + RelocInfo::CODE_TARGET, ne, a2, Operand(a0)); + + __ ld(a3, FieldMemOperand(a1, JSFunction::kCodeEntryOffset)); + ParameterCount expected(0); + __ InvokeCode(a3, expected, expected, JUMP_FUNCTION, NullCallWrapper()); +} + + +void Builtins::Generate_FunctionApply(MacroAssembler* masm) { + const int kIndexOffset = + StandardFrameConstants::kExpressionsOffset - (2 * kPointerSize); + const int kLimitOffset = + StandardFrameConstants::kExpressionsOffset - (1 * kPointerSize); + const int kArgsOffset = 2 * kPointerSize; + const int kRecvOffset = 3 * kPointerSize; + const int kFunctionOffset = 4 * kPointerSize; + + { + FrameScope frame_scope(masm, StackFrame::INTERNAL); + __ ld(a0, MemOperand(fp, kFunctionOffset)); // Get the function. + __ push(a0); + __ ld(a0, MemOperand(fp, kArgsOffset)); // Get the args array. + __ push(a0); + // Returns (in v0) number of arguments to copy to stack as Smi. + __ InvokeBuiltin(Builtins::APPLY_PREPARE, CALL_FUNCTION); + + // Check the stack for overflow. We are not trying to catch + // interruptions (e.g. debug break and preemption) here, so the "real stack + // limit" is checked. + Label okay; + __ LoadRoot(a2, Heap::kRealStackLimitRootIndex); + // Make a2 the space we have left. The stack might already be overflowed + // here which will cause a2 to become negative. + __ dsubu(a2, sp, a2); + // Check if the arguments will overflow the stack. + __ SmiScale(a7, v0, kPointerSizeLog2); + __ Branch(&okay, gt, a2, Operand(a7)); // Signed comparison. + + // Out of stack space. + __ ld(a1, MemOperand(fp, kFunctionOffset)); + __ Push(a1, v0); + __ InvokeBuiltin(Builtins::STACK_OVERFLOW, CALL_FUNCTION); + // End of stack check. + + // Push current limit and index. + __ bind(&okay); + __ mov(a1, zero_reg); + __ Push(v0, a1); // Limit and initial index. + + // Get the receiver. + __ ld(a0, MemOperand(fp, kRecvOffset)); + + // Check that the function is a JS function (otherwise it must be a proxy). + Label push_receiver; + __ ld(a1, MemOperand(fp, kFunctionOffset)); + __ GetObjectType(a1, a2, a2); + __ Branch(&push_receiver, ne, a2, Operand(JS_FUNCTION_TYPE)); + + // Change context eagerly to get the right global object if necessary. + __ ld(cp, FieldMemOperand(a1, JSFunction::kContextOffset)); + // Load the shared function info while the function is still in a1. + __ ld(a2, FieldMemOperand(a1, JSFunction::kSharedFunctionInfoOffset)); + + // Compute the receiver. + // Do not transform the receiver for strict mode functions. + Label call_to_object, use_global_proxy; + __ lbu(a7, FieldMemOperand(a2, SharedFunctionInfo::kStrictModeByteOffset)); + __ And(a7, a7, Operand(1 << SharedFunctionInfo::kStrictModeBitWithinByte)); + __ Branch(&push_receiver, ne, a7, Operand(zero_reg)); + + // Do not transform the receiver for native (Compilerhints already in a2). + __ lbu(a7, FieldMemOperand(a2, SharedFunctionInfo::kNativeByteOffset)); + __ And(a7, a7, Operand(1 << SharedFunctionInfo::kNativeBitWithinByte)); + __ Branch(&push_receiver, ne, a7, Operand(zero_reg)); + + // Compute the receiver in sloppy mode. 
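+    // Under sloppy-mode rules (ES5 10.4.3), null and undefined become the
+    // global proxy, other primitives are boxed via ToObject, and objects
+    // pass through unchanged.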
+ __ JumpIfSmi(a0, &call_to_object); + __ LoadRoot(a1, Heap::kNullValueRootIndex); + __ Branch(&use_global_proxy, eq, a0, Operand(a1)); + __ LoadRoot(a2, Heap::kUndefinedValueRootIndex); + __ Branch(&use_global_proxy, eq, a0, Operand(a2)); + + // Check if the receiver is already a JavaScript object. + // a0: receiver + STATIC_ASSERT(LAST_SPEC_OBJECT_TYPE == LAST_TYPE); + __ GetObjectType(a0, a1, a1); + __ Branch(&push_receiver, ge, a1, Operand(FIRST_SPEC_OBJECT_TYPE)); + + // Convert the receiver to a regular object. + // a0: receiver + __ bind(&call_to_object); + __ push(a0); + __ InvokeBuiltin(Builtins::TO_OBJECT, CALL_FUNCTION); + __ mov(a0, v0); // Put object in a0 to match other paths to push_receiver. + __ Branch(&push_receiver); + + __ bind(&use_global_proxy); + __ ld(a0, ContextOperand(cp, Context::GLOBAL_OBJECT_INDEX)); + __ ld(a0, FieldMemOperand(a0, GlobalObject::kGlobalProxyOffset)); + + // Push the receiver. + // a0: receiver + __ bind(&push_receiver); + __ push(a0); + + // Copy all arguments from the array to the stack. + Label entry, loop; + __ ld(a0, MemOperand(fp, kIndexOffset)); + __ Branch(&entry); + + // Load the current argument from the arguments array and push it to the + // stack. + // a0: current argument index + __ bind(&loop); + __ ld(a1, MemOperand(fp, kArgsOffset)); + __ Push(a1, a0); + + // Call the runtime to access the property in the arguments array. + __ CallRuntime(Runtime::kGetProperty, 2); + __ push(v0); + + // Use inline caching to access the arguments. + __ ld(a0, MemOperand(fp, kIndexOffset)); + __ Daddu(a0, a0, Operand(Smi::FromInt(1))); + __ sd(a0, MemOperand(fp, kIndexOffset)); + + // Test if the copy loop has finished copying all the elements from the + // arguments object. + __ bind(&entry); + __ ld(a1, MemOperand(fp, kLimitOffset)); + __ Branch(&loop, ne, a0, Operand(a1)); + + // Call the function. + Label call_proxy; + ParameterCount actual(a0); + __ SmiUntag(a0); + __ ld(a1, MemOperand(fp, kFunctionOffset)); + __ GetObjectType(a1, a2, a2); + __ Branch(&call_proxy, ne, a2, Operand(JS_FUNCTION_TYPE)); + + __ InvokeFunction(a1, actual, CALL_FUNCTION, NullCallWrapper()); + + frame_scope.GenerateLeaveFrame(); + __ Ret(USE_DELAY_SLOT); + __ Daddu(sp, sp, Operand(3 * kPointerSize)); // In delay slot. + + // Call the function proxy. + __ bind(&call_proxy); + __ push(a1); // Add function proxy as last argument. + __ Daddu(a0, a0, Operand(1)); + __ li(a2, Operand(0, RelocInfo::NONE32)); + __ GetBuiltinFunction(a1, Builtins::CALL_FUNCTION_PROXY); + __ Call(masm->isolate()->builtins()->ArgumentsAdaptorTrampoline(), + RelocInfo::CODE_TARGET); + // Tear down the internal frame and remove function, receiver and args. + } + + __ Ret(USE_DELAY_SLOT); + __ Daddu(sp, sp, Operand(3 * kPointerSize)); // In delay slot. +} + + +static void ArgumentAdaptorStackCheck(MacroAssembler* masm, + Label* stack_overflow) { + // ----------- S t a t e ------------- + // -- a0 : actual number of arguments + // -- a1 : function (passed through to callee) + // -- a2 : expected number of arguments + // ----------------------------------- + // Check the stack for overflow. We are not trying to catch + // interruptions (e.g. debug break and preemption) here, so the "real stack + // limit" is checked. + __ LoadRoot(a5, Heap::kRealStackLimitRootIndex); + // Make a5 the space we have left. The stack might already be overflowed + // here which will cause a5 to become negative. + __ dsubu(a5, sp, a5); + // Check if the arguments will overflow the stack. 
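+  // (a2 holds the callee's expected argument count; scaled by the pointer
+  // size it gives the number of bytes the adaptor is about to push.)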
+  __ dsll(at, a2, kPointerSizeLog2);
+  // Signed comparison.
+  __ Branch(stack_overflow, le, a5, Operand(at));
+}
+
+
+static void EnterArgumentsAdaptorFrame(MacroAssembler* masm) {
+  // Smi-tag the argument count: on MIPS64 a Smi is the value shifted left
+  // by 32 bits (the 32-bit port uses __ sll(a0, a0, kSmiTagSize) here).
+  __ dsll32(a0, a0, 0);
+  __ li(a4, Operand(Smi::FromInt(StackFrame::ARGUMENTS_ADAPTOR)));
+  __ MultiPush(a0.bit() | a1.bit() | a4.bit() | fp.bit() | ra.bit());
+  __ Daddu(fp, sp,
+      Operand(StandardFrameConstants::kFixedFrameSizeFromFp + kPointerSize));
+}
+
+
+static void LeaveArgumentsAdaptorFrame(MacroAssembler* masm) {
+  // ----------- S t a t e -------------
+  // -- v0 : result being passed through
+  // -----------------------------------
+  // Get the number of arguments passed (as a smi), tear down the frame and
+  // then tear down the parameters.
+  __ ld(a1, MemOperand(fp, -(StandardFrameConstants::kFixedFrameSizeFromFp +
+                             kPointerSize)));
+  __ mov(sp, fp);
+  __ MultiPop(fp.bit() | ra.bit());
+  __ SmiScale(a4, a1, kPointerSizeLog2);
+  __ Daddu(sp, sp, a4);
+  // Adjust for the receiver.
+  __ Daddu(sp, sp, Operand(kPointerSize));
+}
+
+
+void Builtins::Generate_ArgumentsAdaptorTrampoline(MacroAssembler* masm) {
+  // State setup as expected by MacroAssembler::InvokePrologue.
+  // ----------- S t a t e -------------
+  // -- a0: actual arguments count
+  // -- a1: function (passed through to callee)
+  // -- a2: expected arguments count
+  // -----------------------------------
+
+  Label stack_overflow;
+  ArgumentAdaptorStackCheck(masm, &stack_overflow);
+  Label invoke, dont_adapt_arguments;
+
+  Label enough, too_few;
+  __ ld(a3, FieldMemOperand(a1, JSFunction::kCodeEntryOffset));
+  __ Branch(&dont_adapt_arguments, eq,
+      a2, Operand(SharedFunctionInfo::kDontAdaptArgumentsSentinel));
+  // We use Uless as the number of arguments should always be greater than 0.
+  __ Branch(&too_few, Uless, a0, Operand(a2));
+
+  {  // Enough parameters: actual >= expected.
+    // a0: actual number of arguments as a smi
+    // a1: function
+    // a2: expected number of arguments
+    // a3: code entry to call
+    __ bind(&enough);
+    EnterArgumentsAdaptorFrame(masm);
+
+    // Calculate copy start address into a0 and copy end address into a2.
+    __ SmiScale(a0, a0, kPointerSizeLog2);
+    __ Daddu(a0, fp, a0);
+    // Adjust for return address and receiver.
+    __ Daddu(a0, a0, Operand(2 * kPointerSize));
+    // Compute copy end address.
+    __ dsll(a2, a2, kPointerSizeLog2);
+    __ dsubu(a2, a0, a2);
+
+    // Copy the arguments (including the receiver) to the new stack frame.
+    // a0: copy start address
+    // a1: function
+    // a2: copy end address
+    // a3: code entry to call
+
+    Label copy;
+    __ bind(&copy);
+    __ ld(a4, MemOperand(a0));
+    __ push(a4);
+    __ Branch(USE_DELAY_SLOT, &copy, ne, a0, Operand(a2));
+    __ daddiu(a0, a0, -kPointerSize);  // In delay slot.
+
+    __ jmp(&invoke);
+  }
+
+  {  // Too few parameters: Actual < expected.
+    __ bind(&too_few);
+    EnterArgumentsAdaptorFrame(masm);
+
+    // Calculate copy start address into a0; the copy end address is fp
+    // (adjusted below for the return address).
+    // a0: actual number of arguments as a smi
+    // a1: function
+    // a2: expected number of arguments
+    // a3: code entry to call
+    __ SmiScale(a0, a0, kPointerSizeLog2);
+    __ Daddu(a0, fp, a0);
+    // Adjust for return address and receiver.
+    __ Daddu(a0, a0, Operand(2 * kPointerSize));
+    // Compute copy end address. Also adjust for return address.
+    __ Daddu(a7, fp, kPointerSize);
+
+    // Copy the arguments (including the receiver) to the new stack frame.
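+    // (The copy loop below moves the actual arguments and the receiver;
+    // the fill loop that follows pads the missing slots with undefined.)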
+    // a0: copy start address
+    // a1: function
+    // a2: expected number of arguments
+    // a3: code entry to call
+    // a7: copy end address
+    Label copy;
+    __ bind(&copy);
+    __ ld(a4, MemOperand(a0));  // Adjusted above for return addr and receiver.
+    __ Dsubu(sp, sp, kPointerSize);
+    __ Dsubu(a0, a0, kPointerSize);
+    __ Branch(USE_DELAY_SLOT, &copy, ne, a0, Operand(a7));
+    __ sd(a4, MemOperand(sp));  // In the delay slot.
+
+    // Fill the remaining expected arguments with undefined.
+    // a1: function
+    // a2: expected number of arguments
+    // a3: code entry to call
+    __ LoadRoot(a4, Heap::kUndefinedValueRootIndex);
+    __ dsll(a6, a2, kPointerSizeLog2);
+    __ Dsubu(a2, fp, Operand(a6));
+    // Adjust for frame.
+    __ Dsubu(a2, a2, Operand(StandardFrameConstants::kFixedFrameSizeFromFp +
+                             2 * kPointerSize));
+
+    Label fill;
+    __ bind(&fill);
+    __ Dsubu(sp, sp, kPointerSize);
+    __ Branch(USE_DELAY_SLOT, &fill, ne, sp, Operand(a2));
+    __ sd(a4, MemOperand(sp));
+  }
+
+  // Call the entry point.
+  __ bind(&invoke);
+
+  __ Call(a3);
+
+  // Store offset of return address for deoptimizer.
+  masm->isolate()->heap()->SetArgumentsAdaptorDeoptPCOffset(masm->pc_offset());
+
+  // Exit frame and return.
+  LeaveArgumentsAdaptorFrame(masm);
+  __ Ret();
+
+
+  // -------------------------------------------
+  // Don't adapt arguments.
+  // -------------------------------------------
+  __ bind(&dont_adapt_arguments);
+  __ Jump(a3);
+
+  __ bind(&stack_overflow);
+  {
+    FrameScope frame(masm, StackFrame::MANUAL);
+    EnterArgumentsAdaptorFrame(masm);
+    __ InvokeBuiltin(Builtins::STACK_OVERFLOW, CALL_FUNCTION);
+    __ break_(0xCC);
+  }
+}
+
+
+#undef __
+
+} }  // namespace v8::internal
+
+#endif  // V8_TARGET_ARCH_MIPS64
diff --git a/deps/v8/src/mips64/code-stubs-mips64.cc b/deps/v8/src/mips64/code-stubs-mips64.cc
new file mode 100644
index 00000000000..970792aafa9
--- /dev/null
+++ b/deps/v8/src/mips64/code-stubs-mips64.cc
@@ -0,0 +1,5329 @@
+// Copyright 2012 the V8 project authors. All rights reserved.
+// Use of this source code is governed by a BSD-style license that can be
+// found in the LICENSE file.
+ +#include "src/v8.h" + +#if V8_TARGET_ARCH_MIPS64 + +#include "src/bootstrapper.h" +#include "src/code-stubs.h" +#include "src/codegen.h" +#include "src/regexp-macro-assembler.h" +#include "src/stub-cache.h" + +namespace v8 { +namespace internal { + + +void FastNewClosureStub::InitializeInterfaceDescriptor( + CodeStubInterfaceDescriptor* descriptor) { + Register registers[] = { cp, a2 }; + descriptor->Initialize( + MajorKey(), ARRAY_SIZE(registers), registers, + Runtime::FunctionForId(Runtime::kNewClosureFromStubFailure)->entry); +} + + +void FastNewContextStub::InitializeInterfaceDescriptor( + CodeStubInterfaceDescriptor* descriptor) { + Register registers[] = { cp, a1 }; + descriptor->Initialize(MajorKey(), ARRAY_SIZE(registers), registers); +} + + +void ToNumberStub::InitializeInterfaceDescriptor( + CodeStubInterfaceDescriptor* descriptor) { + Register registers[] = { cp, a0 }; + descriptor->Initialize(MajorKey(), ARRAY_SIZE(registers), registers); +} + + +void NumberToStringStub::InitializeInterfaceDescriptor( + CodeStubInterfaceDescriptor* descriptor) { + Register registers[] = { cp, a0 }; + descriptor->Initialize( + MajorKey(), ARRAY_SIZE(registers), registers, + Runtime::FunctionForId(Runtime::kNumberToStringRT)->entry); +} + + +void FastCloneShallowArrayStub::InitializeInterfaceDescriptor( + CodeStubInterfaceDescriptor* descriptor) { + Register registers[] = { cp, a3, a2, a1 }; + Representation representations[] = { + Representation::Tagged(), + Representation::Tagged(), + Representation::Smi(), + Representation::Tagged() }; + descriptor->Initialize( + MajorKey(), ARRAY_SIZE(registers), registers, + Runtime::FunctionForId(Runtime::kCreateArrayLiteralStubBailout)->entry, + representations); +} + + +void FastCloneShallowObjectStub::InitializeInterfaceDescriptor( + CodeStubInterfaceDescriptor* descriptor) { + Register registers[] = { cp, a3, a2, a1, a0 }; + descriptor->Initialize( + MajorKey(), ARRAY_SIZE(registers), registers, + Runtime::FunctionForId(Runtime::kCreateObjectLiteral)->entry); +} + + +void CallFunctionStub::InitializeInterfaceDescriptor( + CodeStubInterfaceDescriptor* descriptor) { + UNIMPLEMENTED(); +} + + +void CallConstructStub::InitializeInterfaceDescriptor( + CodeStubInterfaceDescriptor* descriptor) { + UNIMPLEMENTED(); +} + + +void CreateAllocationSiteStub::InitializeInterfaceDescriptor( + CodeStubInterfaceDescriptor* descriptor) { + Register registers[] = { cp, a2, a3 }; + descriptor->Initialize(MajorKey(), ARRAY_SIZE(registers), registers); +} + + +void RegExpConstructResultStub::InitializeInterfaceDescriptor( + CodeStubInterfaceDescriptor* descriptor) { + Register registers[] = { cp, a2, a1, a0 }; + descriptor->Initialize( + MajorKey(), ARRAY_SIZE(registers), registers, + Runtime::FunctionForId(Runtime::kRegExpConstructResult)->entry); +} + + +void TransitionElementsKindStub::InitializeInterfaceDescriptor( + CodeStubInterfaceDescriptor* descriptor) { + Register registers[] = { cp, a0, a1 }; + Address entry = + Runtime::FunctionForId(Runtime::kTransitionElementsKind)->entry; + descriptor->Initialize(MajorKey(), ARRAY_SIZE(registers), registers, + FUNCTION_ADDR(entry)); +} + + +void CompareNilICStub::InitializeInterfaceDescriptor( + CodeStubInterfaceDescriptor* descriptor) { + Register registers[] = { cp, a0 }; + descriptor->Initialize(MajorKey(), ARRAY_SIZE(registers), registers, + FUNCTION_ADDR(CompareNilIC_Miss)); + descriptor->SetMissHandler( + ExternalReference(IC_Utility(IC::kCompareNilIC_Miss), isolate())); +} + + +const Register 
InterfaceDescriptor::ContextRegister() { return cp; } + + +static void InitializeArrayConstructorDescriptor( + CodeStub::Major major, CodeStubInterfaceDescriptor* descriptor, + int constant_stack_parameter_count) { + // register state + // cp -- context + // a0 -- number of arguments + // a1 -- function + // a2 -- allocation site with elements kind + Address deopt_handler = Runtime::FunctionForId( + Runtime::kArrayConstructor)->entry; + + if (constant_stack_parameter_count == 0) { + Register registers[] = { cp, a1, a2 }; + descriptor->Initialize(major, ARRAY_SIZE(registers), registers, + deopt_handler, NULL, constant_stack_parameter_count, + JS_FUNCTION_STUB_MODE); + } else { + // stack param count needs (constructor pointer, and single argument) + Register registers[] = { cp, a1, a2, a0 }; + Representation representations[] = { + Representation::Tagged(), + Representation::Tagged(), + Representation::Tagged(), + Representation::Integer32() }; + descriptor->Initialize(major, ARRAY_SIZE(registers), registers, a0, + deopt_handler, representations, + constant_stack_parameter_count, + JS_FUNCTION_STUB_MODE, PASS_ARGUMENTS); + } +} + + +static void InitializeInternalArrayConstructorDescriptor( + CodeStub::Major major, CodeStubInterfaceDescriptor* descriptor, + int constant_stack_parameter_count) { + // register state + // cp -- context + // a0 -- number of arguments + // a1 -- constructor function + Address deopt_handler = Runtime::FunctionForId( + Runtime::kInternalArrayConstructor)->entry; + + if (constant_stack_parameter_count == 0) { + Register registers[] = { cp, a1 }; + descriptor->Initialize(major, ARRAY_SIZE(registers), registers, + deopt_handler, NULL, constant_stack_parameter_count, + JS_FUNCTION_STUB_MODE); + } else { + // stack param count needs (constructor pointer, and single argument) + Register registers[] = { cp, a1, a0 }; + Representation representations[] = { + Representation::Tagged(), + Representation::Tagged(), + Representation::Integer32() }; + descriptor->Initialize(major, ARRAY_SIZE(registers), registers, a0, + deopt_handler, representations, + constant_stack_parameter_count, + JS_FUNCTION_STUB_MODE, PASS_ARGUMENTS); + } +} + + +void ArrayNoArgumentConstructorStub::InitializeInterfaceDescriptor( + CodeStubInterfaceDescriptor* descriptor) { + InitializeArrayConstructorDescriptor(MajorKey(), descriptor, 0); +} + + +void ArraySingleArgumentConstructorStub::InitializeInterfaceDescriptor( + CodeStubInterfaceDescriptor* descriptor) { + InitializeArrayConstructorDescriptor(MajorKey(), descriptor, 1); +} + + +void ArrayNArgumentsConstructorStub::InitializeInterfaceDescriptor( + CodeStubInterfaceDescriptor* descriptor) { + InitializeArrayConstructorDescriptor(MajorKey(), descriptor, -1); +} + + +void ToBooleanStub::InitializeInterfaceDescriptor( + CodeStubInterfaceDescriptor* descriptor) { + Register registers[] = { cp, a0 }; + descriptor->Initialize(MajorKey(), ARRAY_SIZE(registers), registers, + FUNCTION_ADDR(ToBooleanIC_Miss)); + descriptor->SetMissHandler( + ExternalReference(IC_Utility(IC::kToBooleanIC_Miss), isolate())); +} + + +void InternalArrayNoArgumentConstructorStub::InitializeInterfaceDescriptor( + CodeStubInterfaceDescriptor* descriptor) { + InitializeInternalArrayConstructorDescriptor(MajorKey(), descriptor, 0); +} + + +void InternalArraySingleArgumentConstructorStub::InitializeInterfaceDescriptor( + CodeStubInterfaceDescriptor* descriptor) { + InitializeInternalArrayConstructorDescriptor(MajorKey(), descriptor, 1); +} + + +void 
InternalArrayNArgumentsConstructorStub::InitializeInterfaceDescriptor( + CodeStubInterfaceDescriptor* descriptor) { + InitializeInternalArrayConstructorDescriptor(MajorKey(), descriptor, -1); +} + + +void BinaryOpICStub::InitializeInterfaceDescriptor( + CodeStubInterfaceDescriptor* descriptor) { + Register registers[] = { cp, a1, a0 }; + descriptor->Initialize(MajorKey(), ARRAY_SIZE(registers), registers, + FUNCTION_ADDR(BinaryOpIC_Miss)); + descriptor->SetMissHandler( + ExternalReference(IC_Utility(IC::kBinaryOpIC_Miss), isolate())); +} + + +void BinaryOpWithAllocationSiteStub::InitializeInterfaceDescriptor( + CodeStubInterfaceDescriptor* descriptor) { + Register registers[] = { cp, a2, a1, a0 }; + descriptor->Initialize(MajorKey(), ARRAY_SIZE(registers), registers, + FUNCTION_ADDR(BinaryOpIC_MissWithAllocationSite)); +} + + +void StringAddStub::InitializeInterfaceDescriptor( + CodeStubInterfaceDescriptor* descriptor) { + Register registers[] = { cp, a1, a0 }; + descriptor->Initialize(MajorKey(), ARRAY_SIZE(registers), registers, + Runtime::FunctionForId(Runtime::kStringAdd)->entry); +} + + +void CallDescriptors::InitializeForIsolate(Isolate* isolate) { + { + CallInterfaceDescriptor* descriptor = + isolate->call_descriptor(Isolate::ArgumentAdaptorCall); + Register registers[] = { cp, // context + a1, // JSFunction + a0, // actual number of arguments + a2, // expected number of arguments + }; + Representation representations[] = { + Representation::Tagged(), // context + Representation::Tagged(), // JSFunction + Representation::Integer32(), // actual number of arguments + Representation::Integer32(), // expected number of arguments + }; + descriptor->Initialize(ARRAY_SIZE(registers), registers, representations); + } + { + CallInterfaceDescriptor* descriptor = + isolate->call_descriptor(Isolate::KeyedCall); + Register registers[] = { cp, // context + a2, // key + }; + Representation representations[] = { + Representation::Tagged(), // context + Representation::Tagged(), // key + }; + descriptor->Initialize(ARRAY_SIZE(registers), registers, representations); + } + { + CallInterfaceDescriptor* descriptor = + isolate->call_descriptor(Isolate::NamedCall); + Register registers[] = { cp, // context + a2, // name + }; + Representation representations[] = { + Representation::Tagged(), // context + Representation::Tagged(), // name + }; + descriptor->Initialize(ARRAY_SIZE(registers), registers, representations); + } + { + CallInterfaceDescriptor* descriptor = + isolate->call_descriptor(Isolate::CallHandler); + Register registers[] = { cp, // context + a0, // receiver + }; + Representation representations[] = { + Representation::Tagged(), // context + Representation::Tagged(), // receiver + }; + descriptor->Initialize(ARRAY_SIZE(registers), registers, representations); + } + { + CallInterfaceDescriptor* descriptor = + isolate->call_descriptor(Isolate::ApiFunctionCall); + Register registers[] = { cp, // context + a0, // callee + a4, // call_data + a2, // holder + a1, // api_function_address + }; + Representation representations[] = { + Representation::Tagged(), // context + Representation::Tagged(), // callee + Representation::Tagged(), // call_data + Representation::Tagged(), // holder + Representation::External(), // api_function_address + }; + descriptor->Initialize(ARRAY_SIZE(registers), registers, representations); + } +} + + +#define __ ACCESS_MASM(masm) + + +static void EmitIdenticalObjectComparison(MacroAssembler* masm, + Label* slow, + Condition cc); +static void 
EmitSmiNonsmiComparison(MacroAssembler* masm,
+                                    Register lhs,
+                                    Register rhs,
+                                    Label* rhs_not_nan,
+                                    Label* slow,
+                                    bool strict);
+static void EmitStrictTwoHeapObjectCompare(MacroAssembler* masm,
+                                           Register lhs,
+                                           Register rhs);
+
+
+void HydrogenCodeStub::GenerateLightweightMiss(MacroAssembler* masm) {
+  // Update the static counter each time a new code stub is generated.
+  isolate()->counters()->code_stubs()->Increment();
+
+  CodeStubInterfaceDescriptor* descriptor = GetInterfaceDescriptor();
+  int param_count = descriptor->GetEnvironmentParameterCount();
+  {
+    // Call the runtime system in a fresh internal frame.
+    FrameScope scope(masm, StackFrame::INTERNAL);
+    DCHECK((param_count == 0) ||
+           a0.is(descriptor->GetEnvironmentParameterRegister(param_count - 1)));
+    // Push arguments, adjust sp.
+    __ Dsubu(sp, sp, Operand(param_count * kPointerSize));
+    for (int i = 0; i < param_count; ++i) {
+      // Store argument to stack.
+      __ sd(descriptor->GetEnvironmentParameterRegister(i),
+            MemOperand(sp, (param_count - 1 - i) * kPointerSize));
+    }
+    ExternalReference miss = descriptor->miss_handler();
+    __ CallExternalReference(miss, param_count);
+  }
+
+  __ Ret();
+}
+
+
+// Takes a Smi and converts to an IEEE 64 bit floating point value in two
+// registers. The format is 1 sign bit, 11 exponent bits (biased 1023) and
+// 52 fraction bits (20 in the first word, 32 in the second). Zeros is a
+// scratch register. Destroys the source register. No GC occurs during this
+// stub so you don't have to set up the frame.
+class ConvertToDoubleStub : public PlatformCodeStub {
+ public:
+  ConvertToDoubleStub(Isolate* isolate,
+                      Register result_reg_1,
+                      Register result_reg_2,
+                      Register source_reg,
+                      Register scratch_reg)
+      : PlatformCodeStub(isolate),
+        result1_(result_reg_1),
+        result2_(result_reg_2),
+        source_(source_reg),
+        zeros_(scratch_reg) { }
+
+ private:
+  Register result1_;
+  Register result2_;
+  Register source_;
+  Register zeros_;
+
+  // Minor key encoding in 16 bits.
+  class ModeBits: public BitField<OverwriteMode, 0, 2> {};
+  class OpBits: public BitField<Token::Value, 2, 14> {};
+
+  Major MajorKey() const { return ConvertToDouble; }
+  int MinorKey() const {
+    // Encode the parameters in a unique 16 bit value.
+    return result1_.code() +
+           (result2_.code() << 4) +
+           (source_.code() << 8) +
+           (zeros_.code() << 12);
+  }
+
+  void Generate(MacroAssembler* masm);
+};
+
+
+void ConvertToDoubleStub::Generate(MacroAssembler* masm) {
+#ifndef BIG_ENDIAN_FLOATING_POINT
+  Register exponent = result1_;
+  Register mantissa = result2_;
+#else
+  Register exponent = result2_;
+  Register mantissa = result1_;
+#endif
+  Label not_special;
+  // Convert from Smi to integer.
+  __ SmiUntag(source_);
+  // Move sign bit from source to destination. This works because the sign bit
+  // in the exponent word of the double has the same position and polarity as
+  // the 2's complement sign bit in a Smi.
+  STATIC_ASSERT(HeapNumber::kSignMask == 0x80000000u);
+  __ And(exponent, source_, Operand(HeapNumber::kSignMask));
+  // Subtract from 0 if source was negative.
+  __ subu(at, zero_reg, source_);
+  __ Movn(source_, at, exponent);
+
+  // We have -1, 0 or 1, which we treat specially. Register source_ contains
+  // absolute value: it is either equal to 1 (special case of -1 and 1),
+  // greater than 1 (not a special case) or less than 1 (special case of 0).
+  __ Branch(&not_special, gt, source_, Operand(1));
+
+  // For 1 or -1 we need to or in the 0 exponent (biased to 1023).
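+  // For example, 1.0 is encoded as 0x3FF00000 in the exponent word: sign 0,
+  // biased exponent 1023 (0x3FF) in the 11 bits below the sign, and a zero
+  // mantissa.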
+  const uint32_t exponent_word_for_1 =
+      HeapNumber::kExponentBias << HeapNumber::kExponentShift;
+  // Safe to use 'at' as dest reg here.
+  __ Or(at, exponent, Operand(exponent_word_for_1));
+  __ Movn(exponent, at, source_);  // Write exp when source not 0.
+  // 1, 0 and -1 all have 0 for the second word.
+  __ Ret(USE_DELAY_SLOT);
+  __ mov(mantissa, zero_reg);
+
+  __ bind(&not_special);
+  // Count leading zeros.
+  // Gets the wrong answer for 0, but we already checked for that case above.
+  __ Clz(zeros_, source_);
+  // Compute exponent and or it into the exponent register.
+  // We use mantissa as a scratch register here.
+  __ li(mantissa, Operand(31 + HeapNumber::kExponentBias));
+  __ subu(mantissa, mantissa, zeros_);
+  __ sll(mantissa, mantissa, HeapNumber::kExponentShift);
+  __ Or(exponent, exponent, mantissa);
+
+  // Shift up the source chopping the top bit off.
+  __ Addu(zeros_, zeros_, Operand(1));
+  // This wouldn't work for 1.0 or -1.0 as the shift would be 32 which means 0.
+  __ sllv(source_, source_, zeros_);
+  // Compute lower part of fraction (last 12 bits).
+  __ sll(mantissa, source_, HeapNumber::kMantissaBitsInTopWord);
+  // And the top (top 20 bits).
+  __ srl(source_, source_, 32 - HeapNumber::kMantissaBitsInTopWord);
+
+  __ Ret(USE_DELAY_SLOT);
+  __ or_(exponent, exponent, source_);
+}
+
+
+void DoubleToIStub::Generate(MacroAssembler* masm) {
+  Label out_of_range, only_low, negate, done;
+  Register input_reg = source();
+  Register result_reg = destination();
+
+  int double_offset = offset();
+  // Account for saved regs if input is sp.
+  if (input_reg.is(sp)) double_offset += 3 * kPointerSize;
+
+  Register scratch =
+      GetRegisterThatIsNotOneOf(input_reg, result_reg);
+  Register scratch2 =
+      GetRegisterThatIsNotOneOf(input_reg, result_reg, scratch);
+  Register scratch3 =
+      GetRegisterThatIsNotOneOf(input_reg, result_reg, scratch, scratch2);
+  DoubleRegister double_scratch = kLithiumScratchDouble;
+
+  __ Push(scratch, scratch2, scratch3);
+  if (!skip_fastpath()) {
+    // Load double input.
+    __ ldc1(double_scratch, MemOperand(input_reg, double_offset));
+
+    // Clear cumulative exception flags and save the FCSR.
+    __ cfc1(scratch2, FCSR);
+    __ ctc1(zero_reg, FCSR);
+
+    // Try a conversion to a signed integer.
+    __ Trunc_w_d(double_scratch, double_scratch);
+    // Move the converted value into the result register.
+    __ mfc1(scratch3, double_scratch);
+
+    // Retrieve and restore the FCSR.
+    __ cfc1(scratch, FCSR);
+    __ ctc1(scratch2, FCSR);
+
+    // Check for overflow and NaNs.
+    __ And(
+        scratch, scratch,
+        kFCSROverflowFlagMask | kFCSRUnderflowFlagMask
+           | kFCSRInvalidOpFlagMask);
+    // If we had no exceptions then set result_reg and we are done.
+    Label error;
+    __ Branch(&error, ne, scratch, Operand(zero_reg));
+    __ Move(result_reg, scratch3);
+    __ Branch(&done);
+    __ bind(&error);
+  }
+
+  // Load the double value and perform a manual truncation.
+  Register input_high = scratch2;
+  Register input_low = scratch3;
+
+  __ lw(input_low, MemOperand(input_reg, double_offset));
+  __ lw(input_high, MemOperand(input_reg, double_offset + kIntSize));
+
+  Label normal_exponent, restore_sign;
+  // Extract the biased exponent in result.
+  __ Ext(result_reg,
+         input_high,
+         HeapNumber::kExponentShift,
+         HeapNumber::kExponentBits);
+
+  // Check for Infinity and NaNs, which should return 0.
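+  // (A biased exponent of all ones, 0x7FF, denotes Infinity if the mantissa
+  // is zero and NaN otherwise; ToInt32 maps all of these to 0.)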
+ __ Subu(scratch, result_reg, HeapNumber::kExponentMask); + __ Movz(result_reg, zero_reg, scratch); + __ Branch(&done, eq, scratch, Operand(zero_reg)); + + // Express exponent as delta to (number of mantissa bits + 31). + __ Subu(result_reg, + result_reg, + Operand(HeapNumber::kExponentBias + HeapNumber::kMantissaBits + 31)); + + // If the delta is strictly positive, all bits would be shifted away, + // which means that we can return 0. + __ Branch(&normal_exponent, le, result_reg, Operand(zero_reg)); + __ mov(result_reg, zero_reg); + __ Branch(&done); + + __ bind(&normal_exponent); + const int kShiftBase = HeapNumber::kNonMantissaBitsInTopWord - 1; + // Calculate shift. + __ Addu(scratch, result_reg, Operand(kShiftBase + HeapNumber::kMantissaBits)); + + // Save the sign. + Register sign = result_reg; + result_reg = no_reg; + __ And(sign, input_high, Operand(HeapNumber::kSignMask)); + + // On ARM shifts > 31 bits are valid and will result in zero. On MIPS we need + // to check for this specific case. + Label high_shift_needed, high_shift_done; + __ Branch(&high_shift_needed, lt, scratch, Operand(32)); + __ mov(input_high, zero_reg); + __ Branch(&high_shift_done); + __ bind(&high_shift_needed); + + // Set the implicit 1 before the mantissa part in input_high. + __ Or(input_high, + input_high, + Operand(1 << HeapNumber::kMantissaBitsInTopWord)); + // Shift the mantissa bits to the correct position. + // We don't need to clear non-mantissa bits as they will be shifted away. + // If they weren't, it would mean that the answer is in the 32bit range. + __ sllv(input_high, input_high, scratch); + + __ bind(&high_shift_done); + + // Replace the shifted bits with bits from the lower mantissa word. + Label pos_shift, shift_done; + __ li(at, 32); + __ subu(scratch, at, scratch); + __ Branch(&pos_shift, ge, scratch, Operand(zero_reg)); + + // Negate scratch. + __ Subu(scratch, zero_reg, scratch); + __ sllv(input_low, input_low, scratch); + __ Branch(&shift_done); + + __ bind(&pos_shift); + __ srlv(input_low, input_low, scratch); + + __ bind(&shift_done); + __ Or(input_high, input_high, Operand(input_low)); + // Restore sign if necessary. + __ mov(scratch, sign); + result_reg = sign; + sign = no_reg; + __ Subu(result_reg, zero_reg, input_high); + __ Movz(result_reg, input_high, scratch); + + __ bind(&done); + + __ Pop(scratch, scratch2, scratch3); + __ Ret(); +} + + +void WriteInt32ToHeapNumberStub::GenerateFixedRegStubsAheadOfTime( + Isolate* isolate) { + WriteInt32ToHeapNumberStub stub1(isolate, a1, v0, a2, a3); + WriteInt32ToHeapNumberStub stub2(isolate, a2, v0, a3, a0); + stub1.GetCode(); + stub2.GetCode(); +} + + +// See comment for class, this does NOT work for int32's that are in Smi range. +void WriteInt32ToHeapNumberStub::Generate(MacroAssembler* masm) { + Label max_negative_int; + // the_int_ has the answer which is a signed int32 but not a Smi. + // We test for the special value that has a different exponent. + STATIC_ASSERT(HeapNumber::kSignMask == 0x80000000u); + // Test sign, and save for later conditionals. + __ And(sign_, the_int_, Operand(0x80000000u)); + __ Branch(&max_negative_int, eq, the_int_, Operand(0x80000000u)); + + // Set up the correct exponent in scratch_. All non-Smi int32s have the same. + // A non-Smi integer is 1.xxx * 2^30 so the exponent is 30 (biased). + uint32_t non_smi_exponent = + (HeapNumber::kExponentBias + 30) << HeapNumber::kExponentShift; + __ li(scratch_, Operand(non_smi_exponent)); + // Set the sign bit in scratch_ if the value was negative. 
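+  // (At this point scratch_ holds 0x41D00000, the exponent word of 2^30;
+  // the or below merges in the sign bit.)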
+  __ or_(scratch_, scratch_, sign_);
+  // Subtract from 0 if the value was negative.
+  __ subu(at, zero_reg, the_int_);
+  __ Movn(the_int_, at, sign_);
+  // We should be masking the implicit first digit of the mantissa away here,
+  // but it just ends up combining harmlessly with the last digit of the
+  // exponent that happens to be 1. The sign bit is 0 so we shift 10 to get
+  // the most significant 1 to hit the last bit of the 12 bit sign and exponent.
+  DCHECK(((1 << HeapNumber::kExponentShift) & non_smi_exponent) != 0);
+  const int shift_distance = HeapNumber::kNonMantissaBitsInTopWord - 2;
+  __ srl(at, the_int_, shift_distance);
+  __ or_(scratch_, scratch_, at);
+  __ sw(scratch_, FieldMemOperand(the_heap_number_,
+                                  HeapNumber::kExponentOffset));
+  __ sll(scratch_, the_int_, 32 - shift_distance);
+  __ Ret(USE_DELAY_SLOT);
+  __ sw(scratch_, FieldMemOperand(the_heap_number_,
+                                  HeapNumber::kMantissaOffset));
+
+  __ bind(&max_negative_int);
+  // The max negative int32 is stored as a positive number in the mantissa of
+  // a double because it uses a sign bit instead of using two's complement.
+  // The actual mantissa bits stored are all 0 because the implicit most
+  // significant 1 bit is not stored.
+  non_smi_exponent += 1 << HeapNumber::kExponentShift;
+  __ li(scratch_, Operand(HeapNumber::kSignMask | non_smi_exponent));
+  __ sw(scratch_,
+        FieldMemOperand(the_heap_number_, HeapNumber::kExponentOffset));
+  __ mov(scratch_, zero_reg);
+  __ Ret(USE_DELAY_SLOT);
+  __ sw(scratch_,
+        FieldMemOperand(the_heap_number_, HeapNumber::kMantissaOffset));
+}
+
+
+// Handle the case where the lhs and rhs are the same object.
+// Equality is almost reflexive (everything but NaN), so this is a test
+// for "identity and not NaN".
+static void EmitIdenticalObjectComparison(MacroAssembler* masm,
+                                          Label* slow,
+                                          Condition cc) {
+  Label not_identical;
+  Label heap_number, return_equal;
+  Register exp_mask_reg = t1;
+
+  __ Branch(&not_identical, ne, a0, Operand(a1));
+
+  __ li(exp_mask_reg, Operand(HeapNumber::kExponentMask));
+
+  // Test for NaN. Sadly, we can't just compare to Factory::nan_value(),
+  // so we do the second best thing - test it ourselves.
+  // They are both equal and they are not both Smis so both of them are not
+  // Smis. If it's not a heap number, then return equal.
+  if (cc == less || cc == greater) {
+    __ GetObjectType(a0, t0, t0);
+    __ Branch(slow, greater, t0, Operand(FIRST_SPEC_OBJECT_TYPE));
+  } else {
+    __ GetObjectType(a0, t0, t0);
+    __ Branch(&heap_number, eq, t0, Operand(HEAP_NUMBER_TYPE));
+    // Comparing JS objects with <=, >= is complicated.
+    if (cc != eq) {
+      __ Branch(slow, greater, t0, Operand(FIRST_SPEC_OBJECT_TYPE));
+      // Normally here we fall through to return_equal, but undefined is
+      // special: (undefined == undefined) == true, but
+      // (undefined <= undefined) == false! See ECMAScript 11.8.5.
+      if (cc == less_equal || cc == greater_equal) {
+        __ Branch(&return_equal, ne, t0, Operand(ODDBALL_TYPE));
+        __ LoadRoot(a6, Heap::kUndefinedValueRootIndex);
+        __ Branch(&return_equal, ne, a0, Operand(a6));
+        DCHECK(is_int16(GREATER) && is_int16(LESS));
+        __ Ret(USE_DELAY_SLOT);
+        if (cc == le) {
+          // undefined <= undefined should fail.
+          __ li(v0, Operand(GREATER));
+        } else {
+          // undefined >= undefined should fail.
+          __ li(v0, Operand(LESS));
+        }
+      }
+    }
+  }
+
+  __ bind(&return_equal);
+  DCHECK(is_int16(GREATER) && is_int16(LESS));
+  __ Ret(USE_DELAY_SLOT);
+  if (cc == less) {
+    __ li(v0, Operand(GREATER));  // Things aren't less than themselves.
+  } else if (cc == greater) {
+    __ li(v0, Operand(LESS));     // Things aren't greater than themselves.
+  } else {
+    __ mov(v0, zero_reg);         // Things are <=, >=, ==, === themselves.
+  }
+  // For less and greater we don't have to check for NaN since the result of
+  // x < x is false regardless. For the others here is some code to check
+  // for NaN.
+  if (cc != lt && cc != gt) {
+    __ bind(&heap_number);
+    // It is a heap number, so return non-equal if it's NaN and equal if it's
+    // not NaN.
+
+    // The representation of NaN values has all exponent bits (52..62) set,
+    // and not all mantissa bits (0..51) clear.
+    // Read top bits of double representation (second word of value).
+    __ lwu(a6, FieldMemOperand(a0, HeapNumber::kExponentOffset));
+    // Test that exponent bits are all set.
+    __ And(a7, a6, Operand(exp_mask_reg));
+    // If all bits not set (ne cond), then not a NaN, objects are equal.
+    __ Branch(&return_equal, ne, a7, Operand(exp_mask_reg));
+
+    // Shift out flag and all exponent bits, retaining only mantissa.
+    __ sll(a6, a6, HeapNumber::kNonMantissaBitsInTopWord);
+    // Or with all low-bits of mantissa.
+    __ lwu(a7, FieldMemOperand(a0, HeapNumber::kMantissaOffset));
+    __ Or(v0, a7, Operand(a6));
+    // For equal we already have the right value in v0: Return zero (equal)
+    // if all bits in mantissa are zero (it's an Infinity) and non-zero if
+    // not (it's a NaN). For <= and >= we need to load v0 with the failing
+    // value if it's a NaN.
+    if (cc != eq) {
+      // All-zero means Infinity means equal.
+      __ Ret(eq, v0, Operand(zero_reg));
+      DCHECK(is_int16(GREATER) && is_int16(LESS));
+      __ Ret(USE_DELAY_SLOT);
+      if (cc == le) {
+        __ li(v0, Operand(GREATER));  // NaN <= NaN should fail.
+      } else {
+        __ li(v0, Operand(LESS));     // NaN >= NaN should fail.
+      }
+    }
+  }
+  // No fall through here.
+
+  __ bind(&not_identical);
+}
+
+
+static void EmitSmiNonsmiComparison(MacroAssembler* masm,
+                                    Register lhs,
+                                    Register rhs,
+                                    Label* both_loaded_as_doubles,
+                                    Label* slow,
+                                    bool strict) {
+  DCHECK((lhs.is(a0) && rhs.is(a1)) ||
+         (lhs.is(a1) && rhs.is(a0)));
+
+  Label lhs_is_smi;
+  __ JumpIfSmi(lhs, &lhs_is_smi);
+  // Rhs is a Smi.
+  // Check whether the non-smi is a heap number.
+  __ GetObjectType(lhs, t0, t0);
+  if (strict) {
+    // If lhs was not a number and rhs was a Smi then strict equality cannot
+    // succeed. Return non-equal (lhs is already not zero).
+    __ Ret(USE_DELAY_SLOT, ne, t0, Operand(HEAP_NUMBER_TYPE));
+    __ mov(v0, lhs);
+  } else {
+    // Smi compared non-strictly with a non-Smi non-heap-number. Call
+    // the runtime.
+    __ Branch(slow, ne, t0, Operand(HEAP_NUMBER_TYPE));
+  }
+  // Rhs is a smi, lhs is a number.
+  // Convert smi rhs to double.
+  __ SmiUntag(at, rhs);
+  __ mtc1(at, f14);
+  __ cvt_d_w(f14, f14);
+  __ ldc1(f12, FieldMemOperand(lhs, HeapNumber::kValueOffset));
+
+  // We now have both loaded as doubles.
+  __ jmp(both_loaded_as_doubles);
+
+  __ bind(&lhs_is_smi);
+  // Lhs is a Smi. Check whether the non-smi is a heap number.
+  __ GetObjectType(rhs, t0, t0);
+  if (strict) {
+    // If rhs was not a number and lhs was a Smi then strict equality cannot
+    // succeed. Return non-equal.
+    __ Ret(USE_DELAY_SLOT, ne, t0, Operand(HEAP_NUMBER_TYPE));
+    __ li(v0, Operand(1));
+  } else {
+    // Smi compared non-strictly with a non-Smi non-heap-number. Call
+    // the runtime.
+    __ Branch(slow, ne, t0, Operand(HEAP_NUMBER_TYPE));
+  }
+
+  // Lhs is a smi, rhs is a number.
+  // Convert smi lhs to double.
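+  // (mtc1 moves the untagged 32-bit integer into an FPU register and
+  // cvt_d_w converts it from int32 to double.)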
+ __ SmiUntag(at, lhs); + __ mtc1(at, f12); + __ cvt_d_w(f12, f12); + __ ldc1(f14, FieldMemOperand(rhs, HeapNumber::kValueOffset)); + // Fall through to both_loaded_as_doubles. +} + + +static void EmitStrictTwoHeapObjectCompare(MacroAssembler* masm, + Register lhs, + Register rhs) { + // If either operand is a JS object or an oddball value, then they are + // not equal since their pointers are different. + // There is no test for undetectability in strict equality. + STATIC_ASSERT(LAST_TYPE == LAST_SPEC_OBJECT_TYPE); + Label first_non_object; + // Get the type of the first operand into a2 and compare it with + // FIRST_SPEC_OBJECT_TYPE. + __ GetObjectType(lhs, a2, a2); + __ Branch(&first_non_object, less, a2, Operand(FIRST_SPEC_OBJECT_TYPE)); + + // Return non-zero. + Label return_not_equal; + __ bind(&return_not_equal); + __ Ret(USE_DELAY_SLOT); + __ li(v0, Operand(1)); + + __ bind(&first_non_object); + // Check for oddballs: true, false, null, undefined. + __ Branch(&return_not_equal, eq, a2, Operand(ODDBALL_TYPE)); + + __ GetObjectType(rhs, a3, a3); + __ Branch(&return_not_equal, greater, a3, Operand(FIRST_SPEC_OBJECT_TYPE)); + + // Check for oddballs: true, false, null, undefined. + __ Branch(&return_not_equal, eq, a3, Operand(ODDBALL_TYPE)); + + // Now that we have the types we might as well check for + // internalized-internalized. + STATIC_ASSERT(kInternalizedTag == 0 && kStringTag == 0); + __ Or(a2, a2, Operand(a3)); + __ And(at, a2, Operand(kIsNotStringMask | kIsNotInternalizedMask)); + __ Branch(&return_not_equal, eq, at, Operand(zero_reg)); +} + + +static void EmitCheckForTwoHeapNumbers(MacroAssembler* masm, + Register lhs, + Register rhs, + Label* both_loaded_as_doubles, + Label* not_heap_numbers, + Label* slow) { + __ GetObjectType(lhs, a3, a2); + __ Branch(not_heap_numbers, ne, a2, Operand(HEAP_NUMBER_TYPE)); + __ ld(a2, FieldMemOperand(rhs, HeapObject::kMapOffset)); + // If first was a heap number & second wasn't, go to slow case. + __ Branch(slow, ne, a3, Operand(a2)); + + // Both are heap numbers. Load them up then jump to the code we have + // for that. + __ ldc1(f12, FieldMemOperand(lhs, HeapNumber::kValueOffset)); + __ ldc1(f14, FieldMemOperand(rhs, HeapNumber::kValueOffset)); + + __ jmp(both_loaded_as_doubles); +} + + +// Fast negative check for internalized-to-internalized equality. +static void EmitCheckForInternalizedStringsOrObjects(MacroAssembler* masm, + Register lhs, + Register rhs, + Label* possible_strings, + Label* not_both_strings) { + DCHECK((lhs.is(a0) && rhs.is(a1)) || + (lhs.is(a1) && rhs.is(a0))); + + // a2 is object type of rhs. + Label object_test; + STATIC_ASSERT(kInternalizedTag == 0 && kStringTag == 0); + __ And(at, a2, Operand(kIsNotStringMask)); + __ Branch(&object_test, ne, at, Operand(zero_reg)); + __ And(at, a2, Operand(kIsNotInternalizedMask)); + __ Branch(possible_strings, ne, at, Operand(zero_reg)); + __ GetObjectType(rhs, a3, a3); + __ Branch(not_both_strings, ge, a3, Operand(FIRST_NONSTRING_TYPE)); + __ And(at, a3, Operand(kIsNotInternalizedMask)); + __ Branch(possible_strings, ne, at, Operand(zero_reg)); + + // Both are internalized strings. We already checked they weren't the same + // pointer so they are not equal. + __ Ret(USE_DELAY_SLOT); + __ li(v0, Operand(1)); // Non-zero indicates not equal. 
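+  // (Internalized strings are deduplicated, so equal content implies equal
+  // pointers; pointer inequality therefore implies inequality.)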
+
+  __ bind(&object_test);
+  __ Branch(not_both_strings, lt, a2, Operand(FIRST_SPEC_OBJECT_TYPE));
+  __ GetObjectType(rhs, a2, a3);
+  __ Branch(not_both_strings, lt, a3, Operand(FIRST_SPEC_OBJECT_TYPE));
+
+  // If both objects are undetectable, they are equal. Otherwise, they
+  // are not equal, since they are different objects and an object is not
+  // equal to undefined.
+  __ ld(a3, FieldMemOperand(lhs, HeapObject::kMapOffset));
+  __ lbu(a2, FieldMemOperand(a2, Map::kBitFieldOffset));
+  __ lbu(a3, FieldMemOperand(a3, Map::kBitFieldOffset));
+  __ and_(a0, a2, a3);
+  __ And(a0, a0, Operand(1 << Map::kIsUndetectable));
+  __ Ret(USE_DELAY_SLOT);
+  __ xori(v0, a0, 1 << Map::kIsUndetectable);
+}
+
+
+static void ICCompareStub_CheckInputType(MacroAssembler* masm,
+                                         Register input,
+                                         Register scratch,
+                                         CompareIC::State expected,
+                                         Label* fail) {
+  Label ok;
+  if (expected == CompareIC::SMI) {
+    __ JumpIfNotSmi(input, fail);
+  } else if (expected == CompareIC::NUMBER) {
+    __ JumpIfSmi(input, &ok);
+    __ CheckMap(input, scratch, Heap::kHeapNumberMapRootIndex, fail,
+                DONT_DO_SMI_CHECK);
+  }
+  // We could be strict about internalized/string here, but as long as
+  // hydrogen doesn't care, the stub doesn't have to care either.
+  __ bind(&ok);
+}
+
+
+// On entry a1 (lhs) and a0 (rhs) are the values to be compared.
+// On exit v0 is 0, positive or negative to indicate the result of
+// the comparison.
+void ICCompareStub::GenerateGeneric(MacroAssembler* masm) {
+  Register lhs = a1;
+  Register rhs = a0;
+  Condition cc = GetCondition();
+
+  Label miss;
+  ICCompareStub_CheckInputType(masm, lhs, a2, left_, &miss);
+  ICCompareStub_CheckInputType(masm, rhs, a3, right_, &miss);
+
+  Label slow;  // Call builtin.
+  Label not_smis, both_loaded_as_doubles;
+
+  Label not_two_smis, smi_done;
+  __ Or(a2, a1, a0);
+  __ JumpIfNotSmi(a2, &not_two_smis);
+  __ SmiUntag(a1);
+  __ SmiUntag(a0);
+
+  __ Ret(USE_DELAY_SLOT);
+  __ dsubu(v0, a1, a0);
+  __ bind(&not_two_smis);
+
+  // NOTICE! This code is only reached after a smi-fast-case check, so
+  // it is certain that at least one operand isn't a smi.
+
+  // Handle the case where the objects are identical. Either returns the answer
+  // or goes to slow. Only falls through if the objects were not identical.
+  EmitIdenticalObjectComparison(masm, &slow, cc);
+
+  // If either is a Smi (we know that not both are), then they can only
+  // be strictly equal if the other is a HeapNumber.
+  STATIC_ASSERT(kSmiTag == 0);
+  DCHECK_EQ(0, Smi::FromInt(0));
+  __ And(a6, lhs, Operand(rhs));
+  __ JumpIfNotSmi(a6, &not_smis, a4);
+  // One operand is a smi. EmitSmiNonsmiComparison generates code that can:
+  // 1) Return the answer.
+  // 2) Go to slow.
+  // 3) Fall through to both_loaded_as_doubles.
+  // 4) Jump to rhs_not_nan.
+  // In cases 3 and 4 we have found out we were dealing with a number-number
+  // comparison and the numbers have been loaded into f12 and f14 as doubles,
+  // or in GP registers (a0, a1, a2, a3) depending on the presence of the FPU.
+  EmitSmiNonsmiComparison(masm, lhs, rhs,
+                          &both_loaded_as_doubles, &slow, strict());
+
+  __ bind(&both_loaded_as_doubles);
+  // f12, f14 are the double representations of the left hand side
+  // and the right hand side if we have FPU. Otherwise a2, a3 represent
+  // left hand side and a0, a1 represent right hand side.
+
+  Label nan;
+  __ li(a4, Operand(LESS));
+  __ li(a5, Operand(GREATER));
+  __ li(a6, Operand(EQUAL));
+
+  // Check if either rhs or lhs is NaN.
+  __ BranchF(NULL, &nan, eq, f12, f14);
+
+  // Check if LESS condition is satisfied. If true, conditionally move the
+  // result to v0.
+  if (kArchVariant != kMips64r6) {
+    __ c(OLT, D, f12, f14);
+    __ Movt(v0, a4);
+    // Use the previous check to conditionally store the opposite result
+    // (GREATER) to v0. If rhs is equal to lhs, this will be corrected in
+    // the next check.
+    __ Movf(v0, a5);
+    // Check if EQUAL condition is satisfied. If true, conditionally move
+    // the result to v0.
+    __ c(EQ, D, f12, f14);
+    __ Movt(v0, a6);
+  } else {
+    Label skip;
+    __ BranchF(USE_DELAY_SLOT, &skip, NULL, lt, f12, f14);
+    __ mov(v0, a4);  // Return LESS as result.
+
+    __ BranchF(USE_DELAY_SLOT, &skip, NULL, eq, f12, f14);
+    __ mov(v0, a6);  // Return EQUAL as result.
+
+    __ mov(v0, a5);  // Return GREATER as result.
+    __ bind(&skip);
+  }
+  __ Ret();
+
+  __ bind(&nan);
+  // NaN comparisons always fail.
+  // Load whatever we need in v0 to make the comparison fail.
+  DCHECK(is_int16(GREATER) && is_int16(LESS));
+  __ Ret(USE_DELAY_SLOT);
+  if (cc == lt || cc == le) {
+    __ li(v0, Operand(GREATER));
+  } else {
+    __ li(v0, Operand(LESS));
+  }
+
+
+  __ bind(&not_smis);
+  // At this point we know we are dealing with two different objects,
+  // and neither of them is a Smi. The objects are in lhs_ and rhs_.
+  if (strict()) {
+    // This returns non-equal for some object types, or falls through if it
+    // was not lucky.
+    EmitStrictTwoHeapObjectCompare(masm, lhs, rhs);
+  }
+
+  Label check_for_internalized_strings;
+  Label flat_string_check;
+  // Check for heap-number-heap-number comparison. Can jump to slow case,
+  // or load both doubles and jump to the code that handles
+  // that case. If the inputs are not doubles then jumps to
+  // check_for_internalized_strings.
+  // In this case a2 will contain the type of lhs_.
+  EmitCheckForTwoHeapNumbers(masm,
+                             lhs,
+                             rhs,
+                             &both_loaded_as_doubles,
+                             &check_for_internalized_strings,
+                             &flat_string_check);
+
+  __ bind(&check_for_internalized_strings);
+  if (cc == eq && !strict()) {
+    // Returns an answer for two internalized strings or two
+    // detectable objects.
+    // Otherwise jumps to string case or not both strings case.
+    // Assumes that a2 is the type of lhs_ on entry.
+    EmitCheckForInternalizedStringsOrObjects(
+        masm, lhs, rhs, &flat_string_check, &slow);
+  }
+
+  // Check for both being sequential ASCII strings, and inline if that is the
+  // case.
+  __ bind(&flat_string_check);
+
+  __ JumpIfNonSmisNotBothSequentialAsciiStrings(lhs, rhs, a2, a3, &slow);
+
+  __ IncrementCounter(isolate()->counters()->string_compare_native(), 1, a2,
+                      a3);
+  if (cc == eq) {
+    StringCompareStub::GenerateFlatAsciiStringEquals(masm,
+                                                     lhs,
+                                                     rhs,
+                                                     a2,
+                                                     a3,
+                                                     a4);
+  } else {
+    StringCompareStub::GenerateCompareFlatAsciiStrings(masm,
+                                                       lhs,
+                                                       rhs,
+                                                       a2,
+                                                       a3,
+                                                       a4,
+                                                       a5);
+  }
+  // Never falls through to here.
+
+  __ bind(&slow);
+  // Prepare for call to builtin. Push object pointers: lhs (a1) first,
+  // rhs (a0) second.
+  __ Push(lhs, rhs);
+  // Figure out which native to call and setup the arguments.
+  Builtins::JavaScript native;
+  if (cc == eq) {
+    native = strict() ? Builtins::STRICT_EQUALS : Builtins::EQUALS;
+  } else {
+    native = Builtins::COMPARE;
+    int ncr;  // NaN compare result.
+    if (cc == lt || cc == le) {
+      ncr = GREATER;
+    } else {
+      DCHECK(cc == gt || cc == ge);  // Remaining cases.
+      ncr = LESS;
+    }
+    __ li(a0, Operand(Smi::FromInt(ncr)));
+    __ push(a0);
+  }
+
+  // Call the native; it returns -1 (less), 0 (equal), or 1 (greater)
+  // tagged as a small integer.
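+  // (For COMPARE, the extra ncr argument tells the builtin what to answer
+  // when either operand is NaN, so that, e.g., both (NaN < x) and
+  // (NaN >= x) evaluate to false.)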
+ __ InvokeBuiltin(native, JUMP_FUNCTION); + + __ bind(&miss); + GenerateMiss(masm); +} + + +void StoreRegistersStateStub::Generate(MacroAssembler* masm) { + __ mov(t9, ra); + __ pop(ra); + __ PushSafepointRegisters(); + __ Jump(t9); +} + + +void RestoreRegistersStateStub::Generate(MacroAssembler* masm) { + __ mov(t9, ra); + __ pop(ra); + __ PopSafepointRegisters(); + __ Jump(t9); +} + + +void StoreBufferOverflowStub::Generate(MacroAssembler* masm) { + // We don't allow a GC during a store buffer overflow so there is no need to + // store the registers in any particular way, but we do have to store and + // restore them. + __ MultiPush(kJSCallerSaved | ra.bit()); + if (save_doubles_ == kSaveFPRegs) { + __ MultiPushFPU(kCallerSavedFPU); + } + const int argument_count = 1; + const int fp_argument_count = 0; + const Register scratch = a1; + + AllowExternalCallThatCantCauseGC scope(masm); + __ PrepareCallCFunction(argument_count, fp_argument_count, scratch); + __ li(a0, Operand(ExternalReference::isolate_address(isolate()))); + __ CallCFunction( + ExternalReference::store_buffer_overflow_function(isolate()), + argument_count); + if (save_doubles_ == kSaveFPRegs) { + __ MultiPopFPU(kCallerSavedFPU); + } + + __ MultiPop(kJSCallerSaved | ra.bit()); + __ Ret(); +} + + +void MathPowStub::Generate(MacroAssembler* masm) { + const Register base = a1; + const Register exponent = a2; + const Register heapnumbermap = a5; + const Register heapnumber = v0; + const DoubleRegister double_base = f2; + const DoubleRegister double_exponent = f4; + const DoubleRegister double_result = f0; + const DoubleRegister double_scratch = f6; + const FPURegister single_scratch = f8; + const Register scratch = t1; + const Register scratch2 = a7; + + Label call_runtime, done, int_exponent; + if (exponent_type_ == ON_STACK) { + Label base_is_smi, unpack_exponent; + // The exponent and base are supplied as arguments on the stack. + // This can only happen if the stub is called from non-optimized code. + // Load input parameters from stack to double registers. + __ ld(base, MemOperand(sp, 1 * kPointerSize)); + __ ld(exponent, MemOperand(sp, 0 * kPointerSize)); + + __ LoadRoot(heapnumbermap, Heap::kHeapNumberMapRootIndex); + + __ UntagAndJumpIfSmi(scratch, base, &base_is_smi); + __ ld(scratch, FieldMemOperand(base, JSObject::kMapOffset)); + __ Branch(&call_runtime, ne, scratch, Operand(heapnumbermap)); + + __ ldc1(double_base, FieldMemOperand(base, HeapNumber::kValueOffset)); + __ jmp(&unpack_exponent); + + __ bind(&base_is_smi); + __ mtc1(scratch, single_scratch); + __ cvt_d_w(double_base, single_scratch); + __ bind(&unpack_exponent); + + __ UntagAndJumpIfSmi(scratch, exponent, &int_exponent); + + __ ld(scratch, FieldMemOperand(exponent, JSObject::kMapOffset)); + __ Branch(&call_runtime, ne, scratch, Operand(heapnumbermap)); + __ ldc1(double_exponent, + FieldMemOperand(exponent, HeapNumber::kValueOffset)); + } else if (exponent_type_ == TAGGED) { + // Base is already in double_base. + __ UntagAndJumpIfSmi(scratch, exponent, &int_exponent); + + __ ldc1(double_exponent, + FieldMemOperand(exponent, HeapNumber::kValueOffset)); + } + + if (exponent_type_ != INTEGER) { + Label int_exponent_convert; + // Detect integer exponents stored as double. + __ EmitFPUTruncate(kRoundToMinusInf, + scratch, + double_exponent, + at, + double_scratch, + scratch2, + kCheckForInexactConversion); + // scratch2 == 0 means there was no conversion error. 
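+    // (An exponent such as 3.0 truncates exactly and takes the integer fast
+    // path below; 0.5 raises the inexact flag and stays on the double path.)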
+    __ Branch(&int_exponent_convert, eq, scratch2, Operand(zero_reg));
+
+    if (exponent_type_ == ON_STACK) {
+      // Detect square root case. Crankshaft detects constant +/-0.5 at
+      // compile time and uses DoMathPowHalf instead. We then skip this check
+      // for non-constant cases of +/-0.5 as these hardly occur.
+      Label not_plus_half;
+
+      // Test for 0.5.
+      __ Move(double_scratch, 0.5);
+      __ BranchF(USE_DELAY_SLOT,
+                 &not_plus_half,
+                 NULL,
+                 ne,
+                 double_exponent,
+                 double_scratch);
+      // double_scratch can be overwritten in the delay slot.
+      // Calculates square root of base. Check for the special case of
+      // Math.pow(-Infinity, 0.5) == Infinity (ECMA spec, 15.8.2.13).
+      __ Move(double_scratch, -V8_INFINITY);
+      __ BranchF(USE_DELAY_SLOT, &done, NULL, eq, double_base, double_scratch);
+      __ neg_d(double_result, double_scratch);
+
+      // Add +0 to convert -0 to +0.
+      __ add_d(double_scratch, double_base, kDoubleRegZero);
+      __ sqrt_d(double_result, double_scratch);
+      __ jmp(&done);
+
+      __ bind(&not_plus_half);
+      __ Move(double_scratch, -0.5);
+      __ BranchF(USE_DELAY_SLOT,
+                 &call_runtime,
+                 NULL,
+                 ne,
+                 double_exponent,
+                 double_scratch);
+      // double_scratch can be overwritten in the delay slot.
+      // Calculates square root of base. Check for the special case of
+      // Math.pow(-Infinity, -0.5) == 0 (ECMA spec, 15.8.2.13).
+      __ Move(double_scratch, -V8_INFINITY);
+      __ BranchF(USE_DELAY_SLOT, &done, NULL, eq, double_base, double_scratch);
+      __ Move(double_result, kDoubleRegZero);
+
+      // Add +0 to convert -0 to +0.
+      __ add_d(double_scratch, double_base, kDoubleRegZero);
+      __ Move(double_result, 1);
+      __ sqrt_d(double_scratch, double_scratch);
+      __ div_d(double_result, double_result, double_scratch);
+      __ jmp(&done);
+    }
+
+    __ push(ra);
+    {
+      AllowExternalCallThatCantCauseGC scope(masm);
+      __ PrepareCallCFunction(0, 2, scratch2);
+      __ MovToFloatParameters(double_base, double_exponent);
+      __ CallCFunction(
+          ExternalReference::power_double_double_function(isolate()),
+          0, 2);
+    }
+    __ pop(ra);
+    __ MovFromFloatResult(double_result);
+    __ jmp(&done);
+
+    __ bind(&int_exponent_convert);
+  }
+
+  // Calculate power with integer exponent.
+  __ bind(&int_exponent);
+
+  // Get two copies of exponent in the registers scratch and exponent.
+  if (exponent_type_ == INTEGER) {
+    __ mov(scratch, exponent);
+  } else {
+    // Exponent has previously been stored into scratch as untagged integer.
+    __ mov(exponent, scratch);
+  }
+
+  __ mov_d(double_scratch, double_base);  // Back up base.
+  __ Move(double_result, 1.0);
+
+  // Get absolute value of exponent.
+  Label positive_exponent;
+  __ Branch(&positive_exponent, ge, scratch, Operand(zero_reg));
+  __ Dsubu(scratch, zero_reg, scratch);
+  __ bind(&positive_exponent);
+
+  Label while_true, no_carry, loop_end;
+  __ bind(&while_true);
+
+  __ And(scratch2, scratch, 1);
+
+  __ Branch(&no_carry, eq, scratch2, Operand(zero_reg));
+  __ mul_d(double_result, double_result, double_scratch);
+  __ bind(&no_carry);
+
+  __ dsra(scratch, scratch, 1);
+
+  __ Branch(&loop_end, eq, scratch, Operand(zero_reg));
+  __ mul_d(double_scratch, double_scratch, double_scratch);
+
+  __ Branch(&while_true);
+
+  __ bind(&loop_end);
+
+  __ Branch(&done, ge, exponent, Operand(zero_reg));
+  __ Move(double_scratch, 1.0);
+  __ div_d(double_result, double_scratch, double_result);
+  // Test whether result is zero. Bail out to check for subnormal result.
+  // Due to subnormals, x^-y == (1/x)^y does not hold in all cases.
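+  // (For example 2^-1074: 2^1074 overflows to +Infinity, so its reciprocal
+  // is 0.0, while the true result is the smallest subnormal, ~4.9e-324.)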
+ __ BranchF(&done, NULL, ne, double_result, kDoubleRegZero); + + // double_exponent may not contain the exponent value if the input was a + // smi. We set it with exponent value before bailing out. + __ mtc1(exponent, single_scratch); + __ cvt_d_w(double_exponent, single_scratch); + + // Returning or bailing out. + Counters* counters = isolate()->counters(); + if (exponent_type_ == ON_STACK) { + // The arguments are still on the stack. + __ bind(&call_runtime); + __ TailCallRuntime(Runtime::kMathPowRT, 2, 1); + + // The stub is called from non-optimized code, which expects the result + // as heap number in exponent. + __ bind(&done); + __ AllocateHeapNumber( + heapnumber, scratch, scratch2, heapnumbermap, &call_runtime); + __ sdc1(double_result, + FieldMemOperand(heapnumber, HeapNumber::kValueOffset)); + DCHECK(heapnumber.is(v0)); + __ IncrementCounter(counters->math_pow(), 1, scratch, scratch2); + __ DropAndRet(2); + } else { + __ push(ra); + { + AllowExternalCallThatCantCauseGC scope(masm); + __ PrepareCallCFunction(0, 2, scratch); + __ MovToFloatParameters(double_base, double_exponent); + __ CallCFunction( + ExternalReference::power_double_double_function(isolate()), + 0, 2); + } + __ pop(ra); + __ MovFromFloatResult(double_result); + + __ bind(&done); + __ IncrementCounter(counters->math_pow(), 1, scratch, scratch2); + __ Ret(); + } +} + + +bool CEntryStub::NeedsImmovableCode() { + return true; +} + + +void CodeStub::GenerateStubsAheadOfTime(Isolate* isolate) { + CEntryStub::GenerateAheadOfTime(isolate); + WriteInt32ToHeapNumberStub::GenerateFixedRegStubsAheadOfTime(isolate); + StoreBufferOverflowStub::GenerateFixedRegStubsAheadOfTime(isolate); + StubFailureTrampolineStub::GenerateAheadOfTime(isolate); + ArrayConstructorStubBase::GenerateStubsAheadOfTime(isolate); + CreateAllocationSiteStub::GenerateAheadOfTime(isolate); + BinaryOpICStub::GenerateAheadOfTime(isolate); + StoreRegistersStateStub::GenerateAheadOfTime(isolate); + RestoreRegistersStateStub::GenerateAheadOfTime(isolate); + BinaryOpICWithAllocationSiteStub::GenerateAheadOfTime(isolate); +} + + +void StoreRegistersStateStub::GenerateAheadOfTime(Isolate* isolate) { + StoreRegistersStateStub stub(isolate); + stub.GetCode(); +} + + +void RestoreRegistersStateStub::GenerateAheadOfTime(Isolate* isolate) { + RestoreRegistersStateStub stub(isolate); + stub.GetCode(); +} + + +void CodeStub::GenerateFPStubs(Isolate* isolate) { + SaveFPRegsMode mode = kSaveFPRegs; + CEntryStub save_doubles(isolate, 1, mode); + StoreBufferOverflowStub stub(isolate, mode); + // These stubs might already be in the snapshot, detect that and don't + // regenerate, which would lead to code stub initialization state being messed + // up. 
+ Code* save_doubles_code; + if (!save_doubles.FindCodeInCache(&save_doubles_code)) { + save_doubles_code = *save_doubles.GetCode(); + } + Code* store_buffer_overflow_code; + if (!stub.FindCodeInCache(&store_buffer_overflow_code)) { + store_buffer_overflow_code = *stub.GetCode(); + } + isolate->set_fp_stubs_generated(true); +} + + +void CEntryStub::GenerateAheadOfTime(Isolate* isolate) { + CEntryStub stub(isolate, 1, kDontSaveFPRegs); + stub.GetCode(); +} + + +void CEntryStub::Generate(MacroAssembler* masm) { + // Called from JavaScript; parameters are on stack as if calling JS function + // s0: number of arguments including receiver + // s1: size of arguments excluding receiver + // s2: pointer to builtin function + // fp: frame pointer (restored after C call) + // sp: stack pointer (restored as callee's sp after C call) + // cp: current context (C callee-saved) + + ProfileEntryHookStub::MaybeCallEntryHook(masm); + + // NOTE: s0-s2 hold the arguments of this function instead of a0-a2. + // The reason for this is that these arguments would need to be saved anyway + // so it's faster to set them up directly. + // See MacroAssembler::PrepareCEntryArgs and PrepareCEntryFunction. + + // Compute the argv pointer in a callee-saved register. + __ Daddu(s1, sp, s1); + + // Enter the exit frame that transitions from JavaScript to C++. + FrameScope scope(masm, StackFrame::MANUAL); + __ EnterExitFrame(save_doubles_); + + // s0: number of arguments including receiver (C callee-saved) + // s1: pointer to first argument (C callee-saved) + // s2: pointer to builtin function (C callee-saved) + + // Prepare arguments for C routine. + // a0 = argc + __ mov(a0, s0); + // a1 = argv (set in the delay slot after find_ra below). + + // We are calling compiled C/C++ code. a0 and a1 hold our two arguments. We + // also need to reserve the 4 argument slots on the stack. + + __ AssertStackIsAligned(); + + __ li(a2, Operand(ExternalReference::isolate_address(isolate()))); + + // To let the GC traverse the return address of the exit frames, we need to + // know where the return address is. The CEntryStub is unmovable, so + // we can store the address on the stack to be able to find it again and + // we never have to restore it, because it will not change. + { Assembler::BlockTrampolinePoolScope block_trampoline_pool(masm); + // This branch-and-link sequence is needed to find the current PC on mips, + // saved to the ra register. + // Use masm-> here instead of the double-underscore macro since extra + // coverage code can interfere with the proper calculation of ra. + Label find_ra; + masm->bal(&find_ra); // bal exposes branch delay slot. + masm->mov(a1, s1); + masm->bind(&find_ra); + + // Adjust the value in ra to point to the correct return location, 2nd + // instruction past the real call into C code (the jalr(t9)), and push it. + // This is the return address of the exit frame. + const int kNumInstructionsToJump = 5; + masm->Daddu(ra, ra, kNumInstructionsToJump * kInt32Size); + masm->sd(ra, MemOperand(sp)); // This spot was reserved in EnterExitFrame. + // Stack space reservation moved to the branch delay slot below. + // Stack is still aligned. + + // Call the C routine. + masm->mov(t9, s2); // Function pointer to t9 to conform to ABI for PIC. + masm->jalr(t9); + // Set up sp in the delay slot. + masm->daddiu(sp, sp, -kCArgsSlotsSize); + // Make sure the stored 'ra' points to this position. 
+ DCHECK_EQ(kNumInstructionsToJump, + masm->InstructionsGeneratedSince(&find_ra)); + } + + // Runtime functions should not return 'the hole'. Allowing it to escape may + // lead to crashes in the IC code later. + if (FLAG_debug_code) { + Label okay; + __ LoadRoot(a4, Heap::kTheHoleValueRootIndex); + __ Branch(&okay, ne, v0, Operand(a4)); + __ stop("The hole escaped"); + __ bind(&okay); + } + + // Check result for exception sentinel. + Label exception_returned; + __ LoadRoot(a4, Heap::kExceptionRootIndex); + __ Branch(&exception_returned, eq, a4, Operand(v0)); + + ExternalReference pending_exception_address( + Isolate::kPendingExceptionAddress, isolate()); + + // Check that there is no pending exception, otherwise we + // should have returned the exception sentinel. + if (FLAG_debug_code) { + Label okay; + __ li(a2, Operand(pending_exception_address)); + __ ld(a2, MemOperand(a2)); + __ LoadRoot(a4, Heap::kTheHoleValueRootIndex); + // Cannot use check here as it attempts to generate call into runtime. + __ Branch(&okay, eq, a4, Operand(a2)); + __ stop("Unexpected pending exception"); + __ bind(&okay); + } + + // Exit C frame and return. + // v0:v1: result + // sp: stack pointer + // fp: frame pointer + // s0: still holds argc (callee-saved). + __ LeaveExitFrame(save_doubles_, s0, true, EMIT_RETURN); + + // Handling of exception. + __ bind(&exception_returned); + + // Retrieve the pending exception. + __ li(a2, Operand(pending_exception_address)); + __ ld(v0, MemOperand(a2)); + + // Clear the pending exception. + __ li(a3, Operand(isolate()->factory()->the_hole_value())); + __ sd(a3, MemOperand(a2)); + + // Special handling of termination exceptions which are uncatchable + // by javascript code. + Label throw_termination_exception; + __ LoadRoot(a4, Heap::kTerminationExceptionRootIndex); + __ Branch(&throw_termination_exception, eq, v0, Operand(a4)); + + // Handle normal exception. + __ Throw(v0); + + __ bind(&throw_termination_exception); + __ ThrowUncatchable(v0); +} + + +void JSEntryStub::GenerateBody(MacroAssembler* masm, bool is_construct) { + Label invoke, handler_entry, exit; + Isolate* isolate = masm->isolate(); + + // TODO(plind): unify the ABI description here. + // Registers: + // a0: entry address + // a1: function + // a2: receiver + // a3: argc + // a4 (a4): on mips64 + + // Stack: + // 0 arg slots on mips64 (4 args slots on mips) + // args -- in a4/a4 on mips64, on stack on mips + + ProfileEntryHookStub::MaybeCallEntryHook(masm); + + // Save callee saved registers on the stack. + __ MultiPush(kCalleeSaved | ra.bit()); + + // Save callee-saved FPU registers. + __ MultiPushFPU(kCalleeSavedFPU); + // Set up the reserved register for 0.0. + __ Move(kDoubleRegZero, 0.0); + + // Load argv in s0 register. + if (kMipsAbi == kN64) { + __ mov(s0, a4); // 5th parameter in mips64 a4 (a4) register. + } else { // Abi O32. + // 5th parameter on stack for O32 abi. + int offset_to_argv = (kNumCalleeSaved + 1) * kPointerSize; + offset_to_argv += kNumCalleeSavedFPU * kDoubleSize; + __ ld(s0, MemOperand(sp, offset_to_argv + kCArgsSlotsSize)); + } + + __ InitializeRootRegister(); + + // We build an EntryFrame. + __ li(a7, Operand(-1)); // Push a bad frame pointer to fail if it is used. + int marker = is_construct ? 
StackFrame::ENTRY_CONSTRUCT : StackFrame::ENTRY; + __ li(a6, Operand(Smi::FromInt(marker))); + __ li(a5, Operand(Smi::FromInt(marker))); + ExternalReference c_entry_fp(Isolate::kCEntryFPAddress, isolate); + __ li(a4, Operand(c_entry_fp)); + __ ld(a4, MemOperand(a4)); + __ Push(a7, a6, a5, a4); + // Set up frame pointer for the frame to be pushed. + __ daddiu(fp, sp, -EntryFrameConstants::kCallerFPOffset); + + // Registers: + // a0: entry_address + // a1: function + // a2: receiver_pointer + // a3: argc + // s0: argv + // + // Stack: + // caller fp | + // function slot | entry frame + // context slot | + // bad fp (0xff...f) | + // callee saved registers + ra + // [ O32: 4 args slots] + // args + + // If this is the outermost JS call, set js_entry_sp value. + Label non_outermost_js; + ExternalReference js_entry_sp(Isolate::kJSEntrySPAddress, isolate); + __ li(a5, Operand(ExternalReference(js_entry_sp))); + __ ld(a6, MemOperand(a5)); + __ Branch(&non_outermost_js, ne, a6, Operand(zero_reg)); + __ sd(fp, MemOperand(a5)); + __ li(a4, Operand(Smi::FromInt(StackFrame::OUTERMOST_JSENTRY_FRAME))); + Label cont; + __ b(&cont); + __ nop(); // Branch delay slot nop. + __ bind(&non_outermost_js); + __ li(a4, Operand(Smi::FromInt(StackFrame::INNER_JSENTRY_FRAME))); + __ bind(&cont); + __ push(a4); + + // Jump to a faked try block that does the invoke, with a faked catch + // block that sets the pending exception. + __ jmp(&invoke); + __ bind(&handler_entry); + handler_offset_ = handler_entry.pos(); + // Caught exception: Store result (exception) in the pending exception + // field in the JSEnv and return a failure sentinel. Coming in here the + // fp will be invalid because the PushTryHandler below sets it to 0 to + // signal the existence of the JSEntry frame. + __ li(a4, Operand(ExternalReference(Isolate::kPendingExceptionAddress, + isolate))); + __ sd(v0, MemOperand(a4)); // We come back from 'invoke'. result is in v0. + __ LoadRoot(v0, Heap::kExceptionRootIndex); + __ b(&exit); // b exposes branch delay slot. + __ nop(); // Branch delay slot nop. + + // Invoke: Link this frame into the handler chain. There's only one + // handler block in this code object, so its index is 0. + __ bind(&invoke); + __ PushTryHandler(StackHandler::JS_ENTRY, 0); + // If an exception not caught by another handler occurs, this handler + // returns control to the code after the bal(&invoke) above, which + // restores all kCalleeSaved registers (including cp and fp) to their + // saved values before returning a failure to C. + + // Clear any pending exceptions. + __ LoadRoot(a5, Heap::kTheHoleValueRootIndex); + __ li(a4, Operand(ExternalReference(Isolate::kPendingExceptionAddress, + isolate))); + __ sd(a5, MemOperand(a4)); + + // Invoke the function by calling through JS entry trampoline builtin. + // Notice that we cannot store a reference to the trampoline code directly in + // this stub, because runtime stubs are not traversed when doing GC. + + // Registers: + // a0: entry_address + // a1: function + // a2: receiver_pointer + // a3: argc + // s0: argv + // + // Stack: + // handler frame + // entry frame + // callee saved registers + ra + // [ O32: 4 args slots] + // args + + if (is_construct) { + ExternalReference construct_entry(Builtins::kJSConstructEntryTrampoline, + isolate); + __ li(a4, Operand(construct_entry)); + } else { + ExternalReference entry(Builtins::kJSEntryTrampoline, masm->isolate()); + __ li(a4, Operand(entry)); + } + __ ld(t9, MemOperand(a4)); // Deref address. + // Call JSEntryTrampoline. 
+  __ daddiu(t9, t9, Code::kHeaderSize - kHeapObjectTag);
+  __ Call(t9);
+
+  // Unlink this frame from the handler chain.
+  __ PopTryHandler();
+
+  __ bind(&exit);  // v0 holds result
+  // Check if the current stack frame is marked as the outermost JS frame.
+  Label non_outermost_js_2;
+  __ pop(a5);
+  __ Branch(&non_outermost_js_2,
+            ne,
+            a5,
+            Operand(Smi::FromInt(StackFrame::OUTERMOST_JSENTRY_FRAME)));
+  __ li(a5, Operand(ExternalReference(js_entry_sp)));
+  __ sd(zero_reg, MemOperand(a5));
+  __ bind(&non_outermost_js_2);
+
+  // Restore the top frame descriptors from the stack.
+  __ pop(a5);
+  __ li(a4, Operand(ExternalReference(Isolate::kCEntryFPAddress,
+                                      isolate)));
+  __ sd(a5, MemOperand(a4));
+
+  // Reset the stack to the callee saved registers.
+  __ daddiu(sp, sp, -EntryFrameConstants::kCallerFPOffset);
+
+  // Restore callee-saved fpu registers.
+  __ MultiPopFPU(kCalleeSavedFPU);
+
+  // Restore callee saved registers from the stack.
+  __ MultiPop(kCalleeSaved | ra.bit());
+  // Return.
+  __ Jump(ra);
+}
+
+
+// Uses registers a0 to a4.
+// Expected input (depending on whether args are in registers or on the stack):
+// * object: a0 or at sp + 1 * kPointerSize.
+// * function: a1 or at sp.
+//
+// An inlined call site may have been generated before calling this stub.
+// In this case the offset to the inline site to patch is passed on the stack,
+// in the safepoint slot for register a4.
+void InstanceofStub::Generate(MacroAssembler* masm) {
+  // Call site inlining and patching implies arguments in registers.
+  DCHECK(HasArgsInRegisters() || !HasCallSiteInlineCheck());
+  // ReturnTrueFalse is only implemented for inlined call sites.
+  DCHECK(!ReturnTrueFalseObject() || HasCallSiteInlineCheck());
+
+  // Fixed register usage throughout the stub:
+  const Register object = a0;  // Object (lhs).
+  Register map = a3;  // Map of the object.
+  const Register function = a1;  // Function (rhs).
+  const Register prototype = a4;  // Prototype of the function.
+  const Register inline_site = t1;
+  const Register scratch = a2;
+
+  const int32_t kDeltaToLoadBoolResult = 7 * Assembler::kInstrSize;
+
+  Label slow, loop, is_instance, is_not_instance, not_js_object;
+
+  if (!HasArgsInRegisters()) {
+    __ ld(object, MemOperand(sp, 1 * kPointerSize));
+    __ ld(function, MemOperand(sp, 0));
+  }
+
+  // Check that the left hand is a JS object and load map.
+  __ JumpIfSmi(object, &not_js_object);
+  __ IsObjectJSObjectType(object, map, scratch, &not_js_object);
+
+  // If there is a call site cache don't look in the global cache, but do the
+  // real lookup and update the call site cache.
+  if (!HasCallSiteInlineCheck()) {
+    Label miss;
+    __ LoadRoot(at, Heap::kInstanceofCacheFunctionRootIndex);
+    __ Branch(&miss, ne, function, Operand(at));
+    __ LoadRoot(at, Heap::kInstanceofCacheMapRootIndex);
+    __ Branch(&miss, ne, map, Operand(at));
+    __ LoadRoot(v0, Heap::kInstanceofCacheAnswerRootIndex);
+    __ DropAndRet(HasArgsInRegisters() ? 0 : 2);
+
+    __ bind(&miss);
+  }
+
+  // Get the prototype of the function.
+  __ TryGetFunctionPrototype(function, prototype, scratch, &slow, true);
+
+  // Check that the function prototype is a JS object.
+  __ JumpIfSmi(prototype, &slow);
+  __ IsObjectJSObjectType(prototype, scratch, scratch, &slow);
+
+  // Update the global instanceof or call site inlined cache with the current
+  // map and function. The cached answer will be set when it is known below.
+  if (!HasCallSiteInlineCheck()) {
+    __ StoreRoot(function, Heap::kInstanceofCacheFunctionRootIndex);
+    __ StoreRoot(map, Heap::kInstanceofCacheMapRootIndex);
+  } else {
+    DCHECK(HasArgsInRegisters());
+    // Patch the (relocated) inlined map check.
+
+    // The offset was stored in a4 safepoint slot.
+    // (See LCodeGen::DoDeferredLInstanceOfKnownGlobal).
+    __ LoadFromSafepointRegisterSlot(scratch, a4);
+    __ Dsubu(inline_site, ra, scratch);
+    // Get the map location in scratch and patch it.
+    __ GetRelocatedValue(inline_site, scratch, v1);  // v1 used as scratch.
+    __ sd(map, FieldMemOperand(scratch, Cell::kValueOffset));
+  }
+
+  // Register mapping: a3 is object map and a4 is function prototype.
+  // Get prototype of object into a2.
+  __ ld(scratch, FieldMemOperand(map, Map::kPrototypeOffset));
+
+  // We don't need map any more. Use it as a scratch register.
+  Register scratch2 = map;
+  map = no_reg;
+
+  // Loop through the prototype chain looking for the function prototype.
+  __ LoadRoot(scratch2, Heap::kNullValueRootIndex);
+  __ bind(&loop);
+  __ Branch(&is_instance, eq, scratch, Operand(prototype));
+  __ Branch(&is_not_instance, eq, scratch, Operand(scratch2));
+  __ ld(scratch, FieldMemOperand(scratch, HeapObject::kMapOffset));
+  __ ld(scratch, FieldMemOperand(scratch, Map::kPrototypeOffset));
+  __ Branch(&loop);
+
+  __ bind(&is_instance);
+  DCHECK(Smi::FromInt(0) == 0);
+  if (!HasCallSiteInlineCheck()) {
+    __ mov(v0, zero_reg);
+    __ StoreRoot(v0, Heap::kInstanceofCacheAnswerRootIndex);
+  } else {
+    // Patch the call site to return true.
+    __ LoadRoot(v0, Heap::kTrueValueRootIndex);
+    __ Daddu(inline_site, inline_site, Operand(kDeltaToLoadBoolResult));
+    // Get the boolean result location in scratch and patch it.
+    __ PatchRelocatedValue(inline_site, scratch, v0);
+
+    if (!ReturnTrueFalseObject()) {
+      DCHECK_EQ(Smi::FromInt(0), 0);
+      __ mov(v0, zero_reg);
+    }
+  }
+  __ DropAndRet(HasArgsInRegisters() ? 0 : 2);
+
+  __ bind(&is_not_instance);
+  if (!HasCallSiteInlineCheck()) {
+    __ li(v0, Operand(Smi::FromInt(1)));
+    __ StoreRoot(v0, Heap::kInstanceofCacheAnswerRootIndex);
+  } else {
+    // Patch the call site to return false.
+    __ LoadRoot(v0, Heap::kFalseValueRootIndex);
+    __ Daddu(inline_site, inline_site, Operand(kDeltaToLoadBoolResult));
+    // Get the boolean result location in scratch and patch it.
+    __ PatchRelocatedValue(inline_site, scratch, v0);
+
+    if (!ReturnTrueFalseObject()) {
+      __ li(v0, Operand(Smi::FromInt(1)));
+    }
+  }
+
+  __ DropAndRet(HasArgsInRegisters() ? 0 : 2);
+
+  Label object_not_null, object_not_null_or_smi;
+  __ bind(&not_js_object);
+  // Before null, smi and string value checks, check that the rhs is a function
+  // as for a non-function rhs an exception needs to be thrown.
+  __ JumpIfSmi(function, &slow);
+  __ GetObjectType(function, scratch2, scratch);
+  __ Branch(&slow, ne, scratch, Operand(JS_FUNCTION_TYPE));
+
+  // Null is not instance of anything.
+  __ Branch(&object_not_null,
+            ne,
+            scratch,
+            Operand(isolate()->factory()->null_value()));
+  __ li(v0, Operand(Smi::FromInt(1)));
+  __ DropAndRet(HasArgsInRegisters() ? 0 : 2);
+
+  __ bind(&object_not_null);
+  // Smi values are not instances of anything.
+  __ JumpIfNotSmi(object, &object_not_null_or_smi);
+  __ li(v0, Operand(Smi::FromInt(1)));
+  __ DropAndRet(HasArgsInRegisters() ? 0 : 2);
+
+  __ bind(&object_not_null_or_smi);
+  // String values are not instances of anything.
+  __ IsObjectJSStringType(object, scratch, &slow);
+  __ li(v0, Operand(Smi::FromInt(1)));
+  __ DropAndRet(HasArgsInRegisters() ? 0 : 2);
+
+  // Slow-case. Tail call builtin.
+  __ bind(&slow);
+  if (!ReturnTrueFalseObject()) {
+    if (HasArgsInRegisters()) {
+      __ Push(a0, a1);
+    }
+    __ InvokeBuiltin(Builtins::INSTANCE_OF, JUMP_FUNCTION);
+  } else {
+    {
+      FrameScope scope(masm, StackFrame::INTERNAL);
+      __ Push(a0, a1);
+      __ InvokeBuiltin(Builtins::INSTANCE_OF, CALL_FUNCTION);
+    }
+    __ mov(a0, v0);
+    __ LoadRoot(v0, Heap::kTrueValueRootIndex);
+    __ DropAndRet(HasArgsInRegisters() ? 0 : 2, eq, a0, Operand(zero_reg));
+    __ LoadRoot(v0, Heap::kFalseValueRootIndex);
+    __ DropAndRet(HasArgsInRegisters() ? 0 : 2);
+  }
+}
+
+
+void FunctionPrototypeStub::Generate(MacroAssembler* masm) {
+  Label miss;
+  Register receiver = LoadIC::ReceiverRegister();
+  NamedLoadHandlerCompiler::GenerateLoadFunctionPrototype(masm, receiver, a3,
+                                                          a4, &miss);
+  __ bind(&miss);
+  PropertyAccessCompiler::TailCallBuiltin(
+      masm, PropertyAccessCompiler::MissBuiltin(Code::LOAD_IC));
+}
+
+
+Register InstanceofStub::left() { return a0; }
+
+
+Register InstanceofStub::right() { return a1; }
+
+
+void ArgumentsAccessStub::GenerateReadElement(MacroAssembler* masm) {
+  // The displacement is the offset of the last parameter (if any)
+  // relative to the frame pointer.
+  const int kDisplacement =
+      StandardFrameConstants::kCallerSPOffset - kPointerSize;
+
+  // Check that the key is a smi.
+  Label slow;
+  __ JumpIfNotSmi(a1, &slow);
+
+  // Check if the calling frame is an arguments adaptor frame.
+  Label adaptor;
+  __ ld(a2, MemOperand(fp, StandardFrameConstants::kCallerFPOffset));
+  __ ld(a3, MemOperand(a2, StandardFrameConstants::kContextOffset));
+  __ Branch(&adaptor,
+            eq,
+            a3,
+            Operand(Smi::FromInt(StackFrame::ARGUMENTS_ADAPTOR)));
+
+  // Check index (a1) against formal parameters count limit passed in
+  // through register a0. Use unsigned comparison to get negative
+  // check for free.
+  __ Branch(&slow, hs, a1, Operand(a0));
+
+  // Read the argument from the stack and return it.
+  __ dsubu(a3, a0, a1);
+  __ SmiScale(a7, a3, kPointerSizeLog2);
+  __ Daddu(a3, fp, Operand(a7));
+  __ Ret(USE_DELAY_SLOT);
+  __ ld(v0, MemOperand(a3, kDisplacement));
+
+  // Arguments adaptor case: Check index (a1) against actual arguments
+  // limit found in the arguments adaptor frame. Use unsigned
+  // comparison to get negative check for free.
+  __ bind(&adaptor);
+  __ ld(a0, MemOperand(a2, ArgumentsAdaptorFrameConstants::kLengthOffset));
+  __ Branch(&slow, Ugreater_equal, a1, Operand(a0));
+
+  // Read the argument from the adaptor frame and return it.
+  __ dsubu(a3, a0, a1);
+  __ SmiScale(a7, a3, kPointerSizeLog2);
+  __ Daddu(a3, a2, Operand(a7));
+  __ Ret(USE_DELAY_SLOT);
+  __ ld(v0, MemOperand(a3, kDisplacement));
+
+  // Slow-case: Handle non-smi or out-of-bounds access to arguments
+  // by calling the runtime system.
+  __ bind(&slow);
+  __ push(a1);
+  __ TailCallRuntime(Runtime::kGetArgumentsProperty, 1, 1);
+}
+
+
+void ArgumentsAccessStub::GenerateNewSloppySlow(MacroAssembler* masm) {
+  // sp[0] : number of parameters
+  // sp[4] : receiver displacement
+  // sp[8] : function
+  // Check if the calling frame is an arguments adaptor frame.
+  Label runtime;
+  __ ld(a3, MemOperand(fp, StandardFrameConstants::kCallerFPOffset));
+  __ ld(a2, MemOperand(a3, StandardFrameConstants::kContextOffset));
+  __ Branch(&runtime,
+            ne,
+            a2,
+            Operand(Smi::FromInt(StackFrame::ARGUMENTS_ADAPTOR)));
+
+  // Patch the arguments.length and the parameters pointer in the current
+  // frame.
+  __ ld(a2, MemOperand(a3, ArgumentsAdaptorFrameConstants::kLengthOffset));
+  __ sd(a2, MemOperand(sp, 0 * kPointerSize));
+  __ SmiScale(a7, a2, kPointerSizeLog2);
+  __ Daddu(a3, a3, Operand(a7));
+  __ daddiu(a3, a3, StandardFrameConstants::kCallerSPOffset);
+  __ sd(a3, MemOperand(sp, 1 * kPointerSize));
+
+  __ bind(&runtime);
+  __ TailCallRuntime(Runtime::kNewSloppyArguments, 3, 1);
+}
+
+
+void ArgumentsAccessStub::GenerateNewSloppyFast(MacroAssembler* masm) {
+  // Stack layout:
+  //  sp[0] : number of parameters (tagged)
+  //  sp[4] : address of receiver argument
+  //  sp[8] : function
+  // Registers used over whole function:
+  //  a6 : allocated object (tagged)
+  //  t1 : mapped parameter count (tagged)
+
+  __ ld(a1, MemOperand(sp, 0 * kPointerSize));
+  // a1 = parameter count (tagged)
+
+  // Check if the calling frame is an arguments adaptor frame.
+  Label runtime;
+  Label adaptor_frame, try_allocate;
+  __ ld(a3, MemOperand(fp, StandardFrameConstants::kCallerFPOffset));
+  __ ld(a2, MemOperand(a3, StandardFrameConstants::kContextOffset));
+  __ Branch(&adaptor_frame,
+            eq,
+            a2,
+            Operand(Smi::FromInt(StackFrame::ARGUMENTS_ADAPTOR)));
+
+  // No adaptor, parameter count = argument count.
+  __ mov(a2, a1);
+  __ Branch(&try_allocate);
+
+  // We have an adaptor frame. Patch the parameters pointer.
+  __ bind(&adaptor_frame);
+  __ ld(a2, MemOperand(a3, ArgumentsAdaptorFrameConstants::kLengthOffset));
+  __ SmiScale(t2, a2, kPointerSizeLog2);
+  __ Daddu(a3, a3, Operand(t2));
+  __ Daddu(a3, a3, Operand(StandardFrameConstants::kCallerSPOffset));
+  __ sd(a3, MemOperand(sp, 1 * kPointerSize));
+
+  // a1 = parameter count (tagged)
+  // a2 = argument count (tagged)
+  // Compute the mapped parameter count = min(a1, a2) in a1.
+  Label skip_min;
+  __ Branch(&skip_min, lt, a1, Operand(a2));
+  __ mov(a1, a2);
+  __ bind(&skip_min);
+
+  __ bind(&try_allocate);
+
+  // Compute the sizes of backing store, parameter map, and arguments object.
+  // 1. Parameter map, has 2 extra words containing context and backing store.
+  const int kParameterMapHeaderSize =
+      FixedArray::kHeaderSize + 2 * kPointerSize;
+  // If there are no mapped parameters, we do not need the parameter_map.
+  Label param_map_size;
+  DCHECK_EQ(0, Smi::FromInt(0));
+  __ Branch(USE_DELAY_SLOT, &param_map_size, eq, a1, Operand(zero_reg));
+  __ mov(t1, zero_reg);  // In delay slot: param map size = 0 when a1 == 0.
+  __ SmiScale(t1, a1, kPointerSizeLog2);
+  __ daddiu(t1, t1, kParameterMapHeaderSize);
+  __ bind(&param_map_size);
+
+  // 2. Backing store.
+  __ SmiScale(t2, a2, kPointerSizeLog2);
+  __ Daddu(t1, t1, Operand(t2));
+  __ Daddu(t1, t1, Operand(FixedArray::kHeaderSize));
+
+  // 3. Arguments object.
+  __ Daddu(t1, t1, Operand(Heap::kSloppyArgumentsObjectSize));
+
+  // Do the allocation of all three objects in one go.
+  __ Allocate(t1, v0, a3, a4, &runtime, TAG_OBJECT);
+
+  // v0 = address of new object(s) (tagged)
+  // a2 = argument count (smi-tagged)
+  // Get the arguments boilerplate from the current native context into a4.
+ const int kNormalOffset = + Context::SlotOffset(Context::SLOPPY_ARGUMENTS_MAP_INDEX); + const int kAliasedOffset = + Context::SlotOffset(Context::ALIASED_ARGUMENTS_MAP_INDEX); + + __ ld(a4, MemOperand(cp, Context::SlotOffset(Context::GLOBAL_OBJECT_INDEX))); + __ ld(a4, FieldMemOperand(a4, GlobalObject::kNativeContextOffset)); + Label skip2_ne, skip2_eq; + __ Branch(&skip2_ne, ne, a1, Operand(zero_reg)); + __ ld(a4, MemOperand(a4, kNormalOffset)); + __ bind(&skip2_ne); + + __ Branch(&skip2_eq, eq, a1, Operand(zero_reg)); + __ ld(a4, MemOperand(a4, kAliasedOffset)); + __ bind(&skip2_eq); + + // v0 = address of new object (tagged) + // a1 = mapped parameter count (tagged) + // a2 = argument count (smi-tagged) + // a4 = address of arguments map (tagged) + __ sd(a4, FieldMemOperand(v0, JSObject::kMapOffset)); + __ LoadRoot(a3, Heap::kEmptyFixedArrayRootIndex); + __ sd(a3, FieldMemOperand(v0, JSObject::kPropertiesOffset)); + __ sd(a3, FieldMemOperand(v0, JSObject::kElementsOffset)); + + // Set up the callee in-object property. + STATIC_ASSERT(Heap::kArgumentsCalleeIndex == 1); + __ ld(a3, MemOperand(sp, 2 * kPointerSize)); + __ AssertNotSmi(a3); + const int kCalleeOffset = JSObject::kHeaderSize + + Heap::kArgumentsCalleeIndex * kPointerSize; + __ sd(a3, FieldMemOperand(v0, kCalleeOffset)); + + // Use the length (smi tagged) and set that as an in-object property too. + STATIC_ASSERT(Heap::kArgumentsLengthIndex == 0); + const int kLengthOffset = JSObject::kHeaderSize + + Heap::kArgumentsLengthIndex * kPointerSize; + __ sd(a2, FieldMemOperand(v0, kLengthOffset)); + + // Set up the elements pointer in the allocated arguments object. + // If we allocated a parameter map, a4 will point there, otherwise + // it will point to the backing store. + __ Daddu(a4, v0, Operand(Heap::kSloppyArgumentsObjectSize)); + __ sd(a4, FieldMemOperand(v0, JSObject::kElementsOffset)); + + // v0 = address of new object (tagged) + // a1 = mapped parameter count (tagged) + // a2 = argument count (tagged) + // a4 = address of parameter map or backing store (tagged) + // Initialize parameter map. If there are no mapped arguments, we're done. + Label skip_parameter_map; + Label skip3; + __ Branch(&skip3, ne, a1, Operand(Smi::FromInt(0))); + // Move backing store address to a3, because it is + // expected there when filling in the unmapped arguments. + __ mov(a3, a4); + __ bind(&skip3); + + __ Branch(&skip_parameter_map, eq, a1, Operand(Smi::FromInt(0))); + + __ LoadRoot(a6, Heap::kSloppyArgumentsElementsMapRootIndex); + __ sd(a6, FieldMemOperand(a4, FixedArray::kMapOffset)); + __ Daddu(a6, a1, Operand(Smi::FromInt(2))); + __ sd(a6, FieldMemOperand(a4, FixedArray::kLengthOffset)); + __ sd(cp, FieldMemOperand(a4, FixedArray::kHeaderSize + 0 * kPointerSize)); + __ SmiScale(t2, a1, kPointerSizeLog2); + __ Daddu(a6, a4, Operand(t2)); + __ Daddu(a6, a6, Operand(kParameterMapHeaderSize)); + __ sd(a6, FieldMemOperand(a4, FixedArray::kHeaderSize + 1 * kPointerSize)); + + // Copy the parameter slots and the holes in the arguments. + // We need to fill in mapped_parameter_count slots. They index the context, + // where parameters are stored in reverse order, at + // MIN_CONTEXT_SLOTS .. MIN_CONTEXT_SLOTS+parameter_count-1 + // The mapped parameter thus need to get indices + // MIN_CONTEXT_SLOTS+parameter_count-1 .. + // MIN_CONTEXT_SLOTS+parameter_count-mapped_parameter_count + // We loop from right to left. 
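+  // For instance, with parameter_count == 3 and mapped_parameter_count == 2,
+  // map entry 0 receives context index MIN_CONTEXT_SLOTS + 2 and map entry 1
+  // receives MIN_CONTEXT_SLOTS + 1; looping from the right, entry 1 is
+  // written first.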
+  Label parameters_loop, parameters_test;
+  __ mov(a6, a1);
+  __ ld(t1, MemOperand(sp, 0 * kPointerSize));
+  __ Daddu(t1, t1, Operand(Smi::FromInt(Context::MIN_CONTEXT_SLOTS)));
+  __ Dsubu(t1, t1, Operand(a1));
+  __ LoadRoot(a7, Heap::kTheHoleValueRootIndex);
+  __ SmiScale(t2, a6, kPointerSizeLog2);
+  __ Daddu(a3, a4, Operand(t2));
+  __ Daddu(a3, a3, Operand(kParameterMapHeaderSize));
+
+  // a6 = loop variable (tagged)
+  // a1 = mapping index (tagged)
+  // a3 = address of backing store (tagged)
+  // a4 = address of parameter map (tagged)
+  // a5 = temporary scratch (a.o., for address calculation)
+  // a7 = the hole value
+  __ jmp(&parameters_test);
+
+  __ bind(&parameters_loop);
+
+  __ Dsubu(a6, a6, Operand(Smi::FromInt(1)));
+  __ SmiScale(a5, a6, kPointerSizeLog2);
+  __ Daddu(a5, a5, Operand(kParameterMapHeaderSize - kHeapObjectTag));
+  __ Daddu(t2, a4, a5);
+  __ sd(t1, MemOperand(t2));
+  __ Dsubu(a5, a5, Operand(kParameterMapHeaderSize - FixedArray::kHeaderSize));
+  __ Daddu(t2, a3, a5);
+  __ sd(a7, MemOperand(t2));
+  __ Daddu(t1, t1, Operand(Smi::FromInt(1)));
+  __ bind(&parameters_test);
+  __ Branch(&parameters_loop, ne, a6, Operand(Smi::FromInt(0)));
+
+  __ bind(&skip_parameter_map);
+  // a2 = argument count (tagged)
+  // a3 = address of backing store (tagged)
+  // a5 = scratch
+  // Copy arguments header and remaining slots (if there are any).
+  __ LoadRoot(a5, Heap::kFixedArrayMapRootIndex);
+  __ sd(a5, FieldMemOperand(a3, FixedArray::kMapOffset));
+  __ sd(a2, FieldMemOperand(a3, FixedArray::kLengthOffset));
+
+  Label arguments_loop, arguments_test;
+  __ mov(t1, a1);
+  __ ld(a4, MemOperand(sp, 1 * kPointerSize));
+  __ SmiScale(t2, t1, kPointerSizeLog2);
+  __ Dsubu(a4, a4, Operand(t2));
+  __ jmp(&arguments_test);
+
+  __ bind(&arguments_loop);
+  __ Dsubu(a4, a4, Operand(kPointerSize));
+  __ ld(a6, MemOperand(a4, 0));
+  __ SmiScale(t2, t1, kPointerSizeLog2);
+  __ Daddu(a5, a3, Operand(t2));
+  __ sd(a6, FieldMemOperand(a5, FixedArray::kHeaderSize));
+  __ Daddu(t1, t1, Operand(Smi::FromInt(1)));
+
+  __ bind(&arguments_test);
+  __ Branch(&arguments_loop, lt, t1, Operand(a2));
+
+  // Return and remove the on-stack parameters.
+  __ DropAndRet(3);
+
+  // Do the runtime call to allocate the arguments object.
+  // a2 = argument count (tagged)
+  __ bind(&runtime);
+  __ sd(a2, MemOperand(sp, 0 * kPointerSize));  // Patch argument count.
+  __ TailCallRuntime(Runtime::kNewSloppyArguments, 3, 1);
+}
+
+
+void ArgumentsAccessStub::GenerateNewStrict(MacroAssembler* masm) {
+  // sp[0] : number of parameters
+  // sp[4] : receiver displacement
+  // sp[8] : function
+  // Check if the calling frame is an arguments adaptor frame.
+  Label adaptor_frame, try_allocate, runtime;
+  __ ld(a2, MemOperand(fp, StandardFrameConstants::kCallerFPOffset));
+  __ ld(a3, MemOperand(a2, StandardFrameConstants::kContextOffset));
+  __ Branch(&adaptor_frame,
+            eq,
+            a3,
+            Operand(Smi::FromInt(StackFrame::ARGUMENTS_ADAPTOR)));
+
+  // Get the length from the frame.
+  __ ld(a1, MemOperand(sp, 0));
+  __ Branch(&try_allocate);
+
+  // Patch the arguments.length and the parameters pointer.
+  __ bind(&adaptor_frame);
+  __ ld(a1, MemOperand(a2, ArgumentsAdaptorFrameConstants::kLengthOffset));
+  __ sd(a1, MemOperand(sp, 0));
+  __ SmiScale(at, a1, kPointerSizeLog2);
+
+  __ Daddu(a3, a2, Operand(at));
+
+  __ Daddu(a3, a3, Operand(StandardFrameConstants::kCallerSPOffset));
+  __ sd(a3, MemOperand(sp, 1 * kPointerSize));
+
+  // Try the new space allocation. Start out with computing the size
+  // of the arguments object and the elements array in words.
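+  // (When there are no arguments the elements array is omitted entirely and
+  // only Heap::kStrictArgumentsObjectSize / kPointerSize words are allocated;
+  // see the eq branch below.)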
+ Label add_arguments_object; + __ bind(&try_allocate); + __ Branch(&add_arguments_object, eq, a1, Operand(zero_reg)); + __ SmiUntag(a1); + + __ Daddu(a1, a1, Operand(FixedArray::kHeaderSize / kPointerSize)); + __ bind(&add_arguments_object); + __ Daddu(a1, a1, Operand(Heap::kStrictArgumentsObjectSize / kPointerSize)); + + // Do the allocation of both objects in one go. + __ Allocate(a1, v0, a2, a3, &runtime, + static_cast<AllocationFlags>(TAG_OBJECT | SIZE_IN_WORDS)); + + // Get the arguments boilerplate from the current native context. + __ ld(a4, MemOperand(cp, Context::SlotOffset(Context::GLOBAL_OBJECT_INDEX))); + __ ld(a4, FieldMemOperand(a4, GlobalObject::kNativeContextOffset)); + __ ld(a4, MemOperand(a4, Context::SlotOffset( + Context::STRICT_ARGUMENTS_MAP_INDEX))); + + __ sd(a4, FieldMemOperand(v0, JSObject::kMapOffset)); + __ LoadRoot(a3, Heap::kEmptyFixedArrayRootIndex); + __ sd(a3, FieldMemOperand(v0, JSObject::kPropertiesOffset)); + __ sd(a3, FieldMemOperand(v0, JSObject::kElementsOffset)); + + // Get the length (smi tagged) and set that as an in-object property too. + STATIC_ASSERT(Heap::kArgumentsLengthIndex == 0); + __ ld(a1, MemOperand(sp, 0 * kPointerSize)); + __ AssertSmi(a1); + __ sd(a1, FieldMemOperand(v0, JSObject::kHeaderSize + + Heap::kArgumentsLengthIndex * kPointerSize)); + + Label done; + __ Branch(&done, eq, a1, Operand(zero_reg)); + + // Get the parameters pointer from the stack. + __ ld(a2, MemOperand(sp, 1 * kPointerSize)); + + // Set up the elements pointer in the allocated arguments object and + // initialize the header in the elements fixed array. + __ Daddu(a4, v0, Operand(Heap::kStrictArgumentsObjectSize)); + __ sd(a4, FieldMemOperand(v0, JSObject::kElementsOffset)); + __ LoadRoot(a3, Heap::kFixedArrayMapRootIndex); + __ sd(a3, FieldMemOperand(a4, FixedArray::kMapOffset)); + __ sd(a1, FieldMemOperand(a4, FixedArray::kLengthOffset)); + // Untag the length for the loop. + __ SmiUntag(a1); + + + // Copy the fixed array slots. + Label loop; + // Set up a4 to point to the first array slot. + __ Daddu(a4, a4, Operand(FixedArray::kHeaderSize - kHeapObjectTag)); + __ bind(&loop); + // Pre-decrement a2 with kPointerSize on each iteration. + // Pre-decrement in order to skip receiver. + __ Daddu(a2, a2, Operand(-kPointerSize)); + __ ld(a3, MemOperand(a2)); + // Post-increment a4 with kPointerSize on each iteration. + __ sd(a3, MemOperand(a4)); + __ Daddu(a4, a4, Operand(kPointerSize)); + __ Dsubu(a1, a1, Operand(1)); + __ Branch(&loop, ne, a1, Operand(zero_reg)); + + // Return and remove the on-stack parameters. + __ bind(&done); + __ DropAndRet(3); + + // Do the runtime call to allocate the arguments object. + __ bind(&runtime); + __ TailCallRuntime(Runtime::kNewStrictArguments, 3, 1); +} + + +void RegExpExecStub::Generate(MacroAssembler* masm) { + // Just jump directly to runtime if native RegExp is not selected at compile + // time or if regexp entry in generated code is turned off runtime switch or + // at compilation. +#ifdef V8_INTERPRETED_REGEXP + __ TailCallRuntime(Runtime::kRegExpExecRT, 4, 1); +#else // V8_INTERPRETED_REGEXP + + // Stack frame on entry. + // sp[0]: last_match_info (expected JSArray) + // sp[4]: previous index + // sp[8]: subject string + // sp[12]: JSRegExp object + + const int kLastMatchInfoOffset = 0 * kPointerSize; + const int kPreviousIndexOffset = 1 * kPointerSize; + const int kSubjectOffset = 2 * kPointerSize; + const int kJSRegExpOffset = 3 * kPointerSize; + + Label runtime; + // Allocation of registers for this function. 
These are in callee save + // registers and will be preserved by the call to the native RegExp code, as + // this code is called using the normal C calling convention. When calling + // directly from generated code the native RegExp code will not do a GC and + // therefore the content of these registers are safe to use after the call. + // MIPS - using s0..s2, since we are not using CEntry Stub. + Register subject = s0; + Register regexp_data = s1; + Register last_match_info_elements = s2; + + // Ensure that a RegExp stack is allocated. + ExternalReference address_of_regexp_stack_memory_address = + ExternalReference::address_of_regexp_stack_memory_address( + isolate()); + ExternalReference address_of_regexp_stack_memory_size = + ExternalReference::address_of_regexp_stack_memory_size(isolate()); + __ li(a0, Operand(address_of_regexp_stack_memory_size)); + __ ld(a0, MemOperand(a0, 0)); + __ Branch(&runtime, eq, a0, Operand(zero_reg)); + + // Check that the first argument is a JSRegExp object. + __ ld(a0, MemOperand(sp, kJSRegExpOffset)); + STATIC_ASSERT(kSmiTag == 0); + __ JumpIfSmi(a0, &runtime); + __ GetObjectType(a0, a1, a1); + __ Branch(&runtime, ne, a1, Operand(JS_REGEXP_TYPE)); + + // Check that the RegExp has been compiled (data contains a fixed array). + __ ld(regexp_data, FieldMemOperand(a0, JSRegExp::kDataOffset)); + if (FLAG_debug_code) { + __ SmiTst(regexp_data, a4); + __ Check(nz, + kUnexpectedTypeForRegExpDataFixedArrayExpected, + a4, + Operand(zero_reg)); + __ GetObjectType(regexp_data, a0, a0); + __ Check(eq, + kUnexpectedTypeForRegExpDataFixedArrayExpected, + a0, + Operand(FIXED_ARRAY_TYPE)); + } + + // regexp_data: RegExp data (FixedArray) + // Check the type of the RegExp. Only continue if type is JSRegExp::IRREGEXP. + __ ld(a0, FieldMemOperand(regexp_data, JSRegExp::kDataTagOffset)); + __ Branch(&runtime, ne, a0, Operand(Smi::FromInt(JSRegExp::IRREGEXP))); + + // regexp_data: RegExp data (FixedArray) + // Check that the number of captures fit in the static offsets vector buffer. + __ ld(a2, + FieldMemOperand(regexp_data, JSRegExp::kIrregexpCaptureCountOffset)); + // Check (number_of_captures + 1) * 2 <= offsets vector size + // Or number_of_captures * 2 <= offsets vector size - 2 + // Or number_of_captures <= offsets vector size / 2 - 1 + // Multiplying by 2 comes for free since a2 is smi-tagged. + STATIC_ASSERT(Isolate::kJSRegexpStaticOffsetsVectorSize >= 2); + int temp = Isolate::kJSRegexpStaticOffsetsVectorSize / 2 - 1; + __ Branch(&runtime, hi, a2, Operand(Smi::FromInt(temp))); + + // Reset offset for possibly sliced string. + __ mov(t0, zero_reg); + __ ld(subject, MemOperand(sp, kSubjectOffset)); + __ JumpIfSmi(subject, &runtime); + __ mov(a3, subject); // Make a copy of the original subject string. + __ ld(a0, FieldMemOperand(subject, HeapObject::kMapOffset)); + __ lbu(a0, FieldMemOperand(a0, Map::kInstanceTypeOffset)); + // subject: subject string + // a3: subject string + // a0: subject string instance type + // regexp_data: RegExp data (FixedArray) + // Handle subject string according to its encoding and representation: + // (1) Sequential string? If yes, go to (5). + // (2) Anything but sequential or cons? If yes, go to (6). + // (3) Cons string. If the string is flat, replace subject with first string. + // Otherwise bailout. + // (4) Is subject external? If yes, go to (7). + // (5) Sequential string. Load regexp code according to encoding. + // (E) Carry on. + /// [...] + + // Deferred code at the end of the stub: + // (6) Not a long external string? 
If yes, go to (8).
+  // (7) External string. Make it, offset-wise, look like a sequential string.
+  //     Go to (5).
+  // (8) Short external string or not a string? If yes, bail out to runtime.
+  // (9) Sliced string. Replace subject with parent. Go to (4).
+
+  Label check_underlying;  // (4)
+  Label seq_string;  // (5)
+  Label not_seq_nor_cons;  // (6)
+  Label external_string;  // (7)
+  Label not_long_external;  // (8)
+
+  // (1) Sequential string? If yes, go to (5).
+  __ And(a1,
+         a0,
+         Operand(kIsNotStringMask |
+                 kStringRepresentationMask |
+                 kShortExternalStringMask));
+  STATIC_ASSERT((kStringTag | kSeqStringTag) == 0);
+  __ Branch(&seq_string, eq, a1, Operand(zero_reg));  // Go to (5).
+
+  // (2) Anything but sequential or cons? If yes, go to (6).
+  STATIC_ASSERT(kConsStringTag < kExternalStringTag);
+  STATIC_ASSERT(kSlicedStringTag > kExternalStringTag);
+  STATIC_ASSERT(kIsNotStringMask > kExternalStringTag);
+  STATIC_ASSERT(kShortExternalStringTag > kExternalStringTag);
+  // Go to (6).
+  __ Branch(&not_seq_nor_cons, ge, a1, Operand(kExternalStringTag));
+
+  // (3) Cons string. Check that it's flat.
+  // Replace subject with first string and reload instance type.
+  __ ld(a0, FieldMemOperand(subject, ConsString::kSecondOffset));
+  __ LoadRoot(a1, Heap::kempty_stringRootIndex);
+  __ Branch(&runtime, ne, a0, Operand(a1));
+  __ ld(subject, FieldMemOperand(subject, ConsString::kFirstOffset));
+
+  // (4) Is subject external? If yes, go to (7).
+  __ bind(&check_underlying);
+  __ ld(a0, FieldMemOperand(subject, HeapObject::kMapOffset));
+  __ lbu(a0, FieldMemOperand(a0, Map::kInstanceTypeOffset));
+  STATIC_ASSERT(kSeqStringTag == 0);
+  __ And(at, a0, Operand(kStringRepresentationMask));
+  // The underlying external string is never a short external string.
+  STATIC_ASSERT(ExternalString::kMaxShortLength < ConsString::kMinLength);
+  STATIC_ASSERT(ExternalString::kMaxShortLength < SlicedString::kMinLength);
+  __ Branch(&external_string, ne, at, Operand(zero_reg));  // Go to (7).
+
+  // (5) Sequential string. Load regexp code according to encoding.
+  __ bind(&seq_string);
+  // subject: sequential subject string (or look-alike, external string)
+  // a3: original subject string
+  // Load previous index and check range before a3 is overwritten. We have to
+  // use a3 instead of subject here because subject might have been only made
+  // to look like a sequential string when it actually is an external string.
+  __ ld(a1, MemOperand(sp, kPreviousIndexOffset));
+  __ JumpIfNotSmi(a1, &runtime);
+  __ ld(a3, FieldMemOperand(a3, String::kLengthOffset));
+  __ Branch(&runtime, ls, a3, Operand(a1));
+  __ SmiUntag(a1);
+
+  STATIC_ASSERT(kStringEncodingMask == 4);
+  STATIC_ASSERT(kOneByteStringTag == 4);
+  STATIC_ASSERT(kTwoByteStringTag == 0);
+  __ And(a0, a0, Operand(kStringEncodingMask));  // Non-zero for ASCII.
+  __ ld(t9, FieldMemOperand(regexp_data, JSRegExp::kDataAsciiCodeOffset));
+  __ dsra(a3, a0, 2);  // a3 is 1 for ASCII, 0 for UC16 (used below).
+  __ ld(a5, FieldMemOperand(regexp_data, JSRegExp::kDataUC16CodeOffset));
+  __ Movz(t9, a5, a0);  // If UC16 (a0 is 0), replace t9 w/kDataUC16CodeOffset.
+
+  // (E) Carry on. String handling is done.
+  // t9: irregexp code
+  // Check that the irregexp code has been generated for the actual string
+  // encoding. If it has, the field contains a code object otherwise it
+  // contains a smi (code flushing support).
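+  // (A smi here means the compiled code was flushed; the check below then
+  // falls back to the runtime, which will recompile the regexp if needed.)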
+ __ JumpIfSmi(t9, &runtime); + + // a1: previous index + // a3: encoding of subject string (1 if ASCII, 0 if two_byte); + // t9: code + // subject: Subject string + // regexp_data: RegExp data (FixedArray) + // All checks done. Now push arguments for native regexp code. + __ IncrementCounter(isolate()->counters()->regexp_entry_native(), + 1, a0, a2); + + // Isolates: note we add an additional parameter here (isolate pointer). + const int kRegExpExecuteArguments = 9; + const int kParameterRegisters = (kMipsAbi == kN64) ? 8 : 4; + __ EnterExitFrame(false, kRegExpExecuteArguments - kParameterRegisters); + + // Stack pointer now points to cell where return address is to be written. + // Arguments are before that on the stack or in registers, meaning we + // treat the return address as argument 5. Thus every argument after that + // needs to be shifted back by 1. Since DirectCEntryStub will handle + // allocating space for the c argument slots, we don't need to calculate + // that into the argument positions on the stack. This is how the stack will + // look (sp meaning the value of sp at this moment): + // Abi n64: + // [sp + 1] - Argument 9 + // [sp + 0] - saved ra + // Abi O32: + // [sp + 5] - Argument 9 + // [sp + 4] - Argument 8 + // [sp + 3] - Argument 7 + // [sp + 2] - Argument 6 + // [sp + 1] - Argument 5 + // [sp + 0] - saved ra + + if (kMipsAbi == kN64) { + // Argument 9: Pass current isolate address. + __ li(a0, Operand(ExternalReference::isolate_address(isolate()))); + __ sd(a0, MemOperand(sp, 1 * kPointerSize)); + + // Argument 8: Indicate that this is a direct call from JavaScript. + __ li(a7, Operand(1)); + + // Argument 7: Start (high end) of backtracking stack memory area. + __ li(a0, Operand(address_of_regexp_stack_memory_address)); + __ ld(a0, MemOperand(a0, 0)); + __ li(a2, Operand(address_of_regexp_stack_memory_size)); + __ ld(a2, MemOperand(a2, 0)); + __ daddu(a6, a0, a2); + + // Argument 6: Set the number of capture registers to zero to force global + // regexps to behave as non-global. This does not affect non-global regexps. + __ mov(a5, zero_reg); + + // Argument 5: static offsets vector buffer. + __ li(a4, Operand( + ExternalReference::address_of_static_offsets_vector(isolate()))); + } else { // O32. + DCHECK(kMipsAbi == kO32); + + // Argument 9: Pass current isolate address. + // CFunctionArgumentOperand handles MIPS stack argument slots. + __ li(a0, Operand(ExternalReference::isolate_address(isolate()))); + __ sd(a0, MemOperand(sp, 5 * kPointerSize)); + + // Argument 8: Indicate that this is a direct call from JavaScript. + __ li(a0, Operand(1)); + __ sd(a0, MemOperand(sp, 4 * kPointerSize)); + + // Argument 7: Start (high end) of backtracking stack memory area. + __ li(a0, Operand(address_of_regexp_stack_memory_address)); + __ ld(a0, MemOperand(a0, 0)); + __ li(a2, Operand(address_of_regexp_stack_memory_size)); + __ ld(a2, MemOperand(a2, 0)); + __ daddu(a0, a0, a2); + __ sd(a0, MemOperand(sp, 3 * kPointerSize)); + + // Argument 6: Set the number of capture registers to zero to force global + // regexps to behave as non-global. This does not affect non-global regexps. + __ mov(a0, zero_reg); + __ sd(a0, MemOperand(sp, 2 * kPointerSize)); + + // Argument 5: static offsets vector buffer. 
+ __ li(a0, Operand( + ExternalReference::address_of_static_offsets_vector(isolate()))); + __ sd(a0, MemOperand(sp, 1 * kPointerSize)); + } + + // For arguments 4 and 3 get string length, calculate start of string data + // and calculate the shift of the index (0 for ASCII and 1 for two byte). + __ Daddu(t2, subject, Operand(SeqString::kHeaderSize - kHeapObjectTag)); + __ Xor(a3, a3, Operand(1)); // 1 for 2-byte str, 0 for 1-byte. + // Load the length from the original subject string from the previous stack + // frame. Therefore we have to use fp, which points exactly to two pointer + // sizes below the previous sp. (Because creating a new stack frame pushes + // the previous fp onto the stack and moves up sp by 2 * kPointerSize.) + __ ld(subject, MemOperand(fp, kSubjectOffset + 2 * kPointerSize)); + // If slice offset is not 0, load the length from the original sliced string. + // Argument 4, a3: End of string data + // Argument 3, a2: Start of string data + // Prepare start and end index of the input. + __ dsllv(t1, t0, a3); + __ daddu(t0, t2, t1); + __ dsllv(t1, a1, a3); + __ daddu(a2, t0, t1); + + __ ld(t2, FieldMemOperand(subject, String::kLengthOffset)); + + __ SmiUntag(t2); + __ dsllv(t1, t2, a3); + __ daddu(a3, t0, t1); + // Argument 2 (a1): Previous index. + // Already there + + // Argument 1 (a0): Subject string. + __ mov(a0, subject); + + // Locate the code entry and call it. + __ Daddu(t9, t9, Operand(Code::kHeaderSize - kHeapObjectTag)); + DirectCEntryStub stub(isolate()); + stub.GenerateCall(masm, t9); + + __ LeaveExitFrame(false, no_reg, true); + + // v0: result + // subject: subject string (callee saved) + // regexp_data: RegExp data (callee saved) + // last_match_info_elements: Last match info elements (callee saved) + // Check the result. + Label success; + __ Branch(&success, eq, v0, Operand(1)); + // We expect exactly one result since we force the called regexp to behave + // as non-global. + Label failure; + __ Branch(&failure, eq, v0, Operand(NativeRegExpMacroAssembler::FAILURE)); + // If not exception it can only be retry. Handle that in the runtime system. + __ Branch(&runtime, ne, v0, Operand(NativeRegExpMacroAssembler::EXCEPTION)); + // Result must now be exception. If there is no pending exception already a + // stack overflow (on the backtrack stack) was detected in RegExp code but + // haven't created the exception yet. Handle that in the runtime system. + // TODO(592): Rerunning the RegExp to get the stack overflow exception. + __ li(a1, Operand(isolate()->factory()->the_hole_value())); + __ li(a2, Operand(ExternalReference(Isolate::kPendingExceptionAddress, + isolate()))); + __ ld(v0, MemOperand(a2, 0)); + __ Branch(&runtime, eq, v0, Operand(a1)); + + __ sd(a1, MemOperand(a2, 0)); // Clear pending exception. + + // Check if the exception is a termination. If so, throw as uncatchable. + __ LoadRoot(a0, Heap::kTerminationExceptionRootIndex); + Label termination_exception; + __ Branch(&termination_exception, eq, v0, Operand(a0)); + + __ Throw(v0); + + __ bind(&termination_exception); + __ ThrowUncatchable(v0); + + __ bind(&failure); + // For failure and exception return null. + __ li(v0, Operand(isolate()->factory()->null_value())); + __ DropAndRet(4); + + // Process the result from the native regexp code. + __ bind(&success); + + __ lw(a1, UntagSmiFieldMemOperand( + regexp_data, JSRegExp::kIrregexpCaptureCountOffset)); + // Calculate number of capture registers (number_of_captures + 1) * 2. + __ Daddu(a1, a1, Operand(1)); + __ dsll(a1, a1, 1); // Multiply by 2. 
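+  // For example, a regexp with two capture groups has number_of_captures == 2,
+  // so a1 becomes (2 + 1) * 2 == 6: one start/end offset pair for the whole
+  // match plus one pair per capture group.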
+
+  __ ld(a0, MemOperand(sp, kLastMatchInfoOffset));
+  __ JumpIfSmi(a0, &runtime);
+  __ GetObjectType(a0, a2, a2);
+  __ Branch(&runtime, ne, a2, Operand(JS_ARRAY_TYPE));
+  // Check that the JSArray is in fast case.
+  __ ld(last_match_info_elements,
+        FieldMemOperand(a0, JSArray::kElementsOffset));
+  __ ld(a0, FieldMemOperand(last_match_info_elements, HeapObject::kMapOffset));
+  __ LoadRoot(at, Heap::kFixedArrayMapRootIndex);
+  __ Branch(&runtime, ne, a0, Operand(at));
+  // Check that the last match info has space for the capture registers and the
+  // additional information.
+  __ ld(a0,
+        FieldMemOperand(last_match_info_elements, FixedArray::kLengthOffset));
+  __ Daddu(a2, a1, Operand(RegExpImpl::kLastMatchOverhead));
+
+  __ SmiUntag(at, a0);
+  __ Branch(&runtime, gt, a2, Operand(at));
+
+  // a1: number of capture registers
+  // subject: subject string
+  // Store the capture count.
+  __ SmiTag(a2, a1);  // To smi.
+  __ sd(a2, FieldMemOperand(last_match_info_elements,
+                            RegExpImpl::kLastCaptureCountOffset));
+  // Store last subject and last input.
+  __ sd(subject,
+        FieldMemOperand(last_match_info_elements,
+                        RegExpImpl::kLastSubjectOffset));
+  __ mov(a2, subject);
+  __ RecordWriteField(last_match_info_elements,
+                      RegExpImpl::kLastSubjectOffset,
+                      subject,
+                      a7,
+                      kRAHasNotBeenSaved,
+                      kDontSaveFPRegs);
+  __ mov(subject, a2);
+  __ sd(subject,
+        FieldMemOperand(last_match_info_elements,
+                        RegExpImpl::kLastInputOffset));
+  __ RecordWriteField(last_match_info_elements,
+                      RegExpImpl::kLastInputOffset,
+                      subject,
+                      a7,
+                      kRAHasNotBeenSaved,
+                      kDontSaveFPRegs);
+
+  // Get the static offsets vector filled by the native regexp code.
+  ExternalReference address_of_static_offsets_vector =
+      ExternalReference::address_of_static_offsets_vector(isolate());
+  __ li(a2, Operand(address_of_static_offsets_vector));
+
+  // a1: number of capture registers
+  // a2: offsets vector
+  Label next_capture, done;
+  // Capture register counter starts from number of capture registers and
+  // counts down until wrapping after zero.
+  __ Daddu(a0,
+           last_match_info_elements,
+           Operand(RegExpImpl::kFirstCaptureOffset - kHeapObjectTag));
+  __ bind(&next_capture);
+  __ Dsubu(a1, a1, Operand(1));
+  __ Branch(&done, lt, a1, Operand(zero_reg));
+  // Read the value from the static offsets vector buffer.
+  __ lw(a3, MemOperand(a2, 0));
+  __ daddiu(a2, a2, kIntSize);
+  // Store the smi value in the last match info.
+  __ SmiTag(a3);
+  __ sd(a3, MemOperand(a0, 0));
+  __ Branch(&next_capture, USE_DELAY_SLOT);
+  __ daddiu(a0, a0, kPointerSize);  // In branch delay slot.
+
+  __ bind(&done);
+
+  // Return last match info.
+  __ ld(v0, MemOperand(sp, kLastMatchInfoOffset));
+  __ DropAndRet(4);
+
+  // Do the runtime call to execute the regexp.
+  __ bind(&runtime);
+  __ TailCallRuntime(Runtime::kRegExpExecRT, 4, 1);
+
+  // Deferred code for string handling.
+  // (6) Not a long external string? If yes, go to (8).
+  __ bind(&not_seq_nor_cons);
+  // Go to (8).
+  __ Branch(&not_long_external, gt, a1, Operand(kExternalStringTag));
+
+  // (7) External string. Make it, offset-wise, look like a sequential string.
+  __ bind(&external_string);
+  __ ld(a0, FieldMemOperand(subject, HeapObject::kMapOffset));
+  __ lbu(a0, FieldMemOperand(a0, Map::kInstanceTypeOffset));
+  if (FLAG_debug_code) {
+    // Assert that we do not have a cons or slice (indirect strings) here.
+    // Sequential strings have already been ruled out.
+    __ And(at, a0, Operand(kIsIndirectStringMask));
+    __ Assert(eq,
+              kExternalStringExpectedButNotFound,
+              at,
+              Operand(zero_reg));
+  }
+  __ ld(subject,
+        FieldMemOperand(subject, ExternalString::kResourceDataOffset));
+  // Move the pointer so that offset-wise, it looks like a sequential string.
+  STATIC_ASSERT(SeqTwoByteString::kHeaderSize == SeqOneByteString::kHeaderSize);
+  __ Dsubu(subject,
+           subject,
+           SeqTwoByteString::kHeaderSize - kHeapObjectTag);
+  __ jmp(&seq_string);  // Go to (5).
+
+  // (8) Short external string or not a string? If yes, bail out to runtime.
+  __ bind(&not_long_external);
+  STATIC_ASSERT(kNotStringTag != 0 && kShortExternalStringTag != 0);
+  __ And(at, a1, Operand(kIsNotStringMask | kShortExternalStringMask));
+  __ Branch(&runtime, ne, at, Operand(zero_reg));
+
+  // (9) Sliced string. Replace subject with parent. Go to (4).
+  // Load offset into t0 and replace subject string with parent.
+  __ ld(t0, FieldMemOperand(subject, SlicedString::kOffsetOffset));
+  __ SmiUntag(t0);
+  __ ld(subject, FieldMemOperand(subject, SlicedString::kParentOffset));
+  __ jmp(&check_underlying);  // Go to (4).
+#endif  // V8_INTERPRETED_REGEXP
+}
+
+
+static void GenerateRecordCallTarget(MacroAssembler* masm) {
+  // Cache the called function in a feedback vector slot. Cache states
+  // are uninitialized, monomorphic (indicated by a JSFunction), and
+  // megamorphic.
+  // a0 : number of arguments to the construct function
+  // a1 : the function to call
+  // a2 : Feedback vector
+  // a3 : slot in feedback vector (Smi)
+  Label initialize, done, miss, megamorphic, not_array_function;
+
+  DCHECK_EQ(*TypeFeedbackInfo::MegamorphicSentinel(masm->isolate()),
+            masm->isolate()->heap()->megamorphic_symbol());
+  DCHECK_EQ(*TypeFeedbackInfo::UninitializedSentinel(masm->isolate()),
+            masm->isolate()->heap()->uninitialized_symbol());
+
+  // Load the cache state into a4.
+  __ dsrl(a4, a3, 32 - kPointerSizeLog2);
+  __ Daddu(a4, a2, Operand(a4));
+  __ ld(a4, FieldMemOperand(a4, FixedArray::kHeaderSize));
+
+  // A monomorphic cache hit or an already megamorphic state: invoke the
+  // function without changing the state.
+  __ Branch(&done, eq, a4, Operand(a1));
+
+  if (!FLAG_pretenuring_call_new) {
+    // If we came here, we need to see if we are the array function.
+    // If we didn't have a matching function, and we didn't find the megamorph
+    // sentinel, then we have in the slot either some other function or an
+    // AllocationSite. Do a map check on the object in a3.
+    __ ld(a5, FieldMemOperand(a4, 0));
+    __ LoadRoot(at, Heap::kAllocationSiteMapRootIndex);
+    __ Branch(&miss, ne, a5, Operand(at));
+
+    // Make sure the function is the Array() function
+    __ LoadGlobalFunction(Context::ARRAY_FUNCTION_INDEX, a4);
+    __ Branch(&megamorphic, ne, a1, Operand(a4));
+    __ jmp(&done);
+  }
+
+  __ bind(&miss);
+
+  // A monomorphic miss (i.e, here the cache is not uninitialized) goes
+  // megamorphic.
+  __ LoadRoot(at, Heap::kUninitializedSymbolRootIndex);
+  __ Branch(&initialize, eq, a4, Operand(at));
+  // MegamorphicSentinel is an immortal immovable object (undefined) so no
+  // write-barrier is needed.
+  __ bind(&megamorphic);
+  __ dsrl(a4, a3, 32 - kPointerSizeLog2);
+  __ Daddu(a4, a2, Operand(a4));
+  __ LoadRoot(at, Heap::kMegamorphicSymbolRootIndex);
+  __ sd(at, FieldMemOperand(a4, FixedArray::kHeaderSize));
+  __ jmp(&done);
+
+  // An uninitialized cache is patched with the function.
+  __ bind(&initialize);
+  if (!FLAG_pretenuring_call_new) {
+    // Make sure the function is the Array() function.
+ __ LoadGlobalFunction(Context::ARRAY_FUNCTION_INDEX, a4);
+ __ Branch(&not_array_function, ne, a1, Operand(a4));
+
+ // The target function is the Array constructor; create an AllocationSite
+ // if we don't already have it, and store it in the slot.
+ {
+ FrameScope scope(masm, StackFrame::INTERNAL);
+ const RegList kSavedRegs =
+ 1 << 4 | // a0
+ 1 << 5 | // a1
+ 1 << 6 | // a2
+ 1 << 7; // a3
+
+ // Arguments register must be smi-tagged to call out.
+ __ SmiTag(a0);
+ __ MultiPush(kSavedRegs);
+
+ CreateAllocationSiteStub create_stub(masm->isolate());
+ __ CallStub(&create_stub);
+
+ __ MultiPop(kSavedRegs);
+ __ SmiUntag(a0);
+ }
+ __ Branch(&done);
+
+ __ bind(&not_array_function);
+ }
+
+ __ dsrl(a4, a3, 32 - kPointerSizeLog2);
+ __ Daddu(a4, a2, Operand(a4));
+ __ Daddu(a4, a4, Operand(FixedArray::kHeaderSize - kHeapObjectTag));
+ __ sd(a1, MemOperand(a4, 0));
+
+ __ Push(a4, a2, a1);
+ __ RecordWrite(a2, a4, a1, kRAHasNotBeenSaved, kDontSaveFPRegs,
+ EMIT_REMEMBERED_SET, OMIT_SMI_CHECK);
+ __ Pop(a4, a2, a1);
+
+ __ bind(&done);
+}
+
+
+static void EmitContinueIfStrictOrNative(MacroAssembler* masm, Label* cont) {
+ __ ld(a3, FieldMemOperand(a1, JSFunction::kSharedFunctionInfoOffset));
+
+ // Do not transform the receiver for strict mode functions.
+ int32_t strict_mode_function_mask =
+ 1 << SharedFunctionInfo::kStrictModeBitWithinByte;
+ // Do not transform the receiver for native (compiler hints already in a3).
+ int32_t native_mask = 1 << SharedFunctionInfo::kNativeBitWithinByte;
+
+ __ lbu(a4, FieldMemOperand(a3, SharedFunctionInfo::kStrictModeByteOffset));
+ __ And(at, a4, Operand(strict_mode_function_mask));
+ __ Branch(cont, ne, at, Operand(zero_reg));
+ __ lbu(a4, FieldMemOperand(a3, SharedFunctionInfo::kNativeByteOffset));
+ __ And(at, a4, Operand(native_mask));
+ __ Branch(cont, ne, at, Operand(zero_reg));
+}
+
+
+static void EmitSlowCase(MacroAssembler* masm,
+ int argc,
+ Label* non_function) {
+ // Check for function proxy.
+ __ Branch(non_function, ne, a4, Operand(JS_FUNCTION_PROXY_TYPE));
+ __ push(a1); // Put proxy as additional argument.
+ __ li(a0, Operand(argc + 1, RelocInfo::NONE32));
+ __ mov(a2, zero_reg);
+ __ GetBuiltinFunction(a1, Builtins::CALL_FUNCTION_PROXY);
+ {
+ Handle<Code> adaptor =
+ masm->isolate()->builtins()->ArgumentsAdaptorTrampoline();
+ __ Jump(adaptor, RelocInfo::CODE_TARGET);
+ }
+
+ // CALL_NON_FUNCTION expects the non-function callee as receiver (instead
+ // of the original receiver from the call site).
+ __ bind(non_function);
+ __ sd(a1, MemOperand(sp, argc * kPointerSize));
+ __ li(a0, Operand(argc)); // Set up the number of arguments.
+ __ mov(a2, zero_reg);
+ __ GetBuiltinFunction(a1, Builtins::CALL_NON_FUNCTION);
+ __ Jump(masm->isolate()->builtins()->ArgumentsAdaptorTrampoline(),
+ RelocInfo::CODE_TARGET);
+}
+
+
+static void EmitWrapCase(MacroAssembler* masm, int argc, Label* cont) {
+ // Wrap the receiver and patch it back onto the stack.
+ { FrameScope frame_scope(masm, StackFrame::INTERNAL);
+ __ Push(a1, a3);
+ __ InvokeBuiltin(Builtins::TO_OBJECT, CALL_FUNCTION);
+ __ pop(a1);
+ }
+ __ Branch(USE_DELAY_SLOT, cont);
+ __ sd(v0, MemOperand(sp, argc * kPointerSize));
+}
+
+
+static void CallFunctionNoFeedback(MacroAssembler* masm,
+ int argc, bool needs_checks,
+ bool call_as_method) {
+ // a1 : the function to call
+ Label slow, non_function, wrap, cont;
+
+ if (needs_checks) {
+ // Check that the function is really a JavaScript function.
+ // a1: pushed function (to be verified)
+ __ JumpIfSmi(a1, &non_function);
+
+ // Go to the slow case if we do not have a function.
+ __ GetObjectType(a1, a4, a4);
+ __ Branch(&slow, ne, a4, Operand(JS_FUNCTION_TYPE));
+ }
+
+ // Fast-case: Invoke the function now.
+ // a1: pushed function
+ ParameterCount actual(argc);
+
+ if (call_as_method) {
+ if (needs_checks) {
+ EmitContinueIfStrictOrNative(masm, &cont);
+ }
+
+ // Compute the receiver in sloppy mode.
+ __ ld(a3, MemOperand(sp, argc * kPointerSize));
+
+ if (needs_checks) {
+ __ JumpIfSmi(a3, &wrap);
+ __ GetObjectType(a3, a4, a4);
+ __ Branch(&wrap, lt, a4, Operand(FIRST_SPEC_OBJECT_TYPE));
+ } else {
+ __ jmp(&wrap);
+ }
+
+ __ bind(&cont);
+ }
+ __ InvokeFunction(a1, actual, JUMP_FUNCTION, NullCallWrapper());
+
+ if (needs_checks) {
+ // Slow-case: Non-function called.
+ __ bind(&slow);
+ EmitSlowCase(masm, argc, &non_function);
+ }
+
+ if (call_as_method) {
+ __ bind(&wrap);
+ // Wrap the receiver and patch it back onto the stack.
+ EmitWrapCase(masm, argc, &cont);
+ }
+}
+
+
+void CallFunctionStub::Generate(MacroAssembler* masm) {
+ CallFunctionNoFeedback(masm, argc_, NeedsChecks(), CallAsMethod());
+}
+
+
+void CallConstructStub::Generate(MacroAssembler* masm) {
+ // a0 : number of arguments
+ // a1 : the function to call
+ // a2 : feedback vector
+ // a3 : (only if a2 is not undefined) slot in feedback vector (Smi)
+ Label slow, non_function_call;
+ // Check that the function is not a smi.
+ __ JumpIfSmi(a1, &non_function_call);
+ // Check that the function is a JSFunction.
+ __ GetObjectType(a1, a4, a4);
+ __ Branch(&slow, ne, a4, Operand(JS_FUNCTION_TYPE));
+
+ if (RecordCallTarget()) {
+ GenerateRecordCallTarget(masm);
+
+ __ dsrl(at, a3, 32 - kPointerSizeLog2);
+ __ Daddu(a5, a2, at);
+ if (FLAG_pretenuring_call_new) {
+ // Put the AllocationSite from the feedback vector into a2.
+ // By adding kPointerSize we encode that we know the AllocationSite
+ // entry is at the feedback vector slot given by a3 + 1.
+ __ ld(a2, FieldMemOperand(a5, FixedArray::kHeaderSize + kPointerSize));
+ } else {
+ Label feedback_register_initialized;
+ // Put the AllocationSite from the feedback vector into a2, or undefined.
+ __ ld(a2, FieldMemOperand(a5, FixedArray::kHeaderSize));
+ __ ld(a5, FieldMemOperand(a2, AllocationSite::kMapOffset));
+ __ LoadRoot(at, Heap::kAllocationSiteMapRootIndex);
+ __ Branch(&feedback_register_initialized, eq, a5, Operand(at));
+ __ LoadRoot(a2, Heap::kUndefinedValueRootIndex);
+ __ bind(&feedback_register_initialized);
+ }
+
+ __ AssertUndefinedOrAllocationSite(a2, a5);
+ }
+
+ // Jump to the function-specific construct stub.
+ Register jmp_reg = a4;
+ __ ld(jmp_reg, FieldMemOperand(a1, JSFunction::kSharedFunctionInfoOffset));
+ __ ld(jmp_reg, FieldMemOperand(jmp_reg,
+ SharedFunctionInfo::kConstructStubOffset));
+ __ Daddu(at, jmp_reg, Operand(Code::kHeaderSize - kHeapObjectTag));
+ __ Jump(at);
+
+ // a0: number of arguments
+ // a1: called object
+ // a4: object type
+ Label do_call;
+ __ bind(&slow);
+ __ Branch(&non_function_call, ne, a4, Operand(JS_FUNCTION_PROXY_TYPE));
+ __ GetBuiltinFunction(a1, Builtins::CALL_FUNCTION_PROXY_AS_CONSTRUCTOR);
+ __ jmp(&do_call);
+
+ __ bind(&non_function_call);
+ __ GetBuiltinFunction(a1, Builtins::CALL_NON_FUNCTION_AS_CONSTRUCTOR);
+ __ bind(&do_call);
+ // Set expected number of arguments to zero (not changing a0).
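+ // (The adaptor trampoline compares the actual argument count in a0 with
+ // the expected count in a2 and builds an arguments adaptor frame on
+ // mismatch; an expected count of zero simply forces that generic path.)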
+ __ li(a2, Operand(0, RelocInfo::NONE32));
+ __ Jump(masm->isolate()->builtins()->ArgumentsAdaptorTrampoline(),
+ RelocInfo::CODE_TARGET);
+}
+
+
+// StringCharCodeAtGenerator.
+void StringCharCodeAtGenerator::GenerateFast(MacroAssembler* masm) {
+ Label flat_string;
+ Label ascii_string;
+ Label got_char_code;
+ Label sliced_string;
+
+ DCHECK(!a4.is(index_));
+ DCHECK(!a4.is(result_));
+ DCHECK(!a4.is(object_));
+
+ // If the receiver is a smi trigger the non-string case.
+ __ JumpIfSmi(object_, receiver_not_string_);
+
+ // Fetch the instance type of the receiver into result register.
+ __ ld(result_, FieldMemOperand(object_, HeapObject::kMapOffset));
+ __ lbu(result_, FieldMemOperand(result_, Map::kInstanceTypeOffset));
+ // If the receiver is not a string trigger the non-string case.
+ __ And(a4, result_, Operand(kIsNotStringMask));
+ __ Branch(receiver_not_string_, ne, a4, Operand(zero_reg));
+
+ // If the index is non-smi trigger the non-smi case.
+ __ JumpIfNotSmi(index_, &index_not_smi_);
+
+ __ bind(&got_smi_index_);
+
+ // Check for index out of range.
+ __ ld(a4, FieldMemOperand(object_, String::kLengthOffset));
+ __ Branch(index_out_of_range_, ls, a4, Operand(index_));
+
+ __ SmiUntag(index_);
+
+ StringCharLoadGenerator::Generate(masm,
+ object_,
+ index_,
+ result_,
+ &call_runtime_);
+
+ __ SmiTag(result_);
+ __ bind(&exit_);
+}
+
+
+static void EmitLoadTypeFeedbackVector(MacroAssembler* masm, Register vector) {
+ __ ld(vector, MemOperand(fp, JavaScriptFrameConstants::kFunctionOffset));
+ __ ld(vector, FieldMemOperand(vector,
+ JSFunction::kSharedFunctionInfoOffset));
+ __ ld(vector, FieldMemOperand(vector,
+ SharedFunctionInfo::kFeedbackVectorOffset));
+}
+
+
+void CallIC_ArrayStub::Generate(MacroAssembler* masm) {
+ // a1 - function
+ // a3 - slot id
+ Label miss;
+
+ EmitLoadTypeFeedbackVector(masm, a2);
+
+ __ LoadGlobalFunction(Context::ARRAY_FUNCTION_INDEX, at);
+ __ Branch(&miss, ne, a1, Operand(at));
+
+ __ li(a0, Operand(arg_count()));
+ __ dsrl(at, a3, 32 - kPointerSizeLog2);
+ __ Daddu(at, a2, Operand(at));
+ __ ld(a4, FieldMemOperand(at, FixedArray::kHeaderSize));
+
+ // Verify that a4 contains an AllocationSite.
+ __ ld(a5, FieldMemOperand(a4, HeapObject::kMapOffset));
+ __ LoadRoot(at, Heap::kAllocationSiteMapRootIndex);
+ __ Branch(&miss, ne, a5, Operand(at));
+
+ __ mov(a2, a4);
+ ArrayConstructorStub stub(masm->isolate(), arg_count());
+ __ TailCallStub(&stub);
+
+ __ bind(&miss);
+ GenerateMiss(masm, IC::kCallIC_Customization_Miss);
+
+ // The slow case, we need this no matter what to complete a call after a miss.
+ CallFunctionNoFeedback(masm,
+ arg_count(),
+ true,
+ CallAsMethod());
+
+ // Unreachable.
+ __ stop("Unexpected code address");
+}
+
+
+void CallICStub::Generate(MacroAssembler* masm) {
+ // a1 - function
+ // a3 - slot id (Smi)
+ Label extra_checks_or_miss, slow_start;
+ Label slow, non_function, wrap, cont;
+ Label have_js_function;
+ int argc = state_.arg_count();
+ ParameterCount actual(argc);
+
+ EmitLoadTypeFeedbackVector(masm, a2);
+
+ // The checks. First, does a1 match the recorded monomorphic target?
+ __ dsrl(a4, a3, 32 - kPointerSizeLog2);
+ __ Daddu(a4, a2, Operand(a4));
+ __ ld(a4, FieldMemOperand(a4, FixedArray::kHeaderSize));
+ __ Branch(&extra_checks_or_miss, ne, a1, Operand(a4));
+
+ __ bind(&have_js_function);
+ if (state_.CallAsMethod()) {
+ EmitContinueIfStrictOrNative(masm, &cont);
+ // Compute the receiver in sloppy mode.
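+ // (Sloppy-mode calls must box a primitive receiver with ToObject before
+ // invocation; the &wrap path below performs that boxing.)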
+ __ ld(a3, MemOperand(sp, argc * kPointerSize));
+
+ __ JumpIfSmi(a3, &wrap);
+ __ GetObjectType(a3, a4, a4);
+ __ Branch(&wrap, lt, a4, Operand(FIRST_SPEC_OBJECT_TYPE));
+
+ __ bind(&cont);
+ }
+
+ __ InvokeFunction(a1, actual, JUMP_FUNCTION, NullCallWrapper());
+
+ __ bind(&slow);
+ EmitSlowCase(masm, argc, &non_function);
+
+ if (state_.CallAsMethod()) {
+ __ bind(&wrap);
+ EmitWrapCase(masm, argc, &cont);
+ }
+
+ __ bind(&extra_checks_or_miss);
+ Label miss;
+
+ __ LoadRoot(at, Heap::kMegamorphicSymbolRootIndex);
+ __ Branch(&slow_start, eq, a4, Operand(at));
+ __ LoadRoot(at, Heap::kUninitializedSymbolRootIndex);
+ __ Branch(&miss, eq, a4, Operand(at));
+
+ if (!FLAG_trace_ic) {
+ // We are going megamorphic. If the feedback is a JSFunction, it is fine
+ // to handle it here. More complex cases are dealt with in the runtime.
+ __ AssertNotSmi(a4);
+ __ GetObjectType(a4, a5, a5);
+ __ Branch(&miss, ne, a5, Operand(JS_FUNCTION_TYPE));
+ __ dsrl(a4, a3, 32 - kPointerSizeLog2);
+ __ Daddu(a4, a2, Operand(a4));
+ __ LoadRoot(at, Heap::kMegamorphicSymbolRootIndex);
+ __ sd(at, FieldMemOperand(a4, FixedArray::kHeaderSize));
+ __ Branch(&slow_start);
+ }
+
+ // We are here because tracing is on or we are going monomorphic.
+ __ bind(&miss);
+ GenerateMiss(masm, IC::kCallIC_Miss);
+
+ // The slow case.
+ __ bind(&slow_start);
+ // Check that the function is really a JavaScript function.
+ // a1: pushed function (to be verified)
+ __ JumpIfSmi(a1, &non_function);
+
+ // Go to the slow case if we do not have a function.
+ __ GetObjectType(a1, a4, a4);
+ __ Branch(&slow, ne, a4, Operand(JS_FUNCTION_TYPE));
+ __ Branch(&have_js_function);
+}
+
+
+void CallICStub::GenerateMiss(MacroAssembler* masm, IC::UtilityId id) {
+ // Get the receiver of the function from the stack; 1 ~ return address.
+ __ ld(a4, MemOperand(sp, (state_.arg_count() + 1) * kPointerSize));
+
+ {
+ FrameScope scope(masm, StackFrame::INTERNAL);
+
+ // Push the receiver and the function and feedback info.
+ __ Push(a4, a1, a2, a3);
+
+ // Call the entry.
+ ExternalReference miss = ExternalReference(IC_Utility(id),
+ masm->isolate());
+ __ CallExternalReference(miss, 4);
+
+ // Move result to a1 and exit the internal frame.
+ __ mov(a1, v0);
+ }
+}
+
+
+void StringCharCodeAtGenerator::GenerateSlow(
+ MacroAssembler* masm,
+ const RuntimeCallHelper& call_helper) {
+ __ Abort(kUnexpectedFallthroughToCharCodeAtSlowCase);
+
+ // Index is not a smi.
+ __ bind(&index_not_smi_);
+ // If index is a heap number, try converting it to an integer.
+ __ CheckMap(index_,
+ result_,
+ Heap::kHeapNumberMapRootIndex,
+ index_not_number_,
+ DONT_DO_SMI_CHECK);
+ call_helper.BeforeCall(masm);
+ // Consumed by runtime conversion function:
+ __ Push(object_, index_);
+ if (index_flags_ == STRING_INDEX_IS_NUMBER) {
+ __ CallRuntime(Runtime::kNumberToIntegerMapMinusZero, 1);
+ } else {
+ DCHECK(index_flags_ == STRING_INDEX_IS_ARRAY_INDEX);
+ // NumberToSmi discards numbers that are not exact integers.
+ __ CallRuntime(Runtime::kNumberToSmi, 1);
+ }
+
+ // Save the conversion result before the pop instructions below
+ // have a chance to overwrite it.
+ __ Move(index_, v0);
+ __ pop(object_);
+ // Reload the instance type.
+ __ ld(result_, FieldMemOperand(object_, HeapObject::kMapOffset));
+ __ lbu(result_, FieldMemOperand(result_, Map::kInstanceTypeOffset));
+ call_helper.AfterCall(masm);
+ // If index is still not a smi, it must be out of range.
+ __ JumpIfNotSmi(index_, index_out_of_range_);
+ // Otherwise, return to the fast path.
+ __ Branch(&got_smi_index_); + + // Call runtime. We get here when the receiver is a string and the + // index is a number, but the code of getting the actual character + // is too complex (e.g., when the string needs to be flattened). + __ bind(&call_runtime_); + call_helper.BeforeCall(masm); + __ SmiTag(index_); + __ Push(object_, index_); + __ CallRuntime(Runtime::kStringCharCodeAtRT, 2); + + __ Move(result_, v0); + + call_helper.AfterCall(masm); + __ jmp(&exit_); + + __ Abort(kUnexpectedFallthroughFromCharCodeAtSlowCase); +} + + +// ------------------------------------------------------------------------- +// StringCharFromCodeGenerator + +void StringCharFromCodeGenerator::GenerateFast(MacroAssembler* masm) { + // Fast case of Heap::LookupSingleCharacterStringFromCode. + + DCHECK(!a4.is(result_)); + DCHECK(!a4.is(code_)); + + STATIC_ASSERT(kSmiTag == 0); + DCHECK(IsPowerOf2(String::kMaxOneByteCharCode + 1)); + __ And(a4, + code_, + Operand(kSmiTagMask | + ((~String::kMaxOneByteCharCode) << kSmiTagSize))); + __ Branch(&slow_case_, ne, a4, Operand(zero_reg)); + + + __ LoadRoot(result_, Heap::kSingleCharacterStringCacheRootIndex); + // At this point code register contains smi tagged ASCII char code. + STATIC_ASSERT(kSmiTag == 0); + __ SmiScale(a4, code_, kPointerSizeLog2); + __ Daddu(result_, result_, a4); + __ ld(result_, FieldMemOperand(result_, FixedArray::kHeaderSize)); + __ LoadRoot(a4, Heap::kUndefinedValueRootIndex); + __ Branch(&slow_case_, eq, result_, Operand(a4)); + __ bind(&exit_); +} + + +void StringCharFromCodeGenerator::GenerateSlow( + MacroAssembler* masm, + const RuntimeCallHelper& call_helper) { + __ Abort(kUnexpectedFallthroughToCharFromCodeSlowCase); + + __ bind(&slow_case_); + call_helper.BeforeCall(masm); + __ push(code_); + __ CallRuntime(Runtime::kCharFromCode, 1); + __ Move(result_, v0); + + call_helper.AfterCall(masm); + __ Branch(&exit_); + + __ Abort(kUnexpectedFallthroughFromCharFromCodeSlowCase); +} + + +enum CopyCharactersFlags { + COPY_ASCII = 1, + DEST_ALWAYS_ALIGNED = 2 +}; + + +void StringHelper::GenerateCopyCharacters(MacroAssembler* masm, + Register dest, + Register src, + Register count, + Register scratch, + String::Encoding encoding) { + if (FLAG_debug_code) { + // Check that destination is word aligned. + __ And(scratch, dest, Operand(kPointerAlignmentMask)); + __ Check(eq, + kDestinationOfCopyNotAligned, + scratch, + Operand(zero_reg)); + } + + // Assumes word reads and writes are little endian. + // Nothing to do for zero characters. + Label done; + + if (encoding == String::TWO_BYTE_ENCODING) { + __ Daddu(count, count, count); + } + + Register limit = count; // Read until dest equals this. + __ Daddu(limit, dest, Operand(count)); + + Label loop_entry, loop; + // Copy bytes from src to dest until dest hits limit. + __ Branch(&loop_entry); + __ bind(&loop); + __ lbu(scratch, MemOperand(src)); + __ daddiu(src, src, 1); + __ sb(scratch, MemOperand(dest)); + __ daddiu(dest, dest, 1); + __ bind(&loop_entry); + __ Branch(&loop, lt, dest, Operand(limit)); + + __ bind(&done); +} + + +void StringHelper::GenerateHashInit(MacroAssembler* masm, + Register hash, + Register character) { + // hash = seed + character + ((seed + character) << 10); + __ LoadRoot(hash, Heap::kHashSeedRootIndex); + // Untag smi seed and add the character. 
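+ // (Taken together, GenerateHashInit/AddCharacter/GetHash emit the Jenkins
+ // one-at-a-time hash. A plain-integer sketch, assuming `chars` holds the
+ // untagged character values:
+ //
+ //   uint32_t h = seed;
+ //   for (uint32_t c : chars) { h += c; h += h << 10; h ^= h >> 6; }
+ //   h += h << 3;  h ^= h >> 11;  h += h << 15;
+ //   h &= String::kHashBitMask;
+ //   if (h == 0) h = StringHasher::kZeroHash;
+ //
+ // The init step below folds the first character into the untagged seed.)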
+ __ SmiUntag(hash);
+ __ addu(hash, hash, character);
+ __ sll(at, hash, 10);
+ __ addu(hash, hash, at);
+ // hash ^= hash >> 6;
+ __ srl(at, hash, 6);
+ __ xor_(hash, hash, at);
+}
+
+
+void StringHelper::GenerateHashAddCharacter(MacroAssembler* masm,
+ Register hash,
+ Register character) {
+ // hash += character;
+ __ addu(hash, hash, character);
+ // hash += hash << 10;
+ __ sll(at, hash, 10);
+ __ addu(hash, hash, at);
+ // hash ^= hash >> 6;
+ __ srl(at, hash, 6);
+ __ xor_(hash, hash, at);
+}
+
+
+void StringHelper::GenerateHashGetHash(MacroAssembler* masm,
+ Register hash) {
+ // hash += hash << 3;
+ __ sll(at, hash, 3);
+ __ addu(hash, hash, at);
+ // hash ^= hash >> 11;
+ __ srl(at, hash, 11);
+ __ xor_(hash, hash, at);
+ // hash += hash << 15;
+ __ sll(at, hash, 15);
+ __ addu(hash, hash, at);
+
+ __ li(at, Operand(String::kHashBitMask));
+ __ and_(hash, hash, at);
+
+ // if (hash == 0) hash = 27;
+ __ ori(at, zero_reg, StringHasher::kZeroHash);
+ __ Movz(hash, at, hash);
+}
+
+
+void SubStringStub::Generate(MacroAssembler* masm) {
+ Label runtime;
+ // Stack frame on entry.
+ // ra: return address
+ // sp[0]: to
+ // sp[8]: from
+ // sp[16]: string
+
+ // This stub is called from the native-call %_SubString(...), so
+ // nothing can be assumed about the arguments. It is tested that:
+ // "string" is a sequential string,
+ // both "from" and "to" are smis, and
+ // 0 <= from <= to <= string.length.
+ // If any of these assumptions fail, we call the runtime system.
+
+ const int kToOffset = 0 * kPointerSize;
+ const int kFromOffset = 1 * kPointerSize;
+ const int kStringOffset = 2 * kPointerSize;
+
+ __ ld(a2, MemOperand(sp, kToOffset));
+ __ ld(a3, MemOperand(sp, kFromOffset));
+ // Not needed on 64-bit?
+ // STATIC_ASSERT(kFromOffset == kToOffset + 4);
+ STATIC_ASSERT(kSmiTag == 0);
+ // Not needed on 64-bit?
+ // STATIC_ASSERT(kSmiTagSize + kSmiShiftSize == 1);
+
+ // Utilize delay slots. SmiUntag doesn't emit a jump, everything else is
+ // safe in this case.
+ __ JumpIfNotSmi(a2, &runtime);
+ __ JumpIfNotSmi(a3, &runtime);
+
+ __ SmiUntag(a2, a2);
+ __ SmiUntag(a3, a3);
+ // Both a2 and a3 are untagged integers.
+ __ Branch(&runtime, lt, a3, Operand(zero_reg)); // From < 0.
+
+ __ Branch(&runtime, gt, a3, Operand(a2)); // Fail if from > to.
+ __ Dsubu(a2, a2, a3);
+
+ // Make sure first argument is a string.
+ __ ld(v0, MemOperand(sp, kStringOffset));
+ __ JumpIfSmi(v0, &runtime);
+ __ ld(a1, FieldMemOperand(v0, HeapObject::kMapOffset));
+ __ lbu(a1, FieldMemOperand(a1, Map::kInstanceTypeOffset));
+ __ And(a4, a1, Operand(kIsNotStringMask));
+
+ __ Branch(&runtime, ne, a4, Operand(zero_reg));
+
+ Label single_char;
+ __ Branch(&single_char, eq, a2, Operand(1));
+
+ // Short-cut for the case of trivial substring.
+ Label return_v0;
+ // v0: original string
+ // a2: result string length
+ __ ld(a4, FieldMemOperand(v0, String::kLengthOffset));
+ __ SmiUntag(a4);
+ // Return original string.
+ __ Branch(&return_v0, eq, a2, Operand(a4));
+ // Longer than original string's length or negative: unsafe arguments.
+ __ Branch(&runtime, hi, a2, Operand(a4));
+ // Shorter than original string's length: an actual substring.
+
+ // Deal with different string types: update the index if necessary
+ // and put the underlying string into a5.
+ // v0: original string
+ // a1: instance type
+ // a2: length
+ // a3: from index (untagged)
+ Label underlying_unpacked, sliced_string, seq_or_external_string;
+ // If the string is not indirect, it can only be sequential or external.
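+ // (Indirect means cons or sliced, both of which wrap another string. Cons
+ // strings are only unpacked here when already flat, i.e. their second part
+ // is the empty string; any other cons string goes to the runtime.)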
+ STATIC_ASSERT(kIsIndirectStringMask == (kSlicedStringTag & kConsStringTag));
+ STATIC_ASSERT(kIsIndirectStringMask != 0);
+ __ And(a4, a1, Operand(kIsIndirectStringMask));
+ __ Branch(USE_DELAY_SLOT, &seq_or_external_string, eq, a4, Operand(zero_reg));
+ // a4 is used as a scratch register and can be overwritten in either case.
+ __ And(a4, a1, Operand(kSlicedNotConsMask));
+ __ Branch(&sliced_string, ne, a4, Operand(zero_reg));
+ // Cons string. Check whether it is flat, then fetch first part.
+ __ ld(a5, FieldMemOperand(v0, ConsString::kSecondOffset));
+ __ LoadRoot(a4, Heap::kempty_stringRootIndex);
+ __ Branch(&runtime, ne, a5, Operand(a4));
+ __ ld(a5, FieldMemOperand(v0, ConsString::kFirstOffset));
+ // Update instance type.
+ __ ld(a1, FieldMemOperand(a5, HeapObject::kMapOffset));
+ __ lbu(a1, FieldMemOperand(a1, Map::kInstanceTypeOffset));
+ __ jmp(&underlying_unpacked);
+
+ __ bind(&sliced_string);
+ // Sliced string. Fetch parent and correct start index by offset.
+ __ ld(a5, FieldMemOperand(v0, SlicedString::kParentOffset));
+ __ ld(a4, FieldMemOperand(v0, SlicedString::kOffsetOffset));
+ __ SmiUntag(a4); // Add offset to index.
+ __ Daddu(a3, a3, a4);
+ // Update instance type.
+ __ ld(a1, FieldMemOperand(a5, HeapObject::kMapOffset));
+ __ lbu(a1, FieldMemOperand(a1, Map::kInstanceTypeOffset));
+ __ jmp(&underlying_unpacked);
+
+ __ bind(&seq_or_external_string);
+ // Sequential or external string. Just move string to the expected register.
+ __ mov(a5, v0);
+
+ __ bind(&underlying_unpacked);
+
+ if (FLAG_string_slices) {
+ Label copy_routine;
+ // a5: underlying subject string
+ // a1: instance type of underlying subject string
+ // a2: length
+ // a3: adjusted start index (untagged)
+ // Short slice. Copy instead of slicing.
+ __ Branch(&copy_routine, lt, a2, Operand(SlicedString::kMinLength));
+ // Allocate new sliced string. At this point we do not reload the instance
+ // type including the string encoding because we simply rely on the info
+ // provided by the original string. It does not matter if the original
+ // string's encoding is wrong because we always have to recheck encoding of
+ // the newly created string's parent anyway due to externalized strings.
+ Label two_byte_slice, set_slice_header;
+ STATIC_ASSERT((kStringEncodingMask & kOneByteStringTag) != 0);
+ STATIC_ASSERT((kStringEncodingMask & kTwoByteStringTag) == 0);
+ __ And(a4, a1, Operand(kStringEncodingMask));
+ __ Branch(&two_byte_slice, eq, a4, Operand(zero_reg));
+ __ AllocateAsciiSlicedString(v0, a2, a6, a7, &runtime);
+ __ jmp(&set_slice_header);
+ __ bind(&two_byte_slice);
+ __ AllocateTwoByteSlicedString(v0, a2, a6, a7, &runtime);
+ __ bind(&set_slice_header);
+ __ SmiTag(a3);
+ __ sd(a5, FieldMemOperand(v0, SlicedString::kParentOffset));
+ __ sd(a3, FieldMemOperand(v0, SlicedString::kOffsetOffset));
+ __ jmp(&return_v0);
+
+ __ bind(&copy_routine);
+ }
+
+ // a5: underlying subject string
+ // a1: instance type of underlying subject string
+ // a2: length
+ // a3: adjusted start index (untagged)
+ Label two_byte_sequential, sequential_string, allocate_result;
+ STATIC_ASSERT(kExternalStringTag != 0);
+ STATIC_ASSERT(kSeqStringTag == 0);
+ __ And(a4, a1, Operand(kExternalStringTag));
+ __ Branch(&sequential_string, eq, a4, Operand(zero_reg));
+
+ // Handle external string.
+ // Rule out short external strings.
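+ // (Short external strings do not cache the resource data pointer, so the
+ // kResourceDataOffset load below would not be valid for them; they take
+ // the runtime path instead.)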
+ STATIC_ASSERT(kShortExternalStringTag != 0);
+ __ And(a4, a1, Operand(kShortExternalStringTag));
+ __ Branch(&runtime, ne, a4, Operand(zero_reg));
+ __ ld(a5, FieldMemOperand(a5, ExternalString::kResourceDataOffset));
+ // a5 already points to the first character of underlying string.
+ __ jmp(&allocate_result);
+
+ __ bind(&sequential_string);
+ // Locate first character of underlying subject string.
+ STATIC_ASSERT(SeqTwoByteString::kHeaderSize == SeqOneByteString::kHeaderSize);
+ __ Daddu(a5, a5, Operand(SeqOneByteString::kHeaderSize - kHeapObjectTag));
+
+ __ bind(&allocate_result);
+ // Sequential ASCII string. Allocate the result.
+ STATIC_ASSERT((kOneByteStringTag & kStringEncodingMask) != 0);
+ __ And(a4, a1, Operand(kStringEncodingMask));
+ __ Branch(&two_byte_sequential, eq, a4, Operand(zero_reg));
+
+ // Allocate and copy the resulting ASCII string.
+ __ AllocateAsciiString(v0, a2, a4, a6, a7, &runtime);
+
+ // Locate first character of substring to copy.
+ __ Daddu(a5, a5, a3);
+
+ // Locate first character of result.
+ __ Daddu(a1, v0, Operand(SeqOneByteString::kHeaderSize - kHeapObjectTag));
+
+ // v0: result string
+ // a1: first character of result string
+ // a2: result string length
+ // a5: first character of substring to copy
+ STATIC_ASSERT((SeqOneByteString::kHeaderSize & kObjectAlignmentMask) == 0);
+ StringHelper::GenerateCopyCharacters(
+ masm, a1, a5, a2, a3, String::ONE_BYTE_ENCODING);
+ __ jmp(&return_v0);
+
+ // Allocate and copy the resulting two-byte string.
+ __ bind(&two_byte_sequential);
+ __ AllocateTwoByteString(v0, a2, a4, a6, a7, &runtime);
+
+ // Locate first character of substring to copy.
+ STATIC_ASSERT(kSmiTagSize == 1 && kSmiTag == 0);
+ __ dsll(a4, a3, 1);
+ __ Daddu(a5, a5, a4);
+ // Locate first character of result.
+ __ Daddu(a1, v0, Operand(SeqTwoByteString::kHeaderSize - kHeapObjectTag));
+
+ // v0: result string.
+ // a1: first character of result.
+ // a2: result length.
+ // a5: first character of substring to copy.
+ STATIC_ASSERT((SeqTwoByteString::kHeaderSize & kObjectAlignmentMask) == 0);
+ StringHelper::GenerateCopyCharacters(
+ masm, a1, a5, a2, a3, String::TWO_BYTE_ENCODING);
+
+ __ bind(&return_v0);
+ Counters* counters = isolate()->counters();
+ __ IncrementCounter(counters->sub_string_native(), 1, a3, a4);
+ __ DropAndRet(3);
+
+ // Just jump to runtime to create the sub string.
+ __ bind(&runtime);
+ __ TailCallRuntime(Runtime::kSubString, 3, 1);
+
+ __ bind(&single_char);
+ // v0: original string
+ // a1: instance type
+ // a2: length
+ // a3: from index (untagged)
+ StringCharAtGenerator generator(
+ v0, a3, a2, v0, &runtime, &runtime, &runtime, STRING_INDEX_IS_NUMBER);
+ generator.GenerateFast(masm);
+ __ DropAndRet(3);
+ generator.SkipSlow(masm, &runtime);
+}
+
+
+void StringCompareStub::GenerateFlatAsciiStringEquals(MacroAssembler* masm,
+ Register left,
+ Register right,
+ Register scratch1,
+ Register scratch2,
+ Register scratch3) {
+ Register length = scratch1;
+
+ // Compare lengths.
+ Label strings_not_equal, check_zero_length;
+ __ ld(length, FieldMemOperand(left, String::kLengthOffset));
+ __ ld(scratch2, FieldMemOperand(right, String::kLengthOffset));
+ __ Branch(&check_zero_length, eq, length, Operand(scratch2));
+ __ bind(&strings_not_equal);
+ // Cannot put li in the delay slot; it may expand to multiple instructions.
+ __ li(v0, Operand(Smi::FromInt(NOT_EQUAL)));
+ __ Ret();
+
+ // Check if the length is zero.
+ Label compare_chars;
+ __ bind(&check_zero_length);
+ STATIC_ASSERT(kSmiTag == 0);
+ __ Branch(&compare_chars, ne, length, Operand(zero_reg));
+ DCHECK(is_int16((intptr_t)Smi::FromInt(EQUAL)));
+ __ Ret(USE_DELAY_SLOT);
+ __ li(v0, Operand(Smi::FromInt(EQUAL)));
+
+ // Compare characters.
+ __ bind(&compare_chars);
+
+ GenerateAsciiCharsCompareLoop(masm,
+ left, right, length, scratch2, scratch3, v0,
+ &strings_not_equal);
+
+ // Characters are equal.
+ __ Ret(USE_DELAY_SLOT);
+ __ li(v0, Operand(Smi::FromInt(EQUAL)));
+}
+
+
+void StringCompareStub::GenerateCompareFlatAsciiStrings(MacroAssembler* masm,
+ Register left,
+ Register right,
+ Register scratch1,
+ Register scratch2,
+ Register scratch3,
+ Register scratch4) {
+ Label result_not_equal, compare_lengths;
+ // Find minimum length and length difference.
+ __ ld(scratch1, FieldMemOperand(left, String::kLengthOffset));
+ __ ld(scratch2, FieldMemOperand(right, String::kLengthOffset));
+ __ Dsubu(scratch3, scratch1, Operand(scratch2));
+ Register length_delta = scratch3;
+ __ slt(scratch4, scratch2, scratch1);
+ __ Movn(scratch1, scratch2, scratch4);
+ Register min_length = scratch1;
+ STATIC_ASSERT(kSmiTag == 0);
+ __ Branch(&compare_lengths, eq, min_length, Operand(zero_reg));
+
+ // Compare loop.
+ GenerateAsciiCharsCompareLoop(masm,
+ left, right, min_length, scratch2, scratch4, v0,
+ &result_not_equal);
+
+ // Compare lengths - strings up to min-length are equal.
+ __ bind(&compare_lengths);
+ DCHECK(Smi::FromInt(EQUAL) == static_cast<Smi*>(0));
+ // Use length_delta as result if it's zero.
+ __ mov(scratch2, length_delta);
+ __ mov(scratch4, zero_reg);
+ __ mov(v0, zero_reg);
+
+ __ bind(&result_not_equal);
+ // Conditionally update the result based either on length_delta or
+ // the last comparison performed in the loop above.
+ Label ret;
+ __ Branch(&ret, eq, scratch2, Operand(scratch4));
+ __ li(v0, Operand(Smi::FromInt(GREATER)));
+ __ Branch(&ret, gt, scratch2, Operand(scratch4));
+ __ li(v0, Operand(Smi::FromInt(LESS)));
+ __ bind(&ret);
+ __ Ret();
+}
+
+
+void StringCompareStub::GenerateAsciiCharsCompareLoop(
+ MacroAssembler* masm,
+ Register left,
+ Register right,
+ Register length,
+ Register scratch1,
+ Register scratch2,
+ Register scratch3,
+ Label* chars_not_equal) {
+ // Change index to run from -length to -1 by adding length to string
+ // start. This means that loop ends when index reaches zero, which
+ // doesn't need an additional compare.
+ __ SmiUntag(length);
+ __ Daddu(scratch1, length,
+ Operand(SeqOneByteString::kHeaderSize - kHeapObjectTag));
+ __ Daddu(left, left, Operand(scratch1));
+ __ Daddu(right, right, Operand(scratch1));
+ __ Dsubu(length, zero_reg, length);
+ Register index = length; // index = -length;
+
+ // Compare loop.
+ Label loop;
+ __ bind(&loop);
+ __ Daddu(scratch3, left, index);
+ __ lbu(scratch1, MemOperand(scratch3));
+ __ Daddu(scratch3, right, index);
+ __ lbu(scratch2, MemOperand(scratch3));
+ __ Branch(chars_not_equal, ne, scratch1, Operand(scratch2));
+ __ Daddu(index, index, 1);
+ __ Branch(&loop, ne, index, Operand(zero_reg));
+}
+
+
+void StringCompareStub::Generate(MacroAssembler* masm) {
+ Label runtime;
+
+ Counters* counters = isolate()->counters();
+
+ // Stack frame on entry.
+ // sp[0]: right string
+ // sp[8]: left string
+ __ ld(a1, MemOperand(sp, 1 * kPointerSize)); // Left.
+ __ ld(a0, MemOperand(sp, 0 * kPointerSize)); // Right.
+
+
+ Label not_same;
+ __ Branch(&not_same, ne, a0, Operand(a1));
+ STATIC_ASSERT(EQUAL == 0);
+ STATIC_ASSERT(kSmiTag == 0);
+ __ li(v0, Operand(Smi::FromInt(EQUAL)));
+ __ IncrementCounter(counters->string_compare_native(), 1, a1, a2);
+ __ DropAndRet(2);
+
+ __ bind(&not_same);
+
+ // Check that both objects are sequential ASCII strings.
+ __ JumpIfNotBothSequentialAsciiStrings(a1, a0, a2, a3, &runtime);
+
+ // Compare flat ASCII strings natively. Remove arguments from stack first.
+ __ IncrementCounter(counters->string_compare_native(), 1, a2, a3);
+ __ Daddu(sp, sp, Operand(2 * kPointerSize));
+ GenerateCompareFlatAsciiStrings(masm, a1, a0, a2, a3, a4, a5);
+
+ __ bind(&runtime);
+ __ TailCallRuntime(Runtime::kStringCompare, 2, 1);
+}
+
+
+void BinaryOpICWithAllocationSiteStub::Generate(MacroAssembler* masm) {
+ // ----------- S t a t e -------------
+ // -- a1 : left
+ // -- a0 : right
+ // -- ra : return address
+ // -----------------------------------
+
+ // Load a2 with the allocation site. We stick an undefined dummy value here
+ // and replace it with the real allocation site later when we instantiate this
+ // stub in BinaryOpICWithAllocationSiteStub::GetCodeCopyFromTemplate().
+ __ li(a2, handle(isolate()->heap()->undefined_value()));
+
+ // Make sure that we actually patched the allocation site.
+ if (FLAG_debug_code) {
+ __ And(at, a2, Operand(kSmiTagMask));
+ __ Assert(ne, kExpectedAllocationSite, at, Operand(zero_reg));
+ __ ld(a4, FieldMemOperand(a2, HeapObject::kMapOffset));
+ __ LoadRoot(at, Heap::kAllocationSiteMapRootIndex);
+ __ Assert(eq, kExpectedAllocationSite, a4, Operand(at));
+ }
+
+ // Tail call into the stub that handles binary operations with allocation
+ // sites.
+ BinaryOpWithAllocationSiteStub stub(isolate(), state_);
+ __ TailCallStub(&stub);
+}
+
+
+void ICCompareStub::GenerateSmis(MacroAssembler* masm) {
+ DCHECK(state_ == CompareIC::SMI);
+ Label miss;
+ __ Or(a2, a1, a0);
+ __ JumpIfNotSmi(a2, &miss);
+
+ if (GetCondition() == eq) {
+ // For equality we do not care about the sign of the result.
+ __ Ret(USE_DELAY_SLOT);
+ __ Dsubu(v0, a0, a1);
+ } else {
+ // Untag before subtracting to avoid handling overflow.
+ __ SmiUntag(a1);
+ __ SmiUntag(a0);
+ __ Ret(USE_DELAY_SLOT);
+ __ Dsubu(v0, a1, a0);
+ }
+
+ __ bind(&miss);
+ GenerateMiss(masm);
+}
+
+
+void ICCompareStub::GenerateNumbers(MacroAssembler* masm) {
+ DCHECK(state_ == CompareIC::NUMBER);
+
+ Label generic_stub;
+ Label unordered, maybe_undefined1, maybe_undefined2;
+ Label miss;
+
+ if (left_ == CompareIC::SMI) {
+ __ JumpIfNotSmi(a1, &miss);
+ }
+ if (right_ == CompareIC::SMI) {
+ __ JumpIfNotSmi(a0, &miss);
+ }
+
+ // Inlining the double comparison and falling back to the general compare
+ // stub if NaN is involved.
+ // Load left and right operand.
+ Label done, left, left_smi, right_smi;
+ __ JumpIfSmi(a0, &right_smi);
+ __ CheckMap(a0, a2, Heap::kHeapNumberMapRootIndex, &maybe_undefined1,
+ DONT_DO_SMI_CHECK);
+ __ Dsubu(a2, a0, Operand(kHeapObjectTag));
+ __ ldc1(f2, MemOperand(a2, HeapNumber::kValueOffset));
+ __ Branch(&left);
+ __ bind(&right_smi);
+ __ SmiUntag(a2, a0); // Can't clobber a0 yet.
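+ // (The untagged 32-bit smi payload is moved into an FPU register and
+ // converted to double below, so the smi and heap-number cases meet with
+ // comparable doubles in f0/f2.)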
+ FPURegister single_scratch = f6; + __ mtc1(a2, single_scratch); + __ cvt_d_w(f2, single_scratch); + + __ bind(&left); + __ JumpIfSmi(a1, &left_smi); + __ CheckMap(a1, a2, Heap::kHeapNumberMapRootIndex, &maybe_undefined2, + DONT_DO_SMI_CHECK); + __ Dsubu(a2, a1, Operand(kHeapObjectTag)); + __ ldc1(f0, MemOperand(a2, HeapNumber::kValueOffset)); + __ Branch(&done); + __ bind(&left_smi); + __ SmiUntag(a2, a1); // Can't clobber a1 yet. + single_scratch = f8; + __ mtc1(a2, single_scratch); + __ cvt_d_w(f0, single_scratch); + + __ bind(&done); + + // Return a result of -1, 0, or 1, or use CompareStub for NaNs. + Label fpu_eq, fpu_lt; + // Test if equal, and also handle the unordered/NaN case. + __ BranchF(&fpu_eq, &unordered, eq, f0, f2); + + // Test if less (unordered case is already handled). + __ BranchF(&fpu_lt, NULL, lt, f0, f2); + + // Otherwise it's greater, so just fall thru, and return. + DCHECK(is_int16(GREATER) && is_int16(EQUAL) && is_int16(LESS)); + __ Ret(USE_DELAY_SLOT); + __ li(v0, Operand(GREATER)); + + __ bind(&fpu_eq); + __ Ret(USE_DELAY_SLOT); + __ li(v0, Operand(EQUAL)); + + __ bind(&fpu_lt); + __ Ret(USE_DELAY_SLOT); + __ li(v0, Operand(LESS)); + + __ bind(&unordered); + __ bind(&generic_stub); + ICCompareStub stub(isolate(), op_, CompareIC::GENERIC, CompareIC::GENERIC, + CompareIC::GENERIC); + __ Jump(stub.GetCode(), RelocInfo::CODE_TARGET); + + __ bind(&maybe_undefined1); + if (Token::IsOrderedRelationalCompareOp(op_)) { + __ LoadRoot(at, Heap::kUndefinedValueRootIndex); + __ Branch(&miss, ne, a0, Operand(at)); + __ JumpIfSmi(a1, &unordered); + __ GetObjectType(a1, a2, a2); + __ Branch(&maybe_undefined2, ne, a2, Operand(HEAP_NUMBER_TYPE)); + __ jmp(&unordered); + } + + __ bind(&maybe_undefined2); + if (Token::IsOrderedRelationalCompareOp(op_)) { + __ LoadRoot(at, Heap::kUndefinedValueRootIndex); + __ Branch(&unordered, eq, a1, Operand(at)); + } + + __ bind(&miss); + GenerateMiss(masm); +} + + +void ICCompareStub::GenerateInternalizedStrings(MacroAssembler* masm) { + DCHECK(state_ == CompareIC::INTERNALIZED_STRING); + Label miss; + + // Registers containing left and right operands respectively. + Register left = a1; + Register right = a0; + Register tmp1 = a2; + Register tmp2 = a3; + + // Check that both operands are heap objects. + __ JumpIfEitherSmi(left, right, &miss); + + // Check that both operands are internalized strings. + __ ld(tmp1, FieldMemOperand(left, HeapObject::kMapOffset)); + __ ld(tmp2, FieldMemOperand(right, HeapObject::kMapOffset)); + __ lbu(tmp1, FieldMemOperand(tmp1, Map::kInstanceTypeOffset)); + __ lbu(tmp2, FieldMemOperand(tmp2, Map::kInstanceTypeOffset)); + STATIC_ASSERT(kInternalizedTag == 0 && kStringTag == 0); + __ Or(tmp1, tmp1, Operand(tmp2)); + __ And(at, tmp1, Operand(kIsNotStringMask | kIsNotInternalizedMask)); + __ Branch(&miss, ne, at, Operand(zero_reg)); + + // Make sure a0 is non-zero. At this point input operands are + // guaranteed to be non-zero. + DCHECK(right.is(a0)); + STATIC_ASSERT(EQUAL == 0); + STATIC_ASSERT(kSmiTag == 0); + __ mov(v0, right); + // Internalized strings are compared by identity. + __ Ret(ne, left, Operand(right)); + DCHECK(is_int16(EQUAL)); + __ Ret(USE_DELAY_SLOT); + __ li(v0, Operand(Smi::FromInt(EQUAL))); + + __ bind(&miss); + GenerateMiss(masm); +} + + +void ICCompareStub::GenerateUniqueNames(MacroAssembler* masm) { + DCHECK(state_ == CompareIC::UNIQUE_NAME); + DCHECK(GetCondition() == eq); + Label miss; + + // Registers containing left and right operands respectively. 
+ Register left = a1; + Register right = a0; + Register tmp1 = a2; + Register tmp2 = a3; + + // Check that both operands are heap objects. + __ JumpIfEitherSmi(left, right, &miss); + + // Check that both operands are unique names. This leaves the instance + // types loaded in tmp1 and tmp2. + __ ld(tmp1, FieldMemOperand(left, HeapObject::kMapOffset)); + __ ld(tmp2, FieldMemOperand(right, HeapObject::kMapOffset)); + __ lbu(tmp1, FieldMemOperand(tmp1, Map::kInstanceTypeOffset)); + __ lbu(tmp2, FieldMemOperand(tmp2, Map::kInstanceTypeOffset)); + + __ JumpIfNotUniqueName(tmp1, &miss); + __ JumpIfNotUniqueName(tmp2, &miss); + + // Use a0 as result + __ mov(v0, a0); + + // Unique names are compared by identity. + Label done; + __ Branch(&done, ne, left, Operand(right)); + // Make sure a0 is non-zero. At this point input operands are + // guaranteed to be non-zero. + DCHECK(right.is(a0)); + STATIC_ASSERT(EQUAL == 0); + STATIC_ASSERT(kSmiTag == 0); + __ li(v0, Operand(Smi::FromInt(EQUAL))); + __ bind(&done); + __ Ret(); + + __ bind(&miss); + GenerateMiss(masm); +} + + +void ICCompareStub::GenerateStrings(MacroAssembler* masm) { + DCHECK(state_ == CompareIC::STRING); + Label miss; + + bool equality = Token::IsEqualityOp(op_); + + // Registers containing left and right operands respectively. + Register left = a1; + Register right = a0; + Register tmp1 = a2; + Register tmp2 = a3; + Register tmp3 = a4; + Register tmp4 = a5; + Register tmp5 = a6; + + // Check that both operands are heap objects. + __ JumpIfEitherSmi(left, right, &miss); + + // Check that both operands are strings. This leaves the instance + // types loaded in tmp1 and tmp2. + __ ld(tmp1, FieldMemOperand(left, HeapObject::kMapOffset)); + __ ld(tmp2, FieldMemOperand(right, HeapObject::kMapOffset)); + __ lbu(tmp1, FieldMemOperand(tmp1, Map::kInstanceTypeOffset)); + __ lbu(tmp2, FieldMemOperand(tmp2, Map::kInstanceTypeOffset)); + STATIC_ASSERT(kNotStringTag != 0); + __ Or(tmp3, tmp1, tmp2); + __ And(tmp5, tmp3, Operand(kIsNotStringMask)); + __ Branch(&miss, ne, tmp5, Operand(zero_reg)); + + // Fast check for identical strings. + Label left_ne_right; + STATIC_ASSERT(EQUAL == 0); + STATIC_ASSERT(kSmiTag == 0); + __ Branch(&left_ne_right, ne, left, Operand(right)); + __ Ret(USE_DELAY_SLOT); + __ mov(v0, zero_reg); // In the delay slot. + __ bind(&left_ne_right); + + // Handle not identical strings. + + // Check that both strings are internalized strings. If they are, we're done + // because we already know they are not identical. We know they are both + // strings. + if (equality) { + DCHECK(GetCondition() == eq); + STATIC_ASSERT(kInternalizedTag == 0); + __ Or(tmp3, tmp1, Operand(tmp2)); + __ And(tmp5, tmp3, Operand(kIsNotInternalizedMask)); + Label is_symbol; + __ Branch(&is_symbol, ne, tmp5, Operand(zero_reg)); + // Make sure a0 is non-zero. At this point input operands are + // guaranteed to be non-zero. + DCHECK(right.is(a0)); + __ Ret(USE_DELAY_SLOT); + __ mov(v0, a0); // In the delay slot. + __ bind(&is_symbol); + } + + // Check that both strings are sequential ASCII. + Label runtime; + __ JumpIfBothInstanceTypesAreNotSequentialAscii( + tmp1, tmp2, tmp3, tmp4, &runtime); + + // Compare flat ASCII strings. Returns when done. + if (equality) { + StringCompareStub::GenerateFlatAsciiStringEquals( + masm, left, right, tmp1, tmp2, tmp3); + } else { + StringCompareStub::GenerateCompareFlatAsciiStrings( + masm, left, right, tmp1, tmp2, tmp3, tmp4); + } + + // Handle more complex cases in runtime. 
+ __ bind(&runtime); + __ Push(left, right); + if (equality) { + __ TailCallRuntime(Runtime::kStringEquals, 2, 1); + } else { + __ TailCallRuntime(Runtime::kStringCompare, 2, 1); + } + + __ bind(&miss); + GenerateMiss(masm); +} + + +void ICCompareStub::GenerateObjects(MacroAssembler* masm) { + DCHECK(state_ == CompareIC::OBJECT); + Label miss; + __ And(a2, a1, Operand(a0)); + __ JumpIfSmi(a2, &miss); + + __ GetObjectType(a0, a2, a2); + __ Branch(&miss, ne, a2, Operand(JS_OBJECT_TYPE)); + __ GetObjectType(a1, a2, a2); + __ Branch(&miss, ne, a2, Operand(JS_OBJECT_TYPE)); + + DCHECK(GetCondition() == eq); + __ Ret(USE_DELAY_SLOT); + __ dsubu(v0, a0, a1); + + __ bind(&miss); + GenerateMiss(masm); +} + + +void ICCompareStub::GenerateKnownObjects(MacroAssembler* masm) { + Label miss; + __ And(a2, a1, a0); + __ JumpIfSmi(a2, &miss); + __ ld(a2, FieldMemOperand(a0, HeapObject::kMapOffset)); + __ ld(a3, FieldMemOperand(a1, HeapObject::kMapOffset)); + __ Branch(&miss, ne, a2, Operand(known_map_)); + __ Branch(&miss, ne, a3, Operand(known_map_)); + + __ Ret(USE_DELAY_SLOT); + __ dsubu(v0, a0, a1); + + __ bind(&miss); + GenerateMiss(masm); +} + + +void ICCompareStub::GenerateMiss(MacroAssembler* masm) { + { + // Call the runtime system in a fresh internal frame. + ExternalReference miss = + ExternalReference(IC_Utility(IC::kCompareIC_Miss), isolate()); + FrameScope scope(masm, StackFrame::INTERNAL); + __ Push(a1, a0); + __ Push(ra, a1, a0); + __ li(a4, Operand(Smi::FromInt(op_))); + __ daddiu(sp, sp, -kPointerSize); + __ CallExternalReference(miss, 3, USE_DELAY_SLOT); + __ sd(a4, MemOperand(sp)); // In the delay slot. + // Compute the entry point of the rewritten stub. + __ Daddu(a2, v0, Operand(Code::kHeaderSize - kHeapObjectTag)); + // Restore registers. + __ Pop(a1, a0, ra); + } + __ Jump(a2); +} + + +void DirectCEntryStub::Generate(MacroAssembler* masm) { + // Make place for arguments to fit C calling convention. Most of the callers + // of DirectCEntryStub::GenerateCall are using EnterExitFrame/LeaveExitFrame + // so they handle stack restoring and we don't have to do that here. + // Any caller of DirectCEntryStub::GenerateCall must take care of dropping + // kCArgsSlotsSize stack space after the call. + __ daddiu(sp, sp, -kCArgsSlotsSize); + // Place the return address on the stack, making the call + // GC safe. The RegExp backend also relies on this. + __ sd(ra, MemOperand(sp, kCArgsSlotsSize)); + __ Call(t9); // Call the C++ function. + __ ld(t9, MemOperand(sp, kCArgsSlotsSize)); + + if (FLAG_debug_code && FLAG_enable_slow_asserts) { + // In case of an error the return address may point to a memory area + // filled with kZapValue by the GC. + // Dereference the address and check for this. 
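+ // (kZapValue is the distinctive fill pattern the GC uses to overwrite
+ // collected memory in debug builds; reading it back through the saved
+ // return address flags a clobbered stack slot.)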
+ __ Uld(a4, MemOperand(t9)); + __ Assert(ne, kReceivedInvalidReturnAddress, a4, + Operand(reinterpret_cast<uint64_t>(kZapValue))); + } + __ Jump(t9); +} + + +void DirectCEntryStub::GenerateCall(MacroAssembler* masm, + Register target) { + intptr_t loc = + reinterpret_cast<intptr_t>(GetCode().location()); + __ Move(t9, target); + __ li(ra, Operand(loc, RelocInfo::CODE_TARGET), CONSTANT_SIZE); + __ Call(ra); +} + + +void NameDictionaryLookupStub::GenerateNegativeLookup(MacroAssembler* masm, + Label* miss, + Label* done, + Register receiver, + Register properties, + Handle<Name> name, + Register scratch0) { + DCHECK(name->IsUniqueName()); + // If names of slots in range from 1 to kProbes - 1 for the hash value are + // not equal to the name and kProbes-th slot is not used (its name is the + // undefined value), it guarantees the hash table doesn't contain the + // property. It's true even if some slots represent deleted properties + // (their names are the hole value). + for (int i = 0; i < kInlinedProbes; i++) { + // scratch0 points to properties hash. + // Compute the masked index: (hash + i + i * i) & mask. + Register index = scratch0; + // Capacity is smi 2^n. + __ SmiLoadUntag(index, FieldMemOperand(properties, kCapacityOffset)); + __ Dsubu(index, index, Operand(1)); + __ And(index, index, + Operand(name->Hash() + NameDictionary::GetProbeOffset(i))); + + // Scale the index by multiplying by the entry size. + DCHECK(NameDictionary::kEntrySize == 3); + __ dsll(at, index, 1); + __ Daddu(index, index, at); // index *= 3. + + Register entity_name = scratch0; + // Having undefined at this place means the name is not contained. + DCHECK_EQ(kSmiTagSize, 1); + Register tmp = properties; + + __ dsll(scratch0, index, kPointerSizeLog2); + __ Daddu(tmp, properties, scratch0); + __ ld(entity_name, FieldMemOperand(tmp, kElementsStartOffset)); + + DCHECK(!tmp.is(entity_name)); + __ LoadRoot(tmp, Heap::kUndefinedValueRootIndex); + __ Branch(done, eq, entity_name, Operand(tmp)); + + // Load the hole ready for use below: + __ LoadRoot(tmp, Heap::kTheHoleValueRootIndex); + + // Stop if found the property. + __ Branch(miss, eq, entity_name, Operand(Handle<Name>(name))); + + Label good; + __ Branch(&good, eq, entity_name, Operand(tmp)); + + // Check if the entry name is not a unique name. + __ ld(entity_name, FieldMemOperand(entity_name, HeapObject::kMapOffset)); + __ lbu(entity_name, + FieldMemOperand(entity_name, Map::kInstanceTypeOffset)); + __ JumpIfNotUniqueName(entity_name, miss); + __ bind(&good); + + // Restore the properties. + __ ld(properties, + FieldMemOperand(receiver, JSObject::kPropertiesOffset)); + } + + const int spill_mask = + (ra.bit() | a6.bit() | a5.bit() | a4.bit() | a3.bit() | + a2.bit() | a1.bit() | a0.bit() | v0.bit()); + + __ MultiPush(spill_mask); + __ ld(a0, FieldMemOperand(receiver, JSObject::kPropertiesOffset)); + __ li(a1, Operand(Handle<Name>(name))); + NameDictionaryLookupStub stub(masm->isolate(), NEGATIVE_LOOKUP); + __ CallStub(&stub); + __ mov(at, v0); + __ MultiPop(spill_mask); + + __ Branch(done, eq, at, Operand(zero_reg)); + __ Branch(miss, ne, at, Operand(zero_reg)); +} + + +// Probe the name dictionary in the |elements| register. Jump to the +// |done| label if a property with the given name is found. Jump to +// the |miss| label otherwise. +// If lookup was successful |scratch2| will be equal to elements + 4 * index. 
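+//
+// A rough plain-C++ model of the probe sequence both lookup generators emit
+// (hypothetical helper, not V8 API; assumes GetProbeOffset(i) is the
+// triangular number i * (i + 1) / 2 and that the capacity is a power of two,
+// so masking wraps the index):
+//
+//   uint32_t NextProbe(uint32_t hash, uint32_t i, uint32_t mask) {
+//     return (hash + i * (i + 1) / 2) & mask;
+//   }
+//
+// Each entry then occupies NameDictionary::kEntrySize (== 3) pointers, which
+// is why the masked index is scaled by three before the element load.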
+void NameDictionaryLookupStub::GeneratePositiveLookup(MacroAssembler* masm, + Label* miss, + Label* done, + Register elements, + Register name, + Register scratch1, + Register scratch2) { + DCHECK(!elements.is(scratch1)); + DCHECK(!elements.is(scratch2)); + DCHECK(!name.is(scratch1)); + DCHECK(!name.is(scratch2)); + + __ AssertName(name); + + // Compute the capacity mask. + __ ld(scratch1, FieldMemOperand(elements, kCapacityOffset)); + __ SmiUntag(scratch1); + __ Dsubu(scratch1, scratch1, Operand(1)); + + // Generate an unrolled loop that performs a few probes before + // giving up. Measurements done on Gmail indicate that 2 probes + // cover ~93% of loads from dictionaries. + for (int i = 0; i < kInlinedProbes; i++) { + // Compute the masked index: (hash + i + i * i) & mask. + __ lwu(scratch2, FieldMemOperand(name, Name::kHashFieldOffset)); + if (i > 0) { + // Add the probe offset (i + i * i) left shifted to avoid right shifting + // the hash in a separate instruction. The value hash + i + i * i is right + // shifted in the following and instruction. + DCHECK(NameDictionary::GetProbeOffset(i) < + 1 << (32 - Name::kHashFieldOffset)); + __ Daddu(scratch2, scratch2, Operand( + NameDictionary::GetProbeOffset(i) << Name::kHashShift)); + } + __ dsrl(scratch2, scratch2, Name::kHashShift); + __ And(scratch2, scratch1, scratch2); + + // Scale the index by multiplying by the element size. + DCHECK(NameDictionary::kEntrySize == 3); + // scratch2 = scratch2 * 3. + + __ dsll(at, scratch2, 1); + __ Daddu(scratch2, scratch2, at); + + // Check if the key is identical to the name. + __ dsll(at, scratch2, kPointerSizeLog2); + __ Daddu(scratch2, elements, at); + __ ld(at, FieldMemOperand(scratch2, kElementsStartOffset)); + __ Branch(done, eq, name, Operand(at)); + } + + const int spill_mask = + (ra.bit() | a6.bit() | a5.bit() | a4.bit() | + a3.bit() | a2.bit() | a1.bit() | a0.bit() | v0.bit()) & + ~(scratch1.bit() | scratch2.bit()); + + __ MultiPush(spill_mask); + if (name.is(a0)) { + DCHECK(!elements.is(a1)); + __ Move(a1, name); + __ Move(a0, elements); + } else { + __ Move(a0, elements); + __ Move(a1, name); + } + NameDictionaryLookupStub stub(masm->isolate(), POSITIVE_LOOKUP); + __ CallStub(&stub); + __ mov(scratch2, a2); + __ mov(at, v0); + __ MultiPop(spill_mask); + + __ Branch(done, ne, at, Operand(zero_reg)); + __ Branch(miss, eq, at, Operand(zero_reg)); +} + + +void NameDictionaryLookupStub::Generate(MacroAssembler* masm) { + // This stub overrides SometimesSetsUpAFrame() to return false. That means + // we cannot call anything that could cause a GC from this stub. + // Registers: + // result: NameDictionary to probe + // a1: key + // dictionary: NameDictionary to probe. + // index: will hold an index of entry if lookup is successful. + // might alias with result_. + // Returns: + // result_ is zero if lookup failed, non zero otherwise. + + Register result = v0; + Register dictionary = a0; + Register key = a1; + Register index = a2; + Register mask = a3; + Register hash = a4; + Register undefined = a5; + Register entry_key = a6; + + Label in_dictionary, maybe_in_dictionary, not_in_dictionary; + + __ ld(mask, FieldMemOperand(dictionary, kCapacityOffset)); + __ SmiUntag(mask); + __ Dsubu(mask, mask, Operand(1)); + + __ lwu(hash, FieldMemOperand(key, Name::kHashFieldOffset)); + + __ LoadRoot(undefined, Heap::kUndefinedValueRootIndex); + + for (int i = kInlinedProbes; i < kTotalProbes; i++) { + // Compute the masked index: (hash + i + i * i) & mask. + // Capacity is smi 2^n. 
+ if (i > 0) {
+ // Add the probe offset (i + i * i) left shifted to avoid right shifting
+ // the hash in a separate instruction. The value hash + i + i * i is right
+ // shifted in the following and instruction.
+ DCHECK(NameDictionary::GetProbeOffset(i) <
+ 1 << (32 - Name::kHashFieldOffset));
+ __ Daddu(index, hash, Operand(
+ NameDictionary::GetProbeOffset(i) << Name::kHashShift));
+ } else {
+ __ mov(index, hash);
+ }
+ __ dsrl(index, index, Name::kHashShift);
+ __ And(index, mask, index);
+
+ // Scale the index by multiplying by the entry size.
+ DCHECK(NameDictionary::kEntrySize == 3);
+ // index *= 3.
+ __ mov(at, index);
+ __ dsll(index, index, 1);
+ __ Daddu(index, index, at);
+
+ DCHECK_EQ(kSmiTagSize, 1);
+ __ dsll(index, index, kPointerSizeLog2);
+ __ Daddu(index, index, dictionary);
+ __ ld(entry_key, FieldMemOperand(index, kElementsStartOffset));
+
+ // Having undefined at this place means the name is not contained.
+ __ Branch(&not_in_dictionary, eq, entry_key, Operand(undefined));
+
+ // Stop if found the property.
+ __ Branch(&in_dictionary, eq, entry_key, Operand(key));
+
+ if (i != kTotalProbes - 1 && mode_ == NEGATIVE_LOOKUP) {
+ // Check if the entry name is not a unique name.
+ __ ld(entry_key, FieldMemOperand(entry_key, HeapObject::kMapOffset));
+ __ lbu(entry_key,
+ FieldMemOperand(entry_key, Map::kInstanceTypeOffset));
+ __ JumpIfNotUniqueName(entry_key, &maybe_in_dictionary);
+ }
+ }
+
+ __ bind(&maybe_in_dictionary);
+ // If we are doing negative lookup then probing failure should be
+ // treated as a lookup success. For positive lookup probing failure
+ // should be treated as lookup failure.
+ if (mode_ == POSITIVE_LOOKUP) {
+ __ Ret(USE_DELAY_SLOT);
+ __ mov(result, zero_reg);
+ }
+
+ __ bind(&in_dictionary);
+ __ Ret(USE_DELAY_SLOT);
+ __ li(result, 1);
+
+ __ bind(&not_in_dictionary);
+ __ Ret(USE_DELAY_SLOT);
+ __ mov(result, zero_reg);
+}
+
+
+void StoreBufferOverflowStub::GenerateFixedRegStubsAheadOfTime(
+ Isolate* isolate) {
+ StoreBufferOverflowStub stub1(isolate, kDontSaveFPRegs);
+ stub1.GetCode();
+ // Hydrogen code stubs need stub2 at snapshot time.
+ StoreBufferOverflowStub stub2(isolate, kSaveFPRegs);
+ stub2.GetCode();
+}
+
+
+// Takes the input in 3 registers: address_ value_ and object_. A pointer to
+// the value has just been written into the object, now this stub makes sure
+// we keep the GC informed. The word in the object where the value has been
+// written is in the address register.
+void RecordWriteStub::Generate(MacroAssembler* masm) {
+ Label skip_to_incremental_noncompacting;
+ Label skip_to_incremental_compacting;
+
+ // The first two branch+nop instructions are generated with labels so as to
+ // get the offset fixed up correctly by the bind(Label*) call. We patch it
+ // back and forth between a "bne zero_reg, zero_reg, ..." (a nop in this
+ // position) and the "beq zero_reg, zero_reg, ..." when we start and stop
+ // incremental heap marking.
+ // See RecordWriteStub::Patch for details.
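+ // (They are emitted below as always-taken "beq zero_reg, zero_reg" so that
+ // binding the labels fixes up the branch offsets; PatchBranchIntoNop at
+ // the end of this function then rewrites them into the never-taken "bne"
+ // form for the initial STORE_BUFFER_ONLY mode.)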
+ __ beq(zero_reg, zero_reg, &skip_to_incremental_noncompacting); + __ nop(); + __ beq(zero_reg, zero_reg, &skip_to_incremental_compacting); + __ nop(); + + if (remembered_set_action_ == EMIT_REMEMBERED_SET) { + __ RememberedSetHelper(object_, + address_, + value_, + save_fp_regs_mode_, + MacroAssembler::kReturnAtEnd); + } + __ Ret(); + + __ bind(&skip_to_incremental_noncompacting); + GenerateIncremental(masm, INCREMENTAL); + + __ bind(&skip_to_incremental_compacting); + GenerateIncremental(masm, INCREMENTAL_COMPACTION); + + // Initial mode of the stub is expected to be STORE_BUFFER_ONLY. + // Will be checked in IncrementalMarking::ActivateGeneratedStub. + + PatchBranchIntoNop(masm, 0); + PatchBranchIntoNop(masm, 2 * Assembler::kInstrSize); +} + + +void RecordWriteStub::GenerateIncremental(MacroAssembler* masm, Mode mode) { + regs_.Save(masm); + + if (remembered_set_action_ == EMIT_REMEMBERED_SET) { + Label dont_need_remembered_set; + + __ ld(regs_.scratch0(), MemOperand(regs_.address(), 0)); + __ JumpIfNotInNewSpace(regs_.scratch0(), // Value. + regs_.scratch0(), + &dont_need_remembered_set); + + __ CheckPageFlag(regs_.object(), + regs_.scratch0(), + 1 << MemoryChunk::SCAN_ON_SCAVENGE, + ne, + &dont_need_remembered_set); + + // First notify the incremental marker if necessary, then update the + // remembered set. + CheckNeedsToInformIncrementalMarker( + masm, kUpdateRememberedSetOnNoNeedToInformIncrementalMarker, mode); + InformIncrementalMarker(masm); + regs_.Restore(masm); + __ RememberedSetHelper(object_, + address_, + value_, + save_fp_regs_mode_, + MacroAssembler::kReturnAtEnd); + + __ bind(&dont_need_remembered_set); + } + + CheckNeedsToInformIncrementalMarker( + masm, kReturnOnNoNeedToInformIncrementalMarker, mode); + InformIncrementalMarker(masm); + regs_.Restore(masm); + __ Ret(); +} + + +void RecordWriteStub::InformIncrementalMarker(MacroAssembler* masm) { + regs_.SaveCallerSaveRegisters(masm, save_fp_regs_mode_); + int argument_count = 3; + __ PrepareCallCFunction(argument_count, regs_.scratch0()); + Register address = + a0.is(regs_.address()) ? regs_.scratch0() : regs_.address(); + DCHECK(!address.is(regs_.object())); + DCHECK(!address.is(a0)); + __ Move(address, regs_.address()); + __ Move(a0, regs_.object()); + __ Move(a1, address); + __ li(a2, Operand(ExternalReference::isolate_address(isolate()))); + + AllowExternalCallThatCantCauseGC scope(masm); + __ CallCFunction( + ExternalReference::incremental_marking_record_write_function(isolate()), + argument_count); + regs_.RestoreCallerSaveRegisters(masm, save_fp_regs_mode_); +} + + +void RecordWriteStub::CheckNeedsToInformIncrementalMarker( + MacroAssembler* masm, + OnNoNeedToInformIncrementalMarker on_no_need, + Mode mode) { + Label on_black; + Label need_incremental; + Label need_incremental_pop_scratch; + + __ And(regs_.scratch0(), regs_.object(), Operand(~Page::kPageAlignmentMask)); + __ ld(regs_.scratch1(), + MemOperand(regs_.scratch0(), + MemoryChunk::kWriteBarrierCounterOffset)); + __ Dsubu(regs_.scratch1(), regs_.scratch1(), Operand(1)); + __ sd(regs_.scratch1(), + MemOperand(regs_.scratch0(), + MemoryChunk::kWriteBarrierCounterOffset)); + __ Branch(&need_incremental, lt, regs_.scratch1(), Operand(zero_reg)); + + // Let's look at the color of the object: If it is not black we don't have + // to inform the incremental marker. 
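+ // (Marking is tri-color: white objects are unvisited, grey are queued for
+ // scanning, black are fully scanned. Only a black object acquiring a
+ // pointer to a white value can break the invariant, so non-black objects
+ // are safe to skip here.)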
+  __ JumpIfBlack(regs_.object(), regs_.scratch0(), regs_.scratch1(), &on_black);
+
+  regs_.Restore(masm);
+  if (on_no_need == kUpdateRememberedSetOnNoNeedToInformIncrementalMarker) {
+    __ RememberedSetHelper(object_,
+                           address_,
+                           value_,
+                           save_fp_regs_mode_,
+                           MacroAssembler::kReturnAtEnd);
+  } else {
+    __ Ret();
+  }
+
+  __ bind(&on_black);
+
+  // Get the value from the slot.
+  __ ld(regs_.scratch0(), MemOperand(regs_.address(), 0));
+
+  if (mode == INCREMENTAL_COMPACTION) {
+    Label ensure_not_white;
+
+    __ CheckPageFlag(regs_.scratch0(),  // Contains value.
+                     regs_.scratch1(),  // Scratch.
+                     MemoryChunk::kEvacuationCandidateMask,
+                     eq,
+                     &ensure_not_white);
+
+    __ CheckPageFlag(regs_.object(),
+                     regs_.scratch1(),  // Scratch.
+                     MemoryChunk::kSkipEvacuationSlotsRecordingMask,
+                     eq,
+                     &need_incremental);
+
+    __ bind(&ensure_not_white);
+  }
+
+  // We need extra registers for this, so we push the object and the address
+  // register temporarily.
+  __ Push(regs_.object(), regs_.address());
+  __ EnsureNotWhite(regs_.scratch0(),  // The value.
+                    regs_.scratch1(),  // Scratch.
+                    regs_.object(),    // Scratch.
+                    regs_.address(),   // Scratch.
+                    &need_incremental_pop_scratch);
+  __ Pop(regs_.object(), regs_.address());
+
+  regs_.Restore(masm);
+  if (on_no_need == kUpdateRememberedSetOnNoNeedToInformIncrementalMarker) {
+    __ RememberedSetHelper(object_,
+                           address_,
+                           value_,
+                           save_fp_regs_mode_,
+                           MacroAssembler::kReturnAtEnd);
+  } else {
+    __ Ret();
+  }
+
+  __ bind(&need_incremental_pop_scratch);
+  __ Pop(regs_.object(), regs_.address());
+
+  __ bind(&need_incremental);
+
+  // Fall through when we need to inform the incremental marker.
+}
+
+
+void StoreArrayLiteralElementStub::Generate(MacroAssembler* masm) {
+  // ----------- S t a t e -------------
+  //  -- a0    : element value to store
+  //  -- a3    : element index as smi
+  //  -- sp[0] : array literal index in function as smi
+  //  -- sp[4] : array literal
+  // clobbers a1, a2, a4
+  // -----------------------------------
+
+  Label element_done;
+  Label double_elements;
+  Label smi_element;
+  Label slow_elements;
+  Label fast_elements;
+
+  // Get array literal index, array literal and its map.
+  __ ld(a4, MemOperand(sp, 0 * kPointerSize));
+  __ ld(a1, MemOperand(sp, 1 * kPointerSize));
+  __ ld(a2, FieldMemOperand(a1, JSObject::kMapOffset));
+
+  __ CheckFastElements(a2, a5, &double_elements);
+  // Check for FAST_*_SMI_ELEMENTS or FAST_*_ELEMENTS elements.
+  __ JumpIfSmi(a0, &smi_element);
+  __ CheckFastSmiElements(a2, a5, &fast_elements);
+
+  // Storing into the array literal requires an elements kind transition.
+  // Call into the runtime.
+  __ bind(&slow_elements);
+  __ Push(a1, a3, a0);
+  __ ld(a5, MemOperand(fp, JavaScriptFrameConstants::kFunctionOffset));
+  __ ld(a5, FieldMemOperand(a5, JSFunction::kLiteralsOffset));
+  __ Push(a5, a4);
+  __ TailCallRuntime(Runtime::kStoreArrayLiteralElement, 5, 1);
+
+  // Array literal has ElementsKind of FAST_*_ELEMENTS and value is an object.
+  __ bind(&fast_elements);
+  __ ld(a5, FieldMemOperand(a1, JSObject::kElementsOffset));
+  __ SmiScale(a6, a3, kPointerSizeLog2);
+  __ Daddu(a6, a5, a6);
+  __ Daddu(a6, a6, Operand(FixedArray::kHeaderSize - kHeapObjectTag));
+  __ sd(a0, MemOperand(a6, 0));
+  // Update the write barrier for the array store.
+  __ RecordWrite(a5, a6, a0, kRAHasNotBeenSaved, kDontSaveFPRegs,
+                 EMIT_REMEMBERED_SET, OMIT_SMI_CHECK);
+  __ Ret(USE_DELAY_SLOT);
+  __ mov(v0, a0);
+
+  // Array literal has ElementsKind of FAST_*_SMI_ELEMENTS or FAST_*_ELEMENTS,
+  // and value is Smi.
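+  // Editor's note: a Smi is encoded in the tagged word itself rather than
+  // being a pointer into the heap, so unlike the fast_elements path above
+  // this store needs no RecordWrite call.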
+  __ bind(&smi_element);
+  __ ld(a5, FieldMemOperand(a1, JSObject::kElementsOffset));
+  __ SmiScale(a6, a3, kPointerSizeLog2);
+  __ Daddu(a6, a5, a6);
+  __ sd(a0, FieldMemOperand(a6, FixedArray::kHeaderSize));
+  __ Ret(USE_DELAY_SLOT);
+  __ mov(v0, a0);
+
+  // Array literal has ElementsKind of FAST_*_DOUBLE_ELEMENTS.
+  __ bind(&double_elements);
+  __ ld(a5, FieldMemOperand(a1, JSObject::kElementsOffset));
+  __ StoreNumberToDoubleElements(a0, a3, a5, a7, t1, a2, &slow_elements);
+  __ Ret(USE_DELAY_SLOT);
+  __ mov(v0, a0);
+}
+
+
+void StubFailureTrampolineStub::Generate(MacroAssembler* masm) {
+  CEntryStub ces(isolate(), 1, kSaveFPRegs);
+  __ Call(ces.GetCode(), RelocInfo::CODE_TARGET);
+  int parameter_count_offset =
+      StubFailureTrampolineFrame::kCallerStackParameterCountFrameOffset;
+  __ ld(a1, MemOperand(fp, parameter_count_offset));
+  if (function_mode_ == JS_FUNCTION_STUB_MODE) {
+    __ Daddu(a1, a1, Operand(1));
+  }
+  masm->LeaveFrame(StackFrame::STUB_FAILURE_TRAMPOLINE);
+  __ dsll(a1, a1, kPointerSizeLog2);
+  __ Ret(USE_DELAY_SLOT);
+  __ Daddu(sp, sp, a1);
+}
+
+
+void ProfileEntryHookStub::MaybeCallEntryHook(MacroAssembler* masm) {
+  if (masm->isolate()->function_entry_hook() != NULL) {
+    ProfileEntryHookStub stub(masm->isolate());
+    __ push(ra);
+    __ CallStub(&stub);
+    __ pop(ra);
+  }
+}
+
+
+void ProfileEntryHookStub::Generate(MacroAssembler* masm) {
+  // The entry hook is a "push ra" instruction, followed by a call.
+  // Note: on MIPS "push" is two instructions.
+  const int32_t kReturnAddressDistanceFromFunctionStart =
+      Assembler::kCallTargetAddressOffset + (2 * Assembler::kInstrSize);
+
+  // This should contain all kJSCallerSaved registers.
+  const RegList kSavedRegs =
+      kJSCallerSaved |  // Caller saved registers.
+      s5.bit();         // Saved stack pointer.
+
+  // We also save ra, so the count here is one higher than the mask indicates.
+  const int32_t kNumSavedRegs = kNumJSCallerSaved + 2;
+
+  // Save all caller-save registers as this may be called from anywhere.
+  __ MultiPush(kSavedRegs | ra.bit());
+
+  // Compute the function's address for the first argument.
+  __ Dsubu(a0, ra, Operand(kReturnAddressDistanceFromFunctionStart));
+
+  // The caller's return address is above the saved temporaries.
+  // Grab that for the second argument to the hook.
+  __ Daddu(a1, sp, Operand(kNumSavedRegs * kPointerSize));
+
+  // Align the stack if necessary.
+  int frame_alignment = masm->ActivationFrameAlignment();
+  if (frame_alignment > kPointerSize) {
+    __ mov(s5, sp);
+    DCHECK(IsPowerOf2(frame_alignment));
+    __ And(sp, sp, Operand(-frame_alignment));
+  }
+
+  __ Dsubu(sp, sp, kCArgsSlotsSize);
+#if defined(V8_HOST_ARCH_MIPS) || defined(V8_HOST_ARCH_MIPS64)
+  int64_t entry_hook =
+      reinterpret_cast<int64_t>(isolate()->function_entry_hook());
+  __ li(t9, Operand(entry_hook));
+#else
+  // Under the simulator we need to indirect the entry hook through a
+  // trampoline function at a known address.
+  // It additionally takes an isolate as a third parameter.
+  __ li(a2, Operand(ExternalReference::isolate_address(isolate())));
+
+  ApiFunction dispatcher(FUNCTION_ADDR(EntryHookTrampoline));
+  __ li(t9, Operand(ExternalReference(&dispatcher,
+                                      ExternalReference::BUILTIN_CALL,
+                                      isolate())));
+#endif
+  // Call the C function through t9 to conform to the ABI for PIC.
+  __ Call(t9);
+
+  // Restore the stack pointer if needed.
+  if (frame_alignment > kPointerSize) {
+    __ mov(sp, s5);
+  } else {
+    __ Daddu(sp, sp, kCArgsSlotsSize);
+  }
+
+  // Also pop ra to get Ret(0).
+ __ MultiPop(kSavedRegs | ra.bit()); + __ Ret(); +} + + +template<class T> +static void CreateArrayDispatch(MacroAssembler* masm, + AllocationSiteOverrideMode mode) { + if (mode == DISABLE_ALLOCATION_SITES) { + T stub(masm->isolate(), GetInitialFastElementsKind(), mode); + __ TailCallStub(&stub); + } else if (mode == DONT_OVERRIDE) { + int last_index = GetSequenceIndexFromFastElementsKind( + TERMINAL_FAST_ELEMENTS_KIND); + for (int i = 0; i <= last_index; ++i) { + ElementsKind kind = GetFastElementsKindFromSequenceIndex(i); + T stub(masm->isolate(), kind); + __ TailCallStub(&stub, eq, a3, Operand(kind)); + } + + // If we reached this point there is a problem. + __ Abort(kUnexpectedElementsKindInArrayConstructor); + } else { + UNREACHABLE(); + } +} + + +static void CreateArrayDispatchOneArgument(MacroAssembler* masm, + AllocationSiteOverrideMode mode) { + // a2 - allocation site (if mode != DISABLE_ALLOCATION_SITES) + // a3 - kind (if mode != DISABLE_ALLOCATION_SITES) + // a0 - number of arguments + // a1 - constructor? + // sp[0] - last argument + Label normal_sequence; + if (mode == DONT_OVERRIDE) { + DCHECK(FAST_SMI_ELEMENTS == 0); + DCHECK(FAST_HOLEY_SMI_ELEMENTS == 1); + DCHECK(FAST_ELEMENTS == 2); + DCHECK(FAST_HOLEY_ELEMENTS == 3); + DCHECK(FAST_DOUBLE_ELEMENTS == 4); + DCHECK(FAST_HOLEY_DOUBLE_ELEMENTS == 5); + + // is the low bit set? If so, we are holey and that is good. + __ And(at, a3, Operand(1)); + __ Branch(&normal_sequence, ne, at, Operand(zero_reg)); + } + // look at the first argument + __ ld(a5, MemOperand(sp, 0)); + __ Branch(&normal_sequence, eq, a5, Operand(zero_reg)); + + if (mode == DISABLE_ALLOCATION_SITES) { + ElementsKind initial = GetInitialFastElementsKind(); + ElementsKind holey_initial = GetHoleyElementsKind(initial); + + ArraySingleArgumentConstructorStub stub_holey(masm->isolate(), + holey_initial, + DISABLE_ALLOCATION_SITES); + __ TailCallStub(&stub_holey); + + __ bind(&normal_sequence); + ArraySingleArgumentConstructorStub stub(masm->isolate(), + initial, + DISABLE_ALLOCATION_SITES); + __ TailCallStub(&stub); + } else if (mode == DONT_OVERRIDE) { + // We are going to create a holey array, but our kind is non-holey. + // Fix kind and retry (only if we have an allocation site in the slot). + __ Daddu(a3, a3, Operand(1)); + + if (FLAG_debug_code) { + __ ld(a5, FieldMemOperand(a2, 0)); + __ LoadRoot(at, Heap::kAllocationSiteMapRootIndex); + __ Assert(eq, kExpectedAllocationSite, a5, Operand(at)); + } + + // Save the resulting elements kind in type info. We can't just store a3 + // in the AllocationSite::transition_info field because elements kind is + // restricted to a portion of the field...upper bits need to be left alone. + STATIC_ASSERT(AllocationSite::ElementsKindBits::kShift == 0); + __ ld(a4, FieldMemOperand(a2, AllocationSite::kTransitionInfoOffset)); + __ Daddu(a4, a4, Operand(Smi::FromInt(kFastElementsKindPackedToHoley))); + __ sd(a4, FieldMemOperand(a2, AllocationSite::kTransitionInfoOffset)); + + + __ bind(&normal_sequence); + int last_index = GetSequenceIndexFromFastElementsKind( + TERMINAL_FAST_ELEMENTS_KIND); + for (int i = 0; i <= last_index; ++i) { + ElementsKind kind = GetFastElementsKindFromSequenceIndex(i); + ArraySingleArgumentConstructorStub stub(masm->isolate(), kind); + __ TailCallStub(&stub, eq, a3, Operand(kind)); + } + + // If we reached this point there is a problem. 
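+    // Editor's note: each TailCallStub above is conditional (taken only when
+    // a3 matches that stub's kind), so reaching the Abort below means a3
+    // held an ElementsKind outside the fast-kind sequence.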
+    __ Abort(kUnexpectedElementsKindInArrayConstructor);
+  } else {
+    UNREACHABLE();
+  }
+}
+
+
+template<class T>
+static void ArrayConstructorStubAheadOfTimeHelper(Isolate* isolate) {
+  int to_index = GetSequenceIndexFromFastElementsKind(
+      TERMINAL_FAST_ELEMENTS_KIND);
+  for (int i = 0; i <= to_index; ++i) {
+    ElementsKind kind = GetFastElementsKindFromSequenceIndex(i);
+    T stub(isolate, kind);
+    stub.GetCode();
+    if (AllocationSite::GetMode(kind) != DONT_TRACK_ALLOCATION_SITE) {
+      T stub1(isolate, kind, DISABLE_ALLOCATION_SITES);
+      stub1.GetCode();
+    }
+  }
+}
+
+
+void ArrayConstructorStubBase::GenerateStubsAheadOfTime(Isolate* isolate) {
+  ArrayConstructorStubAheadOfTimeHelper<ArrayNoArgumentConstructorStub>(
+      isolate);
+  ArrayConstructorStubAheadOfTimeHelper<ArraySingleArgumentConstructorStub>(
+      isolate);
+  ArrayConstructorStubAheadOfTimeHelper<ArrayNArgumentsConstructorStub>(
+      isolate);
+}
+
+
+void InternalArrayConstructorStubBase::GenerateStubsAheadOfTime(
+    Isolate* isolate) {
+  ElementsKind kinds[2] = { FAST_ELEMENTS, FAST_HOLEY_ELEMENTS };
+  for (int i = 0; i < 2; i++) {
+    // For internal arrays we only need a few things.
+    InternalArrayNoArgumentConstructorStub stubh1(isolate, kinds[i]);
+    stubh1.GetCode();
+    InternalArraySingleArgumentConstructorStub stubh2(isolate, kinds[i]);
+    stubh2.GetCode();
+    InternalArrayNArgumentsConstructorStub stubh3(isolate, kinds[i]);
+    stubh3.GetCode();
+  }
+}
+
+
+void ArrayConstructorStub::GenerateDispatchToArrayStub(
+    MacroAssembler* masm,
+    AllocationSiteOverrideMode mode) {
+  if (argument_count_ == ANY) {
+    Label not_zero_case, not_one_case;
+    __ And(at, a0, a0);
+    __ Branch(&not_zero_case, ne, at, Operand(zero_reg));
+    CreateArrayDispatch<ArrayNoArgumentConstructorStub>(masm, mode);
+
+    __ bind(&not_zero_case);
+    __ Branch(&not_one_case, gt, a0, Operand(1));
+    CreateArrayDispatchOneArgument(masm, mode);
+
+    __ bind(&not_one_case);
+    CreateArrayDispatch<ArrayNArgumentsConstructorStub>(masm, mode);
+  } else if (argument_count_ == NONE) {
+    CreateArrayDispatch<ArrayNoArgumentConstructorStub>(masm, mode);
+  } else if (argument_count_ == ONE) {
+    CreateArrayDispatchOneArgument(masm, mode);
+  } else if (argument_count_ == MORE_THAN_ONE) {
+    CreateArrayDispatch<ArrayNArgumentsConstructorStub>(masm, mode);
+  } else {
+    UNREACHABLE();
+  }
+}
+
+
+void ArrayConstructorStub::Generate(MacroAssembler* masm) {
+  // ----------- S t a t e -------------
+  //  -- a0 : argc (only if argument_count_ == ANY)
+  //  -- a1 : constructor
+  //  -- a2 : AllocationSite or undefined
+  //  -- sp[0] : return address
+  //  -- sp[4] : last argument
+  // -----------------------------------
+
+  if (FLAG_debug_code) {
+    // The array construct code is only set for the global and natives
+    // builtin Array functions which always have maps.
+
+    // Initial map for the builtin Array function should be a map.
+    __ ld(a4, FieldMemOperand(a1, JSFunction::kPrototypeOrInitialMapOffset));
+    // Will both indicate a NULL and a Smi.
+    __ SmiTst(a4, at);
+    __ Assert(ne, kUnexpectedInitialMapForArrayFunction,
+              at, Operand(zero_reg));
+    __ GetObjectType(a4, a4, a5);
+    __ Assert(eq, kUnexpectedInitialMapForArrayFunction,
+              a5, Operand(MAP_TYPE));
+
+    // We should either have undefined in a2 or a valid AllocationSite.
+    __ AssertUndefinedOrAllocationSite(a2, a4);
+  }
+
+  Label no_info;
+  // Get the elements kind and dispatch on that.
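+  // Editor's note: per the STATIC_ASSERT below, transition_info keeps the
+  // ElementsKind in its low bits, so after untagging the kind is recovered
+  // with a plain mask, roughly:
+  //
+  //   kind = SmiUntag(transition_info) & ElementsKindBits::kMask;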
+ __ LoadRoot(at, Heap::kUndefinedValueRootIndex); + __ Branch(&no_info, eq, a2, Operand(at)); + + __ ld(a3, FieldMemOperand(a2, AllocationSite::kTransitionInfoOffset)); + __ SmiUntag(a3); + STATIC_ASSERT(AllocationSite::ElementsKindBits::kShift == 0); + __ And(a3, a3, Operand(AllocationSite::ElementsKindBits::kMask)); + GenerateDispatchToArrayStub(masm, DONT_OVERRIDE); + + __ bind(&no_info); + GenerateDispatchToArrayStub(masm, DISABLE_ALLOCATION_SITES); +} + + +void InternalArrayConstructorStub::GenerateCase( + MacroAssembler* masm, ElementsKind kind) { + + InternalArrayNoArgumentConstructorStub stub0(isolate(), kind); + __ TailCallStub(&stub0, lo, a0, Operand(1)); + + InternalArrayNArgumentsConstructorStub stubN(isolate(), kind); + __ TailCallStub(&stubN, hi, a0, Operand(1)); + + if (IsFastPackedElementsKind(kind)) { + // We might need to create a holey array + // look at the first argument. + __ ld(at, MemOperand(sp, 0)); + + InternalArraySingleArgumentConstructorStub + stub1_holey(isolate(), GetHoleyElementsKind(kind)); + __ TailCallStub(&stub1_holey, ne, at, Operand(zero_reg)); + } + + InternalArraySingleArgumentConstructorStub stub1(isolate(), kind); + __ TailCallStub(&stub1); +} + + +void InternalArrayConstructorStub::Generate(MacroAssembler* masm) { + // ----------- S t a t e ------------- + // -- a0 : argc + // -- a1 : constructor + // -- sp[0] : return address + // -- sp[4] : last argument + // ----------------------------------- + + if (FLAG_debug_code) { + // The array construct code is only set for the global and natives + // builtin Array functions which always have maps. + + // Initial map for the builtin Array function should be a map. + __ ld(a3, FieldMemOperand(a1, JSFunction::kPrototypeOrInitialMapOffset)); + // Will both indicate a NULL and a Smi. + __ SmiTst(a3, at); + __ Assert(ne, kUnexpectedInitialMapForArrayFunction, + at, Operand(zero_reg)); + __ GetObjectType(a3, a3, a4); + __ Assert(eq, kUnexpectedInitialMapForArrayFunction, + a4, Operand(MAP_TYPE)); + } + + // Figure out the right elements kind. + __ ld(a3, FieldMemOperand(a1, JSFunction::kPrototypeOrInitialMapOffset)); + + // Load the map's "bit field 2" into a3. We only need the first byte, + // but the following bit field extraction takes care of that anyway. + __ lbu(a3, FieldMemOperand(a3, Map::kBitField2Offset)); + // Retrieve elements_kind from bit field 2. + __ DecodeField<Map::ElementsKindBits>(a3); + + if (FLAG_debug_code) { + Label done; + __ Branch(&done, eq, a3, Operand(FAST_ELEMENTS)); + __ Assert( + eq, kInvalidElementsKindForInternalArrayOrInternalPackedArray, + a3, Operand(FAST_HOLEY_ELEMENTS)); + __ bind(&done); + } + + Label fast_elements_case; + __ Branch(&fast_elements_case, eq, a3, Operand(FAST_ELEMENTS)); + GenerateCase(masm, FAST_HOLEY_ELEMENTS); + + __ bind(&fast_elements_case); + GenerateCase(masm, FAST_ELEMENTS); +} + + +void CallApiFunctionStub::Generate(MacroAssembler* masm) { + // ----------- S t a t e ------------- + // -- a0 : callee + // -- a4 : call_data + // -- a2 : holder + // -- a1 : api_function_address + // -- cp : context + // -- + // -- sp[0] : last argument + // -- ... 
+ // -- sp[(argc - 1)* 4] : first argument + // -- sp[argc * 4] : receiver + // ----------------------------------- + + Register callee = a0; + Register call_data = a4; + Register holder = a2; + Register api_function_address = a1; + Register context = cp; + + int argc = ArgumentBits::decode(bit_field_); + bool is_store = IsStoreBits::decode(bit_field_); + bool call_data_undefined = CallDataUndefinedBits::decode(bit_field_); + + typedef FunctionCallbackArguments FCA; + + STATIC_ASSERT(FCA::kContextSaveIndex == 6); + STATIC_ASSERT(FCA::kCalleeIndex == 5); + STATIC_ASSERT(FCA::kDataIndex == 4); + STATIC_ASSERT(FCA::kReturnValueOffset == 3); + STATIC_ASSERT(FCA::kReturnValueDefaultValueIndex == 2); + STATIC_ASSERT(FCA::kIsolateIndex == 1); + STATIC_ASSERT(FCA::kHolderIndex == 0); + STATIC_ASSERT(FCA::kArgsLength == 7); + + // Save context, callee and call data. + __ Push(context, callee, call_data); + // Load context from callee. + __ ld(context, FieldMemOperand(callee, JSFunction::kContextOffset)); + + Register scratch = call_data; + if (!call_data_undefined) { + __ LoadRoot(scratch, Heap::kUndefinedValueRootIndex); + } + // Push return value and default return value. + __ Push(scratch, scratch); + __ li(scratch, + Operand(ExternalReference::isolate_address(isolate()))); + // Push isolate and holder. + __ Push(scratch, holder); + + // Prepare arguments. + __ mov(scratch, sp); + + // Allocate the v8::Arguments structure in the arguments' space since + // it's not controlled by GC. + const int kApiStackSpace = 4; + + FrameScope frame_scope(masm, StackFrame::MANUAL); + __ EnterExitFrame(false, kApiStackSpace); + + DCHECK(!api_function_address.is(a0) && !scratch.is(a0)); + // a0 = FunctionCallbackInfo& + // Arguments is after the return address. + __ Daddu(a0, sp, Operand(1 * kPointerSize)); + // FunctionCallbackInfo::implicit_args_ + __ sd(scratch, MemOperand(a0, 0 * kPointerSize)); + // FunctionCallbackInfo::values_ + __ Daddu(at, scratch, Operand((FCA::kArgsLength - 1 + argc) * kPointerSize)); + __ sd(at, MemOperand(a0, 1 * kPointerSize)); + // FunctionCallbackInfo::length_ = argc + __ li(at, Operand(argc)); + __ sd(at, MemOperand(a0, 2 * kPointerSize)); + // FunctionCallbackInfo::is_construct_call = 0 + __ sd(zero_reg, MemOperand(a0, 3 * kPointerSize)); + + const int kStackUnwindSpace = argc + FCA::kArgsLength + 1; + ExternalReference thunk_ref = + ExternalReference::invoke_function_callback(isolate()); + + AllowExternalCallThatCantCauseGC scope(masm); + MemOperand context_restore_operand( + fp, (2 + FCA::kContextSaveIndex) * kPointerSize); + // Stores return the first js argument. + int return_value_offset = 0; + if (is_store) { + return_value_offset = 2 + FCA::kArgsLength; + } else { + return_value_offset = 2 + FCA::kReturnValueOffset; + } + MemOperand return_value_operand(fp, return_value_offset * kPointerSize); + + __ CallApiFunctionAndReturn(api_function_address, + thunk_ref, + kStackUnwindSpace, + return_value_operand, + &context_restore_operand); +} + + +void CallApiGetterStub::Generate(MacroAssembler* masm) { + // ----------- S t a t e ------------- + // -- sp[0] : name + // -- sp[4 - kArgsLength*4] : PropertyCallbackArguments object + // -- ... 
+ // -- a2 : api_function_address + // ----------------------------------- + + Register api_function_address = a2; + + __ mov(a0, sp); // a0 = Handle<Name> + __ Daddu(a1, a0, Operand(1 * kPointerSize)); // a1 = PCA + + const int kApiStackSpace = 1; + FrameScope frame_scope(masm, StackFrame::MANUAL); + __ EnterExitFrame(false, kApiStackSpace); + + // Create PropertyAccessorInfo instance on the stack above the exit frame with + // a1 (internal::Object** args_) as the data. + __ sd(a1, MemOperand(sp, 1 * kPointerSize)); + __ Daddu(a1, sp, Operand(1 * kPointerSize)); // a1 = AccessorInfo& + + const int kStackUnwindSpace = PropertyCallbackArguments::kArgsLength + 1; + + ExternalReference thunk_ref = + ExternalReference::invoke_accessor_getter_callback(isolate()); + __ CallApiFunctionAndReturn(api_function_address, + thunk_ref, + kStackUnwindSpace, + MemOperand(fp, 6 * kPointerSize), + NULL); +} + + +#undef __ + +} } // namespace v8::internal + +#endif // V8_TARGET_ARCH_MIPS64 diff --git a/deps/v8/src/mips64/code-stubs-mips64.h b/deps/v8/src/mips64/code-stubs-mips64.h new file mode 100644 index 00000000000..73f19cde419 --- /dev/null +++ b/deps/v8/src/mips64/code-stubs-mips64.h @@ -0,0 +1,448 @@ +// Copyright 2011 the V8 project authors. All rights reserved. +// Use of this source code is governed by a BSD-style license that can be +// found in the LICENSE file. + +#ifndef V8_MIPS_CODE_STUBS_ARM_H_ +#define V8_MIPS_CODE_STUBS_ARM_H_ + +#include "src/ic-inl.h" + + +namespace v8 { +namespace internal { + + +void ArrayNativeCode(MacroAssembler* masm, Label* call_generic_code); + + +class StoreBufferOverflowStub: public PlatformCodeStub { + public: + StoreBufferOverflowStub(Isolate* isolate, SaveFPRegsMode save_fp) + : PlatformCodeStub(isolate), save_doubles_(save_fp) {} + + void Generate(MacroAssembler* masm); + + static void GenerateFixedRegStubsAheadOfTime(Isolate* isolate); + virtual bool SometimesSetsUpAFrame() { return false; } + + private: + SaveFPRegsMode save_doubles_; + + Major MajorKey() const { return StoreBufferOverflow; } + int MinorKey() const { return (save_doubles_ == kSaveFPRegs) ? 1 : 0; } +}; + + +class StringHelper : public AllStatic { + public: + // Generate code for copying a large number of characters. This function + // is allowed to spend extra time setting up conditions to make copying + // faster. Copying of overlapping regions is not supported. + // Dest register ends at the position after the last character written. + static void GenerateCopyCharacters(MacroAssembler* masm, + Register dest, + Register src, + Register count, + Register scratch, + String::Encoding encoding); + + // Generate string hash. 
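+  // Editor's sketch (illustrative; assumes V8's usual one-at-a-time running
+  // hash, cf. StringHasher). Taken together, the three helpers below compute
+  // roughly:
+  //
+  //   hash = seed;                            // GenerateHashInit
+  //   for each character c:                   // GenerateHashAddCharacter
+  //     hash += c; hash += hash << 10; hash ^= hash >> 6;
+  //   hash += hash << 3; hash ^= hash >> 11;  // GenerateHashGetHash
+  //   hash += hash << 15;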
+ static void GenerateHashInit(MacroAssembler* masm, + Register hash, + Register character); + + static void GenerateHashAddCharacter(MacroAssembler* masm, + Register hash, + Register character); + + static void GenerateHashGetHash(MacroAssembler* masm, + Register hash); + + private: + DISALLOW_IMPLICIT_CONSTRUCTORS(StringHelper); +}; + + +class SubStringStub: public PlatformCodeStub { + public: + explicit SubStringStub(Isolate* isolate) : PlatformCodeStub(isolate) {} + + private: + Major MajorKey() const { return SubString; } + int MinorKey() const { return 0; } + + void Generate(MacroAssembler* masm); +}; + + +class StoreRegistersStateStub: public PlatformCodeStub { + public: + explicit StoreRegistersStateStub(Isolate* isolate) + : PlatformCodeStub(isolate) {} + + static void GenerateAheadOfTime(Isolate* isolate); + private: + Major MajorKey() const { return StoreRegistersState; } + int MinorKey() const { return 0; } + + void Generate(MacroAssembler* masm); +}; + +class RestoreRegistersStateStub: public PlatformCodeStub { + public: + explicit RestoreRegistersStateStub(Isolate* isolate) + : PlatformCodeStub(isolate) {} + + static void GenerateAheadOfTime(Isolate* isolate); + private: + Major MajorKey() const { return RestoreRegistersState; } + int MinorKey() const { return 0; } + + void Generate(MacroAssembler* masm); +}; + +class StringCompareStub: public PlatformCodeStub { + public: + explicit StringCompareStub(Isolate* isolate) : PlatformCodeStub(isolate) { } + + // Compare two flat ASCII strings and returns result in v0. + static void GenerateCompareFlatAsciiStrings(MacroAssembler* masm, + Register left, + Register right, + Register scratch1, + Register scratch2, + Register scratch3, + Register scratch4); + + // Compares two flat ASCII strings for equality and returns result + // in v0. + static void GenerateFlatAsciiStringEquals(MacroAssembler* masm, + Register left, + Register right, + Register scratch1, + Register scratch2, + Register scratch3); + + private: + virtual Major MajorKey() const { return StringCompare; } + virtual int MinorKey() const { return 0; } + virtual void Generate(MacroAssembler* masm); + + static void GenerateAsciiCharsCompareLoop(MacroAssembler* masm, + Register left, + Register right, + Register length, + Register scratch1, + Register scratch2, + Register scratch3, + Label* chars_not_equal); +}; + + +// This stub can convert a signed int32 to a heap number (double). It does +// not work for int32s that are in Smi range! No GC occurs during this stub +// so you don't have to set up the frame. +class WriteInt32ToHeapNumberStub : public PlatformCodeStub { + public: + WriteInt32ToHeapNumberStub(Isolate* isolate, + Register the_int, + Register the_heap_number, + Register scratch, + Register scratch2) + : PlatformCodeStub(isolate), + the_int_(the_int), + the_heap_number_(the_heap_number), + scratch_(scratch), + sign_(scratch2) { + DCHECK(IntRegisterBits::is_valid(the_int_.code())); + DCHECK(HeapNumberRegisterBits::is_valid(the_heap_number_.code())); + DCHECK(ScratchRegisterBits::is_valid(scratch_.code())); + DCHECK(SignRegisterBits::is_valid(sign_.code())); + } + + static void GenerateFixedRegStubsAheadOfTime(Isolate* isolate); + + private: + Register the_int_; + Register the_heap_number_; + Register scratch_; + Register sign_; + + // Minor key encoding in 16 bits. 
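+  // Editor's note: the four BitFields below pack one 4-bit register code
+  // each, giving the layout
+  //
+  //   bits  0- 3  the_int_.code()
+  //   bits  4- 7  the_heap_number_.code()
+  //   bits  8-11  scratch_.code()
+  //   bits 12-15  sign_.code()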
+ class IntRegisterBits: public BitField<int, 0, 4> {}; + class HeapNumberRegisterBits: public BitField<int, 4, 4> {}; + class ScratchRegisterBits: public BitField<int, 8, 4> {}; + class SignRegisterBits: public BitField<int, 12, 4> {}; + + Major MajorKey() const { return WriteInt32ToHeapNumber; } + int MinorKey() const { + // Encode the parameters in a unique 16 bit value. + return IntRegisterBits::encode(the_int_.code()) + | HeapNumberRegisterBits::encode(the_heap_number_.code()) + | ScratchRegisterBits::encode(scratch_.code()) + | SignRegisterBits::encode(sign_.code()); + } + + void Generate(MacroAssembler* masm); +}; + + +class RecordWriteStub: public PlatformCodeStub { + public: + RecordWriteStub(Isolate* isolate, + Register object, + Register value, + Register address, + RememberedSetAction remembered_set_action, + SaveFPRegsMode fp_mode) + : PlatformCodeStub(isolate), + object_(object), + value_(value), + address_(address), + remembered_set_action_(remembered_set_action), + save_fp_regs_mode_(fp_mode), + regs_(object, // An input reg. + address, // An input reg. + value) { // One scratch reg. + } + + enum Mode { + STORE_BUFFER_ONLY, + INCREMENTAL, + INCREMENTAL_COMPACTION + }; + + virtual bool SometimesSetsUpAFrame() { return false; } + + static void PatchBranchIntoNop(MacroAssembler* masm, int pos) { + const unsigned offset = masm->instr_at(pos) & kImm16Mask; + masm->instr_at_put(pos, BNE | (zero_reg.code() << kRsShift) | + (zero_reg.code() << kRtShift) | (offset & kImm16Mask)); + DCHECK(Assembler::IsBne(masm->instr_at(pos))); + } + + static void PatchNopIntoBranch(MacroAssembler* masm, int pos) { + const unsigned offset = masm->instr_at(pos) & kImm16Mask; + masm->instr_at_put(pos, BEQ | (zero_reg.code() << kRsShift) | + (zero_reg.code() << kRtShift) | (offset & kImm16Mask)); + DCHECK(Assembler::IsBeq(masm->instr_at(pos))); + } + + static Mode GetMode(Code* stub) { + Instr first_instruction = Assembler::instr_at(stub->instruction_start()); + Instr second_instruction = Assembler::instr_at(stub->instruction_start() + + 2 * Assembler::kInstrSize); + + if (Assembler::IsBeq(first_instruction)) { + return INCREMENTAL; + } + + DCHECK(Assembler::IsBne(first_instruction)); + + if (Assembler::IsBeq(second_instruction)) { + return INCREMENTAL_COMPACTION; + } + + DCHECK(Assembler::IsBne(second_instruction)); + + return STORE_BUFFER_ONLY; + } + + static void Patch(Code* stub, Mode mode) { + MacroAssembler masm(NULL, + stub->instruction_start(), + stub->instruction_size()); + switch (mode) { + case STORE_BUFFER_ONLY: + DCHECK(GetMode(stub) == INCREMENTAL || + GetMode(stub) == INCREMENTAL_COMPACTION); + PatchBranchIntoNop(&masm, 0); + PatchBranchIntoNop(&masm, 2 * Assembler::kInstrSize); + break; + case INCREMENTAL: + DCHECK(GetMode(stub) == STORE_BUFFER_ONLY); + PatchNopIntoBranch(&masm, 0); + break; + case INCREMENTAL_COMPACTION: + DCHECK(GetMode(stub) == STORE_BUFFER_ONLY); + PatchNopIntoBranch(&masm, 2 * Assembler::kInstrSize); + break; + } + DCHECK(GetMode(stub) == mode); + CpuFeatures::FlushICache(stub->instruction_start(), + 4 * Assembler::kInstrSize); + } + + private: + // This is a helper class for freeing up 3 scratch registers. The input is + // two registers that must be preserved and one scratch register provided by + // the caller. 
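+  // Editor's note: scratch1_ is chosen at construction time via
+  // GetRegisterThatIsNotOneOf(object_, address_, scratch0_), so Save() and
+  // Restore() below only ever spill that one extra register; scratch0_ is
+  // already provided by the caller.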
+ class RegisterAllocation { + public: + RegisterAllocation(Register object, + Register address, + Register scratch0) + : object_(object), + address_(address), + scratch0_(scratch0) { + DCHECK(!AreAliased(scratch0, object, address, no_reg)); + scratch1_ = GetRegisterThatIsNotOneOf(object_, address_, scratch0_); + } + + void Save(MacroAssembler* masm) { + DCHECK(!AreAliased(object_, address_, scratch1_, scratch0_)); + // We don't have to save scratch0_ because it was given to us as + // a scratch register. + masm->push(scratch1_); + } + + void Restore(MacroAssembler* masm) { + masm->pop(scratch1_); + } + + // If we have to call into C then we need to save and restore all caller- + // saved registers that were not already preserved. The scratch registers + // will be restored by other means so we don't bother pushing them here. + void SaveCallerSaveRegisters(MacroAssembler* masm, SaveFPRegsMode mode) { + masm->MultiPush((kJSCallerSaved | ra.bit()) & ~scratch1_.bit()); + if (mode == kSaveFPRegs) { + masm->MultiPushFPU(kCallerSavedFPU); + } + } + + inline void RestoreCallerSaveRegisters(MacroAssembler*masm, + SaveFPRegsMode mode) { + if (mode == kSaveFPRegs) { + masm->MultiPopFPU(kCallerSavedFPU); + } + masm->MultiPop((kJSCallerSaved | ra.bit()) & ~scratch1_.bit()); + } + + inline Register object() { return object_; } + inline Register address() { return address_; } + inline Register scratch0() { return scratch0_; } + inline Register scratch1() { return scratch1_; } + + private: + Register object_; + Register address_; + Register scratch0_; + Register scratch1_; + + friend class RecordWriteStub; + }; + + enum OnNoNeedToInformIncrementalMarker { + kReturnOnNoNeedToInformIncrementalMarker, + kUpdateRememberedSetOnNoNeedToInformIncrementalMarker + }; + + void Generate(MacroAssembler* masm); + void GenerateIncremental(MacroAssembler* masm, Mode mode); + void CheckNeedsToInformIncrementalMarker( + MacroAssembler* masm, + OnNoNeedToInformIncrementalMarker on_no_need, + Mode mode); + void InformIncrementalMarker(MacroAssembler* masm); + + Major MajorKey() const { return RecordWrite; } + + int MinorKey() const { + return ObjectBits::encode(object_.code()) | + ValueBits::encode(value_.code()) | + AddressBits::encode(address_.code()) | + RememberedSetActionBits::encode(remembered_set_action_) | + SaveFPRegsModeBits::encode(save_fp_regs_mode_); + } + + void Activate(Code* code) { + code->GetHeap()->incremental_marking()->ActivateGeneratedStub(code); + } + + class ObjectBits: public BitField<int, 0, 5> {}; + class ValueBits: public BitField<int, 5, 5> {}; + class AddressBits: public BitField<int, 10, 5> {}; + class RememberedSetActionBits: public BitField<RememberedSetAction, 15, 1> {}; + class SaveFPRegsModeBits: public BitField<SaveFPRegsMode, 16, 1> {}; + + Register object_; + Register value_; + Register address_; + RememberedSetAction remembered_set_action_; + SaveFPRegsMode save_fp_regs_mode_; + Label slow_; + RegisterAllocation regs_; +}; + + +// Trampoline stub to call into native code. To call safely into native code +// in the presence of compacting GC (which can move code objects) we need to +// keep the code which called into native pinned in the memory. 
Currently the +// simplest approach is to generate such stub early enough so it can never be +// moved by GC +class DirectCEntryStub: public PlatformCodeStub { + public: + explicit DirectCEntryStub(Isolate* isolate) : PlatformCodeStub(isolate) {} + void Generate(MacroAssembler* masm); + void GenerateCall(MacroAssembler* masm, Register target); + + private: + Major MajorKey() const { return DirectCEntry; } + int MinorKey() const { return 0; } + + bool NeedsImmovableCode() { return true; } +}; + + +class NameDictionaryLookupStub: public PlatformCodeStub { + public: + enum LookupMode { POSITIVE_LOOKUP, NEGATIVE_LOOKUP }; + + NameDictionaryLookupStub(Isolate* isolate, LookupMode mode) + : PlatformCodeStub(isolate), mode_(mode) { } + + void Generate(MacroAssembler* masm); + + static void GenerateNegativeLookup(MacroAssembler* masm, + Label* miss, + Label* done, + Register receiver, + Register properties, + Handle<Name> name, + Register scratch0); + + static void GeneratePositiveLookup(MacroAssembler* masm, + Label* miss, + Label* done, + Register elements, + Register name, + Register r0, + Register r1); + + virtual bool SometimesSetsUpAFrame() { return false; } + + private: + static const int kInlinedProbes = 4; + static const int kTotalProbes = 20; + + static const int kCapacityOffset = + NameDictionary::kHeaderSize + + NameDictionary::kCapacityIndex * kPointerSize; + + static const int kElementsStartOffset = + NameDictionary::kHeaderSize + + NameDictionary::kElementsStartIndex * kPointerSize; + + Major MajorKey() const { return NameDictionaryLookup; } + + int MinorKey() const { return LookupModeBits::encode(mode_); } + + class LookupModeBits: public BitField<LookupMode, 0, 1> {}; + + LookupMode mode_; +}; + + +} } // namespace v8::internal + +#endif // V8_MIPS_CODE_STUBS_ARM_H_ diff --git a/deps/v8/src/mips64/codegen-mips64.cc b/deps/v8/src/mips64/codegen-mips64.cc new file mode 100644 index 00000000000..8533ede55c7 --- /dev/null +++ b/deps/v8/src/mips64/codegen-mips64.cc @@ -0,0 +1,1142 @@ +// Copyright 2012 the V8 project authors. All rights reserved. +// Use of this source code is governed by a BSD-style license that can be +// found in the LICENSE file. + +#include "src/v8.h" + +#if V8_TARGET_ARCH_MIPS64 + +#include "src/codegen.h" +#include "src/macro-assembler.h" +#include "src/mips64/simulator-mips64.h" + +namespace v8 { +namespace internal { + + +#define __ masm. + + +#if defined(USE_SIMULATOR) +byte* fast_exp_mips_machine_code = NULL; +double fast_exp_simulator(double x) { + return Simulator::current(Isolate::Current())->CallFP( + fast_exp_mips_machine_code, x, 0); +} +#endif + + +UnaryMathFunction CreateExpFunction() { + if (!FLAG_fast_math) return &std::exp; + size_t actual_size; + byte* buffer = + static_cast<byte*>(base::OS::Allocate(1 * KB, &actual_size, true)); + if (buffer == NULL) return &std::exp; + ExternalReference::InitializeMathExpData(); + + MacroAssembler masm(NULL, buffer, static_cast<int>(actual_size)); + + { + DoubleRegister input = f12; + DoubleRegister result = f0; + DoubleRegister double_scratch1 = f4; + DoubleRegister double_scratch2 = f6; + Register temp1 = a4; + Register temp2 = a5; + Register temp3 = a6; + + if (!IsMipsSoftFloatABI) { + // Input value is in f12 anyway, nothing to do. 
+    } else {
+      __ Move(input, a0, a1);
+    }
+    __ Push(temp3, temp2, temp1);
+    MathExpGenerator::EmitMathExp(
+        &masm, input, result, double_scratch1, double_scratch2,
+        temp1, temp2, temp3);
+    __ Pop(temp3, temp2, temp1);
+    if (!IsMipsSoftFloatABI) {
+      // Result is already in f0, nothing to do.
+    } else {
+      __ Move(v0, v1, result);
+    }
+    __ Ret();
+  }
+
+  CodeDesc desc;
+  masm.GetCode(&desc);
+  DCHECK(!RelocInfo::RequiresRelocation(desc));
+
+  CpuFeatures::FlushICache(buffer, actual_size);
+  base::OS::ProtectCode(buffer, actual_size);
+
+#if !defined(USE_SIMULATOR)
+  return FUNCTION_CAST<UnaryMathFunction>(buffer);
+#else
+  fast_exp_mips_machine_code = buffer;
+  return &fast_exp_simulator;
+#endif
+}
+
+
+#if defined(V8_HOST_ARCH_MIPS)
+MemCopyUint8Function CreateMemCopyUint8Function(MemCopyUint8Function stub) {
+#if defined(USE_SIMULATOR)
+  return stub;
+#else
+
+  size_t actual_size;
+  byte* buffer =
+      static_cast<byte*>(base::OS::Allocate(3 * KB, &actual_size, true));
+  if (buffer == NULL) return stub;
+
+  // This code assumes that cache lines are 32 bytes and if the cache line is
+  // larger it will not work correctly.
+  MacroAssembler masm(NULL, buffer, static_cast<int>(actual_size));
+
+  {
+    Label lastb, unaligned, aligned, chkw,
+          loop16w, chk1w, wordCopy_loop, skip_pref, lastbloop,
+          leave, ua_chk16w, ua_loop16w, ua_skip_pref, ua_chkw,
+          ua_chk1w, ua_wordCopy_loop, ua_smallCopy, ua_smallCopy_loop;
+
+    // The size of each prefetch.
+    uint32_t pref_chunk = 32;
+    // The maximum size of a prefetch; it must not be less than pref_chunk.
+    // If the real size of a prefetch is greater than max_pref_size and
+    // the kPrefHintPrepareForStore hint is used, the code will not work
+    // correctly.
+    uint32_t max_pref_size = 128;
+    DCHECK(pref_chunk < max_pref_size);
+
+    // pref_limit is set based on the fact that we never use an offset
+    // greater than 5 on a store pref and that a single pref can
+    // never be larger than max_pref_size.
+    uint32_t pref_limit = (5 * pref_chunk) + max_pref_size;
+    int32_t pref_hint_load = kPrefHintLoadStreamed;
+    int32_t pref_hint_store = kPrefHintPrepareForStore;
+    uint32_t loadstore_chunk = 4;
+
+    // The initial prefetches may fetch bytes that are before the buffer being
+    // copied. Start copies with an offset of 4 to avoid this situation when
+    // using kPrefHintPrepareForStore.
+    DCHECK(pref_hint_store != kPrefHintPrepareForStore ||
+           pref_chunk * 4 >= max_pref_size);
+    // If the size is less than 8, go to lastb. Regardless of size,
+    // copy dst pointer to v0 for the return value.
+    __ slti(a6, a2, 2 * loadstore_chunk);
+    __ bne(a6, zero_reg, &lastb);
+    __ mov(v0, a0);  // In delay slot.
+
+    // If src and dst have different alignments, go to unaligned, if they
+    // have the same alignment (but are not actually aligned) do a partial
+    // load/store to make them aligned. If they are both already aligned
+    // we can start copying at aligned.
+    __ xor_(t8, a1, a0);
+    __ andi(t8, t8, loadstore_chunk - 1);  // t8 is a0/a1 word-displacement.
+    __ bne(t8, zero_reg, &unaligned);
+    __ subu(a3, zero_reg, a0);  // In delay slot.
+
+    __ andi(a3, a3, loadstore_chunk - 1);  // Copy a3 bytes to align a0/a1.
+    __ beq(a3, zero_reg, &aligned);  // Already aligned.
+    __ subu(a2, a2, a3);  // In delay slot. a2 is the remaining byte count.
+
+    __ lwr(t8, MemOperand(a1));
+    __ addu(a1, a1, a3);
+    __ swr(t8, MemOperand(a0));
+    __ addu(a0, a0, a3);
+
+    // Now dst/src are both aligned to (word) aligned addresses. Set a2 to
+    // count how many bytes we have to copy after all the 64 byte chunks are
+    // copied and a3 to the dst pointer after all the 64 byte chunks have been
+    // copied. We will loop, incrementing a0 and a1 until a0 equals a3.
+    __ bind(&aligned);
+    __ andi(t8, a2, 0x3f);
+    __ beq(a2, t8, &chkw);  // Less than 64?
+    __ subu(a3, a2, t8);  // In delay slot.
+    __ addu(a3, a0, a3);  // Now a3 is the final dst after loop.
+
+    // When in the loop we prefetch with the kPrefHintPrepareForStore hint;
+    // in this case a0+x should be past the "a4-32" address. This means:
+    // for x=128 the last "safe" a0 address is "a4-160". Alternatively, for
+    // x=64 the last "safe" a0 address is "a4-96". In the current version we
+    // will use "pref hint, 128(a0)", so "a4-160" is the limit.
+    if (pref_hint_store == kPrefHintPrepareForStore) {
+      __ addu(a4, a0, a2);  // a4 is the "past the end" address.
+      __ Subu(t9, a4, pref_limit);  // t9 is the "last safe pref" address.
+    }
+
+    __ Pref(pref_hint_load, MemOperand(a1, 0 * pref_chunk));
+    __ Pref(pref_hint_load, MemOperand(a1, 1 * pref_chunk));
+    __ Pref(pref_hint_load, MemOperand(a1, 2 * pref_chunk));
+    __ Pref(pref_hint_load, MemOperand(a1, 3 * pref_chunk));
+
+    if (pref_hint_store != kPrefHintPrepareForStore) {
+      __ Pref(pref_hint_store, MemOperand(a0, 1 * pref_chunk));
+      __ Pref(pref_hint_store, MemOperand(a0, 2 * pref_chunk));
+      __ Pref(pref_hint_store, MemOperand(a0, 3 * pref_chunk));
+    }
+    __ bind(&loop16w);
+    __ lw(a4, MemOperand(a1));
+
+    if (pref_hint_store == kPrefHintPrepareForStore) {
+      __ sltu(v1, t9, a0);  // If a0 > t9, don't use next prefetch.
+      __ Branch(USE_DELAY_SLOT, &skip_pref, gt, v1, Operand(zero_reg));
+    }
+    __ lw(a5, MemOperand(a1, 1, loadstore_chunk));  // Maybe in delay slot.
+
+    __ Pref(pref_hint_store, MemOperand(a0, 4 * pref_chunk));
+    __ Pref(pref_hint_store, MemOperand(a0, 5 * pref_chunk));
+
+    __ bind(&skip_pref);
+    __ lw(a6, MemOperand(a1, 2, loadstore_chunk));
+    __ lw(a7, MemOperand(a1, 3, loadstore_chunk));
+    __ lw(t0, MemOperand(a1, 4, loadstore_chunk));
+    __ lw(t1, MemOperand(a1, 5, loadstore_chunk));
+    __ lw(t2, MemOperand(a1, 6, loadstore_chunk));
+    __ lw(t3, MemOperand(a1, 7, loadstore_chunk));
+    __ Pref(pref_hint_load, MemOperand(a1, 4 * pref_chunk));
+
+    __ sw(a4, MemOperand(a0));
+    __ sw(a5, MemOperand(a0, 1, loadstore_chunk));
+    __ sw(a6, MemOperand(a0, 2, loadstore_chunk));
+    __ sw(a7, MemOperand(a0, 3, loadstore_chunk));
+    __ sw(t0, MemOperand(a0, 4, loadstore_chunk));
+    __ sw(t1, MemOperand(a0, 5, loadstore_chunk));
+    __ sw(t2, MemOperand(a0, 6, loadstore_chunk));
+    __ sw(t3, MemOperand(a0, 7, loadstore_chunk));
+
+    __ lw(a4, MemOperand(a1, 8, loadstore_chunk));
+    __ lw(a5, MemOperand(a1, 9, loadstore_chunk));
+    __ lw(a6, MemOperand(a1, 10, loadstore_chunk));
+    __ lw(a7, MemOperand(a1, 11, loadstore_chunk));
+    __ lw(t0, MemOperand(a1, 12, loadstore_chunk));
+    __ lw(t1, MemOperand(a1, 13, loadstore_chunk));
+    __ lw(t2, MemOperand(a1, 14, loadstore_chunk));
+    __ lw(t3, MemOperand(a1, 15, loadstore_chunk));
+    __ Pref(pref_hint_load, MemOperand(a1, 5 * pref_chunk));
+
+    __ sw(a4, MemOperand(a0, 8, loadstore_chunk));
+    __ sw(a5, MemOperand(a0, 9, loadstore_chunk));
+    __ sw(a6, MemOperand(a0, 10, loadstore_chunk));
+    __ sw(a7, MemOperand(a0, 11, loadstore_chunk));
+    __ sw(t0, MemOperand(a0, 12, loadstore_chunk));
+    __ sw(t1, MemOperand(a0, 13, loadstore_chunk));
+    __ sw(t2, MemOperand(a0, 14, loadstore_chunk));
+    __ sw(t3, MemOperand(a0, 15, loadstore_chunk));
+    __ addiu(a0, a0, 16 * loadstore_chunk);
+    __ bne(a0, a3, &loop16w);
+    __ addiu(a1, a1, 16 * loadstore_chunk);  // In delay slot.
+    __ mov(a2, t8);
+
+    // Here we have src and dest word-aligned but less than 64 bytes to go.
+    // Check for a 32 byte chunk and copy if there is one. Otherwise jump
+    // down to chk1w to handle the tail end of the copy.
+    __ bind(&chkw);
+    __ Pref(pref_hint_load, MemOperand(a1, 0 * pref_chunk));
+    __ andi(t8, a2, 0x1f);
+    __ beq(a2, t8, &chk1w);  // Less than 32?
+    __ nop();  // In delay slot.
+    __ lw(a4, MemOperand(a1));
+    __ lw(a5, MemOperand(a1, 1, loadstore_chunk));
+    __ lw(a6, MemOperand(a1, 2, loadstore_chunk));
+    __ lw(a7, MemOperand(a1, 3, loadstore_chunk));
+    __ lw(t0, MemOperand(a1, 4, loadstore_chunk));
+    __ lw(t1, MemOperand(a1, 5, loadstore_chunk));
+    __ lw(t2, MemOperand(a1, 6, loadstore_chunk));
+    __ lw(t3, MemOperand(a1, 7, loadstore_chunk));
+    __ addiu(a1, a1, 8 * loadstore_chunk);
+    __ sw(a4, MemOperand(a0));
+    __ sw(a5, MemOperand(a0, 1, loadstore_chunk));
+    __ sw(a6, MemOperand(a0, 2, loadstore_chunk));
+    __ sw(a7, MemOperand(a0, 3, loadstore_chunk));
+    __ sw(t0, MemOperand(a0, 4, loadstore_chunk));
+    __ sw(t1, MemOperand(a0, 5, loadstore_chunk));
+    __ sw(t2, MemOperand(a0, 6, loadstore_chunk));
+    __ sw(t3, MemOperand(a0, 7, loadstore_chunk));
+    __ addiu(a0, a0, 8 * loadstore_chunk);
+
+    // Here we have less than 32 bytes to copy. Set up for a loop to copy
+    // one word at a time. Set a2 to count how many bytes we have to copy
+    // after all the word chunks are copied and a3 to the dst pointer after
+    // all the word chunks have been copied. We will loop, incrementing a0
+    // and a1 until a0 equals a3.
+    __ bind(&chk1w);
+    __ andi(a2, t8, loadstore_chunk - 1);
+    __ beq(a2, t8, &lastb);
+    __ subu(a3, t8, a2);  // In delay slot.
+    __ addu(a3, a0, a3);
+
+    __ bind(&wordCopy_loop);
+    __ lw(a7, MemOperand(a1));
+    __ addiu(a0, a0, loadstore_chunk);
+    __ addiu(a1, a1, loadstore_chunk);
+    __ bne(a0, a3, &wordCopy_loop);
+    __ sw(a7, MemOperand(a0, -1, loadstore_chunk));  // In delay slot.
+
+    __ bind(&lastb);
+    __ Branch(&leave, le, a2, Operand(zero_reg));
+    __ addu(a3, a0, a2);
+
+    __ bind(&lastbloop);
+    __ lb(v1, MemOperand(a1));
+    __ addiu(a0, a0, 1);
+    __ addiu(a1, a1, 1);
+    __ bne(a0, a3, &lastbloop);
+    __ sb(v1, MemOperand(a0, -1));  // In delay slot.
+
+    __ bind(&leave);
+    __ jr(ra);
+    __ nop();
+
+    // Unaligned case. Only the dst gets aligned so we need to do partial
+    // loads of the source followed by normal stores to the dst (once we
+    // have aligned the destination).
+    __ bind(&unaligned);
+    __ andi(a3, a3, loadstore_chunk - 1);  // Copy a3 bytes to align a0/a1.
+    __ beq(a3, zero_reg, &ua_chk16w);
+    __ subu(a2, a2, a3);  // In delay slot.
+
+    __ lwr(v1, MemOperand(a1));
+    __ lwl(v1,
+           MemOperand(a1, 1, loadstore_chunk, MemOperand::offset_minus_one));
+    __ addu(a1, a1, a3);
+    __ swr(v1, MemOperand(a0));
+    __ addu(a0, a0, a3);
+
+    // Now the dst (but not the source) is aligned. Set a2 to count how many
+    // bytes we have to copy after all the 64 byte chunks are copied and a3 to
+    // the dst pointer after all the 64 byte chunks have been copied. We will
+    // loop, incrementing a0 and a1 until a0 equals a3.
+    __ bind(&ua_chk16w);
+    __ andi(t8, a2, 0x3f);
+    __ beq(a2, t8, &ua_chkw);
+    __ subu(a3, a2, t8);  // In delay slot.
+ __ addu(a3, a0, a3); + + if (pref_hint_store == kPrefHintPrepareForStore) { + __ addu(a4, a0, a2); + __ Subu(t9, a4, pref_limit); + } + + __ Pref(pref_hint_load, MemOperand(a1, 0 * pref_chunk)); + __ Pref(pref_hint_load, MemOperand(a1, 1 * pref_chunk)); + __ Pref(pref_hint_load, MemOperand(a1, 2 * pref_chunk)); + + if (pref_hint_store != kPrefHintPrepareForStore) { + __ Pref(pref_hint_store, MemOperand(a0, 1 * pref_chunk)); + __ Pref(pref_hint_store, MemOperand(a0, 2 * pref_chunk)); + __ Pref(pref_hint_store, MemOperand(a0, 3 * pref_chunk)); + } + + __ bind(&ua_loop16w); + __ Pref(pref_hint_load, MemOperand(a1, 3 * pref_chunk)); + __ lwr(a4, MemOperand(a1)); + __ lwr(a5, MemOperand(a1, 1, loadstore_chunk)); + __ lwr(a6, MemOperand(a1, 2, loadstore_chunk)); + + if (pref_hint_store == kPrefHintPrepareForStore) { + __ sltu(v1, t9, a0); + __ Branch(USE_DELAY_SLOT, &ua_skip_pref, gt, v1, Operand(zero_reg)); + } + __ lwr(a7, MemOperand(a1, 3, loadstore_chunk)); // Maybe in delay slot. + + __ Pref(pref_hint_store, MemOperand(a0, 4 * pref_chunk)); + __ Pref(pref_hint_store, MemOperand(a0, 5 * pref_chunk)); + + __ bind(&ua_skip_pref); + __ lwr(t0, MemOperand(a1, 4, loadstore_chunk)); + __ lwr(t1, MemOperand(a1, 5, loadstore_chunk)); + __ lwr(t2, MemOperand(a1, 6, loadstore_chunk)); + __ lwr(t3, MemOperand(a1, 7, loadstore_chunk)); + __ lwl(a4, + MemOperand(a1, 1, loadstore_chunk, MemOperand::offset_minus_one)); + __ lwl(a5, + MemOperand(a1, 2, loadstore_chunk, MemOperand::offset_minus_one)); + __ lwl(a6, + MemOperand(a1, 3, loadstore_chunk, MemOperand::offset_minus_one)); + __ lwl(a7, + MemOperand(a1, 4, loadstore_chunk, MemOperand::offset_minus_one)); + __ lwl(t0, + MemOperand(a1, 5, loadstore_chunk, MemOperand::offset_minus_one)); + __ lwl(t1, + MemOperand(a1, 6, loadstore_chunk, MemOperand::offset_minus_one)); + __ lwl(t2, + MemOperand(a1, 7, loadstore_chunk, MemOperand::offset_minus_one)); + __ lwl(t3, + MemOperand(a1, 8, loadstore_chunk, MemOperand::offset_minus_one)); + __ Pref(pref_hint_load, MemOperand(a1, 4 * pref_chunk)); + __ sw(a4, MemOperand(a0)); + __ sw(a5, MemOperand(a0, 1, loadstore_chunk)); + __ sw(a6, MemOperand(a0, 2, loadstore_chunk)); + __ sw(a7, MemOperand(a0, 3, loadstore_chunk)); + __ sw(t0, MemOperand(a0, 4, loadstore_chunk)); + __ sw(t1, MemOperand(a0, 5, loadstore_chunk)); + __ sw(t2, MemOperand(a0, 6, loadstore_chunk)); + __ sw(t3, MemOperand(a0, 7, loadstore_chunk)); + __ lwr(a4, MemOperand(a1, 8, loadstore_chunk)); + __ lwr(a5, MemOperand(a1, 9, loadstore_chunk)); + __ lwr(a6, MemOperand(a1, 10, loadstore_chunk)); + __ lwr(a7, MemOperand(a1, 11, loadstore_chunk)); + __ lwr(t0, MemOperand(a1, 12, loadstore_chunk)); + __ lwr(t1, MemOperand(a1, 13, loadstore_chunk)); + __ lwr(t2, MemOperand(a1, 14, loadstore_chunk)); + __ lwr(t3, MemOperand(a1, 15, loadstore_chunk)); + __ lwl(a4, + MemOperand(a1, 9, loadstore_chunk, MemOperand::offset_minus_one)); + __ lwl(a5, + MemOperand(a1, 10, loadstore_chunk, MemOperand::offset_minus_one)); + __ lwl(a6, + MemOperand(a1, 11, loadstore_chunk, MemOperand::offset_minus_one)); + __ lwl(a7, + MemOperand(a1, 12, loadstore_chunk, MemOperand::offset_minus_one)); + __ lwl(t0, + MemOperand(a1, 13, loadstore_chunk, MemOperand::offset_minus_one)); + __ lwl(t1, + MemOperand(a1, 14, loadstore_chunk, MemOperand::offset_minus_one)); + __ lwl(t2, + MemOperand(a1, 15, loadstore_chunk, MemOperand::offset_minus_one)); + __ lwl(t3, + MemOperand(a1, 16, loadstore_chunk, MemOperand::offset_minus_one)); + __ Pref(pref_hint_load, MemOperand(a1, 5 * 
pref_chunk)); + __ sw(a4, MemOperand(a0, 8, loadstore_chunk)); + __ sw(a5, MemOperand(a0, 9, loadstore_chunk)); + __ sw(a6, MemOperand(a0, 10, loadstore_chunk)); + __ sw(a7, MemOperand(a0, 11, loadstore_chunk)); + __ sw(t0, MemOperand(a0, 12, loadstore_chunk)); + __ sw(t1, MemOperand(a0, 13, loadstore_chunk)); + __ sw(t2, MemOperand(a0, 14, loadstore_chunk)); + __ sw(t3, MemOperand(a0, 15, loadstore_chunk)); + __ addiu(a0, a0, 16 * loadstore_chunk); + __ bne(a0, a3, &ua_loop16w); + __ addiu(a1, a1, 16 * loadstore_chunk); // In delay slot. + __ mov(a2, t8); + + // Here less than 64-bytes. Check for + // a 32 byte chunk and copy if there is one. Otherwise jump down to + // ua_chk1w to handle the tail end of the copy. + __ bind(&ua_chkw); + __ Pref(pref_hint_load, MemOperand(a1)); + __ andi(t8, a2, 0x1f); + + __ beq(a2, t8, &ua_chk1w); + __ nop(); // In delay slot. + __ lwr(a4, MemOperand(a1)); + __ lwr(a5, MemOperand(a1, 1, loadstore_chunk)); + __ lwr(a6, MemOperand(a1, 2, loadstore_chunk)); + __ lwr(a7, MemOperand(a1, 3, loadstore_chunk)); + __ lwr(t0, MemOperand(a1, 4, loadstore_chunk)); + __ lwr(t1, MemOperand(a1, 5, loadstore_chunk)); + __ lwr(t2, MemOperand(a1, 6, loadstore_chunk)); + __ lwr(t3, MemOperand(a1, 7, loadstore_chunk)); + __ lwl(a4, + MemOperand(a1, 1, loadstore_chunk, MemOperand::offset_minus_one)); + __ lwl(a5, + MemOperand(a1, 2, loadstore_chunk, MemOperand::offset_minus_one)); + __ lwl(a6, + MemOperand(a1, 3, loadstore_chunk, MemOperand::offset_minus_one)); + __ lwl(a7, + MemOperand(a1, 4, loadstore_chunk, MemOperand::offset_minus_one)); + __ lwl(t0, + MemOperand(a1, 5, loadstore_chunk, MemOperand::offset_minus_one)); + __ lwl(t1, + MemOperand(a1, 6, loadstore_chunk, MemOperand::offset_minus_one)); + __ lwl(t2, + MemOperand(a1, 7, loadstore_chunk, MemOperand::offset_minus_one)); + __ lwl(t3, + MemOperand(a1, 8, loadstore_chunk, MemOperand::offset_minus_one)); + __ addiu(a1, a1, 8 * loadstore_chunk); + __ sw(a4, MemOperand(a0)); + __ sw(a5, MemOperand(a0, 1, loadstore_chunk)); + __ sw(a6, MemOperand(a0, 2, loadstore_chunk)); + __ sw(a7, MemOperand(a0, 3, loadstore_chunk)); + __ sw(t0, MemOperand(a0, 4, loadstore_chunk)); + __ sw(t1, MemOperand(a0, 5, loadstore_chunk)); + __ sw(t2, MemOperand(a0, 6, loadstore_chunk)); + __ sw(t3, MemOperand(a0, 7, loadstore_chunk)); + __ addiu(a0, a0, 8 * loadstore_chunk); + + // Less than 32 bytes to copy. Set up for a loop to + // copy one word at a time. + __ bind(&ua_chk1w); + __ andi(a2, t8, loadstore_chunk - 1); + __ beq(a2, t8, &ua_smallCopy); + __ subu(a3, t8, a2); // In delay slot. + __ addu(a3, a0, a3); + + __ bind(&ua_wordCopy_loop); + __ lwr(v1, MemOperand(a1)); + __ lwl(v1, + MemOperand(a1, 1, loadstore_chunk, MemOperand::offset_minus_one)); + __ addiu(a0, a0, loadstore_chunk); + __ addiu(a1, a1, loadstore_chunk); + __ bne(a0, a3, &ua_wordCopy_loop); + __ sw(v1, MemOperand(a0, -1, loadstore_chunk)); // In delay slot. + + // Copy the last 8 bytes. + __ bind(&ua_smallCopy); + __ beq(a2, zero_reg, &leave); + __ addu(a3, a0, a2); // In delay slot. + + __ bind(&ua_smallCopy_loop); + __ lb(v1, MemOperand(a1)); + __ addiu(a0, a0, 1); + __ addiu(a1, a1, 1); + __ bne(a0, a3, &ua_smallCopy_loop); + __ sb(v1, MemOperand(a0, -1)); // In delay slot. 
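+    // Editor's note: on MIPS the instruction directly after a branch or jump
+    // (its delay slot) executes before the branch takes effect, which is why
+    // useful work is hoisted into the slots marked "In delay slot" above and
+    // why the plain jr below is padded with a nop.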
+ + __ jr(ra); + __ nop(); + } + CodeDesc desc; + masm.GetCode(&desc); + DCHECK(!RelocInfo::RequiresRelocation(desc)); + + CpuFeatures::FlushICache(buffer, actual_size); + base::OS::ProtectCode(buffer, actual_size); + return FUNCTION_CAST<MemCopyUint8Function>(buffer); +#endif +} +#endif + +UnaryMathFunction CreateSqrtFunction() { +#if defined(USE_SIMULATOR) + return &std::sqrt; +#else + size_t actual_size; + byte* buffer = + static_cast<byte*>(base::OS::Allocate(1 * KB, &actual_size, true)); + if (buffer == NULL) return &std::sqrt; + + MacroAssembler masm(NULL, buffer, static_cast<int>(actual_size)); + + __ MovFromFloatParameter(f12); + __ sqrt_d(f0, f12); + __ MovToFloatResult(f0); + __ Ret(); + + CodeDesc desc; + masm.GetCode(&desc); + DCHECK(!RelocInfo::RequiresRelocation(desc)); + + CpuFeatures::FlushICache(buffer, actual_size); + base::OS::ProtectCode(buffer, actual_size); + return FUNCTION_CAST<UnaryMathFunction>(buffer); +#endif +} + +#undef __ + + +// ------------------------------------------------------------------------- +// Platform-specific RuntimeCallHelper functions. + +void StubRuntimeCallHelper::BeforeCall(MacroAssembler* masm) const { + masm->EnterFrame(StackFrame::INTERNAL); + DCHECK(!masm->has_frame()); + masm->set_has_frame(true); +} + + +void StubRuntimeCallHelper::AfterCall(MacroAssembler* masm) const { + masm->LeaveFrame(StackFrame::INTERNAL); + DCHECK(masm->has_frame()); + masm->set_has_frame(false); +} + + +// ------------------------------------------------------------------------- +// Code generators + +#define __ ACCESS_MASM(masm) + +void ElementsTransitionGenerator::GenerateMapChangeElementsTransition( + MacroAssembler* masm, + Register receiver, + Register key, + Register value, + Register target_map, + AllocationSiteMode mode, + Label* allocation_memento_found) { + Register scratch_elements = a4; + DCHECK(!AreAliased(receiver, key, value, target_map, + scratch_elements)); + + if (mode == TRACK_ALLOCATION_SITE) { + __ JumpIfJSArrayHasAllocationMemento( + receiver, scratch_elements, allocation_memento_found); + } + + // Set transitioned map. + __ sd(target_map, FieldMemOperand(receiver, HeapObject::kMapOffset)); + __ RecordWriteField(receiver, + HeapObject::kMapOffset, + target_map, + t1, + kRAHasNotBeenSaved, + kDontSaveFPRegs, + EMIT_REMEMBERED_SET, + OMIT_SMI_CHECK); +} + + +void ElementsTransitionGenerator::GenerateSmiToDouble( + MacroAssembler* masm, + Register receiver, + Register key, + Register value, + Register target_map, + AllocationSiteMode mode, + Label* fail) { + // Register ra contains the return address. + Label loop, entry, convert_hole, gc_required, only_change_map, done; + Register elements = a4; + Register length = a5; + Register array = a6; + Register array_end = array; + + // target_map parameter can be clobbered. + Register scratch1 = target_map; + Register scratch2 = t1; + Register scratch3 = a7; + + // Verify input registers don't conflict with locals. + DCHECK(!AreAliased(receiver, key, value, target_map, + elements, length, array, scratch2)); + + Register scratch = t2; + if (mode == TRACK_ALLOCATION_SITE) { + __ JumpIfJSArrayHasAllocationMemento(receiver, elements, fail); + } + + // Check for empty arrays, which only require a map transition and no changes + // to the backing store. 
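+  // Editor's note: an empty JSArray points its elements field at the shared
+  // empty_fixed_array root, so there is no backing store to convert and the
+  // comparison below is sufficient to take the map-only fast path.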
+ __ ld(elements, FieldMemOperand(receiver, JSObject::kElementsOffset)); + __ LoadRoot(at, Heap::kEmptyFixedArrayRootIndex); + __ Branch(&only_change_map, eq, at, Operand(elements)); + + __ push(ra); + __ ld(length, FieldMemOperand(elements, FixedArray::kLengthOffset)); + // elements: source FixedArray + // length: number of elements (smi-tagged) + + // Allocate new FixedDoubleArray. + __ SmiScale(scratch, length, kDoubleSizeLog2); + __ Daddu(scratch, scratch, FixedDoubleArray::kHeaderSize); + __ Allocate(scratch, array, t3, scratch2, &gc_required, DOUBLE_ALIGNMENT); + // array: destination FixedDoubleArray, not tagged as heap object + + // Set destination FixedDoubleArray's length and map. + __ LoadRoot(scratch2, Heap::kFixedDoubleArrayMapRootIndex); + __ sd(length, MemOperand(array, FixedDoubleArray::kLengthOffset)); + // Update receiver's map. + __ sd(scratch2, MemOperand(array, HeapObject::kMapOffset)); + + __ sd(target_map, FieldMemOperand(receiver, HeapObject::kMapOffset)); + __ RecordWriteField(receiver, + HeapObject::kMapOffset, + target_map, + scratch2, + kRAHasBeenSaved, + kDontSaveFPRegs, + OMIT_REMEMBERED_SET, + OMIT_SMI_CHECK); + // Replace receiver's backing store with newly created FixedDoubleArray. + __ Daddu(scratch1, array, Operand(kHeapObjectTag)); + __ sd(scratch1, FieldMemOperand(a2, JSObject::kElementsOffset)); + __ RecordWriteField(receiver, + JSObject::kElementsOffset, + scratch1, + scratch2, + kRAHasBeenSaved, + kDontSaveFPRegs, + EMIT_REMEMBERED_SET, + OMIT_SMI_CHECK); + + + // Prepare for conversion loop. + __ Daddu(scratch1, elements, + Operand(FixedArray::kHeaderSize - kHeapObjectTag)); + __ Daddu(scratch3, array, Operand(FixedDoubleArray::kHeaderSize)); + __ SmiScale(array_end, length, kDoubleSizeLog2); + __ Daddu(array_end, array_end, scratch3); + + // Repurpose registers no longer in use. + Register hole_lower = elements; + Register hole_upper = length; + __ li(hole_lower, Operand(kHoleNanLower32)); + // scratch1: begin of source FixedArray element fields, not tagged + // hole_lower: kHoleNanLower32 + // hole_upper: kHoleNanUpper32 + // array_end: end of destination FixedDoubleArray, not tagged + // scratch3: begin of FixedDoubleArray element fields, not tagged + __ Branch(USE_DELAY_SLOT, &entry); + __ li(hole_upper, Operand(kHoleNanUpper32)); // In delay slot. + + __ bind(&only_change_map); + __ sd(target_map, FieldMemOperand(receiver, HeapObject::kMapOffset)); + __ RecordWriteField(receiver, + HeapObject::kMapOffset, + target_map, + scratch2, + kRAHasBeenSaved, + kDontSaveFPRegs, + OMIT_REMEMBERED_SET, + OMIT_SMI_CHECK); + __ Branch(&done); + + // Call into runtime if GC is required. + __ bind(&gc_required); + __ ld(ra, MemOperand(sp, 0)); + __ Branch(USE_DELAY_SLOT, fail); + __ daddiu(sp, sp, kPointerSize); // In delay slot. + + // Convert and copy elements. + __ bind(&loop); + __ ld(scratch2, MemOperand(scratch1)); + __ Daddu(scratch1, scratch1, kIntSize); + // scratch2: current element + __ JumpIfNotSmi(scratch2, &convert_hole); + __ SmiUntag(scratch2); + + // Normal smi, convert to double and store. + __ mtc1(scratch2, f0); + __ cvt_d_w(f0, f0); + __ sdc1(f0, MemOperand(scratch3)); + __ Branch(USE_DELAY_SLOT, &entry); + __ daddiu(scratch3, scratch3, kDoubleSize); // In delay slot. + + // Hole found, store the-hole NaN. + __ bind(&convert_hole); + if (FLAG_debug_code) { + // Restore a "smi-untagged" heap object. 
+    __ Or(scratch2, scratch2, Operand(1));
+    __ LoadRoot(at, Heap::kTheHoleValueRootIndex);
+    __ Assert(eq, kObjectFoundInSmiOnlyArray, at, Operand(scratch2));
+  }
+  // mantissa
+  __ sw(hole_lower, MemOperand(scratch3));
+  // exponent
+  __ sw(hole_upper, MemOperand(scratch3, kIntSize));
+  __ Daddu(scratch3, scratch3, kDoubleSize);
+
+  __ bind(&entry);
+  __ Branch(&loop, lt, scratch3, Operand(array_end));
+
+  __ bind(&done);
+  __ pop(ra);
+}
+
+
+void ElementsTransitionGenerator::GenerateDoubleToObject(
+    MacroAssembler* masm,
+    Register receiver,
+    Register key,
+    Register value,
+    Register target_map,
+    AllocationSiteMode mode,
+    Label* fail) {
+  // Register ra contains the return address.
+  Label entry, loop, convert_hole, gc_required, only_change_map;
+  Register elements = a4;
+  Register array = a6;
+  Register length = a5;
+  Register scratch = t1;
+
+  // Verify input registers don't conflict with locals.
+  DCHECK(!AreAliased(receiver, key, value, target_map,
+                     elements, array, length, scratch));
+  if (mode == TRACK_ALLOCATION_SITE) {
+    __ JumpIfJSArrayHasAllocationMemento(receiver, elements, fail);
+  }
+
+  // Check for empty arrays, which only require a map transition and no changes
+  // to the backing store.
+  __ ld(elements, FieldMemOperand(receiver, JSObject::kElementsOffset));
+  __ LoadRoot(at, Heap::kEmptyFixedArrayRootIndex);
+  __ Branch(&only_change_map, eq, at, Operand(elements));
+
+  __ MultiPush(
+      value.bit() | key.bit() | receiver.bit() | target_map.bit() | ra.bit());
+
+  __ ld(length, FieldMemOperand(elements, FixedArray::kLengthOffset));
+  // elements: source FixedArray
+  // length: number of elements (smi-tagged)
+
+  // Allocate new FixedArray.
+  // Re-use value and target_map registers, as they have been saved on the
+  // stack.
+  Register array_size = value;
+  Register allocate_scratch = target_map;
+  __ SmiScale(array_size, length, kPointerSizeLog2);
+  __ Daddu(array_size, array_size, FixedDoubleArray::kHeaderSize);
+  __ Allocate(array_size, array, allocate_scratch, scratch, &gc_required,
+              NO_ALLOCATION_FLAGS);
+  // array: destination FixedArray, not tagged as heap object
+  // Set destination FixedArray's length and map.
+  __ LoadRoot(scratch, Heap::kFixedArrayMapRootIndex);
+  __ sd(length, MemOperand(array, FixedDoubleArray::kLengthOffset));
+  __ sd(scratch, MemOperand(array, HeapObject::kMapOffset));
+
+  // Prepare for conversion loop.
+  Register src_elements = elements;
+  Register dst_elements = target_map;
+  Register dst_end = length;
+  Register heap_number_map = scratch;
+  __ Daddu(src_elements, src_elements,
+      Operand(FixedDoubleArray::kHeaderSize - kHeapObjectTag + 4));
+  __ Daddu(dst_elements, array, Operand(FixedArray::kHeaderSize));
+  __ Daddu(array, array, Operand(kHeapObjectTag));
+  __ SmiScale(dst_end, dst_end, kPointerSizeLog2);
+  __ Daddu(dst_end, dst_elements, dst_end);
+  __ LoadRoot(heap_number_map, Heap::kHeapNumberMapRootIndex);
+  // Using offsetted addresses.
+  // dst_elements: begin of destination FixedArray element fields, not tagged
+  // src_elements: begin of source FixedDoubleArray element fields, not tagged,
+  //               points to the exponent
+  // dst_end: end of destination FixedArray, not tagged
+  // array: destination FixedArray
+  // heap_number_map: heap number map
+  __ Branch(&entry);
+
+  // Call into runtime if GC is required.
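+  // (Bail-out path for the Allocate() above: restore the registers saved by
+  // MultiPush and branch to the caller's fail label so the runtime performs
+  // the transition instead.)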
+  __ bind(&gc_required);
+  __ MultiPop(
+      value.bit() | key.bit() | receiver.bit() | target_map.bit() | ra.bit());
+
+  __ Branch(fail);
+
+  __ bind(&loop);
+  Register upper_bits = key;
+  __ lw(upper_bits, MemOperand(src_elements));
+  __ Daddu(src_elements, src_elements, kDoubleSize);
+  // upper_bits: current element's upper 32 bits
+  // src_elements: address of next element's upper 32 bits
+  __ Branch(&convert_hole, eq, upper_bits, Operand(kHoleNanUpper32));
+
+  // Non-hole double, copy value into a heap number.
+  Register heap_number = receiver;
+  Register scratch2 = value;
+  Register scratch3 = t2;
+  __ AllocateHeapNumber(heap_number, scratch2, scratch3, heap_number_map,
+                        &gc_required);
+  // heap_number: new heap number
+  // Load the mantissa of the current element; src_elements already points to
+  // the exponent of the next element.
+  __ lw(scratch2, MemOperand(src_elements, -12));
+  __ sw(scratch2, FieldMemOperand(heap_number, HeapNumber::kMantissaOffset));
+  __ sw(upper_bits, FieldMemOperand(heap_number, HeapNumber::kExponentOffset));
+  __ mov(scratch2, dst_elements);
+  __ sd(heap_number, MemOperand(dst_elements));
+  __ Daddu(dst_elements, dst_elements, kPointerSize);
+  __ RecordWrite(array,
+                 scratch2,
+                 heap_number,
+                 kRAHasBeenSaved,
+                 kDontSaveFPRegs,
+                 EMIT_REMEMBERED_SET,
+                 OMIT_SMI_CHECK);
+  __ Branch(&entry);
+
+  // Replace the-hole NaN with the-hole pointer.
+  __ bind(&convert_hole);
+  __ LoadRoot(scratch2, Heap::kTheHoleValueRootIndex);
+  __ sd(scratch2, MemOperand(dst_elements));
+  __ Daddu(dst_elements, dst_elements, kPointerSize);
+
+  __ bind(&entry);
+  __ Branch(&loop, lt, dst_elements, Operand(dst_end));
+
+  __ MultiPop(receiver.bit() | target_map.bit() | value.bit() | key.bit());
+  // Replace receiver's backing store with newly created and filled FixedArray.
+  __ sd(array, FieldMemOperand(receiver, JSObject::kElementsOffset));
+  __ RecordWriteField(receiver,
+                      JSObject::kElementsOffset,
+                      array,
+                      scratch,
+                      kRAHasBeenSaved,
+                      kDontSaveFPRegs,
+                      EMIT_REMEMBERED_SET,
+                      OMIT_SMI_CHECK);
+  __ pop(ra);
+
+  __ bind(&only_change_map);
+  // Update receiver's map.
+  __ sd(target_map, FieldMemOperand(receiver, HeapObject::kMapOffset));
+  __ RecordWriteField(receiver,
+                      HeapObject::kMapOffset,
+                      target_map,
+                      scratch,
+                      kRAHasNotBeenSaved,
+                      kDontSaveFPRegs,
+                      OMIT_REMEMBERED_SET,
+                      OMIT_SMI_CHECK);
+}
+
+
+void StringCharLoadGenerator::Generate(MacroAssembler* masm,
+                                       Register string,
+                                       Register index,
+                                       Register result,
+                                       Label* call_runtime) {
+  // Fetch the instance type of the receiver into result register.
+  __ ld(result, FieldMemOperand(string, HeapObject::kMapOffset));
+  __ lbu(result, FieldMemOperand(result, Map::kInstanceTypeOffset));
+
+  // We need special handling for indirect strings.
+  Label check_sequential;
+  __ And(at, result, Operand(kIsIndirectStringMask));
+  __ Branch(&check_sequential, eq, at, Operand(zero_reg));
+
+  // Dispatch on the indirect string shape: slice or cons.
+  Label cons_string;
+  __ And(at, result, Operand(kSlicedNotConsMask));
+  __ Branch(&cons_string, eq, at, Operand(zero_reg));
+
+  // Handle slices.
+  Label indirect_string_loaded;
+  __ ld(result, FieldMemOperand(string, SlicedString::kOffsetOffset));
+  __ ld(string, FieldMemOperand(string, SlicedString::kParentOffset));
+  __ dsra32(at, result, 0);
+  __ Daddu(index, index, at);
+  __ jmp(&indirect_string_loaded);
+
+  // Handle cons strings.
+  // Check whether the right hand side is the empty string (i.e. if
+  // this is really a flat string in a cons string). If that is not
+  // the case we would rather go to the runtime system now to flatten
+  // the string.
+  __ bind(&cons_string);
+  __ ld(result, FieldMemOperand(string, ConsString::kSecondOffset));
+  __ LoadRoot(at, Heap::kempty_stringRootIndex);
+  __ Branch(call_runtime, ne, result, Operand(at));
+  // Get the first of the two strings and load its instance type.
+  __ ld(string, FieldMemOperand(string, ConsString::kFirstOffset));
+
+  __ bind(&indirect_string_loaded);
+  __ ld(result, FieldMemOperand(string, HeapObject::kMapOffset));
+  __ lbu(result, FieldMemOperand(result, Map::kInstanceTypeOffset));
+
+  // Distinguish sequential and external strings. Only these two string
+  // representations can reach here (slices and flat cons strings have been
+  // reduced to the underlying sequential or external string).
+  Label external_string, check_encoding;
+  __ bind(&check_sequential);
+  STATIC_ASSERT(kSeqStringTag == 0);
+  __ And(at, result, Operand(kStringRepresentationMask));
+  __ Branch(&external_string, ne, at, Operand(zero_reg));
+
+  // Prepare sequential strings
+  STATIC_ASSERT(SeqTwoByteString::kHeaderSize == SeqOneByteString::kHeaderSize);
+  __ Daddu(string,
+           string,
+           SeqTwoByteString::kHeaderSize - kHeapObjectTag);
+  __ jmp(&check_encoding);
+
+  // Handle external strings.
+  __ bind(&external_string);
+  if (FLAG_debug_code) {
+    // Assert that we do not have a cons or slice (indirect strings) here.
+    // Sequential strings have already been ruled out.
+    __ And(at, result, Operand(kIsIndirectStringMask));
+    __ Assert(eq, kExternalStringExpectedButNotFound,
+              at, Operand(zero_reg));
+  }
+  // Rule out short external strings.
+  STATIC_ASSERT(kShortExternalStringTag != 0);
+  __ And(at, result, Operand(kShortExternalStringMask));
+  __ Branch(call_runtime, ne, at, Operand(zero_reg));
+  __ ld(string, FieldMemOperand(string, ExternalString::kResourceDataOffset));
+
+  Label ascii, done;
+  __ bind(&check_encoding);
+  STATIC_ASSERT(kTwoByteStringTag == 0);
+  __ And(at, result, Operand(kStringEncodingMask));
+  __ Branch(&ascii, ne, at, Operand(zero_reg));
+  // Two-byte string.
+  __ dsll(at, index, 1);
+  __ Daddu(at, string, at);
+  __ lhu(result, MemOperand(at));
+  __ jmp(&done);
+  __ bind(&ascii);
+  // Ascii string.
+  __ Daddu(at, string, index);
+  __ lbu(result, MemOperand(at));
+  __ bind(&done);
+}
+
+
+static MemOperand ExpConstant(int index, Register base) {
+  return MemOperand(base, index * kDoubleSize);
+}
+
+
+void MathExpGenerator::EmitMathExp(MacroAssembler* masm,
+                                   DoubleRegister input,
+                                   DoubleRegister result,
+                                   DoubleRegister double_scratch1,
+                                   DoubleRegister double_scratch2,
+                                   Register temp1,
+                                   Register temp2,
+                                   Register temp3) {
+  DCHECK(!input.is(result));
+  DCHECK(!input.is(double_scratch1));
+  DCHECK(!input.is(double_scratch2));
+  DCHECK(!result.is(double_scratch1));
+  DCHECK(!result.is(double_scratch2));
+  DCHECK(!double_scratch1.is(double_scratch2));
+  DCHECK(!temp1.is(temp2));
+  DCHECK(!temp1.is(temp3));
+  DCHECK(!temp2.is(temp3));
+  DCHECK(ExternalReference::math_exp_constants(0).address() != NULL);
+  DCHECK(!masm->serializer_enabled());  // External references not serializable.
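+
+  // Rough sketch of the sequence below, assuming the constants set up by
+  // InitializeMathExpData() in codegen.cc match the other ports: clamp the
+  // argument against constants [0] and [1] (underflow and overflow cutoffs),
+  // round x/ln(2) * 2^11 to an integer n via the add-a-biased-constant
+  // trick, evaluate a short polynomial in the residual, then rebuild
+  // 2^(n/2^11) directly from its IEEE-754 bits: the high bits of n plus the
+  // 0x3ff bias form the exponent field, and the low 11 bits of n index the
+  // 2048-entry log table.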
+
+  Label zero, infinity, done;
+  __ li(temp3, Operand(ExternalReference::math_exp_constants(0)));
+
+  __ ldc1(double_scratch1, ExpConstant(0, temp3));
+  __ BranchF(&zero, NULL, ge, double_scratch1, input);
+
+  __ ldc1(double_scratch2, ExpConstant(1, temp3));
+  __ BranchF(&infinity, NULL, ge, input, double_scratch2);
+
+  __ ldc1(double_scratch1, ExpConstant(3, temp3));
+  __ ldc1(result, ExpConstant(4, temp3));
+  __ mul_d(double_scratch1, double_scratch1, input);
+  __ add_d(double_scratch1, double_scratch1, result);
+  __ FmoveLow(temp2, double_scratch1);
+  __ sub_d(double_scratch1, double_scratch1, result);
+  __ ldc1(result, ExpConstant(6, temp3));
+  __ ldc1(double_scratch2, ExpConstant(5, temp3));
+  __ mul_d(double_scratch1, double_scratch1, double_scratch2);
+  __ sub_d(double_scratch1, double_scratch1, input);
+  __ sub_d(result, result, double_scratch1);
+  __ mul_d(double_scratch2, double_scratch1, double_scratch1);
+  __ mul_d(result, result, double_scratch2);
+  __ ldc1(double_scratch2, ExpConstant(7, temp3));
+  __ mul_d(result, result, double_scratch2);
+  __ sub_d(result, result, double_scratch1);
+  // Move 1 into double_scratch2 as math_exp_constants_array[8] == 1.
+  DCHECK(*reinterpret_cast<double*>
+      (ExternalReference::math_exp_constants(8).address()) == 1);
+  __ Move(double_scratch2, 1);
+  __ add_d(result, result, double_scratch2);
+  __ dsrl(temp1, temp2, 11);
+  __ Ext(temp2, temp2, 0, 11);
+  __ Daddu(temp1, temp1, Operand(0x3ff));
+
+  // Must not call ExpConstant() after overwriting temp3!
+  __ li(temp3, Operand(ExternalReference::math_exp_log_table()));
+  __ dsll(at, temp2, 3);
+  __ Daddu(temp3, temp3, Operand(at));
+  __ lwu(temp2, MemOperand(temp3, 0));
+  __ lwu(temp3, MemOperand(temp3, kIntSize));
+  // The first word loaded is in the lower-numbered register.
+  if (temp2.code() < temp3.code()) {
+    __ dsll(at, temp1, 20);
+    __ Or(temp1, temp3, at);
+    __ Move(double_scratch1, temp2, temp1);
+  } else {
+    __ dsll(at, temp1, 20);
+    __ Or(temp1, temp2, at);
+    __ Move(double_scratch1, temp3, temp1);
+  }
+  __ mul_d(result, result, double_scratch1);
+  __ BranchShort(&done);
+
+  __ bind(&zero);
+  __ Move(result, kDoubleRegZero);
+  __ BranchShort(&done);
+
+  __ bind(&infinity);
+  __ ldc1(result, ExpConstant(2, temp3));
+
+  __ bind(&done);
+}
+
+#ifdef DEBUG
+// nop(CODE_AGE_MARKER_NOP)
+static const uint32_t kCodeAgePatchFirstInstruction = 0x00010180;
+#endif
+
+
+CodeAgingHelper::CodeAgingHelper() {
+  DCHECK(young_sequence_.length() == kNoCodeAgeSequenceLength);
+  // Since patcher is a large object, allocate it dynamically when needed,
+  // to avoid overloading the stack in stress conditions.
+  // DONT_FLUSH is used because the CodeAgingHelper is initialized early in
+  // the process, before MIPS simulator ICache is setup.
+ SmartPointer<CodePatcher> patcher( + new CodePatcher(young_sequence_.start(), + young_sequence_.length() / Assembler::kInstrSize, + CodePatcher::DONT_FLUSH)); + PredictableCodeSizeScope scope(patcher->masm(), young_sequence_.length()); + patcher->masm()->Push(ra, fp, cp, a1); + patcher->masm()->nop(Assembler::CODE_AGE_SEQUENCE_NOP); + patcher->masm()->nop(Assembler::CODE_AGE_SEQUENCE_NOP); + patcher->masm()->nop(Assembler::CODE_AGE_SEQUENCE_NOP); + patcher->masm()->Daddu( + fp, sp, Operand(StandardFrameConstants::kFixedFrameSizeFromFp)); +} + + +#ifdef DEBUG +bool CodeAgingHelper::IsOld(byte* candidate) const { + return Memory::uint32_at(candidate) == kCodeAgePatchFirstInstruction; +} +#endif + + +bool Code::IsYoungSequence(Isolate* isolate, byte* sequence) { + bool result = isolate->code_aging_helper()->IsYoung(sequence); + DCHECK(result || isolate->code_aging_helper()->IsOld(sequence)); + return result; +} + + +void Code::GetCodeAgeAndParity(Isolate* isolate, byte* sequence, Age* age, + MarkingParity* parity) { + if (IsYoungSequence(isolate, sequence)) { + *age = kNoAgeCodeAge; + *parity = NO_MARKING_PARITY; + } else { + Address target_address = Assembler::target_address_at( + sequence + Assembler::kInstrSize); + Code* stub = GetCodeFromTargetAddress(target_address); + GetCodeAgeAndParity(stub, age, parity); + } +} + + +void Code::PatchPlatformCodeAge(Isolate* isolate, + byte* sequence, + Code::Age age, + MarkingParity parity) { + uint32_t young_length = isolate->code_aging_helper()->young_sequence_length(); + if (age == kNoAgeCodeAge) { + isolate->code_aging_helper()->CopyYoungSequenceTo(sequence); + CpuFeatures::FlushICache(sequence, young_length); + } else { + Code* stub = GetCodeAgeStub(isolate, age, parity); + CodePatcher patcher(sequence, young_length / Assembler::kInstrSize); + // Mark this code sequence for FindPlatformCodeAgeSequence(). + patcher.masm()->nop(Assembler::CODE_AGE_MARKER_NOP); + // Load the stub address to t9 and call it, + // GetCodeAgeAndParity() extracts the stub address from this instruction. + patcher.masm()->li( + t9, + Operand(reinterpret_cast<uint64_t>(stub->instruction_start())), + ADDRESS_LOAD); + patcher.masm()->nop(); // Prevent jalr to jal optimization. + patcher.masm()->jalr(t9, a0); + patcher.masm()->nop(); // Branch delay slot nop. + patcher.masm()->nop(); // Pad the empty space. + } +} + + +#undef __ + +} } // namespace v8::internal + +#endif // V8_TARGET_ARCH_MIPS64 diff --git a/deps/v8/src/mips64/codegen-mips64.h b/deps/v8/src/mips64/codegen-mips64.h new file mode 100644 index 00000000000..82a410ec235 --- /dev/null +++ b/deps/v8/src/mips64/codegen-mips64.h @@ -0,0 +1,54 @@ +// Copyright 2011 the V8 project authors. All rights reserved. +// Use of this source code is governed by a BSD-style license that can be +// found in the LICENSE file. + + +#ifndef V8_MIPS_CODEGEN_MIPS_H_ +#define V8_MIPS_CODEGEN_MIPS_H_ + + +#include "src/ast.h" +#include "src/ic-inl.h" + +namespace v8 { +namespace internal { + + +enum TypeofState { INSIDE_TYPEOF, NOT_INSIDE_TYPEOF }; + + +class StringCharLoadGenerator : public AllStatic { + public: + // Generates the code for handling different string types and loading the + // indexed character into |result|. We expect |index| as untagged input and + // |result| as untagged output. 
+  static void Generate(MacroAssembler* masm,
+                       Register string,
+                       Register index,
+                       Register result,
+                       Label* call_runtime);
+
+ private:
+  DISALLOW_COPY_AND_ASSIGN(StringCharLoadGenerator);
+};
+
+
+class MathExpGenerator : public AllStatic {
+ public:
+  // Register input isn't modified. All other registers are clobbered.
+  static void EmitMathExp(MacroAssembler* masm,
+                          DoubleRegister input,
+                          DoubleRegister result,
+                          DoubleRegister double_scratch1,
+                          DoubleRegister double_scratch2,
+                          Register temp1,
+                          Register temp2,
+                          Register temp3);
+
+ private:
+  DISALLOW_COPY_AND_ASSIGN(MathExpGenerator);
+};
+
+} }  // namespace v8::internal
+
+#endif  // V8_MIPS_CODEGEN_MIPS_H_
diff --git a/deps/v8/src/mips64/constants-mips64.cc b/deps/v8/src/mips64/constants-mips64.cc
new file mode 100644
index 00000000000..dfd62430c29
--- /dev/null
+++ b/deps/v8/src/mips64/constants-mips64.cc
@@ -0,0 +1,362 @@
+// Copyright 2011 the V8 project authors. All rights reserved.
+// Use of this source code is governed by a BSD-style license that can be
+// found in the LICENSE file.
+
+#include "src/v8.h"
+
+#if V8_TARGET_ARCH_MIPS64
+
+#include "src/mips64/constants-mips64.h"
+
+namespace v8 {
+namespace internal {
+
+
+// -----------------------------------------------------------------------------
+// Registers.
+
+
+// These register names are defined in a way to match the native disassembler
+// formatting. See for example the command "objdump -d <binary file>".
+const char* Registers::names_[kNumSimuRegisters] = {
+  "zero_reg",
+  "at",
+  "v0", "v1",
+  "a0", "a1", "a2", "a3", "a4", "a5", "a6", "a7",
+  "t0", "t1", "t2", "t3",
+  "s0", "s1", "s2", "s3", "s4", "s5", "s6", "s7",
+  "t8", "t9",
+  "k0", "k1",
+  "gp",
+  "sp",
+  "fp",
+  "ra",
+  "LO", "HI",
+  "pc"
+};
+
+
+// List of alias names which can be used when referring to MIPS registers.
+const Registers::RegisterAlias Registers::aliases_[] = {
+  {0, "zero"},
+  {23, "cp"},
+  {30, "s8"},
+  {30, "s8_fp"},
+  {kInvalidRegister, NULL}
+};
+
+
+const char* Registers::Name(int reg) {
+  const char* result;
+  if ((0 <= reg) && (reg < kNumSimuRegisters)) {
+    result = names_[reg];
+  } else {
+    result = "noreg";
+  }
+  return result;
+}
+
+
+int Registers::Number(const char* name) {
+  // Look through the canonical names.
+  for (int i = 0; i < kNumSimuRegisters; i++) {
+    if (strcmp(names_[i], name) == 0) {
+      return i;
+    }
+  }
+
+  // Look through the alias names.
+  int i = 0;
+  while (aliases_[i].reg != kInvalidRegister) {
+    if (strcmp(aliases_[i].name, name) == 0) {
+      return aliases_[i].reg;
+    }
+    i++;
+  }
+
+  // No register with the requested name found.
+  return kInvalidRegister;
+}
+
+
+const char* FPURegisters::names_[kNumFPURegisters] = {
+  "f0", "f1", "f2", "f3", "f4", "f5", "f6", "f7", "f8", "f9", "f10", "f11",
+  "f12", "f13", "f14", "f15", "f16", "f17", "f18", "f19", "f20", "f21",
+  "f22", "f23", "f24", "f25", "f26", "f27", "f28", "f29", "f30", "f31"
+};
+
+
+// List of alias names which can be used when referring to MIPS registers.
+const FPURegisters::RegisterAlias FPURegisters::aliases_[] = {
+  {kInvalidRegister, NULL}
+};
+
+
+const char* FPURegisters::Name(int creg) {
+  const char* result;
+  if ((0 <= creg) && (creg < kNumFPURegisters)) {
+    result = names_[creg];
+  } else {
+    result = "nocreg";
+  }
+  return result;
+}
+
+
+int FPURegisters::Number(const char* name) {
+  // Look through the canonical names.
+  for (int i = 0; i < kNumFPURegisters; i++) {
+    if (strcmp(names_[i], name) == 0) {
+      return i;
+    }
+  }
+
+  // Look through the alias names.
+  int i = 0;
+  while (aliases_[i].creg != kInvalidRegister) {
+    if (strcmp(aliases_[i].name, name) == 0) {
+      return aliases_[i].creg;
+    }
+    i++;
+  }
+
+  // No Cregister with the requested name found.
+  return kInvalidFPURegister;
+}
+
+
+// -----------------------------------------------------------------------------
+// Instructions.
+
+bool Instruction::IsForbiddenInBranchDelay() const {
+  const int op = OpcodeFieldRaw();
+  switch (op) {
+    case J:
+    case JAL:
+    case BEQ:
+    case BNE:
+    case BLEZ:
+    case BGTZ:
+    case BEQL:
+    case BNEL:
+    case BLEZL:
+    case BGTZL:
+      return true;
+    case REGIMM:
+      switch (RtFieldRaw()) {
+        case BLTZ:
+        case BGEZ:
+        case BLTZAL:
+        case BGEZAL:
+          return true;
+        default:
+          return false;
+      }
+      break;
+    case SPECIAL:
+      switch (FunctionFieldRaw()) {
+        case JR:
+        case JALR:
+          return true;
+        default:
+          return false;
+      }
+      break;
+    default:
+      return false;
+  }
+}
+
+
+bool Instruction::IsLinkingInstruction() const {
+  const int op = OpcodeFieldRaw();
+  switch (op) {
+    case JAL:
+      return true;
+    case REGIMM:
+      switch (RtFieldRaw()) {
+        case BGEZAL:
+        case BLTZAL:
+          return true;
+        default:
+          return false;
+      }
+    case SPECIAL:
+      switch (FunctionFieldRaw()) {
+        case JALR:
+          return true;
+        default:
+          return false;
+      }
+    default:
+      return false;
+  }
+}
+
+
+bool Instruction::IsTrap() const {
+  if (OpcodeFieldRaw() != SPECIAL) {
+    return false;
+  } else {
+    switch (FunctionFieldRaw()) {
+      case BREAK:
+      case TGE:
+      case TGEU:
+      case TLT:
+      case TLTU:
+      case TEQ:
+      case TNE:
+        return true;
+      default:
+        return false;
+    }
+  }
+}
+
+
+Instruction::Type Instruction::InstructionType() const {
+  switch (OpcodeFieldRaw()) {
+    case SPECIAL:
+      switch (FunctionFieldRaw()) {
+        case JR:
+        case JALR:
+        case BREAK:
+        case SLL:
+        case DSLL:
+        case DSLL32:
+        case SRL:
+        case DSRL:
+        case DSRL32:
+        case SRA:
+        case DSRA:
+        case DSRA32:
+        case SLLV:
+        case DSLLV:
+        case SRLV:
+        case DSRLV:
+        case SRAV:
+        case DSRAV:
+        case MFHI:
+        case MFLO:
+        case MULT:
+        case DMULT:
+        case MULTU:
+        case DMULTU:
+        case DIV:
+        case DDIV:
+        case DIVU:
+        case DDIVU:
+        case ADD:
+        case DADD:
+        case ADDU:
+        case DADDU:
+        case SUB:
+        case DSUB:
+        case SUBU:
+        case DSUBU:
+        case AND:
+        case OR:
+        case XOR:
+        case NOR:
+        case SLT:
+        case SLTU:
+        case TGE:
+        case TGEU:
+        case TLT:
+        case TLTU:
+        case TEQ:
+        case TNE:
+        case MOVZ:
+        case MOVN:
+        case MOVCI:
+          return kRegisterType;
+        default:
+          return kUnsupported;
+      }
+      break;
+    case SPECIAL2:
+      switch (FunctionFieldRaw()) {
+        case MUL:
+        case CLZ:
+          return kRegisterType;
+        default:
+          return kUnsupported;
+      }
+      break;
+    case SPECIAL3:
+      switch (FunctionFieldRaw()) {
+        case INS:
+        case EXT:
+          return kRegisterType;
+        default:
+          return kUnsupported;
+      }
+      break;
+    case COP1:    // Coprocessor instructions.
+      switch (RsFieldRawNoAssert()) {
+        case BC1:   // Branch on coprocessor condition.
+        case BC1EQZ:
+        case BC1NEZ:
+          return kImmediateType;
+        default:
+          return kRegisterType;
+      }
+      break;
+    case COP1X:
+      return kRegisterType;
+    // 16 bits Immediate type instructions. e.g.: addi dest, src, imm16.
+    case REGIMM:
+    case BEQ:
+    case BNE:
+    case BLEZ:
+    case BGTZ:
+    case ADDI:
+    case DADDI:
+    case ADDIU:
+    case DADDIU:
+    case SLTI:
+    case SLTIU:
+    case ANDI:
+    case ORI:
+    case XORI:
+    case LUI:
+    case BEQL:
+    case BNEL:
+    case BLEZL:
+    case BGTZL:
+    case BEQZC:
+    case BNEZC:
+    case LB:
+    case LH:
+    case LWL:
+    case LW:
+    case LWU:
+    case LD:
+    case LBU:
+    case LHU:
+    case LWR:
+    case SB:
+    case SH:
+    case SWL:
+    case SW:
+    case SD:
+    case SWR:
+    case LWC1:
+    case LDC1:
+    case SWC1:
+    case SDC1:
+      return kImmediateType;
+    // 26 bits immediate type instructions. e.g.: j imm26.
+    case J:
+    case JAL:
+      return kJumpType;
+    default:
+      return kUnsupported;
+  }
+  return kUnsupported;
+}
+
+
+} }  // namespace v8::internal
+
+#endif  // V8_TARGET_ARCH_MIPS64
diff --git a/deps/v8/src/mips64/constants-mips64.h b/deps/v8/src/mips64/constants-mips64.h
new file mode 100644
index 00000000000..521869b412a
--- /dev/null
+++ b/deps/v8/src/mips64/constants-mips64.h
@@ -0,0 +1,952 @@
+// Copyright 2012 the V8 project authors. All rights reserved.
+// Use of this source code is governed by a BSD-style license that can be
+// found in the LICENSE file.
+
+#ifndef V8_MIPS_CONSTANTS_H_
+#define V8_MIPS_CONSTANTS_H_
+
+// UNIMPLEMENTED_ macro for MIPS.
+#ifdef DEBUG
+#define UNIMPLEMENTED_MIPS()                                                  \
+  v8::internal::PrintF("%s, \tline %d: \tfunction %s not implemented. \n",    \
+                       __FILE__, __LINE__, __func__)
+#else
+#define UNIMPLEMENTED_MIPS()
+#endif
+
+#define UNSUPPORTED_MIPS() v8::internal::PrintF("Unsupported instruction.\n")
+
+enum ArchVariants {
+  kMips64r2,
+  kMips64r6
+};
+
+
+#ifdef _MIPS_ARCH_MIPS64R2
+  static const ArchVariants kArchVariant = kMips64r2;
+#elif _MIPS_ARCH_MIPS64R6
+  static const ArchVariants kArchVariant = kMips64r6;
+#else
+  static const ArchVariants kArchVariant = kMips64r2;
+#endif
+
+
+// TODO(plind): consider deriving ABI from compiler flags or build system.
+
+// ABI-dependent definitions are made with #define in simulator-mips64.h,
+// so the ABI choice must be available to the pre-processor. However, in all
+// other cases, we should use the enum AbiVariants with normal if statements.
+
+#define MIPS_ABI_N64 1
+// #define MIPS_ABI_O32 1
+
+// The only supported ABIs are O32 and N64.
+enum AbiVariants {
+  kO32,
+  kN64  // Use upper case N for 'n64' ABI to conform to style standard.
+};
+
+#ifdef MIPS_ABI_N64
+static const AbiVariants kMipsAbi = kN64;
+#else
+static const AbiVariants kMipsAbi = kO32;
+#endif
+
+
+// TODO(plind): consider renaming these ...
+#if(defined(__mips_hard_float) && __mips_hard_float != 0)
+// Use floating-point coprocessor instructions. This flag is raised when
+// -mhard-float is passed to the compiler.
+const bool IsMipsSoftFloatABI = false;
+#elif(defined(__mips_soft_float) && __mips_soft_float != 0)
+// This flag is raised when -msoft-float is passed to the compiler.
+// Although FPU is a base requirement for v8, soft-float ABI is used
+// on soft-float systems with FPU kernel emulation.
+const bool IsMipsSoftFloatABI = true;
+#else
+const bool IsMipsSoftFloatABI = true;
+#endif
+
+
+#ifndef __STDC_FORMAT_MACROS
+#define __STDC_FORMAT_MACROS
+#endif
+#include <inttypes.h>
+
+
+// Defines constants and accessor classes to assemble, disassemble and
+// simulate MIPS32 instructions.
+//
+// See: MIPS32 Architecture For Programmers
+//      Volume II: The MIPS32 Instruction Set
+// Try www.cs.cornell.edu/courses/cs3410/2008fa/MIPS_Vol2.pdf.
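+//
+// For orientation: every MIPS instruction is a single 32-bit word, and an
+// R-type word splits into fields as
+//
+//   opcode(6) | rs(5) | rt(5) | rd(5) | sa(5) | function(6)
+//
+// A hypothetical standalone decoder, equivalent to what the Instruction
+// accessors below compute, would be roughly:
+//
+//   uint32_t op = (instr >> kOpcodeShift) & ((1 << kOpcodeBits) - 1);
+//   uint32_t rs = (instr >> kRsShift) & ((1 << kRsBits) - 1);
+//   uint32_t funct = (instr >> kFunctionShift) & ((1 << kFunctionBits) - 1);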
+
+namespace v8 {
+namespace internal {
+
+// -----------------------------------------------------------------------------
+// Registers and FPURegisters.
+
+// Number of general purpose registers.
+const int kNumRegisters = 32;
+const int kInvalidRegister = -1;
+
+// Number of registers with HI, LO, and pc.
+const int kNumSimuRegisters = 35;
+
+// In the simulator, the PC register is simulated as the 34th register.
+const int kPCRegister = 34;
+
+// Number of coprocessor registers.
+const int kNumFPURegisters = 32;
+const int kInvalidFPURegister = -1;
+
+// FPU (coprocessor 1) control registers. Currently only FCSR is implemented.
+const int kFCSRRegister = 31;
+const int kInvalidFPUControlRegister = -1;
+const uint32_t kFPUInvalidResult = static_cast<uint32_t>(1 << 31) - 1;
+const uint64_t kFPU64InvalidResult =
+    static_cast<uint64_t>(static_cast<uint64_t>(1) << 63) - 1;
+
+// FCSR constants.
+const uint32_t kFCSRInexactFlagBit = 2;
+const uint32_t kFCSRUnderflowFlagBit = 3;
+const uint32_t kFCSROverflowFlagBit = 4;
+const uint32_t kFCSRDivideByZeroFlagBit = 5;
+const uint32_t kFCSRInvalidOpFlagBit = 6;
+
+const uint32_t kFCSRInexactFlagMask = 1 << kFCSRInexactFlagBit;
+const uint32_t kFCSRUnderflowFlagMask = 1 << kFCSRUnderflowFlagBit;
+const uint32_t kFCSROverflowFlagMask = 1 << kFCSROverflowFlagBit;
+const uint32_t kFCSRDivideByZeroFlagMask = 1 << kFCSRDivideByZeroFlagBit;
+const uint32_t kFCSRInvalidOpFlagMask = 1 << kFCSRInvalidOpFlagBit;
+
+const uint32_t kFCSRFlagMask =
+    kFCSRInexactFlagMask |
+    kFCSRUnderflowFlagMask |
+    kFCSROverflowFlagMask |
+    kFCSRDivideByZeroFlagMask |
+    kFCSRInvalidOpFlagMask;
+
+const uint32_t kFCSRExceptionFlagMask = kFCSRFlagMask ^ kFCSRInexactFlagMask;
+
+// 'pref' instruction hints
+const int32_t kPrefHintLoad = 0;
+const int32_t kPrefHintStore = 1;
+const int32_t kPrefHintLoadStreamed = 4;
+const int32_t kPrefHintStoreStreamed = 5;
+const int32_t kPrefHintLoadRetained = 6;
+const int32_t kPrefHintStoreRetained = 7;
+const int32_t kPrefHintWritebackInvalidate = 25;
+const int32_t kPrefHintPrepareForStore = 30;
+
+// Helper functions for converting between register numbers and names.
+class Registers {
+ public:
+  // Return the name of the register.
+  static const char* Name(int reg);
+
+  // Lookup the register number for the name provided.
+  static int Number(const char* name);
+
+  struct RegisterAlias {
+    int reg;
+    const char* name;
+  };
+
+  static const int64_t kMaxValue = 0x7fffffffffffffffl;
+  static const int64_t kMinValue = 0x8000000000000000l;
+
+ private:
+  static const char* names_[kNumSimuRegisters];
+  static const RegisterAlias aliases_[];
+};
+
+// Helper functions for converting between register numbers and names.
+class FPURegisters {
+ public:
+  // Return the name of the register.
+  static const char* Name(int reg);
+
+  // Lookup the register number for the name provided.
+  static int Number(const char* name);
+
+  struct RegisterAlias {
+    int creg;
+    const char* name;
+  };
+
+ private:
+  static const char* names_[kNumFPURegisters];
+  static const RegisterAlias aliases_[];
+};
+
+
+// -----------------------------------------------------------------------------
+// Instructions encoding constants.
+
+// On MIPS all instructions are 32 bits.
+typedef int32_t Instr;
+
+// Special Software Interrupt codes when used in the presence of the MIPS
+// simulator.
+enum SoftwareInterruptCodes {
+  // Transition to C code.
+ call_rt_redirected = 0xfffff +}; + +// On MIPS Simulator breakpoints can have different codes: +// - Breaks between 0 and kMaxWatchpointCode are treated as simple watchpoints, +// the simulator will run through them and print the registers. +// - Breaks between kMaxWatchpointCode and kMaxStopCode are treated as stop() +// instructions (see Assembler::stop()). +// - Breaks larger than kMaxStopCode are simple breaks, dropping you into the +// debugger. +const uint32_t kMaxWatchpointCode = 31; +const uint32_t kMaxStopCode = 127; +STATIC_ASSERT(kMaxWatchpointCode < kMaxStopCode); + + +// ----- Fields offset and length. +const int kOpcodeShift = 26; +const int kOpcodeBits = 6; +const int kRsShift = 21; +const int kRsBits = 5; +const int kRtShift = 16; +const int kRtBits = 5; +const int kRdShift = 11; +const int kRdBits = 5; +const int kSaShift = 6; +const int kSaBits = 5; +const int kFunctionShift = 0; +const int kFunctionBits = 6; +const int kLuiShift = 16; + +const int kImm16Shift = 0; +const int kImm16Bits = 16; +const int kImm21Shift = 0; +const int kImm21Bits = 21; +const int kImm26Shift = 0; +const int kImm26Bits = 26; +const int kImm28Shift = 0; +const int kImm28Bits = 28; +const int kImm32Shift = 0; +const int kImm32Bits = 32; + +// In branches and jumps immediate fields point to words, not bytes, +// and are therefore shifted by 2. +const int kImmFieldShift = 2; + +const int kFrBits = 5; +const int kFrShift = 21; +const int kFsShift = 11; +const int kFsBits = 5; +const int kFtShift = 16; +const int kFtBits = 5; +const int kFdShift = 6; +const int kFdBits = 5; +const int kFCccShift = 8; +const int kFCccBits = 3; +const int kFBccShift = 18; +const int kFBccBits = 3; +const int kFBtrueShift = 16; +const int kFBtrueBits = 1; + +// ----- Miscellaneous useful masks. +// Instruction bit masks. +const int kOpcodeMask = ((1 << kOpcodeBits) - 1) << kOpcodeShift; +const int kImm16Mask = ((1 << kImm16Bits) - 1) << kImm16Shift; +const int kImm26Mask = ((1 << kImm26Bits) - 1) << kImm26Shift; +const int kImm28Mask = ((1 << kImm28Bits) - 1) << kImm28Shift; +const int kRsFieldMask = ((1 << kRsBits) - 1) << kRsShift; +const int kRtFieldMask = ((1 << kRtBits) - 1) << kRtShift; +const int kRdFieldMask = ((1 << kRdBits) - 1) << kRdShift; +const int kSaFieldMask = ((1 << kSaBits) - 1) << kSaShift; +const int kFunctionFieldMask = ((1 << kFunctionBits) - 1) << kFunctionShift; +// Misc masks. +const int kHiMask = 0xffff << 16; +const int kLoMask = 0xffff; +const int kSignMask = 0x80000000; +const int kJumpAddrMask = (1 << (kImm26Bits + kImmFieldShift)) - 1; +const int64_t kHi16MaskOf64 = (int64_t)0xffff << 48; +const int64_t kSe16MaskOf64 = (int64_t)0xffff << 32; +const int64_t kTh16MaskOf64 = (int64_t)0xffff << 16; + +// ----- MIPS Opcodes and Function Fields. +// We use this presentation to stay close to the table representation in +// MIPS32 Architecture For Programmers, Volume II: The MIPS32 Instruction Set. 
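+//
+// For illustration: ORI has major opcode 0b001101 (row 1, column 5 of the
+// 8x8 opcode table), so its enumerator below is written as
+// ((1 << 3) + 5) << kOpcodeShift, i.e. the value 13 placed in bits 31..26.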
+enum Opcode { + SPECIAL = 0 << kOpcodeShift, + REGIMM = 1 << kOpcodeShift, + + J = ((0 << 3) + 2) << kOpcodeShift, + JAL = ((0 << 3) + 3) << kOpcodeShift, + BEQ = ((0 << 3) + 4) << kOpcodeShift, + BNE = ((0 << 3) + 5) << kOpcodeShift, + BLEZ = ((0 << 3) + 6) << kOpcodeShift, + BGTZ = ((0 << 3) + 7) << kOpcodeShift, + + ADDI = ((1 << 3) + 0) << kOpcodeShift, + ADDIU = ((1 << 3) + 1) << kOpcodeShift, + SLTI = ((1 << 3) + 2) << kOpcodeShift, + SLTIU = ((1 << 3) + 3) << kOpcodeShift, + ANDI = ((1 << 3) + 4) << kOpcodeShift, + ORI = ((1 << 3) + 5) << kOpcodeShift, + XORI = ((1 << 3) + 6) << kOpcodeShift, + LUI = ((1 << 3) + 7) << kOpcodeShift, // LUI/AUI family. + DAUI = ((3 << 3) + 5) << kOpcodeShift, + + BEQC = ((2 << 3) + 0) << kOpcodeShift, + COP1 = ((2 << 3) + 1) << kOpcodeShift, // Coprocessor 1 class. + BEQL = ((2 << 3) + 4) << kOpcodeShift, + BNEL = ((2 << 3) + 5) << kOpcodeShift, + BLEZL = ((2 << 3) + 6) << kOpcodeShift, + BGTZL = ((2 << 3) + 7) << kOpcodeShift, + + DADDI = ((3 << 3) + 0) << kOpcodeShift, // This is also BNEC. + DADDIU = ((3 << 3) + 1) << kOpcodeShift, + LDL = ((3 << 3) + 2) << kOpcodeShift, + LDR = ((3 << 3) + 3) << kOpcodeShift, + SPECIAL2 = ((3 << 3) + 4) << kOpcodeShift, + SPECIAL3 = ((3 << 3) + 7) << kOpcodeShift, + + LB = ((4 << 3) + 0) << kOpcodeShift, + LH = ((4 << 3) + 1) << kOpcodeShift, + LWL = ((4 << 3) + 2) << kOpcodeShift, + LW = ((4 << 3) + 3) << kOpcodeShift, + LBU = ((4 << 3) + 4) << kOpcodeShift, + LHU = ((4 << 3) + 5) << kOpcodeShift, + LWR = ((4 << 3) + 6) << kOpcodeShift, + LWU = ((4 << 3) + 7) << kOpcodeShift, + + SB = ((5 << 3) + 0) << kOpcodeShift, + SH = ((5 << 3) + 1) << kOpcodeShift, + SWL = ((5 << 3) + 2) << kOpcodeShift, + SW = ((5 << 3) + 3) << kOpcodeShift, + SDL = ((5 << 3) + 4) << kOpcodeShift, + SDR = ((5 << 3) + 5) << kOpcodeShift, + SWR = ((5 << 3) + 6) << kOpcodeShift, + + LWC1 = ((6 << 3) + 1) << kOpcodeShift, + LLD = ((6 << 3) + 4) << kOpcodeShift, + LDC1 = ((6 << 3) + 5) << kOpcodeShift, + BEQZC = ((6 << 3) + 6) << kOpcodeShift, + LD = ((6 << 3) + 7) << kOpcodeShift, + + PREF = ((6 << 3) + 3) << kOpcodeShift, + + SWC1 = ((7 << 3) + 1) << kOpcodeShift, + SCD = ((7 << 3) + 4) << kOpcodeShift, + SDC1 = ((7 << 3) + 5) << kOpcodeShift, + BNEZC = ((7 << 3) + 6) << kOpcodeShift, + SD = ((7 << 3) + 7) << kOpcodeShift, + + COP1X = ((1 << 4) + 3) << kOpcodeShift +}; + +enum SecondaryField { + // SPECIAL Encoding of Function Field. 
+ SLL = ((0 << 3) + 0), + MOVCI = ((0 << 3) + 1), + SRL = ((0 << 3) + 2), + SRA = ((0 << 3) + 3), + SLLV = ((0 << 3) + 4), + SRLV = ((0 << 3) + 6), + SRAV = ((0 << 3) + 7), + + JR = ((1 << 3) + 0), + JALR = ((1 << 3) + 1), + MOVZ = ((1 << 3) + 2), + MOVN = ((1 << 3) + 3), + BREAK = ((1 << 3) + 5), + + MFHI = ((2 << 3) + 0), + CLZ_R6 = ((2 << 3) + 0), + CLO_R6 = ((2 << 3) + 1), + MFLO = ((2 << 3) + 2), + DSLLV = ((2 << 3) + 4), + DSRLV = ((2 << 3) + 6), + DSRAV = ((2 << 3) + 7), + + MULT = ((3 << 3) + 0), + MULTU = ((3 << 3) + 1), + DIV = ((3 << 3) + 2), + DIVU = ((3 << 3) + 3), + DMULT = ((3 << 3) + 4), + DMULTU = ((3 << 3) + 5), + DDIV = ((3 << 3) + 6), + DDIVU = ((3 << 3) + 7), + + ADD = ((4 << 3) + 0), + ADDU = ((4 << 3) + 1), + SUB = ((4 << 3) + 2), + SUBU = ((4 << 3) + 3), + AND = ((4 << 3) + 4), + OR = ((4 << 3) + 5), + XOR = ((4 << 3) + 6), + NOR = ((4 << 3) + 7), + + SLT = ((5 << 3) + 2), + SLTU = ((5 << 3) + 3), + DADD = ((5 << 3) + 4), + DADDU = ((5 << 3) + 5), + DSUB = ((5 << 3) + 6), + DSUBU = ((5 << 3) + 7), + + TGE = ((6 << 3) + 0), + TGEU = ((6 << 3) + 1), + TLT = ((6 << 3) + 2), + TLTU = ((6 << 3) + 3), + TEQ = ((6 << 3) + 4), + SELEQZ_S = ((6 << 3) + 5), + TNE = ((6 << 3) + 6), + SELNEZ_S = ((6 << 3) + 7), + + DSLL = ((7 << 3) + 0), + DSRL = ((7 << 3) + 2), + DSRA = ((7 << 3) + 3), + DSLL32 = ((7 << 3) + 4), + DSRL32 = ((7 << 3) + 6), + DSRA32 = ((7 << 3) + 7), + + // Multiply integers in r6. + MUL_MUH = ((3 << 3) + 0), // MUL, MUH. + MUL_MUH_U = ((3 << 3) + 1), // MUL_U, MUH_U. + D_MUL_MUH = ((7 << 2) + 0), // DMUL, DMUH. + D_MUL_MUH_U = ((7 << 2) + 1), // DMUL_U, DMUH_U. + + MUL_OP = ((0 << 3) + 2), + MUH_OP = ((0 << 3) + 3), + DIV_OP = ((0 << 3) + 2), + MOD_OP = ((0 << 3) + 3), + + DIV_MOD = ((3 << 3) + 2), + DIV_MOD_U = ((3 << 3) + 3), + D_DIV_MOD = ((3 << 3) + 6), + D_DIV_MOD_U = ((3 << 3) + 7), + + // drotr in special4? + + // SPECIAL2 Encoding of Function Field. + MUL = ((0 << 3) + 2), + CLZ = ((4 << 3) + 0), + CLO = ((4 << 3) + 1), + + // SPECIAL3 Encoding of Function Field. + EXT = ((0 << 3) + 0), + DEXTM = ((0 << 3) + 1), + DEXTU = ((0 << 3) + 2), + DEXT = ((0 << 3) + 3), + INS = ((0 << 3) + 4), + DINSM = ((0 << 3) + 5), + DINSU = ((0 << 3) + 6), + DINS = ((0 << 3) + 7), + + DSBH = ((4 << 3) + 4), + + // REGIMM encoding of rt Field. + BLTZ = ((0 << 3) + 0) << 16, + BGEZ = ((0 << 3) + 1) << 16, + BLTZAL = ((2 << 3) + 0) << 16, + BGEZAL = ((2 << 3) + 1) << 16, + BGEZALL = ((2 << 3) + 3) << 16, + DAHI = ((0 << 3) + 6) << 16, + DATI = ((3 << 3) + 6) << 16, + + // COP1 Encoding of rs Field. + MFC1 = ((0 << 3) + 0) << 21, + DMFC1 = ((0 << 3) + 1) << 21, + CFC1 = ((0 << 3) + 2) << 21, + MFHC1 = ((0 << 3) + 3) << 21, + MTC1 = ((0 << 3) + 4) << 21, + DMTC1 = ((0 << 3) + 5) << 21, + CTC1 = ((0 << 3) + 6) << 21, + MTHC1 = ((0 << 3) + 7) << 21, + BC1 = ((1 << 3) + 0) << 21, + S = ((2 << 3) + 0) << 21, + D = ((2 << 3) + 1) << 21, + W = ((2 << 3) + 4) << 21, + L = ((2 << 3) + 5) << 21, + PS = ((2 << 3) + 6) << 21, + // COP1 Encoding of Function Field When rs=S. + ROUND_L_S = ((1 << 3) + 0), + TRUNC_L_S = ((1 << 3) + 1), + CEIL_L_S = ((1 << 3) + 2), + FLOOR_L_S = ((1 << 3) + 3), + ROUND_W_S = ((1 << 3) + 4), + TRUNC_W_S = ((1 << 3) + 5), + CEIL_W_S = ((1 << 3) + 6), + FLOOR_W_S = ((1 << 3) + 7), + CVT_D_S = ((4 << 3) + 1), + CVT_W_S = ((4 << 3) + 4), + CVT_L_S = ((4 << 3) + 5), + CVT_PS_S = ((4 << 3) + 6), + // COP1 Encoding of Function Field When rs=D. 
+ ADD_D = ((0 << 3) + 0), + SUB_D = ((0 << 3) + 1), + MUL_D = ((0 << 3) + 2), + DIV_D = ((0 << 3) + 3), + SQRT_D = ((0 << 3) + 4), + ABS_D = ((0 << 3) + 5), + MOV_D = ((0 << 3) + 6), + NEG_D = ((0 << 3) + 7), + ROUND_L_D = ((1 << 3) + 0), + TRUNC_L_D = ((1 << 3) + 1), + CEIL_L_D = ((1 << 3) + 2), + FLOOR_L_D = ((1 << 3) + 3), + ROUND_W_D = ((1 << 3) + 4), + TRUNC_W_D = ((1 << 3) + 5), + CEIL_W_D = ((1 << 3) + 6), + FLOOR_W_D = ((1 << 3) + 7), + MIN = ((3 << 3) + 4), + MINA = ((3 << 3) + 5), + MAX = ((3 << 3) + 6), + MAXA = ((3 << 3) + 7), + CVT_S_D = ((4 << 3) + 0), + CVT_W_D = ((4 << 3) + 4), + CVT_L_D = ((4 << 3) + 5), + C_F_D = ((6 << 3) + 0), + C_UN_D = ((6 << 3) + 1), + C_EQ_D = ((6 << 3) + 2), + C_UEQ_D = ((6 << 3) + 3), + C_OLT_D = ((6 << 3) + 4), + C_ULT_D = ((6 << 3) + 5), + C_OLE_D = ((6 << 3) + 6), + C_ULE_D = ((6 << 3) + 7), + // COP1 Encoding of Function Field When rs=W or L. + CVT_S_W = ((4 << 3) + 0), + CVT_D_W = ((4 << 3) + 1), + CVT_S_L = ((4 << 3) + 0), + CVT_D_L = ((4 << 3) + 1), + BC1EQZ = ((2 << 2) + 1) << 21, + BC1NEZ = ((3 << 2) + 1) << 21, + // COP1 CMP positive predicates Bit 5..4 = 00. + CMP_AF = ((0 << 3) + 0), + CMP_UN = ((0 << 3) + 1), + CMP_EQ = ((0 << 3) + 2), + CMP_UEQ = ((0 << 3) + 3), + CMP_LT = ((0 << 3) + 4), + CMP_ULT = ((0 << 3) + 5), + CMP_LE = ((0 << 3) + 6), + CMP_ULE = ((0 << 3) + 7), + CMP_SAF = ((1 << 3) + 0), + CMP_SUN = ((1 << 3) + 1), + CMP_SEQ = ((1 << 3) + 2), + CMP_SUEQ = ((1 << 3) + 3), + CMP_SSLT = ((1 << 3) + 4), + CMP_SSULT = ((1 << 3) + 5), + CMP_SLE = ((1 << 3) + 6), + CMP_SULE = ((1 << 3) + 7), + // COP1 CMP negative predicates Bit 5..4 = 01. + CMP_AT = ((2 << 3) + 0), // Reserved, not implemented. + CMP_OR = ((2 << 3) + 1), + CMP_UNE = ((2 << 3) + 2), + CMP_NE = ((2 << 3) + 3), + CMP_UGE = ((2 << 3) + 4), // Reserved, not implemented. + CMP_OGE = ((2 << 3) + 5), // Reserved, not implemented. + CMP_UGT = ((2 << 3) + 6), // Reserved, not implemented. + CMP_OGT = ((2 << 3) + 7), // Reserved, not implemented. + CMP_SAT = ((3 << 3) + 0), // Reserved, not implemented. + CMP_SOR = ((3 << 3) + 1), + CMP_SUNE = ((3 << 3) + 2), + CMP_SNE = ((3 << 3) + 3), + CMP_SUGE = ((3 << 3) + 4), // Reserved, not implemented. + CMP_SOGE = ((3 << 3) + 5), // Reserved, not implemented. + CMP_SUGT = ((3 << 3) + 6), // Reserved, not implemented. + CMP_SOGT = ((3 << 3) + 7), // Reserved, not implemented. + + SEL = ((2 << 3) + 0), + SELEQZ_C = ((2 << 3) + 4), // COP1 on FPR registers. + SELNEZ_C = ((2 << 3) + 7), // COP1 on FPR registers. + + // COP1 Encoding of Function Field When rs=PS. + // COP1X Encoding of Function Field. + MADD_D = ((4 << 3) + 1), + + NULLSF = 0 +}; + + +// ----- Emulated conditions. +// On MIPS we use this enum to abstract from conditional branch instructions. +// The 'U' prefix is used to specify unsigned comparisons. +// Opposite conditions must be paired as odd/even numbers +// because 'NegateCondition' function flips LSB to negate condition. +enum Condition { + // Any value < 0 is considered no_condition. + kNoCondition = -1, + + overflow = 0, + no_overflow = 1, + Uless = 2, + Ugreater_equal= 3, + equal = 4, + not_equal = 5, + Uless_equal = 6, + Ugreater = 7, + negative = 8, + positive = 9, + parity_even = 10, + parity_odd = 11, + less = 12, + greater_equal = 13, + less_equal = 14, + greater = 15, + ueq = 16, // Unordered or Equal. + nue = 17, // Not (Unordered or Equal). + + cc_always = 18, + + // Aliases. 
+ carry = Uless, + not_carry = Ugreater_equal, + zero = equal, + eq = equal, + not_zero = not_equal, + ne = not_equal, + nz = not_equal, + sign = negative, + not_sign = positive, + mi = negative, + pl = positive, + hi = Ugreater, + ls = Uless_equal, + ge = greater_equal, + lt = less, + gt = greater, + le = less_equal, + hs = Ugreater_equal, + lo = Uless, + al = cc_always, + + cc_default = kNoCondition +}; + + +// Returns the equivalent of !cc. +// Negation of the default kNoCondition (-1) results in a non-default +// no_condition value (-2). As long as tests for no_condition check +// for condition < 0, this will work as expected. +inline Condition NegateCondition(Condition cc) { + DCHECK(cc != cc_always); + return static_cast<Condition>(cc ^ 1); +} + + +// Commute a condition such that {a cond b == b cond' a}. +inline Condition CommuteCondition(Condition cc) { + switch (cc) { + case Uless: + return Ugreater; + case Ugreater: + return Uless; + case Ugreater_equal: + return Uless_equal; + case Uless_equal: + return Ugreater_equal; + case less: + return greater; + case greater: + return less; + case greater_equal: + return less_equal; + case less_equal: + return greater_equal; + default: + return cc; + } +} + + +// ----- Coprocessor conditions. +enum FPUCondition { + kNoFPUCondition = -1, + + F = 0, // False. + UN = 1, // Unordered. + EQ = 2, // Equal. + UEQ = 3, // Unordered or Equal. + OLT = 4, // Ordered or Less Than. + ULT = 5, // Unordered or Less Than. + OLE = 6, // Ordered or Less Than or Equal. + ULE = 7 // Unordered or Less Than or Equal. +}; + + +// FPU rounding modes. +enum FPURoundingMode { + RN = 0 << 0, // Round to Nearest. + RZ = 1 << 0, // Round towards zero. + RP = 2 << 0, // Round towards Plus Infinity. + RM = 3 << 0, // Round towards Minus Infinity. + + // Aliases. + kRoundToNearest = RN, + kRoundToZero = RZ, + kRoundToPlusInf = RP, + kRoundToMinusInf = RM +}; + +const uint32_t kFPURoundingModeMask = 3 << 0; + +enum CheckForInexactConversion { + kCheckForInexactConversion, + kDontCheckForInexactConversion +}; + + +// ----------------------------------------------------------------------------- +// Hints. + +// Branch hints are not used on the MIPS. They are defined so that they can +// appear in shared function signatures, but will be ignored in MIPS +// implementations. +enum Hint { + no_hint = 0 +}; + + +inline Hint NegateHint(Hint hint) { + return no_hint; +} + + +// ----------------------------------------------------------------------------- +// Specific instructions, constants, and masks. +// These constants are declared in assembler-mips.cc, as they use named +// registers and other constants. + +// addiu(sp, sp, 4) aka Pop() operation or part of Pop(r) +// operations as post-increment of sp. +extern const Instr kPopInstruction; +// addiu(sp, sp, -4) part of Push(r) operation as pre-decrement of sp. +extern const Instr kPushInstruction; +// sw(r, MemOperand(sp, 0)) +extern const Instr kPushRegPattern; +// lw(r, MemOperand(sp, 0)) +extern const Instr kPopRegPattern; +extern const Instr kLwRegFpOffsetPattern; +extern const Instr kSwRegFpOffsetPattern; +extern const Instr kLwRegFpNegOffsetPattern; +extern const Instr kSwRegFpNegOffsetPattern; +// A mask for the Rt register for push, pop, lw, sw instructions. +extern const Instr kRtMask; +extern const Instr kLwSwInstrTypeMask; +extern const Instr kLwSwInstrArgumentMask; +extern const Instr kLwSwOffsetMask; + +// Break 0xfffff, reserved for redirected real time call. 
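+// The redirect pattern below is assembled from the fields defined above:
+// the SPECIAL major opcode, the BREAK function field, and the 20-bit break
+// code 0xfffff placed at bit 6. The simulator recognizes exactly this word
+// as a redirected call into C code.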
+const Instr rtCallRedirInstr = SPECIAL | BREAK | call_rt_redirected << 6; +// A nop instruction. (Encoding of sll 0 0 0). +const Instr nopInstr = 0; + +class Instruction { + public: + enum { + kInstrSize = 4, + kInstrSizeLog2 = 2, + // On MIPS PC cannot actually be directly accessed. We behave as if PC was + // always the value of the current instruction being executed. + kPCReadOffset = 0 + }; + + // Get the raw instruction bits. + inline Instr InstructionBits() const { + return *reinterpret_cast<const Instr*>(this); + } + + // Set the raw instruction bits to value. + inline void SetInstructionBits(Instr value) { + *reinterpret_cast<Instr*>(this) = value; + } + + // Read one particular bit out of the instruction bits. + inline int Bit(int nr) const { + return (InstructionBits() >> nr) & 1; + } + + // Read a bit field out of the instruction bits. + inline int Bits(int hi, int lo) const { + return (InstructionBits() >> lo) & ((2 << (hi - lo)) - 1); + } + + // Instruction type. + enum Type { + kRegisterType, + kImmediateType, + kJumpType, + kUnsupported = -1 + }; + + // Get the encoding type of the instruction. + Type InstructionType() const; + + + // Accessors for the different named fields used in the MIPS encoding. + inline Opcode OpcodeValue() const { + return static_cast<Opcode>( + Bits(kOpcodeShift + kOpcodeBits - 1, kOpcodeShift)); + } + + inline int RsValue() const { + DCHECK(InstructionType() == kRegisterType || + InstructionType() == kImmediateType); + return Bits(kRsShift + kRsBits - 1, kRsShift); + } + + inline int RtValue() const { + DCHECK(InstructionType() == kRegisterType || + InstructionType() == kImmediateType); + return Bits(kRtShift + kRtBits - 1, kRtShift); + } + + inline int RdValue() const { + DCHECK(InstructionType() == kRegisterType); + return Bits(kRdShift + kRdBits - 1, kRdShift); + } + + inline int SaValue() const { + DCHECK(InstructionType() == kRegisterType); + return Bits(kSaShift + kSaBits - 1, kSaShift); + } + + inline int FunctionValue() const { + DCHECK(InstructionType() == kRegisterType || + InstructionType() == kImmediateType); + return Bits(kFunctionShift + kFunctionBits - 1, kFunctionShift); + } + + inline int FdValue() const { + return Bits(kFdShift + kFdBits - 1, kFdShift); + } + + inline int FsValue() const { + return Bits(kFsShift + kFsBits - 1, kFsShift); + } + + inline int FtValue() const { + return Bits(kFtShift + kFtBits - 1, kFtShift); + } + + inline int FrValue() const { + return Bits(kFrShift + kFrBits -1, kFrShift); + } + + // Float Compare condition code instruction bits. + inline int FCccValue() const { + return Bits(kFCccShift + kFCccBits - 1, kFCccShift); + } + + // Float Branch condition code instruction bits. + inline int FBccValue() const { + return Bits(kFBccShift + kFBccBits - 1, kFBccShift); + } + + // Float Branch true/false instruction bit. + inline int FBtrueValue() const { + return Bits(kFBtrueShift + kFBtrueBits - 1, kFBtrueShift); + } + + // Return the fields at their original place in the instruction encoding. + inline Opcode OpcodeFieldRaw() const { + return static_cast<Opcode>(InstructionBits() & kOpcodeMask); + } + + inline int RsFieldRaw() const { + DCHECK(InstructionType() == kRegisterType || + InstructionType() == kImmediateType); + return InstructionBits() & kRsFieldMask; + } + + // Same as above function, but safe to call within InstructionType(). 
+  inline int RsFieldRawNoAssert() const {
+    return InstructionBits() & kRsFieldMask;
+  }
+
+  inline int RtFieldRaw() const {
+    DCHECK(InstructionType() == kRegisterType ||
+           InstructionType() == kImmediateType);
+    return InstructionBits() & kRtFieldMask;
+  }
+
+  inline int RdFieldRaw() const {
+    DCHECK(InstructionType() == kRegisterType);
+    return InstructionBits() & kRdFieldMask;
+  }
+
+  inline int SaFieldRaw() const {
+    DCHECK(InstructionType() == kRegisterType);
+    return InstructionBits() & kSaFieldMask;
+  }
+
+  inline int FunctionFieldRaw() const {
+    return InstructionBits() & kFunctionFieldMask;
+  }
+
+  // Get the secondary field according to the opcode.
+  inline int SecondaryValue() const {
+    Opcode op = OpcodeFieldRaw();
+    switch (op) {
+      case SPECIAL:
+      case SPECIAL2:
+        return FunctionValue();
+      case COP1:
+        return RsValue();
+      case REGIMM:
+        return RtValue();
+      default:
+        return NULLSF;
+    }
+  }
+
+  inline int32_t Imm16Value() const {
+    DCHECK(InstructionType() == kImmediateType);
+    return Bits(kImm16Shift + kImm16Bits - 1, kImm16Shift);
+  }
+
+  inline int32_t Imm21Value() const {
+    DCHECK(InstructionType() == kImmediateType);
+    return Bits(kImm21Shift + kImm21Bits - 1, kImm21Shift);
+  }
+
+  inline int32_t Imm26Value() const {
+    DCHECK(InstructionType() == kJumpType);
+    return Bits(kImm26Shift + kImm26Bits - 1, kImm26Shift);
+  }
+
+  // Say if the instruction should not be used in a branch delay slot.
+  bool IsForbiddenInBranchDelay() const;
+  // Say if the instruction 'links'. e.g. jal, bal.
+  bool IsLinkingInstruction() const;
+  // Say if the instruction is a break or a trap.
+  bool IsTrap() const;
+
+  // Instructions are read out of a code stream. The only way to get a
+  // reference to an instruction is to convert a pointer. There is no way
+  // to allocate or create instances of class Instruction.
+  // Use the At(pc) function to create references to Instruction.
+  static Instruction* At(byte* pc) {
+    return reinterpret_cast<Instruction*>(pc);
+  }
+
+ private:
+  // We need to prevent the creation of instances of class Instruction.
+  DISALLOW_IMPLICIT_CONSTRUCTORS(Instruction);
+};
+
+
+// -----------------------------------------------------------------------------
+// MIPS assembly various constants.
+
+// C/C++ argument slots size.
+const int kCArgSlotCount = (kMipsAbi == kN64) ? 0 : 4;
+
+// TODO(plind): below should be based on kPointerSize
+// TODO(plind): find all usages and remove the needless instructions for n64.
+const int kCArgsSlotsSize = kCArgSlotCount * Instruction::kInstrSize * 2;
+
+const int kBranchReturnOffset = 2 * Instruction::kInstrSize;
+
+} }  // namespace v8::internal
+
+#endif  // #ifndef V8_MIPS_CONSTANTS_H_
diff --git a/deps/v8/src/mips64/cpu-mips64.cc b/deps/v8/src/mips64/cpu-mips64.cc
new file mode 100644
index 00000000000..027d5a103e5
--- /dev/null
+++ b/deps/v8/src/mips64/cpu-mips64.cc
@@ -0,0 +1,59 @@
+// Copyright 2012 the V8 project authors. All rights reserved.
+// Use of this source code is governed by a BSD-style license that can be
+// found in the LICENSE file.
+
+// CPU specific code for mips independent of OS goes here.
+
+#include <sys/syscall.h>
+#include <unistd.h>
+
+#ifdef __mips
+#include <asm/cachectl.h>
+#endif  // #ifdef __mips
+
+#include "src/v8.h"
+
+#if V8_TARGET_ARCH_MIPS64
+
+#include "src/assembler.h"
+#include "src/macro-assembler.h"
+
+#include "src/simulator.h"  // For cache flushing.
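+
+// The FlushICache() implementation below picks between three strategies for
+// keeping the instruction cache coherent after code patching: on 32-bit
+// Android, bionic's cacheflush(), which can typically run in userland; on
+// other real hardware, the Linux cacheflush syscall (__NR_cacheflush); and
+// under USE_SIMULATOR, a notification to the simulated ICache, since no real
+// instructions were written.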
+
+namespace v8 {
+namespace internal {
+
+
+void CpuFeatures::FlushICache(void* start, size_t size) {
+  // Nothing to do, flushing no instructions.
+  if (size == 0) {
+    return;
+  }
+
+#if !defined (USE_SIMULATOR)
+#if defined(ANDROID) && !defined(__LP64__)
+  // Bionic cacheflush can typically run in userland, avoiding kernel call.
+  char *end = reinterpret_cast<char *>(start) + size;
+  cacheflush(
+      reinterpret_cast<intptr_t>(start), reinterpret_cast<intptr_t>(end), 0);
+#else  // ANDROID
+  int res;
+  // See http://www.linux-mips.org/wiki/Cacheflush_Syscall.
+  res = syscall(__NR_cacheflush, start, size, ICACHE);
+  if (res) {
+    V8_Fatal(__FILE__, __LINE__, "Failed to flush the instruction cache");
+  }
+#endif  // ANDROID
+#else  // USE_SIMULATOR.
+  // Not generating mips instructions for C-code. This means that we are
+  // building a mips emulator based target. We should notify the simulator
+  // that the Icache was flushed.
+  // None of this code ends up in the snapshot so there are no issues
+  // around whether or not to generate the code when building snapshots.
+  Simulator::FlushICache(Isolate::Current()->simulator_i_cache(), start, size);
+#endif  // USE_SIMULATOR.
+}
+
+} }  // namespace v8::internal
+
+#endif  // V8_TARGET_ARCH_MIPS64
diff --git a/deps/v8/src/mips64/debug-mips64.cc b/deps/v8/src/mips64/debug-mips64.cc
new file mode 100644
index 00000000000..0a091048ded
--- /dev/null
+++ b/deps/v8/src/mips64/debug-mips64.cc
@@ -0,0 +1,330 @@
+// Copyright 2012 the V8 project authors. All rights reserved.
+// Use of this source code is governed by a BSD-style license that can be
+// found in the LICENSE file.
+
+
+
+#include "src/v8.h"
+
+#if V8_TARGET_ARCH_MIPS64
+
+#include "src/codegen.h"
+#include "src/debug.h"
+
+namespace v8 {
+namespace internal {
+
+bool BreakLocationIterator::IsDebugBreakAtReturn() {
+  return Debug::IsDebugBreakAtReturn(rinfo());
+}
+
+
+void BreakLocationIterator::SetDebugBreakAtReturn() {
+  // Mips return sequence:
+  // mov sp, fp
+  // lw fp, sp(0)
+  // lw ra, sp(4)
+  // addiu sp, sp, 8
+  // addiu sp, sp, N
+  // jr ra
+  // nop (in branch delay slot)
+
+  // Make sure this constant matches the number of instructions we emit.
+  DCHECK(Assembler::kJSReturnSequenceInstructions == 7);
+  CodePatcher patcher(rinfo()->pc(), Assembler::kJSReturnSequenceInstructions);
+  // li and Call pseudo-instructions emit 6 + 2 instructions.
+  patcher.masm()->li(v8::internal::t9, Operand(reinterpret_cast<int64_t>(
+      debug_info_->GetIsolate()->builtins()->Return_DebugBreak()->entry())),
+      ADDRESS_LOAD);
+  patcher.masm()->Call(v8::internal::t9);
+  // Place nop to match return sequence size.
+  patcher.masm()->nop();
+  // TODO(mips): Open issue about using breakpoint instruction instead of nops.
+  // patcher.masm()->bkpt(0);
+}
+
+
+// Restore the JS frame exit code.
+void BreakLocationIterator::ClearDebugBreakAtReturn() {
+  rinfo()->PatchCode(original_rinfo()->pc(),
+                     Assembler::kJSReturnSequenceInstructions);
+}
+
+
+// A debug break in the exit code is identified by the JS frame exit code
+// having been patched with a li/call pseudo-instruction (lui/ori/jalr).
+bool Debug::IsDebugBreakAtReturn(RelocInfo* rinfo) {
+  DCHECK(RelocInfo::IsJSReturn(rinfo->rmode()));
+  return rinfo->IsPatchedReturnSequence();
+}
+
+
+bool BreakLocationIterator::IsDebugBreakAtSlot() {
+  DCHECK(IsDebugBreakSlot());
+  // Check whether the debug break slot instructions have been patched.
+ return rinfo()->IsPatchedDebugBreakSlotSequence(); +} + + +void BreakLocationIterator::SetDebugBreakAtSlot() { + DCHECK(IsDebugBreakSlot()); + // Patch the code changing the debug break slot code from: + // nop(DEBUG_BREAK_NOP) - nop(1) is sll(zero_reg, zero_reg, 1) + // nop(DEBUG_BREAK_NOP) + // nop(DEBUG_BREAK_NOP) + // nop(DEBUG_BREAK_NOP) + // nop(DEBUG_BREAK_NOP) + // nop(DEBUG_BREAK_NOP) + // to a call to the debug break slot code. + // li t9, address (4-instruction sequence on mips64) + // call t9 (jalr t9 / nop instruction pair) + CodePatcher patcher(rinfo()->pc(), Assembler::kDebugBreakSlotInstructions); + patcher.masm()->li(v8::internal::t9, + Operand(reinterpret_cast<int64_t>( + debug_info_->GetIsolate()->builtins()->Slot_DebugBreak()->entry())), + ADDRESS_LOAD); + patcher.masm()->Call(v8::internal::t9); +} + + +void BreakLocationIterator::ClearDebugBreakAtSlot() { + DCHECK(IsDebugBreakSlot()); + rinfo()->PatchCode(original_rinfo()->pc(), + Assembler::kDebugBreakSlotInstructions); +} + + +#define __ ACCESS_MASM(masm) + + + +static void Generate_DebugBreakCallHelper(MacroAssembler* masm, + RegList object_regs, + RegList non_object_regs) { + { + FrameScope scope(masm, StackFrame::INTERNAL); + + // Load padding words on stack. + __ li(at, Operand(Smi::FromInt(LiveEdit::kFramePaddingValue))); + __ Dsubu(sp, sp, + Operand(kPointerSize * LiveEdit::kFramePaddingInitialSize)); + for (int i = LiveEdit::kFramePaddingInitialSize - 1; i >= 0; i--) { + __ sd(at, MemOperand(sp, kPointerSize * i)); + } + __ li(at, Operand(Smi::FromInt(LiveEdit::kFramePaddingInitialSize))); + __ push(at); + + + // TODO(plind): This needs to be revised to store pairs of smi's per + // the other 64-bit arch's. + + // Store the registers containing live values on the expression stack to + // make sure that these are correctly updated during GC. Non object values + // are stored as a smi causing it to be untouched by GC. + DCHECK((object_regs & ~kJSCallerSaved) == 0); + DCHECK((non_object_regs & ~kJSCallerSaved) == 0); + DCHECK((object_regs & non_object_regs) == 0); + for (int i = 0; i < kNumJSCallerSaved; i++) { + int r = JSCallerSavedCode(i); + Register reg = { r }; + if ((object_regs & (1 << r)) != 0) { + __ push(reg); + } + if ((non_object_regs & (1 << r)) != 0) { + __ PushRegisterAsTwoSmis(reg); + } + } + +#ifdef DEBUG + __ RecordComment("// Calling from debug break to runtime - come in - over"); +#endif + __ PrepareCEntryArgs(0); // No arguments. + __ PrepareCEntryFunction(ExternalReference::debug_break(masm->isolate())); + + CEntryStub ceb(masm->isolate(), 1); + __ CallStub(&ceb); + + // Restore the register values from the expression stack. + for (int i = kNumJSCallerSaved - 1; i >= 0; i--) { + int r = JSCallerSavedCode(i); + Register reg = { r }; + if ((non_object_regs & (1 << r)) != 0) { + __ PopRegisterAsTwoSmis(reg, at); + } + if ((object_regs & (1 << r)) != 0) { + __ pop(reg); + } + if (FLAG_debug_code && + (((object_regs |non_object_regs) & (1 << r)) == 0)) { + __ li(reg, kDebugZapValue); + } + } + + // Don't bother removing padding bytes pushed on the stack + // as the frame is going to be restored right away. + + // Leave the internal frame. + } + + // Now that the break point has been handled, resume normal execution by + // jumping to the target address intended by the caller and that was + // overwritten by the address of DebugBreakXXX. 
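+  // (The debug-break runtime saved that original address in a per-isolate
+  // slot; the ExternalReference below names the slot, the ld fetches the
+  // address stored there, and the Jump resumes execution at it.)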
+ ExternalReference after_break_target = + ExternalReference::debug_after_break_target_address(masm->isolate()); + __ li(t9, Operand(after_break_target)); + __ ld(t9, MemOperand(t9)); + __ Jump(t9); +} + + +void DebugCodegen::GenerateCallICStubDebugBreak(MacroAssembler* masm) { + // Register state for CallICStub + // ----------- S t a t e ------------- + // -- a1 : function + // -- a3 : slot in feedback array (smi) + // ----------------------------------- + Generate_DebugBreakCallHelper(masm, a1.bit() | a3.bit(), 0); +} + + +void DebugCodegen::GenerateLoadICDebugBreak(MacroAssembler* masm) { + Register receiver = LoadIC::ReceiverRegister(); + Register name = LoadIC::NameRegister(); + Generate_DebugBreakCallHelper(masm, receiver.bit() | name.bit(), 0); +} + + +void DebugCodegen::GenerateStoreICDebugBreak(MacroAssembler* masm) { + Register receiver = StoreIC::ReceiverRegister(); + Register name = StoreIC::NameRegister(); + Register value = StoreIC::ValueRegister(); + Generate_DebugBreakCallHelper( + masm, receiver.bit() | name.bit() | value.bit(), 0); +} + + +void DebugCodegen::GenerateKeyedLoadICDebugBreak(MacroAssembler* masm) { + // Calling convention for keyed IC load (from ic-mips64.cc). + GenerateLoadICDebugBreak(masm); +} + + +void DebugCodegen::GenerateKeyedStoreICDebugBreak(MacroAssembler* masm) { + // Calling convention for IC keyed store call (from ic-mips64.cc). + Register receiver = KeyedStoreIC::ReceiverRegister(); + Register name = KeyedStoreIC::NameRegister(); + Register value = KeyedStoreIC::ValueRegister(); + Generate_DebugBreakCallHelper( + masm, receiver.bit() | name.bit() | value.bit(), 0); +} + + +void DebugCodegen::GenerateCompareNilICDebugBreak(MacroAssembler* masm) { + // Register state for CompareNil IC + // ----------- S t a t e ------------- + // -- a0 : value + // ----------------------------------- + Generate_DebugBreakCallHelper(masm, a0.bit(), 0); +} + + +void DebugCodegen::GenerateReturnDebugBreak(MacroAssembler* masm) { + // In places other than IC call sites it is expected that v0 is TOS which + // is an object - this is not generally the case so this should be used with + // care. + Generate_DebugBreakCallHelper(masm, v0.bit(), 0); +} + + +void DebugCodegen::GenerateCallFunctionStubDebugBreak(MacroAssembler* masm) { + // Register state for CallFunctionStub (from code-stubs-mips.cc). + // ----------- S t a t e ------------- + // -- a1 : function + // ----------------------------------- + Generate_DebugBreakCallHelper(masm, a1.bit(), 0); +} + + +void DebugCodegen::GenerateCallConstructStubDebugBreak(MacroAssembler* masm) { + // Calling convention for CallConstructStub (from code-stubs-mips.cc). + // ----------- S t a t e ------------- + // -- a0 : number of arguments (not smi) + // -- a1 : constructor function + // ----------------------------------- + Generate_DebugBreakCallHelper(masm, a1.bit() , a0.bit()); +} + + + +void DebugCodegen::GenerateCallConstructStubRecordDebugBreak( + MacroAssembler* masm) { + // Calling convention for CallConstructStub (from code-stubs-mips.cc). + // ----------- S t a t e ------------- + // -- a0 : number of arguments (not smi) + // -- a1 : constructor function + // -- a2 : feedback array + // -- a3 : feedback slot (smi) + // ----------------------------------- + Generate_DebugBreakCallHelper(masm, a1.bit() | a2.bit() | a3.bit(), a0.bit()); +} + + +void DebugCodegen::GenerateSlot(MacroAssembler* masm) { + // Generate enough nop's to make space for a call instruction. 
Avoid emitting + // the trampoline pool in the debug break slot code. + Assembler::BlockTrampolinePoolScope block_trampoline_pool(masm); + Label check_codesize; + __ bind(&check_codesize); + __ RecordDebugBreakSlot(); + for (int i = 0; i < Assembler::kDebugBreakSlotInstructions; i++) { + __ nop(MacroAssembler::DEBUG_BREAK_NOP); + } + DCHECK_EQ(Assembler::kDebugBreakSlotInstructions, + masm->InstructionsGeneratedSince(&check_codesize)); +} + + +void DebugCodegen::GenerateSlotDebugBreak(MacroAssembler* masm) { + // In the places where a debug break slot is inserted no registers can contain + // object pointers. + Generate_DebugBreakCallHelper(masm, 0, 0); +} + + +void DebugCodegen::GeneratePlainReturnLiveEdit(MacroAssembler* masm) { + __ Ret(); +} + + +void DebugCodegen::GenerateFrameDropperLiveEdit(MacroAssembler* masm) { + ExternalReference restarter_frame_function_slot = + ExternalReference::debug_restarter_frame_function_pointer_address( + masm->isolate()); + __ li(at, Operand(restarter_frame_function_slot)); + __ sw(zero_reg, MemOperand(at, 0)); + + // We do not know our frame height, but set sp based on fp. + __ Dsubu(sp, fp, Operand(kPointerSize)); + + __ Pop(ra, fp, a1); // Return address, Frame, Function. + + // Load context from the function. + __ ld(cp, FieldMemOperand(a1, JSFunction::kContextOffset)); + + // Get function code. + __ ld(at, FieldMemOperand(a1, JSFunction::kSharedFunctionInfoOffset)); + __ ld(at, FieldMemOperand(at, SharedFunctionInfo::kCodeOffset)); + __ Daddu(t9, at, Operand(Code::kHeaderSize - kHeapObjectTag)); + + // Re-run JSFunction, a1 is function, cp is context. + __ Jump(t9); +} + + +const bool LiveEdit::kFrameDropperSupported = true; + +#undef __ + +} } // namespace v8::internal + +#endif // V8_TARGET_ARCH_MIPS64 diff --git a/deps/v8/src/mips64/deoptimizer-mips64.cc b/deps/v8/src/mips64/deoptimizer-mips64.cc new file mode 100644 index 00000000000..8d5bb2d2e55 --- /dev/null +++ b/deps/v8/src/mips64/deoptimizer-mips64.cc @@ -0,0 +1,379 @@ +// Copyright 2011 the V8 project authors. All rights reserved. +// Use of this source code is governed by a BSD-style license that can be +// found in the LICENSE file. + +#include "src/v8.h" + +#include "src/codegen.h" +#include "src/deoptimizer.h" +#include "src/full-codegen.h" +#include "src/safepoint-table.h" + +namespace v8 { +namespace internal { + + +int Deoptimizer::patch_size() { + const int kCallInstructionSizeInWords = 6; + return kCallInstructionSizeInWords * Assembler::kInstrSize; +} + + +void Deoptimizer::PatchCodeForDeoptimization(Isolate* isolate, Code* code) { + Address code_start_address = code->instruction_start(); + // Invalidate the relocation information, as it will become invalid by the + // code patching below, and is not needed any more. + code->InvalidateRelocation(); + + if (FLAG_zap_code_space) { + // Fail hard and early if we enter this code object again. 
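+    // A break is planted at the start of the code object (past the code-age
+    // sequence, if present) and at the OSR entry point, so that any re-entry
+    // into the zapped code traps immediately.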
+ byte* pointer = code->FindCodeAgeSequence(); + if (pointer != NULL) { + pointer += kNoCodeAgeSequenceLength; + } else { + pointer = code->instruction_start(); + } + CodePatcher patcher(pointer, 1); + patcher.masm()->break_(0xCC); + + DeoptimizationInputData* data = + DeoptimizationInputData::cast(code->deoptimization_data()); + int osr_offset = data->OsrPcOffset()->value(); + if (osr_offset > 0) { + CodePatcher osr_patcher(code->instruction_start() + osr_offset, 1); + osr_patcher.masm()->break_(0xCC); + } + } + + DeoptimizationInputData* deopt_data = + DeoptimizationInputData::cast(code->deoptimization_data()); +#ifdef DEBUG + Address prev_call_address = NULL; +#endif + // For each LLazyBailout instruction insert a call to the corresponding + // deoptimization entry. + for (int i = 0; i < deopt_data->DeoptCount(); i++) { + if (deopt_data->Pc(i)->value() == -1) continue; + Address call_address = code_start_address + deopt_data->Pc(i)->value(); + Address deopt_entry = GetDeoptimizationEntry(isolate, i, LAZY); + int call_size_in_bytes = MacroAssembler::CallSize(deopt_entry, + RelocInfo::NONE32); + int call_size_in_words = call_size_in_bytes / Assembler::kInstrSize; + DCHECK(call_size_in_bytes % Assembler::kInstrSize == 0); + DCHECK(call_size_in_bytes <= patch_size()); + CodePatcher patcher(call_address, call_size_in_words); + patcher.masm()->Call(deopt_entry, RelocInfo::NONE32); + DCHECK(prev_call_address == NULL || + call_address >= prev_call_address + patch_size()); + DCHECK(call_address + patch_size() <= code->instruction_end()); + +#ifdef DEBUG + prev_call_address = call_address; +#endif + } +} + + +void Deoptimizer::FillInputFrame(Address tos, JavaScriptFrame* frame) { + // Set the register values. The values are not important as there are no + // callee saved registers in JavaScript frames, so all registers are + // spilled. Registers fp and sp are set to the correct values though. + + for (int i = 0; i < Register::kNumRegisters; i++) { + input_->SetRegister(i, i * 4); + } + input_->SetRegister(sp.code(), reinterpret_cast<intptr_t>(frame->sp())); + input_->SetRegister(fp.code(), reinterpret_cast<intptr_t>(frame->fp())); + for (int i = 0; i < DoubleRegister::NumAllocatableRegisters(); i++) { + input_->SetDoubleRegister(i, 0.0); + } + + // Fill the frame content from the actual data on the frame. + for (unsigned i = 0; i < input_->GetFrameSize(); i += kPointerSize) { + input_->SetFrameSlot(i, Memory::uint64_at(tos + i)); + } +} + + +void Deoptimizer::SetPlatformCompiledStubRegisters( + FrameDescription* output_frame, CodeStubInterfaceDescriptor* descriptor) { + ApiFunction function(descriptor->deoptimization_handler()); + ExternalReference xref(&function, ExternalReference::BUILTIN_CALL, isolate_); + intptr_t handler = reinterpret_cast<intptr_t>(xref.address()); + int params = descriptor->GetHandlerParameterCount(); + output_frame->SetRegister(s0.code(), params); + output_frame->SetRegister(s1.code(), (params - 1) * kPointerSize); + output_frame->SetRegister(s2.code(), handler); +} + + +void Deoptimizer::CopyDoubleRegisters(FrameDescription* output_frame) { + for (int i = 0; i < DoubleRegister::kMaxNumRegisters; ++i) { + double double_value = input_->GetDoubleRegister(i); + output_frame->SetDoubleRegister(i, double_value); + } +} + + +bool Deoptimizer::HasAlignmentPadding(JSFunction* function) { + // There is no dynamic alignment padding on MIPS in the input frame. 
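+  // Deoptimization therefore never has to account for padding words when it
+  // copies the input frame.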
+ return false; +} + + +#define __ masm()-> + + +// This code tries to be close to ia32 code so that any changes can be +// easily ported. +void Deoptimizer::EntryGenerator::Generate() { + GeneratePrologue(); + + // Unlike on ARM we don't save all the registers, just the useful ones. + // For the rest, there are gaps on the stack, so the offsets remain the same. + const int kNumberOfRegisters = Register::kNumRegisters; + + RegList restored_regs = kJSCallerSaved | kCalleeSaved; + RegList saved_regs = restored_regs | sp.bit() | ra.bit(); + + const int kDoubleRegsSize = + kDoubleSize * FPURegister::kMaxNumAllocatableRegisters; + + // Save all FPU registers before messing with them. + __ Dsubu(sp, sp, Operand(kDoubleRegsSize)); + for (int i = 0; i < FPURegister::kMaxNumAllocatableRegisters; ++i) { + FPURegister fpu_reg = FPURegister::FromAllocationIndex(i); + int offset = i * kDoubleSize; + __ sdc1(fpu_reg, MemOperand(sp, offset)); + } + + // Push saved_regs (needed to populate FrameDescription::registers_). + // Leave gaps for other registers. + __ Dsubu(sp, sp, kNumberOfRegisters * kPointerSize); + for (int16_t i = kNumberOfRegisters - 1; i >= 0; i--) { + if ((saved_regs & (1 << i)) != 0) { + __ sd(ToRegister(i), MemOperand(sp, kPointerSize * i)); + } + } + + const int kSavedRegistersAreaSize = + (kNumberOfRegisters * kPointerSize) + kDoubleRegsSize; + + // Get the bailout id from the stack. + __ ld(a2, MemOperand(sp, kSavedRegistersAreaSize)); + + // Get the address of the location in the code object (a3) (return + // address for lazy deoptimization) and compute the fp-to-sp delta in + // register a4. + __ mov(a3, ra); + // Correct one word for bailout id. + __ Daddu(a4, sp, Operand(kSavedRegistersAreaSize + (1 * kPointerSize))); + + __ Dsubu(a4, fp, a4); + + // Allocate a new deoptimizer object. + __ PrepareCallCFunction(6, a5); + // Pass six arguments, according to O32 or n64 ABI. a0..a3 are same for both. + __ li(a1, Operand(type())); // bailout type, + __ ld(a0, MemOperand(fp, JavaScriptFrameConstants::kFunctionOffset)); + // a2: bailout id already loaded. + // a3: code address or 0 already loaded. + if (kMipsAbi == kN64) { + // a4: already has fp-to-sp delta. + __ li(a5, Operand(ExternalReference::isolate_address(isolate()))); + } else { // O32 abi. + // Pass four arguments in a0 to a3 and fifth & sixth arguments on stack. + __ sd(a4, CFunctionArgumentOperand(5)); // Fp-to-sp delta. + __ li(a5, Operand(ExternalReference::isolate_address(isolate()))); + __ sd(a5, CFunctionArgumentOperand(6)); // Isolate. + } + // Call Deoptimizer::New(). + { + AllowExternalCallThatCantCauseGC scope(masm()); + __ CallCFunction(ExternalReference::new_deoptimizer_function(isolate()), 6); + } + + // Preserve "deoptimizer" object in register v0 and get the input + // frame descriptor pointer to a1 (deoptimizer->input_); + // Move deopt-obj to a0 for call to Deoptimizer::ComputeOutputFrames() below. + __ mov(a0, v0); + __ ld(a1, MemOperand(v0, Deoptimizer::input_offset())); + + // Copy core registers into FrameDescription::registers_[kNumRegisters]. 
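+  // Slots for registers that were not saved above are filled with the debug
+  // zap value in debug-code builds so that stale reads are easy to spot.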
+ DCHECK(Register::kNumRegisters == kNumberOfRegisters); + for (int i = 0; i < kNumberOfRegisters; i++) { + int offset = (i * kPointerSize) + FrameDescription::registers_offset(); + if ((saved_regs & (1 << i)) != 0) { + __ ld(a2, MemOperand(sp, i * kPointerSize)); + __ sd(a2, MemOperand(a1, offset)); + } else if (FLAG_debug_code) { + __ li(a2, kDebugZapValue); + __ sd(a2, MemOperand(a1, offset)); + } + } + + int double_regs_offset = FrameDescription::double_registers_offset(); + // Copy FPU registers to + // double_registers_[DoubleRegister::kNumAllocatableRegisters] + for (int i = 0; i < FPURegister::NumAllocatableRegisters(); ++i) { + int dst_offset = i * kDoubleSize + double_regs_offset; + int src_offset = i * kDoubleSize + kNumberOfRegisters * kPointerSize; + __ ldc1(f0, MemOperand(sp, src_offset)); + __ sdc1(f0, MemOperand(a1, dst_offset)); + } + + // Remove the bailout id and the saved registers from the stack. + __ Daddu(sp, sp, Operand(kSavedRegistersAreaSize + (1 * kPointerSize))); + + // Compute a pointer to the unwinding limit in register a2; that is + // the first stack slot not part of the input frame. + __ ld(a2, MemOperand(a1, FrameDescription::frame_size_offset())); + __ Daddu(a2, a2, sp); + + // Unwind the stack down to - but not including - the unwinding + // limit and copy the contents of the activation frame to the input + // frame description. + __ Daddu(a3, a1, Operand(FrameDescription::frame_content_offset())); + Label pop_loop; + Label pop_loop_header; + __ BranchShort(&pop_loop_header); + __ bind(&pop_loop); + __ pop(a4); + __ sd(a4, MemOperand(a3, 0)); + __ daddiu(a3, a3, sizeof(uint64_t)); + __ bind(&pop_loop_header); + __ BranchShort(&pop_loop, ne, a2, Operand(sp)); + // Compute the output frame in the deoptimizer. + __ push(a0); // Preserve deoptimizer object across call. + // a0: deoptimizer object; a1: scratch. + __ PrepareCallCFunction(1, a1); + // Call Deoptimizer::ComputeOutputFrames(). + { + AllowExternalCallThatCantCauseGC scope(masm()); + __ CallCFunction( + ExternalReference::compute_output_frames_function(isolate()), 1); + } + __ pop(a0); // Restore deoptimizer object (class Deoptimizer). + + // Replace the current (input) frame with the output frames. + Label outer_push_loop, inner_push_loop, + outer_loop_header, inner_loop_header; + // Outer loop state: a4 = current "FrameDescription** output_", + // a1 = one past the last FrameDescription**. + __ lw(a1, MemOperand(a0, Deoptimizer::output_count_offset())); + __ ld(a4, MemOperand(a0, Deoptimizer::output_offset())); // a4 is output_. + __ dsll(a1, a1, kPointerSizeLog2); // Count to offset. + __ daddu(a1, a4, a1); // a1 = one past the last FrameDescription**. + __ jmp(&outer_loop_header); + __ bind(&outer_push_loop); + // Inner loop state: a2 = current FrameDescription*, a3 = loop index. 
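+  // a3 counts down from frame_size to zero, so the frame contents are pushed
+  // starting from the highest offset.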
+ __ ld(a2, MemOperand(a4, 0)); // output_[ix] + __ ld(a3, MemOperand(a2, FrameDescription::frame_size_offset())); + __ jmp(&inner_loop_header); + __ bind(&inner_push_loop); + __ Dsubu(a3, a3, Operand(sizeof(uint64_t))); + __ Daddu(a6, a2, Operand(a3)); + __ ld(a7, MemOperand(a6, FrameDescription::frame_content_offset())); + __ push(a7); + __ bind(&inner_loop_header); + __ BranchShort(&inner_push_loop, ne, a3, Operand(zero_reg)); + + __ Daddu(a4, a4, Operand(kPointerSize)); + __ bind(&outer_loop_header); + __ BranchShort(&outer_push_loop, lt, a4, Operand(a1)); + + __ ld(a1, MemOperand(a0, Deoptimizer::input_offset())); + for (int i = 0; i < FPURegister::kMaxNumAllocatableRegisters; ++i) { + const FPURegister fpu_reg = FPURegister::FromAllocationIndex(i); + int src_offset = i * kDoubleSize + double_regs_offset; + __ ldc1(fpu_reg, MemOperand(a1, src_offset)); + } + + // Push state, pc, and continuation from the last output frame. + __ ld(a6, MemOperand(a2, FrameDescription::state_offset())); + __ push(a6); + + __ ld(a6, MemOperand(a2, FrameDescription::pc_offset())); + __ push(a6); + __ ld(a6, MemOperand(a2, FrameDescription::continuation_offset())); + __ push(a6); + + + // Technically restoring 'at' should work unless zero_reg is also restored + // but it's safer to check for this. + DCHECK(!(at.bit() & restored_regs)); + // Restore the registers from the last output frame. + __ mov(at, a2); + for (int i = kNumberOfRegisters - 1; i >= 0; i--) { + int offset = (i * kPointerSize) + FrameDescription::registers_offset(); + if ((restored_regs & (1 << i)) != 0) { + __ ld(ToRegister(i), MemOperand(at, offset)); + } + } + + __ InitializeRootRegister(); + + __ pop(at); // Get continuation, leave pc on stack. + __ pop(ra); + __ Jump(at); + __ stop("Unreachable."); +} + + +// Maximum size of a table entry generated below. +const int Deoptimizer::table_entry_size_ = 11 * Assembler::kInstrSize; + +void Deoptimizer::TableEntryGenerator::GeneratePrologue() { + Assembler::BlockTrampolinePoolScope block_trampoline_pool(masm()); + + // Create a sequence of deoptimization entries. + // Note that registers are still live when jumping to an entry. + Label table_start; + __ bind(&table_start); + for (int i = 0; i < count(); i++) { + Label start; + __ bind(&start); + __ daddiu(sp, sp, -1 * kPointerSize); + // Jump over the remaining deopt entries (including this one). + // This code is always reached by calling Jump, which puts the target (label + // start) into t9. + const int remaining_entries = (count() - i) * table_entry_size_; + __ Daddu(t9, t9, remaining_entries); + // 'at' was clobbered so we can only load the current entry value here. + __ li(t8, i); + __ jr(t9); // Expose delay slot. + __ sd(t8, MemOperand(sp, 0 * kPointerSize)); // In the delay slot. + + // Pad the rest of the code. + while (table_entry_size_ > (masm()->SizeOfCodeGeneratedSince(&start))) { + __ nop(); + } + + DCHECK_EQ(table_entry_size_, masm()->SizeOfCodeGeneratedSince(&start)); + } + + DCHECK_EQ(masm()->SizeOfCodeGeneratedSince(&table_start), + count() * table_entry_size_); +} + + +void FrameDescription::SetCallerPc(unsigned offset, intptr_t value) { + SetFrameSlot(offset, value); +} + + +void FrameDescription::SetCallerFp(unsigned offset, intptr_t value) { + SetFrameSlot(offset, value); +} + + +void FrameDescription::SetCallerConstantPool(unsigned offset, intptr_t value) { + // No out-of-line constant pool support. 
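+  // The MIPS assembler does not use constant pools, so there is no caller
+  // constant pool slot to fill in.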
+  UNREACHABLE();
+}
+
+
+#undef __
+
+
+} }  // namespace v8::internal
diff --git a/deps/v8/src/mips64/disasm-mips64.cc b/deps/v8/src/mips64/disasm-mips64.cc
new file mode 100644
index 00000000000..d47950fd023
--- /dev/null
+++ b/deps/v8/src/mips64/disasm-mips64.cc
@@ -0,0 +1,1504 @@
+// Copyright 2012 the V8 project authors. All rights reserved.
+// Use of this source code is governed by a BSD-style license that can be
+// found in the LICENSE file.
+
+// A Disassembler object is used to disassemble a block of code instruction by
+// instruction. The default implementation of the NameConverter object can be
+// overridden to modify register names or to do symbol lookup on addresses.
+//
+// The example below will disassemble a block of code and print it to stdout.
+//
+//   NameConverter converter;
+//   Disassembler d(converter);
+//   for (byte* pc = begin; pc < end;) {
+//     v8::internal::EmbeddedVector<char, 256> buffer;
+//     byte* prev_pc = pc;
+//     pc += d.InstructionDecode(buffer, pc);
+//     printf("%p %08x %s\n",
+//            prev_pc, *reinterpret_cast<int32_t*>(prev_pc), buffer);
+//   }
+//
+// The Disassembler class also has a convenience method to disassemble a block
+// of code into a FILE*, meaning that the above functionality could also be
+// achieved by just calling Disassembler::Disassemble(stdout, begin, end);
+
+
+#include <assert.h>
+#include <stdarg.h>
+#include <stdio.h>
+#include <string.h>
+
+#include "src/v8.h"
+
+#if V8_TARGET_ARCH_MIPS64
+
+#include "src/base/platform/platform.h"
+#include "src/disasm.h"
+#include "src/macro-assembler.h"
+#include "src/mips64/constants-mips64.h"
+
+namespace v8 {
+namespace internal {
+
+//------------------------------------------------------------------------------
+
+// Decoder decodes and disassembles instructions into an output buffer.
+// It uses the converter to convert register names and call destinations into
+// a more informative description.
+class Decoder {
+ public:
+  Decoder(const disasm::NameConverter& converter,
+          v8::internal::Vector<char> out_buffer)
+    : converter_(converter),
+      out_buffer_(out_buffer),
+      out_buffer_pos_(0) {
+    out_buffer_[out_buffer_pos_] = '\0';
+  }
+
+  ~Decoder() {}
+
+  // Writes one disassembled instruction into 'buffer' (0-terminated).
+  // Returns the length of the disassembled machine instruction in bytes.
+  int InstructionDecode(byte* instruction);
+
+ private:
+  // Bottleneck functions to print into the out_buffer.
+  void PrintChar(const char ch);
+  void Print(const char* str);
+
+  // Printing of common values.
+  void PrintRegister(int reg);
+  void PrintFPURegister(int freg);
+  void PrintRs(Instruction* instr);
+  void PrintRt(Instruction* instr);
+  void PrintRd(Instruction* instr);
+  void PrintFs(Instruction* instr);
+  void PrintFt(Instruction* instr);
+  void PrintFd(Instruction* instr);
+  void PrintSa(Instruction* instr);
+  void PrintSd(Instruction* instr);
+  void PrintSs1(Instruction* instr);
+  void PrintSs2(Instruction* instr);
+  void PrintBc(Instruction* instr);
+  void PrintCc(Instruction* instr);
+  void PrintFunction(Instruction* instr);
+  void PrintSecondaryField(Instruction* instr);
+  void PrintUImm16(Instruction* instr);
+  void PrintSImm16(Instruction* instr);
+  void PrintXImm16(Instruction* instr);
+  void PrintXImm21(Instruction* instr);
+  void PrintXImm26(Instruction* instr);
+  void PrintCode(Instruction* instr);  // For break and trap instructions.
+  // Printing of instruction name.
+  void PrintInstructionName(Instruction* instr);
+
+  // Handle formatting of instructions and their options.
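+  // Format strings use a single quote as an escape character, e.g.
+  // "daddiu 'rt, 'rs, 'imm16s"; FormatOption consumes the option name and
+  // prints the corresponding field of the instruction.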
+ int FormatRegister(Instruction* instr, const char* option); + int FormatFPURegister(Instruction* instr, const char* option); + int FormatOption(Instruction* instr, const char* option); + void Format(Instruction* instr, const char* format); + void Unknown(Instruction* instr); + int DecodeBreakInstr(Instruction* instr); + + // Each of these functions decodes one particular instruction type. + int DecodeTypeRegister(Instruction* instr); + void DecodeTypeImmediate(Instruction* instr); + void DecodeTypeJump(Instruction* instr); + + const disasm::NameConverter& converter_; + v8::internal::Vector<char> out_buffer_; + int out_buffer_pos_; + + DISALLOW_COPY_AND_ASSIGN(Decoder); +}; + + +// Support for assertions in the Decoder formatting functions. +#define STRING_STARTS_WITH(string, compare_string) \ + (strncmp(string, compare_string, strlen(compare_string)) == 0) + + +// Append the ch to the output buffer. +void Decoder::PrintChar(const char ch) { + out_buffer_[out_buffer_pos_++] = ch; +} + + +// Append the str to the output buffer. +void Decoder::Print(const char* str) { + char cur = *str++; + while (cur != '\0' && (out_buffer_pos_ < (out_buffer_.length() - 1))) { + PrintChar(cur); + cur = *str++; + } + out_buffer_[out_buffer_pos_] = 0; +} + + +// Print the register name according to the active name converter. +void Decoder::PrintRegister(int reg) { + Print(converter_.NameOfCPURegister(reg)); +} + + +void Decoder::PrintRs(Instruction* instr) { + int reg = instr->RsValue(); + PrintRegister(reg); +} + + +void Decoder::PrintRt(Instruction* instr) { + int reg = instr->RtValue(); + PrintRegister(reg); +} + + +void Decoder::PrintRd(Instruction* instr) { + int reg = instr->RdValue(); + PrintRegister(reg); +} + + +// Print the FPUregister name according to the active name converter. +void Decoder::PrintFPURegister(int freg) { + Print(converter_.NameOfXMMRegister(freg)); +} + + +void Decoder::PrintFs(Instruction* instr) { + int freg = instr->RsValue(); + PrintFPURegister(freg); +} + + +void Decoder::PrintFt(Instruction* instr) { + int freg = instr->RtValue(); + PrintFPURegister(freg); +} + + +void Decoder::PrintFd(Instruction* instr) { + int freg = instr->RdValue(); + PrintFPURegister(freg); +} + + +// Print the integer value of the sa field. +void Decoder::PrintSa(Instruction* instr) { + int sa = instr->SaValue(); + out_buffer_pos_ += SNPrintF(out_buffer_ + out_buffer_pos_, "%d", sa); +} + + +// Print the integer value of the rd field, when it is not used as reg. +void Decoder::PrintSd(Instruction* instr) { + int sd = instr->RdValue(); + out_buffer_pos_ += SNPrintF(out_buffer_ + out_buffer_pos_, "%d", sd); +} + + +// Print the integer value of the rd field, when used as 'ext' size. +void Decoder::PrintSs1(Instruction* instr) { + int ss = instr->RdValue(); + out_buffer_pos_ += SNPrintF(out_buffer_ + out_buffer_pos_, "%d", ss + 1); +} + + +// Print the integer value of the rd field, when used as 'ins' size. +void Decoder::PrintSs2(Instruction* instr) { + int ss = instr->RdValue(); + int pos = instr->SaValue(); + out_buffer_pos_ += + SNPrintF(out_buffer_ + out_buffer_pos_, "%d", ss - pos + 1); +} + + +// Print the integer value of the cc field for the bc1t/f instructions. +void Decoder::PrintBc(Instruction* instr) { + int cc = instr->FBccValue(); + out_buffer_pos_ += SNPrintF(out_buffer_ + out_buffer_pos_, "%d", cc); +} + + +// Print the integer value of the cc field for the FP compare instructions. 
+void Decoder::PrintCc(Instruction* instr) {
+  int cc = instr->FCccValue();
+  out_buffer_pos_ += SNPrintF(out_buffer_ + out_buffer_pos_, "cc(%d)", cc);
+}
+
+
+// Print 16-bit unsigned immediate value.
+void Decoder::PrintUImm16(Instruction* instr) {
+  int32_t imm = instr->Imm16Value();
+  out_buffer_pos_ += SNPrintF(out_buffer_ + out_buffer_pos_, "%u", imm);
+}
+
+
+// Print 16-bit signed immediate value.
+void Decoder::PrintSImm16(Instruction* instr) {
+  int32_t imm = ((instr->Imm16Value()) << 16) >> 16;
+  out_buffer_pos_ += SNPrintF(out_buffer_ + out_buffer_pos_, "%d", imm);
+}
+
+
+// Print 16-bit hexadecimal immediate value.
+void Decoder::PrintXImm16(Instruction* instr) {
+  int32_t imm = instr->Imm16Value();
+  out_buffer_pos_ += SNPrintF(out_buffer_ + out_buffer_pos_, "0x%x", imm);
+}
+
+
+// Print 21-bit immediate value.
+void Decoder::PrintXImm21(Instruction* instr) {
+  uint32_t imm = instr->Imm21Value();
+  out_buffer_pos_ += SNPrintF(out_buffer_ + out_buffer_pos_, "0x%x", imm);
+}
+
+
+// Print 26-bit immediate value.
+void Decoder::PrintXImm26(Instruction* instr) {
+  uint32_t imm = instr->Imm26Value() << kImmFieldShift;
+  out_buffer_pos_ += SNPrintF(out_buffer_ + out_buffer_pos_, "0x%x", imm);
+}
+
+
+// Print the code field of break and trap instructions.
+void Decoder::PrintCode(Instruction* instr) {
+  if (instr->OpcodeFieldRaw() != SPECIAL)
+    return;  // Not a break or trap instruction.
+  switch (instr->FunctionFieldRaw()) {
+    case BREAK: {
+      int32_t code = instr->Bits(25, 6);
+      out_buffer_pos_ += SNPrintF(out_buffer_ + out_buffer_pos_,
+                                  "0x%05x (%d)", code, code);
+      break;
+    }
+    case TGE:
+    case TGEU:
+    case TLT:
+    case TLTU:
+    case TEQ:
+    case TNE: {
+      int32_t code = instr->Bits(15, 6);
+      out_buffer_pos_ +=
+          SNPrintF(out_buffer_ + out_buffer_pos_, "0x%03x", code);
+      break;
+    }
+    default:  // Not a break or trap instruction.
+      break;
+  }
+}
+
+
+// Printing of instruction name.
+void Decoder::PrintInstructionName(Instruction* instr) {
+}
+
+
+// Handle all register based formatting in this function to reduce the
+// complexity of FormatOption.
+int Decoder::FormatRegister(Instruction* instr, const char* format) {
+  DCHECK(format[0] == 'r');
+  if (format[1] == 's') {  // 'rs: Rs register.
+    int reg = instr->RsValue();
+    PrintRegister(reg);
+    return 2;
+  } else if (format[1] == 't') {  // 'rt: rt register.
+    int reg = instr->RtValue();
+    PrintRegister(reg);
+    return 2;
+  } else if (format[1] == 'd') {  // 'rd: rd register.
+    int reg = instr->RdValue();
+    PrintRegister(reg);
+    return 2;
+  }
+  UNREACHABLE();
+  return -1;
+}
+
+
+// Handle all FPUregister based formatting in this function to reduce the
+// complexity of FormatOption.
+int Decoder::FormatFPURegister(Instruction* instr, const char* format) {
+  DCHECK(format[0] == 'f');
+  if (format[1] == 's') {  // 'fs: fs register.
+    int reg = instr->FsValue();
+    PrintFPURegister(reg);
+    return 2;
+  } else if (format[1] == 't') {  // 'ft: ft register.
+    int reg = instr->FtValue();
+    PrintFPURegister(reg);
+    return 2;
+  } else if (format[1] == 'd') {  // 'fd: fd register.
+    int reg = instr->FdValue();
+    PrintFPURegister(reg);
+    return 2;
+  } else if (format[1] == 'r') {  // 'fr: fr register.
+    int reg = instr->FrValue();
+    PrintFPURegister(reg);
+    return 2;
+  }
+  UNREACHABLE();
+  return -1;
+}
+
+
+// FormatOption takes a formatting string and interprets it based on
+// the current instructions. The format string points to the first
+// character of the option string (the option escape has already been
+// consumed by the caller.)  FormatOption returns the number of
+// characters that were consumed from the formatting string.
+int Decoder::FormatOption(Instruction* instr, const char* format) {
+  switch (format[0]) {
+    case 'c': {  // 'code for break or trap instructions.
+      DCHECK(STRING_STARTS_WITH(format, "code"));
+      PrintCode(instr);
+      return 4;
+    }
+    case 'i': {  // 'imm16u or 'imm26.
+      if (format[3] == '1') {
+        DCHECK(STRING_STARTS_WITH(format, "imm16"));
+        if (format[5] == 's') {
+          DCHECK(STRING_STARTS_WITH(format, "imm16s"));
+          PrintSImm16(instr);
+        } else if (format[5] == 'u') {
+          DCHECK(STRING_STARTS_WITH(format, "imm16u"));
+          PrintUImm16(instr);
+        } else {
+          DCHECK(STRING_STARTS_WITH(format, "imm16x"));
+          PrintXImm16(instr);
+        }
+        return 6;
+      } else if (format[3] == '2' && format[4] == '1') {
+        DCHECK(STRING_STARTS_WITH(format, "imm21x"));
+        PrintXImm21(instr);
+        return 6;
+      } else if (format[3] == '2' && format[4] == '6') {
+        DCHECK(STRING_STARTS_WITH(format, "imm26x"));
+        PrintXImm26(instr);
+        return 6;
+      }
+    }
+    case 'r': {  // 'r: registers.
+      return FormatRegister(instr, format);
+    }
+    case 'f': {  // 'f: FPUregisters.
+      return FormatFPURegister(instr, format);
+    }
+    case 's': {  // 'sa.
+      switch (format[1]) {
+        case 'a': {
+          DCHECK(STRING_STARTS_WITH(format, "sa"));
+          PrintSa(instr);
+          return 2;
+        }
+        case 'd': {
+          DCHECK(STRING_STARTS_WITH(format, "sd"));
+          PrintSd(instr);
+          return 2;
+        }
+        case 's': {
+          if (format[2] == '1') {
+            DCHECK(STRING_STARTS_WITH(format, "ss1"));  /* ext size */
+            PrintSs1(instr);
+            return 3;
+          } else {
+            DCHECK(STRING_STARTS_WITH(format, "ss2"));  /* ins size */
+            PrintSs2(instr);
+            return 3;
+          }
+        }
+      }
+    }
+    case 'b': {  // 'bc - Special for bc1 cc field.
+      DCHECK(STRING_STARTS_WITH(format, "bc"));
+      PrintBc(instr);
+      return 2;
+    }
+    case 'C': {  // 'Cc - Special for c.xx.d cc field.
+      DCHECK(STRING_STARTS_WITH(format, "Cc"));
+      PrintCc(instr);
+      return 2;
+    }
+  }
+  UNREACHABLE();
+  return -1;
+}
+
+
+// Format takes a formatting string for a whole instruction and prints it into
+// the output buffer. All escaped options are handed to FormatOption to be
+// parsed further.
+void Decoder::Format(Instruction* instr, const char* format) {
+  char cur = *format++;
+  while ((cur != 0) && (out_buffer_pos_ < (out_buffer_.length() - 1))) {
+    if (cur == '\'') {  // Single quote is used as the formatting escape.
+      format += FormatOption(instr, format);
+    } else {
+      out_buffer_[out_buffer_pos_++] = cur;
+    }
+    cur = *format++;
+  }
+  out_buffer_[out_buffer_pos_] = '\0';
+}
+
+
+// For currently unimplemented decodings the disassembler calls Unknown(instr)
+// which will just print "unknown" for the instruction bits.
+void Decoder::Unknown(Instruction* instr) {
+  Format(instr, "unknown");
+}
+
+
+int Decoder::DecodeBreakInstr(Instruction* instr) {
+  // This is already known to be BREAK instr, just extract the code.
+  if (instr->Bits(25, 6) == static_cast<int>(kMaxStopCode)) {
+    // This is stop(msg).
+    Format(instr, "break, code: 'code");
+    out_buffer_pos_ += SNPrintF(out_buffer_ + out_buffer_pos_,
+                                "\n%p %08lx stop msg: %s",
+                                static_cast<void*>(
+                                    reinterpret_cast<int32_t*>(
+                                        instr + Instruction::kInstrSize)),
+                                reinterpret_cast<uint64_t>(
+                                    *reinterpret_cast<char**>(
+                                        instr + Instruction::kInstrSize)),
+                                *reinterpret_cast<char**>(
+                                    instr + Instruction::kInstrSize));
+    // Size 3: the break_ instr, plus embedded 64-bit char pointer.
+ return 3 * Instruction::kInstrSize; + } else { + Format(instr, "break, code: 'code"); + return Instruction::kInstrSize; + } +} + + +int Decoder::DecodeTypeRegister(Instruction* instr) { + switch (instr->OpcodeFieldRaw()) { + case COP1: // Coprocessor instructions. + switch (instr->RsFieldRaw()) { + case MFC1: + Format(instr, "mfc1 'rt, 'fs"); + break; + case DMFC1: + Format(instr, "dmfc1 'rt, 'fs"); + break; + case MFHC1: + Format(instr, "mfhc1 'rt, 'fs"); + break; + case MTC1: + Format(instr, "mtc1 'rt, 'fs"); + break; + case DMTC1: + Format(instr, "dmtc1 'rt, 'fs"); + break; + // These are called "fs" too, although they are not FPU registers. + case CTC1: + Format(instr, "ctc1 'rt, 'fs"); + break; + case CFC1: + Format(instr, "cfc1 'rt, 'fs"); + break; + case MTHC1: + Format(instr, "mthc1 'rt, 'fs"); + break; + case D: + switch (instr->FunctionFieldRaw()) { + case ADD_D: + Format(instr, "add.d 'fd, 'fs, 'ft"); + break; + case SUB_D: + Format(instr, "sub.d 'fd, 'fs, 'ft"); + break; + case MUL_D: + Format(instr, "mul.d 'fd, 'fs, 'ft"); + break; + case DIV_D: + Format(instr, "div.d 'fd, 'fs, 'ft"); + break; + case ABS_D: + Format(instr, "abs.d 'fd, 'fs"); + break; + case MOV_D: + Format(instr, "mov.d 'fd, 'fs"); + break; + case NEG_D: + Format(instr, "neg.d 'fd, 'fs"); + break; + case SQRT_D: + Format(instr, "sqrt.d 'fd, 'fs"); + break; + case CVT_W_D: + Format(instr, "cvt.w.d 'fd, 'fs"); + break; + case CVT_L_D: + Format(instr, "cvt.l.d 'fd, 'fs"); + break; + case TRUNC_W_D: + Format(instr, "trunc.w.d 'fd, 'fs"); + break; + case TRUNC_L_D: + Format(instr, "trunc.l.d 'fd, 'fs"); + break; + case ROUND_W_D: + Format(instr, "round.w.d 'fd, 'fs"); + break; + case ROUND_L_D: + Format(instr, "round.l.d 'fd, 'fs"); + break; + case FLOOR_W_D: + Format(instr, "floor.w.d 'fd, 'fs"); + break; + case FLOOR_L_D: + Format(instr, "floor.l.d 'fd, 'fs"); + break; + case CEIL_W_D: + Format(instr, "ceil.w.d 'fd, 'fs"); + break; + case CEIL_L_D: + Format(instr, "ceil.l.d 'fd, 'fs"); + break; + case CVT_S_D: + Format(instr, "cvt.s.d 'fd, 'fs"); + break; + case C_F_D: + Format(instr, "c.f.d 'fs, 'ft, 'Cc"); + break; + case C_UN_D: + Format(instr, "c.un.d 'fs, 'ft, 'Cc"); + break; + case C_EQ_D: + Format(instr, "c.eq.d 'fs, 'ft, 'Cc"); + break; + case C_UEQ_D: + Format(instr, "c.ueq.d 'fs, 'ft, 'Cc"); + break; + case C_OLT_D: + Format(instr, "c.olt.d 'fs, 'ft, 'Cc"); + break; + case C_ULT_D: + Format(instr, "c.ult.d 'fs, 'ft, 'Cc"); + break; + case C_OLE_D: + Format(instr, "c.ole.d 'fs, 'ft, 'Cc"); + break; + case C_ULE_D: + Format(instr, "c.ule.d 'fs, 'ft, 'Cc"); + break; + default: + Format(instr, "unknown.cop1.d"); + break; + } + break; + case W: + switch (instr->FunctionFieldRaw()) { + case CVT_D_W: // Convert word to double. 
+ Format(instr, "cvt.d.w 'fd, 'fs"); + break; + default: + UNREACHABLE(); + } + break; + case L: + switch (instr->FunctionFieldRaw()) { + case CVT_D_L: + Format(instr, "cvt.d.l 'fd, 'fs"); + break; + case CVT_S_L: + Format(instr, "cvt.s.l 'fd, 'fs"); + break; + case CMP_UN: + Format(instr, "cmp.un.d 'fd, 'fs, 'ft"); + break; + case CMP_EQ: + Format(instr, "cmp.eq.d 'fd, 'fs, 'ft"); + break; + case CMP_UEQ: + Format(instr, "cmp.ueq.d 'fd, 'fs, 'ft"); + break; + case CMP_LT: + Format(instr, "cmp.lt.d 'fd, 'fs, 'ft"); + break; + case CMP_ULT: + Format(instr, "cmp.ult.d 'fd, 'fs, 'ft"); + break; + case CMP_LE: + Format(instr, "cmp.le.d 'fd, 'fs, 'ft"); + break; + case CMP_ULE: + Format(instr, "cmp.ule.d 'fd, 'fs, 'ft"); + break; + case CMP_OR: + Format(instr, "cmp.or.d 'fd, 'fs, 'ft"); + break; + case CMP_UNE: + Format(instr, "cmp.une.d 'fd, 'fs, 'ft"); + break; + case CMP_NE: + Format(instr, "cmp.ne.d 'fd, 'fs, 'ft"); + break; + default: + UNREACHABLE(); + } + break; + default: + UNREACHABLE(); + } + break; + case COP1X: + switch (instr->FunctionFieldRaw()) { + case MADD_D: + Format(instr, "madd.d 'fd, 'fr, 'fs, 'ft"); + break; + default: + UNREACHABLE(); + } + break; + case SPECIAL: + switch (instr->FunctionFieldRaw()) { + case JR: + Format(instr, "jr 'rs"); + break; + case JALR: + Format(instr, "jalr 'rs"); + break; + case SLL: + if (0x0 == static_cast<int>(instr->InstructionBits())) + Format(instr, "nop"); + else + Format(instr, "sll 'rd, 'rt, 'sa"); + break; + case DSLL: + Format(instr, "dsll 'rd, 'rt, 'sa"); + break; + case D_MUL_MUH: // Equals to DMUL. + if (kArchVariant != kMips64r6) { + Format(instr, "dmult 'rs, 'rt"); + } else { + if (instr->SaValue() == MUL_OP) { + Format(instr, "dmul 'rd, 'rs, 'rt"); + } else { + Format(instr, "dmuh 'rd, 'rs, 'rt"); + } + } + break; + case DSLL32: + Format(instr, "dsll32 'rd, 'rt, 'sa"); + break; + case SRL: + if (instr->RsValue() == 0) { + Format(instr, "srl 'rd, 'rt, 'sa"); + } else { + if (kArchVariant == kMips64r2) { + Format(instr, "rotr 'rd, 'rt, 'sa"); + } else { + Unknown(instr); + } + } + break; + case DSRL: + if (instr->RsValue() == 0) { + Format(instr, "dsrl 'rd, 'rt, 'sa"); + } else { + if (kArchVariant == kMips64r2) { + Format(instr, "drotr 'rd, 'rt, 'sa"); + } else { + Unknown(instr); + } + } + break; + case DSRL32: + Format(instr, "dsrl32 'rd, 'rt, 'sa"); + break; + case SRA: + Format(instr, "sra 'rd, 'rt, 'sa"); + break; + case DSRA: + Format(instr, "dsra 'rd, 'rt, 'sa"); + break; + case DSRA32: + Format(instr, "dsra32 'rd, 'rt, 'sa"); + break; + case SLLV: + Format(instr, "sllv 'rd, 'rt, 'rs"); + break; + case DSLLV: + Format(instr, "dsllv 'rd, 'rt, 'rs"); + break; + case SRLV: + if (instr->SaValue() == 0) { + Format(instr, "srlv 'rd, 'rt, 'rs"); + } else { + if (kArchVariant == kMips64r2) { + Format(instr, "rotrv 'rd, 'rt, 'rs"); + } else { + Unknown(instr); + } + } + break; + case DSRLV: + if (instr->SaValue() == 0) { + Format(instr, "dsrlv 'rd, 'rt, 'rs"); + } else { + if (kArchVariant == kMips64r2) { + Format(instr, "drotrv 'rd, 'rt, 'rs"); + } else { + Unknown(instr); + } + } + break; + case SRAV: + Format(instr, "srav 'rd, 'rt, 'rs"); + break; + case DSRAV: + Format(instr, "dsrav 'rd, 'rt, 'rs"); + break; + case MFHI: + if (instr->Bits(25, 16) == 0) { + Format(instr, "mfhi 'rd"); + } else { + if ((instr->FunctionFieldRaw() == CLZ_R6) + && (instr->FdValue() == 1)) { + Format(instr, "clz 'rd, 'rs"); + } else if ((instr->FunctionFieldRaw() == CLO_R6) + && (instr->FdValue() == 1)) { + Format(instr, "clo 'rd, 'rs"); + } + } + 
break; + case MFLO: + Format(instr, "mflo 'rd"); + break; + case D_MUL_MUH_U: // Equals to DMULTU. + if (kArchVariant != kMips64r6) { + Format(instr, "dmultu 'rs, 'rt"); + } else { + if (instr->SaValue() == MUL_OP) { + Format(instr, "dmulu 'rd, 'rs, 'rt"); + } else { + Format(instr, "dmuhu 'rd, 'rs, 'rt"); + } + } + break; + case MULT: // @Mips64r6 == MUL_MUH. + if (kArchVariant != kMips64r6) { + Format(instr, "mult 'rs, 'rt"); + } else { + if (instr->SaValue() == MUL_OP) { + Format(instr, "mul 'rd, 'rs, 'rt"); + } else { + Format(instr, "muh 'rd, 'rs, 'rt"); + } + } + break; + case MULTU: // @Mips64r6 == MUL_MUH_U. + if (kArchVariant != kMips64r6) { + Format(instr, "multu 'rs, 'rt"); + } else { + if (instr->SaValue() == MUL_OP) { + Format(instr, "mulu 'rd, 'rs, 'rt"); + } else { + Format(instr, "muhu 'rd, 'rs, 'rt"); + } + } + + break; + case DIV: // @Mips64r6 == DIV_MOD. + if (kArchVariant != kMips64r6) { + Format(instr, "div 'rs, 'rt"); + } else { + if (instr->SaValue() == DIV_OP) { + Format(instr, "div 'rd, 'rs, 'rt"); + } else { + Format(instr, "mod 'rd, 'rs, 'rt"); + } + } + break; + case DDIV: // @Mips64r6 == D_DIV_MOD. + if (kArchVariant != kMips64r6) { + Format(instr, "ddiv 'rs, 'rt"); + } else { + if (instr->SaValue() == DIV_OP) { + Format(instr, "ddiv 'rd, 'rs, 'rt"); + } else { + Format(instr, "dmod 'rd, 'rs, 'rt"); + } + } + break; + case DIVU: // @Mips64r6 == DIV_MOD_U. + if (kArchVariant != kMips64r6) { + Format(instr, "divu 'rs, 'rt"); + } else { + if (instr->SaValue() == DIV_OP) { + Format(instr, "divu 'rd, 'rs, 'rt"); + } else { + Format(instr, "modu 'rd, 'rs, 'rt"); + } + } + break; + case DDIVU: // @Mips64r6 == D_DIV_MOD_U. + if (kArchVariant != kMips64r6) { + Format(instr, "ddivu 'rs, 'rt"); + } else { + if (instr->SaValue() == DIV_OP) { + Format(instr, "ddivu 'rd, 'rs, 'rt"); + } else { + Format(instr, "dmodu 'rd, 'rs, 'rt"); + } + } + break; + case ADD: + Format(instr, "add 'rd, 'rs, 'rt"); + break; + case DADD: + Format(instr, "dadd 'rd, 'rs, 'rt"); + break; + case ADDU: + Format(instr, "addu 'rd, 'rs, 'rt"); + break; + case DADDU: + Format(instr, "daddu 'rd, 'rs, 'rt"); + break; + case SUB: + Format(instr, "sub 'rd, 'rs, 'rt"); + break; + case DSUB: + Format(instr, "dsub 'rd, 'rs, 'rt"); + break; + case SUBU: + Format(instr, "subu 'rd, 'rs, 'rt"); + break; + case DSUBU: + Format(instr, "dsubu 'rd, 'rs, 'rt"); + break; + case AND: + Format(instr, "and 'rd, 'rs, 'rt"); + break; + case OR: + if (0 == instr->RsValue()) { + Format(instr, "mov 'rd, 'rt"); + } else if (0 == instr->RtValue()) { + Format(instr, "mov 'rd, 'rs"); + } else { + Format(instr, "or 'rd, 'rs, 'rt"); + } + break; + case XOR: + Format(instr, "xor 'rd, 'rs, 'rt"); + break; + case NOR: + Format(instr, "nor 'rd, 'rs, 'rt"); + break; + case SLT: + Format(instr, "slt 'rd, 'rs, 'rt"); + break; + case SLTU: + Format(instr, "sltu 'rd, 'rs, 'rt"); + break; + case BREAK: + return DecodeBreakInstr(instr); + case TGE: + Format(instr, "tge 'rs, 'rt, code: 'code"); + break; + case TGEU: + Format(instr, "tgeu 'rs, 'rt, code: 'code"); + break; + case TLT: + Format(instr, "tlt 'rs, 'rt, code: 'code"); + break; + case TLTU: + Format(instr, "tltu 'rs, 'rt, code: 'code"); + break; + case TEQ: + Format(instr, "teq 'rs, 'rt, code: 'code"); + break; + case TNE: + Format(instr, "tne 'rs, 'rt, code: 'code"); + break; + case MOVZ: + Format(instr, "movz 'rd, 'rs, 'rt"); + break; + case MOVN: + Format(instr, "movn 'rd, 'rs, 'rt"); + break; + case MOVCI: + if (instr->Bit(16)) { + Format(instr, "movt 'rd, 'rs, 'bc"); + } else { + 
Format(instr, "movf 'rd, 'rs, 'bc"); + } + break; + case SELEQZ_S: + Format(instr, "seleqz 'rd, 'rs, 'rt"); + break; + case SELNEZ_S: + Format(instr, "selnez 'rd, 'rs, 'rt"); + break; + default: + UNREACHABLE(); + } + break; + case SPECIAL2: + switch (instr->FunctionFieldRaw()) { + case MUL: + Format(instr, "mul 'rd, 'rs, 'rt"); + break; + case CLZ: + if (kArchVariant != kMips64r6) { + Format(instr, "clz 'rd, 'rs"); + } + break; + default: + UNREACHABLE(); + } + break; + case SPECIAL3: + switch (instr->FunctionFieldRaw()) { + case INS: { + Format(instr, "ins 'rt, 'rs, 'sa, 'ss2"); + break; + } + case EXT: { + Format(instr, "ext 'rt, 'rs, 'sa, 'ss1"); + break; + } + default: + UNREACHABLE(); + } + break; + default: + UNREACHABLE(); + } + return Instruction::kInstrSize; +} + + +void Decoder::DecodeTypeImmediate(Instruction* instr) { + switch (instr->OpcodeFieldRaw()) { + case COP1: + switch (instr->RsFieldRaw()) { + case BC1: + if (instr->FBtrueValue()) { + Format(instr, "bc1t 'bc, 'imm16u"); + } else { + Format(instr, "bc1f 'bc, 'imm16u"); + } + break; + case BC1EQZ: + Format(instr, "bc1eqz 'ft, 'imm16u"); + break; + case BC1NEZ: + Format(instr, "bc1nez 'ft, 'imm16u"); + break; + case W: // CMP.S instruction. + switch (instr->FunctionValue()) { + case CMP_AF: + Format(instr, "cmp.af.S 'ft, 'fs, 'fd"); + break; + case CMP_UN: + Format(instr, "cmp.un.S 'ft, 'fs, 'fd"); + break; + case CMP_EQ: + Format(instr, "cmp.eq.S 'ft, 'fs, 'fd"); + break; + case CMP_UEQ: + Format(instr, "cmp.ueq.S 'ft, 'fs, 'fd"); + break; + case CMP_LT: + Format(instr, "cmp.lt.S 'ft, 'fs, 'fd"); + break; + case CMP_ULT: + Format(instr, "cmp.ult.S 'ft, 'fs, 'fd"); + break; + case CMP_LE: + Format(instr, "cmp.le.S 'ft, 'fs, 'fd"); + break; + case CMP_ULE: + Format(instr, "cmp.ule.S 'ft, 'fs, 'fd"); + break; + case CMP_OR: + Format(instr, "cmp.or.S 'ft, 'fs, 'fd"); + break; + case CMP_UNE: + Format(instr, "cmp.une.S 'ft, 'fs, 'fd"); + break; + case CMP_NE: + Format(instr, "cmp.ne.S 'ft, 'fs, 'fd"); + break; + default: + UNREACHABLE(); + } + break; + case L: // CMP.D instruction. 
+ switch (instr->FunctionValue()) { + case CMP_AF: + Format(instr, "cmp.af.D 'ft, 'fs, 'fd"); + break; + case CMP_UN: + Format(instr, "cmp.un.D 'ft, 'fs, 'fd"); + break; + case CMP_EQ: + Format(instr, "cmp.eq.D 'ft, 'fs, 'fd"); + break; + case CMP_UEQ: + Format(instr, "cmp.ueq.D 'ft, 'fs, 'fd"); + break; + case CMP_LT: + Format(instr, "cmp.lt.D 'ft, 'fs, 'fd"); + break; + case CMP_ULT: + Format(instr, "cmp.ult.D 'ft, 'fs, 'fd"); + break; + case CMP_LE: + Format(instr, "cmp.le.D 'ft, 'fs, 'fd"); + break; + case CMP_ULE: + Format(instr, "cmp.ule.D 'ft, 'fs, 'fd"); + break; + case CMP_OR: + Format(instr, "cmp.or.D 'ft, 'fs, 'fd"); + break; + case CMP_UNE: + Format(instr, "cmp.une.D 'ft, 'fs, 'fd"); + break; + case CMP_NE: + Format(instr, "cmp.ne.D 'ft, 'fs, 'fd"); + break; + default: + UNREACHABLE(); + } + break; + case S: + switch (instr->FunctionValue()) { + case SEL: + Format(instr, "sel.S 'ft, 'fs, 'fd"); + break; + case SELEQZ_C: + Format(instr, "seleqz.S 'ft, 'fs, 'fd"); + break; + case SELNEZ_C: + Format(instr, "selnez.S 'ft, 'fs, 'fd"); + break; + case MIN: + Format(instr, "min.S 'ft, 'fs, 'fd"); + break; + case MINA: + Format(instr, "mina.S 'ft, 'fs, 'fd"); + break; + case MAX: + Format(instr, "max.S 'ft, 'fs, 'fd"); + break; + case MAXA: + Format(instr, "maxa.S 'ft, 'fs, 'fd"); + break; + default: + UNREACHABLE(); + } + break; + case D: + switch (instr->FunctionValue()) { + case SEL: + Format(instr, "sel.D 'ft, 'fs, 'fd"); + break; + case SELEQZ_C: + Format(instr, "seleqz.D 'ft, 'fs, 'fd"); + break; + case SELNEZ_C: + Format(instr, "selnez.D 'ft, 'fs, 'fd"); + break; + case MIN: + Format(instr, "min.D 'ft, 'fs, 'fd"); + break; + case MINA: + Format(instr, "mina.D 'ft, 'fs, 'fd"); + break; + case MAX: + Format(instr, "max.D 'ft, 'fs, 'fd"); + break; + case MAXA: + Format(instr, "maxa.D 'ft, 'fs, 'fd"); + break; + default: + UNREACHABLE(); + } + break; + default: + UNREACHABLE(); + } + + break; // Case COP1. + // ------------- REGIMM class. + case REGIMM: + switch (instr->RtFieldRaw()) { + case BLTZ: + Format(instr, "bltz 'rs, 'imm16u"); + break; + case BLTZAL: + Format(instr, "bltzal 'rs, 'imm16u"); + break; + case BGEZ: + Format(instr, "bgez 'rs, 'imm16u"); + break; + case BGEZAL: + Format(instr, "bgezal 'rs, 'imm16u"); + break; + case BGEZALL: + Format(instr, "bgezall 'rs, 'imm16u"); + break; + case DAHI: + Format(instr, "dahi 'rs, 'imm16u"); + break; + case DATI: + Format(instr, "dati 'rs, 'imm16u"); + break; + default: + UNREACHABLE(); + } + break; // Case REGIMM. + // ------------- Branch instructions. 
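+    // On MIPS64r6 the BLEZ/BGTZ/BLEZL/BGTZL opcodes are reused for compact
+    // branches (bgeuc, bgezalc, blezc, ...); the rs/rt field patterns below
+    // tell the variants apart.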
+ case BEQ: + Format(instr, "beq 'rs, 'rt, 'imm16u"); + break; + case BNE: + Format(instr, "bne 'rs, 'rt, 'imm16u"); + break; + case BLEZ: + if ((instr->RtFieldRaw() == 0) + && (instr->RsFieldRaw() != 0)) { + Format(instr, "blez 'rs, 'imm16u"); + } else if ((instr->RtFieldRaw() != instr->RsFieldRaw()) + && (instr->RsFieldRaw() != 0) && (instr->RtFieldRaw() != 0)) { + Format(instr, "bgeuc 'rs, 'rt, 'imm16u"); + } else if ((instr->RtFieldRaw() == instr->RsFieldRaw()) + && (instr->RtFieldRaw() != 0)) { + Format(instr, "bgezalc 'rs, 'imm16u"); + } else if ((instr->RsFieldRaw() == 0) + && (instr->RtFieldRaw() != 0)) { + Format(instr, "blezalc 'rs, 'imm16u"); + } else { + UNREACHABLE(); + } + break; + case BGTZ: + if ((instr->RtFieldRaw() == 0) + && (instr->RsFieldRaw() != 0)) { + Format(instr, "bgtz 'rs, 'imm16u"); + } else if ((instr->RtFieldRaw() != instr->RsFieldRaw()) + && (instr->RsFieldRaw() != 0) && (instr->RtFieldRaw() != 0)) { + Format(instr, "bltuc 'rs, 'rt, 'imm16u"); + } else if ((instr->RtFieldRaw() == instr->RsFieldRaw()) + && (instr->RtFieldRaw() != 0)) { + Format(instr, "bltzalc 'rt, 'imm16u"); + } else if ((instr->RsFieldRaw() == 0) + && (instr->RtFieldRaw() != 0)) { + Format(instr, "bgtzalc 'rt, 'imm16u"); + } else { + UNREACHABLE(); + } + break; + case BLEZL: + if ((instr->RtFieldRaw() == instr->RsFieldRaw()) + && (instr->RtFieldRaw() != 0)) { + Format(instr, "bgezc 'rt, 'imm16u"); + } else if ((instr->RtFieldRaw() != instr->RsFieldRaw()) + && (instr->RsFieldRaw() != 0) && (instr->RtFieldRaw() != 0)) { + Format(instr, "bgec 'rs, 'rt, 'imm16u"); + } else if ((instr->RsFieldRaw() == 0) + && (instr->RtFieldRaw() != 0)) { + Format(instr, "blezc 'rt, 'imm16u"); + } else { + UNREACHABLE(); + } + break; + case BGTZL: + if ((instr->RtFieldRaw() == instr->RsFieldRaw()) + && (instr->RtFieldRaw() != 0)) { + Format(instr, "bltzc 'rt, 'imm16u"); + } else if ((instr->RtFieldRaw() != instr->RsFieldRaw()) + && (instr->RsFieldRaw() != 0) && (instr->RtFieldRaw() != 0)) { + Format(instr, "bltc 'rs, 'rt, 'imm16u"); + } else if ((instr->RsFieldRaw() == 0) + && (instr->RtFieldRaw() != 0)) { + Format(instr, "bgtzc 'rt, 'imm16u"); + } else { + UNREACHABLE(); + } + break; + case BEQZC: + if (instr->RsFieldRaw() != 0) { + Format(instr, "beqzc 'rs, 'imm21x"); + } + break; + case BNEZC: + if (instr->RsFieldRaw() != 0) { + Format(instr, "bnezc 'rs, 'imm21x"); + } + break; + // ------------- Arithmetic instructions. + case ADDI: + if (kArchVariant != kMips64r6) { + Format(instr, "addi 'rt, 'rs, 'imm16s"); + } else { + // Check if BOVC or BEQC instruction. + if (instr->RsFieldRaw() >= instr->RtFieldRaw()) { + Format(instr, "bovc 'rs, 'rt, 'imm16s"); + } else if (instr->RsFieldRaw() < instr->RtFieldRaw()) { + Format(instr, "beqc 'rs, 'rt, 'imm16s"); + } else { + UNREACHABLE(); + } + } + break; + case DADDI: + if (kArchVariant != kMips64r6) { + Format(instr, "daddi 'rt, 'rs, 'imm16s"); + } else { + // Check if BNVC or BNEC instruction. 
+        if (instr->RsFieldRaw() >= instr->RtFieldRaw()) {
+          Format(instr, "bnvc 'rs, 'rt, 'imm16s");
+        } else if (instr->RsFieldRaw() < instr->RtFieldRaw()) {
+          Format(instr, "bnec 'rs, 'rt, 'imm16s");
+        } else {
+          UNREACHABLE();
+        }
+      }
+      break;
+    case ADDIU:
+      Format(instr, "addiu 'rt, 'rs, 'imm16s");
+      break;
+    case DADDIU:
+      Format(instr, "daddiu 'rt, 'rs, 'imm16s");
+      break;
+    case SLTI:
+      Format(instr, "slti 'rt, 'rs, 'imm16s");
+      break;
+    case SLTIU:
+      Format(instr, "sltiu 'rt, 'rs, 'imm16u");
+      break;
+    case ANDI:
+      Format(instr, "andi 'rt, 'rs, 'imm16x");
+      break;
+    case ORI:
+      Format(instr, "ori 'rt, 'rs, 'imm16x");
+      break;
+    case XORI:
+      Format(instr, "xori 'rt, 'rs, 'imm16x");
+      break;
+    case LUI:
+      if (kArchVariant != kMips64r6) {
+        Format(instr, "lui 'rt, 'imm16x");
+      } else {
+        if (instr->RsValue() != 0) {
+          Format(instr, "aui 'rt, 'imm16x");
+        } else {
+          Format(instr, "lui 'rt, 'imm16x");
+        }
+      }
+      break;
+    case DAUI:
+      Format(instr, "daui 'rt, 'imm16x");
+      break;
+    // ------------- Memory instructions.
+    case LB:
+      Format(instr, "lb 'rt, 'imm16s('rs)");
+      break;
+    case LH:
+      Format(instr, "lh 'rt, 'imm16s('rs)");
+      break;
+    case LWL:
+      Format(instr, "lwl 'rt, 'imm16s('rs)");
+      break;
+    case LDL:
+      Format(instr, "ldl 'rt, 'imm16s('rs)");
+      break;
+    case LW:
+      Format(instr, "lw 'rt, 'imm16s('rs)");
+      break;
+    case LWU:
+      Format(instr, "lwu 'rt, 'imm16s('rs)");
+      break;
+    case LD:
+      Format(instr, "ld 'rt, 'imm16s('rs)");
+      break;
+    case LBU:
+      Format(instr, "lbu 'rt, 'imm16s('rs)");
+      break;
+    case LHU:
+      Format(instr, "lhu 'rt, 'imm16s('rs)");
+      break;
+    case LWR:
+      Format(instr, "lwr 'rt, 'imm16s('rs)");
+      break;
+    case LDR:
+      Format(instr, "ldr 'rt, 'imm16s('rs)");
+      break;
+    case PREF:
+      Format(instr, "pref 'rt, 'imm16s('rs)");
+      break;
+    case SB:
+      Format(instr, "sb 'rt, 'imm16s('rs)");
+      break;
+    case SH:
+      Format(instr, "sh 'rt, 'imm16s('rs)");
+      break;
+    case SWL:
+      Format(instr, "swl 'rt, 'imm16s('rs)");
+      break;
+    case SW:
+      Format(instr, "sw 'rt, 'imm16s('rs)");
+      break;
+    case SD:
+      Format(instr, "sd 'rt, 'imm16s('rs)");
+      break;
+    case SWR:
+      Format(instr, "swr 'rt, 'imm16s('rs)");
+      break;
+    case LWC1:
+      Format(instr, "lwc1 'ft, 'imm16s('rs)");
+      break;
+    case LDC1:
+      Format(instr, "ldc1 'ft, 'imm16s('rs)");
+      break;
+    case SWC1:
+      Format(instr, "swc1 'ft, 'imm16s('rs)");
+      break;
+    case SDC1:
+      Format(instr, "sdc1 'ft, 'imm16s('rs)");
+      break;
+    default:
+      printf("a 0x%x \n", instr->OpcodeFieldRaw());
+      UNREACHABLE();
+      break;
+  }
+}
+
+
+void Decoder::DecodeTypeJump(Instruction* instr) {
+  switch (instr->OpcodeFieldRaw()) {
+    case J:
+      Format(instr, "j 'imm26x");
+      break;
+    case JAL:
+      Format(instr, "jal 'imm26x");
+      break;
+    default:
+      UNREACHABLE();
+  }
+}
+
+
+// Disassemble the instruction at *instr_ptr into the output buffer.
+// All instructions are one word long, except for the simulator
+// pseudo-instruction stop(msg). For that one special case, we return a
+// size larger than one kInstrSize.
+int Decoder::InstructionDecode(byte* instr_ptr) {
+  Instruction* instr = Instruction::At(instr_ptr);
+  // Print raw instruction bytes.
+ out_buffer_pos_ += SNPrintF(out_buffer_ + out_buffer_pos_, + "%08x ", + instr->InstructionBits()); + switch (instr->InstructionType()) { + case Instruction::kRegisterType: { + return DecodeTypeRegister(instr); + } + case Instruction::kImmediateType: { + DecodeTypeImmediate(instr); + break; + } + case Instruction::kJumpType: { + DecodeTypeJump(instr); + break; + } + default: { + Format(instr, "UNSUPPORTED"); + UNSUPPORTED_MIPS(); + } + } + return Instruction::kInstrSize; +} + + +} } // namespace v8::internal + + + +//------------------------------------------------------------------------------ + +namespace disasm { + +const char* NameConverter::NameOfAddress(byte* addr) const { + v8::internal::SNPrintF(tmp_buffer_, "%p", addr); + return tmp_buffer_.start(); +} + + +const char* NameConverter::NameOfConstant(byte* addr) const { + return NameOfAddress(addr); +} + + +const char* NameConverter::NameOfCPURegister(int reg) const { + return v8::internal::Registers::Name(reg); +} + + +const char* NameConverter::NameOfXMMRegister(int reg) const { + return v8::internal::FPURegisters::Name(reg); +} + + +const char* NameConverter::NameOfByteCPURegister(int reg) const { + UNREACHABLE(); // MIPS does not have the concept of a byte register. + return "nobytereg"; +} + + +const char* NameConverter::NameInCode(byte* addr) const { + // The default name converter is called for unknown code. So we will not try + // to access any memory. + return ""; +} + + +//------------------------------------------------------------------------------ + +Disassembler::Disassembler(const NameConverter& converter) + : converter_(converter) {} + + +Disassembler::~Disassembler() {} + + +int Disassembler::InstructionDecode(v8::internal::Vector<char> buffer, + byte* instruction) { + v8::internal::Decoder d(converter_, buffer); + return d.InstructionDecode(instruction); +} + + +// The MIPS assembler does not currently use constant pools. +int Disassembler::ConstantPoolSizeAt(byte* instruction) { + return -1; +} + + +void Disassembler::Disassemble(FILE* f, byte* begin, byte* end) { + NameConverter converter; + Disassembler d(converter); + for (byte* pc = begin; pc < end;) { + v8::internal::EmbeddedVector<char, 128> buffer; + buffer[0] = '\0'; + byte* prev_pc = pc; + pc += d.InstructionDecode(buffer, pc); + v8::internal::PrintF(f, "%p %08x %s\n", + prev_pc, *reinterpret_cast<int32_t*>(prev_pc), buffer.start()); + } +} + + +#undef UNSUPPORTED + +} // namespace disasm + +#endif // V8_TARGET_ARCH_MIPS64 diff --git a/deps/v8/src/mips64/frames-mips64.cc b/deps/v8/src/mips64/frames-mips64.cc new file mode 100644 index 00000000000..2991248ccf9 --- /dev/null +++ b/deps/v8/src/mips64/frames-mips64.cc @@ -0,0 +1,43 @@ +// Copyright 2011 the V8 project authors. All rights reserved. +// Use of this source code is governed by a BSD-style license that can be +// found in the LICENSE file. 
+
+
+#include "src/v8.h"
+
+#if V8_TARGET_ARCH_MIPS64
+
+#include "src/assembler.h"
+#include "src/frames.h"
+#include "src/mips64/assembler-mips64-inl.h"
+#include "src/mips64/assembler-mips64.h"
+
+namespace v8 {
+namespace internal {
+
+
+Register JavaScriptFrame::fp_register() { return v8::internal::fp; }
+Register JavaScriptFrame::context_register() { return cp; }
+Register JavaScriptFrame::constant_pool_pointer_register() {
+  UNREACHABLE();
+  return no_reg;
+}
+
+
+Register StubFailureTrampolineFrame::fp_register() { return v8::internal::fp; }
+Register StubFailureTrampolineFrame::context_register() { return cp; }
+Register StubFailureTrampolineFrame::constant_pool_pointer_register() {
+  UNREACHABLE();
+  return no_reg;
+}
+
+
+Object*& ExitFrame::constant_pool_slot() const {
+  UNREACHABLE();
+  return Memory::Object_at(NULL);
+}
+
+
+} }  // namespace v8::internal
+
+#endif  // V8_TARGET_ARCH_MIPS64
diff --git a/deps/v8/src/mips64/frames-mips64.h b/deps/v8/src/mips64/frames-mips64.h
new file mode 100644
index 00000000000..eaf29c89bb5
--- /dev/null
+++ b/deps/v8/src/mips64/frames-mips64.h
@@ -0,0 +1,215 @@
+// Copyright 2011 the V8 project authors. All rights reserved.
+// Use of this source code is governed by a BSD-style license that can be
+// found in the LICENSE file.
+
+
+
+#ifndef V8_MIPS_FRAMES_MIPS_H_
+#define V8_MIPS_FRAMES_MIPS_H_
+
+namespace v8 {
+namespace internal {
+
+// Register lists.
+// Note that the bit values must match those used in actual instruction
+// encoding.
+const int kNumRegs = 32;
+
+const RegList kJSCallerSaved =
+  1 << 2  |  // v0
+  1 << 3  |  // v1
+  1 << 4  |  // a0
+  1 << 5  |  // a1
+  1 << 6  |  // a2
+  1 << 7  |  // a3
+  1 << 8  |  // a4
+  1 << 9  |  // a5
+  1 << 10 |  // a6
+  1 << 11 |  // a7
+  1 << 12 |  // t0
+  1 << 13 |  // t1
+  1 << 14 |  // t2
+  1 << 15;   // t3
+
+const int kNumJSCallerSaved = 14;
+
+
+// Return the code of the n-th caller-saved register available to JavaScript,
+// e.g. JSCallerSavedCode(0) returns a0.code() == 4.
+int JSCallerSavedCode(int n);
+
+
+// Callee-saved registers preserved when switching from C to JavaScript.
+const RegList kCalleeSaved =
+  1 << 16 |  // s0
+  1 << 17 |  // s1
+  1 << 18 |  // s2
+  1 << 19 |  // s3
+  1 << 20 |  // s4
+  1 << 21 |  // s5
+  1 << 22 |  // s6 (roots in JavaScript code)
+  1 << 23 |  // s7 (cp in JavaScript code)
+  1 << 30;   // fp/s8
+
+const int kNumCalleeSaved = 9;
+
+const RegList kCalleeSavedFPU =
+  1 << 20 |  // f20
+  1 << 22 |  // f22
+  1 << 24 |  // f24
+  1 << 26 |  // f26
+  1 << 28 |  // f28
+  1 << 30;   // f30
+
+const int kNumCalleeSavedFPU = 6;
+
+const RegList kCallerSavedFPU =
+  1 << 0  |  // f0
+  1 << 2  |  // f2
+  1 << 4  |  // f4
+  1 << 6  |  // f6
+  1 << 8  |  // f8
+  1 << 10 |  // f10
+  1 << 12 |  // f12
+  1 << 14 |  // f14
+  1 << 16 |  // f16
+  1 << 18;   // f18
+
+
+// Number of registers for which space is reserved in safepoints. Must be a
+// multiple of 8.
+const int kNumSafepointRegisters = 24;
+
+// Define the list of registers actually saved at safepoints.
+// Note that the number of saved registers may be smaller than the reserved
+// space, i.e. kNumSafepointSavedRegisters <= kNumSafepointRegisters.
+const RegList kSafepointSavedRegisters = kJSCallerSaved | kCalleeSaved;
+const int kNumSafepointSavedRegisters =
+    kNumJSCallerSaved + kNumCalleeSaved;
+
+const int kUndefIndex = -1;
+// Map with indexes on stack that correspond to codes of saved registers.
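+// Entries with kUndefIndex (zero_reg, at, t8, t9, k0, k1, gp, sp, ra) are
+// registers that are never saved at safepoints and so have no stack slot.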
+const int kSafepointRegisterStackIndexMap[kNumRegs] = { + kUndefIndex, // zero_reg + kUndefIndex, // at + 0, // v0 + 1, // v1 + 2, // a0 + 3, // a1 + 4, // a2 + 5, // a3 + 6, // a4 + 7, // a5 + 8, // a6 + 9, // a7 + 10, // t0 + 11, // t1 + 12, // t2 + 13, // t3 + 14, // s0 + 15, // s1 + 16, // s2 + 17, // s3 + 18, // s4 + 19, // s5 + 20, // s6 + 21, // s7 + kUndefIndex, // t8 + kUndefIndex, // t9 + kUndefIndex, // k0 + kUndefIndex, // k1 + kUndefIndex, // gp + kUndefIndex, // sp + 22, // fp + kUndefIndex +}; + + +// ---------------------------------------------------- + +class EntryFrameConstants : public AllStatic { + public: + static const int kCallerFPOffset = + -(StandardFrameConstants::kFixedFrameSizeFromFp + kPointerSize); +}; + + +class ExitFrameConstants : public AllStatic { + public: + static const int kFrameSize = 2 * kPointerSize; + + static const int kCodeOffset = -2 * kPointerSize; + static const int kSPOffset = -1 * kPointerSize; + + // The caller fields are below the frame pointer on the stack. + static const int kCallerFPOffset = +0 * kPointerSize; + // The calling JS function is between FP and PC. + static const int kCallerPCOffset = +1 * kPointerSize; + + // MIPS-specific: a pointer to the old sp to avoid unnecessary calculations. + static const int kCallerSPOffset = +2 * kPointerSize; + + // FP-relative displacement of the caller's SP. + static const int kCallerSPDisplacement = +2 * kPointerSize; + + static const int kConstantPoolOffset = 0; // Not used. +}; + + +class JavaScriptFrameConstants : public AllStatic { + public: + // FP-relative. + static const int kLocal0Offset = StandardFrameConstants::kExpressionsOffset; + static const int kLastParameterOffset = +2 * kPointerSize; + static const int kFunctionOffset = StandardFrameConstants::kMarkerOffset; + + // Caller SP-relative. + static const int kParam0Offset = -2 * kPointerSize; + static const int kReceiverOffset = -1 * kPointerSize; +}; + + +class ArgumentsAdaptorFrameConstants : public AllStatic { + public: + // FP-relative. + static const int kLengthOffset = StandardFrameConstants::kExpressionsOffset; + + static const int kFrameSize = + StandardFrameConstants::kFixedFrameSize + kPointerSize; +}; + + +class ConstructFrameConstants : public AllStatic { + public: + // FP-relative. + static const int kImplicitReceiverOffset = -6 * kPointerSize; + static const int kConstructorOffset = -5 * kPointerSize; + static const int kLengthOffset = -4 * kPointerSize; + static const int kCodeOffset = StandardFrameConstants::kExpressionsOffset; + + static const int kFrameSize = + StandardFrameConstants::kFixedFrameSize + 4 * kPointerSize; +}; + + +class InternalFrameConstants : public AllStatic { + public: + // FP-relative. + static const int kCodeOffset = StandardFrameConstants::kExpressionsOffset; +}; + + +inline Object* JavaScriptFrame::function_slot_object() const { + const int offset = JavaScriptFrameConstants::kFunctionOffset; + return Memory::Object_at(fp() + offset); +} + + +inline void StackHandler::SetFp(Address slot, Address fp) { + Memory::Address_at(slot) = fp; +} + + +} } // namespace v8::internal + +#endif diff --git a/deps/v8/src/mips64/full-codegen-mips64.cc b/deps/v8/src/mips64/full-codegen-mips64.cc new file mode 100644 index 00000000000..c69ceccec7a --- /dev/null +++ b/deps/v8/src/mips64/full-codegen-mips64.cc @@ -0,0 +1,4888 @@ +// Copyright 2012 the V8 project authors. All rights reserved. +// Use of this source code is governed by a BSD-style license that can be +// found in the LICENSE file. 
+
+#include "src/v8.h"
+
+#if V8_TARGET_ARCH_MIPS64
+
+// Note on Mips implementation:
+//
+// The result_register() for mips is the 'v0' register, which is defined
+// by the ABI to contain function return values. However, the first
+// parameter to a function is defined to be 'a0'. So there are many
+// places where we have to move a previous result in v0 to a0 for the
+// next call: mov(a0, v0). This is not needed on the other architectures.
+
+#include "src/code-stubs.h"
+#include "src/codegen.h"
+#include "src/compiler.h"
+#include "src/debug.h"
+#include "src/full-codegen.h"
+#include "src/isolate-inl.h"
+#include "src/parser.h"
+#include "src/scopes.h"
+#include "src/stub-cache.h"
+
+#include "src/mips64/code-stubs-mips64.h"
+#include "src/mips64/macro-assembler-mips64.h"
+
+namespace v8 {
+namespace internal {
+
+#define __ ACCESS_MASM(masm_)
+
+
+// A patch site is a location in the code that can be patched. This class has
+// a number of methods to emit the code which is patchable and the method
+// EmitPatchInfo to record a marker back to the patchable code. This marker
+// is an andi zero_reg, rx, #yyyy instruction, and rx * 0x0000ffff + yyyy
+// (raw 16 bit immediate value is used) is the delta from the pc to the first
+// instruction of the patchable code.
+// The marker instruction is effectively a NOP (dest is zero_reg) and will
+// never be emitted by normal code.
+class JumpPatchSite BASE_EMBEDDED {
+ public:
+  explicit JumpPatchSite(MacroAssembler* masm) : masm_(masm) {
+#ifdef DEBUG
+    info_emitted_ = false;
+#endif
+  }
+
+  ~JumpPatchSite() {
+    DCHECK(patch_site_.is_bound() == info_emitted_);
+  }
+
+  // When initially emitting this, ensure that a jump is always generated to
+  // skip the inlined smi code.
+  void EmitJumpIfNotSmi(Register reg, Label* target) {
+    DCHECK(!patch_site_.is_bound() && !info_emitted_);
+    Assembler::BlockTrampolinePoolScope block_trampoline_pool(masm_);
+    __ bind(&patch_site_);
+    __ andi(at, reg, 0);
+    // Always taken before patched.
+    __ BranchShort(target, eq, at, Operand(zero_reg));
+  }
+
+  // When initially emitting this, ensure that a jump is never generated to
+  // skip the inlined smi code.
+  void EmitJumpIfSmi(Register reg, Label* target) {
+    Assembler::BlockTrampolinePoolScope block_trampoline_pool(masm_);
+    DCHECK(!patch_site_.is_bound() && !info_emitted_);
+    __ bind(&patch_site_);
+    __ andi(at, reg, 0);
+    // Never taken before patched.
+    __ BranchShort(target, ne, at, Operand(zero_reg));
+  }
+
+  void EmitPatchInfo() {
+    if (patch_site_.is_bound()) {
+      int delta_to_patch_site = masm_->InstructionsGeneratedSince(&patch_site_);
+      Register reg = Register::from_code(delta_to_patch_site / kImm16Mask);
+      __ andi(zero_reg, reg, delta_to_patch_site % kImm16Mask);
+#ifdef DEBUG
+      info_emitted_ = true;
+#endif
+    } else {
+      __ nop();  // Signals no inlined code.
+    }
+  }
+
+ private:
+  MacroAssembler* masm_;
+  Label patch_site_;
+#ifdef DEBUG
+  bool info_emitted_;
+#endif
+};
+
+
+// Generate code for a JS function. On entry to the function the receiver
+// and arguments have been pushed on the stack left to right. The actual
+// argument count matches the formal parameter count expected by the
+// function.
+//
+// The live registers are:
+// o a1: the JS function object being called (i.e. ourselves)
+// o cp: our context
+// o fp: our caller's frame pointer
+// o sp: stack pointer
+// o ra: return address
+//
+// The function builds a JS frame. Please see JavaScriptFrameConstants in
+// frames-mips64.h for its layout.
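+//
+// A rough sketch of that layout (derived from JavaScriptFrameConstants and
+// ExitFrameConstants in frames-mips64.h; see frames.h for the authoritative
+// StandardFrameConstants; n is the number of parameters):
+//
+//   fp + (n + 2) * kPointerSize : receiver
+//   fp + (n + 1) * kPointerSize : parameter 0
+//   ...
+//   fp + 2 * kPointerSize       : parameter n - 1 (kCallerSPOffset)
+//   fp + 1 * kPointerSize       : return address (ra)
+//   fp + 0                      : caller's fp
+//   fp - 1 * kPointerSize       : context (cp)
+//   fp - 2 * kPointerSize       : the JS function (kFunctionOffset)
+//   fp + kLocal0Offset, down    : stack-allocated locals, if any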
+void FullCodeGenerator::Generate() { + CompilationInfo* info = info_; + handler_table_ = + isolate()->factory()->NewFixedArray(function()->handler_count(), TENURED); + + profiling_counter_ = isolate()->factory()->NewCell( + Handle<Smi>(Smi::FromInt(FLAG_interrupt_budget), isolate())); + SetFunctionPosition(function()); + Comment cmnt(masm_, "[ function compiled by full code generator"); + + ProfileEntryHookStub::MaybeCallEntryHook(masm_); + +#ifdef DEBUG + if (strlen(FLAG_stop_at) > 0 && + info->function()->name()->IsUtf8EqualTo(CStrVector(FLAG_stop_at))) { + __ stop("stop-at"); + } +#endif + + // Sloppy mode functions and builtins need to replace the receiver with the + // global proxy when called as functions (without an explicit receiver + // object). + if (info->strict_mode() == SLOPPY && !info->is_native()) { + Label ok; + int receiver_offset = info->scope()->num_parameters() * kPointerSize; + __ ld(at, MemOperand(sp, receiver_offset)); + __ LoadRoot(a2, Heap::kUndefinedValueRootIndex); + __ Branch(&ok, ne, a2, Operand(at)); + + __ ld(a2, GlobalObjectOperand()); + __ ld(a2, FieldMemOperand(a2, GlobalObject::kGlobalProxyOffset)); + + __ sd(a2, MemOperand(sp, receiver_offset)); + __ bind(&ok); + } + // Open a frame scope to indicate that there is a frame on the stack. The + // MANUAL indicates that the scope shouldn't actually generate code to set up + // the frame (that is done below). + FrameScope frame_scope(masm_, StackFrame::MANUAL); + info->set_prologue_offset(masm_->pc_offset()); + __ Prologue(info->IsCodePreAgingActive()); + info->AddNoFrameRange(0, masm_->pc_offset()); + + { Comment cmnt(masm_, "[ Allocate locals"); + int locals_count = info->scope()->num_stack_slots(); + // Generators allocate locals, if any, in context slots. + DCHECK(!info->function()->is_generator() || locals_count == 0); + if (locals_count > 0) { + if (locals_count >= 128) { + Label ok; + __ Dsubu(t1, sp, Operand(locals_count * kPointerSize)); + __ LoadRoot(a2, Heap::kRealStackLimitRootIndex); + __ Branch(&ok, hs, t1, Operand(a2)); + __ InvokeBuiltin(Builtins::STACK_OVERFLOW, CALL_FUNCTION); + __ bind(&ok); + } + __ LoadRoot(t1, Heap::kUndefinedValueRootIndex); + int kMaxPushes = FLAG_optimize_for_size ? 4 : 32; + if (locals_count >= kMaxPushes) { + int loop_iterations = locals_count / kMaxPushes; + __ li(a2, Operand(loop_iterations)); + Label loop_header; + __ bind(&loop_header); + // Do pushes. + __ Dsubu(sp, sp, Operand(kMaxPushes * kPointerSize)); + for (int i = 0; i < kMaxPushes; i++) { + __ sd(t1, MemOperand(sp, i * kPointerSize)); + } + // Continue loop if not done. + __ Dsubu(a2, a2, Operand(1)); + __ Branch(&loop_header, ne, a2, Operand(zero_reg)); + } + int remaining = locals_count % kMaxPushes; + // Emit the remaining pushes. + __ Dsubu(sp, sp, Operand(remaining * kPointerSize)); + for (int i = 0; i < remaining; i++) { + __ sd(t1, MemOperand(sp, i * kPointerSize)); + } + } + } + + bool function_in_register = true; + + // Possibly allocate a local context. + int heap_slots = info->scope()->num_heap_slots() - Context::MIN_CONTEXT_SLOTS; + if (heap_slots > 0) { + Comment cmnt(masm_, "[ Allocate context"); + // Argument to NewContext is the function, which is still in a1. 
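+    // Three allocation paths follow: global scopes (under harmony scoping)
+    // call Runtime::kNewGlobalContext, contexts of at most
+    // FastNewContextStub::kMaximumSlots slots use the stub, and everything
+    // else calls Runtime::kNewFunctionContext. The stub path can skip the
+    // write barrier when copying parameters below because it allocates the
+    // context in new space, and stores into a new-space object never need
+    // remembered-set entries.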
+    bool need_write_barrier = true;
+    if (FLAG_harmony_scoping && info->scope()->is_global_scope()) {
+      __ push(a1);
+      __ Push(info->scope()->GetScopeInfo());
+      __ CallRuntime(Runtime::kNewGlobalContext, 2);
+    } else if (heap_slots <= FastNewContextStub::kMaximumSlots) {
+      FastNewContextStub stub(isolate(), heap_slots);
+      __ CallStub(&stub);
+      // Result of FastNewContextStub is always in new space.
+      need_write_barrier = false;
+    } else {
+      __ push(a1);
+      __ CallRuntime(Runtime::kNewFunctionContext, 1);
+    }
+    function_in_register = false;
+    // Context is returned in v0. It replaces the context passed to us.
+    // It's saved in the stack and kept live in cp.
+    __ mov(cp, v0);
+    __ sd(v0, MemOperand(fp, StandardFrameConstants::kContextOffset));
+    // Copy any necessary parameters into the context.
+    int num_parameters = info->scope()->num_parameters();
+    for (int i = 0; i < num_parameters; i++) {
+      Variable* var = scope()->parameter(i);
+      if (var->IsContextSlot()) {
+        int parameter_offset = StandardFrameConstants::kCallerSPOffset +
+            (num_parameters - 1 - i) * kPointerSize;
+        // Load parameter from stack.
+        __ ld(a0, MemOperand(fp, parameter_offset));
+        // Store it in the context.
+        MemOperand target = ContextOperand(cp, var->index());
+        __ sd(a0, target);
+
+        // Update the write barrier.
+        if (need_write_barrier) {
+          __ RecordWriteContextSlot(
+              cp, target.offset(), a0, a3, kRAHasBeenSaved, kDontSaveFPRegs);
+        } else if (FLAG_debug_code) {
+          Label done;
+          __ JumpIfInNewSpace(cp, a0, &done);
+          __ Abort(kExpectedNewSpaceObject);
+          __ bind(&done);
+        }
+      }
+    }
+  }
+  Variable* arguments = scope()->arguments();
+  if (arguments != NULL) {
+    // Function uses arguments object.
+    Comment cmnt(masm_, "[ Allocate arguments object");
+    if (!function_in_register) {
+      // Load this again, if it's used by the local context below.
+      __ ld(a3, MemOperand(fp, JavaScriptFrameConstants::kFunctionOffset));
+    } else {
+      __ mov(a3, a1);
+    }
+    // Receiver is just before the parameters on the caller's stack.
+    int num_parameters = info->scope()->num_parameters();
+    int offset = num_parameters * kPointerSize;
+    __ Daddu(a2, fp,
+             Operand(StandardFrameConstants::kCallerSPOffset + offset));
+    __ li(a1, Operand(Smi::FromInt(num_parameters)));
+    __ Push(a3, a2, a1);
+
+    // Arguments to ArgumentsAccessStub:
+    //   function, receiver address, parameter count.
+    // The stub will rewrite receiver and parameter count if the previous
+    // stack frame was an arguments adapter frame.
+    ArgumentsAccessStub::Type type;
+    if (strict_mode() == STRICT) {
+      type = ArgumentsAccessStub::NEW_STRICT;
+    } else if (function()->has_duplicate_parameters()) {
+      type = ArgumentsAccessStub::NEW_SLOPPY_SLOW;
+    } else {
+      type = ArgumentsAccessStub::NEW_SLOPPY_FAST;
+    }
+    ArgumentsAccessStub stub(isolate(), type);
+    __ CallStub(&stub);
+
+    SetVar(arguments, v0, a1, a2);
+  }
+
+  if (FLAG_trace) {
+    __ CallRuntime(Runtime::kTraceEnter, 0);
+  }
+  // Visit the declarations and body unless there is an illegal
+  // redeclaration.
+  if (scope()->HasIllegalRedeclaration()) {
+    Comment cmnt(masm_, "[ Declarations");
+    scope()->VisitIllegalRedeclaration(this);
+
+  } else {
+    PrepareForBailoutForId(BailoutId::FunctionEntry(), NO_REGISTERS);
+    { Comment cmnt(masm_, "[ Declarations");
+      // For named function expressions, declare the function name as a
+      // constant.
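+      // For example, in (function f() { return f; }) the name 'f' is only
+      // bound inside the function itself and behaves like a constant there,
+      // which is why the DCHECK below expects a CONST or CONST_LEGACY mode.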
+ if (scope()->is_function_scope() && scope()->function() != NULL) { + VariableDeclaration* function = scope()->function(); + DCHECK(function->proxy()->var()->mode() == CONST || + function->proxy()->var()->mode() == CONST_LEGACY); + DCHECK(function->proxy()->var()->location() != Variable::UNALLOCATED); + VisitVariableDeclaration(function); + } + VisitDeclarations(scope()->declarations()); + } + { Comment cmnt(masm_, "[ Stack check"); + PrepareForBailoutForId(BailoutId::Declarations(), NO_REGISTERS); + Label ok; + __ LoadRoot(at, Heap::kStackLimitRootIndex); + __ Branch(&ok, hs, sp, Operand(at)); + Handle<Code> stack_check = isolate()->builtins()->StackCheck(); + PredictableCodeSizeScope predictable(masm_, + masm_->CallSize(stack_check, RelocInfo::CODE_TARGET)); + __ Call(stack_check, RelocInfo::CODE_TARGET); + __ bind(&ok); + } + + { Comment cmnt(masm_, "[ Body"); + DCHECK(loop_depth() == 0); + + VisitStatements(function()->body()); + + DCHECK(loop_depth() == 0); + } + } + + // Always emit a 'return undefined' in case control fell off the end of + // the body. + { Comment cmnt(masm_, "[ return <undefined>;"); + __ LoadRoot(v0, Heap::kUndefinedValueRootIndex); + } + EmitReturnSequence(); +} + + +void FullCodeGenerator::ClearAccumulator() { + DCHECK(Smi::FromInt(0) == 0); + __ mov(v0, zero_reg); +} + + +void FullCodeGenerator::EmitProfilingCounterDecrement(int delta) { + __ li(a2, Operand(profiling_counter_)); + __ ld(a3, FieldMemOperand(a2, Cell::kValueOffset)); + __ Dsubu(a3, a3, Operand(Smi::FromInt(delta))); + __ sd(a3, FieldMemOperand(a2, Cell::kValueOffset)); +} + + +void FullCodeGenerator::EmitProfilingCounterReset() { + int reset_value = FLAG_interrupt_budget; + if (info_->is_debug()) { + // Detect debug break requests as soon as possible. + reset_value = FLAG_interrupt_budget >> 4; + } + __ li(a2, Operand(profiling_counter_)); + __ li(a3, Operand(Smi::FromInt(reset_value))); + __ sd(a3, FieldMemOperand(a2, Cell::kValueOffset)); +} + + +void FullCodeGenerator::EmitBackEdgeBookkeeping(IterationStatement* stmt, + Label* back_edge_target) { + // The generated code is used in Deoptimizer::PatchStackCheckCodeAt so we need + // to make sure it is constant. Branch may emit a skip-or-jump sequence + // instead of the normal Branch. It seems that the "skip" part of that + // sequence is about as long as this Branch would be so it is safe to ignore + // that. + Assembler::BlockTrampolinePoolScope block_trampoline_pool(masm_); + Comment cmnt(masm_, "[ Back edge bookkeeping"); + Label ok; + DCHECK(back_edge_target->is_bound()); + int distance = masm_->SizeOfCodeGeneratedSince(back_edge_target); + int weight = Min(kMaxBackEdgeWeight, + Max(1, distance / kCodeSizeMultiplier)); + EmitProfilingCounterDecrement(weight); + __ slt(at, a3, zero_reg); + __ beq(at, zero_reg, &ok); + // Call will emit a li t9 first, so it is safe to use the delay slot. + __ Call(isolate()->builtins()->InterruptCheck(), RelocInfo::CODE_TARGET); + // Record a mapping of this PC offset to the OSR id. This is used to find + // the AST id from the unoptimized code in order to use it as a key into + // the deoptimization input data found in the optimized code. + RecordBackEdge(stmt->OsrEntryId()); + EmitProfilingCounterReset(); + + __ bind(&ok); + PrepareForBailoutForId(stmt->EntryId(), NO_REGISTERS); + // Record a mapping of the OSR id to this PC. This is used if the OSR + // entry becomes the target of a bailout. We don't expect it to be, but + // we want it to work if it is. 
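+  // (A rough model of the bookkeeping above: each back edge decrements the
+  // profiling counter by weight = clamp(distance / kCodeSizeMultiplier, 1,
+  // kMaxBackEdgeWeight), so a loop with per-iteration weight w calls
+  // InterruptCheck roughly every FLAG_interrupt_budget / w iterations;
+  // larger loop bodies thus reach the interrupt, and a possible OSR
+  // attempt, after fewer iterations.)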
+  PrepareForBailoutForId(stmt->OsrEntryId(), NO_REGISTERS);
+}
+
+
+void FullCodeGenerator::EmitReturnSequence() {
+  Comment cmnt(masm_, "[ Return sequence");
+  if (return_label_.is_bound()) {
+    __ Branch(&return_label_);
+  } else {
+    __ bind(&return_label_);
+    if (FLAG_trace) {
+      // Push the return value on the stack as the parameter.
+      // Runtime::TraceExit returns its parameter in v0.
+      __ push(v0);
+      __ CallRuntime(Runtime::kTraceExit, 1);
+    }
+    // Pretend that the exit is a backwards jump to the entry.
+    int weight = 1;
+    if (info_->ShouldSelfOptimize()) {
+      weight = FLAG_interrupt_budget / FLAG_self_opt_count;
+    } else {
+      int distance = masm_->pc_offset();
+      weight = Min(kMaxBackEdgeWeight,
+                   Max(1, distance / kCodeSizeMultiplier));
+    }
+    EmitProfilingCounterDecrement(weight);
+    Label ok;
+    __ Branch(&ok, ge, a3, Operand(zero_reg));
+    __ push(v0);
+    __ Call(isolate()->builtins()->InterruptCheck(),
+            RelocInfo::CODE_TARGET);
+    __ pop(v0);
+    EmitProfilingCounterReset();
+    __ bind(&ok);
+
+#ifdef DEBUG
+    // Add a label for checking the size of the code used for returning.
+    Label check_exit_codesize;
+    masm_->bind(&check_exit_codesize);
+#endif
+    // Make sure that the constant pool is not emitted inside the return
+    // sequence.
+    { Assembler::BlockTrampolinePoolScope block_trampoline_pool(masm_);
+      // Here we use masm_-> instead of the __ macro to keep the code
+      // coverage tool from instrumenting, as we rely on the code size here.
+      int32_t sp_delta = (info_->scope()->num_parameters() + 1) * kPointerSize;
+      CodeGenerator::RecordPositions(masm_, function()->end_position() - 1);
+      __ RecordJSReturn();
+      masm_->mov(sp, fp);
+      int no_frame_start = masm_->pc_offset();
+      masm_->MultiPop(static_cast<RegList>(fp.bit() | ra.bit()));
+      masm_->Daddu(sp, sp, Operand(sp_delta));
+      masm_->Jump(ra);
+      info_->AddNoFrameRange(no_frame_start, masm_->pc_offset());
+    }
+
+#ifdef DEBUG
+    // Check that the size of the code used for returning is large enough
+    // for the debugger's requirements.
+    DCHECK(Assembler::kJSReturnSequenceInstructions <=
+           masm_->InstructionsGeneratedSince(&check_exit_codesize));
+#endif
+  }
+}
+
+
+void FullCodeGenerator::EffectContext::Plug(Variable* var) const {
+  DCHECK(var->IsStackAllocated() || var->IsContextSlot());
+}
+
+
+void FullCodeGenerator::AccumulatorValueContext::Plug(Variable* var) const {
+  DCHECK(var->IsStackAllocated() || var->IsContextSlot());
+  codegen()->GetVar(result_register(), var);
+}
+
+
+void FullCodeGenerator::StackValueContext::Plug(Variable* var) const {
+  DCHECK(var->IsStackAllocated() || var->IsContextSlot());
+  codegen()->GetVar(result_register(), var);
+  __ push(result_register());
+}
+
+
+void FullCodeGenerator::TestContext::Plug(Variable* var) const {
+  // For simplicity we always test the accumulator register.
+ codegen()->GetVar(result_register(), var); + codegen()->PrepareForBailoutBeforeSplit(condition(), false, NULL, NULL); + codegen()->DoTest(this); +} + + +void FullCodeGenerator::EffectContext::Plug(Heap::RootListIndex index) const { +} + + +void FullCodeGenerator::AccumulatorValueContext::Plug( + Heap::RootListIndex index) const { + __ LoadRoot(result_register(), index); +} + + +void FullCodeGenerator::StackValueContext::Plug( + Heap::RootListIndex index) const { + __ LoadRoot(result_register(), index); + __ push(result_register()); +} + + +void FullCodeGenerator::TestContext::Plug(Heap::RootListIndex index) const { + codegen()->PrepareForBailoutBeforeSplit(condition(), + true, + true_label_, + false_label_); + if (index == Heap::kUndefinedValueRootIndex || + index == Heap::kNullValueRootIndex || + index == Heap::kFalseValueRootIndex) { + if (false_label_ != fall_through_) __ Branch(false_label_); + } else if (index == Heap::kTrueValueRootIndex) { + if (true_label_ != fall_through_) __ Branch(true_label_); + } else { + __ LoadRoot(result_register(), index); + codegen()->DoTest(this); + } +} + + +void FullCodeGenerator::EffectContext::Plug(Handle<Object> lit) const { +} + + +void FullCodeGenerator::AccumulatorValueContext::Plug( + Handle<Object> lit) const { + __ li(result_register(), Operand(lit)); +} + + +void FullCodeGenerator::StackValueContext::Plug(Handle<Object> lit) const { + // Immediates cannot be pushed directly. + __ li(result_register(), Operand(lit)); + __ push(result_register()); +} + + +void FullCodeGenerator::TestContext::Plug(Handle<Object> lit) const { + codegen()->PrepareForBailoutBeforeSplit(condition(), + true, + true_label_, + false_label_); + DCHECK(!lit->IsUndetectableObject()); // There are no undetectable literals. + if (lit->IsUndefined() || lit->IsNull() || lit->IsFalse()) { + if (false_label_ != fall_through_) __ Branch(false_label_); + } else if (lit->IsTrue() || lit->IsJSObject()) { + if (true_label_ != fall_through_) __ Branch(true_label_); + } else if (lit->IsString()) { + if (String::cast(*lit)->length() == 0) { + if (false_label_ != fall_through_) __ Branch(false_label_); + } else { + if (true_label_ != fall_through_) __ Branch(true_label_); + } + } else if (lit->IsSmi()) { + if (Smi::cast(*lit)->value() == 0) { + if (false_label_ != fall_through_) __ Branch(false_label_); + } else { + if (true_label_ != fall_through_) __ Branch(true_label_); + } + } else { + // For simplicity we always test the accumulator register. + __ li(result_register(), Operand(lit)); + codegen()->DoTest(this); + } +} + + +void FullCodeGenerator::EffectContext::DropAndPlug(int count, + Register reg) const { + DCHECK(count > 0); + __ Drop(count); +} + + +void FullCodeGenerator::AccumulatorValueContext::DropAndPlug( + int count, + Register reg) const { + DCHECK(count > 0); + __ Drop(count); + __ Move(result_register(), reg); +} + + +void FullCodeGenerator::StackValueContext::DropAndPlug(int count, + Register reg) const { + DCHECK(count > 0); + if (count > 1) __ Drop(count - 1); + __ sd(reg, MemOperand(sp, 0)); +} + + +void FullCodeGenerator::TestContext::DropAndPlug(int count, + Register reg) const { + DCHECK(count > 0); + // For simplicity we always test the accumulator register. 
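+  // (All four expression contexts plugged throughout this file follow the
+  // same protocol: EffectContext discards the value, AccumulatorValueContext
+  // leaves it in v0, StackValueContext pushes it, and TestContext branches
+  // on it via DoTest; the DropAndPlug variants additionally drop `count`
+  // stack slots first.)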
+ __ Drop(count); + __ Move(result_register(), reg); + codegen()->PrepareForBailoutBeforeSplit(condition(), false, NULL, NULL); + codegen()->DoTest(this); +} + + +void FullCodeGenerator::EffectContext::Plug(Label* materialize_true, + Label* materialize_false) const { + DCHECK(materialize_true == materialize_false); + __ bind(materialize_true); +} + + +void FullCodeGenerator::AccumulatorValueContext::Plug( + Label* materialize_true, + Label* materialize_false) const { + Label done; + __ bind(materialize_true); + __ LoadRoot(result_register(), Heap::kTrueValueRootIndex); + __ Branch(&done); + __ bind(materialize_false); + __ LoadRoot(result_register(), Heap::kFalseValueRootIndex); + __ bind(&done); +} + + +void FullCodeGenerator::StackValueContext::Plug( + Label* materialize_true, + Label* materialize_false) const { + Label done; + __ bind(materialize_true); + __ LoadRoot(at, Heap::kTrueValueRootIndex); + // Push the value as the following branch can clobber at in long branch mode. + __ push(at); + __ Branch(&done); + __ bind(materialize_false); + __ LoadRoot(at, Heap::kFalseValueRootIndex); + __ push(at); + __ bind(&done); +} + + +void FullCodeGenerator::TestContext::Plug(Label* materialize_true, + Label* materialize_false) const { + DCHECK(materialize_true == true_label_); + DCHECK(materialize_false == false_label_); +} + + +void FullCodeGenerator::EffectContext::Plug(bool flag) const { +} + + +void FullCodeGenerator::AccumulatorValueContext::Plug(bool flag) const { + Heap::RootListIndex value_root_index = + flag ? Heap::kTrueValueRootIndex : Heap::kFalseValueRootIndex; + __ LoadRoot(result_register(), value_root_index); +} + + +void FullCodeGenerator::StackValueContext::Plug(bool flag) const { + Heap::RootListIndex value_root_index = + flag ? Heap::kTrueValueRootIndex : Heap::kFalseValueRootIndex; + __ LoadRoot(at, value_root_index); + __ push(at); +} + + +void FullCodeGenerator::TestContext::Plug(bool flag) const { + codegen()->PrepareForBailoutBeforeSplit(condition(), + true, + true_label_, + false_label_); + if (flag) { + if (true_label_ != fall_through_) __ Branch(true_label_); + } else { + if (false_label_ != fall_through_) __ Branch(false_label_); + } +} + + +void FullCodeGenerator::DoTest(Expression* condition, + Label* if_true, + Label* if_false, + Label* fall_through) { + __ mov(a0, result_register()); + Handle<Code> ic = ToBooleanStub::GetUninitialized(isolate()); + CallIC(ic, condition->test_id()); + __ mov(at, zero_reg); + Split(ne, v0, Operand(at), if_true, if_false, fall_through); +} + + +void FullCodeGenerator::Split(Condition cc, + Register lhs, + const Operand& rhs, + Label* if_true, + Label* if_false, + Label* fall_through) { + if (if_false == fall_through) { + __ Branch(if_true, cc, lhs, rhs); + } else if (if_true == fall_through) { + __ Branch(if_false, NegateCondition(cc), lhs, rhs); + } else { + __ Branch(if_true, cc, lhs, rhs); + __ Branch(if_false); + } +} + + +MemOperand FullCodeGenerator::StackOperand(Variable* var) { + DCHECK(var->IsStackAllocated()); + // Offset is negative because higher indexes are at lower addresses. + int offset = -var->index() * kPointerSize; + // Adjust by a (parameter or local) base offset. 
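+  // Worked example (a sketch, assuming var->index() is the 0-based parameter
+  // index): with n parameters, a parameter of index i resolves to
+  // fp + (n + 1 - i) * kPointerSize, matching the
+  // kCallerSPOffset + (n - 1 - i) * kPointerSize slots filled in Generate();
+  // a local of index i resolves to fp + kLocal0Offset - i * kPointerSize.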
+ if (var->IsParameter()) { + offset += (info_->scope()->num_parameters() + 1) * kPointerSize; + } else { + offset += JavaScriptFrameConstants::kLocal0Offset; + } + return MemOperand(fp, offset); +} + + +MemOperand FullCodeGenerator::VarOperand(Variable* var, Register scratch) { + DCHECK(var->IsContextSlot() || var->IsStackAllocated()); + if (var->IsContextSlot()) { + int context_chain_length = scope()->ContextChainLength(var->scope()); + __ LoadContext(scratch, context_chain_length); + return ContextOperand(scratch, var->index()); + } else { + return StackOperand(var); + } +} + + +void FullCodeGenerator::GetVar(Register dest, Variable* var) { + // Use destination as scratch. + MemOperand location = VarOperand(var, dest); + __ ld(dest, location); +} + + +void FullCodeGenerator::SetVar(Variable* var, + Register src, + Register scratch0, + Register scratch1) { + DCHECK(var->IsContextSlot() || var->IsStackAllocated()); + DCHECK(!scratch0.is(src)); + DCHECK(!scratch0.is(scratch1)); + DCHECK(!scratch1.is(src)); + MemOperand location = VarOperand(var, scratch0); + __ sd(src, location); + // Emit the write barrier code if the location is in the heap. + if (var->IsContextSlot()) { + __ RecordWriteContextSlot(scratch0, + location.offset(), + src, + scratch1, + kRAHasBeenSaved, + kDontSaveFPRegs); + } +} + + +void FullCodeGenerator::PrepareForBailoutBeforeSplit(Expression* expr, + bool should_normalize, + Label* if_true, + Label* if_false) { + // Only prepare for bailouts before splits if we're in a test + // context. Otherwise, we let the Visit function deal with the + // preparation to avoid preparing with the same AST id twice. + if (!context()->IsTest() || !info_->IsOptimizable()) return; + + Label skip; + if (should_normalize) __ Branch(&skip); + PrepareForBailout(expr, TOS_REG); + if (should_normalize) { + __ LoadRoot(a4, Heap::kTrueValueRootIndex); + Split(eq, a0, Operand(a4), if_true, if_false, NULL); + __ bind(&skip); + } +} + + +void FullCodeGenerator::EmitDebugCheckDeclarationContext(Variable* variable) { + // The variable in the declaration always resides in the current function + // context. + DCHECK_EQ(0, scope()->ContextChainLength(variable->scope())); + if (generate_debug_code_) { + // Check that we're not inside a with or catch context. + __ ld(a1, FieldMemOperand(cp, HeapObject::kMapOffset)); + __ LoadRoot(a4, Heap::kWithContextMapRootIndex); + __ Check(ne, kDeclarationInWithContext, + a1, Operand(a4)); + __ LoadRoot(a4, Heap::kCatchContextMapRootIndex); + __ Check(ne, kDeclarationInCatchContext, + a1, Operand(a4)); + } +} + + +void FullCodeGenerator::VisitVariableDeclaration( + VariableDeclaration* declaration) { + // If it was not possible to allocate the variable at compile time, we + // need to "declare" it at runtime to make sure it actually exists in the + // local context. + VariableProxy* proxy = declaration->proxy(); + VariableMode mode = declaration->mode(); + Variable* variable = proxy->var(); + bool hole_init = mode == LET || mode == CONST || mode == CONST_LEGACY; + switch (variable->location()) { + case Variable::UNALLOCATED: + globals_->Add(variable->name(), zone()); + globals_->Add(variable->binding_needs_init() + ? 
isolate()->factory()->the_hole_value() + : isolate()->factory()->undefined_value(), + zone()); + break; + + case Variable::PARAMETER: + case Variable::LOCAL: + if (hole_init) { + Comment cmnt(masm_, "[ VariableDeclaration"); + __ LoadRoot(a4, Heap::kTheHoleValueRootIndex); + __ sd(a4, StackOperand(variable)); + } + break; + + case Variable::CONTEXT: + if (hole_init) { + Comment cmnt(masm_, "[ VariableDeclaration"); + EmitDebugCheckDeclarationContext(variable); + __ LoadRoot(at, Heap::kTheHoleValueRootIndex); + __ sd(at, ContextOperand(cp, variable->index())); + // No write barrier since the_hole_value is in old space. + PrepareForBailoutForId(proxy->id(), NO_REGISTERS); + } + break; + + case Variable::LOOKUP: { + Comment cmnt(masm_, "[ VariableDeclaration"); + __ li(a2, Operand(variable->name())); + // Declaration nodes are always introduced in one of four modes. + DCHECK(IsDeclaredVariableMode(mode)); + PropertyAttributes attr = + IsImmutableVariableMode(mode) ? READ_ONLY : NONE; + __ li(a1, Operand(Smi::FromInt(attr))); + // Push initial value, if any. + // Note: For variables we must not push an initial value (such as + // 'undefined') because we may have a (legal) redeclaration and we + // must not destroy the current value. + if (hole_init) { + __ LoadRoot(a0, Heap::kTheHoleValueRootIndex); + __ Push(cp, a2, a1, a0); + } else { + DCHECK(Smi::FromInt(0) == 0); + __ mov(a0, zero_reg); // Smi::FromInt(0) indicates no initial value. + __ Push(cp, a2, a1, a0); + } + __ CallRuntime(Runtime::kDeclareLookupSlot, 4); + break; + } + } +} + + +void FullCodeGenerator::VisitFunctionDeclaration( + FunctionDeclaration* declaration) { + VariableProxy* proxy = declaration->proxy(); + Variable* variable = proxy->var(); + switch (variable->location()) { + case Variable::UNALLOCATED: { + globals_->Add(variable->name(), zone()); + Handle<SharedFunctionInfo> function = + Compiler::BuildFunctionInfo(declaration->fun(), script(), info_); + // Check for stack-overflow exception. + if (function.is_null()) return SetStackOverflow(); + globals_->Add(function, zone()); + break; + } + + case Variable::PARAMETER: + case Variable::LOCAL: { + Comment cmnt(masm_, "[ FunctionDeclaration"); + VisitForAccumulatorValue(declaration->fun()); + __ sd(result_register(), StackOperand(variable)); + break; + } + + case Variable::CONTEXT: { + Comment cmnt(masm_, "[ FunctionDeclaration"); + EmitDebugCheckDeclarationContext(variable); + VisitForAccumulatorValue(declaration->fun()); + __ sd(result_register(), ContextOperand(cp, variable->index())); + int offset = Context::SlotOffset(variable->index()); + // We know that we have written a function, which is not a smi. + __ RecordWriteContextSlot(cp, + offset, + result_register(), + a2, + kRAHasBeenSaved, + kDontSaveFPRegs, + EMIT_REMEMBERED_SET, + OMIT_SMI_CHECK); + PrepareForBailoutForId(proxy->id(), NO_REGISTERS); + break; + } + + case Variable::LOOKUP: { + Comment cmnt(masm_, "[ FunctionDeclaration"); + __ li(a2, Operand(variable->name())); + __ li(a1, Operand(Smi::FromInt(NONE))); + __ Push(cp, a2, a1); + // Push initial value for function declaration. 
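+      // Together with the context, name and attributes pushed above, this
+      // makes up the four arguments Runtime::kDeclareLookupSlot consumes.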
+ VisitForStackValue(declaration->fun()); + __ CallRuntime(Runtime::kDeclareLookupSlot, 4); + break; + } + } +} + + +void FullCodeGenerator::VisitModuleDeclaration(ModuleDeclaration* declaration) { + Variable* variable = declaration->proxy()->var(); + DCHECK(variable->location() == Variable::CONTEXT); + DCHECK(variable->interface()->IsFrozen()); + Comment cmnt(masm_, "[ ModuleDeclaration"); + EmitDebugCheckDeclarationContext(variable); + + // Load instance object. + __ LoadContext(a1, scope_->ContextChainLength(scope_->GlobalScope())); + __ ld(a1, ContextOperand(a1, variable->interface()->Index())); + __ ld(a1, ContextOperand(a1, Context::EXTENSION_INDEX)); + + // Assign it. + __ sd(a1, ContextOperand(cp, variable->index())); + // We know that we have written a module, which is not a smi. + __ RecordWriteContextSlot(cp, + Context::SlotOffset(variable->index()), + a1, + a3, + kRAHasBeenSaved, + kDontSaveFPRegs, + EMIT_REMEMBERED_SET, + OMIT_SMI_CHECK); + PrepareForBailoutForId(declaration->proxy()->id(), NO_REGISTERS); + + // Traverse into body. + Visit(declaration->module()); +} + + +void FullCodeGenerator::VisitImportDeclaration(ImportDeclaration* declaration) { + VariableProxy* proxy = declaration->proxy(); + Variable* variable = proxy->var(); + switch (variable->location()) { + case Variable::UNALLOCATED: + // TODO(rossberg) + break; + + case Variable::CONTEXT: { + Comment cmnt(masm_, "[ ImportDeclaration"); + EmitDebugCheckDeclarationContext(variable); + // TODO(rossberg) + break; + } + + case Variable::PARAMETER: + case Variable::LOCAL: + case Variable::LOOKUP: + UNREACHABLE(); + } +} + + +void FullCodeGenerator::VisitExportDeclaration(ExportDeclaration* declaration) { + // TODO(rossberg) +} + + +void FullCodeGenerator::DeclareGlobals(Handle<FixedArray> pairs) { + // Call the runtime to declare the globals. + // The context is the first argument. + __ li(a1, Operand(pairs)); + __ li(a0, Operand(Smi::FromInt(DeclareGlobalsFlags()))); + __ Push(cp, a1, a0); + __ CallRuntime(Runtime::kDeclareGlobals, 3); + // Return value is ignored. +} + + +void FullCodeGenerator::DeclareModules(Handle<FixedArray> descriptions) { + // Call the runtime to declare the modules. + __ Push(descriptions); + __ CallRuntime(Runtime::kDeclareModules, 1); + // Return value is ignored. +} + + +void FullCodeGenerator::VisitSwitchStatement(SwitchStatement* stmt) { + Comment cmnt(masm_, "[ SwitchStatement"); + Breakable nested_statement(this, stmt); + SetStatementPosition(stmt); + + // Keep the switch value on the stack until a case matches. + VisitForStackValue(stmt->tag()); + PrepareForBailoutForId(stmt->EntryId(), NO_REGISTERS); + + ZoneList<CaseClause*>* clauses = stmt->cases(); + CaseClause* default_clause = NULL; // Can occur anywhere in the list. + + Label next_test; // Recycled for each test. + // Compile all the tests with branches to their bodies. + for (int i = 0; i < clauses->length(); i++) { + CaseClause* clause = clauses->at(i); + clause->body_target()->Unuse(); + + // The default is not a test, but remember it as final fall through. + if (clause->is_default()) { + default_clause = clause; + continue; + } + + Comment cmnt(masm_, "[ Case comparison"); + __ bind(&next_test); + next_test.Unuse(); + + // Compile the label expression. + VisitForAccumulatorValue(clause->label()); + __ mov(a0, result_register()); // CompareStub requires args in a0, a1. + + // Perform the comparison as if via '==='. + __ ld(a1, MemOperand(sp, 0)); // Switch value. 
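+    // Fast path (see JumpPatchSite above): if both operands are smis, strict
+    // equality is a plain word comparison, so the inlined code below compares
+    // a0 and a1 directly and only falls back to the patchable CompareIC when
+    // one of them is not a smi.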
+ bool inline_smi_code = ShouldInlineSmiCase(Token::EQ_STRICT); + JumpPatchSite patch_site(masm_); + if (inline_smi_code) { + Label slow_case; + __ or_(a2, a1, a0); + patch_site.EmitJumpIfNotSmi(a2, &slow_case); + + __ Branch(&next_test, ne, a1, Operand(a0)); + __ Drop(1); // Switch value is no longer needed. + __ Branch(clause->body_target()); + + __ bind(&slow_case); + } + + // Record position before stub call for type feedback. + SetSourcePosition(clause->position()); + Handle<Code> ic = CompareIC::GetUninitialized(isolate(), Token::EQ_STRICT); + CallIC(ic, clause->CompareId()); + patch_site.EmitPatchInfo(); + + Label skip; + __ Branch(&skip); + PrepareForBailout(clause, TOS_REG); + __ LoadRoot(at, Heap::kTrueValueRootIndex); + __ Branch(&next_test, ne, v0, Operand(at)); + __ Drop(1); + __ Branch(clause->body_target()); + __ bind(&skip); + + __ Branch(&next_test, ne, v0, Operand(zero_reg)); + __ Drop(1); // Switch value is no longer needed. + __ Branch(clause->body_target()); + } + + // Discard the test value and jump to the default if present, otherwise to + // the end of the statement. + __ bind(&next_test); + __ Drop(1); // Switch value is no longer needed. + if (default_clause == NULL) { + __ Branch(nested_statement.break_label()); + } else { + __ Branch(default_clause->body_target()); + } + + // Compile all the case bodies. + for (int i = 0; i < clauses->length(); i++) { + Comment cmnt(masm_, "[ Case body"); + CaseClause* clause = clauses->at(i); + __ bind(clause->body_target()); + PrepareForBailoutForId(clause->EntryId(), NO_REGISTERS); + VisitStatements(clause->statements()); + } + + __ bind(nested_statement.break_label()); + PrepareForBailoutForId(stmt->ExitId(), NO_REGISTERS); +} + + +void FullCodeGenerator::VisitForInStatement(ForInStatement* stmt) { + Comment cmnt(masm_, "[ ForInStatement"); + int slot = stmt->ForInFeedbackSlot(); + SetStatementPosition(stmt); + + Label loop, exit; + ForIn loop_statement(this, stmt); + increment_loop_depth(); + + // Get the object to enumerate over. If the object is null or undefined, skip + // over the loop. See ECMA-262 version 5, section 12.6.4. + VisitForAccumulatorValue(stmt->enumerable()); + __ mov(a0, result_register()); // Result as param to InvokeBuiltin below. + __ LoadRoot(at, Heap::kUndefinedValueRootIndex); + __ Branch(&exit, eq, a0, Operand(at)); + Register null_value = a5; + __ LoadRoot(null_value, Heap::kNullValueRootIndex); + __ Branch(&exit, eq, a0, Operand(null_value)); + PrepareForBailoutForId(stmt->PrepareId(), TOS_REG); + __ mov(a0, v0); + // Convert the object to a JS object. + Label convert, done_convert; + __ JumpIfSmi(a0, &convert); + __ GetObjectType(a0, a1, a1); + __ Branch(&done_convert, ge, a1, Operand(FIRST_SPEC_OBJECT_TYPE)); + __ bind(&convert); + __ push(a0); + __ InvokeBuiltin(Builtins::TO_OBJECT, CALL_FUNCTION); + __ mov(a0, v0); + __ bind(&done_convert); + __ push(a0); + + // Check for proxies. + Label call_runtime; + STATIC_ASSERT(FIRST_JS_PROXY_TYPE == FIRST_SPEC_OBJECT_TYPE); + __ GetObjectType(a0, a1, a1); + __ Branch(&call_runtime, le, a1, Operand(LAST_JS_PROXY_TYPE)); + + // Check cache validity in generated code. This is a fast case for + // the JSObject::IsSimpleEnum cache validity checks. If we cannot + // guarantee cache validity, call the runtime system to check cache + // validity or get the property names in a fixed array. + __ CheckEnumCache(null_value, &call_runtime); + + // The enum cache is valid. Load the map of the object being + // iterated over and use the cache for the iteration. 
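+  // Both setups below leave five stack slots for the loop, read back by the
+  // MemOperand loads in the loop body (a sketch of the layout):
+  //   sp + 0 * kPointerSize : current index (smi)
+  //   sp + 1 * kPointerSize : length of the key array (smi)
+  //   sp + 2 * kPointerSize : fixed array of keys
+  //   sp + 3 * kPointerSize : map of the enumerable, or a smi
+  //                           (1 = slow check, 0 = proxy)
+  //   sp + 4 * kPointerSize : the enumerable object itself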
+ Label use_cache; + __ ld(v0, FieldMemOperand(a0, HeapObject::kMapOffset)); + __ Branch(&use_cache); + + // Get the set of properties to enumerate. + __ bind(&call_runtime); + __ push(a0); // Duplicate the enumerable object on the stack. + __ CallRuntime(Runtime::kGetPropertyNamesFast, 1); + + // If we got a map from the runtime call, we can do a fast + // modification check. Otherwise, we got a fixed array, and we have + // to do a slow check. + Label fixed_array; + __ ld(a2, FieldMemOperand(v0, HeapObject::kMapOffset)); + __ LoadRoot(at, Heap::kMetaMapRootIndex); + __ Branch(&fixed_array, ne, a2, Operand(at)); + + // We got a map in register v0. Get the enumeration cache from it. + Label no_descriptors; + __ bind(&use_cache); + + __ EnumLength(a1, v0); + __ Branch(&no_descriptors, eq, a1, Operand(Smi::FromInt(0))); + + __ LoadInstanceDescriptors(v0, a2); + __ ld(a2, FieldMemOperand(a2, DescriptorArray::kEnumCacheOffset)); + __ ld(a2, FieldMemOperand(a2, DescriptorArray::kEnumCacheBridgeCacheOffset)); + + // Set up the four remaining stack slots. + __ li(a0, Operand(Smi::FromInt(0))); + // Push map, enumeration cache, enumeration cache length (as smi) and zero. + __ Push(v0, a2, a1, a0); + __ jmp(&loop); + + __ bind(&no_descriptors); + __ Drop(1); + __ jmp(&exit); + + // We got a fixed array in register v0. Iterate through that. + Label non_proxy; + __ bind(&fixed_array); + + __ li(a1, FeedbackVector()); + __ li(a2, Operand(TypeFeedbackInfo::MegamorphicSentinel(isolate()))); + __ sd(a2, FieldMemOperand(a1, FixedArray::OffsetOfElementAt(slot))); + + __ li(a1, Operand(Smi::FromInt(1))); // Smi indicates slow check + __ ld(a2, MemOperand(sp, 0 * kPointerSize)); // Get enumerated object + STATIC_ASSERT(FIRST_JS_PROXY_TYPE == FIRST_SPEC_OBJECT_TYPE); + __ GetObjectType(a2, a3, a3); + __ Branch(&non_proxy, gt, a3, Operand(LAST_JS_PROXY_TYPE)); + __ li(a1, Operand(Smi::FromInt(0))); // Zero indicates proxy + __ bind(&non_proxy); + __ Push(a1, v0); // Smi and array + __ ld(a1, FieldMemOperand(v0, FixedArray::kLengthOffset)); + __ li(a0, Operand(Smi::FromInt(0))); + __ Push(a1, a0); // Fixed array length (as smi) and initial index. + + // Generate code for doing the condition check. + PrepareForBailoutForId(stmt->BodyId(), NO_REGISTERS); + __ bind(&loop); + // Load the current count to a0, load the length to a1. + __ ld(a0, MemOperand(sp, 0 * kPointerSize)); + __ ld(a1, MemOperand(sp, 1 * kPointerSize)); + __ Branch(loop_statement.break_label(), hs, a0, Operand(a1)); + + // Get the current entry of the array into register a3. + __ ld(a2, MemOperand(sp, 2 * kPointerSize)); + __ Daddu(a2, a2, Operand(FixedArray::kHeaderSize - kHeapObjectTag)); + __ SmiScale(a4, a0, kPointerSizeLog2); + __ daddu(a4, a2, a4); // Array base + scaled (smi) index. + __ ld(a3, MemOperand(a4)); // Current entry. + + // Get the expected map from the stack or a smi in the + // permanent slow case into register a2. + __ ld(a2, MemOperand(sp, 3 * kPointerSize)); + + // Check if the expected map still matches that of the enumerable. + // If not, we may have to filter the key. + Label update_each; + __ ld(a1, MemOperand(sp, 4 * kPointerSize)); + __ ld(a4, FieldMemOperand(a1, HeapObject::kMapOffset)); + __ Branch(&update_each, eq, a4, Operand(a2)); + + // For proxies, no filtering is done. + // TODO(rossberg): What if only a prototype is a proxy? Not specified yet. 
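+  // a2 was just loaded from the map-or-smi slot; Smi::FromInt(0) is all zero
+  // bits (hence the DCHECK below), so comparing a2 with zero_reg is the
+  // proxy check, and for proxies every key is used without filtering.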
+ DCHECK_EQ(Smi::FromInt(0), 0); + __ Branch(&update_each, eq, a2, Operand(zero_reg)); + + // Convert the entry to a string or (smi) 0 if it isn't a property + // any more. If the property has been removed while iterating, we + // just skip it. + __ Push(a1, a3); // Enumerable and current entry. + __ InvokeBuiltin(Builtins::FILTER_KEY, CALL_FUNCTION); + __ mov(a3, result_register()); + __ Branch(loop_statement.continue_label(), eq, a3, Operand(zero_reg)); + + // Update the 'each' property or variable from the possibly filtered + // entry in register a3. + __ bind(&update_each); + __ mov(result_register(), a3); + // Perform the assignment as if via '='. + { EffectContext context(this); + EmitAssignment(stmt->each()); + } + + // Generate code for the body of the loop. + Visit(stmt->body()); + + // Generate code for the going to the next element by incrementing + // the index (smi) stored on top of the stack. + __ bind(loop_statement.continue_label()); + __ pop(a0); + __ Daddu(a0, a0, Operand(Smi::FromInt(1))); + __ push(a0); + + EmitBackEdgeBookkeeping(stmt, &loop); + __ Branch(&loop); + + // Remove the pointers stored on the stack. + __ bind(loop_statement.break_label()); + __ Drop(5); + + // Exit and decrement the loop depth. + PrepareForBailoutForId(stmt->ExitId(), NO_REGISTERS); + __ bind(&exit); + decrement_loop_depth(); +} + + +void FullCodeGenerator::VisitForOfStatement(ForOfStatement* stmt) { + Comment cmnt(masm_, "[ ForOfStatement"); + SetStatementPosition(stmt); + + Iteration loop_statement(this, stmt); + increment_loop_depth(); + + // var iterator = iterable[Symbol.iterator](); + VisitForEffect(stmt->assign_iterator()); + + // Loop entry. + __ bind(loop_statement.continue_label()); + + // result = iterator.next() + VisitForEffect(stmt->next_result()); + + // if (result.done) break; + Label result_not_done; + VisitForControl(stmt->result_done(), + loop_statement.break_label(), + &result_not_done, + &result_not_done); + __ bind(&result_not_done); + + // each = result.value + VisitForEffect(stmt->assign_each()); + + // Generate code for the body of the loop. + Visit(stmt->body()); + + // Check stack before looping. + PrepareForBailoutForId(stmt->BackEdgeId(), NO_REGISTERS); + EmitBackEdgeBookkeeping(stmt, loop_statement.continue_label()); + __ jmp(loop_statement.continue_label()); + + // Exit and decrement the loop depth. + PrepareForBailoutForId(stmt->ExitId(), NO_REGISTERS); + __ bind(loop_statement.break_label()); + decrement_loop_depth(); +} + + +void FullCodeGenerator::EmitNewClosure(Handle<SharedFunctionInfo> info, + bool pretenure) { + // Use the fast case closure allocation code that allocates in new + // space for nested functions that don't need literals cloning. If + // we're running with the --always-opt or the --prepare-always-opt + // flag, we need to use the runtime function so that the new function + // we are creating here gets a chance to have its code optimized and + // doesn't just get a copy of the existing unoptimized code. + if (!FLAG_always_opt && + !FLAG_prepare_always_opt && + !pretenure && + scope()->is_function_scope() && + info->num_literals() == 0) { + FastNewClosureStub stub(isolate(), + info->strict_mode(), + info->is_generator()); + __ li(a2, Operand(info)); + __ CallStub(&stub); + } else { + __ li(a0, Operand(info)); + __ LoadRoot(a1, pretenure ? 
Heap::kTrueValueRootIndex
+                              : Heap::kFalseValueRootIndex);
+    __ Push(cp, a0, a1);
+    __ CallRuntime(Runtime::kNewClosure, 3);
+  }
+  context()->Plug(v0);
+}
+
+
+void FullCodeGenerator::VisitVariableProxy(VariableProxy* expr) {
+  Comment cmnt(masm_, "[ VariableProxy");
+  EmitVariableLoad(expr);
+}
+
+
+void FullCodeGenerator::EmitLoadGlobalCheckExtensions(VariableProxy* proxy,
+                                                      TypeofState typeof_state,
+                                                      Label* slow) {
+  Register current = cp;
+  Register next = a1;
+  Register temp = a2;
+
+  Scope* s = scope();
+  while (s != NULL) {
+    if (s->num_heap_slots() > 0) {
+      if (s->calls_sloppy_eval()) {
+        // Check that extension is NULL.
+        __ ld(temp, ContextOperand(current, Context::EXTENSION_INDEX));
+        __ Branch(slow, ne, temp, Operand(zero_reg));
+      }
+      // Load next context in chain.
+      __ ld(next, ContextOperand(current, Context::PREVIOUS_INDEX));
+      // Walk the rest of the chain without clobbering cp.
+      current = next;
+    }
+    // If no outer scope calls eval, we do not need to check more
+    // context extensions.
+    if (!s->outer_scope_calls_sloppy_eval() || s->is_eval_scope()) break;
+    s = s->outer_scope();
+  }
+
+  if (s->is_eval_scope()) {
+    Label loop, fast;
+    if (!current.is(next)) {
+      __ Move(next, current);
+    }
+    __ bind(&loop);
+    // Terminate at native context.
+    __ ld(temp, FieldMemOperand(next, HeapObject::kMapOffset));
+    __ LoadRoot(a4, Heap::kNativeContextMapRootIndex);
+    __ Branch(&fast, eq, temp, Operand(a4));
+    // Check that extension is NULL.
+    __ ld(temp, ContextOperand(next, Context::EXTENSION_INDEX));
+    __ Branch(slow, ne, temp, Operand(zero_reg));
+    // Load next context in chain.
+    __ ld(next, ContextOperand(next, Context::PREVIOUS_INDEX));
+    __ Branch(&loop);
+    __ bind(&fast);
+  }
+
+  __ ld(LoadIC::ReceiverRegister(), GlobalObjectOperand());
+  __ li(LoadIC::NameRegister(), Operand(proxy->var()->name()));
+  if (FLAG_vector_ics) {
+    __ li(LoadIC::SlotRegister(),
+          Operand(Smi::FromInt(proxy->VariableFeedbackSlot())));
+  }
+
+  ContextualMode mode = (typeof_state == INSIDE_TYPEOF)
+      ? NOT_CONTEXTUAL
+      : CONTEXTUAL;
+  CallLoadIC(mode);
+}
+
+
+MemOperand FullCodeGenerator::ContextSlotOperandCheckExtensions(Variable* var,
+                                                                Label* slow) {
+  DCHECK(var->IsContextSlot());
+  Register context = cp;
+  Register next = a3;
+  Register temp = a4;
+
+  for (Scope* s = scope(); s != var->scope(); s = s->outer_scope()) {
+    if (s->num_heap_slots() > 0) {
+      if (s->calls_sloppy_eval()) {
+        // Check that extension is NULL.
+        __ ld(temp, ContextOperand(context, Context::EXTENSION_INDEX));
+        __ Branch(slow, ne, temp, Operand(zero_reg));
+      }
+      __ ld(next, ContextOperand(context, Context::PREVIOUS_INDEX));
+      // Walk the rest of the chain without clobbering cp.
+      context = next;
+    }
+  }
+  // Check that last extension is NULL.
+  __ ld(temp, ContextOperand(context, Context::EXTENSION_INDEX));
+  __ Branch(slow, ne, temp, Operand(zero_reg));
+
+  // This function is used only for loads, not stores, so it's safe to
+  // return a cp-based operand (the write barrier cannot be allowed to
+  // destroy the cp register).
+  return ContextOperand(context, var->index());
+}
+
+
+void FullCodeGenerator::EmitDynamicLookupFastCase(VariableProxy* proxy,
+                                                  TypeofState typeof_state,
+                                                  Label* slow,
+                                                  Label* done) {
+  // Generate fast-case code for variables that might be shadowed by
+  // eval-introduced variables. Eval is used a lot without
+  // introducing variables. In those cases, we do not want to
+  // perform a runtime call for all variables in the scope
+  // containing the eval.
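+  // Two fast cases are emitted below: DYNAMIC_GLOBAL checks that no context
+  // on the chain has an extension object and then does an ordinary global
+  // load, while DYNAMIC_LOCAL performs the same extension checks and reads
+  // the shadowed local's context slot directly, with a hole check for
+  // let/const. Every other case falls through to the caller's runtime
+  // lookup via the slow label.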
+  Variable* var = proxy->var();
+  if (var->mode() == DYNAMIC_GLOBAL) {
+    EmitLoadGlobalCheckExtensions(proxy, typeof_state, slow);
+    __ Branch(done);
+  } else if (var->mode() == DYNAMIC_LOCAL) {
+    Variable* local = var->local_if_not_shadowed();
+    __ ld(v0, ContextSlotOperandCheckExtensions(local, slow));
+    if (local->mode() == LET || local->mode() == CONST ||
+        local->mode() == CONST_LEGACY) {
+      __ LoadRoot(at, Heap::kTheHoleValueRootIndex);
+      __ dsubu(at, v0, at);  // Sub as compare: at == 0 on eq.
+      if (local->mode() == CONST_LEGACY) {
+        __ LoadRoot(a0, Heap::kUndefinedValueRootIndex);
+        __ Movz(v0, a0, at);  // Conditional move: return Undefined if TheHole.
+      } else {  // LET || CONST
+        __ Branch(done, ne, at, Operand(zero_reg));
+        __ li(a0, Operand(var->name()));
+        __ push(a0);
+        __ CallRuntime(Runtime::kThrowReferenceError, 1);
+      }
+    }
+    __ Branch(done);
+  }
+}
+
+
+void FullCodeGenerator::EmitVariableLoad(VariableProxy* proxy) {
+  // Record position before possible IC call.
+  SetSourcePosition(proxy->position());
+  Variable* var = proxy->var();
+
+  // Three cases: global variables, lookup variables, and all other types of
+  // variables.
+  switch (var->location()) {
+    case Variable::UNALLOCATED: {
+      Comment cmnt(masm_, "[ Global variable");
+      // Use inline caching. Variable name is passed in a2 and the global
+      // object (receiver) in a0.
+      __ ld(LoadIC::ReceiverRegister(), GlobalObjectOperand());
+      __ li(LoadIC::NameRegister(), Operand(var->name()));
+      if (FLAG_vector_ics) {
+        __ li(LoadIC::SlotRegister(),
+              Operand(Smi::FromInt(proxy->VariableFeedbackSlot())));
+      }
+      CallLoadIC(CONTEXTUAL);
+      context()->Plug(v0);
+      break;
+    }
+
+    case Variable::PARAMETER:
+    case Variable::LOCAL:
+    case Variable::CONTEXT: {
+      Comment cmnt(masm_, var->IsContextSlot() ? "[ Context variable"
+                                               : "[ Stack variable");
+      if (var->binding_needs_init()) {
+        // var->scope() may be NULL when the proxy is located in eval code and
+        // refers to a potential outside binding. Currently those bindings are
+        // always looked up dynamically, i.e. in that case
+        //     var->location() == LOOKUP
+        // always holds.
+        DCHECK(var->scope() != NULL);
+
+        // Check if the binding really needs an initialization check. The check
+        // can be skipped in the following situation: we have a LET or CONST
+        // binding in harmony mode, both the Variable and the VariableProxy have
+        // the same declaration scope (i.e. they are both in global code, in the
+        // same function or in the same eval code) and the VariableProxy is in
+        // the source physically located after the initializer of the variable.
+        //
+        // We cannot skip any initialization checks for CONST in non-harmony
+        // mode because const variables may be declared but never initialized:
+        //   if (false) { const x; }; var y = x;
+        //
+        // The condition on the declaration scopes is a conservative check for
+        // nested functions that access a binding and are called before the
+        // binding is initialized:
+        //   function() { f(); let x = 1; function f() { x = 2; } }
+        //
+        bool skip_init_check;
+        if (var->scope()->DeclarationScope() != scope()->DeclarationScope()) {
+          skip_init_check = false;
+        } else {
+          // Check that we always have a valid source position.
+          DCHECK(var->initializer_position() != RelocInfo::kNoPosition);
+          DCHECK(proxy->position() != RelocInfo::kNoPosition);
+          skip_init_check = var->mode() != CONST_LEGACY &&
+              var->initializer_position() < proxy->position();
+        }
+
+        if (!skip_init_check) {
+          // Let and const need a read barrier.
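+          // The "read barrier" is the hole check emitted below: such slots
+          // are pre-initialized with the_hole by the declaration code, so
+          // observing the hole means the initializer has not run yet; for
+          // let/const this throws a ReferenceError, while a legacy const
+          // simply yields undefined.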
+          GetVar(v0, var);
+          __ LoadRoot(at, Heap::kTheHoleValueRootIndex);
+          __ dsubu(at, v0, at);  // Sub as compare: at == 0 on eq.
+          if (var->mode() == LET || var->mode() == CONST) {
+            // Throw a reference error when using an uninitialized let/const
+            // binding in harmony mode.
+            Label done;
+            __ Branch(&done, ne, at, Operand(zero_reg));
+            __ li(a0, Operand(var->name()));
+            __ push(a0);
+            __ CallRuntime(Runtime::kThrowReferenceError, 1);
+            __ bind(&done);
+          } else {
+            // Uninitialized const bindings outside of harmony mode are
+            // unholed.
+            DCHECK(var->mode() == CONST_LEGACY);
+            __ LoadRoot(a0, Heap::kUndefinedValueRootIndex);
+            __ Movz(v0, a0, at);  // Conditional move: Undefined if TheHole.
+          }
+          context()->Plug(v0);
+          break;
+        }
+      }
+      context()->Plug(var);
+      break;
+    }
+
+    case Variable::LOOKUP: {
+      Comment cmnt(masm_, "[ Lookup variable");
+      Label done, slow;
+      // Generate code for loading from variables potentially shadowed
+      // by eval-introduced variables.
+      EmitDynamicLookupFastCase(proxy, NOT_INSIDE_TYPEOF, &slow, &done);
+      __ bind(&slow);
+      __ li(a1, Operand(var->name()));
+      __ Push(cp, a1);  // Context and name.
+      __ CallRuntime(Runtime::kLoadLookupSlot, 2);
+      __ bind(&done);
+      context()->Plug(v0);
+    }
+  }
+}
+
+
+void FullCodeGenerator::VisitRegExpLiteral(RegExpLiteral* expr) {
+  Comment cmnt(masm_, "[ RegExpLiteral");
+  Label materialized;
+  // Registers will be used as follows:
+  // a5 = materialized value (RegExp literal)
+  // a4 = JS function, literals array
+  // a3 = literal index
+  // a2 = RegExp pattern
+  // a1 = RegExp flags
+  // a0 = RegExp literal clone
+  __ ld(a0, MemOperand(fp, JavaScriptFrameConstants::kFunctionOffset));
+  __ ld(a4, FieldMemOperand(a0, JSFunction::kLiteralsOffset));
+  int literal_offset =
+      FixedArray::kHeaderSize + expr->literal_index() * kPointerSize;
+  __ ld(a5, FieldMemOperand(a4, literal_offset));
+  __ LoadRoot(at, Heap::kUndefinedValueRootIndex);
+  __ Branch(&materialized, ne, a5, Operand(at));
+
+  // Create regexp literal using runtime function.
+  // Result will be in v0.
+  __ li(a3, Operand(Smi::FromInt(expr->literal_index())));
+  __ li(a2, Operand(expr->pattern()));
+  __ li(a1, Operand(expr->flags()));
+  __ Push(a4, a3, a2, a1);
+  __ CallRuntime(Runtime::kMaterializeRegExpLiteral, 4);
+  __ mov(a5, v0);
+
+  __ bind(&materialized);
+  int size = JSRegExp::kSize + JSRegExp::kInObjectFieldCount * kPointerSize;
+  Label allocated, runtime_allocate;
+  __ Allocate(size, v0, a2, a3, &runtime_allocate, TAG_OBJECT);
+  __ jmp(&allocated);
+
+  __ bind(&runtime_allocate);
+  __ li(a0, Operand(Smi::FromInt(size)));
+  __ Push(a5, a0);
+  __ CallRuntime(Runtime::kAllocateInNewSpace, 1);
+  __ pop(a5);
+
+  __ bind(&allocated);
+
+  // After this, registers are used as follows:
+  // v0: Newly allocated regexp.
+  // a5: Materialized regexp.
+  // a2: temp.
+ __ CopyFields(v0, a5, a2.bit(), size / kPointerSize); + context()->Plug(v0); +} + + +void FullCodeGenerator::EmitAccessor(Expression* expression) { + if (expression == NULL) { + __ LoadRoot(a1, Heap::kNullValueRootIndex); + __ push(a1); + } else { + VisitForStackValue(expression); + } +} + + +void FullCodeGenerator::VisitObjectLiteral(ObjectLiteral* expr) { + Comment cmnt(masm_, "[ ObjectLiteral"); + + expr->BuildConstantProperties(isolate()); + Handle<FixedArray> constant_properties = expr->constant_properties(); + __ ld(a3, MemOperand(fp, JavaScriptFrameConstants::kFunctionOffset)); + __ ld(a3, FieldMemOperand(a3, JSFunction::kLiteralsOffset)); + __ li(a2, Operand(Smi::FromInt(expr->literal_index()))); + __ li(a1, Operand(constant_properties)); + int flags = expr->fast_elements() + ? ObjectLiteral::kFastElements + : ObjectLiteral::kNoFlags; + flags |= expr->has_function() + ? ObjectLiteral::kHasFunction + : ObjectLiteral::kNoFlags; + __ li(a0, Operand(Smi::FromInt(flags))); + int properties_count = constant_properties->length() / 2; + if (expr->may_store_doubles() || expr->depth() > 1 || + masm()->serializer_enabled() || flags != ObjectLiteral::kFastElements || + properties_count > FastCloneShallowObjectStub::kMaximumClonedProperties) { + __ Push(a3, a2, a1, a0); + __ CallRuntime(Runtime::kCreateObjectLiteral, 4); + } else { + FastCloneShallowObjectStub stub(isolate(), properties_count); + __ CallStub(&stub); + } + + // If result_saved is true the result is on top of the stack. If + // result_saved is false the result is in v0. + bool result_saved = false; + + // Mark all computed expressions that are bound to a key that + // is shadowed by a later occurrence of the same key. For the + // marked expressions, no store code is emitted. + expr->CalculateEmitStore(zone()); + + AccessorTable accessor_table(zone()); + for (int i = 0; i < expr->properties()->length(); i++) { + ObjectLiteral::Property* property = expr->properties()->at(i); + if (property->IsCompileTimeValue()) continue; + + Literal* key = property->key(); + Expression* value = property->value(); + if (!result_saved) { + __ push(v0); // Save result on stack. + result_saved = true; + } + switch (property->kind()) { + case ObjectLiteral::Property::CONSTANT: + UNREACHABLE(); + case ObjectLiteral::Property::MATERIALIZED_LITERAL: + DCHECK(!CompileTimeValue::IsCompileTimeValue(property->value())); + // Fall through. + case ObjectLiteral::Property::COMPUTED: + if (key->value()->IsInternalizedString()) { + if (property->emit_store()) { + VisitForAccumulatorValue(value); + __ mov(StoreIC::ValueRegister(), result_register()); + DCHECK(StoreIC::ValueRegister().is(a0)); + __ li(StoreIC::NameRegister(), Operand(key->value())); + __ ld(StoreIC::ReceiverRegister(), MemOperand(sp)); + CallStoreIC(key->LiteralFeedbackId()); + PrepareForBailoutForId(key->id(), NO_REGISTERS); + } else { + VisitForEffect(value); + } + break; + } + // Duplicate receiver on stack. + __ ld(a0, MemOperand(sp)); + __ push(a0); + VisitForStackValue(key); + VisitForStackValue(value); + if (property->emit_store()) { + __ li(a0, Operand(Smi::FromInt(SLOPPY))); // PropertyAttributes. + __ push(a0); + __ CallRuntime(Runtime::kSetProperty, 4); + } else { + __ Drop(3); + } + break; + case ObjectLiteral::Property::PROTOTYPE: + // Duplicate receiver on stack. 
+ __ ld(a0, MemOperand(sp)); + __ push(a0); + VisitForStackValue(value); + if (property->emit_store()) { + __ CallRuntime(Runtime::kSetPrototype, 2); + } else { + __ Drop(2); + } + break; + case ObjectLiteral::Property::GETTER: + accessor_table.lookup(key)->second->getter = value; + break; + case ObjectLiteral::Property::SETTER: + accessor_table.lookup(key)->second->setter = value; + break; + } + } + + // Emit code to define accessors, using only a single call to the runtime for + // each pair of corresponding getters and setters. + for (AccessorTable::Iterator it = accessor_table.begin(); + it != accessor_table.end(); + ++it) { + __ ld(a0, MemOperand(sp)); // Duplicate receiver. + __ push(a0); + VisitForStackValue(it->first); + EmitAccessor(it->second->getter); + EmitAccessor(it->second->setter); + __ li(a0, Operand(Smi::FromInt(NONE))); + __ push(a0); + __ CallRuntime(Runtime::kDefineAccessorPropertyUnchecked, 5); + } + + if (expr->has_function()) { + DCHECK(result_saved); + __ ld(a0, MemOperand(sp)); + __ push(a0); + __ CallRuntime(Runtime::kToFastProperties, 1); + } + + if (result_saved) { + context()->PlugTOS(); + } else { + context()->Plug(v0); + } +} + + +void FullCodeGenerator::VisitArrayLiteral(ArrayLiteral* expr) { + Comment cmnt(masm_, "[ ArrayLiteral"); + + expr->BuildConstantElements(isolate()); + int flags = expr->depth() == 1 + ? ArrayLiteral::kShallowElements + : ArrayLiteral::kNoFlags; + + ZoneList<Expression*>* subexprs = expr->values(); + int length = subexprs->length(); + + Handle<FixedArray> constant_elements = expr->constant_elements(); + DCHECK_EQ(2, constant_elements->length()); + ElementsKind constant_elements_kind = + static_cast<ElementsKind>(Smi::cast(constant_elements->get(0))->value()); + bool has_fast_elements = + IsFastObjectElementsKind(constant_elements_kind); + Handle<FixedArrayBase> constant_elements_values( + FixedArrayBase::cast(constant_elements->get(1))); + + AllocationSiteMode allocation_site_mode = TRACK_ALLOCATION_SITE; + if (has_fast_elements && !FLAG_allocation_site_pretenuring) { + // If the only customer of allocation sites is transitioning, then + // we can turn it off if we don't have anywhere else to transition to. + allocation_site_mode = DONT_TRACK_ALLOCATION_SITE; + } + + __ mov(a0, result_register()); + __ ld(a3, MemOperand(fp, JavaScriptFrameConstants::kFunctionOffset)); + __ ld(a3, FieldMemOperand(a3, JSFunction::kLiteralsOffset)); + __ li(a2, Operand(Smi::FromInt(expr->literal_index()))); + __ li(a1, Operand(constant_elements)); + if (expr->depth() > 1 || length > JSObject::kInitialMaxFastElementArray) { + __ li(a0, Operand(Smi::FromInt(flags))); + __ Push(a3, a2, a1, a0); + __ CallRuntime(Runtime::kCreateArrayLiteral, 4); + } else { + FastCloneShallowArrayStub stub(isolate(), allocation_site_mode); + __ CallStub(&stub); + } + + bool result_saved = false; // Is the result saved to the stack? + + // Emit code to evaluate all the non-constant subexpressions and to store + // them into the newly cloned array. + for (int i = 0; i < length; i++) { + Expression* subexpr = subexprs->at(i); + // If the subexpression is a literal or a simple materialized literal it + // is already set in the cloned array. 
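+    // Only the remaining, genuinely dynamic subexpressions need explicit
+    // stores below.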
+ if (CompileTimeValue::IsCompileTimeValue(subexpr)) continue; + + if (!result_saved) { + __ push(v0); // array literal + __ Push(Smi::FromInt(expr->literal_index())); + result_saved = true; + } + + VisitForAccumulatorValue(subexpr); + + if (IsFastObjectElementsKind(constant_elements_kind)) { + int offset = FixedArray::kHeaderSize + (i * kPointerSize); + __ ld(a6, MemOperand(sp, kPointerSize)); // Copy of array literal. + __ ld(a1, FieldMemOperand(a6, JSObject::kElementsOffset)); + __ sd(result_register(), FieldMemOperand(a1, offset)); + // Update the write barrier for the array store. + __ RecordWriteField(a1, offset, result_register(), a2, + kRAHasBeenSaved, kDontSaveFPRegs, + EMIT_REMEMBERED_SET, INLINE_SMI_CHECK); + } else { + __ li(a3, Operand(Smi::FromInt(i))); + __ mov(a0, result_register()); + StoreArrayLiteralElementStub stub(isolate()); + __ CallStub(&stub); + } + + PrepareForBailoutForId(expr->GetIdForElement(i), NO_REGISTERS); + } + if (result_saved) { + __ Pop(); // literal index + context()->PlugTOS(); + } else { + context()->Plug(v0); + } +} + + +void FullCodeGenerator::VisitAssignment(Assignment* expr) { + DCHECK(expr->target()->IsValidReferenceExpression()); + + Comment cmnt(masm_, "[ Assignment"); + + // Left-hand side can only be a property, a global or a (parameter or local) + // slot. + enum LhsKind { VARIABLE, NAMED_PROPERTY, KEYED_PROPERTY }; + LhsKind assign_type = VARIABLE; + Property* property = expr->target()->AsProperty(); + if (property != NULL) { + assign_type = (property->key()->IsPropertyName()) + ? NAMED_PROPERTY + : KEYED_PROPERTY; + } + + // Evaluate LHS expression. + switch (assign_type) { + case VARIABLE: + // Nothing to do here. + break; + case NAMED_PROPERTY: + if (expr->is_compound()) { + // We need the receiver both on the stack and in the register. + VisitForStackValue(property->obj()); + __ ld(LoadIC::ReceiverRegister(), MemOperand(sp, 0)); + } else { + VisitForStackValue(property->obj()); + } + break; + case KEYED_PROPERTY: + // We need the key and receiver on both the stack and in v0 and a1. + if (expr->is_compound()) { + VisitForStackValue(property->obj()); + VisitForStackValue(property->key()); + __ ld(LoadIC::ReceiverRegister(), MemOperand(sp, 1 * kPointerSize)); + __ ld(LoadIC::NameRegister(), MemOperand(sp, 0)); + } else { + VisitForStackValue(property->obj()); + VisitForStackValue(property->key()); + } + break; + } + + // For compound assignments we need another deoptimization point after the + // variable/property load. + if (expr->is_compound()) { + { AccumulatorValueContext context(this); + switch (assign_type) { + case VARIABLE: + EmitVariableLoad(expr->target()->AsVariableProxy()); + PrepareForBailout(expr->target(), TOS_REG); + break; + case NAMED_PROPERTY: + EmitNamedPropertyLoad(property); + PrepareForBailoutForId(property->LoadId(), TOS_REG); + break; + case KEYED_PROPERTY: + EmitKeyedPropertyLoad(property); + PrepareForBailoutForId(property->LoadId(), TOS_REG); + break; + } + } + + Token::Value op = expr->binary_op(); + __ push(v0); // Left operand goes on the stack. + VisitForAccumulatorValue(expr->value()); + + OverwriteMode mode = expr->value()->ResultOverwriteAllowed() + ? 
OVERWRITE_RIGHT + : NO_OVERWRITE; + SetSourcePosition(expr->position() + 1); + AccumulatorValueContext context(this); + if (ShouldInlineSmiCase(op)) { + EmitInlineSmiBinaryOp(expr->binary_operation(), + op, + mode, + expr->target(), + expr->value()); + } else { + EmitBinaryOp(expr->binary_operation(), op, mode); + } + + // Deoptimization point in case the binary operation may have side effects. + PrepareForBailout(expr->binary_operation(), TOS_REG); + } else { + VisitForAccumulatorValue(expr->value()); + } + + // Record source position before possible IC call. + SetSourcePosition(expr->position()); + + // Store the value. + switch (assign_type) { + case VARIABLE: + EmitVariableAssignment(expr->target()->AsVariableProxy()->var(), + expr->op()); + PrepareForBailoutForId(expr->AssignmentId(), TOS_REG); + context()->Plug(v0); + break; + case NAMED_PROPERTY: + EmitNamedPropertyAssignment(expr); + break; + case KEYED_PROPERTY: + EmitKeyedPropertyAssignment(expr); + break; + } +} + + +void FullCodeGenerator::VisitYield(Yield* expr) { + Comment cmnt(masm_, "[ Yield"); + // Evaluate yielded value first; the initial iterator definition depends on + // this. It stays on the stack while we update the iterator. + VisitForStackValue(expr->expression()); + + switch (expr->yield_kind()) { + case Yield::SUSPEND: + // Pop value from top-of-stack slot; box result into result register. + EmitCreateIteratorResult(false); + __ push(result_register()); + // Fall through. + case Yield::INITIAL: { + Label suspend, continuation, post_runtime, resume; + + __ jmp(&suspend); + + __ bind(&continuation); + __ jmp(&resume); + + __ bind(&suspend); + VisitForAccumulatorValue(expr->generator_object()); + DCHECK(continuation.pos() > 0 && Smi::IsValid(continuation.pos())); + __ li(a1, Operand(Smi::FromInt(continuation.pos()))); + __ sd(a1, FieldMemOperand(v0, JSGeneratorObject::kContinuationOffset)); + __ sd(cp, FieldMemOperand(v0, JSGeneratorObject::kContextOffset)); + __ mov(a1, cp); + __ RecordWriteField(v0, JSGeneratorObject::kContextOffset, a1, a2, + kRAHasBeenSaved, kDontSaveFPRegs); + __ Daddu(a1, fp, Operand(StandardFrameConstants::kExpressionsOffset)); + __ Branch(&post_runtime, eq, sp, Operand(a1)); + __ push(v0); // generator object + __ CallRuntime(Runtime::kSuspendJSGeneratorObject, 1); + __ ld(cp, MemOperand(fp, StandardFrameConstants::kContextOffset)); + __ bind(&post_runtime); + __ pop(result_register()); + EmitReturnSequence(); + + __ bind(&resume); + context()->Plug(result_register()); + break; + } + + case Yield::FINAL: { + VisitForAccumulatorValue(expr->generator_object()); + __ li(a1, Operand(Smi::FromInt(JSGeneratorObject::kGeneratorClosed))); + __ sd(a1, FieldMemOperand(result_register(), + JSGeneratorObject::kContinuationOffset)); + // Pop value from top-of-stack slot, box result into result register. + EmitCreateIteratorResult(true); + EmitUnwindBeforeReturn(); + EmitReturnSequence(); + break; + } + + case Yield::DELEGATING: { + VisitForStackValue(expr->generator_object()); + + // Initial stack layout is as follows: + // [sp + 1 * kPointerSize] iter + // [sp + 0 * kPointerSize] g + + Label l_catch, l_try, l_suspend, l_continuation, l_resume; + Label l_next, l_call; + Register load_receiver = LoadIC::ReceiverRegister(); + Register load_name = LoadIC::NameRegister(); + // Initial send value is undefined. 
+ __ LoadRoot(a0, Heap::kUndefinedValueRootIndex); + __ Branch(&l_next); + + // catch (e) { receiver = iter; f = 'throw'; arg = e; goto l_call; } + __ bind(&l_catch); + __ mov(a0, v0); + handler_table()->set(expr->index(), Smi::FromInt(l_catch.pos())); + __ LoadRoot(a2, Heap::kthrow_stringRootIndex); // "throw" + __ ld(a3, MemOperand(sp, 1 * kPointerSize)); // iter + __ Push(a2, a3, a0); // "throw", iter, except + __ jmp(&l_call); + + // try { received = %yield result } + // Shuffle the received result above a try handler and yield it without + // re-boxing. + __ bind(&l_try); + __ pop(a0); // result + __ PushTryHandler(StackHandler::CATCH, expr->index()); + const int handler_size = StackHandlerConstants::kSize; + __ push(a0); // result + __ jmp(&l_suspend); + __ bind(&l_continuation); + __ mov(a0, v0); + __ jmp(&l_resume); + __ bind(&l_suspend); + const int generator_object_depth = kPointerSize + handler_size; + __ ld(a0, MemOperand(sp, generator_object_depth)); + __ push(a0); // g + DCHECK(l_continuation.pos() > 0 && Smi::IsValid(l_continuation.pos())); + __ li(a1, Operand(Smi::FromInt(l_continuation.pos()))); + __ sd(a1, FieldMemOperand(a0, JSGeneratorObject::kContinuationOffset)); + __ sd(cp, FieldMemOperand(a0, JSGeneratorObject::kContextOffset)); + __ mov(a1, cp); + __ RecordWriteField(a0, JSGeneratorObject::kContextOffset, a1, a2, + kRAHasBeenSaved, kDontSaveFPRegs); + __ CallRuntime(Runtime::kSuspendJSGeneratorObject, 1); + __ ld(cp, MemOperand(fp, StandardFrameConstants::kContextOffset)); + __ pop(v0); // result + EmitReturnSequence(); + __ mov(a0, v0); + __ bind(&l_resume); // received in a0 + __ PopTryHandler(); + + // receiver = iter; f = 'next'; arg = received; + __ bind(&l_next); + __ LoadRoot(load_name, Heap::knext_stringRootIndex); // "next" + __ ld(a3, MemOperand(sp, 1 * kPointerSize)); // iter + __ Push(load_name, a3, a0); // "next", iter, received + + // result = receiver[f](arg); + __ bind(&l_call); + __ ld(load_receiver, MemOperand(sp, kPointerSize)); + __ ld(load_name, MemOperand(sp, 2 * kPointerSize)); + if (FLAG_vector_ics) { + __ li(LoadIC::SlotRegister(), + Operand(Smi::FromInt(expr->KeyedLoadFeedbackSlot()))); + } + Handle<Code> ic = isolate()->builtins()->KeyedLoadIC_Initialize(); + CallIC(ic, TypeFeedbackId::None()); + __ mov(a0, v0); + __ mov(a1, a0); + __ sd(a1, MemOperand(sp, 2 * kPointerSize)); + CallFunctionStub stub(isolate(), 1, CALL_AS_METHOD); + __ CallStub(&stub); + + __ ld(cp, MemOperand(fp, StandardFrameConstants::kContextOffset)); + __ Drop(1); // The function is still on the stack; drop it. 
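+  // v0 now holds the iterator result object returned by receiver[f](arg).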
+ + // if (!result.done) goto l_try; + __ Move(load_receiver, v0); + + __ push(load_receiver); // save result + __ LoadRoot(load_name, Heap::kdone_stringRootIndex); // "done" + if (FLAG_vector_ics) { + __ li(LoadIC::SlotRegister(), + Operand(Smi::FromInt(expr->DoneFeedbackSlot()))); + } + CallLoadIC(NOT_CONTEXTUAL); // v0=result.done + __ mov(a0, v0); + Handle<Code> bool_ic = ToBooleanStub::GetUninitialized(isolate()); + CallIC(bool_ic); + __ Branch(&l_try, eq, v0, Operand(zero_reg)); + + // result.value + __ pop(load_receiver); // result + __ LoadRoot(load_name, Heap::kvalue_stringRootIndex); // "value" + if (FLAG_vector_ics) { + __ li(LoadIC::SlotRegister(), + Operand(Smi::FromInt(expr->ValueFeedbackSlot()))); + } + CallLoadIC(NOT_CONTEXTUAL); // v0=result.value + context()->DropAndPlug(2, v0); // drop iter and g + break; + } + } +} + + +void FullCodeGenerator::EmitGeneratorResume(Expression *generator, + Expression *value, + JSGeneratorObject::ResumeMode resume_mode) { + // The value stays in a0, and is ultimately read by the resumed generator, as + // if CallRuntime(Runtime::kSuspendJSGeneratorObject) returned it. Or it + // is read to throw the value when the resumed generator is already closed. + // a1 will hold the generator object until the activation has been resumed. + VisitForStackValue(generator); + VisitForAccumulatorValue(value); + __ pop(a1); + + // Check generator state. + Label wrong_state, closed_state, done; + __ ld(a3, FieldMemOperand(a1, JSGeneratorObject::kContinuationOffset)); + STATIC_ASSERT(JSGeneratorObject::kGeneratorExecuting < 0); + STATIC_ASSERT(JSGeneratorObject::kGeneratorClosed == 0); + __ Branch(&closed_state, eq, a3, Operand(zero_reg)); + __ Branch(&wrong_state, lt, a3, Operand(zero_reg)); + + // Load suspended function and context. + __ ld(cp, FieldMemOperand(a1, JSGeneratorObject::kContextOffset)); + __ ld(a4, FieldMemOperand(a1, JSGeneratorObject::kFunctionOffset)); + + // Load receiver and store as the first argument. + __ ld(a2, FieldMemOperand(a1, JSGeneratorObject::kReceiverOffset)); + __ push(a2); + + // Push holes for the rest of the arguments to the generator function. + __ ld(a3, FieldMemOperand(a4, JSFunction::kSharedFunctionInfoOffset)); + // The argument count is stored as int32_t on 64-bit platforms. + // TODO(plind): Smi on 32-bit platforms. + __ lw(a3, + FieldMemOperand(a3, SharedFunctionInfo::kFormalParameterCountOffset)); + __ LoadRoot(a2, Heap::kTheHoleValueRootIndex); + Label push_argument_holes, push_frame; + __ bind(&push_argument_holes); + __ Dsubu(a3, a3, Operand(1)); + __ Branch(&push_frame, lt, a3, Operand(zero_reg)); + __ push(a2); + __ jmp(&push_argument_holes); + + // Enter a new JavaScript frame, and initialize its slots as they were when + // the generator was suspended. + Label resume_frame; + __ bind(&push_frame); + __ Call(&resume_frame); + __ jmp(&done); + __ bind(&resume_frame); + // ra = return address. + // fp = caller's frame pointer. + // cp = callee's context, + // a4 = callee's JS function. + __ Push(ra, fp, cp, a4); + // Adjust FP to point to saved FP. + __ Daddu(fp, sp, 2 * kPointerSize); + + // Load the operand stack size. + __ ld(a3, FieldMemOperand(a1, JSGeneratorObject::kOperandStackOffset)); + __ ld(a3, FieldMemOperand(a3, FixedArray::kLengthOffset)); + __ SmiUntag(a3); + + // If we are sending a value and there is no operand stack, we can jump back + // in directly. 
+ if (resume_mode == JSGeneratorObject::NEXT) { + Label slow_resume; + __ Branch(&slow_resume, ne, a3, Operand(zero_reg)); + __ ld(a3, FieldMemOperand(a4, JSFunction::kCodeEntryOffset)); + __ ld(a2, FieldMemOperand(a1, JSGeneratorObject::kContinuationOffset)); + __ SmiUntag(a2); + __ Daddu(a3, a3, Operand(a2)); + __ li(a2, Operand(Smi::FromInt(JSGeneratorObject::kGeneratorExecuting))); + __ sd(a2, FieldMemOperand(a1, JSGeneratorObject::kContinuationOffset)); + __ Jump(a3); + __ bind(&slow_resume); + } + + // Otherwise, we push holes for the operand stack and call the runtime to fix + // up the stack and the handlers. + Label push_operand_holes, call_resume; + __ bind(&push_operand_holes); + __ Dsubu(a3, a3, Operand(1)); + __ Branch(&call_resume, lt, a3, Operand(zero_reg)); + __ push(a2); + __ Branch(&push_operand_holes); + __ bind(&call_resume); + DCHECK(!result_register().is(a1)); + __ Push(a1, result_register()); + __ Push(Smi::FromInt(resume_mode)); + __ CallRuntime(Runtime::kResumeJSGeneratorObject, 3); + // Not reached: the runtime call returns elsewhere. + __ stop("not-reached"); + + // Reach here when generator is closed. + __ bind(&closed_state); + if (resume_mode == JSGeneratorObject::NEXT) { + // Return completed iterator result when generator is closed. + __ LoadRoot(a2, Heap::kUndefinedValueRootIndex); + __ push(a2); + // Pop value from top-of-stack slot; box result into result register. + EmitCreateIteratorResult(true); + } else { + // Throw the provided value. + __ push(a0); + __ CallRuntime(Runtime::kThrow, 1); + } + __ jmp(&done); + + // Throw error if we attempt to operate on a running generator. + __ bind(&wrong_state); + __ push(a1); + __ CallRuntime(Runtime::kThrowGeneratorStateError, 1); + + __ bind(&done); + context()->Plug(result_register()); +} + + +void FullCodeGenerator::EmitCreateIteratorResult(bool done) { + Label gc_required; + Label allocated; + + Handle<Map> map(isolate()->native_context()->iterator_result_map()); + + __ Allocate(map->instance_size(), v0, a2, a3, &gc_required, TAG_OBJECT); + __ jmp(&allocated); + + __ bind(&gc_required); + __ Push(Smi::FromInt(map->instance_size())); + __ CallRuntime(Runtime::kAllocateInNewSpace, 1); + __ ld(context_register(), + MemOperand(fp, StandardFrameConstants::kContextOffset)); + + __ bind(&allocated); + __ li(a1, Operand(map)); + __ pop(a2); + __ li(a3, Operand(isolate()->factory()->ToBoolean(done))); + __ li(a4, Operand(isolate()->factory()->empty_fixed_array())); + DCHECK_EQ(map->instance_size(), 5 * kPointerSize); + __ sd(a1, FieldMemOperand(v0, HeapObject::kMapOffset)); + __ sd(a4, FieldMemOperand(v0, JSObject::kPropertiesOffset)); + __ sd(a4, FieldMemOperand(v0, JSObject::kElementsOffset)); + __ sd(a2, + FieldMemOperand(v0, JSGeneratorObject::kResultValuePropertyOffset)); + __ sd(a3, + FieldMemOperand(v0, JSGeneratorObject::kResultDonePropertyOffset)); + + // Only the value field needs a write barrier, as the other values are in the + // root set. 
+ __ RecordWriteField(v0, JSGeneratorObject::kResultValuePropertyOffset, + a2, a3, kRAHasBeenSaved, kDontSaveFPRegs); +} + + +void FullCodeGenerator::EmitNamedPropertyLoad(Property* prop) { + SetSourcePosition(prop->position()); + Literal* key = prop->key()->AsLiteral(); + __ li(LoadIC::NameRegister(), Operand(key->value())); + if (FLAG_vector_ics) { + __ li(LoadIC::SlotRegister(), + Operand(Smi::FromInt(prop->PropertyFeedbackSlot()))); + CallLoadIC(NOT_CONTEXTUAL); + } else { + CallLoadIC(NOT_CONTEXTUAL, prop->PropertyFeedbackId()); + } +} + + +void FullCodeGenerator::EmitKeyedPropertyLoad(Property* prop) { + SetSourcePosition(prop->position()); + // Call keyed load IC. It has register arguments receiver and key. + Handle<Code> ic = isolate()->builtins()->KeyedLoadIC_Initialize(); + if (FLAG_vector_ics) { + __ li(LoadIC::SlotRegister(), + Operand(Smi::FromInt(prop->PropertyFeedbackSlot()))); + CallIC(ic); + } else { + CallIC(ic, prop->PropertyFeedbackId()); + } +} + + +void FullCodeGenerator::EmitInlineSmiBinaryOp(BinaryOperation* expr, + Token::Value op, + OverwriteMode mode, + Expression* left_expr, + Expression* right_expr) { + Label done, smi_case, stub_call; + + Register scratch1 = a2; + Register scratch2 = a3; + + // Get the arguments. + Register left = a1; + Register right = a0; + __ pop(left); + __ mov(a0, result_register()); + + // Perform combined smi check on both operands. + __ Or(scratch1, left, Operand(right)); + STATIC_ASSERT(kSmiTag == 0); + JumpPatchSite patch_site(masm_); + patch_site.EmitJumpIfSmi(scratch1, &smi_case); + + __ bind(&stub_call); + BinaryOpICStub stub(isolate(), op, mode); + CallIC(stub.GetCode(), expr->BinaryOperationFeedbackId()); + patch_site.EmitPatchInfo(); + __ jmp(&done); + + __ bind(&smi_case); + // Smi case. 
This code works the same way as the smi-smi case in the type
+  // recording binary operation stub.
+  switch (op) {
+    case Token::SAR:
+      __ GetLeastBitsFromSmi(scratch1, right, 5);
+      __ dsrav(right, left, scratch1);
+      __ And(v0, right, Operand(0xffffffff00000000L));
+      break;
+    case Token::SHL: {
+      __ SmiUntag(scratch1, left);
+      __ GetLeastBitsFromSmi(scratch2, right, 5);
+      __ dsllv(scratch1, scratch1, scratch2);
+      __ SmiTag(v0, scratch1);
+      break;
+    }
+    case Token::SHR: {
+      __ SmiUntag(scratch1, left);
+      __ GetLeastBitsFromSmi(scratch2, right, 5);
+      __ dsrlv(scratch1, scratch1, scratch2);
+      __ And(scratch2, scratch1, 0x80000000);
+      __ Branch(&stub_call, ne, scratch2, Operand(zero_reg));
+      __ SmiTag(v0, scratch1);
+      break;
+    }
+    case Token::ADD:
+      __ AdduAndCheckForOverflow(v0, left, right, scratch1);
+      __ BranchOnOverflow(&stub_call, scratch1);
+      break;
+    case Token::SUB:
+      __ SubuAndCheckForOverflow(v0, left, right, scratch1);
+      __ BranchOnOverflow(&stub_call, scratch1);
+      break;
+    case Token::MUL: {
+      __ Dmulh(v0, left, right);
+      __ dsra32(scratch2, v0, 0);
+      __ sra(scratch1, v0, 31);
+      __ Branch(USE_DELAY_SLOT, &stub_call, ne, scratch2, Operand(scratch1));
+      __ SmiTag(v0);
+      __ Branch(USE_DELAY_SLOT, &done, ne, v0, Operand(zero_reg));
+      __ Daddu(scratch2, right, left);
+      __ Branch(&stub_call, lt, scratch2, Operand(zero_reg));
+      DCHECK(Smi::FromInt(0) == 0);
+      __ mov(v0, zero_reg);
+      break;
+    }
+    case Token::BIT_OR:
+      __ Or(v0, left, Operand(right));
+      break;
+    case Token::BIT_AND:
+      __ And(v0, left, Operand(right));
+      break;
+    case Token::BIT_XOR:
+      __ Xor(v0, left, Operand(right));
+      break;
+    default:
+      UNREACHABLE();
+  }
+
+  __ bind(&done);
+  context()->Plug(v0);
+}
+
+
+void FullCodeGenerator::EmitBinaryOp(BinaryOperation* expr,
+                                     Token::Value op,
+                                     OverwriteMode mode) {
+  __ mov(a0, result_register());
+  __ pop(a1);
+  BinaryOpICStub stub(isolate(), op, mode);
+  JumpPatchSite patch_site(masm_);  // unbound, signals no inlined smi code.
+  CallIC(stub.GetCode(), expr->BinaryOperationFeedbackId());
+  patch_site.EmitPatchInfo();
+  context()->Plug(v0);
+}
+
+
+void FullCodeGenerator::EmitAssignment(Expression* expr) {
+  DCHECK(expr->IsValidReferenceExpression());
+
+  // Left-hand side can only be a property, a global or a (parameter or local)
+  // slot.
+  enum LhsKind { VARIABLE, NAMED_PROPERTY, KEYED_PROPERTY };
+  LhsKind assign_type = VARIABLE;
+  Property* prop = expr->AsProperty();
+  if (prop != NULL) {
+    assign_type = (prop->key()->IsPropertyName())
+        ? NAMED_PROPERTY
+        : KEYED_PROPERTY;
+  }
+
+  switch (assign_type) {
+    case VARIABLE: {
+      Variable* var = expr->AsVariableProxy()->var();
+      EffectContext context(this);
+      EmitVariableAssignment(var, Token::ASSIGN);
+      break;
+    }
+    case NAMED_PROPERTY: {
+      __ push(result_register());  // Preserve value.
+      VisitForAccumulatorValue(prop->obj());
+      __ mov(StoreIC::ReceiverRegister(), result_register());
+      __ pop(StoreIC::ValueRegister());  // Restore value.
+      __ li(StoreIC::NameRegister(),
+            Operand(prop->key()->AsLiteral()->value()));
+      CallStoreIC();
+      break;
+    }
+    case KEYED_PROPERTY: {
+      __ push(result_register());  // Preserve value.
+      VisitForStackValue(prop->obj());
+      VisitForAccumulatorValue(prop->key());
+      __ Move(KeyedStoreIC::NameRegister(), result_register());
+      __ Pop(KeyedStoreIC::ValueRegister(), KeyedStoreIC::ReceiverRegister());
+      Handle<Code> ic = strict_mode() == SLOPPY
+          ? isolate()->builtins()->KeyedStoreIC_Initialize()
+          : isolate()->builtins()->KeyedStoreIC_Initialize_Strict();
+      CallIC(ic);
+      break;
+    }
+  }
+  context()->Plug(v0);
+}
+
+
+void FullCodeGenerator::EmitStoreToStackLocalOrContextSlot(
+    Variable* var, MemOperand location) {
+  __ sd(result_register(), location);
+  if (var->IsContextSlot()) {
+    // RecordWrite may destroy all its register arguments.
+    __ Move(a3, result_register());
+    int offset = Context::SlotOffset(var->index());
+    __ RecordWriteContextSlot(
+        a1, offset, a3, a2, kRAHasBeenSaved, kDontSaveFPRegs);
+  }
+}
+
+
+void FullCodeGenerator::EmitVariableAssignment(Variable* var, Token::Value op) {
+  if (var->IsUnallocated()) {
+    // Global var, const, or let.
+    __ mov(StoreIC::ValueRegister(), result_register());
+    __ li(StoreIC::NameRegister(), Operand(var->name()));
+    __ ld(StoreIC::ReceiverRegister(), GlobalObjectOperand());
+    CallStoreIC();
+  } else if (op == Token::INIT_CONST_LEGACY) {
+    // Const initializers need a write barrier.
+    DCHECK(!var->IsParameter());  // No const parameters.
+    if (var->IsLookupSlot()) {
+      __ li(a0, Operand(var->name()));
+      __ Push(v0, cp, a0);  // Context and name.
+      __ CallRuntime(Runtime::kInitializeLegacyConstLookupSlot, 3);
+    } else {
+      DCHECK(var->IsStackAllocated() || var->IsContextSlot());
+      Label skip;
+      MemOperand location = VarOperand(var, a1);
+      __ ld(a2, location);
+      __ LoadRoot(at, Heap::kTheHoleValueRootIndex);
+      __ Branch(&skip, ne, a2, Operand(at));
+      EmitStoreToStackLocalOrContextSlot(var, location);
+      __ bind(&skip);
+    }
+
+  } else if (var->mode() == LET && op != Token::INIT_LET) {
+    // Non-initializing assignment to let variable needs a write barrier.
+    DCHECK(!var->IsLookupSlot());
+    DCHECK(var->IsStackAllocated() || var->IsContextSlot());
+    Label assign;
+    MemOperand location = VarOperand(var, a1);
+    __ ld(a3, location);
+    __ LoadRoot(a4, Heap::kTheHoleValueRootIndex);
+    __ Branch(&assign, ne, a3, Operand(a4));
+    __ li(a3, Operand(var->name()));
+    __ push(a3);
+    __ CallRuntime(Runtime::kThrowReferenceError, 1);
+    // Perform the assignment.
+    __ bind(&assign);
+    EmitStoreToStackLocalOrContextSlot(var, location);
+
+  } else if (!var->is_const_mode() || op == Token::INIT_CONST) {
+    if (var->IsLookupSlot()) {
+      // Assignment to var.
+      __ li(a4, Operand(var->name()));
+      __ li(a3, Operand(Smi::FromInt(strict_mode())));
+      // sp[0]  : mode.
+      // sp[8]  : name.
+      // sp[16] : context.
+      // sp[24] : value.
+      __ Push(v0, cp, a4, a3);
+      __ CallRuntime(Runtime::kStoreLookupSlot, 4);
+    } else {
+      // Assignment to var or initializing assignment to let/const in harmony
+      // mode.
+      DCHECK((var->IsStackAllocated() || var->IsContextSlot()));
+      MemOperand location = VarOperand(var, a1);
+      if (generate_debug_code_ && op == Token::INIT_LET) {
+        // Check for an uninitialized let binding.
+        __ ld(a2, location);
+        __ LoadRoot(a4, Heap::kTheHoleValueRootIndex);
+        __ Check(eq, kLetBindingReInitialization, a2, Operand(a4));
+      }
+      EmitStoreToStackLocalOrContextSlot(var, location);
+    }
+  }
+  // Non-initializing assignments to consts are ignored.
+}
+
+
+void FullCodeGenerator::EmitNamedPropertyAssignment(Assignment* expr) {
+  // Assignment to a property, using a named store IC.
+  Property* prop = expr->target()->AsProperty();
+  DCHECK(prop != NULL);
+  DCHECK(prop->key()->IsLiteral());
+
+  // Record source code position before IC call.
+ SetSourcePosition(expr->position()); + __ mov(StoreIC::ValueRegister(), result_register()); + __ li(StoreIC::NameRegister(), Operand(prop->key()->AsLiteral()->value())); + __ pop(StoreIC::ReceiverRegister()); + CallStoreIC(expr->AssignmentFeedbackId()); + + PrepareForBailoutForId(expr->AssignmentId(), TOS_REG); + context()->Plug(v0); +} + + +void FullCodeGenerator::EmitKeyedPropertyAssignment(Assignment* expr) { + // Assignment to a property, using a keyed store IC. + + // Record source code position before IC call. + SetSourcePosition(expr->position()); + // Call keyed store IC. + // The arguments are: + // - a0 is the value, + // - a1 is the key, + // - a2 is the receiver. + __ mov(KeyedStoreIC::ValueRegister(), result_register()); + __ Pop(KeyedStoreIC::ReceiverRegister(), KeyedStoreIC::NameRegister()); + DCHECK(KeyedStoreIC::ValueRegister().is(a0)); + + Handle<Code> ic = strict_mode() == SLOPPY + ? isolate()->builtins()->KeyedStoreIC_Initialize() + : isolate()->builtins()->KeyedStoreIC_Initialize_Strict(); + CallIC(ic, expr->AssignmentFeedbackId()); + + PrepareForBailoutForId(expr->AssignmentId(), TOS_REG); + context()->Plug(v0); +} + + +void FullCodeGenerator::VisitProperty(Property* expr) { + Comment cmnt(masm_, "[ Property"); + Expression* key = expr->key(); + + if (key->IsPropertyName()) { + VisitForAccumulatorValue(expr->obj()); + __ Move(LoadIC::ReceiverRegister(), v0); + EmitNamedPropertyLoad(expr); + PrepareForBailoutForId(expr->LoadId(), TOS_REG); + context()->Plug(v0); + } else { + VisitForStackValue(expr->obj()); + VisitForAccumulatorValue(expr->key()); + __ Move(LoadIC::NameRegister(), v0); + __ pop(LoadIC::ReceiverRegister()); + EmitKeyedPropertyLoad(expr); + context()->Plug(v0); + } +} + + +void FullCodeGenerator::CallIC(Handle<Code> code, + TypeFeedbackId id) { + ic_total_count_++; + __ Call(code, RelocInfo::CODE_TARGET, id); +} + + +// Code common for calls using the IC. +void FullCodeGenerator::EmitCallWithLoadIC(Call* expr) { + Expression* callee = expr->expression(); + + CallIC::CallType call_type = callee->IsVariableProxy() + ? CallIC::FUNCTION + : CallIC::METHOD; + + // Get the target function. + if (call_type == CallIC::FUNCTION) { + { StackValueContext context(this); + EmitVariableLoad(callee->AsVariableProxy()); + PrepareForBailout(callee, NO_REGISTERS); + } + // Push undefined as receiver. This is patched in the method prologue if it + // is a sloppy mode method. + __ Push(isolate()->factory()->undefined_value()); + } else { + // Load the function from the receiver. + DCHECK(callee->IsProperty()); + __ ld(LoadIC::ReceiverRegister(), MemOperand(sp, 0)); + EmitNamedPropertyLoad(callee->AsProperty()); + PrepareForBailoutForId(callee->AsProperty()->LoadId(), TOS_REG); + // Push the target function under the receiver. + __ ld(at, MemOperand(sp, 0)); + __ push(at); + __ sd(v0, MemOperand(sp, kPointerSize)); + } + + EmitCall(expr, call_type); +} + + +// Code common for calls using the IC. +void FullCodeGenerator::EmitKeyedCallWithLoadIC(Call* expr, + Expression* key) { + // Load the key. + VisitForAccumulatorValue(key); + + Expression* callee = expr->expression(); + + // Load the function from the receiver. + DCHECK(callee->IsProperty()); + __ ld(LoadIC::ReceiverRegister(), MemOperand(sp, 0)); + __ Move(LoadIC::NameRegister(), v0); + EmitKeyedPropertyLoad(callee->AsProperty()); + PrepareForBailoutForId(callee->AsProperty()->LoadId(), TOS_REG); + + // Push the target function under the receiver. 
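+  // That is, duplicate the receiver on top of the stack and overwrite the
+  // old receiver slot with the function.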
+  __ ld(at, MemOperand(sp, 0));
+  __ push(at);
+  __ sd(v0, MemOperand(sp, kPointerSize));
+
+  EmitCall(expr, CallIC::METHOD);
+}
+
+
+void FullCodeGenerator::EmitCall(Call* expr, CallIC::CallType call_type) {
+  // Load the arguments.
+  ZoneList<Expression*>* args = expr->arguments();
+  int arg_count = args->length();
+  { PreservePositionScope scope(masm()->positions_recorder());
+    for (int i = 0; i < arg_count; i++) {
+      VisitForStackValue(args->at(i));
+    }
+  }
+
+  // Record source position of the IC call.
+  SetSourcePosition(expr->position());
+  Handle<Code> ic = CallIC::initialize_stub(
+      isolate(), arg_count, call_type);
+  __ li(a3, Operand(Smi::FromInt(expr->CallFeedbackSlot())));
+  __ ld(a1, MemOperand(sp, (arg_count + 1) * kPointerSize));
+  // Don't assign a type feedback id to the IC, since type feedback is provided
+  // by the vector above.
+  CallIC(ic);
+  RecordJSReturnSite(expr);
+  // Restore context register.
+  __ ld(cp, MemOperand(fp, StandardFrameConstants::kContextOffset));
+  context()->DropAndPlug(1, v0);
+}
+
+
+void FullCodeGenerator::EmitResolvePossiblyDirectEval(int arg_count) {
+  // a6: copy of the first argument or undefined if it doesn't exist.
+  if (arg_count > 0) {
+    __ ld(a6, MemOperand(sp, arg_count * kPointerSize));
+  } else {
+    __ LoadRoot(a6, Heap::kUndefinedValueRootIndex);
+  }
+
+  // a5: the receiver of the enclosing function.
+  int receiver_offset = 2 + info_->scope()->num_parameters();
+  __ ld(a5, MemOperand(fp, receiver_offset * kPointerSize));
+
+  // a4: the strict mode.
+  __ li(a4, Operand(Smi::FromInt(strict_mode())));
+
+  // a1: the start position of the scope the call resides in.
+  __ li(a1, Operand(Smi::FromInt(scope()->start_position())));
+
+  // Do the runtime call.
+  __ Push(a6, a5, a4, a1);
+  __ CallRuntime(Runtime::kResolvePossiblyDirectEval, 5);
+}
+
+
+void FullCodeGenerator::VisitCall(Call* expr) {
+#ifdef DEBUG
+  // We want to verify that RecordJSReturnSite gets called on all paths
+  // through this function. Avoid early returns.
+  expr->return_is_recorded_ = false;
+#endif
+
+  Comment cmnt(masm_, "[ Call");
+  Expression* callee = expr->expression();
+  Call::CallType call_type = expr->GetCallType(isolate());
+
+  if (call_type == Call::POSSIBLY_EVAL_CALL) {
+    // In a call to eval, we first call RuntimeHidden_ResolvePossiblyDirectEval
+    // to resolve the function we need to call and the receiver of the
+    // call. Then we call the resolved function using the given
+    // arguments.
+    ZoneList<Expression*>* args = expr->arguments();
+    int arg_count = args->length();
+
+    { PreservePositionScope pos_scope(masm()->positions_recorder());
+      VisitForStackValue(callee);
+      __ LoadRoot(a2, Heap::kUndefinedValueRootIndex);
+      __ push(a2);  // Reserved receiver slot.
+
+      // Push the arguments.
+      for (int i = 0; i < arg_count; i++) {
+        VisitForStackValue(args->at(i));
+      }
+
+      // Push a copy of the function (found below the arguments) and
+      // resolve eval.
+      __ ld(a1, MemOperand(sp, (arg_count + 1) * kPointerSize));
+      __ push(a1);
+      EmitResolvePossiblyDirectEval(arg_count);
+
+      // The runtime call returns a pair of values in v0 (function) and
+      // v1 (receiver). Touch up the stack with the right values.
+      __ sd(v0, MemOperand(sp, (arg_count + 1) * kPointerSize));
+      __ sd(v1, MemOperand(sp, arg_count * kPointerSize));
+    }
+    // Record source position for debugger.
+    SetSourcePosition(expr->position());
+    CallFunctionStub stub(isolate(), arg_count, NO_CALL_FUNCTION_FLAGS);
+    __ ld(a1, MemOperand(sp, (arg_count + 1) * kPointerSize));
+    __ CallStub(&stub);
+    RecordJSReturnSite(expr);
+    // Restore context register.
+    __ ld(cp, MemOperand(fp, StandardFrameConstants::kContextOffset));
+    context()->DropAndPlug(1, v0);
+  } else if (call_type == Call::GLOBAL_CALL) {
+    EmitCallWithLoadIC(expr);
+  } else if (call_type == Call::LOOKUP_SLOT_CALL) {
+    // Call to a lookup slot (dynamically introduced variable).
+    VariableProxy* proxy = callee->AsVariableProxy();
+    Label slow, done;
+
+    { PreservePositionScope scope(masm()->positions_recorder());
+      // Generate code for loading from variables potentially shadowed
+      // by eval-introduced variables.
+      EmitDynamicLookupFastCase(proxy, NOT_INSIDE_TYPEOF, &slow, &done);
+    }
+
+    __ bind(&slow);
+    // Call the runtime to find the function to call (returned in v0)
+    // and the object holding it (returned in v1).
+    DCHECK(!context_register().is(a2));
+    __ li(a2, Operand(proxy->name()));
+    __ Push(context_register(), a2);
+    __ CallRuntime(Runtime::kLoadLookupSlot, 2);
+    __ Push(v0, v1);  // Function, receiver.
+
+    // If fast case code has been generated, emit code to push the
+    // function and receiver and have the slow path jump around this
+    // code.
+    if (done.is_linked()) {
+      Label call;
+      __ Branch(&call);
+      __ bind(&done);
+      // Push function.
+      __ push(v0);
+      // The receiver is implicitly the global receiver. Indicate this
+      // by passing undefined to the call function stub.
+      __ LoadRoot(a1, Heap::kUndefinedValueRootIndex);
+      __ push(a1);
+      __ bind(&call);
+    }
+
+    // The receiver is either the global receiver or an object found
+    // by LoadContextSlot.
+    EmitCall(expr);
+  } else if (call_type == Call::PROPERTY_CALL) {
+    Property* property = callee->AsProperty();
+    { PreservePositionScope scope(masm()->positions_recorder());
+      VisitForStackValue(property->obj());
+    }
+    if (property->key()->IsPropertyName()) {
+      EmitCallWithLoadIC(expr);
+    } else {
+      EmitKeyedCallWithLoadIC(expr, property->key());
+    }
+  } else {
+    DCHECK(call_type == Call::OTHER_CALL);
+    // Call to an arbitrary expression not handled specially above.
+    { PreservePositionScope scope(masm()->positions_recorder());
+      VisitForStackValue(callee);
+    }
+    __ LoadRoot(a1, Heap::kUndefinedValueRootIndex);
+    __ push(a1);
+    // Emit function call.
+    EmitCall(expr);
+  }
+
+#ifdef DEBUG
+  // RecordJSReturnSite should have been called.
+  DCHECK(expr->return_is_recorded_);
+#endif
+}
+
+
+void FullCodeGenerator::VisitCallNew(CallNew* expr) {
+  Comment cmnt(masm_, "[ CallNew");
+  // According to ECMA-262, section 11.2.2, page 44, the function
+  // expression in new calls must be evaluated before the
+  // arguments.
+
+  // Push constructor on the stack. If it's not a function it's used as
+  // receiver for CALL_NON_FUNCTION, otherwise the value on the stack is
+  // ignored.
+  VisitForStackValue(expr->expression());
+
+  // Push the arguments ("left-to-right") on the stack.
+  ZoneList<Expression*>* args = expr->arguments();
+  int arg_count = args->length();
+  for (int i = 0; i < arg_count; i++) {
+    VisitForStackValue(args->at(i));
+  }
+  // Call the construct call builtin that handles allocation and
+  // constructor invocation.
+  SetSourcePosition(expr->position());
+
+  // Load function and argument count into a1 and a0.
+  __ li(a0, Operand(arg_count));
+  __ ld(a1, MemOperand(sp, arg_count * kPointerSize));
+
+  // Record call targets in unoptimized code.
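+  // The CallConstructStub writes the constructor into the feedback vector
+  // slot loaded into a3 below, so optimized compilation can specialize on it.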
+ if (FLAG_pretenuring_call_new) { + EnsureSlotContainsAllocationSite(expr->AllocationSiteFeedbackSlot()); + DCHECK(expr->AllocationSiteFeedbackSlot() == + expr->CallNewFeedbackSlot() + 1); + } + + __ li(a2, FeedbackVector()); + __ li(a3, Operand(Smi::FromInt(expr->CallNewFeedbackSlot()))); + + CallConstructStub stub(isolate(), RECORD_CONSTRUCTOR_TARGET); + __ Call(stub.GetCode(), RelocInfo::CONSTRUCT_CALL); + PrepareForBailoutForId(expr->ReturnId(), TOS_REG); + context()->Plug(v0); +} + + +void FullCodeGenerator::EmitIsSmi(CallRuntime* expr) { + ZoneList<Expression*>* args = expr->arguments(); + DCHECK(args->length() == 1); + + VisitForAccumulatorValue(args->at(0)); + + Label materialize_true, materialize_false; + Label* if_true = NULL; + Label* if_false = NULL; + Label* fall_through = NULL; + context()->PrepareTest(&materialize_true, &materialize_false, + &if_true, &if_false, &fall_through); + + PrepareForBailoutBeforeSplit(expr, true, if_true, if_false); + __ SmiTst(v0, a4); + Split(eq, a4, Operand(zero_reg), if_true, if_false, fall_through); + + context()->Plug(if_true, if_false); +} + + +void FullCodeGenerator::EmitIsNonNegativeSmi(CallRuntime* expr) { + ZoneList<Expression*>* args = expr->arguments(); + DCHECK(args->length() == 1); + + VisitForAccumulatorValue(args->at(0)); + + Label materialize_true, materialize_false; + Label* if_true = NULL; + Label* if_false = NULL; + Label* fall_through = NULL; + context()->PrepareTest(&materialize_true, &materialize_false, + &if_true, &if_false, &fall_through); + + PrepareForBailoutBeforeSplit(expr, true, if_true, if_false); + __ NonNegativeSmiTst(v0, at); + Split(eq, at, Operand(zero_reg), if_true, if_false, fall_through); + + context()->Plug(if_true, if_false); +} + + +void FullCodeGenerator::EmitIsObject(CallRuntime* expr) { + ZoneList<Expression*>* args = expr->arguments(); + DCHECK(args->length() == 1); + + VisitForAccumulatorValue(args->at(0)); + + Label materialize_true, materialize_false; + Label* if_true = NULL; + Label* if_false = NULL; + Label* fall_through = NULL; + context()->PrepareTest(&materialize_true, &materialize_false, + &if_true, &if_false, &fall_through); + + __ JumpIfSmi(v0, if_false); + __ LoadRoot(at, Heap::kNullValueRootIndex); + __ Branch(if_true, eq, v0, Operand(at)); + __ ld(a2, FieldMemOperand(v0, HeapObject::kMapOffset)); + // Undetectable objects behave like undefined when tested with typeof. 
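+  // Such objects must therefore fail the IsObject test below.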
+ __ lbu(a1, FieldMemOperand(a2, Map::kBitFieldOffset)); + __ And(at, a1, Operand(1 << Map::kIsUndetectable)); + __ Branch(if_false, ne, at, Operand(zero_reg)); + __ lbu(a1, FieldMemOperand(a2, Map::kInstanceTypeOffset)); + __ Branch(if_false, lt, a1, Operand(FIRST_NONCALLABLE_SPEC_OBJECT_TYPE)); + PrepareForBailoutBeforeSplit(expr, true, if_true, if_false); + Split(le, a1, Operand(LAST_NONCALLABLE_SPEC_OBJECT_TYPE), + if_true, if_false, fall_through); + + context()->Plug(if_true, if_false); +} + + +void FullCodeGenerator::EmitIsSpecObject(CallRuntime* expr) { + ZoneList<Expression*>* args = expr->arguments(); + DCHECK(args->length() == 1); + + VisitForAccumulatorValue(args->at(0)); + + Label materialize_true, materialize_false; + Label* if_true = NULL; + Label* if_false = NULL; + Label* fall_through = NULL; + context()->PrepareTest(&materialize_true, &materialize_false, + &if_true, &if_false, &fall_through); + + __ JumpIfSmi(v0, if_false); + __ GetObjectType(v0, a1, a1); + PrepareForBailoutBeforeSplit(expr, true, if_true, if_false); + Split(ge, a1, Operand(FIRST_SPEC_OBJECT_TYPE), + if_true, if_false, fall_through); + + context()->Plug(if_true, if_false); +} + + +void FullCodeGenerator::EmitIsUndetectableObject(CallRuntime* expr) { + ZoneList<Expression*>* args = expr->arguments(); + DCHECK(args->length() == 1); + + VisitForAccumulatorValue(args->at(0)); + + Label materialize_true, materialize_false; + Label* if_true = NULL; + Label* if_false = NULL; + Label* fall_through = NULL; + context()->PrepareTest(&materialize_true, &materialize_false, + &if_true, &if_false, &fall_through); + + __ JumpIfSmi(v0, if_false); + __ ld(a1, FieldMemOperand(v0, HeapObject::kMapOffset)); + __ lbu(a1, FieldMemOperand(a1, Map::kBitFieldOffset)); + PrepareForBailoutBeforeSplit(expr, true, if_true, if_false); + __ And(at, a1, Operand(1 << Map::kIsUndetectable)); + Split(ne, at, Operand(zero_reg), if_true, if_false, fall_through); + + context()->Plug(if_true, if_false); +} + + +void FullCodeGenerator::EmitIsStringWrapperSafeForDefaultValueOf( + CallRuntime* expr) { + ZoneList<Expression*>* args = expr->arguments(); + DCHECK(args->length() == 1); + + VisitForAccumulatorValue(args->at(0)); + + Label materialize_true, materialize_false, skip_lookup; + Label* if_true = NULL; + Label* if_false = NULL; + Label* fall_through = NULL; + context()->PrepareTest(&materialize_true, &materialize_false, + &if_true, &if_false, &fall_through); + + __ AssertNotSmi(v0); + + __ ld(a1, FieldMemOperand(v0, HeapObject::kMapOffset)); + __ lbu(a4, FieldMemOperand(a1, Map::kBitField2Offset)); + __ And(a4, a4, 1 << Map::kStringWrapperSafeForDefaultValueOf); + __ Branch(&skip_lookup, ne, a4, Operand(zero_reg)); + + // Check for fast case object. Generate false result for slow case object. + __ ld(a2, FieldMemOperand(v0, JSObject::kPropertiesOffset)); + __ ld(a2, FieldMemOperand(a2, HeapObject::kMapOffset)); + __ LoadRoot(a4, Heap::kHashTableMapRootIndex); + __ Branch(if_false, eq, a2, Operand(a4)); + + // Look for valueOf name in the descriptor array, and indicate false if + // found. Since we omit an enumeration index check, if it is added via a + // transition that shares its descriptor array, this is a false positive. + Label entry, loop, done; + + // Skip loop if no descriptors are valid. + __ NumberOfOwnDescriptors(a3, a1); + __ Branch(&done, eq, a3, Operand(zero_reg)); + + __ LoadInstanceDescriptors(a1, a4); + // a4: descriptor array. + // a3: valid entries in the descriptor array. 
+  STATIC_ASSERT(kSmiTag == 0);
+  STATIC_ASSERT(kSmiTagSize == 1);
+  // The 32-bit port also asserts kPointerSize == 4; that assert does not
+  // apply here, since pointers are 8 bytes on MIPS64.
+  __ li(at, Operand(DescriptorArray::kDescriptorSize));
+  __ Dmul(a3, a3, at);
+  // Calculate location of the first key name.
+  __ Daddu(a4, a4, Operand(DescriptorArray::kFirstOffset - kHeapObjectTag));
+  // Calculate the end of the descriptor array.
+  __ mov(a2, a4);
+  __ dsll(a5, a3, kPointerSizeLog2);
+  __ Daddu(a2, a2, a5);
+
+  // Loop through all the keys in the descriptor array. If one of these is the
+  // string "valueOf" the result is false.
+  // The use of a6 to store the valueOf string assumes that it is not otherwise
+  // used in the loop below.
+  __ li(a6, Operand(isolate()->factory()->value_of_string()));
+  __ jmp(&entry);
+  __ bind(&loop);
+  __ ld(a3, MemOperand(a4, 0));
+  __ Branch(if_false, eq, a3, Operand(a6));
+  __ Daddu(a4, a4, Operand(DescriptorArray::kDescriptorSize * kPointerSize));
+  __ bind(&entry);
+  __ Branch(&loop, ne, a4, Operand(a2));
+
+  __ bind(&done);
+
+  // Set the bit in the map to indicate that there is no local valueOf field.
+  __ lbu(a2, FieldMemOperand(a1, Map::kBitField2Offset));
+  __ Or(a2, a2, Operand(1 << Map::kStringWrapperSafeForDefaultValueOf));
+  __ sb(a2, FieldMemOperand(a1, Map::kBitField2Offset));
+
+  __ bind(&skip_lookup);
+
+  // If a valueOf property is not found on the object check that its
+  // prototype is the un-modified String prototype. If not result is false.
+  __ ld(a2, FieldMemOperand(a1, Map::kPrototypeOffset));
+  __ JumpIfSmi(a2, if_false);
+  __ ld(a2, FieldMemOperand(a2, HeapObject::kMapOffset));
+  __ ld(a3, ContextOperand(cp, Context::GLOBAL_OBJECT_INDEX));
+  __ ld(a3, FieldMemOperand(a3, GlobalObject::kNativeContextOffset));
+  __ ld(a3, ContextOperand(a3, Context::STRING_FUNCTION_PROTOTYPE_MAP_INDEX));
+  PrepareForBailoutBeforeSplit(expr, true, if_true, if_false);
+  Split(eq, a2, Operand(a3), if_true, if_false, fall_through);
+
+  context()->Plug(if_true, if_false);
+}
+
+
+void FullCodeGenerator::EmitIsFunction(CallRuntime* expr) {
+  ZoneList<Expression*>* args = expr->arguments();
+  DCHECK(args->length() == 1);
+
+  VisitForAccumulatorValue(args->at(0));
+
+  Label materialize_true, materialize_false;
+  Label* if_true = NULL;
+  Label* if_false = NULL;
+  Label* fall_through = NULL;
+  context()->PrepareTest(&materialize_true, &materialize_false,
+                         &if_true, &if_false, &fall_through);
+
+  __ JumpIfSmi(v0, if_false);
+  __ GetObjectType(v0, a1, a2);
+  PrepareForBailoutBeforeSplit(expr, true, if_true, if_false);
+  __ Branch(if_true, eq, a2, Operand(JS_FUNCTION_TYPE));
+  __ Branch(if_false);
+
+  context()->Plug(if_true, if_false);
+}
+
+
+void FullCodeGenerator::EmitIsMinusZero(CallRuntime* expr) {
+  ZoneList<Expression*>* args = expr->arguments();
+  DCHECK(args->length() == 1);
+
+  VisitForAccumulatorValue(args->at(0));
+
+  Label materialize_true, materialize_false;
+  Label* if_true = NULL;
+  Label* if_false = NULL;
+  Label* fall_through = NULL;
+  context()->PrepareTest(&materialize_true, &materialize_false,
+                         &if_true, &if_false, &fall_through);
+
+  __ CheckMap(v0, a1, Heap::kHeapNumberMapRootIndex, if_false, DO_SMI_CHECK);
+  __ lwu(a2, FieldMemOperand(v0, HeapNumber::kExponentOffset));
+  __ lwu(a1, FieldMemOperand(v0, HeapNumber::kMantissaOffset));
+  __ li(a4, 0x80000000);
+  Label not_nan;
+  __ Branch(&not_nan, ne, a2, Operand(a4));
+  __ mov(a4, zero_reg);
+  __ mov(a2, a1);
+  __ bind(&not_nan);
+
+  PrepareForBailoutBeforeSplit(expr, true, if_true, if_false);
+  Split(eq, a2, Operand(a4), if_true, if_false,
fall_through); + + context()->Plug(if_true, if_false); +} + + +void FullCodeGenerator::EmitIsArray(CallRuntime* expr) { + ZoneList<Expression*>* args = expr->arguments(); + DCHECK(args->length() == 1); + + VisitForAccumulatorValue(args->at(0)); + + Label materialize_true, materialize_false; + Label* if_true = NULL; + Label* if_false = NULL; + Label* fall_through = NULL; + context()->PrepareTest(&materialize_true, &materialize_false, + &if_true, &if_false, &fall_through); + + __ JumpIfSmi(v0, if_false); + __ GetObjectType(v0, a1, a1); + PrepareForBailoutBeforeSplit(expr, true, if_true, if_false); + Split(eq, a1, Operand(JS_ARRAY_TYPE), + if_true, if_false, fall_through); + + context()->Plug(if_true, if_false); +} + + +void FullCodeGenerator::EmitIsRegExp(CallRuntime* expr) { + ZoneList<Expression*>* args = expr->arguments(); + DCHECK(args->length() == 1); + + VisitForAccumulatorValue(args->at(0)); + + Label materialize_true, materialize_false; + Label* if_true = NULL; + Label* if_false = NULL; + Label* fall_through = NULL; + context()->PrepareTest(&materialize_true, &materialize_false, + &if_true, &if_false, &fall_through); + + __ JumpIfSmi(v0, if_false); + __ GetObjectType(v0, a1, a1); + PrepareForBailoutBeforeSplit(expr, true, if_true, if_false); + Split(eq, a1, Operand(JS_REGEXP_TYPE), if_true, if_false, fall_through); + + context()->Plug(if_true, if_false); +} + + +void FullCodeGenerator::EmitIsConstructCall(CallRuntime* expr) { + DCHECK(expr->arguments()->length() == 0); + + Label materialize_true, materialize_false; + Label* if_true = NULL; + Label* if_false = NULL; + Label* fall_through = NULL; + context()->PrepareTest(&materialize_true, &materialize_false, + &if_true, &if_false, &fall_through); + + // Get the frame pointer for the calling frame. + __ ld(a2, MemOperand(fp, StandardFrameConstants::kCallerFPOffset)); + + // Skip the arguments adaptor frame if it exists. + Label check_frame_marker; + __ ld(a1, MemOperand(a2, StandardFrameConstants::kContextOffset)); + __ Branch(&check_frame_marker, ne, + a1, Operand(Smi::FromInt(StackFrame::ARGUMENTS_ADAPTOR))); + __ ld(a2, MemOperand(a2, StandardFrameConstants::kCallerFPOffset)); + + // Check the marker in the calling frame. + __ bind(&check_frame_marker); + __ ld(a1, MemOperand(a2, StandardFrameConstants::kMarkerOffset)); + PrepareForBailoutBeforeSplit(expr, true, if_true, if_false); + Split(eq, a1, Operand(Smi::FromInt(StackFrame::CONSTRUCT)), + if_true, if_false, fall_through); + + context()->Plug(if_true, if_false); +} + + +void FullCodeGenerator::EmitObjectEquals(CallRuntime* expr) { + ZoneList<Expression*>* args = expr->arguments(); + DCHECK(args->length() == 2); + + // Load the two objects into registers and perform the comparison. + VisitForStackValue(args->at(0)); + VisitForAccumulatorValue(args->at(1)); + + Label materialize_true, materialize_false; + Label* if_true = NULL; + Label* if_false = NULL; + Label* fall_through = NULL; + context()->PrepareTest(&materialize_true, &materialize_false, + &if_true, &if_false, &fall_through); + + __ pop(a1); + PrepareForBailoutBeforeSplit(expr, true, if_true, if_false); + Split(eq, v0, Operand(a1), if_true, if_false, fall_through); + + context()->Plug(if_true, if_false); +} + + +void FullCodeGenerator::EmitArguments(CallRuntime* expr) { + ZoneList<Expression*>* args = expr->arguments(); + DCHECK(args->length() == 1); + + // ArgumentsAccessStub expects the key in a1 and the formal + // parameter count in a0. 
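+  // The key is the value of the single argument, evaluated below.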
+ VisitForAccumulatorValue(args->at(0)); + __ mov(a1, v0); + __ li(a0, Operand(Smi::FromInt(info_->scope()->num_parameters()))); + ArgumentsAccessStub stub(isolate(), ArgumentsAccessStub::READ_ELEMENT); + __ CallStub(&stub); + context()->Plug(v0); +} + + +void FullCodeGenerator::EmitArgumentsLength(CallRuntime* expr) { + DCHECK(expr->arguments()->length() == 0); + Label exit; + // Get the number of formal parameters. + __ li(v0, Operand(Smi::FromInt(info_->scope()->num_parameters()))); + + // Check if the calling frame is an arguments adaptor frame. + __ ld(a2, MemOperand(fp, StandardFrameConstants::kCallerFPOffset)); + __ ld(a3, MemOperand(a2, StandardFrameConstants::kContextOffset)); + __ Branch(&exit, ne, a3, + Operand(Smi::FromInt(StackFrame::ARGUMENTS_ADAPTOR))); + + // Arguments adaptor case: Read the arguments length from the + // adaptor frame. + __ ld(v0, MemOperand(a2, ArgumentsAdaptorFrameConstants::kLengthOffset)); + + __ bind(&exit); + context()->Plug(v0); +} + + +void FullCodeGenerator::EmitClassOf(CallRuntime* expr) { + ZoneList<Expression*>* args = expr->arguments(); + DCHECK(args->length() == 1); + Label done, null, function, non_function_constructor; + + VisitForAccumulatorValue(args->at(0)); + + // If the object is a smi, we return null. + __ JumpIfSmi(v0, &null); + + // Check that the object is a JS object but take special care of JS + // functions to make sure they have 'Function' as their class. + // Assume that there are only two callable types, and one of them is at + // either end of the type range for JS object types. Saves extra comparisons. + STATIC_ASSERT(NUM_OF_CALLABLE_SPEC_OBJECT_TYPES == 2); + __ GetObjectType(v0, v0, a1); // Map is now in v0. + __ Branch(&null, lt, a1, Operand(FIRST_SPEC_OBJECT_TYPE)); + + STATIC_ASSERT(FIRST_NONCALLABLE_SPEC_OBJECT_TYPE == + FIRST_SPEC_OBJECT_TYPE + 1); + __ Branch(&function, eq, a1, Operand(FIRST_SPEC_OBJECT_TYPE)); + + STATIC_ASSERT(LAST_NONCALLABLE_SPEC_OBJECT_TYPE == + LAST_SPEC_OBJECT_TYPE - 1); + __ Branch(&function, eq, a1, Operand(LAST_SPEC_OBJECT_TYPE)); + // Assume that there is no larger type. + STATIC_ASSERT(LAST_NONCALLABLE_SPEC_OBJECT_TYPE == LAST_TYPE - 1); + + // Check if the constructor in the map is a JS function. + __ ld(v0, FieldMemOperand(v0, Map::kConstructorOffset)); + __ GetObjectType(v0, a1, a1); + __ Branch(&non_function_constructor, ne, a1, Operand(JS_FUNCTION_TYPE)); + + // v0 now contains the constructor function. Grab the + // instance class name from there. + __ ld(v0, FieldMemOperand(v0, JSFunction::kSharedFunctionInfoOffset)); + __ ld(v0, FieldMemOperand(v0, SharedFunctionInfo::kInstanceClassNameOffset)); + __ Branch(&done); + + // Functions have class 'Function'. + __ bind(&function); + __ LoadRoot(v0, Heap::kfunction_class_stringRootIndex); + __ jmp(&done); + + // Objects with a non-function constructor have class 'Object'. + __ bind(&non_function_constructor); + __ LoadRoot(v0, Heap::kObject_stringRootIndex); + __ jmp(&done); + + // Non-JS objects have class null. + __ bind(&null); + __ LoadRoot(v0, Heap::kNullValueRootIndex); + + // All done. + __ bind(&done); + + context()->Plug(v0); +} + + +void FullCodeGenerator::EmitSubString(CallRuntime* expr) { + // Load the arguments on the stack and call the stub. 
+ SubStringStub stub(isolate()); + ZoneList<Expression*>* args = expr->arguments(); + DCHECK(args->length() == 3); + VisitForStackValue(args->at(0)); + VisitForStackValue(args->at(1)); + VisitForStackValue(args->at(2)); + __ CallStub(&stub); + context()->Plug(v0); +} + + +void FullCodeGenerator::EmitRegExpExec(CallRuntime* expr) { + // Load the arguments on the stack and call the stub. + RegExpExecStub stub(isolate()); + ZoneList<Expression*>* args = expr->arguments(); + DCHECK(args->length() == 4); + VisitForStackValue(args->at(0)); + VisitForStackValue(args->at(1)); + VisitForStackValue(args->at(2)); + VisitForStackValue(args->at(3)); + __ CallStub(&stub); + context()->Plug(v0); +} + + +void FullCodeGenerator::EmitValueOf(CallRuntime* expr) { + ZoneList<Expression*>* args = expr->arguments(); + DCHECK(args->length() == 1); + + VisitForAccumulatorValue(args->at(0)); // Load the object. + + Label done; + // If the object is a smi return the object. + __ JumpIfSmi(v0, &done); + // If the object is not a value type, return the object. + __ GetObjectType(v0, a1, a1); + __ Branch(&done, ne, a1, Operand(JS_VALUE_TYPE)); + + __ ld(v0, FieldMemOperand(v0, JSValue::kValueOffset)); + + __ bind(&done); + context()->Plug(v0); +} + + +void FullCodeGenerator::EmitDateField(CallRuntime* expr) { + ZoneList<Expression*>* args = expr->arguments(); + DCHECK(args->length() == 2); + DCHECK_NE(NULL, args->at(1)->AsLiteral()); + Smi* index = Smi::cast(*(args->at(1)->AsLiteral()->value())); + + VisitForAccumulatorValue(args->at(0)); // Load the object. + + Label runtime, done, not_date_object; + Register object = v0; + Register result = v0; + Register scratch0 = t1; + Register scratch1 = a1; + + __ JumpIfSmi(object, ¬_date_object); + __ GetObjectType(object, scratch1, scratch1); + __ Branch(¬_date_object, ne, scratch1, Operand(JS_DATE_TYPE)); + + if (index->value() == 0) { + __ ld(result, FieldMemOperand(object, JSDate::kValueOffset)); + __ jmp(&done); + } else { + if (index->value() < JSDate::kFirstUncachedField) { + ExternalReference stamp = ExternalReference::date_cache_stamp(isolate()); + __ li(scratch1, Operand(stamp)); + __ ld(scratch1, MemOperand(scratch1)); + __ ld(scratch0, FieldMemOperand(object, JSDate::kCacheStampOffset)); + __ Branch(&runtime, ne, scratch1, Operand(scratch0)); + __ ld(result, FieldMemOperand(object, JSDate::kValueOffset + + kPointerSize * index->value())); + __ jmp(&done); + } + __ bind(&runtime); + __ PrepareCallCFunction(2, scratch1); + __ li(a1, Operand(index)); + __ Move(a0, object); + __ CallCFunction(ExternalReference::get_date_field_function(isolate()), 2); + __ jmp(&done); + } + + __ bind(¬_date_object); + __ CallRuntime(Runtime::kThrowNotDateError, 0); + __ bind(&done); + context()->Plug(v0); +} + + +void FullCodeGenerator::EmitOneByteSeqStringSetChar(CallRuntime* expr) { + ZoneList<Expression*>* args = expr->arguments(); + DCHECK_EQ(3, args->length()); + + Register string = v0; + Register index = a1; + Register value = a2; + + VisitForStackValue(args->at(1)); // index + VisitForStackValue(args->at(2)); // value + VisitForAccumulatorValue(args->at(0)); // string + __ Pop(index, value); + + if (FLAG_debug_code) { + __ SmiTst(value, at); + __ Check(eq, kNonSmiValue, at, Operand(zero_reg)); + __ SmiTst(index, at); + __ Check(eq, kNonSmiIndex, at, Operand(zero_reg)); + __ SmiUntag(index, index); + static const uint32_t one_byte_seq_type = kSeqStringTag | kOneByteStringTag; + Register scratch = t1; + __ EmitSeqStringSetCharCheck( + string, index, value, scratch, 
one_byte_seq_type); + __ SmiTag(index, index); + } + + __ SmiUntag(value, value); + __ Daddu(at, + string, + Operand(SeqOneByteString::kHeaderSize - kHeapObjectTag)); + __ SmiUntag(index); + __ Daddu(at, at, index); + __ sb(value, MemOperand(at)); + context()->Plug(string); +} + + +void FullCodeGenerator::EmitTwoByteSeqStringSetChar(CallRuntime* expr) { + ZoneList<Expression*>* args = expr->arguments(); + DCHECK_EQ(3, args->length()); + + Register string = v0; + Register index = a1; + Register value = a2; + + VisitForStackValue(args->at(1)); // index + VisitForStackValue(args->at(2)); // value + VisitForAccumulatorValue(args->at(0)); // string + __ Pop(index, value); + + if (FLAG_debug_code) { + __ SmiTst(value, at); + __ Check(eq, kNonSmiValue, at, Operand(zero_reg)); + __ SmiTst(index, at); + __ Check(eq, kNonSmiIndex, at, Operand(zero_reg)); + __ SmiUntag(index, index); + static const uint32_t two_byte_seq_type = kSeqStringTag | kTwoByteStringTag; + Register scratch = t1; + __ EmitSeqStringSetCharCheck( + string, index, value, scratch, two_byte_seq_type); + __ SmiTag(index, index); + } + + __ SmiUntag(value, value); + __ Daddu(at, + string, + Operand(SeqTwoByteString::kHeaderSize - kHeapObjectTag)); + __ dsra(index, index, 32 - 1); + __ Daddu(at, at, index); + STATIC_ASSERT(kSmiTagSize == 1 && kSmiTag == 0); + __ sh(value, MemOperand(at)); + context()->Plug(string); +} + + +void FullCodeGenerator::EmitMathPow(CallRuntime* expr) { + // Load the arguments on the stack and call the runtime function. + ZoneList<Expression*>* args = expr->arguments(); + DCHECK(args->length() == 2); + VisitForStackValue(args->at(0)); + VisitForStackValue(args->at(1)); + MathPowStub stub(isolate(), MathPowStub::ON_STACK); + __ CallStub(&stub); + context()->Plug(v0); +} + + +void FullCodeGenerator::EmitSetValueOf(CallRuntime* expr) { + ZoneList<Expression*>* args = expr->arguments(); + DCHECK(args->length() == 2); + + VisitForStackValue(args->at(0)); // Load the object. + VisitForAccumulatorValue(args->at(1)); // Load the value. + __ pop(a1); // v0 = value. a1 = object. + + Label done; + // If the object is a smi, return the value. + __ JumpIfSmi(a1, &done); + + // If the object is not a value type, return the value. + __ GetObjectType(a1, a2, a2); + __ Branch(&done, ne, a2, Operand(JS_VALUE_TYPE)); + + // Store the value. + __ sd(v0, FieldMemOperand(a1, JSValue::kValueOffset)); + // Update the write barrier. Save the value as it will be + // overwritten by the write barrier code and is needed afterward. + __ mov(a2, v0); + __ RecordWriteField( + a1, JSValue::kValueOffset, a2, a3, kRAHasBeenSaved, kDontSaveFPRegs); + + __ bind(&done); + context()->Plug(v0); +} + + +void FullCodeGenerator::EmitNumberToString(CallRuntime* expr) { + ZoneList<Expression*>* args = expr->arguments(); + DCHECK_EQ(args->length(), 1); + + // Load the argument into a0 and call the stub. 
+ VisitForAccumulatorValue(args->at(0)); + __ mov(a0, result_register()); + + NumberToStringStub stub(isolate()); + __ CallStub(&stub); + context()->Plug(v0); +} + + +void FullCodeGenerator::EmitStringCharFromCode(CallRuntime* expr) { + ZoneList<Expression*>* args = expr->arguments(); + DCHECK(args->length() == 1); + + VisitForAccumulatorValue(args->at(0)); + + Label done; + StringCharFromCodeGenerator generator(v0, a1); + generator.GenerateFast(masm_); + __ jmp(&done); + + NopRuntimeCallHelper call_helper; + generator.GenerateSlow(masm_, call_helper); + + __ bind(&done); + context()->Plug(a1); +} + + +void FullCodeGenerator::EmitStringCharCodeAt(CallRuntime* expr) { + ZoneList<Expression*>* args = expr->arguments(); + DCHECK(args->length() == 2); + + VisitForStackValue(args->at(0)); + VisitForAccumulatorValue(args->at(1)); + __ mov(a0, result_register()); + + Register object = a1; + Register index = a0; + Register result = v0; + + __ pop(object); + + Label need_conversion; + Label index_out_of_range; + Label done; + StringCharCodeAtGenerator generator(object, + index, + result, + &need_conversion, + &need_conversion, + &index_out_of_range, + STRING_INDEX_IS_NUMBER); + generator.GenerateFast(masm_); + __ jmp(&done); + + __ bind(&index_out_of_range); + // When the index is out of range, the spec requires us to return + // NaN. + __ LoadRoot(result, Heap::kNanValueRootIndex); + __ jmp(&done); + + __ bind(&need_conversion); + // Load the undefined value into the result register, which will + // trigger conversion. + __ LoadRoot(result, Heap::kUndefinedValueRootIndex); + __ jmp(&done); + + NopRuntimeCallHelper call_helper; + generator.GenerateSlow(masm_, call_helper); + + __ bind(&done); + context()->Plug(result); +} + + +void FullCodeGenerator::EmitStringCharAt(CallRuntime* expr) { + ZoneList<Expression*>* args = expr->arguments(); + DCHECK(args->length() == 2); + + VisitForStackValue(args->at(0)); + VisitForAccumulatorValue(args->at(1)); + __ mov(a0, result_register()); + + Register object = a1; + Register index = a0; + Register scratch = a3; + Register result = v0; + + __ pop(object); + + Label need_conversion; + Label index_out_of_range; + Label done; + StringCharAtGenerator generator(object, + index, + scratch, + result, + &need_conversion, + &need_conversion, + &index_out_of_range, + STRING_INDEX_IS_NUMBER); + generator.GenerateFast(masm_); + __ jmp(&done); + + __ bind(&index_out_of_range); + // When the index is out of range, the spec requires us to return + // the empty string. + __ LoadRoot(result, Heap::kempty_stringRootIndex); + __ jmp(&done); + + __ bind(&need_conversion); + // Move smi zero into the result register, which will trigger + // conversion. + __ li(result, Operand(Smi::FromInt(0))); + __ jmp(&done); + + NopRuntimeCallHelper call_helper; + generator.GenerateSlow(masm_, call_helper); + + __ bind(&done); + context()->Plug(result); +} + + +void FullCodeGenerator::EmitStringAdd(CallRuntime* expr) { + ZoneList<Expression*>* args = expr->arguments(); + DCHECK_EQ(2, args->length()); + VisitForStackValue(args->at(0)); + VisitForAccumulatorValue(args->at(1)); + + __ pop(a1); + __ mov(a0, result_register()); // StringAddStub requires args in a0, a1. 
+ StringAddStub stub(isolate(), STRING_ADD_CHECK_BOTH, NOT_TENURED); + __ CallStub(&stub); + context()->Plug(v0); +} + + +void FullCodeGenerator::EmitStringCompare(CallRuntime* expr) { + ZoneList<Expression*>* args = expr->arguments(); + DCHECK_EQ(2, args->length()); + + VisitForStackValue(args->at(0)); + VisitForStackValue(args->at(1)); + + StringCompareStub stub(isolate()); + __ CallStub(&stub); + context()->Plug(v0); +} + + +void FullCodeGenerator::EmitCallFunction(CallRuntime* expr) { + ZoneList<Expression*>* args = expr->arguments(); + DCHECK(args->length() >= 2); + + int arg_count = args->length() - 2; // 2 ~ receiver and function. + for (int i = 0; i < arg_count + 1; i++) { + VisitForStackValue(args->at(i)); + } + VisitForAccumulatorValue(args->last()); // Function. + + Label runtime, done; + // Check for non-function argument (including proxy). + __ JumpIfSmi(v0, &runtime); + __ GetObjectType(v0, a1, a1); + __ Branch(&runtime, ne, a1, Operand(JS_FUNCTION_TYPE)); + + // InvokeFunction requires the function in a1. Move it in there. + __ mov(a1, result_register()); + ParameterCount count(arg_count); + __ InvokeFunction(a1, count, CALL_FUNCTION, NullCallWrapper()); + __ ld(cp, MemOperand(fp, StandardFrameConstants::kContextOffset)); + __ jmp(&done); + + __ bind(&runtime); + __ push(v0); + __ CallRuntime(Runtime::kCall, args->length()); + __ bind(&done); + + context()->Plug(v0); +} + + +void FullCodeGenerator::EmitRegExpConstructResult(CallRuntime* expr) { + RegExpConstructResultStub stub(isolate()); + ZoneList<Expression*>* args = expr->arguments(); + DCHECK(args->length() == 3); + VisitForStackValue(args->at(0)); + VisitForStackValue(args->at(1)); + VisitForAccumulatorValue(args->at(2)); + __ mov(a0, result_register()); + __ pop(a1); + __ pop(a2); + __ CallStub(&stub); + context()->Plug(v0); +} + + +void FullCodeGenerator::EmitGetFromCache(CallRuntime* expr) { + ZoneList<Expression*>* args = expr->arguments(); + DCHECK_EQ(2, args->length()); + + DCHECK_NE(NULL, args->at(0)->AsLiteral()); + int cache_id = Smi::cast(*(args->at(0)->AsLiteral()->value()))->value(); + + Handle<FixedArray> jsfunction_result_caches( + isolate()->native_context()->jsfunction_result_caches()); + if (jsfunction_result_caches->length() <= cache_id) { + __ Abort(kAttemptToUseUndefinedCache); + __ LoadRoot(v0, Heap::kUndefinedValueRootIndex); + context()->Plug(v0); + return; + } + + VisitForAccumulatorValue(args->at(1)); + + Register key = v0; + Register cache = a1; + __ ld(cache, ContextOperand(cp, Context::GLOBAL_OBJECT_INDEX)); + __ ld(cache, FieldMemOperand(cache, GlobalObject::kNativeContextOffset)); + __ ld(cache, + ContextOperand( + cache, Context::JSFUNCTION_RESULT_CACHES_INDEX)); + __ ld(cache, + FieldMemOperand(cache, FixedArray::OffsetOfElementAt(cache_id))); + + + Label done, not_found; + STATIC_ASSERT(kSmiTag == 0 && kSmiTagSize == 1); + __ ld(a2, FieldMemOperand(cache, JSFunctionResultCache::kFingerOffset)); + // a2 now holds finger offset as a smi. + __ Daddu(a3, cache, Operand(FixedArray::kHeaderSize - kHeapObjectTag)); + // a3 now points to the start of fixed array elements. + __ SmiScale(at, a2, kPointerSizeLog2); + __ daddu(a3, a3, at); + // a3 now points to key of indexed element of cache. + __ ld(a2, MemOperand(a3)); + __ Branch(¬_found, ne, key, Operand(a2)); + + __ ld(v0, MemOperand(a3, kPointerSize)); + __ Branch(&done); + + __ bind(¬_found); + // Call runtime to perform the lookup. 
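+  // (Illustrative, assuming the JSFunctionResultCache layout of inline
+  // (key, value) pairs: the hit path above reads the value one pointer
+  // after the matched key at the finger; on a miss the runtime call below
+  // performs the lookup and refreshes the cache entry.)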
+ __ Push(cache, key); + __ CallRuntime(Runtime::kGetFromCache, 2); + + __ bind(&done); + context()->Plug(v0); +} + + +void FullCodeGenerator::EmitHasCachedArrayIndex(CallRuntime* expr) { + ZoneList<Expression*>* args = expr->arguments(); + VisitForAccumulatorValue(args->at(0)); + + Label materialize_true, materialize_false; + Label* if_true = NULL; + Label* if_false = NULL; + Label* fall_through = NULL; + context()->PrepareTest(&materialize_true, &materialize_false, + &if_true, &if_false, &fall_through); + + __ lwu(a0, FieldMemOperand(v0, String::kHashFieldOffset)); + __ And(a0, a0, Operand(String::kContainsCachedArrayIndexMask)); + + PrepareForBailoutBeforeSplit(expr, true, if_true, if_false); + Split(eq, a0, Operand(zero_reg), if_true, if_false, fall_through); + + context()->Plug(if_true, if_false); +} + + +void FullCodeGenerator::EmitGetCachedArrayIndex(CallRuntime* expr) { + ZoneList<Expression*>* args = expr->arguments(); + DCHECK(args->length() == 1); + VisitForAccumulatorValue(args->at(0)); + + __ AssertString(v0); + + __ lwu(v0, FieldMemOperand(v0, String::kHashFieldOffset)); + __ IndexFromHash(v0, v0); + + context()->Plug(v0); +} + + +void FullCodeGenerator::EmitFastAsciiArrayJoin(CallRuntime* expr) { + Label bailout, done, one_char_separator, long_separator, + non_trivial_array, not_size_one_array, loop, + empty_separator_loop, one_char_separator_loop, + one_char_separator_loop_entry, long_separator_loop; + ZoneList<Expression*>* args = expr->arguments(); + DCHECK(args->length() == 2); + VisitForStackValue(args->at(1)); + VisitForAccumulatorValue(args->at(0)); + + // All aliases of the same register have disjoint lifetimes. + Register array = v0; + Register elements = no_reg; // Will be v0. + Register result = no_reg; // Will be v0. + Register separator = a1; + Register array_length = a2; + Register result_pos = no_reg; // Will be a2. + Register string_length = a3; + Register string = a4; + Register element = a5; + Register elements_end = a6; + Register scratch1 = a7; + Register scratch2 = t1; + Register scratch3 = t0; + + // Separator operand is on the stack. + __ pop(separator); + + // Check that the array is a JSArray. + __ JumpIfSmi(array, &bailout); + __ GetObjectType(array, scratch1, scratch2); + __ Branch(&bailout, ne, scratch2, Operand(JS_ARRAY_TYPE)); + + // Check that the array has fast elements. + __ CheckFastElements(scratch1, scratch2, &bailout); + + // If the array has length zero, return the empty string. + __ ld(array_length, FieldMemOperand(array, JSArray::kLengthOffset)); + __ SmiUntag(array_length); + __ Branch(&non_trivial_array, ne, array_length, Operand(zero_reg)); + __ LoadRoot(v0, Heap::kempty_stringRootIndex); + __ Branch(&done); + + __ bind(&non_trivial_array); + + // Get the FixedArray containing array's elements. + elements = array; + __ ld(elements, FieldMemOperand(array, JSArray::kElementsOffset)); + array = no_reg; // End of array's live range. + + // Check that all array elements are sequential ASCII strings, and + // accumulate the sum of their lengths, as a smi-encoded value. + __ mov(string_length, zero_reg); + __ Daddu(element, + elements, Operand(FixedArray::kHeaderSize - kHeapObjectTag)); + __ dsll(elements_end, array_length, kPointerSizeLog2); + __ Daddu(elements_end, element, elements_end); + // Loop condition: while (element < elements_end). + // Live values in registers: + // elements: Fixed array of strings. 
+ // array_length: Length of the fixed array of strings (not smi) + // separator: Separator string + // string_length: Accumulated sum of string lengths (smi). + // element: Current array element. + // elements_end: Array end. + if (generate_debug_code_) { + __ Assert(gt, kNoEmptyArraysHereInEmitFastAsciiArrayJoin, + array_length, Operand(zero_reg)); + } + __ bind(&loop); + __ ld(string, MemOperand(element)); + __ Daddu(element, element, kPointerSize); + __ JumpIfSmi(string, &bailout); + __ ld(scratch1, FieldMemOperand(string, HeapObject::kMapOffset)); + __ lbu(scratch1, FieldMemOperand(scratch1, Map::kInstanceTypeOffset)); + __ JumpIfInstanceTypeIsNotSequentialAscii(scratch1, scratch2, &bailout); + __ ld(scratch1, FieldMemOperand(string, SeqOneByteString::kLengthOffset)); + __ AdduAndCheckForOverflow(string_length, string_length, scratch1, scratch3); + __ BranchOnOverflow(&bailout, scratch3); + __ Branch(&loop, lt, element, Operand(elements_end)); + + // If array_length is 1, return elements[0], a string. + __ Branch(¬_size_one_array, ne, array_length, Operand(1)); + __ ld(v0, FieldMemOperand(elements, FixedArray::kHeaderSize)); + __ Branch(&done); + + __ bind(¬_size_one_array); + + // Live values in registers: + // separator: Separator string + // array_length: Length of the array. + // string_length: Sum of string lengths (smi). + // elements: FixedArray of strings. + + // Check that the separator is a flat ASCII string. + __ JumpIfSmi(separator, &bailout); + __ ld(scratch1, FieldMemOperand(separator, HeapObject::kMapOffset)); + __ lbu(scratch1, FieldMemOperand(scratch1, Map::kInstanceTypeOffset)); + __ JumpIfInstanceTypeIsNotSequentialAscii(scratch1, scratch2, &bailout); + + // Add (separator length times array_length) - separator length to the + // string_length to get the length of the result string. array_length is not + // smi but the other values are, so the result is a smi. + __ ld(scratch1, FieldMemOperand(separator, SeqOneByteString::kLengthOffset)); + __ Dsubu(string_length, string_length, Operand(scratch1)); + __ SmiUntag(scratch1); + __ Dmul(scratch2, array_length, scratch1); + // Check for smi overflow. No overflow if higher 33 bits of 64-bit result are + // zero. + __ dsra32(scratch1, scratch2, 0); + __ Branch(&bailout, ne, scratch2, Operand(zero_reg)); + __ SmiUntag(string_length); + __ AdduAndCheckForOverflow(string_length, string_length, scratch2, scratch3); + __ BranchOnOverflow(&bailout, scratch3); + + // Get first element in the array to free up the elements register to be used + // for the result. + __ Daddu(element, + elements, Operand(FixedArray::kHeaderSize - kHeapObjectTag)); + result = elements; // End of live range for elements. + elements = no_reg; + // Live values in registers: + // element: First array element + // separator: Separator string + // string_length: Length of result string (not smi) + // array_length: Length of the array. + __ AllocateAsciiString(result, + string_length, + scratch1, + scratch2, + elements_end, + &bailout); + // Prepare for looping. Set up elements_end to end of the array. Set + // result_pos to the position of the result where to write the first + // character. + __ dsll(elements_end, array_length, kPointerSizeLog2); + __ Daddu(elements_end, element, elements_end); + result_pos = array_length; // End of live range for array_length. + array_length = no_reg; + __ Daddu(result_pos, + result, + Operand(SeqOneByteString::kHeaderSize - kHeapObjectTag)); + + // Check the length of the separator. 
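+  // A note on the comparison below, assuming V8's 64-bit smi layout (the
+  // integer value lives in the upper 32 bits, so Smi::FromInt(n) == n << 32):
+  // the separator length loaded here is still smi-encoded, which lets
+  // Smi::FromInt(1) be compared against it as a raw 64-bit word, with no
+  // untagging needed.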
+ __ ld(scratch1, FieldMemOperand(separator, SeqOneByteString::kLengthOffset)); + __ li(at, Operand(Smi::FromInt(1))); + __ Branch(&one_char_separator, eq, scratch1, Operand(at)); + __ Branch(&long_separator, gt, scratch1, Operand(at)); + + // Empty separator case. + __ bind(&empty_separator_loop); + // Live values in registers: + // result_pos: the position to which we are currently copying characters. + // element: Current array element. + // elements_end: Array end. + + // Copy next array element to the result. + __ ld(string, MemOperand(element)); + __ Daddu(element, element, kPointerSize); + __ ld(string_length, FieldMemOperand(string, String::kLengthOffset)); + __ SmiUntag(string_length); + __ Daddu(string, string, SeqOneByteString::kHeaderSize - kHeapObjectTag); + __ CopyBytes(string, result_pos, string_length, scratch1); + // End while (element < elements_end). + __ Branch(&empty_separator_loop, lt, element, Operand(elements_end)); + DCHECK(result.is(v0)); + __ Branch(&done); + + // One-character separator case. + __ bind(&one_char_separator); + // Replace separator with its ASCII character value. + __ lbu(separator, FieldMemOperand(separator, SeqOneByteString::kHeaderSize)); + // Jump into the loop after the code that copies the separator, so the first + // element is not preceded by a separator. + __ jmp(&one_char_separator_loop_entry); + + __ bind(&one_char_separator_loop); + // Live values in registers: + // result_pos: the position to which we are currently copying characters. + // element: Current array element. + // elements_end: Array end. + // separator: Single separator ASCII char (in lower byte). + + // Copy the separator character to the result. + __ sb(separator, MemOperand(result_pos)); + __ Daddu(result_pos, result_pos, 1); + + // Copy next array element to the result. + __ bind(&one_char_separator_loop_entry); + __ ld(string, MemOperand(element)); + __ Daddu(element, element, kPointerSize); + __ ld(string_length, FieldMemOperand(string, String::kLengthOffset)); + __ SmiUntag(string_length); + __ Daddu(string, string, SeqOneByteString::kHeaderSize - kHeapObjectTag); + __ CopyBytes(string, result_pos, string_length, scratch1); + // End while (element < elements_end). + __ Branch(&one_char_separator_loop, lt, element, Operand(elements_end)); + DCHECK(result.is(v0)); + __ Branch(&done); + + // Long separator case (separator is more than one character). Entry is at the + // label long_separator below. + __ bind(&long_separator_loop); + // Live values in registers: + // result_pos: the position to which we are currently copying characters. + // element: Current array element. + // elements_end: Array end. + // separator: Separator string. + + // Copy the separator to the result. + __ ld(string_length, FieldMemOperand(separator, String::kLengthOffset)); + __ SmiUntag(string_length); + __ Daddu(string, + separator, + Operand(SeqOneByteString::kHeaderSize - kHeapObjectTag)); + __ CopyBytes(string, result_pos, string_length, scratch1); + + __ bind(&long_separator); + __ ld(string, MemOperand(element)); + __ Daddu(element, element, kPointerSize); + __ ld(string_length, FieldMemOperand(string, String::kLengthOffset)); + __ SmiUntag(string_length); + __ Daddu(string, string, SeqOneByteString::kHeaderSize - kHeapObjectTag); + __ CopyBytes(string, result_pos, string_length, scratch1); + // End while (element < elements_end). 
+ __ Branch(&long_separator_loop, lt, element, Operand(elements_end)); + DCHECK(result.is(v0)); + __ Branch(&done); + + __ bind(&bailout); + __ LoadRoot(v0, Heap::kUndefinedValueRootIndex); + __ bind(&done); + context()->Plug(v0); +} + + +void FullCodeGenerator::EmitDebugIsActive(CallRuntime* expr) { + DCHECK(expr->arguments()->length() == 0); + ExternalReference debug_is_active = + ExternalReference::debug_is_active_address(isolate()); + __ li(at, Operand(debug_is_active)); + __ lbu(v0, MemOperand(at)); + __ SmiTag(v0); + context()->Plug(v0); +} + + +void FullCodeGenerator::VisitCallRuntime(CallRuntime* expr) { + if (expr->function() != NULL && + expr->function()->intrinsic_type == Runtime::INLINE) { + Comment cmnt(masm_, "[ InlineRuntimeCall"); + EmitInlineRuntimeCall(expr); + return; + } + + Comment cmnt(masm_, "[ CallRuntime"); + ZoneList<Expression*>* args = expr->arguments(); + int arg_count = args->length(); + + if (expr->is_jsruntime()) { + // Push the builtins object as the receiver. + Register receiver = LoadIC::ReceiverRegister(); + __ ld(receiver, GlobalObjectOperand()); + __ ld(receiver, FieldMemOperand(receiver, GlobalObject::kBuiltinsOffset)); + __ push(receiver); + + // Load the function from the receiver. + __ li(LoadIC::NameRegister(), Operand(expr->name())); + if (FLAG_vector_ics) { + __ li(LoadIC::SlotRegister(), + Operand(Smi::FromInt(expr->CallRuntimeFeedbackSlot()))); + CallLoadIC(NOT_CONTEXTUAL); + } else { + CallLoadIC(NOT_CONTEXTUAL, expr->CallRuntimeFeedbackId()); + } + + // Push the target function under the receiver. + __ ld(at, MemOperand(sp, 0)); + __ push(at); + __ sd(v0, MemOperand(sp, kPointerSize)); + + // Push the arguments ("left-to-right"). + int arg_count = args->length(); + for (int i = 0; i < arg_count; i++) { + VisitForStackValue(args->at(i)); + } + + // Record source position of the IC call. + SetSourcePosition(expr->position()); + CallFunctionStub stub(isolate(), arg_count, NO_CALL_FUNCTION_FLAGS); + __ ld(a1, MemOperand(sp, (arg_count + 1) * kPointerSize)); + __ CallStub(&stub); + + // Restore context register. + __ ld(cp, MemOperand(fp, StandardFrameConstants::kContextOffset)); + + context()->DropAndPlug(1, v0); + } else { + // Push the arguments ("left-to-right"). + for (int i = 0; i < arg_count; i++) { + VisitForStackValue(args->at(i)); + } + + // Call the C runtime function. + __ CallRuntime(expr->function(), arg_count); + context()->Plug(v0); + } +} + + +void FullCodeGenerator::VisitUnaryOperation(UnaryOperation* expr) { + switch (expr->op()) { + case Token::DELETE: { + Comment cmnt(masm_, "[ UnaryOperation (DELETE)"); + Property* property = expr->expression()->AsProperty(); + VariableProxy* proxy = expr->expression()->AsVariableProxy(); + + if (property != NULL) { + VisitForStackValue(property->obj()); + VisitForStackValue(property->key()); + __ li(a1, Operand(Smi::FromInt(strict_mode()))); + __ push(a1); + __ InvokeBuiltin(Builtins::DELETE, CALL_FUNCTION); + context()->Plug(v0); + } else if (proxy != NULL) { + Variable* var = proxy->var(); + // Delete of an unqualified identifier is disallowed in strict mode + // but "delete this" is allowed. 
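+        // For example (illustrative): in sloppy mode "delete x" on a global
+        // falls through to the DELETE builtin below, while strict mode
+        // rejects it at parse time; "delete this" reaches this point in
+        // either mode, which is what the DCHECK below encodes.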
+ DCHECK(strict_mode() == SLOPPY || var->is_this()); + if (var->IsUnallocated()) { + __ ld(a2, GlobalObjectOperand()); + __ li(a1, Operand(var->name())); + __ li(a0, Operand(Smi::FromInt(SLOPPY))); + __ Push(a2, a1, a0); + __ InvokeBuiltin(Builtins::DELETE, CALL_FUNCTION); + context()->Plug(v0); + } else if (var->IsStackAllocated() || var->IsContextSlot()) { + // Result of deleting non-global, non-dynamic variables is false. + // The subexpression does not have side effects. + context()->Plug(var->is_this()); + } else { + // Non-global variable. Call the runtime to try to delete from the + // context where the variable was introduced. + DCHECK(!context_register().is(a2)); + __ li(a2, Operand(var->name())); + __ Push(context_register(), a2); + __ CallRuntime(Runtime::kDeleteLookupSlot, 2); + context()->Plug(v0); + } + } else { + // Result of deleting non-property, non-variable reference is true. + // The subexpression may have side effects. + VisitForEffect(expr->expression()); + context()->Plug(true); + } + break; + } + + case Token::VOID: { + Comment cmnt(masm_, "[ UnaryOperation (VOID)"); + VisitForEffect(expr->expression()); + context()->Plug(Heap::kUndefinedValueRootIndex); + break; + } + + case Token::NOT: { + Comment cmnt(masm_, "[ UnaryOperation (NOT)"); + if (context()->IsEffect()) { + // Unary NOT has no side effects so it's only necessary to visit the + // subexpression. Match the optimizing compiler by not branching. + VisitForEffect(expr->expression()); + } else if (context()->IsTest()) { + const TestContext* test = TestContext::cast(context()); + // The labels are swapped for the recursive call. + VisitForControl(expr->expression(), + test->false_label(), + test->true_label(), + test->fall_through()); + context()->Plug(test->true_label(), test->false_label()); + } else { + // We handle value contexts explicitly rather than simply visiting + // for control and plugging the control flow into the context, + // because we need to prepare a pair of extra administrative AST ids + // for the optimizing compiler. + DCHECK(context()->IsAccumulatorValue() || context()->IsStackValue()); + Label materialize_true, materialize_false, done; + VisitForControl(expr->expression(), + &materialize_false, + &materialize_true, + &materialize_true); + __ bind(&materialize_true); + PrepareForBailoutForId(expr->MaterializeTrueId(), NO_REGISTERS); + __ LoadRoot(v0, Heap::kTrueValueRootIndex); + if (context()->IsStackValue()) __ push(v0); + __ jmp(&done); + __ bind(&materialize_false); + PrepareForBailoutForId(expr->MaterializeFalseId(), NO_REGISTERS); + __ LoadRoot(v0, Heap::kFalseValueRootIndex); + if (context()->IsStackValue()) __ push(v0); + __ bind(&done); + } + break; + } + + case Token::TYPEOF: { + Comment cmnt(masm_, "[ UnaryOperation (TYPEOF)"); + { StackValueContext context(this); + VisitForTypeofValue(expr->expression()); + } + __ CallRuntime(Runtime::kTypeof, 1); + context()->Plug(v0); + break; + } + + default: + UNREACHABLE(); + } +} + + +void FullCodeGenerator::VisitCountOperation(CountOperation* expr) { + DCHECK(expr->expression()->IsValidReferenceExpression()); + + Comment cmnt(masm_, "[ CountOperation"); + SetSourcePosition(expr->position()); + + // Expression can only be a property, a global or a (parameter or local) + // slot. + enum LhsKind { VARIABLE, NAMED_PROPERTY, KEYED_PROPERTY }; + LhsKind assign_type = VARIABLE; + Property* prop = expr->expression()->AsProperty(); + // In case of a property we use the uninitialized expression context + // of the key to detect a named property. 
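+  // Illustrative mapping, assuming the usual AST shapes: "x++" has no
+  // Property node (VARIABLE), "o.x++" has a literal-name key
+  // (NAMED_PROPERTY), and "o[k]++" has a computed key (KEYED_PROPERTY).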
+  if (prop != NULL) {
+    assign_type =
+        (prop->key()->IsPropertyName()) ? NAMED_PROPERTY : KEYED_PROPERTY;
+  }
+
+  // Evaluate expression and get value.
+  if (assign_type == VARIABLE) {
+    DCHECK(expr->expression()->AsVariableProxy()->var() != NULL);
+    AccumulatorValueContext context(this);
+    EmitVariableLoad(expr->expression()->AsVariableProxy());
+  } else {
+    // Reserve space for result of postfix operation.
+    if (expr->is_postfix() && !context()->IsEffect()) {
+      __ li(at, Operand(Smi::FromInt(0)));
+      __ push(at);
+    }
+    if (assign_type == NAMED_PROPERTY) {
+      // Put the object both on the stack and in the register.
+      VisitForStackValue(prop->obj());
+      __ ld(LoadIC::ReceiverRegister(), MemOperand(sp, 0));
+      EmitNamedPropertyLoad(prop);
+    } else {
+      VisitForStackValue(prop->obj());
+      VisitForStackValue(prop->key());
+      __ ld(LoadIC::ReceiverRegister(), MemOperand(sp, 1 * kPointerSize));
+      __ ld(LoadIC::NameRegister(), MemOperand(sp, 0));
+      EmitKeyedPropertyLoad(prop);
+    }
+  }
+
+  // We need a second deoptimization point after loading the value
+  // in case evaluating the property load may have a side effect.
+  if (assign_type == VARIABLE) {
+    PrepareForBailout(expr->expression(), TOS_REG);
+  } else {
+    PrepareForBailoutForId(prop->LoadId(), TOS_REG);
+  }
+
+  // Inline smi case if we are in a loop.
+  Label stub_call, done;
+  JumpPatchSite patch_site(masm_);
+
+  int count_value = expr->op() == Token::INC ? 1 : -1;
+  __ mov(a0, v0);
+  if (ShouldInlineSmiCase(expr->op())) {
+    Label slow;
+    patch_site.EmitJumpIfNotSmi(v0, &slow);
+
+    // Save result for postfix expressions.
+    if (expr->is_postfix()) {
+      if (!context()->IsEffect()) {
+        // Save the result on the stack. If we have a named or keyed property
+        // we store the result under the receiver that is currently on top
+        // of the stack.
+        switch (assign_type) {
+          case VARIABLE:
+            __ push(v0);
+            break;
+          case NAMED_PROPERTY:
+            __ sd(v0, MemOperand(sp, kPointerSize));
+            break;
+          case KEYED_PROPERTY:
+            __ sd(v0, MemOperand(sp, 2 * kPointerSize));
+            break;
+        }
+      }
+    }
+
+    Register scratch1 = a1;
+    Register scratch2 = a4;
+    __ li(scratch1, Operand(Smi::FromInt(count_value)));
+    __ AdduAndCheckForOverflow(v0, v0, scratch1, scratch2);
+    __ BranchOnNoOverflow(&done, scratch2);
+    // Call stub. Undo operation first.
+    __ Move(v0, a0);
+    __ jmp(&stub_call);
+    __ bind(&slow);
+  }
+  ToNumberStub convert_stub(isolate());
+  __ CallStub(&convert_stub);
+
+  // Save result for postfix expressions.
+  if (expr->is_postfix()) {
+    if (!context()->IsEffect()) {
+      // Save the result on the stack. If we have a named or keyed property
+      // we store the result under the receiver that is currently on top
+      // of the stack.
+      switch (assign_type) {
+        case VARIABLE:
+          __ push(v0);
+          break;
+        case NAMED_PROPERTY:
+          __ sd(v0, MemOperand(sp, kPointerSize));
+          break;
+        case KEYED_PROPERTY:
+          __ sd(v0, MemOperand(sp, 2 * kPointerSize));
+          break;
+      }
+    }
+  }
+
+  __ bind(&stub_call);
+  __ mov(a1, v0);
+  __ li(a0, Operand(Smi::FromInt(count_value)));
+
+  // Record position before stub call.
+  SetSourcePosition(expr->position());
+
+  BinaryOpICStub stub(isolate(), Token::ADD, NO_OVERWRITE);
+  CallIC(stub.GetCode(), expr->CountBinOpFeedbackId());
+  patch_site.EmitPatchInfo();
+  __ bind(&done);
+
+  // Store the value returned in v0.
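+  // Sketch of the postfix bookkeeping, given the stack layout maintained
+  // above: for "o.x++" the saved old value sits one slot under the receiver
+  // (sp + kPointerSize); for "o[k]++" it sits under both receiver and key
+  // (sp + 2 * kPointerSize). That is why the saves above use those offsets
+  // and why PlugTOS() below can recover the old value.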
+  switch (assign_type) {
+    case VARIABLE:
+      if (expr->is_postfix()) {
+        { EffectContext context(this);
+          EmitVariableAssignment(expr->expression()->AsVariableProxy()->var(),
+                                 Token::ASSIGN);
+          PrepareForBailoutForId(expr->AssignmentId(), TOS_REG);
+          context.Plug(v0);
+        }
+        // For all contexts except EffectContext we have the result on
+        // top of the stack.
+        if (!context()->IsEffect()) {
+          context()->PlugTOS();
+        }
+      } else {
+        EmitVariableAssignment(expr->expression()->AsVariableProxy()->var(),
+                               Token::ASSIGN);
+        PrepareForBailoutForId(expr->AssignmentId(), TOS_REG);
+        context()->Plug(v0);
+      }
+      break;
+    case NAMED_PROPERTY: {
+      __ mov(StoreIC::ValueRegister(), result_register());
+      __ li(StoreIC::NameRegister(),
+            Operand(prop->key()->AsLiteral()->value()));
+      __ pop(StoreIC::ReceiverRegister());
+      CallStoreIC(expr->CountStoreFeedbackId());
+      PrepareForBailoutForId(expr->AssignmentId(), TOS_REG);
+      if (expr->is_postfix()) {
+        if (!context()->IsEffect()) {
+          context()->PlugTOS();
+        }
+      } else {
+        context()->Plug(v0);
+      }
+      break;
+    }
+    case KEYED_PROPERTY: {
+      __ mov(KeyedStoreIC::ValueRegister(), result_register());
+      __ Pop(KeyedStoreIC::ReceiverRegister(), KeyedStoreIC::NameRegister());
+      Handle<Code> ic = strict_mode() == SLOPPY
+          ? isolate()->builtins()->KeyedStoreIC_Initialize()
+          : isolate()->builtins()->KeyedStoreIC_Initialize_Strict();
+      CallIC(ic, expr->CountStoreFeedbackId());
+      PrepareForBailoutForId(expr->AssignmentId(), TOS_REG);
+      if (expr->is_postfix()) {
+        if (!context()->IsEffect()) {
+          context()->PlugTOS();
+        }
+      } else {
+        context()->Plug(v0);
+      }
+      break;
+    }
+  }
+}
+
+
+void FullCodeGenerator::VisitForTypeofValue(Expression* expr) {
+  DCHECK(!context()->IsEffect());
+  DCHECK(!context()->IsTest());
+  VariableProxy* proxy = expr->AsVariableProxy();
+  if (proxy != NULL && proxy->var()->IsUnallocated()) {
+    Comment cmnt(masm_, "[ Global variable");
+    __ ld(LoadIC::ReceiverRegister(), GlobalObjectOperand());
+    __ li(LoadIC::NameRegister(), Operand(proxy->name()));
+    if (FLAG_vector_ics) {
+      __ li(LoadIC::SlotRegister(),
+            Operand(Smi::FromInt(proxy->VariableFeedbackSlot())));
+    }
+    // Use a regular load, not a contextual load, to avoid a reference
+    // error.
+    CallLoadIC(NOT_CONTEXTUAL);
+    PrepareForBailout(expr, TOS_REG);
+    context()->Plug(v0);
+  } else if (proxy != NULL && proxy->var()->IsLookupSlot()) {
+    Comment cmnt(masm_, "[ Lookup slot");
+    Label done, slow;
+
+    // Generate code for loading from variables potentially shadowed
+    // by eval-introduced variables.
+    EmitDynamicLookupFastCase(proxy, INSIDE_TYPEOF, &slow, &done);
+
+    __ bind(&slow);
+    __ li(a0, Operand(proxy->name()));
+    __ Push(cp, a0);
+    __ CallRuntime(Runtime::kLoadLookupSlotNoReferenceError, 2);
+    PrepareForBailout(expr, TOS_REG);
+    __ bind(&done);
+
+    context()->Plug(v0);
+  } else {
+    // This expression cannot throw a reference error at the top level.
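+    // (Illustrative: "typeof x" on an unresolvable name must yield
+    // "undefined" rather than throw, which is why the slow path above uses
+    // kLoadLookupSlotNoReferenceError instead of the throwing lookup.)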
+ VisitInDuplicateContext(expr); + } +} + +void FullCodeGenerator::EmitLiteralCompareTypeof(Expression* expr, + Expression* sub_expr, + Handle<String> check) { + Label materialize_true, materialize_false; + Label* if_true = NULL; + Label* if_false = NULL; + Label* fall_through = NULL; + context()->PrepareTest(&materialize_true, &materialize_false, + &if_true, &if_false, &fall_through); + + { AccumulatorValueContext context(this); + VisitForTypeofValue(sub_expr); + } + PrepareForBailoutBeforeSplit(expr, true, if_true, if_false); + + Factory* factory = isolate()->factory(); + if (String::Equals(check, factory->number_string())) { + __ JumpIfSmi(v0, if_true); + __ ld(v0, FieldMemOperand(v0, HeapObject::kMapOffset)); + __ LoadRoot(at, Heap::kHeapNumberMapRootIndex); + Split(eq, v0, Operand(at), if_true, if_false, fall_through); + } else if (String::Equals(check, factory->string_string())) { + __ JumpIfSmi(v0, if_false); + // Check for undetectable objects => false. + __ GetObjectType(v0, v0, a1); + __ Branch(if_false, ge, a1, Operand(FIRST_NONSTRING_TYPE)); + __ lbu(a1, FieldMemOperand(v0, Map::kBitFieldOffset)); + __ And(a1, a1, Operand(1 << Map::kIsUndetectable)); + Split(eq, a1, Operand(zero_reg), + if_true, if_false, fall_through); + } else if (String::Equals(check, factory->symbol_string())) { + __ JumpIfSmi(v0, if_false); + __ GetObjectType(v0, v0, a1); + Split(eq, a1, Operand(SYMBOL_TYPE), if_true, if_false, fall_through); + } else if (String::Equals(check, factory->boolean_string())) { + __ LoadRoot(at, Heap::kTrueValueRootIndex); + __ Branch(if_true, eq, v0, Operand(at)); + __ LoadRoot(at, Heap::kFalseValueRootIndex); + Split(eq, v0, Operand(at), if_true, if_false, fall_through); + } else if (String::Equals(check, factory->undefined_string())) { + __ LoadRoot(at, Heap::kUndefinedValueRootIndex); + __ Branch(if_true, eq, v0, Operand(at)); + __ JumpIfSmi(v0, if_false); + // Check for undetectable objects => true. + __ ld(v0, FieldMemOperand(v0, HeapObject::kMapOffset)); + __ lbu(a1, FieldMemOperand(v0, Map::kBitFieldOffset)); + __ And(a1, a1, Operand(1 << Map::kIsUndetectable)); + Split(ne, a1, Operand(zero_reg), if_true, if_false, fall_through); + } else if (String::Equals(check, factory->function_string())) { + __ JumpIfSmi(v0, if_false); + STATIC_ASSERT(NUM_OF_CALLABLE_SPEC_OBJECT_TYPES == 2); + __ GetObjectType(v0, v0, a1); + __ Branch(if_true, eq, a1, Operand(JS_FUNCTION_TYPE)); + Split(eq, a1, Operand(JS_FUNCTION_PROXY_TYPE), + if_true, if_false, fall_through); + } else if (String::Equals(check, factory->object_string())) { + __ JumpIfSmi(v0, if_false); + __ LoadRoot(at, Heap::kNullValueRootIndex); + __ Branch(if_true, eq, v0, Operand(at)); + // Check for JS objects => true. + __ GetObjectType(v0, v0, a1); + __ Branch(if_false, lt, a1, Operand(FIRST_NONCALLABLE_SPEC_OBJECT_TYPE)); + __ lbu(a1, FieldMemOperand(v0, Map::kInstanceTypeOffset)); + __ Branch(if_false, gt, a1, Operand(LAST_NONCALLABLE_SPEC_OBJECT_TYPE)); + // Check for undetectable objects => false. 
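+    // (Illustrative: this matches the typeof table; "typeof null" is
+    // "object", while undetectable objects such as document.all answer
+    // "undefined" above and must be excluded from "object" here.)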
+ __ lbu(a1, FieldMemOperand(v0, Map::kBitFieldOffset)); + __ And(a1, a1, Operand(1 << Map::kIsUndetectable)); + Split(eq, a1, Operand(zero_reg), if_true, if_false, fall_through); + } else { + if (if_false != fall_through) __ jmp(if_false); + } + context()->Plug(if_true, if_false); +} + + +void FullCodeGenerator::VisitCompareOperation(CompareOperation* expr) { + Comment cmnt(masm_, "[ CompareOperation"); + SetSourcePosition(expr->position()); + + // First we try a fast inlined version of the compare when one of + // the operands is a literal. + if (TryLiteralCompare(expr)) return; + + // Always perform the comparison for its control flow. Pack the result + // into the expression's context after the comparison is performed. + Label materialize_true, materialize_false; + Label* if_true = NULL; + Label* if_false = NULL; + Label* fall_through = NULL; + context()->PrepareTest(&materialize_true, &materialize_false, + &if_true, &if_false, &fall_through); + + Token::Value op = expr->op(); + VisitForStackValue(expr->left()); + switch (op) { + case Token::IN: + VisitForStackValue(expr->right()); + __ InvokeBuiltin(Builtins::IN, CALL_FUNCTION); + PrepareForBailoutBeforeSplit(expr, false, NULL, NULL); + __ LoadRoot(a4, Heap::kTrueValueRootIndex); + Split(eq, v0, Operand(a4), if_true, if_false, fall_through); + break; + + case Token::INSTANCEOF: { + VisitForStackValue(expr->right()); + InstanceofStub stub(isolate(), InstanceofStub::kNoFlags); + __ CallStub(&stub); + PrepareForBailoutBeforeSplit(expr, true, if_true, if_false); + // The stub returns 0 for true. + Split(eq, v0, Operand(zero_reg), if_true, if_false, fall_through); + break; + } + + default: { + VisitForAccumulatorValue(expr->right()); + Condition cc = CompareIC::ComputeCondition(op); + __ mov(a0, result_register()); + __ pop(a1); + + bool inline_smi_code = ShouldInlineSmiCase(op); + JumpPatchSite patch_site(masm_); + if (inline_smi_code) { + Label slow_case; + __ Or(a2, a0, Operand(a1)); + patch_site.EmitJumpIfNotSmi(a2, &slow_case); + Split(cc, a1, Operand(a0), if_true, if_false, NULL); + __ bind(&slow_case); + } + // Record position and call the compare IC. + SetSourcePosition(expr->position()); + Handle<Code> ic = CompareIC::GetUninitialized(isolate(), op); + CallIC(ic, expr->CompareOperationFeedbackId()); + patch_site.EmitPatchInfo(); + PrepareForBailoutBeforeSplit(expr, true, if_true, if_false); + Split(cc, v0, Operand(zero_reg), if_true, if_false, fall_through); + } + } + + // Convert the result of the comparison into one expected for this + // expression's context. + context()->Plug(if_true, if_false); +} + + +void FullCodeGenerator::EmitLiteralCompareNil(CompareOperation* expr, + Expression* sub_expr, + NilValue nil) { + Label materialize_true, materialize_false; + Label* if_true = NULL; + Label* if_false = NULL; + Label* fall_through = NULL; + context()->PrepareTest(&materialize_true, &materialize_false, + &if_true, &if_false, &fall_through); + + VisitForAccumulatorValue(sub_expr); + PrepareForBailoutBeforeSplit(expr, true, if_true, if_false); + __ mov(a0, result_register()); + if (expr->op() == Token::EQ_STRICT) { + Heap::RootListIndex nil_value = nil == kNullValue ? 
+ Heap::kNullValueRootIndex : + Heap::kUndefinedValueRootIndex; + __ LoadRoot(a1, nil_value); + Split(eq, a0, Operand(a1), if_true, if_false, fall_through); + } else { + Handle<Code> ic = CompareNilICStub::GetUninitialized(isolate(), nil); + CallIC(ic, expr->CompareOperationFeedbackId()); + Split(ne, v0, Operand(zero_reg), if_true, if_false, fall_through); + } + context()->Plug(if_true, if_false); +} + + +void FullCodeGenerator::VisitThisFunction(ThisFunction* expr) { + __ ld(v0, MemOperand(fp, JavaScriptFrameConstants::kFunctionOffset)); + context()->Plug(v0); +} + + +Register FullCodeGenerator::result_register() { + return v0; +} + + +Register FullCodeGenerator::context_register() { + return cp; +} + + +void FullCodeGenerator::StoreToFrameField(int frame_offset, Register value) { + // DCHECK_EQ(POINTER_SIZE_ALIGN(frame_offset), frame_offset); + DCHECK(IsAligned(frame_offset, kPointerSize)); + // __ sw(value, MemOperand(fp, frame_offset)); + __ sd(value, MemOperand(fp, frame_offset)); +} + + +void FullCodeGenerator::LoadContextField(Register dst, int context_index) { + __ ld(dst, ContextOperand(cp, context_index)); +} + + +void FullCodeGenerator::PushFunctionArgumentForContextAllocation() { + Scope* declaration_scope = scope()->DeclarationScope(); + if (declaration_scope->is_global_scope() || + declaration_scope->is_module_scope()) { + // Contexts nested in the native context have a canonical empty function + // as their closure, not the anonymous closure containing the global + // code. Pass a smi sentinel and let the runtime look up the empty + // function. + __ li(at, Operand(Smi::FromInt(0))); + } else if (declaration_scope->is_eval_scope()) { + // Contexts created by a call to eval have the same closure as the + // context calling eval, not the anonymous closure containing the eval + // code. Fetch it from the context. + __ ld(at, ContextOperand(cp, Context::CLOSURE_INDEX)); + } else { + DCHECK(declaration_scope->is_function_scope()); + __ ld(at, MemOperand(fp, JavaScriptFrameConstants::kFunctionOffset)); + } + __ push(at); +} + + +// ---------------------------------------------------------------------------- +// Non-local control flow support. + +void FullCodeGenerator::EnterFinallyBlock() { + DCHECK(!result_register().is(a1)); + // Store result register while executing finally block. + __ push(result_register()); + // Cook return address in link register to stack (smi encoded Code* delta). + __ Dsubu(a1, ra, Operand(masm_->CodeObject())); + __ SmiTag(a1); + + // Store result register while executing finally block. + __ push(a1); + + // Store pending message while executing finally block. + ExternalReference pending_message_obj = + ExternalReference::address_of_pending_message_obj(isolate()); + __ li(at, Operand(pending_message_obj)); + __ ld(a1, MemOperand(at)); + __ push(a1); + + ExternalReference has_pending_message = + ExternalReference::address_of_has_pending_message(isolate()); + __ li(at, Operand(has_pending_message)); + __ ld(a1, MemOperand(at)); + __ SmiTag(a1); + __ push(a1); + + ExternalReference pending_message_script = + ExternalReference::address_of_pending_message_script(isolate()); + __ li(at, Operand(pending_message_script)); + __ ld(a1, MemOperand(at)); + __ push(a1); +} + + +void FullCodeGenerator::ExitFinallyBlock() { + DCHECK(!result_register().is(a1)); + // Restore pending message from stack. 
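+  // (The pops below mirror EnterFinallyBlock's pushes in reverse order:
+  // pending message script, has_pending_message, message object, then the
+  // smi-cooked return address, and finally the saved result register.)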
+  __ pop(a1);
+  ExternalReference pending_message_script =
+      ExternalReference::address_of_pending_message_script(isolate());
+  __ li(at, Operand(pending_message_script));
+  __ sd(a1, MemOperand(at));
+
+  __ pop(a1);
+  __ SmiUntag(a1);
+  ExternalReference has_pending_message =
+      ExternalReference::address_of_has_pending_message(isolate());
+  __ li(at, Operand(has_pending_message));
+  __ sd(a1, MemOperand(at));
+
+  __ pop(a1);
+  ExternalReference pending_message_obj =
+      ExternalReference::address_of_pending_message_obj(isolate());
+  __ li(at, Operand(pending_message_obj));
+  __ sd(a1, MemOperand(at));
+
+  // Restore the smi-cooked return address to a1.
+  __ pop(a1);
+
+  // Restore result register from stack.
+  __ pop(result_register());
+
+  // Uncook the return address and return.
+  __ SmiUntag(a1);
+  __ Daddu(at, a1, Operand(masm_->CodeObject()));
+  __ Jump(at);
+}
+
+
+#undef __
+
+#define __ ACCESS_MASM(masm())
+
+FullCodeGenerator::NestedStatement* FullCodeGenerator::TryFinally::Exit(
+    int* stack_depth,
+    int* context_length) {
+  // The macros used here must preserve the result register.
+
+  // Because the handler block contains the context of the finally
+  // code, we can restore it directly from there for the finally code
+  // rather than iteratively unwinding contexts via their previous
+  // links.
+  __ Drop(*stack_depth);  // Down to the handler block.
+  if (*context_length > 0) {
+    // Restore the context to its dedicated register and the stack.
+    __ ld(cp, MemOperand(sp, StackHandlerConstants::kContextOffset));
+    __ sd(cp, MemOperand(fp, StandardFrameConstants::kContextOffset));
+  }
+  __ PopTryHandler();
+  __ Call(finally_entry_);
+
+  *stack_depth = 0;
+  *context_length = 0;
+  return previous_;
+}
+
+
+#undef __
+
+
+void BackEdgeTable::PatchAt(Code* unoptimized_code,
+                            Address pc,
+                            BackEdgeState target_state,
+                            Code* replacement_code) {
+  static const int kInstrSize = Assembler::kInstrSize;
+  Address branch_address = pc - 8 * kInstrSize;
+  CodePatcher patcher(branch_address, 1);
+
+  switch (target_state) {
+    case INTERRUPT:
+      // slt  at, a3, zero_reg (in case of count based interrupts)
+      // beq  at, zero_reg, ok
+      // lui  t9, <interrupt stub address> upper
+      // ori  t9, <interrupt stub address> u-middle
+      // dsll t9, t9, 16
+      // ori  t9, <interrupt stub address> lower
+      // jalr t9
+      // nop
+      // ok-label ----- pc_after points here
+      patcher.masm()->slt(at, a3, zero_reg);
+      break;
+    case ON_STACK_REPLACEMENT:
+    case OSR_AFTER_STACK_CHECK:
+      // addiu at, zero_reg, 1
+      // beq  at, zero_reg, ok  ;; Not changed
+      // lui  t9, <on-stack replacement address> upper
+      // ori  t9, <on-stack replacement address> middle
+      // dsll t9, t9, 16
+      // ori  t9, <on-stack replacement address> lower
+      // jalr t9  ;; Not changed
+      // nop  ;; Not changed
+      // ok-label ----- pc_after points here
+      patcher.masm()->daddiu(at, zero_reg, 1);
+      break;
+  }
+  Address pc_immediate_load_address = pc - 6 * kInstrSize;
+  // Replace the stack check address in the load-immediate (6-instr sequence)
+  // with the entry address of the replacement code.
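+  // Worked offsets, assuming Assembler::kInstrSize == 4 on MIPS64: the
+  // patched slt/addiu sits at pc - 32 (8 instructions before the ok-label)
+  // and the lui/ori/dsll/ori load sequence starts at pc - 24
+  // (6 instructions), matching branch_address and
+  // pc_immediate_load_address above.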
+ Assembler::set_target_address_at(pc_immediate_load_address, + replacement_code->entry()); + + unoptimized_code->GetHeap()->incremental_marking()->RecordCodeTargetPatch( + unoptimized_code, pc_immediate_load_address, replacement_code); +} + + +BackEdgeTable::BackEdgeState BackEdgeTable::GetBackEdgeState( + Isolate* isolate, + Code* unoptimized_code, + Address pc) { + static const int kInstrSize = Assembler::kInstrSize; + Address branch_address = pc - 8 * kInstrSize; + Address pc_immediate_load_address = pc - 6 * kInstrSize; + + DCHECK(Assembler::IsBeq(Assembler::instr_at(pc - 7 * kInstrSize))); + if (!Assembler::IsAddImmediate(Assembler::instr_at(branch_address))) { + DCHECK(reinterpret_cast<uint64_t>( + Assembler::target_address_at(pc_immediate_load_address)) == + reinterpret_cast<uint64_t>( + isolate->builtins()->InterruptCheck()->entry())); + return INTERRUPT; + } + + DCHECK(Assembler::IsAddImmediate(Assembler::instr_at(branch_address))); + + if (reinterpret_cast<uint64_t>( + Assembler::target_address_at(pc_immediate_load_address)) == + reinterpret_cast<uint64_t>( + isolate->builtins()->OnStackReplacement()->entry())) { + return ON_STACK_REPLACEMENT; + } + + DCHECK(reinterpret_cast<uint64_t>( + Assembler::target_address_at(pc_immediate_load_address)) == + reinterpret_cast<uint64_t>( + isolate->builtins()->OsrAfterStackCheck()->entry())); + return OSR_AFTER_STACK_CHECK; +} + + +} } // namespace v8::internal + +#endif // V8_TARGET_ARCH_MIPS64 diff --git a/deps/v8/src/mips64/ic-mips64.cc b/deps/v8/src/mips64/ic-mips64.cc new file mode 100644 index 00000000000..5187342329d --- /dev/null +++ b/deps/v8/src/mips64/ic-mips64.cc @@ -0,0 +1,1266 @@ +// Copyright 2012 the V8 project authors. All rights reserved. +// Use of this source code is governed by a BSD-style license that can be +// found in the LICENSE file. + + + +#include "src/v8.h" + +#if V8_TARGET_ARCH_MIPS64 + +#include "src/code-stubs.h" +#include "src/codegen.h" +#include "src/ic-inl.h" +#include "src/runtime.h" +#include "src/stub-cache.h" + +namespace v8 { +namespace internal { + + +// ---------------------------------------------------------------------------- +// Static IC stub generators. +// + +#define __ ACCESS_MASM(masm) + + +static void GenerateGlobalInstanceTypeCheck(MacroAssembler* masm, + Register type, + Label* global_object) { + // Register usage: + // type: holds the receiver instance type on entry. + __ Branch(global_object, eq, type, Operand(JS_GLOBAL_OBJECT_TYPE)); + __ Branch(global_object, eq, type, Operand(JS_BUILTINS_OBJECT_TYPE)); + __ Branch(global_object, eq, type, Operand(JS_GLOBAL_PROXY_TYPE)); +} + + +// Helper function used from LoadIC GenerateNormal. +// +// elements: Property dictionary. It is not clobbered if a jump to the miss +// label is done. +// name: Property name. It is not clobbered if a jump to the miss label is +// done +// result: Register for the result. It is only updated if a jump to the miss +// label is not done. Can be the same as elements or name clobbering +// one of these in the case of not jumping to the miss label. +// The two scratch registers need to be different from elements, name and +// result. +// The generated code assumes that the receiver has slow properties, +// is not a global object and does not have interceptors. +// The address returned from GenerateStringDictionaryProbes() in scratch2 +// is used. 
+static void GenerateDictionaryLoad(MacroAssembler* masm,
+                                   Label* miss,
+                                   Register elements,
+                                   Register name,
+                                   Register result,
+                                   Register scratch1,
+                                   Register scratch2) {
+  // Main use of the scratch registers.
+  // scratch1: Used as temporary and to hold the capacity of the property
+  //           dictionary.
+  // scratch2: Used as temporary.
+  Label done;
+
+  // Probe the dictionary.
+  NameDictionaryLookupStub::GeneratePositiveLookup(masm,
+                                                   miss,
+                                                   &done,
+                                                   elements,
+                                                   name,
+                                                   scratch1,
+                                                   scratch2);
+
+  // If probing finds an entry check that the value is a normal
+  // property.
+  __ bind(&done);  // scratch2 == elements + kPointerSize * index.
+  const int kElementsStartOffset = NameDictionary::kHeaderSize +
+      NameDictionary::kElementsStartIndex * kPointerSize;
+  const int kDetailsOffset = kElementsStartOffset + 2 * kPointerSize;
+  __ ld(scratch1, FieldMemOperand(scratch2, kDetailsOffset));
+  __ And(at,
+         scratch1,
+         Operand(Smi::FromInt(PropertyDetails::TypeField::kMask)));
+  __ Branch(miss, ne, at, Operand(zero_reg));
+
+  // Get the value at the masked, scaled index and return.
+  __ ld(result,
+        FieldMemOperand(scratch2, kElementsStartOffset + 1 * kPointerSize));
+}
+
+
+// Helper function used from StoreIC::GenerateNormal.
+//
+// elements: Property dictionary. It is not clobbered if a jump to the miss
+//           label is done.
+// name:     Property name. It is not clobbered if a jump to the miss label is
+//           done
+// value:    The value to store.
+// The two scratch registers need to be different from elements, name and
+// result.
+// The generated code assumes that the receiver has slow properties,
+// is not a global object and does not have interceptors.
+// The address returned from GenerateStringDictionaryProbes() in scratch2
+// is used.
+static void GenerateDictionaryStore(MacroAssembler* masm,
+                                    Label* miss,
+                                    Register elements,
+                                    Register name,
+                                    Register value,
+                                    Register scratch1,
+                                    Register scratch2) {
+  // Main use of the scratch registers.
+  // scratch1: Used as temporary and to hold the capacity of the property
+  //           dictionary.
+  // scratch2: Used as temporary.
+  Label done;
+
+  // Probe the dictionary.
+  NameDictionaryLookupStub::GeneratePositiveLookup(masm,
+                                                   miss,
+                                                   &done,
+                                                   elements,
+                                                   name,
+                                                   scratch1,
+                                                   scratch2);
+
+  // If probing finds an entry in the dictionary check that the value
+  // is a normal property that is not read only.
+  __ bind(&done);  // scratch2 == elements + kPointerSize * index.
+  const int kElementsStartOffset = NameDictionary::kHeaderSize +
+      NameDictionary::kElementsStartIndex * kPointerSize;
+  const int kDetailsOffset = kElementsStartOffset + 2 * kPointerSize;
+  const int kTypeAndReadOnlyMask =
+      (PropertyDetails::TypeField::kMask |
+       PropertyDetails::AttributesField::encode(READ_ONLY));
+  __ ld(scratch1, FieldMemOperand(scratch2, kDetailsOffset));
+  __ And(at, scratch1, Operand(Smi::FromInt(kTypeAndReadOnlyMask)));
+  __ Branch(miss, ne, at, Operand(zero_reg));
+
+  // Store the value at the masked, scaled index and return.
+  const int kValueOffset = kElementsStartOffset + kPointerSize;
+  __ Daddu(scratch2, scratch2, Operand(kValueOffset - kHeapObjectTag));
+  __ sd(value, MemOperand(scratch2));
+
+  // Update the write barrier. Make sure not to clobber the value.
+  __ mov(scratch1, value);
+  __ RecordWrite(
+      elements, scratch2, scratch1, kRAHasNotBeenSaved, kDontSaveFPRegs);
+}
+
+
+// Checks the receiver for special cases (value type, slow case bits).
+// Falls through for regular JS object.
+static void GenerateKeyedLoadReceiverCheck(MacroAssembler* masm,
+                                           Register receiver,
+                                           Register map,
+                                           Register scratch,
+                                           int interceptor_bit,
+                                           Label* slow) {
+  // Check that the object isn't a smi.
+  __ JumpIfSmi(receiver, slow);
+  // Get the map of the receiver.
+  __ ld(map, FieldMemOperand(receiver, HeapObject::kMapOffset));
+  // Check bit field.
+  __ lbu(scratch, FieldMemOperand(map, Map::kBitFieldOffset));
+  __ And(at, scratch,
+         Operand((1 << Map::kIsAccessCheckNeeded) | (1 << interceptor_bit)));
+  __ Branch(slow, ne, at, Operand(zero_reg));
+  // Check that the object is some kind of JS object EXCEPT JS Value type.
+  // In the case that the object is a value-wrapper object,
+  // we enter the runtime system to make sure that indexing into string
+  // objects works as intended.
+  DCHECK(JS_OBJECT_TYPE > JS_VALUE_TYPE);
+  __ lbu(scratch, FieldMemOperand(map, Map::kInstanceTypeOffset));
+  __ Branch(slow, lt, scratch, Operand(JS_OBJECT_TYPE));
+}
+
+
+// Loads an indexed element from a fast case array.
+// If not_fast_array is NULL, doesn't perform the elements map check.
+static void GenerateFastArrayLoad(MacroAssembler* masm,
+                                  Register receiver,
+                                  Register key,
+                                  Register elements,
+                                  Register scratch1,
+                                  Register scratch2,
+                                  Register result,
+                                  Label* not_fast_array,
+                                  Label* out_of_range) {
+  // Register use:
+  //
+  // receiver - holds the receiver on entry.
+  //            Unchanged unless 'result' is the same register.
+  //
+  // key      - holds the smi key on entry.
+  //            Unchanged unless 'result' is the same register.
+  //
+  // elements - holds the elements of the receiver on exit.
+  //
+  // result   - holds the result on exit if the load succeeded.
+  //            Allowed to be the same as 'receiver' or 'key'.
+  //            Unchanged on bailout so 'receiver' and 'key' can be safely
+  //            used by further computation.
+  //
+  // Scratch registers:
+  //
+  // scratch1 - used to hold elements map and elements length.
+  //            Holds the elements map if not_fast_array branch is taken.
+  //
+  // scratch2 - used to hold the loaded value.
+
+  __ ld(elements, FieldMemOperand(receiver, JSObject::kElementsOffset));
+  if (not_fast_array != NULL) {
+    // Check that the object is in fast mode (not dictionary).
+    __ ld(scratch1, FieldMemOperand(elements, HeapObject::kMapOffset));
+    __ LoadRoot(at, Heap::kFixedArrayMapRootIndex);
+    __ Branch(not_fast_array, ne, scratch1, Operand(at));
+  } else {
+    __ AssertFastElements(elements);
+  }
+
+  // Check that the key (index) is within bounds.
+  __ ld(scratch1, FieldMemOperand(elements, FixedArray::kLengthOffset));
+  __ Branch(out_of_range, hs, key, Operand(scratch1));
+
+  // Fast case: Do the load.
+  __ Daddu(scratch1, elements,
+           Operand(FixedArray::kHeaderSize - kHeapObjectTag));
+  // The key is a smi.
+  STATIC_ASSERT(kSmiTag == 0 && kSmiTagSize < kPointerSizeLog2);
+  __ SmiScale(at, key, kPointerSizeLog2);
+  __ daddu(at, at, scratch1);
+  __ ld(scratch2, MemOperand(at));
+
+  __ LoadRoot(at, Heap::kTheHoleValueRootIndex);
+  // In case the loaded value is the_hole we have to consult GetProperty
+  // to ensure the prototype chain is searched.
+  __ Branch(out_of_range, eq, scratch2, Operand(at));
+  __ mov(result, scratch2);
+}
+
+
+// Checks whether a key is an array index string or a unique name.
+// Falls through if a key is a unique name.
+static void GenerateKeyNameCheck(MacroAssembler* masm,
+                                 Register key,
+                                 Register map,
+                                 Register hash,
+                                 Label* index_string,
+                                 Label* not_unique) {
+  // The key is not a smi.
+  Label unique;
+  // Is it a name?
+ __ GetObjectType(key, map, hash); + __ Branch(not_unique, hi, hash, Operand(LAST_UNIQUE_NAME_TYPE)); + STATIC_ASSERT(LAST_UNIQUE_NAME_TYPE == FIRST_NONSTRING_TYPE); + __ Branch(&unique, eq, hash, Operand(LAST_UNIQUE_NAME_TYPE)); + + // Is the string an array index, with cached numeric value? + __ lwu(hash, FieldMemOperand(key, Name::kHashFieldOffset)); + __ And(at, hash, Operand(Name::kContainsCachedArrayIndexMask)); + __ Branch(index_string, eq, at, Operand(zero_reg)); + + // Is the string internalized? We know it's a string, so a single + // bit test is enough. + // map: key map + __ lbu(hash, FieldMemOperand(map, Map::kInstanceTypeOffset)); + STATIC_ASSERT(kInternalizedTag == 0); + __ And(at, hash, Operand(kIsNotInternalizedMask)); + __ Branch(not_unique, ne, at, Operand(zero_reg)); + + __ bind(&unique); +} + + +void LoadIC::GenerateMegamorphic(MacroAssembler* masm) { + // The return address is in lr. + Register receiver = ReceiverRegister(); + Register name = NameRegister(); + DCHECK(receiver.is(a1)); + DCHECK(name.is(a2)); + + // Probe the stub cache. + Code::Flags flags = Code::RemoveTypeAndHolderFromFlags( + Code::ComputeHandlerFlags(Code::LOAD_IC)); + masm->isolate()->stub_cache()->GenerateProbe( + masm, flags, receiver, name, a3, a4, a5, a6); + + // Cache miss: Jump to runtime. + GenerateMiss(masm); +} + + +void LoadIC::GenerateNormal(MacroAssembler* masm) { + Register dictionary = a0; + DCHECK(!dictionary.is(ReceiverRegister())); + DCHECK(!dictionary.is(NameRegister())); + Label slow; + + __ ld(dictionary, + FieldMemOperand(ReceiverRegister(), JSObject::kPropertiesOffset)); + GenerateDictionaryLoad(masm, &slow, dictionary, NameRegister(), v0, a3, a4); + __ Ret(); + + // Dictionary load failed, go slow (but don't miss). + __ bind(&slow); + GenerateRuntimeGetProperty(masm); +} + + +// A register that isn't one of the parameters to the load ic. +static const Register LoadIC_TempRegister() { return a3; } + + +void LoadIC::GenerateMiss(MacroAssembler* masm) { + // The return address is on the stack. + Isolate* isolate = masm->isolate(); + + __ IncrementCounter(isolate->counters()->keyed_load_miss(), 1, a3, a4); + + __ mov(LoadIC_TempRegister(), ReceiverRegister()); + __ Push(LoadIC_TempRegister(), NameRegister()); + + // Perform tail call to the entry. + ExternalReference ref = ExternalReference(IC_Utility(kLoadIC_Miss), isolate); + __ TailCallExternalReference(ref, 2, 1); +} + + +void LoadIC::GenerateRuntimeGetProperty(MacroAssembler* masm) { + // The return address is in ra. + + __ mov(LoadIC_TempRegister(), ReceiverRegister()); + __ Push(LoadIC_TempRegister(), NameRegister()); + + __ TailCallRuntime(Runtime::kGetProperty, 2, 1); +} + + +static MemOperand GenerateMappedArgumentsLookup(MacroAssembler* masm, + Register object, + Register key, + Register scratch1, + Register scratch2, + Register scratch3, + Label* unmapped_case, + Label* slow_case) { + Heap* heap = masm->isolate()->heap(); + + // Check that the receiver is a JSObject. Because of the map check + // later, we do not need to check for interceptors or whether it + // requires access checks. + __ JumpIfSmi(object, slow_case); + // Check that the object is some kind of JSObject. + __ GetObjectType(object, scratch1, scratch2); + __ Branch(slow_case, lt, scratch2, Operand(FIRST_JS_RECEIVER_TYPE)); + + // Check that the key is a positive smi. + __ NonNegativeSmiTst(key, scratch1); + __ Branch(slow_case, ne, scratch1, Operand(zero_reg)); + + // Load the elements into scratch1 and check its map. 
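+  // Layout sketch, assuming the sloppy-arguments elements format: the
+  // parameter map is a FixedArray of [context, backing store, slot 0,
+  // slot 1, ...], so mapped entries begin two pointers past the header
+  // (the kOffset computed below) and the mapped count is length - 2,
+  // hence the Dsubu by Smi::FromInt(2) below.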
+ Handle<Map> arguments_map(heap->sloppy_arguments_elements_map());
+ __ ld(scratch1, FieldMemOperand(object, JSObject::kElementsOffset));
+ __ CheckMap(scratch1,
+ scratch2,
+ arguments_map,
+ slow_case,
+ DONT_DO_SMI_CHECK);
+ // Check if element is in the range of mapped arguments. If not, jump
+ // to the unmapped lookup with the parameter map in scratch1.
+ __ ld(scratch2, FieldMemOperand(scratch1, FixedArray::kLengthOffset));
+ __ Dsubu(scratch2, scratch2, Operand(Smi::FromInt(2)));
+ __ Branch(unmapped_case, Ugreater_equal, key, Operand(scratch2));
+
+ // Load element index and check whether it is the hole.
+ const int kOffset =
+ FixedArray::kHeaderSize + 2 * kPointerSize - kHeapObjectTag;
+
+ __ SmiUntag(scratch3, key);
+ __ dsll(scratch3, scratch3, kPointerSizeLog2);
+ __ Daddu(scratch3, scratch3, Operand(kOffset));
+
+ __ Daddu(scratch2, scratch1, scratch3);
+ __ ld(scratch2, MemOperand(scratch2));
+ __ LoadRoot(scratch3, Heap::kTheHoleValueRootIndex);
+ __ Branch(unmapped_case, eq, scratch2, Operand(scratch3));
+
+ // Load value from context and return it. We can reuse scratch1 because
+ // we do not jump to the unmapped lookup (which requires the parameter
+ // map in scratch1).
+ __ ld(scratch1, FieldMemOperand(scratch1, FixedArray::kHeaderSize));
+ __ SmiUntag(scratch3, scratch2);
+ __ dsll(scratch3, scratch3, kPointerSizeLog2);
+ __ Daddu(scratch3, scratch3, Operand(Context::kHeaderSize - kHeapObjectTag));
+ __ Daddu(scratch2, scratch1, scratch3);
+ return MemOperand(scratch2);
+}
+
+
+static MemOperand GenerateUnmappedArgumentsLookup(MacroAssembler* masm,
+ Register key,
+ Register parameter_map,
+ Register scratch,
+ Label* slow_case) {
+ // Element is in arguments backing store, which is referenced by the
+ // second element of the parameter_map. The parameter_map register
+ // must be loaded with the parameter map of the arguments object and is
+ // overwritten.
+ const int kBackingStoreOffset = FixedArray::kHeaderSize + kPointerSize;
+ Register backing_store = parameter_map;
+ __ ld(backing_store, FieldMemOperand(parameter_map, kBackingStoreOffset));
+ __ CheckMap(backing_store,
+ scratch,
+ Heap::kFixedArrayMapRootIndex,
+ slow_case,
+ DONT_DO_SMI_CHECK);
+ __ ld(scratch, FieldMemOperand(backing_store, FixedArray::kLengthOffset));
+ __ Branch(slow_case, Ugreater_equal, key, Operand(scratch));
+ __ SmiUntag(scratch, key);
+ __ dsll(scratch, scratch, kPointerSizeLog2);
+ __ Daddu(scratch,
+ scratch,
+ Operand(FixedArray::kHeaderSize - kHeapObjectTag));
+ __ Daddu(scratch, backing_store, scratch);
+ return MemOperand(scratch);
+}
+
+
+void KeyedLoadIC::GenerateSloppyArguments(MacroAssembler* masm) {
+ // The return address is in ra.
+ Register receiver = ReceiverRegister();
+ Register key = NameRegister();
+ DCHECK(receiver.is(a1));
+ DCHECK(key.is(a2));
+
+ Label slow, notin;
+ MemOperand mapped_location =
+ GenerateMappedArgumentsLookup(
+ masm, receiver, key, a0, a3, a4, &notin, &slow);
+ __ Ret(USE_DELAY_SLOT);
+ __ ld(v0, mapped_location);
+ __ bind(&notin);
+ // The unmapped lookup expects that the parameter map is in a0.
+ MemOperand unmapped_location =
+ GenerateUnmappedArgumentsLookup(masm, key, a0, a3, &slow);
+ __ ld(a0, unmapped_location);
+ __ LoadRoot(a3, Heap::kTheHoleValueRootIndex);
+ __ Branch(&slow, eq, a0, Operand(a3));
+ __ Ret(USE_DELAY_SLOT);
+ __ mov(v0, a0);
+ __ bind(&slow);
+ GenerateMiss(masm);
+}
+
+
+void KeyedStoreIC::GenerateSloppyArguments(MacroAssembler* masm) {
+ Register receiver = ReceiverRegister();
+ Register key = NameRegister();
+ Register value = ValueRegister();
+ DCHECK(value.is(a0));
+
+ Label slow, notin;
+ // Store address is returned in register (of MemOperand) mapped_location.
+ MemOperand mapped_location = GenerateMappedArgumentsLookup(
+ masm, receiver, key, a3, a4, a5, &notin, &slow);
+ __ sd(value, mapped_location);
+ __ mov(t1, value);
+ DCHECK_EQ(mapped_location.offset(), 0);
+ __ RecordWrite(a3, mapped_location.rm(), t1,
+ kRAHasNotBeenSaved, kDontSaveFPRegs);
+ __ Ret(USE_DELAY_SLOT);
+ __ mov(v0, value); // (In delay slot) return the value stored in v0.
+ __ bind(&notin);
+ // The unmapped lookup expects that the parameter map is in a3.
+ // Store address is returned in register (of MemOperand) unmapped_location.
+ MemOperand unmapped_location =
+ GenerateUnmappedArgumentsLookup(masm, key, a3, a4, &slow);
+ __ sd(value, unmapped_location);
+ __ mov(t1, value);
+ DCHECK_EQ(unmapped_location.offset(), 0);
+ __ RecordWrite(a3, unmapped_location.rm(), t1,
+ kRAHasNotBeenSaved, kDontSaveFPRegs);
+ __ Ret(USE_DELAY_SLOT);
+ __ mov(v0, a0); // (In delay slot) return the value stored in v0.
+ __ bind(&slow);
+ GenerateMiss(masm);
+}
+
+
+void KeyedLoadIC::GenerateMiss(MacroAssembler* masm) {
+ // The return address is in ra.
+ Isolate* isolate = masm->isolate();
+
+ __ IncrementCounter(isolate->counters()->keyed_load_miss(), 1, a3, a4);
+
+ __ Push(ReceiverRegister(), NameRegister());
+
+ // Perform tail call to the entry.
+ ExternalReference ref =
+ ExternalReference(IC_Utility(kKeyedLoadIC_Miss), isolate);
+
+ __ TailCallExternalReference(ref, 2, 1);
+}
+
+
+// IC register specifications
+const Register LoadIC::ReceiverRegister() { return a1; }
+const Register LoadIC::NameRegister() { return a2; }
+
+
+const Register LoadIC::SlotRegister() {
+ DCHECK(FLAG_vector_ics);
+ return a0;
+}
+
+
+const Register LoadIC::VectorRegister() {
+ DCHECK(FLAG_vector_ics);
+ return a3;
+}
+
+
+const Register StoreIC::ReceiverRegister() { return a1; }
+const Register StoreIC::NameRegister() { return a2; }
+const Register StoreIC::ValueRegister() { return a0; }
+
+
+const Register KeyedStoreIC::MapRegister() {
+ return a3;
+}
+
+
+void KeyedLoadIC::GenerateRuntimeGetProperty(MacroAssembler* masm) {
+ // The return address is in ra.
+
+ __ Push(ReceiverRegister(), NameRegister());
+
+ __ TailCallRuntime(Runtime::kKeyedGetProperty, 2, 1);
+}
+
+
+void KeyedLoadIC::GenerateGeneric(MacroAssembler* masm) {
+ // The return address is in ra.
+ Label slow, check_name, index_smi, index_name, property_array_property;
+ Label probe_dictionary, check_number_dictionary;
+
+ Register key = NameRegister();
+ Register receiver = ReceiverRegister();
+ DCHECK(key.is(a2));
+ DCHECK(receiver.is(a1));
+
+ Isolate* isolate = masm->isolate();
+
+ // Check that the key is a smi.
+ __ JumpIfNotSmi(key, &check_name);
+ __ bind(&index_smi);
+ // Now the key is known to be a smi. This place is also jumped to from below
+ // where a numeric string is converted to a smi.
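+ //
+ // As a rough sketch of the smi encoding this port relies on: on MIPS64
+ // the 32-bit payload lives in the upper word of the 64-bit register
+ // (kSmiShift == 32), i.e. Smi::FromInt(v) is int64_t(v) << 32, so the
+ // SmiScale and dsra32 operations in the paths below turn a smi key
+ // directly into a byte offset or untagged index with a single shift.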
+
+ GenerateKeyedLoadReceiverCheck(
+ masm, receiver, a0, a3, Map::kHasIndexedInterceptor, &slow);
+
+ // Check the receiver's map to see if it has fast elements.
+ __ CheckFastElements(a0, a3, &check_number_dictionary);
+
+ GenerateFastArrayLoad(
+ masm, receiver, key, a0, a3, a4, v0, NULL, &slow);
+ __ IncrementCounter(isolate->counters()->keyed_load_generic_smi(), 1, a4, a3);
+ __ Ret();
+
+ __ bind(&check_number_dictionary);
+ __ ld(a4, FieldMemOperand(receiver, JSObject::kElementsOffset));
+ __ ld(a3, FieldMemOperand(a4, JSObject::kMapOffset));
+
+ // Check whether the elements object is a number dictionary.
+ // a3: elements map
+ // a4: elements
+ __ LoadRoot(at, Heap::kHashTableMapRootIndex);
+ __ Branch(&slow, ne, a3, Operand(at));
+ __ dsra32(a0, key, 0);
+ __ LoadFromNumberDictionary(&slow, a4, key, v0, a0, a3, a5);
+ __ Ret();
+
+ // Slow case, key and receiver still in a2 and a1.
+ __ bind(&slow);
+ __ IncrementCounter(isolate->counters()->keyed_load_generic_slow(),
+ 1,
+ a4,
+ a3);
+ GenerateRuntimeGetProperty(masm);
+
+ __ bind(&check_name);
+ GenerateKeyNameCheck(masm, key, a0, a3, &index_name, &slow);
+
+ GenerateKeyedLoadReceiverCheck(
+ masm, receiver, a0, a3, Map::kHasNamedInterceptor, &slow);
+
+ // If the receiver is a fast-case object, check the keyed lookup
+ // cache. Otherwise probe the dictionary.
+ __ ld(a3, FieldMemOperand(receiver, JSObject::kPropertiesOffset));
+ __ ld(a4, FieldMemOperand(a3, HeapObject::kMapOffset));
+ __ LoadRoot(at, Heap::kHashTableMapRootIndex);
+ __ Branch(&probe_dictionary, eq, a4, Operand(at));
+
+ // Load the map of the receiver, compute the keyed lookup cache hash
+ // based on 32 bits of the map pointer and the name hash.
+ __ ld(a0, FieldMemOperand(receiver, HeapObject::kMapOffset));
+ __ dsll32(a3, a0, 0);
+ __ dsrl32(a3, a3, 0);
+ __ dsra(a3, a3, KeyedLookupCache::kMapHashShift);
+ __ lwu(a4, FieldMemOperand(key, Name::kHashFieldOffset));
+ __ dsra(at, a4, Name::kHashShift);
+ __ xor_(a3, a3, at);
+ int mask = KeyedLookupCache::kCapacityMask & KeyedLookupCache::kHashMask;
+ __ And(a3, a3, Operand(mask));
+
+ // Load the key (consisting of map and unique name) from the cache and
+ // check for match.
+ Label load_in_object_property;
+ static const int kEntriesPerBucket = KeyedLookupCache::kEntriesPerBucket;
+ Label hit_on_nth_entry[kEntriesPerBucket];
+ ExternalReference cache_keys =
+ ExternalReference::keyed_lookup_cache_keys(isolate);
+ __ li(a4, Operand(cache_keys));
+ __ dsll(at, a3, kPointerSizeLog2 + 1);
+ __ daddu(a4, a4, at);
+
+ for (int i = 0; i < kEntriesPerBucket - 1; i++) {
+ Label try_next_entry;
+ __ ld(a5, MemOperand(a4, kPointerSize * i * 2));
+ __ Branch(&try_next_entry, ne, a0, Operand(a5));
+ __ ld(a5, MemOperand(a4, kPointerSize * (i * 2 + 1)));
+ __ Branch(&hit_on_nth_entry[i], eq, key, Operand(a5));
+ __ bind(&try_next_entry);
+ }
+
+ __ ld(a5, MemOperand(a4, kPointerSize * (kEntriesPerBucket - 1) * 2));
+ __ Branch(&slow, ne, a0, Operand(a5));
+ __ ld(a5, MemOperand(a4, kPointerSize * ((kEntriesPerBucket - 1) * 2 + 1)));
+ __ Branch(&slow, ne, key, Operand(a5));
+
+ // Get field offset.
+ // a0 : receiver's map
+ // a3 : lookup cache index
+ ExternalReference cache_field_offsets =
+ ExternalReference::keyed_lookup_cache_field_offsets(isolate);
+
+ // Hit on nth entry.
+ for (int i = kEntriesPerBucket - 1; i >= 0; i--) {
+ __ bind(&hit_on_nth_entry[i]);
+ __ li(a4, Operand(cache_field_offsets));
+
+ // TODO(yy) This data structure does NOT follow natural pointer size.
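+ // The field-offsets table stores 32-bit entries, which is why the cache
+ // index is scaled by half a pointer size (kPointerSizeLog2 - 1) below and
+ // the entry is read with lwu rather than ld.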
+ __ dsll(at, a3, kPointerSizeLog2 - 1); + __ daddu(at, a4, at); + __ lwu(a5, MemOperand(at, kPointerSize / 2 * i)); + + __ lbu(a6, FieldMemOperand(a0, Map::kInObjectPropertiesOffset)); + __ Dsubu(a5, a5, a6); + __ Branch(&property_array_property, ge, a5, Operand(zero_reg)); + if (i != 0) { + __ Branch(&load_in_object_property); + } + } + + // Load in-object property. + __ bind(&load_in_object_property); + __ lbu(a6, FieldMemOperand(a0, Map::kInstanceSizeOffset)); + // Index from start of object. + __ daddu(a6, a6, a5); + // Remove the heap tag. + __ Dsubu(receiver, receiver, Operand(kHeapObjectTag)); + __ dsll(at, a6, kPointerSizeLog2); + __ daddu(at, receiver, at); + __ ld(v0, MemOperand(at)); + __ IncrementCounter(isolate->counters()->keyed_load_generic_lookup_cache(), + 1, + a4, + a3); + __ Ret(); + + // Load property array property. + __ bind(&property_array_property); + __ ld(receiver, FieldMemOperand(receiver, JSObject::kPropertiesOffset)); + __ Daddu(receiver, receiver, FixedArray::kHeaderSize - kHeapObjectTag); + __ dsll(v0, a5, kPointerSizeLog2); + __ Daddu(v0, v0, a1); + __ ld(v0, MemOperand(v0)); + __ IncrementCounter(isolate->counters()->keyed_load_generic_lookup_cache(), + 1, + a4, + a3); + __ Ret(); + + + // Do a quick inline probe of the receiver's dictionary, if it + // exists. + __ bind(&probe_dictionary); + // a3: elements + __ ld(a0, FieldMemOperand(receiver, HeapObject::kMapOffset)); + __ lbu(a0, FieldMemOperand(a0, Map::kInstanceTypeOffset)); + GenerateGlobalInstanceTypeCheck(masm, a0, &slow); + // Load the property to v0. + GenerateDictionaryLoad(masm, &slow, a3, key, v0, a5, a4); + __ IncrementCounter(isolate->counters()->keyed_load_generic_symbol(), + 1, + a4, + a3); + __ Ret(); + + __ bind(&index_name); + __ IndexFromHash(a3, key); + // Now jump to the place where smi keys are handled. + __ Branch(&index_smi); +} + + +void KeyedLoadIC::GenerateString(MacroAssembler* masm) { + // Return address is in ra. + Label miss; + + Register receiver = ReceiverRegister(); + Register index = NameRegister(); + Register scratch = a3; + Register result = v0; + DCHECK(!scratch.is(receiver) && !scratch.is(index)); + + StringCharAtGenerator char_at_generator(receiver, + index, + scratch, + result, + &miss, // When not a string. + &miss, // When not a number. + &miss, // When index out of range. + STRING_INDEX_IS_ARRAY_INDEX); + char_at_generator.GenerateFast(masm); + __ Ret(); + + StubRuntimeCallHelper call_helper; + char_at_generator.GenerateSlow(masm, call_helper); + + __ bind(&miss); + GenerateMiss(masm); +} + + +void KeyedStoreIC::GenerateRuntimeSetProperty(MacroAssembler* masm, + StrictMode strict_mode) { + // Push receiver, key and value for runtime call. + __ Push(ReceiverRegister(), NameRegister(), ValueRegister()); + + __ li(a0, Operand(Smi::FromInt(strict_mode))); // Strict mode. + __ Push(a0); + + __ TailCallRuntime(Runtime::kSetProperty, 4, 1); +} + + +static void KeyedStoreGenerateGenericHelper( + MacroAssembler* masm, + Label* fast_object, + Label* fast_double, + Label* slow, + KeyedStoreCheckMap check_map, + KeyedStoreIncrementLength increment_length, + Register value, + Register key, + Register receiver, + Register receiver_map, + Register elements_map, + Register elements) { + Label transition_smi_elements; + Label finish_object_store, non_double_value, transition_double_elements; + Label fast_double_without_map_check; + + // Fast case: Do the store, could be either Object or double. 
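+ // Roughly, the layout below is: the fast_object path stores smis directly
+ // and heap objects with a write barrier; the fast_double path stores
+ // unboxed doubles; both paths first check for the hole so that stores
+ // which might hit an element callback fall through to the runtime; and the
+ // transition_* paths migrate the elements kind (smi -> double or object,
+ // double -> object) before finishing the store.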
+ __ bind(fast_object); + Register scratch_value = a4; + Register address = a5; + if (check_map == kCheckMap) { + __ ld(elements_map, FieldMemOperand(elements, HeapObject::kMapOffset)); + __ Branch(fast_double, ne, elements_map, + Operand(masm->isolate()->factory()->fixed_array_map())); + } + + // HOLECHECK: guards "A[i] = V" + // We have to go to the runtime if the current value is the hole because + // there may be a callback on the element. + Label holecheck_passed1; + __ Daddu(address, elements, FixedArray::kHeaderSize - kHeapObjectTag); + __ SmiScale(at, key, kPointerSizeLog2); + __ daddu(address, address, at); + __ ld(scratch_value, MemOperand(address)); + + __ Branch(&holecheck_passed1, ne, scratch_value, + Operand(masm->isolate()->factory()->the_hole_value())); + __ JumpIfDictionaryInPrototypeChain(receiver, elements_map, scratch_value, + slow); + + __ bind(&holecheck_passed1); + + // Smi stores don't require further checks. + Label non_smi_value; + __ JumpIfNotSmi(value, &non_smi_value); + + if (increment_length == kIncrementLength) { + // Add 1 to receiver->length. + __ Daddu(scratch_value, key, Operand(Smi::FromInt(1))); + __ sd(scratch_value, FieldMemOperand(receiver, JSArray::kLengthOffset)); + } + // It's irrelevant whether array is smi-only or not when writing a smi. + __ Daddu(address, elements, + Operand(FixedArray::kHeaderSize - kHeapObjectTag)); + __ SmiScale(scratch_value, key, kPointerSizeLog2); + __ Daddu(address, address, scratch_value); + __ sd(value, MemOperand(address)); + __ Ret(); + + __ bind(&non_smi_value); + // Escape to elements kind transition case. + __ CheckFastObjectElements(receiver_map, scratch_value, + &transition_smi_elements); + + // Fast elements array, store the value to the elements backing store. + __ bind(&finish_object_store); + if (increment_length == kIncrementLength) { + // Add 1 to receiver->length. + __ Daddu(scratch_value, key, Operand(Smi::FromInt(1))); + __ sd(scratch_value, FieldMemOperand(receiver, JSArray::kLengthOffset)); + } + __ Daddu(address, elements, + Operand(FixedArray::kHeaderSize - kHeapObjectTag)); + __ SmiScale(scratch_value, key, kPointerSizeLog2); + __ Daddu(address, address, scratch_value); + __ sd(value, MemOperand(address)); + // Update write barrier for the elements array address. + __ mov(scratch_value, value); // Preserve the value which is returned. + __ RecordWrite(elements, + address, + scratch_value, + kRAHasNotBeenSaved, + kDontSaveFPRegs, + EMIT_REMEMBERED_SET, + OMIT_SMI_CHECK); + __ Ret(); + + __ bind(fast_double); + if (check_map == kCheckMap) { + // Check for fast double array case. If this fails, call through to the + // runtime. + __ LoadRoot(at, Heap::kFixedDoubleArrayMapRootIndex); + __ Branch(slow, ne, elements_map, Operand(at)); + } + + // HOLECHECK: guards "A[i] double hole?" + // We have to see if the double version of the hole is present. If so + // go to the runtime. + __ Daddu(address, elements, + Operand(FixedDoubleArray::kHeaderSize + sizeof(kHoleNanLower32) + - kHeapObjectTag)); + __ SmiScale(at, key, kPointerSizeLog2); + __ daddu(address, address, at); + __ lw(scratch_value, MemOperand(address)); + __ Branch(&fast_double_without_map_check, ne, scratch_value, + Operand(kHoleNanUpper32)); + __ JumpIfDictionaryInPrototypeChain(receiver, elements_map, scratch_value, + slow); + + __ bind(&fast_double_without_map_check); + __ StoreNumberToDoubleElements(value, + key, + elements, // Overwritten. + a3, // Scratch regs... 
+ a4,
+ a5,
+ &transition_double_elements);
+ if (increment_length == kIncrementLength) {
+ // Add 1 to receiver->length.
+ __ Daddu(scratch_value, key, Operand(Smi::FromInt(1)));
+ __ sd(scratch_value, FieldMemOperand(receiver, JSArray::kLengthOffset));
+ }
+ __ Ret();
+
+ __ bind(&transition_smi_elements);
+ // Transition the array appropriately depending on the value type.
+ __ ld(a4, FieldMemOperand(value, HeapObject::kMapOffset));
+ __ LoadRoot(at, Heap::kHeapNumberMapRootIndex);
+ __ Branch(&non_double_value, ne, a4, Operand(at));
+
+ // Value is a double. Transition FAST_SMI_ELEMENTS ->
+ // FAST_DOUBLE_ELEMENTS and complete the store.
+ __ LoadTransitionedArrayMapConditional(FAST_SMI_ELEMENTS,
+ FAST_DOUBLE_ELEMENTS,
+ receiver_map,
+ a4,
+ slow);
+ AllocationSiteMode mode = AllocationSite::GetMode(FAST_SMI_ELEMENTS,
+ FAST_DOUBLE_ELEMENTS);
+ ElementsTransitionGenerator::GenerateSmiToDouble(
+ masm, receiver, key, value, receiver_map, mode, slow);
+ __ ld(elements, FieldMemOperand(receiver, JSObject::kElementsOffset));
+ __ jmp(&fast_double_without_map_check);
+
+ __ bind(&non_double_value);
+ // Value is not a double, FAST_SMI_ELEMENTS -> FAST_ELEMENTS
+ __ LoadTransitionedArrayMapConditional(FAST_SMI_ELEMENTS,
+ FAST_ELEMENTS,
+ receiver_map,
+ a4,
+ slow);
+ mode = AllocationSite::GetMode(FAST_SMI_ELEMENTS, FAST_ELEMENTS);
+ ElementsTransitionGenerator::GenerateMapChangeElementsTransition(
+ masm, receiver, key, value, receiver_map, mode, slow);
+ __ ld(elements, FieldMemOperand(receiver, JSObject::kElementsOffset));
+ __ jmp(&finish_object_store);
+
+ __ bind(&transition_double_elements);
+ // Elements are FAST_DOUBLE_ELEMENTS, but value is an Object that's not a
+ // HeapNumber. Make sure that the receiver is an Array with FAST_ELEMENTS
+ // and transition the array from FAST_DOUBLE_ELEMENTS to FAST_ELEMENTS.
+ __ LoadTransitionedArrayMapConditional(FAST_DOUBLE_ELEMENTS,
+ FAST_ELEMENTS,
+ receiver_map,
+ a4,
+ slow);
+ mode = AllocationSite::GetMode(FAST_DOUBLE_ELEMENTS, FAST_ELEMENTS);
+ ElementsTransitionGenerator::GenerateDoubleToObject(
+ masm, receiver, key, value, receiver_map, mode, slow);
+ __ ld(elements, FieldMemOperand(receiver, JSObject::kElementsOffset));
+ __ jmp(&finish_object_store);
+}
+
+
+void KeyedStoreIC::GenerateGeneric(MacroAssembler* masm,
+ StrictMode strict_mode) {
+ // ---------- S t a t e --------------
+ // -- a0 : value
+ // -- a1 : receiver
+ // -- a2 : key
+ // -- ra : return address
+ // -----------------------------------
+ Label slow, fast_object, fast_object_grow;
+ Label fast_double, fast_double_grow;
+ Label array, extra, check_if_double_array;
+
+ // Register usage.
+ Register value = ValueRegister();
+ Register key = NameRegister();
+ Register receiver = ReceiverRegister();
+ DCHECK(value.is(a0));
+ Register receiver_map = a3;
+ Register elements_map = a6;
+ Register elements = a7; // Elements array of the receiver.
+ // a4 and a5 are used as general scratch registers.
+
+ // Check that the key is a smi.
+ __ JumpIfNotSmi(key, &slow);
+ // Check that the object isn't a smi.
+ __ JumpIfSmi(receiver, &slow);
+ // Get the map of the object.
+ __ ld(receiver_map, FieldMemOperand(receiver, HeapObject::kMapOffset));
+ // Check that the receiver does not require access checks and is not observed.
+ // The generic stub does not perform map checks or handle observed objects.
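+ // Map::kBitField packs several boolean flags into a single byte, so one
+ // byte load plus one mask can test for access checks and observation at
+ // the same time.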
+ __ lbu(a4, FieldMemOperand(receiver_map, Map::kBitFieldOffset));
+ __ And(a4, a4, Operand(1 << Map::kIsAccessCheckNeeded |
+ 1 << Map::kIsObserved));
+ __ Branch(&slow, ne, a4, Operand(zero_reg));
+ // Check if the object is a JS array or not.
+ __ lbu(a4, FieldMemOperand(receiver_map, Map::kInstanceTypeOffset));
+ __ Branch(&array, eq, a4, Operand(JS_ARRAY_TYPE));
+ // Check that the object is some kind of JSObject.
+ __ Branch(&slow, lt, a4, Operand(FIRST_JS_OBJECT_TYPE));
+
+ // Object case: Check key against length in the elements array.
+ __ ld(elements, FieldMemOperand(receiver, JSObject::kElementsOffset));
+ // Check array bounds. Both the key and the length of FixedArray are smis.
+ __ ld(a4, FieldMemOperand(elements, FixedArray::kLengthOffset));
+ __ Branch(&fast_object, lo, key, Operand(a4));
+
+ // Slow case, handle jump to runtime.
+ __ bind(&slow);
+ // Entry registers are intact.
+ // a0: value.
+ // a1: receiver.
+ // a2: key.
+ GenerateRuntimeSetProperty(masm, strict_mode);
+
+ // Extra capacity case: Check if there is extra capacity to
+ // perform the store and update the length. Used for adding one
+ // element to the array by writing to array[array.length].
+ __ bind(&extra);
+ // The array length from the comparison above is still available in a4.
+ // Only support writing to array[array.length].
+ __ Branch(&slow, ne, key, Operand(a4));
+ // Check for room in the elements backing store.
+ // Both the key and the length of FixedArray are smis.
+ __ ld(a4, FieldMemOperand(elements, FixedArray::kLengthOffset));
+ __ Branch(&slow, hs, key, Operand(a4));
+ __ ld(elements_map, FieldMemOperand(elements, HeapObject::kMapOffset));
+ __ Branch(
+ &check_if_double_array, ne, elements_map, Heap::kFixedArrayMapRootIndex);
+
+ __ jmp(&fast_object_grow);
+
+ __ bind(&check_if_double_array);
+ __ Branch(&slow, ne, elements_map, Heap::kFixedDoubleArrayMapRootIndex);
+ __ jmp(&fast_double_grow);
+
+ // Array case: Get the length and the elements array from the JS
+ // array. Check that the array is in fast mode (and writable); if it
+ // is the length is always a smi.
+ __ bind(&array);
+ __ ld(elements, FieldMemOperand(receiver, JSObject::kElementsOffset));
+
+ // Check the key against the length in the array.
+ __ ld(a4, FieldMemOperand(receiver, JSArray::kLengthOffset));
+ __ Branch(&extra, hs, key, Operand(a4));
+
+ KeyedStoreGenerateGenericHelper(masm, &fast_object, &fast_double,
+ &slow, kCheckMap, kDontIncrementLength,
+ value, key, receiver, receiver_map,
+ elements_map, elements);
+ KeyedStoreGenerateGenericHelper(masm, &fast_object_grow, &fast_double_grow,
+ &slow, kDontCheckMap, kIncrementLength,
+ value, key, receiver, receiver_map,
+ elements_map, elements);
+}
+
+
+void KeyedLoadIC::GenerateIndexedInterceptor(MacroAssembler* masm) {
+ // Return address is in ra.
+ Label slow;
+
+ Register receiver = ReceiverRegister();
+ Register key = NameRegister();
+ Register scratch1 = a3;
+ Register scratch2 = a4;
+ DCHECK(!scratch1.is(receiver) && !scratch1.is(key));
+ DCHECK(!scratch2.is(receiver) && !scratch2.is(key));
+
+ // Check that the receiver isn't a smi.
+ __ JumpIfSmi(receiver, &slow);
+
+ // Check that the key is an array index, that is Uint32.
+ __ And(a4, key, Operand(kSmiTagMask | kSmiSignMask));
+ __ Branch(&slow, ne, a4, Operand(zero_reg));
+
+ // Get the map of the receiver.
+ __ ld(scratch1, FieldMemOperand(receiver, HeapObject::kMapOffset));
+
+ // Check that it has indexed interceptor and access checks
+ // are not enabled for this object.
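+ // (kSlowCaseBitFieldMask covers the interceptor and access-check bits, so
+ // after masking the only acceptable value is exactly
+ // 1 << Map::kHasIndexedInterceptor: interceptor present, no access checks.)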
+ __ lbu(scratch2, FieldMemOperand(scratch1, Map::kBitFieldOffset));
+ __ And(scratch2, scratch2, Operand(kSlowCaseBitFieldMask));
+ __ Branch(&slow, ne, scratch2, Operand(1 << Map::kHasIndexedInterceptor));
+ // Everything is fine, call runtime.
+ __ Push(receiver, key); // Receiver, key.
+
+ // Perform tail call to the entry.
+ __ TailCallExternalReference(ExternalReference(
+ IC_Utility(kLoadElementWithInterceptor), masm->isolate()), 2, 1);
+
+ __ bind(&slow);
+ GenerateMiss(masm);
+}
+
+
+void KeyedStoreIC::GenerateMiss(MacroAssembler* masm) {
+ // Push receiver, key and value for runtime call.
+ __ Push(ReceiverRegister(), NameRegister(), ValueRegister());
+
+ ExternalReference ref =
+ ExternalReference(IC_Utility(kKeyedStoreIC_Miss), masm->isolate());
+ __ TailCallExternalReference(ref, 3, 1);
+}
+
+
+void StoreIC::GenerateSlow(MacroAssembler* masm) {
+ // Push receiver, key and value for runtime call.
+ __ Push(ReceiverRegister(), NameRegister(), ValueRegister());
+
+ // The slow case calls into the runtime to complete the store without causing
+ // an IC miss that would otherwise cause a transition to the generic stub.
+ ExternalReference ref =
+ ExternalReference(IC_Utility(kStoreIC_Slow), masm->isolate());
+ __ TailCallExternalReference(ref, 3, 1);
+}
+
+
+void KeyedStoreIC::GenerateSlow(MacroAssembler* masm) {
+ // Push receiver, key and value for runtime call.
+ // We can't use MultiPush as the order of the registers is important.
+ __ Push(ReceiverRegister(), NameRegister(), ValueRegister());
+ // The slow case calls into the runtime to complete the store without causing
+ // an IC miss that would otherwise cause a transition to the generic stub.
+ ExternalReference ref =
+ ExternalReference(IC_Utility(kKeyedStoreIC_Slow), masm->isolate());
+
+ __ TailCallExternalReference(ref, 3, 1);
+}
+
+
+void StoreIC::GenerateMegamorphic(MacroAssembler* masm) {
+ Register receiver = ReceiverRegister();
+ Register name = NameRegister();
+ DCHECK(receiver.is(a1));
+ DCHECK(name.is(a2));
+ DCHECK(ValueRegister().is(a0));
+
+ // The receiver and name are already in registers; probe the stub cache.
+ Code::Flags flags = Code::RemoveTypeAndHolderFromFlags(
+ Code::ComputeHandlerFlags(Code::STORE_IC));
+ masm->isolate()->stub_cache()->GenerateProbe(
+ masm, flags, receiver, name, a3, a4, a5, a6);
+
+ // Cache miss: Jump to runtime.
+ GenerateMiss(masm);
+}
+
+
+void StoreIC::GenerateMiss(MacroAssembler* masm) {
+ __ Push(ReceiverRegister(), NameRegister(), ValueRegister());
+ // Perform tail call to the entry.
+ ExternalReference ref = ExternalReference(IC_Utility(kStoreIC_Miss),
+ masm->isolate());
+ __ TailCallExternalReference(ref, 3, 1);
+}
+
+
+void StoreIC::GenerateNormal(MacroAssembler* masm) {
+ Label miss;
+ Register receiver = ReceiverRegister();
+ Register name = NameRegister();
+ Register value = ValueRegister();
+ Register dictionary = a3;
+ DCHECK(!AreAliased(value, receiver, name, dictionary, a4, a5));
+
+ __ ld(dictionary, FieldMemOperand(receiver, JSObject::kPropertiesOffset));
+
+ GenerateDictionaryStore(masm, &miss, dictionary, name, value, a4, a5);
+ Counters* counters = masm->isolate()->counters();
+ __ IncrementCounter(counters->store_normal_hit(), 1, a4, a5);
+ __ Ret();
+
+ __ bind(&miss);
+ __ IncrementCounter(counters->store_normal_miss(), 1, a4, a5);
+ GenerateMiss(masm);
+}
+
+
+void StoreIC::GenerateRuntimeSetProperty(MacroAssembler* masm,
+ StrictMode strict_mode) {
+ __ Push(ReceiverRegister(), NameRegister(), ValueRegister());
+
+ __ li(a0, Operand(Smi::FromInt(strict_mode)));
+ __ Push(a0);
+
+ // Do tail-call to runtime routine.
+ __ TailCallRuntime(Runtime::kSetProperty, 4, 1);
+}
+
+
+#undef __
+
+
+Condition CompareIC::ComputeCondition(Token::Value op) {
+ switch (op) {
+ case Token::EQ_STRICT:
+ case Token::EQ:
+ return eq;
+ case Token::LT:
+ return lt;
+ case Token::GT:
+ return gt;
+ case Token::LTE:
+ return le;
+ case Token::GTE:
+ return ge;
+ default:
+ UNREACHABLE();
+ return kNoCondition;
+ }
+}
+
+
+bool CompareIC::HasInlinedSmiCode(Address address) {
+ // The address of the instruction following the call.
+ Address andi_instruction_address =
+ address + Assembler::kCallTargetAddressOffset;
+
+ // If the instruction following the call is not an andi at, rx, #yyy, nothing
+ // was inlined.
+ Instr instr = Assembler::instr_at(andi_instruction_address);
+ return Assembler::IsAndImmediate(instr) &&
+ Assembler::GetRt(instr) == static_cast<uint32_t>(zero_reg.code());
+}
+
+
+void PatchInlinedSmiCode(Address address, InlinedSmiCheck check) {
+ Address andi_instruction_address =
+ address + Assembler::kCallTargetAddressOffset;
+
+ // If the instruction following the call is not an andi at, rx, #yyy, nothing
+ // was inlined.
+ Instr instr = Assembler::instr_at(andi_instruction_address);
+ if (!(Assembler::IsAndImmediate(instr) &&
+ Assembler::GetRt(instr) == static_cast<uint32_t>(zero_reg.code()))) {
+ return;
+ }
+
+ // The delta to the start of the map check instruction and the
+ // condition code used at the patched jump.
+ int delta = Assembler::GetImmediate16(instr);
+ delta += Assembler::GetRs(instr) * kImm16Mask;
+ // If the delta is 0 the instruction is andi at, zero_reg, #0 which also
+ // signals that nothing was inlined.
+ if (delta == 0) {
+ return;
+ }
+
+ if (FLAG_trace_ic) {
+ PrintF("[ patching ic at %p, andi=%p, delta=%d\n",
+ address, andi_instruction_address, delta);
+ }
+
+ Address patch_address =
+ andi_instruction_address - delta * Instruction::kInstrSize;
+ Instr instr_at_patch = Assembler::instr_at(patch_address);
+ Instr branch_instr =
+ Assembler::instr_at(patch_address + Instruction::kInstrSize);
+ // This is patching a conditional "jump if not smi/jump if smi" site.
+ // Enabling by changing from
+ // andi at, rx, 0
+ // Branch <target>, eq, at, Operand(zero_reg)
+ // to:
+ // andi at, rx, #kSmiTagMask
+ // Branch <target>, ne, at, Operand(zero_reg)
+ // and vice-versa to be disabled again.
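+ // (A smi is tagged with a low bit of 0 and kSmiTagMask is 1, so
+ // "andi at, rx, kSmiTagMask" yields zero exactly for smis; flipping the
+ // branch between eq and ne is what switches the site between jump-if-smi
+ // and jump-if-not-smi.)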
+ CodePatcher patcher(patch_address, 2); + Register reg = Register::from_code(Assembler::GetRs(instr_at_patch)); + if (check == ENABLE_INLINED_SMI_CHECK) { + DCHECK(Assembler::IsAndImmediate(instr_at_patch)); + DCHECK_EQ(0, Assembler::GetImmediate16(instr_at_patch)); + patcher.masm()->andi(at, reg, kSmiTagMask); + } else { + DCHECK(check == DISABLE_INLINED_SMI_CHECK); + DCHECK(Assembler::IsAndImmediate(instr_at_patch)); + patcher.masm()->andi(at, reg, 0); + } + DCHECK(Assembler::IsBranch(branch_instr)); + if (Assembler::IsBeq(branch_instr)) { + patcher.ChangeBranchCondition(ne); + } else { + DCHECK(Assembler::IsBne(branch_instr)); + patcher.ChangeBranchCondition(eq); + } +} + + +} } // namespace v8::internal + +#endif // V8_TARGET_ARCH_MIPS64 diff --git a/deps/v8/src/mips64/lithium-codegen-mips64.cc b/deps/v8/src/mips64/lithium-codegen-mips64.cc new file mode 100644 index 00000000000..4d8d6afdf8e --- /dev/null +++ b/deps/v8/src/mips64/lithium-codegen-mips64.cc @@ -0,0 +1,5950 @@ +// Copyright 2012 the V8 project authors. All rights reserved. +// Use of this source code is governed by a BSD-style license that can be +// found in the LICENSE file. + +#include "src/v8.h" + +#include "src/code-stubs.h" +#include "src/hydrogen-osr.h" +#include "src/mips64/lithium-codegen-mips64.h" +#include "src/mips64/lithium-gap-resolver-mips64.h" +#include "src/stub-cache.h" + +namespace v8 { +namespace internal { + + +class SafepointGenerator V8_FINAL : public CallWrapper { + public: + SafepointGenerator(LCodeGen* codegen, + LPointerMap* pointers, + Safepoint::DeoptMode mode) + : codegen_(codegen), + pointers_(pointers), + deopt_mode_(mode) { } + virtual ~SafepointGenerator() {} + + virtual void BeforeCall(int call_size) const V8_OVERRIDE {} + + virtual void AfterCall() const V8_OVERRIDE { + codegen_->RecordSafepoint(pointers_, deopt_mode_); + } + + private: + LCodeGen* codegen_; + LPointerMap* pointers_; + Safepoint::DeoptMode deopt_mode_; +}; + + +#define __ masm()-> + +bool LCodeGen::GenerateCode() { + LPhase phase("Z_Code generation", chunk()); + DCHECK(is_unused()); + status_ = GENERATING; + + // Open a frame scope to indicate that there is a frame on the stack. The + // NONE indicates that the scope shouldn't actually generate code to set up + // the frame (that is done in GeneratePrologue). 
+ FrameScope frame_scope(masm_, StackFrame::NONE);
+
+ return GeneratePrologue() &&
+ GenerateBody() &&
+ GenerateDeferredCode() &&
+ GenerateDeoptJumpTable() &&
+ GenerateSafepointTable();
+}
+
+
+void LCodeGen::FinishCode(Handle<Code> code) {
+ DCHECK(is_done());
+ code->set_stack_slots(GetStackSlotCount());
+ code->set_safepoint_table_offset(safepoints_.GetCodeOffset());
+ if (code->is_optimized_code()) RegisterWeakObjectsInOptimizedCode(code);
+ PopulateDeoptimizationData(code);
+}
+
+
+void LCodeGen::SaveCallerDoubles() {
+ DCHECK(info()->saves_caller_doubles());
+ DCHECK(NeedsEagerFrame());
+ Comment(";;; Save clobbered callee double registers");
+ int count = 0;
+ BitVector* doubles = chunk()->allocated_double_registers();
+ BitVector::Iterator save_iterator(doubles);
+ while (!save_iterator.Done()) {
+ __ sdc1(DoubleRegister::FromAllocationIndex(save_iterator.Current()),
+ MemOperand(sp, count * kDoubleSize));
+ save_iterator.Advance();
+ count++;
+ }
+}
+
+
+void LCodeGen::RestoreCallerDoubles() {
+ DCHECK(info()->saves_caller_doubles());
+ DCHECK(NeedsEagerFrame());
+ Comment(";;; Restore clobbered callee double registers");
+ BitVector* doubles = chunk()->allocated_double_registers();
+ BitVector::Iterator save_iterator(doubles);
+ int count = 0;
+ while (!save_iterator.Done()) {
+ __ ldc1(DoubleRegister::FromAllocationIndex(save_iterator.Current()),
+ MemOperand(sp, count * kDoubleSize));
+ save_iterator.Advance();
+ count++;
+ }
+}
+
+
+bool LCodeGen::GeneratePrologue() {
+ DCHECK(is_generating());
+
+ if (info()->IsOptimizing()) {
+ ProfileEntryHookStub::MaybeCallEntryHook(masm_);
+
+#ifdef DEBUG
+ if (strlen(FLAG_stop_at) > 0 &&
+ info_->function()->name()->IsUtf8EqualTo(CStrVector(FLAG_stop_at))) {
+ __ stop("stop_at");
+ }
+#endif
+
+ // a1: Callee's JS function.
+ // cp: Callee's context.
+ // fp: Caller's frame pointer.
+ // ra: Caller's pc.
+
+ // Sloppy mode functions and builtins need to replace the receiver with the
+ // global proxy when called as functions (without an explicit receiver
+ // object).
+ if (info_->this_has_uses() &&
+ info_->strict_mode() == SLOPPY &&
+ !info_->is_native()) {
+ Label ok;
+ int receiver_offset = info_->scope()->num_parameters() * kPointerSize;
+ __ LoadRoot(at, Heap::kUndefinedValueRootIndex);
+ __ ld(a2, MemOperand(sp, receiver_offset));
+ __ Branch(&ok, ne, a2, Operand(at));
+
+ __ ld(a2, GlobalObjectOperand());
+ __ ld(a2, FieldMemOperand(a2, GlobalObject::kGlobalProxyOffset));
+
+ __ sd(a2, MemOperand(sp, receiver_offset));
+
+ __ bind(&ok);
+ }
+ }
+
+ info()->set_prologue_offset(masm_->pc_offset());
+ if (NeedsEagerFrame()) {
+ if (info()->IsStub()) {
+ __ StubPrologue();
+ } else {
+ __ Prologue(info()->IsCodePreAgingActive());
+ }
+ frame_is_built_ = true;
+ info_->AddNoFrameRange(0, masm_->pc_offset());
+ }
+
+ // Reserve space for the stack slots needed by the code.
+ int slots = GetStackSlotCount();
+ if (slots > 0) {
+ if (FLAG_debug_code) {
+ __ Dsubu(sp, sp, Operand(slots * kPointerSize));
+ __ Push(a0, a1);
+ __ Daddu(a0, sp, Operand(slots * kPointerSize));
+ __ li(a1, Operand(kSlotsZapValue));
+ Label loop;
+ __ bind(&loop);
+ __ Dsubu(a0, a0, Operand(kPointerSize));
+ __ sd(a1, MemOperand(a0, 2 * kPointerSize));
+ __ Branch(&loop, ne, a0, Operand(sp));
+ __ Pop(a0, a1);
+ } else {
+ __ Dsubu(sp, sp, Operand(slots * kPointerSize));
+ }
+ }
+
+ if (info()->saves_caller_doubles()) {
+ SaveCallerDoubles();
+ }
+
+ // Possibly allocate a local context.
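+ // num_heap_slots() includes the Context::MIN_CONTEXT_SLOTS header entries
+ // that every context carries, so only a count above that minimum means the
+ // function actually has context-allocated variables.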
+ int heap_slots = info()->num_heap_slots() - Context::MIN_CONTEXT_SLOTS;
+ if (heap_slots > 0) {
+ Comment(";;; Allocate local context");
+ bool need_write_barrier = true;
+ // Argument to NewContext is the function, which is in a1.
+ if (heap_slots <= FastNewContextStub::kMaximumSlots) {
+ FastNewContextStub stub(isolate(), heap_slots);
+ __ CallStub(&stub);
+ // Result of FastNewContextStub is always in new space.
+ need_write_barrier = false;
+ } else {
+ __ push(a1);
+ __ CallRuntime(Runtime::kNewFunctionContext, 1);
+ }
+ RecordSafepoint(Safepoint::kNoLazyDeopt);
+ // Context is returned in both v0 and cp. It replaces the context passed
+ // to us. It's saved in the stack and kept live in cp.
+ __ mov(cp, v0);
+ __ sd(v0, MemOperand(fp, StandardFrameConstants::kContextOffset));
+ // Copy any necessary parameters into the context.
+ int num_parameters = scope()->num_parameters();
+ for (int i = 0; i < num_parameters; i++) {
+ Variable* var = scope()->parameter(i);
+ if (var->IsContextSlot()) {
+ int parameter_offset = StandardFrameConstants::kCallerSPOffset +
+ (num_parameters - 1 - i) * kPointerSize;
+ // Load parameter from stack.
+ __ ld(a0, MemOperand(fp, parameter_offset));
+ // Store it in the context.
+ MemOperand target = ContextOperand(cp, var->index());
+ __ sd(a0, target);
+ // Update the write barrier. This clobbers a3 and a0.
+ if (need_write_barrier) {
+ __ RecordWriteContextSlot(
+ cp, target.offset(), a0, a3, GetRAState(), kSaveFPRegs);
+ } else if (FLAG_debug_code) {
+ Label done;
+ __ JumpIfInNewSpace(cp, a0, &done);
+ __ Abort(kExpectedNewSpaceObject);
+ __ bind(&done);
+ }
+ }
+ }
+ Comment(";;; End allocate local context");
+ }
+
+ // Trace the call.
+ if (FLAG_trace && info()->IsOptimizing()) {
+ // We have not executed any compiled code yet, so cp still holds the
+ // incoming context.
+ __ CallRuntime(Runtime::kTraceEnter, 0);
+ }
+ return !is_aborted();
+}
+
+
+void LCodeGen::GenerateOsrPrologue() {
+ // Generate the OSR entry prologue at the first unknown OSR value, or if there
+ // are none, at the OSR entrypoint instruction.
+ if (osr_pc_offset_ >= 0) return;
+
+ osr_pc_offset_ = masm()->pc_offset();
+
+ // Adjust the frame size, subsuming the unoptimized frame into the
+ // optimized frame.
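+ // For example (as a sketch): if the optimized code needs 10 spill slots
+ // and the unoptimized frame entered via OSR already covers 4 of them,
+ // only 6 additional slots are allocated here.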
+ int slots = GetStackSlotCount() - graph()->osr()->UnoptimizedFrameSlots(); + DCHECK(slots >= 0); + __ Dsubu(sp, sp, Operand(slots * kPointerSize)); +} + + +void LCodeGen::GenerateBodyInstructionPre(LInstruction* instr) { + if (instr->IsCall()) { + EnsureSpaceForLazyDeopt(Deoptimizer::patch_size()); + } + if (!instr->IsLazyBailout() && !instr->IsGap()) { + safepoints_.BumpLastLazySafepointIndex(); + } +} + + +bool LCodeGen::GenerateDeferredCode() { + DCHECK(is_generating()); + if (deferred_.length() > 0) { + for (int i = 0; !is_aborted() && i < deferred_.length(); i++) { + LDeferredCode* code = deferred_[i]; + + HValue* value = + instructions_->at(code->instruction_index())->hydrogen_value(); + RecordAndWritePosition( + chunk()->graph()->SourcePositionToScriptPosition(value->position())); + + Comment(";;; <@%d,#%d> " + "-------------------- Deferred %s --------------------", + code->instruction_index(), + code->instr()->hydrogen_value()->id(), + code->instr()->Mnemonic()); + __ bind(code->entry()); + if (NeedsDeferredFrame()) { + Comment(";;; Build frame"); + DCHECK(!frame_is_built_); + DCHECK(info()->IsStub()); + frame_is_built_ = true; + __ MultiPush(cp.bit() | fp.bit() | ra.bit()); + __ li(scratch0(), Operand(Smi::FromInt(StackFrame::STUB))); + __ push(scratch0()); + __ Daddu(fp, sp, + Operand(StandardFrameConstants::kFixedFrameSizeFromFp)); + Comment(";;; Deferred code"); + } + code->Generate(); + if (NeedsDeferredFrame()) { + Comment(";;; Destroy frame"); + DCHECK(frame_is_built_); + __ pop(at); + __ MultiPop(cp.bit() | fp.bit() | ra.bit()); + frame_is_built_ = false; + } + __ jmp(code->exit()); + } + } + // Deferred code is the last part of the instruction sequence. Mark + // the generated code as done unless we bailed out. + if (!is_aborted()) status_ = DONE; + return !is_aborted(); +} + + +bool LCodeGen::GenerateDeoptJumpTable() { + if (deopt_jump_table_.length() > 0) { + Comment(";;; -------------------- Jump table --------------------"); + } + Assembler::BlockTrampolinePoolScope block_trampoline_pool(masm_); + Label table_start; + __ bind(&table_start); + Label needs_frame; + for (int i = 0; i < deopt_jump_table_.length(); i++) { + __ bind(&deopt_jump_table_[i].label); + Address entry = deopt_jump_table_[i].address; + Deoptimizer::BailoutType type = deopt_jump_table_[i].bailout_type; + int id = Deoptimizer::GetDeoptimizationId(isolate(), entry, type); + if (id == Deoptimizer::kNotDeoptimizationEntry) { + Comment(";;; jump table entry %d.", i); + } else { + Comment(";;; jump table entry %d: deoptimization bailout %d.", i, id); + } + __ li(t9, Operand(ExternalReference::ForDeoptEntry(entry))); + if (deopt_jump_table_[i].needs_frame) { + DCHECK(!info()->saves_caller_doubles()); + if (needs_frame.is_bound()) { + __ Branch(&needs_frame); + } else { + __ bind(&needs_frame); + __ MultiPush(cp.bit() | fp.bit() | ra.bit()); + // This variant of deopt can only be used with stubs. Since we don't + // have a function pointer to install in the stack frame that we're + // building, install a special marker there instead. + DCHECK(info()->IsStub()); + __ li(scratch0(), Operand(Smi::FromInt(StackFrame::STUB))); + __ push(scratch0()); + __ Daddu(fp, sp, + Operand(StandardFrameConstants::kFixedFrameSizeFromFp)); + __ Call(t9); + } + } else { + if (info()->saves_caller_doubles()) { + DCHECK(info()->IsStub()); + RestoreCallerDoubles(); + } + __ Call(t9); + } + } + __ RecordComment("]"); + + // The deoptimization jump table is the last part of the instruction + // sequence. 
Mark the generated code as done unless we bailed out. + if (!is_aborted()) status_ = DONE; + return !is_aborted(); +} + + +bool LCodeGen::GenerateSafepointTable() { + DCHECK(is_done()); + safepoints_.Emit(masm(), GetStackSlotCount()); + return !is_aborted(); +} + + +Register LCodeGen::ToRegister(int index) const { + return Register::FromAllocationIndex(index); +} + + +DoubleRegister LCodeGen::ToDoubleRegister(int index) const { + return DoubleRegister::FromAllocationIndex(index); +} + + +Register LCodeGen::ToRegister(LOperand* op) const { + DCHECK(op->IsRegister()); + return ToRegister(op->index()); +} + + +Register LCodeGen::EmitLoadRegister(LOperand* op, Register scratch) { + if (op->IsRegister()) { + return ToRegister(op->index()); + } else if (op->IsConstantOperand()) { + LConstantOperand* const_op = LConstantOperand::cast(op); + HConstant* constant = chunk_->LookupConstant(const_op); + Handle<Object> literal = constant->handle(isolate()); + Representation r = chunk_->LookupLiteralRepresentation(const_op); + if (r.IsInteger32()) { + DCHECK(literal->IsNumber()); + __ li(scratch, Operand(static_cast<int32_t>(literal->Number()))); + } else if (r.IsSmi()) { + DCHECK(constant->HasSmiValue()); + __ li(scratch, Operand(Smi::FromInt(constant->Integer32Value()))); + } else if (r.IsDouble()) { + Abort(kEmitLoadRegisterUnsupportedDoubleImmediate); + } else { + DCHECK(r.IsSmiOrTagged()); + __ li(scratch, literal); + } + return scratch; + } else if (op->IsStackSlot()) { + __ ld(scratch, ToMemOperand(op)); + return scratch; + } + UNREACHABLE(); + return scratch; +} + + +DoubleRegister LCodeGen::ToDoubleRegister(LOperand* op) const { + DCHECK(op->IsDoubleRegister()); + return ToDoubleRegister(op->index()); +} + + +DoubleRegister LCodeGen::EmitLoadDoubleRegister(LOperand* op, + FloatRegister flt_scratch, + DoubleRegister dbl_scratch) { + if (op->IsDoubleRegister()) { + return ToDoubleRegister(op->index()); + } else if (op->IsConstantOperand()) { + LConstantOperand* const_op = LConstantOperand::cast(op); + HConstant* constant = chunk_->LookupConstant(const_op); + Handle<Object> literal = constant->handle(isolate()); + Representation r = chunk_->LookupLiteralRepresentation(const_op); + if (r.IsInteger32()) { + DCHECK(literal->IsNumber()); + __ li(at, Operand(static_cast<int32_t>(literal->Number()))); + __ mtc1(at, flt_scratch); + __ cvt_d_w(dbl_scratch, flt_scratch); + return dbl_scratch; + } else if (r.IsDouble()) { + Abort(kUnsupportedDoubleImmediate); + } else if (r.IsTagged()) { + Abort(kUnsupportedTaggedImmediate); + } + } else if (op->IsStackSlot()) { + MemOperand mem_op = ToMemOperand(op); + __ ldc1(dbl_scratch, mem_op); + return dbl_scratch; + } + UNREACHABLE(); + return dbl_scratch; +} + + +Handle<Object> LCodeGen::ToHandle(LConstantOperand* op) const { + HConstant* constant = chunk_->LookupConstant(op); + DCHECK(chunk_->LookupLiteralRepresentation(op).IsSmiOrTagged()); + return constant->handle(isolate()); +} + + +bool LCodeGen::IsInteger32(LConstantOperand* op) const { + return chunk_->LookupLiteralRepresentation(op).IsSmiOrInteger32(); +} + + +bool LCodeGen::IsSmi(LConstantOperand* op) const { + return chunk_->LookupLiteralRepresentation(op).IsSmi(); +} + + +int32_t LCodeGen::ToInteger32(LConstantOperand* op) const { + // return ToRepresentation(op, Representation::Integer32()); + HConstant* constant = chunk_->LookupConstant(op); + return constant->Integer32Value(); +} + + +int32_t LCodeGen::ToRepresentation_donotuse(LConstantOperand* op, + const Representation& r) const { + HConstant* 
constant = chunk_->LookupConstant(op); + int32_t value = constant->Integer32Value(); + if (r.IsInteger32()) return value; + DCHECK(r.IsSmiOrTagged()); + return reinterpret_cast<int64_t>(Smi::FromInt(value)); +} + + +Smi* LCodeGen::ToSmi(LConstantOperand* op) const { + HConstant* constant = chunk_->LookupConstant(op); + return Smi::FromInt(constant->Integer32Value()); +} + + +double LCodeGen::ToDouble(LConstantOperand* op) const { + HConstant* constant = chunk_->LookupConstant(op); + DCHECK(constant->HasDoubleValue()); + return constant->DoubleValue(); +} + + +Operand LCodeGen::ToOperand(LOperand* op) { + if (op->IsConstantOperand()) { + LConstantOperand* const_op = LConstantOperand::cast(op); + HConstant* constant = chunk()->LookupConstant(const_op); + Representation r = chunk_->LookupLiteralRepresentation(const_op); + if (r.IsSmi()) { + DCHECK(constant->HasSmiValue()); + return Operand(Smi::FromInt(constant->Integer32Value())); + } else if (r.IsInteger32()) { + DCHECK(constant->HasInteger32Value()); + return Operand(constant->Integer32Value()); + } else if (r.IsDouble()) { + Abort(kToOperandUnsupportedDoubleImmediate); + } + DCHECK(r.IsTagged()); + return Operand(constant->handle(isolate())); + } else if (op->IsRegister()) { + return Operand(ToRegister(op)); + } else if (op->IsDoubleRegister()) { + Abort(kToOperandIsDoubleRegisterUnimplemented); + return Operand((int64_t)0); + } + // Stack slots not implemented, use ToMemOperand instead. + UNREACHABLE(); + return Operand((int64_t)0); +} + + +static int ArgumentsOffsetWithoutFrame(int index) { + DCHECK(index < 0); + return -(index + 1) * kPointerSize; +} + + +MemOperand LCodeGen::ToMemOperand(LOperand* op) const { + DCHECK(!op->IsRegister()); + DCHECK(!op->IsDoubleRegister()); + DCHECK(op->IsStackSlot() || op->IsDoubleStackSlot()); + if (NeedsEagerFrame()) { + return MemOperand(fp, StackSlotOffset(op->index())); + } else { + // Retrieve parameter without eager stack-frame relative to the + // stack-pointer. + return MemOperand(sp, ArgumentsOffsetWithoutFrame(op->index())); + } +} + + +MemOperand LCodeGen::ToHighMemOperand(LOperand* op) const { + DCHECK(op->IsDoubleStackSlot()); + if (NeedsEagerFrame()) { + // return MemOperand(fp, StackSlotOffset(op->index()) + kPointerSize); + return MemOperand(fp, StackSlotOffset(op->index()) + kIntSize); + } else { + // Retrieve parameter without eager stack-frame relative to the + // stack-pointer. + // return MemOperand( + // sp, ArgumentsOffsetWithoutFrame(op->index()) + kPointerSize); + return MemOperand( + sp, ArgumentsOffsetWithoutFrame(op->index()) + kIntSize); + } +} + + +void LCodeGen::WriteTranslation(LEnvironment* environment, + Translation* translation) { + if (environment == NULL) return; + + // The translation includes one command per value in the environment. + int translation_size = environment->translation_size(); + // The output frame height does not include the parameters. + int height = translation_size - environment->parameter_count(); + + WriteTranslation(environment->outer(), translation); + bool has_closure_id = !info()->closure().is_null() && + !info()->closure().is_identical_to(environment->closure()); + int closure_id = has_closure_id + ? 
DefineDeoptimizationLiteral(environment->closure()) + : Translation::kSelfLiteralId; + + switch (environment->frame_type()) { + case JS_FUNCTION: + translation->BeginJSFrame(environment->ast_id(), closure_id, height); + break; + case JS_CONSTRUCT: + translation->BeginConstructStubFrame(closure_id, translation_size); + break; + case JS_GETTER: + DCHECK(translation_size == 1); + DCHECK(height == 0); + translation->BeginGetterStubFrame(closure_id); + break; + case JS_SETTER: + DCHECK(translation_size == 2); + DCHECK(height == 0); + translation->BeginSetterStubFrame(closure_id); + break; + case STUB: + translation->BeginCompiledStubFrame(); + break; + case ARGUMENTS_ADAPTOR: + translation->BeginArgumentsAdaptorFrame(closure_id, translation_size); + break; + } + + int object_index = 0; + int dematerialized_index = 0; + for (int i = 0; i < translation_size; ++i) { + LOperand* value = environment->values()->at(i); + AddToTranslation(environment, + translation, + value, + environment->HasTaggedValueAt(i), + environment->HasUint32ValueAt(i), + &object_index, + &dematerialized_index); + } +} + + +void LCodeGen::AddToTranslation(LEnvironment* environment, + Translation* translation, + LOperand* op, + bool is_tagged, + bool is_uint32, + int* object_index_pointer, + int* dematerialized_index_pointer) { + if (op == LEnvironment::materialization_marker()) { + int object_index = (*object_index_pointer)++; + if (environment->ObjectIsDuplicateAt(object_index)) { + int dupe_of = environment->ObjectDuplicateOfAt(object_index); + translation->DuplicateObject(dupe_of); + return; + } + int object_length = environment->ObjectLengthAt(object_index); + if (environment->ObjectIsArgumentsAt(object_index)) { + translation->BeginArgumentsObject(object_length); + } else { + translation->BeginCapturedObject(object_length); + } + int dematerialized_index = *dematerialized_index_pointer; + int env_offset = environment->translation_size() + dematerialized_index; + *dematerialized_index_pointer += object_length; + for (int i = 0; i < object_length; ++i) { + LOperand* value = environment->values()->at(env_offset + i); + AddToTranslation(environment, + translation, + value, + environment->HasTaggedValueAt(env_offset + i), + environment->HasUint32ValueAt(env_offset + i), + object_index_pointer, + dematerialized_index_pointer); + } + return; + } + + if (op->IsStackSlot()) { + if (is_tagged) { + translation->StoreStackSlot(op->index()); + } else if (is_uint32) { + translation->StoreUint32StackSlot(op->index()); + } else { + translation->StoreInt32StackSlot(op->index()); + } + } else if (op->IsDoubleStackSlot()) { + translation->StoreDoubleStackSlot(op->index()); + } else if (op->IsRegister()) { + Register reg = ToRegister(op); + if (is_tagged) { + translation->StoreRegister(reg); + } else if (is_uint32) { + translation->StoreUint32Register(reg); + } else { + translation->StoreInt32Register(reg); + } + } else if (op->IsDoubleRegister()) { + DoubleRegister reg = ToDoubleRegister(op); + translation->StoreDoubleRegister(reg); + } else if (op->IsConstantOperand()) { + HConstant* constant = chunk()->LookupConstant(LConstantOperand::cast(op)); + int src_index = DefineDeoptimizationLiteral(constant->handle(isolate())); + translation->StoreLiteral(src_index); + } else { + UNREACHABLE(); + } +} + + +void LCodeGen::CallCode(Handle<Code> code, + RelocInfo::Mode mode, + LInstruction* instr) { + CallCodeGeneric(code, mode, instr, RECORD_SIMPLE_SAFEPOINT); +} + + +void LCodeGen::CallCodeGeneric(Handle<Code> code, + RelocInfo::Mode mode, + 
LInstruction* instr, + SafepointMode safepoint_mode) { + DCHECK(instr != NULL); + __ Call(code, mode); + RecordSafepointWithLazyDeopt(instr, safepoint_mode); +} + + +void LCodeGen::CallRuntime(const Runtime::Function* function, + int num_arguments, + LInstruction* instr, + SaveFPRegsMode save_doubles) { + DCHECK(instr != NULL); + + __ CallRuntime(function, num_arguments, save_doubles); + + RecordSafepointWithLazyDeopt(instr, RECORD_SIMPLE_SAFEPOINT); +} + + +void LCodeGen::LoadContextFromDeferred(LOperand* context) { + if (context->IsRegister()) { + __ Move(cp, ToRegister(context)); + } else if (context->IsStackSlot()) { + __ ld(cp, ToMemOperand(context)); + } else if (context->IsConstantOperand()) { + HConstant* constant = + chunk_->LookupConstant(LConstantOperand::cast(context)); + __ li(cp, Handle<Object>::cast(constant->handle(isolate()))); + } else { + UNREACHABLE(); + } +} + + +void LCodeGen::CallRuntimeFromDeferred(Runtime::FunctionId id, + int argc, + LInstruction* instr, + LOperand* context) { + LoadContextFromDeferred(context); + __ CallRuntimeSaveDoubles(id); + RecordSafepointWithRegisters( + instr->pointer_map(), argc, Safepoint::kNoLazyDeopt); +} + + +void LCodeGen::RegisterEnvironmentForDeoptimization(LEnvironment* environment, + Safepoint::DeoptMode mode) { + environment->set_has_been_used(); + if (!environment->HasBeenRegistered()) { + // Physical stack frame layout: + // -x ............. -4 0 ..................................... y + // [incoming arguments] [spill slots] [pushed outgoing arguments] + + // Layout of the environment: + // 0 ..................................................... size-1 + // [parameters] [locals] [expression stack including arguments] + + // Layout of the translation: + // 0 ........................................................ size - 1 + 4 + // [expression stack including arguments] [locals] [4 words] [parameters] + // |>------------ translation_size ------------<| + + int frame_count = 0; + int jsframe_count = 0; + for (LEnvironment* e = environment; e != NULL; e = e->outer()) { + ++frame_count; + if (e->frame_type() == JS_FUNCTION) { + ++jsframe_count; + } + } + Translation translation(&translations_, frame_count, jsframe_count, zone()); + WriteTranslation(environment, &translation); + int deoptimization_index = deoptimizations_.length(); + int pc_offset = masm()->pc_offset(); + environment->Register(deoptimization_index, + translation.index(), + (mode == Safepoint::kLazyDeopt) ? 
pc_offset : -1); + deoptimizations_.Add(environment, zone()); + } +} + + +void LCodeGen::DeoptimizeIf(Condition condition, + LEnvironment* environment, + Deoptimizer::BailoutType bailout_type, + Register src1, + const Operand& src2) { + RegisterEnvironmentForDeoptimization(environment, Safepoint::kNoLazyDeopt); + DCHECK(environment->HasBeenRegistered()); + int id = environment->deoptimization_index(); + DCHECK(info()->IsOptimizing() || info()->IsStub()); + Address entry = + Deoptimizer::GetDeoptimizationEntry(isolate(), id, bailout_type); + if (entry == NULL) { + Abort(kBailoutWasNotPrepared); + return; + } + + if (FLAG_deopt_every_n_times != 0 && !info()->IsStub()) { + Register scratch = scratch0(); + ExternalReference count = ExternalReference::stress_deopt_count(isolate()); + Label no_deopt; + __ Push(a1, scratch); + __ li(scratch, Operand(count)); + __ lw(a1, MemOperand(scratch)); + __ Subu(a1, a1, Operand(1)); + __ Branch(&no_deopt, ne, a1, Operand(zero_reg)); + __ li(a1, Operand(FLAG_deopt_every_n_times)); + __ sw(a1, MemOperand(scratch)); + __ Pop(a1, scratch); + + __ Call(entry, RelocInfo::RUNTIME_ENTRY); + __ bind(&no_deopt); + __ sw(a1, MemOperand(scratch)); + __ Pop(a1, scratch); + } + + if (info()->ShouldTrapOnDeopt()) { + Label skip; + if (condition != al) { + __ Branch(&skip, NegateCondition(condition), src1, src2); + } + __ stop("trap_on_deopt"); + __ bind(&skip); + } + + DCHECK(info()->IsStub() || frame_is_built_); + // Go through jump table if we need to handle condition, build frame, or + // restore caller doubles. + if (condition == al && frame_is_built_ && + !info()->saves_caller_doubles()) { + __ Call(entry, RelocInfo::RUNTIME_ENTRY, condition, src1, src2); + } else { + // We often have several deopts to the same entry, reuse the last + // jump entry if this is the case. + if (deopt_jump_table_.is_empty() || + (deopt_jump_table_.last().address != entry) || + (deopt_jump_table_.last().bailout_type != bailout_type) || + (deopt_jump_table_.last().needs_frame != !frame_is_built_)) { + Deoptimizer::JumpTableEntry table_entry(entry, + bailout_type, + !frame_is_built_); + deopt_jump_table_.Add(table_entry, zone()); + } + __ Branch(&deopt_jump_table_.last().label, condition, src1, src2); + } +} + + +void LCodeGen::DeoptimizeIf(Condition condition, + LEnvironment* environment, + Register src1, + const Operand& src2) { + Deoptimizer::BailoutType bailout_type = info()->IsStub() + ? Deoptimizer::LAZY + : Deoptimizer::EAGER; + DeoptimizeIf(condition, environment, bailout_type, src1, src2); +} + + +void LCodeGen::PopulateDeoptimizationData(Handle<Code> code) { + int length = deoptimizations_.length(); + if (length == 0) return; + Handle<DeoptimizationInputData> data = + DeoptimizationInputData::New(isolate(), length, 0, TENURED); + + Handle<ByteArray> translations = + translations_.CreateByteArray(isolate()->factory()); + data->SetTranslationByteArray(*translations); + data->SetInlinedFunctionCount(Smi::FromInt(inlined_function_count_)); + data->SetOptimizationId(Smi::FromInt(info_->optimization_id())); + if (info_->IsOptimizing()) { + // Reference to shared function info does not change between phases. 
+ AllowDeferredHandleDereference allow_handle_dereference; + data->SetSharedFunctionInfo(*info_->shared_info()); + } else { + data->SetSharedFunctionInfo(Smi::FromInt(0)); + } + + Handle<FixedArray> literals = + factory()->NewFixedArray(deoptimization_literals_.length(), TENURED); + { AllowDeferredHandleDereference copy_handles; + for (int i = 0; i < deoptimization_literals_.length(); i++) { + literals->set(i, *deoptimization_literals_[i]); + } + data->SetLiteralArray(*literals); + } + + data->SetOsrAstId(Smi::FromInt(info_->osr_ast_id().ToInt())); + data->SetOsrPcOffset(Smi::FromInt(osr_pc_offset_)); + + // Populate the deoptimization entries. + for (int i = 0; i < length; i++) { + LEnvironment* env = deoptimizations_[i]; + data->SetAstId(i, env->ast_id()); + data->SetTranslationIndex(i, Smi::FromInt(env->translation_index())); + data->SetArgumentsStackHeight(i, + Smi::FromInt(env->arguments_stack_height())); + data->SetPc(i, Smi::FromInt(env->pc_offset())); + } + code->set_deoptimization_data(*data); +} + + +int LCodeGen::DefineDeoptimizationLiteral(Handle<Object> literal) { + int result = deoptimization_literals_.length(); + for (int i = 0; i < deoptimization_literals_.length(); ++i) { + if (deoptimization_literals_[i].is_identical_to(literal)) return i; + } + deoptimization_literals_.Add(literal, zone()); + return result; +} + + +void LCodeGen::PopulateDeoptimizationLiteralsWithInlinedFunctions() { + DCHECK(deoptimization_literals_.length() == 0); + + const ZoneList<Handle<JSFunction> >* inlined_closures = + chunk()->inlined_closures(); + + for (int i = 0, length = inlined_closures->length(); + i < length; + i++) { + DefineDeoptimizationLiteral(inlined_closures->at(i)); + } + + inlined_function_count_ = deoptimization_literals_.length(); +} + + +void LCodeGen::RecordSafepointWithLazyDeopt( + LInstruction* instr, SafepointMode safepoint_mode) { + if (safepoint_mode == RECORD_SIMPLE_SAFEPOINT) { + RecordSafepoint(instr->pointer_map(), Safepoint::kLazyDeopt); + } else { + DCHECK(safepoint_mode == RECORD_SAFEPOINT_WITH_REGISTERS_AND_NO_ARGUMENTS); + RecordSafepointWithRegisters( + instr->pointer_map(), 0, Safepoint::kLazyDeopt); + } +} + + +void LCodeGen::RecordSafepoint( + LPointerMap* pointers, + Safepoint::Kind kind, + int arguments, + Safepoint::DeoptMode deopt_mode) { + DCHECK(expected_safepoint_kind_ == kind); + + const ZoneList<LOperand*>* operands = pointers->GetNormalizedOperands(); + Safepoint safepoint = safepoints_.DefineSafepoint(masm(), + kind, arguments, deopt_mode); + for (int i = 0; i < operands->length(); i++) { + LOperand* pointer = operands->at(i); + if (pointer->IsStackSlot()) { + safepoint.DefinePointerSlot(pointer->index(), zone()); + } else if (pointer->IsRegister() && (kind & Safepoint::kWithRegisters)) { + safepoint.DefinePointerRegister(ToRegister(pointer), zone()); + } + } +} + + +void LCodeGen::RecordSafepoint(LPointerMap* pointers, + Safepoint::DeoptMode deopt_mode) { + RecordSafepoint(pointers, Safepoint::kSimple, 0, deopt_mode); +} + + +void LCodeGen::RecordSafepoint(Safepoint::DeoptMode deopt_mode) { + LPointerMap empty_pointers(zone()); + RecordSafepoint(&empty_pointers, deopt_mode); +} + + +void LCodeGen::RecordSafepointWithRegisters(LPointerMap* pointers, + int arguments, + Safepoint::DeoptMode deopt_mode) { + RecordSafepoint( + pointers, Safepoint::kWithRegisters, arguments, deopt_mode); +} + + +void LCodeGen::RecordAndWritePosition(int position) { + if (position == RelocInfo::kNoPosition) return; + 
masm()->positions_recorder()->RecordPosition(position);
+  masm()->positions_recorder()->WriteRecordedPositions();
+}
+
+
+static const char* LabelType(LLabel* label) {
+  if (label->is_loop_header()) return " (loop header)";
+  if (label->is_osr_entry()) return " (OSR entry)";
+  return "";
+}
+
+
+void LCodeGen::DoLabel(LLabel* label) {
+  Comment(";;; <@%d,#%d> -------------------- B%d%s --------------------",
+          current_instruction_,
+          label->hydrogen_value()->id(),
+          label->block_id(),
+          LabelType(label));
+  __ bind(label->label());
+  current_block_ = label->block_id();
+  DoGap(label);
+}
+
+
+void LCodeGen::DoParallelMove(LParallelMove* move) {
+  resolver_.Resolve(move);
+}
+
+
+void LCodeGen::DoGap(LGap* gap) {
+  for (int i = LGap::FIRST_INNER_POSITION;
+       i <= LGap::LAST_INNER_POSITION;
+       i++) {
+    LGap::InnerPosition inner_pos = static_cast<LGap::InnerPosition>(i);
+    LParallelMove* move = gap->GetParallelMove(inner_pos);
+    if (move != NULL) DoParallelMove(move);
+  }
+}
+
+
+void LCodeGen::DoInstructionGap(LInstructionGap* instr) {
+  DoGap(instr);
+}
+
+
+void LCodeGen::DoParameter(LParameter* instr) {
+  // Nothing to do.
+}
+
+
+void LCodeGen::DoCallStub(LCallStub* instr) {
+  DCHECK(ToRegister(instr->context()).is(cp));
+  DCHECK(ToRegister(instr->result()).is(v0));
+  switch (instr->hydrogen()->major_key()) {
+    case CodeStub::RegExpExec: {
+      RegExpExecStub stub(isolate());
+      CallCode(stub.GetCode(), RelocInfo::CODE_TARGET, instr);
+      break;
+    }
+    case CodeStub::SubString: {
+      SubStringStub stub(isolate());
+      CallCode(stub.GetCode(), RelocInfo::CODE_TARGET, instr);
+      break;
+    }
+    case CodeStub::StringCompare: {
+      StringCompareStub stub(isolate());
+      CallCode(stub.GetCode(), RelocInfo::CODE_TARGET, instr);
+      break;
+    }
+    default:
+      UNREACHABLE();
+  }
+}
+
+
+void LCodeGen::DoUnknownOSRValue(LUnknownOSRValue* instr) {
+  GenerateOsrPrologue();
+}
+
+
+void LCodeGen::DoModByPowerOf2I(LModByPowerOf2I* instr) {
+  Register dividend = ToRegister(instr->dividend());
+  int32_t divisor = instr->divisor();
+  DCHECK(dividend.is(ToRegister(instr->result())));
+
+  // Theoretically, a variation of the branch-free code for integer division by
+  // a power of 2 (calculating the remainder via an additional multiplication
+  // (which gets simplified to an 'and') and subtraction) should be faster, and
+  // this is exactly what GCC and clang emit. Nevertheless, benchmarks seem to
+  // indicate that positive dividends are heavily favored, so the branching
+  // version performs better.
+  HMod* hmod = instr->hydrogen();
+  int32_t mask = divisor < 0 ? -(divisor + 1) : (divisor - 1);
+  Label dividend_is_not_negative, done;
+
+  if (hmod->CheckFlag(HValue::kLeftCanBeNegative)) {
+    __ Branch(&dividend_is_not_negative, ge, dividend, Operand(zero_reg));
+    // Note: The code below even works when right contains kMinInt.
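+    // For example, with divisor == 8 (or -8) the mask is 7; a dividend of
+    // -13 is negated to 13, masked to 5, and negated back to -5, matching
+    // JS -13 % 8 == -5. A masked value of zero is the potential -0 case
+    // checked below.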
+    __ dsubu(dividend, zero_reg, dividend);
+    __ And(dividend, dividend, Operand(mask));
+    if (hmod->CheckFlag(HValue::kBailoutOnMinusZero)) {
+      DeoptimizeIf(eq, instr->environment(), dividend, Operand(zero_reg));
+    }
+    __ Branch(USE_DELAY_SLOT, &done);
+    __ dsubu(dividend, zero_reg, dividend);
+  }
+
+  __ bind(&dividend_is_not_negative);
+  __ And(dividend, dividend, Operand(mask));
+  __ bind(&done);
+}
+
+
+void LCodeGen::DoModByConstI(LModByConstI* instr) {
+  Register dividend = ToRegister(instr->dividend());
+  int32_t divisor = instr->divisor();
+  Register result = ToRegister(instr->result());
+  DCHECK(!dividend.is(result));
+
+  if (divisor == 0) {
+    DeoptimizeIf(al, instr->environment());
+    return;
+  }
+
+  __ TruncatingDiv(result, dividend, Abs(divisor));
+  __ Dmul(result, result, Operand(Abs(divisor)));
+  __ Dsubu(result, dividend, Operand(result));
+
+  // Check for negative zero.
+  HMod* hmod = instr->hydrogen();
+  if (hmod->CheckFlag(HValue::kBailoutOnMinusZero)) {
+    Label remainder_not_zero;
+    __ Branch(&remainder_not_zero, ne, result, Operand(zero_reg));
+    DeoptimizeIf(lt, instr->environment(), dividend, Operand(zero_reg));
+    __ bind(&remainder_not_zero);
+  }
+}
+
+
+void LCodeGen::DoModI(LModI* instr) {
+  HMod* hmod = instr->hydrogen();
+  const Register left_reg = ToRegister(instr->left());
+  const Register right_reg = ToRegister(instr->right());
+  const Register result_reg = ToRegister(instr->result());
+
+  // div runs in the background while we check for special cases.
+  __ Dmod(result_reg, left_reg, right_reg);
+
+  Label done;
+  // Check for x % 0, we have to deopt in this case because we can't return a
+  // NaN.
+  if (hmod->CheckFlag(HValue::kCanBeDivByZero)) {
+    DeoptimizeIf(eq, instr->environment(), right_reg, Operand(zero_reg));
+  }
+
+  // Check for kMinInt % -1, div will return kMinInt, which is not what we
+  // want. We have to deopt if we care about -0, because we can't return that.
+  if (hmod->CheckFlag(HValue::kCanOverflow)) {
+    Label no_overflow_possible;
+    __ Branch(&no_overflow_possible, ne, left_reg, Operand(kMinInt));
+    if (hmod->CheckFlag(HValue::kBailoutOnMinusZero)) {
+      DeoptimizeIf(eq, instr->environment(), right_reg, Operand(-1));
+    } else {
+      __ Branch(&no_overflow_possible, ne, right_reg, Operand(-1));
+      __ Branch(USE_DELAY_SLOT, &done);
+      __ mov(result_reg, zero_reg);
+    }
+    __ bind(&no_overflow_possible);
+  }
+
+  // If we care about -0, test if the dividend is <0 and the result is 0.
+  __ Branch(&done, ge, left_reg, Operand(zero_reg));
+
+  if (hmod->CheckFlag(HValue::kBailoutOnMinusZero)) {
+    DeoptimizeIf(eq, instr->environment(), result_reg, Operand(zero_reg));
+  }
+  __ bind(&done);
+}
+
+
+void LCodeGen::DoDivByPowerOf2I(LDivByPowerOf2I* instr) {
+  Register dividend = ToRegister(instr->dividend());
+  int32_t divisor = instr->divisor();
+  Register result = ToRegister(instr->result());
+  DCHECK(divisor == kMinInt || IsPowerOf2(Abs(divisor)));
+  DCHECK(!result.is(dividend));
+
+  // Check for (0 / -x) that will produce negative zero.
+  HDiv* hdiv = instr->hydrogen();
+  if (hdiv->CheckFlag(HValue::kBailoutOnMinusZero) && divisor < 0) {
+    DeoptimizeIf(eq, instr->environment(), dividend, Operand(zero_reg));
+  }
+  // Check for (kMinInt / -1).
+  if (hdiv->CheckFlag(HValue::kCanOverflow) && divisor == -1) {
+    DeoptimizeIf(eq, instr->environment(), dividend, Operand(kMinInt));
+  }
+  // Deoptimize if remainder will not be 0.
+  if (!hdiv->CheckFlag(HInstruction::kAllUsesTruncatingToInt32) &&
+      divisor != 1 && divisor != -1) {
+    int32_t mask = divisor < 0 ?
-(divisor + 1) : (divisor - 1); + __ And(at, dividend, Operand(mask)); + DeoptimizeIf(ne, instr->environment(), at, Operand(zero_reg)); + } + + if (divisor == -1) { // Nice shortcut, not needed for correctness. + __ Dsubu(result, zero_reg, dividend); + return; + } + uint16_t shift = WhichPowerOf2Abs(divisor); + if (shift == 0) { + __ Move(result, dividend); + } else if (shift == 1) { + __ dsrl32(result, dividend, 31); + __ Daddu(result, dividend, Operand(result)); + } else { + __ dsra32(result, dividend, 31); + __ dsrl32(result, result, 32 - shift); + __ Daddu(result, dividend, Operand(result)); + } + if (shift > 0) __ dsra(result, result, shift); + if (divisor < 0) __ Dsubu(result, zero_reg, result); +} + + +void LCodeGen::DoDivByConstI(LDivByConstI* instr) { + Register dividend = ToRegister(instr->dividend()); + int32_t divisor = instr->divisor(); + Register result = ToRegister(instr->result()); + DCHECK(!dividend.is(result)); + + if (divisor == 0) { + DeoptimizeIf(al, instr->environment()); + return; + } + + // Check for (0 / -x) that will produce negative zero. + HDiv* hdiv = instr->hydrogen(); + if (hdiv->CheckFlag(HValue::kBailoutOnMinusZero) && divisor < 0) { + DeoptimizeIf(eq, instr->environment(), dividend, Operand(zero_reg)); + } + + __ TruncatingDiv(result, dividend, Abs(divisor)); + if (divisor < 0) __ Subu(result, zero_reg, result); + + if (!hdiv->CheckFlag(HInstruction::kAllUsesTruncatingToInt32)) { + __ Dmul(scratch0(), result, Operand(divisor)); + __ Dsubu(scratch0(), scratch0(), dividend); + DeoptimizeIf(ne, instr->environment(), scratch0(), Operand(zero_reg)); + } +} + + +// TODO(svenpanne) Refactor this to avoid code duplication with DoFlooringDivI. +void LCodeGen::DoDivI(LDivI* instr) { + HBinaryOperation* hdiv = instr->hydrogen(); + Register dividend = ToRegister(instr->dividend()); + Register divisor = ToRegister(instr->divisor()); + const Register result = ToRegister(instr->result()); + + // On MIPS div is asynchronous - it will run in the background while we + // check for special cases. + __ Ddiv(result, dividend, divisor); + + // Check for x / 0. + if (hdiv->CheckFlag(HValue::kCanBeDivByZero)) { + DeoptimizeIf(eq, instr->environment(), divisor, Operand(zero_reg)); + } + + // Check for (0 / -x) that will produce negative zero. + if (hdiv->CheckFlag(HValue::kBailoutOnMinusZero)) { + Label left_not_zero; + __ Branch(&left_not_zero, ne, dividend, Operand(zero_reg)); + DeoptimizeIf(lt, instr->environment(), divisor, Operand(zero_reg)); + __ bind(&left_not_zero); + } + + // Check for (kMinInt / -1). + if (hdiv->CheckFlag(HValue::kCanOverflow) && + !hdiv->CheckFlag(HValue::kAllUsesTruncatingToInt32)) { + Label left_not_min_int; + __ Branch(&left_not_min_int, ne, dividend, Operand(kMinInt)); + DeoptimizeIf(eq, instr->environment(), divisor, Operand(-1)); + __ bind(&left_not_min_int); + } + + if (!hdiv->CheckFlag(HValue::kAllUsesTruncatingToInt32)) { + // Calculate remainder. + Register remainder = ToRegister(instr->temp()); + if (kArchVariant != kMips64r6) { + __ mfhi(remainder); + } else { + __ dmod(remainder, dividend, divisor); + } + DeoptimizeIf(ne, instr->environment(), remainder, Operand(zero_reg)); + } +} + + +void LCodeGen::DoMultiplyAddD(LMultiplyAddD* instr) { + DoubleRegister addend = ToDoubleRegister(instr->addend()); + DoubleRegister multiplier = ToDoubleRegister(instr->multiplier()); + DoubleRegister multiplicand = ToDoubleRegister(instr->multiplicand()); + + // This is computed in-place. 
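+  // Madd_d computes addend <- multiplier * multiplicand + addend (the double
+  // scratch register covers targets without a fused multiply-add), which is
+  // why the result operand must alias the addend register, as the DCHECK
+  // below enforces.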
+ DCHECK(addend.is(ToDoubleRegister(instr->result()))); + + __ Madd_d(addend, addend, multiplier, multiplicand, double_scratch0()); +} + + +void LCodeGen::DoFlooringDivByPowerOf2I(LFlooringDivByPowerOf2I* instr) { + Register dividend = ToRegister(instr->dividend()); + Register result = ToRegister(instr->result()); + int32_t divisor = instr->divisor(); + Register scratch = result.is(dividend) ? scratch0() : dividend; + DCHECK(!result.is(dividend) || !scratch.is(dividend)); + + // If the divisor is 1, return the dividend. + if (divisor == 1) { + __ Move(result, dividend); + return; + } + + // If the divisor is positive, things are easy: There can be no deopts and we + // can simply do an arithmetic right shift. + uint16_t shift = WhichPowerOf2Abs(divisor); + if (divisor > 1) { + __ dsra(result, dividend, shift); + return; + } + + // If the divisor is negative, we have to negate and handle edge cases. + // Dividend can be the same register as result so save the value of it + // for checking overflow. + __ Move(scratch, dividend); + + __ Dsubu(result, zero_reg, dividend); + if (instr->hydrogen()->CheckFlag(HValue::kBailoutOnMinusZero)) { + DeoptimizeIf(eq, instr->environment(), result, Operand(zero_reg)); + } + + __ Xor(scratch, scratch, result); + // Dividing by -1 is basically negation, unless we overflow. + if (divisor == -1) { + if (instr->hydrogen()->CheckFlag(HValue::kLeftCanBeMinInt)) { + DeoptimizeIf(gt, instr->environment(), result, Operand(kMaxInt)); + } + return; + } + + // If the negation could not overflow, simply shifting is OK. + if (!instr->hydrogen()->CheckFlag(HValue::kLeftCanBeMinInt)) { + __ dsra(result, result, shift); + return; + } + + Label no_overflow, done; + __ Branch(&no_overflow, lt, scratch, Operand(zero_reg)); + __ li(result, Operand(kMinInt / divisor), CONSTANT_SIZE); + __ Branch(&done); + __ bind(&no_overflow); + __ dsra(result, result, shift); + __ bind(&done); +} + + +void LCodeGen::DoFlooringDivByConstI(LFlooringDivByConstI* instr) { + Register dividend = ToRegister(instr->dividend()); + int32_t divisor = instr->divisor(); + Register result = ToRegister(instr->result()); + DCHECK(!dividend.is(result)); + + if (divisor == 0) { + DeoptimizeIf(al, instr->environment()); + return; + } + + // Check for (0 / -x) that will produce negative zero. + HMathFloorOfDiv* hdiv = instr->hydrogen(); + if (hdiv->CheckFlag(HValue::kBailoutOnMinusZero) && divisor < 0) { + DeoptimizeIf(eq, instr->environment(), dividend, Operand(zero_reg)); + } + + // Easy case: We need no dynamic check for the dividend and the flooring + // division is the same as the truncating division. + if ((divisor > 0 && !hdiv->CheckFlag(HValue::kLeftCanBeNegative)) || + (divisor < 0 && !hdiv->CheckFlag(HValue::kLeftCanBePositive))) { + __ TruncatingDiv(result, dividend, Abs(divisor)); + if (divisor < 0) __ Dsubu(result, zero_reg, result); + return; + } + + // In the general case we may need to adjust before and after the truncating + // division to get a flooring division. + Register temp = ToRegister(instr->temp()); + DCHECK(!temp.is(dividend) && !temp.is(result)); + Label needs_adjustment, done; + __ Branch(&needs_adjustment, divisor > 0 ? lt : gt, + dividend, Operand(zero_reg)); + __ TruncatingDiv(result, dividend, Abs(divisor)); + if (divisor < 0) __ Dsubu(result, zero_reg, result); + __ jmp(&done); + __ bind(&needs_adjustment); + __ Daddu(temp, dividend, Operand(divisor > 0 ? 
1 : -1));
+  __ TruncatingDiv(result, temp, Abs(divisor));
+  if (divisor < 0) __ Dsubu(result, zero_reg, result);
+  __ Dsubu(result, result, Operand(1));
+  __ bind(&done);
+}
+
+
+// TODO(svenpanne) Refactor this to avoid code duplication with DoDivI.
+void LCodeGen::DoFlooringDivI(LFlooringDivI* instr) {
+  HBinaryOperation* hdiv = instr->hydrogen();
+  Register dividend = ToRegister(instr->dividend());
+  Register divisor = ToRegister(instr->divisor());
+  const Register result = ToRegister(instr->result());
+
+  // On MIPS div is asynchronous - it will run in the background while we
+  // check for special cases.
+  __ Ddiv(result, dividend, divisor);
+
+  // Check for x / 0.
+  if (hdiv->CheckFlag(HValue::kCanBeDivByZero)) {
+    DeoptimizeIf(eq, instr->environment(), divisor, Operand(zero_reg));
+  }
+
+  // Check for (0 / -x) that will produce negative zero.
+  if (hdiv->CheckFlag(HValue::kBailoutOnMinusZero)) {
+    Label left_not_zero;
+    __ Branch(&left_not_zero, ne, dividend, Operand(zero_reg));
+    DeoptimizeIf(lt, instr->environment(), divisor, Operand(zero_reg));
+    __ bind(&left_not_zero);
+  }
+
+  // Check for (kMinInt / -1).
+  if (hdiv->CheckFlag(HValue::kCanOverflow) &&
+      !hdiv->CheckFlag(HValue::kAllUsesTruncatingToInt32)) {
+    Label left_not_min_int;
+    __ Branch(&left_not_min_int, ne, dividend, Operand(kMinInt));
+    DeoptimizeIf(eq, instr->environment(), divisor, Operand(-1));
+    __ bind(&left_not_min_int);
+  }
+
+  // We performed a truncating division. Correct the result if necessary.
+  Label done;
+  Register remainder = scratch0();
+  if (kArchVariant != kMips64r6) {
+    __ mfhi(remainder);
+  } else {
+    __ dmod(remainder, dividend, divisor);
+  }
+  __ Branch(&done, eq, remainder, Operand(zero_reg), USE_DELAY_SLOT);
+  __ Xor(remainder, remainder, Operand(divisor));
+  __ Branch(&done, ge, remainder, Operand(zero_reg));
+  __ Dsubu(result, result, Operand(1));
+  __ bind(&done);
+}
+
+
+void LCodeGen::DoMulI(LMulI* instr) {
+  Register scratch = scratch0();
+  Register result = ToRegister(instr->result());
+  // Note that result may alias left.
+  Register left = ToRegister(instr->left());
+  LOperand* right_op = instr->right();
+
+  bool bailout_on_minus_zero =
+      instr->hydrogen()->CheckFlag(HValue::kBailoutOnMinusZero);
+  bool overflow = instr->hydrogen()->CheckFlag(HValue::kCanOverflow);
+
+  if (right_op->IsConstantOperand()) {
+    int32_t constant = ToInteger32(LConstantOperand::cast(right_op));
+
+    if (bailout_on_minus_zero && (constant < 0)) {
+      // The case of a zero constant is handled separately below.
+      // If the constant is negative and left is zero, the result should
+      // be -0.
+      DeoptimizeIf(eq, instr->environment(), left, Operand(zero_reg));
+    }
+
+    switch (constant) {
+      case -1:
+        if (overflow) {
+          __ SubuAndCheckForOverflow(result, zero_reg, left, scratch);
+          DeoptimizeIf(gt, instr->environment(), scratch, Operand(kMaxInt));
+        } else {
+          __ Dsubu(result, zero_reg, left);
+        }
+        break;
+      case 0:
+        if (bailout_on_minus_zero) {
+          // If left is strictly negative and the constant is zero, the
+          // result is -0. Deoptimize if required, otherwise return 0.
+          DeoptimizeIf(lt, instr->environment(), left, Operand(zero_reg));
+        }
+        __ mov(result, zero_reg);
+        break;
+      case 1:
+        // Nothing to do.
+        __ Move(result, left);
+        break;
+      default:
+        // Multiplying by powers of two and powers of two plus or minus
+        // one can be done faster with shifted operands.
+        // For other constants we emit standard code.
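+        // For example, x * 8 becomes x << 3, x * 9 becomes (x << 3) + x and
+        // x * 7 becomes (x << 3) - x; for a negative constant the shifted
+        // result is negated afterwards.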
+ int32_t mask = constant >> 31; + uint32_t constant_abs = (constant + mask) ^ mask; + + if (IsPowerOf2(constant_abs)) { + int32_t shift = WhichPowerOf2(constant_abs); + __ dsll(result, left, shift); + // Correct the sign of the result if the constant is negative. + if (constant < 0) __ Dsubu(result, zero_reg, result); + } else if (IsPowerOf2(constant_abs - 1)) { + int32_t shift = WhichPowerOf2(constant_abs - 1); + __ dsll(scratch, left, shift); + __ Daddu(result, scratch, left); + // Correct the sign of the result if the constant is negative. + if (constant < 0) __ Dsubu(result, zero_reg, result); + } else if (IsPowerOf2(constant_abs + 1)) { + int32_t shift = WhichPowerOf2(constant_abs + 1); + __ dsll(scratch, left, shift); + __ Dsubu(result, scratch, left); + // Correct the sign of the result if the constant is negative. + if (constant < 0) __ Dsubu(result, zero_reg, result); + } else { + // Generate standard code. + __ li(at, constant); + __ Dmul(result, left, at); + } + } + + } else { + DCHECK(right_op->IsRegister()); + Register right = ToRegister(right_op); + + if (overflow) { + // hi:lo = left * right. + if (instr->hydrogen()->representation().IsSmi()) { + __ Dmulh(result, left, right); + } else { + __ Dmul(result, left, right); + } + __ dsra32(scratch, result, 0); + __ sra(at, result, 31); + if (instr->hydrogen()->representation().IsSmi()) { + __ SmiTag(result); + } + DeoptimizeIf(ne, instr->environment(), scratch, Operand(at)); + } else { + if (instr->hydrogen()->representation().IsSmi()) { + __ SmiUntag(result, left); + __ Dmul(result, result, right); + } else { + __ Dmul(result, left, right); + } + } + + if (bailout_on_minus_zero) { + Label done; + __ Xor(at, left, right); + __ Branch(&done, ge, at, Operand(zero_reg)); + // Bail out if the result is minus zero. + DeoptimizeIf(eq, + instr->environment(), + result, + Operand(zero_reg)); + __ bind(&done); + } + } +} + + +void LCodeGen::DoBitI(LBitI* instr) { + LOperand* left_op = instr->left(); + LOperand* right_op = instr->right(); + DCHECK(left_op->IsRegister()); + Register left = ToRegister(left_op); + Register result = ToRegister(instr->result()); + Operand right(no_reg); + + if (right_op->IsStackSlot()) { + right = Operand(EmitLoadRegister(right_op, at)); + } else { + DCHECK(right_op->IsRegister() || right_op->IsConstantOperand()); + right = ToOperand(right_op); + } + + switch (instr->op()) { + case Token::BIT_AND: + __ And(result, left, right); + break; + case Token::BIT_OR: + __ Or(result, left, right); + break; + case Token::BIT_XOR: + if (right_op->IsConstantOperand() && right.immediate() == int32_t(~0)) { + __ Nor(result, zero_reg, left); + } else { + __ Xor(result, left, right); + } + break; + default: + UNREACHABLE(); + break; + } +} + + +void LCodeGen::DoShiftI(LShiftI* instr) { + // Both 'left' and 'right' are "used at start" (see LCodeGen::DoShift), so + // result may alias either of them. + LOperand* right_op = instr->right(); + Register left = ToRegister(instr->left()); + Register result = ToRegister(instr->result()); + + if (right_op->IsRegister()) { + // No need to mask the right operand on MIPS, it is built into the variable + // shift instructions. + switch (instr->op()) { + case Token::ROR: + __ Ror(result, left, Operand(ToRegister(right_op))); + break; + case Token::SAR: + __ srav(result, left, ToRegister(right_op)); + break; + case Token::SHR: + __ srlv(result, left, ToRegister(right_op)); + if (instr->can_deopt()) { + // TODO(yy): (-1) >>> 0. anything else? 
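+          // A logical right shift can leave bit 31 set: in JS, (-1) >>> 0
+          // evaluates to 4294967295, which is not representable as an int32,
+          // so deoptimize when the result reads as negative or above kMaxInt.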
+ DeoptimizeIf(lt, instr->environment(), result, Operand(zero_reg)); + DeoptimizeIf(gt, instr->environment(), result, Operand(kMaxInt)); + } + break; + case Token::SHL: + __ sllv(result, left, ToRegister(right_op)); + break; + default: + UNREACHABLE(); + break; + } + } else { + // Mask the right_op operand. + int value = ToInteger32(LConstantOperand::cast(right_op)); + uint8_t shift_count = static_cast<uint8_t>(value & 0x1F); + switch (instr->op()) { + case Token::ROR: + if (shift_count != 0) { + __ Ror(result, left, Operand(shift_count)); + } else { + __ Move(result, left); + } + break; + case Token::SAR: + if (shift_count != 0) { + __ sra(result, left, shift_count); + } else { + __ Move(result, left); + } + break; + case Token::SHR: + if (shift_count != 0) { + __ srl(result, left, shift_count); + } else { + if (instr->can_deopt()) { + __ And(at, left, Operand(0x80000000)); + DeoptimizeIf(ne, instr->environment(), at, Operand(zero_reg)); + } + __ Move(result, left); + } + break; + case Token::SHL: + if (shift_count != 0) { + if (instr->hydrogen_value()->representation().IsSmi()) { + __ dsll(result, left, shift_count); + } else { + __ sll(result, left, shift_count); + } + } else { + __ Move(result, left); + } + break; + default: + UNREACHABLE(); + break; + } + } +} + + +void LCodeGen::DoSubI(LSubI* instr) { + LOperand* left = instr->left(); + LOperand* right = instr->right(); + LOperand* result = instr->result(); + bool can_overflow = instr->hydrogen()->CheckFlag(HValue::kCanOverflow); + + if (!can_overflow) { + if (right->IsStackSlot()) { + Register right_reg = EmitLoadRegister(right, at); + __ Dsubu(ToRegister(result), ToRegister(left), Operand(right_reg)); + } else { + DCHECK(right->IsRegister() || right->IsConstantOperand()); + __ Dsubu(ToRegister(result), ToRegister(left), ToOperand(right)); + } + } else { // can_overflow. + Register overflow = scratch0(); + Register scratch = scratch1(); + if (right->IsStackSlot() || right->IsConstantOperand()) { + Register right_reg = EmitLoadRegister(right, scratch); + __ SubuAndCheckForOverflow(ToRegister(result), + ToRegister(left), + right_reg, + overflow); // Reg at also used as scratch. + } else { + DCHECK(right->IsRegister()); + // Due to overflow check macros not supporting constant operands, + // handling the IsConstantOperand case was moved to prev if clause. + __ SubuAndCheckForOverflow(ToRegister(result), + ToRegister(left), + ToRegister(right), + overflow); // Reg at also used as scratch. 
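+      // SubuAndCheckForOverflow leaves a negative value in 'overflow' exactly
+      // when the subtraction overflowed; the 'lt' deoptimization check after
+      // this block relies on that convention.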
+ } + DeoptimizeIf(lt, instr->environment(), overflow, Operand(zero_reg)); + if (!instr->hydrogen()->representation().IsSmi()) { + DeoptimizeIf(gt, instr->environment(), + ToRegister(result), Operand(kMaxInt)); + DeoptimizeIf(lt, instr->environment(), + ToRegister(result), Operand(kMinInt)); + } + } +} + + +void LCodeGen::DoConstantI(LConstantI* instr) { + __ li(ToRegister(instr->result()), Operand(instr->value())); +} + + +void LCodeGen::DoConstantS(LConstantS* instr) { + __ li(ToRegister(instr->result()), Operand(instr->value())); +} + + +void LCodeGen::DoConstantD(LConstantD* instr) { + DCHECK(instr->result()->IsDoubleRegister()); + DoubleRegister result = ToDoubleRegister(instr->result()); + double v = instr->value(); + __ Move(result, v); +} + + +void LCodeGen::DoConstantE(LConstantE* instr) { + __ li(ToRegister(instr->result()), Operand(instr->value())); +} + + +void LCodeGen::DoConstantT(LConstantT* instr) { + Handle<Object> object = instr->value(isolate()); + AllowDeferredHandleDereference smi_check; + __ li(ToRegister(instr->result()), object); +} + + +void LCodeGen::DoMapEnumLength(LMapEnumLength* instr) { + Register result = ToRegister(instr->result()); + Register map = ToRegister(instr->value()); + __ EnumLength(result, map); +} + + +void LCodeGen::DoDateField(LDateField* instr) { + Register object = ToRegister(instr->date()); + Register result = ToRegister(instr->result()); + Register scratch = ToRegister(instr->temp()); + Smi* index = instr->index(); + Label runtime, done; + DCHECK(object.is(a0)); + DCHECK(result.is(v0)); + DCHECK(!scratch.is(scratch0())); + DCHECK(!scratch.is(object)); + + __ SmiTst(object, at); + DeoptimizeIf(eq, instr->environment(), at, Operand(zero_reg)); + __ GetObjectType(object, scratch, scratch); + DeoptimizeIf(ne, instr->environment(), scratch, Operand(JS_DATE_TYPE)); + + if (index->value() == 0) { + __ ld(result, FieldMemOperand(object, JSDate::kValueOffset)); + } else { + if (index->value() < JSDate::kFirstUncachedField) { + ExternalReference stamp = ExternalReference::date_cache_stamp(isolate()); + __ li(scratch, Operand(stamp)); + __ ld(scratch, MemOperand(scratch)); + __ ld(scratch0(), FieldMemOperand(object, JSDate::kCacheStampOffset)); + __ Branch(&runtime, ne, scratch, Operand(scratch0())); + __ ld(result, FieldMemOperand(object, JSDate::kValueOffset + + kPointerSize * index->value())); + __ jmp(&done); + } + __ bind(&runtime); + __ PrepareCallCFunction(2, scratch); + __ li(a1, Operand(index)); + __ CallCFunction(ExternalReference::get_date_field_function(isolate()), 2); + __ bind(&done); + } +} + + +MemOperand LCodeGen::BuildSeqStringOperand(Register string, + LOperand* index, + String::Encoding encoding) { + if (index->IsConstantOperand()) { + int offset = ToInteger32(LConstantOperand::cast(index)); + if (encoding == String::TWO_BYTE_ENCODING) { + offset *= kUC16Size; + } + STATIC_ASSERT(kCharSize == 1); + return FieldMemOperand(string, SeqString::kHeaderSize + offset); + } + Register scratch = scratch0(); + DCHECK(!scratch.is(string)); + DCHECK(!scratch.is(ToRegister(index))); + if (encoding == String::ONE_BYTE_ENCODING) { + __ Daddu(scratch, string, ToRegister(index)); + } else { + STATIC_ASSERT(kUC16Size == 2); + __ dsll(scratch, ToRegister(index), 1); + __ Daddu(scratch, string, scratch); + } + return FieldMemOperand(scratch, SeqString::kHeaderSize); +} + + +void LCodeGen::DoSeqStringGetChar(LSeqStringGetChar* instr) { + String::Encoding encoding = instr->hydrogen()->encoding(); + Register string = ToRegister(instr->string()); + 
Register result = ToRegister(instr->result());
+
+  if (FLAG_debug_code) {
+    Register scratch = scratch0();
+    __ ld(scratch, FieldMemOperand(string, HeapObject::kMapOffset));
+    __ lbu(scratch, FieldMemOperand(scratch, Map::kInstanceTypeOffset));
+
+    __ And(scratch, scratch,
+           Operand(kStringRepresentationMask | kStringEncodingMask));
+    static const uint32_t one_byte_seq_type = kSeqStringTag | kOneByteStringTag;
+    static const uint32_t two_byte_seq_type = kSeqStringTag | kTwoByteStringTag;
+    __ Dsubu(at, scratch, Operand(encoding == String::ONE_BYTE_ENCODING
+                                  ? one_byte_seq_type : two_byte_seq_type));
+    __ Check(eq, kUnexpectedStringType, at, Operand(zero_reg));
+  }
+
+  MemOperand operand = BuildSeqStringOperand(string, instr->index(), encoding);
+  if (encoding == String::ONE_BYTE_ENCODING) {
+    __ lbu(result, operand);
+  } else {
+    __ lhu(result, operand);
+  }
+}
+
+
+void LCodeGen::DoSeqStringSetChar(LSeqStringSetChar* instr) {
+  String::Encoding encoding = instr->hydrogen()->encoding();
+  Register string = ToRegister(instr->string());
+  Register value = ToRegister(instr->value());
+
+  if (FLAG_debug_code) {
+    Register scratch = scratch0();
+    Register index = ToRegister(instr->index());
+    static const uint32_t one_byte_seq_type = kSeqStringTag | kOneByteStringTag;
+    static const uint32_t two_byte_seq_type = kSeqStringTag | kTwoByteStringTag;
+    int encoding_mask =
+        instr->hydrogen()->encoding() == String::ONE_BYTE_ENCODING
+        ? one_byte_seq_type : two_byte_seq_type;
+    __ EmitSeqStringSetCharCheck(string, index, value, scratch, encoding_mask);
+  }
+
+  MemOperand operand = BuildSeqStringOperand(string, instr->index(), encoding);
+  if (encoding == String::ONE_BYTE_ENCODING) {
+    __ sb(value, operand);
+  } else {
+    __ sh(value, operand);
+  }
+}
+
+
+void LCodeGen::DoAddI(LAddI* instr) {
+  LOperand* left = instr->left();
+  LOperand* right = instr->right();
+  LOperand* result = instr->result();
+  bool can_overflow = instr->hydrogen()->CheckFlag(HValue::kCanOverflow);
+
+  if (!can_overflow) {
+    if (right->IsStackSlot()) {
+      Register right_reg = EmitLoadRegister(right, at);
+      __ Daddu(ToRegister(result), ToRegister(left), Operand(right_reg));
+    } else {
+      DCHECK(right->IsRegister() || right->IsConstantOperand());
+      __ Daddu(ToRegister(result), ToRegister(left), ToOperand(right));
+    }
+  } else {  // can_overflow.
+    Register overflow = scratch0();
+    Register scratch = scratch1();
+    if (right->IsStackSlot() ||
+        right->IsConstantOperand()) {
+      Register right_reg = EmitLoadRegister(right, scratch);
+      __ AdduAndCheckForOverflow(ToRegister(result),
+                                 ToRegister(left),
+                                 right_reg,
+                                 overflow);  // Reg at also used as scratch.
+    } else {
+      DCHECK(right->IsRegister());
+      // Due to overflow check macros not supporting constant operands,
+      // handling the IsConstantOperand case was moved to the previous clause.
+      __ AdduAndCheckForOverflow(ToRegister(result),
+                                 ToRegister(left),
+                                 ToRegister(right),
+                                 overflow);  // Reg at also used as scratch.
+    }
+    DeoptimizeIf(lt, instr->environment(), overflow, Operand(zero_reg));
+    // If the result is not a smi, it must fit in an int32.
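+    // For example, kMaxInt + 1 yields 2147483648, which fails the
+    // 'gt kMaxInt' check below and triggers a deoptimization.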
+ if (!instr->hydrogen()->representation().IsSmi()) { + DeoptimizeIf(gt, instr->environment(), + ToRegister(result), Operand(kMaxInt)); + DeoptimizeIf(lt, instr->environment(), + ToRegister(result), Operand(kMinInt)); + } + } +} + + +void LCodeGen::DoMathMinMax(LMathMinMax* instr) { + LOperand* left = instr->left(); + LOperand* right = instr->right(); + HMathMinMax::Operation operation = instr->hydrogen()->operation(); + Condition condition = (operation == HMathMinMax::kMathMin) ? le : ge; + if (instr->hydrogen()->representation().IsSmiOrInteger32()) { + Register left_reg = ToRegister(left); + Register right_reg = EmitLoadRegister(right, scratch0()); + Register result_reg = ToRegister(instr->result()); + Label return_right, done; + Register scratch = scratch1(); + __ Slt(scratch, left_reg, Operand(right_reg)); + if (condition == ge) { + __ Movz(result_reg, left_reg, scratch); + __ Movn(result_reg, right_reg, scratch); + } else { + DCHECK(condition == le); + __ Movn(result_reg, left_reg, scratch); + __ Movz(result_reg, right_reg, scratch); + } + } else { + DCHECK(instr->hydrogen()->representation().IsDouble()); + FPURegister left_reg = ToDoubleRegister(left); + FPURegister right_reg = ToDoubleRegister(right); + FPURegister result_reg = ToDoubleRegister(instr->result()); + Label check_nan_left, check_zero, return_left, return_right, done; + __ BranchF(&check_zero, &check_nan_left, eq, left_reg, right_reg); + __ BranchF(&return_left, NULL, condition, left_reg, right_reg); + __ Branch(&return_right); + + __ bind(&check_zero); + // left == right != 0. + __ BranchF(&return_left, NULL, ne, left_reg, kDoubleRegZero); + // At this point, both left and right are either 0 or -0. + if (operation == HMathMinMax::kMathMin) { + __ neg_d(left_reg, left_reg); + __ sub_d(result_reg, left_reg, right_reg); + __ neg_d(result_reg, result_reg); + } else { + __ add_d(result_reg, left_reg, right_reg); + } + __ Branch(&done); + + __ bind(&check_nan_left); + // left == NaN. + __ BranchF(NULL, &return_left, eq, left_reg, left_reg); + __ bind(&return_right); + if (!right_reg.is(result_reg)) { + __ mov_d(result_reg, right_reg); + } + __ Branch(&done); + + __ bind(&return_left); + if (!left_reg.is(result_reg)) { + __ mov_d(result_reg, left_reg); + } + __ bind(&done); + } +} + + +void LCodeGen::DoArithmeticD(LArithmeticD* instr) { + DoubleRegister left = ToDoubleRegister(instr->left()); + DoubleRegister right = ToDoubleRegister(instr->right()); + DoubleRegister result = ToDoubleRegister(instr->result()); + switch (instr->op()) { + case Token::ADD: + __ add_d(result, left, right); + break; + case Token::SUB: + __ sub_d(result, left, right); + break; + case Token::MUL: + __ mul_d(result, left, right); + break; + case Token::DIV: + __ div_d(result, left, right); + break; + case Token::MOD: { + // Save a0-a3 on the stack. + RegList saved_regs = a0.bit() | a1.bit() | a2.bit() | a3.bit(); + __ MultiPush(saved_regs); + + __ PrepareCallCFunction(0, 2, scratch0()); + __ MovToFloatParameters(left, right); + __ CallCFunction( + ExternalReference::mod_two_doubles_operation(isolate()), + 0, 2); + // Move the result in the double result register. + __ MovFromFloatResult(result); + + // Restore saved register. 
+ __ MultiPop(saved_regs); + break; + } + default: + UNREACHABLE(); + break; + } +} + + +void LCodeGen::DoArithmeticT(LArithmeticT* instr) { + DCHECK(ToRegister(instr->context()).is(cp)); + DCHECK(ToRegister(instr->left()).is(a1)); + DCHECK(ToRegister(instr->right()).is(a0)); + DCHECK(ToRegister(instr->result()).is(v0)); + + BinaryOpICStub stub(isolate(), instr->op(), NO_OVERWRITE); + CallCode(stub.GetCode(), RelocInfo::CODE_TARGET, instr); + // Other arch use a nop here, to signal that there is no inlined + // patchable code. Mips does not need the nop, since our marker + // instruction (andi zero_reg) will never be used in normal code. +} + + +template<class InstrType> +void LCodeGen::EmitBranch(InstrType instr, + Condition condition, + Register src1, + const Operand& src2) { + int left_block = instr->TrueDestination(chunk_); + int right_block = instr->FalseDestination(chunk_); + + int next_block = GetNextEmittedBlock(); + if (right_block == left_block || condition == al) { + EmitGoto(left_block); + } else if (left_block == next_block) { + __ Branch(chunk_->GetAssemblyLabel(right_block), + NegateCondition(condition), src1, src2); + } else if (right_block == next_block) { + __ Branch(chunk_->GetAssemblyLabel(left_block), condition, src1, src2); + } else { + __ Branch(chunk_->GetAssemblyLabel(left_block), condition, src1, src2); + __ Branch(chunk_->GetAssemblyLabel(right_block)); + } +} + + +template<class InstrType> +void LCodeGen::EmitBranchF(InstrType instr, + Condition condition, + FPURegister src1, + FPURegister src2) { + int right_block = instr->FalseDestination(chunk_); + int left_block = instr->TrueDestination(chunk_); + + int next_block = GetNextEmittedBlock(); + if (right_block == left_block) { + EmitGoto(left_block); + } else if (left_block == next_block) { + __ BranchF(chunk_->GetAssemblyLabel(right_block), NULL, + NegateCondition(condition), src1, src2); + } else if (right_block == next_block) { + __ BranchF(chunk_->GetAssemblyLabel(left_block), NULL, + condition, src1, src2); + } else { + __ BranchF(chunk_->GetAssemblyLabel(left_block), NULL, + condition, src1, src2); + __ Branch(chunk_->GetAssemblyLabel(right_block)); + } +} + + +template<class InstrType> +void LCodeGen::EmitFalseBranch(InstrType instr, + Condition condition, + Register src1, + const Operand& src2) { + int false_block = instr->FalseDestination(chunk_); + __ Branch(chunk_->GetAssemblyLabel(false_block), condition, src1, src2); +} + + +template<class InstrType> +void LCodeGen::EmitFalseBranchF(InstrType instr, + Condition condition, + FPURegister src1, + FPURegister src2) { + int false_block = instr->FalseDestination(chunk_); + __ BranchF(chunk_->GetAssemblyLabel(false_block), NULL, + condition, src1, src2); +} + + +void LCodeGen::DoDebugBreak(LDebugBreak* instr) { + __ stop("LDebugBreak"); +} + + +void LCodeGen::DoBranch(LBranch* instr) { + Representation r = instr->hydrogen()->value()->representation(); + if (r.IsInteger32() || r.IsSmi()) { + DCHECK(!info()->IsStub()); + Register reg = ToRegister(instr->value()); + EmitBranch(instr, ne, reg, Operand(zero_reg)); + } else if (r.IsDouble()) { + DCHECK(!info()->IsStub()); + DoubleRegister reg = ToDoubleRegister(instr->value()); + // Test the double value. Zero and NaN are false. 
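+    // "nue" is the MIPS FPU condition "not (unordered or equal)": the branch
+    // to the true block is taken only when the value is ordered (not NaN)
+    // and not equal to zero, which matches JS truthiness for doubles.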
+    EmitBranchF(instr, nue, reg, kDoubleRegZero);
+  } else {
+    DCHECK(r.IsTagged());
+    Register reg = ToRegister(instr->value());
+    HType type = instr->hydrogen()->value()->type();
+    if (type.IsBoolean()) {
+      DCHECK(!info()->IsStub());
+      __ LoadRoot(at, Heap::kTrueValueRootIndex);
+      EmitBranch(instr, eq, reg, Operand(at));
+    } else if (type.IsSmi()) {
+      DCHECK(!info()->IsStub());
+      EmitBranch(instr, ne, reg, Operand(zero_reg));
+    } else if (type.IsJSArray()) {
+      DCHECK(!info()->IsStub());
+      EmitBranch(instr, al, zero_reg, Operand(zero_reg));
+    } else if (type.IsHeapNumber()) {
+      DCHECK(!info()->IsStub());
+      DoubleRegister dbl_scratch = double_scratch0();
+      __ ldc1(dbl_scratch, FieldMemOperand(reg, HeapNumber::kValueOffset));
+      // Test the double value. Zero and NaN are false.
+      EmitBranchF(instr, nue, dbl_scratch, kDoubleRegZero);
+    } else if (type.IsString()) {
+      DCHECK(!info()->IsStub());
+      __ ld(at, FieldMemOperand(reg, String::kLengthOffset));
+      EmitBranch(instr, ne, at, Operand(zero_reg));
+    } else {
+      ToBooleanStub::Types expected = instr->hydrogen()->expected_input_types();
+      // Avoid deopts in the case where we've never executed this path before.
+      if (expected.IsEmpty()) expected = ToBooleanStub::Types::Generic();
+
+      if (expected.Contains(ToBooleanStub::UNDEFINED)) {
+        // undefined -> false.
+        __ LoadRoot(at, Heap::kUndefinedValueRootIndex);
+        __ Branch(instr->FalseLabel(chunk_), eq, reg, Operand(at));
+      }
+      if (expected.Contains(ToBooleanStub::BOOLEAN)) {
+        // Boolean -> its value.
+        __ LoadRoot(at, Heap::kTrueValueRootIndex);
+        __ Branch(instr->TrueLabel(chunk_), eq, reg, Operand(at));
+        __ LoadRoot(at, Heap::kFalseValueRootIndex);
+        __ Branch(instr->FalseLabel(chunk_), eq, reg, Operand(at));
+      }
+      if (expected.Contains(ToBooleanStub::NULL_TYPE)) {
+        // 'null' -> false.
+        __ LoadRoot(at, Heap::kNullValueRootIndex);
+        __ Branch(instr->FalseLabel(chunk_), eq, reg, Operand(at));
+      }
+
+      if (expected.Contains(ToBooleanStub::SMI)) {
+        // Smis: 0 -> false, all other -> true.
+        __ Branch(instr->FalseLabel(chunk_), eq, reg, Operand(zero_reg));
+        __ JumpIfSmi(reg, instr->TrueLabel(chunk_));
+      } else if (expected.NeedsMap()) {
+        // If we need a map later and have a Smi -> deopt.
+        __ SmiTst(reg, at);
+        DeoptimizeIf(eq, instr->environment(), at, Operand(zero_reg));
+      }
+
+      const Register map = scratch0();
+      if (expected.NeedsMap()) {
+        __ ld(map, FieldMemOperand(reg, HeapObject::kMapOffset));
+        if (expected.CanBeUndetectable()) {
+          // Undetectable -> false.
+          __ lbu(at, FieldMemOperand(map, Map::kBitFieldOffset));
+          __ And(at, at, Operand(1 << Map::kIsUndetectable));
+          __ Branch(instr->FalseLabel(chunk_), ne, at, Operand(zero_reg));
+        }
+      }
+
+      if (expected.Contains(ToBooleanStub::SPEC_OBJECT)) {
+        // spec object -> true.
+        __ lbu(at, FieldMemOperand(map, Map::kInstanceTypeOffset));
+        __ Branch(instr->TrueLabel(chunk_),
+                  ge, at, Operand(FIRST_SPEC_OBJECT_TYPE));
+      }
+
+      if (expected.Contains(ToBooleanStub::STRING)) {
+        // String value -> false iff empty.
+        Label not_string;
+        __ lbu(at, FieldMemOperand(map, Map::kInstanceTypeOffset));
+        __ Branch(&not_string, ge, at, Operand(FIRST_NONSTRING_TYPE));
+        __ ld(at, FieldMemOperand(reg, String::kLengthOffset));
+        __ Branch(instr->TrueLabel(chunk_), ne, at, Operand(zero_reg));
+        __ Branch(instr->FalseLabel(chunk_));
+        __ bind(&not_string);
+      }
+
+      if (expected.Contains(ToBooleanStub::SYMBOL)) {
+        // Symbol value -> true.
+        const Register scratch = scratch1();
+        __ lbu(scratch, FieldMemOperand(map, Map::kInstanceTypeOffset));
+        __ Branch(instr->TrueLabel(chunk_), eq, scratch, Operand(SYMBOL_TYPE));
+      }
+
+      if (expected.Contains(ToBooleanStub::HEAP_NUMBER)) {
+        // heap number -> false iff +0, -0, or NaN.
+        DoubleRegister dbl_scratch = double_scratch0();
+        Label not_heap_number;
+        __ LoadRoot(at, Heap::kHeapNumberMapRootIndex);
+        __ Branch(&not_heap_number, ne, map, Operand(at));
+        __ ldc1(dbl_scratch, FieldMemOperand(reg, HeapNumber::kValueOffset));
+        __ BranchF(instr->TrueLabel(chunk_), instr->FalseLabel(chunk_),
+                   ne, dbl_scratch, kDoubleRegZero);
+        // Falls through if dbl_scratch == 0.
+        __ Branch(instr->FalseLabel(chunk_));
+        __ bind(&not_heap_number);
+      }
+
+      if (!expected.IsGeneric()) {
+        // We've seen something for the first time -> deopt.
+        // This can only happen if we are not generic already.
+        DeoptimizeIf(al, instr->environment(), zero_reg, Operand(zero_reg));
+      }
+    }
+  }
+}
+
+
+void LCodeGen::EmitGoto(int block) {
+  if (!IsNextEmittedBlock(block)) {
+    __ jmp(chunk_->GetAssemblyLabel(LookupDestination(block)));
+  }
+}
+
+
+void LCodeGen::DoGoto(LGoto* instr) {
+  EmitGoto(instr->block_id());
+}
+
+
+Condition LCodeGen::TokenToCondition(Token::Value op, bool is_unsigned) {
+  Condition cond = kNoCondition;
+  switch (op) {
+    case Token::EQ:
+    case Token::EQ_STRICT:
+      cond = eq;
+      break;
+    case Token::NE:
+    case Token::NE_STRICT:
+      cond = ne;
+      break;
+    case Token::LT:
+      cond = is_unsigned ? lo : lt;
+      break;
+    case Token::GT:
+      cond = is_unsigned ? hi : gt;
+      break;
+    case Token::LTE:
+      cond = is_unsigned ? ls : le;
+      break;
+    case Token::GTE:
+      cond = is_unsigned ? hs : ge;
+      break;
+    case Token::IN:
+    case Token::INSTANCEOF:
+    default:
+      UNREACHABLE();
+  }
+  return cond;
+}
+
+
+void LCodeGen::DoCompareNumericAndBranch(LCompareNumericAndBranch* instr) {
+  LOperand* left = instr->left();
+  LOperand* right = instr->right();
+  bool is_unsigned =
+      instr->hydrogen()->left()->CheckFlag(HInstruction::kUint32) ||
+      instr->hydrogen()->right()->CheckFlag(HInstruction::kUint32);
+  Condition cond = TokenToCondition(instr->op(), is_unsigned);
+
+  if (left->IsConstantOperand() && right->IsConstantOperand()) {
+    // We can statically evaluate the comparison.
+    double left_val = ToDouble(LConstantOperand::cast(left));
+    double right_val = ToDouble(LConstantOperand::cast(right));
+    int next_block = EvalComparison(instr->op(), left_val, right_val) ?
+        instr->TrueDestination(chunk_) : instr->FalseDestination(chunk_);
+    EmitGoto(next_block);
+  } else {
+    if (instr->is_double()) {
+      // Compare left and right as doubles and load the
+      // resulting flags into the normal status register.
+      FPURegister left_reg = ToDoubleRegister(left);
+      FPURegister right_reg = ToDoubleRegister(right);
+
+      // If a NaN is involved, i.e. the result is unordered,
+      // jump to false block label.
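+      // Per ECMAScript, any numeric comparison involving NaN is false, so
+      // the NaN label of the BranchF below is the false block.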
+ __ BranchF(NULL, instr->FalseLabel(chunk_), eq, + left_reg, right_reg); + + EmitBranchF(instr, cond, left_reg, right_reg); + } else { + Register cmp_left; + Operand cmp_right = Operand((int64_t)0); + if (right->IsConstantOperand()) { + int32_t value = ToInteger32(LConstantOperand::cast(right)); + if (instr->hydrogen_value()->representation().IsSmi()) { + cmp_left = ToRegister(left); + cmp_right = Operand(Smi::FromInt(value)); + } else { + cmp_left = ToRegister(left); + cmp_right = Operand(value); + } + } else if (left->IsConstantOperand()) { + int32_t value = ToInteger32(LConstantOperand::cast(left)); + if (instr->hydrogen_value()->representation().IsSmi()) { + cmp_left = ToRegister(right); + cmp_right = Operand(Smi::FromInt(value)); + } else { + cmp_left = ToRegister(right); + cmp_right = Operand(value); + } + // We commuted the operands, so commute the condition. + cond = CommuteCondition(cond); + } else { + cmp_left = ToRegister(left); + cmp_right = Operand(ToRegister(right)); + } + + EmitBranch(instr, cond, cmp_left, cmp_right); + } + } +} + + +void LCodeGen::DoCmpObjectEqAndBranch(LCmpObjectEqAndBranch* instr) { + Register left = ToRegister(instr->left()); + Register right = ToRegister(instr->right()); + + EmitBranch(instr, eq, left, Operand(right)); +} + + +void LCodeGen::DoCmpHoleAndBranch(LCmpHoleAndBranch* instr) { + if (instr->hydrogen()->representation().IsTagged()) { + Register input_reg = ToRegister(instr->object()); + __ li(at, Operand(factory()->the_hole_value())); + EmitBranch(instr, eq, input_reg, Operand(at)); + return; + } + + DoubleRegister input_reg = ToDoubleRegister(instr->object()); + EmitFalseBranchF(instr, eq, input_reg, input_reg); + + Register scratch = scratch0(); + __ FmoveHigh(scratch, input_reg); + EmitBranch(instr, eq, scratch, Operand(kHoleNanUpper32)); +} + + +void LCodeGen::DoCompareMinusZeroAndBranch(LCompareMinusZeroAndBranch* instr) { + Representation rep = instr->hydrogen()->value()->representation(); + DCHECK(!rep.IsInteger32()); + Register scratch = ToRegister(instr->temp()); + + if (rep.IsDouble()) { + DoubleRegister value = ToDoubleRegister(instr->value()); + EmitFalseBranchF(instr, ne, value, kDoubleRegZero); + __ FmoveHigh(scratch, value); + // Only use low 32-bits of value. + __ dsll32(scratch, scratch, 0); + __ dsrl32(scratch, scratch, 0); + __ li(at, 0x80000000); + } else { + Register value = ToRegister(instr->value()); + __ CheckMap(value, + scratch, + Heap::kHeapNumberMapRootIndex, + instr->FalseLabel(chunk()), + DO_SMI_CHECK); + __ lwu(scratch, FieldMemOperand(value, HeapNumber::kExponentOffset)); + EmitFalseBranch(instr, ne, scratch, Operand(0x80000000)); + __ lwu(scratch, FieldMemOperand(value, HeapNumber::kMantissaOffset)); + __ mov(at, zero_reg); + } + EmitBranch(instr, eq, scratch, Operand(at)); +} + + +Condition LCodeGen::EmitIsObject(Register input, + Register temp1, + Register temp2, + Label* is_not_object, + Label* is_object) { + __ JumpIfSmi(input, is_not_object); + + __ LoadRoot(temp2, Heap::kNullValueRootIndex); + __ Branch(is_object, eq, input, Operand(temp2)); + + // Load map. + __ ld(temp1, FieldMemOperand(input, HeapObject::kMapOffset)); + // Undetectable objects behave like undefined. + __ lbu(temp2, FieldMemOperand(temp1, Map::kBitFieldOffset)); + __ And(temp2, temp2, Operand(1 << Map::kIsUndetectable)); + __ Branch(is_not_object, ne, temp2, Operand(zero_reg)); + + // Load instance type and check that it is in object type range. 
+ __ lbu(temp2, FieldMemOperand(temp1, Map::kInstanceTypeOffset)); + __ Branch(is_not_object, + lt, temp2, Operand(FIRST_NONCALLABLE_SPEC_OBJECT_TYPE)); + + return le; +} + + +void LCodeGen::DoIsObjectAndBranch(LIsObjectAndBranch* instr) { + Register reg = ToRegister(instr->value()); + Register temp1 = ToRegister(instr->temp()); + Register temp2 = scratch0(); + + Condition true_cond = + EmitIsObject(reg, temp1, temp2, + instr->FalseLabel(chunk_), instr->TrueLabel(chunk_)); + + EmitBranch(instr, true_cond, temp2, + Operand(LAST_NONCALLABLE_SPEC_OBJECT_TYPE)); +} + + +Condition LCodeGen::EmitIsString(Register input, + Register temp1, + Label* is_not_string, + SmiCheck check_needed = INLINE_SMI_CHECK) { + if (check_needed == INLINE_SMI_CHECK) { + __ JumpIfSmi(input, is_not_string); + } + __ GetObjectType(input, temp1, temp1); + + return lt; +} + + +void LCodeGen::DoIsStringAndBranch(LIsStringAndBranch* instr) { + Register reg = ToRegister(instr->value()); + Register temp1 = ToRegister(instr->temp()); + + SmiCheck check_needed = + instr->hydrogen()->value()->type().IsHeapObject() + ? OMIT_SMI_CHECK : INLINE_SMI_CHECK; + Condition true_cond = + EmitIsString(reg, temp1, instr->FalseLabel(chunk_), check_needed); + + EmitBranch(instr, true_cond, temp1, + Operand(FIRST_NONSTRING_TYPE)); +} + + +void LCodeGen::DoIsSmiAndBranch(LIsSmiAndBranch* instr) { + Register input_reg = EmitLoadRegister(instr->value(), at); + __ And(at, input_reg, kSmiTagMask); + EmitBranch(instr, eq, at, Operand(zero_reg)); +} + + +void LCodeGen::DoIsUndetectableAndBranch(LIsUndetectableAndBranch* instr) { + Register input = ToRegister(instr->value()); + Register temp = ToRegister(instr->temp()); + + if (!instr->hydrogen()->value()->type().IsHeapObject()) { + __ JumpIfSmi(input, instr->FalseLabel(chunk_)); + } + __ ld(temp, FieldMemOperand(input, HeapObject::kMapOffset)); + __ lbu(temp, FieldMemOperand(temp, Map::kBitFieldOffset)); + __ And(at, temp, Operand(1 << Map::kIsUndetectable)); + EmitBranch(instr, ne, at, Operand(zero_reg)); +} + + +static Condition ComputeCompareCondition(Token::Value op) { + switch (op) { + case Token::EQ_STRICT: + case Token::EQ: + return eq; + case Token::LT: + return lt; + case Token::GT: + return gt; + case Token::LTE: + return le; + case Token::GTE: + return ge; + default: + UNREACHABLE(); + return kNoCondition; + } +} + + +void LCodeGen::DoStringCompareAndBranch(LStringCompareAndBranch* instr) { + DCHECK(ToRegister(instr->context()).is(cp)); + Token::Value op = instr->op(); + + Handle<Code> ic = CompareIC::GetUninitialized(isolate(), op); + CallCode(ic, RelocInfo::CODE_TARGET, instr); + + Condition condition = ComputeCompareCondition(op); + + EmitBranch(instr, condition, v0, Operand(zero_reg)); +} + + +static InstanceType TestType(HHasInstanceTypeAndBranch* instr) { + InstanceType from = instr->from(); + InstanceType to = instr->to(); + if (from == FIRST_TYPE) return to; + DCHECK(from == to || to == LAST_TYPE); + return from; +} + + +static Condition BranchCondition(HHasInstanceTypeAndBranch* instr) { + InstanceType from = instr->from(); + InstanceType to = instr->to(); + if (from == to) return eq; + if (to == LAST_TYPE) return hs; + if (from == FIRST_TYPE) return ls; + UNREACHABLE(); + return eq; +} + + +void LCodeGen::DoHasInstanceTypeAndBranch(LHasInstanceTypeAndBranch* instr) { + Register scratch = scratch0(); + Register input = ToRegister(instr->value()); + + if (!instr->hydrogen()->value()->type().IsHeapObject()) { + __ JumpIfSmi(input, instr->FalseLabel(chunk_)); + } + + __ 
GetObjectType(input, scratch, scratch); + EmitBranch(instr, + BranchCondition(instr->hydrogen()), + scratch, + Operand(TestType(instr->hydrogen()))); +} + + +void LCodeGen::DoGetCachedArrayIndex(LGetCachedArrayIndex* instr) { + Register input = ToRegister(instr->value()); + Register result = ToRegister(instr->result()); + + __ AssertString(input); + + __ lwu(result, FieldMemOperand(input, String::kHashFieldOffset)); + __ IndexFromHash(result, result); +} + + +void LCodeGen::DoHasCachedArrayIndexAndBranch( + LHasCachedArrayIndexAndBranch* instr) { + Register input = ToRegister(instr->value()); + Register scratch = scratch0(); + + __ lwu(scratch, + FieldMemOperand(input, String::kHashFieldOffset)); + __ And(at, scratch, Operand(String::kContainsCachedArrayIndexMask)); + EmitBranch(instr, eq, at, Operand(zero_reg)); +} + + +// Branches to a label or falls through with the answer in flags. Trashes +// the temp registers, but not the input. +void LCodeGen::EmitClassOfTest(Label* is_true, + Label* is_false, + Handle<String>class_name, + Register input, + Register temp, + Register temp2) { + DCHECK(!input.is(temp)); + DCHECK(!input.is(temp2)); + DCHECK(!temp.is(temp2)); + + __ JumpIfSmi(input, is_false); + + if (class_name->IsOneByteEqualTo(STATIC_ASCII_VECTOR("Function"))) { + // Assuming the following assertions, we can use the same compares to test + // for both being a function type and being in the object type range. + STATIC_ASSERT(NUM_OF_CALLABLE_SPEC_OBJECT_TYPES == 2); + STATIC_ASSERT(FIRST_NONCALLABLE_SPEC_OBJECT_TYPE == + FIRST_SPEC_OBJECT_TYPE + 1); + STATIC_ASSERT(LAST_NONCALLABLE_SPEC_OBJECT_TYPE == + LAST_SPEC_OBJECT_TYPE - 1); + STATIC_ASSERT(LAST_SPEC_OBJECT_TYPE == LAST_TYPE); + + __ GetObjectType(input, temp, temp2); + __ Branch(is_false, lt, temp2, Operand(FIRST_SPEC_OBJECT_TYPE)); + __ Branch(is_true, eq, temp2, Operand(FIRST_SPEC_OBJECT_TYPE)); + __ Branch(is_true, eq, temp2, Operand(LAST_SPEC_OBJECT_TYPE)); + } else { + // Faster code path to avoid two compares: subtract lower bound from the + // actual type and do a signed compare with the width of the type range. + __ GetObjectType(input, temp, temp2); + __ Dsubu(temp2, temp2, Operand(FIRST_NONCALLABLE_SPEC_OBJECT_TYPE)); + __ Branch(is_false, gt, temp2, Operand(LAST_NONCALLABLE_SPEC_OBJECT_TYPE - + FIRST_NONCALLABLE_SPEC_OBJECT_TYPE)); + } + + // Now we are in the FIRST-LAST_NONCALLABLE_SPEC_OBJECT_TYPE range. + // Check if the constructor in the map is a function. + __ ld(temp, FieldMemOperand(temp, Map::kConstructorOffset)); + + // Objects with a non-function constructor have class 'Object'. + __ GetObjectType(temp, temp2, temp2); + if (class_name->IsOneByteEqualTo(STATIC_ASCII_VECTOR("Object"))) { + __ Branch(is_true, ne, temp2, Operand(JS_FUNCTION_TYPE)); + } else { + __ Branch(is_false, ne, temp2, Operand(JS_FUNCTION_TYPE)); + } + + // temp now contains the constructor function. Grab the + // instance class name from there. + __ ld(temp, FieldMemOperand(temp, JSFunction::kSharedFunctionInfoOffset)); + __ ld(temp, FieldMemOperand(temp, + SharedFunctionInfo::kInstanceClassNameOffset)); + // The class name we are testing against is internalized since it's a literal. + // The name in the constructor is internalized because of the way the context + // is booted. This routine isn't expected to work for random API-created + // classes and it doesn't have to because you can't access it with natives + // syntax. Since both sides are internalized it is sufficient to use an + // identity comparison. 
+
+  // End with the address of this class_name instance in temp register.
+  // On MIPS, the caller must do the comparison with Handle<String>class_name.
+}
+
+
+void LCodeGen::DoClassOfTestAndBranch(LClassOfTestAndBranch* instr) {
+  Register input = ToRegister(instr->value());
+  Register temp = scratch0();
+  Register temp2 = ToRegister(instr->temp());
+  Handle<String> class_name = instr->hydrogen()->class_name();
+
+  EmitClassOfTest(instr->TrueLabel(chunk_), instr->FalseLabel(chunk_),
+                  class_name, input, temp, temp2);
+
+  EmitBranch(instr, eq, temp, Operand(class_name));
+}
+
+
+void LCodeGen::DoCmpMapAndBranch(LCmpMapAndBranch* instr) {
+  Register reg = ToRegister(instr->value());
+  Register temp = ToRegister(instr->temp());
+
+  __ ld(temp, FieldMemOperand(reg, HeapObject::kMapOffset));
+  EmitBranch(instr, eq, temp, Operand(instr->map()));
+}
+
+
+void LCodeGen::DoInstanceOf(LInstanceOf* instr) {
+  DCHECK(ToRegister(instr->context()).is(cp));
+  Label true_label, done;
+  DCHECK(ToRegister(instr->left()).is(a0));  // Object is in a0.
+  DCHECK(ToRegister(instr->right()).is(a1));  // Function is in a1.
+  Register result = ToRegister(instr->result());
+  DCHECK(result.is(v0));
+
+  InstanceofStub stub(isolate(), InstanceofStub::kArgsInRegisters);
+  CallCode(stub.GetCode(), RelocInfo::CODE_TARGET, instr);
+
+  __ Branch(&true_label, eq, result, Operand(zero_reg));
+  __ li(result, Operand(factory()->false_value()));
+  __ Branch(&done);
+  __ bind(&true_label);
+  __ li(result, Operand(factory()->true_value()));
+  __ bind(&done);
+}
+
+
+void LCodeGen::DoInstanceOfKnownGlobal(LInstanceOfKnownGlobal* instr) {
+  class DeferredInstanceOfKnownGlobal V8_FINAL : public LDeferredCode {
+   public:
+    DeferredInstanceOfKnownGlobal(LCodeGen* codegen,
+                                  LInstanceOfKnownGlobal* instr)
+        : LDeferredCode(codegen), instr_(instr) { }
+    virtual void Generate() V8_OVERRIDE {
+      codegen()->DoDeferredInstanceOfKnownGlobal(instr_, &map_check_);
+    }
+    virtual LInstruction* instr() V8_OVERRIDE { return instr_; }
+    Label* map_check() { return &map_check_; }
+
+   private:
+    LInstanceOfKnownGlobal* instr_;
+    Label map_check_;
+  };
+
+  DeferredInstanceOfKnownGlobal* deferred;
+  deferred = new(zone()) DeferredInstanceOfKnownGlobal(this, instr);
+
+  Label done, false_result;
+  Register object = ToRegister(instr->value());
+  Register temp = ToRegister(instr->temp());
+  Register result = ToRegister(instr->result());
+
+  DCHECK(object.is(a0));
+  DCHECK(result.is(v0));
+
+  // A Smi is not an instance of anything.
+  __ JumpIfSmi(object, &false_result);
+
+  // This is the inlined call site instanceof cache. The two occurrences of
+  // the hole value will be patched to the last map/result pair generated by
+  // the instanceof stub.
+  Label cache_miss;
+  Register map = temp;
+  __ ld(map, FieldMemOperand(object, HeapObject::kMapOffset));
+
+  Assembler::BlockTrampolinePoolScope block_trampoline_pool(masm_);
+  __ bind(deferred->map_check());  // Label for calculating code patching.
+  // We use Factory::the_hole_value() on purpose instead of loading from the
+  // root array to force relocation to be able to later patch with
+  // the cached map.
+  Handle<Cell> cell = factory()->NewCell(factory()->the_hole_value());
+  __ li(at, Operand(Handle<Object>(cell)));
+  __ ld(at, FieldMemOperand(at, PropertyCell::kValueOffset));
+  __ BranchShort(&cache_miss, ne, map, Operand(at));
+  // We use Factory::the_hole_value() on purpose instead of loading from the
+  // root array to force relocation to be able to later patch
+  // with true or false. The distance from map check has to be constant.
The distance from map check has to be constant.
+  __ li(result, Operand(factory()->the_hole_value()));
+  __ Branch(&done);
+
+  // The inlined call site cache did not match. Check null and string before
+  // calling the deferred code.
+  __ bind(&cache_miss);
+  // Null is not an instance of anything.
+  __ LoadRoot(temp, Heap::kNullValueRootIndex);
+  __ Branch(&false_result, eq, object, Operand(temp));
+
+  // String values are not instances of anything.
+  Condition cc = __ IsObjectStringType(object, temp, temp);
+  __ Branch(&false_result, cc, temp, Operand(zero_reg));
+
+  // Go to the deferred code.
+  __ Branch(deferred->entry());
+
+  __ bind(&false_result);
+  __ LoadRoot(result, Heap::kFalseValueRootIndex);
+
+  // Here result holds either true or false. The deferred code also produces
+  // a true or false object.
+  __ bind(deferred->exit());
+  __ bind(&done);
+}
+
+
+void LCodeGen::DoDeferredInstanceOfKnownGlobal(LInstanceOfKnownGlobal* instr,
+                                               Label* map_check) {
+  Register result = ToRegister(instr->result());
+  DCHECK(result.is(v0));
+
+  InstanceofStub::Flags flags = InstanceofStub::kNoFlags;
+  flags = static_cast<InstanceofStub::Flags>(
+      flags | InstanceofStub::kArgsInRegisters);
+  flags = static_cast<InstanceofStub::Flags>(
+      flags | InstanceofStub::kCallSiteInlineCheck);
+  flags = static_cast<InstanceofStub::Flags>(
+      flags | InstanceofStub::kReturnTrueFalseObject);
+  InstanceofStub stub(isolate(), flags);
+
+  PushSafepointRegistersScope scope(this);
+  LoadContextFromDeferred(instr->context());
+
+  // Get the temp register reserved by the instruction. This needs to be a4,
+  // as its safepoint register slot is used to communicate the offset to the
+  // location of the map check.
+  Register temp = ToRegister(instr->temp());
+  DCHECK(temp.is(a4));
+  __ li(InstanceofStub::right(), instr->function());
+  static const int kAdditionalDelta = 13;
+  int delta = masm_->InstructionsGeneratedSince(map_check) + kAdditionalDelta;
+  Label before_push_delta;
+  __ bind(&before_push_delta);
+  {
+    Assembler::BlockTrampolinePoolScope block_trampoline_pool(masm_);
+    __ li(temp, Operand(delta * kIntSize), CONSTANT_SIZE);
+    __ StoreToSafepointRegisterSlot(temp, temp);
+  }
+  CallCodeGeneric(stub.GetCode(),
+                  RelocInfo::CODE_TARGET,
+                  instr,
+                  RECORD_SAFEPOINT_WITH_REGISTERS_AND_NO_ARGUMENTS);
+  LEnvironment* env = instr->GetDeferredLazyDeoptimizationEnvironment();
+  safepoints_.RecordLazyDeoptimizationIndex(env->deoptimization_index());
+  // Put the result value into the result register slot and
+  // restore all registers.
+  __ StoreToSafepointRegisterSlot(result, result);
+}
+
+
+void LCodeGen::DoCmpT(LCmpT* instr) {
+  DCHECK(ToRegister(instr->context()).is(cp));
+  Token::Value op = instr->op();
+
+  Handle<Code> ic = CompareIC::GetUninitialized(isolate(), op);
+  CallCode(ic, RelocInfo::CODE_TARGET, instr);
+  // On MIPS there is no need for a "no inlined smi code" marker (nop).
+
+  Condition condition = ComputeCompareCondition(op);
+  // A minor optimization that relies on LoadRoot always emitting one
+  // instruction.
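+  //
+  // The pattern below leans on the MIPS branch delay slot; in C-like
+  // pseudocode (a sketch, not the emitted instruction sequence):
+  //
+  //   result = true_value;    // delay slot: executes on both paths
+  //   if (cond) goto done;    // branch takes effect after the slot runs
+  //   result = false_value;   // fall-through only: overwrites with false
+  //  done:
+  //
+  // This only works because LoadRoot emits exactly one instruction, which
+  // the DCHECK below verifies.
+  //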
+ Assembler::BlockTrampolinePoolScope block_trampoline_pool(masm()); + Label done, check; + __ Branch(USE_DELAY_SLOT, &done, condition, v0, Operand(zero_reg)); + __ bind(&check); + __ LoadRoot(ToRegister(instr->result()), Heap::kTrueValueRootIndex); + DCHECK_EQ(1, masm()->InstructionsGeneratedSince(&check)); + __ LoadRoot(ToRegister(instr->result()), Heap::kFalseValueRootIndex); + __ bind(&done); +} + + +void LCodeGen::DoReturn(LReturn* instr) { + if (FLAG_trace && info()->IsOptimizing()) { + // Push the return value on the stack as the parameter. + // Runtime::TraceExit returns its parameter in v0. We're leaving the code + // managed by the register allocator and tearing down the frame, it's + // safe to write to the context register. + __ push(v0); + __ ld(cp, MemOperand(fp, StandardFrameConstants::kContextOffset)); + __ CallRuntime(Runtime::kTraceExit, 1); + } + if (info()->saves_caller_doubles()) { + RestoreCallerDoubles(); + } + int no_frame_start = -1; + if (NeedsEagerFrame()) { + __ mov(sp, fp); + no_frame_start = masm_->pc_offset(); + __ Pop(ra, fp); + } + if (instr->has_constant_parameter_count()) { + int parameter_count = ToInteger32(instr->constant_parameter_count()); + int32_t sp_delta = (parameter_count + 1) * kPointerSize; + if (sp_delta != 0) { + __ Daddu(sp, sp, Operand(sp_delta)); + } + } else { + Register reg = ToRegister(instr->parameter_count()); + // The argument count parameter is a smi + __ SmiUntag(reg); + __ dsll(at, reg, kPointerSizeLog2); + __ Daddu(sp, sp, at); + } + + __ Jump(ra); + + if (no_frame_start != -1) { + info_->AddNoFrameRange(no_frame_start, masm_->pc_offset()); + } +} + + +void LCodeGen::DoLoadGlobalCell(LLoadGlobalCell* instr) { + Register result = ToRegister(instr->result()); + __ li(at, Operand(Handle<Object>(instr->hydrogen()->cell().handle()))); + __ ld(result, FieldMemOperand(at, Cell::kValueOffset)); + if (instr->hydrogen()->RequiresHoleCheck()) { + __ LoadRoot(at, Heap::kTheHoleValueRootIndex); + DeoptimizeIf(eq, instr->environment(), result, Operand(at)); + } +} + + +void LCodeGen::DoLoadGlobalGeneric(LLoadGlobalGeneric* instr) { + DCHECK(ToRegister(instr->context()).is(cp)); + DCHECK(ToRegister(instr->global_object()).is(LoadIC::ReceiverRegister())); + DCHECK(ToRegister(instr->result()).is(v0)); + + __ li(LoadIC::NameRegister(), Operand(instr->name())); + if (FLAG_vector_ics) { + Register vector = ToRegister(instr->temp_vector()); + DCHECK(vector.is(LoadIC::VectorRegister())); + __ li(vector, instr->hydrogen()->feedback_vector()); + // No need to allocate this register. + DCHECK(LoadIC::SlotRegister().is(a0)); + __ li(LoadIC::SlotRegister(), + Operand(Smi::FromInt(instr->hydrogen()->slot()))); + } + ContextualMode mode = instr->for_typeof() ? NOT_CONTEXTUAL : CONTEXTUAL; + Handle<Code> ic = LoadIC::initialize_stub(isolate(), mode); + CallCode(ic, RelocInfo::CODE_TARGET, instr); +} + + +void LCodeGen::DoStoreGlobalCell(LStoreGlobalCell* instr) { + Register value = ToRegister(instr->value()); + Register cell = scratch0(); + + // Load the cell. + __ li(cell, Operand(instr->hydrogen()->cell().handle())); + + // If the cell we are storing to contains the hole it could have + // been deleted from the property dictionary. In that case, we need + // to update the property details in the property dictionary to mark + // it as no longer deleted. + if (instr->hydrogen()->RequiresHoleCheck()) { + // We use a temp to check the payload. 
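+    //
+    // In sketch form (illustrative only, not V8 code), the invariant
+    // enforced here is:
+    //
+    //   if (cell->value == the_hole) Deoptimize();  // property was deleted;
+    //                                               // the runtime must redo
+    //                                               // the property details
+    //   cell->value = new_value;                    // otherwise plain store
+    //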
+ Register payload = ToRegister(instr->temp()); + __ ld(payload, FieldMemOperand(cell, Cell::kValueOffset)); + __ LoadRoot(at, Heap::kTheHoleValueRootIndex); + DeoptimizeIf(eq, instr->environment(), payload, Operand(at)); + } + + // Store the value. + __ sd(value, FieldMemOperand(cell, Cell::kValueOffset)); + // Cells are always rescanned, so no write barrier here. +} + + +void LCodeGen::DoLoadContextSlot(LLoadContextSlot* instr) { + Register context = ToRegister(instr->context()); + Register result = ToRegister(instr->result()); + + __ ld(result, ContextOperand(context, instr->slot_index())); + if (instr->hydrogen()->RequiresHoleCheck()) { + __ LoadRoot(at, Heap::kTheHoleValueRootIndex); + + if (instr->hydrogen()->DeoptimizesOnHole()) { + DeoptimizeIf(eq, instr->environment(), result, Operand(at)); + } else { + Label is_not_hole; + __ Branch(&is_not_hole, ne, result, Operand(at)); + __ LoadRoot(result, Heap::kUndefinedValueRootIndex); + __ bind(&is_not_hole); + } + } +} + + +void LCodeGen::DoStoreContextSlot(LStoreContextSlot* instr) { + Register context = ToRegister(instr->context()); + Register value = ToRegister(instr->value()); + Register scratch = scratch0(); + MemOperand target = ContextOperand(context, instr->slot_index()); + + Label skip_assignment; + + if (instr->hydrogen()->RequiresHoleCheck()) { + __ ld(scratch, target); + __ LoadRoot(at, Heap::kTheHoleValueRootIndex); + + if (instr->hydrogen()->DeoptimizesOnHole()) { + DeoptimizeIf(eq, instr->environment(), scratch, Operand(at)); + } else { + __ Branch(&skip_assignment, ne, scratch, Operand(at)); + } + } + + __ sd(value, target); + if (instr->hydrogen()->NeedsWriteBarrier()) { + SmiCheck check_needed = + instr->hydrogen()->value()->type().IsHeapObject() + ? OMIT_SMI_CHECK : INLINE_SMI_CHECK; + __ RecordWriteContextSlot(context, + target.offset(), + value, + scratch0(), + GetRAState(), + kSaveFPRegs, + EMIT_REMEMBERED_SET, + check_needed); + } + + __ bind(&skip_assignment); +} + + +void LCodeGen::DoLoadNamedField(LLoadNamedField* instr) { + HObjectAccess access = instr->hydrogen()->access(); + int offset = access.offset(); + Register object = ToRegister(instr->object()); + if (access.IsExternalMemory()) { + Register result = ToRegister(instr->result()); + MemOperand operand = MemOperand(object, offset); + __ Load(result, operand, access.representation()); + return; + } + + if (instr->hydrogen()->representation().IsDouble()) { + DoubleRegister result = ToDoubleRegister(instr->result()); + __ ldc1(result, FieldMemOperand(object, offset)); + return; + } + + Register result = ToRegister(instr->result()); + if (!access.IsInobject()) { + __ ld(result, FieldMemOperand(object, JSObject::kPropertiesOffset)); + object = result; + } + + Representation representation = access.representation(); + if (representation.IsSmi() && SmiValuesAre32Bits() && + instr->hydrogen()->representation().IsInteger32()) { + if (FLAG_debug_code) { + // Verify this is really an Smi. + Register scratch = scratch0(); + __ Load(scratch, FieldMemOperand(object, offset), representation); + __ AssertSmi(scratch); + } + + // Read int value directly from upper half of the smi. 
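+    //
+    // Sketch of the smi layout this relies on (assuming a little-endian
+    // target, which is what the offset trick below requires):
+    //
+    //   int64_t tagged = static_cast<int64_t>(value) << 32;  // tag bits: 0
+    //
+    // The 32-bit payload occupies the upper word, so the raw int32 lives
+    // at byte offset +4 (kPointerSize / 2) within the field.
+    //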
+    STATIC_ASSERT(kSmiTag == 0);
+    STATIC_ASSERT(kSmiTagSize + kSmiShiftSize == 32);
+    offset += kPointerSize / 2;
+    representation = Representation::Integer32();
+  }
+  __ Load(result, FieldMemOperand(object, offset), representation);
+}
+
+
+void LCodeGen::DoLoadNamedGeneric(LLoadNamedGeneric* instr) {
+  DCHECK(ToRegister(instr->context()).is(cp));
+  DCHECK(ToRegister(instr->object()).is(LoadIC::ReceiverRegister()));
+  DCHECK(ToRegister(instr->result()).is(v0));
+
+  // Name is always in a2.
+  __ li(LoadIC::NameRegister(), Operand(instr->name()));
+  if (FLAG_vector_ics) {
+    Register vector = ToRegister(instr->temp_vector());
+    DCHECK(vector.is(LoadIC::VectorRegister()));
+    __ li(vector, instr->hydrogen()->feedback_vector());
+    // No need to allocate this register.
+    DCHECK(LoadIC::SlotRegister().is(a0));
+    __ li(LoadIC::SlotRegister(),
+          Operand(Smi::FromInt(instr->hydrogen()->slot())));
+  }
+  Handle<Code> ic = LoadIC::initialize_stub(isolate(), NOT_CONTEXTUAL);
+  CallCode(ic, RelocInfo::CODE_TARGET, instr);
+}
+
+
+void LCodeGen::DoLoadFunctionPrototype(LLoadFunctionPrototype* instr) {
+  Register scratch = scratch0();
+  Register function = ToRegister(instr->function());
+  Register result = ToRegister(instr->result());
+
+  // Get the prototype or initial map from the function.
+  __ ld(result,
+        FieldMemOperand(function, JSFunction::kPrototypeOrInitialMapOffset));
+
+  // Check that the function has a prototype or an initial map.
+  __ LoadRoot(at, Heap::kTheHoleValueRootIndex);
+  DeoptimizeIf(eq, instr->environment(), result, Operand(at));
+
+  // If the function does not have an initial map, we're done.
+  Label done;
+  __ GetObjectType(result, scratch, scratch);
+  __ Branch(&done, ne, scratch, Operand(MAP_TYPE));
+
+  // Get the prototype from the initial map.
+  __ ld(result, FieldMemOperand(result, Map::kPrototypeOffset));
+
+  // All done.
+  __ bind(&done);
+}
+
+
+void LCodeGen::DoLoadRoot(LLoadRoot* instr) {
+  Register result = ToRegister(instr->result());
+  __ LoadRoot(result, instr->index());
+}
+
+
+void LCodeGen::DoAccessArgumentsAt(LAccessArgumentsAt* instr) {
+  Register arguments = ToRegister(instr->arguments());
+  Register result = ToRegister(instr->result());
+  // There are two words between the frame pointer and the last argument.
+  // Subtracting from length accounts for one of them; add one more.
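+  //
+  // The address computation below, written out (illustrative only):
+  //
+  //   slot  = (length - index) + 1;   // the "+ 1" skips the second word
+  //   value = arguments[slot];
+  //
+  // e.g. with length == 2 and index == 1 (the last argument) this reads
+  // arguments[2], stepping over the two words that sit between fp and the
+  // last argument.
+  //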
+ if (instr->length()->IsConstantOperand()) { + int const_length = ToInteger32(LConstantOperand::cast(instr->length())); + if (instr->index()->IsConstantOperand()) { + int const_index = ToInteger32(LConstantOperand::cast(instr->index())); + int index = (const_length - const_index) + 1; + __ ld(result, MemOperand(arguments, index * kPointerSize)); + } else { + Register index = ToRegister(instr->index()); + __ li(at, Operand(const_length + 1)); + __ Dsubu(result, at, index); + __ dsll(at, result, kPointerSizeLog2); + __ Daddu(at, arguments, at); + __ ld(result, MemOperand(at)); + } + } else if (instr->index()->IsConstantOperand()) { + Register length = ToRegister(instr->length()); + int const_index = ToInteger32(LConstantOperand::cast(instr->index())); + int loc = const_index - 1; + if (loc != 0) { + __ Dsubu(result, length, Operand(loc)); + __ dsll(at, result, kPointerSizeLog2); + __ Daddu(at, arguments, at); + __ ld(result, MemOperand(at)); + } else { + __ dsll(at, length, kPointerSizeLog2); + __ Daddu(at, arguments, at); + __ ld(result, MemOperand(at)); + } + } else { + Register length = ToRegister(instr->length()); + Register index = ToRegister(instr->index()); + __ Dsubu(result, length, index); + __ Daddu(result, result, 1); + __ dsll(at, result, kPointerSizeLog2); + __ Daddu(at, arguments, at); + __ ld(result, MemOperand(at)); + } +} + + +void LCodeGen::DoLoadKeyedExternalArray(LLoadKeyed* instr) { + Register external_pointer = ToRegister(instr->elements()); + Register key = no_reg; + ElementsKind elements_kind = instr->elements_kind(); + bool key_is_constant = instr->key()->IsConstantOperand(); + int constant_key = 0; + if (key_is_constant) { + constant_key = ToInteger32(LConstantOperand::cast(instr->key())); + if (constant_key & 0xF0000000) { + Abort(kArrayIndexConstantValueTooBig); + } + } else { + key = ToRegister(instr->key()); + } + int element_size_shift = ElementsKindToShiftSize(elements_kind); + int shift_size = (instr->hydrogen()->key()->representation().IsSmi()) + ? (element_size_shift - (kSmiTagSize + kSmiShiftSize)) + : element_size_shift; + int base_offset = instr->base_offset(); + + if (elements_kind == EXTERNAL_FLOAT32_ELEMENTS || + elements_kind == FLOAT32_ELEMENTS || + elements_kind == EXTERNAL_FLOAT64_ELEMENTS || + elements_kind == FLOAT64_ELEMENTS) { + int base_offset = instr->base_offset(); + FPURegister result = ToDoubleRegister(instr->result()); + if (key_is_constant) { + __ Daddu(scratch0(), external_pointer, + constant_key << element_size_shift); + } else { + if (shift_size < 0) { + if (shift_size == -32) { + __ dsra32(scratch0(), key, 0); + } else { + __ dsra(scratch0(), key, -shift_size); + } + } else { + __ dsll(scratch0(), key, shift_size); + } + __ Daddu(scratch0(), scratch0(), external_pointer); + } + if (elements_kind == EXTERNAL_FLOAT32_ELEMENTS || + elements_kind == FLOAT32_ELEMENTS) { + __ lwc1(result, MemOperand(scratch0(), base_offset)); + __ cvt_d_s(result, result); + } else { // i.e. 
elements_kind == EXTERNAL_DOUBLE_ELEMENTS + __ ldc1(result, MemOperand(scratch0(), base_offset)); + } + } else { + Register result = ToRegister(instr->result()); + MemOperand mem_operand = PrepareKeyedOperand( + key, external_pointer, key_is_constant, constant_key, + element_size_shift, shift_size, base_offset); + switch (elements_kind) { + case EXTERNAL_INT8_ELEMENTS: + case INT8_ELEMENTS: + __ lb(result, mem_operand); + break; + case EXTERNAL_UINT8_CLAMPED_ELEMENTS: + case EXTERNAL_UINT8_ELEMENTS: + case UINT8_ELEMENTS: + case UINT8_CLAMPED_ELEMENTS: + __ lbu(result, mem_operand); + break; + case EXTERNAL_INT16_ELEMENTS: + case INT16_ELEMENTS: + __ lh(result, mem_operand); + break; + case EXTERNAL_UINT16_ELEMENTS: + case UINT16_ELEMENTS: + __ lhu(result, mem_operand); + break; + case EXTERNAL_INT32_ELEMENTS: + case INT32_ELEMENTS: + __ lw(result, mem_operand); + break; + case EXTERNAL_UINT32_ELEMENTS: + case UINT32_ELEMENTS: + __ lw(result, mem_operand); + if (!instr->hydrogen()->CheckFlag(HInstruction::kUint32)) { + DeoptimizeIf(Ugreater_equal, instr->environment(), + result, Operand(0x80000000)); + } + break; + case FLOAT32_ELEMENTS: + case FLOAT64_ELEMENTS: + case EXTERNAL_FLOAT32_ELEMENTS: + case EXTERNAL_FLOAT64_ELEMENTS: + case FAST_DOUBLE_ELEMENTS: + case FAST_ELEMENTS: + case FAST_SMI_ELEMENTS: + case FAST_HOLEY_DOUBLE_ELEMENTS: + case FAST_HOLEY_ELEMENTS: + case FAST_HOLEY_SMI_ELEMENTS: + case DICTIONARY_ELEMENTS: + case SLOPPY_ARGUMENTS_ELEMENTS: + UNREACHABLE(); + break; + } + } +} + + +void LCodeGen::DoLoadKeyedFixedDoubleArray(LLoadKeyed* instr) { + Register elements = ToRegister(instr->elements()); + bool key_is_constant = instr->key()->IsConstantOperand(); + Register key = no_reg; + DoubleRegister result = ToDoubleRegister(instr->result()); + Register scratch = scratch0(); + + int element_size_shift = ElementsKindToShiftSize(FAST_DOUBLE_ELEMENTS); + + int base_offset = instr->base_offset(); + if (key_is_constant) { + int constant_key = ToInteger32(LConstantOperand::cast(instr->key())); + if (constant_key & 0xF0000000) { + Abort(kArrayIndexConstantValueTooBig); + } + base_offset += constant_key * kDoubleSize; + } + __ Daddu(scratch, elements, Operand(base_offset)); + + if (!key_is_constant) { + key = ToRegister(instr->key()); + int shift_size = (instr->hydrogen()->key()->representation().IsSmi()) + ? 
(element_size_shift - (kSmiTagSize + kSmiShiftSize)) + : element_size_shift; + if (shift_size > 0) { + __ dsll(at, key, shift_size); + } else if (shift_size == -32) { + __ dsra32(at, key, 0); + } else { + __ dsra(at, key, -shift_size); + } + __ Daddu(scratch, scratch, at); + } + + __ ldc1(result, MemOperand(scratch)); + + if (instr->hydrogen()->RequiresHoleCheck()) { + __ lw(scratch, MemOperand(scratch, sizeof(kHoleNanLower32))); + DeoptimizeIf(eq, instr->environment(), scratch, Operand(kHoleNanUpper32)); + } +} + + +void LCodeGen::DoLoadKeyedFixedArray(LLoadKeyed* instr) { + HLoadKeyed* hinstr = instr->hydrogen(); + Register elements = ToRegister(instr->elements()); + Register result = ToRegister(instr->result()); + Register scratch = scratch0(); + Register store_base = scratch; + int offset = instr->base_offset(); + + if (instr->key()->IsConstantOperand()) { + LConstantOperand* const_operand = LConstantOperand::cast(instr->key()); + offset += ToInteger32(const_operand) * kPointerSize; + store_base = elements; + } else { + Register key = ToRegister(instr->key()); + // Even though the HLoadKeyed instruction forces the input + // representation for the key to be an integer, the input gets replaced + // during bound check elimination with the index argument to the bounds + // check, which can be tagged, so that case must be handled here, too. + if (instr->hydrogen()->key()->representation().IsSmi()) { + __ SmiScale(scratch, key, kPointerSizeLog2); + __ daddu(scratch, elements, scratch); + } else { + __ dsll(scratch, key, kPointerSizeLog2); + __ daddu(scratch, elements, scratch); + } + } + + Representation representation = hinstr->representation(); + if (representation.IsInteger32() && SmiValuesAre32Bits() && + hinstr->elements_kind() == FAST_SMI_ELEMENTS) { + DCHECK(!hinstr->RequiresHoleCheck()); + if (FLAG_debug_code) { + Register temp = scratch1(); + __ Load(temp, MemOperand(store_base, offset), Representation::Smi()); + __ AssertSmi(temp); + } + + // Read int value directly from upper half of the smi. + STATIC_ASSERT(kSmiTag == 0); + STATIC_ASSERT(kSmiTagSize + kSmiShiftSize == 32); + offset += kPointerSize / 2; + } + + __ Load(result, MemOperand(store_base, offset), representation); + + // Check for the hole value. 
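+  //
+  // What the check below does, in sketch form (not V8 code):
+  //
+  //   if (IsFastSmiElementsKind(kind)) {
+  //     if (!IsSmi(result)) Deoptimize();   // only a hole can be non-smi
+  //   } else {
+  //     if (result == the_hole) Deoptimize();
+  //   }
+  //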
+ if (hinstr->RequiresHoleCheck()) { + if (IsFastSmiElementsKind(instr->hydrogen()->elements_kind())) { + __ SmiTst(result, scratch); + DeoptimizeIf(ne, instr->environment(), scratch, Operand(zero_reg)); + } else { + __ LoadRoot(scratch, Heap::kTheHoleValueRootIndex); + DeoptimizeIf(eq, instr->environment(), result, Operand(scratch)); + } + } +} + + +void LCodeGen::DoLoadKeyed(LLoadKeyed* instr) { + if (instr->is_typed_elements()) { + DoLoadKeyedExternalArray(instr); + } else if (instr->hydrogen()->representation().IsDouble()) { + DoLoadKeyedFixedDoubleArray(instr); + } else { + DoLoadKeyedFixedArray(instr); + } +} + + +MemOperand LCodeGen::PrepareKeyedOperand(Register key, + Register base, + bool key_is_constant, + int constant_key, + int element_size, + int shift_size, + int base_offset) { + if (key_is_constant) { + return MemOperand(base, (constant_key << element_size) + base_offset); + } + + if (base_offset == 0) { + if (shift_size >= 0) { + __ dsll(scratch0(), key, shift_size); + __ Daddu(scratch0(), base, scratch0()); + return MemOperand(scratch0()); + } else { + if (shift_size == -32) { + __ dsra32(scratch0(), key, 0); + } else { + __ dsra(scratch0(), key, -shift_size); + } + __ Daddu(scratch0(), base, scratch0()); + return MemOperand(scratch0()); + } + } + + if (shift_size >= 0) { + __ dsll(scratch0(), key, shift_size); + __ Daddu(scratch0(), base, scratch0()); + return MemOperand(scratch0(), base_offset); + } else { + if (shift_size == -32) { + __ dsra32(scratch0(), key, 0); + } else { + __ dsra(scratch0(), key, -shift_size); + } + __ Daddu(scratch0(), base, scratch0()); + return MemOperand(scratch0(), base_offset); + } +} + + +void LCodeGen::DoLoadKeyedGeneric(LLoadKeyedGeneric* instr) { + DCHECK(ToRegister(instr->context()).is(cp)); + DCHECK(ToRegister(instr->object()).is(LoadIC::ReceiverRegister())); + DCHECK(ToRegister(instr->key()).is(LoadIC::NameRegister())); + + if (FLAG_vector_ics) { + Register vector = ToRegister(instr->temp_vector()); + DCHECK(vector.is(LoadIC::VectorRegister())); + __ li(vector, instr->hydrogen()->feedback_vector()); + // No need to allocate this register. + DCHECK(LoadIC::SlotRegister().is(a0)); + __ li(LoadIC::SlotRegister(), + Operand(Smi::FromInt(instr->hydrogen()->slot()))); + } + + Handle<Code> ic = isolate()->builtins()->KeyedLoadIC_Initialize(); + CallCode(ic, RelocInfo::CODE_TARGET, instr); +} + + +void LCodeGen::DoArgumentsElements(LArgumentsElements* instr) { + Register scratch = scratch0(); + Register temp = scratch1(); + Register result = ToRegister(instr->result()); + + if (instr->hydrogen()->from_inlined()) { + __ Dsubu(result, sp, 2 * kPointerSize); + } else { + // Check if the calling frame is an arguments adaptor frame. + Label done, adapted; + __ ld(scratch, MemOperand(fp, StandardFrameConstants::kCallerFPOffset)); + __ ld(result, MemOperand(scratch, StandardFrameConstants::kContextOffset)); + __ Xor(temp, result, Operand(Smi::FromInt(StackFrame::ARGUMENTS_ADAPTOR))); + + // Result is the frame pointer for the frame if not adapted and for the real + // frame below the adaptor frame if adapted. + __ Movn(result, fp, temp); // Move only if temp is not equal to zero (ne). + __ Movz(result, scratch, temp); // Move only if temp is equal to zero (eq). + } +} + + +void LCodeGen::DoArgumentsLength(LArgumentsLength* instr) { + Register elem = ToRegister(instr->elements()); + Register result = ToRegister(instr->result()); + + Label done; + + // If no arguments adaptor frame the number of arguments is fixed. 
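+  //
+  // In outline (illustrative pseudocode):
+  //
+  //   if (fp == elem)                        // no adaptor frame in between
+  //     length = scope()->num_parameters();  // argc is statically known
+  //   else
+  //     length = adaptor_frame->length();    // actual argc, stored as a smi
+  //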
+ __ Daddu(result, zero_reg, Operand(scope()->num_parameters())); + __ Branch(&done, eq, fp, Operand(elem)); + + // Arguments adaptor frame present. Get argument length from there. + __ ld(result, MemOperand(fp, StandardFrameConstants::kCallerFPOffset)); + __ ld(result, + MemOperand(result, ArgumentsAdaptorFrameConstants::kLengthOffset)); + __ SmiUntag(result); + + // Argument length is in result register. + __ bind(&done); +} + + +void LCodeGen::DoWrapReceiver(LWrapReceiver* instr) { + Register receiver = ToRegister(instr->receiver()); + Register function = ToRegister(instr->function()); + Register result = ToRegister(instr->result()); + Register scratch = scratch0(); + + // If the receiver is null or undefined, we have to pass the global + // object as a receiver to normal functions. Values have to be + // passed unchanged to builtins and strict-mode functions. + Label global_object, result_in_receiver; + + if (!instr->hydrogen()->known_function()) { + // Do not transform the receiver to object for strict mode functions. + __ ld(scratch, + FieldMemOperand(function, JSFunction::kSharedFunctionInfoOffset)); + + // Do not transform the receiver to object for builtins. + int32_t strict_mode_function_mask = + 1 << SharedFunctionInfo::kStrictModeBitWithinByte; + int32_t native_mask = 1 << SharedFunctionInfo::kNativeBitWithinByte; + + __ lbu(at, + FieldMemOperand(scratch, SharedFunctionInfo::kStrictModeByteOffset)); + __ And(at, at, Operand(strict_mode_function_mask)); + __ Branch(&result_in_receiver, ne, at, Operand(zero_reg)); + __ lbu(at, + FieldMemOperand(scratch, SharedFunctionInfo::kNativeByteOffset)); + __ And(at, at, Operand(native_mask)); + __ Branch(&result_in_receiver, ne, at, Operand(zero_reg)); + } + + // Normal function. Replace undefined or null with global receiver. + __ LoadRoot(scratch, Heap::kNullValueRootIndex); + __ Branch(&global_object, eq, receiver, Operand(scratch)); + __ LoadRoot(scratch, Heap::kUndefinedValueRootIndex); + __ Branch(&global_object, eq, receiver, Operand(scratch)); + + // Deoptimize if the receiver is not a JS object. + __ SmiTst(receiver, scratch); + DeoptimizeIf(eq, instr->environment(), scratch, Operand(zero_reg)); + + __ GetObjectType(receiver, scratch, scratch); + DeoptimizeIf(lt, instr->environment(), + scratch, Operand(FIRST_SPEC_OBJECT_TYPE)); + __ Branch(&result_in_receiver); + + __ bind(&global_object); + __ ld(result, FieldMemOperand(function, JSFunction::kContextOffset)); + __ ld(result, + ContextOperand(result, Context::GLOBAL_OBJECT_INDEX)); + __ ld(result, + FieldMemOperand(result, GlobalObject::kGlobalProxyOffset)); + + if (result.is(receiver)) { + __ bind(&result_in_receiver); + } else { + Label result_ok; + __ Branch(&result_ok); + __ bind(&result_in_receiver); + __ mov(result, receiver); + __ bind(&result_ok); + } +} + + +void LCodeGen::DoApplyArguments(LApplyArguments* instr) { + Register receiver = ToRegister(instr->receiver()); + Register function = ToRegister(instr->function()); + Register length = ToRegister(instr->length()); + Register elements = ToRegister(instr->elements()); + Register scratch = scratch0(); + DCHECK(receiver.is(a0)); // Used for parameter count. + DCHECK(function.is(a1)); // Required by InvokeFunction. + DCHECK(ToRegister(instr->result()).is(v0)); + + // Copy the arguments to this function possibly from the + // adaptor frame below it. 
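+  //
+  // Shape of what follows, as a sketch (kArgumentsLimit is 1 * KB, i.e. at
+  // most 1024 arguments before this path gives up and deoptimizes):
+  //
+  //   if (length > kArgumentsLimit) Deoptimize();
+  //   for (int i = length; i > 0; --i) push(elements[i]);
+  //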
+  const uint32_t kArgumentsLimit = 1 * KB;
+  DeoptimizeIf(hi, instr->environment(), length, Operand(kArgumentsLimit));
+
+  // Push the receiver and use the register to keep the original
+  // number of arguments.
+  __ push(receiver);
+  __ Move(receiver, length);
+  // The arguments are at a one pointer size offset from elements.
+  __ Daddu(elements, elements, Operand(1 * kPointerSize));
+
+  // Loop through the arguments pushing them onto the execution
+  // stack.
+  Label invoke, loop;
+  // length is a small non-negative integer, due to the test above.
+  __ Branch(USE_DELAY_SLOT, &invoke, eq, length, Operand(zero_reg));
+  __ dsll(scratch, length, kPointerSizeLog2);
+  __ bind(&loop);
+  __ Daddu(scratch, elements, scratch);
+  __ ld(scratch, MemOperand(scratch));
+  __ push(scratch);
+  __ Dsubu(length, length, Operand(1));
+  __ Branch(USE_DELAY_SLOT, &loop, ne, length, Operand(zero_reg));
+  __ dsll(scratch, length, kPointerSizeLog2);
+
+  __ bind(&invoke);
+  DCHECK(instr->HasPointerMap());
+  LPointerMap* pointers = instr->pointer_map();
+  SafepointGenerator safepoint_generator(
+      this, pointers, Safepoint::kLazyDeopt);
+  // The number of arguments is stored in receiver, which is a0, as expected
+  // by InvokeFunction.
+  ParameterCount actual(receiver);
+  __ InvokeFunction(function, actual, CALL_FUNCTION, safepoint_generator);
+}
+
+
+void LCodeGen::DoPushArgument(LPushArgument* instr) {
+  LOperand* argument = instr->value();
+  if (argument->IsDoubleRegister() || argument->IsDoubleStackSlot()) {
+    Abort(kDoPushArgumentNotImplementedForDoubleType);
+  } else {
+    Register argument_reg = EmitLoadRegister(argument, at);
+    __ push(argument_reg);
+  }
+}
+
+
+void LCodeGen::DoDrop(LDrop* instr) {
+  __ Drop(instr->count());
+}
+
+
+void LCodeGen::DoThisFunction(LThisFunction* instr) {
+  Register result = ToRegister(instr->result());
+  __ ld(result, MemOperand(fp, JavaScriptFrameConstants::kFunctionOffset));
+}
+
+
+void LCodeGen::DoContext(LContext* instr) {
+  // If there is a non-return use, the context must be moved to a register.
+  Register result = ToRegister(instr->result());
+  if (info()->IsOptimizing()) {
+    __ ld(result, MemOperand(fp, StandardFrameConstants::kContextOffset));
+  } else {
+    // If there is no frame, the context must be in cp.
+    DCHECK(result.is(cp));
+  }
+}
+
+
+void LCodeGen::DoDeclareGlobals(LDeclareGlobals* instr) {
+  DCHECK(ToRegister(instr->context()).is(cp));
+  __ li(scratch0(), instr->hydrogen()->pairs());
+  __ li(scratch1(), Operand(Smi::FromInt(instr->hydrogen()->flags())));
+  // The context is the first argument.
+  __ Push(cp, scratch0(), scratch1());
+  CallRuntime(Runtime::kDeclareGlobals, 3, instr);
+}
+
+
+void LCodeGen::CallKnownFunction(Handle<JSFunction> function,
+                                 int formal_parameter_count,
+                                 int arity,
+                                 LInstruction* instr,
+                                 A1State a1_state) {
+  bool dont_adapt_arguments =
+      formal_parameter_count == SharedFunctionInfo::kDontAdaptArgumentsSentinel;
+  bool can_invoke_directly =
+      dont_adapt_arguments || formal_parameter_count == arity;
+
+  LPointerMap* pointers = instr->pointer_map();
+
+  if (can_invoke_directly) {
+    if (a1_state == A1_UNINITIALIZED) {
+      __ li(a1, function);
+    }
+
+    // Change context.
+    __ ld(cp, FieldMemOperand(a1, JSFunction::kContextOffset));
+
+    // Set a0 to the arguments count if adaptation is not needed. Assumes
+    // that a0 is available to write to at this point.
+    if (dont_adapt_arguments) {
+      __ li(a0, Operand(arity));
+    }
+
+    // Invoke function.
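+    //
+    // The fast path being taken here, in outline (illustrative only):
+    //
+    //   if (dont_adapt_arguments || formal_parameter_count == arity) {
+    //     a0 = arity;                 // only when adaptation is disabled
+    //     call function->code_entry;  // skip the arguments adaptor
+    //   } else {
+    //     InvokeFunction(...);        // generic path; may build an
+    //   }                             // arguments adaptor frame
+    //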
+ __ ld(at, FieldMemOperand(a1, JSFunction::kCodeEntryOffset)); + __ Call(at); + + // Set up deoptimization. + RecordSafepointWithLazyDeopt(instr, RECORD_SIMPLE_SAFEPOINT); + } else { + SafepointGenerator generator(this, pointers, Safepoint::kLazyDeopt); + ParameterCount count(arity); + ParameterCount expected(formal_parameter_count); + __ InvokeFunction(function, expected, count, CALL_FUNCTION, generator); + } +} + + +void LCodeGen::DoDeferredMathAbsTaggedHeapNumber(LMathAbs* instr) { + DCHECK(instr->context() != NULL); + DCHECK(ToRegister(instr->context()).is(cp)); + Register input = ToRegister(instr->value()); + Register result = ToRegister(instr->result()); + Register scratch = scratch0(); + + // Deoptimize if not a heap number. + __ ld(scratch, FieldMemOperand(input, HeapObject::kMapOffset)); + __ LoadRoot(at, Heap::kHeapNumberMapRootIndex); + DeoptimizeIf(ne, instr->environment(), scratch, Operand(at)); + + Label done; + Register exponent = scratch0(); + scratch = no_reg; + __ lwu(exponent, FieldMemOperand(input, HeapNumber::kExponentOffset)); + // Check the sign of the argument. If the argument is positive, just + // return it. + __ Move(result, input); + __ And(at, exponent, Operand(HeapNumber::kSignMask)); + __ Branch(&done, eq, at, Operand(zero_reg)); + + // Input is negative. Reverse its sign. + // Preserve the value of all registers. + { + PushSafepointRegistersScope scope(this); + + // Registers were saved at the safepoint, so we can use + // many scratch registers. + Register tmp1 = input.is(a1) ? a0 : a1; + Register tmp2 = input.is(a2) ? a0 : a2; + Register tmp3 = input.is(a3) ? a0 : a3; + Register tmp4 = input.is(a4) ? a0 : a4; + + // exponent: floating point exponent value. + + Label allocated, slow; + __ LoadRoot(tmp4, Heap::kHeapNumberMapRootIndex); + __ AllocateHeapNumber(tmp1, tmp2, tmp3, tmp4, &slow); + __ Branch(&allocated); + + // Slow case: Call the runtime system to do the number allocation. + __ bind(&slow); + + CallRuntimeFromDeferred(Runtime::kAllocateHeapNumber, 0, instr, + instr->context()); + // Set the pointer to the new heap number in tmp. + if (!tmp1.is(v0)) + __ mov(tmp1, v0); + // Restore input_reg after call to runtime. + __ LoadFromSafepointRegisterSlot(input, input); + __ lwu(exponent, FieldMemOperand(input, HeapNumber::kExponentOffset)); + + __ bind(&allocated); + // exponent: floating point exponent value. + // tmp1: allocated heap number. + __ And(exponent, exponent, Operand(~HeapNumber::kSignMask)); + __ sw(exponent, FieldMemOperand(tmp1, HeapNumber::kExponentOffset)); + __ lwu(tmp2, FieldMemOperand(input, HeapNumber::kMantissaOffset)); + __ sw(tmp2, FieldMemOperand(tmp1, HeapNumber::kMantissaOffset)); + + __ StoreToSafepointRegisterSlot(tmp1, result); + } + + __ bind(&done); +} + + +void LCodeGen::EmitIntegerMathAbs(LMathAbs* instr) { + Register input = ToRegister(instr->value()); + Register result = ToRegister(instr->result()); + Assembler::BlockTrampolinePoolScope block_trampoline_pool(masm_); + Label done; + __ Branch(USE_DELAY_SLOT, &done, ge, input, Operand(zero_reg)); + __ mov(result, input); + __ dsubu(result, zero_reg, input); + // Overflow if result is still negative, i.e. 0x80000000. + DeoptimizeIf(lt, instr->environment(), result, Operand(zero_reg)); + __ bind(&done); +} + + +void LCodeGen::DoMathAbs(LMathAbs* instr) { + // Class for deferred case. 
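+  //
+  // The deferred-code pattern, in sketch form: the slow path is emitted
+  // out of line after the function's main body, roughly
+  //
+  //   main path:  if (!IsSmi(input)) goto deferred->entry();
+  //               ... fast integer abs ...
+  //    exit:      ...
+  //   deferred:   ... heap-number abs, via the runtime if needed ...
+  //               goto exit;
+  //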
+  class DeferredMathAbsTaggedHeapNumber V8_FINAL : public LDeferredCode {
+   public:
+    DeferredMathAbsTaggedHeapNumber(LCodeGen* codegen, LMathAbs* instr)
+        : LDeferredCode(codegen), instr_(instr) { }
+    virtual void Generate() V8_OVERRIDE {
+      codegen()->DoDeferredMathAbsTaggedHeapNumber(instr_);
+    }
+    virtual LInstruction* instr() V8_OVERRIDE { return instr_; }
+   private:
+    LMathAbs* instr_;
+  };
+
+  Representation r = instr->hydrogen()->value()->representation();
+  if (r.IsDouble()) {
+    FPURegister input = ToDoubleRegister(instr->value());
+    FPURegister result = ToDoubleRegister(instr->result());
+    __ abs_d(result, input);
+  } else if (r.IsSmiOrInteger32()) {
+    EmitIntegerMathAbs(instr);
+  } else {
+    // Representation is tagged.
+    DeferredMathAbsTaggedHeapNumber* deferred =
+        new(zone()) DeferredMathAbsTaggedHeapNumber(this, instr);
+    Register input = ToRegister(instr->value());
+    // Smi check.
+    __ JumpIfNotSmi(input, deferred->entry());
+    // If smi, handle it directly.
+    EmitIntegerMathAbs(instr);
+    __ bind(deferred->exit());
+  }
+}
+
+
+void LCodeGen::DoMathFloor(LMathFloor* instr) {
+  DoubleRegister input = ToDoubleRegister(instr->value());
+  Register result = ToRegister(instr->result());
+  Register scratch1 = scratch0();
+  Register except_flag = ToRegister(instr->temp());
+
+  __ EmitFPUTruncate(kRoundToMinusInf,
+                     result,
+                     input,
+                     scratch1,
+                     double_scratch0(),
+                     except_flag);
+
+  // Deopt if the operation did not succeed.
+  DeoptimizeIf(ne, instr->environment(), except_flag, Operand(zero_reg));
+
+  if (instr->hydrogen()->CheckFlag(HValue::kBailoutOnMinusZero)) {
+    // Test for -0.
+    Label done;
+    __ Branch(&done, ne, result, Operand(zero_reg));
+    __ mfhc1(scratch1, input);  // Get exponent/sign bits.
+    __ And(scratch1, scratch1, Operand(HeapNumber::kSignMask));
+    DeoptimizeIf(ne, instr->environment(), scratch1, Operand(zero_reg));
+    __ bind(&done);
+  }
+}
+
+
+void LCodeGen::DoMathRound(LMathRound* instr) {
+  DoubleRegister input = ToDoubleRegister(instr->value());
+  Register result = ToRegister(instr->result());
+  DoubleRegister double_scratch1 = ToDoubleRegister(instr->temp());
+  Register scratch = scratch0();
+  Label done, check_sign_on_zero;
+
+  // Extract exponent bits.
+  __ mfhc1(result, input);
+  __ Ext(scratch,
+         result,
+         HeapNumber::kExponentShift,
+         HeapNumber::kExponentBits);
+
+  // If the number is in ]-0.5, +0.5[, the result is +/- 0.
+  Label skip1;
+  __ Branch(&skip1, gt, scratch, Operand(HeapNumber::kExponentBias - 2));
+  __ mov(result, zero_reg);
+  if (instr->hydrogen()->CheckFlag(HValue::kBailoutOnMinusZero)) {
+    __ Branch(&check_sign_on_zero);
+  } else {
+    __ Branch(&done);
+  }
+  __ bind(&skip1);
+
+  // The following conversion will not work with numbers
+  // outside of ]-2^32, 2^32[.
+  DeoptimizeIf(ge, instr->environment(), scratch,
+               Operand(HeapNumber::kExponentBias + 32));
+
+  // Save the original sign for later comparison.
+  __ And(scratch, result, Operand(HeapNumber::kSignMask));
+
+  __ Move(double_scratch0(), 0.5);
+  __ add_d(double_scratch0(), input, double_scratch0());
+
+  // Check sign of the result: if the sign changed, the input
+  // value was in ]-0.5, 0[ and the result should be -0.
+  __ mfhc1(result, double_scratch0());
+  // mfhc1 sign-extends; clear the upper bits.
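+  //
+  // The two shifts below zero-extend the low word; the C++ equivalent
+  // (illustrative) is simply:
+  //
+  //   result = static_cast<uint32_t>(result);   // drop the sign extension
+  //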
+ __ dsll32(result, result, 0); + __ dsrl32(result, result, 0); + __ Xor(result, result, Operand(scratch)); + if (instr->hydrogen()->CheckFlag(HValue::kBailoutOnMinusZero)) { + // ARM uses 'mi' here, which is 'lt' + DeoptimizeIf(lt, instr->environment(), result, + Operand(zero_reg)); + } else { + Label skip2; + // ARM uses 'mi' here, which is 'lt' + // Negating it results in 'ge' + __ Branch(&skip2, ge, result, Operand(zero_reg)); + __ mov(result, zero_reg); + __ Branch(&done); + __ bind(&skip2); + } + + Register except_flag = scratch; + __ EmitFPUTruncate(kRoundToMinusInf, + result, + double_scratch0(), + at, + double_scratch1, + except_flag); + + DeoptimizeIf(ne, instr->environment(), except_flag, Operand(zero_reg)); + + if (instr->hydrogen()->CheckFlag(HValue::kBailoutOnMinusZero)) { + // Test for -0. + __ Branch(&done, ne, result, Operand(zero_reg)); + __ bind(&check_sign_on_zero); + __ mfhc1(scratch, input); // Get exponent/sign bits. + __ And(scratch, scratch, Operand(HeapNumber::kSignMask)); + DeoptimizeIf(ne, instr->environment(), scratch, Operand(zero_reg)); + } + __ bind(&done); +} + + +void LCodeGen::DoMathFround(LMathFround* instr) { + DoubleRegister input = ToDoubleRegister(instr->value()); + DoubleRegister result = ToDoubleRegister(instr->result()); + __ cvt_s_d(result, input); + __ cvt_d_s(result, result); +} + + +void LCodeGen::DoMathSqrt(LMathSqrt* instr) { + DoubleRegister input = ToDoubleRegister(instr->value()); + DoubleRegister result = ToDoubleRegister(instr->result()); + __ sqrt_d(result, input); +} + + +void LCodeGen::DoMathPowHalf(LMathPowHalf* instr) { + DoubleRegister input = ToDoubleRegister(instr->value()); + DoubleRegister result = ToDoubleRegister(instr->result()); + DoubleRegister temp = ToDoubleRegister(instr->temp()); + + DCHECK(!input.is(result)); + + // Note that according to ECMA-262 15.8.2.13: + // Math.pow(-Infinity, 0.5) == Infinity + // Math.sqrt(-Infinity) == NaN + Label done; + __ Move(temp, -V8_INFINITY); + __ BranchF(USE_DELAY_SLOT, &done, NULL, eq, temp, input); + // Set up Infinity in the delay slot. + // result is overwritten if the branch is not taken. + __ neg_d(result, temp); + + // Add +0 to convert -0 to +0. + __ add_d(result, input, kDoubleRegZero); + __ sqrt_d(result, result); + __ bind(&done); +} + + +void LCodeGen::DoPower(LPower* instr) { + Representation exponent_type = instr->hydrogen()->right()->representation(); + // Having marked this as a call, we can use any registers. + // Just make sure that the input/output registers are the expected ones. 
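+  //
+  // The expected register assignment, per the DCHECKs that follow (a
+  // convention shared with MathPowStub, noted here for reference):
+  //
+  //   base:     f2 (double)
+  //   exponent: f4 (double) or a2 (tagged/smi)
+  //   result:   f0
+  //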
+ DCHECK(!instr->right()->IsDoubleRegister() || + ToDoubleRegister(instr->right()).is(f4)); + DCHECK(!instr->right()->IsRegister() || + ToRegister(instr->right()).is(a2)); + DCHECK(ToDoubleRegister(instr->left()).is(f2)); + DCHECK(ToDoubleRegister(instr->result()).is(f0)); + + if (exponent_type.IsSmi()) { + MathPowStub stub(isolate(), MathPowStub::TAGGED); + __ CallStub(&stub); + } else if (exponent_type.IsTagged()) { + Label no_deopt; + __ JumpIfSmi(a2, &no_deopt); + __ ld(a7, FieldMemOperand(a2, HeapObject::kMapOffset)); + __ LoadRoot(at, Heap::kHeapNumberMapRootIndex); + DeoptimizeIf(ne, instr->environment(), a7, Operand(at)); + __ bind(&no_deopt); + MathPowStub stub(isolate(), MathPowStub::TAGGED); + __ CallStub(&stub); + } else if (exponent_type.IsInteger32()) { + MathPowStub stub(isolate(), MathPowStub::INTEGER); + __ CallStub(&stub); + } else { + DCHECK(exponent_type.IsDouble()); + MathPowStub stub(isolate(), MathPowStub::DOUBLE); + __ CallStub(&stub); + } +} + + +void LCodeGen::DoMathExp(LMathExp* instr) { + DoubleRegister input = ToDoubleRegister(instr->value()); + DoubleRegister result = ToDoubleRegister(instr->result()); + DoubleRegister double_scratch1 = ToDoubleRegister(instr->double_temp()); + DoubleRegister double_scratch2 = double_scratch0(); + Register temp1 = ToRegister(instr->temp1()); + Register temp2 = ToRegister(instr->temp2()); + + MathExpGenerator::EmitMathExp( + masm(), input, result, double_scratch1, double_scratch2, + temp1, temp2, scratch0()); +} + + +void LCodeGen::DoMathLog(LMathLog* instr) { + __ PrepareCallCFunction(0, 1, scratch0()); + __ MovToFloatParameter(ToDoubleRegister(instr->value())); + __ CallCFunction(ExternalReference::math_log_double_function(isolate()), + 0, 1); + __ MovFromFloatResult(ToDoubleRegister(instr->result())); +} + + +void LCodeGen::DoMathClz32(LMathClz32* instr) { + Register input = ToRegister(instr->value()); + Register result = ToRegister(instr->result()); + __ Clz(result, input); +} + + +void LCodeGen::DoInvokeFunction(LInvokeFunction* instr) { + DCHECK(ToRegister(instr->context()).is(cp)); + DCHECK(ToRegister(instr->function()).is(a1)); + DCHECK(instr->HasPointerMap()); + + Handle<JSFunction> known_function = instr->hydrogen()->known_function(); + if (known_function.is_null()) { + LPointerMap* pointers = instr->pointer_map(); + SafepointGenerator generator(this, pointers, Safepoint::kLazyDeopt); + ParameterCount count(instr->arity()); + __ InvokeFunction(a1, count, CALL_FUNCTION, generator); + } else { + CallKnownFunction(known_function, + instr->hydrogen()->formal_parameter_count(), + instr->arity(), + instr, + A1_CONTAINS_TARGET); + } +} + + +void LCodeGen::DoCallWithDescriptor(LCallWithDescriptor* instr) { + DCHECK(ToRegister(instr->result()).is(v0)); + + LPointerMap* pointers = instr->pointer_map(); + SafepointGenerator generator(this, pointers, Safepoint::kLazyDeopt); + + if (instr->target()->IsConstantOperand()) { + LConstantOperand* target = LConstantOperand::cast(instr->target()); + Handle<Code> code = Handle<Code>::cast(ToHandle(target)); + generator.BeforeCall(__ CallSize(code, RelocInfo::CODE_TARGET)); + __ Call(code, RelocInfo::CODE_TARGET); + } else { + DCHECK(instr->target()->IsRegister()); + Register target = ToRegister(instr->target()); + generator.BeforeCall(__ CallSize(target)); + __ Daddu(target, target, Operand(Code::kHeaderSize - kHeapObjectTag)); + __ Call(target); + } + generator.AfterCall(); +} + + +void LCodeGen::DoCallJSFunction(LCallJSFunction* instr) { + DCHECK(ToRegister(instr->function()).is(a1)); + 
DCHECK(ToRegister(instr->result()).is(v0)); + + if (instr->hydrogen()->pass_argument_count()) { + __ li(a0, Operand(instr->arity())); + } + + // Change context. + __ ld(cp, FieldMemOperand(a1, JSFunction::kContextOffset)); + + // Load the code entry address + __ ld(at, FieldMemOperand(a1, JSFunction::kCodeEntryOffset)); + __ Call(at); + + RecordSafepointWithLazyDeopt(instr, RECORD_SIMPLE_SAFEPOINT); +} + + +void LCodeGen::DoCallFunction(LCallFunction* instr) { + DCHECK(ToRegister(instr->context()).is(cp)); + DCHECK(ToRegister(instr->function()).is(a1)); + DCHECK(ToRegister(instr->result()).is(v0)); + + int arity = instr->arity(); + CallFunctionStub stub(isolate(), arity, instr->hydrogen()->function_flags()); + CallCode(stub.GetCode(), RelocInfo::CODE_TARGET, instr); +} + + +void LCodeGen::DoCallNew(LCallNew* instr) { + DCHECK(ToRegister(instr->context()).is(cp)); + DCHECK(ToRegister(instr->constructor()).is(a1)); + DCHECK(ToRegister(instr->result()).is(v0)); + + __ li(a0, Operand(instr->arity())); + // No cell in a2 for construct type feedback in optimized code + __ LoadRoot(a2, Heap::kUndefinedValueRootIndex); + CallConstructStub stub(isolate(), NO_CALL_CONSTRUCTOR_FLAGS); + CallCode(stub.GetCode(), RelocInfo::CONSTRUCT_CALL, instr); +} + + +void LCodeGen::DoCallNewArray(LCallNewArray* instr) { + DCHECK(ToRegister(instr->context()).is(cp)); + DCHECK(ToRegister(instr->constructor()).is(a1)); + DCHECK(ToRegister(instr->result()).is(v0)); + + __ li(a0, Operand(instr->arity())); + __ LoadRoot(a2, Heap::kUndefinedValueRootIndex); + ElementsKind kind = instr->hydrogen()->elements_kind(); + AllocationSiteOverrideMode override_mode = + (AllocationSite::GetMode(kind) == TRACK_ALLOCATION_SITE) + ? DISABLE_ALLOCATION_SITES + : DONT_OVERRIDE; + + if (instr->arity() == 0) { + ArrayNoArgumentConstructorStub stub(isolate(), kind, override_mode); + CallCode(stub.GetCode(), RelocInfo::CONSTRUCT_CALL, instr); + } else if (instr->arity() == 1) { + Label done; + if (IsFastPackedElementsKind(kind)) { + Label packed_case; + // We might need a change here, + // look at the first argument. 
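+      //
+      // Why the first argument matters here (illustrative):
+      //
+      //   new Array(0)  -> no elements yet; a packed kind is still valid
+      //   new Array(n)  -> n holes up front, so for n != 0 the holey
+      //                    variant of the elements kind must be used
+      //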
+ __ ld(a5, MemOperand(sp, 0)); + __ Branch(&packed_case, eq, a5, Operand(zero_reg)); + + ElementsKind holey_kind = GetHoleyElementsKind(kind); + ArraySingleArgumentConstructorStub stub(isolate(), + holey_kind, + override_mode); + CallCode(stub.GetCode(), RelocInfo::CONSTRUCT_CALL, instr); + __ jmp(&done); + __ bind(&packed_case); + } + + ArraySingleArgumentConstructorStub stub(isolate(), kind, override_mode); + CallCode(stub.GetCode(), RelocInfo::CONSTRUCT_CALL, instr); + __ bind(&done); + } else { + ArrayNArgumentsConstructorStub stub(isolate(), kind, override_mode); + CallCode(stub.GetCode(), RelocInfo::CONSTRUCT_CALL, instr); + } +} + + +void LCodeGen::DoCallRuntime(LCallRuntime* instr) { + CallRuntime(instr->function(), instr->arity(), instr); +} + + +void LCodeGen::DoStoreCodeEntry(LStoreCodeEntry* instr) { + Register function = ToRegister(instr->function()); + Register code_object = ToRegister(instr->code_object()); + __ Daddu(code_object, code_object, + Operand(Code::kHeaderSize - kHeapObjectTag)); + __ sd(code_object, + FieldMemOperand(function, JSFunction::kCodeEntryOffset)); +} + + +void LCodeGen::DoInnerAllocatedObject(LInnerAllocatedObject* instr) { + Register result = ToRegister(instr->result()); + Register base = ToRegister(instr->base_object()); + if (instr->offset()->IsConstantOperand()) { + LConstantOperand* offset = LConstantOperand::cast(instr->offset()); + __ Daddu(result, base, Operand(ToInteger32(offset))); + } else { + Register offset = ToRegister(instr->offset()); + __ Daddu(result, base, offset); + } +} + + +void LCodeGen::DoStoreNamedField(LStoreNamedField* instr) { + Representation representation = instr->representation(); + + Register object = ToRegister(instr->object()); + Register scratch2 = scratch1(); + Register scratch1 = scratch0(); + HObjectAccess access = instr->hydrogen()->access(); + int offset = access.offset(); + if (access.IsExternalMemory()) { + Register value = ToRegister(instr->value()); + MemOperand operand = MemOperand(object, offset); + __ Store(value, operand, representation); + return; + } + + __ AssertNotSmi(object); + + DCHECK(!representation.IsSmi() || + !instr->value()->IsConstantOperand() || + IsSmi(LConstantOperand::cast(instr->value()))); + if (representation.IsDouble()) { + DCHECK(access.IsInobject()); + DCHECK(!instr->hydrogen()->has_transition()); + DCHECK(!instr->hydrogen()->NeedsWriteBarrier()); + DoubleRegister value = ToDoubleRegister(instr->value()); + __ sdc1(value, FieldMemOperand(object, offset)); + return; + } + + if (instr->hydrogen()->has_transition()) { + Handle<Map> transition = instr->hydrogen()->transition_map(); + AddDeprecationDependency(transition); + __ li(scratch1, Operand(transition)); + __ sd(scratch1, FieldMemOperand(object, HeapObject::kMapOffset)); + if (instr->hydrogen()->NeedsWriteBarrierForMap()) { + Register temp = ToRegister(instr->temp()); + // Update the write barrier for the map field. + __ RecordWriteForMap(object, + scratch1, + temp, + GetRAState(), + kSaveFPRegs); + } + } + + // Do the store. 
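+  //
+  // In sketch form, the store below is:
+  //
+  //   destination = access.IsInobject() ? object
+  //                                     : object->properties;  // backing
+  //   *(destination + offset) = value;   // plus a write barrier when the
+  //                                      // value may be a heap pointer
+  //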
+ Register destination = object; + if (!access.IsInobject()) { + destination = scratch1; + __ ld(destination, FieldMemOperand(object, JSObject::kPropertiesOffset)); + } + Register value = ToRegister(instr->value()); + if (representation.IsSmi() && SmiValuesAre32Bits() && + instr->hydrogen()->value()->representation().IsInteger32()) { + DCHECK(instr->hydrogen()->store_mode() == STORE_TO_INITIALIZED_ENTRY); + if (FLAG_debug_code) { + __ Load(scratch2, FieldMemOperand(destination, offset), representation); + __ AssertSmi(scratch2); + } + + // Store int value directly to upper half of the smi. + offset += kPointerSize / 2; + representation = Representation::Integer32(); + } + + MemOperand operand = FieldMemOperand(destination, offset); + __ Store(value, operand, representation); + if (instr->hydrogen()->NeedsWriteBarrier()) { + // Update the write barrier for the object for in-object properties. + __ RecordWriteField(destination, + offset, + value, + scratch2, + GetRAState(), + kSaveFPRegs, + EMIT_REMEMBERED_SET, + instr->hydrogen()->SmiCheckForWriteBarrier(), + instr->hydrogen()->PointersToHereCheckForValue()); + } +} + + +void LCodeGen::DoStoreNamedGeneric(LStoreNamedGeneric* instr) { + DCHECK(ToRegister(instr->context()).is(cp)); + DCHECK(ToRegister(instr->object()).is(StoreIC::ReceiverRegister())); + DCHECK(ToRegister(instr->value()).is(StoreIC::ValueRegister())); + + __ li(StoreIC::NameRegister(), Operand(instr->name())); + Handle<Code> ic = StoreIC::initialize_stub(isolate(), instr->strict_mode()); + CallCode(ic, RelocInfo::CODE_TARGET, instr); +} + + +void LCodeGen::DoBoundsCheck(LBoundsCheck* instr) { + Condition cc = instr->hydrogen()->allow_equality() ? hi : hs; + Operand operand((int64_t)0); + Register reg; + if (instr->index()->IsConstantOperand()) { + operand = ToOperand(instr->index()); + reg = ToRegister(instr->length()); + cc = CommuteCondition(cc); + } else { + reg = ToRegister(instr->index()); + operand = ToOperand(instr->length()); + } + if (FLAG_debug_code && instr->hydrogen()->skip_check()) { + Label done; + __ Branch(&done, NegateCondition(cc), reg, operand); + __ stop("eliminated bounds check failed"); + __ bind(&done); + } else { + DeoptimizeIf(cc, instr->environment(), reg, operand); + } +} + + +void LCodeGen::DoStoreKeyedExternalArray(LStoreKeyed* instr) { + Register external_pointer = ToRegister(instr->elements()); + Register key = no_reg; + ElementsKind elements_kind = instr->elements_kind(); + bool key_is_constant = instr->key()->IsConstantOperand(); + int constant_key = 0; + if (key_is_constant) { + constant_key = ToInteger32(LConstantOperand::cast(instr->key())); + if (constant_key & 0xF0000000) { + Abort(kArrayIndexConstantValueTooBig); + } + } else { + key = ToRegister(instr->key()); + } + int element_size_shift = ElementsKindToShiftSize(elements_kind); + int shift_size = (instr->hydrogen()->key()->representation().IsSmi()) + ? 
(element_size_shift - (kSmiTagSize + kSmiShiftSize)) + : element_size_shift; + int base_offset = instr->base_offset(); + + if (elements_kind == EXTERNAL_FLOAT32_ELEMENTS || + elements_kind == FLOAT32_ELEMENTS || + elements_kind == EXTERNAL_FLOAT64_ELEMENTS || + elements_kind == FLOAT64_ELEMENTS) { + Register address = scratch0(); + FPURegister value(ToDoubleRegister(instr->value())); + if (key_is_constant) { + if (constant_key != 0) { + __ Daddu(address, external_pointer, + Operand(constant_key << element_size_shift)); + } else { + address = external_pointer; + } + } else { + if (shift_size < 0) { + if (shift_size == -32) { + __ dsra32(address, key, 0); + } else { + __ dsra(address, key, -shift_size); + } + } else { + __ dsll(address, key, shift_size); + } + __ Daddu(address, external_pointer, address); + } + + if (elements_kind == EXTERNAL_FLOAT32_ELEMENTS || + elements_kind == FLOAT32_ELEMENTS) { + __ cvt_s_d(double_scratch0(), value); + __ swc1(double_scratch0(), MemOperand(address, base_offset)); + } else { // Storing doubles, not floats. + __ sdc1(value, MemOperand(address, base_offset)); + } + } else { + Register value(ToRegister(instr->value())); + MemOperand mem_operand = PrepareKeyedOperand( + key, external_pointer, key_is_constant, constant_key, + element_size_shift, shift_size, + base_offset); + switch (elements_kind) { + case EXTERNAL_UINT8_CLAMPED_ELEMENTS: + case EXTERNAL_INT8_ELEMENTS: + case EXTERNAL_UINT8_ELEMENTS: + case UINT8_ELEMENTS: + case UINT8_CLAMPED_ELEMENTS: + case INT8_ELEMENTS: + __ sb(value, mem_operand); + break; + case EXTERNAL_INT16_ELEMENTS: + case EXTERNAL_UINT16_ELEMENTS: + case INT16_ELEMENTS: + case UINT16_ELEMENTS: + __ sh(value, mem_operand); + break; + case EXTERNAL_INT32_ELEMENTS: + case EXTERNAL_UINT32_ELEMENTS: + case INT32_ELEMENTS: + case UINT32_ELEMENTS: + __ sw(value, mem_operand); + break; + case FLOAT32_ELEMENTS: + case FLOAT64_ELEMENTS: + case EXTERNAL_FLOAT32_ELEMENTS: + case EXTERNAL_FLOAT64_ELEMENTS: + case FAST_DOUBLE_ELEMENTS: + case FAST_ELEMENTS: + case FAST_SMI_ELEMENTS: + case FAST_HOLEY_DOUBLE_ELEMENTS: + case FAST_HOLEY_ELEMENTS: + case FAST_HOLEY_SMI_ELEMENTS: + case DICTIONARY_ELEMENTS: + case SLOPPY_ARGUMENTS_ELEMENTS: + UNREACHABLE(); + break; + } + } +} + + +void LCodeGen::DoStoreKeyedFixedDoubleArray(LStoreKeyed* instr) { + DoubleRegister value = ToDoubleRegister(instr->value()); + Register elements = ToRegister(instr->elements()); + Register scratch = scratch0(); + DoubleRegister double_scratch = double_scratch0(); + bool key_is_constant = instr->key()->IsConstantOperand(); + int base_offset = instr->base_offset(); + Label not_nan, done; + + // Calculate the effective address of the slot in the array to store the + // double value. + int element_size_shift = ElementsKindToShiftSize(FAST_DOUBLE_ELEMENTS); + if (key_is_constant) { + int constant_key = ToInteger32(LConstantOperand::cast(instr->key())); + if (constant_key & 0xF0000000) { + Abort(kArrayIndexConstantValueTooBig); + } + __ Daddu(scratch, elements, + Operand((constant_key << element_size_shift) + base_offset)); + } else { + int shift_size = (instr->hydrogen()->key()->representation().IsSmi()) + ? 
(element_size_shift - (kSmiTagSize + kSmiShiftSize))
+        : element_size_shift;
+    __ Daddu(scratch, elements, Operand(base_offset));
+    DCHECK((shift_size == 3) || (shift_size == -29));
+    if (shift_size == 3) {
+      __ dsll(at, ToRegister(instr->key()), 3);
+    } else if (shift_size == -29) {
+      __ dsra(at, ToRegister(instr->key()), 29);
+    }
+    __ Daddu(scratch, scratch, at);
+  }
+
+  if (instr->NeedsCanonicalization()) {
+    Label is_nan;
+    // Check for NaN. All NaNs must be canonicalized.
+    __ BranchF(NULL, &is_nan, eq, value, value);
+    __ Branch(&not_nan);
+
+    // Only load canonical NaN if the comparison above set the overflow.
+    __ bind(&is_nan);
+    __ LoadRoot(at, Heap::kNanValueRootIndex);
+    __ ldc1(double_scratch, FieldMemOperand(at, HeapNumber::kValueOffset));
+    __ sdc1(double_scratch, MemOperand(scratch, 0));
+    __ Branch(&done);
+  }
+
+  __ bind(&not_nan);
+  __ sdc1(value, MemOperand(scratch, 0));
+  __ bind(&done);
+}
+
+
+void LCodeGen::DoStoreKeyedFixedArray(LStoreKeyed* instr) {
+  Register value = ToRegister(instr->value());
+  Register elements = ToRegister(instr->elements());
+  Register key = instr->key()->IsRegister() ? ToRegister(instr->key())
+                                            : no_reg;
+  Register scratch = scratch0();
+  Register store_base = scratch;
+  int offset = instr->base_offset();
+
+  // Do the store.
+  if (instr->key()->IsConstantOperand()) {
+    DCHECK(!instr->hydrogen()->NeedsWriteBarrier());
+    LConstantOperand* const_operand = LConstantOperand::cast(instr->key());
+    offset += ToInteger32(const_operand) * kPointerSize;
+    store_base = elements;
+  } else {
+    // Even though the HStoreKeyed instruction forces the input
+    // representation for the key to be an integer, the input gets replaced
+    // during bound check elimination with the index argument to the bounds
+    // check, which can be tagged, so that case must be handled here, too.
+    if (instr->hydrogen()->key()->representation().IsSmi()) {
+      __ SmiScale(scratch, key, kPointerSizeLog2);
+      __ daddu(store_base, elements, scratch);
+    } else {
+      __ dsll(scratch, key, kPointerSizeLog2);
+      __ daddu(store_base, elements, scratch);
+    }
+  }
+
+  Representation representation = instr->hydrogen()->value()->representation();
+  if (representation.IsInteger32() && SmiValuesAre32Bits()) {
+    DCHECK(instr->hydrogen()->store_mode() == STORE_TO_INITIALIZED_ENTRY);
+    DCHECK(instr->hydrogen()->elements_kind() == FAST_SMI_ELEMENTS);
+    if (FLAG_debug_code) {
+      Register temp = scratch1();
+      __ Load(temp, MemOperand(store_base, offset), Representation::Smi());
+      __ AssertSmi(temp);
+    }
+
+    // Store int value directly to upper half of the smi.
+    STATIC_ASSERT(kSmiTag == 0);
+    STATIC_ASSERT(kSmiTagSize + kSmiShiftSize == 32);
+    offset += kPointerSize / 2;
+    representation = Representation::Integer32();
+  }
+
+  __ Store(value, MemOperand(store_base, offset), representation);
+
+  if (instr->hydrogen()->NeedsWriteBarrier()) {
+    SmiCheck check_needed =
+        instr->hydrogen()->value()->type().IsHeapObject()
+            ? OMIT_SMI_CHECK : INLINE_SMI_CHECK;
+    // Compute address of modified element and store it into key register.
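+    //
+    // What RecordWrite guards, roughly (illustrative pseudocode; the real
+    // barrier also cooperates with incremental marking):
+    //
+    //   *slot = value;
+    //   if (InNewSpace(value) && !InNewSpace(object))
+    //     remembered_set.Insert(slot);   // EMIT_REMEMBERED_SET
+    //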
+ __ Daddu(key, store_base, Operand(offset)); + __ RecordWrite(elements, + key, + value, + GetRAState(), + kSaveFPRegs, + EMIT_REMEMBERED_SET, + check_needed, + instr->hydrogen()->PointersToHereCheckForValue()); + } +} + + +void LCodeGen::DoStoreKeyed(LStoreKeyed* instr) { + // By cases: external, fast double + if (instr->is_typed_elements()) { + DoStoreKeyedExternalArray(instr); + } else if (instr->hydrogen()->value()->representation().IsDouble()) { + DoStoreKeyedFixedDoubleArray(instr); + } else { + DoStoreKeyedFixedArray(instr); + } +} + + +void LCodeGen::DoStoreKeyedGeneric(LStoreKeyedGeneric* instr) { + DCHECK(ToRegister(instr->context()).is(cp)); + DCHECK(ToRegister(instr->object()).is(KeyedStoreIC::ReceiverRegister())); + DCHECK(ToRegister(instr->key()).is(KeyedStoreIC::NameRegister())); + DCHECK(ToRegister(instr->value()).is(KeyedStoreIC::ValueRegister())); + + Handle<Code> ic = (instr->strict_mode() == STRICT) + ? isolate()->builtins()->KeyedStoreIC_Initialize_Strict() + : isolate()->builtins()->KeyedStoreIC_Initialize(); + CallCode(ic, RelocInfo::CODE_TARGET, instr); +} + + +void LCodeGen::DoTransitionElementsKind(LTransitionElementsKind* instr) { + Register object_reg = ToRegister(instr->object()); + Register scratch = scratch0(); + + Handle<Map> from_map = instr->original_map(); + Handle<Map> to_map = instr->transitioned_map(); + ElementsKind from_kind = instr->from_kind(); + ElementsKind to_kind = instr->to_kind(); + + Label not_applicable; + __ ld(scratch, FieldMemOperand(object_reg, HeapObject::kMapOffset)); + __ Branch(&not_applicable, ne, scratch, Operand(from_map)); + + if (IsSimpleMapChangeTransition(from_kind, to_kind)) { + Register new_map_reg = ToRegister(instr->new_map_temp()); + __ li(new_map_reg, Operand(to_map)); + __ sd(new_map_reg, FieldMemOperand(object_reg, HeapObject::kMapOffset)); + // Write barrier.
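+ // Only the map word changed here, so the map-specific barrier below is + // sufficient; no barrier is needed for the elements themselves.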
+ __ RecordWriteForMap(object_reg, + new_map_reg, + scratch, + GetRAState(), + kDontSaveFPRegs); + } else { + DCHECK(object_reg.is(a0)); + DCHECK(ToRegister(instr->context()).is(cp)); + PushSafepointRegistersScope scope(this); + __ li(a1, Operand(to_map)); + bool is_js_array = from_map->instance_type() == JS_ARRAY_TYPE; + TransitionElementsKindStub stub(isolate(), from_kind, to_kind, is_js_array); + __ CallStub(&stub); + RecordSafepointWithRegisters( + instr->pointer_map(), 0, Safepoint::kLazyDeopt); + } + __ bind(&not_applicable); +} + + +void LCodeGen::DoTrapAllocationMemento(LTrapAllocationMemento* instr) { + Register object = ToRegister(instr->object()); + Register temp = ToRegister(instr->temp()); + Label no_memento_found; + __ TestJSArrayForAllocationMemento(object, temp, &no_memento_found, + ne, &no_memento_found); + DeoptimizeIf(al, instr->environment()); + __ bind(&no_memento_found); +} + + +void LCodeGen::DoStringAdd(LStringAdd* instr) { + DCHECK(ToRegister(instr->context()).is(cp)); + DCHECK(ToRegister(instr->left()).is(a1)); + DCHECK(ToRegister(instr->right()).is(a0)); + StringAddStub stub(isolate(), + instr->hydrogen()->flags(), + instr->hydrogen()->pretenure_flag()); + CallCode(stub.GetCode(), RelocInfo::CODE_TARGET, instr); +} + + +void LCodeGen::DoStringCharCodeAt(LStringCharCodeAt* instr) { + class DeferredStringCharCodeAt V8_FINAL : public LDeferredCode { + public: + DeferredStringCharCodeAt(LCodeGen* codegen, LStringCharCodeAt* instr) + : LDeferredCode(codegen), instr_(instr) { } + virtual void Generate() V8_OVERRIDE { + codegen()->DoDeferredStringCharCodeAt(instr_); + } + virtual LInstruction* instr() V8_OVERRIDE { return instr_; } + private: + LStringCharCodeAt* instr_; + }; + + DeferredStringCharCodeAt* deferred = + new(zone()) DeferredStringCharCodeAt(this, instr); + StringCharLoadGenerator::Generate(masm(), + ToRegister(instr->string()), + ToRegister(instr->index()), + ToRegister(instr->result()), + deferred->entry()); + __ bind(deferred->exit()); +} + + +void LCodeGen::DoDeferredStringCharCodeAt(LStringCharCodeAt* instr) { + Register string = ToRegister(instr->string()); + Register result = ToRegister(instr->result()); + Register scratch = scratch0(); + + // TODO(3095996): Get rid of this. For now, we need to make the + // result register contain a valid pointer because it is already + // contained in the register pointer map. + __ mov(result, zero_reg); + + PushSafepointRegistersScope scope(this); + __ push(string); + // Push the index as a smi. This is safe because of the checks in + // DoStringCharCodeAt above.
+ if (instr->index()->IsConstantOperand()) { + int const_index = ToInteger32(LConstantOperand::cast(instr->index())); + __ Daddu(scratch, zero_reg, Operand(Smi::FromInt(const_index))); + __ push(scratch); + } else { + Register index = ToRegister(instr->index()); + __ SmiTag(index); + __ push(index); + } + CallRuntimeFromDeferred(Runtime::kStringCharCodeAtRT, 2, instr, + instr->context()); + __ AssertSmi(v0); + __ SmiUntag(v0); + __ StoreToSafepointRegisterSlot(v0, result); +} + + +void LCodeGen::DoStringCharFromCode(LStringCharFromCode* instr) { + class DeferredStringCharFromCode V8_FINAL : public LDeferredCode { + public: + DeferredStringCharFromCode(LCodeGen* codegen, LStringCharFromCode* instr) + : LDeferredCode(codegen), instr_(instr) { } + virtual void Generate() V8_OVERRIDE { + codegen()->DoDeferredStringCharFromCode(instr_); + } + virtual LInstruction* instr() V8_OVERRIDE { return instr_; } + private: + LStringCharFromCode* instr_; + }; + + DeferredStringCharFromCode* deferred = + new(zone()) DeferredStringCharFromCode(this, instr); + + DCHECK(instr->hydrogen()->value()->representation().IsInteger32()); + Register char_code = ToRegister(instr->char_code()); + Register result = ToRegister(instr->result()); + Register scratch = scratch0(); + DCHECK(!char_code.is(result)); + + __ Branch(deferred->entry(), hi, + char_code, Operand(String::kMaxOneByteCharCode)); + __ LoadRoot(result, Heap::kSingleCharacterStringCacheRootIndex); + __ dsll(scratch, char_code, kPointerSizeLog2); + __ Daddu(result, result, scratch); + __ ld(result, FieldMemOperand(result, FixedArray::kHeaderSize)); + __ LoadRoot(scratch, Heap::kUndefinedValueRootIndex); + __ Branch(deferred->entry(), eq, result, Operand(scratch)); + __ bind(deferred->exit()); +} + + +void LCodeGen::DoDeferredStringCharFromCode(LStringCharFromCode* instr) { + Register char_code = ToRegister(instr->char_code()); + Register result = ToRegister(instr->result()); + + // TODO(3095996): Get rid of this. For now, we need to make the + // result register contain a valid pointer because it is already + // contained in the register pointer map. + __ mov(result, zero_reg); + + PushSafepointRegistersScope scope(this); + __ SmiTag(char_code); + __ push(char_code); + CallRuntimeFromDeferred(Runtime::kCharFromCode, 1, instr, instr->context()); + __ StoreToSafepointRegisterSlot(v0, result); +} + + +void LCodeGen::DoInteger32ToDouble(LInteger32ToDouble* instr) { + LOperand* input = instr->value(); + DCHECK(input->IsRegister() || input->IsStackSlot()); + LOperand* output = instr->result(); + DCHECK(output->IsDoubleRegister()); + FPURegister single_scratch = double_scratch0().low(); + if (input->IsStackSlot()) { + Register scratch = scratch0(); + __ ld(scratch, ToMemOperand(input)); + __ mtc1(scratch, single_scratch); + } else { + __ mtc1(ToRegister(input), single_scratch); + } + __ cvt_d_w(ToDoubleRegister(output), single_scratch); +} + + +void LCodeGen::DoUint32ToDouble(LUint32ToDouble* instr) { + LOperand* input = instr->value(); + LOperand* output = instr->result(); + + FPURegister dbl_scratch = double_scratch0(); + __ mtc1(ToRegister(input), dbl_scratch); + __ Cvt_d_uw(ToDoubleRegister(output), dbl_scratch, f22); // TODO(plind): f22? 
+} + + +void LCodeGen::DoNumberTagU(LNumberTagU* instr) { + class DeferredNumberTagU V8_FINAL : public LDeferredCode { + public: + DeferredNumberTagU(LCodeGen* codegen, LNumberTagU* instr) + : LDeferredCode(codegen), instr_(instr) { } + virtual void Generate() V8_OVERRIDE { + codegen()->DoDeferredNumberTagIU(instr_, + instr_->value(), + instr_->temp1(), + instr_->temp2(), + UNSIGNED_INT32); + } + virtual LInstruction* instr() V8_OVERRIDE { return instr_; } + private: + LNumberTagU* instr_; + }; + + Register input = ToRegister(instr->value()); + Register result = ToRegister(instr->result()); + + DeferredNumberTagU* deferred = new(zone()) DeferredNumberTagU(this, instr); + __ Branch(deferred->entry(), hi, input, Operand(Smi::kMaxValue)); + __ SmiTag(result, input); + __ bind(deferred->exit()); +} + + +void LCodeGen::DoDeferredNumberTagIU(LInstruction* instr, + LOperand* value, + LOperand* temp1, + LOperand* temp2, + IntegerSignedness signedness) { + Label done, slow; + Register src = ToRegister(value); + Register dst = ToRegister(instr->result()); + Register tmp1 = scratch0(); + Register tmp2 = ToRegister(temp1); + Register tmp3 = ToRegister(temp2); + DoubleRegister dbl_scratch = double_scratch0(); + + if (signedness == SIGNED_INT32) { + // There was overflow, so bits 30 and 31 of the original integer + // disagree. Try to allocate a heap number in new space and store + // the value in there. If that fails, call the runtime system. + if (dst.is(src)) { + __ SmiUntag(src, dst); + __ Xor(src, src, Operand(0x80000000)); + } + __ mtc1(src, dbl_scratch); + __ cvt_d_w(dbl_scratch, dbl_scratch); + } else { + __ mtc1(src, dbl_scratch); + __ Cvt_d_uw(dbl_scratch, dbl_scratch, f22); + } + + if (FLAG_inline_new) { + __ LoadRoot(tmp3, Heap::kHeapNumberMapRootIndex); + __ AllocateHeapNumber(dst, tmp1, tmp2, tmp3, &slow, TAG_RESULT); + __ Branch(&done); + } + + // Slow case: Call the runtime system to do the number allocation. + __ bind(&slow); + { + // TODO(3095996): Put a valid pointer value in the stack slot where the + // result register is stored, as this register is in the pointer map, but + // contains an integer value. + __ mov(dst, zero_reg); + // Preserve the value of all registers. + PushSafepointRegistersScope scope(this); + + // NumberTagI and NumberTagD use the context from the frame, rather than + // the environment's HContext or HInlinedContext value. + // They only call Runtime::kAllocateHeapNumber. + // The corresponding HChange instructions are added in a phase that does + // not have easy access to the local context. + __ ld(cp, MemOperand(fp, StandardFrameConstants::kContextOffset)); + __ CallRuntimeSaveDoubles(Runtime::kAllocateHeapNumber); + RecordSafepointWithRegisters( + instr->pointer_map(), 0, Safepoint::kNoLazyDeopt); + __ StoreToSafepointRegisterSlot(v0, dst); + } + + // Done. Put the value in dbl_scratch into the value of the allocated heap + // number. 
+ __ bind(&done); + __ sdc1(dbl_scratch, FieldMemOperand(dst, HeapNumber::kValueOffset)); +} + + +void LCodeGen::DoNumberTagD(LNumberTagD* instr) { + class DeferredNumberTagD V8_FINAL : public LDeferredCode { + public: + DeferredNumberTagD(LCodeGen* codegen, LNumberTagD* instr) + : LDeferredCode(codegen), instr_(instr) { } + virtual void Generate() V8_OVERRIDE { + codegen()->DoDeferredNumberTagD(instr_); + } + virtual LInstruction* instr() V8_OVERRIDE { return instr_; } + private: + LNumberTagD* instr_; + }; + + DoubleRegister input_reg = ToDoubleRegister(instr->value()); + Register scratch = scratch0(); + Register reg = ToRegister(instr->result()); + Register temp1 = ToRegister(instr->temp()); + Register temp2 = ToRegister(instr->temp2()); + + DeferredNumberTagD* deferred = new(zone()) DeferredNumberTagD(this, instr); + if (FLAG_inline_new) { + __ LoadRoot(scratch, Heap::kHeapNumberMapRootIndex); + // We want the untagged address first for performance + __ AllocateHeapNumber(reg, temp1, temp2, scratch, deferred->entry(), + DONT_TAG_RESULT); + } else { + __ Branch(deferred->entry()); + } + __ bind(deferred->exit()); + __ sdc1(input_reg, MemOperand(reg, HeapNumber::kValueOffset)); + // Now that we have finished with the object's real address tag it + __ Daddu(reg, reg, kHeapObjectTag); +} + + +void LCodeGen::DoDeferredNumberTagD(LNumberTagD* instr) { + // TODO(3095996): Get rid of this. For now, we need to make the + // result register contain a valid pointer because it is already + // contained in the register pointer map. + Register reg = ToRegister(instr->result()); + __ mov(reg, zero_reg); + + PushSafepointRegistersScope scope(this); + // NumberTagI and NumberTagD use the context from the frame, rather than + // the environment's HContext or HInlinedContext value. + // They only call Runtime::kAllocateHeapNumber. + // The corresponding HChange instructions are added in a phase that does + // not have easy access to the local context. + __ ld(cp, MemOperand(fp, StandardFrameConstants::kContextOffset)); + __ CallRuntimeSaveDoubles(Runtime::kAllocateHeapNumber); + RecordSafepointWithRegisters( + instr->pointer_map(), 0, Safepoint::kNoLazyDeopt); + __ Dsubu(v0, v0, kHeapObjectTag); + __ StoreToSafepointRegisterSlot(v0, reg); +} + + +void LCodeGen::DoSmiTag(LSmiTag* instr) { + HChange* hchange = instr->hydrogen(); + Register input = ToRegister(instr->value()); + Register output = ToRegister(instr->result()); + if (hchange->CheckFlag(HValue::kCanOverflow) && + hchange->value()->CheckFlag(HValue::kUint32)) { + __ And(at, input, Operand(0x80000000)); + DeoptimizeIf(ne, instr->environment(), at, Operand(zero_reg)); + } + if (hchange->CheckFlag(HValue::kCanOverflow) && + !hchange->value()->CheckFlag(HValue::kUint32)) { + __ SmiTagCheckOverflow(output, input, at); + DeoptimizeIf(lt, instr->environment(), at, Operand(zero_reg)); + } else { + __ SmiTag(output, input); + } +} + + +void LCodeGen::DoSmiUntag(LSmiUntag* instr) { + Register scratch = scratch0(); + Register input = ToRegister(instr->value()); + Register result = ToRegister(instr->result()); + if (instr->needs_check()) { + STATIC_ASSERT(kHeapObjectTag == 1); + // If the input is a HeapObject, value of scratch won't be zero. 
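+ // Smis have a clear low bit (kSmiTag == 0), so masking the input with + // kHeapObjectTag leaves zero exactly when the input is a smi.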
+ __ And(scratch, input, Operand(kHeapObjectTag)); + __ SmiUntag(result, input); + DeoptimizeIf(ne, instr->environment(), scratch, Operand(zero_reg)); + } else { + __ SmiUntag(result, input); + } +} + + +void LCodeGen::EmitNumberUntagD(Register input_reg, + DoubleRegister result_reg, + bool can_convert_undefined_to_nan, + bool deoptimize_on_minus_zero, + LEnvironment* env, + NumberUntagDMode mode) { + Register scratch = scratch0(); + Label convert, load_smi, done; + if (mode == NUMBER_CANDIDATE_IS_ANY_TAGGED) { + // Smi check. + __ UntagAndJumpIfSmi(scratch, input_reg, &load_smi); + // Heap number map check. + __ ld(scratch, FieldMemOperand(input_reg, HeapObject::kMapOffset)); + __ LoadRoot(at, Heap::kHeapNumberMapRootIndex); + if (can_convert_undefined_to_nan) { + __ Branch(&convert, ne, scratch, Operand(at)); + } else { + DeoptimizeIf(ne, env, scratch, Operand(at)); + } + // Load heap number. + __ ldc1(result_reg, FieldMemOperand(input_reg, HeapNumber::kValueOffset)); + if (deoptimize_on_minus_zero) { + __ mfc1(at, result_reg); + __ Branch(&done, ne, at, Operand(zero_reg)); + __ mfhc1(scratch, result_reg); // Get exponent/sign bits. + DeoptimizeIf(eq, env, scratch, Operand(HeapNumber::kSignMask)); + } + __ Branch(&done); + if (can_convert_undefined_to_nan) { + __ bind(&convert); + // Convert undefined (and hole) to NaN. + __ LoadRoot(at, Heap::kUndefinedValueRootIndex); + DeoptimizeIf(ne, env, input_reg, Operand(at)); + __ LoadRoot(scratch, Heap::kNanValueRootIndex); + __ ldc1(result_reg, FieldMemOperand(scratch, HeapNumber::kValueOffset)); + __ Branch(&done); + } + } else { + __ SmiUntag(scratch, input_reg); + DCHECK(mode == NUMBER_CANDIDATE_IS_SMI); + } + // Smi to double register conversion + __ bind(&load_smi); + // scratch: untagged value of input_reg + __ mtc1(scratch, result_reg); + __ cvt_d_w(result_reg, result_reg); + __ bind(&done); +} + + +void LCodeGen::DoDeferredTaggedToI(LTaggedToI* instr) { + Register input_reg = ToRegister(instr->value()); + Register scratch1 = scratch0(); + Register scratch2 = ToRegister(instr->temp()); + DoubleRegister double_scratch = double_scratch0(); + DoubleRegister double_scratch2 = ToDoubleRegister(instr->temp2()); + + DCHECK(!scratch1.is(input_reg) && !scratch1.is(scratch2)); + DCHECK(!scratch2.is(input_reg) && !scratch2.is(scratch1)); + + Label done; + + // The input is a tagged HeapObject. + // Heap number map check. + __ ld(scratch1, FieldMemOperand(input_reg, HeapObject::kMapOffset)); + __ LoadRoot(at, Heap::kHeapNumberMapRootIndex); + // This 'at' value and scratch1 map value are used for tests in both clauses + // of the if. + + if (instr->truncating()) { + // Performs a truncating conversion of a floating point number as used by + // the JS bitwise operations. + Label no_heap_number, check_bools, check_false; + // Check HeapNumber map. + __ Branch(USE_DELAY_SLOT, &no_heap_number, ne, scratch1, Operand(at)); + __ mov(scratch2, input_reg); // In delay slot. + __ TruncateHeapNumberToI(input_reg, scratch2); + __ Branch(&done); + + // Check for Oddballs. Undefined/False is converted to zero and True to one + // for truncating conversions. + __ bind(&no_heap_number); + __ LoadRoot(at, Heap::kUndefinedValueRootIndex); + __ Branch(&check_bools, ne, input_reg, Operand(at)); + DCHECK(ToRegister(instr->result()).is(input_reg)); + __ Branch(USE_DELAY_SLOT, &done); + __ mov(input_reg, zero_reg); // In delay slot. 
+ + __ bind(&check_bools); + __ LoadRoot(at, Heap::kTrueValueRootIndex); + __ Branch(&check_false, ne, scratch2, Operand(at)); + __ Branch(USE_DELAY_SLOT, &done); + __ li(input_reg, Operand(1)); // In delay slot. + + __ bind(&check_false); + __ LoadRoot(at, Heap::kFalseValueRootIndex); + DeoptimizeIf(ne, instr->environment(), scratch2, Operand(at)); + __ Branch(USE_DELAY_SLOT, &done); + __ mov(input_reg, zero_reg); // In delay slot. + } else { + // Deoptimize if we don't have a heap number. + DeoptimizeIf(ne, instr->environment(), scratch1, Operand(at)); + + // Load the double value. + __ ldc1(double_scratch, + FieldMemOperand(input_reg, HeapNumber::kValueOffset)); + + Register except_flag = scratch2; + __ EmitFPUTruncate(kRoundToZero, + input_reg, + double_scratch, + scratch1, + double_scratch2, + except_flag, + kCheckForInexactConversion); + + // Deopt if the operation did not succeed. + DeoptimizeIf(ne, instr->environment(), except_flag, Operand(zero_reg)); + + if (instr->hydrogen()->CheckFlag(HValue::kBailoutOnMinusZero)) { + __ Branch(&done, ne, input_reg, Operand(zero_reg)); + + __ mfhc1(scratch1, double_scratch); // Get exponent/sign bits. + __ And(scratch1, scratch1, Operand(HeapNumber::kSignMask)); + DeoptimizeIf(ne, instr->environment(), scratch1, Operand(zero_reg)); + } + } + __ bind(&done); +} + + +void LCodeGen::DoTaggedToI(LTaggedToI* instr) { + class DeferredTaggedToI V8_FINAL : public LDeferredCode { + public: + DeferredTaggedToI(LCodeGen* codegen, LTaggedToI* instr) + : LDeferredCode(codegen), instr_(instr) { } + virtual void Generate() V8_OVERRIDE { + codegen()->DoDeferredTaggedToI(instr_); + } + virtual LInstruction* instr() V8_OVERRIDE { return instr_; } + private: + LTaggedToI* instr_; + }; + + LOperand* input = instr->value(); + DCHECK(input->IsRegister()); + DCHECK(input->Equals(instr->result())); + + Register input_reg = ToRegister(input); + + if (instr->hydrogen()->value()->representation().IsSmi()) { + __ SmiUntag(input_reg); + } else { + DeferredTaggedToI* deferred = new(zone()) DeferredTaggedToI(this, instr); + + // Let the deferred code handle the HeapObject case. + __ JumpIfNotSmi(input_reg, deferred->entry()); + + // Smi to int32 conversion. + __ SmiUntag(input_reg); + __ bind(deferred->exit()); + } +} + + +void LCodeGen::DoNumberUntagD(LNumberUntagD* instr) { + LOperand* input = instr->value(); + DCHECK(input->IsRegister()); + LOperand* result = instr->result(); + DCHECK(result->IsDoubleRegister()); + + Register input_reg = ToRegister(input); + DoubleRegister result_reg = ToDoubleRegister(result); + + HValue* value = instr->hydrogen()->value(); + NumberUntagDMode mode = value->representation().IsSmi() + ? NUMBER_CANDIDATE_IS_SMI : NUMBER_CANDIDATE_IS_ANY_TAGGED; + + EmitNumberUntagD(input_reg, result_reg, + instr->hydrogen()->can_convert_undefined_to_nan(), + instr->hydrogen()->deoptimize_on_minus_zero(), + instr->environment(), + mode); +} + + +void LCodeGen::DoDoubleToI(LDoubleToI* instr) { + Register result_reg = ToRegister(instr->result()); + Register scratch1 = scratch0(); + DoubleRegister double_input = ToDoubleRegister(instr->value()); + + if (instr->truncating()) { + __ TruncateDoubleToI(result_reg, double_input); + } else { + Register except_flag = LCodeGen::scratch1(); + + __ EmitFPUTruncate(kRoundToMinusInf, + result_reg, + double_input, + scratch1, + double_scratch0(), + except_flag, + kCheckForInexactConversion); + + // Deopt if the operation did not succeed (except_flag != 0). 
+ DeoptimizeIf(ne, instr->environment(), except_flag, Operand(zero_reg)); + + if (instr->hydrogen()->CheckFlag(HValue::kBailoutOnMinusZero)) { + Label done; + __ Branch(&done, ne, result_reg, Operand(zero_reg)); + __ mfhc1(scratch1, double_input); // Get exponent/sign bits. + __ And(scratch1, scratch1, Operand(HeapNumber::kSignMask)); + DeoptimizeIf(ne, instr->environment(), scratch1, Operand(zero_reg)); + __ bind(&done); + } + } +} + + +void LCodeGen::DoDoubleToSmi(LDoubleToSmi* instr) { + Register result_reg = ToRegister(instr->result()); + Register scratch1 = LCodeGen::scratch0(); + DoubleRegister double_input = ToDoubleRegister(instr->value()); + + if (instr->truncating()) { + __ TruncateDoubleToI(result_reg, double_input); + } else { + Register except_flag = LCodeGen::scratch1(); + + __ EmitFPUTruncate(kRoundToMinusInf, + result_reg, + double_input, + scratch1, + double_scratch0(), + except_flag, + kCheckForInexactConversion); + + // Deopt if the operation did not succeed (except_flag != 0). + DeoptimizeIf(ne, instr->environment(), except_flag, Operand(zero_reg)); + + if (instr->hydrogen()->CheckFlag(HValue::kBailoutOnMinusZero)) { + Label done; + __ Branch(&done, ne, result_reg, Operand(zero_reg)); + __ mfhc1(scratch1, double_input); // Get exponent/sign bits. + __ And(scratch1, scratch1, Operand(HeapNumber::kSignMask)); + DeoptimizeIf(ne, instr->environment(), scratch1, Operand(zero_reg)); + __ bind(&done); + } + } + __ SmiTag(result_reg, result_reg); +} + + +void LCodeGen::DoCheckSmi(LCheckSmi* instr) { + LOperand* input = instr->value(); + __ SmiTst(ToRegister(input), at); + DeoptimizeIf(ne, instr->environment(), at, Operand(zero_reg)); +} + + +void LCodeGen::DoCheckNonSmi(LCheckNonSmi* instr) { + if (!instr->hydrogen()->value()->type().IsHeapObject()) { + LOperand* input = instr->value(); + __ SmiTst(ToRegister(input), at); + DeoptimizeIf(eq, instr->environment(), at, Operand(zero_reg)); + } +} + + +void LCodeGen::DoCheckInstanceType(LCheckInstanceType* instr) { + Register input = ToRegister(instr->value()); + Register scratch = scratch0(); + + __ GetObjectType(input, scratch, scratch); + + if (instr->hydrogen()->is_interval_check()) { + InstanceType first; + InstanceType last; + instr->hydrogen()->GetCheckInterval(&first, &last); + + // If there is only one type in the interval check for equality. + if (first == last) { + DeoptimizeIf(ne, instr->environment(), scratch, Operand(first)); + } else { + DeoptimizeIf(lo, instr->environment(), scratch, Operand(first)); + // Omit check for the last type. + if (last != LAST_TYPE) { + DeoptimizeIf(hi, instr->environment(), scratch, Operand(last)); + } + } + } else { + uint8_t mask; + uint8_t tag; + instr->hydrogen()->GetCheckMaskAndTag(&mask, &tag); + + if (IsPowerOf2(mask)) { + DCHECK(tag == 0 || IsPowerOf2(tag)); + __ And(at, scratch, mask); + DeoptimizeIf(tag == 0 ? 
ne : eq, instr->environment(), + at, Operand(zero_reg)); + } else { + __ And(scratch, scratch, Operand(mask)); + DeoptimizeIf(ne, instr->environment(), scratch, Operand(tag)); + } + } +} + + +void LCodeGen::DoCheckValue(LCheckValue* instr) { + Register reg = ToRegister(instr->value()); + Handle<HeapObject> object = instr->hydrogen()->object().handle(); + AllowDeferredHandleDereference smi_check; + if (isolate()->heap()->InNewSpace(*object)) { + Register reg = ToRegister(instr->value()); + Handle<Cell> cell = isolate()->factory()->NewCell(object); + __ li(at, Operand(Handle<Object>(cell))); + __ ld(at, FieldMemOperand(at, Cell::kValueOffset)); + DeoptimizeIf(ne, instr->environment(), reg, + Operand(at)); + } else { + DeoptimizeIf(ne, instr->environment(), reg, + Operand(object)); + } +} + + +void LCodeGen::DoDeferredInstanceMigration(LCheckMaps* instr, Register object) { + { + PushSafepointRegistersScope scope(this); + __ push(object); + __ mov(cp, zero_reg); + __ CallRuntimeSaveDoubles(Runtime::kTryMigrateInstance); + RecordSafepointWithRegisters( + instr->pointer_map(), 1, Safepoint::kNoLazyDeopt); + __ StoreToSafepointRegisterSlot(v0, scratch0()); + } + __ SmiTst(scratch0(), at); + DeoptimizeIf(eq, instr->environment(), at, Operand(zero_reg)); +} + + +void LCodeGen::DoCheckMaps(LCheckMaps* instr) { + class DeferredCheckMaps V8_FINAL : public LDeferredCode { + public: + DeferredCheckMaps(LCodeGen* codegen, LCheckMaps* instr, Register object) + : LDeferredCode(codegen), instr_(instr), object_(object) { + SetExit(check_maps()); + } + virtual void Generate() V8_OVERRIDE { + codegen()->DoDeferredInstanceMigration(instr_, object_); + } + Label* check_maps() { return &check_maps_; } + virtual LInstruction* instr() V8_OVERRIDE { return instr_; } + private: + LCheckMaps* instr_; + Label check_maps_; + Register object_; + }; + + if (instr->hydrogen()->IsStabilityCheck()) { + const UniqueSet<Map>* maps = instr->hydrogen()->maps(); + for (int i = 0; i < maps->size(); ++i) { + AddStabilityDependency(maps->at(i).handle()); + } + return; + } + + Register map_reg = scratch0(); + LOperand* input = instr->value(); + DCHECK(input->IsRegister()); + Register reg = ToRegister(input); + __ ld(map_reg, FieldMemOperand(reg, HeapObject::kMapOffset)); + + DeferredCheckMaps* deferred = NULL; + if (instr->hydrogen()->HasMigrationTarget()) { + deferred = new(zone()) DeferredCheckMaps(this, instr, reg); + __ bind(deferred->check_maps()); + } + + const UniqueSet<Map>* maps = instr->hydrogen()->maps(); + Label success; + for (int i = 0; i < maps->size() - 1; i++) { + Handle<Map> map = maps->at(i).handle(); + __ CompareMapAndBranch(map_reg, map, &success, eq, &success); + } + Handle<Map> map = maps->at(maps->size() - 1).handle(); + // Do the CompareMap() directly within the Branch() and DeoptimizeIf(). 
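+ // The last map is compared outside the loop so that a mismatch can go + // directly to the migration stub (or deopt) without an extra branch.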
+ if (instr->hydrogen()->HasMigrationTarget()) { + __ Branch(deferred->entry(), ne, map_reg, Operand(map)); + } else { + DeoptimizeIf(ne, instr->environment(), map_reg, Operand(map)); + } + + __ bind(&success); +} + + +void LCodeGen::DoClampDToUint8(LClampDToUint8* instr) { + DoubleRegister value_reg = ToDoubleRegister(instr->unclamped()); + Register result_reg = ToRegister(instr->result()); + DoubleRegister temp_reg = ToDoubleRegister(instr->temp()); + __ ClampDoubleToUint8(result_reg, value_reg, temp_reg); +} + + +void LCodeGen::DoClampIToUint8(LClampIToUint8* instr) { + Register unclamped_reg = ToRegister(instr->unclamped()); + Register result_reg = ToRegister(instr->result()); + __ ClampUint8(result_reg, unclamped_reg); +} + + +void LCodeGen::DoClampTToUint8(LClampTToUint8* instr) { + Register scratch = scratch0(); + Register input_reg = ToRegister(instr->unclamped()); + Register result_reg = ToRegister(instr->result()); + DoubleRegister temp_reg = ToDoubleRegister(instr->temp()); + Label is_smi, done, heap_number; + + // Both smi and heap number cases are handled. + __ UntagAndJumpIfSmi(scratch, input_reg, &is_smi); + + // Check for heap number + __ ld(scratch, FieldMemOperand(input_reg, HeapObject::kMapOffset)); + __ Branch(&heap_number, eq, scratch, Operand(factory()->heap_number_map())); + + // Check for undefined. Undefined is converted to zero for clamping + // conversions. + DeoptimizeIf(ne, instr->environment(), input_reg, + Operand(factory()->undefined_value())); + __ mov(result_reg, zero_reg); + __ jmp(&done); + + // Heap number + __ bind(&heap_number); + __ ldc1(double_scratch0(), FieldMemOperand(input_reg, + HeapNumber::kValueOffset)); + __ ClampDoubleToUint8(result_reg, double_scratch0(), temp_reg); + __ jmp(&done); + + __ bind(&is_smi); + __ ClampUint8(result_reg, scratch); + + __ bind(&done); +} + + +void LCodeGen::DoDoubleBits(LDoubleBits* instr) { + DoubleRegister value_reg = ToDoubleRegister(instr->value()); + Register result_reg = ToRegister(instr->result()); + if (instr->hydrogen()->bits() == HDoubleBits::HIGH) { + __ FmoveHigh(result_reg, value_reg); + } else { + __ FmoveLow(result_reg, value_reg); + } +} + + +void LCodeGen::DoConstructDouble(LConstructDouble* instr) { + Register hi_reg = ToRegister(instr->hi()); + Register lo_reg = ToRegister(instr->lo()); + DoubleRegister result_reg = ToDoubleRegister(instr->result()); + __ Move(result_reg, lo_reg, hi_reg); +} + + +void LCodeGen::DoAllocate(LAllocate* instr) { + class DeferredAllocate V8_FINAL : public LDeferredCode { + public: + DeferredAllocate(LCodeGen* codegen, LAllocate* instr) + : LDeferredCode(codegen), instr_(instr) { } + virtual void Generate() V8_OVERRIDE { + codegen()->DoDeferredAllocate(instr_); + } + virtual LInstruction* instr() V8_OVERRIDE { return instr_; } + private: + LAllocate* instr_; + }; + + DeferredAllocate* deferred = + new(zone()) DeferredAllocate(this, instr); + + Register result = ToRegister(instr->result()); + Register scratch = ToRegister(instr->temp1()); + Register scratch2 = ToRegister(instr->temp2()); + + // Allocate memory for the object. 
+ AllocationFlags flags = TAG_OBJECT; + if (instr->hydrogen()->MustAllocateDoubleAligned()) { + flags = static_cast<AllocationFlags>(flags | DOUBLE_ALIGNMENT); + } + if (instr->hydrogen()->IsOldPointerSpaceAllocation()) { + DCHECK(!instr->hydrogen()->IsOldDataSpaceAllocation()); + DCHECK(!instr->hydrogen()->IsNewSpaceAllocation()); + flags = static_cast<AllocationFlags>(flags | PRETENURE_OLD_POINTER_SPACE); + } else if (instr->hydrogen()->IsOldDataSpaceAllocation()) { + DCHECK(!instr->hydrogen()->IsNewSpaceAllocation()); + flags = static_cast<AllocationFlags>(flags | PRETENURE_OLD_DATA_SPACE); + } + if (instr->size()->IsConstantOperand()) { + int32_t size = ToInteger32(LConstantOperand::cast(instr->size())); + if (size <= Page::kMaxRegularHeapObjectSize) { + __ Allocate(size, result, scratch, scratch2, deferred->entry(), flags); + } else { + __ jmp(deferred->entry()); + } + } else { + Register size = ToRegister(instr->size()); + __ Allocate(size, result, scratch, scratch2, deferred->entry(), flags); + } + + __ bind(deferred->exit()); + + if (instr->hydrogen()->MustPrefillWithFiller()) { + STATIC_ASSERT(kHeapObjectTag == 1); + if (instr->size()->IsConstantOperand()) { + int32_t size = ToInteger32(LConstantOperand::cast(instr->size())); + __ li(scratch, Operand(size - kHeapObjectTag)); + } else { + __ Dsubu(scratch, ToRegister(instr->size()), Operand(kHeapObjectTag)); + } + __ li(scratch2, Operand(isolate()->factory()->one_pointer_filler_map())); + Label loop; + __ bind(&loop); + __ Dsubu(scratch, scratch, Operand(kPointerSize)); + __ Daddu(at, result, Operand(scratch)); + __ sd(scratch2, MemOperand(at)); + __ Branch(&loop, ge, scratch, Operand(zero_reg)); + } +} + + +void LCodeGen::DoDeferredAllocate(LAllocate* instr) { + Register result = ToRegister(instr->result()); + + // TODO(3095996): Get rid of this. For now, we need to make the + // result register contain a valid pointer because it is already + // contained in the register pointer map. 
+ __ mov(result, zero_reg); + + PushSafepointRegistersScope scope(this); + if (instr->size()->IsRegister()) { + Register size = ToRegister(instr->size()); + DCHECK(!size.is(result)); + __ SmiTag(size); + __ push(size); + } else { + int32_t size = ToInteger32(LConstantOperand::cast(instr->size())); + if (size >= 0 && size <= Smi::kMaxValue) { + __ li(v0, Operand(Smi::FromInt(size))); + __ Push(v0); + } else { + // We should never get here at runtime => abort + __ stop("invalid allocation size"); + return; + } + } + + int flags = AllocateDoubleAlignFlag::encode( + instr->hydrogen()->MustAllocateDoubleAligned()); + if (instr->hydrogen()->IsOldPointerSpaceAllocation()) { + DCHECK(!instr->hydrogen()->IsOldDataSpaceAllocation()); + DCHECK(!instr->hydrogen()->IsNewSpaceAllocation()); + flags = AllocateTargetSpace::update(flags, OLD_POINTER_SPACE); + } else if (instr->hydrogen()->IsOldDataSpaceAllocation()) { + DCHECK(!instr->hydrogen()->IsNewSpaceAllocation()); + flags = AllocateTargetSpace::update(flags, OLD_DATA_SPACE); + } else { + flags = AllocateTargetSpace::update(flags, NEW_SPACE); + } + __ li(v0, Operand(Smi::FromInt(flags))); + __ Push(v0); + + CallRuntimeFromDeferred( + Runtime::kAllocateInTargetSpace, 2, instr, instr->context()); + __ StoreToSafepointRegisterSlot(v0, result); +} + + +void LCodeGen::DoToFastProperties(LToFastProperties* instr) { + DCHECK(ToRegister(instr->value()).is(a0)); + DCHECK(ToRegister(instr->result()).is(v0)); + __ push(a0); + CallRuntime(Runtime::kToFastProperties, 1, instr); +} + + +void LCodeGen::DoRegExpLiteral(LRegExpLiteral* instr) { + DCHECK(ToRegister(instr->context()).is(cp)); + Label materialized; + // Registers will be used as follows: + // a7 = literals array. + // a1 = regexp literal. + // a0 = regexp literal clone. + // a2 and a4-a6 are used as temporaries. + int literal_offset = + FixedArray::OffsetOfElementAt(instr->hydrogen()->literal_index()); + __ li(a7, instr->hydrogen()->literals()); + __ ld(a1, FieldMemOperand(a7, literal_offset)); + __ LoadRoot(at, Heap::kUndefinedValueRootIndex); + __ Branch(&materialized, ne, a1, Operand(at)); + + // Create regexp literal using runtime function + // Result will be in v0. + __ li(a6, Operand(Smi::FromInt(instr->hydrogen()->literal_index()))); + __ li(a5, Operand(instr->hydrogen()->pattern())); + __ li(a4, Operand(instr->hydrogen()->flags())); + __ Push(a7, a6, a5, a4); + CallRuntime(Runtime::kMaterializeRegExpLiteral, 4, instr); + __ mov(a1, v0); + + __ bind(&materialized); + int size = JSRegExp::kSize + JSRegExp::kInObjectFieldCount * kPointerSize; + Label allocated, runtime_allocate; + + __ Allocate(size, v0, a2, a3, &runtime_allocate, TAG_OBJECT); + __ jmp(&allocated); + + __ bind(&runtime_allocate); + __ li(a0, Operand(Smi::FromInt(size))); + __ Push(a1, a0); + CallRuntime(Runtime::kAllocateInNewSpace, 1, instr); + __ pop(a1); + + __ bind(&allocated); + // Copy the content into the newly allocated memory. + // (Unroll copy loop once for better throughput). 
+ for (int i = 0; i < size - kPointerSize; i += 2 * kPointerSize) { + __ ld(a3, FieldMemOperand(a1, i)); + __ ld(a2, FieldMemOperand(a1, i + kPointerSize)); + __ sd(a3, FieldMemOperand(v0, i)); + __ sd(a2, FieldMemOperand(v0, i + kPointerSize)); + } + if ((size % (2 * kPointerSize)) != 0) { + __ ld(a3, FieldMemOperand(a1, size - kPointerSize)); + __ sd(a3, FieldMemOperand(v0, size - kPointerSize)); + } +} + + +void LCodeGen::DoFunctionLiteral(LFunctionLiteral* instr) { + DCHECK(ToRegister(instr->context()).is(cp)); + // Use the fast case closure allocation code that allocates in new + // space for nested functions that don't need literals cloning. + bool pretenure = instr->hydrogen()->pretenure(); + if (!pretenure && instr->hydrogen()->has_no_literals()) { + FastNewClosureStub stub(isolate(), + instr->hydrogen()->strict_mode(), + instr->hydrogen()->is_generator()); + __ li(a2, Operand(instr->hydrogen()->shared_info())); + CallCode(stub.GetCode(), RelocInfo::CODE_TARGET, instr); + } else { + __ li(a2, Operand(instr->hydrogen()->shared_info())); + __ li(a1, Operand(pretenure ? factory()->true_value() + : factory()->false_value())); + __ Push(cp, a2, a1); + CallRuntime(Runtime::kNewClosure, 3, instr); + } +} + + +void LCodeGen::DoTypeof(LTypeof* instr) { + DCHECK(ToRegister(instr->result()).is(v0)); + Register input = ToRegister(instr->value()); + __ push(input); + CallRuntime(Runtime::kTypeof, 1, instr); +} + + +void LCodeGen::DoTypeofIsAndBranch(LTypeofIsAndBranch* instr) { + Register input = ToRegister(instr->value()); + + Register cmp1 = no_reg; + Operand cmp2 = Operand(no_reg); + + Condition final_branch_condition = EmitTypeofIs(instr->TrueLabel(chunk_), + instr->FalseLabel(chunk_), + input, + instr->type_literal(), + &cmp1, + &cmp2); + + DCHECK(cmp1.is_valid()); + DCHECK(!cmp2.is_reg() || cmp2.rm().is_valid()); + + if (final_branch_condition != kNoCondition) { + EmitBranch(instr, final_branch_condition, cmp1, cmp2); + } +} + + +Condition LCodeGen::EmitTypeofIs(Label* true_label, + Label* false_label, + Register input, + Handle<String> type_name, + Register* cmp1, + Operand* cmp2) { + // This function utilizes the delay slot heavily. This is used to load + // values that are always usable without depending on the type of the input + // register. + Condition final_branch_condition = kNoCondition; + Register scratch = scratch0(); + Factory* factory = isolate()->factory(); + if (String::Equals(type_name, factory->number_string())) { + __ JumpIfSmi(input, true_label); + __ ld(input, FieldMemOperand(input, HeapObject::kMapOffset)); + __ LoadRoot(at, Heap::kHeapNumberMapRootIndex); + *cmp1 = input; + *cmp2 = Operand(at); + final_branch_condition = eq; + + } else if (String::Equals(type_name, factory->string_string())) { + __ JumpIfSmi(input, false_label); + __ GetObjectType(input, input, scratch); + __ Branch(USE_DELAY_SLOT, false_label, + ge, scratch, Operand(FIRST_NONSTRING_TYPE)); + // input is an object so we can load the BitFieldOffset even if we take the + // other branch. 
+ __ lbu(at, FieldMemOperand(input, Map::kBitFieldOffset)); + __ And(at, at, 1 << Map::kIsUndetectable); + *cmp1 = at; + *cmp2 = Operand(zero_reg); + final_branch_condition = eq; + + } else if (String::Equals(type_name, factory->symbol_string())) { + __ JumpIfSmi(input, false_label); + __ GetObjectType(input, input, scratch); + *cmp1 = scratch; + *cmp2 = Operand(SYMBOL_TYPE); + final_branch_condition = eq; + + } else if (String::Equals(type_name, factory->boolean_string())) { + __ LoadRoot(at, Heap::kTrueValueRootIndex); + __ Branch(USE_DELAY_SLOT, true_label, eq, at, Operand(input)); + __ LoadRoot(at, Heap::kFalseValueRootIndex); + *cmp1 = at; + *cmp2 = Operand(input); + final_branch_condition = eq; + + } else if (String::Equals(type_name, factory->undefined_string())) { + __ LoadRoot(at, Heap::kUndefinedValueRootIndex); + __ Branch(USE_DELAY_SLOT, true_label, eq, at, Operand(input)); + // The first instruction of JumpIfSmi is an And - it is safe in the delay + // slot. + __ JumpIfSmi(input, false_label); + // Check for undetectable objects => true. + __ ld(input, FieldMemOperand(input, HeapObject::kMapOffset)); + __ lbu(at, FieldMemOperand(input, Map::kBitFieldOffset)); + __ And(at, at, 1 << Map::kIsUndetectable); + *cmp1 = at; + *cmp2 = Operand(zero_reg); + final_branch_condition = ne; + + } else if (String::Equals(type_name, factory->function_string())) { + STATIC_ASSERT(NUM_OF_CALLABLE_SPEC_OBJECT_TYPES == 2); + __ JumpIfSmi(input, false_label); + __ GetObjectType(input, scratch, input); + __ Branch(true_label, eq, input, Operand(JS_FUNCTION_TYPE)); + *cmp1 = input; + *cmp2 = Operand(JS_FUNCTION_PROXY_TYPE); + final_branch_condition = eq; + + } else if (String::Equals(type_name, factory->object_string())) { + __ JumpIfSmi(input, false_label); + __ LoadRoot(at, Heap::kNullValueRootIndex); + __ Branch(USE_DELAY_SLOT, true_label, eq, at, Operand(input)); + Register map = input; + __ GetObjectType(input, map, scratch); + __ Branch(false_label, + lt, scratch, Operand(FIRST_NONCALLABLE_SPEC_OBJECT_TYPE)); + __ Branch(USE_DELAY_SLOT, false_label, + gt, scratch, Operand(LAST_NONCALLABLE_SPEC_OBJECT_TYPE)); + // map is still valid, so the BitField can be loaded in delay slot. + // Check for undetectable objects => false. + __ lbu(at, FieldMemOperand(map, Map::kBitFieldOffset)); + __ And(at, at, 1 << Map::kIsUndetectable); + *cmp1 = at; + *cmp2 = Operand(zero_reg); + final_branch_condition = eq; + + } else { + *cmp1 = at; + *cmp2 = Operand(zero_reg); // Set to valid regs, to avoid caller assertion. + __ Branch(false_label); + } + + return final_branch_condition; +} + + +void LCodeGen::DoIsConstructCallAndBranch(LIsConstructCallAndBranch* instr) { + Register temp1 = ToRegister(instr->temp()); + + EmitIsConstructCall(temp1, scratch0()); + + EmitBranch(instr, eq, temp1, + Operand(Smi::FromInt(StackFrame::CONSTRUCT))); +} + + +void LCodeGen::EmitIsConstructCall(Register temp1, Register temp2) { + DCHECK(!temp1.is(temp2)); + // Get the frame pointer for the calling frame. + __ ld(temp1, MemOperand(fp, StandardFrameConstants::kCallerFPOffset)); + + // Skip the arguments adaptor frame if it exists. + Label check_frame_marker; + __ ld(temp2, MemOperand(temp1, StandardFrameConstants::kContextOffset)); + __ Branch(&check_frame_marker, ne, temp2, + Operand(Smi::FromInt(StackFrame::ARGUMENTS_ADAPTOR))); + __ ld(temp1, MemOperand(temp1, StandardFrameConstants::kCallerFPOffset)); + + // Check the marker in the calling frame. 
+ __ bind(&check_frame_marker); + __ ld(temp1, MemOperand(temp1, StandardFrameConstants::kMarkerOffset)); +} + + +void LCodeGen::EnsureSpaceForLazyDeopt(int space_needed) { + if (!info()->IsStub()) { + // Ensure that we have enough space after the previous lazy-bailout + // instruction for patching the code here. + int current_pc = masm()->pc_offset(); + if (current_pc < last_lazy_deopt_pc_ + space_needed) { + int padding_size = last_lazy_deopt_pc_ + space_needed - current_pc; + DCHECK_EQ(0, padding_size % Assembler::kInstrSize); + while (padding_size > 0) { + __ nop(); + padding_size -= Assembler::kInstrSize; + } + } + } + last_lazy_deopt_pc_ = masm()->pc_offset(); +} + + +void LCodeGen::DoLazyBailout(LLazyBailout* instr) { + last_lazy_deopt_pc_ = masm()->pc_offset(); + DCHECK(instr->HasEnvironment()); + LEnvironment* env = instr->environment(); + RegisterEnvironmentForDeoptimization(env, Safepoint::kLazyDeopt); + safepoints_.RecordLazyDeoptimizationIndex(env->deoptimization_index()); +} + + +void LCodeGen::DoDeoptimize(LDeoptimize* instr) { + Deoptimizer::BailoutType type = instr->hydrogen()->type(); + // TODO(danno): Stubs expect all deopts to be lazy for historical reasons (the + // needed return address), even though the implementation of LAZY and EAGER is + // now identical. When LAZY is eventually completely folded into EAGER, remove + // the special case below. + if (info()->IsStub() && type == Deoptimizer::EAGER) { + type = Deoptimizer::LAZY; + } + + Comment(";;; deoptimize: %s", instr->hydrogen()->reason()); + DeoptimizeIf(al, instr->environment(), type, zero_reg, Operand(zero_reg)); +} + + +void LCodeGen::DoDummy(LDummy* instr) { + // Nothing to see here, move on! +} + + +void LCodeGen::DoDummyUse(LDummyUse* instr) { + // Nothing to see here, move on! +} + + +void LCodeGen::DoDeferredStackCheck(LStackCheck* instr) { + PushSafepointRegistersScope scope(this); + LoadContextFromDeferred(instr->context()); + __ CallRuntimeSaveDoubles(Runtime::kStackGuard); + RecordSafepointWithLazyDeopt( + instr, RECORD_SAFEPOINT_WITH_REGISTERS_AND_NO_ARGUMENTS); + DCHECK(instr->HasEnvironment()); + LEnvironment* env = instr->environment(); + safepoints_.RecordLazyDeoptimizationIndex(env->deoptimization_index()); +} + + +void LCodeGen::DoStackCheck(LStackCheck* instr) { + class DeferredStackCheck V8_FINAL : public LDeferredCode { + public: + DeferredStackCheck(LCodeGen* codegen, LStackCheck* instr) + : LDeferredCode(codegen), instr_(instr) { } + virtual void Generate() V8_OVERRIDE { + codegen()->DoDeferredStackCheck(instr_); + } + virtual LInstruction* instr() V8_OVERRIDE { return instr_; } + private: + LStackCheck* instr_; + }; + + DCHECK(instr->HasEnvironment()); + LEnvironment* env = instr->environment(); + // There is no LLazyBailout instruction for stack-checks. We have to + // prepare for lazy deoptimization explicitly here. + if (instr->hydrogen()->is_function_entry()) { + // Perform stack overflow check. + Label done; + __ LoadRoot(at, Heap::kStackLimitRootIndex); + __ Branch(&done, hs, sp, Operand(at)); + DCHECK(instr->context()->IsRegister()); + DCHECK(ToRegister(instr->context()).is(cp)); + CallCode(isolate()->builtins()->StackCheck(), + RelocInfo::CODE_TARGET, + instr); + __ bind(&done); + } else { + DCHECK(instr->hydrogen()->is_backwards_branch()); + // Perform stack overflow check if this goto needs it before jumping. 
+ DeferredStackCheck* deferred_stack_check = + new(zone()) DeferredStackCheck(this, instr); + __ LoadRoot(at, Heap::kStackLimitRootIndex); + __ Branch(deferred_stack_check->entry(), lo, sp, Operand(at)); + EnsureSpaceForLazyDeopt(Deoptimizer::patch_size()); + __ bind(instr->done_label()); + deferred_stack_check->SetExit(instr->done_label()); + RegisterEnvironmentForDeoptimization(env, Safepoint::kLazyDeopt); + // Don't record a deoptimization index for the safepoint here. + // This will be done explicitly when emitting call and the safepoint in + // the deferred code. + } +} + + +void LCodeGen::DoOsrEntry(LOsrEntry* instr) { + // This is a pseudo-instruction that ensures that the environment here is + // properly registered for deoptimization and records the assembler's PC + // offset. + LEnvironment* environment = instr->environment(); + + // If the environment were already registered, we would have no way of + // backpatching it with the spill slot operands. + DCHECK(!environment->HasBeenRegistered()); + RegisterEnvironmentForDeoptimization(environment, Safepoint::kNoLazyDeopt); + + GenerateOsrPrologue(); +} + + +void LCodeGen::DoForInPrepareMap(LForInPrepareMap* instr) { + Register result = ToRegister(instr->result()); + Register object = ToRegister(instr->object()); + __ LoadRoot(at, Heap::kUndefinedValueRootIndex); + DeoptimizeIf(eq, instr->environment(), object, Operand(at)); + + Register null_value = a5; + __ LoadRoot(null_value, Heap::kNullValueRootIndex); + DeoptimizeIf(eq, instr->environment(), object, Operand(null_value)); + + __ And(at, object, kSmiTagMask); + DeoptimizeIf(eq, instr->environment(), at, Operand(zero_reg)); + + STATIC_ASSERT(FIRST_JS_PROXY_TYPE == FIRST_SPEC_OBJECT_TYPE); + __ GetObjectType(object, a1, a1); + DeoptimizeIf(le, instr->environment(), a1, Operand(LAST_JS_PROXY_TYPE)); + + Label use_cache, call_runtime; + DCHECK(object.is(a0)); + __ CheckEnumCache(null_value, &call_runtime); + + __ ld(result, FieldMemOperand(object, HeapObject::kMapOffset)); + __ Branch(&use_cache); + + // Get the set of properties to enumerate. 
+ __ bind(&call_runtime); + __ push(object); + CallRuntime(Runtime::kGetPropertyNamesFast, 1, instr); + + __ ld(a1, FieldMemOperand(v0, HeapObject::kMapOffset)); + DCHECK(result.is(v0)); + __ LoadRoot(at, Heap::kMetaMapRootIndex); + DeoptimizeIf(ne, instr->environment(), a1, Operand(at)); + __ bind(&use_cache); +} + + +void LCodeGen::DoForInCacheArray(LForInCacheArray* instr) { + Register map = ToRegister(instr->map()); + Register result = ToRegister(instr->result()); + Label load_cache, done; + __ EnumLength(result, map); + __ Branch(&load_cache, ne, result, Operand(Smi::FromInt(0))); + __ li(result, Operand(isolate()->factory()->empty_fixed_array())); + __ jmp(&done); + + __ bind(&load_cache); + __ LoadInstanceDescriptors(map, result); + __ ld(result, + FieldMemOperand(result, DescriptorArray::kEnumCacheOffset)); + __ ld(result, + FieldMemOperand(result, FixedArray::SizeFor(instr->idx()))); + DeoptimizeIf(eq, instr->environment(), result, Operand(zero_reg)); + + __ bind(&done); +} + + +void LCodeGen::DoCheckMapValue(LCheckMapValue* instr) { + Register object = ToRegister(instr->value()); + Register map = ToRegister(instr->map()); + __ ld(scratch0(), FieldMemOperand(object, HeapObject::kMapOffset)); + DeoptimizeIf(ne, instr->environment(), map, Operand(scratch0())); +} + + +void LCodeGen::DoDeferredLoadMutableDouble(LLoadFieldByIndex* instr, + Register result, + Register object, + Register index) { + PushSafepointRegistersScope scope(this); + __ Push(object, index); + __ mov(cp, zero_reg); + __ CallRuntimeSaveDoubles(Runtime::kLoadMutableDouble); + RecordSafepointWithRegisters( + instr->pointer_map(), 2, Safepoint::kNoLazyDeopt); + __ StoreToSafepointRegisterSlot(v0, result); +} + + +void LCodeGen::DoLoadFieldByIndex(LLoadFieldByIndex* instr) { + class DeferredLoadMutableDouble V8_FINAL : public LDeferredCode { + public: + DeferredLoadMutableDouble(LCodeGen* codegen, + LLoadFieldByIndex* instr, + Register result, + Register object, + Register index) + : LDeferredCode(codegen), + instr_(instr), + result_(result), + object_(object), + index_(index) { + } + virtual void Generate() V8_OVERRIDE { + codegen()->DoDeferredLoadMutableDouble(instr_, result_, object_, index_); + } + virtual LInstruction* instr() V8_OVERRIDE { return instr_; } + private: + LLoadFieldByIndex* instr_; + Register result_; + Register object_; + Register index_; + }; + + Register object = ToRegister(instr->object()); + Register index = ToRegister(instr->index()); + Register result = ToRegister(instr->result()); + Register scratch = scratch0(); + + DeferredLoadMutableDouble* deferred; + deferred = new(zone()) DeferredLoadMutableDouble( + this, instr, result, object, index); + + Label out_of_object, done; + + __ And(scratch, index, Operand(Smi::FromInt(1))); + __ Branch(deferred->entry(), ne, scratch, Operand(zero_reg)); + __ dsra(index, index, 1); + + __ Branch(USE_DELAY_SLOT, &out_of_object, lt, index, Operand(zero_reg)); + __ SmiScale(scratch, index, kPointerSizeLog2); // In delay slot. + __ Daddu(scratch, object, scratch); + __ ld(result, FieldMemOperand(scratch, JSObject::kHeaderSize)); + + __ Branch(&done); + + __ bind(&out_of_object); + __ ld(result, FieldMemOperand(object, JSObject::kPropertiesOffset)); + // Index is equal to negated out of object property index plus 1. 
+ __ Dsubu(scratch, result, scratch); + __ ld(result, FieldMemOperand(scratch, + FixedArray::kHeaderSize - kPointerSize)); + __ bind(deferred->exit()); + __ bind(&done); +} + + +void LCodeGen::DoStoreFrameContext(LStoreFrameContext* instr) { + Register context = ToRegister(instr->context()); + __ sd(context, MemOperand(fp, StandardFrameConstants::kContextOffset)); +} + + +void LCodeGen::DoAllocateBlockContext(LAllocateBlockContext* instr) { + Handle<ScopeInfo> scope_info = instr->scope_info(); + __ li(at, scope_info); + __ Push(at, ToRegister(instr->function())); + CallRuntime(Runtime::kPushBlockContext, 2, instr); + RecordSafepoint(Safepoint::kNoLazyDeopt); +} + + +#undef __ + +} } // namespace v8::internal diff --git a/deps/v8/src/mips64/lithium-codegen-mips64.h b/deps/v8/src/mips64/lithium-codegen-mips64.h new file mode 100644 index 00000000000..d42a10e04e0 --- /dev/null +++ b/deps/v8/src/mips64/lithium-codegen-mips64.h @@ -0,0 +1,451 @@ +// Copyright 2012 the V8 project authors. All rights reserved. +// Use of this source code is governed by a BSD-style license that can be +// found in the LICENSE file. + +#ifndef V8_MIPS_LITHIUM_CODEGEN_MIPS_H_ +#define V8_MIPS_LITHIUM_CODEGEN_MIPS_H_ + +#include "src/deoptimizer.h" +#include "src/lithium-codegen.h" +#include "src/mips64/lithium-gap-resolver-mips64.h" +#include "src/mips64/lithium-mips64.h" +#include "src/safepoint-table.h" +#include "src/scopes.h" +#include "src/utils.h" + +namespace v8 { +namespace internal { + +// Forward declarations. +class LDeferredCode; +class SafepointGenerator; + +class LCodeGen: public LCodeGenBase { + public: + LCodeGen(LChunk* chunk, MacroAssembler* assembler, CompilationInfo* info) + : LCodeGenBase(chunk, assembler, info), + deoptimizations_(4, info->zone()), + deopt_jump_table_(4, info->zone()), + deoptimization_literals_(8, info->zone()), + inlined_function_count_(0), + scope_(info->scope()), + translations_(info->zone()), + deferred_(8, info->zone()), + osr_pc_offset_(-1), + frame_is_built_(false), + safepoints_(info->zone()), + resolver_(this), + expected_safepoint_kind_(Safepoint::kSimple) { + PopulateDeoptimizationLiteralsWithInlinedFunctions(); + } + + + int LookupDestination(int block_id) const { + return chunk()->LookupDestination(block_id); + } + + bool IsNextEmittedBlock(int block_id) const { + return LookupDestination(block_id) == GetNextEmittedBlock(); + } + + bool NeedsEagerFrame() const { + return GetStackSlotCount() > 0 || + info()->is_non_deferred_calling() || + !info()->IsStub() || + info()->requires_frame(); + } + bool NeedsDeferredFrame() const { + return !NeedsEagerFrame() && info()->is_deferred_calling(); + } + + RAStatus GetRAState() const { + return frame_is_built_ ? kRAHasBeenSaved : kRAHasNotBeenSaved; + } + + // Support for converting LOperands to assembler types. + // LOperand must be a register. + Register ToRegister(LOperand* op) const; + + // LOperand is loaded into scratch, unless already a register. + Register EmitLoadRegister(LOperand* op, Register scratch); + + // LOperand must be a double register. + DoubleRegister ToDoubleRegister(LOperand* op) const; + + // LOperand is loaded into dbl_scratch, unless already a double register. 
+ DoubleRegister EmitLoadDoubleRegister(LOperand* op, + FloatRegister flt_scratch, + DoubleRegister dbl_scratch); + int32_t ToRepresentation_donotuse(LConstantOperand* op, + const Representation& r) const; + int32_t ToInteger32(LConstantOperand* op) const; + Smi* ToSmi(LConstantOperand* op) const; + double ToDouble(LConstantOperand* op) const; + Operand ToOperand(LOperand* op); + MemOperand ToMemOperand(LOperand* op) const; + // Returns a MemOperand pointing to the high word of a DoubleStackSlot. + MemOperand ToHighMemOperand(LOperand* op) const; + + bool IsInteger32(LConstantOperand* op) const; + bool IsSmi(LConstantOperand* op) const; + Handle<Object> ToHandle(LConstantOperand* op) const; + + // Try to generate code for the entire chunk, but it may fail if the + // chunk contains constructs we cannot handle. Returns true if the + // code generation attempt succeeded. + bool GenerateCode(); + + // Finish the code by setting stack height, safepoint, and bailout + // information on it. + void FinishCode(Handle<Code> code); + + void DoDeferredNumberTagD(LNumberTagD* instr); + + enum IntegerSignedness { SIGNED_INT32, UNSIGNED_INT32 }; + void DoDeferredNumberTagIU(LInstruction* instr, + LOperand* value, + LOperand* temp1, + LOperand* temp2, + IntegerSignedness signedness); + + void DoDeferredTaggedToI(LTaggedToI* instr); + void DoDeferredMathAbsTaggedHeapNumber(LMathAbs* instr); + void DoDeferredStackCheck(LStackCheck* instr); + void DoDeferredStringCharCodeAt(LStringCharCodeAt* instr); + void DoDeferredStringCharFromCode(LStringCharFromCode* instr); + void DoDeferredAllocate(LAllocate* instr); + void DoDeferredInstanceOfKnownGlobal(LInstanceOfKnownGlobal* instr, + Label* map_check); + + void DoDeferredInstanceMigration(LCheckMaps* instr, Register object); + void DoDeferredLoadMutableDouble(LLoadFieldByIndex* instr, + Register result, + Register object, + Register index); + + // Parallel move support. + void DoParallelMove(LParallelMove* move); + void DoGap(LGap* instr); + + MemOperand PrepareKeyedOperand(Register key, + Register base, + bool key_is_constant, + int constant_key, + int element_size, + int shift_size, + int base_offset); + + // Emit frame translation commands for an environment. + void WriteTranslation(LEnvironment* environment, Translation* translation); + + // Declare methods that deal with the individual node types. +#define DECLARE_DO(type) void Do##type(L##type* node); + LITHIUM_CONCRETE_INSTRUCTION_LIST(DECLARE_DO) +#undef DECLARE_DO + + private: + StrictMode strict_mode() const { return info()->strict_mode(); } + + Scope* scope() const { return scope_; } + + Register scratch0() { return kLithiumScratchReg; } + Register scratch1() { return kLithiumScratchReg2; } + DoubleRegister double_scratch0() { return kLithiumScratchDouble; } + + LInstruction* GetNextInstruction(); + + void EmitClassOfTest(Label* if_true, + Label* if_false, + Handle<String> class_name, + Register input, + Register temporary, + Register temporary2); + + int GetStackSlotCount() const { return chunk()->spill_slot_count(); } + + void AddDeferredCode(LDeferredCode* code) { deferred_.Add(code, zone()); } + + void SaveCallerDoubles(); + void RestoreCallerDoubles(); + + // Code generation passes. Returns true if code generation should + // continue. + void GenerateBodyInstructionPre(LInstruction* instr) V8_OVERRIDE; + bool GeneratePrologue(); + bool GenerateDeferredCode(); + bool GenerateDeoptJumpTable(); + bool GenerateSafepointTable(); + + // Generates the custom OSR entrypoint and sets the osr_pc_offset. 
+ void GenerateOsrPrologue(); + + enum SafepointMode { + RECORD_SIMPLE_SAFEPOINT, + RECORD_SAFEPOINT_WITH_REGISTERS_AND_NO_ARGUMENTS + }; + + void CallCode(Handle<Code> code, + RelocInfo::Mode mode, + LInstruction* instr); + + void CallCodeGeneric(Handle<Code> code, + RelocInfo::Mode mode, + LInstruction* instr, + SafepointMode safepoint_mode); + + void CallRuntime(const Runtime::Function* function, + int num_arguments, + LInstruction* instr, + SaveFPRegsMode save_doubles = kDontSaveFPRegs); + + void CallRuntime(Runtime::FunctionId id, + int num_arguments, + LInstruction* instr) { + const Runtime::Function* function = Runtime::FunctionForId(id); + CallRuntime(function, num_arguments, instr); + } + + void LoadContextFromDeferred(LOperand* context); + void CallRuntimeFromDeferred(Runtime::FunctionId id, + int argc, + LInstruction* instr, + LOperand* context); + + enum A1State { + A1_UNINITIALIZED, + A1_CONTAINS_TARGET + }; + + // Generate a direct call to a known function. Expects the function + // to be in a1. + void CallKnownFunction(Handle<JSFunction> function, + int formal_parameter_count, + int arity, + LInstruction* instr, + A1State a1_state); + + void RecordSafepointWithLazyDeopt(LInstruction* instr, + SafepointMode safepoint_mode); + + void RegisterEnvironmentForDeoptimization(LEnvironment* environment, + Safepoint::DeoptMode mode); + void DeoptimizeIf(Condition condition, + LEnvironment* environment, + Deoptimizer::BailoutType bailout_type, + Register src1 = zero_reg, + const Operand& src2 = Operand(zero_reg)); + void DeoptimizeIf(Condition condition, + LEnvironment* environment, + Register src1 = zero_reg, + const Operand& src2 = Operand(zero_reg)); + + void AddToTranslation(LEnvironment* environment, + Translation* translation, + LOperand* op, + bool is_tagged, + bool is_uint32, + int* object_index_pointer, + int* dematerialized_index_pointer); + void PopulateDeoptimizationData(Handle<Code> code); + int DefineDeoptimizationLiteral(Handle<Object> literal); + + void PopulateDeoptimizationLiteralsWithInlinedFunctions(); + + Register ToRegister(int index) const; + DoubleRegister ToDoubleRegister(int index) const; + + MemOperand BuildSeqStringOperand(Register string, + LOperand* index, + String::Encoding encoding); + + void EmitIntegerMathAbs(LMathAbs* instr); + + // Support for recording safepoint and position information. + void RecordSafepoint(LPointerMap* pointers, + Safepoint::Kind kind, + int arguments, + Safepoint::DeoptMode mode); + void RecordSafepoint(LPointerMap* pointers, Safepoint::DeoptMode mode); + void RecordSafepoint(Safepoint::DeoptMode mode); + void RecordSafepointWithRegisters(LPointerMap* pointers, + int arguments, + Safepoint::DeoptMode mode); + + void RecordAndWritePosition(int position) V8_OVERRIDE; + + static Condition TokenToCondition(Token::Value op, bool is_unsigned); + void EmitGoto(int block); + + // EmitBranch expects to be the last instruction of a block. 
+  template<class InstrType>
+  void EmitBranch(InstrType instr,
+                  Condition condition,
+                  Register src1,
+                  const Operand& src2);
+  template<class InstrType>
+  void EmitBranchF(InstrType instr,
+                   Condition condition,
+                   FPURegister src1,
+                   FPURegister src2);
+  template<class InstrType>
+  void EmitFalseBranch(InstrType instr,
+                       Condition condition,
+                       Register src1,
+                       const Operand& src2);
+  template<class InstrType>
+  void EmitFalseBranchF(InstrType instr,
+                        Condition condition,
+                        FPURegister src1,
+                        FPURegister src2);
+  void EmitCmpI(LOperand* left, LOperand* right);
+  void EmitNumberUntagD(Register input,
+                        DoubleRegister result,
+                        bool allow_undefined_as_nan,
+                        bool deoptimize_on_minus_zero,
+                        LEnvironment* env,
+                        NumberUntagDMode mode);
+
+  // Emits optimized code for typeof x == "y". Modifies input register.
+  // Returns the condition on which a final split to
+  // true and false label should be made, to optimize fallthrough.
+  // Returns two registers in cmp1 and cmp2 that can be used in the
+  // Branch instruction after EmitTypeofIs.
+  Condition EmitTypeofIs(Label* true_label,
+                         Label* false_label,
+                         Register input,
+                         Handle<String> type_name,
+                         Register* cmp1,
+                         Operand* cmp2);
+
+  // Emits optimized code for %_IsObject(x). Preserves input register.
+  // Returns the condition on which a final split to
+  // true and false label should be made, to optimize fallthrough.
+  Condition EmitIsObject(Register input,
+                         Register temp1,
+                         Register temp2,
+                         Label* is_not_object,
+                         Label* is_object);
+
+  // Emits optimized code for %_IsString(x). Preserves input register.
+  // Returns the condition on which a final split to
+  // true and false label should be made, to optimize fallthrough.
+  Condition EmitIsString(Register input,
+                         Register temp1,
+                         Label* is_not_string,
+                         SmiCheck check_needed);
+
+  // Emits optimized code for %_IsConstructCall().
+  // Caller should branch on equal condition.
+  void EmitIsConstructCall(Register temp1, Register temp2);
+
+  // Emits optimized code to deep-copy the contents of statically known
+  // object graphs (e.g. object literal boilerplate).
+  void EmitDeepCopy(Handle<JSObject> object,
+                    Register result,
+                    Register source,
+                    int* offset,
+                    AllocationSiteMode mode);
+  // Emit optimized code for integer division.
+  // Inputs are signed.
+  // All registers are clobbered.
+  // If 'remainder' is no_reg, it is not computed.
+  void EmitSignedIntegerDivisionByConstant(Register result,
+                                           Register dividend,
+                                           int32_t divisor,
+                                           Register remainder,
+                                           Register scratch,
+                                           LEnvironment* environment);
+
+
+  void EnsureSpaceForLazyDeopt(int space_needed) V8_OVERRIDE;
+  void DoLoadKeyedExternalArray(LLoadKeyed* instr);
+  void DoLoadKeyedFixedDoubleArray(LLoadKeyed* instr);
+  void DoLoadKeyedFixedArray(LLoadKeyed* instr);
+  void DoStoreKeyedExternalArray(LStoreKeyed* instr);
+  void DoStoreKeyedFixedDoubleArray(LStoreKeyed* instr);
+  void DoStoreKeyedFixedArray(LStoreKeyed* instr);
+
+  ZoneList<LEnvironment*> deoptimizations_;
+  ZoneList<Deoptimizer::JumpTableEntry> deopt_jump_table_;
+  ZoneList<Handle<Object> > deoptimization_literals_;
+  int inlined_function_count_;
+  Scope* const scope_;
+  TranslationBuffer translations_;
+  ZoneList<LDeferredCode*> deferred_;
+  int osr_pc_offset_;
+  bool frame_is_built_;
+
+  // Builder that keeps track of safepoints in the code. The table
+  // itself is emitted at the end of the generated code.
+  SafepointTableBuilder safepoints_;
+
+  // Compiles a set of parallel moves into a sequential list of moves.
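+  // (See the LGapResolver implementation added below in
+  // lithium-gap-resolver-mips64.cc.)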
+  LGapResolver resolver_;
+
+  Safepoint::Kind expected_safepoint_kind_;
+
+  class PushSafepointRegistersScope V8_FINAL BASE_EMBEDDED {
+   public:
+    explicit PushSafepointRegistersScope(LCodeGen* codegen)
+        : codegen_(codegen) {
+      DCHECK(codegen_->info()->is_calling());
+      DCHECK(codegen_->expected_safepoint_kind_ == Safepoint::kSimple);
+      codegen_->expected_safepoint_kind_ = Safepoint::kWithRegisters;
+
+      StoreRegistersStateStub stub(codegen_->isolate());
+      codegen_->masm_->push(ra);
+      codegen_->masm_->CallStub(&stub);
+    }
+
+    ~PushSafepointRegistersScope() {
+      DCHECK(codegen_->expected_safepoint_kind_ == Safepoint::kWithRegisters);
+      RestoreRegistersStateStub stub(codegen_->isolate());
+      codegen_->masm_->push(ra);
+      codegen_->masm_->CallStub(&stub);
+      codegen_->expected_safepoint_kind_ = Safepoint::kSimple;
+    }
+
+   private:
+    LCodeGen* codegen_;
+  };
+
+  friend class LDeferredCode;
+  friend class LEnvironment;
+  friend class SafepointGenerator;
+  DISALLOW_COPY_AND_ASSIGN(LCodeGen);
+};
+
+
+class LDeferredCode : public ZoneObject {
+ public:
+  explicit LDeferredCode(LCodeGen* codegen)
+      : codegen_(codegen),
+        external_exit_(NULL),
+        instruction_index_(codegen->current_instruction_) {
+    codegen->AddDeferredCode(this);
+  }
+
+  virtual ~LDeferredCode() {}
+  virtual void Generate() = 0;
+  virtual LInstruction* instr() = 0;
+
+  void SetExit(Label* exit) { external_exit_ = exit; }
+  Label* entry() { return &entry_; }
+  Label* exit() { return external_exit_ != NULL ? external_exit_ : &exit_; }
+  int instruction_index() const { return instruction_index_; }
+
+ protected:
+  LCodeGen* codegen() const { return codegen_; }
+  MacroAssembler* masm() const { return codegen_->masm(); }
+
+ private:
+  LCodeGen* codegen_;
+  Label entry_;
+  Label exit_;
+  Label* external_exit_;
+  int instruction_index_;
+};
+
+} }  // namespace v8::internal
+
+#endif  // V8_MIPS_LITHIUM_CODEGEN_MIPS_H_
diff --git a/deps/v8/src/mips64/lithium-gap-resolver-mips64.cc b/deps/v8/src/mips64/lithium-gap-resolver-mips64.cc
new file mode 100644
index 00000000000..d965f651a31
--- /dev/null
+++ b/deps/v8/src/mips64/lithium-gap-resolver-mips64.cc
@@ -0,0 +1,300 @@
+// Copyright 2012 the V8 project authors. All rights reserved.
+// Use of this source code is governed by a BSD-style license that can be
+// found in the LICENSE file.
+
+#include "src/v8.h"
+
+#include "src/mips64/lithium-codegen-mips64.h"
+#include "src/mips64/lithium-gap-resolver-mips64.h"
+
+namespace v8 {
+namespace internal {
+
+LGapResolver::LGapResolver(LCodeGen* owner)
+    : cgen_(owner),
+      moves_(32, owner->zone()),
+      root_index_(0),
+      in_cycle_(false),
+      saved_destination_(NULL) {}
+
+
+void LGapResolver::Resolve(LParallelMove* parallel_move) {
+  DCHECK(moves_.is_empty());
+  // Build up a worklist of moves.
+  BuildInitialMoveList(parallel_move);
+
+  for (int i = 0; i < moves_.length(); ++i) {
+    LMoveOperands move = moves_[i];
+    // Skip constants to perform them last. They don't block other moves
+    // and skipping such moves with register destinations keeps those
+    // registers free for the whole algorithm.
+    if (!move.IsEliminated() && !move.source()->IsConstantOperand()) {
+      root_index_ = i;  // Any cycle is found by reaching this move again.
+      PerformMove(i);
+      if (in_cycle_) {
+        RestoreValue();
+      }
+    }
+  }
+
+  // Perform the moves with constant sources.
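+  // For example, for the parallel move {a0 <- a1, a1 <- a0, a2 <- const},
+  // the a0/a1 cycle has already been resolved through the scratch register
+  // above; only the constant move into a2 is emitted here.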
+  for (int i = 0; i < moves_.length(); ++i) {
+    if (!moves_[i].IsEliminated()) {
+      DCHECK(moves_[i].source()->IsConstantOperand());
+      EmitMove(i);
+    }
+  }
+
+  moves_.Rewind(0);
+}
+
+
+void LGapResolver::BuildInitialMoveList(LParallelMove* parallel_move) {
+  // Perform a linear sweep of the moves to add them to the initial list of
+  // moves to perform, ignoring any move that is redundant (the source is
+  // the same as the destination, the destination is ignored and
+  // unallocated, or the move was already eliminated).
+  const ZoneList<LMoveOperands>* moves = parallel_move->move_operands();
+  for (int i = 0; i < moves->length(); ++i) {
+    LMoveOperands move = moves->at(i);
+    if (!move.IsRedundant()) moves_.Add(move, cgen_->zone());
+  }
+  Verify();
+}
+
+
+void LGapResolver::PerformMove(int index) {
+  // Each call to this function performs a move and deletes it from the move
+  // graph. We first recursively perform any move blocking this one. We
+  // mark a move as "pending" on entry to PerformMove in order to detect
+  // cycles in the move graph.
+
+  // We can only find a cycle, when doing a depth-first traversal of moves,
+  // by encountering the starting move again. So by spilling the source of
+  // the starting move, we break the cycle. All moves are then unblocked,
+  // and the starting move is completed by writing the spilled value to
+  // its destination. All other moves from the spilled source have been
+  // completed prior to breaking the cycle.
+  // An additional complication is that moves to MemOperands with large
+  // offsets (more than 1K or 4K) require us to spill this spilled value to
+  // the stack, to free up the register.
+  DCHECK(!moves_[index].IsPending());
+  DCHECK(!moves_[index].IsRedundant());
+
+  // Clear this move's destination to indicate a pending move. The actual
+  // destination is saved in a stack allocated local. Multiple moves can
+  // be pending because this function is recursive.
+  DCHECK(moves_[index].source() != NULL);  // Or else it will look eliminated.
+  LOperand* destination = moves_[index].destination();
+  moves_[index].set_destination(NULL);
+
+  // Perform a depth-first traversal of the move graph to resolve
+  // dependencies. Any unperformed, unpending move with a source the same
+  // as this one's destination blocks this one so recursively perform all
+  // such moves.
+  for (int i = 0; i < moves_.length(); ++i) {
+    LMoveOperands other_move = moves_[i];
+    if (other_move.Blocks(destination) && !other_move.IsPending()) {
+      PerformMove(i);
+      // If there is a blocking, pending move it must be moves_[root_index_]
+      // and all other moves with the same source as moves_[root_index_] are
+      // successfully executed (because they are cycle-free) by this loop.
+    }
+  }
+
+  // We are about to resolve this move and don't need it marked as
+  // pending, so restore its destination.
+  moves_[index].set_destination(destination);
+
+  // The move may be blocked on a pending move, which must be the starting
+  // move. In this case, we have a cycle, and we save the source of this
+  // move to a scratch register to break it.
+  LMoveOperands other_move = moves_[root_index_];
+  if (other_move.Blocks(destination)) {
+    DCHECK(other_move.IsPending());
+    BreakCycle(index);
+    return;
+  }
+
+  // This move is no longer blocked.
+  EmitMove(index);
+}
+
+
+void LGapResolver::Verify() {
+#ifdef ENABLE_SLOW_DCHECKS
+  // No operand should be the destination for more than one move.
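+  // (Two moves writing the same operand would make the result depend on
+  // the order in which the moves are performed.)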
+ for (int i = 0; i < moves_.length(); ++i) { + LOperand* destination = moves_[i].destination(); + for (int j = i + 1; j < moves_.length(); ++j) { + SLOW_DCHECK(!destination->Equals(moves_[j].destination())); + } + } +#endif +} + +#define __ ACCESS_MASM(cgen_->masm()) + +void LGapResolver::BreakCycle(int index) { + // We save in a register the value that should end up in the source of + // moves_[root_index]. After performing all moves in the tree rooted + // in that move, we save the value to that source. + DCHECK(moves_[index].destination()->Equals(moves_[root_index_].source())); + DCHECK(!in_cycle_); + in_cycle_ = true; + LOperand* source = moves_[index].source(); + saved_destination_ = moves_[index].destination(); + if (source->IsRegister()) { + __ mov(kLithiumScratchReg, cgen_->ToRegister(source)); + } else if (source->IsStackSlot()) { + __ ld(kLithiumScratchReg, cgen_->ToMemOperand(source)); + } else if (source->IsDoubleRegister()) { + __ mov_d(kLithiumScratchDouble, cgen_->ToDoubleRegister(source)); + } else if (source->IsDoubleStackSlot()) { + __ ldc1(kLithiumScratchDouble, cgen_->ToMemOperand(source)); + } else { + UNREACHABLE(); + } + // This move will be done by restoring the saved value to the destination. + moves_[index].Eliminate(); +} + + +void LGapResolver::RestoreValue() { + DCHECK(in_cycle_); + DCHECK(saved_destination_ != NULL); + + // Spilled value is in kLithiumScratchReg or kLithiumScratchDouble. + if (saved_destination_->IsRegister()) { + __ mov(cgen_->ToRegister(saved_destination_), kLithiumScratchReg); + } else if (saved_destination_->IsStackSlot()) { + __ sd(kLithiumScratchReg, cgen_->ToMemOperand(saved_destination_)); + } else if (saved_destination_->IsDoubleRegister()) { + __ mov_d(cgen_->ToDoubleRegister(saved_destination_), + kLithiumScratchDouble); + } else if (saved_destination_->IsDoubleStackSlot()) { + __ sdc1(kLithiumScratchDouble, + cgen_->ToMemOperand(saved_destination_)); + } else { + UNREACHABLE(); + } + + in_cycle_ = false; + saved_destination_ = NULL; +} + + +void LGapResolver::EmitMove(int index) { + LOperand* source = moves_[index].source(); + LOperand* destination = moves_[index].destination(); + + // Dispatch on the source and destination operand kinds. Not all + // combinations are possible. + + if (source->IsRegister()) { + Register source_register = cgen_->ToRegister(source); + if (destination->IsRegister()) { + __ mov(cgen_->ToRegister(destination), source_register); + } else { + DCHECK(destination->IsStackSlot()); + __ sd(source_register, cgen_->ToMemOperand(destination)); + } + } else if (source->IsStackSlot()) { + MemOperand source_operand = cgen_->ToMemOperand(source); + if (destination->IsRegister()) { + __ ld(cgen_->ToRegister(destination), source_operand); + } else { + DCHECK(destination->IsStackSlot()); + MemOperand destination_operand = cgen_->ToMemOperand(destination); + if (in_cycle_) { + if (!destination_operand.OffsetIsInt16Encodable()) { + // 'at' is overwritten while saving the value to the destination. + // Therefore we can't use 'at'. It is OK if the read from the source + // destroys 'at', since that happens before the value is read. + // This uses only a single reg of the double reg-pair. 
+ __ ldc1(kLithiumScratchDouble, source_operand); + __ sdc1(kLithiumScratchDouble, destination_operand); + } else { + __ ld(at, source_operand); + __ sd(at, destination_operand); + } + } else { + __ ld(kLithiumScratchReg, source_operand); + __ sd(kLithiumScratchReg, destination_operand); + } + } + + } else if (source->IsConstantOperand()) { + LConstantOperand* constant_source = LConstantOperand::cast(source); + if (destination->IsRegister()) { + Register dst = cgen_->ToRegister(destination); + if (cgen_->IsSmi(constant_source)) { + __ li(dst, Operand(cgen_->ToSmi(constant_source))); + } else if (cgen_->IsInteger32(constant_source)) { + __ li(dst, Operand(cgen_->ToInteger32(constant_source))); + } else { + __ li(dst, cgen_->ToHandle(constant_source)); + } + } else if (destination->IsDoubleRegister()) { + DoubleRegister result = cgen_->ToDoubleRegister(destination); + double v = cgen_->ToDouble(constant_source); + __ Move(result, v); + } else { + DCHECK(destination->IsStackSlot()); + DCHECK(!in_cycle_); // Constant moves happen after all cycles are gone. + if (cgen_->IsSmi(constant_source)) { + __ li(kLithiumScratchReg, Operand(cgen_->ToSmi(constant_source))); + __ sd(kLithiumScratchReg, cgen_->ToMemOperand(destination)); + } else if (cgen_->IsInteger32(constant_source)) { + __ li(kLithiumScratchReg, Operand(cgen_->ToInteger32(constant_source))); + __ sd(kLithiumScratchReg, cgen_->ToMemOperand(destination)); + } else { + __ li(kLithiumScratchReg, cgen_->ToHandle(constant_source)); + __ sd(kLithiumScratchReg, cgen_->ToMemOperand(destination)); + } + } + + } else if (source->IsDoubleRegister()) { + DoubleRegister source_register = cgen_->ToDoubleRegister(source); + if (destination->IsDoubleRegister()) { + __ mov_d(cgen_->ToDoubleRegister(destination), source_register); + } else { + DCHECK(destination->IsDoubleStackSlot()); + MemOperand destination_operand = cgen_->ToMemOperand(destination); + __ sdc1(source_register, destination_operand); + } + + } else if (source->IsDoubleStackSlot()) { + MemOperand source_operand = cgen_->ToMemOperand(source); + if (destination->IsDoubleRegister()) { + __ ldc1(cgen_->ToDoubleRegister(destination), source_operand); + } else { + DCHECK(destination->IsDoubleStackSlot()); + MemOperand destination_operand = cgen_->ToMemOperand(destination); + if (in_cycle_) { + // kLithiumScratchDouble was used to break the cycle, + // but kLithiumScratchReg is free. + MemOperand source_high_operand = + cgen_->ToHighMemOperand(source); + MemOperand destination_high_operand = + cgen_->ToHighMemOperand(destination); + __ lw(kLithiumScratchReg, source_operand); + __ sw(kLithiumScratchReg, destination_operand); + __ lw(kLithiumScratchReg, source_high_operand); + __ sw(kLithiumScratchReg, destination_high_operand); + } else { + __ ldc1(kLithiumScratchDouble, source_operand); + __ sdc1(kLithiumScratchDouble, destination_operand); + } + } + } else { + UNREACHABLE(); + } + + moves_[index].Eliminate(); +} + + +#undef __ + +} } // namespace v8::internal diff --git a/deps/v8/src/mips64/lithium-gap-resolver-mips64.h b/deps/v8/src/mips64/lithium-gap-resolver-mips64.h new file mode 100644 index 00000000000..0072e526cb1 --- /dev/null +++ b/deps/v8/src/mips64/lithium-gap-resolver-mips64.h @@ -0,0 +1,60 @@ +// Copyright 2011 the V8 project authors. All rights reserved. +// Use of this source code is governed by a BSD-style license that can be +// found in the LICENSE file. 
+
+#ifndef V8_MIPS_LITHIUM_GAP_RESOLVER_MIPS_H_
+#define V8_MIPS_LITHIUM_GAP_RESOLVER_MIPS_H_
+
+#include "src/v8.h"
+
+#include "src/lithium.h"
+
+namespace v8 {
+namespace internal {
+
+class LCodeGen;
+class LGapResolver;
+
+class LGapResolver V8_FINAL BASE_EMBEDDED {
+ public:
+  explicit LGapResolver(LCodeGen* owner);
+
+  // Resolve a set of parallel moves, emitting assembler instructions.
+  void Resolve(LParallelMove* parallel_move);
+
+ private:
+  // Build the initial list of moves.
+  void BuildInitialMoveList(LParallelMove* parallel_move);
+
+  // Perform the move at the moves_ index in question (possibly requiring
+  // other moves to satisfy dependencies).
+  void PerformMove(int index);
+
+  // If a cycle is found in the series of moves, save the blocking value to
+  // a scratch register. The cycle must be found by hitting the root of the
+  // depth-first search.
+  void BreakCycle(int index);
+
+  // After a cycle has been resolved, restore the value from the scratch
+  // register to its proper destination.
+  void RestoreValue();
+
+  // Emit a move and remove it from the move graph.
+  void EmitMove(int index);
+
+  // Verify the move list before performing moves.
+  void Verify();
+
+  LCodeGen* cgen_;
+
+  // List of moves not yet resolved.
+  ZoneList<LMoveOperands> moves_;
+
+  int root_index_;
+  bool in_cycle_;
+  LOperand* saved_destination_;
+};
+
+} }  // namespace v8::internal
+
+#endif  // V8_MIPS_LITHIUM_GAP_RESOLVER_MIPS_H_
diff --git a/deps/v8/src/mips64/lithium-mips64.cc b/deps/v8/src/mips64/lithium-mips64.cc
new file mode 100644
index 00000000000..0b6b0ddb366
--- /dev/null
+++ b/deps/v8/src/mips64/lithium-mips64.cc
@@ -0,0 +1,2581 @@
+// Copyright 2012 the V8 project authors. All rights reserved.
+// Use of this source code is governed by a BSD-style license that can be
+// found in the LICENSE file.
+
+#include "src/v8.h"
+
+#if V8_TARGET_ARCH_MIPS64
+
+#include "src/hydrogen-osr.h"
+#include "src/lithium-inl.h"
+#include "src/mips64/lithium-codegen-mips64.h"
+
+namespace v8 {
+namespace internal {
+
+#define DEFINE_COMPILE(type)                           \
+  void L##type::CompileToNative(LCodeGen* generator) { \
+    generator->Do##type(this);                         \
+  }
+LITHIUM_CONCRETE_INSTRUCTION_LIST(DEFINE_COMPILE)
+#undef DEFINE_COMPILE
+
+#ifdef DEBUG
+void LInstruction::VerifyCall() {
+  // Call instructions can use only fixed registers as temporaries and
+  // outputs because all registers are blocked by the calling convention.
+  // Input operands must use a fixed register or use-at-start policy or
+  // a non-register policy.
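+  // (A plain register policy could demand a fresh register, but calls
+  // block every allocatable register.)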
+ DCHECK(Output() == NULL || + LUnallocated::cast(Output())->HasFixedPolicy() || + !LUnallocated::cast(Output())->HasRegisterPolicy()); + for (UseIterator it(this); !it.Done(); it.Advance()) { + LUnallocated* operand = LUnallocated::cast(it.Current()); + DCHECK(operand->HasFixedPolicy() || + operand->IsUsedAtStart()); + } + for (TempIterator it(this); !it.Done(); it.Advance()) { + LUnallocated* operand = LUnallocated::cast(it.Current()); + DCHECK(operand->HasFixedPolicy() ||!operand->HasRegisterPolicy()); + } +} +#endif + + +void LInstruction::PrintTo(StringStream* stream) { + stream->Add("%s ", this->Mnemonic()); + + PrintOutputOperandTo(stream); + + PrintDataTo(stream); + + if (HasEnvironment()) { + stream->Add(" "); + environment()->PrintTo(stream); + } + + if (HasPointerMap()) { + stream->Add(" "); + pointer_map()->PrintTo(stream); + } +} + + +void LInstruction::PrintDataTo(StringStream* stream) { + stream->Add("= "); + for (int i = 0; i < InputCount(); i++) { + if (i > 0) stream->Add(" "); + if (InputAt(i) == NULL) { + stream->Add("NULL"); + } else { + InputAt(i)->PrintTo(stream); + } + } +} + + +void LInstruction::PrintOutputOperandTo(StringStream* stream) { + if (HasResult()) result()->PrintTo(stream); +} + + +void LLabel::PrintDataTo(StringStream* stream) { + LGap::PrintDataTo(stream); + LLabel* rep = replacement(); + if (rep != NULL) { + stream->Add(" Dead block replaced with B%d", rep->block_id()); + } +} + + +bool LGap::IsRedundant() const { + for (int i = 0; i < 4; i++) { + if (parallel_moves_[i] != NULL && !parallel_moves_[i]->IsRedundant()) { + return false; + } + } + + return true; +} + + +void LGap::PrintDataTo(StringStream* stream) { + for (int i = 0; i < 4; i++) { + stream->Add("("); + if (parallel_moves_[i] != NULL) { + parallel_moves_[i]->PrintDataTo(stream); + } + stream->Add(") "); + } +} + + +const char* LArithmeticD::Mnemonic() const { + switch (op()) { + case Token::ADD: return "add-d"; + case Token::SUB: return "sub-d"; + case Token::MUL: return "mul-d"; + case Token::DIV: return "div-d"; + case Token::MOD: return "mod-d"; + default: + UNREACHABLE(); + return NULL; + } +} + + +const char* LArithmeticT::Mnemonic() const { + switch (op()) { + case Token::ADD: return "add-t"; + case Token::SUB: return "sub-t"; + case Token::MUL: return "mul-t"; + case Token::MOD: return "mod-t"; + case Token::DIV: return "div-t"; + case Token::BIT_AND: return "bit-and-t"; + case Token::BIT_OR: return "bit-or-t"; + case Token::BIT_XOR: return "bit-xor-t"; + case Token::ROR: return "ror-t"; + case Token::SHL: return "sll-t"; + case Token::SAR: return "sra-t"; + case Token::SHR: return "srl-t"; + default: + UNREACHABLE(); + return NULL; + } +} + + +bool LGoto::HasInterestingComment(LCodeGen* gen) const { + return !gen->IsNextEmittedBlock(block_id()); +} + + +void LGoto::PrintDataTo(StringStream* stream) { + stream->Add("B%d", block_id()); +} + + +void LBranch::PrintDataTo(StringStream* stream) { + stream->Add("B%d | B%d on ", true_block_id(), false_block_id()); + value()->PrintTo(stream); +} + + +LInstruction* LChunkBuilder::DoDebugBreak(HDebugBreak* instr) { + return new(zone()) LDebugBreak(); +} + + +void LCompareNumericAndBranch::PrintDataTo(StringStream* stream) { + stream->Add("if "); + left()->PrintTo(stream); + stream->Add(" %s ", Token::String(op())); + right()->PrintTo(stream); + stream->Add(" then B%d else B%d", true_block_id(), false_block_id()); +} + + +void LIsObjectAndBranch::PrintDataTo(StringStream* stream) { + stream->Add("if is_object("); + value()->PrintTo(stream); + 
stream->Add(") then B%d else B%d", true_block_id(), false_block_id()); +} + + +void LIsStringAndBranch::PrintDataTo(StringStream* stream) { + stream->Add("if is_string("); + value()->PrintTo(stream); + stream->Add(") then B%d else B%d", true_block_id(), false_block_id()); +} + + +void LIsSmiAndBranch::PrintDataTo(StringStream* stream) { + stream->Add("if is_smi("); + value()->PrintTo(stream); + stream->Add(") then B%d else B%d", true_block_id(), false_block_id()); +} + + +void LIsUndetectableAndBranch::PrintDataTo(StringStream* stream) { + stream->Add("if is_undetectable("); + value()->PrintTo(stream); + stream->Add(") then B%d else B%d", true_block_id(), false_block_id()); +} + + +void LStringCompareAndBranch::PrintDataTo(StringStream* stream) { + stream->Add("if string_compare("); + left()->PrintTo(stream); + right()->PrintTo(stream); + stream->Add(") then B%d else B%d", true_block_id(), false_block_id()); +} + + +void LHasInstanceTypeAndBranch::PrintDataTo(StringStream* stream) { + stream->Add("if has_instance_type("); + value()->PrintTo(stream); + stream->Add(") then B%d else B%d", true_block_id(), false_block_id()); +} + + +void LHasCachedArrayIndexAndBranch::PrintDataTo(StringStream* stream) { + stream->Add("if has_cached_array_index("); + value()->PrintTo(stream); + stream->Add(") then B%d else B%d", true_block_id(), false_block_id()); +} + + +void LClassOfTestAndBranch::PrintDataTo(StringStream* stream) { + stream->Add("if class_of_test("); + value()->PrintTo(stream); + stream->Add(", \"%o\") then B%d else B%d", + *hydrogen()->class_name(), + true_block_id(), + false_block_id()); +} + + +void LTypeofIsAndBranch::PrintDataTo(StringStream* stream) { + stream->Add("if typeof "); + value()->PrintTo(stream); + stream->Add(" == \"%s\" then B%d else B%d", + hydrogen()->type_literal()->ToCString().get(), + true_block_id(), false_block_id()); +} + + +void LStoreCodeEntry::PrintDataTo(StringStream* stream) { + stream->Add(" = "); + function()->PrintTo(stream); + stream->Add(".code_entry = "); + code_object()->PrintTo(stream); +} + + +void LInnerAllocatedObject::PrintDataTo(StringStream* stream) { + stream->Add(" = "); + base_object()->PrintTo(stream); + stream->Add(" + "); + offset()->PrintTo(stream); +} + + +void LCallJSFunction::PrintDataTo(StringStream* stream) { + stream->Add("= "); + function()->PrintTo(stream); + stream->Add("#%d / ", arity()); +} + + +void LCallWithDescriptor::PrintDataTo(StringStream* stream) { + for (int i = 0; i < InputCount(); i++) { + InputAt(i)->PrintTo(stream); + stream->Add(" "); + } + stream->Add("#%d / ", arity()); +} + + +void LLoadContextSlot::PrintDataTo(StringStream* stream) { + context()->PrintTo(stream); + stream->Add("[%d]", slot_index()); +} + + +void LStoreContextSlot::PrintDataTo(StringStream* stream) { + context()->PrintTo(stream); + stream->Add("[%d] <- ", slot_index()); + value()->PrintTo(stream); +} + + +void LInvokeFunction::PrintDataTo(StringStream* stream) { + stream->Add("= "); + function()->PrintTo(stream); + stream->Add(" #%d / ", arity()); +} + + +void LCallNew::PrintDataTo(StringStream* stream) { + stream->Add("= "); + constructor()->PrintTo(stream); + stream->Add(" #%d / ", arity()); +} + + +void LCallNewArray::PrintDataTo(StringStream* stream) { + stream->Add("= "); + constructor()->PrintTo(stream); + stream->Add(" #%d / ", arity()); + ElementsKind kind = hydrogen()->elements_kind(); + stream->Add(" (%s) ", ElementsKindToString(kind)); +} + + +void LAccessArgumentsAt::PrintDataTo(StringStream* stream) { + arguments()->PrintTo(stream); 
+  stream->Add(" length ");
+  length()->PrintTo(stream);
+  stream->Add(" index ");
+  index()->PrintTo(stream);
+}
+
+
+void LStoreNamedField::PrintDataTo(StringStream* stream) {
+  object()->PrintTo(stream);
+  OStringStream os;
+  os << hydrogen()->access() << " <- ";
+  stream->Add(os.c_str());
+  value()->PrintTo(stream);
+}
+
+
+void LStoreNamedGeneric::PrintDataTo(StringStream* stream) {
+  object()->PrintTo(stream);
+  stream->Add(".");
+  stream->Add(String::cast(*name())->ToCString().get());
+  stream->Add(" <- ");
+  value()->PrintTo(stream);
+}
+
+
+void LLoadKeyed::PrintDataTo(StringStream* stream) {
+  elements()->PrintTo(stream);
+  stream->Add("[");
+  key()->PrintTo(stream);
+  if (hydrogen()->IsDehoisted()) {
+    stream->Add(" + %d]", base_offset());
+  } else {
+    stream->Add("]");
+  }
+}
+
+
+void LStoreKeyed::PrintDataTo(StringStream* stream) {
+  elements()->PrintTo(stream);
+  stream->Add("[");
+  key()->PrintTo(stream);
+  if (hydrogen()->IsDehoisted()) {
+    stream->Add(" + %d] <-", base_offset());
+  } else {
+    stream->Add("] <- ");
+  }
+
+  if (value() == NULL) {
+    DCHECK(hydrogen()->IsConstantHoleStore() &&
+           hydrogen()->value()->representation().IsDouble());
+    stream->Add("<the hole(nan)>");
+  } else {
+    value()->PrintTo(stream);
+  }
+}
+
+
+void LStoreKeyedGeneric::PrintDataTo(StringStream* stream) {
+  object()->PrintTo(stream);
+  stream->Add("[");
+  key()->PrintTo(stream);
+  stream->Add("] <- ");
+  value()->PrintTo(stream);
+}
+
+
+void LTransitionElementsKind::PrintDataTo(StringStream* stream) {
+  object()->PrintTo(stream);
+  stream->Add(" %p -> %p", *original_map(), *transitioned_map());
+}
+
+
+int LPlatformChunk::GetNextSpillIndex(RegisterKind kind) {
+  // Skip a slot if allocating a double-width slot.
+  if (kind == DOUBLE_REGISTERS) spill_slot_count_++;
+  return spill_slot_count_++;
+}
+
+
+LOperand* LPlatformChunk::GetNextSpillSlot(RegisterKind kind) {
+  int index = GetNextSpillIndex(kind);
+  if (kind == DOUBLE_REGISTERS) {
+    return LDoubleStackSlot::Create(index, zone());
+  } else {
+    DCHECK(kind == GENERAL_REGISTERS);
+    return LStackSlot::Create(index, zone());
+  }
+}
+
+
+LPlatformChunk* LChunkBuilder::Build() {
+  DCHECK(is_unused());
+  chunk_ = new(zone()) LPlatformChunk(info(), graph());
+  LPhase phase("L_Building chunk", chunk_);
+  status_ = BUILDING;
+
+  // If compiling for OSR, reserve space for the unoptimized frame,
+  // which will be subsumed into this frame.
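+  // Each GetNextSpillIndex() call below claims one spill slot, so the
+  // unoptimized frame's slots become the lowest-numbered spill slots of
+  // the optimized frame.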
+ if (graph()->has_osr()) { + for (int i = graph()->osr()->UnoptimizedFrameSlots(); i > 0; i--) { + chunk_->GetNextSpillIndex(GENERAL_REGISTERS); + } + } + + const ZoneList<HBasicBlock*>* blocks = graph()->blocks(); + for (int i = 0; i < blocks->length(); i++) { + HBasicBlock* next = NULL; + if (i < blocks->length() - 1) next = blocks->at(i + 1); + DoBasicBlock(blocks->at(i), next); + if (is_aborted()) return NULL; + } + status_ = DONE; + return chunk_; +} + + +void LChunkBuilder::Abort(BailoutReason reason) { + info()->set_bailout_reason(reason); + status_ = ABORTED; +} + + +LUnallocated* LChunkBuilder::ToUnallocated(Register reg) { + return new(zone()) LUnallocated(LUnallocated::FIXED_REGISTER, + Register::ToAllocationIndex(reg)); +} + + +LUnallocated* LChunkBuilder::ToUnallocated(DoubleRegister reg) { + return new(zone()) LUnallocated(LUnallocated::FIXED_DOUBLE_REGISTER, + DoubleRegister::ToAllocationIndex(reg)); +} + + +LOperand* LChunkBuilder::UseFixed(HValue* value, Register fixed_register) { + return Use(value, ToUnallocated(fixed_register)); +} + + +LOperand* LChunkBuilder::UseFixedDouble(HValue* value, DoubleRegister reg) { + return Use(value, ToUnallocated(reg)); +} + + +LOperand* LChunkBuilder::UseRegister(HValue* value) { + return Use(value, new(zone()) LUnallocated(LUnallocated::MUST_HAVE_REGISTER)); +} + + +LOperand* LChunkBuilder::UseRegisterAtStart(HValue* value) { + return Use(value, + new(zone()) LUnallocated(LUnallocated::MUST_HAVE_REGISTER, + LUnallocated::USED_AT_START)); +} + + +LOperand* LChunkBuilder::UseTempRegister(HValue* value) { + return Use(value, new(zone()) LUnallocated(LUnallocated::WRITABLE_REGISTER)); +} + + +LOperand* LChunkBuilder::Use(HValue* value) { + return Use(value, new(zone()) LUnallocated(LUnallocated::NONE)); +} + + +LOperand* LChunkBuilder::UseAtStart(HValue* value) { + return Use(value, new(zone()) LUnallocated(LUnallocated::NONE, + LUnallocated::USED_AT_START)); +} + + +LOperand* LChunkBuilder::UseOrConstant(HValue* value) { + return value->IsConstant() + ? chunk_->DefineConstantOperand(HConstant::cast(value)) + : Use(value); +} + + +LOperand* LChunkBuilder::UseOrConstantAtStart(HValue* value) { + return value->IsConstant() + ? chunk_->DefineConstantOperand(HConstant::cast(value)) + : UseAtStart(value); +} + + +LOperand* LChunkBuilder::UseRegisterOrConstant(HValue* value) { + return value->IsConstant() + ? chunk_->DefineConstantOperand(HConstant::cast(value)) + : UseRegister(value); +} + + +LOperand* LChunkBuilder::UseRegisterOrConstantAtStart(HValue* value) { + return value->IsConstant() + ? chunk_->DefineConstantOperand(HConstant::cast(value)) + : UseRegisterAtStart(value); +} + + +LOperand* LChunkBuilder::UseConstant(HValue* value) { + return chunk_->DefineConstantOperand(HConstant::cast(value)); +} + + +LOperand* LChunkBuilder::UseAny(HValue* value) { + return value->IsConstant() + ? 
chunk_->DefineConstantOperand(HConstant::cast(value))
+      : Use(value, new(zone()) LUnallocated(LUnallocated::ANY));
+}
+
+
+LOperand* LChunkBuilder::Use(HValue* value, LUnallocated* operand) {
+  if (value->EmitAtUses()) {
+    HInstruction* instr = HInstruction::cast(value);
+    VisitInstruction(instr);
+  }
+  operand->set_virtual_register(value->id());
+  return operand;
+}
+
+
+LInstruction* LChunkBuilder::Define(LTemplateResultInstruction<1>* instr,
+                                    LUnallocated* result) {
+  result->set_virtual_register(current_instruction_->id());
+  instr->set_result(result);
+  return instr;
+}
+
+
+LInstruction* LChunkBuilder::DefineAsRegister(
+    LTemplateResultInstruction<1>* instr) {
+  return Define(instr,
+                new(zone()) LUnallocated(LUnallocated::MUST_HAVE_REGISTER));
+}
+
+
+LInstruction* LChunkBuilder::DefineAsSpilled(
+    LTemplateResultInstruction<1>* instr, int index) {
+  return Define(instr,
+                new(zone()) LUnallocated(LUnallocated::FIXED_SLOT, index));
+}
+
+
+LInstruction* LChunkBuilder::DefineSameAsFirst(
+    LTemplateResultInstruction<1>* instr) {
+  return Define(instr,
+                new(zone()) LUnallocated(LUnallocated::SAME_AS_FIRST_INPUT));
+}
+
+
+LInstruction* LChunkBuilder::DefineFixed(
+    LTemplateResultInstruction<1>* instr, Register reg) {
+  return Define(instr, ToUnallocated(reg));
+}
+
+
+LInstruction* LChunkBuilder::DefineFixedDouble(
+    LTemplateResultInstruction<1>* instr, DoubleRegister reg) {
+  return Define(instr, ToUnallocated(reg));
+}
+
+
+LInstruction* LChunkBuilder::AssignEnvironment(LInstruction* instr) {
+  HEnvironment* hydrogen_env = current_block_->last_environment();
+  int argument_index_accumulator = 0;
+  ZoneList<HValue*> objects_to_materialize(0, zone());
+  instr->set_environment(CreateEnvironment(hydrogen_env,
+                                           &argument_index_accumulator,
+                                           &objects_to_materialize));
+  return instr;
+}
+
+
+LInstruction* LChunkBuilder::MarkAsCall(LInstruction* instr,
+                                        HInstruction* hinstr,
+                                        CanDeoptimize can_deoptimize) {
+  info()->MarkAsNonDeferredCalling();
+#ifdef DEBUG
+  instr->VerifyCall();
+#endif
+  instr->MarkAsCall();
+  instr = AssignPointerMap(instr);
+
+  // If the instruction does not have side effects, lazy deoptimization
+  // after the call will try to deoptimize to the point before the call.
+  // Thus we still need to attach an environment to this call even if
+  // the call sequence cannot deoptimize eagerly.
+  bool needs_environment =
+      (can_deoptimize == CAN_DEOPTIMIZE_EAGERLY) ||
+      !hinstr->HasObservableSideEffects();
+  if (needs_environment && !instr->HasEnvironment()) {
+    instr = AssignEnvironment(instr);
+    // We can't really figure out if the environment is needed or not.
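+    // Marking the environment as used keeps debug checks for abandoned
+    // environments quiet (an assumption about set_has_been_used()'s purpose).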
+ instr->environment()->set_has_been_used(); + } + + return instr; +} + + +LInstruction* LChunkBuilder::AssignPointerMap(LInstruction* instr) { + DCHECK(!instr->HasPointerMap()); + instr->set_pointer_map(new(zone()) LPointerMap(zone())); + return instr; +} + + +LUnallocated* LChunkBuilder::TempRegister() { + LUnallocated* operand = + new(zone()) LUnallocated(LUnallocated::MUST_HAVE_REGISTER); + int vreg = allocator_->GetVirtualRegister(); + if (!allocator_->AllocationOk()) { + Abort(kOutOfVirtualRegistersWhileTryingToAllocateTempRegister); + vreg = 0; + } + operand->set_virtual_register(vreg); + return operand; +} + + +LUnallocated* LChunkBuilder::TempDoubleRegister() { + LUnallocated* operand = + new(zone()) LUnallocated(LUnallocated::MUST_HAVE_DOUBLE_REGISTER); + int vreg = allocator_->GetVirtualRegister(); + if (!allocator_->AllocationOk()) { + Abort(kOutOfVirtualRegistersWhileTryingToAllocateTempRegister); + vreg = 0; + } + operand->set_virtual_register(vreg); + return operand; +} + + +LOperand* LChunkBuilder::FixedTemp(Register reg) { + LUnallocated* operand = ToUnallocated(reg); + DCHECK(operand->HasFixedPolicy()); + return operand; +} + + +LOperand* LChunkBuilder::FixedTemp(DoubleRegister reg) { + LUnallocated* operand = ToUnallocated(reg); + DCHECK(operand->HasFixedPolicy()); + return operand; +} + + +LInstruction* LChunkBuilder::DoBlockEntry(HBlockEntry* instr) { + return new(zone()) LLabel(instr->block()); +} + + +LInstruction* LChunkBuilder::DoDummyUse(HDummyUse* instr) { + return DefineAsRegister(new(zone()) LDummyUse(UseAny(instr->value()))); +} + + +LInstruction* LChunkBuilder::DoEnvironmentMarker(HEnvironmentMarker* instr) { + UNREACHABLE(); + return NULL; +} + + +LInstruction* LChunkBuilder::DoDeoptimize(HDeoptimize* instr) { + return AssignEnvironment(new(zone()) LDeoptimize); +} + + +LInstruction* LChunkBuilder::DoShift(Token::Value op, + HBitwiseBinaryOperation* instr) { + if (instr->representation().IsSmiOrInteger32()) { + DCHECK(instr->left()->representation().Equals(instr->representation())); + DCHECK(instr->right()->representation().Equals(instr->representation())); + LOperand* left = UseRegisterAtStart(instr->left()); + + HValue* right_value = instr->right(); + LOperand* right = NULL; + int constant_value = 0; + bool does_deopt = false; + if (right_value->IsConstant()) { + HConstant* constant = HConstant::cast(right_value); + right = chunk_->DefineConstantOperand(constant); + constant_value = constant->Integer32Value() & 0x1f; + // Left shifts can deoptimize if we shift by > 0 and the result cannot be + // truncated to smi. + if (instr->representation().IsSmi() && constant_value > 0) { + does_deopt = !instr->CheckUsesForFlag(HValue::kTruncatingToSmi); + } + } else { + right = UseRegisterAtStart(right_value); + } + + // Shift operations can only deoptimize if we do a logical shift + // by 0 and the result cannot be truncated to int32. + if (op == Token::SHR && constant_value == 0) { + if (FLAG_opt_safe_uint32_operations) { + does_deopt = !instr->CheckFlag(HInstruction::kUint32); + } else { + does_deopt = !instr->CheckUsesForFlag(HValue::kTruncatingToInt32); + } + } + + LInstruction* result = + DefineAsRegister(new(zone()) LShiftI(op, left, right, does_deopt)); + return does_deopt ? 
AssignEnvironment(result) : result;
+  } else {
+    return DoArithmeticT(op, instr);
+  }
+}
+
+
+LInstruction* LChunkBuilder::DoArithmeticD(Token::Value op,
+                                           HArithmeticBinaryOperation* instr) {
+  DCHECK(instr->representation().IsDouble());
+  DCHECK(instr->left()->representation().IsDouble());
+  DCHECK(instr->right()->representation().IsDouble());
+  if (op == Token::MOD) {
+    LOperand* left = UseFixedDouble(instr->left(), f2);
+    LOperand* right = UseFixedDouble(instr->right(), f4);
+    LArithmeticD* result = new(zone()) LArithmeticD(op, left, right);
+    // We call a C function for double modulo. It can't trigger a GC. We
+    // need to use a fixed result register for the call.
+    // TODO(fschneider): Allow any register as input registers.
+    return MarkAsCall(DefineFixedDouble(result, f2), instr);
+  } else {
+    LOperand* left = UseRegisterAtStart(instr->left());
+    LOperand* right = UseRegisterAtStart(instr->right());
+    LArithmeticD* result = new(zone()) LArithmeticD(op, left, right);
+    return DefineAsRegister(result);
+  }
+}
+
+
+LInstruction* LChunkBuilder::DoArithmeticT(Token::Value op,
+                                           HBinaryOperation* instr) {
+  HValue* left = instr->left();
+  HValue* right = instr->right();
+  DCHECK(left->representation().IsTagged());
+  DCHECK(right->representation().IsTagged());
+  LOperand* context = UseFixed(instr->context(), cp);
+  LOperand* left_operand = UseFixed(left, a1);
+  LOperand* right_operand = UseFixed(right, a0);
+  LArithmeticT* result =
+      new(zone()) LArithmeticT(op, context, left_operand, right_operand);
+  return MarkAsCall(DefineFixed(result, v0), instr);
+}
+
+
+void LChunkBuilder::DoBasicBlock(HBasicBlock* block, HBasicBlock* next_block) {
+  DCHECK(is_building());
+  current_block_ = block;
+  next_block_ = next_block;
+  if (block->IsStartBlock()) {
+    block->UpdateEnvironment(graph_->start_environment());
+    argument_count_ = 0;
+  } else if (block->predecessors()->length() == 1) {
+    // We have a single predecessor => copy environment and outgoing
+    // argument count from the predecessor.
+    DCHECK(block->phis()->length() == 0);
+    HBasicBlock* pred = block->predecessors()->at(0);
+    HEnvironment* last_environment = pred->last_environment();
+    DCHECK(last_environment != NULL);
+    // Only copy the environment if it is later used again.
+    if (pred->end()->SecondSuccessor() == NULL) {
+      DCHECK(pred->end()->FirstSuccessor() == block);
+    } else {
+      if (pred->end()->FirstSuccessor()->block_id() > block->block_id() ||
+          pred->end()->SecondSuccessor()->block_id() > block->block_id()) {
+        last_environment = last_environment->Copy();
+      }
+    }
+    block->UpdateEnvironment(last_environment);
+    DCHECK(pred->argument_count() >= 0);
+    argument_count_ = pred->argument_count();
+  } else {
+    // We are at a state join => process phis.
+    HBasicBlock* pred = block->predecessors()->at(0);
+    // No need to copy the environment, it cannot be used later.
+    HEnvironment* last_environment = pred->last_environment();
+    for (int i = 0; i < block->phis()->length(); ++i) {
+      HPhi* phi = block->phis()->at(i);
+      if (phi->HasMergedIndex()) {
+        last_environment->SetValueAt(phi->merged_index(), phi);
+      }
+    }
+    for (int i = 0; i < block->deleted_phis()->length(); ++i) {
+      if (block->deleted_phis()->at(i) < last_environment->length()) {
+        last_environment->SetValueAt(block->deleted_phis()->at(i),
+                                     graph_->GetConstantUndefined());
+      }
+    }
+    block->UpdateEnvironment(last_environment);
+    // Pick up the outgoing argument count of one of the predecessors.
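+    // (Any predecessor will do: outgoing argument counts must agree at a
+    // join.)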
+    argument_count_ = pred->argument_count();
+  }
+  HInstruction* current = block->first();
+  int start = chunk_->instructions()->length();
+  while (current != NULL && !is_aborted()) {
+    // Code for constants in registers is generated lazily.
+    if (!current->EmitAtUses()) {
+      VisitInstruction(current);
+    }
+    current = current->next();
+  }
+  int end = chunk_->instructions()->length() - 1;
+  if (end >= start) {
+    block->set_first_instruction_index(start);
+    block->set_last_instruction_index(end);
+  }
+  block->set_argument_count(argument_count_);
+  next_block_ = NULL;
+  current_block_ = NULL;
+}
+
+
+void LChunkBuilder::VisitInstruction(HInstruction* current) {
+  HInstruction* old_current = current_instruction_;
+  current_instruction_ = current;
+
+  LInstruction* instr = NULL;
+  if (current->CanReplaceWithDummyUses()) {
+    if (current->OperandCount() == 0) {
+      instr = DefineAsRegister(new(zone()) LDummy());
+    } else {
+      DCHECK(!current->OperandAt(0)->IsControlInstruction());
+      instr = DefineAsRegister(new(zone())
+          LDummyUse(UseAny(current->OperandAt(0))));
+    }
+    for (int i = 1; i < current->OperandCount(); ++i) {
+      if (current->OperandAt(i)->IsControlInstruction()) continue;
+      LInstruction* dummy =
+          new(zone()) LDummyUse(UseAny(current->OperandAt(i)));
+      dummy->set_hydrogen_value(current);
+      chunk_->AddInstruction(dummy, current_block_);
+    }
+  } else {
+    HBasicBlock* successor;
+    if (current->IsControlInstruction() &&
+        HControlInstruction::cast(current)->KnownSuccessorBlock(&successor) &&
+        successor != NULL) {
+      instr = new(zone()) LGoto(successor);
+    } else {
+      instr = current->CompileToLithium(this);
+    }
+  }
+
+  argument_count_ += current->argument_delta();
+  DCHECK(argument_count_ >= 0);
+
+  if (instr != NULL) {
+    AddInstruction(instr, current);
+  }
+
+  current_instruction_ = old_current;
+}
+
+
+void LChunkBuilder::AddInstruction(LInstruction* instr,
+                                   HInstruction* hydrogen_val) {
+  // Associate the hydrogen instruction first, since we may need it for
+  // the ClobbersRegisters() or ClobbersDoubleRegisters() calls below.
+  instr->set_hydrogen_value(hydrogen_val);
+
+#if DEBUG
+  // Make sure that the lithium instruction has either no fixed register
+  // constraints in temps or the result OR no uses that are only used at
+  // start. If this invariant doesn't hold, the register allocator can decide
+  // to insert a split of a range immediately before the instruction due to an
+  // already allocated register needing to be used for the instruction's fixed
+  // register constraint. In this case, the register allocator won't see an
+  // interference between the split child and the use-at-start (it would if
+  // it was just a plain use), so it is free to move the split child into
+  // the same register that is used for the use-at-start.
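+  // That would silently clobber the use-at-start input; the DCHECK below
+  // enforces the invariant.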
+ // See https://code.google.com/p/chromium/issues/detail?id=201590 + if (!(instr->ClobbersRegisters() && + instr->ClobbersDoubleRegisters(isolate()))) { + int fixed = 0; + int used_at_start = 0; + for (UseIterator it(instr); !it.Done(); it.Advance()) { + LUnallocated* operand = LUnallocated::cast(it.Current()); + if (operand->IsUsedAtStart()) ++used_at_start; + } + if (instr->Output() != NULL) { + if (LUnallocated::cast(instr->Output())->HasFixedPolicy()) ++fixed; + } + for (TempIterator it(instr); !it.Done(); it.Advance()) { + LUnallocated* operand = LUnallocated::cast(it.Current()); + if (operand->HasFixedPolicy()) ++fixed; + } + DCHECK(fixed == 0 || used_at_start == 0); + } +#endif + + if (FLAG_stress_pointer_maps && !instr->HasPointerMap()) { + instr = AssignPointerMap(instr); + } + if (FLAG_stress_environments && !instr->HasEnvironment()) { + instr = AssignEnvironment(instr); + } + chunk_->AddInstruction(instr, current_block_); + + if (instr->IsCall()) { + HValue* hydrogen_value_for_lazy_bailout = hydrogen_val; + LInstruction* instruction_needing_environment = NULL; + if (hydrogen_val->HasObservableSideEffects()) { + HSimulate* sim = HSimulate::cast(hydrogen_val->next()); + instruction_needing_environment = instr; + sim->ReplayEnvironment(current_block_->last_environment()); + hydrogen_value_for_lazy_bailout = sim; + } + LInstruction* bailout = AssignEnvironment(new(zone()) LLazyBailout()); + bailout->set_hydrogen_value(hydrogen_value_for_lazy_bailout); + chunk_->AddInstruction(bailout, current_block_); + if (instruction_needing_environment != NULL) { + // Store the lazy deopt environment with the instruction if needed. + // Right now it is only used for LInstanceOfKnownGlobal. + instruction_needing_environment-> + SetDeferredLazyDeoptimizationEnvironment(bailout->environment()); + } + } +} + + +LInstruction* LChunkBuilder::DoGoto(HGoto* instr) { + return new(zone()) LGoto(instr->FirstSuccessor()); +} + + +LInstruction* LChunkBuilder::DoBranch(HBranch* instr) { + HValue* value = instr->value(); + Representation r = value->representation(); + HType type = value->type(); + ToBooleanStub::Types expected = instr->expected_input_types(); + if (expected.IsEmpty()) expected = ToBooleanStub::Types::Generic(); + + bool easy_case = !r.IsTagged() || type.IsBoolean() || type.IsSmi() || + type.IsJSArray() || type.IsHeapNumber() || type.IsString(); + LInstruction* branch = new(zone()) LBranch(UseRegister(value)); + if (!easy_case && + ((!expected.Contains(ToBooleanStub::SMI) && expected.NeedsMap()) || + !expected.IsGeneric())) { + branch = AssignEnvironment(branch); + } + return branch; +} + + +LInstruction* LChunkBuilder::DoCompareMap(HCompareMap* instr) { + DCHECK(instr->value()->representation().IsTagged()); + LOperand* value = UseRegisterAtStart(instr->value()); + LOperand* temp = TempRegister(); + return new(zone()) LCmpMapAndBranch(value, temp); +} + + +LInstruction* LChunkBuilder::DoArgumentsLength(HArgumentsLength* length) { + info()->MarkAsRequiresFrame(); + return DefineAsRegister( + new(zone()) LArgumentsLength(UseRegister(length->value()))); +} + + +LInstruction* LChunkBuilder::DoArgumentsElements(HArgumentsElements* elems) { + info()->MarkAsRequiresFrame(); + return DefineAsRegister(new(zone()) LArgumentsElements); +} + + +LInstruction* LChunkBuilder::DoInstanceOf(HInstanceOf* instr) { + LOperand* context = UseFixed(instr->context(), cp); + LInstanceOf* result = + new(zone()) LInstanceOf(context, UseFixed(instr->left(), a0), + UseFixed(instr->right(), a1)); + return 
MarkAsCall(DefineFixed(result, v0), instr); +} + + +LInstruction* LChunkBuilder::DoInstanceOfKnownGlobal( + HInstanceOfKnownGlobal* instr) { + LInstanceOfKnownGlobal* result = + new(zone()) LInstanceOfKnownGlobal( + UseFixed(instr->context(), cp), + UseFixed(instr->left(), a0), + FixedTemp(a4)); + return MarkAsCall(DefineFixed(result, v0), instr); +} + + +LInstruction* LChunkBuilder::DoWrapReceiver(HWrapReceiver* instr) { + LOperand* receiver = UseRegisterAtStart(instr->receiver()); + LOperand* function = UseRegisterAtStart(instr->function()); + LWrapReceiver* result = new(zone()) LWrapReceiver(receiver, function); + return AssignEnvironment(DefineAsRegister(result)); +} + + +LInstruction* LChunkBuilder::DoApplyArguments(HApplyArguments* instr) { + LOperand* function = UseFixed(instr->function(), a1); + LOperand* receiver = UseFixed(instr->receiver(), a0); + LOperand* length = UseFixed(instr->length(), a2); + LOperand* elements = UseFixed(instr->elements(), a3); + LApplyArguments* result = new(zone()) LApplyArguments(function, + receiver, + length, + elements); + return MarkAsCall(DefineFixed(result, v0), instr, CAN_DEOPTIMIZE_EAGERLY); +} + + +LInstruction* LChunkBuilder::DoPushArguments(HPushArguments* instr) { + int argc = instr->OperandCount(); + for (int i = 0; i < argc; ++i) { + LOperand* argument = Use(instr->argument(i)); + AddInstruction(new(zone()) LPushArgument(argument), instr); + } + return NULL; +} + + +LInstruction* LChunkBuilder::DoStoreCodeEntry( + HStoreCodeEntry* store_code_entry) { + LOperand* function = UseRegister(store_code_entry->function()); + LOperand* code_object = UseTempRegister(store_code_entry->code_object()); + return new(zone()) LStoreCodeEntry(function, code_object); +} + + +LInstruction* LChunkBuilder::DoInnerAllocatedObject( + HInnerAllocatedObject* instr) { + LOperand* base_object = UseRegisterAtStart(instr->base_object()); + LOperand* offset = UseRegisterOrConstantAtStart(instr->offset()); + return DefineAsRegister( + new(zone()) LInnerAllocatedObject(base_object, offset)); +} + + +LInstruction* LChunkBuilder::DoThisFunction(HThisFunction* instr) { + return instr->HasNoUses() + ? 
NULL + : DefineAsRegister(new(zone()) LThisFunction); +} + + +LInstruction* LChunkBuilder::DoContext(HContext* instr) { + if (instr->HasNoUses()) return NULL; + + if (info()->IsStub()) { + return DefineFixed(new(zone()) LContext, cp); + } + + return DefineAsRegister(new(zone()) LContext); +} + + +LInstruction* LChunkBuilder::DoDeclareGlobals(HDeclareGlobals* instr) { + LOperand* context = UseFixed(instr->context(), cp); + return MarkAsCall(new(zone()) LDeclareGlobals(context), instr); +} + + +LInstruction* LChunkBuilder::DoCallJSFunction( + HCallJSFunction* instr) { + LOperand* function = UseFixed(instr->function(), a1); + + LCallJSFunction* result = new(zone()) LCallJSFunction(function); + + return MarkAsCall(DefineFixed(result, v0), instr); +} + + +LInstruction* LChunkBuilder::DoCallWithDescriptor( + HCallWithDescriptor* instr) { + const InterfaceDescriptor* descriptor = instr->descriptor(); + + LOperand* target = UseRegisterOrConstantAtStart(instr->target()); + ZoneList<LOperand*> ops(instr->OperandCount(), zone()); + ops.Add(target, zone()); + for (int i = 1; i < instr->OperandCount(); i++) { + LOperand* op = UseFixed(instr->OperandAt(i), + descriptor->GetParameterRegister(i - 1)); + ops.Add(op, zone()); + } + + LCallWithDescriptor* result = new(zone()) LCallWithDescriptor( + descriptor, ops, zone()); + return MarkAsCall(DefineFixed(result, v0), instr); +} + + +LInstruction* LChunkBuilder::DoInvokeFunction(HInvokeFunction* instr) { + LOperand* context = UseFixed(instr->context(), cp); + LOperand* function = UseFixed(instr->function(), a1); + LInvokeFunction* result = new(zone()) LInvokeFunction(context, function); + return MarkAsCall(DefineFixed(result, v0), instr, CANNOT_DEOPTIMIZE_EAGERLY); +} + + +LInstruction* LChunkBuilder::DoUnaryMathOperation(HUnaryMathOperation* instr) { + switch (instr->op()) { + case kMathFloor: + return DoMathFloor(instr); + case kMathRound: + return DoMathRound(instr); + case kMathFround: + return DoMathFround(instr); + case kMathAbs: + return DoMathAbs(instr); + case kMathLog: + return DoMathLog(instr); + case kMathExp: + return DoMathExp(instr); + case kMathSqrt: + return DoMathSqrt(instr); + case kMathPowHalf: + return DoMathPowHalf(instr); + case kMathClz32: + return DoMathClz32(instr); + default: + UNREACHABLE(); + return NULL; + } +} + + +LInstruction* LChunkBuilder::DoMathLog(HUnaryMathOperation* instr) { + DCHECK(instr->representation().IsDouble()); + DCHECK(instr->value()->representation().IsDouble()); + LOperand* input = UseFixedDouble(instr->value(), f4); + return MarkAsCall(DefineFixedDouble(new(zone()) LMathLog(input), f4), instr); +} + + +LInstruction* LChunkBuilder::DoMathClz32(HUnaryMathOperation* instr) { + LOperand* input = UseRegisterAtStart(instr->value()); + LMathClz32* result = new(zone()) LMathClz32(input); + return DefineAsRegister(result); +} + + +LInstruction* LChunkBuilder::DoMathExp(HUnaryMathOperation* instr) { + DCHECK(instr->representation().IsDouble()); + DCHECK(instr->value()->representation().IsDouble()); + LOperand* input = UseRegister(instr->value()); + LOperand* temp1 = TempRegister(); + LOperand* temp2 = TempRegister(); + LOperand* double_temp = TempDoubleRegister(); + LMathExp* result = new(zone()) LMathExp(input, double_temp, temp1, temp2); + return DefineAsRegister(result); +} + + +LInstruction* LChunkBuilder::DoMathPowHalf(HUnaryMathOperation* instr) { + // Input cannot be the same as the result, see LCodeGen::DoMathPowHalf. 
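+  // (Presumably the input is still live after the result is produced, hence
+  // the distinct fixed registers f8 and f4 below.)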
+  LOperand* input = UseFixedDouble(instr->value(), f8);
+  LOperand* temp = TempDoubleRegister();
+  LMathPowHalf* result = new(zone()) LMathPowHalf(input, temp);
+  return DefineFixedDouble(result, f4);
+}
+
+
+LInstruction* LChunkBuilder::DoMathFround(HUnaryMathOperation* instr) {
+  LOperand* input = UseRegister(instr->value());
+  LMathFround* result = new (zone()) LMathFround(input);
+  return DefineAsRegister(result);
+}
+
+
+LInstruction* LChunkBuilder::DoMathAbs(HUnaryMathOperation* instr) {
+  Representation r = instr->value()->representation();
+  LOperand* context = (r.IsDouble() || r.IsSmiOrInteger32())
+      ? NULL
+      : UseFixed(instr->context(), cp);
+  LOperand* input = UseRegister(instr->value());
+  LInstruction* result =
+      DefineAsRegister(new(zone()) LMathAbs(context, input));
+  if (!r.IsDouble() && !r.IsSmiOrInteger32()) result = AssignPointerMap(result);
+  if (!r.IsDouble()) result = AssignEnvironment(result);
+  return result;
+}
+
+
+LInstruction* LChunkBuilder::DoMathFloor(HUnaryMathOperation* instr) {
+  LOperand* input = UseRegister(instr->value());
+  LOperand* temp = TempRegister();
+  LMathFloor* result = new(zone()) LMathFloor(input, temp);
+  return AssignEnvironment(AssignPointerMap(DefineAsRegister(result)));
+}
+
+
+LInstruction* LChunkBuilder::DoMathSqrt(HUnaryMathOperation* instr) {
+  LOperand* input = UseRegister(instr->value());
+  LMathSqrt* result = new(zone()) LMathSqrt(input);
+  return DefineAsRegister(result);
+}
+
+
+LInstruction* LChunkBuilder::DoMathRound(HUnaryMathOperation* instr) {
+  LOperand* input = UseRegister(instr->value());
+  LOperand* temp = TempDoubleRegister();
+  LMathRound* result = new(zone()) LMathRound(input, temp);
+  return AssignEnvironment(DefineAsRegister(result));
+}
+
+
+LInstruction* LChunkBuilder::DoCallNew(HCallNew* instr) {
+  LOperand* context = UseFixed(instr->context(), cp);
+  LOperand* constructor = UseFixed(instr->constructor(), a1);
+  LCallNew* result = new(zone()) LCallNew(context, constructor);
+  return MarkAsCall(DefineFixed(result, v0), instr);
+}
+
+
+LInstruction* LChunkBuilder::DoCallNewArray(HCallNewArray* instr) {
+  LOperand* context = UseFixed(instr->context(), cp);
+  LOperand* constructor = UseFixed(instr->constructor(), a1);
+  LCallNewArray* result = new(zone()) LCallNewArray(context, constructor);
+  return MarkAsCall(DefineFixed(result, v0), instr);
+}
+
+
+LInstruction* LChunkBuilder::DoCallFunction(HCallFunction* instr) {
+  LOperand* context = UseFixed(instr->context(), cp);
+  LOperand* function = UseFixed(instr->function(), a1);
+  LCallFunction* call = new(zone()) LCallFunction(context, function);
+  return MarkAsCall(DefineFixed(call, v0), instr);
+}
+
+
+LInstruction* LChunkBuilder::DoCallRuntime(HCallRuntime* instr) {
+  LOperand* context = UseFixed(instr->context(), cp);
+  return MarkAsCall(DefineFixed(new(zone()) LCallRuntime(context), v0), instr);
+}
+
+
+LInstruction* LChunkBuilder::DoRor(HRor* instr) {
+  return DoShift(Token::ROR, instr);
+}
+
+
+LInstruction* LChunkBuilder::DoShr(HShr* instr) {
+  return DoShift(Token::SHR, instr);
+}
+
+
+LInstruction* LChunkBuilder::DoSar(HSar* instr) {
+  return DoShift(Token::SAR, instr);
+}
+
+
+LInstruction* LChunkBuilder::DoShl(HShl* instr) {
+  return DoShift(Token::SHL, instr);
+}
+
+
+LInstruction* LChunkBuilder::DoBitwise(HBitwise* instr) {
+  if (instr->representation().IsSmiOrInteger32()) {
+    DCHECK(instr->left()->representation().Equals(instr->representation()));
+    DCHECK(instr->right()->representation().Equals(instr->representation()));
+    DCHECK(instr->CheckFlag(HValue::kTruncatingToInt32));
+
+    LOperand* left = UseRegisterAtStart(instr->BetterLeftOperand());
+    LOperand* right = UseOrConstantAtStart(instr->BetterRightOperand());
+    return DefineAsRegister(new(zone()) LBitI(left, right));
+  } else {
+    return DoArithmeticT(instr->op(), instr);
+  }
+}
+
+
+LInstruction* LChunkBuilder::DoDivByPowerOf2I(HDiv* instr) {
+  DCHECK(instr->representation().IsSmiOrInteger32());
+  DCHECK(instr->left()->representation().Equals(instr->representation()));
+  DCHECK(instr->right()->representation().Equals(instr->representation()));
+  LOperand* dividend = UseRegister(instr->left());
+  int32_t divisor = instr->right()->GetInteger32Constant();
+  LInstruction* result = DefineAsRegister(new(zone()) LDivByPowerOf2I(
+      dividend, divisor));
+  if ((instr->CheckFlag(HValue::kBailoutOnMinusZero) && divisor < 0) ||
+      (instr->CheckFlag(HValue::kCanOverflow) && divisor == -1) ||
+      (!instr->CheckFlag(HInstruction::kAllUsesTruncatingToInt32) &&
+       divisor != 1 && divisor != -1)) {
+    result = AssignEnvironment(result);
+  }
+  return result;
+}
+
+
+LInstruction* LChunkBuilder::DoDivByConstI(HDiv* instr) {
+  DCHECK(instr->representation().IsInteger32());
+  DCHECK(instr->left()->representation().Equals(instr->representation()));
+  DCHECK(instr->right()->representation().Equals(instr->representation()));
+  LOperand* dividend = UseRegister(instr->left());
+  int32_t divisor = instr->right()->GetInteger32Constant();
+  LInstruction* result = DefineAsRegister(new(zone()) LDivByConstI(
+      dividend, divisor));
+  if (divisor == 0 ||
+      (instr->CheckFlag(HValue::kBailoutOnMinusZero) && divisor < 0) ||
+      !instr->CheckFlag(HInstruction::kAllUsesTruncatingToInt32)) {
+    result = AssignEnvironment(result);
+  }
+  return result;
+}
+
+
+LInstruction* LChunkBuilder::DoDivI(HDiv* instr) {
+  DCHECK(instr->representation().IsSmiOrInteger32());
+  DCHECK(instr->left()->representation().Equals(instr->representation()));
+  DCHECK(instr->right()->representation().Equals(instr->representation()));
+  LOperand* dividend = UseRegister(instr->left());
+  LOperand* divisor = UseRegister(instr->right());
+  LOperand* temp = instr->CheckFlag(HInstruction::kAllUsesTruncatingToInt32)
+      ? NULL : TempRegister();
+  LInstruction* result =
+      DefineAsRegister(new(zone()) LDivI(dividend, divisor, temp));
+  if (instr->CheckFlag(HValue::kCanBeDivByZero) ||
+      instr->CheckFlag(HValue::kBailoutOnMinusZero) ||
+      (instr->CheckFlag(HValue::kCanOverflow) &&
+       !instr->CheckFlag(HValue::kAllUsesTruncatingToInt32)) ||
+      (!instr->IsMathFloorOfDiv() &&
+       !instr->CheckFlag(HValue::kAllUsesTruncatingToInt32))) {
+    result = AssignEnvironment(result);
+  }
+  return result;
+}
+
+
+LInstruction* LChunkBuilder::DoDiv(HDiv* instr) {
+  if (instr->representation().IsSmiOrInteger32()) {
+    if (instr->RightIsPowerOf2()) {
+      return DoDivByPowerOf2I(instr);
+    } else if (instr->right()->IsConstant()) {
+      return DoDivByConstI(instr);
+    } else {
+      return DoDivI(instr);
+    }
+  } else if (instr->representation().IsDouble()) {
+    return DoArithmeticD(Token::DIV, instr);
+  } else {
+    return DoArithmeticT(Token::DIV, instr);
+  }
+}
+
+
+LInstruction* LChunkBuilder::DoFlooringDivByPowerOf2I(HMathFloorOfDiv* instr) {
+  LOperand* dividend = UseRegisterAtStart(instr->left());
+  int32_t divisor = instr->right()->GetInteger32Constant();
+  LInstruction* result = DefineAsRegister(new(zone()) LFlooringDivByPowerOf2I(
+      dividend, divisor));
+  if ((instr->CheckFlag(HValue::kBailoutOnMinusZero) && divisor < 0) ||
+      (instr->CheckFlag(HValue::kLeftCanBeMinInt) && divisor == -1)) {
+    result = AssignEnvironment(result);
+  }
+  return result;
+}
+
+
+LInstruction* LChunkBuilder::DoFlooringDivByConstI(HMathFloorOfDiv* instr) {
+  DCHECK(instr->representation().IsInteger32());
+  DCHECK(instr->left()->representation().Equals(instr->representation()));
+  DCHECK(instr->right()->representation().Equals(instr->representation()));
+  LOperand* dividend = UseRegister(instr->left());
+  int32_t divisor = instr->right()->GetInteger32Constant();
+  LOperand* temp =
+      ((divisor > 0 && !instr->CheckFlag(HValue::kLeftCanBeNegative)) ||
+       (divisor < 0 && !instr->CheckFlag(HValue::kLeftCanBePositive))) ?
+          NULL : TempRegister();
+  LInstruction* result = DefineAsRegister(
+      new(zone()) LFlooringDivByConstI(dividend, divisor, temp));
+  if (divisor == 0 ||
+      (instr->CheckFlag(HValue::kBailoutOnMinusZero) && divisor < 0)) {
+    result = AssignEnvironment(result);
+  }
+  return result;
+}
+
+
+LInstruction* LChunkBuilder::DoFlooringDivI(HMathFloorOfDiv* instr) {
+  DCHECK(instr->representation().IsSmiOrInteger32());
+  DCHECK(instr->left()->representation().Equals(instr->representation()));
+  DCHECK(instr->right()->representation().Equals(instr->representation()));
+  LOperand* dividend = UseRegister(instr->left());
+  LOperand* divisor = UseRegister(instr->right());
+  LFlooringDivI* div = new(zone()) LFlooringDivI(dividend, divisor);
+  return AssignEnvironment(DefineAsRegister(div));
+}
+
+
+LInstruction* LChunkBuilder::DoMathFloorOfDiv(HMathFloorOfDiv* instr) {
+  if (instr->RightIsPowerOf2()) {
+    return DoFlooringDivByPowerOf2I(instr);
+  } else if (instr->right()->IsConstant()) {
+    return DoFlooringDivByConstI(instr);
+  } else {
+    return DoFlooringDivI(instr);
+  }
+}
+
+
+LInstruction* LChunkBuilder::DoModByPowerOf2I(HMod* instr) {
+  DCHECK(instr->representation().IsSmiOrInteger32());
+  DCHECK(instr->left()->representation().Equals(instr->representation()));
+  DCHECK(instr->right()->representation().Equals(instr->representation()));
+  LOperand* dividend = UseRegisterAtStart(instr->left());
+  int32_t divisor = instr->right()->GetInteger32Constant();
+  LInstruction* result = DefineSameAsFirst(new(zone()) LModByPowerOf2I(
+      dividend, divisor));
+  if (instr->CheckFlag(HValue::kLeftCanBeNegative) &&
+      instr->CheckFlag(HValue::kBailoutOnMinusZero)) {
+    result = AssignEnvironment(result);
+  }
+  return result;
+}
+
+
+LInstruction* LChunkBuilder::DoModByConstI(HMod* instr) {
+  DCHECK(instr->representation().IsSmiOrInteger32());
+  DCHECK(instr->left()->representation().Equals(instr->representation()));
+  DCHECK(instr->right()->representation().Equals(instr->representation()));
+  LOperand* dividend = UseRegister(instr->left());
+  int32_t divisor = instr->right()->GetInteger32Constant();
+  LInstruction* result = DefineAsRegister(new(zone()) LModByConstI(
+      dividend, divisor));
+  if (divisor == 0 || instr->CheckFlag(HValue::kBailoutOnMinusZero)) {
+    result = AssignEnvironment(result);
+  }
+  return result;
+}
+
+
+LInstruction* LChunkBuilder::DoModI(HMod* instr) {
+  DCHECK(instr->representation().IsSmiOrInteger32());
+  DCHECK(instr->left()->representation().Equals(instr->representation()));
+  DCHECK(instr->right()->representation().Equals(instr->representation()));
+  LOperand* dividend = UseRegister(instr->left());
+  LOperand* divisor = UseRegister(instr->right());
+  LInstruction* result = DefineAsRegister(new(zone()) LModI(
+      dividend, divisor));
+  if (instr->CheckFlag(HValue::kCanBeDivByZero) ||
+      instr->CheckFlag(HValue::kBailoutOnMinusZero)) {
+    result = AssignEnvironment(result);
+  }
+  return result;
+}
+
+
+LInstruction* LChunkBuilder::DoMod(HMod* instr) {
+  if (instr->representation().IsSmiOrInteger32()) {
+    return instr->RightIsPowerOf2() ? DoModByPowerOf2I(instr) : DoModI(instr);
+  } else if (instr->representation().IsDouble()) {
+    return DoArithmeticD(Token::MOD, instr);
+  } else {
+    return DoArithmeticT(Token::MOD, instr);
+  }
+}
+
+
+LInstruction* LChunkBuilder::DoMul(HMul* instr) {
+  if (instr->representation().IsSmiOrInteger32()) {
+    DCHECK(instr->left()->representation().Equals(instr->representation()));
+    DCHECK(instr->right()->representation().Equals(instr->representation()));
+    HValue* left = instr->BetterLeftOperand();
+    HValue* right = instr->BetterRightOperand();
+    LOperand* left_op;
+    LOperand* right_op;
+    bool can_overflow = instr->CheckFlag(HValue::kCanOverflow);
+    bool bailout_on_minus_zero = instr->CheckFlag(HValue::kBailoutOnMinusZero);
+
+    if (right->IsConstant()) {
+      HConstant* constant = HConstant::cast(right);
+      int32_t constant_value = constant->Integer32Value();
+      // Constants -1, 0 and 1 can be optimized if the result can overflow.
+      // For other constants, it can be optimized only without overflow.
+      if (!can_overflow || ((constant_value >= -1) && (constant_value <= 1))) {
+        left_op = UseRegisterAtStart(left);
+        right_op = UseConstant(right);
+      } else {
+        if (bailout_on_minus_zero) {
+          left_op = UseRegister(left);
+        } else {
+          left_op = UseRegisterAtStart(left);
+        }
+        right_op = UseRegister(right);
+      }
+    } else {
+      if (bailout_on_minus_zero) {
+        left_op = UseRegister(left);
+      } else {
+        left_op = UseRegisterAtStart(left);
+      }
+      right_op = UseRegister(right);
+    }
+    LMulI* mul = new(zone()) LMulI(left_op, right_op);
+    if (can_overflow || bailout_on_minus_zero) {
+      AssignEnvironment(mul);
+    }
+    return DefineAsRegister(mul);
+
+  } else if (instr->representation().IsDouble()) {
+    if (kArchVariant == kMips64r2) {
+      if (instr->HasOneUse() && instr->uses().value()->IsAdd()) {
+        HAdd* add = HAdd::cast(instr->uses().value());
+        if (instr == add->left()) {
+          // This mul is the lhs of an add. The add and mul will be folded
+          // into a multiply-add.
+          return NULL;
+        }
+        if (instr == add->right() && !add->left()->IsMul()) {
+          // This mul is the rhs of an add, where the lhs is not another mul.
+          // The add and mul will be folded into a multiply-add.
+          return NULL;
+        }
+      }
+    }
+    return DoArithmeticD(Token::MUL, instr);
+  } else {
+    return DoArithmeticT(Token::MUL, instr);
+  }
+}
+
+
+LInstruction* LChunkBuilder::DoSub(HSub* instr) {
+  if (instr->representation().IsSmiOrInteger32()) {
+    DCHECK(instr->left()->representation().Equals(instr->representation()));
+    DCHECK(instr->right()->representation().Equals(instr->representation()));
+    LOperand* left = UseRegisterAtStart(instr->left());
+    LOperand* right = UseOrConstantAtStart(instr->right());
+    LSubI* sub = new(zone()) LSubI(left, right);
+    LInstruction* result = DefineAsRegister(sub);
+    if (instr->CheckFlag(HValue::kCanOverflow)) {
+      result = AssignEnvironment(result);
+    }
+    return result;
+  } else if (instr->representation().IsDouble()) {
+    return DoArithmeticD(Token::SUB, instr);
+  } else {
+    return DoArithmeticT(Token::SUB, instr);
+  }
+}
+
+
+LInstruction* LChunkBuilder::DoMultiplyAdd(HMul* mul, HValue* addend) {
+  LOperand* multiplier_op = UseRegisterAtStart(mul->left());
+  LOperand* multiplicand_op = UseRegisterAtStart(mul->right());
+  LOperand* addend_op = UseRegisterAtStart(addend);
+  return DefineSameAsFirst(new(zone()) LMultiplyAddD(addend_op, multiplier_op,
+                                                     multiplicand_op));
+}
+
+
+LInstruction* LChunkBuilder::DoAdd(HAdd* instr) {
+  if (instr->representation().IsSmiOrInteger32()) {
+    DCHECK(instr->left()->representation().Equals(instr->representation()));
+    DCHECK(instr->right()->representation().Equals(instr->representation()));
+    LOperand* left = UseRegisterAtStart(instr->BetterLeftOperand());
+    LOperand* right = UseOrConstantAtStart(instr->BetterRightOperand());
+    LAddI* add = new(zone()) LAddI(left, right);
+    LInstruction* result = DefineAsRegister(add);
+    if (instr->CheckFlag(HValue::kCanOverflow)) {
+      result = AssignEnvironment(result);
+    }
+    return result;
+  } else if (instr->representation().IsExternal()) {
+    DCHECK(instr->left()->representation().IsExternal());
+    DCHECK(instr->right()->representation().IsInteger32());
+    DCHECK(!instr->CheckFlag(HValue::kCanOverflow));
+    LOperand* left = UseRegisterAtStart(instr->left());
+    LOperand* right = UseOrConstantAtStart(instr->right());
+    LAddI* add = new(zone()) LAddI(left, right);
+    LInstruction* result = DefineAsRegister(add);
+    return result;
+  } else if (instr->representation().IsDouble()) {
+    if (kArchVariant == kMips64r2) {
+      if (instr->left()->IsMul())
+        return DoMultiplyAdd(HMul::cast(instr->left()), instr->right());
+
+      if (instr->right()->IsMul()) {
+        DCHECK(!instr->left()->IsMul());
+        return DoMultiplyAdd(HMul::cast(instr->right()), instr->left());
+      }
+    }
+    return DoArithmeticD(Token::ADD, instr);
+  } else {
+    return DoArithmeticT(Token::ADD, instr);
+  }
+}
+
+
+LInstruction* LChunkBuilder::DoMathMinMax(HMathMinMax* instr) {
+  LOperand* left = NULL;
+  LOperand* right = NULL;
+  if (instr->representation().IsSmiOrInteger32()) {
+    DCHECK(instr->left()->representation().Equals(instr->representation()));
+    DCHECK(instr->right()->representation().Equals(instr->representation()));
+    left = UseRegisterAtStart(instr->BetterLeftOperand());
+    right = UseOrConstantAtStart(instr->BetterRightOperand());
+  } else {
+    DCHECK(instr->representation().IsDouble());
+    DCHECK(instr->left()->representation().IsDouble());
+    DCHECK(instr->right()->representation().IsDouble());
+    left = UseRegisterAtStart(instr->left());
+    right = UseRegisterAtStart(instr->right());
+  }
+  return DefineAsRegister(new(zone()) LMathMinMax(left, right));
+}
+
+
+LInstruction* LChunkBuilder::DoPower(HPower* instr) {
+  DCHECK(instr->representation().IsDouble());
+  // We call a C function for double power. It can't trigger a GC.
+  // We need to use fixed result register for the call.
+  Representation exponent_type = instr->right()->representation();
+  DCHECK(instr->left()->representation().IsDouble());
+  LOperand* left = UseFixedDouble(instr->left(), f2);
+  LOperand* right = exponent_type.IsDouble() ?
+      UseFixedDouble(instr->right(), f4) :
+      UseFixed(instr->right(), a2);
+  LPower* result = new(zone()) LPower(left, right);
+  return MarkAsCall(DefineFixedDouble(result, f0),
+                    instr,
+                    CAN_DEOPTIMIZE_EAGERLY);
+}
+
+
+LInstruction* LChunkBuilder::DoCompareGeneric(HCompareGeneric* instr) {
+  DCHECK(instr->left()->representation().IsTagged());
+  DCHECK(instr->right()->representation().IsTagged());
+  LOperand* context = UseFixed(instr->context(), cp);
+  LOperand* left = UseFixed(instr->left(), a1);
+  LOperand* right = UseFixed(instr->right(), a0);
+  LCmpT* result = new(zone()) LCmpT(context, left, right);
+  return MarkAsCall(DefineFixed(result, v0), instr);
+}
+
+
+LInstruction* LChunkBuilder::DoCompareNumericAndBranch(
+    HCompareNumericAndBranch* instr) {
+  Representation r = instr->representation();
+  if (r.IsSmiOrInteger32()) {
+    DCHECK(instr->left()->representation().Equals(r));
+    DCHECK(instr->right()->representation().Equals(r));
+    LOperand* left = UseRegisterOrConstantAtStart(instr->left());
+    LOperand* right = UseRegisterOrConstantAtStart(instr->right());
+    return new(zone()) LCompareNumericAndBranch(left, right);
+  } else {
+    DCHECK(r.IsDouble());
+    DCHECK(instr->left()->representation().IsDouble());
+    DCHECK(instr->right()->representation().IsDouble());
+    LOperand* left = UseRegisterAtStart(instr->left());
+    LOperand* right = UseRegisterAtStart(instr->right());
+    return new(zone()) LCompareNumericAndBranch(left, right);
+  }
+}
+
+
+LInstruction* LChunkBuilder::DoCompareObjectEqAndBranch(
+    HCompareObjectEqAndBranch* instr) {
+  LOperand* left = UseRegisterAtStart(instr->left());
+  LOperand* right = UseRegisterAtStart(instr->right());
+  return new(zone()) LCmpObjectEqAndBranch(left, right);
+}
+
+
+LInstruction* LChunkBuilder::DoCompareHoleAndBranch(
+    HCompareHoleAndBranch* instr) {
+  LOperand* value = UseRegisterAtStart(instr->value());
+  return new(zone()) LCmpHoleAndBranch(value);
+}
+
+
+LInstruction* LChunkBuilder::DoCompareMinusZeroAndBranch(
+    HCompareMinusZeroAndBranch* instr) {
+  LOperand* value = UseRegister(instr->value());
+  LOperand* scratch = TempRegister();
+  return new(zone()) LCompareMinusZeroAndBranch(value, scratch);
+}
+
+
+LInstruction* LChunkBuilder::DoIsObjectAndBranch(HIsObjectAndBranch* instr) {
+  DCHECK(instr->value()->representation().IsTagged());
+  LOperand* temp = TempRegister();
+  return new(zone()) LIsObjectAndBranch(UseRegisterAtStart(instr->value()),
+                                        temp);
+}
+
+
+LInstruction* LChunkBuilder::DoIsStringAndBranch(HIsStringAndBranch* instr) {
+  DCHECK(instr->value()->representation().IsTagged());
+  LOperand* temp = TempRegister();
+  return new(zone()) LIsStringAndBranch(UseRegisterAtStart(instr->value()),
+                                        temp);
+}
+
+
+LInstruction* LChunkBuilder::DoIsSmiAndBranch(HIsSmiAndBranch* instr) {
+  DCHECK(instr->value()->representation().IsTagged());
+  return new(zone()) LIsSmiAndBranch(Use(instr->value()));
+}
+
+
+LInstruction* LChunkBuilder::DoIsUndetectableAndBranch(
+    HIsUndetectableAndBranch* instr) {
+  DCHECK(instr->value()->representation().IsTagged());
+  return new(zone()) LIsUndetectableAndBranch(
+      UseRegisterAtStart(instr->value()), TempRegister());
+}
+
+
+LInstruction* LChunkBuilder::DoStringCompareAndBranch(
+    HStringCompareAndBranch* instr) {
+  DCHECK(instr->left()->representation().IsTagged());
+  DCHECK(instr->right()->representation().IsTagged());
+  LOperand* context = UseFixed(instr->context(), cp);
+  LOperand* left = UseFixed(instr->left(), a1);
+  LOperand* right = UseFixed(instr->right(), a0);
+  LStringCompareAndBranch* result =
+      new(zone()) LStringCompareAndBranch(context, left, right);
+  return MarkAsCall(result, instr);
+}
+
+
+LInstruction* LChunkBuilder::DoHasInstanceTypeAndBranch(
+    HHasInstanceTypeAndBranch* instr) {
+  DCHECK(instr->value()->representation().IsTagged());
+  LOperand* value = UseRegisterAtStart(instr->value());
+  return new(zone()) LHasInstanceTypeAndBranch(value);
+}
+
+
+LInstruction* LChunkBuilder::DoGetCachedArrayIndex(
+    HGetCachedArrayIndex* instr) {
+  DCHECK(instr->value()->representation().IsTagged());
+  LOperand* value = UseRegisterAtStart(instr->value());
+
+  return DefineAsRegister(new(zone()) LGetCachedArrayIndex(value));
+}
+
+
+LInstruction* LChunkBuilder::DoHasCachedArrayIndexAndBranch(
+    HHasCachedArrayIndexAndBranch* instr) {
+  DCHECK(instr->value()->representation().IsTagged());
+  return new(zone()) LHasCachedArrayIndexAndBranch(
+      UseRegisterAtStart(instr->value()));
+}
+
+
+LInstruction* LChunkBuilder::DoClassOfTestAndBranch(
+    HClassOfTestAndBranch* instr) {
+  DCHECK(instr->value()->representation().IsTagged());
+  return new(zone()) LClassOfTestAndBranch(UseRegister(instr->value()),
+                                           TempRegister());
+}
+
+
+LInstruction* LChunkBuilder::DoMapEnumLength(HMapEnumLength* instr) {
+  LOperand* map = UseRegisterAtStart(instr->value());
+  return DefineAsRegister(new(zone()) LMapEnumLength(map));
+}
+
+
+LInstruction* LChunkBuilder::DoDateField(HDateField* instr) {
+  LOperand* object = UseFixed(instr->value(), a0);
+  LDateField* result =
+      new(zone()) LDateField(object, FixedTemp(a1), instr->index());
+  return MarkAsCall(DefineFixed(result, v0), instr, CAN_DEOPTIMIZE_EAGERLY);
+}
+
+
+LInstruction* LChunkBuilder::DoSeqStringGetChar(HSeqStringGetChar* instr) {
+  LOperand* string = UseRegisterAtStart(instr->string());
+  LOperand* index = UseRegisterOrConstantAtStart(instr->index());
+  return DefineAsRegister(new(zone()) LSeqStringGetChar(string, index));
+}
+
+
+LInstruction* LChunkBuilder::DoSeqStringSetChar(HSeqStringSetChar* instr) {
+  LOperand* string = UseRegisterAtStart(instr->string());
+  LOperand* index = FLAG_debug_code
+      ? UseRegisterAtStart(instr->index())
+      : UseRegisterOrConstantAtStart(instr->index());
+  LOperand* value = UseRegisterAtStart(instr->value());
+  LOperand* context = FLAG_debug_code ? UseFixed(instr->context(), cp) : NULL;
+  return new(zone()) LSeqStringSetChar(context, string, index, value);
+}
+
+
+LInstruction* LChunkBuilder::DoBoundsCheck(HBoundsCheck* instr) {
+  if (!FLAG_debug_code && instr->skip_check()) return NULL;
+  LOperand* index = UseRegisterOrConstantAtStart(instr->index());
+  LOperand* length = !index->IsConstantOperand()
+      ? UseRegisterOrConstantAtStart(instr->length())
+      : UseRegisterAtStart(instr->length());
+  LInstruction* result = new(zone()) LBoundsCheck(index, length);
+  if (!FLAG_debug_code || !instr->skip_check()) {
+    result = AssignEnvironment(result);
+  }
+  return result;
+}
+
+
+LInstruction* LChunkBuilder::DoBoundsCheckBaseIndexInformation(
+    HBoundsCheckBaseIndexInformation* instr) {
+  UNREACHABLE();
+  return NULL;
+}
+
+
+LInstruction* LChunkBuilder::DoAbnormalExit(HAbnormalExit* instr) {
+  // The control instruction marking the end of a block that completed
+  // abruptly (e.g., threw an exception). There is nothing specific to do.
+  return NULL;
+}
+
+
+LInstruction* LChunkBuilder::DoUseConst(HUseConst* instr) {
+  return NULL;
+}
+
+
+LInstruction* LChunkBuilder::DoForceRepresentation(HForceRepresentation* bad) {
+  // All HForceRepresentation instructions should be eliminated in the
+  // representation change phase of Hydrogen.
+  UNREACHABLE();
+  return NULL;
+}
+
+
+LInstruction* LChunkBuilder::DoChange(HChange* instr) {
+  Representation from = instr->from();
+  Representation to = instr->to();
+  HValue* val = instr->value();
+  if (from.IsSmi()) {
+    if (to.IsTagged()) {
+      LOperand* value = UseRegister(val);
+      return DefineSameAsFirst(new(zone()) LDummyUse(value));
+    }
+    from = Representation::Tagged();
+  }
+  if (from.IsTagged()) {
+    if (to.IsDouble()) {
+      LOperand* value = UseRegister(val);
+      LInstruction* result = DefineAsRegister(new(zone()) LNumberUntagD(value));
+      if (!val->representation().IsSmi()) result = AssignEnvironment(result);
+      return result;
+    } else if (to.IsSmi()) {
+      LOperand* value = UseRegister(val);
+      if (val->type().IsSmi()) {
+        return DefineSameAsFirst(new(zone()) LDummyUse(value));
+      }
+      return AssignEnvironment(DefineSameAsFirst(new(zone()) LCheckSmi(value)));
+    } else {
+      DCHECK(to.IsInteger32());
+      if (val->type().IsSmi() || val->representation().IsSmi()) {
+        LOperand* value = UseRegisterAtStart(val);
+        return DefineAsRegister(new(zone()) LSmiUntag(value, false));
+      } else {
+        LOperand* value = UseRegister(val);
+        LOperand* temp1 = TempRegister();
+        LOperand* temp2 = TempDoubleRegister();
+        LInstruction* result =
+            DefineSameAsFirst(new(zone()) LTaggedToI(value, temp1, temp2));
+        if (!val->representation().IsSmi()) result = AssignEnvironment(result);
+        return result;
+      }
+    }
+  } else if (from.IsDouble()) {
+    if (to.IsTagged()) {
+      info()->MarkAsDeferredCalling();
+      LOperand* value = UseRegister(val);
+      LOperand* temp1 = TempRegister();
+      LOperand* temp2 = TempRegister();
+
+      LUnallocated* result_temp = TempRegister();
+      LNumberTagD* result = new(zone()) LNumberTagD(value, temp1, temp2);
+      return AssignPointerMap(Define(result, result_temp));
+    } else if (to.IsSmi()) {
+      LOperand* value = UseRegister(val);
+      return AssignEnvironment(
+          DefineAsRegister(new(zone()) LDoubleToSmi(value)));
+    } else {
+      DCHECK(to.IsInteger32());
+      LOperand* value = UseRegister(val);
+      LInstruction* result = DefineAsRegister(new(zone()) LDoubleToI(value));
+      if (!instr->CanTruncateToInt32()) result = AssignEnvironment(result);
+      return result;
+    }
+  } else if (from.IsInteger32()) {
+    info()->MarkAsDeferredCalling();
+    if (to.IsTagged()) {
+      if (val->CheckFlag(HInstruction::kUint32)) {
+        LOperand* value = UseRegisterAtStart(val);
+        LOperand* temp1 = TempRegister();
+        LOperand* temp2 = TempRegister();
+        LNumberTagU* result = new(zone()) LNumberTagU(value, temp1, temp2);
+        return AssignPointerMap(DefineAsRegister(result));
+      } else {
+        STATIC_ASSERT((kMinInt == Smi::kMinValue) &&
+                      (kMaxInt == Smi::kMaxValue));
+        LOperand* value = UseRegisterAtStart(val);
+        return DefineAsRegister(new(zone()) LSmiTag(value));
+      }
+    } else if (to.IsSmi()) {
+      LOperand* value = UseRegister(val);
+      LInstruction* result = DefineAsRegister(new(zone()) LSmiTag(value));
+      if (instr->CheckFlag(HValue::kCanOverflow)) {
+        result = AssignEnvironment(result);
+      }
+      return result;
+    } else {
+      DCHECK(to.IsDouble());
+      if (val->CheckFlag(HInstruction::kUint32)) {
+        return DefineAsRegister(new(zone()) LUint32ToDouble(UseRegister(val)));
+      } else {
+        return DefineAsRegister(new(zone()) LInteger32ToDouble(Use(val)));
+      }
+    }
+  }
+  UNREACHABLE();
+  return NULL;
+}
+
+
+LInstruction* LChunkBuilder::DoCheckHeapObject(HCheckHeapObject* instr) {
+  LOperand* value = UseRegisterAtStart(instr->value());
+  LInstruction* result = new(zone()) LCheckNonSmi(value);
+  if (!instr->value()->type().IsHeapObject()) {
+    result = AssignEnvironment(result);
+  }
+  return result;
+}
+
+
+LInstruction* LChunkBuilder::DoCheckSmi(HCheckSmi* instr) {
+  LOperand* value = UseRegisterAtStart(instr->value());
+  return AssignEnvironment(new(zone()) LCheckSmi(value));
+}
+
+
+LInstruction* LChunkBuilder::DoCheckInstanceType(HCheckInstanceType* instr) {
+  LOperand* value = UseRegisterAtStart(instr->value());
+  LInstruction* result = new(zone()) LCheckInstanceType(value);
+  return AssignEnvironment(result);
+}
+
+
+LInstruction* LChunkBuilder::DoCheckValue(HCheckValue* instr) {
+  LOperand* value = UseRegisterAtStart(instr->value());
+  return AssignEnvironment(new(zone()) LCheckValue(value));
+}
+
+
+LInstruction* LChunkBuilder::DoCheckMaps(HCheckMaps* instr) {
+  if (instr->IsStabilityCheck()) return new(zone()) LCheckMaps;
+  LOperand* value = UseRegisterAtStart(instr->value());
+  LInstruction* result = AssignEnvironment(new(zone()) LCheckMaps(value));
+  if (instr->HasMigrationTarget()) {
+    info()->MarkAsDeferredCalling();
+    result = AssignPointerMap(result);
+  }
+  return result;
+}
+
+
+LInstruction* LChunkBuilder::DoClampToUint8(HClampToUint8* instr) {
+  HValue* value = instr->value();
+  Representation input_rep = value->representation();
+  LOperand* reg = UseRegister(value);
+  if (input_rep.IsDouble()) {
+    // Revisit this decision, here and 8 lines below.
+    return DefineAsRegister(new(zone()) LClampDToUint8(reg,
+        TempDoubleRegister()));
+  } else if (input_rep.IsInteger32()) {
+    return DefineAsRegister(new(zone()) LClampIToUint8(reg));
+  } else {
+    DCHECK(input_rep.IsSmiOrTagged());
+    LClampTToUint8* result =
+        new(zone()) LClampTToUint8(reg, TempDoubleRegister());
+    return AssignEnvironment(DefineAsRegister(result));
+  }
+}
+
+
+LInstruction* LChunkBuilder::DoDoubleBits(HDoubleBits* instr) {
+  HValue* value = instr->value();
+  DCHECK(value->representation().IsDouble());
+  return DefineAsRegister(new(zone()) LDoubleBits(UseRegister(value)));
+}
+
+
+LInstruction* LChunkBuilder::DoConstructDouble(HConstructDouble* instr) {
+  LOperand* lo = UseRegister(instr->lo());
+  LOperand* hi = UseRegister(instr->hi());
+  return DefineAsRegister(new(zone()) LConstructDouble(hi, lo));
+}
+
+
+LInstruction* LChunkBuilder::DoReturn(HReturn* instr) {
+  LOperand* context = info()->IsStub()
+      ? UseFixed(instr->context(), cp)
+      : NULL;
+  LOperand* parameter_count = UseRegisterOrConstant(instr->parameter_count());
+  return new(zone()) LReturn(UseFixed(instr->value(), v0), context,
+                             parameter_count);
+}
+
+
+LInstruction* LChunkBuilder::DoConstant(HConstant* instr) {
+  Representation r = instr->representation();
+  if (r.IsSmi()) {
+    return DefineAsRegister(new(zone()) LConstantS);
+  } else if (r.IsInteger32()) {
+    return DefineAsRegister(new(zone()) LConstantI);
+  } else if (r.IsDouble()) {
+    return DefineAsRegister(new(zone()) LConstantD);
+  } else if (r.IsExternal()) {
+    return DefineAsRegister(new(zone()) LConstantE);
+  } else if (r.IsTagged()) {
+    return DefineAsRegister(new(zone()) LConstantT);
+  } else {
+    UNREACHABLE();
+    return NULL;
+  }
+}
+
+
+LInstruction* LChunkBuilder::DoLoadGlobalCell(HLoadGlobalCell* instr) {
+  LLoadGlobalCell* result = new(zone()) LLoadGlobalCell;
+  return instr->RequiresHoleCheck()
+      ? AssignEnvironment(DefineAsRegister(result))
+      : DefineAsRegister(result);
+}
+
+
+LInstruction* LChunkBuilder::DoLoadGlobalGeneric(HLoadGlobalGeneric* instr) {
+  LOperand* context = UseFixed(instr->context(), cp);
+  LOperand* global_object = UseFixed(instr->global_object(),
+                                     LoadIC::ReceiverRegister());
+  LOperand* vector = NULL;
+  if (FLAG_vector_ics) {
+    vector = FixedTemp(LoadIC::VectorRegister());
+  }
+  LLoadGlobalGeneric* result =
+      new(zone()) LLoadGlobalGeneric(context, global_object, vector);
+  return MarkAsCall(DefineFixed(result, v0), instr);
+}
+
+
+LInstruction* LChunkBuilder::DoStoreGlobalCell(HStoreGlobalCell* instr) {
+  LOperand* value = UseRegister(instr->value());
+  // Use a temp to check the value in the cell in the case where we perform
+  // a hole check.
+  return instr->RequiresHoleCheck()
+      ? AssignEnvironment(new(zone()) LStoreGlobalCell(value, TempRegister()))
+      : new(zone()) LStoreGlobalCell(value, NULL);
+}
+
+
+LInstruction* LChunkBuilder::DoLoadContextSlot(HLoadContextSlot* instr) {
+  LOperand* context = UseRegisterAtStart(instr->value());
+  LInstruction* result =
+      DefineAsRegister(new(zone()) LLoadContextSlot(context));
+  if (instr->RequiresHoleCheck() && instr->DeoptimizesOnHole()) {
+    result = AssignEnvironment(result);
+  }
+  return result;
+}
+
+
+LInstruction* LChunkBuilder::DoStoreContextSlot(HStoreContextSlot* instr) {
+  LOperand* context;
+  LOperand* value;
+  if (instr->NeedsWriteBarrier()) {
+    context = UseTempRegister(instr->context());
+    value = UseTempRegister(instr->value());
+  } else {
+    context = UseRegister(instr->context());
+    value = UseRegister(instr->value());
+  }
+  LInstruction* result = new(zone()) LStoreContextSlot(context, value);
+  if (instr->RequiresHoleCheck() && instr->DeoptimizesOnHole()) {
+    result = AssignEnvironment(result);
+  }
+  return result;
+}
+
+
+LInstruction* LChunkBuilder::DoLoadNamedField(HLoadNamedField* instr) {
+  LOperand* obj = UseRegisterAtStart(instr->object());
+  return DefineAsRegister(new(zone()) LLoadNamedField(obj));
+}
+
+
+LInstruction* LChunkBuilder::DoLoadNamedGeneric(HLoadNamedGeneric* instr) {
+  LOperand* context = UseFixed(instr->context(), cp);
+  LOperand* object = UseFixed(instr->object(), LoadIC::ReceiverRegister());
+  LOperand* vector = NULL;
+  if (FLAG_vector_ics) {
+    vector = FixedTemp(LoadIC::VectorRegister());
+  }
+
+  LInstruction* result =
+      DefineFixed(new(zone()) LLoadNamedGeneric(context, object, vector), v0);
+  return MarkAsCall(result, instr);
+}
+
+
+LInstruction* LChunkBuilder::DoLoadFunctionPrototype(
+    HLoadFunctionPrototype* instr) {
+  return AssignEnvironment(DefineAsRegister(
+      new(zone()) LLoadFunctionPrototype(UseRegister(instr->function()))));
+}
+
+
+LInstruction* LChunkBuilder::DoLoadRoot(HLoadRoot* instr) {
+  return DefineAsRegister(new(zone()) LLoadRoot);
+}
+
+
+LInstruction* LChunkBuilder::DoLoadKeyed(HLoadKeyed* instr) {
+  DCHECK(instr->key()->representation().IsSmiOrInteger32());
+  ElementsKind elements_kind = instr->elements_kind();
+  LOperand* key = UseRegisterOrConstantAtStart(instr->key());
+  LInstruction* result = NULL;
+
+  if (!instr->is_typed_elements()) {
+    LOperand* obj = NULL;
+    if (instr->representation().IsDouble()) {
+      obj = UseRegister(instr->elements());
+    } else {
+      DCHECK(instr->representation().IsSmiOrTagged() ||
+             instr->representation().IsInteger32());
+      obj = UseRegisterAtStart(instr->elements());
+    }
+    result = DefineAsRegister(new(zone()) LLoadKeyed(obj, key));
+  } else {
+    DCHECK(
+        (instr->representation().IsInteger32() &&
+         !IsDoubleOrFloatElementsKind(elements_kind)) ||
+        (instr->representation().IsDouble() &&
+         IsDoubleOrFloatElementsKind(elements_kind)));
+    LOperand* backing_store = UseRegister(instr->elements());
+    result = DefineAsRegister(new(zone()) LLoadKeyed(backing_store, key));
+  }
+
+  if ((instr->is_external() || instr->is_fixed_typed_array()) ?
+      // see LCodeGen::DoLoadKeyedExternalArray
+      ((elements_kind == EXTERNAL_UINT32_ELEMENTS ||
+        elements_kind == UINT32_ELEMENTS) &&
+       !instr->CheckFlag(HInstruction::kUint32)) :
+      // see LCodeGen::DoLoadKeyedFixedDoubleArray and
+      // LCodeGen::DoLoadKeyedFixedArray
+      instr->RequiresHoleCheck()) {
+    result = AssignEnvironment(result);
+  }
+  return result;
+}
+
+
+LInstruction* LChunkBuilder::DoLoadKeyedGeneric(HLoadKeyedGeneric* instr) {
+  LOperand* context = UseFixed(instr->context(), cp);
+  LOperand* object = UseFixed(instr->object(), LoadIC::ReceiverRegister());
+  LOperand* key = UseFixed(instr->key(), LoadIC::NameRegister());
+  LOperand* vector = NULL;
+  if (FLAG_vector_ics) {
+    vector = FixedTemp(LoadIC::VectorRegister());
+  }
+
+  LInstruction* result =
+      DefineFixed(new(zone()) LLoadKeyedGeneric(context, object, key, vector),
+                  v0);
+  return MarkAsCall(result, instr);
+}
+
+
+LInstruction* LChunkBuilder::DoStoreKeyed(HStoreKeyed* instr) {
+  if (!instr->is_typed_elements()) {
+    DCHECK(instr->elements()->representation().IsTagged());
+    bool needs_write_barrier = instr->NeedsWriteBarrier();
+    LOperand* object = NULL;
+    LOperand* val = NULL;
+    LOperand* key = NULL;
+
+    if (instr->value()->representation().IsDouble()) {
+      object = UseRegisterAtStart(instr->elements());
+      key = UseRegisterOrConstantAtStart(instr->key());
+      val = UseRegister(instr->value());
+    } else {
+      DCHECK(instr->value()->representation().IsSmiOrTagged() ||
+             instr->value()->representation().IsInteger32());
+      if (needs_write_barrier) {
+        object = UseTempRegister(instr->elements());
+        val = UseTempRegister(instr->value());
+        key = UseTempRegister(instr->key());
+      } else {
+        object = UseRegisterAtStart(instr->elements());
+        val = UseRegisterAtStart(instr->value());
+        key = UseRegisterOrConstantAtStart(instr->key());
+      }
+    }
+
+    return new(zone()) LStoreKeyed(object, key, val);
+  }
+
+  DCHECK(
+      (instr->value()->representation().IsInteger32() &&
+       !IsDoubleOrFloatElementsKind(instr->elements_kind())) ||
+      (instr->value()->representation().IsDouble() &&
+       IsDoubleOrFloatElementsKind(instr->elements_kind())));
+  DCHECK((instr->is_fixed_typed_array() &&
+          instr->elements()->representation().IsTagged()) ||
+         (instr->is_external() &&
+          instr->elements()->representation().IsExternal()));
+  LOperand* val = UseRegister(instr->value());
+  LOperand* key = UseRegisterOrConstantAtStart(instr->key());
+  LOperand* backing_store = UseRegister(instr->elements());
+  return new(zone()) LStoreKeyed(backing_store, key, val);
+}
+
+
+LInstruction* LChunkBuilder::DoStoreKeyedGeneric(HStoreKeyedGeneric* instr) {
+  LOperand* context = UseFixed(instr->context(), cp);
+  LOperand* obj = UseFixed(instr->object(), KeyedStoreIC::ReceiverRegister());
+  LOperand* key = UseFixed(instr->key(), KeyedStoreIC::NameRegister());
+  LOperand* val = UseFixed(instr->value(), KeyedStoreIC::ValueRegister());
+
+  DCHECK(instr->object()->representation().IsTagged());
+  DCHECK(instr->key()->representation().IsTagged());
+  DCHECK(instr->value()->representation().IsTagged());
+
+  return MarkAsCall(
+      new(zone()) LStoreKeyedGeneric(context, obj, key, val), instr);
+}
+
+
+LInstruction* LChunkBuilder::DoTransitionElementsKind(
+    HTransitionElementsKind* instr) {
+  if (IsSimpleMapChangeTransition(instr->from_kind(), instr->to_kind())) {
+    LOperand* object = UseRegister(instr->object());
+    LOperand* new_map_reg = TempRegister();
+    LTransitionElementsKind* result =
+        new(zone()) LTransitionElementsKind(object, NULL, new_map_reg);
+    return result;
+  } else {
+    LOperand* object = UseFixed(instr->object(), a0);
+    LOperand* context = UseFixed(instr->context(), cp);
+    LTransitionElementsKind* result =
+        new(zone()) LTransitionElementsKind(object, context, NULL);
+    return MarkAsCall(result, instr);
+  }
+}
+
+
+LInstruction* LChunkBuilder::DoTrapAllocationMemento(
+    HTrapAllocationMemento* instr) {
+  LOperand* object = UseRegister(instr->object());
+  LOperand* temp = TempRegister();
+  LTrapAllocationMemento* result =
+      new(zone()) LTrapAllocationMemento(object, temp);
+  return AssignEnvironment(result);
+}
+
+
+LInstruction* LChunkBuilder::DoStoreNamedField(HStoreNamedField* instr) {
+  bool is_in_object = instr->access().IsInobject();
+  bool needs_write_barrier = instr->NeedsWriteBarrier();
+  bool needs_write_barrier_for_map = instr->has_transition() &&
+      instr->NeedsWriteBarrierForMap();
+
+  LOperand* obj;
+  if (needs_write_barrier) {
+    obj = is_in_object
+        ? UseRegister(instr->object())
+        : UseTempRegister(instr->object());
+  } else {
+    obj = needs_write_barrier_for_map
+        ? UseRegister(instr->object())
+        : UseRegisterAtStart(instr->object());
+  }
+
+  LOperand* val;
+  if (needs_write_barrier || instr->field_representation().IsSmi()) {
+    val = UseTempRegister(instr->value());
+  } else if (instr->field_representation().IsDouble()) {
+    val = UseRegisterAtStart(instr->value());
+  } else {
+    val = UseRegister(instr->value());
+  }
+
+  // We need a temporary register for write barrier of the map field.
+  LOperand* temp = needs_write_barrier_for_map ? TempRegister() : NULL;
+
+  return new(zone()) LStoreNamedField(obj, val, temp);
+}
+
+
+LInstruction* LChunkBuilder::DoStoreNamedGeneric(HStoreNamedGeneric* instr) {
+  LOperand* context = UseFixed(instr->context(), cp);
+  LOperand* obj = UseFixed(instr->object(), StoreIC::ReceiverRegister());
+  LOperand* val = UseFixed(instr->value(), StoreIC::ValueRegister());
+
+  LInstruction* result = new(zone()) LStoreNamedGeneric(context, obj, val);
+  return MarkAsCall(result, instr);
+}
+
+
+LInstruction* LChunkBuilder::DoStringAdd(HStringAdd* instr) {
+  LOperand* context = UseFixed(instr->context(), cp);
+  LOperand* left = UseFixed(instr->left(), a1);
+  LOperand* right = UseFixed(instr->right(), a0);
+  return MarkAsCall(
+      DefineFixed(new(zone()) LStringAdd(context, left, right), v0),
+      instr);
+}
+
+
+LInstruction* LChunkBuilder::DoStringCharCodeAt(HStringCharCodeAt* instr) {
+  LOperand* string = UseTempRegister(instr->string());
+  LOperand* index = UseTempRegister(instr->index());
+  LOperand* context = UseAny(instr->context());
+  LStringCharCodeAt* result =
+      new(zone()) LStringCharCodeAt(context, string, index);
+  return AssignPointerMap(DefineAsRegister(result));
+}
+
+
+LInstruction* LChunkBuilder::DoStringCharFromCode(HStringCharFromCode* instr) {
+  LOperand* char_code = UseRegister(instr->value());
+  LOperand* context = UseAny(instr->context());
+  LStringCharFromCode* result =
+      new(zone()) LStringCharFromCode(context, char_code);
+  return AssignPointerMap(DefineAsRegister(result));
+}
+
+
+LInstruction* LChunkBuilder::DoAllocate(HAllocate* instr) {
+  info()->MarkAsDeferredCalling();
+  LOperand* context = UseAny(instr->context());
+  LOperand* size = UseRegisterOrConstant(instr->size());
+  LOperand* temp1 = TempRegister();
+  LOperand* temp2 = TempRegister();
+  LAllocate* result = new(zone()) LAllocate(context, size, temp1, temp2);
+  return AssignPointerMap(DefineAsRegister(result));
+}
+
+
+LInstruction* LChunkBuilder::DoRegExpLiteral(HRegExpLiteral* instr) {
+  LOperand* context = UseFixed(instr->context(), cp);
+  return MarkAsCall(
+      DefineFixed(new(zone()) LRegExpLiteral(context), v0), instr);
+}
+
+
+LInstruction* LChunkBuilder::DoFunctionLiteral(HFunctionLiteral* instr) {
+  LOperand* context = UseFixed(instr->context(), cp);
+  return MarkAsCall(
+      DefineFixed(new(zone()) LFunctionLiteral(context), v0), instr);
+}
+
+
+LInstruction* LChunkBuilder::DoOsrEntry(HOsrEntry* instr) {
+  DCHECK(argument_count_ == 0);
+  allocator_->MarkAsOsrEntry();
+  current_block_->last_environment()->set_ast_id(instr->ast_id());
+  return AssignEnvironment(new(zone()) LOsrEntry);
+}
+
+
+LInstruction* LChunkBuilder::DoParameter(HParameter* instr) {
+  LParameter* result = new(zone()) LParameter;
+  if (instr->kind() == HParameter::STACK_PARAMETER) {
+    int spill_index = chunk()->GetParameterStackSlot(instr->index());
+    return DefineAsSpilled(result, spill_index);
+  } else {
+    DCHECK(info()->IsStub());
+    CodeStubInterfaceDescriptor* descriptor =
+        info()->code_stub()->GetInterfaceDescriptor();
+    int index = static_cast<int>(instr->index());
+    Register reg = descriptor->GetEnvironmentParameterRegister(index);
+    return DefineFixed(result, reg);
+  }
+}
+
+
+LInstruction* LChunkBuilder::DoUnknownOSRValue(HUnknownOSRValue* instr) {
+  // Use an index that corresponds to the location in the unoptimized frame,
+  // which the optimized frame will subsume.
+  int env_index = instr->index();
+  int spill_index = 0;
+  if (instr->environment()->is_parameter_index(env_index)) {
+    spill_index = chunk()->GetParameterStackSlot(env_index);
+  } else {
+    spill_index = env_index - instr->environment()->first_local_index();
+    if (spill_index > LUnallocated::kMaxFixedSlotIndex) {
+      Abort(kTooManySpillSlotsNeededForOSR);
+      spill_index = 0;
+    }
+  }
+  return DefineAsSpilled(new(zone()) LUnknownOSRValue, spill_index);
+}
+
+
+LInstruction* LChunkBuilder::DoCallStub(HCallStub* instr) {
+  LOperand* context = UseFixed(instr->context(), cp);
+  return MarkAsCall(DefineFixed(new(zone()) LCallStub(context), v0), instr);
+}
+
+
+LInstruction* LChunkBuilder::DoArgumentsObject(HArgumentsObject* instr) {
+  // There are no real uses of the arguments object.
+  // arguments.length and element access are supported directly on
+  // stack arguments, and any real arguments object use causes a bailout.
+  // So this value is never used.
+  return NULL;
+}
+
+
+LInstruction* LChunkBuilder::DoCapturedObject(HCapturedObject* instr) {
+  instr->ReplayEnvironment(current_block_->last_environment());
+
+  // There are no real uses of a captured object.
+  return NULL;
+}
+
+
+LInstruction* LChunkBuilder::DoAccessArgumentsAt(HAccessArgumentsAt* instr) {
+  info()->MarkAsRequiresFrame();
+  LOperand* args = UseRegister(instr->arguments());
+  LOperand* length = UseRegisterOrConstantAtStart(instr->length());
+  LOperand* index = UseRegisterOrConstantAtStart(instr->index());
+  return DefineAsRegister(new(zone()) LAccessArgumentsAt(args, length, index));
+}
+
+
+LInstruction* LChunkBuilder::DoToFastProperties(HToFastProperties* instr) {
+  LOperand* object = UseFixed(instr->value(), a0);
+  LToFastProperties* result = new(zone()) LToFastProperties(object);
+  return MarkAsCall(DefineFixed(result, v0), instr);
+}
+
+
+LInstruction* LChunkBuilder::DoTypeof(HTypeof* instr) {
+  LOperand* context = UseFixed(instr->context(), cp);
+  LTypeof* result = new(zone()) LTypeof(context, UseFixed(instr->value(), a0));
+  return MarkAsCall(DefineFixed(result, v0), instr);
+}
+
+
+LInstruction* LChunkBuilder::DoTypeofIsAndBranch(HTypeofIsAndBranch* instr) {
+  return new(zone()) LTypeofIsAndBranch(UseTempRegister(instr->value()));
+}
+
+
+LInstruction* LChunkBuilder::DoIsConstructCallAndBranch(
+    HIsConstructCallAndBranch* instr) {
+  return new(zone()) LIsConstructCallAndBranch(TempRegister());
+}
+
+
+LInstruction* LChunkBuilder::DoSimulate(HSimulate* instr) {
+  instr->ReplayEnvironment(current_block_->last_environment());
+  return NULL;
+}
+
+
+LInstruction* LChunkBuilder::DoStackCheck(HStackCheck* instr) {
+  if (instr->is_function_entry()) {
+    LOperand* context = UseFixed(instr->context(), cp);
+    return MarkAsCall(new(zone()) LStackCheck(context), instr);
+  } else {
+    DCHECK(instr->is_backwards_branch());
+    LOperand* context = UseAny(instr->context());
+    return AssignEnvironment(
+        AssignPointerMap(new(zone()) LStackCheck(context)));
+  }
+}
+
+
+LInstruction* LChunkBuilder::DoEnterInlined(HEnterInlined* instr) {
+  HEnvironment* outer = current_block_->last_environment();
+  outer->set_ast_id(instr->ReturnId());
+  HConstant* undefined = graph()->GetConstantUndefined();
+  HEnvironment* inner = outer->CopyForInlining(instr->closure(),
+                                               instr->arguments_count(),
+                                               instr->function(),
+                                               undefined,
+                                               instr->inlining_kind());
+  // Only replay binding of arguments object if it wasn't removed from graph.
+  if (instr->arguments_var() != NULL && instr->arguments_object()->IsLinked()) {
+    inner->Bind(instr->arguments_var(), instr->arguments_object());
+  }
+  inner->set_entry(instr);
+  current_block_->UpdateEnvironment(inner);
+  chunk_->AddInlinedClosure(instr->closure());
+  return NULL;
+}
+
+
+LInstruction* LChunkBuilder::DoLeaveInlined(HLeaveInlined* instr) {
+  LInstruction* pop = NULL;
+
+  HEnvironment* env = current_block_->last_environment();
+
+  if (env->entry()->arguments_pushed()) {
+    int argument_count = env->arguments_environment()->parameter_count();
+    pop = new(zone()) LDrop(argument_count);
+    DCHECK(instr->argument_delta() == -argument_count);
+  }
+
+  HEnvironment* outer = current_block_->last_environment()->
+      DiscardInlined(false);
+  current_block_->UpdateEnvironment(outer);
+
+  return pop;
+}
+
+
+LInstruction* LChunkBuilder::DoForInPrepareMap(HForInPrepareMap* instr) {
+  LOperand* context = UseFixed(instr->context(), cp);
+  LOperand* object = UseFixed(instr->enumerable(), a0);
+  LForInPrepareMap* result = new(zone()) LForInPrepareMap(context, object);
+  return MarkAsCall(DefineFixed(result, v0), instr, CAN_DEOPTIMIZE_EAGERLY);
+}
+
+
+LInstruction* LChunkBuilder::DoForInCacheArray(HForInCacheArray* instr) {
+  LOperand* map = UseRegister(instr->map());
+  return AssignEnvironment(DefineAsRegister(new(zone()) LForInCacheArray(map)));
+}
+
+
+LInstruction* LChunkBuilder::DoCheckMapValue(HCheckMapValue* instr) {
+  LOperand* value = UseRegisterAtStart(instr->value());
+  LOperand* map = UseRegisterAtStart(instr->map());
+  return AssignEnvironment(new(zone()) LCheckMapValue(value, map));
+}
+
+
+LInstruction* LChunkBuilder::DoLoadFieldByIndex(HLoadFieldByIndex* instr) {
+  LOperand* object = UseRegister(instr->object());
+  LOperand* index = UseTempRegister(instr->index());
+  LLoadFieldByIndex* load = new(zone()) LLoadFieldByIndex(object, index);
+  LInstruction* result = DefineSameAsFirst(load);
+  return AssignPointerMap(result);
+}
+
+
+LInstruction* LChunkBuilder::DoStoreFrameContext(HStoreFrameContext* instr) {
+  LOperand* context = UseRegisterAtStart(instr->context());
+  return new(zone()) LStoreFrameContext(context);
+}
+
+
+LInstruction* LChunkBuilder::DoAllocateBlockContext(
+    HAllocateBlockContext* instr) {
+  LOperand* context = UseFixed(instr->context(), cp);
+  LOperand* function = UseRegisterAtStart(instr->function());
+  LAllocateBlockContext* result =
+      new(zone()) LAllocateBlockContext(context, function);
+  return MarkAsCall(DefineFixed(result, cp), instr);
+}
+
+
+} }  // namespace v8::internal
+
+#endif  // V8_TARGET_ARCH_MIPS64
diff --git a/deps/v8/src/mips64/lithium-mips64.h b/deps/v8/src/mips64/lithium-mips64.h
new file mode 100644
index 00000000000..77e5b9c38dc
--- /dev/null
+++ b/deps/v8/src/mips64/lithium-mips64.h
@@ -0,0 +1,2848 @@
+// Copyright 2012 the V8 project authors. All rights reserved.
+// Use of this source code is governed by a BSD-style license that can be
+// found in the LICENSE file.
+
+#ifndef V8_MIPS_LITHIUM_MIPS_H_
+#define V8_MIPS_LITHIUM_MIPS_H_
+
+#include "src/hydrogen.h"
+#include "src/lithium.h"
+#include "src/lithium-allocator.h"
+#include "src/safepoint-table.h"
+#include "src/utils.h"
+
+namespace v8 {
+namespace internal {
+
+// Forward declarations.
+class LCodeGen;
+
+#define LITHIUM_CONCRETE_INSTRUCTION_LIST(V) \
+  V(AccessArgumentsAt) \
+  V(AddI) \
+  V(Allocate) \
+  V(AllocateBlockContext) \
+  V(ApplyArguments) \
+  V(ArgumentsElements) \
+  V(ArgumentsLength) \
+  V(ArithmeticD) \
+  V(ArithmeticT) \
+  V(BitI) \
+  V(BoundsCheck) \
+  V(Branch) \
+  V(CallJSFunction) \
+  V(CallWithDescriptor) \
+  V(CallFunction) \
+  V(CallNew) \
+  V(CallNewArray) \
+  V(CallRuntime) \
+  V(CallStub) \
+  V(CheckInstanceType) \
+  V(CheckMaps) \
+  V(CheckMapValue) \
+  V(CheckNonSmi) \
+  V(CheckSmi) \
+  V(CheckValue) \
+  V(ClampDToUint8) \
+  V(ClampIToUint8) \
+  V(ClampTToUint8) \
+  V(ClassOfTestAndBranch) \
+  V(CompareMinusZeroAndBranch) \
+  V(CompareNumericAndBranch) \
+  V(CmpObjectEqAndBranch) \
+  V(CmpHoleAndBranch) \
+  V(CmpMapAndBranch) \
+  V(CmpT) \
+  V(ConstantD) \
+  V(ConstantE) \
+  V(ConstantI) \
+  V(ConstantS) \
+  V(ConstantT) \
+  V(ConstructDouble) \
+  V(Context) \
+  V(DateField) \
+  V(DebugBreak) \
+  V(DeclareGlobals) \
+  V(Deoptimize) \
+  V(DivByConstI) \
+  V(DivByPowerOf2I) \
+  V(DivI) \
+  V(DoubleToI) \
+  V(DoubleBits) \
+  V(DoubleToSmi) \
+  V(Drop) \
+  V(Dummy) \
+  V(DummyUse) \
+  V(FlooringDivByConstI) \
+  V(FlooringDivByPowerOf2I) \
+  V(FlooringDivI) \
+  V(ForInCacheArray) \
+  V(ForInPrepareMap) \
+  V(FunctionLiteral) \
+  V(GetCachedArrayIndex) \
+  V(Goto) \
+  V(HasCachedArrayIndexAndBranch) \
+  V(HasInstanceTypeAndBranch) \
+  V(InnerAllocatedObject) \
+  V(InstanceOf) \
+  V(InstanceOfKnownGlobal) \
+  V(InstructionGap) \
+  V(Integer32ToDouble) \
+  V(InvokeFunction) \
+  V(IsConstructCallAndBranch) \
+  V(IsObjectAndBranch) \
+  V(IsStringAndBranch) \
+  V(IsSmiAndBranch) \
+  V(IsUndetectableAndBranch) \
+  V(Label) \
+  V(LazyBailout) \
+  V(LoadContextSlot) \
+  V(LoadRoot) \
+  V(LoadFieldByIndex) \
+  V(LoadFunctionPrototype) \
+  V(LoadGlobalCell) \
+  V(LoadGlobalGeneric) \
+  V(LoadKeyed) \
+  V(LoadKeyedGeneric) \
+  V(LoadNamedField) \
+  V(LoadNamedGeneric) \
+  V(MapEnumLength) \
+  V(MathAbs) \
+  V(MathExp) \
+  V(MathClz32) \
+  V(MathFloor) \
+  V(MathFround) \
+  V(MathLog) \
+  V(MathMinMax) \
+  V(MathPowHalf) \
+  V(MathRound) \
+  V(MathSqrt) \
+  V(ModByConstI) \
+  V(ModByPowerOf2I) \
+  V(ModI) \
+  V(MulI) \
+  V(MultiplyAddD) \
+  V(NumberTagD) \
+  V(NumberTagU) \
+  V(NumberUntagD) \
+  V(OsrEntry) \
+  V(Parameter) \
+  V(Power) \
+  V(PushArgument) \
+  V(RegExpLiteral) \
+  V(Return) \
+  V(SeqStringGetChar) \
+  V(SeqStringSetChar) \
+  V(ShiftI) \
+  V(SmiTag) \
+  V(SmiUntag) \
+  V(StackCheck) \
+  V(StoreCodeEntry) \
+  V(StoreContextSlot) \
+  V(StoreFrameContext) \
+  V(StoreGlobalCell) \
+  V(StoreKeyed) \
+  V(StoreKeyedGeneric) \
+  V(StoreNamedField) \
+  V(StoreNamedGeneric) \
+  V(StringAdd) \
+  V(StringCharCodeAt) \
+  V(StringCharFromCode) \
+  V(StringCompareAndBranch) \
+  V(SubI) \
+  V(TaggedToI) \
+  V(ThisFunction) \
+  V(ToFastProperties) \
+  V(TransitionElementsKind) \
+  V(TrapAllocationMemento) \
+  V(Typeof) \
+  V(TypeofIsAndBranch) \
+  V(Uint32ToDouble) \
+  V(UnknownOSRValue) \
+  V(WrapReceiver)
+
+#define DECLARE_CONCRETE_INSTRUCTION(type, mnemonic) \
+  virtual Opcode opcode() const V8_FINAL V8_OVERRIDE { \
+    return LInstruction::k##type; \
+  } \
+  virtual void CompileToNative(LCodeGen* generator) V8_FINAL V8_OVERRIDE; \
+  virtual const char* Mnemonic() const V8_FINAL V8_OVERRIDE { \
+    return mnemonic; \
+  } \
+  static L##type* cast(LInstruction* instr) { \
+    DCHECK(instr->Is##type()); \
+    return reinterpret_cast<L##type*>(instr); \
+  }
+
+
+#define DECLARE_HYDROGEN_ACCESSOR(type) \
+  H##type* hydrogen() const { \
+    return H##type::cast(hydrogen_value()); \
+  }
+
+
+class LInstruction : public ZoneObject {
+ public:
+  LInstruction()
+      : environment_(NULL),
+        hydrogen_value_(NULL),
+        bit_field_(IsCallBits::encode(false)) {
+  }
+
+  virtual ~LInstruction() {}
+
+  virtual void CompileToNative(LCodeGen* generator) = 0;
+  virtual const char* Mnemonic() const = 0;
+  virtual void PrintTo(StringStream* stream);
+  virtual void PrintDataTo(StringStream* stream);
+  virtual void PrintOutputOperandTo(StringStream* stream);
+
+  enum Opcode {
+    // Declare a unique enum value for each instruction.
+#define DECLARE_OPCODE(type) k##type,
+    LITHIUM_CONCRETE_INSTRUCTION_LIST(DECLARE_OPCODE)
+    kNumberOfInstructions
+#undef DECLARE_OPCODE
+  };
+
+  virtual Opcode opcode() const = 0;
+
+  // Declare non-virtual type testers for all leaf IR classes.
+#define DECLARE_PREDICATE(type) \
+  bool Is##type() const { return opcode() == k##type; }
+  LITHIUM_CONCRETE_INSTRUCTION_LIST(DECLARE_PREDICATE)
+#undef DECLARE_PREDICATE
+
+  // Declare virtual predicates for instructions that don't have
+  // an opcode.
+  virtual bool IsGap() const { return false; }
+
+  virtual bool IsControl() const { return false; }
+
+  // Try deleting this instruction if possible.
+  virtual bool TryDelete() { return false; }
+
+  void set_environment(LEnvironment* env) { environment_ = env; }
+  LEnvironment* environment() const { return environment_; }
+  bool HasEnvironment() const { return environment_ != NULL; }
+
+  void set_pointer_map(LPointerMap* p) { pointer_map_.set(p); }
+  LPointerMap* pointer_map() const { return pointer_map_.get(); }
+  bool HasPointerMap() const { return pointer_map_.is_set(); }
+
+  void set_hydrogen_value(HValue* value) { hydrogen_value_ = value; }
+  HValue* hydrogen_value() const { return hydrogen_value_; }
+
+  virtual void SetDeferredLazyDeoptimizationEnvironment(LEnvironment* env) { }
+
+  void MarkAsCall() { bit_field_ = IsCallBits::update(bit_field_, true); }
+  bool IsCall() const { return IsCallBits::decode(bit_field_); }
+
+  // Interface to the register allocator and iterators.
+  bool ClobbersTemps() const { return IsCall(); }
+  bool ClobbersRegisters() const { return IsCall(); }
+  virtual bool ClobbersDoubleRegisters(Isolate* isolate) const {
+    return IsCall();
+  }
+
+  // Interface to the register allocator and iterators.
+  bool IsMarkedAsCall() const { return IsCall(); }
+
+  virtual bool HasResult() const = 0;
+  virtual LOperand* result() const = 0;
+
+  LOperand* FirstInput() { return InputAt(0); }
+  LOperand* Output() { return HasResult() ? result() : NULL; }
+
+  virtual bool HasInterestingComment(LCodeGen* gen) const { return true; }
+
+#ifdef DEBUG
+  void VerifyCall();
+#endif
+
+  virtual int InputCount() = 0;
+  virtual LOperand* InputAt(int i) = 0;
+
+ private:
+  // Iterator interface.
+  friend class InputIterator;
+
+  friend class TempIterator;
+  virtual int TempCount() = 0;
+  virtual LOperand* TempAt(int i) = 0;
+
+  class IsCallBits: public BitField<bool, 0, 1> {};
+
+  LEnvironment* environment_;
+  SetOncePointer<LPointerMap> pointer_map_;
+  HValue* hydrogen_value_;
+  int bit_field_;
+};
+
+
+// R = number of result operands (0 or 1).
+template<int R>
+class LTemplateResultInstruction : public LInstruction {
+ public:
+  // Allow 0 or 1 output operands.
+  STATIC_ASSERT(R == 0 || R == 1);
+  virtual bool HasResult() const V8_FINAL V8_OVERRIDE {
+    return R != 0 && result() != NULL;
+  }
+  void set_result(LOperand* operand) { results_[0] = operand; }
+  LOperand* result() const { return results_[0]; }
+
+ protected:
+  EmbeddedContainer<LOperand*, R> results_;
+};
+
+
+// R = number of result operands (0 or 1).
+// I = number of input operands.
+// T = number of temporary operands.
+template<int R, int I, int T>
+class LTemplateInstruction : public LTemplateResultInstruction<R> {
+ protected:
+  EmbeddedContainer<LOperand*, I> inputs_;
+  EmbeddedContainer<LOperand*, T> temps_;
+
+ private:
+  // Iterator support.
+  virtual int InputCount() V8_FINAL V8_OVERRIDE { return I; }
+  virtual LOperand* InputAt(int i) V8_FINAL V8_OVERRIDE { return inputs_[i]; }
+
+  virtual int TempCount() V8_FINAL V8_OVERRIDE { return T; }
+  virtual LOperand* TempAt(int i) V8_FINAL V8_OVERRIDE { return temps_[i]; }
+};
+
+
+class LGap : public LTemplateInstruction<0, 0, 0> {
+ public:
+  explicit LGap(HBasicBlock* block)
+      : block_(block) {
+    parallel_moves_[BEFORE] = NULL;
+    parallel_moves_[START] = NULL;
+    parallel_moves_[END] = NULL;
+    parallel_moves_[AFTER] = NULL;
+  }
+
+  // Can't use the DECLARE-macro here because of sub-classes.
+  virtual bool IsGap() const V8_FINAL V8_OVERRIDE { return true; }
+  virtual void PrintDataTo(StringStream* stream) V8_OVERRIDE;
+  static LGap* cast(LInstruction* instr) {
+    DCHECK(instr->IsGap());
+    return reinterpret_cast<LGap*>(instr);
+  }
+
+  bool IsRedundant() const;
+
+  HBasicBlock* block() const { return block_; }
+
+  enum InnerPosition {
+    BEFORE,
+    START,
+    END,
+    AFTER,
+    FIRST_INNER_POSITION = BEFORE,
+    LAST_INNER_POSITION = AFTER
+  };
+
+  LParallelMove* GetOrCreateParallelMove(InnerPosition pos, Zone* zone) {
+    if (parallel_moves_[pos] == NULL) {
+      parallel_moves_[pos] = new(zone) LParallelMove(zone);
+    }
+    return parallel_moves_[pos];
+  }
+
+  LParallelMove* GetParallelMove(InnerPosition pos) {
+    return parallel_moves_[pos];
+  }
+
+ private:
+  LParallelMove* parallel_moves_[LAST_INNER_POSITION + 1];
+  HBasicBlock* block_;
+};
+
+
+class LInstructionGap V8_FINAL : public LGap {
+ public:
+  explicit LInstructionGap(HBasicBlock* block) : LGap(block) { }
+
+  virtual bool HasInterestingComment(LCodeGen* gen) const V8_OVERRIDE {
+    return !IsRedundant();
+  }
+
+  DECLARE_CONCRETE_INSTRUCTION(InstructionGap, "gap")
+};
+
+
+class LGoto V8_FINAL : public LTemplateInstruction<0, 0, 0> {
+ public:
+  explicit LGoto(HBasicBlock* block) : block_(block) { }
+
+  virtual bool HasInterestingComment(LCodeGen* gen) const V8_OVERRIDE;
+  DECLARE_CONCRETE_INSTRUCTION(Goto, "goto")
+  virtual void PrintDataTo(StringStream* stream) V8_OVERRIDE;
+  virtual bool IsControl() const V8_OVERRIDE { return true; }
+
+  int block_id() const { return block_->block_id(); }
+
+ private:
+  HBasicBlock* block_;
+};
+
+
+class LLazyBailout V8_FINAL : public LTemplateInstruction<0, 0, 0> {
+ public:
+  LLazyBailout() : gap_instructions_size_(0) { }
+
+  DECLARE_CONCRETE_INSTRUCTION(LazyBailout, "lazy-bailout")
+
+  void set_gap_instructions_size(int gap_instructions_size) {
+    gap_instructions_size_ = gap_instructions_size;
+  }
+  int gap_instructions_size() { return gap_instructions_size_; }
+
+ private:
+  int gap_instructions_size_;
+};
+
+
+class LDummy V8_FINAL : public LTemplateInstruction<1, 0, 0> {
+ public:
+  LDummy() {}
+  DECLARE_CONCRETE_INSTRUCTION(Dummy, "dummy")
+};
+
+
+class LDummyUse V8_FINAL : public LTemplateInstruction<1, 1, 0> {
+ public:
+  explicit LDummyUse(LOperand* value) {
+    inputs_[0] = value;
+  }
+  DECLARE_CONCRETE_INSTRUCTION(DummyUse, "dummy-use")
+};
+
+
+class LDeoptimize V8_FINAL : public LTemplateInstruction<0, 0, 0> {
+ public:
+  virtual bool IsControl() const V8_OVERRIDE { return true; }
+  DECLARE_CONCRETE_INSTRUCTION(Deoptimize, "deoptimize")
+  DECLARE_HYDROGEN_ACCESSOR(Deoptimize)
+};
+
+
+class LLabel V8_FINAL : public LGap {
+ public:
+  explicit LLabel(HBasicBlock* block)
+      : LGap(block), replacement_(NULL) { }
+
+  virtual bool HasInterestingComment(LCodeGen* gen) const V8_OVERRIDE {
+    return false;
+  }
+  DECLARE_CONCRETE_INSTRUCTION(Label, "label")
+
+  virtual void PrintDataTo(StringStream* stream) V8_OVERRIDE;
+
+  int block_id() const { return block()->block_id(); }
+  bool is_loop_header() const { return block()->IsLoopHeader(); }
+  bool is_osr_entry() const { return block()->is_osr_entry(); }
+  Label* label() { return &label_; }
+  LLabel* replacement() const { return replacement_; }
+  void set_replacement(LLabel* label) { replacement_ = label; }
+  bool HasReplacement() const { return replacement_ != NULL; }
+
+ private:
+  Label label_;
+  LLabel* replacement_;
+};
+
+
+class LParameter V8_FINAL : public LTemplateInstruction<1, 0, 0> {
+ public:
+  virtual bool HasInterestingComment(LCodeGen* gen) const V8_OVERRIDE {
+    return false;
+  }
+  DECLARE_CONCRETE_INSTRUCTION(Parameter, "parameter")
+};
+
+
+class LCallStub V8_FINAL : public LTemplateInstruction<1, 1, 0> {
+ public:
+  explicit LCallStub(LOperand* context) {
+    inputs_[0] = context;
+  }
+
+  LOperand* context() { return inputs_[0]; }
+
+  DECLARE_CONCRETE_INSTRUCTION(CallStub, "call-stub")
+  DECLARE_HYDROGEN_ACCESSOR(CallStub)
+};
+
+
+class LUnknownOSRValue V8_FINAL : public LTemplateInstruction<1, 0, 0> {
+ public:
+  virtual bool HasInterestingComment(LCodeGen* gen) const V8_OVERRIDE {
+    return false;
+  }
+  DECLARE_CONCRETE_INSTRUCTION(UnknownOSRValue, "unknown-osr-value")
+};
+
+
+template<int I, int T>
+class LControlInstruction : public LTemplateInstruction<0, I, T> {
+ public:
+  LControlInstruction() : false_label_(NULL), true_label_(NULL) { }
+
+  virtual bool IsControl() const V8_FINAL V8_OVERRIDE { return true; }
+
+  int SuccessorCount() { return hydrogen()->SuccessorCount(); }
+  HBasicBlock* SuccessorAt(int i) { return hydrogen()->SuccessorAt(i); }
+
+  int TrueDestination(LChunk* chunk) {
+    return chunk->LookupDestination(true_block_id());
+  }
+  int FalseDestination(LChunk* chunk) {
+    return chunk->LookupDestination(false_block_id());
+  }
+
+  Label* TrueLabel(LChunk* chunk) {
+    if (true_label_ == NULL) {
+      true_label_ = chunk->GetAssemblyLabel(TrueDestination(chunk));
+    }
+    return true_label_;
+  }
+  Label* FalseLabel(LChunk* chunk) {
+    if (false_label_ == NULL) {
+      false_label_ = chunk->GetAssemblyLabel(FalseDestination(chunk));
+    }
+    return false_label_;
+  }
+
+ protected:
+  int true_block_id() { return SuccessorAt(0)->block_id(); }
+  int false_block_id() { return SuccessorAt(1)->block_id(); }
+
+ private:
+  HControlInstruction* hydrogen() {
+    return HControlInstruction::cast(this->hydrogen_value());
+  }
+
+  Label* false_label_;
+  Label* true_label_;
+};
+
+
+class LWrapReceiver V8_FINAL : public LTemplateInstruction<1, 2, 0> {
+ public:
+  LWrapReceiver(LOperand* receiver, LOperand* function) {
+    inputs_[0] = receiver;
+    inputs_[1] = function;
+  }
+
+  DECLARE_CONCRETE_INSTRUCTION(WrapReceiver, "wrap-receiver")
+  DECLARE_HYDROGEN_ACCESSOR(WrapReceiver)
+
+  LOperand* receiver() { return inputs_[0]; }
+  LOperand* function() { return inputs_[1]; }
+};
+
+
+class LApplyArguments V8_FINAL : public LTemplateInstruction<1, 4, 0> {
+ public:
+  LApplyArguments(LOperand* function,
+                  LOperand* receiver,
+                  LOperand* length,
+                  LOperand* elements) {
+    inputs_[0] = function;
+    inputs_[1] = receiver;
+    inputs_[2] = length;
+    inputs_[3] = elements;
+  }
+
+  DECLARE_CONCRETE_INSTRUCTION(ApplyArguments, "apply-arguments")
+
+  LOperand* function() { return inputs_[0]; }
+  LOperand* receiver() { return inputs_[1]; }
+  LOperand* length() { return inputs_[2]; }
+  LOperand* elements() { return inputs_[3]; }
+};
+
+
+class LAccessArgumentsAt V8_FINAL : public LTemplateInstruction<1, 3, 0> {
+ public:
+  LAccessArgumentsAt(LOperand* arguments, LOperand* length, LOperand* index) {
+    inputs_[0] = arguments;
+    inputs_[1] = length;
+    inputs_[2] = index;
+  }
+
+  DECLARE_CONCRETE_INSTRUCTION(AccessArgumentsAt, "access-arguments-at")
+
+  LOperand* arguments() { return inputs_[0]; }
+  LOperand* length() { return inputs_[1]; }
+  LOperand* index() { return inputs_[2]; }
+
+  virtual void PrintDataTo(StringStream* stream) V8_OVERRIDE;
+};
+
+
+class LArgumentsLength V8_FINAL : public LTemplateInstruction<1, 1, 0> {
+ public:
+  explicit LArgumentsLength(LOperand* elements) {
+    inputs_[0] = elements;
+  }
+
+  LOperand* elements() { return inputs_[0]; }
+
+  DECLARE_CONCRETE_INSTRUCTION(ArgumentsLength, "arguments-length")
+};
+
+
+class LArgumentsElements V8_FINAL : public LTemplateInstruction<1, 0, 0> {
+ public:
+  DECLARE_CONCRETE_INSTRUCTION(ArgumentsElements, "arguments-elements")
+  DECLARE_HYDROGEN_ACCESSOR(ArgumentsElements)
+};
+
+
+class LModByPowerOf2I V8_FINAL : public LTemplateInstruction<1, 1, 0> {
+ public:
+  LModByPowerOf2I(LOperand* dividend, int32_t divisor) {
+    inputs_[0] = dividend;
+    divisor_ = divisor;
+  }
+
+  LOperand* dividend() { return inputs_[0]; }
+  int32_t divisor() const { return divisor_; }
+
+  DECLARE_CONCRETE_INSTRUCTION(ModByPowerOf2I, "mod-by-power-of-2-i")
+  DECLARE_HYDROGEN_ACCESSOR(Mod)
+
+ private:
+  int32_t divisor_;
+};
+
+
+class LModByConstI V8_FINAL : public LTemplateInstruction<1, 1, 0> {
+ public:
+  LModByConstI(LOperand* dividend, int32_t divisor) {
+    inputs_[0] = dividend;
+    divisor_ = divisor;
+  }
+
+  LOperand* dividend() { return inputs_[0]; }
+  int32_t divisor() const { return divisor_; }
+
+  DECLARE_CONCRETE_INSTRUCTION(ModByConstI, "mod-by-const-i")
+  DECLARE_HYDROGEN_ACCESSOR(Mod)
+
+ private:
+  int32_t divisor_;
+};
+
+
+class LModI V8_FINAL : public LTemplateInstruction<1, 2, 3> {
+ public:
+  LModI(LOperand* left,
+        LOperand* right) {
+    inputs_[0] = left;
+    inputs_[1] = right;
+  }
+
+  LOperand* left() { return inputs_[0]; }
+  LOperand* right() { return inputs_[1]; }
+
+  DECLARE_CONCRETE_INSTRUCTION(ModI, "mod-i")
+  DECLARE_HYDROGEN_ACCESSOR(Mod)
+};
+
+
+class LDivByPowerOf2I V8_FINAL : public LTemplateInstruction<1, 1, 0> {
+ public:
+  LDivByPowerOf2I(LOperand* dividend, int32_t divisor) {
+    inputs_[0] = dividend;
+    divisor_ = divisor;
+  }
+
+  LOperand* dividend() { return inputs_[0]; }
+  int32_t divisor() const { return divisor_; }
+
+  DECLARE_CONCRETE_INSTRUCTION(DivByPowerOf2I, "div-by-power-of-2-i")
+  DECLARE_HYDROGEN_ACCESSOR(Div)
+
+ private:
+  int32_t divisor_;
+};
+
+
+class LDivByConstI V8_FINAL : public LTemplateInstruction<1, 1, 0> {
+ public:
+  LDivByConstI(LOperand* dividend, int32_t divisor) {
+    inputs_[0] = dividend;
+    divisor_ = divisor;
+  }
+
+  LOperand* dividend() { return inputs_[0]; }
+  int32_t divisor() const { return divisor_; }
+
+  DECLARE_CONCRETE_INSTRUCTION(DivByConstI, "div-by-const-i")
+  DECLARE_HYDROGEN_ACCESSOR(Div)
+
+ private:
+  int32_t divisor_;
+};
+
+
+class LDivI V8_FINAL : public LTemplateInstruction<1, 2, 1> {
+ public:
+  LDivI(LOperand* dividend, LOperand* divisor, LOperand* temp) {
+    inputs_[0] = dividend;
+    inputs_[1] = divisor;
+    temps_[0] = temp;
+  }
+
+  LOperand* dividend() { return inputs_[0]; }
+  LOperand* divisor() { return inputs_[1]; }
+  LOperand* temp() { return temps_[0]; }
+
+  DECLARE_CONCRETE_INSTRUCTION(DivI, "div-i")
+  DECLARE_HYDROGEN_ACCESSOR(BinaryOperation)
+};
+
+
+class LFlooringDivByPowerOf2I V8_FINAL : public LTemplateInstruction<1, 1, 0> {
+ public:
+  LFlooringDivByPowerOf2I(LOperand* dividend, int32_t divisor) {
+    inputs_[0] = dividend;
+    divisor_ = divisor;
+  }
+
+  LOperand* dividend() { return inputs_[0]; }
+  int32_t divisor() { return divisor_; }
+
+  DECLARE_CONCRETE_INSTRUCTION(FlooringDivByPowerOf2I,
+                               "flooring-div-by-power-of-2-i")
+  DECLARE_HYDROGEN_ACCESSOR(MathFloorOfDiv)
+
+ private:
+  int32_t divisor_;
+};
+
+
+class LFlooringDivByConstI V8_FINAL : public LTemplateInstruction<1, 1, 2> {
+ public:
+  LFlooringDivByConstI(LOperand* dividend, int32_t divisor, LOperand* temp) {
+    inputs_[0] = dividend;
+    divisor_ = divisor;
+    temps_[0] = temp;
+  }
+
+  LOperand* dividend() { return inputs_[0]; }
+  int32_t divisor() const { return divisor_; }
+  LOperand* temp() { return temps_[0]; }
+
+  DECLARE_CONCRETE_INSTRUCTION(FlooringDivByConstI, "flooring-div-by-const-i")
+  DECLARE_HYDROGEN_ACCESSOR(MathFloorOfDiv)
+
+ private:
+  int32_t divisor_;
+};
+
+
+class LFlooringDivI V8_FINAL : public LTemplateInstruction<1, 2, 0> {
+ public:
+  LFlooringDivI(LOperand* dividend, LOperand* divisor) {
+    inputs_[0] = dividend;
+    inputs_[1] = divisor;
+  }
+
+  LOperand* dividend() { return inputs_[0]; }
+  LOperand* divisor() { return inputs_[1]; }
+
+  DECLARE_CONCRETE_INSTRUCTION(FlooringDivI, "flooring-div-i")
+  DECLARE_HYDROGEN_ACCESSOR(MathFloorOfDiv)
+};
+
+
+class LMulI V8_FINAL : public LTemplateInstruction<1, 2, 0> {
+ public:
+  LMulI(LOperand* left, LOperand* right) {
+    inputs_[0] = left;
+    inputs_[1] = right;
+  }
+
+  LOperand* left() { return inputs_[0]; }
+  LOperand* right() { return inputs_[1]; }
+
+  DECLARE_CONCRETE_INSTRUCTION(MulI, "mul-i")
+  DECLARE_HYDROGEN_ACCESSOR(Mul)
+};
+
+
+// Instruction for computing multiplier * multiplicand + addend.
+class LMultiplyAddD V8_FINAL : public LTemplateInstruction<1, 3, 0> { + public: + LMultiplyAddD(LOperand* addend, LOperand* multiplier, + LOperand* multiplicand) { + inputs_[0] = addend; + inputs_[1] = multiplier; + inputs_[2] = multiplicand; + } + + LOperand* addend() { return inputs_[0]; } + LOperand* multiplier() { return inputs_[1]; } + LOperand* multiplicand() { return inputs_[2]; } + + DECLARE_CONCRETE_INSTRUCTION(MultiplyAddD, "multiply-add-d") +}; + + +class LDebugBreak V8_FINAL : public LTemplateInstruction<0, 0, 0> { + public: + DECLARE_CONCRETE_INSTRUCTION(DebugBreak, "break") +}; + + +class LCompareNumericAndBranch V8_FINAL : public LControlInstruction<2, 0> { + public: + LCompareNumericAndBranch(LOperand* left, LOperand* right) { + inputs_[0] = left; + inputs_[1] = right; + } + + LOperand* left() { return inputs_[0]; } + LOperand* right() { return inputs_[1]; } + + DECLARE_CONCRETE_INSTRUCTION(CompareNumericAndBranch, + "compare-numeric-and-branch") + DECLARE_HYDROGEN_ACCESSOR(CompareNumericAndBranch) + + Token::Value op() const { return hydrogen()->token(); } + bool is_double() const { + return hydrogen()->representation().IsDouble(); + } + + virtual void PrintDataTo(StringStream* stream) V8_OVERRIDE; +}; + + +class LMathFloor V8_FINAL : public LTemplateInstruction<1, 1, 1> { + public: + LMathFloor(LOperand* value, LOperand* temp) { + inputs_[0] = value; + temps_[0] = temp; + } + + LOperand* value() { return inputs_[0]; } + LOperand* temp() { return temps_[0]; } + + DECLARE_CONCRETE_INSTRUCTION(MathFloor, "math-floor") + DECLARE_HYDROGEN_ACCESSOR(UnaryMathOperation) +}; + + +class LMathRound V8_FINAL : public LTemplateInstruction<1, 1, 1> { + public: + LMathRound(LOperand* value, LOperand* temp) { + inputs_[0] = value; + temps_[0] = temp; + } + + LOperand* value() { return inputs_[0]; } + LOperand* temp() { return temps_[0]; } + + DECLARE_CONCRETE_INSTRUCTION(MathRound, "math-round") + DECLARE_HYDROGEN_ACCESSOR(UnaryMathOperation) +}; + + +class LMathFround V8_FINAL : public LTemplateInstruction<1, 1, 0> { + public: + explicit LMathFround(LOperand* value) { inputs_[0] = value; } + + LOperand* value() { return inputs_[0]; } + + DECLARE_CONCRETE_INSTRUCTION(MathFround, "math-fround") +}; + + +class LMathAbs V8_FINAL : public LTemplateInstruction<1, 2, 0> { + public: + LMathAbs(LOperand* context, LOperand* value) { + inputs_[1] = context; + inputs_[0] = value; + } + + LOperand* context() { return inputs_[1]; } + LOperand* value() { return inputs_[0]; } + + DECLARE_CONCRETE_INSTRUCTION(MathAbs, "math-abs") + DECLARE_HYDROGEN_ACCESSOR(UnaryMathOperation) +}; + + +class LMathLog V8_FINAL : public LTemplateInstruction<1, 1, 0> { + public: + explicit LMathLog(LOperand* value) { + inputs_[0] = value; + } + + LOperand* value() { return inputs_[0]; } + + DECLARE_CONCRETE_INSTRUCTION(MathLog, "math-log") +}; + + +class LMathClz32 V8_FINAL : public LTemplateInstruction<1, 1, 0> { + public: + explicit LMathClz32(LOperand* value) { + inputs_[0] = value; + } + + LOperand* value() { return inputs_[0]; } + + DECLARE_CONCRETE_INSTRUCTION(MathClz32, "math-clz32") +}; + + +class LMathExp V8_FINAL : public LTemplateInstruction<1, 1, 3> { + public: + LMathExp(LOperand* value, + LOperand* double_temp, + LOperand* temp1, + LOperand* temp2) { + inputs_[0] = value; + temps_[0] = temp1; + temps_[1] = temp2; + temps_[2] = double_temp; + ExternalReference::InitializeMathExpData(); + } + + LOperand* value() { return inputs_[0]; } + LOperand* temp1() { return temps_[0]; } + LOperand* temp2() { return 
temps_[1]; } + LOperand* double_temp() { return temps_[2]; } + + DECLARE_CONCRETE_INSTRUCTION(MathExp, "math-exp") +}; + + +class LMathSqrt V8_FINAL : public LTemplateInstruction<1, 1, 0> { + public: + explicit LMathSqrt(LOperand* value) { + inputs_[0] = value; + } + + LOperand* value() { return inputs_[0]; } + + DECLARE_CONCRETE_INSTRUCTION(MathSqrt, "math-sqrt") +}; + + +class LMathPowHalf V8_FINAL : public LTemplateInstruction<1, 1, 1> { + public: + LMathPowHalf(LOperand* value, LOperand* temp) { + inputs_[0] = value; + temps_[0] = temp; + } + + LOperand* value() { return inputs_[0]; } + LOperand* temp() { return temps_[0]; } + + DECLARE_CONCRETE_INSTRUCTION(MathPowHalf, "math-pow-half") +}; + + +class LCmpObjectEqAndBranch V8_FINAL : public LControlInstruction<2, 0> { + public: + LCmpObjectEqAndBranch(LOperand* left, LOperand* right) { + inputs_[0] = left; + inputs_[1] = right; + } + + LOperand* left() { return inputs_[0]; } + LOperand* right() { return inputs_[1]; } + + DECLARE_CONCRETE_INSTRUCTION(CmpObjectEqAndBranch, "cmp-object-eq-and-branch") + DECLARE_HYDROGEN_ACCESSOR(CompareObjectEqAndBranch) +}; + + +class LCmpHoleAndBranch V8_FINAL : public LControlInstruction<1, 0> { + public: + explicit LCmpHoleAndBranch(LOperand* object) { + inputs_[0] = object; + } + + LOperand* object() { return inputs_[0]; } + + DECLARE_CONCRETE_INSTRUCTION(CmpHoleAndBranch, "cmp-hole-and-branch") + DECLARE_HYDROGEN_ACCESSOR(CompareHoleAndBranch) +}; + + +class LCompareMinusZeroAndBranch V8_FINAL : public LControlInstruction<1, 1> { + public: + LCompareMinusZeroAndBranch(LOperand* value, LOperand* temp) { + inputs_[0] = value; + temps_[0] = temp; + } + + LOperand* value() { return inputs_[0]; } + LOperand* temp() { return temps_[0]; } + + DECLARE_CONCRETE_INSTRUCTION(CompareMinusZeroAndBranch, + "cmp-minus-zero-and-branch") + DECLARE_HYDROGEN_ACCESSOR(CompareMinusZeroAndBranch) +}; + + +class LIsObjectAndBranch V8_FINAL : public LControlInstruction<1, 1> { + public: + LIsObjectAndBranch(LOperand* value, LOperand* temp) { + inputs_[0] = value; + temps_[0] = temp; + } + + LOperand* value() { return inputs_[0]; } + LOperand* temp() { return temps_[0]; } + + DECLARE_CONCRETE_INSTRUCTION(IsObjectAndBranch, "is-object-and-branch") + DECLARE_HYDROGEN_ACCESSOR(IsObjectAndBranch) + + virtual void PrintDataTo(StringStream* stream); +}; + + +class LIsStringAndBranch V8_FINAL : public LControlInstruction<1, 1> { + public: + LIsStringAndBranch(LOperand* value, LOperand* temp) { + inputs_[0] = value; + temps_[0] = temp; + } + + LOperand* value() { return inputs_[0]; } + LOperand* temp() { return temps_[0]; } + + DECLARE_CONCRETE_INSTRUCTION(IsStringAndBranch, "is-string-and-branch") + DECLARE_HYDROGEN_ACCESSOR(IsStringAndBranch) + + virtual void PrintDataTo(StringStream* stream) V8_OVERRIDE; +}; + + +class LIsSmiAndBranch V8_FINAL : public LControlInstruction<1, 0> { + public: + explicit LIsSmiAndBranch(LOperand* value) { + inputs_[0] = value; + } + + LOperand* value() { return inputs_[0]; } + + DECLARE_CONCRETE_INSTRUCTION(IsSmiAndBranch, "is-smi-and-branch") + DECLARE_HYDROGEN_ACCESSOR(IsSmiAndBranch) + + virtual void PrintDataTo(StringStream* stream) V8_OVERRIDE; +}; + + +class LIsUndetectableAndBranch V8_FINAL : public LControlInstruction<1, 1> { + public: + explicit LIsUndetectableAndBranch(LOperand* value, LOperand* temp) { + inputs_[0] = value; + temps_[0] = temp; + } + + LOperand* value() { return inputs_[0]; } + LOperand* temp() { return temps_[0]; } + + DECLARE_CONCRETE_INSTRUCTION(IsUndetectableAndBranch, 
+ "is-undetectable-and-branch") + DECLARE_HYDROGEN_ACCESSOR(IsUndetectableAndBranch) + + virtual void PrintDataTo(StringStream* stream) V8_OVERRIDE; +}; + + +class LStringCompareAndBranch V8_FINAL : public LControlInstruction<3, 0> { + public: + LStringCompareAndBranch(LOperand* context, LOperand* left, LOperand* right) { + inputs_[0] = context; + inputs_[1] = left; + inputs_[2] = right; + } + + LOperand* context() { return inputs_[0]; } + LOperand* left() { return inputs_[1]; } + LOperand* right() { return inputs_[2]; } + + DECLARE_CONCRETE_INSTRUCTION(StringCompareAndBranch, + "string-compare-and-branch") + DECLARE_HYDROGEN_ACCESSOR(StringCompareAndBranch) + + Token::Value op() const { return hydrogen()->token(); } + + virtual void PrintDataTo(StringStream* stream) V8_OVERRIDE; +}; + + +class LHasInstanceTypeAndBranch V8_FINAL : public LControlInstruction<1, 0> { + public: + explicit LHasInstanceTypeAndBranch(LOperand* value) { + inputs_[0] = value; + } + + LOperand* value() { return inputs_[0]; } + + DECLARE_CONCRETE_INSTRUCTION(HasInstanceTypeAndBranch, + "has-instance-type-and-branch") + DECLARE_HYDROGEN_ACCESSOR(HasInstanceTypeAndBranch) + + virtual void PrintDataTo(StringStream* stream) V8_OVERRIDE; +}; + + +class LGetCachedArrayIndex V8_FINAL : public LTemplateInstruction<1, 1, 0> { + public: + explicit LGetCachedArrayIndex(LOperand* value) { + inputs_[0] = value; + } + + LOperand* value() { return inputs_[0]; } + + DECLARE_CONCRETE_INSTRUCTION(GetCachedArrayIndex, "get-cached-array-index") + DECLARE_HYDROGEN_ACCESSOR(GetCachedArrayIndex) +}; + + +class LHasCachedArrayIndexAndBranch V8_FINAL + : public LControlInstruction<1, 0> { + public: + explicit LHasCachedArrayIndexAndBranch(LOperand* value) { + inputs_[0] = value; + } + + LOperand* value() { return inputs_[0]; } + + DECLARE_CONCRETE_INSTRUCTION(HasCachedArrayIndexAndBranch, + "has-cached-array-index-and-branch") + DECLARE_HYDROGEN_ACCESSOR(HasCachedArrayIndexAndBranch) + + virtual void PrintDataTo(StringStream* stream) V8_OVERRIDE; +}; + + +class LClassOfTestAndBranch V8_FINAL : public LControlInstruction<1, 1> { + public: + LClassOfTestAndBranch(LOperand* value, LOperand* temp) { + inputs_[0] = value; + temps_[0] = temp; + } + + LOperand* value() { return inputs_[0]; } + LOperand* temp() { return temps_[0]; } + + DECLARE_CONCRETE_INSTRUCTION(ClassOfTestAndBranch, + "class-of-test-and-branch") + DECLARE_HYDROGEN_ACCESSOR(ClassOfTestAndBranch) + + virtual void PrintDataTo(StringStream* stream) V8_OVERRIDE; +}; + + +class LCmpT V8_FINAL : public LTemplateInstruction<1, 3, 0> { + public: + LCmpT(LOperand* context, LOperand* left, LOperand* right) { + inputs_[0] = context; + inputs_[1] = left; + inputs_[2] = right; + } + + LOperand* context() { return inputs_[0]; } + LOperand* left() { return inputs_[1]; } + LOperand* right() { return inputs_[2]; } + + DECLARE_CONCRETE_INSTRUCTION(CmpT, "cmp-t") + DECLARE_HYDROGEN_ACCESSOR(CompareGeneric) + + Token::Value op() const { return hydrogen()->token(); } +}; + + +class LInstanceOf V8_FINAL : public LTemplateInstruction<1, 3, 0> { + public: + LInstanceOf(LOperand* context, LOperand* left, LOperand* right) { + inputs_[0] = context; + inputs_[1] = left; + inputs_[2] = right; + } + + LOperand* context() { return inputs_[0]; } + LOperand* left() { return inputs_[1]; } + LOperand* right() { return inputs_[2]; } + + DECLARE_CONCRETE_INSTRUCTION(InstanceOf, "instance-of") +}; + + +class LInstanceOfKnownGlobal V8_FINAL : public LTemplateInstruction<1, 2, 1> { + public: + 
LInstanceOfKnownGlobal(LOperand* context, LOperand* value, LOperand* temp) { + inputs_[0] = context; + inputs_[1] = value; + temps_[0] = temp; + } + + LOperand* context() { return inputs_[0]; } + LOperand* value() { return inputs_[1]; } + LOperand* temp() { return temps_[0]; } + + DECLARE_CONCRETE_INSTRUCTION(InstanceOfKnownGlobal, + "instance-of-known-global") + DECLARE_HYDROGEN_ACCESSOR(InstanceOfKnownGlobal) + + Handle<JSFunction> function() const { return hydrogen()->function(); } + LEnvironment* GetDeferredLazyDeoptimizationEnvironment() { + return lazy_deopt_env_; + } + virtual void SetDeferredLazyDeoptimizationEnvironment( + LEnvironment* env) V8_OVERRIDE { + lazy_deopt_env_ = env; + } + + private: + LEnvironment* lazy_deopt_env_; +}; + + +class LBoundsCheck V8_FINAL : public LTemplateInstruction<0, 2, 0> { + public: + LBoundsCheck(LOperand* index, LOperand* length) { + inputs_[0] = index; + inputs_[1] = length; + } + + LOperand* index() { return inputs_[0]; } + LOperand* length() { return inputs_[1]; } + + DECLARE_CONCRETE_INSTRUCTION(BoundsCheck, "bounds-check") + DECLARE_HYDROGEN_ACCESSOR(BoundsCheck) +}; + + +class LBitI V8_FINAL : public LTemplateInstruction<1, 2, 0> { + public: + LBitI(LOperand* left, LOperand* right) { + inputs_[0] = left; + inputs_[1] = right; + } + + LOperand* left() { return inputs_[0]; } + LOperand* right() { return inputs_[1]; } + + Token::Value op() const { return hydrogen()->op(); } + + DECLARE_CONCRETE_INSTRUCTION(BitI, "bit-i") + DECLARE_HYDROGEN_ACCESSOR(Bitwise) +}; + + +class LShiftI V8_FINAL : public LTemplateInstruction<1, 2, 0> { + public: + LShiftI(Token::Value op, LOperand* left, LOperand* right, bool can_deopt) + : op_(op), can_deopt_(can_deopt) { + inputs_[0] = left; + inputs_[1] = right; + } + + Token::Value op() const { return op_; } + LOperand* left() { return inputs_[0]; } + LOperand* right() { return inputs_[1]; } + bool can_deopt() const { return can_deopt_; } + + DECLARE_CONCRETE_INSTRUCTION(ShiftI, "shift-i") + + private: + Token::Value op_; + bool can_deopt_; +}; + + +class LSubI V8_FINAL : public LTemplateInstruction<1, 2, 0> { + public: + LSubI(LOperand* left, LOperand* right) { + inputs_[0] = left; + inputs_[1] = right; + } + + LOperand* left() { return inputs_[0]; } + LOperand* right() { return inputs_[1]; } + + DECLARE_CONCRETE_INSTRUCTION(SubI, "sub-i") + DECLARE_HYDROGEN_ACCESSOR(Sub) +}; + + +class LConstantI V8_FINAL : public LTemplateInstruction<1, 0, 0> { + public: + DECLARE_CONCRETE_INSTRUCTION(ConstantI, "constant-i") + DECLARE_HYDROGEN_ACCESSOR(Constant) + + int32_t value() const { return hydrogen()->Integer32Value(); } +}; + + +class LConstantS V8_FINAL : public LTemplateInstruction<1, 0, 0> { + public: + DECLARE_CONCRETE_INSTRUCTION(ConstantS, "constant-s") + DECLARE_HYDROGEN_ACCESSOR(Constant) + + Smi* value() const { return Smi::FromInt(hydrogen()->Integer32Value()); } +}; + + +class LConstantD V8_FINAL : public LTemplateInstruction<1, 0, 0> { + public: + DECLARE_CONCRETE_INSTRUCTION(ConstantD, "constant-d") + DECLARE_HYDROGEN_ACCESSOR(Constant) + + double value() const { return hydrogen()->DoubleValue(); } +}; + + +class LConstantE V8_FINAL : public LTemplateInstruction<1, 0, 0> { + public: + DECLARE_CONCRETE_INSTRUCTION(ConstantE, "constant-e") + DECLARE_HYDROGEN_ACCESSOR(Constant) + + ExternalReference value() const { + return hydrogen()->ExternalReferenceValue(); + } +}; + + +class LConstantT V8_FINAL : public LTemplateInstruction<1, 0, 0> { + public: + DECLARE_CONCRETE_INSTRUCTION(ConstantT, "constant-t") + 
DECLARE_HYDROGEN_ACCESSOR(Constant) + + Handle<Object> value(Isolate* isolate) const { + return hydrogen()->handle(isolate); + } +}; + + +class LBranch V8_FINAL : public LControlInstruction<1, 0> { + public: + explicit LBranch(LOperand* value) { + inputs_[0] = value; + } + + LOperand* value() { return inputs_[0]; } + + DECLARE_CONCRETE_INSTRUCTION(Branch, "branch") + DECLARE_HYDROGEN_ACCESSOR(Branch) + + virtual void PrintDataTo(StringStream* stream) V8_OVERRIDE; +}; + + +class LCmpMapAndBranch V8_FINAL : public LControlInstruction<1, 1> { + public: + LCmpMapAndBranch(LOperand* value, LOperand* temp) { + inputs_[0] = value; + temps_[0] = temp; + } + + LOperand* value() { return inputs_[0]; } + LOperand* temp() { return temps_[0]; } + + DECLARE_CONCRETE_INSTRUCTION(CmpMapAndBranch, "cmp-map-and-branch") + DECLARE_HYDROGEN_ACCESSOR(CompareMap) + + Handle<Map> map() const { return hydrogen()->map().handle(); } +}; + + +class LMapEnumLength V8_FINAL : public LTemplateInstruction<1, 1, 0> { + public: + explicit LMapEnumLength(LOperand* value) { + inputs_[0] = value; + } + + LOperand* value() { return inputs_[0]; } + + DECLARE_CONCRETE_INSTRUCTION(MapEnumLength, "map-enum-length") +}; + + +class LDateField V8_FINAL : public LTemplateInstruction<1, 1, 1> { + public: + LDateField(LOperand* date, LOperand* temp, Smi* index) : index_(index) { + inputs_[0] = date; + temps_[0] = temp; + } + + LOperand* date() { return inputs_[0]; } + LOperand* temp() { return temps_[0]; } + Smi* index() const { return index_; } + + DECLARE_CONCRETE_INSTRUCTION(DateField, "date-field") + DECLARE_HYDROGEN_ACCESSOR(DateField) + + private: + Smi* index_; +}; + + +class LSeqStringGetChar V8_FINAL : public LTemplateInstruction<1, 2, 0> { + public: + LSeqStringGetChar(LOperand* string, LOperand* index) { + inputs_[0] = string; + inputs_[1] = index; + } + + LOperand* string() const { return inputs_[0]; } + LOperand* index() const { return inputs_[1]; } + + DECLARE_CONCRETE_INSTRUCTION(SeqStringGetChar, "seq-string-get-char") + DECLARE_HYDROGEN_ACCESSOR(SeqStringGetChar) +}; + + +class LSeqStringSetChar V8_FINAL : public LTemplateInstruction<1, 4, 0> { + public: + LSeqStringSetChar(LOperand* context, + LOperand* string, + LOperand* index, + LOperand* value) { + inputs_[0] = context; + inputs_[1] = string; + inputs_[2] = index; + inputs_[3] = value; + } + + LOperand* string() { return inputs_[1]; } + LOperand* index() { return inputs_[2]; } + LOperand* value() { return inputs_[3]; } + + DECLARE_CONCRETE_INSTRUCTION(SeqStringSetChar, "seq-string-set-char") + DECLARE_HYDROGEN_ACCESSOR(SeqStringSetChar) +}; + + +class LAddI V8_FINAL : public LTemplateInstruction<1, 2, 0> { + public: + LAddI(LOperand* left, LOperand* right) { + inputs_[0] = left; + inputs_[1] = right; + } + + LOperand* left() { return inputs_[0]; } + LOperand* right() { return inputs_[1]; } + + DECLARE_CONCRETE_INSTRUCTION(AddI, "add-i") + DECLARE_HYDROGEN_ACCESSOR(Add) +}; + + +class LMathMinMax V8_FINAL : public LTemplateInstruction<1, 2, 0> { + public: + LMathMinMax(LOperand* left, LOperand* right) { + inputs_[0] = left; + inputs_[1] = right; + } + + LOperand* left() { return inputs_[0]; } + LOperand* right() { return inputs_[1]; } + + DECLARE_CONCRETE_INSTRUCTION(MathMinMax, "math-min-max") + DECLARE_HYDROGEN_ACCESSOR(MathMinMax) +}; + + +class LPower V8_FINAL : public LTemplateInstruction<1, 2, 0> { + public: + LPower(LOperand* left, LOperand* right) { + inputs_[0] = left; + inputs_[1] = right; + } + + LOperand* left() { return inputs_[0]; } + LOperand* 
right() { return inputs_[1]; } + + DECLARE_CONCRETE_INSTRUCTION(Power, "power") + DECLARE_HYDROGEN_ACCESSOR(Power) +}; + + +class LArithmeticD V8_FINAL : public LTemplateInstruction<1, 2, 0> { + public: + LArithmeticD(Token::Value op, LOperand* left, LOperand* right) + : op_(op) { + inputs_[0] = left; + inputs_[1] = right; + } + + Token::Value op() const { return op_; } + LOperand* left() { return inputs_[0]; } + LOperand* right() { return inputs_[1]; } + + virtual Opcode opcode() const V8_OVERRIDE { + return LInstruction::kArithmeticD; + } + virtual void CompileToNative(LCodeGen* generator) V8_OVERRIDE; + virtual const char* Mnemonic() const V8_OVERRIDE; + + private: + Token::Value op_; +}; + + +class LArithmeticT V8_FINAL : public LTemplateInstruction<1, 3, 0> { + public: + LArithmeticT(Token::Value op, + LOperand* context, + LOperand* left, + LOperand* right) + : op_(op) { + inputs_[0] = context; + inputs_[1] = left; + inputs_[2] = right; + } + + LOperand* context() { return inputs_[0]; } + LOperand* left() { return inputs_[1]; } + LOperand* right() { return inputs_[2]; } + Token::Value op() const { return op_; } + + virtual Opcode opcode() const V8_FINAL { return LInstruction::kArithmeticT; } + virtual void CompileToNative(LCodeGen* generator) V8_OVERRIDE; + virtual const char* Mnemonic() const V8_OVERRIDE; + + private: + Token::Value op_; +}; + + +class LReturn V8_FINAL : public LTemplateInstruction<0, 3, 0> { + public: + LReturn(LOperand* value, LOperand* context, LOperand* parameter_count) { + inputs_[0] = value; + inputs_[1] = context; + inputs_[2] = parameter_count; + } + + LOperand* value() { return inputs_[0]; } + + bool has_constant_parameter_count() { + return parameter_count()->IsConstantOperand(); + } + LConstantOperand* constant_parameter_count() { + DCHECK(has_constant_parameter_count()); + return LConstantOperand::cast(parameter_count()); + } + LOperand* parameter_count() { return inputs_[2]; } + + DECLARE_CONCRETE_INSTRUCTION(Return, "return") +}; + + +class LLoadNamedField V8_FINAL : public LTemplateInstruction<1, 1, 0> { + public: + explicit LLoadNamedField(LOperand* object) { + inputs_[0] = object; + } + + LOperand* object() { return inputs_[0]; } + + DECLARE_CONCRETE_INSTRUCTION(LoadNamedField, "load-named-field") + DECLARE_HYDROGEN_ACCESSOR(LoadNamedField) +}; + + +class LLoadNamedGeneric V8_FINAL : public LTemplateInstruction<1, 2, 1> { + public: + LLoadNamedGeneric(LOperand* context, LOperand* object, LOperand* vector) { + inputs_[0] = context; + inputs_[1] = object; + temps_[0] = vector; + } + + LOperand* context() { return inputs_[0]; } + LOperand* object() { return inputs_[1]; } + LOperand* temp_vector() { return temps_[0]; } + + DECLARE_CONCRETE_INSTRUCTION(LoadNamedGeneric, "load-named-generic") + DECLARE_HYDROGEN_ACCESSOR(LoadNamedGeneric) + + Handle<Object> name() const { return hydrogen()->name(); } +}; + + +class LLoadFunctionPrototype V8_FINAL : public LTemplateInstruction<1, 1, 0> { + public: + explicit LLoadFunctionPrototype(LOperand* function) { + inputs_[0] = function; + } + + LOperand* function() { return inputs_[0]; } + + DECLARE_CONCRETE_INSTRUCTION(LoadFunctionPrototype, "load-function-prototype") + DECLARE_HYDROGEN_ACCESSOR(LoadFunctionPrototype) +}; + + +class LLoadRoot V8_FINAL : public LTemplateInstruction<1, 0, 0> { + public: + DECLARE_CONCRETE_INSTRUCTION(LoadRoot, "load-root") + DECLARE_HYDROGEN_ACCESSOR(LoadRoot) + + Heap::RootListIndex index() const { return hydrogen()->index(); } +}; + + +class LLoadKeyed V8_FINAL : public 
LTemplateInstruction<1, 2, 0> { + public: + LLoadKeyed(LOperand* elements, LOperand* key) { + inputs_[0] = elements; + inputs_[1] = key; + } + + LOperand* elements() { return inputs_[0]; } + LOperand* key() { return inputs_[1]; } + ElementsKind elements_kind() const { + return hydrogen()->elements_kind(); + } + bool is_external() const { + return hydrogen()->is_external(); + } + bool is_fixed_typed_array() const { + return hydrogen()->is_fixed_typed_array(); + } + bool is_typed_elements() const { + return is_external() || is_fixed_typed_array(); + } + + DECLARE_CONCRETE_INSTRUCTION(LoadKeyed, "load-keyed") + DECLARE_HYDROGEN_ACCESSOR(LoadKeyed) + + virtual void PrintDataTo(StringStream* stream); + uint32_t base_offset() const { return hydrogen()->base_offset(); } +}; + + +class LLoadKeyedGeneric V8_FINAL : public LTemplateInstruction<1, 3, 1> { + public: + LLoadKeyedGeneric(LOperand* context, LOperand* object, LOperand* key, + LOperand* vector) { + inputs_[0] = context; + inputs_[1] = object; + inputs_[2] = key; + temps_[0] = vector; + } + + LOperand* context() { return inputs_[0]; } + LOperand* object() { return inputs_[1]; } + LOperand* key() { return inputs_[2]; } + LOperand* temp_vector() { return temps_[0]; } + + DECLARE_CONCRETE_INSTRUCTION(LoadKeyedGeneric, "load-keyed-generic") + DECLARE_HYDROGEN_ACCESSOR(LoadKeyedGeneric) +}; + + +class LLoadGlobalCell V8_FINAL : public LTemplateInstruction<1, 0, 0> { + public: + DECLARE_CONCRETE_INSTRUCTION(LoadGlobalCell, "load-global-cell") + DECLARE_HYDROGEN_ACCESSOR(LoadGlobalCell) +}; + + +class LLoadGlobalGeneric V8_FINAL : public LTemplateInstruction<1, 2, 1> { + public: + LLoadGlobalGeneric(LOperand* context, LOperand* global_object, + LOperand* vector) { + inputs_[0] = context; + inputs_[1] = global_object; + temps_[0] = vector; + } + + LOperand* context() { return inputs_[0]; } + LOperand* global_object() { return inputs_[1]; } + LOperand* temp_vector() { return temps_[0]; } + + DECLARE_CONCRETE_INSTRUCTION(LoadGlobalGeneric, "load-global-generic") + DECLARE_HYDROGEN_ACCESSOR(LoadGlobalGeneric) + + Handle<Object> name() const { return hydrogen()->name(); } + bool for_typeof() const { return hydrogen()->for_typeof(); } +}; + + +class LStoreGlobalCell V8_FINAL : public LTemplateInstruction<0, 1, 1> { + public: + LStoreGlobalCell(LOperand* value, LOperand* temp) { + inputs_[0] = value; + temps_[0] = temp; + } + + LOperand* value() { return inputs_[0]; } + LOperand* temp() { return temps_[0]; } + + DECLARE_CONCRETE_INSTRUCTION(StoreGlobalCell, "store-global-cell") + DECLARE_HYDROGEN_ACCESSOR(StoreGlobalCell) +}; + + +class LLoadContextSlot V8_FINAL : public LTemplateInstruction<1, 1, 0> { + public: + explicit LLoadContextSlot(LOperand* context) { + inputs_[0] = context; + } + + LOperand* context() { return inputs_[0]; } + + DECLARE_CONCRETE_INSTRUCTION(LoadContextSlot, "load-context-slot") + DECLARE_HYDROGEN_ACCESSOR(LoadContextSlot) + + int slot_index() { return hydrogen()->slot_index(); } + + virtual void PrintDataTo(StringStream* stream); +}; + + +class LStoreContextSlot V8_FINAL : public LTemplateInstruction<0, 2, 0> { + public: + LStoreContextSlot(LOperand* context, LOperand* value) { + inputs_[0] = context; + inputs_[1] = value; + } + + LOperand* context() { return inputs_[0]; } + LOperand* value() { return inputs_[1]; } + + DECLARE_CONCRETE_INSTRUCTION(StoreContextSlot, "store-context-slot") + DECLARE_HYDROGEN_ACCESSOR(StoreContextSlot) + + int slot_index() { return hydrogen()->slot_index(); } + + virtual void 
PrintDataTo(StringStream* stream) V8_OVERRIDE; +}; + + +class LPushArgument V8_FINAL : public LTemplateInstruction<0, 1, 0> { + public: + explicit LPushArgument(LOperand* value) { + inputs_[0] = value; + } + + LOperand* value() { return inputs_[0]; } + + DECLARE_CONCRETE_INSTRUCTION(PushArgument, "push-argument") +}; + + +class LDrop V8_FINAL : public LTemplateInstruction<0, 0, 0> { + public: + explicit LDrop(int count) : count_(count) { } + + int count() const { return count_; } + + DECLARE_CONCRETE_INSTRUCTION(Drop, "drop") + + private: + int count_; +}; + + +class LStoreCodeEntry V8_FINAL: public LTemplateInstruction<0, 2, 0> { + public: + LStoreCodeEntry(LOperand* function, LOperand* code_object) { + inputs_[0] = function; + inputs_[1] = code_object; + } + + LOperand* function() { return inputs_[0]; } + LOperand* code_object() { return inputs_[1]; } + + virtual void PrintDataTo(StringStream* stream); + + DECLARE_CONCRETE_INSTRUCTION(StoreCodeEntry, "store-code-entry") + DECLARE_HYDROGEN_ACCESSOR(StoreCodeEntry) +}; + + +class LInnerAllocatedObject V8_FINAL: public LTemplateInstruction<1, 2, 0> { + public: + LInnerAllocatedObject(LOperand* base_object, LOperand* offset) { + inputs_[0] = base_object; + inputs_[1] = offset; + } + + LOperand* base_object() const { return inputs_[0]; } + LOperand* offset() const { return inputs_[1]; } + + virtual void PrintDataTo(StringStream* stream) V8_OVERRIDE; + + DECLARE_CONCRETE_INSTRUCTION(InnerAllocatedObject, "inner-allocated-object") +}; + + +class LThisFunction V8_FINAL : public LTemplateInstruction<1, 0, 0> { + public: + DECLARE_CONCRETE_INSTRUCTION(ThisFunction, "this-function") + DECLARE_HYDROGEN_ACCESSOR(ThisFunction) +}; + + +class LContext V8_FINAL : public LTemplateInstruction<1, 0, 0> { + public: + DECLARE_CONCRETE_INSTRUCTION(Context, "context") + DECLARE_HYDROGEN_ACCESSOR(Context) +}; + + +class LDeclareGlobals V8_FINAL : public LTemplateInstruction<0, 1, 0> { + public: + explicit LDeclareGlobals(LOperand* context) { + inputs_[0] = context; + } + + LOperand* context() { return inputs_[0]; } + + DECLARE_CONCRETE_INSTRUCTION(DeclareGlobals, "declare-globals") + DECLARE_HYDROGEN_ACCESSOR(DeclareGlobals) +}; + + +class LCallJSFunction V8_FINAL : public LTemplateInstruction<1, 1, 0> { + public: + explicit LCallJSFunction(LOperand* function) { + inputs_[0] = function; + } + + LOperand* function() { return inputs_[0]; } + + DECLARE_CONCRETE_INSTRUCTION(CallJSFunction, "call-js-function") + DECLARE_HYDROGEN_ACCESSOR(CallJSFunction) + + virtual void PrintDataTo(StringStream* stream) V8_OVERRIDE; + + int arity() const { return hydrogen()->argument_count() - 1; } +}; + + +class LCallWithDescriptor V8_FINAL : public LTemplateResultInstruction<1> { + public: + LCallWithDescriptor(const InterfaceDescriptor* descriptor, + const ZoneList<LOperand*>& operands, + Zone* zone) + : descriptor_(descriptor), + inputs_(descriptor->GetRegisterParameterCount() + 1, zone) { + DCHECK(descriptor->GetRegisterParameterCount() + 1 == operands.length()); + inputs_.AddAll(operands, zone); + } + + LOperand* target() const { return inputs_[0]; } + + const InterfaceDescriptor* descriptor() { return descriptor_; } + + private: + DECLARE_CONCRETE_INSTRUCTION(CallWithDescriptor, "call-with-descriptor") + DECLARE_HYDROGEN_ACCESSOR(CallWithDescriptor) + + virtual void PrintDataTo(StringStream* stream) V8_OVERRIDE; + + int arity() const { return hydrogen()->argument_count() - 1; } + + const InterfaceDescriptor* descriptor_; + ZoneList<LOperand*> inputs_; + + // Iterator support. 
+ virtual int InputCount() V8_FINAL V8_OVERRIDE { return inputs_.length(); } + virtual LOperand* InputAt(int i) V8_FINAL V8_OVERRIDE { return inputs_[i]; } + + virtual int TempCount() V8_FINAL V8_OVERRIDE { return 0; } + virtual LOperand* TempAt(int i) V8_FINAL V8_OVERRIDE { return NULL; } +}; + + + +class LInvokeFunction V8_FINAL : public LTemplateInstruction<1, 2, 0> { + public: + LInvokeFunction(LOperand* context, LOperand* function) { + inputs_[0] = context; + inputs_[1] = function; + } + + LOperand* context() { return inputs_[0]; } + LOperand* function() { return inputs_[1]; } + + DECLARE_CONCRETE_INSTRUCTION(InvokeFunction, "invoke-function") + DECLARE_HYDROGEN_ACCESSOR(InvokeFunction) + + virtual void PrintDataTo(StringStream* stream) V8_OVERRIDE; + + int arity() const { return hydrogen()->argument_count() - 1; } +}; + + +class LCallFunction V8_FINAL : public LTemplateInstruction<1, 2, 0> { + public: + LCallFunction(LOperand* context, LOperand* function) { + inputs_[0] = context; + inputs_[1] = function; + } + + LOperand* context() { return inputs_[0]; } + LOperand* function() { return inputs_[1]; } + + DECLARE_CONCRETE_INSTRUCTION(CallFunction, "call-function") + DECLARE_HYDROGEN_ACCESSOR(CallFunction) + + int arity() const { return hydrogen()->argument_count() - 1; } +}; + + +class LCallNew V8_FINAL : public LTemplateInstruction<1, 2, 0> { + public: + LCallNew(LOperand* context, LOperand* constructor) { + inputs_[0] = context; + inputs_[1] = constructor; + } + + LOperand* context() { return inputs_[0]; } + LOperand* constructor() { return inputs_[1]; } + + DECLARE_CONCRETE_INSTRUCTION(CallNew, "call-new") + DECLARE_HYDROGEN_ACCESSOR(CallNew) + + virtual void PrintDataTo(StringStream* stream) V8_OVERRIDE; + + int arity() const { return hydrogen()->argument_count() - 1; } +}; + + +class LCallNewArray V8_FINAL : public LTemplateInstruction<1, 2, 0> { + public: + LCallNewArray(LOperand* context, LOperand* constructor) { + inputs_[0] = context; + inputs_[1] = constructor; + } + + LOperand* context() { return inputs_[0]; } + LOperand* constructor() { return inputs_[1]; } + + DECLARE_CONCRETE_INSTRUCTION(CallNewArray, "call-new-array") + DECLARE_HYDROGEN_ACCESSOR(CallNewArray) + + virtual void PrintDataTo(StringStream* stream) V8_OVERRIDE; + + int arity() const { return hydrogen()->argument_count() - 1; } +}; + + +class LCallRuntime V8_FINAL : public LTemplateInstruction<1, 1, 0> { + public: + explicit LCallRuntime(LOperand* context) { + inputs_[0] = context; + } + + LOperand* context() { return inputs_[0]; } + + DECLARE_CONCRETE_INSTRUCTION(CallRuntime, "call-runtime") + DECLARE_HYDROGEN_ACCESSOR(CallRuntime) + + virtual bool ClobbersDoubleRegisters(Isolate* isolate) const V8_OVERRIDE { + return save_doubles() == kDontSaveFPRegs; + } + + const Runtime::Function* function() const { return hydrogen()->function(); } + int arity() const { return hydrogen()->argument_count(); } + SaveFPRegsMode save_doubles() const { return hydrogen()->save_doubles(); } +}; + + +class LInteger32ToDouble V8_FINAL : public LTemplateInstruction<1, 1, 0> { + public: + explicit LInteger32ToDouble(LOperand* value) { + inputs_[0] = value; + } + + LOperand* value() { return inputs_[0]; } + + DECLARE_CONCRETE_INSTRUCTION(Integer32ToDouble, "int32-to-double") +}; + + +class LUint32ToDouble V8_FINAL : public LTemplateInstruction<1, 1, 0> { + public: + explicit LUint32ToDouble(LOperand* value) { + inputs_[0] = value; + } + + LOperand* value() { return inputs_[0]; } + + DECLARE_CONCRETE_INSTRUCTION(Uint32ToDouble, 
"uint32-to-double") +}; + + +class LNumberTagU V8_FINAL : public LTemplateInstruction<1, 1, 2> { + public: + LNumberTagU(LOperand* value, LOperand* temp1, LOperand* temp2) { + inputs_[0] = value; + temps_[0] = temp1; + temps_[1] = temp2; + } + + LOperand* value() { return inputs_[0]; } + LOperand* temp1() { return temps_[0]; } + LOperand* temp2() { return temps_[1]; } + + DECLARE_CONCRETE_INSTRUCTION(NumberTagU, "number-tag-u") +}; + + +class LNumberTagD V8_FINAL : public LTemplateInstruction<1, 1, 2> { + public: + LNumberTagD(LOperand* value, LOperand* temp, LOperand* temp2) { + inputs_[0] = value; + temps_[0] = temp; + temps_[1] = temp2; + } + + LOperand* value() { return inputs_[0]; } + LOperand* temp() { return temps_[0]; } + LOperand* temp2() { return temps_[1]; } + + DECLARE_CONCRETE_INSTRUCTION(NumberTagD, "number-tag-d") + DECLARE_HYDROGEN_ACCESSOR(Change) +}; + + +class LDoubleToSmi V8_FINAL : public LTemplateInstruction<1, 1, 0> { + public: + explicit LDoubleToSmi(LOperand* value) { + inputs_[0] = value; + } + + LOperand* value() { return inputs_[0]; } + + DECLARE_CONCRETE_INSTRUCTION(DoubleToSmi, "double-to-smi") + DECLARE_HYDROGEN_ACCESSOR(UnaryOperation) + + bool truncating() { return hydrogen()->CanTruncateToInt32(); } +}; + + +// Sometimes truncating conversion from a tagged value to an int32. +class LDoubleToI V8_FINAL : public LTemplateInstruction<1, 1, 0> { + public: + explicit LDoubleToI(LOperand* value) { + inputs_[0] = value; + } + + LOperand* value() { return inputs_[0]; } + + DECLARE_CONCRETE_INSTRUCTION(DoubleToI, "double-to-i") + DECLARE_HYDROGEN_ACCESSOR(UnaryOperation) + + bool truncating() { return hydrogen()->CanTruncateToInt32(); } +}; + + +// Truncating conversion from a tagged value to an int32. +class LTaggedToI V8_FINAL : public LTemplateInstruction<1, 1, 2> { + public: + LTaggedToI(LOperand* value, + LOperand* temp, + LOperand* temp2) { + inputs_[0] = value; + temps_[0] = temp; + temps_[1] = temp2; + } + + LOperand* value() { return inputs_[0]; } + LOperand* temp() { return temps_[0]; } + LOperand* temp2() { return temps_[1]; } + + DECLARE_CONCRETE_INSTRUCTION(TaggedToI, "tagged-to-i") + DECLARE_HYDROGEN_ACCESSOR(Change) + + bool truncating() { return hydrogen()->CanTruncateToInt32(); } +}; + + +class LSmiTag V8_FINAL : public LTemplateInstruction<1, 1, 0> { + public: + explicit LSmiTag(LOperand* value) { + inputs_[0] = value; + } + + LOperand* value() { return inputs_[0]; } + + DECLARE_CONCRETE_INSTRUCTION(SmiTag, "smi-tag") + DECLARE_HYDROGEN_ACCESSOR(Change) +}; + + +class LNumberUntagD V8_FINAL : public LTemplateInstruction<1, 1, 0> { + public: + explicit LNumberUntagD(LOperand* value) { + inputs_[0] = value; + } + + LOperand* value() { return inputs_[0]; } + + DECLARE_CONCRETE_INSTRUCTION(NumberUntagD, "double-untag") + DECLARE_HYDROGEN_ACCESSOR(Change) +}; + + +class LSmiUntag V8_FINAL : public LTemplateInstruction<1, 1, 0> { + public: + LSmiUntag(LOperand* value, bool needs_check) + : needs_check_(needs_check) { + inputs_[0] = value; + } + + LOperand* value() { return inputs_[0]; } + bool needs_check() const { return needs_check_; } + + DECLARE_CONCRETE_INSTRUCTION(SmiUntag, "smi-untag") + + private: + bool needs_check_; +}; + + +class LStoreNamedField V8_FINAL : public LTemplateInstruction<0, 2, 1> { + public: + LStoreNamedField(LOperand* object, LOperand* value, LOperand* temp) { + inputs_[0] = object; + inputs_[1] = value; + temps_[0] = temp; + } + + LOperand* object() { return inputs_[0]; } + LOperand* value() { return inputs_[1]; } + LOperand* 
temp() { return temps_[0]; } + + DECLARE_CONCRETE_INSTRUCTION(StoreNamedField, "store-named-field") + DECLARE_HYDROGEN_ACCESSOR(StoreNamedField) + + virtual void PrintDataTo(StringStream* stream) V8_OVERRIDE; + + Representation representation() const { + return hydrogen()->field_representation(); + } +}; + + +class LStoreNamedGeneric V8_FINAL : public LTemplateInstruction<0, 3, 0> { + public: + LStoreNamedGeneric(LOperand* context, LOperand* object, LOperand* value) { + inputs_[0] = context; + inputs_[1] = object; + inputs_[2] = value; + } + + LOperand* context() { return inputs_[0]; } + LOperand* object() { return inputs_[1]; } + LOperand* value() { return inputs_[2]; } + + DECLARE_CONCRETE_INSTRUCTION(StoreNamedGeneric, "store-named-generic") + DECLARE_HYDROGEN_ACCESSOR(StoreNamedGeneric) + + virtual void PrintDataTo(StringStream* stream) V8_OVERRIDE; + + Handle<Object> name() const { return hydrogen()->name(); } + StrictMode strict_mode() { return hydrogen()->strict_mode(); } +}; + + +class LStoreKeyed V8_FINAL : public LTemplateInstruction<0, 3, 0> { + public: + LStoreKeyed(LOperand* object, LOperand* key, LOperand* value) { + inputs_[0] = object; + inputs_[1] = key; + inputs_[2] = value; + } + + bool is_external() const { return hydrogen()->is_external(); } + bool is_fixed_typed_array() const { + return hydrogen()->is_fixed_typed_array(); + } + bool is_typed_elements() const { + return is_external() || is_fixed_typed_array(); + } + LOperand* elements() { return inputs_[0]; } + LOperand* key() { return inputs_[1]; } + LOperand* value() { return inputs_[2]; } + ElementsKind elements_kind() const { + return hydrogen()->elements_kind(); + } + + DECLARE_CONCRETE_INSTRUCTION(StoreKeyed, "store-keyed") + DECLARE_HYDROGEN_ACCESSOR(StoreKeyed) + + virtual void PrintDataTo(StringStream* stream) V8_OVERRIDE; + bool NeedsCanonicalization() { return hydrogen()->NeedsCanonicalization(); } + uint32_t base_offset() const { return hydrogen()->base_offset(); } +}; + + +class LStoreKeyedGeneric V8_FINAL : public LTemplateInstruction<0, 4, 0> { + public: + LStoreKeyedGeneric(LOperand* context, + LOperand* obj, + LOperand* key, + LOperand* value) { + inputs_[0] = context; + inputs_[1] = obj; + inputs_[2] = key; + inputs_[3] = value; + } + + LOperand* context() { return inputs_[0]; } + LOperand* object() { return inputs_[1]; } + LOperand* key() { return inputs_[2]; } + LOperand* value() { return inputs_[3]; } + + DECLARE_CONCRETE_INSTRUCTION(StoreKeyedGeneric, "store-keyed-generic") + DECLARE_HYDROGEN_ACCESSOR(StoreKeyedGeneric) + + virtual void PrintDataTo(StringStream* stream) V8_OVERRIDE; + + StrictMode strict_mode() { return hydrogen()->strict_mode(); } +}; + + +class LTransitionElementsKind V8_FINAL : public LTemplateInstruction<0, 2, 1> { + public: + LTransitionElementsKind(LOperand* object, + LOperand* context, + LOperand* new_map_temp) { + inputs_[0] = object; + inputs_[1] = context; + temps_[0] = new_map_temp; + } + + LOperand* context() { return inputs_[1]; } + LOperand* object() { return inputs_[0]; } + LOperand* new_map_temp() { return temps_[0]; } + + DECLARE_CONCRETE_INSTRUCTION(TransitionElementsKind, + "transition-elements-kind") + DECLARE_HYDROGEN_ACCESSOR(TransitionElementsKind) + + virtual void PrintDataTo(StringStream* stream) V8_OVERRIDE; + + Handle<Map> original_map() { return hydrogen()->original_map().handle(); } + Handle<Map> transitioned_map() { + return hydrogen()->transitioned_map().handle(); + } + ElementsKind from_kind() { return hydrogen()->from_kind(); } + ElementsKind 
to_kind() { return hydrogen()->to_kind(); } +}; + + +class LTrapAllocationMemento V8_FINAL : public LTemplateInstruction<0, 1, 1> { + public: + LTrapAllocationMemento(LOperand* object, + LOperand* temp) { + inputs_[0] = object; + temps_[0] = temp; + } + + LOperand* object() { return inputs_[0]; } + LOperand* temp() { return temps_[0]; } + + DECLARE_CONCRETE_INSTRUCTION(TrapAllocationMemento, + "trap-allocation-memento") +}; + + +class LStringAdd V8_FINAL : public LTemplateInstruction<1, 3, 0> { + public: + LStringAdd(LOperand* context, LOperand* left, LOperand* right) { + inputs_[0] = context; + inputs_[1] = left; + inputs_[2] = right; + } + + LOperand* context() { return inputs_[0]; } + LOperand* left() { return inputs_[1]; } + LOperand* right() { return inputs_[2]; } + + DECLARE_CONCRETE_INSTRUCTION(StringAdd, "string-add") + DECLARE_HYDROGEN_ACCESSOR(StringAdd) +}; + + + +class LStringCharCodeAt V8_FINAL : public LTemplateInstruction<1, 3, 0> { + public: + LStringCharCodeAt(LOperand* context, LOperand* string, LOperand* index) { + inputs_[0] = context; + inputs_[1] = string; + inputs_[2] = index; + } + + LOperand* context() { return inputs_[0]; } + LOperand* string() { return inputs_[1]; } + LOperand* index() { return inputs_[2]; } + + DECLARE_CONCRETE_INSTRUCTION(StringCharCodeAt, "string-char-code-at") + DECLARE_HYDROGEN_ACCESSOR(StringCharCodeAt) +}; + + +class LStringCharFromCode V8_FINAL : public LTemplateInstruction<1, 2, 0> { + public: + explicit LStringCharFromCode(LOperand* context, LOperand* char_code) { + inputs_[0] = context; + inputs_[1] = char_code; + } + + LOperand* context() { return inputs_[0]; } + LOperand* char_code() { return inputs_[1]; } + + DECLARE_CONCRETE_INSTRUCTION(StringCharFromCode, "string-char-from-code") + DECLARE_HYDROGEN_ACCESSOR(StringCharFromCode) +}; + + +class LCheckValue V8_FINAL : public LTemplateInstruction<0, 1, 0> { + public: + explicit LCheckValue(LOperand* value) { + inputs_[0] = value; + } + + LOperand* value() { return inputs_[0]; } + + DECLARE_CONCRETE_INSTRUCTION(CheckValue, "check-value") + DECLARE_HYDROGEN_ACCESSOR(CheckValue) +}; + + +class LCheckInstanceType V8_FINAL : public LTemplateInstruction<0, 1, 0> { + public: + explicit LCheckInstanceType(LOperand* value) { + inputs_[0] = value; + } + + LOperand* value() { return inputs_[0]; } + + DECLARE_CONCRETE_INSTRUCTION(CheckInstanceType, "check-instance-type") + DECLARE_HYDROGEN_ACCESSOR(CheckInstanceType) +}; + + +class LCheckMaps V8_FINAL : public LTemplateInstruction<0, 1, 0> { + public: + explicit LCheckMaps(LOperand* value = NULL) { + inputs_[0] = value; + } + + LOperand* value() { return inputs_[0]; } + + DECLARE_CONCRETE_INSTRUCTION(CheckMaps, "check-maps") + DECLARE_HYDROGEN_ACCESSOR(CheckMaps) +}; + + +class LCheckSmi V8_FINAL : public LTemplateInstruction<1, 1, 0> { + public: + explicit LCheckSmi(LOperand* value) { + inputs_[0] = value; + } + + LOperand* value() { return inputs_[0]; } + + DECLARE_CONCRETE_INSTRUCTION(CheckSmi, "check-smi") +}; + + +class LCheckNonSmi V8_FINAL : public LTemplateInstruction<0, 1, 0> { + public: + explicit LCheckNonSmi(LOperand* value) { + inputs_[0] = value; + } + + LOperand* value() { return inputs_[0]; } + + DECLARE_CONCRETE_INSTRUCTION(CheckNonSmi, "check-non-smi") + DECLARE_HYDROGEN_ACCESSOR(CheckHeapObject) +}; + + +class LClampDToUint8 V8_FINAL : public LTemplateInstruction<1, 1, 1> { + public: + LClampDToUint8(LOperand* unclamped, LOperand* temp) { + inputs_[0] = unclamped; + temps_[0] = temp; + } + + LOperand* unclamped() { return 
inputs_[0]; } + LOperand* temp() { return temps_[0]; } + + DECLARE_CONCRETE_INSTRUCTION(ClampDToUint8, "clamp-d-to-uint8") +}; + + +class LClampIToUint8 V8_FINAL : public LTemplateInstruction<1, 1, 0> { + public: + explicit LClampIToUint8(LOperand* unclamped) { + inputs_[0] = unclamped; + } + + LOperand* unclamped() { return inputs_[0]; } + + DECLARE_CONCRETE_INSTRUCTION(ClampIToUint8, "clamp-i-to-uint8") +}; + + +class LClampTToUint8 V8_FINAL : public LTemplateInstruction<1, 1, 1> { + public: + LClampTToUint8(LOperand* unclamped, LOperand* temp) { + inputs_[0] = unclamped; + temps_[0] = temp; + } + + LOperand* unclamped() { return inputs_[0]; } + LOperand* temp() { return temps_[0]; } + + DECLARE_CONCRETE_INSTRUCTION(ClampTToUint8, "clamp-t-to-uint8") +}; + + +class LDoubleBits V8_FINAL : public LTemplateInstruction<1, 1, 0> { + public: + explicit LDoubleBits(LOperand* value) { + inputs_[0] = value; + } + + LOperand* value() { return inputs_[0]; } + + DECLARE_CONCRETE_INSTRUCTION(DoubleBits, "double-bits") + DECLARE_HYDROGEN_ACCESSOR(DoubleBits) +}; + + +class LConstructDouble V8_FINAL : public LTemplateInstruction<1, 2, 0> { + public: + LConstructDouble(LOperand* hi, LOperand* lo) { + inputs_[0] = hi; + inputs_[1] = lo; + } + + LOperand* hi() { return inputs_[0]; } + LOperand* lo() { return inputs_[1]; } + + DECLARE_CONCRETE_INSTRUCTION(ConstructDouble, "construct-double") +}; + + +class LAllocate V8_FINAL : public LTemplateInstruction<1, 2, 2> { + public: + LAllocate(LOperand* context, + LOperand* size, + LOperand* temp1, + LOperand* temp2) { + inputs_[0] = context; + inputs_[1] = size; + temps_[0] = temp1; + temps_[1] = temp2; + } + + LOperand* context() { return inputs_[0]; } + LOperand* size() { return inputs_[1]; } + LOperand* temp1() { return temps_[0]; } + LOperand* temp2() { return temps_[1]; } + + DECLARE_CONCRETE_INSTRUCTION(Allocate, "allocate") + DECLARE_HYDROGEN_ACCESSOR(Allocate) +}; + + +class LRegExpLiteral V8_FINAL : public LTemplateInstruction<1, 1, 0> { + public: + explicit LRegExpLiteral(LOperand* context) { + inputs_[0] = context; + } + + LOperand* context() { return inputs_[0]; } + + DECLARE_CONCRETE_INSTRUCTION(RegExpLiteral, "regexp-literal") + DECLARE_HYDROGEN_ACCESSOR(RegExpLiteral) +}; + + +class LFunctionLiteral V8_FINAL : public LTemplateInstruction<1, 1, 0> { + public: + explicit LFunctionLiteral(LOperand* context) { + inputs_[0] = context; + } + + LOperand* context() { return inputs_[0]; } + + DECLARE_CONCRETE_INSTRUCTION(FunctionLiteral, "function-literal") + DECLARE_HYDROGEN_ACCESSOR(FunctionLiteral) +}; + + +class LToFastProperties V8_FINAL : public LTemplateInstruction<1, 1, 0> { + public: + explicit LToFastProperties(LOperand* value) { + inputs_[0] = value; + } + + LOperand* value() { return inputs_[0]; } + + DECLARE_CONCRETE_INSTRUCTION(ToFastProperties, "to-fast-properties") + DECLARE_HYDROGEN_ACCESSOR(ToFastProperties) +}; + + +class LTypeof V8_FINAL : public LTemplateInstruction<1, 2, 0> { + public: + LTypeof(LOperand* context, LOperand* value) { + inputs_[0] = context; + inputs_[1] = value; + } + + LOperand* context() { return inputs_[0]; } + LOperand* value() { return inputs_[1]; } + + DECLARE_CONCRETE_INSTRUCTION(Typeof, "typeof") +}; + + +class LTypeofIsAndBranch V8_FINAL : public LControlInstruction<1, 0> { + public: + explicit LTypeofIsAndBranch(LOperand* value) { + inputs_[0] = value; + } + + LOperand* value() { return inputs_[0]; } + + DECLARE_CONCRETE_INSTRUCTION(TypeofIsAndBranch, "typeof-is-and-branch") + 
DECLARE_HYDROGEN_ACCESSOR(TypeofIsAndBranch) + + Handle<String> type_literal() { return hydrogen()->type_literal(); } + + virtual void PrintDataTo(StringStream* stream) V8_OVERRIDE; +}; + + +class LIsConstructCallAndBranch V8_FINAL : public LControlInstruction<0, 1> { + public: + explicit LIsConstructCallAndBranch(LOperand* temp) { + temps_[0] = temp; + } + + LOperand* temp() { return temps_[0]; } + + DECLARE_CONCRETE_INSTRUCTION(IsConstructCallAndBranch, + "is-construct-call-and-branch") +}; + + +class LOsrEntry V8_FINAL : public LTemplateInstruction<0, 0, 0> { + public: + LOsrEntry() {} + + virtual bool HasInterestingComment(LCodeGen* gen) const V8_OVERRIDE { + return false; + } + DECLARE_CONCRETE_INSTRUCTION(OsrEntry, "osr-entry") +}; + + +class LStackCheck V8_FINAL : public LTemplateInstruction<0, 1, 0> { + public: + explicit LStackCheck(LOperand* context) { + inputs_[0] = context; + } + + LOperand* context() { return inputs_[0]; } + + DECLARE_CONCRETE_INSTRUCTION(StackCheck, "stack-check") + DECLARE_HYDROGEN_ACCESSOR(StackCheck) + + Label* done_label() { return &done_label_; } + + private: + Label done_label_; +}; + + +class LForInPrepareMap V8_FINAL : public LTemplateInstruction<1, 2, 0> { + public: + LForInPrepareMap(LOperand* context, LOperand* object) { + inputs_[0] = context; + inputs_[1] = object; + } + + LOperand* context() { return inputs_[0]; } + LOperand* object() { return inputs_[1]; } + + DECLARE_CONCRETE_INSTRUCTION(ForInPrepareMap, "for-in-prepare-map") +}; + + +class LForInCacheArray V8_FINAL : public LTemplateInstruction<1, 1, 0> { + public: + explicit LForInCacheArray(LOperand* map) { + inputs_[0] = map; + } + + LOperand* map() { return inputs_[0]; } + + DECLARE_CONCRETE_INSTRUCTION(ForInCacheArray, "for-in-cache-array") + + int idx() { + return HForInCacheArray::cast(this->hydrogen_value())->idx(); + } +}; + + +class LCheckMapValue V8_FINAL : public LTemplateInstruction<0, 2, 0> { + public: + LCheckMapValue(LOperand* value, LOperand* map) { + inputs_[0] = value; + inputs_[1] = map; + } + + LOperand* value() { return inputs_[0]; } + LOperand* map() { return inputs_[1]; } + + DECLARE_CONCRETE_INSTRUCTION(CheckMapValue, "check-map-value") +}; + + +class LLoadFieldByIndex V8_FINAL : public LTemplateInstruction<1, 2, 0> { + public: + LLoadFieldByIndex(LOperand* object, LOperand* index) { + inputs_[0] = object; + inputs_[1] = index; + } + + LOperand* object() { return inputs_[0]; } + LOperand* index() { return inputs_[1]; } + + DECLARE_CONCRETE_INSTRUCTION(LoadFieldByIndex, "load-field-by-index") +}; + + +class LStoreFrameContext: public LTemplateInstruction<0, 1, 0> { + public: + explicit LStoreFrameContext(LOperand* context) { + inputs_[0] = context; + } + + LOperand* context() { return inputs_[0]; } + + DECLARE_CONCRETE_INSTRUCTION(StoreFrameContext, "store-frame-context") +}; + + +class LAllocateBlockContext: public LTemplateInstruction<1, 2, 0> { + public: + LAllocateBlockContext(LOperand* context, LOperand* function) { + inputs_[0] = context; + inputs_[1] = function; + } + + LOperand* context() { return inputs_[0]; } + LOperand* function() { return inputs_[1]; } + + Handle<ScopeInfo> scope_info() { return hydrogen()->scope_info(); } + + DECLARE_CONCRETE_INSTRUCTION(AllocateBlockContext, "allocate-block-context") + DECLARE_HYDROGEN_ACCESSOR(AllocateBlockContext) +}; + + +class LChunkBuilder; +class LPlatformChunk V8_FINAL : public LChunk { + public: + LPlatformChunk(CompilationInfo* info, HGraph* graph) + : LChunk(info, graph) { } + + int 
GetNextSpillIndex(RegisterKind kind); + LOperand* GetNextSpillSlot(RegisterKind kind); +}; + + +class LChunkBuilder V8_FINAL : public LChunkBuilderBase { + public: + LChunkBuilder(CompilationInfo* info, HGraph* graph, LAllocator* allocator) + : LChunkBuilderBase(graph->zone()), + chunk_(NULL), + info_(info), + graph_(graph), + status_(UNUSED), + current_instruction_(NULL), + current_block_(NULL), + next_block_(NULL), + allocator_(allocator) { } + + Isolate* isolate() const { return graph_->isolate(); } + + // Build the sequence for the graph. + LPlatformChunk* Build(); + + // Declare methods that deal with the individual node types. +#define DECLARE_DO(type) LInstruction* Do##type(H##type* node); + HYDROGEN_CONCRETE_INSTRUCTION_LIST(DECLARE_DO) +#undef DECLARE_DO + + LInstruction* DoMultiplyAdd(HMul* mul, HValue* addend); + + static bool HasMagicNumberForDivisor(int32_t divisor); + + LInstruction* DoMathFloor(HUnaryMathOperation* instr); + LInstruction* DoMathRound(HUnaryMathOperation* instr); + LInstruction* DoMathFround(HUnaryMathOperation* instr); + LInstruction* DoMathAbs(HUnaryMathOperation* instr); + LInstruction* DoMathLog(HUnaryMathOperation* instr); + LInstruction* DoMathExp(HUnaryMathOperation* instr); + LInstruction* DoMathSqrt(HUnaryMathOperation* instr); + LInstruction* DoMathPowHalf(HUnaryMathOperation* instr); + LInstruction* DoMathClz32(HUnaryMathOperation* instr); + LInstruction* DoDivByPowerOf2I(HDiv* instr); + LInstruction* DoDivByConstI(HDiv* instr); + LInstruction* DoDivI(HDiv* instr); + LInstruction* DoModByPowerOf2I(HMod* instr); + LInstruction* DoModByConstI(HMod* instr); + LInstruction* DoModI(HMod* instr); + LInstruction* DoFlooringDivByPowerOf2I(HMathFloorOfDiv* instr); + LInstruction* DoFlooringDivByConstI(HMathFloorOfDiv* instr); + LInstruction* DoFlooringDivI(HMathFloorOfDiv* instr); + + private: + enum Status { + UNUSED, + BUILDING, + DONE, + ABORTED + }; + + LPlatformChunk* chunk() const { return chunk_; } + CompilationInfo* info() const { return info_; } + HGraph* graph() const { return graph_; } + + bool is_unused() const { return status_ == UNUSED; } + bool is_building() const { return status_ == BUILDING; } + bool is_done() const { return status_ == DONE; } + bool is_aborted() const { return status_ == ABORTED; } + + void Abort(BailoutReason reason); + + // Methods for getting operands for Use / Define / Temp. + LUnallocated* ToUnallocated(Register reg); + LUnallocated* ToUnallocated(DoubleRegister reg); + + // Methods for setting up define-use relationships. + MUST_USE_RESULT LOperand* Use(HValue* value, LUnallocated* operand); + MUST_USE_RESULT LOperand* UseFixed(HValue* value, Register fixed_register); + MUST_USE_RESULT LOperand* UseFixedDouble(HValue* value, + DoubleRegister fixed_register); + + // A value that is guaranteed to be allocated to a register. + // An operand created by UseRegister is guaranteed to be live until the end + // of the instruction. This means that the register allocator will not reuse + // its register for any other operand inside the instruction. + // An operand created by UseRegisterAtStart is guaranteed to be live only at + // instruction start. The register allocator is free to assign the same + // register to some other operand used inside the instruction (i.e. a + // temporary or the output). + MUST_USE_RESULT LOperand* UseRegister(HValue* value); + MUST_USE_RESULT LOperand* UseRegisterAtStart(HValue* value); + + // An input operand in a register that may be trashed.
+ MUST_USE_RESULT LOperand* UseTempRegister(HValue* value); + + // An input operand in a register or stack slot. + MUST_USE_RESULT LOperand* Use(HValue* value); + MUST_USE_RESULT LOperand* UseAtStart(HValue* value); + + // An input operand in a register, stack slot or a constant operand. + MUST_USE_RESULT LOperand* UseOrConstant(HValue* value); + MUST_USE_RESULT LOperand* UseOrConstantAtStart(HValue* value); + + // An input operand in a register or a constant operand. + MUST_USE_RESULT LOperand* UseRegisterOrConstant(HValue* value); + MUST_USE_RESULT LOperand* UseRegisterOrConstantAtStart(HValue* value); + + // An input operand in a constant operand. + MUST_USE_RESULT LOperand* UseConstant(HValue* value); + + // An input operand in register, stack slot or a constant operand. + // Will not be moved to a register even if one is freely available. + virtual MUST_USE_RESULT LOperand* UseAny(HValue* value) V8_OVERRIDE; + + // Temporary operand that must be in a register. + MUST_USE_RESULT LUnallocated* TempRegister(); + MUST_USE_RESULT LUnallocated* TempDoubleRegister(); + MUST_USE_RESULT LOperand* FixedTemp(Register reg); + MUST_USE_RESULT LOperand* FixedTemp(DoubleRegister reg); + + // Methods for setting up define-use relationships. + // Return the same instruction that they are passed. + LInstruction* Define(LTemplateResultInstruction<1>* instr, + LUnallocated* result); + LInstruction* DefineAsRegister(LTemplateResultInstruction<1>* instr); + LInstruction* DefineAsSpilled(LTemplateResultInstruction<1>* instr, + int index); + LInstruction* DefineSameAsFirst(LTemplateResultInstruction<1>* instr); + LInstruction* DefineFixed(LTemplateResultInstruction<1>* instr, + Register reg); + LInstruction* DefineFixedDouble(LTemplateResultInstruction<1>* instr, + DoubleRegister reg); + LInstruction* AssignEnvironment(LInstruction* instr); + LInstruction* AssignPointerMap(LInstruction* instr); + + enum CanDeoptimize { CAN_DEOPTIMIZE_EAGERLY, CANNOT_DEOPTIMIZE_EAGERLY }; + + // By default we assume that instruction sequences generated for calls + // cannot deoptimize eagerly and we do not attach environment to this + // instruction. + LInstruction* MarkAsCall( + LInstruction* instr, + HInstruction* hinstr, + CanDeoptimize can_deoptimize = CANNOT_DEOPTIMIZE_EAGERLY); + + void VisitInstruction(HInstruction* current); + void AddInstruction(LInstruction* instr, HInstruction* current); + + void DoBasicBlock(HBasicBlock* block, HBasicBlock* next_block); + LInstruction* DoBit(Token::Value op, HBitwiseBinaryOperation* instr); + LInstruction* DoShift(Token::Value op, HBitwiseBinaryOperation* instr); + LInstruction* DoArithmeticD(Token::Value op, + HArithmeticBinaryOperation* instr); + LInstruction* DoArithmeticT(Token::Value op, + HBinaryOperation* instr); + + LPlatformChunk* chunk_; + CompilationInfo* info_; + HGraph* const graph_; + Status status_; + HInstruction* current_instruction_; + HBasicBlock* current_block_; + HBasicBlock* next_block_; + LAllocator* allocator_; + + DISALLOW_COPY_AND_ASSIGN(LChunkBuilder); +}; + +#undef DECLARE_HYDROGEN_ACCESSOR +#undef DECLARE_CONCRETE_INSTRUCTION + +} } // namespace v8::internal + +#endif // V8_MIPS_LITHIUM_MIPS_H_ diff --git a/deps/v8/src/mips64/macro-assembler-mips64.cc b/deps/v8/src/mips64/macro-assembler-mips64.cc new file mode 100644 index 00000000000..87124dca146 --- /dev/null +++ b/deps/v8/src/mips64/macro-assembler-mips64.cc @@ -0,0 +1,6111 @@ +// Copyright 2012 the V8 project authors. All rights reserved. 
+// Use of this source code is governed by a BSD-style license that can be +// found in the LICENSE file. + +#include <limits.h> // For LONG_MIN, LONG_MAX. + +#include "src/v8.h" + +#if V8_TARGET_ARCH_MIPS64 + +#include "src/bootstrapper.h" +#include "src/codegen.h" +#include "src/cpu-profiler.h" +#include "src/debug.h" +#include "src/isolate-inl.h" +#include "src/runtime.h" + +namespace v8 { +namespace internal { + +MacroAssembler::MacroAssembler(Isolate* arg_isolate, void* buffer, int size) + : Assembler(arg_isolate, buffer, size), + generating_stub_(false), + has_frame_(false) { + if (isolate() != NULL) { + code_object_ = Handle<Object>(isolate()->heap()->undefined_value(), + isolate()); + } +} + + +void MacroAssembler::Load(Register dst, + const MemOperand& src, + Representation r) { + DCHECK(!r.IsDouble()); + if (r.IsInteger8()) { + lb(dst, src); + } else if (r.IsUInteger8()) { + lbu(dst, src); + } else if (r.IsInteger16()) { + lh(dst, src); + } else if (r.IsUInteger16()) { + lhu(dst, src); + } else if (r.IsInteger32()) { + lw(dst, src); + } else { + ld(dst, src); + } +} + + +void MacroAssembler::Store(Register src, + const MemOperand& dst, + Representation r) { + DCHECK(!r.IsDouble()); + if (r.IsInteger8() || r.IsUInteger8()) { + sb(src, dst); + } else if (r.IsInteger16() || r.IsUInteger16()) { + sh(src, dst); + } else if (r.IsInteger32()) { + sw(src, dst); + } else { + if (r.IsHeapObject()) { + AssertNotSmi(src); + } else if (r.IsSmi()) { + AssertSmi(src); + } + sd(src, dst); + } +} + + +void MacroAssembler::LoadRoot(Register destination, + Heap::RootListIndex index) { + ld(destination, MemOperand(s6, index << kPointerSizeLog2)); +} + + +void MacroAssembler::LoadRoot(Register destination, + Heap::RootListIndex index, + Condition cond, + Register src1, const Operand& src2) { + Branch(2, NegateCondition(cond), src1, src2); + ld(destination, MemOperand(s6, index << kPointerSizeLog2)); +} + + +void MacroAssembler::StoreRoot(Register source, + Heap::RootListIndex index) { + sd(source, MemOperand(s6, index << kPointerSizeLog2)); +} + + +void MacroAssembler::StoreRoot(Register source, + Heap::RootListIndex index, + Condition cond, + Register src1, const Operand& src2) { + Branch(2, NegateCondition(cond), src1, src2); + sd(source, MemOperand(s6, index << kPointerSizeLog2)); +} + + +// Push and pop all registers that can hold pointers. +void MacroAssembler::PushSafepointRegisters() { + // Safepoints expect a block of kNumSafepointRegisters values on the + // stack, so adjust the stack for unsaved registers. + const int num_unsaved = kNumSafepointRegisters - kNumSafepointSavedRegisters; + DCHECK(num_unsaved >= 0); + if (num_unsaved > 0) { + Dsubu(sp, sp, Operand(num_unsaved * kPointerSize)); + } + MultiPush(kSafepointSavedRegisters); +} + + +void MacroAssembler::PopSafepointRegisters() { + const int num_unsaved = kNumSafepointRegisters - kNumSafepointSavedRegisters; + MultiPop(kSafepointSavedRegisters); + if (num_unsaved > 0) { + Daddu(sp, sp, Operand(num_unsaved * kPointerSize)); + } +} + + +void MacroAssembler::StoreToSafepointRegisterSlot(Register src, Register dst) { + sd(src, SafepointRegisterSlot(dst)); +} + + +void MacroAssembler::LoadFromSafepointRegisterSlot(Register dst, Register src) { + ld(dst, SafepointRegisterSlot(src)); +} + + +int MacroAssembler::SafepointRegisterStackIndex(int reg_code) { + // The registers are pushed starting with the highest encoding, + // which means that lowest encodings are closest to the stack pointer. 
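+  // For example (illustrative register set): if {a0, a1, ra} were saved,
+  // MultiPush stores ra (the highest code) first at the highest address, so
+  // a0 ends up at MemOperand(sp, 0), a1 at MemOperand(sp, kPointerSize) and
+  // ra at MemOperand(sp, 2 * kPointerSize).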
+  return kSafepointRegisterStackIndexMap[reg_code];
+}
+
+
+MemOperand MacroAssembler::SafepointRegisterSlot(Register reg) {
+  return MemOperand(sp, SafepointRegisterStackIndex(reg.code()) * kPointerSize);
+}
+
+
+MemOperand MacroAssembler::SafepointRegistersAndDoublesSlot(Register reg) {
+  UNIMPLEMENTED_MIPS();
+  // General purpose registers are pushed last on the stack.
+  int doubles_size = FPURegister::NumAllocatableRegisters() * kDoubleSize;
+  int register_offset = SafepointRegisterStackIndex(reg.code()) * kPointerSize;
+  return MemOperand(sp, doubles_size + register_offset);
+}
+
+
+void MacroAssembler::InNewSpace(Register object,
+                                Register scratch,
+                                Condition cc,
+                                Label* branch) {
+  DCHECK(cc == eq || cc == ne);
+  And(scratch, object, Operand(ExternalReference::new_space_mask(isolate())));
+  Branch(branch, cc, scratch,
+         Operand(ExternalReference::new_space_start(isolate())));
+}
+
+
+void MacroAssembler::RecordWriteField(
+    Register object,
+    int offset,
+    Register value,
+    Register dst,
+    RAStatus ra_status,
+    SaveFPRegsMode save_fp,
+    RememberedSetAction remembered_set_action,
+    SmiCheck smi_check,
+    PointersToHereCheck pointers_to_here_check_for_value) {
+  DCHECK(!AreAliased(value, dst, t8, object));
+  // First, check if a write barrier is even needed. The tests below
+  // catch stores of Smis.
+  Label done;
+
+  // Skip barrier if writing a smi.
+  if (smi_check == INLINE_SMI_CHECK) {
+    JumpIfSmi(value, &done);
+  }
+
+  // Although the object register is tagged, the offset is relative to the
+  // start of the object, so the offset must be a multiple of kPointerSize.
+  DCHECK(IsAligned(offset, kPointerSize));
+
+  Daddu(dst, object, Operand(offset - kHeapObjectTag));
+  if (emit_debug_code()) {
+    Label ok;
+    And(t8, dst, Operand((1 << kPointerSizeLog2) - 1));
+    Branch(&ok, eq, t8, Operand(zero_reg));
+    stop("Unaligned cell in write barrier");
+    bind(&ok);
+  }
+
+  RecordWrite(object,
+              dst,
+              value,
+              ra_status,
+              save_fp,
+              remembered_set_action,
+              OMIT_SMI_CHECK,
+              pointers_to_here_check_for_value);
+
+  bind(&done);
+
+  // Clobber clobbered input registers when running with the debug-code flag
+  // turned on to provoke errors.
+  if (emit_debug_code()) {
+    li(value, Operand(BitCast<int64_t>(kZapValue + 4)));
+    li(dst, Operand(BitCast<int64_t>(kZapValue + 8)));
+  }
+}
+
+
+// Will clobber 4 registers: object, map, dst, at. The
+// register 'object' contains a heap object pointer.
+void MacroAssembler::RecordWriteForMap(Register object,
+                                       Register map,
+                                       Register dst,
+                                       RAStatus ra_status,
+                                       SaveFPRegsMode fp_mode) {
+  if (emit_debug_code()) {
+    DCHECK(!dst.is(at));
+    ld(dst, FieldMemOperand(map, HeapObject::kMapOffset));
+    Check(eq,
+          kWrongAddressOrValuePassedToRecordWrite,
+          dst,
+          Operand(isolate()->factory()->meta_map()));
+  }
+
+  if (!FLAG_incremental_marking) {
+    return;
+  }
+
+  if (emit_debug_code()) {
+    ld(at, FieldMemOperand(object, HeapObject::kMapOffset));
+    Check(eq,
+          kWrongAddressOrValuePassedToRecordWrite,
+          map,
+          Operand(at));
+  }
+
+  Label done;
+
+  // A single check of the map page's interesting flag suffices, since it is
+  // only set during incremental collection, and then it's also guaranteed
+  // that the from object's page's interesting flag is also set. This
+  // optimization relies on the fact that maps can never be in new space.
+  CheckPageFlag(map,
+                map,  // Used as scratch.
+ MemoryChunk::kPointersToHereAreInterestingMask, + eq, + &done); + + Daddu(dst, object, Operand(HeapObject::kMapOffset - kHeapObjectTag)); + if (emit_debug_code()) { + Label ok; + And(at, dst, Operand((1 << kPointerSizeLog2) - 1)); + Branch(&ok, eq, at, Operand(zero_reg)); + stop("Unaligned cell in write barrier"); + bind(&ok); + } + + // Record the actual write. + if (ra_status == kRAHasNotBeenSaved) { + push(ra); + } + RecordWriteStub stub(isolate(), object, map, dst, OMIT_REMEMBERED_SET, + fp_mode); + CallStub(&stub); + if (ra_status == kRAHasNotBeenSaved) { + pop(ra); + } + + bind(&done); + + // Count number of write barriers in generated code. + isolate()->counters()->write_barriers_static()->Increment(); + IncrementCounter(isolate()->counters()->write_barriers_dynamic(), 1, at, dst); + + // Clobber clobbered registers when running with the debug-code flag + // turned on to provoke errors. + if (emit_debug_code()) { + li(dst, Operand(BitCast<int64_t>(kZapValue + 12))); + li(map, Operand(BitCast<int64_t>(kZapValue + 16))); + } +} + + +// Will clobber 4 registers: object, address, scratch, ip. The +// register 'object' contains a heap object pointer. The heap object +// tag is shifted away. +void MacroAssembler::RecordWrite( + Register object, + Register address, + Register value, + RAStatus ra_status, + SaveFPRegsMode fp_mode, + RememberedSetAction remembered_set_action, + SmiCheck smi_check, + PointersToHereCheck pointers_to_here_check_for_value) { + DCHECK(!AreAliased(object, address, value, t8)); + DCHECK(!AreAliased(object, address, value, t9)); + + if (emit_debug_code()) { + ld(at, MemOperand(address)); + Assert( + eq, kWrongAddressOrValuePassedToRecordWrite, at, Operand(value)); + } + + if (remembered_set_action == OMIT_REMEMBERED_SET && + !FLAG_incremental_marking) { + return; + } + + // First, check if a write barrier is even needed. The tests below + // catch stores of smis and stores into the young generation. + Label done; + + if (smi_check == INLINE_SMI_CHECK) { + DCHECK_EQ(0, kSmiTag); + JumpIfSmi(value, &done); + } + + if (pointers_to_here_check_for_value != kPointersToHereAreAlwaysInteresting) { + CheckPageFlag(value, + value, // Used as scratch. + MemoryChunk::kPointersToHereAreInterestingMask, + eq, + &done); + } + CheckPageFlag(object, + value, // Used as scratch. + MemoryChunk::kPointersFromHereAreInterestingMask, + eq, + &done); + + // Record the actual write. + if (ra_status == kRAHasNotBeenSaved) { + push(ra); + } + RecordWriteStub stub(isolate(), object, value, address, remembered_set_action, + fp_mode); + CallStub(&stub); + if (ra_status == kRAHasNotBeenSaved) { + pop(ra); + } + + bind(&done); + + // Count number of write barriers in generated code. + isolate()->counters()->write_barriers_static()->Increment(); + IncrementCounter(isolate()->counters()->write_barriers_dynamic(), 1, at, + value); + + // Clobber clobbered registers when running with the debug-code flag + // turned on to provoke errors. + if (emit_debug_code()) { + li(address, Operand(BitCast<int64_t>(kZapValue + 12))); + li(value, Operand(BitCast<int64_t>(kZapValue + 16))); + } +} + + +void MacroAssembler::RememberedSetHelper(Register object, // For debug tests. + Register address, + Register scratch, + SaveFPRegsMode fp_mode, + RememberedSetFinalAction and_then) { + Label done; + if (emit_debug_code()) { + Label ok; + JumpIfNotInNewSpace(object, scratch, &ok); + stop("Remembered set pointer is in new space"); + bind(&ok); + } + // Load store buffer top. 
+ ExternalReference store_buffer = + ExternalReference::store_buffer_top(isolate()); + li(t8, Operand(store_buffer)); + ld(scratch, MemOperand(t8)); + // Store pointer to buffer and increment buffer top. + sd(address, MemOperand(scratch)); + Daddu(scratch, scratch, kPointerSize); + // Write back new top of buffer. + sd(scratch, MemOperand(t8)); + // Call stub on end of buffer. + // Check for end of buffer. + And(t8, scratch, Operand(StoreBuffer::kStoreBufferOverflowBit)); + DCHECK(!scratch.is(t8)); + if (and_then == kFallThroughAtEnd) { + Branch(&done, eq, t8, Operand(zero_reg)); + } else { + DCHECK(and_then == kReturnAtEnd); + Ret(eq, t8, Operand(zero_reg)); + } + push(ra); + StoreBufferOverflowStub store_buffer_overflow = + StoreBufferOverflowStub(isolate(), fp_mode); + CallStub(&store_buffer_overflow); + pop(ra); + bind(&done); + if (and_then == kReturnAtEnd) { + Ret(); + } +} + + +// ----------------------------------------------------------------------------- +// Allocation support. + + +void MacroAssembler::CheckAccessGlobalProxy(Register holder_reg, + Register scratch, + Label* miss) { + Label same_contexts; + + DCHECK(!holder_reg.is(scratch)); + DCHECK(!holder_reg.is(at)); + DCHECK(!scratch.is(at)); + + // Load current lexical context from the stack frame. + ld(scratch, MemOperand(fp, StandardFrameConstants::kContextOffset)); + // In debug mode, make sure the lexical context is set. +#ifdef DEBUG + Check(ne, kWeShouldNotHaveAnEmptyLexicalContext, + scratch, Operand(zero_reg)); +#endif + + // Load the native context of the current context. + int offset = + Context::kHeaderSize + Context::GLOBAL_OBJECT_INDEX * kPointerSize; + ld(scratch, FieldMemOperand(scratch, offset)); + ld(scratch, FieldMemOperand(scratch, GlobalObject::kNativeContextOffset)); + + // Check the context is a native context. + if (emit_debug_code()) { + push(holder_reg); // Temporarily save holder on the stack. + // Read the first word and compare to the native_context_map. + ld(holder_reg, FieldMemOperand(scratch, HeapObject::kMapOffset)); + LoadRoot(at, Heap::kNativeContextMapRootIndex); + Check(eq, kJSGlobalObjectNativeContextShouldBeANativeContext, + holder_reg, Operand(at)); + pop(holder_reg); // Restore holder. + } + + // Check if both contexts are the same. + ld(at, FieldMemOperand(holder_reg, JSGlobalProxy::kNativeContextOffset)); + Branch(&same_contexts, eq, scratch, Operand(at)); + + // Check the context is a native context. + if (emit_debug_code()) { + push(holder_reg); // Temporarily save holder on the stack. + mov(holder_reg, at); // Move at to its holding place. + LoadRoot(at, Heap::kNullValueRootIndex); + Check(ne, kJSGlobalProxyContextShouldNotBeNull, + holder_reg, Operand(at)); + + ld(holder_reg, FieldMemOperand(holder_reg, HeapObject::kMapOffset)); + LoadRoot(at, Heap::kNativeContextMapRootIndex); + Check(eq, kJSGlobalObjectNativeContextShouldBeANativeContext, + holder_reg, Operand(at)); + // Restore at is not needed. at is reloaded below. + pop(holder_reg); // Restore holder. + // Restore at to holder's context. + ld(at, FieldMemOperand(holder_reg, JSGlobalProxy::kNativeContextOffset)); + } + + // Check that the security token in the calling global object is + // compatible with the security token in the receiving global + // object. 
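+  // Schematically (a sketch in terms of the Context slot layout, not the
+  // emitted code):
+  //
+  //   if (current_native_context[SECURITY_TOKEN_INDEX] !=
+  //       holder_native_context[SECURITY_TOKEN_INDEX]) goto miss;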
+ int token_offset = Context::kHeaderSize + + Context::SECURITY_TOKEN_INDEX * kPointerSize; + + ld(scratch, FieldMemOperand(scratch, token_offset)); + ld(at, FieldMemOperand(at, token_offset)); + Branch(miss, ne, scratch, Operand(at)); + + bind(&same_contexts); +} + + +// Compute the hash code from the untagged key. This must be kept in sync with +// ComputeIntegerHash in utils.h and KeyedLoadGenericStub in +// code-stub-hydrogen.cc +void MacroAssembler::GetNumberHash(Register reg0, Register scratch) { + // First of all we assign the hash seed to scratch. + LoadRoot(scratch, Heap::kHashSeedRootIndex); + SmiUntag(scratch); + + // Xor original key with a seed. + xor_(reg0, reg0, scratch); + + // Compute the hash code from the untagged key. This must be kept in sync + // with ComputeIntegerHash in utils.h. + // + // hash = ~hash + (hash << 15); + // The algorithm uses 32-bit integer values. + nor(scratch, reg0, zero_reg); + sll(at, reg0, 15); + addu(reg0, scratch, at); + + // hash = hash ^ (hash >> 12); + srl(at, reg0, 12); + xor_(reg0, reg0, at); + + // hash = hash + (hash << 2); + sll(at, reg0, 2); + addu(reg0, reg0, at); + + // hash = hash ^ (hash >> 4); + srl(at, reg0, 4); + xor_(reg0, reg0, at); + + // hash = hash * 2057; + sll(scratch, reg0, 11); + sll(at, reg0, 3); + addu(reg0, reg0, at); + addu(reg0, reg0, scratch); + + // hash = hash ^ (hash >> 16); + srl(at, reg0, 16); + xor_(reg0, reg0, at); +} + + +void MacroAssembler::LoadFromNumberDictionary(Label* miss, + Register elements, + Register key, + Register result, + Register reg0, + Register reg1, + Register reg2) { + // Register use: + // + // elements - holds the slow-case elements of the receiver on entry. + // Unchanged unless 'result' is the same register. + // + // key - holds the smi key on entry. + // Unchanged unless 'result' is the same register. + // + // + // result - holds the result on exit if the load succeeded. + // Allowed to be the same as 'key' or 'result'. + // Unchanged on bailout so 'key' or 'result' can be used + // in further computation. + // + // Scratch registers: + // + // reg0 - holds the untagged key on entry and holds the hash once computed. + // + // reg1 - Used to hold the capacity mask of the dictionary. + // + // reg2 - Used for the index into the dictionary. + // at - Temporary (avoid MacroAssembler instructions also using 'at'). + Label done; + + GetNumberHash(reg0, reg1); + + // Compute the capacity mask. + ld(reg1, FieldMemOperand(elements, SeededNumberDictionary::kCapacityOffset)); + SmiUntag(reg1, reg1); + Dsubu(reg1, reg1, Operand(1)); + + // Generate an unrolled loop that performs a few probes before giving up. + for (int i = 0; i < kNumberDictionaryProbes; i++) { + // Use reg2 for index calculations and keep the hash intact in reg0. + mov(reg2, reg0); + // Compute the masked index: (hash + i + i * i) & mask. + if (i > 0) { + Daddu(reg2, reg2, Operand(SeededNumberDictionary::GetProbeOffset(i))); + } + and_(reg2, reg2, reg1); + + // Scale the index by multiplying by the element size. + DCHECK(SeededNumberDictionary::kEntrySize == 3); + dsll(at, reg2, 1); // 2x. + daddu(reg2, reg2, at); // reg2 = reg2 * 3. + + // Check if the key is identical to the name. 
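+    // The dsll/daddu pair below computes elements + reg2 * kPointerSize,
+    // i.e. base + (masked index) * kEntrySize * kPointerSize. For example,
+    // if (hash + probe offset) & mask were 5, reg2 holds 5 * 3 = 15 here,
+    // giving a byte offset of 15 << kPointerSizeLog2 = 120 on this port.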
+ dsll(at, reg2, kPointerSizeLog2); + daddu(reg2, elements, at); + + ld(at, FieldMemOperand(reg2, SeededNumberDictionary::kElementsStartOffset)); + if (i != kNumberDictionaryProbes - 1) { + Branch(&done, eq, key, Operand(at)); + } else { + Branch(miss, ne, key, Operand(at)); + } + } + + bind(&done); + // Check that the value is a normal property. + // reg2: elements + (index * kPointerSize). + const int kDetailsOffset = + SeededNumberDictionary::kElementsStartOffset + 2 * kPointerSize; + ld(reg1, FieldMemOperand(reg2, kDetailsOffset)); + And(at, reg1, Operand(Smi::FromInt(PropertyDetails::TypeField::kMask))); + Branch(miss, ne, at, Operand(zero_reg)); + + // Get the value at the masked, scaled index and return. + const int kValueOffset = + SeededNumberDictionary::kElementsStartOffset + kPointerSize; + ld(result, FieldMemOperand(reg2, kValueOffset)); +} + + +// --------------------------------------------------------------------------- +// Instruction macros. + +void MacroAssembler::Addu(Register rd, Register rs, const Operand& rt) { + if (rt.is_reg()) { + addu(rd, rs, rt.rm()); + } else { + if (is_int16(rt.imm64_) && !MustUseReg(rt.rmode_)) { + addiu(rd, rs, rt.imm64_); + } else { + // li handles the relocation. + DCHECK(!rs.is(at)); + li(at, rt); + addu(rd, rs, at); + } + } +} + + +void MacroAssembler::Daddu(Register rd, Register rs, const Operand& rt) { + if (rt.is_reg()) { + daddu(rd, rs, rt.rm()); + } else { + if (is_int16(rt.imm64_) && !MustUseReg(rt.rmode_)) { + daddiu(rd, rs, rt.imm64_); + } else { + // li handles the relocation. + DCHECK(!rs.is(at)); + li(at, rt); + daddu(rd, rs, at); + } + } +} + + +void MacroAssembler::Subu(Register rd, Register rs, const Operand& rt) { + if (rt.is_reg()) { + subu(rd, rs, rt.rm()); + } else { + if (is_int16(rt.imm64_) && !MustUseReg(rt.rmode_)) { + addiu(rd, rs, -rt.imm64_); // No subiu instr, use addiu(x, y, -imm). + } else { + // li handles the relocation. + DCHECK(!rs.is(at)); + li(at, rt); + subu(rd, rs, at); + } + } +} + + +void MacroAssembler::Dsubu(Register rd, Register rs, const Operand& rt) { + if (rt.is_reg()) { + dsubu(rd, rs, rt.rm()); + } else { + if (is_int16(rt.imm64_) && !MustUseReg(rt.rmode_)) { + daddiu(rd, rs, -rt.imm64_); // No subiu instr, use addiu(x, y, -imm). + } else { + // li handles the relocation. + DCHECK(!rs.is(at)); + li(at, rt); + dsubu(rd, rs, at); + } + } +} + + +void MacroAssembler::Mul(Register rd, Register rs, const Operand& rt) { + if (rt.is_reg()) { + mul(rd, rs, rt.rm()); + } else { + // li handles the relocation. + DCHECK(!rs.is(at)); + li(at, rt); + mul(rd, rs, at); + } +} + + +void MacroAssembler::Mulh(Register rd, Register rs, const Operand& rt) { + if (rt.is_reg()) { + if (kArchVariant != kMips64r6) { + mult(rs, rt.rm()); + mfhi(rd); + } else { + muh(rd, rs, rt.rm()); + } + } else { + // li handles the relocation. + DCHECK(!rs.is(at)); + li(at, rt); + if (kArchVariant != kMips64r6) { + mult(rs, at); + mfhi(rd); + } else { + muh(rd, rs, at); + } + } +} + + +void MacroAssembler::Dmul(Register rd, Register rs, const Operand& rt) { + if (rt.is_reg()) { + if (kArchVariant == kMips64r6) { + dmul(rd, rs, rt.rm()); + } else { + dmult(rs, rt.rm()); + mflo(rd); + } + } else { + // li handles the relocation. 
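+    // As throughout these macros, an immediate operand that cannot be
+    // encoded directly (or that needs relocation) is first materialized
+    // into the scratch register 'at'. For example (pre-r6, with an
+    // illustrative out-of-range constant):
+    //   Dmul(v0, a0, Operand(imm))  ==>  li(at, imm); dmult(a0, at); mflo(v0);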
+ DCHECK(!rs.is(at)); + li(at, rt); + if (kArchVariant == kMips64r6) { + dmul(rd, rs, at); + } else { + dmult(rs, at); + mflo(rd); + } + } +} + + +void MacroAssembler::Dmulh(Register rd, Register rs, const Operand& rt) { + if (rt.is_reg()) { + if (kArchVariant == kMips64r6) { + dmuh(rd, rs, rt.rm()); + } else { + dmult(rs, rt.rm()); + mfhi(rd); + } + } else { + // li handles the relocation. + DCHECK(!rs.is(at)); + li(at, rt); + if (kArchVariant == kMips64r6) { + dmuh(rd, rs, at); + } else { + dmult(rs, at); + mfhi(rd); + } + } +} + + +void MacroAssembler::Mult(Register rs, const Operand& rt) { + if (rt.is_reg()) { + mult(rs, rt.rm()); + } else { + // li handles the relocation. + DCHECK(!rs.is(at)); + li(at, rt); + mult(rs, at); + } +} + + +void MacroAssembler::Dmult(Register rs, const Operand& rt) { + if (rt.is_reg()) { + dmult(rs, rt.rm()); + } else { + // li handles the relocation. + DCHECK(!rs.is(at)); + li(at, rt); + dmult(rs, at); + } +} + + +void MacroAssembler::Multu(Register rs, const Operand& rt) { + if (rt.is_reg()) { + multu(rs, rt.rm()); + } else { + // li handles the relocation. + DCHECK(!rs.is(at)); + li(at, rt); + multu(rs, at); + } +} + + +void MacroAssembler::Dmultu(Register rs, const Operand& rt) { + if (rt.is_reg()) { + dmultu(rs, rt.rm()); + } else { + // li handles the relocation. + DCHECK(!rs.is(at)); + li(at, rt); + dmultu(rs, at); + } +} + + +void MacroAssembler::Div(Register rs, const Operand& rt) { + if (rt.is_reg()) { + div(rs, rt.rm()); + } else { + // li handles the relocation. + DCHECK(!rs.is(at)); + li(at, rt); + div(rs, at); + } +} + + +void MacroAssembler::Ddiv(Register rs, const Operand& rt) { + if (rt.is_reg()) { + ddiv(rs, rt.rm()); + } else { + // li handles the relocation. + DCHECK(!rs.is(at)); + li(at, rt); + ddiv(rs, at); + } +} + + +void MacroAssembler::Ddiv(Register rd, Register rs, const Operand& rt) { + if (kArchVariant != kMips64r6) { + if (rt.is_reg()) { + ddiv(rs, rt.rm()); + mflo(rd); + } else { + // li handles the relocation. + DCHECK(!rs.is(at)); + li(at, rt); + ddiv(rs, at); + mflo(rd); + } + } else { + if (rt.is_reg()) { + ddiv(rd, rs, rt.rm()); + } else { + // li handles the relocation. + DCHECK(!rs.is(at)); + li(at, rt); + ddiv(rd, rs, at); + } + } +} + + +void MacroAssembler::Divu(Register rs, const Operand& rt) { + if (rt.is_reg()) { + divu(rs, rt.rm()); + } else { + // li handles the relocation. + DCHECK(!rs.is(at)); + li(at, rt); + divu(rs, at); + } +} + + +void MacroAssembler::Ddivu(Register rs, const Operand& rt) { + if (rt.is_reg()) { + ddivu(rs, rt.rm()); + } else { + // li handles the relocation. + DCHECK(!rs.is(at)); + li(at, rt); + ddivu(rs, at); + } +} + + +void MacroAssembler::Dmod(Register rd, Register rs, const Operand& rt) { + if (kArchVariant != kMips64r6) { + if (rt.is_reg()) { + ddiv(rs, rt.rm()); + mfhi(rd); + } else { + // li handles the relocation. + DCHECK(!rs.is(at)); + li(at, rt); + ddiv(rs, at); + mfhi(rd); + } + } else { + if (rt.is_reg()) { + dmod(rd, rs, rt.rm()); + } else { + // li handles the relocation. + DCHECK(!rs.is(at)); + li(at, rt); + dmod(rd, rs, at); + } + } +} + + +void MacroAssembler::And(Register rd, Register rs, const Operand& rt) { + if (rt.is_reg()) { + and_(rd, rs, rt.rm()); + } else { + if (is_uint16(rt.imm64_) && !MustUseReg(rt.rmode_)) { + andi(rd, rs, rt.imm64_); + } else { + // li handles the relocation. 
+ DCHECK(!rs.is(at)); + li(at, rt); + and_(rd, rs, at); + } + } +} + + +void MacroAssembler::Or(Register rd, Register rs, const Operand& rt) { + if (rt.is_reg()) { + or_(rd, rs, rt.rm()); + } else { + if (is_uint16(rt.imm64_) && !MustUseReg(rt.rmode_)) { + ori(rd, rs, rt.imm64_); + } else { + // li handles the relocation. + DCHECK(!rs.is(at)); + li(at, rt); + or_(rd, rs, at); + } + } +} + + +void MacroAssembler::Xor(Register rd, Register rs, const Operand& rt) { + if (rt.is_reg()) { + xor_(rd, rs, rt.rm()); + } else { + if (is_uint16(rt.imm64_) && !MustUseReg(rt.rmode_)) { + xori(rd, rs, rt.imm64_); + } else { + // li handles the relocation. + DCHECK(!rs.is(at)); + li(at, rt); + xor_(rd, rs, at); + } + } +} + + +void MacroAssembler::Nor(Register rd, Register rs, const Operand& rt) { + if (rt.is_reg()) { + nor(rd, rs, rt.rm()); + } else { + // li handles the relocation. + DCHECK(!rs.is(at)); + li(at, rt); + nor(rd, rs, at); + } +} + + +void MacroAssembler::Neg(Register rs, const Operand& rt) { + DCHECK(rt.is_reg()); + DCHECK(!at.is(rs)); + DCHECK(!at.is(rt.rm())); + li(at, -1); + xor_(rs, rt.rm(), at); +} + + +void MacroAssembler::Slt(Register rd, Register rs, const Operand& rt) { + if (rt.is_reg()) { + slt(rd, rs, rt.rm()); + } else { + if (is_int16(rt.imm64_) && !MustUseReg(rt.rmode_)) { + slti(rd, rs, rt.imm64_); + } else { + // li handles the relocation. + DCHECK(!rs.is(at)); + li(at, rt); + slt(rd, rs, at); + } + } +} + + +void MacroAssembler::Sltu(Register rd, Register rs, const Operand& rt) { + if (rt.is_reg()) { + sltu(rd, rs, rt.rm()); + } else { + if (is_int16(rt.imm64_) && !MustUseReg(rt.rmode_)) { + sltiu(rd, rs, rt.imm64_); + } else { + // li handles the relocation. + DCHECK(!rs.is(at)); + li(at, rt); + sltu(rd, rs, at); + } + } +} + + +void MacroAssembler::Ror(Register rd, Register rs, const Operand& rt) { + if (kArchVariant == kMips64r2) { + if (rt.is_reg()) { + rotrv(rd, rs, rt.rm()); + } else { + rotr(rd, rs, rt.imm64_); + } + } else { + if (rt.is_reg()) { + subu(at, zero_reg, rt.rm()); + sllv(at, rs, at); + srlv(rd, rs, rt.rm()); + or_(rd, rd, at); + } else { + if (rt.imm64_ == 0) { + srl(rd, rs, 0); + } else { + srl(at, rs, rt.imm64_); + sll(rd, rs, (0x20 - rt.imm64_) & 0x1f); + or_(rd, rd, at); + } + } + } +} + + +void MacroAssembler::Dror(Register rd, Register rs, const Operand& rt) { + if (rt.is_reg()) { + drotrv(rd, rs, rt.rm()); + } else { + drotr(rd, rs, rt.imm64_); + } +} + + +void MacroAssembler::Pref(int32_t hint, const MemOperand& rs) { + pref(hint, rs); +} + + +// ------------Pseudo-instructions------------- + +void MacroAssembler::Ulw(Register rd, const MemOperand& rs) { + lwr(rd, rs); + lwl(rd, MemOperand(rs.rm(), rs.offset() + 3)); +} + + +void MacroAssembler::Usw(Register rd, const MemOperand& rs) { + swr(rd, rs); + swl(rd, MemOperand(rs.rm(), rs.offset() + 3)); +} + + +// Do 64-bit load from unaligned address. Note this only handles +// the specific case of 32-bit aligned, but not 64-bit aligned. +void MacroAssembler::Uld(Register rd, const MemOperand& rs, Register scratch) { + // Assert fail if the offset from start of object IS actually aligned. + // ONLY use with known misalignment, since there is performance cost. + DCHECK((rs.offset() + kHeapObjectTag) & (kPointerSize - 1)); + // TODO(plind): endian dependency. + lwu(rd, rs); + lw(scratch, MemOperand(rs.rm(), rs.offset() + kPointerSize / 2)); + dsll32(scratch, scratch, 0); + Daddu(rd, rd, scratch); +} + + +// Do 64-bit store to unaligned address. 
Note this only handles +// the specific case of 32-bit aligned, but not 64-bit aligned. +void MacroAssembler::Usd(Register rd, const MemOperand& rs, Register scratch) { + // Assert fail if the offset from start of object IS actually aligned. + // ONLY use with known misalignment, since there is performance cost. + DCHECK((rs.offset() + kHeapObjectTag) & (kPointerSize - 1)); + // TODO(plind): endian dependency. + sw(rd, rs); + dsrl32(scratch, rd, 0); + sw(scratch, MemOperand(rs.rm(), rs.offset() + kPointerSize / 2)); +} + + +void MacroAssembler::li(Register dst, Handle<Object> value, LiFlags mode) { + AllowDeferredHandleDereference smi_check; + if (value->IsSmi()) { + li(dst, Operand(value), mode); + } else { + DCHECK(value->IsHeapObject()); + if (isolate()->heap()->InNewSpace(*value)) { + Handle<Cell> cell = isolate()->factory()->NewCell(value); + li(dst, Operand(cell)); + ld(dst, FieldMemOperand(dst, Cell::kValueOffset)); + } else { + li(dst, Operand(value)); + } + } +} + + +void MacroAssembler::li(Register rd, Operand j, LiFlags mode) { + DCHECK(!j.is_reg()); + BlockTrampolinePoolScope block_trampoline_pool(this); + if (!MustUseReg(j.rmode_) && mode == OPTIMIZE_SIZE) { + // Normal load of an immediate value which does not need Relocation Info. + if (is_int32(j.imm64_)) { + if (is_int16(j.imm64_)) { + daddiu(rd, zero_reg, (j.imm64_ & kImm16Mask)); + } else if (!(j.imm64_ & kHiMask)) { + ori(rd, zero_reg, (j.imm64_ & kImm16Mask)); + } else if (!(j.imm64_ & kImm16Mask)) { + lui(rd, (j.imm64_ >> kLuiShift) & kImm16Mask); + } else { + lui(rd, (j.imm64_ >> kLuiShift) & kImm16Mask); + ori(rd, rd, (j.imm64_ & kImm16Mask)); + } + } else { + lui(rd, (j.imm64_ >> 48) & kImm16Mask); + ori(rd, rd, (j.imm64_ >> 32) & kImm16Mask); + dsll(rd, rd, 16); + ori(rd, rd, (j.imm64_ >> 16) & kImm16Mask); + dsll(rd, rd, 16); + ori(rd, rd, j.imm64_ & kImm16Mask); + } + } else if (MustUseReg(j.rmode_)) { + RecordRelocInfo(j.rmode_, j.imm64_); + lui(rd, (j.imm64_ >> 32) & kImm16Mask); + ori(rd, rd, (j.imm64_ >> 16) & kImm16Mask); + dsll(rd, rd, 16); + ori(rd, rd, j.imm64_ & kImm16Mask); + } else if (mode == ADDRESS_LOAD) { + // We always need the same number of instructions as we may need to patch + // this code to load another value which may need all 4 instructions. 
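+  // Worked example (illustrative value): for j.imm64_ == 0x123456789ABC the
+  // four instructions below compute
+  //   lui(rd, 0x1234)      // rd = 0x0000000012340000
+  //   ori(rd, rd, 0x5678)  // rd = 0x0000000012345678
+  //   dsll(rd, rd, 16)     // rd = 0x0000123456780000
+  //   ori(rd, rd, 0x9ABC)  // rd = 0x0000123456789ABC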
+ lui(rd, (j.imm64_ >> 32) & kImm16Mask); + ori(rd, rd, (j.imm64_ >> 16) & kImm16Mask); + dsll(rd, rd, 16); + ori(rd, rd, j.imm64_ & kImm16Mask); + } else { + lui(rd, (j.imm64_ >> 48) & kImm16Mask); + ori(rd, rd, (j.imm64_ >> 32) & kImm16Mask); + dsll(rd, rd, 16); + ori(rd, rd, (j.imm64_ >> 16) & kImm16Mask); + dsll(rd, rd, 16); + ori(rd, rd, j.imm64_ & kImm16Mask); + } +} + + +void MacroAssembler::MultiPush(RegList regs) { + int16_t num_to_push = NumberOfBitsSet(regs); + int16_t stack_offset = num_to_push * kPointerSize; + + Dsubu(sp, sp, Operand(stack_offset)); + for (int16_t i = kNumRegisters - 1; i >= 0; i--) { + if ((regs & (1 << i)) != 0) { + stack_offset -= kPointerSize; + sd(ToRegister(i), MemOperand(sp, stack_offset)); + } + } +} + + +void MacroAssembler::MultiPushReversed(RegList regs) { + int16_t num_to_push = NumberOfBitsSet(regs); + int16_t stack_offset = num_to_push * kPointerSize; + + Dsubu(sp, sp, Operand(stack_offset)); + for (int16_t i = 0; i < kNumRegisters; i++) { + if ((regs & (1 << i)) != 0) { + stack_offset -= kPointerSize; + sd(ToRegister(i), MemOperand(sp, stack_offset)); + } + } +} + + +void MacroAssembler::MultiPop(RegList regs) { + int16_t stack_offset = 0; + + for (int16_t i = 0; i < kNumRegisters; i++) { + if ((regs & (1 << i)) != 0) { + ld(ToRegister(i), MemOperand(sp, stack_offset)); + stack_offset += kPointerSize; + } + } + daddiu(sp, sp, stack_offset); +} + + +void MacroAssembler::MultiPopReversed(RegList regs) { + int16_t stack_offset = 0; + + for (int16_t i = kNumRegisters - 1; i >= 0; i--) { + if ((regs & (1 << i)) != 0) { + ld(ToRegister(i), MemOperand(sp, stack_offset)); + stack_offset += kPointerSize; + } + } + daddiu(sp, sp, stack_offset); +} + + +void MacroAssembler::MultiPushFPU(RegList regs) { + int16_t num_to_push = NumberOfBitsSet(regs); + int16_t stack_offset = num_to_push * kDoubleSize; + + Dsubu(sp, sp, Operand(stack_offset)); + for (int16_t i = kNumRegisters - 1; i >= 0; i--) { + if ((regs & (1 << i)) != 0) { + stack_offset -= kDoubleSize; + sdc1(FPURegister::from_code(i), MemOperand(sp, stack_offset)); + } + } +} + + +void MacroAssembler::MultiPushReversedFPU(RegList regs) { + int16_t num_to_push = NumberOfBitsSet(regs); + int16_t stack_offset = num_to_push * kDoubleSize; + + Dsubu(sp, sp, Operand(stack_offset)); + for (int16_t i = 0; i < kNumRegisters; i++) { + if ((regs & (1 << i)) != 0) { + stack_offset -= kDoubleSize; + sdc1(FPURegister::from_code(i), MemOperand(sp, stack_offset)); + } + } +} + + +void MacroAssembler::MultiPopFPU(RegList regs) { + int16_t stack_offset = 0; + + for (int16_t i = 0; i < kNumRegisters; i++) { + if ((regs & (1 << i)) != 0) { + ldc1(FPURegister::from_code(i), MemOperand(sp, stack_offset)); + stack_offset += kDoubleSize; + } + } + daddiu(sp, sp, stack_offset); +} + + +void MacroAssembler::MultiPopReversedFPU(RegList regs) { + int16_t stack_offset = 0; + + for (int16_t i = kNumRegisters - 1; i >= 0; i--) { + if ((regs & (1 << i)) != 0) { + ldc1(FPURegister::from_code(i), MemOperand(sp, stack_offset)); + stack_offset += kDoubleSize; + } + } + daddiu(sp, sp, stack_offset); +} + + +void MacroAssembler::FlushICache(Register address, unsigned instructions) { + RegList saved_regs = kJSCallerSaved | ra.bit(); + MultiPush(saved_regs); + AllowExternalCallThatCantCauseGC scope(this); + + // Save to a0 in case address == a4. 
+ Move(a0, address); + PrepareCallCFunction(2, a4); + + li(a1, instructions * kInstrSize); + CallCFunction(ExternalReference::flush_icache_function(isolate()), 2); + MultiPop(saved_regs); +} + + +void MacroAssembler::Ext(Register rt, + Register rs, + uint16_t pos, + uint16_t size) { + DCHECK(pos < 32); + DCHECK(pos + size < 33); + ext_(rt, rs, pos, size); +} + + +void MacroAssembler::Ins(Register rt, + Register rs, + uint16_t pos, + uint16_t size) { + DCHECK(pos < 32); + DCHECK(pos + size <= 32); + DCHECK(size != 0); + ins_(rt, rs, pos, size); +} + + +void MacroAssembler::Cvt_d_uw(FPURegister fd, + FPURegister fs, + FPURegister scratch) { + // Move the data from fs to t8. + mfc1(t8, fs); + Cvt_d_uw(fd, t8, scratch); +} + + +void MacroAssembler::Cvt_d_uw(FPURegister fd, + Register rs, + FPURegister scratch) { + // Convert rs to a FP value in fd (and fd + 1). + // We do this by converting rs minus the MSB to avoid sign conversion, + // then adding 2^31 to the result (if needed). + + DCHECK(!fd.is(scratch)); + DCHECK(!rs.is(t9)); + DCHECK(!rs.is(at)); + + // Save rs's MSB to t9. + Ext(t9, rs, 31, 1); + // Remove rs's MSB. + Ext(at, rs, 0, 31); + // Move the result to fd. + mtc1(at, fd); + mthc1(zero_reg, fd); + + // Convert fd to a real FP value. + cvt_d_w(fd, fd); + + Label conversion_done; + + // If rs's MSB was 0, it's done. + // Otherwise we need to add that to the FP register. + Branch(&conversion_done, eq, t9, Operand(zero_reg)); + + // Load 2^31 into f20 as its float representation. + li(at, 0x41E00000); + mtc1(zero_reg, scratch); + mthc1(at, scratch); + // Add it to fd. + add_d(fd, fd, scratch); + + bind(&conversion_done); +} + + +void MacroAssembler::Round_l_d(FPURegister fd, FPURegister fs) { + round_l_d(fd, fs); +} + + +void MacroAssembler::Floor_l_d(FPURegister fd, FPURegister fs) { + floor_l_d(fd, fs); +} + + +void MacroAssembler::Ceil_l_d(FPURegister fd, FPURegister fs) { + ceil_l_d(fd, fs); +} + + +void MacroAssembler::Trunc_l_d(FPURegister fd, FPURegister fs) { + trunc_l_d(fd, fs); +} + + +void MacroAssembler::Trunc_l_ud(FPURegister fd, + FPURegister fs, + FPURegister scratch) { + // Load to GPR. + dmfc1(t8, fs); + // Reset sign bit. + li(at, 0x7fffffffffffffff); + and_(t8, t8, at); + dmtc1(t8, fs); + trunc_l_d(fd, fs); +} + + +void MacroAssembler::Trunc_uw_d(FPURegister fd, + FPURegister fs, + FPURegister scratch) { + Trunc_uw_d(fs, t8, scratch); + mtc1(t8, fd); +} + + +void MacroAssembler::Trunc_w_d(FPURegister fd, FPURegister fs) { + trunc_w_d(fd, fs); +} + + +void MacroAssembler::Round_w_d(FPURegister fd, FPURegister fs) { + round_w_d(fd, fs); +} + + +void MacroAssembler::Floor_w_d(FPURegister fd, FPURegister fs) { + floor_w_d(fd, fs); +} + + +void MacroAssembler::Ceil_w_d(FPURegister fd, FPURegister fs) { + ceil_w_d(fd, fs); +} + + +void MacroAssembler::Trunc_uw_d(FPURegister fd, + Register rs, + FPURegister scratch) { + DCHECK(!fd.is(scratch)); + DCHECK(!rs.is(at)); + + // Load 2^31 into scratch as its float representation. + li(at, 0x41E00000); + mtc1(zero_reg, scratch); + mthc1(at, scratch); + // Test if scratch > fd. + // If fd < 2^31 we can convert it normally. + Label simple_convert; + BranchF(&simple_convert, NULL, lt, fd, scratch); + + // First we subtract 2^31 from fd, then trunc it to rs + // and add 2^31 to rs. + sub_d(scratch, fd, scratch); + trunc_w_d(scratch, scratch); + mfc1(rs, scratch); + Or(rs, rs, 1 << 31); + + Label done; + Branch(&done); + // Simple conversion. 
+ bind(&simple_convert); + trunc_w_d(scratch, fd); + mfc1(rs, scratch); + + bind(&done); +} + + +void MacroAssembler::Madd_d(FPURegister fd, FPURegister fr, FPURegister fs, + FPURegister ft, FPURegister scratch) { + if (0) { // TODO(plind): find reasonable arch-variant symbol names. + madd_d(fd, fr, fs, ft); + } else { + // Can not change source regs's value. + DCHECK(!fr.is(scratch) && !fs.is(scratch) && !ft.is(scratch)); + mul_d(scratch, fs, ft); + add_d(fd, fr, scratch); + } +} + + +void MacroAssembler::BranchF(Label* target, + Label* nan, + Condition cc, + FPURegister cmp1, + FPURegister cmp2, + BranchDelaySlot bd) { + BlockTrampolinePoolScope block_trampoline_pool(this); + if (cc == al) { + Branch(bd, target); + return; + } + + DCHECK(nan || target); + // Check for unordered (NaN) cases. + if (nan) { + if (kArchVariant != kMips64r6) { + c(UN, D, cmp1, cmp2); + bc1t(nan); + } else { + // Use f31 for comparison result. It has to be unavailable to lithium + // register allocator. + DCHECK(!cmp1.is(f31) && !cmp2.is(f31)); + cmp(UN, L, f31, cmp1, cmp2); + bc1nez(nan, f31); + } + } + + if (kArchVariant != kMips64r6) { + if (target) { + // Here NaN cases were either handled by this function or are assumed to + // have been handled by the caller. + switch (cc) { + case lt: + c(OLT, D, cmp1, cmp2); + bc1t(target); + break; + case gt: + c(ULE, D, cmp1, cmp2); + bc1f(target); + break; + case ge: + c(ULT, D, cmp1, cmp2); + bc1f(target); + break; + case le: + c(OLE, D, cmp1, cmp2); + bc1t(target); + break; + case eq: + c(EQ, D, cmp1, cmp2); + bc1t(target); + break; + case ueq: + c(UEQ, D, cmp1, cmp2); + bc1t(target); + break; + case ne: + c(EQ, D, cmp1, cmp2); + bc1f(target); + break; + case nue: + c(UEQ, D, cmp1, cmp2); + bc1f(target); + break; + default: + CHECK(0); + } + } + } else { + if (target) { + // Here NaN cases were either handled by this function or are assumed to + // have been handled by the caller. + // Unsigned conditions are treated as their signed counterpart. + // Use f31 for comparison result, it is valid in fp64 (FR = 1) mode. + DCHECK(!cmp1.is(f31) && !cmp2.is(f31)); + switch (cc) { + case lt: + cmp(OLT, L, f31, cmp1, cmp2); + bc1nez(target, f31); + break; + case gt: + cmp(ULE, L, f31, cmp1, cmp2); + bc1eqz(target, f31); + break; + case ge: + cmp(ULT, L, f31, cmp1, cmp2); + bc1eqz(target, f31); + break; + case le: + cmp(OLE, L, f31, cmp1, cmp2); + bc1nez(target, f31); + break; + case eq: + cmp(EQ, L, f31, cmp1, cmp2); + bc1nez(target, f31); + break; + case ueq: + cmp(UEQ, L, f31, cmp1, cmp2); + bc1nez(target, f31); + break; + case ne: + cmp(EQ, L, f31, cmp1, cmp2); + bc1eqz(target, f31); + break; + case nue: + cmp(UEQ, L, f31, cmp1, cmp2); + bc1eqz(target, f31); + break; + default: + CHECK(0); + } + } + } + + if (bd == PROTECT) { + nop(); + } +} + + +void MacroAssembler::Move(FPURegister dst, double imm) { + static const DoubleRepresentation minus_zero(-0.0); + static const DoubleRepresentation zero(0.0); + DoubleRepresentation value_rep(imm); + // Handle special values first. + bool force_load = dst.is(kDoubleRegZero); + if (value_rep == zero && !force_load) { + mov_d(dst, kDoubleRegZero); + } else if (value_rep == minus_zero && !force_load) { + neg_d(dst, kDoubleRegZero); + } else { + uint32_t lo, hi; + DoubleAsTwoUInt32(imm, &lo, &hi); + // Move the low part of the double into the lower bits of the corresponding + // FPU register. 
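+  // For example (illustrative): Move(dst, 1.5) sees the IEEE-754 bit
+  // pattern 0x3FF8000000000000, so lo == 0 and hi == 0x3FF80000; the code
+  // below then emits mtc1(zero_reg, dst) for the low word followed by
+  // li(at, 0x3FF80000); mthc1(at, dst) for the high word.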
+ if (lo != 0) { + li(at, Operand(lo)); + mtc1(at, dst); + } else { + mtc1(zero_reg, dst); + } + // Move the high part of the double into the high bits of the corresponding + // FPU register. + if (hi != 0) { + li(at, Operand(hi)); + mthc1(at, dst); + } else { + mthc1(zero_reg, dst); + } + } +} + + +void MacroAssembler::Movz(Register rd, Register rs, Register rt) { + if (kArchVariant == kMips64r6) { + Label done; + Branch(&done, ne, rt, Operand(zero_reg)); + mov(rd, rs); + bind(&done); + } else { + movz(rd, rs, rt); + } +} + + +void MacroAssembler::Movn(Register rd, Register rs, Register rt) { + if (kArchVariant == kMips64r6) { + Label done; + Branch(&done, eq, rt, Operand(zero_reg)); + mov(rd, rs); + bind(&done); + } else { + movn(rd, rs, rt); + } +} + + +void MacroAssembler::Movt(Register rd, Register rs, uint16_t cc) { + movt(rd, rs, cc); +} + + +void MacroAssembler::Movf(Register rd, Register rs, uint16_t cc) { + movf(rd, rs, cc); +} + + +void MacroAssembler::Clz(Register rd, Register rs) { + clz(rd, rs); +} + + +void MacroAssembler::EmitFPUTruncate(FPURoundingMode rounding_mode, + Register result, + DoubleRegister double_input, + Register scratch, + DoubleRegister double_scratch, + Register except_flag, + CheckForInexactConversion check_inexact) { + DCHECK(!result.is(scratch)); + DCHECK(!double_input.is(double_scratch)); + DCHECK(!except_flag.is(scratch)); + + Label done; + + // Clear the except flag (0 = no exception) + mov(except_flag, zero_reg); + + // Test for values that can be exactly represented as a signed 32-bit integer. + cvt_w_d(double_scratch, double_input); + mfc1(result, double_scratch); + cvt_d_w(double_scratch, double_scratch); + BranchF(&done, NULL, eq, double_input, double_scratch); + + int32_t except_mask = kFCSRFlagMask; // Assume interested in all exceptions. + + if (check_inexact == kDontCheckForInexactConversion) { + // Ignore inexact exceptions. + except_mask &= ~kFCSRInexactFlagMask; + } + + // Save FCSR. + cfc1(scratch, FCSR); + // Disable FPU exceptions. + ctc1(zero_reg, FCSR); + + // Do operation based on rounding mode. + switch (rounding_mode) { + case kRoundToNearest: + Round_w_d(double_scratch, double_input); + break; + case kRoundToZero: + Trunc_w_d(double_scratch, double_input); + break; + case kRoundToPlusInf: + Ceil_w_d(double_scratch, double_input); + break; + case kRoundToMinusInf: + Floor_w_d(double_scratch, double_input); + break; + } // End of switch-statement. + + // Retrieve FCSR. + cfc1(except_flag, FCSR); + // Restore FCSR. + ctc1(scratch, FCSR); + // Move the converted value into the result register. + mfc1(result, double_scratch); + + // Check for fpu exceptions. + And(except_flag, except_flag, Operand(except_mask)); + + bind(&done); +} + + +void MacroAssembler::TryInlineTruncateDoubleToI(Register result, + DoubleRegister double_input, + Label* done) { + DoubleRegister single_scratch = kLithiumScratchDouble.low(); + Register scratch = at; + Register scratch2 = t9; + + // Clear cumulative exception flags and save the FCSR. + cfc1(scratch2, FCSR); + ctc1(zero_reg, FCSR); + // Try a conversion to a signed integer. + trunc_w_d(single_scratch, double_input); + mfc1(result, single_scratch); + // Retrieve and restore the FCSR. + cfc1(scratch, FCSR); + ctc1(scratch2, FCSR); + // Check for overflow and NaNs. + And(scratch, + scratch, + kFCSROverflowFlagMask | kFCSRUnderflowFlagMask | kFCSRInvalidOpFlagMask); + // If we had no exceptions we are done. 
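+  // At this point scratch holds FCSR & (overflow | underflow | invalid op);
+  // zero means the truncation raised none of those exceptions, so 'result'
+  // holds a usable value.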
+  Branch(done, eq, scratch, Operand(zero_reg));
+}
+
+
+void MacroAssembler::TruncateDoubleToI(Register result,
+                                       DoubleRegister double_input) {
+  Label done;
+
+  TryInlineTruncateDoubleToI(result, double_input, &done);
+
+  // If we fell through, the inline version did not succeed; call the stub.
+  push(ra);
+  Dsubu(sp, sp, Operand(kDoubleSize));  // Put input on stack.
+  sdc1(double_input, MemOperand(sp, 0));
+
+  DoubleToIStub stub(isolate(), sp, result, 0, true, true);
+  CallStub(&stub);
+
+  Daddu(sp, sp, Operand(kDoubleSize));
+  pop(ra);
+
+  bind(&done);
+}
+
+
+void MacroAssembler::TruncateHeapNumberToI(Register result, Register object) {
+  Label done;
+  DoubleRegister double_scratch = f12;
+  DCHECK(!result.is(object));
+
+  ldc1(double_scratch,
+       MemOperand(object, HeapNumber::kValueOffset - kHeapObjectTag));
+  TryInlineTruncateDoubleToI(result, double_scratch, &done);
+
+  // If we fell through, the inline version did not succeed; call the stub.
+  push(ra);
+  DoubleToIStub stub(isolate(),
+                     object,
+                     result,
+                     HeapNumber::kValueOffset - kHeapObjectTag,
+                     true,
+                     true);
+  CallStub(&stub);
+  pop(ra);
+
+  bind(&done);
+}
+
+
+void MacroAssembler::TruncateNumberToI(Register object,
+                                       Register result,
+                                       Register heap_number_map,
+                                       Register scratch,
+                                       Label* not_number) {
+  Label done;
+  DCHECK(!result.is(object));
+
+  UntagAndJumpIfSmi(result, object, &done);
+  JumpIfNotHeapNumber(object, heap_number_map, scratch, not_number);
+  TruncateHeapNumberToI(result, object);
+
+  bind(&done);
+}
+
+
+void MacroAssembler::GetLeastBitsFromSmi(Register dst,
+                                         Register src,
+                                         int num_least_bits) {
+  // Ext(dst, src, kSmiTagSize, num_least_bits);
+  SmiUntag(dst, src);
+  And(dst, dst, Operand((1 << num_least_bits) - 1));
+}
+
+
+void MacroAssembler::GetLeastBitsFromInt32(Register dst,
+                                           Register src,
+                                           int num_least_bits) {
+  DCHECK(!src.is(dst));
+  And(dst, src, Operand((1 << num_least_bits) - 1));
+}
+
+
+// Emulated conditional branches do not emit a nop in the branch delay slot.
+//
+// BRANCH_ARGS_CHECK checks that conditional jump arguments are correct.
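+// Concretely (illustrative calls): Branch(&l, eq, zero_reg,
+// Operand(zero_reg)) would fail the DCHECK below, since a conditional
+// branch must compare at least one non-zero register, while the cc_always
+// form is only legal with the zero_reg/zero_reg dummy operands.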
+#define BRANCH_ARGS_CHECK(cond, rs, rt) DCHECK( \ + (cond == cc_always && rs.is(zero_reg) && rt.rm().is(zero_reg)) || \ + (cond != cc_always && (!rs.is(zero_reg) || !rt.rm().is(zero_reg)))) + + +void MacroAssembler::Branch(int16_t offset, BranchDelaySlot bdslot) { + BranchShort(offset, bdslot); +} + + +void MacroAssembler::Branch(int16_t offset, Condition cond, Register rs, + const Operand& rt, + BranchDelaySlot bdslot) { + BranchShort(offset, cond, rs, rt, bdslot); +} + + +void MacroAssembler::Branch(Label* L, BranchDelaySlot bdslot) { + if (L->is_bound()) { + if (is_near(L)) { + BranchShort(L, bdslot); + } else { + Jr(L, bdslot); + } + } else { + if (is_trampoline_emitted()) { + Jr(L, bdslot); + } else { + BranchShort(L, bdslot); + } + } +} + + +void MacroAssembler::Branch(Label* L, Condition cond, Register rs, + const Operand& rt, + BranchDelaySlot bdslot) { + if (L->is_bound()) { + if (is_near(L)) { + BranchShort(L, cond, rs, rt, bdslot); + } else { + if (cond != cc_always) { + Label skip; + Condition neg_cond = NegateCondition(cond); + BranchShort(&skip, neg_cond, rs, rt); + Jr(L, bdslot); + bind(&skip); + } else { + Jr(L, bdslot); + } + } + } else { + if (is_trampoline_emitted()) { + if (cond != cc_always) { + Label skip; + Condition neg_cond = NegateCondition(cond); + BranchShort(&skip, neg_cond, rs, rt); + Jr(L, bdslot); + bind(&skip); + } else { + Jr(L, bdslot); + } + } else { + BranchShort(L, cond, rs, rt, bdslot); + } + } +} + + +void MacroAssembler::Branch(Label* L, + Condition cond, + Register rs, + Heap::RootListIndex index, + BranchDelaySlot bdslot) { + LoadRoot(at, index); + Branch(L, cond, rs, Operand(at), bdslot); +} + + +void MacroAssembler::BranchShort(int16_t offset, BranchDelaySlot bdslot) { + b(offset); + + // Emit a nop in the branch delay slot if required. + if (bdslot == PROTECT) + nop(); +} + + +void MacroAssembler::BranchShort(int16_t offset, Condition cond, Register rs, + const Operand& rt, + BranchDelaySlot bdslot) { + BRANCH_ARGS_CHECK(cond, rs, rt); + DCHECK(!rs.is(zero_reg)); + Register r2 = no_reg; + Register scratch = at; + + if (rt.is_reg()) { + // NOTE: 'at' can be clobbered by Branch but it is legal to use it as rs or + // rt. + BlockTrampolinePoolScope block_trampoline_pool(this); + r2 = rt.rm_; + switch (cond) { + case cc_always: + b(offset); + break; + case eq: + beq(rs, r2, offset); + break; + case ne: + bne(rs, r2, offset); + break; + // Signed comparison. + case greater: + if (r2.is(zero_reg)) { + bgtz(rs, offset); + } else { + slt(scratch, r2, rs); + bne(scratch, zero_reg, offset); + } + break; + case greater_equal: + if (r2.is(zero_reg)) { + bgez(rs, offset); + } else { + slt(scratch, rs, r2); + beq(scratch, zero_reg, offset); + } + break; + case less: + if (r2.is(zero_reg)) { + bltz(rs, offset); + } else { + slt(scratch, rs, r2); + bne(scratch, zero_reg, offset); + } + break; + case less_equal: + if (r2.is(zero_reg)) { + blez(rs, offset); + } else { + slt(scratch, r2, rs); + beq(scratch, zero_reg, offset); + } + break; + // Unsigned comparison. + case Ugreater: + if (r2.is(zero_reg)) { + bgtz(rs, offset); + } else { + sltu(scratch, r2, rs); + bne(scratch, zero_reg, offset); + } + break; + case Ugreater_equal: + if (r2.is(zero_reg)) { + bgez(rs, offset); + } else { + sltu(scratch, rs, r2); + beq(scratch, zero_reg, offset); + } + break; + case Uless: + if (r2.is(zero_reg)) { + // No code needs to be emitted. 
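+          // (An unsigned comparison rs < 0 can never be true, so the
+          // branch is statically dead and we simply fall through.)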
+ return; + } else { + sltu(scratch, rs, r2); + bne(scratch, zero_reg, offset); + } + break; + case Uless_equal: + if (r2.is(zero_reg)) { + b(offset); + } else { + sltu(scratch, r2, rs); + beq(scratch, zero_reg, offset); + } + break; + default: + UNREACHABLE(); + } + } else { + // Be careful to always use shifted_branch_offset only just before the + // branch instruction, as the location will be remember for patching the + // target. + BlockTrampolinePoolScope block_trampoline_pool(this); + switch (cond) { + case cc_always: + b(offset); + break; + case eq: + // We don't want any other register but scratch clobbered. + DCHECK(!scratch.is(rs)); + r2 = scratch; + li(r2, rt); + beq(rs, r2, offset); + break; + case ne: + // We don't want any other register but scratch clobbered. + DCHECK(!scratch.is(rs)); + r2 = scratch; + li(r2, rt); + bne(rs, r2, offset); + break; + // Signed comparison. + case greater: + if (rt.imm64_ == 0) { + bgtz(rs, offset); + } else { + r2 = scratch; + li(r2, rt); + slt(scratch, r2, rs); + bne(scratch, zero_reg, offset); + } + break; + case greater_equal: + if (rt.imm64_ == 0) { + bgez(rs, offset); + } else if (is_int16(rt.imm64_)) { + slti(scratch, rs, rt.imm64_); + beq(scratch, zero_reg, offset); + } else { + r2 = scratch; + li(r2, rt); + slt(scratch, rs, r2); + beq(scratch, zero_reg, offset); + } + break; + case less: + if (rt.imm64_ == 0) { + bltz(rs, offset); + } else if (is_int16(rt.imm64_)) { + slti(scratch, rs, rt.imm64_); + bne(scratch, zero_reg, offset); + } else { + r2 = scratch; + li(r2, rt); + slt(scratch, rs, r2); + bne(scratch, zero_reg, offset); + } + break; + case less_equal: + if (rt.imm64_ == 0) { + blez(rs, offset); + } else { + r2 = scratch; + li(r2, rt); + slt(scratch, r2, rs); + beq(scratch, zero_reg, offset); + } + break; + // Unsigned comparison. + case Ugreater: + if (rt.imm64_ == 0) { + bgtz(rs, offset); + } else { + r2 = scratch; + li(r2, rt); + sltu(scratch, r2, rs); + bne(scratch, zero_reg, offset); + } + break; + case Ugreater_equal: + if (rt.imm64_ == 0) { + bgez(rs, offset); + } else if (is_int16(rt.imm64_)) { + sltiu(scratch, rs, rt.imm64_); + beq(scratch, zero_reg, offset); + } else { + r2 = scratch; + li(r2, rt); + sltu(scratch, rs, r2); + beq(scratch, zero_reg, offset); + } + break; + case Uless: + if (rt.imm64_ == 0) { + // No code needs to be emitted. + return; + } else if (is_int16(rt.imm64_)) { + sltiu(scratch, rs, rt.imm64_); + bne(scratch, zero_reg, offset); + } else { + r2 = scratch; + li(r2, rt); + sltu(scratch, rs, r2); + bne(scratch, zero_reg, offset); + } + break; + case Uless_equal: + if (rt.imm64_ == 0) { + b(offset); + } else { + r2 = scratch; + li(r2, rt); + sltu(scratch, r2, rs); + beq(scratch, zero_reg, offset); + } + break; + default: + UNREACHABLE(); + } + } + // Emit a nop in the branch delay slot if required. + if (bdslot == PROTECT) + nop(); +} + + +void MacroAssembler::BranchShort(Label* L, BranchDelaySlot bdslot) { + // We use branch_offset as an argument for the branch instructions to be sure + // it is called just before generating the branch instruction, as needed. + + b(shifted_branch_offset(L, false)); + + // Emit a nop in the branch delay slot if required. 
+ if (bdslot == PROTECT) + nop(); +} + + +void MacroAssembler::BranchShort(Label* L, Condition cond, Register rs, + const Operand& rt, + BranchDelaySlot bdslot) { + BRANCH_ARGS_CHECK(cond, rs, rt); + + int32_t offset = 0; + Register r2 = no_reg; + Register scratch = at; + if (rt.is_reg()) { + BlockTrampolinePoolScope block_trampoline_pool(this); + r2 = rt.rm_; + // Be careful to always use shifted_branch_offset only just before the + // branch instruction, as the location will be remember for patching the + // target. + switch (cond) { + case cc_always: + offset = shifted_branch_offset(L, false); + b(offset); + break; + case eq: + offset = shifted_branch_offset(L, false); + beq(rs, r2, offset); + break; + case ne: + offset = shifted_branch_offset(L, false); + bne(rs, r2, offset); + break; + // Signed comparison. + case greater: + if (r2.is(zero_reg)) { + offset = shifted_branch_offset(L, false); + bgtz(rs, offset); + } else { + slt(scratch, r2, rs); + offset = shifted_branch_offset(L, false); + bne(scratch, zero_reg, offset); + } + break; + case greater_equal: + if (r2.is(zero_reg)) { + offset = shifted_branch_offset(L, false); + bgez(rs, offset); + } else { + slt(scratch, rs, r2); + offset = shifted_branch_offset(L, false); + beq(scratch, zero_reg, offset); + } + break; + case less: + if (r2.is(zero_reg)) { + offset = shifted_branch_offset(L, false); + bltz(rs, offset); + } else { + slt(scratch, rs, r2); + offset = shifted_branch_offset(L, false); + bne(scratch, zero_reg, offset); + } + break; + case less_equal: + if (r2.is(zero_reg)) { + offset = shifted_branch_offset(L, false); + blez(rs, offset); + } else { + slt(scratch, r2, rs); + offset = shifted_branch_offset(L, false); + beq(scratch, zero_reg, offset); + } + break; + // Unsigned comparison. + case Ugreater: + if (r2.is(zero_reg)) { + offset = shifted_branch_offset(L, false); + bgtz(rs, offset); + } else { + sltu(scratch, r2, rs); + offset = shifted_branch_offset(L, false); + bne(scratch, zero_reg, offset); + } + break; + case Ugreater_equal: + if (r2.is(zero_reg)) { + offset = shifted_branch_offset(L, false); + bgez(rs, offset); + } else { + sltu(scratch, rs, r2); + offset = shifted_branch_offset(L, false); + beq(scratch, zero_reg, offset); + } + break; + case Uless: + if (r2.is(zero_reg)) { + // No code needs to be emitted. + return; + } else { + sltu(scratch, rs, r2); + offset = shifted_branch_offset(L, false); + bne(scratch, zero_reg, offset); + } + break; + case Uless_equal: + if (r2.is(zero_reg)) { + offset = shifted_branch_offset(L, false); + b(offset); + } else { + sltu(scratch, r2, rs); + offset = shifted_branch_offset(L, false); + beq(scratch, zero_reg, offset); + } + break; + default: + UNREACHABLE(); + } + } else { + // Be careful to always use shifted_branch_offset only just before the + // branch instruction, as the location will be remember for patching the + // target. + BlockTrampolinePoolScope block_trampoline_pool(this); + switch (cond) { + case cc_always: + offset = shifted_branch_offset(L, false); + b(offset); + break; + case eq: + DCHECK(!scratch.is(rs)); + r2 = scratch; + li(r2, rt); + offset = shifted_branch_offset(L, false); + beq(rs, r2, offset); + break; + case ne: + DCHECK(!scratch.is(rs)); + r2 = scratch; + li(r2, rt); + offset = shifted_branch_offset(L, false); + bne(rs, r2, offset); + break; + // Signed comparison. 
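+      // For example (illustrative): Branch(&l, greater, a0, Operand(100))
+      // reaches the 'greater' case below and expands to
+      //   li(at, 100); slt(at, at, a0); bne(at, zero_reg, offset);
+      // i.e. 'a0 > 100' is evaluated as '100 < a0' in the scratch register.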
+ case greater:
+ if (rt.imm64_ == 0) {
+ offset = shifted_branch_offset(L, false);
+ bgtz(rs, offset);
+ } else {
+ DCHECK(!scratch.is(rs));
+ r2 = scratch;
+ li(r2, rt);
+ slt(scratch, r2, rs);
+ offset = shifted_branch_offset(L, false);
+ bne(scratch, zero_reg, offset);
+ }
+ break;
+ case greater_equal:
+ if (rt.imm64_ == 0) {
+ offset = shifted_branch_offset(L, false);
+ bgez(rs, offset);
+ } else if (is_int16(rt.imm64_)) {
+ slti(scratch, rs, rt.imm64_);
+ offset = shifted_branch_offset(L, false);
+ beq(scratch, zero_reg, offset);
+ } else {
+ DCHECK(!scratch.is(rs));
+ r2 = scratch;
+ li(r2, rt);
+ slt(scratch, rs, r2);
+ offset = shifted_branch_offset(L, false);
+ beq(scratch, zero_reg, offset);
+ }
+ break;
+ case less:
+ if (rt.imm64_ == 0) {
+ offset = shifted_branch_offset(L, false);
+ bltz(rs, offset);
+ } else if (is_int16(rt.imm64_)) {
+ slti(scratch, rs, rt.imm64_);
+ offset = shifted_branch_offset(L, false);
+ bne(scratch, zero_reg, offset);
+ } else {
+ DCHECK(!scratch.is(rs));
+ r2 = scratch;
+ li(r2, rt);
+ slt(scratch, rs, r2);
+ offset = shifted_branch_offset(L, false);
+ bne(scratch, zero_reg, offset);
+ }
+ break;
+ case less_equal:
+ if (rt.imm64_ == 0) {
+ offset = shifted_branch_offset(L, false);
+ blez(rs, offset);
+ } else {
+ DCHECK(!scratch.is(rs));
+ r2 = scratch;
+ li(r2, rt);
+ slt(scratch, r2, rs);
+ offset = shifted_branch_offset(L, false);
+ beq(scratch, zero_reg, offset);
+ }
+ break;
+ // Unsigned comparison.
+ case Ugreater:
+ if (rt.imm64_ == 0) {
+ offset = shifted_branch_offset(L, false);
+ bne(rs, zero_reg, offset);
+ } else {
+ DCHECK(!scratch.is(rs));
+ r2 = scratch;
+ li(r2, rt);
+ sltu(scratch, r2, rs);
+ offset = shifted_branch_offset(L, false);
+ bne(scratch, zero_reg, offset);
+ }
+ break;
+ case Ugreater_equal:
+ if (rt.imm64_ == 0) {
+ offset = shifted_branch_offset(L, false);
+ bgez(rs, offset);
+ } else if (is_int16(rt.imm64_)) {
+ sltiu(scratch, rs, rt.imm64_);
+ offset = shifted_branch_offset(L, false);
+ beq(scratch, zero_reg, offset);
+ } else {
+ DCHECK(!scratch.is(rs));
+ r2 = scratch;
+ li(r2, rt);
+ sltu(scratch, rs, r2);
+ offset = shifted_branch_offset(L, false);
+ beq(scratch, zero_reg, offset);
+ }
+ break;
+ case Uless:
+ if (rt.imm64_ == 0) {
+ // No code needs to be emitted.
+ return;
+ } else if (is_int16(rt.imm64_)) {
+ sltiu(scratch, rs, rt.imm64_);
+ offset = shifted_branch_offset(L, false);
+ bne(scratch, zero_reg, offset);
+ } else {
+ DCHECK(!scratch.is(rs));
+ r2 = scratch;
+ li(r2, rt);
+ sltu(scratch, rs, r2);
+ offset = shifted_branch_offset(L, false);
+ bne(scratch, zero_reg, offset);
+ }
+ break;
+ case Uless_equal:
+ if (rt.imm64_ == 0) {
+ offset = shifted_branch_offset(L, false);
+ beq(rs, zero_reg, offset);
+ } else {
+ DCHECK(!scratch.is(rs));
+ r2 = scratch;
+ li(r2, rt);
+ sltu(scratch, r2, rs);
+ offset = shifted_branch_offset(L, false);
+ beq(scratch, zero_reg, offset);
+ }
+ break;
+ default:
+ UNREACHABLE();
+ }
+ }
+ // Check that the offset fits in an int16_t.
+ DCHECK(is_int16(offset));
+ // Emit a nop in the branch delay slot if required.
+ if (bdslot == PROTECT) + nop(); +} + + +void MacroAssembler::BranchAndLink(int16_t offset, BranchDelaySlot bdslot) { + BranchAndLinkShort(offset, bdslot); +} + + +void MacroAssembler::BranchAndLink(int16_t offset, Condition cond, Register rs, + const Operand& rt, + BranchDelaySlot bdslot) { + BranchAndLinkShort(offset, cond, rs, rt, bdslot); +} + + +void MacroAssembler::BranchAndLink(Label* L, BranchDelaySlot bdslot) { + if (L->is_bound()) { + if (is_near(L)) { + BranchAndLinkShort(L, bdslot); + } else { + Jalr(L, bdslot); + } + } else { + if (is_trampoline_emitted()) { + Jalr(L, bdslot); + } else { + BranchAndLinkShort(L, bdslot); + } + } +} + + +void MacroAssembler::BranchAndLink(Label* L, Condition cond, Register rs, + const Operand& rt, + BranchDelaySlot bdslot) { + if (L->is_bound()) { + if (is_near(L)) { + BranchAndLinkShort(L, cond, rs, rt, bdslot); + } else { + Label skip; + Condition neg_cond = NegateCondition(cond); + BranchShort(&skip, neg_cond, rs, rt); + Jalr(L, bdslot); + bind(&skip); + } + } else { + if (is_trampoline_emitted()) { + Label skip; + Condition neg_cond = NegateCondition(cond); + BranchShort(&skip, neg_cond, rs, rt); + Jalr(L, bdslot); + bind(&skip); + } else { + BranchAndLinkShort(L, cond, rs, rt, bdslot); + } + } +} + + +// We need to use a bgezal or bltzal, but they can't be used directly with the +// slt instructions. We could use sub or add instead but we would miss overflow +// cases, so we keep slt and add an intermediate third instruction. +void MacroAssembler::BranchAndLinkShort(int16_t offset, + BranchDelaySlot bdslot) { + bal(offset); + + // Emit a nop in the branch delay slot if required. + if (bdslot == PROTECT) + nop(); +} + + +void MacroAssembler::BranchAndLinkShort(int16_t offset, Condition cond, + Register rs, const Operand& rt, + BranchDelaySlot bdslot) { + BRANCH_ARGS_CHECK(cond, rs, rt); + Register r2 = no_reg; + Register scratch = at; + + if (rt.is_reg()) { + r2 = rt.rm_; + } else if (cond != cc_always) { + r2 = scratch; + li(r2, rt); + } + + { + BlockTrampolinePoolScope block_trampoline_pool(this); + switch (cond) { + case cc_always: + bal(offset); + break; + case eq: + bne(rs, r2, 2); + nop(); + bal(offset); + break; + case ne: + beq(rs, r2, 2); + nop(); + bal(offset); + break; + + // Signed comparison. + case greater: + // rs > rt + slt(scratch, r2, rs); + beq(scratch, zero_reg, 2); + nop(); + bal(offset); + break; + case greater_equal: + // rs >= rt + slt(scratch, rs, r2); + bne(scratch, zero_reg, 2); + nop(); + bal(offset); + break; + case less: + // rs < r2 + slt(scratch, rs, r2); + bne(scratch, zero_reg, 2); + nop(); + bal(offset); + break; + case less_equal: + // rs <= r2 + slt(scratch, r2, rs); + bne(scratch, zero_reg, 2); + nop(); + bal(offset); + break; + + + // Unsigned comparison. + case Ugreater: + // rs > rt + sltu(scratch, r2, rs); + beq(scratch, zero_reg, 2); + nop(); + bal(offset); + break; + case Ugreater_equal: + // rs >= rt + sltu(scratch, rs, r2); + bne(scratch, zero_reg, 2); + nop(); + bal(offset); + break; + case Uless: + // rs < r2 + sltu(scratch, rs, r2); + bne(scratch, zero_reg, 2); + nop(); + bal(offset); + break; + case Uless_equal: + // rs <= r2 + sltu(scratch, r2, rs); + bne(scratch, zero_reg, 2); + nop(); + bal(offset); + break; + default: + UNREACHABLE(); + } + } + // Emit a nop in the branch delay slot if required. 
+ if (bdslot == PROTECT)
+ nop();
+}
+
+
+void MacroAssembler::BranchAndLinkShort(Label* L, BranchDelaySlot bdslot) {
+ bal(shifted_branch_offset(L, false));
+
+ // Emit a nop in the branch delay slot if required.
+ if (bdslot == PROTECT)
+ nop();
+}
+
+
+void MacroAssembler::BranchAndLinkShort(Label* L, Condition cond, Register rs,
+ const Operand& rt,
+ BranchDelaySlot bdslot) {
+ BRANCH_ARGS_CHECK(cond, rs, rt);
+
+ int32_t offset = 0;
+ Register r2 = no_reg;
+ Register scratch = at;
+ if (rt.is_reg()) {
+ r2 = rt.rm_;
+ } else if (cond != cc_always) {
+ r2 = scratch;
+ li(r2, rt);
+ }
+
+ {
+ BlockTrampolinePoolScope block_trampoline_pool(this);
+ switch (cond) {
+ case cc_always:
+ offset = shifted_branch_offset(L, false);
+ bal(offset);
+ break;
+ case eq:
+ bne(rs, r2, 2);
+ nop();
+ offset = shifted_branch_offset(L, false);
+ bal(offset);
+ break;
+ case ne:
+ beq(rs, r2, 2);
+ nop();
+ offset = shifted_branch_offset(L, false);
+ bal(offset);
+ break;
+
+ // Signed comparison.
+ case greater:
+ // rs > rt
+ slt(scratch, r2, rs);
+ beq(scratch, zero_reg, 2);
+ nop();
+ offset = shifted_branch_offset(L, false);
+ bal(offset);
+ break;
+ case greater_equal:
+ // rs >= rt
+ slt(scratch, rs, r2);
+ bne(scratch, zero_reg, 2);
+ nop();
+ offset = shifted_branch_offset(L, false);
+ bal(offset);
+ break;
+ case less:
+ // rs < r2
+ slt(scratch, rs, r2);
+ bne(scratch, zero_reg, 2);
+ nop();
+ offset = shifted_branch_offset(L, false);
+ bal(offset);
+ break;
+ case less_equal:
+ // rs <= r2
+ slt(scratch, r2, rs);
+ bne(scratch, zero_reg, 2);
+ nop();
+ offset = shifted_branch_offset(L, false);
+ bal(offset);
+ break;
+
+
+ // Unsigned comparison.
+ case Ugreater:
+ // rs > rt
+ sltu(scratch, r2, rs);
+ beq(scratch, zero_reg, 2);
+ nop();
+ offset = shifted_branch_offset(L, false);
+ bal(offset);
+ break;
+ case Ugreater_equal:
+ // rs >= rt
+ sltu(scratch, rs, r2);
+ bne(scratch, zero_reg, 2);
+ nop();
+ offset = shifted_branch_offset(L, false);
+ bal(offset);
+ break;
+ case Uless:
+ // rs < r2
+ sltu(scratch, rs, r2);
+ bne(scratch, zero_reg, 2);
+ nop();
+ offset = shifted_branch_offset(L, false);
+ bal(offset);
+ break;
+ case Uless_equal:
+ // rs <= r2
+ sltu(scratch, r2, rs);
+ bne(scratch, zero_reg, 2);
+ nop();
+ offset = shifted_branch_offset(L, false);
+ bal(offset);
+ break;
+
+ default:
+ UNREACHABLE();
+ }
+ }
+ // Check that the offset fits in an int16_t.
+ DCHECK(is_int16(offset));
+
+ // Emit a nop in the branch delay slot if required.
+ if (bdslot == PROTECT)
+ nop();
+}
+
+
+void MacroAssembler::Jump(Register target,
+ Condition cond,
+ Register rs,
+ const Operand& rt,
+ BranchDelaySlot bd) {
+ BlockTrampolinePoolScope block_trampoline_pool(this);
+ if (cond == cc_always) {
+ jr(target);
+ } else {
+ BRANCH_ARGS_CHECK(cond, rs, rt);
+ Branch(2, NegateCondition(cond), rs, rt);
+ jr(target);
+ }
+ // Emit a nop in the branch delay slot if required.
+ if (bd == PROTECT)
+ nop();
+}
+
+
+void MacroAssembler::Jump(intptr_t target,
+ RelocInfo::Mode rmode,
+ Condition cond,
+ Register rs,
+ const Operand& rt,
+ BranchDelaySlot bd) {
+ Label skip;
+ if (cond != cc_always) {
+ Branch(USE_DELAY_SLOT, &skip, NegateCondition(cond), rs, rt);
+ }
+ // The first instruction of 'li' may be placed in the delay slot.
+ // This is not an issue; t9 is expected to be clobbered anyway.
+ li(t9, Operand(target, rmode));
+ Jump(t9, al, zero_reg, Operand(zero_reg), bd);
+ bind(&skip);
+}
+
+
+void MacroAssembler::Jump(Address target,
+ RelocInfo::Mode rmode,
+ Condition cond,
+ Register rs,
+ const Operand& rt,
+ BranchDelaySlot bd) {
+ DCHECK(!RelocInfo::IsCodeTarget(rmode));
+ Jump(reinterpret_cast<intptr_t>(target), rmode, cond, rs, rt, bd);
+}
+
+
+void MacroAssembler::Jump(Handle<Code> code,
+ RelocInfo::Mode rmode,
+ Condition cond,
+ Register rs,
+ const Operand& rt,
+ BranchDelaySlot bd) {
+ DCHECK(RelocInfo::IsCodeTarget(rmode));
+ AllowDeferredHandleDereference embedding_raw_address;
+ Jump(reinterpret_cast<intptr_t>(code.location()), rmode, cond, rs, rt, bd);
+}
+
+
+int MacroAssembler::CallSize(Register target,
+ Condition cond,
+ Register rs,
+ const Operand& rt,
+ BranchDelaySlot bd) {
+ int size = 0;
+
+ if (cond == cc_always) {
+ size += 1;
+ } else {
+ size += 3;
+ }
+
+ if (bd == PROTECT)
+ size += 1;
+
+ return size * kInstrSize;
+}
+
+
+// Note: To call gcc-compiled C code on mips, you must call through t9.
+void MacroAssembler::Call(Register target,
+ Condition cond,
+ Register rs,
+ const Operand& rt,
+ BranchDelaySlot bd) {
+ BlockTrampolinePoolScope block_trampoline_pool(this);
+ Label start;
+ bind(&start);
+ if (cond == cc_always) {
+ jalr(target);
+ } else {
+ BRANCH_ARGS_CHECK(cond, rs, rt);
+ Branch(2, NegateCondition(cond), rs, rt);
+ jalr(target);
+ }
+ // Emit a nop in the branch delay slot if required.
+ if (bd == PROTECT)
+ nop();
+
+ DCHECK_EQ(CallSize(target, cond, rs, rt, bd),
+ SizeOfCodeGeneratedSince(&start));
+}
+
+
+int MacroAssembler::CallSize(Address target,
+ RelocInfo::Mode rmode,
+ Condition cond,
+ Register rs,
+ const Operand& rt,
+ BranchDelaySlot bd) {
+ int size = CallSize(t9, cond, rs, rt, bd);
+ return size + 4 * kInstrSize;
+}
+
+
+void MacroAssembler::Call(Address target,
+ RelocInfo::Mode rmode,
+ Condition cond,
+ Register rs,
+ const Operand& rt,
+ BranchDelaySlot bd) {
+ BlockTrampolinePoolScope block_trampoline_pool(this);
+ Label start;
+ bind(&start);
+ int64_t target_int = reinterpret_cast<int64_t>(target);
+ // Must record previous source positions before the
+ // li() generates a new code target.
+ positions_recorder()->WriteRecordedPositions(); + li(t9, Operand(target_int, rmode), ADDRESS_LOAD); + Call(t9, cond, rs, rt, bd); + DCHECK_EQ(CallSize(target, rmode, cond, rs, rt, bd), + SizeOfCodeGeneratedSince(&start)); +} + + +int MacroAssembler::CallSize(Handle<Code> code, + RelocInfo::Mode rmode, + TypeFeedbackId ast_id, + Condition cond, + Register rs, + const Operand& rt, + BranchDelaySlot bd) { + AllowDeferredHandleDereference using_raw_address; + return CallSize(reinterpret_cast<Address>(code.location()), + rmode, cond, rs, rt, bd); +} + + +void MacroAssembler::Call(Handle<Code> code, + RelocInfo::Mode rmode, + TypeFeedbackId ast_id, + Condition cond, + Register rs, + const Operand& rt, + BranchDelaySlot bd) { + BlockTrampolinePoolScope block_trampoline_pool(this); + Label start; + bind(&start); + DCHECK(RelocInfo::IsCodeTarget(rmode)); + if (rmode == RelocInfo::CODE_TARGET && !ast_id.IsNone()) { + SetRecordedAstId(ast_id); + rmode = RelocInfo::CODE_TARGET_WITH_ID; + } + AllowDeferredHandleDereference embedding_raw_address; + Call(reinterpret_cast<Address>(code.location()), rmode, cond, rs, rt, bd); + DCHECK_EQ(CallSize(code, rmode, ast_id, cond, rs, rt, bd), + SizeOfCodeGeneratedSince(&start)); +} + + +void MacroAssembler::Ret(Condition cond, + Register rs, + const Operand& rt, + BranchDelaySlot bd) { + Jump(ra, cond, rs, rt, bd); +} + + +void MacroAssembler::J(Label* L, BranchDelaySlot bdslot) { + BlockTrampolinePoolScope block_trampoline_pool(this); + + uint64_t imm28; + imm28 = jump_address(L); + imm28 &= kImm28Mask; + { BlockGrowBufferScope block_buf_growth(this); + // Buffer growth (and relocation) must be blocked for internal references + // until associated instructions are emitted and available to be patched. + RecordRelocInfo(RelocInfo::INTERNAL_REFERENCE); + j(imm28); + } + // Emit a nop in the branch delay slot if required. + if (bdslot == PROTECT) + nop(); +} + + +void MacroAssembler::Jr(Label* L, BranchDelaySlot bdslot) { + BlockTrampolinePoolScope block_trampoline_pool(this); + + uint64_t imm64; + imm64 = jump_address(L); + { BlockGrowBufferScope block_buf_growth(this); + // Buffer growth (and relocation) must be blocked for internal references + // until associated instructions are emitted and available to be patched. + RecordRelocInfo(RelocInfo::INTERNAL_REFERENCE); + li(at, Operand(imm64), ADDRESS_LOAD); + } + jr(at); + + // Emit a nop in the branch delay slot if required. + if (bdslot == PROTECT) + nop(); +} + + +void MacroAssembler::Jalr(Label* L, BranchDelaySlot bdslot) { + BlockTrampolinePoolScope block_trampoline_pool(this); + + uint64_t imm64; + imm64 = jump_address(L); + { BlockGrowBufferScope block_buf_growth(this); + // Buffer growth (and relocation) must be blocked for internal references + // until associated instructions are emitted and available to be patched. + RecordRelocInfo(RelocInfo::INTERNAL_REFERENCE); + li(at, Operand(imm64), ADDRESS_LOAD); + } + jalr(at); + + // Emit a nop in the branch delay slot if required. + if (bdslot == PROTECT) + nop(); +} + + +void MacroAssembler::DropAndRet(int drop) { + Ret(USE_DELAY_SLOT); + daddiu(sp, sp, drop * kPointerSize); +} + +void MacroAssembler::DropAndRet(int drop, + Condition cond, + Register r1, + const Operand& r2) { + // Both Drop and Ret need to be conditional. 
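+ // Branch past both the stack adjustment and the return when the
+ // condition does not hold.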
+ Label skip; + if (cond != cc_always) { + Branch(&skip, NegateCondition(cond), r1, r2); + } + + Drop(drop); + Ret(); + + if (cond != cc_always) { + bind(&skip); + } +} + + +void MacroAssembler::Drop(int count, + Condition cond, + Register reg, + const Operand& op) { + if (count <= 0) { + return; + } + + Label skip; + + if (cond != al) { + Branch(&skip, NegateCondition(cond), reg, op); + } + + daddiu(sp, sp, count * kPointerSize); + + if (cond != al) { + bind(&skip); + } +} + + + +void MacroAssembler::Swap(Register reg1, + Register reg2, + Register scratch) { + if (scratch.is(no_reg)) { + Xor(reg1, reg1, Operand(reg2)); + Xor(reg2, reg2, Operand(reg1)); + Xor(reg1, reg1, Operand(reg2)); + } else { + mov(scratch, reg1); + mov(reg1, reg2); + mov(reg2, scratch); + } +} + + +void MacroAssembler::Call(Label* target) { + BranchAndLink(target); +} + + +void MacroAssembler::Push(Handle<Object> handle) { + li(at, Operand(handle)); + push(at); +} + + +void MacroAssembler::PushRegisterAsTwoSmis(Register src, Register scratch) { + DCHECK(!src.is(scratch)); + mov(scratch, src); + dsrl32(src, src, 0); + dsll32(src, src, 0); + push(src); + dsll32(scratch, scratch, 0); + push(scratch); +} + + +void MacroAssembler::PopRegisterAsTwoSmis(Register dst, Register scratch) { + DCHECK(!dst.is(scratch)); + pop(scratch); + dsrl32(scratch, scratch, 0); + pop(dst); + dsrl32(dst, dst, 0); + dsll32(dst, dst, 0); + or_(dst, dst, scratch); +} + + +void MacroAssembler::DebugBreak() { + PrepareCEntryArgs(0); + PrepareCEntryFunction(ExternalReference(Runtime::kDebugBreak, isolate())); + CEntryStub ces(isolate(), 1); + DCHECK(AllowThisStubCall(&ces)); + Call(ces.GetCode(), RelocInfo::DEBUG_BREAK); +} + + +// --------------------------------------------------------------------------- +// Exception handling. + +void MacroAssembler::PushTryHandler(StackHandler::Kind kind, + int handler_index) { + // Adjust this code if not the case. + STATIC_ASSERT(StackHandlerConstants::kSize == 5 * kPointerSize); + STATIC_ASSERT(StackHandlerConstants::kNextOffset == 0 * kPointerSize); + STATIC_ASSERT(StackHandlerConstants::kCodeOffset == 1 * kPointerSize); + STATIC_ASSERT(StackHandlerConstants::kStateOffset == 2 * kPointerSize); + STATIC_ASSERT(StackHandlerConstants::kContextOffset == 3 * kPointerSize); + STATIC_ASSERT(StackHandlerConstants::kFPOffset == 4 * kPointerSize); + + // For the JSEntry handler, we must preserve a0-a3 and s0. + // a5-a7 are available. We will build up the handler from the bottom by + // pushing on the stack. + // Set up the code object (a5) and the state (a6) for pushing. + unsigned state = + StackHandler::IndexField::encode(handler_index) | + StackHandler::KindField::encode(kind); + li(a5, Operand(CodeObject()), CONSTANT_SIZE); + li(a6, Operand(state)); + + // Push the frame pointer, context, state, and code object. + if (kind == StackHandler::JS_ENTRY) { + DCHECK_EQ(Smi::FromInt(0), 0); + // The second zero_reg indicates no context. + // The first zero_reg is the NULL frame pointer. + // The operands are reversed to match the order of MultiPush/Pop. + Push(zero_reg, zero_reg, a6, a5); + } else { + MultiPush(a5.bit() | a6.bit() | cp.bit() | fp.bit()); + } + + // Link the current handler as the next handler. + li(a6, Operand(ExternalReference(Isolate::kHandlerAddress, isolate()))); + ld(a5, MemOperand(a6)); + push(a5); + // Set this new handler as the current one. 
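+ // Handlers are linked through their stack addresses: sp now points at
+ // the handler just built, so storing it makes this handler the top one.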
+ sd(sp, MemOperand(a6)); +} + + +void MacroAssembler::PopTryHandler() { + STATIC_ASSERT(StackHandlerConstants::kNextOffset == 0); + pop(a1); + Daddu(sp, sp, Operand(StackHandlerConstants::kSize - kPointerSize)); + li(at, Operand(ExternalReference(Isolate::kHandlerAddress, isolate()))); + sd(a1, MemOperand(at)); +} + + +void MacroAssembler::JumpToHandlerEntry() { + // Compute the handler entry address and jump to it. The handler table is + // a fixed array of (smi-tagged) code offsets. + // v0 = exception, a1 = code object, a2 = state. + Uld(a3, FieldMemOperand(a1, Code::kHandlerTableOffset)); + Daddu(a3, a3, Operand(FixedArray::kHeaderSize - kHeapObjectTag)); + dsrl(a2, a2, StackHandler::kKindWidth); // Handler index. + dsll(a2, a2, kPointerSizeLog2); + Daddu(a2, a3, a2); + ld(a2, MemOperand(a2)); // Smi-tagged offset. + Daddu(a1, a1, Operand(Code::kHeaderSize - kHeapObjectTag)); // Code start. + dsra32(t9, a2, 0); + Daddu(t9, t9, a1); + Jump(t9); // Jump. +} + + +void MacroAssembler::Throw(Register value) { + // Adjust this code if not the case. + STATIC_ASSERT(StackHandlerConstants::kSize == 5 * kPointerSize); + STATIC_ASSERT(StackHandlerConstants::kNextOffset == 0); + STATIC_ASSERT(StackHandlerConstants::kCodeOffset == 1 * kPointerSize); + STATIC_ASSERT(StackHandlerConstants::kStateOffset == 2 * kPointerSize); + STATIC_ASSERT(StackHandlerConstants::kContextOffset == 3 * kPointerSize); + STATIC_ASSERT(StackHandlerConstants::kFPOffset == 4 * kPointerSize); + + // The exception is expected in v0. + Move(v0, value); + + // Drop the stack pointer to the top of the top handler. + li(a3, Operand(ExternalReference(Isolate::kHandlerAddress, + isolate()))); + ld(sp, MemOperand(a3)); + + // Restore the next handler. + pop(a2); + sd(a2, MemOperand(a3)); + + // Get the code object (a1) and state (a2). Restore the context and frame + // pointer. + MultiPop(a1.bit() | a2.bit() | cp.bit() | fp.bit()); + + // If the handler is a JS frame, restore the context to the frame. + // (kind == ENTRY) == (fp == 0) == (cp == 0), so we could test either fp + // or cp. + Label done; + Branch(&done, eq, cp, Operand(zero_reg)); + sd(cp, MemOperand(fp, StandardFrameConstants::kContextOffset)); + bind(&done); + + JumpToHandlerEntry(); +} + + +void MacroAssembler::ThrowUncatchable(Register value) { + // Adjust this code if not the case. + STATIC_ASSERT(StackHandlerConstants::kSize == 5 * kPointerSize); + STATIC_ASSERT(StackHandlerConstants::kNextOffset == 0 * kPointerSize); + STATIC_ASSERT(StackHandlerConstants::kCodeOffset == 1 * kPointerSize); + STATIC_ASSERT(StackHandlerConstants::kStateOffset == 2 * kPointerSize); + STATIC_ASSERT(StackHandlerConstants::kContextOffset == 3 * kPointerSize); + STATIC_ASSERT(StackHandlerConstants::kFPOffset == 4 * kPointerSize); + + // The exception is expected in v0. + if (!value.is(v0)) { + mov(v0, value); + } + // Drop the stack pointer to the top of the top stack handler. + li(a3, Operand(ExternalReference(Isolate::kHandlerAddress, isolate()))); + ld(sp, MemOperand(a3)); + + // Unwind the handlers until the ENTRY handler is found. + Label fetch_next, check_kind; + jmp(&check_kind); + bind(&fetch_next); + ld(sp, MemOperand(sp, StackHandlerConstants::kNextOffset)); + + bind(&check_kind); + STATIC_ASSERT(StackHandler::JS_ENTRY == 0); + ld(a2, MemOperand(sp, StackHandlerConstants::kStateOffset)); + And(a2, a2, Operand(StackHandler::KindField::kMask)); + Branch(&fetch_next, ne, a2, Operand(zero_reg)); + + // Set the top handler address to next handler past the top ENTRY handler. 
+ pop(a2); + sd(a2, MemOperand(a3)); + + // Get the code object (a1) and state (a2). Clear the context and frame + // pointer (0 was saved in the handler). + MultiPop(a1.bit() | a2.bit() | cp.bit() | fp.bit()); + + JumpToHandlerEntry(); +} + + +void MacroAssembler::Allocate(int object_size, + Register result, + Register scratch1, + Register scratch2, + Label* gc_required, + AllocationFlags flags) { + DCHECK(object_size <= Page::kMaxRegularHeapObjectSize); + if (!FLAG_inline_new) { + if (emit_debug_code()) { + // Trash the registers to simulate an allocation failure. + li(result, 0x7091); + li(scratch1, 0x7191); + li(scratch2, 0x7291); + } + jmp(gc_required); + return; + } + + DCHECK(!result.is(scratch1)); + DCHECK(!result.is(scratch2)); + DCHECK(!scratch1.is(scratch2)); + DCHECK(!scratch1.is(t9)); + DCHECK(!scratch2.is(t9)); + DCHECK(!result.is(t9)); + + // Make object size into bytes. + if ((flags & SIZE_IN_WORDS) != 0) { + object_size *= kPointerSize; + } + DCHECK(0 == (object_size & kObjectAlignmentMask)); + + // Check relative positions of allocation top and limit addresses. + // ARM adds additional checks to make sure the ldm instruction can be + // used. On MIPS we don't have ldm so we don't need additional checks either. + ExternalReference allocation_top = + AllocationUtils::GetAllocationTopReference(isolate(), flags); + ExternalReference allocation_limit = + AllocationUtils::GetAllocationLimitReference(isolate(), flags); + + intptr_t top = + reinterpret_cast<intptr_t>(allocation_top.address()); + intptr_t limit = + reinterpret_cast<intptr_t>(allocation_limit.address()); + DCHECK((limit - top) == kPointerSize); + + // Set up allocation top address and object size registers. + Register topaddr = scratch1; + li(topaddr, Operand(allocation_top)); + + // This code stores a temporary value in t9. + if ((flags & RESULT_CONTAINS_TOP) == 0) { + // Load allocation top into result and allocation limit into t9. + ld(result, MemOperand(topaddr)); + ld(t9, MemOperand(topaddr, kPointerSize)); + } else { + if (emit_debug_code()) { + // Assert that result actually contains top on entry. t9 is used + // immediately below so this use of t9 does not cause difference with + // respect to register content between debug and release mode. + ld(t9, MemOperand(topaddr)); + Check(eq, kUnexpectedAllocationTop, result, Operand(t9)); + } + // Load allocation limit into t9. Result already contains allocation top. + ld(t9, MemOperand(topaddr, limit - top)); + } + + DCHECK(kPointerSize == kDoubleSize); + if (emit_debug_code()) { + And(at, result, Operand(kDoubleAlignmentMask)); + Check(eq, kAllocationIsNotDoubleAligned, at, Operand(zero_reg)); + } + + // Calculate new top and bail out if new space is exhausted. Use result + // to calculate the new top. + Daddu(scratch2, result, Operand(object_size)); + Branch(gc_required, Ugreater, scratch2, Operand(t9)); + sd(scratch2, MemOperand(topaddr)); + + // Tag object if requested. + if ((flags & TAG_OBJECT) != 0) { + Daddu(result, result, Operand(kHeapObjectTag)); + } +} + + +void MacroAssembler::Allocate(Register object_size, + Register result, + Register scratch1, + Register scratch2, + Label* gc_required, + AllocationFlags flags) { + if (!FLAG_inline_new) { + if (emit_debug_code()) { + // Trash the registers to simulate an allocation failure. 
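+ // Arbitrary values that are easy to recognize in a register dump.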
+ li(result, 0x7091); + li(scratch1, 0x7191); + li(scratch2, 0x7291); + } + jmp(gc_required); + return; + } + + DCHECK(!result.is(scratch1)); + DCHECK(!result.is(scratch2)); + DCHECK(!scratch1.is(scratch2)); + DCHECK(!object_size.is(t9)); + DCHECK(!scratch1.is(t9) && !scratch2.is(t9) && !result.is(t9)); + + // Check relative positions of allocation top and limit addresses. + // ARM adds additional checks to make sure the ldm instruction can be + // used. On MIPS we don't have ldm so we don't need additional checks either. + ExternalReference allocation_top = + AllocationUtils::GetAllocationTopReference(isolate(), flags); + ExternalReference allocation_limit = + AllocationUtils::GetAllocationLimitReference(isolate(), flags); + intptr_t top = + reinterpret_cast<intptr_t>(allocation_top.address()); + intptr_t limit = + reinterpret_cast<intptr_t>(allocation_limit.address()); + DCHECK((limit - top) == kPointerSize); + + // Set up allocation top address and object size registers. + Register topaddr = scratch1; + li(topaddr, Operand(allocation_top)); + + // This code stores a temporary value in t9. + if ((flags & RESULT_CONTAINS_TOP) == 0) { + // Load allocation top into result and allocation limit into t9. + ld(result, MemOperand(topaddr)); + ld(t9, MemOperand(topaddr, kPointerSize)); + } else { + if (emit_debug_code()) { + // Assert that result actually contains top on entry. t9 is used + // immediately below so this use of t9 does not cause difference with + // respect to register content between debug and release mode. + ld(t9, MemOperand(topaddr)); + Check(eq, kUnexpectedAllocationTop, result, Operand(t9)); + } + // Load allocation limit into t9. Result already contains allocation top. + ld(t9, MemOperand(topaddr, limit - top)); + } + + DCHECK(kPointerSize == kDoubleSize); + if (emit_debug_code()) { + And(at, result, Operand(kDoubleAlignmentMask)); + Check(eq, kAllocationIsNotDoubleAligned, at, Operand(zero_reg)); + } + + // Calculate new top and bail out if new space is exhausted. Use result + // to calculate the new top. Object size may be in words so a shift is + // required to get the number of bytes. + if ((flags & SIZE_IN_WORDS) != 0) { + dsll(scratch2, object_size, kPointerSizeLog2); + Daddu(scratch2, result, scratch2); + } else { + Daddu(scratch2, result, Operand(object_size)); + } + Branch(gc_required, Ugreater, scratch2, Operand(t9)); + + // Update allocation top. result temporarily holds the new top. + if (emit_debug_code()) { + And(t9, scratch2, Operand(kObjectAlignmentMask)); + Check(eq, kUnalignedAllocationInNewSpace, t9, Operand(zero_reg)); + } + sd(scratch2, MemOperand(topaddr)); + + // Tag object if requested. + if ((flags & TAG_OBJECT) != 0) { + Daddu(result, result, Operand(kHeapObjectTag)); + } +} + + +void MacroAssembler::UndoAllocationInNewSpace(Register object, + Register scratch) { + ExternalReference new_space_allocation_top = + ExternalReference::new_space_allocation_top_address(isolate()); + + // Make sure the object has no tag before resetting top. + And(object, object, Operand(~kHeapObjectTagMask)); +#ifdef DEBUG + // Check that the object un-allocated is below the current top. + li(scratch, Operand(new_space_allocation_top)); + ld(scratch, MemOperand(scratch)); + Check(less, kUndoAllocationOfNonAllocatedMemory, + object, Operand(scratch)); +#endif + // Write the address of the object to un-allocate as the current top. 
+ li(scratch, Operand(new_space_allocation_top)); + sd(object, MemOperand(scratch)); +} + + +void MacroAssembler::AllocateTwoByteString(Register result, + Register length, + Register scratch1, + Register scratch2, + Register scratch3, + Label* gc_required) { + // Calculate the number of bytes needed for the characters in the string while + // observing object alignment. + DCHECK((SeqTwoByteString::kHeaderSize & kObjectAlignmentMask) == 0); + dsll(scratch1, length, 1); // Length in bytes, not chars. + daddiu(scratch1, scratch1, + kObjectAlignmentMask + SeqTwoByteString::kHeaderSize); + And(scratch1, scratch1, Operand(~kObjectAlignmentMask)); + + // Allocate two-byte string in new space. + Allocate(scratch1, + result, + scratch2, + scratch3, + gc_required, + TAG_OBJECT); + + // Set the map, length and hash field. + InitializeNewString(result, + length, + Heap::kStringMapRootIndex, + scratch1, + scratch2); +} + + +void MacroAssembler::AllocateAsciiString(Register result, + Register length, + Register scratch1, + Register scratch2, + Register scratch3, + Label* gc_required) { + // Calculate the number of bytes needed for the characters in the string + // while observing object alignment. + DCHECK((SeqOneByteString::kHeaderSize & kObjectAlignmentMask) == 0); + DCHECK(kCharSize == 1); + daddiu(scratch1, length, + kObjectAlignmentMask + SeqOneByteString::kHeaderSize); + And(scratch1, scratch1, Operand(~kObjectAlignmentMask)); + + // Allocate ASCII string in new space. + Allocate(scratch1, + result, + scratch2, + scratch3, + gc_required, + TAG_OBJECT); + + // Set the map, length and hash field. + InitializeNewString(result, + length, + Heap::kAsciiStringMapRootIndex, + scratch1, + scratch2); +} + + +void MacroAssembler::AllocateTwoByteConsString(Register result, + Register length, + Register scratch1, + Register scratch2, + Label* gc_required) { + Allocate(ConsString::kSize, result, scratch1, scratch2, gc_required, + TAG_OBJECT); + InitializeNewString(result, + length, + Heap::kConsStringMapRootIndex, + scratch1, + scratch2); +} + + +void MacroAssembler::AllocateAsciiConsString(Register result, + Register length, + Register scratch1, + Register scratch2, + Label* gc_required) { + Allocate(ConsString::kSize, + result, + scratch1, + scratch2, + gc_required, + TAG_OBJECT); + + InitializeNewString(result, + length, + Heap::kConsAsciiStringMapRootIndex, + scratch1, + scratch2); +} + + +void MacroAssembler::AllocateTwoByteSlicedString(Register result, + Register length, + Register scratch1, + Register scratch2, + Label* gc_required) { + Allocate(SlicedString::kSize, result, scratch1, scratch2, gc_required, + TAG_OBJECT); + + InitializeNewString(result, + length, + Heap::kSlicedStringMapRootIndex, + scratch1, + scratch2); +} + + +void MacroAssembler::AllocateAsciiSlicedString(Register result, + Register length, + Register scratch1, + Register scratch2, + Label* gc_required) { + Allocate(SlicedString::kSize, result, scratch1, scratch2, gc_required, + TAG_OBJECT); + + InitializeNewString(result, + length, + Heap::kSlicedAsciiStringMapRootIndex, + scratch1, + scratch2); +} + + +void MacroAssembler::JumpIfNotUniqueName(Register reg, + Label* not_unique_name) { + STATIC_ASSERT(kInternalizedTag == 0 && kStringTag == 0); + Label succeed; + And(at, reg, Operand(kIsNotStringMask | kIsNotInternalizedMask)); + Branch(&succeed, eq, at, Operand(zero_reg)); + Branch(not_unique_name, ne, reg, Operand(SYMBOL_TYPE)); + + bind(&succeed); +} + + +// Allocates a heap number or jumps to the label if the young space is full 
and +// a scavenge is needed. +void MacroAssembler::AllocateHeapNumber(Register result, + Register scratch1, + Register scratch2, + Register heap_number_map, + Label* need_gc, + TaggingMode tagging_mode, + MutableMode mode) { + // Allocate an object in the heap for the heap number and tag it as a heap + // object. + Allocate(HeapNumber::kSize, result, scratch1, scratch2, need_gc, + tagging_mode == TAG_RESULT ? TAG_OBJECT : NO_ALLOCATION_FLAGS); + + Heap::RootListIndex map_index = mode == MUTABLE + ? Heap::kMutableHeapNumberMapRootIndex + : Heap::kHeapNumberMapRootIndex; + AssertIsRoot(heap_number_map, map_index); + + // Store heap number map in the allocated object. + if (tagging_mode == TAG_RESULT) { + sd(heap_number_map, FieldMemOperand(result, HeapObject::kMapOffset)); + } else { + sd(heap_number_map, MemOperand(result, HeapObject::kMapOffset)); + } +} + + +void MacroAssembler::AllocateHeapNumberWithValue(Register result, + FPURegister value, + Register scratch1, + Register scratch2, + Label* gc_required) { + LoadRoot(t8, Heap::kHeapNumberMapRootIndex); + AllocateHeapNumber(result, scratch1, scratch2, t8, gc_required); + sdc1(value, FieldMemOperand(result, HeapNumber::kValueOffset)); +} + + +// Copies a fixed number of fields of heap objects from src to dst. +void MacroAssembler::CopyFields(Register dst, + Register src, + RegList temps, + int field_count) { + DCHECK((temps & dst.bit()) == 0); + DCHECK((temps & src.bit()) == 0); + // Primitive implementation using only one temporary register. + + Register tmp = no_reg; + // Find a temp register in temps list. + for (int i = 0; i < kNumRegisters; i++) { + if ((temps & (1 << i)) != 0) { + tmp.code_ = i; + break; + } + } + DCHECK(!tmp.is(no_reg)); + + for (int i = 0; i < field_count; i++) { + ld(tmp, FieldMemOperand(src, i * kPointerSize)); + sd(tmp, FieldMemOperand(dst, i * kPointerSize)); + } +} + + +void MacroAssembler::CopyBytes(Register src, + Register dst, + Register length, + Register scratch) { + Label align_loop_1, word_loop, byte_loop, byte_loop_1, done; + + // Align src before copying in word size chunks. + Branch(&byte_loop, le, length, Operand(kPointerSize)); + bind(&align_loop_1); + And(scratch, src, kPointerSize - 1); + Branch(&word_loop, eq, scratch, Operand(zero_reg)); + lbu(scratch, MemOperand(src)); + Daddu(src, src, 1); + sb(scratch, MemOperand(dst)); + Daddu(dst, dst, 1); + Dsubu(length, length, Operand(1)); + Branch(&align_loop_1, ne, length, Operand(zero_reg)); + + // Copy bytes in word size chunks. + bind(&word_loop); + if (emit_debug_code()) { + And(scratch, src, kPointerSize - 1); + Assert(eq, kExpectingAlignmentForCopyBytes, + scratch, Operand(zero_reg)); + } + Branch(&byte_loop, lt, length, Operand(kPointerSize)); + ld(scratch, MemOperand(src)); + Daddu(src, src, kPointerSize); + + // TODO(kalmard) check if this can be optimized to use sw in most cases. + // Can't use unaligned access - copy byte by byte. + sb(scratch, MemOperand(dst, 0)); + dsrl(scratch, scratch, 8); + sb(scratch, MemOperand(dst, 1)); + dsrl(scratch, scratch, 8); + sb(scratch, MemOperand(dst, 2)); + dsrl(scratch, scratch, 8); + sb(scratch, MemOperand(dst, 3)); + dsrl(scratch, scratch, 8); + sb(scratch, MemOperand(dst, 4)); + dsrl(scratch, scratch, 8); + sb(scratch, MemOperand(dst, 5)); + dsrl(scratch, scratch, 8); + sb(scratch, MemOperand(dst, 6)); + dsrl(scratch, scratch, 8); + sb(scratch, MemOperand(dst, 7)); + Daddu(dst, dst, 8); + + Dsubu(length, length, Operand(kPointerSize)); + Branch(&word_loop); + + // Copy the last bytes if any left. 
+ bind(&byte_loop); + Branch(&done, eq, length, Operand(zero_reg)); + bind(&byte_loop_1); + lbu(scratch, MemOperand(src)); + Daddu(src, src, 1); + sb(scratch, MemOperand(dst)); + Daddu(dst, dst, 1); + Dsubu(length, length, Operand(1)); + Branch(&byte_loop_1, ne, length, Operand(zero_reg)); + bind(&done); +} + + +void MacroAssembler::InitializeFieldsWithFiller(Register start_offset, + Register end_offset, + Register filler) { + Label loop, entry; + Branch(&entry); + bind(&loop); + sd(filler, MemOperand(start_offset)); + Daddu(start_offset, start_offset, kPointerSize); + bind(&entry); + Branch(&loop, lt, start_offset, Operand(end_offset)); +} + + +void MacroAssembler::CheckFastElements(Register map, + Register scratch, + Label* fail) { + STATIC_ASSERT(FAST_SMI_ELEMENTS == 0); + STATIC_ASSERT(FAST_HOLEY_SMI_ELEMENTS == 1); + STATIC_ASSERT(FAST_ELEMENTS == 2); + STATIC_ASSERT(FAST_HOLEY_ELEMENTS == 3); + lbu(scratch, FieldMemOperand(map, Map::kBitField2Offset)); + Branch(fail, hi, scratch, + Operand(Map::kMaximumBitField2FastHoleyElementValue)); +} + + +void MacroAssembler::CheckFastObjectElements(Register map, + Register scratch, + Label* fail) { + STATIC_ASSERT(FAST_SMI_ELEMENTS == 0); + STATIC_ASSERT(FAST_HOLEY_SMI_ELEMENTS == 1); + STATIC_ASSERT(FAST_ELEMENTS == 2); + STATIC_ASSERT(FAST_HOLEY_ELEMENTS == 3); + lbu(scratch, FieldMemOperand(map, Map::kBitField2Offset)); + Branch(fail, ls, scratch, + Operand(Map::kMaximumBitField2FastHoleySmiElementValue)); + Branch(fail, hi, scratch, + Operand(Map::kMaximumBitField2FastHoleyElementValue)); +} + + +void MacroAssembler::CheckFastSmiElements(Register map, + Register scratch, + Label* fail) { + STATIC_ASSERT(FAST_SMI_ELEMENTS == 0); + STATIC_ASSERT(FAST_HOLEY_SMI_ELEMENTS == 1); + lbu(scratch, FieldMemOperand(map, Map::kBitField2Offset)); + Branch(fail, hi, scratch, + Operand(Map::kMaximumBitField2FastHoleySmiElementValue)); +} + + +void MacroAssembler::StoreNumberToDoubleElements(Register value_reg, + Register key_reg, + Register elements_reg, + Register scratch1, + Register scratch2, + Register scratch3, + Label* fail, + int elements_offset) { + Label smi_value, maybe_nan, have_double_value, is_nan, done; + Register mantissa_reg = scratch2; + Register exponent_reg = scratch3; + + // Handle smi values specially. + JumpIfSmi(value_reg, &smi_value); + + // Ensure that the object is a heap number + CheckMap(value_reg, + scratch1, + Heap::kHeapNumberMapRootIndex, + fail, + DONT_DO_SMI_CHECK); + + // Check for nan: all NaN values have a value greater (signed) than 0x7ff00000 + // in the exponent. + li(scratch1, Operand(kNaNOrInfinityLowerBoundUpper32)); + lw(exponent_reg, FieldMemOperand(value_reg, HeapNumber::kExponentOffset)); + Branch(&maybe_nan, ge, exponent_reg, Operand(scratch1)); + + lwu(mantissa_reg, FieldMemOperand(value_reg, HeapNumber::kMantissaOffset)); + + bind(&have_double_value); + // dsll(scratch1, key_reg, kDoubleSizeLog2 - kSmiTagSize); + dsra(scratch1, key_reg, 32 - kDoubleSizeLog2); + Daddu(scratch1, scratch1, elements_reg); + sw(mantissa_reg, FieldMemOperand( + scratch1, FixedDoubleArray::kHeaderSize - elements_offset)); + uint32_t offset = FixedDoubleArray::kHeaderSize - elements_offset + + sizeof(kHoleNanLower32); + sw(exponent_reg, FieldMemOperand(scratch1, offset)); + jmp(&done); + + bind(&maybe_nan); + // Could be NaN, Infinity or -Infinity. If fraction is not zero, it's NaN, + // otherwise it's Infinity or -Infinity, and the non-NaN code path applies. 
+ lw(mantissa_reg, FieldMemOperand(value_reg, HeapNumber::kMantissaOffset)); + Branch(&have_double_value, eq, mantissa_reg, Operand(zero_reg)); + bind(&is_nan); + // Load canonical NaN for storing into the double array. + LoadRoot(at, Heap::kNanValueRootIndex); + lw(mantissa_reg, FieldMemOperand(at, HeapNumber::kMantissaOffset)); + lw(exponent_reg, FieldMemOperand(at, HeapNumber::kExponentOffset)); + jmp(&have_double_value); + + bind(&smi_value); + Daddu(scratch1, elements_reg, + Operand(FixedDoubleArray::kHeaderSize - kHeapObjectTag - + elements_offset)); + // dsll(scratch2, key_reg, kDoubleSizeLog2 - kSmiTagSize); + dsra(scratch2, key_reg, 32 - kDoubleSizeLog2); + Daddu(scratch1, scratch1, scratch2); + // scratch1 is now effective address of the double element + + Register untagged_value = elements_reg; + SmiUntag(untagged_value, value_reg); + mtc1(untagged_value, f2); + cvt_d_w(f0, f2); + sdc1(f0, MemOperand(scratch1, 0)); + bind(&done); +} + + +void MacroAssembler::CompareMapAndBranch(Register obj, + Register scratch, + Handle<Map> map, + Label* early_success, + Condition cond, + Label* branch_to) { + ld(scratch, FieldMemOperand(obj, HeapObject::kMapOffset)); + CompareMapAndBranch(scratch, map, early_success, cond, branch_to); +} + + +void MacroAssembler::CompareMapAndBranch(Register obj_map, + Handle<Map> map, + Label* early_success, + Condition cond, + Label* branch_to) { + Branch(branch_to, cond, obj_map, Operand(map)); +} + + +void MacroAssembler::CheckMap(Register obj, + Register scratch, + Handle<Map> map, + Label* fail, + SmiCheckType smi_check_type) { + if (smi_check_type == DO_SMI_CHECK) { + JumpIfSmi(obj, fail); + } + Label success; + CompareMapAndBranch(obj, scratch, map, &success, ne, fail); + bind(&success); +} + + +void MacroAssembler::DispatchMap(Register obj, + Register scratch, + Handle<Map> map, + Handle<Code> success, + SmiCheckType smi_check_type) { + Label fail; + if (smi_check_type == DO_SMI_CHECK) { + JumpIfSmi(obj, &fail); + } + ld(scratch, FieldMemOperand(obj, HeapObject::kMapOffset)); + Jump(success, RelocInfo::CODE_TARGET, eq, scratch, Operand(map)); + bind(&fail); +} + + +void MacroAssembler::CheckMap(Register obj, + Register scratch, + Heap::RootListIndex index, + Label* fail, + SmiCheckType smi_check_type) { + if (smi_check_type == DO_SMI_CHECK) { + JumpIfSmi(obj, fail); + } + ld(scratch, FieldMemOperand(obj, HeapObject::kMapOffset)); + LoadRoot(at, index); + Branch(fail, ne, scratch, Operand(at)); +} + + +void MacroAssembler::MovFromFloatResult(const DoubleRegister dst) { + if (IsMipsSoftFloatABI) { + Move(dst, v0, v1); + } else { + Move(dst, f0); // Reg f0 is o32 ABI FP return value. + } +} + + +void MacroAssembler::MovFromFloatParameter(const DoubleRegister dst) { + if (IsMipsSoftFloatABI) { + Move(dst, a0, a1); + } else { + Move(dst, f12); // Reg f12 is o32 ABI FP first argument value. + } +} + + +void MacroAssembler::MovToFloatParameter(DoubleRegister src) { + if (!IsMipsSoftFloatABI) { + Move(f12, src); + } else { + Move(a0, a1, src); + } +} + + +void MacroAssembler::MovToFloatResult(DoubleRegister src) { + if (!IsMipsSoftFloatABI) { + Move(f0, src); + } else { + Move(v0, v1, src); + } +} + + +void MacroAssembler::MovToFloatParameters(DoubleRegister src1, + DoubleRegister src2) { + if (!IsMipsSoftFloatABI) { + const DoubleRegister fparg2 = (kMipsAbi == kN64) ? 
f13 : f14;
+ if (src2.is(f12)) {
+ DCHECK(!src1.is(fparg2));
+ Move(fparg2, src2);
+ Move(f12, src1);
+ } else {
+ Move(f12, src1);
+ Move(fparg2, src2);
+ }
+ } else {
+ Move(a0, a1, src1);
+ Move(a2, a3, src2);
+ }
+}
+
+
+// -----------------------------------------------------------------------------
+// JavaScript invokes.
+
+void MacroAssembler::InvokePrologue(const ParameterCount& expected,
+ const ParameterCount& actual,
+ Handle<Code> code_constant,
+ Register code_reg,
+ Label* done,
+ bool* definitely_mismatches,
+ InvokeFlag flag,
+ const CallWrapper& call_wrapper) {
+ bool definitely_matches = false;
+ *definitely_mismatches = false;
+ Label regular_invoke;
+
+ // Check whether the expected and actual argument counts match. If not,
+ // set up registers according to contract with ArgumentsAdaptorTrampoline:
+ // a0: actual arguments count
+ // a1: function (passed through to callee)
+ // a2: expected arguments count
+
+ // The code below is made a lot easier because the calling code already sets
+ // up actual and expected registers according to the contract if values are
+ // passed in registers.
+ DCHECK(actual.is_immediate() || actual.reg().is(a0));
+ DCHECK(expected.is_immediate() || expected.reg().is(a2));
+ DCHECK((!code_constant.is_null() && code_reg.is(no_reg)) || code_reg.is(a3));
+
+ if (expected.is_immediate()) {
+ DCHECK(actual.is_immediate());
+ if (expected.immediate() == actual.immediate()) {
+ definitely_matches = true;
+ } else {
+ li(a0, Operand(actual.immediate()));
+ const int sentinel = SharedFunctionInfo::kDontAdaptArgumentsSentinel;
+ if (expected.immediate() == sentinel) {
+ // Don't worry about adapting arguments for builtins that
+ // don't want that done. Skip adaptation code by making it look
+ // like we have a match between expected and actual number of
+ // arguments.
+ definitely_matches = true;
+ } else {
+ *definitely_mismatches = true;
+ li(a2, Operand(expected.immediate()));
+ }
+ }
+ } else if (actual.is_immediate()) {
+ Branch(&regular_invoke, eq, expected.reg(), Operand(actual.immediate()));
+ li(a0, Operand(actual.immediate()));
+ } else {
+ Branch(&regular_invoke, eq, expected.reg(), Operand(actual.reg()));
+ }
+
+ if (!definitely_matches) {
+ if (!code_constant.is_null()) {
+ li(a3, Operand(code_constant));
+ daddiu(a3, a3, Code::kHeaderSize - kHeapObjectTag);
+ }
+
+ Handle<Code> adaptor =
+ isolate()->builtins()->ArgumentsAdaptorTrampoline();
+ if (flag == CALL_FUNCTION) {
+ call_wrapper.BeforeCall(CallSize(adaptor));
+ Call(adaptor);
+ call_wrapper.AfterCall();
+ if (!*definitely_mismatches) {
+ Branch(done);
+ }
+ } else {
+ Jump(adaptor, RelocInfo::CODE_TARGET);
+ }
+ bind(&regular_invoke);
+ }
+}
+
+
+void MacroAssembler::InvokeCode(Register code,
+ const ParameterCount& expected,
+ const ParameterCount& actual,
+ InvokeFlag flag,
+ const CallWrapper& call_wrapper) {
+ // You can't call a function without a valid frame.
+ DCHECK(flag == JUMP_FUNCTION || has_frame());
+
+ Label done;
+
+ bool definitely_mismatches = false;
+ InvokePrologue(expected, actual, Handle<Code>::null(), code,
+ &done, &definitely_mismatches, flag,
+ call_wrapper);
+ if (!definitely_mismatches) {
+ if (flag == CALL_FUNCTION) {
+ call_wrapper.BeforeCall(CallSize(code));
+ Call(code);
+ call_wrapper.AfterCall();
+ } else {
+ DCHECK(flag == JUMP_FUNCTION);
+ Jump(code);
+ }
+ // Continue here if InvokePrologue does handle the invocation due to
+ // mismatched parameter counts.
+ bind(&done); + } +} + + +void MacroAssembler::InvokeFunction(Register function, + const ParameterCount& actual, + InvokeFlag flag, + const CallWrapper& call_wrapper) { + // You can't call a function without a valid frame. + DCHECK(flag == JUMP_FUNCTION || has_frame()); + + // Contract with called JS functions requires that function is passed in a1. + DCHECK(function.is(a1)); + Register expected_reg = a2; + Register code_reg = a3; + ld(code_reg, FieldMemOperand(a1, JSFunction::kSharedFunctionInfoOffset)); + ld(cp, FieldMemOperand(a1, JSFunction::kContextOffset)); + // The argument count is stored as int32_t on 64-bit platforms. + // TODO(plind): Smi on 32-bit platforms. + lw(expected_reg, + FieldMemOperand(code_reg, + SharedFunctionInfo::kFormalParameterCountOffset)); + ld(code_reg, FieldMemOperand(a1, JSFunction::kCodeEntryOffset)); + ParameterCount expected(expected_reg); + InvokeCode(code_reg, expected, actual, flag, call_wrapper); +} + + +void MacroAssembler::InvokeFunction(Register function, + const ParameterCount& expected, + const ParameterCount& actual, + InvokeFlag flag, + const CallWrapper& call_wrapper) { + // You can't call a function without a valid frame. + DCHECK(flag == JUMP_FUNCTION || has_frame()); + + // Contract with called JS functions requires that function is passed in a1. + DCHECK(function.is(a1)); + + // Get the function and setup the context. + ld(cp, FieldMemOperand(a1, JSFunction::kContextOffset)); + + // We call indirectly through the code field in the function to + // allow recompilation to take effect without changing any of the + // call sites. + ld(a3, FieldMemOperand(a1, JSFunction::kCodeEntryOffset)); + InvokeCode(a3, expected, actual, flag, call_wrapper); +} + + +void MacroAssembler::InvokeFunction(Handle<JSFunction> function, + const ParameterCount& expected, + const ParameterCount& actual, + InvokeFlag flag, + const CallWrapper& call_wrapper) { + li(a1, function); + InvokeFunction(a1, expected, actual, flag, call_wrapper); +} + + +void MacroAssembler::IsObjectJSObjectType(Register heap_object, + Register map, + Register scratch, + Label* fail) { + ld(map, FieldMemOperand(heap_object, HeapObject::kMapOffset)); + IsInstanceJSObjectType(map, scratch, fail); +} + + +void MacroAssembler::IsInstanceJSObjectType(Register map, + Register scratch, + Label* fail) { + lbu(scratch, FieldMemOperand(map, Map::kInstanceTypeOffset)); + Branch(fail, lt, scratch, Operand(FIRST_NONCALLABLE_SPEC_OBJECT_TYPE)); + Branch(fail, gt, scratch, Operand(LAST_NONCALLABLE_SPEC_OBJECT_TYPE)); +} + + +void MacroAssembler::IsObjectJSStringType(Register object, + Register scratch, + Label* fail) { + DCHECK(kNotStringTag != 0); + + ld(scratch, FieldMemOperand(object, HeapObject::kMapOffset)); + lbu(scratch, FieldMemOperand(scratch, Map::kInstanceTypeOffset)); + And(scratch, scratch, Operand(kIsNotStringMask)); + Branch(fail, ne, scratch, Operand(zero_reg)); +} + + +void MacroAssembler::IsObjectNameType(Register object, + Register scratch, + Label* fail) { + ld(scratch, FieldMemOperand(object, HeapObject::kMapOffset)); + lbu(scratch, FieldMemOperand(scratch, Map::kInstanceTypeOffset)); + Branch(fail, hi, scratch, Operand(LAST_NAME_TYPE)); +} + + +// --------------------------------------------------------------------------- +// Support functions. 
+ + +void MacroAssembler::TryGetFunctionPrototype(Register function, + Register result, + Register scratch, + Label* miss, + bool miss_on_bound_function) { + Label non_instance; + if (miss_on_bound_function) { + // Check that the receiver isn't a smi. + JumpIfSmi(function, miss); + + // Check that the function really is a function. Load map into result reg. + GetObjectType(function, result, scratch); + Branch(miss, ne, scratch, Operand(JS_FUNCTION_TYPE)); + + ld(scratch, + FieldMemOperand(function, JSFunction::kSharedFunctionInfoOffset)); + lwu(scratch, + FieldMemOperand(scratch, SharedFunctionInfo::kCompilerHintsOffset)); + And(scratch, scratch, + Operand(1 << SharedFunctionInfo::kBoundFunction)); + Branch(miss, ne, scratch, Operand(zero_reg)); + + // Make sure that the function has an instance prototype. + lbu(scratch, FieldMemOperand(result, Map::kBitFieldOffset)); + And(scratch, scratch, Operand(1 << Map::kHasNonInstancePrototype)); + Branch(&non_instance, ne, scratch, Operand(zero_reg)); + } + + // Get the prototype or initial map from the function. + ld(result, + FieldMemOperand(function, JSFunction::kPrototypeOrInitialMapOffset)); + + // If the prototype or initial map is the hole, don't return it and + // simply miss the cache instead. This will allow us to allocate a + // prototype object on-demand in the runtime system. + LoadRoot(t8, Heap::kTheHoleValueRootIndex); + Branch(miss, eq, result, Operand(t8)); + + // If the function does not have an initial map, we're done. + Label done; + GetObjectType(result, scratch, scratch); + Branch(&done, ne, scratch, Operand(MAP_TYPE)); + + // Get the prototype from the initial map. + ld(result, FieldMemOperand(result, Map::kPrototypeOffset)); + + if (miss_on_bound_function) { + jmp(&done); + + // Non-instance prototype: Fetch prototype from constructor field + // in initial map. + bind(&non_instance); + ld(result, FieldMemOperand(result, Map::kConstructorOffset)); + } + + // All done. + bind(&done); +} + + +void MacroAssembler::GetObjectType(Register object, + Register map, + Register type_reg) { + ld(map, FieldMemOperand(object, HeapObject::kMapOffset)); + lbu(type_reg, FieldMemOperand(map, Map::kInstanceTypeOffset)); +} + + +// ----------------------------------------------------------------------------- +// Runtime calls. + +void MacroAssembler::CallStub(CodeStub* stub, + TypeFeedbackId ast_id, + Condition cond, + Register r1, + const Operand& r2, + BranchDelaySlot bd) { + DCHECK(AllowThisStubCall(stub)); // Stub calls are not allowed in some stubs. 
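+ // GetCode() returns the stub's (lazily created) code object; emitting
+ // the call as a CODE_TARGET lets the GC relocate it.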
+ Call(stub->GetCode(), RelocInfo::CODE_TARGET, ast_id, + cond, r1, r2, bd); +} + + +void MacroAssembler::TailCallStub(CodeStub* stub, + Condition cond, + Register r1, + const Operand& r2, + BranchDelaySlot bd) { + Jump(stub->GetCode(), RelocInfo::CODE_TARGET, cond, r1, r2, bd); +} + + +static int AddressOffset(ExternalReference ref0, ExternalReference ref1) { + int64_t offset = (ref0.address() - ref1.address()); + DCHECK(static_cast<int>(offset) == offset); + return static_cast<int>(offset); +} + + +void MacroAssembler::CallApiFunctionAndReturn( + Register function_address, + ExternalReference thunk_ref, + int stack_space, + MemOperand return_value_operand, + MemOperand* context_restore_operand) { + ExternalReference next_address = + ExternalReference::handle_scope_next_address(isolate()); + const int kNextOffset = 0; + const int kLimitOffset = AddressOffset( + ExternalReference::handle_scope_limit_address(isolate()), + next_address); + const int kLevelOffset = AddressOffset( + ExternalReference::handle_scope_level_address(isolate()), + next_address); + + DCHECK(function_address.is(a1) || function_address.is(a2)); + + Label profiler_disabled; + Label end_profiler_check; + li(t9, Operand(ExternalReference::is_profiling_address(isolate()))); + lb(t9, MemOperand(t9, 0)); + Branch(&profiler_disabled, eq, t9, Operand(zero_reg)); + + // Additional parameter is the address of the actual callback. + li(t9, Operand(thunk_ref)); + jmp(&end_profiler_check); + + bind(&profiler_disabled); + mov(t9, function_address); + bind(&end_profiler_check); + + // Allocate HandleScope in callee-save registers. + li(s3, Operand(next_address)); + ld(s0, MemOperand(s3, kNextOffset)); + ld(s1, MemOperand(s3, kLimitOffset)); + ld(s2, MemOperand(s3, kLevelOffset)); + Daddu(s2, s2, Operand(1)); + sd(s2, MemOperand(s3, kLevelOffset)); + + if (FLAG_log_timer_events) { + FrameScope frame(this, StackFrame::MANUAL); + PushSafepointRegisters(); + PrepareCallCFunction(1, a0); + li(a0, Operand(ExternalReference::isolate_address(isolate()))); + CallCFunction(ExternalReference::log_enter_external_function(isolate()), 1); + PopSafepointRegisters(); + } + + // Native call returns to the DirectCEntry stub which redirects to the + // return address pushed on stack (could have moved after GC). + // DirectCEntry stub itself is generated early and never moves. + DirectCEntryStub stub(isolate()); + stub.GenerateCall(this, t9); + + if (FLAG_log_timer_events) { + FrameScope frame(this, StackFrame::MANUAL); + PushSafepointRegisters(); + PrepareCallCFunction(1, a0); + li(a0, Operand(ExternalReference::isolate_address(isolate()))); + CallCFunction(ExternalReference::log_leave_external_function(isolate()), 1); + PopSafepointRegisters(); + } + + Label promote_scheduled_exception; + Label exception_handled; + Label delete_allocated_handles; + Label leave_exit_frame; + Label return_value_loaded; + + // Load value from ReturnValue. + ld(v0, return_value_operand); + bind(&return_value_loaded); + + // No more valid handles (the result handle was the last one). Restore + // previous handle scope. + sd(s0, MemOperand(s3, kNextOffset)); + if (emit_debug_code()) { + ld(a1, MemOperand(s3, kLevelOffset)); + Check(eq, kUnexpectedLevelAfterReturnFromApiCall, a1, Operand(s2)); + } + Dsubu(s2, s2, Operand(1)); + sd(s2, MemOperand(s3, kLevelOffset)); + ld(at, MemOperand(s3, kLimitOffset)); + Branch(&delete_allocated_handles, ne, s1, Operand(at)); + + // Check if the function scheduled an exception. 
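+ // The scheduled exception slot holds the hole value while no exception
+ // is pending.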
+ bind(&leave_exit_frame);
+ LoadRoot(a4, Heap::kTheHoleValueRootIndex);
+ li(at, Operand(ExternalReference::scheduled_exception_address(isolate())));
+ ld(a5, MemOperand(at));
+ Branch(&promote_scheduled_exception, ne, a4, Operand(a5));
+ bind(&exception_handled);
+
+ bool restore_context = context_restore_operand != NULL;
+ if (restore_context) {
+ ld(cp, *context_restore_operand);
+ }
+ li(s0, Operand(stack_space));
+ LeaveExitFrame(false, s0, !restore_context, EMIT_RETURN);
+
+ bind(&promote_scheduled_exception);
+ {
+ FrameScope frame(this, StackFrame::INTERNAL);
+ CallExternalReference(
+ ExternalReference(Runtime::kPromoteScheduledException, isolate()),
+ 0);
+ }
+ jmp(&exception_handled);
+
+ // HandleScope limit has changed. Delete allocated extensions.
+ bind(&delete_allocated_handles);
+ sd(s1, MemOperand(s3, kLimitOffset));
+ mov(s0, v0);
+ mov(a0, v0);
+ PrepareCallCFunction(1, s1);
+ li(a0, Operand(ExternalReference::isolate_address(isolate())));
+ CallCFunction(ExternalReference::delete_handle_scope_extensions(isolate()),
+ 1);
+ mov(v0, s0);
+ jmp(&leave_exit_frame);
+}
+
+
+bool MacroAssembler::AllowThisStubCall(CodeStub* stub) {
+ return has_frame_ || !stub->SometimesSetsUpAFrame();
+}
+
+
+void MacroAssembler::IndexFromHash(Register hash, Register index) {
+ // If the hash field contains an array index, pick it out. The assert checks
+ // that the constants for the maximum number of digits for an array index
+ // cached in the hash field and the number of bits reserved for it do not
+ // conflict.
+ DCHECK(TenToThe(String::kMaxCachedArrayIndexLength) <
+ (1 << String::kArrayIndexValueBits));
+ DecodeFieldToSmi<String::ArrayIndexValueBits>(index, hash);
+}
+
+
+void MacroAssembler::ObjectToDoubleFPURegister(Register object,
+ FPURegister result,
+ Register scratch1,
+ Register scratch2,
+ Register heap_number_map,
+ Label* not_number,
+ ObjectToDoubleFlags flags) {
+ Label done;
+ if ((flags & OBJECT_NOT_SMI) == 0) {
+ Label not_smi;
+ JumpIfNotSmi(object, &not_smi);
+ // Remove smi tag and convert to double.
+ // dsra(scratch1, object, kSmiTagSize);
+ dsra32(scratch1, object, 0);
+ mtc1(scratch1, result);
+ cvt_d_w(result, result);
+ Branch(&done);
+ bind(&not_smi);
+ }
+ // Check for heap number and load double value from it.
+ ld(scratch1, FieldMemOperand(object, HeapObject::kMapOffset));
+ Branch(not_number, ne, scratch1, Operand(heap_number_map));
+
+ if ((flags & AVOID_NANS_AND_INFINITIES) != 0) {
+ // If exponent is all ones the number is either a NaN or +/-Infinity.
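+ // Mask off the sign and mantissa, then compare the exponent field
+ // against the all-ones pattern.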
+ Register exponent = scratch1; + Register mask_reg = scratch2; + lwu(exponent, FieldMemOperand(object, HeapNumber::kExponentOffset)); + li(mask_reg, HeapNumber::kExponentMask); + + And(exponent, exponent, mask_reg); + Branch(not_number, eq, exponent, Operand(mask_reg)); + } + ldc1(result, FieldMemOperand(object, HeapNumber::kValueOffset)); + bind(&done); +} + + +void MacroAssembler::SmiToDoubleFPURegister(Register smi, + FPURegister value, + Register scratch1) { + // dsra(scratch1, smi, kSmiTagSize); + dsra32(scratch1, smi, 0); + mtc1(scratch1, value); + cvt_d_w(value, value); +} + + +void MacroAssembler::AdduAndCheckForOverflow(Register dst, + Register left, + Register right, + Register overflow_dst, + Register scratch) { + DCHECK(!dst.is(overflow_dst)); + DCHECK(!dst.is(scratch)); + DCHECK(!overflow_dst.is(scratch)); + DCHECK(!overflow_dst.is(left)); + DCHECK(!overflow_dst.is(right)); + + if (left.is(right) && dst.is(left)) { + DCHECK(!dst.is(t9)); + DCHECK(!scratch.is(t9)); + DCHECK(!left.is(t9)); + DCHECK(!right.is(t9)); + DCHECK(!overflow_dst.is(t9)); + mov(t9, right); + right = t9; + } + + if (dst.is(left)) { + mov(scratch, left); // Preserve left. + daddu(dst, left, right); // Left is overwritten. + xor_(scratch, dst, scratch); // Original left. + xor_(overflow_dst, dst, right); + and_(overflow_dst, overflow_dst, scratch); + } else if (dst.is(right)) { + mov(scratch, right); // Preserve right. + daddu(dst, left, right); // Right is overwritten. + xor_(scratch, dst, scratch); // Original right. + xor_(overflow_dst, dst, left); + and_(overflow_dst, overflow_dst, scratch); + } else { + daddu(dst, left, right); + xor_(overflow_dst, dst, left); + xor_(scratch, dst, right); + and_(overflow_dst, scratch, overflow_dst); + } +} + + +void MacroAssembler::SubuAndCheckForOverflow(Register dst, + Register left, + Register right, + Register overflow_dst, + Register scratch) { + DCHECK(!dst.is(overflow_dst)); + DCHECK(!dst.is(scratch)); + DCHECK(!overflow_dst.is(scratch)); + DCHECK(!overflow_dst.is(left)); + DCHECK(!overflow_dst.is(right)); + DCHECK(!scratch.is(left)); + DCHECK(!scratch.is(right)); + + // This happens with some crankshaft code. Since Subu works fine if + // left == right, let's not make that restriction here. + if (left.is(right)) { + mov(dst, zero_reg); + mov(overflow_dst, zero_reg); + return; + } + + if (dst.is(left)) { + mov(scratch, left); // Preserve left. + dsubu(dst, left, right); // Left is overwritten. + xor_(overflow_dst, dst, scratch); // scratch is original left. + xor_(scratch, scratch, right); // scratch is original left. + and_(overflow_dst, scratch, overflow_dst); + } else if (dst.is(right)) { + mov(scratch, right); // Preserve right. + dsubu(dst, left, right); // Right is overwritten. + xor_(overflow_dst, dst, left); + xor_(scratch, left, scratch); // Original right. + and_(overflow_dst, scratch, overflow_dst); + } else { + dsubu(dst, left, right); + xor_(overflow_dst, dst, left); + xor_(scratch, left, right); + and_(overflow_dst, scratch, overflow_dst); + } +} + + +void MacroAssembler::CallRuntime(const Runtime::Function* f, + int num_arguments, + SaveFPRegsMode save_doubles) { + // All parameters are on the stack. v0 has the return value after call. + + // If the expected number of arguments of the runtime function is + // constant, we check that the actual number of arguments match the + // expectation. 
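+  // (Editorial note: a negative nargs marks a variadic runtime function, so
+  // only fixed-arity functions have their argument count checked here.)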
+ CHECK(f->nargs < 0 || f->nargs == num_arguments); + + // TODO(1236192): Most runtime routines don't need the number of + // arguments passed in because it is constant. At some point we + // should remove this need and make the runtime routine entry code + // smarter. + PrepareCEntryArgs(num_arguments); + PrepareCEntryFunction(ExternalReference(f, isolate())); + CEntryStub stub(isolate(), 1, save_doubles); + CallStub(&stub); +} + + +void MacroAssembler::CallExternalReference(const ExternalReference& ext, + int num_arguments, + BranchDelaySlot bd) { + PrepareCEntryArgs(num_arguments); + PrepareCEntryFunction(ext); + + CEntryStub stub(isolate(), 1); + CallStub(&stub, TypeFeedbackId::None(), al, zero_reg, Operand(zero_reg), bd); +} + + +void MacroAssembler::TailCallExternalReference(const ExternalReference& ext, + int num_arguments, + int result_size) { + // TODO(1236192): Most runtime routines don't need the number of + // arguments passed in because it is constant. At some point we + // should remove this need and make the runtime routine entry code + // smarter. + PrepareCEntryArgs(num_arguments); + JumpToExternalReference(ext); +} + + +void MacroAssembler::TailCallRuntime(Runtime::FunctionId fid, + int num_arguments, + int result_size) { + TailCallExternalReference(ExternalReference(fid, isolate()), + num_arguments, + result_size); +} + + +void MacroAssembler::JumpToExternalReference(const ExternalReference& builtin, + BranchDelaySlot bd) { + PrepareCEntryFunction(builtin); + CEntryStub stub(isolate(), 1); + Jump(stub.GetCode(), + RelocInfo::CODE_TARGET, + al, + zero_reg, + Operand(zero_reg), + bd); +} + + +void MacroAssembler::InvokeBuiltin(Builtins::JavaScript id, + InvokeFlag flag, + const CallWrapper& call_wrapper) { + // You can't call a builtin without a valid frame. + DCHECK(flag == JUMP_FUNCTION || has_frame()); + + GetBuiltinEntry(t9, id); + if (flag == CALL_FUNCTION) { + call_wrapper.BeforeCall(CallSize(t9)); + Call(t9); + call_wrapper.AfterCall(); + } else { + DCHECK(flag == JUMP_FUNCTION); + Jump(t9); + } +} + + +void MacroAssembler::GetBuiltinFunction(Register target, + Builtins::JavaScript id) { + // Load the builtins object into target register. + ld(target, MemOperand(cp, Context::SlotOffset(Context::GLOBAL_OBJECT_INDEX))); + ld(target, FieldMemOperand(target, GlobalObject::kBuiltinsOffset)); + // Load the JavaScript builtin function from the builtins object. + ld(target, FieldMemOperand(target, + JSBuiltinsObject::OffsetOfFunctionWithId(id))); +} + + +void MacroAssembler::GetBuiltinEntry(Register target, Builtins::JavaScript id) { + DCHECK(!target.is(a1)); + GetBuiltinFunction(a1, id); + // Load the code entry point from the builtins object. 
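+  // (Editorial note: kCodeEntryOffset is expected to hold the address of the
+  // first instruction, so the loaded value can be jumped to directly.)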
+ ld(target, FieldMemOperand(a1, JSFunction::kCodeEntryOffset)); +} + + +void MacroAssembler::SetCounter(StatsCounter* counter, int value, + Register scratch1, Register scratch2) { + if (FLAG_native_code_counters && counter->Enabled()) { + li(scratch1, Operand(value)); + li(scratch2, Operand(ExternalReference(counter))); + sd(scratch1, MemOperand(scratch2)); + } +} + + +void MacroAssembler::IncrementCounter(StatsCounter* counter, int value, + Register scratch1, Register scratch2) { + DCHECK(value > 0); + if (FLAG_native_code_counters && counter->Enabled()) { + li(scratch2, Operand(ExternalReference(counter))); + ld(scratch1, MemOperand(scratch2)); + Daddu(scratch1, scratch1, Operand(value)); + sd(scratch1, MemOperand(scratch2)); + } +} + + +void MacroAssembler::DecrementCounter(StatsCounter* counter, int value, + Register scratch1, Register scratch2) { + DCHECK(value > 0); + if (FLAG_native_code_counters && counter->Enabled()) { + li(scratch2, Operand(ExternalReference(counter))); + ld(scratch1, MemOperand(scratch2)); + Dsubu(scratch1, scratch1, Operand(value)); + sd(scratch1, MemOperand(scratch2)); + } +} + + +// ----------------------------------------------------------------------------- +// Debugging. + +void MacroAssembler::Assert(Condition cc, BailoutReason reason, + Register rs, Operand rt) { + if (emit_debug_code()) + Check(cc, reason, rs, rt); +} + + +void MacroAssembler::AssertFastElements(Register elements) { + if (emit_debug_code()) { + DCHECK(!elements.is(at)); + Label ok; + push(elements); + ld(elements, FieldMemOperand(elements, HeapObject::kMapOffset)); + LoadRoot(at, Heap::kFixedArrayMapRootIndex); + Branch(&ok, eq, elements, Operand(at)); + LoadRoot(at, Heap::kFixedDoubleArrayMapRootIndex); + Branch(&ok, eq, elements, Operand(at)); + LoadRoot(at, Heap::kFixedCOWArrayMapRootIndex); + Branch(&ok, eq, elements, Operand(at)); + Abort(kJSObjectWithFastElementsMapHasSlowElements); + bind(&ok); + pop(elements); + } +} + + +void MacroAssembler::Check(Condition cc, BailoutReason reason, + Register rs, Operand rt) { + Label L; + Branch(&L, cc, rs, rt); + Abort(reason); + // Will not return here. + bind(&L); +} + + +void MacroAssembler::Abort(BailoutReason reason) { + Label abort_start; + bind(&abort_start); +#ifdef DEBUG + const char* msg = GetBailoutReason(reason); + if (msg != NULL) { + RecordComment("Abort message: "); + RecordComment(msg); + } + + if (FLAG_trap_on_abort) { + stop(msg); + return; + } +#endif + + li(a0, Operand(Smi::FromInt(reason))); + push(a0); + // Disable stub call restrictions to always allow calls to abort. + if (!has_frame_) { + // We don't actually want to generate a pile of code for this, so just + // claim there is a stack frame, without generating one. + FrameScope scope(this, StackFrame::NONE); + CallRuntime(Runtime::kAbort, 1); + } else { + CallRuntime(Runtime::kAbort, 1); + } + // Will not return here. + if (is_trampoline_pool_blocked()) { + // If the calling code cares about the exact number of + // instructions generated, we insert padding here to keep the size + // of the Abort macro constant. + // Currently in debug mode with debug_code enabled the number of + // generated instructions is 10, so we use this as a maximum value. 
+    static const int kExpectedAbortInstructions = 10;
+    int abort_instructions = InstructionsGeneratedSince(&abort_start);
+    DCHECK(abort_instructions <= kExpectedAbortInstructions);
+    while (abort_instructions++ < kExpectedAbortInstructions) {
+      nop();
+    }
+  }
+}
+
+
+void MacroAssembler::LoadContext(Register dst, int context_chain_length) {
+  if (context_chain_length > 0) {
+    // Move up the chain of contexts to the context containing the slot.
+    ld(dst, MemOperand(cp, Context::SlotOffset(Context::PREVIOUS_INDEX)));
+    for (int i = 1; i < context_chain_length; i++) {
+      ld(dst, MemOperand(dst, Context::SlotOffset(Context::PREVIOUS_INDEX)));
+    }
+  } else {
+    // Slot is in the current function context. Move it into the
+    // destination register in case we store into it (the write barrier
+    // cannot be allowed to destroy the context in cp).
+    Move(dst, cp);
+  }
+}
+
+
+void MacroAssembler::LoadTransitionedArrayMapConditional(
+    ElementsKind expected_kind,
+    ElementsKind transitioned_kind,
+    Register map_in_out,
+    Register scratch,
+    Label* no_map_match) {
+  // Load the global or builtins object from the current context.
+  ld(scratch,
+     MemOperand(cp, Context::SlotOffset(Context::GLOBAL_OBJECT_INDEX)));
+  ld(scratch, FieldMemOperand(scratch, GlobalObject::kNativeContextOffset));
+
+  // Check that the function's map is the same as the expected cached map.
+  ld(scratch,
+     MemOperand(scratch,
+                Context::SlotOffset(Context::JS_ARRAY_MAPS_INDEX)));
+  size_t offset = expected_kind * kPointerSize +
+      FixedArrayBase::kHeaderSize;
+  ld(at, FieldMemOperand(scratch, offset));
+  Branch(no_map_match, ne, map_in_out, Operand(at));
+
+  // Use the transitioned cached map.
+  offset = transitioned_kind * kPointerSize +
+      FixedArrayBase::kHeaderSize;
+  ld(map_in_out, FieldMemOperand(scratch, offset));
+}
+
+
+void MacroAssembler::LoadGlobalFunction(int index, Register function) {
+  // Load the global or builtins object from the current context.
+  ld(function,
+     MemOperand(cp, Context::SlotOffset(Context::GLOBAL_OBJECT_INDEX)));
+  // Load the native context from the global or builtins object.
+  ld(function, FieldMemOperand(function,
+                               GlobalObject::kNativeContextOffset));
+  // Load the function from the native context.
+  ld(function, MemOperand(function, Context::SlotOffset(index)));
+}
+
+
+void MacroAssembler::LoadGlobalFunctionInitialMap(Register function,
+                                                  Register map,
+                                                  Register scratch) {
+  // Load the initial map. The global functions all have initial maps.
+  ld(map, FieldMemOperand(function, JSFunction::kPrototypeOrInitialMapOffset));
+  if (emit_debug_code()) {
+    Label ok, fail;
+    CheckMap(map, scratch, Heap::kMetaMapRootIndex, &fail, DO_SMI_CHECK);
+    Branch(&ok);
+    bind(&fail);
+    Abort(kGlobalFunctionsMustHaveInitialMap);
+    bind(&ok);
+  }
+}
+
+
+void MacroAssembler::StubPrologue() {
+  Push(ra, fp, cp);
+  Push(Smi::FromInt(StackFrame::STUB));
+  // Adjust FP to point to saved FP.
+  Daddu(fp, sp, Operand(StandardFrameConstants::kFixedFrameSizeFromFp));
+}
+
+
+void MacroAssembler::Prologue(bool code_pre_aging) {
+  PredictableCodeSizeScope predictable_code_size_scope(
+      this, kNoCodeAgeSequenceLength);
+  // The following three instructions must remain together and unmodified
+  // for code aging to work properly.
+  if (code_pre_aging) {
+    // Pre-age the code.
+    Code* stub = Code::GetPreAgedCodeAgeStub(isolate());
+    nop(Assembler::CODE_AGE_MARKER_NOP);
+    // Load the stub address to t9 and call it,
+    // GetCodeAgeAndParity() extracts the stub address from this instruction.
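+    // (Editorial note: both branches of this function must emit code of
+    // exactly kNoCodeAgeSequenceLength bytes; the PredictableCodeSizeScope
+    // above enforces this, which is what the trailing padding nops are for.)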
+ li(t9, + Operand(reinterpret_cast<uint64_t>(stub->instruction_start())), + ADDRESS_LOAD); + nop(); // Prevent jalr to jal optimization. + jalr(t9, a0); + nop(); // Branch delay slot nop. + nop(); // Pad the empty space. + } else { + Push(ra, fp, cp, a1); + nop(Assembler::CODE_AGE_SEQUENCE_NOP); + nop(Assembler::CODE_AGE_SEQUENCE_NOP); + nop(Assembler::CODE_AGE_SEQUENCE_NOP); + // Adjust fp to point to caller's fp. + Daddu(fp, sp, Operand(StandardFrameConstants::kFixedFrameSizeFromFp)); + } +} + + +void MacroAssembler::EnterFrame(StackFrame::Type type) { + daddiu(sp, sp, -5 * kPointerSize); + li(t8, Operand(Smi::FromInt(type))); + li(t9, Operand(CodeObject()), CONSTANT_SIZE); + sd(ra, MemOperand(sp, 4 * kPointerSize)); + sd(fp, MemOperand(sp, 3 * kPointerSize)); + sd(cp, MemOperand(sp, 2 * kPointerSize)); + sd(t8, MemOperand(sp, 1 * kPointerSize)); + sd(t9, MemOperand(sp, 0 * kPointerSize)); + // Adjust FP to point to saved FP. + Daddu(fp, sp, + Operand(StandardFrameConstants::kFixedFrameSizeFromFp + kPointerSize)); +} + + +void MacroAssembler::LeaveFrame(StackFrame::Type type) { + mov(sp, fp); + ld(fp, MemOperand(sp, 0 * kPointerSize)); + ld(ra, MemOperand(sp, 1 * kPointerSize)); + daddiu(sp, sp, 2 * kPointerSize); +} + + +void MacroAssembler::EnterExitFrame(bool save_doubles, + int stack_space) { + // Set up the frame structure on the stack. + STATIC_ASSERT(2 * kPointerSize == ExitFrameConstants::kCallerSPDisplacement); + STATIC_ASSERT(1 * kPointerSize == ExitFrameConstants::kCallerPCOffset); + STATIC_ASSERT(0 * kPointerSize == ExitFrameConstants::kCallerFPOffset); + + // This is how the stack will look: + // fp + 2 (==kCallerSPDisplacement) - old stack's end + // [fp + 1 (==kCallerPCOffset)] - saved old ra + // [fp + 0 (==kCallerFPOffset)] - saved old fp + // [fp - 1 (==kSPOffset)] - sp of the called function + // [fp - 2 (==kCodeOffset)] - CodeObject + // fp - (2 + stack_space + alignment) == sp == [fp - kSPOffset] - top of the + // new stack (will contain saved ra) + + // Save registers. + daddiu(sp, sp, -4 * kPointerSize); + sd(ra, MemOperand(sp, 3 * kPointerSize)); + sd(fp, MemOperand(sp, 2 * kPointerSize)); + daddiu(fp, sp, 2 * kPointerSize); // Set up new frame pointer. + + if (emit_debug_code()) { + sd(zero_reg, MemOperand(fp, ExitFrameConstants::kSPOffset)); + } + + // Accessed from ExitFrame::code_slot. + li(t8, Operand(CodeObject()), CONSTANT_SIZE); + sd(t8, MemOperand(fp, ExitFrameConstants::kCodeOffset)); + + // Save the frame pointer and the context in top. + li(t8, Operand(ExternalReference(Isolate::kCEntryFPAddress, isolate()))); + sd(fp, MemOperand(t8)); + li(t8, Operand(ExternalReference(Isolate::kContextAddress, isolate()))); + sd(cp, MemOperand(t8)); + + const int frame_alignment = MacroAssembler::ActivationFrameAlignment(); + if (save_doubles) { + // The stack is already aligned to 0 modulo 8 for stores with sdc1. + int kNumOfSavedRegisters = FPURegister::kMaxNumRegisters / 2; + int space = kNumOfSavedRegisters * kDoubleSize ; + Dsubu(sp, sp, Operand(space)); + // Remember: we only need to save every 2nd double FPU value. + for (int i = 0; i < kNumOfSavedRegisters; i++) { + FPURegister reg = FPURegister::from_code(2 * i); + sdc1(reg, MemOperand(sp, i * kDoubleSize)); + } + } + + // Reserve place for the return address, stack space and an optional slot + // (used by the DirectCEntryStub to hold the return value if a struct is + // returned) and align the frame preparing for calling the runtime function. 
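+  // (Editorial note: (stack_space + 2) pointers cover the requested slots
+  // plus the reserved return-address slot and the optional struct-return
+  // slot; the And below rounds sp down to the power-of-two frame alignment.)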
+  DCHECK(stack_space >= 0);
+  Dsubu(sp, sp, Operand((stack_space + 2) * kPointerSize));
+  if (frame_alignment > 0) {
+    DCHECK(IsPowerOf2(frame_alignment));
+    And(sp, sp, Operand(-frame_alignment));  // Align stack.
+  }
+
+  // Set the exit frame sp value to point just before the return address
+  // location.
+  daddiu(at, sp, kPointerSize);
+  sd(at, MemOperand(fp, ExitFrameConstants::kSPOffset));
+}
+
+
+void MacroAssembler::LeaveExitFrame(bool save_doubles,
+                                    Register argument_count,
+                                    bool restore_context,
+                                    bool do_return) {
+  // Optionally restore all double registers.
+  if (save_doubles) {
+    // Remember: we only need to restore every 2nd double FPU value.
+    int kNumOfSavedRegisters = FPURegister::kMaxNumRegisters / 2;
+    Dsubu(t8, fp, Operand(ExitFrameConstants::kFrameSize +
+                          kNumOfSavedRegisters * kDoubleSize));
+    for (int i = 0; i < kNumOfSavedRegisters; i++) {
+      FPURegister reg = FPURegister::from_code(2 * i);
+      ldc1(reg, MemOperand(t8, i * kDoubleSize));
+    }
+  }
+
+  // Clear top frame.
+  li(t8, Operand(ExternalReference(Isolate::kCEntryFPAddress, isolate())));
+  sd(zero_reg, MemOperand(t8));
+
+  // Restore current context from top and clear it in debug mode.
+  if (restore_context) {
+    li(t8, Operand(ExternalReference(Isolate::kContextAddress, isolate())));
+    ld(cp, MemOperand(t8));
+  }
+#ifdef DEBUG
+  li(t8, Operand(ExternalReference(Isolate::kContextAddress, isolate())));
+  sd(a3, MemOperand(t8));
+#endif
+
+  // Pop the arguments, restore registers, and return.
+  mov(sp, fp);  // Respect ABI stack constraint.
+  ld(fp, MemOperand(sp, ExitFrameConstants::kCallerFPOffset));
+  ld(ra, MemOperand(sp, ExitFrameConstants::kCallerPCOffset));
+
+  if (argument_count.is_valid()) {
+    dsll(t8, argument_count, kPointerSizeLog2);
+    daddu(sp, sp, t8);
+  }
+
+  if (do_return) {
+    Ret(USE_DELAY_SLOT);
+    // If returning, the instruction in the delay slot will be the daddiu
+    // below.
+  }
+  daddiu(sp, sp, 2 * kPointerSize);
+}
+
+
+void MacroAssembler::InitializeNewString(Register string,
+                                         Register length,
+                                         Heap::RootListIndex map_index,
+                                         Register scratch1,
+                                         Register scratch2) {
+  // dsll(scratch1, length, kSmiTagSize);
+  dsll32(scratch1, length, 0);
+  LoadRoot(scratch2, map_index);
+  sd(scratch1, FieldMemOperand(string, String::kLengthOffset));
+  li(scratch1, Operand(String::kEmptyHashField));
+  sd(scratch2, FieldMemOperand(string, HeapObject::kMapOffset));
+  sd(scratch1, FieldMemOperand(string, String::kHashFieldOffset));
+}
+
+
+int MacroAssembler::ActivationFrameAlignment() {
+#if V8_HOST_ARCH_MIPS || V8_HOST_ARCH_MIPS64
+  // Running on the real platform. Use the alignment as mandated by the local
+  // environment.
+  // Note: This will break if we ever start generating snapshots on one Mips
+  // platform for another Mips platform with a different alignment.
+  return base::OS::ActivationFrameAlignment();
+#else  // V8_HOST_ARCH_MIPS
+  // If we are using the simulator then we should always align to the expected
+  // alignment. As the simulator is used to generate snapshots we do not know
+  // if the target platform will need alignment, so this is controlled from a
+  // flag.
+  return FLAG_sim_stack_alignment;
+#endif  // V8_HOST_ARCH_MIPS
+}
+
+
+void MacroAssembler::AssertStackIsAligned() {
+  if (emit_debug_code()) {
+    const int frame_alignment = ActivationFrameAlignment();
+    const int frame_alignment_mask = frame_alignment - 1;
+
+    if (frame_alignment > kPointerSize) {
+      Label alignment_as_expected;
+      DCHECK(IsPowerOf2(frame_alignment));
+      andi(at, sp, frame_alignment_mask);
+      Branch(&alignment_as_expected, eq, at, Operand(zero_reg));
+      // Don't use Check here, as it will call Runtime_Abort re-entering here.
+      stop("Unexpected stack alignment");
+      bind(&alignment_as_expected);
+    }
+  }
+}
+
+
+void MacroAssembler::JumpIfNotPowerOfTwoOrZero(
+    Register reg,
+    Register scratch,
+    Label* not_power_of_two_or_zero) {
+  Dsubu(scratch, reg, Operand(1));
+  Branch(USE_DELAY_SLOT, not_power_of_two_or_zero, lt,
+         scratch, Operand(zero_reg));
+  and_(at, scratch, reg);  // In the delay slot.
+  Branch(not_power_of_two_or_zero, ne, at, Operand(zero_reg));
+}
+
+
+void MacroAssembler::SmiTagCheckOverflow(Register reg, Register overflow) {
+  DCHECK(!reg.is(overflow));
+  mov(overflow, reg);  // Save original value.
+  SmiTag(reg);
+  xor_(overflow, overflow, reg);  // Overflow if (value ^ 2 * value) < 0.
+}
+
+
+void MacroAssembler::SmiTagCheckOverflow(Register dst,
+                                         Register src,
+                                         Register overflow) {
+  if (dst.is(src)) {
+    // Fall back to slower case.
+    SmiTagCheckOverflow(dst, overflow);
+  } else {
+    DCHECK(!dst.is(src));
+    DCHECK(!dst.is(overflow));
+    DCHECK(!src.is(overflow));
+    SmiTag(dst, src);
+    xor_(overflow, dst, src);  // Overflow if (value ^ 2 * value) < 0.
+  }
+}
+
+
+void MacroAssembler::SmiLoadUntag(Register dst, MemOperand src) {
+  if (SmiValuesAre32Bits()) {
+    lw(dst, UntagSmiMemOperand(src.rm(), src.offset()));
+  } else {
+    lw(dst, src);
+    SmiUntag(dst);
+  }
+}
+
+
+void MacroAssembler::SmiLoadScale(Register dst, MemOperand src, int scale) {
+  if (SmiValuesAre32Bits()) {
+    // TODO(plind): not clear if lw or ld faster here, need micro-benchmark.
+    lw(dst, UntagSmiMemOperand(src.rm(), src.offset()));
+    dsll(dst, dst, scale);
+  } else {
+    lw(dst, src);
+    DCHECK(scale >= kSmiTagSize);
+    sll(dst, dst, scale - kSmiTagSize);
+  }
+}
+
+
+// Returns 2 values: the Smi and a scaled version of the int within the Smi.
+void MacroAssembler::SmiLoadWithScale(Register d_smi,
+                                      Register d_scaled,
+                                      MemOperand src,
+                                      int scale) {
+  if (SmiValuesAre32Bits()) {
+    ld(d_smi, src);
+    dsra(d_scaled, d_smi, kSmiShift - scale);
+  } else {
+    lw(d_smi, src);
+    DCHECK(scale >= kSmiTagSize);
+    sll(d_scaled, d_smi, scale - kSmiTagSize);
+  }
+}
+
+
+// Returns 2 values: the untagged Smi (int32) and scaled version of that int.
+void MacroAssembler::SmiLoadUntagWithScale(Register d_int,
+                                           Register d_scaled,
+                                           MemOperand src,
+                                           int scale) {
+  if (SmiValuesAre32Bits()) {
+    lw(d_int, UntagSmiMemOperand(src.rm(), src.offset()));
+    dsll(d_scaled, d_int, scale);
+  } else {
+    lw(d_int, src);
+    // Need both the int and the scaled int, so use two instructions.
+    SmiUntag(d_int);
+    sll(d_scaled, d_int, scale);
+  }
+}
+
+
+void MacroAssembler::UntagAndJumpIfSmi(Register dst,
+                                       Register src,
+                                       Label* smi_case) {
+  // DCHECK(!dst.is(src));
+  JumpIfSmi(src, smi_case, at, USE_DELAY_SLOT);
+  SmiUntag(dst, src);
+}
+
+
+void MacroAssembler::UntagAndJumpIfNotSmi(Register dst,
+                                          Register src,
+                                          Label* non_smi_case) {
+  // DCHECK(!dst.is(src));
+  JumpIfNotSmi(src, non_smi_case, at, USE_DELAY_SLOT);
+  SmiUntag(dst, src);
+}
+
+void MacroAssembler::JumpIfSmi(Register value,
+                               Label* smi_label,
+                               Register scratch,
+                               BranchDelaySlot bd) {
+  DCHECK_EQ(0, kSmiTag);
+  andi(scratch, value, kSmiTagMask);
+  Branch(bd, smi_label, eq, scratch, Operand(zero_reg));
+}
+
+void MacroAssembler::JumpIfNotSmi(Register value,
+                                  Label* not_smi_label,
+                                  Register scratch,
+                                  BranchDelaySlot bd) {
+  DCHECK_EQ(0, kSmiTag);
+  andi(scratch, value, kSmiTagMask);
+  Branch(bd, not_smi_label, ne, scratch, Operand(zero_reg));
+}
+
+
+void MacroAssembler::JumpIfNotBothSmi(Register reg1,
+                                      Register reg2,
+                                      Label* on_not_both_smi) {
+  STATIC_ASSERT(kSmiTag == 0);
+  // TODO(plind): Find some better way to fix this assert issue.
+#if defined(__APPLE__)
+  DCHECK_EQ(1, kSmiTagMask);
+#else
+  DCHECK_EQ((uint64_t)1, kSmiTagMask);
+#endif
+  or_(at, reg1, reg2);
+  JumpIfNotSmi(at, on_not_both_smi);
+}
+
+
+void MacroAssembler::JumpIfEitherSmi(Register reg1,
+                                     Register reg2,
+                                     Label* on_either_smi) {
+  STATIC_ASSERT(kSmiTag == 0);
+  // TODO(plind): Find some better way to fix this assert issue.
+#if defined(__APPLE__)
+  DCHECK_EQ(1, kSmiTagMask);
+#else
+  DCHECK_EQ((uint64_t)1, kSmiTagMask);
+#endif
+  // The AND can only be a non-Smi if both tag bits are 1, i.e. if neither
+  // register holds a Smi.
+  and_(at, reg1, reg2);
+  JumpIfSmi(at, on_either_smi);
+}
+
+
+void MacroAssembler::AssertNotSmi(Register object) {
+  if (emit_debug_code()) {
+    STATIC_ASSERT(kSmiTag == 0);
+    andi(at, object, kSmiTagMask);
+    Check(ne, kOperandIsASmi, at, Operand(zero_reg));
+  }
+}
+
+
+void MacroAssembler::AssertSmi(Register object) {
+  if (emit_debug_code()) {
+    STATIC_ASSERT(kSmiTag == 0);
+    andi(at, object, kSmiTagMask);
+    Check(eq, kOperandIsASmi, at, Operand(zero_reg));
+  }
+}
+
+
+void MacroAssembler::AssertString(Register object) {
+  if (emit_debug_code()) {
+    STATIC_ASSERT(kSmiTag == 0);
+    SmiTst(object, a4);
+    Check(ne, kOperandIsASmiAndNotAString, a4, Operand(zero_reg));
+    push(object);
+    ld(object, FieldMemOperand(object, HeapObject::kMapOffset));
+    lbu(object, FieldMemOperand(object, Map::kInstanceTypeOffset));
+    Check(lo, kOperandIsNotAString, object, Operand(FIRST_NONSTRING_TYPE));
+    pop(object);
+  }
+}
+
+
+void MacroAssembler::AssertName(Register object) {
+  if (emit_debug_code()) {
+    STATIC_ASSERT(kSmiTag == 0);
+    SmiTst(object, a4);
+    Check(ne, kOperandIsASmiAndNotAName, a4, Operand(zero_reg));
+    push(object);
+    ld(object, FieldMemOperand(object, HeapObject::kMapOffset));
+    lbu(object, FieldMemOperand(object, Map::kInstanceTypeOffset));
+    Check(le, kOperandIsNotAName, object, Operand(LAST_NAME_TYPE));
+    pop(object);
+  }
+}
+
+
+void MacroAssembler::AssertUndefinedOrAllocationSite(Register object,
+                                                     Register scratch) {
+  if (emit_debug_code()) {
+    Label done_checking;
+    AssertNotSmi(object);
+    LoadRoot(scratch, Heap::kUndefinedValueRootIndex);
+    Branch(&done_checking, eq, object, Operand(scratch));
+    push(object);
+    ld(object, FieldMemOperand(object, HeapObject::kMapOffset));
+    LoadRoot(scratch, Heap::kAllocationSiteMapRootIndex);
+    Assert(eq, kExpectedUndefinedOrCell, object, Operand(scratch));
+    pop(object);
+    bind(&done_checking);
+  }
+}
+
+
+void MacroAssembler::AssertIsRoot(Register reg, Heap::RootListIndex index) {
+  if (emit_debug_code()) {
+    DCHECK(!reg.is(at));
+    LoadRoot(at, index);
+    Check(eq, kHeapNumberMapRegisterClobbered, reg, Operand(at));
+  }
+}
+
+
+void MacroAssembler::JumpIfNotHeapNumber(Register object,
+                                         Register heap_number_map,
+                                         Register scratch,
+                                         Label* on_not_heap_number) {
+  ld(scratch, FieldMemOperand(object, HeapObject::kMapOffset));
+  AssertIsRoot(heap_number_map, Heap::kHeapNumberMapRootIndex);
+  Branch(on_not_heap_number, ne, scratch, Operand(heap_number_map));
+}
+
+
+void MacroAssembler::LookupNumberStringCache(Register object,
+                                             Register result,
+                                             Register scratch1,
+                                             Register scratch2,
+                                             Register scratch3,
+                                             Label* not_found) {
+  // Use of registers. Register result is used as a temporary.
+  Register number_string_cache = result;
+  Register mask = scratch3;
+
+  // Load the number string cache.
+  LoadRoot(number_string_cache, Heap::kNumberStringCacheRootIndex);
+
+  // Make the hash mask from the length of the number string cache. It
+  // contains two elements (number and string) for each cache entry.
+  ld(mask, FieldMemOperand(number_string_cache, FixedArray::kLengthOffset));
+  // Divide length by two (length is a smi).
+  // dsra(mask, mask, kSmiTagSize + 1);
+  dsra32(mask, mask, 1);
+  Daddu(mask, mask, -1);  // Make mask.
+
+  // Calculate the entry in the number string cache. The hash value in the
+  // number string cache for smis is just the smi value, and the hash for
+  // doubles is the xor of the upper and lower words. See
+  // Heap::GetNumberStringCache.
+  Label is_smi;
+  Label load_result_from_cache;
+  JumpIfSmi(object, &is_smi);
+  CheckMap(object,
+           scratch1,
+           Heap::kHeapNumberMapRootIndex,
+           not_found,
+           DONT_DO_SMI_CHECK);
+
+  STATIC_ASSERT(8 == kDoubleSize);
+  Daddu(scratch1,
+        object,
+        Operand(HeapNumber::kValueOffset - kHeapObjectTag));
+  ld(scratch2, MemOperand(scratch1, kPointerSize));
+  ld(scratch1, MemOperand(scratch1, 0));
+  Xor(scratch1, scratch1, Operand(scratch2));
+  And(scratch1, scratch1, Operand(mask));
+
+  // Calculate address of entry in string cache: each entry consists
+  // of two pointer sized fields.
+  dsll(scratch1, scratch1, kPointerSizeLog2 + 1);
+  Daddu(scratch1, number_string_cache, scratch1);
+
+  Register probe = mask;
+  ld(probe, FieldMemOperand(scratch1, FixedArray::kHeaderSize));
+  JumpIfSmi(probe, not_found);
+  ldc1(f12, FieldMemOperand(object, HeapNumber::kValueOffset));
+  ldc1(f14, FieldMemOperand(probe, HeapNumber::kValueOffset));
+  BranchF(&load_result_from_cache, NULL, eq, f12, f14);
+  Branch(not_found);
+
+  bind(&is_smi);
+  Register scratch = scratch1;
+  // dsra(scratch, object, 1);   // Shift away the tag.
+  dsra32(scratch, object, 0);
+  And(scratch, mask, Operand(scratch));
+
+  // Calculate address of entry in string cache: each entry consists
+  // of two pointer sized fields.
+  dsll(scratch, scratch, kPointerSizeLog2 + 1);
+  Daddu(scratch, number_string_cache, scratch);
+
+  // Check if the entry is the smi we are looking for.
+  ld(probe, FieldMemOperand(scratch, FixedArray::kHeaderSize));
+  Branch(not_found, ne, object, Operand(probe));
+
+  // Get the result from the cache.
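+  // (Editorial note: each cache entry is a [number, string] pair of adjacent
+  // pointer-sized fields, so the cached string sits one word past the
+  // matched key.)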
+ bind(&load_result_from_cache); + ld(result, FieldMemOperand(scratch, FixedArray::kHeaderSize + kPointerSize)); + + IncrementCounter(isolate()->counters()->number_to_string_native(), + 1, + scratch1, + scratch2); +} + + +void MacroAssembler::JumpIfNonSmisNotBothSequentialAsciiStrings( + Register first, + Register second, + Register scratch1, + Register scratch2, + Label* failure) { + // Test that both first and second are sequential ASCII strings. + // Assume that they are non-smis. + ld(scratch1, FieldMemOperand(first, HeapObject::kMapOffset)); + ld(scratch2, FieldMemOperand(second, HeapObject::kMapOffset)); + lbu(scratch1, FieldMemOperand(scratch1, Map::kInstanceTypeOffset)); + lbu(scratch2, FieldMemOperand(scratch2, Map::kInstanceTypeOffset)); + + JumpIfBothInstanceTypesAreNotSequentialAscii(scratch1, + scratch2, + scratch1, + scratch2, + failure); +} + + +void MacroAssembler::JumpIfNotBothSequentialAsciiStrings(Register first, + Register second, + Register scratch1, + Register scratch2, + Label* failure) { + // Check that neither is a smi. + STATIC_ASSERT(kSmiTag == 0); + And(scratch1, first, Operand(second)); + JumpIfSmi(scratch1, failure); + JumpIfNonSmisNotBothSequentialAsciiStrings(first, + second, + scratch1, + scratch2, + failure); +} + + +void MacroAssembler::JumpIfBothInstanceTypesAreNotSequentialAscii( + Register first, + Register second, + Register scratch1, + Register scratch2, + Label* failure) { + const int kFlatAsciiStringMask = + kIsNotStringMask | kStringEncodingMask | kStringRepresentationMask; + const int kFlatAsciiStringTag = + kStringTag | kOneByteStringTag | kSeqStringTag; + DCHECK(kFlatAsciiStringTag <= 0xffff); // Ensure this fits 16-bit immed. + andi(scratch1, first, kFlatAsciiStringMask); + Branch(failure, ne, scratch1, Operand(kFlatAsciiStringTag)); + andi(scratch2, second, kFlatAsciiStringMask); + Branch(failure, ne, scratch2, Operand(kFlatAsciiStringTag)); +} + + +void MacroAssembler::JumpIfInstanceTypeIsNotSequentialAscii(Register type, + Register scratch, + Label* failure) { + const int kFlatAsciiStringMask = + kIsNotStringMask | kStringEncodingMask | kStringRepresentationMask; + const int kFlatAsciiStringTag = + kStringTag | kOneByteStringTag | kSeqStringTag; + And(scratch, type, Operand(kFlatAsciiStringMask)); + Branch(failure, ne, scratch, Operand(kFlatAsciiStringTag)); +} + + +static const int kRegisterPassedArguments = (kMipsAbi == kN64) ? 8 : 4; + +int MacroAssembler::CalculateStackPassedWords(int num_reg_arguments, + int num_double_arguments) { + int stack_passed_words = 0; + num_reg_arguments += 2 * num_double_arguments; + + // O32: Up to four simple arguments are passed in registers a0..a3. + // N64: Up to eight simple arguments are passed in registers a0..a7. + if (num_reg_arguments > kRegisterPassedArguments) { + stack_passed_words += num_reg_arguments - kRegisterPassedArguments; + } + stack_passed_words += kCArgSlotCount; + return stack_passed_words; +} + + +void MacroAssembler::EmitSeqStringSetCharCheck(Register string, + Register index, + Register value, + Register scratch, + uint32_t encoding_mask) { + Label is_object; + SmiTst(string, at); + Check(ne, kNonObject, at, Operand(zero_reg)); + + ld(at, FieldMemOperand(string, HeapObject::kMapOffset)); + lbu(at, FieldMemOperand(at, Map::kInstanceTypeOffset)); + + andi(at, at, kStringRepresentationMask | kStringEncodingMask); + li(scratch, Operand(encoding_mask)); + Check(eq, kUnexpectedStringType, at, Operand(scratch)); + + // TODO(plind): requires Smi size check code for mips32. 
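+  // (Editorial note: index and the length field are both expected to be
+  // Smis here, so the tagged values can be compared directly below.)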
+
+  ld(at, FieldMemOperand(string, String::kLengthOffset));
+  Check(lt, kIndexIsTooLarge, index, Operand(at));
+
+  DCHECK(Smi::FromInt(0) == 0);
+  Check(ge, kIndexIsNegative, index, Operand(zero_reg));
+}
+
+
+void MacroAssembler::PrepareCallCFunction(int num_reg_arguments,
+                                          int num_double_arguments,
+                                          Register scratch) {
+  int frame_alignment = ActivationFrameAlignment();
+
+  // n64: Up to eight simple arguments in a0..a3, a4..a7. No argument slots.
+  // O32: Up to four simple arguments are passed in registers a0..a3.
+  // Those four arguments must have reserved argument slots on the stack for
+  // mips, even though those argument slots are not normally used.
+  // Both ABIs: Remaining arguments are pushed on the stack, above (higher
+  // address than) the (O32) argument slots. (arg slot calculation handled by
+  // CalculateStackPassedWords()).
+  int stack_passed_arguments = CalculateStackPassedWords(
+      num_reg_arguments, num_double_arguments);
+  if (frame_alignment > kPointerSize) {
+    // Make stack end at alignment and make room for the stack-passed
+    // arguments and the original value of sp.
+    mov(scratch, sp);
+    Dsubu(sp, sp, Operand((stack_passed_arguments + 1) * kPointerSize));
+    DCHECK(IsPowerOf2(frame_alignment));
+    And(sp, sp, Operand(-frame_alignment));
+    sd(scratch, MemOperand(sp, stack_passed_arguments * kPointerSize));
+  } else {
+    Dsubu(sp, sp, Operand(stack_passed_arguments * kPointerSize));
+  }
+}
+
+
+void MacroAssembler::PrepareCallCFunction(int num_reg_arguments,
+                                          Register scratch) {
+  PrepareCallCFunction(num_reg_arguments, 0, scratch);
+}
+
+
+void MacroAssembler::CallCFunction(ExternalReference function,
+                                   int num_reg_arguments,
+                                   int num_double_arguments) {
+  li(t8, Operand(function));
+  CallCFunctionHelper(t8, num_reg_arguments, num_double_arguments);
+}
+
+
+void MacroAssembler::CallCFunction(Register function,
+                                   int num_reg_arguments,
+                                   int num_double_arguments) {
+  CallCFunctionHelper(function, num_reg_arguments, num_double_arguments);
+}
+
+
+void MacroAssembler::CallCFunction(ExternalReference function,
+                                   int num_arguments) {
+  CallCFunction(function, num_arguments, 0);
+}
+
+
+void MacroAssembler::CallCFunction(Register function,
+                                   int num_arguments) {
+  CallCFunction(function, num_arguments, 0);
+}
+
+
+void MacroAssembler::CallCFunctionHelper(Register function,
+                                         int num_reg_arguments,
+                                         int num_double_arguments) {
+  DCHECK(has_frame());
+  // Make sure that the stack is aligned before calling a C function unless
+  // running in the simulator. The simulator has its own alignment check which
+  // provides more information.
+  // The argument slots are presumed to have been set up by
+  // PrepareCallCFunction. The C function must be called via t9, for mips ABI.
+
+#if V8_HOST_ARCH_MIPS || V8_HOST_ARCH_MIPS64
+  if (emit_debug_code()) {
+    int frame_alignment = base::OS::ActivationFrameAlignment();
+    int frame_alignment_mask = frame_alignment - 1;
+    if (frame_alignment > kPointerSize) {
+      DCHECK(IsPowerOf2(frame_alignment));
+      Label alignment_as_expected;
+      And(at, sp, Operand(frame_alignment_mask));
+      Branch(&alignment_as_expected, eq, at, Operand(zero_reg));
+      // Don't use Check here, as it will call Runtime_Abort possibly
+      // re-entering here.
+      stop("Unexpected alignment in CallCFunction");
+      bind(&alignment_as_expected);
+    }
+  }
+#endif  // V8_HOST_ARCH_MIPS
+
+  // Just call directly. The function called cannot cause a GC, or
+  // allow preemption, so the return address in the link register
+  // stays correct.
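+  // (Editorial note: per the MIPS ABI the indirect call must go through t9,
+  // so the target is moved there first when necessary.)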
+ + if (!function.is(t9)) { + mov(t9, function); + function = t9; + } + + Call(function); + + int stack_passed_arguments = CalculateStackPassedWords( + num_reg_arguments, num_double_arguments); + + if (base::OS::ActivationFrameAlignment() > kPointerSize) { + ld(sp, MemOperand(sp, stack_passed_arguments * kPointerSize)); + } else { + Daddu(sp, sp, Operand(stack_passed_arguments * kPointerSize)); + } +} + + +#undef BRANCH_ARGS_CHECK + + +void MacroAssembler::PatchRelocatedValue(Register li_location, + Register scratch, + Register new_value) { + lwu(scratch, MemOperand(li_location)); + // At this point scratch is a lui(at, ...) instruction. + if (emit_debug_code()) { + And(scratch, scratch, kOpcodeMask); + Check(eq, kTheInstructionToPatchShouldBeALui, + scratch, Operand(LUI)); + lwu(scratch, MemOperand(li_location)); + } + dsrl32(t9, new_value, 0); + Ins(scratch, t9, 0, kImm16Bits); + sw(scratch, MemOperand(li_location)); + + lwu(scratch, MemOperand(li_location, kInstrSize)); + // scratch is now ori(at, ...). + if (emit_debug_code()) { + And(scratch, scratch, kOpcodeMask); + Check(eq, kTheInstructionToPatchShouldBeAnOri, + scratch, Operand(ORI)); + lwu(scratch, MemOperand(li_location, kInstrSize)); + } + dsrl(t9, new_value, kImm16Bits); + Ins(scratch, t9, 0, kImm16Bits); + sw(scratch, MemOperand(li_location, kInstrSize)); + + lwu(scratch, MemOperand(li_location, kInstrSize * 3)); + // scratch is now ori(at, ...). + if (emit_debug_code()) { + And(scratch, scratch, kOpcodeMask); + Check(eq, kTheInstructionToPatchShouldBeAnOri, + scratch, Operand(ORI)); + lwu(scratch, MemOperand(li_location, kInstrSize * 3)); + } + + Ins(scratch, new_value, 0, kImm16Bits); + sw(scratch, MemOperand(li_location, kInstrSize * 3)); + + // Update the I-cache so the new lui and ori can be executed. + FlushICache(li_location, 4); +} + +void MacroAssembler::GetRelocatedValue(Register li_location, + Register value, + Register scratch) { + lwu(value, MemOperand(li_location)); + if (emit_debug_code()) { + And(value, value, kOpcodeMask); + Check(eq, kTheInstructionShouldBeALui, + value, Operand(LUI)); + lwu(value, MemOperand(li_location)); + } + + // value now holds a lui instruction. Extract the immediate. + andi(value, value, kImm16Mask); + dsll32(value, value, kImm16Bits); + + lwu(scratch, MemOperand(li_location, kInstrSize)); + if (emit_debug_code()) { + And(scratch, scratch, kOpcodeMask); + Check(eq, kTheInstructionShouldBeAnOri, + scratch, Operand(ORI)); + lwu(scratch, MemOperand(li_location, kInstrSize)); + } + // "scratch" now holds an ori instruction. Extract the immediate. + andi(scratch, scratch, kImm16Mask); + dsll32(scratch, scratch, 0); + + or_(value, value, scratch); + + lwu(scratch, MemOperand(li_location, kInstrSize * 3)); + if (emit_debug_code()) { + And(scratch, scratch, kOpcodeMask); + Check(eq, kTheInstructionShouldBeAnOri, + scratch, Operand(ORI)); + lwu(scratch, MemOperand(li_location, kInstrSize * 3)); + } + // "scratch" now holds an ori instruction. Extract the immediate. + andi(scratch, scratch, kImm16Mask); + dsll(scratch, scratch, kImm16Bits); + + or_(value, value, scratch); + // Sign extend extracted address. 
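+  // (Editorial note: the three 16-bit immediates were assembled into bits
+  // 63..48, 47..32 and 31..16; the arithmetic shift below moves them down
+  // 16 bits while replicating the sign bit, yielding a canonical
+  // sign-extended 48-bit address.)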
+ dsra(value, value, kImm16Bits); +} + + +void MacroAssembler::CheckPageFlag( + Register object, + Register scratch, + int mask, + Condition cc, + Label* condition_met) { + And(scratch, object, Operand(~Page::kPageAlignmentMask)); + ld(scratch, MemOperand(scratch, MemoryChunk::kFlagsOffset)); + And(scratch, scratch, Operand(mask)); + Branch(condition_met, cc, scratch, Operand(zero_reg)); +} + + +void MacroAssembler::CheckMapDeprecated(Handle<Map> map, + Register scratch, + Label* if_deprecated) { + if (map->CanBeDeprecated()) { + li(scratch, Operand(map)); + ld(scratch, FieldMemOperand(scratch, Map::kBitField3Offset)); + And(scratch, scratch, Operand(Map::Deprecated::kMask)); + Branch(if_deprecated, ne, scratch, Operand(zero_reg)); + } +} + + +void MacroAssembler::JumpIfBlack(Register object, + Register scratch0, + Register scratch1, + Label* on_black) { + HasColor(object, scratch0, scratch1, on_black, 1, 0); // kBlackBitPattern. + DCHECK(strcmp(Marking::kBlackBitPattern, "10") == 0); +} + + +void MacroAssembler::HasColor(Register object, + Register bitmap_scratch, + Register mask_scratch, + Label* has_color, + int first_bit, + int second_bit) { + DCHECK(!AreAliased(object, bitmap_scratch, mask_scratch, t8)); + DCHECK(!AreAliased(object, bitmap_scratch, mask_scratch, t9)); + + GetMarkBits(object, bitmap_scratch, mask_scratch); + + Label other_color; + // Note that we are using a 4-byte aligned 8-byte load. + Uld(t9, MemOperand(bitmap_scratch, MemoryChunk::kHeaderSize)); + And(t8, t9, Operand(mask_scratch)); + Branch(&other_color, first_bit == 1 ? eq : ne, t8, Operand(zero_reg)); + // Shift left 1 by adding. + Daddu(mask_scratch, mask_scratch, Operand(mask_scratch)); + And(t8, t9, Operand(mask_scratch)); + Branch(has_color, second_bit == 1 ? ne : eq, t8, Operand(zero_reg)); + + bind(&other_color); +} + + +// Detect some, but not all, common pointer-free objects. This is used by the +// incremental write barrier which doesn't care about oddballs (they are always +// marked black immediately so this code is not hit). +void MacroAssembler::JumpIfDataObject(Register value, + Register scratch, + Label* not_data_object) { + DCHECK(!AreAliased(value, scratch, t8, no_reg)); + Label is_data_object; + ld(scratch, FieldMemOperand(value, HeapObject::kMapOffset)); + LoadRoot(t8, Heap::kHeapNumberMapRootIndex); + Branch(&is_data_object, eq, t8, Operand(scratch)); + DCHECK(kIsIndirectStringTag == 1 && kIsIndirectStringMask == 1); + DCHECK(kNotStringTag == 0x80 && kIsNotStringMask == 0x80); + // If it's a string and it's not a cons string then it's an object containing + // no GC pointers. + lbu(scratch, FieldMemOperand(scratch, Map::kInstanceTypeOffset)); + And(t8, scratch, Operand(kIsIndirectStringMask | kIsNotStringMask)); + Branch(not_data_object, ne, t8, Operand(zero_reg)); + bind(&is_data_object); +} + + +void MacroAssembler::GetMarkBits(Register addr_reg, + Register bitmap_reg, + Register mask_reg) { + DCHECK(!AreAliased(addr_reg, bitmap_reg, mask_reg, no_reg)); + // addr_reg is divided into fields: + // |63 page base 20|19 high 8|7 shift 3|2 0| + // 'high' gives the index of the cell holding color bits for the object. + // 'shift' gives the offset in the cell for this object's color. 
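+  // (Editorial example: an object at page offset 0xA48 is word
+  // 0xA48 >> 3 = 329, so its color bits live in cell 329 / 32 = 10 at bit
+  // 329 % 32 = 9, and mask_reg ends up as 1 << 9.)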
+ And(bitmap_reg, addr_reg, Operand(~Page::kPageAlignmentMask)); + Ext(mask_reg, addr_reg, kPointerSizeLog2, Bitmap::kBitsPerCellLog2); + const int kLowBits = kPointerSizeLog2 + Bitmap::kBitsPerCellLog2; + Ext(t8, addr_reg, kLowBits, kPageSizeBits - kLowBits); + dsll(t8, t8, Bitmap::kBytesPerCellLog2); + Daddu(bitmap_reg, bitmap_reg, t8); + li(t8, Operand(1)); + dsllv(mask_reg, t8, mask_reg); +} + + +void MacroAssembler::EnsureNotWhite( + Register value, + Register bitmap_scratch, + Register mask_scratch, + Register load_scratch, + Label* value_is_white_and_not_data) { + DCHECK(!AreAliased(value, bitmap_scratch, mask_scratch, t8)); + GetMarkBits(value, bitmap_scratch, mask_scratch); + + // If the value is black or grey we don't need to do anything. + DCHECK(strcmp(Marking::kWhiteBitPattern, "00") == 0); + DCHECK(strcmp(Marking::kBlackBitPattern, "10") == 0); + DCHECK(strcmp(Marking::kGreyBitPattern, "11") == 0); + DCHECK(strcmp(Marking::kImpossibleBitPattern, "01") == 0); + + Label done; + + // Since both black and grey have a 1 in the first position and white does + // not have a 1 there we only need to check one bit. + // Note that we are using a 4-byte aligned 8-byte load. + Uld(load_scratch, MemOperand(bitmap_scratch, MemoryChunk::kHeaderSize)); + And(t8, mask_scratch, load_scratch); + Branch(&done, ne, t8, Operand(zero_reg)); + + if (emit_debug_code()) { + // Check for impossible bit pattern. + Label ok; + // sll may overflow, making the check conservative. + dsll(t8, mask_scratch, 1); + And(t8, load_scratch, t8); + Branch(&ok, eq, t8, Operand(zero_reg)); + stop("Impossible marking bit pattern"); + bind(&ok); + } + + // Value is white. We check whether it is data that doesn't need scanning. + // Currently only checks for HeapNumber and non-cons strings. + Register map = load_scratch; // Holds map while checking type. + Register length = load_scratch; // Holds length of object after testing type. + Label is_data_object; + + // Check for heap-number + ld(map, FieldMemOperand(value, HeapObject::kMapOffset)); + LoadRoot(t8, Heap::kHeapNumberMapRootIndex); + { + Label skip; + Branch(&skip, ne, t8, Operand(map)); + li(length, HeapNumber::kSize); + Branch(&is_data_object); + bind(&skip); + } + + // Check for strings. + DCHECK(kIsIndirectStringTag == 1 && kIsIndirectStringMask == 1); + DCHECK(kNotStringTag == 0x80 && kIsNotStringMask == 0x80); + // If it's a string and it's not a cons string then it's an object containing + // no GC pointers. + Register instance_type = load_scratch; + lbu(instance_type, FieldMemOperand(map, Map::kInstanceTypeOffset)); + And(t8, instance_type, Operand(kIsIndirectStringMask | kIsNotStringMask)); + Branch(value_is_white_and_not_data, ne, t8, Operand(zero_reg)); + // It's a non-indirect (non-cons and non-slice) string. + // If it's external, the length is just ExternalString::kSize. + // Otherwise it's String::kHeaderSize + string->length() * (1 or 2). + // External strings are the only ones with the kExternalStringTag bit + // set. + DCHECK_EQ(0, kSeqStringTag & kExternalStringTag); + DCHECK_EQ(0, kConsStringTag & kExternalStringTag); + And(t8, instance_type, Operand(kExternalStringTag)); + { + Label skip; + Branch(&skip, eq, t8, Operand(zero_reg)); + li(length, ExternalString::kSize); + Branch(&is_data_object); + bind(&skip); + } + + // Sequential string, either ASCII or UC16. + // For ASCII (char-size of 1) we shift the smi tag away to get the length. 
+ // For UC16 (char-size of 2) we just leave the smi tag in place, thereby + // getting the length multiplied by 2. + DCHECK(kOneByteStringTag == 4 && kStringEncodingMask == 4); + DCHECK(kSmiTag == 0 && kSmiTagSize == 1); + lw(t9, UntagSmiFieldMemOperand(value, String::kLengthOffset)); + And(t8, instance_type, Operand(kStringEncodingMask)); + { + Label skip; + Branch(&skip, ne, t8, Operand(zero_reg)); + // Adjust length for UC16. + dsll(t9, t9, 1); + bind(&skip); + } + Daddu(length, t9, Operand(SeqString::kHeaderSize + kObjectAlignmentMask)); + DCHECK(!length.is(t8)); + And(length, length, Operand(~kObjectAlignmentMask)); + + bind(&is_data_object); + // Value is a data object, and it is white. Mark it black. Since we know + // that the object is white we can make it black by flipping one bit. + Uld(t8, MemOperand(bitmap_scratch, MemoryChunk::kHeaderSize)); + Or(t8, t8, Operand(mask_scratch)); + Usd(t8, MemOperand(bitmap_scratch, MemoryChunk::kHeaderSize)); + + And(bitmap_scratch, bitmap_scratch, Operand(~Page::kPageAlignmentMask)); + Uld(t8, MemOperand(bitmap_scratch, MemoryChunk::kLiveBytesOffset)); + Daddu(t8, t8, Operand(length)); + Usd(t8, MemOperand(bitmap_scratch, MemoryChunk::kLiveBytesOffset)); + + bind(&done); +} + + +void MacroAssembler::LoadInstanceDescriptors(Register map, + Register descriptors) { + ld(descriptors, FieldMemOperand(map, Map::kDescriptorsOffset)); +} + + +void MacroAssembler::NumberOfOwnDescriptors(Register dst, Register map) { + ld(dst, FieldMemOperand(map, Map::kBitField3Offset)); + DecodeField<Map::NumberOfOwnDescriptorsBits>(dst); +} + + +void MacroAssembler::EnumLength(Register dst, Register map) { + STATIC_ASSERT(Map::EnumLengthBits::kShift == 0); + ld(dst, FieldMemOperand(map, Map::kBitField3Offset)); + And(dst, dst, Operand(Map::EnumLengthBits::kMask)); + SmiTag(dst); +} + + +void MacroAssembler::CheckEnumCache(Register null_value, Label* call_runtime) { + Register empty_fixed_array_value = a6; + LoadRoot(empty_fixed_array_value, Heap::kEmptyFixedArrayRootIndex); + Label next, start; + mov(a2, a0); + + // Check if the enum length field is properly initialized, indicating that + // there is an enum cache. + ld(a1, FieldMemOperand(a2, HeapObject::kMapOffset)); + + EnumLength(a3, a1); + Branch( + call_runtime, eq, a3, Operand(Smi::FromInt(kInvalidEnumCacheSentinel))); + + jmp(&start); + + bind(&next); + ld(a1, FieldMemOperand(a2, HeapObject::kMapOffset)); + + // For all objects but the receiver, check that the cache is empty. + EnumLength(a3, a1); + Branch(call_runtime, ne, a3, Operand(Smi::FromInt(0))); + + bind(&start); + + // Check that there are no elements. Register a2 contains the current JS + // object we've reached through the prototype chain. + Label no_elements; + ld(a2, FieldMemOperand(a2, JSObject::kElementsOffset)); + Branch(&no_elements, eq, a2, Operand(empty_fixed_array_value)); + + // Second chance, the object may be using the empty slow element dictionary. + LoadRoot(at, Heap::kEmptySlowElementDictionaryRootIndex); + Branch(call_runtime, ne, a2, Operand(at)); + + bind(&no_elements); + ld(a2, FieldMemOperand(a1, Map::kPrototypeOffset)); + Branch(&next, ne, a2, Operand(null_value)); +} + + +void MacroAssembler::ClampUint8(Register output_reg, Register input_reg) { + DCHECK(!output_reg.is(input_reg)); + Label done; + li(output_reg, Operand(255)); + // Normal branch: nop in delay slot. + Branch(&done, gt, input_reg, Operand(output_reg)); + // Use delay slot in this branch. 
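+  // (Editorial note: the delay-slot mov executes whether or not the branch
+  // is taken; on the fall-through path the next mov immediately overwrites
+  // it, saving a separate unconditional branch.)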
+ Branch(USE_DELAY_SLOT, &done, lt, input_reg, Operand(zero_reg)); + mov(output_reg, zero_reg); // In delay slot. + mov(output_reg, input_reg); // Value is in range 0..255. + bind(&done); +} + + +void MacroAssembler::ClampDoubleToUint8(Register result_reg, + DoubleRegister input_reg, + DoubleRegister temp_double_reg) { + Label above_zero; + Label done; + Label in_bounds; + + Move(temp_double_reg, 0.0); + BranchF(&above_zero, NULL, gt, input_reg, temp_double_reg); + + // Double value is less than zero, NaN or Inf, return 0. + mov(result_reg, zero_reg); + Branch(&done); + + // Double value is >= 255, return 255. + bind(&above_zero); + Move(temp_double_reg, 255.0); + BranchF(&in_bounds, NULL, le, input_reg, temp_double_reg); + li(result_reg, Operand(255)); + Branch(&done); + + // In 0-255 range, round and truncate. + bind(&in_bounds); + cvt_w_d(temp_double_reg, input_reg); + mfc1(result_reg, temp_double_reg); + bind(&done); +} + + +void MacroAssembler::TestJSArrayForAllocationMemento( + Register receiver_reg, + Register scratch_reg, + Label* no_memento_found, + Condition cond, + Label* allocation_memento_present) { + ExternalReference new_space_start = + ExternalReference::new_space_start(isolate()); + ExternalReference new_space_allocation_top = + ExternalReference::new_space_allocation_top_address(isolate()); + Daddu(scratch_reg, receiver_reg, + Operand(JSArray::kSize + AllocationMemento::kSize - kHeapObjectTag)); + Branch(no_memento_found, lt, scratch_reg, Operand(new_space_start)); + li(at, Operand(new_space_allocation_top)); + ld(at, MemOperand(at)); + Branch(no_memento_found, gt, scratch_reg, Operand(at)); + ld(scratch_reg, MemOperand(scratch_reg, -AllocationMemento::kSize)); + if (allocation_memento_present) { + Branch(allocation_memento_present, cond, scratch_reg, + Operand(isolate()->factory()->allocation_memento_map())); + } +} + + +Register GetRegisterThatIsNotOneOf(Register reg1, + Register reg2, + Register reg3, + Register reg4, + Register reg5, + Register reg6) { + RegList regs = 0; + if (reg1.is_valid()) regs |= reg1.bit(); + if (reg2.is_valid()) regs |= reg2.bit(); + if (reg3.is_valid()) regs |= reg3.bit(); + if (reg4.is_valid()) regs |= reg4.bit(); + if (reg5.is_valid()) regs |= reg5.bit(); + if (reg6.is_valid()) regs |= reg6.bit(); + + for (int i = 0; i < Register::NumAllocatableRegisters(); i++) { + Register candidate = Register::FromAllocationIndex(i); + if (regs & candidate.bit()) continue; + return candidate; + } + UNREACHABLE(); + return no_reg; +} + + +void MacroAssembler::JumpIfDictionaryInPrototypeChain( + Register object, + Register scratch0, + Register scratch1, + Label* found) { + DCHECK(!scratch1.is(scratch0)); + Factory* factory = isolate()->factory(); + Register current = scratch0; + Label loop_again; + + // Scratch contained elements pointer. + Move(current, object); + + // Loop based on the map going up the prototype chain. 
+  bind(&loop_again);
+  ld(current, FieldMemOperand(current, HeapObject::kMapOffset));
+  lb(scratch1, FieldMemOperand(current, Map::kBitField2Offset));
+  DecodeField<Map::ElementsKindBits>(scratch1);
+  Branch(found, eq, scratch1, Operand(DICTIONARY_ELEMENTS));
+  ld(current, FieldMemOperand(current, Map::kPrototypeOffset));
+  Branch(&loop_again, ne, current, Operand(factory->null_value()));
+}
+
+
+bool AreAliased(Register reg1,
+                Register reg2,
+                Register reg3,
+                Register reg4,
+                Register reg5,
+                Register reg6,
+                Register reg7,
+                Register reg8) {
+  int n_of_valid_regs = reg1.is_valid() + reg2.is_valid() +
+      reg3.is_valid() + reg4.is_valid() + reg5.is_valid() + reg6.is_valid() +
+      reg7.is_valid() + reg8.is_valid();
+
+  RegList regs = 0;
+  if (reg1.is_valid()) regs |= reg1.bit();
+  if (reg2.is_valid()) regs |= reg2.bit();
+  if (reg3.is_valid()) regs |= reg3.bit();
+  if (reg4.is_valid()) regs |= reg4.bit();
+  if (reg5.is_valid()) regs |= reg5.bit();
+  if (reg6.is_valid()) regs |= reg6.bit();
+  if (reg7.is_valid()) regs |= reg7.bit();
+  if (reg8.is_valid()) regs |= reg8.bit();
+  int n_of_non_aliasing_regs = NumRegs(regs);
+
+  return n_of_valid_regs != n_of_non_aliasing_regs;
+}
+
+
+CodePatcher::CodePatcher(byte* address,
+                         int instructions,
+                         FlushICache flush_cache)
+    : address_(address),
+      size_(instructions * Assembler::kInstrSize),
+      masm_(NULL, address, size_ + Assembler::kGap),
+      flush_cache_(flush_cache) {
+  // Create a new macro assembler pointing to the address of the code to
+  // patch. The size is adjusted with kGap in order for the assembler to
+  // generate size bytes of instructions without failing with buffer size
+  // constraints.
+  DCHECK(masm_.reloc_info_writer.pos() == address_ + size_ + Assembler::kGap);
+}
+
+
+CodePatcher::~CodePatcher() {
+  // Indicate that code has changed.
+  if (flush_cache_ == FLUSH) {
+    CpuFeatures::FlushICache(address_, size_);
+  }
+  // Check that the code was patched as expected.
+  DCHECK(masm_.pc_ == address_ + size_);
+  DCHECK(masm_.reloc_info_writer.pos() == address_ + size_ + Assembler::kGap);
+}
+
+
+void CodePatcher::Emit(Instr instr) {
+  masm()->emit(instr);
+}
+
+
+void CodePatcher::Emit(Address addr) {
+  // masm()->emit(reinterpret_cast<Instr>(addr));
+}
+
+
+void CodePatcher::ChangeBranchCondition(Condition cond) {
+  Instr instr = Assembler::instr_at(masm_.pc_);
+  DCHECK(Assembler::IsBranch(instr));
+  uint32_t opcode = Assembler::GetOpcodeField(instr);
+  // Currently only the 'eq' and 'ne' cond values are supported and the simple
+  // branch instructions (with opcode being the branch type).
+  // There are some special cases (see Assembler::IsBranch()) so extending this
+  // would be tricky.
+  DCHECK(opcode == BEQ ||
+         opcode == BNE ||
+         opcode == BLEZ ||
+         opcode == BGTZ ||
+         opcode == BEQL ||
+         opcode == BNEL ||
+         opcode == BLEZL ||
+         opcode == BGTZL);
+  opcode = (cond == eq) ? BEQ : BNE;
+  instr = (instr & ~kOpcodeMask) | opcode;
+  masm_.emit(instr);
+}
+
+
+void MacroAssembler::TruncatingDiv(Register result,
+                                   Register dividend,
+                                   int32_t divisor) {
+  DCHECK(!dividend.is(result));
+  DCHECK(!dividend.is(at));
+  DCHECK(!result.is(at));
+  MultiplierAndShift ms(divisor);
+  li(at, Operand(ms.multiplier()));
+  Mulh(result, dividend, Operand(at));
+  if (divisor > 0 && ms.multiplier() < 0) {
+    Addu(result, result, Operand(dividend));
+  }
+  if (divisor < 0 && ms.multiplier() > 0) {
+    Subu(result, result, Operand(dividend));
+  }
+  if (ms.shift() > 0) sra(result, result, ms.shift());
+  srl(at, dividend, 31);
+  Addu(result, result, Operand(at));
+}
+
+
+} }  // namespace v8::internal
+
+#endif  // V8_TARGET_ARCH_MIPS64
diff --git a/deps/v8/src/mips64/macro-assembler-mips64.h b/deps/v8/src/mips64/macro-assembler-mips64.h
new file mode 100644
index 00000000000..89ae1e50fd6
--- /dev/null
+++ b/deps/v8/src/mips64/macro-assembler-mips64.h
@@ -0,0 +1,1793 @@
+// Copyright 2012 the V8 project authors. All rights reserved.
+// Use of this source code is governed by a BSD-style license that can be
+// found in the LICENSE file.
+
+#ifndef V8_MIPS_MACRO_ASSEMBLER_MIPS_H_
+#define V8_MIPS_MACRO_ASSEMBLER_MIPS_H_
+
+#include "src/assembler.h"
+#include "src/globals.h"
+#include "src/mips64/assembler-mips64.h"
+
+namespace v8 {
+namespace internal {
+
+// Forward declaration.
+class JumpTarget;
+
+// Reserved Register Usage Summary.
+//
+// Registers t8, t9, and at are reserved for use by the MacroAssembler.
+//
+// The programmer should know that the MacroAssembler may clobber these three,
+// but won't touch other registers except in special cases.
+//
+// Per the MIPS ABI, register t9 must be used for indirect function call
+// via 'jalr t9' or 'jr t9' instructions. This is relied upon by gcc when
+// trying to update gp register for position-independent-code. Whenever
+// MIPS generated code calls C code, it must be via t9 register.
+
+
+// Flags used for LeaveExitFrame function.
+enum LeaveExitFrameMode {
+  EMIT_RETURN = true,
+  NO_EMIT_RETURN = false
+};
+
+// Flags used for AllocateHeapNumber.
+enum TaggingMode {
+  // Tag the result.
+  TAG_RESULT,
+  // Don't tag the result.
+  DONT_TAG_RESULT
+};
+
+// Flags used for the ObjectToDoubleFPURegister function.
+enum ObjectToDoubleFlags {
+  // No special flags.
+  NO_OBJECT_TO_DOUBLE_FLAGS = 0,
+  // Object is known to be a non-smi.
+  OBJECT_NOT_SMI = 1 << 0,
+  // Don't load NaNs or infinities, branch to the non number case instead.
+  AVOID_NANS_AND_INFINITIES = 1 << 1
+};
+
+// Allow programmer to use Branch Delay Slot of Branches, Jumps, Calls.
+enum BranchDelaySlot {
+  USE_DELAY_SLOT,
+  PROTECT
+};
+
+// Flags used for the li macro-assembler function.
+enum LiFlags {
+  // If the constant value can be represented in just 16 bits, then
+  // optimize the li to use a single instruction, rather than lui/ori/dsll
+  // sequence.
+  OPTIMIZE_SIZE = 0,
+  // Always use 6 instructions (lui/ori/dsll sequence), even if the constant
+  // could be loaded with just one, so that this value is patchable later.
+  CONSTANT_SIZE = 1,
+  // For address loads only 4 instructions are required. Used to mark
+  // constant load that will be used as address without relocation
+  // information. It ensures predictable code size, so specific sites
+  // in code are patchable.
+ ADDRESS_LOAD = 2 +}; + + +enum RememberedSetAction { EMIT_REMEMBERED_SET, OMIT_REMEMBERED_SET }; +enum SmiCheck { INLINE_SMI_CHECK, OMIT_SMI_CHECK }; +enum PointersToHereCheck { + kPointersToHereMaybeInteresting, + kPointersToHereAreAlwaysInteresting +}; +enum RAStatus { kRAHasNotBeenSaved, kRAHasBeenSaved }; + +Register GetRegisterThatIsNotOneOf(Register reg1, + Register reg2 = no_reg, + Register reg3 = no_reg, + Register reg4 = no_reg, + Register reg5 = no_reg, + Register reg6 = no_reg); + +bool AreAliased(Register reg1, + Register reg2, + Register reg3 = no_reg, + Register reg4 = no_reg, + Register reg5 = no_reg, + Register reg6 = no_reg, + Register reg7 = no_reg, + Register reg8 = no_reg); + + +// ----------------------------------------------------------------------------- +// Static helper functions. + +inline MemOperand ContextOperand(Register context, int index) { + return MemOperand(context, Context::SlotOffset(index)); +} + + +inline MemOperand GlobalObjectOperand() { + return ContextOperand(cp, Context::GLOBAL_OBJECT_INDEX); +} + + +// Generate a MemOperand for loading a field from an object. +inline MemOperand FieldMemOperand(Register object, int offset) { + return MemOperand(object, offset - kHeapObjectTag); +} + + +inline MemOperand UntagSmiMemOperand(Register rm, int offset) { + // Assumes that Smis are shifted by 32 bits and little endianness. + STATIC_ASSERT(kSmiShift == 32); + return MemOperand(rm, offset + (kSmiShift / kBitsPerByte)); +} + + +inline MemOperand UntagSmiFieldMemOperand(Register rm, int offset) { + return UntagSmiMemOperand(rm, offset - kHeapObjectTag); +} + + +// Generate a MemOperand for storing arguments 5..N on the stack +// when calling CallCFunction(). +// TODO(plind): Currently ONLY used for O32. Should be fixed for +// n64, and used in RegExp code, and other places +// with more than 8 arguments. +inline MemOperand CFunctionArgumentOperand(int index) { + DCHECK(index > kCArgSlotCount); + // Argument 5 takes the slot just past the four Arg-slots. + int offset = (index - 5) * kPointerSize + kCArgsSlotsSize; + return MemOperand(sp, offset); +} + + +// MacroAssembler implements a collection of frequently used macros. +class MacroAssembler: public Assembler { + public: + // The isolate parameter can be NULL if the macro assembler should + // not use isolate-dependent functionality. In this case, it's the + // responsibility of the caller to never invoke such function on the + // macro assembler. + MacroAssembler(Isolate* isolate, void* buffer, int size); + + // Arguments macros. +#define COND_TYPED_ARGS Condition cond, Register r1, const Operand& r2 +#define COND_ARGS cond, r1, r2 + + // Cases when relocation is not needed. +#define DECLARE_NORELOC_PROTOTYPE(Name, target_type) \ + void Name(target_type target, BranchDelaySlot bd = PROTECT); \ + inline void Name(BranchDelaySlot bd, target_type target) { \ + Name(target, bd); \ + } \ + void Name(target_type target, \ + COND_TYPED_ARGS, \ + BranchDelaySlot bd = PROTECT); \ + inline void Name(BranchDelaySlot bd, \ + target_type target, \ + COND_TYPED_ARGS) { \ + Name(target, COND_ARGS, bd); \ + } + +#define DECLARE_BRANCH_PROTOTYPES(Name) \ + DECLARE_NORELOC_PROTOTYPE(Name, Label*) \ + DECLARE_NORELOC_PROTOTYPE(Name, int16_t) + + DECLARE_BRANCH_PROTOTYPES(Branch) + DECLARE_BRANCH_PROTOTYPES(BranchAndLink) + DECLARE_BRANCH_PROTOTYPES(BranchShort) + +#undef DECLARE_BRANCH_PROTOTYPES +#undef COND_TYPED_ARGS +#undef COND_ARGS + + + // Jump, Call, and Ret pseudo instructions implementing inter-working. 
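+  // All of these accept an optional trailing condition, so both of the
+  // following forms are valid (illustrative sketch):
+  //   Jump(t9);                             // unconditional
+  //   Jump(t9, ne, a0, Operand(zero_reg));  // only taken if a0 != 0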
+#define COND_ARGS Condition cond = al, Register rs = zero_reg, \
+    const Operand& rt = Operand(zero_reg), BranchDelaySlot bd = PROTECT
+
+  void Jump(Register target, COND_ARGS);
+  void Jump(intptr_t target, RelocInfo::Mode rmode, COND_ARGS);
+  void Jump(Address target, RelocInfo::Mode rmode, COND_ARGS);
+  void Jump(Handle<Code> code, RelocInfo::Mode rmode, COND_ARGS);
+  static int CallSize(Register target, COND_ARGS);
+  void Call(Register target, COND_ARGS);
+  static int CallSize(Address target, RelocInfo::Mode rmode, COND_ARGS);
+  void Call(Address target, RelocInfo::Mode rmode, COND_ARGS);
+  int CallSize(Handle<Code> code,
+               RelocInfo::Mode rmode = RelocInfo::CODE_TARGET,
+               TypeFeedbackId ast_id = TypeFeedbackId::None(),
+               COND_ARGS);
+  void Call(Handle<Code> code,
+            RelocInfo::Mode rmode = RelocInfo::CODE_TARGET,
+            TypeFeedbackId ast_id = TypeFeedbackId::None(),
+            COND_ARGS);
+  void Ret(COND_ARGS);
+  inline void Ret(BranchDelaySlot bd, Condition cond = al,
+                  Register rs = zero_reg,
+                  const Operand& rt = Operand(zero_reg)) {
+    Ret(cond, rs, rt, bd);
+  }
+
+  void Branch(Label* L,
+              Condition cond,
+              Register rs,
+              Heap::RootListIndex index,
+              BranchDelaySlot bdslot = PROTECT);
+
+#undef COND_ARGS
+
+  // Emit code to discard a non-negative number of pointer-sized elements
+  // from the stack, clobbering only the sp register.
+  void Drop(int count,
+            Condition cond = cc_always,
+            Register reg = no_reg,
+            const Operand& op = Operand(no_reg));
+
+  // Trivial case of DropAndRet that utilizes the delay slot and only emits
+  // 2 instructions.
+  void DropAndRet(int drop);
+
+  void DropAndRet(int drop,
+                  Condition cond,
+                  Register reg,
+                  const Operand& op);
+
+  // Swap two registers. If the scratch register is omitted then a slightly
+  // less efficient form using xor instead of mov is emitted.
+  void Swap(Register reg1, Register reg2, Register scratch = no_reg);
+
+  void Call(Label* target);
+
+  inline void Move(Register dst, Register src) {
+    if (!dst.is(src)) {
+      mov(dst, src);
+    }
+  }
+
+  inline void Move(FPURegister dst, FPURegister src) {
+    if (!dst.is(src)) {
+      mov_d(dst, src);
+    }
+  }
+
+  inline void Move(Register dst_low, Register dst_high, FPURegister src) {
+    mfc1(dst_low, src);
+    mfhc1(dst_high, src);
+  }
+
+  inline void FmoveHigh(Register dst_high, FPURegister src) {
+    mfhc1(dst_high, src);
+  }
+
+  inline void FmoveLow(Register dst_low, FPURegister src) {
+    mfc1(dst_low, src);
+  }
+
+  inline void Move(FPURegister dst, Register src_low, Register src_high) {
+    mtc1(src_low, dst);
+    mthc1(src_high, dst);
+  }
+
+  // Conditional move.
+  void Move(FPURegister dst, double imm);
+  void Movz(Register rd, Register rs, Register rt);
+  void Movn(Register rd, Register rs, Register rt);
+  void Movt(Register rd, Register rs, uint16_t cc = 0);
+  void Movf(Register rd, Register rs, uint16_t cc = 0);
+
+  void Clz(Register rd, Register rs);
+
+  // Jump unconditionally to the given label.
+  // We NEED a nop in the branch delay slot, as it is used by v8, for example
+  // in CodeGenerator::ProcessDeferred().
+  // Currently the branch delay slot is filled by the MacroAssembler.
+  // Prefer b(Label) for plain code generation.
+  void jmp(Label* L) {
+    Branch(L);
+  }
+
+  void Load(Register dst, const MemOperand& src, Representation r);
+  void Store(Register src, const MemOperand& dst, Representation r);
+
+  // Load an object from the root table.
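+  // E.g., an illustrative sketch comparing a value against undefined:
+  //   LoadRoot(t0, Heap::kUndefinedValueRootIndex);
+  //   Branch(&is_undefined, eq, value, Operand(t0));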
+  void LoadRoot(Register destination,
+                Heap::RootListIndex index);
+  void LoadRoot(Register destination,
+                Heap::RootListIndex index,
+                Condition cond, Register src1, const Operand& src2);
+
+  // Store an object to the root table.
+  void StoreRoot(Register source,
+                 Heap::RootListIndex index);
+  void StoreRoot(Register source,
+                 Heap::RootListIndex index,
+                 Condition cond, Register src1, const Operand& src2);
+
+  // ---------------------------------------------------------------------------
+  // GC Support
+
+  void IncrementalMarkingRecordWriteHelper(Register object,
+                                           Register value,
+                                           Register address);
+
+  enum RememberedSetFinalAction {
+    kReturnAtEnd,
+    kFallThroughAtEnd
+  };
+
+
+  // Record in the remembered set the fact that we have a pointer to new space
+  // at the address pointed to by the addr register. Only works if addr is not
+  // in new space.
+  void RememberedSetHelper(Register object,  // Used for debug code.
+                           Register addr,
+                           Register scratch,
+                           SaveFPRegsMode save_fp,
+                           RememberedSetFinalAction and_then);
+
+  void CheckPageFlag(Register object,
+                     Register scratch,
+                     int mask,
+                     Condition cc,
+                     Label* condition_met);
+
+  void CheckMapDeprecated(Handle<Map> map,
+                          Register scratch,
+                          Label* if_deprecated);
+
+  // Check if object is in new space. Jumps if the object is not in new space.
+  // The register scratch can be object itself, but it will be clobbered.
+  void JumpIfNotInNewSpace(Register object,
+                           Register scratch,
+                           Label* branch) {
+    InNewSpace(object, scratch, ne, branch);
+  }
+
+  // Check if object is in new space. Jumps if the object is in new space.
+  // The register scratch can be object itself, but scratch will be clobbered.
+  void JumpIfInNewSpace(Register object,
+                        Register scratch,
+                        Label* branch) {
+    InNewSpace(object, scratch, eq, branch);
+  }
+
+  // Check if an object has a given incremental marking color.
+  void HasColor(Register object,
+                Register scratch0,
+                Register scratch1,
+                Label* has_color,
+                int first_bit,
+                int second_bit);
+
+  void JumpIfBlack(Register object,
+                   Register scratch0,
+                   Register scratch1,
+                   Label* on_black);
+
+  // Checks the color of an object. If the object is already grey or black
+  // then we just fall through, since it is already live. If it is white and
+  // we can determine that it doesn't need to be scanned, then we just mark it
+  // black and fall through. For the rest we jump to the label so the
+  // incremental marker can fix its assumptions.
+  void EnsureNotWhite(Register object,
+                      Register scratch1,
+                      Register scratch2,
+                      Register scratch3,
+                      Label* object_is_white_and_not_data);
+
+  // Detects conservatively whether an object is data-only, i.e. it does not
+  // need to be scanned by the garbage collector.
+  void JumpIfDataObject(Register value,
+                        Register scratch,
+                        Label* not_data_object);
+
+  // Notify the garbage collector that we wrote a pointer into an object.
+  // |object| is the object being stored into, |value| is the object being
+  // stored. value and scratch registers are clobbered by the operation.
+  // The offset is the offset from the start of the object, not the offset from
+  // the tagged HeapObject pointer. For use with FieldMemOperand(reg, off).
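+  // Illustrative sketch of the store/barrier pairing (kFooOffset is a
+  // made-up field offset, not a real constant):
+  //   sd(value, FieldMemOperand(object, kFooOffset));
+  //   RecordWriteField(object, kFooOffset, value, scratch,
+  //                    kRAHasBeenSaved, kDontSaveFPRegs);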
+ void RecordWriteField( + Register object, + int offset, + Register value, + Register scratch, + RAStatus ra_status, + SaveFPRegsMode save_fp, + RememberedSetAction remembered_set_action = EMIT_REMEMBERED_SET, + SmiCheck smi_check = INLINE_SMI_CHECK, + PointersToHereCheck pointers_to_here_check_for_value = + kPointersToHereMaybeInteresting); + + // As above, but the offset has the tag presubtracted. For use with + // MemOperand(reg, off). + inline void RecordWriteContextSlot( + Register context, + int offset, + Register value, + Register scratch, + RAStatus ra_status, + SaveFPRegsMode save_fp, + RememberedSetAction remembered_set_action = EMIT_REMEMBERED_SET, + SmiCheck smi_check = INLINE_SMI_CHECK, + PointersToHereCheck pointers_to_here_check_for_value = + kPointersToHereMaybeInteresting) { + RecordWriteField(context, + offset + kHeapObjectTag, + value, + scratch, + ra_status, + save_fp, + remembered_set_action, + smi_check, + pointers_to_here_check_for_value); + } + + void RecordWriteForMap( + Register object, + Register map, + Register dst, + RAStatus ra_status, + SaveFPRegsMode save_fp); + + // For a given |object| notify the garbage collector that the slot |address| + // has been written. |value| is the object being stored. The value and + // address registers are clobbered by the operation. + void RecordWrite( + Register object, + Register address, + Register value, + RAStatus ra_status, + SaveFPRegsMode save_fp, + RememberedSetAction remembered_set_action = EMIT_REMEMBERED_SET, + SmiCheck smi_check = INLINE_SMI_CHECK, + PointersToHereCheck pointers_to_here_check_for_value = + kPointersToHereMaybeInteresting); + + + // --------------------------------------------------------------------------- + // Inline caching support. + + // Generate code for checking access rights - used for security checks + // on access to global objects across environments. The holder register + // is left untouched, whereas both scratch registers are clobbered. + void CheckAccessGlobalProxy(Register holder_reg, + Register scratch, + Label* miss); + + void GetNumberHash(Register reg0, Register scratch); + + void LoadFromNumberDictionary(Label* miss, + Register elements, + Register key, + Register result, + Register reg0, + Register reg1, + Register reg2); + + + inline void MarkCode(NopMarkerTypes type) { + nop(type); + } + + // Check if the given instruction is a 'type' marker. + // i.e. check if it is a sll zero_reg, zero_reg, <type> (referenced as + // nop(type)). These instructions are generated to mark special location in + // the code, like some special IC code. + static inline bool IsMarkedCode(Instr instr, int type) { + DCHECK((FIRST_IC_MARKER <= type) && (type < LAST_CODE_MARKER)); + return IsNop(instr, type); + } + + + static inline int GetCodeMarker(Instr instr) { + uint32_t opcode = ((instr & kOpcodeMask)); + uint32_t rt = ((instr & kRtFieldMask) >> kRtShift); + uint32_t rs = ((instr & kRsFieldMask) >> kRsShift); + uint32_t sa = ((instr & kSaFieldMask) >> kSaShift); + + // Return <n> if we have a sll zero_reg, zero_reg, n + // else return -1. + bool sllzz = (opcode == SLL && + rt == static_cast<uint32_t>(ToNumber(zero_reg)) && + rs == static_cast<uint32_t>(ToNumber(zero_reg))); + int type = + (sllzz && FIRST_IC_MARKER <= sa && sa < LAST_CODE_MARKER) ? sa : -1; + DCHECK((type == -1) || + ((FIRST_IC_MARKER <= type) && (type < LAST_CODE_MARKER))); + return type; + } + + + + // --------------------------------------------------------------------------- + // Allocation support. 
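+  // The typical pattern pairs an allocation with a runtime fallback; an
+  // illustrative sketch (label and size are arbitrary):
+  //   Allocate(JSValue::kSize, v0, t0, t1, &gc_required, TAG_OBJECT);
+  //   ...
+  //   bind(&gc_required);  // jump into the runtime to allocate instead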
+
+  // Allocate an object in new space or old pointer space. The object_size is
+  // specified either in bytes or in words if the allocation flag SIZE_IN_WORDS
+  // is passed. If the space is exhausted control continues at the gc_required
+  // label. The allocated object is returned in result. If the flag
+  // tag_allocated_object is true the result is tagged as a heap object.
+  // All registers are clobbered also when control continues at the gc_required
+  // label.
+  void Allocate(int object_size,
+                Register result,
+                Register scratch1,
+                Register scratch2,
+                Label* gc_required,
+                AllocationFlags flags);
+
+  void Allocate(Register object_size,
+                Register result,
+                Register scratch1,
+                Register scratch2,
+                Label* gc_required,
+                AllocationFlags flags);
+
+  // Undo allocation in new space. The object passed and objects allocated after
+  // it will no longer be allocated. The caller must make sure that no pointers
+  // are left to the object(s) no longer allocated as they would be invalid when
+  // allocation is undone.
+  void UndoAllocationInNewSpace(Register object, Register scratch);
+
+
+  void AllocateTwoByteString(Register result,
+                             Register length,
+                             Register scratch1,
+                             Register scratch2,
+                             Register scratch3,
+                             Label* gc_required);
+  void AllocateAsciiString(Register result,
+                           Register length,
+                           Register scratch1,
+                           Register scratch2,
+                           Register scratch3,
+                           Label* gc_required);
+  void AllocateTwoByteConsString(Register result,
+                                 Register length,
+                                 Register scratch1,
+                                 Register scratch2,
+                                 Label* gc_required);
+  void AllocateAsciiConsString(Register result,
+                               Register length,
+                               Register scratch1,
+                               Register scratch2,
+                               Label* gc_required);
+  void AllocateTwoByteSlicedString(Register result,
+                                   Register length,
+                                   Register scratch1,
+                                   Register scratch2,
+                                   Label* gc_required);
+  void AllocateAsciiSlicedString(Register result,
+                                 Register length,
+                                 Register scratch1,
+                                 Register scratch2,
+                                 Label* gc_required);
+
+  // Allocates a heap number or jumps to the gc_required label if the young
+  // space is full and a scavenge is needed. All registers are clobbered also
+  // when control continues at the gc_required label.
+  void AllocateHeapNumber(Register result,
+                          Register scratch1,
+                          Register scratch2,
+                          Register heap_number_map,
+                          Label* gc_required,
+                          TaggingMode tagging_mode = TAG_RESULT,
+                          MutableMode mode = IMMUTABLE);
+
+  void AllocateHeapNumberWithValue(Register result,
+                                   FPURegister value,
+                                   Register scratch1,
+                                   Register scratch2,
+                                   Label* gc_required);
+
+  // ---------------------------------------------------------------------------
+  // Instruction macros.
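+  // Each DEFINE_INSTRUCTION name below expands to three overloads, so all of
+  // the following are valid (illustrative):
+  //   Addu(v0, a0, a1);           // register operand
+  //   Addu(v0, a0, Operand(a1));  // explicit Operand
+  //   Addu(v0, a0, 1234);         // immediate operand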
+
+#define DEFINE_INSTRUCTION(instr) \
+  void instr(Register rd, Register rs, const Operand& rt); \
+  void instr(Register rd, Register rs, Register rt) { \
+    instr(rd, rs, Operand(rt)); \
+  } \
+  void instr(Register rs, Register rt, int32_t j) { \
+    instr(rs, rt, Operand(j)); \
+  }
+
+#define DEFINE_INSTRUCTION2(instr) \
+  void instr(Register rs, const Operand& rt); \
+  void instr(Register rs, Register rt) { \
+    instr(rs, Operand(rt)); \
+  } \
+  void instr(Register rs, int32_t j) { \
+    instr(rs, Operand(j)); \
+  }
+
+  DEFINE_INSTRUCTION(Addu);
+  DEFINE_INSTRUCTION(Daddu);
+  DEFINE_INSTRUCTION(Ddiv);
+  DEFINE_INSTRUCTION(Subu);
+  DEFINE_INSTRUCTION(Dsubu);
+  DEFINE_INSTRUCTION(Dmod);
+  DEFINE_INSTRUCTION(Mul);
+  DEFINE_INSTRUCTION(Mulh);
+  DEFINE_INSTRUCTION(Dmul);
+  DEFINE_INSTRUCTION(Dmulh);
+  DEFINE_INSTRUCTION2(Mult);
+  DEFINE_INSTRUCTION2(Dmult);
+  DEFINE_INSTRUCTION2(Multu);
+  DEFINE_INSTRUCTION2(Dmultu);
+  DEFINE_INSTRUCTION2(Div);
+  DEFINE_INSTRUCTION2(Ddiv);
+  DEFINE_INSTRUCTION2(Divu);
+  DEFINE_INSTRUCTION2(Ddivu);
+
+  DEFINE_INSTRUCTION(And);
+  DEFINE_INSTRUCTION(Or);
+  DEFINE_INSTRUCTION(Xor);
+  DEFINE_INSTRUCTION(Nor);
+  DEFINE_INSTRUCTION2(Neg);
+
+  DEFINE_INSTRUCTION(Slt);
+  DEFINE_INSTRUCTION(Sltu);
+
+  // MIPS64 R2 instruction macro.
+  DEFINE_INSTRUCTION(Ror);
+  DEFINE_INSTRUCTION(Dror);
+
+#undef DEFINE_INSTRUCTION
+#undef DEFINE_INSTRUCTION2
+
+  void Pref(int32_t hint, const MemOperand& rs);
+
+
+  // ---------------------------------------------------------------------------
+  // Pseudo-instructions.
+
+  void mov(Register rd, Register rt) { or_(rd, rt, zero_reg); }
+
+  void Ulw(Register rd, const MemOperand& rs);
+  void Usw(Register rd, const MemOperand& rs);
+  void Uld(Register rd, const MemOperand& rs, Register scratch = at);
+  void Usd(Register rd, const MemOperand& rs, Register scratch = at);
+
+  // Load a constant into the rd register.
+  void li(Register rd, Operand j, LiFlags mode = OPTIMIZE_SIZE);
+  inline void li(Register rd, int64_t j, LiFlags mode = OPTIMIZE_SIZE) {
+    li(rd, Operand(j), mode);
+  }
+  void li(Register dst, Handle<Object> value, LiFlags mode = OPTIMIZE_SIZE);
+
+  // Push multiple registers on the stack.
+  // Registers are saved in numerical order, with higher numbered registers
+  // saved in higher memory addresses.
+  void MultiPush(RegList regs);
+  void MultiPushReversed(RegList regs);
+
+  void MultiPushFPU(RegList regs);
+  void MultiPushReversedFPU(RegList regs);
+
+  void push(Register src) {
+    Daddu(sp, sp, Operand(-kPointerSize));
+    sd(src, MemOperand(sp, 0));
+  }
+  void Push(Register src) { push(src); }
+
+  // Push a handle.
+  void Push(Handle<Object> handle);
+  void Push(Smi* smi) { Push(Handle<Smi>(smi, isolate())); }
+
+  // Push two registers. Pushes leftmost register first (to highest address).
+  void Push(Register src1, Register src2) {
+    Dsubu(sp, sp, Operand(2 * kPointerSize));
+    sd(src1, MemOperand(sp, 1 * kPointerSize));
+    sd(src2, MemOperand(sp, 0 * kPointerSize));
+  }
+
+  // Push three registers. Pushes leftmost register first (to highest address).
+  void Push(Register src1, Register src2, Register src3) {
+    Dsubu(sp, sp, Operand(3 * kPointerSize));
+    sd(src1, MemOperand(sp, 2 * kPointerSize));
+    sd(src2, MemOperand(sp, 1 * kPointerSize));
+    sd(src3, MemOperand(sp, 0 * kPointerSize));
+  }
+
+  // Push four registers. Pushes leftmost register first (to highest address).
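+  // (Illustrative: Push(a0, a1, a2, a3) therefore leaves a0 at the highest
+  // address and a3 at MemOperand(sp, 0).)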
+  void Push(Register src1, Register src2, Register src3, Register src4) {
+    Dsubu(sp, sp, Operand(4 * kPointerSize));
+    sd(src1, MemOperand(sp, 3 * kPointerSize));
+    sd(src2, MemOperand(sp, 2 * kPointerSize));
+    sd(src3, MemOperand(sp, 1 * kPointerSize));
+    sd(src4, MemOperand(sp, 0 * kPointerSize));
+  }
+
+  void Push(Register src, Condition cond, Register tst1, Register tst2) {
+    // Since we don't have conditional execution we use a Branch.
+    Branch(3, cond, tst1, Operand(tst2));
+    Dsubu(sp, sp, Operand(kPointerSize));
+    sd(src, MemOperand(sp, 0));
+  }
+
+  void PushRegisterAsTwoSmis(Register src, Register scratch = at);
+  void PopRegisterAsTwoSmis(Register dst, Register scratch = at);
+
+  // Pops multiple values from the stack and loads them into the
+  // registers specified in regs. Pop order is the opposite of MultiPush.
+  void MultiPop(RegList regs);
+  void MultiPopReversed(RegList regs);
+
+  void MultiPopFPU(RegList regs);
+  void MultiPopReversedFPU(RegList regs);
+
+  void pop(Register dst) {
+    ld(dst, MemOperand(sp, 0));
+    Daddu(sp, sp, Operand(kPointerSize));
+  }
+  void Pop(Register dst) { pop(dst); }
+
+  // Pop two registers. Pops rightmost register first (from lower address).
+  void Pop(Register src1, Register src2) {
+    DCHECK(!src1.is(src2));
+    ld(src2, MemOperand(sp, 0 * kPointerSize));
+    ld(src1, MemOperand(sp, 1 * kPointerSize));
+    Daddu(sp, sp, 2 * kPointerSize);
+  }
+
+  // Pop three registers. Pops rightmost register first (from lower address).
+  void Pop(Register src1, Register src2, Register src3) {
+    ld(src3, MemOperand(sp, 0 * kPointerSize));
+    ld(src2, MemOperand(sp, 1 * kPointerSize));
+    ld(src1, MemOperand(sp, 2 * kPointerSize));
+    Daddu(sp, sp, 3 * kPointerSize);
+  }
+
+  void Pop(uint32_t count = 1) {
+    Daddu(sp, sp, Operand(count * kPointerSize));
+  }
+
+  // Push and pop the registers that can hold pointers, as defined by the
+  // RegList constant kSafepointSavedRegisters.
+  void PushSafepointRegisters();
+  void PopSafepointRegisters();
+  // Store value in register src in the safepoint stack slot for
+  // register dst.
+  void StoreToSafepointRegisterSlot(Register src, Register dst);
+  // Load the value of the src register from its safepoint stack slot
+  // into register dst.
+  void LoadFromSafepointRegisterSlot(Register dst, Register src);
+
+  // Flush the I-cache from asm code. You should use CpuFeatures::FlushICache
+  // from C.
+  // Does not handle errors.
+  void FlushICache(Register address, unsigned instructions);
+
+  // MIPS64 R2 instruction macro.
+  void Ins(Register rt, Register rs, uint16_t pos, uint16_t size);
+  void Ext(Register rt, Register rs, uint16_t pos, uint16_t size);
+
+  // ---------------------------------------------------------------------------
+  // FPU macros. These do not handle special cases like NaN or +- inf.
+
+  // Convert unsigned word to double.
+  void Cvt_d_uw(FPURegister fd, FPURegister fs, FPURegister scratch);
+  void Cvt_d_uw(FPURegister fd, Register rs, FPURegister scratch);
+
+  // Convert double to unsigned long.
+  void Trunc_l_ud(FPURegister fd, FPURegister fs, FPURegister scratch);
+
+  void Trunc_l_d(FPURegister fd, FPURegister fs);
+  void Round_l_d(FPURegister fd, FPURegister fs);
+  void Floor_l_d(FPURegister fd, FPURegister fs);
+  void Ceil_l_d(FPURegister fd, FPURegister fs);
+
+  // Convert double to unsigned word.
+  void Trunc_uw_d(FPURegister fd, FPURegister fs, FPURegister scratch);
+  void Trunc_uw_d(FPURegister fd, Register rs, FPURegister scratch);
+
+  void Trunc_w_d(FPURegister fd, FPURegister fs);
+  void Round_w_d(FPURegister fd, FPURegister fs);
+  void Floor_w_d(FPURegister fd, FPURegister fs);
+  void Ceil_w_d(FPURegister fd, FPURegister fs);
+
+  void Madd_d(FPURegister fd,
+              FPURegister fr,
+              FPURegister fs,
+              FPURegister ft,
+              FPURegister scratch);
+
+  // Wrapper function for the different cmp/branch types.
+  void BranchF(Label* target,
+               Label* nan,
+               Condition cc,
+               FPURegister cmp1,
+               FPURegister cmp2,
+               BranchDelaySlot bd = PROTECT);
+
+  // Alternate (inline) version for better readability with USE_DELAY_SLOT.
+  inline void BranchF(BranchDelaySlot bd,
+                      Label* target,
+                      Label* nan,
+                      Condition cc,
+                      FPURegister cmp1,
+                      FPURegister cmp2) {
+    BranchF(target, nan, cc, cmp1, cmp2, bd);
+  }
+
+  // Truncates a double using a specific rounding mode, and writes the value
+  // to the result register.
+  // The except_flag will contain any exceptions caused by the instruction.
+  // If check_inexact is kDontCheckForInexactConversion, then the inexact
+  // exception is masked.
+  void EmitFPUTruncate(FPURoundingMode rounding_mode,
+                       Register result,
+                       DoubleRegister double_input,
+                       Register scratch,
+                       DoubleRegister double_scratch,
+                       Register except_flag,
+                       CheckForInexactConversion check_inexact
+                           = kDontCheckForInexactConversion);
+
+  // Performs a truncating conversion of a floating point number as used by
+  // the JS bitwise operations. See ECMA-262 9.5: ToInt32. Goes to 'done' if it
+  // succeeds, otherwise falls through if the result is saturated. On return
+  // 'result' either holds the answer, or is clobbered on fall through.
+  //
+  // Only public for the test code in test-code-stubs-mips64.cc.
+  void TryInlineTruncateDoubleToI(Register result,
+                                  DoubleRegister input,
+                                  Label* done);
+
+  // Performs a truncating conversion of a floating point number as used by
+  // the JS bitwise operations. See ECMA-262 9.5: ToInt32.
+  // Exits with 'result' holding the answer.
+  void TruncateDoubleToI(Register result, DoubleRegister double_input);
+
+  // Performs a truncating conversion of a heap number as used by
+  // the JS bitwise operations. See ECMA-262 9.5: ToInt32. 'result' and 'input'
+  // must be different registers. Exits with 'result' holding the answer.
+  void TruncateHeapNumberToI(Register result, Register object);
+
+  // Converts the smi or heap number in object to an int32 using the rules
+  // for ToInt32 as described in ECMAScript 9.5.: the value is truncated
+  // and brought into the range -2^31 .. +2^31 - 1. 'result' and 'input' must be
+  // different registers.
+  void TruncateNumberToI(Register object,
+                         Register result,
+                         Register heap_number_map,
+                         Register scratch,
+                         Label* not_int32);
+
+  // Loads the number from object into the dst register.
+  // If |object| is neither smi nor heap number, |not_number| is jumped to
+  // with |object| still intact.
+  void LoadNumber(Register object,
+                  FPURegister dst,
+                  Register heap_number_map,
+                  Register scratch,
+                  Label* not_number);
+
+  // Loads the number from object into double_dst in the double format.
+  // Control will jump to not_int32 if the value cannot be exactly represented
+  // by a 32-bit integer.
+  // Floating point values in the 32-bit integer range that are not exact
+  // integers won't be loaded.
+ void LoadNumberAsInt32Double(Register object, + DoubleRegister double_dst, + Register heap_number_map, + Register scratch1, + Register scratch2, + FPURegister double_scratch, + Label* not_int32); + + // Loads the number from object into dst as a 32-bit integer. + // Control will jump to not_int32 if the object cannot be exactly represented + // by a 32-bit integer. + // Floating point value in the 32-bit integer range that are not exact integer + // won't be converted. + void LoadNumberAsInt32(Register object, + Register dst, + Register heap_number_map, + Register scratch1, + Register scratch2, + FPURegister double_scratch0, + FPURegister double_scratch1, + Label* not_int32); + + // Enter exit frame. + // argc - argument count to be dropped by LeaveExitFrame. + // save_doubles - saves FPU registers on stack, currently disabled. + // stack_space - extra stack space. + void EnterExitFrame(bool save_doubles, + int stack_space = 0); + + // Leave the current exit frame. + void LeaveExitFrame(bool save_doubles, + Register arg_count, + bool restore_context, + bool do_return = NO_EMIT_RETURN); + + // Get the actual activation frame alignment for target environment. + static int ActivationFrameAlignment(); + + // Make sure the stack is aligned. Only emits code in debug mode. + void AssertStackIsAligned(); + + void LoadContext(Register dst, int context_chain_length); + + // Conditionally load the cached Array transitioned map of type + // transitioned_kind from the native context if the map in register + // map_in_out is the cached Array map in the native context of + // expected_kind. + void LoadTransitionedArrayMapConditional( + ElementsKind expected_kind, + ElementsKind transitioned_kind, + Register map_in_out, + Register scratch, + Label* no_map_match); + + void LoadGlobalFunction(int index, Register function); + + // Load the initial map from the global function. The registers + // function and map can be the same, function is then overwritten. + void LoadGlobalFunctionInitialMap(Register function, + Register map, + Register scratch); + + void InitializeRootRegister() { + ExternalReference roots_array_start = + ExternalReference::roots_array_start(isolate()); + li(kRootRegister, Operand(roots_array_start)); + } + + // ------------------------------------------------------------------------- + // JavaScript invokes. + + // Invoke the JavaScript function code by either calling or jumping. + void InvokeCode(Register code, + const ParameterCount& expected, + const ParameterCount& actual, + InvokeFlag flag, + const CallWrapper& call_wrapper); + + // Invoke the JavaScript function in the given register. Changes the + // current context to the context in the function before invoking. 
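+  // Illustrative sketch of a call through the register variant:
+  //   ParameterCount actual(2);
+  //   InvokeFunction(a1, actual, CALL_FUNCTION, NullCallWrapper());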
+  void InvokeFunction(Register function,
+                      const ParameterCount& actual,
+                      InvokeFlag flag,
+                      const CallWrapper& call_wrapper);
+
+  void InvokeFunction(Register function,
+                      const ParameterCount& expected,
+                      const ParameterCount& actual,
+                      InvokeFlag flag,
+                      const CallWrapper& call_wrapper);
+
+  void InvokeFunction(Handle<JSFunction> function,
+                      const ParameterCount& expected,
+                      const ParameterCount& actual,
+                      InvokeFlag flag,
+                      const CallWrapper& call_wrapper);
+
+
+  void IsObjectJSObjectType(Register heap_object,
+                            Register map,
+                            Register scratch,
+                            Label* fail);
+
+  void IsInstanceJSObjectType(Register map,
+                              Register scratch,
+                              Label* fail);
+
+  void IsObjectJSStringType(Register object,
+                            Register scratch,
+                            Label* fail);
+
+  void IsObjectNameType(Register object,
+                        Register scratch,
+                        Label* fail);
+
+  // -------------------------------------------------------------------------
+  // Debugger Support.
+
+  void DebugBreak();
+
+  // -------------------------------------------------------------------------
+  // Exception handling.
+
+  // Push a new try handler and link into try handler chain.
+  void PushTryHandler(StackHandler::Kind kind, int handler_index);
+
+  // Unlink the stack handler on top of the stack from the try handler chain.
+  // Must preserve the result register.
+  void PopTryHandler();
+
+  // Passes thrown value to the handler of top of the try handler chain.
+  void Throw(Register value);
+
+  // Propagates an uncatchable exception to the top of the current JS stack's
+  // handler chain.
+  void ThrowUncatchable(Register value);
+
+  // Copies a fixed number of fields of heap objects from src to dst.
+  void CopyFields(Register dst, Register src, RegList temps, int field_count);
+
+  // Copies a number of bytes from src to dst. All registers are clobbered. On
+  // exit src and dst will point to the place just after where the last byte was
+  // read or written and length will be zero.
+  void CopyBytes(Register src,
+                 Register dst,
+                 Register length,
+                 Register scratch);
+
+  // Initialize fields with filler values. Fields starting at |start_offset|
+  // not including |end_offset| are overwritten with the value in |filler|. At
+  // the end of the loop, |start_offset| takes the value of |end_offset|.
+  void InitializeFieldsWithFiller(Register start_offset,
+                                  Register end_offset,
+                                  Register filler);
+
+  // -------------------------------------------------------------------------
+  // Support functions.
+
+  // Try to get the function prototype of a function and put the value in
+  // the result register. Checks that the function really is a
+  // function and jumps to the miss label if the fast checks fail. The
+  // function register will be untouched; the other registers may be
+  // clobbered.
+  void TryGetFunctionPrototype(Register function,
+                               Register result,
+                               Register scratch,
+                               Label* miss,
+                               bool miss_on_bound_function = false);
+
+  void GetObjectType(Register function,
+                     Register map,
+                     Register type_reg);
+
+  // Check if a map for a JSObject indicates that the object has fast elements.
+  // Jump to the specified label if it does not.
+  void CheckFastElements(Register map,
+                         Register scratch,
+                         Label* fail);
+
+  // Check if a map for a JSObject indicates that the object can have both smi
+  // and HeapObject elements. Jump to the specified label if it does not.
+  void CheckFastObjectElements(Register map,
+                               Register scratch,
+                               Label* fail);
+
+  // Check if a map for a JSObject indicates that the object has fast smi only
+  // elements. Jump to the specified label if it does not.
+  void CheckFastSmiElements(Register map,
+                            Register scratch,
+                            Label* fail);
+
+  // Check to see if maybe_number can be stored as a double in
+  // FastDoubleElements. If it can, store it at the index specified by key in
+  // the FastDoubleElements array elements. Otherwise jump to fail.
+  void StoreNumberToDoubleElements(Register value_reg,
+                                   Register key_reg,
+                                   Register elements_reg,
+                                   Register scratch1,
+                                   Register scratch2,
+                                   Register scratch3,
+                                   Label* fail,
+                                   int elements_offset = 0);
+
+  // Compare an object's map with the specified map and its transitioned
+  // elements maps if mode is ALLOW_ELEMENT_TRANSITION_MAPS. Jumps to
+  // "branch_to" if the result of the comparison is "cond". If multiple map
+  // compares are required, the compare sequence branches to early_success.
+  void CompareMapAndBranch(Register obj,
+                           Register scratch,
+                           Handle<Map> map,
+                           Label* early_success,
+                           Condition cond,
+                           Label* branch_to);
+
+  // As above, but the map of the object is already loaded into the register
+  // which is preserved by the code generated.
+  void CompareMapAndBranch(Register obj_map,
+                           Handle<Map> map,
+                           Label* early_success,
+                           Condition cond,
+                           Label* branch_to);
+
+  // Check if the map of an object is equal to a specified map and branch to
+  // label if not. Skip the smi check if not required (object is known to be a
+  // heap object). If mode is ALLOW_ELEMENT_TRANSITION_MAPS, then also match
+  // against maps that are ElementsKind transition maps of the specified map.
+  void CheckMap(Register obj,
+                Register scratch,
+                Handle<Map> map,
+                Label* fail,
+                SmiCheckType smi_check_type);
+
+
+  void CheckMap(Register obj,
+                Register scratch,
+                Heap::RootListIndex index,
+                Label* fail,
+                SmiCheckType smi_check_type);
+
+  // Check if the map of an object is equal to a specified map and branch to a
+  // specified target if equal. Skip the smi check if not required (object is
+  // known to be a heap object)
+  void DispatchMap(Register obj,
+                   Register scratch,
+                   Handle<Map> map,
+                   Handle<Code> success,
+                   SmiCheckType smi_check_type);
+
+
+  // Load and check the instance type of an object for being a string.
+  // Loads the type into the second argument register.
+  // Returns a condition that will be enabled if the object was a string.
+  Condition IsObjectStringType(Register obj,
+                               Register type,
+                               Register result) {
+    ld(type, FieldMemOperand(obj, HeapObject::kMapOffset));
+    lbu(type, FieldMemOperand(type, Map::kInstanceTypeOffset));
+    And(type, type, Operand(kIsNotStringMask));
+    DCHECK_EQ(0, kStringTag);
+    return eq;
+  }
+
+
+  // Picks out an array index from the hash field.
+  // Register use:
+  //   hash - holds the index's hash. Clobbered.
+  //   index - holds the overwritten index on exit.
+  void IndexFromHash(Register hash, Register index);
+
+  // Get the number of least significant bits from a register.
+  void GetLeastBitsFromSmi(Register dst, Register src, int num_least_bits);
+  void GetLeastBitsFromInt32(Register dst, Register src, int num_least_bits);
+
+  // Load the value of a number object into a FPU double register. If the
+  // object is not a number a jump to the label not_number is performed
+  // and the FPU double register is unchanged.
+  void ObjectToDoubleFPURegister(
+      Register object,
+      FPURegister value,
+      Register scratch1,
+      Register scratch2,
+      Register heap_number_map,
+      Label* not_number,
+      ObjectToDoubleFlags flags = NO_OBJECT_TO_DOUBLE_FLAGS);
+
+  // Load the value of a smi object into a FPU double register.
The register + // scratch1 can be the same register as smi in which case smi will hold the + // untagged value afterwards. + void SmiToDoubleFPURegister(Register smi, + FPURegister value, + Register scratch1); + + // ------------------------------------------------------------------------- + // Overflow handling functions. + // Usage: first call the appropriate arithmetic function, then call one of the + // jump functions with the overflow_dst register as the second parameter. + + void AdduAndCheckForOverflow(Register dst, + Register left, + Register right, + Register overflow_dst, + Register scratch = at); + + void SubuAndCheckForOverflow(Register dst, + Register left, + Register right, + Register overflow_dst, + Register scratch = at); + + void BranchOnOverflow(Label* label, + Register overflow_check, + BranchDelaySlot bd = PROTECT) { + Branch(label, lt, overflow_check, Operand(zero_reg), bd); + } + + void BranchOnNoOverflow(Label* label, + Register overflow_check, + BranchDelaySlot bd = PROTECT) { + Branch(label, ge, overflow_check, Operand(zero_reg), bd); + } + + void RetOnOverflow(Register overflow_check, BranchDelaySlot bd = PROTECT) { + Ret(lt, overflow_check, Operand(zero_reg), bd); + } + + void RetOnNoOverflow(Register overflow_check, BranchDelaySlot bd = PROTECT) { + Ret(ge, overflow_check, Operand(zero_reg), bd); + } + + // ------------------------------------------------------------------------- + // Runtime calls. + + // See comments at the beginning of CEntryStub::Generate. + inline void PrepareCEntryArgs(int num_args) { + li(s0, num_args); + li(s1, (num_args - 1) * kPointerSize); + } + + inline void PrepareCEntryFunction(const ExternalReference& ref) { + li(s2, Operand(ref)); + } + +#define COND_ARGS Condition cond = al, Register rs = zero_reg, \ +const Operand& rt = Operand(zero_reg), BranchDelaySlot bd = PROTECT + + // Call a code stub. + void CallStub(CodeStub* stub, + TypeFeedbackId ast_id = TypeFeedbackId::None(), + COND_ARGS); + + // Tail call a code stub (jump). + void TailCallStub(CodeStub* stub, COND_ARGS); + +#undef COND_ARGS + + void CallJSExitStub(CodeStub* stub); + + // Call a runtime routine. + void CallRuntime(const Runtime::Function* f, + int num_arguments, + SaveFPRegsMode save_doubles = kDontSaveFPRegs); + void CallRuntimeSaveDoubles(Runtime::FunctionId id) { + const Runtime::Function* function = Runtime::FunctionForId(id); + CallRuntime(function, function->nargs, kSaveFPRegs); + } + + // Convenience function: Same as above, but takes the fid instead. + void CallRuntime(Runtime::FunctionId id, + int num_arguments, + SaveFPRegsMode save_doubles = kDontSaveFPRegs) { + CallRuntime(Runtime::FunctionForId(id), num_arguments, save_doubles); + } + + // Convenience function: call an external reference. + void CallExternalReference(const ExternalReference& ext, + int num_arguments, + BranchDelaySlot bd = PROTECT); + + // Tail call of a runtime routine (jump). + // Like JumpToExternalReference, but also takes care of passing the number + // of parameters. + void TailCallExternalReference(const ExternalReference& ext, + int num_arguments, + int result_size); + + // Convenience function: tail call a runtime routine (jump). + void TailCallRuntime(Runtime::FunctionId fid, + int num_arguments, + int result_size); + + int CalculateStackPassedWords(int num_reg_arguments, + int num_double_arguments); + + // Before calling a C-function from generated code, align arguments on stack + // and add space for the four mips argument slots. 
+ // After aligning the frame, non-register arguments must be stored on the + // stack, after the argument-slots using helper: CFunctionArgumentOperand(). + // The argument count assumes all arguments are word sized. + // Some compilers/platforms require the stack to be aligned when calling + // C++ code. + // Needs a scratch register to do some arithmetic. This register will be + // trashed. + void PrepareCallCFunction(int num_reg_arguments, + int num_double_registers, + Register scratch); + void PrepareCallCFunction(int num_reg_arguments, + Register scratch); + + // Arguments 1-4 are placed in registers a0 thru a3 respectively. + // Arguments 5..n are stored to stack using following: + // sw(a4, CFunctionArgumentOperand(5)); + + // Calls a C function and cleans up the space for arguments allocated + // by PrepareCallCFunction. The called function is not allowed to trigger a + // garbage collection, since that might move the code and invalidate the + // return address (unless this is somehow accounted for by the called + // function). + void CallCFunction(ExternalReference function, int num_arguments); + void CallCFunction(Register function, int num_arguments); + void CallCFunction(ExternalReference function, + int num_reg_arguments, + int num_double_arguments); + void CallCFunction(Register function, + int num_reg_arguments, + int num_double_arguments); + void MovFromFloatResult(DoubleRegister dst); + void MovFromFloatParameter(DoubleRegister dst); + + // There are two ways of passing double arguments on MIPS, depending on + // whether soft or hard floating point ABI is used. These functions + // abstract parameter passing for the three different ways we call + // C functions from generated code. + void MovToFloatParameter(DoubleRegister src); + void MovToFloatParameters(DoubleRegister src1, DoubleRegister src2); + void MovToFloatResult(DoubleRegister src); + + // Calls an API function. Allocates HandleScope, extracts returned value + // from handle and propagates exceptions. Restores context. stack_space + // - space to be unwound on exit (includes the call JS arguments space and + // the additional space allocated for the fast call). + void CallApiFunctionAndReturn(Register function_address, + ExternalReference thunk_ref, + int stack_space, + MemOperand return_value_operand, + MemOperand* context_restore_operand); + + // Jump to the builtin routine. + void JumpToExternalReference(const ExternalReference& builtin, + BranchDelaySlot bd = PROTECT); + + // Invoke specified builtin JavaScript function. Adds an entry to + // the unresolved list if the name does not resolve. + void InvokeBuiltin(Builtins::JavaScript id, + InvokeFlag flag, + const CallWrapper& call_wrapper = NullCallWrapper()); + + // Store the code object for the given builtin in the target register and + // setup the function in a1. + void GetBuiltinEntry(Register target, Builtins::JavaScript id); + + // Store the function for the given builtin in the target register. + void GetBuiltinFunction(Register target, Builtins::JavaScript id); + + struct Unresolved { + int pc; + uint32_t flags; // See Bootstrapper::FixupFlags decoders/encoders. + const char* name; + }; + + Handle<Object> CodeObject() { + DCHECK(!code_object_.is_null()); + return code_object_; + } + + // Emit code for a truncating division by a constant. The dividend register is + // unchanged and at gets clobbered. Dividend and result must be different. 
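+  // The emitted code uses the usual multiply-and-shift trick: for a
+  // compile-time divisor d, MultiplierAndShift picks a magic multiplier m and
+  // shift s so that, roughly, dividend / d == (dividend * m) >> (32 + s) with
+  // a sign correction, avoiding a div instruction. Illustrative use:
+  //   TruncatingDiv(v0, a0, 3);  // v0 = a0 / 3, truncated toward zero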
+  void TruncatingDiv(Register result, Register dividend, int32_t divisor);
+
+  // -------------------------------------------------------------------------
+  // StatsCounter support.
+
+  void SetCounter(StatsCounter* counter, int value,
+                  Register scratch1, Register scratch2);
+  void IncrementCounter(StatsCounter* counter, int value,
+                        Register scratch1, Register scratch2);
+  void DecrementCounter(StatsCounter* counter, int value,
+                        Register scratch1, Register scratch2);
+
+
+  // -------------------------------------------------------------------------
+  // Debugging.
+
+  // Calls Abort(msg) if the condition cc is not satisfied.
+  // Use --debug_code to enable.
+  void Assert(Condition cc, BailoutReason reason, Register rs, Operand rt);
+  void AssertFastElements(Register elements);
+
+  // Like Assert(), but always enabled.
+  void Check(Condition cc, BailoutReason reason, Register rs, Operand rt);
+
+  // Print a message to stdout and abort execution.
+  void Abort(BailoutReason msg);
+
+  // Verify restrictions about code generated in stubs.
+  void set_generating_stub(bool value) { generating_stub_ = value; }
+  bool generating_stub() { return generating_stub_; }
+  void set_has_frame(bool value) { has_frame_ = value; }
+  bool has_frame() { return has_frame_; }
+  inline bool AllowThisStubCall(CodeStub* stub);
+
+  // ---------------------------------------------------------------------------
+  // Number utilities.
+
+  // Check whether the value of reg is a power of two and not zero. If not
+  // control continues at the label not_power_of_two. If reg is a power of two
+  // the register scratch contains the value of (reg - 1) when control falls
+  // through.
+  void JumpIfNotPowerOfTwoOrZero(Register reg,
+                                 Register scratch,
+                                 Label* not_power_of_two_or_zero);
+
+  // -------------------------------------------------------------------------
+  // Smi utilities.
+
+  // Test for overflow < 0: use BranchOnOverflow() or BranchOnNoOverflow().
+  void SmiTagCheckOverflow(Register reg, Register overflow);
+  void SmiTagCheckOverflow(Register dst, Register src, Register overflow);
+
+  void SmiTag(Register dst, Register src) {
+    STATIC_ASSERT(kSmiTag == 0);
+    if (SmiValuesAre32Bits()) {
+      STATIC_ASSERT(kSmiShift == 32);
+      dsll32(dst, src, 0);
+    } else {
+      Addu(dst, src, src);
+    }
+  }
+
+  void SmiTag(Register reg) {
+    SmiTag(reg, reg);
+  }
+
+  // Try to convert int32 to smi. If the value is too large, preserve
+  // the original value and jump to not_a_smi. Destroys scratch and
+  // sets flags.
+  void TrySmiTag(Register reg, Register scratch, Label* not_a_smi) {
+    TrySmiTag(reg, reg, scratch, not_a_smi);
+  }
+
+  void TrySmiTag(Register dst,
+                 Register src,
+                 Register scratch,
+                 Label* not_a_smi) {
+    if (SmiValuesAre32Bits()) {
+      SmiTag(dst, src);
+    } else {
+      SmiTagCheckOverflow(at, src, scratch);
+      BranchOnOverflow(not_a_smi, scratch);
+      mov(dst, at);
+    }
+  }
+
+  void SmiUntag(Register dst, Register src) {
+    if (SmiValuesAre32Bits()) {
+      STATIC_ASSERT(kSmiShift == 32);
+      dsra32(dst, src, 0);
+    } else {
+      sra(dst, src, kSmiTagSize);
+    }
+  }
+
+  void SmiUntag(Register reg) {
+    SmiUntag(reg, reg);
+  }
+
+  // Left-shifted from int32 equivalent of Smi.
+  void SmiScale(Register dst, Register src, int scale) {
+    if (SmiValuesAre32Bits()) {
+      // The int portion is upper 32-bits of 64-bit word.
+      dsra(dst, src, kSmiShift - scale);
+    } else {
+      DCHECK(scale >= kSmiTagSize);
+      sll(dst, src, scale - kSmiTagSize);
+    }
+  }
+
+  // Combine load with untagging or scaling.
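+  // E.g., an illustrative sketch: SmiLoadUntag(t0, FieldMemOperand(a1,
+  // FixedArray::kLengthOffset)) yields the untagged length without a separate
+  // SmiUntag step.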
+  void SmiLoadUntag(Register dst, MemOperand src);
+
+  void SmiLoadScale(Register dst, MemOperand src, int scale);
+
+  // Returns 2 values: the Smi and a scaled version of the int within the Smi.
+  void SmiLoadWithScale(Register d_smi,
+                        Register d_scaled,
+                        MemOperand src,
+                        int scale);
+
+  // Returns 2 values: the untagged Smi (int32) and a scaled version of that
+  // int.
+  void SmiLoadUntagWithScale(Register d_int,
+                             Register d_scaled,
+                             MemOperand src,
+                             int scale);
+
+
+  // Test if the register contains a smi.
+  inline void SmiTst(Register value, Register scratch) {
+    And(scratch, value, Operand(kSmiTagMask));
+  }
+  inline void NonNegativeSmiTst(Register value, Register scratch) {
+    And(scratch, value, Operand(kSmiTagMask | kSmiSignMask));
+  }
+
+  // Untag the source value into destination and jump if source is a smi.
+  // Source and destination can be the same register.
+  void UntagAndJumpIfSmi(Register dst, Register src, Label* smi_case);
+
+  // Untag the source value into destination and jump if source is not a smi.
+  // Source and destination can be the same register.
+  void UntagAndJumpIfNotSmi(Register dst, Register src, Label* non_smi_case);
+
+  // Jump if the register contains a smi.
+  void JumpIfSmi(Register value,
+                 Label* smi_label,
+                 Register scratch = at,
+                 BranchDelaySlot bd = PROTECT);
+
+  // Jump if the register contains a non-smi.
+  void JumpIfNotSmi(Register value,
+                    Label* not_smi_label,
+                    Register scratch = at,
+                    BranchDelaySlot bd = PROTECT);
+
+  // Jump if either of the registers contain a non-smi.
+  void JumpIfNotBothSmi(Register reg1, Register reg2, Label* on_not_both_smi);
+  // Jump if either of the registers contain a smi.
+  void JumpIfEitherSmi(Register reg1, Register reg2, Label* on_either_smi);
+
+  // Abort execution if argument is a smi, enabled via --debug-code.
+  void AssertNotSmi(Register object);
+  void AssertSmi(Register object);
+
+  // Abort execution if argument is not a string, enabled via --debug-code.
+  void AssertString(Register object);
+
+  // Abort execution if argument is not a name, enabled via --debug-code.
+  void AssertName(Register object);
+
+  // Abort execution if argument is not undefined or an AllocationSite, enabled
+  // via --debug-code.
+  void AssertUndefinedOrAllocationSite(Register object, Register scratch);
+
+  // Abort execution if reg is not the root value with the given index,
+  // enabled via --debug-code.
+  void AssertIsRoot(Register reg, Heap::RootListIndex index);
+
+  // ---------------------------------------------------------------------------
+  // HeapNumber utilities.
+
+  void JumpIfNotHeapNumber(Register object,
+                           Register heap_number_map,
+                           Register scratch,
+                           Label* on_not_heap_number);
+
+  // -------------------------------------------------------------------------
+  // String utilities.
+
+  // Generate code to do a lookup in the number string cache. If the number in
+  // the register object is found in the cache the generated code falls through
+  // with the result in the result register. The object and the result register
+  // can be the same. If the number is not found in the cache the code jumps to
+  // the label not_found with only the content of register object unchanged.
+  void LookupNumberStringCache(Register object,
+                               Register result,
+                               Register scratch1,
+                               Register scratch2,
+                               Register scratch3,
+                               Label* not_found);
+
+  // Checks if both instance types are sequential ASCII strings and jumps to
+  // label if either is not.
+  void JumpIfBothInstanceTypesAreNotSequentialAscii(
+      Register first_object_instance_type,
+      Register second_object_instance_type,
+      Register scratch1,
+      Register scratch2,
+      Label* failure);
+
+  // Check if instance type is sequential ASCII string and jump to label if
+  // it is not.
+  void JumpIfInstanceTypeIsNotSequentialAscii(Register type,
+                                              Register scratch,
+                                              Label* failure);
+
+  void JumpIfNotUniqueName(Register reg, Label* not_unique_name);
+
+  void EmitSeqStringSetCharCheck(Register string,
+                                 Register index,
+                                 Register value,
+                                 Register scratch,
+                                 uint32_t encoding_mask);
+
+  // Test that both first and second are sequential ASCII strings.
+  // Assume that they are non-smis.
+  void JumpIfNonSmisNotBothSequentialAsciiStrings(Register first,
+                                                  Register second,
+                                                  Register scratch1,
+                                                  Register scratch2,
+                                                  Label* failure);
+
+  // Test that both first and second are sequential ASCII strings.
+  // Check that they are non-smis.
+  void JumpIfNotBothSequentialAsciiStrings(Register first,
+                                           Register second,
+                                           Register scratch1,
+                                           Register scratch2,
+                                           Label* failure);
+
+  void ClampUint8(Register output_reg, Register input_reg);
+
+  void ClampDoubleToUint8(Register result_reg,
+                          DoubleRegister input_reg,
+                          DoubleRegister temp_double_reg);
+
+
+  void LoadInstanceDescriptors(Register map, Register descriptors);
+  void EnumLength(Register dst, Register map);
+  void NumberOfOwnDescriptors(Register dst, Register map);
+
+  template<typename Field>
+  void DecodeField(Register dst, Register src) {
+    Ext(dst, src, Field::kShift, Field::kSize);
+  }
+
+  template<typename Field>
+  void DecodeField(Register reg) {
+    DecodeField<Field>(reg, reg);
+  }
+
+  template<typename Field>
+  void DecodeFieldToSmi(Register dst, Register src) {
+    static const int shift = Field::kShift;
+    static const int mask = Field::kMask >> shift;
+    dsrl(dst, src, shift);
+    And(dst, dst, Operand(mask));
+    dsll32(dst, dst, 0);
+  }
+
+  template<typename Field>
+  void DecodeFieldToSmi(Register reg) {
+    DecodeFieldToSmi<Field>(reg, reg);
+  }
+
+  // Generates function and stub prologue code.
+  void StubPrologue();
+  void Prologue(bool code_pre_aging);
+
+  // Activation support.
+  void EnterFrame(StackFrame::Type type);
+  void LeaveFrame(StackFrame::Type type);
+
+  // Patch the relocated value (lui/ori pair).
+  void PatchRelocatedValue(Register li_location,
+                           Register scratch,
+                           Register new_value);
+  // Get the relocated value (loaded data) from the lui/ori pair.
+  void GetRelocatedValue(Register li_location,
+                         Register value,
+                         Register scratch);
+
+  // Expects object in a0 and returns map with validated enum cache
+  // in a0. Assumes that any other register can be used as a scratch.
+  void CheckEnumCache(Register null_value, Label* call_runtime);
+
+  // AllocationMemento support. Arrays may have an associated
+  // AllocationMemento object that can be checked for in order to pretransition
+  // to another type.
+  // On entry, receiver_reg should point to the array object.
+  // scratch_reg gets clobbered.
+  // If allocation info is present, jump to allocation_memento_present.
+ void TestJSArrayForAllocationMemento( + Register receiver_reg, + Register scratch_reg, + Label* no_memento_found, + Condition cond = al, + Label* allocation_memento_present = NULL); + + void JumpIfJSArrayHasAllocationMemento(Register receiver_reg, + Register scratch_reg, + Label* memento_found) { + Label no_memento_found; + TestJSArrayForAllocationMemento(receiver_reg, scratch_reg, + &no_memento_found, eq, memento_found); + bind(&no_memento_found); + } + + // Jumps to found label if a prototype map has dictionary elements. + void JumpIfDictionaryInPrototypeChain(Register object, Register scratch0, + Register scratch1, Label* found); + + private: + void CallCFunctionHelper(Register function, + int num_reg_arguments, + int num_double_arguments); + + void BranchAndLinkShort(int16_t offset, BranchDelaySlot bdslot = PROTECT); + void BranchAndLinkShort(int16_t offset, Condition cond, Register rs, + const Operand& rt, + BranchDelaySlot bdslot = PROTECT); + void BranchAndLinkShort(Label* L, BranchDelaySlot bdslot = PROTECT); + void BranchAndLinkShort(Label* L, Condition cond, Register rs, + const Operand& rt, + BranchDelaySlot bdslot = PROTECT); + void J(Label* L, BranchDelaySlot bdslot); + void Jr(Label* L, BranchDelaySlot bdslot); + void Jalr(Label* L, BranchDelaySlot bdslot); + + // Helper functions for generating invokes. + void InvokePrologue(const ParameterCount& expected, + const ParameterCount& actual, + Handle<Code> code_constant, + Register code_reg, + Label* done, + bool* definitely_mismatches, + InvokeFlag flag, + const CallWrapper& call_wrapper); + + // Get the code for the given builtin. Returns if able to resolve + // the function in the 'resolved' flag. + Handle<Code> ResolveBuiltin(Builtins::JavaScript id, bool* resolved); + + void InitializeNewString(Register string, + Register length, + Heap::RootListIndex map_index, + Register scratch1, + Register scratch2); + + // Helper for implementing JumpIfNotInNewSpace and JumpIfInNewSpace. + void InNewSpace(Register object, + Register scratch, + Condition cond, // eq for new space, ne otherwise. + Label* branch); + + // Helper for finding the mark bits for an address. Afterwards, the + // bitmap register points at the word with the mark bits and the mask + // the position of the first bit. Leaves addr_reg unchanged. + inline void GetMarkBits(Register addr_reg, + Register bitmap_reg, + Register mask_reg); + + // Helper for throwing exceptions. Compute a handler address and jump to + // it. See the implementation for register usage. + void JumpToHandlerEntry(); + + // Compute memory operands for safepoint stack slots. + static int SafepointRegisterStackIndex(int reg_code); + MemOperand SafepointRegisterSlot(Register reg); + MemOperand SafepointRegistersAndDoublesSlot(Register reg); + + bool generating_stub_; + bool has_frame_; + // This handle will be patched with the code object on installation. + Handle<Object> code_object_; + + // Needs access to SafepointRegisterStackIndex for compiled frame + // traversal. + friend class StandardFrame; +}; + + +// The code patcher is used to patch (typically) small parts of code e.g. for +// debugging and other types of instrumentation. When using the code patcher +// the exact number of bytes specified must be emitted. It is not legal to emit +// relocation information. If any of these constraints are violated it causes +// an assertion to fail. 
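+// Illustrative usage sketch: flip a beq at 'branch_address' to bne, patching
+// a single instruction in place (the I-cache is flushed on destruction):
+//   CodePatcher patcher(branch_address, 1);
+//   patcher.ChangeBranchCondition(ne);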
+class CodePatcher { + public: + enum FlushICache { + FLUSH, + DONT_FLUSH + }; + + CodePatcher(byte* address, + int instructions, + FlushICache flush_cache = FLUSH); + virtual ~CodePatcher(); + + // Macro assembler to emit code. + MacroAssembler* masm() { return &masm_; } + + // Emit an instruction directly. + void Emit(Instr instr); + + // Emit an address directly. + void Emit(Address addr); + + // Change the condition part of an instruction leaving the rest of the current + // instruction unchanged. + void ChangeBranchCondition(Condition cond); + + private: + byte* address_; // The address of the code being patched. + int size_; // Number of bytes of the expected patch size. + MacroAssembler masm_; // Macro assembler used to generate the code. + FlushICache flush_cache_; // Whether to flush the I cache after patching. +}; + + + +#ifdef GENERATED_CODE_COVERAGE +#define CODE_COVERAGE_STRINGIFY(x) #x +#define CODE_COVERAGE_TOSTRING(x) CODE_COVERAGE_STRINGIFY(x) +#define __FILE_LINE__ __FILE__ ":" CODE_COVERAGE_TOSTRING(__LINE__) +#define ACCESS_MASM(masm) masm->stop(__FILE_LINE__); masm-> +#else +#define ACCESS_MASM(masm) masm-> +#endif + +} } // namespace v8::internal + +#endif // V8_MIPS_MACRO_ASSEMBLER_MIPS_H_ diff --git a/deps/v8/src/mips64/regexp-macro-assembler-mips64.cc b/deps/v8/src/mips64/regexp-macro-assembler-mips64.cc new file mode 100644 index 00000000000..bcd133424b7 --- /dev/null +++ b/deps/v8/src/mips64/regexp-macro-assembler-mips64.cc @@ -0,0 +1,1371 @@ +// Copyright 2012 the V8 project authors. All rights reserved. +// Use of this source code is governed by a BSD-style license that can be +// found in the LICENSE file. + +#include "src/v8.h" + +#if V8_TARGET_ARCH_MIPS64 + +#include "src/code-stubs.h" +#include "src/log.h" +#include "src/macro-assembler.h" +#include "src/regexp-macro-assembler.h" +#include "src/regexp-stack.h" +#include "src/unicode.h" + +#include "src/mips64/regexp-macro-assembler-mips64.h" + +namespace v8 { +namespace internal { + +#ifndef V8_INTERPRETED_REGEXP +/* + * This assembler uses the following register assignment convention + * - t3 : Temporarily stores the index of capture start after a matching pass + * for a global regexp. + * - a5 : Pointer to current code object (Code*) including heap object tag. + * - a6 : Current position in input, as negative offset from end of string. + * Please notice that this is the byte offset, not the character offset! + * - a7 : Currently loaded character. Must be loaded using + * LoadCurrentCharacter before using any of the dispatch methods. + * - t0 : Points to tip of backtrack stack + * - t1 : Unused. + * - t2 : End of input (points to byte after last character in input). + * - fp : Frame pointer. Used to access arguments, local variables and + * RegExp registers. + * - sp : Points to tip of C stack. + * + * The remaining registers are free for computations. + * Each call to a public method should retain this convention. + * + * TODO(plind): O32 documented here with intent of having single 32/64 codebase + * in the future. + * + * The O32 stack will have the following structure: + * + * - fp[76] Isolate* isolate (address of the current isolate) + * - fp[72] direct_call (if 1, direct call from JavaScript code, + * if 0, call through the runtime system). + * - fp[68] stack_area_base (High end of the memory area to use as + * backtracking stack). + * - fp[64] capture array size (may fit multiple sets of matches) + * - fp[60] int* capture_array (int[num_saved_registers_], for output). 
+ * - fp[44..59] MIPS O32 four argument slots
+ * - fp[40] secondary link/return address used by native call.
+ * --- sp when called ---
+ * - fp[36] return address (ra).
+ * - fp[32] old frame pointer (fp).
+ * - fp[0..31] backup of registers s0..s7.
+ * --- frame pointer ----
+ * - fp[-4] end of input (address of end of string).
+ * - fp[-8] start of input (address of first character in string).
+ * - fp[-12] start index (character index of start).
+ * - fp[-16] void* input_string (location of a handle containing the string).
+ * - fp[-20] success counter (only for global regexps to count matches).
+ * - fp[-24] Offset of location before start of input (effectively character
+ * position -1). Used to initialize capture registers to a
+ * non-position.
+ * - fp[-28] At start (if 1, we are starting at the start of the
+ * string, otherwise 0)
+ * - fp[-32] register 0 (Only positions must be stored in the first
+ * - register 1 num_saved_registers_ registers)
+ * - ...
+ * - register num_registers-1
+ * --- sp ---
+ *
+ *
+ * The N64 stack will have the following structure:
+ *
+ * - fp[88] Isolate* isolate (address of the current isolate) kIsolate
+ * - fp[80] secondary link/return address used by exit frame on native call. kSecondaryReturnAddress
+ * kStackFrameHeader
+ * --- sp when called ---
+ * - fp[72] ra Return from RegExp code (ra). kReturnAddress
+ * - fp[64] s9, old-fp Old fp, callee-saved (s9).
+ * - fp[0..63] s0..s7 Callee-saved registers s0..s7.
+ * --- frame pointer ----
+ * - fp[-8] direct_call (1 = direct call from JS, 0 = from runtime) kDirectCall
+ * - fp[-16] stack_base (Top of backtracking stack). kStackHighEnd
+ * - fp[-24] capture array size (may fit multiple sets of matches) kNumOutputRegisters
+ * - fp[-32] int* capture_array (int[num_saved_registers_], for output). kRegisterOutput
+ * - fp[-40] end of input (address of end of string). kInputEnd
+ * - fp[-48] start of input (address of first character in string). kInputStart
+ * - fp[-56] start index (character index of start). kStartIndex
+ * - fp[-64] void* input_string (location of a handle containing the string). kInputString
+ * - fp[-72] success counter (only for global regexps to count matches). kSuccessfulCaptures
+ * - fp[-80] Offset of location before start of input (effectively character kInputStartMinusOne
+ * position -1). Used to initialize capture registers to a
+ * non-position.
+ * --------- The following output registers are 32-bit values. ---------
+ * - fp[-88] register 0 (Only positions must be stored in the first kRegisterZero
+ * - register 1 num_saved_registers_ registers)
+ * - ...
+ * - register num_registers-1
+ * --- sp ---
+ *
+ * The first num_saved_registers_ registers are initialized to point to
+ * "character -1" in the string (i.e., char_size() bytes before the first
+ * character of the string). The remaining registers start out as garbage.
+ *
+ * The data up to the return address must be placed there by the calling
+ * code, and the remaining arguments are passed in registers, e.g. by calling
+ * the code entry as cast to a function with the signature:
+ * int (*match)(String* input_string,
+ * int start_index,
+ * Address start,
+ * Address end,
+ * Address secondary_return_address, // Only used by native call.
+ * int* capture_output_array,
+ * byte* stack_area_base,
+ * bool direct_call = false,
+ * void* return_address,
+ * Isolate* isolate);
+ * The call is performed by NativeRegExpMacroAssembler::Execute()
+ * (in regexp-macro-assembler.cc) via the CALL_GENERATED_REGEXP_CODE macro
+ * in mips64/simulator-mips64.h.
+ * When calling as a non-direct call (i.e., from C++ code), the return address
+ * area is overwritten with the ra register by the RegExp code. When doing a
+ * direct call from generated code, the return address is placed there by
+ * the calling code, as in a normal exit frame.
+ */
+
+#define __ ACCESS_MASM(masm_)
+
+RegExpMacroAssemblerMIPS::RegExpMacroAssemblerMIPS(
+ Mode mode,
+ int registers_to_save,
+ Zone* zone)
+ : NativeRegExpMacroAssembler(zone),
+ masm_(new MacroAssembler(zone->isolate(), NULL, kRegExpCodeSize)),
+ mode_(mode),
+ num_registers_(registers_to_save),
+ num_saved_registers_(registers_to_save),
+ entry_label_(),
+ start_label_(),
+ success_label_(),
+ backtrack_label_(),
+ exit_label_(),
+ internal_failure_label_() {
+ DCHECK_EQ(0, registers_to_save % 2);
+ __ jmp(&entry_label_); // We'll write the entry code later.
+ // If the code gets too big or corrupted, an internal exception will be
+ // raised, and we will exit right away.
+ __ bind(&internal_failure_label_);
+ __ li(v0, Operand(FAILURE));
+ __ Ret();
+ __ bind(&start_label_); // And then continue from here.
+}
+
+
+RegExpMacroAssemblerMIPS::~RegExpMacroAssemblerMIPS() {
+ delete masm_;
+ // Unuse labels in case we throw away the assembler without calling GetCode.
+ entry_label_.Unuse();
+ start_label_.Unuse();
+ success_label_.Unuse();
+ backtrack_label_.Unuse();
+ exit_label_.Unuse();
+ check_preempt_label_.Unuse();
+ stack_overflow_label_.Unuse();
+ internal_failure_label_.Unuse();
+}
+
+
+int RegExpMacroAssemblerMIPS::stack_limit_slack() {
+ return RegExpStack::kStackLimitSlack;
+}
+
+
+void RegExpMacroAssemblerMIPS::AdvanceCurrentPosition(int by) {
+ if (by != 0) {
+ __ Daddu(current_input_offset(),
+ current_input_offset(), Operand(by * char_size()));
+ }
+}
+
+
+void RegExpMacroAssemblerMIPS::AdvanceRegister(int reg, int by) {
+ DCHECK(reg >= 0);
+ DCHECK(reg < num_registers_);
+ if (by != 0) {
+ __ ld(a0, register_location(reg));
+ __ Daddu(a0, a0, Operand(by));
+ __ sd(a0, register_location(reg));
+ }
+}
+
+
+void RegExpMacroAssemblerMIPS::Backtrack() {
+ CheckPreemption();
+ // Pop Code* offset from backtrack stack, add Code* and jump to location.
+ Pop(a0);
+ __ Daddu(a0, a0, code_pointer());
+ __ Jump(a0);
+}
+
+
+void RegExpMacroAssemblerMIPS::Bind(Label* label) {
+ __ bind(label);
+}
+
+
+void RegExpMacroAssemblerMIPS::CheckCharacter(uint32_t c, Label* on_equal) {
+ BranchOrBacktrack(on_equal, eq, current_character(), Operand(c));
+}
+
+
+void RegExpMacroAssemblerMIPS::CheckCharacterGT(uc16 limit, Label* on_greater) {
+ BranchOrBacktrack(on_greater, gt, current_character(), Operand(limit));
+}
+
+
+void RegExpMacroAssemblerMIPS::CheckAtStart(Label* on_at_start) {
+ Label not_at_start;
+ // Did we start the match at the start of the string at all?
+ __ lw(a0, MemOperand(frame_pointer(), kStartIndex));
+ BranchOrBacktrack(&not_at_start, ne, a0, Operand(zero_reg));
+
+ // If we did, are we still at the start of the input?
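+ // Sketched in plain C (illustration; current_input_offset() is kept as a
+ // negative byte offset from the string end):
+ //   at_start = (input_end + current_offset) == input_start;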
+ __ ld(a1, MemOperand(frame_pointer(), kInputStart)); + __ Daddu(a0, end_of_input_address(), Operand(current_input_offset())); + BranchOrBacktrack(on_at_start, eq, a0, Operand(a1)); + __ bind(¬_at_start); +} + + +void RegExpMacroAssemblerMIPS::CheckNotAtStart(Label* on_not_at_start) { + // Did we start the match at the start of the string at all? + __ lw(a0, MemOperand(frame_pointer(), kStartIndex)); + BranchOrBacktrack(on_not_at_start, ne, a0, Operand(zero_reg)); + // If we did, are we still at the start of the input? + __ ld(a1, MemOperand(frame_pointer(), kInputStart)); + __ Daddu(a0, end_of_input_address(), Operand(current_input_offset())); + BranchOrBacktrack(on_not_at_start, ne, a0, Operand(a1)); +} + + +void RegExpMacroAssemblerMIPS::CheckCharacterLT(uc16 limit, Label* on_less) { + BranchOrBacktrack(on_less, lt, current_character(), Operand(limit)); +} + + +void RegExpMacroAssemblerMIPS::CheckGreedyLoop(Label* on_equal) { + Label backtrack_non_equal; + __ lw(a0, MemOperand(backtrack_stackpointer(), 0)); + __ Branch(&backtrack_non_equal, ne, current_input_offset(), Operand(a0)); + __ Daddu(backtrack_stackpointer(), + backtrack_stackpointer(), + Operand(kIntSize)); + __ bind(&backtrack_non_equal); + BranchOrBacktrack(on_equal, eq, current_input_offset(), Operand(a0)); +} + + +void RegExpMacroAssemblerMIPS::CheckNotBackReferenceIgnoreCase( + int start_reg, + Label* on_no_match) { + Label fallthrough; + __ ld(a0, register_location(start_reg)); // Index of start of capture. + __ ld(a1, register_location(start_reg + 1)); // Index of end of capture. + __ Dsubu(a1, a1, a0); // Length of capture. + + // If length is zero, either the capture is empty or it is not participating. + // In either case succeed immediately. + __ Branch(&fallthrough, eq, a1, Operand(zero_reg)); + + __ Daddu(t1, a1, current_input_offset()); + // Check that there are enough characters left in the input. + BranchOrBacktrack(on_no_match, gt, t1, Operand(zero_reg)); + + if (mode_ == ASCII) { + Label success; + Label fail; + Label loop_check; + + // a0 - offset of start of capture. + // a1 - length of capture. + __ Daddu(a0, a0, Operand(end_of_input_address())); + __ Daddu(a2, end_of_input_address(), Operand(current_input_offset())); + __ Daddu(a1, a0, Operand(a1)); + + // a0 - Address of start of capture. + // a1 - Address of end of capture. + // a2 - Address of current input position. + + Label loop; + __ bind(&loop); + __ lbu(a3, MemOperand(a0, 0)); + __ daddiu(a0, a0, char_size()); + __ lbu(a4, MemOperand(a2, 0)); + __ daddiu(a2, a2, char_size()); + + __ Branch(&loop_check, eq, a4, Operand(a3)); + + // Mismatch, try case-insensitive match (converting letters to lower-case). + __ Or(a3, a3, Operand(0x20)); // Convert capture character to lower-case. + __ Or(a4, a4, Operand(0x20)); // Also convert input character. + __ Branch(&fail, ne, a4, Operand(a3)); + __ Dsubu(a3, a3, Operand('a')); + __ Branch(&loop_check, ls, a3, Operand('z' - 'a')); + // Latin-1: Check for values in range [224,254] but not 247. + __ Dsubu(a3, a3, Operand(224 - 'a')); + // Weren't Latin-1 letters. + __ Branch(&fail, hi, a3, Operand(254 - 224)); + // Check for 247. + __ Branch(&fail, eq, a3, Operand(247 - 224)); + + __ bind(&loop_check); + __ Branch(&loop, lt, a0, Operand(a1)); + __ jmp(&success); + + __ bind(&fail); + GoTo(on_no_match); + + __ bind(&success); + // Compute new value of character position after the matched part. 
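+ // Illustrative sketch: because the offset is kept negative relative to the
+ // string end, the new value is simply a2 - input_end, i.e. in C:
+ //   current_offset = current_pos_address - input_end; // <= 0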
+ __ Dsubu(current_input_offset(), a2, end_of_input_address()); + } else { + DCHECK(mode_ == UC16); + // Put regexp engine registers on stack. + RegList regexp_registers_to_retain = current_input_offset().bit() | + current_character().bit() | backtrack_stackpointer().bit(); + __ MultiPush(regexp_registers_to_retain); + + int argument_count = 4; + __ PrepareCallCFunction(argument_count, a2); + + // a0 - offset of start of capture. + // a1 - length of capture. + + // Put arguments into arguments registers. + // Parameters are + // a0: Address byte_offset1 - Address captured substring's start. + // a1: Address byte_offset2 - Address of current character position. + // a2: size_t byte_length - length of capture in bytes(!). + // a3: Isolate* isolate. + + // Address of start of capture. + __ Daddu(a0, a0, Operand(end_of_input_address())); + // Length of capture. + __ mov(a2, a1); + // Save length in callee-save register for use on return. + __ mov(s3, a1); + // Address of current input position. + __ Daddu(a1, current_input_offset(), Operand(end_of_input_address())); + // Isolate. + __ li(a3, Operand(ExternalReference::isolate_address(masm_->isolate()))); + + { + AllowExternalCallThatCantCauseGC scope(masm_); + ExternalReference function = + ExternalReference::re_case_insensitive_compare_uc16(masm_->isolate()); + __ CallCFunction(function, argument_count); + } + + // Restore regexp engine registers. + __ MultiPop(regexp_registers_to_retain); + __ li(code_pointer(), Operand(masm_->CodeObject()), CONSTANT_SIZE); + __ ld(end_of_input_address(), MemOperand(frame_pointer(), kInputEnd)); + + // Check if function returned non-zero for success or zero for failure. + BranchOrBacktrack(on_no_match, eq, v0, Operand(zero_reg)); + // On success, increment position by length of capture. + __ Daddu(current_input_offset(), current_input_offset(), Operand(s3)); + } + + __ bind(&fallthrough); +} + + +void RegExpMacroAssemblerMIPS::CheckNotBackReference( + int start_reg, + Label* on_no_match) { + Label fallthrough; + Label success; + + // Find length of back-referenced capture. + __ ld(a0, register_location(start_reg)); + __ ld(a1, register_location(start_reg + 1)); + __ Dsubu(a1, a1, a0); // Length to check. + // Succeed on empty capture (including no capture). + __ Branch(&fallthrough, eq, a1, Operand(zero_reg)); + + __ Daddu(t1, a1, current_input_offset()); + // Check that there are enough characters left in the input. + BranchOrBacktrack(on_no_match, gt, t1, Operand(zero_reg)); + + // Compute pointers to match string and capture string. + __ Daddu(a0, a0, Operand(end_of_input_address())); + __ Daddu(a2, end_of_input_address(), Operand(current_input_offset())); + __ Daddu(a1, a1, Operand(a0)); + + Label loop; + __ bind(&loop); + if (mode_ == ASCII) { + __ lbu(a3, MemOperand(a0, 0)); + __ daddiu(a0, a0, char_size()); + __ lbu(a4, MemOperand(a2, 0)); + __ daddiu(a2, a2, char_size()); + } else { + DCHECK(mode_ == UC16); + __ lhu(a3, MemOperand(a0, 0)); + __ daddiu(a0, a0, char_size()); + __ lhu(a4, MemOperand(a2, 0)); + __ daddiu(a2, a2, char_size()); + } + BranchOrBacktrack(on_no_match, ne, a3, Operand(a4)); + __ Branch(&loop, lt, a0, Operand(a1)); + + // Move current character position to position after match. 
+ __ Dsubu(current_input_offset(), a2, end_of_input_address()); + __ bind(&fallthrough); +} + + +void RegExpMacroAssemblerMIPS::CheckNotCharacter(uint32_t c, + Label* on_not_equal) { + BranchOrBacktrack(on_not_equal, ne, current_character(), Operand(c)); +} + + +void RegExpMacroAssemblerMIPS::CheckCharacterAfterAnd(uint32_t c, + uint32_t mask, + Label* on_equal) { + __ And(a0, current_character(), Operand(mask)); + Operand rhs = (c == 0) ? Operand(zero_reg) : Operand(c); + BranchOrBacktrack(on_equal, eq, a0, rhs); +} + + +void RegExpMacroAssemblerMIPS::CheckNotCharacterAfterAnd(uint32_t c, + uint32_t mask, + Label* on_not_equal) { + __ And(a0, current_character(), Operand(mask)); + Operand rhs = (c == 0) ? Operand(zero_reg) : Operand(c); + BranchOrBacktrack(on_not_equal, ne, a0, rhs); +} + + +void RegExpMacroAssemblerMIPS::CheckNotCharacterAfterMinusAnd( + uc16 c, + uc16 minus, + uc16 mask, + Label* on_not_equal) { + DCHECK(minus < String::kMaxUtf16CodeUnit); + __ Dsubu(a0, current_character(), Operand(minus)); + __ And(a0, a0, Operand(mask)); + BranchOrBacktrack(on_not_equal, ne, a0, Operand(c)); +} + + +void RegExpMacroAssemblerMIPS::CheckCharacterInRange( + uc16 from, + uc16 to, + Label* on_in_range) { + __ Dsubu(a0, current_character(), Operand(from)); + // Unsigned lower-or-same condition. + BranchOrBacktrack(on_in_range, ls, a0, Operand(to - from)); +} + + +void RegExpMacroAssemblerMIPS::CheckCharacterNotInRange( + uc16 from, + uc16 to, + Label* on_not_in_range) { + __ Dsubu(a0, current_character(), Operand(from)); + // Unsigned higher condition. + BranchOrBacktrack(on_not_in_range, hi, a0, Operand(to - from)); +} + + +void RegExpMacroAssemblerMIPS::CheckBitInTable( + Handle<ByteArray> table, + Label* on_bit_set) { + __ li(a0, Operand(table)); + if (mode_ != ASCII || kTableMask != String::kMaxOneByteCharCode) { + __ And(a1, current_character(), Operand(kTableSize - 1)); + __ Daddu(a0, a0, a1); + } else { + __ Daddu(a0, a0, current_character()); + } + + __ lbu(a0, FieldMemOperand(a0, ByteArray::kHeaderSize)); + BranchOrBacktrack(on_bit_set, ne, a0, Operand(zero_reg)); +} + + +bool RegExpMacroAssemblerMIPS::CheckSpecialCharacterClass(uc16 type, + Label* on_no_match) { + // Range checks (c in min..max) are generally implemented by an unsigned + // (c - min) <= (max - min) check. + switch (type) { + case 's': + // Match space-characters. + if (mode_ == ASCII) { + // One byte space characters are '\t'..'\r', ' ' and \u00a0. + Label success; + __ Branch(&success, eq, current_character(), Operand(' ')); + // Check range 0x09..0x0d. + __ Dsubu(a0, current_character(), Operand('\t')); + __ Branch(&success, ls, a0, Operand('\r' - '\t')); + // \u00a0 (NBSP). + BranchOrBacktrack(on_no_match, ne, a0, Operand(0x00a0 - '\t')); + __ bind(&success); + return true; + } + return false; + case 'S': + // The emitted code for generic character classes is good enough. + return false; + case 'd': + // Match ASCII digits ('0'..'9'). + __ Dsubu(a0, current_character(), Operand('0')); + BranchOrBacktrack(on_no_match, hi, a0, Operand('9' - '0')); + return true; + case 'D': + // Match non ASCII-digits. + __ Dsubu(a0, current_character(), Operand('0')); + BranchOrBacktrack(on_no_match, ls, a0, Operand('9' - '0')); + return true; + case '.': { + // Match non-newlines (not 0x0a('\n'), 0x0d('\r'), 0x2028 and 0x2029). + __ Xor(a0, current_character(), Operand(0x01)); + // See if current character is '\n'^1 or '\r'^1, i.e., 0x0b or 0x0c. 
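+ // The same test sketched in plain C (illustration only):
+ //   uint32_t x = (c ^ 0x01) - 0x0b; // '\n' -> 0, '\r' -> 1
+ //   bool is_ascii_newline = x <= 0x0c - 0x0b;
+ // so a single unsigned comparison covers both newline characters.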
+ __ Dsubu(a0, a0, Operand(0x0b));
+ BranchOrBacktrack(on_no_match, ls, a0, Operand(0x0c - 0x0b));
+ if (mode_ == UC16) {
+ // Compare original value to 0x2028 and 0x2029, using the already
+ // computed (current_char ^ 0x01 - 0x0b). I.e., check for
+ // 0x201d (0x2028 - 0x0b) or 0x201e.
+ __ Dsubu(a0, a0, Operand(0x2028 - 0x0b));
+ BranchOrBacktrack(on_no_match, ls, a0, Operand(1));
+ }
+ return true;
+ }
+ case 'n': {
+ // Match newlines (0x0a('\n'), 0x0d('\r'), 0x2028 and 0x2029).
+ __ Xor(a0, current_character(), Operand(0x01));
+ // See if current character is '\n'^1 or '\r'^1, i.e., 0x0b or 0x0c.
+ __ Dsubu(a0, a0, Operand(0x0b));
+ if (mode_ == ASCII) {
+ BranchOrBacktrack(on_no_match, hi, a0, Operand(0x0c - 0x0b));
+ } else {
+ Label done;
+ BranchOrBacktrack(&done, ls, a0, Operand(0x0c - 0x0b));
+ // Compare original value to 0x2028 and 0x2029, using the already
+ // computed (current_char ^ 0x01 - 0x0b). I.e., check for
+ // 0x201d (0x2028 - 0x0b) or 0x201e.
+ __ Dsubu(a0, a0, Operand(0x2028 - 0x0b));
+ BranchOrBacktrack(on_no_match, hi, a0, Operand(1));
+ __ bind(&done);
+ }
+ return true;
+ }
+ case 'w': {
+ if (mode_ != ASCII) {
+ // Table is 128 entries, so all ASCII characters can be tested.
+ BranchOrBacktrack(on_no_match, hi, current_character(), Operand('z'));
+ }
+ ExternalReference map = ExternalReference::re_word_character_map();
+ __ li(a0, Operand(map));
+ __ Daddu(a0, a0, current_character());
+ __ lbu(a0, MemOperand(a0, 0));
+ BranchOrBacktrack(on_no_match, eq, a0, Operand(zero_reg));
+ return true;
+ }
+ case 'W': {
+ Label done;
+ if (mode_ != ASCII) {
+ // Table is 128 entries, so all ASCII characters can be tested.
+ __ Branch(&done, hi, current_character(), Operand('z'));
+ }
+ ExternalReference map = ExternalReference::re_word_character_map();
+ __ li(a0, Operand(map));
+ __ Daddu(a0, a0, current_character());
+ __ lbu(a0, MemOperand(a0, 0));
+ BranchOrBacktrack(on_no_match, ne, a0, Operand(zero_reg));
+ if (mode_ != ASCII) {
+ __ bind(&done);
+ }
+ return true;
+ }
+ case '*':
+ // Match any character.
+ return true;
+ // No custom implementation (yet): s(UC16), S(UC16).
+ default:
+ return false;
+ }
+}
+
+
+void RegExpMacroAssemblerMIPS::Fail() {
+ __ li(v0, Operand(FAILURE));
+ __ jmp(&exit_label_);
+}
+
+
+Handle<HeapObject> RegExpMacroAssemblerMIPS::GetCode(Handle<String> source) {
+ Label return_v0;
+ if (masm_->has_exception()) {
+ // If the code gets corrupted due to long regular expressions and lack of
+ // space on trampolines, an internal exception flag is set. If this case
+ // is detected, we will jump into the exit sequence right away.
+ __ bind_to(&entry_label_, internal_failure_label_.pos());
+ } else {
+ // Finalize code - write the entry point code now that we know how many
+ // registers we need.
+
+ // Entry code:
+ __ bind(&entry_label_);
+
+ // Tell the system that we have a stack frame. Because the type is MANUAL,
+ // no code is generated.
+ FrameScope scope(masm_, StackFrame::MANUAL);
+
+ // Actually emit code to start a new stack frame.
+ // Push arguments
+ // Save callee-save registers.
+ // Start new stack frame.
+ // Store link register in existing stack-cell.
+ // Order here should correspond to order of offset constants in header file.
+ // TODO(plind): we save s0..s7, but ONLY use s3 here - use the regs
+ // or don't save.
+ RegList registers_to_retain = s0.bit() | s1.bit() | s2.bit() | + s3.bit() | s4.bit() | s5.bit() | s6.bit() | s7.bit() | fp.bit(); + RegList argument_registers = a0.bit() | a1.bit() | a2.bit() | a3.bit(); + + if (kMipsAbi == kN64) { + // TODO(plind): Should probably alias a4-a7, for clarity. + argument_registers |= a4.bit() | a5.bit() | a6.bit() | a7.bit(); + } + + __ MultiPush(argument_registers | registers_to_retain | ra.bit()); + // Set frame pointer in space for it if this is not a direct call + // from generated code. + // TODO(plind): this 8 is the # of argument regs, should have definition. + __ Daddu(frame_pointer(), sp, Operand(8 * kPointerSize)); + __ mov(a0, zero_reg); + __ push(a0); // Make room for success counter and initialize it to 0. + __ push(a0); // Make room for "position - 1" constant (value irrelevant). + + // Check if we have space on the stack for registers. + Label stack_limit_hit; + Label stack_ok; + + ExternalReference stack_limit = + ExternalReference::address_of_stack_limit(masm_->isolate()); + __ li(a0, Operand(stack_limit)); + __ ld(a0, MemOperand(a0)); + __ Dsubu(a0, sp, a0); + // Handle it if the stack pointer is already below the stack limit. + __ Branch(&stack_limit_hit, le, a0, Operand(zero_reg)); + // Check if there is room for the variable number of registers above + // the stack limit. + __ Branch(&stack_ok, hs, a0, Operand(num_registers_ * kPointerSize)); + // Exit with OutOfMemory exception. There is not enough space on the stack + // for our working registers. + __ li(v0, Operand(EXCEPTION)); + __ jmp(&return_v0); + + __ bind(&stack_limit_hit); + CallCheckStackGuardState(a0); + // If returned value is non-zero, we exit with the returned value as result. + __ Branch(&return_v0, ne, v0, Operand(zero_reg)); + + __ bind(&stack_ok); + // Allocate space on stack for registers. + __ Dsubu(sp, sp, Operand(num_registers_ * kPointerSize)); + // Load string end. + __ ld(end_of_input_address(), MemOperand(frame_pointer(), kInputEnd)); + // Load input start. + __ ld(a0, MemOperand(frame_pointer(), kInputStart)); + // Find negative length (offset of start relative to end). + __ Dsubu(current_input_offset(), a0, end_of_input_address()); + // Set a0 to address of char before start of the input string + // (effectively string position -1). + __ ld(a1, MemOperand(frame_pointer(), kStartIndex)); + __ Dsubu(a0, current_input_offset(), Operand(char_size())); + __ dsll(t1, a1, (mode_ == UC16) ? 1 : 0); + __ Dsubu(a0, a0, t1); + // Store this value in a local variable, for use when clearing + // position registers. + __ sd(a0, MemOperand(frame_pointer(), kInputStartMinusOne)); + + // Initialize code pointer register + __ li(code_pointer(), Operand(masm_->CodeObject()), CONSTANT_SIZE); + + Label load_char_start_regexp, start_regexp; + // Load newline if index is at start, previous character otherwise. + __ Branch(&load_char_start_regexp, ne, a1, Operand(zero_reg)); + __ li(current_character(), Operand('\n')); + __ jmp(&start_regexp); + + // Global regexp restarts matching here. + __ bind(&load_char_start_regexp); + // Load previous char as initial value of current character register. + LoadCurrentCharacterUnchecked(-1, 1); + __ bind(&start_regexp); + + // Initialize on-stack registers. + if (num_saved_registers_ > 0) { // Always is, if generated from a regexp. + // Fill saved registers with initial value = start offset - 1. + if (num_saved_registers_ > 8) { + // Address of register 0. 
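+ // (Sketch of the rationale: a loop keeps the generated code short when
+ // there are many registers, while the unrolled stores in the else-branch
+ // favor the common case of only a few captures.)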
+ __ Daddu(a1, frame_pointer(), Operand(kRegisterZero));
+ __ li(a2, Operand(num_saved_registers_));
+ Label init_loop;
+ __ bind(&init_loop);
+ __ sd(a0, MemOperand(a1));
+ __ Daddu(a1, a1, Operand(-kPointerSize));
+ __ Dsubu(a2, a2, Operand(1));
+ __ Branch(&init_loop, ne, a2, Operand(zero_reg));
+ } else {
+ for (int i = 0; i < num_saved_registers_; i++) {
+ __ sd(a0, register_location(i));
+ }
+ }
+ }
+
+ // Initialize backtrack stack pointer.
+ __ ld(backtrack_stackpointer(), MemOperand(frame_pointer(), kStackHighEnd));
+
+ __ jmp(&start_label_);
+
+
+ // Exit code:
+ if (success_label_.is_linked()) {
+ // Save captures when successful.
+ __ bind(&success_label_);
+ if (num_saved_registers_ > 0) {
+ // Copy captures to output.
+ __ ld(a1, MemOperand(frame_pointer(), kInputStart));
+ __ ld(a0, MemOperand(frame_pointer(), kRegisterOutput));
+ __ ld(a2, MemOperand(frame_pointer(), kStartIndex));
+ __ Dsubu(a1, end_of_input_address(), a1);
+ // a1 is length of input in bytes.
+ if (mode_ == UC16) {
+ __ dsrl(a1, a1, 1);
+ }
+ // a1 is length of input in characters.
+ __ Daddu(a1, a1, Operand(a2));
+ // a1 is length of string in characters.
+
+ DCHECK_EQ(0, num_saved_registers_ % 2);
+ // Always an even number of capture registers. This allows us to
+ // unroll the loop once to add an operation between a load of a register
+ // and the following use of that register.
+ for (int i = 0; i < num_saved_registers_; i += 2) {
+ __ ld(a2, register_location(i));
+ __ ld(a3, register_location(i + 1));
+ if (i == 0 && global_with_zero_length_check()) {
+ // Keep capture start in t3 for the zero-length check later.
+ __ mov(t3, a2);
+ }
+ if (mode_ == UC16) {
+ __ dsra(a2, a2, 1);
+ __ Daddu(a2, a2, a1);
+ __ dsra(a3, a3, 1);
+ __ Daddu(a3, a3, a1);
+ } else {
+ __ Daddu(a2, a1, Operand(a2));
+ __ Daddu(a3, a1, Operand(a3));
+ }
+ // V8 expects the output to be an int32_t array.
+ __ sw(a2, MemOperand(a0));
+ __ Daddu(a0, a0, kIntSize);
+ __ sw(a3, MemOperand(a0));
+ __ Daddu(a0, a0, kIntSize);
+ }
+ }
+
+ if (global()) {
+ // Restart matching if the regular expression is flagged as global.
+ __ ld(a0, MemOperand(frame_pointer(), kSuccessfulCaptures));
+ __ lw(a1, MemOperand(frame_pointer(), kNumOutputRegisters));
+ __ ld(a2, MemOperand(frame_pointer(), kRegisterOutput));
+ // Increment success counter.
+ __ Daddu(a0, a0, 1);
+ __ sd(a0, MemOperand(frame_pointer(), kSuccessfulCaptures));
+ // Capture results have been stored, so the number of remaining global
+ // output registers is reduced by the number of stored captures.
+ __ Dsubu(a1, a1, num_saved_registers_);
+ // Check whether we have enough room for another set of capture results.
+ __ mov(v0, a0);
+ __ Branch(&return_v0, lt, a1, Operand(num_saved_registers_));
+
+ __ sd(a1, MemOperand(frame_pointer(), kNumOutputRegisters));
+ // Advance the location for output.
+ __ Daddu(a2, a2, num_saved_registers_ * kIntSize);
+ __ sd(a2, MemOperand(frame_pointer(), kRegisterOutput));
+
+ // Prepare a0 to initialize registers with its value in the next run.
+ __ ld(a0, MemOperand(frame_pointer(), kInputStartMinusOne));
+
+ if (global_with_zero_length_check()) {
+ // Special case for zero-length matches.
+ // t3: capture start index
+ // Not a zero-length match, restart.
+ __ Branch(
+ &load_char_start_regexp, ne, current_input_offset(), Operand(t3));
+ // Offset from the end is zero if we already reached the end.
+ __ Branch(&exit_label_, eq, current_input_offset(),
+ Operand(zero_reg));
+ // Advance current position after a zero-length match.
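+ // That is, step over exactly one character: 2 bytes in UC16 mode and
+ // 1 byte in ASCII mode, matching char_size().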
+ __ Daddu(current_input_offset(),
+ current_input_offset(),
+ Operand((mode_ == UC16) ? 2 : 1));
+ }
+
+ __ Branch(&load_char_start_regexp);
+ } else {
+ __ li(v0, Operand(SUCCESS));
+ }
+ }
+ // Exit and return v0.
+ __ bind(&exit_label_);
+ if (global()) {
+ __ ld(v0, MemOperand(frame_pointer(), kSuccessfulCaptures));
+ }
+
+ __ bind(&return_v0);
+ // Skip sp past regexp registers and local variables.
+ __ mov(sp, frame_pointer());
+ // Restore registers s0..s7 and return (restoring ra to pc).
+ __ MultiPop(registers_to_retain | ra.bit());
+ __ Ret();
+
+ // Backtrack code (branch target for conditional backtracks).
+ if (backtrack_label_.is_linked()) {
+ __ bind(&backtrack_label_);
+ Backtrack();
+ }
+
+ Label exit_with_exception;
+
+ // Preemption code.
+ if (check_preempt_label_.is_linked()) {
+ SafeCallTarget(&check_preempt_label_);
+ // Put regexp engine registers on stack.
+ RegList regexp_registers_to_retain = current_input_offset().bit() |
+ current_character().bit() | backtrack_stackpointer().bit();
+ __ MultiPush(regexp_registers_to_retain);
+ CallCheckStackGuardState(a0);
+ __ MultiPop(regexp_registers_to_retain);
+ // If returning non-zero, we should end execution with the given
+ // result as return value.
+ __ Branch(&return_v0, ne, v0, Operand(zero_reg));
+
+ // String might have moved: Reload end of string from frame.
+ __ ld(end_of_input_address(), MemOperand(frame_pointer(), kInputEnd));
+ __ li(code_pointer(), Operand(masm_->CodeObject()), CONSTANT_SIZE);
+ SafeReturn();
+ }
+
+ // Backtrack stack overflow code.
+ if (stack_overflow_label_.is_linked()) {
+ SafeCallTarget(&stack_overflow_label_);
+ // Reached if the backtrack-stack limit has been hit.
+ // Put regexp engine registers on stack first.
+ RegList regexp_registers = current_input_offset().bit() |
+ current_character().bit();
+ __ MultiPush(regexp_registers);
+ Label grow_failed;
+ // Call GrowStack(backtrack_stackpointer(), &stack_base)
+ static const int num_arguments = 3;
+ __ PrepareCallCFunction(num_arguments, a0);
+ __ mov(a0, backtrack_stackpointer());
+ __ Daddu(a1, frame_pointer(), Operand(kStackHighEnd));
+ __ li(a2, Operand(ExternalReference::isolate_address(masm_->isolate())));
+ ExternalReference grow_stack =
+ ExternalReference::re_grow_stack(masm_->isolate());
+ __ CallCFunction(grow_stack, num_arguments);
+ // Restore regexp registers.
+ __ MultiPop(regexp_registers);
+ // If it returned NULL, we have failed to grow the stack, and
+ // must exit with a stack-overflow exception.
+ __ Branch(&exit_with_exception, eq, v0, Operand(zero_reg));
+ // Otherwise use return value as new stack pointer.
+ __ mov(backtrack_stackpointer(), v0);
+ // Restore saved registers and continue.
+ __ li(code_pointer(), Operand(masm_->CodeObject()), CONSTANT_SIZE);
+ __ ld(end_of_input_address(), MemOperand(frame_pointer(), kInputEnd));
+ SafeReturn();
+ }
+
+ if (exit_with_exception.is_linked()) {
+ // If any of the code above needed to exit with an exception.
+ __ bind(&exit_with_exception);
+ // Exit with Result EXCEPTION(-1) to signal thrown exception.
+ __ li(v0, Operand(EXCEPTION)); + __ jmp(&return_v0); + } + } + + CodeDesc code_desc; + masm_->GetCode(&code_desc); + Handle<Code> code = isolate()->factory()->NewCode( + code_desc, Code::ComputeFlags(Code::REGEXP), masm_->CodeObject()); + LOG(masm_->isolate(), RegExpCodeCreateEvent(*code, *source)); + return Handle<HeapObject>::cast(code); +} + + +void RegExpMacroAssemblerMIPS::GoTo(Label* to) { + if (to == NULL) { + Backtrack(); + return; + } + __ jmp(to); + return; +} + + +void RegExpMacroAssemblerMIPS::IfRegisterGE(int reg, + int comparand, + Label* if_ge) { + __ ld(a0, register_location(reg)); + BranchOrBacktrack(if_ge, ge, a0, Operand(comparand)); +} + + +void RegExpMacroAssemblerMIPS::IfRegisterLT(int reg, + int comparand, + Label* if_lt) { + __ ld(a0, register_location(reg)); + BranchOrBacktrack(if_lt, lt, a0, Operand(comparand)); +} + + +void RegExpMacroAssemblerMIPS::IfRegisterEqPos(int reg, + Label* if_eq) { + __ ld(a0, register_location(reg)); + BranchOrBacktrack(if_eq, eq, a0, Operand(current_input_offset())); +} + + +RegExpMacroAssembler::IrregexpImplementation + RegExpMacroAssemblerMIPS::Implementation() { + return kMIPSImplementation; +} + + +void RegExpMacroAssemblerMIPS::LoadCurrentCharacter(int cp_offset, + Label* on_end_of_input, + bool check_bounds, + int characters) { + DCHECK(cp_offset >= -1); // ^ and \b can look behind one character. + DCHECK(cp_offset < (1<<30)); // Be sane! (And ensure negation works). + if (check_bounds) { + CheckPosition(cp_offset + characters - 1, on_end_of_input); + } + LoadCurrentCharacterUnchecked(cp_offset, characters); +} + + +void RegExpMacroAssemblerMIPS::PopCurrentPosition() { + Pop(current_input_offset()); +} + + +void RegExpMacroAssemblerMIPS::PopRegister(int register_index) { + Pop(a0); + __ sd(a0, register_location(register_index)); +} + + +void RegExpMacroAssemblerMIPS::PushBacktrack(Label* label) { + if (label->is_bound()) { + int target = label->pos(); + __ li(a0, Operand(target + Code::kHeaderSize - kHeapObjectTag)); + } else { + Assembler::BlockTrampolinePoolScope block_trampoline_pool(masm_); + Label after_constant; + __ Branch(&after_constant); + int offset = masm_->pc_offset(); + int cp_offset = offset + Code::kHeaderSize - kHeapObjectTag; + __ emit(0); + masm_->label_at_put(label, offset); + __ bind(&after_constant); + if (is_int16(cp_offset)) { + __ lwu(a0, MemOperand(code_pointer(), cp_offset)); + } else { + __ Daddu(a0, code_pointer(), cp_offset); + __ lwu(a0, MemOperand(a0, 0)); + } + } + Push(a0); + CheckStackLimit(); +} + + +void RegExpMacroAssemblerMIPS::PushCurrentPosition() { + Push(current_input_offset()); +} + + +void RegExpMacroAssemblerMIPS::PushRegister(int register_index, + StackCheckFlag check_stack_limit) { + __ ld(a0, register_location(register_index)); + Push(a0); + if (check_stack_limit) CheckStackLimit(); +} + + +void RegExpMacroAssemblerMIPS::ReadCurrentPositionFromRegister(int reg) { + __ ld(current_input_offset(), register_location(reg)); +} + + +void RegExpMacroAssemblerMIPS::ReadStackPointerFromRegister(int reg) { + __ ld(backtrack_stackpointer(), register_location(reg)); + __ ld(a0, MemOperand(frame_pointer(), kStackHighEnd)); + __ Daddu(backtrack_stackpointer(), backtrack_stackpointer(), Operand(a0)); +} + + +void RegExpMacroAssemblerMIPS::SetCurrentPositionFromEnd(int by) { + Label after_position; + __ Branch(&after_position, + ge, + current_input_offset(), + Operand(-by * char_size())); + __ li(current_input_offset(), -by * char_size()); + // On RegExp code entry (where this operation is 
used), the character before
+ // the current position is expected to be already loaded.
+ // We have advanced the position, so it's safe to read backwards.
+ LoadCurrentCharacterUnchecked(-1, 1);
+ __ bind(&after_position);
+}
+
+
+void RegExpMacroAssemblerMIPS::SetRegister(int register_index, int to) {
+ DCHECK(register_index >= num_saved_registers_); // Reserved for positions!
+ __ li(a0, Operand(to));
+ __ sd(a0, register_location(register_index));
+}
+
+
+bool RegExpMacroAssemblerMIPS::Succeed() {
+ __ jmp(&success_label_);
+ return global();
+}
+
+
+void RegExpMacroAssemblerMIPS::WriteCurrentPositionToRegister(int reg,
+ int cp_offset) {
+ if (cp_offset == 0) {
+ __ sd(current_input_offset(), register_location(reg));
+ } else {
+ __ Daddu(a0, current_input_offset(), Operand(cp_offset * char_size()));
+ __ sd(a0, register_location(reg));
+ }
+}
+
+
+void RegExpMacroAssemblerMIPS::ClearRegisters(int reg_from, int reg_to) {
+ DCHECK(reg_from <= reg_to);
+ __ ld(a0, MemOperand(frame_pointer(), kInputStartMinusOne));
+ for (int reg = reg_from; reg <= reg_to; reg++) {
+ __ sd(a0, register_location(reg));
+ }
+}
+
+
+void RegExpMacroAssemblerMIPS::WriteStackPointerToRegister(int reg) {
+ __ ld(a1, MemOperand(frame_pointer(), kStackHighEnd));
+ __ Dsubu(a0, backtrack_stackpointer(), a1);
+ __ sd(a0, register_location(reg));
+}
+
+
+bool RegExpMacroAssemblerMIPS::CanReadUnaligned() {
+ return false;
+}
+
+
+// Private methods:
+
+void RegExpMacroAssemblerMIPS::CallCheckStackGuardState(Register scratch) {
+ int stack_alignment = base::OS::ActivationFrameAlignment();
+
+ // Align the stack pointer and save the original sp value on the stack.
+ __ mov(scratch, sp);
+ __ Dsubu(sp, sp, Operand(kPointerSize));
+ DCHECK(IsPowerOf2(stack_alignment));
+ __ And(sp, sp, Operand(-stack_alignment));
+ __ sd(scratch, MemOperand(sp));
+
+ __ mov(a2, frame_pointer());
+ // Code* of self.
+ __ li(a1, Operand(masm_->CodeObject()), CONSTANT_SIZE);
+
+ // We need to make room for the return address on the stack.
+ DCHECK(IsAligned(stack_alignment, kPointerSize));
+ __ Dsubu(sp, sp, Operand(stack_alignment));
+
+ // Stack pointer now points to cell where return address is to be written.
+ // Arguments are in registers, meaning we treat the return address as
+ // argument 5. Since DirectCEntryStub will handle allocating space for the C
+ // argument slots, we don't need to care about that here. This is how the
+ // stack will look (sp meaning the value of sp at this moment):
+ // [sp + 3] - empty slot if needed for alignment.
+ // [sp + 2] - saved sp.
+ // [sp + 1] - second word reserved for return value.
+ // [sp + 0] - first word reserved for return value.
+
+ // a0 will point to the return address, placed by DirectCEntry.
+ __ mov(a0, sp);
+
+ ExternalReference stack_guard_check =
+ ExternalReference::re_check_stack_guard_state(masm_->isolate());
+ __ li(t9, Operand(stack_guard_check));
+ DirectCEntryStub stub(isolate());
+ stub.GenerateCall(masm_, t9);
+
+ // DirectCEntryStub allocated space for the C argument slots, so we have to
+ // drop them, together with the return address, by loading the saved sp.
+ // At this point the stack must look like this:
+ // [sp + 7] - empty slot if needed for alignment.
+ // [sp + 6] - saved sp.
+ // [sp + 5] - second word reserved for return value.
+ // [sp + 4] - first word reserved for return value.
+ // [sp + 3] - C argument slot.
+ // [sp + 2] - C argument slot.
+ // [sp + 1] - C argument slot.
+ // [sp + 0] - C argument slot.
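+ // Illustrative arithmetic, assuming the slot layout sketched above: the
+ // saved sp was stored at [sp + kCArgsSlotsSize + stack_alignment], so the
+ // single load below pops the argument slots and undoes the alignment in
+ // one step.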
+ __ ld(sp, MemOperand(sp, stack_alignment + kCArgsSlotsSize));
+
+ __ li(code_pointer(), Operand(masm_->CodeObject()));
+}
+
+
+// Helper function for reading a value out of a stack frame.
+template <typename T>
+static T& frame_entry(Address re_frame, int frame_offset) {
+ return reinterpret_cast<T&>(Memory::int32_at(re_frame + frame_offset));
+}
+
+
+int RegExpMacroAssemblerMIPS::CheckStackGuardState(Address* return_address,
+ Code* re_code,
+ Address re_frame) {
+ Isolate* isolate = frame_entry<Isolate*>(re_frame, kIsolate);
+ StackLimitCheck check(isolate);
+ if (check.JsHasOverflowed()) {
+ isolate->StackOverflow();
+ return EXCEPTION;
+ }
+
+ // If it is not a real stack overflow, the stack guard was used to interrupt
+ // execution for another purpose.
+
+ // If this is a direct call from JavaScript, retry the RegExp, forcing the
+ // call through the runtime system. Currently the direct call cannot handle
+ // a GC.
+ if (frame_entry<int>(re_frame, kDirectCall) == 1) {
+ return RETRY;
+ }
+
+ // Prepare for possible GC.
+ HandleScope handles(isolate);
+ Handle<Code> code_handle(re_code);
+
+ Handle<String> subject(frame_entry<String*>(re_frame, kInputString));
+ // Current string.
+ bool is_ascii = subject->IsOneByteRepresentationUnderneath();
+
+ DCHECK(re_code->instruction_start() <= *return_address);
+ DCHECK(*return_address <=
+ re_code->instruction_start() + re_code->instruction_size());
+
+ Object* result = isolate->stack_guard()->HandleInterrupts();
+
+ if (*code_handle != re_code) { // Return address no longer valid.
+ int delta = code_handle->address() - re_code->address();
+ // Overwrite the return address on the stack.
+ *return_address += delta;
+ }
+
+ if (result->IsException()) {
+ return EXCEPTION;
+ }
+
+ Handle<String> subject_tmp = subject;
+ int slice_offset = 0;
+
+ // Extract the underlying string and the slice offset.
+ if (StringShape(*subject_tmp).IsCons()) {
+ subject_tmp = Handle<String>(ConsString::cast(*subject_tmp)->first());
+ } else if (StringShape(*subject_tmp).IsSliced()) {
+ SlicedString* slice = SlicedString::cast(*subject_tmp);
+ subject_tmp = Handle<String>(slice->parent());
+ slice_offset = slice->offset();
+ }
+
+ // String might have changed.
+ if (subject_tmp->IsOneByteRepresentation() != is_ascii) {
+ // If we changed between an ASCII and a UC16 string, the specialized
+ // code cannot be used, and we need to restart regexp matching from
+ // scratch (including, potentially, compiling a new version of the code).
+ return RETRY;
+ }
+
+ // Otherwise, the content of the string might have moved. It must still
+ // be a sequential or external string with the same content.
+ // Update the start and end pointers in the stack frame to the current
+ // location (whether it has actually moved or not).
+ DCHECK(StringShape(*subject_tmp).IsSequential() ||
+ StringShape(*subject_tmp).IsExternal());
+
+ // The original start address of the characters to match.
+ const byte* start_address = frame_entry<const byte*>(re_frame, kInputStart);
+
+ // Find the current start address of the same character at the current string
+ // position.
+ int start_index = frame_entry<int>(re_frame, kStartIndex);
+ const byte* new_address = StringCharacterPosition(*subject_tmp,
+ start_index + slice_offset);
+
+ if (start_address != new_address) {
+ // If there is a difference, update the object pointer and start and end
+ // addresses in the RegExp stack frame to match the new value.
+ const byte* end_address = frame_entry<const byte* >(re_frame, kInputEnd); + int byte_length = static_cast<int>(end_address - start_address); + frame_entry<const String*>(re_frame, kInputString) = *subject; + frame_entry<const byte*>(re_frame, kInputStart) = new_address; + frame_entry<const byte*>(re_frame, kInputEnd) = new_address + byte_length; + } else if (frame_entry<const String*>(re_frame, kInputString) != *subject) { + // Subject string might have been a ConsString that underwent + // short-circuiting during GC. That will not change start_address but + // will change pointer inside the subject handle. + frame_entry<const String*>(re_frame, kInputString) = *subject; + } + + return 0; +} + + +MemOperand RegExpMacroAssemblerMIPS::register_location(int register_index) { + DCHECK(register_index < (1<<30)); + if (num_registers_ <= register_index) { + num_registers_ = register_index + 1; + } + return MemOperand(frame_pointer(), + kRegisterZero - register_index * kPointerSize); +} + + +void RegExpMacroAssemblerMIPS::CheckPosition(int cp_offset, + Label* on_outside_input) { + BranchOrBacktrack(on_outside_input, + ge, + current_input_offset(), + Operand(-cp_offset * char_size())); +} + + +void RegExpMacroAssemblerMIPS::BranchOrBacktrack(Label* to, + Condition condition, + Register rs, + const Operand& rt) { + if (condition == al) { // Unconditional. + if (to == NULL) { + Backtrack(); + return; + } + __ jmp(to); + return; + } + if (to == NULL) { + __ Branch(&backtrack_label_, condition, rs, rt); + return; + } + __ Branch(to, condition, rs, rt); +} + + +void RegExpMacroAssemblerMIPS::SafeCall(Label* to, + Condition cond, + Register rs, + const Operand& rt) { + __ BranchAndLink(to, cond, rs, rt); +} + + +void RegExpMacroAssemblerMIPS::SafeReturn() { + __ pop(ra); + __ Daddu(t1, ra, Operand(masm_->CodeObject())); + __ Jump(t1); +} + + +void RegExpMacroAssemblerMIPS::SafeCallTarget(Label* name) { + __ bind(name); + __ Dsubu(ra, ra, Operand(masm_->CodeObject())); + __ push(ra); +} + + +void RegExpMacroAssemblerMIPS::Push(Register source) { + DCHECK(!source.is(backtrack_stackpointer())); + __ Daddu(backtrack_stackpointer(), + backtrack_stackpointer(), + Operand(-kIntSize)); + __ sw(source, MemOperand(backtrack_stackpointer())); +} + + +void RegExpMacroAssemblerMIPS::Pop(Register target) { + DCHECK(!target.is(backtrack_stackpointer())); + __ lw(target, MemOperand(backtrack_stackpointer())); + __ Daddu(backtrack_stackpointer(), backtrack_stackpointer(), kIntSize); +} + + +void RegExpMacroAssemblerMIPS::CheckPreemption() { + // Check for preemption. + ExternalReference stack_limit = + ExternalReference::address_of_stack_limit(masm_->isolate()); + __ li(a0, Operand(stack_limit)); + __ ld(a0, MemOperand(a0)); + SafeCall(&check_preempt_label_, ls, sp, Operand(a0)); +} + + +void RegExpMacroAssemblerMIPS::CheckStackLimit() { + ExternalReference stack_limit = + ExternalReference::address_of_regexp_stack_limit(masm_->isolate()); + + __ li(a0, Operand(stack_limit)); + __ ld(a0, MemOperand(a0)); + SafeCall(&stack_overflow_label_, ls, backtrack_stackpointer(), Operand(a0)); +} + + +void RegExpMacroAssemblerMIPS::LoadCurrentCharacterUnchecked(int cp_offset, + int characters) { + Register offset = current_input_offset(); + if (cp_offset != 0) { + // t3 is not being used to store the capture start index at this point. 
+ __ Daddu(t3, current_input_offset(), Operand(cp_offset * char_size())); + offset = t3; + } + // We assume that we cannot do unaligned loads on MIPS, so this function + // must only be used to load a single character at a time. + DCHECK(characters == 1); + __ Daddu(t1, end_of_input_address(), Operand(offset)); + if (mode_ == ASCII) { + __ lbu(current_character(), MemOperand(t1, 0)); + } else { + DCHECK(mode_ == UC16); + __ lhu(current_character(), MemOperand(t1, 0)); + } +} + +#undef __ + +#endif // V8_INTERPRETED_REGEXP + +}} // namespace v8::internal + +#endif // V8_TARGET_ARCH_MIPS64 diff --git a/deps/v8/src/mips64/regexp-macro-assembler-mips64.h b/deps/v8/src/mips64/regexp-macro-assembler-mips64.h new file mode 100644 index 00000000000..647e4150cb1 --- /dev/null +++ b/deps/v8/src/mips64/regexp-macro-assembler-mips64.h @@ -0,0 +1,269 @@ +// Copyright 2011 the V8 project authors. All rights reserved. +// Use of this source code is governed by a BSD-style license that can be +// found in the LICENSE file. + + +#ifndef V8_MIPS_REGEXP_MACRO_ASSEMBLER_MIPS_H_ +#define V8_MIPS_REGEXP_MACRO_ASSEMBLER_MIPS_H_ + +#include "src/macro-assembler.h" +#include "src/mips64/assembler-mips64-inl.h" +#include "src/mips64/assembler-mips64.h" +#include "src/mips64/macro-assembler-mips64.h" + +namespace v8 { +namespace internal { + +#ifndef V8_INTERPRETED_REGEXP +class RegExpMacroAssemblerMIPS: public NativeRegExpMacroAssembler { + public: + RegExpMacroAssemblerMIPS(Mode mode, int registers_to_save, Zone* zone); + virtual ~RegExpMacroAssemblerMIPS(); + virtual int stack_limit_slack(); + virtual void AdvanceCurrentPosition(int by); + virtual void AdvanceRegister(int reg, int by); + virtual void Backtrack(); + virtual void Bind(Label* label); + virtual void CheckAtStart(Label* on_at_start); + virtual void CheckCharacter(uint32_t c, Label* on_equal); + virtual void CheckCharacterAfterAnd(uint32_t c, + uint32_t mask, + Label* on_equal); + virtual void CheckCharacterGT(uc16 limit, Label* on_greater); + virtual void CheckCharacterLT(uc16 limit, Label* on_less); + // A "greedy loop" is a loop that is both greedy and with a simple + // body. It has a particularly simple implementation. + virtual void CheckGreedyLoop(Label* on_tos_equals_current_position); + virtual void CheckNotAtStart(Label* on_not_at_start); + virtual void CheckNotBackReference(int start_reg, Label* on_no_match); + virtual void CheckNotBackReferenceIgnoreCase(int start_reg, + Label* on_no_match); + virtual void CheckNotCharacter(uint32_t c, Label* on_not_equal); + virtual void CheckNotCharacterAfterAnd(uint32_t c, + uint32_t mask, + Label* on_not_equal); + virtual void CheckNotCharacterAfterMinusAnd(uc16 c, + uc16 minus, + uc16 mask, + Label* on_not_equal); + virtual void CheckCharacterInRange(uc16 from, + uc16 to, + Label* on_in_range); + virtual void CheckCharacterNotInRange(uc16 from, + uc16 to, + Label* on_not_in_range); + virtual void CheckBitInTable(Handle<ByteArray> table, Label* on_bit_set); + + // Checks whether the given offset from the current position is before + // the end of the string. 
+ virtual void CheckPosition(int cp_offset, Label* on_outside_input);
+ virtual bool CheckSpecialCharacterClass(uc16 type,
+ Label* on_no_match);
+ virtual void Fail();
+ virtual Handle<HeapObject> GetCode(Handle<String> source);
+ virtual void GoTo(Label* label);
+ virtual void IfRegisterGE(int reg, int comparand, Label* if_ge);
+ virtual void IfRegisterLT(int reg, int comparand, Label* if_lt);
+ virtual void IfRegisterEqPos(int reg, Label* if_eq);
+ virtual IrregexpImplementation Implementation();
+ virtual void LoadCurrentCharacter(int cp_offset,
+ Label* on_end_of_input,
+ bool check_bounds = true,
+ int characters = 1);
+ virtual void PopCurrentPosition();
+ virtual void PopRegister(int register_index);
+ virtual void PushBacktrack(Label* label);
+ virtual void PushCurrentPosition();
+ virtual void PushRegister(int register_index,
+ StackCheckFlag check_stack_limit);
+ virtual void ReadCurrentPositionFromRegister(int reg);
+ virtual void ReadStackPointerFromRegister(int reg);
+ virtual void SetCurrentPositionFromEnd(int by);
+ virtual void SetRegister(int register_index, int to);
+ virtual bool Succeed();
+ virtual void WriteCurrentPositionToRegister(int reg, int cp_offset);
+ virtual void ClearRegisters(int reg_from, int reg_to);
+ virtual void WriteStackPointerToRegister(int reg);
+ virtual bool CanReadUnaligned();
+
+ // Called from RegExp if the stack-guard is triggered.
+ // If the code object is relocated, the return address is fixed before
+ // returning.
+ static int CheckStackGuardState(Address* return_address,
+ Code* re_code,
+ Address re_frame);
+
+ void print_regexp_frame_constants();
+
+ private:
+#if defined(MIPS_ABI_N64)
+ // Offsets from frame_pointer() of function parameters and stored registers.
+ static const int kFramePointer = 0;
+
+ // Above the frame pointer - Stored registers and stack passed parameters.
+ // Registers s0 to s7, fp, and ra.
+ static const int kStoredRegisters = kFramePointer;
+ // Return address (stored from link register, read into pc on return).
+ // TODO(plind): This 9 is 8 s-regs (s0..s7) plus fp.
+ static const int kReturnAddress = kStoredRegisters + 9 * kPointerSize;
+ static const int kSecondaryReturnAddress = kReturnAddress + kPointerSize;
+ // Stack frame header.
+ static const int kStackFrameHeader = kSecondaryReturnAddress;
+ // Stack parameters placed by caller.
+ static const int kIsolate = kStackFrameHeader + kPointerSize;
+
+ // Below the frame pointer.
+ // Register parameters stored by setup code.
+ static const int kDirectCall = kFramePointer - kPointerSize;
+ static const int kStackHighEnd = kDirectCall - kPointerSize;
+ static const int kNumOutputRegisters = kStackHighEnd - kPointerSize;
+ static const int kRegisterOutput = kNumOutputRegisters - kPointerSize;
+ static const int kInputEnd = kRegisterOutput - kPointerSize;
+ static const int kInputStart = kInputEnd - kPointerSize;
+ static const int kStartIndex = kInputStart - kPointerSize;
+ static const int kInputString = kStartIndex - kPointerSize;
+ // When adding local variables remember to push space for them in
+ // the frame in GetCode.
+ static const int kSuccessfulCaptures = kInputString - kPointerSize;
+ static const int kInputStartMinusOne = kSuccessfulCaptures - kPointerSize;
+ // First register address. Following registers are below it on the stack.
+ static const int kRegisterZero = kInputStartMinusOne - kPointerSize;
+
+#elif defined(MIPS_ABI_O32)
+ // Offsets from frame_pointer() of function parameters and stored registers.
+ static const int kFramePointer = 0;
+
+ // Above the frame pointer - Stored registers and stack passed parameters.
+ // Registers s0 to s7, fp, and ra.
+ static const int kStoredRegisters = kFramePointer;
+ // Return address (stored from link register, read into pc on return).
+ static const int kReturnAddress = kStoredRegisters + 9 * kPointerSize;
+ static const int kSecondaryReturnAddress = kReturnAddress + kPointerSize;
+ // Stack frame header.
+ static const int kStackFrameHeader = kReturnAddress + kPointerSize;
+ // Stack parameters placed by caller.
+ static const int kRegisterOutput =
+ kStackFrameHeader + 4 * kPointerSize + kPointerSize;
+ static const int kNumOutputRegisters = kRegisterOutput + kPointerSize;
+ static const int kStackHighEnd = kNumOutputRegisters + kPointerSize;
+ static const int kDirectCall = kStackHighEnd + kPointerSize;
+ static const int kIsolate = kDirectCall + kPointerSize;
+
+ // Below the frame pointer.
+ // Register parameters stored by setup code.
+ static const int kInputEnd = kFramePointer - kPointerSize;
+ static const int kInputStart = kInputEnd - kPointerSize;
+ static const int kStartIndex = kInputStart - kPointerSize;
+ static const int kInputString = kStartIndex - kPointerSize;
+ // When adding local variables remember to push space for them in
+ // the frame in GetCode.
+ static const int kSuccessfulCaptures = kInputString - kPointerSize;
+ static const int kInputStartMinusOne = kSuccessfulCaptures - kPointerSize;
+ // First register address. Following registers are below it on the stack.
+ static const int kRegisterZero = kInputStartMinusOne - kPointerSize;
+
+#else
+# error "undefined MIPS ABI"
+#endif
+
+ // Initial size of code buffer.
+ static const size_t kRegExpCodeSize = 1024;
+
+ // Load a number of characters at the given offset from the
+ // current position, into the current-character register.
+ void LoadCurrentCharacterUnchecked(int cp_offset, int character_count);
+
+ // Check whether preemption has been requested.
+ void CheckPreemption();
+
+ // Check whether we are exceeding the stack limit on the backtrack stack.
+ void CheckStackLimit();
+
+ // Generate a call to CheckStackGuardState.
+ void CallCheckStackGuardState(Register scratch);
+
+ // The fp-relative location of a regexp register.
+ MemOperand register_location(int register_index);
+
+ // Register holding the current input position as negative offset from
+ // the end of the string.
+ inline Register current_input_offset() { return a6; }
+
+ // The register containing the current character after LoadCurrentCharacter.
+ inline Register current_character() { return a7; }
+
+ // Register holding address of the end of the input string.
+ inline Register end_of_input_address() { return t2; }
+
+ // Register holding the frame address. Local variables, parameters and
+ // regexp registers are addressed relative to this.
+ inline Register frame_pointer() { return fp; }
+
+ // The register containing the backtrack stack top. Provides a meaningful
+ // name to the register.
+ inline Register backtrack_stackpointer() { return t0; }
+
+ // Register holding pointer to the current code object.
+ inline Register code_pointer() { return a5; }
+
+ // Byte size of chars in the string to match (decided by the Mode argument).
+ inline int char_size() { return static_cast<int>(mode_); }
+
+ // Equivalent to a conditional branch to the label, unless the label
+ // is NULL, in which case it is a conditional Backtrack.
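+ // Usage sketch (illustrative): passing NULL as the target, e.g.
+ //   BranchOrBacktrack(NULL, eq, a0, Operand(zero_reg));
+ // emits a conditional branch to the backtrack code instead of to a label.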
+ void BranchOrBacktrack(Label* to,
+ Condition condition,
+ Register rs,
+ const Operand& rt);
+
+ // Call and return internally in the generated code in a way that
+ // is GC-safe (i.e., doesn't leave absolute code addresses on the stack).
+ inline void SafeCall(Label* to,
+ Condition cond,
+ Register rs,
+ const Operand& rt);
+ inline void SafeReturn();
+ inline void SafeCallTarget(Label* name);
+
+ // Pushes the value of a register on the backtrack stack. Decrements the
+ // stack pointer by a word size and stores the register's value there.
+ inline void Push(Register source);
+
+ // Pops a value from the backtrack stack. Reads the word at the stack pointer
+ // and increments it by a word size.
+ inline void Pop(Register target);
+
+ Isolate* isolate() const { return masm_->isolate(); }
+
+ MacroAssembler* masm_;
+
+ // Which mode to generate code for (ASCII or UC16).
+ Mode mode_;
+
+ // One greater than maximal register index actually used.
+ int num_registers_;
+
+ // Number of registers to output at the end (the saved registers
+ // are always 0..num_saved_registers_-1).
+ int num_saved_registers_;
+
+ // Labels used internally.
+ Label entry_label_;
+ Label start_label_;
+ Label success_label_;
+ Label backtrack_label_;
+ Label exit_label_;
+ Label check_preempt_label_;
+ Label stack_overflow_label_;
+ Label internal_failure_label_;
+};
+
+#endif // V8_INTERPRETED_REGEXP
+
+
+}} // namespace v8::internal
+
+#endif // V8_MIPS_REGEXP_MACRO_ASSEMBLER_MIPS_H_
diff --git a/deps/v8/src/mips64/simulator-mips64.cc b/deps/v8/src/mips64/simulator-mips64.cc
new file mode 100644
index 00000000000..c07558465fb
--- /dev/null
+++ b/deps/v8/src/mips64/simulator-mips64.cc
@@ -0,0 +1,3451 @@
+// Copyright 2011 the V8 project authors. All rights reserved.
+// Use of this source code is governed by a BSD-style license that can be
+// found in the LICENSE file.
+
+#include <limits.h>
+#include <stdarg.h>
+#include <stdlib.h>
+#include <cmath>
+
+#include "src/v8.h"
+
+#if V8_TARGET_ARCH_MIPS64
+
+#include "src/assembler.h"
+#include "src/disasm.h"
+#include "src/globals.h" // Need the BitCast.
+#include "src/mips64/constants-mips64.h"
+#include "src/mips64/simulator-mips64.h"
+#include "src/ostreams.h"
+
+// Only build the simulator if not compiling for real MIPS hardware.
+#if defined(USE_SIMULATOR)
+
+namespace v8 {
+namespace internal {
+
+// Utility functions.
+bool HaveSameSign(int64_t a, int64_t b) {
+ return ((a ^ b) >= 0);
+}
+
+
+uint32_t get_fcsr_condition_bit(uint32_t cc) {
+ if (cc == 0) {
+ return 23;
+ } else {
+ return 24 + cc;
+ }
+}
+
+
+static int64_t MultiplyHighSigned(int64_t u, int64_t v) {
+ uint64_t u0, v0, w0;
+ int64_t u1, v1, w1, w2, t;
+
+ u0 = u & 0xffffffffL;
+ u1 = u >> 32;
+ v0 = v & 0xffffffffL;
+ v1 = v >> 32;
+
+ w0 = u0 * v0;
+ t = u1 * v0 + (w0 >> 32);
+ w1 = t & 0xffffffffL;
+ w2 = t >> 32;
+ w1 = u0 * v1 + w1;
+
+ return u1 * v1 + w2 + (w1 >> 32);
+}
+
+
+// This macro provides a platform independent use of sscanf. The reason for
+// SScanF not being implemented in a platform independent way through
+// ::v8::internal::OS in the same way as SNPrintF is that the Windows C Run-Time
+// Library does not provide vsscanf.
+#define SScanF sscanf // NOLINT
+
+// The MipsDebugger class is used by the simulator while debugging simulated
+// code.
+class MipsDebugger {
+ public:
+ explicit MipsDebugger(Simulator* sim) : sim_(sim) { }
+ ~MipsDebugger();
+
+ void Stop(Instruction* instr);
+ void Debug();
+ // Print all registers with a nice formatting.
+  void PrintAllRegs();
+  void PrintAllRegsIncludingFPU();
+
+ private:
+  // We set the breakpoint code to 0xfffff to easily recognize it.
+  static const Instr kBreakpointInstr = SPECIAL | BREAK | 0xfffff << 6;
+  static const Instr kNopInstr = 0x0;
+
+  Simulator* sim_;
+
+  int64_t GetRegisterValue(int regnum);
+  int64_t GetFPURegisterValue(int regnum);
+  float GetFPURegisterValueFloat(int regnum);
+  double GetFPURegisterValueDouble(int regnum);
+  bool GetValue(const char* desc, int64_t* value);
+
+  // Set or delete a breakpoint. Returns true if successful.
+  bool SetBreakpoint(Instruction* breakpc);
+  bool DeleteBreakpoint(Instruction* breakpc);
+
+  // Undo and redo all breakpoints. This is needed to bracket disassembly and
+  // execution to skip past breakpoints when run from the debugger.
+  void UndoBreakpoints();
+  void RedoBreakpoints();
+};
+
+
+MipsDebugger::~MipsDebugger() {
+}
+
+
+#ifdef GENERATED_CODE_COVERAGE
+static FILE* coverage_log = NULL;
+
+
+static void InitializeCoverage() {
+  char* file_name = getenv("V8_GENERATED_CODE_COVERAGE_LOG");
+  if (file_name != NULL) {
+    coverage_log = fopen(file_name, "aw+");
+  }
+}
+
+
+void MipsDebugger::Stop(Instruction* instr) {
+  // Get the stop code.
+  uint32_t code = instr->Bits(25, 6);
+  // Retrieve the encoded address, which comes just after this stop.
+  char** msg_address =
+      reinterpret_cast<char**>(sim_->get_pc() + Instruction::kInstrSize);
+  char* msg = *msg_address;
+  DCHECK(msg != NULL);
+
+  // Update this stop description.
+  if (!sim_->watched_stops_[code].desc) {
+    sim_->watched_stops_[code].desc = msg;
+  }
+
+  if (strlen(msg) > 0) {
+    if (coverage_log != NULL) {
+      fprintf(coverage_log, "%s\n", msg);
+      fflush(coverage_log);
+    }
+    // Overwrite the instruction and address with nops.
+    instr->SetInstructionBits(kNopInstr);
+    reinterpret_cast<Instruction*>(msg_address)->SetInstructionBits(kNopInstr);
+  }
+  // TODO(yuyin): 2 -> 3?
+  sim_->set_pc(sim_->get_pc() + 3 * Instruction::kInstrSize);
+}
+
+
+#else  // GENERATED_CODE_COVERAGE
+
+#define UNSUPPORTED() printf("Unsupported instruction.\n");
+
+static void InitializeCoverage() {}
+
+
+void MipsDebugger::Stop(Instruction* instr) {
+  // Get the stop code.
+  uint32_t code = instr->Bits(25, 6);
+  // Retrieve the encoded address, which comes just after this stop.
+  char* msg = *reinterpret_cast<char**>(sim_->get_pc() +
+                                        Instruction::kInstrSize);
+  // Update this stop description.
+  if (!sim_->watched_stops_[code].desc) {
+    sim_->watched_stops_[code].desc = msg;
+  }
+  PrintF("Simulator hit %s (%u)\n", msg, code);
+  // TODO(yuyin): 2 -> 3?
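+  // (Layout assumed here, matching the msg_address read above: the break
+  // instruction is followed by a 64-bit word holding the message pointer,
+  // i.e. 4 + 8 bytes, which is presumably why the TODO questions the
+  // original 2-slot skip.)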
+ sim_->set_pc(sim_->get_pc() + 3 * Instruction::kInstrSize); + Debug(); +} +#endif // GENERATED_CODE_COVERAGE + + +int64_t MipsDebugger::GetRegisterValue(int regnum) { + if (regnum == kNumSimuRegisters) { + return sim_->get_pc(); + } else { + return sim_->get_register(regnum); + } +} + + +int64_t MipsDebugger::GetFPURegisterValue(int regnum) { + if (regnum == kNumFPURegisters) { + return sim_->get_pc(); + } else { + return sim_->get_fpu_register(regnum); + } +} + + +float MipsDebugger::GetFPURegisterValueFloat(int regnum) { + if (regnum == kNumFPURegisters) { + return sim_->get_pc(); + } else { + return sim_->get_fpu_register_float(regnum); + } +} + + +double MipsDebugger::GetFPURegisterValueDouble(int regnum) { + if (regnum == kNumFPURegisters) { + return sim_->get_pc(); + } else { + return sim_->get_fpu_register_double(regnum); + } +} + + +bool MipsDebugger::GetValue(const char* desc, int64_t* value) { + int regnum = Registers::Number(desc); + int fpuregnum = FPURegisters::Number(desc); + + if (regnum != kInvalidRegister) { + *value = GetRegisterValue(regnum); + return true; + } else if (fpuregnum != kInvalidFPURegister) { + *value = GetFPURegisterValue(fpuregnum); + return true; + } else if (strncmp(desc, "0x", 2) == 0) { + return SScanF(desc + 2, "%" SCNx64, + reinterpret_cast<uint64_t*>(value)) == 1; + } else { + return SScanF(desc, "%" SCNu64, reinterpret_cast<uint64_t*>(value)) == 1; + } + return false; +} + + +bool MipsDebugger::SetBreakpoint(Instruction* breakpc) { + // Check if a breakpoint can be set. If not return without any side-effects. + if (sim_->break_pc_ != NULL) { + return false; + } + + // Set the breakpoint. + sim_->break_pc_ = breakpc; + sim_->break_instr_ = breakpc->InstructionBits(); + // Not setting the breakpoint instruction in the code itself. It will be set + // when the debugger shell continues. + return true; +} + + +bool MipsDebugger::DeleteBreakpoint(Instruction* breakpc) { + if (sim_->break_pc_ != NULL) { + sim_->break_pc_->SetInstructionBits(sim_->break_instr_); + } + + sim_->break_pc_ = NULL; + sim_->break_instr_ = 0; + return true; +} + + +void MipsDebugger::UndoBreakpoints() { + if (sim_->break_pc_ != NULL) { + sim_->break_pc_->SetInstructionBits(sim_->break_instr_); + } +} + + +void MipsDebugger::RedoBreakpoints() { + if (sim_->break_pc_ != NULL) { + sim_->break_pc_->SetInstructionBits(kBreakpointInstr); + } +} + + +void MipsDebugger::PrintAllRegs() { +#define REG_INFO(n) Registers::Name(n), GetRegisterValue(n), GetRegisterValue(n) + + PrintF("\n"); + // at, v0, a0. + PrintF("%3s: 0x%016lx %14ld\t%3s: 0x%016lx %14ld\t%3s: 0x%016lx %14ld\n", + REG_INFO(1), REG_INFO(2), REG_INFO(4)); + // v1, a1. + PrintF("%34s\t%3s: 0x%016lx %14ld\t%3s: 0x%016lx %14ld\n", + "", REG_INFO(3), REG_INFO(5)); + // a2. + PrintF("%34s\t%34s\t%3s: 0x%016lx %14ld\n", "", "", REG_INFO(6)); + // a3. + PrintF("%34s\t%34s\t%3s: 0x%016lx %14ld\n", "", "", REG_INFO(7)); + PrintF("\n"); + // a4-t3, s0-s7 + for (int i = 0; i < 8; i++) { + PrintF("%3s: 0x%016lx %14ld\t%3s: 0x%016lx %14ld\n", + REG_INFO(8+i), REG_INFO(16+i)); + } + PrintF("\n"); + // t8, k0, LO. + PrintF("%3s: 0x%016lx %14ld\t%3s: 0x%016lx %14ld\t%3s: 0x%016lx %14ld\n", + REG_INFO(24), REG_INFO(26), REG_INFO(32)); + // t9, k1, HI. + PrintF("%3s: 0x%016lx %14ld\t%3s: 0x%016lx %14ld\t%3s: 0x%016lx %14ld\n", + REG_INFO(25), REG_INFO(27), REG_INFO(33)); + // sp, fp, gp. + PrintF("%3s: 0x%016lx %14ld\t%3s: 0x%016lx %14ld\t%3s: 0x%016lx %14ld\n", + REG_INFO(29), REG_INFO(30), REG_INFO(28)); + // pc. 
+ PrintF("%3s: 0x%016lx %14ld\t%3s: 0x%016lx %14ld\n", + REG_INFO(31), REG_INFO(34)); + +#undef REG_INFO +#undef FPU_REG_INFO +} + + +void MipsDebugger::PrintAllRegsIncludingFPU() { +#define FPU_REG_INFO(n) FPURegisters::Name(n), \ + GetFPURegisterValue(n), \ + GetFPURegisterValueDouble(n) + + PrintAllRegs(); + + PrintF("\n\n"); + // f0, f1, f2, ... f31. + // TODO(plind): consider printing 2 columns for space efficiency. + PrintF("%3s: 0x%016lx %16.4e\n", FPU_REG_INFO(0) ); + PrintF("%3s: 0x%016lx %16.4e\n", FPU_REG_INFO(1) ); + PrintF("%3s: 0x%016lx %16.4e\n", FPU_REG_INFO(2) ); + PrintF("%3s: 0x%016lx %16.4e\n", FPU_REG_INFO(3) ); + PrintF("%3s: 0x%016lx %16.4e\n", FPU_REG_INFO(4) ); + PrintF("%3s: 0x%016lx %16.4e\n", FPU_REG_INFO(5) ); + PrintF("%3s: 0x%016lx %16.4e\n", FPU_REG_INFO(6) ); + PrintF("%3s: 0x%016lx %16.4e\n", FPU_REG_INFO(7) ); + PrintF("%3s: 0x%016lx %16.4e\n", FPU_REG_INFO(8) ); + PrintF("%3s: 0x%016lx %16.4e\n", FPU_REG_INFO(9) ); + PrintF("%3s: 0x%016lx %16.4e\n", FPU_REG_INFO(10)); + PrintF("%3s: 0x%016lx %16.4e\n", FPU_REG_INFO(11)); + PrintF("%3s: 0x%016lx %16.4e\n", FPU_REG_INFO(12)); + PrintF("%3s: 0x%016lx %16.4e\n", FPU_REG_INFO(13)); + PrintF("%3s: 0x%016lx %16.4e\n", FPU_REG_INFO(14)); + PrintF("%3s: 0x%016lx %16.4e\n", FPU_REG_INFO(15)); + PrintF("%3s: 0x%016lx %16.4e\n", FPU_REG_INFO(16)); + PrintF("%3s: 0x%016lx %16.4e\n", FPU_REG_INFO(17)); + PrintF("%3s: 0x%016lx %16.4e\n", FPU_REG_INFO(18)); + PrintF("%3s: 0x%016lx %16.4e\n", FPU_REG_INFO(19)); + PrintF("%3s: 0x%016lx %16.4e\n", FPU_REG_INFO(20)); + PrintF("%3s: 0x%016lx %16.4e\n", FPU_REG_INFO(21)); + PrintF("%3s: 0x%016lx %16.4e\n", FPU_REG_INFO(22)); + PrintF("%3s: 0x%016lx %16.4e\n", FPU_REG_INFO(23)); + PrintF("%3s: 0x%016lx %16.4e\n", FPU_REG_INFO(24)); + PrintF("%3s: 0x%016lx %16.4e\n", FPU_REG_INFO(25)); + PrintF("%3s: 0x%016lx %16.4e\n", FPU_REG_INFO(26)); + PrintF("%3s: 0x%016lx %16.4e\n", FPU_REG_INFO(27)); + PrintF("%3s: 0x%016lx %16.4e\n", FPU_REG_INFO(28)); + PrintF("%3s: 0x%016lx %16.4e\n", FPU_REG_INFO(29)); + PrintF("%3s: 0x%016lx %16.4e\n", FPU_REG_INFO(30)); + PrintF("%3s: 0x%016lx %16.4e\n", FPU_REG_INFO(31)); + +#undef REG_INFO +#undef FPU_REG_INFO +} + + +void MipsDebugger::Debug() { + intptr_t last_pc = -1; + bool done = false; + +#define COMMAND_SIZE 63 +#define ARG_SIZE 255 + +#define STR(a) #a +#define XSTR(a) STR(a) + + char cmd[COMMAND_SIZE + 1]; + char arg1[ARG_SIZE + 1]; + char arg2[ARG_SIZE + 1]; + char* argv[3] = { cmd, arg1, arg2 }; + + // Make sure to have a proper terminating character if reaching the limit. + cmd[COMMAND_SIZE] = 0; + arg1[ARG_SIZE] = 0; + arg2[ARG_SIZE] = 0; + + // Undo all set breakpoints while running in the debugger shell. This will + // make them invisible to all commands. + UndoBreakpoints(); + + while (!done && (sim_->get_pc() != Simulator::end_sim_pc)) { + if (last_pc != sim_->get_pc()) { + disasm::NameConverter converter; + disasm::Disassembler dasm(converter); + // Use a reasonably large buffer. 
+ v8::internal::EmbeddedVector<char, 256> buffer; + dasm.InstructionDecode(buffer, + reinterpret_cast<byte*>(sim_->get_pc())); + PrintF(" 0x%016lx %s\n", sim_->get_pc(), buffer.start()); + last_pc = sim_->get_pc(); + } + char* line = ReadLine("sim> "); + if (line == NULL) { + break; + } else { + char* last_input = sim_->last_debugger_input(); + if (strcmp(line, "\n") == 0 && last_input != NULL) { + line = last_input; + } else { + // Ownership is transferred to sim_; + sim_->set_last_debugger_input(line); + } + // Use sscanf to parse the individual parts of the command line. At the + // moment no command expects more than two parameters. + int argc = SScanF(line, + "%" XSTR(COMMAND_SIZE) "s " + "%" XSTR(ARG_SIZE) "s " + "%" XSTR(ARG_SIZE) "s", + cmd, arg1, arg2); + if ((strcmp(cmd, "si") == 0) || (strcmp(cmd, "stepi") == 0)) { + Instruction* instr = reinterpret_cast<Instruction*>(sim_->get_pc()); + if (!(instr->IsTrap()) || + instr->InstructionBits() == rtCallRedirInstr) { + sim_->InstructionDecode( + reinterpret_cast<Instruction*>(sim_->get_pc())); + } else { + // Allow si to jump over generated breakpoints. + PrintF("/!\\ Jumping over generated breakpoint.\n"); + sim_->set_pc(sim_->get_pc() + Instruction::kInstrSize); + } + } else if ((strcmp(cmd, "c") == 0) || (strcmp(cmd, "cont") == 0)) { + // Execute the one instruction we broke at with breakpoints disabled. + sim_->InstructionDecode(reinterpret_cast<Instruction*>(sim_->get_pc())); + // Leave the debugger shell. + done = true; + } else if ((strcmp(cmd, "p") == 0) || (strcmp(cmd, "print") == 0)) { + if (argc == 2) { + int64_t value; + double dvalue; + if (strcmp(arg1, "all") == 0) { + PrintAllRegs(); + } else if (strcmp(arg1, "allf") == 0) { + PrintAllRegsIncludingFPU(); + } else { + int regnum = Registers::Number(arg1); + int fpuregnum = FPURegisters::Number(arg1); + + if (regnum != kInvalidRegister) { + value = GetRegisterValue(regnum); + PrintF("%s: 0x%08lx %ld \n", arg1, value, value); + } else if (fpuregnum != kInvalidFPURegister) { + value = GetFPURegisterValue(fpuregnum); + dvalue = GetFPURegisterValueDouble(fpuregnum); + PrintF("%3s: 0x%016lx %16.4e\n", + FPURegisters::Name(fpuregnum), value, dvalue); + } else { + PrintF("%s unrecognized\n", arg1); + } + } + } else { + if (argc == 3) { + if (strcmp(arg2, "single") == 0) { + int64_t value; + float fvalue; + int fpuregnum = FPURegisters::Number(arg1); + + if (fpuregnum != kInvalidFPURegister) { + value = GetFPURegisterValue(fpuregnum); + value &= 0xffffffffUL; + fvalue = GetFPURegisterValueFloat(fpuregnum); + PrintF("%s: 0x%08lx %11.4e\n", arg1, value, fvalue); + } else { + PrintF("%s unrecognized\n", arg1); + } + } else { + PrintF("print <fpu register> single\n"); + } + } else { + PrintF("print <register> or print <fpu register> single\n"); + } + } + } else if ((strcmp(cmd, "po") == 0) + || (strcmp(cmd, "printobject") == 0)) { + if (argc == 2) { + int64_t value; + OFStream os(stdout); + if (GetValue(arg1, &value)) { + Object* obj = reinterpret_cast<Object*>(value); + os << arg1 << ": \n"; +#ifdef DEBUG + obj->Print(os); + os << "\n"; +#else + os << Brief(obj) << "\n"; +#endif + } else { + os << arg1 << " unrecognized\n"; + } + } else { + PrintF("printobject <value>\n"); + } + } else if (strcmp(cmd, "stack") == 0 || strcmp(cmd, "mem") == 0) { + int64_t* cur = NULL; + int64_t* end = NULL; + int next_arg = 1; + + if (strcmp(cmd, "stack") == 0) { + cur = reinterpret_cast<int64_t*>(sim_->get_register(Simulator::sp)); + } else { // Command "mem". 
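+          // (Illustrative usage matching the parsing below: "mem <addr>
+          // [<words>]", where <addr> may be a register name or a 0x literal
+          // accepted by GetValue(), e.g. "mem sp 16".)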
+ int64_t value; + if (!GetValue(arg1, &value)) { + PrintF("%s unrecognized\n", arg1); + continue; + } + cur = reinterpret_cast<int64_t*>(value); + next_arg++; + } + + int64_t words; + if (argc == next_arg) { + words = 10; + } else { + if (!GetValue(argv[next_arg], &words)) { + words = 10; + } + } + end = cur + words; + + while (cur < end) { + PrintF(" 0x%012lx: 0x%016lx %14ld", + reinterpret_cast<intptr_t>(cur), *cur, *cur); + HeapObject* obj = reinterpret_cast<HeapObject*>(*cur); + int64_t value = *cur; + Heap* current_heap = v8::internal::Isolate::Current()->heap(); + if (((value & 1) == 0) || current_heap->Contains(obj)) { + PrintF(" ("); + if ((value & 1) == 0) { + PrintF("smi %d", static_cast<int>(value >> 32)); + } else { + obj->ShortPrint(); + } + PrintF(")"); + } + PrintF("\n"); + cur++; + } + + } else if ((strcmp(cmd, "disasm") == 0) || + (strcmp(cmd, "dpc") == 0) || + (strcmp(cmd, "di") == 0)) { + disasm::NameConverter converter; + disasm::Disassembler dasm(converter); + // Use a reasonably large buffer. + v8::internal::EmbeddedVector<char, 256> buffer; + + byte* cur = NULL; + byte* end = NULL; + + if (argc == 1) { + cur = reinterpret_cast<byte*>(sim_->get_pc()); + end = cur + (10 * Instruction::kInstrSize); + } else if (argc == 2) { + int regnum = Registers::Number(arg1); + if (regnum != kInvalidRegister || strncmp(arg1, "0x", 2) == 0) { + // The argument is an address or a register name. + int64_t value; + if (GetValue(arg1, &value)) { + cur = reinterpret_cast<byte*>(value); + // Disassemble 10 instructions at <arg1>. + end = cur + (10 * Instruction::kInstrSize); + } + } else { + // The argument is the number of instructions. + int64_t value; + if (GetValue(arg1, &value)) { + cur = reinterpret_cast<byte*>(sim_->get_pc()); + // Disassemble <arg1> instructions. + end = cur + (value * Instruction::kInstrSize); + } + } + } else { + int64_t value1; + int64_t value2; + if (GetValue(arg1, &value1) && GetValue(arg2, &value2)) { + cur = reinterpret_cast<byte*>(value1); + end = cur + (value2 * Instruction::kInstrSize); + } + } + + while (cur < end) { + dasm.InstructionDecode(buffer, cur); + PrintF(" 0x%08lx %s\n", + reinterpret_cast<intptr_t>(cur), buffer.start()); + cur += Instruction::kInstrSize; + } + } else if (strcmp(cmd, "gdb") == 0) { + PrintF("relinquishing control to gdb\n"); + v8::base::OS::DebugBreak(); + PrintF("regaining control from gdb\n"); + } else if (strcmp(cmd, "break") == 0) { + if (argc == 2) { + int64_t value; + if (GetValue(arg1, &value)) { + if (!SetBreakpoint(reinterpret_cast<Instruction*>(value))) { + PrintF("setting breakpoint failed\n"); + } + } else { + PrintF("%s unrecognized\n", arg1); + } + } else { + PrintF("break <address>\n"); + } + } else if (strcmp(cmd, "del") == 0) { + if (!DeleteBreakpoint(NULL)) { + PrintF("deleting breakpoint failed\n"); + } + } else if (strcmp(cmd, "flags") == 0) { + PrintF("No flags on MIPS !\n"); + } else if (strcmp(cmd, "stop") == 0) { + int64_t value; + intptr_t stop_pc = sim_->get_pc() - + 2 * Instruction::kInstrSize; + Instruction* stop_instr = reinterpret_cast<Instruction*>(stop_pc); + Instruction* msg_address = + reinterpret_cast<Instruction*>(stop_pc + + Instruction::kInstrSize); + if ((argc == 2) && (strcmp(arg1, "unstop") == 0)) { + // Remove the current stop. 
+          if (sim_->IsStopInstruction(stop_instr)) {
+            stop_instr->SetInstructionBits(kNopInstr);
+            msg_address->SetInstructionBits(kNopInstr);
+          } else {
+            PrintF("Not at debugger stop.\n");
+          }
+        } else if (argc == 3) {
+          // Print information about all/the specified breakpoint(s).
+          if (strcmp(arg1, "info") == 0) {
+            if (strcmp(arg2, "all") == 0) {
+              PrintF("Stop information:\n");
+              for (uint32_t i = kMaxWatchpointCode + 1;
+                   i <= kMaxStopCode;
+                   i++) {
+                sim_->PrintStopInfo(i);
+              }
+            } else if (GetValue(arg2, &value)) {
+              sim_->PrintStopInfo(value);
+            } else {
+              PrintF("Unrecognized argument.\n");
+            }
+          } else if (strcmp(arg1, "enable") == 0) {
+            // Enable all/the specified breakpoint(s).
+            if (strcmp(arg2, "all") == 0) {
+              for (uint32_t i = kMaxWatchpointCode + 1;
+                   i <= kMaxStopCode;
+                   i++) {
+                sim_->EnableStop(i);
+              }
+            } else if (GetValue(arg2, &value)) {
+              sim_->EnableStop(value);
+            } else {
+              PrintF("Unrecognized argument.\n");
+            }
+          } else if (strcmp(arg1, "disable") == 0) {
+            // Disable all/the specified breakpoint(s).
+            if (strcmp(arg2, "all") == 0) {
+              for (uint32_t i = kMaxWatchpointCode + 1;
+                   i <= kMaxStopCode;
+                   i++) {
+                sim_->DisableStop(i);
+              }
+            } else if (GetValue(arg2, &value)) {
+              sim_->DisableStop(value);
+            } else {
+              PrintF("Unrecognized argument.\n");
+            }
+          }
+        } else {
+          PrintF("Wrong usage. Use help command for more information.\n");
+        }
+      } else if ((strcmp(cmd, "stat") == 0) || (strcmp(cmd, "st") == 0)) {
+        // Print registers and disassemble.
+        PrintAllRegs();
+        PrintF("\n");
+
+        disasm::NameConverter converter;
+        disasm::Disassembler dasm(converter);
+        // Use a reasonably large buffer.
+        v8::internal::EmbeddedVector<char, 256> buffer;
+
+        byte* cur = NULL;
+        byte* end = NULL;
+
+        if (argc == 1) {
+          cur = reinterpret_cast<byte*>(sim_->get_pc());
+          end = cur + (10 * Instruction::kInstrSize);
+        } else if (argc == 2) {
+          int64_t value;
+          if (GetValue(arg1, &value)) {
+            cur = reinterpret_cast<byte*>(value);
+            // no length parameter passed, assume 10 instructions
+            end = cur + (10 * Instruction::kInstrSize);
+          }
+        } else {
+          int64_t value1;
+          int64_t value2;
+          if (GetValue(arg1, &value1) && GetValue(arg2, &value2)) {
+            cur = reinterpret_cast<byte*>(value1);
+            end = cur + (value2 * Instruction::kInstrSize);
+          }
+        }
+
+        while (cur < end) {
+          dasm.InstructionDecode(buffer, cur);
+          PrintF(" 0x%08lx %s\n",
+                 reinterpret_cast<intptr_t>(cur), buffer.start());
+          cur += Instruction::kInstrSize;
+        }
+      } else if ((strcmp(cmd, "h") == 0) || (strcmp(cmd, "help") == 0)) {
+        PrintF("cont\n");
+        PrintF(" continue execution (alias 'c')\n");
+        PrintF("stepi\n");
+        PrintF(" step one instruction (alias 'si')\n");
+        PrintF("print <register>\n");
+        PrintF(" print register content (alias 'p')\n");
+        PrintF(" use register name 'all' to print all registers\n");
+        PrintF("printobject <register>\n");
+        PrintF(" print an object from a register (alias 'po')\n");
+        PrintF("stack [<words>]\n");
+        PrintF(" dump stack content, default dump 10 words\n");
+        PrintF("mem <address> [<words>]\n");
+        PrintF(" dump memory content, default dump 10 words\n");
+        PrintF("flags\n");
+        PrintF(" print flags\n");
+        PrintF("disasm [<instructions>]\n");
+        PrintF("disasm [<address/register>]\n");
+        PrintF("disasm [[<address/register>] <instructions>]\n");
+        PrintF(" disassemble code, default is 10 instructions\n");
+        PrintF(" from pc (alias 'di')\n");
+        PrintF("gdb\n");
+        PrintF(" enter gdb\n");
+        PrintF("break <address>\n");
+        PrintF(" set a break point on the address\n");
+        PrintF("del\n");
+        PrintF(" delete the breakpoint\n");
+        PrintF("stop feature:\n");
+        PrintF(" Description:\n");
+        PrintF(" Stops are debug instructions inserted by\n");
+        PrintF(" the Assembler::stop() function.\n");
+        PrintF(" When hitting a stop, the Simulator will\n");
+        PrintF(" stop and give control to the Debugger.\n");
+        PrintF(" All stop codes are watched:\n");
+        PrintF(" - They can be enabled / disabled: the Simulator\n");
+        PrintF(" will / won't stop when hitting them.\n");
+        PrintF(" - The Simulator keeps track of how many times they\n");
+        PrintF(" are met. (See the info command.) Going over a\n");
+        PrintF(" disabled stop still increases its counter.\n");
+        PrintF(" Commands:\n");
+        PrintF(" stop info all/<code> : print info about number <code>\n");
+        PrintF(" or all stop(s).\n");
+        PrintF(" stop enable/disable all/<code> : enables / disables\n");
+        PrintF(" all or number <code> stop(s)\n");
+        PrintF(" stop unstop\n");
+        PrintF(" ignore the stop instruction at the current location\n");
+        PrintF(" from now on\n");
+      } else {
+        PrintF("Unknown command: %s\n", cmd);
+      }
+    }
+  }
+
+  // Add all the breakpoints back to stop execution and enter the debugger
+  // shell when hit.
+  RedoBreakpoints();
+
+#undef COMMAND_SIZE
+#undef ARG_SIZE
+
+#undef STR
+#undef XSTR
+}
+
+
+static bool ICacheMatch(void* one, void* two) {
+  DCHECK((reinterpret_cast<intptr_t>(one) & CachePage::kPageMask) == 0);
+  DCHECK((reinterpret_cast<intptr_t>(two) & CachePage::kPageMask) == 0);
+  return one == two;
+}
+
+
+static uint32_t ICacheHash(void* key) {
+  return static_cast<uint32_t>(reinterpret_cast<uintptr_t>(key)) >> 2;
+}
+
+
+static bool AllOnOnePage(uintptr_t start, int size) {
+  intptr_t start_page = (start & ~CachePage::kPageMask);
+  intptr_t end_page = ((start + size) & ~CachePage::kPageMask);
+  return start_page == end_page;
+}
+
+
+void Simulator::set_last_debugger_input(char* input) {
+  DeleteArray(last_debugger_input_);
+  last_debugger_input_ = input;
+}
+
+
+void Simulator::FlushICache(v8::internal::HashMap* i_cache,
+                            void* start_addr,
+                            size_t size) {
+  int64_t start = reinterpret_cast<int64_t>(start_addr);
+  int64_t intra_line = (start & CachePage::kLineMask);
+  start -= intra_line;
+  size += intra_line;
+  size = ((size - 1) | CachePage::kLineMask) + 1;
+  int offset = (start & CachePage::kPageMask);
+  while (!AllOnOnePage(start, size - 1)) {
+    int bytes_to_flush = CachePage::kPageSize - offset;
+    FlushOnePage(i_cache, start, bytes_to_flush);
+    start += bytes_to_flush;
+    size -= bytes_to_flush;
+    DCHECK_EQ((uint64_t)0, start & CachePage::kPageMask);
+    offset = 0;
+  }
+  if (size != 0) {
+    FlushOnePage(i_cache, start, size);
+  }
+}
+
+
+CachePage* Simulator::GetCachePage(v8::internal::HashMap* i_cache, void* page) {
+  v8::internal::HashMap::Entry* entry = i_cache->Lookup(page,
+                                                        ICacheHash(page),
+                                                        true);
+  if (entry->value == NULL) {
+    CachePage* new_page = new CachePage();
+    entry->value = new_page;
+  }
+  return reinterpret_cast<CachePage*>(entry->value);
+}
+
+
+// Flush from start up to and not including start + size.
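+// (Worked example of the rounding done in FlushICache above, assuming a
+// 32-byte cache line: flushing 10 bytes at 0x1005 becomes start = 0x1000
+// and size = 15, rounded up to 32, so exactly one line is invalidated.)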
+void Simulator::FlushOnePage(v8::internal::HashMap* i_cache, + intptr_t start, + int size) { + DCHECK(size <= CachePage::kPageSize); + DCHECK(AllOnOnePage(start, size - 1)); + DCHECK((start & CachePage::kLineMask) == 0); + DCHECK((size & CachePage::kLineMask) == 0); + void* page = reinterpret_cast<void*>(start & (~CachePage::kPageMask)); + int offset = (start & CachePage::kPageMask); + CachePage* cache_page = GetCachePage(i_cache, page); + char* valid_bytemap = cache_page->ValidityByte(offset); + memset(valid_bytemap, CachePage::LINE_INVALID, size >> CachePage::kLineShift); +} + + +void Simulator::CheckICache(v8::internal::HashMap* i_cache, + Instruction* instr) { + int64_t address = reinterpret_cast<int64_t>(instr); + void* page = reinterpret_cast<void*>(address & (~CachePage::kPageMask)); + void* line = reinterpret_cast<void*>(address & (~CachePage::kLineMask)); + int offset = (address & CachePage::kPageMask); + CachePage* cache_page = GetCachePage(i_cache, page); + char* cache_valid_byte = cache_page->ValidityByte(offset); + bool cache_hit = (*cache_valid_byte == CachePage::LINE_VALID); + char* cached_line = cache_page->CachedData(offset & ~CachePage::kLineMask); + if (cache_hit) { + // Check that the data in memory matches the contents of the I-cache. + CHECK_EQ(0, memcmp(reinterpret_cast<void*>(instr), + cache_page->CachedData(offset), + Instruction::kInstrSize)); + } else { + // Cache miss. Load memory into the cache. + memcpy(cached_line, line, CachePage::kLineLength); + *cache_valid_byte = CachePage::LINE_VALID; + } +} + + +void Simulator::Initialize(Isolate* isolate) { + if (isolate->simulator_initialized()) return; + isolate->set_simulator_initialized(true); + ::v8::internal::ExternalReference::set_redirector(isolate, + &RedirectExternalReference); +} + + +Simulator::Simulator(Isolate* isolate) : isolate_(isolate) { + i_cache_ = isolate_->simulator_i_cache(); + if (i_cache_ == NULL) { + i_cache_ = new v8::internal::HashMap(&ICacheMatch); + isolate_->set_simulator_i_cache(i_cache_); + } + Initialize(isolate); + // Set up simulator support first. Some of this information is needed to + // setup the architecture state. + stack_size_ = FLAG_sim_stack_size * KB; + stack_ = reinterpret_cast<char*>(malloc(stack_size_)); + pc_modified_ = false; + icount_ = 0; + break_count_ = 0; + break_pc_ = NULL; + break_instr_ = 0; + + // Set up architecture state. + // All registers are initialized to zero to start with. + for (int i = 0; i < kNumSimuRegisters; i++) { + registers_[i] = 0; + } + for (int i = 0; i < kNumFPURegisters; i++) { + FPUregisters_[i] = 0; + } + FCSR_ = 0; + + // The sp is initialized to point to the bottom (high address) of the + // allocated stack area. To be safe in potential stack underflows we leave + // some buffer below. + registers_[sp] = reinterpret_cast<int64_t>(stack_) + stack_size_ - 64; + // The ra and pc are initialized to a known bad value that will cause an + // access violation if the simulator ever tries to execute it. + registers_[pc] = bad_ra; + registers_[ra] = bad_ra; + InitializeCoverage(); + for (int i = 0; i < kNumExceptions; i++) { + exceptions[i] = 0; + } + + last_debugger_input_ = NULL; +} + + +Simulator::~Simulator() { +} + + +// When the generated code calls an external reference we need to catch that in +// the simulator. The external reference will be a function compiled for the +// host architecture. We need to call that function instead of trying to +// execute it with the simulator. 
+// We do that by redirecting the external
+// reference to a swi (software-interrupt) instruction that is handled by
+// the simulator. We write the original destination of the jump just at a known
+// offset from the swi instruction so the simulator knows what to call.
+class Redirection {
+ public:
+  Redirection(void* external_function, ExternalReference::Type type)
+      : external_function_(external_function),
+        swi_instruction_(rtCallRedirInstr),
+        type_(type),
+        next_(NULL) {
+    Isolate* isolate = Isolate::Current();
+    next_ = isolate->simulator_redirection();
+    Simulator::current(isolate)->
+        FlushICache(isolate->simulator_i_cache(),
+                    reinterpret_cast<void*>(&swi_instruction_),
+                    Instruction::kInstrSize);
+    isolate->set_simulator_redirection(this);
+  }
+
+  void* address_of_swi_instruction() {
+    return reinterpret_cast<void*>(&swi_instruction_);
+  }
+
+  void* external_function() { return external_function_; }
+  ExternalReference::Type type() { return type_; }
+
+  static Redirection* Get(void* external_function,
+                          ExternalReference::Type type) {
+    Isolate* isolate = Isolate::Current();
+    Redirection* current = isolate->simulator_redirection();
+    for (; current != NULL; current = current->next_) {
+      if (current->external_function_ == external_function) return current;
+    }
+    return new Redirection(external_function, type);
+  }
+
+  static Redirection* FromSwiInstruction(Instruction* swi_instruction) {
+    char* addr_of_swi = reinterpret_cast<char*>(swi_instruction);
+    char* addr_of_redirection =
+        addr_of_swi - OFFSET_OF(Redirection, swi_instruction_);
+    return reinterpret_cast<Redirection*>(addr_of_redirection);
+  }
+
+  static void* ReverseRedirection(int64_t reg) {
+    Redirection* redirection = FromSwiInstruction(
+        reinterpret_cast<Instruction*>(reinterpret_cast<void*>(reg)));
+    return redirection->external_function();
+  }
+
+ private:
+  void* external_function_;
+  uint32_t swi_instruction_;
+  ExternalReference::Type type_;
+  Redirection* next_;
+};
+
+
+void* Simulator::RedirectExternalReference(void* external_function,
+                                           ExternalReference::Type type) {
+  Redirection* redirection = Redirection::Get(external_function, type);
+  return redirection->address_of_swi_instruction();
+}
+
+
+// Get the active Simulator for the current thread.
+Simulator* Simulator::current(Isolate* isolate) {
+  v8::internal::Isolate::PerIsolateThreadData* isolate_data =
+      isolate->FindOrAllocatePerThreadDataForThisThread();
+  DCHECK(isolate_data != NULL);
+
+  Simulator* sim = isolate_data->simulator();
+  if (sim == NULL) {
+    // TODO(146): delete the simulator object when a thread/isolate goes away.
+    sim = new Simulator(isolate);
+    isolate_data->set_simulator(sim);
+  }
+  return sim;
+}
+
+
+// Sets the register in the architecture state. It will also deal with updating
+// Simulator internal state for special registers such as PC.
+void Simulator::set_register(int reg, int64_t value) {
+  DCHECK((reg >= 0) && (reg < kNumSimuRegisters));
+  if (reg == pc) {
+    pc_modified_ = true;
+  }
+
+  // Zero register always holds 0.
+  registers_[reg] = (reg == 0) ? 0 : value;
+}
+
+
+void Simulator::set_dw_register(int reg, const int* dbl) {
+  DCHECK((reg >= 0) && (reg < kNumSimuRegisters));
+  registers_[reg] = dbl[1];
+  registers_[reg] = registers_[reg] << 32;
+  registers_[reg] += dbl[0];
+}
+
+
+void Simulator::set_fpu_register(int fpureg, int64_t value) {
+  DCHECK((fpureg >= 0) && (fpureg < kNumFPURegisters));
+  FPUregisters_[fpureg] = value;
+}
+
+
+void Simulator::set_fpu_register_word(int fpureg, int32_t value) {
+  // Set ONLY lower 32-bits, leaving upper bits untouched.
+  // TODO(plind): big endian issue.
+  DCHECK((fpureg >= 0) && (fpureg < kNumFPURegisters));
+  int32_t *pword = reinterpret_cast<int32_t*>(&FPUregisters_[fpureg]);
+  *pword = value;
+}
+
+
+void Simulator::set_fpu_register_hi_word(int fpureg, int32_t value) {
+  // Set ONLY upper 32-bits, leaving lower bits untouched.
+  // TODO(plind): big endian issue.
+  DCHECK((fpureg >= 0) && (fpureg < kNumFPURegisters));
+  int32_t *phiword = (reinterpret_cast<int32_t*>(&FPUregisters_[fpureg])) + 1;
+  *phiword = value;
+}
+
+
+void Simulator::set_fpu_register_float(int fpureg, float value) {
+  DCHECK((fpureg >= 0) && (fpureg < kNumFPURegisters));
+  *BitCast<float*>(&FPUregisters_[fpureg]) = value;
+}
+
+
+void Simulator::set_fpu_register_double(int fpureg, double value) {
+  DCHECK((fpureg >= 0) && (fpureg < kNumFPURegisters));
+  *BitCast<double*>(&FPUregisters_[fpureg]) = value;
+}
+
+
+// Get the register from the architecture state. This function does handle
+// the special case of accessing the PC register.
+int64_t Simulator::get_register(int reg) const {
+  DCHECK((reg >= 0) && (reg < kNumSimuRegisters));
+  if (reg == 0)
+    return 0;
+  else
+    return registers_[reg] + ((reg == pc) ? Instruction::kPCReadOffset : 0);
+}
+
+
+double Simulator::get_double_from_register_pair(int reg) {
+  // TODO(plind): bad ABI stuff, refactor or remove.
+  DCHECK((reg >= 0) && (reg < kNumSimuRegisters) && ((reg % 2) == 0));
+
+  double dm_val = 0.0;
+  // Read the bits from the unsigned integer register_[] array
+  // into the double precision floating point value and return it.
+  char buffer[sizeof(registers_[0])];
+  memcpy(buffer, &registers_[reg], sizeof(registers_[0]));
+  memcpy(&dm_val, buffer, sizeof(registers_[0]));
+  return(dm_val);
+}
+
+
+int64_t Simulator::get_fpu_register(int fpureg) const {
+  DCHECK((fpureg >= 0) && (fpureg < kNumFPURegisters));
+  return FPUregisters_[fpureg];
+}
+
+
+int32_t Simulator::get_fpu_register_word(int fpureg) const {
+  DCHECK((fpureg >= 0) && (fpureg < kNumFPURegisters));
+  return static_cast<int32_t>(FPUregisters_[fpureg] & 0xffffffff);
+}
+
+
+int32_t Simulator::get_fpu_register_signed_word(int fpureg) const {
+  DCHECK((fpureg >= 0) && (fpureg < kNumFPURegisters));
+  return static_cast<int32_t>(FPUregisters_[fpureg] & 0xffffffff);
+}
+
+
+uint32_t Simulator::get_fpu_register_hi_word(int fpureg) const {
+  DCHECK((fpureg >= 0) && (fpureg < kNumFPURegisters));
+  return static_cast<uint32_t>((FPUregisters_[fpureg] >> 32) & 0xffffffff);
+}
+
+
+float Simulator::get_fpu_register_float(int fpureg) const {
+  DCHECK((fpureg >= 0) && (fpureg < kNumFPURegisters));
+  return *BitCast<float*>(
+      const_cast<int64_t*>(&FPUregisters_[fpureg]));
+}
+
+
+double Simulator::get_fpu_register_double(int fpureg) const {
+  DCHECK((fpureg >= 0) && (fpureg < kNumFPURegisters));
+  return *BitCast<double*>(&FPUregisters_[fpureg]);
+}
+
+
+// Runtime FP routines take up to two double arguments and zero
+// or one integer arguments.
All are constructed here, +// from a0-a3 or f12 and f13 (n64), or f14 (O32). +void Simulator::GetFpArgs(double* x, double* y, int32_t* z) { + if (!IsMipsSoftFloatABI) { + const int fparg2 = (kMipsAbi == kN64) ? 13 : 14; + *x = get_fpu_register_double(12); + *y = get_fpu_register_double(fparg2); + *z = get_register(a2); + } else { + // TODO(plind): bad ABI stuff, refactor or remove. + // We use a char buffer to get around the strict-aliasing rules which + // otherwise allow the compiler to optimize away the copy. + char buffer[sizeof(*x)]; + int32_t* reg_buffer = reinterpret_cast<int32_t*>(buffer); + + // Registers a0 and a1 -> x. + reg_buffer[0] = get_register(a0); + reg_buffer[1] = get_register(a1); + memcpy(x, buffer, sizeof(buffer)); + // Registers a2 and a3 -> y. + reg_buffer[0] = get_register(a2); + reg_buffer[1] = get_register(a3); + memcpy(y, buffer, sizeof(buffer)); + // Register 2 -> z. + reg_buffer[0] = get_register(a2); + memcpy(z, buffer, sizeof(*z)); + } +} + + +// The return value is either in v0/v1 or f0. +void Simulator::SetFpResult(const double& result) { + if (!IsMipsSoftFloatABI) { + set_fpu_register_double(0, result); + } else { + char buffer[2 * sizeof(registers_[0])]; + int64_t* reg_buffer = reinterpret_cast<int64_t*>(buffer); + memcpy(buffer, &result, sizeof(buffer)); + // Copy result to v0 and v1. + set_register(v0, reg_buffer[0]); + set_register(v1, reg_buffer[1]); + } +} + + +// Helper functions for setting and testing the FCSR register's bits. +void Simulator::set_fcsr_bit(uint32_t cc, bool value) { + if (value) { + FCSR_ |= (1 << cc); + } else { + FCSR_ &= ~(1 << cc); + } +} + + +bool Simulator::test_fcsr_bit(uint32_t cc) { + return FCSR_ & (1 << cc); +} + + +// Sets the rounding error codes in FCSR based on the result of the rounding. +// Returns true if the operation was invalid. +bool Simulator::set_fcsr_round_error(double original, double rounded) { + bool ret = false; + double max_int32 = std::numeric_limits<int32_t>::max(); + double min_int32 = std::numeric_limits<int32_t>::min(); + + if (!std::isfinite(original) || !std::isfinite(rounded)) { + set_fcsr_bit(kFCSRInvalidOpFlagBit, true); + ret = true; + } + + if (original != rounded) { + set_fcsr_bit(kFCSRInexactFlagBit, true); + } + + if (rounded < DBL_MIN && rounded > -DBL_MIN && rounded != 0) { + set_fcsr_bit(kFCSRUnderflowFlagBit, true); + ret = true; + } + + if (rounded > max_int32 || rounded < min_int32) { + set_fcsr_bit(kFCSROverflowFlagBit, true); + // The reference is not really clear but it seems this is required: + set_fcsr_bit(kFCSRInvalidOpFlagBit, true); + ret = true; + } + + return ret; +} + + +// Sets the rounding error codes in FCSR based on the result of the rounding. +// Returns true if the operation was invalid. 
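+// (Same policy as set_fcsr_round_error above but against the int64_t
+// range; e.g. a result that rounds to 1e19 exceeds that range, so the
+// overflow and invalid-operation bits are set and true is returned.
+// Illustrative value.)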
+bool Simulator::set_fcsr_round64_error(double original, double rounded) {
+  bool ret = false;
+  double max_int64 = std::numeric_limits<int64_t>::max();
+  double min_int64 = std::numeric_limits<int64_t>::min();
+
+  if (!std::isfinite(original) || !std::isfinite(rounded)) {
+    set_fcsr_bit(kFCSRInvalidOpFlagBit, true);
+    ret = true;
+  }
+
+  if (original != rounded) {
+    set_fcsr_bit(kFCSRInexactFlagBit, true);
+  }
+
+  if (rounded < DBL_MIN && rounded > -DBL_MIN && rounded != 0) {
+    set_fcsr_bit(kFCSRUnderflowFlagBit, true);
+    ret = true;
+  }
+
+  if (rounded > max_int64 || rounded < min_int64) {
+    set_fcsr_bit(kFCSROverflowFlagBit, true);
+    // The reference is not really clear but it seems this is required:
+    set_fcsr_bit(kFCSRInvalidOpFlagBit, true);
+    ret = true;
+  }
+
+  return ret;
+}
+
+
+// Raw access to the PC register.
+void Simulator::set_pc(int64_t value) {
+  pc_modified_ = true;
+  registers_[pc] = value;
+}
+
+
+bool Simulator::has_bad_pc() const {
+  return ((registers_[pc] == bad_ra) || (registers_[pc] == end_sim_pc));
+}
+
+
+// Raw access to the PC register without the special adjustment when reading.
+int64_t Simulator::get_pc() const {
+  return registers_[pc];
+}
+
+
+// The MIPS cannot do unaligned reads and writes. On some MIPS platforms an
+// interrupt is caused. On others it does a funky rotation thing. For now we
+// simply disallow unaligned reads, but at some point we may want to move to
+// emulating the rotate behaviour. Note that simulator runs have the runtime
+// system running directly on the host system and only generated code is
+// executed in the simulator. Since the host is typically IA32 we will not
+// get the correct MIPS-like behaviour on unaligned accesses.
+
+// TODO(plind): refactor this messy debug code when we do unaligned access.
+void Simulator::DieOrDebug() {
+  if (1) {  // Flag for this was removed.
+    MipsDebugger dbg(this);
+    dbg.Debug();
+  } else {
+    base::OS::Abort();
+  }
+}
+
+
+void Simulator::TraceRegWr(int64_t value) {
+  if (::v8::internal::FLAG_trace_sim) {
+    SNPrintF(trace_buf_, "%016lx", value);
+  }
+}
+
+
+// TODO(plind): consider making icount_ printing a flag option.
+void Simulator::TraceMemRd(int64_t addr, int64_t value) {
+  if (::v8::internal::FLAG_trace_sim) {
+    SNPrintF(trace_buf_, "%016lx <-- [%016lx] (%ld)",
+             value, addr, icount_);
+  }
+}
+
+
+void Simulator::TraceMemWr(int64_t addr, int64_t value, TraceType t) {
+  if (::v8::internal::FLAG_trace_sim) {
+    switch (t) {
+      case BYTE:
+        SNPrintF(trace_buf_, " %02x --> [%016lx]",
+                 static_cast<int8_t>(value), addr);
+        break;
+      case HALF:
+        SNPrintF(trace_buf_, " %04x --> [%016lx]",
+                 static_cast<int16_t>(value), addr);
+        break;
+      case WORD:
+        SNPrintF(trace_buf_, " %08x --> [%016lx]",
+                 static_cast<int32_t>(value), addr);
+        break;
+      case DWORD:
+        SNPrintF(trace_buf_, "%016lx --> [%016lx] (%ld)",
+                 value, addr, icount_);
+        break;
+    }
+  }
+}
+
+
+// TODO(plind): sign-extend and zero-extend not implemented properly
+// on all the ReadXX functions, I don't think re-interpret cast does it.
+int32_t Simulator::ReadW(int64_t addr, Instruction* instr) {
+  if (addr >= 0 && addr < 0x400) {
+    // This has to be a NULL-dereference, drop into debugger.
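+    // (Addresses below 0x400 are assumed to be field offsets off a NULL
+    // pointer rather than valid memory; the other Read/Write helpers use
+    // the same guard.)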
+ PrintF("Memory read from bad address: 0x%08lx, pc=0x%08lx\n", + addr, reinterpret_cast<intptr_t>(instr)); + DieOrDebug(); + } + if ((addr & 0x3) == 0) { + int32_t* ptr = reinterpret_cast<int32_t*>(addr); + TraceMemRd(addr, static_cast<int64_t>(*ptr)); + return *ptr; + } + PrintF("Unaligned read at 0x%08lx, pc=0x%08" V8PRIxPTR "\n", + addr, + reinterpret_cast<intptr_t>(instr)); + DieOrDebug(); + return 0; +} + + +uint32_t Simulator::ReadWU(int64_t addr, Instruction* instr) { + if (addr >=0 && addr < 0x400) { + // This has to be a NULL-dereference, drop into debugger. + PrintF("Memory read from bad address: 0x%08lx, pc=0x%08lx\n", + addr, reinterpret_cast<intptr_t>(instr)); + DieOrDebug(); + } + if ((addr & 0x3) == 0) { + uint32_t* ptr = reinterpret_cast<uint32_t*>(addr); + TraceMemRd(addr, static_cast<int64_t>(*ptr)); + return *ptr; + } + PrintF("Unaligned read at 0x%08lx, pc=0x%08" V8PRIxPTR "\n", + addr, + reinterpret_cast<intptr_t>(instr)); + DieOrDebug(); + return 0; +} + + +void Simulator::WriteW(int64_t addr, int value, Instruction* instr) { + if (addr >= 0 && addr < 0x400) { + // This has to be a NULL-dereference, drop into debugger. + PrintF("Memory write to bad address: 0x%08lx, pc=0x%08lx\n", + addr, reinterpret_cast<intptr_t>(instr)); + DieOrDebug(); + } + if ((addr & 0x3) == 0) { + TraceMemWr(addr, value, WORD); + int* ptr = reinterpret_cast<int*>(addr); + *ptr = value; + return; + } + PrintF("Unaligned write at 0x%08lx, pc=0x%08" V8PRIxPTR "\n", + addr, + reinterpret_cast<intptr_t>(instr)); + DieOrDebug(); +} + + +int64_t Simulator::Read2W(int64_t addr, Instruction* instr) { + if (addr >=0 && addr < 0x400) { + // This has to be a NULL-dereference, drop into debugger. + PrintF("Memory read from bad address: 0x%08lx, pc=0x%08lx\n", + addr, reinterpret_cast<intptr_t>(instr)); + DieOrDebug(); + } + if ((addr & kPointerAlignmentMask) == 0) { + int64_t* ptr = reinterpret_cast<int64_t*>(addr); + TraceMemRd(addr, *ptr); + return *ptr; + } + PrintF("Unaligned read at 0x%08lx, pc=0x%08" V8PRIxPTR "\n", + addr, + reinterpret_cast<intptr_t>(instr)); + DieOrDebug(); + return 0; +} + + +void Simulator::Write2W(int64_t addr, int64_t value, Instruction* instr) { + if (addr >= 0 && addr < 0x400) { + // This has to be a NULL-dereference, drop into debugger. 
+ PrintF("Memory write to bad address: 0x%08lx, pc=0x%08lx\n", + addr, reinterpret_cast<intptr_t>(instr)); + DieOrDebug(); + } + if ((addr & kPointerAlignmentMask) == 0) { + TraceMemWr(addr, value, DWORD); + int64_t* ptr = reinterpret_cast<int64_t*>(addr); + *ptr = value; + return; + } + PrintF("Unaligned write at 0x%08lx, pc=0x%08" V8PRIxPTR "\n", + addr, + reinterpret_cast<intptr_t>(instr)); + DieOrDebug(); +} + + +double Simulator::ReadD(int64_t addr, Instruction* instr) { + if ((addr & kDoubleAlignmentMask) == 0) { + double* ptr = reinterpret_cast<double*>(addr); + return *ptr; + } + PrintF("Unaligned (double) read at 0x%08lx, pc=0x%08" V8PRIxPTR "\n", + addr, + reinterpret_cast<intptr_t>(instr)); + base::OS::Abort(); + return 0; +} + + +void Simulator::WriteD(int64_t addr, double value, Instruction* instr) { + if ((addr & kDoubleAlignmentMask) == 0) { + double* ptr = reinterpret_cast<double*>(addr); + *ptr = value; + return; + } + PrintF("Unaligned (double) write at 0x%08lx, pc=0x%08" V8PRIxPTR "\n", + addr, + reinterpret_cast<intptr_t>(instr)); + DieOrDebug(); +} + + +uint16_t Simulator::ReadHU(int64_t addr, Instruction* instr) { + if ((addr & 1) == 0) { + uint16_t* ptr = reinterpret_cast<uint16_t*>(addr); + TraceMemRd(addr, static_cast<int64_t>(*ptr)); + return *ptr; + } + PrintF("Unaligned unsigned halfword read at 0x%08lx, pc=0x%08" V8PRIxPTR "\n", + addr, + reinterpret_cast<intptr_t>(instr)); + DieOrDebug(); + return 0; +} + + +int16_t Simulator::ReadH(int64_t addr, Instruction* instr) { + if ((addr & 1) == 0) { + int16_t* ptr = reinterpret_cast<int16_t*>(addr); + TraceMemRd(addr, static_cast<int64_t>(*ptr)); + return *ptr; + } + PrintF("Unaligned signed halfword read at 0x%08lx, pc=0x%08" V8PRIxPTR "\n", + addr, + reinterpret_cast<intptr_t>(instr)); + DieOrDebug(); + return 0; +} + + +void Simulator::WriteH(int64_t addr, uint16_t value, Instruction* instr) { + if ((addr & 1) == 0) { + TraceMemWr(addr, value, HALF); + uint16_t* ptr = reinterpret_cast<uint16_t*>(addr); + *ptr = value; + return; + } + PrintF( + "Unaligned unsigned halfword write at 0x%08lx, pc=0x%08" V8PRIxPTR "\n", + addr, + reinterpret_cast<intptr_t>(instr)); + DieOrDebug(); +} + + +void Simulator::WriteH(int64_t addr, int16_t value, Instruction* instr) { + if ((addr & 1) == 0) { + TraceMemWr(addr, value, HALF); + int16_t* ptr = reinterpret_cast<int16_t*>(addr); + *ptr = value; + return; + } + PrintF("Unaligned halfword write at 0x%08lx, pc=0x%08" V8PRIxPTR "\n", + addr, + reinterpret_cast<intptr_t>(instr)); + DieOrDebug(); +} + + +uint32_t Simulator::ReadBU(int64_t addr) { + uint8_t* ptr = reinterpret_cast<uint8_t*>(addr); + TraceMemRd(addr, static_cast<int64_t>(*ptr)); + return *ptr & 0xff; +} + + +int32_t Simulator::ReadB(int64_t addr) { + int8_t* ptr = reinterpret_cast<int8_t*>(addr); + TraceMemRd(addr, static_cast<int64_t>(*ptr)); + return *ptr; +} + + +void Simulator::WriteB(int64_t addr, uint8_t value) { + TraceMemWr(addr, value, BYTE); + uint8_t* ptr = reinterpret_cast<uint8_t*>(addr); + *ptr = value; +} + + +void Simulator::WriteB(int64_t addr, int8_t value) { + TraceMemWr(addr, value, BYTE); + int8_t* ptr = reinterpret_cast<int8_t*>(addr); + *ptr = value; +} + + +// Returns the limit of the stack area to enable checking for stack overflows. +uintptr_t Simulator::StackLimit() const { + // Leave a safety margin of 1024 bytes to prevent overrunning the stack when + // pushing values. 
+  return reinterpret_cast<uintptr_t>(stack_) + 1024;
+}
+
+
+// Unsupported instructions use Format to print an error and stop execution.
+void Simulator::Format(Instruction* instr, const char* format) {
+  PrintF("Simulator found unsupported instruction:\n 0x%08lx: %s\n",
+         reinterpret_cast<intptr_t>(instr), format);
+  UNIMPLEMENTED_MIPS();
+}
+
+
+// Calls into the V8 runtime are based on this very simple interface.
+// Note: To be able to return two values from some calls the code in runtime.cc
+// uses the ObjectPair which is essentially two pointer-sized values stuffed
+// into one return value. With the code below we assume that all runtime calls
+// return 64 bits of result. If they don't, the v1 result register contains
+// a bogus value, which is fine because it is caller-saved.
+
+struct ObjectPair {
+  Object* x;
+  Object* y;
+};
+
+typedef ObjectPair (*SimulatorRuntimeCall)(int64_t arg0,
+                                           int64_t arg1,
+                                           int64_t arg2,
+                                           int64_t arg3,
+                                           int64_t arg4,
+                                           int64_t arg5);
+
+
+// These prototypes handle the four types of FP calls.
+typedef int64_t (*SimulatorRuntimeCompareCall)(double darg0, double darg1);
+typedef double (*SimulatorRuntimeFPFPCall)(double darg0, double darg1);
+typedef double (*SimulatorRuntimeFPCall)(double darg0);
+typedef double (*SimulatorRuntimeFPIntCall)(double darg0, int32_t arg0);
+
+// This signature supports direct call into API function native callback
+// (refer to InvocationCallback in v8.h).
+typedef void (*SimulatorRuntimeDirectApiCall)(int64_t arg0);
+typedef void (*SimulatorRuntimeProfilingApiCall)(int64_t arg0, void* arg1);
+
+// This signature supports direct call to accessor getter callback.
+typedef void (*SimulatorRuntimeDirectGetterCall)(int64_t arg0, int64_t arg1);
+typedef void (*SimulatorRuntimeProfilingGetterCall)(
+    int64_t arg0, int64_t arg1, void* arg2);
+
+// Software interrupt instructions are used by the simulator to call into the
+// C-based V8 runtime. They are also used for debugging with the simulator.
+void Simulator::SoftwareInterrupt(Instruction* instr) {
+  // There are several instructions that could get us here:
+  // the break_ instruction, or several variants of traps. All
+  // are "SPECIAL" class opcodes, and are distinguished by function.
+  int32_t func = instr->FunctionFieldRaw();
+  uint32_t code = (func == BREAK) ? instr->Bits(25, 6) : -1;
+  // We first check if we met a call_rt_redirected.
+  if (instr->InstructionBits() == rtCallRedirInstr) {
+    Redirection* redirection = Redirection::FromSwiInstruction(instr);
+    int64_t arg0 = get_register(a0);
+    int64_t arg1 = get_register(a1);
+    int64_t arg2 = get_register(a2);
+    int64_t arg3 = get_register(a3);
+    int64_t arg4, arg5;
+
+    if (kMipsAbi == kN64) {
+      arg4 = get_register(a4);  // Abi n64 register a4.
+      arg5 = get_register(a5);  // Abi n64 register a5.
+    } else {  // Abi O32.
+      int64_t* stack_pointer = reinterpret_cast<int64_t*>(get_register(sp));
+      // Args 4 and 5 are on the stack after the reserved space for args 0..3.
+      arg4 = stack_pointer[4];
+      arg5 = stack_pointer[5];
+    }
+    bool fp_call =
+        (redirection->type() == ExternalReference::BUILTIN_FP_FP_CALL) ||
+        (redirection->type() == ExternalReference::BUILTIN_COMPARE_CALL) ||
+        (redirection->type() == ExternalReference::BUILTIN_FP_CALL) ||
+        (redirection->type() == ExternalReference::BUILTIN_FP_INT_CALL);
+
+    if (!IsMipsSoftFloatABI) {
+      // With the hard floating point calling convention, double
+      // arguments are passed in FPU registers. Fetch the arguments
+      // from there and call the builtin using soft floating point
+      // convention.
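+      // (Convention assumed by the cases below: the first double arrives
+      // in f12/f13, a second double in f14/f15, and an integer argument in
+      // a2; each case copies these into the generic arg0..arg3 slots so the
+      // redirected call proceeds uniformly.)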
+ switch (redirection->type()) { + case ExternalReference::BUILTIN_FP_FP_CALL: + case ExternalReference::BUILTIN_COMPARE_CALL: + arg0 = get_fpu_register(f12); + arg1 = get_fpu_register(f13); + arg2 = get_fpu_register(f14); + arg3 = get_fpu_register(f15); + break; + case ExternalReference::BUILTIN_FP_CALL: + arg0 = get_fpu_register(f12); + arg1 = get_fpu_register(f13); + break; + case ExternalReference::BUILTIN_FP_INT_CALL: + arg0 = get_fpu_register(f12); + arg1 = get_fpu_register(f13); + arg2 = get_register(a2); + break; + default: + break; + } + } + + // This is dodgy but it works because the C entry stubs are never moved. + // See comment in codegen-arm.cc and bug 1242173. + int64_t saved_ra = get_register(ra); + + intptr_t external = + reinterpret_cast<intptr_t>(redirection->external_function()); + + // Based on CpuFeatures::IsSupported(FPU), Mips will use either hardware + // FPU, or gcc soft-float routines. Hardware FPU is simulated in this + // simulator. Soft-float has additional abstraction of ExternalReference, + // to support serialization. + if (fp_call) { + double dval0, dval1; // one or two double parameters + int32_t ival; // zero or one integer parameters + int64_t iresult = 0; // integer return value + double dresult = 0; // double return value + GetFpArgs(&dval0, &dval1, &ival); + SimulatorRuntimeCall generic_target = + reinterpret_cast<SimulatorRuntimeCall>(external); + if (::v8::internal::FLAG_trace_sim) { + switch (redirection->type()) { + case ExternalReference::BUILTIN_FP_FP_CALL: + case ExternalReference::BUILTIN_COMPARE_CALL: + PrintF("Call to host function at %p with args %f, %f", + FUNCTION_ADDR(generic_target), dval0, dval1); + break; + case ExternalReference::BUILTIN_FP_CALL: + PrintF("Call to host function at %p with arg %f", + FUNCTION_ADDR(generic_target), dval0); + break; + case ExternalReference::BUILTIN_FP_INT_CALL: + PrintF("Call to host function at %p with args %f, %d", + FUNCTION_ADDR(generic_target), dval0, ival); + break; + default: + UNREACHABLE(); + break; + } + } + switch (redirection->type()) { + case ExternalReference::BUILTIN_COMPARE_CALL: { + SimulatorRuntimeCompareCall target = + reinterpret_cast<SimulatorRuntimeCompareCall>(external); + iresult = target(dval0, dval1); + set_register(v0, static_cast<int64_t>(iresult)); + // set_register(v1, static_cast<int64_t>(iresult >> 32)); + break; + } + case ExternalReference::BUILTIN_FP_FP_CALL: { + SimulatorRuntimeFPFPCall target = + reinterpret_cast<SimulatorRuntimeFPFPCall>(external); + dresult = target(dval0, dval1); + SetFpResult(dresult); + break; + } + case ExternalReference::BUILTIN_FP_CALL: { + SimulatorRuntimeFPCall target = + reinterpret_cast<SimulatorRuntimeFPCall>(external); + dresult = target(dval0); + SetFpResult(dresult); + break; + } + case ExternalReference::BUILTIN_FP_INT_CALL: { + SimulatorRuntimeFPIntCall target = + reinterpret_cast<SimulatorRuntimeFPIntCall>(external); + dresult = target(dval0, ival); + SetFpResult(dresult); + break; + } + default: + UNREACHABLE(); + break; + } + if (::v8::internal::FLAG_trace_sim) { + switch (redirection->type()) { + case ExternalReference::BUILTIN_COMPARE_CALL: + PrintF("Returned %08x\n", static_cast<int32_t>(iresult)); + break; + case ExternalReference::BUILTIN_FP_FP_CALL: + case ExternalReference::BUILTIN_FP_CALL: + case ExternalReference::BUILTIN_FP_INT_CALL: + PrintF("Returned %f\n", dresult); + break; + default: + UNREACHABLE(); + break; + } + } + } else if (redirection->type() == ExternalReference::DIRECT_API_CALL) { + if 
(::v8::internal::FLAG_trace_sim) { + PrintF("Call to host function at %p args %08lx\n", + reinterpret_cast<void*>(external), arg0); + } + SimulatorRuntimeDirectApiCall target = + reinterpret_cast<SimulatorRuntimeDirectApiCall>(external); + target(arg0); + } else if ( + redirection->type() == ExternalReference::PROFILING_API_CALL) { + if (::v8::internal::FLAG_trace_sim) { + PrintF("Call to host function at %p args %08lx %08lx\n", + reinterpret_cast<void*>(external), arg0, arg1); + } + SimulatorRuntimeProfilingApiCall target = + reinterpret_cast<SimulatorRuntimeProfilingApiCall>(external); + target(arg0, Redirection::ReverseRedirection(arg1)); + } else if ( + redirection->type() == ExternalReference::DIRECT_GETTER_CALL) { + if (::v8::internal::FLAG_trace_sim) { + PrintF("Call to host function at %p args %08lx %08lx\n", + reinterpret_cast<void*>(external), arg0, arg1); + } + SimulatorRuntimeDirectGetterCall target = + reinterpret_cast<SimulatorRuntimeDirectGetterCall>(external); + target(arg0, arg1); + } else if ( + redirection->type() == ExternalReference::PROFILING_GETTER_CALL) { + if (::v8::internal::FLAG_trace_sim) { + PrintF("Call to host function at %p args %08lx %08lx %08lx\n", + reinterpret_cast<void*>(external), arg0, arg1, arg2); + } + SimulatorRuntimeProfilingGetterCall target = + reinterpret_cast<SimulatorRuntimeProfilingGetterCall>(external); + target(arg0, arg1, Redirection::ReverseRedirection(arg2)); + } else { + SimulatorRuntimeCall target = + reinterpret_cast<SimulatorRuntimeCall>(external); + if (::v8::internal::FLAG_trace_sim) { + PrintF( + "Call to host function at %p " + "args %08lx, %08lx, %08lx, %08lx, %08lx, %08lx\n", + FUNCTION_ADDR(target), + arg0, + arg1, + arg2, + arg3, + arg4, + arg5); + } + // int64_t result = target(arg0, arg1, arg2, arg3, arg4, arg5); + // set_register(v0, static_cast<int32_t>(result)); + // set_register(v1, static_cast<int32_t>(result >> 32)); + ObjectPair result = target(arg0, arg1, arg2, arg3, arg4, arg5); + set_register(v0, (int64_t)(result.x)); + set_register(v1, (int64_t)(result.y)); + } + if (::v8::internal::FLAG_trace_sim) { + PrintF("Returned %08lx : %08lx\n", get_register(v1), get_register(v0)); + } + set_register(ra, saved_ra); + set_pc(get_register(ra)); + + } else if (func == BREAK && code <= kMaxStopCode) { + if (IsWatchpoint(code)) { + PrintWatchpoint(code); + } else { + IncreaseStopCounter(code); + HandleStop(code, instr); + } + } else { + // All remaining break_ codes, and all traps are handled here. + MipsDebugger dbg(this); + dbg.Debug(); + } +} + + +// Stop helper functions. +bool Simulator::IsWatchpoint(uint64_t code) { + return (code <= kMaxWatchpointCode); +} + + +void Simulator::PrintWatchpoint(uint64_t code) { + MipsDebugger dbg(this); + ++break_count_; + PrintF("\n---- break %ld marker: %3d (instr count: %8ld) ----------" + "----------------------------------", + code, break_count_, icount_); + dbg.PrintAllRegs(); // Print registers and continue running. +} + + +void Simulator::HandleStop(uint64_t code, Instruction* instr) { + // Stop if it is enabled, otherwise go on jumping over the stop + // and the message address. 
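+  // (The 2 * kInstrSize skip assumes the stop site spans the break plus one
+  // companion word; see the TODO in MipsDebugger::Stop questioning whether
+  // the 64-bit message pointer needs a third slot.)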
+  if (IsEnabledStop(code)) {
+    MipsDebugger dbg(this);
+    dbg.Stop(instr);
+  } else {
+    set_pc(get_pc() + 2 * Instruction::kInstrSize);
+  }
+}
+
+
+bool Simulator::IsStopInstruction(Instruction* instr) {
+  int32_t func = instr->FunctionFieldRaw();
+  uint32_t code = static_cast<uint32_t>(instr->Bits(25, 6));
+  return (func == BREAK) && code > kMaxWatchpointCode && code <= kMaxStopCode;
+}
+
+
+bool Simulator::IsEnabledStop(uint64_t code) {
+  DCHECK(code <= kMaxStopCode);
+  DCHECK(code > kMaxWatchpointCode);
+  return !(watched_stops_[code].count & kStopDisabledBit);
+}
+
+
+void Simulator::EnableStop(uint64_t code) {
+  if (!IsEnabledStop(code)) {
+    watched_stops_[code].count &= ~kStopDisabledBit;
+  }
+}
+
+
+void Simulator::DisableStop(uint64_t code) {
+  if (IsEnabledStop(code)) {
+    watched_stops_[code].count |= kStopDisabledBit;
+  }
+}
+
+
+void Simulator::IncreaseStopCounter(uint64_t code) {
+  DCHECK(code <= kMaxStopCode);
+  if ((watched_stops_[code].count & ~(1 << 31)) == 0x7fffffff) {
+    PrintF("Stop counter for code %ld has overflowed.\n"
+           "Enabling this code and resetting the counter to 0.\n", code);
+    watched_stops_[code].count = 0;
+    EnableStop(code);
+  } else {
+    watched_stops_[code].count++;
+  }
+}
+
+
+// Print a stop status.
+void Simulator::PrintStopInfo(uint64_t code) {
+  if (code <= kMaxWatchpointCode) {
+    PrintF("That is a watchpoint, not a stop.\n");
+    return;
+  } else if (code > kMaxStopCode) {
+    PrintF("Code too large, only %u stops can be used\n", kMaxStopCode + 1);
+    return;
+  }
+  const char* state = IsEnabledStop(code) ? "Enabled" : "Disabled";
+  int32_t count = watched_stops_[code].count & ~kStopDisabledBit;
+  // Don't print the state of unused breakpoints.
+  if (count != 0) {
+    if (watched_stops_[code].desc) {
+      PrintF("stop %ld - 0x%lx: \t%s, \tcounter = %i, \t%s\n",
+             code, code, state, count, watched_stops_[code].desc);
+    } else {
+      PrintF("stop %ld - 0x%lx: \t%s, \tcounter = %i\n",
+             code, code, state, count);
+    }
+  }
+}
+
+
+void Simulator::SignalExceptions() {
+  for (int i = 1; i < kNumExceptions; i++) {
+    if (exceptions[i] != 0) {
+      V8_Fatal(__FILE__, __LINE__, "Error: Exception %i raised.", i);
+    }
+  }
+}
+
+
+// Handle execution based on instruction types.
+
+void Simulator::ConfigureTypeRegister(Instruction* instr,
+                                      int64_t* alu_out,
+                                      int64_t* i64hilo,
+                                      uint64_t* u64hilo,
+                                      int64_t* next_pc,
+                                      int64_t* return_addr_reg,
+                                      bool* do_interrupt,
+                                      int64_t* i128resultH,
+                                      int64_t* i128resultL) {
+  // Every local variable declared here needs to be const.
+  // This is to make sure that changed values are sent back to
+  // DecodeTypeRegister correctly.
+
+  // Instruction fields.
+  const Opcode op = instr->OpcodeFieldRaw();
+  const int64_t rs_reg = instr->RsValue();
+  const int64_t rs = get_register(rs_reg);
+  const uint64_t rs_u = static_cast<uint64_t>(rs);
+  const int64_t rt_reg = instr->RtValue();
+  const int64_t rt = get_register(rt_reg);
+  const uint64_t rt_u = static_cast<uint64_t>(rt);
+  const int64_t rd_reg = instr->RdValue();
+  const uint64_t sa = instr->SaValue();
+
+  const int32_t fs_reg = instr->FsValue();
+
+
+  // ---------- Configuration.
+  switch (op) {
+    case COP1:  // Coprocessor instructions.
+      switch (instr->RsFieldRaw()) {
+        case CFC1:
+          // At the moment only FCSR is supported.
+          DCHECK(fs_reg == kFCSRRegister);
+          *alu_out = FCSR_;
+          break;
+        case MFC1:
+          *alu_out = static_cast<int64_t>(get_fpu_register_word(fs_reg));
+          break;
+        case DMFC1:
+          *alu_out = get_fpu_register(fs_reg);
+          break;
+        case MFHC1:
+          *alu_out = get_fpu_register_hi_word(fs_reg);
+          break;
+        case CTC1:
+        case MTC1:
+        case DMTC1:
+        case MTHC1:
+        case S:
+        case D:
+        case W:
+        case L:
+        case PS:
+          // Do everything in the execution step.
+          break;
+        default:
+          // BC1 BC1EQZ BC1NEZ handled in DecodeTypeImmed, should never come here.
+          UNREACHABLE();
+      }
+      break;
+    case COP1X:
+      break;
+    case SPECIAL:
+      switch (instr->FunctionFieldRaw()) {
+        case JR:
+        case JALR:
+          *next_pc = get_register(instr->RsValue());
+          *return_addr_reg = instr->RdValue();
+          break;
+        case SLL:
+          *alu_out = (int32_t)rt << sa;
+          break;
+        case DSLL:
+          *alu_out = rt << sa;
+          break;
+        case DSLL32:
+          *alu_out = rt << sa << 32;
+          break;
+        case SRL:
+          if (rs_reg == 0) {
+            // Regular logical right shift of a word by a fixed number of
+            // bits instruction. RS field is always equal to 0.
+            *alu_out = (uint32_t)rt_u >> sa;
+          } else {
+            // Logical right-rotate of a word by a fixed number of bits. This
+            // is special case of SRL instruction, added in MIPS32 Release 2.
+            // RS field is equal to 00001.
+            *alu_out = ((uint32_t)rt_u >> sa) | ((uint32_t)rt_u << (32 - sa));
+          }
+          break;
+        case DSRL:
+          *alu_out = rt_u >> sa;
+          break;
+        case DSRL32:
+          *alu_out = rt_u >> sa >> 32;
+          break;
+        case SRA:
+          *alu_out = (int32_t)rt >> sa;
+          break;
+        case DSRA:
+          *alu_out = rt >> sa;
+          break;
+        case DSRA32:
+          *alu_out = rt >> sa >> 32;
+          break;
+        case SLLV:
+          *alu_out = (int32_t)rt << rs;
+          break;
+        case DSLLV:
+          *alu_out = rt << rs;
+          break;
+        case SRLV:
+          if (sa == 0) {
+            // Regular logical right-shift of a word by a variable number of
+            // bits instruction. SA field is always equal to 0.
+            *alu_out = (uint32_t)rt_u >> rs;
+          } else {
+            // Logical right-rotate of a word by a variable number of bits.
+            // This is special case of SRLV instruction, added in MIPS32
+            // Release 2. SA field is equal to 00001.
+            *alu_out =
+                ((uint32_t)rt_u >> rs_u) | ((uint32_t)rt_u << (32 - rs_u));
+          }
+          break;
+        case DSRLV:
+          if (sa == 0) {
+            // Regular logical right-shift of a word by a variable number of
+            // bits instruction. SA field is always equal to 0.
+            *alu_out = rt_u >> rs;
+          } else {
+            // Logical right-rotate of a word by a variable number of bits.
+            // This is special case of SRLV instruction, added in MIPS32
+            // Release 2. SA field is equal to 00001.
+            *alu_out = (rt_u >> rs_u) | (rt_u << (32 - rs_u));
+          }
+          break;
+        case SRAV:
+          *alu_out = (int32_t)rt >> rs;
+          break;
+        case DSRAV:
+          *alu_out = rt >> rs;
+          break;
+        case MFHI:  // MFHI == CLZ on R6.
+          if (kArchVariant != kMips64r6) {
+            DCHECK(instr->SaValue() == 0);
+            *alu_out = get_register(HI);
+          } else {
+            // MIPS spec: If no bits were set in GPR rs, the result written to
+            // GPR rd is 32.
+            // GCC __builtin_clz: If input is 0, the result is undefined.
+            DCHECK(instr->SaValue() == 1);
+            *alu_out =
+                rs_u == 0 ? 32 : CompilerIntrinsics::CountLeadingZeros(rs_u);
+          }
+          break;
+        case MFLO:
+          *alu_out = get_register(LO);
+          break;
+        case MULT:  // MULT == D_MUL_MUH.
+          // TODO(plind) - Unify MULT/DMULT with single set of 64-bit HI/Lo
+          // regs.
+          // TODO(plind) - make the 32-bit MULT ops conform to spec regarding
+          //   checking of 32-bit input values, and un-define operations of HW.
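+          // MULT multiplies the low 32 bits of rs and rt into a 64-bit
+          // signed product; DecodeTypeRegister later commits the two 32-bit
+          // halves to HI/LO (or to rd under MUL_OP/MUH_OP on r6).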
+ *i64hilo = static_cast<int64_t>((int32_t)rs) * + static_cast<int64_t>((int32_t)rt); + break; + case MULTU: + *u64hilo = static_cast<uint64_t>(rs_u) * static_cast<uint64_t>(rt_u); + break; + case DMULT: // DMULT == D_MUL_MUH. + if (kArchVariant != kMips64r6) { + *i128resultH = MultiplyHighSigned(rs, rt); + *i128resultL = rs * rt; + } else { + switch (instr->SaValue()) { + case MUL_OP: + *i128resultL = rs * rt; + break; + case MUH_OP: + *i128resultH = MultiplyHighSigned(rs, rt); + break; + default: + UNIMPLEMENTED_MIPS(); + break; + } + } + break; + case DMULTU: + UNIMPLEMENTED_MIPS(); + break; + case ADD: + case DADD: + if (HaveSameSign(rs, rt)) { + if (rs > 0) { + exceptions[kIntegerOverflow] = rs > (Registers::kMaxValue - rt); + } else if (rs < 0) { + exceptions[kIntegerUnderflow] = rs < (Registers::kMinValue - rt); + } + } + *alu_out = rs + rt; + break; + case ADDU: { + int32_t alu32_out = rs + rt; + // Sign-extend result of 32bit operation into 64bit register. + *alu_out = static_cast<int64_t>(alu32_out); + } + break; + case DADDU: + *alu_out = rs + rt; + break; + case SUB: + case DSUB: + if (!HaveSameSign(rs, rt)) { + if (rs > 0) { + exceptions[kIntegerOverflow] = rs > (Registers::kMaxValue + rt); + } else if (rs < 0) { + exceptions[kIntegerUnderflow] = rs < (Registers::kMinValue + rt); + } + } + *alu_out = rs - rt; + break; + case SUBU: { + int32_t alu32_out = rs - rt; + // Sign-extend result of 32bit operation into 64bit register. + *alu_out = static_cast<int64_t>(alu32_out); + } + break; + case DSUBU: + *alu_out = rs - rt; + break; + case AND: + *alu_out = rs & rt; + break; + case OR: + *alu_out = rs | rt; + break; + case XOR: + *alu_out = rs ^ rt; + break; + case NOR: + *alu_out = ~(rs | rt); + break; + case SLT: + *alu_out = rs < rt ? 1 : 0; + break; + case SLTU: + *alu_out = rs_u < rt_u ? 1 : 0; + break; + // Break and trap instructions. + case BREAK: + + *do_interrupt = true; + break; + case TGE: + *do_interrupt = rs >= rt; + break; + case TGEU: + *do_interrupt = rs_u >= rt_u; + break; + case TLT: + *do_interrupt = rs < rt; + break; + case TLTU: + *do_interrupt = rs_u < rt_u; + break; + case TEQ: + *do_interrupt = rs == rt; + break; + case TNE: + *do_interrupt = rs != rt; + break; + case MOVN: + case MOVZ: + case MOVCI: + // No action taken on decode. + break; + case DIV: + case DIVU: + case DDIV: + case DDIVU: + // div and divu never raise exceptions. + break; + default: + UNREACHABLE(); + } + break; + case SPECIAL2: + switch (instr->FunctionFieldRaw()) { + case MUL: + // Only the lower 32 bits are kept. + *alu_out = (int32_t)rs_u * (int32_t)rt_u; + break; + case CLZ: + // MIPS32 spec: If no bits were set in GPR rs, the result written to + // GPR rd is 32. + // GCC __builtin_clz: If input is 0, the result is undefined. + *alu_out = + rs_u == 0 ? 32 : CompilerIntrinsics::CountLeadingZeros(rs_u); + break; + default: + UNREACHABLE(); + } + break; + case SPECIAL3: + switch (instr->FunctionFieldRaw()) { + case INS: { // Mips32r2 instruction. + // Interpret rd field as 5-bit msb of insert. + uint16_t msb = rd_reg; + // Interpret sa field as 5-bit lsb of insert. + uint16_t lsb = sa; + uint16_t size = msb - lsb + 1; + uint32_t mask = (1 << size) - 1; + *alu_out = (rt_u & ~(mask << lsb)) | ((rs_u & mask) << lsb); + break; + } + case EXT: { // Mips32r2 instruction. + // Interpret rd field as 5-bit msb of extract. + uint16_t msb = rd_reg; + // Interpret sa field as 5-bit lsb of extract. 
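+          // For example, msb == 7 and lsb == 0 give size = 8 and mask = 0xff
+          // below, extracting the low byte of rs.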
+ uint16_t lsb = sa; + uint16_t size = msb + 1; + uint32_t mask = (1 << size) - 1; + *alu_out = (rs_u & (mask << lsb)) >> lsb; + break; + } + default: + UNREACHABLE(); + } + break; + default: + UNREACHABLE(); + } +} + + +void Simulator::DecodeTypeRegister(Instruction* instr) { + // Instruction fields. + const Opcode op = instr->OpcodeFieldRaw(); + const int64_t rs_reg = instr->RsValue(); + const int64_t rs = get_register(rs_reg); + const uint64_t rs_u = static_cast<uint32_t>(rs); + const int64_t rt_reg = instr->RtValue(); + const int64_t rt = get_register(rt_reg); + const uint64_t rt_u = static_cast<uint32_t>(rt); + const int64_t rd_reg = instr->RdValue(); + + const int32_t fr_reg = instr->FrValue(); + const int32_t fs_reg = instr->FsValue(); + const int32_t ft_reg = instr->FtValue(); + const int64_t fd_reg = instr->FdValue(); + int64_t i64hilo = 0; + uint64_t u64hilo = 0; + + // ALU output. + // It should not be used as is. Instructions using it should always + // initialize it first. + int64_t alu_out = 0x12345678; + + // For break and trap instructions. + bool do_interrupt = false; + + // For jr and jalr. + // Get current pc. + int64_t current_pc = get_pc(); + // Next pc + int64_t next_pc = 0; + int64_t return_addr_reg = 31; + + int64_t i128resultH; + int64_t i128resultL; + + // Set up the variables if needed before executing the instruction. + ConfigureTypeRegister(instr, + &alu_out, + &i64hilo, + &u64hilo, + &next_pc, + &return_addr_reg, + &do_interrupt, + &i128resultH, + &i128resultL); + + // ---------- Raise exceptions triggered. + SignalExceptions(); + + // ---------- Execution. + switch (op) { + case COP1: + switch (instr->RsFieldRaw()) { + case BC1: // Branch on coprocessor condition. + case BC1EQZ: + case BC1NEZ: + UNREACHABLE(); + break; + case CFC1: + set_register(rt_reg, alu_out); + break; + case MFC1: + case DMFC1: + case MFHC1: + set_register(rt_reg, alu_out); + break; + case CTC1: + // At the moment only FCSR is supported. + DCHECK(fs_reg == kFCSRRegister); + FCSR_ = registers_[rt_reg]; + break; + case MTC1: + // Hardware writes upper 32-bits to zero on mtc1. + set_fpu_register_hi_word(fs_reg, 0); + set_fpu_register_word(fs_reg, registers_[rt_reg]); + break; + case DMTC1: + set_fpu_register(fs_reg, registers_[rt_reg]); + break; + case MTHC1: + set_fpu_register_hi_word(fs_reg, registers_[rt_reg]); + break; + case S: + float f; + switch (instr->FunctionFieldRaw()) { + case CVT_D_S: + f = get_fpu_register_float(fs_reg); + set_fpu_register_double(fd_reg, static_cast<double>(f)); + break; + default: + // CVT_W_S CVT_L_S TRUNC_W_S ROUND_W_S ROUND_L_S FLOOR_W_S FLOOR_L_S + // CEIL_W_S CEIL_L_S CVT_PS_S are unimplemented. 
+ UNREACHABLE(); + } + break; + case D: + double ft, fs; + uint32_t cc, fcsr_cc; + int64_t i64; + fs = get_fpu_register_double(fs_reg); + ft = get_fpu_register_double(ft_reg); + cc = instr->FCccValue(); + fcsr_cc = get_fcsr_condition_bit(cc); + switch (instr->FunctionFieldRaw()) { + case ADD_D: + set_fpu_register_double(fd_reg, fs + ft); + break; + case SUB_D: + set_fpu_register_double(fd_reg, fs - ft); + break; + case MUL_D: + set_fpu_register_double(fd_reg, fs * ft); + break; + case DIV_D: + set_fpu_register_double(fd_reg, fs / ft); + break; + case ABS_D: + set_fpu_register_double(fd_reg, fabs(fs)); + break; + case MOV_D: + set_fpu_register_double(fd_reg, fs); + break; + case NEG_D: + set_fpu_register_double(fd_reg, -fs); + break; + case SQRT_D: + set_fpu_register_double(fd_reg, sqrt(fs)); + break; + case C_UN_D: + set_fcsr_bit(fcsr_cc, std::isnan(fs) || std::isnan(ft)); + break; + case C_EQ_D: + set_fcsr_bit(fcsr_cc, (fs == ft)); + break; + case C_UEQ_D: + set_fcsr_bit(fcsr_cc, + (fs == ft) || (std::isnan(fs) || std::isnan(ft))); + break; + case C_OLT_D: + set_fcsr_bit(fcsr_cc, (fs < ft)); + break; + case C_ULT_D: + set_fcsr_bit(fcsr_cc, + (fs < ft) || (std::isnan(fs) || std::isnan(ft))); + break; + case C_OLE_D: + set_fcsr_bit(fcsr_cc, (fs <= ft)); + break; + case C_ULE_D: + set_fcsr_bit(fcsr_cc, + (fs <= ft) || (std::isnan(fs) || std::isnan(ft))); + break; + case CVT_W_D: // Convert double to word. + // Rounding modes are not yet supported. + DCHECK((FCSR_ & 3) == 0); + // In rounding mode 0 it should behave like ROUND. + // No break. + case ROUND_W_D: // Round double to word (round half to even). + { + double rounded = std::floor(fs + 0.5); + int32_t result = static_cast<int32_t>(rounded); + if ((result & 1) != 0 && result - fs == 0.5) { + // If the number is halfway between two integers, + // round to the even one. + result--; + } + set_fpu_register_word(fd_reg, result); + if (set_fcsr_round_error(fs, rounded)) { + set_fpu_register(fd_reg, kFPUInvalidResult); + } + } + break; + case TRUNC_W_D: // Truncate double to word (round towards 0). + { + double rounded = trunc(fs); + int32_t result = static_cast<int32_t>(rounded); + set_fpu_register_word(fd_reg, result); + if (set_fcsr_round_error(fs, rounded)) { + set_fpu_register(fd_reg, kFPUInvalidResult); + } + } + break; + case FLOOR_W_D: // Round double to word towards negative infinity. + { + double rounded = std::floor(fs); + int32_t result = static_cast<int32_t>(rounded); + set_fpu_register_word(fd_reg, result); + if (set_fcsr_round_error(fs, rounded)) { + set_fpu_register(fd_reg, kFPUInvalidResult); + } + } + break; + case CEIL_W_D: // Round double to word towards positive infinity. + { + double rounded = std::ceil(fs); + int32_t result = static_cast<int32_t>(rounded); + set_fpu_register_word(fd_reg, result); + if (set_fcsr_round_error(fs, rounded)) { + set_fpu_register(fd_reg, kFPUInvalidResult); + } + } + break; + case CVT_S_D: // Convert double to float (single). + set_fpu_register_float(fd_reg, static_cast<float>(fs)); + break; + case CVT_L_D: // Mips64r2: Truncate double to 64-bit long-word. + // Rounding modes are not yet supported. + DCHECK((FCSR_ & 3) == 0); + // In rounding mode 0 it should behave like ROUND. + // No break. + case ROUND_L_D: { // Mips64r2 instruction. + // check error cases + double rounded = fs > 0 ? 
floor(fs + 0.5) : ceil(fs - 0.5); + int64_t result = static_cast<int64_t>(rounded); + set_fpu_register(fd_reg, result); + if (set_fcsr_round64_error(fs, rounded)) { + set_fpu_register(fd_reg, kFPU64InvalidResult); + } + break; + } + case TRUNC_L_D: { // Mips64r2 instruction. + double rounded = trunc(fs); + int64_t result = static_cast<int64_t>(rounded); + set_fpu_register(fd_reg, result); + if (set_fcsr_round64_error(fs, rounded)) { + set_fpu_register(fd_reg, kFPU64InvalidResult); + } + break; + } + case FLOOR_L_D: { // Mips64r2 instruction. + double rounded = floor(fs); + int64_t result = static_cast<int64_t>(rounded); + set_fpu_register(fd_reg, result); + if (set_fcsr_round64_error(fs, rounded)) { + set_fpu_register(fd_reg, kFPU64InvalidResult); + } + break; + } + case CEIL_L_D: { // Mips64r2 instruction. + double rounded = ceil(fs); + int64_t result = static_cast<int64_t>(rounded); + set_fpu_register(fd_reg, result); + if (set_fcsr_round64_error(fs, rounded)) { + set_fpu_register(fd_reg, kFPU64InvalidResult); + } + break; + } + case C_F_D: + UNIMPLEMENTED_MIPS(); + break; + default: + UNREACHABLE(); + } + break; + case W: + switch (instr->FunctionFieldRaw()) { + case CVT_S_W: // Convert word to float (single). + alu_out = get_fpu_register_signed_word(fs_reg); + set_fpu_register_float(fd_reg, static_cast<float>(alu_out)); + break; + case CVT_D_W: // Convert word to double. + alu_out = get_fpu_register_signed_word(fs_reg); + set_fpu_register_double(fd_reg, static_cast<double>(alu_out)); + break; + default: // Mips64r6 CMP.S instructions unimplemented. + UNREACHABLE(); + } + break; + case L: + fs = get_fpu_register_double(fs_reg); + ft = get_fpu_register_double(ft_reg); + switch (instr->FunctionFieldRaw()) { + case CVT_D_L: // Mips32r2 instruction. + i64 = get_fpu_register(fs_reg); + set_fpu_register_double(fd_reg, static_cast<double>(i64)); + break; + case CVT_S_L: + UNIMPLEMENTED_MIPS(); + break; + case CMP_AF: // Mips64r6 CMP.D instructions. 
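+            // (The r6 CMP.cond.fmt family below writes an all-ones (-1) or
+            // all-zeros (0) mask into fd instead of setting an FCSR
+            // condition bit.)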
+ UNIMPLEMENTED_MIPS(); + break; + case CMP_UN: + if (std::isnan(fs) || std::isnan(ft)) { + set_fpu_register(fd_reg, -1); + } else { + set_fpu_register(fd_reg, 0); + } + break; + case CMP_EQ: + if (fs == ft) { + set_fpu_register(fd_reg, -1); + } else { + set_fpu_register(fd_reg, 0); + } + break; + case CMP_UEQ: + if ((fs == ft) || (std::isnan(fs) || std::isnan(ft))) { + set_fpu_register(fd_reg, -1); + } else { + set_fpu_register(fd_reg, 0); + } + break; + case CMP_LT: + if (fs < ft) { + set_fpu_register(fd_reg, -1); + } else { + set_fpu_register(fd_reg, 0); + } + break; + case CMP_ULT: + if ((fs < ft) || (std::isnan(fs) || std::isnan(ft))) { + set_fpu_register(fd_reg, -1); + } else { + set_fpu_register(fd_reg, 0); + } + break; + case CMP_LE: + if (fs <= ft) { + set_fpu_register(fd_reg, -1); + } else { + set_fpu_register(fd_reg, 0); + } + break; + case CMP_ULE: + if ((fs <= ft) || (std::isnan(fs) || std::isnan(ft))) { + set_fpu_register(fd_reg, -1); + } else { + set_fpu_register(fd_reg, 0); + } + break; + default: // CMP_OR CMP_UNE CMP_NE UNIMPLEMENTED + UNREACHABLE(); + } + break; + default: + UNREACHABLE(); + } + break; + case COP1X: + switch (instr->FunctionFieldRaw()) { + case MADD_D: + double fr, ft, fs; + fr = get_fpu_register_double(fr_reg); + fs = get_fpu_register_double(fs_reg); + ft = get_fpu_register_double(ft_reg); + set_fpu_register_double(fd_reg, fs * ft + fr); + break; + default: + UNREACHABLE(); + } + break; + case SPECIAL: + switch (instr->FunctionFieldRaw()) { + case JR: { + Instruction* branch_delay_instr = reinterpret_cast<Instruction*>( + current_pc+Instruction::kInstrSize); + BranchDelayInstructionDecode(branch_delay_instr); + set_pc(next_pc); + pc_modified_ = true; + break; + } + case JALR: { + Instruction* branch_delay_instr = reinterpret_cast<Instruction*>( + current_pc+Instruction::kInstrSize); + BranchDelayInstructionDecode(branch_delay_instr); + set_register(return_addr_reg, + current_pc + 2 * Instruction::kInstrSize); + set_pc(next_pc); + pc_modified_ = true; + break; + } + // Instructions using HI and LO registers. + case MULT: + if (kArchVariant != kMips64r6) { + set_register(LO, static_cast<int32_t>(i64hilo & 0xffffffff)); + set_register(HI, static_cast<int32_t>(i64hilo >> 32)); + } else { + switch (instr->SaValue()) { + case MUL_OP: + set_register(rd_reg, + static_cast<int32_t>(i64hilo & 0xffffffff)); + break; + case MUH_OP: + set_register(rd_reg, static_cast<int32_t>(i64hilo >> 32)); + break; + default: + UNIMPLEMENTED_MIPS(); + break; + } + } + break; + case MULTU: + set_register(LO, static_cast<int32_t>(u64hilo & 0xffffffff)); + set_register(HI, static_cast<int32_t>(u64hilo >> 32)); + break; + case DMULT: // DMULT == D_MUL_MUH. + if (kArchVariant != kMips64r6) { + set_register(LO, static_cast<int64_t>(i128resultL)); + set_register(HI, static_cast<int64_t>(i128resultH)); + } else { + switch (instr->SaValue()) { + case MUL_OP: + set_register(rd_reg, static_cast<int64_t>(i128resultL)); + break; + case MUH_OP: + set_register(rd_reg, static_cast<int64_t>(i128resultH)); + break; + default: + UNIMPLEMENTED_MIPS(); + break; + } + } + break; + case DMULTU: + UNIMPLEMENTED_MIPS(); + break; + case DSLL: + set_register(rd_reg, alu_out); + break; + case DIV: + case DDIV: + switch (kArchVariant) { + case kMips64r2: + // Divide by zero and overflow was not checked in the + // configuration step - div and divu do not raise exceptions. On + // division by 0 the result will be UNPREDICTABLE. On overflow + // (INT_MIN/-1), return INT_MIN which is what the hardware does. 
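+              // (A host-side INT_MIN / -1 would trap, e.g. with SIGFPE on
+              // x86, so the explicit check below also protects the simulator
+              // process itself.)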
+              if (rs == INT_MIN && rt == -1) {
+                set_register(LO, INT_MIN);
+                set_register(HI, 0);
+              } else if (rt != 0) {
+                set_register(LO, rs / rt);
+                set_register(HI, rs % rt);
+              }
+              break;
+            case kMips64r6:
+              switch (instr->SaValue()) {
+                case DIV_OP:
+                  if (rs == INT_MIN && rt == -1) {
+                    set_register(rd_reg, INT_MIN);
+                  } else if (rt != 0) {
+                    set_register(rd_reg, rs / rt);
+                  }
+                  break;
+                case MOD_OP:
+                  if (rs == INT_MIN && rt == -1) {
+                    set_register(rd_reg, 0);
+                  } else if (rt != 0) {
+                    set_register(rd_reg, rs % rt);
+                  }
+                  break;
+                default:
+                  UNIMPLEMENTED_MIPS();
+                  break;
+              }
+              break;
+            default:
+              break;
+          }
+          break;
+        case DIVU:
+          if (rt_u != 0) {
+            set_register(LO, rs_u / rt_u);
+            set_register(HI, rs_u % rt_u);
+          }
+          break;
+        // Break and trap instructions.
+        case BREAK:
+        case TGE:
+        case TGEU:
+        case TLT:
+        case TLTU:
+        case TEQ:
+        case TNE:
+          if (do_interrupt) {
+            SoftwareInterrupt(instr);
+          }
+          break;
+        // Conditional moves.
+        case MOVN:
+          if (rt) {
+            set_register(rd_reg, rs);
+            TraceRegWr(rs);
+          }
+          break;
+        case MOVCI: {
+          uint32_t cc = instr->FBccValue();
+          uint32_t fcsr_cc = get_fcsr_condition_bit(cc);
+          if (instr->Bit(16)) {  // Read Tf bit.
+            if (test_fcsr_bit(fcsr_cc)) set_register(rd_reg, rs);
+          } else {
+            if (!test_fcsr_bit(fcsr_cc)) set_register(rd_reg, rs);
+          }
+          break;
+        }
+        case MOVZ:
+          if (!rt) {
+            set_register(rd_reg, rs);
+            TraceRegWr(rs);
+          }
+          break;
+        default:  // For other special opcodes we do the default operation.
+          set_register(rd_reg, alu_out);
+          TraceRegWr(alu_out);
+      }
+      break;
+    case SPECIAL2:
+      switch (instr->FunctionFieldRaw()) {
+        case MUL:
+          set_register(rd_reg, alu_out);
+          TraceRegWr(alu_out);
+          // HI and LO are UNPREDICTABLE after the operation.
+          set_register(LO, Unpredictable);
+          set_register(HI, Unpredictable);
+          break;
+        default:  // For other special2 opcodes we do the default operation.
+          set_register(rd_reg, alu_out);
+      }
+      break;
+    case SPECIAL3:
+      switch (instr->FunctionFieldRaw()) {
+        case INS:
+          // Ins instr leaves result in Rt, rather than Rd.
+          set_register(rt_reg, alu_out);
+          TraceRegWr(alu_out);
+          break;
+        case EXT:
+          // Ext instr leaves result in Rt, rather than Rd.
+          set_register(rt_reg, alu_out);
+          TraceRegWr(alu_out);
+          break;
+        default:
+          UNREACHABLE();
+      }
+      break;
+    // Unimplemented opcodes raised an error in the configuration step before,
+    // so we can use the default here to set the destination register in common
+    // cases.
+    default:
+      set_register(rd_reg, alu_out);
+      TraceRegWr(alu_out);
+  }
+}
+
+
+// Type 2: instructions using a 16-bit immediate. (e.g. addi, beq).
+void Simulator::DecodeTypeImmediate(Instruction* instr) {
+  // Instruction fields.
+  Opcode op = instr->OpcodeFieldRaw();
+  int64_t rs = get_register(instr->RsValue());
+  uint64_t rs_u = static_cast<uint64_t>(rs);
+  int64_t rt_reg = instr->RtValue();  // Destination register.
+  int64_t rt = get_register(rt_reg);
+  int16_t imm16 = instr->Imm16Value();
+
+  int32_t ft_reg = instr->FtValue();  // Destination register.
+  int64_t ft = get_fpu_register(ft_reg);
+
+  // Zero extended immediate.
+  uint32_t oe_imm16 = 0xffff & imm16;
+  // Sign extended immediate.
+  int32_t se_imm16 = imm16;
+
+  // Get current pc.
+  int64_t current_pc = get_pc();
+  // Next pc.
+  int64_t next_pc = bad_ra;
+
+  // Used for conditional branch instructions.
+  bool do_branch = false;
+  bool execute_branch_delay_instruction = false;
+
+  // Used for arithmetic instructions.
+  int64_t alu_out = 0;
+  // Floating point.
+ double fp_out = 0.0; + uint32_t cc, cc_value, fcsr_cc; + + // Used for memory instructions. + int64_t addr = 0x0; + // Value to be written in memory. + uint64_t mem_value = 0x0; + // Alignment for 32-bit integers used in LWL, LWR, etc. + const int kInt32AlignmentMask = sizeof(uint32_t) - 1; + + // ---------- Configuration (and execution for REGIMM). + switch (op) { + // ------------- COP1. Coprocessor instructions. + case COP1: + switch (instr->RsFieldRaw()) { + case BC1: // Branch on coprocessor condition. + cc = instr->FBccValue(); + fcsr_cc = get_fcsr_condition_bit(cc); + cc_value = test_fcsr_bit(fcsr_cc); + do_branch = (instr->FBtrueValue()) ? cc_value : !cc_value; + execute_branch_delay_instruction = true; + // Set next_pc. + if (do_branch) { + next_pc = current_pc + (imm16 << 2) + Instruction::kInstrSize; + } else { + next_pc = current_pc + kBranchReturnOffset; + } + break; + case BC1EQZ: + do_branch = (ft & 0x1) ? false : true; + execute_branch_delay_instruction = true; + // Set next_pc. + if (do_branch) { + next_pc = current_pc + (imm16 << 2) + Instruction::kInstrSize; + } else { + next_pc = current_pc + kBranchReturnOffset; + } + break; + case BC1NEZ: + do_branch = (ft & 0x1) ? true : false; + execute_branch_delay_instruction = true; + // Set next_pc. + if (do_branch) { + next_pc = current_pc + (imm16 << 2) + Instruction::kInstrSize; + } else { + next_pc = current_pc + kBranchReturnOffset; + } + break; + default: + UNREACHABLE(); + } + break; + // ------------- REGIMM class. + case REGIMM: + switch (instr->RtFieldRaw()) { + case BLTZ: + do_branch = (rs < 0); + break; + case BLTZAL: + do_branch = rs < 0; + break; + case BGEZ: + do_branch = rs >= 0; + break; + case BGEZAL: + do_branch = rs >= 0; + break; + default: + UNREACHABLE(); + } + switch (instr->RtFieldRaw()) { + case BLTZ: + case BLTZAL: + case BGEZ: + case BGEZAL: + // Branch instructions common part. + execute_branch_delay_instruction = true; + // Set next_pc. + if (do_branch) { + next_pc = current_pc + (imm16 << 2) + Instruction::kInstrSize; + if (instr->IsLinkingInstruction()) { + set_register(31, current_pc + kBranchReturnOffset); + } + } else { + next_pc = current_pc + kBranchReturnOffset; + } + default: + break; + } + break; // case REGIMM. + // ------------- Branch instructions. + // When comparing to zero, the encoding of rt field is always 0, so we don't + // need to replace rt with zero. + case BEQ: + do_branch = (rs == rt); + break; + case BNE: + do_branch = rs != rt; + break; + case BLEZ: + do_branch = rs <= 0; + break; + case BGTZ: + do_branch = rs > 0; + break; + // ------------- Arithmetic instructions. + case ADDI: + case DADDI: + if (HaveSameSign(rs, se_imm16)) { + if (rs > 0) { + exceptions[kIntegerOverflow] = rs > (Registers::kMaxValue - se_imm16); + } else if (rs < 0) { + exceptions[kIntegerUnderflow] = + rs < (Registers::kMinValue - se_imm16); + } + } + alu_out = rs + se_imm16; + break; + case ADDIU: { + int32_t alu32_out = rs + se_imm16; + // Sign-extend result of 32bit operation into 64bit register. + alu_out = static_cast<int64_t>(alu32_out); + } + break; + case DADDIU: + alu_out = rs + se_imm16; + break; + case SLTI: + alu_out = (rs < se_imm16) ? 1 : 0; + break; + case SLTIU: + alu_out = (rs_u < static_cast<uint32_t>(se_imm16)) ? 
1 : 0; + break; + case ANDI: + alu_out = rs & oe_imm16; + break; + case ORI: + alu_out = rs | oe_imm16; + break; + case XORI: + alu_out = rs ^ oe_imm16; + break; + case LUI: { + int32_t alu32_out = (oe_imm16 << 16); + // Sign-extend result of 32bit operation into 64bit register. + alu_out = static_cast<int64_t>(alu32_out); + } + break; + // ------------- Memory instructions. + case LB: + addr = rs + se_imm16; + alu_out = ReadB(addr); + break; + case LH: + addr = rs + se_imm16; + alu_out = ReadH(addr, instr); + break; + case LWL: { + // al_offset is offset of the effective address within an aligned word. + uint8_t al_offset = (rs + se_imm16) & kInt32AlignmentMask; + uint8_t byte_shift = kInt32AlignmentMask - al_offset; + uint32_t mask = (1 << byte_shift * 8) - 1; + addr = rs + se_imm16 - al_offset; + alu_out = ReadW(addr, instr); + alu_out <<= byte_shift * 8; + alu_out |= rt & mask; + break; + } + case LW: + addr = rs + se_imm16; + alu_out = ReadW(addr, instr); + break; + case LWU: + addr = rs + se_imm16; + alu_out = ReadWU(addr, instr); + break; + case LD: + addr = rs + se_imm16; + alu_out = Read2W(addr, instr); + break; + case LBU: + addr = rs + se_imm16; + alu_out = ReadBU(addr); + break; + case LHU: + addr = rs + se_imm16; + alu_out = ReadHU(addr, instr); + break; + case LWR: { + // al_offset is offset of the effective address within an aligned word. + uint8_t al_offset = (rs + se_imm16) & kInt32AlignmentMask; + uint8_t byte_shift = kInt32AlignmentMask - al_offset; + uint32_t mask = al_offset ? (~0 << (byte_shift + 1) * 8) : 0; + addr = rs + se_imm16 - al_offset; + alu_out = ReadW(addr, instr); + alu_out = static_cast<uint32_t> (alu_out) >> al_offset * 8; + alu_out |= rt & mask; + break; + } + case SB: + addr = rs + se_imm16; + break; + case SH: + addr = rs + se_imm16; + break; + case SWL: { + uint8_t al_offset = (rs + se_imm16) & kInt32AlignmentMask; + uint8_t byte_shift = kInt32AlignmentMask - al_offset; + uint32_t mask = byte_shift ? (~0 << (al_offset + 1) * 8) : 0; + addr = rs + se_imm16 - al_offset; + mem_value = ReadW(addr, instr) & mask; + mem_value |= static_cast<uint32_t>(rt) >> byte_shift * 8; + break; + } + case SW: + case SD: + addr = rs + se_imm16; + break; + case SWR: { + uint8_t al_offset = (rs + se_imm16) & kInt32AlignmentMask; + uint32_t mask = (1 << al_offset * 8) - 1; + addr = rs + se_imm16 - al_offset; + mem_value = ReadW(addr, instr); + mem_value = (rt << al_offset * 8) | (mem_value & mask); + break; + } + case LWC1: + addr = rs + se_imm16; + alu_out = ReadW(addr, instr); + break; + case LDC1: + addr = rs + se_imm16; + fp_out = ReadD(addr, instr); + break; + case SWC1: + case SDC1: + addr = rs + se_imm16; + break; + default: + UNREACHABLE(); + } + + // ---------- Raise exceptions triggered. + SignalExceptions(); + + // ---------- Execution. + switch (op) { + // ------------- Branch instructions. + case BEQ: + case BNE: + case BLEZ: + case BGTZ: + // Branch instructions common part. + execute_branch_delay_instruction = true; + // Set next_pc. + if (do_branch) { + next_pc = current_pc + (imm16 << 2) + Instruction::kInstrSize; + if (instr->IsLinkingInstruction()) { + set_register(31, current_pc + 2* Instruction::kInstrSize); + } + } else { + next_pc = current_pc + 2 * Instruction::kInstrSize; + } + break; + // ------------- Arithmetic instructions. 
+    case ADDI:
+    case DADDI:
+    case ADDIU:
+    case DADDIU:
+    case SLTI:
+    case SLTIU:
+    case ANDI:
+    case ORI:
+    case XORI:
+    case LUI:
+      set_register(rt_reg, alu_out);
+      TraceRegWr(alu_out);
+      break;
+    // ------------- Memory instructions.
+    case LB:
+    case LH:
+    case LWL:
+    case LW:
+    case LWU:
+    case LD:
+    case LBU:
+    case LHU:
+    case LWR:
+      set_register(rt_reg, alu_out);
+      break;
+    case SB:
+      WriteB(addr, static_cast<int8_t>(rt));
+      break;
+    case SH:
+      WriteH(addr, static_cast<uint16_t>(rt), instr);
+      break;
+    case SWL:
+      WriteW(addr, mem_value, instr);
+      break;
+    case SW:
+      WriteW(addr, rt, instr);
+      break;
+    case SD:
+      Write2W(addr, rt, instr);
+      break;
+    case SWR:
+      WriteW(addr, mem_value, instr);
+      break;
+    case LWC1:
+      set_fpu_register(ft_reg, kFPUInvalidResult);  // Trash upper 32 bits.
+      set_fpu_register_word(ft_reg, static_cast<int32_t>(alu_out));
+      break;
+    case LDC1:
+      set_fpu_register_double(ft_reg, fp_out);
+      break;
+    case SWC1:
+      addr = rs + se_imm16;
+      WriteW(addr, get_fpu_register(ft_reg), instr);
+      break;
+    case SDC1:
+      addr = rs + se_imm16;
+      WriteD(addr, get_fpu_register_double(ft_reg), instr);
+      break;
+    default:
+      break;
+  }
+
+
+  if (execute_branch_delay_instruction) {
+    // Execute branch delay slot.
+    // We don't check for end_sim_pc. First it should not be met as the current
+    // pc is valid. Secondly a jump should always execute its branch delay slot.
+    Instruction* branch_delay_instr =
+        reinterpret_cast<Instruction*>(current_pc+Instruction::kInstrSize);
+    BranchDelayInstructionDecode(branch_delay_instr);
+  }
+
+  // If needed update pc after the branch delay execution.
+  if (next_pc != bad_ra) {
+    set_pc(next_pc);
+  }
+}
+
+
+// Type 3: instructions using a 26-bit immediate. (e.g. j, jal).
+void Simulator::DecodeTypeJump(Instruction* instr) {
+  // Get current pc. Use the full 64-bit value; truncating it to 32 bits
+  // would corrupt jump targets outside the low 4GB.
+  int64_t current_pc = get_pc();
+  // Get unchanged bits of pc.
+  int64_t pc_high_bits = current_pc & 0xfffffffff0000000;
+  // Next pc.
+  int64_t next_pc = pc_high_bits | (instr->Imm26Value() << 2);
+
+  // Execute branch delay slot.
+  // We don't check for end_sim_pc. First it should not be met as the current pc
+  // is valid. Secondly a jump should always execute its branch delay slot.
+  Instruction* branch_delay_instr =
+      reinterpret_cast<Instruction*>(current_pc + Instruction::kInstrSize);
+  BranchDelayInstructionDecode(branch_delay_instr);
+
+  // Update pc and ra if necessary.
+  // Do this after the branch delay execution.
+  if (instr->IsLinkingInstruction()) {
+    set_register(31, current_pc + 2 * Instruction::kInstrSize);
+  }
+  set_pc(next_pc);
+  pc_modified_ = true;
+}
+
+
+// Executes the current instruction.
+void Simulator::InstructionDecode(Instruction* instr) {
+  if (v8::internal::FLAG_check_icache) {
+    CheckICache(isolate_->simulator_i_cache(), instr);
+  }
+  pc_modified_ = false;
+
+  v8::internal::EmbeddedVector<char, 256> buffer;
+
+  if (::v8::internal::FLAG_trace_sim) {
+    SNPrintF(trace_buf_, " ");
+    disasm::NameConverter converter;
+    disasm::Disassembler dasm(converter);
+    // Use a reasonably large buffer.
+    dasm.InstructionDecode(buffer, reinterpret_cast<byte*>(instr));
+  }
+
+  switch (instr->InstructionType()) {
+    case Instruction::kRegisterType:
+      DecodeTypeRegister(instr);
+      break;
+    case Instruction::kImmediateType:
+      DecodeTypeImmediate(instr);
+      break;
+    case Instruction::kJumpType:
+      DecodeTypeJump(instr);
+      break;
+    default:
+      UNSUPPORTED();
+  }
+
+  if (::v8::internal::FLAG_trace_sim) {
+    PrintF("  0x%08lx  %-44s   %s\n", reinterpret_cast<intptr_t>(instr),
+           buffer.start(), trace_buf_.start());
+  }
+
+  if (!pc_modified_) {
+    set_register(pc, reinterpret_cast<int64_t>(instr) +
+                 Instruction::kInstrSize);
+  }
+}
+
+
+
+void Simulator::Execute() {
+  // Get the PC to simulate. Cannot use the accessor here as we need the
+  // raw PC value and not the one used as input to arithmetic instructions.
+  int64_t program_counter = get_pc();
+  if (::v8::internal::FLAG_stop_sim_at == 0) {
+    // Fast version of the dispatch loop without checking whether the simulator
+    // should be stopping at a particular executed instruction.
+    while (program_counter != end_sim_pc) {
+      Instruction* instr = reinterpret_cast<Instruction*>(program_counter);
+      icount_++;
+      InstructionDecode(instr);
+      program_counter = get_pc();
+    }
+  } else {
+    // FLAG_stop_sim_at is at the non-default value. Stop in the debugger when
+    // we reach the particular instruction count.
+    while (program_counter != end_sim_pc) {
+      Instruction* instr = reinterpret_cast<Instruction*>(program_counter);
+      icount_++;
+      if (icount_ == static_cast<int64_t>(::v8::internal::FLAG_stop_sim_at)) {
+        MipsDebugger dbg(this);
+        dbg.Debug();
+      } else {
+        InstructionDecode(instr);
+      }
+      program_counter = get_pc();
+    }
+  }
+}
+
+
+void Simulator::CallInternal(byte* entry) {
+  // Prepare to execute the code at entry.
+  set_register(pc, reinterpret_cast<int64_t>(entry));
+  // Put down marker for end of simulation. The simulator will stop simulation
+  // when the PC reaches this value. By saving the "end simulation" value into
+  // the ra register the simulation stops when returning to this call point.
+  set_register(ra, end_sim_pc);
+
+  // Remember the values of callee-saved registers.
+  int64_t s0_val = get_register(s0);
+  int64_t s1_val = get_register(s1);
+  int64_t s2_val = get_register(s2);
+  int64_t s3_val = get_register(s3);
+  int64_t s4_val = get_register(s4);
+  int64_t s5_val = get_register(s5);
+  int64_t s6_val = get_register(s6);
+  int64_t s7_val = get_register(s7);
+  int64_t gp_val = get_register(gp);
+  int64_t sp_val = get_register(sp);
+  int64_t fp_val = get_register(fp);
+
+  // Set up the callee-saved registers with a known value, to be able to check
+  // that they are preserved properly across JS execution.
+  int64_t callee_saved_value = icount_;
+  set_register(s0, callee_saved_value);
+  set_register(s1, callee_saved_value);
+  set_register(s2, callee_saved_value);
+  set_register(s3, callee_saved_value);
+  set_register(s4, callee_saved_value);
+  set_register(s5, callee_saved_value);
+  set_register(s6, callee_saved_value);
+  set_register(s7, callee_saved_value);
+  set_register(gp, callee_saved_value);
+  set_register(fp, callee_saved_value);
+
+  // Start the simulation.
+  Execute();
+
+  // Check that the callee-saved registers have been preserved.
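+  // (Any mismatch in the CHECK_EQs below means the simulated code clobbered
+  // a callee-saved register.)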
+  CHECK_EQ(callee_saved_value, get_register(s0));
+  CHECK_EQ(callee_saved_value, get_register(s1));
+  CHECK_EQ(callee_saved_value, get_register(s2));
+  CHECK_EQ(callee_saved_value, get_register(s3));
+  CHECK_EQ(callee_saved_value, get_register(s4));
+  CHECK_EQ(callee_saved_value, get_register(s5));
+  CHECK_EQ(callee_saved_value, get_register(s6));
+  CHECK_EQ(callee_saved_value, get_register(s7));
+  CHECK_EQ(callee_saved_value, get_register(gp));
+  CHECK_EQ(callee_saved_value, get_register(fp));
+
+  // Restore callee-saved registers with the original value.
+  set_register(s0, s0_val);
+  set_register(s1, s1_val);
+  set_register(s2, s2_val);
+  set_register(s3, s3_val);
+  set_register(s4, s4_val);
+  set_register(s5, s5_val);
+  set_register(s6, s6_val);
+  set_register(s7, s7_val);
+  set_register(gp, gp_val);
+  set_register(sp, sp_val);
+  set_register(fp, fp_val);
+}
+
+
+int64_t Simulator::Call(byte* entry, int argument_count, ...) {
+  const int kRegisterPassedArguments = (kMipsAbi == kN64) ? 8 : 4;
+  va_list parameters;
+  va_start(parameters, argument_count);
+  // Set up arguments.
+
+  // First four arguments passed in registers in both ABIs.
+  DCHECK(argument_count >= 4);
+  set_register(a0, va_arg(parameters, int64_t));
+  set_register(a1, va_arg(parameters, int64_t));
+  set_register(a2, va_arg(parameters, int64_t));
+  set_register(a3, va_arg(parameters, int64_t));
+
+  if (kMipsAbi == kN64) {
+    // Up to eight arguments passed in registers in N64 ABI.
+    // TODO(plind): N64 ABI calls these regs a4 - a7. Clarify this.
+    if (argument_count >= 5) set_register(a4, va_arg(parameters, int64_t));
+    if (argument_count >= 6) set_register(a5, va_arg(parameters, int64_t));
+    if (argument_count >= 7) set_register(a6, va_arg(parameters, int64_t));
+    if (argument_count >= 8) set_register(a7, va_arg(parameters, int64_t));
+  }
+
+  // Remaining arguments passed on stack.
+  int64_t original_stack = get_register(sp);
+  // Compute position of stack on entry to generated code.
+  int stack_args_count = (argument_count > kRegisterPassedArguments) ?
+                         (argument_count - kRegisterPassedArguments) : 0;
+  int stack_args_size = stack_args_count * sizeof(int64_t) + kCArgsSlotsSize;
+  int64_t entry_stack = original_stack - stack_args_size;
+
+  if (base::OS::ActivationFrameAlignment() != 0) {
+    entry_stack &= -base::OS::ActivationFrameAlignment();
+  }
+  // Store remaining arguments on stack, from low to high memory.
+  intptr_t* stack_argument = reinterpret_cast<intptr_t*>(entry_stack);
+  for (int i = kRegisterPassedArguments; i < argument_count; i++) {
+    int stack_index = i - kRegisterPassedArguments + kCArgSlotCount;
+    stack_argument[stack_index] = va_arg(parameters, int64_t);
+  }
+  va_end(parameters);
+  set_register(sp, entry_stack);
+
+  CallInternal(entry);
+
+  // Pop stack passed arguments.
+  CHECK_EQ(entry_stack, get_register(sp));
+  set_register(sp, original_stack);
+
+  int64_t result = get_register(v0);
+  return result;
+}
+
+
+double Simulator::CallFP(byte* entry, double d0, double d1) {
+  if (!IsMipsSoftFloatABI) {
+    const FPURegister fparg2 = (kMipsAbi == kN64) ? f13 : f14;
+    set_fpu_register_double(f12, d0);
+    set_fpu_register_double(fparg2, d1);
+  } else {
+    int buffer[2];
+    DCHECK(sizeof(buffer[0]) * 2 == sizeof(d0));
+    memcpy(buffer, &d0, sizeof(d0));
+    set_dw_register(a0, buffer);
+    memcpy(buffer, &d1, sizeof(d1));
+    set_dw_register(a2, buffer);
+  }
+  CallInternal(entry);
+  if (!IsMipsSoftFloatABI) {
+    return get_fpu_register_double(f0);
+  } else {
+    return get_double_from_register_pair(v0);
+  }
+}
+
+
+uintptr_t Simulator::PushAddress(uintptr_t address) {
+  int64_t new_sp = get_register(sp) - sizeof(uintptr_t);
+  uintptr_t* stack_slot = reinterpret_cast<uintptr_t*>(new_sp);
+  *stack_slot = address;
+  set_register(sp, new_sp);
+  return new_sp;
+}
+
+
+uintptr_t Simulator::PopAddress() {
+  int64_t current_sp = get_register(sp);
+  uintptr_t* stack_slot = reinterpret_cast<uintptr_t*>(current_sp);
+  uintptr_t address = *stack_slot;
+  set_register(sp, current_sp + sizeof(uintptr_t));
+  return address;
+}
+
+
+#undef UNSUPPORTED
+
+} }  // namespace v8::internal
+
+#endif  // USE_SIMULATOR
+
+#endif  // V8_TARGET_ARCH_MIPS64
diff --git a/deps/v8/src/mips64/simulator-mips64.h b/deps/v8/src/mips64/simulator-mips64.h
new file mode 100644
index 00000000000..6bad6324a09
--- /dev/null
+++ b/deps/v8/src/mips64/simulator-mips64.h
@@ -0,0 +1,479 @@
+// Copyright 2011 the V8 project authors. All rights reserved.
+// Use of this source code is governed by a BSD-style license that can be
+// found in the LICENSE file.
+
+
+// Declares a Simulator for MIPS instructions if we are not generating a native
+// MIPS binary. This Simulator allows us to run and debug MIPS code generation
+// on regular desktop machines.
+// V8 calls into generated code by "calling" the CALL_GENERATED_CODE macro,
+// which will start execution in the Simulator or forward to the real entry
+// on a MIPS HW platform.
+
+#ifndef V8_MIPS_SIMULATOR_MIPS_H_
+#define V8_MIPS_SIMULATOR_MIPS_H_
+
+#include "src/allocation.h"
+#include "src/mips64/constants-mips64.h"
+
+#if !defined(USE_SIMULATOR)
+// Running without a simulator on a native mips platform.
+
+namespace v8 {
+namespace internal {
+
+// When running without a simulator we call the entry directly.
+#define CALL_GENERATED_CODE(entry, p0, p1, p2, p3, p4) \
+  entry(p0, p1, p2, p3, p4)
+
+
+// Call the generated regexp code directly. The code at the entry address
+// should act as a function matching the type mips_regexp_matcher.
+// The fifth (or ninth) argument is a dummy that reserves the space used for
+// the return address added by the ExitFrame in native calls.
+#ifdef MIPS_ABI_N64
+typedef int (*mips_regexp_matcher)(String* input,
+                                   int64_t start_offset,
+                                   const byte* input_start,
+                                   const byte* input_end,
+                                   int* output,
+                                   int64_t output_size,
+                                   Address stack_base,
+                                   int64_t direct_call,
+                                   void* return_address,
+                                   Isolate* isolate);
+
+#define CALL_GENERATED_REGEXP_CODE(entry, p0, p1, p2, p3, p4, p5, p6, p7, p8) \
+  (FUNCTION_CAST<mips_regexp_matcher>(entry)( \
+      p0, p1, p2, p3, p4, p5, p6, p7, NULL, p8))
+
+#else  // O32 ABI.
+
+typedef int (*mips_regexp_matcher)(String* input,
+                                   int32_t start_offset,
+                                   const byte* input_start,
+                                   const byte* input_end,
+                                   void* return_address,
+                                   int* output,
+                                   int32_t output_size,
+                                   Address stack_base,
+                                   int32_t direct_call,
+                                   Isolate* isolate);
+
+#define CALL_GENERATED_REGEXP_CODE(entry, p0, p1, p2, p3, p4, p5, p6, p7, p8) \
+  (FUNCTION_CAST<mips_regexp_matcher>(entry)( \
+      p0, p1, p2, p3, NULL, p4, p5, p6, p7, p8))
+
+#endif  // MIPS_ABI_N64
+
+
+// The stack limit beyond which we will throw stack overflow errors in
+// generated code. Because generated code on mips uses the C stack, we
+// just use the C stack limit.
+class SimulatorStack : public v8::internal::AllStatic {
+ public:
+  static inline uintptr_t JsLimitFromCLimit(Isolate* isolate,
+                                            uintptr_t c_limit) {
+    return c_limit;
+  }
+
+  static inline uintptr_t RegisterCTryCatch(uintptr_t try_catch_address) {
+    return try_catch_address;
+  }
+
+  static inline void UnregisterCTryCatch() { }
+};
+
+} }  // namespace v8::internal
+
+// Calculates the stack limit beyond which we will throw stack overflow errors.
+// This macro must be called from a C++ method. It relies on being able to take
+// the address of "this" to get a value on the current execution stack and then
+// calculates the stack limit based on that value.
+// NOTE: The check for overflow is not safe as there is no guarantee that the
+// running thread has its stack in all memory up to address 0x00000000.
+#define GENERATED_CODE_STACK_LIMIT(limit) \
+  (reinterpret_cast<uintptr_t>(this) >= limit ? \
+      reinterpret_cast<uintptr_t>(this) - limit : 0)
+
+#else  // !defined(USE_SIMULATOR)
+// Running with a simulator.
+
+#include "src/assembler.h"
+#include "src/hashmap.h"
+
+namespace v8 {
+namespace internal {
+
+// -----------------------------------------------------------------------------
+// Utility functions
+
+class CachePage {
+ public:
+  static const int LINE_VALID = 0;
+  static const int LINE_INVALID = 1;
+
+  static const int kPageShift = 12;
+  static const int kPageSize = 1 << kPageShift;
+  static const int kPageMask = kPageSize - 1;
+  static const int kLineShift = 2;  // The cache line is only 4 bytes right now.
+  static const int kLineLength = 1 << kLineShift;
+  static const int kLineMask = kLineLength - 1;
+
+  CachePage() {
+    memset(&validity_map_, LINE_INVALID, sizeof(validity_map_));
+  }
+
+  char* ValidityByte(int offset) {
+    return &validity_map_[offset >> kLineShift];
+  }
+
+  char* CachedData(int offset) {
+    return &data_[offset];
+  }
+
+ private:
+  char data_[kPageSize];  // The cached data.
+  static const int kValidityMapSize = kPageSize >> kLineShift;
+  char validity_map_[kValidityMapSize];  // One byte per line.
+};
+
+class Simulator {
+ public:
+  friend class MipsDebugger;
+
+  // Registers are declared in order. See SMRL chapter 2.
+  enum Register {
+    no_reg = -1,
+    zero_reg = 0,
+    at,
+    v0, v1,
+    a0, a1, a2, a3, a4, a5, a6, a7,
+    t0, t1, t2, t3,
+    s0, s1, s2, s3, s4, s5, s6, s7,
+    t8, t9,
+    k0, k1,
+    gp,
+    sp,
+    s8,
+    ra,
+    // LO, HI, and pc.
+    LO,
+    HI,
+    pc,   // pc must be the last register.
+    kNumSimuRegisters,
+    // aliases
+    fp = s8
+  };
+
+  // Coprocessor registers.
+  // Generated code will always use doubles. So we will only use even registers.
+  enum FPURegister {
+    f0, f1, f2, f3, f4, f5, f6, f7, f8, f9, f10, f11,
+    f12, f13, f14, f15,   // f12 and f14 are arguments FPURegisters.
+    f16, f17, f18, f19, f20, f21, f22, f23, f24, f25,
+    f26, f27, f28, f29, f30, f31,
+    kNumFPURegisters
+  };
+
+  explicit Simulator(Isolate* isolate);
+  ~Simulator();
+
+  // The currently executing Simulator instance. Potentially there can be one
+  // for each native thread.
+  static Simulator* current(v8::internal::Isolate* isolate);
+
+  // Accessors for register state. Reading the pc value adheres to the MIPS
+  // architecture specification and is off by 8 from the currently executing
+  // instruction.
+  void set_register(int reg, int64_t value);
+  void set_register_word(int reg, int32_t value);
+  void set_dw_register(int dreg, const int* dbl);
+  int64_t get_register(int reg) const;
+  double get_double_from_register_pair(int reg);
+  // Same for FPURegisters.
+  void set_fpu_register(int fpureg, int64_t value);
+  void set_fpu_register_word(int fpureg, int32_t value);
+  void set_fpu_register_hi_word(int fpureg, int32_t value);
+  void set_fpu_register_float(int fpureg, float value);
+  void set_fpu_register_double(int fpureg, double value);
+  int64_t get_fpu_register(int fpureg) const;
+  int32_t get_fpu_register_word(int fpureg) const;
+  int32_t get_fpu_register_signed_word(int fpureg) const;
+  uint32_t get_fpu_register_hi_word(int fpureg) const;
+  float get_fpu_register_float(int fpureg) const;
+  double get_fpu_register_double(int fpureg) const;
+  void set_fcsr_bit(uint32_t cc, bool value);
+  bool test_fcsr_bit(uint32_t cc);
+  bool set_fcsr_round_error(double original, double rounded);
+  bool set_fcsr_round64_error(double original, double rounded);
+
+  // Special case of set_register and get_register to access the raw PC value.
+  void set_pc(int64_t value);
+  int64_t get_pc() const;
+
+  Address get_sp() {
+    return reinterpret_cast<Address>(static_cast<intptr_t>(get_register(sp)));
+  }
+
+  // Accessor to the internal simulator stack area.
+  uintptr_t StackLimit() const;
+
+  // Executes MIPS instructions until the PC reaches end_sim_pc.
+  void Execute();
+
+  // Call on program start.
+  static void Initialize(Isolate* isolate);
+
+  // V8 generally calls into generated JS code with 5 parameters and into
+  // generated RegExp code with 7 parameters. This is a convenience function,
+  // which sets up the simulator state and grabs the result on return.
+  int64_t Call(byte* entry, int argument_count, ...);
+  // Alternative: call a 2-argument double function.
+  double CallFP(byte* entry, double d0, double d1);
+
+  // Push an address onto the JS stack.
+  uintptr_t PushAddress(uintptr_t address);
+
+  // Pop an address from the JS stack.
+  uintptr_t PopAddress();
+
+  // Debugger input.
+  void set_last_debugger_input(char* input);
+  char* last_debugger_input() { return last_debugger_input_; }
+
+  // ICache checking.
+  static void FlushICache(v8::internal::HashMap* i_cache, void* start,
+                          size_t size);
+
+  // Returns true if pc register contains one of the 'special_values' defined
+  // below (bad_ra, end_sim_pc).
+  bool has_bad_pc() const;
+
+ private:
+  enum special_values {
+    // Known bad pc value to ensure that the simulator does not execute
+    // without being properly setup.
+    bad_ra = -1,
+    // A pc value used to signal the simulator to stop execution. Generally
+    // the ra is set to this value on transition from native C code to
+    // simulated execution, so that the simulator can "return" to the native
+    // C code.
+    end_sim_pc = -2,
+    // Unpredictable value.
+    Unpredictable = 0xbadbeaf
+  };
+
+  // Unsupported instructions use Format to print an error and stop execution.
+  void Format(Instruction* instr, const char* format);
+
+  // Read and write memory.
+  inline uint32_t ReadBU(int64_t addr);
+  inline int32_t ReadB(int64_t addr);
+  inline void WriteB(int64_t addr, uint8_t value);
+  inline void WriteB(int64_t addr, int8_t value);
+
+  inline uint16_t ReadHU(int64_t addr, Instruction* instr);
+  inline int16_t ReadH(int64_t addr, Instruction* instr);
+  // Note: Overloaded on the sign of the value.
+  inline void WriteH(int64_t addr, uint16_t value, Instruction* instr);
+  inline void WriteH(int64_t addr, int16_t value, Instruction* instr);
+
+  inline uint32_t ReadWU(int64_t addr, Instruction* instr);
+  inline int32_t ReadW(int64_t addr, Instruction* instr);
+  inline void WriteW(int64_t addr, int32_t value, Instruction* instr);
+  inline int64_t Read2W(int64_t addr, Instruction* instr);
+  inline void Write2W(int64_t addr, int64_t value, Instruction* instr);
+
+  inline double ReadD(int64_t addr, Instruction* instr);
+  inline void WriteD(int64_t addr, double value, Instruction* instr);
+
+  // Helper for debugging memory access.
+  inline void DieOrDebug();
+
+  // Helpers for data value tracing.
+  enum TraceType {
+    BYTE,
+    HALF,
+    WORD,
+    DWORD
+    // DFLOAT - Floats may have printing issues due to paired lwc1's
+  };
+
+  void TraceRegWr(int64_t value);
+  void TraceMemWr(int64_t addr, int64_t value, TraceType t);
+  void TraceMemRd(int64_t addr, int64_t value);
+
+  // Operations depending on endianness.
+  // Get Double Higher / Lower word.
+  inline int32_t GetDoubleHIW(double* addr);
+  inline int32_t GetDoubleLOW(double* addr);
+  // Set Double Higher / Lower word.
+  inline int32_t SetDoubleHIW(double* addr);
+  inline int32_t SetDoubleLOW(double* addr);
+
+  // Executing is handled based on the instruction type.
+  void DecodeTypeRegister(Instruction* instr);
+
+  // Helper function for DecodeTypeRegister.
+  void ConfigureTypeRegister(Instruction* instr,
+                             int64_t* alu_out,
+                             int64_t* i64hilo,
+                             uint64_t* u64hilo,
+                             int64_t* next_pc,
+                             int64_t* return_addr_reg,
+                             bool* do_interrupt,
+                             int64_t* result128H,
+                             int64_t* result128L);
+
+  void DecodeTypeImmediate(Instruction* instr);
+  void DecodeTypeJump(Instruction* instr);
+
+  // Used for breakpoints and traps.
+  void SoftwareInterrupt(Instruction* instr);
+
+  // Stop helper functions.
+  bool IsWatchpoint(uint64_t code);
+  void PrintWatchpoint(uint64_t code);
+  void HandleStop(uint64_t code, Instruction* instr);
+  bool IsStopInstruction(Instruction* instr);
+  bool IsEnabledStop(uint64_t code);
+  void EnableStop(uint64_t code);
+  void DisableStop(uint64_t code);
+  void IncreaseStopCounter(uint64_t code);
+  void PrintStopInfo(uint64_t code);
+
+
+  // Executes one instruction.
+  void InstructionDecode(Instruction* instr);
+  // Execute one instruction placed in a branch delay slot.
+  void BranchDelayInstructionDecode(Instruction* instr) {
+    if (instr->InstructionBits() == nopInstr) {
+      // Short-cut generic nop instructions. They are always valid and they
+      // never change the simulator state.
+      return;
+    }
+
+    if (instr->IsForbiddenInBranchDelay()) {
+      V8_Fatal(__FILE__, __LINE__,
+               "Error: Unexpected %i opcode in a branch delay slot.",
+               instr->OpcodeValue());
+    }
+    InstructionDecode(instr);
+  }
+
+  // ICache.
+  static void CheckICache(v8::internal::HashMap* i_cache, Instruction* instr);
+  static void FlushOnePage(v8::internal::HashMap* i_cache, intptr_t start,
+                           int size);
+  static CachePage* GetCachePage(v8::internal::HashMap* i_cache, void* page);
+
+  enum Exception {
+    none,
+    kIntegerOverflow,
+    kIntegerUnderflow,
+    kDivideByZero,
+    kNumExceptions
+  };
+  int16_t exceptions[kNumExceptions];
+
+  // Exceptions.
+  void SignalExceptions();
+
+  // Runtime call support.
+  static void* RedirectExternalReference(void* external_function,
+                                         ExternalReference::Type type);
+
+  // Handle arguments and return value for runtime FP functions.
+  void GetFpArgs(double* x, double* y, int32_t* z);
+  void SetFpResult(const double& result);
+
+  void CallInternal(byte* entry);
+
+  // Architecture state.
+  // Registers.
+  int64_t registers_[kNumSimuRegisters];
+  // Coprocessor Registers.
+  int64_t FPUregisters_[kNumFPURegisters];
+  // FPU control register.
+  uint32_t FCSR_;
+
+  // Simulator support.
+  // Allocate 1MB for stack.
+  size_t stack_size_;
+  char* stack_;
+  bool pc_modified_;
+  int64_t icount_;
+  int break_count_;
+  EmbeddedVector<char, 128> trace_buf_;
+
+  // Debugger input.
+  char* last_debugger_input_;
+
+  // Icache simulation.
+  v8::internal::HashMap* i_cache_;
+
+  v8::internal::Isolate* isolate_;
+
+  // Registered breakpoints.
+  Instruction* break_pc_;
+  Instr break_instr_;
+
+  // Stop is disabled if bit 31 is set.
+  static const uint32_t kStopDisabledBit = 1 << 31;
+
+  // A stop is enabled, meaning the simulator will stop when meeting the
+  // instruction, if bit 31 of watched_stops_[code].count is unset.
+  // The value watched_stops_[code].count & ~(1 << 31) indicates how many times
+  // the breakpoint was hit or gone through.
+  struct StopCountAndDesc {
+    uint32_t count;
+    char* desc;
+  };
+  StopCountAndDesc watched_stops_[kMaxStopCode + 1];
+};
+
+
+// When running with the simulator transition into simulated execution at this
+// point.
+#define CALL_GENERATED_CODE(entry, p0, p1, p2, p3, p4) \
+    reinterpret_cast<Object*>(Simulator::current(Isolate::Current())->Call( \
+        FUNCTION_ADDR(entry), 5, p0, p1, p2, p3, p4))
+
+#ifdef MIPS_ABI_N64
+#define CALL_GENERATED_REGEXP_CODE(entry, p0, p1, p2, p3, p4, p5, p6, p7, p8) \
+    Simulator::current(Isolate::Current())->Call( \
+        entry, 10, p0, p1, p2, p3, p4, p5, p6, p7, NULL, p8)
+#else  // Must be O32 ABI.
+#define CALL_GENERATED_REGEXP_CODE(entry, p0, p1, p2, p3, p4, p5, p6, p7, p8) \
+    Simulator::current(Isolate::Current())->Call( \
+        entry, 10, p0, p1, p2, p3, NULL, p4, p5, p6, p7, p8)
+#endif  // MIPS_ABI_N64
+
+
+// The simulator has its own stack. Thus it has a different stack limit from
+// the C-based native code. Setting the c_limit to indicate a very small
+// stack will not cause stack overflow errors, since the simulator ignores
+// the input. This is unlikely to be an issue in practice, though it might
+// cause testing trouble down the line.
+class SimulatorStack : public v8::internal::AllStatic { + public: + static inline uintptr_t JsLimitFromCLimit(Isolate* isolate, + uintptr_t c_limit) { + return Simulator::current(isolate)->StackLimit(); + } + + static inline uintptr_t RegisterCTryCatch(uintptr_t try_catch_address) { + Simulator* sim = Simulator::current(Isolate::Current()); + return sim->PushAddress(try_catch_address); + } + + static inline void UnregisterCTryCatch() { + Simulator::current(Isolate::Current())->PopAddress(); + } +}; + +} } // namespace v8::internal + +#endif // !defined(USE_SIMULATOR) +#endif // V8_MIPS_SIMULATOR_MIPS_H_ diff --git a/deps/v8/src/mips64/stub-cache-mips64.cc b/deps/v8/src/mips64/stub-cache-mips64.cc new file mode 100644 index 00000000000..fde21a9d121 --- /dev/null +++ b/deps/v8/src/mips64/stub-cache-mips64.cc @@ -0,0 +1,1191 @@ +// Copyright 2012 the V8 project authors. All rights reserved. +// Use of this source code is governed by a BSD-style license that can be +// found in the LICENSE file. + +#include "src/v8.h" + +#if V8_TARGET_ARCH_MIPS64 + +#include "src/codegen.h" +#include "src/ic-inl.h" +#include "src/stub-cache.h" + +namespace v8 { +namespace internal { + +#define __ ACCESS_MASM(masm) + + +static void ProbeTable(Isolate* isolate, + MacroAssembler* masm, + Code::Flags flags, + StubCache::Table table, + Register receiver, + Register name, + // Number of the cache entry, not scaled. + Register offset, + Register scratch, + Register scratch2, + Register offset_scratch) { + ExternalReference key_offset(isolate->stub_cache()->key_reference(table)); + ExternalReference value_offset(isolate->stub_cache()->value_reference(table)); + ExternalReference map_offset(isolate->stub_cache()->map_reference(table)); + + uint64_t key_off_addr = reinterpret_cast<uint64_t>(key_offset.address()); + uint64_t value_off_addr = reinterpret_cast<uint64_t>(value_offset.address()); + uint64_t map_off_addr = reinterpret_cast<uint64_t>(map_offset.address()); + + // Check the relative positions of the address fields. + DCHECK(value_off_addr > key_off_addr); + DCHECK((value_off_addr - key_off_addr) % 4 == 0); + DCHECK((value_off_addr - key_off_addr) < (256 * 4)); + DCHECK(map_off_addr > key_off_addr); + DCHECK((map_off_addr - key_off_addr) % 4 == 0); + DCHECK((map_off_addr - key_off_addr) < (256 * 4)); + + Label miss; + Register base_addr = scratch; + scratch = no_reg; + + // Multiply by 3 because there are 3 fields per entry (name, code, map). + __ dsll(offset_scratch, offset, 1); + __ Daddu(offset_scratch, offset_scratch, offset); + + // Calculate the base address of the entry. + __ li(base_addr, Operand(key_offset)); + __ dsll(at, offset_scratch, kPointerSizeLog2); + __ Daddu(base_addr, base_addr, at); + + // Check that the key in the entry matches the name. + __ ld(at, MemOperand(base_addr, 0)); + __ Branch(&miss, ne, name, Operand(at)); + + // Check the map matches. + __ ld(at, MemOperand(base_addr, map_off_addr - key_off_addr)); + __ ld(scratch2, FieldMemOperand(receiver, HeapObject::kMapOffset)); + __ Branch(&miss, ne, at, Operand(scratch2)); + + // Get the code entry from the cache. + Register code = scratch2; + scratch2 = no_reg; + __ ld(code, MemOperand(base_addr, value_off_addr - key_off_addr)); + + // Check that the flags match what we're looking for. 
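+  // base_addr is no longer needed at this point, so its register can be
+  // reused to hold the flags word.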
+  Register flags_reg = base_addr;
+  base_addr = no_reg;
+  __ lw(flags_reg, FieldMemOperand(code, Code::kFlagsOffset));
+  __ And(flags_reg, flags_reg, Operand(~Code::kFlagsNotUsedInLookup));
+  __ Branch(&miss, ne, flags_reg, Operand(flags));
+
+#ifdef DEBUG
+  if (FLAG_test_secondary_stub_cache && table == StubCache::kPrimary) {
+    __ jmp(&miss);
+  } else if (FLAG_test_primary_stub_cache && table == StubCache::kSecondary) {
+    __ jmp(&miss);
+  }
+#endif
+
+  // Jump to the first instruction in the code stub.
+  __ Daddu(at, code, Operand(Code::kHeaderSize - kHeapObjectTag));
+  __ Jump(at);
+
+  // Miss: fall through.
+  __ bind(&miss);
+}
+
+
+void PropertyHandlerCompiler::GenerateDictionaryNegativeLookup(
+    MacroAssembler* masm, Label* miss_label, Register receiver,
+    Handle<Name> name, Register scratch0, Register scratch1) {
+  DCHECK(name->IsUniqueName());
+  DCHECK(!receiver.is(scratch0));
+  Counters* counters = masm->isolate()->counters();
+  __ IncrementCounter(counters->negative_lookups(), 1, scratch0, scratch1);
+  __ IncrementCounter(counters->negative_lookups_miss(), 1, scratch0, scratch1);
+
+  Label done;
+
+  const int kInterceptorOrAccessCheckNeededMask =
+      (1 << Map::kHasNamedInterceptor) | (1 << Map::kIsAccessCheckNeeded);
+
+  // Bail out if the receiver has a named interceptor or requires access checks.
+  Register map = scratch1;
+  __ ld(map, FieldMemOperand(receiver, HeapObject::kMapOffset));
+  __ lbu(scratch0, FieldMemOperand(map, Map::kBitFieldOffset));
+  __ And(scratch0, scratch0, Operand(kInterceptorOrAccessCheckNeededMask));
+  __ Branch(miss_label, ne, scratch0, Operand(zero_reg));
+
+  // Check that receiver is a JSObject.
+  __ lbu(scratch0, FieldMemOperand(map, Map::kInstanceTypeOffset));
+  __ Branch(miss_label, lt, scratch0, Operand(FIRST_SPEC_OBJECT_TYPE));
+
+  // Load properties array.
+  Register properties = scratch0;
+  __ ld(properties, FieldMemOperand(receiver, JSObject::kPropertiesOffset));
+  // Check that the properties array is a dictionary.
+  __ ld(map, FieldMemOperand(properties, HeapObject::kMapOffset));
+  Register tmp = properties;
+  __ LoadRoot(tmp, Heap::kHashTableMapRootIndex);
+  __ Branch(miss_label, ne, map, Operand(tmp));
+
+  // Restore the temporarily used register.
+  __ ld(properties, FieldMemOperand(receiver, JSObject::kPropertiesOffset));
+
+
+  NameDictionaryLookupStub::GenerateNegativeLookup(masm,
+                                                   miss_label,
+                                                   &done,
+                                                   receiver,
+                                                   properties,
+                                                   name,
+                                                   scratch1);
+  __ bind(&done);
+  __ DecrementCounter(counters->negative_lookups_miss(), 1, scratch0, scratch1);
+}
+
+
+void StubCache::GenerateProbe(MacroAssembler* masm,
+                              Code::Flags flags,
+                              Register receiver,
+                              Register name,
+                              Register scratch,
+                              Register extra,
+                              Register extra2,
+                              Register extra3) {
+  Isolate* isolate = masm->isolate();
+  Label miss;
+
+  // Make sure that code is valid. The multiplying code relies on the
+  // entry size being 12.
+  // DCHECK(sizeof(Entry) == 12);
+  // DCHECK(sizeof(Entry) == 3 * kPointerSize);
+
+  // Make sure the flags do not name a specific type.
+  DCHECK(Code::ExtractTypeFromFlags(flags) == 0);
+
+  // Make sure that there are no register conflicts.
+  DCHECK(!scratch.is(receiver));
+  DCHECK(!scratch.is(name));
+  DCHECK(!extra.is(receiver));
+  DCHECK(!extra.is(name));
+  DCHECK(!extra.is(scratch));
+  DCHECK(!extra2.is(receiver));
+  DCHECK(!extra2.is(name));
+  DCHECK(!extra2.is(scratch));
+  DCHECK(!extra2.is(extra));
+
+  // Check register validity.
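+  // (All four scratch registers must be real, distinct registers; once that
+  // is validated below, the probe runs in two rounds, sketched here with
+  // the arithmetic spelled out:
+  //   primary   = ((hash + map) >> kCacheIndexShift
+  //                ^ (flags >> kCacheIndexShift)) & (kPrimaryTableSize - 1)
+  //   secondary = (primary - (name >> kCacheIndexShift)
+  //                + (flags >> kCacheIndexShift)) & (kSecondaryTableSize - 1)
+  // where |hash| is the name's hash field and |map| the receiver's map
+  // pointer.)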
+ DCHECK(!scratch.is(no_reg)); + DCHECK(!extra.is(no_reg)); + DCHECK(!extra2.is(no_reg)); + DCHECK(!extra3.is(no_reg)); + + Counters* counters = masm->isolate()->counters(); + __ IncrementCounter(counters->megamorphic_stub_cache_probes(), 1, + extra2, extra3); + + // Check that the receiver isn't a smi. + __ JumpIfSmi(receiver, &miss); + + // Get the map of the receiver and compute the hash. + __ ld(scratch, FieldMemOperand(name, Name::kHashFieldOffset)); + __ ld(at, FieldMemOperand(receiver, HeapObject::kMapOffset)); + __ Daddu(scratch, scratch, at); + uint64_t mask = kPrimaryTableSize - 1; + // We shift out the last two bits because they are not part of the hash and + // they are always 01 for maps. + __ dsrl(scratch, scratch, kCacheIndexShift); + __ Xor(scratch, scratch, Operand((flags >> kCacheIndexShift) & mask)); + __ And(scratch, scratch, Operand(mask)); + + // Probe the primary table. + ProbeTable(isolate, + masm, + flags, + kPrimary, + receiver, + name, + scratch, + extra, + extra2, + extra3); + + // Primary miss: Compute hash for secondary probe. + __ dsrl(at, name, kCacheIndexShift); + __ Dsubu(scratch, scratch, at); + uint64_t mask2 = kSecondaryTableSize - 1; + __ Daddu(scratch, scratch, Operand((flags >> kCacheIndexShift) & mask2)); + __ And(scratch, scratch, Operand(mask2)); + + // Probe the secondary table. + ProbeTable(isolate, + masm, + flags, + kSecondary, + receiver, + name, + scratch, + extra, + extra2, + extra3); + + // Cache miss: Fall-through and let caller handle the miss by + // entering the runtime system. + __ bind(&miss); + __ IncrementCounter(counters->megamorphic_stub_cache_misses(), 1, + extra2, extra3); +} + + +void NamedLoadHandlerCompiler::GenerateDirectLoadGlobalFunctionPrototype( + MacroAssembler* masm, int index, Register prototype, Label* miss) { + Isolate* isolate = masm->isolate(); + // Get the global function with the given index. + Handle<JSFunction> function( + JSFunction::cast(isolate->native_context()->get(index))); + + // Check we're still in the same context. + Register scratch = prototype; + const int offset = Context::SlotOffset(Context::GLOBAL_OBJECT_INDEX); + __ ld(scratch, MemOperand(cp, offset)); + __ ld(scratch, FieldMemOperand(scratch, GlobalObject::kNativeContextOffset)); + __ ld(scratch, MemOperand(scratch, Context::SlotOffset(index))); + __ li(at, function); + __ Branch(miss, ne, at, Operand(scratch)); + + // Load its initial map. The global functions all have initial maps. + __ li(prototype, Handle<Map>(function->initial_map())); + // Load the prototype from the initial map. 
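+  // (FieldMemOperand(reg, offset) below expands to
+  // MemOperand(reg, offset - kHeapObjectTag), compensating for the
+  // heap-object tag bit carried in the pointer.)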
+ __ ld(prototype, FieldMemOperand(prototype, Map::kPrototypeOffset)); +} + + +void NamedLoadHandlerCompiler::GenerateLoadFunctionPrototype( + MacroAssembler* masm, Register receiver, Register scratch1, + Register scratch2, Label* miss_label) { + __ TryGetFunctionPrototype(receiver, scratch1, scratch2, miss_label); + __ Ret(USE_DELAY_SLOT); + __ mov(v0, scratch1); +} + + +void PropertyHandlerCompiler::GenerateCheckPropertyCell( + MacroAssembler* masm, Handle<JSGlobalObject> global, Handle<Name> name, + Register scratch, Label* miss) { + Handle<Cell> cell = JSGlobalObject::EnsurePropertyCell(global, name); + DCHECK(cell->value()->IsTheHole()); + __ li(scratch, Operand(cell)); + __ ld(scratch, FieldMemOperand(scratch, Cell::kValueOffset)); + __ LoadRoot(at, Heap::kTheHoleValueRootIndex); + __ Branch(miss, ne, scratch, Operand(at)); +} + + +static void PushInterceptorArguments(MacroAssembler* masm, + Register receiver, + Register holder, + Register name, + Handle<JSObject> holder_obj) { + STATIC_ASSERT(NamedLoadHandlerCompiler::kInterceptorArgsNameIndex == 0); + STATIC_ASSERT(NamedLoadHandlerCompiler::kInterceptorArgsInfoIndex == 1); + STATIC_ASSERT(NamedLoadHandlerCompiler::kInterceptorArgsThisIndex == 2); + STATIC_ASSERT(NamedLoadHandlerCompiler::kInterceptorArgsHolderIndex == 3); + STATIC_ASSERT(NamedLoadHandlerCompiler::kInterceptorArgsLength == 4); + __ push(name); + Handle<InterceptorInfo> interceptor(holder_obj->GetNamedInterceptor()); + DCHECK(!masm->isolate()->heap()->InNewSpace(*interceptor)); + Register scratch = name; + __ li(scratch, Operand(interceptor)); + __ Push(scratch, receiver, holder); +} + + +static void CompileCallLoadPropertyWithInterceptor( + MacroAssembler* masm, + Register receiver, + Register holder, + Register name, + Handle<JSObject> holder_obj, + IC::UtilityId id) { + PushInterceptorArguments(masm, receiver, holder, name, holder_obj); + __ CallExternalReference(ExternalReference(IC_Utility(id), masm->isolate()), + NamedLoadHandlerCompiler::kInterceptorArgsLength); +} + + +// Generate call to api function. +void PropertyHandlerCompiler::GenerateFastApiCall( + MacroAssembler* masm, const CallOptimization& optimization, + Handle<Map> receiver_map, Register receiver, Register scratch_in, + bool is_store, int argc, Register* values) { + DCHECK(!receiver.is(scratch_in)); + // Preparing to push, adjust sp. + __ Dsubu(sp, sp, Operand((argc + 1) * kPointerSize)); + __ sd(receiver, MemOperand(sp, argc * kPointerSize)); // Push receiver. + // Write the arguments to stack frame. + for (int i = 0; i < argc; i++) { + Register arg = values[argc-1-i]; + DCHECK(!receiver.is(arg)); + DCHECK(!scratch_in.is(arg)); + __ sd(arg, MemOperand(sp, (argc-1-i) * kPointerSize)); // Push arg. + } + DCHECK(optimization.is_simple_api_call()); + + // Abi for CallApiFunctionStub. + Register callee = a0; + Register call_data = a4; + Register holder = a2; + Register api_function_address = a1; + + // Put holder in place. 
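+  // (LookupHolderOfExpectedType classifies the holder as the receiver
+  // itself, as a constant object found on the receiver map's prototype
+  // chain, or as not found at all; the last case is impossible for a
+  // simple API call, which the switch below asserts.)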
+ CallOptimization::HolderLookup holder_lookup; + Handle<JSObject> api_holder = optimization.LookupHolderOfExpectedType( + receiver_map, + &holder_lookup); + switch (holder_lookup) { + case CallOptimization::kHolderIsReceiver: + __ Move(holder, receiver); + break; + case CallOptimization::kHolderFound: + __ li(holder, api_holder); + break; + case CallOptimization::kHolderNotFound: + UNREACHABLE(); + break; + } + + Isolate* isolate = masm->isolate(); + Handle<JSFunction> function = optimization.constant_function(); + Handle<CallHandlerInfo> api_call_info = optimization.api_call_info(); + Handle<Object> call_data_obj(api_call_info->data(), isolate); + + // Put callee in place. + __ li(callee, function); + + bool call_data_undefined = false; + // Put call_data in place. + if (isolate->heap()->InNewSpace(*call_data_obj)) { + __ li(call_data, api_call_info); + __ ld(call_data, FieldMemOperand(call_data, CallHandlerInfo::kDataOffset)); + } else if (call_data_obj->IsUndefined()) { + call_data_undefined = true; + __ LoadRoot(call_data, Heap::kUndefinedValueRootIndex); + } else { + __ li(call_data, call_data_obj); + } + // Put api_function_address in place. + Address function_address = v8::ToCData<Address>(api_call_info->callback()); + ApiFunction fun(function_address); + ExternalReference::Type type = ExternalReference::DIRECT_API_CALL; + ExternalReference ref = + ExternalReference(&fun, + type, + masm->isolate()); + __ li(api_function_address, Operand(ref)); + + // Jump to stub. + CallApiFunctionStub stub(isolate, is_store, call_data_undefined, argc); + __ TailCallStub(&stub); +} + + +void PropertyAccessCompiler::GenerateTailCall(MacroAssembler* masm, + Handle<Code> code) { + __ Jump(code, RelocInfo::CODE_TARGET); +} + + +#undef __ +#define __ ACCESS_MASM(masm()) + + +void NamedStoreHandlerCompiler::GenerateRestoreName(Label* label, + Handle<Name> name) { + if (!label->is_unused()) { + __ bind(label); + __ li(this->name(), Operand(name)); + } +} + + +// Generate StoreTransition code, value is passed in a0 register. +// After executing generated code, the receiver_reg and name_reg +// may be clobbered. +void NamedStoreHandlerCompiler::GenerateStoreTransition( + Handle<Map> transition, Handle<Name> name, Register receiver_reg, + Register storage_reg, Register value_reg, Register scratch1, + Register scratch2, Register scratch3, Label* miss_label, Label* slow) { + // a0 : value. + Label exit; + + int descriptor = transition->LastAdded(); + DescriptorArray* descriptors = transition->instance_descriptors(); + PropertyDetails details = descriptors->GetDetails(descriptor); + Representation representation = details.representation(); + DCHECK(!representation.IsNone()); + + if (details.type() == CONSTANT) { + Handle<Object> constant(descriptors->GetValue(descriptor), isolate()); + __ li(scratch1, constant); + __ Branch(miss_label, ne, value_reg, Operand(scratch1)); + } else if (representation.IsSmi()) { + __ JumpIfNotSmi(value_reg, miss_label); + } else if (representation.IsHeapObject()) { + __ JumpIfSmi(value_reg, miss_label); + HeapType* field_type = descriptors->GetFieldType(descriptor); + HeapType::Iterator<Map> it = field_type->Classes(); + Handle<Map> current; + if (!it.Done()) { + __ ld(scratch1, FieldMemOperand(value_reg, HeapObject::kMapOffset)); + Label do_store; + while (true) { + // Do the CompareMap() directly within the Branch() functions. 
+ current = it.Current(); + it.Advance(); + if (it.Done()) { + __ Branch(miss_label, ne, scratch1, Operand(current)); + break; + } + __ Branch(&do_store, eq, scratch1, Operand(current)); + } + __ bind(&do_store); + } + } else if (representation.IsDouble()) { + Label do_store, heap_number; + __ LoadRoot(scratch3, Heap::kMutableHeapNumberMapRootIndex); + __ AllocateHeapNumber(storage_reg, scratch1, scratch2, scratch3, slow, + TAG_RESULT, MUTABLE); + + __ JumpIfNotSmi(value_reg, &heap_number); + __ SmiUntag(scratch1, value_reg); + __ mtc1(scratch1, f6); + __ cvt_d_w(f4, f6); + __ jmp(&do_store); + + __ bind(&heap_number); + __ CheckMap(value_reg, scratch1, Heap::kHeapNumberMapRootIndex, + miss_label, DONT_DO_SMI_CHECK); + __ ldc1(f4, FieldMemOperand(value_reg, HeapNumber::kValueOffset)); + + __ bind(&do_store); + __ sdc1(f4, FieldMemOperand(storage_reg, HeapNumber::kValueOffset)); + } + + // Stub never generated for objects that require access checks. + DCHECK(!transition->is_access_check_needed()); + + // Perform map transition for the receiver if necessary. + if (details.type() == FIELD && + Map::cast(transition->GetBackPointer())->unused_property_fields() == 0) { + // The properties must be extended before we can store the value. + // We jump to a runtime call that extends the properties array. + __ push(receiver_reg); + __ li(a2, Operand(transition)); + __ Push(a2, a0); + __ TailCallExternalReference( + ExternalReference(IC_Utility(IC::kSharedStoreIC_ExtendStorage), + isolate()), + 3, 1); + return; + } + + // Update the map of the object. + __ li(scratch1, Operand(transition)); + __ sd(scratch1, FieldMemOperand(receiver_reg, HeapObject::kMapOffset)); + + // Update the write barrier for the map field. + __ RecordWriteField(receiver_reg, + HeapObject::kMapOffset, + scratch1, + scratch2, + kRAHasNotBeenSaved, + kDontSaveFPRegs, + OMIT_REMEMBERED_SET, + OMIT_SMI_CHECK); + + if (details.type() == CONSTANT) { + DCHECK(value_reg.is(a0)); + __ Ret(USE_DELAY_SLOT); + __ mov(v0, a0); + return; + } + + int index = transition->instance_descriptors()->GetFieldIndex( + transition->LastAdded()); + + // Adjust for the number of properties stored in the object. Even in the + // face of a transition we can use the old map here because the size of the + // object and the number of in-object properties is not going to change. + index -= transition->inobject_properties(); + + // TODO(verwaest): Share this code as a code stub. + SmiCheck smi_check = representation.IsTagged() + ? INLINE_SMI_CHECK : OMIT_SMI_CHECK; + if (index < 0) { + // Set the property straight into the object. + int offset = transition->instance_size() + (index * kPointerSize); + if (representation.IsDouble()) { + __ sd(storage_reg, FieldMemOperand(receiver_reg, offset)); + } else { + __ sd(value_reg, FieldMemOperand(receiver_reg, offset)); + } + + if (!representation.IsSmi()) { + // Update the write barrier for the array address. + if (!representation.IsDouble()) { + __ mov(storage_reg, value_reg); + } + __ RecordWriteField(receiver_reg, + offset, + storage_reg, + scratch1, + kRAHasNotBeenSaved, + kDontSaveFPRegs, + EMIT_REMEMBERED_SET, + smi_check); + } + } else { + // Write to the properties array. 
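+    // (Worked example with illustrative numbers, not from the original
+    // source: for a map with 4 in-object properties, field index 2 gives
+    // index = 2 - 4 = -2 above, i.e. the in-object slot at
+    // instance_size - 2 * kPointerSize; field index 5 gives index = 1 here,
+    // i.e. properties[1] at 1 * kPointerSize + FixedArray::kHeaderSize.)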
+ int offset = index * kPointerSize + FixedArray::kHeaderSize; + // Get the properties array + __ ld(scratch1, + FieldMemOperand(receiver_reg, JSObject::kPropertiesOffset)); + if (representation.IsDouble()) { + __ sd(storage_reg, FieldMemOperand(scratch1, offset)); + } else { + __ sd(value_reg, FieldMemOperand(scratch1, offset)); + } + + if (!representation.IsSmi()) { + // Update the write barrier for the array address. + if (!representation.IsDouble()) { + __ mov(storage_reg, value_reg); + } + __ RecordWriteField(scratch1, + offset, + storage_reg, + receiver_reg, + kRAHasNotBeenSaved, + kDontSaveFPRegs, + EMIT_REMEMBERED_SET, + smi_check); + } + } + + // Return the value (register v0). + DCHECK(value_reg.is(a0)); + __ bind(&exit); + __ Ret(USE_DELAY_SLOT); + __ mov(v0, a0); +} + + +void NamedStoreHandlerCompiler::GenerateStoreField(LookupResult* lookup, + Register value_reg, + Label* miss_label) { + DCHECK(lookup->representation().IsHeapObject()); + __ JumpIfSmi(value_reg, miss_label); + HeapType::Iterator<Map> it = lookup->GetFieldType()->Classes(); + __ ld(scratch1(), FieldMemOperand(value_reg, HeapObject::kMapOffset)); + Label do_store; + Handle<Map> current; + while (true) { + // Do the CompareMap() directly within the Branch() functions. + current = it.Current(); + it.Advance(); + if (it.Done()) { + __ Branch(miss_label, ne, scratch1(), Operand(current)); + break; + } + __ Branch(&do_store, eq, scratch1(), Operand(current)); + } + __ bind(&do_store); + + StoreFieldStub stub(isolate(), lookup->GetFieldIndex(), + lookup->representation()); + GenerateTailCall(masm(), stub.GetCode()); +} + + +Register PropertyHandlerCompiler::CheckPrototypes( + Register object_reg, Register holder_reg, Register scratch1, + Register scratch2, Handle<Name> name, Label* miss, + PrototypeCheckType check) { + Handle<Map> receiver_map(IC::TypeToMap(*type(), isolate())); + + // Make sure there's no overlap between holder and object registers. + DCHECK(!scratch1.is(object_reg) && !scratch1.is(holder_reg)); + DCHECK(!scratch2.is(object_reg) && !scratch2.is(holder_reg) + && !scratch2.is(scratch1)); + + // Keep track of the current object in register reg. + Register reg = object_reg; + int depth = 0; + + Handle<JSObject> current = Handle<JSObject>::null(); + if (type()->IsConstant()) { + current = Handle<JSObject>::cast(type()->AsConstant()->Value()); + } + Handle<JSObject> prototype = Handle<JSObject>::null(); + Handle<Map> current_map = receiver_map; + Handle<Map> holder_map(holder()->map()); + // Traverse the prototype chain and check the maps in the prototype chain for + // fast and global objects or do negative lookup for normal objects. + while (!current_map.is_identical_to(holder_map)) { + ++depth; + + // Only global objects and objects that do not require access + // checks are allowed in stubs. + DCHECK(current_map->IsJSGlobalProxyMap() || + !current_map->is_access_check_needed()); + + prototype = handle(JSObject::cast(current_map->prototype())); + if (current_map->is_dictionary_map() && + !current_map->IsJSGlobalObjectMap()) { + DCHECK(!current_map->IsJSGlobalProxyMap()); // Proxy maps are fast. 
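+      // (The negative dictionary lookup below requires an interned, unique
+      // name, so a plain string key is internalized on demand first.)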
+ if (!name->IsUniqueName()) { + DCHECK(name->IsString()); + name = factory()->InternalizeString(Handle<String>::cast(name)); + } + DCHECK(current.is_null() || + current->property_dictionary()->FindEntry(name) == + NameDictionary::kNotFound); + + GenerateDictionaryNegativeLookup(masm(), miss, reg, name, + scratch1, scratch2); + + __ ld(scratch1, FieldMemOperand(reg, HeapObject::kMapOffset)); + reg = holder_reg; // From now on the object will be in holder_reg. + __ ld(reg, FieldMemOperand(scratch1, Map::kPrototypeOffset)); + } else { + // Two possible reasons for loading the prototype from the map: + // (1) Can't store references to new space in code. + // (2) Handler is shared for all receivers with the same prototype + // map (but not necessarily the same prototype instance). + bool load_prototype_from_map = + heap()->InNewSpace(*prototype) || depth == 1; + Register map_reg = scratch1; + if (depth != 1 || check == CHECK_ALL_MAPS) { + // CheckMap implicitly loads the map of |reg| into |map_reg|. + __ CheckMap(reg, map_reg, current_map, miss, DONT_DO_SMI_CHECK); + } else { + __ ld(map_reg, FieldMemOperand(reg, HeapObject::kMapOffset)); + } + + // Check access rights to the global object. This has to happen after + // the map check so that we know that the object is actually a global + // object. + // This allows us to install generated handlers for accesses to the + // global proxy (as opposed to using slow ICs). See corresponding code + // in LookupForRead(). + if (current_map->IsJSGlobalProxyMap()) { + __ CheckAccessGlobalProxy(reg, scratch2, miss); + } else if (current_map->IsJSGlobalObjectMap()) { + GenerateCheckPropertyCell( + masm(), Handle<JSGlobalObject>::cast(current), name, + scratch2, miss); + } + + reg = holder_reg; // From now on the object will be in holder_reg. + + if (load_prototype_from_map) { + __ ld(reg, FieldMemOperand(map_reg, Map::kPrototypeOffset)); + } else { + __ li(reg, Operand(prototype)); + } + } + + // Go to the next object in the prototype chain. + current = prototype; + current_map = handle(current->map()); + } + + // Log the check depth. + LOG(isolate(), IntEvent("check-maps-depth", depth + 1)); + + if (depth != 0 || check == CHECK_ALL_MAPS) { + // Check the holder map. + __ CheckMap(reg, scratch1, current_map, miss, DONT_DO_SMI_CHECK); + } + + // Perform security check for access to the global object. + DCHECK(current_map->IsJSGlobalProxyMap() || + !current_map->is_access_check_needed()); + if (current_map->IsJSGlobalProxyMap()) { + __ CheckAccessGlobalProxy(reg, scratch1, miss); + } + + // Return the register containing the holder. + return reg; +} + + +void NamedLoadHandlerCompiler::FrontendFooter(Handle<Name> name, Label* miss) { + if (!miss->is_unused()) { + Label success; + __ Branch(&success); + __ bind(miss); + TailCallBuiltin(masm(), MissBuiltin(kind())); + __ bind(&success); + } +} + + +void NamedStoreHandlerCompiler::FrontendFooter(Handle<Name> name, Label* miss) { + if (!miss->is_unused()) { + Label success; + __ Branch(&success); + GenerateRestoreName(miss, name); + TailCallBuiltin(masm(), MissBuiltin(kind())); + __ bind(&success); + } +} + + +void NamedLoadHandlerCompiler::GenerateLoadConstant(Handle<Object> value) { + // Return the constant value. + __ li(v0, value); + __ Ret(); +} + + +void NamedLoadHandlerCompiler::GenerateLoadCallback( + Register reg, Handle<ExecutableAccessorInfo> callback) { + // Build AccessorInfo::args_ list on the stack and push property name below + // the exit frame to make GC aware of them and store pointers to them. 
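+  // Frame sketch, derived from the stores below (slots of kPointerSize;
+  // PropertyCallbackArguments indices in parentheses):
+  //   sp[6] : receiver, i.e. "this"    (kThisIndex == 5)
+  //   sp[5] : callback data            (kDataIndex == 4)
+  //   sp[4] : ReturnValue              (kReturnValueOffset == 3)
+  //   sp[3] : ReturnValue default      (kReturnValueDefaultValueIndex == 2)
+  //   sp[2] : isolate                  (kIsolateIndex == 1)
+  //   sp[1] : holder                   (kHolderIndex == 0)
+  //   sp[0] : property name, kept below args_ so the GC still scans it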
+  STATIC_ASSERT(PropertyCallbackArguments::kHolderIndex == 0);
+  STATIC_ASSERT(PropertyCallbackArguments::kIsolateIndex == 1);
+  STATIC_ASSERT(PropertyCallbackArguments::kReturnValueDefaultValueIndex == 2);
+  STATIC_ASSERT(PropertyCallbackArguments::kReturnValueOffset == 3);
+  STATIC_ASSERT(PropertyCallbackArguments::kDataIndex == 4);
+  STATIC_ASSERT(PropertyCallbackArguments::kThisIndex == 5);
+  STATIC_ASSERT(PropertyCallbackArguments::kArgsLength == 6);
+  DCHECK(!scratch2().is(reg));
+  DCHECK(!scratch3().is(reg));
+  DCHECK(!scratch4().is(reg));
+  __ push(receiver());
+  if (heap()->InNewSpace(callback->data())) {
+    __ li(scratch3(), callback);
+    __ ld(scratch3(), FieldMemOperand(scratch3(),
+                                      ExecutableAccessorInfo::kDataOffset));
+  } else {
+    __ li(scratch3(), Handle<Object>(callback->data(), isolate()));
+  }
+  __ Dsubu(sp, sp, 6 * kPointerSize);
+  __ sd(scratch3(), MemOperand(sp, 5 * kPointerSize));
+  __ LoadRoot(scratch3(), Heap::kUndefinedValueRootIndex);
+  __ sd(scratch3(), MemOperand(sp, 4 * kPointerSize));
+  __ sd(scratch3(), MemOperand(sp, 3 * kPointerSize));
+  __ li(scratch4(),
+        Operand(ExternalReference::isolate_address(isolate())));
+  __ sd(scratch4(), MemOperand(sp, 2 * kPointerSize));
+  __ sd(reg, MemOperand(sp, 1 * kPointerSize));
+  __ sd(name(), MemOperand(sp, 0 * kPointerSize));
+  __ Daddu(scratch2(), sp, 1 * kPointerSize);
+
+  __ mov(a2, scratch2());  // Saved in case scratch2 == a1.
+  // ABI for CallApiGetter.
+  Register getter_address_reg = a2;
+
+  Address getter_address = v8::ToCData<Address>(callback->getter());
+  ApiFunction fun(getter_address);
+  ExternalReference::Type type = ExternalReference::DIRECT_GETTER_CALL;
+  ExternalReference ref = ExternalReference(&fun, type, isolate());
+  __ li(getter_address_reg, Operand(ref));
+
+  CallApiGetterStub stub(isolate());
+  __ TailCallStub(&stub);
+}
+
+
+void NamedLoadHandlerCompiler::GenerateLoadInterceptor(Register holder_reg,
+                                                       LookupResult* lookup,
+                                                       Handle<Name> name) {
+  DCHECK(holder()->HasNamedInterceptor());
+  DCHECK(!holder()->GetNamedInterceptor()->getter()->IsUndefined());
+
+  // So far the most popular follow-ups for interceptor loads are FIELD and
+  // CALLBACKS, so inline only them; other cases may be added later.
+  bool compile_followup_inline = false;
+  if (lookup->IsFound() && lookup->IsCacheable()) {
+    if (lookup->IsField()) {
+      compile_followup_inline = true;
+    } else if (lookup->type() == CALLBACKS &&
+               lookup->GetCallbackObject()->IsExecutableAccessorInfo()) {
+      Handle<ExecutableAccessorInfo> callback(
+          ExecutableAccessorInfo::cast(lookup->GetCallbackObject()));
+      compile_followup_inline =
+          callback->getter() != NULL &&
+          ExecutableAccessorInfo::IsCompatibleReceiverType(isolate(), callback,
+                                                           type());
+    }
+  }
+
+  if (compile_followup_inline) {
+    // Compile the interceptor call, followed by inline code to load the
+    // property from further up the prototype chain if the call fails.
+    // Check that the maps haven't changed.
+    DCHECK(holder_reg.is(receiver()) || holder_reg.is(scratch1()));
+
+    // Preserve the receiver register explicitly whenever it is different from
+    // the holder and it is needed should the interceptor return without any
+    // result. The CALLBACKS case needs the receiver to be passed into C++
+    // code, while the FIELD case might cause a miss during the prototype
+    // check.
+    bool must_perform_prototype_check = *holder() != lookup->holder();
+    bool must_preserve_receiver_reg = !receiver().is(holder_reg) &&
+        (lookup->type() == CALLBACKS || must_perform_prototype_check);
+
+    // Save necessary data before invoking an interceptor.
+    // Requires a frame to make GC aware of pushed pointers.
+    {
+      FrameScope frame_scope(masm(), StackFrame::INTERNAL);
+      if (must_preserve_receiver_reg) {
+        __ Push(receiver(), holder_reg, this->name());
+      } else {
+        __ Push(holder_reg, this->name());
+      }
+      // Invoke an interceptor. Note: map checks from receiver to
+      // interceptor's holder have been compiled before (see a caller
+      // of this method).
+      CompileCallLoadPropertyWithInterceptor(
+          masm(), receiver(), holder_reg, this->name(), holder(),
+          IC::kLoadPropertyWithInterceptorOnly);
+
+      // Check if interceptor provided a value for property. If so,
+      // return immediately.
+      Label interceptor_failed;
+      __ LoadRoot(scratch1(), Heap::kNoInterceptorResultSentinelRootIndex);
+      __ Branch(&interceptor_failed, eq, v0, Operand(scratch1()));
+      frame_scope.GenerateLeaveFrame();
+      __ Ret();
+
+      __ bind(&interceptor_failed);
+      __ pop(this->name());
+      __ pop(holder_reg);
+      if (must_preserve_receiver_reg) {
+        __ pop(receiver());
+      }
+      // Leave the internal frame.
+    }
+    GenerateLoadPostInterceptor(holder_reg, name, lookup);
+  } else {  // !compile_followup_inline
+    // Call the runtime system to load the interceptor.
+    // Check that the maps haven't changed.
+    PushInterceptorArguments(masm(), receiver(), holder_reg, this->name(),
+                             holder());
+
+    ExternalReference ref = ExternalReference(
+        IC_Utility(IC::kLoadPropertyWithInterceptor), isolate());
+    __ TailCallExternalReference(
+        ref, NamedLoadHandlerCompiler::kInterceptorArgsLength, 1);
+  }
+}
+
+
+Handle<Code> NamedStoreHandlerCompiler::CompileStoreCallback(
+    Handle<JSObject> object, Handle<Name> name,
+    Handle<ExecutableAccessorInfo> callback) {
+  Register holder_reg = Frontend(receiver(), name);
+
+  __ Push(receiver(), holder_reg);  // Receiver.
+  __ li(at, Operand(callback));  // Callback info.
+  __ push(at);
+  __ li(at, Operand(name));
+  __ Push(at, value());
+
+  // Do tail-call to the runtime system.
+  ExternalReference store_callback_property =
+      ExternalReference(IC_Utility(IC::kStoreCallbackProperty), isolate());
+  __ TailCallExternalReference(store_callback_property, 5, 1);
+
+  // Return the generated code.
+  return GetCode(kind(), Code::FAST, name);
+}
+
+
+#undef __
+#define __ ACCESS_MASM(masm)
+
+
+void NamedStoreHandlerCompiler::GenerateStoreViaSetter(
+    MacroAssembler* masm, Handle<HeapType> type, Register receiver,
+    Handle<JSFunction> setter) {
+  // ----------- S t a t e -------------
+  //  -- ra    : return address
+  // -----------------------------------
+  {
+    FrameScope scope(masm, StackFrame::INTERNAL);
+
+    // Save value register, so we can restore it later.
+    __ push(value());
+
+    if (!setter.is_null()) {
+      // Call the JavaScript setter with receiver and value on the stack.
+      if (IC::TypeToMap(*type, masm->isolate())->IsJSGlobalObjectMap()) {
+        // Swap in the global receiver.
+        __ ld(receiver,
+              FieldMemOperand(receiver, JSGlobalObject::kGlobalProxyOffset));
+      }
+      __ Push(receiver, value());
+      ParameterCount actual(1);
+      ParameterCount expected(setter);
+      __ InvokeFunction(setter, expected, actual,
+                        CALL_FUNCTION, NullCallWrapper());
+    } else {
+      // If we generate a global code snippet for deoptimization only, remember
+      // the place to continue after deoptimization.
+ masm->isolate()->heap()->SetSetterStubDeoptPCOffset(masm->pc_offset()); + } + + // We have to return the passed value, not the return value of the setter. + __ pop(v0); + + // Restore context register. + __ ld(cp, MemOperand(fp, StandardFrameConstants::kContextOffset)); + } + __ Ret(); +} + + +#undef __ +#define __ ACCESS_MASM(masm()) + + +Handle<Code> NamedStoreHandlerCompiler::CompileStoreInterceptor( + Handle<Name> name) { + __ Push(receiver(), this->name(), value()); + + // Do tail-call to the runtime system. + ExternalReference store_ic_property = ExternalReference( + IC_Utility(IC::kStorePropertyWithInterceptor), isolate()); + __ TailCallExternalReference(store_ic_property, 3, 1); + + // Return the generated code. + return GetCode(kind(), Code::FAST, name); +} + + +Register* PropertyAccessCompiler::load_calling_convention() { + // receiver, name, scratch1, scratch2, scratch3, scratch4. + Register receiver = LoadIC::ReceiverRegister(); + Register name = LoadIC::NameRegister(); + static Register registers[] = { receiver, name, a3, a0, a4, a5 }; + return registers; +} + + +Register* PropertyAccessCompiler::store_calling_convention() { + // receiver, name, scratch1, scratch2, scratch3. + Register receiver = StoreIC::ReceiverRegister(); + Register name = StoreIC::NameRegister(); + DCHECK(a3.is(KeyedStoreIC::MapRegister())); + static Register registers[] = { receiver, name, a3, a4, a5 }; + return registers; +} + + +Register NamedStoreHandlerCompiler::value() { return StoreIC::ValueRegister(); } + + +#undef __ +#define __ ACCESS_MASM(masm) + + +void NamedLoadHandlerCompiler::GenerateLoadViaGetter( + MacroAssembler* masm, Handle<HeapType> type, Register receiver, + Handle<JSFunction> getter) { + // ----------- S t a t e ------------- + // -- a0 : receiver + // -- a2 : name + // -- ra : return address + // ----------------------------------- + { + FrameScope scope(masm, StackFrame::INTERNAL); + + if (!getter.is_null()) { + // Call the JavaScript getter with the receiver on the stack. + if (IC::TypeToMap(*type, masm->isolate())->IsJSGlobalObjectMap()) { + // Swap in the global receiver. + __ ld(receiver, + FieldMemOperand(receiver, JSGlobalObject::kGlobalProxyOffset)); + } + __ push(receiver); + ParameterCount actual(0); + ParameterCount expected(getter); + __ InvokeFunction(getter, expected, actual, + CALL_FUNCTION, NullCallWrapper()); + } else { + // If we generate a global code snippet for deoptimization only, remember + // the place to continue after deoptimization. + masm->isolate()->heap()->SetGetterStubDeoptPCOffset(masm->pc_offset()); + } + + // Restore context register. + __ ld(cp, MemOperand(fp, StandardFrameConstants::kContextOffset)); + } + __ Ret(); +} + + +#undef __ +#define __ ACCESS_MASM(masm()) + + +Handle<Code> NamedLoadHandlerCompiler::CompileLoadGlobal( + Handle<PropertyCell> cell, Handle<Name> name, bool is_configurable) { + Label miss; + + FrontendHeader(receiver(), name, &miss); + + // Get the value from the cell. + Register result = StoreIC::ValueRegister(); + __ li(result, Operand(cell)); + __ ld(result, FieldMemOperand(result, Cell::kValueOffset)); + + // Check for deleted property if property can actually be deleted. 
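+  // (A deleted global property leaves the hole value in its cell, so
+  // loading the hole here means the property no longer exists and the stub
+  // must miss.)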
+ if (is_configurable) { + __ LoadRoot(at, Heap::kTheHoleValueRootIndex); + __ Branch(&miss, eq, result, Operand(at)); + } + + Counters* counters = isolate()->counters(); + __ IncrementCounter(counters->named_load_global_stub(), 1, a1, a3); + __ Ret(USE_DELAY_SLOT); + __ mov(v0, result); + + FrontendFooter(name, &miss); + + // Return the generated code. + return GetCode(kind(), Code::NORMAL, name); +} + + +Handle<Code> PropertyICCompiler::CompilePolymorphic(TypeHandleList* types, + CodeHandleList* handlers, + Handle<Name> name, + Code::StubType type, + IcCheckType check) { + Label miss; + + if (check == PROPERTY && + (kind() == Code::KEYED_LOAD_IC || kind() == Code::KEYED_STORE_IC)) { + // In case we are compiling an IC for dictionary loads and stores, just + // check whether the name is unique. + if (name.is_identical_to(isolate()->factory()->normal_ic_symbol())) { + __ JumpIfNotUniqueName(this->name(), &miss); + } else { + __ Branch(&miss, ne, this->name(), Operand(name)); + } + } + + Label number_case; + Register match = scratch2(); + Label* smi_target = IncludesNumberType(types) ? &number_case : &miss; + __ JumpIfSmi(receiver(), smi_target, match); // Reg match is 0 if Smi. + + // Polymorphic keyed stores may use the map register + Register map_reg = scratch1(); + DCHECK(kind() != Code::KEYED_STORE_IC || + map_reg.is(KeyedStoreIC::MapRegister())); + + int receiver_count = types->length(); + int number_of_handled_maps = 0; + __ ld(map_reg, FieldMemOperand(receiver(), HeapObject::kMapOffset)); + for (int current = 0; current < receiver_count; ++current) { + Handle<HeapType> type = types->at(current); + Handle<Map> map = IC::TypeToMap(*type, isolate()); + if (!map->is_deprecated()) { + number_of_handled_maps++; + // Check map and tail call if there's a match. + // Separate compare from branch, to provide path for above JumpIfSmi(). + __ Dsubu(match, map_reg, Operand(map)); + if (type->Is(HeapType::Number())) { + DCHECK(!number_case.is_unused()); + __ bind(&number_case); + } + __ Jump(handlers->at(current), RelocInfo::CODE_TARGET, + eq, match, Operand(zero_reg)); + } + } + DCHECK(number_of_handled_maps != 0); + + __ bind(&miss); + TailCallBuiltin(masm(), MissBuiltin(kind())); + + // Return the generated code. + InlineCacheState state = + number_of_handled_maps > 1 ? POLYMORPHIC : MONOMORPHIC; + return GetCode(kind(), type, name, state); +} + + +Handle<Code> PropertyICCompiler::CompileKeyedStorePolymorphic( + MapHandleList* receiver_maps, CodeHandleList* handler_stubs, + MapHandleList* transitioned_maps) { + Label miss; + __ JumpIfSmi(receiver(), &miss); + + int receiver_count = receiver_maps->length(); + __ ld(scratch1(), FieldMemOperand(receiver(), HeapObject::kMapOffset)); + for (int i = 0; i < receiver_count; ++i) { + if (transitioned_maps->at(i).is_null()) { + __ Jump(handler_stubs->at(i), RelocInfo::CODE_TARGET, eq, + scratch1(), Operand(receiver_maps->at(i))); + } else { + Label next_map; + __ Branch(&next_map, ne, scratch1(), Operand(receiver_maps->at(i))); + __ li(transition_map(), Operand(transitioned_maps->at(i))); + __ Jump(handler_stubs->at(i), RelocInfo::CODE_TARGET); + __ bind(&next_map); + } + } + + __ bind(&miss); + TailCallBuiltin(masm(), MissBuiltin(kind())); + + // Return the generated code. 
+ return GetCode(kind(), Code::NORMAL, factory()->empty_string(), POLYMORPHIC); +} + + +#undef __ +#define __ ACCESS_MASM(masm) + + +void ElementHandlerCompiler::GenerateLoadDictionaryElement( + MacroAssembler* masm) { + // The return address is in ra + Label slow, miss; + + Register key = LoadIC::NameRegister(); + Register receiver = LoadIC::ReceiverRegister(); + DCHECK(receiver.is(a1)); + DCHECK(key.is(a2)); + + __ UntagAndJumpIfNotSmi(a6, key, &miss); + __ ld(a4, FieldMemOperand(receiver, JSObject::kElementsOffset)); + DCHECK(kSmiTagSize + kSmiShiftSize == 32); + __ LoadFromNumberDictionary(&slow, a4, key, v0, a6, a3, a5); + __ Ret(); + + // Slow case, key and receiver still unmodified. + __ bind(&slow); + __ IncrementCounter( + masm->isolate()->counters()->keyed_load_external_array_slow(), + 1, a2, a3); + + TailCallBuiltin(masm, Builtins::kKeyedLoadIC_Slow); + + // Miss case, call the runtime. + __ bind(&miss); + + TailCallBuiltin(masm, Builtins::kKeyedLoadIC_Miss); +} + + +#undef __ + +} } // namespace v8::internal + +#endif // V8_TARGET_ARCH_MIPS64 diff --git a/deps/v8/src/mirror-debugger.js b/deps/v8/src/mirror-debugger.js index fde3f10f3f1..6da847fd549 100644 --- a/deps/v8/src/mirror-debugger.js +++ b/deps/v8/src/mirror-debugger.js @@ -8,12 +8,11 @@ var next_transient_handle_ = -1; // Mirror cache. var mirror_cache_ = []; +var mirror_cache_enabled_ = true; -/** - * Clear the mirror handle cache. - */ -function ClearMirrorCache() { +function ToggleMirrorCache(value) { + mirror_cache_enabled_ = value; next_handle_ = 0; mirror_cache_ = []; } @@ -21,10 +20,11 @@ function ClearMirrorCache() { // Wrapper to check whether an object is a Promise. The call may not work // if promises are not enabled. -// TODO(yangguo): remove this wrapper once promises are enabled by default. +// TODO(yangguo): remove try-catch once promises are enabled by default. function ObjectIsPromise(value) { try { - return %IsPromise(value); + return IS_SPEC_OBJECT(value) && + !IS_UNDEFINED(%DebugGetProperty(value, builtins.promiseStatus)); } catch (e) { return false; } @@ -43,7 +43,7 @@ function MakeMirror(value, opt_transient) { var mirror; // Look for non transient mirrors in the mirror cache. 
- if (!opt_transient) { + if (!opt_transient && mirror_cache_enabled_) { for (id in mirror_cache_) { mirror = mirror_cache_[id]; if (mirror.value() === value) { @@ -67,6 +67,8 @@ function MakeMirror(value, opt_transient) { mirror = new NumberMirror(value); } else if (IS_STRING(value)) { mirror = new StringMirror(value); + } else if (IS_SYMBOL(value)) { + mirror = new SymbolMirror(value); } else if (IS_ARRAY(value)) { mirror = new ArrayMirror(value); } else if (IS_DATE(value)) { @@ -79,13 +81,17 @@ function MakeMirror(value, opt_transient) { mirror = new ErrorMirror(value); } else if (IS_SCRIPT(value)) { mirror = new ScriptMirror(value); + } else if (IS_MAP(value) || IS_WEAKMAP(value)) { + mirror = new MapMirror(value); + } else if (IS_SET(value) || IS_WEAKSET(value)) { + mirror = new SetMirror(value); } else if (ObjectIsPromise(value)) { mirror = new PromiseMirror(value); } else { mirror = new ObjectMirror(value, OBJECT_TYPE, opt_transient); } - mirror_cache_[mirror.handle()] = mirror; + if (mirror_cache_enabled_) mirror_cache_[mirror.handle()] = mirror; return mirror; } @@ -98,6 +104,7 @@ function MakeMirror(value, opt_transient) { * undefined if no mirror with the requested handle was found */ function LookupMirror(handle) { + if (!mirror_cache_enabled_) throw new Error("Mirror cache is disabled"); return mirror_cache_[handle]; } @@ -140,6 +147,7 @@ var NULL_TYPE = 'null'; var BOOLEAN_TYPE = 'boolean'; var NUMBER_TYPE = 'number'; var STRING_TYPE = 'string'; +var SYMBOL_TYPE = 'symbol'; var OBJECT_TYPE = 'object'; var FUNCTION_TYPE = 'function'; var REGEXP_TYPE = 'regexp'; @@ -151,6 +159,8 @@ var SCRIPT_TYPE = 'script'; var CONTEXT_TYPE = 'context'; var SCOPE_TYPE = 'scope'; var PROMISE_TYPE = 'promise'; +var MAP_TYPE = 'map'; +var SET_TYPE = 'set'; // Maximum length when sending strings through the JSON protocol. var kMaxProtocolStringLength = 80; @@ -161,7 +171,7 @@ PropertyKind.Named = 1; PropertyKind.Indexed = 2; -// A copy of the PropertyType enum from global.h +// A copy of the PropertyType enum from property-details.h var PropertyType = {}; PropertyType.Normal = 0; PropertyType.Field = 1; @@ -169,8 +179,7 @@ PropertyType.Constant = 2; PropertyType.Callbacks = 3; PropertyType.Handler = 4; PropertyType.Interceptor = 5; -PropertyType.Transition = 6; -PropertyType.Nonexistent = 7; +PropertyType.Nonexistent = 6; // Different attributes for a property. @@ -197,6 +206,7 @@ var ScopeType = { Global: 0, // - NullMirror // - NumberMirror // - StringMirror +// - SymbolMirror // - ObjectMirror // - FunctionMirror // - UnresolvedFunctionMirror @@ -205,6 +215,8 @@ var ScopeType = { Global: 0, // - RegExpMirror // - ErrorMirror // - PromiseMirror +// - MapMirror +// - SetMirror // - PropertyMirror // - InternalPropertyMirror // - FrameMirror @@ -280,6 +292,15 @@ Mirror.prototype.isString = function() { }; +/** + * Check whether the mirror reflects a symbol. + * @returns {boolean} True if the mirror reflects a symbol + */ +Mirror.prototype.isSymbol = function() { + return this instanceof SymbolMirror; +}; + + /** * Check whether the mirror reflects an object. * @returns {boolean} True if the mirror reflects an object @@ -406,11 +427,29 @@ Mirror.prototype.isScope = function() { }; +/** + * Check whether the mirror reflects a map. + * @returns {boolean} True if the mirror reflects a map + */ +Mirror.prototype.isMap = function() { + return this instanceof MapMirror; +}; + + +/** + * Check whether the mirror reflects a set. 
+ * @returns {boolean} True if the mirror reflects a set + */ +Mirror.prototype.isSet = function() { + return this instanceof SetMirror; +}; + + /** * Allocate a handle id for this object. */ Mirror.prototype.allocateHandle_ = function() { - this.handle_ = next_handle_++; + if (mirror_cache_enabled_) this.handle_ = next_handle_++; }; @@ -465,7 +504,8 @@ ValueMirror.prototype.isPrimitive = function() { type === 'null' || type === 'boolean' || type === 'number' || - type === 'string'; + type === 'string' || + type === 'symbol'; }; @@ -573,6 +613,28 @@ StringMirror.prototype.toText = function() { }; +/** + * Mirror object for a Symbol + * @param {Object} value The Symbol + * @constructor + * @extends Mirror + */ +function SymbolMirror(value) { + %_CallFunction(this, SYMBOL_TYPE, value, ValueMirror); +} +inherits(SymbolMirror, ValueMirror); + + +SymbolMirror.prototype.description = function() { + return %SymbolDescription(%_ValueOf(this.value_)); +} + + +SymbolMirror.prototype.toText = function() { + return %_CallFunction(this.value_, builtins.SymbolToString); +} + + /** * Mirror object for objects. * @param {object} value The object reflected by this mirror @@ -621,6 +683,19 @@ ObjectMirror.prototype.hasIndexedInterceptor = function() { }; +// Get all own property names except for private symbols. +function TryGetPropertyNames(object) { + try { + // TODO(yangguo): Should there be a special debugger implementation of + // %GetOwnPropertyNames that doesn't perform access checks? + return %GetOwnPropertyNames(object, PROPERTY_ATTRIBUTES_PRIVATE_SYMBOL); + } catch (e) { + // Might have hit a failed access check. + return []; + } +} + + /** * Return the property names for this object. * @param {number} kind Indicate whether named, indexed or both kinds of @@ -639,9 +714,7 @@ ObjectMirror.prototype.propertyNames = function(kind, limit) { // Find all the named properties. if (kind & PropertyKind.Named) { - // Get all the local property names except for private symbols. - propertyNames = - %GetLocalPropertyNames(this.value_, PROPERTY_ATTRIBUTES_PRIVATE_SYMBOL); + propertyNames = TryGetPropertyNames(this.value_); total += propertyNames.length; // Get names for named interceptor properties if any. @@ -657,8 +730,8 @@ ObjectMirror.prototype.propertyNames = function(kind, limit) { // Find all the indexed properties. if (kind & PropertyKind.Indexed) { - // Get the local element names. - elementNames = %GetLocalElementNames(this.value_); + // Get own element names. + elementNames = %GetOwnElementNames(this.value_); total += elementNames.length; // Get names for indexed interceptor properties. @@ -724,7 +797,7 @@ ObjectMirror.prototype.internalProperties = function() { ObjectMirror.prototype.property = function(name) { - var details = %DebugGetPropertyDetails(this.value_, %ToString(name)); + var details = %DebugGetPropertyDetails(this.value_, %ToName(name)); if (details) { return new PropertyMirror(this, name, details); } @@ -798,7 +871,8 @@ ObjectMirror.prototype.toText = function() { /** * Return the internal properties of the value, such as [[PrimitiveValue]] of - * scalar wrapper objects and properties of the bound function. + * scalar wrapper objects, properties of the bound function and properties of + * the promise. * This method is done static to be accessible from Debug API with the bare * values without mirrors. 
* @return {Array} array (possibly empty) of InternalProperty instances @@ -822,6 +896,13 @@ ObjectMirror.GetInternalProperties = function(value) { result.push(new InternalPropertyMirror("[[BoundArgs]]", boundArgs)); } return result; + } else if (ObjectIsPromise(value)) { + var result = []; + result.push(new InternalPropertyMirror("[[PromiseStatus]]", + PromiseGetStatus_(value))); + result.push(new InternalPropertyMirror("[[PromiseValue]]", + PromiseGetValue_(value))); + return result; } return []; } @@ -1175,9 +1256,9 @@ ErrorMirror.prototype.toText = function() { /** * Mirror object for a Promise object. - * @param {Object} data The Promise object + * @param {Object} value The Promise object * @constructor - * @extends Mirror + * @extends ObjectMirror */ function PromiseMirror(value) { %_CallFunction(this, value, PROMISE_TYPE, ObjectMirror); @@ -1185,16 +1266,91 @@ function PromiseMirror(value) { inherits(PromiseMirror, ObjectMirror); -PromiseMirror.prototype.status = function() { - var status = builtins.GetPromiseStatus(this.value_); +function PromiseGetStatus_(value) { + var status = %DebugGetProperty(value, builtins.promiseStatus); if (status == 0) return "pending"; if (status == 1) return "resolved"; return "rejected"; +} + + +function PromiseGetValue_(value) { + return %DebugGetProperty(value, builtins.promiseValue); +} + + +PromiseMirror.prototype.status = function() { + return PromiseGetStatus_(this.value_); }; PromiseMirror.prototype.promiseValue = function() { - return builtins.GetPromiseValue(this.value_); + return MakeMirror(PromiseGetValue_(this.value_)); +}; + + +function MapMirror(value) { + %_CallFunction(this, value, MAP_TYPE, ObjectMirror); +} +inherits(MapMirror, ObjectMirror); + + +/** + * Returns an array of key/value pairs of a map. + * This will keep keys alive for WeakMaps. + * + * @returns {Array.<Object>} Array of key/value pairs of a map. + */ +MapMirror.prototype.entries = function() { + var result = []; + + if (IS_WEAKMAP(this.value_)) { + var entries = %GetWeakMapEntries(this.value_); + for (var i = 0; i < entries.length; i += 2) { + result.push({ + key: entries[i], + value: entries[i + 1] + }); + } + return result; + } + + var iter = %_CallFunction(this.value_, builtins.MapEntries); + var next; + while (!(next = iter.next()).done) { + result.push({ + key: next.value[0], + value: next.value[1] + }); + } + return result; +}; + + +function SetMirror(value) { + %_CallFunction(this, value, SET_TYPE, ObjectMirror); +} +inherits(SetMirror, ObjectMirror); + + +/** + * Returns an array of elements of a set. + * This will keep elements alive for WeakSets. + * + * @returns {Array.<Object>} Array of elements of a set. 
+ */ +SetMirror.prototype.values = function() { + if (IS_WEAKSET(this.value_)) { + return %GetWeakSetValues(this.value_); + } + + var result = []; + var iter = %_CallFunction(this.value_, builtins.SetValues); + var next; + while (!(next = iter.next()).done) { + result.push(next.value); + } + return result; }; @@ -2299,6 +2455,9 @@ JSONProtocolSerializer.prototype.serializeReferenceWithDisplayData_ = case STRING_TYPE: o.value = mirror.getTruncatedValue(this.maxStringLength_()); break; + case SYMBOL_TYPE: + o.description = mirror.description(); + break; case FUNCTION_TYPE: o.name = mirror.name(); o.inferredName = mirror.inferredName(); @@ -2373,6 +2532,10 @@ JSONProtocolSerializer.prototype.serialize_ = function(mirror, reference, content.length = mirror.length(); break; + case SYMBOL_TYPE: + content.description = mirror.description(); + break; + case OBJECT_TYPE: case FUNCTION_TYPE: case ERROR_TYPE: @@ -2515,7 +2678,7 @@ JSONProtocolSerializer.prototype.serializeObject_ = function(mirror, content, if (mirror.isPromise()) { // Add promise specific properties. content.status = mirror.status(); - content.promiseValue = mirror.promiseValue(); + content.promiseValue = this.serializeReference(mirror.promiseValue()); } // Add actual properties - named properties followed by indexed properties. diff --git a/deps/v8/src/misc-intrinsics.h b/deps/v8/src/misc-intrinsics.h index a8582bbe118..5256a293a21 100644 --- a/deps/v8/src/misc-intrinsics.h +++ b/deps/v8/src/misc-intrinsics.h @@ -5,8 +5,8 @@ #ifndef V8_MISC_INTRINSICS_H_ #define V8_MISC_INTRINSICS_H_ -#include "../include/v8.h" -#include "globals.h" +#include "include/v8.h" +#include "src/globals.h" namespace v8 { namespace internal { diff --git a/deps/v8/src/mksnapshot.cc b/deps/v8/src/mksnapshot.cc index a8bf871f131..37b174e207f 100644 --- a/deps/v8/src/mksnapshot.cc +++ b/deps/v8/src/mksnapshot.cc @@ -9,18 +9,17 @@ #endif #include <signal.h> -#include "v8.h" +#include "src/v8.h" -#include "bootstrapper.h" -#include "flags.h" -#include "natives.h" -#include "platform.h" -#include "serialize.h" -#include "list.h" +#include "include/libplatform/libplatform.h" +#include "src/assembler.h" +#include "src/base/platform/platform.h" +#include "src/bootstrapper.h" +#include "src/flags.h" +#include "src/list.h" +#include "src/natives.h" +#include "src/serialize.h" -#if V8_TARGET_ARCH_ARM -#include "arm/assembler-arm-inl.h" -#endif using namespace v8; @@ -28,19 +27,8 @@ using namespace v8; class Compressor { public: virtual ~Compressor() {} - virtual bool Compress(i::Vector<char> input) = 0; - virtual i::Vector<char>* output() = 0; -}; - - -class ListSnapshotSink : public i::SnapshotByteSink { - public: - explicit ListSnapshotSink(i::List<char>* data) : data_(data) { } - virtual ~ListSnapshotSink() {} - virtual void Put(int byte, const char* description) { data_->Add(byte); } - virtual int Position() { return data_->length(); } - private: - i::List<char>* data_; + virtual bool Compress(i::Vector<i::byte> input) = 0; + virtual i::Vector<i::byte>* output() = 0; }; @@ -50,33 +38,80 @@ class SnapshotWriter { : fp_(GetFileDescriptorOrDie(snapshot_file)) , raw_file_(NULL) , raw_context_file_(NULL) - , compressor_(NULL) - , omit_(false) { + , startup_blob_file_(NULL) + , compressor_(NULL) { } ~SnapshotWriter() { fclose(fp_); if (raw_file_) fclose(raw_file_); if (raw_context_file_) fclose(raw_context_file_); + if (startup_blob_file_) fclose(startup_blob_file_); } void SetCompressor(Compressor* compressor) { compressor_ = compressor; } - void SetOmit(bool omit) { - 
omit_ = omit;
-  }
-
   void SetRawFiles(const char* raw_file, const char* raw_context_file) {
     raw_file_ = GetFileDescriptorOrDie(raw_file);
     raw_context_file_ = GetFileDescriptorOrDie(raw_context_file);
   }
-  void WriteSnapshot(const i::List<char>& snapshot_data,
+  void SetStartupBlobFile(const char* startup_blob_file) {
+    if (startup_blob_file != NULL)
+      startup_blob_file_ = GetFileDescriptorOrDie(startup_blob_file);
+  }
+
+  void WriteSnapshot(const i::List<i::byte>& snapshot_data,
                      const i::Serializer& serializer,
-                     const i::List<char>& context_snapshot_data,
+                     const i::List<i::byte>& context_snapshot_data,
                      const i::Serializer& context_serializer) const {
+    WriteSnapshotFile(snapshot_data, serializer,
+                      context_snapshot_data, context_serializer);
+    MaybeWriteStartupBlob(snapshot_data, serializer,
+                          context_snapshot_data, context_serializer);
+  }
+
+ private:
+  void MaybeWriteStartupBlob(const i::List<i::byte>& snapshot_data,
+                             const i::Serializer& serializer,
+                             const i::List<i::byte>& context_snapshot_data,
+                             const i::Serializer& context_serializer) const {
+    if (!startup_blob_file_)
+      return;
+
+    i::List<i::byte> startup_blob;
+    i::ListSnapshotSink sink(&startup_blob);
+
+    int spaces[] = {
+      i::NEW_SPACE, i::OLD_POINTER_SPACE, i::OLD_DATA_SPACE, i::CODE_SPACE,
+      i::MAP_SPACE, i::CELL_SPACE, i::PROPERTY_CELL_SPACE
+    };
+
+    i::byte* snapshot_bytes = snapshot_data.begin();
+    sink.PutBlob(snapshot_bytes, snapshot_data.length(), "snapshot");
+    for (size_t i = 0; i < ARRAY_SIZE(spaces); ++i)
+      sink.PutInt(serializer.CurrentAllocationAddress(spaces[i]), "spaces");
+
+    i::byte* context_bytes = context_snapshot_data.begin();
+    sink.PutBlob(context_bytes, context_snapshot_data.length(), "context");
+    for (size_t i = 0; i < ARRAY_SIZE(spaces); ++i)
+      sink.PutInt(context_serializer.CurrentAllocationAddress(spaces[i]),
+                  "spaces");
+
+    size_t written = fwrite(startup_blob.begin(), 1, startup_blob.length(),
+                            startup_blob_file_);
+    if (written != (size_t)startup_blob.length()) {
+      i::PrintF("Writing snapshot file failed. Aborting.\n");
+      exit(1);
+    }
+  }
+
+  void WriteSnapshotFile(const i::List<i::byte>& snapshot_data,
+                         const i::Serializer& serializer,
+                         const i::List<i::byte>& context_snapshot_data,
+                         const i::Serializer& context_serializer) const {
     WriteFilePrefix();
     WriteData("", snapshot_data, raw_file_);
     WriteData("context_", context_snapshot_data, raw_context_file_);
@@ -85,12 +120,11 @@ class SnapshotWriter {
     WriteFileSuffix();
   }
-  private:
   void WriteFilePrefix() const {
     fprintf(fp_, "// Autogenerated snapshot file. 
Do not edit.\n\n"); - fprintf(fp_, "#include \"v8.h\"\n"); - fprintf(fp_, "#include \"platform.h\"\n\n"); - fprintf(fp_, "#include \"snapshot.h\"\n\n"); + fprintf(fp_, "#include \"src/v8.h\"\n"); + fprintf(fp_, "#include \"src/base/platform/platform.h\"\n\n"); + fprintf(fp_, "#include \"src/snapshot.h\"\n\n"); fprintf(fp_, "namespace v8 {\n"); fprintf(fp_, "namespace internal {\n\n"); } @@ -100,11 +134,10 @@ class SnapshotWriter { fprintf(fp_, "} // namespace v8\n"); } - void WriteData(const char* prefix, - const i::List<char>& source_data, + void WriteData(const char* prefix, const i::List<i::byte>& source_data, FILE* raw_file) const { - const i::List <char>* data_to_be_written = NULL; - i::List<char> compressed_data; + const i::List<i::byte>* data_to_be_written = NULL; + i::List<i::byte> compressed_data; if (!compressor_) { data_to_be_written = &source_data; } else if (compressor_->Compress(source_data.ToVector())) { @@ -115,18 +148,18 @@ class SnapshotWriter { exit(1); } - ASSERT(data_to_be_written); + DCHECK(data_to_be_written); MaybeWriteRawFile(data_to_be_written, raw_file); WriteData(prefix, source_data, data_to_be_written); } - void MaybeWriteRawFile(const i::List<char>* data, FILE* raw_file) const { + void MaybeWriteRawFile(const i::List<i::byte>* data, FILE* raw_file) const { if (!data || !raw_file) return; // Sanity check, whether i::List iterators truly return pointers to an // internal array. - ASSERT(data->end() - data->begin() == data->length()); + DCHECK(data->end() - data->begin() == data->length()); size_t written = fwrite(data->begin(), 1, data->length(), raw_file); if (written != (size_t)data->length()) { @@ -135,17 +168,15 @@ class SnapshotWriter { } } - void WriteData(const char* prefix, - const i::List<char>& source_data, - const i::List<char>* data_to_be_written) const { + void WriteData(const char* prefix, const i::List<i::byte>& source_data, + const i::List<i::byte>* data_to_be_written) const { fprintf(fp_, "const byte Snapshot::%sdata_[] = {\n", prefix); - if (!omit_) - WriteSnapshotData(data_to_be_written); + WriteSnapshotData(data_to_be_written); fprintf(fp_, "};\n"); fprintf(fp_, "const int Snapshot::%ssize_ = %d;\n", prefix, data_to_be_written->length()); - if (data_to_be_written == &source_data && !omit_) { + if (data_to_be_written == &source_data) { fprintf(fp_, "const byte* Snapshot::%sraw_data_ = Snapshot::%sdata_;\n", prefix, prefix); fprintf(fp_, "const int Snapshot::%sraw_size_ = Snapshot::%ssize_;\n", @@ -175,7 +206,7 @@ class SnapshotWriter { prefix, name, ser.CurrentAllocationAddress(space)); } - void WriteSnapshotData(const i::List<char>* data) const { + void WriteSnapshotData(const i::List<i::byte>* data) const { for (int i = 0; i < data->length(); i++) { if ((i & 0x1f) == 0x1f) fprintf(fp_, "\n"); @@ -187,7 +218,7 @@ class SnapshotWriter { } FILE* GetFileDescriptorOrDie(const char* filename) { - FILE* fp = i::OS::FOpen(filename, "wb"); + FILE* fp = base::OS::FOpen(filename, "wb"); if (fp == NULL) { i::PrintF("Unable to open file \"%s\" for writing.\n", filename); exit(1); @@ -198,8 +229,8 @@ class SnapshotWriter { FILE* fp_; FILE* raw_file_; FILE* raw_context_file_; + FILE* startup_blob_file_; Compressor* compressor_; - bool omit_; }; @@ -241,7 +272,7 @@ class BZip2Decompressor : public StartupDataDecompressor { int* raw_data_size, const char* compressed_data, int compressed_data_size) { - ASSERT_EQ(StartupData::kBZip2, + DCHECK_EQ(StartupData::kBZip2, V8::GetCompressedStartupDataAlgorithm()); unsigned int decompressed_size = *raw_data_size; int 
result = @@ -273,14 +304,16 @@ void DumpException(Handle<Message> message) { int main(int argc, char** argv) { V8::InitializeICU(); - i::Isolate::SetCrashIfDefaultIsolateInitialized(); + v8::Platform* platform = v8::platform::CreateDefaultPlatform(); + v8::V8::InitializePlatform(platform); + i::CpuFeatures::Probe(true); // By default, log code create information in the snapshot. i::FLAG_log_code = true; // Print the usage if an error occurs when parsing the command line // flags or if the help flag is set. - int result = i::FlagList::SetFlagsFromCommandLine(&argc, argv, true, true); + int result = i::FlagList::SetFlagsFromCommandLine(&argc, argv, true); if (result > 0 || argc != 2 || i::FLAG_help) { ::printf("Usage: %s [flag] ... outfile\n", argv[0]); i::FlagList::PrintHelp(); @@ -297,102 +330,106 @@ int main(int argc, char** argv) { i::FLAG_logfile_per_isolate = false; Isolate* isolate = v8::Isolate::New(); - isolate->Enter(); - i::Isolate* internal_isolate = reinterpret_cast<i::Isolate*>(isolate); - i::Serializer::RequestEnable(internal_isolate); - Persistent<Context> context; - { - HandleScope handle_scope(isolate); - context.Reset(isolate, Context::New(isolate)); - } + { Isolate::Scope isolate_scope(isolate); + i::Isolate* internal_isolate = reinterpret_cast<i::Isolate*>(isolate); + internal_isolate->enable_serializer(); + + Persistent<Context> context; + { + HandleScope handle_scope(isolate); + context.Reset(isolate, Context::New(isolate)); + } - if (context.IsEmpty()) { - fprintf(stderr, - "\nException thrown while compiling natives - see above.\n\n"); - exit(1); - } - if (i::FLAG_extra_code != NULL) { - // Capture 100 frames if anything happens. - V8::SetCaptureStackTraceForUncaughtExceptions(true, 100); - HandleScope scope(isolate); - v8::Context::Scope cscope(v8::Local<v8::Context>::New(isolate, context)); - const char* name = i::FLAG_extra_code; - FILE* file = i::OS::FOpen(name, "rb"); - if (file == NULL) { - fprintf(stderr, "Failed to open '%s': errno %d\n", name, errno); + if (context.IsEmpty()) { + fprintf(stderr, + "\nException thrown while compiling natives - see above.\n\n"); exit(1); } + if (i::FLAG_extra_code != NULL) { + // Capture 100 frames if anything happens. 
+ V8::SetCaptureStackTraceForUncaughtExceptions(true, 100); + HandleScope scope(isolate); + v8::Context::Scope cscope(v8::Local<v8::Context>::New(isolate, context)); + const char* name = i::FLAG_extra_code; + FILE* file = base::OS::FOpen(name, "rb"); + if (file == NULL) { + fprintf(stderr, "Failed to open '%s': errno %d\n", name, errno); + exit(1); + } - fseek(file, 0, SEEK_END); - int size = ftell(file); - rewind(file); - - char* chars = new char[size + 1]; - chars[size] = '\0'; - for (int i = 0; i < size;) { - int read = static_cast<int>(fread(&chars[i], 1, size - i, file)); - if (read < 0) { - fprintf(stderr, "Failed to read '%s': errno %d\n", name, errno); + fseek(file, 0, SEEK_END); + int size = ftell(file); + rewind(file); + + char* chars = new char[size + 1]; + chars[size] = '\0'; + for (int i = 0; i < size;) { + int read = static_cast<int>(fread(&chars[i], 1, size - i, file)); + if (read < 0) { + fprintf(stderr, "Failed to read '%s': errno %d\n", name, errno); + exit(1); + } + i += read; + } + fclose(file); + Local<String> source = String::NewFromUtf8(isolate, chars); + TryCatch try_catch; + Local<Script> script = Script::Compile(source); + if (try_catch.HasCaught()) { + fprintf(stderr, "Failure compiling '%s'\n", name); + DumpException(try_catch.Message()); + exit(1); + } + script->Run(); + if (try_catch.HasCaught()) { + fprintf(stderr, "Failure running '%s'\n", name); + DumpException(try_catch.Message()); exit(1); } - i += read; - } - fclose(file); - Local<String> source = String::NewFromUtf8(isolate, chars); - TryCatch try_catch; - Local<Script> script = Script::Compile(source); - if (try_catch.HasCaught()) { - fprintf(stderr, "Failure compiling '%s'\n", name); - DumpException(try_catch.Message()); - exit(1); } - script->Run(); - if (try_catch.HasCaught()) { - fprintf(stderr, "Failure running '%s'\n", name); - DumpException(try_catch.Message()); - exit(1); + // Make sure all builtin scripts are cached. + { HandleScope scope(isolate); + for (int i = 0; i < i::Natives::GetBuiltinsCount(); i++) { + internal_isolate->bootstrapper()->NativesSourceLookup(i); + } } - } - // Make sure all builtin scripts are cached. - { HandleScope scope(isolate); - for (int i = 0; i < i::Natives::GetBuiltinsCount(); i++) { - internal_isolate->bootstrapper()->NativesSourceLookup(i); + // If we don't do this then we end up with a stray root pointing at the + // context even after we have disposed of the context. + internal_isolate->heap()->CollectAllGarbage( + i::Heap::kNoGCFlags, "mksnapshot"); + i::Object* raw_context = *v8::Utils::OpenPersistent(context); + context.Reset(); + + // This results in a somewhat smaller snapshot, probably because it gets + // rid of some things that are cached between garbage collections. 
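
Stripped of the re-indentation noise, the serialization step in the lines that follow runs in a fixed order; an annotated excerpt (calls exactly as in this hunk, comments added for orientation):

    i::List<i::byte> snapshot_data;                   // byte, not char, now
    i::ListSnapshotSink snapshot_sink(&snapshot_data);
    i::StartupSerializer ser(internal_isolate, &snapshot_sink);
    ser.SerializeStrongReferences();                  // 1. strong heap roots

    i::List<i::byte> context_data;
    i::ListSnapshotSink contex_sink(&context_data);
    i::PartialSerializer context_ser(internal_isolate, &ser, &contex_sink);
    context_ser.Serialize(&raw_context);              // 2. the context, resolving
                                                      //    back-refs through ser
    ser.SerializeWeakReferences();                    // 3. weak refs, only after
                                                      //    both passes

The i::byte element type matches the SnapshotWriter::WriteData signature changes earlier in this file; otherwise the serializer calls are unchanged apart from the i:: prefix on ListSnapshotSink and moving inside the new Isolate::Scope.
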
+ i::List<i::byte> snapshot_data; + i::ListSnapshotSink snapshot_sink(&snapshot_data); + i::StartupSerializer ser(internal_isolate, &snapshot_sink); + ser.SerializeStrongReferences(); + + i::List<i::byte> context_data; + i::ListSnapshotSink contex_sink(&context_data); + i::PartialSerializer context_ser(internal_isolate, &ser, &contex_sink); + context_ser.Serialize(&raw_context); + ser.SerializeWeakReferences(); + + { + SnapshotWriter writer(argv[1]); + if (i::FLAG_raw_file && i::FLAG_raw_context_file) + writer.SetRawFiles(i::FLAG_raw_file, i::FLAG_raw_context_file); + if (i::FLAG_startup_blob) + writer.SetStartupBlobFile(i::FLAG_startup_blob); + #ifdef COMPRESS_STARTUP_DATA_BZ2 + BZip2Compressor bzip2; + writer.SetCompressor(&bzip2); + #endif + writer.WriteSnapshot(snapshot_data, ser, context_data, context_ser); } } - // If we don't do this then we end up with a stray root pointing at the - // context even after we have disposed of the context. - internal_isolate->heap()->CollectAllGarbage( - i::Heap::kNoGCFlags, "mksnapshot"); - i::Object* raw_context = *v8::Utils::OpenPersistent(context); - context.Reset(); - - // This results in a somewhat smaller snapshot, probably because it gets rid - // of some things that are cached between garbage collections. - i::List<char> snapshot_data; - ListSnapshotSink snapshot_sink(&snapshot_data); - i::StartupSerializer ser(internal_isolate, &snapshot_sink); - ser.SerializeStrongReferences(); - - i::List<char> context_data; - ListSnapshotSink contex_sink(&context_data); - i::PartialSerializer context_ser(internal_isolate, &ser, &contex_sink); - context_ser.Serialize(&raw_context); - ser.SerializeWeakReferences(); - - { - SnapshotWriter writer(argv[1]); - writer.SetOmit(i::FLAG_omit); - if (i::FLAG_raw_file && i::FLAG_raw_context_file) - writer.SetRawFiles(i::FLAG_raw_file, i::FLAG_raw_context_file); -#ifdef COMPRESS_STARTUP_DATA_BZ2 - BZip2Compressor bzip2; - writer.SetCompressor(&bzip2); -#endif - writer.WriteSnapshot(snapshot_data, ser, context_data, context_ser); - } - isolate->Exit(); isolate->Dispose(); V8::Dispose(); + V8::ShutdownPlatform(); + delete platform; return 0; } diff --git a/deps/v8/src/msan.h b/deps/v8/src/msan.h index 5282583af5a..4130d22a652 100644 --- a/deps/v8/src/msan.h +++ b/deps/v8/src/msan.h @@ -7,7 +7,7 @@ #ifndef V8_MSAN_H_ #define V8_MSAN_H_ -#include "globals.h" +#include "src/globals.h" #ifndef __has_feature # define __has_feature(x) 0 diff --git a/deps/v8/src/natives-external.cc b/deps/v8/src/natives-external.cc new file mode 100644 index 00000000000..dfe3f826503 --- /dev/null +++ b/deps/v8/src/natives-external.cc @@ -0,0 +1,196 @@ +// Copyright 2014 the V8 project authors. All rights reserved. +// Use of this source code is governed by a BSD-style license that can be +// found in the LICENSE file. + +#include "src/natives.h" + +#include "src/base/logging.h" +#include "src/list.h" +#include "src/list-inl.h" +#include "src/snapshot-source-sink.h" +#include "src/vector.h" + +namespace v8 { +namespace internal { + + +/** + * NativesStore stores the 'native' (builtin) JS libraries. + * + * NativesStore needs to be initialized before using V8, usually by the + * embedder calling v8::SetNativesDataBlob, which calls SetNativesFromFile + * below. 
+ */
+class NativesStore {
+ public:
+  ~NativesStore() {}
+
+  int GetBuiltinsCount() { return native_names_.length(); }
+  int GetDebuggerCount() { return debugger_count_; }
+  Vector<const char> GetScriptName(int index) { return native_names_[index]; }
+  Vector<const char> GetRawScriptSource(int index) {
+    return native_source_[index];
+  }
+
+  int GetIndex(const char* name) {
+    for (int i = 0; i < native_names_.length(); ++i) {
+      if (strcmp(name, native_names_[i].start()) == 0) {
+        return i;
+      }
+    }
+    DCHECK(false);
+    return -1;
+  }
+
+  int GetRawScriptsSize() {
+    DCHECK(false);  // Used for compression. Doesn't really make sense here.
+    return 0;
+  }
+
+  Vector<const byte> GetScriptsSource() {
+    DCHECK(false);  // Used for compression. Doesn't really make sense here.
+    return Vector<const byte>();
+  }
+
+  static NativesStore* MakeFromScriptsSource(SnapshotByteSource* source) {
+    NativesStore* store = new NativesStore;
+
+    // We expect the libraries in the following format:
+    //   int: # of debugger sources.
+    //   2N blobs: N pairs of source name + actual source.
+    //   Then, repeat for non-debugger sources.
+    int debugger_count = source->GetInt();
+    for (int i = 0; i < debugger_count; ++i)
+      store->ReadNameAndContentPair(source);
+    int library_count = source->GetInt();
+    for (int i = 0; i < library_count; ++i)
+      store->ReadNameAndContentPair(source);
+
+    store->debugger_count_ = debugger_count;
+    return store;
+  }
+
+ private:
+  NativesStore() : debugger_count_(0) {}
+
+  bool ReadNameAndContentPair(SnapshotByteSource* bytes) {
+    const byte* name;
+    int name_length;
+    const byte* source;
+    int source_length;
+    bool success = bytes->GetBlob(&name, &name_length) &&
+                   bytes->GetBlob(&source, &source_length);
+    if (success) {
+      Vector<const char> name_vector(
+          reinterpret_cast<const char*>(name), name_length);
+      Vector<const char> source_vector(
+          reinterpret_cast<const char*>(source), source_length);
+      native_names_.Add(name_vector);
+      native_source_.Add(source_vector);
+    }
+    return success;
+  }
+
+  List<Vector<const char> > native_names_;
+  List<Vector<const char> > native_source_;
+  int debugger_count_;
+
+  DISALLOW_COPY_AND_ASSIGN(NativesStore);
+};
+
+
+template<NativeType type>
+class NativesHolder {
+ public:
+  static NativesStore* get() {
+    DCHECK(holder_);
+    return holder_;
+  }
+  static void set(NativesStore* store) {
+    DCHECK(store);
+    holder_ = store;
+  }
+
+ private:
+  static NativesStore* holder_;
+};
+
+template<NativeType type>
+NativesStore* NativesHolder<type>::holder_ = NULL;
+
+
+/**
+ * Read the Natives (library sources) blob, as generated by js2c + the build
+ * system.
+ */
+void SetNativesFromFile(StartupData* natives_blob) {
+  DCHECK(natives_blob);
+  DCHECK(natives_blob->data);
+  DCHECK(natives_blob->raw_size > 0);
+
+  SnapshotByteSource bytes(
+      reinterpret_cast<const byte*>(natives_blob->data),
+      natives_blob->raw_size);
+  NativesHolder<CORE>::set(NativesStore::MakeFromScriptsSource(&bytes));
+  NativesHolder<EXPERIMENTAL>::set(NativesStore::MakeFromScriptsSource(&bytes));
+  DCHECK(!bytes.HasMore());
+}
+
+
+// Implement NativesCollection<T> based on NativesHolder + NativesStore.
+//
+// (The callers expect a purely static interface, since this is how the
+// natives are usually compiled in. Since we implement them based on
+// runtime content, we have to implement this indirection to offer
+// a static interface.)
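
Given that layout, one store's worth of the blob can be walked with just the SnapshotByteSource calls used above. A minimal sketch under those assumptions (the helper name is hypothetical; the int/blob wire encoding is whatever js2c and SnapshotByteSource agree on):

    // Hypothetical helper: advance past one NativesStore in the blob.
    // Layout, per the comment in MakeFromScriptsSource:
    //   int        debugger source count N
    //   2N blobs   N (name, source) pairs for debugger scripts
    //   int        library count M
    //   2M blobs   M (name, source) pairs for the remaining libraries
    void SkipOneStore(SnapshotByteSource* source) {
      const byte* data;
      int length;
      int debugger_count = source->GetInt();
      for (int i = 0; i < 2 * debugger_count; ++i) source->GetBlob(&data, &length);
      int library_count = source->GetInt();
      for (int i = 0; i < 2 * library_count; ++i) source->GetBlob(&data, &length);
    }

SetNativesFromFile above consumes exactly two such stores back to back, CORE first and EXPERIMENTAL second, and DCHECKs that the blob has nothing left over.
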
+template<NativeType type>
+int NativesCollection<type>::GetBuiltinsCount() {
+  return NativesHolder<type>::get()->GetBuiltinsCount();
+}
+
+template<NativeType type>
+int NativesCollection<type>::GetDebuggerCount() {
+  return NativesHolder<type>::get()->GetDebuggerCount();
+}
+
+template<NativeType type>
+int NativesCollection<type>::GetIndex(const char* name) {
+  return NativesHolder<type>::get()->GetIndex(name);
+}
+
+template<NativeType type>
+int NativesCollection<type>::GetRawScriptsSize() {
+  return NativesHolder<type>::get()->GetRawScriptsSize();
+}
+
+template<NativeType type>
+Vector<const char> NativesCollection<type>::GetRawScriptSource(int index) {
+  return NativesHolder<type>::get()->GetRawScriptSource(index);
+}
+
+template<NativeType type>
+Vector<const char> NativesCollection<type>::GetScriptName(int index) {
+  return NativesHolder<type>::get()->GetScriptName(index);
+}
+
+template<NativeType type>
+Vector<const byte> NativesCollection<type>::GetScriptsSource() {
+  return NativesHolder<type>::get()->GetScriptsSource();
+}
+
+template<NativeType type>
+void NativesCollection<type>::SetRawScriptsSource(
+    Vector<const char> raw_source) {
+  CHECK(false);  // Use SetNativesFromFile for this implementation.
+}
+
+
+// The compiler can't 'see' all uses of the static methods and hence
+// may choose to elide them. Thus we'll explicitly instantiate these.
+template class NativesCollection<CORE>;
+template class NativesCollection<EXPERIMENTAL>;
+template class NativesCollection<D8>;
+template class NativesCollection<TEST>;
+
+} // namespace v8::internal
+} // namespace v8
diff --git a/deps/v8/src/natives.h b/deps/v8/src/natives.h
index 2f930dc7bdc..6ddedf02cab 100644
--- a/deps/v8/src/natives.h
+++ b/deps/v8/src/natives.h
@@ -5,6 +5,10 @@
 #ifndef V8_NATIVES_H_
 #define V8_NATIVES_H_
+#include "src/vector.h"
+
+namespace v8 { class StartupData; }  // Forward declaration.
+
 namespace v8 {
 namespace internal {
@@ -39,6 +43,11 @@ class NativesCollection {
 typedef NativesCollection<CORE> Natives;
 typedef NativesCollection<EXPERIMENTAL> ExperimentalNatives;
+#ifdef V8_USE_EXTERNAL_STARTUP_DATA
+// Used for reading the natives at runtime.
Implementation in natives-empty.cc +void SetNativesFromFile(StartupData* natives_blob); +#endif + } } // namespace v8::internal #endif // V8_NATIVES_H_ diff --git a/deps/v8/src/object-observe.js b/deps/v8/src/object-observe.js index 532b0d25254..76f39159e1d 100644 --- a/deps/v8/src/object-observe.js +++ b/deps/v8/src/object-observe.js @@ -35,7 +35,7 @@ var observationState; -function GetObservationState() { +function GetObservationStateJS() { if (IS_UNDEFINED(observationState)) observationState = %GetObservationState(); @@ -45,6 +45,7 @@ function GetObservationState() { observationState.notifierObjectInfoMap = %ObservationWeakMapCreate(); observationState.pendingObservers = null; observationState.nextCallbackPriority = 0; + observationState.lastMicrotaskId = 0; } return observationState; @@ -76,7 +77,7 @@ var contextMaps; function GetContextMaps() { if (IS_UNDEFINED(contextMaps)) { var map = GetWeakMapWrapper(); - var observationState = GetObservationState(); + var observationState = GetObservationStateJS(); contextMaps = { callbackInfoMap: new map(observationState.callbackInfoMap), objectInfoMap: new map(observationState.objectInfoMap), @@ -100,15 +101,15 @@ function GetNotifierObjectInfoMap() { } function GetPendingObservers() { - return GetObservationState().pendingObservers; + return GetObservationStateJS().pendingObservers; } function SetPendingObservers(pendingObservers) { - GetObservationState().pendingObservers = pendingObservers; + GetObservationStateJS().pendingObservers = pendingObservers; } function GetNextCallbackPriority() { - return GetObservationState().nextCallbackPriority++; + return GetObservationStateJS().nextCallbackPriority++; } function nullProtoObject() { @@ -127,9 +128,9 @@ function TypeMapRemoveType(typeMap, type) { typeMap[type]--; } -function TypeMapCreateFromList(typeList) { +function TypeMapCreateFromList(typeList, length) { var typeMap = TypeMapCreate(); - for (var i = 0; i < typeList.length; i++) { + for (var i = 0; i < length; i++) { TypeMapAddType(typeMap, typeList[i], true); } return typeMap; @@ -151,14 +152,17 @@ function TypeMapIsDisjointFrom(typeMap1, typeMap2) { return true; } -var defaultAcceptTypes = TypeMapCreateFromList([ - 'add', - 'update', - 'delete', - 'setPrototype', - 'reconfigure', - 'preventExtensions' -]); +var defaultAcceptTypes = (function() { + var defaultTypes = [ + 'add', + 'update', + 'delete', + 'setPrototype', + 'reconfigure', + 'preventExtensions' + ]; + return TypeMapCreateFromList(defaultTypes, defaultTypes.length); +})(); // An Observer is a registration to observe an object by a callback with // a given set of accept types. If the set of accept types is the default @@ -170,7 +174,7 @@ function ObserverCreate(callback, acceptList) { return callback; var observer = nullProtoObject(); observer.callback = callback; - observer.accept = TypeMapCreateFromList(acceptList); + observer.accept = acceptList; return observer; } @@ -306,16 +310,18 @@ function ObjectInfoGetPerformingTypes(objectInfo) { return objectInfo.performingCount > 0 ? objectInfo.performing : null; } -function AcceptArgIsValid(arg) { +function ConvertAcceptListToTypeMap(arg) { + // We use undefined as a sentinel for the default accept list. 
if (IS_UNDEFINED(arg)) - return true; + return arg; - if (!IS_SPEC_OBJECT(arg) || - !IS_NUMBER(arg.length) || - arg.length < 0) - return false; + if (!IS_SPEC_OBJECT(arg)) + throw MakeTypeError("observe_accept_invalid"); - return true; + var len = ToInteger(arg.length); + if (len < 0) len = 0; + + return TypeMapCreateFromList(arg, len); } // CallbackInfo's optimized state is just a number which represents its global @@ -362,15 +368,15 @@ function ObjectObserve(object, callback, acceptList) { throw MakeTypeError("observe_non_function", ["observe"]); if (ObjectIsFrozen(callback)) throw MakeTypeError("observe_callback_frozen"); - if (!AcceptArgIsValid(acceptList)) - throw MakeTypeError("observe_accept_invalid"); - return %ObjectObserveInObjectContext(object, callback, acceptList); + var objectObserveFn = %GetObjectContextObjectObserve(object); + return objectObserveFn(object, callback, acceptList); } function NativeObjectObserve(object, callback, acceptList) { var objectInfo = ObjectInfoGetOrCreate(object); - ObjectInfoAddObserver(objectInfo, callback, acceptList); + var typeList = ConvertAcceptListToTypeMap(acceptList); + ObjectInfoAddObserver(objectInfo, callback, typeList); return object; } @@ -415,8 +421,19 @@ function ObserverEnqueueIfActive(observer, objectInfo, changeRecord) { var callbackInfo = CallbackInfoNormalize(callback); if (IS_NULL(GetPendingObservers())) { - SetPendingObservers(nullProtoObject()) - EnqueueMicrotask(ObserveMicrotaskRunner); + SetPendingObservers(nullProtoObject()); + if (DEBUG_IS_ACTIVE) { + var id = ++GetObservationStateJS().lastMicrotaskId; + var name = "Object.observe"; + %EnqueueMicrotask(function() { + %DebugAsyncTaskEvent({ type: "willHandle", id: id, name: name }); + ObserveMicrotaskRunner(); + %DebugAsyncTaskEvent({ type: "didHandle", id: id, name: name }); + }); + %DebugAsyncTaskEvent({ type: "enqueue", id: id, name: name }); + } else { + %EnqueueMicrotask(ObserveMicrotaskRunner); + } } GetPendingObservers()[callbackInfo.priority] = callback; callbackInfo.push(changeRecord); @@ -433,10 +450,10 @@ function ObjectInfoEnqueueExternalChangeRecord(objectInfo, changeRecord, type) { for (var prop in changeRecord) { if (prop === 'object' || (hasType && prop === 'type')) continue; - %DefineOrRedefineDataProperty(newRecord, prop, changeRecord[prop], - READ_ONLY + DONT_DELETE); + %DefineDataPropertyUnchecked( + newRecord, prop, changeRecord[prop], READ_ONLY + DONT_DELETE); } - ObjectFreeze(newRecord); + ObjectFreezeJS(newRecord); ObjectInfoEnqueueInternalChangeRecord(objectInfo, newRecord); } @@ -484,8 +501,8 @@ function EnqueueSpliceRecord(array, index, removed, addedCount) { addedCount: addedCount }; - ObjectFreeze(changeRecord); - ObjectFreeze(changeRecord.removed); + ObjectFreezeJS(changeRecord); + ObjectFreezeJS(changeRecord.removed); ObjectInfoEnqueueInternalChangeRecord(objectInfo, changeRecord); } @@ -508,7 +525,7 @@ function NotifyChange(type, object, name, oldValue) { }; } - ObjectFreeze(changeRecord); + ObjectFreezeJS(changeRecord); ObjectInfoEnqueueInternalChangeRecord(objectInfo, changeRecord); } @@ -539,8 +556,8 @@ function ObjectNotifierPerformChange(changeType, changeFn) { if (!IS_SPEC_FUNCTION(changeFn)) throw MakeTypeError("observe_perform_non_function"); - return %ObjectNotifierPerformChangeInObjectContext( - objectInfo, changeType, changeFn); + var performChangeFn = %GetObjectContextNotifierPerformChange(objectInfo); + performChangeFn(objectInfo, changeType, changeFn); } function NativeObjectNotifierPerformChange(objectInfo, changeType, 
changeFn) { @@ -567,7 +584,8 @@ function ObjectGetNotifier(object) { if (!%ObjectWasCreatedInCurrentOrigin(object)) return null; - return %ObjectGetNotifierInObjectContext(object); + var getNotifierFn = %GetObjectContextObjectGetNotifier(object); + return getNotifierFn(object); } function NativeObjectGetNotifier(object) { diff --git a/deps/v8/src/objects-debug.cc b/deps/v8/src/objects-debug.cc index 7b7c9c9a705..4834ef2033e 100644 --- a/deps/v8/src/objects-debug.cc +++ b/deps/v8/src/objects-debug.cc @@ -2,13 +2,14 @@ // Use of this source code is governed by a BSD-style license that can be // found in the LICENSE file. -#include "v8.h" +#include "src/v8.h" -#include "disassembler.h" -#include "disasm.h" -#include "jsregexp.h" -#include "macro-assembler.h" -#include "objects-visiting.h" +#include "src/disasm.h" +#include "src/disassembler.h" +#include "src/heap/objects-visiting.h" +#include "src/jsregexp.h" +#include "src/macro-assembler.h" +#include "src/ostreams.h" namespace v8 { namespace internal { @@ -54,6 +55,7 @@ void HeapObject::HeapObjectVerify() { Map::cast(this)->MapVerify(); break; case HEAP_NUMBER_TYPE: + case MUTABLE_HEAP_NUMBER_TYPE: HeapNumber::cast(this)->HeapNumberVerify(); break; case FIXED_ARRAY_TYPE: @@ -205,7 +207,7 @@ void Symbol::SymbolVerify() { void HeapNumber::HeapNumberVerify() { - CHECK(IsHeapNumber()); + CHECK(IsHeapNumber() || IsMutableHeapNumber()); } @@ -261,12 +263,12 @@ void JSObject::JSObjectVerify() { for (int i = 0; i < map()->NumberOfOwnDescriptors(); i++) { if (descriptors->GetDetails(i).type() == FIELD) { Representation r = descriptors->GetDetails(i).representation(); - int field = descriptors->GetFieldIndex(i); - Object* value = RawFastPropertyAt(field); - if (r.IsDouble()) ASSERT(value->IsHeapNumber()); + FieldIndex index = FieldIndex::ForDescriptor(map(), i); + Object* value = RawFastPropertyAt(index); + if (r.IsDouble()) DCHECK(value->IsMutableHeapNumber()); if (value->IsUninitialized()) continue; - if (r.IsSmi()) ASSERT(value->IsSmi()); - if (r.IsHeapObject()) ASSERT(value->IsHeapObject()); + if (r.IsSmi()) DCHECK(value->IsSmi()); + if (r.IsHeapObject()) DCHECK(value->IsHeapObject()); HeapType* field_type = descriptors->GetFieldType(i); if (r.IsNone()) { CHECK(field_type->Is(HeapType::None())); @@ -298,17 +300,17 @@ void Map::MapVerify() { instance_size() < heap->Capacity())); VerifyHeapPointer(prototype()); VerifyHeapPointer(instance_descriptors()); - SLOW_ASSERT(instance_descriptors()->IsSortedNoDuplicates()); + SLOW_DCHECK(instance_descriptors()->IsSortedNoDuplicates()); if (HasTransitionArray()) { - SLOW_ASSERT(transitions()->IsSortedNoDuplicates()); - SLOW_ASSERT(transitions()->IsConsistentWithBackPointers(this)); + SLOW_DCHECK(transitions()->IsSortedNoDuplicates()); + SLOW_DCHECK(transitions()->IsConsistentWithBackPointers(this)); } } -void Map::SharedMapVerify() { +void Map::DictionaryMapVerify() { MapVerify(); - CHECK(is_shared()); + CHECK(is_dictionary_map()); CHECK(instance_descriptors()->IsEmpty()); CHECK_EQ(0, pre_allocated_property_fields()); CHECK_EQ(0, unused_property_fields()); @@ -347,6 +349,7 @@ void PolymorphicCodeCache::PolymorphicCodeCacheVerify() { void TypeFeedbackInfo::TypeFeedbackInfoVerify() { VerifyObjectField(kStorage1Offset); VerifyObjectField(kStorage2Offset); + VerifyObjectField(kStorage3Offset); } @@ -378,12 +381,14 @@ void FixedDoubleArray::FixedDoubleArrayVerify() { void ConstantPoolArray::ConstantPoolArrayVerify() { CHECK(IsConstantPoolArray()); - for (int i = 0; i < count_of_code_ptr_entries(); i++) { - 
Address code_entry = get_code_ptr_entry(first_code_ptr_index() + i); + ConstantPoolArray::Iterator code_iter(this, ConstantPoolArray::CODE_PTR); + while (!code_iter.is_finished()) { + Address code_entry = get_code_ptr_entry(code_iter.next_index()); VerifyPointer(Code::GetCodeFromTargetAddress(code_entry)); } - for (int i = 0; i < count_of_heap_ptr_entries(); i++) { - VerifyObjectField(OffsetOfElementAt(first_heap_ptr_index() + i)); + ConstantPoolArray::Iterator heap_iter(this, ConstantPoolArray::HEAP_PTR); + while (!heap_iter.is_finished()) { + VerifyObjectField(OffsetOfElementAt(heap_iter.next_index())); } } @@ -542,7 +547,7 @@ void JSGlobalProxy::JSGlobalProxyVerify() { VerifyObjectField(JSGlobalProxy::kNativeContextOffset); // Make sure that this object has no properties, elements. CHECK_EQ(0, properties()->length()); - CHECK(HasFastObjectElements()); + CHECK(HasFastSmiElements()); CHECK_EQ(0, FixedArray::cast(elements())->length()); } @@ -636,6 +641,8 @@ void Code::CodeVerify() { last_gc_pc = it.rinfo()->pc(); } } + CHECK(raw_type_feedback_info() == Smi::FromInt(0) || + raw_type_feedback_info()->IsSmi() == IsCodeStubOrIC()); } @@ -701,14 +708,8 @@ void JSSetIterator::JSSetIteratorVerify() { JSObjectVerify(); VerifyHeapPointer(table()); CHECK(table()->IsOrderedHashTable() || table()->IsUndefined()); - CHECK(index()->IsSmi()); - CHECK(count()->IsSmi()); - CHECK(kind()->IsSmi()); - VerifyHeapPointer(next_iterator()); - CHECK(next_iterator()->IsJSSetIterator() || next_iterator()->IsUndefined()); - VerifyHeapPointer(table()); - CHECK(previous_iterator()->IsJSSetIterator() - || previous_iterator()->IsUndefined()); + CHECK(index()->IsSmi() || index()->IsUndefined()); + CHECK(kind()->IsSmi() || kind()->IsUndefined()); } @@ -717,14 +718,8 @@ void JSMapIterator::JSMapIteratorVerify() { JSObjectVerify(); VerifyHeapPointer(table()); CHECK(table()->IsOrderedHashTable() || table()->IsUndefined()); - CHECK(index()->IsSmi()); - CHECK(count()->IsSmi()); - CHECK(kind()->IsSmi()); - VerifyHeapPointer(next_iterator()); - CHECK(next_iterator()->IsJSMapIterator() || next_iterator()->IsUndefined()); - VerifyHeapPointer(table()); - CHECK(previous_iterator()->IsJSMapIterator() - || previous_iterator()->IsUndefined()); + CHECK(index()->IsSmi() || index()->IsUndefined()); + CHECK(kind()->IsSmi() || kind()->IsUndefined()); } @@ -888,7 +883,6 @@ void AccessorPair::AccessorPairVerify() { CHECK(IsAccessorPair()); VerifyPointer(getter()); VerifyPointer(setter()); - VerifySmiField(kAccessFlagsOffset); } @@ -1018,7 +1012,7 @@ void NormalizedMapCache::NormalizedMapCacheVerify() { for (int i = 0; i < length(); i++) { Object* e = FixedArray::get(i); if (e->IsMap()) { - Map::cast(e)->SharedMapVerify(); + Map::cast(e)->DictionaryMapVerify(); } else { CHECK(e->IsUndefined()); } @@ -1150,13 +1144,15 @@ bool DescriptorArray::IsSortedNoDuplicates(int valid_entries) { for (int i = 0; i < number_of_descriptors(); i++) { Name* key = GetSortedKey(i); if (key == current_key) { - PrintDescriptors(); + OFStream os(stdout); + PrintDescriptors(os); return false; } current_key = key; uint32_t hash = GetSortedKey(i)->Hash(); if (hash < current) { - PrintDescriptors(); + OFStream os(stdout); + PrintDescriptors(os); return false; } current = hash; @@ -1166,19 +1162,21 @@ bool DescriptorArray::IsSortedNoDuplicates(int valid_entries) { bool TransitionArray::IsSortedNoDuplicates(int valid_entries) { - ASSERT(valid_entries == -1); + DCHECK(valid_entries == -1); Name* current_key = NULL; uint32_t current = 0; for (int i = 0; i < 
number_of_transitions(); i++) { Name* key = GetSortedKey(i); if (key == current_key) { - PrintTransitions(); + OFStream os(stdout); + PrintTransitions(os); return false; } current_key = key; uint32_t hash = GetSortedKey(i)->Hash(); if (hash < current) { - PrintTransitions(); + OFStream os(stdout); + PrintTransitions(os); return false; } current = hash; diff --git a/deps/v8/src/objects-inl.h b/deps/v8/src/objects-inl.h index 029e80d9bb9..7e1c5b1eb37 100644 --- a/deps/v8/src/objects-inl.h +++ b/deps/v8/src/objects-inl.h @@ -12,21 +12,25 @@ #ifndef V8_OBJECTS_INL_H_ #define V8_OBJECTS_INL_H_ -#include "elements.h" -#include "objects.h" -#include "contexts.h" -#include "conversions-inl.h" -#include "heap.h" -#include "isolate.h" -#include "heap-inl.h" -#include "property.h" -#include "spaces.h" -#include "store-buffer.h" -#include "v8memory.h" -#include "factory.h" -#include "incremental-marking.h" -#include "transitions-inl.h" -#include "objects-visiting.h" +#include "src/base/atomicops.h" +#include "src/contexts.h" +#include "src/conversions-inl.h" +#include "src/elements.h" +#include "src/factory.h" +#include "src/field-index-inl.h" +#include "src/heap/heap-inl.h" +#include "src/heap/heap.h" +#include "src/heap/incremental-marking.h" +#include "src/heap/objects-visiting.h" +#include "src/heap/spaces.h" +#include "src/heap/store-buffer.h" +#include "src/isolate.h" +#include "src/lookup.h" +#include "src/objects.h" +#include "src/property.h" +#include "src/prototype.h" +#include "src/transitions-inl.h" +#include "src/v8memory.h" namespace v8 { namespace internal { @@ -51,43 +55,47 @@ PropertyDetails PropertyDetails::AsDeleted() const { #define TYPE_CHECKER(type, instancetype) \ - bool Object::Is##type() { \ + bool Object::Is##type() const { \ return Object::IsHeapObject() && \ HeapObject::cast(this)->map()->instance_type() == instancetype; \ } -#define CAST_ACCESSOR(type) \ - type* type::cast(Object* object) { \ - SLOW_ASSERT(object->Is##type()); \ - return reinterpret_cast<type*>(object); \ +#define CAST_ACCESSOR(type) \ + type* type::cast(Object* object) { \ + SLOW_DCHECK(object->Is##type()); \ + return reinterpret_cast<type*>(object); \ + } \ + const type* type::cast(const Object* object) { \ + SLOW_DCHECK(object->Is##type()); \ + return reinterpret_cast<const type*>(object); \ } -#define INT_ACCESSORS(holder, name, offset) \ - int holder::name() { return READ_INT_FIELD(this, offset); } \ +#define INT_ACCESSORS(holder, name, offset) \ + int holder::name() const { return READ_INT_FIELD(this, offset); } \ void holder::set_##name(int value) { WRITE_INT_FIELD(this, offset, value); } -#define ACCESSORS(holder, name, type, offset) \ - type* holder::name() { return type::cast(READ_FIELD(this, offset)); } \ - void holder::set_##name(type* value, WriteBarrierMode mode) { \ - WRITE_FIELD(this, offset, value); \ - CONDITIONAL_WRITE_BARRIER(GetHeap(), this, offset, value, mode); \ +#define ACCESSORS(holder, name, type, offset) \ + type* holder::name() const { return type::cast(READ_FIELD(this, offset)); } \ + void holder::set_##name(type* value, WriteBarrierMode mode) { \ + WRITE_FIELD(this, offset, value); \ + CONDITIONAL_WRITE_BARRIER(GetHeap(), this, offset, value, mode); \ } // Getter that returns a tagged Smi and setter that writes a tagged Smi. 
-#define ACCESSORS_TO_SMI(holder, name, offset) \ - Smi* holder::name() { return Smi::cast(READ_FIELD(this, offset)); } \ - void holder::set_##name(Smi* value, WriteBarrierMode mode) { \ - WRITE_FIELD(this, offset, value); \ +#define ACCESSORS_TO_SMI(holder, name, offset) \ + Smi* holder::name() const { return Smi::cast(READ_FIELD(this, offset)); } \ + void holder::set_##name(Smi* value, WriteBarrierMode mode) { \ + WRITE_FIELD(this, offset, value); \ } // Getter that returns a Smi as an int and writes an int as a Smi. #define SMI_ACCESSORS(holder, name, offset) \ - int holder::name() { \ + int holder::name() const { \ Object* value = READ_FIELD(this, offset); \ return Smi::cast(value)->value(); \ } \ @@ -96,7 +104,7 @@ PropertyDetails PropertyDetails::AsDeleted() const { } #define SYNCHRONIZED_SMI_ACCESSORS(holder, name, offset) \ - int holder::synchronized_##name() { \ + int holder::synchronized_##name() const { \ Object* value = ACQUIRE_READ_FIELD(this, offset); \ return Smi::cast(value)->value(); \ } \ @@ -105,7 +113,7 @@ PropertyDetails PropertyDetails::AsDeleted() const { } #define NOBARRIER_SMI_ACCESSORS(holder, name, offset) \ - int holder::nobarrier_##name() { \ + int holder::nobarrier_##name() const { \ Object* value = NOBARRIER_READ_FIELD(this, offset); \ return Smi::cast(value)->value(); \ } \ @@ -114,13 +122,13 @@ PropertyDetails PropertyDetails::AsDeleted() const { } #define BOOL_GETTER(holder, field, name, offset) \ - bool holder::name() { \ + bool holder::name() const { \ return BooleanBit::get(field(), offset); \ } \ #define BOOL_ACCESSORS(holder, field, name, offset) \ - bool holder::name() { \ + bool holder::name() const { \ return BooleanBit::get(field(), offset); \ } \ void holder::set_##name(bool value) { \ @@ -128,74 +136,75 @@ PropertyDetails PropertyDetails::AsDeleted() const { } -bool Object::IsFixedArrayBase() { +bool Object::IsFixedArrayBase() const { return IsFixedArray() || IsFixedDoubleArray() || IsConstantPoolArray() || IsFixedTypedArrayBase() || IsExternalArray(); } // External objects are not extensible, so the map check is enough. 
-bool Object::IsExternal() { +bool Object::IsExternal() const { return Object::IsHeapObject() && HeapObject::cast(this)->map() == HeapObject::cast(this)->GetHeap()->external_map(); } -bool Object::IsAccessorInfo() { +bool Object::IsAccessorInfo() const { return IsExecutableAccessorInfo() || IsDeclaredAccessorInfo(); } -bool Object::IsSmi() { +bool Object::IsSmi() const { return HAS_SMI_TAG(this); } -bool Object::IsHeapObject() { +bool Object::IsHeapObject() const { return Internals::HasHeapObjectTag(this); } TYPE_CHECKER(HeapNumber, HEAP_NUMBER_TYPE) +TYPE_CHECKER(MutableHeapNumber, MUTABLE_HEAP_NUMBER_TYPE) TYPE_CHECKER(Symbol, SYMBOL_TYPE) -bool Object::IsString() { +bool Object::IsString() const { return Object::IsHeapObject() && HeapObject::cast(this)->map()->instance_type() < FIRST_NONSTRING_TYPE; } -bool Object::IsName() { +bool Object::IsName() const { return IsString() || IsSymbol(); } -bool Object::IsUniqueName() { +bool Object::IsUniqueName() const { return IsInternalizedString() || IsSymbol(); } -bool Object::IsSpecObject() { +bool Object::IsSpecObject() const { return Object::IsHeapObject() && HeapObject::cast(this)->map()->instance_type() >= FIRST_SPEC_OBJECT_TYPE; } -bool Object::IsSpecFunction() { +bool Object::IsSpecFunction() const { if (!Object::IsHeapObject()) return false; InstanceType type = HeapObject::cast(this)->map()->instance_type(); return type == JS_FUNCTION_TYPE || type == JS_FUNCTION_PROXY_TYPE; } -bool Object::IsTemplateInfo() { +bool Object::IsTemplateInfo() const { return IsObjectTemplateInfo() || IsFunctionTemplateInfo(); } -bool Object::IsInternalizedString() { +bool Object::IsInternalizedString() const { if (!this->IsHeapObject()) return false; uint32_t type = HeapObject::cast(this)->map()->instance_type(); STATIC_ASSERT(kNotInternalizedTag != 0); @@ -204,52 +213,52 @@ bool Object::IsInternalizedString() { } -bool Object::IsConsString() { +bool Object::IsConsString() const { if (!IsString()) return false; return StringShape(String::cast(this)).IsCons(); } -bool Object::IsSlicedString() { +bool Object::IsSlicedString() const { if (!IsString()) return false; return StringShape(String::cast(this)).IsSliced(); } -bool Object::IsSeqString() { +bool Object::IsSeqString() const { if (!IsString()) return false; return StringShape(String::cast(this)).IsSequential(); } -bool Object::IsSeqOneByteString() { +bool Object::IsSeqOneByteString() const { if (!IsString()) return false; return StringShape(String::cast(this)).IsSequential() && String::cast(this)->IsOneByteRepresentation(); } -bool Object::IsSeqTwoByteString() { +bool Object::IsSeqTwoByteString() const { if (!IsString()) return false; return StringShape(String::cast(this)).IsSequential() && String::cast(this)->IsTwoByteRepresentation(); } -bool Object::IsExternalString() { +bool Object::IsExternalString() const { if (!IsString()) return false; return StringShape(String::cast(this)).IsExternal(); } -bool Object::IsExternalAsciiString() { +bool Object::IsExternalAsciiString() const { if (!IsString()) return false; return StringShape(String::cast(this)).IsExternal() && String::cast(this)->IsOneByteRepresentation(); } -bool Object::IsExternalTwoByteString() { +bool Object::IsExternalTwoByteString() const { if (!IsString()) return false; return StringShape(String::cast(this)).IsExternal() && String::cast(this)->IsTwoByteRepresentation(); @@ -270,49 +279,66 @@ Handle<Object> Object::NewStorageFor(Isolate* isolate, return handle(Smi::FromInt(0), isolate); } if (!representation.IsDouble()) return object; + double 
value; if (object->IsUninitialized()) { - return isolate->factory()->NewHeapNumber(0); + value = 0; + } else if (object->IsMutableHeapNumber()) { + value = HeapNumber::cast(*object)->value(); + } else { + value = object->Number(); + } + return isolate->factory()->NewHeapNumber(value, MUTABLE); +} + + +Handle<Object> Object::WrapForRead(Isolate* isolate, + Handle<Object> object, + Representation representation) { + DCHECK(!object->IsUninitialized()); + if (!representation.IsDouble()) { + DCHECK(object->FitsRepresentation(representation)); + return object; } - return isolate->factory()->NewHeapNumber(object->Number()); + return isolate->factory()->NewHeapNumber(HeapNumber::cast(*object)->value()); } -StringShape::StringShape(String* str) +StringShape::StringShape(const String* str) : type_(str->map()->instance_type()) { set_valid(); - ASSERT((type_ & kIsNotStringMask) == kStringTag); + DCHECK((type_ & kIsNotStringMask) == kStringTag); } StringShape::StringShape(Map* map) : type_(map->instance_type()) { set_valid(); - ASSERT((type_ & kIsNotStringMask) == kStringTag); + DCHECK((type_ & kIsNotStringMask) == kStringTag); } StringShape::StringShape(InstanceType t) : type_(static_cast<uint32_t>(t)) { set_valid(); - ASSERT((type_ & kIsNotStringMask) == kStringTag); + DCHECK((type_ & kIsNotStringMask) == kStringTag); } bool StringShape::IsInternalized() { - ASSERT(valid()); + DCHECK(valid()); STATIC_ASSERT(kNotInternalizedTag != 0); return (type_ & (kIsNotStringMask | kIsNotInternalizedMask)) == (kStringTag | kInternalizedTag); } -bool String::IsOneByteRepresentation() { +bool String::IsOneByteRepresentation() const { uint32_t type = map()->instance_type(); return (type & kStringEncodingMask) == kOneByteStringTag; } -bool String::IsTwoByteRepresentation() { +bool String::IsTwoByteRepresentation() const { uint32_t type = map()->instance_type(); return (type & kStringEncodingMask) == kTwoByteStringTag; } @@ -322,7 +348,7 @@ bool String::IsOneByteRepresentationUnderneath() { uint32_t type = map()->instance_type(); STATIC_ASSERT(kIsIndirectStringTag != 0); STATIC_ASSERT((kIsIndirectStringMask & kStringEncodingMask) == 0); - ASSERT(IsFlat()); + DCHECK(IsFlat()); switch (type & (kIsIndirectStringMask | kStringEncodingMask)) { case kOneByteStringTag: return true; @@ -338,7 +364,7 @@ bool String::IsTwoByteRepresentationUnderneath() { uint32_t type = map()->instance_type(); STATIC_ASSERT(kIsIndirectStringTag != 0); STATIC_ASSERT((kIsIndirectStringMask & kStringEncodingMask) == 0); - ASSERT(IsFlat()); + DCHECK(IsFlat()); switch (type & (kIsIndirectStringMask | kStringEncodingMask)) { case kOneByteStringTag: return false; @@ -398,10 +424,10 @@ uint32_t StringShape::full_representation_tag() { } -STATIC_CHECK((kStringRepresentationMask | kStringEncodingMask) == +STATIC_ASSERT((kStringRepresentationMask | kStringEncodingMask) == Internals::kFullStringRepresentationMask); -STATIC_CHECK(static_cast<uint32_t>(kStringEncodingMask) == +STATIC_ASSERT(static_cast<uint32_t>(kStringEncodingMask) == Internals::kStringEncodingMask); @@ -420,10 +446,10 @@ bool StringShape::IsExternalAscii() { } -STATIC_CHECK((kExternalStringTag | kOneByteStringTag) == +STATIC_ASSERT((kExternalStringTag | kOneByteStringTag) == Internals::kExternalAsciiRepresentationTag); -STATIC_CHECK(v8::String::ASCII_ENCODING == kOneByteStringTag); +STATIC_ASSERT(v8::String::ASCII_ENCODING == kOneByteStringTag); bool StringShape::IsExternalTwoByte() { @@ -431,13 +457,13 @@ bool StringShape::IsExternalTwoByte() { } -STATIC_CHECK((kExternalStringTag | 
kTwoByteStringTag) == +STATIC_ASSERT((kExternalStringTag | kTwoByteStringTag) == Internals::kExternalTwoByteRepresentationTag); -STATIC_CHECK(v8::String::TWO_BYTE_ENCODING == kTwoByteStringTag); +STATIC_ASSERT(v8::String::TWO_BYTE_ENCODING == kTwoByteStringTag); uc32 FlatStringReader::Get(int index) { - ASSERT(0 <= index && index <= length_); + DCHECK(0 <= index && index <= length_); if (is_ascii_) { return static_cast<const byte*>(start_)[index]; } else { @@ -479,7 +505,7 @@ class SequentialStringKey : public HashTableKey { seed_); uint32_t result = hash_field_ >> String::kHashShift; - ASSERT(result != 0); // Ensure that the hash value of 0 is never computed. + DCHECK(result != 0); // Ensure that the hash value of 0 is never computed. return result; } @@ -515,17 +541,17 @@ class SubStringKey : public HashTableKey { if (string_->IsSlicedString()) { string_ = Handle<String>(Unslice(*string_, &from_)); } - ASSERT(string_->IsSeqString() || string->IsExternalString()); + DCHECK(string_->IsSeqString() || string->IsExternalString()); } virtual uint32_t Hash() V8_OVERRIDE { - ASSERT(length_ >= 0); - ASSERT(from_ + length_ <= string_->length()); + DCHECK(length_ >= 0); + DCHECK(from_ + length_ <= string_->length()); const Char* chars = GetChars() + from_; hash_field_ = StringHasher::HashSequentialString( chars, length_, string_->GetHeap()->HashSeed()); uint32_t result = hash_field_ >> String::kHashShift; - ASSERT(result != 0); // Ensure that the hash value of 0 is never computed. + DCHECK(result != 0); // Ensure that the hash value of 0 is never computed. return result; } @@ -581,7 +607,7 @@ class Utf8StringKey : public HashTableKey { if (hash_field_ != 0) return hash_field_ >> String::kHashShift; hash_field_ = StringHasher::ComputeUtf8Hash(string_, seed_, &chars_); uint32_t result = hash_field_ >> String::kHashShift; - ASSERT(result != 0); // Ensure that the hash value of 0 is never computed. + DCHECK(result != 0); // Ensure that the hash value of 0 is never computed. 
return result; } @@ -602,7 +628,7 @@ class Utf8StringKey : public HashTableKey { }; -bool Object::IsNumber() { +bool Object::IsNumber() const { return IsSmi() || IsHeapNumber(); } @@ -611,14 +637,14 @@ TYPE_CHECKER(ByteArray, BYTE_ARRAY_TYPE) TYPE_CHECKER(FreeSpace, FREE_SPACE_TYPE) -bool Object::IsFiller() { +bool Object::IsFiller() const { if (!Object::IsHeapObject()) return false; InstanceType instance_type = HeapObject::cast(this)->map()->instance_type(); return instance_type == FREE_SPACE_TYPE || instance_type == FILLER_TYPE; } -bool Object::IsExternalArray() { +bool Object::IsExternalArray() const { if (!Object::IsHeapObject()) return false; InstanceType instance_type = @@ -636,7 +662,7 @@ TYPED_ARRAYS(TYPED_ARRAY_TYPE_CHECKER) #undef TYPED_ARRAY_TYPE_CHECKER -bool Object::IsFixedTypedArrayBase() { +bool Object::IsFixedTypedArrayBase() const { if (!Object::IsHeapObject()) return false; InstanceType instance_type = @@ -646,24 +672,23 @@ bool Object::IsFixedTypedArrayBase() { } -bool Object::IsJSReceiver() { +bool Object::IsJSReceiver() const { STATIC_ASSERT(LAST_JS_RECEIVER_TYPE == LAST_TYPE); return IsHeapObject() && HeapObject::cast(this)->map()->instance_type() >= FIRST_JS_RECEIVER_TYPE; } -bool Object::IsJSObject() { +bool Object::IsJSObject() const { STATIC_ASSERT(LAST_JS_OBJECT_TYPE == LAST_TYPE); return IsHeapObject() && HeapObject::cast(this)->map()->instance_type() >= FIRST_JS_OBJECT_TYPE; } -bool Object::IsJSProxy() { +bool Object::IsJSProxy() const { if (!Object::IsHeapObject()) return false; - InstanceType type = HeapObject::cast(this)->map()->instance_type(); - return FIRST_JS_PROXY_TYPE <= type && type <= LAST_JS_PROXY_TYPE; + return HeapObject::cast(this)->map()->IsJSProxyMap(); } @@ -681,22 +706,22 @@ TYPE_CHECKER(FixedDoubleArray, FIXED_DOUBLE_ARRAY_TYPE) TYPE_CHECKER(ConstantPoolArray, CONSTANT_POOL_ARRAY_TYPE) -bool Object::IsJSWeakCollection() { +bool Object::IsJSWeakCollection() const { return IsJSWeakMap() || IsJSWeakSet(); } -bool Object::IsDescriptorArray() { +bool Object::IsDescriptorArray() const { return IsFixedArray(); } -bool Object::IsTransitionArray() { +bool Object::IsTransitionArray() const { return IsFixedArray(); } -bool Object::IsDeoptimizationInputData() { +bool Object::IsDeoptimizationInputData() const { // Must be a fixed array. if (!IsFixedArray()) return false; @@ -706,14 +731,23 @@ bool Object::IsDeoptimizationInputData() { // the entry size. int length = FixedArray::cast(this)->length(); if (length == 0) return true; + if (length < DeoptimizationInputData::kFirstDeoptEntryIndex) return false; + + FixedArray* self = FixedArray::cast(const_cast<Object*>(this)); + int deopt_count = + Smi::cast(self->get(DeoptimizationInputData::kDeoptEntryCountIndex)) + ->value(); + int patch_count = + Smi::cast( + self->get( + DeoptimizationInputData::kReturnAddressPatchEntryCountIndex)) + ->value(); - length -= DeoptimizationInputData::kFirstDeoptEntryIndex; - return length >= 0 && - length % DeoptimizationInputData::kDeoptEntrySize == 0; + return length == DeoptimizationInputData::LengthFor(deopt_count, patch_count); } -bool Object::IsDeoptimizationOutputData() { +bool Object::IsDeoptimizationOutputData() const { if (!IsFixedArray()) return false; // There's actually no way to see the difference between a fixed array and // a deoptimization data array. 
Since this is used for asserts we can check @@ -723,7 +757,7 @@ bool Object::IsDeoptimizationOutputData() { } -bool Object::IsDependentCode() { +bool Object::IsDependentCode() const { if (!IsFixedArray()) return false; // There's actually no way to see the difference between a fixed array and // a dependent codes array. @@ -731,7 +765,7 @@ bool Object::IsDependentCode() { } -bool Object::IsContext() { +bool Object::IsContext() const { if (!Object::IsHeapObject()) return false; Map* map = HeapObject::cast(this)->map(); Heap* heap = map->GetHeap(); @@ -745,14 +779,14 @@ bool Object::IsContext() { } -bool Object::IsNativeContext() { +bool Object::IsNativeContext() const { return Object::IsHeapObject() && HeapObject::cast(this)->map() == HeapObject::cast(this)->GetHeap()->native_context_map(); } -bool Object::IsScopeInfo() { +bool Object::IsScopeInfo() const { return Object::IsHeapObject() && HeapObject::cast(this)->map() == HeapObject::cast(this)->GetHeap()->scope_info_map(); @@ -779,7 +813,7 @@ TYPE_CHECKER(JSDate, JS_DATE_TYPE) TYPE_CHECKER(JSMessageObject, JS_MESSAGE_OBJECT_TYPE) -bool Object::IsStringWrapper() { +bool Object::IsStringWrapper() const { return IsJSValue() && JSValue::cast(this)->value()->IsString(); } @@ -787,7 +821,7 @@ bool Object::IsStringWrapper() { TYPE_CHECKER(Foreign, FOREIGN_TYPE) -bool Object::IsBoolean() { +bool Object::IsBoolean() const { return IsOddball() && ((Oddball::cast(this)->kind() & Oddball::kNotBooleanMask) == 0); } @@ -799,7 +833,7 @@ TYPE_CHECKER(JSTypedArray, JS_TYPED_ARRAY_TYPE) TYPE_CHECKER(JSDataView, JS_DATA_VIEW_TYPE) -bool Object::IsJSArrayBufferView() { +bool Object::IsJSArrayBufferView() const { return IsJSDataView() || IsJSTypedArray(); } @@ -812,27 +846,47 @@ template <> inline bool Is<JSArray>(Object* obj) { } -bool Object::IsHashTable() { +bool Object::IsHashTable() const { return Object::IsHeapObject() && HeapObject::cast(this)->map() == HeapObject::cast(this)->GetHeap()->hash_table_map(); } -bool Object::IsDictionary() { +bool Object::IsWeakHashTable() const { + return IsHashTable(); +} + + +bool Object::IsDictionary() const { return IsHashTable() && this != HeapObject::cast(this)->GetHeap()->string_table(); } -bool Object::IsStringTable() { +bool Object::IsNameDictionary() const { + return IsDictionary(); +} + + +bool Object::IsSeededNumberDictionary() const { + return IsDictionary(); +} + + +bool Object::IsUnseededNumberDictionary() const { + return IsDictionary(); +} + + +bool Object::IsStringTable() const { return IsHashTable(); } -bool Object::IsJSFunctionResultCache() { +bool Object::IsJSFunctionResultCache() const { if (!IsFixedArray()) return false; - FixedArray* self = FixedArray::cast(this); + const FixedArray* self = FixedArray::cast(this); int length = self->length(); if (length < JSFunctionResultCache::kEntriesIndex) return false; if ((length - JSFunctionResultCache::kEntriesIndex) @@ -841,7 +895,10 @@ bool Object::IsJSFunctionResultCache() { } #ifdef VERIFY_HEAP if (FLAG_verify_heap) { - reinterpret_cast<JSFunctionResultCache*>(this)-> + // TODO(svenpanne) We use const_cast here and below to break our dependency + // cycle between the predicates and the verifiers. This can be removed when + // the verifiers are const-correct, too. 
+ reinterpret_cast<JSFunctionResultCache*>(const_cast<Object*>(this))-> JSFunctionResultCacheVerify(); } #endif @@ -849,7 +906,7 @@ bool Object::IsJSFunctionResultCache() { } -bool Object::IsNormalizedMapCache() { +bool Object::IsNormalizedMapCache() const { return NormalizedMapCache::IsNormalizedMapCache(this); } @@ -859,68 +916,79 @@ int NormalizedMapCache::GetIndex(Handle<Map> map) { } -bool NormalizedMapCache::IsNormalizedMapCache(Object* obj) { +bool NormalizedMapCache::IsNormalizedMapCache(const Object* obj) { if (!obj->IsFixedArray()) return false; if (FixedArray::cast(obj)->length() != NormalizedMapCache::kEntries) { return false; } #ifdef VERIFY_HEAP if (FLAG_verify_heap) { - reinterpret_cast<NormalizedMapCache*>(obj)->NormalizedMapCacheVerify(); + reinterpret_cast<NormalizedMapCache*>(const_cast<Object*>(obj))-> + NormalizedMapCacheVerify(); } #endif return true; } -bool Object::IsCompilationCacheTable() { +bool Object::IsCompilationCacheTable() const { return IsHashTable(); } -bool Object::IsCodeCacheHashTable() { +bool Object::IsCodeCacheHashTable() const { return IsHashTable(); } -bool Object::IsPolymorphicCodeCacheHashTable() { +bool Object::IsPolymorphicCodeCacheHashTable() const { return IsHashTable(); } -bool Object::IsMapCache() { +bool Object::IsMapCache() const { return IsHashTable(); } -bool Object::IsObjectHashTable() { +bool Object::IsObjectHashTable() const { return IsHashTable(); } -bool Object::IsOrderedHashTable() { +bool Object::IsOrderedHashTable() const { return IsHeapObject() && HeapObject::cast(this)->map() == HeapObject::cast(this)->GetHeap()->ordered_hash_table_map(); } -bool Object::IsPrimitive() { +bool Object::IsOrderedHashSet() const { + return IsOrderedHashTable(); +} + + +bool Object::IsOrderedHashMap() const { + return IsOrderedHashTable(); +} + + +bool Object::IsPrimitive() const { return IsOddball() || IsNumber() || IsString(); } -bool Object::IsJSGlobalProxy() { +bool Object::IsJSGlobalProxy() const { bool result = IsHeapObject() && (HeapObject::cast(this)->map()->instance_type() == JS_GLOBAL_PROXY_TYPE); - ASSERT(!result || + DCHECK(!result || HeapObject::cast(this)->map()->is_access_check_needed()); return result; } -bool Object::IsGlobalObject() { +bool Object::IsGlobalObject() const { if (!IsHeapObject()) return false; InstanceType type = HeapObject::cast(this)->map()->instance_type(); @@ -933,25 +1001,24 @@ TYPE_CHECKER(JSGlobalObject, JS_GLOBAL_OBJECT_TYPE) TYPE_CHECKER(JSBuiltinsObject, JS_BUILTINS_OBJECT_TYPE) -bool Object::IsUndetectableObject() { +bool Object::IsUndetectableObject() const { return IsHeapObject() && HeapObject::cast(this)->map()->is_undetectable(); } -bool Object::IsAccessCheckNeeded() { +bool Object::IsAccessCheckNeeded() const { if (!IsHeapObject()) return false; if (IsJSGlobalProxy()) { - JSGlobalProxy* proxy = JSGlobalProxy::cast(this); - GlobalObject* global = - proxy->GetIsolate()->context()->global_object(); + const JSGlobalProxy* proxy = JSGlobalProxy::cast(this); + GlobalObject* global = proxy->GetIsolate()->context()->global_object(); return proxy->IsDetachedFrom(global); } return HeapObject::cast(this)->map()->is_access_check_needed(); } -bool Object::IsStruct() { +bool Object::IsStruct() const { if (!IsHeapObject()) return false; switch (HeapObject::cast(this)->map()->instance_type()) { #define MAKE_STRUCT_CASE(NAME, Name, name) case NAME##_TYPE: return true; @@ -962,68 +1029,74 @@ bool Object::IsStruct() { } -#define MAKE_STRUCT_PREDICATE(NAME, Name, name) \ - bool Object::Is##Name() { \ - return 
Object::IsHeapObject() \ +#define MAKE_STRUCT_PREDICATE(NAME, Name, name) \ + bool Object::Is##Name() const { \ + return Object::IsHeapObject() \ && HeapObject::cast(this)->map()->instance_type() == NAME##_TYPE; \ } STRUCT_LIST(MAKE_STRUCT_PREDICATE) #undef MAKE_STRUCT_PREDICATE -bool Object::IsUndefined() { +bool Object::IsUndefined() const { return IsOddball() && Oddball::cast(this)->kind() == Oddball::kUndefined; } -bool Object::IsNull() { +bool Object::IsNull() const { return IsOddball() && Oddball::cast(this)->kind() == Oddball::kNull; } -bool Object::IsTheHole() { +bool Object::IsTheHole() const { return IsOddball() && Oddball::cast(this)->kind() == Oddball::kTheHole; } -bool Object::IsException() { +bool Object::IsException() const { return IsOddball() && Oddball::cast(this)->kind() == Oddball::kException; } -bool Object::IsUninitialized() { +bool Object::IsUninitialized() const { return IsOddball() && Oddball::cast(this)->kind() == Oddball::kUninitialized; } -bool Object::IsTrue() { +bool Object::IsTrue() const { return IsOddball() && Oddball::cast(this)->kind() == Oddball::kTrue; } -bool Object::IsFalse() { +bool Object::IsFalse() const { return IsOddball() && Oddball::cast(this)->kind() == Oddball::kFalse; } -bool Object::IsArgumentsMarker() { +bool Object::IsArgumentsMarker() const { return IsOddball() && Oddball::cast(this)->kind() == Oddball::kArgumentMarker; } double Object::Number() { - ASSERT(IsNumber()); + DCHECK(IsNumber()); return IsSmi() ? static_cast<double>(reinterpret_cast<Smi*>(this)->value()) : reinterpret_cast<HeapNumber*>(this)->value(); } -bool Object::IsNaN() { +bool Object::IsNaN() const { return this->IsHeapNumber() && std::isnan(HeapNumber::cast(this)->value()); } +bool Object::IsMinusZero() const { + return this->IsHeapNumber() && + i::IsMinusZero(HeapNumber::cast(this)->value()); +} + + MaybeHandle<Smi> Object::ToSmi(Isolate* isolate, Handle<Object> object) { if (object->IsSmi()) return Handle<Smi>::cast(object); if (object->IsHeapNumber()) { @@ -1051,8 +1124,8 @@ bool Object::HasSpecificClassOf(String* name) { MaybeHandle<Object> Object::GetProperty(Handle<Object> object, Handle<Name> name) { - PropertyAttributes attributes; - return GetPropertyWithReceiver(object, object, name, &attributes); + LookupIterator it(object, name); + return GetProperty(&it); } @@ -1060,9 +1133,9 @@ MaybeHandle<Object> Object::GetElement(Isolate* isolate, Handle<Object> object, uint32_t index) { // GetElement can trigger a getter which can cause allocation. - // This was not always the case. This ASSERT is here to catch + // This was not always the case. This DCHECK is here to catch // leftover incorrect uses. - ASSERT(AllowHeapAllocation::IsAllowed()); + DCHECK(AllowHeapAllocation::IsAllowed()); return Object::GetElementWithReceiver(isolate, object, object, index); } @@ -1080,10 +1153,10 @@ MaybeHandle<Object> Object::GetProperty(Isolate* isolate, Handle<Object> object, const char* name) { Handle<String> str = isolate->factory()->InternalizeUtf8String(name); - ASSERT(!str.is_null()); + DCHECK(!str.is_null()); #ifdef DEBUG uint32_t index; // Assert that the name is not an array index. 
- ASSERT(!str->AsArrayIndex(&index)); + DCHECK(!str->AsArrayIndex(&index)); #endif // DEBUG return GetProperty(object, str); } @@ -1104,12 +1177,12 @@ MaybeHandle<Object> JSProxy::SetElementWithHandler(Handle<JSProxy> proxy, StrictMode strict_mode) { Isolate* isolate = proxy->GetIsolate(); Handle<String> name = isolate->factory()->Uint32ToString(index); - return SetPropertyWithHandler( - proxy, receiver, name, value, NONE, strict_mode); + return SetPropertyWithHandler(proxy, receiver, name, value, strict_mode); } -bool JSProxy::HasElementWithHandler(Handle<JSProxy> proxy, uint32_t index) { +Maybe<bool> JSProxy::HasElementWithHandler(Handle<JSProxy> proxy, + uint32_t index) { Isolate* isolate = proxy->GetIsolate(); Handle<String> name = isolate->factory()->Uint32ToString(index); return HasPropertyWithHandler(proxy, name); @@ -1119,27 +1192,32 @@ bool JSProxy::HasElementWithHandler(Handle<JSProxy> proxy, uint32_t index) { #define FIELD_ADDR(p, offset) \ (reinterpret_cast<byte*>(p) + offset - kHeapObjectTag) +#define FIELD_ADDR_CONST(p, offset) \ + (reinterpret_cast<const byte*>(p) + offset - kHeapObjectTag) + #define READ_FIELD(p, offset) \ - (*reinterpret_cast<Object**>(FIELD_ADDR(p, offset))) + (*reinterpret_cast<Object* const*>(FIELD_ADDR_CONST(p, offset))) -#define ACQUIRE_READ_FIELD(p, offset) \ - reinterpret_cast<Object*>( \ - Acquire_Load(reinterpret_cast<AtomicWord*>(FIELD_ADDR(p, offset)))) +#define ACQUIRE_READ_FIELD(p, offset) \ + reinterpret_cast<Object*>(base::Acquire_Load( \ + reinterpret_cast<const base::AtomicWord*>(FIELD_ADDR_CONST(p, offset)))) -#define NOBARRIER_READ_FIELD(p, offset) \ - reinterpret_cast<Object*>( \ - NoBarrier_Load(reinterpret_cast<AtomicWord*>(FIELD_ADDR(p, offset)))) +#define NOBARRIER_READ_FIELD(p, offset) \ + reinterpret_cast<Object*>(base::NoBarrier_Load( \ + reinterpret_cast<const base::AtomicWord*>(FIELD_ADDR_CONST(p, offset)))) #define WRITE_FIELD(p, offset, value) \ (*reinterpret_cast<Object**>(FIELD_ADDR(p, offset)) = value) -#define RELEASE_WRITE_FIELD(p, offset, value) \ - Release_Store(reinterpret_cast<AtomicWord*>(FIELD_ADDR(p, offset)), \ - reinterpret_cast<AtomicWord>(value)); +#define RELEASE_WRITE_FIELD(p, offset, value) \ + base::Release_Store( \ + reinterpret_cast<base::AtomicWord*>(FIELD_ADDR(p, offset)), \ + reinterpret_cast<base::AtomicWord>(value)); -#define NOBARRIER_WRITE_FIELD(p, offset, value) \ - NoBarrier_Store(reinterpret_cast<AtomicWord*>(FIELD_ADDR(p, offset)), \ - reinterpret_cast<AtomicWord>(value)); +#define NOBARRIER_WRITE_FIELD(p, offset, value) \ + base::NoBarrier_Store( \ + reinterpret_cast<base::AtomicWord*>(FIELD_ADDR(p, offset)), \ + reinterpret_cast<base::AtomicWord>(value)); #define WRITE_BARRIER(heap, object, offset, value) \ heap->incremental_marking()->RecordWrite( \ @@ -1159,17 +1237,19 @@ bool JSProxy::HasElementWithHandler(Handle<JSProxy> proxy, uint32_t index) { #ifndef V8_TARGET_ARCH_MIPS #define READ_DOUBLE_FIELD(p, offset) \ - (*reinterpret_cast<double*>(FIELD_ADDR(p, offset))) + (*reinterpret_cast<const double*>(FIELD_ADDR_CONST(p, offset))) #else // V8_TARGET_ARCH_MIPS // Prevent gcc from using load-double (mips ldc1) on (possibly) // non-64-bit aligned HeapNumber::value. 
- static inline double read_double_field(void* p, int offset) { + static inline double read_double_field(const void* p, int offset) { union conversion { double d; uint32_t u[2]; } c; - c.u[0] = (*reinterpret_cast<uint32_t*>(FIELD_ADDR(p, offset))); - c.u[1] = (*reinterpret_cast<uint32_t*>(FIELD_ADDR(p, offset + 4))); + c.u[0] = (*reinterpret_cast<const uint32_t*>( + FIELD_ADDR_CONST(p, offset))); + c.u[1] = (*reinterpret_cast<const uint32_t*>( + FIELD_ADDR_CONST(p, offset + 4))); return c.d; } #define READ_DOUBLE_FIELD(p, offset) read_double_field(p, offset) @@ -1197,73 +1277,74 @@ bool JSProxy::HasElementWithHandler(Handle<JSProxy> proxy, uint32_t index) { #define READ_INT_FIELD(p, offset) \ - (*reinterpret_cast<int*>(FIELD_ADDR(p, offset))) + (*reinterpret_cast<const int*>(FIELD_ADDR_CONST(p, offset))) #define WRITE_INT_FIELD(p, offset, value) \ (*reinterpret_cast<int*>(FIELD_ADDR(p, offset)) = value) #define READ_INTPTR_FIELD(p, offset) \ - (*reinterpret_cast<intptr_t*>(FIELD_ADDR(p, offset))) + (*reinterpret_cast<const intptr_t*>(FIELD_ADDR_CONST(p, offset))) #define WRITE_INTPTR_FIELD(p, offset, value) \ (*reinterpret_cast<intptr_t*>(FIELD_ADDR(p, offset)) = value) #define READ_UINT32_FIELD(p, offset) \ - (*reinterpret_cast<uint32_t*>(FIELD_ADDR(p, offset))) + (*reinterpret_cast<const uint32_t*>(FIELD_ADDR_CONST(p, offset))) #define WRITE_UINT32_FIELD(p, offset, value) \ (*reinterpret_cast<uint32_t*>(FIELD_ADDR(p, offset)) = value) #define READ_INT32_FIELD(p, offset) \ - (*reinterpret_cast<int32_t*>(FIELD_ADDR(p, offset))) + (*reinterpret_cast<const int32_t*>(FIELD_ADDR_CONST(p, offset))) #define WRITE_INT32_FIELD(p, offset, value) \ (*reinterpret_cast<int32_t*>(FIELD_ADDR(p, offset)) = value) #define READ_INT64_FIELD(p, offset) \ - (*reinterpret_cast<int64_t*>(FIELD_ADDR(p, offset))) + (*reinterpret_cast<const int64_t*>(FIELD_ADDR_CONST(p, offset))) #define WRITE_INT64_FIELD(p, offset, value) \ (*reinterpret_cast<int64_t*>(FIELD_ADDR(p, offset)) = value) #define READ_SHORT_FIELD(p, offset) \ - (*reinterpret_cast<uint16_t*>(FIELD_ADDR(p, offset))) + (*reinterpret_cast<const uint16_t*>(FIELD_ADDR_CONST(p, offset))) #define WRITE_SHORT_FIELD(p, offset, value) \ (*reinterpret_cast<uint16_t*>(FIELD_ADDR(p, offset)) = value) #define READ_BYTE_FIELD(p, offset) \ - (*reinterpret_cast<byte*>(FIELD_ADDR(p, offset))) + (*reinterpret_cast<const byte*>(FIELD_ADDR_CONST(p, offset))) -#define NOBARRIER_READ_BYTE_FIELD(p, offset) \ - static_cast<byte>(NoBarrier_Load( \ - reinterpret_cast<Atomic8*>(FIELD_ADDR(p, offset))) ) +#define NOBARRIER_READ_BYTE_FIELD(p, offset) \ + static_cast<byte>(base::NoBarrier_Load( \ + reinterpret_cast<base::Atomic8*>(FIELD_ADDR(p, offset)))) #define WRITE_BYTE_FIELD(p, offset, value) \ (*reinterpret_cast<byte*>(FIELD_ADDR(p, offset)) = value) -#define NOBARRIER_WRITE_BYTE_FIELD(p, offset, value) \ - NoBarrier_Store(reinterpret_cast<Atomic8*>(FIELD_ADDR(p, offset)), \ - static_cast<Atomic8>(value)); +#define NOBARRIER_WRITE_BYTE_FIELD(p, offset, value) \ + base::NoBarrier_Store( \ + reinterpret_cast<base::Atomic8*>(FIELD_ADDR(p, offset)), \ + static_cast<base::Atomic8>(value)); Object** HeapObject::RawField(HeapObject* obj, int byte_offset) { - return &READ_FIELD(obj, byte_offset); + return reinterpret_cast<Object**>(FIELD_ADDR(obj, byte_offset)); } -int Smi::value() { +int Smi::value() const { return Internals::SmiValue(this); } Smi* Smi::FromInt(int value) { - ASSERT(Smi::IsValid(value)); + DCHECK(Smi::IsValid(value)); return 
reinterpret_cast<Smi*>(Internals::IntToSmi(value)); } Smi* Smi::FromIntptr(intptr_t value) { - ASSERT(Smi::IsValid(value)); + DCHECK(Smi::IsValid(value)); int smi_shift_bits = kSmiTagSize + kSmiShiftSize; return reinterpret_cast<Smi*>((value << smi_shift_bits) | kSmiTag); } @@ -1271,12 +1352,12 @@ Smi* Smi::FromIntptr(intptr_t value) { bool Smi::IsValid(intptr_t value) { bool result = Internals::IsValidSmi(value); - ASSERT_EQ(result, value >= kMinValue && value <= kMaxValue); + DCHECK_EQ(result, value >= kMinValue && value <= kMaxValue); return result; } -MapWord MapWord::FromMap(Map* map) { +MapWord MapWord::FromMap(const Map* map) { return MapWord(reinterpret_cast<uintptr_t>(map)); } @@ -1298,7 +1379,7 @@ MapWord MapWord::FromForwardingAddress(HeapObject* object) { HeapObject* MapWord::ToForwardingAddress() { - ASSERT(IsForwardingAddress()); + DCHECK(IsForwardingAddress()); return HeapObject::FromAddress(reinterpret_cast<Address>(value_)); } @@ -1314,21 +1395,28 @@ void HeapObject::VerifySmiField(int offset) { #endif -Heap* HeapObject::GetHeap() { +Heap* HeapObject::GetHeap() const { Heap* heap = - MemoryChunk::FromAddress(reinterpret_cast<Address>(this))->heap(); - SLOW_ASSERT(heap != NULL); + MemoryChunk::FromAddress(reinterpret_cast<const byte*>(this))->heap(); + SLOW_DCHECK(heap != NULL); return heap; } -Isolate* HeapObject::GetIsolate() { +Isolate* HeapObject::GetIsolate() const { return GetHeap()->isolate(); } -Map* HeapObject::map() { +Map* HeapObject::map() const { +#ifdef DEBUG + // Clear mark potentially added by PathTracer. + uintptr_t raw_value = + map_word().ToRawValue() & ~static_cast<uintptr_t>(PathTracer::kMarkTag); + return MapWord::FromRawValue(raw_value).ToMap(); +#else return map_word().ToMap(); +#endif } @@ -1368,7 +1456,7 @@ void HeapObject::set_map_no_write_barrier(Map* value) { } -MapWord HeapObject::map_word() { +MapWord HeapObject::map_word() const { return MapWord( reinterpret_cast<uintptr_t>(NOBARRIER_READ_FIELD(this, kMapOffset))); } @@ -1380,7 +1468,7 @@ void HeapObject::set_map_word(MapWord map_word) { } -MapWord HeapObject::synchronized_map_word() { +MapWord HeapObject::synchronized_map_word() const { return MapWord( reinterpret_cast<uintptr_t>(ACQUIRE_READ_FIELD(this, kMapOffset))); } @@ -1393,7 +1481,7 @@ void HeapObject::synchronized_set_map_word(MapWord map_word) { HeapObject* HeapObject::FromAddress(Address address) { - ASSERT_TAG_ALIGNED(address); + DCHECK_TAG_ALIGNED(address); return reinterpret_cast<HeapObject*>(address + kHeapObjectTag); } @@ -1408,6 +1496,24 @@ int HeapObject::Size() { } +bool HeapObject::MayContainNewSpacePointers() { + InstanceType type = map()->instance_type(); + if (type <= LAST_NAME_TYPE) { + if (type == SYMBOL_TYPE) { + return true; + } + DCHECK(type < FIRST_NONSTRING_TYPE); + // There are four string representations: sequential strings, external + // strings, cons strings, and sliced strings. + // Only the latter two contain non-map-word pointers to heap objects. + return ((type & kIsIndirectStringMask) == kIsIndirectStringTag); + } + // The ConstantPoolArray contains heap pointers, but not new space pointers. 
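// Elaborating the comment above: the Object* setter for ConstantPoolArray
// later in this file DCHECKs !GetHeap()->InNewSpace(value), so constant
// pools can only ever reference old-space objects and the scavenger may
// skip them. A toy decision tree mirroring this predicate, with stand-in
// type tags (the real instance-type constants and string masks are
// assumptions left out here, not reproduced):
enum ToyType { kSeqString, kExternalString, kConsString, kSlicedString,
               kSymbol, kConstantPool, kRawDataObject, kOrdinaryObject };

static bool may_contain_new_space_pointers(ToyType type) {
  switch (type) {
    case kConsString:      // indirect: points at its two halves
    case kSlicedString:    // indirect: points at its parent string
    case kSymbol:          // its name slot may point into new space
    case kOrdinaryObject:  // arbitrary tagged fields
      return true;
    case kSeqString:       // character payload only
    case kExternalString:  // payload lives off-heap
    case kConstantPool:    // heap pointers, but never new-space ones
    case kRawDataObject:   // byte arrays, free space, etc.
      return false;
  }
  return false;  // unreachable; keeps -Wreturn-type quiet
}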
+ if (type == CONSTANT_POOL_ARRAY_TYPE) return false; + return (type > LAST_DATA_TYPE); +} + + void HeapObject::IteratePointers(ObjectVisitor* v, int start, int end) { v->VisitPointers(reinterpret_cast<Object**>(FIELD_ADDR(this, start)), reinterpret_cast<Object**>(FIELD_ADDR(this, end))); @@ -1424,7 +1530,7 @@ void HeapObject::IterateNextCodeLink(ObjectVisitor* v, int offset) { } -double HeapNumber::value() { +double HeapNumber::value() const { return READ_DOUBLE_FIELD(this, kValueOffset); } @@ -1464,14 +1570,14 @@ bool FixedArray::ContainsOnlySmisOrHoles() { } -FixedArrayBase* JSObject::elements() { +FixedArrayBase* JSObject::elements() const { Object* array = READ_FIELD(this, kElementsOffset); return static_cast<FixedArrayBase*>(array); } void JSObject::ValidateElements(Handle<JSObject> object) { -#ifdef ENABLE_SLOW_ASSERTS +#ifdef ENABLE_SLOW_DCHECKS if (FLAG_enable_slow_asserts) { ElementsAccessor* accessor = object->GetElementsAccessor(); accessor->Validate(object); @@ -1492,7 +1598,7 @@ void AllocationSite::Initialize() { void AllocationSite::MarkZombie() { - ASSERT(!IsZombie()); + DCHECK(!IsZombie()); Initialize(); set_pretenure_decision(kZombie); } @@ -1552,10 +1658,10 @@ inline void AllocationSite::set_memento_found_count(int count) { int value = pretenure_data()->value(); // Verify that we can count more mementos than we can possibly find in one // new space collection. - ASSERT((GetHeap()->MaxSemiSpaceSize() / + DCHECK((GetHeap()->MaxSemiSpaceSize() / (StaticVisitorBase::kMinObjectSizeInWords * kPointerSize + AllocationMemento::kSize)) < MementoFoundCountBits::kMax); - ASSERT(count < MementoFoundCountBits::kMax); + DCHECK(count < MementoFoundCountBits::kMax); set_pretenure_data( Smi::FromInt(MementoFoundCountBits::update(value, count)), SKIP_WRITE_BARRIER); @@ -1566,50 +1672,71 @@ inline bool AllocationSite::IncrementMementoFoundCount() { int value = memento_found_count(); set_memento_found_count(value + 1); - return value == 0; + return memento_found_count() == kPretenureMinimumCreated; } inline void AllocationSite::IncrementMementoCreateCount() { - ASSERT(FLAG_allocation_site_pretenuring); + DCHECK(FLAG_allocation_site_pretenuring); int value = memento_create_count(); set_memento_create_count(value + 1); } -inline bool AllocationSite::DigestPretenuringFeedback() { - bool decision_changed = false; +inline bool AllocationSite::MakePretenureDecision( + PretenureDecision current_decision, + double ratio, + bool maximum_size_scavenge) { + // Here we just allow state transitions from undecided or maybe tenure + // to don't tenure, maybe tenure, or tenure. + if ((current_decision == kUndecided || current_decision == kMaybeTenure)) { + if (ratio >= kPretenureRatio) { + // We just transition into tenure state when the semi-space was at + // maximum capacity. + if (maximum_size_scavenge) { + set_deopt_dependent_code(true); + set_pretenure_decision(kTenure); + // Currently we just need to deopt when we make a state transition to + // tenure. + return true; + } + set_pretenure_decision(kMaybeTenure); + } else { + set_pretenure_decision(kDontTenure); + } + } + return false; +} + + +inline bool AllocationSite::DigestPretenuringFeedback( + bool maximum_size_scavenge) { + bool deopt = false; int create_count = memento_create_count(); int found_count = memento_found_count(); bool minimum_mementos_created = create_count >= kPretenureMinimumCreated; double ratio = minimum_mementos_created || FLAG_trace_pretenuring_statistics ? 
static_cast<double>(found_count) / create_count : 0.0; - PretenureFlag current_mode = GetPretenureMode(); + PretenureDecision current_decision = pretenure_decision(); if (minimum_mementos_created) { - PretenureDecision result = ratio >= kPretenureRatio - ? kTenure - : kDontTenure; - set_pretenure_decision(result); - if (current_mode != GetPretenureMode()) { - decision_changed = true; - set_deopt_dependent_code(true); - } + deopt = MakePretenureDecision( + current_decision, ratio, maximum_size_scavenge); } if (FLAG_trace_pretenuring_statistics) { PrintF( "AllocationSite(%p): (created, found, ratio) (%d, %d, %f) %s => %s\n", static_cast<void*>(this), create_count, found_count, ratio, - current_mode == TENURED ? "tenured" : "not tenured", - GetPretenureMode() == TENURED ? "tenured" : "not tenured"); + PretenureDecisionName(current_decision), + PretenureDecisionName(pretenure_decision())); } // Clear feedback calculation fields until the next gc. set_memento_found_count(0); set_memento_create_count(0); - return decision_changed; + return deopt; } @@ -1634,7 +1761,7 @@ void JSObject::EnsureCanContainElements(Handle<JSObject> object, ElementsKind target_kind = current_kind; { DisallowHeapAllocation no_allocation; - ASSERT(mode != ALLOW_COPIED_DOUBLE_ELEMENTS); + DCHECK(mode != ALLOW_COPIED_DOUBLE_ELEMENTS); bool is_holey = IsFastHoleyElementsKind(current_kind); if (current_kind == FAST_HOLEY_ELEMENTS) return; Heap* heap = object->GetHeap(); @@ -1674,7 +1801,7 @@ void JSObject::EnsureCanContainElements(Handle<JSObject> object, EnsureElementsMode mode) { Heap* heap = object->GetHeap(); if (elements->map() != heap->fixed_double_array_map()) { - ASSERT(elements->map() == heap->fixed_array_map() || + DCHECK(elements->map() == heap->fixed_array_map() || elements->map() == heap->fixed_cow_array_map()); if (mode == ALLOW_COPIED_DOUBLE_ELEMENTS) { mode = DONT_ALLOW_DOUBLE_ELEMENTS; @@ -1685,7 +1812,7 @@ void JSObject::EnsureCanContainElements(Handle<JSObject> object, return; } - ASSERT(mode == ALLOW_COPIED_DOUBLE_ELEMENTS); + DCHECK(mode == ALLOW_COPIED_DOUBLE_ELEMENTS); if (object->GetElementsKind() == FAST_HOLEY_SMI_ELEMENTS) { TransitionElementsKind(object, FAST_HOLEY_DOUBLE_ELEMENTS); } else if (object->GetElementsKind() == FAST_SMI_ELEMENTS) { @@ -1706,11 +1833,11 @@ void JSObject::SetMapAndElements(Handle<JSObject> object, Handle<Map> new_map, Handle<FixedArrayBase> value) { JSObject::MigrateToMap(object, new_map); - ASSERT((object->map()->has_fast_smi_or_object_elements() || + DCHECK((object->map()->has_fast_smi_or_object_elements() || (*value == object->GetHeap()->empty_fixed_array())) == (value->map() == object->GetHeap()->fixed_array_map() || value->map() == object->GetHeap()->fixed_cow_array_map())); - ASSERT((*value == object->GetHeap()->empty_fixed_array()) || + DCHECK((*value == object->GetHeap()->empty_fixed_array()) || (object->map()->has_fast_double_elements() == value->IsFixedDoubleArray())); object->set_elements(*value); @@ -1724,7 +1851,7 @@ void JSObject::set_elements(FixedArrayBase* value, WriteBarrierMode mode) { void JSObject::initialize_properties() { - ASSERT(!GetHeap()->InNewSpace(GetHeap()->empty_fixed_array())); + DCHECK(!GetHeap()->InNewSpace(GetHeap()->empty_fixed_array())); WRITE_FIELD(this, kPropertiesOffset, GetHeap()->empty_fixed_array()); } @@ -1735,7 +1862,7 @@ void JSObject::initialize_elements() { } -Handle<String> JSObject::ExpectedTransitionKey(Handle<Map> map) { +Handle<String> Map::ExpectedTransitionKey(Handle<Map> map) { DisallowHeapAllocation no_gc; if 
(!map->HasTransitionArray()) return Handle<String>::null(); TransitionArray* transitions = map->transitions(); @@ -1750,14 +1877,14 @@ Handle<String> JSObject::ExpectedTransitionKey(Handle<Map> map) { } -Handle<Map> JSObject::ExpectedTransitionTarget(Handle<Map> map) { - ASSERT(!ExpectedTransitionKey(map).is_null()); +Handle<Map> Map::ExpectedTransitionTarget(Handle<Map> map) { + DCHECK(!ExpectedTransitionKey(map).is_null()); return Handle<Map>(map->transitions()->GetTarget( TransitionArray::kSimpleTransitionIndex)); } -Handle<Map> JSObject::FindTransitionToField(Handle<Map> map, Handle<Name> key) { +Handle<Map> Map::FindTransitionToField(Handle<Map> map, Handle<Name> key) { DisallowHeapAllocation no_allocation; if (!map->HasTransitionArray()) return Handle<Map>::null(); TransitionArray* transitions = map->transitions(); @@ -1774,7 +1901,7 @@ ACCESSORS(Oddball, to_string, String, kToStringOffset) ACCESSORS(Oddball, to_number, Object, kToNumberOffset) -byte Oddball::kind() { +byte Oddball::kind() const { return Smi::cast(READ_FIELD(this, kKindOffset))->value(); } @@ -1784,20 +1911,20 @@ void Oddball::set_kind(byte value) { } -Object* Cell::value() { +Object* Cell::value() const { return READ_FIELD(this, kValueOffset); } void Cell::set_value(Object* val, WriteBarrierMode ignored) { // The write barrier is not used for global property cells. - ASSERT(!val->IsPropertyCell() && !val->IsCell()); + DCHECK(!val->IsPropertyCell() && !val->IsCell()); WRITE_FIELD(this, kValueOffset, val); } ACCESSORS(PropertyCell, dependent_code, DependentCode, kDependentCodeOffset) -Object* PropertyCell::type_raw() { +Object* PropertyCell::type_raw() const { return READ_FIELD(this, kTypeOffset); } @@ -1866,7 +1993,7 @@ int JSObject::GetHeaderSize() { int JSObject::GetInternalFieldCount() { - ASSERT(1 << kPointerSizeLog2 == kPointerSize); + DCHECK(1 << kPointerSizeLog2 == kPointerSize); // Make sure to adjust for the number of in-object properties. These // properties do contribute to the size, but are not internal fields. return ((Size() - GetHeaderSize()) >> kPointerSizeLog2) - @@ -1875,13 +2002,13 @@ int JSObject::GetInternalFieldCount() { int JSObject::GetInternalFieldOffset(int index) { - ASSERT(index < GetInternalFieldCount() && index >= 0); + DCHECK(index < GetInternalFieldCount() && index >= 0); return GetHeaderSize() + (kPointerSize * index); } Object* JSObject::GetInternalField(int index) { - ASSERT(index < GetInternalFieldCount() && index >= 0); + DCHECK(index < GetInternalFieldCount() && index >= 0); // Internal objects do follow immediately after the header, whereas in-object // properties are at the end of the object. Therefore there is no need // to adjust the index here. @@ -1890,7 +2017,7 @@ Object* JSObject::GetInternalField(int index) { void JSObject::SetInternalField(int index, Object* value) { - ASSERT(index < GetInternalFieldCount() && index >= 0); + DCHECK(index < GetInternalFieldCount() && index >= 0); // Internal objects do follow immediately after the header, whereas in-object // properties are at the end of the object. Therefore there is no need // to adjust the index here. @@ -1901,7 +2028,7 @@ void JSObject::SetInternalField(int index, Object* value) { void JSObject::SetInternalField(int index, Smi* value) { - ASSERT(index < GetInternalFieldCount() && index >= 0); + DCHECK(index < GetInternalFieldCount() && index >= 0); // Internal objects do follow immediately after the header, whereas in-object // properties are at the end of the object. 
Therefore there is no need // to adjust the index here. @@ -1913,29 +2040,22 @@ void JSObject::SetInternalField(int index, Smi* value) { // Access fast-case object properties at index. The use of these routines // is needed to correctly distinguish between properties stored in-object and // properties stored in the properties array. -Object* JSObject::RawFastPropertyAt(int index) { - // Adjust for the number of properties stored in the object. - index -= map()->inobject_properties(); - if (index < 0) { - int offset = map()->instance_size() + (index * kPointerSize); - return READ_FIELD(this, offset); +Object* JSObject::RawFastPropertyAt(FieldIndex index) { + if (index.is_inobject()) { + return READ_FIELD(this, index.offset()); } else { - ASSERT(index < properties()->length()); - return properties()->get(index); + return properties()->get(index.outobject_array_index()); } } -void JSObject::FastPropertyAtPut(int index, Object* value) { - // Adjust for the number of properties stored in the object. - index -= map()->inobject_properties(); - if (index < 0) { - int offset = map()->instance_size() + (index * kPointerSize); +void JSObject::FastPropertyAtPut(FieldIndex index, Object* value) { + if (index.is_inobject()) { + int offset = index.offset(); WRITE_FIELD(this, offset, value); WRITE_BARRIER(GetHeap(), this, offset, value); } else { - ASSERT(index < properties()->length()); - properties()->set(index, value); + properties()->set(index.outobject_array_index(), value); } } @@ -1966,15 +2086,15 @@ Object* JSObject::InObjectPropertyAtPut(int index, void JSObject::InitializeBody(Map* map, Object* pre_allocated_value, Object* filler_value) { - ASSERT(!filler_value->IsHeapObject() || + DCHECK(!filler_value->IsHeapObject() || !GetHeap()->InNewSpace(filler_value)); - ASSERT(!pre_allocated_value->IsHeapObject() || + DCHECK(!pre_allocated_value->IsHeapObject() || !GetHeap()->InNewSpace(pre_allocated_value)); int size = map->instance_size(); int offset = kHeaderSize; if (filler_value != pre_allocated_value) { int pre_allocated = map->pre_allocated_property_fields(); - ASSERT(pre_allocated * kPointerSize + kHeaderSize <= size); + DCHECK(pre_allocated * kPointerSize + kHeaderSize <= size); for (int i = 0; i < pre_allocated; i++) { WRITE_FIELD(this, offset, pre_allocated_value); offset += kPointerSize; @@ -1988,28 +2108,18 @@ void JSObject::InitializeBody(Map* map, bool JSObject::HasFastProperties() { - ASSERT(properties()->IsDictionary() == map()->is_dictionary_map()); + DCHECK(properties()->IsDictionary() == map()->is_dictionary_map()); return !properties()->IsDictionary(); } -bool JSObject::TooManyFastProperties(StoreFromKeyed store_mode) { - // Allow extra fast properties if the object has more than - // kFastPropertiesSoftLimit in-object properties. When this is the case, it is - // very unlikely that the object is being used as a dictionary and there is a - // good chance that allowing more map transitions will be worth it. - Map* map = this->map(); - if (map->unused_property_fields() != 0) return false; - - int inobject = map->inobject_properties(); - - int limit; - if (store_mode == CERTAINLY_NOT_STORE_FROM_KEYED) { - limit = Max(inobject, kMaxFastProperties); - } else { - limit = Max(inobject, kFastPropertiesSoftLimit); - } - return properties()->length() > limit; +bool Map::TooManyFastProperties(StoreFromKeyed store_mode) { + if (unused_property_fields() != 0) return false; + if (is_prototype_map()) return false; + int minimum = store_mode == CERTAINLY_NOT_STORE_FROM_KEYED ? 
128 : 12; + int limit = Max(minimum, inobject_properties()); + int external = NumberOfFields() - inobject_properties(); + return external > limit; } @@ -2070,14 +2180,8 @@ void Object::VerifyApiCallResultType() { } -FixedArrayBase* FixedArrayBase::cast(Object* object) { - ASSERT(object->IsFixedArrayBase()); - return reinterpret_cast<FixedArrayBase*>(object); -} - - Object* FixedArray::get(int index) { - SLOW_ASSERT(index >= 0 && index < this->length()); + SLOW_DCHECK(index >= 0 && index < this->length()); return READ_FIELD(this, kHeaderSize + index * kPointerSize); } @@ -2093,17 +2197,18 @@ bool FixedArray::is_the_hole(int index) { void FixedArray::set(int index, Smi* value) { - ASSERT(map() != GetHeap()->fixed_cow_array_map()); - ASSERT(index >= 0 && index < this->length()); - ASSERT(reinterpret_cast<Object*>(value)->IsSmi()); + DCHECK(map() != GetHeap()->fixed_cow_array_map()); + DCHECK(index >= 0 && index < this->length()); + DCHECK(reinterpret_cast<Object*>(value)->IsSmi()); int offset = kHeaderSize + index * kPointerSize; WRITE_FIELD(this, offset, value); } void FixedArray::set(int index, Object* value) { - ASSERT(map() != GetHeap()->fixed_cow_array_map()); - ASSERT(index >= 0 && index < this->length()); + DCHECK_NE(GetHeap()->fixed_cow_array_map(), map()); + DCHECK_EQ(FIXED_ARRAY_TYPE, map()->instance_type()); + DCHECK(index >= 0 && index < this->length()); int offset = kHeaderSize + index * kPointerSize; WRITE_FIELD(this, offset, value); WRITE_BARRIER(GetHeap(), this, offset, value); @@ -2121,25 +2226,25 @@ inline double FixedDoubleArray::hole_nan_as_double() { inline double FixedDoubleArray::canonical_not_the_hole_nan_as_double() { - ASSERT(BitCast<uint64_t>(OS::nan_value()) != kHoleNanInt64); - ASSERT((BitCast<uint64_t>(OS::nan_value()) >> 32) != kHoleNanUpper32); - return OS::nan_value(); + DCHECK(BitCast<uint64_t>(base::OS::nan_value()) != kHoleNanInt64); + DCHECK((BitCast<uint64_t>(base::OS::nan_value()) >> 32) != kHoleNanUpper32); + return base::OS::nan_value(); } double FixedDoubleArray::get_scalar(int index) { - ASSERT(map() != GetHeap()->fixed_cow_array_map() && + DCHECK(map() != GetHeap()->fixed_cow_array_map() && map() != GetHeap()->fixed_array_map()); - ASSERT(index >= 0 && index < this->length()); + DCHECK(index >= 0 && index < this->length()); double result = READ_DOUBLE_FIELD(this, kHeaderSize + index * kDoubleSize); - ASSERT(!is_the_hole_nan(result)); + DCHECK(!is_the_hole_nan(result)); return result; } int64_t FixedDoubleArray::get_representation(int index) { - ASSERT(map() != GetHeap()->fixed_cow_array_map() && + DCHECK(map() != GetHeap()->fixed_cow_array_map() && map() != GetHeap()->fixed_array_map()); - ASSERT(index >= 0 && index < this->length()); + DCHECK(index >= 0 && index < this->length()); return READ_INT64_FIELD(this, kHeaderSize + index * kDoubleSize); } @@ -2155,7 +2260,7 @@ Handle<Object> FixedDoubleArray::get(Handle<FixedDoubleArray> array, void FixedDoubleArray::set(int index, double value) { - ASSERT(map() != GetHeap()->fixed_cow_array_map() && + DCHECK(map() != GetHeap()->fixed_cow_array_map() && map() != GetHeap()->fixed_array_map()); int offset = kHeaderSize + index * kDoubleSize; if (std::isnan(value)) value = canonical_not_the_hole_nan_as_double(); @@ -2164,7 +2269,7 @@ void FixedDoubleArray::set(int index, double value) { void FixedDoubleArray::set_the_hole(int index) { - ASSERT(map() != GetHeap()->fixed_cow_array_map() && + DCHECK(map() != GetHeap()->fixed_cow_array_map() && map() != GetHeap()->fixed_array_map()); int offset = kHeaderSize + 
index * kDoubleSize; WRITE_DOUBLE_FIELD(this, offset, hole_nan_as_double()); @@ -2189,152 +2294,384 @@ void FixedDoubleArray::FillWithHoles(int from, int to) { } -void ConstantPoolArray::set_weak_object_state( - ConstantPoolArray::WeakObjectState state) { - int old_layout_field = READ_INT_FIELD(this, kArrayLayoutOffset); - int new_layout_field = WeakObjectStateField::update(old_layout_field, state); - WRITE_INT_FIELD(this, kArrayLayoutOffset, new_layout_field); +void ConstantPoolArray::NumberOfEntries::increment(Type type) { + DCHECK(type < NUMBER_OF_TYPES); + element_counts_[type]++; } -ConstantPoolArray::WeakObjectState ConstantPoolArray::get_weak_object_state() { - int layout_field = READ_INT_FIELD(this, kArrayLayoutOffset); - return WeakObjectStateField::decode(layout_field); +int ConstantPoolArray::NumberOfEntries::equals( + const ConstantPoolArray::NumberOfEntries& other) const { + for (int i = 0; i < NUMBER_OF_TYPES; i++) { + if (element_counts_[i] != other.element_counts_[i]) return false; + } + return true; } -int ConstantPoolArray::first_int64_index() { - return 0; +bool ConstantPoolArray::NumberOfEntries::is_empty() const { + return total_count() == 0; } -int ConstantPoolArray::first_code_ptr_index() { - int layout_field = READ_INT_FIELD(this, kArrayLayoutOffset); - return first_int64_index() + - NumberOfInt64EntriesField::decode(layout_field); +int ConstantPoolArray::NumberOfEntries::count_of(Type type) const { + DCHECK(type < NUMBER_OF_TYPES); + return element_counts_[type]; } -int ConstantPoolArray::first_heap_ptr_index() { - int layout_field = READ_INT_FIELD(this, kArrayLayoutOffset); - return first_code_ptr_index() + - NumberOfCodePtrEntriesField::decode(layout_field); +int ConstantPoolArray::NumberOfEntries::base_of(Type type) const { + int base = 0; + DCHECK(type < NUMBER_OF_TYPES); + for (int i = 0; i < type; i++) { + base += element_counts_[i]; + } + return base; } -int ConstantPoolArray::first_int32_index() { - int layout_field = READ_INT_FIELD(this, kArrayLayoutOffset); - return first_heap_ptr_index() + - NumberOfHeapPtrEntriesField::decode(layout_field); +int ConstantPoolArray::NumberOfEntries::total_count() const { + int count = 0; + for (int i = 0; i < NUMBER_OF_TYPES; i++) { + count += element_counts_[i]; + } + return count; } -int ConstantPoolArray::count_of_int64_entries() { - return first_code_ptr_index(); +int ConstantPoolArray::NumberOfEntries::are_in_range(int min, int max) const { + for (int i = FIRST_TYPE; i < NUMBER_OF_TYPES; i++) { + if (element_counts_[i] < min || element_counts_[i] > max) { + return false; + } + } + return true; } -int ConstantPoolArray::count_of_code_ptr_entries() { - return first_heap_ptr_index() - first_code_ptr_index(); +int ConstantPoolArray::Iterator::next_index() { + DCHECK(!is_finished()); + int ret = next_index_++; + update_section(); + return ret; } -int ConstantPoolArray::count_of_heap_ptr_entries() { - return first_int32_index() - first_heap_ptr_index(); +bool ConstantPoolArray::Iterator::is_finished() { + return next_index_ > array_->last_index(type_, final_section_); } -int ConstantPoolArray::count_of_int32_entries() { - return length() - first_int32_index(); +void ConstantPoolArray::Iterator::update_section() { + if (next_index_ > array_->last_index(type_, current_section_) && + current_section_ != final_section_) { + DCHECK(final_section_ == EXTENDED_SECTION); + current_section_ = EXTENDED_SECTION; + next_index_ = array_->first_index(type_, EXTENDED_SECTION); + } } -void ConstantPoolArray::Init(int 
number_of_int64_entries, - int number_of_code_ptr_entries, - int number_of_heap_ptr_entries, - int number_of_int32_entries) { - set_length(number_of_int64_entries + - number_of_code_ptr_entries + - number_of_heap_ptr_entries + - number_of_int32_entries); - int layout_field = - NumberOfInt64EntriesField::encode(number_of_int64_entries) | - NumberOfCodePtrEntriesField::encode(number_of_code_ptr_entries) | - NumberOfHeapPtrEntriesField::encode(number_of_heap_ptr_entries) | - WeakObjectStateField::encode(NO_WEAK_OBJECTS); - WRITE_INT_FIELD(this, kArrayLayoutOffset, layout_field); +bool ConstantPoolArray::is_extended_layout() { + uint32_t small_layout_1 = READ_UINT32_FIELD(this, kSmallLayout1Offset); + return IsExtendedField::decode(small_layout_1); +} + + +ConstantPoolArray::LayoutSection ConstantPoolArray::final_section() { + return is_extended_layout() ? EXTENDED_SECTION : SMALL_SECTION; +} + + +int ConstantPoolArray::first_extended_section_index() { + DCHECK(is_extended_layout()); + uint32_t small_layout_2 = READ_UINT32_FIELD(this, kSmallLayout2Offset); + return TotalCountField::decode(small_layout_2); +} + + +int ConstantPoolArray::get_extended_section_header_offset() { + return RoundUp(SizeFor(NumberOfEntries(this, SMALL_SECTION)), kInt64Size); +} + + +ConstantPoolArray::WeakObjectState ConstantPoolArray::get_weak_object_state() { + uint32_t small_layout_2 = READ_UINT32_FIELD(this, kSmallLayout2Offset); + return WeakObjectStateField::decode(small_layout_2); +} + + +void ConstantPoolArray::set_weak_object_state( + ConstantPoolArray::WeakObjectState state) { + uint32_t small_layout_2 = READ_UINT32_FIELD(this, kSmallLayout2Offset); + small_layout_2 = WeakObjectStateField::update(small_layout_2, state); + WRITE_INT32_FIELD(this, kSmallLayout2Offset, small_layout_2); +} + + +int ConstantPoolArray::first_index(Type type, LayoutSection section) { + int index = 0; + if (section == EXTENDED_SECTION) { + DCHECK(is_extended_layout()); + index += first_extended_section_index(); + } + + for (Type type_iter = FIRST_TYPE; type_iter < type; + type_iter = next_type(type_iter)) { + index += number_of_entries(type_iter, section); + } + + return index; +} + + +int ConstantPoolArray::last_index(Type type, LayoutSection section) { + return first_index(type, section) + number_of_entries(type, section) - 1; +} + + +int ConstantPoolArray::number_of_entries(Type type, LayoutSection section) { + if (section == SMALL_SECTION) { + uint32_t small_layout_1 = READ_UINT32_FIELD(this, kSmallLayout1Offset); + uint32_t small_layout_2 = READ_UINT32_FIELD(this, kSmallLayout2Offset); + switch (type) { + case INT64: + return Int64CountField::decode(small_layout_1); + case CODE_PTR: + return CodePtrCountField::decode(small_layout_1); + case HEAP_PTR: + return HeapPtrCountField::decode(small_layout_1); + case INT32: + return Int32CountField::decode(small_layout_2); + default: + UNREACHABLE(); + return 0; + } + } else { + DCHECK(section == EXTENDED_SECTION && is_extended_layout()); + int offset = get_extended_section_header_offset(); + switch (type) { + case INT64: + offset += kExtendedInt64CountOffset; + break; + case CODE_PTR: + offset += kExtendedCodePtrCountOffset; + break; + case HEAP_PTR: + offset += kExtendedHeapPtrCountOffset; + break; + case INT32: + offset += kExtendedInt32CountOffset; + break; + default: + UNREACHABLE(); + } + return READ_INT_FIELD(this, offset); + } +} + + +bool ConstantPoolArray::offset_is_type(int offset, Type type) { + return (offset >= OffsetOfElementAt(first_index(type, SMALL_SECTION)) && + offset 
<= OffsetOfElementAt(last_index(type, SMALL_SECTION))) || + (is_extended_layout() && + offset >= OffsetOfElementAt(first_index(type, EXTENDED_SECTION)) && + offset <= OffsetOfElementAt(last_index(type, EXTENDED_SECTION))); +} + + +ConstantPoolArray::Type ConstantPoolArray::get_type(int index) { + LayoutSection section; + if (is_extended_layout() && index >= first_extended_section_index()) { + section = EXTENDED_SECTION; + } else { + section = SMALL_SECTION; + } + + Type type = FIRST_TYPE; + while (index > last_index(type, section)) { + type = next_type(type); + } + DCHECK(type <= LAST_TYPE); + return type; } int64_t ConstantPoolArray::get_int64_entry(int index) { - ASSERT(map() == GetHeap()->constant_pool_array_map()); - ASSERT(index >= 0 && index < first_code_ptr_index()); + DCHECK(map() == GetHeap()->constant_pool_array_map()); + DCHECK(get_type(index) == INT64); return READ_INT64_FIELD(this, OffsetOfElementAt(index)); } + double ConstantPoolArray::get_int64_entry_as_double(int index) { STATIC_ASSERT(kDoubleSize == kInt64Size); - ASSERT(map() == GetHeap()->constant_pool_array_map()); - ASSERT(index >= 0 && index < first_code_ptr_index()); + DCHECK(map() == GetHeap()->constant_pool_array_map()); + DCHECK(get_type(index) == INT64); return READ_DOUBLE_FIELD(this, OffsetOfElementAt(index)); } Address ConstantPoolArray::get_code_ptr_entry(int index) { - ASSERT(map() == GetHeap()->constant_pool_array_map()); - ASSERT(index >= first_code_ptr_index() && index < first_heap_ptr_index()); + DCHECK(map() == GetHeap()->constant_pool_array_map()); + DCHECK(get_type(index) == CODE_PTR); return reinterpret_cast<Address>(READ_FIELD(this, OffsetOfElementAt(index))); } Object* ConstantPoolArray::get_heap_ptr_entry(int index) { - ASSERT(map() == GetHeap()->constant_pool_array_map()); - ASSERT(index >= first_heap_ptr_index() && index < first_int32_index()); + DCHECK(map() == GetHeap()->constant_pool_array_map()); + DCHECK(get_type(index) == HEAP_PTR); return READ_FIELD(this, OffsetOfElementAt(index)); } int32_t ConstantPoolArray::get_int32_entry(int index) { - ASSERT(map() == GetHeap()->constant_pool_array_map()); - ASSERT(index >= first_int32_index() && index < length()); + DCHECK(map() == GetHeap()->constant_pool_array_map()); + DCHECK(get_type(index) == INT32); return READ_INT32_FIELD(this, OffsetOfElementAt(index)); } +void ConstantPoolArray::set(int index, int64_t value) { + DCHECK(map() == GetHeap()->constant_pool_array_map()); + DCHECK(get_type(index) == INT64); + WRITE_INT64_FIELD(this, OffsetOfElementAt(index), value); +} + + +void ConstantPoolArray::set(int index, double value) { + STATIC_ASSERT(kDoubleSize == kInt64Size); + DCHECK(map() == GetHeap()->constant_pool_array_map()); + DCHECK(get_type(index) == INT64); + WRITE_DOUBLE_FIELD(this, OffsetOfElementAt(index), value); +} + + void ConstantPoolArray::set(int index, Address value) { - ASSERT(map() == GetHeap()->constant_pool_array_map()); - ASSERT(index >= first_code_ptr_index() && index < first_heap_ptr_index()); + DCHECK(map() == GetHeap()->constant_pool_array_map()); + DCHECK(get_type(index) == CODE_PTR); WRITE_FIELD(this, OffsetOfElementAt(index), reinterpret_cast<Object*>(value)); } void ConstantPoolArray::set(int index, Object* value) { - ASSERT(map() == GetHeap()->constant_pool_array_map()); - ASSERT(index >= first_code_ptr_index() && index < first_int32_index()); + DCHECK(map() == GetHeap()->constant_pool_array_map()); + DCHECK(!GetHeap()->InNewSpace(value)); + DCHECK(get_type(index) == HEAP_PTR); WRITE_FIELD(this, 
OffsetOfElementAt(index), value); WRITE_BARRIER(GetHeap(), this, OffsetOfElementAt(index), value); } -void ConstantPoolArray::set(int index, int64_t value) { - ASSERT(map() == GetHeap()->constant_pool_array_map()); - ASSERT(index >= first_int64_index() && index < first_code_ptr_index()); - WRITE_INT64_FIELD(this, OffsetOfElementAt(index), value); +void ConstantPoolArray::set(int index, int32_t value) { + DCHECK(map() == GetHeap()->constant_pool_array_map()); + DCHECK(get_type(index) == INT32); + WRITE_INT32_FIELD(this, OffsetOfElementAt(index), value); } -void ConstantPoolArray::set(int index, double value) { - STATIC_ASSERT(kDoubleSize == kInt64Size); - ASSERT(map() == GetHeap()->constant_pool_array_map()); - ASSERT(index >= first_int64_index() && index < first_code_ptr_index()); - WRITE_DOUBLE_FIELD(this, OffsetOfElementAt(index), value); +void ConstantPoolArray::set_at_offset(int offset, int32_t value) { + DCHECK(map() == GetHeap()->constant_pool_array_map()); + DCHECK(offset_is_type(offset, INT32)); + WRITE_INT32_FIELD(this, offset, value); } -void ConstantPoolArray::set(int index, int32_t value) { - ASSERT(map() == GetHeap()->constant_pool_array_map()); - ASSERT(index >= this->first_int32_index() && index < length()); - WRITE_INT32_FIELD(this, OffsetOfElementAt(index), value); +void ConstantPoolArray::set_at_offset(int offset, int64_t value) { + DCHECK(map() == GetHeap()->constant_pool_array_map()); + DCHECK(offset_is_type(offset, INT64)); + WRITE_INT64_FIELD(this, offset, value); +} + + +void ConstantPoolArray::set_at_offset(int offset, double value) { + DCHECK(map() == GetHeap()->constant_pool_array_map()); + DCHECK(offset_is_type(offset, INT64)); + WRITE_DOUBLE_FIELD(this, offset, value); +} + + +void ConstantPoolArray::set_at_offset(int offset, Address value) { + DCHECK(map() == GetHeap()->constant_pool_array_map()); + DCHECK(offset_is_type(offset, CODE_PTR)); + WRITE_FIELD(this, offset, reinterpret_cast<Object*>(value)); + WRITE_BARRIER(GetHeap(), this, offset, reinterpret_cast<Object*>(value)); +} + + +void ConstantPoolArray::set_at_offset(int offset, Object* value) { + DCHECK(map() == GetHeap()->constant_pool_array_map()); + DCHECK(!GetHeap()->InNewSpace(value)); + DCHECK(offset_is_type(offset, HEAP_PTR)); + WRITE_FIELD(this, offset, value); + WRITE_BARRIER(GetHeap(), this, offset, value); +} + + +void ConstantPoolArray::Init(const NumberOfEntries& small) { + uint32_t small_layout_1 = + Int64CountField::encode(small.count_of(INT64)) | + CodePtrCountField::encode(small.count_of(CODE_PTR)) | + HeapPtrCountField::encode(small.count_of(HEAP_PTR)) | + IsExtendedField::encode(false); + uint32_t small_layout_2 = + Int32CountField::encode(small.count_of(INT32)) | + TotalCountField::encode(small.total_count()) | + WeakObjectStateField::encode(NO_WEAK_OBJECTS); + WRITE_UINT32_FIELD(this, kSmallLayout1Offset, small_layout_1); + WRITE_UINT32_FIELD(this, kSmallLayout2Offset, small_layout_2); + if (kHeaderSize != kFirstEntryOffset) { + DCHECK(kFirstEntryOffset - kHeaderSize == kInt32Size); + WRITE_UINT32_FIELD(this, kHeaderSize, 0); // Zero out header padding. + } +} + + +void ConstantPoolArray::InitExtended(const NumberOfEntries& small, + const NumberOfEntries& extended) { + // Initialize small layout fields first. + Init(small); + + // Set is_extended_layout field. 
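// The update below flips one flag inside a packed 32-bit layout word:
// small_layout_1 carries the small-section INT64/CODE_PTR/HEAP_PTR counts
// plus the is-extended flag, and small_layout_2 carries the INT32 count,
// total count, and weak-object state. A minimal standalone sketch of that
// BitField-style packing; the field widths chosen here are illustrative
// assumptions, not the real ones from objects.h:
#include <cstdint>

template <int kShift, int kBits>
struct ToyBitField {
  static const uint32_t kMask = ((1u << kBits) - 1u) << kShift;
  static uint32_t encode(uint32_t value) { return value << kShift; }
  static uint32_t decode(uint32_t word) { return (word & kMask) >> kShift; }
  static uint32_t update(uint32_t word, uint32_t value) {
    return (word & ~kMask) | encode(value);
  }
};

typedef ToyBitField<0, 10>  ToyInt64Count;    // illustrative widths
typedef ToyBitField<10, 10> ToyCodePtrCount;
typedef ToyBitField<20, 10> ToyHeapPtrCount;
typedef ToyBitField<30, 1>  ToyIsExtended;

// e.g. marking a layout word as extended, as the code below does:
//   word = ToyIsExtended::update(word, 1);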
+ uint32_t small_layout_1 = READ_UINT32_FIELD(this, kSmallLayout1Offset); + small_layout_1 = IsExtendedField::update(small_layout_1, true); + WRITE_INT32_FIELD(this, kSmallLayout1Offset, small_layout_1); + + // Initialize the extended layout fields. + int extended_header_offset = get_extended_section_header_offset(); + WRITE_INT_FIELD(this, extended_header_offset + kExtendedInt64CountOffset, + extended.count_of(INT64)); + WRITE_INT_FIELD(this, extended_header_offset + kExtendedCodePtrCountOffset, + extended.count_of(CODE_PTR)); + WRITE_INT_FIELD(this, extended_header_offset + kExtendedHeapPtrCountOffset, + extended.count_of(HEAP_PTR)); + WRITE_INT_FIELD(this, extended_header_offset + kExtendedInt32CountOffset, + extended.count_of(INT32)); +} + + +int ConstantPoolArray::size() { + NumberOfEntries small(this, SMALL_SECTION); + if (!is_extended_layout()) { + return SizeFor(small); + } else { + NumberOfEntries extended(this, EXTENDED_SECTION); + return SizeForExtended(small, extended); + } +} + + +int ConstantPoolArray::length() { + uint32_t small_layout_2 = READ_UINT32_FIELD(this, kSmallLayout2Offset); + int length = TotalCountField::decode(small_layout_2); + if (is_extended_layout()) { + length += number_of_entries(INT64, EXTENDED_SECTION) + + number_of_entries(CODE_PTR, EXTENDED_SECTION) + + number_of_entries(HEAP_PTR, EXTENDED_SECTION) + + number_of_entries(INT32, EXTENDED_SECTION); + } + return length; } @@ -2350,8 +2687,8 @@ WriteBarrierMode HeapObject::GetWriteBarrierMode( void FixedArray::set(int index, Object* value, WriteBarrierMode mode) { - ASSERT(map() != GetHeap()->fixed_cow_array_map()); - ASSERT(index >= 0 && index < this->length()); + DCHECK(map() != GetHeap()->fixed_cow_array_map()); + DCHECK(index >= 0 && index < this->length()); int offset = kHeaderSize + index * kPointerSize; WRITE_FIELD(this, offset, value); CONDITIONAL_WRITE_BARRIER(GetHeap(), this, offset, value, mode); @@ -2361,8 +2698,8 @@ void FixedArray::set(int index, void FixedArray::NoIncrementalWriteBarrierSet(FixedArray* array, int index, Object* value) { - ASSERT(array->map() != array->GetHeap()->fixed_cow_array_map()); - ASSERT(index >= 0 && index < array->length()); + DCHECK(array->map() != array->GetHeap()->fixed_cow_array_map()); + DCHECK(index >= 0 && index < array->length()); int offset = kHeaderSize + index * kPointerSize; WRITE_FIELD(array, offset, value); Heap* heap = array->GetHeap(); @@ -2375,17 +2712,17 @@ void FixedArray::NoIncrementalWriteBarrierSet(FixedArray* array, void FixedArray::NoWriteBarrierSet(FixedArray* array, int index, Object* value) { - ASSERT(array->map() != array->GetHeap()->fixed_cow_array_map()); - ASSERT(index >= 0 && index < array->length()); - ASSERT(!array->GetHeap()->InNewSpace(value)); + DCHECK(array->map() != array->GetHeap()->fixed_cow_array_map()); + DCHECK(index >= 0 && index < array->length()); + DCHECK(!array->GetHeap()->InNewSpace(value)); WRITE_FIELD(array, kHeaderSize + index * kPointerSize, value); } void FixedArray::set_undefined(int index) { - ASSERT(map() != GetHeap()->fixed_cow_array_map()); - ASSERT(index >= 0 && index < this->length()); - ASSERT(!GetHeap()->InNewSpace(GetHeap()->undefined_value())); + DCHECK(map() != GetHeap()->fixed_cow_array_map()); + DCHECK(index >= 0 && index < this->length()); + DCHECK(!GetHeap()->InNewSpace(GetHeap()->undefined_value())); WRITE_FIELD(this, kHeaderSize + index * kPointerSize, GetHeap()->undefined_value()); @@ -2393,8 +2730,8 @@ void FixedArray::set_undefined(int index) { void FixedArray::set_null(int index) { - 
ASSERT(index >= 0 && index < this->length()); - ASSERT(!GetHeap()->InNewSpace(GetHeap()->null_value())); + DCHECK(index >= 0 && index < this->length()); + DCHECK(!GetHeap()->InNewSpace(GetHeap()->null_value())); WRITE_FIELD(this, kHeaderSize + index * kPointerSize, GetHeap()->null_value()); @@ -2402,9 +2739,9 @@ void FixedArray::set_null(int index) { void FixedArray::set_the_hole(int index) { - ASSERT(map() != GetHeap()->fixed_cow_array_map()); - ASSERT(index >= 0 && index < this->length()); - ASSERT(!GetHeap()->InNewSpace(GetHeap()->the_hole_value())); + DCHECK(map() != GetHeap()->fixed_cow_array_map()); + DCHECK(index >= 0 && index < this->length()); + DCHECK(!GetHeap()->InNewSpace(GetHeap()->the_hole_value())); WRITE_FIELD(this, kHeaderSize + index * kPointerSize, GetHeap()->the_hole_value()); @@ -2424,7 +2761,7 @@ Object** FixedArray::data_start() { bool DescriptorArray::IsEmpty() { - ASSERT(length() >= kFirstIndex || + DCHECK(length() >= kFirstIndex || this == GetHeap()->empty_descriptor_array()); return length() < kFirstIndex; } @@ -2444,7 +2781,7 @@ int BinarySearch(T* array, Name* name, int low, int high, int valid_entries) { uint32_t hash = name->Hash(); int limit = high; - ASSERT(low <= high); + DCHECK(low <= high); while (low != high) { int mid = (low + high) / 2; @@ -2488,7 +2825,7 @@ int LinearSearch(T* array, Name* name, int len, int valid_entries) { if (current_hash == hash && entry->Equals(name)) return sorted_index; } } else { - ASSERT(len >= valid_entries); + DCHECK(len >= valid_entries); for (int number = 0; number < valid_entries; number++) { Name* entry = array->GetKey(number); uint32_t current_hash = entry->Hash(); @@ -2502,9 +2839,9 @@ int LinearSearch(T* array, Name* name, int len, int valid_entries) { template<SearchMode search_mode, typename T> int Search(T* array, Name* name, int valid_entries) { if (search_mode == VALID_ENTRIES) { - SLOW_ASSERT(array->IsSortedNoDuplicates(valid_entries)); + SLOW_DCHECK(array->IsSortedNoDuplicates(valid_entries)); } else { - SLOW_ASSERT(array->IsSortedNoDuplicates()); + SLOW_DCHECK(array->IsSortedNoDuplicates()); } int nof = array->number_of_entries(); @@ -2572,17 +2909,20 @@ void Map::LookupTransition(JSObject* holder, FixedArrayBase* Map::GetInitialElements() { if (has_fast_smi_or_object_elements() || has_fast_double_elements()) { - ASSERT(!GetHeap()->InNewSpace(GetHeap()->empty_fixed_array())); + DCHECK(!GetHeap()->InNewSpace(GetHeap()->empty_fixed_array())); return GetHeap()->empty_fixed_array(); } else if (has_external_array_elements()) { ExternalArray* empty_array = GetHeap()->EmptyExternalArrayForMap(this); - ASSERT(!GetHeap()->InNewSpace(empty_array)); + DCHECK(!GetHeap()->InNewSpace(empty_array)); return empty_array; } else if (has_fixed_typed_array_elements()) { FixedTypedArrayBase* empty_array = GetHeap()->EmptyFixedTypedArrayForMap(this); - ASSERT(!GetHeap()->InNewSpace(empty_array)); + DCHECK(!GetHeap()->InNewSpace(empty_array)); return empty_array; + } else if (has_dictionary_elements()) { + DCHECK(!GetHeap()->InNewSpace(GetHeap()->empty_slow_element_dictionary())); + return GetHeap()->empty_slow_element_dictionary(); } else { UNREACHABLE(); } @@ -2591,7 +2931,7 @@ FixedArrayBase* Map::GetInitialElements() { Object** DescriptorArray::GetKeySlot(int descriptor_number) { - ASSERT(descriptor_number < number_of_descriptors()); + DCHECK(descriptor_number < number_of_descriptors()); return RawFieldOfElementAt(ToKeyIndex(descriptor_number)); } @@ -2607,7 +2947,7 @@ Object** DescriptorArray::GetDescriptorEndSlot(int 
descriptor_number) { Name* DescriptorArray::GetKey(int descriptor_number) { - ASSERT(descriptor_number < number_of_descriptors()); + DCHECK(descriptor_number < number_of_descriptors()); return Name::cast(get(ToKeyIndex(descriptor_number))); } @@ -2630,7 +2970,7 @@ void DescriptorArray::SetSortedKey(int descriptor_index, int pointer) { void DescriptorArray::SetRepresentation(int descriptor_index, Representation representation) { - ASSERT(!representation.IsNone()); + DCHECK(!representation.IsNone()); PropertyDetails details = GetDetails(descriptor_index); set(ToDetailsIndex(descriptor_index), details.CopyWithRepresentation(representation).AsSmi()); @@ -2638,13 +2978,18 @@ void DescriptorArray::SetRepresentation(int descriptor_index, Object** DescriptorArray::GetValueSlot(int descriptor_number) { - ASSERT(descriptor_number < number_of_descriptors()); + DCHECK(descriptor_number < number_of_descriptors()); return RawFieldOfElementAt(ToValueIndex(descriptor_number)); } +int DescriptorArray::GetValueOffset(int descriptor_number) { + return OffsetOfElementAt(ToValueIndex(descriptor_number)); +} + + Object* DescriptorArray::GetValue(int descriptor_number) { - ASSERT(descriptor_number < number_of_descriptors()); + DCHECK(descriptor_number < number_of_descriptors()); return get(ToValueIndex(descriptor_number)); } @@ -2655,7 +3000,7 @@ void DescriptorArray::SetValue(int descriptor_index, Object* value) { PropertyDetails DescriptorArray::GetDetails(int descriptor_number) { - ASSERT(descriptor_number < number_of_descriptors()); + DCHECK(descriptor_number < number_of_descriptors()); Object* details = get(ToDetailsIndex(descriptor_number)); return PropertyDetails(Smi::cast(details)); } @@ -2667,13 +3012,13 @@ PropertyType DescriptorArray::GetType(int descriptor_number) { int DescriptorArray::GetFieldIndex(int descriptor_number) { - ASSERT(GetDetails(descriptor_number).type() == FIELD); + DCHECK(GetDetails(descriptor_number).type() == FIELD); return GetDetails(descriptor_number).field_index(); } HeapType* DescriptorArray::GetFieldType(int descriptor_number) { - ASSERT(GetDetails(descriptor_number).type() == FIELD); + DCHECK(GetDetails(descriptor_number).type() == FIELD); return HeapType::cast(GetValue(descriptor_number)); } @@ -2684,13 +3029,13 @@ Object* DescriptorArray::GetConstant(int descriptor_number) { Object* DescriptorArray::GetCallbacksObject(int descriptor_number) { - ASSERT(GetType(descriptor_number) == CALLBACKS); + DCHECK(GetType(descriptor_number) == CALLBACKS); return GetValue(descriptor_number); } AccessorDescriptor* DescriptorArray::GetCallbacks(int descriptor_number) { - ASSERT(GetType(descriptor_number) == CALLBACKS); + DCHECK(GetType(descriptor_number) == CALLBACKS); Foreign* p = Foreign::cast(GetCallbacksObject(descriptor_number)); return reinterpret_cast<AccessorDescriptor*>(p->foreign_address()); } @@ -2707,7 +3052,7 @@ void DescriptorArray::Set(int descriptor_number, Descriptor* desc, const WhitenessWitness&) { // Range check. - ASSERT(descriptor_number < number_of_descriptors()); + DCHECK(descriptor_number < number_of_descriptors()); NoIncrementalWriteBarrierSet(this, ToKeyIndex(descriptor_number), @@ -2723,7 +3068,7 @@ void DescriptorArray::Set(int descriptor_number, void DescriptorArray::Set(int descriptor_number, Descriptor* desc) { // Range check. 
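// The range check below bounds descriptor_number against the number of
// (key, details, value) triplets stored inline after the array's header
// slots; the ToKeyIndex/ToDetailsIndex/ToValueIndex helpers used nearby
// compute slot positions inside those triplets. A standalone sketch of
// that indexing scheme; kToyFirstIndex and the slot order within a triplet
// are assumptions inferred from the accessor pattern, not quoted from
// objects.h:
static const int kToyFirstIndex = 2;  // e.g. length slot + enum cache slot
static const int kToyEntrySize = 3;   // key, details, value

static int ToyToKeyIndex(int n) { return kToyFirstIndex + n * kToyEntrySize; }
static int ToyToDetailsIndex(int n) { return ToyToKeyIndex(n) + 1; }
static int ToyToValueIndex(int n) { return ToyToKeyIndex(n) + 2; }
// With this layout, descriptor 0 occupies slots 2..4, descriptor 1 occupies
// slots 5..7, and IsEmpty() reduces to length() < kToyFirstIndex.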
- ASSERT(descriptor_number < number_of_descriptors()); + DCHECK(descriptor_number < number_of_descriptors()); set(ToKeyIndex(descriptor_number), *desc->GetKey()); set(ToValueIndex(descriptor_number), *desc->GetValue()); @@ -2782,7 +3127,7 @@ void DescriptorArray::SwapSortedKeys(int first, int second) { DescriptorArray::WhitenessWitness::WhitenessWitness(DescriptorArray* array) : marking_(array->GetHeap()->incremental_marking()) { marking_->EnterNoMarkingScope(); - ASSERT(!marking_->IsMarking() || + DCHECK(!marking_->IsMarking() || Marking::Color(array) == Marking::WHITE_OBJECT); } @@ -2837,7 +3182,7 @@ bool SeededNumberDictionary::requires_slow_elements() { } uint32_t SeededNumberDictionary::max_number_key() { - ASSERT(!requires_slow_elements()); + DCHECK(!requires_slow_elements()); Object* max_index_object = get(kMaxNumberKeyIndex); if (!max_index_object->IsSmi()) return 0; uint32_t value = static_cast<uint32_t>(Smi::cast(max_index_object)->value()); @@ -2853,84 +3198,108 @@ void SeededNumberDictionary::set_requires_slow_elements() { // Cast operations -CAST_ACCESSOR(FixedArray) -CAST_ACCESSOR(FixedDoubleArray) -CAST_ACCESSOR(FixedTypedArrayBase) +CAST_ACCESSOR(AccessorInfo) +CAST_ACCESSOR(ByteArray) +CAST_ACCESSOR(Cell) +CAST_ACCESSOR(Code) +CAST_ACCESSOR(CodeCacheHashTable) +CAST_ACCESSOR(CompilationCacheTable) +CAST_ACCESSOR(ConsString) CAST_ACCESSOR(ConstantPoolArray) -CAST_ACCESSOR(DescriptorArray) CAST_ACCESSOR(DeoptimizationInputData) CAST_ACCESSOR(DeoptimizationOutputData) CAST_ACCESSOR(DependentCode) -CAST_ACCESSOR(StringTable) -CAST_ACCESSOR(JSFunctionResultCache) -CAST_ACCESSOR(NormalizedMapCache) -CAST_ACCESSOR(ScopeInfo) -CAST_ACCESSOR(CompilationCacheTable) -CAST_ACCESSOR(CodeCacheHashTable) -CAST_ACCESSOR(PolymorphicCodeCacheHashTable) -CAST_ACCESSOR(MapCache) -CAST_ACCESSOR(String) -CAST_ACCESSOR(SeqString) -CAST_ACCESSOR(SeqOneByteString) -CAST_ACCESSOR(SeqTwoByteString) -CAST_ACCESSOR(SlicedString) -CAST_ACCESSOR(ConsString) -CAST_ACCESSOR(ExternalString) +CAST_ACCESSOR(DescriptorArray) +CAST_ACCESSOR(ExternalArray) CAST_ACCESSOR(ExternalAsciiString) +CAST_ACCESSOR(ExternalFloat32Array) +CAST_ACCESSOR(ExternalFloat64Array) +CAST_ACCESSOR(ExternalInt16Array) +CAST_ACCESSOR(ExternalInt32Array) +CAST_ACCESSOR(ExternalInt8Array) +CAST_ACCESSOR(ExternalString) CAST_ACCESSOR(ExternalTwoByteString) -CAST_ACCESSOR(Symbol) -CAST_ACCESSOR(Name) -CAST_ACCESSOR(JSReceiver) -CAST_ACCESSOR(JSObject) -CAST_ACCESSOR(Smi) -CAST_ACCESSOR(HeapObject) -CAST_ACCESSOR(HeapNumber) -CAST_ACCESSOR(Oddball) -CAST_ACCESSOR(Cell) -CAST_ACCESSOR(PropertyCell) -CAST_ACCESSOR(SharedFunctionInfo) -CAST_ACCESSOR(Map) -CAST_ACCESSOR(JSFunction) +CAST_ACCESSOR(ExternalUint16Array) +CAST_ACCESSOR(ExternalUint32Array) +CAST_ACCESSOR(ExternalUint8Array) +CAST_ACCESSOR(ExternalUint8ClampedArray) +CAST_ACCESSOR(FixedArray) +CAST_ACCESSOR(FixedArrayBase) +CAST_ACCESSOR(FixedDoubleArray) +CAST_ACCESSOR(FixedTypedArrayBase) +CAST_ACCESSOR(Foreign) +CAST_ACCESSOR(FreeSpace) CAST_ACCESSOR(GlobalObject) -CAST_ACCESSOR(JSGlobalProxy) -CAST_ACCESSOR(JSGlobalObject) -CAST_ACCESSOR(JSBuiltinsObject) -CAST_ACCESSOR(Code) +CAST_ACCESSOR(HeapObject) CAST_ACCESSOR(JSArray) CAST_ACCESSOR(JSArrayBuffer) CAST_ACCESSOR(JSArrayBufferView) -CAST_ACCESSOR(JSTypedArray) +CAST_ACCESSOR(JSBuiltinsObject) CAST_ACCESSOR(JSDataView) -CAST_ACCESSOR(JSRegExp) -CAST_ACCESSOR(JSProxy) +CAST_ACCESSOR(JSDate) +CAST_ACCESSOR(JSFunction) CAST_ACCESSOR(JSFunctionProxy) -CAST_ACCESSOR(JSSet) +CAST_ACCESSOR(JSFunctionResultCache) 
+CAST_ACCESSOR(JSGeneratorObject) +CAST_ACCESSOR(JSGlobalObject) +CAST_ACCESSOR(JSGlobalProxy) CAST_ACCESSOR(JSMap) -CAST_ACCESSOR(JSSetIterator) CAST_ACCESSOR(JSMapIterator) +CAST_ACCESSOR(JSMessageObject) +CAST_ACCESSOR(JSModule) +CAST_ACCESSOR(JSObject) +CAST_ACCESSOR(JSProxy) +CAST_ACCESSOR(JSReceiver) +CAST_ACCESSOR(JSRegExp) +CAST_ACCESSOR(JSSet) +CAST_ACCESSOR(JSSetIterator) +CAST_ACCESSOR(JSTypedArray) +CAST_ACCESSOR(JSValue) CAST_ACCESSOR(JSWeakMap) CAST_ACCESSOR(JSWeakSet) -CAST_ACCESSOR(Foreign) -CAST_ACCESSOR(ByteArray) -CAST_ACCESSOR(FreeSpace) -CAST_ACCESSOR(ExternalArray) -CAST_ACCESSOR(ExternalInt8Array) -CAST_ACCESSOR(ExternalUint8Array) -CAST_ACCESSOR(ExternalInt16Array) -CAST_ACCESSOR(ExternalUint16Array) -CAST_ACCESSOR(ExternalInt32Array) -CAST_ACCESSOR(ExternalUint32Array) -CAST_ACCESSOR(ExternalFloat32Array) -CAST_ACCESSOR(ExternalFloat64Array) -CAST_ACCESSOR(ExternalUint8ClampedArray) +CAST_ACCESSOR(Map) +CAST_ACCESSOR(MapCache) +CAST_ACCESSOR(Name) +CAST_ACCESSOR(NameDictionary) +CAST_ACCESSOR(NormalizedMapCache) +CAST_ACCESSOR(Object) +CAST_ACCESSOR(ObjectHashTable) +CAST_ACCESSOR(Oddball) +CAST_ACCESSOR(OrderedHashMap) +CAST_ACCESSOR(OrderedHashSet) +CAST_ACCESSOR(PolymorphicCodeCacheHashTable) +CAST_ACCESSOR(PropertyCell) +CAST_ACCESSOR(ScopeInfo) +CAST_ACCESSOR(SeededNumberDictionary) +CAST_ACCESSOR(SeqOneByteString) +CAST_ACCESSOR(SeqString) +CAST_ACCESSOR(SeqTwoByteString) +CAST_ACCESSOR(SharedFunctionInfo) +CAST_ACCESSOR(SlicedString) +CAST_ACCESSOR(Smi) +CAST_ACCESSOR(String) +CAST_ACCESSOR(StringTable) CAST_ACCESSOR(Struct) -CAST_ACCESSOR(AccessorInfo) +CAST_ACCESSOR(Symbol) +CAST_ACCESSOR(UnseededNumberDictionary) +CAST_ACCESSOR(WeakHashTable) + template <class Traits> FixedTypedArray<Traits>* FixedTypedArray<Traits>::cast(Object* object) { - SLOW_ASSERT(object->IsHeapObject() && - HeapObject::cast(object)->map()->instance_type() == - Traits::kInstanceType); + SLOW_DCHECK(object->IsHeapObject() && + HeapObject::cast(object)->map()->instance_type() == + Traits::kInstanceType); + return reinterpret_cast<FixedTypedArray<Traits>*>(object); +} + + +template <class Traits> +const FixedTypedArray<Traits>* +FixedTypedArray<Traits>::cast(const Object* object) { + SLOW_DCHECK(object->IsHeapObject() && + HeapObject::cast(object)->map()->instance_type() == + Traits::kInstanceType); return reinterpret_cast<FixedTypedArray<Traits>*>(object); } @@ -2943,11 +3312,19 @@ FixedTypedArray<Traits>* FixedTypedArray<Traits>::cast(Object* object) { template <typename Derived, typename Shape, typename Key> HashTable<Derived, Shape, Key>* HashTable<Derived, Shape, Key>::cast(Object* obj) { - ASSERT(obj->IsHashTable()); + SLOW_DCHECK(obj->IsHashTable()); return reinterpret_cast<HashTable*>(obj); } +template <typename Derived, typename Shape, typename Key> +const HashTable<Derived, Shape, Key>* +HashTable<Derived, Shape, Key>::cast(const Object* obj) { + SLOW_DCHECK(obj->IsHashTable()); + return reinterpret_cast<const HashTable*>(obj); +} + + SMI_ACCESSORS(FixedArrayBase, length, kLengthOffset) SYNCHRONIZED_SMI_ACCESSORS(FixedArrayBase, length, kLengthOffset) @@ -2995,6 +3372,7 @@ bool Name::Equals(Handle<Name> one, Handle<Name> two) { ACCESSORS(Symbol, name, Object, kNameOffset) ACCESSORS(Symbol, flags, Smi, kFlagsOffset) BOOL_ACCESSORS(Symbol, flags, is_private, kPrivateBit) +BOOL_ACCESSORS(Symbol, flags, is_own, kOwnBit) bool String::Equals(String* other) { @@ -3024,7 +3402,7 @@ Handle<String> String::Flatten(Handle<String> string, PretenureFlag pretenure) { uint16_t 
String::Get(int index) { - ASSERT(index >= 0 && index < length()); + DCHECK(index >= 0 && index < length()); switch (StringShape(this).full_representation_tag()) { case kSeqStringTag | kOneByteStringTag: return SeqOneByteString::cast(this)->SeqOneByteStringGet(index); @@ -3050,8 +3428,8 @@ uint16_t String::Get(int index) { void String::Set(int index, uint16_t value) { - ASSERT(index >= 0 && index < length()); - ASSERT(StringShape(this).IsSequential()); + DCHECK(index >= 0 && index < length()); + DCHECK(StringShape(this).IsSequential()); return this->IsOneByteRepresentation() ? SeqOneByteString::cast(this)->SeqOneByteStringSet(index, value) @@ -3068,8 +3446,8 @@ bool String::IsFlat() { String* String::GetUnderlying() { // Giving direct access to underlying string only makes sense if the // wrapping string is already flattened. - ASSERT(this->IsFlat()); - ASSERT(StringShape(this).IsIndirect()); + DCHECK(this->IsFlat()); + DCHECK(StringShape(this).IsIndirect()); STATIC_ASSERT(ConsString::kFirstOffset == SlicedString::kParentOffset); const int kUnderlyingOffset = SlicedString::kParentOffset; return String::cast(READ_FIELD(this, kUnderlyingOffset)); @@ -3082,7 +3460,7 @@ ConsString* String::VisitFlat(Visitor* visitor, const int offset) { int slice_offset = offset; const int length = string->length(); - ASSERT(offset <= length); + DCHECK(offset <= length); while (true) { int32_t type = string->map()->instance_type(); switch (type & (kStringRepresentationMask | kStringEncodingMask)) { @@ -3131,13 +3509,13 @@ ConsString* String::VisitFlat(Visitor* visitor, uint16_t SeqOneByteString::SeqOneByteStringGet(int index) { - ASSERT(index >= 0 && index < length()); + DCHECK(index >= 0 && index < length()); return READ_BYTE_FIELD(this, kHeaderSize + index * kCharSize); } void SeqOneByteString::SeqOneByteStringSet(int index, uint16_t value) { - ASSERT(index >= 0 && index < length() && value <= kMaxOneByteCharCode); + DCHECK(index >= 0 && index < length() && value <= kMaxOneByteCharCode); WRITE_BYTE_FIELD(this, kHeaderSize + index * kCharSize, static_cast<byte>(value)); } @@ -3164,13 +3542,13 @@ uc16* SeqTwoByteString::GetChars() { uint16_t SeqTwoByteString::SeqTwoByteStringGet(int index) { - ASSERT(index >= 0 && index < length()); + DCHECK(index >= 0 && index < length()); return READ_SHORT_FIELD(this, kHeaderSize + index * kShortSize); } void SeqTwoByteString::SeqTwoByteStringSet(int index, uint16_t value) { - ASSERT(index >= 0 && index < length()); + DCHECK(index >= 0 && index < length()); WRITE_SHORT_FIELD(this, kHeaderSize + index * kShortSize, value); } @@ -3191,7 +3569,7 @@ String* SlicedString::parent() { void SlicedString::set_parent(String* parent, WriteBarrierMode mode) { - ASSERT(parent->IsSeqString() || parent->IsExternalString()); + DCHECK(parent->IsSeqString() || parent->IsExternalString()); WRITE_FIELD(this, kParentOffset, parent); CONDITIONAL_WRITE_BARRIER(GetHeap(), this, kParentOffset, parent, mode); } @@ -3253,7 +3631,7 @@ void ExternalAsciiString::update_data_cache() { void ExternalAsciiString::set_resource( const ExternalAsciiString::Resource* resource) { - ASSERT(IsAligned(reinterpret_cast<intptr_t>(resource), kPointerSize)); + DCHECK(IsAligned(reinterpret_cast<intptr_t>(resource), kPointerSize)); *reinterpret_cast<const Resource**>( FIELD_ADDR(this, kResourceOffset)) = resource; if (resource != NULL) update_data_cache(); @@ -3266,7 +3644,7 @@ const uint8_t* ExternalAsciiString::GetChars() { uint16_t ExternalAsciiString::ExternalAsciiStringGet(int index) { - ASSERT(index >= 0 && index < 
length()); + DCHECK(index >= 0 && index < length()); return GetChars()[index]; } @@ -3298,7 +3676,7 @@ const uint16_t* ExternalTwoByteString::GetChars() { uint16_t ExternalTwoByteString::ExternalTwoByteStringGet(int index) { - ASSERT(index >= 0 && index < length()); + DCHECK(index >= 0 && index < length()); return GetChars()[index]; } @@ -3331,17 +3709,17 @@ void ConsStringIteratorOp::AdjustMaximumDepth() { void ConsStringIteratorOp::Pop() { - ASSERT(depth_ > 0); - ASSERT(depth_ <= maximum_depth_); + DCHECK(depth_ > 0); + DCHECK(depth_ <= maximum_depth_); depth_--; } uint16_t StringCharacterStream::GetNext() { - ASSERT(buffer8_ != NULL && end_ != NULL); + DCHECK(buffer8_ != NULL && end_ != NULL); // Advance cursor if needed. if (buffer8_ == end_) HasMore(); - ASSERT(buffer8_ < end_); + DCHECK(buffer8_ < end_); return is_one_byte_ ? *buffer8_++ : *buffer16_++; } @@ -3371,10 +3749,10 @@ bool StringCharacterStream::HasMore() { if (buffer8_ != end_) return true; int offset; String* string = op_->Next(&offset); - ASSERT_EQ(offset, 0); + DCHECK_EQ(offset, 0); if (string == NULL) return false; String::VisitFlat(this, string); - ASSERT(buffer8_ != end_); + DCHECK(buffer8_ != end_); return true; } @@ -3432,25 +3810,25 @@ void JSFunctionResultCache::set_finger_index(int finger_index) { byte ByteArray::get(int index) { - ASSERT(index >= 0 && index < this->length()); + DCHECK(index >= 0 && index < this->length()); return READ_BYTE_FIELD(this, kHeaderSize + index * kCharSize); } void ByteArray::set(int index, byte value) { - ASSERT(index >= 0 && index < this->length()); + DCHECK(index >= 0 && index < this->length()); WRITE_BYTE_FIELD(this, kHeaderSize + index * kCharSize, value); } int ByteArray::get_int(int index) { - ASSERT(index >= 0 && (index * kIntSize) < this->length()); + DCHECK(index >= 0 && (index * kIntSize) < this->length()); return READ_INT_FIELD(this, kHeaderSize + index * kIntSize); } ByteArray* ByteArray::FromDataStartAddress(Address address) { - ASSERT_TAG_ALIGNED(address); + DCHECK_TAG_ALIGNED(address); return reinterpret_cast<ByteArray*>(address - kHeaderSize + kHeapObjectTag); } @@ -3466,7 +3844,7 @@ uint8_t* ExternalUint8ClampedArray::external_uint8_clamped_pointer() { uint8_t ExternalUint8ClampedArray::get_scalar(int index) { - ASSERT((index >= 0) && (index < this->length())); + DCHECK((index >= 0) && (index < this->length())); uint8_t* ptr = external_uint8_clamped_pointer(); return ptr[index]; } @@ -3481,13 +3859,13 @@ Handle<Object> ExternalUint8ClampedArray::get( void ExternalUint8ClampedArray::set(int index, uint8_t value) { - ASSERT((index >= 0) && (index < this->length())); + DCHECK((index >= 0) && (index < this->length())); uint8_t* ptr = external_uint8_clamped_pointer(); ptr[index] = value; } -void* ExternalArray::external_pointer() { +void* ExternalArray::external_pointer() const { intptr_t ptr = READ_INTPTR_FIELD(this, kExternalPointerOffset); return reinterpret_cast<void*>(ptr); } @@ -3500,7 +3878,7 @@ void ExternalArray::set_external_pointer(void* value, WriteBarrierMode mode) { int8_t ExternalInt8Array::get_scalar(int index) { - ASSERT((index >= 0) && (index < this->length())); + DCHECK((index >= 0) && (index < this->length())); int8_t* ptr = static_cast<int8_t*>(external_pointer()); return ptr[index]; } @@ -3514,14 +3892,14 @@ Handle<Object> ExternalInt8Array::get(Handle<ExternalInt8Array> array, void ExternalInt8Array::set(int index, int8_t value) { - ASSERT((index >= 0) && (index < this->length())); + DCHECK((index >= 0) && (index < this->length())); int8_t* ptr = 
static_cast<int8_t*>(external_pointer()); ptr[index] = value; } uint8_t ExternalUint8Array::get_scalar(int index) { - ASSERT((index >= 0) && (index < this->length())); + DCHECK((index >= 0) && (index < this->length())); uint8_t* ptr = static_cast<uint8_t*>(external_pointer()); return ptr[index]; } @@ -3535,14 +3913,14 @@ Handle<Object> ExternalUint8Array::get(Handle<ExternalUint8Array> array, void ExternalUint8Array::set(int index, uint8_t value) { - ASSERT((index >= 0) && (index < this->length())); + DCHECK((index >= 0) && (index < this->length())); uint8_t* ptr = static_cast<uint8_t*>(external_pointer()); ptr[index] = value; } int16_t ExternalInt16Array::get_scalar(int index) { - ASSERT((index >= 0) && (index < this->length())); + DCHECK((index >= 0) && (index < this->length())); int16_t* ptr = static_cast<int16_t*>(external_pointer()); return ptr[index]; } @@ -3556,14 +3934,14 @@ Handle<Object> ExternalInt16Array::get(Handle<ExternalInt16Array> array, void ExternalInt16Array::set(int index, int16_t value) { - ASSERT((index >= 0) && (index < this->length())); + DCHECK((index >= 0) && (index < this->length())); int16_t* ptr = static_cast<int16_t*>(external_pointer()); ptr[index] = value; } uint16_t ExternalUint16Array::get_scalar(int index) { - ASSERT((index >= 0) && (index < this->length())); + DCHECK((index >= 0) && (index < this->length())); uint16_t* ptr = static_cast<uint16_t*>(external_pointer()); return ptr[index]; } @@ -3577,14 +3955,14 @@ Handle<Object> ExternalUint16Array::get(Handle<ExternalUint16Array> array, void ExternalUint16Array::set(int index, uint16_t value) { - ASSERT((index >= 0) && (index < this->length())); + DCHECK((index >= 0) && (index < this->length())); uint16_t* ptr = static_cast<uint16_t*>(external_pointer()); ptr[index] = value; } int32_t ExternalInt32Array::get_scalar(int index) { - ASSERT((index >= 0) && (index < this->length())); + DCHECK((index >= 0) && (index < this->length())); int32_t* ptr = static_cast<int32_t*>(external_pointer()); return ptr[index]; } @@ -3598,14 +3976,14 @@ Handle<Object> ExternalInt32Array::get(Handle<ExternalInt32Array> array, void ExternalInt32Array::set(int index, int32_t value) { - ASSERT((index >= 0) && (index < this->length())); + DCHECK((index >= 0) && (index < this->length())); int32_t* ptr = static_cast<int32_t*>(external_pointer()); ptr[index] = value; } uint32_t ExternalUint32Array::get_scalar(int index) { - ASSERT((index >= 0) && (index < this->length())); + DCHECK((index >= 0) && (index < this->length())); uint32_t* ptr = static_cast<uint32_t*>(external_pointer()); return ptr[index]; } @@ -3619,14 +3997,14 @@ Handle<Object> ExternalUint32Array::get(Handle<ExternalUint32Array> array, void ExternalUint32Array::set(int index, uint32_t value) { - ASSERT((index >= 0) && (index < this->length())); + DCHECK((index >= 0) && (index < this->length())); uint32_t* ptr = static_cast<uint32_t*>(external_pointer()); ptr[index] = value; } float ExternalFloat32Array::get_scalar(int index) { - ASSERT((index >= 0) && (index < this->length())); + DCHECK((index >= 0) && (index < this->length())); float* ptr = static_cast<float*>(external_pointer()); return ptr[index]; } @@ -3639,14 +4017,14 @@ Handle<Object> ExternalFloat32Array::get(Handle<ExternalFloat32Array> array, void ExternalFloat32Array::set(int index, float value) { - ASSERT((index >= 0) && (index < this->length())); + DCHECK((index >= 0) && (index < this->length())); float* ptr = static_cast<float*>(external_pointer()); ptr[index] = value; } double 
ExternalFloat64Array::get_scalar(int index) { - ASSERT((index >= 0) && (index < this->length())); + DCHECK((index >= 0) && (index < this->length())); double* ptr = static_cast<double*>(external_pointer()); return ptr[index]; } @@ -3659,7 +4037,7 @@ Handle<Object> ExternalFloat64Array::get(Handle<ExternalFloat64Array> array, void ExternalFloat64Array::set(int index, double value) { - ASSERT((index >= 0) && (index < this->length())); + DCHECK((index >= 0) && (index < this->length())); double* ptr = static_cast<double*>(external_pointer()); ptr[index] = value; } @@ -3670,10 +4048,9 @@ void* FixedTypedArrayBase::DataPtr() { } -int FixedTypedArrayBase::DataSize() { - InstanceType instance_type = map()->instance_type(); +int FixedTypedArrayBase::DataSize(InstanceType type) { int element_size; - switch (instance_type) { + switch (type) { #define TYPED_ARRAY_CASE(Type, type, TYPE, ctype, size) \ case FIXED_##TYPE##_ARRAY_TYPE: \ element_size = size; \ @@ -3689,11 +4066,21 @@ int FixedTypedArrayBase::DataSize() { } +int FixedTypedArrayBase::DataSize() { + return DataSize(map()->instance_type()); +} + + int FixedTypedArrayBase::size() { return OBJECT_POINTER_ALIGN(kDataOffset + DataSize()); } +int FixedTypedArrayBase::TypedArraySize(InstanceType type) { + return OBJECT_POINTER_ALIGN(kDataOffset + DataSize(type)); +} + + uint8_t Uint8ArrayTraits::defaultValue() { return 0; } @@ -3716,16 +4103,16 @@ int32_t Int32ArrayTraits::defaultValue() { return 0; } float Float32ArrayTraits::defaultValue() { - return static_cast<float>(OS::nan_value()); + return static_cast<float>(base::OS::nan_value()); } -double Float64ArrayTraits::defaultValue() { return OS::nan_value(); } +double Float64ArrayTraits::defaultValue() { return base::OS::nan_value(); } template <class Traits> typename Traits::ElementType FixedTypedArray<Traits>::get_scalar(int index) { - ASSERT((index >= 0) && (index < this->length())); + DCHECK((index >= 0) && (index < this->length())); ElementType* ptr = reinterpret_cast<ElementType*>( FIELD_ADDR(this, kDataOffset)); return ptr[index]; @@ -3735,14 +4122,14 @@ typename Traits::ElementType FixedTypedArray<Traits>::get_scalar(int index) { template<> inline FixedTypedArray<Float64ArrayTraits>::ElementType FixedTypedArray<Float64ArrayTraits>::get_scalar(int index) { - ASSERT((index >= 0) && (index < this->length())); + DCHECK((index >= 0) && (index < this->length())); return READ_DOUBLE_FIELD(this, ElementOffset(index)); } template <class Traits> void FixedTypedArray<Traits>::set(int index, ElementType value) { - ASSERT((index >= 0) && (index < this->length())); + DCHECK((index >= 0) && (index < this->length())); ElementType* ptr = reinterpret_cast<ElementType*>( FIELD_ADDR(this, kDataOffset)); ptr[index] = value; @@ -3752,7 +4139,7 @@ void FixedTypedArray<Traits>::set(int index, ElementType value) { template<> inline void FixedTypedArray<Float64ArrayTraits>::set( int index, Float64ArrayTraits::ElementType value) { - ASSERT((index >= 0) && (index < this->length())); + DCHECK((index >= 0) && (index < this->length())); WRITE_DOUBLE_FIELD(this, ElementOffset(index), value); } @@ -3822,7 +4209,7 @@ Handle<Object> FixedTypedArray<Traits>::SetValue( } else { // Clamp undefined to the default value. All other types have been // converted to a number type further up in the call chain. 
- ASSERT(value->IsUndefined()); + DCHECK(value->IsUndefined()); } array->set(index, cast_value); } @@ -3882,7 +4269,7 @@ int Map::visitor_id() { void Map::set_visitor_id(int id) { - ASSERT(0 <= id && id < 256); + DCHECK(0 <= id && id < 256); WRITE_BYTE_FIELD(this, kVisitorIdOffset, static_cast<byte>(id)); } @@ -3906,7 +4293,7 @@ int Map::pre_allocated_property_fields() { int Map::GetInObjectPropertyOffset(int index) { // Adjust for the number of properties stored in the object. index -= inobject_properties(); - ASSERT(index < 0); + DCHECK(index <= 0); return instance_size() + (index * kPointerSize); } @@ -3915,7 +4302,7 @@ int HeapObject::SizeFromMap(Map* map) { int instance_size = map->instance_size(); if (instance_size != kVariableSizeSentinel) return instance_size; // Only inline the most frequent cases. - int instance_type = static_cast<int>(map->instance_type()); + InstanceType instance_type = map->instance_type(); if (instance_type == FIXED_ARRAY_TYPE) { return FixedArray::BodyDescriptor::SizeOf(map, this); } @@ -3940,38 +4327,35 @@ int HeapObject::SizeFromMap(Map* map) { reinterpret_cast<FixedDoubleArray*>(this)->length()); } if (instance_type == CONSTANT_POOL_ARRAY_TYPE) { - return ConstantPoolArray::SizeFor( - reinterpret_cast<ConstantPoolArray*>(this)->count_of_int64_entries(), - reinterpret_cast<ConstantPoolArray*>(this)->count_of_code_ptr_entries(), - reinterpret_cast<ConstantPoolArray*>(this)->count_of_heap_ptr_entries(), - reinterpret_cast<ConstantPoolArray*>(this)->count_of_int32_entries()); + return reinterpret_cast<ConstantPoolArray*>(this)->size(); } if (instance_type >= FIRST_FIXED_TYPED_ARRAY_TYPE && instance_type <= LAST_FIXED_TYPED_ARRAY_TYPE) { - return reinterpret_cast<FixedTypedArrayBase*>(this)->size(); + return reinterpret_cast<FixedTypedArrayBase*>( + this)->TypedArraySize(instance_type); } - ASSERT(instance_type == CODE_TYPE); + DCHECK(instance_type == CODE_TYPE); return reinterpret_cast<Code*>(this)->CodeSize(); } void Map::set_instance_size(int value) { - ASSERT_EQ(0, value & (kPointerSize - 1)); + DCHECK_EQ(0, value & (kPointerSize - 1)); value >>= kPointerSizeLog2; - ASSERT(0 <= value && value < 256); + DCHECK(0 <= value && value < 256); NOBARRIER_WRITE_BYTE_FIELD( this, kInstanceSizeOffset, static_cast<byte>(value)); } void Map::set_inobject_properties(int value) { - ASSERT(0 <= value && value < 256); + DCHECK(0 <= value && value < 256); WRITE_BYTE_FIELD(this, kInObjectPropertiesOffset, static_cast<byte>(value)); } void Map::set_pre_allocated_property_fields(int value) { - ASSERT(0 <= value && value < 256); + DCHECK(0 <= value && value < 256); WRITE_BYTE_FIELD(this, kPreAllocatedPropertyFieldsOffset, static_cast<byte>(value)); @@ -4033,12 +4417,12 @@ bool Map::has_non_instance_prototype() { void Map::set_function_with_prototype(bool value) { - set_bit_field3(FunctionWithPrototype::update(bit_field3(), value)); + set_bit_field(FunctionWithPrototype::update(bit_field(), value)); } bool Map::function_with_prototype() { - return FunctionWithPrototype::decode(bit_field3()); + return FunctionWithPrototype::decode(bit_field()); } @@ -4069,28 +4453,15 @@ bool Map::is_extensible() { } -void Map::set_attached_to_shared_function_info(bool value) { - if (value) { - set_bit_field2(bit_field2() | (1 << kAttachedToSharedFunctionInfo)); - } else { - set_bit_field2(bit_field2() & ~(1 << kAttachedToSharedFunctionInfo)); - } -} - -bool Map::attached_to_shared_function_info() { - return ((1 << kAttachedToSharedFunctionInfo) & bit_field2()) != 0; +void 
Map::set_is_prototype_map(bool value) { + set_bit_field2(IsPrototypeMapBits::update(bit_field2(), value)); } - -void Map::set_is_shared(bool value) { - set_bit_field3(IsShared::update(bit_field3(), value)); +bool Map::is_prototype_map() { + return IsPrototypeMapBits::decode(bit_field2()); } -bool Map::is_shared() { - return IsShared::decode(bit_field3()); } - - void Map::set_dictionary_map(bool value) { uint32_t new_bit_field3 = DictionaryMap::update(bit_field3(), value); new_bit_field3 = IsUnstable::update(new_bit_field3, value); @@ -4108,8 +4479,8 @@ Code::Flags Code::flags() { } -void Map::set_owns_descriptors(bool is_shared) { - set_bit_field3(OwnsDescriptors::update(bit_field3(), is_shared)); +void Map::set_owns_descriptors(bool owns_descriptors) { + set_bit_field3(OwnsDescriptors::update(bit_field3(), owns_descriptors)); } @@ -4148,6 +4519,26 @@ bool Map::is_migration_target() { } +void Map::set_done_inobject_slack_tracking(bool value) { + set_bit_field3(DoneInobjectSlackTracking::update(bit_field3(), value)); +} + + +bool Map::done_inobject_slack_tracking() { + return DoneInobjectSlackTracking::decode(bit_field3()); +} + + +void Map::set_construction_count(int value) { + set_bit_field3(ConstructionCount::update(bit_field3(), value)); +} + + +int Map::construction_count() { + return ConstructionCount::decode(bit_field3()); +} + + void Map::freeze() { set_bit_field3(IsFrozen::update(bit_field3(), true)); } @@ -4274,12 +4665,21 @@ Code::Kind Code::kind() { } +bool Code::IsCodeStubOrIC() { + return kind() == STUB || kind() == HANDLER || kind() == LOAD_IC || + kind() == KEYED_LOAD_IC || kind() == CALL_IC || kind() == STORE_IC || + kind() == KEYED_STORE_IC || kind() == BINARY_OP_IC || + kind() == COMPARE_IC || kind() == COMPARE_NIL_IC || + kind() == TO_BOOLEAN_IC; +} + + InlineCacheState Code::ic_state() { InlineCacheState result = ExtractICStateFromFlags(flags()); // Only allow uninitialized or debugger states for non-IC code // objects. This is used in the debugger to determine whether or not // a call to code object has been replaced with a debug break call. 
- ASSERT(is_inline_cache_stub() || + DCHECK(is_inline_cache_stub() || result == UNINITIALIZED || result == DEBUG_STUB); return result; @@ -4287,7 +4687,7 @@ InlineCacheState Code::ic_state() { ExtraICState Code::extra_ic_state() { - ASSERT(is_inline_cache_stub() || ic_state() == DEBUG_STUB); + DCHECK(is_inline_cache_stub() || ic_state() == DEBUG_STUB); return ExtractExtraICStateFromFlags(flags()); } @@ -4314,6 +4714,11 @@ inline bool Code::is_crankshafted() { } +inline bool Code::is_hydrogen_stub() { + return is_crankshafted() && kind() != OPTIMIZED_FUNCTION; +} + + inline void Code::set_is_crankshafted(bool value) { int previous = READ_UINT32_FIELD(this, kKindSpecificFlags2Offset); int updated = IsCrankshaftedField::update(previous, value); @@ -4321,58 +4726,42 @@ inline void Code::set_is_crankshafted(bool value) { } -int Code::major_key() { - ASSERT(has_major_key()); - return StubMajorKeyField::decode( - READ_UINT32_FIELD(this, kKindSpecificFlags2Offset)); -} - - -void Code::set_major_key(int major) { - ASSERT(has_major_key()); - ASSERT(0 <= major && major < 256); - int previous = READ_UINT32_FIELD(this, kKindSpecificFlags2Offset); - int updated = StubMajorKeyField::update(previous, major); - WRITE_UINT32_FIELD(this, kKindSpecificFlags2Offset, updated); +inline bool Code::is_turbofanned() { + DCHECK(kind() == OPTIMIZED_FUNCTION || kind() == STUB); + return IsTurbofannedField::decode( + READ_UINT32_FIELD(this, kKindSpecificFlags1Offset)); } -bool Code::has_major_key() { - return kind() == STUB || - kind() == HANDLER || - kind() == BINARY_OP_IC || - kind() == COMPARE_IC || - kind() == COMPARE_NIL_IC || - kind() == LOAD_IC || - kind() == KEYED_LOAD_IC || - kind() == STORE_IC || - kind() == CALL_IC || - kind() == KEYED_STORE_IC || - kind() == TO_BOOLEAN_IC; +inline void Code::set_is_turbofanned(bool value) { + DCHECK(kind() == OPTIMIZED_FUNCTION || kind() == STUB); + int previous = READ_UINT32_FIELD(this, kKindSpecificFlags1Offset); + int updated = IsTurbofannedField::update(previous, value); + WRITE_UINT32_FIELD(this, kKindSpecificFlags1Offset, updated); } bool Code::optimizable() { - ASSERT_EQ(FUNCTION, kind()); + DCHECK_EQ(FUNCTION, kind()); return READ_BYTE_FIELD(this, kOptimizableOffset) == 1; } void Code::set_optimizable(bool value) { - ASSERT_EQ(FUNCTION, kind()); + DCHECK_EQ(FUNCTION, kind()); WRITE_BYTE_FIELD(this, kOptimizableOffset, value ? 
1 : 0); } bool Code::has_deoptimization_support() { - ASSERT_EQ(FUNCTION, kind()); + DCHECK_EQ(FUNCTION, kind()); byte flags = READ_BYTE_FIELD(this, kFullCodeFlags); return FullCodeFlagsHasDeoptimizationSupportField::decode(flags); } void Code::set_has_deoptimization_support(bool value) { - ASSERT_EQ(FUNCTION, kind()); + DCHECK_EQ(FUNCTION, kind()); byte flags = READ_BYTE_FIELD(this, kFullCodeFlags); flags = FullCodeFlagsHasDeoptimizationSupportField::update(flags, value); WRITE_BYTE_FIELD(this, kFullCodeFlags, flags); @@ -4380,14 +4769,14 @@ void Code::set_has_deoptimization_support(bool value) { bool Code::has_debug_break_slots() { - ASSERT_EQ(FUNCTION, kind()); + DCHECK_EQ(FUNCTION, kind()); byte flags = READ_BYTE_FIELD(this, kFullCodeFlags); return FullCodeFlagsHasDebugBreakSlotsField::decode(flags); } void Code::set_has_debug_break_slots(bool value) { - ASSERT_EQ(FUNCTION, kind()); + DCHECK_EQ(FUNCTION, kind()); byte flags = READ_BYTE_FIELD(this, kFullCodeFlags); flags = FullCodeFlagsHasDebugBreakSlotsField::update(flags, value); WRITE_BYTE_FIELD(this, kFullCodeFlags, flags); @@ -4395,14 +4784,14 @@ void Code::set_has_debug_break_slots(bool value) { bool Code::is_compiled_optimizable() { - ASSERT_EQ(FUNCTION, kind()); + DCHECK_EQ(FUNCTION, kind()); byte flags = READ_BYTE_FIELD(this, kFullCodeFlags); return FullCodeFlagsIsCompiledOptimizable::decode(flags); } void Code::set_compiled_optimizable(bool value) { - ASSERT_EQ(FUNCTION, kind()); + DCHECK_EQ(FUNCTION, kind()); byte flags = READ_BYTE_FIELD(this, kFullCodeFlags); flags = FullCodeFlagsIsCompiledOptimizable::update(flags, value); WRITE_BYTE_FIELD(this, kFullCodeFlags, flags); @@ -4410,33 +4799,48 @@ void Code::set_compiled_optimizable(bool value) { int Code::allow_osr_at_loop_nesting_level() { - ASSERT_EQ(FUNCTION, kind()); - return READ_BYTE_FIELD(this, kAllowOSRAtLoopNestingLevelOffset); + DCHECK_EQ(FUNCTION, kind()); + int fields = READ_UINT32_FIELD(this, kKindSpecificFlags2Offset); + return AllowOSRAtLoopNestingLevelField::decode(fields); } void Code::set_allow_osr_at_loop_nesting_level(int level) { - ASSERT_EQ(FUNCTION, kind()); - ASSERT(level >= 0 && level <= kMaxLoopNestingMarker); - WRITE_BYTE_FIELD(this, kAllowOSRAtLoopNestingLevelOffset, level); + DCHECK_EQ(FUNCTION, kind()); + DCHECK(level >= 0 && level <= kMaxLoopNestingMarker); + int previous = READ_UINT32_FIELD(this, kKindSpecificFlags2Offset); + int updated = AllowOSRAtLoopNestingLevelField::update(previous, level); + WRITE_UINT32_FIELD(this, kKindSpecificFlags2Offset, updated); } int Code::profiler_ticks() { - ASSERT_EQ(FUNCTION, kind()); + DCHECK_EQ(FUNCTION, kind()); return READ_BYTE_FIELD(this, kProfilerTicksOffset); } void Code::set_profiler_ticks(int ticks) { - ASSERT_EQ(FUNCTION, kind()); - ASSERT(ticks < 256); + DCHECK_EQ(FUNCTION, kind()); + DCHECK(ticks < 256); WRITE_BYTE_FIELD(this, kProfilerTicksOffset, ticks); } +int Code::builtin_index() { + DCHECK_EQ(BUILTIN, kind()); + return READ_INT32_FIELD(this, kKindSpecificFlags1Offset); +} + + +void Code::set_builtin_index(int index) { + DCHECK_EQ(BUILTIN, kind()); + WRITE_INT32_FIELD(this, kKindSpecificFlags1Offset, index); +} + + unsigned Code::stack_slots() { - ASSERT(is_crankshafted()); + DCHECK(is_crankshafted()); return StackSlotsField::decode( READ_UINT32_FIELD(this, kKindSpecificFlags1Offset)); } @@ -4444,7 +4848,7 @@ unsigned Code::stack_slots() { void Code::set_stack_slots(unsigned slots) { CHECK(slots <= (1 << kStackSlotsBitCount)); - ASSERT(is_crankshafted()); + DCHECK(is_crankshafted()); int 
previous = READ_UINT32_FIELD(this, kKindSpecificFlags1Offset); int updated = StackSlotsField::update(previous, slots); WRITE_UINT32_FIELD(this, kKindSpecificFlags1Offset, updated); @@ -4452,7 +4856,7 @@ void Code::set_stack_slots(unsigned slots) { unsigned Code::safepoint_table_offset() { - ASSERT(is_crankshafted()); + DCHECK(is_crankshafted()); return SafepointTableOffsetField::decode( READ_UINT32_FIELD(this, kKindSpecificFlags2Offset)); } @@ -4460,8 +4864,8 @@ unsigned Code::safepoint_table_offset() { void Code::set_safepoint_table_offset(unsigned offset) { CHECK(offset <= (1 << kSafepointTableOffsetBitCount)); - ASSERT(is_crankshafted()); - ASSERT(IsAligned(offset, static_cast<unsigned>(kIntSize))); + DCHECK(is_crankshafted()); + DCHECK(IsAligned(offset, static_cast<unsigned>(kIntSize))); int previous = READ_UINT32_FIELD(this, kKindSpecificFlags2Offset); int updated = SafepointTableOffsetField::update(previous, offset); WRITE_UINT32_FIELD(this, kKindSpecificFlags2Offset, updated); @@ -4469,15 +4873,16 @@ void Code::set_safepoint_table_offset(unsigned offset) { unsigned Code::back_edge_table_offset() { - ASSERT_EQ(FUNCTION, kind()); + DCHECK_EQ(FUNCTION, kind()); return BackEdgeTableOffsetField::decode( - READ_UINT32_FIELD(this, kKindSpecificFlags2Offset)); + READ_UINT32_FIELD(this, kKindSpecificFlags2Offset)) << kPointerSizeLog2; } void Code::set_back_edge_table_offset(unsigned offset) { - ASSERT_EQ(FUNCTION, kind()); - ASSERT(IsAligned(offset, static_cast<unsigned>(kIntSize))); + DCHECK_EQ(FUNCTION, kind()); + DCHECK(IsAligned(offset, static_cast<unsigned>(kPointerSize))); + offset = offset >> kPointerSizeLog2; int previous = READ_UINT32_FIELD(this, kKindSpecificFlags2Offset); int updated = BackEdgeTableOffsetField::update(previous, offset); WRITE_UINT32_FIELD(this, kKindSpecificFlags2Offset, updated); @@ -4485,35 +4890,25 @@ void Code::set_back_edge_table_offset(unsigned offset) { bool Code::back_edges_patched_for_osr() { - ASSERT_EQ(FUNCTION, kind()); - return BackEdgesPatchedForOSRField::decode( - READ_UINT32_FIELD(this, kKindSpecificFlags2Offset)); -} - - -void Code::set_back_edges_patched_for_osr(bool value) { - ASSERT_EQ(FUNCTION, kind()); - int previous = READ_UINT32_FIELD(this, kKindSpecificFlags2Offset); - int updated = BackEdgesPatchedForOSRField::update(previous, value); - WRITE_UINT32_FIELD(this, kKindSpecificFlags2Offset, updated); + DCHECK_EQ(FUNCTION, kind()); + return allow_osr_at_loop_nesting_level() > 0; } - byte Code::to_boolean_state() { return extra_ic_state(); } bool Code::has_function_cache() { - ASSERT(kind() == STUB); + DCHECK(kind() == STUB); return HasFunctionCacheField::decode( READ_UINT32_FIELD(this, kKindSpecificFlags1Offset)); } void Code::set_has_function_cache(bool flag) { - ASSERT(kind() == STUB); + DCHECK(kind() == STUB); int previous = READ_UINT32_FIELD(this, kKindSpecificFlags1Offset); int updated = HasFunctionCacheField::update(previous, flag); WRITE_UINT32_FIELD(this, kKindSpecificFlags1Offset, updated); @@ -4521,15 +4916,15 @@ void Code::set_has_function_cache(bool flag) { bool Code::marked_for_deoptimization() { - ASSERT(kind() == OPTIMIZED_FUNCTION); + DCHECK(kind() == OPTIMIZED_FUNCTION); return MarkedForDeoptimizationField::decode( READ_UINT32_FIELD(this, kKindSpecificFlags1Offset)); } void Code::set_marked_for_deoptimization(bool flag) { - ASSERT(kind() == OPTIMIZED_FUNCTION); - ASSERT(!flag || AllowDeoptimization::IsAllowed(GetIsolate())); + DCHECK(kind() == OPTIMIZED_FUNCTION); + DCHECK(!flag || 
AllowDeoptimization::IsAllowed(GetIsolate())); int previous = READ_UINT32_FIELD(this, kKindSpecificFlags1Offset); int updated = MarkedForDeoptimizationField::update(previous, flag); WRITE_UINT32_FIELD(this, kKindSpecificFlags1Offset, updated); @@ -4543,7 +4938,7 @@ bool Code::is_weak_stub() { void Code::mark_as_weak_stub() { - ASSERT(CanBeWeakStub()); + DCHECK(CanBeWeakStub()); int previous = READ_UINT32_FIELD(this, kKindSpecificFlags1Offset); int updated = WeakStubField::update(previous, true); WRITE_UINT32_FIELD(this, kKindSpecificFlags1Offset, updated); @@ -4557,7 +4952,7 @@ bool Code::is_invalidated_weak_stub() { void Code::mark_as_invalidated_weak_stub() { - ASSERT(is_inline_cache_stub()); + DCHECK(is_inline_cache_stub()); int previous = READ_UINT32_FIELD(this, kKindSpecificFlags1Offset); int updated = InvalidatedWeakStubField::update(previous, true); WRITE_UINT32_FIELD(this, kKindSpecificFlags1Offset, updated); @@ -4591,17 +4986,15 @@ ConstantPoolArray* Code::constant_pool() { void Code::set_constant_pool(Object* value) { - ASSERT(value->IsConstantPoolArray()); + DCHECK(value->IsConstantPoolArray()); WRITE_FIELD(this, kConstantPoolOffset, value); WRITE_BARRIER(GetHeap(), this, kConstantPoolOffset, value); } -Code::Flags Code::ComputeFlags(Kind kind, - InlineCacheState ic_state, - ExtraICState extra_ic_state, - StubType type, - InlineCacheHolderFlag holder) { +Code::Flags Code::ComputeFlags(Kind kind, InlineCacheState ic_state, + ExtraICState extra_ic_state, StubType type, + CacheHolderFlag holder) { // Compute the bit mask. unsigned int bits = KindField::encode(kind) | ICStateField::encode(ic_state) @@ -4614,15 +5007,14 @@ Code::Flags Code::ComputeFlags(Kind kind, Code::Flags Code::ComputeMonomorphicFlags(Kind kind, ExtraICState extra_ic_state, - InlineCacheHolderFlag holder, + CacheHolderFlag holder, StubType type) { return ComputeFlags(kind, MONOMORPHIC, extra_ic_state, type, holder); } -Code::Flags Code::ComputeHandlerFlags(Kind handler_kind, - StubType type, - InlineCacheHolderFlag holder) { +Code::Flags Code::ComputeHandlerFlags(Kind handler_kind, StubType type, + CacheHolderFlag holder) { return ComputeFlags(Code::HANDLER, MONOMORPHIC, handler_kind, type, holder); } @@ -4647,7 +5039,7 @@ Code::StubType Code::ExtractTypeFromFlags(Flags flags) { } -InlineCacheHolderFlag Code::ExtractCacheHolderFromFlags(Flags flags) { +CacheHolderFlag Code::ExtractCacheHolderFromFlags(Flags flags) { return CacheHolderField::decode(flags); } @@ -4658,6 +5050,12 @@ Code::Flags Code::RemoveTypeFromFlags(Flags flags) { } +Code::Flags Code::RemoveTypeAndHolderFromFlags(Flags flags) { + int bits = flags & ~TypeField::kMask & ~CacheHolderField::kMask; + return static_cast<Flags>(bits); +} + + Code* Code::GetCodeFromTargetAddress(Address address) { HeapObject* code = HeapObject::FromAddress(address - Code::kHeaderSize); // GetCodeFromTargetAddress might be called when marking objects during mark @@ -4693,7 +5091,7 @@ class Code::FindAndReplacePattern { public: FindAndReplacePattern() : count_(0) { } void Add(Handle<Map> map_to_find, Handle<Object> obj_to_replace) { - ASSERT(count_ < kMaxCount); + DCHECK(count_ < kMaxCount); find_[count_] = map_to_find; replace_[count_] = obj_to_replace; ++count_; @@ -4714,13 +5112,13 @@ bool Code::IsWeakObjectInIC(Object* object) { } -Object* Map::prototype() { +Object* Map::prototype() const { return READ_FIELD(this, kPrototypeOffset); } void Map::set_prototype(Object* value, WriteBarrierMode mode) { - ASSERT(value->IsNull() || value->IsJSReceiver()); + 
DCHECK(value->IsNull() || value->IsJSReceiver()); WRITE_FIELD(this, kPrototypeOffset, value); CONDITIONAL_WRITE_BARRIER(GetHeap(), this, kPrototypeOffset, value, mode); } @@ -4753,23 +5151,22 @@ ACCESSORS(Map, instance_descriptors, DescriptorArray, kDescriptorsOffset) void Map::set_bit_field3(uint32_t bits) { - // Ensure the upper 2 bits have the same value by sign extending it. This is - // necessary to be able to use the 31st bit. - int value = bits << 1; - WRITE_FIELD(this, kBitField3Offset, Smi::FromInt(value >> 1)); + if (kInt32Size != kPointerSize) { + WRITE_UINT32_FIELD(this, kBitField3Offset + kInt32Size, 0); + } + WRITE_UINT32_FIELD(this, kBitField3Offset, bits); } uint32_t Map::bit_field3() { - Object* value = READ_FIELD(this, kBitField3Offset); - return Smi::cast(value)->value(); + return READ_UINT32_FIELD(this, kBitField3Offset); } void Map::AppendDescriptor(Descriptor* desc) { DescriptorArray* descriptors = instance_descriptors(); int number_of_own_descriptors = NumberOfOwnDescriptors(); - ASSERT(descriptors->number_of_descriptors() == number_of_own_descriptors); + DCHECK(descriptors->number_of_descriptors() == number_of_own_descriptors); descriptors->Append(desc); SetNumberOfOwnDescriptors(number_of_own_descriptors + 1); } @@ -4780,7 +5177,7 @@ Object* Map::GetBackPointer() { if (object->IsDescriptorArray()) { return TransitionArray::cast(object)->back_pointer_storage(); } else { - ASSERT(object->IsMap() || object->IsUndefined()); + DCHECK(object->IsMap() || object->IsUndefined()); return object; } } @@ -4791,7 +5188,7 @@ bool Map::HasElementsTransition() { } -bool Map::HasTransitionArray() { +bool Map::HasTransitionArray() const { Object* object = READ_FIELD(this, kTransitionsOrBackPointerOffset); return object->IsTransitionArray(); } @@ -4837,7 +5234,7 @@ void Map::SetPrototypeTransitions( int old_number_of_transitions = map->NumberOfProtoTransitions(); #ifdef DEBUG if (map->HasPrototypeTransitions()) { - ASSERT(map->GetPrototypeTransitions() != *proto_transitions); + DCHECK(map->GetPrototypeTransitions() != *proto_transitions); map->ZapPrototypeTransitions(); } #endif @@ -4851,8 +5248,8 @@ bool Map::HasPrototypeTransitions() { } -TransitionArray* Map::transitions() { - ASSERT(HasTransitionArray()); +TransitionArray* Map::transitions() const { + DCHECK(HasTransitionArray()); Object* object = READ_FIELD(this, kTransitionsOrBackPointerOffset); return TransitionArray::cast(object); } @@ -4871,12 +5268,12 @@ void Map::set_transitions(TransitionArray* transition_array, if (target->instance_descriptors() == instance_descriptors()) { Name* key = transitions()->GetKey(i); int new_target_index = transition_array->Search(key); - ASSERT(new_target_index != TransitionArray::kNotFound); - ASSERT(transition_array->GetTarget(new_target_index) == target); + DCHECK(new_target_index != TransitionArray::kNotFound); + DCHECK(transition_array->GetTarget(new_target_index) == target); } } #endif - ASSERT(transitions() != transition_array); + DCHECK(transitions() != transition_array); ZapTransitions(); } @@ -4887,14 +5284,14 @@ void Map::set_transitions(TransitionArray* transition_array, void Map::init_back_pointer(Object* undefined) { - ASSERT(undefined->IsUndefined()); + DCHECK(undefined->IsUndefined()); WRITE_FIELD(this, kTransitionsOrBackPointerOffset, undefined); } void Map::SetBackPointer(Object* value, WriteBarrierMode mode) { - ASSERT(instance_type() >= FIRST_JS_RECEIVER_TYPE); - ASSERT((value->IsUndefined() && GetBackPointer()->IsMap()) || + DCHECK(instance_type() >= 
FIRST_JS_RECEIVER_TYPE); + DCHECK((value->IsUndefined() && GetBackPointer()->IsMap()) || (value->IsMap() && GetBackPointer()->IsUndefined())); Object* object = READ_FIELD(this, kTransitionsOrBackPointerOffset); if (object->IsTransitionArray()) { @@ -4918,7 +5315,7 @@ ACCESSORS(JSFunction, next_function_link, Object, kNextFunctionLinkOffset) ACCESSORS(GlobalObject, builtins, JSBuiltinsObject, kBuiltinsOffset) ACCESSORS(GlobalObject, native_context, Context, kNativeContextOffset) ACCESSORS(GlobalObject, global_context, Context, kGlobalContextOffset) -ACCESSORS(GlobalObject, global_receiver, JSObject, kGlobalReceiverOffset) +ACCESSORS(GlobalObject, global_proxy, JSObject, kGlobalProxyOffset) ACCESSORS(JSGlobalProxy, native_context, Object, kNativeContextOffset) ACCESSORS(JSGlobalProxy, hash, Object, kHashOffset) @@ -4942,7 +5339,6 @@ ACCESSORS(Box, value, Object, kValueOffset) ACCESSORS(AccessorPair, getter, Object, kGetterOffset) ACCESSORS(AccessorPair, setter, Object, kSetterOffset) -ACCESSORS_TO_SMI(AccessorPair, access_flags, kAccessFlagsOffset) ACCESSORS(AccessCheckInfo, named_callback, Object, kNamedCallbackOffset) ACCESSORS(AccessCheckInfo, indexed_callback, Object, kIndexedCallbackOffset) @@ -5014,6 +5410,8 @@ ACCESSORS_TO_SMI(Script, eval_from_instructions_offset, kEvalFrominstructionsOffsetOffset) ACCESSORS_TO_SMI(Script, flags, kFlagsOffset) BOOL_ACCESSORS(Script, flags, is_shared_cross_origin, kIsSharedCrossOriginBit) +ACCESSORS(Script, source_url, Object, kSourceUrlOffset) +ACCESSORS(Script, source_mapping_url, Object, kSourceMappingUrlOffset) Script::CompilationType Script::compilation_type() { return BooleanBit::get(flags(), kCompilationTypeBit) ? @@ -5049,7 +5447,6 @@ ACCESSORS(SharedFunctionInfo, optimized_code_map, Object, ACCESSORS(SharedFunctionInfo, construct_stub, Code, kConstructStubOffset) ACCESSORS(SharedFunctionInfo, feedback_vector, FixedArray, kFeedbackVectorOffset) -ACCESSORS(SharedFunctionInfo, initial_map, Object, kInitialMapOffset) ACCESSORS(SharedFunctionInfo, instance_class_name, Object, kInstanceClassNameOffset) ACCESSORS(SharedFunctionInfo, function_data, Object, kFunctionDataOffset) @@ -5117,16 +5514,16 @@ SMI_ACCESSORS(SharedFunctionInfo, profiler_ticks, kProfilerTicksOffset) #define PSEUDO_SMI_ACCESSORS_LO(holder, name, offset) \ STATIC_ASSERT(holder::offset % kPointerSize == 0); \ - int holder::name() { \ + int holder::name() const { \ int value = READ_INT_FIELD(this, offset); \ - ASSERT(kHeapObjectTag == 1); \ - ASSERT((value & kHeapObjectTag) == 0); \ + DCHECK(kHeapObjectTag == 1); \ + DCHECK((value & kHeapObjectTag) == 0); \ return value >> 1; \ } \ void holder::set_##name(int value) { \ - ASSERT(kHeapObjectTag == 1); \ - ASSERT((value & 0xC0000000) == 0xC0000000 || \ - (value & 0xC0000000) == 0x000000000); \ + DCHECK(kHeapObjectTag == 1); \ + DCHECK((value & 0xC0000000) == 0xC0000000 || \ + (value & 0xC0000000) == 0x0); \ WRITE_INT_FIELD(this, \ offset, \ (value << 1) & ~kHeapObjectTag); \ @@ -5174,28 +5571,6 @@ PSEUDO_SMI_ACCESSORS_HI(SharedFunctionInfo, #endif -int SharedFunctionInfo::construction_count() { - return READ_BYTE_FIELD(this, kConstructionCountOffset); -} - - -void SharedFunctionInfo::set_construction_count(int value) { - ASSERT(0 <= value && value < 256); - WRITE_BYTE_FIELD(this, kConstructionCountOffset, static_cast<byte>(value)); -} - - -BOOL_ACCESSORS(SharedFunctionInfo, - compiler_hints, - live_objects_may_exist, - kLiveObjectsMayExist) - - -bool SharedFunctionInfo::IsInobjectSlackTrackingInProgress() { - return initial_map() != 
GetHeap()->undefined_value(); -} - - BOOL_GETTER(SharedFunctionInfo, compiler_hints, optimization_disabled, @@ -5222,7 +5597,7 @@ StrictMode SharedFunctionInfo::strict_mode() { void SharedFunctionInfo::set_strict_mode(StrictMode strict_mode) { // We only allow mode transitions from sloppy to strict. - ASSERT(this->strict_mode() == SLOPPY || this->strict_mode() == strict_mode); + DCHECK(this->strict_mode() == SLOPPY || this->strict_mode() == strict_mode); int hints = compiler_hints(); hints = BooleanBit::set(hints, kStrictModeFunction, strict_mode == STRICT); set_compiler_hints(hints); @@ -5238,17 +5613,10 @@ BOOL_ACCESSORS(SharedFunctionInfo, compiler_hints, BOOL_ACCESSORS(SharedFunctionInfo, compiler_hints, bound, kBoundFunction) BOOL_ACCESSORS(SharedFunctionInfo, compiler_hints, is_anonymous, kIsAnonymous) BOOL_ACCESSORS(SharedFunctionInfo, compiler_hints, is_function, kIsFunction) -BOOL_ACCESSORS(SharedFunctionInfo, compiler_hints, dont_optimize, - kDontOptimize) -BOOL_ACCESSORS(SharedFunctionInfo, compiler_hints, dont_inline, kDontInline) BOOL_ACCESSORS(SharedFunctionInfo, compiler_hints, dont_cache, kDontCache) BOOL_ACCESSORS(SharedFunctionInfo, compiler_hints, dont_flush, kDontFlush) BOOL_ACCESSORS(SharedFunctionInfo, compiler_hints, is_generator, kIsGenerator) - -void SharedFunctionInfo::BeforeVisitingPointers() { - if (IsInobjectSlackTrackingInProgress()) DetachInitialMap(); -} - +BOOL_ACCESSORS(SharedFunctionInfo, compiler_hints, is_arrow, kIsArrow) ACCESSORS(CodeCache, default_cache, FixedArray, kDefaultCacheOffset) ACCESSORS(CodeCache, normal_type_cache, Object, kNormalTypeCacheOffset) @@ -5270,12 +5638,12 @@ bool Script::HasValidSource() { void SharedFunctionInfo::DontAdaptArguments() { - ASSERT(code()->kind() == Code::BUILTIN); + DCHECK(code()->kind() == Code::BUILTIN); set_formal_parameter_count(kDontAdaptArgumentsSentinel); } -int SharedFunctionInfo::start_position() { +int SharedFunctionInfo::start_position() const { return start_position_and_type() >> kStartPositionShift; } @@ -5286,13 +5654,13 @@ void SharedFunctionInfo::set_start_position(int start_position) { } -Code* SharedFunctionInfo::code() { +Code* SharedFunctionInfo::code() const { return Code::cast(READ_FIELD(this, kCodeOffset)); } void SharedFunctionInfo::set_code(Code* value, WriteBarrierMode mode) { - ASSERT(value->kind() != Code::OPTIMIZED_FUNCTION); + DCHECK(value->kind() != Code::OPTIMIZED_FUNCTION); WRITE_FIELD(this, kCodeOffset, value); CONDITIONAL_WRITE_BARRIER(value->GetHeap(), this, kCodeOffset, value, mode); } @@ -5306,13 +5674,13 @@ void SharedFunctionInfo::ReplaceCode(Code* value) { flusher->EvictCandidate(this); } - ASSERT(code()->gc_metadata() == NULL && value->gc_metadata() == NULL); + DCHECK(code()->gc_metadata() == NULL && value->gc_metadata() == NULL); set_code(value); } -ScopeInfo* SharedFunctionInfo::scope_info() { +ScopeInfo* SharedFunctionInfo::scope_info() const { return reinterpret_cast<ScopeInfo*>(READ_FIELD(this, kScopeInfoOffset)); } @@ -5340,7 +5708,7 @@ bool SharedFunctionInfo::IsApiFunction() { FunctionTemplateInfo* SharedFunctionInfo::get_api_func_data() { - ASSERT(IsApiFunction()); + DCHECK(IsApiFunction()); return FunctionTemplateInfo::cast(function_data()); } @@ -5351,7 +5719,7 @@ bool SharedFunctionInfo::HasBuiltinFunctionId() { BuiltinFunctionId SharedFunctionInfo::builtin_function_id() { - ASSERT(HasBuiltinFunctionId()); + DCHECK(HasBuiltinFunctionId()); return static_cast<BuiltinFunctionId>(Smi::cast(function_data())->value()); } @@ -5437,6 +5805,22 @@ bool 
JSFunction::IsBuiltin() { } +bool JSFunction::IsFromNativeScript() { + Object* script = shared()->script(); + bool native = script->IsScript() && + Script::cast(script)->type()->value() == Script::TYPE_NATIVE; + DCHECK(!IsBuiltin() || native); // All builtins are also native. + return native; +} + + +bool JSFunction::IsFromExtensionScript() { + Object* script = shared()->script(); + return script->IsScript() && + Script::cast(script)->type()->value() == Script::TYPE_EXTENSION; +} + + bool JSFunction::NeedsArgumentsAdaption() { return shared()->formal_parameter_count() != SharedFunctionInfo::kDontAdaptArgumentsSentinel; @@ -5471,6 +5855,12 @@ bool JSFunction::IsInOptimizationQueue() { } +bool JSFunction::IsInobjectSlackTrackingInProgress() { + return has_initial_map() && + initial_map()->construction_count() != JSFunction::kNoSlackTracking; +} + + Code* JSFunction::code() { return Code::cast( Code::GetObjectFromEntryAddress(FIELD_ADDR(this, kCodeEntryOffset))); @@ -5478,7 +5868,7 @@ Code* JSFunction::code() { void JSFunction::set_code(Code* value) { - ASSERT(!GetHeap()->InNewSpace(value)); + DCHECK(!GetHeap()->InNewSpace(value)); Address entry = value->entry(); WRITE_INTPTR_FIELD(this, kCodeEntryOffset, reinterpret_cast<intptr_t>(entry)); GetHeap()->incremental_marking()->RecordWriteOfCodeEntry( @@ -5489,7 +5879,7 @@ void JSFunction::set_code(Code* value) { void JSFunction::set_code_no_write_barrier(Code* value) { - ASSERT(!GetHeap()->InNewSpace(value)); + DCHECK(!GetHeap()->InNewSpace(value)); Address entry = value->entry(); WRITE_INTPTR_FIELD(this, kCodeEntryOffset, reinterpret_cast<intptr_t>(entry)); } @@ -5523,8 +5913,13 @@ Context* JSFunction::context() { } +JSObject* JSFunction::global_proxy() { + return context()->global_proxy(); +} + + void JSFunction::set_context(Object* value) { - ASSERT(value->IsUndefined() || value->IsContext()); + DCHECK(value->IsUndefined() || value->IsContext()); WRITE_FIELD(this, kContextOffset, value); WRITE_BARRIER(GetHeap(), this, kContextOffset, value); } @@ -5538,11 +5933,6 @@ Map* JSFunction::initial_map() { } -void JSFunction::set_initial_map(Map* value) { - set_prototype_or_initial_map(value); -} - - bool JSFunction::has_initial_map() { return prototype_or_initial_map()->IsMap(); } @@ -5559,7 +5949,7 @@ bool JSFunction::has_prototype() { Object* JSFunction::instance_prototype() { - ASSERT(has_instance_prototype()); + DCHECK(has_instance_prototype()); if (has_initial_map()) return initial_map()->prototype(); // When there is no initial map and the prototype is a JSObject, the // initial map field is used for the prototype field. @@ -5568,7 +5958,7 @@ Object* JSFunction::instance_prototype() { Object* JSFunction::prototype() { - ASSERT(has_prototype()); + DCHECK(has_prototype()); // If the function's prototype property has been set to a non-JSObject // value, that value is stored in the constructor field of the map. 
if (map()->has_non_instance_prototype()) return map()->constructor(); @@ -5588,64 +5978,64 @@ bool JSFunction::is_compiled() { FixedArray* JSFunction::literals() { - ASSERT(!shared()->bound()); + DCHECK(!shared()->bound()); return literals_or_bindings(); } void JSFunction::set_literals(FixedArray* literals) { - ASSERT(!shared()->bound()); + DCHECK(!shared()->bound()); set_literals_or_bindings(literals); } FixedArray* JSFunction::function_bindings() { - ASSERT(shared()->bound()); + DCHECK(shared()->bound()); return literals_or_bindings(); } void JSFunction::set_function_bindings(FixedArray* bindings) { - ASSERT(shared()->bound()); + DCHECK(shared()->bound()); // Bound function literal may be initialized to the empty fixed array // before the bindings are set. - ASSERT(bindings == GetHeap()->empty_fixed_array() || + DCHECK(bindings == GetHeap()->empty_fixed_array() || bindings->map() == GetHeap()->fixed_cow_array_map()); set_literals_or_bindings(bindings); } int JSFunction::NumberOfLiterals() { - ASSERT(!shared()->bound()); + DCHECK(!shared()->bound()); return literals()->length(); } Object* JSBuiltinsObject::javascript_builtin(Builtins::JavaScript id) { - ASSERT(id < kJSBuiltinsCount); // id is unsigned. + DCHECK(id < kJSBuiltinsCount); // id is unsigned. return READ_FIELD(this, OffsetOfFunctionWithId(id)); } void JSBuiltinsObject::set_javascript_builtin(Builtins::JavaScript id, Object* value) { - ASSERT(id < kJSBuiltinsCount); // id is unsigned. + DCHECK(id < kJSBuiltinsCount); // id is unsigned. WRITE_FIELD(this, OffsetOfFunctionWithId(id), value); WRITE_BARRIER(GetHeap(), this, OffsetOfFunctionWithId(id), value); } Code* JSBuiltinsObject::javascript_builtin_code(Builtins::JavaScript id) { - ASSERT(id < kJSBuiltinsCount); // id is unsigned. + DCHECK(id < kJSBuiltinsCount); // id is unsigned. return Code::cast(READ_FIELD(this, OffsetOfCodeWithId(id))); } void JSBuiltinsObject::set_javascript_builtin_code(Builtins::JavaScript id, Code* value) { - ASSERT(id < kJSBuiltinsCount); // id is unsigned. + DCHECK(id < kJSBuiltinsCount); // id is unsigned. 
WRITE_FIELD(this, OffsetOfCodeWithId(id), value); - ASSERT(!GetHeap()->InNewSpace(value)); + DCHECK(!GetHeap()->InNewSpace(value)); } @@ -5656,20 +6046,19 @@ ACCESSORS(JSFunctionProxy, construct_trap, Object, kConstructTrapOffset) void JSProxy::InitializeBody(int object_size, Object* value) { - ASSERT(!value->IsHeapObject() || !GetHeap()->InNewSpace(value)); + DCHECK(!value->IsHeapObject() || !GetHeap()->InNewSpace(value)); for (int offset = kHeaderSize; offset < object_size; offset += kPointerSize) { WRITE_FIELD(this, offset, value); } } -ACCESSORS(JSSet, table, Object, kTableOffset) -ACCESSORS(JSMap, table, Object, kTableOffset) +ACCESSORS(JSCollection, table, Object, kTableOffset) #define ORDERED_HASH_TABLE_ITERATOR_ACCESSORS(name, type, offset) \ template<class Derived, class TableType> \ - type* OrderedHashTableIterator<Derived, TableType>::name() { \ + type* OrderedHashTableIterator<Derived, TableType>::name() const { \ return type::cast(READ_FIELD(this, offset)); \ } \ template<class Derived, class TableType> \ @@ -5681,12 +6070,7 @@ ACCESSORS(JSMap, table, Object, kTableOffset) ORDERED_HASH_TABLE_ITERATOR_ACCESSORS(table, Object, kTableOffset) ORDERED_HASH_TABLE_ITERATOR_ACCESSORS(index, Smi, kIndexOffset) -ORDERED_HASH_TABLE_ITERATOR_ACCESSORS(count, Smi, kCountOffset) ORDERED_HASH_TABLE_ITERATOR_ACCESSORS(kind, Smi, kKindOffset) -ORDERED_HASH_TABLE_ITERATOR_ACCESSORS(next_iterator, Object, - kNextIteratorOffset) -ORDERED_HASH_TABLE_ITERATOR_ACCESSORS(previous_iterator, Object, - kPreviousIteratorOffset) #undef ORDERED_HASH_TABLE_ITERATOR_ACCESSORS @@ -5713,36 +6097,35 @@ ACCESSORS(JSGeneratorObject, operand_stack, FixedArray, kOperandStackOffset) SMI_ACCESSORS(JSGeneratorObject, stack_handler_index, kStackHandlerIndexOffset) bool JSGeneratorObject::is_suspended() { - ASSERT_LT(kGeneratorExecuting, kGeneratorClosed); - ASSERT_EQ(kGeneratorClosed, 0); + DCHECK_LT(kGeneratorExecuting, kGeneratorClosed); + DCHECK_EQ(kGeneratorClosed, 0); return continuation() > 0; } -JSGeneratorObject* JSGeneratorObject::cast(Object* obj) { - ASSERT(obj->IsJSGeneratorObject()); - ASSERT(HeapObject::cast(obj)->Size() == JSGeneratorObject::kSize); - return reinterpret_cast<JSGeneratorObject*>(obj); +bool JSGeneratorObject::is_closed() { + return continuation() == kGeneratorClosed; } +bool JSGeneratorObject::is_executing() { + return continuation() == kGeneratorExecuting; +} ACCESSORS(JSModule, context, Object, kContextOffset) ACCESSORS(JSModule, scope_info, ScopeInfo, kScopeInfoOffset) -JSModule* JSModule::cast(Object* obj) { - ASSERT(obj->IsJSModule()); - ASSERT(HeapObject::cast(obj)->Size() == JSModule::kSize); - return reinterpret_cast<JSModule*>(obj); -} +ACCESSORS(JSValue, value, Object, kValueOffset) -ACCESSORS(JSValue, value, Object, kValueOffset) +HeapNumber* HeapNumber::cast(Object* object) { + SLOW_DCHECK(object->IsHeapNumber() || object->IsMutableHeapNumber()); + return reinterpret_cast<HeapNumber*>(object); +} -JSValue* JSValue::cast(Object* obj) { - ASSERT(obj->IsJSValue()); - ASSERT(HeapObject::cast(obj)->Size() == JSValue::kSize); - return reinterpret_cast<JSValue*>(obj); +const HeapNumber* HeapNumber::cast(const Object* object) { + SLOW_DCHECK(object->IsHeapNumber() || object->IsMutableHeapNumber()); + return reinterpret_cast<const HeapNumber*>(object); } @@ -5757,13 +6140,6 @@ ACCESSORS(JSDate, min, Object, kMinOffset) ACCESSORS(JSDate, sec, Object, kSecOffset) -JSDate* JSDate::cast(Object* obj) { - ASSERT(obj->IsJSDate()); - ASSERT(HeapObject::cast(obj)->Size() == JSDate::kSize); - 
return reinterpret_cast<JSDate*>(obj); -} - - ACCESSORS(JSMessageObject, type, String, kTypeOffset) ACCESSORS(JSMessageObject, arguments, JSArray, kArgumentsOffset) ACCESSORS(JSMessageObject, script, Object, kScriptOffset) @@ -5772,13 +6148,6 @@ SMI_ACCESSORS(JSMessageObject, start_position, kStartPositionOffset) SMI_ACCESSORS(JSMessageObject, end_position, kEndPositionOffset) -JSMessageObject* JSMessageObject::cast(Object* obj) { - ASSERT(obj->IsJSMessageObject()); - ASSERT(HeapObject::cast(obj)->Size() == JSMessageObject::kSize); - return reinterpret_cast<JSMessageObject*>(obj); -} - - INT_ACCESSORS(Code, instruction_size, kInstructionSizeOffset) INT_ACCESSORS(Code, prologue_offset, kPrologueOffset) ACCESSORS(Code, relocation_info, ByteArray, kRelocationInfoOffset) @@ -5793,7 +6162,7 @@ void Code::WipeOutHeader() { WRITE_FIELD(this, kHandlerTableOffset, NULL); WRITE_FIELD(this, kDeoptimizationDataOffset, NULL); WRITE_FIELD(this, kConstantPoolOffset, NULL); - // Do not wipe out e.g. a minor key. + // Do not wipe out major/minor keys on a code stub or IC if (!READ_FIELD(this, kTypeFeedbackInfoOffset)->IsSmi()) { WRITE_FIELD(this, kTypeFeedbackInfoOffset, NULL); } @@ -5801,37 +6170,29 @@ void Code::WipeOutHeader() { Object* Code::type_feedback_info() { - ASSERT(kind() == FUNCTION); + DCHECK(kind() == FUNCTION); return raw_type_feedback_info(); } void Code::set_type_feedback_info(Object* value, WriteBarrierMode mode) { - ASSERT(kind() == FUNCTION); + DCHECK(kind() == FUNCTION); set_raw_type_feedback_info(value, mode); CONDITIONAL_WRITE_BARRIER(GetHeap(), this, kTypeFeedbackInfoOffset, value, mode); } -int Code::stub_info() { - ASSERT(kind() == COMPARE_IC || kind() == COMPARE_NIL_IC || - kind() == BINARY_OP_IC || kind() == LOAD_IC || kind() == CALL_IC); - return Smi::cast(raw_type_feedback_info())->value(); +uint32_t Code::stub_key() { + DCHECK(IsCodeStubOrIC()); + Smi* smi_key = Smi::cast(raw_type_feedback_info()); + return static_cast<uint32_t>(smi_key->value()); } -void Code::set_stub_info(int value) { - ASSERT(kind() == COMPARE_IC || - kind() == COMPARE_NIL_IC || - kind() == BINARY_OP_IC || - kind() == STUB || - kind() == LOAD_IC || - kind() == CALL_IC || - kind() == KEYED_LOAD_IC || - kind() == STORE_IC || - kind() == KEYED_STORE_IC); - set_raw_type_feedback_info(Smi::FromInt(value)); +void Code::set_stub_key(uint32_t key) { + DCHECK(IsCodeStubOrIC()); + set_raw_type_feedback_info(Smi::FromInt(key)); } @@ -5882,7 +6243,7 @@ bool Code::contains(byte* inner_pointer) { ACCESSORS(JSArray, length, Object, kLengthOffset) -void* JSArrayBuffer::backing_store() { +void* JSArrayBuffer::backing_store() const { intptr_t ptr = READ_INTPTR_FIELD(this, kBackingStoreOffset); return reinterpret_cast<void*>(ptr); } @@ -5953,7 +6314,7 @@ int JSRegExp::CaptureCount() { JSRegExp::Flags JSRegExp::GetFlags() { - ASSERT(this->data()->IsFixedArray()); + DCHECK(this->data()->IsFixedArray()); Object* data = this->data(); Smi* smi = Smi::cast(FixedArray::cast(data)->get(kFlagsIndex)); return Flags(smi->value()); @@ -5961,7 +6322,7 @@ JSRegExp::Flags JSRegExp::GetFlags() { String* JSRegExp::Pattern() { - ASSERT(this->data()->IsFixedArray()); + DCHECK(this->data()->IsFixedArray()); Object* data = this->data(); String* pattern= String::cast(FixedArray::cast(data)->get(kSourceIndex)); return pattern; @@ -5969,14 +6330,14 @@ String* JSRegExp::Pattern() { Object* JSRegExp::DataAt(int index) { - ASSERT(TypeTag() != NOT_COMPILED); + DCHECK(TypeTag() != NOT_COMPILED); return FixedArray::cast(data())->get(index); } void 
JSRegExp::SetDataAt(int index, Object* value) { - ASSERT(TypeTag() != NOT_COMPILED); - ASSERT(index >= kDataIndex); // Only implementation data can be set this way. + DCHECK(TypeTag() != NOT_COMPILED); + DCHECK(index >= kDataIndex); // Only implementation data can be set this way. FixedArray::cast(data())->set(index, value); } @@ -5991,7 +6352,7 @@ ElementsKind JSObject::GetElementsKind() { // pointer may point to a one pointer filler map. if (ElementsAreSafeToExamine()) { Map* map = fixed_array->map(); - ASSERT((IsFastSmiOrObjectElementsKind(kind) && + DCHECK((IsFastSmiOrObjectElementsKind(kind) && (map == GetHeap()->fixed_array_map() || map == GetHeap()->fixed_cow_array_map())) || (IsFastDoubleElementsKind(kind) && @@ -6001,7 +6362,7 @@ ElementsKind JSObject::GetElementsKind() { fixed_array->IsFixedArray() && fixed_array->IsDictionary()) || (kind > DICTIONARY_ELEMENTS)); - ASSERT((kind != SLOPPY_ARGUMENTS_ELEMENTS) || + DCHECK((kind != SLOPPY_ARGUMENTS_ELEMENTS) || (elements()->IsFixedArray() && elements()->length() >= 2)); } #endif @@ -6056,7 +6417,7 @@ bool JSObject::HasSloppyArgumentsElements() { bool JSObject::HasExternalArrayElements() { HeapObject* array = elements(); - ASSERT(array != NULL); + DCHECK(array != NULL); return array->IsExternalArray(); } @@ -6064,7 +6425,7 @@ bool JSObject::HasExternalArrayElements() { #define EXTERNAL_ELEMENTS_CHECK(Type, type, TYPE, ctype, size) \ bool JSObject::HasExternal##Type##Elements() { \ HeapObject* array = elements(); \ - ASSERT(array != NULL); \ + DCHECK(array != NULL); \ if (!array->IsHeapObject()) \ return false; \ return array->map()->instance_type() == EXTERNAL_##TYPE##_ARRAY_TYPE; \ @@ -6077,7 +6438,7 @@ TYPED_ARRAYS(EXTERNAL_ELEMENTS_CHECK) bool JSObject::HasFixedTypedArrayElements() { HeapObject* array = elements(); - ASSERT(array != NULL); + DCHECK(array != NULL); return array->IsFixedTypedArrayBase(); } @@ -6085,7 +6446,7 @@ bool JSObject::HasFixedTypedArrayElements() { #define FIXED_TYPED_ELEMENTS_CHECK(Type, type, TYPE, ctype, size) \ bool JSObject::HasFixed##Type##Elements() { \ HeapObject* array = elements(); \ - ASSERT(array != NULL); \ + DCHECK(array != NULL); \ if (!array->IsHeapObject()) \ return false; \ return array->map()->instance_type() == FIXED_##TYPE##_ARRAY_TYPE; \ @@ -6107,31 +6468,17 @@ bool JSObject::HasIndexedInterceptor() { NameDictionary* JSObject::property_dictionary() { - ASSERT(!HasFastProperties()); + DCHECK(!HasFastProperties()); return NameDictionary::cast(properties()); } SeededNumberDictionary* JSObject::element_dictionary() { - ASSERT(HasDictionaryElements()); + DCHECK(HasDictionaryElements()); return SeededNumberDictionary::cast(elements()); } -Handle<JSSetIterator> JSSetIterator::Create( - Handle<OrderedHashSet> table, - int kind) { - return CreateInternal(table->GetIsolate()->set_iterator_map(), table, kind); -} - - -Handle<JSMapIterator> JSMapIterator::Create( - Handle<OrderedHashMap> table, - int kind) { - return CreateInternal(table->GetIsolate()->map_iterator_map(), table, kind); -} - - bool Name::IsHashFieldComputed(uint32_t field) { return (field & kHashNotComputedMask) == 0; } @@ -6150,6 +6497,10 @@ uint32_t Name::Hash() { return String::cast(this)->ComputeAndSetHash(); } +bool Name::IsOwn() { + return this->IsSymbol() && Symbol::cast(this)->is_own(); +} + StringHasher::StringHasher(int length, uint32_t seed) : length_(length), @@ -6157,7 +6508,7 @@ StringHasher::StringHasher(int length, uint32_t seed) array_index_(0), is_array_index_(0 < length_ && length_ <= String::kMaxArrayIndexSize), 
is_first_char_(true) { - ASSERT(FLAG_randomize_hashes || raw_running_hash_ == 0); + DCHECK(FLAG_randomize_hashes || raw_running_hash_ == 0); } @@ -6193,7 +6544,7 @@ void StringHasher::AddCharacter(uint16_t c) { bool StringHasher::UpdateIndex(uint16_t c) { - ASSERT(is_array_index_); + DCHECK(is_array_index_); if (c < '0' || c > '9') { is_array_index_ = false; return false; @@ -6217,7 +6568,7 @@ bool StringHasher::UpdateIndex(uint16_t c) { template<typename Char> inline void StringHasher::AddCharacters(const Char* chars, int length) { - ASSERT(sizeof(Char) == 1 || sizeof(Char) == 2); + DCHECK(sizeof(Char) == 1 || sizeof(Char) == 2); int i = 0; if (is_array_index_) { for (; i < length; i++) { @@ -6229,7 +6580,7 @@ inline void StringHasher::AddCharacters(const Char* chars, int length) { } } for (; i < length; i++) { - ASSERT(!is_array_index_); + DCHECK(!is_array_index_); AddCharacter(chars[i]); } } @@ -6245,6 +6596,35 @@ uint32_t StringHasher::HashSequentialString(const schar* chars, } +uint32_t IteratingStringHasher::Hash(String* string, uint32_t seed) { + IteratingStringHasher hasher(string->length(), seed); + // Nothing to do. + if (hasher.has_trivial_hash()) return hasher.GetHashField(); + ConsString* cons_string = String::VisitFlat(&hasher, string); + // The string was flat. + if (cons_string == NULL) return hasher.GetHashField(); + // This is a ConsString, iterate across it. + ConsStringIteratorOp op(cons_string); + int offset; + while (NULL != (string = op.Next(&offset))) { + String::VisitFlat(&hasher, string, offset); + } + return hasher.GetHashField(); +} + + +void IteratingStringHasher::VisitOneByteString(const uint8_t* chars, + int length) { + AddCharacters(chars, length); +} + + +void IteratingStringHasher::VisitTwoByteString(const uint16_t* chars, + int length) { + AddCharacters(chars, length); +} + + bool Name::AsArrayIndex(uint32_t* index) { return IsString() && String::cast(this)->AsArrayIndex(index); } @@ -6259,8 +6639,29 @@ bool String::AsArrayIndex(uint32_t* index) { } -Object* JSReceiver::GetPrototype() { - return map()->prototype(); +void String::SetForwardedInternalizedString(String* canonical) { + DCHECK(IsInternalizedString()); + DCHECK(HasHashCode()); + if (canonical == this) return; // No need to forward. + DCHECK(SlowEquals(canonical)); + DCHECK(canonical->IsInternalizedString()); + DCHECK(canonical->HasHashCode()); + WRITE_FIELD(this, kHashFieldOffset, canonical); + // Setting the hash field to a tagged value sets the LSB, causing the hash + // code to be interpreted as uninitialized. We use this fact to recognize + // that we have a forwarded string. 
+ DCHECK(!HasHashCode()); +} + + +String* String::GetForwardedInternalizedString() { + DCHECK(IsInternalizedString()); + if (HasHashCode()) return this; + String* canonical = String::cast(READ_FIELD(this, kHashFieldOffset)); + DCHECK(canonical->IsInternalizedString()); + DCHECK(SlowEquals(canonical)); + DCHECK(canonical->HasHashCode()); + return canonical; } @@ -6269,38 +6670,43 @@ Object* JSReceiver::GetConstructor() { } -bool JSReceiver::HasProperty(Handle<JSReceiver> object, - Handle<Name> name) { +Maybe<bool> JSReceiver::HasProperty(Handle<JSReceiver> object, + Handle<Name> name) { if (object->IsJSProxy()) { Handle<JSProxy> proxy = Handle<JSProxy>::cast(object); return JSProxy::HasPropertyWithHandler(proxy, name); } - return GetPropertyAttribute(object, name) != ABSENT; + Maybe<PropertyAttributes> result = GetPropertyAttributes(object, name); + if (!result.has_value) return Maybe<bool>(); + return maybe(result.value != ABSENT); } -bool JSReceiver::HasLocalProperty(Handle<JSReceiver> object, - Handle<Name> name) { +Maybe<bool> JSReceiver::HasOwnProperty(Handle<JSReceiver> object, + Handle<Name> name) { if (object->IsJSProxy()) { Handle<JSProxy> proxy = Handle<JSProxy>::cast(object); return JSProxy::HasPropertyWithHandler(proxy, name); } - return GetLocalPropertyAttribute(object, name) != ABSENT; + Maybe<PropertyAttributes> result = GetOwnPropertyAttributes(object, name); + if (!result.has_value) return Maybe<bool>(); + return maybe(result.value != ABSENT); } -PropertyAttributes JSReceiver::GetPropertyAttribute(Handle<JSReceiver> object, - Handle<Name> key) { +Maybe<PropertyAttributes> JSReceiver::GetPropertyAttributes( + Handle<JSReceiver> object, Handle<Name> key) { uint32_t index; if (object->IsJSObject() && key->AsArrayIndex(&index)) { return GetElementAttribute(object, index); } - return GetPropertyAttributeWithReceiver(object, object, key); + LookupIterator it(object, key); + return GetPropertyAttributes(&it); } -PropertyAttributes JSReceiver::GetElementAttribute(Handle<JSReceiver> object, - uint32_t index) { +Maybe<PropertyAttributes> JSReceiver::GetElementAttribute( + Handle<JSReceiver> object, uint32_t index) { if (object->IsJSProxy()) { return JSProxy::GetElementAttributeWithHandler( Handle<JSProxy>::cast(object), object, index); @@ -6311,16 +6717,18 @@ PropertyAttributes JSReceiver::GetElementAttribute(Handle<JSReceiver> object, bool JSGlobalObject::IsDetached() { - return JSGlobalProxy::cast(global_receiver())->IsDetachedFrom(this); + return JSGlobalProxy::cast(global_proxy())->IsDetachedFrom(this); } -bool JSGlobalProxy::IsDetachedFrom(GlobalObject* global) { - return GetPrototype() != global; +bool JSGlobalProxy::IsDetachedFrom(GlobalObject* global) const { + const PrototypeIterator iter(this->GetIsolate(), + const_cast<JSGlobalProxy*>(this)); + return iter.GetCurrent() != global; } -Handle<Object> JSReceiver::GetOrCreateIdentityHash(Handle<JSReceiver> object) { +Handle<Smi> JSReceiver::GetOrCreateIdentityHash(Handle<JSReceiver> object) { return object->IsJSProxy() ? 
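// [editor's note] The HasProperty/HasOwnProperty/GetPropertyAttributes hunks
// above all apply one pattern: a plain bool or PropertyAttributes return
// becomes Maybe<...>, so a pending exception can surface as "no value"
// instead of a fabricated answer. Callers test has_value before touching
// value; sketched below, where HandleFailure() is a hypothetical stand-in
// for whatever a call site does when an exception is pending:
//
//   Maybe<bool> result = JSReceiver::HasProperty(object, name);
//   if (!result.has_value) return HandleFailure();  // exception pending
//   if (result.value) { /* property exists */ }
//
// The maybe(x) helper seen above wraps a computed value; a default-
// constructed Maybe<bool>() carries none.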
JSProxy::GetOrCreateIdentityHash(Handle<JSProxy>::cast(object)) : JSObject::GetOrCreateIdentityHash(Handle<JSObject>::cast(object)); @@ -6334,27 +6742,32 @@ Object* JSReceiver::GetIdentityHash() { } -bool JSReceiver::HasElement(Handle<JSReceiver> object, uint32_t index) { +Maybe<bool> JSReceiver::HasElement(Handle<JSReceiver> object, uint32_t index) { if (object->IsJSProxy()) { Handle<JSProxy> proxy = Handle<JSProxy>::cast(object); return JSProxy::HasElementWithHandler(proxy, index); } - return JSObject::GetElementAttributeWithReceiver( - Handle<JSObject>::cast(object), object, index, true) != ABSENT; + Maybe<PropertyAttributes> result = JSObject::GetElementAttributeWithReceiver( + Handle<JSObject>::cast(object), object, index, true); + if (!result.has_value) return Maybe<bool>(); + return maybe(result.value != ABSENT); } -bool JSReceiver::HasLocalElement(Handle<JSReceiver> object, uint32_t index) { +Maybe<bool> JSReceiver::HasOwnElement(Handle<JSReceiver> object, + uint32_t index) { if (object->IsJSProxy()) { Handle<JSProxy> proxy = Handle<JSProxy>::cast(object); return JSProxy::HasElementWithHandler(proxy, index); } - return JSObject::GetElementAttributeWithReceiver( - Handle<JSObject>::cast(object), object, index, false) != ABSENT; + Maybe<PropertyAttributes> result = JSObject::GetElementAttributeWithReceiver( + Handle<JSObject>::cast(object), object, index, false); + if (!result.has_value) return Maybe<bool>(); + return maybe(result.value != ABSENT); } -PropertyAttributes JSReceiver::GetLocalElementAttribute( +Maybe<PropertyAttributes> JSReceiver::GetOwnElementAttribute( Handle<JSReceiver> object, uint32_t index) { if (object->IsJSProxy()) { return JSProxy::GetElementAttributeWithHandler( @@ -6385,16 +6798,6 @@ void AccessorInfo::set_all_can_write(bool value) { } -bool AccessorInfo::prohibits_overwriting() { - return BooleanBit::get(flag(), kProhibitsOverwritingBit); -} - - -void AccessorInfo::set_prohibits_overwriting(bool value) { - set_flag(BooleanBit::set(flag(), kProhibitsOverwritingBit, value)); -} - - PropertyAttributes AccessorInfo::property_attributes() { return AttributesField::decode(static_cast<uint32_t>(flag()->value())); } @@ -6406,39 +6809,15 @@ void AccessorInfo::set_property_attributes(PropertyAttributes attributes) { bool AccessorInfo::IsCompatibleReceiver(Object* receiver) { - Object* function_template = expected_receiver_type(); - if (!function_template->IsFunctionTemplateInfo()) return true; - return FunctionTemplateInfo::cast(function_template)->IsTemplateFor(receiver); + if (!HasExpectedReceiverType()) return true; + if (!receiver->IsJSObject()) return false; + return FunctionTemplateInfo::cast(expected_receiver_type()) + ->IsTemplateFor(JSObject::cast(receiver)->map()); } -void AccessorPair::set_access_flags(v8::AccessControl access_control) { - int current = access_flags()->value(); - current = BooleanBit::set(current, - kProhibitsOverwritingBit, - access_control & PROHIBITS_OVERWRITING); - current = BooleanBit::set(current, - kAllCanReadBit, - access_control & ALL_CAN_READ); - current = BooleanBit::set(current, - kAllCanWriteBit, - access_control & ALL_CAN_WRITE); - set_access_flags(Smi::FromInt(current)); -} - - -bool AccessorPair::all_can_read() { - return BooleanBit::get(access_flags(), kAllCanReadBit); -} - - -bool AccessorPair::all_can_write() { - return BooleanBit::get(access_flags(), kAllCanWriteBit); -} - - -bool AccessorPair::prohibits_overwriting() { - return BooleanBit::get(access_flags(), kProhibitsOverwritingBit); +void 
ExecutableAccessorInfo::clear_setter() { + set_setter(GetIsolate()->heap()->undefined_value(), SKIP_WRITE_BARRIER); } @@ -6455,7 +6834,7 @@ void Dictionary<Derived, Shape, Key>::SetEntry(int entry, Handle<Object> key, Handle<Object> value, PropertyDetails details) { - ASSERT(!key->IsName() || + DCHECK(!key->IsName() || details.IsDeleted() || details.dictionary_index() > 0); int index = DerivedHashTable::EntryToIndex(entry); @@ -6468,7 +6847,7 @@ void Dictionary<Derived, Shape, Key>::SetEntry(int entry, bool NumberDictionaryShape::IsMatch(uint32_t key, Object* other) { - ASSERT(other->IsNumber()); + DCHECK(other->IsNumber()); return key == static_cast<uint32_t>(other->Number()); } @@ -6480,7 +6859,7 @@ uint32_t UnseededNumberDictionaryShape::Hash(uint32_t key) { uint32_t UnseededNumberDictionaryShape::HashForObject(uint32_t key, Object* other) { - ASSERT(other->IsNumber()); + DCHECK(other->IsNumber()); return ComputeIntegerHash(static_cast<uint32_t>(other->Number()), 0); } @@ -6493,7 +6872,7 @@ uint32_t SeededNumberDictionaryShape::SeededHash(uint32_t key, uint32_t seed) { uint32_t SeededNumberDictionaryShape::SeededHashForObject(uint32_t key, uint32_t seed, Object* other) { - ASSERT(other->IsNumber()); + DCHECK(other->IsNumber()); return ComputeIntegerHash(static_cast<uint32_t>(other->Number()), seed); } @@ -6523,7 +6902,7 @@ uint32_t NameDictionaryShape::HashForObject(Handle<Name> key, Object* other) { Handle<Object> NameDictionaryShape::AsHandle(Isolate* isolate, Handle<Name> key) { - ASSERT(key->IsUniqueName()); + DCHECK(key->IsUniqueName()); return key; } @@ -6595,13 +6974,13 @@ void Map::ClearCodeCache(Heap* heap) { // Please note this function is used during marking: // - MarkCompactCollector::MarkUnmarkedObject // - IncrementalMarking::Step - ASSERT(!heap->InNewSpace(heap->empty_fixed_array())); + DCHECK(!heap->InNewSpace(heap->empty_fixed_array())); WRITE_FIELD(this, kCodeCacheOffset, heap->empty_fixed_array()); } void JSArray::EnsureSize(Handle<JSArray> array, int required_size) { - ASSERT(array->HasFastSmiOrObjectElements()); + DCHECK(array->HasFastSmiOrObjectElements()); Handle<FixedArray> elts = handle(FixedArray::cast(array->elements())); const int kArraySizeThatFitsComfortablyInNewSpace = 128; if (elts->length() < required_size) { @@ -6626,7 +7005,7 @@ void JSArray::set_length(Smi* length) { bool JSArray::AllowsSetElementsLength() { bool result = elements()->IsFixedArray() || elements()->IsFixedDoubleArray(); - ASSERT(result == !HasExternalArrayElements()); + DCHECK(result == !HasExternalArrayElements()); return result; } @@ -6636,7 +7015,7 @@ void JSArray::SetContent(Handle<JSArray> array, EnsureCanContainElements(array, storage, storage->length(), ALLOW_COPIED_DOUBLE_ELEMENTS); - ASSERT((storage->map() == array->GetHeap()->fixed_double_array_map() && + DCHECK((storage->map() == array->GetHeap()->fixed_double_array_map() && IsFastDoubleElementsKind(array->GetElementsKind())) || ((storage->map() != array->GetHeap()->fixed_double_array_map()) && (IsFastObjectElementsKind(array->GetElementsKind()) || @@ -6689,6 +7068,7 @@ int TypeFeedbackInfo::ic_with_type_info_count() { void TypeFeedbackInfo::change_ic_with_type_info_count(int delta) { + if (delta == 0) return; int value = Smi::cast(READ_FIELD(this, kStorage2Offset))->value(); int new_count = ICsWithTypeInfoCountField::decode(value) + delta; // We can get negative count here when the type-feedback info is @@ -6704,9 +7084,25 @@ void TypeFeedbackInfo::change_ic_with_type_info_count(int delta) { } +int 
TypeFeedbackInfo::ic_generic_count() { + return Smi::cast(READ_FIELD(this, kStorage3Offset))->value(); +} + + +void TypeFeedbackInfo::change_ic_generic_count(int delta) { + if (delta == 0) return; + int new_count = ic_generic_count() + delta; + if (new_count >= 0) { + new_count &= ~Smi::kMinValue; + WRITE_FIELD(this, kStorage3Offset, Smi::FromInt(new_count)); + } +} + + void TypeFeedbackInfo::initialize_storage() { WRITE_FIELD(this, kStorage1Offset, Smi::FromInt(0)); WRITE_FIELD(this, kStorage2Offset, Smi::FromInt(0)); + WRITE_FIELD(this, kStorage3Offset, Smi::FromInt(0)); } @@ -6757,7 +7153,7 @@ Relocatable::Relocatable(Isolate* isolate) { Relocatable::~Relocatable() { - ASSERT_EQ(isolate_->relocatable_top(), this); + DCHECK_EQ(isolate_->relocatable_top(), this); isolate_->set_relocatable_top(prev_); } @@ -6828,6 +7224,36 @@ void FlexibleBodyDescriptor<start_offset>::IterateBody(HeapObject* obj, } +template<class Derived, class TableType> +Object* OrderedHashTableIterator<Derived, TableType>::CurrentKey() { + TableType* table(TableType::cast(this->table())); + int index = Smi::cast(this->index())->value(); + Object* key = table->KeyAt(index); + DCHECK(!key->IsTheHole()); + return key; +} + + +void JSSetIterator::PopulateValueArray(FixedArray* array) { + array->set(0, CurrentKey()); +} + + +void JSMapIterator::PopulateValueArray(FixedArray* array) { + array->set(0, CurrentKey()); + array->set(1, CurrentValue()); +} + + +Object* JSMapIterator::CurrentValue() { + OrderedHashMap* table(OrderedHashMap::cast(this->table())); + int index = Smi::cast(this->index())->value(); + Object* value = table->ValueAt(index); + DCHECK(!value->IsTheHole()); + return value; +} + + #undef TYPE_CHECKER #undef CAST_ACCESSOR #undef INT_ACCESSORS @@ -6839,6 +7265,7 @@ void FlexibleBodyDescriptor<start_offset>::IterateBody(HeapObject* obj, #undef BOOL_GETTER #undef BOOL_ACCESSORS #undef FIELD_ADDR +#undef FIELD_ADDR_CONST #undef READ_FIELD #undef NOBARRIER_READ_FIELD #undef WRITE_FIELD diff --git a/deps/v8/src/objects-printer.cc b/deps/v8/src/objects-printer.cc index 4fb5b567645..8fbe2182c51 100644 --- a/deps/v8/src/objects-printer.cc +++ b/deps/v8/src/objects-printer.cc @@ -2,12 +2,13 @@ // Use of this source code is governed by a BSD-style license that can be // found in the LICENSE file. 
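// [editor's note] Everything from here to the end of this file's diff applies
// one transformation: FILE*-based printers (PrintF with format strings)
// become OStream-based printers (chained operator<<), with OFStream wrapping
// stdout. Reduced to a compilable toy, with std::FILE and std::ostream
// standing in for v8's PrintF/OStream (MyThing and length_ are hypothetical):

#include <cstdio>
#include <ostream>

struct MyThing {
  int length_;
  // Before: printf-style; the format string must match the field type.
  void PrintOld(std::FILE* out) { std::fprintf(out, " - length: %d\n", length_); }
  // After: stream-style; overload resolution picks the right inserter.
  void PrintNew(std::ostream& os) { os << " - length: " << length_ << "\n"; }
};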
-#include "v8.h" +#include "src/v8.h" -#include "disassembler.h" -#include "disasm.h" -#include "jsregexp.h" -#include "objects-visiting.h" +#include "src/disasm.h" +#include "src/disassembler.h" +#include "src/heap/objects-visiting.h" +#include "src/jsregexp.h" +#include "src/ostreams.h" namespace v8 { namespace internal { @@ -15,201 +16,196 @@ namespace internal { #ifdef OBJECT_PRINT void Object::Print() { - Print(stdout); + OFStream os(stdout); + this->Print(os); + os << flush; } -void Object::Print(FILE* out) { +void Object::Print(OStream& os) { // NOLINT if (IsSmi()) { - Smi::cast(this)->SmiPrint(out); + Smi::cast(this)->SmiPrint(os); } else { - HeapObject::cast(this)->HeapObjectPrint(out); + HeapObject::cast(this)->HeapObjectPrint(os); } - Flush(out); } -void Object::PrintLn() { - PrintLn(stdout); +void HeapObject::PrintHeader(OStream& os, const char* id) { // NOLINT + os << "" << reinterpret_cast<void*>(this) << ": [" << id << "]\n"; } -void Object::PrintLn(FILE* out) { - Print(out); - PrintF(out, "\n"); -} - - -void HeapObject::PrintHeader(FILE* out, const char* id) { - PrintF(out, "%p: [%s]\n", reinterpret_cast<void*>(this), id); -} - - -void HeapObject::HeapObjectPrint(FILE* out) { +void HeapObject::HeapObjectPrint(OStream& os) { // NOLINT InstanceType instance_type = map()->instance_type(); HandleScope scope(GetIsolate()); if (instance_type < FIRST_NONSTRING_TYPE) { - String::cast(this)->StringPrint(out); + String::cast(this)->StringPrint(os); return; } switch (instance_type) { case SYMBOL_TYPE: - Symbol::cast(this)->SymbolPrint(out); + Symbol::cast(this)->SymbolPrint(os); break; case MAP_TYPE: - Map::cast(this)->MapPrint(out); + Map::cast(this)->MapPrint(os); break; case HEAP_NUMBER_TYPE: - HeapNumber::cast(this)->HeapNumberPrint(out); + HeapNumber::cast(this)->HeapNumberPrint(os); + break; + case MUTABLE_HEAP_NUMBER_TYPE: + os << "<mutable "; + HeapNumber::cast(this)->HeapNumberPrint(os); + os << ">"; break; case FIXED_DOUBLE_ARRAY_TYPE: - FixedDoubleArray::cast(this)->FixedDoubleArrayPrint(out); + FixedDoubleArray::cast(this)->FixedDoubleArrayPrint(os); break; case CONSTANT_POOL_ARRAY_TYPE: - ConstantPoolArray::cast(this)->ConstantPoolArrayPrint(out); + ConstantPoolArray::cast(this)->ConstantPoolArrayPrint(os); break; case FIXED_ARRAY_TYPE: - FixedArray::cast(this)->FixedArrayPrint(out); + FixedArray::cast(this)->FixedArrayPrint(os); break; case BYTE_ARRAY_TYPE: - ByteArray::cast(this)->ByteArrayPrint(out); + ByteArray::cast(this)->ByteArrayPrint(os); break; case FREE_SPACE_TYPE: - FreeSpace::cast(this)->FreeSpacePrint(out); + FreeSpace::cast(this)->FreeSpacePrint(os); break; -#define PRINT_EXTERNAL_ARRAY(Type, type, TYPE, ctype, size) \ - case EXTERNAL_##TYPE##_ARRAY_TYPE: \ - External##Type##Array::cast(this)->External##Type##ArrayPrint(out); \ - break; +#define PRINT_EXTERNAL_ARRAY(Type, type, TYPE, ctype, size) \ + case EXTERNAL_##TYPE##_ARRAY_TYPE: \ + External##Type##Array::cast(this)->External##Type##ArrayPrint(os); \ + break; TYPED_ARRAYS(PRINT_EXTERNAL_ARRAY) #undef PRINT_EXTERNAL_ARRAY -#define PRINT_FIXED_TYPED_ARRAY(Type, type, TYPE, ctype, size) \ - case Fixed##Type##Array::kInstanceType: \ - Fixed##Type##Array::cast(this)->FixedTypedArrayPrint(out); \ - break; +#define PRINT_FIXED_TYPED_ARRAY(Type, type, TYPE, ctype, size) \ + case Fixed##Type##Array::kInstanceType: \ + Fixed##Type##Array::cast(this)->FixedTypedArrayPrint(os); \ + break; TYPED_ARRAYS(PRINT_FIXED_TYPED_ARRAY) #undef PRINT_FIXED_TYPED_ARRAY case FILLER_TYPE: - PrintF(out, "filler"); + os << 
"filler"; break; case JS_OBJECT_TYPE: // fall through case JS_CONTEXT_EXTENSION_OBJECT_TYPE: case JS_ARRAY_TYPE: case JS_GENERATOR_OBJECT_TYPE: case JS_REGEXP_TYPE: - JSObject::cast(this)->JSObjectPrint(out); + JSObject::cast(this)->JSObjectPrint(os); break; case ODDBALL_TYPE: - Oddball::cast(this)->to_string()->Print(out); + Oddball::cast(this)->to_string()->Print(os); break; case JS_MODULE_TYPE: - JSModule::cast(this)->JSModulePrint(out); + JSModule::cast(this)->JSModulePrint(os); break; case JS_FUNCTION_TYPE: - JSFunction::cast(this)->JSFunctionPrint(out); + JSFunction::cast(this)->JSFunctionPrint(os); break; case JS_GLOBAL_PROXY_TYPE: - JSGlobalProxy::cast(this)->JSGlobalProxyPrint(out); + JSGlobalProxy::cast(this)->JSGlobalProxyPrint(os); break; case JS_GLOBAL_OBJECT_TYPE: - JSGlobalObject::cast(this)->JSGlobalObjectPrint(out); + JSGlobalObject::cast(this)->JSGlobalObjectPrint(os); break; case JS_BUILTINS_OBJECT_TYPE: - JSBuiltinsObject::cast(this)->JSBuiltinsObjectPrint(out); + JSBuiltinsObject::cast(this)->JSBuiltinsObjectPrint(os); break; case JS_VALUE_TYPE: - PrintF(out, "Value wrapper around:"); - JSValue::cast(this)->value()->Print(out); + os << "Value wrapper around:"; + JSValue::cast(this)->value()->Print(os); break; case JS_DATE_TYPE: - JSDate::cast(this)->JSDatePrint(out); + JSDate::cast(this)->JSDatePrint(os); break; case CODE_TYPE: - Code::cast(this)->CodePrint(out); + Code::cast(this)->CodePrint(os); break; case JS_PROXY_TYPE: - JSProxy::cast(this)->JSProxyPrint(out); + JSProxy::cast(this)->JSProxyPrint(os); break; case JS_FUNCTION_PROXY_TYPE: - JSFunctionProxy::cast(this)->JSFunctionProxyPrint(out); + JSFunctionProxy::cast(this)->JSFunctionProxyPrint(os); break; case JS_SET_TYPE: - JSSet::cast(this)->JSSetPrint(out); + JSSet::cast(this)->JSSetPrint(os); break; case JS_MAP_TYPE: - JSMap::cast(this)->JSMapPrint(out); + JSMap::cast(this)->JSMapPrint(os); break; case JS_SET_ITERATOR_TYPE: - JSSetIterator::cast(this)->JSSetIteratorPrint(out); + JSSetIterator::cast(this)->JSSetIteratorPrint(os); break; case JS_MAP_ITERATOR_TYPE: - JSMapIterator::cast(this)->JSMapIteratorPrint(out); + JSMapIterator::cast(this)->JSMapIteratorPrint(os); break; case JS_WEAK_MAP_TYPE: - JSWeakMap::cast(this)->JSWeakMapPrint(out); + JSWeakMap::cast(this)->JSWeakMapPrint(os); break; case JS_WEAK_SET_TYPE: - JSWeakSet::cast(this)->JSWeakSetPrint(out); + JSWeakSet::cast(this)->JSWeakSetPrint(os); break; case FOREIGN_TYPE: - Foreign::cast(this)->ForeignPrint(out); + Foreign::cast(this)->ForeignPrint(os); break; case SHARED_FUNCTION_INFO_TYPE: - SharedFunctionInfo::cast(this)->SharedFunctionInfoPrint(out); + SharedFunctionInfo::cast(this)->SharedFunctionInfoPrint(os); break; case JS_MESSAGE_OBJECT_TYPE: - JSMessageObject::cast(this)->JSMessageObjectPrint(out); + JSMessageObject::cast(this)->JSMessageObjectPrint(os); break; case CELL_TYPE: - Cell::cast(this)->CellPrint(out); + Cell::cast(this)->CellPrint(os); break; case PROPERTY_CELL_TYPE: - PropertyCell::cast(this)->PropertyCellPrint(out); + PropertyCell::cast(this)->PropertyCellPrint(os); break; case JS_ARRAY_BUFFER_TYPE: - JSArrayBuffer::cast(this)->JSArrayBufferPrint(out); + JSArrayBuffer::cast(this)->JSArrayBufferPrint(os); break; case JS_TYPED_ARRAY_TYPE: - JSTypedArray::cast(this)->JSTypedArrayPrint(out); + JSTypedArray::cast(this)->JSTypedArrayPrint(os); break; case JS_DATA_VIEW_TYPE: - JSDataView::cast(this)->JSDataViewPrint(out); + JSDataView::cast(this)->JSDataViewPrint(os); break; #define MAKE_STRUCT_CASE(NAME, Name, name) \ case NAME##_TYPE: 
\ - Name::cast(this)->Name##Print(out); \ + Name::cast(this)->Name##Print(os); \ break; STRUCT_LIST(MAKE_STRUCT_CASE) #undef MAKE_STRUCT_CASE default: - PrintF(out, "UNKNOWN TYPE %d", map()->instance_type()); + os << "UNKNOWN TYPE " << map()->instance_type(); UNREACHABLE(); break; } } -void ByteArray::ByteArrayPrint(FILE* out) { - PrintF(out, "byte array, data starts at %p", GetDataStartAddress()); +void ByteArray::ByteArrayPrint(OStream& os) { // NOLINT + os << "byte array, data starts at " << GetDataStartAddress(); } -void FreeSpace::FreeSpacePrint(FILE* out) { - PrintF(out, "free space, size %d", Size()); +void FreeSpace::FreeSpacePrint(OStream& os) { // NOLINT + os << "free space, size " << Size(); } -#define EXTERNAL_ARRAY_PRINTER(Type, type, TYPE, ctype, size) \ - void External##Type##Array::External##Type##ArrayPrint(FILE* out) { \ - PrintF(out, "external " #type " array"); \ +#define EXTERNAL_ARRAY_PRINTER(Type, type, TYPE, ctype, size) \ + void External##Type##Array::External##Type##ArrayPrint(OStream& os) { \ + os << "external " #type " array"; \ } TYPED_ARRAYS(EXTERNAL_ARRAY_PRINTER) @@ -218,32 +214,30 @@ TYPED_ARRAYS(EXTERNAL_ARRAY_PRINTER) template <class Traits> -void FixedTypedArray<Traits>::FixedTypedArrayPrint(FILE* out) { - PrintF(out, "fixed %s", Traits::Designator()); +void FixedTypedArray<Traits>::FixedTypedArrayPrint(OStream& os) { // NOLINT + os << "fixed " << Traits::Designator(); } -void JSObject::PrintProperties(FILE* out) { +void JSObject::PrintProperties(OStream& os) { // NOLINT if (HasFastProperties()) { DescriptorArray* descs = map()->instance_descriptors(); for (int i = 0; i < map()->NumberOfOwnDescriptors(); i++) { - PrintF(out, " "); - descs->GetKey(i)->NamePrint(out); - PrintF(out, ": "); + os << " "; + descs->GetKey(i)->NamePrint(os); + os << ": "; switch (descs->GetType(i)) { case FIELD: { - int index = descs->GetFieldIndex(i); - RawFastPropertyAt(index)->ShortPrint(out); - PrintF(out, " (field at offset %d)\n", index); + FieldIndex index = FieldIndex::ForDescriptor(map(), i); + os << Brief(RawFastPropertyAt(index)) << " (field at offset " + << index.property_index() << ")\n"; break; } case CONSTANT: - descs->GetConstant(i)->ShortPrint(out); - PrintF(out, " (constant)\n"); + os << Brief(descs->GetConstant(i)) << " (constant)\n"; break; case CALLBACKS: - descs->GetCallbacksObject(i)->ShortPrint(out); - PrintF(out, " (callback)\n"); + os << Brief(descs->GetCallbacksObject(i)) << " (callback)\n"; break; case NORMAL: // only in slow mode case HANDLER: // only in lookup results, not in descriptors @@ -255,30 +249,21 @@ void JSObject::PrintProperties(FILE* out) { } } } else { - property_dictionary()->Print(out); + property_dictionary()->Print(os); } } -template<class T> -static void DoPrintElements(FILE *out, Object* object) { +template <class T> +static void DoPrintElements(OStream& os, Object* object) { // NOLINT T* p = T::cast(object); for (int i = 0; i < p->length(); i++) { - PrintF(out, " %d: %d\n", i, p->get_scalar(i)); + os << " " << i << ": " << p->get_scalar(i) << "\n"; } } -template<class T> -static void DoPrintDoubleElements(FILE* out, Object* object) { - T* p = T::cast(object); - for (int i = 0; i < p->length(); i++) { - PrintF(out, " %d: %f\n", i, p->get_scalar(i)); - } -} - - -void JSObject::PrintElements(FILE* out) { +void JSObject::PrintElements(OStream& os) { // NOLINT // Don't call GetElementsKind, its validation code can cause the printer to // fail when debugging. 
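// [editor's note] Note why DoPrintDoubleElements could be deleted above:
// with streams, "os << p->get_scalar(i)" resolves by overloading to the
// right inserter for integral and floating element types alike, so one
// template covers both, whereas the old PrintF version needed distinct "%d"
// and "%f" format strings per element type. That is also what lets the
// PRINT_DOUBLE_ELEMENTS macro collapse into PRINT_ELEMENTS in the switch
// below.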
switch (map()->elements_kind()) { @@ -289,9 +274,7 @@ void JSObject::PrintElements(FILE* out) { // Print in array notation for non-sparse arrays. FixedArray* p = FixedArray::cast(elements()); for (int i = 0; i < p->length(); i++) { - PrintF(out, " %d: ", i); - p->get(i)->ShortPrint(out); - PrintF(out, "\n"); + os << " " << i << ": " << Brief(p->get(i)) << "\n"; } break; } @@ -301,29 +284,24 @@ void JSObject::PrintElements(FILE* out) { if (elements()->length() > 0) { FixedDoubleArray* p = FixedDoubleArray::cast(elements()); for (int i = 0; i < p->length(); i++) { + os << " " << i << ": "; if (p->is_the_hole(i)) { - PrintF(out, " %d: <the hole>", i); + os << "<the hole>"; } else { - PrintF(out, " %d: %g", i, p->get_scalar(i)); + os << p->get_scalar(i); } - PrintF(out, "\n"); + os << "\n"; } } break; } -#define PRINT_ELEMENTS(Kind, Type) \ - case Kind: { \ - DoPrintElements<Type>(out, elements()); \ - break; \ - } - -#define PRINT_DOUBLE_ELEMENTS(Kind, Type) \ - case Kind: { \ - DoPrintDoubleElements<Type>(out, elements()); \ - break; \ - } +#define PRINT_ELEMENTS(Kind, Type) \ + case Kind: { \ + DoPrintElements<Type>(os, elements()); \ + break; \ + } PRINT_ELEMENTS(EXTERNAL_UINT8_CLAMPED_ELEMENTS, ExternalUint8ClampedArray) PRINT_ELEMENTS(EXTERNAL_INT8_ELEMENTS, ExternalInt8Array) @@ -335,9 +313,8 @@ void JSObject::PrintElements(FILE* out) { PRINT_ELEMENTS(EXTERNAL_INT32_ELEMENTS, ExternalInt32Array) PRINT_ELEMENTS(EXTERNAL_UINT32_ELEMENTS, ExternalUint32Array) - PRINT_DOUBLE_ELEMENTS(EXTERNAL_FLOAT32_ELEMENTS, ExternalFloat32Array) - PRINT_DOUBLE_ELEMENTS(EXTERNAL_FLOAT64_ELEMENTS, ExternalFloat64Array) - + PRINT_ELEMENTS(EXTERNAL_FLOAT32_ELEMENTS, ExternalFloat32Array) + PRINT_ELEMENTS(EXTERNAL_FLOAT64_ELEMENTS, ExternalFloat64Array) PRINT_ELEMENTS(UINT8_ELEMENTS, FixedUint8Array) PRINT_ELEMENTS(UINT8_CLAMPED_ELEMENTS, FixedUint8ClampedArray) @@ -346,60 +323,55 @@ void JSObject::PrintElements(FILE* out) { PRINT_ELEMENTS(INT16_ELEMENTS, FixedInt16Array) PRINT_ELEMENTS(UINT32_ELEMENTS, FixedUint32Array) PRINT_ELEMENTS(INT32_ELEMENTS, FixedInt32Array) - PRINT_DOUBLE_ELEMENTS(FLOAT32_ELEMENTS, FixedFloat32Array) - PRINT_DOUBLE_ELEMENTS(FLOAT64_ELEMENTS, FixedFloat64Array) + PRINT_ELEMENTS(FLOAT32_ELEMENTS, FixedFloat32Array) + PRINT_ELEMENTS(FLOAT64_ELEMENTS, FixedFloat64Array) -#undef PRINT_DOUBLE_ELEMENTS #undef PRINT_ELEMENTS case DICTIONARY_ELEMENTS: - elements()->Print(out); + elements()->Print(os); break; case SLOPPY_ARGUMENTS_ELEMENTS: { FixedArray* p = FixedArray::cast(elements()); - PrintF(out, " parameter map:"); + os << " parameter map:"; for (int i = 2; i < p->length(); i++) { - PrintF(out, " %d:", i - 2); - p->get(i)->ShortPrint(out); + os << " " << (i - 2) << ":" << Brief(p->get(i)); } - PrintF(out, "\n context: "); - p->get(0)->ShortPrint(out); - PrintF(out, "\n arguments: "); - p->get(1)->ShortPrint(out); - PrintF(out, "\n"); + os << "\n context: " << Brief(p->get(0)) + << "\n arguments: " << Brief(p->get(1)) << "\n"; break; } } } -void JSObject::PrintTransitions(FILE* out) { +void JSObject::PrintTransitions(OStream& os) { // NOLINT if (!map()->HasTransitionArray()) return; TransitionArray* transitions = map()->transitions(); for (int i = 0; i < transitions->number_of_transitions(); i++) { Name* key = transitions->GetKey(i); - PrintF(out, " "); - key->NamePrint(out); - PrintF(out, ": "); + os << " "; + key->NamePrint(os); + os << ": "; if (key == GetHeap()->frozen_symbol()) { - PrintF(out, " (transition to frozen)\n"); + os << " (transition to frozen)\n"; } else if (key == 
GetHeap()->elements_transition_symbol()) { - PrintF(out, " (transition to "); - PrintElementsKind(out, transitions->GetTarget(i)->elements_kind()); - PrintF(out, ")\n"); + os << " (transition to " + << ElementsKindToString(transitions->GetTarget(i)->elements_kind()) + << ")\n"; } else if (key == GetHeap()->observed_symbol()) { - PrintF(out, " (transition to Object.observe)\n"); + os << " (transition to Object.observe)\n"; } else { switch (transitions->GetTargetDetails(i).type()) { case FIELD: { - PrintF(out, " (transition to field)\n"); + os << " (transition to field)\n"; break; } case CONSTANT: - PrintF(out, " (transition to constant)\n"); + os << " (transition to constant)\n"; break; case CALLBACKS: - PrintF(out, " (transition to callback)\n"); + os << " (transition to callback)\n"; break; // Values below are never in the target descriptor array. case NORMAL: @@ -414,35 +386,32 @@ void JSObject::PrintTransitions(FILE* out) { } -void JSObject::JSObjectPrint(FILE* out) { - PrintF(out, "%p: [JSObject]\n", reinterpret_cast<void*>(this)); - PrintF(out, " - map = %p [", reinterpret_cast<void*>(map())); +void JSObject::JSObjectPrint(OStream& os) { // NOLINT + HeapObject::PrintHeader(os, "JSObject"); // Don't call GetElementsKind, its validation code can cause the printer to // fail when debugging. - PrintElementsKind(out, this->map()->elements_kind()); - PrintF(out, - "]\n - prototype = %p\n", - reinterpret_cast<void*>(GetPrototype())); - PrintF(out, " {\n"); - PrintProperties(out); - PrintTransitions(out); - PrintElements(out); - PrintF(out, " }\n"); + PrototypeIterator iter(GetIsolate(), this); + os << " - map = " << reinterpret_cast<void*>(map()) << " [" + << ElementsKindToString(this->map()->elements_kind()) + << "]\n - prototype = " << reinterpret_cast<void*>(iter.GetCurrent()) + << "\n {\n"; + PrintProperties(os); + PrintTransitions(os); + PrintElements(os); + os << " }\n"; } -void JSModule::JSModulePrint(FILE* out) { - HeapObject::PrintHeader(out, "JSModule"); - PrintF(out, " - map = %p\n", reinterpret_cast<void*>(map())); - PrintF(out, " - context = "); - context()->Print(out); - PrintF(out, " - scope_info = "); - scope_info()->ShortPrint(out); - PrintElementsKind(out, this->map()->elements_kind()); - PrintF(out, " {\n"); - PrintProperties(out); - PrintElements(out); - PrintF(out, " }\n"); +void JSModule::JSModulePrint(OStream& os) { // NOLINT + HeapObject::PrintHeader(os, "JSModule"); + os << " - map = " << reinterpret_cast<void*>(map()) << "\n" + << " - context = "; + context()->Print(os); + os << " - scope_info = " << Brief(scope_info()) + << ElementsKindToString(this->map()->elements_kind()) << " {\n"; + PrintProperties(os); + PrintElements(os); + os << " }\n"; } @@ -457,174 +426,165 @@ static const char* TypeToString(InstanceType type) { } -void Symbol::SymbolPrint(FILE* out) { - HeapObject::PrintHeader(out, "Symbol"); - PrintF(out, " - hash: %d\n", Hash()); - PrintF(out, " - name: "); - name()->ShortPrint(); - PrintF(out, " - private: %d\n", is_private()); - PrintF(out, "\n"); -} - - -void Map::MapPrint(FILE* out) { - HeapObject::PrintHeader(out, "Map"); - PrintF(out, " - type: %s\n", TypeToString(instance_type())); - PrintF(out, " - instance size: %d\n", instance_size()); - PrintF(out, " - inobject properties: %d\n", inobject_properties()); - PrintF(out, " - elements kind: "); - PrintElementsKind(out, elements_kind()); - PrintF(out, "\n - pre-allocated property fields: %d\n", - pre_allocated_property_fields()); - PrintF(out, " - unused property fields: %d\n", 
unused_property_fields()); - if (is_hidden_prototype()) { - PrintF(out, " - hidden_prototype\n"); - } - if (has_named_interceptor()) { - PrintF(out, " - named_interceptor\n"); - } - if (has_indexed_interceptor()) { - PrintF(out, " - indexed_interceptor\n"); - } - if (is_undetectable()) { - PrintF(out, " - undetectable\n"); - } - if (has_instance_call_handler()) { - PrintF(out, " - instance_call_handler\n"); - } - if (is_access_check_needed()) { - PrintF(out, " - access_check_needed\n"); - } +void Symbol::SymbolPrint(OStream& os) { // NOLINT + HeapObject::PrintHeader(os, "Symbol"); + os << " - hash: " << Hash(); + os << "\n - name: " << Brief(name()); + os << "\n - private: " << is_private(); + os << "\n - own: " << is_own(); + os << "\n"; +} + + +void Map::MapPrint(OStream& os) { // NOLINT + HeapObject::PrintHeader(os, "Map"); + os << " - type: " << TypeToString(instance_type()) << "\n"; + os << " - instance size: " << instance_size() << "\n"; + os << " - inobject properties: " << inobject_properties() << "\n"; + os << " - elements kind: " << ElementsKindToString(elements_kind()); + os << "\n - pre-allocated property fields: " + << pre_allocated_property_fields() << "\n"; + os << " - unused property fields: " << unused_property_fields() << "\n"; + if (is_hidden_prototype()) os << " - hidden_prototype\n"; + if (has_named_interceptor()) os << " - named_interceptor\n"; + if (has_indexed_interceptor()) os << " - indexed_interceptor\n"; + if (is_undetectable()) os << " - undetectable\n"; + if (has_instance_call_handler()) os << " - instance_call_handler\n"; + if (is_access_check_needed()) os << " - access_check_needed\n"; if (is_frozen()) { - PrintF(out, " - frozen\n"); + os << " - frozen\n"; } else if (!is_extensible()) { - PrintF(out, " - sealed\n"); + os << " - sealed\n"; } - PrintF(out, " - back pointer: "); - GetBackPointer()->ShortPrint(out); - PrintF(out, "\n - instance descriptors %s#%i: ", - owns_descriptors() ? "(own) " : "", - NumberOfOwnDescriptors()); - instance_descriptors()->ShortPrint(out); + os << " - back pointer: " << Brief(GetBackPointer()); + os << "\n - instance descriptors " << (owns_descriptors() ? 
"(own) " : "") + << "#" << NumberOfOwnDescriptors() << ": " + << Brief(instance_descriptors()); if (HasTransitionArray()) { - PrintF(out, "\n - transitions: "); - transitions()->ShortPrint(out); + os << "\n - transitions: " << Brief(transitions()); } - PrintF(out, "\n - prototype: "); - prototype()->ShortPrint(out); - PrintF(out, "\n - constructor: "); - constructor()->ShortPrint(out); - PrintF(out, "\n - code cache: "); - code_cache()->ShortPrint(out); - PrintF(out, "\n - dependent code: "); - dependent_code()->ShortPrint(out); - PrintF(out, "\n"); + os << "\n - prototype: " << Brief(prototype()); + os << "\n - constructor: " << Brief(constructor()); + os << "\n - code cache: " << Brief(code_cache()); + os << "\n - dependent code: " << Brief(dependent_code()); + os << "\n"; } -void CodeCache::CodeCachePrint(FILE* out) { - HeapObject::PrintHeader(out, "CodeCache"); - PrintF(out, "\n - default_cache: "); - default_cache()->ShortPrint(out); - PrintF(out, "\n - normal_type_cache: "); - normal_type_cache()->ShortPrint(out); +void CodeCache::CodeCachePrint(OStream& os) { // NOLINT + HeapObject::PrintHeader(os, "CodeCache"); + os << "\n - default_cache: " << Brief(default_cache()); + os << "\n - normal_type_cache: " << Brief(normal_type_cache()); } -void PolymorphicCodeCache::PolymorphicCodeCachePrint(FILE* out) { - HeapObject::PrintHeader(out, "PolymorphicCodeCache"); - PrintF(out, "\n - cache: "); - cache()->ShortPrint(out); +void PolymorphicCodeCache::PolymorphicCodeCachePrint(OStream& os) { // NOLINT + HeapObject::PrintHeader(os, "PolymorphicCodeCache"); + os << "\n - cache: " << Brief(cache()); } -void TypeFeedbackInfo::TypeFeedbackInfoPrint(FILE* out) { - HeapObject::PrintHeader(out, "TypeFeedbackInfo"); - PrintF(out, " - ic_total_count: %d, ic_with_type_info_count: %d\n", - ic_total_count(), ic_with_type_info_count()); +void TypeFeedbackInfo::TypeFeedbackInfoPrint(OStream& os) { // NOLINT + HeapObject::PrintHeader(os, "TypeFeedbackInfo"); + os << " - ic_total_count: " << ic_total_count() + << ", ic_with_type_info_count: " << ic_with_type_info_count() + << ", ic_generic_count: " << ic_generic_count() << "\n"; } -void AliasedArgumentsEntry::AliasedArgumentsEntryPrint(FILE* out) { - HeapObject::PrintHeader(out, "AliasedArgumentsEntry"); - PrintF(out, "\n - aliased_context_slot: %d", aliased_context_slot()); +void AliasedArgumentsEntry::AliasedArgumentsEntryPrint(OStream& os) { // NOLINT + HeapObject::PrintHeader(os, "AliasedArgumentsEntry"); + os << "\n - aliased_context_slot: " << aliased_context_slot(); } -void FixedArray::FixedArrayPrint(FILE* out) { - HeapObject::PrintHeader(out, "FixedArray"); - PrintF(out, " - length: %d", length()); +void FixedArray::FixedArrayPrint(OStream& os) { // NOLINT + HeapObject::PrintHeader(os, "FixedArray"); + os << " - length: " << length(); for (int i = 0; i < length(); i++) { - PrintF(out, "\n [%d]: ", i); - get(i)->ShortPrint(out); + os << "\n [" << i << "]: " << Brief(get(i)); } - PrintF(out, "\n"); + os << "\n"; } -void FixedDoubleArray::FixedDoubleArrayPrint(FILE* out) { - HeapObject::PrintHeader(out, "FixedDoubleArray"); - PrintF(out, " - length: %d", length()); +void FixedDoubleArray::FixedDoubleArrayPrint(OStream& os) { // NOLINT + HeapObject::PrintHeader(os, "FixedDoubleArray"); + os << " - length: " << length(); for (int i = 0; i < length(); i++) { + os << "\n [" << i << "]: "; if (is_the_hole(i)) { - PrintF(out, "\n [%d]: <the hole>", i); + os << "<the hole>"; } else { - PrintF(out, "\n [%d]: %g", i, get_scalar(i)); + os << get_scalar(i); } } - 
PrintF(out, "\n"); -} - - -void ConstantPoolArray::ConstantPoolArrayPrint(FILE* out) { - HeapObject::PrintHeader(out, "ConstantPoolArray"); - PrintF(out, " - length: %d", length()); - for (int i = 0; i < length(); i++) { - if (i < first_code_ptr_index()) { - PrintF(out, "\n [%d]: double: %g", i, get_int64_entry_as_double(i)); - } else if (i < first_heap_ptr_index()) { - PrintF(out, "\n [%d]: code target pointer: %p", i, - reinterpret_cast<void*>(get_code_ptr_entry(i))); - } else if (i < first_int32_index()) { - PrintF(out, "\n [%d]: heap pointer: %p", i, - reinterpret_cast<void*>(get_heap_ptr_entry(i))); - } else { - PrintF(out, "\n [%d]: int32: %d", i, get_int32_entry(i)); + os << "\n"; +} + + +void ConstantPoolArray::ConstantPoolArrayPrint(OStream& os) { // NOLINT + HeapObject::PrintHeader(os, "ConstantPoolArray"); + os << " - length: " << length(); + for (int i = 0; i <= last_index(INT32, SMALL_SECTION); i++) { + if (i < last_index(INT64, SMALL_SECTION)) { + os << "\n [" << i << "]: double: " << get_int64_entry_as_double(i); + } else if (i <= last_index(CODE_PTR, SMALL_SECTION)) { + os << "\n [" << i << "]: code target pointer: " + << reinterpret_cast<void*>(get_code_ptr_entry(i)); + } else if (i <= last_index(HEAP_PTR, SMALL_SECTION)) { + os << "\n [" << i << "]: heap pointer: " + << reinterpret_cast<void*>(get_heap_ptr_entry(i)); + } else if (i <= last_index(INT32, SMALL_SECTION)) { + os << "\n [" << i << "]: int32: " << get_int32_entry(i); + } + } + if (is_extended_layout()) { + os << "\n Extended section:"; + for (int i = first_extended_section_index(); + i <= last_index(INT32, EXTENDED_SECTION); i++) { + if (i < last_index(INT64, EXTENDED_SECTION)) { + os << "\n [" << i << "]: double: " << get_int64_entry_as_double(i); + } else if (i <= last_index(CODE_PTR, EXTENDED_SECTION)) { + os << "\n [" << i << "]: code target pointer: " + << reinterpret_cast<void*>(get_code_ptr_entry(i)); + } else if (i <= last_index(HEAP_PTR, EXTENDED_SECTION)) { + os << "\n [" << i << "]: heap pointer: " + << reinterpret_cast<void*>(get_heap_ptr_entry(i)); + } else if (i <= last_index(INT32, EXTENDED_SECTION)) { + os << "\n [" << i << "]: int32: " << get_int32_entry(i); + } } } - PrintF(out, "\n"); + os << "\n"; } -void JSValue::JSValuePrint(FILE* out) { - HeapObject::PrintHeader(out, "ValueObject"); - value()->Print(out); +void JSValue::JSValuePrint(OStream& os) { // NOLINT + HeapObject::PrintHeader(os, "ValueObject"); + value()->Print(os); } -void JSMessageObject::JSMessageObjectPrint(FILE* out) { - HeapObject::PrintHeader(out, "JSMessageObject"); - PrintF(out, " - type: "); - type()->ShortPrint(out); - PrintF(out, "\n - arguments: "); - arguments()->ShortPrint(out); - PrintF(out, "\n - start_position: %d", start_position()); - PrintF(out, "\n - end_position: %d", end_position()); - PrintF(out, "\n - script: "); - script()->ShortPrint(out); - PrintF(out, "\n - stack_frames: "); - stack_frames()->ShortPrint(out); - PrintF(out, "\n"); +void JSMessageObject::JSMessageObjectPrint(OStream& os) { // NOLINT + HeapObject::PrintHeader(os, "JSMessageObject"); + os << " - type: " << Brief(type()); + os << "\n - arguments: " << Brief(arguments()); + os << "\n - start_position: " << start_position(); + os << "\n - end_position: " << end_position(); + os << "\n - script: " << Brief(script()); + os << "\n - stack_frames: " << Brief(stack_frames()); + os << "\n"; } -void String::StringPrint(FILE* out) { +void String::StringPrint(OStream& os) { // NOLINT if (StringShape(this).IsInternalized()) { - PrintF(out, "#"); + os 
<< "#"; } else if (StringShape(this).IsCons()) { - PrintF(out, "c\""); + os << "c\""; } else { - PrintF(out, "\""); + os << "\""; } const char truncated_epilogue[] = "...<truncated>"; @@ -635,21 +595,21 @@ void String::StringPrint(FILE* out) { } } for (int i = 0; i < len; i++) { - PrintF(out, "%c", Get(i)); + os << AsUC16(Get(i)); } if (len != length()) { - PrintF(out, "%s", truncated_epilogue); + os << truncated_epilogue; } - if (!StringShape(this).IsInternalized()) PrintF(out, "\""); + if (!StringShape(this).IsInternalized()) os << "\""; } -void Name::NamePrint(FILE* out) { +void Name::NamePrint(OStream& os) { // NOLINT if (IsString()) - String::cast(this)->StringPrint(out); + String::cast(this)->StringPrint(os); else - ShortPrint(); + os << Brief(this); } @@ -673,210 +633,181 @@ static const char* const weekdays[] = { }; -void JSDate::JSDatePrint(FILE* out) { - HeapObject::PrintHeader(out, "JSDate"); - PrintF(out, " - map = %p\n", reinterpret_cast<void*>(map())); - PrintF(out, " - value = "); - value()->Print(out); +void JSDate::JSDatePrint(OStream& os) { // NOLINT + HeapObject::PrintHeader(os, "JSDate"); + os << " - map = " << reinterpret_cast<void*>(map()) << "\n"; + os << " - value = "; + value()->Print(os); if (!year()->IsSmi()) { - PrintF(out, " - time = NaN\n"); + os << " - time = NaN\n"; } else { - PrintF(out, " - time = %s %04d/%02d/%02d %02d:%02d:%02d\n", - weekdays[weekday()->IsSmi() ? Smi::cast(weekday())->value() + 1 : 0], - year()->IsSmi() ? Smi::cast(year())->value() : -1, - month()->IsSmi() ? Smi::cast(month())->value() : -1, - day()->IsSmi() ? Smi::cast(day())->value() : -1, - hour()->IsSmi() ? Smi::cast(hour())->value() : -1, - min()->IsSmi() ? Smi::cast(min())->value() : -1, - sec()->IsSmi() ? Smi::cast(sec())->value() : -1); + // TODO(svenpanne) Add some basic formatting to our streams. + Vector<char> buf = Vector<char>::New(100); + SNPrintF( + buf, " - time = %s %04d/%02d/%02d %02d:%02d:%02d\n", + weekdays[weekday()->IsSmi() ? Smi::cast(weekday())->value() + 1 : 0], + year()->IsSmi() ? Smi::cast(year())->value() : -1, + month()->IsSmi() ? Smi::cast(month())->value() : -1, + day()->IsSmi() ? Smi::cast(day())->value() : -1, + hour()->IsSmi() ? Smi::cast(hour())->value() : -1, + min()->IsSmi() ? Smi::cast(min())->value() : -1, + sec()->IsSmi() ? 
Smi::cast(sec())->value() : -1); + os << buf.start(); } } -void JSProxy::JSProxyPrint(FILE* out) { - HeapObject::PrintHeader(out, "JSProxy"); - PrintF(out, " - map = %p\n", reinterpret_cast<void*>(map())); - PrintF(out, " - handler = "); - handler()->Print(out); - PrintF(out, "\n - hash = "); - hash()->Print(out); - PrintF(out, "\n"); +void JSProxy::JSProxyPrint(OStream& os) { // NOLINT + HeapObject::PrintHeader(os, "JSProxy"); + os << " - map = " << reinterpret_cast<void*>(map()) << "\n"; + os << " - handler = "; + handler()->Print(os); + os << "\n - hash = "; + hash()->Print(os); + os << "\n"; } -void JSFunctionProxy::JSFunctionProxyPrint(FILE* out) { - HeapObject::PrintHeader(out, "JSFunctionProxy"); - PrintF(out, " - map = %p\n", reinterpret_cast<void*>(map())); - PrintF(out, " - handler = "); - handler()->Print(out); - PrintF(out, "\n - call_trap = "); - call_trap()->Print(out); - PrintF(out, "\n - construct_trap = "); - construct_trap()->Print(out); - PrintF(out, "\n"); +void JSFunctionProxy::JSFunctionProxyPrint(OStream& os) { // NOLINT + HeapObject::PrintHeader(os, "JSFunctionProxy"); + os << " - map = " << reinterpret_cast<void*>(map()) << "\n"; + os << " - handler = "; + handler()->Print(os); + os << "\n - call_trap = "; + call_trap()->Print(os); + os << "\n - construct_trap = "; + construct_trap()->Print(os); + os << "\n"; } -void JSSet::JSSetPrint(FILE* out) { - HeapObject::PrintHeader(out, "JSSet"); - PrintF(out, " - map = %p\n", reinterpret_cast<void*>(map())); - PrintF(out, " - table = "); - table()->ShortPrint(out); - PrintF(out, "\n"); +void JSSet::JSSetPrint(OStream& os) { // NOLINT + HeapObject::PrintHeader(os, "JSSet"); + os << " - map = " << reinterpret_cast<void*>(map()) << "\n"; + os << " - table = " << Brief(table()); + os << "\n"; } -void JSMap::JSMapPrint(FILE* out) { - HeapObject::PrintHeader(out, "JSMap"); - PrintF(out, " - map = %p\n", reinterpret_cast<void*>(map())); - PrintF(out, " - table = "); - table()->ShortPrint(out); - PrintF(out, "\n"); +void JSMap::JSMapPrint(OStream& os) { // NOLINT + HeapObject::PrintHeader(os, "JSMap"); + os << " - map = " << reinterpret_cast<void*>(map()) << "\n"; + os << " - table = " << Brief(table()); + os << "\n"; } -template<class Derived, class TableType> -void OrderedHashTableIterator<Derived, TableType>:: - OrderedHashTableIteratorPrint(FILE* out) { - PrintF(out, " - map = %p\n", reinterpret_cast<void*>(map())); - PrintF(out, " - table = "); - table()->ShortPrint(out); - PrintF(out, "\n - index = "); - index()->ShortPrint(out); - PrintF(out, "\n - count = "); - count()->ShortPrint(out); - PrintF(out, "\n - kind = "); - kind()->ShortPrint(out); - PrintF(out, "\n - next_iterator = "); - next_iterator()->ShortPrint(out); - PrintF(out, "\n - previous_iterator = "); - previous_iterator()->ShortPrint(out); - PrintF(out, "\n"); +template <class Derived, class TableType> +void OrderedHashTableIterator< + Derived, TableType>::OrderedHashTableIteratorPrint(OStream& os) { // NOLINT + os << " - map = " << reinterpret_cast<void*>(map()) << "\n"; + os << " - table = " << Brief(table()); + os << "\n - index = " << Brief(index()); + os << "\n - kind = " << Brief(kind()); + os << "\n"; } -template void -OrderedHashTableIterator<JSSetIterator, - OrderedHashSet>::OrderedHashTableIteratorPrint(FILE* out); +template void OrderedHashTableIterator< + JSSetIterator, + OrderedHashSet>::OrderedHashTableIteratorPrint(OStream& os); // NOLINT -template void -OrderedHashTableIterator<JSMapIterator, - OrderedHashMap>::OrderedHashTableIteratorPrint(FILE* 
out); +template void OrderedHashTableIterator< + JSMapIterator, + OrderedHashMap>::OrderedHashTableIteratorPrint(OStream& os); // NOLINT -void JSSetIterator::JSSetIteratorPrint(FILE* out) { - HeapObject::PrintHeader(out, "JSSetIterator"); - OrderedHashTableIteratorPrint(out); +void JSSetIterator::JSSetIteratorPrint(OStream& os) { // NOLINT + HeapObject::PrintHeader(os, "JSSetIterator"); + OrderedHashTableIteratorPrint(os); } -void JSMapIterator::JSMapIteratorPrint(FILE* out) { - HeapObject::PrintHeader(out, "JSMapIterator"); - OrderedHashTableIteratorPrint(out); +void JSMapIterator::JSMapIteratorPrint(OStream& os) { // NOLINT + HeapObject::PrintHeader(os, "JSMapIterator"); + OrderedHashTableIteratorPrint(os); } -void JSWeakMap::JSWeakMapPrint(FILE* out) { - HeapObject::PrintHeader(out, "JSWeakMap"); - PrintF(out, " - map = %p\n", reinterpret_cast<void*>(map())); - PrintF(out, " - table = "); - table()->ShortPrint(out); - PrintF(out, "\n"); +void JSWeakMap::JSWeakMapPrint(OStream& os) { // NOLINT + HeapObject::PrintHeader(os, "JSWeakMap"); + os << " - map = " << reinterpret_cast<void*>(map()) << "\n"; + os << " - table = " << Brief(table()); + os << "\n"; } -void JSWeakSet::JSWeakSetPrint(FILE* out) { - HeapObject::PrintHeader(out, "JSWeakSet"); - PrintF(out, " - map = %p\n", reinterpret_cast<void*>(map())); - PrintF(out, " - table = "); - table()->ShortPrint(out); - PrintF(out, "\n"); +void JSWeakSet::JSWeakSetPrint(OStream& os) { // NOLINT + HeapObject::PrintHeader(os, "JSWeakSet"); + os << " - map = " << reinterpret_cast<void*>(map()) << "\n"; + os << " - table = " << Brief(table()); + os << "\n"; } -void JSArrayBuffer::JSArrayBufferPrint(FILE* out) { - HeapObject::PrintHeader(out, "JSArrayBuffer"); - PrintF(out, " - map = %p\n", reinterpret_cast<void*>(map())); - PrintF(out, " - backing_store = %p\n", backing_store()); - PrintF(out, " - byte_length = "); - byte_length()->ShortPrint(out); - PrintF(out, "\n"); +void JSArrayBuffer::JSArrayBufferPrint(OStream& os) { // NOLINT + HeapObject::PrintHeader(os, "JSArrayBuffer"); + os << " - map = " << reinterpret_cast<void*>(map()) << "\n"; + os << " - backing_store = " << backing_store() << "\n"; + os << " - byte_length = " << Brief(byte_length()); + os << "\n"; } -void JSTypedArray::JSTypedArrayPrint(FILE* out) { - HeapObject::PrintHeader(out, "JSTypedArray"); - PrintF(out, " - map = %p\n", reinterpret_cast<void*>(map())); - PrintF(out, " - buffer ="); - buffer()->ShortPrint(out); - PrintF(out, "\n - byte_offset = "); - byte_offset()->ShortPrint(out); - PrintF(out, "\n - byte_length = "); - byte_length()->ShortPrint(out); - PrintF(out, "\n - length = "); - length()->ShortPrint(out); - PrintF(out, "\n"); - PrintElements(out); +void JSTypedArray::JSTypedArrayPrint(OStream& os) { // NOLINT + HeapObject::PrintHeader(os, "JSTypedArray"); + os << " - map = " << reinterpret_cast<void*>(map()) << "\n"; + os << " - buffer =" << Brief(buffer()); + os << "\n - byte_offset = " << Brief(byte_offset()); + os << "\n - byte_length = " << Brief(byte_length()); + os << "\n - length = " << Brief(length()); + os << "\n"; + PrintElements(os); } -void JSDataView::JSDataViewPrint(FILE* out) { - HeapObject::PrintHeader(out, "JSDataView"); - PrintF(out, " - map = %p\n", reinterpret_cast<void*>(map())); - PrintF(out, " - buffer ="); - buffer()->ShortPrint(out); - PrintF(out, "\n - byte_offset = "); - byte_offset()->ShortPrint(out); - PrintF(out, "\n - byte_length = "); - byte_length()->ShortPrint(out); - PrintF(out, "\n"); +void JSDataView::JSDataViewPrint(OStream& os) 
{ // NOLINT + HeapObject::PrintHeader(os, "JSDataView"); + os << " - map = " << reinterpret_cast<void*>(map()) << "\n"; + os << " - buffer =" << Brief(buffer()); + os << "\n - byte_offset = " << Brief(byte_offset()); + os << "\n - byte_length = " << Brief(byte_length()); + os << "\n"; } -void JSFunction::JSFunctionPrint(FILE* out) { - HeapObject::PrintHeader(out, "Function"); - PrintF(out, " - map = %p\n", reinterpret_cast<void*>(map())); - PrintF(out, " - initial_map = "); - if (has_initial_map()) { - initial_map()->ShortPrint(out); - } - PrintF(out, "\n - shared_info = "); - shared()->ShortPrint(out); - PrintF(out, "\n - name = "); - shared()->name()->Print(out); - PrintF(out, "\n - context = "); - context()->ShortPrint(out); +void JSFunction::JSFunctionPrint(OStream& os) { // NOLINT + HeapObject::PrintHeader(os, "Function"); + os << " - map = " << reinterpret_cast<void*>(map()) << "\n"; + os << " - initial_map = "; + if (has_initial_map()) os << Brief(initial_map()); + os << "\n - shared_info = " << Brief(shared()); + os << "\n - name = " << Brief(shared()->name()); + os << "\n - context = " << Brief(context()); if (shared()->bound()) { - PrintF(out, "\n - bindings = "); - function_bindings()->ShortPrint(out); + os << "\n - bindings = " << Brief(function_bindings()); } else { - PrintF(out, "\n - literals = "); - literals()->ShortPrint(out); + os << "\n - literals = " << Brief(literals()); } - PrintF(out, "\n - code = "); - code()->ShortPrint(out); - PrintF(out, "\n"); - - PrintProperties(out); - PrintElements(out); - - PrintF(out, "\n"); -} - - -void SharedFunctionInfo::SharedFunctionInfoPrint(FILE* out) { - HeapObject::PrintHeader(out, "SharedFunctionInfo"); - PrintF(out, " - name: "); - name()->ShortPrint(out); - PrintF(out, "\n - expected_nof_properties: %d", expected_nof_properties()); - PrintF(out, "\n - ast_node_count: %d", ast_node_count()); - PrintF(out, "\n - instance class name = "); - instance_class_name()->Print(out); - PrintF(out, "\n - code = "); - code()->ShortPrint(out); + os << "\n - code = " << Brief(code()); + os << "\n"; + PrintProperties(os); + PrintElements(os); + os << "\n"; +} + + +void SharedFunctionInfo::SharedFunctionInfoPrint(OStream& os) { // NOLINT + HeapObject::PrintHeader(os, "SharedFunctionInfo"); + os << " - name: " << Brief(name()); + os << "\n - expected_nof_properties: " << expected_nof_properties(); + os << "\n - ast_node_count: " << ast_node_count(); + os << "\n - instance class name = "; + instance_class_name()->Print(os); + os << "\n - code = " << Brief(code()); if (HasSourceCode()) { - PrintF(out, "\n - source code = "); + os << "\n - source code = "; String* source = String::cast(Script::cast(script())->source()); int start = start_position(); int length = end_position() - start; @@ -884,357 +815,293 @@ void SharedFunctionInfo::SharedFunctionInfoPrint(FILE* out) { source->ToCString(DISALLOW_NULLS, FAST_STRING_TRAVERSAL, start, length, NULL); - PrintF(out, "%s", source_string.get()); + os << source_string.get(); } // Script files are often large, hard to read. 
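// [editor's note] The same concern -- keep output short -- is why most fields
// in these printers now go through Brief(...): it emits a one-line summary
// (the ShortPrint form) instead of recursing into the object. Conceptually it
// is a small wrapper that routes the object to an operator<< overload,
// roughly as sketched below; the real definition ships with this upgrade and
// may differ in detail:
//
//   struct Brief {
//     explicit Brief(const Object* v) : value(v) {}
//     const Object* value;
//   };
//   OStream& operator<<(OStream& os, const Brief& b);  // ShortPrint-style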
- // PrintF(out, "\n - script ="); - // script()->Print(out); - PrintF(out, "\n - function token position = %d", function_token_position()); - PrintF(out, "\n - start position = %d", start_position()); - PrintF(out, "\n - end position = %d", end_position()); - PrintF(out, "\n - is expression = %d", is_expression()); - PrintF(out, "\n - debug info = "); - debug_info()->ShortPrint(out); - PrintF(out, "\n - length = %d", length()); - PrintF(out, "\n - optimized_code_map = "); - optimized_code_map()->ShortPrint(out); - PrintF(out, "\n - feedback_vector = "); - feedback_vector()->FixedArrayPrint(out); - PrintF(out, "\n"); + // os << "\n - script ="; + // script()->Print(os); + os << "\n - function token position = " << function_token_position(); + os << "\n - start position = " << start_position(); + os << "\n - end position = " << end_position(); + os << "\n - is expression = " << is_expression(); + os << "\n - debug info = " << Brief(debug_info()); + os << "\n - length = " << length(); + os << "\n - optimized_code_map = " << Brief(optimized_code_map()); + os << "\n - feedback_vector = "; + feedback_vector()->FixedArrayPrint(os); + os << "\n"; } -void JSGlobalProxy::JSGlobalProxyPrint(FILE* out) { - PrintF(out, "global_proxy "); - JSObjectPrint(out); - PrintF(out, "native context : "); - native_context()->ShortPrint(out); - PrintF(out, "\n"); +void JSGlobalProxy::JSGlobalProxyPrint(OStream& os) { // NOLINT + os << "global_proxy "; + JSObjectPrint(os); + os << "native context : " << Brief(native_context()); + os << "\n"; } -void JSGlobalObject::JSGlobalObjectPrint(FILE* out) { - PrintF(out, "global "); - JSObjectPrint(out); - PrintF(out, "native context : "); - native_context()->ShortPrint(out); - PrintF(out, "\n"); +void JSGlobalObject::JSGlobalObjectPrint(OStream& os) { // NOLINT + os << "global "; + JSObjectPrint(os); + os << "native context : " << Brief(native_context()); + os << "\n"; } -void JSBuiltinsObject::JSBuiltinsObjectPrint(FILE* out) { - PrintF(out, "builtins "); - JSObjectPrint(out); +void JSBuiltinsObject::JSBuiltinsObjectPrint(OStream& os) { // NOLINT + os << "builtins "; + JSObjectPrint(os); } -void Cell::CellPrint(FILE* out) { - HeapObject::PrintHeader(out, "Cell"); +void Cell::CellPrint(OStream& os) { // NOLINT + HeapObject::PrintHeader(os, "Cell"); } -void PropertyCell::PropertyCellPrint(FILE* out) { - HeapObject::PrintHeader(out, "PropertyCell"); +void PropertyCell::PropertyCellPrint(OStream& os) { // NOLINT + HeapObject::PrintHeader(os, "PropertyCell"); } -void Code::CodePrint(FILE* out) { - HeapObject::PrintHeader(out, "Code"); +void Code::CodePrint(OStream& os) { // NOLINT + HeapObject::PrintHeader(os, "Code"); #ifdef ENABLE_DISASSEMBLER if (FLAG_use_verbose_printer) { - Disassemble(NULL, out); + Disassemble(NULL, os); } #endif } -void Foreign::ForeignPrint(FILE* out) { - PrintF(out, "foreign address : %p", foreign_address()); +void Foreign::ForeignPrint(OStream& os) { // NOLINT + os << "foreign address : " << foreign_address(); +} + + +void ExecutableAccessorInfo::ExecutableAccessorInfoPrint( + OStream& os) { // NOLINT + HeapObject::PrintHeader(os, "ExecutableAccessorInfo"); + os << "\n - name: " << Brief(name()); + os << "\n - flag: " << Brief(flag()); + os << "\n - getter: " << Brief(getter()); + os << "\n - setter: " << Brief(setter()); + os << "\n - data: " << Brief(data()); + os << "\n"; +} + + +void DeclaredAccessorInfo::DeclaredAccessorInfoPrint(OStream& os) { // NOLINT + HeapObject::PrintHeader(os, "DeclaredAccessorInfo"); + os << "\n - name: " << 
Brief(name()); + os << "\n - flag: " << Brief(flag()); + os << "\n - descriptor: " << Brief(descriptor()); + os << "\n"; +} + + +void DeclaredAccessorDescriptor::DeclaredAccessorDescriptorPrint( + OStream& os) { // NOLINT + HeapObject::PrintHeader(os, "DeclaredAccessorDescriptor"); + os << "\n - internal field: " << Brief(serialized_data()); + os << "\n"; +} + + +void Box::BoxPrint(OStream& os) { // NOLINT + HeapObject::PrintHeader(os, "Box"); + os << "\n - value: " << Brief(value()); + os << "\n"; +} + + +void AccessorPair::AccessorPairPrint(OStream& os) { // NOLINT + HeapObject::PrintHeader(os, "AccessorPair"); + os << "\n - getter: " << Brief(getter()); + os << "\n - setter: " << Brief(setter()); + os << "\n"; +} + + +void AccessCheckInfo::AccessCheckInfoPrint(OStream& os) { // NOLINT + HeapObject::PrintHeader(os, "AccessCheckInfo"); + os << "\n - named_callback: " << Brief(named_callback()); + os << "\n - indexed_callback: " << Brief(indexed_callback()); + os << "\n - data: " << Brief(data()); + os << "\n"; } -void ExecutableAccessorInfo::ExecutableAccessorInfoPrint(FILE* out) { - HeapObject::PrintHeader(out, "ExecutableAccessorInfo"); - PrintF(out, "\n - name: "); - name()->ShortPrint(out); - PrintF(out, "\n - flag: "); - flag()->ShortPrint(out); - PrintF(out, "\n - getter: "); - getter()->ShortPrint(out); - PrintF(out, "\n - setter: "); - setter()->ShortPrint(out); - PrintF(out, "\n - data: "); - data()->ShortPrint(out); +void InterceptorInfo::InterceptorInfoPrint(OStream& os) { // NOLINT + HeapObject::PrintHeader(os, "InterceptorInfo"); + os << "\n - getter: " << Brief(getter()); + os << "\n - setter: " << Brief(setter()); + os << "\n - query: " << Brief(query()); + os << "\n - deleter: " << Brief(deleter()); + os << "\n - enumerator: " << Brief(enumerator()); + os << "\n - data: " << Brief(data()); + os << "\n"; } -void DeclaredAccessorInfo::DeclaredAccessorInfoPrint(FILE* out) { - HeapObject::PrintHeader(out, "DeclaredAccessorInfo"); - PrintF(out, "\n - name: "); - name()->ShortPrint(out); - PrintF(out, "\n - flag: "); - flag()->ShortPrint(out); - PrintF(out, "\n - descriptor: "); - descriptor()->ShortPrint(out); -} +void CallHandlerInfo::CallHandlerInfoPrint(OStream& os) { // NOLINT + HeapObject::PrintHeader(os, "CallHandlerInfo"); + os << "\n - callback: " << Brief(callback()); + os << "\n - data: " << Brief(data()); + os << "\n"; +} + + +void FunctionTemplateInfo::FunctionTemplateInfoPrint(OStream& os) { // NOLINT + HeapObject::PrintHeader(os, "FunctionTemplateInfo"); + os << "\n - class name: " << Brief(class_name()); + os << "\n - tag: " << Brief(tag()); + os << "\n - property_list: " << Brief(property_list()); + os << "\n - serial_number: " << Brief(serial_number()); + os << "\n - call_code: " << Brief(call_code()); + os << "\n - property_accessors: " << Brief(property_accessors()); + os << "\n - prototype_template: " << Brief(prototype_template()); + os << "\n - parent_template: " << Brief(parent_template()); + os << "\n - named_property_handler: " << Brief(named_property_handler()); + os << "\n - indexed_property_handler: " << Brief(indexed_property_handler()); + os << "\n - instance_template: " << Brief(instance_template()); + os << "\n - signature: " << Brief(signature()); + os << "\n - access_check_info: " << Brief(access_check_info()); + os << "\n - hidden_prototype: " << (hidden_prototype() ? "true" : "false"); + os << "\n - undetectable: " << (undetectable() ? "true" : "false"); + os << "\n - need_access_check: " << (needs_access_check() ? 
"true" : "false"); + os << "\n"; +} -void DeclaredAccessorDescriptor::DeclaredAccessorDescriptorPrint(FILE* out) { - HeapObject::PrintHeader(out, "DeclaredAccessorDescriptor"); - PrintF(out, "\n - internal field: "); - serialized_data()->ShortPrint(out); -} - - -void Box::BoxPrint(FILE* out) { - HeapObject::PrintHeader(out, "Box"); - PrintF(out, "\n - value: "); - value()->ShortPrint(out); -} - - -void AccessorPair::AccessorPairPrint(FILE* out) { - HeapObject::PrintHeader(out, "AccessorPair"); - PrintF(out, "\n - getter: "); - getter()->ShortPrint(out); - PrintF(out, "\n - setter: "); - setter()->ShortPrint(out); - PrintF(out, "\n - flag: "); - access_flags()->ShortPrint(out); -} +void ObjectTemplateInfo::ObjectTemplateInfoPrint(OStream& os) { // NOLINT + HeapObject::PrintHeader(os, "ObjectTemplateInfo"); + os << " - tag: " << Brief(tag()); + os << "\n - property_list: " << Brief(property_list()); + os << "\n - property_accessors: " << Brief(property_accessors()); + os << "\n - constructor: " << Brief(constructor()); + os << "\n - internal_field_count: " << Brief(internal_field_count()); + os << "\n"; +} -void AccessCheckInfo::AccessCheckInfoPrint(FILE* out) { - HeapObject::PrintHeader(out, "AccessCheckInfo"); - PrintF(out, "\n - named_callback: "); - named_callback()->ShortPrint(out); - PrintF(out, "\n - indexed_callback: "); - indexed_callback()->ShortPrint(out); - PrintF(out, "\n - data: "); - data()->ShortPrint(out); +void SignatureInfo::SignatureInfoPrint(OStream& os) { // NOLINT + HeapObject::PrintHeader(os, "SignatureInfo"); + os << "\n - receiver: " << Brief(receiver()); + os << "\n - args: " << Brief(args()); + os << "\n"; } -void InterceptorInfo::InterceptorInfoPrint(FILE* out) { - HeapObject::PrintHeader(out, "InterceptorInfo"); - PrintF(out, "\n - getter: "); - getter()->ShortPrint(out); - PrintF(out, "\n - setter: "); - setter()->ShortPrint(out); - PrintF(out, "\n - query: "); - query()->ShortPrint(out); - PrintF(out, "\n - deleter: "); - deleter()->ShortPrint(out); - PrintF(out, "\n - enumerator: "); - enumerator()->ShortPrint(out); - PrintF(out, "\n - data: "); - data()->ShortPrint(out); -} - - -void CallHandlerInfo::CallHandlerInfoPrint(FILE* out) { - HeapObject::PrintHeader(out, "CallHandlerInfo"); - PrintF(out, "\n - callback: "); - callback()->ShortPrint(out); - PrintF(out, "\n - data: "); - data()->ShortPrint(out); - PrintF(out, "\n - call_stub_cache: "); -} - - -void FunctionTemplateInfo::FunctionTemplateInfoPrint(FILE* out) { - HeapObject::PrintHeader(out, "FunctionTemplateInfo"); - PrintF(out, "\n - class name: "); - class_name()->ShortPrint(out); - PrintF(out, "\n - tag: "); - tag()->ShortPrint(out); - PrintF(out, "\n - property_list: "); - property_list()->ShortPrint(out); - PrintF(out, "\n - serial_number: "); - serial_number()->ShortPrint(out); - PrintF(out, "\n - call_code: "); - call_code()->ShortPrint(out); - PrintF(out, "\n - property_accessors: "); - property_accessors()->ShortPrint(out); - PrintF(out, "\n - prototype_template: "); - prototype_template()->ShortPrint(out); - PrintF(out, "\n - parent_template: "); - parent_template()->ShortPrint(out); - PrintF(out, "\n - named_property_handler: "); - named_property_handler()->ShortPrint(out); - PrintF(out, "\n - indexed_property_handler: "); - indexed_property_handler()->ShortPrint(out); - PrintF(out, "\n - instance_template: "); - instance_template()->ShortPrint(out); - PrintF(out, "\n - signature: "); - signature()->ShortPrint(out); - PrintF(out, "\n - access_check_info: "); - 
access_check_info()->ShortPrint(out); - PrintF(out, "\n - hidden_prototype: %s", - hidden_prototype() ? "true" : "false"); - PrintF(out, "\n - undetectable: %s", undetectable() ? "true" : "false"); - PrintF(out, "\n - need_access_check: %s", - needs_access_check() ? "true" : "false"); -} - - -void ObjectTemplateInfo::ObjectTemplateInfoPrint(FILE* out) { - HeapObject::PrintHeader(out, "ObjectTemplateInfo"); - PrintF(out, " - tag: "); - tag()->ShortPrint(out); - PrintF(out, "\n - property_list: "); - property_list()->ShortPrint(out); - PrintF(out, "\n - property_accessors: "); - property_accessors()->ShortPrint(out); - PrintF(out, "\n - constructor: "); - constructor()->ShortPrint(out); - PrintF(out, "\n - internal_field_count: "); - internal_field_count()->ShortPrint(out); - PrintF(out, "\n"); -} - - -void SignatureInfo::SignatureInfoPrint(FILE* out) { - HeapObject::PrintHeader(out, "SignatureInfo"); - PrintF(out, "\n - receiver: "); - receiver()->ShortPrint(out); - PrintF(out, "\n - args: "); - args()->ShortPrint(out); -} - - -void TypeSwitchInfo::TypeSwitchInfoPrint(FILE* out) { - HeapObject::PrintHeader(out, "TypeSwitchInfo"); - PrintF(out, "\n - types: "); - types()->ShortPrint(out); -} - - -void AllocationSite::AllocationSitePrint(FILE* out) { - HeapObject::PrintHeader(out, "AllocationSite"); - PrintF(out, " - weak_next: "); - weak_next()->ShortPrint(out); - PrintF(out, "\n - dependent code: "); - dependent_code()->ShortPrint(out); - PrintF(out, "\n - nested site: "); - nested_site()->ShortPrint(out); - PrintF(out, "\n - memento found count: "); - Smi::FromInt(memento_found_count())->ShortPrint(out); - PrintF(out, "\n - memento create count: "); - Smi::FromInt(memento_create_count())->ShortPrint(out); - PrintF(out, "\n - pretenure decision: "); - Smi::FromInt(pretenure_decision())->ShortPrint(out); - PrintF(out, "\n - transition_info: "); +void TypeSwitchInfo::TypeSwitchInfoPrint(OStream& os) { // NOLINT + HeapObject::PrintHeader(os, "TypeSwitchInfo"); + os << "\n - types: " << Brief(types()); + os << "\n"; +} + + +void AllocationSite::AllocationSitePrint(OStream& os) { // NOLINT + HeapObject::PrintHeader(os, "AllocationSite"); + os << " - weak_next: " << Brief(weak_next()); + os << "\n - dependent code: " << Brief(dependent_code()); + os << "\n - nested site: " << Brief(nested_site()); + os << "\n - memento found count: " + << Brief(Smi::FromInt(memento_found_count())); + os << "\n - memento create count: " + << Brief(Smi::FromInt(memento_create_count())); + os << "\n - pretenure decision: " + << Brief(Smi::FromInt(pretenure_decision())); + os << "\n - transition_info: "; if (transition_info()->IsSmi()) { ElementsKind kind = GetElementsKind(); - PrintF(out, "Array allocation with ElementsKind "); - PrintElementsKind(out, kind); - PrintF(out, "\n"); - return; + os << "Array allocation with ElementsKind " << ElementsKindToString(kind); } else if (transition_info()->IsJSArray()) { - PrintF(out, "Array literal "); - transition_info()->ShortPrint(out); - PrintF(out, "\n"); - return; + os << "Array literal " << Brief(transition_info()); + } else { + os << "unknown transition_info" << Brief(transition_info()); } - - PrintF(out, "unknown transition_info"); - transition_info()->ShortPrint(out); - PrintF(out, "\n"); + os << "\n"; } -void AllocationMemento::AllocationMementoPrint(FILE* out) { - HeapObject::PrintHeader(out, "AllocationMemento"); - PrintF(out, " - allocation site: "); +void AllocationMemento::AllocationMementoPrint(OStream& os) { // NOLINT + HeapObject::PrintHeader(os, 
"AllocationMemento"); + os << " - allocation site: "; if (IsValid()) { - GetAllocationSite()->Print(); + GetAllocationSite()->Print(os); } else { - PrintF(out, "<invalid>\n"); + os << "<invalid>\n"; } } -void Script::ScriptPrint(FILE* out) { - HeapObject::PrintHeader(out, "Script"); - PrintF(out, "\n - source: "); - source()->ShortPrint(out); - PrintF(out, "\n - name: "); - name()->ShortPrint(out); - PrintF(out, "\n - line_offset: "); - line_offset()->ShortPrint(out); - PrintF(out, "\n - column_offset: "); - column_offset()->ShortPrint(out); - PrintF(out, "\n - type: "); - type()->ShortPrint(out); - PrintF(out, "\n - id: "); - id()->ShortPrint(out); - PrintF(out, "\n - context data: "); - context_data()->ShortPrint(out); - PrintF(out, "\n - wrapper: "); - wrapper()->ShortPrint(out); - PrintF(out, "\n - compilation type: %d", compilation_type()); - PrintF(out, "\n - line ends: "); - line_ends()->ShortPrint(out); - PrintF(out, "\n - eval from shared: "); - eval_from_shared()->ShortPrint(out); - PrintF(out, "\n - eval from instructions offset: "); - eval_from_instructions_offset()->ShortPrint(out); - PrintF(out, "\n"); -} - - -void DebugInfo::DebugInfoPrint(FILE* out) { - HeapObject::PrintHeader(out, "DebugInfo"); - PrintF(out, "\n - shared: "); - shared()->ShortPrint(out); - PrintF(out, "\n - original_code: "); - original_code()->ShortPrint(out); - PrintF(out, "\n - code: "); - code()->ShortPrint(out); - PrintF(out, "\n - break_points: "); - break_points()->Print(out); -} - - -void BreakPointInfo::BreakPointInfoPrint(FILE* out) { - HeapObject::PrintHeader(out, "BreakPointInfo"); - PrintF(out, "\n - code_position: %d", code_position()->value()); - PrintF(out, "\n - source_position: %d", source_position()->value()); - PrintF(out, "\n - statement_position: %d", statement_position()->value()); - PrintF(out, "\n - break_point_objects: "); - break_point_objects()->ShortPrint(out); -} - - -void DescriptorArray::PrintDescriptors(FILE* out) { - PrintF(out, "Descriptor array %d\n", number_of_descriptors()); +void Script::ScriptPrint(OStream& os) { // NOLINT + HeapObject::PrintHeader(os, "Script"); + os << "\n - source: " << Brief(source()); + os << "\n - name: " << Brief(name()); + os << "\n - line_offset: " << Brief(line_offset()); + os << "\n - column_offset: " << Brief(column_offset()); + os << "\n - type: " << Brief(type()); + os << "\n - id: " << Brief(id()); + os << "\n - context data: " << Brief(context_data()); + os << "\n - wrapper: " << Brief(wrapper()); + os << "\n - compilation type: " << compilation_type(); + os << "\n - line ends: " << Brief(line_ends()); + os << "\n - eval from shared: " << Brief(eval_from_shared()); + os << "\n - eval from instructions offset: " + << Brief(eval_from_instructions_offset()); + os << "\n"; +} + + +void DebugInfo::DebugInfoPrint(OStream& os) { // NOLINT + HeapObject::PrintHeader(os, "DebugInfo"); + os << "\n - shared: " << Brief(shared()); + os << "\n - original_code: " << Brief(original_code()); + os << "\n - code: " << Brief(code()); + os << "\n - break_points: "; + break_points()->Print(os); +} + + +void BreakPointInfo::BreakPointInfoPrint(OStream& os) { // NOLINT + HeapObject::PrintHeader(os, "BreakPointInfo"); + os << "\n - code_position: " << code_position()->value(); + os << "\n - source_position: " << source_position()->value(); + os << "\n - statement_position: " << statement_position()->value(); + os << "\n - break_point_objects: " << Brief(break_point_objects()); + os << "\n"; +} + + +void DescriptorArray::PrintDescriptors(OStream& os) { // 
NOLINT + os << "Descriptor array " << number_of_descriptors() << "\n"; for (int i = 0; i < number_of_descriptors(); i++) { - PrintF(out, " %d: ", i); Descriptor desc; Get(i, &desc); - desc.Print(out); + os << " " << i << ": " << desc; } - PrintF(out, "\n"); + os << "\n"; } -void TransitionArray::PrintTransitions(FILE* out) { - PrintF(out, "Transition array %d\n", number_of_transitions()); +void TransitionArray::PrintTransitions(OStream& os) { // NOLINT + os << "Transition array " << number_of_transitions() << "\n"; for (int i = 0; i < number_of_transitions(); i++) { - PrintF(out, " %d: ", i); - GetKey(i)->NamePrint(out); - PrintF(out, ": "); + os << " " << i << ": "; + GetKey(i)->NamePrint(os); + os << ": "; switch (GetTargetDetails(i).type()) { case FIELD: { - PrintF(out, " (transition to field)\n"); + os << " (transition to field)\n"; break; } case CONSTANT: - PrintF(out, " (transition to constant)\n"); + os << " (transition to constant)\n"; break; case CALLBACKS: - PrintF(out, " (transition to callback)\n"); + os << " (transition to callback)\n"; break; // Values below are never in the target descriptor array. case NORMAL: @@ -1245,7 +1112,7 @@ void TransitionArray::PrintTransitions(FILE* out) { break; } } - PrintF(out, "\n"); + os << "\n"; } diff --git a/deps/v8/src/objects.cc b/deps/v8/src/objects.cc index ec569d141cc..cfdb9ccb630 100644 --- a/deps/v8/src/objects.cc +++ b/deps/v8/src/objects.cc @@ -2,37 +2,41 @@ // Use of this source code is governed by a BSD-style license that can be // found in the LICENSE file. -#include "v8.h" - -#include "accessors.h" -#include "allocation-site-scopes.h" -#include "api.h" -#include "arguments.h" -#include "bootstrapper.h" -#include "codegen.h" -#include "code-stubs.h" -#include "cpu-profiler.h" -#include "debug.h" -#include "deoptimizer.h" -#include "date.h" -#include "elements.h" -#include "execution.h" -#include "full-codegen.h" -#include "hydrogen.h" -#include "isolate-inl.h" -#include "log.h" -#include "objects-inl.h" -#include "objects-visiting-inl.h" -#include "macro-assembler.h" -#include "mark-compact.h" -#include "safepoint-table.h" -#include "string-search.h" -#include "string-stream.h" -#include "utils.h" +#include "src/v8.h" + +#include "src/accessors.h" +#include "src/allocation-site-scopes.h" +#include "src/api.h" +#include "src/arguments.h" +#include "src/bootstrapper.h" +#include "src/code-stubs.h" +#include "src/codegen.h" +#include "src/cpu-profiler.h" +#include "src/date.h" +#include "src/debug.h" +#include "src/deoptimizer.h" +#include "src/elements.h" +#include "src/execution.h" +#include "src/field-index-inl.h" +#include "src/field-index.h" +#include "src/full-codegen.h" +#include "src/heap/mark-compact.h" +#include "src/heap/objects-visiting-inl.h" +#include "src/hydrogen.h" +#include "src/isolate-inl.h" +#include "src/log.h" +#include "src/lookup.h" +#include "src/macro-assembler.h" +#include "src/objects-inl.h" +#include "src/prototype.h" +#include "src/safepoint-table.h" +#include "src/string-search.h" +#include "src/string-stream.h" +#include "src/utils.h" #ifdef ENABLE_DISASSEMBLER -#include "disasm.h" -#include "disassembler.h" +#include "src/disasm.h" +#include "src/disassembler.h" #endif namespace v8 { @@ -89,8 +93,8 @@ bool Object::BooleanValue() { } -bool Object::IsCallable() { - Object* fun = this; +bool Object::IsCallable() const { + const Object* fun = this; while (fun->IsJSFunctionProxy()) { fun = JSFunctionProxy::cast(fun)->call_trap(); } @@ -120,22 +124,44 @@ void Object::Lookup(Handle<Name> name, LookupResult*
result) { 0xDEAD0000, this, JSReceiver::cast(this)->map(), 0xDEAD0001); } } - ASSERT(holder != NULL); // Cannot handle null or undefined. + DCHECK(holder != NULL); // Cannot handle null or undefined. JSReceiver::cast(holder)->Lookup(name, result); } -MaybeHandle<Object> Object::GetPropertyWithReceiver( - Handle<Object> object, - Handle<Object> receiver, - Handle<Name> name, - PropertyAttributes* attributes) { - LookupResult lookup(name->GetIsolate()); - object->Lookup(name, &lookup); - MaybeHandle<Object> result = - GetProperty(object, receiver, &lookup, name, attributes); - ASSERT(*attributes <= ABSENT); - return result; +MaybeHandle<Object> Object::GetProperty(LookupIterator* it) { + for (; it->IsFound(); it->Next()) { + switch (it->state()) { + case LookupIterator::NOT_FOUND: + UNREACHABLE(); + case LookupIterator::JSPROXY: + return JSProxy::GetPropertyWithHandler(it->GetHolder<JSProxy>(), + it->GetReceiver(), it->name()); + case LookupIterator::INTERCEPTOR: { + MaybeHandle<Object> maybe_result = JSObject::GetPropertyWithInterceptor( + it->GetHolder<JSObject>(), it->GetReceiver(), it->name()); + if (!maybe_result.is_null()) return maybe_result; + if (it->isolate()->has_pending_exception()) return maybe_result; + break; + } + case LookupIterator::ACCESS_CHECK: + if (it->HasAccess(v8::ACCESS_GET)) break; + return JSObject::GetPropertyWithFailedAccessCheck(it); + case LookupIterator::PROPERTY: + if (it->HasProperty()) { + switch (it->property_kind()) { + case LookupIterator::ACCESSOR: + return GetPropertyWithAccessor(it->GetReceiver(), it->name(), + it->GetHolder<JSObject>(), + it->GetAccessors()); + case LookupIterator::DATA: + return it->GetDataValue(); + } + } + break; + } + } + return it->factory()->undefined_value(); } @@ -202,7 +228,7 @@ bool FunctionTemplateInfo::IsTemplateFor(Map* map) { template<typename To> static inline To* CheckedCast(void *from) { uintptr_t temp = reinterpret_cast<uintptr_t>(from); - ASSERT(temp % sizeof(To) == 0); + DCHECK(temp % sizeof(To) == 0); return reinterpret_cast<To*>(temp); } @@ -304,39 +330,39 @@ static Handle<Object> GetDeclaredAccessorProperty( const DeclaredAccessorDescriptorData* data = iterator.Next(); switch (data->type) { case kDescriptorReturnObject: { - ASSERT(iterator.Complete()); + DCHECK(iterator.Complete()); current = *CheckedCast<char*>(current); return handle(*CheckedCast<Object*>(current), isolate); } case kDescriptorPointerDereference: - ASSERT(!iterator.Complete()); + DCHECK(!iterator.Complete()); current = *reinterpret_cast<char**>(current); break; case kDescriptorPointerShift: - ASSERT(!iterator.Complete()); + DCHECK(!iterator.Complete()); current += data->pointer_shift_descriptor.byte_offset; break; case kDescriptorObjectDereference: { - ASSERT(!iterator.Complete()); + DCHECK(!iterator.Complete()); Object* object = CheckedCast<Object>(current); int field = data->object_dereference_descriptor.internal_field; Object* smi = JSObject::cast(object)->GetInternalField(field); - ASSERT(smi->IsSmi()); + DCHECK(smi->IsSmi()); current = reinterpret_cast<char*>(smi); break; } case kDescriptorBitmaskCompare: - ASSERT(iterator.Complete()); + DCHECK(iterator.Complete()); return PerformCompare(data->bitmask_compare_descriptor, current, isolate); case kDescriptorPointerCompare: - ASSERT(iterator.Complete()); + DCHECK(iterator.Complete()); return PerformCompare(data->pointer_compare_descriptor, current, isolate); case kDescriptorPrimitiveValue: - ASSERT(iterator.Complete()); + DCHECK(iterator.Complete()); return 
GetPrimitiveValue(data->primitive_value_descriptor, current, isolate); @@ -349,7 +375,7 @@ static Handle<Object> GetDeclaredAccessorProperty( Handle<FixedArray> JSObject::EnsureWritableFastElements( Handle<JSObject> object) { - ASSERT(object->HasFastSmiOrObjectElements()); + DCHECK(object->HasFastSmiOrObjectElements()); Isolate* isolate = object->GetIsolate(); Handle<FixedArray> elems(FixedArray::cast(object->elements()), isolate); if (elems->map() != isolate->heap()->fixed_cow_array_map()) return elems; @@ -361,16 +387,30 @@ Handle<FixedArray> JSObject::EnsureWritableFastElements( } -MaybeHandle<Object> JSObject::GetPropertyWithCallback(Handle<JSObject> object, - Handle<Object> receiver, - Handle<Object> structure, - Handle<Name> name) { +MaybeHandle<Object> JSProxy::GetPropertyWithHandler(Handle<JSProxy> proxy, + Handle<Object> receiver, + Handle<Name> name) { + Isolate* isolate = proxy->GetIsolate(); + + // TODO(rossberg): adjust once there is a story for symbols vs proxies. + if (name->IsSymbol()) return isolate->factory()->undefined_value(); + + Handle<Object> args[] = { receiver, name }; + return CallTrap( + proxy, "get", isolate->derived_get_trap(), ARRAY_SIZE(args), args); +} + + +MaybeHandle<Object> Object::GetPropertyWithAccessor(Handle<Object> receiver, + Handle<Name> name, + Handle<JSObject> holder, + Handle<Object> structure) { Isolate* isolate = name->GetIsolate(); - ASSERT(!structure->IsForeign()); + DCHECK(!structure->IsForeign()); // api style callbacks. if (structure->IsAccessorInfo()) { - Handle<AccessorInfo> accessor_info = Handle<AccessorInfo>::cast(structure); - if (!accessor_info->IsCompatibleReceiver(*receiver)) { + Handle<AccessorInfo> info = Handle<AccessorInfo>::cast(structure); + if (!info->IsCompatibleReceiver(*receiver)) { Handle<Object> args[2] = { name, receiver }; Handle<Object> error = isolate->factory()->NewTypeError("incompatible_method_receiver", @@ -395,8 +435,8 @@ MaybeHandle<Object> JSObject::GetPropertyWithCallback(Handle<JSObject> object, if (call_fun == NULL) return isolate->factory()->undefined_value(); Handle<String> key = Handle<String>::cast(name); - LOG(isolate, ApiNamedPropertyAccess("load", *object, *name)); - PropertyCallbackArguments args(isolate, data->data(), *receiver, *object); + LOG(isolate, ApiNamedPropertyAccess("load", *holder, *name)); + PropertyCallbackArguments args(isolate, data->data(), *receiver, *holder); v8::Handle<v8::Value> result = args.Call(call_fun, v8::Utils::ToLocal(key)); RETURN_EXCEPTION_IF_SCHEDULED_EXCEPTION(isolate, Object); @@ -415,29 +455,88 @@ MaybeHandle<Object> JSObject::GetPropertyWithCallback(Handle<JSObject> object, if (getter->IsSpecFunction()) { // TODO(rossberg): nicer would be to cast to some JSCallable here... return Object::GetPropertyWithDefinedGetter( - object, receiver, Handle<JSReceiver>::cast(getter)); + receiver, Handle<JSReceiver>::cast(getter)); } // Getter is not a function. 
return isolate->factory()->undefined_value(); } -MaybeHandle<Object> JSProxy::GetPropertyWithHandler(Handle<JSProxy> proxy, - Handle<Object> receiver, - Handle<Name> name) { - Isolate* isolate = proxy->GetIsolate(); +bool AccessorInfo::IsCompatibleReceiverType(Isolate* isolate, + Handle<AccessorInfo> info, + Handle<HeapType> type) { + if (!info->HasExpectedReceiverType()) return true; + Handle<Map> map = IC::TypeToMap(*type, isolate); + if (!map->IsJSObjectMap()) return false; + return FunctionTemplateInfo::cast(info->expected_receiver_type()) + ->IsTemplateFor(*map); +} - // TODO(rossberg): adjust once there is a story for symbols vs proxies. - if (name->IsSymbol()) return isolate->factory()->undefined_value(); - Handle<Object> args[] = { receiver, name }; - return CallTrap( - proxy, "get", isolate->derived_get_trap(), ARRAY_SIZE(args), args); +MaybeHandle<Object> Object::SetPropertyWithAccessor( + Handle<Object> receiver, Handle<Name> name, Handle<Object> value, + Handle<JSObject> holder, Handle<Object> structure, StrictMode strict_mode) { + Isolate* isolate = name->GetIsolate(); + + // We should never get here to initialize a const with the hole + // value since a const declaration would conflict with the setter. + DCHECK(!structure->IsForeign()); + if (structure->IsExecutableAccessorInfo()) { + // Don't call executable accessor setters with non-JSObject receivers. + if (!receiver->IsJSObject()) return value; + // api style callbacks + ExecutableAccessorInfo* info = ExecutableAccessorInfo::cast(*structure); + if (!info->IsCompatibleReceiver(*receiver)) { + Handle<Object> args[2] = { name, receiver }; + Handle<Object> error = + isolate->factory()->NewTypeError("incompatible_method_receiver", + HandleVector(args, + ARRAY_SIZE(args))); + return isolate->Throw<Object>(error); + } + // TODO(rossberg): Support symbols in the API. + if (name->IsSymbol()) return value; + Object* call_obj = info->setter(); + v8::AccessorSetterCallback call_fun = + v8::ToCData<v8::AccessorSetterCallback>(call_obj); + if (call_fun == NULL) return value; + Handle<String> key = Handle<String>::cast(name); + LOG(isolate, ApiNamedPropertyAccess("store", *holder, *name)); + PropertyCallbackArguments args(isolate, info->data(), *receiver, *holder); + args.Call(call_fun, + v8::Utils::ToLocal(key), + v8::Utils::ToLocal(value)); + RETURN_EXCEPTION_IF_SCHEDULED_EXCEPTION(isolate, Object); + return value; + } + + if (structure->IsAccessorPair()) { + Handle<Object> setter(AccessorPair::cast(*structure)->setter(), isolate); + if (setter->IsSpecFunction()) { + // TODO(rossberg): nicer would be to cast to some JSCallable here... + return SetPropertyWithDefinedSetter( + receiver, Handle<JSReceiver>::cast(setter), value); + } else { + if (strict_mode == SLOPPY) return value; + Handle<Object> args[2] = { name, holder }; + Handle<Object> error = + isolate->factory()->NewTypeError("no_setter_in_callback", + HandleVector(args, 2)); + return isolate->Throw<Object>(error); + } + } + + // TODO(dcarney): Handle correctly. 
+ if (structure->IsDeclaredAccessorInfo()) { + return value; + } + + UNREACHABLE(); + return MaybeHandle<Object>(); } MaybeHandle<Object> Object::GetPropertyWithDefinedGetter( - Handle<Object> object, Handle<Object> receiver, Handle<JSReceiver> getter) { Isolate* isolate = getter->GetIsolate(); @@ -453,153 +552,124 @@ MaybeHandle<Object> Object::GetPropertyWithDefinedGetter( } -// Only deal with CALLBACKS and INTERCEPTOR -MaybeHandle<Object> JSObject::GetPropertyWithFailedAccessCheck( - Handle<JSObject> object, +MaybeHandle<Object> Object::SetPropertyWithDefinedSetter( Handle<Object> receiver, - LookupResult* result, - Handle<Name> name, - PropertyAttributes* attributes) { - Isolate* isolate = name->GetIsolate(); - if (result->IsProperty()) { - switch (result->type()) { - case CALLBACKS: { - // Only allow API accessors. - Handle<Object> callback_obj(result->GetCallbackObject(), isolate); - if (callback_obj->IsAccessorInfo()) { - if (!AccessorInfo::cast(*callback_obj)->all_can_read()) break; - *attributes = result->GetAttributes(); - // Fall through to GetPropertyWithCallback. - } else if (callback_obj->IsAccessorPair()) { - if (!AccessorPair::cast(*callback_obj)->all_can_read()) break; - // Fall through to GetPropertyWithCallback. - } else { - break; - } - Handle<JSObject> holder(result->holder(), isolate); - return GetPropertyWithCallback(holder, receiver, callback_obj, name); - } - case NORMAL: - case FIELD: - case CONSTANT: { - // Search ALL_CAN_READ accessors in prototype chain. - LookupResult r(isolate); - result->holder()->LookupRealNamedPropertyInPrototypes(name, &r); - if (r.IsProperty()) { - return GetPropertyWithFailedAccessCheck( - object, receiver, &r, name, attributes); - } - break; - } - case INTERCEPTOR: { - // If the object has an interceptor, try real named properties. - // No access check in GetPropertyAttributeWithInterceptor. - LookupResult r(isolate); - result->holder()->LookupRealNamedProperty(name, &r); - if (r.IsProperty()) { - return GetPropertyWithFailedAccessCheck( - object, receiver, &r, name, attributes); - } - break; - } - default: - UNREACHABLE(); - } + Handle<JSReceiver> setter, + Handle<Object> value) { + Isolate* isolate = setter->GetIsolate(); + + Debug* debug = isolate->debug(); + // Handle stepping into a setter if step into is active. + // TODO(rossberg): should this apply to getters that are function proxies? + if (debug->StepInActive() && setter->IsJSFunction()) { + debug->HandleStepIn( + Handle<JSFunction>::cast(setter), Handle<Object>::null(), 0, false); } - // No accessible property found. - *attributes = ABSENT; - isolate->ReportFailedAccessCheck(object, v8::ACCESS_GET); - RETURN_EXCEPTION_IF_SCHEDULED_EXCEPTION(isolate, Object); - return isolate->factory()->undefined_value(); + Handle<Object> argv[] = { value }; + RETURN_ON_EXCEPTION(isolate, Execution::Call(isolate, setter, receiver, + ARRAY_SIZE(argv), argv, true), + Object); + return value; } -PropertyAttributes JSObject::GetPropertyAttributeWithFailedAccessCheck( - Handle<JSObject> object, - LookupResult* result, - Handle<Name> name, - bool continue_search) { - if (result->IsProperty()) { - switch (result->type()) { - case CALLBACKS: { - // Only allow API accessors. 
- Handle<Object> obj(result->GetCallbackObject(), object->GetIsolate()); - if (obj->IsAccessorInfo()) { - Handle<AccessorInfo> info = Handle<AccessorInfo>::cast(obj); - if (info->all_can_read()) { - return result->GetAttributes(); - } - } else if (obj->IsAccessorPair()) { - Handle<AccessorPair> pair = Handle<AccessorPair>::cast(obj); - if (pair->all_can_read()) { - return result->GetAttributes(); - } - } - break; +static bool FindAllCanReadHolder(LookupIterator* it) { + it->skip_interceptor(); + it->skip_access_check(); + for (; it->IsFound(); it->Next()) { + if (it->state() == LookupIterator::PROPERTY && + it->HasProperty() && + it->property_kind() == LookupIterator::ACCESSOR) { + Handle<Object> accessors = it->GetAccessors(); + if (accessors->IsAccessorInfo()) { + if (AccessorInfo::cast(*accessors)->all_can_read()) return true; } + } + } + return false; +} - case NORMAL: - case FIELD: - case CONSTANT: { - if (!continue_search) break; - // Search ALL_CAN_READ accessors in prototype chain. - LookupResult r(object->GetIsolate()); - result->holder()->LookupRealNamedPropertyInPrototypes(name, &r); - if (r.IsProperty()) { - return GetPropertyAttributeWithFailedAccessCheck( - object, &r, name, continue_search); - } - break; - } - case INTERCEPTOR: { - // If the object has an interceptor, try real named properties. - // No access check in GetPropertyAttributeWithInterceptor. - LookupResult r(object->GetIsolate()); - if (continue_search) { - result->holder()->LookupRealNamedProperty(name, &r); - } else { - result->holder()->LocalLookupRealNamedProperty(name, &r); - } - if (!r.IsFound()) break; - return GetPropertyAttributeWithFailedAccessCheck( - object, &r, name, continue_search); +MaybeHandle<Object> JSObject::GetPropertyWithFailedAccessCheck( + LookupIterator* it) { + Handle<JSObject> checked = it->GetHolder<JSObject>(); + if (FindAllCanReadHolder(it)) { + return GetPropertyWithAccessor(it->GetReceiver(), it->name(), + it->GetHolder<JSObject>(), + it->GetAccessors()); + } + it->isolate()->ReportFailedAccessCheck(checked, v8::ACCESS_GET); + RETURN_EXCEPTION_IF_SCHEDULED_EXCEPTION(it->isolate(), Object); + return it->factory()->undefined_value(); +} + + +Maybe<PropertyAttributes> JSObject::GetPropertyAttributesWithFailedAccessCheck( + LookupIterator* it) { + Handle<JSObject> checked = it->GetHolder<JSObject>(); + if (FindAllCanReadHolder(it)) + return maybe(it->property_details().attributes()); + it->isolate()->ReportFailedAccessCheck(checked, v8::ACCESS_HAS); + RETURN_VALUE_IF_SCHEDULED_EXCEPTION(it->isolate(), + Maybe<PropertyAttributes>()); + return maybe(ABSENT); +} + + +static bool FindAllCanWriteHolder(LookupIterator* it) { + it->skip_interceptor(); + it->skip_access_check(); + for (; it->IsFound(); it->Next()) { + if (it->state() == LookupIterator::PROPERTY && it->HasProperty() && + it->property_kind() == LookupIterator::ACCESSOR) { + Handle<Object> accessors = it->GetAccessors(); + if (accessors->IsAccessorInfo()) { + if (AccessorInfo::cast(*accessors)->all_can_write()) return true; } - - case HANDLER: - case NONEXISTENT: - UNREACHABLE(); } } + return false; +} + + +MaybeHandle<Object> JSObject::SetPropertyWithFailedAccessCheck( + LookupIterator* it, Handle<Object> value, StrictMode strict_mode) { + Handle<JSObject> checked = it->GetHolder<JSObject>(); + if (FindAllCanWriteHolder(it)) { + return SetPropertyWithAccessor(it->GetReceiver(), it->name(), value, + it->GetHolder<JSObject>(), + it->GetAccessors(), strict_mode); + } - object->GetIsolate()->ReportFailedAccessCheck(object, 
v8::ACCESS_HAS); - // TODO(yangguo): Issue 3269, check for scheduled exception missing? - return ABSENT; + it->isolate()->ReportFailedAccessCheck(checked, v8::ACCESS_SET); + RETURN_EXCEPTION_IF_SCHEDULED_EXCEPTION(it->isolate(), Object); + return value; } Object* JSObject::GetNormalizedProperty(const LookupResult* result) { - ASSERT(!HasFastProperties()); + DCHECK(!HasFastProperties()); Object* value = property_dictionary()->ValueAt(result->GetDictionaryEntry()); if (IsGlobalObject()) { value = PropertyCell::cast(value)->value(); } - ASSERT(!value->IsPropertyCell() && !value->IsCell()); + DCHECK(!value->IsPropertyCell() && !value->IsCell()); return value; } Handle<Object> JSObject::GetNormalizedProperty(Handle<JSObject> object, const LookupResult* result) { - ASSERT(!object->HasFastProperties()); + DCHECK(!object->HasFastProperties()); Isolate* isolate = object->GetIsolate(); Handle<Object> value(object->property_dictionary()->ValueAt( result->GetDictionaryEntry()), isolate); if (object->IsGlobalObject()) { - value = Handle<Object>(Handle<PropertyCell>::cast(value)->value(), isolate); + value = handle(Handle<PropertyCell>::cast(value)->value(), isolate); + DCHECK(!value->IsTheHole()); } - ASSERT(!value->IsPropertyCell() && !value->IsCell()); + DCHECK(!value->IsPropertyCell() && !value->IsCell()); return value; } @@ -607,7 +677,7 @@ Handle<Object> JSObject::GetNormalizedProperty(Handle<JSObject> object, void JSObject::SetNormalizedProperty(Handle<JSObject> object, const LookupResult* result, Handle<Object> value) { - ASSERT(!object->HasFastProperties()); + DCHECK(!object->HasFastProperties()); NameDictionary* property_dictionary = object->property_dictionary(); if (object->IsGlobalObject()) { Handle<PropertyCell> cell(PropertyCell::cast( @@ -623,7 +693,7 @@ void JSObject::SetNormalizedProperty(Handle<JSObject> object, Handle<Name> name, Handle<Object> value, PropertyDetails details) { - ASSERT(!object->HasFastProperties()); + DCHECK(!object->HasFastProperties()); Handle<NameDictionary> property_dictionary(object->property_dictionary()); if (!name->IsUniqueName()) { @@ -652,7 +722,7 @@ void JSObject::SetNormalizedProperty(Handle<JSObject> object, property_dictionary->SetNextEnumerationIndex(enumeration_index + 1); } else { enumeration_index = original_details.dictionary_index(); - ASSERT(enumeration_index > 0); + DCHECK(enumeration_index > 0); } details = PropertyDetails( @@ -673,7 +743,7 @@ void JSObject::SetNormalizedProperty(Handle<JSObject> object, Handle<Object> JSObject::DeleteNormalizedProperty(Handle<JSObject> object, Handle<Name> name, DeleteMode mode) { - ASSERT(!object->HasFastProperties()); + DCHECK(!object->HasFastProperties()); Isolate* isolate = object->GetIsolate(); Handle<NameDictionary> dictionary(object->property_dictionary()); int entry = dictionary->FindEntry(name); @@ -688,8 +758,8 @@ Handle<Object> JSObject::DeleteNormalizedProperty(Handle<JSObject> object, // from the DontDelete cell without checking if it contains // the hole value. 
Handle<Map> new_map = Map::CopyDropDescriptors(handle(object->map())); - ASSERT(new_map->is_dictionary_map()); - object->set_map(*new_map); + DCHECK(new_map->is_dictionary_map()); + JSObject::MigrateToMap(object, new_map); } Handle<PropertyCell> cell(PropertyCell::cast(dictionary->ValueAt(entry))); Handle<Object> value = isolate->factory()->the_hole_value(); @@ -725,132 +795,34 @@ bool JSObject::IsDirty() { } -MaybeHandle<Object> Object::GetProperty(Handle<Object> object, - Handle<Object> receiver, - LookupResult* result, - Handle<Name> name, - PropertyAttributes* attributes) { - Isolate* isolate = name->GetIsolate(); - Factory* factory = isolate->factory(); - - // Make sure that the top context does not change when doing - // callbacks or interceptor calls. - AssertNoContextChange ncc(isolate); - - // Traverse the prototype chain from the current object (this) to - // the holder and check for access rights. This avoids traversing the - // objects more than once in case of interceptors, because the - // holder will always be the interceptor holder and the search may - // only continue with a current object just after the interceptor - // holder in the prototype chain. - // Proxy handlers do not use the proxy's prototype, so we can skip this. - if (!result->IsHandler()) { - ASSERT(*object != object->GetPrototype(isolate)); - Handle<Object> last = result->IsProperty() - ? Handle<Object>(result->holder(), isolate) - : Handle<Object>::cast(factory->null_value()); - for (Handle<Object> current = object; - true; - current = Handle<Object>(current->GetPrototype(isolate), isolate)) { - if (current->IsAccessCheckNeeded()) { - // Check if we're allowed to read from the current object. Note - // that even though we may not actually end up loading the named - // property from the current object, we still check that we have - // access to it. - Handle<JSObject> checked = Handle<JSObject>::cast(current); - if (!isolate->MayNamedAccess(checked, name, v8::ACCESS_GET)) { - return JSObject::GetPropertyWithFailedAccessCheck( - checked, receiver, result, name, attributes); - } - } - // Stop traversing the chain once we reach the last object in the - // chain; either the holder of the result or null in case of an - // absent property. - if (current.is_identical_to(last)) break; - } - } - - if (!result->IsProperty()) { - *attributes = ABSENT; - return factory->undefined_value(); - } - *attributes = result->GetAttributes(); - - Handle<Object> value; - switch (result->type()) { - case NORMAL: { - value = JSObject::GetNormalizedProperty( - handle(result->holder(), isolate), result); - break; - } - case FIELD: - value = JSObject::FastPropertyAt(handle(result->holder(), isolate), - result->representation(), - result->GetFieldIndex().field_index()); - break; - case CONSTANT: - return handle(result->GetConstant(), isolate); - case CALLBACKS: - return JSObject::GetPropertyWithCallback( - handle(result->holder(), isolate), - receiver, - handle(result->GetCallbackObject(), isolate), - name); - case HANDLER: - return JSProxy::GetPropertyWithHandler( - handle(result->proxy(), isolate), receiver, name); - case INTERCEPTOR: - return JSObject::GetPropertyWithInterceptor( - handle(result->holder(), isolate), receiver, name, attributes); - case NONEXISTENT: - UNREACHABLE(); - break; - } - ASSERT(!value->IsTheHole() || result->IsReadOnly()); - return value->IsTheHole() ? 
Handle<Object>::cast(factory->undefined_value()) - : value; -} - - MaybeHandle<Object> Object::GetElementWithReceiver(Isolate* isolate, Handle<Object> object, Handle<Object> receiver, uint32_t index) { - Handle<Object> holder; + if (object->IsUndefined()) { + // TODO(verwaest): Why is this check here? + UNREACHABLE(); + return isolate->factory()->undefined_value(); + } // Iterate up the prototype chain until an element is found or the null // prototype is encountered. - for (holder = object; - !holder->IsNull(); - holder = Handle<Object>(holder->GetPrototype(isolate), isolate)) { - if (!holder->IsJSObject()) { - Context* native_context = isolate->context()->native_context(); - if (holder->IsNumber()) { - holder = Handle<Object>( - native_context->number_function()->instance_prototype(), isolate); - } else if (holder->IsString()) { - holder = Handle<Object>( - native_context->string_function()->instance_prototype(), isolate); - } else if (holder->IsSymbol()) { - holder = Handle<Object>( - native_context->symbol_function()->instance_prototype(), isolate); - } else if (holder->IsBoolean()) { - holder = Handle<Object>( - native_context->boolean_function()->instance_prototype(), isolate); - } else if (holder->IsJSProxy()) { - return JSProxy::GetElementWithHandler( - Handle<JSProxy>::cast(holder), receiver, index); - } else { - // Undefined and null have no indexed properties. - ASSERT(holder->IsUndefined() || holder->IsNull()); - return isolate->factory()->undefined_value(); - } + for (PrototypeIterator iter(isolate, object, + object->IsJSProxy() || object->IsJSObject() + ? PrototypeIterator::START_AT_RECEIVER + : PrototypeIterator::START_AT_PROTOTYPE); + !iter.IsAtEnd(); iter.Advance()) { + if (PrototypeIterator::GetCurrent(iter)->IsJSProxy()) { + return JSProxy::GetElementWithHandler( + Handle<JSProxy>::cast(PrototypeIterator::GetCurrent(iter)), receiver, + index); } // Inline the case for JSObjects. Doing so significantly improves the // performance of fetching elements where checking the prototype chain is // necessary. - Handle<JSObject> js_object = Handle<JSObject>::cast(holder); + Handle<JSObject> js_object = + Handle<JSObject>::cast(PrototypeIterator::GetCurrent(iter)); // Check access rights if needed. if (js_object->IsAccessCheckNeeded()) { @@ -879,11 +851,11 @@ MaybeHandle<Object> Object::GetElementWithReceiver(Isolate* isolate, } -Object* Object::GetPrototype(Isolate* isolate) { +Map* Object::GetRootMap(Isolate* isolate) { DisallowHeapAllocation no_alloc; if (IsSmi()) { Context* context = isolate->context()->native_context(); - return context->number_function()->instance_prototype(); + return context->number_function()->initial_map(); } HeapObject* heap_object = HeapObject::cast(this); @@ -891,36 +863,23 @@ Object* Object::GetPrototype(Isolate* isolate) { // The object is either a number, a string, a boolean, // a real JS object, or a Harmony proxy. 
if (heap_object->IsJSReceiver()) { - return heap_object->map()->prototype(); + return heap_object->map(); } Context* context = isolate->context()->native_context(); if (heap_object->IsHeapNumber()) { - return context->number_function()->instance_prototype(); + return context->number_function()->initial_map(); } if (heap_object->IsString()) { - return context->string_function()->instance_prototype(); + return context->string_function()->initial_map(); } if (heap_object->IsSymbol()) { - return context->symbol_function()->instance_prototype(); + return context->symbol_function()->initial_map(); } if (heap_object->IsBoolean()) { - return context->boolean_function()->instance_prototype(); - } else { - return isolate->heap()->null_value(); + return context->boolean_function()->initial_map(); } -} - - -Handle<Object> Object::GetPrototype(Isolate* isolate, - Handle<Object> object) { - return handle(object->GetPrototype(isolate), isolate); -} - - -Map* Object::GetMarkerMap(Isolate* isolate) { - if (IsSmi()) return isolate->heap()->heap_number_map(); - return HeapObject::cast(this)->map(); + return isolate->heap()->null_value()->map(); } @@ -940,18 +899,16 @@ Object* Object::GetHash() { return Smi::FromInt(hash); } - ASSERT(IsJSReceiver()); + DCHECK(IsJSReceiver()); return JSReceiver::cast(this)->GetIdentityHash(); } -Handle<Object> Object::GetOrCreateHash(Handle<Object> object, - Isolate* isolate) { +Handle<Smi> Object::GetOrCreateHash(Isolate* isolate, Handle<Object> object) { Handle<Object> hash(object->GetHash(), isolate); - if (hash->IsSmi()) - return hash; + if (hash->IsSmi()) return Handle<Smi>::cast(hash); - ASSERT(object->IsJSReceiver()); + DCHECK(object->IsJSReceiver()); return JSReceiver::GetOrCreateIdentityHash(Handle<JSReceiver>::cast(object)); } @@ -977,30 +934,52 @@ bool Object::SameValue(Object* other) { } +bool Object::SameValueZero(Object* other) { + if (other == this) return true; + + // The object is either a number, a name, an odd-ball, + // a real JS object, or a Harmony proxy. + if (IsNumber() && other->IsNumber()) { + double this_value = Number(); + double other_value = other->Number(); + // +0 == -0 is true + return this_value == other_value + || (std::isnan(this_value) && std::isnan(other_value)); + } + if (IsString() && other->IsString()) { + return String::cast(this)->Equals(String::cast(other)); + } + return false; +} + + void Object::ShortPrint(FILE* out) { - HeapStringAllocator allocator; - StringStream accumulator(&allocator); - ShortPrint(&accumulator); - accumulator.OutputToFile(out); + OFStream os(out); + os << Brief(this); } void Object::ShortPrint(StringStream* accumulator) { - if (IsSmi()) { - Smi::cast(this)->SmiPrint(accumulator); - } else { - HeapObject::cast(this)->HeapObjectShortPrint(accumulator); - } + OStringStream os; + os << Brief(this); + accumulator->Add(os.c_str()); } -void Smi::SmiPrint(FILE* out) { - PrintF(out, "%d", value()); +OStream& operator<<(OStream& os, const Brief& v) { + if (v.value->IsSmi()) { + Smi::cast(v.value)->SmiPrint(os); + } else { + // TODO(svenpanne) Const-correct HeapObjectShortPrint! 
+ HeapObject* obj = const_cast<HeapObject*>(HeapObject::cast(v.value)); + obj->HeapObjectShortPrint(os); + } + return os; } -void Smi::SmiPrint(StringStream* accumulator) { - accumulator->Add("%d", value()); +void Smi::SmiPrint(OStream& os) const { // NOLINT + os << value(); } @@ -1030,8 +1009,8 @@ static bool AnWord(String* str) { Handle<String> String::SlowFlatten(Handle<ConsString> cons, PretenureFlag pretenure) { - ASSERT(AllowHeapAllocation::IsAllowed()); - ASSERT(cons->second()->length() != 0); + DCHECK(AllowHeapAllocation::IsAllowed()); + DCHECK(cons->second()->length() != 0); Isolate* isolate = cons->GetIsolate(); int length = cons->length(); PretenureFlag tenure = isolate->heap()->InNewSpace(*cons) ? pretenure @@ -1052,7 +1031,7 @@ Handle<String> String::SlowFlatten(Handle<ConsString> cons, } cons->set_first(*result); cons->set_second(isolate->heap()->empty_string()); - ASSERT(result->IsFlat()); + DCHECK(result->IsFlat()); return result; } @@ -1061,23 +1040,22 @@ Handle<String> String::SlowFlatten(Handle<ConsString> cons, bool String::MakeExternal(v8::String::ExternalStringResource* resource) { // Externalizing twice leaks the external resource, so it's // prohibited by the API. - ASSERT(!this->IsExternalString()); -#ifdef ENABLE_SLOW_ASSERTS + DCHECK(!this->IsExternalString()); +#ifdef ENABLE_SLOW_DCHECKS if (FLAG_enable_slow_asserts) { // Assert that the resource and the string are equivalent. - ASSERT(static_cast<size_t>(this->length()) == resource->length()); + DCHECK(static_cast<size_t>(this->length()) == resource->length()); ScopedVector<uc16> smart_chars(this->length()); String::WriteToFlat(this, smart_chars.start(), 0, this->length()); - ASSERT(memcmp(smart_chars.start(), + DCHECK(memcmp(smart_chars.start(), resource->data(), resource->length() * sizeof(smart_chars[0])) == 0); } #endif // DEBUG - Heap* heap = GetHeap(); int size = this->Size(); // Byte size of the original string. - if (size < ExternalString::kShortSize) { - return false; - } + // Abort if size does not allow in-place conversion. + if (size < ExternalString::kShortSize) return false; + Heap* heap = GetHeap(); bool is_ascii = this->IsOneByteRepresentation(); bool is_internalized = this->IsInternalizedString(); @@ -1130,27 +1108,29 @@ bool String::MakeExternal(v8::String::ExternalStringResource* resource) { bool String::MakeExternal(v8::String::ExternalAsciiStringResource* resource) { -#ifdef ENABLE_SLOW_ASSERTS + // Externalizing twice leaks the external resource, so it's + // prohibited by the API. + DCHECK(!this->IsExternalString()); +#ifdef ENABLE_SLOW_DCHECKS if (FLAG_enable_slow_asserts) { // Assert that the resource and the string are equivalent. - ASSERT(static_cast<size_t>(this->length()) == resource->length()); + DCHECK(static_cast<size_t>(this->length()) == resource->length()); if (this->IsTwoByteRepresentation()) { ScopedVector<uint16_t> smart_chars(this->length()); String::WriteToFlat(this, smart_chars.start(), 0, this->length()); - ASSERT(String::IsOneByte(smart_chars.start(), this->length())); + DCHECK(String::IsOneByte(smart_chars.start(), this->length())); } ScopedVector<char> smart_chars(this->length()); String::WriteToFlat(this, smart_chars.start(), 0, this->length()); - ASSERT(memcmp(smart_chars.start(), + DCHECK(memcmp(smart_chars.start(), resource->data(), resource->length() * sizeof(smart_chars[0])) == 0); } #endif // DEBUG - Heap* heap = GetHeap(); int size = this->Size(); // Byte size of the original string. 
- if (size < ExternalString::kShortSize) { - return false; - } + // Abort if size does not allow in-place conversion. + if (size < ExternalString::kShortSize) return false; + Heap* heap = GetHeap(); bool is_internalized = this->IsInternalizedString(); // Morph the string to an external string by replacing the map and @@ -1256,6 +1236,16 @@ void String::StringShortPrint(StringStream* accumulator) { } +void String::PrintUC16(OStream& os, int start, int end) { // NOLINT + if (end < 0) end = length(); + ConsStringIteratorOp op; + StringCharacterStream stream(this, &op, start); + for (int i = start; i < end && stream.HasMore(); i++) { + os << AsUC16(stream.GetNext()); + } +} + + void JSObject::JSObjectShortPrint(StringStream* accumulator) { switch (map()->instance_type()) { case JS_ARRAY_TYPE: { @@ -1359,11 +1349,9 @@ void JSObject::PrintElementsTransition( ElementsKind from_kind, Handle<FixedArrayBase> from_elements, ElementsKind to_kind, Handle<FixedArrayBase> to_elements) { if (from_kind != to_kind) { - PrintF(file, "elements transition ["); - PrintElementsKind(file, from_kind); - PrintF(file, " -> "); - PrintElementsKind(file, to_kind); - PrintF(file, "] in "); + OFStream os(file); + os << "elements transition [" << ElementsKindToString(from_kind) << " -> " + << ElementsKindToString(to_kind) << "] in "; JavaScriptFrame::PrintTop(object->GetIsolate(), file, false, true); PrintF(file, " for "); object->ShortPrint(file); @@ -1386,37 +1374,35 @@ void Map::PrintGeneralization(FILE* file, Representation new_representation, HeapType* old_field_type, HeapType* new_field_type) { - PrintF(file, "[generalizing "); + OFStream os(file); + os << "[generalizing "; constructor_name()->PrintOn(file); - PrintF(file, "] "); + os << "] "; Name* name = instance_descriptors()->GetKey(modify_index); if (name->IsString()) { String::cast(name)->PrintOn(file); } else { - PrintF(file, "{symbol %p}", static_cast<void*>(name)); + os << "{symbol " << static_cast<void*>(name) << "}"; } - PrintF(file, ":"); + os << ":"; if (constant_to_field) { - PrintF(file, "c"); + os << "c"; } else { - PrintF(file, "%s", old_representation.Mnemonic()); - PrintF(file, "{"); - old_field_type->TypePrint(file, HeapType::SEMANTIC_DIM); - PrintF(file, "}"); - } - PrintF(file, "->%s", new_representation.Mnemonic()); - PrintF(file, "{"); - new_field_type->TypePrint(file, HeapType::SEMANTIC_DIM); - PrintF(file, "}"); - PrintF(file, " ("); + os << old_representation.Mnemonic() << "{"; + old_field_type->PrintTo(os, HeapType::SEMANTIC_DIM); + os << "}"; + } + os << "->" << new_representation.Mnemonic() << "{"; + new_field_type->PrintTo(os, HeapType::SEMANTIC_DIM); + os << "} ("; if (strlen(reason) > 0) { - PrintF(file, "%s", reason); + os << reason; } else { - PrintF(file, "+%i maps", descriptors - split); + os << "+" << (descriptors - split) << " maps"; } - PrintF(file, ") ["); + os << ") ["; JavaScriptFrame::PrintTop(GetIsolate(), file, false, true); - PrintF(file, "]\n"); + os << "]\n"; } @@ -1449,53 +1435,59 @@ void JSObject::PrintInstanceMigration(FILE* file, } -void HeapObject::HeapObjectShortPrint(StringStream* accumulator) { +void HeapObject::HeapObjectShortPrint(OStream& os) { // NOLINT Heap* heap = GetHeap(); if (!heap->Contains(this)) { - accumulator->Add("!!!INVALID POINTER!!!"); + os << "!!!INVALID POINTER!!!"; return; } if (!heap->Contains(map())) { - accumulator->Add("!!!INVALID MAP!!!"); + os << "!!!INVALID MAP!!!"; return; } - accumulator->Add("%p ", this); + os << this << " "; if (IsString()) { - 
String::cast(this)->StringShortPrint(accumulator); + HeapStringAllocator allocator; + StringStream accumulator(&allocator); + String::cast(this)->StringShortPrint(&accumulator); + os << accumulator.ToCString().get(); return; } if (IsJSObject()) { - JSObject::cast(this)->JSObjectShortPrint(accumulator); + HeapStringAllocator allocator; + StringStream accumulator(&allocator); + JSObject::cast(this)->JSObjectShortPrint(&accumulator); + os << accumulator.ToCString().get(); return; } switch (map()->instance_type()) { case MAP_TYPE: - accumulator->Add("<Map(elements=%u)>", Map::cast(this)->elements_kind()); + os << "<Map(elements=" << Map::cast(this)->elements_kind() << ")>"; break; case FIXED_ARRAY_TYPE: - accumulator->Add("<FixedArray[%u]>", FixedArray::cast(this)->length()); + os << "<FixedArray[" << FixedArray::cast(this)->length() << "]>"; break; case FIXED_DOUBLE_ARRAY_TYPE: - accumulator->Add("<FixedDoubleArray[%u]>", - FixedDoubleArray::cast(this)->length()); + os << "<FixedDoubleArray[" << FixedDoubleArray::cast(this)->length() + << "]>"; break; case BYTE_ARRAY_TYPE: - accumulator->Add("<ByteArray[%u]>", ByteArray::cast(this)->length()); + os << "<ByteArray[" << ByteArray::cast(this)->length() << "]>"; break; case FREE_SPACE_TYPE: - accumulator->Add("<FreeSpace[%u]>", FreeSpace::cast(this)->Size()); - break; -#define TYPED_ARRAY_SHORT_PRINT(Type, type, TYPE, ctype, size) \ - case EXTERNAL_##TYPE##_ARRAY_TYPE: \ - accumulator->Add("<External" #Type "Array[%u]>", \ - External##Type##Array::cast(this)->length()); \ - break; \ - case FIXED_##TYPE##_ARRAY_TYPE: \ - accumulator->Add("<Fixed" #Type "Array[%u]>", \ - Fixed##Type##Array::cast(this)->length()); \ + os << "<FreeSpace[" << FreeSpace::cast(this)->Size() << "]>"; break; +#define TYPED_ARRAY_SHORT_PRINT(Type, type, TYPE, ctype, size) \ + case EXTERNAL_##TYPE##_ARRAY_TYPE: \ + os << "<External" #Type "Array[" \ + << External##Type##Array::cast(this)->length() << "]>"; \ + break; \ + case FIXED_##TYPE##_ARRAY_TYPE: \ + os << "<Fixed" #Type "Array[" << Fixed##Type##Array::cast(this)->length() \ + << "]>"; \ + break; TYPED_ARRAYS(TYPED_ARRAY_SHORT_PRINT) #undef TYPED_ARRAY_SHORT_PRINT @@ -1505,75 +1497,94 @@ void HeapObject::HeapObjectShortPrint(StringStream* accumulator) { SmartArrayPointer<char> debug_name = shared->DebugName()->ToCString(); if (debug_name[0] != 0) { - accumulator->Add("<SharedFunctionInfo %s>", debug_name.get()); + os << "<SharedFunctionInfo " << debug_name.get() << ">"; } else { - accumulator->Add("<SharedFunctionInfo>"); + os << "<SharedFunctionInfo>"; } break; } case JS_MESSAGE_OBJECT_TYPE: - accumulator->Add("<JSMessageObject>"); + os << "<JSMessageObject>"; break; #define MAKE_STRUCT_CASE(NAME, Name, name) \ case NAME##_TYPE: \ - accumulator->Put('<'); \ - accumulator->Add(#Name); \ - accumulator->Put('>'); \ + os << "<" #Name ">"; \ break; STRUCT_LIST(MAKE_STRUCT_CASE) #undef MAKE_STRUCT_CASE - case CODE_TYPE: - accumulator->Add("<Code>"); + case CODE_TYPE: { + Code* code = Code::cast(this); + os << "<Code: " << Code::Kind2String(code->kind()) << ">"; break; + } case ODDBALL_TYPE: { - if (IsUndefined()) - accumulator->Add("<undefined>"); - else if (IsTheHole()) - accumulator->Add("<the hole>"); - else if (IsNull()) - accumulator->Add("<null>"); - else if (IsTrue()) - accumulator->Add("<true>"); - else if (IsFalse()) - accumulator->Add("<false>"); - else - accumulator->Add("<Odd Oddball>"); + if (IsUndefined()) { + os << "<undefined>"; + } else if (IsTheHole()) { + os << "<the hole>"; + } else if (IsNull()) { + os 
<< "<null>"; + } else if (IsTrue()) { + os << "<true>"; + } else if (IsFalse()) { + os << "<false>"; + } else { + os << "<Odd Oddball>"; + } break; } case SYMBOL_TYPE: { Symbol* symbol = Symbol::cast(this); - accumulator->Add("<Symbol: %d", symbol->Hash()); + os << "<Symbol: " << symbol->Hash(); if (!symbol->name()->IsUndefined()) { - accumulator->Add(" "); - String::cast(symbol->name())->StringShortPrint(accumulator); + os << " "; + HeapStringAllocator allocator; + StringStream accumulator(&allocator); + String::cast(symbol->name())->StringShortPrint(&accumulator); + os << accumulator.ToCString().get(); } - accumulator->Add(">"); + os << ">"; break; } - case HEAP_NUMBER_TYPE: - accumulator->Add("<Number: "); - HeapNumber::cast(this)->HeapNumberPrint(accumulator); - accumulator->Put('>'); + case HEAP_NUMBER_TYPE: { + os << "<Number: "; + HeapNumber::cast(this)->HeapNumberPrint(os); + os << ">"; + break; + } + case MUTABLE_HEAP_NUMBER_TYPE: { + os << "<MutableNumber: "; + HeapNumber::cast(this)->HeapNumberPrint(os); + os << '>'; break; + } case JS_PROXY_TYPE: - accumulator->Add("<JSProxy>"); + os << "<JSProxy>"; break; case JS_FUNCTION_PROXY_TYPE: - accumulator->Add("<JSFunctionProxy>"); + os << "<JSFunctionProxy>"; break; case FOREIGN_TYPE: - accumulator->Add("<Foreign>"); + os << "<Foreign>"; break; - case CELL_TYPE: - accumulator->Add("Cell for "); - Cell::cast(this)->value()->ShortPrint(accumulator); + case CELL_TYPE: { + os << "Cell for "; + HeapStringAllocator allocator; + StringStream accumulator(&allocator); + Cell::cast(this)->value()->ShortPrint(&accumulator); + os << accumulator.ToCString().get(); break; - case PROPERTY_CELL_TYPE: - accumulator->Add("PropertyCell for "); - PropertyCell::cast(this)->value()->ShortPrint(accumulator); + } + case PROPERTY_CELL_TYPE: { + os << "PropertyCell for "; + HeapStringAllocator allocator; + StringStream accumulator(&allocator); + PropertyCell::cast(this)->value()->ShortPrint(&accumulator); + os << accumulator.ToCString().get(); break; + } default: - accumulator->Add("<Other heap object (%d)>", map()->instance_type()); + os << "<Other heap object (" << map()->instance_type() << ")>"; break; } } @@ -1680,6 +1691,7 @@ void HeapObject::IterateBody(InstanceType type, int object_size, break; case HEAP_NUMBER_TYPE: + case MUTABLE_HEAP_NUMBER_TYPE: case FILLER_TYPE: case BYTE_ARRAY_TYPE: case FREE_SPACE_TYPE: @@ -1716,45 +1728,17 @@ void HeapObject::IterateBody(InstanceType type, int object_size, bool HeapNumber::HeapNumberBooleanValue() { - // NaN, +0, and -0 should return the false object -#if __BYTE_ORDER == __LITTLE_ENDIAN - union IeeeDoubleLittleEndianArchType u; -#elif __BYTE_ORDER == __BIG_ENDIAN - union IeeeDoubleBigEndianArchType u; -#endif - u.d = value(); - if (u.bits.exp == 2047) { - // Detect NaN for IEEE double precision floating point. - if ((u.bits.man_low | u.bits.man_high) != 0) return false; - } - if (u.bits.exp == 0) { - // Detect +0, and -0 for IEEE double precision floating point. - if ((u.bits.man_low | u.bits.man_high) == 0) return false; - } - return true; -} - - -void HeapNumber::HeapNumberPrint(FILE* out) { - PrintF(out, "%.16g", Number()); + return DoubleToBoolean(value()); } -void HeapNumber::HeapNumberPrint(StringStream* accumulator) { - // The Windows version of vsnprintf can allocate when printing a %g string - // into a buffer that may not be big enough. 
We don't want random memory - // allocation when producing post-crash stack traces, so we print into a - // buffer that is plenty big enough for any floating point number, then - // print that using vsnprintf (which may truncate but never allocate if - // there is no more space in the buffer). - EmbeddedVector<char, 100> buffer; - OS::SNPrintF(buffer, "%.16g", Number()); - accumulator->Add("%s", buffer.start()); +void HeapNumber::HeapNumberPrint(OStream& os) { // NOLINT + os << value(); } String* JSReceiver::class_name() { - if (IsJSFunction() && IsJSFunctionProxy()) { + if (IsJSFunction() || IsJSFunctionProxy()) { return GetHeap()->function_class_string(); } if (map()->constructor()->IsJSFunction()) { @@ -1793,7 +1777,7 @@ MaybeHandle<Map> Map::CopyWithField(Handle<Map> map, PropertyAttributes attributes, Representation representation, TransitionFlag flag) { - ASSERT(DescriptorArray::kNotFound == + DCHECK(DescriptorArray::kNotFound == map->instance_descriptors()->Search( *name, map->NumberOfOwnDescriptors())); @@ -1802,11 +1786,7 @@ MaybeHandle<Map> Map::CopyWithField(Handle<Map> map, return MaybeHandle<Map>(); } - // Normalize the object if the name is an actual name (not the - // hidden strings) and is not a real identifier. - // Normalize the object if it will have too many fast properties. Isolate* isolate = map->GetIsolate(); - if (!name->IsCacheable(isolate)) return MaybeHandle<Map>(); // Compute the new index for new field. int index = map->NextFreePropertyIndex(); @@ -1848,17 +1828,16 @@ void JSObject::AddFastProperty(Handle<JSObject> object, Handle<Object> value, PropertyAttributes attributes, StoreFromKeyed store_mode, - ValueType value_type, TransitionFlag flag) { - ASSERT(!object->IsJSGlobalProxy()); + DCHECK(!object->IsJSGlobalProxy()); MaybeHandle<Map> maybe_map; if (value->IsJSFunction()) { maybe_map = Map::CopyWithConstant( handle(object->map()), name, value, attributes, flag); - } else if (!object->TooManyFastProperties(store_mode)) { + } else if (!object->map()->TooManyFastProperties(store_mode)) { Isolate* isolate = object->GetIsolate(); - Representation representation = value->OptimalRepresentation(value_type); + Representation representation = value->OptimalRepresentation(); maybe_map = Map::CopyWithField( handle(object->map(), isolate), name, value->OptimalType(isolate, representation), @@ -1879,7 +1858,7 @@ void JSObject::AddSlowProperty(Handle<JSObject> object, Handle<Name> name, Handle<Object> value, PropertyAttributes attributes) { - ASSERT(!object->HasFastProperties()); + DCHECK(!object->HasFastProperties()); Isolate* isolate = object->GetIsolate(); Handle<NameDictionary> dict(object->property_dictionary()); if (object->IsGlobalObject()) { @@ -1907,18 +1886,11 @@ void JSObject::AddSlowProperty(Handle<JSObject> object, } -MaybeHandle<Object> JSObject::AddProperty( - Handle<JSObject> object, - Handle<Name> name, - Handle<Object> value, - PropertyAttributes attributes, - StrictMode strict_mode, - JSReceiver::StoreFromKeyed store_mode, - ExtensibilityCheck extensibility_check, - ValueType value_type, - StoreMode mode, - TransitionFlag transition_flag) { - ASSERT(!object->IsJSGlobalProxy()); +MaybeHandle<Object> JSObject::AddPropertyInternal( + Handle<JSObject> object, Handle<Name> name, Handle<Object> value, + PropertyAttributes attributes, JSReceiver::StoreFromKeyed store_mode, + ExtensibilityCheck extensibility_check, TransitionFlag transition_flag) { + DCHECK(!object->IsJSGlobalProxy()); Isolate* isolate = object->GetIsolate(); if (!name->IsUniqueName()) { @@ 
-1928,19 +1900,15 @@ MaybeHandle<Object> JSObject::AddProperty( if (extensibility_check == PERFORM_EXTENSIBILITY_CHECK && !object->map()->is_extensible()) { - if (strict_mode == SLOPPY) { - return value; - } else { - Handle<Object> args[1] = { name }; - Handle<Object> error = isolate->factory()->NewTypeError( - "object_not_extensible", HandleVector(args, ARRAY_SIZE(args))); - return isolate->Throw<Object>(error); - } + Handle<Object> args[1] = {name}; + Handle<Object> error = isolate->factory()->NewTypeError( + "object_not_extensible", HandleVector(args, ARRAY_SIZE(args))); + return isolate->Throw<Object>(error); } if (object->HasFastProperties()) { AddFastProperty(object, name, value, attributes, store_mode, - value_type, transition_flag); + transition_flag); } if (!object->HasFastProperties()) { @@ -1976,8 +1944,8 @@ void JSObject::EnqueueChangeRecord(Handle<JSObject> object, const char* type_str, Handle<Name> name, Handle<Object> old_value) { - ASSERT(!object->IsJSGlobalProxy()); - ASSERT(!object->IsJSGlobalObject()); + DCHECK(!object->IsJSGlobalProxy()); + DCHECK(!object->IsJSGlobalObject()); Isolate* isolate = object->GetIsolate(); HandleScope scope(isolate); Handle<String> type = isolate->factory()->InternalizeUtf8String(type_str); @@ -1991,38 +1959,6 @@ void JSObject::EnqueueChangeRecord(Handle<JSObject> object, } -MaybeHandle<Object> JSObject::SetPropertyPostInterceptor( - Handle<JSObject> object, - Handle<Name> name, - Handle<Object> value, - PropertyAttributes attributes, - StrictMode strict_mode) { - // Check local property, ignore interceptor. - Isolate* isolate = object->GetIsolate(); - LookupResult result(isolate); - object->LocalLookupRealNamedProperty(name, &result); - if (!result.IsFound()) { - object->map()->LookupTransition(*object, *name, &result); - } - if (result.IsFound()) { - // An existing property or a map transition was found. Use set property to - // handle all these cases. - return SetPropertyForResult(object, &result, name, value, attributes, - strict_mode, MAY_BE_STORE_FROM_KEYED); - } - bool done = false; - Handle<Object> result_object; - ASSIGN_RETURN_ON_EXCEPTION( - isolate, result_object, - SetPropertyViaPrototypes( - object, name, value, attributes, strict_mode, &done), - Object); - if (done) return result_object; - // Add a new real property. - return AddProperty(object, name, value, attributes, strict_mode); -} - - static void ReplaceSlowProperty(Handle<JSObject> object, Handle<Name> name, Handle<Object> value, @@ -2056,70 +1992,21 @@ const char* Representation::Mnemonic() const { } -static void ZapEndOfFixedArray(Address new_end, int to_trim) { - // If we are doing a big trim in old space then we zap the space. - Object** zap = reinterpret_cast<Object**>(new_end); - zap++; // Header of filler must be at least one word so skip that. - for (int i = 1; i < to_trim; i++) { - *zap++ = Smi::FromInt(0); - } -} - - -template<Heap::InvocationMode mode> -static void RightTrimFixedArray(Heap* heap, FixedArray* elms, int to_trim) { - ASSERT(elms->map() != heap->fixed_cow_array_map()); - // For now this trick is only applied to fixed arrays in new and paged space. 
- ASSERT(!heap->lo_space()->Contains(elms)); - - const int len = elms->length(); - - ASSERT(to_trim < len); - - Address new_end = elms->address() + FixedArray::SizeFor(len - to_trim); - - if (mode != Heap::FROM_GC || Heap::ShouldZapGarbage()) { - ZapEndOfFixedArray(new_end, to_trim); - } - - int size_delta = to_trim * kPointerSize; - - // Technically in new space this write might be omitted (except for - // debug mode which iterates through the heap), but to play safer - // we still do it. - heap->CreateFillerObjectAt(new_end, size_delta); - - // We are storing the new length using release store after creating a filler - // for the left-over space to avoid races with the sweeper thread. - elms->synchronized_set_length(len - to_trim); - - heap->AdjustLiveBytes(elms->address(), -size_delta, mode); - - // The array may not be moved during GC, - // and size has to be adjusted nevertheless. - HeapProfiler* profiler = heap->isolate()->heap_profiler(); - if (profiler->is_tracking_allocations()) { - profiler->UpdateObjectSizeEvent(elms->address(), elms->Size()); - } -} - - -bool Map::InstancesNeedRewriting(Map* target, - int target_number_of_fields, - int target_inobject, - int target_unused) { +bool Map::InstancesNeedRewriting(Map* target, int target_number_of_fields, + int target_inobject, int target_unused, + int* old_number_of_fields) { // If fields were added (or removed), rewrite the instance. - int number_of_fields = NumberOfFields(); - ASSERT(target_number_of_fields >= number_of_fields); - if (target_number_of_fields != number_of_fields) return true; + *old_number_of_fields = NumberOfFields(); + DCHECK(target_number_of_fields >= *old_number_of_fields); + if (target_number_of_fields != *old_number_of_fields) return true; // If smi descriptors were replaced by double descriptors, rewrite. DescriptorArray* old_desc = instance_descriptors(); DescriptorArray* new_desc = target->instance_descriptors(); int limit = NumberOfOwnDescriptors(); for (int i = 0; i < limit; i++) { - if (new_desc->GetDetails(i).representation().IsDouble() && - !old_desc->GetDetails(i).representation().IsDouble()) { + if (new_desc->GetDetails(i).representation().IsDouble() != + old_desc->GetDetails(i).representation().IsDouble()) { return true; } } @@ -2130,9 +2017,9 @@ bool Map::InstancesNeedRewriting(Map* target, // In-object slack tracking may have reduced the object size of the new map. // In that case, succeed if all existing fields were inobject, and they still // fit within the new inobject size. - ASSERT(target_inobject < inobject_properties()); + DCHECK(target_inobject < inobject_properties()); if (target_number_of_fields <= target_inobject) { - ASSERT(target_number_of_fields + target_unused == target_inobject); + DCHECK(target_number_of_fields + target_unused == target_inobject); return false; } // Otherwise, properties will need to be moved to the backing store. 
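The InstancesNeedRewriting hunk above widens the double-representation check from one direction (a tagged field becoming double) to a mismatch in either direction: with the new MUTABLE_HEAP_NUMBER_TYPE, unboxing a double field back to tagged storage also changes the instance layout. A minimal standalone sketch of that check, where FieldDetails and is_double are invented for illustration rather than taken from V8:

#include <cstddef>
#include <vector>

// Sketch: an instance must be rewritten whenever a field flips between
// boxed-double and tagged storage, in either direction.
struct FieldDetails { bool is_double; };

static bool InstancesNeedRewritingSketch(
    const std::vector<FieldDetails>& old_desc,
    const std::vector<FieldDetails>& new_desc) {
  if (old_desc.size() != new_desc.size()) return true;  // field count changed
  for (std::size_t i = 0; i < new_desc.size(); ++i) {
    // The removed code returned true only for tagged -> double; the new
    // inequality also catches double -> tagged, which equally changes the
    // storage shape of the field.
    if (old_desc[i].is_double != new_desc[i].is_double) return true;
  }
  return false;
}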
@@ -2140,19 +2027,43 @@ bool Map::InstancesNeedRewriting(Map* target, } -Handle<TransitionArray> Map::SetElementsTransitionMap( - Handle<Map> map, Handle<Map> transitioned_map) { - Handle<TransitionArray> transitions = TransitionArray::CopyInsert( - map, - map->GetIsolate()->factory()->elements_transition_symbol(), - transitioned_map, - FULL_TRANSITION); - map->set_transitions(*transitions); - return transitions; +void Map::ConnectElementsTransition(Handle<Map> parent, Handle<Map> child) { + Isolate* isolate = parent->GetIsolate(); + Handle<Name> name = isolate->factory()->elements_transition_symbol(); + ConnectTransition(parent, child, name, FULL_TRANSITION); +} + + +void JSObject::MigrateToMap(Handle<JSObject> object, Handle<Map> new_map) { + if (object->map() == *new_map) return; + if (object->HasFastProperties()) { + if (!new_map->is_dictionary_map()) { + Handle<Map> old_map(object->map()); + MigrateFastToFast(object, new_map); + if (old_map->is_prototype_map()) { + // Clear out the old descriptor array to avoid problems with sharing + // the descriptor array without an explicit copy. + old_map->InitializeDescriptors( + old_map->GetHeap()->empty_descriptor_array()); + // Ensure that no transition was inserted for prototype migrations. + DCHECK(!old_map->HasTransitionArray()); + DCHECK(new_map->GetBackPointer()->IsUndefined()); + } + } else { + MigrateFastToSlow(object, new_map, 0); + } + } else { + // For slow-to-fast migrations JSObject::TransformToFastProperties() + // must be used instead. + CHECK(new_map->is_dictionary_map()); + + // Slow-to-slow migration is trivial. + object->set_map(*new_map); + } } -// To migrate an instance to a map: +// To migrate a fast instance to a fast map: // - First check whether the instance needs to be rewritten. If not, simply // change the map. // - Otherwise, allocate a fixed array large enough to hold all fields, in @@ -2167,25 +2078,56 @@ Handle<TransitionArray> Map::SetElementsTransitionMap( // to temporarily store the inobject properties. // * If there are properties left in the backing store, install the backing // store. -void JSObject::MigrateToMap(Handle<JSObject> object, Handle<Map> new_map) { +void JSObject::MigrateFastToFast(Handle<JSObject> object, Handle<Map> new_map) { Isolate* isolate = object->GetIsolate(); Handle<Map> old_map(object->map()); + int old_number_of_fields; int number_of_fields = new_map->NumberOfFields(); int inobject = new_map->inobject_properties(); int unused = new_map->unused_property_fields(); // Nothing to do if no functions were converted to fields and no smis were // converted to doubles. - if (!old_map->InstancesNeedRewriting( - *new_map, number_of_fields, inobject, unused)) { - // Writing the new map here does not require synchronization since it does - // not change the actual object size. + if (!old_map->InstancesNeedRewriting(*new_map, number_of_fields, inobject, + unused, &old_number_of_fields)) { object->synchronized_set_map(*new_map); return; } int total_size = number_of_fields + unused; int external = total_size - inobject; + + if ((old_map->unused_property_fields() == 0) && + (number_of_fields != old_number_of_fields) && + (new_map->GetBackPointer() == *old_map)) { + DCHECK(number_of_fields == old_number_of_fields + 1); + // This migration is a transition from a map that has run out of property + // space. Therefore it could be done by extending the backing store.
+ Handle<FixedArray> old_storage = handle(object->properties(), isolate); + Handle<FixedArray> new_storage = + FixedArray::CopySize(old_storage, external); + + // Properly initialize newly added property. + PropertyDetails details = new_map->GetLastDescriptorDetails(); + Handle<Object> value; + if (details.representation().IsDouble()) { + value = isolate->factory()->NewHeapNumber(0, MUTABLE); + } else { + value = isolate->factory()->uninitialized_value(); + } + DCHECK(details.type() == FIELD); + int target_index = details.field_index() - inobject; + DCHECK(target_index >= 0); // Must be a backing store index. + new_storage->set(target_index, *value); + + // From here on we cannot fail and we shouldn't GC anymore. + DisallowHeapAllocation no_allocation; + + // Set the new property value and do the map transition. + object->set_properties(*new_storage); + object->synchronized_set_map(*new_map); + return; + } Handle<FixedArray> array = isolate->factory()->NewFixedArray(total_size); Handle<DescriptorArray> old_descriptors(old_map->instance_descriptors()); @@ -2195,21 +2137,21 @@ void JSObject::MigrateToMap(Handle<JSObject> object, Handle<Map> new_map) { // This method only supports generalizing instances to at least the same // number of properties. - ASSERT(old_nof <= new_nof); + DCHECK(old_nof <= new_nof); for (int i = 0; i < old_nof; i++) { PropertyDetails details = new_descriptors->GetDetails(i); if (details.type() != FIELD) continue; PropertyDetails old_details = old_descriptors->GetDetails(i); if (old_details.type() == CALLBACKS) { - ASSERT(details.representation().IsTagged()); + DCHECK(details.representation().IsTagged()); continue; } - ASSERT(old_details.type() == CONSTANT || + DCHECK(old_details.type() == CONSTANT || old_details.type() == FIELD); Object* raw_value = old_details.type() == CONSTANT ? old_descriptors->GetValue(i) - : object->RawFastPropertyAt(old_descriptors->GetFieldIndex(i)); + : object->RawFastPropertyAt(FieldIndex::ForDescriptor(*old_map, i)); Handle<Object> value(raw_value, isolate); if (!old_details.representation().IsDouble() && details.representation().IsDouble()) { @@ -2217,8 +2159,11 @@ void JSObject::MigrateToMap(Handle<JSObject> object, Handle<Map> new_map) { value = handle(Smi::FromInt(0), isolate); } value = Object::NewStorageFor(isolate, value, details.representation()); + } else if (old_details.representation().IsDouble() && + !details.representation().IsDouble()) { + value = Object::WrapForRead(isolate, value, old_details.representation()); } - ASSERT(!(details.representation().IsDouble() && value->IsSmi())); + DCHECK(!(details.representation().IsDouble() && value->IsSmi())); int target_index = new_descriptors->GetFieldIndex(i) - inobject; if (target_index < 0) target_index += total_size; array->set(target_index, *value); @@ -2229,7 +2174,7 @@ void JSObject::MigrateToMap(Handle<JSObject> object, Handle<Map> new_map) { if (details.type() != FIELD) continue; Handle<Object> value; if (details.representation().IsDouble()) { - value = isolate->factory()->NewHeapNumber(0); + value = isolate->factory()->NewHeapNumber(0, MUTABLE); } else { value = isolate->factory()->uninitialized_value(); } @@ -2245,46 +2190,45 @@ void JSObject::MigrateToMap(Handle<JSObject> object, Handle<Map> new_map) { // avoid overwriting |one_pointer_filler_map|. 
int limit = Min(inobject, number_of_fields); for (int i = 0; i < limit; i++) { - object->FastPropertyAtPut(i, array->get(external + i)); + FieldIndex index = FieldIndex::ForPropertyIndex(*new_map, i); + object->FastPropertyAtPut(index, array->get(external + i)); } - // Create filler object past the new instance size. - int new_instance_size = new_map->instance_size(); - int instance_size_delta = old_map->instance_size() - new_instance_size; - ASSERT(instance_size_delta >= 0); - Address address = object->address() + new_instance_size; - - // The trimming is performed on a newly allocated object, which is on a - // fresly allocated page or on an already swept page. Hence, the sweeper - // thread can not get confused with the filler creation. No synchronization - // needed. - isolate->heap()->CreateFillerObjectAt(address, instance_size_delta); + Heap* heap = isolate->heap(); // If there are properties in the new backing store, trim it to the correct // size and install the backing store into the object. if (external > 0) { - RightTrimFixedArray<Heap::FROM_MUTATOR>(isolate->heap(), *array, inobject); + heap->RightTrimFixedArray<Heap::FROM_MUTATOR>(*array, inobject); object->set_properties(*array); } - // The trimming is performed on a newly allocated object, which is on a - // fresly allocated page or on an already swept page. Hence, the sweeper - // thread can not get confused with the filler creation. No synchronization - // needed. - object->set_map(*new_map); + // Create filler object past the new instance size. + int new_instance_size = new_map->instance_size(); + int instance_size_delta = old_map->instance_size() - new_instance_size; + DCHECK(instance_size_delta >= 0); + + if (instance_size_delta > 0) { + Address address = object->address(); + heap->CreateFillerObjectAt( + address + new_instance_size, instance_size_delta); + heap->AdjustLiveBytes(address, -instance_size_delta, Heap::FROM_MUTATOR); + } + + // We are storing the new map using release store after creating a filler for + // the left-over space to avoid races with the sweeper thread. + object->synchronized_set_map(*new_map); } void JSObject::GeneralizeFieldRepresentation(Handle<JSObject> object, int modify_index, Representation new_representation, - Handle<HeapType> new_field_type, - StoreMode store_mode) { + Handle<HeapType> new_field_type) { Handle<Map> new_map = Map::GeneralizeRepresentation( - handle(object->map()), modify_index, new_representation, - new_field_type, store_mode); - if (object->map() == *new_map) return; - return MigrateToMap(object, new_map); + handle(object->map()), modify_index, new_representation, new_field_type, + FORCE_FIELD); + MigrateToMap(object, new_map); } @@ -2317,17 +2261,22 @@ Handle<Map> Map::CopyGeneralizeAllRepresentations(Handle<Map> map, // Unless the instance is being migrated, ensure that modify_index is a field. PropertyDetails details = descriptors->GetDetails(modify_index); - if (store_mode == FORCE_FIELD && details.type() != FIELD) { + if (store_mode == FORCE_FIELD && + (details.type() != FIELD || details.attributes() != attributes)) { + int field_index = details.type() == FIELD ? 
details.field_index() + : new_map->NumberOfFields(); FieldDescriptor d(handle(descriptors->GetKey(modify_index), isolate), - new_map->NumberOfFields(), - attributes, - Representation::Tagged()); + field_index, attributes, Representation::Tagged()); descriptors->Replace(modify_index, &d); - int unused_property_fields = new_map->unused_property_fields() - 1; - if (unused_property_fields < 0) { - unused_property_fields += JSObject::kFieldsAdded; + if (details.type() != FIELD) { + int unused_property_fields = new_map->unused_property_fields() - 1; + if (unused_property_fields < 0) { + unused_property_fields += JSObject::kFieldsAdded; + } + new_map->set_unused_property_fields(unused_property_fields); } - new_map->set_unused_property_fields(unused_property_fields); + } else { + DCHECK(details.attributes() == attributes); } if (FLAG_trace_generalization) { @@ -2418,7 +2367,7 @@ Map* Map::FindLastMatchMap(int verbatim, DisallowHeapAllocation no_allocation; // This can only be called on roots of transition trees. - ASSERT(GetBackPointer()->IsUndefined()); + DCHECK(GetBackPointer()->IsUndefined()); Map* current = this; @@ -2452,7 +2401,7 @@ Map* Map::FindLastMatchMap(int verbatim, Map* Map::FindFieldOwner(int descriptor) { DisallowHeapAllocation no_allocation; - ASSERT_EQ(FIELD, instance_descriptors()->GetDetails(descriptor).type()); + DCHECK_EQ(FIELD, instance_descriptors()->GetDetails(descriptor).type()); Map* result = this; while (true) { Object* back = result->GetBackPointer(); @@ -2465,15 +2414,22 @@ Map* Map::FindFieldOwner(int descriptor) { } -void Map::UpdateDescriptor(int descriptor_number, Descriptor* desc) { +void Map::UpdateFieldType(int descriptor, Handle<Name> name, + Handle<HeapType> new_type) { DisallowHeapAllocation no_allocation; + PropertyDetails details = instance_descriptors()->GetDetails(descriptor); + if (details.type() != FIELD) return; if (HasTransitionArray()) { TransitionArray* transitions = this->transitions(); for (int i = 0; i < transitions->number_of_transitions(); ++i) { - transitions->GetTarget(i)->UpdateDescriptor(descriptor_number, desc); + transitions->GetTarget(i)->UpdateFieldType(descriptor, name, new_type); } } - instance_descriptors()->Replace(descriptor_number, desc);; + // Skip if already updated the shared descriptor. 
+ if (instance_descriptors()->GetFieldType(descriptor) == *new_type) return; + FieldDescriptor d(name, instance_descriptors()->GetFieldIndex(descriptor), + new_type, details.attributes(), details.representation()); + instance_descriptors()->Replace(descriptor, &d); } @@ -2487,9 +2443,9 @@ Handle<HeapType> Map::GeneralizeFieldType(Handle<HeapType> type1, if (type1->NowStable() && type2->NowStable()) { Handle<HeapType> type = HeapType::Union(type1, type2, isolate); if (type->NumClasses() <= kMaxClassesPerFieldType) { - ASSERT(type->NowStable()); - ASSERT(type1->NowIs(type)); - ASSERT(type2->NowIs(type)); + DCHECK(type->NowStable()); + DCHECK(type1->NowIs(type)); + DCHECK(type2->NowIs(type)); return type; } } @@ -2507,7 +2463,7 @@ void Map::GeneralizeFieldType(Handle<Map> map, Handle<HeapType> old_field_type( map->instance_descriptors()->GetFieldType(modify_index), isolate); if (new_field_type->NowIs(old_field_type)) { - ASSERT(Map::GeneralizeFieldType(old_field_type, + DCHECK(Map::GeneralizeFieldType(old_field_type, new_field_type, isolate)->NowIs(old_field_type)); return; @@ -2517,19 +2473,15 @@ void Map::GeneralizeFieldType(Handle<Map> map, Handle<Map> field_owner(map->FindFieldOwner(modify_index), isolate); Handle<DescriptorArray> descriptors( field_owner->instance_descriptors(), isolate); - ASSERT_EQ(*old_field_type, descriptors->GetFieldType(modify_index)); + DCHECK_EQ(*old_field_type, descriptors->GetFieldType(modify_index)); // Determine the generalized new field type. new_field_type = Map::GeneralizeFieldType( old_field_type, new_field_type, isolate); PropertyDetails details = descriptors->GetDetails(modify_index); - FieldDescriptor d(handle(descriptors->GetKey(modify_index), isolate), - descriptors->GetFieldIndex(modify_index), - new_field_type, - details.attributes(), - details.representation()); - field_owner->UpdateDescriptor(modify_index, &d); + Handle<Name> name(descriptors->GetKey(modify_index)); + field_owner->UpdateFieldType(modify_index, name, new_field_type); field_owner->dependent_code()->DeoptimizeDependentCodeGroup( isolate, DependentCode::kFieldTypeGroup); @@ -2583,8 +2535,8 @@ Handle<Map> Map::GeneralizeRepresentation(Handle<Map> old_map, if (old_representation.IsNone() && !new_representation.IsNone() && !new_representation.IsDouble()) { - ASSERT(old_details.type() == FIELD); - ASSERT(old_descriptors->GetFieldType(modify_index)->NowIs( + DCHECK(old_details.type() == FIELD); + DCHECK(old_descriptors->GetFieldType(modify_index)->NowIs( HeapType::None())); if (FLAG_trace_generalization) { old_map->PrintGeneralization( @@ -2661,8 +2613,8 @@ Handle<Map> Map::GeneralizeRepresentation(Handle<Map> old_map, break; } } else { - ASSERT_EQ(tmp_type, old_type); - ASSERT_EQ(tmp_descriptors->GetValue(i), old_descriptors->GetValue(i)); + DCHECK_EQ(tmp_type, old_type); + DCHECK_EQ(tmp_descriptors->GetValue(i), old_descriptors->GetValue(i)); } target_map = tmp_map; } @@ -2674,10 +2626,10 @@ Handle<Map> Map::GeneralizeRepresentation(Handle<Map> old_map, if (target_nof == old_nof && (store_mode != FORCE_FIELD || target_descriptors->GetDetails(modify_index).type() == FIELD)) { - ASSERT(modify_index < target_nof); - ASSERT(new_representation.fits_into( + DCHECK(modify_index < target_nof); + DCHECK(new_representation.fits_into( target_descriptors->GetDetails(modify_index).representation())); - ASSERT(target_descriptors->GetDetails(modify_index).type() != FIELD || + DCHECK(target_descriptors->GetDetails(modify_index).type() != FIELD || new_field_type->NowIs( 
target_descriptors->GetFieldType(modify_index))); return target_map; @@ -2713,11 +2665,11 @@ Handle<Map> Map::GeneralizeRepresentation(Handle<Map> old_map, old_nof, old_descriptors->number_of_descriptors()) - old_nof; Handle<DescriptorArray> new_descriptors = DescriptorArray::Allocate( isolate, old_nof, new_slack); - ASSERT(new_descriptors->length() > target_descriptors->length() || + DCHECK(new_descriptors->length() > target_descriptors->length() || new_descriptors->NumberOfSlackDescriptors() > 0 || new_descriptors->number_of_descriptors() == old_descriptors->number_of_descriptors()); - ASSERT(new_descriptors->number_of_descriptors() == old_nof); + DCHECK(new_descriptors->number_of_descriptors() == old_nof); // 0 -> |root_nof| int current_offset = 0; @@ -2742,7 +2694,7 @@ Handle<Map> Map::GeneralizeRepresentation(Handle<Map> old_map, target_details = target_details.CopyWithRepresentation( new_representation.generalize(target_details.representation())); } - ASSERT_EQ(old_details.attributes(), target_details.attributes()); + DCHECK_EQ(old_details.attributes(), target_details.attributes()); if (old_details.type() == FIELD || target_details.type() == FIELD || (modify_index == i && store_mode == FORCE_FIELD) || @@ -2768,7 +2720,7 @@ Handle<Map> Map::GeneralizeRepresentation(Handle<Map> old_map, target_details.representation()); new_descriptors->Set(i, &d); } else { - ASSERT_NE(FIELD, target_details.type()); + DCHECK_NE(FIELD, target_details.type()); Descriptor d(target_key, handle(target_descriptors->GetValue(i), isolate), target_details); @@ -2798,7 +2750,7 @@ Handle<Map> Map::GeneralizeRepresentation(Handle<Map> old_map, old_details.representation()); new_descriptors->Set(i, &d); } else { - ASSERT(old_details.type() == CONSTANT || old_details.type() == CALLBACKS); + DCHECK(old_details.type() == CONSTANT || old_details.type() == CALLBACKS); if (modify_index == i && store_mode == FORCE_FIELD) { FieldDescriptor d(old_key, current_offset++, @@ -2810,7 +2762,7 @@ Handle<Map> Map::GeneralizeRepresentation(Handle<Map> old_map, old_details.representation()); new_descriptors->Set(i, &d); } else { - ASSERT_NE(FIELD, old_details.type()); + DCHECK_NE(FIELD, old_details.type()); Descriptor d(old_key, handle(old_descriptors->GetValue(i), isolate), old_details); @@ -2821,13 +2773,13 @@ Handle<Map> Map::GeneralizeRepresentation(Handle<Map> old_map, new_descriptors->Sort(); - ASSERT(store_mode != FORCE_FIELD || + DCHECK(store_mode != FORCE_FIELD || new_descriptors->GetDetails(modify_index).type() == FIELD); Handle<Map> split_map(root_map->FindLastMatchMap( root_nof, old_nof, *new_descriptors), isolate); int split_nof = split_map->NumberOfOwnDescriptors(); - ASSERT_NE(old_nof, split_nof); + DCHECK_NE(old_nof, split_nof); split_map->DeprecateTarget( old_descriptors->GetKey(split_nof), *new_descriptors); @@ -2876,7 +2828,7 @@ Handle<Map> Map::GeneralizeAllFieldRepresentations( // static -MaybeHandle<Map> Map::CurrentMapForDeprecated(Handle<Map> map) { +MaybeHandle<Map> Map::TryUpdate(Handle<Map> map) { Handle<Map> proto_map(map); while (proto_map->prototype()->IsJSObject()) { Handle<JSObject> holder(JSObject::cast(proto_map->prototype())); @@ -2885,12 +2837,20 @@ MaybeHandle<Map> Map::CurrentMapForDeprecated(Handle<Map> map) { proto_map = Handle<Map>(holder->map()); } } - return CurrentMapForDeprecatedInternal(map); + return TryUpdateInternal(map); } // static -MaybeHandle<Map> Map::CurrentMapForDeprecatedInternal(Handle<Map> old_map) { +Handle<Map> Map::Update(Handle<Map> map) { + return 
GeneralizeRepresentation(map, 0, Representation::None(), + HeapType::None(map->GetIsolate()), + ALLOW_AS_CONSTANT); +} + + +// static +MaybeHandle<Map> Map::TryUpdateInternal(Handle<Map> old_map) { DisallowHeapAllocation no_allocation; DisallowDeoptimization no_deoptimization(old_map->GetIsolate()); @@ -2952,137 +2912,226 @@ MaybeHandle<Map> Map::CurrentMapForDeprecatedInternal(Handle<Map> old_map) { } -MaybeHandle<Object> JSObject::SetPropertyWithInterceptor( - Handle<JSObject> object, - Handle<Name> name, - Handle<Object> value, - PropertyAttributes attributes, - StrictMode strict_mode) { +MaybeHandle<Object> JSObject::SetPropertyWithInterceptor(LookupIterator* it, + Handle<Object> value) { // TODO(rossberg): Support symbols in the API. - if (name->IsSymbol()) return value; - Isolate* isolate = object->GetIsolate(); - Handle<String> name_string = Handle<String>::cast(name); - Handle<InterceptorInfo> interceptor(object->GetNamedInterceptor()); - if (!interceptor->setter()->IsUndefined()) { - LOG(isolate, - ApiNamedPropertyAccess("interceptor-named-set", *object, *name)); - PropertyCallbackArguments args( - isolate, interceptor->data(), *object, *object); - v8::NamedPropertySetterCallback setter = - v8::ToCData<v8::NamedPropertySetterCallback>(interceptor->setter()); - Handle<Object> value_unhole = value->IsTheHole() - ? Handle<Object>(isolate->factory()->undefined_value()) : value; - v8::Handle<v8::Value> result = args.Call(setter, - v8::Utils::ToLocal(name_string), - v8::Utils::ToLocal(value_unhole)); - RETURN_EXCEPTION_IF_SCHEDULED_EXCEPTION(isolate, Object); - if (!result.IsEmpty()) return value; - } - return SetPropertyPostInterceptor( - object, name, value, attributes, strict_mode); + if (it->name()->IsSymbol()) return value; + + Handle<String> name_string = Handle<String>::cast(it->name()); + Handle<JSObject> holder = it->GetHolder<JSObject>(); + Handle<InterceptorInfo> interceptor(holder->GetNamedInterceptor()); + if (interceptor->setter()->IsUndefined()) return MaybeHandle<Object>(); + + LOG(it->isolate(), + ApiNamedPropertyAccess("interceptor-named-set", *holder, *name_string)); + PropertyCallbackArguments args(it->isolate(), interceptor->data(), *holder, + *holder); + v8::NamedPropertySetterCallback setter = + v8::ToCData<v8::NamedPropertySetterCallback>(interceptor->setter()); + v8::Handle<v8::Value> result = args.Call( + setter, v8::Utils::ToLocal(name_string), v8::Utils::ToLocal(value)); + RETURN_EXCEPTION_IF_SCHEDULED_EXCEPTION(it->isolate(), Object); + if (!result.IsEmpty()) return value; + + return MaybeHandle<Object>(); } -MaybeHandle<Object> JSReceiver::SetProperty(Handle<JSReceiver> object, - Handle<Name> name, - Handle<Object> value, - PropertyAttributes attributes, - StrictMode strict_mode, - StoreFromKeyed store_mode) { - LookupResult result(object->GetIsolate()); - object->LocalLookup(name, &result, true); - if (!result.IsFound()) { - object->map()->LookupTransition(JSObject::cast(*object), *name, &result); - } - return SetProperty(object, &result, name, value, attributes, strict_mode, - store_mode); +MaybeHandle<Object> Object::SetProperty(Handle<Object> object, + Handle<Name> name, Handle<Object> value, + StrictMode strict_mode, + StoreFromKeyed store_mode) { + LookupIterator it(object, name); + return SetProperty(&it, value, strict_mode, store_mode); } -MaybeHandle<Object> JSObject::SetPropertyWithCallback(Handle<JSObject> object, - Handle<Object> structure, - Handle<Name> name, - Handle<Object> value, - Handle<JSObject> holder, - StrictMode strict_mode) { - 
Isolate* isolate = object->GetIsolate(); +MaybeHandle<Object> Object::SetProperty(LookupIterator* it, + Handle<Object> value, + StrictMode strict_mode, + StoreFromKeyed store_mode) { + // Make sure that the top context does not change when doing callbacks or + // interceptor calls. + AssertNoContextChange ncc(it->isolate()); - // We should never get here to initialize a const with the hole - // value since a const declaration would conflict with the setter. - ASSERT(!value->IsTheHole()); - ASSERT(!structure->IsForeign()); - if (structure->IsExecutableAccessorInfo()) { - // api style callbacks - ExecutableAccessorInfo* data = ExecutableAccessorInfo::cast(*structure); - if (!data->IsCompatibleReceiver(*object)) { - Handle<Object> args[2] = { name, object }; - Handle<Object> error = - isolate->factory()->NewTypeError("incompatible_method_receiver", - HandleVector(args, - ARRAY_SIZE(args))); - return isolate->Throw<Object>(error); - } - // TODO(rossberg): Support symbols in the API. - if (name->IsSymbol()) return value; - Object* call_obj = data->setter(); - v8::AccessorSetterCallback call_fun = - v8::ToCData<v8::AccessorSetterCallback>(call_obj); - if (call_fun == NULL) return value; - Handle<String> key = Handle<String>::cast(name); - LOG(isolate, ApiNamedPropertyAccess("store", *object, *name)); - PropertyCallbackArguments args(isolate, data->data(), *object, *holder); - args.Call(call_fun, - v8::Utils::ToLocal(key), - v8::Utils::ToLocal(value)); - RETURN_EXCEPTION_IF_SCHEDULED_EXCEPTION(isolate, Object); - return value; - } + bool done = false; + for (; it->IsFound(); it->Next()) { + switch (it->state()) { + case LookupIterator::NOT_FOUND: + UNREACHABLE(); - if (structure->IsAccessorPair()) { - Handle<Object> setter(AccessorPair::cast(*structure)->setter(), isolate); - if (setter->IsSpecFunction()) { - // TODO(rossberg): nicer would be to cast to some JSCallable here... - return SetPropertyWithDefinedSetter( - object, Handle<JSReceiver>::cast(setter), value); - } else { - if (strict_mode == SLOPPY) return value; - Handle<Object> args[2] = { name, holder }; - Handle<Object> error = - isolate->factory()->NewTypeError("no_setter_in_callback", - HandleVector(args, 2)); - return isolate->Throw<Object>(error); + case LookupIterator::ACCESS_CHECK: + // TODO(verwaest): Remove the distinction. This is mostly bogus since we + // don't know whether we'll want to fetch attributes or call a setter + // until we find the property. + if (it->HasAccess(v8::ACCESS_SET)) break; + return JSObject::SetPropertyWithFailedAccessCheck(it, value, + strict_mode); + + case LookupIterator::JSPROXY: + if (it->HolderIsReceiverOrHiddenPrototype()) { + return JSProxy::SetPropertyWithHandler(it->GetHolder<JSProxy>(), + it->GetReceiver(), it->name(), + value, strict_mode); + } else { + // TODO(verwaest): Use the MaybeHandle to indicate result. 
+ bool has_result = false; + MaybeHandle<Object> maybe_result = + JSProxy::SetPropertyViaPrototypesWithHandler( + it->GetHolder<JSProxy>(), it->GetReceiver(), it->name(), + value, strict_mode, &has_result); + if (has_result) return maybe_result; + done = true; + } + break; + + case LookupIterator::INTERCEPTOR: + if (it->HolderIsReceiverOrHiddenPrototype()) { + MaybeHandle<Object> maybe_result = + JSObject::SetPropertyWithInterceptor(it, value); + if (!maybe_result.is_null()) return maybe_result; + if (it->isolate()->has_pending_exception()) return maybe_result; + } else { + Maybe<PropertyAttributes> maybe_attributes = + JSObject::GetPropertyAttributesWithInterceptor( + it->GetHolder<JSObject>(), it->GetReceiver(), it->name()); + if (!maybe_attributes.has_value) return MaybeHandle<Object>(); + done = maybe_attributes.value != ABSENT; + if (done && (maybe_attributes.value & READ_ONLY) != 0) { + return WriteToReadOnlyProperty(it, value, strict_mode); + } + } + break; + + case LookupIterator::PROPERTY: + if (!it->HasProperty()) break; + if (it->property_details().IsReadOnly()) { + return WriteToReadOnlyProperty(it, value, strict_mode); + } + switch (it->property_kind()) { + case LookupIterator::ACCESSOR: + if (it->HolderIsReceiverOrHiddenPrototype() || + !it->GetAccessors()->IsDeclaredAccessorInfo()) { + return SetPropertyWithAccessor(it->GetReceiver(), it->name(), + value, it->GetHolder<JSObject>(), + it->GetAccessors(), strict_mode); + } + break; + case LookupIterator::DATA: + if (it->HolderIsReceiverOrHiddenPrototype()) { + return SetDataProperty(it, value); + } + } + done = true; + break; } + + if (done) break; } - // TODO(dcarney): Handle correctly. - if (structure->IsDeclaredAccessorInfo()) { - return value; + return AddDataProperty(it, value, NONE, strict_mode, store_mode); +} + + +MaybeHandle<Object> Object::WriteToReadOnlyProperty(LookupIterator* it, + Handle<Object> value, + StrictMode strict_mode) { + if (strict_mode != STRICT) return value; + + Handle<Object> args[] = {it->name(), it->GetReceiver()}; + Handle<Object> error = it->factory()->NewTypeError( + "strict_read_only_property", HandleVector(args, ARRAY_SIZE(args))); + return it->isolate()->Throw<Object>(error); +} + + +MaybeHandle<Object> Object::SetDataProperty(LookupIterator* it, + Handle<Object> value) { + // Proxies are handled on the WithHandler path. Other non-JSObjects cannot + // have own properties. + Handle<JSObject> receiver = Handle<JSObject>::cast(it->GetReceiver()); + + // Store on the holder which may be hidden behind the receiver. + DCHECK(it->HolderIsReceiverOrHiddenPrototype()); + + // Old value for the observation change record. + // Fetch before transforming the object since the encoding may become + // incompatible with what's cached in |it|. + bool is_observed = + receiver->map()->is_observed() && + !it->name().is_identical_to(it->factory()->hidden_string()); + MaybeHandle<Object> maybe_old; + if (is_observed) maybe_old = it->GetDataValue(); + + // Possibly migrate to the most up-to-date map that will be able to store + // |value| under it->name(). + it->PrepareForDataProperty(value); + + // Write the property value. + it->WriteDataValue(value); + + // Send the change record if there are observers. 
+ if (is_observed && !value->SameValue(*maybe_old.ToHandleChecked())) { + JSObject::EnqueueChangeRecord(receiver, "update", it->name(), + maybe_old.ToHandleChecked()); } - UNREACHABLE(); - return MaybeHandle<Object>(); + return value; } -MaybeHandle<Object> JSReceiver::SetPropertyWithDefinedSetter( - Handle<JSReceiver> object, - Handle<JSReceiver> setter, - Handle<Object> value) { - Isolate* isolate = object->GetIsolate(); +MaybeHandle<Object> Object::AddDataProperty(LookupIterator* it, + Handle<Object> value, + PropertyAttributes attributes, + StrictMode strict_mode, + StoreFromKeyed store_mode) { + DCHECK(!it->GetReceiver()->IsJSProxy()); + if (!it->GetReceiver()->IsJSObject()) { + // TODO(verwaest): Throw a TypeError with a more specific message. + return WriteToReadOnlyProperty(it, value, strict_mode); + } + Handle<JSObject> receiver = Handle<JSObject>::cast(it->GetReceiver()); - Debug* debug = isolate->debug(); - // Handle stepping into a setter if step into is active. - // TODO(rossberg): should this apply to getters that are function proxies? - if (debug->StepInActive() && setter->IsJSFunction()) { - debug->HandleStepIn( - Handle<JSFunction>::cast(setter), Handle<Object>::null(), 0, false); + // If the receiver is a JSGlobalProxy, store on the prototype (JSGlobalObject) + // instead. If the prototype is Null, the proxy is detached. + if (receiver->IsJSGlobalProxy()) { + // Trying to assign to a detached proxy. + PrototypeIterator iter(it->isolate(), receiver); + if (iter.IsAtEnd()) return value; + receiver = + Handle<JSGlobalObject>::cast(PrototypeIterator::GetCurrent(iter)); + } + + if (!receiver->map()->is_extensible()) { + if (strict_mode == SLOPPY) return value; + + Handle<Object> args[1] = {it->name()}; + Handle<Object> error = it->factory()->NewTypeError( + "object_not_extensible", HandleVector(args, ARRAY_SIZE(args))); + return it->isolate()->Throw<Object>(error); + } + + // Possibly migrate to the most up-to-date map that will be able to store + // |value| under it->name() with |attributes|. + it->TransitionToDataProperty(value, attributes, store_mode); + + // TODO(verwaest): Encapsulate dictionary handling better. + if (receiver->map()->is_dictionary_map()) { + // TODO(verwaest): Probably should ensure this is done beforehand. + it->InternalizeName(); + JSObject::AddSlowProperty(receiver, it->name(), value, attributes); + } else { + // Write the property value. + it->WriteDataValue(value); + } + + // Send the change record if there are observers. 
+ if (receiver->map()->is_observed() && + !it->name().is_identical_to(it->factory()->hidden_string())) { + JSObject::EnqueueChangeRecord(receiver, "add", it->name(), + it->factory()->the_hole_value()); } - Handle<Object> argv[] = { value }; - RETURN_ON_EXCEPTION( - isolate, - Execution::Call(isolate, setter, object, ARRAY_SIZE(argv), argv), - Object); return value; } @@ -3094,20 +3143,16 @@ MaybeHandle<Object> JSObject::SetElementWithCallbackSetterInPrototypes( bool* found, StrictMode strict_mode) { Isolate *isolate = object->GetIsolate(); - for (Handle<Object> proto = handle(object->GetPrototype(), isolate); - !proto->IsNull(); - proto = handle(proto->GetPrototype(isolate), isolate)) { - if (proto->IsJSProxy()) { + for (PrototypeIterator iter(isolate, object); !iter.IsAtEnd(); + iter.Advance()) { + if (PrototypeIterator::GetCurrent(iter)->IsJSProxy()) { return JSProxy::SetPropertyViaPrototypesWithHandler( - Handle<JSProxy>::cast(proto), - object, + Handle<JSProxy>::cast(PrototypeIterator::GetCurrent(iter)), object, isolate->factory()->Uint32ToString(index), // name - value, - NONE, - strict_mode, - found); + value, strict_mode, found); } - Handle<JSObject> js_proto = Handle<JSObject>::cast(proto); + Handle<JSObject> js_proto = + Handle<JSObject>::cast(PrototypeIterator::GetCurrent(iter)); if (!js_proto->HasDictionaryElements()) { continue; } @@ -3119,78 +3164,18 @@ MaybeHandle<Object> JSObject::SetElementWithCallbackSetterInPrototypes( *found = true; Handle<Object> structure(dictionary->ValueAt(entry), isolate); return SetElementWithCallback(object, structure, index, value, js_proto, - strict_mode); - } - } - } - *found = false; - return isolate->factory()->the_hole_value(); -} - - -MaybeHandle<Object> JSObject::SetPropertyViaPrototypes( - Handle<JSObject> object, - Handle<Name> name, - Handle<Object> value, - PropertyAttributes attributes, - StrictMode strict_mode, - bool* done) { - Isolate* isolate = object->GetIsolate(); - - *done = false; - // We could not find a local property so let's check whether there is an - // accessor that wants to handle the property, or whether the property is - // read-only on the prototype chain. - LookupResult result(isolate); - object->LookupRealNamedPropertyInPrototypes(name, &result); - if (result.IsFound()) { - switch (result.type()) { - case NORMAL: - case FIELD: - case CONSTANT: - *done = result.IsReadOnly(); - break; - case INTERCEPTOR: { - PropertyAttributes attr = GetPropertyAttributeWithInterceptor( - handle(result.holder()), object, name, true); - *done = !!(attr & READ_ONLY); - break; - } - case CALLBACKS: { - *done = true; - if (!result.IsReadOnly()) { - Handle<Object> callback_object(result.GetCallbackObject(), isolate); - return SetPropertyWithCallback(object, callback_object, name, value, - handle(result.holder()), strict_mode); - } - break; - } - case HANDLER: { - Handle<JSProxy> proxy(result.proxy()); - return JSProxy::SetPropertyViaPrototypesWithHandler( - proxy, object, name, value, attributes, strict_mode, done); + strict_mode); } - case NONEXISTENT: - UNREACHABLE(); - break; } } - - // If we get here with *done true, we have encountered a read-only property. 
- if (*done) { - if (strict_mode == SLOPPY) return value; - Handle<Object> args[] = { name, object }; - Handle<Object> error = isolate->factory()->NewTypeError( - "strict_read_only_property", HandleVector(args, ARRAY_SIZE(args))); - return isolate->Throw<Object>(error); - } + *found = false; return isolate->factory()->the_hole_value(); } void Map::EnsureDescriptorSlack(Handle<Map> map, int slack) { // Only supports adding slack to owned descriptors. - ASSERT(map->owns_descriptors()); + DCHECK(map->owns_descriptors()); Handle<DescriptorArray> descriptors(map->instance_descriptors()); int old_size = map->NumberOfOwnDescriptors(); @@ -3310,7 +3295,7 @@ void Map::AppendCallbackDescriptors(Handle<Map> map, int nof = map->NumberOfOwnDescriptors(); Handle<DescriptorArray> array(map->instance_descriptors()); NeanderArray callbacks(descriptors); - ASSERT(array->NumberOfSlackDescriptors() >= callbacks.length()); + DCHECK(array->NumberOfSlackDescriptors() >= callbacks.length()); nof = AppendUniqueCallbacks<DescriptorArrayAppender>(&callbacks, array, nof); map->SetNumberOfOwnDescriptors(nof); } @@ -3320,7 +3305,7 @@ int AccessorInfo::AppendUnique(Handle<Object> descriptors, Handle<FixedArray> array, int valid_descriptors) { NeanderArray callbacks(descriptors); - ASSERT(array->length() >= callbacks.length() + valid_descriptors); + DCHECK(array->length() >= callbacks.length() + valid_descriptors); return AppendUniqueCallbacks<FixedArrayAppender>(&callbacks, array, valid_descriptors); @@ -3328,7 +3313,7 @@ int AccessorInfo::AppendUnique(Handle<Object> descriptors, static bool ContainsMap(MapHandleList* maps, Handle<Map> map) { - ASSERT(!map.is_null()); + DCHECK(!map.is_null()); for (int i = 0; i < maps->length(); ++i) { if (!maps->at(i).is_null() && maps->at(i).is_identical_to(map)) return true; } @@ -3394,12 +3379,12 @@ static Map* FindClosestElementsTransition(Map* map, ElementsKind to_kind) { } if (to_kind != kind && current_map->HasElementsTransition()) { - ASSERT(to_kind == DICTIONARY_ELEMENTS); + DCHECK(to_kind == DICTIONARY_ELEMENTS); Map* next_map = current_map->elements_transition_map(); if (next_map->elements_kind() == to_kind) return next_map; } - ASSERT(current_map->elements_kind() == target_kind); + DCHECK(current_map->elements_kind() == target_kind); return current_map; } @@ -3427,15 +3412,17 @@ bool Map::IsMapInArrayPrototypeChain() { static Handle<Map> AddMissingElementsTransitions(Handle<Map> map, ElementsKind to_kind) { - ASSERT(IsTransitionElementsKind(map->elements_kind())); + DCHECK(IsTransitionElementsKind(map->elements_kind())); Handle<Map> current_map = map; ElementsKind kind = map->elements_kind(); - while (kind != to_kind && !IsTerminalElementsKind(kind)) { - kind = GetNextTransitionElementsKind(kind); - current_map = Map::CopyAsElementsKind( - current_map, kind, INSERT_TRANSITION); + if (!map->is_prototype_map()) { + while (kind != to_kind && !IsTerminalElementsKind(kind)) { + kind = GetNextTransitionElementsKind(kind); + current_map = + Map::CopyAsElementsKind(current_map, kind, INSERT_TRANSITION); + } } // In case we are exiting the fast elements kind system, just add the map in @@ -3445,7 +3432,7 @@ static Handle<Map> AddMissingElementsTransitions(Handle<Map> map, current_map, to_kind, INSERT_TRANSITION); } - ASSERT(current_map->elements_kind() == to_kind); + DCHECK(current_map->elements_kind() == to_kind); return current_map; } @@ -3484,7 +3471,7 @@ Handle<Map> Map::TransitionElementsToSlow(Handle<Map> map, bool allow_store_transition = // Only remember the map transition 
if there is not an already existing // non-matching element transition. - !map->IsUndefined() && !map->is_shared() && + !map->IsUndefined() && !map->is_dictionary_map() && IsTransitionElementsKind(from_kind); // Only store fast element maps in ascending generality. @@ -3521,30 +3508,24 @@ Handle<Map> JSObject::GetElementsTransitionMap(Handle<JSObject> object, } -void JSObject::LocalLookupRealNamedProperty(Handle<Name> name, - LookupResult* result) { +void JSObject::LookupOwnRealNamedProperty(Handle<Name> name, + LookupResult* result) { DisallowHeapAllocation no_gc; if (IsJSGlobalProxy()) { - Object* proto = GetPrototype(); - if (proto->IsNull()) return result->NotFound(); - ASSERT(proto->IsJSGlobalObject()); - return JSObject::cast(proto)->LocalLookupRealNamedProperty(name, result); + PrototypeIterator iter(GetIsolate(), this); + if (iter.IsAtEnd()) return result->NotFound(); + DCHECK(iter.GetCurrent()->IsJSGlobalObject()); + return JSObject::cast(iter.GetCurrent()) + ->LookupOwnRealNamedProperty(name, result); } if (HasFastProperties()) { map()->LookupDescriptor(this, *name, result); // A property or a map transition was found. We return all of these result - // types because LocalLookupRealNamedProperty is used when setting + // types because LookupOwnRealNamedProperty is used when setting // properties where map transitions are handled. - ASSERT(!result->IsFound() || + DCHECK(!result->IsFound() || (result->holder() == this && result->IsFastPropertyType())); - // Disallow caching for uninitialized constants. These can only - // occur as fields. - if (result->IsField() && - result->IsReadOnly() && - RawFastPropertyAt(result->GetFieldIndex().field_index())->IsTheHole()) { - result->DisallowCaching(); - } return; } @@ -3553,15 +3534,12 @@ void JSObject::LocalLookupRealNamedProperty(Handle<Name> name, Object* value = property_dictionary()->ValueAt(entry); if (IsGlobalObject()) { PropertyDetails d = property_dictionary()->DetailsAt(entry); - if (d.IsDeleted()) { + if (d.IsDeleted() || PropertyCell::cast(value)->value()->IsTheHole()) { result->NotFound(); return; } value = PropertyCell::cast(value)->value(); } - // Make sure to disallow caching for uninitialized constants - // found in the dictionary-mode objects. 
- if (value->IsTheHole()) result->DisallowCaching(); result->DictionaryResult(this, entry); return; } @@ -3573,7 +3551,7 @@ void JSObject::LocalLookupRealNamedProperty(Handle<Name> name, void JSObject::LookupRealNamedProperty(Handle<Name> name, LookupResult* result) { DisallowHeapAllocation no_gc; - LocalLookupRealNamedProperty(name, result); + LookupOwnRealNamedProperty(name, result); if (result->IsFound()) return; LookupRealNamedPropertyInPrototypes(name, result); @@ -3582,137 +3560,48 @@ void JSObject::LookupRealNamedProperty(Handle<Name> name, void JSObject::LookupRealNamedPropertyInPrototypes(Handle<Name> name, LookupResult* result) { + if (name->IsOwn()) { + result->NotFound(); + return; + } + DisallowHeapAllocation no_gc; Isolate* isolate = GetIsolate(); - Heap* heap = isolate->heap(); - for (Object* pt = GetPrototype(); - pt != heap->null_value(); - pt = pt->GetPrototype(isolate)) { - if (pt->IsJSProxy()) { - return result->HandlerResult(JSProxy::cast(pt)); - } - JSObject::cast(pt)->LocalLookupRealNamedProperty(name, result); - ASSERT(!(result->IsFound() && result->type() == INTERCEPTOR)); + for (PrototypeIterator iter(isolate, this); !iter.IsAtEnd(); iter.Advance()) { + if (iter.GetCurrent()->IsJSProxy()) { + return result->HandlerResult(JSProxy::cast(iter.GetCurrent())); + } + JSObject::cast(iter.GetCurrent())->LookupOwnRealNamedProperty(name, result); + DCHECK(!(result->IsFound() && result->type() == INTERCEPTOR)); if (result->IsFound()) return; } result->NotFound(); } -// We only need to deal with CALLBACKS and INTERCEPTORS -MaybeHandle<Object> JSObject::SetPropertyWithFailedAccessCheck( - Handle<JSObject> object, - LookupResult* result, - Handle<Name> name, - Handle<Object> value, - bool check_prototype, - StrictMode strict_mode) { - if (check_prototype && !result->IsProperty()) { - object->LookupRealNamedPropertyInPrototypes(name, result); - } - - if (result->IsProperty()) { - if (!result->IsReadOnly()) { - switch (result->type()) { - case CALLBACKS: { - Object* obj = result->GetCallbackObject(); - if (obj->IsAccessorInfo()) { - Handle<AccessorInfo> info(AccessorInfo::cast(obj)); - if (info->all_can_write()) { - return SetPropertyWithCallback(object, - info, - name, - value, - handle(result->holder()), - strict_mode); - } - } else if (obj->IsAccessorPair()) { - Handle<AccessorPair> pair(AccessorPair::cast(obj)); - if (pair->all_can_read()) { - return SetPropertyWithCallback(object, - pair, - name, - value, - handle(result->holder()), - strict_mode); - } - } - break; - } - case INTERCEPTOR: { - // Try lookup real named properties. Note that only property can be - // set is callbacks marked as ALL_CAN_WRITE on the prototype chain. 
- LookupResult r(object->GetIsolate()); - object->LookupRealNamedProperty(name, &r); - if (r.IsProperty()) { - return SetPropertyWithFailedAccessCheck(object, - &r, - name, - value, - check_prototype, - strict_mode); - } - break; - } - default: { - break; - } - } - } - } - - Isolate* isolate = object->GetIsolate(); - isolate->ReportFailedAccessCheck(object, v8::ACCESS_SET); - RETURN_EXCEPTION_IF_SCHEDULED_EXCEPTION(isolate, Object); - return value; -} - - -MaybeHandle<Object> JSReceiver::SetProperty(Handle<JSReceiver> object, - LookupResult* result, - Handle<Name> key, - Handle<Object> value, - PropertyAttributes attributes, - StrictMode strict_mode, - StoreFromKeyed store_mode) { - if (result->IsHandler()) { - return JSProxy::SetPropertyWithHandler(handle(result->proxy()), - object, key, value, attributes, strict_mode); - } else { - return JSObject::SetPropertyForResult(Handle<JSObject>::cast(object), - result, key, value, attributes, strict_mode, store_mode); - } -} - - -bool JSProxy::HasPropertyWithHandler(Handle<JSProxy> proxy, Handle<Name> name) { +Maybe<bool> JSProxy::HasPropertyWithHandler(Handle<JSProxy> proxy, + Handle<Name> name) { Isolate* isolate = proxy->GetIsolate(); // TODO(rossberg): adjust once there is a story for symbols vs proxies. - if (name->IsSymbol()) return false; + if (name->IsSymbol()) return maybe(false); Handle<Object> args[] = { name }; Handle<Object> result; ASSIGN_RETURN_ON_EXCEPTION_VALUE( - isolate, result, - CallTrap(proxy, - "has", - isolate->derived_has_trap(), - ARRAY_SIZE(args), - args), - false); + isolate, result, CallTrap(proxy, "has", isolate->derived_has_trap(), + ARRAY_SIZE(args), args), + Maybe<bool>()); - return result->BooleanValue(); + return maybe(result->BooleanValue()); } -MaybeHandle<Object> JSProxy::SetPropertyWithHandler( - Handle<JSProxy> proxy, - Handle<JSReceiver> receiver, - Handle<Name> name, - Handle<Object> value, - PropertyAttributes attributes, - StrictMode strict_mode) { +MaybeHandle<Object> JSProxy::SetPropertyWithHandler(Handle<JSProxy> proxy, + Handle<Object> receiver, + Handle<Name> name, + Handle<Object> value, + StrictMode strict_mode) { Isolate* isolate = proxy->GetIsolate(); // TODO(rossberg): adjust once there is a story for symbols vs proxies. @@ -3733,13 +3622,8 @@ MaybeHandle<Object> JSProxy::SetPropertyWithHandler( MaybeHandle<Object> JSProxy::SetPropertyViaPrototypesWithHandler( - Handle<JSProxy> proxy, - Handle<JSReceiver> receiver, - Handle<Name> name, - Handle<Object> value, - PropertyAttributes attributes, - StrictMode strict_mode, - bool* done) { + Handle<JSProxy> proxy, Handle<Object> receiver, Handle<Name> name, + Handle<Object> value, StrictMode strict_mode, bool* done) { Isolate* isolate = proxy->GetIsolate(); Handle<Object> handler(proxy->handler(), isolate); // Trap might morph proxy. @@ -3784,7 +3668,7 @@ MaybeHandle<Object> JSProxy::SetPropertyViaPrototypesWithHandler( STATIC_ASCII_VECTOR("configurable_")); Handle<Object> configurable = Object::GetProperty(desc, configurable_name).ToHandleChecked(); - ASSERT(configurable->IsBoolean()); + DCHECK(configurable->IsBoolean()); if (configurable->IsFalse()) { Handle<String> trap = isolate->factory()->InternalizeOneByteString( @@ -3794,7 +3678,7 @@ MaybeHandle<Object> JSProxy::SetPropertyViaPrototypesWithHandler( "proxy_prop_not_configurable", HandleVector(args, ARRAY_SIZE(args))); return isolate->Throw<Object>(error); } - ASSERT(configurable->IsTrue()); + DCHECK(configurable->IsTrue()); // Check for DataDescriptor. 
Handle<String> hasWritable_name = @@ -3802,14 +3686,14 @@ MaybeHandle<Object> JSProxy::SetPropertyViaPrototypesWithHandler( STATIC_ASCII_VECTOR("hasWritable_")); Handle<Object> hasWritable = Object::GetProperty(desc, hasWritable_name).ToHandleChecked(); - ASSERT(hasWritable->IsBoolean()); + DCHECK(hasWritable->IsBoolean()); if (hasWritable->IsTrue()) { Handle<String> writable_name = isolate->factory()->InternalizeOneByteString( STATIC_ASCII_VECTOR("writable_")); Handle<Object> writable = Object::GetProperty(desc, writable_name).ToHandleChecked(); - ASSERT(writable->IsBoolean()); + DCHECK(writable->IsBoolean()); *done = writable->IsFalse(); if (!*done) return isolate->factory()->the_hole_value(); if (strict_mode == SLOPPY) return value; @@ -3877,62 +3761,58 @@ MaybeHandle<Object> JSProxy::DeleteElementWithHandler( } -PropertyAttributes JSProxy::GetPropertyAttributeWithHandler( - Handle<JSProxy> proxy, - Handle<JSReceiver> receiver, - Handle<Name> name) { +Maybe<PropertyAttributes> JSProxy::GetPropertyAttributesWithHandler( + Handle<JSProxy> proxy, Handle<Object> receiver, Handle<Name> name) { Isolate* isolate = proxy->GetIsolate(); HandleScope scope(isolate); // TODO(rossberg): adjust once there is a story for symbols vs proxies. - if (name->IsSymbol()) return ABSENT; + if (name->IsSymbol()) return maybe(ABSENT); Handle<Object> args[] = { name }; Handle<Object> result; ASSIGN_RETURN_ON_EXCEPTION_VALUE( isolate, result, - proxy->CallTrap(proxy, - "getPropertyDescriptor", - Handle<Object>(), - ARRAY_SIZE(args), - args), - NONE); + proxy->CallTrap(proxy, "getPropertyDescriptor", Handle<Object>(), + ARRAY_SIZE(args), args), + Maybe<PropertyAttributes>()); - if (result->IsUndefined()) return ABSENT; + if (result->IsUndefined()) return maybe(ABSENT); Handle<Object> argv[] = { result }; Handle<Object> desc; ASSIGN_RETURN_ON_EXCEPTION_VALUE( isolate, desc, - Execution::Call(isolate, - isolate->to_complete_property_descriptor(), - result, - ARRAY_SIZE(argv), - argv), - NONE); + Execution::Call(isolate, isolate->to_complete_property_descriptor(), + result, ARRAY_SIZE(argv), argv), + Maybe<PropertyAttributes>()); // Convert result to PropertyAttributes. 
Handle<String> enum_n = isolate->factory()->InternalizeOneByteString( STATIC_ASCII_VECTOR("enumerable_")); Handle<Object> enumerable; - ASSIGN_RETURN_ON_EXCEPTION_VALUE( - isolate, enumerable, Object::GetProperty(desc, enum_n), NONE); + ASSIGN_RETURN_ON_EXCEPTION_VALUE(isolate, enumerable, + Object::GetProperty(desc, enum_n), + Maybe<PropertyAttributes>()); Handle<String> conf_n = isolate->factory()->InternalizeOneByteString( STATIC_ASCII_VECTOR("configurable_")); Handle<Object> configurable; - ASSIGN_RETURN_ON_EXCEPTION_VALUE( - isolate, configurable, Object::GetProperty(desc, conf_n), NONE); + ASSIGN_RETURN_ON_EXCEPTION_VALUE(isolate, configurable, + Object::GetProperty(desc, conf_n), + Maybe<PropertyAttributes>()); Handle<String> writ_n = isolate->factory()->InternalizeOneByteString( STATIC_ASCII_VECTOR("writable_")); Handle<Object> writable; - ASSIGN_RETURN_ON_EXCEPTION_VALUE( - isolate, writable, Object::GetProperty(desc, writ_n), NONE); + ASSIGN_RETURN_ON_EXCEPTION_VALUE(isolate, writable, + Object::GetProperty(desc, writ_n), + Maybe<PropertyAttributes>()); if (!writable->BooleanValue()) { Handle<String> set_n = isolate->factory()->InternalizeOneByteString( STATIC_ASCII_VECTOR("set_")); Handle<Object> setter; - ASSIGN_RETURN_ON_EXCEPTION_VALUE( - isolate, setter, Object::GetProperty(desc, set_n), NONE); + ASSIGN_RETURN_ON_EXCEPTION_VALUE(isolate, setter, + Object::GetProperty(desc, set_n), + Maybe<PropertyAttributes>()); writable = isolate->factory()->ToBoolean(!setter->IsUndefined()); } @@ -3944,24 +3824,22 @@ PropertyAttributes JSProxy::GetPropertyAttributeWithHandler( Handle<Object> error = isolate->factory()->NewTypeError( "proxy_prop_not_configurable", HandleVector(args, ARRAY_SIZE(args))); isolate->Throw(*error); - return NONE; + return maybe(NONE); } int attributes = NONE; if (!enumerable->BooleanValue()) attributes |= DONT_ENUM; if (!configurable->BooleanValue()) attributes |= DONT_DELETE; if (!writable->BooleanValue()) attributes |= READ_ONLY; - return static_cast<PropertyAttributes>(attributes); + return maybe(static_cast<PropertyAttributes>(attributes)); } -PropertyAttributes JSProxy::GetElementAttributeWithHandler( - Handle<JSProxy> proxy, - Handle<JSReceiver> receiver, - uint32_t index) { +Maybe<PropertyAttributes> JSProxy::GetElementAttributeWithHandler( + Handle<JSProxy> proxy, Handle<JSReceiver> receiver, uint32_t index) { Isolate* isolate = proxy->GetIsolate(); Handle<String> name = isolate->factory()->Uint32ToString(index); - return GetPropertyAttributeWithHandler(proxy, receiver, name); + return GetPropertyAttributesWithHandler(proxy, receiver, name); } @@ -3977,7 +3855,7 @@ void JSProxy::Fix(Handle<JSProxy> proxy) { } else { isolate->factory()->BecomeJSObject(proxy); } - ASSERT(proxy->IsJSObject()); + DCHECK(proxy->IsJSObject()); // Inherit identity, if it was present. 
if (hash->IsSmi()) { @@ -4017,7 +3895,7 @@ MaybeHandle<Object> JSProxy::CallTrap(Handle<JSProxy> proxy, void JSObject::AllocateStorageForMap(Handle<JSObject> object, Handle<Map> map) { - ASSERT(object->map()->inobject_properties() == map->inobject_properties()); + DCHECK(object->map()->inobject_properties() == map->inobject_properties()); ElementsKind obj_kind = object->map()->elements_kind(); ElementsKind map_kind = map->elements_kind(); if (map_kind != obj_kind) { @@ -4038,17 +3916,12 @@ void JSObject::AllocateStorageForMap(Handle<JSObject> object, Handle<Map> map) { void JSObject::MigrateInstance(Handle<JSObject> object) { - // Converting any field to the most specific type will cause the - // GeneralizeFieldRepresentation algorithm to create the most general existing - // transition that matches the object. This achieves what is needed. Handle<Map> original_map(object->map()); - GeneralizeFieldRepresentation( - object, 0, Representation::None(), - HeapType::None(object->GetIsolate()), - ALLOW_AS_CONSTANT); - object->map()->set_migration_target(true); + Handle<Map> map = Map::Update(original_map); + map->set_migration_target(true); + MigrateToMap(object, map); if (FLAG_trace_migration) { - object->PrintInstanceMigration(stdout, *original_map, object->map()); + object->PrintInstanceMigration(stdout, *original_map, *map); } } @@ -4059,7 +3932,7 @@ bool JSObject::TryMigrateInstance(Handle<JSObject> object) { DisallowDeoptimization no_deoptimization(isolate); Handle<Map> original_map(object->map(), isolate); Handle<Map> new_map; - if (!Map::CurrentMapForDeprecatedInternal(original_map).ToHandle(&new_map)) { + if (!Map::TryUpdate(original_map).ToHandle(&new_map)) { return false; } JSObject::MigrateToMap(object, new_map); @@ -4083,14 +3956,13 @@ MaybeHandle<Object> JSObject::SetPropertyUsingTransition( PropertyDetails details = descriptors->GetDetails(descriptor); if (details.type() == CALLBACKS || attributes != details.attributes()) { - // AddProperty will either normalize the object, or create a new fast copy - // of the map. If we get a fast copy of the map, all field representations - // will be tagged since the transition is omitted. - return JSObject::AddProperty( - object, name, value, attributes, SLOPPY, + // AddPropertyInternal will either normalize the object, or create a new + // fast copy of the map. If we get a fast copy of the map, all field + // representations will be tagged since the transition is omitted. + return JSObject::AddPropertyInternal( + object, name, value, attributes, JSReceiver::CERTAINLY_NOT_STORE_FROM_KEYED, - JSReceiver::OMIT_EXTENSIBILITY_CHECK, - JSObject::FORCE_TAGGED, FORCE_FIELD, OMIT_TRANSITION); + JSReceiver::OMIT_EXTENSIBILITY_CHECK, OMIT_TRANSITION); } // Keep the target CONSTANT if the same value is stored. @@ -4125,54 +3997,56 @@ void JSObject::WriteToField(int descriptor, Object* value) { DescriptorArray* desc = map()->instance_descriptors(); PropertyDetails details = desc->GetDetails(descriptor); - ASSERT(details.type() == FIELD); + DCHECK(details.type() == FIELD); - int field_index = desc->GetFieldIndex(descriptor); + FieldIndex index = FieldIndex::ForDescriptor(map(), descriptor); if (details.representation().IsDouble()) { // Nothing more to be done. 
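// "Nothing more to be done" here because a double field is stored as a
// pointer to a mutable HeapNumber box: the write below updates the box's
// payload in place (note the new DCHECK(box->IsMutableHeapNumber())), so the
// field slot itself and the map layout stay untouched. A stand-alone mirror
// of the two store paths -- Box and Slot are hypothetical types, not V8's
// classes:
struct Box { double value; };
struct Slot {
  Box* box;      // used when the field representation is Double
  void* tagged;  // used for ordinary tagged fields
  void WriteDouble(double d) { box->value = d; }  // in-place box update
  void WriteTagged(void* v) { tagged = v; }       // plain pointer store
};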
if (value->IsUninitialized()) return; - HeapNumber* box = HeapNumber::cast(RawFastPropertyAt(field_index)); + HeapNumber* box = HeapNumber::cast(RawFastPropertyAt(index)); + DCHECK(box->IsMutableHeapNumber()); box->set_value(value->Number()); } else { - FastPropertyAtPut(field_index, value); + FastPropertyAtPut(index, value); } } -static void SetPropertyToField(LookupResult* lookup, - Handle<Object> value) { +void JSObject::SetPropertyToField(LookupResult* lookup, Handle<Object> value) { if (lookup->type() == CONSTANT || !lookup->CanHoldValue(value)) { Representation field_representation = value->OptimalRepresentation(); Handle<HeapType> field_type = value->OptimalType( lookup->isolate(), field_representation); JSObject::GeneralizeFieldRepresentation(handle(lookup->holder()), lookup->GetDescriptorIndex(), - field_representation, field_type, - FORCE_FIELD); + field_representation, field_type); } lookup->holder()->WriteToField(lookup->GetDescriptorIndex(), *value); } -static void ConvertAndSetLocalProperty(LookupResult* lookup, - Handle<Name> name, - Handle<Object> value, - PropertyAttributes attributes) { +void JSObject::ConvertAndSetOwnProperty(LookupResult* lookup, + Handle<Name> name, + Handle<Object> value, + PropertyAttributes attributes) { Handle<JSObject> object(lookup->holder()); - if (object->TooManyFastProperties()) { + if (object->map()->TooManyFastProperties(Object::MAY_BE_STORE_FROM_KEYED)) { JSObject::NormalizeProperties(object, CLEAR_INOBJECT_PROPERTIES, 0); + } else if (object->map()->is_prototype_map()) { + JSObject::NormalizeProperties(object, KEEP_INOBJECT_PROPERTIES, 0); } if (!object->HasFastProperties()) { ReplaceSlowProperty(object, name, value, attributes); + ReoptimizeIfPrototype(object); return; } int descriptor_index = lookup->GetDescriptorIndex(); if (lookup->GetAttributes() == attributes) { - JSObject::GeneralizeFieldRepresentation( - object, descriptor_index, Representation::Tagged(), - HeapType::Any(lookup->isolate()), FORCE_FIELD); + JSObject::GeneralizeFieldRepresentation(object, descriptor_index, + Representation::Tagged(), + HeapType::Any(lookup->isolate())); } else { Handle<Map> old_map(object->map()); Handle<Map> new_map = Map::CopyGeneralizeAllRepresentations(old_map, @@ -4184,170 +4058,48 @@ static void ConvertAndSetLocalProperty(LookupResult* lookup, } -static void SetPropertyToFieldWithAttributes(LookupResult* lookup, - Handle<Name> name, - Handle<Object> value, - PropertyAttributes attributes) { +void JSObject::SetPropertyToFieldWithAttributes(LookupResult* lookup, + Handle<Name> name, + Handle<Object> value, + PropertyAttributes attributes) { if (lookup->GetAttributes() == attributes) { if (value->IsUninitialized()) return; SetPropertyToField(lookup, value); } else { - ConvertAndSetLocalProperty(lookup, name, value, attributes); + ConvertAndSetOwnProperty(lookup, name, value, attributes); } } -MaybeHandle<Object> JSObject::SetPropertyForResult( - Handle<JSObject> object, - LookupResult* lookup, - Handle<Name> name, - Handle<Object> value, - PropertyAttributes attributes, - StrictMode strict_mode, - StoreFromKeyed store_mode) { - Isolate* isolate = object->GetIsolate(); - - // Make sure that the top context does not change when doing callbacks or - // interceptor calls. - AssertNoContextChange ncc(isolate); - - // Optimization for 2-byte strings often used as keys in a decompression - // dictionary. We internalize these short keys to avoid constantly - // reallocating them. 
- if (name->IsString() && !name->IsInternalizedString() && - Handle<String>::cast(name)->length() <= 2) { - name = isolate->factory()->InternalizeString(Handle<String>::cast(name)); - } - - // Check access rights if needed. - if (object->IsAccessCheckNeeded()) { - if (!isolate->MayNamedAccess(object, name, v8::ACCESS_SET)) { - return SetPropertyWithFailedAccessCheck(object, lookup, name, value, - true, strict_mode); - } - } - - if (object->IsJSGlobalProxy()) { - Handle<Object> proto(object->GetPrototype(), isolate); - if (proto->IsNull()) return value; - ASSERT(proto->IsJSGlobalObject()); - return SetPropertyForResult(Handle<JSObject>::cast(proto), - lookup, name, value, attributes, strict_mode, store_mode); - } - - ASSERT(!lookup->IsFound() || lookup->holder() == *object || - lookup->holder()->map()->is_hidden_prototype()); - - if (!lookup->IsProperty() && !object->IsJSContextExtensionObject()) { - bool done = false; - Handle<Object> result_object; - ASSIGN_RETURN_ON_EXCEPTION( - isolate, result_object, - SetPropertyViaPrototypes( - object, name, value, attributes, strict_mode, &done), - Object); - if (done) return result_object; - } - - if (!lookup->IsFound()) { - // Neither properties nor transitions found. - return AddProperty( - object, name, value, attributes, strict_mode, store_mode); - } - - if (lookup->IsProperty() && lookup->IsReadOnly()) { - if (strict_mode == STRICT) { - Handle<Object> args[] = { name, object }; - Handle<Object> error = isolate->factory()->NewTypeError( - "strict_read_only_property", HandleVector(args, ARRAY_SIZE(args))); - return isolate->Throw<Object>(error); - } else { - return value; - } - } - - Handle<Object> old_value = isolate->factory()->the_hole_value(); - bool is_observed = object->map()->is_observed() && - *name != isolate->heap()->hidden_string(); - if (is_observed && lookup->IsDataProperty()) { - old_value = Object::GetPropertyOrElement(object, name).ToHandleChecked(); - } - - // This is a real property that is not read-only, or it is a - // transition or null descriptor and there are no setters in the prototypes. - MaybeHandle<Object> maybe_result = value; - if (lookup->IsTransition()) { - maybe_result = SetPropertyUsingTransition(handle(lookup->holder()), lookup, - name, value, attributes); - } else { - switch (lookup->type()) { - case NORMAL: - SetNormalizedProperty(handle(lookup->holder()), lookup, value); - break; - case FIELD: - SetPropertyToField(lookup, value); - break; - case CONSTANT: - // Only replace the constant if necessary. 
- if (*value == lookup->GetConstant()) return value; - SetPropertyToField(lookup, value); - break; - case CALLBACKS: { - Handle<Object> callback_object(lookup->GetCallbackObject(), isolate); - return SetPropertyWithCallback(object, callback_object, name, value, - handle(lookup->holder()), strict_mode); - } - case INTERCEPTOR: - maybe_result = SetPropertyWithInterceptor( - handle(lookup->holder()), name, value, attributes, strict_mode); - break; - case HANDLER: - case NONEXISTENT: - UNREACHABLE(); - } - } - - Handle<Object> result; - ASSIGN_RETURN_ON_EXCEPTION(isolate, result, maybe_result, Object); - - if (is_observed) { - if (lookup->IsTransition()) { - EnqueueChangeRecord(object, "add", name, old_value); - } else { - LookupResult new_lookup(isolate); - object->LocalLookup(name, &new_lookup, true); - if (new_lookup.IsDataProperty()) { - Handle<Object> new_value = - Object::GetPropertyOrElement(object, name).ToHandleChecked(); - if (!new_value->SameValue(*old_value)) { - EnqueueChangeRecord(object, "update", name, old_value); - } - } - } - } - - return result; +void JSObject::AddProperty(Handle<JSObject> object, Handle<Name> name, + Handle<Object> value, + PropertyAttributes attributes) { +#ifdef DEBUG + uint32_t index; + DCHECK(!object->IsJSProxy()); + DCHECK(!name->AsArrayIndex(&index)); + LookupIterator it(object, name, LookupIterator::CHECK_OWN_REAL); + Maybe<PropertyAttributes> maybe = GetPropertyAttributes(&it); + DCHECK(maybe.has_value); + DCHECK(!it.IsFound()); + DCHECK(object->map()->is_extensible()); +#endif + SetOwnPropertyIgnoreAttributes(object, name, value, attributes, + OMIT_EXTENSIBILITY_CHECK).Check(); } -// Set a real local property, even if it is READ_ONLY. If the property is not -// present, add it with attributes NONE. This code is an exact clone of -// SetProperty, with the check for IsReadOnly and the check for a -// callback setter removed. The two lines looking up the LookupResult -// result are also added. If one of the functions is changed, the other -// should be. -// Note that this method cannot be used to set the prototype of a function -// because ConvertDescriptorToField() which is called in "case CALLBACKS:" -// doesn't handle function prototypes correctly. -MaybeHandle<Object> JSObject::SetLocalPropertyIgnoreAttributes( +// Reconfigures a property to a data property with attributes, even if it is not +// reconfigurable. +MaybeHandle<Object> JSObject::SetOwnPropertyIgnoreAttributes( Handle<JSObject> object, Handle<Name> name, Handle<Object> value, PropertyAttributes attributes, - ValueType value_type, - StoreMode mode, ExtensibilityCheck extensibility_check, - StoreFromKeyed store_from_keyed) { + StoreFromKeyed store_from_keyed, + ExecutableAccessorInfoHandling handling) { + DCHECK(!value->IsTheHole()); Isolate* isolate = object->GetIsolate(); // Make sure that the top context does not change when doing callbacks or @@ -4355,7 +4107,7 @@ MaybeHandle<Object> JSObject::SetLocalPropertyIgnoreAttributes( AssertNoContextChange ncc(isolate); LookupResult lookup(isolate); - object->LocalLookup(name, &lookup, true); + object->LookupOwn(name, &lookup, true); if (!lookup.IsFound()) { object->map()->LookupTransition(*object, *name, &lookup); } @@ -4363,22 +4115,23 @@ MaybeHandle<Object> JSObject::SetLocalPropertyIgnoreAttributes( // Check access rights if needed. 
if (object->IsAccessCheckNeeded()) { if (!isolate->MayNamedAccess(object, name, v8::ACCESS_SET)) { - return SetPropertyWithFailedAccessCheck(object, &lookup, name, value, - false, SLOPPY); + LookupIterator it(object, name, LookupIterator::CHECK_OWN); + return SetPropertyWithFailedAccessCheck(&it, value, SLOPPY); } } if (object->IsJSGlobalProxy()) { - Handle<Object> proto(object->GetPrototype(), isolate); - if (proto->IsNull()) return value; - ASSERT(proto->IsJSGlobalObject()); - return SetLocalPropertyIgnoreAttributes(Handle<JSObject>::cast(proto), - name, value, attributes, value_type, mode, extensibility_check); + PrototypeIterator iter(isolate, object); + if (iter.IsAtEnd()) return value; + DCHECK(PrototypeIterator::GetCurrent(iter)->IsJSGlobalObject()); + return SetOwnPropertyIgnoreAttributes( + Handle<JSObject>::cast(PrototypeIterator::GetCurrent(iter)), name, + value, attributes, extensibility_check); } if (lookup.IsInterceptor() || (lookup.IsDescriptorOrDictionary() && lookup.type() == CALLBACKS)) { - object->LocalLookupRealNamedProperty(name, &lookup); + object->LookupOwnRealNamedProperty(name, &lookup); } // Check for accessor in prototype chain removed here in clone. @@ -4387,8 +4140,8 @@ MaybeHandle<Object> JSObject::SetLocalPropertyIgnoreAttributes( TransitionFlag flag = lookup.IsFound() ? OMIT_TRANSITION : INSERT_TRANSITION; // Neither properties nor transitions found. - return AddProperty(object, name, value, attributes, SLOPPY, - store_from_keyed, extensibility_check, value_type, mode, flag); + return AddPropertyInternal(object, name, value, attributes, + store_from_keyed, extensibility_check, flag); } Handle<Object> old_value = isolate->factory()->the_hole_value(); @@ -4402,6 +4155,8 @@ MaybeHandle<Object> JSObject::SetLocalPropertyIgnoreAttributes( old_attributes = lookup.GetAttributes(); } + bool executed_set_prototype = false; + // Check of IsReadOnly removed from here in clone. if (lookup.IsTransition()) { Handle<Object> result; @@ -4426,8 +4181,44 @@ MaybeHandle<Object> JSObject::SetLocalPropertyIgnoreAttributes( } break; case CALLBACKS: - ConvertAndSetLocalProperty(&lookup, name, value, attributes); + { + Handle<Object> callback(lookup.GetCallbackObject(), isolate); + if (callback->IsExecutableAccessorInfo() && + handling == DONT_FORCE_FIELD) { + Handle<Object> result; + ASSIGN_RETURN_ON_EXCEPTION( + isolate, result, JSObject::SetPropertyWithAccessor( + object, name, value, handle(lookup.holder()), + callback, STRICT), + Object); + + if (attributes != lookup.GetAttributes()) { + Handle<ExecutableAccessorInfo> new_data = + Accessors::CloneAccessor( + isolate, Handle<ExecutableAccessorInfo>::cast(callback)); + new_data->set_property_attributes(attributes); + if (attributes & READ_ONLY) { + // This way we don't have to introduce a lookup to the setter, + // simply make it unavailable to reflect the attributes. + new_data->clear_setter(); + } + + SetPropertyCallback(object, name, new_data, attributes); + } + if (is_observed) { + // If we are setting the prototype of a function and are observed, + // don't send change records because the prototype handles that + // itself. 
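// The CALLBACKS branch above reconfigures an API accessor without demoting it
// to a data field: the ExecutableAccessorInfo is cloned, the requested
// attributes are stamped on the clone, and for READ_ONLY the setter is simply
// cleared, making it unavailable rather than adding a read-only check to the
// store path. A rough shape of that idea (Accessor is a hypothetical stand-in
// for ExecutableAccessorInfo, READ_ONLY assumed to be bit 1):
struct Accessor {
  void* getter;
  void* setter;
  int attributes;
};
inline Accessor CloneWithAttributes(const Accessor& a, int attributes) {
  Accessor clone = a;  // keep the getter/setter pair
  clone.attributes = attributes;
  if (attributes & 1 /* READ_ONLY */) clone.setter = nullptr;
  return clone;
}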
+ executed_set_prototype = object->IsJSFunction() && + String::Equals(isolate->factory()->prototype_string(), + Handle<String>::cast(name)) && + Handle<JSFunction>::cast(object)->should_have_prototype(); + } + } else { + ConvertAndSetOwnProperty(&lookup, name, value, attributes); + } break; + } case NONEXISTENT: case HANDLER: case INTERCEPTOR: @@ -4435,14 +4226,14 @@ MaybeHandle<Object> JSObject::SetLocalPropertyIgnoreAttributes( } } - if (is_observed) { + if (is_observed && !executed_set_prototype) { if (lookup.IsTransition()) { EnqueueChangeRecord(object, "add", name, old_value); } else if (old_value->IsTheHole()) { EnqueueChangeRecord(object, "reconfigure", name, old_value); } else { LookupResult new_lookup(isolate); - object->LocalLookup(name, &new_lookup, true); + object->LookupOwn(name, &new_lookup, true); bool value_changed = false; if (new_lookup.IsDataProperty()) { Handle<Object> new_value = @@ -4462,182 +4253,129 @@ MaybeHandle<Object> JSObject::SetLocalPropertyIgnoreAttributes( } -PropertyAttributes JSObject::GetPropertyAttributePostInterceptor( - Handle<JSObject> object, - Handle<JSObject> receiver, - Handle<Name> name, - bool continue_search) { - // Check local property, ignore interceptor. - Isolate* isolate = object->GetIsolate(); - LookupResult result(isolate); - object->LocalLookupRealNamedProperty(name, &result); - if (result.IsFound()) return result.GetAttributes(); - - if (continue_search) { - // Continue searching via the prototype chain. - Handle<Object> proto(object->GetPrototype(), isolate); - if (!proto->IsNull()) { - return JSReceiver::GetPropertyAttributeWithReceiver( - Handle<JSObject>::cast(proto), receiver, name); - } - } - return ABSENT; -} - - -PropertyAttributes JSObject::GetPropertyAttributeWithInterceptor( - Handle<JSObject> object, - Handle<JSObject> receiver, - Handle<Name> name, - bool continue_search) { +Maybe<PropertyAttributes> JSObject::GetPropertyAttributesWithInterceptor( + Handle<JSObject> holder, + Handle<Object> receiver, + Handle<Name> name) { // TODO(rossberg): Support symbols in the API. - if (name->IsSymbol()) return ABSENT; + if (name->IsSymbol()) return maybe(ABSENT); - Isolate* isolate = object->GetIsolate(); + Isolate* isolate = holder->GetIsolate(); HandleScope scope(isolate); // Make sure that the top context does not change when doing // callbacks or interceptor calls. 
AssertNoContextChange ncc(isolate); - Handle<InterceptorInfo> interceptor(object->GetNamedInterceptor()); + Handle<InterceptorInfo> interceptor(holder->GetNamedInterceptor()); PropertyCallbackArguments args( - isolate, interceptor->data(), *receiver, *object); + isolate, interceptor->data(), *receiver, *holder); if (!interceptor->query()->IsUndefined()) { v8::NamedPropertyQueryCallback query = v8::ToCData<v8::NamedPropertyQueryCallback>(interceptor->query()); LOG(isolate, - ApiNamedPropertyAccess("interceptor-named-has", *object, *name)); + ApiNamedPropertyAccess("interceptor-named-has", *holder, *name)); v8::Handle<v8::Integer> result = args.Call(query, v8::Utils::ToLocal(Handle<String>::cast(name))); if (!result.IsEmpty()) { - ASSERT(result->IsInt32()); - return static_cast<PropertyAttributes>(result->Int32Value()); + DCHECK(result->IsInt32()); + return maybe(static_cast<PropertyAttributes>(result->Int32Value())); } } else if (!interceptor->getter()->IsUndefined()) { v8::NamedPropertyGetterCallback getter = v8::ToCData<v8::NamedPropertyGetterCallback>(interceptor->getter()); LOG(isolate, - ApiNamedPropertyAccess("interceptor-named-get-has", *object, *name)); + ApiNamedPropertyAccess("interceptor-named-get-has", *holder, *name)); v8::Handle<v8::Value> result = args.Call(getter, v8::Utils::ToLocal(Handle<String>::cast(name))); - if (!result.IsEmpty()) return DONT_ENUM; + if (!result.IsEmpty()) return maybe(DONT_ENUM); } - return GetPropertyAttributePostInterceptor( - object, receiver, name, continue_search); + + RETURN_VALUE_IF_SCHEDULED_EXCEPTION(isolate, Maybe<PropertyAttributes>()); + return maybe(ABSENT); } -PropertyAttributes JSReceiver::GetPropertyAttributeWithReceiver( - Handle<JSReceiver> object, - Handle<JSReceiver> receiver, - Handle<Name> key) { +Maybe<PropertyAttributes> JSReceiver::GetOwnPropertyAttributes( + Handle<JSReceiver> object, Handle<Name> name) { + // Check whether the name is an array index. uint32_t index = 0; - if (object->IsJSObject() && key->AsArrayIndex(&index)) { - return JSObject::GetElementAttributeWithReceiver( - Handle<JSObject>::cast(object), receiver, index, true); + if (object->IsJSObject() && name->AsArrayIndex(&index)) { + return GetOwnElementAttribute(object, index); } - // Named property. - LookupResult lookup(object->GetIsolate()); - object->Lookup(key, &lookup); - return GetPropertyAttributeForResult(object, receiver, &lookup, key, true); + LookupIterator it(object, name, LookupIterator::CHECK_OWN); + return GetPropertyAttributes(&it); } -PropertyAttributes JSReceiver::GetPropertyAttributeForResult( - Handle<JSReceiver> object, - Handle<JSReceiver> receiver, - LookupResult* lookup, - Handle<Name> name, - bool continue_search) { - // Check access rights if needed. 
- if (object->IsAccessCheckNeeded()) { - Heap* heap = object->GetHeap(); - Handle<JSObject> obj = Handle<JSObject>::cast(object); - if (!heap->isolate()->MayNamedAccess(obj, name, v8::ACCESS_HAS)) { - return JSObject::GetPropertyAttributeWithFailedAccessCheck( - obj, lookup, name, continue_search); - } - } - if (lookup->IsFound()) { - switch (lookup->type()) { - case NORMAL: // fall through - case FIELD: - case CONSTANT: - case CALLBACKS: - return lookup->GetAttributes(); - case HANDLER: { - return JSProxy::GetPropertyAttributeWithHandler( - handle(lookup->proxy()), receiver, name); - } - case INTERCEPTOR: - return JSObject::GetPropertyAttributeWithInterceptor( - handle(lookup->holder()), - Handle<JSObject>::cast(receiver), - name, - continue_search); - case NONEXISTENT: +Maybe<PropertyAttributes> JSReceiver::GetPropertyAttributes( + LookupIterator* it) { + for (; it->IsFound(); it->Next()) { + switch (it->state()) { + case LookupIterator::NOT_FOUND: UNREACHABLE(); + case LookupIterator::JSPROXY: + return JSProxy::GetPropertyAttributesWithHandler( + it->GetHolder<JSProxy>(), it->GetReceiver(), it->name()); + case LookupIterator::INTERCEPTOR: { + Maybe<PropertyAttributes> result = + JSObject::GetPropertyAttributesWithInterceptor( + it->GetHolder<JSObject>(), it->GetReceiver(), it->name()); + if (!result.has_value) return result; + if (result.value != ABSENT) return result; + break; + } + case LookupIterator::ACCESS_CHECK: + if (it->HasAccess(v8::ACCESS_HAS)) break; + return JSObject::GetPropertyAttributesWithFailedAccessCheck(it); + case LookupIterator::PROPERTY: + if (it->HasProperty()) { + return maybe(it->property_details().attributes()); + } + break; } } - return ABSENT; -} - - -PropertyAttributes JSReceiver::GetLocalPropertyAttribute( - Handle<JSReceiver> object, Handle<Name> name) { - // Check whether the name is an array index. - uint32_t index = 0; - if (object->IsJSObject() && name->AsArrayIndex(&index)) { - return GetLocalElementAttribute(object, index); - } - // Named property. - LookupResult lookup(object->GetIsolate()); - object->LocalLookup(name, &lookup, true); - return GetPropertyAttributeForResult(object, object, &lookup, name, false); + return maybe(ABSENT); } -PropertyAttributes JSObject::GetElementAttributeWithReceiver( - Handle<JSObject> object, - Handle<JSReceiver> receiver, - uint32_t index, - bool continue_search) { +Maybe<PropertyAttributes> JSObject::GetElementAttributeWithReceiver( + Handle<JSObject> object, Handle<JSReceiver> receiver, uint32_t index, + bool check_prototype) { Isolate* isolate = object->GetIsolate(); // Check access rights if needed. if (object->IsAccessCheckNeeded()) { if (!isolate->MayIndexedAccess(object, index, v8::ACCESS_HAS)) { isolate->ReportFailedAccessCheck(object, v8::ACCESS_HAS); - // TODO(yangguo): Issue 3269, check for scheduled exception missing? 
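// The added lines below resolve the TODO that this hunk deletes: after
// ReportFailedAccessCheck() the code now bails out when an exception was
// scheduled instead of unconditionally answering ABSENT. The guard used
// throughout this patch has roughly the following shape (a sketch of the
// pattern, not the exact macro from V8's isolate header):
#define RETURN_VALUE_IF_SCHEDULED_EXCEPTION(isolate, value) \
  do {                                                      \
    if ((isolate)->has_scheduled_exception()) {             \
      (isolate)->PromoteScheduledException();               \
      return (value);                                       \
    }                                                       \
  } while (false)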
- return ABSENT; + RETURN_VALUE_IF_SCHEDULED_EXCEPTION(isolate, Maybe<PropertyAttributes>()); + return maybe(ABSENT); } } if (object->IsJSGlobalProxy()) { - Handle<Object> proto(object->GetPrototype(), isolate); - if (proto->IsNull()) return ABSENT; - ASSERT(proto->IsJSGlobalObject()); + PrototypeIterator iter(isolate, object); + if (iter.IsAtEnd()) return maybe(ABSENT); + DCHECK(PrototypeIterator::GetCurrent(iter)->IsJSGlobalObject()); return JSObject::GetElementAttributeWithReceiver( - Handle<JSObject>::cast(proto), receiver, index, continue_search); + Handle<JSObject>::cast(PrototypeIterator::GetCurrent(iter)), receiver, + index, check_prototype); } // Check for lookup interceptor except when bootstrapping. if (object->HasIndexedInterceptor() && !isolate->bootstrapper()->IsActive()) { return JSObject::GetElementAttributeWithInterceptor( - object, receiver, index, continue_search); + object, receiver, index, check_prototype); } return GetElementAttributeWithoutInterceptor( - object, receiver, index, continue_search); + object, receiver, index, check_prototype); } -PropertyAttributes JSObject::GetElementAttributeWithInterceptor( - Handle<JSObject> object, - Handle<JSReceiver> receiver, - uint32_t index, - bool continue_search) { +Maybe<PropertyAttributes> JSObject::GetElementAttributeWithInterceptor( + Handle<JSObject> object, Handle<JSReceiver> receiver, uint32_t index, + bool check_prototype) { Isolate* isolate = object->GetIsolate(); HandleScope scope(isolate); @@ -4655,7 +4393,7 @@ PropertyAttributes JSObject::GetElementAttributeWithInterceptor( ApiIndexedPropertyAccess("interceptor-indexed-has", *object, index)); v8::Handle<v8::Integer> result = args.Call(query, index); if (!result.IsEmpty()) - return static_cast<PropertyAttributes>(result->Int32Value()); + return maybe(static_cast<PropertyAttributes>(result->Int32Value())); } else if (!interceptor->getter()->IsUndefined()) { v8::IndexedPropertyGetterCallback getter = v8::ToCData<v8::IndexedPropertyGetterCallback>(interceptor->getter()); @@ -4663,39 +4401,39 @@ PropertyAttributes JSObject::GetElementAttributeWithInterceptor( ApiIndexedPropertyAccess( "interceptor-indexed-get-has", *object, index)); v8::Handle<v8::Value> result = args.Call(getter, index); - if (!result.IsEmpty()) return NONE; + if (!result.IsEmpty()) return maybe(NONE); } return GetElementAttributeWithoutInterceptor( - object, receiver, index, continue_search); + object, receiver, index, check_prototype); } -PropertyAttributes JSObject::GetElementAttributeWithoutInterceptor( - Handle<JSObject> object, - Handle<JSReceiver> receiver, - uint32_t index, - bool continue_search) { +Maybe<PropertyAttributes> JSObject::GetElementAttributeWithoutInterceptor( + Handle<JSObject> object, Handle<JSReceiver> receiver, uint32_t index, + bool check_prototype) { PropertyAttributes attr = object->GetElementsAccessor()->GetAttributes( receiver, object, index); - if (attr != ABSENT) return attr; + if (attr != ABSENT) return maybe(attr); // Handle [] on String objects. 
if (object->IsStringObjectWithCharacterAt(index)) { - return static_cast<PropertyAttributes>(READ_ONLY | DONT_DELETE); + return maybe(static_cast<PropertyAttributes>(READ_ONLY | DONT_DELETE)); } - if (!continue_search) return ABSENT; + if (!check_prototype) return maybe(ABSENT); - Handle<Object> proto(object->GetPrototype(), object->GetIsolate()); - if (proto->IsJSProxy()) { + PrototypeIterator iter(object->GetIsolate(), object); + if (PrototypeIterator::GetCurrent(iter)->IsJSProxy()) { // We need to follow the spec and simulate a call to [[GetOwnProperty]]. return JSProxy::GetElementAttributeWithHandler( - Handle<JSProxy>::cast(proto), receiver, index); + Handle<JSProxy>::cast(PrototypeIterator::GetCurrent(iter)), receiver, + index); } - if (proto->IsNull()) return ABSENT; + if (iter.IsAtEnd()) return maybe(ABSENT); return GetElementAttributeWithReceiver( - Handle<JSObject>::cast(proto), receiver, index, true); + Handle<JSObject>::cast(PrototypeIterator::GetCurrent(iter)), receiver, + index, true); } @@ -4721,7 +4459,7 @@ MaybeHandle<Map> NormalizedMapCache::Get(Handle<Map> fast_map, void NormalizedMapCache::Set(Handle<Map> fast_map, Handle<Map> normalized_map) { DisallowHeapAllocation no_gc; - ASSERT(normalized_map->is_dictionary_map()); + DCHECK(normalized_map->is_dictionary_map()); FixedArray::set(GetIndex(fast_map), *normalized_map); } @@ -4747,15 +4485,24 @@ void JSObject::NormalizeProperties(Handle<JSObject> object, int expected_additional_properties) { if (!object->HasFastProperties()) return; + Handle<Map> map(object->map()); + Handle<Map> new_map = Map::Normalize(map, mode); + + MigrateFastToSlow(object, new_map, expected_additional_properties); +} + + +void JSObject::MigrateFastToSlow(Handle<JSObject> object, + Handle<Map> new_map, + int expected_additional_properties) { // The global object is always normalized. - ASSERT(!object->IsGlobalObject()); + DCHECK(!object->IsGlobalObject()); // JSGlobalProxy must never be normalized - ASSERT(!object->IsJSGlobalProxy()); + DCHECK(!object->IsJSGlobalProxy()); Isolate* isolate = object->GetIsolate(); HandleScope scope(isolate); Handle<Map> map(object->map()); - Handle<Map> new_map = Map::Normalize(map, mode); // Allocate new content. int real_size = map->NumberOfOwnDescriptors(); @@ -4782,8 +4529,14 @@ void JSObject::NormalizeProperties(Handle<JSObject> object, } case FIELD: { Handle<Name> key(descs->GetKey(i)); + FieldIndex index = FieldIndex::ForDescriptor(*map, i); Handle<Object> value( - object->RawFastPropertyAt(descs->GetFieldIndex(i)), isolate); + object->RawFastPropertyAt(index), isolate); + if (details.representation().IsDouble()) { + DCHECK(value->IsMutableHeapNumber()); + Handle<HeapNumber> old = Handle<HeapNumber>::cast(value); + value = isolate->factory()->NewHeapNumber(old->value()); + } PropertyDetails d = PropertyDetails(details.attributes(), NORMAL, i + 1); dictionary = NameDictionary::Add(dictionary, key, value, d); @@ -4816,13 +4569,15 @@ void JSObject::NormalizeProperties(Handle<JSObject> object, // Resize the object in the heap if necessary. 
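// "Resize" here means shrink: a fast-to-slow migrated object needs less
// instance space, and the freed tail is handed back to the heap as a filler
// object so heap iteration and the concurrent sweeper still see a walkable
// sequence of objects. The new "instance_size_delta > 0" guard below skips
// the filler entirely when the size did not change. A simplified stand-alone
// sketch (CreateFiller is a hypothetical stand-in for
// Heap::CreateFillerObjectAt):
#include <cassert>
inline void ShrinkInstance(char* addr, int old_size, int new_size,
                           void (*CreateFiller)(char*, int)) {
  int delta = old_size - new_size;
  assert(delta >= 0);  // migration may only shrink the instance
  if (delta > 0) CreateFiller(addr + new_size, delta);
}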
int new_instance_size = new_map->instance_size(); int instance_size_delta = map->instance_size() - new_instance_size; - ASSERT(instance_size_delta >= 0); - Heap* heap = isolate->heap(); - heap->CreateFillerObjectAt(object->address() + new_instance_size, - instance_size_delta); - heap->AdjustLiveBytes(object->address(), - -instance_size_delta, - Heap::FROM_MUTATOR); + DCHECK(instance_size_delta >= 0); + + if (instance_size_delta > 0) { + Heap* heap = isolate->heap(); + heap->CreateFillerObjectAt(object->address() + new_instance_size, + instance_size_delta); + heap->AdjustLiveBytes(object->address(), -instance_size_delta, + Heap::FROM_MUTATOR); + } // We are storing the new map using release store after creating a filler for // the left-over space to avoid races with the sweeper thread. @@ -4834,17 +4589,18 @@ void JSObject::NormalizeProperties(Handle<JSObject> object, #ifdef DEBUG if (FLAG_trace_normalization) { - PrintF("Object properties have been normalized:\n"); - object->Print(); + OFStream os(stdout); + os << "Object properties have been normalized:\n"; + object->Print(os); } #endif } -void JSObject::TransformToFastProperties(Handle<JSObject> object, - int unused_property_fields) { +void JSObject::MigrateSlowToFast(Handle<JSObject> object, + int unused_property_fields) { if (object->HasFastProperties()) return; - ASSERT(!object->IsGlobalObject()); + DCHECK(!object->IsGlobalObject()); Isolate* isolate = object->GetIsolate(); Factory* factory = isolate->factory(); Handle<NameDictionary> dictionary(object->property_dictionary()); @@ -4868,7 +4624,7 @@ void JSObject::TransformToFastProperties(Handle<JSObject> object, if (dictionary->IsKey(k)) { Object* value = dictionary->ValueAt(i); PropertyType type = dictionary->DetailsAt(i).type(); - ASSERT(type != FIELD); + DCHECK(type != FIELD); instance_descriptor_length++; if (type == NORMAL && !value->IsJSFunction()) { number_of_fields += 1; @@ -4884,13 +4640,13 @@ void JSObject::TransformToFastProperties(Handle<JSObject> object, if (instance_descriptor_length == 0) { DisallowHeapAllocation no_gc; - ASSERT_LE(unused_property_fields, inobject_props); + DCHECK_LE(unused_property_fields, inobject_props); // Transform the object. new_map->set_unused_property_fields(inobject_props); - object->set_map(*new_map); + object->synchronized_set_map(*new_map); object->set_properties(isolate->heap()->empty_fixed_array()); // Check that it really works. - ASSERT(object->HasFastProperties()); + DCHECK(object->HasFastProperties()); return; } @@ -4959,7 +4715,7 @@ void JSObject::TransformToFastProperties(Handle<JSObject> object, } } } - ASSERT(current_offset == number_of_fields); + DCHECK(current_offset == number_of_fields); descriptors->Sort(); @@ -4968,39 +4724,20 @@ void JSObject::TransformToFastProperties(Handle<JSObject> object, new_map->set_unused_property_fields(unused_property_fields); // Transform the object. - object->set_map(*new_map); + object->synchronized_set_map(*new_map); object->set_properties(*fields); - ASSERT(object->IsJSObject()); + DCHECK(object->IsJSObject()); // Check that it really works. - ASSERT(object->HasFastProperties()); + DCHECK(object->HasFastProperties()); } void JSObject::ResetElements(Handle<JSObject> object) { - if (object->map()->is_observed()) { - // Maintain invariant that observed elements are always in dictionary mode. 
- Isolate* isolate = object->GetIsolate(); - Factory* factory = isolate->factory(); - Handle<SeededNumberDictionary> dictionary = - SeededNumberDictionary::New(isolate, 0); - if (object->map() == *factory->sloppy_arguments_elements_map()) { - FixedArray::cast(object->elements())->set(1, *dictionary); - } else { - object->set_elements(*dictionary); - } - return; - } - - ElementsKind elements_kind = GetInitialFastElementsKind(); - if (!FLAG_smi_only_arrays) { - elements_kind = FastSmiToObjectElementsKind(elements_kind); - } - Handle<Map> map = JSObject::GetElementsTransitionMap(object, elements_kind); - DisallowHeapAllocation no_gc; - Handle<FixedArrayBase> elements(map->GetInitialElements()); - JSObject::SetMapAndElements(object, map, elements); + Heap* heap = object->GetIsolate()->heap(); + CHECK(object->map() != heap->sloppy_arguments_elements_map()); + object->set_elements(object->map()->GetInitialElements()); } @@ -5036,7 +4773,7 @@ static Handle<SeededNumberDictionary> CopyFastElementsToDictionary( Handle<SeededNumberDictionary> JSObject::NormalizeElements( Handle<JSObject> object) { - ASSERT(!object->HasExternalArrayElements() && + DCHECK(!object->HasExternalArrayElements() && !object->HasFixedTypedArrayElements()); Isolate* isolate = object->GetIsolate(); @@ -5050,7 +4787,7 @@ Handle<SeededNumberDictionary> JSObject::NormalizeElements( } if (array->IsDictionary()) return Handle<SeededNumberDictionary>::cast(array); - ASSERT(object->HasFastSmiOrObjectElements() || + DCHECK(object->HasFastSmiOrObjectElements() || object->HasFastDoubleElements() || object->HasFastArgumentsElements()); // Compute the effective length and allocate a new backing store. @@ -5082,12 +4819,13 @@ Handle<SeededNumberDictionary> JSObject::NormalizeElements( #ifdef DEBUG if (FLAG_trace_normalization) { - PrintF("Object elements have been normalized:\n"); - object->Print(); + OFStream os(stdout); + os << "Object elements have been normalized:\n"; + object->Print(os); } #endif - ASSERT(object->HasDictionaryElements() || + DCHECK(object->HasDictionaryElements() || object->HasDictionaryArgumentsElements()); return dictionary; } @@ -5109,20 +4847,20 @@ static Smi* GenerateIdentityHash(Isolate* isolate) { void JSObject::SetIdentityHash(Handle<JSObject> object, Handle<Smi> hash) { - ASSERT(!object->IsJSGlobalProxy()); + DCHECK(!object->IsJSGlobalProxy()); Isolate* isolate = object->GetIsolate(); SetHiddenProperty(object, isolate->factory()->identity_hash_string(), hash); } template<typename ProxyType> -static Handle<Object> GetOrCreateIdentityHashHelper(Handle<ProxyType> proxy) { +static Handle<Smi> GetOrCreateIdentityHashHelper(Handle<ProxyType> proxy) { Isolate* isolate = proxy->GetIsolate(); - Handle<Object> hash(proxy->hash(), isolate); - if (hash->IsSmi()) return hash; + Handle<Object> maybe_hash(proxy->hash(), isolate); + if (maybe_hash->IsSmi()) return Handle<Smi>::cast(maybe_hash); - hash = handle(GenerateIdentityHash(isolate), isolate); + Handle<Smi> hash(GenerateIdentityHash(isolate), isolate); proxy->set_hash(*hash); return hash; } @@ -5142,17 +4880,17 @@ Object* JSObject::GetIdentityHash() { } -Handle<Object> JSObject::GetOrCreateIdentityHash(Handle<JSObject> object) { +Handle<Smi> JSObject::GetOrCreateIdentityHash(Handle<JSObject> object) { if (object->IsJSGlobalProxy()) { return GetOrCreateIdentityHashHelper(Handle<JSGlobalProxy>::cast(object)); } Isolate* isolate = object->GetIsolate(); - Handle<Object> hash(object->GetIdentityHash(), isolate); - if (hash->IsSmi()) return hash; + Handle<Object> maybe_hash(object->GetIdentityHash(), isolate); + if (maybe_hash->IsSmi()) return Handle<Smi>::cast(maybe_hash); - hash = handle(GenerateIdentityHash(isolate), isolate); + Handle<Smi> hash(GenerateIdentityHash(isolate), isolate); SetHiddenProperty(object, isolate->factory()->identity_hash_string(), hash); return hash; } @@ -5163,25 +4901,25 @@ Object* JSProxy::GetIdentityHash() { } -Handle<Object> JSProxy::GetOrCreateIdentityHash(Handle<JSProxy> proxy) { +Handle<Smi> JSProxy::GetOrCreateIdentityHash(Handle<JSProxy> proxy) { return GetOrCreateIdentityHashHelper(proxy); } Object* JSObject::GetHiddenProperty(Handle<Name> key) { DisallowHeapAllocation no_gc; - ASSERT(key->IsUniqueName()); + DCHECK(key->IsUniqueName()); if (IsJSGlobalProxy()) { // JSGlobalProxies store their hash internally. - ASSERT(*key != GetHeap()->identity_hash_string()); + DCHECK(*key != GetHeap()->identity_hash_string()); // For a proxy, use the prototype as target object. - Object* proxy_parent = GetPrototype(); + PrototypeIterator iter(GetIsolate(), this); // If the proxy is detached, return undefined. - if (proxy_parent->IsNull()) return GetHeap()->the_hole_value(); - ASSERT(proxy_parent->IsJSGlobalObject()); - return JSObject::cast(proxy_parent)->GetHiddenProperty(key); + if (iter.IsAtEnd()) return GetHeap()->the_hole_value(); + DCHECK(iter.GetCurrent()->IsJSGlobalObject()); + return JSObject::cast(iter.GetCurrent())->GetHiddenProperty(key); } - ASSERT(!IsJSGlobalProxy()); + DCHECK(!IsJSGlobalProxy()); Object* inline_value = GetHiddenPropertiesHashTable(); if (inline_value->IsSmi()) { @@ -5206,18 +4944,20 @@ Handle<Object> JSObject::SetHiddenProperty(Handle<JSObject> object, Handle<Object> value) { Isolate* isolate = object->GetIsolate(); - ASSERT(key->IsUniqueName()); + DCHECK(key->IsUniqueName()); if (object->IsJSGlobalProxy()) { // JSGlobalProxies store their hash internally. - ASSERT(*key != *isolate->factory()->identity_hash_string()); + DCHECK(*key != *isolate->factory()->identity_hash_string()); // For a proxy, use the prototype as target object. - Handle<Object> proxy_parent(object->GetPrototype(), isolate); + PrototypeIterator iter(isolate, object); // If the proxy is detached, return undefined.
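// This is the detachment idiom this patch applies throughout the file: direct
// GetPrototype()/IsNull() probing is replaced by a PrototypeIterator, where
// being at the end immediately means the global proxy was detached from its
// global object. A minimal mirror of the iterator contract over a plain
// chain -- hypothetical types, not V8's PrototypeIterator:
struct Obj { Obj* prototype; };
class ProtoIter {
 public:
  explicit ProtoIter(Obj* o) : current_(o->prototype) {}
  bool IsAtEnd() const { return current_ == nullptr; }  // detached or chain end
  Obj* GetCurrent() const { return current_; }
  void Advance() { current_ = current_->prototype; }
 private:
  Obj* current_;
};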
- if (proxy_parent->IsNull()) return isolate->factory()->undefined_value(); - ASSERT(proxy_parent->IsJSGlobalObject()); - return SetHiddenProperty(Handle<JSObject>::cast(proxy_parent), key, value); + if (iter.IsAtEnd()) return isolate->factory()->undefined_value(); + DCHECK(PrototypeIterator::GetCurrent(iter)->IsJSGlobalObject()); + return SetHiddenProperty( + Handle<JSObject>::cast(PrototypeIterator::GetCurrent(iter)), key, + value); } - ASSERT(!object->IsJSGlobalProxy()); + DCHECK(!object->IsJSGlobalProxy()); Handle<Object> inline_value(object->GetHiddenPropertiesHashTable(), isolate); @@ -5247,35 +4987,40 @@ Handle<Object> JSObject::SetHiddenProperty(Handle<JSObject> object, void JSObject::DeleteHiddenProperty(Handle<JSObject> object, Handle<Name> key) { Isolate* isolate = object->GetIsolate(); - ASSERT(key->IsUniqueName()); + DCHECK(key->IsUniqueName()); if (object->IsJSGlobalProxy()) { - Handle<Object> proto(object->GetPrototype(), isolate); - if (proto->IsNull()) return; - ASSERT(proto->IsJSGlobalObject()); - return DeleteHiddenProperty(Handle<JSObject>::cast(proto), key); + PrototypeIterator iter(isolate, object); + if (iter.IsAtEnd()) return; + DCHECK(PrototypeIterator::GetCurrent(iter)->IsJSGlobalObject()); + return DeleteHiddenProperty( + Handle<JSObject>::cast(PrototypeIterator::GetCurrent(iter)), key); } Object* inline_value = object->GetHiddenPropertiesHashTable(); // We never delete (inline-stored) identity hashes. - ASSERT(*key != *isolate->factory()->identity_hash_string()); + DCHECK(*key != *isolate->factory()->identity_hash_string()); if (inline_value->IsUndefined() || inline_value->IsSmi()) return; Handle<ObjectHashTable> hashtable(ObjectHashTable::cast(inline_value)); - ObjectHashTable::Put(hashtable, key, isolate->factory()->the_hole_value()); + bool was_present = false; + ObjectHashTable::Remove(hashtable, key, &was_present); } bool JSObject::HasHiddenProperties(Handle<JSObject> object) { Handle<Name> hidden = object->GetIsolate()->factory()->hidden_string(); - return GetPropertyAttributePostInterceptor( - object, object, hidden, false) != ABSENT; + LookupIterator it(object, hidden, LookupIterator::CHECK_OWN_REAL); + Maybe<PropertyAttributes> maybe = GetPropertyAttributes(&it); + // Cannot get an exception since the hidden_string isn't accessible to JS. + DCHECK(maybe.has_value); + return maybe.value != ABSENT; } Object* JSObject::GetHiddenPropertiesHashTable() { - ASSERT(!IsJSGlobalProxy()); + DCHECK(!IsJSGlobalProxy()); if (HasFastProperties()) { // If the object has fast properties, check whether the first slot // in the descriptor array matches the hidden string. Since the @@ -5286,11 +5031,12 @@ Object* JSObject::GetHiddenPropertiesHashTable() { int sorted_index = descriptors->GetSortedKeyIndex(0); if (descriptors->GetKey(sorted_index) == GetHeap()->hidden_string() && sorted_index < map()->NumberOfOwnDescriptors()) { - ASSERT(descriptors->GetType(sorted_index) == FIELD); - ASSERT(descriptors->GetDetails(sorted_index).representation(). + DCHECK(descriptors->GetType(sorted_index) == FIELD); + DCHECK(descriptors->GetDetails(sorted_index).representation(). 
IsCompatibleForLoad(Representation::Tagged())); - return this->RawFastPropertyAt( - descriptors->GetFieldIndex(sorted_index)); + FieldIndex index = FieldIndex::ForDescriptor(this->map(), + sorted_index); + return this->RawFastPropertyAt(index); } else { return GetHeap()->undefined_value(); } @@ -5300,12 +5046,11 @@ Object* JSObject::GetHiddenPropertiesHashTable() { } else { Isolate* isolate = GetIsolate(); LookupResult result(isolate); - LocalLookupRealNamedProperty(isolate->factory()->hidden_string(), &result); + LookupOwnRealNamedProperty(isolate->factory()->hidden_string(), &result); if (result.IsFound()) { - ASSERT(result.IsNormal()); - ASSERT(result.holder() == this); - Object* value = GetNormalizedProperty(&result); - if (!value->IsTheHole()) return value; + DCHECK(result.IsNormal()); + DCHECK(result.holder() == this); + return GetNormalizedProperty(&result); } return GetHeap()->undefined_value(); } @@ -5332,14 +5077,9 @@ Handle<ObjectHashTable> JSObject::GetOrCreateHiddenPropertiesHashtable( inline_value); } - JSObject::SetLocalPropertyIgnoreAttributes( - object, - isolate->factory()->hidden_string(), - hashtable, - DONT_ENUM, - OPTIMAL_REPRESENTATION, - ALLOW_AS_CONSTANT, - OMIT_EXTENSIBILITY_CHECK).Assert(); + JSObject::SetOwnPropertyIgnoreAttributes( + object, isolate->factory()->hidden_string(), + hashtable, DONT_ENUM).Assert(); return hashtable; } @@ -5347,13 +5087,13 @@ Handle<ObjectHashTable> JSObject::GetOrCreateHiddenPropertiesHashtable( Handle<Object> JSObject::SetHiddenPropertiesHashTable(Handle<JSObject> object, Handle<Object> value) { - ASSERT(!object->IsJSGlobalProxy()); + DCHECK(!object->IsJSGlobalProxy()); Isolate* isolate = object->GetIsolate(); // We can store the identity hash inline iff there is no backing store // for hidden properties yet. - ASSERT(JSObject::HasHiddenProperties(object) != value->IsSmi()); + DCHECK(JSObject::HasHiddenProperties(object) != value->IsSmi()); if (object->HasFastProperties()) { // If the object has fast properties, check whether the first slot // in the descriptor array matches the hidden string. Since the @@ -5370,30 +5110,31 @@ Handle<Object> JSObject::SetHiddenPropertiesHashTable(Handle<JSObject> object, } } - SetLocalPropertyIgnoreAttributes(object, - isolate->factory()->hidden_string(), - value, - DONT_ENUM, - OPTIMAL_REPRESENTATION, - ALLOW_AS_CONSTANT, - OMIT_EXTENSIBILITY_CHECK).Assert(); + SetOwnPropertyIgnoreAttributes(object, isolate->factory()->hidden_string(), + value, DONT_ENUM, + OMIT_EXTENSIBILITY_CHECK).Assert(); return object; } Handle<Object> JSObject::DeletePropertyPostInterceptor(Handle<JSObject> object, Handle<Name> name, - DeleteMode mode) { - // Check local property, ignore interceptor. + DeleteMode delete_mode) { + // Check own property, ignore interceptor. Isolate* isolate = object->GetIsolate(); - LookupResult result(isolate); - object->LocalLookupRealNamedProperty(name, &result); - if (!result.IsFound()) return isolate->factory()->true_value(); + LookupResult lookup(isolate); + object->LookupOwnRealNamedProperty(name, &lookup); + if (!lookup.IsFound()) return isolate->factory()->true_value(); + PropertyNormalizationMode mode = object->map()->is_prototype_map() + ? KEEP_INOBJECT_PROPERTIES + : CLEAR_INOBJECT_PROPERTIES; // Normalize object if needed. 
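// New in this patch: the normalization mode is chosen from the map. Prototype
// maps keep their in-object slots so the object can later be re-optimized
// into fast properties (see the ReoptimizeIfPrototype() calls this patch
// adds); all other maps clear them. The selection, extracted for clarity --
// enumerators as used in this diff, the helper itself is hypothetical:
enum PropertyNormalizationMode {
  CLEAR_INOBJECT_PROPERTIES,
  KEEP_INOBJECT_PROPERTIES
};
inline PropertyNormalizationMode NormalizationModeFor(bool is_prototype_map) {
  return is_prototype_map ? KEEP_INOBJECT_PROPERTIES
                          : CLEAR_INOBJECT_PROPERTIES;
}
// The identical ternary reappears in JSObject::DeleteProperty further down.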
- NormalizeProperties(object, CLEAR_INOBJECT_PROPERTIES, 0); + NormalizeProperties(object, mode, 0); - return DeleteNormalizedProperty(object, name, mode); + Handle<Object> result = DeleteNormalizedProperty(object, name, delete_mode); + ReoptimizeIfPrototype(object); + return result; } @@ -5416,7 +5157,7 @@ MaybeHandle<Object> JSObject::DeletePropertyWithInterceptor( args.Call(deleter, v8::Utils::ToLocal(Handle<String>::cast(name))); RETURN_EXCEPTION_IF_SCHEDULED_EXCEPTION(isolate, Object); if (!result.IsEmpty()) { - ASSERT(result->IsBoolean()); + DCHECK(result->IsBoolean()); Handle<Object> result_internal = v8::Utils::OpenHandle(*result); result_internal->VerifyApiCallResultType(); // Rebox CustomArguments::kReturnValueOffset before returning. @@ -5450,7 +5191,7 @@ MaybeHandle<Object> JSObject::DeleteElementWithInterceptor( v8::Handle<v8::Boolean> result = args.Call(deleter, index); RETURN_EXCEPTION_IF_SCHEDULED_EXCEPTION(isolate, Object); if (!result.IsEmpty()) { - ASSERT(result->IsBoolean()); + DCHECK(result->IsBoolean()); Handle<Object> result_internal = v8::Utils::OpenHandle(*result); result_internal->VerifyApiCallResultType(); // Rebox CustomArguments::kReturnValueOffset before returning. @@ -5491,18 +5232,22 @@ MaybeHandle<Object> JSObject::DeleteElement(Handle<JSObject> object, } if (object->IsJSGlobalProxy()) { - Handle<Object> proto(object->GetPrototype(), isolate); - if (proto->IsNull()) return factory->false_value(); - ASSERT(proto->IsJSGlobalObject()); - return DeleteElement(Handle<JSObject>::cast(proto), index, mode); + PrototypeIterator iter(isolate, object); + if (iter.IsAtEnd()) return factory->false_value(); + DCHECK(PrototypeIterator::GetCurrent(iter)->IsJSGlobalObject()); + return DeleteElement( + Handle<JSObject>::cast(PrototypeIterator::GetCurrent(iter)), index, + mode); } Handle<Object> old_value; bool should_enqueue_change_record = false; if (object->map()->is_observed()) { - should_enqueue_change_record = HasLocalElement(object, index); + Maybe<bool> maybe = HasOwnElement(object, index); + if (!maybe.has_value) return MaybeHandle<Object>(); + should_enqueue_change_record = maybe.value; if (should_enqueue_change_record) { - if (!GetLocalElementAccessorPair(object, index).is_null()) { + if (!GetOwnElementAccessorPair(object, index).is_null()) { old_value = Handle<Object>::cast(factory->the_hole_value()); } else { old_value = Object::GetElement( @@ -5521,9 +5266,13 @@ MaybeHandle<Object> JSObject::DeleteElement(Handle<JSObject> object, Handle<Object> result; ASSIGN_RETURN_ON_EXCEPTION(isolate, result, maybe_result, Object); - if (should_enqueue_change_record && !HasLocalElement(object, index)) { - Handle<String> name = factory->Uint32ToString(index); - EnqueueChangeRecord(object, "delete", name, old_value); + if (should_enqueue_change_record) { + Maybe<bool> maybe = HasOwnElement(object, index); + if (!maybe.has_value) return MaybeHandle<Object>(); + if (!maybe.value) { + Handle<String> name = factory->Uint32ToString(index); + EnqueueChangeRecord(object, "delete", name, old_value); + } } return result; @@ -5532,10 +5281,10 @@ MaybeHandle<Object> JSObject::DeleteElement(Handle<JSObject> object, MaybeHandle<Object> JSObject::DeleteProperty(Handle<JSObject> object, Handle<Name> name, - DeleteMode mode) { + DeleteMode delete_mode) { Isolate* isolate = object->GetIsolate(); // ECMA-262, 3rd, 8.6.2.5 - ASSERT(name->IsName()); + DCHECK(name->IsName()); // Check access rights if needed. 
if (object->IsAccessCheckNeeded() && @@ -5546,24 +5295,25 @@ MaybeHandle<Object> JSObject::DeleteProperty(Handle<JSObject> object, } if (object->IsJSGlobalProxy()) { - Object* proto = object->GetPrototype(); - if (proto->IsNull()) return isolate->factory()->false_value(); - ASSERT(proto->IsJSGlobalObject()); + PrototypeIterator iter(isolate, object); + if (iter.IsAtEnd()) return isolate->factory()->false_value(); + DCHECK(PrototypeIterator::GetCurrent(iter)->IsJSGlobalObject()); return JSGlobalObject::DeleteProperty( - handle(JSGlobalObject::cast(proto)), name, mode); + Handle<JSGlobalObject>::cast(PrototypeIterator::GetCurrent(iter)), name, + delete_mode); } uint32_t index = 0; if (name->AsArrayIndex(&index)) { - return DeleteElement(object, index, mode); + return DeleteElement(object, index, delete_mode); } LookupResult lookup(isolate); - object->LocalLookup(name, &lookup, true); + object->LookupOwn(name, &lookup, true); if (!lookup.IsFound()) return isolate->factory()->true_value(); // Ignore attributes if forcing a deletion. - if (lookup.IsDontDelete() && mode != FORCE_DELETION) { - if (mode == STRICT_DELETION) { + if (lookup.IsDontDelete() && delete_mode != FORCE_DELETION) { + if (delete_mode == STRICT_DELETION) { // Deleting a non-configurable property in strict mode. Handle<Object> args[2] = { name, object }; Handle<Object> error = isolate->factory()->NewTypeError( @@ -5585,8 +5335,8 @@ MaybeHandle<Object> JSObject::DeleteProperty(Handle<JSObject> object, // Check for interceptor. if (lookup.IsInterceptor()) { // Skip interceptor if forcing a deletion. - if (mode == FORCE_DELETION) { - result = DeletePropertyPostInterceptor(object, name, mode); + if (delete_mode == FORCE_DELETION) { + result = DeletePropertyPostInterceptor(object, name, delete_mode); } else { ASSIGN_RETURN_ON_EXCEPTION( isolate, result, @@ -5594,14 +5344,22 @@ MaybeHandle<Object> JSObject::DeleteProperty(Handle<JSObject> object, Object); } } else { + PropertyNormalizationMode mode = object->map()->is_prototype_map() + ? KEEP_INOBJECT_PROPERTIES + : CLEAR_INOBJECT_PROPERTIES; // Normalize object if needed. - NormalizeProperties(object, CLEAR_INOBJECT_PROPERTIES, 0); + NormalizeProperties(object, mode, 0); // Make sure the properties are normalized before removing the entry. - result = DeleteNormalizedProperty(object, name, mode); + result = DeleteNormalizedProperty(object, name, delete_mode); + ReoptimizeIfPrototype(object); } - if (is_observed && !HasLocalProperty(object, name)) { - EnqueueChangeRecord(object, "delete", name, old_value); + if (is_observed) { + Maybe<bool> maybe = HasOwnProperty(object, name); + if (!maybe.has_value) return MaybeHandle<Object>(); + if (!maybe.value) { + EnqueueChangeRecord(object, "delete", name, old_value); + } } return result; @@ -5633,7 +5391,7 @@ MaybeHandle<Object> JSReceiver::DeleteProperty(Handle<JSReceiver> object, bool JSObject::ReferencesObjectFromElements(FixedArray* elements, ElementsKind kind, Object* object) { - ASSERT(IsFastObjectElementsKind(kind) || + DCHECK(IsFastObjectElementsKind(kind) || kind == DICTIONARY_ELEMENTS); if (IsFastObjectElementsKind(kind)) { int length = IsJSArray() @@ -5720,11 +5478,10 @@ bool JSObject::ReferencesObject(Object* obj) { // For functions check the context. if (IsJSFunction()) { // Get the constructor function for arguments array. 
- JSObject* arguments_boilerplate = - heap->isolate()->context()->native_context()-> - sloppy_arguments_boilerplate(); + Map* arguments_map = + heap->isolate()->context()->native_context()->sloppy_arguments_map(); JSFunction* arguments_function = - JSFunction::cast(arguments_boilerplate->map()->constructor()); + JSFunction::cast(arguments_map->constructor()); // Get the context and don't check if it is the native context. JSFunction* f = JSFunction::cast(this); @@ -5780,10 +5537,11 @@ MaybeHandle<Object> JSObject::PreventExtensions(Handle<JSObject> object) { } if (object->IsJSGlobalProxy()) { - Handle<Object> proto(object->GetPrototype(), isolate); - if (proto->IsNull()) return object; - ASSERT(proto->IsJSGlobalObject()); - return PreventExtensions(Handle<JSObject>::cast(proto)); + PrototypeIterator iter(isolate, object); + if (iter.IsAtEnd()) return object; + DCHECK(PrototypeIterator::GetCurrent(iter)->IsJSGlobalObject()); + return PreventExtensions( + Handle<JSObject>::cast(PrototypeIterator::GetCurrent(iter))); } // It's not possible to seal objects with external array elements @@ -5798,7 +5556,7 @@ MaybeHandle<Object> JSObject::PreventExtensions(Handle<JSObject> object) { // If there are fast elements we normalize. Handle<SeededNumberDictionary> dictionary = NormalizeElements(object); - ASSERT(object->HasDictionaryElements() || + DCHECK(object->HasDictionaryElements() || object->HasDictionaryArgumentsElements()); // Make sure that we never go back to fast case. @@ -5811,7 +5569,7 @@ MaybeHandle<Object> JSObject::PreventExtensions(Handle<JSObject> object) { new_map->set_is_extensible(false); JSObject::MigrateToMap(object, new_map); - ASSERT(!object->map()->is_extensible()); + DCHECK(!object->map()->is_extensible()); if (object->map()->is_observed()) { EnqueueChangeRecord(object, "preventExtensions", Handle<Name>(), @@ -5826,7 +5584,8 @@ static void FreezeDictionary(Dictionary* dictionary) { int capacity = dictionary->Capacity(); for (int i = 0; i < capacity; i++) { Object* k = dictionary->KeyAt(i); - if (dictionary->IsKey(k)) { + if (dictionary->IsKey(k) && + !(k->IsSymbol() && Symbol::cast(k)->is_private())) { PropertyDetails details = dictionary->DetailsAt(i); int attrs = DONT_DELETE; // READ_ONLY is an invalid attribute for JS setters/getters. @@ -5847,8 +5606,8 @@ static void FreezeDictionary(Dictionary* dictionary) { MaybeHandle<Object> JSObject::Freeze(Handle<JSObject> object) { // Freezing sloppy arguments should be handled elsewhere. 
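// FreezeDictionary (above) composes the frozen attribute set with bit-ORs:
// DONT_DELETE is always added, READ_ONLY only for data properties (it is an
// invalid attribute for JS setters/getters), and -- new in this patch --
// private symbols are skipped entirely, so freezing stays unobservable
// through them. Sketch of the attribute arithmetic, assuming the usual
// READ_ONLY = 1, DONT_ENUM = 2, DONT_DELETE = 4 bit values:
enum { READ_ONLY = 1, DONT_ENUM = 2, DONT_DELETE = 4 };
inline int FrozenAttributes(int current, bool is_accessor_pair) {
  int attrs = current | DONT_DELETE;
  if (!is_accessor_pair) attrs |= READ_ONLY;  // never READ_ONLY on accessors
  return attrs;
}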
- ASSERT(!object->HasSloppyArgumentsElements()); - ASSERT(!object->map()->is_observed()); + DCHECK(!object->HasSloppyArgumentsElements()); + DCHECK(!object->map()->is_observed()); if (object->map()->is_frozen()) return object; @@ -5862,10 +5621,10 @@ MaybeHandle<Object> JSObject::Freeze(Handle<JSObject> object) { } if (object->IsJSGlobalProxy()) { - Handle<Object> proto(object->GetPrototype(), isolate); - if (proto->IsNull()) return object; - ASSERT(proto->IsJSGlobalObject()); - return Freeze(Handle<JSObject>::cast(proto)); + PrototypeIterator iter(isolate, object); + if (iter.IsAtEnd()) return object; + DCHECK(PrototypeIterator::GetCurrent(iter)->IsJSGlobalObject()); + return Freeze(Handle<JSObject>::cast(PrototypeIterator::GetCurrent(iter))); } // It's not possible to freeze objects with external array elements @@ -5905,15 +5664,16 @@ MaybeHandle<Object> JSObject::Freeze(Handle<JSObject> object) { isolate->heap()->frozen_symbol()); if (transition_index != TransitionArray::kNotFound) { Handle<Map> transition_map(old_map->GetTransition(transition_index)); - ASSERT(transition_map->has_dictionary_elements()); - ASSERT(transition_map->is_frozen()); - ASSERT(!transition_map->is_extensible()); + DCHECK(transition_map->has_dictionary_elements()); + DCHECK(transition_map->is_frozen()); + DCHECK(!transition_map->is_extensible()); JSObject::MigrateToMap(object, transition_map); } else if (object->HasFastProperties() && old_map->CanHaveMoreTransitions()) { // Create a new descriptor array with fully-frozen properties Handle<Map> new_map = Map::CopyForFreeze(old_map); JSObject::MigrateToMap(object, new_map); } else { + DCHECK(old_map->is_dictionary_map() || !old_map->is_prototype_map()); // Slow path: need to normalize properties for safety NormalizeProperties(object, CLEAR_INOBJECT_PROPERTIES, 0); @@ -5929,7 +5689,7 @@ MaybeHandle<Object> JSObject::Freeze(Handle<JSObject> object) { FreezeDictionary(object->property_dictionary()); } - ASSERT(object->map()->has_dictionary_elements()); + DCHECK(object->map()->has_dictionary_elements()); if (!new_element_dictionary.is_null()) { object->set_elements(*new_element_dictionary); } @@ -5947,17 +5707,17 @@ MaybeHandle<Object> JSObject::Freeze(Handle<JSObject> object) { void JSObject::SetObserved(Handle<JSObject> object) { - ASSERT(!object->IsJSGlobalProxy()); - ASSERT(!object->IsJSGlobalObject()); + DCHECK(!object->IsJSGlobalProxy()); + DCHECK(!object->IsJSGlobalObject()); Isolate* isolate = object->GetIsolate(); Handle<Map> new_map; Handle<Map> old_map(object->map(), isolate); - ASSERT(!old_map->is_observed()); + DCHECK(!old_map->is_observed()); int transition_index = old_map->SearchTransition( isolate->heap()->observed_symbol()); if (transition_index != TransitionArray::kNotFound) { new_map = handle(old_map->GetTransition(transition_index), isolate); - ASSERT(new_map->is_observed()); + DCHECK(new_map->is_observed()); } else if (object->HasFastProperties() && old_map->CanHaveMoreTransitions()) { new_map = Map::CopyForObserved(old_map); } else { @@ -5970,10 +5730,10 @@ void JSObject::SetObserved(Handle<JSObject> object) { Handle<Object> JSObject::FastPropertyAt(Handle<JSObject> object, Representation representation, - int index) { + FieldIndex index) { Isolate* isolate = object->GetIsolate(); Handle<Object> raw_value(object->RawFastPropertyAt(index), isolate); - return Object::NewStorageFor(isolate, raw_value, representation); + return Object::WrapForRead(isolate, raw_value, representation); } @@ -6015,7 +5775,7 @@ MaybeHandle<JSObject> 
JSObjectWalkVisitor<ContextObject>::StructureWalk( Handle<JSObject> object) { Isolate* isolate = this->isolate(); bool copying = this->copying(); - bool shallow = hints_ == JSObject::kObjectIsShallowArray; + bool shallow = hints_ == JSObject::kObjectIsShallow; if (!shallow) { StackLimitCheck check(isolate); @@ -6042,7 +5802,7 @@ MaybeHandle<JSObject> JSObjectWalkVisitor<ContextObject>::StructureWalk( copy = object; } - ASSERT(copying || copy.is_identical_to(object)); + DCHECK(copying || copy.is_identical_to(object)); ElementsKind kind = copy->GetElementsKind(); if (copying && IsFastSmiOrObjectElementsKind(kind) && @@ -6054,14 +5814,14 @@ MaybeHandle<JSObject> JSObjectWalkVisitor<ContextObject>::StructureWalk( if (!shallow) { HandleScope scope(isolate); - // Deep copy local properties. + // Deep copy own properties. if (copy->HasFastProperties()) { Handle<DescriptorArray> descriptors(copy->map()->instance_descriptors()); int limit = copy->map()->NumberOfOwnDescriptors(); for (int i = 0; i < limit; i++) { PropertyDetails details = descriptors->GetDetails(i); if (details.type() != FIELD) continue; - int index = descriptors->GetFieldIndex(i); + FieldIndex index = FieldIndex::ForDescriptor(copy->map(), i); Handle<Object> value(object->RawFastPropertyAt(index), isolate); if (value->IsJSObject()) { ASSIGN_RETURN_ON_EXCEPTION( @@ -6078,13 +5838,15 @@ MaybeHandle<JSObject> JSObjectWalkVisitor<ContextObject>::StructureWalk( } } else { Handle<FixedArray> names = - isolate->factory()->NewFixedArray(copy->NumberOfLocalProperties()); - copy->GetLocalPropertyNames(*names, 0); + isolate->factory()->NewFixedArray(copy->NumberOfOwnProperties()); + copy->GetOwnPropertyNames(*names, 0); for (int i = 0; i < names->length(); i++) { - ASSERT(names->get(i)->IsString()); + DCHECK(names->get(i)->IsString()); Handle<String> key_string(String::cast(names->get(i))); - PropertyAttributes attributes = - JSReceiver::GetLocalPropertyAttribute(copy, key_string); + Maybe<PropertyAttributes> maybe = + JSReceiver::GetOwnPropertyAttributes(copy, key_string); + DCHECK(maybe.has_value); + PropertyAttributes attributes = maybe.value; // Only deep copy fields from the object literal expression. // In particular, don't try to copy the length attribute of // an array. @@ -6099,16 +5861,15 @@ MaybeHandle<JSObject> JSObjectWalkVisitor<ContextObject>::StructureWalk( JSObject); if (copying) { // Creating object copy for literals. No strict mode needed. - JSObject::SetProperty( - copy, key_string, result, NONE, SLOPPY).Assert(); + JSObject::SetProperty(copy, key_string, result, SLOPPY).Assert(); } } } } - // Deep copy local elements. + // Deep copy own elements. // Pixel elements cannot be created using an object literal. 
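// Shape of the StructureWalk deep copy after the local->own renaming above:
// the object itself is copied first, and sub-literals are recursed into only
// when the hints are not kObjectIsShallow. A simplified stand-in for the
// real visitor, assuming a toy Literal type.
#include <map>
#include <memory>
#include <string>

struct Literal {
  std::map<std::string, std::shared_ptr<Literal>> own_fields;
};

std::shared_ptr<Literal> DeepCopyLiteral(const Literal& src, bool shallow) {
  auto copy = std::make_shared<Literal>(src);  // copy the object itself
  if (shallow) return copy;                    // kObjectIsShallow: done
  for (auto& field : copy->own_fields) {       // deep copy own properties
    if (field.second) field.second = DeepCopyLiteral(*field.second, false);
  }
  return copy;
}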
- ASSERT(!copy->HasExternalArrayElements()); + DCHECK(!copy->HasExternalArrayElements()); switch (kind) { case FAST_SMI_ELEMENTS: case FAST_ELEMENTS: @@ -6118,13 +5879,13 @@ MaybeHandle<JSObject> JSObjectWalkVisitor<ContextObject>::StructureWalk( if (elements->map() == isolate->heap()->fixed_cow_array_map()) { #ifdef DEBUG for (int i = 0; i < elements->length(); i++) { - ASSERT(!elements->get(i)->IsJSObject()); + DCHECK(!elements->get(i)->IsJSObject()); } #endif } else { for (int i = 0; i < elements->length(); i++) { Handle<Object> value(elements->get(i), isolate); - ASSERT(value->IsSmi() || + DCHECK(value->IsSmi() || value->IsTheHole() || (IsFastObjectElementsKind(copy->GetElementsKind()))); if (value->IsJSObject()) { @@ -6193,7 +5954,7 @@ MaybeHandle<JSObject> JSObject::DeepWalk( kNoHints); MaybeHandle<JSObject> result = v.StructureWalk(object); Handle<JSObject> for_assert; - ASSERT(!result.ToHandle(&for_assert) || for_assert.is_identical_to(object)); + DCHECK(!result.ToHandle(&for_assert) || for_assert.is_identical_to(object)); return result; } @@ -6205,7 +5966,7 @@ MaybeHandle<JSObject> JSObject::DeepCopy( JSObjectWalkVisitor<AllocationSiteUsageContext> v(site_context, true, hints); MaybeHandle<JSObject> copy = v.StructureWalk(object); Handle<JSObject> for_assert; - ASSERT(!copy.ToHandle(&for_assert) || !for_assert.is_identical_to(object)); + DCHECK(!copy.ToHandle(&for_assert) || !for_assert.is_identical_to(object)); return copy; } @@ -6228,7 +5989,7 @@ Handle<Object> JSObject::GetDataProperty(Handle<JSObject> object, case FIELD: result = FastPropertyAt(Handle<JSObject>(lookup.holder(), isolate), lookup.representation(), - lookup.GetFieldIndex().field_index()); + lookup.GetFieldIndex()); break; case CONSTANT: result = Handle<Object>(lookup.GetConstant(), isolate); @@ -6251,17 +6012,16 @@ Handle<Object> JSObject::GetDataProperty(Handle<JSObject> object, // - This object has no elements. // - No prototype has enumerable properties/elements. 
bool JSReceiver::IsSimpleEnum() { - Heap* heap = GetHeap(); - for (Object* o = this; - o != heap->null_value(); - o = JSObject::cast(o)->GetPrototype()) { - if (!o->IsJSObject()) return false; - JSObject* curr = JSObject::cast(o); + for (PrototypeIterator iter(GetIsolate(), this, + PrototypeIterator::START_AT_RECEIVER); + !iter.IsAtEnd(); iter.Advance()) { + if (!iter.GetCurrent()->IsJSObject()) return false; + JSObject* curr = JSObject::cast(iter.GetCurrent()); int enum_length = curr->map()->EnumLength(); if (enum_length == kInvalidEnumCacheSentinel) return false; if (curr->IsAccessCheckNeeded()) return false; - ASSERT(!curr->HasNamedInterceptor()); - ASSERT(!curr->HasIndexedInterceptor()); + DCHECK(!curr->HasNamedInterceptor()); + DCHECK(!curr->HasIndexedInterceptor()); if (curr->NumberOfEnumElements() > 0) return false; if (curr != this && enum_length != 0) return false; } @@ -6318,17 +6078,17 @@ int Map::NextFreePropertyIndex() { } -void JSReceiver::LocalLookup( +void JSReceiver::LookupOwn( Handle<Name> name, LookupResult* result, bool search_hidden_prototypes) { DisallowHeapAllocation no_gc; - ASSERT(name->IsName()); + DCHECK(name->IsName()); if (IsJSGlobalProxy()) { - Object* proto = GetPrototype(); - if (proto->IsNull()) return result->NotFound(); - ASSERT(proto->IsJSGlobalObject()); - return JSReceiver::cast(proto)->LocalLookup( - name, result, search_hidden_prototypes); + PrototypeIterator iter(GetIsolate(), this); + if (iter.IsAtEnd()) return result->NotFound(); + DCHECK(iter.GetCurrent()->IsJSGlobalObject()); + return JSReceiver::cast(iter.GetCurrent()) + ->LookupOwn(name, result, search_hidden_prototypes); } if (IsJSProxy()) { @@ -6351,14 +6111,14 @@ void JSReceiver::LocalLookup( return; } - js_object->LocalLookupRealNamedProperty(name, result); - if (result->IsFound() || !search_hidden_prototypes) return; + js_object->LookupOwnRealNamedProperty(name, result); + if (result->IsFound() || name->IsOwn() || !search_hidden_prototypes) return; - Object* proto = js_object->GetPrototype(); - if (!proto->IsJSReceiver()) return; - JSReceiver* receiver = JSReceiver::cast(proto); + PrototypeIterator iter(GetIsolate(), js_object); + if (!iter.GetCurrent()->IsJSReceiver()) return; + JSReceiver* receiver = JSReceiver::cast(iter.GetCurrent()); if (receiver->map()->is_hidden_prototype()) { - receiver->LocalLookup(name, result, search_hidden_prototypes); + receiver->LookupOwn(name, result, search_hidden_prototypes); } } @@ -6366,26 +6126,15 @@ void JSReceiver::LocalLookup( void JSReceiver::Lookup(Handle<Name> name, LookupResult* result) { DisallowHeapAllocation no_gc; // Ecma-262 3rd 8.6.2.4 - Handle<Object> null_value = GetIsolate()->factory()->null_value(); - for (Object* current = this; - current != *null_value; - current = JSObject::cast(current)->GetPrototype()) { - JSReceiver::cast(current)->LocalLookup(name, result, false); + for (PrototypeIterator iter(GetIsolate(), this, + PrototypeIterator::START_AT_RECEIVER); + !iter.IsAtEnd(); iter.Advance()) { + JSReceiver::cast(iter.GetCurrent())->LookupOwn(name, result, false); if (result->IsFound()) return; - } - result->NotFound(); -} - - -// Search object and its prototype chain for callback properties. 
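// Model of the rewritten IsSimpleEnum walk above: it now starts at the
// receiver itself (START_AT_RECEIVER) and fails as soon as any object on the
// chain lacks a valid enum cache or owns elements. Node and the -1 sentinel
// are stand-ins for JSObject and kInvalidEnumCacheSentinel; access checks
// are omitted for brevity.
struct Node {
  int enum_length = 0;        // -1 models kInvalidEnumCacheSentinel
  int num_enum_elements = 0;
  Node* prototype = nullptr;
};

bool IsSimpleEnumModel(Node* receiver) {
  for (Node* o = receiver; o != nullptr; o = o->prototype) {
    if (o->enum_length < 0) return false;        // no valid enum cache
    if (o->num_enum_elements > 0) return false;  // elements present
    if (o != receiver && o->enum_length != 0) return false;
  }
  return true;
}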
-void JSObject::LookupCallbackProperty(Handle<Name> name, LookupResult* result) { - DisallowHeapAllocation no_gc; - Handle<Object> null_value = GetIsolate()->factory()->null_value(); - for (Object* current = this; - current != *null_value && current->IsJSObject(); - current = JSObject::cast(current)->GetPrototype()) { - JSObject::cast(current)->LocalLookupRealNamedProperty(name, result); - if (result->IsPropertyCallbacks()) return; + if (name->IsOwn()) { + result->NotFound(); + return; + } } result->NotFound(); } @@ -6403,7 +6152,7 @@ static bool ContainsOnlyValidKeys(Handle<FixedArray> array) { static Handle<FixedArray> ReduceFixedArrayTo( Handle<FixedArray> array, int length) { - ASSERT(array->length() >= length); + DCHECK(array->length() >= length); if (array->length() == length) return array; Handle<FixedArray> new_array = @@ -6427,7 +6176,7 @@ static Handle<FixedArray> GetEnumPropertyKeys(Handle<JSObject> object, own_property_count = object->map()->NumberOfDescribedProperties( OWN_DESCRIPTORS, DONT_SHOW); } else { - ASSERT(own_property_count == object->map()->NumberOfDescribedProperties( + DCHECK(own_property_count == object->map()->NumberOfDescribedProperties( OWN_DESCRIPTORS, DONT_SHOW)); } @@ -6476,21 +6225,15 @@ static Handle<FixedArray> GetEnumPropertyKeys(Handle<JSObject> object, if (details.type() != FIELD) { indices = Handle<FixedArray>(); } else { - int field_index = descs->GetFieldIndex(i); - if (field_index >= map->inobject_properties()) { - field_index = -(field_index - map->inobject_properties() + 1); - } - field_index = field_index << 1; - if (details.representation().IsDouble()) { - field_index |= 1; - } - indices->set(index, Smi::FromInt(field_index)); + FieldIndex field_index = FieldIndex::ForDescriptor(*map, i); + int load_by_field_index = field_index.GetLoadByFieldIndex(); + indices->set(index, Smi::FromInt(load_by_field_index)); } } index++; } } - ASSERT(index == storage->length()); + DCHECK(index == storage->length()); Handle<FixedArray> bridge_storage = isolate->factory()->NewFixedArray( @@ -6522,19 +6265,16 @@ MaybeHandle<FixedArray> JSReceiver::GetKeys(Handle<JSReceiver> object, USE(ContainsOnlyValidKeys); Isolate* isolate = object->GetIsolate(); Handle<FixedArray> content = isolate->factory()->empty_fixed_array(); - Handle<JSObject> arguments_boilerplate = Handle<JSObject>( - isolate->context()->native_context()->sloppy_arguments_boilerplate(), - isolate); - Handle<JSFunction> arguments_function = Handle<JSFunction>( - JSFunction::cast(arguments_boilerplate->map()->constructor()), - isolate); + Handle<JSFunction> arguments_function( + JSFunction::cast(isolate->sloppy_arguments_map()->constructor())); // Only collect keys if access is permitted. 
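// The hand-rolled index encoding removed just above now lives behind
// FieldIndex::GetLoadByFieldIndex. Standalone restatement of that encoding:
// out-of-object fields are biased negative, and the low bit flags a double
// field. (Multiplication replaces the original left shift so the arithmetic
// is well defined for negative values.)
int LoadByFieldIndex(int field_index, int inobject_properties,
                     bool is_double) {
  if (field_index >= inobject_properties) {
    // Out-of-object: -1, -2, ... index into the properties backing store.
    field_index = -(field_index - inobject_properties + 1);
  }
  return field_index * 2 + (is_double ? 1 : 0);
}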
- for (Handle<Object> p = object; - *p != isolate->heap()->null_value(); - p = Handle<Object>(p->GetPrototype(isolate), isolate)) { - if (p->IsJSProxy()) { - Handle<JSProxy> proxy(JSProxy::cast(*p), isolate); + for (PrototypeIterator iter(isolate, object, + PrototypeIterator::START_AT_RECEIVER); + !iter.IsAtEnd(); iter.Advance()) { + if (PrototypeIterator::GetCurrent(iter)->IsJSProxy()) { + Handle<JSProxy> proxy(JSProxy::cast(*PrototypeIterator::GetCurrent(iter)), + isolate); Handle<Object> args[] = { proxy }; Handle<Object> names; ASSIGN_RETURN_ON_EXCEPTION( @@ -6553,7 +6293,8 @@ MaybeHandle<FixedArray> JSReceiver::GetKeys(Handle<JSReceiver> object, break; } - Handle<JSObject> current(JSObject::cast(*p), isolate); + Handle<JSObject> current = + Handle<JSObject>::cast(PrototypeIterator::GetCurrent(iter)); // Check access rights if required. if (current->IsAccessCheckNeeded() && @@ -6572,7 +6313,7 @@ MaybeHandle<FixedArray> JSReceiver::GetKeys(Handle<JSReceiver> object, isolate, content, FixedArray::UnionOfKeys(content, element_keys), FixedArray); - ASSERT(ContainsOnlyValidKeys(content)); + DCHECK(ContainsOnlyValidKeys(content)); // Add the element keys from the interceptor. if (current->HasIndexedInterceptor()) { @@ -6584,7 +6325,7 @@ MaybeHandle<FixedArray> JSReceiver::GetKeys(Handle<JSReceiver> object, FixedArray::AddKeysFromArrayLike(content, result), FixedArray); } - ASSERT(ContainsOnlyValidKeys(content)); + DCHECK(ContainsOnlyValidKeys(content)); } // We can cache the computed property keys if access checks are @@ -6609,7 +6350,7 @@ MaybeHandle<FixedArray> JSReceiver::GetKeys(Handle<JSReceiver> object, FixedArray::UnionOfKeys( content, GetEnumPropertyKeys(current, cache_enum_keys)), FixedArray); - ASSERT(ContainsOnlyValidKeys(content)); + DCHECK(ContainsOnlyValidKeys(content)); // Add the property keys from the interceptor. if (current->HasNamedInterceptor()) { @@ -6621,12 +6362,12 @@ MaybeHandle<FixedArray> JSReceiver::GetKeys(Handle<JSReceiver> object, FixedArray::AddKeysFromArrayLike(content, result), FixedArray); } - ASSERT(ContainsOnlyValidKeys(content)); + DCHECK(ContainsOnlyValidKeys(content)); } - // If we only want local properties we bail out after the first + // If we only want own properties we bail out after the first // iteration. 
- if (type == LOCAL_ONLY) break; + if (type == OWN_ONLY) break; } return content; } @@ -6645,7 +6386,7 @@ static bool UpdateGetterSetterInDictionary( Object* result = dictionary->ValueAt(entry); PropertyDetails details = dictionary->DetailsAt(entry); if (details.type() == CALLBACKS && result->IsAccessorPair()) { - ASSERT(!details.IsDontDelete()); + DCHECK(!details.IsDontDelete()); if (details.attributes() != attributes) { dictionary->DetailsAtPut( entry, @@ -6663,8 +6404,7 @@ void JSObject::DefineElementAccessor(Handle<JSObject> object, uint32_t index, Handle<Object> getter, Handle<Object> setter, - PropertyAttributes attributes, - v8::AccessControl access_control) { + PropertyAttributes attributes) { switch (object->GetElementsKind()) { case FAST_SMI_ELEMENTS: case FAST_ELEMENTS: @@ -6721,7 +6461,6 @@ void JSObject::DefineElementAccessor(Handle<JSObject> object, Isolate* isolate = object->GetIsolate(); Handle<AccessorPair> accessors = isolate->factory()->NewAccessorPair(); accessors->SetComponents(*getter, *setter); - accessors->set_access_flags(access_control); SetElementCallback(object, index, accessors, attributes); } @@ -6731,7 +6470,7 @@ Handle<AccessorPair> JSObject::CreateAccessorPairFor(Handle<JSObject> object, Handle<Name> name) { Isolate* isolate = object->GetIsolate(); LookupResult result(isolate); - object->LocalLookupRealNamedProperty(name, &result); + object->LookupOwnRealNamedProperty(name, &result); if (result.IsPropertyCallbacks()) { // Note that the result can actually have IsDontDelete() == true when we // e.g. have to fall back to the slow case while adding a setter after @@ -6752,13 +6491,11 @@ void JSObject::DefinePropertyAccessor(Handle<JSObject> object, Handle<Name> name, Handle<Object> getter, Handle<Object> setter, - PropertyAttributes attributes, - v8::AccessControl access_control) { + PropertyAttributes attributes) { // We could assert that the property is configurable here, but we would need // to do a lookup, which seems to be a bit of overkill. bool only_attribute_changes = getter->IsNull() && setter->IsNull(); if (object->HasFastProperties() && !only_attribute_changes && - access_control == v8::DEFAULT && (object->map()->NumberOfOwnDescriptors() <= kMaxNumberOfDescriptors)) { bool getterOk = getter->IsNull() || DefineFastAccessor(object, name, ACCESSOR_GETTER, getter, attributes); @@ -6769,55 +6506,24 @@ void JSObject::DefinePropertyAccessor(Handle<JSObject> object, Handle<AccessorPair> accessors = CreateAccessorPairFor(object, name); accessors->SetComponents(*getter, *setter); - accessors->set_access_flags(access_control); SetPropertyCallback(object, name, accessors, attributes); } -bool JSObject::CanSetCallback(Handle<JSObject> object, Handle<Name> name) { - Isolate* isolate = object->GetIsolate(); - ASSERT(!object->IsAccessCheckNeeded() || - isolate->MayNamedAccess(object, name, v8::ACCESS_SET)); - - // Check if there is an API defined callback object which prohibits - // callback overwriting in this object or its prototype chain. - // This mechanism is needed for instance in a browser setting, where - // certain accessors such as window.location should not be allowed - // to be overwritten because allowing overwriting could potentially - // cause security problems. 
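// Effect of the LOCAL_ONLY -> OWN_ONLY rename just above: the key-collection
// loop breaks after the receiver instead of continuing up the prototype
// chain. ChainNode and GetKeysModel are illustrative only.
#include <string>
#include <vector>

enum class KeyCollectionType { kOwnOnly, kIncludePrototypes };

struct ChainNode {
  std::vector<std::string> own_keys;
  ChainNode* prototype = nullptr;
};

std::vector<std::string> GetKeysModel(ChainNode* obj, KeyCollectionType type) {
  std::vector<std::string> keys;
  for (ChainNode* o = obj; o != nullptr; o = o->prototype) {
    keys.insert(keys.end(), o->own_keys.begin(), o->own_keys.end());
    if (type == KeyCollectionType::kOwnOnly) break;  // OWN_ONLY bails here
  }
  return keys;
}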
- LookupResult callback_result(isolate); - object->LookupCallbackProperty(name, &callback_result); - if (callback_result.IsFound()) { - Object* callback_obj = callback_result.GetCallbackObject(); - if (callback_obj->IsAccessorInfo()) { - return !AccessorInfo::cast(callback_obj)->prohibits_overwriting(); - } - if (callback_obj->IsAccessorPair()) { - return !AccessorPair::cast(callback_obj)->prohibits_overwriting(); - } - } - return true; -} - - bool Map::DictionaryElementsInPrototypeChainOnly() { - Heap* heap = GetHeap(); - if (IsDictionaryElementsKind(elements_kind())) { return false; } - for (Object* prototype = this->prototype(); - prototype != heap->null_value(); - prototype = prototype->GetPrototype(GetIsolate())) { - if (prototype->IsJSProxy()) { + for (PrototypeIterator iter(this); !iter.IsAtEnd(); iter.Advance()) { + if (iter.GetCurrent()->IsJSProxy()) { // Be conservative, don't walk into proxies. return true; } if (IsDictionaryElementsKind( - JSObject::cast(prototype)->map()->elements_kind())) { + JSObject::cast(iter.GetCurrent())->map()->elements_kind())) { return true; } } @@ -6836,7 +6542,7 @@ void JSObject::SetElementCallback(Handle<JSObject> object, // Normalize elements to make this operation simple. bool had_dictionary_elements = object->HasDictionaryElements(); Handle<SeededNumberDictionary> dictionary = NormalizeElements(object); - ASSERT(object->HasDictionaryElements() || + DCHECK(object->HasDictionaryElements() || object->HasDictionaryArgumentsElements()); // Update the dictionary with the new CALLBACKS property. dictionary = SeededNumberDictionary::Set(dictionary, index, structure, @@ -6870,15 +6576,18 @@ void JSObject::SetPropertyCallback(Handle<JSObject> object, Handle<Name> name, Handle<Object> structure, PropertyAttributes attributes) { + PropertyNormalizationMode mode = object->map()->is_prototype_map() + ? KEEP_INOBJECT_PROPERTIES + : CLEAR_INOBJECT_PROPERTIES; // Normalize object to make this operation simple. - NormalizeProperties(object, CLEAR_INOBJECT_PROPERTIES, 0); + NormalizeProperties(object, mode, 0); // For the global object allocate a new map to invalidate the global inline // caches which have a global property cell reference directly in the code. if (object->IsGlobalObject()) { Handle<Map> new_map = Map::CopyDropDescriptors(handle(object->map())); - ASSERT(new_map->is_dictionary_map()); - object->set_map(*new_map); + DCHECK(new_map->is_dictionary_map()); + JSObject::MigrateToMap(object, new_map); // When running crankshaft, changing the map is not enough. We // need to deoptimize all functions that rely on this global @@ -6889,35 +6598,32 @@ void JSObject::SetPropertyCallback(Handle<JSObject> object, // Update the dictionary with the new CALLBACKS property. PropertyDetails details = PropertyDetails(attributes, CALLBACKS, 0); SetNormalizedProperty(object, name, structure, details); + + ReoptimizeIfPrototype(object); } -void JSObject::DefineAccessor(Handle<JSObject> object, - Handle<Name> name, - Handle<Object> getter, - Handle<Object> setter, - PropertyAttributes attributes, - v8::AccessControl access_control) { +MaybeHandle<Object> JSObject::DefineAccessor(Handle<JSObject> object, + Handle<Name> name, + Handle<Object> getter, + Handle<Object> setter, + PropertyAttributes attributes) { Isolate* isolate = object->GetIsolate(); // Check access rights if needed. 
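// The Maybe<bool> pattern used just below for HasOwnElement: a query that
// can fail (for example on a failed access check) reports
// has_value == false, and the caller must test it before reading value.
// Simplified stand-in for v8::internal::Maybe.
template <typename T>
struct MaybeModel {
  bool has_value;
  T value;
};

inline bool HasOwnElementChecked(MaybeModel<bool> maybe) {
  if (!maybe.has_value) return false;  // bail out, as the patch does
  return maybe.value;
}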
if (object->IsAccessCheckNeeded() && !isolate->MayNamedAccess(object, name, v8::ACCESS_SET)) { isolate->ReportFailedAccessCheck(object, v8::ACCESS_SET); - // TODO(yangguo): Issue 3269, check for scheduled exception missing? - return; + RETURN_EXCEPTION_IF_SCHEDULED_EXCEPTION(isolate, Object); + return isolate->factory()->undefined_value(); } if (object->IsJSGlobalProxy()) { - Handle<Object> proto(object->GetPrototype(), isolate); - if (proto->IsNull()) return; - ASSERT(proto->IsJSGlobalObject()); - DefineAccessor(Handle<JSObject>::cast(proto), - name, - getter, - setter, - attributes, - access_control); - return; + PrototypeIterator iter(isolate, object); + if (iter.IsAtEnd()) return isolate->factory()->undefined_value(); + DCHECK(PrototypeIterator::GetCurrent(iter)->IsJSGlobalObject()); + DefineAccessor(Handle<JSObject>::cast(PrototypeIterator::GetCurrent(iter)), + name, getter, setter, attributes); + return isolate->factory()->undefined_value(); } // Make sure that the top context does not change when doing callbacks or @@ -6927,8 +6633,6 @@ void JSObject::DefineAccessor(Handle<JSObject> object, // Try to flatten before operating on the string. if (name->IsString()) name = String::Flatten(Handle<String>::cast(name)); - if (!JSObject::CanSetCallback(object, name)) return; - uint32_t index = 0; bool is_element = name->AsArrayIndex(&index); @@ -6938,14 +6642,21 @@ void JSObject::DefineAccessor(Handle<JSObject> object, bool preexists = false; if (is_observed) { if (is_element) { - preexists = HasLocalElement(object, index); - if (preexists && GetLocalElementAccessorPair(object, index).is_null()) { + Maybe<bool> maybe = HasOwnElement(object, index); + // Workaround for a GCC 4.4.3 bug which leads to "‘preexists’ may be used + // uninitialized in this function". + if (!maybe.has_value) { + DCHECK(false); + return isolate->factory()->undefined_value(); + } + preexists = maybe.value; + if (preexists && GetOwnElementAccessorPair(object, index).is_null()) { old_value = Object::GetElement(isolate, object, index).ToHandleChecked(); } } else { LookupResult lookup(isolate); - object->LocalLookup(name, &lookup, true); + object->LookupOwn(name, &lookup, true); preexists = lookup.IsProperty(); if (preexists && lookup.IsDataProperty()) { old_value = @@ -6955,17 +6666,17 @@ void JSObject::DefineAccessor(Handle<JSObject> object, } if (is_element) { - DefineElementAccessor( - object, index, getter, setter, attributes, access_control); + DefineElementAccessor(object, index, getter, setter, attributes); } else { - DefinePropertyAccessor( - object, name, getter, setter, attributes, access_control); + DefinePropertyAccessor(object, name, getter, setter, attributes); } if (is_observed) { const char* type = preexists ? 
"reconfigure" : "add"; EnqueueChangeRecord(object, type, name, old_value); } + + return isolate->factory()->undefined_value(); } @@ -7003,10 +6714,10 @@ bool JSObject::DefineFastAccessor(Handle<JSObject> object, AccessorComponent component, Handle<Object> accessor, PropertyAttributes attributes) { - ASSERT(accessor->IsSpecFunction() || accessor->IsUndefined()); + DCHECK(accessor->IsSpecFunction() || accessor->IsUndefined()); Isolate* isolate = object->GetIsolate(); LookupResult result(isolate); - object->LocalLookup(name, &result); + object->LookupOwn(name, &result); if (result.IsFound() && !result.IsPropertyCallbacks()) { return false; @@ -7032,11 +6743,13 @@ bool JSObject::DefineFastAccessor(Handle<JSObject> object, if (result.IsFound()) { Handle<Map> target(result.GetTransitionTarget()); - ASSERT(target->NumberOfOwnDescriptors() == + DCHECK(target->NumberOfOwnDescriptors() == object->map()->NumberOfOwnDescriptors()); // This works since descriptors are sorted in order of addition. - ASSERT(object->map()->instance_descriptors()-> - GetKey(descriptor_number) == *name); + DCHECK(Name::Equals( + handle(object->map()->instance_descriptors()->GetKey( + descriptor_number)), + name)); return TryAccessorTransition(object, target, descriptor_number, component, accessor, attributes); } @@ -7048,7 +6761,7 @@ bool JSObject::DefineFastAccessor(Handle<JSObject> object, if (result.IsFound()) { Handle<Map> target(result.GetTransitionTarget()); int descriptor_number = target->LastAdded(); - ASSERT(Name::Equals(name, + DCHECK(Name::Equals(name, handle(target->instance_descriptors()->GetKey(descriptor_number)))); return TryAccessorTransition(object, target, descriptor_number, component, accessor, attributes); @@ -7087,10 +6800,11 @@ MaybeHandle<Object> JSObject::SetAccessor(Handle<JSObject> object, } if (object->IsJSGlobalProxy()) { - Handle<Object> proto(object->GetPrototype(), isolate); - if (proto->IsNull()) return object; - ASSERT(proto->IsJSGlobalObject()); - return SetAccessor(Handle<JSObject>::cast(proto), info); + PrototypeIterator iter(isolate, object); + if (iter.IsAtEnd()) return object; + DCHECK(PrototypeIterator::GetCurrent(iter)->IsJSGlobalObject()); + return SetAccessor( + Handle<JSObject>::cast(PrototypeIterator::GetCurrent(iter)), info); } // Make sure that the top context does not change when doing callbacks or @@ -7100,10 +6814,6 @@ MaybeHandle<Object> JSObject::SetAccessor(Handle<JSObject> object, // Try to flatten before operating on the string. if (name->IsString()) name = String::Flatten(Handle<String>::cast(name)); - if (!JSObject::CanSetCallback(object, name)) { - return factory->undefined_value(); - } - uint32_t index = 0; bool is_element = name->AsArrayIndex(&index); @@ -7141,7 +6851,7 @@ MaybeHandle<Object> JSObject::SetAccessor(Handle<JSObject> object, } else { // Lookup the name. LookupResult result(isolate); - object->LocalLookup(name, &result, true); + object->LookupOwn(name, &result, true); // ES5 forbids turning a property into an accessor if it's not // configurable (that is IsDontDelete in ES3 and v8), see 8.6.1 (Table 5). if (result.IsFound() && (result.IsReadOnly() || result.IsDontDelete())) { @@ -7175,11 +6885,14 @@ MaybeHandle<Object> JSObject::GetAccessor(Handle<JSObject> object, // Make the lookup and include prototypes. 
uint32_t index = 0; if (name->AsArrayIndex(&index)) { - for (Handle<Object> obj = object; - !obj->IsNull(); - obj = handle(JSReceiver::cast(*obj)->GetPrototype(), isolate)) { - if (obj->IsJSObject() && JSObject::cast(*obj)->HasDictionaryElements()) { - JSObject* js_object = JSObject::cast(*obj); + for (PrototypeIterator iter(isolate, object, + PrototypeIterator::START_AT_RECEIVER); + !iter.IsAtEnd(); iter.Advance()) { + if (PrototypeIterator::GetCurrent(iter)->IsJSObject() && + JSObject::cast(*PrototypeIterator::GetCurrent(iter)) + ->HasDictionaryElements()) { + JSObject* js_object = + JSObject::cast(*PrototypeIterator::GetCurrent(iter)); SeededNumberDictionary* dictionary = js_object->element_dictionary(); int entry = dictionary->FindEntry(index); if (entry != SeededNumberDictionary::kNotFound) { @@ -7193,11 +6906,12 @@ MaybeHandle<Object> JSObject::GetAccessor(Handle<JSObject> object, } } } else { - for (Handle<Object> obj = object; - !obj->IsNull(); - obj = handle(JSReceiver::cast(*obj)->GetPrototype(), isolate)) { + for (PrototypeIterator iter(isolate, object, + PrototypeIterator::START_AT_RECEIVER); + !iter.IsAtEnd(); iter.Advance()) { LookupResult result(isolate); - JSReceiver::cast(*obj)->LocalLookup(name, &result); + JSReceiver::cast(*PrototypeIterator::GetCurrent(iter)) + ->LookupOwn(name, &result); if (result.IsFound()) { if (result.IsReadOnly()) return isolate->factory()->undefined_value(); if (result.IsPropertyCallbacks()) { @@ -7220,9 +6934,10 @@ Object* JSObject::SlowReverseLookup(Object* value) { DescriptorArray* descs = map()->instance_descriptors(); for (int i = 0; i < number_of_own_descriptors; i++) { if (descs->GetType(i) == FIELD) { - Object* property = RawFastPropertyAt(descs->GetFieldIndex(i)); + Object* property = + RawFastPropertyAt(FieldIndex::ForDescriptor(map(), i)); if (descs->GetDetails(i).representation().IsDouble()) { - ASSERT(property->IsHeapNumber()); + DCHECK(property->IsMutableHeapNumber()); if (value->IsNumber() && property->Number() == value->Number()) { return descs->GetKey(i); } @@ -7258,6 +6973,8 @@ Handle<Map> Map::RawCopy(Handle<Map> map, int instance_size) { if (!map->is_dictionary_map()) { new_bit_field3 = IsUnstable::update(new_bit_field3, false); } + new_bit_field3 = ConstructionCount::update(new_bit_field3, + JSFunction::kNoSlackTracking); result->set_bit_field3(new_bit_field3); return result; } @@ -7265,42 +6982,44 @@ Handle<Map> Map::RawCopy(Handle<Map> map, int instance_size) { Handle<Map> Map::Normalize(Handle<Map> fast_map, PropertyNormalizationMode mode) { - ASSERT(!fast_map->is_dictionary_map()); + DCHECK(!fast_map->is_dictionary_map()); Isolate* isolate = fast_map->GetIsolate(); - Handle<NormalizedMapCache> cache( - isolate->context()->native_context()->normalized_map_cache()); + Handle<Object> maybe_cache(isolate->native_context()->normalized_map_cache(), + isolate); + bool use_cache = !maybe_cache->IsUndefined(); + Handle<NormalizedMapCache> cache; + if (use_cache) cache = Handle<NormalizedMapCache>::cast(maybe_cache); Handle<Map> new_map; - if (cache->Get(fast_map, mode).ToHandle(&new_map)) { + if (use_cache && cache->Get(fast_map, mode).ToHandle(&new_map)) { #ifdef VERIFY_HEAP - if (FLAG_verify_heap) { - new_map->SharedMapVerify(); - } + if (FLAG_verify_heap) new_map->DictionaryMapVerify(); #endif -#ifdef ENABLE_SLOW_ASSERTS +#ifdef ENABLE_SLOW_DCHECKS if (FLAG_enable_slow_asserts) { // The cached map should match newly created normalized map bit-by-bit, // except for the code cache, which can contain some ics which can be // 
applied to the shared map. - Handle<Map> fresh = Map::CopyNormalized( - fast_map, mode, SHARED_NORMALIZED_MAP); + Handle<Map> fresh = Map::CopyNormalized(fast_map, mode); - ASSERT(memcmp(fresh->address(), + DCHECK(memcmp(fresh->address(), new_map->address(), Map::kCodeCacheOffset) == 0); STATIC_ASSERT(Map::kDependentCodeOffset == Map::kCodeCacheOffset + kPointerSize); int offset = Map::kDependentCodeOffset + kPointerSize; - ASSERT(memcmp(fresh->address() + offset, + DCHECK(memcmp(fresh->address() + offset, new_map->address() + offset, Map::kSize - offset) == 0); } #endif } else { - new_map = Map::CopyNormalized(fast_map, mode, SHARED_NORMALIZED_MAP); - cache->Set(fast_map, new_map); - isolate->counters()->normalized_maps()->Increment(); + new_map = Map::CopyNormalized(fast_map, mode); + if (use_cache) { + cache->Set(fast_map, new_map); + isolate->counters()->normalized_maps()->Increment(); + } } fast_map->NotifyLeafMapLayoutChange(); return new_map; @@ -7308,8 +7027,7 @@ Handle<Map> Map::Normalize(Handle<Map> fast_map, Handle<Map> Map::CopyNormalized(Handle<Map> map, - PropertyNormalizationMode mode, - NormalizedMapSharingMode sharing) { + PropertyNormalizationMode mode) { int new_instance_size = map->instance_size(); if (mode == CLEAR_INOBJECT_PROPERTIES) { new_instance_size -= map->inobject_properties() * kPointerSize; @@ -7321,14 +7039,11 @@ Handle<Map> Map::CopyNormalized(Handle<Map> map, result->set_inobject_properties(map->inobject_properties()); } - result->set_is_shared(sharing == SHARED_NORMALIZED_MAP); result->set_dictionary_map(true); result->set_migration_target(false); #ifdef VERIFY_HEAP - if (FLAG_verify_heap && result->is_shared()) { - result->SharedMapVerify(); - } + if (FLAG_verify_heap) result->DictionaryMapVerify(); #endif return result; @@ -7344,7 +7059,6 @@ Handle<Map> Map::CopyDropDescriptors(Handle<Map> map) { result->set_pre_allocated_property_fields( map->pre_allocated_property_fields()); - result->set_is_shared(false); result->ClearCodeCache(map->GetHeap()); map->NotifyLeafMapLayoutChange(); return result; @@ -7357,13 +7071,11 @@ Handle<Map> Map::ShareDescriptor(Handle<Map> map, // Sanity check. This path is only to be taken if the map owns its descriptor // array, implying that its NumberOfOwnDescriptors equals the number of // descriptors in the descriptor array. - ASSERT(map->NumberOfOwnDescriptors() == + DCHECK(map->NumberOfOwnDescriptors() == map->instance_descriptors()->number_of_descriptors()); Handle<Map> result = CopyDropDescriptors(map); Handle<Name> name = descriptor->GetKey(); - Handle<TransitionArray> transitions = - TransitionArray::CopyInsert(map, name, result, SIMPLE_TRANSITION); // Ensure there's space for the new descriptor in the shared descriptor array. if (descriptors->NumberOfSlackDescriptors() == 0) { @@ -7376,19 +7088,30 @@ Handle<Map> Map::ShareDescriptor(Handle<Map> map, } } - // Commit the state atomically. 
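// The transition wiring removed above is factored into ConnectTransition
// just below; the new special case keeps prototype maps out of the
// transition tree entirely. MapNode is an illustrative stand-in.
struct MapNode {
  bool is_prototype_map = false;
  bool owns_descriptors = true;
  MapNode* back_pointer = nullptr;
};

void ConnectTransitionModel(MapNode* parent, MapNode* child) {
  parent->owns_descriptors = false;  // the child now shares the array
  if (parent->is_prototype_map) {
    // Prototype maps record no transition and no back pointer.
    return;
  }
  child->back_pointer = parent;      // ordinary maps stay linked
}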
- DisallowHeapAllocation no_gc; + { + DisallowHeapAllocation no_gc; + descriptors->Append(descriptor); + result->InitializeDescriptors(*descriptors); + } - descriptors->Append(descriptor); - result->SetBackPointer(*map); - result->InitializeDescriptors(*descriptors); + DCHECK(result->NumberOfOwnDescriptors() == map->NumberOfOwnDescriptors() + 1); + ConnectTransition(map, result, name, SIMPLE_TRANSITION); - ASSERT(result->NumberOfOwnDescriptors() == map->NumberOfOwnDescriptors() + 1); + return result; +} - map->set_transitions(*transitions); - map->set_owns_descriptors(false); - return result; +void Map::ConnectTransition(Handle<Map> parent, Handle<Map> child, + Handle<Name> name, SimpleTransitionFlag flag) { + parent->set_owns_descriptors(false); + if (parent->is_prototype_map()) { + DCHECK(child->is_prototype_map()); + } else { + Handle<TransitionArray> transitions = + TransitionArray::CopyInsert(parent, name, child, flag); + parent->set_transitions(*transitions); + child->SetBackPointer(*parent); + } } @@ -7397,24 +7120,23 @@ Handle<Map> Map::CopyReplaceDescriptors(Handle<Map> map, TransitionFlag flag, MaybeHandle<Name> maybe_name, SimpleTransitionFlag simple_flag) { - ASSERT(descriptors->IsSortedNoDuplicates()); + DCHECK(descriptors->IsSortedNoDuplicates()); Handle<Map> result = CopyDropDescriptors(map); result->InitializeDescriptors(*descriptors); - if (flag == INSERT_TRANSITION && map->CanHaveMoreTransitions()) { - Handle<Name> name; - CHECK(maybe_name.ToHandle(&name)); - Handle<TransitionArray> transitions = TransitionArray::CopyInsert( - map, name, result, simple_flag); - map->set_transitions(*transitions); - result->SetBackPointer(*map); - } else { - int length = descriptors->number_of_descriptors(); - for (int i = 0; i < length; i++) { - descriptors->SetRepresentation(i, Representation::Tagged()); - if (descriptors->GetDetails(i).type() == FIELD) { - descriptors->SetValue(i, HeapType::Any()); + if (!map->is_prototype_map()) { + if (flag == INSERT_TRANSITION && map->CanHaveMoreTransitions()) { + Handle<Name> name; + CHECK(maybe_name.ToHandle(&name)); + ConnectTransition(map, result, name, simple_flag); + } else { + int length = descriptors->number_of_descriptors(); + for (int i = 0; i < length; i++) { + descriptors->SetRepresentation(i, Representation::Tagged()); + if (descriptors->GetDetails(i).type() == FIELD) { + descriptors->SetValue(i, HeapType::Any()); + } } } } @@ -7428,7 +7150,7 @@ Handle<Map> Map::CopyReplaceDescriptors(Handle<Map> map, Handle<Map> Map::CopyInstallDescriptors(Handle<Map> map, int new_descriptor, Handle<DescriptorArray> descriptors) { - ASSERT(descriptors->IsSortedNoDuplicates()); + DCHECK(descriptors->IsSortedNoDuplicates()); Handle<Map> result = CopyDropDescriptors(map); @@ -7444,14 +7166,9 @@ Handle<Map> Map::CopyInstallDescriptors(Handle<Map> map, } result->set_unused_property_fields(unused_property_fields); - result->set_owns_descriptors(false); Handle<Name> name = handle(descriptors->GetKey(new_descriptor)); - Handle<TransitionArray> transitions = TransitionArray::CopyInsert( - map, name, result, SIMPLE_TRANSITION); - - map->set_transitions(*transitions); - result->SetBackPointer(*map); + ConnectTransition(map, result, name, SIMPLE_TRANSITION); return result; } @@ -7460,16 +7177,16 @@ Handle<Map> Map::CopyInstallDescriptors(Handle<Map> map, Handle<Map> Map::CopyAsElementsKind(Handle<Map> map, ElementsKind kind, TransitionFlag flag) { if (flag == INSERT_TRANSITION) { - ASSERT(!map->HasElementsTransition() || + DCHECK(!map->HasElementsTransition() || 
((map->elements_transition_map()->elements_kind() == DICTIONARY_ELEMENTS || IsExternalArrayElementsKind( map->elements_transition_map()->elements_kind())) && (kind == DICTIONARY_ELEMENTS || IsExternalArrayElementsKind(kind)))); - ASSERT(!IsFastElementsKind(kind) || + DCHECK(!IsFastElementsKind(kind) || IsMoreGeneralElementsKindTransition(map->elements_kind(), kind)); - ASSERT(kind != map->elements_kind()); + DCHECK(kind != map->elements_kind()); } bool insert_transition = @@ -7480,12 +7197,10 @@ Handle<Map> Map::CopyAsElementsKind(Handle<Map> map, ElementsKind kind, // transfer ownership to the new map. Handle<Map> new_map = CopyDropDescriptors(map); - SetElementsTransitionMap(map, new_map); + ConnectElementsTransition(map, new_map); new_map->set_elements_kind(kind); new_map->InitializeDescriptors(map->instance_descriptors()); - new_map->SetBackPointer(*map); - map->set_owns_descriptors(false); return new_map; } @@ -7497,8 +7212,7 @@ Handle<Map> Map::CopyAsElementsKind(Handle<Map> map, ElementsKind kind, new_map->set_elements_kind(kind); if (insert_transition) { - SetElementsTransitionMap(map, new_map); - new_map->SetBackPointer(*map); + ConnectElementsTransition(map, new_map); } return new_map; @@ -7506,7 +7220,7 @@ Handle<Map> Map::CopyAsElementsKind(Handle<Map> map, ElementsKind kind, Handle<Map> Map::CopyForObserved(Handle<Map> map) { - ASSERT(!map->is_observed()); + DCHECK(!map->is_observed()); Isolate* isolate = map->GetIsolate(); @@ -7516,22 +7230,18 @@ Handle<Map> Map::CopyForObserved(Handle<Map> map) { if (map->owns_descriptors()) { new_map = CopyDropDescriptors(map); } else { + DCHECK(!map->is_prototype_map()); new_map = Copy(map); } - Handle<TransitionArray> transitions = TransitionArray::CopyInsert( - map, isolate->factory()->observed_symbol(), new_map, FULL_TRANSITION); - - map->set_transitions(*transitions); - new_map->set_is_observed(); - if (map->owns_descriptors()) { new_map->InitializeDescriptors(map->instance_descriptors()); - map->set_owns_descriptors(false); } - new_map->SetBackPointer(*map); + Handle<Name> name = isolate->factory()->observed_symbol(); + ConnectTransition(map, new_map, name, FULL_TRANSITION); + return new_map; } @@ -7580,7 +7290,7 @@ Handle<Map> Map::CopyForFreeze(Handle<Map> map) { Isolate* isolate = map->GetIsolate(); Handle<DescriptorArray> new_desc = DescriptorArray::CopyUpToAddAttributes( handle(map->instance_descriptors(), isolate), num_descriptors, FROZEN); - Handle<Map> new_map = Map::CopyReplaceDescriptors( + Handle<Map> new_map = CopyReplaceDescriptors( map, new_desc, INSERT_TRANSITION, isolate->factory()->frozen_symbol()); new_map->freeze(); new_map->set_is_extensible(false); @@ -7589,6 +7299,101 @@ Handle<Map> Map::CopyForFreeze(Handle<Map> map) { } +bool DescriptorArray::CanHoldValue(int descriptor, Object* value) { + PropertyDetails details = GetDetails(descriptor); + switch (details.type()) { + case FIELD: + return value->FitsRepresentation(details.representation()) && + GetFieldType(descriptor)->NowContains(value); + + case CONSTANT: + DCHECK(GetConstant(descriptor) != value || + value->FitsRepresentation(details.representation())); + return GetConstant(descriptor) == value; + + case CALLBACKS: + return false; + + case NORMAL: + case INTERCEPTOR: + case HANDLER: + case NONEXISTENT: + break; + } + + UNREACHABLE(); + return false; +} + + +Handle<Map> Map::PrepareForDataProperty(Handle<Map> map, int descriptor, + Handle<Object> value) { + // Dictionaries can store any property value. 
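// Sketch of the CanHoldValue check added just above: a data property can
// reuse its descriptor only if the new value fits the representation the
// map recorded for that field; PrepareForDataProperty (continuing below)
// generalizes the map otherwise. Simplified: the real check also consults
// the recorded field type.
enum class Rep { kSmi, kDouble, kTagged };

bool FitsRep(Rep r, bool is_heap_object, bool is_whole_small_int) {
  switch (r) {
    case Rep::kSmi:    return is_whole_small_int;  // small integers only
    case Rep::kDouble: return !is_heap_object;     // any number value
    case Rep::kTagged: return true;                // anything at all
  }
  return false;
}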
+ if (map->is_dictionary_map()) return map; + + // Migrate to the newest map before storing the property. + if (map->is_deprecated()) map = Update(map); + + Handle<DescriptorArray> descriptors(map->instance_descriptors()); + + if (descriptors->CanHoldValue(descriptor, *value)) return map; + + Isolate* isolate = map->GetIsolate(); + Representation representation = value->OptimalRepresentation(); + Handle<HeapType> type = value->OptimalType(isolate, representation); + + return GeneralizeRepresentation(map, descriptor, representation, type, + FORCE_FIELD); +} + + +Handle<Map> Map::TransitionToDataProperty(Handle<Map> map, Handle<Name> name, + Handle<Object> value, + PropertyAttributes attributes, + StoreFromKeyed store_mode) { + // Dictionary maps can always have additional data properties. + if (map->is_dictionary_map()) return map; + + // Migrate to the newest map before transitioning to the new property. + if (map->is_deprecated()) map = Update(map); + + int index = map->SearchTransition(*name); + if (index != TransitionArray::kNotFound) { + Handle<Map> transition(map->GetTransition(index)); + int descriptor = transition->LastAdded(); + + // TODO(verwaest): Handle attributes better. + DescriptorArray* descriptors = transition->instance_descriptors(); + if (descriptors->GetDetails(descriptor).attributes() != attributes) { + return CopyGeneralizeAllRepresentations(transition, descriptor, + FORCE_FIELD, attributes, + "attributes mismatch"); + } + + return Map::PrepareForDataProperty(transition, descriptor, value); + } + + TransitionFlag flag = INSERT_TRANSITION; + MaybeHandle<Map> maybe_map; + if (value->IsJSFunction()) { + maybe_map = Map::CopyWithConstant(map, name, value, attributes, flag); + } else if (!map->TooManyFastProperties(store_mode)) { + Isolate* isolate = name->GetIsolate(); + Representation representation = value->OptimalRepresentation(); + Handle<HeapType> type = value->OptimalType(isolate, representation); + maybe_map = + Map::CopyWithField(map, name, type, attributes, representation, flag); + } + + Handle<Map> result; + if (!maybe_map.ToHandle(&result)) { + return Map::Normalize(map, CLEAR_INOBJECT_PROPERTIES); + } + + return result; +} + + Handle<Map> Map::CopyAddDescriptor(Handle<Map> map, Descriptor* descriptor, TransitionFlag flag) { @@ -7656,17 +7461,20 @@ Handle<DescriptorArray> DescriptorArray::CopyUpToAddAttributes( if (attributes != NONE) { for (int i = 0; i < size; ++i) { Object* value = desc->GetValue(i); + Name* key = desc->GetKey(i); PropertyDetails details = desc->GetDetails(i); - int mask = DONT_DELETE | DONT_ENUM; - // READ_ONLY is an invalid attribute for JS setters/getters. - if (details.type() != CALLBACKS || !value->IsAccessorPair()) { - mask |= READ_ONLY; + // Bulk attribute changes never affect private properties. + if (!key->IsSymbol() || !Symbol::cast(key)->is_private()) { + int mask = DONT_DELETE | DONT_ENUM; + // READ_ONLY is an invalid attribute for JS setters/getters. 
+ if (details.type() != CALLBACKS || !value->IsAccessorPair()) { + mask |= READ_ONLY; + } + details = details.CopyAddAttributes( + static_cast<PropertyAttributes>(attributes & mask)); } - details = details.CopyAddAttributes( - static_cast<PropertyAttributes>(attributes & mask)); - Descriptor inner_desc(handle(desc->GetKey(i)), - handle(value, desc->GetIsolate()), - details); + Descriptor inner_desc( + handle(key), handle(value, desc->GetIsolate()), details); descriptors->Set(i, &inner_desc, witness); } } else { @@ -7690,7 +7498,7 @@ Handle<Map> Map::CopyReplaceDescriptor(Handle<Map> map, descriptor->KeyToUniqueName(); Handle<Name> key = descriptor->GetKey(); - ASSERT(*key == descriptors->GetKey(insertion_index)); + DCHECK(*key == descriptors->GetKey(insertion_index)); Handle<DescriptorArray> new_descriptors = DescriptorArray::CopyUpTo( descriptors, map->NumberOfOwnDescriptors()); @@ -7744,7 +7552,7 @@ int Map::IndexInCodeCache(Object* name, Code* code) { void Map::RemoveFromCodeCache(Name* name, Code* code, int index) { // No GC is supposed to happen between a call to IndexInCodeCache and // RemoveFromCodeCache so the code cache must be there. - ASSERT(!code_cache()->IsFixedArray()); + DCHECK(!code_cache()->IsFixedArray()); CodeCache::cast(code_cache())->RemoveByIndex(name, code, index); } @@ -7762,9 +7570,9 @@ class IntrusiveMapTransitionIterator { constructor_(constructor) { } void StartIfNotStarted() { - ASSERT(!(*IteratorField())->IsSmi() || IsIterating()); + DCHECK(!(*IteratorField())->IsSmi() || IsIterating()); if (!(*IteratorField())->IsSmi()) { - ASSERT(*IteratorField() == constructor_); + DCHECK(*IteratorField() == constructor_); *IteratorField() = Smi::FromInt(-1); } } @@ -7775,7 +7583,7 @@ class IntrusiveMapTransitionIterator { } Map* Next() { - ASSERT(IsIterating()); + DCHECK(IsIterating()); int value = Smi::cast(*IteratorField())->value(); int index = -value - 1; int number_of_transitions = transition_array_->number_of_transitions(); @@ -7811,7 +7619,7 @@ class IntrusivePrototypeTransitionIterator { void StartIfNotStarted() { if (!(*IteratorField())->IsSmi()) { - ASSERT(*IteratorField() == constructor_); + DCHECK(*IteratorField() == constructor_); *IteratorField() = Smi::FromInt(0); } } @@ -7822,7 +7630,7 @@ class IntrusivePrototypeTransitionIterator { } Map* Next() { - ASSERT(IsIterating()); + DCHECK(IsIterating()); int transitionNumber = Smi::cast(*IteratorField())->value(); if (transitionNumber < NumberOfTransitions()) { *IteratorField() = Smi::FromInt(transitionNumber + 1); @@ -7970,7 +7778,7 @@ void CodeCache::Update( } UpdateNormalTypeCache(code_cache, name, code); } else { - ASSERT(code_cache->default_cache()->IsFixedArray()); + DCHECK(code_cache->default_cache()->IsFixedArray()); UpdateDefaultCache(code_cache, name, code); } } @@ -8025,7 +7833,7 @@ void CodeCache::UpdateDefaultCache( // multiple of the entry size. int new_length = length + ((length >> 1)) + kCodeCacheEntrySize; new_length = new_length - new_length % kCodeCacheEntrySize; - ASSERT((new_length % kCodeCacheEntrySize) == 0); + DCHECK((new_length % kCodeCacheEntrySize) == 0); cache = FixedArray::CopySize(cache, new_length); // Add the (name, code) pair to the new cache. 
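// Both FreezeDictionary and CopyUpToAddAttributes in this patch now skip
// private symbols, so bulk attribute changes such as freezing no longer
// touch engine-internal private properties. The predicate, restated with
// an illustrative Key type:
struct Key {
  bool is_symbol = false;
  bool is_private = false;
};

bool AffectedByBulkAttributeChange(const Key& k) {
  return !(k.is_symbol && k.is_private);
}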
@@ -8102,17 +7910,17 @@ int CodeCache::GetIndex(Object* name, Code* code) { void CodeCache::RemoveByIndex(Object* name, Code* code, int index) { if (code->type() == Code::NORMAL) { - ASSERT(!normal_type_cache()->IsUndefined()); + DCHECK(!normal_type_cache()->IsUndefined()); CodeCacheHashTable* cache = CodeCacheHashTable::cast(normal_type_cache()); - ASSERT(cache->GetIndex(Name::cast(name), code->flags()) == index); + DCHECK(cache->GetIndex(Name::cast(name), code->flags()) == index); cache->RemoveByIndex(index); } else { FixedArray* array = default_cache(); - ASSERT(array->length() >= index && array->get(index)->IsCode()); + DCHECK(array->length() >= index && array->get(index)->IsCode()); // Use null instead of undefined for deleted elements to distinguish // deleted elements from unused elements. This distinction is used // when looking up in the cache and when updating the cache. - ASSERT_EQ(1, kCodeCacheEntryCodeOffset - kCodeCacheEntryNameOffset); + DCHECK_EQ(1, kCodeCacheEntryCodeOffset - kCodeCacheEntryNameOffset); array->set_null(index - 1); // Name. array->set_null(index); // Code. } @@ -8205,7 +8013,7 @@ int CodeCacheHashTable::GetIndex(Name* name, Code::Flags flags) { void CodeCacheHashTable::RemoveByIndex(int index) { - ASSERT(index >= 0); + DCHECK(index >= 0); Heap* heap = GetHeap(); set(EntryToIndex(index), heap->the_hole_value()); set(EntryToIndex(index) + 1, heap->the_hole_value()); @@ -8226,7 +8034,7 @@ void PolymorphicCodeCache::Update(Handle<PolymorphicCodeCache> code_cache, code_cache->set_cache(*result); } else { // This entry shouldn't be contained in the cache yet. - ASSERT(PolymorphicCodeCacheHashTable::cast(code_cache->cache()) + DCHECK(PolymorphicCodeCacheHashTable::cast(code_cache->cache()) ->Lookup(maps, flags)->IsUndefined()); } Handle<PolymorphicCodeCacheHashTable> hash_table = @@ -8366,10 +8174,10 @@ Handle<PolymorphicCodeCacheHashTable> PolymorphicCodeCacheHashTable::Put( void FixedArray::Shrink(int new_length) { - ASSERT(0 <= new_length && new_length <= length()); + DCHECK(0 <= new_length && new_length <= length()); if (new_length < length()) { - RightTrimFixedArray<Heap::FROM_MUTATOR>( - GetHeap(), this, length() - new_length); + GetHeap()->RightTrimFixedArray<Heap::FROM_MUTATOR>( + this, length() - new_length); } } @@ -8377,7 +8185,7 @@ void FixedArray::Shrink(int new_length) { MaybeHandle<FixedArray> FixedArray::AddKeysFromArrayLike( Handle<FixedArray> content, Handle<JSObject> array) { - ASSERT(array->IsJSArray() || array->HasSloppyArgumentsElements()); + DCHECK(array->IsJSArray() || array->HasSloppyArgumentsElements()); ElementsAccessor* accessor = array->GetElementsAccessor(); Handle<FixedArray> result; ASSIGN_RETURN_ON_EXCEPTION( @@ -8385,12 +8193,12 @@ MaybeHandle<FixedArray> FixedArray::AddKeysFromArrayLike( accessor->AddElementsToFixedArray(array, array, content), FixedArray); -#ifdef ENABLE_SLOW_ASSERTS +#ifdef ENABLE_SLOW_DCHECKS if (FLAG_enable_slow_asserts) { DisallowHeapAllocation no_allocation; for (int i = 0; i < result->length(); i++) { Object* current = result->get(i); - ASSERT(current->IsNumber() || current->IsName()); + DCHECK(current->IsNumber() || current->IsName()); } } #endif @@ -8411,12 +8219,12 @@ MaybeHandle<FixedArray> FixedArray::UnionOfKeys(Handle<FixedArray> first, Handle<FixedArrayBase>::cast(second)), FixedArray); -#ifdef ENABLE_SLOW_ASSERTS +#ifdef ENABLE_SLOW_DCHECKS if (FLAG_enable_slow_asserts) { DisallowHeapAllocation no_allocation; for (int i = 0; i < result->length(); i++) { Object* current = result->get(i); - 
ASSERT(current->IsNumber() || current->IsName()); + DCHECK(current->IsNumber() || current->IsName()); } } #endif @@ -8468,7 +8276,7 @@ bool FixedArray::IsEqualTo(FixedArray* other) { Handle<DescriptorArray> DescriptorArray::Allocate(Isolate* isolate, int number_of_descriptors, int slack) { - ASSERT(0 <= number_of_descriptors); + DCHECK(0 <= number_of_descriptors); Factory* factory = isolate->factory(); // Do not use DescriptorArray::cast on incomplete object. int size = number_of_descriptors + slack; @@ -8496,10 +8304,10 @@ void DescriptorArray::Replace(int index, Descriptor* descriptor) { void DescriptorArray::SetEnumCache(FixedArray* bridge_storage, FixedArray* new_cache, Object* new_index_cache) { - ASSERT(bridge_storage->length() >= kEnumCacheBridgeLength); - ASSERT(new_index_cache->IsSmi() || new_index_cache->IsFixedArray()); - ASSERT(!IsEmpty()); - ASSERT(!HasEnumCache() || new_cache->length() > GetEnumCache()->length()); + DCHECK(bridge_storage->length() >= kEnumCacheBridgeLength); + DCHECK(new_index_cache->IsSmi() || new_index_cache->IsFixedArray()); + DCHECK(!IsEmpty()); + DCHECK(!HasEnumCache() || new_cache->length() > GetEnumCache()->length()); FixedArray::cast(bridge_storage)-> set(kEnumCacheBridgeCacheIndex, new_cache); FixedArray::cast(bridge_storage)-> @@ -8575,7 +8383,7 @@ void DescriptorArray::Sort() { parent_index = child_index; } } - ASSERT(IsSortedNoDuplicates()); + DCHECK(IsSortedNoDuplicates()); } @@ -8594,13 +8402,17 @@ Object* AccessorPair::GetComponent(AccessorComponent component) { Handle<DeoptimizationInputData> DeoptimizationInputData::New( - Isolate* isolate, - int deopt_entry_count, + Isolate* isolate, int deopt_entry_count, int return_patch_address_count, PretenureFlag pretenure) { - ASSERT(deopt_entry_count > 0); - return Handle<DeoptimizationInputData>::cast( - isolate->factory()->NewFixedArray( - LengthFor(deopt_entry_count), pretenure)); + DCHECK(deopt_entry_count + return_patch_address_count > 0); + Handle<FixedArray> deoptimization_data = + Handle<FixedArray>::cast(isolate->factory()->NewFixedArray( + LengthFor(deopt_entry_count, return_patch_address_count), pretenure)); + deoptimization_data->set(kDeoptEntryCountIndex, + Smi::FromInt(deopt_entry_count)); + deoptimization_data->set(kReturnAddressPatchEntryCountIndex, + Smi::FromInt(return_patch_address_count)); + return Handle<DeoptimizationInputData>::cast(deoptimization_data); } @@ -8632,30 +8444,6 @@ bool DescriptorArray::IsEqualTo(DescriptorArray* other) { #endif -static bool IsIdentifier(UnicodeCache* cache, Name* name) { - // Checks whether the buffer contains an identifier (no escape). 
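// The blanket ASSERT -> DCHECK rename throughout this file follows the usual
// debug-check convention: active in debug builds, compiled out in release
// builds. A minimal standalone equivalent (the real macro also reports file
// and line):
#include <cstdio>
#include <cstdlib>

#ifdef DEBUG
#define DCHECK_MODEL(condition)                               \
  do {                                                        \
    if (!(condition)) {                                       \
      std::fprintf(stderr, "Check failed: %s\n", #condition); \
      std::abort();                                           \
    }                                                         \
  } while (0)
#else
#define DCHECK_MODEL(condition) ((void)0)
#endif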
- if (!name->IsString()) return false; - String* string = String::cast(name); - if (string->length() == 0) return true; - ConsStringIteratorOp op; - StringCharacterStream stream(string, &op); - if (!cache->IsIdentifierStart(stream.GetNext())) { - return false; - } - while (stream.HasMore()) { - if (!cache->IsIdentifierPart(stream.GetNext())) { - return false; - } - } - return true; -} - - -bool Name::IsCacheable(Isolate* isolate) { - return IsSymbol() || IsIdentifier(isolate->unicode_cache(), this); -} - - bool String::LooksValid() { if (!GetIsolate()->heap()->Contains(this)) return false; return true; @@ -8663,7 +8451,7 @@ bool String::LooksValid() { String::FlatContent String::GetFlatContent() { - ASSERT(!AllowHeapAllocation::IsAllowed()); + DCHECK(!AllowHeapAllocation::IsAllowed()); int length = this->length(); StringShape shape(this); String* string = this; @@ -8681,7 +8469,7 @@ String::FlatContent String::GetFlatContent() { offset = slice->offset(); string = slice->parent(); shape = StringShape(string); - ASSERT(shape.representation_tag() != kConsStringTag && + DCHECK(shape.representation_tag() != kConsStringTag && shape.representation_tag() != kSlicedStringTag); } if (shape.encoding_tag() == kOneByteStringTag) { @@ -8693,7 +8481,7 @@ String::FlatContent String::GetFlatContent() { } return FlatContent(start + offset, length); } else { - ASSERT(shape.encoding_tag() == kTwoByteStringTag); + DCHECK(shape.encoding_tag() == kTwoByteStringTag); const uc16* start; if (shape.representation_tag() == kSeqStringTag) { start = SeqTwoByteString::cast(string)->GetChars(); @@ -8764,7 +8552,7 @@ SmartArrayPointer<char> String::ToCString(AllowNullsFlag allow_nulls, const uc16* String::GetTwoByteData(unsigned start) { - ASSERT(!IsOneByteRepresentationUnderneath()); + DCHECK(!IsOneByteRepresentationUnderneath()); switch (StringShape(this).representation_tag()) { case kSeqStringTag: return SeqTwoByteString::cast(this)->SeqTwoByteStringGetData(start); @@ -8827,7 +8615,7 @@ int Relocatable::ArchiveSpacePerThread() { } -// Archive statics that are thread local. +// Archive statics that are thread-local. char* Relocatable::ArchiveState(Isolate* isolate, char* to) { *reinterpret_cast<Relocatable**>(to) = isolate->relocatable_top(); isolate->set_relocatable_top(NULL); @@ -8835,7 +8623,7 @@ char* Relocatable::ArchiveState(Isolate* isolate, char* to) { } -// Restore statics that are thread local. +// Restore statics that are thread-local. char* Relocatable::RestoreState(Isolate* isolate, char* from) { isolate->set_relocatable_top(*reinterpret_cast<Relocatable**>(from)); return from + ArchiveSpacePerThread(); @@ -8882,11 +8670,11 @@ FlatStringReader::FlatStringReader(Isolate* isolate, Vector<const char> input) void FlatStringReader::PostGarbageCollection() { if (str_ == NULL) return; Handle<String> str(str_); - ASSERT(str->IsFlat()); + DCHECK(str->IsFlat()); DisallowHeapAllocation no_gc; // This does not actually prevent the vector from being relocated later. String::FlatContent content = str->GetFlatContent(); - ASSERT(content.IsFlat()); + DCHECK(content.IsFlat()); is_ascii_ = content.IsAscii(); if (is_ascii_) { start_ = content.ToOneByteVector().start(); @@ -8897,26 +8685,26 @@ void FlatStringReader::PostGarbageCollection() { void ConsStringIteratorOp::Initialize(ConsString* cons_string, int offset) { - ASSERT(cons_string != NULL); + DCHECK(cons_string != NULL); root_ = cons_string; consumed_ = offset; // Force stack blown condition to trigger restart. 
depth_ = 1; maximum_depth_ = kStackSize + depth_; - ASSERT(StackBlown()); + DCHECK(StackBlown()); } String* ConsStringIteratorOp::Continue(int* offset_out) { - ASSERT(depth_ != 0); - ASSERT_EQ(0, *offset_out); + DCHECK(depth_ != 0); + DCHECK_EQ(0, *offset_out); bool blew_stack = StackBlown(); String* string = NULL; // Get the next leaf if there is one. if (!blew_stack) string = NextLeaf(&blew_stack); // Restart search from root. if (blew_stack) { - ASSERT(string == NULL); + DCHECK(string == NULL); string = Search(offset_out); } // Ensure future calls return null immediately. @@ -8975,7 +8763,7 @@ String* ConsStringIteratorOp::Search(int* offset_out) { // Pop stack so next iteration is in correct place. Pop(); } - ASSERT(length != 0); + DCHECK(length != 0); // Adjust return values and exit. consumed_ = offset + length; *offset_out = consumed - offset; @@ -9021,7 +8809,7 @@ String* ConsStringIteratorOp::NextLeaf(bool* blew_stack) { if ((type & kStringRepresentationMask) != kConsStringTag) { AdjustMaximumDepth(); int length = string->length(); - ASSERT(length != 0); + DCHECK(length != 0); consumed_ += length; return string; } @@ -9035,7 +8823,7 @@ String* ConsStringIteratorOp::NextLeaf(bool* blew_stack) { uint16_t ConsString::ConsStringGet(int index) { - ASSERT(index >= 0 && index < this->length()); + DCHECK(index >= 0 && index < this->length()); // Check for a flattened cons string if (second()->length() == 0) { @@ -9079,7 +8867,7 @@ void String::WriteToFlat(String* src, int from = f; int to = t; while (true) { - ASSERT(0 <= from && from <= to && to <= source->length()); + DCHECK(0 <= from && from <= to && to <= source->length()); switch (StringShape(source).full_representation_tag()) { case kOneByteStringTag | kExternalStringTag: { CopyChars(sink, @@ -9196,7 +8984,7 @@ Handle<FixedArray> String::CalculateLineEnds(Handle<String> src, { DisallowHeapAllocation no_allocation; // ensure vectors stay valid. // Dispatch on type of strings. String::FlatContent content = src->GetFlatContent(); - ASSERT(content.IsFlat()); + DCHECK(content.IsFlat()); if (content.IsAscii()) { CalculateLineEndsImpl(isolate, &line_ends, @@ -9230,8 +9018,8 @@ static inline bool CompareRawStringContents(const Char* const a, // then we have to check that the strings are aligned before // comparing them blockwise. const int kAlignmentMask = sizeof(uint32_t) - 1; // NOLINT - uint32_t pa_addr = reinterpret_cast<uint32_t>(a); - uint32_t pb_addr = reinterpret_cast<uint32_t>(b); + uintptr_t pa_addr = reinterpret_cast<uintptr_t>(a); + uintptr_t pb_addr = reinterpret_cast<uintptr_t>(b); if (((pa_addr & kAlignmentMask) | (pb_addr & kAlignmentMask)) == 0) { #endif const int kStepSize = sizeof(int) / sizeof(Char); // NOLINT @@ -9261,7 +9049,7 @@ template<typename Chars1, typename Chars2> class RawStringComparator : public AllStatic { public: static inline bool compare(const Chars1* a, const Chars2* b, int len) { - ASSERT(sizeof(Chars1) != sizeof(Chars2)); + DCHECK(sizeof(Chars1) != sizeof(Chars2)); for (int i = 0; i < len; i++) { if (a[i] != b[i]) { return false; @@ -9319,7 +9107,7 @@ class StringComparator { } void Advance(int consumed) { - ASSERT(consumed <= length_); + DCHECK(consumed <= length_); // Still in buffer. if (length_ != consumed) { if (is_one_byte_) { @@ -9333,8 +9121,8 @@ class StringComparator { // Advance state. 
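// The ConsStringIteratorOp code above walks a cons string -- a binary tree
// of string pieces -- left to right without recursion, restarting from the
// root when its fixed-size stack blows. A recursive simplification of the
// same traversal, with ConsNode as an illustrative stand-in:
#include <string>

struct ConsNode {
  std::string leaf;            // meaningful only when both children are null
  ConsNode* first = nullptr;
  ConsNode* second = nullptr;
};

void AppendFlat(const ConsNode* n, std::string* out) {
  if (n == nullptr) return;
  if (n->first == nullptr && n->second == nullptr) {
    *out += n->leaf;           // leaf segment reached
    return;
  }
  AppendFlat(n->first, out);   // left subtree first
  AppendFlat(n->second, out);  // then the right subtree
}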
int offset; String* next = op_->Next(&offset); - ASSERT_EQ(0, offset); - ASSERT(next != NULL); + DCHECK_EQ(0, offset); + DCHECK(next != NULL); String::VisitFlat(this, next); } @@ -9370,7 +9158,7 @@ class StringComparator { state_2_.Init(string_2); while (true) { int to_check = Min(state_1_.length_, state_2_.length_); - ASSERT(to_check > 0 && to_check <= length); + DCHECK(to_check > 0 && to_check <= length); bool is_equal; if (state_1_.is_one_byte_) { if (state_2_.is_one_byte_) { @@ -9412,7 +9200,7 @@ bool String::SlowEquals(String* other) { // Fast check: if hash code is computed for both strings // a fast negative check can be performed. if (HasHashCode() && other->HasHashCode()) { -#ifdef ENABLE_SLOW_ASSERTS +#ifdef ENABLE_SLOW_DCHECKS if (FLAG_enable_slow_asserts) { if (Hash() != other->Hash()) { bool found_difference = false; @@ -9422,7 +9210,7 @@ bool String::SlowEquals(String* other) { break; } } - ASSERT(found_difference); + DCHECK(found_difference); } } #endif @@ -9456,7 +9244,7 @@ bool String::SlowEquals(Handle<String> one, Handle<String> two) { // Fast check: if hash code is computed for both strings // a fast negative check can be performed. if (one->HasHashCode() && two->HasHashCode()) { -#ifdef ENABLE_SLOW_ASSERTS +#ifdef ENABLE_SLOW_DCHECKS if (FLAG_enable_slow_asserts) { if (one->Hash() != two->Hash()) { bool found_difference = false; @@ -9466,7 +9254,7 @@ bool String::SlowEquals(Handle<String> one, Handle<String> two) { break; } } - ASSERT(found_difference); + DCHECK(found_difference); } } #endif @@ -9529,7 +9317,7 @@ bool String::IsUtf8EqualTo(Vector<const char> str, bool allow_prefix_match) { for (i = 0; i < slen && remaining_in_str > 0; i++) { unsigned cursor = 0; uint32_t r = unibrow::Utf8::ValueOf(utf8_data, remaining_in_str, &cursor); - ASSERT(cursor > 0 && cursor <= remaining_in_str); + DCHECK(cursor > 0 && cursor <= remaining_in_str); if (r > unibrow::Utf16::kMaxNonSurrogateCharCode) { if (i > slen - 1) return false; if (Get(i++) != unibrow::Utf16::LeadSurrogate(r)) return false; @@ -9575,50 +9363,18 @@ bool String::IsTwoByteEqualTo(Vector<const uc16> str) { } -class IteratingStringHasher: public StringHasher { - public: - static inline uint32_t Hash(String* string, uint32_t seed) { - IteratingStringHasher hasher(string->length(), seed); - // Nothing to do. - if (hasher.has_trivial_hash()) return hasher.GetHashField(); - ConsString* cons_string = String::VisitFlat(&hasher, string); - // The string was flat. - if (cons_string == NULL) return hasher.GetHashField(); - // This is a ConsString, iterate across it. - ConsStringIteratorOp op(cons_string); - int offset; - while (NULL != (string = op.Next(&offset))) { - String::VisitFlat(&hasher, string, offset); - } - return hasher.GetHashField(); - } - inline void VisitOneByteString(const uint8_t* chars, int length) { - AddCharacters(chars, length); - } - inline void VisitTwoByteString(const uint16_t* chars, int length) { - AddCharacters(chars, length); - } - - private: - inline IteratingStringHasher(int len, uint32_t seed) - : StringHasher(len, seed) { - } - DISALLOW_COPY_AND_ASSIGN(IteratingStringHasher); -}; - - uint32_t String::ComputeAndSetHash() { // Should only be called if hash code has not yet been computed. - ASSERT(!HasHashCode()); + DCHECK(!HasHashCode()); // Store the hash code in the object. uint32_t field = IteratingStringHasher::Hash(this, GetHeap()->HashSeed()); set_hash_field(field); // Check the hash code is there. 
- ASSERT(HasHashCode()); + DCHECK(HasHashCode()); uint32_t result = field >> kHashShift; - ASSERT(result != 0); // Ensure that the hash value of 0 is never computed. + DCHECK(result != 0); // Ensure that the hash value of 0 is never computed. return result; } @@ -9628,29 +9384,7 @@ bool String::ComputeArrayIndex(uint32_t* index) { if (length == 0 || length > kMaxArrayIndexSize) return false; ConsStringIteratorOp op; StringCharacterStream stream(this, &op); - uint16_t ch = stream.GetNext(); - - // If the string begins with a '0' character, it must only consist - // of it to be a legal array index. - if (ch == '0') { - *index = 0; - return length == 1; - } - - // Convert string to uint32 array index; character by character. - int d = ch - '0'; - if (d < 0 || d > 9) return false; - uint32_t result = d; - while (stream.HasMore()) { - d = stream.GetNext() - '0'; - if (d < 0 || d > 9) return false; - // Check that the new result is below the 32 bit limit. - if (result > 429496729U - ((d > 5) ? 1 : 0)) return false; - result = (result * 10) + d; - } - - *index = result; - return true; + return StringToArrayIndex(&stream, index); } @@ -9660,7 +9394,7 @@ bool String::SlowAsArrayIndex(uint32_t* index) { uint32_t field = hash_field(); if ((field & kIsNotArrayIndexMask) != 0) return false; // Isolate the array index form the full hash field. - *index = (kArrayIndexHashMask & field) >> kHashShift; + *index = ArrayIndexValueBits::decode(field); return true; } else { return ComputeArrayIndex(index); @@ -9677,7 +9411,7 @@ Handle<String> SeqString::Truncate(Handle<SeqString> string, int new_length) { old_size = SeqOneByteString::SizeFor(old_length); new_size = SeqOneByteString::SizeFor(new_length); } else { - ASSERT(string->IsSeqTwoByteString()); + DCHECK(string->IsSeqTwoByteString()); old_size = SeqTwoByteString::SizeFor(old_length); new_size = SeqTwoByteString::SizeFor(new_length); } @@ -9685,8 +9419,8 @@ Handle<String> SeqString::Truncate(Handle<SeqString> string, int new_length) { int delta = old_size - new_size; Address start_of_string = string->address(); - ASSERT_OBJECT_ALIGNED(start_of_string); - ASSERT_OBJECT_ALIGNED(start_of_string + new_size); + DCHECK_OBJECT_ALIGNED(start_of_string); + DCHECK_OBJECT_ALIGNED(start_of_string + new_size); Heap* heap = string->GetHeap(); NewSpace* newspace = heap->new_space(); @@ -9713,16 +9447,16 @@ Handle<String> SeqString::Truncate(Handle<SeqString> string, int new_length) { uint32_t StringHasher::MakeArrayIndexHash(uint32_t value, int length) { // For array indexes mix the length into the hash as an array index could // be zero. 
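Two details in this range reward a closer look. First, the digit loop deleted from ComputeArrayIndex (now delegated to a shared StringToArrayIndex helper) used a compact overflow guard: appending digit d to result stays within uint32_t only while result * 10 + d <= 4294967295, and since 4294967295 == 429496729 * 10 + 5, a prior result above 429496729 always overflows, while exactly 429496729 overflows only for digits greater than 5. That is precisely what the condition result > 429496729U - ((d > 5) ? 1 : 0) rejects. Restated standalone:

    #include <cstdint>

    // True iff result * 10 + d still fits in uint32_t
    // (UINT32_MAX == 4294967295 == 429496729 * 10 + 5).
    bool CanAppendDigit(uint32_t result, int d) {
      return result <= 429496729u - ((d > 5) ? 1 : 0);
    }

Second, SlowAsArrayIndex now reads the cached index with ArrayIndexValueBits::decode(field) instead of a hand-rolled mask-and-shift; the matching encode side appears in MakeArrayIndexHash just below. A simplified BitField helper in that spirit (V8's real template adds static checks and more operations):

    #include <cstdint>

    template <class T, int shift, int size>
    struct BitField {
      static const int kShift = shift;
      static const uint32_t kMask = ((1u << size) - 1u) << shift;
      static uint32_t encode(T value) {
        return static_cast<uint32_t>(value) << kShift;
      }
      static T decode(uint32_t field) {
        return static_cast<T>((field & kMask) >> kShift);
      }
    };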
- ASSERT(length > 0); - ASSERT(length <= String::kMaxArrayIndexSize); - ASSERT(TenToThe(String::kMaxCachedArrayIndexLength) < + DCHECK(length > 0); + DCHECK(length <= String::kMaxArrayIndexSize); + DCHECK(TenToThe(String::kMaxCachedArrayIndexLength) < (1 << String::kArrayIndexValueBits)); - value <<= String::kHashShift; - value |= length << String::kArrayIndexHashLengthShift; + value <<= String::ArrayIndexValueBits::kShift; + value |= length << String::ArrayIndexLengthBits::kShift; - ASSERT((value & String::kIsNotArrayIndexMask) == 0); - ASSERT((length > String::kMaxCachedArrayIndexLength) || + DCHECK((value & String::kIsNotArrayIndexMask) == 0); + DCHECK((length > String::kMaxCachedArrayIndexLength) || (value & String::kContainsCachedArrayIndexMask) == 0); return value; } @@ -9747,7 +9481,7 @@ uint32_t StringHasher::ComputeUtf8Hash(Vector<const char> chars, int vector_length = chars.length(); // Handle some edge cases if (vector_length <= 1) { - ASSERT(vector_length == 0 || + DCHECK(vector_length == 0 || static_cast<uint8_t>(chars.start()[0]) <= unibrow::Utf8::kMaxOneByteChar); *utf16_length_out = vector_length; @@ -9760,11 +9494,11 @@ uint32_t StringHasher::ComputeUtf8Hash(Vector<const char> chars, const uint8_t* stream = reinterpret_cast<const uint8_t*>(chars.start()); int utf16_length = 0; bool is_index = true; - ASSERT(hasher.is_array_index_); + DCHECK(hasher.is_array_index_); while (remaining > 0) { unsigned consumed = 0; uint32_t c = unibrow::Utf8::ValueOf(stream, remaining, &consumed); - ASSERT(consumed > 0 && consumed <= remaining); + DCHECK(consumed > 0 && consumed <= remaining); stream += consumed; remaining -= consumed; bool is_two_characters = c > unibrow::Utf16::kMaxNonSurrogateCharCode; @@ -9798,119 +9532,6 @@ void String::PrintOn(FILE* file) { } -static void TrimEnumCache(Heap* heap, Map* map, DescriptorArray* descriptors) { - int live_enum = map->EnumLength(); - if (live_enum == kInvalidEnumCacheSentinel) { - live_enum = map->NumberOfDescribedProperties(OWN_DESCRIPTORS, DONT_ENUM); - } - if (live_enum == 0) return descriptors->ClearEnumCache(); - - FixedArray* enum_cache = descriptors->GetEnumCache(); - - int to_trim = enum_cache->length() - live_enum; - if (to_trim <= 0) return; - RightTrimFixedArray<Heap::FROM_GC>( - heap, descriptors->GetEnumCache(), to_trim); - - if (!descriptors->HasEnumIndicesCache()) return; - FixedArray* enum_indices_cache = descriptors->GetEnumIndicesCache(); - RightTrimFixedArray<Heap::FROM_GC>(heap, enum_indices_cache, to_trim); -} - - -static void TrimDescriptorArray(Heap* heap, - Map* map, - DescriptorArray* descriptors, - int number_of_own_descriptors) { - int number_of_descriptors = descriptors->number_of_descriptors_storage(); - int to_trim = number_of_descriptors - number_of_own_descriptors; - if (to_trim == 0) return; - - RightTrimFixedArray<Heap::FROM_GC>( - heap, descriptors, to_trim * DescriptorArray::kDescriptorSize); - descriptors->SetNumberOfDescriptors(number_of_own_descriptors); - - if (descriptors->HasEnumCache()) TrimEnumCache(heap, map, descriptors); - descriptors->Sort(); -} - - -// Clear a possible back pointer in case the transition leads to a dead map. -// Return true in case a back pointer has been cleared and false otherwise. 
-static bool ClearBackPointer(Heap* heap, Map* target) { - if (Marking::MarkBitFrom(target).Get()) return false; - target->SetBackPointer(heap->undefined_value(), SKIP_WRITE_BARRIER); - return true; -} - - -// TODO(mstarzinger): This method should be moved into MarkCompactCollector, -// because it cannot be called from outside the GC and we already have methods -// depending on the transitions layout in the GC anyways. -void Map::ClearNonLiveTransitions(Heap* heap) { - // If there are no transitions to be cleared, return. - // TODO(verwaest) Should be an assert, otherwise back pointers are not - // properly cleared. - if (!HasTransitionArray()) return; - - TransitionArray* t = transitions(); - MarkCompactCollector* collector = heap->mark_compact_collector(); - - int transition_index = 0; - - DescriptorArray* descriptors = instance_descriptors(); - bool descriptors_owner_died = false; - - // Compact all live descriptors to the left. - for (int i = 0; i < t->number_of_transitions(); ++i) { - Map* target = t->GetTarget(i); - if (ClearBackPointer(heap, target)) { - if (target->instance_descriptors() == descriptors) { - descriptors_owner_died = true; - } - } else { - if (i != transition_index) { - Name* key = t->GetKey(i); - t->SetKey(transition_index, key); - Object** key_slot = t->GetKeySlot(transition_index); - collector->RecordSlot(key_slot, key_slot, key); - // Target slots do not need to be recorded since maps are not compacted. - t->SetTarget(transition_index, t->GetTarget(i)); - } - transition_index++; - } - } - - // If there are no transitions to be cleared, return. - // TODO(verwaest) Should be an assert, otherwise back pointers are not - // properly cleared. - if (transition_index == t->number_of_transitions()) return; - - int number_of_own_descriptors = NumberOfOwnDescriptors(); - - if (descriptors_owner_died) { - if (number_of_own_descriptors > 0) { - TrimDescriptorArray(heap, this, descriptors, number_of_own_descriptors); - ASSERT(descriptors->number_of_descriptors() == number_of_own_descriptors); - set_owns_descriptors(true); - } else { - ASSERT(descriptors == GetHeap()->empty_descriptor_array()); - } - } - - // Note that we never eliminate a transition array, though we might right-trim - // such that number_of_transitions() == 0. If this assumption changes, - // TransitionArray::CopyInsert() will need to deal with the case that a - // transition array disappeared during GC. - int trim = t->number_of_transitions() - transition_index; - if (trim > 0) { - RightTrimFixedArray<Heap::FROM_GC>(heap, t, t->IsSimpleTransition() - ? trim : trim * TransitionArray::kTransitionSize); - } - ASSERT(HasTransitionArray()); -} - - int Map::Hash() { // For performance reasons we only hash the 3 most variable fields of a map: // constructor, prototype and bit_field2. 
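The ClearNonLiveTransitions block removed above (slated to move into MarkCompactCollector, per its TODO) performs a classic in-place compaction: transitions whose target map died in the current GC get their back pointers cleared and are dropped, live ones slide left, and the array is right-trimmed to the surviving count. The core loop, restated over a standalone type:

    #include <vector>

    struct Transition {
      bool target_is_live;  // dead targets had their back pointer cleared
      // key and target map omitted in this sketch
    };

    // Compact live transitions to the front and trim, mirroring the loop
    // and the RightTrimFixedArray call in the removed code.
    void CompactTransitions(std::vector<Transition>& transitions) {
      size_t live = 0;
      for (size_t i = 0; i < transitions.size(); ++i) {
        if (!transitions[i].target_is_live) continue;
        if (i != live) transitions[live] = transitions[i];
        ++live;
      }
      transitions.resize(live);
    }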
@@ -9936,8 +9557,8 @@ static bool CheckEquivalent(Map* first, Map* second) { first->instance_type() == second->instance_type() && first->bit_field() == second->bit_field() && first->bit_field2() == second->bit_field2() && - first->is_observed() == second->is_observed() && - first->function_with_prototype() == second->function_with_prototype(); + first->is_frozen() == second->is_frozen() && + first->has_instance_call_handler() == second->has_instance_call_handler(); } @@ -9955,13 +9576,44 @@ bool Map::EquivalentToForNormalization(Map* other, void ConstantPoolArray::ConstantPoolIterateBody(ObjectVisitor* v) { - for (int i = 0; i < count_of_code_ptr_entries(); i++) { - int index = first_code_ptr_index() + i; - v->VisitCodeEntry(reinterpret_cast<Address>(RawFieldOfElementAt(index))); - } - for (int i = 0; i < count_of_heap_ptr_entries(); i++) { - int index = first_heap_ptr_index() + i; - v->VisitPointer(RawFieldOfElementAt(index)); + // Unfortunately the serializer relies on pointers within an object being + // visited in-order, so we have to iterate both the code and heap pointers in + // the small section before doing so in the extended section. + for (int s = 0; s <= final_section(); ++s) { + LayoutSection section = static_cast<LayoutSection>(s); + ConstantPoolArray::Iterator code_iter(this, ConstantPoolArray::CODE_PTR, + section); + while (!code_iter.is_finished()) { + v->VisitCodeEntry(reinterpret_cast<Address>( + RawFieldOfElementAt(code_iter.next_index()))); + } + + ConstantPoolArray::Iterator heap_iter(this, ConstantPoolArray::HEAP_PTR, + section); + while (!heap_iter.is_finished()) { + v->VisitPointer(RawFieldOfElementAt(heap_iter.next_index())); + } + } +} + + +void ConstantPoolArray::ClearPtrEntries(Isolate* isolate) { + Type type[] = { CODE_PTR, HEAP_PTR }; + Address default_value[] = { + isolate->builtins()->builtin(Builtins::kIllegal)->entry(), + reinterpret_cast<Address>(isolate->heap()->undefined_value()) }; + + for (int i = 0; i < 2; ++i) { + for (int s = 0; s <= final_section(); ++s) { + LayoutSection section = static_cast<LayoutSection>(s); + if (number_of_entries(type[i], section) > 0) { + int offset = OffsetOfElementAt(first_index(type[i], section)); + MemsetPointer( + reinterpret_cast<Address*>(HeapObject::RawField(this, offset)), + default_value[i], + number_of_entries(type[i], section)); + } + } } } @@ -9976,11 +9628,11 @@ void JSFunction::JSFunctionIterateBody(int object_size, ObjectVisitor* v) { void JSFunction::MarkForOptimization() { - ASSERT(is_compiled() || GetIsolate()->DebuggerHasBreakPoints()); - ASSERT(!IsOptimized()); - ASSERT(shared()->allows_lazy_compilation() || + DCHECK(is_compiled() || GetIsolate()->DebuggerHasBreakPoints()); + DCHECK(!IsOptimized()); + DCHECK(shared()->allows_lazy_compilation() || code()->optimizable()); - ASSERT(!shared()->is_generator()); + DCHECK(!shared()->is_generator()); set_code_no_write_barrier( GetIsolate()->builtins()->builtin(Builtins::kCompileOptimized)); // No write barrier required, since the builtin is part of the root set. 
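The rewritten ConstantPoolIterateBody above exists to satisfy the constraint its comment names: the serializer requires pointer slots within an object to be visited in order, so each layout section walks all of its code pointers before its heap pointers, small section first. The iteration shape, reduced to standalone C++ (the section and type names mirror the hunk; the per-entry iterator itself is V8-internal):

    #include <initializer_list>

    enum LayoutSection { SMALL_SECTION = 0, EXTENDED_SECTION = 1 };
    enum Type { CODE_PTR, HEAP_PTR };

    // Visit order: (small: code, heap), then (extended: code, heap).
    template <typename Visitor>
    void IterateInOrder(LayoutSection final_section, Visitor visit) {
      for (int s = 0; s <= final_section; ++s) {
        for (Type t : {CODE_PTR, HEAP_PTR}) {
          visit(static_cast<LayoutSection>(s), t);  // every entry of (s, t)
        }
      }
    }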
@@ -9988,11 +9640,11 @@ void JSFunction::MarkForOptimization() { void JSFunction::MarkForConcurrentOptimization() { - ASSERT(is_compiled() || GetIsolate()->DebuggerHasBreakPoints()); - ASSERT(!IsOptimized()); - ASSERT(shared()->allows_lazy_compilation() || code()->optimizable()); - ASSERT(!shared()->is_generator()); - ASSERT(GetIsolate()->concurrent_recompilation_enabled()); + DCHECK(is_compiled() || GetIsolate()->DebuggerHasBreakPoints()); + DCHECK(!IsOptimized()); + DCHECK(shared()->allows_lazy_compilation() || code()->optimizable()); + DCHECK(!shared()->is_generator()); + DCHECK(GetIsolate()->concurrent_recompilation_enabled()); if (FLAG_trace_concurrent_recompilation) { PrintF(" ** Marking "); PrintName(); @@ -10007,10 +9659,10 @@ void JSFunction::MarkForConcurrentOptimization() { void JSFunction::MarkInOptimizationQueue() { // We can only arrive here via the concurrent-recompilation builtin. If // break points were set, the code would point to the lazy-compile builtin. - ASSERT(!GetIsolate()->DebuggerHasBreakPoints()); - ASSERT(IsMarkedForConcurrentOptimization() && !IsOptimized()); - ASSERT(shared()->allows_lazy_compilation() || code()->optimizable()); - ASSERT(GetIsolate()->concurrent_recompilation_enabled()); + DCHECK(!GetIsolate()->DebuggerHasBreakPoints()); + DCHECK(IsMarkedForConcurrentOptimization() && !IsOptimized()); + DCHECK(shared()->allows_lazy_compilation() || code()->optimizable()); + DCHECK(GetIsolate()->concurrent_recompilation_enabled()); if (FLAG_trace_concurrent_recompilation) { PrintF(" ** Queueing "); PrintName(); @@ -10029,22 +9681,22 @@ void SharedFunctionInfo::AddToOptimizedCodeMap( Handle<FixedArray> literals, BailoutId osr_ast_id) { Isolate* isolate = shared->GetIsolate(); - ASSERT(code->kind() == Code::OPTIMIZED_FUNCTION); - ASSERT(native_context->IsNativeContext()); + DCHECK(code->kind() == Code::OPTIMIZED_FUNCTION); + DCHECK(native_context->IsNativeContext()); STATIC_ASSERT(kEntryLength == 4); Handle<FixedArray> new_code_map; Handle<Object> value(shared->optimized_code_map(), isolate); int old_length; if (value->IsSmi()) { // No optimized code map. - ASSERT_EQ(0, Smi::cast(*value)->value()); + DCHECK_EQ(0, Smi::cast(*value)->value()); // Create 3 entries per context {context, code, literals}. new_code_map = isolate->factory()->NewFixedArray(kInitialLength); old_length = kEntriesStart; } else { // Copy old map and append one new entry. 
Handle<FixedArray> old_code_map = Handle<FixedArray>::cast(value); - ASSERT_EQ(-1, shared->SearchOptimizedCodeMap(*native_context, osr_ast_id)); + DCHECK_EQ(-1, shared->SearchOptimizedCodeMap(*native_context, osr_ast_id)); old_length = old_code_map->length(); new_code_map = FixedArray::CopySize( old_code_map, old_length + kEntryLength); @@ -10062,12 +9714,12 @@ void SharedFunctionInfo::AddToOptimizedCodeMap( #ifdef DEBUG for (int i = kEntriesStart; i < new_code_map->length(); i += kEntryLength) { - ASSERT(new_code_map->get(i + kContextOffset)->IsNativeContext()); - ASSERT(new_code_map->get(i + kCachedCodeOffset)->IsCode()); - ASSERT(Code::cast(new_code_map->get(i + kCachedCodeOffset))->kind() == + DCHECK(new_code_map->get(i + kContextOffset)->IsNativeContext()); + DCHECK(new_code_map->get(i + kCachedCodeOffset)->IsCode()); + DCHECK(Code::cast(new_code_map->get(i + kCachedCodeOffset))->kind() == Code::OPTIMIZED_FUNCTION); - ASSERT(new_code_map->get(i + kLiteralsOffset)->IsFixedArray()); - ASSERT(new_code_map->get(i + kOsrAstIdOffset)->IsSmi()); + DCHECK(new_code_map->get(i + kLiteralsOffset)->IsFixedArray()); + DCHECK(new_code_map->get(i + kOsrAstIdOffset)->IsSmi()); } #endif shared->set_optimized_code_map(*new_code_map); @@ -10075,11 +9727,11 @@ void SharedFunctionInfo::AddToOptimizedCodeMap( FixedArray* SharedFunctionInfo::GetLiteralsFromOptimizedCodeMap(int index) { - ASSERT(index > kEntriesStart); + DCHECK(index > kEntriesStart); FixedArray* code_map = FixedArray::cast(optimized_code_map()); if (!bound()) { FixedArray* cached_literals = FixedArray::cast(code_map->get(index + 1)); - ASSERT_NE(NULL, cached_literals); + DCHECK_NE(NULL, cached_literals); return cached_literals; } return NULL; @@ -10087,10 +9739,10 @@ FixedArray* SharedFunctionInfo::GetLiteralsFromOptimizedCodeMap(int index) { Code* SharedFunctionInfo::GetCodeFromOptimizedCodeMap(int index) { - ASSERT(index > kEntriesStart); + DCHECK(index > kEntriesStart); FixedArray* code_map = FixedArray::cast(optimized_code_map()); Code* code = Code::cast(code_map->get(index)); - ASSERT_NE(NULL, code); + DCHECK_NE(NULL, code); return code; } @@ -10105,7 +9757,7 @@ void SharedFunctionInfo::ClearOptimizedCodeMap() { flusher->EvictOptimizedCodeMap(this); } - ASSERT(code_map->get(kNextMapIndex)->IsUndefined()); + DCHECK(code_map->get(kNextMapIndex)->IsUndefined()); set_optimized_code_map(Smi::FromInt(0)); } @@ -10119,7 +9771,7 @@ void SharedFunctionInfo::EvictFromOptimizedCodeMap(Code* optimized_code, int dst = kEntriesStart; int length = code_map->length(); for (int src = kEntriesStart; src < length; src += kEntryLength) { - ASSERT(code_map->get(src)->IsNativeContext()); + DCHECK(code_map->get(src)->IsNativeContext()); if (Code::cast(code_map->get(src + kCachedCodeOffset)) == optimized_code) { // Evict the src entry by not copying it to the dst entry. if (FLAG_trace_opt) { @@ -10149,7 +9801,7 @@ void SharedFunctionInfo::EvictFromOptimizedCodeMap(Code* optimized_code, } if (dst != length) { // Always trim even when array is cleared because of heap verifier. 
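The DCHECK loop above doubles as documentation for the optimized code map's layout: past a short header, the FixedArray holds fixed four-slot records of {native context, optimized code, literals, OSR ast id}, per the STATIC_ASSERT(kEntryLength == 4). A hypothetical flat-record search in that shape; the offset values below are assumptions consistent with the get(index) / get(index + 1) accesses in this range, and the real map stores tagged V8 objects rather than raw words:

    #include <cstdint>
    #include <vector>

    constexpr int kEntriesStart = 1;  // slot 0: next-map link (assumed)
    constexpr int kEntryLength = 4;
    constexpr int kContextOffset = 0;
    constexpr int kCachedCodeOffset = 1;
    constexpr int kLiteralsOffset = 2;
    constexpr int kOsrAstIdOffset = 3;

    // Returns the index of the cached-code slot, or -1 if nothing matches;
    // the literals array then lives at the returned index + 1.
    int FindEntry(const std::vector<uintptr_t>& code_map, uintptr_t context,
                  int32_t osr_ast_id) {
      for (size_t i = kEntriesStart; i + kEntryLength <= code_map.size();
           i += kEntryLength) {
        if (code_map[i + kContextOffset] == context &&
            static_cast<int32_t>(code_map[i + kOsrAstIdOffset]) == osr_ast_id)
          return static_cast<int>(i + kCachedCodeOffset);
      }
      return -1;
    }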
- RightTrimFixedArray<Heap::FROM_MUTATOR>(GetHeap(), code_map, length - dst); + GetHeap()->RightTrimFixedArray<Heap::FROM_MUTATOR>(code_map, length - dst); if (code_map->length() == kEntriesStart) ClearOptimizedCodeMap(); } } @@ -10157,27 +9809,42 @@ void SharedFunctionInfo::EvictFromOptimizedCodeMap(Code* optimized_code, void SharedFunctionInfo::TrimOptimizedCodeMap(int shrink_by) { FixedArray* code_map = FixedArray::cast(optimized_code_map()); - ASSERT(shrink_by % kEntryLength == 0); - ASSERT(shrink_by <= code_map->length() - kEntriesStart); + DCHECK(shrink_by % kEntryLength == 0); + DCHECK(shrink_by <= code_map->length() - kEntriesStart); // Always trim even when array is cleared because of heap verifier. - RightTrimFixedArray<Heap::FROM_GC>(GetHeap(), code_map, shrink_by); + GetHeap()->RightTrimFixedArray<Heap::FROM_GC>(code_map, shrink_by); if (code_map->length() == kEntriesStart) { ClearOptimizedCodeMap(); } } -void JSObject::OptimizeAsPrototype(Handle<JSObject> object) { +void JSObject::OptimizeAsPrototype(Handle<JSObject> object, + PrototypeOptimizationMode mode) { if (object->IsGlobalObject()) return; - - // Make sure prototypes are fast objects and their maps have the bit set - // so they remain fast. + if (object->IsJSGlobalProxy()) return; + if (mode == FAST_PROTOTYPE && !object->map()->is_prototype_map()) { + // First normalize to ensure all JSFunctions are CONSTANT. + JSObject::NormalizeProperties(object, KEEP_INOBJECT_PROPERTIES, 0); + } if (!object->HasFastProperties()) { - TransformToFastProperties(object, 0); + JSObject::MigrateSlowToFast(object, 0); + } + if (mode == FAST_PROTOTYPE && object->HasFastProperties() && + !object->map()->is_prototype_map()) { + Handle<Map> new_map = Map::Copy(handle(object->map())); + JSObject::MigrateToMap(object, new_map); + object->map()->set_is_prototype_map(true); } } +void JSObject::ReoptimizeIfPrototype(Handle<JSObject> object) { + if (!object->map()->is_prototype_map()) return; + OptimizeAsPrototype(object, FAST_PROTOTYPE); +} + + Handle<Object> CacheInitialJSArrayMaps( Handle<Context> native_context, Handle<Map> initial_map) { // Replace all of the cached initial array maps in the native context with @@ -10188,7 +9855,7 @@ Handle<Object> CacheInitialJSArrayMaps( Handle<Map> current_map = initial_map; ElementsKind kind = current_map->elements_kind(); - ASSERT(kind == GetInitialFastElementsKind()); + DCHECK(kind == GetInitialFastElementsKind()); maps->set(kind, *current_map); for (int i = GetSequenceIndexFromFastElementsKind(kind) + 1; i < kFastElementsKindCount; ++i) { @@ -10196,7 +9863,7 @@ Handle<Object> CacheInitialJSArrayMaps( ElementsKind next_kind = GetFastElementsKindFromSequenceIndex(i); if (current_map->HasElementsTransition()) { new_map = handle(current_map->elements_transition_map()); - ASSERT(new_map->elements_kind() == next_kind); + DCHECK(new_map->elements_kind() == next_kind); } else { new_map = Map::CopyAsElementsKind( current_map, next_kind, INSERT_TRANSITION); @@ -10211,13 +9878,9 @@ Handle<Object> CacheInitialJSArrayMaps( void JSFunction::SetInstancePrototype(Handle<JSFunction> function, Handle<Object> value) { - ASSERT(value->IsJSReceiver()); + Isolate* isolate = function->GetIsolate(); - // First some logic for the map of the prototype to make sure it is in fast - // mode. - if (value->IsJSObject()) { - JSObject::OptimizeAsPrototype(Handle<JSObject>::cast(value)); - } + DCHECK(value->IsJSReceiver()); // Now some logic for the maps of the objects that are created by using this // function as a constructor. 
@@ -10226,35 +9889,49 @@ void JSFunction::SetInstancePrototype(Handle<JSFunction> function, // copy containing the new prototype. Also complete any in-object // slack tracking that is in progress at this point because it is // still tracking the old copy. - if (function->shared()->IsInobjectSlackTrackingInProgress()) { - function->shared()->CompleteInobjectSlackTracking(); + if (function->IsInobjectSlackTrackingInProgress()) { + function->CompleteInobjectSlackTracking(); } - Handle<Map> new_map = Map::Copy(handle(function->initial_map())); - new_map->set_prototype(*value); - // If the function is used as the global Array function, cache the - // initial map (and transitioned versions) in the native context. - Context* native_context = function->context()->native_context(); - Object* array_function = native_context->get(Context::ARRAY_FUNCTION_INDEX); - if (array_function->IsJSFunction() && - *function == JSFunction::cast(array_function)) { - CacheInitialJSArrayMaps(handle(native_context), new_map); + Handle<Map> initial_map(function->initial_map(), isolate); + + if (!initial_map->GetIsolate()->bootstrapper()->IsActive() && + initial_map->instance_type() == JS_OBJECT_TYPE) { + // Put the value in the initial map field until an initial map is needed. + // At that point, a new initial map is created and the prototype is put + // into the initial map where it belongs. + function->set_prototype_or_initial_map(*value); + } else { + Handle<Map> new_map = Map::Copy(initial_map); + JSFunction::SetInitialMap(function, new_map, value); + + // If the function is used as the global Array function, cache the + // initial map (and transitioned versions) in the native context. + Context* native_context = function->context()->native_context(); + Object* array_function = + native_context->get(Context::ARRAY_FUNCTION_INDEX); + if (array_function->IsJSFunction() && + *function == JSFunction::cast(array_function)) { + CacheInitialJSArrayMaps(handle(native_context, isolate), new_map); + } } - function->set_initial_map(*new_map); + // Deoptimize all code that embeds the previous initial map. + initial_map->dependent_code()->DeoptimizeDependentCodeGroup( + isolate, DependentCode::kInitialMapChangedGroup); } else { // Put the value in the initial map field until an initial map is // needed. At that point, a new initial map is created and the // prototype is put into the initial map where it belongs. 
function->set_prototype_or_initial_map(*value); } - function->GetHeap()->ClearInstanceofCache(); + isolate->heap()->ClearInstanceofCache(); } void JSFunction::SetPrototype(Handle<JSFunction> function, Handle<Object> value) { - ASSERT(function->should_have_prototype()); + DCHECK(function->should_have_prototype()); Handle<Object> construct_prototype = value; // If the value is not a JSReceiver, store the value in the map's @@ -10304,6 +9981,18 @@ bool JSFunction::RemovePrototype() { } +void JSFunction::SetInitialMap(Handle<JSFunction> function, Handle<Map> map, + Handle<Object> prototype) { + if (prototype->IsJSObject()) { + Handle<JSObject> js_proto = Handle<JSObject>::cast(prototype); + JSObject::OptimizeAsPrototype(js_proto, FAST_PROTOTYPE); + } + map->set_prototype(*prototype); + function->set_prototype_or_initial_map(*map); + map->set_constructor(*function); +} + + void JSFunction::EnsureHasInitialMap(Handle<JSFunction> function) { if (function->has_initial_map()) return; Isolate* isolate = function->GetIsolate(); @@ -10333,16 +10022,14 @@ void JSFunction::EnsureHasInitialMap(Handle<JSFunction> function) { } map->set_inobject_properties(in_object_properties); map->set_unused_property_fields(in_object_properties); - map->set_prototype(*prototype); - ASSERT(map->has_fast_object_elements()); + DCHECK(map->has_fast_object_elements()); + + // Finally link initial map and constructor function. + JSFunction::SetInitialMap(function, map, Handle<JSReceiver>::cast(prototype)); if (!function->shared()->is_generator()) { - function->shared()->StartInobjectSlackTracking(*map); + function->StartInobjectSlackTracking(); } - - // Finally link initial map and constructor function. - function->set_initial_map(*map); - map->set_constructor(*function); } @@ -10369,6 +10056,7 @@ Context* JSFunction::NativeContextFromLiterals(FixedArray* literals) { // "" only the top-level function // "name" only the function "name" // "name*" only functions starting with "name" +// "~" none; the tilde is not an identifier bool JSFunction::PassesFilter(const char* raw_filter) { if (*raw_filter == '*') return true; String* name = shared()->DebugName(); @@ -10417,10 +10105,10 @@ void Script::InitLineEnds(Handle<Script> script) { Isolate* isolate = script->GetIsolate(); if (!script->source()->IsString()) { - ASSERT(script->source()->IsUndefined()); + DCHECK(script->source()->IsUndefined()); Handle<FixedArray> empty = isolate->factory()->NewFixedArray(0); script->set_line_ends(*empty); - ASSERT(script->line_ends()->IsFixedArray()); + DCHECK(script->line_ends()->IsFixedArray()); return; } @@ -10433,7 +10121,7 @@ void Script::InitLineEnds(Handle<Script> script) { } script->set_line_ends(*array); - ASSERT(script->line_ends()->IsFixedArray()); + DCHECK(script->line_ends()->IsFixedArray()); } @@ -10453,7 +10141,7 @@ int Script::GetColumnNumber(Handle<Script> script, int code_pos) { int Script::GetLineNumberWithArray(int code_pos) { DisallowHeapAllocation no_allocation; - ASSERT(line_ends()->IsFixedArray()); + DCHECK(line_ends()->IsFixedArray()); FixedArray* line_ends_array = FixedArray::cast(line_ends()); int line_ends_len = line_ends_array->length(); if (line_ends_len == 0) return -1; @@ -10507,7 +10195,7 @@ Handle<Object> Script::GetNameOrSourceURL(Handle<Script> script) { Handle<JSObject> script_wrapper = Script::GetWrapper(script); Handle<Object> property = Object::GetProperty( script_wrapper, name_or_source_url_key).ToHandleChecked(); - ASSERT(property->IsJSFunction()); + DCHECK(property->IsJSFunction()); Handle<JSFunction> 
method = Handle<JSFunction>::cast(property); Handle<Object> result; // Do not check against pending exception, since this function may be called @@ -10525,16 +10213,21 @@ Handle<Object> Script::GetNameOrSourceURL(Handle<Script> script) { // collector will call the weak callback on the global handle // associated with the wrapper and get rid of both the wrapper and the // handle. -static void ClearWrapperCache( +static void ClearWrapperCacheWeakCallback( const v8::WeakCallbackData<v8::Value, void>& data) { Object** location = reinterpret_cast<Object**>(data.GetParameter()); JSValue* wrapper = JSValue::cast(*location); - Foreign* foreign = Script::cast(wrapper->value())->wrapper(); - ASSERT_EQ(foreign->foreign_address(), reinterpret_cast<Address>(location)); + Script::cast(wrapper->value())->ClearWrapperCache(); +} + + +void Script::ClearWrapperCache() { + Foreign* foreign = wrapper(); + Object** location = reinterpret_cast<Object**>(foreign->foreign_address()); + DCHECK_EQ(foreign->foreign_address(), reinterpret_cast<Address>(location)); foreign->set_foreign_address(0); GlobalHandles::Destroy(location); - Isolate* isolate = reinterpret_cast<Isolate*>(data.GetIsolate()); - isolate->counters()->script_wrappers()->Decrement(); + GetIsolate()->counters()->script_wrappers()->Decrement(); } @@ -10559,7 +10252,7 @@ Handle<JSObject> Script::GetWrapper(Handle<Script> script) { Handle<Object> handle = isolate->global_handles()->Create(*result); GlobalHandles::MakeWeak(handle.location(), reinterpret_cast<void*>(handle.location()), - &ClearWrapperCache); + &ClearWrapperCacheWeakCallback); script->wrapper()->set_foreign_address( reinterpret_cast<Address>(handle.location())); return result; @@ -10573,7 +10266,7 @@ String* SharedFunctionInfo::DebugName() { } -bool SharedFunctionInfo::HasSourceCode() { +bool SharedFunctionInfo::HasSourceCode() const { return !script()->IsUndefined() && !reinterpret_cast<Script*>(script())->source()->IsUndefined(); } @@ -10618,43 +10311,36 @@ int SharedFunctionInfo::CalculateInObjectProperties() { } -// Support function for printing the source code to a StringStream -// without any allocation in the heap. -void SharedFunctionInfo::SourceCodePrint(StringStream* accumulator, - int max_length) { +// Output the source code without any allocation in the heap. +OStream& operator<<(OStream& os, const SourceCodeOf& v) { + const SharedFunctionInfo* s = v.value; // For some native functions there is no source. - if (!HasSourceCode()) { - accumulator->Add("<No Source>"); - return; - } + if (!s->HasSourceCode()) return os << "<No Source>"; // Get the source for the script which this function came from. // Don't use String::cast because we don't want more assertion errors while // we are already creating a stack dump. 
String* script_source = - reinterpret_cast<String*>(Script::cast(script())->source()); + reinterpret_cast<String*>(Script::cast(s->script())->source()); - if (!script_source->LooksValid()) { - accumulator->Add("<Invalid Source>"); - return; - } + if (!script_source->LooksValid()) return os << "<Invalid Source>"; - if (!is_toplevel()) { - accumulator->Add("function "); - Object* name = this->name(); + if (!s->is_toplevel()) { + os << "function "; + Object* name = s->name(); if (name->IsString() && String::cast(name)->length() > 0) { - accumulator->PrintName(name); + String::cast(name)->PrintUC16(os); } } - int len = end_position() - start_position(); - if (len <= max_length || max_length < 0) { - accumulator->Put(script_source, start_position(), end_position()); + int len = s->end_position() - s->start_position(); + if (len <= v.max_length || v.max_length < 0) { + script_source->PrintUC16(os, s->start_position(), s->end_position()); + return os; } else { - accumulator->Put(script_source, - start_position(), - start_position() + max_length); - accumulator->Add("...\n"); + script_source->PrintUC16(os, s->start_position(), + s->start_position() + v.max_length); + return os << "...\n"; } } @@ -10673,7 +10359,7 @@ static bool IsCodeEquivalent(Code* code, Code* recompiled) { void SharedFunctionInfo::EnableDeoptimizationSupport(Code* recompiled) { - ASSERT(!has_deoptimization_support()); + DCHECK(!has_deoptimization_support()); DisallowHeapAllocation no_allocation; Code* code = this->code(); if (IsCodeEquivalent(code, recompiled)) { @@ -10687,7 +10373,7 @@ void SharedFunctionInfo::EnableDeoptimizationSupport(Code* recompiled) { // effectively resetting all IC state. ReplaceCode(recompiled); } - ASSERT(has_deoptimization_support()); + DCHECK(has_deoptimization_support()); } @@ -10703,13 +10389,11 @@ void SharedFunctionInfo::DisableOptimization(BailoutReason reason) { set_bailout_reason(reason); // Code should be the lazy compilation stub or else unoptimized. If the // latter, disable optimization for the code too. - ASSERT(code()->kind() == Code::FUNCTION || code()->kind() == Code::BUILTIN); + DCHECK(code()->kind() == Code::FUNCTION || code()->kind() == Code::BUILTIN); if (code()->kind() == Code::FUNCTION) { code()->set_optimizable(false); } - PROFILE(GetIsolate(), - LogExistingFunction(Handle<SharedFunctionInfo>(this), - Handle<Code>(code()))); + PROFILE(GetIsolate(), CodeDisableOptEvent(code(), this)); if (FLAG_trace_opt) { PrintF("[disabled optimization for "); ShortPrint(); @@ -10719,81 +10403,33 @@ void SharedFunctionInfo::DisableOptimization(BailoutReason reason) { bool SharedFunctionInfo::VerifyBailoutId(BailoutId id) { - ASSERT(!id.IsNone()); + DCHECK(!id.IsNone()); Code* unoptimized = code(); DeoptimizationOutputData* data = DeoptimizationOutputData::cast(unoptimized->deoptimization_data()); unsigned ignore = Deoptimizer::GetOutputInfo(data, id, this); USE(ignore); - return true; // Return true if there was no ASSERT. + return true; // Return true if there was no DCHECK. } -void SharedFunctionInfo::StartInobjectSlackTracking(Map* map) { - ASSERT(!IsInobjectSlackTrackingInProgress()); +void JSFunction::StartInobjectSlackTracking() { + DCHECK(has_initial_map() && !IsInobjectSlackTrackingInProgress()); if (!FLAG_clever_optimizations) return; + Map* map = initial_map(); // Only initiate the tracking the first time. 
- if (live_objects_may_exist()) return; - set_live_objects_may_exist(true); + if (map->done_inobject_slack_tracking()) return; + map->set_done_inobject_slack_tracking(true); // No tracking during the snapshot construction phase. Isolate* isolate = GetIsolate(); - if (Serializer::enabled(isolate)) return; + if (isolate->serializer_enabled()) return; if (map->unused_property_fields() == 0) return; - // Nonzero counter is a leftover from the previous attempt interrupted - // by GC, keep it. - if (construction_count() == 0) { - set_construction_count(kGenerousAllocationCount); - } - set_initial_map(map); - Builtins* builtins = isolate->builtins(); - ASSERT_EQ(builtins->builtin(Builtins::kJSConstructStubGeneric), - construct_stub()); - set_construct_stub(builtins->builtin(Builtins::kJSConstructStubCountdown)); -} - - -// Called from GC, hence reinterpret_cast and unchecked accessors. -void SharedFunctionInfo::DetachInitialMap() { - Map* map = reinterpret_cast<Map*>(initial_map()); - - // Make the map remember to restore the link if it survives the GC. - map->set_bit_field2( - map->bit_field2() | (1 << Map::kAttachedToSharedFunctionInfo)); - - // Undo state changes made by StartInobjectTracking (except the - // construction_count). This way if the initial map does not survive the GC - // then StartInobjectTracking will be called again the next time the - // constructor is called. The countdown will continue and (possibly after - // several more GCs) CompleteInobjectSlackTracking will eventually be called. - Heap* heap = map->GetHeap(); - set_initial_map(heap->undefined_value()); - Builtins* builtins = heap->isolate()->builtins(); - ASSERT_EQ(builtins->builtin(Builtins::kJSConstructStubCountdown), - *RawField(this, kConstructStubOffset)); - set_construct_stub(builtins->builtin(Builtins::kJSConstructStubGeneric)); - // It is safe to clear the flag: it will be set again if the map is live. - set_live_objects_may_exist(false); -} - - -// Called from GC, hence reinterpret_cast and unchecked accessors. -void SharedFunctionInfo::AttachInitialMap(Map* map) { - map->set_bit_field2( - map->bit_field2() & ~(1 << Map::kAttachedToSharedFunctionInfo)); - - // Resume inobject slack tracking. - set_initial_map(map); - Builtins* builtins = map->GetHeap()->isolate()->builtins(); - ASSERT_EQ(builtins->builtin(Builtins::kJSConstructStubGeneric), - *RawField(this, kConstructStubOffset)); - set_construct_stub(builtins->builtin(Builtins::kJSConstructStubCountdown)); - // The map survived the gc, so there may be objects referencing it. 
- set_live_objects_may_exist(true); + map->set_construction_count(kGenerousAllocationCount); } @@ -10836,26 +10472,18 @@ static void ShrinkInstanceSize(Map* map, void* data) { } -void SharedFunctionInfo::CompleteInobjectSlackTracking() { - ASSERT(live_objects_may_exist() && IsInobjectSlackTrackingInProgress()); - Map* map = Map::cast(initial_map()); +void JSFunction::CompleteInobjectSlackTracking() { + DCHECK(has_initial_map()); + Map* map = initial_map(); - Heap* heap = map->GetHeap(); - set_initial_map(heap->undefined_value()); - Builtins* builtins = heap->isolate()->builtins(); - ASSERT_EQ(builtins->builtin(Builtins::kJSConstructStubCountdown), - construct_stub()); - set_construct_stub(builtins->builtin(Builtins::kJSConstructStubGeneric)); + DCHECK(map->done_inobject_slack_tracking()); + map->set_construction_count(kNoSlackTracking); int slack = map->unused_property_fields(); map->TraverseTransitionTree(&GetMinInobjectSlack, &slack); if (slack != 0) { // Resize the initial map and all maps in its transition tree. map->TraverseTransitionTree(&ShrinkInstanceSize, &slack); - - // Give the correct expected_nof_properties to initial maps created later. - ASSERT(expected_nof_properties() >= slack); - set_expected_nof_properties(expected_nof_properties() - slack); } } @@ -10863,7 +10491,7 @@ void SharedFunctionInfo::CompleteInobjectSlackTracking() { int SharedFunctionInfo::SearchOptimizedCodeMap(Context* native_context, BailoutId osr_ast_id) { DisallowHeapAllocation no_gc; - ASSERT(native_context->IsNativeContext()); + DCHECK(native_context->IsNativeContext()); if (!FLAG_cache_optimized_code) return -1; Object* value = optimized_code_map(); if (!value->IsSmi()) { @@ -10903,7 +10531,7 @@ const char* const VisitorSynchronization::kTagNames[ void ObjectVisitor::VisitCodeTarget(RelocInfo* rinfo) { - ASSERT(RelocInfo::IsCodeTarget(rinfo->rmode())); + DCHECK(RelocInfo::IsCodeTarget(rinfo->rmode())); Object* target = Code::GetCodeFromTargetAddress(rinfo->target_address()); Object* old_target = target; VisitPointer(&target); @@ -10912,7 +10540,7 @@ void ObjectVisitor::VisitCodeTarget(RelocInfo* rinfo) { void ObjectVisitor::VisitCodeAgeSequence(RelocInfo* rinfo) { - ASSERT(RelocInfo::IsCodeAgeSequence(rinfo->rmode())); + DCHECK(RelocInfo::IsCodeAgeSequence(rinfo->rmode())); Object* stub = rinfo->code_age_stub(); if (stub) { VisitPointer(&stub); @@ -10931,7 +10559,7 @@ void ObjectVisitor::VisitCodeEntry(Address entry_address) { void ObjectVisitor::VisitCell(RelocInfo* rinfo) { - ASSERT(rinfo->rmode() == RelocInfo::CELL); + DCHECK(rinfo->rmode() == RelocInfo::CELL); Object* cell = rinfo->target_cell(); Object* old_cell = cell; VisitPointer(&cell); @@ -10942,7 +10570,7 @@ void ObjectVisitor::VisitCell(RelocInfo* rinfo) { void ObjectVisitor::VisitDebugTarget(RelocInfo* rinfo) { - ASSERT((RelocInfo::IsJSReturn(rinfo->rmode()) && + DCHECK((RelocInfo::IsJSReturn(rinfo->rmode()) && rinfo->IsPatchedReturnSequence()) || (RelocInfo::IsDebugBreakSlot(rinfo->rmode()) && rinfo->IsPatchedDebugBreakSlotSequence())); @@ -10954,7 +10582,7 @@ void ObjectVisitor::VisitDebugTarget(RelocInfo* rinfo) { void ObjectVisitor::VisitEmbeddedPointer(RelocInfo* rinfo) { - ASSERT(rinfo->rmode() == RelocInfo::EMBEDDED_OBJECT); + DCHECK(rinfo->rmode() == RelocInfo::EMBEDDED_OBJECT); Object* p = rinfo->target_object(); VisitPointer(&p); } @@ -10967,6 +10595,7 @@ void ObjectVisitor::VisitExternalReference(RelocInfo* rinfo) { void Code::InvalidateRelocation() { + InvalidateEmbeddedObjects(); 
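Several hunks in this range move in-object slack tracking from SharedFunctionInfo onto JSFunction and its initial Map: the construction countdown now lives in the map itself (set_construction_count), which retires the DetachInitialMap/AttachInitialMap dance with the GC deleted above. The underlying idea, as a standalone sketch with field names of my own (semantics inferred from GetMinInobjectSlack and ShrinkInstanceSize):

    #include <algorithm>
    #include <vector>

    // New instances start with generously many in-object property slots;
    // once the construction countdown expires, every map in the initial
    // map's transition tree shrinks by the slack no instance ever used.
    struct MapInfo {
      int instance_size;           // in words
      int unused_property_fields;  // never-used in-object slots
    };

    void CompleteSlackTracking(std::vector<MapInfo>& transition_tree) {
      if (transition_tree.empty()) return;
      int slack = transition_tree.front().unused_property_fields;
      for (const MapInfo& m : transition_tree)   // GetMinInobjectSlack
        slack = std::min(slack, m.unused_property_fields);
      if (slack == 0) return;
      for (MapInfo& m : transition_tree) {       // ShrinkInstanceSize
        m.instance_size -= slack;
        m.unused_property_fields -= slack;
      }
    }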
set_relocation_info(GetHeap()->empty_byte_array()); } @@ -10989,14 +10618,14 @@ void Code::InvalidateEmbeddedObjects() { void Code::Relocate(intptr_t delta) { for (RelocIterator it(this, RelocInfo::kApplyMask); !it.done(); it.next()) { - it.rinfo()->apply(delta); + it.rinfo()->apply(delta, SKIP_ICACHE_FLUSH); } - CPU::FlushICache(instruction_start(), instruction_size()); + CpuFeatures::FlushICache(instruction_start(), instruction_size()); } void Code::CopyFrom(const CodeDesc& desc) { - ASSERT(Marking::Color(this) == Marking::WHITE_OBJECT); + DCHECK(Marking::Color(this) == Marking::WHITE_OBJECT); // copy code CopyBytes(instruction_start(), desc.buffer, @@ -11021,29 +10650,31 @@ void Code::CopyFrom(const CodeDesc& desc) { RelocInfo::Mode mode = it.rinfo()->rmode(); if (mode == RelocInfo::EMBEDDED_OBJECT) { Handle<Object> p = it.rinfo()->target_object_handle(origin); - it.rinfo()->set_target_object(*p, SKIP_WRITE_BARRIER); + it.rinfo()->set_target_object(*p, SKIP_WRITE_BARRIER, SKIP_ICACHE_FLUSH); } else if (mode == RelocInfo::CELL) { Handle<Cell> cell = it.rinfo()->target_cell_handle(); - it.rinfo()->set_target_cell(*cell, SKIP_WRITE_BARRIER); + it.rinfo()->set_target_cell(*cell, SKIP_WRITE_BARRIER, SKIP_ICACHE_FLUSH); } else if (RelocInfo::IsCodeTarget(mode)) { // rewrite code handles in inline cache targets to direct // pointers to the first instruction in the code object Handle<Object> p = it.rinfo()->target_object_handle(origin); Code* code = Code::cast(*p); it.rinfo()->set_target_address(code->instruction_start(), - SKIP_WRITE_BARRIER); + SKIP_WRITE_BARRIER, + SKIP_ICACHE_FLUSH); } else if (RelocInfo::IsRuntimeEntry(mode)) { Address p = it.rinfo()->target_runtime_entry(origin); - it.rinfo()->set_target_runtime_entry(p, SKIP_WRITE_BARRIER); + it.rinfo()->set_target_runtime_entry(p, SKIP_WRITE_BARRIER, + SKIP_ICACHE_FLUSH); } else if (mode == RelocInfo::CODE_AGE_SEQUENCE) { Handle<Object> p = it.rinfo()->code_age_stub_handle(origin); Code* code = Code::cast(*p); - it.rinfo()->set_code_age_stub(code); + it.rinfo()->set_code_age_stub(code, SKIP_ICACHE_FLUSH); } else { - it.rinfo()->apply(delta); + it.rinfo()->apply(delta, SKIP_ICACHE_FLUSH); } } - CPU::FlushICache(instruction_start(), instruction_size()); + CpuFeatures::FlushICache(instruction_start(), instruction_size()); } @@ -11105,12 +10736,31 @@ int Code::SourceStatementPosition(Address pc) { SafepointEntry Code::GetSafepointEntry(Address pc) { SafepointTable table(this); - return table.FindEntry(pc); + SafepointEntry entry = table.FindEntry(pc); + if (entry.is_valid() || !is_turbofanned()) { + return entry; + } + + // If the code is turbofanned, we might be looking for + // an address that was patched by lazy deoptimization. + // In that case look through the patch table, try to + // lookup the original address there, and then use this + // to find the safepoint entry. 
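The tail that Code::GetSafepointEntry gains here (the loop follows just below) handles turbofanned code whose return addresses were patched in place by lazy deoptimization: a pc that misses the safepoint table is first mapped back through the deopt data's return-address patch table to its original offset, and the lookup is retried there. The mapping step, restated over a standalone table whose layout is assumed from the ReturnAddressPc/PatchedAddressPc accessors:

    #include <cstdint>
    #include <vector>

    struct ReturnAddressPatch {
      intptr_t original_pc_offset;  // ReturnAddressPc(i) in the hunk
      intptr_t patched_pc_offset;   // PatchedAddressPc(i) in the hunk
    };

    // Map a lazily patched pc offset back to the offset the safepoint
    // table was built against; -1 means pc was never patched.
    intptr_t UnpatchPcOffset(const std::vector<ReturnAddressPatch>& patches,
                             intptr_t pc_offset) {
      for (const ReturnAddressPatch& p : patches) {
        if (p.patched_pc_offset == pc_offset) return p.original_pc_offset;
      }
      return -1;
    }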
+ DeoptimizationInputData* deopt_data = + DeoptimizationInputData::cast(deoptimization_data()); + intptr_t offset = pc - instruction_start(); + for (int i = 0; i < deopt_data->ReturnAddressPatchCount(); i++) { + if (deopt_data->PatchedAddressPc(i)->value() == offset) { + int original_offset = deopt_data->ReturnAddressPc(i)->value(); + return table.FindEntry(instruction_start() + original_offset); + } + } + return SafepointEntry(); } Object* Code::FindNthObject(int n, Map* match_map) { - ASSERT(is_inline_cache_stub()); + DCHECK(is_inline_cache_stub()); DisallowHeapAllocation no_allocation; int mask = RelocInfo::ModeMask(RelocInfo::EMBEDDED_OBJECT); for (RelocIterator it(this, mask); !it.done(); it.next()) { @@ -11139,7 +10789,7 @@ Map* Code::FindFirstMap() { void Code::FindAndReplace(const FindAndReplacePattern& pattern) { - ASSERT(is_inline_cache_stub() || is_handler()); + DCHECK(is_inline_cache_stub() || is_handler()); DisallowHeapAllocation no_allocation; int mask = RelocInfo::ModeMask(RelocInfo::EMBEDDED_OBJECT); STATIC_ASSERT(FindAndReplacePattern::kMaxCount < 32); @@ -11160,7 +10810,7 @@ void Code::FindAndReplace(const FindAndReplacePattern& pattern) { void Code::FindAllMaps(MapHandleList* maps) { - ASSERT(is_inline_cache_stub()); + DCHECK(is_inline_cache_stub()); DisallowHeapAllocation no_allocation; int mask = RelocInfo::ModeMask(RelocInfo::EMBEDDED_OBJECT); for (RelocIterator it(this, mask); !it.done(); it.next()) { @@ -11172,7 +10822,7 @@ void Code::FindAllMaps(MapHandleList* maps) { Code* Code::FindFirstHandler() { - ASSERT(is_inline_cache_stub()); + DCHECK(is_inline_cache_stub()); DisallowHeapAllocation no_allocation; int mask = RelocInfo::ModeMask(RelocInfo::CODE_TARGET); for (RelocIterator it(this, mask); !it.done(); it.next()) { @@ -11185,7 +10835,7 @@ Code* Code::FindFirstHandler() { bool Code::FindHandlers(CodeHandleList* code_list, int length) { - ASSERT(is_inline_cache_stub()); + DCHECK(is_inline_cache_stub()); DisallowHeapAllocation no_allocation; int mask = RelocInfo::ModeMask(RelocInfo::CODE_TARGET); int i = 0; @@ -11203,8 +10853,28 @@ bool Code::FindHandlers(CodeHandleList* code_list, int length) { } +MaybeHandle<Code> Code::FindHandlerForMap(Map* map) { + DCHECK(is_inline_cache_stub()); + int mask = RelocInfo::ModeMask(RelocInfo::CODE_TARGET) | + RelocInfo::ModeMask(RelocInfo::EMBEDDED_OBJECT); + bool return_next = false; + for (RelocIterator it(this, mask); !it.done(); it.next()) { + RelocInfo* info = it.rinfo(); + if (info->rmode() == RelocInfo::EMBEDDED_OBJECT) { + Object* object = info->target_object(); + if (object == map) return_next = true; + } else if (return_next) { + Code* code = Code::GetCodeFromTargetAddress(info->target_address()); + DCHECK(code->kind() == Code::HANDLER); + return handle(code); + } + } + return MaybeHandle<Code>(); +} + + Name* Code::FindFirstName() { - ASSERT(is_inline_cache_stub()); + DCHECK(is_inline_cache_stub()); DisallowHeapAllocation no_allocation; int mask = RelocInfo::ModeMask(RelocInfo::EMBEDDED_OBJECT); for (RelocIterator it(this, mask); !it.done(); it.next()) { @@ -11246,13 +10916,23 @@ void Code::ClearInlineCaches(Code::Kind* kind) { void SharedFunctionInfo::ClearTypeFeedbackInfo() { FixedArray* vector = feedback_vector(); Heap* heap = GetHeap(); - for (int i = 0; i < vector->length(); i++) { + int length = vector->length(); + + for (int i = 0; i < length; i++) { Object* obj = vector->get(i); - if (!obj->IsAllocationSite()) { - vector->set( - i, - TypeFeedbackInfo::RawUninitializedSentinel(heap), - SKIP_WRITE_BARRIER); + if 
(obj->IsHeapObject()) { + InstanceType instance_type = + HeapObject::cast(obj)->map()->instance_type(); + switch (instance_type) { + case ALLOCATION_SITE_TYPE: + // AllocationSites are not cleared because they do not store + // information that leaks. + break; + // Fall through... + default: + vector->set(i, TypeFeedbackInfo::RawUninitializedSentinel(heap), + SKIP_WRITE_BARRIER); + } } } } @@ -11260,7 +10940,7 @@ void SharedFunctionInfo::ClearTypeFeedbackInfo() { BailoutId Code::TranslatePcOffsetToAstId(uint32_t pc_offset) { DisallowHeapAllocation no_gc; - ASSERT(kind() == FUNCTION); + DCHECK(kind() == FUNCTION); BackEdgeTable back_edges(this, &no_gc); for (uint32_t i = 0; i < back_edges.length(); i++) { if (back_edges.pc_offset(i) == pc_offset) return back_edges.ast_id(i); @@ -11271,7 +10951,7 @@ BailoutId Code::TranslatePcOffsetToAstId(uint32_t pc_offset) { uint32_t Code::TranslateAstIdToPcOffset(BailoutId ast_id) { DisallowHeapAllocation no_gc; - ASSERT(kind() == FUNCTION); + DCHECK(kind() == FUNCTION); BackEdgeTable back_edges(this, &no_gc); for (uint32_t i = 0; i < back_edges.length(); i++) { if (back_edges.ast_id(i) == ast_id) return back_edges.pc_offset(i); @@ -11403,11 +11083,11 @@ Code* Code::GetCodeAgeStub(Isolate* isolate, Age age, MarkingParity parity) { CODE_AGE_LIST(HANDLE_CODE_AGE) #undef HANDLE_CODE_AGE case kNotExecutedCodeAge: { - ASSERT(parity == NO_MARKING_PARITY); + DCHECK(parity == NO_MARKING_PARITY); return *builtins->MarkCodeAsExecutedOnce(); } case kExecutedOnceCodeAge: { - ASSERT(parity == NO_MARKING_PARITY); + DCHECK(parity == NO_MARKING_PARITY); return *builtins->MarkCodeAsExecutedTwice(); } default: @@ -11430,7 +11110,9 @@ void Code::PrintDeoptLocation(FILE* out, int bailout_id) { if ((bailout_id == Deoptimizer::GetDeoptimizationId( GetIsolate(), info->target_address(), Deoptimizer::EAGER)) || (bailout_id == Deoptimizer::GetDeoptimizationId( - GetIsolate(), info->target_address(), Deoptimizer::SOFT))) { + GetIsolate(), info->target_address(), Deoptimizer::SOFT)) || + (bailout_id == Deoptimizer::GetDeoptimizationId( + GetIsolate(), info->target_address(), Deoptimizer::LAZY))) { CHECK(RelocInfo::IsRuntimeEntry(info->rmode())); PrintF(out, " %s\n", last_comment); return; @@ -11468,23 +11150,25 @@ const char* Code::Kind2String(Kind kind) { #ifdef ENABLE_DISASSEMBLER -void DeoptimizationInputData::DeoptimizationInputDataPrint(FILE* out) { +void DeoptimizationInputData::DeoptimizationInputDataPrint( + OStream& os) { // NOLINT disasm::NameConverter converter; int deopt_count = DeoptCount(); - PrintF(out, "Deoptimization Input Data (deopt points = %d)\n", deopt_count); - if (0 == deopt_count) return; - - PrintF(out, "%6s %6s %6s %6s %12s\n", "index", "ast id", "argc", "pc", - FLAG_print_code_verbose ? "commands" : ""); + os << "Deoptimization Input Data (deopt points = " << deopt_count << ")\n"; + if (0 != deopt_count) { + os << " index ast id argc pc"; + if (FLAG_print_code_verbose) os << " commands"; + os << "\n"; + } for (int i = 0; i < deopt_count; i++) { - PrintF(out, "%6d %6d %6d %6d", - i, - AstId(i).ToInt(), - ArgumentsStackHeight(i)->value(), - Pc(i)->value()); + // TODO(svenpanne) Add some basic formatting to our streams. + Vector<char> buf1 = Vector<char>::New(128); + SNPrintF(buf1, "%6d %6d %6d %6d", i, AstId(i).ToInt(), + ArgumentsStackHeight(i)->value(), Pc(i)->value()); + os << buf1.start(); if (!FLAG_print_code_verbose) { - PrintF(out, "\n"); + os << "\n"; continue; } // Print details of the frame translation. 
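From DeoptimizationInputDataPrint onward, the printing hunks migrate Code's debug dumping from FILE*-plus-PrintF to V8's OStream. Where fixed-width columns are still needed, the new code formats into a scratch Vector<char> with SNPrintF and streams the buffer, as the recurring TODO(svenpanne) notes. The same workaround expressed with standard types (V8's OStream, Vector<char>, and SNPrintF are in-tree equivalents of the facilities used here):

    #include <cstdio>
    #include <ostream>

    // Column-formatted output through a stream, in the style of the
    // deopt-data printing hunks in this range.
    void PrintDeoptRow(std::ostream& os, int index, int ast_id, int argc,
                       int pc) {
      char buf[128];
      std::snprintf(buf, sizeof(buf), "%6d %6d %6d %6d", index, ast_id,
                    argc, pc);
      os << buf << "\n";
    }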
@@ -11492,18 +11176,19 @@ void DeoptimizationInputData::DeoptimizationInputDataPrint(FILE* out) { TranslationIterator iterator(TranslationByteArray(), translation_index); Translation::Opcode opcode = static_cast<Translation::Opcode>(iterator.Next()); - ASSERT(Translation::BEGIN == opcode); + DCHECK(Translation::BEGIN == opcode); int frame_count = iterator.Next(); int jsframe_count = iterator.Next(); - PrintF(out, " %s {frame count=%d, js frame count=%d}\n", - Translation::StringFor(opcode), - frame_count, - jsframe_count); + os << " " << Translation::StringFor(opcode) + << " {frame count=" << frame_count + << ", js frame count=" << jsframe_count << "}\n"; while (iterator.HasNext() && Translation::BEGIN != (opcode = static_cast<Translation::Opcode>(iterator.Next()))) { - PrintF(out, "%24s %s ", "", Translation::StringFor(opcode)); + Vector<char> buf2 = Vector<char>::New(128); + SNPrintF(buf2, "%27s %s ", "", Translation::StringFor(opcode)); + os << buf2.start(); switch (opcode) { case Translation::BEGIN: @@ -11514,20 +11199,20 @@ void DeoptimizationInputData::DeoptimizationInputDataPrint(FILE* out) { int ast_id = iterator.Next(); int function_id = iterator.Next(); unsigned height = iterator.Next(); - PrintF(out, "{ast_id=%d, function=", ast_id); + os << "{ast_id=" << ast_id << ", function="; if (function_id != Translation::kSelfLiteralId) { Object* function = LiteralArray()->get(function_id); - JSFunction::cast(function)->PrintName(out); + os << Brief(JSFunction::cast(function)->shared()->DebugName()); } else { - PrintF(out, "<self>"); + os << "<self>"; } - PrintF(out, ", height=%u}", height); + os << ", height=" << height << "}"; break; } case Translation::COMPILED_STUB_FRAME: { Code::Kind stub_kind = static_cast<Code::Kind>(iterator.Next()); - PrintF(out, "{kind=%d}", stub_kind); + os << "{kind=" << stub_kind << "}"; break; } @@ -11537,9 +11222,8 @@ void DeoptimizationInputData::DeoptimizationInputDataPrint(FILE* out) { JSFunction* function = JSFunction::cast(LiteralArray()->get(function_id)); unsigned height = iterator.Next(); - PrintF(out, "{function="); - function->PrintName(out); - PrintF(out, ", height=%u}", height); + os << "{function=" << Brief(function->shared()->DebugName()) + << ", height=" << height << "}"; break; } @@ -11548,100 +11232,114 @@ void DeoptimizationInputData::DeoptimizationInputDataPrint(FILE* out) { int function_id = iterator.Next(); JSFunction* function = JSFunction::cast(LiteralArray()->get(function_id)); - PrintF(out, "{function="); - function->PrintName(out); - PrintF(out, "}"); + os << "{function=" << Brief(function->shared()->DebugName()) << "}"; break; } case Translation::REGISTER: { int reg_code = iterator.Next(); - PrintF(out, "{input=%s}", converter.NameOfCPURegister(reg_code)); + os << "{input=" << converter.NameOfCPURegister(reg_code) << "}"; break; } case Translation::INT32_REGISTER: { int reg_code = iterator.Next(); - PrintF(out, "{input=%s}", converter.NameOfCPURegister(reg_code)); + os << "{input=" << converter.NameOfCPURegister(reg_code) << "}"; break; } case Translation::UINT32_REGISTER: { int reg_code = iterator.Next(); - PrintF(out, "{input=%s (unsigned)}", - converter.NameOfCPURegister(reg_code)); + os << "{input=" << converter.NameOfCPURegister(reg_code) + << " (unsigned)}"; break; } case Translation::DOUBLE_REGISTER: { int reg_code = iterator.Next(); - PrintF(out, "{input=%s}", - DoubleRegister::AllocationIndexToString(reg_code)); + os << "{input=" << DoubleRegister::AllocationIndexToString(reg_code) + << "}"; break; } case 
Translation::STACK_SLOT: { int input_slot_index = iterator.Next(); - PrintF(out, "{input=%d}", input_slot_index); + os << "{input=" << input_slot_index << "}"; break; } case Translation::INT32_STACK_SLOT: { int input_slot_index = iterator.Next(); - PrintF(out, "{input=%d}", input_slot_index); + os << "{input=" << input_slot_index << "}"; break; } case Translation::UINT32_STACK_SLOT: { int input_slot_index = iterator.Next(); - PrintF(out, "{input=%d (unsigned)}", input_slot_index); + os << "{input=" << input_slot_index << " (unsigned)}"; break; } case Translation::DOUBLE_STACK_SLOT: { int input_slot_index = iterator.Next(); - PrintF(out, "{input=%d}", input_slot_index); + os << "{input=" << input_slot_index << "}"; break; } case Translation::LITERAL: { unsigned literal_index = iterator.Next(); - PrintF(out, "{literal_id=%u}", literal_index); + os << "{literal_id=" << literal_index << "}"; break; } case Translation::DUPLICATED_OBJECT: { int object_index = iterator.Next(); - PrintF(out, "{object_index=%d}", object_index); + os << "{object_index=" << object_index << "}"; break; } case Translation::ARGUMENTS_OBJECT: case Translation::CAPTURED_OBJECT: { int args_length = iterator.Next(); - PrintF(out, "{length=%d}", args_length); + os << "{length=" << args_length << "}"; break; } } - PrintF(out, "\n"); + os << "\n"; } } + + int return_address_patch_count = ReturnAddressPatchCount(); + if (return_address_patch_count != 0) { + os << "Return address patch data (count = " << return_address_patch_count + << ")\n"; + os << " index pc patched_pc\n"; + } + for (int i = 0; i < return_address_patch_count; i++) { + Vector<char> buf = Vector<char>::New(128); + SNPrintF(buf, "%6d %6d %12d\n", i, ReturnAddressPc(i)->value(), + PatchedAddressPc(i)->value()); + os << buf.start(); + } } -void DeoptimizationOutputData::DeoptimizationOutputDataPrint(FILE* out) { - PrintF(out, "Deoptimization Output Data (deopt points = %d)\n", - this->DeoptPoints()); +void DeoptimizationOutputData::DeoptimizationOutputDataPrint( + OStream& os) { // NOLINT + os << "Deoptimization Output Data (deopt points = " << this->DeoptPoints() + << ")\n"; if (this->DeoptPoints() == 0) return; - PrintF(out, "%6s %8s %s\n", "ast id", "pc", "state"); + os << "ast id pc state\n"; for (int i = 0; i < this->DeoptPoints(); i++) { int pc_and_state = this->PcAndState(i)->value(); - PrintF(out, "%6d %8d %s\n", - this->AstId(i).ToInt(), - FullCodeGenerator::PcField::decode(pc_and_state), - FullCodeGenerator::State2String( - FullCodeGenerator::StateField::decode(pc_and_state))); + // TODO(svenpanne) Add some basic formatting to our streams. 
+ Vector<char> buf = Vector<char>::New(100); + SNPrintF(buf, "%6d %8d %s\n", this->AstId(i).ToInt(), + FullCodeGenerator::PcField::decode(pc_and_state), + FullCodeGenerator::State2String( + FullCodeGenerator::StateField::decode(pc_and_state))); + os << buf.start(); } } @@ -11651,11 +11349,14 @@ const char* Code::ICState2String(InlineCacheState state) { case UNINITIALIZED: return "UNINITIALIZED"; case PREMONOMORPHIC: return "PREMONOMORPHIC"; case MONOMORPHIC: return "MONOMORPHIC"; - case MONOMORPHIC_PROTOTYPE_FAILURE: return "MONOMORPHIC_PROTOTYPE_FAILURE"; + case PROTOTYPE_FAILURE: + return "PROTOTYPE_FAILURE"; case POLYMORPHIC: return "POLYMORPHIC"; case MEGAMORPHIC: return "MEGAMORPHIC"; case GENERIC: return "GENERIC"; case DEBUG_STUB: return "DEBUG_STUB"; + case DEFAULT: + return "DEFAULT"; } UNREACHABLE(); return NULL; @@ -11672,92 +11373,93 @@ const char* Code::StubType2String(StubType type) { } -void Code::PrintExtraICState(FILE* out, Kind kind, ExtraICState extra) { - PrintF(out, "extra_ic_state = "); - const char* name = NULL; - switch (kind) { - case STORE_IC: - case KEYED_STORE_IC: - if (extra == STRICT) name = "STRICT"; - break; - default: - break; - } - if (name != NULL) { - PrintF(out, "%s\n", name); +void Code::PrintExtraICState(OStream& os, // NOLINT + Kind kind, ExtraICState extra) { + os << "extra_ic_state = "; + if ((kind == STORE_IC || kind == KEYED_STORE_IC) && (extra == STRICT)) { + os << "STRICT\n"; } else { - PrintF(out, "%d\n", extra); + os << extra << "\n"; } } -void Code::Disassemble(const char* name, FILE* out) { - PrintF(out, "kind = %s\n", Kind2String(kind())); - if (has_major_key()) { - PrintF(out, "major_key = %s\n", - CodeStub::MajorName(CodeStub::GetMajorKey(this), true)); +void Code::Disassemble(const char* name, OStream& os) { // NOLINT + os << "kind = " << Kind2String(kind()) << "\n"; + if (IsCodeStubOrIC()) { + const char* n = CodeStub::MajorName(CodeStub::GetMajorKey(this), true); + os << "major_key = " << (n == NULL ? 
"null" : n) << "\n"; } if (is_inline_cache_stub()) { - PrintF(out, "ic_state = %s\n", ICState2String(ic_state())); - PrintExtraICState(out, kind(), extra_ic_state()); + os << "ic_state = " << ICState2String(ic_state()) << "\n"; + PrintExtraICState(os, kind(), extra_ic_state()); if (ic_state() == MONOMORPHIC) { - PrintF(out, "type = %s\n", StubType2String(type())); + os << "type = " << StubType2String(type()) << "\n"; } if (is_compare_ic_stub()) { - ASSERT(major_key() == CodeStub::CompareIC); + DCHECK(CodeStub::GetMajorKey(this) == CodeStub::CompareIC); CompareIC::State left_state, right_state, handler_state; Token::Value op; - ICCompareStub::DecodeMinorKey(stub_info(), &left_state, &right_state, - &handler_state, &op); - PrintF(out, "compare_state = %s*%s -> %s\n", - CompareIC::GetStateName(left_state), - CompareIC::GetStateName(right_state), - CompareIC::GetStateName(handler_state)); - PrintF(out, "compare_operation = %s\n", Token::Name(op)); + ICCompareStub::DecodeKey(stub_key(), &left_state, &right_state, + &handler_state, &op); + os << "compare_state = " << CompareIC::GetStateName(left_state) << "*" + << CompareIC::GetStateName(right_state) << " -> " + << CompareIC::GetStateName(handler_state) << "\n"; + os << "compare_operation = " << Token::Name(op) << "\n"; } } if ((name != NULL) && (name[0] != '\0')) { - PrintF(out, "name = %s\n", name); + os << "name = " << name << "\n"; } if (kind() == OPTIMIZED_FUNCTION) { - PrintF(out, "stack_slots = %d\n", stack_slots()); + os << "stack_slots = " << stack_slots() << "\n"; } - PrintF(out, "Instructions (size = %d)\n", instruction_size()); - Disassembler::Decode(out, this); - PrintF(out, "\n"); + os << "Instructions (size = " << instruction_size() << ")\n"; + // TODO(svenpanne) The Disassembler should use streams, too! + { + CodeTracer::Scope trace_scope(GetIsolate()->GetCodeTracer()); + Disassembler::Decode(trace_scope.file(), this); + } + os << "\n"; if (kind() == FUNCTION) { DeoptimizationOutputData* data = DeoptimizationOutputData::cast(this->deoptimization_data()); - data->DeoptimizationOutputDataPrint(out); + data->DeoptimizationOutputDataPrint(os); } else if (kind() == OPTIMIZED_FUNCTION) { DeoptimizationInputData* data = DeoptimizationInputData::cast(this->deoptimization_data()); - data->DeoptimizationInputDataPrint(out); + data->DeoptimizationInputDataPrint(os); } - PrintF(out, "\n"); + os << "\n"; if (is_crankshafted()) { SafepointTable table(this); - PrintF(out, "Safepoints (size = %u)\n", table.size()); + os << "Safepoints (size = " << table.size() << ")\n"; for (unsigned i = 0; i < table.length(); i++) { unsigned pc_offset = table.GetPcOffset(i); - PrintF(out, "%p %4d ", (instruction_start() + pc_offset), pc_offset); - table.PrintEntry(i, out); - PrintF(out, " (sp -> fp)"); + os << (instruction_start() + pc_offset) << " "; + // TODO(svenpanne) Add some basic formatting to our streams. 
+ Vector<char> buf1 = Vector<char>::New(30); + SNPrintF(buf1, "%4d", pc_offset); + os << buf1.start() << " "; + table.PrintEntry(i, os); + os << " (sp -> fp) "; SafepointEntry entry = table.GetEntry(i); if (entry.deoptimization_index() != Safepoint::kNoDeoptimizationIndex) { - PrintF(out, " %6d", entry.deoptimization_index()); + Vector<char> buf2 = Vector<char>::New(30); + SNPrintF(buf2, "%6d", entry.deoptimization_index()); + os << buf2.start(); } else { - PrintF(out, " <none>"); + os << "<none>"; } if (entry.argument_count() > 0) { - PrintF(out, " argc: %d", entry.argument_count()); + os << " argc: " << entry.argument_count(); } - PrintF(out, "\n"); + os << "\n"; } - PrintF(out, "\n"); + os << "\n"; } else if (kind() == FUNCTION) { unsigned offset = back_edge_table_offset(); // If there is no back edge table, the "table start" will be at or after @@ -11766,30 +11468,32 @@ void Code::Disassemble(const char* name, FILE* out) { DisallowHeapAllocation no_gc; BackEdgeTable back_edges(this, &no_gc); - PrintF(out, "Back edges (size = %u)\n", back_edges.length()); - PrintF(out, "ast_id pc_offset loop_depth\n"); + os << "Back edges (size = " << back_edges.length() << ")\n"; + os << "ast_id pc_offset loop_depth\n"; for (uint32_t i = 0; i < back_edges.length(); i++) { - PrintF(out, "%6d %9u %10u\n", back_edges.ast_id(i).ToInt(), - back_edges.pc_offset(i), - back_edges.loop_depth(i)); + Vector<char> buf = Vector<char>::New(100); + SNPrintF(buf, "%6d %9u %10u\n", back_edges.ast_id(i).ToInt(), + back_edges.pc_offset(i), back_edges.loop_depth(i)); + os << buf.start(); } - PrintF(out, "\n"); + os << "\n"; } #ifdef OBJECT_PRINT if (!type_feedback_info()->IsUndefined()) { - TypeFeedbackInfo::cast(type_feedback_info())->TypeFeedbackInfoPrint(out); - PrintF(out, "\n"); + OFStream os(stdout); + TypeFeedbackInfo::cast(type_feedback_info())->TypeFeedbackInfoPrint(os); + os << "\n"; } #endif } - PrintF(out, "RelocInfo (size = %d)\n", relocation_size()); + os << "RelocInfo (size = " << relocation_size() << ")\n"; for (RelocIterator it(this); !it.done(); it.next()) { - it.rinfo()->Print(GetIsolate(), out); + it.rinfo()->Print(GetIsolate(), os); } - PrintF(out, "\n"); + os << "\n"; } #endif // ENABLE_DISASSEMBLER @@ -11800,7 +11504,7 @@ Handle<FixedArray> JSObject::SetFastElementsCapacityAndLength( int length, SetFastElementsCapacitySmiMode smi_mode) { // We should never end in here with a pixel or external array. - ASSERT(!object->HasExternalArrayElements()); + DCHECK(!object->HasExternalArrayElements()); // Allocate a new fast elements backing store. Handle<FixedArray> new_elements = @@ -11860,7 +11564,7 @@ void JSObject::SetFastDoubleElementsCapacityAndLength(Handle<JSObject> object, int capacity, int length) { // We should never end in here with a pixel or external array. 
- ASSERT(!object->HasExternalArrayElements()); + DCHECK(!object->HasExternalArrayElements()); Handle<FixedArrayBase> elems = object->GetIsolate()->factory()->NewFixedDoubleArray(capacity); @@ -11896,7 +11600,7 @@ void JSObject::SetFastDoubleElementsCapacityAndLength(Handle<JSObject> object, // static void JSArray::Initialize(Handle<JSArray> array, int capacity, int length) { - ASSERT(capacity >= 0); + DCHECK(capacity >= 0); array->GetIsolate()->factory()->NewJSArrayStorage( array, length, capacity, INITIALIZE_ARRAY_ELEMENTS_WITH_HOLE); } @@ -11916,12 +11620,13 @@ static bool GetOldValue(Isolate* isolate, uint32_t index, List<Handle<Object> >* old_values, List<uint32_t>* indices) { - PropertyAttributes attributes = - JSReceiver::GetLocalElementAttribute(object, index); - ASSERT(attributes != ABSENT); - if (attributes == DONT_DELETE) return false; + Maybe<PropertyAttributes> maybe = + JSReceiver::GetOwnElementAttribute(object, index); + DCHECK(maybe.has_value); + DCHECK(maybe.value != ABSENT); + if (maybe.value == DONT_DELETE) return false; Handle<Object> value; - if (!JSObject::GetLocalElementAccessorPair(object, index).is_null()) { + if (!JSObject::GetOwnElementAccessorPair(object, index).is_null()) { value = Handle<Object>::cast(isolate->factory()->the_hole_value()); } else { value = Object::GetElement(isolate, object, index).ToHandleChecked(); @@ -11981,8 +11686,19 @@ static void EndPerformSplice(Handle<JSArray> object) { MaybeHandle<Object> JSArray::SetElementsLength( Handle<JSArray> array, Handle<Object> new_length_handle) { + if (array->HasFastElements()) { + // If the new array won't fit in a some non-trivial fraction of the max old + // space size, then force it to go dictionary mode. + int max_fast_array_size = static_cast<int>( + (array->GetHeap()->MaxOldGenerationSize() / kDoubleSize) / 4); + if (new_length_handle->IsNumber() && + NumberToInt32(*new_length_handle) >= max_fast_array_size) { + NormalizeElements(array); + } + } + // We should never end in here with a pixel or external array. - ASSERT(array->AllowsSetElementsLength()); + DCHECK(array->AllowsSetElementsLength()); if (!array->map()->is_observed()) { return array->GetElementsAccessor()->SetLength(array, new_length_handle); } @@ -11997,7 +11713,7 @@ MaybeHandle<Object> JSArray::SetElementsLength( CHECK(new_length_handle->ToArrayIndex(&new_length)); static const PropertyAttributes kNoAttrFilter = NONE; - int num_elements = array->NumberOfLocalElements(kNoAttrFilter); + int num_elements = array->NumberOfOwnElements(kNoAttrFilter); if (num_elements > 0) { if (old_length == static_cast<uint32_t>(num_elements)) { // Simple case for arrays without holes. @@ -12009,7 +11725,7 @@ MaybeHandle<Object> JSArray::SetElementsLength( // TODO(rafaelw): For fast, sparse arrays, we can avoid iterating over // the to-be-removed indices twice. 
Handle<FixedArray> keys = isolate->factory()->NewFixedArray(num_elements); - array->GetLocalElementKeys(*keys, kNoAttrFilter); + array->GetOwnElementKeys(*keys, kNoAttrFilter); while (num_elements-- > 0) { uint32_t index = NumberToUint32(keys->get(num_elements)); if (index < new_length) break; @@ -12058,7 +11774,7 @@ MaybeHandle<Object> JSArray::SetElementsLength( SetProperty(deleted, isolate->factory()->length_string(), isolate->factory()->NewNumberFromUint(delete_count), - NONE, SLOPPY).Assert(); + STRICT).Assert(); } EnqueueSpliceRecord(array, index, deleted, add_count); @@ -12088,10 +11804,12 @@ Handle<Map> Map::GetPrototypeTransition(Handle<Map> map, Handle<Map> Map::PutPrototypeTransition(Handle<Map> map, Handle<Object> prototype, Handle<Map> target_map) { - ASSERT(target_map->IsMap()); - ASSERT(HeapObject::cast(*prototype)->map()->IsMap()); - // Don't cache prototype transition if this map is shared. - if (map->is_shared() || !FLAG_cache_prototype_transitions) return map; + DCHECK(target_map->IsMap()); + DCHECK(HeapObject::cast(*prototype)->map()->IsMap()); + // Don't cache prototype transition if this map is either shared, or a map of + // a prototype. + if (map->is_prototype_map()) return map; + if (map->is_dictionary_map() || !FLAG_cache_prototype_transitions) return map; const int step = kProtoTransitionElementsPerEntry; const int header = kProtoTransitionHeaderSize; @@ -12167,7 +11885,7 @@ void Map::AddDependentCode(Handle<Map> map, // static void Map::AddDependentIC(Handle<Map> map, Handle<Code> stub) { - ASSERT(stub->next_code_link()->IsUndefined()); + DCHECK(stub->next_code_link()->IsUndefined()); int n = map->dependent_code()->number_of_entries(DependentCode::kWeakICGroup); if (n == 0) { // Slow path: insert the head of the list with possible heap allocation. @@ -12175,7 +11893,7 @@ void Map::AddDependentIC(Handle<Map> map, } else { // Fast path: link the stub to the existing head of the list without any // heap allocation. - ASSERT(n == 1); + DCHECK(n == 1); map->dependent_code()->AddToDependentICList(stub); } } @@ -12266,7 +11984,7 @@ void DependentCode::UpdateToFinishedCode(DependencyGroup group, #ifdef DEBUG for (int i = start; i < end; i++) { - ASSERT(is_code_at(i) || compilation_info_at(i) != info); + DCHECK(is_code_at(i) || compilation_info_at(i) != info); } #endif } @@ -12293,18 +12011,18 @@ void DependentCode::RemoveCompilationInfo(DependentCode::DependencyGroup group, // Use the last of each group to fill the gap in the previous group. for (int i = group; i < kGroupCount; i++) { int last_of_group = starts.at(i + 1) - 1; - ASSERT(last_of_group >= gap); + DCHECK(last_of_group >= gap); if (last_of_group == gap) continue; copy(last_of_group, gap); gap = last_of_group; } - ASSERT(gap == starts.number_of_entries() - 1); + DCHECK(gap == starts.number_of_entries() - 1); clear_at(gap); // Clear last gap. 
set_number_of_entries(group, end - start - 1); #ifdef DEBUG for (int i = start; i < end - 1; i++) { - ASSERT(is_code_at(i) || compilation_info_at(i) != info); + DCHECK(is_code_at(i) || compilation_info_at(i) != info); } #endif } @@ -12374,7 +12092,7 @@ bool DependentCode::MarkCodeForDeoptimization( void DependentCode::DeoptimizeDependentCodeGroup( Isolate* isolate, DependentCode::DependencyGroup group) { - ASSERT(AllowCodeDependencyChange::IsAllowed()); + DCHECK(AllowCodeDependencyChange::IsAllowed()); DisallowHeapAllocation no_allocation_scope; bool marked = MarkCodeForDeoptimization(isolate, group); @@ -12414,7 +12132,7 @@ Handle<Map> Map::TransitionToPrototype(Handle<Map> map, MaybeHandle<Object> JSObject::SetPrototype(Handle<JSObject> object, Handle<Object> value, - bool skip_hidden_prototypes) { + bool from_javascript) { #ifdef DEBUG int size = object->Size(); #endif @@ -12444,10 +12162,10 @@ MaybeHandle<Object> JSObject::SetPrototype(Handle<JSObject> object, // prototype cycles are prevented. // It is sufficient to validate that the receiver is not in the new prototype // chain. - for (Object* pt = *value; - pt != heap->null_value(); - pt = pt->GetPrototype(isolate)) { - if (JSReceiver::cast(pt) == *object) { + for (PrototypeIterator iter(isolate, *value, + PrototypeIterator::START_AT_RECEIVER); + !iter.IsAtEnd(); iter.Advance()) { + if (JSReceiver::cast(iter.GetCurrent()) == *object) { // Cycle detected. Handle<Object> error = isolate->factory()->NewError( "cyclic_proto", HandleVector<Object>(NULL, 0)); @@ -12459,14 +12177,14 @@ MaybeHandle<Object> JSObject::SetPrototype(Handle<JSObject> object, object->map()->DictionaryElementsInPrototypeChainOnly(); Handle<JSObject> real_receiver = object; - if (skip_hidden_prototypes) { + if (from_javascript) { // Find the first object in the chain whose prototype object is not // hidden and set the new prototype on that object. - Object* current_proto = real_receiver->GetPrototype(); - while (current_proto->IsJSObject() && - JSObject::cast(current_proto)->map()->is_hidden_prototype()) { - real_receiver = handle(JSObject::cast(current_proto), isolate); - current_proto = current_proto->GetPrototype(isolate); + PrototypeIterator iter(isolate, real_receiver); + while (!iter.IsAtEnd(PrototypeIterator::END_AT_NON_HIDDEN)) { + real_receiver = + Handle<JSObject>::cast(PrototypeIterator::GetCurrent(iter)); + iter.Advance(); } } @@ -12477,11 +12195,13 @@ MaybeHandle<Object> JSObject::SetPrototype(Handle<JSObject> object, if (map->prototype() == *value) return value; if (value->IsJSObject()) { - JSObject::OptimizeAsPrototype(Handle<JSObject>::cast(value)); + PrototypeOptimizationMode mode = + from_javascript ? 
REGULAR_PROTOTYPE : FAST_PROTOTYPE; + JSObject::OptimizeAsPrototype(Handle<JSObject>::cast(value), mode); } Handle<Map> new_map = Map::TransitionToPrototype(map, value); - ASSERT(new_map->prototype() == *value); + DCHECK(new_map->prototype() == *value); JSObject::MigrateToMap(real_receiver, new_map); if (!dictionary_elements_in_chain && @@ -12493,7 +12213,7 @@ MaybeHandle<Object> JSObject::SetPrototype(Handle<JSObject> object, } heap->ClearInstanceofCache(); - ASSERT(size == object->Size()); + DCHECK(size == object->Size()); return value; } @@ -12511,34 +12231,15 @@ void JSObject::EnsureCanContainElements(Handle<JSObject> object, } -MaybeHandle<AccessorPair> JSObject::GetLocalPropertyAccessorPair( - Handle<JSObject> object, - Handle<Name> name) { - uint32_t index = 0; - if (name->AsArrayIndex(&index)) { - return GetLocalElementAccessorPair(object, index); - } - - Isolate* isolate = object->GetIsolate(); - LookupResult lookup(isolate); - object->LocalLookupRealNamedProperty(name, &lookup); - - if (lookup.IsPropertyCallbacks() && - lookup.GetCallbackObject()->IsAccessorPair()) { - return handle(AccessorPair::cast(lookup.GetCallbackObject()), isolate); - } - return MaybeHandle<AccessorPair>(); -} - - -MaybeHandle<AccessorPair> JSObject::GetLocalElementAccessorPair( +MaybeHandle<AccessorPair> JSObject::GetOwnElementAccessorPair( Handle<JSObject> object, uint32_t index) { if (object->IsJSGlobalProxy()) { - Handle<Object> proto(object->GetPrototype(), object->GetIsolate()); - if (proto->IsNull()) return MaybeHandle<AccessorPair>(); - ASSERT(proto->IsJSGlobalObject()); - return GetLocalElementAccessorPair(Handle<JSObject>::cast(proto), index); + PrototypeIterator iter(object->GetIsolate(), object); + if (iter.IsAtEnd()) return MaybeHandle<AccessorPair>(); + DCHECK(PrototypeIterator::GetCurrent(iter)->IsJSGlobalObject()); + return GetOwnElementAccessorPair( + Handle<JSObject>::cast(PrototypeIterator::GetCurrent(iter)), index); } // Check for lookup interceptor. @@ -12590,7 +12291,7 @@ MaybeHandle<Object> JSObject::GetElementWithCallback( uint32_t index, Handle<Object> holder) { Isolate* isolate = object->GetIsolate(); - ASSERT(!structure->IsForeign()); + DCHECK(!structure->IsForeign()); // api style callbacks. if (structure->IsExecutableAccessorInfo()) { Handle<ExecutableAccessorInfo> data = @@ -12621,7 +12322,7 @@ MaybeHandle<Object> JSObject::GetElementWithCallback( if (getter->IsSpecFunction()) { // TODO(rossberg): nicer would be to cast to some JSCallable here... return GetPropertyWithDefinedGetter( - object, receiver, Handle<JSReceiver>::cast(getter)); + receiver, Handle<JSReceiver>::cast(getter)); } // Getter is not a function. return isolate->factory()->undefined_value(); @@ -12647,8 +12348,8 @@ MaybeHandle<Object> JSObject::SetElementWithCallback(Handle<JSObject> object, // We should never get here to initialize a const with the hole // value since a const declaration would conflict with the setter. 
- ASSERT(!value->IsTheHole()); - ASSERT(!structure->IsForeign()); + DCHECK(!value->IsTheHole()); + DCHECK(!structure->IsForeign()); if (structure->IsExecutableAccessorInfo()) { // api style callbacks Handle<ExecutableAccessorInfo> data = @@ -12725,7 +12426,7 @@ MaybeHandle<Object> JSObject::SetFastElement(Handle<JSObject> object, Handle<Object> value, StrictMode strict_mode, bool check_prototype) { - ASSERT(object->HasFastSmiOrObjectElements() || + DCHECK(object->HasFastSmiOrObjectElements() || object->HasFastArgumentsElements()); Isolate* isolate = object->GetIsolate(); @@ -12788,7 +12489,7 @@ MaybeHandle<Object> JSObject::SetFastElement(Handle<JSObject> object, bool convert_to_slow = true; if ((index - capacity) < kMaxGap) { new_capacity = NewElementsCapacity(index + 1); - ASSERT(new_capacity > index); + DCHECK(new_capacity > index); if (!object->ShouldConvertToSlowElements(new_capacity)) { convert_to_slow = false; } @@ -12822,7 +12523,7 @@ MaybeHandle<Object> JSObject::SetFastElement(Handle<JSObject> object, UpdateAllocationSite(object, kind); Handle<Map> new_map = GetElementsTransitionMap(object, kind); JSObject::MigrateToMap(object, new_map); - ASSERT(IsFastObjectElementsKind(object->GetElementsKind())); + DCHECK(IsFastObjectElementsKind(object->GetElementsKind())); } // Increase backing store capacity if that's been decided previously. if (new_capacity != capacity) { @@ -12839,7 +12540,7 @@ MaybeHandle<Object> JSObject::SetFastElement(Handle<JSObject> object, } // Finally, set the new element and length. - ASSERT(object->elements()->IsFixedArray()); + DCHECK(object->elements()->IsFixedArray()); backing_store->set(index, *value); if (must_update_array_length) { Handle<JSArray>::cast(object)->set_length(Smi::FromInt(array_length)); @@ -12856,7 +12557,7 @@ MaybeHandle<Object> JSObject::SetDictionaryElement( StrictMode strict_mode, bool check_prototype, SetPropertyMode set_mode) { - ASSERT(object->HasDictionaryElements() || + DCHECK(object->HasDictionaryElements() || object->HasDictionaryArgumentsElements()); Isolate* isolate = object->GetIsolate(); @@ -12902,7 +12603,7 @@ MaybeHandle<Object> JSObject::SetDictionaryElement( Handle<AliasedArgumentsEntry>::cast(element); Handle<Context> context(Context::cast(elements->get(0))); int context_index = entry->aliased_context_slot(); - ASSERT(!context->get(context_index)->IsTheHole()); + DCHECK(!context->get(context_index)->IsTheHole()); context->set(context_index, *value); // For elements that are still writable we keep slow aliasing. if (!details.IsReadOnly()) value = element; @@ -12963,15 +12664,11 @@ MaybeHandle<Object> JSObject::SetDictionaryElement( } else { new_length = dictionary->max_number_key() + 1; } - SetFastElementsCapacitySmiMode smi_mode = FLAG_smi_only_arrays - ? kAllowSmiElements - : kDontAllowSmiElements; bool has_smi_only_elements = false; bool should_convert_to_fast_double_elements = object->ShouldConvertToFastDoubleElements(&has_smi_only_elements); - if (has_smi_only_elements) { - smi_mode = kForceSmiElements; - } + SetFastElementsCapacitySmiMode smi_mode = + has_smi_only_elements ? 
kForceSmiElements : kAllowSmiElements; if (should_convert_to_fast_double_elements) { SetFastDoubleElementsCapacityAndLength(object, new_length, new_length); @@ -12982,8 +12679,9 @@ MaybeHandle<Object> JSObject::SetDictionaryElement( JSObject::ValidateElements(object); #ifdef DEBUG if (FLAG_trace_normalization) { - PrintF("Object elements are fast case again:\n"); - object->Print(); + OFStream os(stdout); + os << "Object elements are fast case again:\n"; + object->Print(os); } #endif } @@ -12996,7 +12694,7 @@ MaybeHandle<Object> JSObject::SetFastDoubleElement( Handle<Object> value, StrictMode strict_mode, bool check_prototype) { - ASSERT(object->HasFastDoubleElements()); + DCHECK(object->HasFastDoubleElements()); Handle<FixedArrayBase> base_elms(FixedArrayBase::cast(object->elements())); uint32_t elms_length = static_cast<uint32_t>(base_elms->length()); @@ -13069,7 +12767,7 @@ MaybeHandle<Object> JSObject::SetFastDoubleElement( // Try allocating extra space. int new_capacity = NewElementsCapacity(index+1); if (!object->ShouldConvertToSlowElements(new_capacity)) { - ASSERT(static_cast<uint32_t>(new_capacity) > index); + DCHECK(static_cast<uint32_t>(new_capacity) > index); SetFastDoubleElementsCapacityAndLength(object, new_capacity, index + 1); FixedDoubleArray::cast(object->elements())->set(index, double_value); JSObject::ValidateElements(object); @@ -13078,13 +12776,13 @@ MaybeHandle<Object> JSObject::SetFastDoubleElement( } // Otherwise default to slow case. - ASSERT(object->HasFastDoubleElements()); - ASSERT(object->map()->has_fast_double_elements()); - ASSERT(object->elements()->IsFixedDoubleArray() || + DCHECK(object->HasFastDoubleElements()); + DCHECK(object->map()->has_fast_double_elements()); + DCHECK(object->elements()->IsFixedDoubleArray() || object->elements()->length() == 0); NormalizeElements(object); - ASSERT(object->HasDictionaryElements()); + DCHECK(object->HasDictionaryElements()); return SetElement(object, index, value, NONE, strict_mode, check_prototype); } @@ -13107,7 +12805,7 @@ MaybeHandle<Object> JSObject::SetOwnElement(Handle<JSObject> object, uint32_t index, Handle<Object> value, StrictMode strict_mode) { - ASSERT(!object->HasExternalArrayElements()); + DCHECK(!object->HasExternalArrayElements()); return JSObject::SetElement(object, index, value, NONE, strict_mode, false); } @@ -13140,13 +12838,12 @@ MaybeHandle<Object> JSObject::SetElement(Handle<JSObject> object, } if (object->IsJSGlobalProxy()) { - Handle<Object> proto(object->GetPrototype(), isolate); - if (proto->IsNull()) return value; - ASSERT(proto->IsJSGlobalObject()); - return SetElement(Handle<JSObject>::cast(proto), index, value, attributes, - strict_mode, - check_prototype, - set_mode); + PrototypeIterator iter(isolate, object); + if (iter.IsAtEnd()) return value; + DCHECK(PrototypeIterator::GetCurrent(iter)->IsJSGlobalObject()); + return SetElement( + Handle<JSObject>::cast(PrototypeIterator::GetCurrent(iter)), index, + value, attributes, strict_mode, check_prototype, set_mode); } // Don't allow element properties to be redefined for external arrays. 
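
Several hunks above converge on one mechanical change: hand-rolled prototype walks (the cyclic-proto check in JSObject::SetPrototype, the hidden-prototype skip, the JSGlobalProxy forwarding in SetElement and GetOwnElementAccessorPair) become PrototypeIterator loops. Below is a minimal, self-contained sketch of that pattern for readers following the diff; Obj and ProtoIterator are illustrative stand-ins, not V8's real types, and the actual iterator (src/prototype.h) additionally understands hidden prototypes (END_AT_NON_HIDDEN) and proxies.

    // Standalone sketch of the prototype-walk pattern these hunks migrate to.
    // Names are hypothetical; V8's PrototypeIterator differs in detail.
    #include <cassert>
    #include <cstdio>

    struct Obj {
      Obj* proto;  // nullptr plays the role of V8's null_value() sentinel.
    };

    class ProtoIterator {
     public:
      enum WhereToStart { START_AT_PROTOTYPE, START_AT_RECEIVER };
      explicit ProtoIterator(Obj* receiver,
                             WhereToStart start = START_AT_PROTOTYPE)
          : current_(start == START_AT_RECEIVER ? receiver : receiver->proto) {}
      bool IsAtEnd() const { return current_ == nullptr; }
      void Advance() { current_ = current_->proto; }
      Obj* GetCurrent() const { return current_; }

     private:
      Obj* current_;
    };

    // Mirrors the cycle check in JSObject::SetPrototype: walk the chain of
    // the proposed prototype, starting at the value itself, and fail if the
    // receiver shows up anywhere in it.
    bool WouldCreateCycle(Obj* receiver, Obj* new_proto) {
      for (ProtoIterator it(new_proto, ProtoIterator::START_AT_RECEIVER);
           !it.IsAtEnd(); it.Advance()) {
        if (it.GetCurrent() == receiver) return true;
      }
      return false;
    }

    int main() {
      Obj base{nullptr}, mid{&base}, top{&mid};
      assert(!WouldCreateCycle(&top, &base));  // base's chain never hits top.
      assert(WouldCreateCycle(&base, &top));   // top's chain contains base.
      std::printf("ok\n");
    }

The payoff visible throughout the diff is that null-checking, receiver-vs-prototype start position, and end conditions live in one class instead of being restated (slightly differently) at every call site.
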
@@ -13175,14 +12872,17 @@ MaybeHandle<Object> JSObject::SetElement(Handle<JSObject> object, strict_mode, check_prototype, set_mode); } - PropertyAttributes old_attributes = - JSReceiver::GetLocalElementAttribute(object, index); + Maybe<PropertyAttributes> maybe = + JSReceiver::GetOwnElementAttribute(object, index); + if (!maybe.has_value) return MaybeHandle<Object>(); + PropertyAttributes old_attributes = maybe.value; + Handle<Object> old_value = isolate->factory()->the_hole_value(); Handle<Object> old_length_handle; Handle<Object> new_length_handle; if (old_attributes != ABSENT) { - if (GetLocalElementAccessorPair(object, index).is_null()) { + if (GetOwnElementAccessorPair(object, index).is_null()) { old_value = Object::GetElement(isolate, object, index).ToHandleChecked(); } } else if (object->IsJSArray()) { @@ -13205,7 +12905,10 @@ MaybeHandle<Object> JSObject::SetElement(Handle<JSObject> object, Object); Handle<String> name = isolate->factory()->Uint32ToString(index); - PropertyAttributes new_attributes = GetLocalElementAttribute(object, index); + maybe = GetOwnElementAttribute(object, index); + if (!maybe.has_value) return MaybeHandle<Object>(); + PropertyAttributes new_attributes = maybe.value; + if (old_attributes == ABSENT) { if (object->IsJSArray() && !old_length_handle->SameValue( @@ -13254,7 +12957,7 @@ MaybeHandle<Object> JSObject::SetElementWithoutInterceptor( StrictMode strict_mode, bool check_prototype, SetPropertyMode set_mode) { - ASSERT(object->HasDictionaryElements() || + DCHECK(object->HasDictionaryElements() || object->HasDictionaryArgumentsElements() || (attributes & (DONT_DELETE | DONT_ENUM | READ_ONLY)) == 0); Isolate* isolate = object->GetIsolate(); @@ -13268,6 +12971,14 @@ MaybeHandle<Object> JSObject::SetElementWithoutInterceptor( CheckArrayAbuse(object, "elements write", index, true); } } + if (object->IsJSArray() && JSArray::WouldChangeReadOnlyLength( + Handle<JSArray>::cast(object), index)) { + if (strict_mode == SLOPPY) { + return value; + } else { + return JSArray::ReadOnlyLengthError(Handle<JSArray>::cast(object)); + } + } switch (object->GetElementsKind()) { case FAST_SMI_ELEMENTS: case FAST_ELEMENTS: @@ -13308,7 +13019,7 @@ MaybeHandle<Object> JSObject::SetElementWithoutInterceptor( if (!probe.is_null() && !probe->IsTheHole()) { Handle<Context> context(Context::cast(parameter_map->get(0))); int context_index = Handle<Smi>::cast(probe)->value(); - ASSERT(!context->get(context_index)->IsTheHole()); + DCHECK(!context->get(context_index)->IsTheHole()); context->set(context_index, *value); // Redefining attributes of an aliased element destroys fast aliasing. 
if (set_mode == SET_PROPERTY || attributes == NONE) return value; @@ -13356,7 +13067,7 @@ PretenureFlag AllocationSite::GetPretenureMode() { bool AllocationSite::IsNestedSite() { - ASSERT(FLAG_trace_track_allocation_sites); + DCHECK(FLAG_trace_track_allocation_sites); Object* current = GetHeap()->allocation_sites_list(); while (current->IsAllocationSite()) { AllocationSite* current_site = AllocationSite::cast(current); @@ -13435,6 +13146,19 @@ void AllocationSite::AddDependentCompilationInfo(Handle<AllocationSite> site, } +const char* AllocationSite::PretenureDecisionName(PretenureDecision decision) { + switch (decision) { + case kUndecided: return "undecided"; + case kDontTenure: return "don't tenure"; + case kMaybeTenure: return "maybe tenure"; + case kTenure: return "tenure"; + case kZombie: return "zombie"; + default: UNREACHABLE(); + } + return NULL; +} + + void JSObject::UpdateAllocationSite(Handle<JSObject> object, ElementsKind to_kind) { if (!object->IsJSArray()) return; @@ -13476,7 +13200,7 @@ void JSObject::TransitionElementsKind(Handle<JSObject> object, IsFastSmiOrObjectElementsKind(to_kind)) || (from_kind == FAST_DOUBLE_ELEMENTS && to_kind == FAST_HOLEY_DOUBLE_ELEMENTS)) { - ASSERT(from_kind != TERMINAL_FAST_ELEMENTS_KIND); + DCHECK(from_kind != TERMINAL_FAST_ELEMENTS_KIND); // No change is needed to the elements() buffer, the transition // only requires a map change. Handle<Map> new_map = GetElementsTransitionMap(object, to_kind); @@ -13553,6 +13277,41 @@ void JSArray::JSArrayUpdateLengthFromIndex(Handle<JSArray> array, } +bool JSArray::IsReadOnlyLengthDescriptor(Handle<Map> jsarray_map) { + Isolate* isolate = jsarray_map->GetIsolate(); + DCHECK(!jsarray_map->is_dictionary_map()); + LookupResult lookup(isolate); + Handle<Name> length_string = isolate->factory()->length_string(); + jsarray_map->LookupDescriptor(NULL, *length_string, &lookup); + return lookup.IsReadOnly(); +} + + +bool JSArray::WouldChangeReadOnlyLength(Handle<JSArray> array, + uint32_t index) { + uint32_t length = 0; + CHECK(array->length()->ToArrayIndex(&length)); + if (length <= index) { + Isolate* isolate = array->GetIsolate(); + LookupResult lookup(isolate); + Handle<Name> length_string = isolate->factory()->length_string(); + array->LookupOwnRealNamedProperty(length_string, &lookup); + return lookup.IsReadOnly(); + } + return false; +} + + +MaybeHandle<Object> JSArray::ReadOnlyLengthError(Handle<JSArray> array) { + Isolate* isolate = array->GetIsolate(); + Handle<Name> length = isolate->factory()->length_string(); + Handle<Object> args[2] = { length, array }; + Handle<Object> error = isolate->factory()->NewTypeError( + "strict_read_only_property", HandleVector(args, ARRAY_SIZE(args))); + return isolate->Throw<Object>(error); +} + + MaybeHandle<Object> JSObject::GetElementWithInterceptor( Handle<JSObject> object, Handle<Object> receiver, @@ -13588,9 +13347,10 @@ MaybeHandle<Object> JSObject::GetElementWithInterceptor( Object); if (!result->IsTheHole()) return result; - Handle<Object> proto(object->GetPrototype(), isolate); - if (proto->IsNull()) return isolate->factory()->undefined_value(); - return Object::GetElementWithReceiver(isolate, proto, receiver, index); + PrototypeIterator iter(isolate, object); + if (iter.IsAtEnd()) return isolate->factory()->undefined_value(); + return Object::GetElementWithReceiver( + isolate, PrototypeIterator::GetCurrent(iter), receiver, index); } @@ -13713,7 +13473,7 @@ bool JSObject::ShouldConvertToSlowElements(int new_capacity) { bool 
JSObject::ShouldConvertToFastElements() { - ASSERT(HasDictionaryElements() || HasDictionaryArgumentsElements()); + DCHECK(HasDictionaryElements() || HasDictionaryArgumentsElements()); // If the elements are sparse, we should not go back to fast case. if (!HasDenseElements()) return false; // An object requiring access checks is never allowed to have fast @@ -13753,7 +13513,7 @@ bool JSObject::ShouldConvertToFastDoubleElements( *has_smi_only_elements = false; if (HasSloppyArgumentsElements()) return false; if (FLAG_unbox_double_arrays) { - ASSERT(HasDictionaryElements()); + DCHECK(HasDictionaryElements()); SeededNumberDictionary* dictionary = element_dictionary(); bool found_double = false; for (int i = 0; i < dictionary->Capacity(); i++) { @@ -13780,21 +13540,19 @@ bool JSObject::ShouldConvertToFastDoubleElements( // together, so even though this function belongs in objects-debug.cc, // we keep it here instead to satisfy certain compilers. #ifdef OBJECT_PRINT -template<typename Derived, typename Shape, typename Key> -void Dictionary<Derived, Shape, Key>::Print(FILE* out) { +template <typename Derived, typename Shape, typename Key> +void Dictionary<Derived, Shape, Key>::Print(OStream& os) { // NOLINT int capacity = DerivedHashTable::Capacity(); for (int i = 0; i < capacity; i++) { Object* k = DerivedHashTable::KeyAt(i); if (DerivedHashTable::IsKey(k)) { - PrintF(out, " "); + os << " "; if (k->IsString()) { - String::cast(k)->StringPrint(out); + String::cast(k)->StringPrint(os); } else { - k->ShortPrint(out); + os << Brief(k); } - PrintF(out, ": "); - ValueAt(i)->ShortPrint(out); - PrintF(out, "\n"); + os << ": " << Brief(ValueAt(i)) << "\n"; } } } @@ -13813,14 +13571,14 @@ void Dictionary<Derived, Shape, Key>::CopyValuesTo(FixedArray* elements) { elements->set(pos++, ValueAt(i), mode); } } - ASSERT(pos == elements->length()); + DCHECK(pos == elements->length()); } InterceptorInfo* JSObject::GetNamedInterceptor() { - ASSERT(map()->has_named_interceptor()); + DCHECK(map()->has_named_interceptor()); JSFunction* constructor = JSFunction::cast(map()->constructor()); - ASSERT(constructor->shared()->IsApiFunction()); + DCHECK(constructor->shared()->IsApiFunction()); Object* result = constructor->shared()->get_api_func_data()->named_property_handler(); return InterceptorInfo::cast(result); @@ -13828,69 +13586,44 @@ InterceptorInfo* JSObject::GetNamedInterceptor() { InterceptorInfo* JSObject::GetIndexedInterceptor() { - ASSERT(map()->has_indexed_interceptor()); + DCHECK(map()->has_indexed_interceptor()); JSFunction* constructor = JSFunction::cast(map()->constructor()); - ASSERT(constructor->shared()->IsApiFunction()); + DCHECK(constructor->shared()->IsApiFunction()); Object* result = constructor->shared()->get_api_func_data()->indexed_property_handler(); return InterceptorInfo::cast(result); } -MaybeHandle<Object> JSObject::GetPropertyPostInterceptor( - Handle<JSObject> object, - Handle<Object> receiver, - Handle<Name> name, - PropertyAttributes* attributes) { - // Check local property in holder, ignore interceptor. - Isolate* isolate = object->GetIsolate(); - LookupResult lookup(isolate); - object->LocalLookupRealNamedProperty(name, &lookup); - if (lookup.IsFound()) { - return GetProperty(object, receiver, &lookup, name, attributes); - } else { - // Continue searching via the prototype chain. 
- Handle<Object> prototype(object->GetPrototype(), isolate); - *attributes = ABSENT; - if (prototype->IsNull()) return isolate->factory()->undefined_value(); - return GetPropertyWithReceiver(prototype, receiver, name, attributes); - } -} - - MaybeHandle<Object> JSObject::GetPropertyWithInterceptor( - Handle<JSObject> object, + Handle<JSObject> holder, Handle<Object> receiver, - Handle<Name> name, - PropertyAttributes* attributes) { - Isolate* isolate = object->GetIsolate(); + Handle<Name> name) { + Isolate* isolate = holder->GetIsolate(); // TODO(rossberg): Support symbols in the API. if (name->IsSymbol()) return isolate->factory()->undefined_value(); - Handle<InterceptorInfo> interceptor(object->GetNamedInterceptor(), isolate); + Handle<InterceptorInfo> interceptor(holder->GetNamedInterceptor(), isolate); Handle<String> name_string = Handle<String>::cast(name); - if (!interceptor->getter()->IsUndefined()) { - v8::NamedPropertyGetterCallback getter = - v8::ToCData<v8::NamedPropertyGetterCallback>(interceptor->getter()); - LOG(isolate, - ApiNamedPropertyAccess("interceptor-named-get", *object, *name)); - PropertyCallbackArguments - args(isolate, interceptor->data(), *receiver, *object); - v8::Handle<v8::Value> result = - args.Call(getter, v8::Utils::ToLocal(name_string)); - RETURN_EXCEPTION_IF_SCHEDULED_EXCEPTION(isolate, Object); - if (!result.IsEmpty()) { - *attributes = NONE; - Handle<Object> result_internal = v8::Utils::OpenHandle(*result); - result_internal->VerifyApiCallResultType(); - // Rebox handle before return. - return handle(*result_internal, isolate); - } - } + if (interceptor->getter()->IsUndefined()) return MaybeHandle<Object>(); + + v8::NamedPropertyGetterCallback getter = + v8::ToCData<v8::NamedPropertyGetterCallback>(interceptor->getter()); + LOG(isolate, + ApiNamedPropertyAccess("interceptor-named-get", *holder, *name)); + PropertyCallbackArguments + args(isolate, interceptor->data(), *receiver, *holder); + v8::Handle<v8::Value> result = + args.Call(getter, v8::Utils::ToLocal(name_string)); + RETURN_EXCEPTION_IF_SCHEDULED_EXCEPTION(isolate, Object); + if (result.IsEmpty()) return MaybeHandle<Object>(); - return GetPropertyPostInterceptor(object, receiver, name, attributes); + Handle<Object> result_internal = v8::Utils::OpenHandle(*result); + result_internal->VerifyApiCallResultType(); + // Rebox handle before return + return handle(*result_internal, isolate); } @@ -13945,70 +13678,74 @@ MaybeHandle<JSObject> JSObject::GetKeysForIndexedInterceptor( } -bool JSObject::HasRealNamedProperty(Handle<JSObject> object, - Handle<Name> key) { +Maybe<bool> JSObject::HasRealNamedProperty(Handle<JSObject> object, + Handle<Name> key) { Isolate* isolate = object->GetIsolate(); SealHandleScope shs(isolate); // Check access rights if needed. if (object->IsAccessCheckNeeded()) { if (!isolate->MayNamedAccess(object, key, v8::ACCESS_HAS)) { isolate->ReportFailedAccessCheck(object, v8::ACCESS_HAS); - // TODO(yangguo): Issue 3269, check for scheduled exception missing? 
- return false; + RETURN_VALUE_IF_SCHEDULED_EXCEPTION(isolate, Maybe<bool>()); + return maybe(false); } } LookupResult result(isolate); - object->LocalLookupRealNamedProperty(key, &result); - return result.IsFound() && !result.IsInterceptor(); + object->LookupOwnRealNamedProperty(key, &result); + return maybe(result.IsFound() && !result.IsInterceptor()); } -bool JSObject::HasRealElementProperty(Handle<JSObject> object, uint32_t index) { +Maybe<bool> JSObject::HasRealElementProperty(Handle<JSObject> object, + uint32_t index) { Isolate* isolate = object->GetIsolate(); HandleScope scope(isolate); // Check access rights if needed. if (object->IsAccessCheckNeeded()) { if (!isolate->MayIndexedAccess(object, index, v8::ACCESS_HAS)) { isolate->ReportFailedAccessCheck(object, v8::ACCESS_HAS); - // TODO(yangguo): Issue 3269, check for scheduled exception missing? - return false; + RETURN_VALUE_IF_SCHEDULED_EXCEPTION(isolate, Maybe<bool>()); + return maybe(false); } } if (object->IsJSGlobalProxy()) { HandleScope scope(isolate); - Handle<Object> proto(object->GetPrototype(), isolate); - if (proto->IsNull()) return false; - ASSERT(proto->IsJSGlobalObject()); - return HasRealElementProperty(Handle<JSObject>::cast(proto), index); + PrototypeIterator iter(isolate, object); + if (iter.IsAtEnd()) return maybe(false); + DCHECK(PrototypeIterator::GetCurrent(iter)->IsJSGlobalObject()); + return HasRealElementProperty( + Handle<JSObject>::cast(PrototypeIterator::GetCurrent(iter)), index); } - return GetElementAttributeWithoutInterceptor( - object, object, index, false) != ABSENT; + Maybe<PropertyAttributes> result = + GetElementAttributeWithoutInterceptor(object, object, index, false); + if (!result.has_value) return Maybe<bool>(); + return maybe(result.value != ABSENT); } -bool JSObject::HasRealNamedCallbackProperty(Handle<JSObject> object, - Handle<Name> key) { +Maybe<bool> JSObject::HasRealNamedCallbackProperty(Handle<JSObject> object, + Handle<Name> key) { Isolate* isolate = object->GetIsolate(); SealHandleScope shs(isolate); // Check access rights if needed. if (object->IsAccessCheckNeeded()) { if (!isolate->MayNamedAccess(object, key, v8::ACCESS_HAS)) { isolate->ReportFailedAccessCheck(object, v8::ACCESS_HAS); - // TODO(yangguo): Issue 3269, check for scheduled exception missing? - return false; + RETURN_VALUE_IF_SCHEDULED_EXCEPTION(isolate, Maybe<bool>()); + return maybe(false); } } LookupResult result(isolate); - object->LocalLookupRealNamedProperty(key, &result); - return result.IsPropertyCallbacks(); + object->LookupOwnRealNamedProperty(key, &result); + return maybe(result.IsPropertyCallbacks()); } -int JSObject::NumberOfLocalProperties(PropertyAttributes filter) { +int JSObject::NumberOfOwnProperties(PropertyAttributes filter) { if (HasFastProperties()) { Map* map = this->map(); if (filter == NONE) return map->NumberOfOwnDescriptors(); @@ -14051,7 +13788,7 @@ static void InsertionSortPairs(FixedArray* content, void HeapSortPairs(FixedArray* content, FixedArray* numbers, int len) { // In-place heap sort. - ASSERT(content->length() == numbers->length()); + DCHECK(content->length() == numbers->length()); // Bottom-up max-heap construction. for (int i = 1; i < len; ++i) { @@ -14097,7 +13834,7 @@ void HeapSortPairs(FixedArray* content, FixedArray* numbers, int len) { // Sort this array and the numbers as pairs wrt. the (distinct) numbers. 
void FixedArray::SortPairs(FixedArray* numbers, uint32_t len) { - ASSERT(this->length() == numbers->length()); + DCHECK(this->length() == numbers->length()); // For small arrays, simply use insertion sort. if (len <= 10) { InsertionSortPairs(this, numbers, len); @@ -14135,12 +13872,12 @@ void FixedArray::SortPairs(FixedArray* numbers, uint32_t len) { } -// Fill in the names of local properties into the supplied storage. The main +// Fill in the names of own properties into the supplied storage. The main // purpose of this function is to provide reflection information for the object // mirrors. -void JSObject::GetLocalPropertyNames( +void JSObject::GetOwnPropertyNames( FixedArray* storage, int index, PropertyAttributes filter) { - ASSERT(storage->length() >= (NumberOfLocalProperties(filter) - index)); + DCHECK(storage->length() >= (NumberOfOwnProperties(filter) - index)); if (HasFastProperties()) { int real_size = map()->NumberOfOwnDescriptors(); DescriptorArray* descs = map()->instance_descriptors(); @@ -14159,8 +13896,8 @@ void JSObject::GetLocalPropertyNames( } -int JSObject::NumberOfLocalElements(PropertyAttributes filter) { - return GetLocalElementKeys(NULL, filter); +int JSObject::NumberOfOwnElements(PropertyAttributes filter) { + return GetOwnElementKeys(NULL, filter); } @@ -14174,12 +13911,12 @@ int JSObject::NumberOfEnumElements() { if (length == 0) return 0; } // Compute the number of enumerable elements. - return NumberOfLocalElements(static_cast<PropertyAttributes>(DONT_ENUM)); + return NumberOfOwnElements(static_cast<PropertyAttributes>(DONT_ENUM)); } -int JSObject::GetLocalElementKeys(FixedArray* storage, - PropertyAttributes filter) { +int JSObject::GetOwnElementKeys(FixedArray* storage, + PropertyAttributes filter) { int counter = 0; switch (GetElementsKind()) { case FAST_SMI_ELEMENTS: @@ -14197,14 +13934,14 @@ int JSObject::GetLocalElementKeys(FixedArray* storage, counter++; } } - ASSERT(!storage || storage->length() >= counter); + DCHECK(!storage || storage->length() >= counter); break; } case FAST_DOUBLE_ELEMENTS: case FAST_HOLEY_DOUBLE_ELEMENTS: { int length = IsJSArray() ? Smi::cast(JSArray::cast(this)->length())->value() : - FixedDoubleArray::cast(elements())->length(); + FixedArrayBase::cast(elements())->length(); for (int i = 0; i < length; i++) { if (!FixedDoubleArray::cast(elements())->is_the_hole(i)) { if (storage != NULL) { @@ -14213,7 +13950,7 @@ int JSObject::GetLocalElementKeys(FixedArray* storage, counter++; } } - ASSERT(!storage || storage->length() >= counter); + DCHECK(!storage || storage->length() >= counter); break; } @@ -14231,7 +13968,7 @@ int JSObject::GetLocalElementKeys(FixedArray* storage, } counter++; } - ASSERT(!storage || storage->length() >= counter); + DCHECK(!storage || storage->length() >= counter); break; } @@ -14299,44 +14036,16 @@ int JSObject::GetLocalElementKeys(FixedArray* storage, counter += str->length(); } } - ASSERT(!storage || storage->length() == counter); + DCHECK(!storage || storage->length() == counter); return counter; } int JSObject::GetEnumElementKeys(FixedArray* storage) { - return GetLocalElementKeys(storage, - static_cast<PropertyAttributes>(DONT_ENUM)); + return GetOwnElementKeys(storage, static_cast<PropertyAttributes>(DONT_ENUM)); } -// StringKey simply carries a string object as key. 
-class StringKey : public HashTableKey { - public: - explicit StringKey(String* string) : - string_(string), - hash_(HashForObject(string)) { } - - bool IsMatch(Object* string) { - // We know that all entries in a hash table had their hash keys created. - // Use that knowledge to have fast failure. - if (hash_ != HashForObject(string)) { - return false; - } - return string_->Equals(String::cast(string)); - } - - uint32_t Hash() { return hash_; } - - uint32_t HashForObject(Object* other) { return String::cast(other)->Hash(); } - - Object* AsObject(Heap* heap) { return string_; } - - String* string_; - uint32_t hash_; -}; - - // StringSharedKeys are used as keys in the eval cache. class StringSharedKey : public HashTableKey { public: @@ -14356,7 +14065,7 @@ class StringSharedKey : public HashTableKey { SharedFunctionInfo* shared = SharedFunctionInfo::cast(other_array->get(0)); if (shared != *shared_) return false; int strict_unchecked = Smi::cast(other_array->get(2))->value(); - ASSERT(strict_unchecked == SLOPPY || strict_unchecked == STRICT); + DCHECK(strict_unchecked == SLOPPY || strict_unchecked == STRICT); StrictMode strict_mode = static_cast<StrictMode>(strict_unchecked); if (strict_mode != strict_mode_) return false; int scope_position = Smi::cast(other_array->get(3))->value(); @@ -14395,7 +14104,7 @@ class StringSharedKey : public HashTableKey { SharedFunctionInfo* shared = SharedFunctionInfo::cast(other_array->get(0)); String* source = String::cast(other_array->get(1)); int strict_unchecked = Smi::cast(other_array->get(2))->value(); - ASSERT(strict_unchecked == SLOPPY || strict_unchecked == STRICT); + DCHECK(strict_unchecked == SLOPPY || strict_unchecked == STRICT); StrictMode strict_mode = static_cast<StrictMode>(strict_unchecked); int scope_position = Smi::cast(other_array->get(3))->value(); return StringSharedHashHelper( @@ -14546,7 +14255,7 @@ class InternalizedStringKey : public HashTableKey { Handle<Map> map; if (maybe_map.ToHandle(&map)) { string_->set_map_no_write_barrier(*map); - ASSERT(string_->IsInternalizedString()); + DCHECK(string_->IsInternalizedString()); return string_; } // Otherwise allocate a new internalized string. @@ -14582,8 +14291,8 @@ Handle<Derived> HashTable<Derived, Shape, Key>::New( int at_least_space_for, MinimumCapacity capacity_option, PretenureFlag pretenure) { - ASSERT(0 <= at_least_space_for); - ASSERT(!capacity_option || IsPowerOf2(at_least_space_for)); + DCHECK(0 <= at_least_space_for); + DCHECK(!capacity_option || IsPowerOf2(at_least_space_for)); int capacity = (capacity_option == USE_CUSTOM_MINIMUM_CAPACITY) ? 
at_least_space_for : ComputeCapacity(at_least_space_for); @@ -14637,7 +14346,7 @@ int NameDictionary::FindEntry(Handle<Name> key) { set(index, *key); return entry; } - ASSERT(element->IsTheHole() || !Name::cast(element)->Equals(*key)); + DCHECK(element->IsTheHole() || !Name::cast(element)->Equals(*key)); entry = NextProbe(entry, count++, capacity); } return kNotFound; @@ -14648,7 +14357,7 @@ template<typename Derived, typename Shape, typename Key> void HashTable<Derived, Shape, Key>::Rehash( Handle<Derived> new_table, Key key) { - ASSERT(NumberOfElements() < new_table->Capacity()); + DCHECK(NumberOfElements() < new_table->Capacity()); DisallowHeapAllocation no_gc; WriteBarrierMode mode = new_table->GetWriteBarrierMode(no_gc); @@ -14879,10 +14588,6 @@ template Object* Dictionary<SeededNumberDictionary, SeededNumberDictionaryShape, uint32_t>:: SlowReverseLookup(Object* value); -template Object* -Dictionary<UnseededNumberDictionary, UnseededNumberDictionaryShape, uint32_t>:: - SlowReverseLookup(Object* value); - template Object* Dictionary<NameDictionary, NameDictionaryShape, Handle<Name> >:: SlowReverseLookup(Object* value); @@ -14981,7 +14686,7 @@ int HashTable<SeededNumberDictionary, SeededNumberDictionaryShape, uint32_t>:: Handle<Object> JSObject::PrepareSlowElementsForSort( Handle<JSObject> object, uint32_t limit) { - ASSERT(object->HasDictionaryElements()); + DCHECK(object->HasDictionaryElements()); Isolate* isolate = object->GetIsolate(); // Must stay in dictionary mode, either because of requires_slow_elements, // or because we are not going to sort (and therefore compact) all of the @@ -15001,10 +14706,10 @@ Handle<Object> JSObject::PrepareSlowElementsForSort( Object* k = dict->KeyAt(i); if (!dict->IsKey(k)) continue; - ASSERT(k->IsNumber()); - ASSERT(!k->IsSmi() || Smi::cast(k)->value() >= 0); - ASSERT(!k->IsHeapNumber() || HeapNumber::cast(k)->value() >= 0); - ASSERT(!k->IsHeapNumber() || HeapNumber::cast(k)->value() <= kMaxUInt32); + DCHECK(k->IsNumber()); + DCHECK(!k->IsSmi() || Smi::cast(k)->value() >= 0); + DCHECK(!k->IsHeapNumber() || HeapNumber::cast(k)->value() >= 0); + DCHECK(!k->IsHeapNumber() || HeapNumber::cast(k)->value() <= kMaxUInt32); HandleScope scope(isolate); Handle<Object> value(dict->ValueAt(i), isolate); @@ -15026,7 +14731,7 @@ Handle<Object> JSObject::PrepareSlowElementsForSort( } else { Handle<Object> result = SeededNumberDictionary::AddNumberEntry( new_dict, pos, value, details); - ASSERT(result.is_identical_to(new_dict)); + DCHECK(result.is_identical_to(new_dict)); USE(result); pos++; } @@ -15037,7 +14742,7 @@ Handle<Object> JSObject::PrepareSlowElementsForSort( } else { Handle<Object> result = SeededNumberDictionary::AddNumberEntry( new_dict, key, value, details); - ASSERT(result.is_identical_to(new_dict)); + DCHECK(result.is_identical_to(new_dict)); USE(result); } } @@ -15053,7 +14758,7 @@ Handle<Object> JSObject::PrepareSlowElementsForSort( HandleScope scope(isolate); Handle<Object> result = SeededNumberDictionary::AddNumberEntry( new_dict, pos, isolate->factory()->undefined_value(), no_details); - ASSERT(result.is_identical_to(new_dict)); + DCHECK(result.is_identical_to(new_dict)); USE(result); pos++; undefs--; @@ -15107,7 +14812,7 @@ Handle<Object> JSObject::PrepareElementsForSort(Handle<JSObject> object, } else if (!object->HasFastDoubleElements()) { EnsureWritableFastElements(object); } - ASSERT(object->HasFastSmiOrObjectElements() || + DCHECK(object->HasFastSmiOrObjectElements() || object->HasFastDoubleElements()); // Collect holes at the end, 
undefined before that and the rest at the @@ -15263,7 +14968,7 @@ Handle<Object> ExternalUint8ClampedArray::SetValue( } else { // Clamp undefined to zero (default). All other types have been // converted to a number type further up in the call chain. - ASSERT(value->IsUndefined()); + DCHECK(value->IsUndefined()); } array->set(index, clamped_value); } @@ -15288,7 +14993,7 @@ static Handle<Object> ExternalArrayIntSetter( } else { // Clamp undefined to zero (default). All other types have been // converted to a number type further up in the call chain. - ASSERT(value->IsUndefined()); + DCHECK(value->IsUndefined()); } receiver->set(index, cast_value); } @@ -15351,7 +15056,7 @@ Handle<Object> ExternalUint32Array::SetValue( } else { // Clamp undefined to zero (default). All other types have been // converted to a number type further up in the call chain. - ASSERT(value->IsUndefined()); + DCHECK(value->IsUndefined()); } array->set(index, cast_value); } @@ -15363,7 +15068,7 @@ Handle<Object> ExternalFloat32Array::SetValue( Handle<ExternalFloat32Array> array, uint32_t index, Handle<Object> value) { - float cast_value = static_cast<float>(OS::nan_value()); + float cast_value = static_cast<float>(base::OS::nan_value()); if (index < static_cast<uint32_t>(array->length())) { if (value->IsSmi()) { int int_value = Handle<Smi>::cast(value)->value(); @@ -15374,7 +15079,7 @@ Handle<Object> ExternalFloat32Array::SetValue( } else { // Clamp undefined to NaN (default). All other types have been // converted to a number type further up in the call chain. - ASSERT(value->IsUndefined()); + DCHECK(value->IsUndefined()); } array->set(index, cast_value); } @@ -15386,14 +15091,14 @@ Handle<Object> ExternalFloat64Array::SetValue( Handle<ExternalFloat64Array> array, uint32_t index, Handle<Object> value) { - double double_value = OS::nan_value(); + double double_value = base::OS::nan_value(); if (index < static_cast<uint32_t>(array->length())) { if (value->IsNumber()) { double_value = value->Number(); } else { // Clamp undefined to NaN (default). All other types have been // converted to a number type further up in the call chain. 
- ASSERT(value->IsUndefined()); + DCHECK(value->IsUndefined()); } array->set(index, double_value); } @@ -15402,7 +15107,7 @@ Handle<Object> ExternalFloat64Array::SetValue( PropertyCell* GlobalObject::GetPropertyCell(LookupResult* result) { - ASSERT(!HasFastProperties()); + DCHECK(!HasFastProperties()); Object* value = property_dictionary()->ValueAt(result->GetDictionaryEntry()); return PropertyCell::cast(value); } @@ -15411,7 +15116,7 @@ PropertyCell* GlobalObject::GetPropertyCell(LookupResult* result) { Handle<PropertyCell> JSGlobalObject::EnsurePropertyCell( Handle<JSGlobalObject> global, Handle<Name> name) { - ASSERT(!global->HasFastProperties()); + DCHECK(!global->HasFastProperties()); int entry = global->property_dictionary()->FindEntry(name); if (entry == NameDictionary::kNotFound) { Isolate* isolate = global->GetIsolate(); @@ -15425,7 +15130,7 @@ Handle<PropertyCell> JSGlobalObject::EnsurePropertyCell( return cell; } else { Object* value = global->property_dictionary()->ValueAt(entry); - ASSERT(value->IsPropertyCell()); + DCHECK(value->IsPropertyCell()); return handle(PropertyCell::cast(value)); } } @@ -15463,7 +15168,7 @@ class TwoCharHashTableKey : public HashTableKey { uint16_t chars[2] = {c1, c2}; uint32_t check_hash = StringHasher::HashSequentialString(chars, 2, seed); hash = (hash << String::kHashShift) | String::kIsNotArrayIndexMask; - ASSERT_EQ(static_cast<int32_t>(hash), static_cast<int32_t>(check_hash)); + DCHECK_EQ(static_cast<int32_t>(hash), static_cast<int32_t>(check_hash)); #endif } @@ -15515,7 +15220,7 @@ MaybeHandle<String> StringTable::LookupStringIfExists( return MaybeHandle<String>(); } else { Handle<String> result(String::cast(string_table->KeyAt(entry)), isolate); - ASSERT(StringShape(*result).IsInternalized()); + DCHECK(StringShape(*result).IsInternalized()); return result; } } @@ -15532,7 +15237,7 @@ MaybeHandle<String> StringTable::LookupTwoCharsStringIfExists( return MaybeHandle<String>(); } else { Handle<String> result(String::cast(string_table->KeyAt(entry)), isolate); - ASSERT(StringShape(*result).IsInternalized()); + DCHECK(StringShape(*result).IsInternalized()); return result; } } @@ -15735,7 +15440,7 @@ Handle<Derived> Dictionary<Derived, Shape, Key>::New( Isolate* isolate, int at_least_space_for, PretenureFlag pretenure) { - ASSERT(0 <= at_least_space_for); + DCHECK(0 <= at_least_space_for); Handle<Derived> dict = DerivedHashTable::New(isolate, at_least_space_for, USE_DEFAULT_MINIMUM_CAPACITY, @@ -15862,7 +15567,7 @@ Handle<Derived> Dictionary<Derived, Shape, Key>::Add( Handle<Object> value, PropertyDetails details) { // Valdate key is absent. - SLOW_ASSERT((dictionary->FindEntry(key) == Dictionary::kNotFound)); + SLOW_DCHECK((dictionary->FindEntry(key) == Dictionary::kNotFound)); // Check whether the dictionary should be extended. 
dictionary = EnsureCapacity(dictionary, 1, key); @@ -15894,7 +15599,7 @@ void Dictionary<Derived, Shape, Key>::AddEntry( dictionary->SetNextEnumerationIndex(index + 1); } dictionary->SetEntry(entry, k, value, details); - ASSERT((dictionary->KeyAt(entry)->IsNumber() || + DCHECK((dictionary->KeyAt(entry)->IsNumber() || dictionary->KeyAt(entry)->IsName())); dictionary->ElementAdded(); } @@ -15926,7 +15631,7 @@ Handle<SeededNumberDictionary> SeededNumberDictionary::AddNumberEntry( Handle<Object> value, PropertyDetails details) { dictionary->UpdateMaxNumberKey(key); - SLOW_ASSERT(dictionary->FindEntry(key) == kNotFound); + SLOW_DCHECK(dictionary->FindEntry(key) == kNotFound); return Add(dictionary, key, value, details); } @@ -15935,7 +15640,7 @@ Handle<UnseededNumberDictionary> UnseededNumberDictionary::AddNumberEntry( Handle<UnseededNumberDictionary> dictionary, uint32_t key, Handle<Object> value) { - SLOW_ASSERT(dictionary->FindEntry(key) == kNotFound); + SLOW_DCHECK(dictionary->FindEntry(key) == kNotFound); return Add(dictionary, key, value, PropertyDetails(NONE, NORMAL, 0)); } @@ -16021,7 +15726,7 @@ void Dictionary<Derived, Shape, Key>::CopyKeysTo( FixedArray* storage, PropertyAttributes filter, typename Dictionary<Derived, Shape, Key>::SortMode sort_mode) { - ASSERT(storage->length() >= NumberOfElementsFilterAttributes(filter)); + DCHECK(storage->length() >= NumberOfElementsFilterAttributes(filter)); int capacity = DerivedHashTable::Capacity(); int index = 0; for (int i = 0; i < capacity; i++) { @@ -16036,7 +15741,7 @@ void Dictionary<Derived, Shape, Key>::CopyKeysTo( if (sort_mode == Dictionary::SORTED) { storage->SortPairs(storage, index); } - ASSERT(storage->length() >= index); + DCHECK(storage->length() >= index); } @@ -16065,6 +15770,7 @@ void NameDictionary::CopyEnumKeysTo(FixedArray* storage) { if (properties == length) break; } } + CHECK_EQ(length, properties); EnumIndexComparator cmp(this); Smi** start = reinterpret_cast<Smi**>(storage->GetFirstElementAddress()); std::sort(start, start + length, cmp); @@ -16081,7 +15787,7 @@ void Dictionary<Derived, Shape, Key>::CopyKeysTo( int index, PropertyAttributes filter, typename Dictionary<Derived, Shape, Key>::SortMode sort_mode) { - ASSERT(storage->length() >= NumberOfElementsFilterAttributes(filter)); + DCHECK(storage->length() >= NumberOfElementsFilterAttributes(filter)); int capacity = DerivedHashTable::Capacity(); for (int i = 0; i < capacity; i++) { Object* k = DerivedHashTable::KeyAt(i); @@ -16095,7 +15801,7 @@ void Dictionary<Derived, Shape, Key>::CopyKeysTo( if (sort_mode == Dictionary::SORTED) { storage->SortPairs(storage, index); } - ASSERT(storage->length() >= index); + DCHECK(storage->length() >= index); } @@ -16120,7 +15826,7 @@ Object* Dictionary<Derived, Shape, Key>::SlowReverseLookup(Object* value) { Object* ObjectHashTable::Lookup(Handle<Object> key) { DisallowHeapAllocation no_gc; - ASSERT(IsKey(*key)); + DCHECK(IsKey(*key)); // If the object does not have an identity hash, it was never used as a key. Object* hash = key->GetHash(); @@ -16136,22 +15842,16 @@ Object* ObjectHashTable::Lookup(Handle<Object> key) { Handle<ObjectHashTable> ObjectHashTable::Put(Handle<ObjectHashTable> table, Handle<Object> key, Handle<Object> value) { - ASSERT(table->IsKey(*key)); + DCHECK(table->IsKey(*key)); + DCHECK(!value->IsTheHole()); Isolate* isolate = table->GetIsolate(); // Make sure the key object has an identity hash code. 
- Handle<Object> hash = Object::GetOrCreateHash(key, isolate); + Handle<Smi> hash = Object::GetOrCreateHash(isolate, key); int entry = table->FindEntry(key); - // Check whether to perform removal operation. - if (value->IsTheHole()) { - if (entry == kNotFound) return table; - table->RemoveEntry(entry); - return Shrink(table, key); - } - // Key is already in table, just overwrite value. if (entry != kNotFound) { table->set(EntryToIndex(entry) + 1, *value); @@ -16160,13 +15860,36 @@ Handle<ObjectHashTable> ObjectHashTable::Put(Handle<ObjectHashTable> table, // Check whether the hash table should be extended. table = EnsureCapacity(table, 1, key); - table->AddEntry(table->FindInsertionEntry(Handle<Smi>::cast(hash)->value()), + table->AddEntry(table->FindInsertionEntry(hash->value()), *key, *value); return table; } +Handle<ObjectHashTable> ObjectHashTable::Remove(Handle<ObjectHashTable> table, + Handle<Object> key, + bool* was_present) { + DCHECK(table->IsKey(*key)); + + Object* hash = key->GetHash(); + if (hash->IsUndefined()) { + *was_present = false; + return table; + } + + int entry = table->FindEntry(key); + if (entry == kNotFound) { + *was_present = false; + return table; + } + + *was_present = true; + table->RemoveEntry(entry); + return Shrink(table, key); +} + + void ObjectHashTable::AddEntry(int entry, Object* key, Object* value) { set(EntryToIndex(entry), key); set(EntryToIndex(entry) + 1, value); @@ -16183,7 +15906,7 @@ void ObjectHashTable::RemoveEntry(int entry) { Object* WeakHashTable::Lookup(Handle<Object> key) { DisallowHeapAllocation no_gc; - ASSERT(IsKey(*key)); + DCHECK(IsKey(*key)); int entry = FindEntry(key); if (entry == kNotFound) return GetHeap()->the_hole_value(); return get(EntryToValueIndex(entry)); @@ -16193,7 +15916,7 @@ Object* WeakHashTable::Lookup(Handle<Object> key) { Handle<WeakHashTable> WeakHashTable::Put(Handle<WeakHashTable> table, Handle<Object> key, Handle<Object> value) { - ASSERT(table->IsKey(*key)); + DCHECK(table->IsKey(*key)); int entry = table->FindEntry(key); // Key is already in table, just overwrite value. 
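// Behavioral change worth noting in the Put/Remove hunks above: Put() no
// longer interprets a hole value as a deletion request (hence the new
// DCHECK(!value->IsTheHole())), and removal gets a dedicated Remove() that
// reports presence through an out-parameter. A standalone sketch of the
// resulting API shape, modeled with std::unordered_map rather than V8's
// real table types.
#include <unordered_map>

using Table = std::unordered_map<int, int>;

// Insert or overwrite; never deletes, regardless of the value stored.
void Put(Table& table, int key, int value) {
  table[key] = value;
}

// Deletion is its own operation and reports whether the key was present,
// mirroring the bool* was_present out-parameter above.
void Remove(Table& table, int key, bool* was_present) {
  *was_present = table.erase(key) == 1;
}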
if (entry != kNotFound) { @@ -16249,7 +15972,6 @@ Handle<Derived> OrderedHashTable<Derived, Iterator, entrysize>::Allocate( table->SetNumberOfBuckets(num_buckets); table->SetNumberOfElements(0); table->SetNumberOfDeletedElements(0); - table->set_iterators(isolate->heap()->undefined_value()); return table; } @@ -16257,6 +15979,8 @@ Handle<Derived> OrderedHashTable<Derived, Iterator, entrysize>::Allocate( template<class Derived, class Iterator, int entrysize> Handle<Derived> OrderedHashTable<Derived, Iterator, entrysize>::EnsureGrowable( Handle<Derived> table) { + DCHECK(!table->IsObsolete()); + int nof = table->NumberOfElements(); int nod = table->NumberOfDeletedElements(); int capacity = table->Capacity(); @@ -16271,9 +15995,11 @@ Handle<Derived> OrderedHashTable<Derived, Iterator, entrysize>::EnsureGrowable( template<class Derived, class Iterator, int entrysize> Handle<Derived> OrderedHashTable<Derived, Iterator, entrysize>::Shrink( Handle<Derived> table) { + DCHECK(!table->IsObsolete()); + int nof = table->NumberOfElements(); int capacity = table->Capacity(); - if (nof > (capacity >> 2)) return table; + if (nof >= (capacity >> 2)) return table; return Rehash(table, capacity / 2); } @@ -16281,29 +16007,39 @@ Handle<Derived> OrderedHashTable<Derived, Iterator, entrysize>::Shrink( template<class Derived, class Iterator, int entrysize> Handle<Derived> OrderedHashTable<Derived, Iterator, entrysize>::Clear( Handle<Derived> table) { + DCHECK(!table->IsObsolete()); + Handle<Derived> new_table = Allocate(table->GetIsolate(), kMinCapacity, table->GetHeap()->InNewSpace(*table) ? NOT_TENURED : TENURED); - new_table->set_iterators(table->iterators()); - table->set_iterators(table->GetHeap()->undefined_value()); - - DisallowHeapAllocation no_allocation; - for (Object* object = new_table->iterators(); - !object->IsUndefined(); - object = Iterator::cast(object)->next_iterator()) { - Iterator::cast(object)->TableCleared(); - Iterator::cast(object)->set_table(*new_table); - } + table->SetNextTable(*new_table); + table->SetNumberOfDeletedElements(-1); return new_table; } +template<class Derived, class Iterator, int entrysize> +Handle<Derived> OrderedHashTable<Derived, Iterator, entrysize>::Remove( + Handle<Derived> table, Handle<Object> key, bool* was_present) { + int entry = table->FindEntry(key); + if (entry == kNotFound) { + *was_present = false; + return table; + } + *was_present = true; + table->RemoveEntry(entry); + return Shrink(table); +} + + template<class Derived, class Iterator, int entrysize> Handle<Derived> OrderedHashTable<Derived, Iterator, entrysize>::Rehash( Handle<Derived> table, int new_capacity) { + DCHECK(!table->IsObsolete()); + Handle<Derived> new_table = Allocate(table->GetIsolate(), new_capacity, @@ -16312,9 +16048,15 @@ Handle<Derived> OrderedHashTable<Derived, Iterator, entrysize>::Rehash( int nod = table->NumberOfDeletedElements(); int new_buckets = new_table->NumberOfBuckets(); int new_entry = 0; + int removed_holes_index = 0; + for (int old_entry = 0; old_entry < (nof + nod); ++old_entry) { Object* key = table->KeyAt(old_entry); - if (key->IsTheHole()) continue; + if (key->IsTheHole()) { + table->SetRemovedIndexAt(removed_holes_index++, old_entry); + continue; + } + Object* hash = key->GetHash(); int bucket = Smi::cast(hash)->value() & (new_buckets - 1); Object* chain_entry = new_table->get(kHashTableStartIndex + bucket); @@ -16328,43 +16070,47 @@ Handle<Derived> OrderedHashTable<Derived, Iterator, entrysize>::Rehash( new_table->set(new_index + kChainOffset, chain_entry); 
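// The Shrink() tweak above (> became >=) keeps a table at exactly
// one-quarter load from being rehashed into half the space. A standalone
// sketch of the shrink policy as written, with plain ints standing in for
// the table's bookkeeping fields.
//
// Returns the new capacity after a removal, or the old capacity when no
// shrink is needed. Mirrors: if (nof >= (capacity >> 2)) keep; else halve.
int CapacityAfterRemoval(int nof, int capacity) {
  if (nof >= (capacity >> 2)) return capacity;  // still at least 1/4 full
  return capacity / 2;                          // compact into half the space
}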
++new_entry; } - new_table->SetNumberOfElements(nof); - new_table->set_iterators(table->iterators()); - table->set_iterators(table->GetHeap()->undefined_value()); + DCHECK_EQ(nod, removed_holes_index); - DisallowHeapAllocation no_allocation; - for (Object* object = new_table->iterators(); - !object->IsUndefined(); - object = Iterator::cast(object)->next_iterator()) { - Iterator::cast(object)->TableCompacted(); - Iterator::cast(object)->set_table(*new_table); - } + new_table->SetNumberOfElements(nof); + table->SetNextTable(*new_table); return new_table; } -template<class Derived, class Iterator, int entrysize> +template <class Derived, class Iterator, int entrysize> int OrderedHashTable<Derived, Iterator, entrysize>::FindEntry( - Handle<Object> key) { + Handle<Object> key, int hash) { + DCHECK(!IsObsolete()); + DisallowHeapAllocation no_gc; - ASSERT(!key->IsTheHole()); - Object* hash = key->GetHash(); - if (hash->IsUndefined()) return kNotFound; - for (int entry = HashToEntry(Smi::cast(hash)->value()); - entry != kNotFound; + DCHECK(!key->IsTheHole()); + for (int entry = HashToEntry(hash); entry != kNotFound; entry = ChainAt(entry)) { Object* candidate = KeyAt(entry); - if (candidate->SameValue(*key)) + if (candidate->SameValueZero(*key)) return entry; } return kNotFound; } -template<class Derived, class Iterator, int entrysize> +template <class Derived, class Iterator, int entrysize> +int OrderedHashTable<Derived, Iterator, entrysize>::FindEntry( + Handle<Object> key) { + DisallowHeapAllocation no_gc; + Object* hash = key->GetHash(); + if (!hash->IsSmi()) return kNotFound; + return FindEntry(key, Smi::cast(hash)->value()); +} + + +template <class Derived, class Iterator, int entrysize> int OrderedHashTable<Derived, Iterator, entrysize>::AddEntry(int hash) { + DCHECK(!IsObsolete()); + int entry = UsedCapacity(); int bucket = HashToBucket(hash); int index = EntryToIndex(entry); @@ -16378,19 +16124,14 @@ int OrderedHashTable<Derived, Iterator, entrysize>::AddEntry(int hash) { template<class Derived, class Iterator, int entrysize> void OrderedHashTable<Derived, Iterator, entrysize>::RemoveEntry(int entry) { + DCHECK(!IsObsolete()); + int index = EntryToIndex(entry); for (int i = 0; i < entrysize; ++i) { set_the_hole(index + i); } SetNumberOfElements(NumberOfElements() - 1); SetNumberOfDeletedElements(NumberOfDeletedElements() + 1); - - DisallowHeapAllocation no_allocation; - for (Object* object = iterators(); - !object->IsUndefined(); - object = Iterator::cast(object)->next_iterator()) { - Iterator::cast(object)->EntryRemoved(entry); - } } @@ -16410,8 +16151,13 @@ template Handle<OrderedHashSet> OrderedHashTable<OrderedHashSet, JSSetIterator, 1>::Clear( Handle<OrderedHashSet> table); -template int -OrderedHashTable<OrderedHashSet, JSSetIterator, 1>::FindEntry( +template Handle<OrderedHashSet> +OrderedHashTable<OrderedHashSet, JSSetIterator, 1>::Remove( + Handle<OrderedHashSet> table, Handle<Object> key, bool* was_present); + +template int OrderedHashTable<OrderedHashSet, JSSetIterator, 1>::FindEntry( + Handle<Object> key, int hash); +template int OrderedHashTable<OrderedHashSet, JSSetIterator, 1>::FindEntry( Handle<Object> key); template int @@ -16437,8 +16183,13 @@ template Handle<OrderedHashMap> OrderedHashTable<OrderedHashMap, JSMapIterator, 2>::Clear( Handle<OrderedHashMap> table); -template int -OrderedHashTable<OrderedHashMap, JSMapIterator, 2>::FindEntry( +template Handle<OrderedHashMap> +OrderedHashTable<OrderedHashMap, JSMapIterator, 2>::Remove( + Handle<OrderedHashMap> table, 
Handle<Object> key, bool* was_present); + +template int OrderedHashTable<OrderedHashMap, JSMapIterator, 2>::FindEntry( + Handle<Object> key, int hash); +template int OrderedHashTable<OrderedHashMap, JSMapIterator, 2>::FindEntry( Handle<Object> key); template int @@ -16455,26 +16206,17 @@ bool OrderedHashSet::Contains(Handle<Object> key) { Handle<OrderedHashSet> OrderedHashSet::Add(Handle<OrderedHashSet> table, Handle<Object> key) { - if (table->FindEntry(key) != kNotFound) return table; + int hash = GetOrCreateHash(table->GetIsolate(), key)->value(); + if (table->FindEntry(key, hash) != kNotFound) return table; table = EnsureGrowable(table); - Handle<Object> hash = GetOrCreateHash(key, table->GetIsolate()); - int index = table->AddEntry(Smi::cast(*hash)->value()); + int index = table->AddEntry(hash); table->set(index, *key); return table; } -Handle<OrderedHashSet> OrderedHashSet::Remove(Handle<OrderedHashSet> table, - Handle<Object> key) { - int entry = table->FindEntry(key); - if (entry == kNotFound) return table; - table->RemoveEntry(entry); - return Shrink(table); -} - - Object* OrderedHashMap::Lookup(Handle<Object> key) { DisallowHeapAllocation no_gc; int entry = FindEntry(key); @@ -16486,13 +16228,10 @@ Object* OrderedHashMap::Lookup(Handle<Object> key) { Handle<OrderedHashMap> OrderedHashMap::Put(Handle<OrderedHashMap> table, Handle<Object> key, Handle<Object> value) { - int entry = table->FindEntry(key); + DCHECK(!key->IsTheHole()); - if (value->IsTheHole()) { - if (entry == kNotFound) return table; - table->RemoveEntry(entry); - return Shrink(table); - } + int hash = GetOrCreateHash(table->GetIsolate(), key)->value(); + int entry = table->FindEntry(key, hash); if (entry != kNotFound) { table->set(table->EntryToIndex(entry) + kValueOffset, *value); @@ -16501,8 +16240,7 @@ Handle<OrderedHashMap> OrderedHashMap::Put(Handle<OrderedHashMap> table, table = EnsureGrowable(table); - Handle<Object> hash = GetOrCreateHash(key, table->GetIsolate()); - int index = table->AddEntry(Smi::cast(*hash)->value()); + int index = table->AddEntry(hash); table->set(index, *key); table->set(index + kValueOffset, *value); return table; @@ -16510,218 +16248,108 @@ Handle<OrderedHashMap> OrderedHashMap::Put(Handle<OrderedHashMap> table, template<class Derived, class TableType> -void OrderedHashTableIterator<Derived, TableType>::EntryRemoved(int index) { - int i = this->index()->value(); - if (index < i) { - set_count(Smi::FromInt(count()->value() - 1)); - } - if (index == i) { - Seek(); - } -} - - -template<class Derived, class TableType> -void OrderedHashTableIterator<Derived, TableType>::Close() { - if (Closed()) return; - +void OrderedHashTableIterator<Derived, TableType>::Transition() { DisallowHeapAllocation no_allocation; - - Object* undefined = GetHeap()->undefined_value(); TableType* table = TableType::cast(this->table()); - Object* previous = previous_iterator(); - Object* next = next_iterator(); + if (!table->IsObsolete()) return; - if (previous == undefined) { - ASSERT_EQ(table->iterators(), this); - table->set_iterators(next); - } else { - ASSERT_EQ(Derived::cast(previous)->next_iterator(), this); - Derived::cast(previous)->set_next_iterator(next); - } + int index = Smi::cast(this->index())->value(); + while (table->IsObsolete()) { + TableType* next_table = table->NextTable(); + + if (index > 0) { + int nod = table->NumberOfDeletedElements(); + + // When we clear the table we set the number of deleted elements to -1. 
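// The loop this code sits in adjusts an iterator's position across one
// compaction step: every entry removed at a smaller index shifts the
// position down by one, and a cleared table (nod == -1) resets it to zero.
// A standalone sketch of that fix-up, assuming removed indices are recorded
// in ascending order, as Rehash() above does.
#include <vector>

// Maps an entry index in an obsolete table to the corresponding index in
// its compacted successor. removed_indices holds the entries deleted before
// the compaction, in ascending order; the cleared-table case is handled by
// the caller resetting the index to zero.
int AdjustIndexAfterCompaction(int index,
                               const std::vector<int>& removed_indices) {
  int adjusted = index;
  for (int removed : removed_indices) {
    if (removed >= index) break;  // removals at or past us don't shift us
    --adjusted;
  }
  return adjusted;
}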
+ if (nod == -1) { + index = 0; + } else { + int old_index = index; + for (int i = 0; i < nod; ++i) { + int removed_index = table->RemovedIndexAt(i); + if (removed_index >= old_index) break; + --index; + } + } + } - if (!next->IsUndefined()) { - ASSERT_EQ(Derived::cast(next)->previous_iterator(), this); - Derived::cast(next)->set_previous_iterator(previous); + table = next_table; } - set_previous_iterator(undefined); - set_next_iterator(undefined); - set_table(undefined); + set_table(table); + set_index(Smi::FromInt(index)); } template<class Derived, class TableType> -void OrderedHashTableIterator<Derived, TableType>::Seek() { - ASSERT(!Closed()); - +bool OrderedHashTableIterator<Derived, TableType>::HasMore() { DisallowHeapAllocation no_allocation; + if (this->table()->IsUndefined()) return false; - int index = this->index()->value(); + Transition(); TableType* table = TableType::cast(this->table()); + int index = Smi::cast(this->index())->value(); int used_capacity = table->UsedCapacity(); while (index < used_capacity && table->KeyAt(index)->IsTheHole()) { index++; } - set_index(Smi::FromInt(index)); -} + set_index(Smi::FromInt(index)); -template<class Derived, class TableType> -void OrderedHashTableIterator<Derived, TableType>::MoveNext() { - ASSERT(!Closed()); + if (index < used_capacity) return true; - set_index(Smi::FromInt(index()->value() + 1)); - set_count(Smi::FromInt(count()->value() + 1)); - Seek(); + set_table(GetHeap()->undefined_value()); + return false; } template<class Derived, class TableType> -Handle<JSObject> OrderedHashTableIterator<Derived, TableType>::Next( - Handle<Derived> iterator) { - Isolate* isolate = iterator->GetIsolate(); - Factory* factory = isolate->factory(); - - Handle<Object> object(iterator->table(), isolate); - - if (!object->IsUndefined()) { - Handle<TableType> table = Handle<TableType>::cast(object); - int index = iterator->index()->value(); - if (index < table->UsedCapacity()) { - int entry_index = table->EntryToIndex(index); - iterator->MoveNext(); - Handle<Object> value = Derived::ValueForKind(iterator, entry_index); - return factory->NewIteratorResultObject(value, false); - } else { - iterator->Close(); - } +Smi* OrderedHashTableIterator<Derived, TableType>::Next(JSArray* value_array) { + DisallowHeapAllocation no_allocation; + if (HasMore()) { + FixedArray* array = FixedArray::cast(value_array->elements()); + static_cast<Derived*>(this)->PopulateValueArray(array); + MoveNext(); + return kind(); } - - return factory->NewIteratorResultObject(factory->undefined_value(), true); + return Smi::FromInt(0); } -template<class Derived, class TableType> -Handle<Derived> OrderedHashTableIterator<Derived, TableType>::CreateInternal( - Handle<Map> map, - Handle<TableType> table, - int kind) { - Isolate* isolate = table->GetIsolate(); - - Handle<Object> undefined = isolate->factory()->undefined_value(); - - Handle<Derived> new_iterator = Handle<Derived>::cast( - isolate->factory()->NewJSObjectFromMap(map)); - new_iterator->set_previous_iterator(*undefined); - new_iterator->set_table(*table); - new_iterator->set_index(Smi::FromInt(0)); - new_iterator->set_count(Smi::FromInt(0)); - new_iterator->set_kind(Smi::FromInt(kind)); - - Handle<Object> old_iterator(table->iterators(), isolate); - if (!old_iterator->IsUndefined()) { - Handle<Derived>::cast(old_iterator)->set_previous_iterator(*new_iterator); - new_iterator->set_next_iterator(*old_iterator); - } else { - new_iterator->set_next_iterator(*undefined); - } - - table->set_iterators(*new_iterator); - - return 
new_iterator; -} - +template Smi* +OrderedHashTableIterator<JSSetIterator, OrderedHashSet>::Next( + JSArray* value_array); -template void -OrderedHashTableIterator<JSSetIterator, OrderedHashSet>::EntryRemoved( - int index); +template bool +OrderedHashTableIterator<JSSetIterator, OrderedHashSet>::HasMore(); template void -OrderedHashTableIterator<JSSetIterator, OrderedHashSet>::Close(); - -template Handle<JSObject> -OrderedHashTableIterator<JSSetIterator, OrderedHashSet>::Next( - Handle<JSSetIterator> iterator); - -template Handle<JSSetIterator> -OrderedHashTableIterator<JSSetIterator, OrderedHashSet>::CreateInternal( - Handle<Map> map, Handle<OrderedHashSet> table, int kind); +OrderedHashTableIterator<JSSetIterator, OrderedHashSet>::MoveNext(); +template Object* +OrderedHashTableIterator<JSSetIterator, OrderedHashSet>::CurrentKey(); template void -OrderedHashTableIterator<JSMapIterator, OrderedHashMap>::EntryRemoved( - int index); +OrderedHashTableIterator<JSSetIterator, OrderedHashSet>::Transition(); -template void -OrderedHashTableIterator<JSMapIterator, OrderedHashMap>::Close(); -template Handle<JSObject> +template Smi* OrderedHashTableIterator<JSMapIterator, OrderedHashMap>::Next( - Handle<JSMapIterator> iterator); - -template Handle<JSMapIterator> -OrderedHashTableIterator<JSMapIterator, OrderedHashMap>::CreateInternal( - Handle<Map> map, Handle<OrderedHashMap> table, int kind); - - -Handle<Object> JSSetIterator::ValueForKind( - Handle<JSSetIterator> iterator, int entry_index) { - int kind = iterator->kind()->value(); - // Set.prototype only has values and entries. - ASSERT(kind == kKindValues || kind == kKindEntries); - - Isolate* isolate = iterator->GetIsolate(); - Factory* factory = isolate->factory(); - - Handle<OrderedHashSet> table( - OrderedHashSet::cast(iterator->table()), isolate); - Handle<Object> value = Handle<Object>(table->get(entry_index), isolate); - - if (kind == kKindEntries) { - Handle<FixedArray> array = factory->NewFixedArray(2); - array->set(0, *value); - array->set(1, *value); - return factory->NewJSArrayWithElements(array); - } - - return value; -} - - -Handle<Object> JSMapIterator::ValueForKind( - Handle<JSMapIterator> iterator, int entry_index) { - int kind = iterator->kind()->value(); - ASSERT(kind == kKindKeys || kind == kKindValues || kind == kKindEntries); + JSArray* value_array); - Isolate* isolate = iterator->GetIsolate(); - Factory* factory = isolate->factory(); - - Handle<OrderedHashMap> table( - OrderedHashMap::cast(iterator->table()), isolate); - - switch (kind) { - case kKindKeys: - return Handle<Object>(table->get(entry_index), isolate); +template bool +OrderedHashTableIterator<JSMapIterator, OrderedHashMap>::HasMore(); - case kKindValues: - return Handle<Object>(table->get(entry_index + 1), isolate); +template void +OrderedHashTableIterator<JSMapIterator, OrderedHashMap>::MoveNext(); - case kKindEntries: { - Handle<Object> key(table->get(entry_index), isolate); - Handle<Object> value(table->get(entry_index + 1), isolate); - Handle<FixedArray> array = factory->NewFixedArray(2); - array->set(0, *key); - array->set(1, *value); - return factory->NewJSArrayWithElements(array); - } - } +template Object* +OrderedHashTableIterator<JSMapIterator, OrderedHashMap>::CurrentKey(); - UNREACHABLE(); - return factory->undefined_value(); -} +template void +OrderedHashTableIterator<JSMapIterator, OrderedHashMap>::Transition(); DeclaredAccessorDescriptorIterator::DeclaredAccessorDescriptorIterator( @@ -16734,13 +16362,13 @@ 
DeclaredAccessorDescriptorIterator::DeclaredAccessorDescriptorIterator( const DeclaredAccessorDescriptorData* DeclaredAccessorDescriptorIterator::Next() { - ASSERT(offset_ < length_); + DCHECK(offset_ < length_); uint8_t* ptr = &array_[offset_]; - ASSERT(reinterpret_cast<uintptr_t>(ptr) % sizeof(uintptr_t) == 0); + DCHECK(reinterpret_cast<uintptr_t>(ptr) % sizeof(uintptr_t) == 0); const DeclaredAccessorDescriptorData* data = reinterpret_cast<const DeclaredAccessorDescriptorData*>(ptr); offset_ += sizeof(*data); - ASSERT(offset_ <= length_); + DCHECK(offset_ <= length_); return data; } @@ -16764,10 +16392,10 @@ Handle<DeclaredAccessorDescriptor> DeclaredAccessorDescriptor::Create( if (previous_length != 0) { uint8_t* previous_array = previous->serialized_data()->GetDataStartAddress(); - OS::MemCopy(array, previous_array, previous_length); + MemCopy(array, previous_array, previous_length); array += previous_length; } - ASSERT(reinterpret_cast<uintptr_t>(array) % sizeof(uintptr_t) == 0); + DCHECK(reinterpret_cast<uintptr_t>(array) % sizeof(uintptr_t) == 0); DeclaredAccessorDescriptorData* data = reinterpret_cast<DeclaredAccessorDescriptorData*>(array); *data = descriptor; @@ -16843,7 +16471,7 @@ void DebugInfo::SetBreakPoint(Handle<DebugInfo> debug_info, Handle<FixedArray> new_break_points = isolate->factory()->NewFixedArray( old_break_points->length() + - Debug::kEstimatedNofBreakPointsInFunction); + DebugInfo::kEstimatedNofBreakPointsInFunction); debug_info->set_break_points(*new_break_points); for (int i = 0; i < old_break_points->length(); i++) { @@ -16851,7 +16479,7 @@ void DebugInfo::SetBreakPoint(Handle<DebugInfo> debug_info, } index = old_break_points->length(); } - ASSERT(index != kNoBreakPointInfo); + DCHECK(index != kNoBreakPointInfo); // Allocate new BreakPointInfo object and set the break point. Handle<BreakPointInfo> new_break_point_info = Handle<BreakPointInfo>::cast( @@ -16943,7 +16571,7 @@ void BreakPointInfo::ClearBreakPoint(Handle<BreakPointInfo> break_point_info, return; } // If there are multiple break points shrink the array - ASSERT(break_point_info->break_point_objects()->IsFixedArray()); + DCHECK(break_point_info->break_point_objects()->IsFixedArray()); Handle<FixedArray> old_array = Handle<FixedArray>( FixedArray::cast(break_point_info->break_point_objects())); @@ -16952,7 +16580,7 @@ void BreakPointInfo::ClearBreakPoint(Handle<BreakPointInfo> break_point_info, int found_count = 0; for (int i = 0; i < old_array->length(); i++) { if (old_array->get(i) == *break_point_object) { - ASSERT(found_count == 0); + DCHECK(found_count == 0); found_count++; } else { new_array->set(i - found_count, old_array->get(i)); @@ -17038,7 +16666,7 @@ Object* JSDate::GetField(Object* object, Smi* index) { Object* JSDate::DoGetField(FieldIndex index) { - ASSERT(index != kDateValue); + DCHECK(index != kDateValue); DateCache* date_cache = GetIsolate()->date_cache(); @@ -17048,7 +16676,7 @@ Object* JSDate::DoGetField(FieldIndex index) { // Since the stamp is not NaN, the value is also not NaN. 
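// For reference, the DaysFromTime/TimeInDay pair used throughout the JSDate
// hunks below implements ECMA-262's Day(t) and TimeWithinDay(t): a floor
// division (not C++ truncation, which rounds toward zero) and the matching
// non-negative remainder. A minimal standalone sketch; V8's DateCache
// layers caching on top of this arithmetic.
#include <cstdint>

const int64_t kMsPerDay = 24 * 60 * 60 * 1000;

// ECMA-262 Day(t): floor division, so pre-epoch (negative) times work.
int DaysFromTime(int64_t time_ms) {
  int64_t days = time_ms / kMsPerDay;
  if (time_ms % kMsPerDay < 0) --days;  // fix C++ truncation toward zero
  return static_cast<int>(days);
}

// ECMA-262 TimeWithinDay(t): always in [0, kMsPerDay).
int TimeInDay(int64_t time_ms, int days) {
  return static_cast<int>(time_ms - days * kMsPerDay);
}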
int64_t local_time_ms = date_cache->ToLocal(static_cast<int64_t>(value()->Number())); - SetLocalFields(local_time_ms, date_cache); + SetCachedFields(local_time_ms, date_cache); } switch (index) { case kYear: return year(); @@ -17076,7 +16704,7 @@ Object* JSDate::DoGetField(FieldIndex index) { int time_in_day_ms = DateCache::TimeInDay(local_time_ms, days); if (index == kMillisecond) return Smi::FromInt(time_in_day_ms % 1000); - ASSERT(index == kTimeInDay); + DCHECK(index == kTimeInDay); return Smi::FromInt(time_in_day_ms); } @@ -17084,7 +16712,7 @@ Object* JSDate::DoGetField(FieldIndex index) { Object* JSDate::GetUTCField(FieldIndex index, double value, DateCache* date_cache) { - ASSERT(index >= kFirstUTCField); + DCHECK(index >= kFirstUTCField); if (std::isnan(value)) return GetIsolate()->heap()->nan_value(); @@ -17103,7 +16731,7 @@ Object* JSDate::GetUTCField(FieldIndex index, date_cache->YearMonthDayFromDays(days, &year, &month, &day); if (index == kYearUTC) return Smi::FromInt(year); if (index == kMonthUTC) return Smi::FromInt(month); - ASSERT(index == kDayUTC); + DCHECK(index == kDayUTC); return Smi::FromInt(day); } @@ -17141,7 +16769,7 @@ void JSDate::SetValue(Object* value, bool is_value_nan) { } -void JSDate::SetLocalFields(int64_t local_time_ms, DateCache* date_cache) { +void JSDate::SetCachedFields(int64_t local_time_ms, DateCache* date_cache) { int days = DateCache::DaysFromTime(local_time_ms); int time_in_day_ms = DateCache::TimeInDay(local_time_ms, days); int year, month, day; @@ -17162,7 +16790,7 @@ void JSDate::SetLocalFields(int64_t local_time_ms, DateCache* date_cache) { void JSArrayBuffer::Neuter() { - ASSERT(is_external()); + DCHECK(is_external()); set_backing_store(NULL); set_byte_length(Smi::FromInt(0)); } @@ -17207,7 +16835,7 @@ Handle<JSArrayBuffer> JSTypedArray::MaterializeArrayBuffer( Handle<Map> map(typed_array->map()); Isolate* isolate = typed_array->GetIsolate(); - ASSERT(IsFixedTypedArrayElementsKind(map->elements_kind())); + DCHECK(IsFixedTypedArrayElementsKind(map->elements_kind())); Handle<Map> new_map = Map::TransitionElementsTo( map, @@ -17227,7 +16855,7 @@ Handle<JSArrayBuffer> JSTypedArray::MaterializeArrayBuffer( static_cast<uint8_t*>(buffer->backing_store())); buffer->set_weak_first_view(*typed_array); - ASSERT(typed_array->weak_next() == isolate->heap()->undefined_value()); + DCHECK(typed_array->weak_next() == isolate->heap()->undefined_value()); typed_array->set_buffer(*buffer); JSObject::SetMapAndElements(typed_array, new_map, new_elements); @@ -17238,7 +16866,7 @@ Handle<JSArrayBuffer> JSTypedArray::MaterializeArrayBuffer( Handle<JSArrayBuffer> JSTypedArray::GetBuffer() { Handle<Object> result(buffer(), GetIsolate()); if (*result != Smi::FromInt(0)) { - ASSERT(IsExternalArrayElementsKind(map()->elements_kind())); + DCHECK(IsExternalArrayElementsKind(map()->elements_kind())); return Handle<JSArrayBuffer>::cast(result); } Handle<JSTypedArray> self(this); @@ -17252,7 +16880,7 @@ HeapType* PropertyCell::type() { void PropertyCell::set_type(HeapType* type, WriteBarrierMode ignored) { - ASSERT(IsPropertyCell()); + DCHECK(IsPropertyCell()); set_type_raw(type, ignored); } @@ -17261,14 +16889,9 @@ Handle<HeapType> PropertyCell::UpdatedType(Handle<PropertyCell> cell, Handle<Object> value) { Isolate* isolate = cell->GetIsolate(); Handle<HeapType> old_type(cell->type(), isolate); - // TODO(2803): Do not track ConsString as constant because they cannot be - // embedded into code. - Handle<HeapType> new_type = value->IsConsString() || value->IsTheHole() - ? 
HeapType::Any(isolate) : HeapType::Constant(value, isolate); + Handle<HeapType> new_type = HeapType::Constant(value, isolate); - if (new_type->Is(old_type)) { - return old_type; - } + if (new_type->Is(old_type)) return old_type; cell->dependent_code()->DeoptimizeDependentCodeGroup( isolate, DependentCode::kPropertyCellChangedGroup); @@ -17305,7 +16928,7 @@ void PropertyCell::AddDependentCompilationInfo(Handle<PropertyCell> cell, const char* GetBailoutReason(BailoutReason reason) { - ASSERT(reason < kLastErrorMessage); + DCHECK(reason < kLastErrorMessage); #define ERROR_MESSAGES_TEXTS(C, T) T, static const char* error_messages_[] = { ERROR_MESSAGES_LIST(ERROR_MESSAGES_TEXTS) diff --git a/deps/v8/src/objects.h b/deps/v8/src/objects.h index ab974e3ee4d..2bb47e80f54 100644 --- a/deps/v8/src/objects.h +++ b/deps/v8/src/objects.h @@ -5,24 +5,28 @@ #ifndef V8_OBJECTS_H_ #define V8_OBJECTS_H_ -#include "allocation.h" -#include "assert-scope.h" -#include "builtins.h" -#include "elements-kind.h" -#include "flags.h" -#include "list.h" -#include "property-details.h" -#include "smart-pointers.h" -#include "unicode-inl.h" -#if V8_TARGET_ARCH_ARM64 -#include "arm64/constants-arm64.h" -#elif V8_TARGET_ARCH_ARM -#include "arm/constants-arm.h" +#include "src/allocation.h" +#include "src/assert-scope.h" +#include "src/builtins.h" +#include "src/checks.h" +#include "src/elements-kind.h" +#include "src/field-index.h" +#include "src/flags.h" +#include "src/list.h" +#include "src/property-details.h" +#include "src/smart-pointers.h" +#include "src/unicode-inl.h" +#include "src/zone.h" + +#if V8_TARGET_ARCH_ARM +#include "src/arm/constants-arm.h" // NOLINT +#elif V8_TARGET_ARCH_ARM64 +#include "src/arm64/constants-arm64.h" // NOLINT #elif V8_TARGET_ARCH_MIPS -#include "mips/constants-mips.h" +#include "src/mips/constants-mips.h" // NOLINT +#elif V8_TARGET_ARCH_MIPS64 +#include "src/mips64/constants-mips64.h" // NOLINT #endif -#include "v8checks.h" -#include "zone.h" // @@ -39,8 +43,9 @@ // - JSArrayBufferView // - JSTypedArray // - JSDataView -// - JSSet -// - JSMap +// - JSCollection +// - JSSet +// - JSMap // - JSSetIterator // - JSMapIterator // - JSWeakCollection @@ -140,6 +145,8 @@ namespace v8 { namespace internal { +class OStream; + enum KeyedAccessStoreMode { STANDARD_STORE, STORE_TRANSITION_SMI_TO_OBJECT, @@ -166,6 +173,12 @@ enum ContextualMode { }; +enum MutableMode { + MUTABLE, + IMMUTABLE +}; + + static const int kGrowICDelta = STORE_AND_GROW_NO_TRANSITION - STANDARD_STORE; STATIC_ASSERT(STANDARD_STORE == 0); @@ -234,12 +247,12 @@ enum PropertyNormalizationMode { }; -// NormalizedMapSharingMode is used to specify whether a map may be shared -// by different objects with normalized properties. -enum NormalizedMapSharingMode { - UNIQUE_NORMALIZED_MAP, - SHARED_NORMALIZED_MAP -}; +// Indicates how aggressively the prototype should be optimized. FAST_PROTOTYPE +// will give the fastest result by tailoring the map to the prototype, but that +// will cause polymorphism with other objects. REGULAR_PROTOTYPE is to be used +// (at least for now) when dynamically modifying the prototype chain of an +// object using __proto__ or Object.setPrototypeOf. +enum PrototypeOptimizationMode { REGULAR_PROTOTYPE, FAST_PROTOTYPE }; // Indicates whether transitions can be added to a source map or not. @@ -292,8 +305,10 @@ static const ExtraICState kNoExtraICState = 0; // Instance size sentinel for objects of variable size. 
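// The new MutableMode enum above pairs with the MUTABLE_HEAP_NUMBER_TYPE
// added further down: a mutable heap number is a private double box owned by
// a single object field, so in-place writes to it cannot be observed through
// a shared immutable number. A standalone sketch of the idea; the types here
// are illustrative stand-ins, not V8's.
struct DoubleBox {
  double value;
};

struct Field {
  DoubleBox* box;  // the MUTABLE box, owned by this one field

  // In-place update: nothing else holds this box, so the write is private.
  void StoreDouble(double v) { box->value = v; }

  // Reading out to script copies into a fresh IMMUTABLE box, so callers
  // never alias the mutable one.
  DoubleBox LoadBoxed() const { return DoubleBox{box->value}; }
};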
const int kVariableSizeSentinel = 0; +// We may store the unsigned bit field as signed Smi value and do not +// use the sign bit. const int kStubMajorKeyBits = 7; -const int kStubMinorKeyBits = kBitsPerInt - kSmiTagSize - kStubMajorKeyBits; +const int kStubMinorKeyBits = kSmiValueSize - kStubMajorKeyBits - 1; // All Maps have a field instance_type containing a InstanceType. // It describes the type of the instances. @@ -350,6 +365,7 @@ const int kStubMinorKeyBits = kBitsPerInt - kSmiTagSize - kStubMajorKeyBits; V(PROPERTY_CELL_TYPE) \ \ V(HEAP_NUMBER_TYPE) \ + V(MUTABLE_HEAP_NUMBER_TYPE) \ V(FOREIGN_TYPE) \ V(BYTE_ARRAY_TYPE) \ V(FREE_SPACE_TYPE) \ @@ -582,12 +598,12 @@ enum StringRepresentationTag { }; const uint32_t kIsIndirectStringMask = 0x1; const uint32_t kIsIndirectStringTag = 0x1; -STATIC_ASSERT((kSeqStringTag & kIsIndirectStringMask) == 0); -STATIC_ASSERT((kExternalStringTag & kIsIndirectStringMask) == 0); -STATIC_ASSERT( - (kConsStringTag & kIsIndirectStringMask) == kIsIndirectStringTag); -STATIC_ASSERT( - (kSlicedStringTag & kIsIndirectStringMask) == kIsIndirectStringTag); +STATIC_ASSERT((kSeqStringTag & kIsIndirectStringMask) == 0); // NOLINT +STATIC_ASSERT((kExternalStringTag & kIsIndirectStringMask) == 0); // NOLINT +STATIC_ASSERT((kConsStringTag & + kIsIndirectStringMask) == kIsIndirectStringTag); // NOLINT +STATIC_ASSERT((kSlicedStringTag & + kIsIndirectStringMask) == kIsIndirectStringTag); // NOLINT // Use this mask to distinguish between cons and slice only after making // sure that the string is one of the two (an indirect string). @@ -606,16 +622,21 @@ const uint32_t kShortExternalStringTag = 0x10; // A ConsString with an empty string as the right side is a candidate -// for being shortcut by the garbage collector unless it is internalized. -// It's not common to have non-flat internalized strings, so we do not -// shortcut them thereby avoiding turning internalized strings into strings. -// See heap.cc and mark-compact.cc. +// for being shortcut by the garbage collector. We don't allocate any +// non-flat internalized strings, so we do not shortcut them thereby +// avoiding turning internalized strings into strings. The bit-masks +// below contain the internalized bit as additional safety. +// See heap.cc, mark-compact.cc and objects-visiting.cc. const uint32_t kShortcutTypeMask = kIsNotStringMask | kIsNotInternalizedMask | kStringRepresentationMask; const uint32_t kShortcutTypeTag = kConsStringTag | kNotInternalizedTag; +static inline bool IsShortcutCandidate(int type) { + return ((type & kShortcutTypeMask) == kShortcutTypeTag); +} + enum InstanceType { // String types. @@ -678,6 +699,7 @@ enum InstanceType { // "Data", objects that cannot contain non-map-word pointers to heap // objects. 
HEAP_NUMBER_TYPE, + MUTABLE_HEAP_NUMBER_TYPE, FOREIGN_TYPE, BYTE_ARRAY_TYPE, FREE_SPACE_TYPE, @@ -808,10 +830,10 @@ enum InstanceType { const int kExternalArrayTypeCount = LAST_EXTERNAL_ARRAY_TYPE - FIRST_EXTERNAL_ARRAY_TYPE + 1; -STATIC_CHECK(JS_OBJECT_TYPE == Internals::kJSObjectType); -STATIC_CHECK(FIRST_NONSTRING_TYPE == Internals::kFirstNonstringType); -STATIC_CHECK(ODDBALL_TYPE == Internals::kOddballType); -STATIC_CHECK(FOREIGN_TYPE == Internals::kForeignType); +STATIC_ASSERT(JS_OBJECT_TYPE == Internals::kJSObjectType); +STATIC_ASSERT(FIRST_NONSTRING_TYPE == Internals::kFirstNonstringType); +STATIC_ASSERT(ODDBALL_TYPE == Internals::kOddballType); +STATIC_ASSERT(FOREIGN_TYPE == Internals::kForeignType); #define FIXED_ARRAY_SUB_INSTANCE_TYPE_LIST(V) \ @@ -843,15 +865,21 @@ enum CompareResult { #define DECL_BOOLEAN_ACCESSORS(name) \ - inline bool name(); \ + inline bool name() const; \ inline void set_##name(bool value); \ #define DECL_ACCESSORS(name, type) \ - inline type* name(); \ + inline type* name() const; \ inline void set_##name(type* value, \ WriteBarrierMode mode = UPDATE_WRITE_BARRIER); \ + +#define DECLARE_CAST(type) \ + INLINE(static type* cast(Object* object)); \ + INLINE(static const type* cast(const Object* object)); + + class AccessorPair; class AllocationSite; class AllocationSiteCreationContext; @@ -861,6 +889,7 @@ class ElementsAccessor; class FixedArrayBase; class GlobalObject; class ObjectVisitor; +class LookupIterator; class StringStream; // We cannot just say "class HeapType;" if it is created from a template... =8-? template<class> class TypeImpl; @@ -878,7 +907,7 @@ template <class C> inline bool Is(Object* obj); #endif #ifdef OBJECT_PRINT -#define DECLARE_PRINTER(Name) void Name##Print(FILE* out = stdout); +#define DECLARE_PRINTER(Name) void Name##Print(OStream& os); // NOLINT #else #define DECLARE_PRINTER(Name) #endif @@ -891,6 +920,7 @@ template <class C> inline bool Is(Object* obj); #define HEAP_OBJECT_TYPE_LIST(V) \ V(HeapNumber) \ + V(MutableHeapNumber) \ V(Name) \ V(UniqueName) \ V(String) \ @@ -1034,7 +1064,7 @@ template <class C> inline bool Is(Object* obj); V(kCopyBuffersOverlap, "Copy buffers overlap") \ V(kCouldNotGenerateZero, "Could not generate +0.0") \ V(kCouldNotGenerateNegativeZero, "Could not generate -0.0") \ - V(kDebuggerIsActive, "Debugger is active") \ + V(kDebuggerHasBreakPoints, "Debugger has break points") \ V(kDebuggerStatement, "DebuggerStatement") \ V(kDeclarationInCatchContext, "Declaration in catch context") \ V(kDeclarationInWithContext, "Declaration in with context") \ @@ -1071,6 +1101,7 @@ template <class C> inline bool Is(Object* obj); "Expected fixed array in register r2") \ V(kExpectedFixedArrayInRegisterRbx, \ "Expected fixed array in register rbx") \ + V(kExpectedNewSpaceObject, "Expected new space object") \ V(kExpectedSmiOrHeapNumber, "Expected smi or HeapNumber") \ V(kExpectedUndefinedOrCell, \ "Expected undefined or cell in register") \ @@ -1147,12 +1178,6 @@ template <class C> inline bool Is(Object* obj); V(kLetBindingReInitialization, "Let binding re-initialization") \ V(kLhsHasBeenClobbered, "lhs has been clobbered") \ V(kLiveBytesCountOverflowChunkSize, "Live Bytes Count overflow chunk size") \ - V(kLiveEditFrameDroppingIsNotSupportedOnARM64, \ - "LiveEdit frame dropping is not supported on arm64") \ - V(kLiveEditFrameDroppingIsNotSupportedOnArm, \ - "LiveEdit frame dropping is not supported on arm") \ - V(kLiveEditFrameDroppingIsNotSupportedOnMips, \ - "LiveEdit frame dropping is not supported on mips") \ 
V(kLiveEdit, "LiveEdit") \ V(kLookupVariableInCountOperation, \ "Lookup variable in count operation") \ @@ -1166,6 +1191,7 @@ template <class C> inline bool Is(Object* obj); V(kModuleVariable, "Module variable") \ V(kModuleUrl, "Module url") \ V(kNativeFunctionLiteral, "Native function literal") \ + V(kNeedSmiLiteral, "Need a Smi literal here") \ V(kNoCasesLeft, "No cases left") \ V(kNoEmptyArraysHereInEmitFastAsciiArrayJoin, \ "No empty arrays here in EmitFastAsciiArrayJoin") \ @@ -1229,10 +1255,8 @@ template <class C> inline bool Is(Object* obj); "The current stack pointer is below csp") \ V(kTheInstructionShouldBeALui, "The instruction should be a lui") \ V(kTheInstructionShouldBeAnOri, "The instruction should be an ori") \ - V(kTheInstructionToPatchShouldBeALoadFromPc, \ - "The instruction to patch should be a load from pc") \ - V(kTheInstructionToPatchShouldBeALoadFromPp, \ - "The instruction to patch should be a load from pp") \ + V(kTheInstructionToPatchShouldBeALoadFromConstantPool, \ + "The instruction to patch should be a load from the constant pool") \ V(kTheInstructionToPatchShouldBeAnLdrLiteral, \ "The instruction to patch should be a ldr literal") \ V(kTheInstructionToPatchShouldBeALui, \ @@ -1347,57 +1371,62 @@ const char* GetBailoutReason(BailoutReason reason); class Object { public: // Type testing. - bool IsObject() { return true; } + bool IsObject() const { return true; } -#define IS_TYPE_FUNCTION_DECL(type_) inline bool Is##type_(); +#define IS_TYPE_FUNCTION_DECL(type_) INLINE(bool Is##type_() const); OBJECT_TYPE_LIST(IS_TYPE_FUNCTION_DECL) HEAP_OBJECT_TYPE_LIST(IS_TYPE_FUNCTION_DECL) #undef IS_TYPE_FUNCTION_DECL - inline bool IsFixedArrayBase(); - inline bool IsExternal(); - inline bool IsAccessorInfo(); + // A non-keyed store is of the form a.x = foo or a["x"] = foo whereas + // a keyed store is of the form a[expression] = foo. + enum StoreFromKeyed { + MAY_BE_STORE_FROM_KEYED, + CERTAINLY_NOT_STORE_FROM_KEYED + }; + + INLINE(bool IsFixedArrayBase() const); + INLINE(bool IsExternal() const); + INLINE(bool IsAccessorInfo() const); - inline bool IsStruct(); -#define DECLARE_STRUCT_PREDICATE(NAME, Name, name) inline bool Is##Name(); + INLINE(bool IsStruct() const); +#define DECLARE_STRUCT_PREDICATE(NAME, Name, name) \ + INLINE(bool Is##Name() const); STRUCT_LIST(DECLARE_STRUCT_PREDICATE) #undef DECLARE_STRUCT_PREDICATE - INLINE(bool IsSpecObject()); - INLINE(bool IsSpecFunction()); - INLINE(bool IsTemplateInfo()); - bool IsCallable(); + INLINE(bool IsSpecObject()) const; + INLINE(bool IsSpecFunction()) const; + INLINE(bool IsTemplateInfo()) const; + INLINE(bool IsNameDictionary() const); + INLINE(bool IsSeededNumberDictionary() const); + INLINE(bool IsUnseededNumberDictionary() const); + INLINE(bool IsOrderedHashSet() const); + INLINE(bool IsOrderedHashMap() const); + bool IsCallable() const; // Oddball testing. - INLINE(bool IsUndefined()); - INLINE(bool IsNull()); - INLINE(bool IsTheHole()); - INLINE(bool IsException()); - INLINE(bool IsUninitialized()); - INLINE(bool IsTrue()); - INLINE(bool IsFalse()); - inline bool IsArgumentsMarker(); + INLINE(bool IsUndefined() const); + INLINE(bool IsNull() const); + INLINE(bool IsTheHole() const); + INLINE(bool IsException() const); + INLINE(bool IsUninitialized() const); + INLINE(bool IsTrue() const); + INLINE(bool IsFalse() const); + INLINE(bool IsArgumentsMarker() const); // Filler objects (fillers and free space objects). - inline bool IsFiller(); + INLINE(bool IsFiller() const); // Extract the number. 
inline double Number(); - inline bool IsNaN(); + INLINE(bool IsNaN() const); + INLINE(bool IsMinusZero() const); bool ToInt32(int32_t* value); bool ToUint32(uint32_t* value); - // Indicates whether OptimalRepresentation can do its work, or whether it - // always has to return Representation::Tagged(). - enum ValueType { - OPTIMAL_REPRESENTATION, - FORCE_TAGGED - }; - - inline Representation OptimalRepresentation( - ValueType type = OPTIMAL_REPRESENTATION) { + inline Representation OptimalRepresentation() { if (!FLAG_track_fields) return Representation::Tagged(); - if (type == FORCE_TAGGED) return Representation::Tagged(); if (IsSmi()) { return Representation::Smi(); } else if (FLAG_track_double_fields && IsHeapNumber()) { @@ -1405,7 +1434,7 @@ class Object { } else if (FLAG_track_computed_fields && IsUninitialized()) { return Representation::None(); } else if (FLAG_track_heap_object_fields) { - ASSERT(IsHeapObject()); + DCHECK(IsHeapObject()); return Representation::HeapObject(); } else { return Representation::Tagged(); @@ -1418,7 +1447,7 @@ class Object { } else if (FLAG_track_fields && representation.IsSmi()) { return IsSmi(); } else if (FLAG_track_double_fields && representation.IsDouble()) { - return IsNumber(); + return IsMutableHeapNumber() || IsNumber(); } else if (FLAG_track_heap_object_fields && representation.IsHeapObject()) { return IsHeapObject(); } @@ -1431,6 +1460,10 @@ class Object { Handle<Object> object, Representation representation); + inline static Handle<Object> WrapForRead(Isolate* isolate, + Handle<Object> object, + Representation representation); + // Returns true if the object is of the correct type to be used as a // implementation of a JSObject's elements. inline bool HasValidElements(); @@ -1453,11 +1486,24 @@ class Object { void Lookup(Handle<Name> name, LookupResult* result); - MUST_USE_RESULT static MaybeHandle<Object> GetPropertyWithReceiver( - Handle<Object> object, - Handle<Object> receiver, - Handle<Name> name, - PropertyAttributes* attributes); + MUST_USE_RESULT static MaybeHandle<Object> GetProperty(LookupIterator* it); + + // Implementation of [[Put]], ECMA-262 5th edition, section 8.12.5. 
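// A recurring theme in these header hunks: LookupResult out-parameters give
// way to a LookupIterator that carries the walk state itself (note the new
// GetProperty(LookupIterator* it) above). A standalone sketch of the pattern
// with toy types; the real iterator also tracks interceptors, access checks,
// and map transitions.
#include <map>
#include <string>

struct Obj {
  std::map<std::string, int> props;
  Obj* prototype = nullptr;
};

class LookupIterator {
 public:
  LookupIterator(Obj* receiver, std::string name)
      : holder_(receiver), name_(std::move(name)) {}

  // Walk the chain until some holder owns the property or the chain ends.
  bool Found() {
    for (; holder_ != nullptr; holder_ = holder_->prototype) {
      if (holder_->props.count(name_) != 0) return true;
    }
    return false;
  }

  // Valid only after Found() returned true; holder_ points at the owner.
  int GetValue() const { return holder_->props.at(name_); }

 private:
  Obj* holder_;
  std::string name_;
};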
+ MUST_USE_RESULT static MaybeHandle<Object> SetProperty( + Handle<Object> object, Handle<Name> key, Handle<Object> value, + StrictMode strict_mode, + StoreFromKeyed store_mode = MAY_BE_STORE_FROM_KEYED); + + MUST_USE_RESULT static MaybeHandle<Object> SetProperty( + LookupIterator* it, Handle<Object> value, StrictMode strict_mode, + StoreFromKeyed store_mode); + MUST_USE_RESULT static MaybeHandle<Object> WriteToReadOnlyProperty( + LookupIterator* it, Handle<Object> value, StrictMode strict_mode); + MUST_USE_RESULT static MaybeHandle<Object> SetDataProperty( + LookupIterator* it, Handle<Object> value); + MUST_USE_RESULT static MaybeHandle<Object> AddDataProperty( + LookupIterator* it, Handle<Object> value, PropertyAttributes attributes, + StrictMode strict_mode, StoreFromKeyed store_mode); MUST_USE_RESULT static inline MaybeHandle<Object> GetPropertyOrElement( Handle<Object> object, Handle<Name> key); @@ -1468,17 +1514,24 @@ class Object { MUST_USE_RESULT static inline MaybeHandle<Object> GetProperty( Handle<Object> object, Handle<Name> key); - MUST_USE_RESULT static MaybeHandle<Object> GetProperty( - Handle<Object> object, + + MUST_USE_RESULT static MaybeHandle<Object> GetPropertyWithAccessor( Handle<Object> receiver, - LookupResult* result, - Handle<Name> key, - PropertyAttributes* attributes); + Handle<Name> name, + Handle<JSObject> holder, + Handle<Object> structure); + MUST_USE_RESULT static MaybeHandle<Object> SetPropertyWithAccessor( + Handle<Object> receiver, Handle<Name> name, Handle<Object> value, + Handle<JSObject> holder, Handle<Object> structure, + StrictMode strict_mode); MUST_USE_RESULT static MaybeHandle<Object> GetPropertyWithDefinedGetter( - Handle<Object> object, Handle<Object> receiver, Handle<JSReceiver> getter); + MUST_USE_RESULT static MaybeHandle<Object> SetPropertyWithDefinedSetter( + Handle<Object> receiver, + Handle<JSReceiver> setter, + Handle<Object> value); MUST_USE_RESULT static inline MaybeHandle<Object> GetElement( Isolate* isolate, @@ -1491,11 +1544,6 @@ class Object { Handle<Object> receiver, uint32_t index); - // Return the object's prototype (might be Heap::null_value()). - Object* GetPrototype(Isolate* isolate); - static Handle<Object> GetPrototype(Isolate* isolate, Handle<Object> object); - Map* GetMarkerMap(Isolate* isolate); - // Returns the permanent hash code associated with this object. May return // undefined if not yet created. Object* GetHash(); @@ -1503,16 +1551,19 @@ class Object { // Returns the permanent hash code associated with this object depending on // the actual object type. May create and store a hash code if needed and none // exists. - // TODO(rafaelw): Remove isolate parameter when objects.cc is fully - // handlified. - static Handle<Object> GetOrCreateHash(Handle<Object> object, - Isolate* isolate); + static Handle<Smi> GetOrCreateHash(Isolate* isolate, Handle<Object> object); // Checks whether this object has the same value as the given one. This // function is implemented according to ES5, section 9.12 and can be used // to implement the Harmony "egal" function. bool SameValue(Object* other); + // Checks whether this object has the same value as the given one. + // +0 and -0 are treated equal. Everything else is the same as SameValue. + // This function is implemented according to ES6, section 7.2.4 and is used + // by ES6 Map and Set. + bool SameValueZero(Object* other); + // Tries to convert an object to an array index. Returns true and sets // the output parameter if it succeeds. 
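// The SameValueZero comment above is the contract behind the FindEntry
// change in objects.cc (SameValue -> SameValueZero): ES6 Maps and Sets treat
// NaN as equal to NaN but, unlike SameValue, do not distinguish +0 from -0.
// A number-only sketch; the real Object::SameValueZero also covers strings
// and other heap objects.
#include <cmath>

// ES6 7.2.4 SameValueZero, restricted to doubles.
bool SameValueZeroNumber(double x, double y) {
  if (std::isnan(x) && std::isnan(y)) return true;  // NaN matches itself
  return x == y;  // IEEE == already treats +0 and -0 as equal
}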
inline bool ToArrayIndex(uint32_t* index); @@ -1535,25 +1586,39 @@ class Object { // Prints this object without details to a message accumulator. void ShortPrint(StringStream* accumulator); - // Casting: This cast is only needed to satisfy macros in objects-inl.h. - static Object* cast(Object* value) { return value; } + DECLARE_CAST(Object) // Layout description. static const int kHeaderSize = 0; // Object does not take up any space. #ifdef OBJECT_PRINT - // Prints this object with details. + // For our gdb macros, we should perhaps change these in the future. void Print(); - void Print(FILE* out); - void PrintLn(); - void PrintLn(FILE* out); + + // Prints this object with details. + void Print(OStream& os); // NOLINT #endif private: + friend class LookupIterator; + friend class PrototypeIterator; + + // Return the map of the root of object's prototype chain. + Map* GetRootMap(Isolate* isolate); + DISALLOW_IMPLICIT_CONSTRUCTORS(Object); }; +struct Brief { + explicit Brief(const Object* const v) : value(v) {} + const Object* value; +}; + + +OStream& operator<<(OStream& os, const Brief& v); + + // Smi represents integer Numbers that can be stored in 31 bits. // Smis are immediate which means they are NOT allocated in the heap. // The this pointer has the following format: [31 bit signed int] 0 @@ -1563,7 +1628,7 @@ class Object { class Smi: public Object { public: // Returns the integer value. - inline int value(); + inline int value() const; // Convert a value to a Smi object. static inline Smi* FromInt(int value); @@ -1573,13 +1638,10 @@ class Smi: public Object { // Returns whether value can be represented in a Smi. static inline bool IsValid(intptr_t value); - // Casting. - static inline Smi* cast(Object* object); + DECLARE_CAST(Smi) // Dispatched behavior. - void SmiPrint(FILE* out = stdout); - void SmiPrint(StringStream* accumulator); - + void SmiPrint(OStream& os) const; // NOLINT DECLARE_VERIFIER(Smi) static const int kMinValue = @@ -1600,7 +1662,7 @@ class MapWord BASE_EMBEDDED { // Normal state: the map word contains a map pointer. // Create a map word from a map pointer. - static inline MapWord FromMap(Map* map); + static inline MapWord FromMap(const Map* map); // View this map word as a map pointer. inline Map* ToMap(); @@ -1644,7 +1706,7 @@ class HeapObject: public Object { public: // [map]: Contains a map which contains the object's reflective // information. - inline Map* map(); + inline Map* map() const; inline void set_map(Map* value); // The no-write-barrier version. This is OK if the object is white and in // new space, or if the value is an immortal immutable object, like the maps @@ -1653,7 +1715,7 @@ class HeapObject: public Object { // Get the map using acquire load. inline Map* synchronized_map(); - inline MapWord synchronized_map_word(); + inline MapWord synchronized_map_word() const; // Set the map using release store inline void synchronized_set_map(Map* value); @@ -1662,14 +1724,14 @@ class HeapObject: public Object { // During garbage collection, the map word of a heap object does not // necessarily contain a map pointer. - inline MapWord map_word(); + inline MapWord map_word() const; inline void set_map_word(MapWord map_word); // The Heap the object was allocated in. Used also to access Isolate. - inline Heap* GetHeap(); + inline Heap* GetHeap() const; // Convenience method to get current isolate. - inline Isolate* GetIsolate(); + inline Isolate* GetIsolate() const; // Converts an address to a HeapObject pointer. 
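// The Smi layout comment earlier in this hunk ("[31 bit signed int] 0") and
// the FromAddress conversion just below are the two halves of V8's pointer
// tagging. A standalone sketch of the 32-bit layout, assuming arithmetic
// right shift; widths differ on other configurations.
#include <cstdint>

// Low tag bit 0: a small integer stored in the word itself.
intptr_t SmiFromInt(int value) { return static_cast<intptr_t>(value) << 1; }
int SmiValue(intptr_t smi) { return static_cast<int>(smi >> 1); }

// Only 31-bit signed values fit next to the tag bit.
bool SmiIsValid(int64_t value) {
  return value >= -(INT64_C(1) << 30) && value < (INT64_C(1) << 30);
}

// Low tag bit 1: a heap pointer; tagging and untagging just add and remove
// the bit, which is what FromAddress-style conversions round-trip.
const intptr_t kHeapObjectTag = 1;
intptr_t TagAddress(intptr_t address) { return address + kHeapObjectTag; }
intptr_t UntagPointer(intptr_t tagged) { return tagged - kHeapObjectTag; }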
static inline HeapObject* FromAddress(Address address); @@ -1689,6 +1751,10 @@ class HeapObject: public Object { // Returns the heap object's size in bytes inline int Size(); + // Returns true if this heap object may contain pointers to objects in new + // space. + inline bool MayContainNewSpacePointers(); + // Given a heap object's map pointer, returns the heap size in bytes // Useful when the map pointer field is used for other purposes. // GC internal. @@ -1707,8 +1773,7 @@ class HeapObject: public Object { Handle<Name> name, Handle<Code> code); - // Casting. - static inline HeapObject* cast(Object* obj); + DECLARE_CAST(HeapObject) // Return the write barrier mode for this. Callers of this function // must be able to present a reference to an DisallowHeapAllocation @@ -1719,9 +1784,9 @@ class HeapObject: public Object { const DisallowHeapAllocation& promise); // Dispatched behavior. - void HeapObjectShortPrint(StringStream* accumulator); + void HeapObjectShortPrint(OStream& os); // NOLINT #ifdef OBJECT_PRINT - void PrintHeader(FILE* out, const char* id); + void PrintHeader(OStream& os, const char* id); // NOLINT #endif DECLARE_PRINTER(HeapObject) DECLARE_VERIFIER(HeapObject) @@ -1739,7 +1804,7 @@ class HeapObject: public Object { static const int kMapOffset = Object::kHeaderSize; static const int kHeaderSize = kMapOffset + kPointerSize; - STATIC_CHECK(kMapOffset == Internals::kHeapObjectMapOffset); + STATIC_ASSERT(kMapOffset == Internals::kHeapObjectMapOffset); protected: // helpers for calling an ObjectVisitor to iterate over pointers in the @@ -1800,17 +1865,15 @@ class FlexibleBodyDescriptor { class HeapNumber: public HeapObject { public: // [value]: number value. - inline double value(); + inline double value() const; inline void set_value(double value); - // Casting. - static inline HeapNumber* cast(Object* obj); + DECLARE_CAST(HeapNumber) // Dispatched behavior. bool HeapNumberBooleanValue(); - void HeapNumberPrint(FILE* out = stdout); - void HeapNumberPrint(StringStream* accumulator); + void HeapNumberPrint(OStream& os); // NOLINT DECLARE_VERIFIER(HeapNumber) inline int get_exponent(); @@ -1884,13 +1947,6 @@ class JSReceiver: public HeapObject { FORCE_DELETION }; - // A non-keyed store is of the form a.x = foo or a["x"] = foo whereas - // a keyed store is of the form a[expression] = foo. - enum StoreFromKeyed { - MAY_BE_STORE_FROM_KEYED, - CERTAINLY_NOT_STORE_FROM_KEYED - }; - // Internal properties (e.g. the hidden properties dictionary) might // be added even though the receiver is non-extensible. enum ExtensibilityCheck { @@ -1898,17 +1954,8 @@ class JSReceiver: public HeapObject { OMIT_EXTENSIBILITY_CHECK }; - // Casting. - static inline JSReceiver* cast(Object* obj); + DECLARE_CAST(JSReceiver) - // Implementation of [[Put]], ECMA-262 5th edition, section 8.12.5. - MUST_USE_RESULT static MaybeHandle<Object> SetProperty( - Handle<JSReceiver> object, - Handle<Name> key, - Handle<Object> value, - PropertyAttributes attributes, - StrictMode strict_mode, - StoreFromKeyed store_mode = MAY_BE_STORE_FROM_KEYED); MUST_USE_RESULT static MaybeHandle<Object> SetElement( Handle<JSReceiver> object, uint32_t index, @@ -1917,10 +1964,14 @@ class JSReceiver: public HeapObject { StrictMode strict_mode); // Implementation of [[HasProperty]], ECMA-262 5th edition, section 8.12.6. 
- static inline bool HasProperty(Handle<JSReceiver> object, Handle<Name> name); - static inline bool HasLocalProperty(Handle<JSReceiver>, Handle<Name> name); - static inline bool HasElement(Handle<JSReceiver> object, uint32_t index); - static inline bool HasLocalElement(Handle<JSReceiver> object, uint32_t index); + MUST_USE_RESULT static inline Maybe<bool> HasProperty( + Handle<JSReceiver> object, Handle<Name> name); + MUST_USE_RESULT static inline Maybe<bool> HasOwnProperty(Handle<JSReceiver>, + Handle<Name> name); + MUST_USE_RESULT static inline Maybe<bool> HasElement( + Handle<JSReceiver> object, uint32_t index); + MUST_USE_RESULT static inline Maybe<bool> HasOwnElement( + Handle<JSReceiver> object, uint32_t index); // Implementation of [[Delete]], ECMA-262 5th edition, section 8.12.7. MUST_USE_RESULT static MaybeHandle<Object> DeleteProperty( @@ -1942,26 +1993,17 @@ class JSReceiver: public HeapObject { // function that was used to instantiate the object). String* constructor_name(); - static inline PropertyAttributes GetPropertyAttribute( - Handle<JSReceiver> object, - Handle<Name> name); - static PropertyAttributes GetPropertyAttributeWithReceiver( - Handle<JSReceiver> object, - Handle<JSReceiver> receiver, - Handle<Name> name); - static PropertyAttributes GetLocalPropertyAttribute( - Handle<JSReceiver> object, - Handle<Name> name); + MUST_USE_RESULT static inline Maybe<PropertyAttributes> GetPropertyAttributes( + Handle<JSReceiver> object, Handle<Name> name); + MUST_USE_RESULT static Maybe<PropertyAttributes> GetPropertyAttributes( + LookupIterator* it); + MUST_USE_RESULT static Maybe<PropertyAttributes> GetOwnPropertyAttributes( + Handle<JSReceiver> object, Handle<Name> name); - static inline PropertyAttributes GetElementAttribute( - Handle<JSReceiver> object, - uint32_t index); - static inline PropertyAttributes GetLocalElementAttribute( - Handle<JSReceiver> object, - uint32_t index); - - // Return the object's prototype (might be Heap::null_value()). - inline Object* GetPrototype(); + MUST_USE_RESULT static inline Maybe<PropertyAttributes> GetElementAttribute( + Handle<JSReceiver> object, uint32_t index); + MUST_USE_RESULT static inline Maybe<PropertyAttributes> + GetOwnElementAttribute(Handle<JSReceiver> object, uint32_t index); // Return the constructor function (may be Heap::null_value()). inline Object* GetConstructor(); @@ -1972,16 +2014,16 @@ class JSReceiver: public HeapObject { // Retrieves a permanent object identity hash code. May create and store a // hash code if needed and none exists. - inline static Handle<Object> GetOrCreateIdentityHash( + inline static Handle<Smi> GetOrCreateIdentityHash( Handle<JSReceiver> object); // Lookup a property. If found, the result is valid and has // detailed information. - void LocalLookup(Handle<Name> name, LookupResult* result, - bool search_hidden_prototypes = false); + void LookupOwn(Handle<Name> name, LookupResult* result, + bool search_hidden_prototypes = false); void Lookup(Handle<Name> name, LookupResult* result); - enum KeyCollectionType { LOCAL_ONLY, INCLUDE_PROTOS }; + enum KeyCollectionType { OWN_ONLY, INCLUDE_PROTOS }; // Computes the enumerable keys for a JSObject. Used for implementing // "for (n in object) { }". 
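// Another pattern shift visible above: predicates that can fail midway
// (access checks, interceptors that throw) now return Maybe<bool> or
// Maybe<PropertyAttributes> rather than a bare value, forcing callers to
// distinguish "false" from "could not answer". A simplified standalone
// model of the wrapper; V8's version adds more conveniences and checks.
template <typename T>
struct Maybe {
  bool has_value;
  T value;
  bool IsJust() const { return has_value; }
  T FromJust() const { return value; }  // caller must check IsJust() first
};

template <typename T>
Maybe<T> Just(T value) { return Maybe<T>{true, value}; }

template <typename T>
Maybe<T> Nothing() { return Maybe<T>{false, T()}; }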
@@ -1989,29 +2031,7 @@ class JSReceiver: public HeapObject { Handle<JSReceiver> object, KeyCollectionType type); - protected: - MUST_USE_RESULT static MaybeHandle<Object> SetPropertyWithDefinedSetter( - Handle<JSReceiver> object, - Handle<JSReceiver> setter, - Handle<Object> value); - private: - static PropertyAttributes GetPropertyAttributeForResult( - Handle<JSReceiver> object, - Handle<JSReceiver> receiver, - LookupResult* result, - Handle<Name> name, - bool continue_search); - - MUST_USE_RESULT static MaybeHandle<Object> SetProperty( - Handle<JSReceiver> receiver, - LookupResult* result, - Handle<Name> key, - Handle<Object> value, - PropertyAttributes attributes, - StrictMode strict_mode, - StoreFromKeyed store_from_keyed); - DISALLOW_IMPLICIT_CONSTRUCTORS(JSReceiver); }; @@ -2123,53 +2143,27 @@ class JSObject: public JSReceiver { static Handle<Object> PrepareSlowElementsForSort(Handle<JSObject> object, uint32_t limit); - MUST_USE_RESULT static MaybeHandle<Object> GetPropertyWithCallback( - Handle<JSObject> object, - Handle<Object> receiver, - Handle<Object> structure, - Handle<Name> name); - - MUST_USE_RESULT static MaybeHandle<Object> SetPropertyWithCallback( - Handle<JSObject> object, - Handle<Object> structure, - Handle<Name> name, - Handle<Object> value, - Handle<JSObject> holder, - StrictMode strict_mode); - MUST_USE_RESULT static MaybeHandle<Object> SetPropertyWithInterceptor( - Handle<JSObject> object, - Handle<Name> name, - Handle<Object> value, - PropertyAttributes attributes, - StrictMode strict_mode); + LookupIterator* it, Handle<Object> value); - MUST_USE_RESULT static MaybeHandle<Object> SetPropertyForResult( - Handle<JSObject> object, - LookupResult* result, - Handle<Name> name, - Handle<Object> value, - PropertyAttributes attributes, - StrictMode strict_mode, - StoreFromKeyed store_mode = MAY_BE_STORE_FROM_KEYED); + // SetLocalPropertyIgnoreAttributes converts callbacks to fields. We need to + // grant an exemption to ExecutableAccessor callbacks in some cases. + enum ExecutableAccessorInfoHandling { + DEFAULT_HANDLING, + DONT_FORCE_FIELD + }; - MUST_USE_RESULT static MaybeHandle<Object> SetLocalPropertyIgnoreAttributes( + MUST_USE_RESULT static MaybeHandle<Object> SetOwnPropertyIgnoreAttributes( Handle<JSObject> object, Handle<Name> key, Handle<Object> value, PropertyAttributes attributes, - ValueType value_type = OPTIMAL_REPRESENTATION, - StoreMode mode = ALLOW_AS_CONSTANT, ExtensibilityCheck extensibility_check = PERFORM_EXTENSIBILITY_CHECK, - StoreFromKeyed store_mode = MAY_BE_STORE_FROM_KEYED); - - static inline Handle<String> ExpectedTransitionKey(Handle<Map> map); - static inline Handle<Map> ExpectedTransitionTarget(Handle<Map> map); + StoreFromKeyed store_mode = MAY_BE_STORE_FROM_KEYED, + ExecutableAccessorInfoHandling handling = DEFAULT_HANDLING); - // Try to follow an existing transition to a field with attributes NONE. The - // return value indicates whether the transition was successful. - static inline Handle<Map> FindTransitionToField(Handle<Map> map, - Handle<Name> key); + static void AddProperty(Handle<JSObject> object, Handle<Name> key, + Handle<Object> value, PropertyAttributes attributes); // Extend the receiver with a single fast property appeared first in the // passed map. This also extends the property backing store if necessary. 
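// The new AddProperty entry point above encodes a precondition SetProperty
// cannot assume: the property is known to be absent, so the add path can
// skip the lookup-and-branch entirely. A sketch of that contract with a
// plain map, an assert standing in for the DCHECK style used throughout
// this patch.
#include <cassert>
#include <map>
#include <string>

using Properties = std::map<std::string, int>;

// Precondition: key is absent; release builds rely on callers honoring it.
void AddProperty(Properties& props, const std::string& key, int value) {
  assert(props.count(key) == 0);
  props.emplace(key, value);
}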
@@ -2202,33 +2196,25 @@ class JSObject: public JSReceiver { Handle<Object> value, PropertyDetails details); - static void OptimizeAsPrototype(Handle<JSObject> object); + static void OptimizeAsPrototype(Handle<JSObject> object, + PrototypeOptimizationMode mode); + static void ReoptimizeIfPrototype(Handle<JSObject> object); // Retrieve interceptors. InterceptorInfo* GetNamedInterceptor(); InterceptorInfo* GetIndexedInterceptor(); // Used from JSReceiver. - static PropertyAttributes GetPropertyAttributePostInterceptor( - Handle<JSObject> object, - Handle<JSObject> receiver, - Handle<Name> name, - bool continue_search); - static PropertyAttributes GetPropertyAttributeWithInterceptor( - Handle<JSObject> object, - Handle<JSObject> receiver, - Handle<Name> name, - bool continue_search); - static PropertyAttributes GetPropertyAttributeWithFailedAccessCheck( - Handle<JSObject> object, - LookupResult* result, - Handle<Name> name, - bool continue_search); - static PropertyAttributes GetElementAttributeWithReceiver( - Handle<JSObject> object, - Handle<JSReceiver> receiver, - uint32_t index, - bool continue_search); + MUST_USE_RESULT static Maybe<PropertyAttributes> + GetPropertyAttributesWithInterceptor(Handle<JSObject> holder, + Handle<Object> receiver, + Handle<Name> name); + MUST_USE_RESULT static Maybe<PropertyAttributes> + GetPropertyAttributesWithFailedAccessCheck(LookupIterator* it); + MUST_USE_RESULT static Maybe<PropertyAttributes> + GetElementAttributeWithReceiver(Handle<JSObject> object, + Handle<JSReceiver> receiver, + uint32_t index, bool check_prototype); // Retrieves an AccessorPair property from the given object. Might return // undefined if the property doesn't exist or is of a different kind. @@ -2238,14 +2224,12 @@ class JSObject: public JSReceiver { AccessorComponent component); // Defines an AccessorPair property on the given object. - // TODO(mstarzinger): Rename to SetAccessor() and return empty handle on - // exception instead of letting callers check for scheduled exception. - static void DefineAccessor(Handle<JSObject> object, - Handle<Name> name, - Handle<Object> getter, - Handle<Object> setter, - PropertyAttributes attributes, - v8::AccessControl access_control = v8::DEFAULT); + // TODO(mstarzinger): Rename to SetAccessor(). + static MaybeHandle<Object> DefineAccessor(Handle<JSObject> object, + Handle<Name> name, + Handle<Object> getter, + Handle<Object> setter, + PropertyAttributes attributes); // Defines an AccessorInfo property on the given object. MUST_USE_RESULT static MaybeHandle<Object> SetAccessor( @@ -2255,13 +2239,7 @@ class JSObject: public JSReceiver { MUST_USE_RESULT static MaybeHandle<Object> GetPropertyWithInterceptor( Handle<JSObject> object, Handle<Object> receiver, - Handle<Name> name, - PropertyAttributes* attributes); - MUST_USE_RESULT static MaybeHandle<Object> GetPropertyPostInterceptor( - Handle<JSObject> object, - Handle<Object> receiver, - Handle<Name> name, - PropertyAttributes* attributes); + Handle<Name> name); // Returns true if this is an instance of an api function and has // been modified since it was created. May give false positives. @@ -2269,8 +2247,8 @@ class JSObject: public JSReceiver { // Accessors for hidden properties object. // - // Hidden properties are not local properties of the object itself. - // Instead they are stored in an auxiliary structure kept as a local + // Hidden properties are not own properties of the object itself. 
+ // Instead they are stored in an auxiliary structure kept as an own // property with a special name Heap::hidden_string(). But if the // receiver is a JSGlobalProxy then the auxiliary object is a property // of its prototype, and if it's a detached proxy, then you can't have @@ -2340,10 +2318,7 @@ class JSObject: public JSReceiver { } // These methods do not perform access checks! - MUST_USE_RESULT static MaybeHandle<AccessorPair> GetLocalPropertyAccessorPair( - Handle<JSObject> object, - Handle<Name> name); - MUST_USE_RESULT static MaybeHandle<AccessorPair> GetLocalElementAccessorPair( + MUST_USE_RESULT static MaybeHandle<AccessorPair> GetOwnElementAccessorPair( Handle<JSObject> object, uint32_t index); @@ -2411,11 +2386,12 @@ class JSObject: public JSReceiver { Handle<JSReceiver> receiver); // Support functions for v8 api (needed for correct interceptor behavior). - static bool HasRealNamedProperty(Handle<JSObject> object, - Handle<Name> key); - static bool HasRealElementProperty(Handle<JSObject> object, uint32_t index); - static bool HasRealNamedCallbackProperty(Handle<JSObject> object, - Handle<Name> key); + MUST_USE_RESULT static Maybe<bool> HasRealNamedProperty( + Handle<JSObject> object, Handle<Name> key); + MUST_USE_RESULT static Maybe<bool> HasRealElementProperty( + Handle<JSObject> object, uint32_t index); + MUST_USE_RESULT static Maybe<bool> HasRealNamedCallbackProperty( + Handle<JSObject> object, Handle<Name> key); // Get the header size for a JSObject. Used to compute the index of // internal fields as well as the number of internal fields. @@ -2428,28 +2404,27 @@ class JSObject: public JSReceiver { inline void SetInternalField(int index, Smi* value); // The following lookup functions skip interceptors. - void LocalLookupRealNamedProperty(Handle<Name> name, LookupResult* result); + void LookupOwnRealNamedProperty(Handle<Name> name, LookupResult* result); void LookupRealNamedProperty(Handle<Name> name, LookupResult* result); void LookupRealNamedPropertyInPrototypes(Handle<Name> name, LookupResult* result); - void LookupCallbackProperty(Handle<Name> name, LookupResult* result); // Returns the number of properties on this object filtering out properties // with the specified attributes (ignoring interceptors). - int NumberOfLocalProperties(PropertyAttributes filter = NONE); + int NumberOfOwnProperties(PropertyAttributes filter = NONE); // Fill in details for properties into storage starting at the specified // index. - void GetLocalPropertyNames( + void GetOwnPropertyNames( FixedArray* storage, int index, PropertyAttributes filter = NONE); // Returns the number of properties on this object filtering out properties // with the specified attributes (ignoring interceptors). - int NumberOfLocalElements(PropertyAttributes filter); + int NumberOfOwnElements(PropertyAttributes filter); // Returns the number of enumerable elements (ignoring interceptors). int NumberOfEnumElements(); // Returns the number of elements on this object filtering out elements // with the specified attributes (ignoring interceptors). - int GetLocalElementKeys(FixedArray* storage, PropertyAttributes filter); + int GetOwnElementKeys(FixedArray* storage, PropertyAttributes filter); // Count and fill in the enumerable elements into storage. // (storage->length() == NumberOfEnumElements()). 
// If storage is NULL, will count the elements without adding @@ -2464,13 +2439,7 @@ class JSObject: public JSReceiver { static void TransitionElementsKind(Handle<JSObject> object, ElementsKind to_kind); - // TODO(mstarzinger): Both public because of ConvertAnsSetLocalProperty(). static void MigrateToMap(Handle<JSObject> object, Handle<Map> new_map); - static void GeneralizeFieldRepresentation(Handle<JSObject> object, - int modify_index, - Representation new_representation, - Handle<HeapType> new_field_type, - StoreMode store_mode); // Convert the object to use the canonical dictionary // representation. If the object is expected to have additional properties @@ -2486,15 +2455,15 @@ class JSObject: public JSReceiver { Handle<JSObject> object); // Transform slow named properties to fast variants. - static void TransformToFastProperties(Handle<JSObject> object, - int unused_property_fields); + static void MigrateSlowToFast(Handle<JSObject> object, + int unused_property_fields); // Access fast-case object properties at index. static Handle<Object> FastPropertyAt(Handle<JSObject> object, Representation representation, - int index); - inline Object* RawFastPropertyAt(int index); - inline void FastPropertyAtPut(int index, Object* value); + FieldIndex index); + inline Object* RawFastPropertyAt(FieldIndex index); + inline void FastPropertyAtPut(FieldIndex index, Object* value); void WriteToField(int descriptor, Object* value); // Access to in object properties. @@ -2507,9 +2476,7 @@ class JSObject: public JSReceiver { // Set the object's prototype (only JSReceiver and null are allowed values). MUST_USE_RESULT static MaybeHandle<Object> SetPrototype( - Handle<JSObject> object, - Handle<Object> value, - bool skip_hidden_prototypes = false); + Handle<JSObject> object, Handle<Object> value, bool from_javascript); // Initializes the body after properties slot, properties slot is // initialized by set_properties. Fill the pre-allocated fields with @@ -2534,10 +2501,7 @@ class JSObject: public JSReceiver { static void SetObserved(Handle<JSObject> object); // Copy object. - enum DeepCopyHints { - kNoHints = 0, - kObjectIsShallowArray = 1 - }; + enum DeepCopyHints { kNoHints = 0, kObjectIsShallow = 1 }; static Handle<JSObject> Copy(Handle<JSObject> object); MUST_USE_RESULT static MaybeHandle<JSObject> DeepCopy( @@ -2551,17 +2515,16 @@ class JSObject: public JSReceiver { static Handle<Object> GetDataProperty(Handle<JSObject> object, Handle<Name> key); - // Casting. - static inline JSObject* cast(Object* obj); + DECLARE_CAST(JSObject) // Dispatched behavior. void JSObjectShortPrint(StringStream* accumulator); DECLARE_PRINTER(JSObject) DECLARE_VERIFIER(JSObject) #ifdef OBJECT_PRINT - void PrintProperties(FILE* out = stdout); - void PrintElements(FILE* out = stdout); - void PrintTransitions(FILE* out = stdout); + void PrintProperties(OStream& os); // NOLINT + void PrintElements(OStream& os); // NOLINT + void PrintTransitions(OStream& os); // NOLINT #endif static void PrintElementsTransition( @@ -2602,12 +2565,6 @@ class JSObject: public JSReceiver { Object* SlowReverseLookup(Object* value); - // Maximal number of fast properties for the JSObject. Used to - // restrict the number of map transitions to avoid an explosion in - // the number of maps for objects used as dictionaries. - inline bool TooManyFastProperties( - StoreFromKeyed store_mode = MAY_BE_STORE_FROM_KEYED); - // Maximal number of elements (numbered 0 .. kMaxElementCount - 1). // Also maximal value of JSArray's length property. 
static const uint32_t kMaxElementCount = 0xffffffffu; @@ -2628,11 +2585,13 @@ class JSObject: public JSReceiver { static const int kMaxUncheckedOldFastElementsLength = 500; // Note that Page::kMaxRegularHeapObjectSize puts a limit on - // permissible values (see the ASSERT in heap.cc). + // permissible values (see the DCHECK in heap.cc). static const int kInitialMaxFastElementArray = 100000; - static const int kFastPropertiesSoftLimit = 12; - static const int kMaxFastProperties = 128; + // This constant applies only to the initial map of "$Object" aka + // "global.Object" and not to arbitrary other JSObject maps. + static const int kInitialGlobalObjectUnusedPropertiesCount = 4; + static const int kMaxInstanceSize = 255 * kPointerSize; // When extending the backing storage for property values, we increase // its size by more than the 1 entry necessary, so sequentially adding fields @@ -2644,7 +2603,7 @@ class JSObject: public JSReceiver { static const int kElementsOffset = kPropertiesOffset + kPointerSize; static const int kHeaderSize = kElementsOffset + kPointerSize; - STATIC_CHECK(kHeaderSize == Internals::kJSObjectHeaderSize); + STATIC_ASSERT(kHeaderSize == Internals::kJSObjectHeaderSize); class BodyDescriptor : public FlexibleBodyDescriptor<kPropertiesOffset> { public: @@ -2659,21 +2618,42 @@ class JSObject: public JSReceiver { Handle<Name> name, Handle<Object> old_value); + static void MigrateToNewProperty(Handle<JSObject> object, + Handle<Map> transition, + Handle<Object> value); + private: friend class DictionaryElementsAccessor; friend class JSReceiver; friend class Object; + static void MigrateFastToFast(Handle<JSObject> object, Handle<Map> new_map); + static void MigrateFastToSlow(Handle<JSObject> object, + Handle<Map> new_map, + int expected_additional_properties); + + static void SetPropertyToField(LookupResult* lookup, Handle<Object> value); + + static void ConvertAndSetOwnProperty(LookupResult* lookup, + Handle<Name> name, + Handle<Object> value, + PropertyAttributes attributes); + + static void SetPropertyToFieldWithAttributes(LookupResult* lookup, + Handle<Name> name, + Handle<Object> value, + PropertyAttributes attributes); + static void GeneralizeFieldRepresentation(Handle<JSObject> object, + int modify_index, + Representation new_representation, + Handle<HeapType> new_field_type); + static void UpdateAllocationSite(Handle<JSObject> object, ElementsKind to_kind); // Used from Object::GetProperty(). 
MUST_USE_RESULT static MaybeHandle<Object> GetPropertyWithFailedAccessCheck( - Handle<JSObject> object, - Handle<Object> receiver, - LookupResult* result, - Handle<Name> name, - PropertyAttributes* attributes); + LookupIterator* it); MUST_USE_RESULT static MaybeHandle<Object> GetElementWithCallback( Handle<JSObject> object, @@ -2682,16 +2662,15 @@ class JSObject: public JSReceiver { uint32_t index, Handle<Object> holder); - static PropertyAttributes GetElementAttributeWithInterceptor( - Handle<JSObject> object, - Handle<JSReceiver> receiver, - uint32_t index, - bool continue_search); - static PropertyAttributes GetElementAttributeWithoutInterceptor( - Handle<JSObject> object, - Handle<JSReceiver> receiver, - uint32_t index, - bool continue_search); + MUST_USE_RESULT static Maybe<PropertyAttributes> + GetElementAttributeWithInterceptor(Handle<JSObject> object, + Handle<JSReceiver> receiver, + uint32_t index, bool continue_search); + MUST_USE_RESULT static Maybe<PropertyAttributes> + GetElementAttributeWithoutInterceptor(Handle<JSObject> object, + Handle<JSReceiver> receiver, + uint32_t index, + bool continue_search); MUST_USE_RESULT static MaybeHandle<Object> SetElementWithCallback( Handle<JSObject> object, Handle<Object> structure, @@ -2737,23 +2716,6 @@ class JSObject: public JSReceiver { StrictMode strict_mode, bool check_prototype = true); - // Searches the prototype chain for property 'name'. If it is found and - // has a setter, invoke it and set '*done' to true. If it is found and is - // read-only, reject and set '*done' to true. Otherwise, set '*done' to - // false. Can throw and return an empty handle with '*done==true'. - MUST_USE_RESULT static MaybeHandle<Object> SetPropertyViaPrototypes( - Handle<JSObject> object, - Handle<Name> name, - Handle<Object> value, - PropertyAttributes attributes, - StrictMode strict_mode, - bool* done); - MUST_USE_RESULT static MaybeHandle<Object> SetPropertyPostInterceptor( - Handle<JSObject> object, - Handle<Name> name, - Handle<Object> value, - PropertyAttributes attributes, - StrictMode strict_mode); MUST_USE_RESULT static MaybeHandle<Object> SetPropertyUsingTransition( Handle<JSObject> object, LookupResult* lookup, @@ -2761,25 +2723,13 @@ class JSObject: public JSReceiver { Handle<Object> value, PropertyAttributes attributes); MUST_USE_RESULT static MaybeHandle<Object> SetPropertyWithFailedAccessCheck( - Handle<JSObject> object, - LookupResult* result, - Handle<Name> name, - Handle<Object> value, - bool check_prototype, - StrictMode strict_mode); + LookupIterator* it, Handle<Object> value, StrictMode strict_mode); // Add a property to an object. - MUST_USE_RESULT static MaybeHandle<Object> AddProperty( - Handle<JSObject> object, - Handle<Name> name, - Handle<Object> value, - PropertyAttributes attributes, - StrictMode strict_mode, - StoreFromKeyed store_mode = MAY_BE_STORE_FROM_KEYED, - ExtensibilityCheck extensibility_check = PERFORM_EXTENSIBILITY_CHECK, - ValueType value_type = OPTIMAL_REPRESENTATION, - StoreMode mode = ALLOW_AS_CONSTANT, - TransitionFlag flag = INSERT_TRANSITION); + MUST_USE_RESULT static MaybeHandle<Object> AddPropertyInternal( + Handle<JSObject> object, Handle<Name> name, Handle<Object> value, + PropertyAttributes attributes, StoreFromKeyed store_mode, + ExtensibilityCheck extensibility_check, TransitionFlag flag); // Add a property to a fast-case object. 
static void AddFastProperty(Handle<JSObject> object, @@ -2787,13 +2737,8 @@ class JSObject: public JSReceiver { Handle<Object> value, PropertyAttributes attributes, StoreFromKeyed store_mode, - ValueType value_type, TransitionFlag flag); - static void MigrateToNewProperty(Handle<JSObject> object, - Handle<Map> transition, - Handle<Object> value); - // Add a property to a slow-case object. static void AddSlowProperty(Handle<JSObject> object, Handle<Name> name, @@ -2847,16 +2792,14 @@ class JSObject: public JSReceiver { uint32_t index, Handle<Object> getter, Handle<Object> setter, - PropertyAttributes attributes, - v8::AccessControl access_control); + PropertyAttributes attributes); static Handle<AccessorPair> CreateAccessorPairFor(Handle<JSObject> object, Handle<Name> name); static void DefinePropertyAccessor(Handle<JSObject> object, Handle<Name> name, Handle<Object> getter, Handle<Object> setter, - PropertyAttributes attributes, - v8::AccessControl access_control); + PropertyAttributes attributes); // Try to define a single accessor paying attention to map transitions. // Returns false if this was not possible and we have to use the slow case. @@ -2884,7 +2827,7 @@ class JSObject: public JSReceiver { MUST_USE_RESULT Object* GetIdentityHash(); - static Handle<Object> GetOrCreateIdentityHash(Handle<JSObject> object); + static Handle<Smi> GetOrCreateIdentityHash(Handle<JSObject> object); DISALLOW_IMPLICIT_CONSTRUCTORS(JSObject); }; @@ -2895,14 +2838,14 @@ class JSObject: public JSReceiver { class FixedArrayBase: public HeapObject { public: // [length]: length of the array. - inline int length(); + inline int length() const; inline void set_length(int value); // Get and set the length using acquire loads and release stores. - inline int synchronized_length(); + inline int synchronized_length() const; inline void synchronized_set_length(int value); - inline static FixedArrayBase* cast(Object* object); + DECLARE_CAST(FixedArrayBase) // Layout description. // Length is smi tagged when it is stored. @@ -2976,8 +2919,7 @@ class FixedArray: public FixedArrayBase { return HeapObject::RawField(this, OffsetOfElementAt(index)); } - // Casting. - static inline FixedArray* cast(Object* obj); + DECLARE_CAST(FixedArray) // Maximal allowed size, in bytes, of a single FixedArray. // Prevents overflowing size computations, as well as extreme memory @@ -3026,7 +2968,7 @@ class FixedArray: public FixedArrayBase { Object* value); private: - STATIC_CHECK(kHeaderSize == Internals::kFixedArrayHeaderSize); + STATIC_ASSERT(kHeaderSize == Internals::kFixedArrayHeaderSize); DISALLOW_IMPLICIT_CONSTRUCTORS(FixedArray); }; @@ -3062,8 +3004,7 @@ class FixedDoubleArray: public FixedArrayBase { inline static double hole_nan_as_double(); inline static double canonical_not_the_hole_nan_as_double(); - // Casting. - static inline FixedDoubleArray* cast(Object* obj); + DECLARE_CAST(FixedDoubleArray) // Maximal allowed size, in bytes, of a single FixedDoubleArray. // Prevents overflowing size computations, as well as extreme memory @@ -3082,16 +3023,41 @@ class FixedDoubleArray: public FixedArrayBase { // ConstantPoolArray describes a fixed-sized array containing constant pool -// entires. -// The format of the pool is: -// [0]: Field holding the first index which is a raw code target pointer entry -// [1]: Field holding the first index which is a heap pointer entry -// [2]: Field holding the first index which is a int32 entry -// [3] ... [first_code_ptr_index() - 1] : 64 bit entries -// [first_code_ptr_index()] ... 
[first_heap_ptr_index() - 1] : code pointers
-// [first_heap_ptr_index()] ... [first_int32_index() - 1] : heap pointers
-// [first_int32_index()] ... [length - 1] : 32 bit entries
-class ConstantPoolArray: public FixedArrayBase {
+// entries.
+//
+// A ConstantPoolArray can be structured in two different ways depending upon
+// whether it is extended or small. The is_extended_layout() method can be used
+// to discover which layout the constant pool has.
+//
+// The format of a small constant pool is:
+//  [kSmallLayout1Offset]                    : Small section layout bitmap 1
+//  [kSmallLayout2Offset]                    : Small section layout bitmap 2
+//  [first_index(INT64, SMALL_SECTION)]      : 64 bit entries
+//   ...                                     :  ...
+//  [first_index(CODE_PTR, SMALL_SECTION)]   : code pointer entries
+//   ...                                     :  ...
+//  [first_index(HEAP_PTR, SMALL_SECTION)]   : heap pointer entries
+//   ...                                     :  ...
+//  [first_index(INT32, SMALL_SECTION)]      : 32 bit entries
+//   ...                                     :  ...
+//
+// If the constant pool has an extended layout, it also contains an extended
+// section, which has the following format at location
+// get_extended_section_header_offset():
+//  [kExtendedInt64CountOffset]              : count of extended 64 bit entries
+//  [kExtendedCodePtrCountOffset]            : count of extended code pointers
+//  [kExtendedHeapPtrCountOffset]            : count of extended heap pointers
+//  [kExtendedInt32CountOffset]              : count of extended 32 bit entries
+//  [first_index(INT64, EXTENDED_SECTION)]   : 64 bit entries
+//   ...                                     :  ...
+//  [first_index(CODE_PTR, EXTENDED_SECTION)]: code pointer entries
+//   ...                                     :  ...
+//  [first_index(HEAP_PTR, EXTENDED_SECTION)]: heap pointer entries
+//   ...                                     :  ...
+//  [first_index(INT32, EXTENDED_SECTION)]   : 32 bit entries
+//   ...                                     :  ...
+//
+class ConstantPoolArray: public HeapObject {
  public:
   enum WeakObjectState {
     NO_WEAK_OBJECTS,
@@ -3099,17 +3065,100 @@ class ConstantPoolArray: public FixedArrayBase {
     WEAK_OBJECTS_IN_IC
   };
 
-  // Getters for the field storing the first index for different type entries.
-  inline int first_code_ptr_index();
-  inline int first_heap_ptr_index();
-  inline int first_int64_index();
-  inline int first_int32_index();
+  enum Type {
+    INT64 = 0,
+    CODE_PTR,
+    HEAP_PTR,
+    INT32,
+    // Number of types stored by the ConstantPoolArrays.
+    NUMBER_OF_TYPES,
+    FIRST_TYPE = INT64,
+    LAST_TYPE = INT32
+  };
 
-  // Getters for counts of different type entries.
- inline int count_of_code_ptr_entries(); - inline int count_of_heap_ptr_entries(); - inline int count_of_int64_entries(); - inline int count_of_int32_entries(); + enum LayoutSection { + SMALL_SECTION = 0, + EXTENDED_SECTION, + NUMBER_OF_LAYOUT_SECTIONS + }; + + class NumberOfEntries BASE_EMBEDDED { + public: + inline NumberOfEntries() { + for (int i = 0; i < NUMBER_OF_TYPES; i++) { + element_counts_[i] = 0; + } + } + + inline NumberOfEntries(int int64_count, int code_ptr_count, + int heap_ptr_count, int int32_count) { + element_counts_[INT64] = int64_count; + element_counts_[CODE_PTR] = code_ptr_count; + element_counts_[HEAP_PTR] = heap_ptr_count; + element_counts_[INT32] = int32_count; + } + + inline NumberOfEntries(ConstantPoolArray* array, LayoutSection section) { + element_counts_[INT64] = array->number_of_entries(INT64, section); + element_counts_[CODE_PTR] = array->number_of_entries(CODE_PTR, section); + element_counts_[HEAP_PTR] = array->number_of_entries(HEAP_PTR, section); + element_counts_[INT32] = array->number_of_entries(INT32, section); + } + + inline void increment(Type type); + inline int equals(const NumberOfEntries& other) const; + inline bool is_empty() const; + inline int count_of(Type type) const; + inline int base_of(Type type) const; + inline int total_count() const; + inline int are_in_range(int min, int max) const; + + private: + int element_counts_[NUMBER_OF_TYPES]; + }; + + class Iterator BASE_EMBEDDED { + public: + inline Iterator(ConstantPoolArray* array, Type type) + : array_(array), + type_(type), + final_section_(array->final_section()), + current_section_(SMALL_SECTION), + next_index_(array->first_index(type, SMALL_SECTION)) { + update_section(); + } + + inline Iterator(ConstantPoolArray* array, Type type, LayoutSection section) + : array_(array), + type_(type), + final_section_(section), + current_section_(section), + next_index_(array->first_index(type, section)) { + update_section(); + } + + inline int next_index(); + inline bool is_finished(); + + private: + inline void update_section(); + ConstantPoolArray* array_; + const Type type_; + const LayoutSection final_section_; + + LayoutSection current_section_; + int next_index_; + }; + + // Getters for the first index, the last index and the count of entries of + // a given type for a given layout section. + inline int first_index(Type type, LayoutSection layout_section); + inline int last_index(Type type, LayoutSection layout_section); + inline int number_of_entries(Type type, LayoutSection layout_section); + + // Returns the type of the entry at the given index. + inline Type get_type(int index); + inline bool offset_is_type(int offset, Type type); // Setter and getter for pool elements. inline Address get_code_ptr_entry(int index); @@ -3118,70 +3167,150 @@ class ConstantPoolArray: public FixedArrayBase { inline int32_t get_int32_entry(int index); inline double get_int64_entry_as_double(int index); - // Setter and getter for weak objects state - inline void set_weak_object_state(WeakObjectState state); - inline WeakObjectState get_weak_object_state(); - inline void set(int index, Address value); inline void set(int index, Object* value); inline void set(int index, int64_t value); inline void set(int index, double value); inline void set(int index, int32_t value); - // Set up initial state. 
- inline void Init(int number_of_int64_entries, - int number_of_code_ptr_entries, - int number_of_heap_ptr_entries, - int number_of_int32_entries); + // Setters which take a raw offset rather than an index (for code generation). + inline void set_at_offset(int offset, int32_t value); + inline void set_at_offset(int offset, int64_t value); + inline void set_at_offset(int offset, double value); + inline void set_at_offset(int offset, Address value); + inline void set_at_offset(int offset, Object* value); + + // Setter and getter for weak objects state + inline void set_weak_object_state(WeakObjectState state); + inline WeakObjectState get_weak_object_state(); + + // Returns true if the constant pool has an extended layout, false if it has + // only the small layout. + inline bool is_extended_layout(); + + // Returns the last LayoutSection in this constant pool array. + inline LayoutSection final_section(); + + // Set up initial state for a small layout constant pool array. + inline void Init(const NumberOfEntries& small); + + // Set up initial state for an extended layout constant pool array. + inline void InitExtended(const NumberOfEntries& small, + const NumberOfEntries& extended); + + // Clears the pointer entries with GC safe values. + void ClearPtrEntries(Isolate* isolate); + + // returns the total number of entries in the constant pool array. + inline int length(); // Garbage collection support. - inline static int SizeFor(int number_of_int64_entries, - int number_of_code_ptr_entries, - int number_of_heap_ptr_entries, - int number_of_int32_entries) { - return RoundUp(OffsetAt(number_of_int64_entries, - number_of_code_ptr_entries, - number_of_heap_ptr_entries, - number_of_int32_entries), - kPointerSize); + inline int size(); + + + inline static int MaxInt64Offset(int number_of_int64) { + return kFirstEntryOffset + (number_of_int64 * kInt64Size); + } + + inline static int SizeFor(const NumberOfEntries& small) { + int size = kFirstEntryOffset + + (small.count_of(INT64) * kInt64Size) + + (small.count_of(CODE_PTR) * kPointerSize) + + (small.count_of(HEAP_PTR) * kPointerSize) + + (small.count_of(INT32) * kInt32Size); + return RoundUp(size, kPointerSize); + } + + inline static int SizeForExtended(const NumberOfEntries& small, + const NumberOfEntries& extended) { + int size = SizeFor(small); + size = RoundUp(size, kInt64Size); // Align extended header to 64 bits. + size += kExtendedFirstOffset + + (extended.count_of(INT64) * kInt64Size) + + (extended.count_of(CODE_PTR) * kPointerSize) + + (extended.count_of(HEAP_PTR) * kPointerSize) + + (extended.count_of(INT32) * kInt32Size); + return RoundUp(size, kPointerSize); + } + + inline static int entry_size(Type type) { + switch (type) { + case INT32: + return kInt32Size; + case INT64: + return kInt64Size; + case CODE_PTR: + case HEAP_PTR: + return kPointerSize; + default: + UNREACHABLE(); + return 0; + } } // Code Generation support. 
inline int OffsetOfElementAt(int index) {
-    ASSERT(index < length());
-    if (index >= first_int32_index()) {
-      return OffsetAt(count_of_int64_entries(), count_of_code_ptr_entries(),
-                      count_of_heap_ptr_entries(), index - first_int32_index());
-    } else if (index >= first_heap_ptr_index()) {
-      return OffsetAt(count_of_int64_entries(), count_of_code_ptr_entries(),
-                      index - first_heap_ptr_index(), 0);
-    } else if (index >= first_code_ptr_index()) {
-      return OffsetAt(count_of_int64_entries(), index - first_code_ptr_index(),
-                      0, 0);
+    int offset;
+    LayoutSection section;
+    if (is_extended_layout() && index >= first_extended_section_index()) {
+      section = EXTENDED_SECTION;
+      offset = get_extended_section_header_offset() + kExtendedFirstOffset;
     } else {
-      return OffsetAt(index, 0, 0, 0);
+      section = SMALL_SECTION;
+      offset = kFirstEntryOffset;
     }
+
+    // Add offsets for the preceding type sections.
+    DCHECK(index <= last_index(LAST_TYPE, section));
+    for (Type type = FIRST_TYPE; index > last_index(type, section);
+         type = next_type(type)) {
+      offset += entry_size(type) * number_of_entries(type, section);
+    }
+
+    // Add offset for the index in its type.
+    Type type = get_type(index);
+    offset += entry_size(type) * (index - first_index(type, section));
+    return offset;
   }
 
-  // Casting.
-  static inline ConstantPoolArray* cast(Object* obj);
+  DECLARE_CAST(ConstantPoolArray)
 
   // Garbage collection support.
   Object** RawFieldOfElementAt(int index) {
     return HeapObject::RawField(this, OffsetOfElementAt(index));
   }
 
-  // Layout description.
-  static const int kArrayLayoutOffset = FixedArray::kHeaderSize;
-  static const int kFirstOffset = kArrayLayoutOffset + kPointerSize;
-
-  static const int kFieldBitSize = 10;
-  static const int kMaxEntriesPerType = (1 << kFieldBitSize) - 1;
-
-  class NumberOfInt64EntriesField: public BitField<int, 0, kFieldBitSize> {};
-  class NumberOfCodePtrEntriesField: public BitField<int, 10, kFieldBitSize> {};
-  class NumberOfHeapPtrEntriesField: public BitField<int, 20, kFieldBitSize> {};
-  class WeakObjectStateField: public BitField<WeakObjectState, 30, 2> {};
+  // Small Layout description.
+  static const int kSmallLayout1Offset = HeapObject::kHeaderSize;
+  static const int kSmallLayout2Offset = kSmallLayout1Offset + kInt32Size;
+  static const int kHeaderSize = kSmallLayout2Offset + kInt32Size;
+  static const int kFirstEntryOffset = ROUND_UP(kHeaderSize, kInt64Size);
+
+  static const int kSmallLayoutCountBits = 10;
+  static const int kMaxSmallEntriesPerType = (1 << kSmallLayoutCountBits) - 1;
+
+  // Fields in kSmallLayout1Offset.
+  class Int64CountField: public BitField<int, 1, kSmallLayoutCountBits> {};
+  class CodePtrCountField: public BitField<int, 11, kSmallLayoutCountBits> {};
+  class HeapPtrCountField: public BitField<int, 21, kSmallLayoutCountBits> {};
+  class IsExtendedField: public BitField<bool, 31, 1> {};
+
+  // Fields in kSmallLayout2Offset.
+  class Int32CountField: public BitField<int, 1, kSmallLayoutCountBits> {};
+  class TotalCountField: public BitField<int, 11, 12> {};
+  class WeakObjectStateField: public BitField<WeakObjectState, 23, 2> {};
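The two small-layout bitmap words above are read and written through v8's BitField<type, shift, size> helper classes. As a reference for how that idiom packs the per-type counts and the extension flag into a single 32-bit word, here is a self-contained sketch; the BitField body below is a minimal re-implementation of the pattern (assumed encode/decode/update semantics), not a verbatim copy of v8's helper.

#include <cstdint>
#include <cstdio>

// Minimal sketch of the BitField<T, shift, size> idiom: a stateless view of a
// bit range inside a 32-bit word.
template <class T, int shift, int size>
struct BitField {
  static const uint32_t kMask = ((1u << size) - 1u) << shift;
  static uint32_t encode(T value) {
    return static_cast<uint32_t>(value) << shift;
  }
  static uint32_t update(uint32_t previous, T value) {
    return (previous & ~kMask) | encode(value);
  }
  static T decode(uint32_t value) {
    return static_cast<T>((value & kMask) >> shift);
  }
};

// The kSmallLayout1Offset field layout declared above (fields start at bit 1).
const int kSmallLayoutCountBits = 10;
typedef BitField<int, 1, kSmallLayoutCountBits> Int64CountField;
typedef BitField<int, 11, kSmallLayoutCountBits> CodePtrCountField;
typedef BitField<int, 21, kSmallLayoutCountBits> HeapPtrCountField;
typedef BitField<bool, 31, 1> IsExtendedField;

int main() {
  uint32_t layout1 = 0;
  layout1 = Int64CountField::update(layout1, 3);
  layout1 = CodePtrCountField::update(layout1, 5);
  layout1 = HeapPtrCountField::update(layout1, 7);
  layout1 = IsExtendedField::update(layout1, true);

  // Each field decodes independently of the others sharing the word.
  std::printf("int64=%d code_ptr=%d heap_ptr=%d extended=%d\n",
              Int64CountField::decode(layout1),
              CodePtrCountField::decode(layout1),
              HeapPtrCountField::decode(layout1),
              static_cast<int>(IsExtendedField::decode(layout1)));
  return 0;
}

+
+  // Extended layout description, which starts at
+  // get_extended_section_header_offset().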
+ static const int kExtendedInt64CountOffset = 0; + static const int kExtendedCodePtrCountOffset = + kExtendedInt64CountOffset + kPointerSize; + static const int kExtendedHeapPtrCountOffset = + kExtendedCodePtrCountOffset + kPointerSize; + static const int kExtendedInt32CountOffset = + kExtendedHeapPtrCountOffset + kPointerSize; + static const int kExtendedFirstOffset = + kExtendedInt32CountOffset + kPointerSize; // Dispatched behavior. void ConstantPoolIterateBody(ObjectVisitor* v); @@ -3190,15 +3319,13 @@ class ConstantPoolArray: public FixedArrayBase { DECLARE_VERIFIER(ConstantPoolArray) private: - inline static int OffsetAt(int number_of_int64_entries, - int number_of_code_ptr_entries, - int number_of_heap_ptr_entries, - int number_of_int32_entries) { - return kFirstOffset - + (number_of_int64_entries * kInt64Size) - + (number_of_code_ptr_entries * kPointerSize) - + (number_of_heap_ptr_entries * kPointerSize) - + (number_of_int32_entries * kInt32Size); + inline int first_extended_section_index(); + inline int get_extended_section_header_offset(); + + inline static Type next_type(Type type) { + DCHECK(type >= FIRST_TYPE && type < NUMBER_OF_TYPES); + int type_int = static_cast<int>(type); + return static_cast<Type>(++type_int); } DISALLOW_IMPLICIT_CONSTRUCTORS(ConstantPoolArray); @@ -3222,7 +3349,7 @@ class DescriptorArray: public FixedArray { // Returns the number of descriptors in the array. int number_of_descriptors() { - ASSERT(length() >= kFirstIndex || IsEmpty()); + DCHECK(length() >= kFirstIndex || IsEmpty()); int len = length(); return len == 0 ? 0 : Smi::cast(get(kDescriptorLengthIndex))->value(); } @@ -3248,7 +3375,7 @@ class DescriptorArray: public FixedArray { } FixedArray* GetEnumCache() { - ASSERT(HasEnumCache()); + DCHECK(HasEnumCache()); FixedArray* bridge = FixedArray::cast(get(kEnumCacheIndex)); return FixedArray::cast(bridge->get(kEnumCacheBridgeCacheIndex)); } @@ -3262,13 +3389,13 @@ class DescriptorArray: public FixedArray { } FixedArray* GetEnumIndicesCache() { - ASSERT(HasEnumIndicesCache()); + DCHECK(HasEnumIndicesCache()); FixedArray* bridge = FixedArray::cast(get(kEnumCacheIndex)); return FixedArray::cast(bridge->get(kEnumCacheBridgeIndicesCacheIndex)); } Object** GetEnumCacheSlot() { - ASSERT(HasEnumCache()); + DCHECK(HasEnumCache()); return HeapObject::RawField(reinterpret_cast<HeapObject*>(this), kEnumCacheOffset); } @@ -3281,12 +3408,15 @@ class DescriptorArray: public FixedArray { FixedArray* new_cache, Object* new_index_cache); + bool CanHoldValue(int descriptor, Object* value); + // Accessors for fetching instance descriptor at descriptor number. inline Name* GetKey(int descriptor_number); inline Object** GetKeySlot(int descriptor_number); inline Object* GetValue(int descriptor_number); inline void SetValue(int descriptor_number, Object* value); inline Object** GetValueSlot(int descriptor_number); + static inline int GetValueOffset(int descriptor_number); inline Object** GetDescriptorStartSlot(int descriptor_number); inline Object** GetDescriptorEndSlot(int descriptor_number); inline PropertyDetails GetDetails(int descriptor_number); @@ -3339,8 +3469,7 @@ class DescriptorArray: public FixedArray { int number_of_descriptors, int slack = 0); - // Casting. - static inline DescriptorArray* cast(Object* obj); + DECLARE_CAST(DescriptorArray) // Constant for denoting key was not found. static const int kNotFound = -1; @@ -3370,7 +3499,7 @@ class DescriptorArray: public FixedArray { #ifdef OBJECT_PRINT // Print all the descriptors. 
- void PrintDescriptors(FILE* out = stdout); + void PrintDescriptors(OStream& os); // NOLINT #endif #ifdef DEBUG @@ -3509,12 +3638,12 @@ class BaseShape { static const bool UsesSeed = false; static uint32_t Hash(Key key) { return 0; } static uint32_t SeededHash(Key key, uint32_t seed) { - ASSERT(UsesSeed); + DCHECK(UsesSeed); return Hash(key); } static uint32_t HashForObject(Key key, Object* object) { return 0; } static uint32_t SeededHashForObject(Key key, uint32_t seed, Object* object) { - ASSERT(UsesSeed); + DCHECK(UsesSeed); return HashForObject(key, object); } }; @@ -3593,8 +3722,7 @@ class HashTable: public FixedArray { void IteratePrefix(ObjectVisitor* visitor); void IterateElements(ObjectVisitor* visitor); - // Casting. - static inline HashTable* cast(Object* obj); + DECLARE_CAST(HashTable) // Compute the probe offset (quadratic probing). INLINE(static uint32_t GetProbeOffset(uint32_t n)) { @@ -3656,15 +3784,15 @@ class HashTable: public FixedArray { // To scale a computed hash code to fit within the hash table, we // use bit-wise AND with a mask, so the capacity must be positive // and non-zero. - ASSERT(capacity > 0); - ASSERT(capacity <= kMaxCapacity); + DCHECK(capacity > 0); + DCHECK(capacity <= kMaxCapacity); set(kCapacityIndex, Smi::FromInt(capacity)); } // Returns probe entry. static uint32_t GetProbe(uint32_t hash, uint32_t number, uint32_t size) { - ASSERT(IsPowerOf2(size)); + DCHECK(IsPowerOf2(size)); return (hash + GetProbeOffset(number)) & (size - 1); } @@ -3767,8 +3895,7 @@ class StringTable: public HashTable<StringTable, uint16_t c1, uint16_t c2); - // Casting. - static inline StringTable* cast(Object* obj); + DECLARE_CAST(StringTable) private: template <bool seq_ascii> friend class JsonParser; @@ -3808,7 +3935,7 @@ class MapCache: public HashTable<MapCache, MapCacheShape, HashTableKey*> { Object* Lookup(FixedArray* key); static Handle<MapCache> Put( Handle<MapCache> map_cache, Handle<FixedArray> key, Handle<Map> value); - static inline MapCache* cast(Object* obj); + DECLARE_CAST(MapCache) private: DISALLOW_IMPLICIT_CONSTRUCTORS(MapCache); @@ -3821,10 +3948,6 @@ class Dictionary: public HashTable<Derived, Shape, Key> { typedef HashTable<Derived, Shape, Key> DerivedHashTable; public: - static inline Dictionary* cast(Object* obj) { - return reinterpret_cast<Dictionary*>(obj); - } - // Returns the value at entry. Object* ValueAt(int entry) { return this->get(DerivedHashTable::EntryToIndex(entry) + 1); @@ -3837,7 +3960,7 @@ class Dictionary: public HashTable<Derived, Shape, Key> { // Returns the property details for the property at entry. PropertyDetails DetailsAt(int entry) { - ASSERT(entry >= 0); // Not found is -1, which is not caught by get(). + DCHECK(entry >= 0); // Not found is -1, which is not caught by get(). return PropertyDetails( Smi::cast(this->get(DerivedHashTable::EntryToIndex(entry) + 2))); } @@ -3883,7 +4006,7 @@ class Dictionary: public HashTable<Derived, Shape, Key> { // Accessors for next enumeration index. void SetNextEnumerationIndex(int index) { - ASSERT(index != 0); + DCHECK(index != 0); this->set(kNextEnumerationIndexIndex, Smi::FromInt(index)); } @@ -3901,7 +4024,7 @@ class Dictionary: public HashTable<Derived, Shape, Key> { static Handle<Derived> EnsureCapacity(Handle<Derived> obj, int n, Key key); #ifdef OBJECT_PRINT - void Print(FILE* out = stdout); + void Print(OStream& os); // NOLINT #endif // Returns the key (slow). 
Object* SlowReverseLookup(Object* value); @@ -3962,10 +4085,7 @@ class NameDictionary: public Dictionary<NameDictionary, NameDictionary, NameDictionaryShape, Handle<Name> > DerivedDictionary; public: - static inline NameDictionary* cast(Object* obj) { - ASSERT(obj->IsDictionary()); - return reinterpret_cast<NameDictionary*>(obj); - } + DECLARE_CAST(NameDictionary) // Copies enumerable keys to preallocated fixed array. void CopyEnumKeysTo(FixedArray* storage); @@ -4013,10 +4133,7 @@ class SeededNumberDictionary SeededNumberDictionaryShape, uint32_t> { public: - static SeededNumberDictionary* cast(Object* obj) { - ASSERT(obj->IsDictionary()); - return reinterpret_cast<SeededNumberDictionary*>(obj); - } + DECLARE_CAST(SeededNumberDictionary) // Type specific at put (default NONE attributes is used when adding). MUST_USE_RESULT static Handle<SeededNumberDictionary> AtNumberPut( @@ -4064,10 +4181,7 @@ class UnseededNumberDictionary UnseededNumberDictionaryShape, uint32_t> { public: - static UnseededNumberDictionary* cast(Object* obj) { - ASSERT(obj->IsDictionary()); - return reinterpret_cast<UnseededNumberDictionary*>(obj); - } + DECLARE_CAST(UnseededNumberDictionary) // Type specific at put (default NONE attributes is used when adding). MUST_USE_RESULT static Handle<UnseededNumberDictionary> AtNumberPut( @@ -4107,10 +4221,7 @@ class ObjectHashTable: public HashTable<ObjectHashTable, typedef HashTable< ObjectHashTable, ObjectHashTableShape, Handle<Object> > DerivedHashTable; public: - static inline ObjectHashTable* cast(Object* obj) { - ASSERT(obj->IsHashTable()); - return reinterpret_cast<ObjectHashTable*>(obj); - } + DECLARE_CAST(ObjectHashTable) // Attempt to shrink hash table after removal of key. MUST_USE_RESULT static inline Handle<ObjectHashTable> Shrink( @@ -4121,12 +4232,16 @@ class ObjectHashTable: public HashTable<ObjectHashTable, // returned in case the key is not present. Object* Lookup(Handle<Object> key); - // Adds (or overwrites) the value associated with the given key. Mapping a - // key to the hole value causes removal of the whole entry. + // Adds (or overwrites) the value associated with the given key. static Handle<ObjectHashTable> Put(Handle<ObjectHashTable> table, Handle<Object> key, Handle<Object> value); + // Returns an ObjectHashTable (possibly |table|) where |key| has been removed. + static Handle<ObjectHashTable> Remove(Handle<ObjectHashTable> table, + Handle<Object> key, + bool* was_present); + private: friend class MarkCompactCollector; @@ -4144,7 +4259,7 @@ class ObjectHashTable: public HashTable<ObjectHashTable, // insertion order. There are Map and Set interfaces (OrderedHashMap // and OrderedHashTable, below). It is meant to be used by JSMap/JSSet. // -// Only Object* keys are supported, with Object::SameValue() used as the +// Only Object* keys are supported, with Object::SameValueZero() used as the // equality operator and Object::GetHash() for the hash function. // // Based on the "Deterministic Hash Table" as described by Jason Orendorff at @@ -4155,16 +4270,27 @@ class ObjectHashTable: public HashTable<ObjectHashTable, // [0]: bucket count // [1]: element count // [2]: deleted element count -// [3]: live iterators (doubly-linked list) -// [4..(NumberOfBuckets() - 1)]: "hash table", where each item is an offset -// into the data table (see below) where the -// first item in this bucket is stored. 
-// [4 + NumberOfBuckets()..length]: "data table", an array of length
+// [3..(3 + NumberOfBuckets() - 1)]: "hash table", where each item is an
+//                 offset into the data table (see below) where the
+//                 first item in this bucket is stored.
+// [3 + NumberOfBuckets()..length]: "data table", an array of length
 //                 Capacity() * kEntrySize, where the first entrysize
 //                 items are handled by the derived class and the
 //                 item at kChainOffset is another entry into the
 //                 data table indicating the next entry in this hash
 //                 bucket.
+//
+// When we transition the table to a new version we obsolete it and reuse parts
+// of the memory to store information about how to transition an iterator to
+// the new table:
+//
+// Memory layout for obsolete table:
+//   [0]: bucket count
+//   [1]: Next newer table
+//   [2]: Number of removed holes or -1 when the table was cleared.
+//   [3..(3 + NumberOfRemovedHoles() - 1)]: The indexes of the removed holes.
+//   [3 + NumberOfRemovedHoles()..length]: Not used
+//
 template<class Derived, class Iterator, int entrysize>
 class OrderedHashTable: public FixedArray {
  public:
@@ -4180,11 +4306,19 @@ class OrderedHashTable: public FixedArray {
   // if possible.
   static Handle<Derived> Shrink(Handle<Derived> table);
 
-  // Returns a new empty OrderedHashTable and updates all the iterators to
-  // point to the new table.
+  // Returns a new empty OrderedHashTable and records the clearing so that
+  // existing iterators can be updated.
   static Handle<Derived> Clear(Handle<Derived> table);
 
+  // Returns an OrderedHashTable (possibly |table|) where |key| has been
+  // removed.
+  static Handle<Derived> Remove(Handle<Derived> table, Handle<Object> key,
+                                bool* was_present);
+
   // Returns kNotFound if the key isn't present.
+  int FindEntry(Handle<Object> key, int hash);
+
+  // Like the above, but doesn't require the caller to provide a hash.
   int FindEntry(Handle<Object> key);
 
   int NumberOfElements() {
@@ -4201,10 +4335,6 @@ class OrderedHashTable: public FixedArray {
     return Smi::cast(get(kNumberOfBucketsIndex))->value();
   }
 
-  Object* iterators() { return get(kIteratorsIndex); }
-
-  void set_iterators(Object* value) { set(kIteratorsIndex, value); }
-
   // Returns the index into the data table where the new entry
   // should be placed. The table is assumed to have enough space
   // for a new entry.
@@ -4221,6 +4351,20 @@
   Object* KeyAt(int entry) { return get(EntryToIndex(entry)); }
 
+  bool IsObsolete() {
+    return !get(kNextTableIndex)->IsSmi();
+  }
+
+  // The next newer table. This is only valid if the table is obsolete.
+  Derived* NextTable() {
+    return Derived::cast(get(kNextTableIndex));
+  }
+
+  // When the table is obsolete we store the indexes of the removed holes.
+  int RemovedIndexAt(int index) {
+    return Smi::cast(get(kRemovedHolesIndex + index))->value();
+  }
+
   static const int kNotFound = -1;
   static const int kMinCapacity = 4;
 
@@ -4257,11 +4401,21 @@
     return Smi::cast(get(kHashTableStartIndex + bucket))->value();
   }
 
+  void SetNextTable(Derived* next_table) {
+    set(kNextTableIndex, next_table);
+  }
+
+  void SetRemovedIndexAt(int index, int removed_index) {
+    return set(kRemovedHolesIndex + index, Smi::FromInt(removed_index));
+  }
+
   static const int kNumberOfBucketsIndex = 0;
   static const int kNumberOfElementsIndex = kNumberOfBucketsIndex + 1;
   static const int kNumberOfDeletedElementsIndex = kNumberOfElementsIndex + 1;
-  static const int kIteratorsIndex = kNumberOfDeletedElementsIndex + 1;
-  static const int kHashTableStartIndex = kIteratorsIndex + 1;
+  static const int kHashTableStartIndex = kNumberOfDeletedElementsIndex + 1;
+
+  static const int kNextTableIndex = kNumberOfElementsIndex;
+  static const int kRemovedHolesIndex = kHashTableStartIndex;
 
   static const int kEntrySize = entrysize + 1;
   static const int kChainOffset = entrysize;
@@ -4279,16 +4433,11 @@ class JSSetIterator;
 class OrderedHashSet: public OrderedHashTable<
     OrderedHashSet, JSSetIterator, 1> {
  public:
-  static OrderedHashSet* cast(Object* obj) {
-    ASSERT(obj->IsOrderedHashTable());
-    return reinterpret_cast<OrderedHashSet*>(obj);
-  }
+  DECLARE_CAST(OrderedHashSet)
 
   bool Contains(Handle<Object> key);
   static Handle<OrderedHashSet> Add(
       Handle<OrderedHashSet> table, Handle<Object> key);
-  static Handle<OrderedHashSet> Remove(
-      Handle<OrderedHashSet> table, Handle<Object> key);
 };
 
@@ -4298,10 +4447,7 @@ class JSMapIterator;
 class OrderedHashMap:public OrderedHashTable<
    OrderedHashMap, JSMapIterator, 2> {
  public:
-  static OrderedHashMap* cast(Object* obj) {
-    ASSERT(obj->IsOrderedHashTable());
-    return reinterpret_cast<OrderedHashMap*>(obj);
-  }
+  DECLARE_CAST(OrderedHashMap)
 
   Object* Lookup(Handle<Object> key);
   static Handle<OrderedHashMap> Put(
       Handle<OrderedHashMap> table,
       Handle<Object> key,
       Handle<Object> value);
 
-  private:
   Object* ValueAt(int entry) {
     return get(EntryToIndex(entry) + kValueOffset);
   }
 
+ private:
   static const int kValueOffset = 1;
 };
 
@@ -4339,10 +4485,7 @@ class WeakHashTable: public HashTable<WeakHashTable,
   typedef HashTable<
       WeakHashTable, WeakHashTableShape<2>, Handle<Object> > DerivedHashTable;
  public:
-  static inline WeakHashTable* cast(Object* obj) {
-    ASSERT(obj->IsHashTable());
-    return reinterpret_cast<WeakHashTable*>(obj);
-  }
+  DECLARE_CAST(WeakHashTable)
 
   // Looks up the value associated with the given key. The hole value is
   // returned in case the key is not present.
@@ -4404,8 +4547,7 @@ class JSFunctionResultCache: public FixedArray {
   inline int finger_index();
   inline void set_finger_index(int finger_index);
 
-  // Casting
-  static inline JSFunctionResultCache* cast(Object* obj);
+  DECLARE_CAST(JSFunctionResultCache)
 
   DECLARE_VERIFIER(JSFunctionResultCache)
 };
 
@@ -4420,7 +4562,7 @@ class JSFunctionResultCache: public FixedArray {
 // routines.
 class ScopeInfo : public FixedArray {
  public:
-  static inline ScopeInfo* cast(Object* object);
+  DECLARE_CAST(ScopeInfo)
 
   // Return the type of this scope.
   ScopeType scope_type();
@@ -4483,6 +4625,9 @@ class ScopeInfo : public FixedArray {
   // Return the initialization flag of the given context local.
   InitializationFlag ContextLocalInitFlag(int var);
 
+  // Return the maybe assigned flag of the given context local.
+ MaybeAssignedFlag ContextLocalMaybeAssignedFlag(int var); + // Return true if this local was introduced by the compiler, and should not be // exposed to the user in a debugger. bool LocalIsSynthetic(int var); @@ -4498,10 +4643,9 @@ class ScopeInfo : public FixedArray { // returns a value < 0. The name must be an internalized string. // If the slot is present and mode != NULL, sets *mode to the corresponding // mode for that variable. - static int ContextSlotIndex(Handle<ScopeInfo> scope_info, - Handle<String> name, - VariableMode* mode, - InitializationFlag* init_flag); + static int ContextSlotIndex(Handle<ScopeInfo> scope_info, Handle<String> name, + VariableMode* mode, InitializationFlag* init_flag, + MaybeAssignedFlag* maybe_assigned_flag); // Lookup support for serialized scope info. Returns the // parameter index for a given parameter name if the parameter is present; @@ -4619,6 +4763,8 @@ class ScopeInfo : public FixedArray { // ContextLocalInfoEntries part. class ContextLocalMode: public BitField<VariableMode, 0, 3> {}; class ContextLocalInitFlag: public BitField<InitializationFlag, 3, 1> {}; + class ContextLocalMaybeAssignedFlag + : public BitField<MaybeAssignedFlag, 4, 1> {}; }; @@ -4635,9 +4781,9 @@ class NormalizedMapCache: public FixedArray { void Clear(); - // Casting - static inline NormalizedMapCache* cast(Object* obj); - static inline bool IsNormalizedMapCache(Object* obj); + DECLARE_CAST(NormalizedMapCache) + + static inline bool IsNormalizedMapCache(const Object* obj); DECLARE_VERIFIER(NormalizedMapCache) private: @@ -4672,8 +4818,8 @@ class ByteArray: public FixedArrayBase { // array, this function returns the number of elements a byte array should // have. static int LengthFor(int size_in_bytes) { - ASSERT(IsAligned(size_in_bytes, kPointerSize)); - ASSERT(size_in_bytes >= kHeaderSize); + DCHECK(IsAligned(size_in_bytes, kPointerSize)); + DCHECK(size_in_bytes >= kHeaderSize); return size_in_bytes - kHeaderSize; } @@ -4683,8 +4829,7 @@ class ByteArray: public FixedArrayBase { // Returns a pointer to the ByteArray object for a given data start address. static inline ByteArray* FromDataStartAddress(Address address); - // Casting. - static inline ByteArray* cast(Object* obj); + DECLARE_CAST(ByteArray) // Dispatched behavior. inline int ByteArraySize() { @@ -4711,16 +4856,15 @@ class ByteArray: public FixedArrayBase { class FreeSpace: public HeapObject { public: // [size]: size of the free space including the header. - inline int size(); + inline int size() const; inline void set_size(int value); - inline int nobarrier_size(); + inline int nobarrier_size() const; inline void nobarrier_set_size(int value); inline int Size() { return size(); } - // Casting. - static inline FreeSpace* cast(Object* obj); + DECLARE_CAST(FreeSpace) // Dispatched behavior. DECLARE_PRINTER(FreeSpace) @@ -4771,8 +4915,7 @@ class ExternalArray: public FixedArrayBase { // external array. DECL_ACCESSORS(external_pointer, void) // Pointer to the data store. - // Casting. - static inline ExternalArray* cast(Object* obj); + DECLARE_CAST(ExternalArray) // Maximal acceptable length for an external array. static const int kMaxLength = 0x3fffffff; @@ -4812,8 +4955,7 @@ class ExternalUint8ClampedArray: public ExternalArray { uint32_t index, Handle<Object> value); - // Casting. - static inline ExternalUint8ClampedArray* cast(Object* obj); + DECLARE_CAST(ExternalUint8ClampedArray) // Dispatched behavior. 
DECLARE_PRINTER(ExternalUint8ClampedArray) @@ -4837,8 +4979,7 @@ class ExternalInt8Array: public ExternalArray { uint32_t index, Handle<Object> value); - // Casting. - static inline ExternalInt8Array* cast(Object* obj); + DECLARE_CAST(ExternalInt8Array) // Dispatched behavior. DECLARE_PRINTER(ExternalInt8Array) @@ -4862,8 +5003,7 @@ class ExternalUint8Array: public ExternalArray { uint32_t index, Handle<Object> value); - // Casting. - static inline ExternalUint8Array* cast(Object* obj); + DECLARE_CAST(ExternalUint8Array) // Dispatched behavior. DECLARE_PRINTER(ExternalUint8Array) @@ -4887,8 +5027,7 @@ class ExternalInt16Array: public ExternalArray { uint32_t index, Handle<Object> value); - // Casting. - static inline ExternalInt16Array* cast(Object* obj); + DECLARE_CAST(ExternalInt16Array) // Dispatched behavior. DECLARE_PRINTER(ExternalInt16Array) @@ -4913,8 +5052,7 @@ class ExternalUint16Array: public ExternalArray { uint32_t index, Handle<Object> value); - // Casting. - static inline ExternalUint16Array* cast(Object* obj); + DECLARE_CAST(ExternalUint16Array) // Dispatched behavior. DECLARE_PRINTER(ExternalUint16Array) @@ -4938,8 +5076,7 @@ class ExternalInt32Array: public ExternalArray { uint32_t index, Handle<Object> value); - // Casting. - static inline ExternalInt32Array* cast(Object* obj); + DECLARE_CAST(ExternalInt32Array) // Dispatched behavior. DECLARE_PRINTER(ExternalInt32Array) @@ -4964,8 +5101,7 @@ class ExternalUint32Array: public ExternalArray { uint32_t index, Handle<Object> value); - // Casting. - static inline ExternalUint32Array* cast(Object* obj); + DECLARE_CAST(ExternalUint32Array) // Dispatched behavior. DECLARE_PRINTER(ExternalUint32Array) @@ -4990,8 +5126,7 @@ class ExternalFloat32Array: public ExternalArray { uint32_t index, Handle<Object> value); - // Casting. - static inline ExternalFloat32Array* cast(Object* obj); + DECLARE_CAST(ExternalFloat32Array) // Dispatched behavior. DECLARE_PRINTER(ExternalFloat32Array) @@ -5016,8 +5151,7 @@ class ExternalFloat64Array: public ExternalArray { uint32_t index, Handle<Object> value); - // Casting. - static inline ExternalFloat64Array* cast(Object* obj); + DECLARE_CAST(ExternalFloat64Array) // Dispatched behavior. DECLARE_PRINTER(ExternalFloat64Array) @@ -5030,19 +5164,22 @@ class ExternalFloat64Array: public ExternalArray { class FixedTypedArrayBase: public FixedArrayBase { public: - // Casting: - static inline FixedTypedArrayBase* cast(Object* obj); + DECLARE_CAST(FixedTypedArrayBase) static const int kDataOffset = kHeaderSize; inline int size(); + inline int TypedArraySize(InstanceType type); + // Use with care: returns raw pointer into heap. 
inline void* DataPtr(); inline int DataSize(); private: + inline int DataSize(InstanceType type); + DISALLOW_IMPLICIT_CONSTRUCTORS(FixedTypedArrayBase); }; @@ -5053,8 +5190,7 @@ class FixedTypedArray: public FixedTypedArrayBase { typedef typename Traits::ElementType ElementType; static const InstanceType kInstanceType = Traits::kInstanceType; - // Casting: - static inline FixedTypedArray<Traits>* cast(Object* obj); + DECLARE_CAST(FixedTypedArray<Traits>) static inline int ElementOffset(int index) { return kDataOffset + index * sizeof(ElementType); @@ -5086,13 +5222,13 @@ class FixedTypedArray: public FixedTypedArrayBase { #define FIXED_TYPED_ARRAY_TRAITS(Type, type, TYPE, elementType, size) \ class Type##ArrayTraits { \ - public: \ - typedef elementType ElementType; \ - static const InstanceType kInstanceType = FIXED_##TYPE##_ARRAY_TYPE; \ - static const char* Designator() { return #type " array"; } \ - static inline Handle<Object> ToHandle(Isolate* isolate, \ - elementType scalar); \ - static inline elementType defaultValue(); \ + public: /* NOLINT */ \ + typedef elementType ElementType; \ + static const InstanceType kInstanceType = FIXED_##TYPE##_ARRAY_TYPE; \ + static const char* Designator() { return #type " array"; } \ + static inline Handle<Object> ToHandle(Isolate* isolate, \ + elementType scalar); \ + static inline elementType defaultValue(); \ }; \ \ typedef FixedTypedArray<Type##ArrayTraits> Fixed##Type##Array; @@ -5111,14 +5247,16 @@ TYPED_ARRAYS(FIXED_TYPED_ARRAY_TRAITS) class DeoptimizationInputData: public FixedArray { public: // Layout description. Indices in the array. - static const int kTranslationByteArrayIndex = 0; - static const int kInlinedFunctionCountIndex = 1; - static const int kLiteralArrayIndex = 2; - static const int kOsrAstIdIndex = 3; - static const int kOsrPcOffsetIndex = 4; - static const int kOptimizationIdIndex = 5; - static const int kSharedFunctionInfoIndex = 6; - static const int kFirstDeoptEntryIndex = 7; + static const int kDeoptEntryCountIndex = 0; + static const int kReturnAddressPatchEntryCountIndex = 1; + static const int kTranslationByteArrayIndex = 2; + static const int kInlinedFunctionCountIndex = 3; + static const int kLiteralArrayIndex = 4; + static const int kOsrAstIdIndex = 5; + static const int kOsrPcOffsetIndex = 6; + static const int kOptimizationIdIndex = 7; + static const int kSharedFunctionInfoIndex = 8; + static const int kFirstDeoptEntryIndex = 9; // Offsets of deopt entry elements relative to the start of the entry. static const int kAstIdRawOffset = 0; @@ -5127,6 +5265,12 @@ class DeoptimizationInputData: public FixedArray { static const int kPcOffset = 3; static const int kDeoptEntrySize = 4; + // Offsets of return address patch entry elements relative to the start of the + // entry + static const int kReturnAddressPcOffset = 0; + static const int kPatchedAddressPcOffset = 1; + static const int kReturnAddressPatchEntrySize = 2; + // Simple element accessors. #define DEFINE_ELEMENT_ACCESSORS(name, type) \ type* name() { \ @@ -5147,20 +5291,35 @@ class DeoptimizationInputData: public FixedArray { #undef DEFINE_ELEMENT_ACCESSORS // Accessors for elements of the ith deoptimization entry. 
-#define DEFINE_ENTRY_ACCESSORS(name, type) \ - type* name(int i) { \ - return type::cast(get(IndexForEntry(i) + k##name##Offset)); \ - } \ - void Set##name(int i, type* value) { \ - set(IndexForEntry(i) + k##name##Offset, value); \ +#define DEFINE_DEOPT_ENTRY_ACCESSORS(name, type) \ + type* name(int i) { \ + return type::cast(get(IndexForEntry(i) + k##name##Offset)); \ + } \ + void Set##name(int i, type* value) { \ + set(IndexForEntry(i) + k##name##Offset, value); \ } - DEFINE_ENTRY_ACCESSORS(AstIdRaw, Smi) - DEFINE_ENTRY_ACCESSORS(TranslationIndex, Smi) - DEFINE_ENTRY_ACCESSORS(ArgumentsStackHeight, Smi) - DEFINE_ENTRY_ACCESSORS(Pc, Smi) + DEFINE_DEOPT_ENTRY_ACCESSORS(AstIdRaw, Smi) + DEFINE_DEOPT_ENTRY_ACCESSORS(TranslationIndex, Smi) + DEFINE_DEOPT_ENTRY_ACCESSORS(ArgumentsStackHeight, Smi) + DEFINE_DEOPT_ENTRY_ACCESSORS(Pc, Smi) + +#undef DEFINE_DEOPT_ENTRY_ACCESSORS + +// Accessors for elements of the ith return address patch entry. +#define DEFINE_PATCH_ENTRY_ACCESSORS(name, type) \ + type* name(int i) { \ + return type::cast( \ + get(IndexForReturnAddressPatchEntry(i) + k##name##Offset)); \ + } \ + void Set##name(int i, type* value) { \ + set(IndexForReturnAddressPatchEntry(i) + k##name##Offset, value); \ + } -#undef DEFINE_ENTRY_ACCESSORS + DEFINE_PATCH_ENTRY_ACCESSORS(ReturnAddressPc, Smi) + DEFINE_PATCH_ENTRY_ACCESSORS(PatchedAddressPc, Smi) + +#undef DEFINE_PATCH_ENTRY_ACCESSORS BailoutId AstId(int i) { return BailoutId(AstIdRaw(i)->value()); @@ -5170,29 +5329,43 @@ SetAstIdRaw(i, Smi::FromInt(value.ToInt())); } - int DeoptCount() { - return (length() - kFirstDeoptEntryIndex) / kDeoptEntrySize; + int DeoptCount() { + return length() == 0 ? 0 : Smi::cast(get(kDeoptEntryCountIndex))->value(); + } + + int ReturnAddressPatchCount() { + return length() == 0 + ? 0 + : Smi::cast(get(kReturnAddressPatchEntryCountIndex))->value(); } // Allocates a DeoptimizationInputData. static Handle<DeoptimizationInputData> New(Isolate* isolate, int deopt_entry_count, + int return_address_patch_count, PretenureFlag pretenure); - // Casting. - static inline DeoptimizationInputData* cast(Object* obj); + DECLARE_CAST(DeoptimizationInputData) #ifdef ENABLE_DISASSEMBLER - void DeoptimizationInputDataPrint(FILE* out); + void DeoptimizationInputDataPrint(OStream& os); // NOLINT #endif private: + friend class Object; // For accessing LengthFor. + static int IndexForEntry(int i) { return kFirstDeoptEntryIndex + (i * kDeoptEntrySize); } - static int LengthFor(int entry_count) { - return IndexForEntry(entry_count); + int IndexForReturnAddressPatchEntry(int i) { + return kFirstDeoptEntryIndex + (DeoptCount() * kDeoptEntrySize) + + (i * kReturnAddressPatchEntrySize); + } + + static int LengthFor(int deopt_count, int return_address_patch_count) { + return kFirstDeoptEntryIndex + (deopt_count * kDeoptEntrySize) + + (return_address_patch_count * kReturnAddressPatchEntrySize); } }; @@ -5226,11 +5399,10 @@ class DeoptimizationOutputData: public FixedArray { int number_of_deopt_points, PretenureFlag pretenure); - // Casting.
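With the entry counts now stored in the array itself (kDeoptEntryCountIndex and kReturnAddressPatchEntryCountIndex), the total length is no longer derivable from length() alone, hence the new two-argument LengthFor(). A self-contained worked example of that arithmetic, using the constants declared above (kFirstDeoptEntryIndex == 9, kDeoptEntrySize == 4, kReturnAddressPatchEntrySize == 2); an illustration, not patch code:

int DeoptDataLengthExample(int deopt_count, int patch_count) {
  const int kFirstDeoptEntryIndex = 9;
  const int kDeoptEntrySize = 4;
  const int kReturnAddressPatchEntrySize = 2;
  // e.g. 3 deopt entries and 2 patch entries occupy 9 + 12 + 4 == 25 slots.
  return kFirstDeoptEntryIndex + deopt_count * kDeoptEntrySize +
         patch_count * kReturnAddressPatchEntrySize;
}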
- static inline DeoptimizationOutputData* cast(Object* obj); + DECLARE_CAST(DeoptimizationOutputData) #if defined(OBJECT_PRINT) || defined(ENABLE_DISASSEMBLER) - void DeoptimizationOutputDataPrint(FILE* out); + void DeoptimizationOutputDataPrint(OStream& os); // NOLINT #endif }; @@ -5296,12 +5468,13 @@ class Code: public HeapObject { // Printing static const char* ICState2String(InlineCacheState state); static const char* StubType2String(StubType type); - static void PrintExtraICState(FILE* out, Kind kind, ExtraICState extra); - void Disassemble(const char* name, FILE* out = stdout); + static void PrintExtraICState(OStream& os, // NOLINT + Kind kind, ExtraICState extra); + void Disassemble(const char* name, OStream& os); // NOLINT #endif // ENABLE_DISASSEMBLER // [instruction_size]: Size of the native instructions - inline int instruction_size(); + inline int instruction_size() const; inline void set_instruction_size(int value); // [relocation_info]: Code relocation information @@ -5318,13 +5491,13 @@ class Code: public HeapObject { // [raw_type_feedback_info]: This field stores various things, depending on // the kind of the code object. // FUNCTION => type feedback information. - // STUB => various things, e.g. a SMI + // STUB and ICs => major/minor key as Smi. DECL_ACCESSORS(raw_type_feedback_info, Object) inline Object* type_feedback_info(); inline void set_type_feedback_info( Object* value, WriteBarrierMode mode = UPDATE_WRITE_BARRIER); - inline int stub_info(); - inline void set_stub_info(int info); + inline uint32_t stub_key(); + inline void set_stub_key(uint32_t key); // [next_code_link]: Link for lists of optimized or deoptimized code. // Note that storage for this field is overlapped with typefeedback_info. @@ -5338,11 +5511,11 @@ class Code: public HeapObject { // [ic_age]: Inline caching age: the value of the Heap::global_ic_age // at the moment when this object was created. inline void set_ic_age(int count); - inline int ic_age(); + inline int ic_age() const; // [prologue_offset]: Offset of the function prologue, used for aging // FUNCTIONs and OPTIMIZED_FUNCTIONs. - inline int prologue_offset(); + inline int prologue_offset() const; inline void set_prologue_offset(int offset); // Unchecked accessors to be used during GC. @@ -5388,19 +5561,23 @@ class Code: public HeapObject { ic_state() == MONOMORPHIC; } + inline bool IsCodeStubOrIC(); + inline void set_raw_kind_specific_flags1(int value); inline void set_raw_kind_specific_flags2(int value); - // [major_key]: For kind STUB or BINARY_OP_IC, the major key. - inline int major_key(); - inline void set_major_key(int value); - inline bool has_major_key(); - - // For kind STUB or ICs, tells whether or not a code object was generated by - // the optimizing compiler (but it may not be an optimized function). - bool is_crankshafted(); + // [is_crankshafted]: For kind STUB or ICs, tells whether or not a code + // object was generated by either the hydrogen or the TurboFan optimizing + // compiler (but it may not be an optimized function). + inline bool is_crankshafted(); + inline bool is_hydrogen_stub(); // Crankshafted, but not a function. inline void set_is_crankshafted(bool value); + // [is_turbofanned]: For kind STUB or OPTIMIZED_FUNCTION, tells whether the + // code object was generated by the TurboFan optimizing compiler. + inline bool is_turbofanned(); + inline void set_is_turbofanned(bool value); + // [optimizable]: For FUNCTION kind, tells if it is optimizable. 
inline bool optimizable(); inline void set_optimizable(bool value); @@ -5432,12 +5609,16 @@ class Code: public HeapObject { inline int profiler_ticks(); inline void set_profiler_ticks(int ticks); + // [builtin_index]: For BUILTIN kind, tells which builtin index it has. + inline int builtin_index(); + inline void set_builtin_index(int id); + // [stack_slots]: For kind OPTIMIZED_FUNCTION, the number of stack slots // reserved in the code prologue. inline unsigned stack_slots(); inline void set_stack_slots(unsigned slots); - // [safepoint_table_start]: For kind OPTIMIZED_CODE, the offset in + // [safepoint_table_start]: For kind OPTIMIZED_FUNCTION, the offset in // the instruction stream where the safepoint table starts. inline unsigned safepoint_table_offset(); inline void set_safepoint_table_offset(unsigned offset); @@ -5448,7 +5629,6 @@ class Code: public HeapObject { inline void set_back_edge_table_offset(unsigned offset); inline bool back_edges_patched_for_osr(); - inline void set_back_edges_patched_for_osr(bool value); // [to_boolean_foo]: For kind TO_BOOLEAN_IC tells what state the stub is in. inline byte to_boolean_state(); @@ -5488,6 +5668,9 @@ class Code: public HeapObject { // enough handlers can be found. bool FindHandlers(CodeHandleList* code_list, int length = -1); + // Find the handler for |map|. + MaybeHandle<Code> FindHandlerForMap(Map* map); + // Find the first name in an IC stub. Name* FindFirstName(); @@ -5509,30 +5692,26 @@ class Code: public HeapObject { // Flags operations. static inline Flags ComputeFlags( - Kind kind, - InlineCacheState ic_state = UNINITIALIZED, - ExtraICState extra_ic_state = kNoExtraICState, - StubType type = NORMAL, - InlineCacheHolderFlag holder = OWN_MAP); + Kind kind, InlineCacheState ic_state = UNINITIALIZED, + ExtraICState extra_ic_state = kNoExtraICState, StubType type = NORMAL, + CacheHolderFlag holder = kCacheOnReceiver); static inline Flags ComputeMonomorphicFlags( - Kind kind, - ExtraICState extra_ic_state = kNoExtraICState, - InlineCacheHolderFlag holder = OWN_MAP, - StubType type = NORMAL); + Kind kind, ExtraICState extra_ic_state = kNoExtraICState, + CacheHolderFlag holder = kCacheOnReceiver, StubType type = NORMAL); static inline Flags ComputeHandlerFlags( - Kind handler_kind, - StubType type = NORMAL, - InlineCacheHolderFlag holder = OWN_MAP); + Kind handler_kind, StubType type = NORMAL, + CacheHolderFlag holder = kCacheOnReceiver); static inline InlineCacheState ExtractICStateFromFlags(Flags flags); static inline StubType ExtractTypeFromFlags(Flags flags); + static inline CacheHolderFlag ExtractCacheHolderFromFlags(Flags flags); static inline Kind ExtractKindFromFlags(Flags flags); - static inline InlineCacheHolderFlag ExtractCacheHolderFromFlags(Flags flags); static inline ExtraICState ExtractExtraICStateFromFlags(Flags flags); static inline Flags RemoveTypeFromFlags(Flags flags); + static inline Flags RemoveTypeAndHolderFromFlags(Flags flags); // Convert a target address into a code object. static inline Code* GetCodeFromTargetAddress(Address address); @@ -5567,7 +5746,7 @@ class Code: public HeapObject { // Returns the object size for a given body (used for allocation). static int SizeFor(int body_size) { - ASSERT_SIZE_TAG_ALIGNED(body_size); + DCHECK_SIZE_TAG_ALIGNED(body_size); return RoundUp(kHeaderSize + body_size, kCodeAlignment); } @@ -5575,7 +5754,7 @@ class Code: public HeapObject { // the layout of the code object into account. 
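Flags stays a bit-packed integer, but this diff widens ICStateField to 4 bits and CacheHolderField to 2 bits, which pushes KindField up to bits 7-10. A self-contained sketch of the resulting packing (field positions taken from the BitField declarations below; the helper names are invented for illustration):

#include <cstdint>

// ICStateField: bits 0-3, TypeField: bit 4, CacheHolderField: bits 5-6,
// KindField: bits 7-10, ExtraICStateField: bit 11 upward.
uint32_t ComputeFlagsSketch(uint32_t kind, uint32_t ic_state, uint32_t type,
                            uint32_t holder) {
  return (ic_state & 0xF) | ((type & 0x1) << 4) | ((holder & 0x3) << 5) |
         ((kind & 0xF) << 7);
}

uint32_t ExtractKindSketch(uint32_t flags) { return (flags >> 7) & 0xF; }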
int ExecutableSize() { // Check that the assumptions about the layout of the code object holds. - ASSERT_EQ(static_cast<int>(instruction_start() - address()), + DCHECK_EQ(static_cast<int>(instruction_start() - address()), Code::kHeaderSize); return instruction_size() + Code::kHeaderSize; } @@ -5584,8 +5763,7 @@ class Code: public HeapObject { int SourcePosition(Address pc); int SourceStatementPosition(Address pc); - // Casting. - static inline Code* cast(Object* obj); + DECLARE_CAST(Code) // Dispatched behavior. int CodeSize() { return SizeFor(body_size()); } @@ -5647,7 +5825,8 @@ class Code: public HeapObject { } inline bool IsWeakObject(Object* object) { - return (is_optimized_code() && IsWeakObjectInOptimizedCode(object)) || + return (is_optimized_code() && !is_turbofanned() && + IsWeakObjectInOptimizedCode(object)) || (is_weak_stub() && IsWeakObjectInIC(object)); } @@ -5664,6 +5843,7 @@ class Code: public HeapObject { static const int kHandlerTableOffset = kRelocationInfoOffset + kPointerSize; static const int kDeoptimizationDataOffset = kHandlerTableOffset + kPointerSize; + // For FUNCTION kind, we store the type feedback info here. static const int kTypeFeedbackInfoOffset = kDeoptimizationDataOffset + kPointerSize; static const int kNextCodeLinkOffset = kTypeFeedbackInfoOffset + kPointerSize; @@ -5694,52 +5874,40 @@ class Code: public HeapObject { class FullCodeFlagsHasDebugBreakSlotsField: public BitField<bool, 1, 1> {}; class FullCodeFlagsIsCompiledOptimizable: public BitField<bool, 2, 1> {}; - static const int kAllowOSRAtLoopNestingLevelOffset = kFullCodeFlags + 1; - static const int kProfilerTicksOffset = kAllowOSRAtLoopNestingLevelOffset + 1; + static const int kProfilerTicksOffset = kFullCodeFlags + 1; // Flags layout. BitField<type, shift, size>. - class ICStateField: public BitField<InlineCacheState, 0, 3> {}; - class TypeField: public BitField<StubType, 3, 1> {}; - class CacheHolderField: public BitField<InlineCacheHolderFlag, 5, 1> {}; - class KindField: public BitField<Kind, 6, 4> {}; - // TODO(bmeurer): Bit 10 is available for free use. 
:-) + class ICStateField : public BitField<InlineCacheState, 0, 4> {}; + class TypeField : public BitField<StubType, 4, 1> {}; + class CacheHolderField : public BitField<CacheHolderFlag, 5, 2> {}; + class KindField : public BitField<Kind, 7, 4> {}; class ExtraICStateField: public BitField<ExtraICState, 11, PlatformSmiTagging::kSmiValueSize - 11 + 1> {}; // NOLINT // KindSpecificFlags1 layout (STUB and OPTIMIZED_FUNCTION) static const int kStackSlotsFirstBit = 0; static const int kStackSlotsBitCount = 24; - static const int kHasFunctionCacheFirstBit = + static const int kHasFunctionCacheBit = kStackSlotsFirstBit + kStackSlotsBitCount; - static const int kHasFunctionCacheBitCount = 1; - static const int kMarkedForDeoptimizationFirstBit = - kStackSlotsFirstBit + kStackSlotsBitCount + 1; - static const int kMarkedForDeoptimizationBitCount = 1; - static const int kWeakStubFirstBit = - kMarkedForDeoptimizationFirstBit + kMarkedForDeoptimizationBitCount; - static const int kWeakStubBitCount = 1; - static const int kInvalidatedWeakStubFirstBit = - kWeakStubFirstBit + kWeakStubBitCount; - static const int kInvalidatedWeakStubBitCount = 1; + static const int kMarkedForDeoptimizationBit = kHasFunctionCacheBit + 1; + static const int kWeakStubBit = kMarkedForDeoptimizationBit + 1; + static const int kInvalidatedWeakStubBit = kWeakStubBit + 1; + static const int kIsTurbofannedBit = kInvalidatedWeakStubBit + 1; STATIC_ASSERT(kStackSlotsFirstBit + kStackSlotsBitCount <= 32); - STATIC_ASSERT(kHasFunctionCacheFirstBit + kHasFunctionCacheBitCount <= 32); - STATIC_ASSERT(kInvalidatedWeakStubFirstBit + - kInvalidatedWeakStubBitCount <= 32); + STATIC_ASSERT(kIsTurbofannedBit + 1 <= 32); class StackSlotsField: public BitField<int, kStackSlotsFirstBit, kStackSlotsBitCount> {}; // NOLINT - class HasFunctionCacheField: public BitField<bool, - kHasFunctionCacheFirstBit, kHasFunctionCacheBitCount> {}; // NOLINT - class MarkedForDeoptimizationField: public BitField<bool, - kMarkedForDeoptimizationFirstBit, - kMarkedForDeoptimizationBitCount> {}; // NOLINT - class WeakStubField: public BitField<bool, - kWeakStubFirstBit, - kWeakStubBitCount> {}; // NOLINT - class InvalidatedWeakStubField: public BitField<bool, - kInvalidatedWeakStubFirstBit, - kInvalidatedWeakStubBitCount> {}; // NOLINT + class HasFunctionCacheField : public BitField<bool, kHasFunctionCacheBit, 1> { + }; // NOLINT + class MarkedForDeoptimizationField + : public BitField<bool, kMarkedForDeoptimizationBit, 1> {}; // NOLINT + class WeakStubField : public BitField<bool, kWeakStubBit, 1> {}; // NOLINT + class InvalidatedWeakStubField + : public BitField<bool, kInvalidatedWeakStubBit, 1> {}; // NOLINT + class IsTurbofannedField : public BitField<bool, kIsTurbofannedBit, 1> { + }; // NOLINT // KindSpecificFlags2 layout (ALL) static const int kIsCrankshaftedBit = 0; @@ -5747,28 +5915,23 @@ class Code: public HeapObject { kIsCrankshaftedBit, 1> {}; // NOLINT // KindSpecificFlags2 layout (STUB and OPTIMIZED_FUNCTION) - static const int kStubMajorKeyFirstBit = kIsCrankshaftedBit + 1; - static const int kSafepointTableOffsetFirstBit = - kStubMajorKeyFirstBit + kStubMajorKeyBits; + static const int kSafepointTableOffsetFirstBit = kIsCrankshaftedBit + 1; static const int kSafepointTableOffsetBitCount = 24; - STATIC_ASSERT(kStubMajorKeyFirstBit + kStubMajorKeyBits <= 32); STATIC_ASSERT(kSafepointTableOffsetFirstBit + kSafepointTableOffsetBitCount <= 32); - STATIC_ASSERT(1 + kStubMajorKeyBits + - kSafepointTableOffsetBitCount <= 32); + STATIC_ASSERT(1 + 
kSafepointTableOffsetBitCount <= 32); class SafepointTableOffsetField: public BitField<int, kSafepointTableOffsetFirstBit, kSafepointTableOffsetBitCount> {}; // NOLINT - class StubMajorKeyField: public BitField<int, - kStubMajorKeyFirstBit, kStubMajorKeyBits> {}; // NOLINT // KindSpecificFlags2 layout (FUNCTION) class BackEdgeTableOffsetField: public BitField<int, - kIsCrankshaftedBit + 1, 29> {}; // NOLINT - class BackEdgesPatchedForOSRField: public BitField<bool, - kIsCrankshaftedBit + 1 + 29, 1> {}; // NOLINT + kIsCrankshaftedBit + 1, 27> {}; // NOLINT + class AllowOSRAtLoopNestingLevelField: public BitField<int, + kIsCrankshaftedBit + 1 + 27, 4> {}; // NOLINT + STATIC_ASSERT(AllowOSRAtLoopNestingLevelField::kMax >= kMaxLoopNestingMarker); static const int kArgumentsBits = 16; static const int kMaxArguments = (1 << kArgumentsBits) - 1; @@ -5849,6 +6012,9 @@ class DependentCode: public FixedArray { // Group of code that omit run-time type checks for the field(s) introduced // by this map. kFieldTypeGroup, + // Group of code that omit run-time type checks for initial maps of + // constructors. + kInitialMapChangedGroup, // Group of code that depends on tenuring information in AllocationSites // not being changed. kAllocationSiteTenuringChangedGroup, @@ -5899,7 +6065,7 @@ class DependentCode: public FixedArray { inline Object* object_at(int i); inline void clear_at(int i); inline void copy(int from, int to); - static inline DependentCode* cast(Object* object); + DECLARE_CAST(DependentCode) static DependentCode* ForObject(Handle<HeapObject> object, DependencyGroup group); @@ -5958,15 +6124,19 @@ class Map: public HeapObject { class NumberOfOwnDescriptorsBits: public BitField<int, kDescriptorIndexBitCount, kDescriptorIndexBitCount> {}; // NOLINT STATIC_ASSERT(kDescriptorIndexBitCount + kDescriptorIndexBitCount == 20); - class IsShared: public BitField<bool, 20, 1> {}; - class FunctionWithPrototype: public BitField<bool, 21, 1> {}; - class DictionaryMap: public BitField<bool, 22, 1> {}; - class OwnsDescriptors: public BitField<bool, 23, 1> {}; - class HasInstanceCallHandler: public BitField<bool, 24, 1> {}; - class Deprecated: public BitField<bool, 25, 1> {}; - class IsFrozen: public BitField<bool, 26, 1> {}; - class IsUnstable: public BitField<bool, 27, 1> {}; - class IsMigrationTarget: public BitField<bool, 28, 1> {}; + class DictionaryMap : public BitField<bool, 20, 1> {}; + class OwnsDescriptors : public BitField<bool, 21, 1> {}; + class HasInstanceCallHandler : public BitField<bool, 22, 1> {}; + class Deprecated : public BitField<bool, 23, 1> {}; + class IsFrozen : public BitField<bool, 24, 1> {}; + class IsUnstable : public BitField<bool, 25, 1> {}; + class IsMigrationTarget : public BitField<bool, 26, 1> {}; + class DoneInobjectSlackTracking : public BitField<bool, 27, 1> {}; + // Bit 28 is free. + + // Keep this bit field at the very end for better code in + // Builtins::kJSConstructStubGeneric stub. + class ConstructionCount: public BitField<int, 29, 3> {}; // Tells whether the object in the prototype property will be used // for instances created from this function. 
If the prototype @@ -6035,18 +6205,18 @@ class Map: public HeapObject { inline void set_is_extensible(bool value); inline bool is_extensible(); + inline void set_is_prototype_map(bool value); + inline bool is_prototype_map(); inline void set_elements_kind(ElementsKind elements_kind) { - ASSERT(elements_kind < kElementsKindCount); - ASSERT(kElementsKindCount <= (1 << kElementsKindBitCount)); - set_bit_field2((bit_field2() & ~kElementsKindMask) | - (elements_kind << kElementsKindShift)); - ASSERT(this->elements_kind() == elements_kind); + DCHECK(elements_kind < kElementsKindCount); + DCHECK(kElementsKindCount <= (1 << Map::ElementsKindBits::kSize)); + set_bit_field2(Map::ElementsKindBits::update(bit_field2(), elements_kind)); + DCHECK(this->elements_kind() == elements_kind); } inline ElementsKind elements_kind() { - return static_cast<ElementsKind>( - (bit_field2() & kElementsKindMask) >> kElementsKindShift); + return Map::ElementsKindBits::decode(bit_field2()); } // Tells whether the instance has fast elements that are only Smis. @@ -6099,17 +6269,24 @@ class Map: public HeapObject { // map with DICTIONARY_ELEMENTS was found in the prototype chain. bool DictionaryElementsInPrototypeChainOnly(); - inline bool HasTransitionArray(); + inline bool HasTransitionArray() const; inline bool HasElementsTransition(); inline Map* elements_transition_map(); - static Handle<TransitionArray> SetElementsTransitionMap( - Handle<Map> map, Handle<Map> transitioned_map); + inline Map* GetTransition(int transition_index); inline int SearchTransition(Name* name); inline FixedArrayBase* GetInitialElements(); DECL_ACCESSORS(transitions, TransitionArray) + static inline Handle<String> ExpectedTransitionKey(Handle<Map> map); + static inline Handle<Map> ExpectedTransitionTarget(Handle<Map> map); + + // Try to follow an existing transition to a field with attributes NONE. The + // return value indicates whether the transition was successful. + static inline Handle<Map> FindTransitionToField(Handle<Map> map, + Handle<Name> key); + Map* FindRootMap(); Map* FindFieldOwner(int descriptor); @@ -6117,15 +6294,16 @@ class Map: public HeapObject { int NumberOfFields(); - bool InstancesNeedRewriting(Map* target, - int target_number_of_fields, - int target_inobject, - int target_unused); + // TODO(ishell): candidate with JSObject::MigrateToMap(). + bool InstancesNeedRewriting(Map* target, int target_number_of_fields, + int target_inobject, int target_unused, + int* old_number_of_fields); + // TODO(ishell): moveit! static Handle<Map> GeneralizeAllFieldRepresentations(Handle<Map> map); - static Handle<HeapType> GeneralizeFieldType(Handle<HeapType> type1, - Handle<HeapType> type2, - Isolate* isolate) - V8_WARN_UNUSED_RESULT; + MUST_USE_RESULT static Handle<HeapType> GeneralizeFieldType( + Handle<HeapType> type1, + Handle<HeapType> type2, + Isolate* isolate); static void GeneralizeFieldType(Handle<Map> map, int modify_index, Handle<HeapType> new_field_type); @@ -6147,24 +6325,16 @@ class Map: public HeapObject { StoreMode store_mode, const char* reason); + static Handle<Map> PrepareForDataProperty(Handle<Map> old_map, + int descriptor_number, + Handle<Object> value); + static Handle<Map> Normalize(Handle<Map> map, PropertyNormalizationMode mode); // Returns the constructor name (the name (possibly, inferred name) of the // function that was used to instantiate the object). String* constructor_name(); - // Tells whether the map is attached to SharedFunctionInfo - // (for inobject slack tracking). 
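set_elements_kind() and elements_kind() now route through the generic BitField helpers instead of the hand-rolled kElementsKindShift/kElementsKindMask constants. A self-contained analogue of the update/decode pattern (V8's real BitField template has more members; this mirrors only the two calls used above):

#include <cstdint>

template <class T, int shift, int size>
struct BitFieldSketch {
  static const uint32_t kMask = ((1u << size) - 1) << shift;
  static uint32_t update(uint32_t previous, T value) {
    return (previous & ~kMask) | (static_cast<uint32_t>(value) << shift);
  }
  static T decode(uint32_t value) {
    return static_cast<T>((value & kMask) >> shift);
  }
};

// ElementsKindBits is BitField<ElementsKind, 3, 5>: five bits starting at
// bit 3 of bit_field2, as declared further down in this diff.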
- inline void set_attached_to_shared_function_info(bool value); - - inline bool attached_to_shared_function_info(); - - // Tells whether the map is shared between objects that may have different - // behavior. If true, the map should never be modified, instead a clone - // should be created and modified. - inline void set_is_shared(bool value); - inline bool is_shared(); - // Tells whether the map is used for JSObjects in dictionary mode (ie // normalized objects, ie objects for which HasFastProperties returns false). // A map can never be used for both dictionary mode and fast mode JSObjects. @@ -6231,7 +6401,7 @@ class Map: public HeapObject { inline void SetNumberOfProtoTransitions(int value) { FixedArray* cache = GetPrototypeTransitions(); - ASSERT(cache->length() != 0); + DCHECK(cache->length() != 0); cache->set(kProtoTransitionNumberOfEntriesOffset, Smi::FromInt(value)); } @@ -6255,7 +6425,7 @@ class Map: public HeapObject { int LastAdded() { int number_of_own_descriptors = NumberOfOwnDescriptors(); - ASSERT(number_of_own_descriptors > 0); + DCHECK(number_of_own_descriptors > 0); return number_of_own_descriptors - 1; } @@ -6264,7 +6434,7 @@ class Map: public HeapObject { } void SetNumberOfOwnDescriptors(int number) { - ASSERT(number <= instance_descriptors()->number_of_descriptors()); + DCHECK(number <= instance_descriptors()->number_of_descriptors()); set_bit_field3(NumberOfOwnDescriptorsBits::update(bit_field3(), number)); } @@ -6276,15 +6446,15 @@ class Map: public HeapObject { void SetEnumLength(int length) { if (length != kInvalidEnumCacheSentinel) { - ASSERT(length >= 0); - ASSERT(length == 0 || instance_descriptors()->HasEnumCache()); - ASSERT(length <= NumberOfOwnDescriptors()); + DCHECK(length >= 0); + DCHECK(length == 0 || instance_descriptors()->HasEnumCache()); + DCHECK(length <= NumberOfOwnDescriptors()); } set_bit_field3(EnumLengthBits::update(bit_field3(), length)); } inline bool owns_descriptors(); - inline void set_owns_descriptors(bool is_shared); + inline void set_owns_descriptors(bool owns_descriptors); inline bool has_instance_call_handler(); inline void set_has_instance_call_handler(); inline void freeze(); @@ -6293,6 +6463,10 @@ class Map: public HeapObject { inline bool is_stable(); inline void set_migration_target(bool value); inline bool is_migration_target(); + inline void set_done_inobject_slack_tracking(bool value); + inline bool done_inobject_slack_tracking(); + inline void set_construction_count(int value); + inline int construction_count(); inline void deprecate(); inline bool is_deprecated(); inline bool CanBeDeprecated(); @@ -6301,21 +6475,20 @@ class Map: public HeapObject { // is found by re-transitioning from the root of the transition tree using the // descriptor array of the map. Returns NULL if no updated map is found. // This method also applies any pending migrations along the prototype chain. - static MaybeHandle<Map> CurrentMapForDeprecated(Handle<Map> map) - V8_WARN_UNUSED_RESULT; + static MaybeHandle<Map> TryUpdate(Handle<Map> map) V8_WARN_UNUSED_RESULT; // Same as above, but does not touch the prototype chain. - static MaybeHandle<Map> CurrentMapForDeprecatedInternal(Handle<Map> map) + static MaybeHandle<Map> TryUpdateInternal(Handle<Map> map) V8_WARN_UNUSED_RESULT; + // Returns a non-deprecated version of the input. This method may deprecate + // existing maps along the way if encodings conflict. Not for use while + // gathering type feedback. Use TryUpdate in those cases instead. 
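The renamed pair splits responsibilities: TryUpdate() is side-effect-free and may fail, while Update() always yields a usable map but may deprecate others along the way. A self-contained analogue of that failable/infallible split (std::optional stands in for MaybeHandle; the bodies are invented for illustration):

#include <optional>

struct MapSketch { bool deprecated = false; };

// TryUpdate-style: never mutates other state, may come back empty.
std::optional<MapSketch*> TryUpdateSketch(MapSketch* m) {
  if (m->deprecated) return std::nullopt;
  return m;
}

// Update-style: always succeeds, even if that requires deprecation work.
MapSketch* UpdateSketch(MapSketch* m) {
  m->deprecated = false;  // stand-in for migrating to a fresh map
  return m;
}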
+ static Handle<Map> Update(Handle<Map> map); + static Handle<Map> CopyDropDescriptors(Handle<Map> map); static Handle<Map> CopyInsertDescriptor(Handle<Map> map, Descriptor* descriptor, TransitionFlag flag); - static Handle<Map> CopyReplaceDescriptor(Handle<Map> map, - Handle<DescriptorArray> descriptors, - Descriptor* descriptor, - int index, - TransitionFlag flag); MUST_USE_RESULT static MaybeHandle<Map> CopyWithField( Handle<Map> map, @@ -6346,6 +6519,15 @@ class Map: public HeapObject { static Handle<Map> CopyForObserved(Handle<Map> map); static Handle<Map> CopyForFreeze(Handle<Map> map); + // Maximal number of fast properties. Used to restrict the number of map + // transitions to avoid an explosion in the number of maps for objects used as + // dictionaries. + inline bool TooManyFastProperties(StoreFromKeyed store_mode); + static Handle<Map> TransitionToDataProperty(Handle<Map> map, + Handle<Name> name, + Handle<Object> value, + PropertyAttributes attributes, + StoreFromKeyed store_mode); inline void AppendDescriptor(Descriptor* desc); @@ -6370,8 +6552,7 @@ class Map: public HeapObject { inobject_properties(); } - // Casting. - static inline Map* cast(Object* obj); + DECLARE_CAST(Map) // Code cache operations. @@ -6418,7 +6599,6 @@ class Map: public HeapObject { // elements_kind that's found in |candidates|, or null handle if no match is // found at all. Handle<Map> FindTransitionedMap(MapHandleList* candidates); - Map* FindTransitionedMap(MapList* candidates); bool CanTransition() { // Only JSObject and subtypes have map transitions and back pointers. @@ -6429,6 +6609,10 @@ class Map: public HeapObject { bool IsJSObjectMap() { return instance_type() >= FIRST_JS_OBJECT_TYPE; } + bool IsJSProxyMap() { + InstanceType type = instance_type(); + return FIRST_JS_PROXY_TYPE <= type && type <= LAST_JS_PROXY_TYPE; + } bool IsJSGlobalProxyMap() { return instance_type() == JS_GLOBAL_PROXY_TYPE; } @@ -6459,7 +6643,7 @@ class Map: public HeapObject { DECLARE_VERIFIER(Map) #ifdef VERIFY_HEAP - void SharedMapVerify(); + void DictionaryMapVerify(); void VerifyOmittedMapChecks(); #endif @@ -6486,7 +6670,8 @@ class Map: public HeapObject { // Layout description. static const int kInstanceSizesOffset = HeapObject::kHeaderSize; static const int kInstanceAttributesOffset = kInstanceSizesOffset + kIntSize; - static const int kPrototypeOffset = kInstanceAttributesOffset + kIntSize; + static const int kBitField3Offset = kInstanceAttributesOffset + kIntSize; + static const int kPrototypeOffset = kBitField3Offset + kPointerSize; static const int kConstructorOffset = kPrototypeOffset + kPointerSize; // Storage for the transition array is overloaded to directly contain a back // pointer if unused. When the map has transitions, the back pointer is @@ -6498,13 +6683,12 @@ class Map: public HeapObject { kTransitionsOrBackPointerOffset + kPointerSize; static const int kCodeCacheOffset = kDescriptorsOffset + kPointerSize; static const int kDependentCodeOffset = kCodeCacheOffset + kPointerSize; - static const int kBitField3Offset = kDependentCodeOffset + kPointerSize; - static const int kSize = kBitField3Offset + kPointerSize; + static const int kSize = kDependentCodeOffset + kPointerSize; // Layout of pointer fields. Heap iteration code relies on them // being continuously allocated. 
static const int kPointerFieldsBeginOffset = Map::kPrototypeOffset; - static const int kPointerFieldsEndOffset = kBitField3Offset + kPointerSize; + static const int kPointerFieldsEndOffset = kSize; // Byte offsets within kInstanceSizesOffset. static const int kInstanceSizeOffset = kInstanceSizesOffset + 0; @@ -6518,46 +6702,52 @@ class Map: public HeapObject { static const int kVisitorIdOffset = kInstanceSizesOffset + kVisitorIdByte; // Byte offsets within kInstanceAttributesOffset attributes. +#if V8_TARGET_LITTLE_ENDIAN + // Order instance type and bit field together such that they can be loaded + // together as a 16-bit word with instance type in the lower 8 bits regardless + // of endianess. Also provide endian-independent offset to that 16-bit word. static const int kInstanceTypeOffset = kInstanceAttributesOffset + 0; - static const int kUnusedPropertyFieldsOffset = kInstanceAttributesOffset + 1; - static const int kBitFieldOffset = kInstanceAttributesOffset + 2; - static const int kBitField2Offset = kInstanceAttributesOffset + 3; + static const int kBitFieldOffset = kInstanceAttributesOffset + 1; +#else + static const int kBitFieldOffset = kInstanceAttributesOffset + 0; + static const int kInstanceTypeOffset = kInstanceAttributesOffset + 1; +#endif + static const int kInstanceTypeAndBitFieldOffset = + kInstanceAttributesOffset + 0; + static const int kBitField2Offset = kInstanceAttributesOffset + 2; + static const int kUnusedPropertyFieldsOffset = kInstanceAttributesOffset + 3; - STATIC_CHECK(kInstanceTypeOffset == Internals::kMapInstanceTypeOffset); + STATIC_ASSERT(kInstanceTypeAndBitFieldOffset == + Internals::kMapInstanceTypeAndBitFieldOffset); // Bit positions for bit field. - static const int kUnused = 0; // To be used for marking recently used maps. - static const int kHasNonInstancePrototype = 1; - static const int kIsHiddenPrototype = 2; - static const int kHasNamedInterceptor = 3; - static const int kHasIndexedInterceptor = 4; - static const int kIsUndetectable = 5; - static const int kIsObserved = 6; - static const int kIsAccessCheckNeeded = 7; + static const int kHasNonInstancePrototype = 0; + static const int kIsHiddenPrototype = 1; + static const int kHasNamedInterceptor = 2; + static const int kHasIndexedInterceptor = 3; + static const int kIsUndetectable = 4; + static const int kIsObserved = 5; + static const int kIsAccessCheckNeeded = 6; + class FunctionWithPrototype: public BitField<bool, 7, 1> {}; // Bit positions for bit field 2 static const int kIsExtensible = 0; static const int kStringWrapperSafeForDefaultValueOf = 1; - static const int kAttachedToSharedFunctionInfo = 2; - // No bits can be used after kElementsKindFirstBit, they are all reserved for - // storing ElementKind. 
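The byte reordering exists so instance type and bit field can be fetched with one 16-bit load whose low byte is the instance type on either byte order. A self-contained illustration (assuming the two bytes were stored per the #if above: instance type first on little-endian, bit field first on big-endian):

#include <cstdint>
#include <cstring>

uint16_t LoadInstanceTypeAndBitField(const uint8_t bytes[2]) {
  uint16_t word;
  std::memcpy(&word, bytes, sizeof(word));  // plain host-endian load
  return word;  // instance type lands in the low 8 bits on LE and BE alike
}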
- static const int kElementsKindShift = 3; - static const int kElementsKindBitCount = 5; + class IsPrototypeMapBits : public BitField<bool, 2, 1> {}; + class ElementsKindBits: public BitField<ElementsKind, 3, 5> {}; // Derived values from bit field 2 - static const int kElementsKindMask = (-1 << kElementsKindShift) & - ((1 << (kElementsKindShift + kElementsKindBitCount)) - 1); static const int8_t kMaximumBitField2FastElementValue = static_cast<int8_t>( - (FAST_ELEMENTS + 1) << Map::kElementsKindShift) - 1; + (FAST_ELEMENTS + 1) << Map::ElementsKindBits::kShift) - 1; static const int8_t kMaximumBitField2FastSmiElementValue = static_cast<int8_t>((FAST_SMI_ELEMENTS + 1) << - Map::kElementsKindShift) - 1; + Map::ElementsKindBits::kShift) - 1; static const int8_t kMaximumBitField2FastHoleyElementValue = static_cast<int8_t>((FAST_HOLEY_ELEMENTS + 1) << - Map::kElementsKindShift) - 1; + Map::ElementsKindBits::kShift) - 1; static const int8_t kMaximumBitField2FastHoleySmiElementValue = static_cast<int8_t>((FAST_HOLEY_SMI_ELEMENTS + 1) << - Map::kElementsKindShift) - 1; + Map::ElementsKindBits::kShift) - 1; typedef FixedBodyDescriptor<kPointerFieldsBeginOffset, kPointerFieldsEndOffset, @@ -6570,6 +6760,10 @@ class Map: public HeapObject { bool EquivalentToForNormalization(Map* other, PropertyNormalizationMode mode); private: + static void ConnectElementsTransition(Handle<Map> parent, Handle<Map> child); + static void ConnectTransition(Handle<Map> parent, Handle<Map> child, + Handle<Name> name, SimpleTransitionFlag flag); + bool EquivalentToForTransition(Map* other); static Handle<Map> RawCopy(Handle<Map> map, int instance_size); static Handle<Map> ShareDescriptor(Handle<Map> map, @@ -6588,10 +6782,14 @@ class Map: public HeapObject { TransitionFlag flag, MaybeHandle<Name> maybe_name, SimpleTransitionFlag simple_flag = FULL_TRANSITION); + static Handle<Map> CopyReplaceDescriptor(Handle<Map> map, + Handle<DescriptorArray> descriptors, + Descriptor* descriptor, + int index, + TransitionFlag flag); static Handle<Map> CopyNormalized(Handle<Map> map, - PropertyNormalizationMode mode, - NormalizedMapSharingMode sharing); + PropertyNormalizationMode mode); // Fires when the layout of an object with a leaf map changes. // This includes adding transitions to the leaf map or changing @@ -6615,7 +6813,8 @@ class Map: public HeapObject { Map* FindLastMatchMap(int verbatim, int length, DescriptorArray* descriptors); - void UpdateDescriptor(int descriptor_number, Descriptor* desc); + void UpdateFieldType(int descriptor_number, Handle<Name> name, + Handle<HeapType> new_type); void PrintGeneralization(FILE* file, const char* reason, @@ -6638,6 +6837,9 @@ class Map: public HeapObject { Handle<Object> prototype, Handle<Map> target_map); + static const int kFastPropertiesSoftLimit = 12; + static const int kMaxFastProperties = 128; + DISALLOW_IMPLICIT_CONSTRUCTORS(Map); }; @@ -6648,7 +6850,7 @@ class Map: public HeapObject { class Struct: public HeapObject { public: inline void InitializeBody(int object_size); - static inline Struct* cast(Object* that); + DECLARE_CAST(Struct) }; @@ -6658,7 +6860,7 @@ class Box : public Struct { // [value]: the boxed contents. DECL_ACCESSORS(value, Object) - static inline Box* cast(Object* obj); + DECLARE_CAST(Box) // Dispatched behavior. DECLARE_PRINTER(Box) @@ -6733,6 +6935,12 @@ class Script: public Struct { // [flags]: Holds an exciting bitfield. 
DECL_ACCESSORS(flags, Smi) + // [source_url]: sourceURL from magic comment + DECL_ACCESSORS(source_url, Object) + + // [source_mapping_url]: sourceMappingURL from magic comment + DECL_ACCESSORS(source_mapping_url, Object) + // [compilation_type]: how the script was compiled. Encoded in the // 'flags' field. inline CompilationType compilation_type(); @@ -6749,7 +6957,7 @@ // the 'flags' field. DECL_BOOLEAN_ACCESSORS(is_shared_cross_origin) - static inline Script* cast(Object* obj); + DECLARE_CAST(Script) // If script source is an external string, check that the underlying // resource is accessible. Otherwise, always return true. @@ -6770,6 +6978,7 @@ // Get the JS object wrapping the given script; create it if none exists. static Handle<JSObject> GetWrapper(Handle<Script> script); + void ClearWrapperCache(); // Dispatched behavior. DECLARE_PRINTER(Script) @@ -6789,7 +6998,9 @@ kEvalFromSharedOffset + kPointerSize; static const int kFlagsOffset = kEvalFrominstructionsOffsetOffset + kPointerSize; - static const int kSize = kFlagsOffset + kPointerSize; + static const int kSourceUrlOffset = kFlagsOffset + kPointerSize; + static const int kSourceMappingUrlOffset = kSourceUrlOffset + kPointerSize; + static const int kSize = kSourceMappingUrlOffset + kPointerSize; private: int GetLineNumberWithArray(int code_pos); @@ -6813,8 +7024,11 @@ // Installation of ids for the selected builtin functions is handled // by the bootstrapper. #define FUNCTIONS_WITH_ID_LIST(V) \ + V(Array.prototype, indexOf, ArrayIndexOf) \ + V(Array.prototype, lastIndexOf, ArrayLastIndexOf) \ V(Array.prototype, push, ArrayPush) \ V(Array.prototype, pop, ArrayPop) \ + V(Array.prototype, shift, ArrayShift) \ V(Function.prototype, apply, FunctionApply) \ V(String.prototype, charCodeAt, StringCharCodeAt) \ V(String.prototype, charAt, StringCharAt) \ @@ -6829,7 +7043,9 @@ V(Math, pow, MathPow) \ V(Math, max, MathMax) \ V(Math, min, MathMin) \ - V(Math, imul, MathImul) + V(Math, imul, MathImul) \ + V(Math, clz32, MathClz32) \ + V(Math, fround, MathFround) enum BuiltinFunctionId { kArrayCode, @@ -6839,9 +7055,7 @@ #undef DECLARE_FUNCTION_ID // Fake id for a special case of Math.pow. Note, it continues the // list of math functions. - kMathPowHalf, - // Installed only on --harmony-maths. - kMathClz32 + kMathPowHalf }; @@ -6910,11 +7124,11 @@ class SharedFunctionInfo: public HeapObject { // [length]: The function length - usually the number of declared parameters. // Use up to 2^30 parameters. - inline int length(); + inline int length() const; inline void set_length(int value); // [formal parameter count]: The declared number of parameters. - inline int formal_parameter_count(); + inline int formal_parameter_count() const; inline void set_formal_parameter_count(int value); // Set the formal parameter count so the function code will be @@ -6922,111 +7136,15 @@ inline void DontAdaptArguments(); // [expected_nof_properties]: Expected number of properties for the function. - inline int expected_nof_properties(); + inline int expected_nof_properties() const; inline void set_expected_nof_properties(int value); - // Inobject slack tracking is the way to reclaim unused inobject space.
- // - // The instance size is initially determined by adding some slack to - // expected_nof_properties (to allow for a few extra properties added - // after the constructor). There is no guarantee that the extra space - // will not be wasted. - // - // Here is the algorithm to reclaim the unused inobject space: - // - Detect the first constructor call for this SharedFunctionInfo. - // When it happens enter the "in progress" state: remember the - // constructor's initial_map and install a special construct stub that - // counts constructor calls. - // - While the tracking is in progress create objects filled with - // one_pointer_filler_map instead of undefined_value. This way they can be - // resized quickly and safely. - // - Once enough (kGenerousAllocationCount) objects have been created - // compute the 'slack' (traverse the map transition tree starting from the - // initial_map and find the lowest value of unused_property_fields). - // - Traverse the transition tree again and decrease the instance size - // of every map. Existing objects will resize automatically (they are - // filled with one_pointer_filler_map). All further allocations will - // use the adjusted instance size. - // - Decrease expected_nof_properties so that an allocations made from - // another context will use the adjusted instance size too. - // - Exit "in progress" state by clearing the reference to the initial_map - // and setting the regular construct stub (generic or inline). - // - // The above is the main event sequence. Some special cases are possible - // while the tracking is in progress: - // - // - GC occurs. - // Check if the initial_map is referenced by any live objects (except this - // SharedFunctionInfo). If it is, continue tracking as usual. - // If it is not, clear the reference and reset the tracking state. The - // tracking will be initiated again on the next constructor call. - // - // - The constructor is called from another context. - // Immediately complete the tracking, perform all the necessary changes - // to maps. This is necessary because there is no efficient way to track - // multiple initial_maps. - // Proceed to create an object in the current context (with the adjusted - // size). - // - // - A different constructor function sharing the same SharedFunctionInfo is - // called in the same context. This could be another closure in the same - // context, or the first function could have been disposed. - // This is handled the same way as the previous case. - // - // Important: inobject slack tracking is not attempted during the snapshot - // creation. - - static const int kGenerousAllocationCount = 8; - - // [construction_count]: Counter for constructor calls made during - // the tracking phase. - inline int construction_count(); - inline void set_construction_count(int value); - // [feedback_vector] - accumulates ast node feedback from full-codegen and // (increasingly) from crankshafted code where sufficient feedback isn't // available. Currently the field is duplicated in // TypeFeedbackInfo::feedback_vector, but the allocation is done here. DECL_ACCESSORS(feedback_vector, FixedArray) - // [initial_map]: initial map of the first function called as a constructor. - // Saved for the duration of the tracking phase. - // This is a weak link (GC resets it to undefined_value if no other live - // object reference this map). - DECL_ACCESSORS(initial_map, Object) - - // True if the initial_map is not undefined and the countdown stub is - // installed. 
- inline bool IsInobjectSlackTrackingInProgress(); - - // Starts the tracking. - // Stores the initial map and installs the countdown stub. - // IsInobjectSlackTrackingInProgress is normally true after this call, - // except when tracking have not been started (e.g. the map has no unused - // properties or the snapshot is being built). - void StartInobjectSlackTracking(Map* map); - - // Completes the tracking. - // IsInobjectSlackTrackingInProgress is false after this call. - void CompleteInobjectSlackTracking(); - - // Invoked before pointers in SharedFunctionInfo are being marked. - // Also clears the optimized code map. - inline void BeforeVisitingPointers(); - - // Clears the initial_map before the GC marking phase to ensure the reference - // is weak. IsInobjectSlackTrackingInProgress is false after this call. - void DetachInitialMap(); - - // Restores the link to the initial map after the GC marking phase. - // IsInobjectSlackTrackingInProgress is true after this call. - void AttachInitialMap(Map* map); - - // False if there are definitely no live objects created from this function. - // True if live objects _may_ exist (existence not guaranteed). - // May go back from true to false after GC. - DECL_BOOLEAN_ACCESSORS(live_objects_may_exist) - // [instance class name]: class name for instances. DECL_ACCESSORS(instance_class_name, Object) @@ -7047,7 +7165,7 @@ class SharedFunctionInfo: public HeapObject { DECL_ACCESSORS(script, Object) // [num_literals]: Number of literals used by this function. - inline int num_literals(); + inline int num_literals() const; inline void set_num_literals(int value); // [start_position_and_type]: Field used to store both the source code @@ -7055,7 +7173,7 @@ class SharedFunctionInfo: public HeapObject { // and whether or not the function is a toplevel function. The two // least significants bit indicates whether the function is an // expression and the rest contains the source code position. - inline int start_position_and_type(); + inline int start_position_and_type() const; inline void set_start_position_and_type(int value); // [debug info]: Debug information. @@ -7072,15 +7190,15 @@ class SharedFunctionInfo: public HeapObject { String* DebugName(); // Position of the 'function' token in the script source. - inline int function_token_position(); + inline int function_token_position() const; inline void set_function_token_position(int function_token_position); // Position of this function in the script source. - inline int start_position(); + inline int start_position() const; inline void set_start_position(int start_position); // End position of this function in the script source. - inline int end_position(); + inline int end_position() const; inline void set_end_position(int end_position); // Is this function a function expression in the source code. @@ -7091,13 +7209,13 @@ class SharedFunctionInfo: public HeapObject { // Bit field containing various information collected by the compiler to // drive optimization. - inline int compiler_hints(); + inline int compiler_hints() const; inline void set_compiler_hints(int value); - inline int ast_node_count(); + inline int ast_node_count() const; inline void set_ast_node_count(int count); - inline int profiler_ticks(); + inline int profiler_ticks() const; inline void set_profiler_ticks(int ticks); // Inline cache age is used to infer whether the function survived a context @@ -7158,12 +7276,6 @@ class SharedFunctionInfo: public HeapObject { // Is this a function or top-level/eval code. 
DECL_BOOLEAN_ACCESSORS(is_function) - // Indicates that the function cannot be optimized. - DECL_BOOLEAN_ACCESSORS(dont_optimize) - - // Indicates that the function cannot be inlined. - DECL_BOOLEAN_ACCESSORS(dont_inline) - // Indicates that code for this function cannot be cached. DECL_BOOLEAN_ACCESSORS(dont_cache) @@ -7173,6 +7285,9 @@ class SharedFunctionInfo: public HeapObject { // Indicates that this function is a generator. DECL_BOOLEAN_ACCESSORS(is_generator) + // Indicates that this function is an arrow function. + DECL_BOOLEAN_ACCESSORS(is_arrow) + // Indicates whether or not the code in the shared function support // deoptimization. inline bool has_deoptimization_support(); @@ -7186,13 +7301,13 @@ class SharedFunctionInfo: public HeapObject { inline BailoutReason DisableOptimizationReason(); - // Lookup the bailout ID and ASSERT that it exists in the non-optimized + // Lookup the bailout ID and DCHECK that it exists in the non-optimized // code, returns whether it asserted (i.e., always true if assertions are // disabled). bool VerifyBailoutId(BailoutId id); // [source code]: Source code for the function. - bool HasSourceCode(); + bool HasSourceCode() const; Handle<Object> GetSourceCode(); // Number of times the function was optimized. @@ -7213,11 +7328,11 @@ class SharedFunctionInfo: public HeapObject { // Stores deopt_count, opt_reenable_tries and ic_age as bit-fields. inline void set_counters(int value); - inline int counters(); + inline int counters() const; // Stores opt_count and bailout_reason as bit-fields. inline void set_opt_count_and_bailout_reason(int value); - inline int opt_count_and_bailout_reason(); + inline int opt_count_and_bailout_reason() const; void set_bailout_reason(BailoutReason reason) { set_opt_count_and_bailout_reason( @@ -7225,11 +7340,6 @@ class SharedFunctionInfo: public HeapObject { reason)); } - void set_dont_optimize_reason(BailoutReason reason) { - set_bailout_reason(reason); - set_dont_optimize(reason != kNoReason); - } - // Check whether or not this function is inlineable. bool IsInlineable(); @@ -7243,15 +7353,12 @@ class SharedFunctionInfo: public HeapObject { int CalculateInObjectProperties(); // Dispatched behavior. - // Set max_length to -1 for unlimited length. - void SourceCodePrint(StringStream* accumulator, int max_length); DECLARE_PRINTER(SharedFunctionInfo) DECLARE_VERIFIER(SharedFunctionInfo) void ResetForNewContext(int new_ic_age); - // Casting. - static inline SharedFunctionInfo* cast(Object* obj); + DECLARE_CAST(SharedFunctionInfo) // Constants. static const int kDontAdaptArgumentsSentinel = -1; @@ -7272,12 +7379,10 @@ class SharedFunctionInfo: public HeapObject { static const int kInferredNameOffset = kDebugInfoOffset + kPointerSize; static const int kFeedbackVectorOffset = kInferredNameOffset + kPointerSize; - static const int kInitialMapOffset = - kFeedbackVectorOffset + kPointerSize; #if V8_HOST_ARCH_32_BIT // Smi fields. static const int kLengthOffset = - kInitialMapOffset + kPointerSize; + kFeedbackVectorOffset + kPointerSize; static const int kFormalParameterCountOffset = kLengthOffset + kPointerSize; static const int kExpectedNofPropertiesOffset = kFormalParameterCountOffset + kPointerSize; @@ -7313,7 +7418,7 @@ class SharedFunctionInfo: public HeapObject { // word is not set and thus this word cannot be treated as pointer // to HeapObject during old space traversal. 
static const int kLengthOffset = - kInitialMapOffset + kPointerSize; + kFeedbackVectorOffset + kPointerSize; static const int kFormalParameterCountOffset = kLengthOffset + kIntSize; @@ -7345,23 +7450,12 @@ class SharedFunctionInfo: public HeapObject { // Total size. static const int kSize = kProfilerTicksOffset + kIntSize; -#endif - - // The construction counter for inobject slack tracking is stored in the - // most significant byte of compiler_hints which is otherwise unused. - // Its offset depends on the endian-ness of the architecture. -#if defined(V8_TARGET_LITTLE_ENDIAN) - static const int kConstructionCountOffset = kCompilerHintsOffset + 3; -#elif defined(V8_TARGET_BIG_ENDIAN) - static const int kConstructionCountOffset = kCompilerHintsOffset + 0; -#else -#error Unknown byte ordering #endif static const int kAlignedSize = POINTER_SIZE_ALIGN(kSize); typedef FixedBodyDescriptor<kNameOffset, - kInitialMapOffset + kPointerSize, + kFeedbackVectorOffset + kPointerSize, kSize> BodyDescriptor; // Bit positions in start_position_and_type. @@ -7376,7 +7470,6 @@ class SharedFunctionInfo: public HeapObject { enum CompilerHints { kAllowLazyCompilation, kAllowLazyCompilationWithoutContext, - kLiveObjectsMayExist, kOptimizationDisabled, kStrictModeFunction, kUsesArguments, @@ -7387,11 +7480,10 @@ class SharedFunctionInfo: public HeapObject { kIsAnonymous, kNameShouldPrintAsAnonymous, kIsFunction, - kDontOptimize, - kDontInline, kDontCache, kDontFlush, kIsGenerator, + kIsArrow, kCompilerHintsCount // Pseudo entry }; @@ -7447,6 +7539,18 @@ class SharedFunctionInfo: public HeapObject { }; +// Printing support. +struct SourceCodeOf { + explicit SourceCodeOf(SharedFunctionInfo* v, int max = -1) + : value(v), max_length(max) {} + const SharedFunctionInfo* value; + int max_length; +}; + + +OStream& operator<<(OStream& os, const SourceCodeOf& v); + + class JSGeneratorObject: public JSObject { public: // [function]: The function corresponding to this generator object. @@ -7463,8 +7567,10 @@ class JSGeneratorObject: public JSObject { // A positive offset indicates a suspended generator. The special // kGeneratorExecuting and kGeneratorClosed values indicate that a generator // cannot be resumed. - inline int continuation(); + inline int continuation() const; inline void set_continuation(int continuation); + inline bool is_closed(); + inline bool is_executing(); inline bool is_suspended(); // [operand_stack]: Saved operand stack. @@ -7472,11 +7578,10 @@ class JSGeneratorObject: public JSObject { // [stack_handler_index]: Index of first stack handler in operand_stack, or -1 // if the captured activation had no stack handler. - inline int stack_handler_index(); + inline int stack_handler_index() const; inline void set_stack_handler_index(int stack_handler_index); - // Casting. - static inline JSGeneratorObject* cast(Object* obj); + DECLARE_CAST(JSGeneratorObject) // Dispatched behavior. DECLARE_PRINTER(JSGeneratorObject) @@ -7524,8 +7629,7 @@ class JSModule: public JSObject { // [scope_info]: Scope info. DECL_ACCESSORS(scope_info, ScopeInfo) - // Casting. - static inline JSModule* cast(Object* obj); + DECLARE_CAST(JSModule) // Dispatched behavior. DECLARE_PRINTER(JSModule) @@ -7554,6 +7658,7 @@ class JSFunction: public JSObject { // [context]: The context for this function. inline Context* context(); inline void set_context(Object* context); + inline JSObject* global_proxy(); // [code]: The generated code object for this function. Executed // when the function is invoked, e.g. foo() or new foo(). 
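SourceCodeOf is the stream-manipulator-style replacement for the removed SourceCodePrint(): a tiny value-plus-length-cap wrapper that an OStream operator<< knows how to print. A self-contained analogue using std::ostream (V8's OStream is its own class, and the printing logic here is invented for illustration):

#include <iostream>

struct InfoSketch { const char* source; };

struct SourceCodeOfSketch {
  explicit SourceCodeOfSketch(const InfoSketch& v, int max = -1)
      : value(v), max_length(max) {}
  const InfoSketch& value;
  int max_length;  // -1 means "no limit", matching the default above
};

std::ostream& operator<<(std::ostream& os, const SourceCodeOfSketch& s) {
  const char* p = s.value.source;
  for (int i = 0; *p != '\0' && (s.max_length < 0 || i < s.max_length);
       ++i, ++p) {
    os << *p;
  }
  return os;
}

// usage: std::cout << SourceCodeOfSketch(info, 80);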
See @@ -7567,6 +7672,12 @@ // Tells whether this function is builtin. inline bool IsBuiltin(); + // Tells whether this function is defined in a native script. + inline bool IsFromNativeScript(); + + // Tells whether this function is defined in an extension script. + inline bool IsFromExtensionScript(); + // Tells whether or not the function needs arguments adaption. inline bool NeedsArgumentsAdaption(); @@ -7590,6 +7701,54 @@ // Tells whether or not the function is on the concurrent recompilation queue. inline bool IsInOptimizationQueue(); + // Inobject slack tracking is the way to reclaim unused inobject space. + // + // The instance size is initially determined by adding some slack to + // expected_nof_properties (to allow for a few extra properties added + // after the constructor). There is no guarantee that the extra space + // will not be wasted. + // + // Here is the algorithm to reclaim the unused inobject space: + // - Detect the first constructor call for this JSFunction. + // When it happens, enter the "in progress" state: initialize construction + // counter in the initial_map and set the |done_inobject_slack_tracking| + // flag. + // - While the tracking is in progress create objects filled with + // one_pointer_filler_map instead of undefined_value. This way they can be + // resized quickly and safely. + // - Once enough (kGenerousAllocationCount) objects have been created + // compute the 'slack' (traverse the map transition tree starting from the + // initial_map and find the lowest value of unused_property_fields). + // - Traverse the transition tree again and decrease the instance size + // of every map. Existing objects will resize automatically (they are + // filled with one_pointer_filler_map). All further allocations will + // use the adjusted instance size. + // - SharedFunctionInfo's expected_nof_properties is left unmodified, since + // allocations made using different closures could actually create different + // kinds of objects (see the prototype inheritance pattern). + // + // Important: inobject slack tracking is not attempted during the snapshot + // creation. + + static const int kGenerousAllocationCount = Map::ConstructionCount::kMax; + static const int kFinishSlackTracking = 1; + static const int kNoSlackTracking = 0; + + // True if the initial_map is set and the object constructions countdown + // counter is not zero. + inline bool IsInobjectSlackTrackingInProgress(); + + // Starts the tracking. + // Initializes object constructions countdown counter in the initial map. + // IsInobjectSlackTrackingInProgress is normally true after this call, + // except when tracking has not been started (e.g. the map has no unused + // properties or the snapshot is being built). + void StartInobjectSlackTracking(); + + // Completes the tracking. + // IsInobjectSlackTrackingInProgress is false after this call. + void CompleteInobjectSlackTracking(); + // [literals_or_bindings]: Fixed array holding either // the materialized literals or the bindings of a bound function. // @@ -7614,7 +7773,8 @@ // The initial map for an object created by this constructor.
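The countdown now lives in the map itself: ConstructionCount is a 3-bit field of bit_field3, so kGenerousAllocationCount == Map::ConstructionCount::kMax == 7. A self-contained sketch of the state machine described above (the shrink step is simplified; the real code walks the transition tree to find the minimum slack):

struct MapStateSketch {
  int construction_count = 7;    // kGenerousAllocationCount
  int unused_property_fields = 4;
  int instance_size = 80;        // bytes
};

void OnConstructorCallSketch(MapStateSketch* map) {
  const int kFinishSlackTracking = 1;  // values mirror JSFunction's constants
  const int kNoSlackTracking = 0;
  if (map->construction_count > kFinishSlackTracking) {
    map->construction_count--;         // still counting constructed objects
  } else if (map->construction_count == kFinishSlackTracking) {
    map->construction_count = kNoSlackTracking;
    // Reclaim the slack: shrink the instance by its unused in-object fields.
    map->instance_size -= map->unused_property_fields * 8;  // kPointerSize
    map->unused_property_fields = 0;
  }
}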
inline Map* initial_map(); - inline void set_initial_map(Map* value); + static void SetInitialMap(Handle<JSFunction> function, Handle<Map> map, + Handle<Object> prototype); inline bool has_initial_map(); static void EnsureHasInitialMap(Handle<JSFunction> function); @@ -7659,8 +7819,7 @@ // Prints the name of the function using PrintF. void PrintName(FILE* out = stdout); - // Casting. - static inline JSFunction* cast(Object* obj); + DECLARE_CAST(JSFunction) // Iterates the objects, including code objects indirectly referenced // through pointers to the first instruction in the code object. @@ -7723,10 +7882,9 @@ class JSGlobalProxy : public JSObject { // [hash]: The hash code property (undefined if not initialized yet). DECL_ACCESSORS(hash, Object) - // Casting. - static inline JSGlobalProxy* cast(Object* obj); + DECLARE_CAST(JSGlobalProxy) - inline bool IsDetachedFrom(GlobalObject* global); + inline bool IsDetachedFrom(GlobalObject* global) const; // Dispatched behavior. DECLARE_PRINTER(JSGlobalProxy) @@ -7758,21 +7916,20 @@ class GlobalObject: public JSObject { // [global context]: the most recent (i.e. innermost) global context. DECL_ACCESSORS(global_context, Context) - // [global receiver]: the global receiver object of the context - DECL_ACCESSORS(global_receiver, JSObject) + // [global proxy]: the global proxy object of the context + DECL_ACCESSORS(global_proxy, JSObject) // Retrieve the property cell used to store a property. PropertyCell* GetPropertyCell(LookupResult* result); - // Casting. - static inline GlobalObject* cast(Object* obj); + DECLARE_CAST(GlobalObject) // Layout description. static const int kBuiltinsOffset = JSObject::kHeaderSize; static const int kNativeContextOffset = kBuiltinsOffset + kPointerSize; static const int kGlobalContextOffset = kNativeContextOffset + kPointerSize; - static const int kGlobalReceiverOffset = kGlobalContextOffset + kPointerSize; - static const int kHeaderSize = kGlobalReceiverOffset + kPointerSize; + static const int kGlobalProxyOffset = kGlobalContextOffset + kPointerSize; + static const int kHeaderSize = kGlobalProxyOffset + kPointerSize; private: DISALLOW_IMPLICIT_CONSTRUCTORS(GlobalObject); }; @@ -7782,8 +7939,7 @@ // JavaScript global object. class JSGlobalObject: public GlobalObject { public: - // Casting. - static inline JSGlobalObject* cast(Object* obj); + DECLARE_CAST(JSGlobalObject) // Ensure that the global object has a cell for the given property name. static Handle<PropertyCell> EnsurePropertyCell(Handle<JSGlobalObject> global, @@ -7815,8 +7971,7 @@ class JSBuiltinsObject: public GlobalObject { inline Code* javascript_builtin_code(Builtins::JavaScript id); inline void set_javascript_builtin_code(Builtins::JavaScript id, Code* value); - // Casting. - static inline JSBuiltinsObject* cast(Object* obj); + DECLARE_CAST(JSBuiltinsObject) // Dispatched behavior. DECLARE_PRINTER(JSBuiltinsObject) @@ -7851,8 +8006,7 @@ class JSValue: public JSObject { // [value]: the object being wrapped. DECL_ACCESSORS(value, Object) - // Casting. - static inline JSValue* cast(Object* obj); + DECLARE_CAST(JSValue) // Dispatched behavior. DECLARE_PRINTER(JSValue) @@ -7890,11 +8044,10 @@ class JSDate: public JSObject { // [sec]: caches seconds. Either undefined, smi, or NaN. DECL_ACCESSORS(sec, Object) // [cache stamp]: sample of the date cache stamp at the - // moment when local fields were cached. + // moment when cached fields were cached.
DECL_ACCESSORS(cache_stamp, Object) - // Casting. - static inline JSDate* cast(Object* obj); + DECLARE_CAST(JSDate) // Returns the date field with the specified index. // See FieldIndex for the list of date fields. @@ -7954,7 +8107,7 @@ class JSDate: public JSObject { Object* GetUTCField(FieldIndex index, double value, DateCache* date_cache); // Computes and caches the cacheable fields of the date. - inline void SetLocalFields(int64_t local_time_ms, DateCache* date_cache); + inline void SetCachedFields(int64_t local_time_ms, DateCache* date_cache); DISALLOW_IMPLICIT_CONSTRUCTORS(JSDate); @@ -7982,15 +8135,14 @@ class JSMessageObject: public JSObject { DECL_ACCESSORS(stack_frames, Object) // [start_position]: the start position in the script for the error message. - inline int start_position(); + inline int start_position() const; inline void set_start_position(int value); // [end_position]: the end position in the script for the error message. - inline int end_position(); + inline int end_position() const; inline void set_end_position(int value); - // Casting. - static inline JSMessageObject* cast(Object* obj); + DECLARE_CAST(JSMessageObject) // Dispatched behavior. DECLARE_PRINTER(JSMessageObject) @@ -8074,7 +8226,7 @@ class JSRegExp: public JSObject { } } - static inline JSRegExp* cast(Object* obj); + DECLARE_CAST(JSRegExp) // Dispatched behavior. DECLARE_VERIFIER(JSRegExp) @@ -8192,7 +8344,7 @@ class CompilationCacheTable: public HashTable<CompilationCacheTable, JSRegExp::Flags flags, Handle<FixedArray> value); void Remove(Object* value); - static inline CompilationCacheTable* cast(Object* obj); + DECLARE_CAST(CompilationCacheTable) private: DISALLOW_IMPLICIT_CONSTRUCTORS(CompilationCacheTable); @@ -8221,7 +8373,7 @@ class CodeCache: public Struct { // Remove an object from the cache with the provided internal index. void RemoveByIndex(Object* name, Code* code, int index); - static inline CodeCache* cast(Object* obj); + DECLARE_CAST(CodeCache) // Dispatched behavior. DECLARE_PRINTER(CodeCache) @@ -8284,7 +8436,7 @@ class CodeCacheHashTable: public HashTable<CodeCacheHashTable, int GetIndex(Name* name, Code::Flags flags); void RemoveByIndex(int index); - static inline CodeCacheHashTable* cast(Object* obj); + DECLARE_CAST(CodeCacheHashTable) // Initial size of the fixed array backing the hash table. static const int kInitialSize = 64; @@ -8307,7 +8459,7 @@ class PolymorphicCodeCache: public Struct { // Returns an undefined value if the entry is not found. Handle<Object> Lookup(MapHandleList* maps, Code::Flags flags); - static inline PolymorphicCodeCache* cast(Object* obj); + DECLARE_CAST(PolymorphicCodeCache) // Dispatched behavior. 
DECLARE_PRINTER(PolymorphicCodeCache) @@ -8334,7 +8486,7 @@ class PolymorphicCodeCacheHashTable int code_kind, Handle<Code> code); - static inline PolymorphicCodeCacheHashTable* cast(Object* obj); + DECLARE_CAST(PolymorphicCodeCacheHashTable) static const int kInitialSize = 64; private: @@ -8348,7 +8500,10 @@ class TypeFeedbackInfo: public Struct { inline void set_ic_total_count(int count); inline int ic_with_type_info_count(); - inline void change_ic_with_type_info_count(int count); + inline void change_ic_with_type_info_count(int delta); + + inline int ic_generic_count(); + inline void change_ic_generic_count(int delta); inline void initialize_storage(); @@ -8359,7 +8514,7 @@ class TypeFeedbackInfo: public Struct { inline bool matches_inlined_type_change_checksum(int checksum); - static inline TypeFeedbackInfo* cast(Object* obj); + DECLARE_CAST(TypeFeedbackInfo) // Dispatched behavior. DECLARE_PRINTER(TypeFeedbackInfo) @@ -8367,7 +8522,8 @@ class TypeFeedbackInfo: public Struct { static const int kStorage1Offset = HeapObject::kHeaderSize; static const int kStorage2Offset = kStorage1Offset + kPointerSize; - static const int kSize = kStorage2Offset + kPointerSize; + static const int kStorage3Offset = kStorage2Offset + kPointerSize; + static const int kSize = kStorage3Offset + kPointerSize; // TODO(mvstanton): move these sentinel declarations to shared function info. // The object that indicates an uninitialized cache. @@ -8420,11 +8576,14 @@ class AllocationSite: public Struct { enum PretenureDecision { kUndecided = 0, kDontTenure = 1, - kTenure = 2, - kZombie = 3, + kMaybeTenure = 2, + kTenure = 3, + kZombie = 4, kLastPretenureDecisionValue = kZombie }; + const char* PretenureDecisionName(PretenureDecision decision); + DECL_ACCESSORS(transition_info, Object) // nested_site threads a list of sites that represent nested literals // walked in a particular order. 
So [[1, 2], 1, 2] will have one @@ -8446,8 +8605,8 @@ class AllocationSite: public Struct { class DoNotInlineBit: public BitField<bool, 29, 1> {}; // Bitfields for pretenure_data - class MementoFoundCountBits: public BitField<int, 0, 27> {}; - class PretenureDecisionBits: public BitField<PretenureDecision, 27, 2> {}; + class MementoFoundCountBits: public BitField<int, 0, 26> {}; + class PretenureDecisionBits: public BitField<PretenureDecision, 26, 3> {}; class DeoptDependentCodeBit: public BitField<bool, 29, 1> {}; STATIC_ASSERT(PretenureDecisionBits::kMax >= kLastPretenureDecisionValue); @@ -8508,12 +8667,20 @@ class AllocationSite: public Struct { return pretenure_decision() == kZombie; } + bool IsMaybeTenure() { + return pretenure_decision() == kMaybeTenure; + } + inline void MarkZombie(); - inline bool DigestPretenuringFeedback(); + inline bool MakePretenureDecision(PretenureDecision current_decision, + double ratio, + bool maximum_size_scavenge); + + inline bool DigestPretenuringFeedback(bool maximum_size_scavenge); ElementsKind GetElementsKind() { - ASSERT(!SitePointsToLiteral()); + DCHECK(!SitePointsToLiteral()); int value = Smi::cast(transition_info())->value(); return ElementsKindBits::decode(value); } @@ -8557,7 +8724,7 @@ class AllocationSite: public Struct { DECLARE_PRINTER(AllocationSite) DECLARE_VERIFIER(AllocationSite) - static inline AllocationSite* cast(Object* obj); + DECLARE_CAST(AllocationSite) static inline AllocationSiteMode GetMode( ElementsKind boilerplate_elements_kind); static inline AllocationSiteMode GetMode(ElementsKind from, ElementsKind to); @@ -8605,14 +8772,14 @@ class AllocationMemento: public Struct { !AllocationSite::cast(allocation_site())->IsZombie(); } AllocationSite* GetAllocationSite() { - ASSERT(IsValid()); + DCHECK(IsValid()); return AllocationSite::cast(allocation_site()); } DECLARE_PRINTER(AllocationMemento) DECLARE_VERIFIER(AllocationMemento) - static inline AllocationMemento* cast(Object* obj); + DECLARE_CAST(AllocationMemento) private: DISALLOW_IMPLICIT_CONSTRUCTORS(AllocationMemento); @@ -8629,10 +8796,10 @@ class AllocationMemento: public Struct { // - all attributes are available as part of the property details class AliasedArgumentsEntry: public Struct { public: - inline int aliased_context_slot(); + inline int aliased_context_slot() const; inline void set_aliased_context_slot(int count); - static inline AliasedArgumentsEntry* cast(Object* obj); + DECLARE_CAST(AliasedArgumentsEntry) // Dispatched behavior. DECLARE_PRINTER(AliasedArgumentsEntry) @@ -8704,6 +8871,19 @@ class StringHasher { }; +class IteratingStringHasher : public StringHasher { + public: + static inline uint32_t Hash(String* string, uint32_t seed); + inline void VisitOneByteString(const uint8_t* chars, int length); + inline void VisitTwoByteString(const uint16_t* chars, int length); + + private: + inline IteratingStringHasher(int len, uint32_t seed) + : StringHasher(len, seed) {} + DISALLOW_COPY_AND_ASSIGN(IteratingStringHasher); +}; + + // The characteristics of a string are stored in its map. Retrieving these // few bits of information is moderately expensive, involving two memory // loads where the second is dependent on the first. To improve efficiency @@ -8717,7 +8897,7 @@ class StringHasher { // concrete performance benefit at that particular point in the code.
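As a usage note for the class declared next: the intent is to pay for the two dependent memory loads once, then branch on the cached bits inside a hot loop. A rough sketch (the surrounding function is hypothetical; StringShape and String::length are the real names from this header):

// Hypothetical hot loop: build the StringShape once rather than
// re-reading the map for every character.
void ProcessString(String* s) {
  StringShape shape(s);   // caches the instance-type bits up front
  int length = s->length();
  for (int i = 0; i < length; i++) {
    if (shape.IsSequential()) {
      // ... fast path relying on sequential layout ...
    } else {
      // ... generic path ...
    }
  }
}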
class StringShape BASE_EMBEDDED { public: - inline explicit StringShape(String* s); + inline explicit StringShape(const String* s); inline explicit StringShape(Map* s); inline explicit StringShape(InstanceType t); inline bool IsSequential(); @@ -8774,10 +8954,10 @@ class Name: public HeapObject { // Conversion. inline bool AsArrayIndex(uint32_t* index); - // Casting. - static inline Name* cast(Object* obj); + // Whether name can only name own properties. + inline bool IsOwn(); - bool IsCacheable(Isolate* isolate); + DECLARE_CAST(Name) DECLARE_PRINTER(Name) @@ -8811,23 +8991,22 @@ class Name: public HeapObject { static const int kArrayIndexLengthBits = kBitsPerInt - kArrayIndexValueBits - kNofHashBitFields; - STATIC_CHECK((kArrayIndexLengthBits > 0)); - - static const int kArrayIndexHashLengthShift = - kArrayIndexValueBits + kNofHashBitFields; - - static const int kArrayIndexHashMask = (1 << kArrayIndexHashLengthShift) - 1; + STATIC_ASSERT((kArrayIndexLengthBits > 0)); - static const int kArrayIndexValueMask = - ((1 << kArrayIndexValueBits) - 1) << kHashShift; + class ArrayIndexValueBits : public BitField<unsigned int, kNofHashBitFields, + kArrayIndexValueBits> {}; // NOLINT + class ArrayIndexLengthBits : public BitField<unsigned int, + kNofHashBitFields + kArrayIndexValueBits, + kArrayIndexLengthBits> {}; // NOLINT // Check that kMaxCachedArrayIndexLength + 1 is a power of two so we // could use a mask to test if the length of string is less than or equal to // kMaxCachedArrayIndexLength. - STATIC_CHECK(IS_POWER_OF_TWO(kMaxCachedArrayIndexLength + 1)); + STATIC_ASSERT(IS_POWER_OF_TWO(kMaxCachedArrayIndexLength + 1)); static const unsigned int kContainsCachedArrayIndexMask = - (~kMaxCachedArrayIndexLength << kArrayIndexHashLengthShift) | + (~static_cast<unsigned>(kMaxCachedArrayIndexLength) + << ArrayIndexLengthBits::kShift) | kIsNotArrayIndexMask; // Value of empty hash field indicating that the hash is not computed. @@ -8853,8 +9032,11 @@ class Symbol: public Name { // [is_private]: whether this is a private symbol. DECL_BOOLEAN_ACCESSORS(is_private) - // Casting. - static inline Symbol* cast(Object* obj); + // [is_own]: whether this is an own symbol, that is, only used to designate + // own properties of objects. + DECL_BOOLEAN_ACCESSORS(is_own) + + DECLARE_CAST(Symbol) // Dispatched behavior. DECLARE_PRINTER(Symbol) @@ -8869,6 +9051,7 @@ class Symbol: public Name { private: static const int kPrivateBit = 0; + static const int kOwnBit = 1; DISALLOW_IMPLICIT_CONSTRUCTORS(Symbol); }; @@ -8888,6 +9071,34 @@ class String: public Name { public: enum Encoding { ONE_BYTE_ENCODING, TWO_BYTE_ENCODING }; + // Array index strings this short can keep their index in the hash field. + static const int kMaxCachedArrayIndexLength = 7; + + // For strings which are array indexes the hash value has the string length + // mixed into the hash, mainly to avoid a hash value of zero which would be + // the case for the string '0'. 24 bits are used for the array index value. 
+ static const int kArrayIndexValueBits = 24; + static const int kArrayIndexLengthBits = + kBitsPerInt - kArrayIndexValueBits - kNofHashBitFields; + + STATIC_ASSERT((kArrayIndexLengthBits > 0)); + + class ArrayIndexValueBits : public BitField<unsigned int, kNofHashBitFields, + kArrayIndexValueBits> {}; // NOLINT + class ArrayIndexLengthBits : public BitField<unsigned int, + kNofHashBitFields + kArrayIndexValueBits, + kArrayIndexLengthBits> {}; // NOLINT + + // Check that kMaxCachedArrayIndexLength + 1 is a power of two so we + // could use a mask to test if the length of string is less than or equal to + // kMaxCachedArrayIndexLength. + STATIC_ASSERT(IS_POWER_OF_TWO(kMaxCachedArrayIndexLength + 1)); + + static const unsigned int kContainsCachedArrayIndexMask = + (~static_cast<unsigned>(kMaxCachedArrayIndexLength) + << ArrayIndexLengthBits::kShift) | + kIsNotArrayIndexMask; + // Representation of the flat content of a String. // A non-flat string doesn't have flat content. // A flat string has content that's encoded as a sequence of either @@ -8905,19 +9116,19 @@ class String: public Name { // Return the one byte content of the string. Only use if IsAscii() returns // true. Vector<const uint8_t> ToOneByteVector() { - ASSERT_EQ(ASCII, state_); + DCHECK_EQ(ASCII, state_); return Vector<const uint8_t>(onebyte_start, length_); } // Return the two-byte content of the string. Only use if IsTwoByte() // returns true. Vector<const uc16> ToUC16Vector() { - ASSERT_EQ(TWO_BYTE, state_); + DCHECK_EQ(TWO_BYTE, state_); return Vector<const uc16>(twobyte_start, length_); } uc16 Get(int i) { - ASSERT(i < length_); - ASSERT(state_ != NON_FLAT); + DCHECK(i < length_); + DCHECK(state_ != NON_FLAT); if (state_ == ASCII) return onebyte_start[i]; return twobyte_start[i]; } @@ -8943,20 +9154,20 @@ class String: public Name { }; // Get and set the length of the string. - inline int length(); + inline int length() const; inline void set_length(int value); // Get and set the length of the string using acquire loads and release // stores. - inline int synchronized_length(); + inline int synchronized_length() const; inline void synchronized_set_length(int value); // Returns whether this string has only ASCII chars, i.e. all of them can // be ASCII encoded. This might be the case even if the string is // two-byte. Such strings may appear when the embedder prefers // two-byte external representations even for ASCII data. - inline bool IsOneByteRepresentation(); - inline bool IsTwoByteRepresentation(); + inline bool IsOneByteRepresentation() const; + inline bool IsTwoByteRepresentation() const; // Cons and slices have an encoding flag that may not represent the actual // encoding of the underlying string. This is taken into account here. @@ -9048,8 +9259,7 @@ class String: public Name { // Conversion. inline bool AsArrayIndex(uint32_t* index); - // Casting. - static inline String* cast(Object* obj); + DECLARE_CAST(String) void PrintOn(FILE* out); @@ -9058,6 +9268,7 @@ class String: public Name { // Dispatched behavior. void StringShortPrint(StringStream* accumulator); + void PrintUC16(OStream& os, int start = 0, int end = -1); // NOLINT #ifdef OBJECT_PRINT char* ToAsciiArray(); #endif @@ -9073,7 +9284,7 @@ class String: public Name { // Maximum number of characters to consider when trying to convert a string // value into an array index. 
static const int kMaxArrayIndexSize = 10; - STATIC_CHECK(kMaxArrayIndexSize < (1 << kArrayIndexLengthBits)); + STATIC_ASSERT(kMaxArrayIndexSize < (1 << kArrayIndexLengthBits)); // Max char codes. static const int32_t kMaxOneByteCharCode = unibrow::Latin1::kMaxChar; @@ -9111,7 +9322,7 @@ class String: public Name { const char* start = chars; const char* limit = chars + length; #ifdef V8_HOST_CAN_READ_UNALIGNED - ASSERT(unibrow::Utf8::kMaxOneByteChar == 0x7F); + DCHECK(unibrow::Utf8::kMaxOneByteChar == 0x7F); const uintptr_t non_ascii_mask = kUintptrAllBitsSet / 0xFF * 0x80; while (chars + sizeof(uintptr_t) <= limit) { if (*reinterpret_cast<const uintptr_t*>(chars) & non_ascii_mask) { @@ -9160,8 +9371,14 @@ class String: public Name { static Handle<FixedArray> CalculateLineEnds(Handle<String> string, bool include_ending_line); + // Use the hash field to forward to the canonical internalized string + // when deserializing an internalized string. + inline void SetForwardedInternalizedString(String* string); + inline String* GetForwardedInternalizedString(); + private: friend class Name; + friend class StringTableInsertionKey; static Handle<String> SlowFlatten(Handle<ConsString> cons, PretenureFlag tenure); @@ -9185,8 +9402,7 @@ class String: public Name { // The SeqString abstract class captures sequential string values. class SeqString: public String { public: - // Casting. - static inline SeqString* cast(Object* obj); + DECLARE_CAST(SeqString) // Layout description. static const int kHeaderSize = String::kSize; @@ -9216,8 +9432,7 @@ class SeqOneByteString: public SeqString { inline uint8_t* GetChars(); - // Casting - static inline SeqOneByteString* cast(Object* obj); + DECLARE_CAST(SeqOneByteString) // Garbage collection support. This method is called by the // garbage collector to compute the actual size of an AsciiString @@ -9231,7 +9446,7 @@ class SeqOneByteString: public SeqString { // Maximal memory usage for a single sequential ASCII string. static const int kMaxSize = 512 * MB - 1; - STATIC_CHECK((kMaxSize - kHeaderSize) >= String::kMaxLength); + STATIC_ASSERT((kMaxSize - kHeaderSize) >= String::kMaxLength); private: DISALLOW_IMPLICIT_CONSTRUCTORS(SeqOneByteString); @@ -9256,8 +9471,7 @@ class SeqTwoByteString: public SeqString { // For regexp code. const uint16_t* SeqTwoByteStringGetData(unsigned start); - // Casting - static inline SeqTwoByteString* cast(Object* obj); + DECLARE_CAST(SeqTwoByteString) // Garbage collection support. This method is called by the // garbage collector to compute the actual size of a TwoByteString @@ -9271,7 +9485,7 @@ class SeqTwoByteString: public SeqString { // Maximal memory usage for a single sequential two-byte string. static const int kMaxSize = 512 * MB - 1; - STATIC_CHECK(static_cast<int>((kMaxSize - kHeaderSize)/sizeof(uint16_t)) >= + STATIC_ASSERT(static_cast<int>((kMaxSize - kHeaderSize)/sizeof(uint16_t)) >= String::kMaxLength); private: @@ -9308,8 +9522,7 @@ class ConsString: public String { // Dispatched behavior. uint16_t ConsStringGet(int index); - // Casting. - static inline ConsString* cast(Object* obj); + DECLARE_CAST(ConsString) // Layout description. static const int kFirstOffset = POINTER_SIZE_ALIGN(String::kSize); @@ -9346,14 +9559,13 @@ class SlicedString: public String { inline String* parent(); inline void set_parent(String* parent, WriteBarrierMode mode = UPDATE_WRITE_BARRIER); - inline int offset(); + inline int offset() const; inline void set_offset(int offset); // Dispatched behavior. 
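A side note on the IsAscii fast path shown just above: kUintptrAllBitsSet / 0xFF evaluates to 0x0101...01, and multiplying by 0x80 sets the top bit of every byte lane (0x8080...80), so a single AND per word spots any non-ASCII byte. A self-contained sketch of the same trick, using memcpy in place of V8's V8_HOST_CAN_READ_UNALIGNED-guarded load:

#include <cstddef>
#include <cstdint>
#include <cstring>

// Word-at-a-time ASCII test: a byte is non-ASCII iff its top bit is set.
bool AllAscii(const char* chars, size_t length) {
  const uintptr_t all_bits = ~static_cast<uintptr_t>(0);
  const uintptr_t non_ascii_mask = all_bits / 0xFF * 0x80;  // 0x8080...80
  size_t i = 0;
  for (; i + sizeof(uintptr_t) <= length; i += sizeof(uintptr_t)) {
    uintptr_t word;
    std::memcpy(&word, chars + i, sizeof(word));  // portable unaligned load
    if (word & non_ascii_mask) return false;
  }
  for (; i < length; i++) {  // leftover tail bytes
    if (static_cast<unsigned char>(chars[i]) & 0x80) return false;
  }
  return true;
}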
uint16_t SlicedStringGet(int index); - // Casting. - static inline SlicedString* cast(Object* obj); + DECLARE_CAST(SlicedString) // Layout description. static const int kParentOffset = POINTER_SIZE_ALIGN(String::kSize); @@ -9385,8 +9597,7 @@ class SlicedString: public String { // API. Therefore, ExternalStrings should not be used internally. class ExternalString: public String { public: - // Casting - static inline ExternalString* cast(Object* obj); + DECLARE_CAST(ExternalString) // Layout description. static const int kResourceOffset = POINTER_SIZE_ALIGN(String::kSize); @@ -9400,7 +9611,7 @@ class ExternalString: public String { // Return whether external string is short (data pointer is not cached). inline bool is_short(); - STATIC_CHECK(kResourceOffset == Internals::kStringResourceOffset); + STATIC_ASSERT(kResourceOffset == Internals::kStringResourceOffset); private: DISALLOW_IMPLICIT_CONSTRUCTORS(ExternalString); @@ -9430,8 +9641,7 @@ class ExternalAsciiString: public ExternalString { // Dispatched behavior. inline uint16_t ExternalAsciiStringGet(int index); - // Casting. - static inline ExternalAsciiString* cast(Object* obj); + DECLARE_CAST(ExternalAsciiString) // Garbage collection support. inline void ExternalAsciiStringIterateBody(ObjectVisitor* v); @@ -9470,8 +9680,7 @@ class ExternalTwoByteString: public ExternalString { // For regexp code. inline const uint16_t* ExternalTwoByteStringGetData(unsigned start); - // Casting. - static inline ExternalTwoByteString* cast(Object* obj); + DECLARE_CAST(ExternalTwoByteString) // Garbage collection support. inline void ExternalTwoByteStringIterateBody(ObjectVisitor* v); @@ -9544,7 +9753,8 @@ class ConsStringNullOp { class ConsStringIteratorOp { public: inline ConsStringIteratorOp() {} - inline ConsStringIteratorOp(ConsString* cons_string, int offset = 0) { + inline explicit ConsStringIteratorOp(ConsString* cons_string, + int offset = 0) { Reset(cons_string, offset); } inline void Reset(ConsString* cons_string, int offset = 0) { @@ -9633,11 +9843,10 @@ class Oddball: public HeapObject { // [to_number]: Cached to_number computed at startup. DECL_ACCESSORS(to_number, Object) - inline byte kind(); + inline byte kind() const; inline void set_kind(byte kind); - // Casting. - static inline Oddball* cast(Object* obj); + DECLARE_CAST(Oddball) // Dispatched behavior. DECLARE_VERIFIER(Oddball) @@ -9670,9 +9879,9 @@ class Oddball: public HeapObject { kToNumberOffset + kPointerSize, kSize> BodyDescriptor; - STATIC_CHECK(kKindOffset == Internals::kOddballKindOffset); - STATIC_CHECK(kNull == Internals::kNullOddballKind); - STATIC_CHECK(kUndefined == Internals::kUndefinedOddballKind); + STATIC_ASSERT(kKindOffset == Internals::kOddballKindOffset); + STATIC_ASSERT(kNull == Internals::kNullOddballKind); + STATIC_ASSERT(kUndefined == Internals::kUndefinedOddballKind); private: DISALLOW_IMPLICIT_CONSTRUCTORS(Oddball); @@ -9684,12 +9893,11 @@ class Cell: public HeapObject { // [value]: value of the global property. DECL_ACCESSORS(value, Object) - // Casting. - static inline Cell* cast(Object* obj); + DECLARE_CAST(Cell) static inline Cell* FromValueAddress(Address value) { Object* result = FromAddress(value - kValueOffset); - ASSERT(result->IsCell() || result->IsPropertyCell()); + DCHECK(result->IsCell() || result->IsPropertyCell()); return static_cast<Cell*>(result); } @@ -9739,8 +9947,7 @@ class PropertyCell: public Cell { static void AddDependentCompilationInfo(Handle<PropertyCell> cell, CompilationInfo* info); - // Casting. 
- static inline PropertyCell* cast(Object* obj); + DECLARE_CAST(PropertyCell) inline Address TypeAddress() { return address() + kTypeOffset; @@ -9777,8 +9984,7 @@ class JSProxy: public JSReceiver { // [hash]: The hash code property (undefined if not initialized yet). DECL_ACCESSORS(hash, Object) - // Casting. - static inline JSProxy* cast(Object* obj); + DECLARE_CAST(JSProxy) MUST_USE_RESULT static MaybeHandle<Object> GetPropertyWithHandler( Handle<JSProxy> proxy, @@ -9795,22 +10001,20 @@ class JSProxy: public JSReceiver { // otherwise set it to false. MUST_USE_RESULT static MaybeHandle<Object> SetPropertyViaPrototypesWithHandler( - Handle<JSProxy> proxy, - Handle<JSReceiver> receiver, - Handle<Name> name, - Handle<Object> value, - PropertyAttributes attributes, - StrictMode strict_mode, - bool* done); - - static PropertyAttributes GetPropertyAttributeWithHandler( - Handle<JSProxy> proxy, - Handle<JSReceiver> receiver, - Handle<Name> name); - static PropertyAttributes GetElementAttributeWithHandler( - Handle<JSProxy> proxy, - Handle<JSReceiver> receiver, - uint32_t index); + Handle<JSProxy> proxy, Handle<Object> receiver, Handle<Name> name, + Handle<Object> value, StrictMode strict_mode, bool* done); + + MUST_USE_RESULT static Maybe<PropertyAttributes> + GetPropertyAttributesWithHandler(Handle<JSProxy> proxy, + Handle<Object> receiver, + Handle<Name> name); + MUST_USE_RESULT static Maybe<PropertyAttributes> + GetElementAttributeWithHandler(Handle<JSProxy> proxy, + Handle<JSReceiver> receiver, + uint32_t index); + MUST_USE_RESULT static MaybeHandle<Object> SetPropertyWithHandler( + Handle<JSProxy> proxy, Handle<Object> receiver, Handle<Name> name, + Handle<Object> value, StrictMode strict_mode); // Turn the proxy into an (empty) JSObject. static void Fix(Handle<JSProxy> proxy); @@ -9841,7 +10045,7 @@ class JSProxy: public JSReceiver { static const int kHeaderSize = kPaddingOffset; static const int kPaddingSize = kSize - kPaddingOffset; - STATIC_CHECK(kPaddingSize >= 0); + STATIC_ASSERT(kPaddingSize >= 0); typedef FixedBodyDescriptor<kHandlerOffset, kPaddingOffset, @@ -9850,13 +10054,6 @@ class JSProxy: public JSReceiver { private: friend class JSReceiver; - MUST_USE_RESULT static MaybeHandle<Object> SetPropertyWithHandler( - Handle<JSProxy> proxy, - Handle<JSReceiver> receiver, - Handle<Name> name, - Handle<Object> value, - PropertyAttributes attributes, - StrictMode strict_mode); MUST_USE_RESULT static inline MaybeHandle<Object> SetElementWithHandler( Handle<JSProxy> proxy, Handle<JSReceiver> receiver, @@ -9864,9 +10061,10 @@ class JSProxy: public JSReceiver { Handle<Object> value, StrictMode strict_mode); - static bool HasPropertyWithHandler(Handle<JSProxy> proxy, Handle<Name> name); - static inline bool HasElementWithHandler(Handle<JSProxy> proxy, - uint32_t index); + MUST_USE_RESULT static Maybe<bool> HasPropertyWithHandler( + Handle<JSProxy> proxy, Handle<Name> name); + MUST_USE_RESULT static inline Maybe<bool> HasElementWithHandler( + Handle<JSProxy> proxy, uint32_t index); MUST_USE_RESULT static MaybeHandle<Object> DeletePropertyWithHandler( Handle<JSProxy> proxy, @@ -9879,7 +10077,7 @@ class JSProxy: public JSReceiver { MUST_USE_RESULT Object* GetIdentityHash(); - static Handle<Object> GetOrCreateIdentityHash(Handle<JSProxy> proxy); + static Handle<Smi> GetOrCreateIdentityHash(Handle<JSProxy> proxy); DISALLOW_IMPLICIT_CONSTRUCTORS(JSProxy); }; @@ -9893,8 +10091,7 @@ class JSFunctionProxy: public JSProxy { // [construct_trap]: The construct trap. 
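The Maybe<PropertyAttributes> returns introduced in the JSProxy hunk above separate "the proxy trap threw" from an actual answer. A small self-contained sketch of the caller-side pattern; the Maybe stand-in below only mirrors what the era's struct-style Maybe appears to look like (public has_value/value members, an assumption), and everything else is hypothetical:

#include <cstdio>

// Stand-in for v8::internal::Maybe<T> (an assumption, for illustration).
template <class T>
struct Maybe {
  Maybe() : has_value(false), value() {}
  explicit Maybe(T t) : has_value(true), value(t) {}
  bool has_value;
  T value;
};

enum PropertyAttributes { NONE = 0, READ_ONLY = 1 };

// Hypothetical handler query: an empty Maybe means the trap threw.
Maybe<PropertyAttributes> QueryHandler(bool trap_threw) {
  if (trap_threw) return Maybe<PropertyAttributes>();
  return Maybe<PropertyAttributes>(READ_ONLY);
}

int main() {
  Maybe<PropertyAttributes> result = QueryHandler(false);
  if (!result.has_value) return 1;  // caller propagates the pending exception
  std::printf("attributes: %d\n", result.value);
  return 0;
}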
DECL_ACCESSORS(construct_trap, Object) - // Casting. - static inline JSFunctionProxy* cast(Object* obj); + DECLARE_CAST(JSFunctionProxy) // Dispatched behavior. DECLARE_PRINTER(JSFunctionProxy) @@ -9907,7 +10104,7 @@ class JSFunctionProxy: public JSProxy { static const int kSize = JSFunction::kSize; static const int kPaddingSize = kSize - kPaddingOffset; - STATIC_CHECK(kPaddingSize >= 0); + STATIC_ASSERT(kPaddingSize >= 0); typedef FixedBodyDescriptor<kHandlerOffset, kConstructTrapOffset + kPointerSize, @@ -9918,43 +10115,42 @@ class JSFunctionProxy: public JSProxy { }; -// The JSSet describes EcmaScript Harmony sets -class JSSet: public JSObject { +class JSCollection : public JSObject { public: - // [set]: the backing hash set containing keys. + // [table]: the backing hash table DECL_ACCESSORS(table, Object) - // Casting. - static inline JSSet* cast(Object* obj); + static const int kTableOffset = JSObject::kHeaderSize; + static const int kSize = kTableOffset + kPointerSize; + + private: + DISALLOW_IMPLICIT_CONSTRUCTORS(JSCollection); +}; + + +// The JSSet describes EcmaScript Harmony sets +class JSSet : public JSCollection { + public: + DECLARE_CAST(JSSet) // Dispatched behavior. DECLARE_PRINTER(JSSet) DECLARE_VERIFIER(JSSet) - static const int kTableOffset = JSObject::kHeaderSize; - static const int kSize = kTableOffset + kPointerSize; - private: DISALLOW_IMPLICIT_CONSTRUCTORS(JSSet); }; // The JSMap describes EcmaScript Harmony maps -class JSMap: public JSObject { +class JSMap : public JSCollection { public: - // [table]: the backing hash table mapping keys to values. - DECL_ACCESSORS(table, Object) - - // Casting. - static inline JSMap* cast(Object* obj); + DECLARE_CAST(JSMap) // Dispatched behavior. DECLARE_PRINTER(JSMap) DECLARE_VERIFIER(JSMap) - static const int kTableOffset = JSObject::kHeaderSize; - static const int kSize = kTableOffset + kPointerSize; - private: DISALLOW_IMPLICIT_CONSTRUCTORS(JSMap); }; @@ -9963,17 +10159,15 @@ class JSMap: public JSObject { // OrderedHashTableIterator is an iterator that iterates over the keys and // values of an OrderedHashTable. // -// The hash table has a reference to the iterator and the iterators themselves -// have references to the [next_iterator] and [previous_iterator], thus creating -// a double linked list. +// The iterator has a reference to the underlying OrderedHashTable data, +// [table], as well as the current [index] the iterator is at. // -// When the hash table changes the iterators are called to update their [index] -// and [count]. The hash table calls [EntryRemoved], [TableCompacted] as well -// as [TableCleared]. +// When the OrderedHashTable is rehashed it adds a reference from the old table +// to the new table as well as storing enough data about the changes so that the +// iterator [index] can be adjusted accordingly. // -// When an iterator is done it closes itself. It removes itself from the double -// linked list and it sets its [table] to undefined, no longer keeping the -// [table] alive. +// When the [Next] result from the iterator is requested, the iterator checks if +// there is a newer table that it needs to transition to. template<class Derived, class TableType> class OrderedHashTableIterator: public JSObject { public: @@ -9983,29 +10177,17 @@ class OrderedHashTableIterator: public JSObject { // [index]: The index into the data table. DECL_ACCESSORS(index, Smi) - // [count]: The logical index into the data table, ignoring the holes. - DECL_ACCESSORS(count, Smi) - // [kind]: The kind of iteration this is. 
One of the [Kind] enum values. DECL_ACCESSORS(kind, Smi) - // [next_iterator]: Used as a double linked list for the live iterators. - DECL_ACCESSORS(next_iterator, Object) - - // [previous_iterator]: Used as a double linked list for the live iterators. - DECL_ACCESSORS(previous_iterator, Object) - #ifdef OBJECT_PRINT - void OrderedHashTableIteratorPrint(FILE* out); + void OrderedHashTableIteratorPrint(OStream& os); // NOLINT #endif static const int kTableOffset = JSObject::kHeaderSize; static const int kIndexOffset = kTableOffset + kPointerSize; - static const int kCountOffset = kIndexOffset + kPointerSize; - static const int kKindOffset = kCountOffset + kPointerSize; - static const int kNextIteratorOffset = kKindOffset + kPointerSize; - static const int kPreviousIteratorOffset = kNextIteratorOffset + kPointerSize; - static const int kSize = kPreviousIteratorOffset + kPointerSize; + static const int kKindOffset = kIndexOffset + kPointerSize; + static const int kSize = kKindOffset + kPointerSize; enum Kind { kKindKeys = 1, @@ -10013,45 +10195,28 @@ class OrderedHashTableIterator: public JSObject { kKindEntries = 3 }; - // Called by the underlying [table] when an entry is removed. - void EntryRemoved(int index); - - // Called by the underlying [table] when it is compacted/rehashed. - void TableCompacted() { - // All holes have been removed so index is now same as count. - set_index(count()); - } + // Whether the iterator has more elements. This needs to be called before + // calling |CurrentKey| and/or |CurrentValue|. + bool HasMore(); - // Called by the underlying [table] when it is cleared. - void TableCleared() { - set_index(Smi::FromInt(0)); - set_count(Smi::FromInt(0)); + // Move the index forward one. + void MoveNext() { + set_index(Smi::FromInt(Smi::cast(index())->value() + 1)); } - // Removes the iterator from the double linked list and removes its reference - // back to the [table]. - void Close(); + // Populates the array with the next key and value and then moves the iterator + // forward. + // This returns the |kind| or 0 if the iterator is already at the end. + Smi* Next(JSArray* value_array); - // Returns an iterator result object: {value: any, done: boolean} and moves - // the index to the next valid entry. Closes the iterator if moving past the - // end. - static Handle<JSObject> Next(Handle<Derived> iterator); - - protected: - static Handle<Derived> CreateInternal( - Handle<Map> map, Handle<TableType> table, int kind); + // Returns the current key of the iterator. This should only be called when + // |HasMore| returns true. + inline Object* CurrentKey(); private: - // Ensures [index] is not pointing to a hole. - void Seek(); - - // Moves [index] to next valid entry. Closes the iterator if moving past the - // end. - void MoveNext(); - - bool Closed() { - return table()->IsUndefined(); - } + // Transitions the iterator to the non obsolete backing store. This is a NOP + // if the [table] is not obsolete. + void Transition(); DISALLOW_IMPLICIT_CONSTRUCTORS(OrderedHashTableIterator); }; @@ -10060,20 +10225,15 @@ class OrderedHashTableIterator: public JSObject { class JSSetIterator: public OrderedHashTableIterator<JSSetIterator, OrderedHashSet> { public: - // Creates a new iterator associated with [table]. - // [kind] needs to be one of the OrderedHashTableIterator Kind enum values. - static inline Handle<JSSetIterator> Create( - Handle<OrderedHashSet> table, int kind); - // Dispatched behavior. DECLARE_PRINTER(JSSetIterator) DECLARE_VERIFIER(JSSetIterator) - // Casting. 
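Given the protocol spelled out above (|HasMore| first, which also migrates the iterator off an obsolete table, then |CurrentKey|, then |MoveNext|), a consumer loop would look roughly like the following; the driver function itself is hypothetical, the iterator methods are the ones declared here:

// Hypothetical driver for the iterator protocol declared above.
void DrainSetIterator(JSSetIterator* it) {
  while (it->HasMore()) {            // may transition to a newer table
    Object* key = it->CurrentKey();  // valid only right after HasMore()
    // ... use key ...
    it->MoveNext();                  // advance the index by one
  }
}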
- static inline JSSetIterator* cast(Object* obj); + DECLARE_CAST(JSSetIterator) - static Handle<Object> ValueForKind( - Handle<JSSetIterator> iterator, int entry_index); + // Called by |Next| to populate the array. This allows the subclasses to + // populate the array differently. + inline void PopulateValueArray(FixedArray* array); private: DISALLOW_IMPLICIT_CONSTRUCTORS(JSSetIterator); @@ -10083,22 +10243,21 @@ class JSSetIterator: public OrderedHashTableIterator<JSSetIterator, class JSMapIterator: public OrderedHashTableIterator<JSMapIterator, OrderedHashMap> { public: - // Creates a new iterator associated with [table]. - // [kind] needs to be one of the OrderedHashTableIterator Kind enum values. - static inline Handle<JSMapIterator> Create( - Handle<OrderedHashMap> table, int kind); - // Dispatched behavior. DECLARE_PRINTER(JSMapIterator) DECLARE_VERIFIER(JSMapIterator) - // Casting. - static inline JSMapIterator* cast(Object* obj); + DECLARE_CAST(JSMapIterator) - static Handle<Object> ValueForKind( - Handle<JSMapIterator> iterator, int entry_index); + // Called by |Next| to populate the array. This allows the subclasses to + // populate the array differently. + inline void PopulateValueArray(FixedArray* array); private: + // Returns the current value of the iterator. This should only be called when + // |HasMore| returns true. + inline Object* CurrentValue(); + DISALLOW_IMPLICIT_CONSTRUCTORS(JSMapIterator); }; @@ -10124,8 +10283,7 @@ class JSWeakCollection: public JSObject { // The JSWeakMap describes EcmaScript Harmony weak maps class JSWeakMap: public JSWeakCollection { public: - // Casting. - static inline JSWeakMap* cast(Object* obj); + DECLARE_CAST(JSWeakMap) // Dispatched behavior. DECLARE_PRINTER(JSWeakMap) @@ -10139,8 +10297,7 @@ class JSWeakMap: public JSWeakCollection { // The JSWeakSet describes EcmaScript Harmony weak sets class JSWeakSet: public JSWeakCollection { public: - // Casting. - static inline JSWeakSet* cast(Object* obj); + DECLARE_CAST(JSWeakSet) // Dispatched behavior. DECLARE_PRINTER(JSWeakSet) @@ -10174,8 +10331,7 @@ class JSArrayBuffer: public JSObject { // [weak_first_array]: weak linked list of views. DECL_ACCESSORS(weak_first_view, Object) - // Casting. - static inline JSArrayBuffer* cast(Object* obj); + DECLARE_CAST(JSArrayBuffer) // Neutering. Only neuters the buffer, not associated typed arrays. void Neuter(); @@ -10217,8 +10373,7 @@ class JSArrayBufferView: public JSObject { // [weak_next]: linked list of typed arrays over the same array buffer. DECL_ACCESSORS(weak_next, Object) - // Casting. - static inline JSArrayBufferView* cast(Object* obj); + DECLARE_CAST(JSArrayBufferView) DECLARE_VERIFIER(JSArrayBufferView) @@ -10244,8 +10399,7 @@ class JSTypedArray: public JSArrayBufferView { // Neutering. Only neuters this typed array. void Neuter(); - // Casting. - static inline JSTypedArray* cast(Object* obj); + DECLARE_CAST(JSTypedArray) ExternalArrayType type(); size_t element_size(); @@ -10275,8 +10429,7 @@ class JSDataView: public JSArrayBufferView { // Only neuters this DataView void Neuter(); - // Casting. - static inline JSDataView* cast(Object* obj); + DECLARE_CAST(JSDataView) // Dispatched behavior. DECLARE_PRINTER(JSDataView) @@ -10301,8 +10454,7 @@ class Foreign: public HeapObject { inline Address foreign_address(); inline void set_foreign_address(Address value); - // Casting. - static inline Foreign* cast(Object* obj); + DECLARE_CAST(Foreign) // Dispatched behavior. 
inline void ForeignIterateBody(ObjectVisitor* v); @@ -10319,7 +10471,7 @@ class Foreign: public HeapObject { static const int kForeignAddressOffset = HeapObject::kHeaderSize; static const int kSize = kForeignAddressOffset + kPointerSize; - STATIC_CHECK(kForeignAddressOffset == Internals::kForeignAddressOffset); + STATIC_ASSERT(kForeignAddressOffset == Internals::kForeignAddressOffset); private: DISALLOW_IMPLICIT_CONSTRUCTORS(Foreign); @@ -10344,6 +10496,10 @@ class JSArray: public JSObject { uint32_t index, Handle<Object> value); + static bool IsReadOnlyLengthDescriptor(Handle<Map> jsarray_map); + static bool WouldChangeReadOnlyLength(Handle<JSArray> array, uint32_t index); + static MaybeHandle<Object> ReadOnlyLengthError(Handle<JSArray> array); + // Initialize the array with the given capacity. The function may // fail due to out-of-memory situations, but only if the requested // capacity is non-zero. @@ -10360,8 +10516,7 @@ class JSArray: public JSObject { static inline void SetContent(Handle<JSArray> array, Handle<FixedArrayBase> storage); - // Casting. - static inline JSArray* cast(Object* obj); + DECLARE_CAST(JSArray) // Ensures that the fixed array backing the JSArray has at // least the stated size. @@ -10425,16 +10580,16 @@ class AccessorInfo: public Struct { inline bool all_can_write(); inline void set_all_can_write(bool value); - inline bool prohibits_overwriting(); - inline void set_prohibits_overwriting(bool value); - inline PropertyAttributes property_attributes(); inline void set_property_attributes(PropertyAttributes attributes); // Checks whether the given receiver is compatible with this accessor. + static bool IsCompatibleReceiverType(Isolate* isolate, + Handle<AccessorInfo> info, + Handle<HeapType> type); inline bool IsCompatibleReceiver(Object* receiver); - static inline AccessorInfo* cast(Object* obj); + DECLARE_CAST(AccessorInfo) // Dispatched behavior. DECLARE_VERIFIER(AccessorInfo) @@ -10451,11 +10606,13 @@ class AccessorInfo: public Struct { static const int kSize = kExpectedReceiverTypeOffset + kPointerSize; private: + inline bool HasExpectedReceiverType() { + return expected_receiver_type()->IsFunctionTemplateInfo(); + } // Bit positions in flag. static const int kAllCanReadBit = 0; static const int kAllCanWriteBit = 1; - static const int kProhibitsOverwritingBit = 2; - class AttributesField: public BitField<PropertyAttributes, 3, 3> {}; + class AttributesField: public BitField<PropertyAttributes, 2, 3> {}; DISALLOW_IMPLICIT_CONSTRUCTORS(AccessorInfo); }; @@ -10533,7 +10690,7 @@ class DeclaredAccessorDescriptor: public Struct { public: DECL_ACCESSORS(serialized_data, ByteArray) - static inline DeclaredAccessorDescriptor* cast(Object* obj); + DECLARE_CAST(DeclaredAccessorDescriptor) static Handle<DeclaredAccessorDescriptor> Create( Isolate* isolate, @@ -10556,7 +10713,7 @@ class DeclaredAccessorInfo: public AccessorInfo { public: DECL_ACCESSORS(descriptor, DeclaredAccessorDescriptor) - static inline DeclaredAccessorInfo* cast(Object* obj); + DECLARE_CAST(DeclaredAccessorInfo) // Dispatched behavior. DECLARE_PRINTER(DeclaredAccessorInfo) @@ -10577,7 +10734,7 @@ class DeclaredAccessorInfo: public AccessorInfo { // the request is ignored. // // If the accessor in the prototype has the READ_ONLY property attribute, then -// a new value is added to the local object when the property is set. +// a new value is added to the derived object when the property is set. // This shadows the accessor in the prototype. 
class ExecutableAccessorInfo: public AccessorInfo { public: @@ -10585,7 +10742,7 @@ class ExecutableAccessorInfo: public AccessorInfo { DECL_ACCESSORS(setter, Object) DECL_ACCESSORS(data, Object) - static inline ExecutableAccessorInfo* cast(Object* obj); + DECLARE_CAST(ExecutableAccessorInfo) // Dispatched behavior. DECLARE_PRINTER(ExecutableAccessorInfo) @@ -10596,6 +10753,8 @@ class ExecutableAccessorInfo: public AccessorInfo { static const int kDataOffset = kSetterOffset + kPointerSize; static const int kSize = kDataOffset + kPointerSize; + inline void clear_setter(); + private: DISALLOW_IMPLICIT_CONSTRUCTORS(ExecutableAccessorInfo); }; @@ -10607,20 +10766,12 @@ class ExecutableAccessorInfo: public AccessorInfo { // * undefined: considered an accessor by the spec, too, strangely enough // * the hole: an accessor which has not been set // * a pointer to a map: a transition used to ensure map sharing -// access_flags provides the ability to override access checks on access check -// failure. class AccessorPair: public Struct { public: DECL_ACCESSORS(getter, Object) DECL_ACCESSORS(setter, Object) - DECL_ACCESSORS(access_flags, Smi) - inline void set_access_flags(v8::AccessControl access_control); - inline bool all_can_read(); - inline bool all_can_write(); - inline bool prohibits_overwriting(); - - static inline AccessorPair* cast(Object* obj); + DECLARE_CAST(AccessorPair) static Handle<AccessorPair> Copy(Handle<AccessorPair> pair); @@ -10655,14 +10806,9 @@ class AccessorPair: public Struct { static const int kGetterOffset = HeapObject::kHeaderSize; static const int kSetterOffset = kGetterOffset + kPointerSize; - static const int kAccessFlagsOffset = kSetterOffset + kPointerSize; - static const int kSize = kAccessFlagsOffset + kPointerSize; + static const int kSize = kSetterOffset + kPointerSize; private: - static const int kAllCanReadBit = 0; - static const int kAllCanWriteBit = 1; - static const int kProhibitsOverwritingBit = 2; - // Strangely enough, in addition to functions and harmony proxies, the spec // requires us to consider undefined as a kind of accessor, too: // var obj = {}; @@ -10682,7 +10828,7 @@ class AccessCheckInfo: public Struct { DECL_ACCESSORS(indexed_callback, Object) DECL_ACCESSORS(data, Object) - static inline AccessCheckInfo* cast(Object* obj); + DECLARE_CAST(AccessCheckInfo) // Dispatched behavior. DECLARE_PRINTER(AccessCheckInfo) @@ -10707,7 +10853,7 @@ class InterceptorInfo: public Struct { DECL_ACCESSORS(enumerator, Object) DECL_ACCESSORS(data, Object) - static inline InterceptorInfo* cast(Object* obj); + DECLARE_CAST(InterceptorInfo) // Dispatched behavior. DECLARE_PRINTER(InterceptorInfo) @@ -10731,7 +10877,7 @@ class CallHandlerInfo: public Struct { DECL_ACCESSORS(callback, Object) DECL_ACCESSORS(data, Object) - static inline CallHandlerInfo* cast(Object* obj); + DECLARE_CAST(CallHandlerInfo) // Dispatched behavior. DECLARE_PRINTER(CallHandlerInfo) @@ -10780,7 +10926,7 @@ class FunctionTemplateInfo: public TemplateInfo { DECL_ACCESSORS(access_check_info, Object) DECL_ACCESSORS(flag, Smi) - inline int length(); + inline int length() const; inline void set_length(int value); // Following properties use flag bits. @@ -10793,7 +10939,7 @@ class FunctionTemplateInfo: public TemplateInfo { DECL_BOOLEAN_ACCESSORS(remove_prototype) DECL_BOOLEAN_ACCESSORS(do_not_cache) - static inline FunctionTemplateInfo* cast(Object* obj); + DECLARE_CAST(FunctionTemplateInfo) // Dispatched behavior. 
DECLARE_PRINTER(FunctionTemplateInfo) @@ -10842,7 +10988,7 @@ class ObjectTemplateInfo: public TemplateInfo { DECL_ACCESSORS(constructor, Object) DECL_ACCESSORS(internal_field_count, Object) - static inline ObjectTemplateInfo* cast(Object* obj); + DECLARE_CAST(ObjectTemplateInfo) // Dispatched behavior. DECLARE_PRINTER(ObjectTemplateInfo) @@ -10860,7 +11006,7 @@ class SignatureInfo: public Struct { DECL_ACCESSORS(receiver, Object) DECL_ACCESSORS(args, Object) - static inline SignatureInfo* cast(Object* obj); + DECLARE_CAST(SignatureInfo) // Dispatched behavior. DECLARE_PRINTER(SignatureInfo) @@ -10879,7 +11025,7 @@ class TypeSwitchInfo: public Struct { public: DECL_ACCESSORS(types, Object) - static inline TypeSwitchInfo* cast(Object* obj); + DECLARE_CAST(TypeSwitchInfo) // Dispatched behavior. DECLARE_PRINTER(TypeSwitchInfo) @@ -10924,7 +11070,7 @@ class DebugInfo: public Struct { // Get the number of break points for this function. int GetBreakPointCount(); - static inline DebugInfo* cast(Object* obj); + DECLARE_CAST(DebugInfo) // Dispatched behavior. DECLARE_PRINTER(DebugInfo) @@ -10939,6 +11085,8 @@ class DebugInfo: public Struct { kActiveBreakPointsCountIndex + kPointerSize; static const int kSize = kBreakPointsStateIndex + kPointerSize; + static const int kEstimatedNofBreakPointsInFunction = 16; + private: static const int kNoBreakPointInfo = -1; @@ -10976,7 +11124,7 @@ class BreakPointInfo: public Struct { // Get the number of break points for this code position. int GetBreakPointCount(); - static inline BreakPointInfo* cast(Object* obj); + DECLARE_CAST(BreakPointInfo) // Dispatched behavior. DECLARE_PRINTER(BreakPointInfo) @@ -10997,6 +11145,7 @@ class BreakPointInfo: public Struct { #undef DECL_BOOLEAN_ACCESSORS #undef DECL_ACCESSORS +#undef DECLARE_CAST #undef DECLARE_VERIFIER #define VISITOR_SYNCHRONIZATION_TAGS_LIST(V) \ diff --git a/deps/v8/src/optimizing-compiler-thread.cc b/deps/v8/src/optimizing-compiler-thread.cc index f26eb88acbb..0074adbefe1 100644 --- a/deps/v8/src/optimizing-compiler-thread.cc +++ b/deps/v8/src/optimizing-compiler-thread.cc @@ -2,20 +2,21 @@ // Use of this source code is governed by a BSD-style license that can be // found in the LICENSE file. 
-#include "optimizing-compiler-thread.h" +#include "src/optimizing-compiler-thread.h" -#include "v8.h" +#include "src/v8.h" -#include "full-codegen.h" -#include "hydrogen.h" -#include "isolate.h" -#include "v8threads.h" +#include "src/base/atomicops.h" +#include "src/full-codegen.h" +#include "src/hydrogen.h" +#include "src/isolate.h" +#include "src/v8threads.h" namespace v8 { namespace internal { OptimizingCompilerThread::~OptimizingCompilerThread() { - ASSERT_EQ(0, input_queue_length_); + DCHECK_EQ(0, input_queue_length_); DeleteArray(input_queue_); if (FLAG_concurrent_osr) { #ifdef DEBUG @@ -30,7 +31,7 @@ OptimizingCompilerThread::~OptimizingCompilerThread() { void OptimizingCompilerThread::Run() { #ifdef DEBUG - { LockGuard<Mutex> lock_guard(&thread_id_mutex_); + { base::LockGuard<base::Mutex> lock_guard(&thread_id_mutex_); thread_id_ = ThreadId::Current().ToInteger(); } #endif @@ -39,19 +40,18 @@ void OptimizingCompilerThread::Run() { DisallowHandleAllocation no_handles; DisallowHandleDereference no_deref; - ElapsedTimer total_timer; + base::ElapsedTimer total_timer; if (FLAG_trace_concurrent_recompilation) total_timer.Start(); while (true) { input_queue_semaphore_.Wait(); - Logger::TimerEventScope timer( - isolate_, Logger::TimerEventScope::v8_recompile_concurrent); + TimerEventScope<TimerEventRecompileConcurrent> timer(isolate_); if (FLAG_concurrent_recompilation_delay != 0) { - OS::Sleep(FLAG_concurrent_recompilation_delay); + base::OS::Sleep(FLAG_concurrent_recompilation_delay); } - switch (static_cast<StopFlag>(Acquire_Load(&stop_thread_))) { + switch (static_cast<StopFlag>(base::Acquire_Load(&stop_thread_))) { case CONTINUE: break; case STOP: @@ -65,13 +65,14 @@ void OptimizingCompilerThread::Run() { { AllowHandleDereference allow_handle_dereference; FlushInputQueue(true); } - Release_Store(&stop_thread_, static_cast<AtomicWord>(CONTINUE)); + base::Release_Store(&stop_thread_, + static_cast<base::AtomicWord>(CONTINUE)); stop_semaphore_.Signal(); // Return to start of consumer loop. continue; } - ElapsedTimer compiling_timer; + base::ElapsedTimer compiling_timer; if (FLAG_trace_concurrent_recompilation) compiling_timer.Start(); CompileNext(); @@ -84,10 +85,10 @@ void OptimizingCompilerThread::Run() { OptimizedCompileJob* OptimizingCompilerThread::NextInput() { - LockGuard<Mutex> access_input_queue_(&input_queue_mutex_); + base::LockGuard<base::Mutex> access_input_queue_(&input_queue_mutex_); if (input_queue_length_ == 0) return NULL; OptimizedCompileJob* job = input_queue_[InputQueueIndex(0)]; - ASSERT_NE(NULL, job); + DCHECK_NE(NULL, job); input_queue_shift_ = InputQueueIndex(1); input_queue_length_--; return job; @@ -96,12 +97,12 @@ OptimizedCompileJob* OptimizingCompilerThread::NextInput() { void OptimizingCompilerThread::CompileNext() { OptimizedCompileJob* job = NextInput(); - ASSERT_NE(NULL, job); + DCHECK_NE(NULL, job); // The function may have already been optimized by OSR. Simply continue. OptimizedCompileJob::Status status = job->OptimizeGraph(); USE(status); // Prevent an unused-variable error in release mode. - ASSERT(status != OptimizedCompileJob::FAILED); + DCHECK(status != OptimizedCompileJob::FAILED); // The function may have already been optimized by OSR. Simply continue. 
// Use a mutex to make sure that functions marked for install @@ -168,8 +169,8 @@ void OptimizingCompilerThread::FlushOsrBuffer(bool restore_function_code) { void OptimizingCompilerThread::Flush() { - ASSERT(!IsOptimizerThread()); - Release_Store(&stop_thread_, static_cast<AtomicWord>(FLUSH)); + DCHECK(!IsOptimizerThread()); + base::Release_Store(&stop_thread_, static_cast<base::AtomicWord>(FLUSH)); if (FLAG_block_concurrent_recompilation) Unblock(); input_queue_semaphore_.Signal(); stop_semaphore_.Wait(); @@ -182,8 +183,8 @@ void OptimizingCompilerThread::Flush() { void OptimizingCompilerThread::Stop() { - ASSERT(!IsOptimizerThread()); - Release_Store(&stop_thread_, static_cast<AtomicWord>(STOP)); + DCHECK(!IsOptimizerThread()); + base::Release_Store(&stop_thread_, static_cast<base::AtomicWord>(STOP)); if (FLAG_block_concurrent_recompilation) Unblock(); input_queue_semaphore_.Signal(); stop_semaphore_.Wait(); @@ -215,7 +216,7 @@ void OptimizingCompilerThread::Stop() { void OptimizingCompilerThread::InstallOptimizedFunctions() { - ASSERT(!IsOptimizerThread()); + DCHECK(!IsOptimizerThread()); HandleScope handle_scope(isolate_); OptimizedCompileJob* job; @@ -248,23 +249,23 @@ void OptimizingCompilerThread::InstallOptimizedFunctions() { void OptimizingCompilerThread::QueueForOptimization(OptimizedCompileJob* job) { - ASSERT(IsQueueAvailable()); - ASSERT(!IsOptimizerThread()); + DCHECK(IsQueueAvailable()); + DCHECK(!IsOptimizerThread()); CompilationInfo* info = job->info(); if (info->is_osr()) { osr_attempts_++; AddToOsrBuffer(job); // Add job to the front of the input queue. - LockGuard<Mutex> access_input_queue(&input_queue_mutex_); - ASSERT_LT(input_queue_length_, input_queue_capacity_); + base::LockGuard<base::Mutex> access_input_queue(&input_queue_mutex_); + DCHECK_LT(input_queue_length_, input_queue_capacity_); // Move shift_ back by one. input_queue_shift_ = InputQueueIndex(input_queue_capacity_ - 1); input_queue_[InputQueueIndex(0)] = job; input_queue_length_++; } else { // Add job to the back of the input queue. 
- LockGuard<Mutex> access_input_queue(&input_queue_mutex_); - ASSERT_LT(input_queue_length_, input_queue_capacity_); + base::LockGuard<base::Mutex> access_input_queue(&input_queue_mutex_); + DCHECK_LT(input_queue_length_, input_queue_capacity_); input_queue_[InputQueueIndex(input_queue_length_)] = job; input_queue_length_++; } @@ -277,7 +278,7 @@ void OptimizingCompilerThread::QueueForOptimization(OptimizedCompileJob* job) { void OptimizingCompilerThread::Unblock() { - ASSERT(!IsOptimizerThread()); + DCHECK(!IsOptimizerThread()); while (blocked_jobs_ > 0) { input_queue_semaphore_.Signal(); blocked_jobs_--; @@ -287,7 +288,7 @@ void OptimizingCompilerThread::Unblock() { OptimizedCompileJob* OptimizingCompilerThread::FindReadyOSRCandidate( Handle<JSFunction> function, BailoutId osr_ast_id) { - ASSERT(!IsOptimizerThread()); + DCHECK(!IsOptimizerThread()); for (int i = 0; i < osr_buffer_capacity_; i++) { OptimizedCompileJob* current = osr_buffer_[i]; if (current != NULL && @@ -304,7 +305,7 @@ OptimizedCompileJob* OptimizingCompilerThread::FindReadyOSRCandidate( bool OptimizingCompilerThread::IsQueuedForOSR(Handle<JSFunction> function, BailoutId osr_ast_id) { - ASSERT(!IsOptimizerThread()); + DCHECK(!IsOptimizerThread()); for (int i = 0; i < osr_buffer_capacity_; i++) { OptimizedCompileJob* current = osr_buffer_[i]; if (current != NULL && @@ -317,7 +318,7 @@ bool OptimizingCompilerThread::IsQueuedForOSR(Handle<JSFunction> function, bool OptimizingCompilerThread::IsQueuedForOSR(JSFunction* function) { - ASSERT(!IsOptimizerThread()); + DCHECK(!IsOptimizerThread()); for (int i = 0; i < osr_buffer_capacity_; i++) { OptimizedCompileJob* current = osr_buffer_[i]; if (current != NULL && *current->info()->closure() == function) { @@ -329,7 +330,7 @@ bool OptimizingCompilerThread::IsQueuedForOSR(JSFunction* function) { void OptimizingCompilerThread::AddToOsrBuffer(OptimizedCompileJob* job) { - ASSERT(!IsOptimizerThread()); + DCHECK(!IsOptimizerThread()); // Find the next slot that is empty or has a stale job. OptimizedCompileJob* stale = NULL; while (true) { @@ -340,7 +341,7 @@ void OptimizingCompilerThread::AddToOsrBuffer(OptimizedCompileJob* job) { // Add to found slot and dispose the evicted job. 
if (stale != NULL) { - ASSERT(stale->IsWaitingForInstall()); + DCHECK(stale->IsWaitingForInstall()); CompilationInfo* info = stale->info(); if (FLAG_trace_osr) { PrintF("[COSR - Discarded "); @@ -362,7 +363,7 @@ bool OptimizingCompilerThread::IsOptimizerThread(Isolate* isolate) { bool OptimizingCompilerThread::IsOptimizerThread() { - LockGuard<Mutex> lock_guard(&thread_id_mutex_); + base::LockGuard<base::Mutex> lock_guard(&thread_id_mutex_); return ThreadId::Current().ToInteger() == thread_id_; } #endif diff --git a/deps/v8/src/optimizing-compiler-thread.h b/deps/v8/src/optimizing-compiler-thread.h index 7e27527009b..6ff4f2a61ab 100644 --- a/deps/v8/src/optimizing-compiler-thread.h +++ b/deps/v8/src/optimizing-compiler-thread.h @@ -5,13 +5,13 @@ #ifndef V8_OPTIMIZING_COMPILER_THREAD_H_ #define V8_OPTIMIZING_COMPILER_THREAD_H_ -#include "atomicops.h" -#include "flags.h" -#include "list.h" -#include "platform.h" -#include "platform/mutex.h" -#include "platform/time.h" -#include "unbound-queue-inl.h" +#include "src/base/atomicops.h" +#include "src/base/platform/mutex.h" +#include "src/base/platform/platform.h" +#include "src/base/platform/time.h" +#include "src/flags.h" +#include "src/list.h" +#include "src/unbound-queue-inl.h" namespace v8 { namespace internal { @@ -20,25 +20,26 @@ class HOptimizedGraphBuilder; class OptimizedCompileJob; class SharedFunctionInfo; -class OptimizingCompilerThread : public Thread { +class OptimizingCompilerThread : public base::Thread { public: - explicit OptimizingCompilerThread(Isolate *isolate) : - Thread("OptimizingCompilerThread"), + explicit OptimizingCompilerThread(Isolate* isolate) + : Thread(Options("OptimizingCompilerThread")), #ifdef DEBUG - thread_id_(0), + thread_id_(0), #endif - isolate_(isolate), - stop_semaphore_(0), - input_queue_semaphore_(0), - input_queue_capacity_(FLAG_concurrent_recompilation_queue_length), - input_queue_length_(0), - input_queue_shift_(0), - osr_buffer_capacity_(FLAG_concurrent_recompilation_queue_length + 4), - osr_buffer_cursor_(0), - osr_hits_(0), - osr_attempts_(0), - blocked_jobs_(0) { - NoBarrier_Store(&stop_thread_, static_cast<AtomicWord>(CONTINUE)); + isolate_(isolate), + stop_semaphore_(0), + input_queue_semaphore_(0), + input_queue_capacity_(FLAG_concurrent_recompilation_queue_length), + input_queue_length_(0), + input_queue_shift_(0), + osr_buffer_capacity_(FLAG_concurrent_recompilation_queue_length + 4), + osr_buffer_cursor_(0), + osr_hits_(0), + osr_attempts_(0), + blocked_jobs_(0) { + base::NoBarrier_Store(&stop_thread_, + static_cast<base::AtomicWord>(CONTINUE)); input_queue_ = NewArray<OptimizedCompileJob*>(input_queue_capacity_); if (FLAG_concurrent_osr) { // Allocate and mark OSR buffer slots as empty. 
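The input queue set up in this constructor is a circular buffer: logical slot i lives at physical slot (i + shift) % capacity, and queueing an OSR job at the front (as in QueueForOptimization above) just steps the shift back one slot instead of copying entries. A standalone sketch of that arithmetic, mirroring InputQueueIndex() in the header below:

#include <cassert>

int InputQueueIndex(int i, int shift, int capacity) {
  return (i + shift) % capacity;
}

int main() {
  const int capacity = 8;
  int shift = 5;
  // Logical slots 0..3 map to physical slots 5, 6, 7, 0.
  assert(InputQueueIndex(0, shift, capacity) == 5);
  assert(InputQueueIndex(3, shift, capacity) == 0);
  // Front insertion (the OSR case): step the shift back by one so the new
  // job becomes logical slot 0 without moving any existing entries.
  shift = InputQueueIndex(capacity - 1, shift, capacity);  // now 4
  assert(InputQueueIndex(0, shift, capacity) == 4);
  return 0;
}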
@@ -62,7 +63,7 @@ class OptimizingCompilerThread : public Thread { bool IsQueuedForOSR(JSFunction* function); inline bool IsQueueAvailable() { - LockGuard<Mutex> access_input_queue(&input_queue_mutex_); + base::LockGuard<base::Mutex> access_input_queue(&input_queue_mutex_); return input_queue_length_ < input_queue_capacity_; } @@ -97,26 +98,26 @@ class OptimizingCompilerThread : public Thread { inline int InputQueueIndex(int i) { int result = (i + input_queue_shift_) % input_queue_capacity_; - ASSERT_LE(0, result); - ASSERT_LT(result, input_queue_capacity_); + DCHECK_LE(0, result); + DCHECK_LT(result, input_queue_capacity_); return result; } #ifdef DEBUG int thread_id_; - Mutex thread_id_mutex_; + base::Mutex thread_id_mutex_; #endif Isolate* isolate_; - Semaphore stop_semaphore_; - Semaphore input_queue_semaphore_; + base::Semaphore stop_semaphore_; + base::Semaphore input_queue_semaphore_; // Circular queue of incoming recompilation tasks (including OSR). OptimizedCompileJob** input_queue_; int input_queue_capacity_; int input_queue_length_; int input_queue_shift_; - Mutex input_queue_mutex_; + base::Mutex input_queue_mutex_; // Queue of recompilation tasks ready to be installed (excluding OSR). UnboundQueue<OptimizedCompileJob*> output_queue_; @@ -126,9 +127,9 @@ class OptimizingCompilerThread : public Thread { int osr_buffer_capacity_; int osr_buffer_cursor_; - volatile AtomicWord stop_thread_; - TimeDelta time_spent_compiling_; - TimeDelta time_spent_total_; + volatile base::AtomicWord stop_thread_; + base::TimeDelta time_spent_compiling_; + base::TimeDelta time_spent_total_; int osr_hits_; int osr_attempts_; diff --git a/deps/v8/src/ostreams.cc b/deps/v8/src/ostreams.cc new file mode 100644 index 00000000000..62304eb9081 --- /dev/null +++ b/deps/v8/src/ostreams.cc @@ -0,0 +1,185 @@ +// Copyright 2014 the V8 project authors. All rights reserved. +// Use of this source code is governed by a BSD-style license that can be +// found in the LICENSE file. + +#include <algorithm> +#include <cctype> +#include <cmath> + +#include "src/base/platform/platform.h" // For isinf/isnan with MSVC +#include "src/ostreams.h" + +#if V8_OS_WIN +#define snprintf sprintf_s +#endif + +namespace v8 { +namespace internal { + +// Be lazy and delegate the value=>char conversion to snprintf. +template<class T> +OStream& OStream::print(const char* format, T x) { + char buf[32]; + int n = snprintf(buf, sizeof(buf), format, x); + return (n < 0) ? *this : write(buf, n); +} + + +OStream& OStream::operator<<(short x) { // NOLINT(runtime/int) + return print(hex_ ? "%hx" : "%hd", x); +} + + +OStream& OStream::operator<<(unsigned short x) { // NOLINT(runtime/int) + return print(hex_ ? "%hx" : "%hu", x); +} + + +OStream& OStream::operator<<(int x) { + return print(hex_ ? "%x" : "%d", x); +} + + +OStream& OStream::operator<<(unsigned int x) { + return print(hex_ ? "%x" : "%u", x); +} + + +OStream& OStream::operator<<(long x) { // NOLINT(runtime/int) + return print(hex_ ? "%lx" : "%ld", x); +} + + +OStream& OStream::operator<<(unsigned long x) { // NOLINT(runtime/int) + return print(hex_ ? "%lx" : "%lu", x); +} + + +OStream& OStream::operator<<(long long x) { // NOLINT(runtime/int) + return print(hex_ ? "%llx" : "%lld", x); +} + + +OStream& OStream::operator<<(unsigned long long x) { // NOLINT(runtime/int) + return print(hex_ ? "%llx" : "%llu", x); +} + + +OStream& OStream::operator<<(double x) { + if (std::isinf(x)) return *this << (x < 0 ? 
"-inf" : "inf"); + if (std::isnan(x)) return *this << "nan"; + return print("%g", x); +} + + +OStream& OStream::operator<<(void* x) { + return print("%p", x); +} + + +OStream& OStream::operator<<(char x) { + return put(x); +} + + +OStream& OStream::operator<<(signed char x) { + return put(x); +} + + +OStream& OStream::operator<<(unsigned char x) { + return put(x); +} + + +OStream& OStream::dec() { + hex_ = false; + return *this; +} + + +OStream& OStream::hex() { + hex_ = true; + return *this; +} + + +OStream& flush(OStream& os) { // NOLINT(runtime/references) + return os.flush(); +} + + +OStream& endl(OStream& os) { // NOLINT(runtime/references) + return flush(os.put('\n')); +} + + +OStream& hex(OStream& os) { // NOLINT(runtime/references) + return os.hex(); +} + + +OStream& dec(OStream& os) { // NOLINT(runtime/references) + return os.dec(); +} + + +OStringStream& OStringStream::write(const char* s, size_t n) { + size_t new_size = size_ + n; + if (new_size < size_) return *this; // Overflow => no-op. + reserve(new_size + 1); + memcpy(data_ + size_, s, n); + size_ = new_size; + data_[size_] = '\0'; + return *this; +} + + +OStringStream& OStringStream::flush() { + return *this; +} + + +void OStringStream::reserve(size_t requested_capacity) { + if (requested_capacity <= capacity_) return; + size_t new_capacity = // Handle possible overflow by not doubling. + std::max(std::max(capacity_ * 2, capacity_), requested_capacity); + char * new_data = allocate(new_capacity); + memcpy(new_data, data_, size_); + deallocate(data_, capacity_); + capacity_ = new_capacity; + data_ = new_data; +} + + +OFStream& OFStream::write(const char* s, size_t n) { + if (f_) fwrite(s, n, 1, f_); + return *this; +} + + +OFStream& OFStream::flush() { + if (f_) fflush(f_); + return *this; +} + + +OStream& operator<<(OStream& os, const AsReversiblyEscapedUC16& c) { + char buf[10]; + const char* format = + (std::isprint(c.value) || std::isspace(c.value)) && c.value != '\\' + ? "%c" + : (c.value <= 0xff) ? "\\x%02x" : "\\u%04x"; + snprintf(buf, sizeof(buf), format, c.value); + return os << buf; +} + + +OStream& operator<<(OStream& os, const AsUC16& c) { + char buf[10]; + const char* format = + std::isprint(c.value) ? "%c" : (c.value <= 0xff) ? "\\x%02x" : "\\u%04x"; + snprintf(buf, sizeof(buf), format, c.value); + return os << buf; +} +} } // namespace v8::internal diff --git a/deps/v8/src/ostreams.h b/deps/v8/src/ostreams.h new file mode 100644 index 00000000000..08f53c52ac3 --- /dev/null +++ b/deps/v8/src/ostreams.h @@ -0,0 +1,143 @@ +// Copyright 2014 the V8 project authors. All rights reserved. +// Use of this source code is governed by a BSD-style license that can be +// found in the LICENSE file. + +#ifndef V8_OSTREAMS_H_ +#define V8_OSTREAMS_H_ + +#include <stddef.h> +#include <stdio.h> +#include <string.h> + +#include "include/v8config.h" +#include "src/base/macros.h" + +namespace v8 { +namespace internal { + +// An abstract base class for output streams with a cut-down standard interface. +class OStream { + public: + OStream() : hex_(false) { } + virtual ~OStream() { } + + // For manipulators like 'os << endl' or 'os << flush', etc. + OStream& operator<<(OStream& (*manipulator)(OStream& os)) { + return manipulator(*this); + } + + // Numeric conversions. 
+ OStream& operator<<(short x); // NOLINT(runtime/int) + OStream& operator<<(unsigned short x); // NOLINT(runtime/int) + OStream& operator<<(int x); + OStream& operator<<(unsigned int x); + OStream& operator<<(long x); // NOLINT(runtime/int) + OStream& operator<<(unsigned long x); // NOLINT(runtime/int) + OStream& operator<<(long long x); // NOLINT(runtime/int) + OStream& operator<<(unsigned long long x); // NOLINT(runtime/int) + OStream& operator<<(double x); + OStream& operator<<(void* x); + + // Character output. + OStream& operator<<(char x); + OStream& operator<<(signed char x); + OStream& operator<<(unsigned char x); + OStream& operator<<(const char* s) { return write(s, strlen(s)); } + OStream& put(char c) { return write(&c, 1); } + + // Primitive format flag handling, can be extended if needed. + OStream& dec(); + OStream& hex(); + + virtual OStream& write(const char* s, size_t n) = 0; + virtual OStream& flush() = 0; + + private: + template<class T> OStream& print(const char* format, T x); + + bool hex_; + + DISALLOW_COPY_AND_ASSIGN(OStream); +}; + + +// Some manipulators. +OStream& flush(OStream& os); // NOLINT(runtime/references) +OStream& endl(OStream& os); // NOLINT(runtime/references) +OStream& dec(OStream& os); // NOLINT(runtime/references) +OStream& hex(OStream& os); // NOLINT(runtime/references) + + +// An output stream writing to a character buffer. +class OStringStream: public OStream { + public: + OStringStream() : size_(0), capacity_(32), data_(allocate(capacity_)) { + data_[0] = '\0'; + } + ~OStringStream() { deallocate(data_, capacity_); } + + size_t size() const { return size_; } + size_t capacity() const { return capacity_; } + const char* data() const { return data_; } + + // Internally, our character data is always 0-terminated. + const char* c_str() const { return data(); } + + virtual OStringStream& write(const char* s, size_t n) V8_OVERRIDE; + virtual OStringStream& flush() V8_OVERRIDE; + + private: + // Primitive allocator interface, can be extracted if needed. + static char* allocate (size_t n) { return new char[n]; } + static void deallocate (char* s, size_t n) { delete[] s; } + + void reserve(size_t requested_capacity); + + size_t size_; + size_t capacity_; + char* data_; + + DISALLOW_COPY_AND_ASSIGN(OStringStream); +}; + + +// An output stream writing to a file. +class OFStream: public OStream { + public: + explicit OFStream(FILE* f) : f_(f) { } + virtual ~OFStream() { } + + virtual OFStream& write(const char* s, size_t n) V8_OVERRIDE; + virtual OFStream& flush() V8_OVERRIDE; + + private: + FILE* const f_; + + DISALLOW_COPY_AND_ASSIGN(OFStream); +}; + + +// Wrappers to disambiguate uint16_t and uc16. +struct AsUC16 { + explicit AsUC16(uint16_t v) : value(v) {} + uint16_t value; +}; + + +struct AsReversiblyEscapedUC16 { + explicit AsReversiblyEscapedUC16(uint16_t v) : value(v) {} + uint16_t value; +}; + + +// Writes the given character to the output escaping everything outside of +// printable/space ASCII range. Additionally escapes '\' making escaping +// reversible. +OStream& operator<<(OStream& os, const AsReversiblyEscapedUC16& c); + +// Writes the given character to the output escaping everything outside +// of printable ASCII range. 
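The OStream family added above is a cut-down replacement for std::ostream inside V8: OStringStream collects into a growable, always 0-terminated buffer, OFStream wraps a FILE*, and the free functions hex/dec/endl/flush act as manipulators through the function-pointer operator<<. Assuming the declarations are available (i.e. compiling inside the V8 tree), typical use looks like this sketch:

#include <cstdio>
#include "src/ostreams.h"

void OStreamsUsageSketch() {
  v8::internal::OStringStream oss;
  oss << "value = " << 42 << ", as hex: " << v8::internal::hex << 42
      << v8::internal::dec << v8::internal::endl;
  // c_str() is always usable: write() keeps the buffer 0-terminated.
  std::printf("%s", oss.c_str());

  // OFStream forwards to fwrite/fflush on the wrapped FILE*.
  v8::internal::OFStream ofs(stdout);
  ofs << "pointer: " << static_cast<void*>(&oss) << v8::internal::endl;
}

Note the design choice visible in the implementation: all numeric conversions delegate to snprintf with a per-type format string, so the class needs no locale machinery and stays safe to use during garbage collection or early startup, where iostreams are off limits.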
+OStream& operator<<(OStream& os, const AsUC16& c); +} } // namespace v8::internal + +#endif // V8_OSTREAMS_H_ diff --git a/deps/v8/src/parser.cc b/deps/v8/src/parser.cc index f07f37ebb2b..6c941daa978 100644 --- a/deps/v8/src/parser.cc +++ b/deps/v8/src/parser.cc @@ -2,22 +2,22 @@ // Use of this source code is governed by a BSD-style license that can be // found in the LICENSE file. -#include "v8.h" - -#include "api.h" -#include "ast.h" -#include "bootstrapper.h" -#include "char-predicates-inl.h" -#include "codegen.h" -#include "compiler.h" -#include "messages.h" -#include "parser.h" -#include "platform.h" -#include "preparser.h" -#include "runtime.h" -#include "scanner-character-streams.h" -#include "scopeinfo.h" -#include "string-stream.h" +#include "src/v8.h" + +#include "src/api.h" +#include "src/ast.h" +#include "src/base/platform/platform.h" +#include "src/bootstrapper.h" +#include "src/char-predicates-inl.h" +#include "src/codegen.h" +#include "src/compiler.h" +#include "src/messages.h" +#include "src/parser.h" +#include "src/preparser.h" +#include "src/runtime.h" +#include "src/scanner-character-streams.h" +#include "src/scopeinfo.h" +#include "src/string-stream.h" namespace v8 { namespace internal { @@ -143,7 +143,7 @@ void RegExpBuilder::AddQuantifierToAtom( } RegExpTree* atom; if (characters_ != NULL) { - ASSERT(last_added_ == ADD_CHAR); + DCHECK(last_added_ == ADD_CHAR); // Last atom was character. Vector<const uc16> char_vector = characters_->ToConstVector(); int num_chars = char_vector.length(); @@ -156,11 +156,11 @@ void RegExpBuilder::AddQuantifierToAtom( atom = new(zone()) RegExpAtom(char_vector); FlushText(); } else if (text_.length() > 0) { - ASSERT(last_added_ == ADD_ATOM); + DCHECK(last_added_ == ADD_ATOM); atom = text_.RemoveLast(); FlushText(); } else if (terms_.length() > 0) { - ASSERT(last_added_ == ADD_ATOM); + DCHECK(last_added_ == ADD_ATOM); atom = terms_.RemoveLast(); if (atom->max_match() == 0) { // Guaranteed to only match an empty string. @@ -182,152 +182,93 @@ void RegExpBuilder::AddQuantifierToAtom( } -ScriptData* ScriptData::New(const char* data, int length) { - // The length is obviously invalid. - if (length % sizeof(unsigned) != 0) { - return NULL; - } - - int deserialized_data_length = length / sizeof(unsigned); - unsigned* deserialized_data; - bool owns_store = reinterpret_cast<intptr_t>(data) % sizeof(unsigned) != 0; - if (owns_store) { - // Copy the data to align it. - deserialized_data = i::NewArray<unsigned>(deserialized_data_length); - i::CopyBytes(reinterpret_cast<char*>(deserialized_data), - data, static_cast<size_t>(length)); - } else { - // If aligned, don't create a copy of the data. - deserialized_data = reinterpret_cast<unsigned*>(const_cast<char*>(data)); - } - return new ScriptData( - Vector<unsigned>(deserialized_data, deserialized_data_length), - owns_store); -} - - -FunctionEntry ScriptData::GetFunctionEntry(int start) { +FunctionEntry ParseData::GetFunctionEntry(int start) { // The current pre-data entry must be a FunctionEntry with the given // start position. 
- if ((function_index_ + FunctionEntry::kSize <= store_.length()) - && (static_cast<int>(store_[function_index_]) == start)) { + if ((function_index_ + FunctionEntry::kSize <= Length()) && + (static_cast<int>(Data()[function_index_]) == start)) { int index = function_index_; function_index_ += FunctionEntry::kSize; - return FunctionEntry(store_.SubVector(index, - index + FunctionEntry::kSize)); + Vector<unsigned> subvector(&(Data()[index]), FunctionEntry::kSize); + return FunctionEntry(subvector); } return FunctionEntry(); } -int ScriptData::GetSymbolIdentifier() { - return ReadNumber(&symbol_data_); +int ParseData::FunctionCount() { + int functions_size = FunctionsSize(); + if (functions_size < 0) return 0; + if (functions_size % FunctionEntry::kSize != 0) return 0; + return functions_size / FunctionEntry::kSize; } -bool ScriptData::SanityCheck() { +bool ParseData::IsSane() { // Check that the header data is valid and doesn't specify // point to positions outside the store. - if (store_.length() < PreparseDataConstants::kHeaderSize) return false; - if (magic() != PreparseDataConstants::kMagicNumber) return false; - if (version() != PreparseDataConstants::kCurrentVersion) return false; - if (has_error()) { - // Extra sane sanity check for error message encoding. - if (store_.length() <= PreparseDataConstants::kHeaderSize - + PreparseDataConstants::kMessageTextPos) { - return false; - } - if (Read(PreparseDataConstants::kMessageStartPos) > - Read(PreparseDataConstants::kMessageEndPos)) { - return false; - } - unsigned arg_count = Read(PreparseDataConstants::kMessageArgCountPos); - int pos = PreparseDataConstants::kMessageTextPos; - for (unsigned int i = 0; i <= arg_count; i++) { - if (store_.length() <= PreparseDataConstants::kHeaderSize + pos) { - return false; - } - int length = static_cast<int>(Read(pos)); - if (length < 0) return false; - pos += 1 + length; - } - if (store_.length() < PreparseDataConstants::kHeaderSize + pos) { - return false; - } - return true; - } + int data_length = Length(); + if (data_length < PreparseDataConstants::kHeaderSize) return false; + if (Magic() != PreparseDataConstants::kMagicNumber) return false; + if (Version() != PreparseDataConstants::kCurrentVersion) return false; + if (HasError()) return false; // Check that the space allocated for function entries is sane. - int functions_size = - static_cast<int>(store_[PreparseDataConstants::kFunctionsSizeOffset]); + int functions_size = FunctionsSize(); if (functions_size < 0) return false; if (functions_size % FunctionEntry::kSize != 0) return false; // Check that the total size has room for header and function entries. int minimum_size = PreparseDataConstants::kHeaderSize + functions_size; - if (store_.length() < minimum_size) return false; + if (data_length < minimum_size) return false; return true; } - -const char* ScriptData::ReadString(unsigned* start, int* chars) { - int length = start[0]; - char* result = NewArray<char>(length + 1); - for (int i = 0; i < length; i++) { - result[i] = start[i + 1]; +void ParseData::Initialize() { + // Prepares state for use. 
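Taken together, the new ParseData accessors above treat the cached parse data as a flat array of unsigned words: a fixed header (magic, version, error flag, size of the function-entry region) followed by fixed-size FunctionEntry records consumed by a monotonically advancing cursor. A standalone sketch of both the sanity check and the cursor walk, with illustrative constants in place of the real PreparseDataConstants values:

#include <vector>

namespace sketch {

const unsigned kMagic = 0xC0DE0DED;  // illustrative, not V8's magic number
const unsigned kVersion = 1;         // illustrative
const int kHeaderSize = 4;   // [magic, version, has_error, functions_size]
const int kEntrySize = 3;    // illustrative; V8 uses FunctionEntry::kSize

struct ParseCache {
  std::vector<unsigned> data;
  int cursor = kHeaderSize;  // cf. ParseData::Initialize()

  // Mirrors ParseData::IsSane(): the header fits, magic/version match, the
  // error flag is clear, and the entry region is a whole number of records
  // that fits inside the buffer.
  bool IsSane() const {
    int length = static_cast<int>(data.size());
    if (length < kHeaderSize) return false;
    if (data[0] != kMagic || data[1] != kVersion) return false;
    if (data[2] != 0) return false;  // HasError() => reject the cache
    int functions_size = static_cast<int>(data[3]);
    if (functions_size < 0 || functions_size % kEntrySize != 0) return false;
    return length >= kHeaderSize + functions_size;
  }

  // Mirrors GetFunctionEntry(): a record is returned only if its first word
  // equals the requested start position; a miss leaves the cursor alone.
  const unsigned* GetFunctionEntry(int start) {
    if (cursor + kEntrySize <= static_cast<int>(data.size()) &&
        static_cast<int>(data[cursor]) == start) {
      const unsigned* entry = &data[cursor];
      cursor += kEntrySize;
      return entry;
    }
    return nullptr;
  }
};

}  // namespace sketch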
+ int data_length = Length(); + if (data_length >= PreparseDataConstants::kHeaderSize) { + function_index_ = PreparseDataConstants::kHeaderSize; } - result[length] = '\0'; - if (chars != NULL) *chars = length; - return result; } -Scanner::Location ScriptData::MessageLocation() const { - int beg_pos = Read(PreparseDataConstants::kMessageStartPos); - int end_pos = Read(PreparseDataConstants::kMessageEndPos); - return Scanner::Location(beg_pos, end_pos); +bool ParseData::HasError() { + return Data()[PreparseDataConstants::kHasErrorOffset]; } -bool ScriptData::IsReferenceError() const { - return Read(PreparseDataConstants::kIsReferenceErrorPos); +unsigned ParseData::Magic() { + return Data()[PreparseDataConstants::kMagicOffset]; } -const char* ScriptData::BuildMessage() const { - unsigned* start = ReadAddress(PreparseDataConstants::kMessageTextPos); - return ReadString(start, NULL); -} - - -Vector<const char*> ScriptData::BuildArgs() const { - int arg_count = Read(PreparseDataConstants::kMessageArgCountPos); - const char** array = NewArray<const char*>(arg_count); - // Position after text found by skipping past length field and - // length field content words. - int pos = PreparseDataConstants::kMessageTextPos + 1 - + Read(PreparseDataConstants::kMessageTextPos); - for (int i = 0; i < arg_count; i++) { - int count = 0; - array[i] = ReadString(ReadAddress(pos), &count); - pos += count + 1; - } - return Vector<const char*>(array, arg_count); +unsigned ParseData::Version() { + return Data()[PreparseDataConstants::kVersionOffset]; } -unsigned ScriptData::Read(int position) const { - return store_[PreparseDataConstants::kHeaderSize + position]; +int ParseData::FunctionsSize() { + return static_cast<int>(Data()[PreparseDataConstants::kFunctionsSizeOffset]); } -unsigned* ScriptData::ReadAddress(int position) const { - return &store_[PreparseDataConstants::kHeaderSize + position]; +void Parser::SetCachedData() { + if (compile_options() == ScriptCompiler::kNoCompileOptions) { + cached_parse_data_ = NULL; + } else { + DCHECK(info_->cached_data() != NULL); + if (compile_options() == ScriptCompiler::kConsumeParserCache) { + cached_parse_data_ = new ParseData(*info_->cached_data()); + } + } } Scope* Parser::NewScope(Scope* parent, ScopeType scope_type) { - Scope* result = new(zone()) Scope(parent, scope_type, zone()); + DCHECK(ast_value_factory_); + Scope* result = + new (zone()) Scope(parent, scope_type, ast_value_factory_, zone()); result->Initialize(); return result; } @@ -400,15 +341,14 @@ class TargetScope BASE_EMBEDDED { // ---------------------------------------------------------------------------- // Implementation of Parser -bool ParserTraits::IsEvalOrArguments(Handle<String> identifier) const { - Factory* factory = parser_->isolate()->factory(); - return identifier.is_identical_to(factory->eval_string()) - || identifier.is_identical_to(factory->arguments_string()); +bool ParserTraits::IsEvalOrArguments(const AstRawString* identifier) const { + return identifier == parser_->ast_value_factory_->eval_string() || + identifier == parser_->ast_value_factory_->arguments_string(); } bool ParserTraits::IsThisProperty(Expression* expression) { - ASSERT(expression != NULL); + DCHECK(expression != NULL); Property* property = expression->AsProperty(); return property != NULL && property->obj()->AsVariableProxy() != NULL && @@ -425,17 +365,17 @@ bool ParserTraits::IsIdentifier(Expression* expression) { void ParserTraits::PushPropertyName(FuncNameInferrer* fni, Expression* expression) { if 
(expression->IsPropertyName()) { - fni->PushLiteralName(expression->AsLiteral()->AsPropertyName()); + fni->PushLiteralName(expression->AsLiteral()->AsRawPropertyName()); } else { fni->PushLiteralName( - parser_->isolate()->factory()->anonymous_function_string()); + parser_->ast_value_factory_->anonymous_function_string()); } } void ParserTraits::CheckAssigningFunctionLiteralToProperty(Expression* left, Expression* right) { - ASSERT(left != NULL); + DCHECK(left != NULL); if (left->AsProperty() != NULL && right->AsFunctionLiteral() != NULL) { right->AsFunctionLiteral()->set_pretenure(); @@ -447,17 +387,16 @@ void ParserTraits::CheckPossibleEvalCall(Expression* expression, Scope* scope) { VariableProxy* callee = expression->AsVariableProxy(); if (callee != NULL && - callee->IsVariable(parser_->isolate()->factory()->eval_string())) { + callee->raw_name() == parser_->ast_value_factory_->eval_string()) { scope->DeclarationScope()->RecordEvalCall(); } } -Expression* ParserTraits::MarkExpressionAsLValue(Expression* expression) { - VariableProxy* proxy = expression != NULL - ? expression->AsVariableProxy() - : NULL; - if (proxy != NULL) proxy->MarkAsLValue(); +Expression* ParserTraits::MarkExpressionAsAssigned(Expression* expression) { + VariableProxy* proxy = + expression != NULL ? expression->AsVariableProxy() : NULL; + if (proxy != NULL) proxy->set_is_assigned(); return expression; } @@ -465,10 +404,10 @@ Expression* ParserTraits::MarkExpressionAsLValue(Expression* expression) { bool ParserTraits::ShortcutNumericLiteralBinaryExpression( Expression** x, Expression* y, Token::Value op, int pos, AstNodeFactory<AstConstructionVisitor>* factory) { - if ((*x)->AsLiteral() && (*x)->AsLiteral()->value()->IsNumber() && - y->AsLiteral() && y->AsLiteral()->value()->IsNumber()) { - double x_val = (*x)->AsLiteral()->value()->Number(); - double y_val = y->AsLiteral()->value()->Number(); + if ((*x)->AsLiteral() && (*x)->AsLiteral()->raw_value()->IsNumber() && + y->AsLiteral() && y->AsLiteral()->raw_value()->IsNumber()) { + double x_val = (*x)->AsLiteral()->raw_value()->AsNumber(); + double y_val = y->AsLiteral()->raw_value()->AsNumber(); switch (op) { case Token::ADD: *x = factory->NewNumberLiteral(x_val + y_val, pos); @@ -525,18 +464,16 @@ bool ParserTraits::ShortcutNumericLiteralBinaryExpression( Expression* ParserTraits::BuildUnaryExpression( Expression* expression, Token::Value op, int pos, AstNodeFactory<AstConstructionVisitor>* factory) { - ASSERT(expression != NULL); - if (expression->AsLiteral() != NULL) { - Handle<Object> literal = expression->AsLiteral()->value(); + DCHECK(expression != NULL); + if (expression->IsLiteral()) { + const AstValue* literal = expression->AsLiteral()->raw_value(); if (op == Token::NOT) { // Convert the literal to a boolean condition and negate it. bool condition = literal->BooleanValue(); - Handle<Object> result = - parser_->isolate()->factory()->ToBoolean(!condition); - return factory->NewLiteral(result, pos); + return factory->NewBooleanLiteral(!condition, pos); } else if (literal->IsNumber()) { // Compute some expressions involving only number literals. 
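ShortcutNumericLiteralBinaryExpression and BuildUnaryExpression above fold expressions whose operands are all number literals at parse time; the diff only switches them from heap-allocated Handle<Object> values to heap-independent AstValue literals. Stripped of the AST plumbing, the arithmetic core reduces to a tiny evaluator. A sketch (the ADD case is visible above; the remaining operators are written here by analogy, with std::fmod standing in for V8's JS-compatible modulo):

#include <cmath>

enum class Op { kAdd, kSub, kMul, kDiv, kMod };

// Returns true and stores the folded value when the operator is one the
// parser shortcuts for two numeric literals.
bool FoldNumericBinary(double x, double y, Op op, double* out) {
  switch (op) {
    case Op::kAdd: *out = x + y; return true;
    case Op::kSub: *out = x - y; return true;
    case Op::kMul: *out = x * y; return true;
    case Op::kDiv: *out = x / y; return true;  // IEEE: 1/0 == inf, as in JS
    case Op::kMod: *out = std::fmod(x, y); return true;
  }
  return false;
}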
- double value = literal->Number(); + double value = literal->AsNumber(); switch (op) { case Token::ADD: return expression; @@ -570,51 +507,40 @@ Expression* ParserTraits::BuildUnaryExpression( Expression* ParserTraits::NewThrowReferenceError(const char* message, int pos) { return NewThrowError( - parser_->isolate()->factory()->MakeReferenceError_string(), - message, HandleVector<Object>(NULL, 0), pos); + parser_->ast_value_factory_->make_reference_error_string(), message, NULL, + pos); } Expression* ParserTraits::NewThrowSyntaxError( - const char* message, Handle<Object> arg, int pos) { - int argc = arg.is_null() ? 0 : 1; - Vector< Handle<Object> > arguments = HandleVector<Object>(&arg, argc); - return NewThrowError( - parser_->isolate()->factory()->MakeSyntaxError_string(), - message, arguments, pos); + const char* message, const AstRawString* arg, int pos) { + return NewThrowError(parser_->ast_value_factory_->make_syntax_error_string(), + message, arg, pos); } Expression* ParserTraits::NewThrowTypeError( - const char* message, Handle<Object> arg, int pos) { - int argc = arg.is_null() ? 0 : 1; - Vector< Handle<Object> > arguments = HandleVector<Object>(&arg, argc); - return NewThrowError( - parser_->isolate()->factory()->MakeTypeError_string(), - message, arguments, pos); + const char* message, const AstRawString* arg, int pos) { + return NewThrowError(parser_->ast_value_factory_->make_type_error_string(), + message, arg, pos); } Expression* ParserTraits::NewThrowError( - Handle<String> constructor, const char* message, - Vector<Handle<Object> > arguments, int pos) { + const AstRawString* constructor, const char* message, + const AstRawString* arg, int pos) { Zone* zone = parser_->zone(); - Factory* factory = parser_->isolate()->factory(); - int argc = arguments.length(); - Handle<FixedArray> elements = factory->NewFixedArray(argc, TENURED); - for (int i = 0; i < argc; i++) { - Handle<Object> element = arguments[i]; - if (!element.is_null()) { - elements->set(i, *element); - } - } - Handle<JSArray> array = - factory->NewJSArrayWithElements(elements, FAST_ELEMENTS, TENURED); - - ZoneList<Expression*>* args = new(zone) ZoneList<Expression*>(2, zone); - Handle<String> type = factory->InternalizeUtf8String(message); - args->Add(parser_->factory()->NewLiteral(type, pos), zone); - args->Add(parser_->factory()->NewLiteral(array, pos), zone); + int argc = arg != NULL ? 1 : 0; + const AstRawString* type = + parser_->ast_value_factory_->GetOneByteString(message); + ZoneList<const AstRawString*>* array = + new (zone) ZoneList<const AstRawString*>(argc, zone); + if (arg != NULL) { + array->Add(arg, zone); + } + ZoneList<Expression*>* args = new (zone) ZoneList<Expression*>(2, zone); + args->Add(parser_->factory()->NewStringLiteral(type, pos), zone); + args->Add(parser_->factory()->NewStringListLiteral(array, pos), zone); CallRuntime* call_constructor = parser_->factory()->NewCallRuntime(constructor, NULL, args, pos); return parser_->factory()->NewThrow(call_constructor, pos); @@ -623,7 +549,7 @@ Expression* ParserTraits::NewThrowError( void ParserTraits::ReportMessageAt(Scanner::Location source_location, const char* message, - Vector<const char*> args, + const char* arg, bool is_reference_error) { if (parser_->stack_overflow()) { // Suppress the error message (syntax error or such) in the presence of a @@ -631,35 +557,34 @@ void ParserTraits::ReportMessageAt(Scanner::Location source_location, // and we want to report the stack overflow later. 
return; } - MessageLocation location(parser_->script_, - source_location.beg_pos, - source_location.end_pos); - Factory* factory = parser_->isolate()->factory(); - Handle<FixedArray> elements = factory->NewFixedArray(args.length()); - for (int i = 0; i < args.length(); i++) { - Handle<String> arg_string = - factory->NewStringFromUtf8(CStrVector(args[i])).ToHandleChecked(); - elements->set(i, *arg_string); - } - Handle<JSArray> array = factory->NewJSArrayWithElements(elements); - Handle<Object> result = is_reference_error - ? factory->NewReferenceError(message, array) - : factory->NewSyntaxError(message, array); - parser_->isolate()->Throw(*result, &location); + parser_->has_pending_error_ = true; + parser_->pending_error_location_ = source_location; + parser_->pending_error_message_ = message; + parser_->pending_error_char_arg_ = arg; + parser_->pending_error_arg_ = NULL; + parser_->pending_error_is_reference_error_ = is_reference_error; } void ParserTraits::ReportMessage(const char* message, - Vector<Handle<String> > args, + const char* arg, bool is_reference_error) { Scanner::Location source_location = parser_->scanner()->location(); - ReportMessageAt(source_location, message, args, is_reference_error); + ReportMessageAt(source_location, message, arg, is_reference_error); +} + + +void ParserTraits::ReportMessage(const char* message, + const AstRawString* arg, + bool is_reference_error) { + Scanner::Location source_location = parser_->scanner()->location(); + ReportMessageAt(source_location, message, arg, is_reference_error); } void ParserTraits::ReportMessageAt(Scanner::Location source_location, const char* message, - Vector<Handle<String> > args, + const AstRawString* arg, bool is_reference_error) { if (parser_->stack_overflow()) { // Suppress the error message (syntax error or such) in the presence of a @@ -667,40 +592,31 @@ void ParserTraits::ReportMessageAt(Scanner::Location source_location, // and we want to report the stack overflow later. return; } - MessageLocation location(parser_->script_, - source_location.beg_pos, - source_location.end_pos); - Factory* factory = parser_->isolate()->factory(); - Handle<FixedArray> elements = factory->NewFixedArray(args.length()); - for (int i = 0; i < args.length(); i++) { - elements->set(i, *args[i]); - } - Handle<JSArray> array = factory->NewJSArrayWithElements(elements); - Handle<Object> result = is_reference_error - ? 
factory->NewReferenceError(message, array) - : factory->NewSyntaxError(message, array); - parser_->isolate()->Throw(*result, &location); + parser_->has_pending_error_ = true; + parser_->pending_error_location_ = source_location; + parser_->pending_error_message_ = message; + parser_->pending_error_char_arg_ = NULL; + parser_->pending_error_arg_ = arg; + parser_->pending_error_is_reference_error_ = is_reference_error; } -Handle<String> ParserTraits::GetSymbol(Scanner* scanner) { - Handle<String> result = - parser_->scanner()->AllocateInternalizedString(parser_->isolate()); - ASSERT(!result.is_null()); +const AstRawString* ParserTraits::GetSymbol(Scanner* scanner) { + const AstRawString* result = + parser_->scanner()->CurrentSymbol(parser_->ast_value_factory_); + DCHECK(result != NULL); return result; } -Handle<String> ParserTraits::NextLiteralString(Scanner* scanner, - PretenureFlag tenured) { - return scanner->AllocateNextLiteralString(parser_->isolate(), tenured); +const AstRawString* ParserTraits::GetNextSymbol(Scanner* scanner) { + return parser_->scanner()->NextSymbol(parser_->ast_value_factory_); } Expression* ParserTraits::ThisExpression( - Scope* scope, - AstNodeFactory<AstConstructionVisitor>* factory) { - return factory->NewVariableProxy(scope->receiver()); + Scope* scope, AstNodeFactory<AstConstructionVisitor>* factory, int pos) { + return factory->NewVariableProxy(scope->receiver(), pos); } @@ -708,33 +624,32 @@ Literal* ParserTraits::ExpressionFromLiteral( Token::Value token, int pos, Scanner* scanner, AstNodeFactory<AstConstructionVisitor>* factory) { - Factory* isolate_factory = parser_->isolate()->factory(); switch (token) { case Token::NULL_LITERAL: - return factory->NewLiteral(isolate_factory->null_value(), pos); + return factory->NewNullLiteral(pos); case Token::TRUE_LITERAL: - return factory->NewLiteral(isolate_factory->true_value(), pos); + return factory->NewBooleanLiteral(true, pos); case Token::FALSE_LITERAL: - return factory->NewLiteral(isolate_factory->false_value(), pos); + return factory->NewBooleanLiteral(false, pos); case Token::NUMBER: { double value = scanner->DoubleValue(); return factory->NewNumberLiteral(value, pos); } default: - ASSERT(false); + DCHECK(false); } return NULL; } Expression* ParserTraits::ExpressionFromIdentifier( - Handle<String> name, int pos, Scope* scope, + const AstRawString* name, int pos, Scope* scope, AstNodeFactory<AstConstructionVisitor>* factory) { if (parser_->fni_ != NULL) parser_->fni_->PushVariableName(name); // The name may refer to a module instance object, so its type is unknown. 
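The replacement bodies above no longer construct a JSArray-backed error object and throw it on the spot. Because the parser may now run without touching the V8 heap (literals live in the AstValueFactory until Internalize()), each ReportMessageAt overload just records one pending error on the Parser, and ThrowPendingError() materializes it after parsing finishes. The recording step is plain field assignment; a sketch of its shape (field names illustrative):

struct PendingError {
  bool has_error = false;
  int beg_pos = -1, end_pos = -1;
  const char* message = nullptr;
  const char* char_arg = nullptr;  // the single optional message argument
  bool is_reference_error = false;
};

void RecordError(PendingError* p, int beg, int end, const char* message,
                 const char* arg, bool is_reference_error) {
  p->has_error = true;  // ThrowPendingError() later builds the real object
  p->beg_pos = beg;
  p->end_pos = end;
  p->message = message;
  p->char_arg = arg;
  p->is_reference_error = is_reference_error;
}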
#ifdef DEBUG if (FLAG_print_interface_details) - PrintF("# Variable %s ", name->ToAsciiArray()); + PrintF("# Variable %.*s ", name->length(), name->raw_data()); #endif Interface* interface = Interface::NewUnknown(parser_->zone()); return scope->NewUnresolved(factory, name, interface, pos); @@ -744,16 +659,28 @@ Expression* ParserTraits::ExpressionFromIdentifier( Expression* ParserTraits::ExpressionFromString( int pos, Scanner* scanner, AstNodeFactory<AstConstructionVisitor>* factory) { - Handle<String> symbol = GetSymbol(scanner); + const AstRawString* symbol = GetSymbol(scanner); if (parser_->fni_ != NULL) parser_->fni_->PushLiteralName(symbol); - return factory->NewLiteral(symbol, pos); + return factory->NewStringLiteral(symbol, pos); +} + + +Expression* ParserTraits::GetIterator( + Expression* iterable, AstNodeFactory<AstConstructionVisitor>* factory) { + Expression* iterator_symbol_literal = + factory->NewSymbolLiteral("symbolIterator", RelocInfo::kNoPosition); + int pos = iterable->position(); + Expression* prop = + factory->NewProperty(iterable, iterator_symbol_literal, pos); + Zone* zone = parser_->zone(); + ZoneList<Expression*>* args = new (zone) ZoneList<Expression*>(0, zone); + return factory->NewCall(prop, args, pos); } Literal* ParserTraits::GetLiteralTheHole( int position, AstNodeFactory<AstConstructionVisitor>* factory) { - return factory->NewLiteral(parser_->isolate()->factory()->the_hole_value(), - RelocInfo::kNoPosition); + return factory->NewTheHoleLiteral(RelocInfo::kNoPosition); } @@ -763,44 +690,51 @@ Expression* ParserTraits::ParseV8Intrinsic(bool* ok) { FunctionLiteral* ParserTraits::ParseFunctionLiteral( - Handle<String> name, + const AstRawString* name, Scanner::Location function_name_location, bool name_is_strict_reserved, bool is_generator, int function_token_position, FunctionLiteral::FunctionType type, + FunctionLiteral::ArityRestriction arity_restriction, bool* ok) { return parser_->ParseFunctionLiteral(name, function_name_location, name_is_strict_reserved, is_generator, - function_token_position, type, ok); + function_token_position, type, + arity_restriction, ok); } Parser::Parser(CompilationInfo* info) : ParserBase<ParserTraits>(&scanner_, info->isolate()->stack_guard()->real_climit(), - info->extension(), - NULL, - info->zone(), - this), + info->extension(), NULL, info->zone(), this), isolate_(info->isolate()), script_(info->script()), scanner_(isolate_->unicode_cache()), reusable_preparser_(NULL), original_scope_(NULL), target_stack_(NULL), - cached_data_(NULL), - cached_data_mode_(NO_CACHED_DATA), - info_(info) { - ASSERT(!script_.is_null()); + cached_parse_data_(NULL), + ast_value_factory_(NULL), + info_(info), + has_pending_error_(false), + pending_error_message_(NULL), + pending_error_arg_(NULL), + pending_error_char_arg_(NULL) { + DCHECK(!script_.is_null()); isolate_->set_ast_node_id(0); set_allow_harmony_scoping(!info->is_native() && FLAG_harmony_scoping); set_allow_modules(!info->is_native() && FLAG_harmony_modules); set_allow_natives_syntax(FLAG_allow_natives_syntax || info->is_native()); set_allow_lazy(false); // Must be explicitly enabled. 
set_allow_generators(FLAG_harmony_generators); - set_allow_for_of(FLAG_harmony_iteration); + set_allow_arrow_functions(FLAG_harmony_arrow_functions); set_allow_harmony_numeric_literals(FLAG_harmony_numeric_literals); + for (int feature = 0; feature < v8::Isolate::kUseCounterFeatureCount; + ++feature) { + use_counts_[feature] = 0; + } } @@ -810,18 +744,19 @@ FunctionLiteral* Parser::ParseProgram() { HistogramTimerScope timer_scope(isolate()->counters()->parse(), true); Handle<String> source(String::cast(script_->source())); isolate()->counters()->total_parse_size()->Increment(source->length()); - ElapsedTimer timer; + base::ElapsedTimer timer; if (FLAG_trace_parse) { timer.Start(); } - fni_ = new(zone()) FuncNameInferrer(isolate(), zone()); + fni_ = new(zone()) FuncNameInferrer(ast_value_factory_, zone()); // Initialize parser state. CompleteParserRecorder recorder; - if (cached_data_mode_ == PRODUCE_CACHED_DATA) { + + if (compile_options() == ScriptCompiler::kProduceParserCache) { log_ = &recorder; - } else if (cached_data_mode_ == CONSUME_CACHED_DATA) { - (*cached_data_)->Initialize(); + } else if (compile_options() == ScriptCompiler::kConsumeParserCache) { + cached_parse_data_->Initialize(); } source = String::Flatten(source); @@ -853,11 +788,8 @@ FunctionLiteral* Parser::ParseProgram() { } PrintF(" - took %0.3f ms]\n", ms); } - if (cached_data_mode_ == PRODUCE_CACHED_DATA) { - if (result != NULL) { - Vector<unsigned> store = recorder.ExtractData(); - *cached_data_ = new ScriptData(store); - } + if (compile_options() == ScriptCompiler::kProduceParserCache) { + if (result != NULL) *info_->cached_data() = recorder.GetScriptData(); log_ = NULL; } return result; @@ -866,16 +798,19 @@ FunctionLiteral* Parser::ParseProgram() { FunctionLiteral* Parser::DoParseProgram(CompilationInfo* info, Handle<String> source) { - ASSERT(scope_ == NULL); - ASSERT(target_stack_ == NULL); - - Handle<String> no_name = isolate()->factory()->empty_string(); + DCHECK(scope_ == NULL); + DCHECK(target_stack_ == NULL); FunctionLiteral* result = NULL; { Scope* scope = NewScope(scope_, GLOBAL_SCOPE); info->SetGlobalScope(scope); - if (!info->context().is_null()) { + if (!info->context().is_null() && !info->context()->IsNativeContext()) { scope = Scope::DeserializeScopeChain(*info->context(), scope, zone()); + // The Scope is backed up by ScopeInfo (which is in the V8 heap); this + // means the Parser cannot operate independent of the V8 heap. Tell the + // string table to internalize strings and values right after they're + // created. + ast_value_factory_->Internalize(isolate()); } original_scope_ = scope; if (info->is_eval()) { @@ -898,13 +833,17 @@ FunctionLiteral* Parser::DoParseProgram(CompilationInfo* info, ParsingModeScope parsing_mode(this, mode); // Enters 'scope'. 
- FunctionState function_state(&function_state_, &scope_, scope, zone()); + FunctionState function_state(&function_state_, &scope_, scope, zone(), + ast_value_factory_); scope_->SetStrictMode(info->strict_mode()); ZoneList<Statement*>* body = new(zone()) ZoneList<Statement*>(16, zone()); bool ok = true; int beg_pos = scanner()->location().beg_pos; ParseSourceElements(body, Token::EOS, info->is_eval(), true, &ok); + + HandleSourceURLComments(); + if (ok && strict_mode() == STRICT) { CheckOctalLiteral(beg_pos, scanner()->location().end_pos, &ok); } @@ -918,36 +857,34 @@ FunctionLiteral* Parser::DoParseProgram(CompilationInfo* info, !body->at(0)->IsExpressionStatement() || !body->at(0)->AsExpressionStatement()-> expression()->IsFunctionLiteral()) { - ReportMessage("single_function_literal", Vector<const char*>::empty()); + ReportMessage("single_function_literal"); ok = false; } } + ast_value_factory_->Internalize(isolate()); if (ok) { result = factory()->NewFunctionLiteral( - no_name, - scope_, - body, + ast_value_factory_->empty_string(), ast_value_factory_, scope_, body, function_state.materialized_literal_count(), function_state.expected_property_count(), - function_state.handler_count(), - 0, + function_state.handler_count(), 0, FunctionLiteral::kNoDuplicateParameters, - FunctionLiteral::ANONYMOUS_EXPRESSION, - FunctionLiteral::kGlobalOrEval, - FunctionLiteral::kNotParenthesized, - FunctionLiteral::kNotGenerator, + FunctionLiteral::ANONYMOUS_EXPRESSION, FunctionLiteral::kGlobalOrEval, + FunctionLiteral::kNotParenthesized, FunctionLiteral::kNormalFunction, 0); result->set_ast_properties(factory()->visitor()->ast_properties()); result->set_dont_optimize_reason( factory()->visitor()->dont_optimize_reason()); } else if (stack_overflow()) { isolate()->StackOverflow(); + } else { + ThrowPendingError(); } } // Make sure the target stack is empty. 
- ASSERT(target_stack_ == NULL); + DCHECK(target_stack_ == NULL); return result; } @@ -957,7 +894,7 @@ FunctionLiteral* Parser::ParseLazy() { HistogramTimerScope timer_scope(isolate()->counters()->parse_lazy()); Handle<String> source(String::cast(script_->source())); isolate()->counters()->total_parse_size()->Increment(source->length()); - ElapsedTimer timer; + base::ElapsedTimer timer; if (FLAG_trace_parse) { timer.Start(); } @@ -991,12 +928,14 @@ FunctionLiteral* Parser::ParseLazy() { FunctionLiteral* Parser::ParseLazy(Utf16CharacterStream* source) { Handle<SharedFunctionInfo> shared_info = info()->shared_info(); scanner_.Initialize(source); - ASSERT(scope_ == NULL); - ASSERT(target_stack_ == NULL); + DCHECK(scope_ == NULL); + DCHECK(target_stack_ == NULL); Handle<String> name(String::cast(shared_info->name())); - fni_ = new(zone()) FuncNameInferrer(isolate(), zone()); - fni_->PushEnclosingName(name); + DCHECK(ast_value_factory_); + fni_ = new(zone()) FuncNameInferrer(ast_value_factory_, zone()); + const AstRawString* raw_name = ast_value_factory_->GetString(name); + fni_->PushEnclosingName(raw_name); ParsingModeScope parsing_mode(this, PARSE_EAGERLY); @@ -1012,32 +951,45 @@ FunctionLiteral* Parser::ParseLazy(Utf16CharacterStream* source) { zone()); } original_scope_ = scope; - FunctionState function_state(&function_state_, &scope_, scope, zone()); - ASSERT(scope->strict_mode() == SLOPPY || info()->strict_mode() == STRICT); - ASSERT(info()->strict_mode() == shared_info->strict_mode()); + FunctionState function_state(&function_state_, &scope_, scope, zone(), + ast_value_factory_); + DCHECK(scope->strict_mode() == SLOPPY || info()->strict_mode() == STRICT); + DCHECK(info()->strict_mode() == shared_info->strict_mode()); scope->SetStrictMode(shared_info->strict_mode()); FunctionLiteral::FunctionType function_type = shared_info->is_expression() ? (shared_info->is_anonymous() ? FunctionLiteral::ANONYMOUS_EXPRESSION : FunctionLiteral::NAMED_EXPRESSION) : FunctionLiteral::DECLARATION; + bool is_generator = shared_info->is_generator(); bool ok = true; - result = ParseFunctionLiteral(name, - Scanner::Location::invalid(), - false, // Strict mode name already checked. - shared_info->is_generator(), - RelocInfo::kNoPosition, - function_type, - &ok); + + if (shared_info->is_arrow()) { + DCHECK(!is_generator); + Expression* expression = ParseExpression(false, &ok); + DCHECK(expression->IsFunctionLiteral()); + result = expression->AsFunctionLiteral(); + } else { + result = ParseFunctionLiteral(raw_name, Scanner::Location::invalid(), + false, // Strict mode name already checked. + is_generator, RelocInfo::kNoPosition, + function_type, + FunctionLiteral::NORMAL_ARITY, &ok); + } // Make sure the results agree. - ASSERT(ok == (result != NULL)); + DCHECK(ok == (result != NULL)); } // Make sure the target stack is empty. - ASSERT(target_stack_ == NULL); + DCHECK(target_stack_ == NULL); + ast_value_factory_->Internalize(isolate()); if (result == NULL) { - if (stack_overflow()) isolate()->StackOverflow(); + if (stack_overflow()) { + isolate()->StackOverflow(); + } else { + ThrowPendingError(); + } } else { Handle<String> inferred_name(shared_info->inferred_name()); result->set_inferred_name(inferred_name); @@ -1060,7 +1012,7 @@ void* Parser::ParseSourceElements(ZoneList<Statement*>* processor, // functions. TargetScope scope(&this->target_stack_); - ASSERT(processor != NULL); + DCHECK(processor != NULL); bool directive_prologue = true; // Parsing directive prologue. 
while (peek() != end_token) { @@ -1087,22 +1039,21 @@ void* Parser::ParseSourceElements(ZoneList<Statement*>* processor, // Still processing directive prologue? if ((e_stat = stat->AsExpressionStatement()) != NULL && (literal = e_stat->expression()->AsLiteral()) != NULL && - literal->value()->IsString()) { - Handle<String> directive = Handle<String>::cast(literal->value()); - - // Check "use strict" directive (ES5 14.1). + literal->raw_value()->IsString()) { + // Check "use strict" directive (ES5 14.1) and "use asm" directive. Only + // one can be present. if (strict_mode() == SLOPPY && - String::Equals(isolate()->factory()->use_strict_string(), - directive) && + literal->raw_value()->AsString() == + ast_value_factory_->use_strict_string() && token_loc.end_pos - token_loc.beg_pos == - isolate()->heap()->use_strict_string()->length() + 2) { + ast_value_factory_->use_strict_string()->length() + 2) { // TODO(mstarzinger): Global strict eval calls, need their own scope // as specified in ES5 10.4.2(3). The correct fix would be to always // add this scope in DoParseProgram(), but that requires adaptations // all over the code base, so we go with a quick-fix for now. // In the same manner, we have to patch the parsing mode. if (is_eval && !scope_->is_eval_scope()) { - ASSERT(scope_->is_global_scope()); + DCHECK(scope_->is_global_scope()); Scope* scope = NewScope(scope_, EVAL_SCOPE); scope->set_start_position(scope_->start_position()); scope->set_end_position(scope_->end_position()); @@ -1112,6 +1063,13 @@ void* Parser::ParseSourceElements(ZoneList<Statement*>* processor, scope_->SetStrictMode(STRICT); // "use strict" is the only directive for now. directive_prologue = false; + } else if (literal->raw_value()->AsString() == + ast_value_factory_->use_asm_string() && + token_loc.end_pos - token_loc.beg_pos == + ast_value_factory_->use_asm_string()->length() + 2) { + // Store the usage count; The actual use counter on the isolate is + // incremented after parsing is done. + ++use_counts_[v8::Isolate::kUseAsm]; } } else { // End of the directive prologue. @@ -1126,7 +1084,7 @@ void* Parser::ParseSourceElements(ZoneList<Statement*>* processor, } -Statement* Parser::ParseModuleElement(ZoneStringList* labels, +Statement* Parser::ParseModuleElement(ZoneList<const AstRawString*>* labels, bool* ok) { // (Ecma 262 5th Edition, clause 14): // SourceElement: @@ -1145,13 +1103,18 @@ Statement* Parser::ParseModuleElement(ZoneStringList* labels, switch (peek()) { case Token::FUNCTION: return ParseFunctionDeclaration(NULL, ok); - case Token::LET: - case Token::CONST: - return ParseVariableStatement(kModuleElement, NULL, ok); case Token::IMPORT: return ParseImportDeclaration(ok); case Token::EXPORT: return ParseExportDeclaration(ok); + case Token::CONST: + return ParseVariableStatement(kModuleElement, NULL, ok); + case Token::LET: + DCHECK(allow_harmony_scoping()); + if (strict_mode() == STRICT) { + return ParseVariableStatement(kModuleElement, NULL, ok); + } + // Fall through. default: { Statement* stmt = ParseStatement(labels, CHECK_OK); // Handle 'module' as a context-sensitive keyword. 
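The directive-prologue change above recognizes both "use strict" and the new "use asm", and both comparisons reuse the same trick: the token length must equal the directive's string length plus the two quote characters. That length check is what rejects directives written with escape sequences (an escaped "use\u0073trict" produces a longer token), since an escaped directive must not take effect. A standalone sketch of the check:

#include <cstring>

// token_length is end_pos - beg_pos of the string token, including quotes.
bool IsDirective(const char* literal_value, int token_length,
                 const char* directive) {
  return std::strcmp(literal_value, directive) == 0 &&
         token_length == static_cast<int>(std::strlen(directive)) + 2;
}

// IsDirective("use strict", 12, "use strict") -> true
// The same literal written with an escape has token_length > 12 -> false.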
@@ -1160,10 +1123,9 @@ Statement* Parser::ParseModuleElement(ZoneStringList* labels, !scanner()->HasAnyLineTerminatorBeforeNext() && stmt != NULL) { ExpressionStatement* estmt = stmt->AsExpressionStatement(); - if (estmt != NULL && - estmt->expression()->AsVariableProxy() != NULL && - String::Equals(isolate()->factory()->module_string(), - estmt->expression()->AsVariableProxy()->name()) && + if (estmt != NULL && estmt->expression()->AsVariableProxy() != NULL && + estmt->expression()->AsVariableProxy()->raw_name() == + ast_value_factory_->module_string() && !scanner()->literal_contains_escapes()) { return ParseModuleDeclaration(NULL, ok); } @@ -1174,16 +1136,18 @@ Statement* Parser::ParseModuleElement(ZoneStringList* labels, } -Statement* Parser::ParseModuleDeclaration(ZoneStringList* names, bool* ok) { +Statement* Parser::ParseModuleDeclaration(ZoneList<const AstRawString*>* names, + bool* ok) { // ModuleDeclaration: // 'module' Identifier Module int pos = peek_position(); - Handle<String> name = ParseIdentifier(kDontAllowEvalOrArguments, CHECK_OK); + const AstRawString* name = + ParseIdentifier(kDontAllowEvalOrArguments, CHECK_OK); #ifdef DEBUG if (FLAG_print_interface_details) - PrintF("# Module %s...\n", name->ToAsciiArray()); + PrintF("# Module %.*s ", name->length(), name->raw_data()); #endif Module* module = ParseModule(CHECK_OK); @@ -1194,10 +1158,9 @@ Statement* Parser::ParseModuleDeclaration(ZoneStringList* names, bool* ok) { #ifdef DEBUG if (FLAG_print_interface_details) - PrintF("# Module %s.\n", name->ToAsciiArray()); - + PrintF("# Module %.*s ", name->length(), name->raw_data()); if (FLAG_print_interfaces) { - PrintF("module %s : ", name->ToAsciiArray()); + PrintF("module %.*s: ", name->length(), name->raw_data()); module->interface()->Print(); } #endif @@ -1275,19 +1238,17 @@ Module* Parser::ParseModuleLiteral(bool* ok) { Interface* interface = scope->interface(); for (Interface::Iterator it = interface->iterator(); !it.done(); it.Advance()) { - if (scope->LocalLookup(it.name()) == NULL) { - Handle<String> name(it.name()); - ParserTraits::ReportMessage("module_export_undefined", - Vector<Handle<String> >(&name, 1)); + if (scope->LookupLocal(it.name()) == NULL) { + ParserTraits::ReportMessage("module_export_undefined", it.name()); *ok = false; return NULL; } } interface->MakeModule(ok); - ASSERT(*ok); + DCHECK(*ok); interface->Freeze(ok); - ASSERT(*ok); + DCHECK(*ok); return factory()->NewModuleLiteral(body, interface, pos); } @@ -1300,25 +1261,24 @@ Module* Parser::ParseModulePath(bool* ok) { int pos = peek_position(); Module* result = ParseModuleVariable(CHECK_OK); while (Check(Token::PERIOD)) { - Handle<String> name = ParseIdentifierName(CHECK_OK); + const AstRawString* name = ParseIdentifierName(CHECK_OK); #ifdef DEBUG if (FLAG_print_interface_details) - PrintF("# Path .%s ", name->ToAsciiArray()); + PrintF("# Path .%.*s ", name->length(), name->raw_data()); #endif Module* member = factory()->NewModulePath(result, name, pos); result->interface()->Add(name, member->interface(), zone(), ok); if (!*ok) { #ifdef DEBUG if (FLAG_print_interfaces) { - PrintF("PATH TYPE ERROR at '%s'\n", name->ToAsciiArray()); + PrintF("PATH TYPE ERROR at '%.*s'\n", name->length(), name->raw_data()); PrintF("result: "); result->interface()->Print(); PrintF("member: "); member->interface()->Print(); } #endif - ParserTraits::ReportMessage("invalid_module_path", - Vector<Handle<String> >(&name, 1)); + ParserTraits::ReportMessage("invalid_module_path", name); return NULL; } result = member; @@ 
-1333,10 +1293,11 @@ Module* Parser::ParseModuleVariable(bool* ok) { // Identifier int pos = peek_position(); - Handle<String> name = ParseIdentifier(kDontAllowEvalOrArguments, CHECK_OK); + const AstRawString* name = + ParseIdentifier(kDontAllowEvalOrArguments, CHECK_OK); #ifdef DEBUG if (FLAG_print_interface_details) - PrintF("# Module variable %s ", name->ToAsciiArray()); + PrintF("# Module variable %.*s ", name->length(), name->raw_data()); #endif VariableProxy* proxy = scope_->NewUnresolved( factory(), name, Interface::NewModule(zone()), @@ -1352,7 +1313,7 @@ Module* Parser::ParseModuleUrl(bool* ok) { int pos = peek_position(); Expect(Token::STRING, CHECK_OK); - Handle<String> symbol = GetSymbol(); + const AstRawString* symbol = GetSymbol(scanner()); // TODO(ES6): Request JS resource from environment... @@ -1368,9 +1329,9 @@ Module* Parser::ParseModuleUrl(bool* ok) { Interface* interface = scope->interface(); Module* result = factory()->NewModuleLiteral(body, interface, pos); interface->Freeze(ok); - ASSERT(*ok); + DCHECK(*ok); interface->Unify(scope->interface(), zone(), ok); - ASSERT(*ok); + DCHECK(*ok); return result; } @@ -1396,9 +1357,9 @@ Block* Parser::ParseImportDeclaration(bool* ok) { int pos = peek_position(); Expect(Token::IMPORT, CHECK_OK); - ZoneStringList names(1, zone()); + ZoneList<const AstRawString*> names(1, zone()); - Handle<String> name = ParseIdentifierName(CHECK_OK); + const AstRawString* name = ParseIdentifierName(CHECK_OK); names.Add(name, zone()); while (peek() == Token::COMMA) { Consume(Token::COMMA); @@ -1416,20 +1377,20 @@ Block* Parser::ParseImportDeclaration(bool* ok) { for (int i = 0; i < names.length(); ++i) { #ifdef DEBUG if (FLAG_print_interface_details) - PrintF("# Import %s ", names[i]->ToAsciiArray()); + PrintF("# Import %.*s ", name->length(), name->raw_data()); #endif Interface* interface = Interface::NewUnknown(zone()); module->interface()->Add(names[i], interface, zone(), ok); if (!*ok) { #ifdef DEBUG if (FLAG_print_interfaces) { - PrintF("IMPORT TYPE ERROR at '%s'\n", names[i]->ToAsciiArray()); + PrintF("IMPORT TYPE ERROR at '%.*s'\n", name->length(), + name->raw_data()); PrintF("module: "); module->interface()->Print(); } #endif - ParserTraits::ReportMessage("invalid_module_path", - Vector<Handle<String> >(&name, 1)); + ParserTraits::ReportMessage("invalid_module_path", name); return NULL; } VariableProxy* proxy = NewUnresolved(names[i], LET, interface); @@ -1455,14 +1416,14 @@ Statement* Parser::ParseExportDeclaration(bool* ok) { Expect(Token::EXPORT, CHECK_OK); Statement* result = NULL; - ZoneStringList names(1, zone()); + ZoneList<const AstRawString*> names(1, zone()); switch (peek()) { case Token::IDENTIFIER: { int pos = position(); - Handle<String> name = + const AstRawString* name = ParseIdentifier(kDontAllowEvalOrArguments, CHECK_OK); // Handle 'module' as a context-sensitive keyword. - if (!name->IsOneByteEqualTo(STATIC_ASCII_VECTOR("module"))) { + if (name != ast_value_factory_->module_string()) { names.Add(name, zone()); while (peek() == Token::COMMA) { Consume(Token::COMMA); @@ -1493,12 +1454,25 @@ Statement* Parser::ParseExportDeclaration(bool* ok) { return NULL; } + // Every export of a module may be assigned. + for (int i = 0; i < names.length(); ++i) { + Variable* var = scope_->Lookup(names[i]); + if (var == NULL) { + // TODO(sigurds) This is an export that has no definition yet, + // not clear what to do in this case. 
+ continue; + } + if (!IsImmutableVariableMode(var->mode())) { + var->set_maybe_assigned(); + } + } + // Extract declared names into export declarations and interface. Interface* interface = scope_->interface(); for (int i = 0; i < names.length(); ++i) { #ifdef DEBUG if (FLAG_print_interface_details) - PrintF("# Export %s ", names[i]->ToAsciiArray()); + PrintF("# Export %.*s ", names[i]->length(), names[i]->raw_data()); #endif Interface* inner = Interface::NewUnknown(zone()); interface->Add(names[i], inner, zone(), CHECK_OK); @@ -1513,12 +1487,12 @@ Statement* Parser::ParseExportDeclaration(bool* ok) { // scope_->AddDeclaration(declaration); } - ASSERT(result != NULL); + DCHECK(result != NULL); return result; } -Statement* Parser::ParseBlockElement(ZoneStringList* labels, +Statement* Parser::ParseBlockElement(ZoneList<const AstRawString*>* labels, bool* ok) { // (Ecma 262 5th Edition, clause 14): // SourceElement: @@ -1534,16 +1508,22 @@ Statement* Parser::ParseBlockElement(ZoneStringList* labels, switch (peek()) { case Token::FUNCTION: return ParseFunctionDeclaration(NULL, ok); - case Token::LET: case Token::CONST: return ParseVariableStatement(kModuleElement, NULL, ok); + case Token::LET: + DCHECK(allow_harmony_scoping()); + if (strict_mode() == STRICT) { + return ParseVariableStatement(kModuleElement, NULL, ok); + } + // Fall through. default: return ParseStatement(labels, ok); } } -Statement* Parser::ParseStatement(ZoneStringList* labels, bool* ok) { +Statement* Parser::ParseStatement(ZoneList<const AstRawString*>* labels, + bool* ok) { // Statement :: // Block // VariableStatement @@ -1571,11 +1551,6 @@ Statement* Parser::ParseStatement(ZoneStringList* labels, bool* ok) { case Token::LBRACE: return ParseBlock(labels, ok); - case Token::CONST: // fall through - case Token::LET: - case Token::VAR: - return ParseVariableStatement(kStatement, NULL, ok); - case Token::SEMICOLON: Next(); return factory()->NewEmptyStatement(RelocInfo::kNoPosition); @@ -1647,14 +1622,24 @@ Statement* Parser::ParseStatement(ZoneStringList* labels, bool* ok) { case Token::DEBUGGER: return ParseDebuggerStatement(ok); + case Token::VAR: + case Token::CONST: + return ParseVariableStatement(kStatement, NULL, ok); + + case Token::LET: + DCHECK(allow_harmony_scoping()); + if (strict_mode() == STRICT) { + return ParseVariableStatement(kStatement, NULL, ok); + } + // Fall through. default: return ParseExpressionOrLabelledStatement(labels, ok); } } -VariableProxy* Parser::NewUnresolved( - Handle<String> name, VariableMode mode, Interface* interface) { +VariableProxy* Parser::NewUnresolved(const AstRawString* name, + VariableMode mode, Interface* interface) { // If we are inside a function, a declaration of a var/const variable is a // truly local variable, and the scope of the variable is always the function // scope. @@ -1667,7 +1652,8 @@ VariableProxy* Parser::NewUnresolved( void Parser::Declare(Declaration* declaration, bool resolve, bool* ok) { VariableProxy* proxy = declaration->proxy(); - Handle<String> name = proxy->name(); + DCHECK(proxy->raw_name() != NULL); + const AstRawString* name = proxy->raw_name(); VariableMode mode = declaration->mode(); Scope* declaration_scope = DeclarationScope(mode); Variable* var = NULL; @@ -1691,20 +1677,20 @@ void Parser::Declare(Declaration* declaration, bool resolve, bool* ok) { // global scope. var = declaration_scope->is_global_scope() ? 
declaration_scope->Lookup(name) - : declaration_scope->LocalLookup(name); + : declaration_scope->LookupLocal(name); if (var == NULL) { // Declare the name. - var = declaration_scope->DeclareLocal( - name, mode, declaration->initialization(), proxy->interface()); - } else if ((mode != VAR || var->mode() != VAR) && - (!declaration_scope->is_global_scope() || - IsLexicalVariableMode(mode) || - IsLexicalVariableMode(var->mode()))) { + var = declaration_scope->DeclareLocal(name, mode, + declaration->initialization(), + kNotAssigned, proxy->interface()); + } else if (IsLexicalVariableMode(mode) || IsLexicalVariableMode(var->mode()) + || ((mode == CONST_LEGACY || var->mode() == CONST_LEGACY) && + !declaration_scope->is_global_scope())) { // The name was declared in this scope before; check for conflicting // re-declarations. We have a conflict if either of the declarations is // not a var (in the global scope, we also have to ignore legacy const for // compatibility). There is similar code in runtime.cc in the Declare - // functions. The function CheckNonConflictingScope checks for conflicting + // functions. The function CheckConflictingVarDeclarations checks for // var and let bindings from different scopes whereas this is a check for // conflicting declarations within the same scope. This check also covers // the special case @@ -1713,20 +1699,19 @@ void Parser::Declare(Declaration* declaration, bool resolve, bool* ok) { // // because the var declaration is hoisted to the function scope where 'x' // is already bound. - ASSERT(IsDeclaredVariableMode(var->mode())); + DCHECK(IsDeclaredVariableMode(var->mode())); if (allow_harmony_scoping() && strict_mode() == STRICT) { // In harmony we treat re-declarations as early errors. See // ES5 16 for a definition of early errors. - SmartArrayPointer<char> c_string = name->ToCString(DISALLOW_NULLS); - const char* elms[1] = { c_string.get() }; - Vector<const char*> args(elms, 1); - ReportMessage("var_redeclaration", args); + ParserTraits::ReportMessage("var_redeclaration", name); *ok = false; return; } Expression* expression = NewThrowTypeError( "var_redeclaration", name, declaration->position()); declaration_scope->SetIllegalRedeclaration(expression); + } else if (mode == VAR) { + var->set_maybe_assigned(); } } @@ -1745,25 +1730,26 @@ void Parser::Declare(Declaration* declaration, bool resolve, bool* ok) { // same variable if it is declared several times. This is not a // semantic issue as long as we keep the source order, but it may be // a performance issue since it may lead to repeated - // RuntimeHidden_DeclareContextSlot calls. + // RuntimeHidden_DeclareLookupSlot calls. declaration_scope->AddDeclaration(declaration); if (mode == CONST_LEGACY && declaration_scope->is_global_scope()) { // For global const variables we bind the proxy to a variable. - ASSERT(resolve); // should be set by all callers + DCHECK(resolve); // should be set by all callers Variable::Kind kind = Variable::NORMAL; - var = new(zone()) Variable( - declaration_scope, name, mode, true, kind, - kNeedsInitialization, proxy->interface()); + var = new (zone()) + Variable(declaration_scope, name, mode, true, kind, + kNeedsInitialization, kNotAssigned, proxy->interface()); } else if (declaration_scope->is_eval_scope() && declaration_scope->strict_mode() == SLOPPY) { // For variable declarations in a sloppy eval scope the proxy is bound // to a lookup variable to force a dynamic declaration using the - // DeclareContextSlot runtime function. + // DeclareLookupSlot runtime function. 
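The rewritten conflict test above flags a re-declaration as illegal when either binding is lexical (let/const), or when legacy const is involved outside the global scope; a plain var-over-var re-declaration is allowed and merely marks the variable as possibly assigned for the new maybe_assigned analysis. As a standalone predicate (mode names abbreviated):

enum class Mode { kVar, kLet, kConst, kConstLegacy };

bool IsLexical(Mode m) { return m == Mode::kLet || m == Mode::kConst; }

bool IsConflictingRedeclaration(Mode old_mode, Mode new_mode,
                                bool is_global_scope) {
  if (IsLexical(old_mode) || IsLexical(new_mode)) return true;
  if ((old_mode == Mode::kConstLegacy || new_mode == Mode::kConstLegacy) &&
      !is_global_scope) {
    return true;
  }
  return false;  // e.g. 'var x; var x;' is fine, but x becomes maybe-assigned
}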
Variable::Kind kind = Variable::NORMAL; - var = new(zone()) Variable( - declaration_scope, name, mode, true, kind, - declaration->initialization(), proxy->interface()); + // TODO(sigurds) figure out if kNotAssigned is OK here + var = new (zone()) Variable(declaration_scope, name, mode, true, kind, + declaration->initialization(), kNotAssigned, + proxy->interface()); var->AllocateTo(Variable::LOOKUP, -1); resolve = true; } @@ -1798,8 +1784,10 @@ void Parser::Declare(Declaration* declaration, bool resolve, bool* ok) { if (FLAG_harmony_modules) { bool ok; #ifdef DEBUG - if (FLAG_print_interface_details) - PrintF("# Declare %s\n", var->name()->ToAsciiArray()); + if (FLAG_print_interface_details) { + PrintF("# Declare %.*s ", var->raw_name()->length(), + var->raw_name()->raw_data()); + } #endif proxy->interface()->Unify(var->interface(), zone(), &ok); if (!ok) { @@ -1812,8 +1800,7 @@ void Parser::Declare(Declaration* declaration, bool resolve, bool* ok) { var->interface()->Print(); } #endif - ParserTraits::ReportMessage("module_type_error", - Vector<Handle<String> >(&name, 1)); + ParserTraits::ReportMessage("module_type_error", name); } } } @@ -1828,7 +1815,7 @@ Statement* Parser::ParseNativeDeclaration(bool* ok) { int pos = peek_position(); Expect(Token::FUNCTION, CHECK_OK); // Allow "eval" or "arguments" for backward compatibility. - Handle<String> name = ParseIdentifier(kAllowEvalOrArguments, CHECK_OK); + const AstRawString* name = ParseIdentifier(kAllowEvalOrArguments, CHECK_OK); Expect(Token::LPAREN, CHECK_OK); bool done = (peek() == Token::RPAREN); while (!done) { @@ -1863,7 +1850,8 @@ Statement* Parser::ParseNativeDeclaration(bool* ok) { } -Statement* Parser::ParseFunctionDeclaration(ZoneStringList* names, bool* ok) { +Statement* Parser::ParseFunctionDeclaration( + ZoneList<const AstRawString*>* names, bool* ok) { // FunctionDeclaration :: // 'function' Identifier '(' FormalParameterListopt ')' '{' FunctionBody '}' // GeneratorDeclaration :: @@ -1873,7 +1861,7 @@ Statement* Parser::ParseFunctionDeclaration(ZoneStringList* names, bool* ok) { int pos = position(); bool is_generator = allow_generators() && Check(Token::MUL); bool is_strict_reserved = false; - Handle<String> name = ParseIdentifierOrStrictReservedWord( + const AstRawString* name = ParseIdentifierOrStrictReservedWord( &is_strict_reserved, CHECK_OK); FunctionLiteral* fun = ParseFunctionLiteral(name, scanner()->location(), @@ -1881,15 +1869,17 @@ Statement* Parser::ParseFunctionDeclaration(ZoneStringList* names, bool* ok) { is_generator, pos, FunctionLiteral::DECLARATION, + FunctionLiteral::NORMAL_ARITY, CHECK_OK); // Even if we're not at the top-level of the global or a function // scope, we treat it as such and introduce the function with its // initial value upon entering the corresponding scope. - // In extended mode, a function behaves as a lexical binding, except in the - // global scope. + // In ES6, a function behaves as a lexical binding, except in the + // global scope, or the initial scope of eval or another function. VariableMode mode = - allow_harmony_scoping() && - strict_mode() == STRICT && !scope_->is_global_scope() ? LET : VAR; + allow_harmony_scoping() && strict_mode() == STRICT && + !(scope_->is_global_scope() || scope_->is_eval_scope() || + scope_->is_function_scope()) ? 
   VariableProxy* proxy = NewUnresolved(name, mode, Interface::NewValue());
   Declaration* declaration =
       factory()->NewFunctionDeclaration(proxy, mode, fun, scope_, pos);
@@ -1899,7 +1889,7 @@ Statement* Parser::ParseFunctionDeclaration(
 }


-Block* Parser::ParseBlock(ZoneStringList* labels, bool* ok) {
+Block* Parser::ParseBlock(ZoneList<const AstRawString*>* labels, bool* ok) {
   if (allow_harmony_scoping() && strict_mode() == STRICT) {
     return ParseScopedBlock(labels, ok);
   }
@@ -1926,7 +1916,8 @@ Block* Parser::ParseBlock(ZoneList<const AstRawString*>* labels, bool* ok) {
 }


-Block* Parser::ParseScopedBlock(ZoneStringList* labels, bool* ok) {
+Block* Parser::ParseScopedBlock(ZoneList<const AstRawString*>* labels,
+                                bool* ok) {
   // The harmony mode uses block elements instead of statements.
   //
   // Block ::
@@ -1961,12 +1952,12 @@ Block* Parser::ParseScopedBlock(ZoneList<const AstRawString*>* labels,


 Block* Parser::ParseVariableStatement(VariableDeclarationContext var_context,
-                                      ZoneStringList* names,
+                                      ZoneList<const AstRawString*>* names,
                                       bool* ok) {
   // VariableStatement ::
   //   VariableDeclarations ';'

-  Handle<String> ignore;
+  const AstRawString* ignore;
   Block* result =
       ParseVariableDeclarations(var_context, NULL, names, &ignore, CHECK_OK);
   ExpectSemicolon(CHECK_OK);
@@ -1982,8 +1973,8 @@ Block* Parser::ParseVariableStatement(VariableDeclarationContext var_context,
 Block* Parser::ParseVariableDeclarations(
     VariableDeclarationContext var_context,
     VariableDeclarationProperties* decl_props,
-    ZoneStringList* names,
-    Handle<String>* out,
+    ZoneList<const AstRawString*>* names,
+    const AstRawString** out,
     bool* ok) {
   // VariableDeclarations ::
   //   ('var' | 'const' | 'let') (Identifier ('=' AssignmentExpression)?)+[',']
@@ -2032,38 +2023,26 @@ Block* Parser::ParseVariableDeclarations(
       if (var_context == kStatement) {
         // In strict mode 'const' declarations are only allowed in source
         // element positions.
-        ReportMessage("unprotected_const", Vector<const char*>::empty());
+        ReportMessage("unprotected_const");
         *ok = false;
         return NULL;
       }
       mode = CONST;
       init_op = Token::INIT_CONST;
     } else {
-      ReportMessage("strict_const", Vector<const char*>::empty());
+      ReportMessage("strict_const");
       *ok = false;
       return NULL;
     }
     }
     is_const = true;
     needs_init = true;
-  } else if (peek() == Token::LET) {
-    // ES6 Draft Rev4 section 12.2.1:
-    //
-    // LetDeclaration : let LetBindingList ;
-    //
-    // * It is a Syntax Error if the code that matches this production is not
-    //   contained in extended code.
-    //
-    // TODO(rossberg): make 'let' a legal identifier in sloppy mode.
-    if (!allow_harmony_scoping() || strict_mode() == SLOPPY) {
-      ReportMessage("illegal_let", Vector<const char*>::empty());
-      *ok = false;
-      return NULL;
-    }
+  } else if (peek() == Token::LET && strict_mode() == STRICT) {
+    DCHECK(allow_harmony_scoping());
     Consume(Token::LET);
     if (var_context == kStatement) {
       // Let declarations are only allowed in source element positions.
-      ReportMessage("unprotected_let", Vector<const char*>::empty());
+      ReportMessage("unprotected_let");
       *ok = false;
       return NULL;
     }
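After this change, ParseVariableDeclarations() only recognizes 'let' as a keyword in strict mode, and it rejects 'const'/'let' outside source element positions. Roughly, in JavaScript terms (illustrative only):

    if (cond) var v = 1;       // OK: a bare 'var' statement is allowed
    // if (cond) let w = 1;    // SyntaxError: "unprotected_let" (strict mode)
    // if (cond) const c = 1;  // SyntaxError: "unprotected_const"
    // In strict mode without harmony scoping, 'const' itself is rejected
    // outright with "strict_const".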
@@ -2091,7 +2070,7 @@
   // Create new block with one expected declaration.
   Block* block = factory()->NewBlock(NULL, 1, true, pos);
   int nvars = 0;  // the number of variables declared
-  Handle<String> name;
+  const AstRawString* name = NULL;
   do {
     if (fni_ != NULL) fni_->Enter();
@@ -2123,7 +2102,7 @@
     Declare(declaration, mode != VAR, CHECK_OK);
     nvars++;
     if (declaration_scope->num_var_or_const() > kMaxNumFunctionLocals) {
-      ReportMessageAt(scanner()->location(), "too_many_variables");
+      ReportMessage("too_many_variables");
       *ok = false;
       return NULL;
     }
@@ -2197,9 +2176,8 @@
     // executed.
     //
     // Executing the variable declaration statement will always
-    // guarantee to give the global object a "local" variable; a
-    // variable defined in the global object and not in any
-    // prototype. This way, global variable declarations can shadow
+    // guarantee to give the global object an own property.
+    // This way, global variable declarations can shadow
     // properties in the prototype chain, but only after the variable
     // declaration statement has been executed. This is important in
     // browsers where the global object (window) has lots of
@@ -2210,7 +2188,7 @@
     ZoneList<Expression*>* arguments =
         new(zone()) ZoneList<Expression*>(3, zone());
     // We have at least 1 parameter.
-    arguments->Add(factory()->NewLiteral(name, pos), zone());
+    arguments->Add(factory()->NewStringLiteral(name, pos), zone());
     CallRuntime* initialize;

     if (is_const) {
@@ -2222,8 +2200,8 @@
       // Note that the function does different things depending on
       // the number of arguments (1 or 2).
       initialize = factory()->NewCallRuntime(
-          isolate()->factory()->InitializeConstGlobal_string(),
-          Runtime::FunctionForId(Runtime::kHiddenInitializeConstGlobal),
+          ast_value_factory_->initialize_const_global_string(),
+          Runtime::FunctionForId(Runtime::kInitializeConstGlobal),
          arguments, pos);
     } else {
       // Add strict mode.
@@ -2238,21 +2216,22 @@
       if (value != NULL && !inside_with()) {
         arguments->Add(value, zone());
         value = NULL;  // zap the value to avoid the unnecessary assignment
+        // Construct the call to Runtime_InitializeVarGlobal
+        // and add it to the initialization statement block.
+        initialize = factory()->NewCallRuntime(
+            ast_value_factory_->initialize_var_global_string(),
+            Runtime::FunctionForId(Runtime::kInitializeVarGlobal), arguments,
+            pos);
+      } else {
+        initialize = NULL;
       }
-
-      // Construct the call to Runtime_InitializeVarGlobal
-      // and add it to the initialization statement block.
-      // Note that the function does different things depending on
-      // the number of arguments (2 or 3).
-      initialize = factory()->NewCallRuntime(
-          isolate()->factory()->InitializeVarGlobal_string(),
-          Runtime::FunctionForId(Runtime::kInitializeVarGlobal),
-          arguments, pos);
     }

-    block->AddStatement(
-        factory()->NewExpressionStatement(initialize, RelocInfo::kNoPosition),
-        zone());
+    if (initialize != NULL) {
+      block->AddStatement(factory()->NewExpressionStatement(
+                              initialize, RelocInfo::kNoPosition),
+                          zone());
+    }
   } else if (needs_init) {
     // Constant initializations always assign to the declared constant which
     // is always at the function scope level. This is only relevant for
     // dynamically looked-up variables and constants (the start context for
     // constant lookups is always the function context, while it is the top
     // context for var declared variables). Sigh...
     // For 'let' and 'const' declared variables in harmony mode the
     // initialization also always assigns to the declared variable.
-    ASSERT(proxy != NULL);
-    ASSERT(proxy->var() != NULL);
-    ASSERT(value != NULL);
+    DCHECK(proxy != NULL);
+    DCHECK(proxy->var() != NULL);
+    DCHECK(value != NULL);
     Assignment* assignment =
         factory()->NewAssignment(init_op, proxy, value, pos);
     block->AddStatement(
@@ -2275,7 +2254,7 @@
   // Add an assignment node to the initialization statement block if we still
   // have a pending initialization value.
   if (value != NULL) {
-    ASSERT(mode == VAR);
+    DCHECK(mode == VAR);
     // 'var' initializations are simply assignments (with all the consequences
     // if they are inside a 'with' statement - they may change a 'with' object
     // property).
@@ -2301,19 +2280,22 @@
 }


-static bool ContainsLabel(ZoneStringList* labels, Handle<String> label) {
-  ASSERT(!label.is_null());
-  if (labels != NULL)
-    for (int i = labels->length(); i-- > 0; )
-      if (labels->at(i).is_identical_to(label))
+static bool ContainsLabel(ZoneList<const AstRawString*>* labels,
+                          const AstRawString* label) {
+  DCHECK(label != NULL);
+  if (labels != NULL) {
+    for (int i = labels->length(); i-- > 0; ) {
+      if (labels->at(i) == label) {
         return true;
-
+      }
+    }
+  }
   return false;
 }


-Statement* Parser::ParseExpressionOrLabelledStatement(ZoneStringList* labels,
-                                                      bool* ok) {
+Statement* Parser::ParseExpressionOrLabelledStatement(
+    ZoneList<const AstRawString*>* labels, bool* ok) {
   // ExpressionStatement | LabelledStatement ::
   //   Expression ';'
   //   Identifier ':' Statement
@@ -2326,22 +2308,19 @@
     // Expression is a single identifier, and not, e.g., a parenthesized
     // identifier.
     VariableProxy* var = expr->AsVariableProxy();
-    Handle<String> label = var->name();
+    const AstRawString* label = var->raw_name();
     // TODO(1240780): We don't check for redeclaration of labels
     // during preparsing since keeping track of the set of active
     // labels requires nontrivial changes to the way scopes are
     // structured.  However, these are probably changes we want to
     // make later anyway so we should go back and fix this then.
     if (ContainsLabel(labels, label) || TargetStackContainsLabel(label)) {
-      SmartArrayPointer<char> c_string = label->ToCString(DISALLOW_NULLS);
-      const char* elms[1] = { c_string.get() };
-      Vector<const char*> args(elms, 1);
-      ReportMessage("label_redeclaration", args);
+      ParserTraits::ReportMessage("label_redeclaration", label);
       *ok = false;
       return NULL;
     }
     if (labels == NULL) {
-      labels = new(zone()) ZoneStringList(4, zone());
+      labels = new(zone()) ZoneList<const AstRawString*>(4, zone());
     }
     labels->Add(label, zone());
     // Remove the "ghost" variable that turned out to be a label
@@ -2360,8 +2339,8 @@
       !scanner()->HasAnyLineTerminatorBeforeNext() &&
       expr != NULL &&
       expr->AsVariableProxy() != NULL &&
-      String::Equals(isolate()->factory()->native_string(),
-                     expr->AsVariableProxy()->name()) &&
+      expr->AsVariableProxy()->raw_name() ==
+          ast_value_factory_->native_string() &&
       !scanner()->literal_contains_escapes()) {
     return ParseNativeDeclaration(ok);
   }
@@ -2372,8 +2351,8 @@
       peek() != Token::IDENTIFIER ||
       scanner()->HasAnyLineTerminatorBeforeNext() ||
       expr->AsVariableProxy() == NULL ||
-      !String::Equals(isolate()->factory()->module_string(),
-                      expr->AsVariableProxy()->name()) ||
+      expr->AsVariableProxy()->raw_name() !=
+          ast_value_factory_->module_string() ||
       scanner()->literal_contains_escapes()) {
     ExpectSemicolon(CHECK_OK);
   }
@@ -2381,7 +2360,8 @@
 }


-IfStatement* Parser::ParseIfStatement(ZoneStringList* labels, bool* ok) {
+IfStatement* Parser::ParseIfStatement(ZoneList<const AstRawString*>* labels,
+                                      bool* ok) {
   // IfStatement ::
   //   'if' '(' Expression ')' Statement ('else' Statement)?

@@ -2409,24 +2389,21 @@ Statement* Parser::ParseContinueStatement(bool* ok) {
   int pos = peek_position();
   Expect(Token::CONTINUE, CHECK_OK);
-  Handle<String> label = Handle<String>::null();
+  const AstRawString* label = NULL;
   Token::Value tok = peek();
   if (!scanner()->HasAnyLineTerminatorBeforeNext() &&
       tok != Token::SEMICOLON && tok != Token::RBRACE && tok != Token::EOS) {
     // ECMA allows "eval" or "arguments" as labels even in strict mode.
     label = ParseIdentifier(kAllowEvalOrArguments, CHECK_OK);
   }
-  IterationStatement* target = NULL;
-  target = LookupContinueTarget(label, CHECK_OK);
+  IterationStatement* target = LookupContinueTarget(label, CHECK_OK);
   if (target == NULL) {
     // Illegal continue statement.
     const char* message = "illegal_continue";
-    Vector<Handle<String> > args;
-    if (!label.is_null()) {
+    if (label != NULL) {
       message = "unknown_label";
-      args = Vector<Handle<String> >(&label, 1);
     }
-    ParserTraits::ReportMessageAt(scanner()->location(), message, args);
+    ParserTraits::ReportMessage(message, label);
     *ok = false;
     return NULL;
   }
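The label bookkeeping above and the break/continue parsing that follows implement source-level rules like these (illustrative JavaScript):

    // Duplicate labels are reported as "label_redeclaration":
    // l1: l1: for (;;) break l1;          // SyntaxError

    // A labelled break that targets a label on its own statement chain
    // parses into an empty statement:
    l1: l2: l3: break l2;

    // continue must name an enclosing iteration statement, otherwise
    // "illegal_continue" or "unknown_label" is reported:
    // for (;;) { continue nosuch; }       // SyntaxError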
@@ -2435,13 +2412,14 @@ Statement* Parser::ParseContinueStatement(bool* ok) {
 }


-Statement* Parser::ParseBreakStatement(ZoneStringList* labels, bool* ok) {
+Statement* Parser::ParseBreakStatement(ZoneList<const AstRawString*>* labels,
+                                       bool* ok) {
   // BreakStatement ::
   //   'break' Identifier? ';'

   int pos = peek_position();
   Expect(Token::BREAK, CHECK_OK);
-  Handle<String> label;
+  const AstRawString* label = NULL;
   Token::Value tok = peek();
   if (!scanner()->HasAnyLineTerminatorBeforeNext() &&
       tok != Token::SEMICOLON && tok != Token::RBRACE && tok != Token::EOS) {
@@ -2450,7 +2428,7 @@
   }
   // Parse labeled break statements that target themselves into
   // empty statements, e.g. 'l1: l2: l3: break l2;'
-  if (!label.is_null() && ContainsLabel(labels, label)) {
+  if (label != NULL && ContainsLabel(labels, label)) {
     ExpectSemicolon(CHECK_OK);
     return factory()->NewEmptyStatement(pos);
   }
@@ -2459,12 +2437,10 @@
   if (target == NULL) {
     // Illegal break statement.
     const char* message = "illegal_break";
-    Vector<Handle<String> > args;
-    if (!label.is_null()) {
+    if (label != NULL) {
       message = "unknown_label";
-      args = Vector<Handle<String> >(&label, 1);
     }
-    ParserTraits::ReportMessageAt(scanner()->location(), message, args);
+    ParserTraits::ReportMessage(message, label);
     *ok = false;
     return NULL;
   }
@@ -2515,7 +2491,8 @@
 }


-Statement* Parser::ParseWithStatement(ZoneStringList* labels, bool* ok) {
+Statement* Parser::ParseWithStatement(ZoneList<const AstRawString*>* labels,
+                                      bool* ok) {
   // WithStatement ::
   //   'with' '(' Expression ')' Statement

@@ -2523,7 +2500,7 @@
   int pos = position();

   if (strict_mode() == STRICT) {
-    ReportMessage("strict_mode_with", Vector<const char*>::empty());
+    ReportMessage("strict_mode_with");
     *ok = false;
     return NULL;
   }
@@ -2556,8 +2533,7 @@
   } else {
     Expect(Token::DEFAULT, CHECK_OK);
     if (*default_seen_ptr) {
-      ReportMessage("multiple_defaults_in_switch",
-                    Vector<const char*>::empty());
+      ReportMessage("multiple_defaults_in_switch");
       *ok = false;
       return NULL;
     }
@@ -2578,8 +2554,8 @@
 }


-SwitchStatement* Parser::ParseSwitchStatement(ZoneStringList* labels,
-                                              bool* ok) {
+SwitchStatement* Parser::ParseSwitchStatement(
+    ZoneList<const AstRawString*>* labels, bool* ok) {
   // SwitchStatement ::
   //   'switch' '(' Expression ')' '{' CaseClause* '}'

@@ -2613,7 +2589,7 @@ Statement* Parser::ParseThrowStatement(bool* ok) {
   Expect(Token::THROW, CHECK_OK);
   int pos = position();
   if (scanner()->HasAnyLineTerminatorBeforeNext()) {
-    ReportMessage("newline_after_throw", Vector<const char*>::empty());
+    ReportMessage("newline_after_throw");
     *ok = false;
     return NULL;
   }
@@ -2649,7 +2625,7 @@

   Token::Value tok = peek();
   if (tok != Token::CATCH && tok != Token::FINALLY) {
-    ReportMessage("no_catch_or_finally", Vector<const char*>::empty());
+    ReportMessage("no_catch_or_finally");
     *ok = false;
     return NULL;
   }
@@ -2662,7 +2638,7 @@
   Scope* catch_scope = NULL;
   Variable* catch_variable = NULL;
   Block* catch_block = NULL;
-  Handle<String> name;
+  const AstRawString* name = NULL;
   if (tok == Token::CATCH) {
     Consume(Token::CATCH);
@@ -2676,9 +2652,7 @@
     Target target(&this->target_stack_, &catch_collector);
     VariableMode mode = allow_harmony_scoping() && strict_mode() == STRICT
         ? LET : VAR;
-    catch_variable =
-        catch_scope->DeclareLocal(name, mode, kCreatedInitialized);
-
+    catch_variable = catch_scope->DeclareLocal(name, mode, kCreatedInitialized);
     BlockState block_state(&scope_, catch_scope);
     catch_block = ParseBlock(NULL, CHECK_OK);
@@ -2687,7 +2661,7 @@
   }

   Block* finally_block = NULL;
-  ASSERT(tok == Token::FINALLY || catch_block != NULL);
+  DCHECK(tok == Token::FINALLY || catch_block != NULL);
   if (tok == Token::FINALLY) {
     Consume(Token::FINALLY);
     finally_block = ParseBlock(NULL, CHECK_OK);
@@ -2700,7 +2674,7 @@

   if (catch_block != NULL && finally_block != NULL) {
     // If we have both, create an inner try/catch.
-    ASSERT(catch_scope != NULL && catch_variable != NULL);
+    DCHECK(catch_scope != NULL && catch_variable != NULL);
     int index = function_state_->NextHandlerIndex();
     TryCatchStatement* statement = factory()->NewTryCatchStatement(
         index, try_block, catch_scope, catch_variable, catch_block,
@@ -2713,13 +2687,13 @@

   TryStatement* result = NULL;
   if (catch_block != NULL) {
-    ASSERT(finally_block == NULL);
-    ASSERT(catch_scope != NULL && catch_variable != NULL);
+    DCHECK(finally_block == NULL);
+    DCHECK(catch_scope != NULL && catch_variable != NULL);
     int index = function_state_->NextHandlerIndex();
     result = factory()->NewTryCatchStatement(
         index, try_block, catch_scope, catch_variable, catch_block, pos);
   } else {
-    ASSERT(finally_block != NULL);
+    DCHECK(finally_block != NULL);
     int index = function_state_->NextHandlerIndex();
     result = factory()->NewTryFinallyStatement(
         index, try_block, finally_block, pos);
@@ -2732,8 +2706,8 @@
 }


-DoWhileStatement* Parser::ParseDoWhileStatement(ZoneStringList* labels,
-                                                bool* ok) {
+DoWhileStatement* Parser::ParseDoWhileStatement(
+    ZoneList<const AstRawString*>* labels, bool* ok) {
   // DoStatement ::
   //   'do' Statement 'while' '(' Expression ')' ';'

@@ -2760,7 +2734,8 @@
 }


-WhileStatement* Parser::ParseWhileStatement(ZoneStringList* labels, bool* ok) {
+WhileStatement* Parser::ParseWhileStatement(
+    ZoneList<const AstRawString*>* labels, bool* ok) {
   // WhileStatement ::
   //   'while' '(' Expression ')' Statement

@@ -2783,8 +2758,7 @@
   if (Check(Token::IN)) {
     *visit_mode = ForEachStatement::ENUMERATE;
     return true;
-  } else if (allow_for_of() && accept_OF &&
-             CheckContextualKeyword(CStrVector("of"))) {
+  } else if (accept_OF && CheckContextualKeyword(CStrVector("of"))) {
     *visit_mode = ForEachStatement::ITERATE;
     return true;
   }
@@ -2799,29 +2773,26 @@ void Parser::InitializeForEachStatement(ForEachStatement* stmt,

   ForOfStatement* for_of = stmt->AsForOfStatement();
   if (for_of != NULL) {
-    Factory* heap_factory = isolate()->factory();
     Variable* iterator = scope_->DeclarationScope()->NewTemporary(
-        heap_factory->dot_iterator_string());
+        ast_value_factory_->dot_iterator_string());
     Variable* result = scope_->DeclarationScope()->NewTemporary(
-        heap_factory->dot_result_string());
+        ast_value_factory_->dot_result_string());

     Expression* assign_iterator;
     Expression* next_result;
     Expression* result_done;
     Expression* assign_each;

-    // var iterator = iterable;
-    {
-      Expression* iterator_proxy = factory()->NewVariableProxy(iterator);
-      assign_iterator = factory()->NewAssignment(
-          Token::ASSIGN, iterator_proxy, subject,
-          RelocInfo::kNoPosition);
-    }
+    // var iterator = subject[Symbol.iterator]();
+    assign_iterator = factory()->NewAssignment(
+        Token::ASSIGN, factory()->NewVariableProxy(iterator),
+        GetIterator(subject, factory()), RelocInfo::kNoPosition);

     // var result = iterator.next();
     {
       Expression* iterator_proxy = factory()->NewVariableProxy(iterator);
-      Expression* next_literal = factory()->NewLiteral(
-          heap_factory->next_string(), RelocInfo::kNoPosition);
+      Expression* next_literal = factory()->NewStringLiteral(
+          ast_value_factory_->next_string(), RelocInfo::kNoPosition);
       Expression* next_property = factory()->NewProperty(
           iterator_proxy, next_literal, RelocInfo::kNoPosition);
       ZoneList<Expression*>* next_arguments =
@@ -2835,8 +2806,8 @@

     // result.done
     {
-      Expression* done_literal = factory()->NewLiteral(
-          heap_factory->done_string(), RelocInfo::kNoPosition);
+      Expression* done_literal = factory()->NewStringLiteral(
+          ast_value_factory_->done_string(), RelocInfo::kNoPosition);
       Expression* result_proxy = factory()->NewVariableProxy(result);
       result_done = factory()->NewProperty(
           result_proxy, done_literal, RelocInfo::kNoPosition);
@@ -2844,8 +2815,8 @@

     // each = result.value
     {
-      Expression* value_literal = factory()->NewLiteral(
-          heap_factory->value_string(), RelocInfo::kNoPosition);
+      Expression* value_literal = factory()->NewStringLiteral(
+          ast_value_factory_->value_string(), RelocInfo::kNoPosition);
       Expression* result_proxy = factory()->NewVariableProxy(result);
       Expression* result_value = factory()->NewProperty(
           result_proxy, value_literal, RelocInfo::kNoPosition);
@@ -2854,19 +2825,183 @@
     }

     for_of->Initialize(each, subject, body,
-        assign_iterator, next_result, result_done, assign_each);
+                       assign_iterator,
+                       next_result,
+                       result_done,
+                       assign_each);
   } else {
     stmt->Initialize(each, subject, body);
   }
 }


-Statement* Parser::ParseForStatement(ZoneStringList* labels, bool* ok) {
+Statement* Parser::DesugarLetBindingsInForStatement(
+    Scope* inner_scope, ZoneList<const AstRawString*>* names,
+    ForStatement* loop, Statement* init, Expression* cond, Statement* next,
+    Statement* body, bool* ok) {
+  // ES6 13.6.3.4 specifies that on each loop iteration the let variables are
+  // copied into a new environment. After copying, the "next" statement of the
+  // loop is executed to update the loop variables. The loop condition is
+  // checked and the loop body is executed.
+  //
+  // We rewrite a for statement of the form
+  //
+  //  for (let x = i; cond; next) body
+  //
+  // into
+  //
+  //  {
+  //     let x = i;
+  //     temp_x = x;
+  //     flag = 1;
+  //     for (;;) {
+  //        let x = temp_x;
+  //        if (flag == 1) {
+  //          flag = 0;
+  //        } else {
+  //          next;
+  //        }
+  //        if (cond) {
+  //          <empty>
+  //        } else {
+  //          break;
+  //        }
+  //        b
+  //        temp_x = x;
+  //     }
+  //  }
+
+  DCHECK(names->length() > 0);
+  Scope* for_scope = scope_;
+  ZoneList<Variable*> temps(names->length(), zone());
+
+  Block* outer_block = factory()->NewBlock(NULL, names->length() + 3, false,
+                                           RelocInfo::kNoPosition);
+  outer_block->AddStatement(init, zone());
+
+  const AstRawString* temp_name = ast_value_factory_->dot_for_string();
+
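The observable effect of the rewrite sketched in the comment above is that each iteration gets a fresh copy of the let-bound variables, which is what closures created in the loop body capture. An illustrative JavaScript example:

    var fns = [];
    for (let i = 0; i < 3; i++) {
      fns.push(function() { return i; });
    }
    // fns[0](), fns[1](), fns[2]()  =>  0, 1, 2
    // With 'var i' every closure would see the final value 3.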
+  // For each let variable x:
+  //   make statement: temp_x = x.
+  for (int i = 0; i < names->length(); i++) {
+    VariableProxy* proxy =
+        NewUnresolved(names->at(i), LET, Interface::NewValue());
+    Variable* temp = scope_->DeclarationScope()->NewTemporary(temp_name);
+    VariableProxy* temp_proxy = factory()->NewVariableProxy(temp);
+    Assignment* assignment = factory()->NewAssignment(
+        Token::ASSIGN, temp_proxy, proxy, RelocInfo::kNoPosition);
+    Statement* assignment_statement = factory()->NewExpressionStatement(
+        assignment, RelocInfo::kNoPosition);
+    outer_block->AddStatement(assignment_statement, zone());
+    temps.Add(temp, zone());
+  }
+
+  Variable* flag = scope_->DeclarationScope()->NewTemporary(temp_name);
+  // Make statement: flag = 1.
+  {
+    VariableProxy* flag_proxy = factory()->NewVariableProxy(flag);
+    Expression* const1 = factory()->NewSmiLiteral(1, RelocInfo::kNoPosition);
+    Assignment* assignment = factory()->NewAssignment(
+        Token::ASSIGN, flag_proxy, const1, RelocInfo::kNoPosition);
+    Statement* assignment_statement = factory()->NewExpressionStatement(
+        assignment, RelocInfo::kNoPosition);
+    outer_block->AddStatement(assignment_statement, zone());
+  }
+
+  outer_block->AddStatement(loop, zone());
+  outer_block->set_scope(for_scope);
+  scope_ = inner_scope;
+
+  Block* inner_block = factory()->NewBlock(NULL, 2 * names->length() + 3,
+                                           false, RelocInfo::kNoPosition);
+  int pos = scanner()->location().beg_pos;
+  ZoneList<Variable*> inner_vars(names->length(), zone());
+
+  // For each let variable x:
+  //    make statement: let x = temp_x.
+  for (int i = 0; i < names->length(); i++) {
+    VariableProxy* proxy =
+        NewUnresolved(names->at(i), LET, Interface::NewValue());
+    Declaration* declaration =
+        factory()->NewVariableDeclaration(proxy, LET, scope_, pos);
+    Declare(declaration, true, CHECK_OK);
+    inner_vars.Add(declaration->proxy()->var(), zone());
+    VariableProxy* temp_proxy = factory()->NewVariableProxy(temps.at(i));
+    Assignment* assignment = factory()->NewAssignment(
+        Token::INIT_LET, proxy, temp_proxy, pos);
+    Statement* assignment_statement = factory()->NewExpressionStatement(
+        assignment, pos);
+    proxy->var()->set_initializer_position(pos);
+    inner_block->AddStatement(assignment_statement, zone());
+  }
+
+  // Make statement: if (flag == 1) { flag = 0; } else { next; }.
+  if (next) {
+    Expression* compare = NULL;
+    // Make compare expression: flag == 1.
+    {
+      Expression* const1 = factory()->NewSmiLiteral(1, RelocInfo::kNoPosition);
+      VariableProxy* flag_proxy = factory()->NewVariableProxy(flag);
+      compare = factory()->NewCompareOperation(
+          Token::EQ, flag_proxy, const1, pos);
+    }
+    Statement* clear_flag = NULL;
+    // Make statement: flag = 0.
+    {
+      VariableProxy* flag_proxy = factory()->NewVariableProxy(flag);
+      Expression* const0 = factory()->NewSmiLiteral(0, RelocInfo::kNoPosition);
+      Assignment* assignment = factory()->NewAssignment(
+          Token::ASSIGN, flag_proxy, const0, RelocInfo::kNoPosition);
+      clear_flag = factory()->NewExpressionStatement(assignment, pos);
+    }
+    Statement* clear_flag_or_next = factory()->NewIfStatement(
+        compare, clear_flag, next, RelocInfo::kNoPosition);
+    inner_block->AddStatement(clear_flag_or_next, zone());
+  }
+
+
+  // Make statement: if (cond) { } else { break; }.
+  if (cond) {
+    Statement* empty = factory()->NewEmptyStatement(RelocInfo::kNoPosition);
+    BreakableStatement* t = LookupBreakTarget(NULL, CHECK_OK);
+    Statement* stop = factory()->NewBreakStatement(t, RelocInfo::kNoPosition);
+    Statement* if_not_cond_break = factory()->NewIfStatement(
+        cond, empty, stop, cond->position());
+    inner_block->AddStatement(if_not_cond_break, zone());
+  }
+
+  inner_block->AddStatement(body, zone());
+
+  // For each let variable x:
+  //   make statement: temp_x = x;
+  for (int i = 0; i < names->length(); i++) {
+    VariableProxy* temp_proxy = factory()->NewVariableProxy(temps.at(i));
+    int pos = scanner()->location().end_pos;
+    VariableProxy* proxy = factory()->NewVariableProxy(inner_vars.at(i), pos);
+    Assignment* assignment = factory()->NewAssignment(
+        Token::ASSIGN, temp_proxy, proxy, RelocInfo::kNoPosition);
+    Statement* assignment_statement = factory()->NewExpressionStatement(
+        assignment, RelocInfo::kNoPosition);
+    inner_block->AddStatement(assignment_statement, zone());
+  }
+
+  inner_scope->set_end_position(scanner()->location().end_pos);
+  inner_block->set_scope(inner_scope);
+  scope_ = for_scope;
+
+  loop->Initialize(NULL, NULL, NULL, inner_block);
+  return outer_block;
+}
+
+
+Statement* Parser::ParseForStatement(ZoneList<const AstRawString*>* labels,
+                                     bool* ok) {
   // ForStatement ::
   //   'for' '(' Expression? ';' Expression? ';' Expression? ')' Statement

   int pos = peek_position();
   Statement* init = NULL;
+  ZoneList<const AstRawString*> let_bindings(1, zone());

   // Create an in-between scope for let-bound iteration variables.
   Scope* saved_scope = scope_;
@@ -2879,7 +3014,7 @@
   if (peek() != Token::SEMICOLON) {
     if (peek() == Token::VAR || peek() == Token::CONST) {
       bool is_const = peek() == Token::CONST;
-      Handle<String> name;
+      const AstRawString* name = NULL;
       VariableDeclarationProperties decl_props = kHasNoInitializers;
       Block* variable_statement =
           ParseVariableDeclarations(kForStatement, &decl_props, NULL, &name,
@@ -2887,7 +3022,7 @@
       bool accept_OF = decl_props == kHasNoInitializers;
       ForEachStatement::VisitMode mode;

-      if (!name.is_null() && CheckInOrOf(accept_OF, &mode)) {
+      if (name != NULL && CheckInOrOf(accept_OF, &mode)) {
         Interface* interface =
             is_const ? Interface::NewConst() : Interface::NewValue();
         ForEachStatement* loop =
@@ -2908,19 +3043,20 @@
         scope_ = saved_scope;
         for_scope->set_end_position(scanner()->location().end_pos);
         for_scope = for_scope->FinalizeBlockScope();
-        ASSERT(for_scope == NULL);
+        DCHECK(for_scope == NULL);
         // Parsed for-in loop w/ variable/const declaration.
         return result;
       } else {
         init = variable_statement;
       }
-    } else if (peek() == Token::LET) {
-      Handle<String> name;
+    } else if (peek() == Token::LET && strict_mode() == STRICT) {
+      DCHECK(allow_harmony_scoping());
+      const AstRawString* name = NULL;
       VariableDeclarationProperties decl_props = kHasNoInitializers;
       Block* variable_statement =
-          ParseVariableDeclarations(kForStatement, &decl_props, NULL, &name,
-                                    CHECK_OK);
-      bool accept_IN = !name.is_null() && decl_props != kHasInitializers;
+          ParseVariableDeclarations(kForStatement, &decl_props, &let_bindings,
+                                    &name, CHECK_OK);
+      bool accept_IN = name != NULL && decl_props != kHasInitializers;
       bool accept_OF = decl_props == kHasNoInitializers;
       ForEachStatement::VisitMode mode;

@@ -2940,14 +3076,8 @@
         // TODO(keuchel): Move the temporary variable to the block scope, after
         // implementing stack allocated block scoped variables.
-        Factory* heap_factory = isolate()->factory();
-        Handle<String> tempstr;
-        ASSIGN_RETURN_ON_EXCEPTION_VALUE(
-            isolate(), tempstr,
-            heap_factory->NewConsString(heap_factory->dot_for_string(), name),
-            0);
-        Handle<String> tempname = heap_factory->InternalizeString(tempstr);
-        Variable* temp = scope_->DeclarationScope()->NewTemporary(tempname);
+        Variable* temp = scope_->DeclarationScope()->NewTemporary(
+            ast_value_factory_->dot_for_string());
         VariableProxy* temp_proxy = factory()->NewVariableProxy(temp);
         ForEachStatement* loop =
             factory()->NewForEachStatement(mode, labels, pos);
@@ -3004,7 +3134,7 @@
         scope_ = saved_scope;
         for_scope->set_end_position(scanner()->location().end_pos);
         for_scope = for_scope->FinalizeBlockScope();
-        ASSERT(for_scope == NULL);
+        DCHECK(for_scope == NULL);
         // Parsed for-in loop.
         return loop;

@@ -3022,6 +3152,15 @@
   // Parsed initializer at this point.
   Expect(Token::SEMICOLON, CHECK_OK);
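For a let-declared for-in variable, the hunks above route the declared names into let_bindings and bind the actual enumeration to a parser-internal temporary (the '.for' name from dot_for_string()), so the visible variable can be freshly bound per iteration. At the source level the rewrite behaves roughly like this (the temporary's name is internal; sketch only):

    // for (let x in obj) body   is treated approximately as:
    var temp;                    // hidden temporary '.for'
    for (temp in obj) {
      let x = temp;              // fresh binding each iteration
      body;
    }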
+  // If there are let bindings, then condition and the next statement of the
+  // for loop must be parsed in a new scope.
+  Scope* inner_scope = NULL;
+  if (let_bindings.length() > 0) {
+    inner_scope = NewScope(for_scope, BLOCK_SCOPE);
+    inner_scope->set_start_position(scanner()->location().beg_pos);
+    scope_ = inner_scope;
+  }
+
   Expression* cond = NULL;
   if (peek() != Token::SEMICOLON) {
     cond = ParseExpression(true, CHECK_OK);
@@ -3036,31 +3175,42 @@
   Expect(Token::RPAREN, CHECK_OK);

   Statement* body = ParseStatement(NULL, CHECK_OK);
-  scope_ = saved_scope;
-  for_scope->set_end_position(scanner()->location().end_pos);
-  for_scope = for_scope->FinalizeBlockScope();
-  if (for_scope != NULL) {
-    // Rewrite a for statement of the form
-    //
-    //   for (let x = i; c; n) b
-    //
-    // into
-    //
-    //   {
-    //     let x = i;
-    //     for (; c; n) b
-    //   }
-    ASSERT(init != NULL);
-    Block* result = factory()->NewBlock(NULL, 2, false, RelocInfo::kNoPosition);
-    result->AddStatement(init, zone());
-    result->AddStatement(loop, zone());
-    result->set_scope(for_scope);
-    loop->Initialize(NULL, cond, next, body);
-    return result;
+
+  Statement* result = NULL;
+  if (let_bindings.length() > 0) {
+    scope_ = for_scope;
+    result = DesugarLetBindingsInForStatement(inner_scope, &let_bindings, loop,
+                                              init, cond, next, body, CHECK_OK);
+    scope_ = saved_scope;
+    for_scope->set_end_position(scanner()->location().end_pos);
   } else {
-    loop->Initialize(init, cond, next, body);
-    return loop;
+    scope_ = saved_scope;
+    for_scope->set_end_position(scanner()->location().end_pos);
+    for_scope = for_scope->FinalizeBlockScope();
+    if (for_scope) {
+      // Rewrite a for statement of the form
+      //   for (const x = i; c; n) b
+      //
+      // into
+      //
+      //   {
+      //     const x = i;
+      //     for (; c; n) b
+      //   }
+      DCHECK(init != NULL);
+      Block* block =
+          factory()->NewBlock(NULL, 2, false, RelocInfo::kNoPosition);
+      block->AddStatement(init, zone());
+      block->AddStatement(loop, zone());
+      block->set_scope(for_scope);
+      loop->Initialize(NULL, cond, next, body);
+      result = block;
+    } else {
+      loop->Initialize(init, cond, next, body);
+      result = loop;
+    }
   }
+  return result;
 }

@@ -3078,17 +3228,8 @@ DebuggerStatement* Parser::ParseDebuggerStatement(bool* ok) {
 }


-void Parser::ReportInvalidCachedData(Handle<String> name, bool* ok) {
-  SmartArrayPointer<char> name_string = name->ToCString(DISALLOW_NULLS);
-  const char* element[1] = { name_string.get() };
-  ReportMessage("invalid_cached_data_function",
-                Vector<const char*>(element, 1));
-  *ok = false;
-}
-
-
 bool CompileTimeValue::IsCompileTimeValue(Expression* expression) {
-  if (expression->AsLiteral() != NULL) return true;
+  if (expression->IsLiteral()) return true;
   MaterializedLiteral* lit = expression->AsMaterializedLiteral();
   return lit != NULL && lit->is_simple();
 }
@@ -3097,11 +3238,11 @@
 Handle<FixedArray> CompileTimeValue::GetValue(Isolate* isolate,
                                               Expression* expression) {
   Factory* factory = isolate->factory();
-  ASSERT(IsCompileTimeValue(expression));
+  DCHECK(IsCompileTimeValue(expression));
   Handle<FixedArray> result = factory->NewFixedArray(2, TENURED);
   ObjectLiteral* object_literal = expression->AsObjectLiteral();
   if (object_literal != NULL) {
-    ASSERT(object_literal->is_simple());
+    DCHECK(object_literal->is_simple());
     if (object_literal->fast_elements()) {
       result->set(kLiteralTypeSlot, Smi::FromInt(OBJECT_LITERAL_FAST_ELEMENTS));
     } else {
@@ -3110,7 +3251,7 @@
     result->set(kElementsSlot,
                 *object_literal->constant_properties());
   } else {
     ArrayLiteral* array_literal = expression->AsArrayLiteral();
-    ASSERT(array_literal != NULL && array_literal->is_simple());
+    DCHECK(array_literal != NULL && array_literal->is_simple());
     result->set(kLiteralTypeSlot, Smi::FromInt(ARRAY_LITERAL));
     result->set(kElementsSlot, *array_literal->constant_elements());
   }
@@ -3130,16 +3271,85 @@ Handle<FixedArray> CompileTimeValue::GetElements(Handle<FixedArray> value) {
 }


+bool CheckAndDeclareArrowParameter(ParserTraits* traits, Expression* expression,
+                                   Scope* scope, int* num_params,
+                                   Scanner::Location* dupe_loc) {
+  // Case for empty parameter lists:
+  //   () => ...
+  if (expression == NULL) return true;
+
+  // Too many parentheses around expression:
+  //   (( ... )) => ...
+  if (expression->parenthesization_level() > 1) return false;
+
+  // Case for a single parameter:
+  //   (foo) => ...
+  //   foo => ...
+  if (expression->IsVariableProxy()) {
+    if (expression->AsVariableProxy()->is_this()) return false;
+
+    const AstRawString* raw_name = expression->AsVariableProxy()->raw_name();
+    if (traits->IsEvalOrArguments(raw_name) ||
+        traits->IsFutureStrictReserved(raw_name))
+      return false;
+
+    if (scope->IsDeclared(raw_name)) {
+      *dupe_loc = Scanner::Location(
+          expression->position(), expression->position() + raw_name->length());
+      return false;
+    }
+
+    scope->DeclareParameter(raw_name, VAR);
+    ++(*num_params);
+    return true;
+  }
+
+  // Case for more than one parameter:
+  //   (foo, bar [, ...]) => ...
+  if (expression->IsBinaryOperation()) {
+    BinaryOperation* binop = expression->AsBinaryOperation();
+    if (binop->op() != Token::COMMA || binop->left()->is_parenthesized() ||
+        binop->right()->is_parenthesized())
+      return false;
+
+    return CheckAndDeclareArrowParameter(traits, binop->left(), scope,
+                                         num_params, dupe_loc) &&
+           CheckAndDeclareArrowParameter(traits, binop->right(), scope,
+                                         num_params, dupe_loc);
+  }
+
+  // Any other kind of expression is not a valid parameter list.
+  return false;
+}
+
+
+int ParserTraits::DeclareArrowParametersFromExpression(
+    Expression* expression, Scope* scope, Scanner::Location* dupe_loc,
+    bool* ok) {
+  int num_params = 0;
+  *ok = CheckAndDeclareArrowParameter(this, expression, scope, &num_params,
+                                      dupe_loc);
+  return num_params;
+}
+
+
 FunctionLiteral* Parser::ParseFunctionLiteral(
-    Handle<String> function_name,
+    const AstRawString* function_name,
     Scanner::Location function_name_location,
     bool name_is_strict_reserved,
     bool is_generator,
     int function_token_pos,
     FunctionLiteral::FunctionType function_type,
+    FunctionLiteral::ArityRestriction arity_restriction,
     bool* ok) {
   // Function ::
   //   '(' FormalParameterList? ')' '{' FunctionBody '}'
+  //
+  // Getter ::
+  //   '(' ')' '{' FunctionBody '}'
+  //
+  // Setter ::
+  //   '(' PropertySetParameterList ')' '{' FunctionBody '}'

   int pos = function_token_pos == RelocInfo::kNoPosition
       ? peek_position() : function_token_pos;
@@ -3147,11 +3357,11 @@ FunctionLiteral* Parser::ParseFunctionLiteral(
   // Anonymous functions were passed either the empty symbol or a null
   // handle as the function name.  Remember if we were passed a non-empty
   // handle to decide whether to invoke function name inference.
-  bool should_infer_name = function_name.is_null();
+  bool should_infer_name = function_name == NULL;
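CheckAndDeclareArrowParameter() above walks the already-parsed expression that preceded '=>' and decides whether it forms a valid arrow parameter list. In source terms (illustrative JavaScript):

    var f1 = () => 0;             // empty list: expression == NULL
    var f2 = x => x + 1;          // single VariableProxy
    var f3 = (a, b) => a + b;     // COMMA BinaryOperation of identifiers

    // Rejected:
    // var g1 = ((x)) => x;       // parenthesization_level() > 1
    // var g2 = (a, a) => a;      // duplicate: scope->IsDeclared() hit
    // var g3 = (eval) => 0;      // eval/arguments and reserved words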
   // We want a non-null handle as the function name.
   if (should_infer_name) {
-    function_name = isolate()->factory()->empty_string();
+    function_name = ast_value_factory_->empty_string();
   }

   int num_parameters = 0;
@@ -3203,7 +3413,9 @@ FunctionLiteral* Parser::ParseFunctionLiteral(
   AstProperties ast_properties;
   BailoutReason dont_optimize_reason = kNoReason;
   // Parse function body.
-  { FunctionState function_state(&function_state_, &scope_, scope, zone());
+  {
+    FunctionState function_state(&function_state_, &scope_, scope, zone(),
+                                 ast_value_factory_);
     scope_->SetScopeName(function_name);

     if (is_generator) {
@@ -3216,7 +3428,7 @@
       // in a temporary variable, a definition that is used by "yield"
       // expressions. This also marks the FunctionState as a generator.
       Variable* temp = scope_->DeclarationScope()->NewTemporary(
-          isolate()->factory()->dot_generator_object_string());
+          ast_value_factory_->dot_generator_object_string());
       function_state.set_generator_object_variable(temp);
     }

@@ -3232,10 +3444,12 @@
     Scanner::Location dupe_error_loc = Scanner::Location::invalid();
     Scanner::Location reserved_loc = Scanner::Location::invalid();

-    bool done = (peek() == Token::RPAREN);
+    bool done = arity_restriction == FunctionLiteral::GETTER_ARITY ||
+        (peek() == Token::RPAREN &&
+         arity_restriction != FunctionLiteral::SETTER_ARITY);
     while (!done) {
       bool is_strict_reserved = false;
-      Handle<String> param_name =
+      const AstRawString* param_name =
           ParseIdentifierOrStrictReservedWord(&is_strict_reserved, CHECK_OK);

       // Store locations for possible future error reports.
@@ -3250,13 +3464,21 @@
         dupe_error_loc = scanner()->location();
       }

-      scope_->DeclareParameter(param_name, VAR);
+      Variable* var = scope_->DeclareParameter(param_name, VAR);
+      if (scope->strict_mode() == SLOPPY) {
+        // TODO(sigurds) Mark every parameter as maybe assigned. This is a
+        // conservative approximation necessary to account for parameters
+        // that are assigned via the arguments array.
+        var->set_maybe_assigned();
+      }
+
       num_parameters++;
       if (num_parameters > Code::kMaxArguments) {
-        ReportMessageAt(scanner()->location(), "too_many_parameters");
+        ReportMessage("too_many_parameters");
         *ok = false;
         return NULL;
       }
+      if (arity_restriction == FunctionLiteral::SETTER_ARITY) break;
       done = (peek() == Token::RPAREN);
       if (!done) Expect(Token::COMMA, CHECK_OK);
     }
@@ -3277,11 +3499,13 @@
       fvar_init_op = Token::INIT_CONST;
     }
     VariableMode fvar_mode =
-        allow_harmony_scoping() && strict_mode() == STRICT ? CONST
-                                                           : CONST_LEGACY;
-    fvar = new(zone()) Variable(scope_,
-       function_name, fvar_mode, true /* is valid LHS */,
-       Variable::NORMAL, kCreatedInitialized, Interface::NewConst());
+        allow_harmony_scoping() && strict_mode() == STRICT
+            ? CONST : CONST_LEGACY;
+    DCHECK(function_name != NULL);
+    fvar = new (zone())
+        Variable(scope_, function_name, fvar_mode, true /* is valid LHS */,
+                 Variable::NORMAL, kCreatedInitialized, kNotAssigned,
+                 Interface::NewConst());
     VariableProxy* proxy = factory()->NewVariableProxy(fvar);
     VariableDeclaration* fvar_declaration = factory()->NewVariableDeclaration(
         proxy, fvar_mode, scope_, RelocInfo::kNoPosition);
@@ -3337,63 +3561,35 @@
       handler_count = function_state.handler_count();
     }

-    // Validate strict mode. We can do this only after parsing the function,
-    // since the function can declare itself strict.
+    // Validate strict mode.
     if (strict_mode() == STRICT) {
-      if (IsEvalOrArguments(function_name)) {
-        ReportMessageAt(function_name_location, "strict_eval_arguments");
-        *ok = false;
-        return NULL;
-      }
-      if (name_is_strict_reserved) {
-        ReportMessageAt(function_name_location, "unexpected_strict_reserved");
-        *ok = false;
-        return NULL;
-      }
-      if (eval_args_error_log.IsValid()) {
-        ReportMessageAt(eval_args_error_log, "strict_eval_arguments");
-        *ok = false;
-        return NULL;
-      }
-      if (dupe_error_loc.IsValid()) {
-        ReportMessageAt(dupe_error_loc, "strict_param_dupe");
-        *ok = false;
-        return NULL;
-      }
-      if (reserved_loc.IsValid()) {
-        ReportMessageAt(reserved_loc, "unexpected_strict_reserved");
-        *ok = false;
-        return NULL;
-      }
+      CheckStrictFunctionNameAndParameters(function_name,
+                                           name_is_strict_reserved,
+                                           function_name_location,
+                                           eval_args_error_log,
+                                           dupe_error_loc,
+                                           reserved_loc,
+                                           CHECK_OK);
       CheckOctalLiteral(scope->start_position(), scope->end_position(),
                         CHECK_OK);
     }
     ast_properties = *factory()->visitor()->ast_properties();
     dont_optimize_reason = factory()->visitor()->dont_optimize_reason();
+
+    if (allow_harmony_scoping() && strict_mode() == STRICT) {
+      CheckConflictingVarDeclarations(scope, CHECK_OK);
+    }
   }

-  if (allow_harmony_scoping() && strict_mode() == STRICT) {
-    CheckConflictingVarDeclarations(scope, CHECK_OK);
-  }
-
-  FunctionLiteral::IsGeneratorFlag generator = is_generator
-      ? FunctionLiteral::kIsGenerator
-      : FunctionLiteral::kNotGenerator;
-  FunctionLiteral* function_literal =
-      factory()->NewFunctionLiteral(function_name,
-                                    scope,
-                                    body,
-                                    materialized_literal_count,
-                                    expected_property_count,
-                                    handler_count,
-                                    num_parameters,
-                                    duplicate_parameters,
-                                    function_type,
-                                    FunctionLiteral::kIsFunction,
-                                    parenthesized,
-                                    generator,
-                                    pos);
+  FunctionLiteral::KindFlag kind = is_generator
+                                       ? FunctionLiteral::kGeneratorFunction
+                                       : FunctionLiteral::kNormalFunction;
+  FunctionLiteral* function_literal = factory()->NewFunctionLiteral(
+      function_name, ast_value_factory_, scope, body,
+      materialized_literal_count, expected_property_count, handler_count,
+      num_parameters, duplicate_parameters, function_type,
+      FunctionLiteral::kIsFunction, parenthesized, kind, pos);
   function_literal->set_function_token_position(function_token_pos);
   function_literal->set_ast_properties(&ast_properties);
   function_literal->set_dont_optimize_reason(dont_optimize_reason);
@@ -3403,42 +3599,32 @@
 }


-void Parser::SkipLazyFunctionBody(Handle<String> function_name,
+void Parser::SkipLazyFunctionBody(const AstRawString* function_name,
                                   int* materialized_literal_count,
                                   int* expected_property_count,
                                   bool* ok) {
   int function_block_pos = position();
-  if (cached_data_mode_ == CONSUME_CACHED_DATA) {
+  if (compile_options() == ScriptCompiler::kConsumeParserCache) {
     // If we have cached data, we use it to skip parsing the function body. The
     // data contains the information we need to construct the lazy function.
     FunctionEntry entry =
-        (*cached_data())->GetFunctionEntry(function_block_pos);
-    if (entry.is_valid()) {
-      if (entry.end_pos() <= function_block_pos) {
-        // End position greater than end of stream is safe, and hard to check.
-        ReportInvalidCachedData(function_name, ok);
-        if (!*ok) {
-          return;
-        }
-      }
-      scanner()->SeekForward(entry.end_pos() - 1);
-
-      scope_->set_end_position(entry.end_pos());
-      Expect(Token::RBRACE, ok);
-      if (!*ok) {
-        return;
-      }
-      isolate()->counters()->total_preparse_skipped()->Increment(
-          scope_->end_position() - function_block_pos);
-      *materialized_literal_count = entry.literal_count();
-      *expected_property_count = entry.property_count();
-      scope_->SetStrictMode(entry.strict_mode());
-    } else {
-      // This case happens when we have preparse data but it doesn't contain an
-      // entry for the function. Fail the compilation.
-      ReportInvalidCachedData(function_name, ok);
+        cached_parse_data_->GetFunctionEntry(function_block_pos);
+    // Check that cached data is valid.
+    CHECK(entry.is_valid());
+    // End position greater than end of stream is safe, and hard to check.
+    CHECK(entry.end_pos() > function_block_pos);
+    scanner()->SeekForward(entry.end_pos() - 1);
+
+    scope_->set_end_position(entry.end_pos());
+    Expect(Token::RBRACE, ok);
+    if (!*ok) {
       return;
     }
+    isolate()->counters()->total_preparse_skipped()->Increment(
+        scope_->end_position() - function_block_pos);
+    *materialized_literal_count = entry.literal_count();
+    *expected_property_count = entry.property_count();
+    scope_->SetStrictMode(entry.strict_mode());
   } else {
     // With no cached data, we partially parse the function, without building an
     // AST. This gathers the data needed to build a lazy function.
@@ -3452,14 +3638,9 @@
       return;
     }
     if (logger.has_error()) {
-      const char* arg = logger.argument_opt();
-      Vector<const char*> args;
-      if (arg != NULL) {
-        args = Vector<const char*>(&arg, 1);
-      }
       ParserTraits::ReportMessageAt(
           Scanner::Location(logger.start(), logger.end()),
-          logger.message(), args, logger.is_reference_error());
+          logger.message(), logger.argument_opt(), logger.is_reference_error());
       *ok = false;
       return;
     }
@@ -3473,8 +3654,8 @@
     *materialized_literal_count = logger.literals();
     *expected_property_count = logger.properties();
     scope_->SetStrictMode(logger.strict_mode());
-    if (cached_data_mode_ == PRODUCE_CACHED_DATA) {
-      ASSERT(log_);
+    if (compile_options() == ScriptCompiler::kProduceParserCache) {
+      DCHECK(log_);
       // Position right after terminal '}'.
      int body_end = scanner()->location().end_pos;
       log_->LogFunction(function_block_pos, body_end,
@@ -3487,7 +3668,7 @@


 ZoneList<Statement*>* Parser::ParseEagerFunctionBody(
-    Handle<String> function_name, int pos, Variable* fvar,
+    const AstRawString* function_name, int pos, Variable* fvar,
     Token::Value fvar_init_op, bool is_generator, bool* ok) {
   // Everything inside an eagerly parsed function will be parsed eagerly
   // (see comment above).
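The hunks below switch the generator scaffolding in ParseEagerFunctionBody() to the renamed runtime function and the new literal factory methods. In source-level terms the parser arranges a generator body roughly like this (the '.generator_object' name is parser-internal; sketch only):

    function* g() {
      // var .generator_object = %CreateJSGeneratorObject();  (implicit)
      /* user statements */
      // yield .generator_object;   (implicit final yield, Yield::FINAL)
    }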
@@ -3510,8 +3691,8 @@ ZoneList<Statement*>* Parser::ParseEagerFunctionBody(
     ZoneList<Expression*>* arguments =
         new(zone()) ZoneList<Expression*>(0, zone());
     CallRuntime* allocation = factory()->NewCallRuntime(
-        isolate()->factory()->empty_string(),
-        Runtime::FunctionForId(Runtime::kHiddenCreateJSGeneratorObject),
+        ast_value_factory_->empty_string(),
+        Runtime::FunctionForId(Runtime::kCreateJSGeneratorObject),
         arguments, pos);
     VariableProxy* init_proxy = factory()->NewVariableProxy(
         function_state_->generator_object_variable());
@@ -3530,10 +3711,10 @@
   if (is_generator) {
     VariableProxy* get_proxy = factory()->NewVariableProxy(
         function_state_->generator_object_variable());
-    Expression *undefined = factory()->NewLiteral(
-        isolate()->factory()->undefined_value(), RelocInfo::kNoPosition);
-    Yield* yield = factory()->NewYield(
-        get_proxy, undefined, Yield::FINAL, RelocInfo::kNoPosition);
+    Expression* undefined =
+        factory()->NewUndefinedLiteral(RelocInfo::kNoPosition);
+    Yield* yield = factory()->NewYield(get_proxy, undefined, Yield::FINAL,
+                                       RelocInfo::kNoPosition);
     body->Add(factory()->NewExpressionStatement(
         yield, RelocInfo::kNoPosition), zone());
   }
@@ -3548,7 +3729,7 @@
 PreParser::PreParseResult Parser::ParseLazyFunctionBodyWithPreParser(
     SingletonLogger* logger) {
   HistogramTimerScope preparse_scope(isolate()->counters()->pre_parse());
-  ASSERT_EQ(Token::LBRACE, scanner()->current_token());
+  DCHECK_EQ(Token::LBRACE, scanner()->current_token());

   if (reusable_preparser_ == NULL) {
     intptr_t stack_limit = isolate()->stack_guard()->real_climit();
@@ -3558,7 +3739,7 @@
     reusable_preparser_->set_allow_natives_syntax(allow_natives_syntax());
     reusable_preparser_->set_allow_lazy(true);
     reusable_preparser_->set_allow_generators(allow_generators());
-    reusable_preparser_->set_allow_for_of(allow_for_of());
+    reusable_preparser_->set_allow_arrow_functions(allow_arrow_functions());
     reusable_preparser_->set_allow_harmony_numeric_literals(
         allow_harmony_numeric_literals());
   }
@@ -3577,7 +3758,7 @@ Expression* Parser::ParseV8Intrinsic(bool* ok) {
   int pos = peek_position();
   Expect(Token::MOD, CHECK_OK);
   // Allow "eval" or "arguments" for backward compatibility.
-  Handle<String> name = ParseIdentifier(kAllowEvalOrArguments, CHECK_OK);
+  const AstRawString* name = ParseIdentifier(kAllowEvalOrArguments, CHECK_OK);
   ZoneList<Expression*>* args = ParseArguments(CHECK_OK);

   if (extension_ != NULL) {
@@ -3586,7 +3767,7 @@
     scope_->DeclarationScope()->ForceEagerCompilation();
   }

-  const Runtime::Function* function = Runtime::FunctionForName(name);
+  const Runtime::Function* function = Runtime::FunctionForName(name->string());

   // Check for built-in IS_VAR macro.
   if (function != NULL &&
@@ -3598,7 +3779,7 @@
     if (args->length() == 1 && args->at(0)->AsVariableProxy() != NULL) {
       return args->at(0);
     } else {
-      ReportMessage("not_isvar", Vector<const char*>::empty());
+      ReportMessage("not_isvar");
       *ok = false;
       return NULL;
     }
@@ -3608,15 +3789,14 @@
   if (function != NULL &&
       function->nargs != -1 &&
       function->nargs != args->length()) {
-    ReportMessage("illegal_access", Vector<const char*>::empty());
+    ReportMessage("illegal_access");
    *ok = false;
     return NULL;
   }

   // Check that the function is defined if it's an inline runtime call.
-  if (function == NULL && name->Get(0) == '_') {
-    ParserTraits::ReportMessage("not_defined",
-                                Vector<Handle<String> >(&name, 1));
+  if (function == NULL && name->FirstCharacter() == '_') {
+    ParserTraits::ReportMessage("not_defined", name);
     *ok = false;
     return NULL;
   }
@@ -3627,8 +3807,7 @@


 Literal* Parser::GetLiteralUndefined(int position) {
-  return factory()->NewLiteral(
-      isolate()->factory()->undefined_value(), position);
+  return factory()->NewUndefinedLiteral(position);
 }


@@ -3637,15 +3816,12 @@ void Parser::CheckConflictingVarDeclarations(Scope* scope, bool* ok) {
   if (decl != NULL) {
     // In harmony mode we treat conflicting variable bindings as early
     // errors. See ES5 16 for a definition of early errors.
-    Handle<String> name = decl->proxy()->name();
-    SmartArrayPointer<char> c_string = name->ToCString(DISALLOW_NULLS);
-    const char* elms[1] = { c_string.get() };
-    Vector<const char*> args(elms, 1);
+    const AstRawString* name = decl->proxy()->raw_name();
     int position = decl->proxy()->position();
     Scanner::Location location = position == RelocInfo::kNoPosition ?
         Scanner::Location::invalid() :
         Scanner::Location(position, position + 1);
-    ParserTraits::ReportMessageAt(location, "var_redeclaration", args);
+    ParserTraits::ReportMessageAt(location, "var_redeclaration", name);
     *ok = false;
   }
 }
@@ -3655,7 +3831,7 @@
 // Parser support


-bool Parser::TargetStackContainsLabel(Handle<String> label) {
+bool Parser::TargetStackContainsLabel(const AstRawString* label) {
   for (Target* t = target_stack_; t != NULL; t = t->previous()) {
     BreakableStatement* stat = t->node()->AsBreakableStatement();
     if (stat != NULL && ContainsLabel(stat->labels(), label))
@@ -3665,8 +3841,9 @@
 }


-BreakableStatement* Parser::LookupBreakTarget(Handle<String> label, bool* ok) {
-  bool anonymous = label.is_null();
+BreakableStatement* Parser::LookupBreakTarget(const AstRawString* label,
+                                              bool* ok) {
+  bool anonymous = label == NULL;
   for (Target* t = target_stack_; t != NULL; t = t->previous()) {
     BreakableStatement* stat = t->node()->AsBreakableStatement();
     if (stat == NULL) continue;
@@ -3680,14 +3857,14 @@
 }


-IterationStatement* Parser::LookupContinueTarget(Handle<String> label,
+IterationStatement* Parser::LookupContinueTarget(const AstRawString* label,
                                                  bool* ok) {
-  bool anonymous = label.is_null();
+  bool anonymous = label == NULL;
   for (Target* t = target_stack_; t != NULL; t = t->previous()) {
     IterationStatement* stat = t->node()->AsIterationStatement();
     if (stat == NULL) continue;

-    ASSERT(stat->is_target_for_anonymous());
+    DCHECK(stat->is_target_for_anonymous());
     if (anonymous || ContainsLabel(stat->labels(), label)) {
       RegisterTargetUse(stat->continue_target(), t->previous());
       return stat;
@@ -3708,6 +3885,59 @@ void Parser::RegisterTargetUse(Label* target, Target* stop) {
 }


+void Parser::HandleSourceURLComments() {
+  if (scanner_.source_url()->length() > 0) {
+    Handle<String> source_url = scanner_.source_url()->Internalize(isolate());
+    info_->script()->set_source_url(*source_url);
+  }
+  if (scanner_.source_mapping_url()->length() > 0) {
+    Handle<String> source_mapping_url =
+        scanner_.source_mapping_url()->Internalize(isolate());
+    info_->script()->set_source_mapping_url(*source_mapping_url);
+  }
+}
+
+
+void Parser::ThrowPendingError() {
+  DCHECK(ast_value_factory_->IsInternalized());
+  if (has_pending_error_) {
+    MessageLocation location(script_,
+                             pending_error_location_.beg_pos,
+                             pending_error_location_.end_pos);
+    Factory* factory = isolate()->factory();
+    bool has_arg =
+        pending_error_arg_ != NULL || pending_error_char_arg_ != NULL;
+    Handle<FixedArray> elements = factory->NewFixedArray(has_arg ? 1 : 0);
+    if (pending_error_arg_ != NULL) {
+      Handle<String> arg_string = pending_error_arg_->string();
+      elements->set(0, *arg_string);
+    } else if (pending_error_char_arg_ != NULL) {
+      Handle<String> arg_string =
+          factory->NewStringFromUtf8(CStrVector(pending_error_char_arg_))
+              .ToHandleChecked();
+      elements->set(0, *arg_string);
+    }
+    isolate()->debug()->OnCompileError(script_);
+
+    Handle<JSArray> array = factory->NewJSArrayWithElements(elements);
+    Handle<Object> result = pending_error_is_reference_error_
+        ? factory->NewReferenceError(pending_error_message_, array)
+        : factory->NewSyntaxError(pending_error_message_, array);
+    isolate()->Throw(*result, &location);
+  }
+}
+
+
+void Parser::InternalizeUseCounts() {
+  for (int feature = 0; feature < v8::Isolate::kUseCounterFeatureCount;
+       ++feature) {
+    for (int i = 0; i < use_counts_[feature]; ++i) {
+      isolate()->CountUsage(v8::Isolate::UseCounterFeature(feature));
+    }
+  }
+}
+
+
 // ----------------------------------------------------------------------------
 // Regular expressions

@@ -3793,7 +4023,7 @@ RegExpTree* RegExpParser::ReportError(Vector<const char> message) {
 // Disjunction
 RegExpTree* RegExpParser::ParsePattern() {
   RegExpTree* result = ParseDisjunction(CHECK_FAILED);
-  ASSERT(!has_more());
+  DCHECK(!has_more());
   // If the result of parsing is a literal string atom, and it has the
   // same length as the input, then the atom is identical to the input.
   if (result->IsAtom() && result->AsAtom()->length() == in()->length()) {
@@ -3826,14 +4056,14 @@ RegExpTree* RegExpParser::ParseDisjunction() {
           // Inside a parenthesized group when hitting end of input.
           ReportError(CStrVector("Unterminated group") CHECK_FAILED);
         }
-        ASSERT_EQ(INITIAL, stored_state->group_type());
+        DCHECK_EQ(INITIAL, stored_state->group_type());
         // Parsing completed successfully.
         return builder->ToRegExp();
       case ')': {
         if (!stored_state->IsSubexpression()) {
           ReportError(CStrVector("Unmatched ')'") CHECK_FAILED);
         }
-        ASSERT_NE(INITIAL, stored_state->group_type());
+        DCHECK_NE(INITIAL, stored_state->group_type());

         Advance();
         // End disjunction parsing and convert builder content to new single
@@ -3855,7 +4085,7 @@
           captures_->at(capture_index - 1) = capture;
           body = capture;
         } else if (group_type != GROUPING) {
-          ASSERT(group_type == POSITIVE_LOOKAHEAD ||
+          DCHECK(group_type == POSITIVE_LOOKAHEAD ||
                  group_type == NEGATIVE_LOOKAHEAD);
           bool is_positive = (group_type == POSITIVE_LOOKAHEAD);
           body = new(zone()) RegExpLookahead(body,
@@ -4139,7 +4369,7 @@


 #ifdef DEBUG
-// Currently only used in an ASSERT.
+// Currently only used in a DCHECK.
 static bool IsSpecialClassEscape(uc32 c) {
   switch (c) {
     case 'd': case 'D':
@@ -4193,8 +4423,8 @@ void RegExpParser::ScanForCaptures() {


 bool RegExpParser::ParseBackReferenceIndex(int* index_out) {
-  ASSERT_EQ('\\', current());
-  ASSERT('1' <= Next() && Next() <= '9');
+  DCHECK_EQ('\\', current());
+  DCHECK('1' <= Next() && Next() <= '9');
   // Try to parse a decimal literal that is no greater than the total number
   // of left capturing parentheses in the input.
   int start = position();
@@ -4237,7 +4467,7 @@
 // Returns true if parsing succeeds, and set the min_out and max_out
 // values. Values are truncated to RegExpTree::kInfinity if they overflow.
 bool RegExpParser::ParseIntervalQuantifier(int* min_out, int* max_out) {
-  ASSERT_EQ(current(), '{');
+  DCHECK_EQ(current(), '{');
   int start = position();
   Advance();
   int min = 0;
@@ -4297,7 +4527,7 @@


 uc32 RegExpParser::ParseOctalLiteral() {
-  ASSERT(('0' <= current() && current() <= '7') || current() == kEndMarker);
+  DCHECK(('0' <= current() && current() <= '7') || current() == kEndMarker);
   // For compatibility with some other browsers (not all), we parse
   // up to three octal digits with a value below 256.
uc32 value = current() - '0'; @@ -4337,8 +4567,8 @@ bool RegExpParser::ParseHexEscape(int length, uc32 *value) { uc32 RegExpParser::ParseClassCharacterEscape() { - ASSERT(current() == '\\'); - ASSERT(has_next() && !IsSpecialClassEscape(Next())); + DCHECK(current() == '\\'); + DCHECK(has_next() && !IsSpecialClassEscape(Next())); Advance(); switch (current()) { case 'b': @@ -4418,7 +4648,7 @@ uc32 RegExpParser::ParseClassCharacterEscape() { CharacterRange RegExpParser::ParseClassAtom(uc16* char_class) { - ASSERT_EQ(0, *char_class); + DCHECK_EQ(0, *char_class); uc32 first = current(); if (first == '\\') { switch (Next()) { @@ -4461,7 +4691,7 @@ RegExpTree* RegExpParser::ParseCharacterClass() { static const char* kUnterminated = "Unterminated character class"; static const char* kRangeOutOfOrder = "Range out of order in character class"; - ASSERT_EQ(current(), '['); + DCHECK_EQ(current(), '['); Advance(); bool is_negated = false; if (current() == '^') { @@ -4516,83 +4746,19 @@ RegExpTree* RegExpParser::ParseCharacterClass() { // ---------------------------------------------------------------------------- // The Parser interface. -ScriptData::~ScriptData() { - if (owns_store_) store_.Dispose(); -} - - -int ScriptData::Length() { - return store_.length() * sizeof(unsigned); -} - - -const char* ScriptData::Data() { - return reinterpret_cast<const char*>(store_.start()); -} - - -bool ScriptData::HasError() { - return has_error(); -} - - -void ScriptData::Initialize() { - // Prepares state for use. - if (store_.length() >= PreparseDataConstants::kHeaderSize) { - function_index_ = PreparseDataConstants::kHeaderSize; - int symbol_data_offset = PreparseDataConstants::kHeaderSize - + store_[PreparseDataConstants::kFunctionsSizeOffset]; - if (store_.length() > symbol_data_offset) { - symbol_data_ = reinterpret_cast<byte*>(&store_[symbol_data_offset]); - } else { - // Partial preparse causes no symbol information. - symbol_data_ = reinterpret_cast<byte*>(&store_[0] + store_.length()); - } - symbol_data_end_ = reinterpret_cast<byte*>(&store_[0] + store_.length()); - } -} - - -int ScriptData::ReadNumber(byte** source) { - // Reads a number from symbol_data_ in base 128. The most significant - // bit marks that there are more digits. - // If the first byte is 0x80 (kNumberTerminator), it would normally - // represent a leading zero. Since that is useless, and therefore won't - // appear as the first digit of any actual value, it is used to - // mark the end of the input stream. - byte* data = *source; - if (data >= symbol_data_end_) return -1; - byte input = *data; - if (input == PreparseDataConstants::kNumberTerminator) { - // End of stream marker. 
- return -1; - } - int result = input & 0x7f; - data++; - while ((input & 0x80u) != 0) { - if (data >= symbol_data_end_) return -1; - input = *data; - result = (result << 7) | (input & 0x7f); - data++; - } - *source = data; - return result; -} - - bool RegExpParser::ParseRegExp(FlatStringReader* input, bool multiline, RegExpCompileData* result, Zone* zone) { - ASSERT(result != NULL); + DCHECK(result != NULL); RegExpParser parser(input, &result->error, multiline, zone); RegExpTree* tree = parser.ParsePattern(); if (parser.failed()) { - ASSERT(tree == NULL); - ASSERT(!result->error.is_null()); + DCHECK(tree == NULL); + DCHECK(!result->error.is_null()); } else { - ASSERT(tree != NULL); - ASSERT(result->error.is_null()); + DCHECK(tree != NULL); + DCHECK(result->error.is_null()); result->tree = tree; int capture_count = parser.captures_started(); result->simple = tree->IsAtom() && parser.simple() && capture_count == 0; @@ -4604,36 +4770,41 @@ bool RegExpParser::ParseRegExp(FlatStringReader* input, bool Parser::Parse() { - ASSERT(info()->function() == NULL); + DCHECK(info()->function() == NULL); FunctionLiteral* result = NULL; + ast_value_factory_ = info()->ast_value_factory(); + if (ast_value_factory_ == NULL) { + ast_value_factory_ = + new AstValueFactory(zone(), isolate()->heap()->HashSeed()); + } + if (allow_natives_syntax() || extension_ != NULL) { + // If intrinsics are allowed, the Parser cannot operate independently of + // the V8 heap because of Runtime. Tell the string table to internalize + // strings and values right after they're created. + ast_value_factory_->Internalize(isolate()); + } + if (info()->is_lazy()) { - ASSERT(!info()->is_eval()); + DCHECK(!info()->is_eval()); if (info()->shared_info()->is_function()) { result = ParseLazy(); } else { result = ParseProgram(); } } else { - SetCachedData(info()->cached_data(), info()->cached_data_mode()); - if (info()->cached_data_mode() == CONSUME_CACHED_DATA && - (*info()->cached_data())->has_error()) { - ScriptData* cached_data = *(info()->cached_data()); - Scanner::Location loc = cached_data->MessageLocation(); - const char* message = cached_data->BuildMessage(); - Vector<const char*> args = cached_data->BuildArgs(); - ParserTraits::ReportMessageAt(loc, message, args, - cached_data->IsReferenceError()); - DeleteArray(message); - for (int i = 0; i < args.length(); i++) { - DeleteArray(args[i]); - } - DeleteArray(args.start()); - ASSERT(info()->isolate()->has_pending_exception()); - } else { - result = ParseProgram(); - } + SetCachedData(); + result = ParseProgram(); } info()->SetFunction(result); + DCHECK(ast_value_factory_->IsInternalized()); + // info takes ownership of ast_value_factory_.
+ if (info()->ast_value_factory() == NULL) { + info()->SetAstValueFactory(ast_value_factory_); + } + ast_value_factory_ = NULL; + + InternalizeUseCounts(); + return (result != NULL); } diff --git a/deps/v8/src/parser.h b/deps/v8/src/parser.h index 71bbfd195d0..e3cee840973 100644 --- a/deps/v8/src/parser.h +++ b/deps/v8/src/parser.h @@ -5,13 +5,13 @@ #ifndef V8_PARSER_H_ #define V8_PARSER_H_ -#include "allocation.h" -#include "ast.h" -#include "compiler.h" // For CachedDataMode -#include "preparse-data-format.h" -#include "preparse-data.h" -#include "scopes.h" -#include "preparser.h" +#include "src/allocation.h" +#include "src/ast.h" +#include "src/compiler.h" // For CachedDataMode +#include "src/preparse-data.h" +#include "src/preparse-data-format.h" +#include "src/preparser.h" +#include "src/scopes.h" namespace v8 { class ScriptCompiler; @@ -47,7 +47,7 @@ class FunctionEntry BASE_EMBEDDED { int literal_count() { return backing_[kLiteralCountIndex]; } int property_count() { return backing_[kPropertyCountIndex]; } StrictMode strict_mode() { - ASSERT(backing_[kStrictModeIndex] == SLOPPY || + DCHECK(backing_[kStrictModeIndex] == SLOPPY || backing_[kStrictModeIndex] == STRICT); return static_cast<StrictMode>(backing_[kStrictModeIndex]); } @@ -59,73 +59,39 @@ class FunctionEntry BASE_EMBEDDED { }; -class ScriptData { +// Wrapper around ScriptData to provide parser-specific functionality. +class ParseData { public: - explicit ScriptData(Vector<unsigned> store) - : store_(store), - owns_store_(true) { } - - ScriptData(Vector<unsigned> store, bool owns_store) - : store_(store), - owns_store_(owns_store) { } - - // The created ScriptData won't take ownership of the data. If the alignment - // is not correct, this will copy the data (and the created ScriptData will - // take ownership of the copy). - static ScriptData* New(const char* data, int length); - - virtual ~ScriptData(); - virtual int Length(); - virtual const char* Data(); - virtual bool HasError(); - + explicit ParseData(ScriptData* script_data) : script_data_(script_data) { + CHECK(IsAligned(script_data->length(), sizeof(unsigned))); + CHECK(IsSane()); + } void Initialize(); - void ReadNextSymbolPosition(); - FunctionEntry GetFunctionEntry(int start); - int GetSymbolIdentifier(); - bool SanityCheck(); - - Scanner::Location MessageLocation() const; - bool IsReferenceError() const; - const char* BuildMessage() const; - Vector<const char*> BuildArgs() const; - - int function_count() { - int functions_size = - static_cast<int>(store_[PreparseDataConstants::kFunctionsSizeOffset]); - if (functions_size < 0) return 0; - if (functions_size % FunctionEntry::kSize != 0) return 0; - return functions_size / FunctionEntry::kSize; + int FunctionCount(); + + bool HasError(); + + unsigned* Data() { // Writable data as unsigned int array. + return reinterpret_cast<unsigned*>(const_cast<byte*>(script_data_->data())); } - // The following functions should only be called if SanityCheck has - // returned true. - bool has_error() { return store_[PreparseDataConstants::kHasErrorOffset]; } - unsigned magic() { return store_[PreparseDataConstants::kMagicOffset]; } - unsigned version() { return store_[PreparseDataConstants::kVersionOffset]; } private: - // Disable copying and assigning; because of owns_store they won't be correct. 
- ScriptData(const ScriptData&); - ScriptData& operator=(const ScriptData&); - - friend class v8::ScriptCompiler; - Vector<unsigned> store_; - unsigned char* symbol_data_; - unsigned char* symbol_data_end_; - int function_index_; - bool owns_store_; + bool IsSane(); + unsigned Magic(); + unsigned Version(); + int FunctionsSize(); + int Length() const { + // Script data length is already checked to be a multiple of unsigned size. + return script_data_->length() / sizeof(unsigned); + } - unsigned Read(int position) const; - unsigned* ReadAddress(int position) const; - // Reads a number from the current symbols - int ReadNumber(byte** source); + ScriptData* script_data_; + int function_index_; - // Read strings written by ParserRecorder::WriteString. - static const char* ReadString(unsigned* start, int* chars); + DISALLOW_COPY_AND_ASSIGN(ParseData); }; - // ---------------------------------------------------------------------------- // REGEXP PARSING @@ -154,12 +120,12 @@ class BufferedZoneList { } T* last() { - ASSERT(last_ != NULL); + DCHECK(last_ != NULL); return last_; } T* RemoveLast() { - ASSERT(last_ != NULL); + DCHECK(last_ != NULL); T* result = last_; if ((list_ != NULL) && (list_->length() > 0)) last_ = list_->RemoveLast(); @@ -169,13 +135,13 @@ class BufferedZoneList { } T* Get(int i) { - ASSERT((0 <= i) && (i < length())); + DCHECK((0 <= i) && (i < length())); if (list_ == NULL) { - ASSERT_EQ(0, i); + DCHECK_EQ(0, i); return last_; } else { if (i == list_->length()) { - ASSERT(last_ != NULL); + DCHECK(last_ != NULL); return last_; } else { return list_->at(i); @@ -385,11 +351,30 @@ class ParserTraits { // Used by FunctionState and BlockState. typedef v8::internal::Scope Scope; + typedef v8::internal::Scope* ScopePtr; typedef Variable GeneratorVariable; typedef v8::internal::Zone Zone; + class Checkpoint BASE_EMBEDDED { + public: + template <typename Parser> + explicit Checkpoint(Parser* parser) { + isolate_ = parser->zone()->isolate(); + saved_ast_node_id_ = isolate_->ast_node_id(); + } + + void Restore() { isolate_->set_ast_node_id(saved_ast_node_id_); } + + private: + Isolate* isolate_; + int saved_ast_node_id_; + }; + + typedef v8::internal::AstProperties AstProperties; + typedef Vector<VariableProxy*> ParameterIdentifierVector; + // Return types for traversing functions. - typedef Handle<String> Identifier; + typedef const AstRawString* Identifier; typedef v8::internal::Expression* Expression; typedef Yield* YieldExpression; typedef v8::internal::FunctionLiteral* FunctionLiteral; @@ -421,32 +406,37 @@ class ParserTraits { } // Helper functions for recursive descent. - bool IsEvalOrArguments(Handle<String> identifier) const; + bool IsEvalOrArguments(const AstRawString* identifier) const; + V8_INLINE bool IsFutureStrictReserved(const AstRawString* identifier) const; // Returns true if the expression is of type "this.foo". 
static bool IsThisProperty(Expression* expression); static bool IsIdentifier(Expression* expression); - static Handle<String> AsIdentifier(Expression* expression) { - ASSERT(IsIdentifier(expression)); - return expression->AsVariableProxy()->name(); + static const AstRawString* AsIdentifier(Expression* expression) { + DCHECK(IsIdentifier(expression)); + return expression->AsVariableProxy()->raw_name(); } static bool IsBoilerplateProperty(ObjectLiteral::Property* property) { return ObjectLiteral::IsBoilerplateProperty(property); } - static bool IsArrayIndex(Handle<String> string, uint32_t* index) { - return !string.is_null() && string->AsArrayIndex(index); + static bool IsArrayIndex(const AstRawString* string, uint32_t* index) { + return string->AsArrayIndex(index); } // Functions for encapsulating the differences between parsing and preparsing; // operations interleaved with the recursive descent. - static void PushLiteralName(FuncNameInferrer* fni, Handle<String> id) { + static void PushLiteralName(FuncNameInferrer* fni, const AstRawString* id) { fni->PushLiteralName(id); } void PushPropertyName(FuncNameInferrer* fni, Expression* expression); + static void InferFunctionName(FuncNameInferrer* fni, + FunctionLiteral* func_to_infer) { + fni->AddFunction(func_to_infer); + } static void CheckFunctionLiteralInsideTopLevelObjectLiteral( Scope* scope, Expression* value, bool* has_function) { @@ -468,9 +458,8 @@ class ParserTraits { void CheckPossibleEvalCall(Expression* expression, Scope* scope); // Determine if the expression is a variable proxy and mark it as being used - // in an assignment or with an increment/decrement operator. This is - // currently used for statically checking assignments to harmony const - // bindings. - static Expression* MarkExpressionAsLValue(Expression* expression); + // in an assignment or with an increment/decrement operator. + static Expression* MarkExpressionAsAssigned(Expression* expression); // Returns true if we have a binary expression between two numeric // literals. In that case, *x will be changed to an expression which is the @@ -501,64 +490,76 @@ class ParserTraits { // type. The first argument may be null (in the handle sense) in // which case no arguments are passed to the constructor. Expression* NewThrowSyntaxError( - const char* type, Handle<Object> arg, int pos); + const char* type, const AstRawString* arg, int pos); // Generate AST node that throws a TypeError with the given // type. Both arguments must be non-null (in the handle sense). - Expression* NewThrowTypeError(const char* type, Handle<Object> arg, int pos); + Expression* NewThrowTypeError(const char* type, const AstRawString* arg, + int pos); // Generic AST generator for throwing errors from compiled code. Expression* NewThrowError( - Handle<String> constructor, const char* type, - Vector<Handle<Object> > arguments, int pos); + const AstRawString* constructor, const char* type, + const AstRawString* arg, int pos); // Reporting errors.
void ReportMessageAt(Scanner::Location source_location, const char* message, - Vector<const char*> args, + const char* arg = NULL, bool is_reference_error = false); void ReportMessage(const char* message, - Vector<Handle<String> > args, + const char* arg = NULL, + bool is_reference_error = false); + void ReportMessage(const char* message, + const AstRawString* arg, bool is_reference_error = false); void ReportMessageAt(Scanner::Location source_location, const char* message, - Vector<Handle<String> > args, + const AstRawString* arg, bool is_reference_error = false); // "null" return type creators. - static Handle<String> EmptyIdentifier() { - return Handle<String>(); + static const AstRawString* EmptyIdentifier() { + return NULL; } static Expression* EmptyExpression() { return NULL; } + static Expression* EmptyArrowParamList() { return NULL; } static Literal* EmptyLiteral() { return NULL; } + // Used in error return values. static ZoneList<Expression*>* NullExpressionList() { return NULL; } + // Non-NULL empty string. + V8_INLINE const AstRawString* EmptyIdentifierString(); + // Odd-ball literal creators. Literal* GetLiteralTheHole(int position, AstNodeFactory<AstConstructionVisitor>* factory); // Producing data during the recursive descent. - Handle<String> GetSymbol(Scanner* scanner = NULL); - Handle<String> NextLiteralString(Scanner* scanner, - PretenureFlag tenured); + const AstRawString* GetSymbol(Scanner* scanner); + const AstRawString* GetNextSymbol(Scanner* scanner); + Expression* ThisExpression(Scope* scope, - AstNodeFactory<AstConstructionVisitor>* factory); + AstNodeFactory<AstConstructionVisitor>* factory, + int pos = RelocInfo::kNoPosition); Literal* ExpressionFromLiteral( Token::Value token, int pos, Scanner* scanner, AstNodeFactory<AstConstructionVisitor>* factory); Expression* ExpressionFromIdentifier( - Handle<String> name, int pos, Scope* scope, + const AstRawString* name, int pos, Scope* scope, AstNodeFactory<AstConstructionVisitor>* factory); Expression* ExpressionFromString( int pos, Scanner* scanner, AstNodeFactory<AstConstructionVisitor>* factory); + Expression* GetIterator(Expression* iterable, + AstNodeFactory<AstConstructionVisitor>* factory); ZoneList<v8::internal::Expression*>* NewExpressionList(int size, Zone* zone) { return new(zone) ZoneList<v8::internal::Expression*>(size, zone); } @@ -568,17 +569,33 @@ class ParserTraits { ZoneList<v8::internal::Statement*>* NewStatementList(int size, Zone* zone) { return new(zone) ZoneList<v8::internal::Statement*>(size, zone); } + V8_INLINE Scope* NewScope(Scope* parent_scope, ScopeType scope_type); + + // Utility functions + int DeclareArrowParametersFromExpression(Expression* expression, Scope* scope, + Scanner::Location* dupe_loc, + bool* ok); + V8_INLINE AstValueFactory* ast_value_factory(); // Temporary glue; these functions will move to ParserBase. 
Expression* ParseV8Intrinsic(bool* ok); FunctionLiteral* ParseFunctionLiteral( - Handle<String> name, + const AstRawString* name, Scanner::Location function_name_location, bool name_is_strict_reserved, bool is_generator, int function_token_position, FunctionLiteral::FunctionType type, + FunctionLiteral::ArityRestriction arity_restriction, bool* ok); + V8_INLINE void SkipLazyFunctionBody(const AstRawString* name, + int* materialized_literal_count, + int* expected_property_count, bool* ok); + V8_INLINE ZoneList<Statement*>* ParseEagerFunctionBody( + const AstRawString* name, int pos, Variable* fvar, + Token::Value fvar_init_op, bool is_generator, bool* ok); + V8_INLINE void CheckConflictingVarDeclarations(v8::internal::Scope* scope, + bool* ok); private: Parser* parser_; @@ -591,6 +608,8 @@ class Parser : public ParserBase<ParserTraits> { ~Parser() { delete reusable_preparser_; reusable_preparser_ = NULL; + delete cached_parse_data_; + cached_parse_data_ = NULL; } // Parses the source code represented by the compilation info and sets its @@ -642,23 +661,12 @@ class Parser : public ParserBase<ParserTraits> { FunctionLiteral* DoParseProgram(CompilationInfo* info, Handle<String> source); - // Report syntax error - void ReportInvalidCachedData(Handle<String> name, bool* ok); - - void SetCachedData(ScriptData** data, - CachedDataMode cached_data_mode) { - cached_data_mode_ = cached_data_mode; - if (cached_data_mode == NO_CACHED_DATA) { - cached_data_ = NULL; - } else { - ASSERT(data != NULL); - cached_data_ = data; - } - } + void SetCachedData(); bool inside_with() const { return scope_->inside_with(); } - ScriptData** cached_data() const { return cached_data_; } - CachedDataMode cached_data_mode() const { return cached_data_mode_; } + ScriptCompiler::CompileOptions compile_options() const { + return info_->compile_options(); + } Scope* DeclarationScope(VariableMode mode) { return IsLexicalVariableMode(mode) ? scope_ : scope_->DeclarationScope(); @@ -670,8 +678,10 @@ class Parser : public ParserBase<ParserTraits> { // for failure at the call sites. 
void* ParseSourceElements(ZoneList<Statement*>* processor, int end_token, bool is_eval, bool is_global, bool* ok); - Statement* ParseModuleElement(ZoneStringList* labels, bool* ok); - Statement* ParseModuleDeclaration(ZoneStringList* names, bool* ok); + Statement* ParseModuleElement(ZoneList<const AstRawString*>* labels, + bool* ok); + Statement* ParseModuleDeclaration(ZoneList<const AstRawString*>* names, + bool* ok); Module* ParseModule(bool* ok); Module* ParseModuleLiteral(bool* ok); Module* ParseModulePath(bool* ok); @@ -680,52 +690,64 @@ Module* ParseModuleSpecifier(bool* ok); Block* ParseImportDeclaration(bool* ok); Statement* ParseExportDeclaration(bool* ok); - Statement* ParseBlockElement(ZoneStringList* labels, bool* ok); - Statement* ParseStatement(ZoneStringList* labels, bool* ok); - Statement* ParseFunctionDeclaration(ZoneStringList* names, bool* ok); + Statement* ParseBlockElement(ZoneList<const AstRawString*>* labels, bool* ok); + Statement* ParseStatement(ZoneList<const AstRawString*>* labels, bool* ok); + Statement* ParseFunctionDeclaration(ZoneList<const AstRawString*>* names, + bool* ok); Statement* ParseNativeDeclaration(bool* ok); - Block* ParseBlock(ZoneStringList* labels, bool* ok); + Block* ParseBlock(ZoneList<const AstRawString*>* labels, bool* ok); Block* ParseVariableStatement(VariableDeclarationContext var_context, - ZoneStringList* names, + ZoneList<const AstRawString*>* names, bool* ok); Block* ParseVariableDeclarations(VariableDeclarationContext var_context, VariableDeclarationProperties* decl_props, - ZoneStringList* names, - Handle<String>* out, + ZoneList<const AstRawString*>* names, + const AstRawString** out, bool* ok); - Statement* ParseExpressionOrLabelledStatement(ZoneStringList* labels, - bool* ok); - IfStatement* ParseIfStatement(ZoneStringList* labels, bool* ok); + Statement* ParseExpressionOrLabelledStatement( + ZoneList<const AstRawString*>* labels, bool* ok); + IfStatement* ParseIfStatement(ZoneList<const AstRawString*>* labels, + bool* ok); Statement* ParseContinueStatement(bool* ok); - Statement* ParseBreakStatement(ZoneStringList* labels, bool* ok); + Statement* ParseBreakStatement(ZoneList<const AstRawString*>* labels, + bool* ok); Statement* ParseReturnStatement(bool* ok); - Statement* ParseWithStatement(ZoneStringList* labels, bool* ok); + Statement* ParseWithStatement(ZoneList<const AstRawString*>* labels, + bool* ok); CaseClause* ParseCaseClause(bool* default_seen_ptr, bool* ok); - SwitchStatement* ParseSwitchStatement(ZoneStringList* labels, bool* ok); - DoWhileStatement* ParseDoWhileStatement(ZoneStringList* labels, bool* ok); - WhileStatement* ParseWhileStatement(ZoneStringList* labels, bool* ok); - Statement* ParseForStatement(ZoneStringList* labels, bool* ok); + SwitchStatement* ParseSwitchStatement(ZoneList<const AstRawString*>* labels, + bool* ok); + DoWhileStatement* ParseDoWhileStatement(ZoneList<const AstRawString*>* labels, + bool* ok); + WhileStatement* ParseWhileStatement(ZoneList<const AstRawString*>* labels, + bool* ok); + Statement* ParseForStatement(ZoneList<const AstRawString*>* labels, bool* ok); Statement* ParseThrowStatement(bool* ok); Expression* MakeCatchContext(Handle<String> id, VariableProxy* value); TryStatement* ParseTryStatement(bool* ok); DebuggerStatement* ParseDebuggerStatement(bool* ok); // Support for harmony block scoped bindings.
- Block* ParseScopedBlock(ZoneStringList* labels, bool* ok); + Block* ParseScopedBlock(ZoneList<const AstRawString*>* labels, bool* ok); // Initialize the components of a for-in / for-of statement. void InitializeForEachStatement(ForEachStatement* stmt, Expression* each, Expression* subject, Statement* body); + Statement* DesugarLetBindingsInForStatement( + Scope* inner_scope, ZoneList<const AstRawString*>* names, + ForStatement* loop, Statement* init, Expression* cond, Statement* next, + Statement* body, bool* ok); FunctionLiteral* ParseFunctionLiteral( - Handle<String> name, + const AstRawString* name, Scanner::Location function_name_location, bool name_is_strict_reserved, bool is_generator, int function_token_position, FunctionLiteral::FunctionType type, + FunctionLiteral::ArityRestriction arity_restriction, bool* ok); // Magical syntax support. @@ -748,14 +770,14 @@ class Parser : public ParserBase<ParserTraits> { void CheckConflictingVarDeclarations(Scope* scope, bool* ok); // Parser support - VariableProxy* NewUnresolved(Handle<String> name, + VariableProxy* NewUnresolved(const AstRawString* name, VariableMode mode, Interface* interface); void Declare(Declaration* declaration, bool resolve, bool* ok); - bool TargetStackContainsLabel(Handle<String> label); - BreakableStatement* LookupBreakTarget(Handle<String> label, bool* ok); - IterationStatement* LookupContinueTarget(Handle<String> label, bool* ok); + bool TargetStackContainsLabel(const AstRawString* label); + BreakableStatement* LookupBreakTarget(const AstRawString* label, bool* ok); + IterationStatement* LookupContinueTarget(const AstRawString* label, bool* ok); void RegisterTargetUse(Label* target, Target* stop); @@ -765,7 +787,7 @@ class Parser : public ParserBase<ParserTraits> { // Skip over a lazy function, either using cached data if we have it, or // by parsing the function with PreParser. Consumes the ending }. - void SkipLazyFunctionBody(Handle<String> function_name, + void SkipLazyFunctionBody(const AstRawString* function_name, int* materialized_literal_count, int* expected_property_count, bool* ok); @@ -774,12 +796,15 @@ class Parser : public ParserBase<ParserTraits> { SingletonLogger* logger); // Consumes the ending }. - ZoneList<Statement*>* ParseEagerFunctionBody(Handle<String> function_name, - int pos, - Variable* fvar, - Token::Value fvar_init_op, - bool is_generator, - bool* ok); + ZoneList<Statement*>* ParseEagerFunctionBody( + const AstRawString* function_name, int pos, Variable* fvar, + Token::Value fvar_init_op, bool is_generator, bool* ok); + + void HandleSourceURLComments(); + + void ThrowPendingError(); + + void InternalizeUseCounts(); Isolate* isolate_; @@ -788,13 +813,67 @@ class Parser : public ParserBase<ParserTraits> { PreParser* reusable_preparser_; Scope* original_scope_; // for ES5 function declarations in sloppy eval Target* target_stack_; // for break, continue statements - ScriptData** cached_data_; - CachedDataMode cached_data_mode_; + ParseData* cached_parse_data_; + AstValueFactory* ast_value_factory_; CompilationInfo* info_; + + // Pending errors. 
+ bool has_pending_error_; + Scanner::Location pending_error_location_; + const char* pending_error_message_; + const AstRawString* pending_error_arg_; + const char* pending_error_char_arg_; + bool pending_error_is_reference_error_; + + int use_counts_[v8::Isolate::kUseCounterFeatureCount]; }; +bool ParserTraits::IsFutureStrictReserved( + const AstRawString* identifier) const { + return identifier->IsOneByteEqualTo("yield") || + parser_->scanner()->IdentifierIsFutureStrictReserved(identifier); +} + + +Scope* ParserTraits::NewScope(Scope* parent_scope, ScopeType scope_type) { + return parser_->NewScope(parent_scope, scope_type); +} + + +const AstRawString* ParserTraits::EmptyIdentifierString() { + return parser_->ast_value_factory_->empty_string(); +} + + +void ParserTraits::SkipLazyFunctionBody(const AstRawString* function_name, + int* materialized_literal_count, + int* expected_property_count, + bool* ok) { + return parser_->SkipLazyFunctionBody( + function_name, materialized_literal_count, expected_property_count, ok); +} + + +ZoneList<Statement*>* ParserTraits::ParseEagerFunctionBody( + const AstRawString* name, int pos, Variable* fvar, + Token::Value fvar_init_op, bool is_generator, bool* ok) { + return parser_->ParseEagerFunctionBody(name, pos, fvar, fvar_init_op, + is_generator, ok); +} + +void ParserTraits::CheckConflictingVarDeclarations(v8::internal::Scope* scope, + bool* ok) { + parser_->CheckConflictingVarDeclarations(scope, ok); +} + + +AstValueFactory* ParserTraits::ast_value_factory() { + return parser_->ast_value_factory_; +} + + // Support for handling complex values (array and object literals) that // can be fully handled at compile time. class CompileTimeValue: public AllStatic { diff --git a/deps/v8/src/perf-jit.cc b/deps/v8/src/perf-jit.cc new file mode 100644 index 00000000000..5e7fd4cf4ac --- /dev/null +++ b/deps/v8/src/perf-jit.cc @@ -0,0 +1,147 @@ +// Copyright 2014 the V8 project authors. All rights reserved. +// Redistribution and use in source and binary forms, with or without +// modification, are permitted provided that the following conditions are +// met: +// +// * Redistributions of source code must retain the above copyright +// notice, this list of conditions and the following disclaimer. +// * Redistributions in binary form must reproduce the above +// copyright notice, this list of conditions and the following +// disclaimer in the documentation and/or other materials provided +// with the distribution. +// * Neither the name of Google Inc. nor the names of its +// contributors may be used to endorse or promote products derived +// from this software without specific prior written permission. +// +// THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS +// "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT +// LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR +// A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT +// OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, +// SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT +// LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, +// DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY +// THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT +// (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE +// OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. 
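[Editor's aside] The pending-error fields added to Parser above (has_pending_error_, pending_error_location_, and friends) together with ThrowPendingError() implement a deferred-reporting pattern: errors found while parsing are only recorded, because materializing the exception requires heap allocation, and the real error object is built and thrown only after the AST strings have been internalized. Below is a minimal sketch of the idea; PendingError and DeferredReporter are hypothetical names, not V8 types.

#include <string>

// Sketch of the deferred-error pattern. V8 stores these fields directly on
// Parser and throws via the Isolate; all names here are illustrative.
struct PendingError {
  bool has_error = false;
  int beg_pos = 0;
  int end_pos = 0;
  std::string message;  // message template id, e.g. "var_redeclaration"
  std::string arg;      // optional template argument
};

class DeferredReporter {
 public:
  // During parsing: record only. Throwing here would need heap allocation,
  // which is not allowed until strings have been internalized.
  void Record(int beg, int end, const char* message, const char* arg) {
    if (pending_.has_error) return;  // keep the first error, drop the rest
    pending_ = {true, beg, end, message, arg ? arg : ""};
  }

  // After internalization: safe to materialize and raise the real error.
  bool has_pending() const { return pending_.has_error; }
  const PendingError& pending() const { return pending_; }

 private:
  PendingError pending_;
};

Keeping only the first recorded error mirrors CompleteParserRecorder::LogMessage() further down in this diff, which likewise returns early once HasError() is set.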
+ +#include "src/perf-jit.h" + +#if V8_OS_LINUX +#include <fcntl.h> +#include <unistd.h> +#include "src/third_party/kernel/tools/perf/util/jitdump.h" +#endif // V8_OS_LINUX + +namespace v8 { +namespace internal { + +#if V8_OS_LINUX + +const char PerfJitLogger::kFilenameFormatString[] = "perfjit-%d.dump"; + +// Extra padding for the PID in the filename +const int PerfJitLogger::kFilenameBufferPadding = 16; + + +PerfJitLogger::PerfJitLogger() : perf_output_handle_(NULL), code_index_(0) { + if (!base::TimeTicks::KernelTimestampAvailable()) { + FATAL("Cannot profile with perf JIT - kernel timestamps not available."); + } + + // Open the perf JIT dump file. + int bufferSize = sizeof(kFilenameFormatString) + kFilenameBufferPadding; + ScopedVector<char> perf_dump_name(bufferSize); + int size = SNPrintF(perf_dump_name, kFilenameFormatString, + base::OS::GetCurrentProcessId()); + CHECK_NE(size, -1); + perf_output_handle_ = + base::OS::FOpen(perf_dump_name.start(), base::OS::LogFileOpenMode); + CHECK_NE(perf_output_handle_, NULL); + setvbuf(perf_output_handle_, NULL, _IOFBF, kLogBufferSize); + + LogWriteHeader(); +} + + +PerfJitLogger::~PerfJitLogger() { + fclose(perf_output_handle_); + perf_output_handle_ = NULL; +} + + +uint64_t PerfJitLogger::GetTimestamp() { + return static_cast<int64_t>( + base::TimeTicks::KernelTimestampNow().ToInternalValue()); +} + + +void PerfJitLogger::LogRecordedBuffer(Code* code, SharedFunctionInfo*, + const char* name, int length) { + DCHECK(code->instruction_start() == code->address() + Code::kHeaderSize); + DCHECK(perf_output_handle_ != NULL); + + const char* code_name = name; + uint8_t* code_pointer = reinterpret_cast<uint8_t*>(code->instruction_start()); + uint32_t code_size = code->instruction_size(); + + static const char string_terminator[] = "\0"; + + jr_code_load code_load; + code_load.p.id = JIT_CODE_LOAD; + code_load.p.total_size = sizeof(code_load) + length + 1 + code_size; + code_load.p.timestamp = GetTimestamp(); + code_load.pid = static_cast<uint32_t>(base::OS::GetCurrentProcessId()); + code_load.tid = static_cast<uint32_t>(base::OS::GetCurrentThreadId()); + code_load.vma = 0x0; // Our addresses are absolute. + code_load.code_addr = reinterpret_cast<uint64_t>(code_pointer); + code_load.code_size = code_size; + code_load.code_index = code_index_; + + code_index_++; + + LogWriteBytes(reinterpret_cast<const char*>(&code_load), sizeof(code_load)); + LogWriteBytes(code_name, length); + LogWriteBytes(string_terminator, 1); + LogWriteBytes(reinterpret_cast<const char*>(code_pointer), code_size); +} + + +void PerfJitLogger::CodeMoveEvent(Address from, Address to) { + // Code relocation not supported. 
+ UNREACHABLE(); +} + + +void PerfJitLogger::CodeDeleteEvent(Address from) { + // V8 does not send notification on code unload +} + + +void PerfJitLogger::SnapshotPositionEvent(Address addr, int pos) {} + + +void PerfJitLogger::LogWriteBytes(const char* bytes, int size) { + size_t rv = fwrite(bytes, 1, size, perf_output_handle_); + DCHECK(static_cast<size_t>(size) == rv); + USE(rv); +} + + +void PerfJitLogger::LogWriteHeader() { + DCHECK(perf_output_handle_ != NULL); + jitheader header; + header.magic = JITHEADER_MAGIC; + header.version = JITHEADER_VERSION; + header.total_size = sizeof(jitheader); + header.pad1 = 0xdeadbeef; + header.elf_mach = GetElfMach(); + header.pid = base::OS::GetCurrentProcessId(); + header.timestamp = + static_cast<uint64_t>(base::OS::TimeCurrentMillis() * 1000.0); + LogWriteBytes(reinterpret_cast<const char*>(&header), sizeof(header)); +} + +#endif // V8_OS_LINUX +} +} // namespace v8::internal diff --git a/deps/v8/src/perf-jit.h b/deps/v8/src/perf-jit.h new file mode 100644 index 00000000000..7872910b295 --- /dev/null +++ b/deps/v8/src/perf-jit.h @@ -0,0 +1,120 @@ +// Copyright 2014 the V8 project authors. All rights reserved. +// Redistribution and use in source and binary forms, with or without +// modification, are permitted provided that the following conditions are +// met: +// +// * Redistributions of source code must retain the above copyright +// notice, this list of conditions and the following disclaimer. +// * Redistributions in binary form must reproduce the above +// copyright notice, this list of conditions and the following +// disclaimer in the documentation and/or other materials provided +// with the distribution. +// * Neither the name of Google Inc. nor the names of its +// contributors may be used to endorse or promote products derived +// from this software without specific prior written permission. +// +// THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS +// "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT +// LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR +// A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT +// OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, +// SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT +// LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, +// DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY +// THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT +// (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE +// OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. + +#ifndef V8_PERF_JIT_H_ +#define V8_PERF_JIT_H_ + +#include "src/v8.h" + +namespace v8 { +namespace internal { + +// TODO(jarin) For now, we disable perf integration on Android because of a +// build problem - when building the snapshot with AOSP, librt is not +// available, so we cannot use the clock_gettime function. To fix this, we +// should thread through the V8_LIBRT_NOT_AVAILABLE flag here and only disable +// the perf integration when this flag is present (the perf integration is not +// needed when generating snapshot, so it is fine to ifdef it away). 
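[Editor's aside] PerfJitLogger above emits the perf jitdump format: a file header followed by self-describing records, each prefixed with an id, a total_size, and a timestamp, so a reader can skip record types it does not understand. The sketch below shows only that framing idea and is deliberately simplified; the real jr_code_load also carries pid, tid, vma, the code address, and a code index, and the authoritative struct layout is the kernel's jitdump.h (vendored under src/third_party/kernel/ in this diff), not this sketch.

#include <cstdint>
#include <cstdio>
#include <cstring>

// Illustrative record prefix; the real field layout is defined by the
// kernel's jitdump.h, not by this sketch.
struct RecordPrefix {
  uint32_t id;          // record type, e.g. JIT_CODE_LOAD
  uint32_t total_size;  // size of the whole record, payload included
  uint64_t timestamp;   // kernel-comparable timestamp
};

// Emit one code-load record: prefix, NUL-terminated name, raw code bytes.
// total_size lets a reader seek to the next record without understanding
// this one.
static void WriteCodeLoad(FILE* out, uint32_t type_id, uint64_t timestamp,
                          const char* name, const uint8_t* code,
                          uint32_t code_size) {
  RecordPrefix prefix;
  uint32_t name_size = static_cast<uint32_t>(strlen(name)) + 1;
  prefix.id = type_id;
  prefix.total_size = sizeof(prefix) + name_size + code_size;
  prefix.timestamp = timestamp;
  fwrite(&prefix, sizeof(prefix), 1, out);
  fwrite(name, 1, name_size, out);
  fwrite(code, 1, code_size, out);
}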
+ +#if V8_OS_LINUX + +// Linux perf tool logging support +class PerfJitLogger : public CodeEventLogger { + public: + PerfJitLogger(); + virtual ~PerfJitLogger(); + + virtual void CodeMoveEvent(Address from, Address to); + virtual void CodeDeleteEvent(Address from); + virtual void CodeDisableOptEvent(Code* code, SharedFunctionInfo* shared) {} + virtual void SnapshotPositionEvent(Address addr, int pos); + + private: + uint64_t GetTimestamp(); + virtual void LogRecordedBuffer(Code* code, SharedFunctionInfo* shared, + const char* name, int length); + + // Extension added to V8 log file name to get the low-level log name. + static const char kFilenameFormatString[]; + static const int kFilenameBufferPadding; + + // File buffer size of the low-level log. We don't use the default to + // minimize the associated overhead. + static const int kLogBufferSize = 2 * MB; + + void LogWriteBytes(const char* bytes, int size); + void LogWriteHeader(); + + static const uint32_t kElfMachIA32 = 3; + static const uint32_t kElfMachX64 = 62; + static const uint32_t kElfMachARM = 40; + static const uint32_t kElfMachMIPS = 10; + + uint32_t GetElfMach() { +#if V8_TARGET_ARCH_IA32 + return kElfMachIA32; +#elif V8_TARGET_ARCH_X64 + return kElfMachX64; +#elif V8_TARGET_ARCH_ARM + return kElfMachARM; +#elif V8_TARGET_ARCH_MIPS + return kElfMachMIPS; +#else + UNIMPLEMENTED(); + return 0; +#endif + } + + FILE* perf_output_handle_; + uint64_t code_index_; +}; + +#else + +// PerfJitLogger is only implemented on Linux +class PerfJitLogger : public CodeEventLogger { + public: + virtual void CodeMoveEvent(Address from, Address to) { UNIMPLEMENTED(); } + + virtual void CodeDeleteEvent(Address from) { UNIMPLEMENTED(); } + + virtual void CodeDisableOptEvent(Code* code, SharedFunctionInfo* shared) { + UNIMPLEMENTED(); + } + + virtual void SnapshotPositionEvent(Address addr, int pos) { UNIMPLEMENTED(); } + + virtual void LogRecordedBuffer(Code* code, SharedFunctionInfo* shared, + const char* name, int length) { + UNIMPLEMENTED(); + } +}; + +#endif // V8_OS_LINUX +} +} // namespace v8::internal +#endif diff --git a/deps/v8/src/platform/socket.cc b/deps/v8/src/platform/socket.cc deleted file mode 100644 index f4b33873b41..00000000000 --- a/deps/v8/src/platform/socket.cc +++ /dev/null @@ -1,201 +0,0 @@ -// Copyright 2013 the V8 project authors. All rights reserved. -// Use of this source code is governed by a BSD-style license that can be -// found in the LICENSE file. - -#include "platform/socket.h" - -#if V8_OS_POSIX -#include <sys/types.h> -#include <sys/socket.h> - -#include <netinet/in.h> -#include <netdb.h> - -#include <unistd.h> -#endif - -#include <errno.h> - -#include "checks.h" -#include "once.h" - -namespace v8 { -namespace internal { - -#if V8_OS_WIN - -static V8_DECLARE_ONCE(initialize_winsock) = V8_ONCE_INIT; - - -static void InitializeWinsock() { - WSADATA wsa_data; - int result = WSAStartup(MAKEWORD(1, 0), &wsa_data); - CHECK_EQ(0, result); -} - -#endif // V8_OS_WIN - - -Socket::Socket() { -#if V8_OS_WIN - // Be sure to initialize the WinSock DLL first. - CallOnce(&initialize_winsock, &InitializeWinsock); -#endif // V8_OS_WIN - - // Create the native socket handle. 
- native_handle_ = ::socket(AF_INET, SOCK_STREAM, IPPROTO_TCP); -} - - -bool Socket::Bind(int port) { - ASSERT_GE(port, 0); - ASSERT_LT(port, 65536); - if (!IsValid()) return false; - struct sockaddr_in sin; - memset(&sin, 0, sizeof(sin)); - sin.sin_family = AF_INET; - sin.sin_addr.s_addr = htonl(INADDR_LOOPBACK); - sin.sin_port = htons(static_cast<uint16_t>(port)); - int result = ::bind( - native_handle_, reinterpret_cast<struct sockaddr*>(&sin), sizeof(sin)); - return result == 0; -} - - -bool Socket::Listen(int backlog) { - if (!IsValid()) return false; - int result = ::listen(native_handle_, backlog); - return result == 0; -} - - -Socket* Socket::Accept() { - if (!IsValid()) return NULL; - while (true) { - NativeHandle native_handle = ::accept(native_handle_, NULL, NULL); - if (native_handle == kInvalidNativeHandle) { -#if V8_OS_POSIX - if (errno == EINTR) continue; // Retry after signal. -#endif - return NULL; - } - return new Socket(native_handle); - } -} - - -bool Socket::Connect(const char* host, const char* port) { - ASSERT_NE(NULL, host); - ASSERT_NE(NULL, port); - if (!IsValid()) return false; - - // Lookup host and port. - struct addrinfo* info = NULL; - struct addrinfo hint; - memset(&hint, 0, sizeof(hint)); - hint.ai_family = AF_INET; - hint.ai_socktype = SOCK_STREAM; - hint.ai_protocol = IPPROTO_TCP; - int result = ::getaddrinfo(host, port, &hint, &info); - if (result != 0) { - return false; - } - - // Connect to the host on the given port. - for (struct addrinfo* ai = info; ai != NULL; ai = ai->ai_next) { - // Try to connect using this addr info. - while (true) { - result = ::connect( - native_handle_, ai->ai_addr, static_cast<int>(ai->ai_addrlen)); - if (result == 0) { - freeaddrinfo(info); - return true; - } -#if V8_OS_POSIX - if (errno == EINTR) continue; // Retry after signal. -#endif - break; - } - } - freeaddrinfo(info); - return false; -} - - -bool Socket::Shutdown() { - if (!IsValid()) return false; - // Shutdown socket for both read and write. -#if V8_OS_POSIX - int result = ::shutdown(native_handle_, SHUT_RDWR); - ::close(native_handle_); -#elif V8_OS_WIN - int result = ::shutdown(native_handle_, SD_BOTH); - ::closesocket(native_handle_); -#endif - native_handle_ = kInvalidNativeHandle; - return result == 0; -} - - -int Socket::Send(const char* buffer, int length) { - ASSERT(length <= 0 || buffer != NULL); - if (!IsValid()) return 0; - int offset = 0; - while (offset < length) { - int result = ::send(native_handle_, buffer + offset, length - offset, 0); - if (result == 0) { - break; - } else if (result > 0) { - ASSERT(result <= length - offset); - offset += result; - } else { -#if V8_OS_POSIX - if (errno == EINTR) continue; // Retry after signal. -#endif - return 0; - } - } - return offset; -} - - -int Socket::Receive(char* buffer, int length) { - if (!IsValid()) return 0; - if (length <= 0) return 0; - ASSERT_NE(NULL, buffer); - while (true) { - int result = ::recv(native_handle_, buffer, length, 0); - if (result < 0) { -#if V8_OS_POSIX - if (errno == EINTR) continue; // Retry after signal. -#endif - return 0; - } - return result; - } -} - - -bool Socket::SetReuseAddress(bool reuse_address) { - if (!IsValid()) return 0; - int v = reuse_address ? 1 : 0; - int result = ::setsockopt(native_handle_, SOL_SOCKET, SO_REUSEADDR, - reinterpret_cast<char*>(&v), sizeof(v)); - return result == 0; -} - - -// static -int Socket::GetLastError() { -#if V8_OS_POSIX - return errno; -#elif V8_OS_WIN - // Be sure to initialize the WinSock DLL first. 
- CallOnce(&initialize_winsock, &InitializeWinsock); - - // Now we can safely perform WSA calls. - return ::WSAGetLastError(); -#endif -} - -} } // namespace v8::internal diff --git a/deps/v8/src/platform/socket.h b/deps/v8/src/platform/socket.h deleted file mode 100644 index 8605ae0fa96..00000000000 --- a/deps/v8/src/platform/socket.h +++ /dev/null @@ -1,78 +0,0 @@ -// Copyright 2013 the V8 project authors. All rights reserved. -// Use of this source code is governed by a BSD-style license that can be -// found in the LICENSE file. - -#ifndef V8_PLATFORM_SOCKET_H_ -#define V8_PLATFORM_SOCKET_H_ - -#include "globals.h" -#if V8_OS_WIN -#include "win32-headers.h" -#endif - -namespace v8 { -namespace internal { - -// ---------------------------------------------------------------------------- -// Socket -// - -class Socket V8_FINAL { - public: - Socket(); - ~Socket() { Shutdown(); } - - // Server initialization. - bool Bind(int port) V8_WARN_UNUSED_RESULT; - bool Listen(int backlog) V8_WARN_UNUSED_RESULT; - Socket* Accept() V8_WARN_UNUSED_RESULT; - - // Client initialization. - bool Connect(const char* host, const char* port) V8_WARN_UNUSED_RESULT; - - // Shutdown socket for both read and write. This causes blocking Send and - // Receive calls to exit. After |Shutdown()| the Socket object cannot be - // used for any communication. - bool Shutdown(); - - // Data Transimission - // Return 0 on failure. - int Send(const char* buffer, int length) V8_WARN_UNUSED_RESULT; - int Receive(char* buffer, int length) V8_WARN_UNUSED_RESULT; - - // Set the value of the SO_REUSEADDR socket option. - bool SetReuseAddress(bool reuse_address); - - V8_INLINE bool IsValid() const { - return native_handle_ != kInvalidNativeHandle; - } - - static int GetLastError(); - - // The implementation-defined native handle type. -#if V8_OS_POSIX - typedef int NativeHandle; - static const NativeHandle kInvalidNativeHandle = -1; -#elif V8_OS_WIN - typedef SOCKET NativeHandle; - static const NativeHandle kInvalidNativeHandle = INVALID_SOCKET; -#endif - - NativeHandle& native_handle() { - return native_handle_; - } - const NativeHandle& native_handle() const { - return native_handle_; - } - - private: - explicit Socket(NativeHandle native_handle) : native_handle_(native_handle) {} - - NativeHandle native_handle_; - - DISALLOW_COPY_AND_ASSIGN(Socket); -}; - -} } // namespace v8::internal - -#endif // V8_PLATFORM_SOCKET_H_ diff --git a/deps/v8/src/preparse-data.cc b/deps/v8/src/preparse-data.cc index 5ddf72137f3..15509c02919 100644 --- a/deps/v8/src/preparse-data.cc +++ b/deps/v8/src/preparse-data.cc @@ -2,14 +2,13 @@ // Use of this source code is governed by a BSD-style license that can be // found in the LICENSE file. 
-#include "../include/v8stdint.h" - -#include "preparse-data-format.h" -#include "preparse-data.h" - -#include "checks.h" -#include "globals.h" -#include "hashmap.h" +#include "include/v8stdint.h" +#include "src/base/logging.h" +#include "src/compiler.h" +#include "src/globals.h" +#include "src/hashmap.h" +#include "src/preparse-data.h" +#include "src/preparse-data-format.h" namespace v8 { namespace internal { @@ -24,7 +23,7 @@ CompleteParserRecorder::CompleteParserRecorder() preamble_[PreparseDataConstants::kHasErrorOffset] = false; preamble_[PreparseDataConstants::kFunctionsSizeOffset] = 0; preamble_[PreparseDataConstants::kSizeOffset] = 0; - ASSERT_EQ(5, PreparseDataConstants::kHeaderSize); + DCHECK_EQ(5, PreparseDataConstants::kHeaderSize); #ifdef DEBUG prev_start_ = -1; #endif @@ -36,7 +35,7 @@ void CompleteParserRecorder::LogMessage(int start_pos, const char* message, const char* arg_opt, bool is_reference_error) { - if (has_error()) return; + if (HasError()) return; preamble_[PreparseDataConstants::kHasErrorOffset] = true; function_store_.Reset(); STATIC_ASSERT(PreparseDataConstants::kMessageStartPos == 0); @@ -61,17 +60,21 @@ void CompleteParserRecorder::WriteString(Vector<const char> str) { } -Vector<unsigned> CompleteParserRecorder::ExtractData() { +ScriptData* CompleteParserRecorder::GetScriptData() { int function_size = function_store_.size(); int total_size = PreparseDataConstants::kHeaderSize + function_size; - Vector<unsigned> data = Vector<unsigned>::New(total_size); + unsigned* data = NewArray<unsigned>(total_size); preamble_[PreparseDataConstants::kFunctionsSizeOffset] = function_size; - OS::MemCopy(data.start(), preamble_, sizeof(preamble_)); + MemCopy(data, preamble_, sizeof(preamble_)); if (function_size > 0) { - function_store_.WriteTo(data.SubVector(PreparseDataConstants::kHeaderSize, - total_size)); + function_store_.WriteTo(Vector<unsigned>( + data + PreparseDataConstants::kHeaderSize, function_size)); } - return data; + DCHECK(IsAligned(reinterpret_cast<intptr_t>(data), kPointerAlignment)); + ScriptData* result = new ScriptData(reinterpret_cast<byte*>(data), + total_size * sizeof(unsigned)); + result->AcquireDataOwnership(); + return result; } diff --git a/deps/v8/src/preparse-data.h b/deps/v8/src/preparse-data.h index 051a9096903..c1331d044fc 100644 --- a/deps/v8/src/preparse-data.h +++ b/deps/v8/src/preparse-data.h @@ -5,13 +5,16 @@ #ifndef V8_PREPARSE_DATA_H_ #define V8_PREPARSE_DATA_H_ -#include "allocation.h" -#include "hashmap.h" -#include "utils-inl.h" +#include "src/allocation.h" +#include "src/hashmap.h" +#include "src/preparse-data-format.h" +#include "src/utils-inl.h" namespace v8 { namespace internal { +class ScriptData; + // Abstract interface for preparse data recorder. class ParserRecorder { @@ -52,13 +55,13 @@ class SingletonLogger : public ParserRecorder { int literals, int properties, StrictMode strict_mode) { - ASSERT(!has_error_); + DCHECK(!has_error_); start_ = start; end_ = end; literals_ = literals; properties_ = properties; strict_mode_ = strict_mode; - }; + } // Logs an error message and marks the log as containing an error. 
// Further logging will be ignored, and ExtractData will return a vector @@ -82,24 +85,24 @@ class SingletonLogger : public ParserRecorder { int start() const { return start_; } int end() const { return end_; } int literals() const { - ASSERT(!has_error_); + DCHECK(!has_error_); return literals_; } int properties() const { - ASSERT(!has_error_); + DCHECK(!has_error_); return properties_; } StrictMode strict_mode() const { - ASSERT(!has_error_); + DCHECK(!has_error_); return strict_mode_; } int is_reference_error() const { return is_reference_error_; } const char* message() { - ASSERT(has_error_); + DCHECK(has_error_); return message_; } const char* argument_opt() const { - ASSERT(has_error_); + DCHECK(has_error_); return argument_opt_; } @@ -148,13 +151,17 @@ class CompleteParserRecorder : public ParserRecorder { const char* message, const char* argument_opt, bool is_reference_error_); - Vector<unsigned> ExtractData(); + ScriptData* GetScriptData(); - private: - bool has_error() { + bool HasError() { return static_cast<bool>(preamble_[PreparseDataConstants::kHasErrorOffset]); } + Vector<unsigned> ErrorMessageData() { + DCHECK(HasError()); + return function_store_.ToVector(); + } + private: void WriteString(Vector<const char> str); // Write a non-negative number to the symbol store. diff --git a/deps/v8/src/preparser.cc b/deps/v8/src/preparser.cc index 96623b9526a..7ce8e3d91aa 100644 --- a/deps/v8/src/preparser.cc +++ b/deps/v8/src/preparser.cc @@ -4,20 +4,20 @@ #include <cmath> -#include "../include/v8stdint.h" - -#include "allocation.h" -#include "checks.h" -#include "conversions.h" -#include "conversions-inl.h" -#include "globals.h" -#include "hashmap.h" -#include "list.h" -#include "preparse-data-format.h" -#include "preparse-data.h" -#include "preparser.h" -#include "unicode.h" -#include "utils.h" +#include "include/v8stdint.h" + +#include "src/allocation.h" +#include "src/base/logging.h" +#include "src/conversions-inl.h" +#include "src/conversions.h" +#include "src/globals.h" +#include "src/hashmap.h" +#include "src/list.h" +#include "src/preparse-data.h" +#include "src/preparse-data-format.h" +#include "src/preparser.h" +#include "src/unicode.h" +#include "src/utils.h" #if V8_LIBC_MSVCRT && (_MSC_VER < 1800) namespace std { @@ -35,31 +35,22 @@ namespace internal { void PreParserTraits::ReportMessageAt(Scanner::Location location, const char* message, - Vector<const char*> args, + const char* arg, bool is_reference_error) { ReportMessageAt(location.beg_pos, location.end_pos, message, - args.length() > 0 ? 
args[0] : NULL, + arg, is_reference_error); } -void PreParserTraits::ReportMessageAt(Scanner::Location location, - const char* type, - const char* name_opt, - bool is_reference_error) { - pre_parser_->log_->LogMessage(location.beg_pos, location.end_pos, type, - name_opt, is_reference_error); -} - - void PreParserTraits::ReportMessageAt(int start_pos, int end_pos, - const char* type, - const char* name_opt, + const char* message, + const char* arg, bool is_reference_error) { - pre_parser_->log_->LogMessage(start_pos, end_pos, type, name_opt, + pre_parser_->log_->LogMessage(start_pos, end_pos, message, arg, is_reference_error); } @@ -70,6 +61,8 @@ PreParserIdentifier PreParserTraits::GetSymbol(Scanner* scanner) { } else if (scanner->current_token() == Token::FUTURE_STRICT_RESERVED_WORD) { return PreParserIdentifier::FutureStrictReserved(); + } else if (scanner->current_token() == Token::LET) { + return PreParserIdentifier::Let(); } else if (scanner->current_token() == Token::YIELD) { return PreParserIdentifier::Yield(); } @@ -104,10 +97,11 @@ PreParserExpression PreParserTraits::ParseFunctionLiteral( bool is_generator, int function_token_position, FunctionLiteral::FunctionType type, + FunctionLiteral::ArityRestriction arity_restriction, bool* ok) { return pre_parser_->ParseFunctionLiteral( name, function_name_location, name_is_strict_reserved, is_generator, - function_token_position, type, ok); + function_token_position, type, arity_restriction, ok); } @@ -116,12 +110,14 @@ PreParser::PreParseResult PreParser::PreParseLazyFunction( log_ = log; // Lazy functions always have trivial outer scopes (no with/catch scopes). PreParserScope top_scope(scope_, GLOBAL_SCOPE); - FunctionState top_state(&function_state_, &scope_, &top_scope); + FunctionState top_state(&function_state_, &scope_, &top_scope, NULL, + this->ast_value_factory()); scope_->SetStrictMode(strict_mode); PreParserScope function_scope(scope_, FUNCTION_SCOPE); - FunctionState function_state(&function_state_, &scope_, &function_scope); + FunctionState function_state(&function_state_, &scope_, &function_scope, NULL, + this->ast_value_factory()); function_state.set_is_generator(is_generator); - ASSERT_EQ(Token::LBRACE, scanner()->current_token()); + DCHECK_EQ(Token::LBRACE, scanner()->current_token()); bool ok = true; int start_position = peek_position(); ParseLazyFunctionLiteralBody(&ok); @@ -129,7 +125,7 @@ PreParser::PreParseResult PreParser::PreParseLazyFunction( if (!ok) { ReportUnexpectedToken(scanner()->current_token()); } else { - ASSERT_EQ(Token::RBRACE, scanner()->peek()); + DCHECK_EQ(Token::RBRACE, scanner()->peek()); if (scope_->strict_mode() == STRICT) { int end_pos = scanner()->location().end_pos; CheckOctalLiteral(start_position, end_pos, &ok); @@ -175,9 +171,14 @@ PreParser::Statement PreParser::ParseSourceElement(bool* ok) { switch (peek()) { case Token::FUNCTION: return ParseFunctionDeclaration(ok); - case Token::LET: case Token::CONST: return ParseVariableStatement(kSourceElement, ok); + case Token::LET: + DCHECK(allow_harmony_scoping()); + if (strict_mode() == STRICT) { + return ParseVariableStatement(kSourceElement, ok); + } + // Fall through. 
default: return ParseStatement(ok); } @@ -245,11 +246,6 @@ PreParser::Statement PreParser::ParseStatement(bool* ok) { case Token::LBRACE: return ParseBlock(ok); - case Token::CONST: - case Token::LET: - case Token::VAR: - return ParseVariableStatement(kStatement, ok); - case Token::SEMICOLON: Next(); return Statement::Default(); @@ -294,8 +290,7 @@ PreParser::Statement PreParser::ParseStatement(bool* ok) { if (strict_mode() == STRICT) { PreParserTraits::ReportMessageAt(start_location.beg_pos, end_location.end_pos, - "strict_function", - NULL); + "strict_function"); *ok = false; return Statement::Default(); } else { @@ -306,6 +301,16 @@ PreParser::Statement PreParser::ParseStatement(bool* ok) { case Token::DEBUGGER: return ParseDebuggerStatement(ok); + case Token::VAR: + case Token::CONST: + return ParseVariableStatement(kStatement, ok); + + case Token::LET: + DCHECK(allow_harmony_scoping()); + if (strict_mode() == STRICT) { + return ParseVariableStatement(kStatement, ok); + } + // Fall through. default: return ParseExpressionOrLabelledStatement(ok); } @@ -330,6 +335,7 @@ PreParser::Statement PreParser::ParseFunctionDeclaration(bool* ok) { is_generator, pos, FunctionLiteral::DECLARATION, + FunctionLiteral::NORMAL_ARITY, CHECK_OK); return Statement::FunctionDeclaration(); } @@ -423,23 +429,9 @@ PreParser::Statement PreParser::ParseVariableDeclarations( return Statement::Default(); } } - } else if (peek() == Token::LET) { - // ES6 Draft Rev4 section 12.2.1: - // - // LetDeclaration : let LetBindingList ; - // - // * It is a Syntax Error if the code that matches this production is not - // contained in extended code. - // - // TODO(rossberg): make 'let' a legal identifier in sloppy mode. - if (!allow_harmony_scoping() || strict_mode() == SLOPPY) { - ReportMessageAt(scanner()->peek_location(), "illegal_let"); - *ok = false; - return Statement::Default(); - } + } else if (peek() == Token::LET && strict_mode() == STRICT) { Consume(Token::LET); - if (var_context != kSourceElement && - var_context != kForStatement) { + if (var_context != kSourceElement && var_context != kForStatement) { ReportMessageAt(scanner()->peek_location(), "unprotected_let"); *ok = false; return Statement::Default(); @@ -484,8 +476,8 @@ PreParser::Statement PreParser::ParseExpressionOrLabelledStatement(bool* ok) { if (starts_with_identifier && expr.IsIdentifier() && peek() == Token::COLON) { // Expression is a single identifier, and not, e.g., a parenthesized // identifier. 
- ASSERT(!expr.AsIdentifier().IsFutureReserved()); - ASSERT(strict_mode() == SLOPPY || + DCHECK(!expr.AsIdentifier().IsFutureReserved()); + DCHECK(strict_mode() == SLOPPY || (!expr.AsIdentifier().IsFutureStrictReserved() && !expr.AsIdentifier().IsYield())); Consume(Token::COLON); @@ -661,8 +653,7 @@ PreParser::Statement PreParser::ParseWhileStatement(bool* ok) { bool PreParser::CheckInOrOf(bool accept_OF) { if (Check(Token::IN) || - (allow_for_of() && accept_OF && - CheckContextualKeyword(CStrVector("of")))) { + (accept_OF && CheckContextualKeyword(CStrVector("of")))) { return true; } return false; @@ -677,7 +668,7 @@ PreParser::Statement PreParser::ParseForStatement(bool* ok) { Expect(Token::LPAREN, CHECK_OK); if (peek() != Token::SEMICOLON) { if (peek() == Token::VAR || peek() == Token::CONST || - peek() == Token::LET) { + (peek() == Token::LET && strict_mode() == STRICT)) { bool is_let = peek() == Token::LET; int decl_count; VariableDeclarationProperties decl_props = kHasNoInitializers; @@ -809,6 +800,7 @@ PreParser::Expression PreParser::ParseFunctionLiteral( bool is_generator, int function_token_pos, FunctionLiteral::FunctionType function_type, + FunctionLiteral::ArityRestriction arity_restriction, bool* ok) { // Function :: // '(' FormalParameterList? ')' '{' FunctionBody '}' @@ -816,13 +808,13 @@ PreParser::Expression PreParser::ParseFunctionLiteral( // Parse function body. ScopeType outer_scope_type = scope_->type(); PreParserScope function_scope(scope_, FUNCTION_SCOPE); - FunctionState function_state(&function_state_, &scope_, &function_scope); + FunctionState function_state(&function_state_, &scope_, &function_scope, NULL, + this->ast_value_factory()); function_state.set_is_generator(is_generator); // FormalParameterList :: // '(' (Identifier)*[','] ')' Expect(Token::LPAREN, CHECK_OK); int start_position = position(); - bool done = (peek() == Token::RPAREN); DuplicateFinder duplicate_finder(scanner()->unicode_cache()); // We don't yet know if the function will be strict, so we cannot yet produce // errors for parameter names or duplicates. However, we remember the @@ -830,6 +822,10 @@ PreParser::Expression PreParser::ParseFunctionLiteral( Scanner::Location eval_args_error_loc = Scanner::Location::invalid(); Scanner::Location dupe_error_loc = Scanner::Location::invalid(); Scanner::Location reserved_error_loc = Scanner::Location::invalid(); + + bool done = arity_restriction == FunctionLiteral::GETTER_ARITY || + (peek() == Token::RPAREN && + arity_restriction != FunctionLiteral::SETTER_ARITY); while (!done) { bool is_strict_reserved = false; Identifier param_name = @@ -847,10 +843,9 @@ PreParser::Expression PreParser::ParseFunctionLiteral( dupe_error_loc = scanner()->location(); } + if (arity_restriction == FunctionLiteral::SETTER_ARITY) break; done = (peek() == Token::RPAREN); - if (!done) { - Expect(Token::COMMA, CHECK_OK); - } + if (!done) Expect(Token::COMMA, CHECK_OK); } Expect(Token::RPAREN, CHECK_OK); @@ -911,7 +906,7 @@ void PreParser::ParseLazyFunctionLiteralBody(bool* ok) { if (!*ok) return; // Position right after terminal '}'. 
- ASSERT_EQ(Token::RBRACE, scanner()->peek()); + DCHECK_EQ(Token::RBRACE, scanner()->peek()); int body_end = scanner()->peek_location().end_pos; log_->LogFunction(body_start, body_end, function_state_->materialized_literal_count(), diff --git a/deps/v8/src/preparser.h b/deps/v8/src/preparser.h index 6733658579c..8a932582586 100644 --- a/deps/v8/src/preparser.h +++ b/deps/v8/src/preparser.h @@ -5,12 +5,13 @@ #ifndef V8_PREPARSER_H #define V8_PREPARSER_H -#include "func-name-inferrer.h" -#include "hashmap.h" -#include "scopes.h" -#include "token.h" -#include "scanner.h" -#include "v8.h" +#include "src/v8.h" + +#include "src/func-name-inferrer.h" +#include "src/hashmap.h" +#include "src/scanner.h" +#include "src/scopes.h" +#include "src/token.h" namespace v8 { namespace internal { @@ -61,11 +62,10 @@ class ParserBase : public Traits { // Shorten type names defined by Traits. typedef typename Traits::Type::Expression ExpressionT; typedef typename Traits::Type::Identifier IdentifierT; + typedef typename Traits::Type::FunctionLiteral FunctionLiteralT; - ParserBase(Scanner* scanner, uintptr_t stack_limit, - v8::Extension* extension, - ParserRecorder* log, - typename Traits::Type::Zone* zone, + ParserBase(Scanner* scanner, uintptr_t stack_limit, v8::Extension* extension, + ParserRecorder* log, typename Traits::Type::Zone* zone, typename Traits::Type::Parser this_object) : Traits(this_object), parenthesized_function_(false), @@ -81,15 +81,15 @@ class ParserBase : public Traits { allow_lazy_(false), allow_natives_syntax_(false), allow_generators_(false), - allow_for_of_(false), - zone_(zone) { } + allow_arrow_functions_(false), + zone_(zone) {} // Getters that indicate whether certain syntactical constructs are // allowed to be parsed by this instance of the parser. bool allow_lazy() const { return allow_lazy_; } bool allow_natives_syntax() const { return allow_natives_syntax_; } bool allow_generators() const { return allow_generators_; } - bool allow_for_of() const { return allow_for_of_; } + bool allow_arrow_functions() const { return allow_arrow_functions_; } bool allow_modules() const { return scanner()->HarmonyModules(); } bool allow_harmony_scoping() const { return scanner()->HarmonyScoping(); } bool allow_harmony_numeric_literals() const { @@ -101,7 +101,7 @@ class ParserBase : public Traits { void set_allow_lazy(bool allow) { allow_lazy_ = allow; } void set_allow_natives_syntax(bool allow) { allow_natives_syntax_ = allow; } void set_allow_generators(bool allow) { allow_generators_ = allow; } - void set_allow_for_of(bool allow) { allow_for_of_ = allow; } + void set_allow_arrow_functions(bool allow) { allow_arrow_functions_ = allow; } void set_allow_modules(bool allow) { scanner()->SetHarmonyModules(allow); } void set_allow_harmony_scoping(bool allow) { scanner()->SetHarmonyScoping(allow); @@ -111,6 +111,8 @@ class ParserBase : public Traits { } protected: + friend class Traits::Type::Checkpoint; + enum AllowEvalOrArgumentsAsIdentifier { kAllowEvalOrArguments, kDontAllowEvalOrArguments @@ -121,6 +123,8 @@ class ParserBase : public Traits { PARSE_EAGERLY }; + class ParserCheckpoint; + // --------------------------------------------------------------------------- // FunctionState and BlockState together implement the parser's scope stack. // The parser's current scope is in scope_. 
BlockState and FunctionState @@ -149,7 +153,13 @@ class ParserBase : public Traits { FunctionState** function_state_stack, typename Traits::Type::Scope** scope_stack, typename Traits::Type::Scope* scope, - typename Traits::Type::Zone* zone = NULL); + typename Traits::Type::Zone* zone = NULL, + AstValueFactory* ast_value_factory = NULL); + FunctionState(FunctionState** function_state_stack, + typename Traits::Type::Scope** scope_stack, + typename Traits::Type::Scope** scope, + typename Traits::Type::Zone* zone = NULL, + AstValueFactory* ast_value_factory = NULL); ~FunctionState(); int NextMaterializedLiteralIndex() { @@ -170,8 +180,8 @@ class ParserBase : public Traits { void set_generator_object_variable( typename Traits::Type::GeneratorVariable* variable) { - ASSERT(variable != NULL); - ASSERT(!is_generator()); + DCHECK(variable != NULL); + DCHECK(!is_generator()); generator_object_variable_ = variable; is_generator_ = true; } @@ -210,6 +220,38 @@ class ParserBase : public Traits { typename Traits::Type::Factory factory_; friend class ParserTraits; + friend class ParserCheckpoint; + }; + + // Annoyingly, arrow functions first parse as comma expressions, then when we + // see the => we have to go back and reinterpret the arguments as being formal + // parameters. To do so we need to reset some of the parser state back to + // what it was before the arguments were first seen. + class ParserCheckpoint : public Traits::Type::Checkpoint { + public: + template <typename Parser> + explicit ParserCheckpoint(Parser* parser) + : Traits::Type::Checkpoint(parser) { + function_state_ = parser->function_state_; + next_materialized_literal_index_ = + function_state_->next_materialized_literal_index_; + next_handler_index_ = function_state_->next_handler_index_; + expected_property_count_ = function_state_->expected_property_count_; + } + + void Restore() { + Traits::Type::Checkpoint::Restore(); + function_state_->next_materialized_literal_index_ = + next_materialized_literal_index_; + function_state_->next_handler_index_ = next_handler_index_; + function_state_->expected_property_count_ = expected_property_count_; + } + + private: + FunctionState* function_state_; + int next_materialized_literal_index_; + int next_handler_index_; + int expected_property_count_; }; class ParsingModeScope BASE_EMBEDDED { @@ -244,8 +286,7 @@ class ParserBase : public Traits { INLINE(Token::Value Next()) { if (stack_overflow_) return Token::ILLEGAL; { - int marker; - if (reinterpret_cast<uintptr_t>(&marker) < stack_limit_) { + if (GetCurrentStackPosition() < stack_limit_) { // Any further calls to Next or peek will return the illegal token. // The current call must return the next token, which might already // have been peek'ed. @@ -259,7 +300,7 @@ class ParserBase : public Traits { Token::Value next = Next(); USE(next); USE(token); - ASSERT(next == token); + DCHECK(next == token); } bool Check(Token::Value token) { @@ -300,6 +341,7 @@ class ParserBase : public Traits { return next == Token::IDENTIFIER || next == Token::FUTURE_RESERVED_WORD || next == Token::FUTURE_STRICT_RESERVED_WORD || + next == Token::LET || next == Token::YIELD; } @@ -333,6 +375,44 @@ class ParserBase : public Traits { } } + // Validates strict mode for function parameter lists. This has to be + // done after parsing the function, since the function can declare + // itself strict. 
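+  // For instance, in "function f(eval) { 'use strict'; }" the offending
+  // parameter only becomes illegal once the directive prologue of the
+  // body has been parsed, so the locations recorded while scanning the
+  // parameter list are checked and reported here.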
+ void CheckStrictFunctionNameAndParameters( + IdentifierT function_name, + bool function_name_is_strict_reserved, + const Scanner::Location& function_name_loc, + const Scanner::Location& eval_args_error_loc, + const Scanner::Location& dupe_error_loc, + const Scanner::Location& reserved_loc, + bool* ok) { + if (this->IsEvalOrArguments(function_name)) { + Traits::ReportMessageAt(function_name_loc, "strict_eval_arguments"); + *ok = false; + return; + } + if (function_name_is_strict_reserved) { + Traits::ReportMessageAt(function_name_loc, "unexpected_strict_reserved"); + *ok = false; + return; + } + if (eval_args_error_loc.IsValid()) { + Traits::ReportMessageAt(eval_args_error_loc, "strict_eval_arguments"); + *ok = false; + return; + } + if (dupe_error_loc.IsValid()) { + Traits::ReportMessageAt(dupe_error_loc, "strict_param_dupe"); + *ok = false; + return; + } + if (reserved_loc.IsValid()) { + Traits::ReportMessageAt(reserved_loc, "unexpected_strict_reserved"); + *ok = false; + return; + } + } + // Determine precedence of given token. static int Precedence(Token::Value token, bool accept_IN) { if (token == Token::IN && !accept_IN) @@ -348,15 +428,16 @@ class ParserBase : public Traits { bool is_generator() const { return function_state_->is_generator(); } // Report syntax errors. - void ReportMessage(const char* message, Vector<const char*> args, + void ReportMessage(const char* message, const char* arg = NULL, bool is_reference_error = false) { Scanner::Location source_location = scanner()->location(); - Traits::ReportMessageAt(source_location, message, args, is_reference_error); + Traits::ReportMessageAt(source_location, message, arg, is_reference_error); } void ReportMessageAt(Scanner::Location location, const char* message, bool is_reference_error = false) { - Traits::ReportMessageAt(location, message, Vector<const char*>::empty(), + Traits::ReportMessageAt(location, message, + reinterpret_cast<const char*>(NULL), is_reference_error); } @@ -401,6 +482,8 @@ class ParserBase : public Traits { ExpressionT ParseMemberExpression(bool* ok); ExpressionT ParseMemberExpressionContinuation(ExpressionT expression, bool* ok); + ExpressionT ParseArrowFunctionLiteral(int start_pos, ExpressionT params_ast, + bool* ok); // Checks if the expression is a valid reference expression (e.g., on the // left-hand side of assignments). Although ruled out by ECMA as early errors, @@ -484,7 +567,7 @@ class ParserBase : public Traits { bool allow_lazy_; bool allow_natives_syntax_; bool allow_generators_; - bool allow_for_of_; + bool allow_arrow_functions_; typename Traits::Type::Zone* zone_; // Only used by Parser. 
}; @@ -508,24 +591,36 @@ class PreParserIdentifier { static PreParserIdentifier FutureStrictReserved() { return PreParserIdentifier(kFutureStrictReservedIdentifier); } + static PreParserIdentifier Let() { + return PreParserIdentifier(kLetIdentifier); + } static PreParserIdentifier Yield() { return PreParserIdentifier(kYieldIdentifier); } - bool IsEval() { return type_ == kEvalIdentifier; } - bool IsArguments() { return type_ == kArgumentsIdentifier; } - bool IsEvalOrArguments() { return type_ >= kEvalIdentifier; } - bool IsYield() { return type_ == kYieldIdentifier; } - bool IsFutureReserved() { return type_ == kFutureReservedIdentifier; } - bool IsFutureStrictReserved() { + bool IsEval() const { return type_ == kEvalIdentifier; } + bool IsArguments() const { return type_ == kArgumentsIdentifier; } + bool IsEvalOrArguments() const { return type_ >= kEvalIdentifier; } + bool IsYield() const { return type_ == kYieldIdentifier; } + bool IsFutureReserved() const { return type_ == kFutureReservedIdentifier; } + bool IsFutureStrictReserved() const { return type_ == kFutureStrictReservedIdentifier; } - bool IsValidStrictVariable() { return type_ == kUnknownIdentifier; } + bool IsValidStrictVariable() const { return type_ == kUnknownIdentifier; } + + // Allow identifier->name()[->length()] to work. The preparser + // does not need the actual positions/lengths of the identifiers. + const PreParserIdentifier* operator->() const { return this; } + const PreParserIdentifier raw_name() const { return *this; } + + int position() const { return 0; } + int length() const { return 0; } private: enum Type { kUnknownIdentifier, kFutureReservedIdentifier, kFutureStrictReservedIdentifier, + kLetIdentifier, kYieldIdentifier, kEvalIdentifier, kArgumentsIdentifier @@ -534,6 +629,7 @@ class PreParserIdentifier { Type type_; friend class PreParserExpression; + friend class PreParserScope; }; @@ -549,10 +645,26 @@ class PreParserExpression { } static PreParserExpression FromIdentifier(PreParserIdentifier id) { - return PreParserExpression(kIdentifierFlag | + return PreParserExpression(kTypeIdentifier | (id.type_ << kIdentifierShift)); } + static PreParserExpression BinaryOperation(PreParserExpression left, + Token::Value op, + PreParserExpression right) { + int code = ((op == Token::COMMA) && !left.is_parenthesized() && + !right.is_parenthesized()) + ? left.ArrowParamListBit() & right.ArrowParamListBit() + : 0; + return PreParserExpression(kTypeBinaryOperation | code); + } + + static PreParserExpression EmptyArrowParamList() { + // Any expression for which IsValidArrowParamList() returns true + // will work here. 
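+    // This covers the zero-parameter case "() => ...", where there is
+    // no actual expression to reinterpret as a parameter list.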
+ return FromIdentifier(PreParserIdentifier::Default()); + } + static PreParserExpression StringLiteral() { return PreParserExpression(kUnknownStringLiteral); } @@ -577,40 +689,63 @@ class PreParserExpression { return PreParserExpression(kCallExpression); } - bool IsIdentifier() { return (code_ & kIdentifierFlag) != 0; } + bool IsIdentifier() const { return (code_ & kTypeMask) == kTypeIdentifier; } - PreParserIdentifier AsIdentifier() { - ASSERT(IsIdentifier()); + PreParserIdentifier AsIdentifier() const { + DCHECK(IsIdentifier()); return PreParserIdentifier( static_cast<PreParserIdentifier::Type>(code_ >> kIdentifierShift)); } - bool IsStringLiteral() { return (code_ & kStringLiteralFlag) != 0; } + bool IsStringLiteral() const { + return (code_ & kTypeMask) == kTypeStringLiteral; + } - bool IsUseStrictLiteral() { - return (code_ & kStringLiteralMask) == kUseStrictString; + bool IsUseStrictLiteral() const { + return (code_ & kUseStrictString) == kUseStrictString; } - bool IsThis() { return code_ == kThisExpression; } + bool IsThis() const { return (code_ & kThisExpression) == kThisExpression; } - bool IsThisProperty() { return code_ == kThisPropertyExpression; } + bool IsThisProperty() const { + return (code_ & kThisPropertyExpression) == kThisPropertyExpression; + } - bool IsProperty() { - return code_ == kPropertyExpression || code_ == kThisPropertyExpression; + bool IsProperty() const { + return (code_ & kPropertyExpression) == kPropertyExpression || + (code_ & kThisPropertyExpression) == kThisPropertyExpression; } - bool IsCall() { return code_ == kCallExpression; } + bool IsCall() const { return (code_ & kCallExpression) == kCallExpression; } - bool IsValidReferenceExpression() { + bool IsValidReferenceExpression() const { return IsIdentifier() || IsProperty(); } + bool IsValidArrowParamList() const { + return (ArrowParamListBit() & kBinaryOperationArrowParamList) != 0 && + (code_ & kMultiParenthesizedExpression) == 0; + } + // At the moment PreParser doesn't track these expression types. bool IsFunctionLiteral() const { return false; } bool IsCallNew() const { return false; } PreParserExpression AsFunctionLiteral() { return *this; } + bool IsBinaryOperation() const { + return (code_ & kTypeMask) == kTypeBinaryOperation; + } + + bool is_parenthesized() const { + return (code_ & kParenthesizedExpression) != 0; + } + + void increase_parenthesization_level() { + code_ |= is_parenthesized() ? kMultiParenthesizedExpression + : kParenthesizedExpression; + } + // Dummy implementation for making expression->somefunc() work in both Parser // and PreParser. PreParserExpression* operator->() { return this; } @@ -619,33 +754,69 @@ class PreParserExpression { void set_index(int index) {} // For YieldExpressions void set_parenthesized() {} + int position() const { return RelocInfo::kNoPosition; } + void set_function_token_position(int position) {} + void set_ast_properties(int* ast_properties) {} + void set_dont_optimize_reason(BailoutReason dont_optimize_reason) {} + + bool operator==(const PreParserExpression& other) const { + return code_ == other.code_; + } + bool operator!=(const PreParserExpression& other) const { + return code_ != other.code_; + } + private: - // Least significant 2 bits are used as flags. Bits 0 and 1 represent - // identifiers or strings literals, and are mutually exclusive, but can both - // be absent. If the expression is an identifier or a string literal, the - // other bits describe the type (see PreParserIdentifier::Type and string - // literal constants below). 
+ // Least significant 2 bits are used as expression type. The third least + // significant bit tracks whether an expression is parenthesized. If the + // expression is an identifier or a string literal, the other bits + // describe the type (see PreParserIdentifier::Type and string literal + // constants below). For binary operations, the other bits are flags + // which further describe the contents of the expression. enum { kUnknownExpression = 0, - // Identifiers - kIdentifierFlag = 1, // Used to detect labels. - kIdentifierShift = 3, + kTypeMask = 1 | 2, + kParenthesizedExpression = (1 << 2), + kMultiParenthesizedExpression = (1 << 3), - kStringLiteralFlag = 2, // Used to detect directive prologue. - kUnknownStringLiteral = kStringLiteralFlag, - kUseStrictString = kStringLiteralFlag | 8, + // Identifiers + kTypeIdentifier = 1, // Used to detect labels. + kIdentifierShift = 5, + kTypeStringLiteral = 2, // Used to detect directive prologue. + kUnknownStringLiteral = kTypeStringLiteral, + kUseStrictString = kTypeStringLiteral | 32, kStringLiteralMask = kUseStrictString, + // Binary operations. Those are needed to detect certain keywords and + // duplicated identifiers in parameter lists for arrow functions, because + // they are initially parsed as comma-separated expressions. + kTypeBinaryOperation = 3, + kBinaryOperationArrowParamList = (1 << 4), + // Below here applies if neither identifier nor string literal. Reserve the // 2 least significant bits for flags. - kThisExpression = 1 << 2, - kThisPropertyExpression = 2 << 2, - kPropertyExpression = 3 << 2, - kCallExpression = 4 << 2 + kThisExpression = (1 << 4), + kThisPropertyExpression = (2 << 4), + kPropertyExpression = (3 << 4), + kCallExpression = (4 << 4) }; explicit PreParserExpression(int expression_code) : code_(expression_code) {} + V8_INLINE int ArrowParamListBit() const { + if (IsBinaryOperation()) return code_ & kBinaryOperationArrowParamList; + if (IsIdentifier()) { + const PreParserIdentifier ident = AsIdentifier(); + // A valid identifier can be an arrow function parameter list + // except for eval, arguments, yield, and reserved keywords. + if (ident.IsEval() || ident.IsArguments() || ident.IsYield() || + ident.IsFutureStrictReserved()) + return 0; + return kBinaryOperationArrowParamList; + } + return 0; + } + int code_; }; @@ -727,7 +898,8 @@ class PreParserStatementList { class PreParserScope { public: - explicit PreParserScope(PreParserScope* outer_scope, ScopeType scope_type) + explicit PreParserScope(PreParserScope* outer_scope, ScopeType scope_type, + void* = NULL) : scope_type_(scope_type) { strict_mode_ = outer_scope ? outer_scope->strict_mode() : SLOPPY; } @@ -736,6 +908,19 @@ class PreParserScope { StrictMode strict_mode() const { return strict_mode_; } void SetStrictMode(StrictMode strict_mode) { strict_mode_ = strict_mode; } + // When PreParser is in use, lazy compilation is already being done; + // things cannot get lazier than that. + bool AllowsLazyCompilation() const { return false; } + + void set_start_position(int position) {} + void set_end_position(int position) {} + + bool IsDeclared(const PreParserIdentifier& identifier) const { return false; } + void DeclareParameter(const PreParserIdentifier& identifier, VariableMode) {} + + // Allow scope->Foo() to work.
+ PreParserScope* operator->() { return this; } + private: ScopeType scope_type_; StrictMode strict_mode_; @@ -744,9 +929,9 @@ class PreParserScope { class PreParserFactory { public: - explicit PreParserFactory(void* extra_param) {} - PreParserExpression NewLiteral(PreParserIdentifier identifier, - int pos) { + explicit PreParserFactory(void* extra_param1, void* extra_param2) {} + PreParserExpression NewStringLiteral(PreParserIdentifier identifier, + int pos) { return PreParserExpression::Default(); } PreParserExpression NewNumberLiteral(double number, @@ -799,7 +984,7 @@ class PreParserFactory { PreParserExpression NewBinaryOperation(Token::Value op, PreParserExpression left, PreParserExpression right, int pos) { - return PreParserExpression::Default(); + return PreParserExpression::BinaryOperation(left, op, right); } PreParserExpression NewCompareOperation(Token::Value op, PreParserExpression left, @@ -840,6 +1025,31 @@ class PreParserFactory { int pos) { return PreParserExpression::Default(); } + PreParserStatement NewReturnStatement(PreParserExpression expression, + int pos) { + return PreParserStatement::Default(); + } + PreParserExpression NewFunctionLiteral( + PreParserIdentifier name, AstValueFactory* ast_value_factory, + const PreParserScope& scope, PreParserStatementList body, + int materialized_literal_count, int expected_property_count, + int handler_count, int parameter_count, + FunctionLiteral::ParameterFlag has_duplicate_parameters, + FunctionLiteral::FunctionType function_type, + FunctionLiteral::IsFunctionFlag is_function, + FunctionLiteral::IsParenthesizedFlag is_parenthesized, + FunctionLiteral::KindFlag kind, int position) { + return PreParserExpression::Default(); + } + + // Return the object itself as AstVisitor and implement the needed + // dummy method right in this class. + PreParserFactory* visitor() { return this; } + BailoutReason dont_optimize_reason() { return kNoReason; } + int* ast_properties() { + static int dummy = 42; + return &dummy; + } }; @@ -854,11 +1064,23 @@ class PreParserTraits { // Used by FunctionState and BlockState. typedef PreParserScope Scope; + typedef PreParserScope ScopePtr; + + class Checkpoint BASE_EMBEDDED { + public: + template <typename Parser> + explicit Checkpoint(Parser* parser) {} + void Restore() {} + }; + // PreParser doesn't need to store generator variables. typedef void GeneratorVariable; // No interaction with Zones. typedef void Zone; + typedef int AstProperties; + typedef Vector<PreParserIdentifier> ParameterIdentifierVector; + // Return types for traversing functions. typedef PreParserIdentifier Identifier; typedef PreParserExpression Expression; @@ -901,6 +1123,10 @@ class PreParserTraits { return expression.AsIdentifier(); } + static bool IsFutureStrictReserved(PreParserIdentifier identifier) { + return identifier.IsYield() || identifier.IsFutureStrictReserved(); + } + static bool IsBoilerplateProperty(PreParserExpression property) { // PreParser doesn't count boilerplate properties. return false; @@ -921,6 +1147,11 @@ class PreParserTraits { // PreParser should not use FuncNameInferrer. UNREACHABLE(); } + static void InferFunctionName(FuncNameInferrer* fni, + PreParserExpression expression) { + // PreParser should not use FuncNameInferrer. 
+ UNREACHABLE(); + } static void CheckFunctionLiteralInsideTopLevelObjectLiteral( PreParserScope* scope, PreParserExpression value, bool* has_function) {} @@ -932,10 +1163,10 @@ class PreParserTraits { static void CheckPossibleEvalCall(PreParserExpression expression, PreParserScope* scope) {} - static PreParserExpression MarkExpressionAsLValue( + static PreParserExpression MarkExpressionAsAssigned( PreParserExpression expression) { // TODO(marja): To be able to produce the same errors, the preparser needs - // to start tracking which expressions are variables and which are lvalues. + // to start tracking which expressions are variables and which are assigned. return expression; } @@ -964,29 +1195,34 @@ class PreParserTraits { const char* type, Handle<Object> arg, int pos) { return PreParserExpression::Default(); } + PreParserScope NewScope(PreParserScope* outer_scope, ScopeType scope_type) { + return PreParserScope(outer_scope, scope_type); + } // Reporting errors. void ReportMessageAt(Scanner::Location location, const char* message, - Vector<const char*> args, - bool is_reference_error = false); - void ReportMessageAt(Scanner::Location location, - const char* type, - const char* name_opt, + const char* arg = NULL, bool is_reference_error = false); void ReportMessageAt(int start_pos, int end_pos, - const char* type, - const char* name_opt, + const char* message, + const char* arg = NULL, bool is_reference_error = false); // "null" return type creators. static PreParserIdentifier EmptyIdentifier() { return PreParserIdentifier::Default(); } + static PreParserIdentifier EmptyIdentifierString() { + return PreParserIdentifier::Default(); + } static PreParserExpression EmptyExpression() { return PreParserExpression::Default(); } + static PreParserExpression EmptyArrowParamList() { + return PreParserExpression::EmptyArrowParamList(); + } static PreParserExpression EmptyLiteral() { return PreParserExpression::Default(); } @@ -1002,8 +1238,8 @@ class PreParserTraits { // Producing data during the recursive descent. PreParserIdentifier GetSymbol(Scanner* scanner); - static PreParserIdentifier NextLiteralString(Scanner* scanner, - PretenureFlag tenured) { + + static PreParserIdentifier GetNextSymbol(Scanner* scanner) { return PreParserIdentifier::Default(); } @@ -1028,6 +1264,11 @@ class PreParserTraits { Scanner* scanner, PreParserFactory* factory = NULL); + PreParserExpression GetIterator(PreParserExpression iterable, + PreParserFactory* factory) { + return PreParserExpression::Default(); + } + static PreParserExpressionList NewExpressionList(int size, void* zone) { return PreParserExpressionList(); } @@ -1040,6 +1281,31 @@ class PreParserTraits { return PreParserExpressionList(); } + V8_INLINE void SkipLazyFunctionBody(PreParserIdentifier function_name, + int* materialized_literal_count, + int* expected_property_count, bool* ok) { + UNREACHABLE(); + } + + V8_INLINE PreParserStatementList + ParseEagerFunctionBody(PreParserIdentifier function_name, int pos, + Variable* fvar, Token::Value fvar_init_op, + bool is_generator, bool* ok); + + // Utility functions + int DeclareArrowParametersFromExpression(PreParserExpression expression, + PreParserScope* scope, + Scanner::Location* dupe_loc, + bool* ok) { + // TODO(aperez): Detect duplicated identifiers in paramlists. 
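+    // For instance, "a" and "(a, b)" are accepted, while "(a, (b))" is
+    // rejected, because the extra parenthesization is recorded in the
+    // expression's flags.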
+ *ok = expression.IsValidArrowParamList(); + return 0; + } + + static AstValueFactory* ast_value_factory() { return NULL; } + + void CheckConflictingVarDeclarations(PreParserScope scope, bool* ok) {} + // Temporary glue; these functions will move to ParserBase. PreParserExpression ParseV8Intrinsic(bool* ok); PreParserExpression ParseFunctionLiteral( @@ -1049,6 +1315,7 @@ class PreParserTraits { bool is_generator, int function_token_position, FunctionLiteral::FunctionType type, + FunctionLiteral::ArityRestriction arity_restriction, bool* ok); private: @@ -1089,7 +1356,7 @@ class PreParser : public ParserBase<PreParserTraits> { // during parsing. PreParseResult PreParseProgram() { PreParserScope scope(scope_, GLOBAL_SCOPE); - FunctionState top_scope(&function_state_, &scope_, &scope, NULL); + FunctionState top_scope(&function_state_, &scope_, &scope); bool ok = true; int start_position = scanner()->peek_location().beg_pos; ParseSourceElements(Token::EOS, &ok); @@ -1171,6 +1438,14 @@ class PreParser : public ParserBase<PreParserTraits> { Expression ParseObjectLiteral(bool* ok); Expression ParseV8Intrinsic(bool* ok); + V8_INLINE void SkipLazyFunctionBody(PreParserIdentifier function_name, + int* materialized_literal_count, + int* expected_property_count, bool* ok); + V8_INLINE PreParserStatementList + ParseEagerFunctionBody(PreParserIdentifier function_name, int pos, + Variable* fvar, Token::Value fvar_init_op, + bool is_generator, bool* ok); + Expression ParseFunctionLiteral( Identifier name, Scanner::Location function_name_location, @@ -1178,18 +1453,42 @@ class PreParser : public ParserBase<PreParserTraits> { bool is_generator, int function_token_pos, FunctionLiteral::FunctionType function_type, + FunctionLiteral::ArityRestriction arity_restriction, bool* ok); void ParseLazyFunctionLiteralBody(bool* ok); bool CheckInOrOf(bool accept_OF); }; + +PreParserStatementList PreParser::ParseEagerFunctionBody( + PreParserIdentifier function_name, int pos, Variable* fvar, + Token::Value fvar_init_op, bool is_generator, bool* ok) { + ParsingModeScope parsing_mode(this, PARSE_EAGERLY); + + ParseSourceElements(Token::RBRACE, ok); + if (!*ok) return PreParserStatementList(); + + Expect(Token::RBRACE, ok); + return PreParserStatementList(); +} + + +PreParserStatementList PreParserTraits::ParseEagerFunctionBody( + PreParserIdentifier function_name, int pos, Variable* fvar, + Token::Value fvar_init_op, bool is_generator, bool* ok) { + return pre_parser_->ParseEagerFunctionBody(function_name, pos, fvar, + fvar_init_op, is_generator, ok); +} + + template<class Traits> ParserBase<Traits>::FunctionState::FunctionState( FunctionState** function_state_stack, typename Traits::Type::Scope** scope_stack, typename Traits::Type::Scope* scope, - typename Traits::Type::Zone* extra_param) + typename Traits::Type::Zone* extra_param, + AstValueFactory* ast_value_factory) : next_materialized_literal_index_(JSFunction::kLiteralsPrefixSize), next_handler_index_(0), expected_property_count_(0), @@ -1201,14 +1500,39 @@ ParserBase<Traits>::FunctionState::FunctionState( outer_scope_(*scope_stack), saved_ast_node_id_(0), extra_param_(extra_param), - factory_(extra_param) { + factory_(extra_param, ast_value_factory) { *scope_stack_ = scope; *function_state_stack = this; Traits::SetUpFunctionState(this, extra_param); } -template<class Traits> +template <class Traits> +ParserBase<Traits>::FunctionState::FunctionState( + FunctionState** function_state_stack, + typename Traits::Type::Scope** scope_stack, + typename 
Traits::Type::Scope** scope, + typename Traits::Type::Zone* extra_param, + AstValueFactory* ast_value_factory) + : next_materialized_literal_index_(JSFunction::kLiteralsPrefixSize), + next_handler_index_(0), + expected_property_count_(0), + is_generator_(false), + generator_object_variable_(NULL), + function_state_stack_(function_state_stack), + outer_function_state_(*function_state_stack), + scope_stack_(scope_stack), + outer_scope_(*scope_stack), + saved_ast_node_id_(0), + extra_param_(extra_param), + factory_(extra_param, ast_value_factory) { + *scope_stack_ = *scope; + *function_state_stack = this; + Traits::SetUpFunctionState(this, extra_param); +} + + +template <class Traits> ParserBase<Traits>::FunctionState::~FunctionState() { *scope_stack_ = outer_scope_; *function_state_stack_ = outer_function_state_; @@ -1232,15 +1556,15 @@ void ParserBase<Traits>::ReportUnexpectedToken(Token::Value token) { return ReportMessageAt(source_location, "unexpected_token_identifier"); case Token::FUTURE_RESERVED_WORD: return ReportMessageAt(source_location, "unexpected_reserved"); + case Token::LET: case Token::YIELD: case Token::FUTURE_STRICT_RESERVED_WORD: return ReportMessageAt(source_location, strict_mode() == SLOPPY ? "unexpected_token_identifier" : "unexpected_strict_reserved"); default: const char* name = Token::String(token); - ASSERT(name != NULL); - Traits::ReportMessageAt( - source_location, "unexpected_token", Vector<const char*>(&name, 1)); + DCHECK(name != NULL); + Traits::ReportMessageAt(source_location, "unexpected_token", name); } } @@ -1254,12 +1578,13 @@ typename ParserBase<Traits>::IdentifierT ParserBase<Traits>::ParseIdentifier( IdentifierT name = this->GetSymbol(scanner()); if (allow_eval_or_arguments == kDontAllowEvalOrArguments && strict_mode() == STRICT && this->IsEvalOrArguments(name)) { - ReportMessageAt(scanner()->location(), "strict_eval_arguments"); + ReportMessage("strict_eval_arguments"); *ok = false; } return name; } else if (strict_mode() == SLOPPY && (next == Token::FUTURE_STRICT_RESERVED_WORD || + (next == Token::LET) || (next == Token::YIELD && !is_generator()))) { return this->GetSymbol(scanner()); } else { @@ -1278,6 +1603,7 @@ typename ParserBase<Traits>::IdentifierT ParserBase< if (next == Token::IDENTIFIER) { *is_strict_reserved = false; } else if (next == Token::FUTURE_STRICT_RESERVED_WORD || + next == Token::LET || (next == Token::YIELD && !this->is_generator())) { *is_strict_reserved = true; } else { @@ -1294,6 +1620,7 @@ typename ParserBase<Traits>::IdentifierT ParserBase<Traits>::ParseIdentifierName(bool* ok) { Token::Value next = Next(); if (next != Token::IDENTIFIER && next != Token::FUTURE_RESERVED_WORD && + next != Token::LET && next != Token::YIELD && next != Token::FUTURE_STRICT_RESERVED_WORD && !Token::IsKeyword(next)) { this->ReportUnexpectedToken(next); *ok = false; @@ -1321,21 +1648,21 @@ typename ParserBase<Traits>::ExpressionT ParserBase<Traits>::ParseRegExpLiteral( int pos = peek_position(); if (!scanner()->ScanRegExpPattern(seen_equal)) { Next(); - ReportMessage("unterminated_regexp", Vector<const char*>::empty()); + ReportMessage("unterminated_regexp"); *ok = false; return Traits::EmptyExpression(); } int literal_index = function_state_->NextMaterializedLiteralIndex(); - IdentifierT js_pattern = this->NextLiteralString(scanner(), TENURED); + IdentifierT js_pattern = this->GetNextSymbol(scanner()); if (!scanner()->ScanRegExpFlags()) { Next(); - ReportMessageAt(scanner()->location(), "invalid_regexp_flags"); + 
ReportMessage("invalid_regexp_flags"); *ok = false; return Traits::EmptyExpression(); } - IdentifierT js_flags = this->NextLiteralString(scanner(), TENURED); + IdentifierT js_flags = this->GetNextSymbol(scanner()); Next(); return factory()->NewRegExpLiteral(js_pattern, js_flags, literal_index, pos); } @@ -1389,6 +1716,7 @@ ParserBase<Traits>::ParsePrimaryExpression(bool* ok) { break; case Token::IDENTIFIER: + case Token::LET: case Token::YIELD: case Token::FUTURE_STRICT_RESERVED_WORD: { // Using eval or arguments in this context is OK even in strict mode. @@ -1421,11 +1749,20 @@ ParserBase<Traits>::ParsePrimaryExpression(bool* ok) { case Token::LPAREN: Consume(Token::LPAREN); - // Heuristically try to detect immediately called functions before - // seeing the call parentheses. - parenthesized_function_ = (peek() == Token::FUNCTION); - result = this->ParseExpression(true, CHECK_OK); - Expect(Token::RPAREN, CHECK_OK); + if (allow_arrow_functions() && peek() == Token::RPAREN) { + // Arrow functions are the only expression type constructions + // for which an empty parameter list "()" is valid input. + Consume(Token::RPAREN); + result = this->ParseArrowFunctionLiteral( + pos, this->EmptyArrowParamList(), CHECK_OK); + } else { + // Heuristically try to detect immediately called functions before + // seeing the call parentheses. + parenthesized_function_ = (peek() == Token::FUNCTION); + result = this->ParseExpression(true, CHECK_OK); + result->increase_parenthesization_level(); + Expect(Token::RPAREN, CHECK_OK); + } break; case Token::MOD: @@ -1504,7 +1841,7 @@ typename ParserBase<Traits>::ExpressionT ParserBase<Traits>::ParseObjectLiteral( // ((IdentifierName | String | Number) ':' AssignmentExpression) | // (('get' | 'set') (IdentifierName | String | Number) FunctionLiteral) // ) ',')* '}' - // (Except that trailing comma is not required and not allowed.) + // (Except that the trailing comma is not required.) int pos = peek_position(); typename Traits::Type::PropertyList properties = @@ -1526,6 +1863,8 @@ typename ParserBase<Traits>::ExpressionT ParserBase<Traits>::ParseObjectLiteral( switch (next) { case Token::FUTURE_RESERVED_WORD: case Token::FUTURE_STRICT_RESERVED_WORD: + case Token::LET: + case Token::YIELD: case Token::IDENTIFIER: { bool is_getter = false; bool is_setter = false; @@ -1541,6 +1880,8 @@ typename ParserBase<Traits>::ExpressionT ParserBase<Traits>::ParseObjectLiteral( if (next != i::Token::IDENTIFIER && next != i::Token::FUTURE_RESERVED_WORD && next != i::Token::FUTURE_STRICT_RESERVED_WORD && + next != i::Token::LET && + next != i::Token::YIELD && next != i::Token::NUMBER && next != i::Token::STRING && !Token::IsKeyword(next)) { @@ -1558,9 +1899,9 @@ typename ParserBase<Traits>::ExpressionT ParserBase<Traits>::ParseObjectLiteral( false, // reserved words are allowed here false, // not a generator RelocInfo::kNoPosition, FunctionLiteral::ANONYMOUS_EXPRESSION, + is_getter ? FunctionLiteral::GETTER_ARITY + : FunctionLiteral::SETTER_ARITY, CHECK_OK); - // Allow any number of parameters for compatibilty with JSC. - // Specification only allows zero parameters for get and one for set. typename Traits::Type::ObjectLiteralProperty property = factory()->NewObjectLiteralProperty(is_getter, value, next_pos); if (this->IsBoilerplateProperty(property)) { @@ -1580,7 +1921,7 @@ typename ParserBase<Traits>::ExpressionT ParserBase<Traits>::ParseObjectLiteral( } // Failed to parse as get/set property, so it's just a normal property // (which might be called "get" or "set" or something else). 
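+      // For instance, in "{ get: 1 }" the word "get" is an ordinary
+      // property name rather than the start of an accessor definition.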
- key = factory()->NewLiteral(id, next_pos); + key = factory()->NewStringLiteral(id, next_pos); break; } case Token::STRING: { @@ -1592,7 +1933,7 @@ typename ParserBase<Traits>::ExpressionT ParserBase<Traits>::ParseObjectLiteral( key = factory()->NewNumberLiteral(index, next_pos); break; } - key = factory()->NewLiteral(string, next_pos); + key = factory()->NewStringLiteral(string, next_pos); break; } case Token::NUMBER: { @@ -1605,7 +1946,7 @@ typename ParserBase<Traits>::ExpressionT ParserBase<Traits>::ParseObjectLiteral( if (Token::IsKeyword(next)) { Consume(next); IdentifierT string = this->GetSymbol(scanner_); - key = factory()->NewLiteral(string, next_pos); + key = factory()->NewStringLiteral(string, next_pos); } else { Token::Value next = Next(); ReportUnexpectedToken(next); @@ -1635,7 +1976,6 @@ typename ParserBase<Traits>::ExpressionT ParserBase<Traits>::ParseObjectLiteral( } properties->Add(property, zone()); - // TODO(1240767): Consider allowing trailing comma. if (peek() != Token::RBRACE) { // Need {} because of the CHECK_OK macro. Expect(Token::COMMA, CHECK_OK); @@ -1674,7 +2014,7 @@ typename Traits::Type::ExpressionList ParserBase<Traits>::ParseArguments( true, CHECK_OK_CUSTOM(NullExpressionList)); result->Add(argument, zone_); if (result->length() > Code::kMaxArguments) { - ReportMessageAt(scanner()->location(), "too_many_arguments"); + ReportMessage("too_many_arguments"); *ok = false; return this->NullExpressionList(); } @@ -1694,6 +2034,7 @@ typename ParserBase<Traits>::ExpressionT ParserBase<Traits>::ParseAssignmentExpression(bool accept_IN, bool* ok) { // AssignmentExpression :: // ConditionalExpression + // ArrowFunction // YieldExpression // LeftHandSideExpression AssignmentOperator AssignmentExpression @@ -1704,9 +2045,17 @@ ParserBase<Traits>::ParseAssignmentExpression(bool accept_IN, bool* ok) { } if (fni_ != NULL) fni_->Enter(); + ParserCheckpoint checkpoint(this); ExpressionT expression = this->ParseConditionalExpression(accept_IN, CHECK_OK); + if (allow_arrow_functions() && peek() == Token::ARROW) { + checkpoint.Restore(); + expression = this->ParseArrowFunctionLiteral(lhs_location.beg_pos, + expression, CHECK_OK); + return expression; + } + if (!Token::IsAssignmentOp(peek())) { if (fni_ != NULL) fni_->Leave(); // Parsed conditional expression only (no assignment). @@ -1715,7 +2064,7 @@ ParserBase<Traits>::ParseAssignmentExpression(bool accept_IN, bool* ok) { expression = this->CheckAndRewriteReferenceExpression( expression, lhs_location, "invalid_lhs_in_assignment", CHECK_OK); - expression = this->MarkExpressionAsLValue(expression); + expression = this->MarkExpressionAsAssigned(expression); Token::Value op = Next(); // Get assignment operator. int pos = position(); @@ -1754,15 +2103,40 @@ template <class Traits> typename ParserBase<Traits>::ExpressionT ParserBase<Traits>::ParseYieldExpression(bool* ok) { // YieldExpression :: - // 'yield' '*'? AssignmentExpression + // 'yield' ([no line terminator] '*'? AssignmentExpression)? int pos = peek_position(); Expect(Token::YIELD, CHECK_OK); - Yield::Kind kind = - Check(Token::MUL) ? 
Yield::DELEGATING : Yield::SUSPEND; ExpressionT generator_object = factory()->NewVariableProxy(function_state_->generator_object_variable()); - ExpressionT expression = - ParseAssignmentExpression(false, CHECK_OK); + ExpressionT expression = Traits::EmptyExpression(); + Yield::Kind kind = Yield::SUSPEND; + if (!scanner()->HasAnyLineTerminatorBeforeNext()) { + if (Check(Token::MUL)) kind = Yield::DELEGATING; + switch (peek()) { + case Token::EOS: + case Token::SEMICOLON: + case Token::RBRACE: + case Token::RBRACK: + case Token::RPAREN: + case Token::COLON: + case Token::COMMA: + // The above set of tokens is the complete set of tokens that can appear + // after an AssignmentExpression, and none of them can start an + // AssignmentExpression. This allows us to avoid looking for an RHS for + // a Yield::SUSPEND operation, given only one look-ahead token. + if (kind == Yield::SUSPEND) + break; + DCHECK(kind == Yield::DELEGATING); + // Delegating yields require an RHS; fall through. + default: + expression = ParseAssignmentExpression(false, CHECK_OK); + break; + } + } + if (kind == Yield::DELEGATING) { + // var iterator = subject[Symbol.iterator](); + expression = this->GetIterator(expression, factory()); + } typename Traits::Type::YieldExpression yield = factory()->NewYield(generator_object, expression, kind, pos); if (kind == Yield::DELEGATING) { @@ -1799,7 +2173,7 @@ ParserBase<Traits>::ParseConditionalExpression(bool accept_IN, bool* ok) { template <class Traits> typename ParserBase<Traits>::ExpressionT ParserBase<Traits>::ParseBinaryExpression(int prec, bool accept_IN, bool* ok) { - ASSERT(prec >= 4); + DCHECK(prec >= 4); ExpressionT x = this->ParseUnaryExpression(CHECK_OK); for (int prec1 = Precedence(peek(), accept_IN); prec1 >= prec; prec1--) { // prec1 >= 4 @@ -1864,7 +2238,7 @@ ParserBase<Traits>::ParseUnaryExpression(bool* ok) { // "delete identifier" is a syntax error in strict mode. 
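+  // For instance, "'use strict'; var x; delete x;" must be rejected,
+  // while "delete obj.prop" remains legal in strict mode.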
if (op == Token::DELETE && strict_mode() == STRICT && this->IsIdentifier(expression)) { - ReportMessage("strict_delete", Vector<const char*>::empty()); + ReportMessage("strict_delete"); *ok = false; return this->EmptyExpression(); } @@ -1877,7 +2251,7 @@ ParserBase<Traits>::ParseUnaryExpression(bool* ok) { ExpressionT expression = this->ParseUnaryExpression(CHECK_OK); expression = this->CheckAndRewriteReferenceExpression( expression, lhs_location, "invalid_lhs_in_prefix_op", CHECK_OK); - this->MarkExpressionAsLValue(expression); + this->MarkExpressionAsAssigned(expression); return factory()->NewCountOperation(op, true /* prefix */, @@ -1902,7 +2276,7 @@ ParserBase<Traits>::ParsePostfixExpression(bool* ok) { Token::IsCountOp(peek())) { expression = this->CheckAndRewriteReferenceExpression( expression, lhs_location, "invalid_lhs_in_postfix_op", CHECK_OK); - expression = this->MarkExpressionAsLValue(expression); + expression = this->MarkExpressionAsAssigned(expression); Token::Value next = Next(); expression = @@ -1974,7 +2348,7 @@ ParserBase<Traits>::ParseLeftHandSideExpression(bool* ok) { int pos = position(); IdentifierT name = ParseIdentifierName(CHECK_OK); result = factory()->NewProperty( - result, factory()->NewLiteral(name, pos), pos); + result, factory()->NewStringLiteral(name, pos), pos); if (fni_ != NULL) this->PushLiteralName(fni_, name); break; } @@ -2062,6 +2436,7 @@ ParserBase<Traits>::ParseMemberExpression(bool* ok) { is_generator, function_token_position, function_type, + FunctionLiteral::NORMAL_ARITY, CHECK_OK); } else { result = ParsePrimaryExpression(CHECK_OK); @@ -2096,7 +2471,7 @@ ParserBase<Traits>::ParseMemberExpressionContinuation(ExpressionT expression, int pos = position(); IdentifierT name = ParseIdentifierName(CHECK_OK); expression = factory()->NewProperty( - expression, factory()->NewLiteral(name, pos), pos); + expression, factory()->NewStringLiteral(name, pos), pos); if (fni_ != NULL) { this->PushLiteralName(fni_, name); } @@ -2106,11 +2481,123 @@ ParserBase<Traits>::ParseMemberExpressionContinuation(ExpressionT expression, return expression; } } - ASSERT(false); + DCHECK(false); return this->EmptyExpression(); } +template <class Traits> +typename ParserBase<Traits>::ExpressionT ParserBase< + Traits>::ParseArrowFunctionLiteral(int start_pos, ExpressionT params_ast, + bool* ok) { + // TODO(aperez): Change this to use ARROW_SCOPE + typename Traits::Type::ScopePtr scope = + this->NewScope(scope_, FUNCTION_SCOPE); + typename Traits::Type::StatementList body; + typename Traits::Type::AstProperties ast_properties; + BailoutReason dont_optimize_reason = kNoReason; + int num_parameters = -1; + int materialized_literal_count = -1; + int expected_property_count = -1; + int handler_count = 0; + + { + FunctionState function_state(&function_state_, &scope_, &scope, zone(), + this->ast_value_factory()); + Scanner::Location dupe_error_loc = Scanner::Location::invalid(); + num_parameters = Traits::DeclareArrowParametersFromExpression( + params_ast, scope_, &dupe_error_loc, ok); + if (!*ok) { + ReportMessageAt( + Scanner::Location(start_pos, scanner()->location().beg_pos), + "malformed_arrow_function_parameter_list"); + return this->EmptyExpression(); + } + + if (num_parameters > Code::kMaxArguments) { + ReportMessageAt(Scanner::Location(params_ast->position(), position()), + "too_many_parameters"); + *ok = false; + return this->EmptyExpression(); + } + + Expect(Token::ARROW, CHECK_OK); + + if (peek() == Token::LBRACE) { + // Multiple statement body + Consume(Token::LBRACE);
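+      // For instance, "x => { return x * 2; }" takes this brace-delimited
+      // path, while the single-expression form "x => x * 2" is handled in
+      // the else branch below.
+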
bool is_lazily_parsed = + (mode() == PARSE_LAZILY && scope_->AllowsLazyCompilation()); + if (is_lazily_parsed) { + body = this->NewStatementList(0, zone()); + this->SkipLazyFunctionBody(this->EmptyIdentifier(), + &materialized_literal_count, + &expected_property_count, CHECK_OK); + } else { + body = this->ParseEagerFunctionBody( + this->EmptyIdentifier(), RelocInfo::kNoPosition, NULL, + Token::INIT_VAR, false, // Not a generator. + CHECK_OK); + materialized_literal_count = + function_state.materialized_literal_count(); + expected_property_count = function_state.expected_property_count(); + handler_count = function_state.handler_count(); + } + } else { + // Single-expression body + int pos = position(); + parenthesized_function_ = false; + ExpressionT expression = ParseAssignmentExpression(true, CHECK_OK); + body = this->NewStatementList(1, zone()); + body->Add(factory()->NewReturnStatement(expression, pos), zone()); + materialized_literal_count = function_state.materialized_literal_count(); + expected_property_count = function_state.expected_property_count(); + handler_count = function_state.handler_count(); + } + + scope->set_start_position(start_pos); + scope->set_end_position(scanner()->location().end_pos); + + // Arrow function *parameter lists* are always checked as in strict mode. + bool function_name_is_strict_reserved = false; + Scanner::Location function_name_loc = Scanner::Location::invalid(); + Scanner::Location eval_args_error_loc = Scanner::Location::invalid(); + Scanner::Location reserved_loc = Scanner::Location::invalid(); + this->CheckStrictFunctionNameAndParameters( + this->EmptyIdentifier(), function_name_is_strict_reserved, + function_name_loc, eval_args_error_loc, dupe_error_loc, reserved_loc, + CHECK_OK); + + // Validate strict mode. + if (strict_mode() == STRICT) { + CheckOctalLiteral(start_pos, scanner()->location().end_pos, CHECK_OK); + } + + if (allow_harmony_scoping() && strict_mode() == STRICT) + this->CheckConflictingVarDeclarations(scope, CHECK_OK); + + ast_properties = *factory()->visitor()->ast_properties(); + dont_optimize_reason = factory()->visitor()->dont_optimize_reason(); + } + + FunctionLiteralT function_literal = factory()->NewFunctionLiteral( + this->EmptyIdentifierString(), this->ast_value_factory(), scope, body, + materialized_literal_count, expected_property_count, handler_count, + num_parameters, FunctionLiteral::kNoDuplicateParameters, + FunctionLiteral::ANONYMOUS_EXPRESSION, FunctionLiteral::kIsFunction, + FunctionLiteral::kNotParenthesized, FunctionLiteral::kArrowFunction, + start_pos); + + function_literal->set_function_token_position(start_pos); + function_literal->set_ast_properties(&ast_properties); + function_literal->set_dont_optimize_reason(dont_optimize_reason); + + if (fni_ != NULL) this->InferFunctionName(fni_, function_literal); + + return function_literal; +} + + template <typename Traits> typename ParserBase<Traits>::ExpressionT ParserBase<Traits>::CheckAndRewriteReferenceExpression( @@ -2157,17 +2644,14 @@ void ParserBase<Traits>::ObjectLiteralChecker::CheckProperty( if (IsDataDataConflict(old_type, type)) { // Both are data properties. if (strict_mode_ == SLOPPY) return; - parser()->ReportMessageAt(scanner()->location(), - "strict_duplicate_property"); + parser()->ReportMessage("strict_duplicate_property"); } else if (IsDataAccessorConflict(old_type, type)) { // Both a data and an accessor property with the same name. 
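+    // For instance, "{ x: 1, get x() {} }" lands here; unlike the
+    // data/data case above, this is an error in sloppy mode as well.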
- parser()->ReportMessageAt(scanner()->location(), - "accessor_data_property"); + parser()->ReportMessage("accessor_data_property"); } else { - ASSERT(IsAccessorAccessorConflict(old_type, type)); + DCHECK(IsAccessorAccessorConflict(old_type, type)); // Both accessors of the same type. - parser()->ReportMessageAt(scanner()->location(), - "accessor_get_set"); + parser()->ReportMessage("accessor_get_set"); } *ok = false; } diff --git a/deps/v8/src/prettyprinter.cc b/deps/v8/src/prettyprinter.cc index 233d7c2fabe..19b0290758c 100644 --- a/deps/v8/src/prettyprinter.cc +++ b/deps/v8/src/prettyprinter.cc @@ -4,11 +4,12 @@ #include <stdarg.h> -#include "v8.h" +#include "src/v8.h" -#include "prettyprinter.h" -#include "scopes.h" -#include "platform.h" +#include "src/ast-value-factory.h" +#include "src/base/platform/platform.h" +#include "src/prettyprinter.h" +#include "src/scopes.h" namespace v8 { namespace internal { @@ -133,10 +134,10 @@ void PrettyPrinter::VisitIfStatement(IfStatement* node) { void PrettyPrinter::VisitContinueStatement(ContinueStatement* node) { Print("continue"); - ZoneStringList* labels = node->target()->labels(); + ZoneList<const AstRawString*>* labels = node->target()->labels(); if (labels != NULL) { Print(" "); - ASSERT(labels->length() > 0); // guaranteed to have at least one entry + DCHECK(labels->length() > 0); // guaranteed to have at least one entry PrintLiteral(labels->at(0), false); // any label from the list is fine } Print(";"); @@ -145,10 +146,10 @@ void PrettyPrinter::VisitContinueStatement(ContinueStatement* node) { void PrettyPrinter::VisitBreakStatement(BreakStatement* node) { Print("break"); - ZoneStringList* labels = node->target()->labels(); + ZoneList<const AstRawString*>* labels = node->target()->labels(); if (labels != NULL) { Print(" "); - ASSERT(labels->length() > 0); // guaranteed to have at least one entry + DCHECK(labels->length() > 0); // guaranteed to have at least one entry PrintLiteral(labels->at(0), false); // any label from the list is fine } Print(";"); @@ -478,7 +479,7 @@ void PrettyPrinter::PrintOut(Zone* zone, AstNode* node) { void PrettyPrinter::Init() { if (size_ == 0) { - ASSERT(output_ == NULL); + DCHECK(output_ == NULL); const int initial_size = 256; output_ = NewArray<char>(initial_size); size_ = initial_size; @@ -492,9 +493,9 @@ void PrettyPrinter::Print(const char* format, ...) { for (;;) { va_list arguments; va_start(arguments, format); - int n = OS::VSNPrintF(Vector<char>(output_, size_) + pos_, - format, - arguments); + int n = VSNPrintF(Vector<char>(output_, size_) + pos_, + format, + arguments); va_end(arguments); if (n >= 0) { @@ -506,7 +507,7 @@ void PrettyPrinter::Print(const char* format, ...) 
{ const int slack = 32; int new_size = size_ + (size_ >> 1) + slack; char* new_output = NewArray<char>(new_size); - OS::MemCopy(new_output, output_, pos_); + MemCopy(new_output, output_, pos_); DeleteArray(output_); output_ = new_output; size_ = new_size; @@ -524,7 +525,7 @@ void PrettyPrinter::PrintStatements(ZoneList<Statement*>* statements) { } -void PrettyPrinter::PrintLabels(ZoneStringList* labels) { +void PrettyPrinter::PrintLabels(ZoneList<const AstRawString*>* labels) { if (labels != NULL) { for (int i = 0; i < labels->length(); i++) { PrintLiteral(labels->at(i), false); @@ -582,6 +583,11 @@ void PrettyPrinter::PrintLiteral(Handle<Object> value, bool quote) { } +void PrettyPrinter::PrintLiteral(const AstRawString* value, bool quote) { + PrintLiteral(value->string(), quote); +} + + void PrettyPrinter::PrintParameters(Scope* scope) { Print("("); for (int i = 0; i < scope->num_parameters(); i++) { @@ -639,7 +645,7 @@ AstPrinter::AstPrinter(Zone* zone) : PrettyPrinter(zone), indent_(0) { AstPrinter::~AstPrinter() { - ASSERT(indent_ == 0); + DCHECK(indent_ == 0); } @@ -668,15 +674,15 @@ void AstPrinter::PrintLiteralWithModeIndented(const char* info, PrintLiteralIndented(info, value, true); } else { EmbeddedVector<char, 256> buf; - int pos = OS::SNPrintF(buf, "%s (mode = %s", info, - Variable::Mode2String(var->mode())); - OS::SNPrintF(buf + pos, ")"); + int pos = SNPrintF(buf, "%s (mode = %s", info, + Variable::Mode2String(var->mode())); + SNPrintF(buf + pos, ")"); PrintLiteralIndented(buf.start(), value, true); } } -void AstPrinter::PrintLabelsIndented(ZoneStringList* labels) { +void AstPrinter::PrintLabelsIndented(ZoneList<const AstRawString*>* labels) { if (labels == NULL || labels->length() == 0) return; PrintIndented("LABELS "); PrintLabels(labels); @@ -1033,21 +1039,21 @@ void AstPrinter::VisitArrayLiteral(ArrayLiteral* node) { void AstPrinter::VisitVariableProxy(VariableProxy* node) { Variable* var = node->var(); EmbeddedVector<char, 128> buf; - int pos = OS::SNPrintF(buf, "VAR PROXY"); + int pos = SNPrintF(buf, "VAR PROXY"); switch (var->location()) { case Variable::UNALLOCATED: break; case Variable::PARAMETER: - OS::SNPrintF(buf + pos, " parameter[%d]", var->index()); + SNPrintF(buf + pos, " parameter[%d]", var->index()); break; case Variable::LOCAL: - OS::SNPrintF(buf + pos, " local[%d]", var->index()); + SNPrintF(buf + pos, " local[%d]", var->index()); break; case Variable::CONTEXT: - OS::SNPrintF(buf + pos, " context[%d]", var->index()); + SNPrintF(buf + pos, " context[%d]", var->index()); break; case Variable::LOOKUP: - OS::SNPrintF(buf + pos, " lookup"); + SNPrintF(buf + pos, " lookup"); break; } PrintLiteralWithModeIndented(buf.start(), var, node->name()); @@ -1114,8 +1120,8 @@ void AstPrinter::VisitUnaryOperation(UnaryOperation* node) { void AstPrinter::VisitCountOperation(CountOperation* node) { EmbeddedVector<char, 128> buf; - OS::SNPrintF(buf, "%s %s", (node->is_prefix() ? "PRE" : "POST"), - Token::Name(node->op())); + SNPrintF(buf, "%s %s", (node->is_prefix() ? 
"PRE" : "POST"), + Token::Name(node->op())); IndentedScope indent(this, buf.start()); Visit(node->expression()); } diff --git a/deps/v8/src/prettyprinter.h b/deps/v8/src/prettyprinter.h index e0467ca82ae..de40aae03de 100644 --- a/deps/v8/src/prettyprinter.h +++ b/deps/v8/src/prettyprinter.h @@ -5,8 +5,8 @@ #ifndef V8_PRETTYPRINTER_H_ #define V8_PRETTYPRINTER_H_ -#include "allocation.h" -#include "ast.h" +#include "src/allocation.h" +#include "src/ast.h" namespace v8 { namespace internal { @@ -44,9 +44,10 @@ class PrettyPrinter: public AstVisitor { const char* Output() const { return output_; } virtual void PrintStatements(ZoneList<Statement*>* statements); - void PrintLabels(ZoneStringList* labels); + void PrintLabels(ZoneList<const AstRawString*>* labels); virtual void PrintArguments(ZoneList<Expression*>* arguments); void PrintLiteral(Handle<Object> value, bool quote); + void PrintLiteral(const AstRawString* value, bool quote); void PrintParameters(Scope* scope); void PrintDeclarations(ZoneList<Declaration*>* declarations); void PrintFunctionLiteral(FunctionLiteral* function); @@ -83,7 +84,7 @@ class AstPrinter: public PrettyPrinter { void PrintLiteralWithModeIndented(const char* info, Variable* var, Handle<Object> value); - void PrintLabelsIndented(ZoneStringList* labels); + void PrintLabelsIndented(ZoneList<const AstRawString*>* labels); void inc_indent() { indent_++; } void dec_indent() { indent_--; } diff --git a/deps/v8/src/profile-generator-inl.h b/deps/v8/src/profile-generator-inl.h index 9e8408d916a..58c124fe62b 100644 --- a/deps/v8/src/profile-generator-inl.h +++ b/deps/v8/src/profile-generator-inl.h @@ -5,7 +5,7 @@ #ifndef V8_PROFILE_GENERATOR_INL_H_ #define V8_PROFILE_GENERATOR_INL_H_ -#include "profile-generator.h" +#include "src/profile-generator.h" namespace v8 { namespace internal { diff --git a/deps/v8/src/profile-generator.cc b/deps/v8/src/profile-generator.cc index 957f5dbae06..6017f12dc67 100644 --- a/deps/v8/src/profile-generator.cc +++ b/deps/v8/src/profile-generator.cc @@ -2,17 +2,17 @@ // Use of this source code is governed by a BSD-style license that can be // found in the LICENSE file. 
-#include "v8.h" +#include "src/v8.h" -#include "profile-generator-inl.h" +#include "src/profile-generator-inl.h" -#include "compiler.h" -#include "debug.h" -#include "sampler.h" -#include "global-handles.h" -#include "scopeinfo.h" -#include "unicode.h" -#include "zone-inl.h" +#include "src/compiler.h" +#include "src/debug.h" +#include "src/global-handles.h" +#include "src/sampler.h" +#include "src/scopeinfo.h" +#include "src/unicode.h" +#include "src/zone-inl.h" namespace v8 { namespace internal { @@ -43,7 +43,7 @@ const char* StringsStorage::GetCopy(const char* src) { HashMap::Entry* entry = GetEntry(src, len); if (entry->value == NULL) { Vector<char> dst = Vector<char>::New(len + 1); - OS::StrNCpy(dst, src, len); + StrNCpy(dst, src, len); dst[len] = '\0'; entry->key = dst.start(); entry->value = entry->key; @@ -76,7 +76,7 @@ const char* StringsStorage::AddOrDisposeString(char* str, int len) { const char* StringsStorage::GetVFormatted(const char* format, va_list args) { Vector<char> str = Vector<char>::New(1024); - int len = OS::VSNPrintF(str, format, args); + int len = VSNPrintF(str, format, args); if (len == -1) { DeleteArray(str.start()); return GetCopy(format); @@ -107,17 +107,12 @@ const char* StringsStorage::GetName(int index) { const char* StringsStorage::GetFunctionName(Name* name) { - return BeautifyFunctionName(GetName(name)); + return GetName(name); } const char* StringsStorage::GetFunctionName(const char* name) { - return BeautifyFunctionName(GetCopy(name)); -} - - -const char* StringsStorage::BeautifyFunctionName(const char* name) { - return (*name == 0) ? ProfileGenerator::kAnonymousFunctionName : name; + return GetCopy(name); } @@ -208,17 +203,12 @@ ProfileNode* ProfileNode::FindOrAddChild(CodeEntry* entry) { void ProfileNode::Print(int indent) { - OS::Print("%5u %*c %s%s %d #%d %s", - self_ticks_, - indent, ' ', - entry_->name_prefix(), - entry_->name(), - entry_->script_id(), - id(), - entry_->bailout_reason()); + base::OS::Print("%5u %*s %s%s %d #%d %s", self_ticks_, indent, "", + entry_->name_prefix(), entry_->name(), entry_->script_id(), + id(), entry_->bailout_reason()); if (entry_->resource_name()[0] != '\0') - OS::Print(" %s:%d", entry_->resource_name(), entry_->line_number()); - OS::Print("\n"); + base::OS::Print(" %s:%d", entry_->resource_name(), entry_->line_number()); + base::OS::Print("\n"); for (HashMap::Entry* p = children_.Start(); p != NULL; p = children_.Next(p)) { @@ -332,11 +322,12 @@ void ProfileTree::TraverseDepthFirst(Callback* callback) { CpuProfile::CpuProfile(const char* title, bool record_samples) : title_(title), record_samples_(record_samples), - start_time_(TimeTicks::HighResolutionNow()) { + start_time_(base::TimeTicks::HighResolutionNow()) { } -void CpuProfile::AddPath(TimeTicks timestamp, const Vector<CodeEntry*>& path) { +void CpuProfile::AddPath(base::TimeTicks timestamp, + const Vector<CodeEntry*>& path) { ProfileNode* top_frame_node = top_down_.AddPathFromEnd(path); if (record_samples_) { timestamps_.Add(timestamp); @@ -346,12 +337,12 @@ void CpuProfile::AddPath(TimeTicks timestamp, const Vector<CodeEntry*>& path) { void CpuProfile::CalculateTotalTicksAndSamplingRate() { - end_time_ = TimeTicks::HighResolutionNow(); + end_time_ = base::TimeTicks::HighResolutionNow(); } void CpuProfile::Print() { - OS::Print("[Top down]:\n"); + base::OS::Print("[Top down]:\n"); top_down_.Print(); } @@ -403,7 +394,7 @@ int CodeMap::GetSharedId(Address addr) { // For shared function entries, 'size' field is used to store their IDs. 
if (tree_.Find(addr, &locator)) { const CodeEntryInfo& entry = locator.value(); - ASSERT(entry.entry == kSharedFunctionCodeEntry); + DCHECK(entry.entry == kSharedFunctionCodeEntry); return entry.size; } else { tree_.Insert(addr, &locator); @@ -428,9 +419,9 @@ void CodeMap::CodeTreePrinter::Call( const Address& key, const CodeMap::CodeEntryInfo& value) { // For shared function entries, 'size' field is used to store their IDs. if (value.entry == kSharedFunctionCodeEntry) { - OS::Print("%p SharedFunctionInfo %d\n", key, value.size); + base::OS::Print("%p SharedFunctionInfo %d\n", key, value.size); } else { - OS::Print("%p %5d %s\n", key, value.size, value.entry->name()); + base::OS::Print("%p %5d %s\n", key, value.size, value.entry->name()); } } @@ -473,9 +464,10 @@ bool CpuProfilesCollection::StartProfiling(const char* title, } for (int i = 0; i < current_profiles_.length(); ++i) { if (strcmp(current_profiles_[i]->title(), title) == 0) { - // Ignore attempts to start profile with the same title. + // Ignore attempts to start profile with the same title... current_profiles_semaphore_.Signal(); - return false; + // ... though return true to force it collect a sample. + return true; } } current_profiles_.Add(new CpuProfile(title, record_samples)); @@ -525,7 +517,7 @@ void CpuProfilesCollection::RemoveProfile(CpuProfile* profile) { void CpuProfilesCollection::AddPathToCurrentProfiles( - TimeTicks timestamp, const Vector<CodeEntry*>& path) { + base::TimeTicks timestamp, const Vector<CodeEntry*>& path) { // As starting / stopping profiles is rare relatively to this // method, we don't bother minimizing the duration of lock holding, // e.g. copying contents of the list to a local vector. @@ -555,8 +547,6 @@ CodeEntry* CpuProfilesCollection::NewCodeEntry( } -const char* const ProfileGenerator::kAnonymousFunctionName = - "(anonymous function)"; const char* const ProfileGenerator::kProgramEntryName = "(program)"; const char* const ProfileGenerator::kIdleEntryName = diff --git a/deps/v8/src/profile-generator.h b/deps/v8/src/profile-generator.h index 1d8ad87cf40..5ebb92bba3b 100644 --- a/deps/v8/src/profile-generator.h +++ b/deps/v8/src/profile-generator.h @@ -5,9 +5,9 @@ #ifndef V8_PROFILE_GENERATOR_H_ #define V8_PROFILE_GENERATOR_H_ -#include "allocation.h" -#include "hashmap.h" -#include "../include/v8-profiler.h" +#include "include/v8-profiler.h" +#include "src/allocation.h" +#include "src/hashmap.h" namespace v8 { namespace internal { @@ -34,7 +34,6 @@ class StringsStorage { static const int kMaxNameSize = 1024; static bool StringsMatch(void* key1, void* key2); - const char* BeautifyFunctionName(const char* name); const char* AddOrDisposeString(char* str, int len); HashMap::Entry* GetEntry(const char* str, int len); @@ -176,7 +175,7 @@ class CpuProfile { CpuProfile(const char* title, bool record_samples); // Add pc -> ... -> main() call path to the profile. 
- void AddPath(TimeTicks timestamp, const Vector<CodeEntry*>& path); + void AddPath(base::TimeTicks timestamp, const Vector<CodeEntry*>& path); void CalculateTotalTicksAndSamplingRate(); const char* title() const { return title_; } @@ -184,10 +183,12 @@ class CpuProfile { int samples_count() const { return samples_.length(); } ProfileNode* sample(int index) const { return samples_.at(index); } - TimeTicks sample_timestamp(int index) const { return timestamps_.at(index); } + base::TimeTicks sample_timestamp(int index) const { + return timestamps_.at(index); + } - TimeTicks start_time() const { return start_time_; } - TimeTicks end_time() const { return end_time_; } + base::TimeTicks start_time() const { return start_time_; } + base::TimeTicks end_time() const { return end_time_; } void UpdateTicksScale(); @@ -196,10 +197,10 @@ class CpuProfile { private: const char* title_; bool record_samples_; - TimeTicks start_time_; - TimeTicks end_time_; + base::TimeTicks start_time_; + base::TimeTicks end_time_; List<ProfileNode*> samples_; - List<TimeTicks> timestamps_; + List<base::TimeTicks> timestamps_; ProfileTree top_down_; DISALLOW_COPY_AND_ASSIGN(CpuProfile); @@ -285,7 +286,7 @@ class CpuProfilesCollection { // Called from profile generator thread. void AddPathToCurrentProfiles( - TimeTicks timestamp, const Vector<CodeEntry*>& path); + base::TimeTicks timestamp, const Vector<CodeEntry*>& path); // Limits the number of profiles that can be simultaneously collected. static const int kMaxSimultaneousProfiles = 100; @@ -297,7 +298,7 @@ class CpuProfilesCollection { // Accessed by VM thread and profile generator thread. List<CpuProfile*> current_profiles_; - Semaphore current_profiles_semaphore_; + base::Semaphore current_profiles_semaphore_; DISALLOW_COPY_AND_ASSIGN(CpuProfilesCollection); }; @@ -311,7 +312,6 @@ class ProfileGenerator { CodeMap* code_map() { return &code_map_; } - static const char* const kAnonymousFunctionName; static const char* const kProgramEntryName; static const char* const kIdleEntryName; static const char* const kGarbageCollectorEntryName; diff --git a/deps/v8/src/promise.js b/deps/v8/src/promise.js index fa650eaf0e9..2d8314a42fa 100644 --- a/deps/v8/src/promise.js +++ b/deps/v8/src/promise.js @@ -9,28 +9,20 @@ // var $Object = global.Object // var $WeakMap = global.WeakMap +// For bootstrapper. -var $Promise = function Promise(resolver) { - if (resolver === promiseRaw) return; - if (!%_IsConstructCall()) throw MakeTypeError('not_a_promise', [this]); - if (!IS_SPEC_FUNCTION(resolver)) - throw MakeTypeError('resolver_not_a_function', [resolver]); - var promise = PromiseInit(this); - try { - %DebugPromiseHandlePrologue(function() { return promise }); - resolver(function(x) { PromiseResolve(promise, x) }, - function(r) { PromiseReject(promise, r) }); - } catch (e) { - PromiseReject(promise, e); - } finally { - %DebugPromiseHandleEpilogue(); - } -} - - -//------------------------------------------------------------------- +var IsPromise; +var PromiseCreate; +var PromiseResolve; +var PromiseReject; +var PromiseChain; +var PromiseCatch; +var PromiseThen; +var PromiseHasRejectHandler; -// Core functionality. +// mirror-debugger.js currently uses builtins.promiseStatus. It would be nice +// if we could move these property names into the closure below. +// TODO(jkummerow/rossberg/yangguo): Find a better solution. 
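In the promise.js rewrite below, settled handlers never run synchronously: PromiseEnqueue wraps the pending (handler, deferred) pairs in a closure, hands it to %EnqueueMicrotask, and the new status argument only feeds the debugger's "Promise.resolve"/"Promise.reject" task names. A self-contained C++ analogue of the queue semantics being relied on (the names here are illustrative, not V8's microtask implementation):

#include <cstdio>
#include <functional>
#include <queue>

// Toy microtask queue: callbacks run strictly after the current "script"
// finishes, in FIFO order, and may enqueue further callbacks that run in
// the same drain.
static std::queue<std::function<void()>> microtasks;

void EnqueueMicrotask(std::function<void()> task) {
  microtasks.push(std::move(task));
}

void RunMicrotasks() {
  while (!microtasks.empty()) {
    std::function<void()> task = std::move(microtasks.front());
    microtasks.pop();
    task();  // may call EnqueueMicrotask() again
  }
}

int main() {
  EnqueueMicrotask([] {
    std::puts("first handler");
    EnqueueMicrotask([] { std::puts("chained handler"); });
  });
  std::puts("synchronous code runs first");
  RunMicrotasks();
}

Draining in FIFO order before control returns to the event loop is what keeps then() callbacks observably asynchronous even on already-settled promises. Note also that further down, PromiseAll captures the loop index in an immediately-invoked closure (i_captured) instead of .bind(UNDEFINED, i) -- the standard workaround for var-scoped loop variables until for-let is available.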
// Status values: 0 = pending, +1 = resolved, -1 = rejected var promiseStatus = GLOBAL_PRIVATE("Promise#status"); @@ -38,262 +30,313 @@ var promiseValue = GLOBAL_PRIVATE("Promise#value"); var promiseOnResolve = GLOBAL_PRIVATE("Promise#onResolve"); var promiseOnReject = GLOBAL_PRIVATE("Promise#onReject"); var promiseRaw = GLOBAL_PRIVATE("Promise#raw"); +var promiseDebug = GLOBAL_PRIVATE("Promise#debug"); +var lastMicrotaskId = 0; + +(function() { + + var $Promise = function Promise(resolver) { + if (resolver === promiseRaw) return; + if (!%_IsConstructCall()) throw MakeTypeError('not_a_promise', [this]); + if (!IS_SPEC_FUNCTION(resolver)) + throw MakeTypeError('resolver_not_a_function', [resolver]); + var promise = PromiseInit(this); + try { + %DebugPushPromise(promise); + resolver(function(x) { PromiseResolve(promise, x) }, + function(r) { PromiseReject(promise, r) }); + } catch (e) { + PromiseReject(promise, e); + } finally { + %DebugPopPromise(); + } + } + + // Core functionality. + + function PromiseSet(promise, status, value, onResolve, onReject) { + SET_PRIVATE(promise, promiseStatus, status); + SET_PRIVATE(promise, promiseValue, value); + SET_PRIVATE(promise, promiseOnResolve, onResolve); + SET_PRIVATE(promise, promiseOnReject, onReject); + if (DEBUG_IS_ACTIVE) { + %DebugPromiseEvent({ promise: promise, status: status, value: value }); + } + return promise; + } -function IsPromise(x) { - return IS_SPEC_OBJECT(x) && %HasLocalProperty(x, promiseStatus); -} - -function PromiseSet(promise, status, value, onResolve, onReject) { - SET_PRIVATE(promise, promiseStatus, status); - SET_PRIVATE(promise, promiseValue, value); - SET_PRIVATE(promise, promiseOnResolve, onResolve); - SET_PRIVATE(promise, promiseOnReject, onReject); - return promise; -} - -function PromiseInit(promise) { - return PromiseSet(promise, 0, UNDEFINED, new InternalArray, new InternalArray) -} - -function PromiseDone(promise, status, value, promiseQueue) { - if (GET_PRIVATE(promise, promiseStatus) === 0) { - PromiseEnqueue(value, GET_PRIVATE(promise, promiseQueue)); - PromiseSet(promise, status, value); + function PromiseInit(promise) { + return PromiseSet( + promise, 0, UNDEFINED, new InternalArray, new InternalArray) } -} -function PromiseResolve(promise, x) { - PromiseDone(promise, +1, x, promiseOnResolve) -} + function PromiseDone(promise, status, value, promiseQueue) { + if (GET_PRIVATE(promise, promiseStatus) === 0) { + PromiseEnqueue(value, GET_PRIVATE(promise, promiseQueue), status); + PromiseSet(promise, status, value); + } + } -function PromiseReject(promise, r) { - PromiseDone(promise, -1, r, promiseOnReject) -} + function PromiseCoerce(constructor, x) { + if (!IsPromise(x) && IS_SPEC_OBJECT(x)) { + var then; + try { + then = x.then; + } catch(r) { + return %_CallFunction(constructor, r, PromiseRejected); + } + if (IS_SPEC_FUNCTION(then)) { + var deferred = %_CallFunction(constructor, PromiseDeferred); + try { + %_CallFunction(x, deferred.resolve, deferred.reject, then); + } catch(r) { + deferred.reject(r); + } + return deferred.promise; + } + } + return x; + } + function PromiseHandle(value, handler, deferred) { + try { + %DebugPushPromise(deferred.promise); + var result = handler(value); + if (result === deferred.promise) + throw MakeTypeError('promise_cyclic', [result]); + else if (IsPromise(result)) + %_CallFunction(result, deferred.resolve, deferred.reject, PromiseChain); + else + deferred.resolve(result); + } catch (exception) { + try { deferred.reject(exception); } catch (e) { } + } finally { + 
%DebugPopPromise(); + } + } -// For API. + function PromiseEnqueue(value, tasks, status) { + var id, name, instrumenting = DEBUG_IS_ACTIVE; + %EnqueueMicrotask(function() { + if (instrumenting) { + %DebugAsyncTaskEvent({ type: "willHandle", id: id, name: name }); + } + for (var i = 0; i < tasks.length; i += 2) { + PromiseHandle(value, tasks[i], tasks[i + 1]) + } + if (instrumenting) { + %DebugAsyncTaskEvent({ type: "didHandle", id: id, name: name }); + } + }); + if (instrumenting) { + id = ++lastMicrotaskId; + name = status > 0 ? "Promise.resolve" : "Promise.reject"; + %DebugAsyncTaskEvent({ type: "enqueue", id: id, name: name }); + } + } -function PromiseNopResolver() {} + function PromiseIdResolveHandler(x) { return x } + function PromiseIdRejectHandler(r) { throw r } -function PromiseCreate() { - return new $Promise(PromiseNopResolver) -} + function PromiseNopResolver() {} + // ------------------------------------------------------------------- + // Define exported functions. -// Convenience. + // For bootstrapper. -function PromiseDeferred() { - if (this === $Promise) { - // Optimized case, avoid extra closure. - var promise = PromiseInit(new $Promise(promiseRaw)); - return { - promise: promise, - resolve: function(x) { PromiseResolve(promise, x) }, - reject: function(r) { PromiseReject(promise, r) } - }; - } else { - var result = {}; - result.promise = new this(function(resolve, reject) { - result.resolve = resolve; - result.reject = reject; - }) - return result; + IsPromise = function IsPromise(x) { + return IS_SPEC_OBJECT(x) && HAS_PRIVATE(x, promiseStatus); } -} - -function PromiseResolved(x) { - if (this === $Promise) { - // Optimized case, avoid extra closure. - return PromiseSet(new $Promise(promiseRaw), +1, x); - } else { - return new this(function(resolve, reject) { resolve(x) }); + + PromiseCreate = function PromiseCreate() { + return new $Promise(PromiseNopResolver) } -} - -function PromiseRejected(r) { - if (this === $Promise) { - // Optimized case, avoid extra closure. - return PromiseSet(new $Promise(promiseRaw), -1, r); - } else { - return new this(function(resolve, reject) { reject(r) }); + + PromiseResolve = function PromiseResolve(promise, x) { + PromiseDone(promise, +1, x, promiseOnResolve) } -} - - -// Simple chaining. - -function PromiseIdResolveHandler(x) { return x } -function PromiseIdRejectHandler(r) { throw r } - -function PromiseChain(onResolve, onReject) { // a.k.a. flatMap - onResolve = IS_UNDEFINED(onResolve) ? PromiseIdResolveHandler : onResolve; - onReject = IS_UNDEFINED(onReject) ? PromiseIdRejectHandler : onReject; - var deferred = %_CallFunction(this.constructor, PromiseDeferred); - switch (GET_PRIVATE(this, promiseStatus)) { - case UNDEFINED: - throw MakeTypeError('not_a_promise', [this]); - case 0: // Pending - GET_PRIVATE(this, promiseOnResolve).push(onResolve, deferred); - GET_PRIVATE(this, promiseOnReject).push(onReject, deferred); - break; - case +1: // Resolved - PromiseEnqueue(GET_PRIVATE(this, promiseValue), [onResolve, deferred]); - break; - case -1: // Rejected - PromiseEnqueue(GET_PRIVATE(this, promiseValue), [onReject, deferred]); - break; + + PromiseReject = function PromiseReject(promise, r) { + // Check promise status to confirm that this reject has an effect. + // Check promiseDebug property to avoid duplicate event. 
+ if (DEBUG_IS_ACTIVE && + GET_PRIVATE(promise, promiseStatus) == 0 && + !HAS_PRIVATE(promise, promiseDebug)) { + %DebugPromiseRejectEvent(promise, r); + } + PromiseDone(promise, -1, r, promiseOnReject) } - return deferred.promise; -} -function PromiseCatch(onReject) { - return this.then(UNDEFINED, onReject); -} + // Convenience. + + function PromiseDeferred() { + if (this === $Promise) { + // Optimized case, avoid extra closure. + var promise = PromiseInit(new $Promise(promiseRaw)); + return { + promise: promise, + resolve: function(x) { PromiseResolve(promise, x) }, + reject: function(r) { PromiseReject(promise, r) } + }; + } else { + var result = {}; + result.promise = new this(function(resolve, reject) { + result.resolve = resolve; + result.reject = reject; + }) + return result; + } + } -function PromiseEnqueue(value, tasks) { - EnqueueMicrotask(function() { - for (var i = 0; i < tasks.length; i += 2) { - PromiseHandle(value, tasks[i], tasks[i + 1]) + function PromiseResolved(x) { + if (this === $Promise) { + // Optimized case, avoid extra closure. + return PromiseSet(new $Promise(promiseRaw), +1, x); + } else { + return new this(function(resolve, reject) { resolve(x) }); } - }); -} - -function PromiseHandle(value, handler, deferred) { - try { - %DebugPromiseHandlePrologue( - function() { - var queue = GET_PRIVATE(deferred.promise, promiseOnReject); - return (queue && queue.length == 0) ? deferred.promise : UNDEFINED; - }); - var result = handler(value); - if (result === deferred.promise) - throw MakeTypeError('promise_cyclic', [result]); - else if (IsPromise(result)) - %_CallFunction(result, deferred.resolve, deferred.reject, PromiseChain); - else - deferred.resolve(result); - } catch (exception) { - try { - %DebugPromiseHandlePrologue(function() { return deferred.promise }); - deferred.reject(exception); - } catch (e) { } finally { - %DebugPromiseHandleEpilogue(); + } + + function PromiseRejected(r) { + if (this === $Promise) { + // Optimized case, avoid extra closure. + return PromiseSet(new $Promise(promiseRaw), -1, r); + } else { + return new this(function(resolve, reject) { reject(r) }); } - } finally { - %DebugPromiseHandleEpilogue(); } -} - - -// Multi-unwrapped chaining with thenable coercion. - -function PromiseThen(onResolve, onReject) { - onResolve = IS_SPEC_FUNCTION(onResolve) ? onResolve : PromiseIdResolveHandler; - onReject = IS_SPEC_FUNCTION(onReject) ? onReject : PromiseIdRejectHandler; - var that = this; - var constructor = this.constructor; - return %_CallFunction( - this, - function(x) { - x = PromiseCoerce(constructor, x); - return x === that ? onReject(MakeTypeError('promise_cyclic', [x])) : - IsPromise(x) ? x.then(onResolve, onReject) : onResolve(x); - }, - onReject, - PromiseChain - ); -} - -PromiseCoerce.table = new $WeakMap; - -function PromiseCoerce(constructor, x) { - if (!IsPromise(x) && IS_SPEC_OBJECT(x)) { - var then; - try { - then = x.then; - } catch(r) { - var promise = %_CallFunction(constructor, r, PromiseRejected); - PromiseCoerce.table.set(x, promise); - return promise; + + // Simple chaining. + + PromiseChain = function PromiseChain(onResolve, onReject) { // a.k.a. + // flatMap + onResolve = IS_UNDEFINED(onResolve) ? PromiseIdResolveHandler : onResolve; + onReject = IS_UNDEFINED(onReject) ? 
PromiseIdRejectHandler : onReject; + var deferred = %_CallFunction(this.constructor, PromiseDeferred); + switch (GET_PRIVATE(this, promiseStatus)) { + case UNDEFINED: + throw MakeTypeError('not_a_promise', [this]); + case 0: // Pending + GET_PRIVATE(this, promiseOnResolve).push(onResolve, deferred); + GET_PRIVATE(this, promiseOnReject).push(onReject, deferred); + break; + case +1: // Resolved + PromiseEnqueue(GET_PRIVATE(this, promiseValue), + [onResolve, deferred], + +1); + break; + case -1: // Rejected + PromiseEnqueue(GET_PRIVATE(this, promiseValue), + [onReject, deferred], + -1); + break; } - if (typeof then === 'function') { - if (PromiseCoerce.table.has(x)) { - return PromiseCoerce.table.get(x); - } else { - var deferred = %_CallFunction(constructor, PromiseDeferred); - PromiseCoerce.table.set(x, deferred.promise); - try { - %_CallFunction(x, deferred.resolve, deferred.reject, then); - } catch(r) { - deferred.reject(r); - } - return deferred.promise; - } + if (DEBUG_IS_ACTIVE) { + %DebugPromiseEvent({ promise: deferred.promise, parentPromise: this }); } + return deferred.promise; } - return x; -} + PromiseCatch = function PromiseCatch(onReject) { + return this.then(UNDEFINED, onReject); + } + + // Multi-unwrapped chaining with thenable coercion. + + PromiseThen = function PromiseThen(onResolve, onReject) { + onResolve = IS_SPEC_FUNCTION(onResolve) ? onResolve + : PromiseIdResolveHandler; + onReject = IS_SPEC_FUNCTION(onReject) ? onReject + : PromiseIdRejectHandler; + var that = this; + var constructor = this.constructor; + return %_CallFunction( + this, + function(x) { + x = PromiseCoerce(constructor, x); + return x === that ? onReject(MakeTypeError('promise_cyclic', [x])) : + IsPromise(x) ? x.then(onResolve, onReject) : onResolve(x); + }, + onReject, + PromiseChain + ); + } -// Combinators. + // Combinators. -function PromiseCast(x) { - // TODO(rossberg): cannot do better until we support @@create. - return IsPromise(x) ? x : new this(function(resolve) { resolve(x) }); -} + function PromiseCast(x) { + // TODO(rossberg): cannot do better until we support @@create. + return IsPromise(x) ? x : new this(function(resolve) { resolve(x) }); + } -function PromiseAll(values) { - var deferred = %_CallFunction(this, PromiseDeferred); - var resolutions = []; - if (!%_IsArray(values)) { - deferred.reject(MakeTypeError('invalid_argument')); + function PromiseAll(values) { + var deferred = %_CallFunction(this, PromiseDeferred); + var resolutions = []; + if (!%_IsArray(values)) { + deferred.reject(MakeTypeError('invalid_argument')); + return deferred.promise; + } + try { + var count = values.length; + if (count === 0) { + deferred.resolve(resolutions); + } else { + for (var i = 0; i < values.length; ++i) { + this.resolve(values[i]).then( + (function() { + // Nested scope to get closure over current i (and avoid .bind). + // TODO(rossberg): Use for-let instead once available. 
+ var i_captured = i; + return function(x) { + resolutions[i_captured] = x; + if (--count === 0) deferred.resolve(resolutions); + }; + })(), + function(r) { deferred.reject(r) } + ); + } + } + } catch (e) { + deferred.reject(e) + } return deferred.promise; } - try { - var count = values.length; - if (count === 0) { - deferred.resolve(resolutions); - } else { + + function PromiseOne(values) { + var deferred = %_CallFunction(this, PromiseDeferred); + if (!%_IsArray(values)) { + deferred.reject(MakeTypeError('invalid_argument')); + return deferred.promise; + } + try { for (var i = 0; i < values.length; ++i) { this.resolve(values[i]).then( - function(i, x) { - resolutions[i] = x; - if (--count === 0) deferred.resolve(resolutions); - }.bind(UNDEFINED, i), // TODO(rossberg): use let loop once available + function(x) { deferred.resolve(x) }, function(r) { deferred.reject(r) } ); } + } catch (e) { + deferred.reject(e) } - } catch (e) { - deferred.reject(e) - } - return deferred.promise; -} - -function PromiseOne(values) { - var deferred = %_CallFunction(this, PromiseDeferred); - if (!%_IsArray(values)) { - deferred.reject(MakeTypeError('invalid_argument')); return deferred.promise; } - try { - for (var i = 0; i < values.length; ++i) { - this.resolve(values[i]).then( - function(x) { deferred.resolve(x) }, - function(r) { deferred.reject(r) } - ); - } - } catch (e) { - deferred.reject(e) - } - return deferred.promise; -} -//------------------------------------------------------------------- -function SetUpPromise() { + // Utility for debugger + + PromiseHasRejectHandler = function PromiseHasRejectHandler() { + // Mark promise as already having triggered a reject event. + SET_PRIVATE(this, promiseDebug, true); + var queue = GET_PRIVATE(this, promiseOnReject); + return !IS_UNDEFINED(queue) && queue.length > 0; + }; + + // ------------------------------------------------------------------- + // Install exported functions. + %CheckIsBootstrapping(); - %SetProperty(global, 'Promise', $Promise, DONT_ENUM); + %AddNamedProperty(global, 'Promise', $Promise, DONT_ENUM); InstallFunctions($Promise, DONT_ENUM, [ "defer", PromiseDeferred, "accept", PromiseResolved, @@ -307,15 +350,5 @@ function SetUpPromise() { "then", PromiseThen, "catch", PromiseCatch ]); -} - -SetUpPromise(); - -// Functions to expose promise details to the debugger. 
-function GetPromiseStatus(promise) { - return GET_PRIVATE(promise, promiseStatus); -} -function GetPromiseValue(promise) { - return GET_PRIVATE(promise, promiseValue); -} +})(); diff --git a/deps/v8/src/property-details-inl.h b/deps/v8/src/property-details-inl.h index 353f8f58750..eaa596f9daf 100644 --- a/deps/v8/src/property-details-inl.h +++ b/deps/v8/src/property-details-inl.h @@ -5,9 +5,10 @@ #ifndef V8_PROPERTY_DETAILS_INL_H_ #define V8_PROPERTY_DETAILS_INL_H_ -#include "conversions.h" -#include "objects.h" -#include "property-details.h" +#include "src/conversions.h" +#include "src/objects.h" +#include "src/property-details.h" +#include "src/types.h" namespace v8 { namespace internal { @@ -23,6 +24,16 @@ inline bool Representation::CanContainDouble(double value) { return false; } + +Representation Representation::FromType(Type* type) { + DisallowHeapAllocation no_allocation; + if (type->Is(Type::None())) return Representation::None(); + if (type->Is(Type::SignedSmall())) return Representation::Smi(); + if (type->Is(Type::Signed32())) return Representation::Integer32(); + if (type->Is(Type::Number())) return Representation::Double(); + return Representation::Tagged(); +} + } } // namespace v8::internal #endif // V8_PROPERTY_DETAILS_INL_H_ diff --git a/deps/v8/src/property-details.h b/deps/v8/src/property-details.h index cb33f9c9a7b..7eb2e4ea9da 100644 --- a/deps/v8/src/property-details.h +++ b/deps/v8/src/property-details.h @@ -5,9 +5,9 @@ #ifndef V8_PROPERTY_DETAILS_H_ #define V8_PROPERTY_DETAILS_H_ -#include "../include/v8.h" -#include "allocation.h" -#include "utils.h" +#include "include/v8.h" +#include "src/allocation.h" +#include "src/utils.h" // Ecma-262 3rd 8.6.1 enum PropertyAttributes { @@ -112,8 +112,8 @@ class Representation { if (kind_ == kExternal && other.kind_ == kExternal) return false; if (kind_ == kNone && other.kind_ == kExternal) return false; - ASSERT(kind_ != kExternal); - ASSERT(other.kind_ != kExternal); + DCHECK(kind_ != kExternal); + DCHECK(other.kind_ != kExternal); if (IsHeapObject()) return other.IsNone(); if (kind_ == kUInteger8 && other.kind_ == kInteger8) return false; if (kind_ == kUInteger16 && other.kind_ == kInteger16) return false; @@ -133,7 +133,7 @@ class Representation { } int size() const { - ASSERT(!IsNone()); + DCHECK(!IsNone()); if (IsInteger8() || IsUInteger8()) { return sizeof(uint8_t); } @@ -197,8 +197,8 @@ class PropertyDetails BASE_EMBEDDED { | AttributesField::encode(attributes) | DictionaryStorageField::encode(index); - ASSERT(type == this->type()); - ASSERT(attributes == this->attributes()); + DCHECK(type == this->type()); + DCHECK(attributes == this->attributes()); } PropertyDetails(PropertyAttributes attributes, @@ -247,7 +247,7 @@ class PropertyDetails BASE_EMBEDDED { } Representation representation() const { - ASSERT(type() != NORMAL); + DCHECK(type() != NORMAL); return DecodeRepresentation(RepresentationField::decode(value_)); } diff --git a/deps/v8/src/property.cc b/deps/v8/src/property.cc index e7d0c4e2f4b..f1378ecdf93 100644 --- a/deps/v8/src/property.cc +++ b/deps/v8/src/property.cc @@ -2,9 +2,10 @@ // Use of this source code is governed by a BSD-style license that can be // found in the LICENSE file. 
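The property.cc hunks below convert LookupResult and Descriptor printing from OBJECT_PRINT-guarded Print(FILE*) members into free operator<< overloads on V8's OStream, so diagnostics compose with any other stream output and need no #ifdef at the call site. The same pattern in standard C++, with a made-up struct standing in for LookupResult:

#include <iostream>

struct Result {  // hypothetical stand-in, not V8's LookupResult
  bool found;
  unsigned attributes;
};

// A free operator<< replaces a member Print(FILE*): callers chain the
// value into any ostream, and manipulators handle the hex formatting.
std::ostream& operator<<(std::ostream& os, const Result& r) {
  if (!r.found) return os << "Not Found\n";
  return os << "LookupResult:\n"
            << " -attributes = " << std::hex << r.attributes << std::dec
            << "\n";
}

int main() {
  std::cout << Result{true, 0x17};  // -attributes printed as 17 (hex)
}

Returning the stream from every branch keeps the overload chainable, which is why the rewritten switch below returns os << ... directly from each case instead of falling through to a shared return.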
-#include "property.h" +#include "src/property.h" -#include "handles-inl.h" +#include "src/handles-inl.h" +#include "src/ostreams.h" namespace v8 { namespace internal { @@ -19,62 +20,46 @@ void LookupResult::Iterate(ObjectVisitor* visitor) { } -#ifdef OBJECT_PRINT -void LookupResult::Print(FILE* out) { - if (!IsFound()) { - PrintF(out, "Not Found\n"); - return; - } +OStream& operator<<(OStream& os, const LookupResult& r) { + if (!r.IsFound()) return os << "Not Found\n"; - PrintF(out, "LookupResult:\n"); - PrintF(out, " -cacheable = %s\n", IsCacheable() ? "true" : "false"); - PrintF(out, " -attributes = %x\n", GetAttributes()); - if (IsTransition()) { - PrintF(out, " -transition target:\n"); - GetTransitionTarget()->Print(out); - PrintF(out, "\n"); + os << "LookupResult:\n"; + os << " -cacheable = " << (r.IsCacheable() ? "true" : "false") << "\n"; + os << " -attributes = " << hex << r.GetAttributes() << dec << "\n"; + if (r.IsTransition()) { + os << " -transition target:\n" << Brief(r.GetTransitionTarget()) << "\n"; } - switch (type()) { + switch (r.type()) { case NORMAL: - PrintF(out, " -type = normal\n"); - PrintF(out, " -entry = %d", GetDictionaryEntry()); - break; + return os << " -type = normal\n" + << " -entry = " << r.GetDictionaryEntry() << "\n"; case CONSTANT: - PrintF(out, " -type = constant\n"); - PrintF(out, " -value:\n"); - GetConstant()->Print(out); - PrintF(out, "\n"); - break; + return os << " -type = constant\n" + << " -value:\n" << Brief(r.GetConstant()) << "\n"; case FIELD: - PrintF(out, " -type = field\n"); - PrintF(out, " -index = %d\n", GetFieldIndex().field_index()); - PrintF(out, " -field type:\n"); - GetFieldType()->TypePrint(out); - break; + os << " -type = field\n" + << " -index = " << r.GetFieldIndex().property_index() << "\n" + << " -field type:"; + r.GetFieldType()->PrintTo(os); + return os << "\n"; case CALLBACKS: - PrintF(out, " -type = call backs\n"); - PrintF(out, " -callback object:\n"); - GetCallbackObject()->Print(out); - break; + return os << " -type = call backs\n" + << " -callback object:\n" << Brief(r.GetCallbackObject()); case HANDLER: - PrintF(out, " -type = lookup proxy\n"); - break; + return os << " -type = lookup proxy\n"; case INTERCEPTOR: - PrintF(out, " -type = lookup interceptor\n"); - break; + return os << " -type = lookup interceptor\n"; case NONEXISTENT: UNREACHABLE(); break; } + return os; } -void Descriptor::Print(FILE* out) { - PrintF(out, "Descriptor "); - GetKey()->ShortPrint(out); - PrintF(out, " @ "); - GetValue()->ShortPrint(out); +OStream& operator<<(OStream& os, const Descriptor& d) { + return os << "Descriptor " << Brief(*d.GetKey()) << " @ " + << Brief(*d.GetValue()); } -#endif } } // namespace v8::internal diff --git a/deps/v8/src/property.h b/deps/v8/src/property.h index c7a4e6a637a..272a0a5ab91 100644 --- a/deps/v8/src/property.h +++ b/deps/v8/src/property.h @@ -5,13 +5,17 @@ #ifndef V8_PROPERTY_H_ #define V8_PROPERTY_H_ -#include "isolate.h" -#include "factory.h" -#include "types.h" +#include "src/factory.h" +#include "src/field-index.h" +#include "src/field-index-inl.h" +#include "src/isolate.h" +#include "src/types.h" namespace v8 { namespace internal { +class OStream; + // Abstraction for elements in instance-descriptor arrays. 
// // Each descriptor has a key, property attributes, property type, @@ -26,13 +30,9 @@ class Descriptor BASE_EMBEDDED { } } - Handle<Name> GetKey() { return key_; } - Handle<Object> GetValue() { return value_; } - PropertyDetails GetDetails() { return details_; } - -#ifdef OBJECT_PRINT - void Print(FILE* out); -#endif + Handle<Name> GetKey() const { return key_; } + Handle<Object> GetValue() const { return value_; } + PropertyDetails GetDetails() const { return details_; } void SetSortedKeyIndex(int index) { details_ = details_.set_pointer(index); } @@ -70,6 +70,9 @@ class Descriptor BASE_EMBEDDED { }; +OStream& operator<<(OStream& os, const Descriptor& d); + + class FieldDescriptor V8_FINAL : public Descriptor { public: FieldDescriptor(Handle<Name> key, @@ -108,56 +111,6 @@ class CallbacksDescriptor V8_FINAL : public Descriptor { }; -// Holds a property index value distinguishing if it is a field index or an -// index inside the object header. -class PropertyIndex V8_FINAL { - public: - static PropertyIndex NewFieldIndex(int index) { - return PropertyIndex(index, false); - } - static PropertyIndex NewHeaderIndex(int index) { - return PropertyIndex(index, true); - } - - bool is_field_index() { return (index_ & kHeaderIndexBit) == 0; } - bool is_header_index() { return (index_ & kHeaderIndexBit) != 0; } - - int field_index() { - ASSERT(is_field_index()); - return value(); - } - int header_index() { - ASSERT(is_header_index()); - return value(); - } - - bool is_inobject(Handle<JSObject> holder) { - if (is_header_index()) return true; - return field_index() < holder->map()->inobject_properties(); - } - - int translate(Handle<JSObject> holder) { - if (is_header_index()) return header_index(); - int index = field_index() - holder->map()->inobject_properties(); - if (index >= 0) return index; - return index + holder->map()->instance_size() / kPointerSize; - } - - private: - static const int kHeaderIndexBit = 1 << 31; - static const int kIndexMask = ~kHeaderIndexBit; - - int value() { return index_ & kIndexMask; } - - PropertyIndex(int index, bool is_header_based) - : index_(index | (is_header_based ? 
kHeaderIndexBit : 0)) { - ASSERT(index <= kIndexMask); - } - - int index_; -}; - - class LookupResult V8_FINAL BASE_EMBEDDED { public: explicit LookupResult(Isolate* isolate) @@ -172,7 +125,7 @@ class LookupResult V8_FINAL BASE_EMBEDDED { } ~LookupResult() { - ASSERT(isolate()->top_lookup_result() == this); + DCHECK(isolate()->top_lookup_result() == this); isolate()->set_top_lookup_result(next_); } @@ -194,7 +147,7 @@ class LookupResult V8_FINAL BASE_EMBEDDED { return value->FitsRepresentation(representation()) && GetFieldType()->NowContains(value); case CONSTANT: - ASSERT(GetConstant() != *value || + DCHECK(GetConstant() != *value || value->FitsRepresentation(representation())); return GetConstant() == *value; case CALLBACKS: @@ -247,29 +200,29 @@ class LookupResult V8_FINAL BASE_EMBEDDED { } JSObject* holder() const { - ASSERT(IsFound()); + DCHECK(IsFound()); return JSObject::cast(holder_); } JSProxy* proxy() const { - ASSERT(IsHandler()); + DCHECK(IsHandler()); return JSProxy::cast(holder_); } PropertyType type() const { - ASSERT(IsFound()); + DCHECK(IsFound()); return details_.type(); } Representation representation() const { - ASSERT(IsFound()); - ASSERT(details_.type() != NONEXISTENT); + DCHECK(IsFound()); + DCHECK(details_.type() != NONEXISTENT); return details_.representation(); } PropertyAttributes GetAttributes() const { - ASSERT(IsFound()); - ASSERT(details_.type() != NONEXISTENT); + DCHECK(IsFound()); + DCHECK(details_.type() != NONEXISTENT); return details_.attributes(); } @@ -278,34 +231,34 @@ class LookupResult V8_FINAL BASE_EMBEDDED { } bool IsFastPropertyType() const { - ASSERT(IsFound()); + DCHECK(IsFound()); return IsTransition() || type() != NORMAL; } // Property callbacks does not include transitions to callbacks. bool IsPropertyCallbacks() const { - ASSERT(!(details_.type() == CALLBACKS && !IsFound())); + DCHECK(!(details_.type() == CALLBACKS && !IsFound())); return !IsTransition() && details_.type() == CALLBACKS; } bool IsReadOnly() const { - ASSERT(IsFound()); - ASSERT(details_.type() != NONEXISTENT); + DCHECK(IsFound()); + DCHECK(details_.type() != NONEXISTENT); return details_.IsReadOnly(); } bool IsField() const { - ASSERT(!(details_.type() == FIELD && !IsFound())); + DCHECK(!(details_.type() == FIELD && !IsFound())); return IsDescriptorOrDictionary() && type() == FIELD; } bool IsNormal() const { - ASSERT(!(details_.type() == NORMAL && !IsFound())); + DCHECK(!(details_.type() == NORMAL && !IsFound())); return IsDescriptorOrDictionary() && type() == NORMAL; } bool IsConstant() const { - ASSERT(!(details_.type() == CONSTANT && !IsFound())); + DCHECK(!(details_.type() == CONSTANT && !IsFound())); return IsDescriptorOrDictionary() && type() == CONSTANT; } @@ -345,7 +298,7 @@ class LookupResult V8_FINAL BASE_EMBEDDED { return true; case CALLBACKS: { Object* callback = GetCallbackObject(); - ASSERT(!callback->IsForeign()); + DCHECK(!callback->IsForeign()); return callback->IsAccessorInfo(); } case HANDLER: @@ -374,7 +327,7 @@ class LookupResult V8_FINAL BASE_EMBEDDED { case DICTIONARY_TYPE: switch (type()) { case FIELD: - return holder()->RawFastPropertyAt(GetFieldIndex().field_index()); + return holder()->RawFastPropertyAt(GetFieldIndex()); case NORMAL: { Object* value = holder()->property_dictionary()->ValueAt( GetDictionaryEntry()); @@ -399,7 +352,7 @@ class LookupResult V8_FINAL BASE_EMBEDDED { } Map* GetTransitionTarget() const { - ASSERT(IsTransition()); + DCHECK(IsTransition()); return transition_; } @@ -412,14 +365,14 @@ class LookupResult V8_FINAL 
BASE_EMBEDDED { } int GetDescriptorIndex() const { - ASSERT(lookup_type_ == DESCRIPTOR_TYPE); + DCHECK(lookup_type_ == DESCRIPTOR_TYPE); return number_; } - PropertyIndex GetFieldIndex() const { - ASSERT(lookup_type_ == DESCRIPTOR_TYPE || + FieldIndex GetFieldIndex() const { + DCHECK(lookup_type_ == DESCRIPTOR_TYPE || lookup_type_ == TRANSITION_TYPE); - return PropertyIndex::NewFieldIndex(GetFieldIndexFromMap(holder()->map())); + return FieldIndex::ForLookupResult(this); } int GetLocalFieldIndexFromMap(Map* map) const { @@ -427,17 +380,17 @@ class LookupResult V8_FINAL BASE_EMBEDDED { } int GetDictionaryEntry() const { - ASSERT(lookup_type_ == DICTIONARY_TYPE); + DCHECK(lookup_type_ == DICTIONARY_TYPE); return number_; } JSFunction* GetConstantFunction() const { - ASSERT(type() == CONSTANT); + DCHECK(type() == CONSTANT); return JSFunction::cast(GetValue()); } Object* GetConstantFromMap(Map* map) const { - ASSERT(type() == CONSTANT); + DCHECK(type() == CONSTANT); return GetValueFromMap(map); } @@ -446,20 +399,16 @@ class LookupResult V8_FINAL BASE_EMBEDDED { } Object* GetConstant() const { - ASSERT(type() == CONSTANT); + DCHECK(type() == CONSTANT); return GetValue(); } Object* GetCallbackObject() const { - ASSERT(!IsTransition()); - ASSERT(type() == CALLBACKS); + DCHECK(!IsTransition()); + DCHECK(type() == CALLBACKS); return GetValue(); } -#ifdef OBJECT_PRINT - void Print(FILE* out); -#endif - Object* GetValue() const { if (lookup_type_ == DESCRIPTOR_TYPE) { return GetValueFromMap(holder()->map()); @@ -467,37 +416,37 @@ class LookupResult V8_FINAL BASE_EMBEDDED { return GetValueFromMap(transition_); } // In the dictionary case, the data is held in the value field. - ASSERT(lookup_type_ == DICTIONARY_TYPE); + DCHECK(lookup_type_ == DICTIONARY_TYPE); return holder()->GetNormalizedProperty(this); } Object* GetValueFromMap(Map* map) const { - ASSERT(lookup_type_ == DESCRIPTOR_TYPE || + DCHECK(lookup_type_ == DESCRIPTOR_TYPE || lookup_type_ == TRANSITION_TYPE); - ASSERT(number_ < map->NumberOfOwnDescriptors()); + DCHECK(number_ < map->NumberOfOwnDescriptors()); return map->instance_descriptors()->GetValue(number_); } int GetFieldIndexFromMap(Map* map) const { - ASSERT(lookup_type_ == DESCRIPTOR_TYPE || + DCHECK(lookup_type_ == DESCRIPTOR_TYPE || lookup_type_ == TRANSITION_TYPE); - ASSERT(number_ < map->NumberOfOwnDescriptors()); + DCHECK(number_ < map->NumberOfOwnDescriptors()); return map->instance_descriptors()->GetFieldIndex(number_); } HeapType* GetFieldType() const { - ASSERT(type() == FIELD); + DCHECK(type() == FIELD); if (lookup_type_ == DESCRIPTOR_TYPE) { return GetFieldTypeFromMap(holder()->map()); } - ASSERT(lookup_type_ == TRANSITION_TYPE); + DCHECK(lookup_type_ == TRANSITION_TYPE); return GetFieldTypeFromMap(transition_); } HeapType* GetFieldTypeFromMap(Map* map) const { - ASSERT(lookup_type_ == DESCRIPTOR_TYPE || + DCHECK(lookup_type_ == DESCRIPTOR_TYPE || lookup_type_ == TRANSITION_TYPE); - ASSERT(number_ < map->NumberOfOwnDescriptors()); + DCHECK(number_ < map->NumberOfOwnDescriptors()); return map->instance_descriptors()->GetFieldType(number_); } @@ -506,12 +455,18 @@ class LookupResult V8_FINAL BASE_EMBEDDED { } Map* GetFieldOwnerFromMap(Map* map) const { - ASSERT(lookup_type_ == DESCRIPTOR_TYPE || + DCHECK(lookup_type_ == DESCRIPTOR_TYPE || lookup_type_ == TRANSITION_TYPE); - ASSERT(number_ < map->NumberOfOwnDescriptors()); + DCHECK(number_ < map->NumberOfOwnDescriptors()); return map->FindFieldOwner(number_); } + bool ReceiverIsHolder(Handle<Object> receiver) { + if (*receiver 
== holder()) return true; + if (lookup_type_ == TRANSITION_TYPE) return true; + return false; + } + void Iterate(ObjectVisitor* visitor); private: @@ -535,6 +490,8 @@ class LookupResult V8_FINAL BASE_EMBEDDED { PropertyDetails details_; }; + +OStream& operator<<(OStream& os, const LookupResult& r); } } // namespace v8::internal #endif // V8_PROPERTY_H_ diff --git a/deps/v8/src/prototype.h b/deps/v8/src/prototype.h new file mode 100644 index 00000000000..4df1114c770 --- /dev/null +++ b/deps/v8/src/prototype.h @@ -0,0 +1,135 @@ +// Copyright 2014 the V8 project authors. All rights reserved. +// Use of this source code is governed by a BSD-style license that can be +// found in the LICENSE file. + +#ifndef V8_PROTOTYPE_H_ +#define V8_PROTOTYPE_H_ + +#include "src/isolate.h" +#include "src/objects.h" + +namespace v8 { +namespace internal { + +/** + * A class to uniformly access the prototype of any Object and walk its + * prototype chain. + * + * The PrototypeIterator can either start at the prototype (default), or + * include the receiver itself. If a PrototypeIterator is constructed for a + * Map, it will always start at the prototype. + * + * The PrototypeIterator can either run to the null_value(), the first + * non-hidden prototype, or a given object. + */ +class PrototypeIterator { + public: + enum WhereToStart { START_AT_RECEIVER, START_AT_PROTOTYPE }; + + enum WhereToEnd { END_AT_NULL, END_AT_NON_HIDDEN }; + + PrototypeIterator(Isolate* isolate, Handle<Object> receiver, + WhereToStart where_to_start = START_AT_PROTOTYPE) + : did_jump_to_prototype_chain_(false), + object_(NULL), + handle_(receiver), + isolate_(isolate) { + CHECK(!handle_.is_null()); + if (where_to_start == START_AT_PROTOTYPE) { + Advance(); + } + } + PrototypeIterator(Isolate* isolate, Object* receiver, + WhereToStart where_to_start = START_AT_PROTOTYPE) + : did_jump_to_prototype_chain_(false), + object_(receiver), + isolate_(isolate) { + if (where_to_start == START_AT_PROTOTYPE) { + Advance(); + } + } + explicit PrototypeIterator(Map* receiver_map) + : did_jump_to_prototype_chain_(true), + object_(receiver_map->prototype()), + isolate_(receiver_map->GetIsolate()) {} + explicit PrototypeIterator(Handle<Map> receiver_map) + : did_jump_to_prototype_chain_(true), + object_(NULL), + handle_(handle(receiver_map->prototype(), receiver_map->GetIsolate())), + isolate_(receiver_map->GetIsolate()) {} + ~PrototypeIterator() {} + + Object* GetCurrent() const { + DCHECK(handle_.is_null()); + return object_; + } + static Handle<Object> GetCurrent(const PrototypeIterator& iterator) { + DCHECK(!iterator.handle_.is_null()); + return iterator.handle_; + } + void Advance() { + if (handle_.is_null() && object_->IsJSProxy()) { + did_jump_to_prototype_chain_ = true; + object_ = isolate_->heap()->null_value(); + return; + } else if (!handle_.is_null() && handle_->IsJSProxy()) { + did_jump_to_prototype_chain_ = true; + handle_ = handle(isolate_->heap()->null_value(), isolate_); + return; + } + AdvanceIgnoringProxies(); + } + void AdvanceIgnoringProxies() { + if (!did_jump_to_prototype_chain_) { + did_jump_to_prototype_chain_ = true; + if (handle_.is_null()) { + object_ = object_->GetRootMap(isolate_)->prototype(); + } else { + handle_ = handle(handle_->GetRootMap(isolate_)->prototype(), isolate_); + } + } else { + if (handle_.is_null()) { + object_ = HeapObject::cast(object_)->map()->prototype(); + } else { + handle_ = + handle(HeapObject::cast(*handle_)->map()->prototype(), isolate_); + } + } + } + bool IsAtEnd(WhereToEnd where_to_end = 
END_AT_NULL) const { + if (handle_.is_null()) { + return object_->IsNull() || + (did_jump_to_prototype_chain_ && + where_to_end == END_AT_NON_HIDDEN && + !HeapObject::cast(object_)->map()->is_hidden_prototype()); + } else { + return handle_->IsNull() || + (did_jump_to_prototype_chain_ && + where_to_end == END_AT_NON_HIDDEN && + !Handle<HeapObject>::cast(handle_)->map()->is_hidden_prototype()); + } + } + bool IsAtEnd(Object* final_object) { + DCHECK(handle_.is_null()); + return object_->IsNull() || object_ == final_object; + } + bool IsAtEnd(Handle<Object> final_object) { + DCHECK(!handle_.is_null()); + return handle_->IsNull() || *handle_ == *final_object; + } + + private: + bool did_jump_to_prototype_chain_; + Object* object_; + Handle<Object> handle_; + Isolate* isolate_; + + DISALLOW_COPY_AND_ASSIGN(PrototypeIterator); +}; + + +} // namespace internal + +} // namespace v8 + +#endif // V8_PROTOTYPE_H_ diff --git a/deps/v8/src/proxy.js b/deps/v8/src/proxy.js index 99f9dab9f3a..0776eeae8f9 100644 --- a/deps/v8/src/proxy.js +++ b/deps/v8/src/proxy.js @@ -49,8 +49,8 @@ function ProxyCreateFunction(handler, callTrap, constructTrap) { function SetUpProxy() { %CheckIsBootstrapping() - var global_receiver = %GlobalReceiver(global); - global_receiver.Proxy = $Proxy; + var global_proxy = %GlobalProxy(global); + global_proxy.Proxy = $Proxy; // Set up non-enumerable properties of the Proxy object. InstallFunctions($Proxy, DONT_ENUM, [ diff --git a/deps/v8/src/regexp-macro-assembler-irregexp-inl.h b/deps/v8/src/regexp-macro-assembler-irregexp-inl.h index 8fe6a427759..942cf575215 100644 --- a/deps/v8/src/regexp-macro-assembler-irregexp-inl.h +++ b/deps/v8/src/regexp-macro-assembler-irregexp-inl.h @@ -5,9 +5,10 @@ // A light-weight assembler for the Irregexp byte code. -#include "v8.h" -#include "ast.h" -#include "bytecodes-irregexp.h" +#include "src/v8.h" + +#include "src/ast.h" +#include "src/bytecodes-irregexp.h" #ifndef V8_REGEXP_MACRO_ASSEMBLER_IRREGEXP_INL_H_ #define V8_REGEXP_MACRO_ASSEMBLER_IRREGEXP_INL_H_ @@ -20,7 +21,7 @@ namespace internal { void RegExpMacroAssemblerIrregexp::Emit(uint32_t byte, uint32_t twenty_four_bits) { uint32_t word = ((twenty_four_bits << BYTECODE_SHIFT) | byte); - ASSERT(pc_ <= buffer_.length()); + DCHECK(pc_ <= buffer_.length()); if (pc_ + 3 >= buffer_.length()) { Expand(); } @@ -30,7 +31,7 @@ void RegExpMacroAssemblerIrregexp::Emit(uint32_t byte, void RegExpMacroAssemblerIrregexp::Emit16(uint32_t word) { - ASSERT(pc_ <= buffer_.length()); + DCHECK(pc_ <= buffer_.length()); if (pc_ + 1 >= buffer_.length()) { Expand(); } @@ -40,7 +41,7 @@ void RegExpMacroAssemblerIrregexp::Emit16(uint32_t word) { void RegExpMacroAssemblerIrregexp::Emit8(uint32_t word) { - ASSERT(pc_ <= buffer_.length()); + DCHECK(pc_ <= buffer_.length()); if (pc_ == buffer_.length()) { Expand(); } @@ -50,7 +51,7 @@ void RegExpMacroAssemblerIrregexp::Emit8(uint32_t word) { void RegExpMacroAssemblerIrregexp::Emit32(uint32_t word) { - ASSERT(pc_ <= buffer_.length()); + DCHECK(pc_ <= buffer_.length()); if (pc_ + 3 >= buffer_.length()) { Expand(); } diff --git a/deps/v8/src/regexp-macro-assembler-irregexp.cc b/deps/v8/src/regexp-macro-assembler-irregexp.cc index 368fb3006cf..469fb8cbb31 100644 --- a/deps/v8/src/regexp-macro-assembler-irregexp.cc +++ b/deps/v8/src/regexp-macro-assembler-irregexp.cc @@ -2,12 +2,13 @@ // Use of this source code is governed by a BSD-style license that can be // found in the LICENSE file. 
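In the regexp-macro-assembler-irregexp hunks here, Emit() packs a one-byte opcode together with a 24-bit operand into one 32-bit word ((twenty_four_bits << BYTECODE_SHIFT) | byte), and Expand() below doubles the backing buffer whenever fewer than four bytes remain, keeping emission amortized constant time. A standalone sketch of the encoding, assuming BYTECODE_SHIFT is 8 so the opcode sits in the low byte:

#include <cassert>
#include <cstdint>

constexpr int kBytecodeShift = 8;  // assumed value of BYTECODE_SHIFT

uint32_t Pack(uint8_t bytecode, uint32_t operand24) {
  assert(operand24 < (1u << 24));  // operand must fit in 24 bits
  return (operand24 << kBytecodeShift) | bytecode;
}

uint8_t Opcode(uint32_t word) { return word & 0xff; }
uint32_t Operand(uint32_t word) { return word >> kBytecodeShift; }

int main() {
  uint32_t w = Pack(0x2a, 1234);  // 0x2a is a hypothetical opcode
  assert(Opcode(w) == 0x2a);
  assert(Operand(w) == 1234);
}

Operands that need a full word (register values, comparands, offsets) are emitted as separate Emit32() words after the packed one -- the layout SetRegister and AdvanceRegister follow below: opcode word first, then a 32-bit immediate.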
-#include "v8.h" -#include "ast.h" -#include "bytecodes-irregexp.h" -#include "regexp-macro-assembler.h" -#include "regexp-macro-assembler-irregexp.h" -#include "regexp-macro-assembler-irregexp-inl.h" +#include "src/v8.h" + +#include "src/ast.h" +#include "src/bytecodes-irregexp.h" +#include "src/regexp-macro-assembler.h" +#include "src/regexp-macro-assembler-irregexp.h" +#include "src/regexp-macro-assembler-irregexp-inl.h" namespace v8 { @@ -39,7 +40,7 @@ RegExpMacroAssemblerIrregexp::Implementation() { void RegExpMacroAssemblerIrregexp::Bind(Label* l) { advance_current_end_ = kInvalidPC; - ASSERT(!l->is_bound()); + DCHECK(!l->is_bound()); if (l->is_linked()) { int pos = l->pos(); while (pos != 0) { @@ -68,8 +69,8 @@ void RegExpMacroAssemblerIrregexp::EmitOrLink(Label* l) { void RegExpMacroAssemblerIrregexp::PopRegister(int register_index) { - ASSERT(register_index >= 0); - ASSERT(register_index <= kMaxRegister); + DCHECK(register_index >= 0); + DCHECK(register_index <= kMaxRegister); Emit(BC_POP_REGISTER, register_index); } @@ -77,23 +78,23 @@ void RegExpMacroAssemblerIrregexp::PopRegister(int register_index) { void RegExpMacroAssemblerIrregexp::PushRegister( int register_index, StackCheckFlag check_stack_limit) { - ASSERT(register_index >= 0); - ASSERT(register_index <= kMaxRegister); + DCHECK(register_index >= 0); + DCHECK(register_index <= kMaxRegister); Emit(BC_PUSH_REGISTER, register_index); } void RegExpMacroAssemblerIrregexp::WriteCurrentPositionToRegister( int register_index, int cp_offset) { - ASSERT(register_index >= 0); - ASSERT(register_index <= kMaxRegister); + DCHECK(register_index >= 0); + DCHECK(register_index <= kMaxRegister); Emit(BC_SET_REGISTER_TO_CP, register_index); Emit32(cp_offset); // Current position offset. } void RegExpMacroAssemblerIrregexp::ClearRegisters(int reg_from, int reg_to) { - ASSERT(reg_from <= reg_to); + DCHECK(reg_from <= reg_to); for (int reg = reg_from; reg <= reg_to; reg++) { SetRegister(reg, -1); } @@ -102,45 +103,45 @@ void RegExpMacroAssemblerIrregexp::ClearRegisters(int reg_from, int reg_to) { void RegExpMacroAssemblerIrregexp::ReadCurrentPositionFromRegister( int register_index) { - ASSERT(register_index >= 0); - ASSERT(register_index <= kMaxRegister); + DCHECK(register_index >= 0); + DCHECK(register_index <= kMaxRegister); Emit(BC_SET_CP_TO_REGISTER, register_index); } void RegExpMacroAssemblerIrregexp::WriteStackPointerToRegister( int register_index) { - ASSERT(register_index >= 0); - ASSERT(register_index <= kMaxRegister); + DCHECK(register_index >= 0); + DCHECK(register_index <= kMaxRegister); Emit(BC_SET_REGISTER_TO_SP, register_index); } void RegExpMacroAssemblerIrregexp::ReadStackPointerFromRegister( int register_index) { - ASSERT(register_index >= 0); - ASSERT(register_index <= kMaxRegister); + DCHECK(register_index >= 0); + DCHECK(register_index <= kMaxRegister); Emit(BC_SET_SP_TO_REGISTER, register_index); } void RegExpMacroAssemblerIrregexp::SetCurrentPositionFromEnd(int by) { - ASSERT(is_uint24(by)); + DCHECK(is_uint24(by)); Emit(BC_SET_CURRENT_POSITION_FROM_END, by); } void RegExpMacroAssemblerIrregexp::SetRegister(int register_index, int to) { - ASSERT(register_index >= 0); - ASSERT(register_index <= kMaxRegister); + DCHECK(register_index >= 0); + DCHECK(register_index <= kMaxRegister); Emit(BC_SET_REGISTER, register_index); Emit32(to); } void RegExpMacroAssemblerIrregexp::AdvanceRegister(int register_index, int by) { - ASSERT(register_index >= 0); - ASSERT(register_index <= kMaxRegister); + DCHECK(register_index >= 0); + 
DCHECK(register_index <= kMaxRegister); Emit(BC_ADVANCE_REGISTER, register_index); Emit32(by); } @@ -194,8 +195,8 @@ void RegExpMacroAssemblerIrregexp::Fail() { void RegExpMacroAssemblerIrregexp::AdvanceCurrentPosition(int by) { - ASSERT(by >= kMinCPOffset); - ASSERT(by <= kMaxCPOffset); + DCHECK(by >= kMinCPOffset); + DCHECK(by <= kMaxCPOffset); advance_current_start_ = pc_; advance_current_offset_ = by; Emit(BC_ADVANCE_CP, by); @@ -214,8 +215,8 @@ void RegExpMacroAssemblerIrregexp::LoadCurrentCharacter(int cp_offset, Label* on_failure, bool check_bounds, int characters) { - ASSERT(cp_offset >= kMinCPOffset); - ASSERT(cp_offset <= kMaxCPOffset); + DCHECK(cp_offset >= kMinCPOffset); + DCHECK(cp_offset <= kMaxCPOffset); int bytecode; if (check_bounds) { if (characters == 4) { @@ -223,7 +224,7 @@ void RegExpMacroAssemblerIrregexp::LoadCurrentCharacter(int cp_offset, } else if (characters == 2) { bytecode = BC_LOAD_2_CURRENT_CHARS; } else { - ASSERT(characters == 1); + DCHECK(characters == 1); bytecode = BC_LOAD_CURRENT_CHAR; } } else { @@ -232,7 +233,7 @@ void RegExpMacroAssemblerIrregexp::LoadCurrentCharacter(int cp_offset, } else if (characters == 2) { bytecode = BC_LOAD_2_CURRENT_CHARS_UNCHECKED; } else { - ASSERT(characters == 1); + DCHECK(characters == 1); bytecode = BC_LOAD_CURRENT_CHAR_UNCHECKED; } } @@ -370,8 +371,8 @@ void RegExpMacroAssemblerIrregexp::CheckBitInTable( void RegExpMacroAssemblerIrregexp::CheckNotBackReference(int start_reg, Label* on_not_equal) { - ASSERT(start_reg >= 0); - ASSERT(start_reg <= kMaxRegister); + DCHECK(start_reg >= 0); + DCHECK(start_reg <= kMaxRegister); Emit(BC_CHECK_NOT_BACK_REF, start_reg); EmitOrLink(on_not_equal); } @@ -380,8 +381,8 @@ void RegExpMacroAssemblerIrregexp::CheckNotBackReference(int start_reg, void RegExpMacroAssemblerIrregexp::CheckNotBackReferenceIgnoreCase( int start_reg, Label* on_not_equal) { - ASSERT(start_reg >= 0); - ASSERT(start_reg <= kMaxRegister); + DCHECK(start_reg >= 0); + DCHECK(start_reg <= kMaxRegister); Emit(BC_CHECK_NOT_BACK_REF_NO_CASE, start_reg); EmitOrLink(on_not_equal); } @@ -390,8 +391,8 @@ void RegExpMacroAssemblerIrregexp::CheckNotBackReferenceIgnoreCase( void RegExpMacroAssemblerIrregexp::IfRegisterLT(int register_index, int comparand, Label* on_less_than) { - ASSERT(register_index >= 0); - ASSERT(register_index <= kMaxRegister); + DCHECK(register_index >= 0); + DCHECK(register_index <= kMaxRegister); Emit(BC_CHECK_REGISTER_LT, register_index); Emit32(comparand); EmitOrLink(on_less_than); @@ -401,8 +402,8 @@ void RegExpMacroAssemblerIrregexp::IfRegisterLT(int register_index, void RegExpMacroAssemblerIrregexp::IfRegisterGE(int register_index, int comparand, Label* on_greater_or_equal) { - ASSERT(register_index >= 0); - ASSERT(register_index <= kMaxRegister); + DCHECK(register_index >= 0); + DCHECK(register_index <= kMaxRegister); Emit(BC_CHECK_REGISTER_GE, register_index); Emit32(comparand); EmitOrLink(on_greater_or_equal); @@ -411,8 +412,8 @@ void RegExpMacroAssemblerIrregexp::IfRegisterGE(int register_index, void RegExpMacroAssemblerIrregexp::IfRegisterEqPos(int register_index, Label* on_eq) { - ASSERT(register_index >= 0); - ASSERT(register_index <= kMaxRegister); + DCHECK(register_index >= 0); + DCHECK(register_index <= kMaxRegister); Emit(BC_CHECK_REGISTER_EQ_POS, register_index); EmitOrLink(on_eq); } @@ -434,7 +435,7 @@ int RegExpMacroAssemblerIrregexp::length() { void RegExpMacroAssemblerIrregexp::Copy(Address a) { - OS::MemCopy(a, buffer_.start(), length()); + MemCopy(a, buffer_.start(), length()); 
} @@ -443,7 +444,7 @@ void RegExpMacroAssemblerIrregexp::Expand() { Vector<byte> old_buffer = buffer_; buffer_ = Vector<byte>::New(old_buffer.length() * 2); own_buffer_ = true; - OS::MemCopy(buffer_.start(), old_buffer.start(), old_buffer.length()); + MemCopy(buffer_.start(), old_buffer.start(), old_buffer.length()); if (old_buffer_was_our_own) { old_buffer.Dispose(); } diff --git a/deps/v8/src/regexp-macro-assembler-irregexp.h b/deps/v8/src/regexp-macro-assembler-irregexp.h index 54afe92e136..cdfb46ad15e 100644 --- a/deps/v8/src/regexp-macro-assembler-irregexp.h +++ b/deps/v8/src/regexp-macro-assembler-irregexp.h @@ -5,6 +5,8 @@ #ifndef V8_REGEXP_MACRO_ASSEMBLER_IRREGEXP_H_ #define V8_REGEXP_MACRO_ASSEMBLER_IRREGEXP_H_ +#include "src/regexp-macro-assembler.h" + namespace v8 { namespace internal { diff --git a/deps/v8/src/regexp-macro-assembler-tracer.cc b/deps/v8/src/regexp-macro-assembler-tracer.cc index c307eaf6113..14da2da895b 100644 --- a/deps/v8/src/regexp-macro-assembler-tracer.cc +++ b/deps/v8/src/regexp-macro-assembler-tracer.cc @@ -2,10 +2,11 @@ // Use of this source code is governed by a BSD-style license that can be // found in the LICENSE file. -#include "v8.h" -#include "ast.h" -#include "regexp-macro-assembler.h" -#include "regexp-macro-assembler-tracer.h" +#include "src/v8.h" + +#include "src/ast.h" +#include "src/regexp-macro-assembler.h" +#include "src/regexp-macro-assembler-tracer.h" namespace v8 { namespace internal { @@ -15,9 +16,9 @@ RegExpMacroAssemblerTracer::RegExpMacroAssemblerTracer( RegExpMacroAssembler(assembler->zone()), assembler_(assembler) { unsigned int type = assembler->Implementation(); - ASSERT(type < 6); + DCHECK(type < 6); const char* impl_names[] = {"IA32", "ARM", "ARM64", - "MIPS", "X64", "Bytecode"}; + "MIPS", "X64", "X87", "Bytecode"}; PrintF("RegExpMacroAssembler%s();\n", impl_names[type]); } @@ -192,7 +193,7 @@ class PrintablePrinter { buffer_[0] = '\0'; } return &buffer_[0]; - }; + } private: uc16 character_; diff --git a/deps/v8/src/regexp-macro-assembler.cc b/deps/v8/src/regexp-macro-assembler.cc index a522f97c6c1..13c2a6a32fb 100644 --- a/deps/v8/src/regexp-macro-assembler.cc +++ b/deps/v8/src/regexp-macro-assembler.cc @@ -2,12 +2,13 @@ // Use of this source code is governed by a BSD-style license that can be // found in the LICENSE file. -#include "v8.h" -#include "ast.h" -#include "assembler.h" -#include "regexp-stack.h" -#include "regexp-macro-assembler.h" -#include "simulator.h" +#include "src/v8.h" + +#include "src/assembler.h" +#include "src/ast.h" +#include "src/regexp-macro-assembler.h" +#include "src/regexp-stack.h" +#include "src/simulator.h" namespace v8 { namespace internal { @@ -51,16 +52,16 @@ const byte* NativeRegExpMacroAssembler::StringCharacterPosition( String* subject, int start_index) { // Not just flat, but ultra flat. 
- ASSERT(subject->IsExternalString() || subject->IsSeqString()); - ASSERT(start_index >= 0); - ASSERT(start_index <= subject->length()); + DCHECK(subject->IsExternalString() || subject->IsSeqString()); + DCHECK(start_index >= 0); + DCHECK(start_index <= subject->length()); if (subject->IsOneByteRepresentation()) { const byte* address; if (StringShape(subject).IsExternal()) { const uint8_t* data = ExternalAsciiString::cast(subject)->GetChars(); address = reinterpret_cast<const byte*>(data); } else { - ASSERT(subject->IsSeqOneByteString()); + DCHECK(subject->IsSeqOneByteString()); const uint8_t* data = SeqOneByteString::cast(subject)->GetChars(); address = reinterpret_cast<const byte*>(data); } @@ -70,7 +71,7 @@ const byte* NativeRegExpMacroAssembler::StringCharacterPosition( if (StringShape(subject).IsExternal()) { data = ExternalTwoByteString::cast(subject)->GetChars(); } else { - ASSERT(subject->IsSeqTwoByteString()); + DCHECK(subject->IsSeqTwoByteString()); data = SeqTwoByteString::cast(subject)->GetChars(); } return reinterpret_cast<const byte*>(data + start_index); @@ -85,9 +86,9 @@ NativeRegExpMacroAssembler::Result NativeRegExpMacroAssembler::Match( int previous_index, Isolate* isolate) { - ASSERT(subject->IsFlat()); - ASSERT(previous_index >= 0); - ASSERT(previous_index <= subject->length()); + DCHECK(subject->IsFlat()); + DCHECK(previous_index >= 0); + DCHECK(previous_index <= subject->length()); // No allocations before calling the regexp, but we can't use // DisallowHeapAllocation, since regexps might be preempted, and another @@ -102,7 +103,7 @@ NativeRegExpMacroAssembler::Result NativeRegExpMacroAssembler::Match( // The string has been flattened, so if it is a cons string it contains the // full string in the first part. if (StringShape(subject_ptr).IsCons()) { - ASSERT_EQ(0, ConsString::cast(subject_ptr)->second()->length()); + DCHECK_EQ(0, ConsString::cast(subject_ptr)->second()->length()); subject_ptr = ConsString::cast(subject_ptr)->first(); } else if (StringShape(subject_ptr).IsSliced()) { SlicedString* slice = SlicedString::cast(subject_ptr); @@ -111,7 +112,7 @@ NativeRegExpMacroAssembler::Result NativeRegExpMacroAssembler::Match( } // Ensure that an underlying string has the same ASCII-ness. bool is_ascii = subject_ptr->IsOneByteRepresentation(); - ASSERT(subject_ptr->IsExternalString() || subject_ptr->IsSeqString()); + DCHECK(subject_ptr->IsExternalString() || subject_ptr->IsSeqString()); // String is now either Sequential or External int char_size_shift = is_ascii ? 0 : 1; @@ -155,7 +156,7 @@ NativeRegExpMacroAssembler::Result NativeRegExpMacroAssembler::Execute( stack_base, direct_call, isolate); - ASSERT(result >= RETRY); + DCHECK(result >= RETRY); if (result == EXCEPTION && !isolate->has_pending_exception()) { // We detected a stack overflow (on the backtrack stack) in RegExp code, @@ -219,7 +220,7 @@ int NativeRegExpMacroAssembler::CaseInsensitiveCompareUC16( // This function is not allowed to cause a garbage collection. // A GC might move the calling generated code and invalidate the // return address on the stack. 
- ASSERT(byte_length % 2 == 0); + DCHECK(byte_length % 2 == 0); uc16* substring1 = reinterpret_cast<uc16*>(byte_offset1); uc16* substring2 = reinterpret_cast<uc16*>(byte_offset2); size_t length = byte_length >> 1; @@ -249,9 +250,9 @@ Address NativeRegExpMacroAssembler::GrowStack(Address stack_pointer, RegExpStack* regexp_stack = isolate->regexp_stack(); size_t size = regexp_stack->stack_capacity(); Address old_stack_base = regexp_stack->stack_base(); - ASSERT(old_stack_base == *stack_base); - ASSERT(stack_pointer <= old_stack_base); - ASSERT(static_cast<size_t>(old_stack_base - stack_pointer) <= size); + DCHECK(old_stack_base == *stack_base); + DCHECK(stack_pointer <= old_stack_base); + DCHECK(static_cast<size_t>(old_stack_base - stack_pointer) <= size); Address new_stack_base = regexp_stack->EnsureCapacity(size * 2); if (new_stack_base == NULL) { return NULL; diff --git a/deps/v8/src/regexp-macro-assembler.h b/deps/v8/src/regexp-macro-assembler.h index 8d9129504dc..f0cfc465fc6 100644 --- a/deps/v8/src/regexp-macro-assembler.h +++ b/deps/v8/src/regexp-macro-assembler.h @@ -5,7 +5,7 @@ #ifndef V8_REGEXP_MACRO_ASSEMBLER_H_ #define V8_REGEXP_MACRO_ASSEMBLER_H_ -#include "ast.h" +#include "src/ast.h" namespace v8 { namespace internal { @@ -33,6 +33,7 @@ class RegExpMacroAssembler { kARM64Implementation, kMIPSImplementation, kX64Implementation, + kX87Implementation, kBytecodeImplementation }; diff --git a/deps/v8/src/regexp-stack.cc b/deps/v8/src/regexp-stack.cc index 5e250dd85ee..f114ae44240 100644 --- a/deps/v8/src/regexp-stack.cc +++ b/deps/v8/src/regexp-stack.cc @@ -2,8 +2,9 @@ // Use of this source code is governed by a BSD-style license that can be // found in the LICENSE file. -#include "v8.h" -#include "regexp-stack.h" +#include "src/v8.h" + +#include "src/regexp-stack.h" namespace v8 { namespace internal { @@ -33,7 +34,7 @@ RegExpStack::~RegExpStack() { char* RegExpStack::ArchiveStack(char* to) { size_t size = sizeof(thread_local_); - OS::MemCopy(reinterpret_cast<void*>(to), &thread_local_, size); + MemCopy(reinterpret_cast<void*>(to), &thread_local_, size); thread_local_ = ThreadLocal(); return to + size; } @@ -41,7 +42,7 @@ char* RegExpStack::ArchiveStack(char* to) { char* RegExpStack::RestoreStack(char* from) { size_t size = sizeof(thread_local_); - OS::MemCopy(&thread_local_, reinterpret_cast<void*>(from), size); + MemCopy(&thread_local_, reinterpret_cast<void*>(from), size); return from + size; } @@ -69,11 +70,10 @@ Address RegExpStack::EnsureCapacity(size_t size) { Address new_memory = NewArray<byte>(static_cast<int>(size)); if (thread_local_.memory_size_ > 0) { // Copy original memory into top of new memory. - OS::MemCopy( - reinterpret_cast<void*>( - new_memory + size - thread_local_.memory_size_), - reinterpret_cast<void*>(thread_local_.memory_), - thread_local_.memory_size_); + MemCopy(reinterpret_cast<void*>(new_memory + size - + thread_local_.memory_size_), + reinterpret_cast<void*>(thread_local_.memory_), + thread_local_.memory_size_); DeleteArray(thread_local_.memory_); } thread_local_.memory_ = new_memory; diff --git a/deps/v8/src/regexp-stack.h b/deps/v8/src/regexp-stack.h index 745782d73b7..d18ce708d62 100644 --- a/deps/v8/src/regexp-stack.h +++ b/deps/v8/src/regexp-stack.h @@ -41,7 +41,7 @@ class RegExpStack { // Gives the top of the memory used as stack. 
Address stack_base() { - ASSERT(thread_local_.memory_size_ != 0); + DCHECK(thread_local_.memory_size_ != 0); return thread_local_.memory_ + thread_local_.memory_size_; } diff --git a/deps/v8/src/regexp.js b/deps/v8/src/regexp.js index 6a0e2b5d92c..d7883fb6937 100644 --- a/deps/v8/src/regexp.js +++ b/deps/v8/src/regexp.js @@ -80,7 +80,7 @@ function RegExpConstructor(pattern, flags) { // were called again. In SpiderMonkey, this method returns the regexp object. // In JSC, it returns undefined. For compatibility with JSC, we match their // behavior. -function RegExpCompile(pattern, flags) { +function RegExpCompileJS(pattern, flags) { // Both JSC and SpiderMonkey treat a missing pattern argument as the // empty subject string, and an actual undefined value passed as the // pattern as the string 'undefined'. Note that JSC is inconsistent @@ -108,23 +108,30 @@ function DoRegExpExec(regexp, string, index) { } -function BuildResultFromMatchInfo(lastMatchInfo, s) { - var numResults = NUMBER_OF_CAPTURES(lastMatchInfo) >> 1; - var start = lastMatchInfo[CAPTURE0]; - var end = lastMatchInfo[CAPTURE1]; - var result = %_RegExpConstructResult(numResults, start, s); - result[0] = %_SubString(s, start, end); +// This is kind of performance sensitive, so we want to avoid unnecessary +// type checks on inputs. But we also don't want to inline it several times +// manually, so we use a macro :-) +macro RETURN_NEW_RESULT_FROM_MATCH_INFO(MATCHINFO, STRING) + var numResults = NUMBER_OF_CAPTURES(MATCHINFO) >> 1; + var start = MATCHINFO[CAPTURE0]; + var end = MATCHINFO[CAPTURE1]; + // Calculate the substring of the first match before creating the result array + // to avoid an unnecessary write barrier storing the first result. + var first = %_SubString(STRING, start, end); + var result = %_RegExpConstructResult(numResults, start, STRING); + result[0] = first; + if (numResults == 1) return result; var j = REGEXP_FIRST_CAPTURE + 2; for (var i = 1; i < numResults; i++) { - start = lastMatchInfo[j++]; + start = MATCHINFO[j++]; if (start != -1) { - end = lastMatchInfo[j]; - result[i] = %_SubString(s, start, end); + end = MATCHINFO[j]; + result[i] = %_SubString(STRING, start, end); } j++; } return result; -} +endmacro function RegExpExecNoTests(regexp, string, start) { @@ -132,7 +139,7 @@ function RegExpExecNoTests(regexp, string, start) { var matchInfo = %_RegExpExec(regexp, string, start, lastMatchInfo); if (matchInfo !== null) { lastMatchInfoOverride = null; - return BuildResultFromMatchInfo(matchInfo, string); + RETURN_NEW_RESULT_FROM_MATCH_INFO(matchInfo, string); } regexp.lastIndex = 0; return null; @@ -175,7 +182,7 @@ function RegExpExec(string) { if (global) { this.lastIndex = lastMatchInfo[CAPTURE1]; } - return BuildResultFromMatchInfo(matchIndices, string); + RETURN_NEW_RESULT_FROM_MATCH_INFO(matchIndices, string); } @@ -374,14 +381,14 @@ var lastMatchInfoOverride = null; function SetUpRegExp() { %CheckIsBootstrapping(); %FunctionSetInstanceClassName($RegExp, 'RegExp'); - %SetProperty($RegExp.prototype, 'constructor', $RegExp, DONT_ENUM); + %AddNamedProperty($RegExp.prototype, 'constructor', $RegExp, DONT_ENUM); %SetCode($RegExp, RegExpConstructor); InstallFunctions($RegExp.prototype, DONT_ENUM, $Array( "exec", RegExpExec, "test", RegExpTest, "toString", RegExpToString, - "compile", RegExpCompile + "compile", RegExpCompileJS )); // The length of compile is 1 in SpiderMonkey. 
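
A note on the regexp.js hunk above: the BuildResultFromMatchInfo helper becomes a macro ... endmacro block because V8's .js natives are run through a build-time preprocessor that pastes the macro body into each caller. That is why the call sites in RegExpExecNoTests and RegExpExec drop their explicit return: after expansion, the return statements inside RETURN_NEW_RESULT_FROM_MATCH_INFO exit the calling function directly, and no per-call argument type checks are paid on this hot path. A minimal stand-alone C-preprocessor analogue of that inlining behaviour (hypothetical names, not V8's natives preprocessor):

#include <cstdio>

// Expands in place, so the `return` exits the enclosing function,
// just as RETURN_NEW_RESULT_FROM_MATCH_INFO returns from its caller.
#define RETURN_IF_NEGATIVE(x) \
  do {                        \
    if ((x) < 0) return -1;   \
  } while (0)

int Process(int value) {
  RETURN_IF_NEGATIVE(value);  // no call overhead, no argument copying
  return value * 2;
}

int main() { std::printf("%d %d\n", Process(21), Process(-3)); }  // prints: 42 -1
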
@@ -399,12 +406,12 @@ function SetUpRegExp() { }; %OptimizeObjectForAddingMultipleProperties($RegExp, 22); - %DefineOrRedefineAccessorProperty($RegExp, 'input', RegExpGetInput, - RegExpSetInput, DONT_DELETE); - %DefineOrRedefineAccessorProperty($RegExp, '$_', RegExpGetInput, - RegExpSetInput, DONT_ENUM | DONT_DELETE); - %DefineOrRedefineAccessorProperty($RegExp, '$input', RegExpGetInput, - RegExpSetInput, DONT_ENUM | DONT_DELETE); + %DefineAccessorPropertyUnchecked($RegExp, 'input', RegExpGetInput, + RegExpSetInput, DONT_DELETE); + %DefineAccessorPropertyUnchecked($RegExp, '$_', RegExpGetInput, + RegExpSetInput, DONT_ENUM | DONT_DELETE); + %DefineAccessorPropertyUnchecked($RegExp, '$input', RegExpGetInput, + RegExpSetInput, DONT_ENUM | DONT_DELETE); // The properties multiline and $* are aliases for each other. When this // value is set in SpiderMonkey, the value it is set to is coerced to a @@ -418,40 +425,40 @@ function SetUpRegExp() { var RegExpGetMultiline = function() { return multiline; }; var RegExpSetMultiline = function(flag) { multiline = flag ? true : false; }; - %DefineOrRedefineAccessorProperty($RegExp, 'multiline', RegExpGetMultiline, - RegExpSetMultiline, DONT_DELETE); - %DefineOrRedefineAccessorProperty($RegExp, '$*', RegExpGetMultiline, - RegExpSetMultiline, - DONT_ENUM | DONT_DELETE); + %DefineAccessorPropertyUnchecked($RegExp, 'multiline', RegExpGetMultiline, + RegExpSetMultiline, DONT_DELETE); + %DefineAccessorPropertyUnchecked($RegExp, '$*', RegExpGetMultiline, + RegExpSetMultiline, + DONT_ENUM | DONT_DELETE); var NoOpSetter = function(ignored) {}; // Static properties set by a successful match. - %DefineOrRedefineAccessorProperty($RegExp, 'lastMatch', RegExpGetLastMatch, - NoOpSetter, DONT_DELETE); - %DefineOrRedefineAccessorProperty($RegExp, '$&', RegExpGetLastMatch, - NoOpSetter, DONT_ENUM | DONT_DELETE); - %DefineOrRedefineAccessorProperty($RegExp, 'lastParen', RegExpGetLastParen, - NoOpSetter, DONT_DELETE); - %DefineOrRedefineAccessorProperty($RegExp, '$+', RegExpGetLastParen, - NoOpSetter, DONT_ENUM | DONT_DELETE); - %DefineOrRedefineAccessorProperty($RegExp, 'leftContext', - RegExpGetLeftContext, NoOpSetter, - DONT_DELETE); - %DefineOrRedefineAccessorProperty($RegExp, '$`', RegExpGetLeftContext, - NoOpSetter, DONT_ENUM | DONT_DELETE); - %DefineOrRedefineAccessorProperty($RegExp, 'rightContext', - RegExpGetRightContext, NoOpSetter, - DONT_DELETE); - %DefineOrRedefineAccessorProperty($RegExp, "$'", RegExpGetRightContext, - NoOpSetter, DONT_ENUM | DONT_DELETE); + %DefineAccessorPropertyUnchecked($RegExp, 'lastMatch', RegExpGetLastMatch, + NoOpSetter, DONT_DELETE); + %DefineAccessorPropertyUnchecked($RegExp, '$&', RegExpGetLastMatch, + NoOpSetter, DONT_ENUM | DONT_DELETE); + %DefineAccessorPropertyUnchecked($RegExp, 'lastParen', RegExpGetLastParen, + NoOpSetter, DONT_DELETE); + %DefineAccessorPropertyUnchecked($RegExp, '$+', RegExpGetLastParen, + NoOpSetter, DONT_ENUM | DONT_DELETE); + %DefineAccessorPropertyUnchecked($RegExp, 'leftContext', + RegExpGetLeftContext, NoOpSetter, + DONT_DELETE); + %DefineAccessorPropertyUnchecked($RegExp, '$`', RegExpGetLeftContext, + NoOpSetter, DONT_ENUM | DONT_DELETE); + %DefineAccessorPropertyUnchecked($RegExp, 'rightContext', + RegExpGetRightContext, NoOpSetter, + DONT_DELETE); + %DefineAccessorPropertyUnchecked($RegExp, "$'", RegExpGetRightContext, + NoOpSetter, DONT_ENUM | DONT_DELETE); for (var i = 1; i < 10; ++i) { - %DefineOrRedefineAccessorProperty($RegExp, '$' + i, - RegExpMakeCaptureGetter(i), NoOpSetter, - DONT_DELETE); + 
%DefineAccessorPropertyUnchecked($RegExp, '$' + i, + RegExpMakeCaptureGetter(i), NoOpSetter, + DONT_DELETE); } %ToFastProperties($RegExp); } diff --git a/deps/v8/src/rewriter.cc b/deps/v8/src/rewriter.cc index 27f03dc7a48..169dce9a41f 100644 --- a/deps/v8/src/rewriter.cc +++ b/deps/v8/src/rewriter.cc @@ -2,13 +2,13 @@ // Use of this source code is governed by a BSD-style license that can be // found in the LICENSE file. -#include "v8.h" +#include "src/v8.h" -#include "rewriter.h" +#include "src/rewriter.h" -#include "ast.h" -#include "compiler.h" -#include "scopes.h" +#include "src/ast.h" +#include "src/compiler.h" +#include "src/scopes.h" namespace v8 { namespace internal { @@ -20,7 +20,9 @@ class Processor: public AstVisitor { result_assigned_(false), is_set_(false), in_try_(false), - factory_(zone) { + // Passing a null AstValueFactory is fine, because Processor doesn't + // need to create strings or literals. + factory_(zone, NULL) { InitializeAstVisitor(zone); } @@ -227,21 +229,23 @@ EXPRESSION_NODE_LIST(DEF_VISIT) // continue to be used in the case of failure. bool Rewriter::Rewrite(CompilationInfo* info) { FunctionLiteral* function = info->function(); - ASSERT(function != NULL); + DCHECK(function != NULL); Scope* scope = function->scope(); - ASSERT(scope != NULL); + DCHECK(scope != NULL); if (!scope->is_global_scope() && !scope->is_eval_scope()) return true; ZoneList<Statement*>* body = function->body(); if (!body->is_empty()) { - Variable* result = scope->NewTemporary( - info->isolate()->factory()->dot_result_string()); + Variable* result = + scope->NewTemporary(info->ast_value_factory()->dot_result_string()); + // The name string must be internalized at this point. + DCHECK(!result->name().is_null()); Processor processor(result, info->zone()); processor.Process(body); if (processor.HasStackOverflow()) return false; if (processor.result_assigned()) { - ASSERT(function->end_position() != RelocInfo::kNoPosition); + DCHECK(function->end_position() != RelocInfo::kNoPosition); // Set the position of the assignment statement one character past the // source code, such that it definitely is not in the source code range // of an immediate inner scope. For example in @@ -250,7 +254,7 @@ bool Rewriter::Rewrite(CompilationInfo* info) { // coincides with the end of the with scope which is the position of '1'. int pos = function->end_position(); VariableProxy* result_proxy = processor.factory()->NewVariableProxy( - result->name(), false, result->interface(), pos); + result->raw_name(), false, result->interface(), pos); result_proxy->BindTo(result); Statement* result_statement = processor.factory()->NewReturnStatement(result_proxy, pos); diff --git a/deps/v8/src/runtime-profiler.cc b/deps/v8/src/runtime-profiler.cc index 69229871ea6..d6099d43be8 100644 --- a/deps/v8/src/runtime-profiler.cc +++ b/deps/v8/src/runtime-profiler.cc @@ -2,21 +2,21 @@ // Use of this source code is governed by a BSD-style license that can be // found in the LICENSE file. 
-#include "v8.h" - -#include "runtime-profiler.h" - -#include "assembler.h" -#include "bootstrapper.h" -#include "code-stubs.h" -#include "compilation-cache.h" -#include "execution.h" -#include "full-codegen.h" -#include "global-handles.h" -#include "isolate-inl.h" -#include "mark-compact.h" -#include "platform.h" -#include "scopeinfo.h" +#include "src/v8.h" + +#include "src/runtime-profiler.h" + +#include "src/assembler.h" +#include "src/base/platform/platform.h" +#include "src/bootstrapper.h" +#include "src/code-stubs.h" +#include "src/compilation-cache.h" +#include "src/execution.h" +#include "src/full-codegen.h" +#include "src/global-handles.h" +#include "src/heap/mark-compact.h" +#include "src/isolate-inl.h" +#include "src/scopeinfo.h" namespace v8 { namespace internal { @@ -57,35 +57,43 @@ RuntimeProfiler::RuntimeProfiler(Isolate* isolate) } -static void GetICCounts(Code* shared_code, - int* ic_with_type_info_count, - int* ic_total_count, - int* percentage) { +static void GetICCounts(Code* shared_code, int* ic_with_type_info_count, + int* ic_generic_count, int* ic_total_count, + int* type_info_percentage, int* generic_percentage) { *ic_total_count = 0; + *ic_generic_count = 0; *ic_with_type_info_count = 0; Object* raw_info = shared_code->type_feedback_info(); if (raw_info->IsTypeFeedbackInfo()) { TypeFeedbackInfo* info = TypeFeedbackInfo::cast(raw_info); *ic_with_type_info_count = info->ic_with_type_info_count(); + *ic_generic_count = info->ic_generic_count(); *ic_total_count = info->ic_total_count(); } - *percentage = *ic_total_count > 0 - ? 100 * *ic_with_type_info_count / *ic_total_count - : 100; + if (*ic_total_count > 0) { + *type_info_percentage = 100 * *ic_with_type_info_count / *ic_total_count; + *generic_percentage = 100 * *ic_generic_count / *ic_total_count; + } else { + *type_info_percentage = 100; // Compared against lower bound. + *generic_percentage = 0; // Compared against upper bound. + } } void RuntimeProfiler::Optimize(JSFunction* function, const char* reason) { - ASSERT(function->IsOptimizable()); + DCHECK(function->IsOptimizable()); if (FLAG_trace_opt && function->PassesFilter(FLAG_hydrogen_filter)) { PrintF("[marking "); function->ShortPrint(); PrintF(" for recompilation, reason: %s", reason); if (FLAG_type_info_threshold > 0) { - int typeinfo, total, percentage; - GetICCounts(function->shared()->code(), &typeinfo, &total, &percentage); - PrintF(", ICs with typeinfo: %d/%d (%d%%)", typeinfo, total, percentage); + int typeinfo, generic, total, type_percentage, generic_percentage; + GetICCounts(function->shared()->code(), &typeinfo, &generic, &total, + &type_percentage, &generic_percentage); + PrintF(", ICs with typeinfo: %d/%d (%d%%)", typeinfo, total, + type_percentage); + PrintF(", generic ICs: %d/%d (%d%%)", generic, total, generic_percentage); } PrintF("]\n"); } @@ -101,7 +109,7 @@ void RuntimeProfiler::Optimize(JSFunction* function, const char* reason) { // recompilation race. This goes away as soon as OSR becomes one-shot. return; } - ASSERT(!function->IsInOptimizationQueue()); + DCHECK(!function->IsInOptimizationQueue()); function->MarkForConcurrentOptimization(); } else { // The next call to the function will trigger optimization. 
@@ -110,7 +118,9 @@ void RuntimeProfiler::Optimize(JSFunction* function, const char* reason) { } -void RuntimeProfiler::AttemptOnStackReplacement(JSFunction* function) { +void RuntimeProfiler::AttemptOnStackReplacement(JSFunction* function, + int loop_nesting_levels) { + SharedFunctionInfo* shared = function->shared(); // See AlwaysFullCompiler (in compiler.cc) comment on why we need // Debug::has_break_points(). if (!FLAG_use_osr || @@ -119,7 +129,6 @@ void RuntimeProfiler::AttemptOnStackReplacement(JSFunction* function) { return; } - SharedFunctionInfo* shared = function->shared(); // If the code is not optimizable, don't try OSR. if (!shared->code()->optimizable()) return; @@ -137,7 +146,9 @@ void RuntimeProfiler::AttemptOnStackReplacement(JSFunction* function) { PrintF("]\n"); } - BackEdgeTable::Patch(isolate_, shared->code()); + for (int i = 0; i < loop_nesting_levels; i++) { + BackEdgeTable::Patch(isolate_, shared->code()); + } } @@ -175,14 +186,8 @@ void RuntimeProfiler::OptimizeNow() { if (shared_code->kind() != Code::FUNCTION) continue; if (function->IsInOptimizationQueue()) continue; - if (FLAG_always_osr && - shared_code->allow_osr_at_loop_nesting_level() == 0) { - // Testing mode: always try an OSR compile for every function. - for (int i = 0; i < Code::kMaxLoopNestingMarker; i++) { - // TODO(titzer): fix AttemptOnStackReplacement to avoid this dumb loop. - shared_code->set_allow_osr_at_loop_nesting_level(i); - AttemptOnStackReplacement(function); - } + if (FLAG_always_osr) { + AttemptOnStackReplacement(function, Code::kMaxLoopNestingMarker); // Fall through and do a normal optimized compile as well. } else if (!frame->is_optimized() && (function->IsMarkedForOptimization() || @@ -196,12 +201,7 @@ void RuntimeProfiler::OptimizeNow() { if (shared_code->CodeSize() > allowance) { if (ticks < 255) shared_code->set_profiler_ticks(ticks + 1); } else { - int nesting = shared_code->allow_osr_at_loop_nesting_level(); - if (nesting < Code::kMaxLoopNestingMarker) { - int new_nesting = nesting + 1; - shared_code->set_allow_osr_at_loop_nesting_level(new_nesting); - AttemptOnStackReplacement(function); - } + AttemptOnStackReplacement(function); } continue; } @@ -235,9 +235,11 @@ void RuntimeProfiler::OptimizeNow() { int ticks = shared_code->profiler_ticks(); if (ticks >= kProfilerTicksBeforeOptimization) { - int typeinfo, total, percentage; - GetICCounts(shared_code, &typeinfo, &total, &percentage); - if (percentage >= FLAG_type_info_threshold) { + int typeinfo, generic, total, type_percentage, generic_percentage; + GetICCounts(shared_code, &typeinfo, &generic, &total, &type_percentage, + &generic_percentage); + if (type_percentage >= FLAG_type_info_threshold && + generic_percentage <= FLAG_generic_ic_threshold) { // If this particular function hasn't had any ICs patched for enough // ticks, optimize it now. Optimize(function, "hot and stable"); @@ -248,15 +250,23 @@ void RuntimeProfiler::OptimizeNow() { if (FLAG_trace_opt_verbose) { PrintF("[not yet optimizing "); function->PrintName(); - PrintF(", not enough type info: %d/%d (%d%%)]\n", - typeinfo, total, percentage); + PrintF(", not enough type info: %d/%d (%d%%)]\n", typeinfo, total, + type_percentage); } } } else if (!any_ic_changed_ && shared_code->instruction_size() < kMaxSizeEarlyOpt) { // If no IC was patched since the last tick and this function is very // small, optimistically optimize it now. 
- Optimize(function, "small function"); + int typeinfo, generic, total, type_percentage, generic_percentage; + GetICCounts(shared_code, &typeinfo, &generic, &total, &type_percentage, + &generic_percentage); + if (type_percentage >= FLAG_type_info_threshold && + generic_percentage <= FLAG_generic_ic_threshold) { + Optimize(function, "small function"); + } else { + shared_code->set_profiler_ticks(ticks + 1); + } } else { shared_code->set_profiler_ticks(ticks + 1); } diff --git a/deps/v8/src/runtime-profiler.h b/deps/v8/src/runtime-profiler.h index 450910ca828..eff443d926a 100644 --- a/deps/v8/src/runtime-profiler.h +++ b/deps/v8/src/runtime-profiler.h @@ -5,16 +5,19 @@ #ifndef V8_RUNTIME_PROFILER_H_ #define V8_RUNTIME_PROFILER_H_ -#include "allocation.h" -#include "atomicops.h" +#include "src/allocation.h" namespace v8 { + +namespace base { +class Semaphore; +} + namespace internal { class Isolate; class JSFunction; class Object; -class Semaphore; class RuntimeProfiler { public: @@ -24,7 +27,7 @@ class RuntimeProfiler { void NotifyICChanged() { any_ic_changed_ = true; } - void AttemptOnStackReplacement(JSFunction* function); + void AttemptOnStackReplacement(JSFunction* function, int nesting_levels = 1); private: void Optimize(JSFunction* function, const char* reason); diff --git a/deps/v8/src/runtime.cc b/deps/v8/src/runtime.cc index 079332b982e..1fbedc6adc3 100644 --- a/deps/v8/src/runtime.cc +++ b/deps/v8/src/runtime.cc @@ -5,47 +5,50 @@ #include <stdlib.h> #include <limits> -#include "v8.h" - -#include "accessors.h" -#include "allocation-site-scopes.h" -#include "api.h" -#include "arguments.h" -#include "bootstrapper.h" -#include "codegen.h" -#include "compilation-cache.h" -#include "compiler.h" -#include "conversions.h" -#include "cpu.h" -#include "cpu-profiler.h" -#include "dateparser-inl.h" -#include "debug.h" -#include "deoptimizer.h" -#include "date.h" -#include "execution.h" -#include "full-codegen.h" -#include "global-handles.h" -#include "isolate-inl.h" -#include "jsregexp.h" -#include "jsregexp-inl.h" -#include "json-parser.h" -#include "json-stringifier.h" -#include "liveedit.h" -#include "misc-intrinsics.h" -#include "parser.h" -#include "platform.h" -#include "runtime-profiler.h" -#include "runtime.h" -#include "scopeinfo.h" -#include "smart-pointers.h" -#include "string-search.h" -#include "stub-cache.h" -#include "uri.h" -#include "v8threads.h" -#include "vm-state-inl.h" +#include "src/v8.h" + +#include "src/accessors.h" +#include "src/allocation-site-scopes.h" +#include "src/api.h" +#include "src/arguments.h" +#include "src/base/cpu.h" +#include "src/base/platform/platform.h" +#include "src/bootstrapper.h" +#include "src/codegen.h" +#include "src/compilation-cache.h" +#include "src/compiler.h" +#include "src/conversions.h" +#include "src/cpu-profiler.h" +#include "src/date.h" +#include "src/dateparser-inl.h" +#include "src/debug.h" +#include "src/deoptimizer.h" +#include "src/execution.h" +#include "src/full-codegen.h" +#include "src/global-handles.h" +#include "src/isolate-inl.h" +#include "src/json-parser.h" +#include "src/json-stringifier.h" +#include "src/jsregexp-inl.h" +#include "src/jsregexp.h" +#include "src/liveedit.h" +#include "src/misc-intrinsics.h" +#include "src/parser.h" +#include "src/prototype.h" +#include "src/runtime.h" +#include "src/runtime-profiler.h" +#include "src/scopeinfo.h" +#include "src/smart-pointers.h" +#include "src/string-search.h" +#include "src/stub-cache.h" +#include "src/uri.h" +#include "src/utils.h" +#include "src/v8threads.h" 
+#include "src/vm-state-inl.h" +#include "third_party/fdlibm/fdlibm.h" #ifdef V8_I18N_SUPPORT -#include "i18n.h" +#include "src/i18n.h" #include "unicode/brkiter.h" #include "unicode/calendar.h" #include "unicode/coll.h" @@ -169,8 +172,8 @@ static Handle<Map> ComputeObjectLiteralMap( } else { // Bail out as a non-internalized-string non-index key makes caching // impossible. - // ASSERT to make sure that the if condition after the loop is false. - ASSERT(number_of_string_keys != number_of_properties); + // DCHECK to make sure that the if condition after the loop is false. + DCHECK(number_of_string_keys != number_of_properties); break; } } @@ -190,7 +193,7 @@ static Handle<Map> ComputeObjectLiteralMap( keys->set(index++, key); } } - ASSERT(index == number_of_string_keys); + DCHECK(index == number_of_string_keys); } *is_result_from_cache = true; return isolate->factory()->ObjectLiteralMapFromCache(context, keys); @@ -245,14 +248,12 @@ MUST_USE_RESULT static MaybeHandle<Object> CreateObjectLiteralBoilerplate( int length = constant_properties->length(); bool should_transform = !is_result_from_cache && boilerplate->HasFastProperties(); - if (should_transform || has_function_literal) { - // Normalize the properties of object to avoid n^2 behavior - // when extending the object multiple properties. Indicate the number of - // properties to be added. + bool should_normalize = should_transform || has_function_literal; + if (should_normalize) { + // TODO(verwaest): We might not want to ever normalize here. JSObject::NormalizeProperties( boilerplate, KEEP_INOBJECT_PROPERTIES, length / 2); } - // TODO(verwaest): Support tracking representations in the boilerplate. for (int index = 0; index < length; index +=2) { Handle<Object> key(constant_properties->get(index+0), isolate); @@ -268,34 +269,33 @@ MUST_USE_RESULT static MaybeHandle<Object> CreateObjectLiteralBoilerplate( } MaybeHandle<Object> maybe_result; uint32_t element_index = 0; - StoreMode mode = value->IsJSObject() ? FORCE_FIELD : ALLOW_AS_CONSTANT; if (key->IsInternalizedString()) { if (Handle<String>::cast(key)->AsArrayIndex(&element_index)) { // Array index as string (uint32). - maybe_result = JSObject::SetOwnElement( - boilerplate, element_index, value, SLOPPY); + if (value->IsUninitialized()) value = handle(Smi::FromInt(0), isolate); + maybe_result = + JSObject::SetOwnElement(boilerplate, element_index, value, SLOPPY); } else { Handle<String> name(String::cast(*key)); - ASSERT(!name->AsArrayIndex(&element_index)); - maybe_result = JSObject::SetLocalPropertyIgnoreAttributes( - boilerplate, name, value, NONE, - Object::OPTIMAL_REPRESENTATION, mode); + DCHECK(!name->AsArrayIndex(&element_index)); + maybe_result = JSObject::SetOwnPropertyIgnoreAttributes( + boilerplate, name, value, NONE); } } else if (key->ToArrayIndex(&element_index)) { // Array index (uint32). - maybe_result = JSObject::SetOwnElement( - boilerplate, element_index, value, SLOPPY); + if (value->IsUninitialized()) value = handle(Smi::FromInt(0), isolate); + maybe_result = + JSObject::SetOwnElement(boilerplate, element_index, value, SLOPPY); } else { // Non-uint32 number. 
- ASSERT(key->IsNumber()); + DCHECK(key->IsNumber()); double num = key->Number(); char arr[100]; Vector<char> buffer(arr, ARRAY_SIZE(arr)); const char* str = DoubleToCString(num, buffer); Handle<String> name = isolate->factory()->NewStringFromAsciiChecked(str); - maybe_result = JSObject::SetLocalPropertyIgnoreAttributes( - boilerplate, name, value, NONE, - Object::OPTIMAL_REPRESENTATION, mode); + maybe_result = JSObject::SetOwnPropertyIgnoreAttributes(boilerplate, name, + value, NONE); } // If setting the property on the boilerplate throws an // exception, the exception is converted to an empty handle in @@ -309,7 +309,7 @@ MUST_USE_RESULT static MaybeHandle<Object> CreateObjectLiteralBoilerplate( // computed properties have been assigned so that we can generate // constant function properties. if (should_transform && !has_function_literal) { - JSObject::TransformToFastProperties( + JSObject::MigrateSlowToFast( boilerplate, boilerplate->map()->unused_property_fields()); } @@ -337,9 +337,6 @@ MUST_USE_RESULT static MaybeHandle<Object> TransitionElements( } -static const int kSmiLiteralMinimumLength = 1024; - - MaybeHandle<Object> Runtime::CreateArrayLiteralBoilerplate( Isolate* isolate, Handle<FixedArray> literals, @@ -360,21 +357,20 @@ MaybeHandle<Object> Runtime::CreateArrayLiteralBoilerplate( FixedArrayBase::cast(elements->get(1))); { DisallowHeapAllocation no_gc; - ASSERT(IsFastElementsKind(constant_elements_kind)); + DCHECK(IsFastElementsKind(constant_elements_kind)); Context* native_context = isolate->context()->native_context(); Object* maps_array = native_context->js_array_maps(); - ASSERT(!maps_array->IsUndefined()); + DCHECK(!maps_array->IsUndefined()); Object* map = FixedArray::cast(maps_array)->get(constant_elements_kind); object->set_map(Map::cast(map)); } Handle<FixedArrayBase> copied_elements_values; if (IsFastDoubleElementsKind(constant_elements_kind)) { - ASSERT(FLAG_smi_only_arrays); copied_elements_values = isolate->factory()->CopyFixedDoubleArray( Handle<FixedDoubleArray>::cast(constant_elements_values)); } else { - ASSERT(IsFastSmiOrObjectElementsKind(constant_elements_kind)); + DCHECK(IsFastSmiOrObjectElementsKind(constant_elements_kind)); const bool is_cow = (constant_elements_values->map() == isolate->heap()->fixed_cow_array_map()); @@ -384,7 +380,7 @@ MaybeHandle<Object> Runtime::CreateArrayLiteralBoilerplate( Handle<FixedArray> fixed_array_values = Handle<FixedArray>::cast(copied_elements_values); for (int i = 0; i < fixed_array_values->length(); i++) { - ASSERT(!fixed_array_values->get(i)->IsFixedArray()); + DCHECK(!fixed_array_values->get(i)->IsFixedArray()); } #endif } else { @@ -411,20 +407,6 @@ MaybeHandle<Object> Runtime::CreateArrayLiteralBoilerplate( object->set_elements(*copied_elements_values); object->set_length(Smi::FromInt(copied_elements_values->length())); - // Ensure that the boilerplate object has FAST_*_ELEMENTS, unless the flag is - // on or the object is larger than the threshold. 
- if (!FLAG_smi_only_arrays && - constant_elements_values->length() < kSmiLiteralMinimumLength) { - ElementsKind elements_kind = object->GetElementsKind(); - if (!IsFastObjectElementsKind(elements_kind)) { - if (IsFastHoleyElementsKind(elements_kind)) { - TransitionElements(object, FAST_HOLEY_ELEMENTS, isolate).Check(); - } else { - TransitionElements(object, FAST_ELEMENTS, isolate).Check(); - } - } - } - JSObject::ValidateElements(object); return object; } @@ -459,9 +441,9 @@ MUST_USE_RESULT static MaybeHandle<Object> CreateLiteralBoilerplate( } -RUNTIME_FUNCTION(RuntimeHidden_CreateObjectLiteral) { +RUNTIME_FUNCTION(Runtime_CreateObjectLiteral) { HandleScope scope(isolate); - ASSERT(args.length() == 4); + DCHECK(args.length() == 4); CONVERT_ARG_HANDLE_CHECKED(FixedArray, literals, 0); CONVERT_SMI_ARG_CHECKED(literals_index, 1); CONVERT_ARG_HANDLE_CHECKED(FixedArray, constant_properties, 2); @@ -522,7 +504,7 @@ MUST_USE_RESULT static MaybeHandle<AllocationSite> GetLiteralAllocationSite( Handle<Object> literal_site(literals->get(literals_index), isolate); Handle<AllocationSite> site; if (*literal_site == isolate->heap()->undefined_value()) { - ASSERT(*elements != isolate->heap()->empty_fixed_array()); + DCHECK(*elements != isolate->heap()->empty_fixed_array()); Handle<Object> boilerplate; ASSIGN_RETURN_ON_EXCEPTION( isolate, boilerplate, @@ -564,8 +546,8 @@ static MaybeHandle<JSObject> CreateArrayLiteralImpl(Isolate* isolate, AllocationSiteUsageContext usage_context(isolate, site, enable_mementos); usage_context.EnterNewScope(); JSObject::DeepCopyHints hints = (flags & ArrayLiteral::kShallowElements) == 0 - ? JSObject::kNoHints - : JSObject::kObjectIsShallowArray; + ? JSObject::kNoHints + : JSObject::kObjectIsShallow; MaybeHandle<JSObject> copy = JSObject::DeepCopy(boilerplate, &usage_context, hints); usage_context.ExitScope(site, boilerplate); @@ -573,9 +555,9 @@ static MaybeHandle<JSObject> CreateArrayLiteralImpl(Isolate* isolate, } -RUNTIME_FUNCTION(RuntimeHidden_CreateArrayLiteral) { +RUNTIME_FUNCTION(Runtime_CreateArrayLiteral) { HandleScope scope(isolate); - ASSERT(args.length() == 4); + DCHECK(args.length() == 4); CONVERT_ARG_HANDLE_CHECKED(FixedArray, literals, 0); CONVERT_SMI_ARG_CHECKED(literals_index, 1); CONVERT_ARG_HANDLE_CHECKED(FixedArray, elements, 2); @@ -589,9 +571,9 @@ RUNTIME_FUNCTION(RuntimeHidden_CreateArrayLiteral) { } -RUNTIME_FUNCTION(RuntimeHidden_CreateArrayLiteralStubBailout) { +RUNTIME_FUNCTION(Runtime_CreateArrayLiteralStubBailout) { HandleScope scope(isolate); - ASSERT(args.length() == 3); + DCHECK(args.length() == 3); CONVERT_ARG_HANDLE_CHECKED(FixedArray, literals, 0); CONVERT_SMI_ARG_CHECKED(literals_index, 1); CONVERT_ARG_HANDLE_CHECKED(FixedArray, elements, 2); @@ -606,7 +588,7 @@ RUNTIME_FUNCTION(RuntimeHidden_CreateArrayLiteralStubBailout) { RUNTIME_FUNCTION(Runtime_CreateSymbol) { HandleScope scope(isolate); - ASSERT(args.length() == 1); + DCHECK(args.length() == 1); CONVERT_ARG_HANDLE_CHECKED(Object, name, 0); RUNTIME_ASSERT(name->IsString() || name->IsUndefined()); Handle<Symbol> symbol = isolate->factory()->NewSymbol(); @@ -617,7 +599,7 @@ RUNTIME_FUNCTION(Runtime_CreateSymbol) { RUNTIME_FUNCTION(Runtime_CreatePrivateSymbol) { HandleScope scope(isolate); - ASSERT(args.length() == 1); + DCHECK(args.length() == 1); CONVERT_ARG_HANDLE_CHECKED(Object, name, 0); RUNTIME_ASSERT(name->IsString() || name->IsUndefined()); Handle<Symbol> symbol = isolate->factory()->NewPrivateSymbol(); @@ -626,9 +608,20 @@ RUNTIME_FUNCTION(Runtime_CreatePrivateSymbol) { } 
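
Runtime_CreateGlobalPrivateSymbol below is a get-or-create lookup against the per-isolate symbol registry: if the name already maps to a symbol it is returned, otherwise a fresh private symbol is created and stored under that name. The same contract in miniature, with standard containers as stand-ins for the registry object:

#include <cstdio>
#include <string>
#include <unordered_map>

struct Symbol { int id; };

// Mirrors the IsSymbol()/IsUndefined() branch below: a hit returns the
// cached symbol, a miss creates one and registers it before returning.
Symbol GetOrCreate(std::unordered_map<std::string, Symbol>& registry,
                   const std::string& name) {
  auto it = registry.find(name);
  if (it != registry.end()) return it->second;
  Symbol fresh{static_cast<int>(registry.size())};
  registry.emplace(name, fresh);
  return fresh;
}

int main() {
  std::unordered_map<std::string, Symbol> registry;
  Symbol a1 = GetOrCreate(registry, "a");
  Symbol b = GetOrCreate(registry, "b");
  Symbol a2 = GetOrCreate(registry, "a");
  std::printf("%d %d %d\n", a1.id, b.id, a2.id);  // 0 1 0: same name, same symbol
  return 0;
}
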
+RUNTIME_FUNCTION(Runtime_CreatePrivateOwnSymbol) { + HandleScope scope(isolate); + DCHECK(args.length() == 1); + CONVERT_ARG_HANDLE_CHECKED(Object, name, 0); + RUNTIME_ASSERT(name->IsString() || name->IsUndefined()); + Handle<Symbol> symbol = isolate->factory()->NewPrivateOwnSymbol(); + if (name->IsString()) symbol->set_name(*name); + return *symbol; +} + + RUNTIME_FUNCTION(Runtime_CreateGlobalPrivateSymbol) { HandleScope scope(isolate); - ASSERT(args.length() == 1); + DCHECK(args.length() == 1); CONVERT_ARG_HANDLE_CHECKED(String, name, 0); Handle<JSObject> registry = isolate->GetSymbolRegistry(); Handle<String> part = isolate->factory()->private_intern_string(); @@ -639,11 +632,11 @@ RUNTIME_FUNCTION(Runtime_CreateGlobalPrivateSymbol) { ASSIGN_RETURN_FAILURE_ON_EXCEPTION( isolate, symbol, Object::GetPropertyOrElement(privates, name)); if (!symbol->IsSymbol()) { - ASSERT(symbol->IsUndefined()); + DCHECK(symbol->IsUndefined()); symbol = isolate->factory()->NewPrivateSymbol(); Handle<Symbol>::cast(symbol)->set_name(*name); - JSObject::SetProperty(Handle<JSObject>::cast(privates), - name, symbol, NONE, STRICT).Assert(); + JSObject::SetProperty(Handle<JSObject>::cast(privates), name, symbol, + STRICT).Assert(); } return *symbol; } @@ -651,7 +644,7 @@ RUNTIME_FUNCTION(Runtime_CreateGlobalPrivateSymbol) { RUNTIME_FUNCTION(Runtime_NewSymbolWrapper) { HandleScope scope(isolate); - ASSERT(args.length() == 1); + DCHECK(args.length() == 1); CONVERT_ARG_HANDLE_CHECKED(Symbol, symbol, 0); return *Object::ToObject(isolate, symbol).ToHandleChecked(); } @@ -659,7 +652,7 @@ RUNTIME_FUNCTION(Runtime_NewSymbolWrapper) { RUNTIME_FUNCTION(Runtime_SymbolDescription) { SealHandleScope shs(isolate); - ASSERT(args.length() == 1); + DCHECK(args.length() == 1); CONVERT_ARG_CHECKED(Symbol, symbol, 0); return symbol->name(); } @@ -667,14 +660,14 @@ RUNTIME_FUNCTION(Runtime_SymbolDescription) { RUNTIME_FUNCTION(Runtime_SymbolRegistry) { HandleScope scope(isolate); - ASSERT(args.length() == 0); + DCHECK(args.length() == 0); return *isolate->GetSymbolRegistry(); } RUNTIME_FUNCTION(Runtime_SymbolIsPrivate) { SealHandleScope shs(isolate); - ASSERT(args.length() == 1); + DCHECK(args.length() == 1); CONVERT_ARG_CHECKED(Symbol, symbol, 0); return isolate->heap()->ToBoolean(symbol->is_private()); } @@ -682,7 +675,7 @@ RUNTIME_FUNCTION(Runtime_SymbolIsPrivate) { RUNTIME_FUNCTION(Runtime_CreateJSProxy) { HandleScope scope(isolate); - ASSERT(args.length() == 2); + DCHECK(args.length() == 2); CONVERT_ARG_HANDLE_CHECKED(JSReceiver, handler, 0); CONVERT_ARG_HANDLE_CHECKED(Object, prototype, 1); if (!prototype->IsJSReceiver()) prototype = isolate->factory()->null_value(); @@ -692,7 +685,7 @@ RUNTIME_FUNCTION(Runtime_CreateJSProxy) { RUNTIME_FUNCTION(Runtime_CreateJSFunctionProxy) { HandleScope scope(isolate); - ASSERT(args.length() == 4); + DCHECK(args.length() == 4); CONVERT_ARG_HANDLE_CHECKED(JSReceiver, handler, 0); CONVERT_ARG_HANDLE_CHECKED(Object, call_trap, 1); RUNTIME_ASSERT(call_trap->IsJSFunction() || call_trap->IsJSFunctionProxy()); @@ -706,7 +699,7 @@ RUNTIME_FUNCTION(Runtime_CreateJSFunctionProxy) { RUNTIME_FUNCTION(Runtime_IsJSProxy) { SealHandleScope shs(isolate); - ASSERT(args.length() == 1); + DCHECK(args.length() == 1); CONVERT_ARG_HANDLE_CHECKED(Object, obj, 0); return isolate->heap()->ToBoolean(obj->IsJSProxy()); } @@ -714,7 +707,7 @@ RUNTIME_FUNCTION(Runtime_IsJSProxy) { RUNTIME_FUNCTION(Runtime_IsJSFunctionProxy) { SealHandleScope shs(isolate); - ASSERT(args.length() == 1); + DCHECK(args.length() == 1); 
CONVERT_ARG_HANDLE_CHECKED(Object, obj, 0); return isolate->heap()->ToBoolean(obj->IsJSFunctionProxy()); } @@ -722,7 +715,7 @@ RUNTIME_FUNCTION(Runtime_IsJSFunctionProxy) { RUNTIME_FUNCTION(Runtime_GetHandler) { SealHandleScope shs(isolate); - ASSERT(args.length() == 1); + DCHECK(args.length() == 1); CONVERT_ARG_CHECKED(JSProxy, proxy, 0); return proxy->handler(); } @@ -730,7 +723,7 @@ RUNTIME_FUNCTION(Runtime_GetHandler) { RUNTIME_FUNCTION(Runtime_GetCallTrap) { SealHandleScope shs(isolate); - ASSERT(args.length() == 1); + DCHECK(args.length() == 1); CONVERT_ARG_CHECKED(JSFunctionProxy, proxy, 0); return proxy->call_trap(); } @@ -738,7 +731,7 @@ RUNTIME_FUNCTION(Runtime_GetCallTrap) { RUNTIME_FUNCTION(Runtime_GetConstructTrap) { SealHandleScope shs(isolate); - ASSERT(args.length() == 1); + DCHECK(args.length() == 1); CONVERT_ARG_CHECKED(JSFunctionProxy, proxy, 0); return proxy->construct_trap(); } @@ -746,7 +739,7 @@ RUNTIME_FUNCTION(Runtime_GetConstructTrap) { RUNTIME_FUNCTION(Runtime_Fix) { HandleScope scope(isolate); - ASSERT(args.length() == 1); + DCHECK(args.length() == 1); CONVERT_ARG_HANDLE_CHECKED(JSProxy, proxy, 0); JSProxy::Fix(proxy); return isolate->heap()->undefined_value(); @@ -756,7 +749,7 @@ RUNTIME_FUNCTION(Runtime_Fix) { void Runtime::FreeArrayBuffer(Isolate* isolate, JSArrayBuffer* phantom_array_buffer) { if (phantom_array_buffer->should_be_freed()) { - ASSERT(phantom_array_buffer->is_external()); + DCHECK(phantom_array_buffer->is_external()); free(phantom_array_buffer->backing_store()); } if (phantom_array_buffer->is_external()) return; @@ -764,8 +757,9 @@ void Runtime::FreeArrayBuffer(Isolate* isolate, size_t allocated_length = NumberToSize( isolate, phantom_array_buffer->byte_length()); - isolate->heap()->AdjustAmountOfExternalAllocatedMemory( - -static_cast<int64_t>(allocated_length)); + reinterpret_cast<v8::Isolate*>(isolate) + ->AdjustAmountOfExternalAllocatedMemory( + -static_cast<int64_t>(allocated_length)); CHECK(V8::ArrayBufferAllocator() != NULL); V8::ArrayBufferAllocator()->Free( phantom_array_buffer->backing_store(), @@ -778,7 +772,7 @@ void Runtime::SetupArrayBuffer(Isolate* isolate, bool is_external, void* data, size_t allocated_length) { - ASSERT(array_buffer->GetInternalFieldCount() == + DCHECK(array_buffer->GetInternalFieldCount() == v8::ArrayBuffer::kInternalFieldCount); for (int i = 0; i < v8::ArrayBuffer::kInternalFieldCount; i++) { array_buffer->SetInternalField(i, Smi::FromInt(0)); @@ -819,7 +813,8 @@ bool Runtime::SetupArrayBufferAllocatingData( SetupArrayBuffer(isolate, array_buffer, false, data, allocated_length); - isolate->heap()->AdjustAmountOfExternalAllocatedMemory(allocated_length); + reinterpret_cast<v8::Isolate*>(isolate) + ->AdjustAmountOfExternalAllocatedMemory(allocated_length); return true; } @@ -845,7 +840,7 @@ void Runtime::NeuterArrayBuffer(Handle<JSArrayBuffer> array_buffer) { RUNTIME_FUNCTION(Runtime_ArrayBufferInitialize) { HandleScope scope(isolate); - ASSERT(args.length() == 2); + DCHECK(args.length() == 2); CONVERT_ARG_HANDLE_CHECKED(JSArrayBuffer, holder, 0); CONVERT_NUMBER_ARG_HANDLE_CHECKED(byteLength, 1); if (!holder->byte_length()->IsUndefined()) { @@ -870,7 +865,7 @@ RUNTIME_FUNCTION(Runtime_ArrayBufferInitialize) { RUNTIME_FUNCTION(Runtime_ArrayBufferGetByteLength) { SealHandleScope shs(isolate); - ASSERT(args.length() == 1); + DCHECK(args.length() == 1); CONVERT_ARG_CHECKED(JSArrayBuffer, holder, 0); return holder->byte_length(); } @@ -878,10 +873,11 @@ RUNTIME_FUNCTION(Runtime_ArrayBufferGetByteLength) { 
RUNTIME_FUNCTION(Runtime_ArrayBufferSliceImpl) { HandleScope scope(isolate); - ASSERT(args.length() == 3); + DCHECK(args.length() == 3); CONVERT_ARG_HANDLE_CHECKED(JSArrayBuffer, source, 0); CONVERT_ARG_HANDLE_CHECKED(JSArrayBuffer, target, 1); CONVERT_NUMBER_ARG_HANDLE_CHECKED(first, 2); + RUNTIME_ASSERT(!source.is_identical_to(target)); size_t start = 0; RUNTIME_ASSERT(TryNumberToSize(isolate, *first, &start)); size_t target_length = NumberToSize(isolate, target->byte_length()); @@ -900,7 +896,7 @@ RUNTIME_FUNCTION(Runtime_ArrayBufferSliceImpl) { RUNTIME_FUNCTION(Runtime_ArrayBufferIsView) { HandleScope scope(isolate); - ASSERT(args.length() == 1); + DCHECK(args.length() == 1); CONVERT_ARG_CHECKED(Object, object, 0); return isolate->heap()->ToBoolean(object->IsJSArrayBufferView()); } @@ -908,13 +904,13 @@ RUNTIME_FUNCTION(Runtime_ArrayBufferIsView) { RUNTIME_FUNCTION(Runtime_ArrayBufferNeuter) { HandleScope scope(isolate); - ASSERT(args.length() == 1); + DCHECK(args.length() == 1); CONVERT_ARG_HANDLE_CHECKED(JSArrayBuffer, array_buffer, 0); if (array_buffer->backing_store() == NULL) { CHECK(Smi::FromInt(0) == array_buffer->byte_length()); return isolate->heap()->undefined_value(); } - ASSERT(!array_buffer->is_external()); + DCHECK(!array_buffer->is_external()); void* backing_store = array_buffer->backing_store(); size_t byte_length = NumberToSize(isolate, array_buffer->byte_length()); array_buffer->set_is_external(true); @@ -950,7 +946,7 @@ void Runtime::ArrayIdToTypeAndSize( RUNTIME_FUNCTION(Runtime_TypedArrayInitialize) { HandleScope scope(isolate); - ASSERT(args.length() == 5); + DCHECK(args.length() == 5); CONVERT_ARG_HANDLE_CHECKED(JSTypedArray, holder, 0); CONVERT_SMI_ARG_CHECKED(arrayId, 1); CONVERT_ARG_HANDLE_CHECKED(Object, maybe_buffer, 2); @@ -959,13 +955,6 @@ RUNTIME_FUNCTION(Runtime_TypedArrayInitialize) { RUNTIME_ASSERT(arrayId >= Runtime::ARRAY_ID_FIRST && arrayId <= Runtime::ARRAY_ID_LAST); - RUNTIME_ASSERT(maybe_buffer->IsNull() || maybe_buffer->IsJSArrayBuffer()); - - ASSERT(holder->GetInternalFieldCount() == - v8::ArrayBufferView::kInternalFieldCount); - for (int i = 0; i < v8::ArrayBufferView::kInternalFieldCount; i++) { - holder->SetInternalField(i, Smi::FromInt(0)); - } ExternalArrayType array_type = kExternalInt8Array; // Bogus initialization. size_t element_size = 1; // Bogus initialization. 
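
The reordering in Runtime_TypedArrayInitialize that continues below is about failure atomicity: every RUNTIME_ASSERT is hoisted ahead of the first write into holder (the "All checks are done, now we can modify objects" point), so a rejected argument can no longer leave behind a half-initialized typed array. The shape of that pattern in a small, self-contained form (simplified stand-in types, not V8's):

#include <cstddef>
#include <cstdio>

struct View {
  std::size_t byte_offset = 0;
  std::size_t byte_length = 0;
  bool initialized = false;
};

bool Initialize(View* view, std::size_t buffer_bytes,
                std::size_t offset, std::size_t length) {
  // All checks first: the failure paths write nothing.
  if (offset > buffer_bytes) return false;
  if (buffer_bytes - offset < length) return false;
  // All checks are done, now we can modify the object.
  view->byte_offset = offset;
  view->byte_length = length;
  view->initialized = true;
  return true;
}

int main() {
  View v;
  std::printf("%d offset=%zu\n", Initialize(&v, 16, 24, 4) ? 1 : 0, v.byte_offset);  // 0 offset=0
  std::printf("%d offset=%zu\n", Initialize(&v, 16, 8, 8) ? 1 : 0, v.byte_offset);   // 1 offset=8
  return 0;
}
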
@@ -977,16 +966,24 @@ RUNTIME_FUNCTION(Runtime_TypedArrayInitialize) { &external_elements_kind, &fixed_elements_kind, &element_size); + RUNTIME_ASSERT(holder->map()->elements_kind() == fixed_elements_kind); size_t byte_offset = 0; size_t byte_length = 0; RUNTIME_ASSERT(TryNumberToSize(isolate, *byte_offset_object, &byte_offset)); RUNTIME_ASSERT(TryNumberToSize(isolate, *byte_length_object, &byte_length)); - holder->set_byte_offset(*byte_offset_object); - holder->set_byte_length(*byte_length_object); + if (maybe_buffer->IsJSArrayBuffer()) { + Handle<JSArrayBuffer> buffer = Handle<JSArrayBuffer>::cast(maybe_buffer); + size_t array_buffer_byte_length = + NumberToSize(isolate, buffer->byte_length()); + RUNTIME_ASSERT(byte_offset <= array_buffer_byte_length); + RUNTIME_ASSERT(array_buffer_byte_length - byte_offset >= byte_length); + } else { + RUNTIME_ASSERT(maybe_buffer->IsNull()); + } - CHECK_EQ(0, static_cast<int>(byte_length % element_size)); + RUNTIME_ASSERT(byte_length % element_size == 0); size_t length = byte_length / element_size; if (length > static_cast<unsigned>(Smi::kMaxValue)) { @@ -995,16 +992,20 @@ RUNTIME_FUNCTION(Runtime_TypedArrayInitialize) { HandleVector<Object>(NULL, 0))); } + // All checks are done, now we can modify objects. + + DCHECK(holder->GetInternalFieldCount() == + v8::ArrayBufferView::kInternalFieldCount); + for (int i = 0; i < v8::ArrayBufferView::kInternalFieldCount; i++) { + holder->SetInternalField(i, Smi::FromInt(0)); + } Handle<Object> length_obj = isolate->factory()->NewNumberFromSize(length); holder->set_length(*length_obj); - if (!maybe_buffer->IsNull()) { - Handle<JSArrayBuffer> buffer(JSArrayBuffer::cast(*maybe_buffer)); - - size_t array_buffer_byte_length = - NumberToSize(isolate, buffer->byte_length()); - RUNTIME_ASSERT(byte_offset <= array_buffer_byte_length); - RUNTIME_ASSERT(array_buffer_byte_length - byte_offset >= byte_length); + holder->set_byte_offset(*byte_offset_object); + holder->set_byte_length(*byte_length_object); + if (!maybe_buffer->IsNull()) { + Handle<JSArrayBuffer> buffer = Handle<JSArrayBuffer>::cast(maybe_buffer); holder->set_buffer(*buffer); holder->set_weak_next(buffer->weak_first_view()); buffer->set_weak_first_view(*holder); @@ -1016,7 +1017,7 @@ RUNTIME_FUNCTION(Runtime_TypedArrayInitialize) { Handle<Map> map = JSObject::GetElementsTransitionMap(holder, external_elements_kind); JSObject::SetMapAndElements(holder, map, elements); - ASSERT(IsExternalArrayElementsKind(holder->map()->elements_kind())); + DCHECK(IsExternalArrayElementsKind(holder->map()->elements_kind())); } else { holder->set_buffer(Smi::FromInt(0)); holder->set_weak_next(isolate->heap()->undefined_value()); @@ -1036,7 +1037,7 @@ RUNTIME_FUNCTION(Runtime_TypedArrayInitialize) { // Returns true if backing store was initialized or false otherwise. 
RUNTIME_FUNCTION(Runtime_TypedArrayInitializeFromArrayLike) { HandleScope scope(isolate); - ASSERT(args.length() == 4); + DCHECK(args.length() == 4); CONVERT_ARG_HANDLE_CHECKED(JSTypedArray, holder, 0); CONVERT_SMI_ARG_CHECKED(arrayId, 1); CONVERT_ARG_HANDLE_CHECKED(Object, source, 2); @@ -1045,12 +1046,6 @@ RUNTIME_FUNCTION(Runtime_TypedArrayInitializeFromArrayLike) { RUNTIME_ASSERT(arrayId >= Runtime::ARRAY_ID_FIRST && arrayId <= Runtime::ARRAY_ID_LAST); - ASSERT(holder->GetInternalFieldCount() == - v8::ArrayBufferView::kInternalFieldCount); - for (int i = 0; i < v8::ArrayBufferView::kInternalFieldCount; i++) { - holder->SetInternalField(i, Smi::FromInt(0)); - } - ExternalArrayType array_type = kExternalInt8Array; // Bogus initialization. size_t element_size = 1; // Bogus initialization. ElementsKind external_elements_kind = @@ -1062,6 +1057,8 @@ RUNTIME_FUNCTION(Runtime_TypedArrayInitializeFromArrayLike) { &fixed_elements_kind, &element_size); + RUNTIME_ASSERT(holder->map()->elements_kind() == fixed_elements_kind); + Handle<JSArrayBuffer> buffer = isolate->factory()->NewJSArrayBuffer(); if (source->IsJSTypedArray() && JSTypedArray::cast(*source)->type() == array_type) { @@ -1078,6 +1075,12 @@ RUNTIME_FUNCTION(Runtime_TypedArrayInitializeFromArrayLike) { } size_t byte_length = length * element_size; + DCHECK(holder->GetInternalFieldCount() == + v8::ArrayBufferView::kInternalFieldCount); + for (int i = 0; i < v8::ArrayBufferView::kInternalFieldCount; i++) { + holder->SetInternalField(i, Smi::FromInt(0)); + } + // NOTE: not initializing backing store. // We assume that the caller of this function will initialize holder // with the loop @@ -1142,7 +1145,7 @@ RUNTIME_FUNCTION(Runtime_TypedArrayInitializeFromArrayLike) { #define BUFFER_VIEW_GETTER(Type, getter, accessor) \ RUNTIME_FUNCTION(Runtime_##Type##Get##getter) { \ HandleScope scope(isolate); \ - ASSERT(args.length() == 1); \ + DCHECK(args.length() == 1); \ CONVERT_ARG_HANDLE_CHECKED(JS##Type, holder, 0); \ return holder->accessor(); \ } @@ -1156,7 +1159,7 @@ BUFFER_VIEW_GETTER(DataView, Buffer, buffer) RUNTIME_FUNCTION(Runtime_TypedArrayGetBuffer) { HandleScope scope(isolate); - ASSERT(args.length() == 1); + DCHECK(args.length() == 1); CONVERT_ARG_HANDLE_CHECKED(JSTypedArray, holder, 0); return *holder->GetBuffer(); } @@ -1179,7 +1182,7 @@ enum TypedArraySetResultCodes { RUNTIME_FUNCTION(Runtime_TypedArraySetFastCases) { HandleScope scope(isolate); - ASSERT(args.length() == 3); + DCHECK(args.length() == 3); if (!args[0]->IsJSTypedArray()) return isolate->Throw(*isolate->factory()->NewTypeError( "not_typed_array", HandleVector<Object>(NULL, 0))); @@ -1227,7 +1230,7 @@ RUNTIME_FUNCTION(Runtime_TypedArraySetFastCases) { (target_base <= source_base && target_base + target_byte_length > source_base)) { // We do not support overlapping ArrayBuffers - ASSERT( + DCHECK( target->GetBuffer()->backing_store() == source->GetBuffer()->backing_store()); return Smi::FromInt(TYPED_ARRAY_SET_TYPED_ARRAY_OVERLAPPING); @@ -1238,8 +1241,8 @@ RUNTIME_FUNCTION(Runtime_TypedArraySetFastCases) { RUNTIME_FUNCTION(Runtime_TypedArrayMaxSizeInHeap) { - ASSERT(args.length() == 0); - ASSERT_OBJECT_SIZE( + DCHECK(args.length() == 0); + DCHECK_OBJECT_SIZE( FLAG_typed_array_max_size_in_heap + FixedTypedArrayBase::kDataOffset); return Smi::FromInt(FLAG_typed_array_max_size_in_heap); } @@ -1247,13 +1250,13 @@ RUNTIME_FUNCTION(Runtime_TypedArrayMaxSizeInHeap) { RUNTIME_FUNCTION(Runtime_DataViewInitialize) { HandleScope scope(isolate); - ASSERT(args.length() == 4); + 
DCHECK(args.length() == 4); CONVERT_ARG_HANDLE_CHECKED(JSDataView, holder, 0); CONVERT_ARG_HANDLE_CHECKED(JSArrayBuffer, buffer, 1); CONVERT_NUMBER_ARG_HANDLE_CHECKED(byte_offset, 2); CONVERT_NUMBER_ARG_HANDLE_CHECKED(byte_length, 3); - ASSERT(holder->GetInternalFieldCount() == + DCHECK(holder->GetInternalFieldCount() == v8::ArrayBufferView::kInternalFieldCount); for (int i = 0; i < v8::ArrayBufferView::kInternalFieldCount; i++) { holder->SetInternalField(i, Smi::FromInt(0)); @@ -1339,7 +1342,7 @@ inline static bool DataViewGetValue( Value value; size_t buffer_offset = data_view_byte_offset + byte_offset; - ASSERT( + DCHECK( NumberToSize(isolate, buffer->byte_length()) >= buffer_offset + sizeof(T)); uint8_t* source = @@ -1384,7 +1387,7 @@ static bool DataViewSetValue( Value value; value.data = data; size_t buffer_offset = data_view_byte_offset + byte_offset; - ASSERT( + DCHECK( NumberToSize(isolate, buffer->byte_length()) >= buffer_offset + sizeof(T)); uint8_t* target = @@ -1399,11 +1402,11 @@ static bool DataViewSetValue( #define DATA_VIEW_GETTER(TypeName, Type, Converter) \ - RUNTIME_FUNCTION(Runtime_DataViewGet##TypeName) { \ + RUNTIME_FUNCTION(Runtime_DataViewGet##TypeName) { \ HandleScope scope(isolate); \ - ASSERT(args.length() == 3); \ + DCHECK(args.length() == 3); \ CONVERT_ARG_HANDLE_CHECKED(JSDataView, holder, 0); \ - CONVERT_ARG_HANDLE_CHECKED(Object, offset, 1); \ + CONVERT_NUMBER_ARG_HANDLE_CHECKED(offset, 1); \ CONVERT_BOOLEAN_ARG_CHECKED(is_little_endian, 2); \ Type result; \ if (DataViewGetValue( \ @@ -1481,12 +1484,12 @@ double DataViewConvertValue<double>(double value) { #define DATA_VIEW_SETTER(TypeName, Type) \ - RUNTIME_FUNCTION(Runtime_DataViewSet##TypeName) { \ + RUNTIME_FUNCTION(Runtime_DataViewSet##TypeName) { \ HandleScope scope(isolate); \ - ASSERT(args.length() == 4); \ + DCHECK(args.length() == 4); \ CONVERT_ARG_HANDLE_CHECKED(JSDataView, holder, 0); \ - CONVERT_ARG_HANDLE_CHECKED(Object, offset, 1); \ - CONVERT_ARG_HANDLE_CHECKED(Object, value, 2); \ + CONVERT_NUMBER_ARG_HANDLE_CHECKED(offset, 1); \ + CONVERT_NUMBER_ARG_HANDLE_CHECKED(value, 2); \ CONVERT_BOOLEAN_ARG_CHECKED(is_little_endian, 3); \ Type v = DataViewConvertValue<Type>(value->Number()); \ if (DataViewSetValue( \ @@ -1513,7 +1516,7 @@ DATA_VIEW_SETTER(Float64, double) RUNTIME_FUNCTION(Runtime_SetInitialize) { HandleScope scope(isolate); - ASSERT(args.length() == 1); + DCHECK(args.length() == 1); CONVERT_ARG_HANDLE_CHECKED(JSSet, holder, 0); Handle<OrderedHashSet> table = isolate->factory()->NewOrderedHashSet(); holder->set_table(*table); @@ -1523,19 +1526,19 @@ RUNTIME_FUNCTION(Runtime_SetInitialize) { RUNTIME_FUNCTION(Runtime_SetAdd) { HandleScope scope(isolate); - ASSERT(args.length() == 2); + DCHECK(args.length() == 2); CONVERT_ARG_HANDLE_CHECKED(JSSet, holder, 0); CONVERT_ARG_HANDLE_CHECKED(Object, key, 1); Handle<OrderedHashSet> table(OrderedHashSet::cast(holder->table())); table = OrderedHashSet::Add(table, key); holder->set_table(*table); - return isolate->heap()->undefined_value(); + return *holder; } RUNTIME_FUNCTION(Runtime_SetHas) { HandleScope scope(isolate); - ASSERT(args.length() == 2); + DCHECK(args.length() == 2); CONVERT_ARG_HANDLE_CHECKED(JSSet, holder, 0); CONVERT_ARG_HANDLE_CHECKED(Object, key, 1); Handle<OrderedHashSet> table(OrderedHashSet::cast(holder->table())); @@ -1545,19 +1548,20 @@ RUNTIME_FUNCTION(Runtime_SetHas) { RUNTIME_FUNCTION(Runtime_SetDelete) { HandleScope scope(isolate); - ASSERT(args.length() == 2); + DCHECK(args.length() == 2); 
CONVERT_ARG_HANDLE_CHECKED(JSSet, holder, 0); CONVERT_ARG_HANDLE_CHECKED(Object, key, 1); Handle<OrderedHashSet> table(OrderedHashSet::cast(holder->table())); - table = OrderedHashSet::Remove(table, key); + bool was_present = false; + table = OrderedHashSet::Remove(table, key, &was_present); holder->set_table(*table); - return isolate->heap()->undefined_value(); + return isolate->heap()->ToBoolean(was_present); } RUNTIME_FUNCTION(Runtime_SetClear) { HandleScope scope(isolate); - ASSERT(args.length() == 1); + DCHECK(args.length() == 1); CONVERT_ARG_HANDLE_CHECKED(JSSet, holder, 0); Handle<OrderedHashSet> table(OrderedHashSet::cast(holder->table())); table = OrderedHashSet::Clear(table); @@ -1568,45 +1572,41 @@ RUNTIME_FUNCTION(Runtime_SetClear) { RUNTIME_FUNCTION(Runtime_SetGetSize) { HandleScope scope(isolate); - ASSERT(args.length() == 1); + DCHECK(args.length() == 1); CONVERT_ARG_HANDLE_CHECKED(JSSet, holder, 0); Handle<OrderedHashSet> table(OrderedHashSet::cast(holder->table())); return Smi::FromInt(table->NumberOfElements()); } -RUNTIME_FUNCTION(Runtime_SetCreateIterator) { +RUNTIME_FUNCTION(Runtime_SetIteratorInitialize) { HandleScope scope(isolate); - ASSERT(args.length() == 2); - CONVERT_ARG_HANDLE_CHECKED(JSSet, holder, 0); - CONVERT_SMI_ARG_CHECKED(kind, 1) + DCHECK(args.length() == 3); + CONVERT_ARG_HANDLE_CHECKED(JSSetIterator, holder, 0); + CONVERT_ARG_HANDLE_CHECKED(JSSet, set, 1); + CONVERT_SMI_ARG_CHECKED(kind, 2) RUNTIME_ASSERT(kind == JSSetIterator::kKindValues || kind == JSSetIterator::kKindEntries); - Handle<OrderedHashSet> table(OrderedHashSet::cast(holder->table())); - return *JSSetIterator::Create(table, kind); + Handle<OrderedHashSet> table(OrderedHashSet::cast(set->table())); + holder->set_table(*table); + holder->set_index(Smi::FromInt(0)); + holder->set_kind(Smi::FromInt(kind)); + return isolate->heap()->undefined_value(); } RUNTIME_FUNCTION(Runtime_SetIteratorNext) { - HandleScope scope(isolate); - ASSERT(args.length() == 1); - CONVERT_ARG_HANDLE_CHECKED(JSSetIterator, holder, 0); - return *JSSetIterator::Next(holder); -} - - -RUNTIME_FUNCTION(Runtime_SetIteratorClose) { - HandleScope scope(isolate); - ASSERT(args.length() == 1); - CONVERT_ARG_HANDLE_CHECKED(JSSetIterator, holder, 0); - holder->Close(); - return isolate->heap()->undefined_value(); + SealHandleScope shs(isolate); + DCHECK(args.length() == 2); + CONVERT_ARG_CHECKED(JSSetIterator, holder, 0); + CONVERT_ARG_CHECKED(JSArray, value_array, 1); + return holder->Next(value_array); } RUNTIME_FUNCTION(Runtime_MapInitialize) { HandleScope scope(isolate); - ASSERT(args.length() == 1); + DCHECK(args.length() == 1); CONVERT_ARG_HANDLE_CHECKED(JSMap, holder, 0); Handle<OrderedHashMap> table = isolate->factory()->NewOrderedHashMap(); holder->set_table(*table); @@ -1616,7 +1616,7 @@ RUNTIME_FUNCTION(Runtime_MapInitialize) { RUNTIME_FUNCTION(Runtime_MapGet) { HandleScope scope(isolate); - ASSERT(args.length() == 2); + DCHECK(args.length() == 2); CONVERT_ARG_HANDLE_CHECKED(JSMap, holder, 0); CONVERT_ARG_HANDLE_CHECKED(Object, key, 1); Handle<OrderedHashMap> table(OrderedHashMap::cast(holder->table())); @@ -1627,7 +1627,7 @@ RUNTIME_FUNCTION(Runtime_MapGet) { RUNTIME_FUNCTION(Runtime_MapHas) { HandleScope scope(isolate); - ASSERT(args.length() == 2); + DCHECK(args.length() == 2); CONVERT_ARG_HANDLE_CHECKED(JSMap, holder, 0); CONVERT_ARG_HANDLE_CHECKED(Object, key, 1); Handle<OrderedHashMap> table(OrderedHashMap::cast(holder->table())); @@ -1638,21 +1638,21 @@ RUNTIME_FUNCTION(Runtime_MapHas) { 
RUNTIME_FUNCTION(Runtime_MapDelete) { HandleScope scope(isolate); - ASSERT(args.length() == 2); + DCHECK(args.length() == 2); CONVERT_ARG_HANDLE_CHECKED(JSMap, holder, 0); CONVERT_ARG_HANDLE_CHECKED(Object, key, 1); Handle<OrderedHashMap> table(OrderedHashMap::cast(holder->table())); - Handle<Object> lookup(table->Lookup(key), isolate); + bool was_present = false; Handle<OrderedHashMap> new_table = - OrderedHashMap::Put(table, key, isolate->factory()->the_hole_value()); + OrderedHashMap::Remove(table, key, &was_present); holder->set_table(*new_table); - return isolate->heap()->ToBoolean(!lookup->IsTheHole()); + return isolate->heap()->ToBoolean(was_present); } RUNTIME_FUNCTION(Runtime_MapClear) { HandleScope scope(isolate); - ASSERT(args.length() == 1); + DCHECK(args.length() == 1); CONVERT_ARG_HANDLE_CHECKED(JSMap, holder, 0); Handle<OrderedHashMap> table(OrderedHashMap::cast(holder->table())); table = OrderedHashMap::Clear(table); @@ -1663,70 +1663,88 @@ RUNTIME_FUNCTION(Runtime_MapClear) { RUNTIME_FUNCTION(Runtime_MapSet) { HandleScope scope(isolate); - ASSERT(args.length() == 3); + DCHECK(args.length() == 3); CONVERT_ARG_HANDLE_CHECKED(JSMap, holder, 0); CONVERT_ARG_HANDLE_CHECKED(Object, key, 1); CONVERT_ARG_HANDLE_CHECKED(Object, value, 2); Handle<OrderedHashMap> table(OrderedHashMap::cast(holder->table())); Handle<OrderedHashMap> new_table = OrderedHashMap::Put(table, key, value); holder->set_table(*new_table); - return isolate->heap()->undefined_value(); + return *holder; } RUNTIME_FUNCTION(Runtime_MapGetSize) { HandleScope scope(isolate); - ASSERT(args.length() == 1); + DCHECK(args.length() == 1); CONVERT_ARG_HANDLE_CHECKED(JSMap, holder, 0); Handle<OrderedHashMap> table(OrderedHashMap::cast(holder->table())); return Smi::FromInt(table->NumberOfElements()); } -RUNTIME_FUNCTION(Runtime_MapCreateIterator) { +RUNTIME_FUNCTION(Runtime_MapIteratorInitialize) { HandleScope scope(isolate); - ASSERT(args.length() == 2); - CONVERT_ARG_HANDLE_CHECKED(JSMap, holder, 0); - CONVERT_SMI_ARG_CHECKED(kind, 1) + DCHECK(args.length() == 3); + CONVERT_ARG_HANDLE_CHECKED(JSMapIterator, holder, 0); + CONVERT_ARG_HANDLE_CHECKED(JSMap, map, 1); + CONVERT_SMI_ARG_CHECKED(kind, 2) RUNTIME_ASSERT(kind == JSMapIterator::kKindKeys || kind == JSMapIterator::kKindValues || kind == JSMapIterator::kKindEntries); - Handle<OrderedHashMap> table(OrderedHashMap::cast(holder->table())); - return *JSMapIterator::Create(table, kind); + Handle<OrderedHashMap> table(OrderedHashMap::cast(map->table())); + holder->set_table(*table); + holder->set_index(Smi::FromInt(0)); + holder->set_kind(Smi::FromInt(kind)); + return isolate->heap()->undefined_value(); } -RUNTIME_FUNCTION(Runtime_MapIteratorNext) { +RUNTIME_FUNCTION(Runtime_GetWeakMapEntries) { HandleScope scope(isolate); - ASSERT(args.length() == 1); - CONVERT_ARG_HANDLE_CHECKED(JSMapIterator, holder, 0); - return *JSMapIterator::Next(holder); + DCHECK(args.length() == 1); + CONVERT_ARG_HANDLE_CHECKED(JSWeakCollection, holder, 0); + Handle<ObjectHashTable> table(ObjectHashTable::cast(holder->table())); + Handle<FixedArray> entries = + isolate->factory()->NewFixedArray(table->NumberOfElements() * 2); + { + DisallowHeapAllocation no_gc; + int number_of_non_hole_elements = 0; + for (int i = 0; i < table->Capacity(); i++) { + Handle<Object> key(table->KeyAt(i), isolate); + if (table->IsKey(*key)) { + entries->set(number_of_non_hole_elements++, *key); + entries->set(number_of_non_hole_elements++, table->Lookup(key)); + } + } + DCHECK_EQ(table->NumberOfElements() * 2, 
number_of_non_hole_elements); + } + return *isolate->factory()->NewJSArrayWithElements(entries); } -RUNTIME_FUNCTION(Runtime_MapIteratorClose) { - HandleScope scope(isolate); - ASSERT(args.length() == 1); - CONVERT_ARG_HANDLE_CHECKED(JSMapIterator, holder, 0); - holder->Close(); - return isolate->heap()->undefined_value(); +RUNTIME_FUNCTION(Runtime_MapIteratorNext) { + SealHandleScope shs(isolate); + DCHECK(args.length() == 2); + CONVERT_ARG_CHECKED(JSMapIterator, holder, 0); + CONVERT_ARG_CHECKED(JSArray, value_array, 1); + return holder->Next(value_array); } static Handle<JSWeakCollection> WeakCollectionInitialize( Isolate* isolate, Handle<JSWeakCollection> weak_collection) { - ASSERT(weak_collection->map()->inobject_properties() == 0); + DCHECK(weak_collection->map()->inobject_properties() == 0); Handle<ObjectHashTable> table = ObjectHashTable::New(isolate, 0); weak_collection->set_table(*table); - weak_collection->set_next(Smi::FromInt(0)); return weak_collection; } RUNTIME_FUNCTION(Runtime_WeakCollectionInitialize) { HandleScope scope(isolate); - ASSERT(args.length() == 1); + DCHECK(args.length() == 1); CONVERT_ARG_HANDLE_CHECKED(JSWeakCollection, weak_collection, 0); return *WeakCollectionInitialize(isolate, weak_collection); } @@ -1734,11 +1752,13 @@ RUNTIME_FUNCTION(Runtime_WeakCollectionInitialize) { RUNTIME_FUNCTION(Runtime_WeakCollectionGet) { HandleScope scope(isolate); - ASSERT(args.length() == 2); + DCHECK(args.length() == 2); CONVERT_ARG_HANDLE_CHECKED(JSWeakCollection, weak_collection, 0); CONVERT_ARG_HANDLE_CHECKED(Object, key, 1); + RUNTIME_ASSERT(key->IsJSReceiver() || key->IsSymbol()); Handle<ObjectHashTable> table( ObjectHashTable::cast(weak_collection->table())); + RUNTIME_ASSERT(table->IsKey(*key)); Handle<Object> lookup(table->Lookup(key), isolate); return lookup->IsTheHole() ? 
isolate->heap()->undefined_value() : *lookup; } @@ -1746,11 +1766,13 @@ RUNTIME_FUNCTION(Runtime_WeakCollectionGet) { RUNTIME_FUNCTION(Runtime_WeakCollectionHas) { HandleScope scope(isolate); - ASSERT(args.length() == 2); + DCHECK(args.length() == 2); CONVERT_ARG_HANDLE_CHECKED(JSWeakCollection, weak_collection, 0); CONVERT_ARG_HANDLE_CHECKED(Object, key, 1); + RUNTIME_ASSERT(key->IsJSReceiver() || key->IsSymbol()); Handle<ObjectHashTable> table( ObjectHashTable::cast(weak_collection->table())); + RUNTIME_ASSERT(table->IsKey(*key)); Handle<Object> lookup(table->Lookup(key), isolate); return isolate->heap()->ToBoolean(!lookup->IsTheHole()); } @@ -1758,79 +1780,116 @@ RUNTIME_FUNCTION(Runtime_WeakCollectionHas) { RUNTIME_FUNCTION(Runtime_WeakCollectionDelete) { HandleScope scope(isolate); - ASSERT(args.length() == 2); + DCHECK(args.length() == 2); CONVERT_ARG_HANDLE_CHECKED(JSWeakCollection, weak_collection, 0); CONVERT_ARG_HANDLE_CHECKED(Object, key, 1); + RUNTIME_ASSERT(key->IsJSReceiver() || key->IsSymbol()); Handle<ObjectHashTable> table(ObjectHashTable::cast( weak_collection->table())); - Handle<Object> lookup(table->Lookup(key), isolate); + RUNTIME_ASSERT(table->IsKey(*key)); + bool was_present = false; Handle<ObjectHashTable> new_table = - ObjectHashTable::Put(table, key, isolate->factory()->the_hole_value()); + ObjectHashTable::Remove(table, key, &was_present); weak_collection->set_table(*new_table); - return isolate->heap()->ToBoolean(!lookup->IsTheHole()); + return isolate->heap()->ToBoolean(was_present); } RUNTIME_FUNCTION(Runtime_WeakCollectionSet) { HandleScope scope(isolate); - ASSERT(args.length() == 3); + DCHECK(args.length() == 3); CONVERT_ARG_HANDLE_CHECKED(JSWeakCollection, weak_collection, 0); CONVERT_ARG_HANDLE_CHECKED(Object, key, 1); + RUNTIME_ASSERT(key->IsJSReceiver() || key->IsSymbol()); CONVERT_ARG_HANDLE_CHECKED(Object, value, 2); Handle<ObjectHashTable> table( ObjectHashTable::cast(weak_collection->table())); + RUNTIME_ASSERT(table->IsKey(*key)); Handle<ObjectHashTable> new_table = ObjectHashTable::Put(table, key, value); weak_collection->set_table(*new_table); - return isolate->heap()->undefined_value(); + return *weak_collection; } -RUNTIME_FUNCTION(Runtime_ClassOf) { - SealHandleScope shs(isolate); - ASSERT(args.length() == 1); - CONVERT_ARG_CHECKED(Object, obj, 0); - if (!obj->IsJSObject()) return isolate->heap()->null_value(); - return JSObject::cast(obj)->class_name(); +RUNTIME_FUNCTION(Runtime_GetWeakSetValues) { + HandleScope scope(isolate); + DCHECK(args.length() == 1); + CONVERT_ARG_HANDLE_CHECKED(JSWeakCollection, holder, 0); + Handle<ObjectHashTable> table(ObjectHashTable::cast(holder->table())); + Handle<FixedArray> values = + isolate->factory()->NewFixedArray(table->NumberOfElements()); + { + DisallowHeapAllocation no_gc; + int number_of_non_hole_elements = 0; + for (int i = 0; i < table->Capacity(); i++) { + Handle<Object> key(table->KeyAt(i), isolate); + if (table->IsKey(*key)) { + values->set(number_of_non_hole_elements++, *key); + } + } + DCHECK_EQ(table->NumberOfElements(), number_of_non_hole_elements); + } + return *isolate->factory()->NewJSArrayWithElements(values); } RUNTIME_FUNCTION(Runtime_GetPrototype) { HandleScope scope(isolate); - ASSERT(args.length() == 1); + DCHECK(args.length() == 1); CONVERT_ARG_HANDLE_CHECKED(Object, obj, 0); // We don't expect access checks to be needed on JSProxy objects. 
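
The collection hunks above consistently change what the runtime entries hand back to the JS builtins: SetDelete, MapDelete and WeakCollectionDelete now report a was_present flag, MapSet and WeakCollectionSet return the holder itself, and iteration moves to an initialize-once SetIteratorInitialize/MapIteratorInitialize plus a Next(value_array) step. A sketch of the user-visible ES6 contract these stubs back, in plain JavaScript rather than V8 internals (Set/Map/WeakMap were still behind --harmony flags in node at this point):

    var s = new Set();
    console.log(s.add(1) === s);          // true  -- mutators return the receiver
    console.log(s.delete(1));             // true  -- the key was present
    console.log(s.delete(1));             // false -- second delete finds nothing

    var m = new Map();
    console.log(m.set('k', 7) === m);     // true  -- set() chains like add()
    console.log(m.delete('k'));           // true, then false on a repeat

    var wm = new WeakMap(), key = {};
    console.log(wm.set(key, 42) === wm);  // true  -- same chaining contract
    try { wm.set(1, 2); }                 // primitives are rejected as weak keys,
    catch (e) { console.log(e.name); }    // TypeError -- cf. the IsJSReceiver asserts

    var it = s.add(2).add(3).values();    // iterator: initialized once, stepped via next()
    console.log(it.next());               // { value: 2, done: false }
    console.log(it.next());               // { value: 3, done: false }
    console.log(it.next());               // { value: undefined, done: true }
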
- ASSERT(!obj->IsAccessCheckNeeded() || obj->IsJSObject()); + DCHECK(!obj->IsAccessCheckNeeded() || obj->IsJSObject()); + PrototypeIterator iter(isolate, obj, PrototypeIterator::START_AT_RECEIVER); do { - if (obj->IsAccessCheckNeeded() && - !isolate->MayNamedAccess(Handle<JSObject>::cast(obj), - isolate->factory()->proto_string(), - v8::ACCESS_GET)) { - isolate->ReportFailedAccessCheck(Handle<JSObject>::cast(obj), - v8::ACCESS_GET); + if (PrototypeIterator::GetCurrent(iter)->IsAccessCheckNeeded() && + !isolate->MayNamedAccess( + Handle<JSObject>::cast(PrototypeIterator::GetCurrent(iter)), + isolate->factory()->proto_string(), v8::ACCESS_GET)) { + isolate->ReportFailedAccessCheck( + Handle<JSObject>::cast(PrototypeIterator::GetCurrent(iter)), + v8::ACCESS_GET); RETURN_FAILURE_IF_SCHEDULED_EXCEPTION(isolate); return isolate->heap()->undefined_value(); } - obj = Object::GetPrototype(isolate, obj); - } while (obj->IsJSObject() && - JSObject::cast(*obj)->map()->is_hidden_prototype()); - return *obj; + iter.AdvanceIgnoringProxies(); + if (PrototypeIterator::GetCurrent(iter)->IsJSProxy()) { + return *PrototypeIterator::GetCurrent(iter); + } + } while (!iter.IsAtEnd(PrototypeIterator::END_AT_NON_HIDDEN)); + return *PrototypeIterator::GetCurrent(iter); } static inline Handle<Object> GetPrototypeSkipHiddenPrototypes( Isolate* isolate, Handle<Object> receiver) { - Handle<Object> current = Object::GetPrototype(isolate, receiver); - while (current->IsJSObject() && - JSObject::cast(*current)->map()->is_hidden_prototype()) { - current = Object::GetPrototype(isolate, current); + PrototypeIterator iter(isolate, receiver); + while (!iter.IsAtEnd(PrototypeIterator::END_AT_NON_HIDDEN)) { + if (PrototypeIterator::GetCurrent(iter)->IsJSProxy()) { + return PrototypeIterator::GetCurrent(iter); + } + iter.Advance(); } - return current; + return PrototypeIterator::GetCurrent(iter); +} + + +RUNTIME_FUNCTION(Runtime_InternalSetPrototype) { + HandleScope scope(isolate); + DCHECK(args.length() == 2); + CONVERT_ARG_HANDLE_CHECKED(JSObject, obj, 0); + CONVERT_ARG_HANDLE_CHECKED(Object, prototype, 1); + DCHECK(!obj->IsAccessCheckNeeded()); + DCHECK(!obj->map()->is_observed()); + Handle<Object> result; + ASSIGN_RETURN_FAILURE_ON_EXCEPTION( + isolate, result, JSObject::SetPrototype(obj, prototype, false)); + return *result; } RUNTIME_FUNCTION(Runtime_SetPrototype) { HandleScope scope(isolate); - ASSERT(args.length() == 2); + DCHECK(args.length() == 2); CONVERT_ARG_HANDLE_CHECKED(JSObject, obj, 0); CONVERT_ARG_HANDLE_CHECKED(Object, prototype, 1); if (obj->IsAccessCheckNeeded() && @@ -1865,126 +1924,19 @@ RUNTIME_FUNCTION(Runtime_SetPrototype) { RUNTIME_FUNCTION(Runtime_IsInPrototypeChain) { HandleScope shs(isolate); - ASSERT(args.length() == 2); + DCHECK(args.length() == 2); // See ECMA-262, section 15.3.5.3, page 88 (steps 5 - 8). 
CONVERT_ARG_HANDLE_CHECKED(Object, O, 0); CONVERT_ARG_HANDLE_CHECKED(Object, V, 1); + PrototypeIterator iter(isolate, V, PrototypeIterator::START_AT_RECEIVER); while (true) { - Handle<Object> prototype = Object::GetPrototype(isolate, V); - if (prototype->IsNull()) return isolate->heap()->false_value(); - if (*O == *prototype) return isolate->heap()->true_value(); - V = prototype; + iter.AdvanceIgnoringProxies(); + if (iter.IsAtEnd()) return isolate->heap()->false_value(); + if (iter.IsAtEnd(O)) return isolate->heap()->true_value(); } } -static bool CheckAccessException(Object* callback, - v8::AccessType access_type) { - DisallowHeapAllocation no_gc; - ASSERT(!callback->IsForeign()); - if (callback->IsAccessorInfo()) { - AccessorInfo* info = AccessorInfo::cast(callback); - return - (access_type == v8::ACCESS_HAS && - (info->all_can_read() || info->all_can_write())) || - (access_type == v8::ACCESS_GET && info->all_can_read()) || - (access_type == v8::ACCESS_SET && info->all_can_write()); - } - if (callback->IsAccessorPair()) { - AccessorPair* info = AccessorPair::cast(callback); - return - (access_type == v8::ACCESS_HAS && - (info->all_can_read() || info->all_can_write())) || - (access_type == v8::ACCESS_GET && info->all_can_read()) || - (access_type == v8::ACCESS_SET && info->all_can_write()); - } - return false; -} - - -template<class Key> -static bool CheckGenericAccess( - Handle<JSObject> receiver, - Handle<JSObject> holder, - Key key, - v8::AccessType access_type, - bool (Isolate::*mayAccess)(Handle<JSObject>, Key, v8::AccessType)) { - Isolate* isolate = receiver->GetIsolate(); - for (Handle<JSObject> current = receiver; - true; - current = handle(JSObject::cast(current->GetPrototype()), isolate)) { - if (current->IsAccessCheckNeeded() && - !(isolate->*mayAccess)(current, key, access_type)) { - return false; - } - if (current.is_identical_to(holder)) break; - } - return true; -} - - -enum AccessCheckResult { - ACCESS_FORBIDDEN, - ACCESS_ALLOWED, - ACCESS_ABSENT -}; - - -static AccessCheckResult CheckPropertyAccess(Handle<JSObject> obj, - Handle<Name> name, - v8::AccessType access_type) { - uint32_t index; - if (name->AsArrayIndex(&index)) { - // TODO(1095): we should traverse hidden prototype hierachy as well. - if (CheckGenericAccess( - obj, obj, index, access_type, &Isolate::MayIndexedAccess)) { - return ACCESS_ALLOWED; - } - - obj->GetIsolate()->ReportFailedAccessCheck(obj, access_type); - return ACCESS_FORBIDDEN; - } - - Isolate* isolate = obj->GetIsolate(); - LookupResult lookup(isolate); - obj->LocalLookup(name, &lookup, true); - - if (!lookup.IsProperty()) return ACCESS_ABSENT; - Handle<JSObject> holder(lookup.holder(), isolate); - if (CheckGenericAccess<Handle<Object> >( - obj, holder, name, access_type, &Isolate::MayNamedAccess)) { - return ACCESS_ALLOWED; - } - - // Access check callback denied the access, but some properties - // can have a special permissions which override callbacks descision - // (currently see v8::AccessControl). - // API callbacks can have per callback access exceptions. - switch (lookup.type()) { - case CALLBACKS: - if (CheckAccessException(lookup.GetCallbackObject(), access_type)) { - return ACCESS_ALLOWED; - } - break; - case INTERCEPTOR: - // If the object has an interceptor, try real named properties. - // Overwrite the result to fetch the correct property later. 
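
The prototype-walking runtime functions (GetPrototype, GetPrototypeSkipHiddenPrototypes, IsInPrototypeChain) are all rebuilt on the new PrototypeIterator, which centralizes the advance / skip-hidden-prototypes / stop-at-proxy logic that each call site used to hand-roll. Setting proxies and hidden prototypes aside, the loop in Runtime_IsInPrototypeChain is equivalent to this small JavaScript helper:

    // Walks V's prototype chain and reports whether O appears on it,
    // mirroring iter.AdvanceIgnoringProxies() / iter.IsAtEnd(O) above
    // (proxies and hidden prototypes aside).
    function isInPrototypeChain(O, V) {
      for (;;) {
        V = Object.getPrototypeOf(V);
        if (V === null) return false;  // reached the end without finding O
        if (V === O) return true;
      }
    }

    function A() {}
    var a = new A();
    console.log(isInPrototypeChain(A.prototype, a));      // true, like `a instanceof A`
    console.log(isInPrototypeChain(Array.prototype, a));  // false
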
- holder->LookupRealNamedProperty(name, &lookup); - if (lookup.IsProperty() && lookup.IsPropertyCallbacks()) { - if (CheckAccessException(lookup.GetCallbackObject(), access_type)) { - return ACCESS_ALLOWED; - } - } - break; - default: - break; - } - - isolate->ReportFailedAccessCheck(obj, access_type); - return ACCESS_FORBIDDEN; -} - - // Enumerator used as indices into the array returned from GetOwnProperty enum PropertyDescriptorIndices { IS_ACCESSOR_INDEX, @@ -2003,61 +1955,68 @@ MUST_USE_RESULT static MaybeHandle<Object> GetOwnProperty(Isolate* isolate, Handle<Name> name) { Heap* heap = isolate->heap(); Factory* factory = isolate->factory(); - // Due to some WebKit tests, we want to make sure that we do not log - // more than one access failure here. - AccessCheckResult access_check_result = - CheckPropertyAccess(obj, name, v8::ACCESS_HAS); - RETURN_EXCEPTION_IF_SCHEDULED_EXCEPTION(isolate, Object); - switch (access_check_result) { - case ACCESS_FORBIDDEN: return factory->false_value(); - case ACCESS_ALLOWED: break; - case ACCESS_ABSENT: return factory->undefined_value(); - } - - PropertyAttributes attrs = JSReceiver::GetLocalPropertyAttribute(obj, name); - if (attrs == ABSENT) { - RETURN_EXCEPTION_IF_SCHEDULED_EXCEPTION(isolate, Object); - return factory->undefined_value(); - } - ASSERT(!isolate->has_scheduled_exception()); - Handle<AccessorPair> accessors; - bool has_accessors = - JSObject::GetLocalPropertyAccessorPair(obj, name).ToHandle(&accessors); - Handle<FixedArray> elms = isolate->factory()->NewFixedArray(DESCRIPTOR_SIZE); + + PropertyAttributes attrs; + uint32_t index = 0; + Handle<Object> value; + MaybeHandle<AccessorPair> maybe_accessors; + // TODO(verwaest): Unify once indexed properties can be handled by the + // LookupIterator. + if (name->AsArrayIndex(&index)) { + // Get attributes. + Maybe<PropertyAttributes> maybe = + JSReceiver::GetOwnElementAttribute(obj, index); + if (!maybe.has_value) return MaybeHandle<Object>(); + attrs = maybe.value; + if (attrs == ABSENT) return factory->undefined_value(); + + // Get AccessorPair if present. + maybe_accessors = JSObject::GetOwnElementAccessorPair(obj, index); + + // Get value if not an AccessorPair. + if (maybe_accessors.is_null()) { + ASSIGN_RETURN_ON_EXCEPTION(isolate, value, + Runtime::GetElementOrCharAt(isolate, obj, index), Object); + } + } else { + // Get attributes. + LookupIterator it(obj, name, LookupIterator::CHECK_OWN); + Maybe<PropertyAttributes> maybe = JSObject::GetPropertyAttributes(&it); + if (!maybe.has_value) return MaybeHandle<Object>(); + attrs = maybe.value; + if (attrs == ABSENT) return factory->undefined_value(); + + // Get AccessorPair if present. + if (it.state() == LookupIterator::PROPERTY && + it.property_kind() == LookupIterator::ACCESSOR && + it.GetAccessors()->IsAccessorPair()) { + maybe_accessors = Handle<AccessorPair>::cast(it.GetAccessors()); + } + + // Get value if not an AccessorPair. 
+ if (maybe_accessors.is_null()) { + ASSIGN_RETURN_ON_EXCEPTION( + isolate, value, Object::GetProperty(&it), Object); + } + } + DCHECK(!isolate->has_pending_exception()); + Handle<FixedArray> elms = factory->NewFixedArray(DESCRIPTOR_SIZE); elms->set(ENUMERABLE_INDEX, heap->ToBoolean((attrs & DONT_ENUM) == 0)); elms->set(CONFIGURABLE_INDEX, heap->ToBoolean((attrs & DONT_DELETE) == 0)); - elms->set(IS_ACCESSOR_INDEX, heap->ToBoolean(has_accessors)); + elms->set(IS_ACCESSOR_INDEX, heap->ToBoolean(!maybe_accessors.is_null())); - if (!has_accessors) { - elms->set(WRITABLE_INDEX, heap->ToBoolean((attrs & READ_ONLY) == 0)); - // Runtime::GetObjectProperty does access check. - Handle<Object> value; - ASSIGN_RETURN_ON_EXCEPTION( - isolate, value, Runtime::GetObjectProperty(isolate, obj, name), - Object); - elms->set(VALUE_INDEX, *value); - } else { - // Access checks are performed for both accessors separately. - // When they fail, the respective field is not set in the descriptor. + Handle<AccessorPair> accessors; + if (maybe_accessors.ToHandle(&accessors)) { Handle<Object> getter(accessors->GetComponent(ACCESSOR_GETTER), isolate); Handle<Object> setter(accessors->GetComponent(ACCESSOR_SETTER), isolate); - - if (!getter->IsMap() && CheckPropertyAccess(obj, name, v8::ACCESS_GET)) { - ASSERT(!isolate->has_scheduled_exception()); - elms->set(GETTER_INDEX, *getter); - } else { - RETURN_EXCEPTION_IF_SCHEDULED_EXCEPTION(isolate, Object); - } - - if (!setter->IsMap() && CheckPropertyAccess(obj, name, v8::ACCESS_SET)) { - ASSERT(!isolate->has_scheduled_exception()); - elms->set(SETTER_INDEX, *setter); - } else { - RETURN_EXCEPTION_IF_SCHEDULED_EXCEPTION(isolate, Object); - } + elms->set(GETTER_INDEX, *getter); + elms->set(SETTER_INDEX, *setter); + } else { + elms->set(WRITABLE_INDEX, heap->ToBoolean((attrs & READ_ONLY) == 0)); + elms->set(VALUE_INDEX, *value); } - return isolate->factory()->NewJSArrayWithElements(elms); + return factory->NewJSArrayWithElements(elms); } @@ -2070,7 +2029,7 @@ MUST_USE_RESULT static MaybeHandle<Object> GetOwnProperty(Isolate* isolate, // [true, GetFunction, SetFunction, Enumerable, Configurable] RUNTIME_FUNCTION(Runtime_GetOwnProperty) { HandleScope scope(isolate); - ASSERT(args.length() == 2); + DCHECK(args.length() == 2); CONVERT_ARG_HANDLE_CHECKED(JSObject, obj, 0); CONVERT_ARG_HANDLE_CHECKED(Name, name, 1); Handle<Object> result; @@ -2082,7 +2041,7 @@ RUNTIME_FUNCTION(Runtime_GetOwnProperty) { RUNTIME_FUNCTION(Runtime_PreventExtensions) { HandleScope scope(isolate); - ASSERT(args.length() == 1); + DCHECK(args.length() == 1); CONVERT_ARG_HANDLE_CHECKED(JSObject, obj, 0); Handle<Object> result; ASSIGN_RETURN_FAILURE_ON_EXCEPTION( @@ -2093,13 +2052,13 @@ RUNTIME_FUNCTION(Runtime_PreventExtensions) { RUNTIME_FUNCTION(Runtime_IsExtensible) { SealHandleScope shs(isolate); - ASSERT(args.length() == 1); + DCHECK(args.length() == 1); CONVERT_ARG_CHECKED(JSObject, obj, 0); if (obj->IsJSGlobalProxy()) { - Object* proto = obj->GetPrototype(); - if (proto->IsNull()) return isolate->heap()->false_value(); - ASSERT(proto->IsJSGlobalObject()); - obj = JSObject::cast(proto); + PrototypeIterator iter(isolate, obj); + if (iter.IsAtEnd()) return isolate->heap()->false_value(); + DCHECK(iter.GetCurrent()->IsJSGlobalObject()); + obj = JSObject::cast(iter.GetCurrent()); } return isolate->heap()->ToBoolean(obj->map()->is_extensible()); } @@ -2107,7 +2066,7 @@ RUNTIME_FUNCTION(Runtime_IsExtensible) { RUNTIME_FUNCTION(Runtime_RegExpCompile) { HandleScope scope(isolate); - ASSERT(args.length() == 
3); + DCHECK(args.length() == 3); CONVERT_ARG_HANDLE_CHECKED(JSRegExp, re, 0); CONVERT_ARG_HANDLE_CHECKED(String, pattern, 1); CONVERT_ARG_HANDLE_CHECKED(String, flags, 2); @@ -2120,7 +2079,7 @@ RUNTIME_FUNCTION(Runtime_RegExpCompile) { RUNTIME_FUNCTION(Runtime_CreateApiFunction) { HandleScope scope(isolate); - ASSERT(args.length() == 2); + DCHECK(args.length() == 2); CONVERT_ARG_HANDLE_CHECKED(FunctionTemplateInfo, data, 0); CONVERT_ARG_HANDLE_CHECKED(Object, prototype, 1); return *isolate->factory()->CreateApiFunction(data, prototype); @@ -2129,7 +2088,7 @@ RUNTIME_FUNCTION(Runtime_CreateApiFunction) { RUNTIME_FUNCTION(Runtime_IsTemplate) { SealHandleScope shs(isolate); - ASSERT(args.length() == 1); + DCHECK(args.length() == 1); CONVERT_ARG_HANDLE_CHECKED(Object, arg, 0); bool result = arg->IsObjectTemplateInfo() || arg->IsFunctionTemplateInfo(); return isolate->heap()->ToBoolean(result); @@ -2138,7 +2097,7 @@ RUNTIME_FUNCTION(Runtime_IsTemplate) { RUNTIME_FUNCTION(Runtime_GetTemplateField) { SealHandleScope shs(isolate); - ASSERT(args.length() == 2); + DCHECK(args.length() == 2); CONVERT_ARG_CHECKED(HeapObject, templ, 0); CONVERT_SMI_ARG_CHECKED(index, 1); int offset = index * kPointerSize + HeapObject::kHeaderSize; @@ -2157,7 +2116,7 @@ RUNTIME_FUNCTION(Runtime_GetTemplateField) { RUNTIME_FUNCTION(Runtime_DisableAccessChecks) { HandleScope scope(isolate); - ASSERT(args.length() == 1); + DCHECK(args.length() == 1); CONVERT_ARG_HANDLE_CHECKED(HeapObject, object, 0); Handle<Map> old_map(object->map()); bool needs_access_checks = old_map->is_access_check_needed(); @@ -2165,11 +2124,7 @@ RUNTIME_FUNCTION(Runtime_DisableAccessChecks) { // Copy map so it won't interfere constructor's initial map. Handle<Map> new_map = Map::Copy(old_map); new_map->set_is_access_check_needed(false); - if (object->IsJSObject()) { - JSObject::MigrateToMap(Handle<JSObject>::cast(object), new_map); - } else { - object->set_map(*new_map); - } + JSObject::MigrateToMap(Handle<JSObject>::cast(object), new_map); } return isolate->heap()->ToBoolean(needs_access_checks); } @@ -2177,52 +2132,14 @@ RUNTIME_FUNCTION(Runtime_DisableAccessChecks) { RUNTIME_FUNCTION(Runtime_EnableAccessChecks) { HandleScope scope(isolate); - ASSERT(args.length() == 1); - CONVERT_ARG_HANDLE_CHECKED(HeapObject, object, 0); - Handle<Map> old_map(object->map()); - if (!old_map->is_access_check_needed()) { - // Copy map so it won't interfere constructor's initial map. - Handle<Map> new_map = Map::Copy(old_map); - new_map->set_is_access_check_needed(true); - if (object->IsJSObject()) { - JSObject::MigrateToMap(Handle<JSObject>::cast(object), new_map); - } else { - object->set_map(*new_map); - } - } - return isolate->heap()->undefined_value(); -} - - -// Transform getter or setter into something DefineAccessor can handle. 
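
The GetOwnProperty rewrite above swaps the deleted CheckPropertyAccess/CheckAccessException plumbing for the new LookupIterator, but still fills the same DESCRIPTOR_SIZE fixed array (accessor flag, value or getter/setter, writable, enumerable, configurable) that the JS side unpacks into a property descriptor. The observable result of Object.getOwnPropertyDescriptor is unchanged:

    var obj = { x: 1 };
    Object.defineProperty(obj, 'y', {
      get: function () { return 2; },
      enumerable: false
    });

    console.log(Object.getOwnPropertyDescriptor(obj, 'x'));
    // { value: 1, writable: true, enumerable: true, configurable: true }

    console.log(Object.getOwnPropertyDescriptor(obj, 'y'));
    // { get: [Function], set: undefined, enumerable: false, configurable: false }
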
-static Handle<Object> InstantiateAccessorComponent(Isolate* isolate, - Handle<Object> component) { - if (component->IsUndefined()) return isolate->factory()->null_value(); - Handle<FunctionTemplateInfo> info = - Handle<FunctionTemplateInfo>::cast(component); - return Utils::OpenHandle(*Utils::ToLocal(info)->GetFunction()); -} - - -RUNTIME_FUNCTION(Runtime_SetAccessorProperty) { - HandleScope scope(isolate); - ASSERT(args.length() == 6); + DCHECK(args.length() == 1); CONVERT_ARG_HANDLE_CHECKED(JSObject, object, 0); - CONVERT_ARG_HANDLE_CHECKED(Name, name, 1); - CONVERT_ARG_HANDLE_CHECKED(Object, getter, 2); - CONVERT_ARG_HANDLE_CHECKED(Object, setter, 3); - CONVERT_SMI_ARG_CHECKED(attribute, 4); - CONVERT_SMI_ARG_CHECKED(access_control, 5); - RUNTIME_ASSERT(getter->IsUndefined() || getter->IsFunctionTemplateInfo()); - RUNTIME_ASSERT(setter->IsUndefined() || setter->IsFunctionTemplateInfo()); - RUNTIME_ASSERT(PropertyDetails::AttributesField::is_valid( - static_cast<PropertyAttributes>(attribute))); - JSObject::DefineAccessor(object, - name, - InstantiateAccessorComponent(isolate, getter), - InstantiateAccessorComponent(isolate, setter), - static_cast<PropertyAttributes>(attribute), - static_cast<v8::AccessControl>(access_control)); + Handle<Map> old_map(object->map()); + RUNTIME_ASSERT(!old_map->is_access_check_needed()); + // Copy map so it won't interfere constructor's initial map. + Handle<Map> new_map = Map::Copy(old_map); + new_map->set_is_access_check_needed(true); + JSObject::MigrateToMap(object, new_map); return isolate->heap()->undefined_value(); } @@ -2236,11 +2153,56 @@ static Object* ThrowRedeclarationError(Isolate* isolate, Handle<String> name) { } -RUNTIME_FUNCTION(RuntimeHidden_DeclareGlobals) { +// May throw a RedeclarationError. +static Object* DeclareGlobals(Isolate* isolate, Handle<GlobalObject> global, + Handle<String> name, Handle<Object> value, + PropertyAttributes attr, bool is_var, + bool is_const, bool is_function) { + // Do the lookup own properties only, see ES5 erratum. + LookupIterator it(global, name, LookupIterator::CHECK_HIDDEN); + Maybe<PropertyAttributes> maybe = JSReceiver::GetPropertyAttributes(&it); + DCHECK(maybe.has_value); + PropertyAttributes old_attributes = maybe.value; + + if (old_attributes != ABSENT) { + // The name was declared before; check for conflicting re-declarations. + if (is_const) return ThrowRedeclarationError(isolate, name); + + // Skip var re-declarations. + if (is_var) return isolate->heap()->undefined_value(); + + DCHECK(is_function); + if ((old_attributes & DONT_DELETE) != 0) { + // Only allow reconfiguring globals to functions in user code (no + // natives, which are marked as read-only). + DCHECK((attr & READ_ONLY) == 0); + + // Check whether we can reconfigure the existing property into a + // function. + PropertyDetails old_details = it.property_details(); + // TODO(verwaest): CALLBACKS invalidly includes ExecutableAccessInfo, + // which are actually data properties, not accessor properties. + if (old_details.IsReadOnly() || old_details.IsDontEnum() || + old_details.type() == CALLBACKS) { + return ThrowRedeclarationError(isolate, name); + } + // If the existing property is not configurable, keep its attributes. Do + attr = old_attributes; + } + } + + // Define or redefine own property. 
+ RETURN_FAILURE_ON_EXCEPTION(isolate, JSObject::SetOwnPropertyIgnoreAttributes( + global, name, value, attr)); + + return isolate->heap()->undefined_value(); +} + + +RUNTIME_FUNCTION(Runtime_DeclareGlobals) { HandleScope scope(isolate); - ASSERT(args.length() == 3); - Handle<GlobalObject> global = Handle<GlobalObject>( - isolate->context()->global_object()); + DCHECK(args.length() == 3); + Handle<GlobalObject> global(isolate->global_object()); CONVERT_ARG_HANDLE_CHECKED(Context, context, 0); CONVERT_ARG_HANDLE_CHECKED(FixedArray, pairs, 1); @@ -2251,181 +2213,42 @@ RUNTIME_FUNCTION(RuntimeHidden_DeclareGlobals) { for (int i = 0; i < length; i += 2) { HandleScope scope(isolate); Handle<String> name(String::cast(pairs->get(i))); - Handle<Object> value(pairs->get(i + 1), isolate); + Handle<Object> initial_value(pairs->get(i + 1), isolate); // We have to declare a global const property. To capture we only // assign to it when evaluating the assignment for "const x = // <expr>" the initial value is the hole. - bool is_var = value->IsUndefined(); - bool is_const = value->IsTheHole(); - bool is_function = value->IsSharedFunctionInfo(); - ASSERT(is_var + is_const + is_function == 1); - - if (is_var || is_const) { - // Lookup the property in the global object, and don't set the - // value of the variable if the property is already there. - // Do the lookup locally only, see ES5 erratum. - LookupResult lookup(isolate); - global->LocalLookup(name, &lookup, true); - if (lookup.IsFound()) { - // We found an existing property. Unless it was an interceptor - // that claims the property is absent, skip this declaration. - if (!lookup.IsInterceptor()) continue; - if (JSReceiver::GetPropertyAttribute(global, name) != ABSENT) continue; - // Fall-through and introduce the absent property by using - // SetProperty. - } - } else if (is_function) { + bool is_var = initial_value->IsUndefined(); + bool is_const = initial_value->IsTheHole(); + bool is_function = initial_value->IsSharedFunctionInfo(); + DCHECK(is_var + is_const + is_function == 1); + + Handle<Object> value; + if (is_function) { // Copy the function and update its context. Use it as value. Handle<SharedFunctionInfo> shared = - Handle<SharedFunctionInfo>::cast(value); + Handle<SharedFunctionInfo>::cast(initial_value); Handle<JSFunction> function = - isolate->factory()->NewFunctionFromSharedFunctionInfo( - shared, context, TENURED); + isolate->factory()->NewFunctionFromSharedFunctionInfo(shared, context, + TENURED); value = function; + } else { + value = isolate->factory()->undefined_value(); } - LookupResult lookup(isolate); - global->LocalLookup(name, &lookup, true); - // Compute the property attributes. According to ECMA-262, // the property must be non-configurable except in eval. - int attr = NONE; - bool is_eval = DeclareGlobalsEvalFlag::decode(flags); - if (!is_eval) { - attr |= DONT_DELETE; - } bool is_native = DeclareGlobalsNativeFlag::decode(flags); - if (is_const || (is_native && is_function)) { - attr |= READ_ONLY; - } - - StrictMode strict_mode = DeclareGlobalsStrictMode::decode(flags); - - if (!lookup.IsFound() || is_function) { - // If the local property exists, check that we can reconfigure it - // as required for function declarations. - if (lookup.IsFound() && lookup.IsDontDelete()) { - if (lookup.IsReadOnly() || lookup.IsDontEnum() || - lookup.IsPropertyCallbacks()) { - return ThrowRedeclarationError(isolate, name); - } - // If the existing property is not configurable, keep its attributes. 
- attr = lookup.GetAttributes(); - } - // Define or redefine own property. - RETURN_FAILURE_ON_EXCEPTION(isolate, - JSObject::SetLocalPropertyIgnoreAttributes( - global, name, value, static_cast<PropertyAttributes>(attr))); - } else { - // Do a [[Put]] on the existing (own) property. - RETURN_FAILURE_ON_EXCEPTION( - isolate, - JSObject::SetProperty( - global, name, value, static_cast<PropertyAttributes>(attr), - strict_mode)); - } - } - - ASSERT(!isolate->has_pending_exception()); - return isolate->heap()->undefined_value(); -} - - -RUNTIME_FUNCTION(RuntimeHidden_DeclareContextSlot) { - HandleScope scope(isolate); - ASSERT(args.length() == 4); - - // Declarations are always made in a function or native context. In the - // case of eval code, the context passed is the context of the caller, - // which may be some nested context and not the declaration context. - CONVERT_ARG_HANDLE_CHECKED(Context, context_arg, 0); - Handle<Context> context(context_arg->declaration_context()); - CONVERT_ARG_HANDLE_CHECKED(String, name, 1); - CONVERT_SMI_ARG_CHECKED(mode_arg, 2); - PropertyAttributes mode = static_cast<PropertyAttributes>(mode_arg); - RUNTIME_ASSERT(mode == READ_ONLY || mode == NONE); - CONVERT_ARG_HANDLE_CHECKED(Object, initial_value, 3); - - int index; - PropertyAttributes attributes; - ContextLookupFlags flags = DONT_FOLLOW_CHAINS; - BindingFlags binding_flags; - Handle<Object> holder = - context->Lookup(name, flags, &index, &attributes, &binding_flags); - - if (attributes != ABSENT) { - // The name was declared before; check for conflicting re-declarations. - // Note: this is actually inconsistent with what happens for globals (where - // we silently ignore such declarations). - if (((attributes & READ_ONLY) != 0) || (mode == READ_ONLY)) { - // Functions are not read-only. - ASSERT(mode != READ_ONLY || initial_value->IsTheHole()); - return ThrowRedeclarationError(isolate, name); - } - - // Initialize it if necessary. - if (*initial_value != NULL) { - if (index >= 0) { - ASSERT(holder.is_identical_to(context)); - if (((attributes & READ_ONLY) == 0) || - context->get(index)->IsTheHole()) { - context->set(index, *initial_value); - } - } else { - // Slow case: The property is in the context extension object of a - // function context or the global object of a native context. - Handle<JSObject> object = Handle<JSObject>::cast(holder); - RETURN_FAILURE_ON_EXCEPTION( - isolate, - JSReceiver::SetProperty(object, name, initial_value, mode, SLOPPY)); - } - } + bool is_eval = DeclareGlobalsEvalFlag::decode(flags); + int attr = NONE; + if (is_const) attr |= READ_ONLY; + if (is_function && is_native) attr |= READ_ONLY; + if (!is_const && !is_eval) attr |= DONT_DELETE; - } else { - // The property is not in the function context. It needs to be - // "declared" in the function context's extension context or as a - // property of the the global object. - Handle<JSObject> object; - if (context->has_extension()) { - object = Handle<JSObject>(JSObject::cast(context->extension())); - } else { - // Context extension objects are allocated lazily. - ASSERT(context->IsFunctionContext()); - object = isolate->factory()->NewJSObject( - isolate->context_extension_function()); - context->set_extension(*object); - } - ASSERT(*object != NULL); - - // Declare the property by setting it to the initial value if provided, - // or undefined, and use the correct mode (e.g. READ_ONLY attribute for - // constant declarations). 
- ASSERT(!JSReceiver::HasLocalProperty(object, name)); - Handle<Object> value(isolate->heap()->undefined_value(), isolate); - if (*initial_value != NULL) value = initial_value; - // Declaring a const context slot is a conflicting declaration if - // there is a callback with that name in a prototype. It is - // allowed to introduce const variables in - // JSContextExtensionObjects. They are treated specially in - // SetProperty and no setters are invoked for those since they are - // not real JSObjects. - if (initial_value->IsTheHole() && - !object->IsJSContextExtensionObject()) { - LookupResult lookup(isolate); - object->Lookup(name, &lookup); - if (lookup.IsPropertyCallbacks()) { - return ThrowRedeclarationError(isolate, name); - } - } - if (object->IsJSGlobalObject()) { - // Define own property on the global object. - RETURN_FAILURE_ON_EXCEPTION(isolate, - JSObject::SetLocalPropertyIgnoreAttributes(object, name, value, mode)); - } else { - RETURN_FAILURE_ON_EXCEPTION(isolate, - JSReceiver::SetProperty(object, name, value, mode, SLOPPY)); - } + Object* result = DeclareGlobals(isolate, global, name, value, + static_cast<PropertyAttributes>(attr), + is_var, is_const, is_function); + if (isolate->has_pending_exception()) return result; } return isolate->heap()->undefined_value(); @@ -2440,60 +2263,22 @@ RUNTIME_FUNCTION(Runtime_InitializeVarGlobal) { // Determine if we need to assign to the variable if it already // exists (based on the number of arguments). - RUNTIME_ASSERT(args.length() == 2 || args.length() == 3); - bool assign = args.length() == 3; + RUNTIME_ASSERT(args.length() == 3); CONVERT_ARG_HANDLE_CHECKED(String, name, 0); CONVERT_STRICT_MODE_ARG_CHECKED(strict_mode, 1); + CONVERT_ARG_HANDLE_CHECKED(Object, value, 2); - // According to ECMA-262, section 12.2, page 62, the property must - // not be deletable. - PropertyAttributes attributes = DONT_DELETE; - - // Lookup the property locally in the global object. If it isn't - // there, there is a property with this name in the prototype chain. - // We follow Safari and Firefox behavior and only set the property - // locally if there is an explicit initialization value that we have - // to assign to the property. - // Note that objects can have hidden prototypes, so we need to traverse - // the whole chain of hidden prototypes to do a 'local' lookup. - LookupResult lookup(isolate); - isolate->context()->global_object()->LocalLookup(name, &lookup, true); - if (lookup.IsInterceptor()) { - Handle<JSObject> holder(lookup.holder()); - PropertyAttributes intercepted = - JSReceiver::GetPropertyAttribute(holder, name); - if (intercepted != ABSENT && (intercepted & READ_ONLY) == 0) { - // Found an interceptor that's not read only. 
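
The rewritten DeclareGlobals computes the property attributes in one place: script-level declarations get DONT_DELETE unless they come from eval (DeclareGlobalsEvalFlag), legacy const and natives-declared functions get READ_ONLY, and everything is installed through SetOwnPropertyIgnoreAttributes. The sloppy-mode behaviour this preserves, sketched in JavaScript run at global scope:

    var a = 1;              // script declaration: the global property gets DONT_DELETE
    eval('var b = 2;');     // eval declaration: stays configurable

    console.log(delete a);  // false -- non-configurable
    console.log(delete b);  // true  -- no DONT_DELETE for eval-introduced vars

    var a;                  // var re-declarations are silently skipped
    console.log(a);         // still 1
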
- if (assign) { - CONVERT_ARG_HANDLE_CHECKED(Object, value, 2); - Handle<Object> result; - ASSIGN_RETURN_FAILURE_ON_EXCEPTION( - isolate, result, - JSObject::SetPropertyForResult( - holder, &lookup, name, value, attributes, strict_mode)); - return *result; - } else { - return isolate->heap()->undefined_value(); - } - } - } - - if (assign) { - CONVERT_ARG_HANDLE_CHECKED(Object, value, 2); - Handle<GlobalObject> global(isolate->context()->global_object()); - Handle<Object> result; - ASSIGN_RETURN_FAILURE_ON_EXCEPTION( - isolate, result, - JSReceiver::SetProperty(global, name, value, attributes, strict_mode)); - return *result; - } - return isolate->heap()->undefined_value(); + Handle<GlobalObject> global(isolate->context()->global_object()); + Handle<Object> result; + ASSIGN_RETURN_FAILURE_ON_EXCEPTION( + isolate, result, Object::SetProperty(global, name, value, strict_mode)); + return *result; } -RUNTIME_FUNCTION(RuntimeHidden_InitializeConstGlobal) { - SealHandleScope shs(isolate); +RUNTIME_FUNCTION(Runtime_InitializeConstGlobal) { + HandleScope handle_scope(isolate); // All constants are declared with an initial value. The name // of the constant is the first argument and the initial value // is the second. @@ -2501,80 +2286,119 @@ RUNTIME_FUNCTION(RuntimeHidden_InitializeConstGlobal) { CONVERT_ARG_HANDLE_CHECKED(String, name, 0); CONVERT_ARG_HANDLE_CHECKED(Object, value, 1); - // Get the current global object from top. - GlobalObject* global = isolate->context()->global_object(); + Handle<GlobalObject> global = isolate->global_object(); - // According to ECMA-262, section 12.2, page 62, the property must - // not be deletable. Since it's a const, it must be READ_ONLY too. - PropertyAttributes attributes = - static_cast<PropertyAttributes>(DONT_DELETE | READ_ONLY); + // Lookup the property as own on the global object. + LookupIterator it(global, name, LookupIterator::CHECK_HIDDEN); + Maybe<PropertyAttributes> maybe = JSReceiver::GetPropertyAttributes(&it); + DCHECK(maybe.has_value); + PropertyAttributes old_attributes = maybe.value; - // Lookup the property locally in the global object. If it isn't - // there, we add the property and take special precautions to always - // add it as a local property even in case of callbacks in the - // prototype chain (this rules out using SetProperty). - // We use SetLocalPropertyIgnoreAttributes instead - LookupResult lookup(isolate); - global->LocalLookup(name, &lookup); - if (!lookup.IsFound()) { - HandleScope handle_scope(isolate); - Handle<GlobalObject> global(isolate->context()->global_object()); - RETURN_FAILURE_ON_EXCEPTION( - isolate, - JSObject::SetLocalPropertyIgnoreAttributes(global, name, value, - attributes)); - return *value; + PropertyAttributes attr = + static_cast<PropertyAttributes>(DONT_DELETE | READ_ONLY); + // Set the value if the property is either missing, or the property attributes + // allow setting the value without invoking an accessor. + if (it.IsFound()) { + // Ignore if we can't reconfigure the value. + if ((old_attributes & DONT_DELETE) != 0) { + if ((old_attributes & READ_ONLY) != 0 || + it.property_kind() == LookupIterator::ACCESSOR) { + return *value; + } + attr = static_cast<PropertyAttributes>(old_attributes | READ_ONLY); + } } - if (!lookup.IsReadOnly()) { - // Restore global object from context (in case of GC) and continue - // with setting the value. 
- HandleScope handle_scope(isolate); - Handle<GlobalObject> global(isolate->context()->global_object()); + RETURN_FAILURE_ON_EXCEPTION(isolate, JSObject::SetOwnPropertyIgnoreAttributes( + global, name, value, attr)); - // BUG 1213575: Handle the case where we have to set a read-only - // property through an interceptor and only do it if it's - // uninitialized, e.g. the hole. Nirk... - // Passing sloppy mode because the property is writable. - RETURN_FAILURE_ON_EXCEPTION( - isolate, - JSReceiver::SetProperty(global, name, value, attributes, SLOPPY)); - return *value; + return *value; +} + + +RUNTIME_FUNCTION(Runtime_DeclareLookupSlot) { + HandleScope scope(isolate); + DCHECK(args.length() == 4); + + // Declarations are always made in a function, native, or global context. In + // the case of eval code, the context passed is the context of the caller, + // which may be some nested context and not the declaration context. + CONVERT_ARG_HANDLE_CHECKED(Context, context_arg, 0); + Handle<Context> context(context_arg->declaration_context()); + CONVERT_ARG_HANDLE_CHECKED(String, name, 1); + CONVERT_SMI_ARG_CHECKED(attr_arg, 2); + PropertyAttributes attr = static_cast<PropertyAttributes>(attr_arg); + RUNTIME_ASSERT(attr == READ_ONLY || attr == NONE); + CONVERT_ARG_HANDLE_CHECKED(Object, initial_value, 3); + + // TODO(verwaest): Unify the encoding indicating "var" with DeclareGlobals. + bool is_var = *initial_value == NULL; + bool is_const = initial_value->IsTheHole(); + bool is_function = initial_value->IsJSFunction(); + DCHECK(is_var + is_const + is_function == 1); + + int index; + PropertyAttributes attributes; + ContextLookupFlags flags = DONT_FOLLOW_CHAINS; + BindingFlags binding_flags; + Handle<Object> holder = + context->Lookup(name, flags, &index, &attributes, &binding_flags); + + Handle<JSObject> object; + Handle<Object> value = + is_function ? initial_value + : Handle<Object>::cast(isolate->factory()->undefined_value()); + + // TODO(verwaest): This case should probably not be covered by this function, + // but by DeclareGlobals instead. + if ((attributes != ABSENT && holder->IsJSGlobalObject()) || + (context_arg->has_extension() && + context_arg->extension()->IsJSGlobalObject())) { + return DeclareGlobals(isolate, Handle<JSGlobalObject>::cast(holder), name, + value, attr, is_var, is_const, is_function); } - // Set the value, but only if we're assigning the initial value to a - // constant. For now, we determine this by checking if the - // current value is the hole. - // Strict mode handling not needed (const is disallowed in strict mode). - if (lookup.IsField()) { - FixedArray* properties = global->properties(); - int index = lookup.GetFieldIndex().field_index(); - if (properties->get(index)->IsTheHole() || !lookup.IsReadOnly()) { - properties->set(index, *value); + if (attributes != ABSENT) { + // The name was declared before; check for conflicting re-declarations. + if (is_const || (attributes & READ_ONLY) != 0) { + return ThrowRedeclarationError(isolate, name); } - } else if (lookup.IsNormal()) { - if (global->GetNormalizedProperty(&lookup)->IsTheHole() || - !lookup.IsReadOnly()) { - HandleScope scope(isolate); - JSObject::SetNormalizedProperty(Handle<JSObject>(global), &lookup, value); + + // Skip var re-declarations. 
+ if (is_var) return isolate->heap()->undefined_value(); + + DCHECK(is_function); + if (index >= 0) { + DCHECK(holder.is_identical_to(context)); + context->set(index, *initial_value); + return isolate->heap()->undefined_value(); } + + object = Handle<JSObject>::cast(holder); + + } else if (context->has_extension()) { + object = handle(JSObject::cast(context->extension())); + DCHECK(object->IsJSContextExtensionObject() || object->IsJSGlobalObject()); } else { - // Ignore re-initialization of constants that have already been - // assigned a constant value. - ASSERT(lookup.IsReadOnly() && lookup.IsConstant()); + DCHECK(context->IsFunctionContext()); + object = + isolate->factory()->NewJSObject(isolate->context_extension_function()); + context->set_extension(*object); } - // Use the set value as the result of the operation. - return *value; + RETURN_FAILURE_ON_EXCEPTION(isolate, JSObject::SetOwnPropertyIgnoreAttributes( + object, name, value, attr)); + + return isolate->heap()->undefined_value(); } -RUNTIME_FUNCTION(RuntimeHidden_InitializeConstContextSlot) { +RUNTIME_FUNCTION(Runtime_InitializeLegacyConstLookupSlot) { HandleScope scope(isolate); - ASSERT(args.length() == 3); + DCHECK(args.length() == 3); CONVERT_ARG_HANDLE_CHECKED(Object, value, 0); - ASSERT(!value->IsTheHole()); + DCHECK(!value->IsTheHole()); // Initializations are always done in a function or native context. CONVERT_ARG_HANDLE_CHECKED(Context, context_arg, 1); Handle<Context> context(context_arg->declaration_context()); @@ -2582,92 +2406,65 @@ RUNTIME_FUNCTION(RuntimeHidden_InitializeConstContextSlot) { int index; PropertyAttributes attributes; - ContextLookupFlags flags = FOLLOW_CHAINS; + ContextLookupFlags flags = DONT_FOLLOW_CHAINS; BindingFlags binding_flags; Handle<Object> holder = context->Lookup(name, flags, &index, &attributes, &binding_flags); if (index >= 0) { - ASSERT(holder->IsContext()); - // Property was found in a context. Perform the assignment if we - // found some non-constant or an uninitialized constant. + DCHECK(holder->IsContext()); + // Property was found in a context. Perform the assignment if the constant + // was uninitialized. Handle<Context> context = Handle<Context>::cast(holder); - if ((attributes & READ_ONLY) == 0 || context->get(index)->IsTheHole()) { - context->set(index, *value); - } + DCHECK((attributes & READ_ONLY) != 0); + if (context->get(index)->IsTheHole()) context->set(index, *value); return *value; } - // The property could not be found, we introduce it as a property of the - // global object. - if (attributes == ABSENT) { - Handle<JSObject> global = Handle<JSObject>( - isolate->context()->global_object()); - // Strict mode not needed (const disallowed in strict mode). - RETURN_FAILURE_ON_EXCEPTION( - isolate, - JSReceiver::SetProperty(global, name, value, NONE, SLOPPY)); - return *value; - } + PropertyAttributes attr = + static_cast<PropertyAttributes>(DONT_DELETE | READ_ONLY); - // The property was present in some function's context extension object, - // as a property on the subject of a with, or as a property of the global - // object. - // - // In most situations, eval-introduced consts should still be present in - // the context extension object. However, because declaration and - // initialization are separate, the property might have been deleted - // before we reach the initialization point. - // - // Example: - // - // function f() { eval("delete x; const x;"); } - // - // In that case, the initialization behaves like a normal assignment. 
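
DeclareLookupSlot, which replaces the old DeclareContextSlot, covers declarations aimed at the caller's declaration context, chiefly sloppy-mode direct eval, and allocates the context extension object lazily when the first such declaration arrives. A minimal example of what gets routed through it (assuming sloppy mode):

    function f() {
      eval('var fromEval = 10;');  // declared into f's context extension
      return fromEval;             // resolvable by code in the same scope
    }
    console.log(f());              // 10
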
- Handle<JSObject> object = Handle<JSObject>::cast(holder); + // Strict mode handling not needed (legacy const is disallowed in strict + // mode). - if (*object == context->extension()) { - // This is the property that was introduced by the const declaration. - // Set it if it hasn't been set before. NOTE: We cannot use - // GetProperty() to get the current value as it 'unholes' the value. - LookupResult lookup(isolate); - object->LocalLookupRealNamedProperty(name, &lookup); - ASSERT(lookup.IsFound()); // the property was declared - ASSERT(lookup.IsReadOnly()); // and it was declared as read-only - - if (lookup.IsField()) { - FixedArray* properties = object->properties(); - int index = lookup.GetFieldIndex().field_index(); - if (properties->get(index)->IsTheHole()) { - properties->set(index, *value); - } - } else if (lookup.IsNormal()) { - if (object->GetNormalizedProperty(&lookup)->IsTheHole()) { - JSObject::SetNormalizedProperty(object, &lookup, value); - } - } else { - // We should not reach here. Any real, named property should be - // either a field or a dictionary slot. - UNREACHABLE(); - } + // The declared const was configurable, and may have been deleted in the + // meanwhile. If so, re-introduce the variable in the context extension. + DCHECK(context_arg->has_extension()); + if (attributes == ABSENT) { + holder = handle(context_arg->extension(), isolate); } else { - // The property was found on some other object. Set it if it is not a - // read-only property. - if ((attributes & READ_ONLY) == 0) { - // Strict mode not needed (const disallowed in strict mode). - RETURN_FAILURE_ON_EXCEPTION( - isolate, - JSReceiver::SetProperty(object, name, value, attributes, SLOPPY)); + // For JSContextExtensionObjects, the initializer can be run multiple times + // if in a for loop: for (var i = 0; i < 2; i++) { const x = i; }. Only the + // first assignment should go through. For JSGlobalObjects, additionally any + // code can run in between that modifies the declared property. + DCHECK(holder->IsJSGlobalObject() || holder->IsJSContextExtensionObject()); + + LookupIterator it(holder, name, LookupIterator::CHECK_HIDDEN); + Maybe<PropertyAttributes> maybe = JSReceiver::GetPropertyAttributes(&it); + if (!maybe.has_value) return isolate->heap()->exception(); + PropertyAttributes old_attributes = maybe.value; + + // Ignore if we can't reconfigure the value. + if ((old_attributes & DONT_DELETE) != 0) { + if ((old_attributes & READ_ONLY) != 0 || + it.property_kind() == LookupIterator::ACCESSOR) { + return *value; + } + attr = static_cast<PropertyAttributes>(old_attributes | READ_ONLY); } } + RETURN_FAILURE_ON_EXCEPTION( + isolate, JSObject::SetOwnPropertyIgnoreAttributes( + Handle<JSObject>::cast(holder), name, value, attr)); + return *value; } RUNTIME_FUNCTION(Runtime_OptimizeObjectForAddingMultipleProperties) { HandleScope scope(isolate); - ASSERT(args.length() == 2); + DCHECK(args.length() == 2); CONVERT_ARG_HANDLE_CHECKED(JSObject, object, 0); CONVERT_SMI_ARG_CHECKED(properties, 1); // Conservative upper limit to prevent fuzz tests from going OOM. 
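
InitializeLegacyConstLookupSlot keeps the pre-ES6 sloppy-mode const semantics the surrounding comments describe: only the first initialization sticks, and a const deleted via eval is quietly re-introduced. The for-loop case called out in the new comment, annotated with the legacy output (ES6 const, which later replaced this, block-scopes x and would print 0 then 1):

    // Sloppy-mode "legacy const" as implemented by this runtime entry:
    for (var i = 0; i < 2; i++) {
      const x = i;
      console.log(x);  // 0, then 0 -- only the first assignment goes through
    }
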
@@ -2679,9 +2476,9 @@ RUNTIME_FUNCTION(Runtime_OptimizeObjectForAddingMultipleProperties) { } -RUNTIME_FUNCTION(RuntimeHidden_RegExpExec) { +RUNTIME_FUNCTION(Runtime_RegExpExecRT) { HandleScope scope(isolate); - ASSERT(args.length() == 4); + DCHECK(args.length() == 4); CONVERT_ARG_HANDLE_CHECKED(JSRegExp, regexp, 0); CONVERT_ARG_HANDLE_CHECKED(String, subject, 1); // Due to the way the JS calls are constructed this must be less than the @@ -2699,9 +2496,9 @@ RUNTIME_FUNCTION(RuntimeHidden_RegExpExec) { } -RUNTIME_FUNCTION(RuntimeHidden_RegExpConstructResult) { +RUNTIME_FUNCTION(Runtime_RegExpConstructResult) { HandleScope handle_scope(isolate); - ASSERT(args.length() == 3); + DCHECK(args.length() == 3); CONVERT_SMI_ARG_CHECKED(size, 0); RUNTIME_ASSERT(size >= 0 && size <= FixedArray::kMaxLength); CONVERT_ARG_HANDLE_CHECKED(Object, index, 1); @@ -2722,7 +2519,7 @@ RUNTIME_FUNCTION(RuntimeHidden_RegExpConstructResult) { RUNTIME_FUNCTION(Runtime_RegExpInitializeObject) { HandleScope scope(isolate); - ASSERT(args.length() == 5); + DCHECK(args.length() == 5); CONVERT_ARG_HANDLE_CHECKED(JSRegExp, regexp, 0); CONVERT_ARG_HANDLE_CHECKED(String, source, 1); // If source is the empty string we set it to "(?:)" instead as @@ -2764,15 +2561,15 @@ RUNTIME_FUNCTION(Runtime_RegExpInitializeObject) { static_cast<PropertyAttributes>(DONT_ENUM | DONT_DELETE); Handle<Object> zero(Smi::FromInt(0), isolate); Factory* factory = isolate->factory(); - JSObject::SetLocalPropertyIgnoreAttributes( + JSObject::SetOwnPropertyIgnoreAttributes( regexp, factory->source_string(), source, final).Check(); - JSObject::SetLocalPropertyIgnoreAttributes( + JSObject::SetOwnPropertyIgnoreAttributes( regexp, factory->global_string(), global, final).Check(); - JSObject::SetLocalPropertyIgnoreAttributes( + JSObject::SetOwnPropertyIgnoreAttributes( regexp, factory->ignore_case_string(), ignoreCase, final).Check(); - JSObject::SetLocalPropertyIgnoreAttributes( + JSObject::SetOwnPropertyIgnoreAttributes( regexp, factory->multiline_string(), multiline, final).Check(); - JSObject::SetLocalPropertyIgnoreAttributes( + JSObject::SetOwnPropertyIgnoreAttributes( regexp, factory->last_index_string(), zero, writable).Check(); return *regexp; } @@ -2780,7 +2577,7 @@ RUNTIME_FUNCTION(Runtime_RegExpInitializeObject) { RUNTIME_FUNCTION(Runtime_FinishArrayPrototypeSetup) { HandleScope scope(isolate); - ASSERT(args.length() == 1); + DCHECK(args.length() == 1); CONVERT_ARG_HANDLE_CHECKED(JSArray, prototype, 0); Object* length = prototype->length(); RUNTIME_ASSERT(length->IsSmi() && Smi::cast(length)->value() == 0); @@ -2792,29 +2589,24 @@ RUNTIME_FUNCTION(Runtime_FinishArrayPrototypeSetup) { } -static Handle<JSFunction> InstallBuiltin(Isolate* isolate, - Handle<JSObject> holder, - const char* name, - Builtins::Name builtin_name) { +static void InstallBuiltin(Isolate* isolate, + Handle<JSObject> holder, + const char* name, + Builtins::Name builtin_name) { Handle<String> key = isolate->factory()->InternalizeUtf8String(name); Handle<Code> code(isolate->builtins()->builtin(builtin_name)); Handle<JSFunction> optimized = - isolate->factory()->NewFunction(MaybeHandle<Object>(), - key, - JS_OBJECT_TYPE, - JSObject::kHeaderSize, - code, - false); + isolate->factory()->NewFunctionWithoutPrototype(key, code); optimized->shared()->DontAdaptArguments(); - JSReceiver::SetProperty(holder, key, optimized, NONE, STRICT).Assert(); - return optimized; + JSObject::AddProperty(holder, key, optimized, NONE); } RUNTIME_FUNCTION(Runtime_SpecialArrayFunctions) { HandleScope 
scope(isolate); - ASSERT(args.length() == 1); - CONVERT_ARG_HANDLE_CHECKED(JSObject, holder, 0); + DCHECK(args.length() == 0); + Handle<JSObject> holder = + isolate->factory()->NewJSObject(isolate->object_function()); InstallBuiltin(isolate, holder, "pop", Builtins::kArrayPop); InstallBuiltin(isolate, holder, "push", Builtins::kArrayPush); @@ -2830,7 +2622,7 @@ RUNTIME_FUNCTION(Runtime_SpecialArrayFunctions) { RUNTIME_FUNCTION(Runtime_IsSloppyModeFunction) { SealHandleScope shs(isolate); - ASSERT(args.length() == 1); + DCHECK(args.length() == 1); CONVERT_ARG_CHECKED(JSReceiver, callable, 0); if (!callable->IsJSFunction()) { HandleScope scope(isolate); @@ -2849,7 +2641,7 @@ RUNTIME_FUNCTION(Runtime_IsSloppyModeFunction) { RUNTIME_FUNCTION(Runtime_GetDefaultReceiver) { SealHandleScope shs(isolate); - ASSERT(args.length() == 1); + DCHECK(args.length() == 1); CONVERT_ARG_CHECKED(JSReceiver, callable, 0); if (!callable->IsJSFunction()) { @@ -2870,15 +2662,13 @@ RUNTIME_FUNCTION(Runtime_GetDefaultReceiver) { // Returns undefined for strict or native functions, or // the associated global receiver for "normal" functions. - Context* native_context = - function->context()->global_object()->native_context(); - return native_context->global_object()->global_receiver(); + return function->global_proxy(); } -RUNTIME_FUNCTION(RuntimeHidden_MaterializeRegExpLiteral) { +RUNTIME_FUNCTION(Runtime_MaterializeRegExpLiteral) { HandleScope scope(isolate); - ASSERT(args.length() == 4); + DCHECK(args.length() == 4); CONVERT_ARG_HANDLE_CHECKED(FixedArray, literals, 0); CONVERT_SMI_ARG_CHECKED(index, 1); CONVERT_ARG_HANDLE_CHECKED(String, pattern, 2); @@ -2904,7 +2694,7 @@ RUNTIME_FUNCTION(RuntimeHidden_MaterializeRegExpLiteral) { RUNTIME_FUNCTION(Runtime_FunctionGetName) { SealHandleScope shs(isolate); - ASSERT(args.length() == 1); + DCHECK(args.length() == 1); CONVERT_ARG_CHECKED(JSFunction, f, 0); return f->shared()->name(); @@ -2913,7 +2703,7 @@ RUNTIME_FUNCTION(Runtime_FunctionGetName) { RUNTIME_FUNCTION(Runtime_FunctionSetName) { SealHandleScope shs(isolate); - ASSERT(args.length() == 2); + DCHECK(args.length() == 2); CONVERT_ARG_CHECKED(JSFunction, f, 0); CONVERT_ARG_CHECKED(String, name, 1); @@ -2924,7 +2714,7 @@ RUNTIME_FUNCTION(Runtime_FunctionSetName) { RUNTIME_FUNCTION(Runtime_FunctionNameShouldPrintAsAnonymous) { SealHandleScope shs(isolate); - ASSERT(args.length() == 1); + DCHECK(args.length() == 1); CONVERT_ARG_CHECKED(JSFunction, f, 0); return isolate->heap()->ToBoolean( f->shared()->name_should_print_as_anonymous()); @@ -2933,7 +2723,7 @@ RUNTIME_FUNCTION(Runtime_FunctionNameShouldPrintAsAnonymous) { RUNTIME_FUNCTION(Runtime_FunctionMarkNameShouldPrintAsAnonymous) { SealHandleScope shs(isolate); - ASSERT(args.length() == 1); + DCHECK(args.length() == 1); CONVERT_ARG_CHECKED(JSFunction, f, 0); f->shared()->set_name_should_print_as_anonymous(true); return isolate->heap()->undefined_value(); @@ -2942,15 +2732,23 @@ RUNTIME_FUNCTION(Runtime_FunctionMarkNameShouldPrintAsAnonymous) { RUNTIME_FUNCTION(Runtime_FunctionIsGenerator) { SealHandleScope shs(isolate); - ASSERT(args.length() == 1); + DCHECK(args.length() == 1); CONVERT_ARG_CHECKED(JSFunction, f, 0); return isolate->heap()->ToBoolean(f->shared()->is_generator()); } +RUNTIME_FUNCTION(Runtime_FunctionIsArrow) { + SealHandleScope shs(isolate); + DCHECK(args.length() == 1); + CONVERT_ARG_CHECKED(JSFunction, f, 0); + return isolate->heap()->ToBoolean(f->shared()->is_arrow()); +} + + RUNTIME_FUNCTION(Runtime_FunctionRemovePrototype) { SealHandleScope 
shs(isolate); - ASSERT(args.length() == 1); + DCHECK(args.length() == 1); CONVERT_ARG_CHECKED(JSFunction, f, 0); RUNTIME_ASSERT(f->RemovePrototype()); @@ -2961,7 +2759,7 @@ RUNTIME_FUNCTION(Runtime_FunctionRemovePrototype) { RUNTIME_FUNCTION(Runtime_FunctionGetScript) { HandleScope scope(isolate); - ASSERT(args.length() == 1); + DCHECK(args.length() == 1); CONVERT_ARG_CHECKED(JSFunction, fun, 0); Handle<Object> script = Handle<Object>(fun->shared()->script(), isolate); @@ -2973,7 +2771,7 @@ RUNTIME_FUNCTION(Runtime_FunctionGetScript) { RUNTIME_FUNCTION(Runtime_FunctionGetSourceCode) { HandleScope scope(isolate); - ASSERT(args.length() == 1); + DCHECK(args.length() == 1); CONVERT_ARG_HANDLE_CHECKED(JSFunction, f, 0); Handle<SharedFunctionInfo> shared(f->shared()); @@ -2983,7 +2781,7 @@ RUNTIME_FUNCTION(Runtime_FunctionGetSourceCode) { RUNTIME_FUNCTION(Runtime_FunctionGetScriptSourcePosition) { SealHandleScope shs(isolate); - ASSERT(args.length() == 1); + DCHECK(args.length() == 1); CONVERT_ARG_CHECKED(JSFunction, fun, 0); int pos = fun->shared()->start_position(); @@ -2993,7 +2791,7 @@ RUNTIME_FUNCTION(Runtime_FunctionGetScriptSourcePosition) { RUNTIME_FUNCTION(Runtime_FunctionGetPositionForOffset) { SealHandleScope shs(isolate); - ASSERT(args.length() == 2); + DCHECK(args.length() == 2); CONVERT_ARG_CHECKED(Code, code, 0); CONVERT_NUMBER_CHECKED(int, offset, Int32, args[1]); @@ -3007,7 +2805,7 @@ RUNTIME_FUNCTION(Runtime_FunctionGetPositionForOffset) { RUNTIME_FUNCTION(Runtime_FunctionSetInstanceClassName) { SealHandleScope shs(isolate); - ASSERT(args.length() == 2); + DCHECK(args.length() == 2); CONVERT_ARG_CHECKED(JSFunction, fun, 0); CONVERT_ARG_CHECKED(String, name, 1); @@ -3018,10 +2816,12 @@ RUNTIME_FUNCTION(Runtime_FunctionSetInstanceClassName) { RUNTIME_FUNCTION(Runtime_FunctionSetLength) { SealHandleScope shs(isolate); - ASSERT(args.length() == 2); + DCHECK(args.length() == 2); CONVERT_ARG_CHECKED(JSFunction, fun, 0); CONVERT_SMI_ARG_CHECKED(length, 1); + RUNTIME_ASSERT((length & 0xC0000000) == 0xC0000000 || + (length & 0xC0000000) == 0x0); fun->shared()->set_length(length); return isolate->heap()->undefined_value(); } @@ -3029,62 +2829,19 @@ RUNTIME_FUNCTION(Runtime_FunctionSetLength) { RUNTIME_FUNCTION(Runtime_FunctionSetPrototype) { HandleScope scope(isolate); - ASSERT(args.length() == 2); + DCHECK(args.length() == 2); CONVERT_ARG_HANDLE_CHECKED(JSFunction, fun, 0); CONVERT_ARG_HANDLE_CHECKED(Object, value, 1); - ASSERT(fun->should_have_prototype()); + RUNTIME_ASSERT(fun->should_have_prototype()); Accessors::FunctionSetPrototype(fun, value); return args[0]; // return TOS } -RUNTIME_FUNCTION(Runtime_FunctionSetReadOnlyPrototype) { - HandleScope shs(isolate); - RUNTIME_ASSERT(args.length() == 1); - CONVERT_ARG_HANDLE_CHECKED(JSFunction, function, 0); - - Handle<String> name = isolate->factory()->prototype_string(); - - if (function->HasFastProperties()) { - // Construct a new field descriptor with updated attributes. - Handle<DescriptorArray> instance_desc = - handle(function->map()->instance_descriptors()); - - int index = instance_desc->SearchWithCache(*name, function->map()); - ASSERT(index != DescriptorArray::kNotFound); - PropertyDetails details = instance_desc->GetDetails(index); - - CallbacksDescriptor new_desc( - name, - handle(instance_desc->GetValue(index), isolate), - static_cast<PropertyAttributes>(details.attributes() | READ_ONLY)); - - // Create a new map featuring the new field descriptors array. 
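
FunctionSetPrototype now guards with RUNTIME_ASSERT(fun->should_have_prototype()) instead of a debug-only ASSERT, and the new Runtime_FunctionIsArrow predicate reflects that arrow functions (behind --harmony at this time) are among the functions that have no prototype at all. The JS-level contract:

    function ordinary() {}
    console.log('prototype' in ordinary);   // true, and writable:
    ordinary.prototype = { tag: 'custom' };
    console.log(new ordinary().tag);        // 'custom'

    var arrow = () => {};
    console.log('prototype' in arrow);      // false -- arrows have none
    try { new arrow(); } catch (e) {
      console.log(e instanceof TypeError);  // true -- arrows are not constructors
    }
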
-    Handle<Map> map = handle(function->map());
-    Handle<Map> new_map = Map::CopyReplaceDescriptor(
-        map, instance_desc, &new_desc, index, OMIT_TRANSITION);
-
-    JSObject::MigrateToMap(function, new_map);
-  } else {  // Dictionary properties.
-    // Directly manipulate the property details.
-    DisallowHeapAllocation no_gc;
-    int entry = function->property_dictionary()->FindEntry(name);
-    ASSERT(entry != NameDictionary::kNotFound);
-    PropertyDetails details = function->property_dictionary()->DetailsAt(entry);
-    PropertyDetails new_details(
-        static_cast<PropertyAttributes>(details.attributes() | READ_ONLY),
-        details.type(),
-        details.dictionary_index());
-    function->property_dictionary()->DetailsAtPut(entry, new_details);
-  }
-  return *function;
-}
-
-
 RUNTIME_FUNCTION(Runtime_FunctionIsAPIFunction) {
   SealHandleScope shs(isolate);
-  ASSERT(args.length() == 1);
+  DCHECK(args.length() == 1);
   CONVERT_ARG_CHECKED(JSFunction, f, 0);
 
   return isolate->heap()->ToBoolean(f->shared()->IsApiFunction());
@@ -3093,7 +2850,7 @@ RUNTIME_FUNCTION(Runtime_FunctionIsAPIFunction) {
 
 RUNTIME_FUNCTION(Runtime_FunctionIsBuiltin) {
   SealHandleScope shs(isolate);
-  ASSERT(args.length() == 1);
+  DCHECK(args.length() == 1);
   CONVERT_ARG_CHECKED(JSFunction, f, 0);
 
   return isolate->heap()->ToBoolean(f->IsBuiltin());
@@ -3102,7 +2859,7 @@ RUNTIME_FUNCTION(Runtime_FunctionIsBuiltin) {
 
 RUNTIME_FUNCTION(Runtime_SetCode) {
   HandleScope scope(isolate);
-  ASSERT(args.length() == 2);
+  DCHECK(args.length() == 2);
   CONVERT_ARG_HANDLE_CHECKED(JSFunction, target, 0);
   CONVERT_ARG_HANDLE_CHECKED(JSFunction, source, 1);
@@ -3117,8 +2874,8 @@ RUNTIME_FUNCTION(Runtime_SetCode) {
 
   // Mark both, the source and the target, as un-flushable because the
   // shared unoptimized code makes them impossible to enqueue in a list.
-  ASSERT(target_shared->code()->gc_metadata() == NULL);
-  ASSERT(source_shared->code()->gc_metadata() == NULL);
+  DCHECK(target_shared->code()->gc_metadata() == NULL);
+  DCHECK(source_shared->code()->gc_metadata() == NULL);
   target_shared->set_dont_flush(true);
   source_shared->set_dont_flush(true);
 
@@ -3141,7 +2898,7 @@ RUNTIME_FUNCTION(Runtime_SetCode) {
 
   // Set the code of the target function.
   target->ReplaceCode(source_shared->code());
-  ASSERT(target->next_function_link()->IsUndefined());
+  DCHECK(target->next_function_link()->IsUndefined());
 
   // Make sure we get a fresh copy of the literal vector to avoid cross
   // context contamination.
@@ -3166,32 +2923,9 @@ RUNTIME_FUNCTION(Runtime_SetCode) {
 }
 
 
-RUNTIME_FUNCTION(Runtime_SetExpectedNumberOfProperties) {
+RUNTIME_FUNCTION(Runtime_CreateJSGeneratorObject) {
   HandleScope scope(isolate);
-  ASSERT(args.length() == 2);
-  CONVERT_ARG_HANDLE_CHECKED(JSFunction, func, 0);
-  CONVERT_SMI_ARG_CHECKED(num, 1);
-  RUNTIME_ASSERT(num >= 0);
-  // If objects constructed from this function exist then changing
-  // 'estimated_nof_properties' is dangerous since the previous value might
-  // have been compiled into the fast construct stub. Moreover, the inobject
-  // slack tracking logic might have adjusted the previous value, so even
-  // passing the same value is risky.
-  if (!func->shared()->live_objects_may_exist()) {
-    func->shared()->set_expected_nof_properties(num);
-    if (func->has_initial_map()) {
-      Handle<Map> new_initial_map = Map::Copy(handle(func->initial_map()));
-      new_initial_map->set_unused_property_fields(num);
-      func->set_initial_map(*new_initial_map);
-    }
-  }
-  return isolate->heap()->undefined_value();
-}
-
-
-RUNTIME_FUNCTION(RuntimeHidden_CreateJSGeneratorObject) {
-  HandleScope scope(isolate);
-  ASSERT(args.length() == 0);
+  DCHECK(args.length() == 0);
 
   JavaScriptFrameIterator it(isolate);
   JavaScriptFrame* frame = it.frame();
@@ -3215,36 +2949,36 @@ RUNTIME_FUNCTION(RuntimeHidden_CreateJSGeneratorObject) {
 }
 
 
-RUNTIME_FUNCTION(RuntimeHidden_SuspendJSGeneratorObject) {
+RUNTIME_FUNCTION(Runtime_SuspendJSGeneratorObject) {
   HandleScope handle_scope(isolate);
-  ASSERT(args.length() == 1);
+  DCHECK(args.length() == 1);
   CONVERT_ARG_HANDLE_CHECKED(JSGeneratorObject, generator_object, 0);
 
   JavaScriptFrameIterator stack_iterator(isolate);
   JavaScriptFrame* frame = stack_iterator.frame();
   RUNTIME_ASSERT(frame->function()->shared()->is_generator());
-  ASSERT_EQ(frame->function(), generator_object->function());
+  DCHECK_EQ(frame->function(), generator_object->function());
 
   // The caller should have saved the context and continuation already.
-  ASSERT_EQ(generator_object->context(), Context::cast(frame->context()));
-  ASSERT_LT(0, generator_object->continuation());
+  DCHECK_EQ(generator_object->context(), Context::cast(frame->context()));
+  DCHECK_LT(0, generator_object->continuation());
 
   // We expect there to be at least two values on the operand stack: the return
   // value of the yield expression, and the argument to this runtime call.
   // Neither of those should be saved.
   int operands_count = frame->ComputeOperandsCount();
-  ASSERT_GE(operands_count, 2);
+  DCHECK_GE(operands_count, 2);
   operands_count -= 2;
 
   if (operands_count == 0) {
     // Although it's semantically harmless to call this function with an
     // operands_count of zero, it is also unnecessary.
-    ASSERT_EQ(generator_object->operand_stack(),
+    DCHECK_EQ(generator_object->operand_stack(),
              isolate->heap()->empty_fixed_array());
-    ASSERT_EQ(generator_object->stack_handler_index(), -1);
+    DCHECK_EQ(generator_object->stack_handler_index(), -1);
    // If there are no operands on the stack, there shouldn't be a handler
    // active either.
-    ASSERT(!frame->HasHandler());
+    DCHECK(!frame->HasHandler());
  } else {
    int stack_handler_index = -1;
    Handle<FixedArray> operand_stack =
@@ -3265,24 +2999,24 @@ RUNTIME_FUNCTION(RuntimeHidden_SuspendJSGeneratorObject) {
 // inlined into GeneratorNext and GeneratorThrow.  EmitGeneratorResumeResume is
 // called in any case, as it needs to reconstruct the stack frame and make space
 // for arguments and operands.
-RUNTIME_FUNCTION(RuntimeHidden_ResumeJSGeneratorObject) {
+RUNTIME_FUNCTION(Runtime_ResumeJSGeneratorObject) {
   SealHandleScope shs(isolate);
-  ASSERT(args.length() == 3);
+  DCHECK(args.length() == 3);
   CONVERT_ARG_CHECKED(JSGeneratorObject, generator_object, 0);
   CONVERT_ARG_CHECKED(Object, value, 1);
   CONVERT_SMI_ARG_CHECKED(resume_mode_int, 2);
   JavaScriptFrameIterator stack_iterator(isolate);
   JavaScriptFrame* frame = stack_iterator.frame();
 
-  ASSERT_EQ(frame->function(), generator_object->function());
-  ASSERT(frame->function()->is_compiled());
+  DCHECK_EQ(frame->function(), generator_object->function());
+  DCHECK(frame->function()->is_compiled());
 
   STATIC_ASSERT(JSGeneratorObject::kGeneratorExecuting < 0);
   STATIC_ASSERT(JSGeneratorObject::kGeneratorClosed == 0);
 
   Address pc = generator_object->function()->code()->instruction_start();
   int offset = generator_object->continuation();
-  ASSERT(offset > 0);
+  DCHECK(offset > 0);
   frame->set_pc(pc + offset);
   if (FLAG_enable_ool_constant_pool) {
     frame->set_constant_pool(
@@ -3313,9 +3047,9 @@ RUNTIME_FUNCTION(RuntimeHidden_ResumeJSGeneratorObject) {
 }
 
 
-RUNTIME_FUNCTION(RuntimeHidden_ThrowGeneratorStateError) {
+RUNTIME_FUNCTION(Runtime_ThrowGeneratorStateError) {
   HandleScope scope(isolate);
-  ASSERT(args.length() == 1);
+  DCHECK(args.length() == 1);
   CONVERT_ARG_HANDLE_CHECKED(JSGeneratorObject, generator, 0);
   int continuation = generator->continuation();
   const char* message = continuation == JSGeneratorObject::kGeneratorClosed ?
@@ -3328,17 +3062,23 @@ RUNTIME_FUNCTION(RuntimeHidden_ThrowGeneratorStateError) {
 
 RUNTIME_FUNCTION(Runtime_ObjectFreeze) {
   HandleScope scope(isolate);
-  ASSERT(args.length() == 1);
+  DCHECK(args.length() == 1);
   CONVERT_ARG_HANDLE_CHECKED(JSObject, object, 0);
+
+  // %ObjectFreeze is a fast path and these cases are handled elsewhere.
+  RUNTIME_ASSERT(!object->HasSloppyArgumentsElements() &&
+                 !object->map()->is_observed() &&
+                 !object->IsJSProxy());
+
   Handle<Object> result;
   ASSIGN_RETURN_FAILURE_ON_EXCEPTION(isolate, result, JSObject::Freeze(object));
   return *result;
 }
 
 
-RUNTIME_FUNCTION(RuntimeHidden_StringCharCodeAt) {
+RUNTIME_FUNCTION(Runtime_StringCharCodeAtRT) {
   HandleScope handle_scope(isolate);
-  ASSERT(args.length() == 2);
+  DCHECK(args.length() == 2);
   CONVERT_ARG_HANDLE_CHECKED(String, subject, 0);
   CONVERT_NUMBER_CHECKED(uint32_t, i, Uint32, args[1]);
@@ -3358,7 +3098,7 @@ RUNTIME_FUNCTION(Runtime_CharFromCode) {
   HandleScope handlescope(isolate);
-  ASSERT(args.length() == 1);
+  DCHECK(args.length() == 1);
   if (args[0]->IsNumber()) {
     CONVERT_NUMBER_CHECKED(uint32_t, code, Uint32, args[0]);
     code &= 0xffff;
@@ -3376,7 +3116,7 @@ class FixedArrayBuilder {
         has_non_smi_elements_(false) {
     // Require a non-zero initial size. Ensures that doubling the size to
     // extend the array will work.
-    ASSERT(initial_capacity > 0);
+    DCHECK(initial_capacity > 0);
   }
 
   explicit FixedArrayBuilder(Handle<FixedArray> backing_store)
@@ -3385,7 +3125,7 @@ class FixedArrayBuilder {
         has_non_smi_elements_(false) {
     // Require a non-zero initial size. Ensures that doubling the size to
     // extend the array will work.
-    ASSERT(backing_store->length() > 0);
+    DCHECK(backing_store->length() > 0);
   }
 
   bool HasCapacity(int elements) {
@@ -3410,16 +3150,16 @@ class FixedArrayBuilder {
   }
 
   void Add(Object* value) {
-    ASSERT(!value->IsSmi());
-    ASSERT(length_ < capacity());
+    DCHECK(!value->IsSmi());
+    DCHECK(length_ < capacity());
     array_->set(length_, value);
     length_++;
     has_non_smi_elements_ = true;
   }
 
   void Add(Smi* value) {
-    ASSERT(value->IsSmi());
-    ASSERT(length_ < capacity());
+    DCHECK(value->IsSmi());
+    DCHECK(length_ < capacity());
     array_->set(length_, value);
     length_++;
   }
@@ -3480,15 +3220,15 @@ class ReplacementStringBuilder {
         is_ascii_(subject->IsOneByteRepresentation()) {
     // Require a non-zero initial size. Ensures that doubling the size to
     // extend the array will work.
-    ASSERT(estimated_part_count > 0);
+    DCHECK(estimated_part_count > 0);
   }
 
   static inline void AddSubjectSlice(FixedArrayBuilder* builder,
                                      int from,
                                      int to) {
-    ASSERT(from >= 0);
+    DCHECK(from >= 0);
     int length = to - from;
-    ASSERT(length > 0);
+    DCHECK(length > 0);
     if (StringBuilderSubstringLength::is_valid(length) &&
         StringBuilderSubstringPosition::is_valid(from)) {
       int encoded_slice = StringBuilderSubstringLength::encode(length) |
@@ -3515,7 +3255,7 @@ class ReplacementStringBuilder {
 
   void AddString(Handle<String> string) {
     int length = string->length();
-    ASSERT(length > 0);
+    DCHECK(length > 0);
     AddElement(*string);
     if (!string->IsOneByteRepresentation()) {
       is_ascii_ = false;
@@ -3576,8 +3316,8 @@ class ReplacementStringBuilder {
 
  private:
   void AddElement(Object* element) {
-    ASSERT(element->IsSmi() || element->IsString());
-    ASSERT(array_builder_.capacity() > array_builder_.length());
+    DCHECK(element->IsSmi() || element->IsString());
+    DCHECK(array_builder_.capacity() > array_builder_.length());
     array_builder_.Add(element);
   }
 
@@ -3640,8 +3380,8 @@ class CompiledReplacement {
     return ReplacementPart(REPLACEMENT_STRING, 0);
   }
   static inline ReplacementPart ReplacementSubString(int from, int to) {
-    ASSERT(from >= 0);
-    ASSERT(to > from);
+    DCHECK(from >= 0);
+    DCHECK(to > from);
     return ReplacementPart(-from, to);
   }
 
@@ -3650,7 +3390,7 @@ class CompiledReplacement {
   ReplacementPart(int tag, int data)
       : tag(tag), data(data) {
     // Must be non-positive or a PartType value.
-    ASSERT(tag < NUMBER_OF_PART_TYPES);
+    DCHECK(tag < NUMBER_OF_PART_TYPES);
   }
   // Either a value of PartType or a non-positive number that is
   // the negation of an index into the replacement string.
@@ -3753,7 +3493,7 @@ class CompiledReplacement {
         if (i > last) {
           parts->Add(ReplacementPart::ReplacementSubString(last, i), zone);
         }
-        ASSERT(capture_ref <= capture_count);
+        DCHECK(capture_ref <= capture_count);
         parts->Add(ReplacementPart::SubjectCapture(capture_ref), zone);
         last = next_index + 1;
       }
@@ -3789,7 +3529,7 @@ bool CompiledReplacement::Compile(Handle<String> replacement,
   {
     DisallowHeapAllocation no_gc;
     String::FlatContent content = replacement->GetFlatContent();
-    ASSERT(content.IsFlat());
+    DCHECK(content.IsFlat());
     bool simple = false;
     if (content.IsAscii()) {
       simple = ParseReplacementPattern(&parts_,
@@ -3798,7 +3538,7 @@ bool CompiledReplacement::Compile(Handle<String> replacement,
                                        subject_length,
                                        zone());
     } else {
-      ASSERT(content.IsTwoByte());
+      DCHECK(content.IsTwoByte());
       simple = ParseReplacementPattern(&parts_,
                                        content.ToUC16Vector(),
                                        capture_count,
@@ -3835,7 +3575,7 @@ void CompiledReplacement::Apply(ReplacementStringBuilder* builder,
                                 int match_from,
                                 int match_to,
                                 int32_t* match) {
-  ASSERT_LT(0, parts_.length());
+  DCHECK_LT(0, parts_.length());
  for (int i = 0, n = parts_.length(); i < n; i++) {
    ReplacementPart part = parts_[i];
    switch (part.tag) {
@@ -3874,7 +3614,7 @@ void FindAsciiStringIndices(Vector<const uint8_t> subject,
                             ZoneList<int>* indices,
                             unsigned int limit,
                             Zone* zone) {
-  ASSERT(limit > 0);
+  DCHECK(limit > 0);
   // Collect indices of pattern in subject using memchr.
   // Stop after finding at most limit values.
   const uint8_t* subject_start = subject.start();
@@ -3896,7 +3636,7 @@ void FindTwoByteStringIndices(const Vector<const uc16> subject,
                               ZoneList<int>* indices,
                               unsigned int limit,
                               Zone* zone) {
-  ASSERT(limit > 0);
+  DCHECK(limit > 0);
   const uc16* subject_start = subject.start();
   const uc16* subject_end = subject_start + subject.length();
   for (const uc16* pos = subject_start; pos < subject_end && limit > 0; pos++) {
@@ -3915,7 +3655,7 @@ void FindStringIndices(Isolate* isolate,
                        ZoneList<int>* indices,
                        unsigned int limit,
                        Zone* zone) {
-  ASSERT(limit > 0);
+  DCHECK(limit > 0);
   // Collect indices of pattern in subject.
   // Stop after finding at most limit values.
   int pattern_length = pattern.length();
@@ -3941,8 +3681,8 @@ void FindStringIndicesDispatch(Isolate* isolate,
     DisallowHeapAllocation no_gc;
     String::FlatContent subject_content = subject->GetFlatContent();
     String::FlatContent pattern_content = pattern->GetFlatContent();
-    ASSERT(subject_content.IsFlat());
-    ASSERT(pattern_content.IsFlat());
+    DCHECK(subject_content.IsFlat());
+    DCHECK(pattern_content.IsFlat());
     if (subject_content.IsAscii()) {
       Vector<const uint8_t> subject_vector = subject_content.ToOneByteVector();
       if (pattern_content.IsAscii()) {
@@ -4018,12 +3758,12 @@ MUST_USE_RESULT static Object* StringReplaceGlobalAtomRegExpWithString(
     Handle<JSRegExp> pattern_regexp,
     Handle<String> replacement,
     Handle<JSArray> last_match_info) {
-  ASSERT(subject->IsFlat());
-  ASSERT(replacement->IsFlat());
+  DCHECK(subject->IsFlat());
+  DCHECK(replacement->IsFlat());
 
   ZoneScope zone_scope(isolate->runtime_zone());
   ZoneList<int> indices(8, zone_scope.zone());
-  ASSERT_EQ(JSRegExp::ATOM, pattern_regexp->TypeTag());
+  DCHECK_EQ(JSRegExp::ATOM, pattern_regexp->TypeTag());
   String* pattern =
       String::cast(pattern_regexp->DataAt(JSRegExp::kAtomPatternIndex));
   int subject_len = subject->length();
@@ -4106,8 +3846,8 @@ MUST_USE_RESULT static Object* StringReplaceGlobalRegExpWithString(
     Handle<JSRegExp> regexp,
     Handle<String> replacement,
     Handle<JSArray> last_match_info) {
-  ASSERT(subject->IsFlat());
-  ASSERT(replacement->IsFlat());
+  DCHECK(subject->IsFlat());
+  DCHECK(replacement->IsFlat());
 
   int capture_count = regexp->CaptureCount();
   int subject_length = subject->length();
@@ -4202,7 +3942,7 @@ MUST_USE_RESULT static Object* StringReplaceGlobalRegExpWithEmptyString(
     Handle<String> subject,
     Handle<JSRegExp> regexp,
     Handle<JSArray> last_match_info) {
-  ASSERT(subject->IsFlat());
+  DCHECK(subject->IsFlat());
 
   // Shortcut for simple non-regexp global replacements
   if (regexp->TypeTag() == JSRegExp::ATOM) {
@@ -4297,7 +4037,7 @@ MUST_USE_RESULT static Object* StringReplaceGlobalRegExpWithEmptyString(
 
 RUNTIME_FUNCTION(Runtime_StringReplaceGlobalRegExpWithString) {
   HandleScope scope(isolate);
-  ASSERT(args.length() == 4);
+  DCHECK(args.length() == 4);
 
   CONVERT_ARG_HANDLE_CHECKED(String, subject, 0);
   CONVERT_ARG_HANDLE_CHECKED(String, replacement, 2);
@@ -4305,6 +4045,7 @@ RUNTIME_FUNCTION(Runtime_StringReplaceGlobalRegExpWithString) {
   CONVERT_ARG_HANDLE_CHECKED(JSArray, last_match_info, 3);
 
   RUNTIME_ASSERT(regexp->GetFlags().is_global());
+  RUNTIME_ASSERT(last_match_info->HasFastObjectElements());
 
   subject = String::Flatten(subject);
@@ -4333,7 +4074,10 @@ MaybeHandle<String> StringReplaceOneCharWithString(Isolate* isolate,
                                                    Handle<String> replace,
                                                    bool* found,
                                                    int recursion_limit) {
-  if (recursion_limit == 0) return MaybeHandle<String>();
+  StackLimitCheck stackLimitCheck(isolate);
+  if (stackLimitCheck.HasOverflowed() || (recursion_limit == 0)) {
+    return MaybeHandle<String>();
+  }
   recursion_limit--;
   if (subject->IsConsString()) {
     ConsString* cons = ConsString::cast(*subject);
@@ -4375,7 +4119,7 @@ MaybeHandle<String> StringReplaceOneCharWithString(Isolate* isolate,
 
 RUNTIME_FUNCTION(Runtime_StringReplaceOneCharWithString) {
   HandleScope scope(isolate);
-  ASSERT(args.length() == 3);
+  DCHECK(args.length() == 3);
   CONVERT_ARG_HANDLE_CHECKED(String, subject, 0);
   CONVERT_ARG_HANDLE_CHECKED(String, search, 1);
   CONVERT_ARG_HANDLE_CHECKED(String, replace, 2);
@@ -4408,8 +4152,8 @@ int Runtime::StringMatch(Isolate* isolate,
                          Handle<String> sub,
                          Handle<String> pat,
                          int start_index) {
-  ASSERT(0 <= start_index);
-  ASSERT(start_index <= sub->length());
+  DCHECK(0 <= start_index);
+  DCHECK(start_index <= sub->length());
 
   int pattern_length = pat->length();
   if (pattern_length == 0) return start_index;
@@ -4455,7 +4199,7 @@ int Runtime::StringMatch(Isolate* isolate,
 
 RUNTIME_FUNCTION(Runtime_StringIndexOf) {
   HandleScope scope(isolate);
-  ASSERT(args.length() == 3);
+  DCHECK(args.length() == 3);
 
   CONVERT_ARG_HANDLE_CHECKED(String, sub, 0);
   CONVERT_ARG_HANDLE_CHECKED(String, pat, 1);
@@ -4475,8 +4219,8 @@ static int StringMatchBackwards(Vector<const schar> subject,
                                 Vector<const pchar> pattern,
                                 int idx) {
   int pattern_length = pattern.length();
-  ASSERT(pattern_length >= 1);
-  ASSERT(idx + pattern_length <= subject.length());
+  DCHECK(pattern_length >= 1);
+  DCHECK(idx + pattern_length <= subject.length());
 
   if (sizeof(schar) == 1 && sizeof(pchar) > 1) {
     for (int i = 0; i < pattern_length; i++) {
@@ -4507,7 +4251,7 @@ static int StringMatchBackwards(Vector<const schar> subject,
 
 RUNTIME_FUNCTION(Runtime_StringLastIndexOf) {
   HandleScope scope(isolate);
-  ASSERT(args.length() == 3);
+  DCHECK(args.length() == 3);
 
   CONVERT_ARG_HANDLE_CHECKED(String, sub, 0);
   CONVERT_ARG_HANDLE_CHECKED(String, pat, 1);
@@ -4566,7 +4310,7 @@ RUNTIME_FUNCTION(Runtime_StringLastIndexOf) {
 
 RUNTIME_FUNCTION(Runtime_StringLocaleCompare) {
   HandleScope handle_scope(isolate);
-  ASSERT(args.length() == 2);
+  DCHECK(args.length() == 2);
 
   CONVERT_ARG_HANDLE_CHECKED(String, str1, 0);
   CONVERT_ARG_HANDLE_CHECKED(String, str2, 1);
@@ -4608,9 +4352,9 @@ RUNTIME_FUNCTION(Runtime_StringLocaleCompare) {
 }
 
 
-RUNTIME_FUNCTION(RuntimeHidden_SubString) {
+RUNTIME_FUNCTION(Runtime_SubString) {
   HandleScope scope(isolate);
-  ASSERT(args.length() == 3);
+  DCHECK(args.length() == 3);
 
   CONVERT_ARG_HANDLE_CHECKED(String, string, 0);
   int start, end;
@@ -4636,9 +4380,17 @@ RUNTIME_FUNCTION(RuntimeHidden_SubString) {
 }
 
 
+RUNTIME_FUNCTION(Runtime_InternalizeString) {
+  HandleScope handles(isolate);
+  RUNTIME_ASSERT(args.length() == 1);
+  CONVERT_ARG_HANDLE_CHECKED(String, string, 0);
+  return *isolate->factory()->InternalizeString(string);
+}
+
+
 RUNTIME_FUNCTION(Runtime_StringMatch) {
   HandleScope handles(isolate);
-  ASSERT(args.length() == 3);
+  DCHECK(args.length() == 3);
 
   CONVERT_ARG_HANDLE_CHECKED(String, subject, 0);
   CONVERT_ARG_HANDLE_CHECKED(JSRegExp, regexp, 1);
@@ -4701,8 +4453,8 @@ static Object* SearchRegExpMultiple(
     Handle<JSRegExp> regexp,
     Handle<JSArray> last_match_array,
     Handle<JSArray> result_array) {
-  ASSERT(subject->IsFlat());
-  ASSERT_NE(has_capture, regexp->CaptureCount() == 0);
+  DCHECK(subject->IsFlat());
+  DCHECK_NE(has_capture, regexp->CaptureCount() == 0);
 
   int capture_count = regexp->CaptureCount();
   int subject_length = subject->length();
@@ -4735,12 +4487,11 @@ static Object* SearchRegExpMultiple(
   RegExpImpl::GlobalCache global_cache(regexp, subject, true, isolate);
   if (global_cache.HasException()) return isolate->heap()->exception();
 
-  Handle<FixedArray> result_elements;
-  if (result_array->HasFastObjectElements()) {
-    result_elements =
-        Handle<FixedArray>(FixedArray::cast(result_array->elements()));
-  }
-  if (result_elements.is_null() || result_elements->length() < 16) {
+  // Ensured in Runtime_RegExpExecMultiple.
+  DCHECK(result_array->HasFastObjectElements());
+  Handle<FixedArray> result_elements(
+      FixedArray::cast(result_array->elements()));
+  if (result_elements->length() < 16) {
     result_elements = isolate->factory()->NewFixedArrayWithHoles(16);
   }
@@ -4791,12 +4542,12 @@ static Object* SearchRegExpMultiple(
           int start = current_match[i * 2];
           if (start >= 0) {
             int end = current_match[i * 2 + 1];
-            ASSERT(start <= end);
+            DCHECK(start <= end);
             Handle<String> substring =
                 isolate->factory()->NewSubString(subject, start, end);
             elements->set(i, *substring);
           } else {
-            ASSERT(current_match[i * 2 + 1] < 0);
+            DCHECK(current_match[i * 2 + 1] < 0);
             elements->set(i, isolate->heap()->undefined_value());
           }
         }
@@ -4848,15 +4599,17 @@ static Object* SearchRegExpMultiple(
 // set any other last match array info.
 RUNTIME_FUNCTION(Runtime_RegExpExecMultiple) {
   HandleScope handles(isolate);
-  ASSERT(args.length() == 4);
+  DCHECK(args.length() == 4);
 
   CONVERT_ARG_HANDLE_CHECKED(String, subject, 1);
   CONVERT_ARG_HANDLE_CHECKED(JSRegExp, regexp, 0);
   CONVERT_ARG_HANDLE_CHECKED(JSArray, last_match_info, 2);
   CONVERT_ARG_HANDLE_CHECKED(JSArray, result_array, 3);
+  RUNTIME_ASSERT(last_match_info->HasFastObjectElements());
+  RUNTIME_ASSERT(result_array->HasFastObjectElements());
 
   subject = String::Flatten(subject);
-  ASSERT(regexp->GetFlags().is_global());
+  RUNTIME_ASSERT(regexp->GetFlags().is_global());
 
   if (regexp->CaptureCount() == 0) {
     return SearchRegExpMultiple<false>(
@@ -4870,7 +4623,7 @@ RUNTIME_FUNCTION(Runtime_RegExpExecMultiple) {
 
 RUNTIME_FUNCTION(Runtime_NumberToRadixString) {
   HandleScope scope(isolate);
-  ASSERT(args.length() == 2);
+  DCHECK(args.length() == 2);
   CONVERT_SMI_ARG_CHECKED(radix, 1);
   RUNTIME_ASSERT(2 <= radix && radix <= 36);
 
@@ -4905,13 +4658,14 @@ RUNTIME_FUNCTION(Runtime_NumberToRadixString) {
 
 RUNTIME_FUNCTION(Runtime_NumberToFixed) {
   HandleScope scope(isolate);
-  ASSERT(args.length() == 2);
+  DCHECK(args.length() == 2);
 
   CONVERT_DOUBLE_ARG_CHECKED(value, 0);
   CONVERT_DOUBLE_ARG_CHECKED(f_number, 1);
   int f = FastD2IChecked(f_number);
   // See DoubleToFixedCString for these constants:
   RUNTIME_ASSERT(f >= 0 && f <= 20);
+  RUNTIME_ASSERT(!Double(value).IsSpecial());
   char* str = DoubleToFixedCString(value, f);
   Handle<String> result = isolate->factory()->NewStringFromAsciiChecked(str);
   DeleteArray(str);
@@ -4921,12 +4675,13 @@ RUNTIME_FUNCTION(Runtime_NumberToFixed) {
 
 RUNTIME_FUNCTION(Runtime_NumberToExponential) {
   HandleScope scope(isolate);
-  ASSERT(args.length() == 2);
+  DCHECK(args.length() == 2);
 
   CONVERT_DOUBLE_ARG_CHECKED(value, 0);
   CONVERT_DOUBLE_ARG_CHECKED(f_number, 1);
   int f = FastD2IChecked(f_number);
   RUNTIME_ASSERT(f >= -1 && f <= 20);
+  RUNTIME_ASSERT(!Double(value).IsSpecial());
   char* str = DoubleToExponentialCString(value, f);
   Handle<String> result = isolate->factory()->NewStringFromAsciiChecked(str);
   DeleteArray(str);
@@ -4936,12 +4691,13 @@ RUNTIME_FUNCTION(Runtime_NumberToExponential) {
 
 RUNTIME_FUNCTION(Runtime_NumberToPrecision) {
   HandleScope scope(isolate);
-  ASSERT(args.length() == 2);
+  DCHECK(args.length() == 2);
 
   CONVERT_DOUBLE_ARG_CHECKED(value, 0);
   CONVERT_DOUBLE_ARG_CHECKED(f_number, 1);
   int f = FastD2IChecked(f_number);
   RUNTIME_ASSERT(f >= 1 && f <= 21);
+  RUNTIME_ASSERT(!Double(value).IsSpecial());
   char* str = DoubleToPrecisionCString(value, f);
   Handle<String> result = isolate->factory()->NewStringFromAsciiChecked(str);
   DeleteArray(str);
@@ -4951,7 +4707,7 @@ RUNTIME_FUNCTION(Runtime_NumberToPrecision) {
 
 RUNTIME_FUNCTION(Runtime_IsValidSmi) {
   SealHandleScope shs(isolate);
-  ASSERT(args.length() == 1);
+  DCHECK(args.length() == 1);
 
   CONVERT_NUMBER_CHECKED(int32_t, number, Int32, args[0]);
   return isolate->heap()->ToBoolean(Smi::IsValid(number));
@@ -4989,8 +4745,9 @@ MaybeHandle<Object> Runtime::GetElementOrCharAt(Isolate* isolate,
   Handle<Object> result;
   if (object->IsString() || object->IsNumber() || object->IsBoolean()) {
-    Handle<Object> proto(object->GetPrototype(isolate), isolate);
-    return Object::GetElement(isolate, proto, index);
+    PrototypeIterator iter(isolate, object);
+    return Object::GetElement(isolate, PrototypeIterator::GetCurrent(iter),
+                              index);
   } else {
     return Object::GetElement(isolate, object, index);
   }
@@ -5013,17 +4770,21 @@ static MaybeHandle<Name> ToName(Isolate* isolate, Handle<Object> key) {
 MaybeHandle<Object> Runtime::HasObjectProperty(Isolate* isolate,
                                                Handle<JSReceiver> object,
                                                Handle<Object> key) {
+  Maybe<bool> maybe;
   // Check if the given key is an array index.
   uint32_t index;
   if (key->ToArrayIndex(&index)) {
-    return isolate->factory()->ToBoolean(JSReceiver::HasElement(object, index));
-  }
+    maybe = JSReceiver::HasElement(object, index);
+  } else {
+    // Convert the key to a name - possibly by calling back into JavaScript.
+    Handle<Name> name;
+    ASSIGN_RETURN_ON_EXCEPTION(isolate, name, ToName(isolate, key), Object);
 
-  // Convert the key to a name - possibly by calling back into JavaScript.
-  Handle<Name> name;
-  ASSIGN_RETURN_ON_EXCEPTION(isolate, name, ToName(isolate, key), Object);
+    maybe = JSReceiver::HasProperty(object, name);
+  }
 
-  return isolate->factory()->ToBoolean(JSReceiver::HasProperty(object, name));
+  if (!maybe.has_value) return MaybeHandle<Object>();
+  return isolate->factory()->ToBoolean(maybe.value);
 }
 
@@ -5059,7 +4820,7 @@ MaybeHandle<Object> Runtime::GetObjectProperty(Isolate* isolate,
 
 RUNTIME_FUNCTION(Runtime_GetProperty) {
   HandleScope scope(isolate);
-  ASSERT(args.length() == 2);
+  DCHECK(args.length() == 2);
 
   CONVERT_ARG_HANDLE_CHECKED(Object, object, 0);
   CONVERT_ARG_HANDLE_CHECKED(Object, key, 1);
@@ -5074,7 +4835,7 @@ RUNTIME_FUNCTION(Runtime_GetProperty) {
 // KeyedGetProperty is called from KeyedLoadIC::GenerateGeneric.
 RUNTIME_FUNCTION(Runtime_KeyedGetProperty) {
   HandleScope scope(isolate);
-  ASSERT(args.length() == 2);
+  DCHECK(args.length() == 2);
 
   CONVERT_ARG_HANDLE_CHECKED(Object, receiver_obj, 0);
   CONVERT_ARG_HANDLE_CHECKED(Object, key_obj, 1);
@@ -5082,11 +4843,11 @@ RUNTIME_FUNCTION(Runtime_KeyedGetProperty) {
   // Fast cases for getting named properties of the receiver JSObject
   // itself.
   //
-  // The global proxy objects has to be excluded since LocalLookup on
+  // The global proxy objects has to be excluded since LookupOwn on
   // the global proxy object can return a valid result even though the
   // global proxy object never has properties.  This is the case
   // because the global proxy object forwards everything to its hidden
-  // prototype including local lookups.
+  // prototype including own lookups.
   //
   // Additionally, we need to make sure that we do not cache results
   // for objects that require access checks.
@@ -5101,28 +4862,27 @@ RUNTIME_FUNCTION(Runtime_KeyedGetProperty) {
       // Attempt to use lookup cache.
       Handle<Map> receiver_map(receiver->map(), isolate);
       KeyedLookupCache* keyed_lookup_cache = isolate->keyed_lookup_cache();
-      int offset = keyed_lookup_cache->Lookup(receiver_map, key);
-      if (offset != -1) {
+      int index = keyed_lookup_cache->Lookup(receiver_map, key);
+      if (index != -1) {
         // Doubles are not cached, so raw read the value.
-        Object* value = receiver->RawFastPropertyAt(offset);
-        return value->IsTheHole()
-            ? isolate->heap()->undefined_value() : value;
+        return receiver->RawFastPropertyAt(
+            FieldIndex::ForKeyedLookupCacheIndex(*receiver_map, index));
       }
       // Lookup cache miss.  Perform lookup and update the cache if
      // appropriate.
      LookupResult result(isolate);
-      receiver->LocalLookup(key, &result);
+      receiver->LookupOwn(key, &result);
      if (result.IsField()) {
-        int offset = result.GetFieldIndex().field_index();
+        FieldIndex field_index = result.GetFieldIndex();
        // Do not track double fields in the keyed lookup cache. Reading
        // double values requires boxing.
        if (!result.representation().IsDouble()) {
-          keyed_lookup_cache->Update(receiver_map, key, offset);
+          keyed_lookup_cache->Update(receiver_map, key,
+                                     field_index.GetKeyedLookupCacheIndex());
        }
        AllowHeapAllocation allow_allocation;
-        return *JSObject::FastPropertyAt(
-            receiver, result.representation(), offset);
+        return *JSObject::FastPropertyAt(receiver, result.representation(),
+                                         field_index);
      }
    } else {
      // Attempt dictionary lookup.
@@ -5134,10 +4894,10 @@ RUNTIME_FUNCTION(Runtime_KeyedGetProperty) {
         if (!receiver->IsGlobalObject()) return value;
         value = PropertyCell::cast(value)->value();
         if (!value->IsTheHole()) return value;
-        // If value is the hole do the general lookup.
+        // If value is the hole (meaning, absent) do the general lookup.
       }
     }
-  } else if (FLAG_smi_only_arrays && key_obj->IsSmi()) {
+  } else if (key_obj->IsSmi()) {
     // JSObject without a name key. If the key is a Smi, check for a
     // definite out-of-bounds access to elements, which is a strong indicator
     // that subsequent accesses will also call the runtime. Proactively
@@ -5158,7 +4918,7 @@ RUNTIME_FUNCTION(Runtime_KeyedGetProperty) {
           isolate, TransitionElements(js_object, elements_kind, isolate));
       }
     } else {
-      ASSERT(IsFastSmiOrObjectElementsKind(elements_kind) ||
+      DCHECK(IsFastSmiOrObjectElementsKind(elements_kind) ||
              !IsFastElementsKind(elements_kind));
     }
   }
@@ -5185,15 +4945,46 @@ static bool IsValidAccessor(Handle<Object> obj) {
 }
 
 
+// Transform getter or setter into something DefineAccessor can handle.
+static Handle<Object> InstantiateAccessorComponent(Isolate* isolate,
+                                                   Handle<Object> component) {
+  if (component->IsUndefined()) return isolate->factory()->null_value();
+  Handle<FunctionTemplateInfo> info =
+      Handle<FunctionTemplateInfo>::cast(component);
+  return Utils::OpenHandle(*Utils::ToLocal(info)->GetFunction());
+}
+
+
+RUNTIME_FUNCTION(Runtime_DefineApiAccessorProperty) {
+  HandleScope scope(isolate);
+  DCHECK(args.length() == 5);
+  CONVERT_ARG_HANDLE_CHECKED(JSObject, object, 0);
+  CONVERT_ARG_HANDLE_CHECKED(Name, name, 1);
+  CONVERT_ARG_HANDLE_CHECKED(Object, getter, 2);
+  CONVERT_ARG_HANDLE_CHECKED(Object, setter, 3);
+  CONVERT_SMI_ARG_CHECKED(attribute, 4);
+  RUNTIME_ASSERT(getter->IsUndefined() || getter->IsFunctionTemplateInfo());
+  RUNTIME_ASSERT(setter->IsUndefined() || setter->IsFunctionTemplateInfo());
+  RUNTIME_ASSERT(PropertyDetails::AttributesField::is_valid(
+      static_cast<PropertyAttributes>(attribute)));
+  RETURN_FAILURE_ON_EXCEPTION(
+      isolate, JSObject::DefineAccessor(
+                   object, name, InstantiateAccessorComponent(isolate, getter),
+                   InstantiateAccessorComponent(isolate, setter),
+                   static_cast<PropertyAttributes>(attribute)));
+  return isolate->heap()->undefined_value();
+}
+
+
 // Implements part of 8.12.9 DefineOwnProperty.
 // There are 3 cases that lead here:
 // Step 4b - define a new accessor property.
 // Steps 9c & 12 - replace an existing data property with an accessor property.
 // Step 12 - update an existing accessor property with an accessor or generic
 //           descriptor.
-RUNTIME_FUNCTION(Runtime_DefineOrRedefineAccessorProperty) {
+RUNTIME_FUNCTION(Runtime_DefineAccessorPropertyUnchecked) {
   HandleScope scope(isolate);
-  ASSERT(args.length() == 5);
+  DCHECK(args.length() == 5);
   CONVERT_ARG_HANDLE_CHECKED(JSObject, obj, 0);
   RUNTIME_ASSERT(!obj->IsNull());
   CONVERT_ARG_HANDLE_CHECKED(Name, name, 1);
@@ -5206,10 +4997,9 @@ RUNTIME_FUNCTION(Runtime_DefineOrRedefineAccessorProperty) {
   PropertyAttributes attr = static_cast<PropertyAttributes>(unchecked);
 
   bool fast = obj->HasFastProperties();
-  // DefineAccessor checks access rights.
-  JSObject::DefineAccessor(obj, name, getter, setter, attr);
-  RETURN_FAILURE_IF_SCHEDULED_EXCEPTION(isolate);
-  if (fast) JSObject::TransformToFastProperties(obj, 0);
+  RETURN_FAILURE_ON_EXCEPTION(
+      isolate, JSObject::DefineAccessor(obj, name, getter, setter, attr));
+  if (fast) JSObject::MigrateSlowToFast(obj, 0);
   return isolate->heap()->undefined_value();
 }
 
@@ -5220,9 +5010,9 @@ RUNTIME_FUNCTION(Runtime_DefineOrRedefineAccessorProperty) {
 // Steps 9b & 12 - replace an existing accessor property with a data property.
 // Step 12 - update an existing data property with a data or generic
 //           descriptor.
-RUNTIME_FUNCTION(Runtime_DefineOrRedefineDataProperty) {
+RUNTIME_FUNCTION(Runtime_DefineDataPropertyUnchecked) {
   HandleScope scope(isolate);
-  ASSERT(args.length() == 4);
+  DCHECK(args.length() == 4);
   CONVERT_ARG_HANDLE_CHECKED(JSObject, js_object, 0);
   CONVERT_ARG_HANDLE_CHECKED(Name, name, 1);
   CONVERT_ARG_HANDLE_CHECKED(Object, obj_value, 2);
@@ -5237,60 +5027,31 @@ RUNTIME_FUNCTION(Runtime_DefineOrRedefineDataProperty) {
   }
 
   LookupResult lookup(isolate);
-  js_object->LocalLookupRealNamedProperty(name, &lookup);
-
-  // Special case for callback properties.
-  if (lookup.IsPropertyCallbacks()) {
-    Handle<Object> callback(lookup.GetCallbackObject(), isolate);
-    // Avoid redefining callback as data property, just use the stored
-    // setter to update the value instead.
-    // TODO(mstarzinger): So far this only works if property attributes don't
-    // change, this should be fixed once we cleanup the underlying code.
-    ASSERT(!callback->IsForeign());
-    if (callback->IsAccessorInfo() &&
-        lookup.GetAttributes() == attr) {
-      Handle<Object> result_object;
-      ASSIGN_RETURN_FAILURE_ON_EXCEPTION(
-          isolate, result_object,
-          JSObject::SetPropertyWithCallback(js_object,
-                                            callback,
-                                            name,
-                                            obj_value,
-                                            handle(lookup.holder()),
-                                            STRICT));
-      return *result_object;
-    }
-  }
+  js_object->LookupOwnRealNamedProperty(name, &lookup);
 
   // Take special care when attributes are different and there is already
   // a property. For simplicity we normalize the property which enables us
   // to not worry about changing the instance_descriptor and creating a new
-  // map. The current version of SetObjectProperty does not handle attributes
-  // correctly in the case where a property is a field and is reset with
-  // new attributes.
+  // map.
   if (lookup.IsFound() &&
      (attr != lookup.GetAttributes() || lookup.IsPropertyCallbacks())) {
-    // New attributes - normalize to avoid writing to instance descriptor
-    if (js_object->IsJSGlobalProxy()) {
-      // Since the result is a property, the prototype will exist so
-      // we don't have to check for null.
-      js_object = Handle<JSObject>(JSObject::cast(js_object->GetPrototype()));
-    }
-
-    JSObject::NormalizeProperties(js_object, CLEAR_INOBJECT_PROPERTIES, 0);
     // Use IgnoreAttributes version since a readonly property may be
     // overridden and SetProperty does not allow this.
     Handle<Object> result;
     ASSIGN_RETURN_FAILURE_ON_EXCEPTION(
         isolate, result,
-        JSObject::SetLocalPropertyIgnoreAttributes(
-            js_object, name, obj_value, attr));
+        JSObject::SetOwnPropertyIgnoreAttributes(
+            js_object, name, obj_value, attr,
+            JSReceiver::PERFORM_EXTENSIBILITY_CHECK,
+            JSReceiver::MAY_BE_STORE_FROM_KEYED,
+            JSObject::DONT_FORCE_FIELD));
     return *result;
   }
 
   Handle<Object> result;
   ASSIGN_RETURN_FAILURE_ON_EXCEPTION(
       isolate, result,
-      Runtime::ForceSetObjectProperty(
+      Runtime::DefineObjectProperty(
           js_object, name, obj_value, attr,
           JSReceiver::CERTAINLY_NOT_STORE_FROM_KEYED));
   return *result;
@@ -5300,7 +5061,7 @@ RUNTIME_FUNCTION(Runtime_DefineOrRedefineDataProperty) {
 // Return property without being observable by accessors or interceptors.
 RUNTIME_FUNCTION(Runtime_GetDataProperty) {
   HandleScope scope(isolate);
-  ASSERT(args.length() == 2);
+  DCHECK(args.length() == 2);
   CONVERT_ARG_HANDLE_CHECKED(JSObject, object, 0);
   CONVERT_ARG_HANDLE_CHECKED(Name, key, 1);
   return *JSObject::GetDataProperty(object, key);
@@ -5311,10 +5072,7 @@ MaybeHandle<Object> Runtime::SetObjectProperty(Isolate* isolate,
                                                Handle<Object> object,
                                                Handle<Object> key,
                                                Handle<Object> value,
-                                               PropertyAttributes attr,
                                                StrictMode strict_mode) {
-  SetPropertyMode set_mode = attr == NONE ? SET_PROPERTY : DEFINE_PROPERTY;
-
   if (object->IsUndefined() || object->IsNull()) {
     Handle<Object> args[2] = { key, object };
     Handle<Object> error =
@@ -5332,19 +5090,17 @@ MaybeHandle<Object> Runtime::SetObjectProperty(Isolate* isolate,
           isolate, name_object, Execution::ToString(isolate, key), Object);
     }
     Handle<Name> name = Handle<Name>::cast(name_object);
-    return JSReceiver::SetProperty(Handle<JSProxy>::cast(object), name, value,
-                                   attr,
-                                   strict_mode);
+    return Object::SetProperty(Handle<JSProxy>::cast(object), name, value,
+                               strict_mode);
   }
 
-  // If the object isn't a JavaScript object, we ignore the store.
-  if (!object->IsJSObject()) return value;
-
-  Handle<JSObject> js_object = Handle<JSObject>::cast(object);
-
   // Check if the given key is an array index.
   uint32_t index;
   if (key->ToArrayIndex(&index)) {
+    // TODO(verwaest): Support non-JSObject receivers.
+    if (!object->IsJSObject()) return value;
+    Handle<JSObject> js_object = Handle<JSObject>::cast(object);
+
     // In Firefox/SpiderMonkey, Safari and Opera you can access the characters
     // of a string using [] notation.  We need to support this too in
     // JavaScript.
@@ -5366,7 +5122,7 @@ MaybeHandle<Object> Runtime::SetObjectProperty(Isolate* isolate,
     }
 
     MaybeHandle<Object> result = JSObject::SetElement(
-        js_object, index, value, attr, strict_mode, true, set_mode);
+        js_object, index, value, NONE, strict_mode, true, SET_PROPERTY);
     JSObject::ValidateElements(js_object);
 
     return result.is_null() ? result : value;
@@ -5375,17 +5131,20 @@ MaybeHandle<Object> Runtime::SetObjectProperty(Isolate* isolate,
   if (key->IsName()) {
     Handle<Name> name = Handle<Name>::cast(key);
     if (name->AsArrayIndex(&index)) {
+      // TODO(verwaest): Support non-JSObject receivers.
+      if (!object->IsJSObject()) return value;
+      Handle<JSObject> js_object = Handle<JSObject>::cast(object);
       if (js_object->HasExternalArrayElements()) {
         if (!value->IsNumber() && !value->IsUndefined()) {
           ASSIGN_RETURN_ON_EXCEPTION(
               isolate, value, Execution::ToNumber(isolate, value), Object);
         }
       }
-      return JSObject::SetElement(js_object, index, value, attr,
-                                  strict_mode, true, set_mode);
+      return JSObject::SetElement(js_object, index, value, NONE, strict_mode,
+                                  true, SET_PROPERTY);
     } else {
       if (name->IsString()) name = String::Flatten(Handle<String>::cast(name));
-      return JSReceiver::SetProperty(js_object, name, value, attr, strict_mode);
+      return Object::SetProperty(object, name, value, strict_mode);
     }
   }
 
@@ -5396,15 +5155,17 @@ MaybeHandle<Object> Runtime::SetObjectProperty(Isolate* isolate,
   Handle<String> name = Handle<String>::cast(converted);
 
   if (name->AsArrayIndex(&index)) {
-    return JSObject::SetElement(js_object, index, value, attr,
-                                strict_mode, true, set_mode);
-  } else {
-    return JSReceiver::SetProperty(js_object, name, value, attr, strict_mode);
+    // TODO(verwaest): Support non-JSObject receivers.
+    if (!object->IsJSObject()) return value;
+    Handle<JSObject> js_object = Handle<JSObject>::cast(object);
+    return JSObject::SetElement(js_object, index, value, NONE, strict_mode,
+                                true, SET_PROPERTY);
   }
+  return Object::SetProperty(object, name, value, strict_mode);
 }
 
 
-MaybeHandle<Object> Runtime::ForceSetObjectProperty(
+MaybeHandle<Object> Runtime::DefineObjectProperty(
     Handle<JSObject> js_object,
     Handle<Object> key,
     Handle<Object> value,
@@ -5436,9 +5197,8 @@ MaybeHandle<Object> Runtime::ForceSetObjectProperty(
                                   SLOPPY, false, DEFINE_PROPERTY);
     } else {
       if (name->IsString()) name = String::Flatten(Handle<String>::cast(name));
-      return JSObject::SetLocalPropertyIgnoreAttributes(
-          js_object, name, value, attr, Object::OPTIMAL_REPRESENTATION,
-          ALLOW_AS_CONSTANT, JSReceiver::PERFORM_EXTENSIBILITY_CHECK,
+      return JSObject::SetOwnPropertyIgnoreAttributes(
+          js_object, name, value, attr, JSReceiver::PERFORM_EXTENSIBILITY_CHECK,
          store_from_keyed);
    }
  }
@@ -5453,9 +5213,8 @@ MaybeHandle<Object> Runtime::ForceSetObjectProperty(
     return JSObject::SetElement(js_object, index, value, attr,
                                 SLOPPY, false, DEFINE_PROPERTY);
   } else {
-    return JSObject::SetLocalPropertyIgnoreAttributes(
-        js_object, name, value, attr, Object::OPTIMAL_REPRESENTATION,
-        ALLOW_AS_CONSTANT, JSReceiver::PERFORM_EXTENSIBILITY_CHECK,
+    return JSObject::SetOwnPropertyIgnoreAttributes(
+        js_object, name, value, attr, JSReceiver::PERFORM_EXTENSIBILITY_CHECK,
        store_from_keyed);
  }
 }
@@ -5504,15 +5263,47 @@ RUNTIME_FUNCTION(Runtime_SetHiddenProperty) {
   CONVERT_ARG_HANDLE_CHECKED(JSObject, object, 0);
   CONVERT_ARG_HANDLE_CHECKED(String, key, 1);
   CONVERT_ARG_HANDLE_CHECKED(Object, value, 2);
+  RUNTIME_ASSERT(key->IsUniqueName());
   return *JSObject::SetHiddenProperty(object, key, value);
 }
 
 
-RUNTIME_FUNCTION(Runtime_SetProperty) {
+RUNTIME_FUNCTION(Runtime_AddNamedProperty) {
   HandleScope scope(isolate);
-  RUNTIME_ASSERT(args.length() == 4 || args.length() == 5);
+  RUNTIME_ASSERT(args.length() == 4);
 
-  CONVERT_ARG_HANDLE_CHECKED(Object, object, 0);
+  CONVERT_ARG_HANDLE_CHECKED(JSObject, object, 0);
+  CONVERT_ARG_HANDLE_CHECKED(Name, key, 1);
+  CONVERT_ARG_HANDLE_CHECKED(Object, value, 2);
+  CONVERT_SMI_ARG_CHECKED(unchecked_attributes, 3);
+  RUNTIME_ASSERT(
+      (unchecked_attributes & ~(READ_ONLY | DONT_ENUM | DONT_DELETE)) == 0);
+  // Compute attributes.
+  PropertyAttributes attributes =
+      static_cast<PropertyAttributes>(unchecked_attributes);
+
+#ifdef DEBUG
+  uint32_t index = 0;
+  DCHECK(!key->ToArrayIndex(&index));
+  LookupIterator it(object, key, LookupIterator::CHECK_OWN_REAL);
+  Maybe<PropertyAttributes> maybe = JSReceiver::GetPropertyAttributes(&it);
+  DCHECK(maybe.has_value);
+  RUNTIME_ASSERT(!it.IsFound());
+#endif
+
+  Handle<Object> result;
+  ASSIGN_RETURN_FAILURE_ON_EXCEPTION(
+      isolate, result,
+      JSObject::SetOwnPropertyIgnoreAttributes(object, key, value, attributes));
+  return *result;
+}
+
+
+RUNTIME_FUNCTION(Runtime_AddPropertyForTemplate) {
+  HandleScope scope(isolate);
+  RUNTIME_ASSERT(args.length() == 4);
+
+  CONVERT_ARG_HANDLE_CHECKED(JSObject, object, 0);
   CONVERT_ARG_HANDLE_CHECKED(Object, key, 1);
   CONVERT_ARG_HANDLE_CHECKED(Object, value, 2);
   CONVERT_SMI_ARG_CHECKED(unchecked_attributes, 3);
@@ -5522,17 +5313,51 @@ RUNTIME_FUNCTION(Runtime_SetProperty) {
   PropertyAttributes attributes =
       static_cast<PropertyAttributes>(unchecked_attributes);
 
-  StrictMode strict_mode = SLOPPY;
-  if (args.length() == 5) {
-    CONVERT_STRICT_MODE_ARG_CHECKED(strict_mode_arg, 4);
-    strict_mode = strict_mode_arg;
+#ifdef DEBUG
+  bool duplicate;
+  if (key->IsName()) {
+    LookupIterator it(object, Handle<Name>::cast(key),
+                      LookupIterator::CHECK_OWN_REAL);
+    Maybe<PropertyAttributes> maybe = JSReceiver::GetPropertyAttributes(&it);
+    DCHECK(maybe.has_value);
+    duplicate = it.IsFound();
+  } else {
+    uint32_t index = 0;
+    RUNTIME_ASSERT(key->ToArrayIndex(&index));
+    Maybe<bool> maybe = JSReceiver::HasOwnElement(object, index);
+    if (!maybe.has_value) return isolate->heap()->exception();
+    duplicate = maybe.value;
+  }
+  if (duplicate) {
+    Handle<Object> args[1] = { key };
+    Handle<Object> error = isolate->factory()->NewTypeError(
+        "duplicate_template_property", HandleVector(args, 1));
+    return isolate->Throw(*error);
   }
+#endif
+
+  Handle<Object> result;
+  ASSIGN_RETURN_FAILURE_ON_EXCEPTION(
+      isolate, result,
+      Runtime::DefineObjectProperty(object, key, value, attributes));
+  return *result;
+}
+
+
+RUNTIME_FUNCTION(Runtime_SetProperty) {
+  HandleScope scope(isolate);
+  RUNTIME_ASSERT(args.length() == 4);
+
+  CONVERT_ARG_HANDLE_CHECKED(Object, object, 0);
+  CONVERT_ARG_HANDLE_CHECKED(Object, key, 1);
+  CONVERT_ARG_HANDLE_CHECKED(Object, value, 2);
+  CONVERT_STRICT_MODE_ARG_CHECKED(strict_mode_arg, 3);
+  StrictMode strict_mode = strict_mode_arg;
 
   Handle<Object> result;
   ASSIGN_RETURN_FAILURE_ON_EXCEPTION(
       isolate, result,
-      Runtime::SetObjectProperty(
-          isolate, object, key, value, attributes, strict_mode));
+      Runtime::SetObjectProperty(isolate, object, key, value, strict_mode));
   return *result;
 }
@@ -5596,12 +5421,12 @@ RUNTIME_FUNCTION(Runtime_StoreArrayLiteralElement) {
   }
   Handle<JSArray> boilerplate_object(boilerplate);
   ElementsKind elements_kind = object->GetElementsKind();
-  ASSERT(IsFastElementsKind(elements_kind));
+  DCHECK(IsFastElementsKind(elements_kind));
   // Smis should never trigger transitions.
-  ASSERT(!value->IsSmi());
+  DCHECK(!value->IsSmi());
 
   if (value->IsNumber()) {
-    ASSERT(IsFastSmiElementsKind(elements_kind));
+    DCHECK(IsFastSmiElementsKind(elements_kind));
     ElementsKind transitioned_kind = IsFastHoleyElementsKind(elements_kind) ?
         FAST_HOLEY_DOUBLE_ELEMENTS : FAST_DOUBLE_ELEMENTS;
@@ -5611,21 +5436,22 @@ RUNTIME_FUNCTION(Runtime_StoreArrayLiteralElement) {
       JSObject::TransitionElementsKind(boilerplate_object, transitioned_kind);
     }
     JSObject::TransitionElementsKind(object, transitioned_kind);
-    ASSERT(IsFastDoubleElementsKind(object->GetElementsKind()));
+    DCHECK(IsFastDoubleElementsKind(object->GetElementsKind()));
     FixedDoubleArray* double_array = FixedDoubleArray::cast(object->elements());
     HeapNumber* number = HeapNumber::cast(*value);
     double_array->set(store_index, number->Number());
   } else {
-    ASSERT(IsFastSmiElementsKind(elements_kind) ||
-           IsFastDoubleElementsKind(elements_kind));
-    ElementsKind transitioned_kind = IsFastHoleyElementsKind(elements_kind)
-        ? FAST_HOLEY_ELEMENTS
-        : FAST_ELEMENTS;
-    JSObject::TransitionElementsKind(object, transitioned_kind);
-    if (IsMoreGeneralElementsKindTransition(
-            boilerplate_object->GetElementsKind(),
-            transitioned_kind)) {
-      JSObject::TransitionElementsKind(boilerplate_object, transitioned_kind);
+    if (!IsFastObjectElementsKind(elements_kind)) {
+      ElementsKind transitioned_kind = IsFastHoleyElementsKind(elements_kind)
+          ? FAST_HOLEY_ELEMENTS
+          : FAST_ELEMENTS;
+      JSObject::TransitionElementsKind(object, transitioned_kind);
+      ElementsKind boilerplate_elements_kind =
+          boilerplate_object->GetElementsKind();
+      if (IsMoreGeneralElementsKindTransition(boilerplate_elements_kind,
+                                              transitioned_kind)) {
+        JSObject::TransitionElementsKind(boilerplate_object, transitioned_kind);
+      }
     }
     FixedArray* object_array = FixedArray::cast(object->elements());
     object_array->set(store_index, *value);
@@ -5637,8 +5463,8 @@ RUNTIME_FUNCTION(Runtime_StoreArrayLiteralElement) {
 // Check whether debugger and is about to step into the callback that is passed
 // to a built-in function such as Array.forEach.
 RUNTIME_FUNCTION(Runtime_DebugCallbackSupportsStepping) {
-  ASSERT(args.length() == 1);
-  if (!isolate->IsDebuggerActive() || !isolate->debug()->StepInActive()) {
+  DCHECK(args.length() == 1);
+  if (!isolate->debug()->is_active() || !isolate->debug()->StepInActive()) {
     return isolate->heap()->false_value();
   }
   CONVERT_ARG_CHECKED(Object, callback, 0);
@@ -5651,7 +5477,7 @@ RUNTIME_FUNCTION(Runtime_DebugCallbackSupportsStepping) {
 // Set one shot breakpoints for the callback function that is passed to a
 // built-in function such as Array.forEach to enable stepping into the callback.
 RUNTIME_FUNCTION(Runtime_DebugPrepareStepInIfStepping) {
-  ASSERT(args.length() == 1);
+  DCHECK(args.length() == 1);
   Debug* debug = isolate->debug();
   if (!debug->IsStepping()) return isolate->heap()->undefined_value();
 
   CONVERT_ARG_HANDLE_CHECKED(JSFunction, callback, 0);
@@ -5665,55 +5491,54 @@ RUNTIME_FUNCTION(Runtime_DebugPrepareStepInIfStepping) {
 }
 
 
-// The argument is a closure that is kept until the epilogue is called.
-// On exception, the closure is called, which returns the promise if the
-// exception is considered uncaught, or undefined otherwise.
-RUNTIME_FUNCTION(Runtime_DebugPromiseHandlePrologue) {
-  ASSERT(args.length() == 1);
+RUNTIME_FUNCTION(Runtime_DebugPushPromise) {
+  DCHECK(args.length() == 1);
   HandleScope scope(isolate);
-  CONVERT_ARG_HANDLE_CHECKED(JSFunction, promise_getter, 0);
-  isolate->debug()->PromiseHandlePrologue(promise_getter);
+  CONVERT_ARG_HANDLE_CHECKED(JSObject, promise, 0);
+  isolate->debug()->PushPromise(promise);
   return isolate->heap()->undefined_value();
 }
 
 
-RUNTIME_FUNCTION(Runtime_DebugPromiseHandleEpilogue) {
-  ASSERT(args.length() == 0);
+RUNTIME_FUNCTION(Runtime_DebugPopPromise) {
+  DCHECK(args.length() == 0);
   SealHandleScope shs(isolate);
-  isolate->debug()->PromiseHandleEpilogue();
+  isolate->debug()->PopPromise();
   return isolate->heap()->undefined_value();
 }
 
 
-// Set a local property, even if it is READ_ONLY.  If the property does not
-// exist, it will be added with attributes NONE.
-RUNTIME_FUNCTION(Runtime_IgnoreAttributesAndSetProperty) {
+RUNTIME_FUNCTION(Runtime_DebugPromiseEvent) {
+  DCHECK(args.length() == 1);
   HandleScope scope(isolate);
-  RUNTIME_ASSERT(args.length() == 3 || args.length() == 4);
-  CONVERT_ARG_HANDLE_CHECKED(JSObject, object, 0);
-  CONVERT_ARG_HANDLE_CHECKED(Name, name, 1);
-  CONVERT_ARG_HANDLE_CHECKED(Object, value, 2);
-  // Compute attributes.
-  PropertyAttributes attributes = NONE;
-  if (args.length() == 4) {
-    CONVERT_SMI_ARG_CHECKED(unchecked_value, 3);
-    // Only attribute bits should be set.
-    RUNTIME_ASSERT(
-        (unchecked_value & ~(READ_ONLY | DONT_ENUM | DONT_DELETE)) == 0);
-    attributes = static_cast<PropertyAttributes>(unchecked_value);
-  }
-  Handle<Object> result;
-  ASSIGN_RETURN_FAILURE_ON_EXCEPTION(
-      isolate, result,
-      JSObject::SetLocalPropertyIgnoreAttributes(
-          object, name, value, attributes));
-  return *result;
+  CONVERT_ARG_HANDLE_CHECKED(JSObject, data, 0);
+  isolate->debug()->OnPromiseEvent(data);
+  return isolate->heap()->undefined_value();
+}
+
+
+RUNTIME_FUNCTION(Runtime_DebugPromiseRejectEvent) {
+  DCHECK(args.length() == 2);
+  HandleScope scope(isolate);
+  CONVERT_ARG_HANDLE_CHECKED(JSObject, promise, 0);
+  CONVERT_ARG_HANDLE_CHECKED(Object, value, 1);
+  isolate->debug()->OnPromiseReject(promise, value);
+  return isolate->heap()->undefined_value();
+}
+
+
+RUNTIME_FUNCTION(Runtime_DebugAsyncTaskEvent) {
+  DCHECK(args.length() == 1);
+  HandleScope scope(isolate);
+  CONVERT_ARG_HANDLE_CHECKED(JSObject, data, 0);
+  isolate->debug()->OnAsyncTaskEvent(data);
+  return isolate->heap()->undefined_value();
 }
 
 
 RUNTIME_FUNCTION(Runtime_DeleteProperty) {
   HandleScope scope(isolate);
-  ASSERT(args.length() == 3);
+  DCHECK(args.length() == 3);
   CONVERT_ARG_HANDLE_CHECKED(JSReceiver, object, 0);
   CONVERT_ARG_HANDLE_CHECKED(Name, key, 1);
   CONVERT_STRICT_MODE_ARG_CHECKED(strict_mode, 2);
@@ -5727,30 +5552,34 @@ RUNTIME_FUNCTION(Runtime_DeleteProperty) {
 }
 
 
-static Object* HasLocalPropertyImplementation(Isolate* isolate,
-                                              Handle<JSObject> object,
-                                              Handle<Name> key) {
-  if (JSReceiver::HasLocalProperty(object, key)) {
-    return isolate->heap()->true_value();
-  }
+static Object* HasOwnPropertyImplementation(Isolate* isolate,
+                                            Handle<JSObject> object,
+                                            Handle<Name> key) {
+  Maybe<bool> maybe = JSReceiver::HasOwnProperty(object, key);
+  if (!maybe.has_value) return isolate->heap()->exception();
+  if (maybe.value) return isolate->heap()->true_value();
   // Handle hidden prototypes.  If there's a hidden prototype above this thing
   // then we have to check it for properties, because they are supposed to
   // look like they are on this object.
-  Handle<Object> proto(object->GetPrototype(), isolate);
-  if (proto->IsJSObject() &&
-      Handle<JSObject>::cast(proto)->map()->is_hidden_prototype()) {
-    return HasLocalPropertyImplementation(isolate,
-                                          Handle<JSObject>::cast(proto),
-                                          key);
+  PrototypeIterator iter(isolate, object);
+  if (!iter.IsAtEnd() &&
+      Handle<JSObject>::cast(PrototypeIterator::GetCurrent(iter))
+          ->map()
+          ->is_hidden_prototype()) {
+    // TODO(verwaest): The recursion is not necessary for keys that are array
+    // indices. Removing this.
+    return HasOwnPropertyImplementation(
+        isolate, Handle<JSObject>::cast(PrototypeIterator::GetCurrent(iter)),
+        key);
   }
   RETURN_FAILURE_IF_SCHEDULED_EXCEPTION(isolate);
   return isolate->heap()->false_value();
 }
 
 
-RUNTIME_FUNCTION(Runtime_HasLocalProperty) {
+RUNTIME_FUNCTION(Runtime_HasOwnProperty) {
   HandleScope scope(isolate);
-  ASSERT(args.length() == 2);
+  DCHECK(args.length() == 2);
   CONVERT_ARG_HANDLE_CHECKED(Object, object, 0)
   CONVERT_ARG_HANDLE_CHECKED(Name, key, 1);
 
@@ -5763,11 +5592,11 @@ RUNTIME_FUNCTION(Runtime_HasLocalProperty) {
     // Fast case: either the key is a real named property or it is not
     // an array index and there are no interceptors or hidden
    // prototypes.
-    if (JSObject::HasRealNamedProperty(js_obj, key)) {
-      ASSERT(!isolate->has_scheduled_exception());
+    Maybe<bool> maybe = JSObject::HasRealNamedProperty(js_obj, key);
+    if (!maybe.has_value) return isolate->heap()->exception();
+    DCHECK(!isolate->has_pending_exception());
+    if (maybe.value) {
       return isolate->heap()->true_value();
-    } else {
-      RETURN_FAILURE_IF_SCHEDULED_EXCEPTION(isolate);
     }
     Map* map = js_obj->map();
     if (!key_is_array_index &&
@@ -5776,9 +5605,9 @@
       return isolate->heap()->false_value();
     }
     // Slow case.
-    return HasLocalPropertyImplementation(isolate,
-                                          Handle<JSObject>(js_obj),
-                                          Handle<Name>(key));
+    return HasOwnPropertyImplementation(isolate,
                                        Handle<JSObject>(js_obj),
                                        Handle<Name>(key));
  } else if (object->IsString() && key_is_array_index) {
    // Well, there is one exception:  Handle [] on strings.
     Handle<String> string = Handle<String>::cast(object);
@@ -5792,49 +5621,46 @@ RUNTIME_FUNCTION(Runtime_HasLocalProperty) {
 
 RUNTIME_FUNCTION(Runtime_HasProperty) {
   HandleScope scope(isolate);
-  ASSERT(args.length() == 2);
+  DCHECK(args.length() == 2);
   CONVERT_ARG_HANDLE_CHECKED(JSReceiver, receiver, 0);
   CONVERT_ARG_HANDLE_CHECKED(Name, key, 1);
 
-  bool result = JSReceiver::HasProperty(receiver, key);
-  RETURN_FAILURE_IF_SCHEDULED_EXCEPTION(isolate);
-  if (isolate->has_pending_exception()) return isolate->heap()->exception();
-  return isolate->heap()->ToBoolean(result);
+  Maybe<bool> maybe = JSReceiver::HasProperty(receiver, key);
+  if (!maybe.has_value) return isolate->heap()->exception();
+  return isolate->heap()->ToBoolean(maybe.value);
 }
 
 
 RUNTIME_FUNCTION(Runtime_HasElement) {
   HandleScope scope(isolate);
-  ASSERT(args.length() == 2);
+  DCHECK(args.length() == 2);
   CONVERT_ARG_HANDLE_CHECKED(JSReceiver, receiver, 0);
   CONVERT_SMI_ARG_CHECKED(index, 1);
 
-  bool result = JSReceiver::HasElement(receiver, index);
-  RETURN_FAILURE_IF_SCHEDULED_EXCEPTION(isolate);
-  return isolate->heap()->ToBoolean(result);
+  Maybe<bool> maybe = JSReceiver::HasElement(receiver, index);
+  if (!maybe.has_value) return isolate->heap()->exception();
+  return isolate->heap()->ToBoolean(maybe.value);
 }
 
 
 RUNTIME_FUNCTION(Runtime_IsPropertyEnumerable) {
   HandleScope scope(isolate);
-  ASSERT(args.length() == 2);
+  DCHECK(args.length() == 2);
 
   CONVERT_ARG_HANDLE_CHECKED(JSObject, object, 0);
   CONVERT_ARG_HANDLE_CHECKED(Name, key, 1);
 
-  PropertyAttributes att = JSReceiver::GetLocalPropertyAttribute(object, key);
-  if (att == ABSENT || (att & DONT_ENUM) != 0) {
-    RETURN_FAILURE_IF_SCHEDULED_EXCEPTION(isolate);
-    return isolate->heap()->false_value();
-  }
-  ASSERT(!isolate->has_scheduled_exception());
-  return isolate->heap()->true_value();
+  Maybe<PropertyAttributes> maybe =
+      JSReceiver::GetOwnPropertyAttributes(object, key);
+  if (!maybe.has_value) return isolate->heap()->exception();
+  if (maybe.value == ABSENT) maybe.value = DONT_ENUM;
+  return isolate->heap()->ToBoolean((maybe.value & DONT_ENUM) == 0);
 }
 
 
 RUNTIME_FUNCTION(Runtime_GetPropertyNames) {
   HandleScope scope(isolate);
-  ASSERT(args.length() == 1);
+  DCHECK(args.length() == 1);
   CONVERT_ARG_HANDLE_CHECKED(JSReceiver, object, 0);
 
   Handle<JSArray> result;
@@ -5854,7 +5680,7 @@ RUNTIME_FUNCTION(Runtime_GetPropertyNames) {
 // the check for deletions during a for-in.
 RUNTIME_FUNCTION(Runtime_GetPropertyNamesFast) {
   SealHandleScope shs(isolate);
-  ASSERT(args.length() == 1);
+  DCHECK(args.length() == 1);
 
   CONVERT_ARG_CHECKED(JSReceiver, raw_object, 0);
 
@@ -5874,27 +5700,25 @@ RUNTIME_FUNCTION(Runtime_GetPropertyNamesFast) {
 }
 
 
-// Find the length of the prototype chain that is to to handled as one. If a
+// Find the length of the prototype chain that is to be handled as one. If a
 // prototype object is hidden it is to be viewed as part of the the object it
 // is prototype for.
-static int LocalPrototypeChainLength(JSObject* obj) {
+static int OwnPrototypeChainLength(JSObject* obj) {
   int count = 1;
-  Object* proto = obj->GetPrototype();
-  while (proto->IsJSObject() &&
-         JSObject::cast(proto)->map()->is_hidden_prototype()) {
+  for (PrototypeIterator iter(obj->GetIsolate(), obj);
+       !iter.IsAtEnd(PrototypeIterator::END_AT_NON_HIDDEN); iter.Advance()) {
     count++;
-    proto = JSObject::cast(proto)->GetPrototype();
   }
   return count;
 }
 
 
-// Return the names of the local named properties.
+// Return the names of the own named properties.
 // args[0]: object
 // args[1]: PropertyAttributes as int
-RUNTIME_FUNCTION(Runtime_GetLocalPropertyNames) {
+RUNTIME_FUNCTION(Runtime_GetOwnPropertyNames) {
   HandleScope scope(isolate);
-  ASSERT(args.length() == 2);
+  DCHECK(args.length() == 2);
   if (!args[0]->IsJSObject()) {
     return isolate->heap()->undefined_value();
   }
@@ -5913,31 +5737,36 @@
       RETURN_FAILURE_IF_SCHEDULED_EXCEPTION(isolate);
       return *isolate->factory()->NewJSArray(0);
     }
-    obj = Handle<JSObject>(JSObject::cast(obj->GetPrototype()));
+    PrototypeIterator iter(isolate, obj);
+    obj = Handle<JSObject>::cast(PrototypeIterator::GetCurrent(iter));
   }
 
   // Find the number of objects making up this.
-  int length = LocalPrototypeChainLength(*obj);
+  int length = OwnPrototypeChainLength(*obj);
 
-  // Find the number of local properties for each of the objects.
-  ScopedVector<int> local_property_count(length);
+  // Find the number of own properties for each of the objects.
+  ScopedVector<int> own_property_count(length);
   int total_property_count = 0;
-  Handle<JSObject> jsproto = obj;
-  for (int i = 0; i < length; i++) {
-    // Only collect names if access is permitted.
-    if (jsproto->IsAccessCheckNeeded() &&
-        !isolate->MayNamedAccess(
-            jsproto, isolate->factory()->undefined_value(), v8::ACCESS_KEYS)) {
-      isolate->ReportFailedAccessCheck(jsproto, v8::ACCESS_KEYS);
-      RETURN_FAILURE_IF_SCHEDULED_EXCEPTION(isolate);
-      return *isolate->factory()->NewJSArray(0);
-    }
-    int n;
-    n = jsproto->NumberOfLocalProperties(filter);
-    local_property_count[i] = n;
-    total_property_count += n;
-    if (i < length - 1) {
-      jsproto = Handle<JSObject>(JSObject::cast(jsproto->GetPrototype()));
+  {
+    PrototypeIterator iter(isolate, obj, PrototypeIterator::START_AT_RECEIVER);
+    for (int i = 0; i < length; i++) {
+      DCHECK(!iter.IsAtEnd());
+      Handle<JSObject> jsproto =
+          Handle<JSObject>::cast(PrototypeIterator::GetCurrent(iter));
+      // Only collect names if access is permitted.
+      if (jsproto->IsAccessCheckNeeded() &&
+          !isolate->MayNamedAccess(jsproto,
+                                   isolate->factory()->undefined_value(),
+                                   v8::ACCESS_KEYS)) {
+        isolate->ReportFailedAccessCheck(jsproto, v8::ACCESS_KEYS);
+        RETURN_FAILURE_IF_SCHEDULED_EXCEPTION(isolate);
+        return *isolate->factory()->NewJSArray(0);
+      }
+      int n;
+      n = jsproto->NumberOfOwnProperties(filter);
+      own_property_count[i] = n;
+      total_property_count += n;
+      iter.Advance();
     }
   }
 
@@ -5946,39 +5775,41 @@
       isolate->factory()->NewFixedArray(total_property_count);
 
   // Get the property names.
-  jsproto = obj;
   int next_copy_index = 0;
   int hidden_strings = 0;
-  for (int i = 0; i < length; i++) {
-    jsproto->GetLocalPropertyNames(*names, next_copy_index, filter);
-    if (i > 0) {
-      // Names from hidden prototypes may already have been added
-      // for inherited function template instances. Count the duplicates
-      // and stub them out; the final copy pass at the end ignores holes.
- for (int j = next_copy_index; - j < next_copy_index + local_property_count[i]; - j++) { - Object* name_from_hidden_proto = names->get(j); - for (int k = 0; k < next_copy_index; k++) { - if (names->get(k) != isolate->heap()->hidden_string()) { - Object* name = names->get(k); - if (name_from_hidden_proto == name) { - names->set(j, isolate->heap()->hidden_string()); - hidden_strings++; - break; + { + PrototypeIterator iter(isolate, obj, PrototypeIterator::START_AT_RECEIVER); + for (int i = 0; i < length; i++) { + DCHECK(!iter.IsAtEnd()); + Handle<JSObject> jsproto = + Handle<JSObject>::cast(PrototypeIterator::GetCurrent(iter)); + jsproto->GetOwnPropertyNames(*names, next_copy_index, filter); + if (i > 0) { + // Names from hidden prototypes may already have been added + // for inherited function template instances. Count the duplicates + // and stub them out; the final copy pass at the end ignores holes. + for (int j = next_copy_index; + j < next_copy_index + own_property_count[i]; j++) { + Object* name_from_hidden_proto = names->get(j); + for (int k = 0; k < next_copy_index; k++) { + if (names->get(k) != isolate->heap()->hidden_string()) { + Object* name = names->get(k); + if (name_from_hidden_proto == name) { + names->set(j, isolate->heap()->hidden_string()); + hidden_strings++; + break; + } } } } } - } - next_copy_index += local_property_count[i]; + next_copy_index += own_property_count[i]; - // Hidden properties only show up if the filter does not skip strings. - if ((filter & STRING) == 0 && JSObject::HasHiddenProperties(jsproto)) { - hidden_strings++; - } - if (i < length - 1) { - jsproto = Handle<JSObject>(JSObject::cast(jsproto->GetPrototype())); + // Hidden properties only show up if the filter does not skip strings. + if ((filter & STRING) == 0 && JSObject::HasHiddenProperties(jsproto)) { + hidden_strings++; + } + iter.Advance(); } } @@ -5997,26 +5828,26 @@ RUNTIME_FUNCTION(Runtime_GetLocalPropertyNames) { } names->set(dest_pos++, name); } - ASSERT_EQ(0, hidden_strings); + DCHECK_EQ(0, hidden_strings); } return *isolate->factory()->NewJSArrayWithElements(names); } -// Return the names of the local indexed properties. +// Return the names of the own indexed properties. 
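
The duplicate-stubbing scheme above is easy to lose in the diff noise, so here is a rough standalone model of it, with std::string standing in for Name objects and a plain string standing in for the hidden-string sentinel: duplicates contributed by hidden prototypes are overwritten in place and counted, and a final pass compacts the array while skipping the sentinels.

#include <string>
#include <vector>
#include <iostream>

int main() {
  const std::string kSentinel = "<hidden>";
  // First two entries are the receiver's own names; the rest came from a
  // hidden prototype and may duplicate them.
  std::vector<std::string> names = {"a", "b", "a", "c"};
  const size_t next_copy_index = 2;
  int hidden_strings = 0;
  for (size_t j = next_copy_index; j < names.size(); ++j) {
    for (size_t k = 0; k < next_copy_index; ++k) {
      if (names[j] == names[k]) {
        names[j] = kSentinel;  // stub out the duplicate
        ++hidden_strings;
        break;
      }
    }
  }
  std::vector<std::string> compacted;
  for (const auto& n : names) {
    if (n != kSentinel) compacted.push_back(n);  // final copy pass
  }
  std::cout << compacted.size() << " names, " << hidden_strings
            << " stubbed\n";  // prints: 3 names, 1 stubbed
}
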
// args[0]: object -RUNTIME_FUNCTION(Runtime_GetLocalElementNames) { +RUNTIME_FUNCTION(Runtime_GetOwnElementNames) { HandleScope scope(isolate); - ASSERT(args.length() == 1); + DCHECK(args.length() == 1); if (!args[0]->IsJSObject()) { return isolate->heap()->undefined_value(); } CONVERT_ARG_HANDLE_CHECKED(JSObject, obj, 0); - int n = obj->NumberOfLocalElements(static_cast<PropertyAttributes>(NONE)); + int n = obj->NumberOfOwnElements(static_cast<PropertyAttributes>(NONE)); Handle<FixedArray> names = isolate->factory()->NewFixedArray(n); - obj->GetLocalElementKeys(*names, static_cast<PropertyAttributes>(NONE)); + obj->GetOwnElementKeys(*names, static_cast<PropertyAttributes>(NONE)); return *isolate->factory()->NewJSArrayWithElements(names); } @@ -6025,7 +5856,7 @@ RUNTIME_FUNCTION(Runtime_GetLocalElementNames) { // args[0]: object RUNTIME_FUNCTION(Runtime_GetInterceptorInfo) { HandleScope scope(isolate); - ASSERT(args.length() == 1); + DCHECK(args.length() == 1); if (!args[0]->IsJSObject()) { return Smi::FromInt(0); } @@ -6043,7 +5874,7 @@ RUNTIME_FUNCTION(Runtime_GetInterceptorInfo) { // args[0]: object RUNTIME_FUNCTION(Runtime_GetNamedInterceptorPropertyNames) { HandleScope scope(isolate); - ASSERT(args.length() == 1); + DCHECK(args.length() == 1); CONVERT_ARG_HANDLE_CHECKED(JSObject, obj, 0); if (obj->HasNamedInterceptor()) { @@ -6060,7 +5891,7 @@ RUNTIME_FUNCTION(Runtime_GetNamedInterceptorPropertyNames) { // args[0]: object RUNTIME_FUNCTION(Runtime_GetIndexedInterceptorElementNames) { HandleScope scope(isolate); - ASSERT(args.length() == 1); + DCHECK(args.length() == 1); CONVERT_ARG_HANDLE_CHECKED(JSObject, obj, 0); if (obj->HasIndexedInterceptor()) { @@ -6073,9 +5904,9 @@ RUNTIME_FUNCTION(Runtime_GetIndexedInterceptorElementNames) { } -RUNTIME_FUNCTION(Runtime_LocalKeys) { +RUNTIME_FUNCTION(Runtime_OwnKeys) { HandleScope scope(isolate); - ASSERT(args.length() == 1); + DCHECK(args.length() == 1); CONVERT_ARG_CHECKED(JSObject, raw_object, 0); Handle<JSObject> object(raw_object); @@ -6089,16 +5920,16 @@ RUNTIME_FUNCTION(Runtime_LocalKeys) { return *isolate->factory()->NewJSArray(0); } - Handle<Object> proto(object->GetPrototype(), isolate); + PrototypeIterator iter(isolate, object); // If proxy is detached we simply return an empty array. - if (proto->IsNull()) return *isolate->factory()->NewJSArray(0); - object = Handle<JSObject>::cast(proto); + if (iter.IsAtEnd()) return *isolate->factory()->NewJSArray(0); + object = Handle<JSObject>::cast(PrototypeIterator::GetCurrent(iter)); } Handle<FixedArray> contents; ASSIGN_RETURN_FAILURE_ON_EXCEPTION( isolate, contents, - JSReceiver::GetKeys(object, JSReceiver::LOCAL_ONLY)); + JSReceiver::GetKeys(object, JSReceiver::OWN_ONLY)); // Some fast paths through GetKeysInFixedArrayFor reuse a cached // property array and since the result is mutable we have to create @@ -6110,7 +5941,7 @@ RUNTIME_FUNCTION(Runtime_LocalKeys) { if (entry->IsString()) { copy->set(i, entry); } else { - ASSERT(entry->IsNumber()); + DCHECK(entry->IsNumber()); HandleScope scope(isolate); Handle<Object> entry_handle(entry, isolate); Handle<Object> entry_str = @@ -6124,7 +5955,7 @@ RUNTIME_FUNCTION(Runtime_LocalKeys) { RUNTIME_FUNCTION(Runtime_GetArgumentsProperty) { SealHandleScope shs(isolate); - ASSERT(args.length() == 1); + DCHECK(args.length() == 1); CONVERT_ARG_HANDLE_CHECKED(Object, raw_key, 0); // Compute the frame holding the arguments. 
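
A small standalone sketch, with invented types, of the copy loop in Runtime_OwnKeys above: because some fast paths hand back a cached property array, the runtime clones it before returning, converting any numeric key to a string along the way.

#include <string>
#include <variant>
#include <vector>
#include <iostream>

int main() {
  using Key = std::variant<int, std::string>;  // stand-in for Number|String
  std::vector<Key> cached = {std::string("a"), 0, 1};
  std::vector<std::string> copy;  // mutable result, detached from the cache
  copy.reserve(cached.size());
  for (const Key& k : cached) {
    if (const int* index = std::get_if<int>(&k)) {
      copy.push_back(std::to_string(*index));  // number key -> string
    } else {
      copy.push_back(std::get<std::string>(k));
    }
  }
  for (const std::string& s : copy) std::cout << s << " ";  // a 0 1
  std::cout << "\n";
}
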
@@ -6197,10 +6028,10 @@ RUNTIME_FUNCTION(Runtime_GetArgumentsProperty) { RUNTIME_FUNCTION(Runtime_ToFastProperties) { HandleScope scope(isolate); - ASSERT(args.length() == 1); + DCHECK(args.length() == 1); CONVERT_ARG_HANDLE_CHECKED(Object, object, 0); if (object->IsJSObject() && !object->IsGlobalObject()) { - JSObject::TransformToFastProperties(Handle<JSObject>::cast(object), 0); + JSObject::MigrateSlowToFast(Handle<JSObject>::cast(object), 0); } return *object; } @@ -6208,7 +6039,7 @@ RUNTIME_FUNCTION(Runtime_ToFastProperties) { RUNTIME_FUNCTION(Runtime_ToBool) { SealHandleScope shs(isolate); - ASSERT(args.length() == 1); + DCHECK(args.length() == 1); CONVERT_ARG_CHECKED(Object, object, 0); return isolate->heap()->ToBoolean(object->BooleanValue()); @@ -6219,7 +6050,7 @@ RUNTIME_FUNCTION(Runtime_ToBool) { // Possible optimizations: put the type string into the oddballs. RUNTIME_FUNCTION(Runtime_Typeof) { SealHandleScope shs(isolate); - ASSERT(args.length() == 1); + DCHECK(args.length() == 1); CONVERT_ARG_CHECKED(Object, obj, 0); if (obj->IsNumber()) return isolate->heap()->number_string(); HeapObject* heap_obj = HeapObject::cast(obj); @@ -6240,11 +6071,9 @@ RUNTIME_FUNCTION(Runtime_Typeof) { return isolate->heap()->boolean_string(); } if (heap_obj->IsNull()) { - return FLAG_harmony_typeof - ? isolate->heap()->null_string() - : isolate->heap()->object_string(); + return isolate->heap()->object_string(); } - ASSERT(heap_obj->IsUndefined()); + DCHECK(heap_obj->IsUndefined()); return isolate->heap()->undefined_string(); case SYMBOL_TYPE: return isolate->heap()->symbol_string(); @@ -6259,6 +6088,35 @@ RUNTIME_FUNCTION(Runtime_Typeof) { } +RUNTIME_FUNCTION(Runtime_Booleanize) { + SealHandleScope shs(isolate); + DCHECK(args.length() == 2); + CONVERT_ARG_CHECKED(Object, value_raw, 0); + CONVERT_SMI_ARG_CHECKED(token_raw, 1); + intptr_t value = reinterpret_cast<intptr_t>(value_raw); + Token::Value token = static_cast<Token::Value>(token_raw); + switch (token) { + case Token::EQ: + case Token::EQ_STRICT: + return isolate->heap()->ToBoolean(value == 0); + case Token::NE: + case Token::NE_STRICT: + return isolate->heap()->ToBoolean(value != 0); + case Token::LT: + return isolate->heap()->ToBoolean(value < 0); + case Token::GT: + return isolate->heap()->ToBoolean(value > 0); + case Token::LTE: + return isolate->heap()->ToBoolean(value <= 0); + case Token::GTE: + return isolate->heap()->ToBoolean(value >= 0); + default: + // This should only happen during natives fuzzing. + return isolate->heap()->undefined_value(); + } +} + + static bool AreDigits(const uint8_t*s, int from, int to) { for (int i = from; i < to; i++) { if (s[i] < '0' || s[i] > '9') return false; @@ -6269,8 +6127,8 @@ static bool AreDigits(const uint8_t*s, int from, int to) { static int ParseDecimalInteger(const uint8_t*s, int from, int to) { - ASSERT(to - from < 10); // Overflow is not possible. - ASSERT(from < to); + DCHECK(to - from < 10); // Overflow is not possible. + DCHECK(from < to); int d = s[from] - '0'; for (int i = from + 1; i < to; i++) { @@ -6283,7 +6141,7 @@ static int ParseDecimalInteger(const uint8_t*s, int from, int to) { RUNTIME_FUNCTION(Runtime_StringToNumber) { HandleScope handle_scope(isolate); - ASSERT(args.length() == 1); + DCHECK(args.length() == 1); CONVERT_ARG_HANDLE_CHECKED(String, subject, 0); subject = String::Flatten(subject); @@ -6322,7 +6180,7 @@ RUNTIME_FUNCTION(Runtime_StringToNumber) { uint32_t hash = StringHasher::MakeArrayIndexHash(d, len); #ifdef DEBUG subject->Hash(); // Force hash calculation. 
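
The new Runtime_Booleanize above folds a raw three-way comparison result (negative, zero, positive) into a boolean according to the comparison token. A standalone sketch of the same dispatch, with an invented Token enum in place of V8's:

#include <iostream>

enum class Token { EQ, NE, LT, GT, LTE, GTE };

bool Booleanize(long value, Token token) {
  switch (token) {
    case Token::EQ:  return value == 0;
    case Token::NE:  return value != 0;
    case Token::LT:  return value < 0;
    case Token::GT:  return value > 0;
    case Token::LTE: return value <= 0;
    case Token::GTE: return value >= 0;
  }
  return false;  // unreachable for well-formed tokens
}

int main() {
  std::cout << std::boolalpha
            << Booleanize(-1, Token::LT) << " "   // true:  -1 < 0
            << Booleanize(0, Token::GTE) << "\n"; // true:   0 >= 0
}
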
- ASSERT_EQ(static_cast<int>(subject->hash_field()), + DCHECK_EQ(static_cast<int>(subject->hash_field()), static_cast<int>(hash)); #endif subject->set_hash_field(hash); @@ -6346,7 +6204,7 @@ RUNTIME_FUNCTION(Runtime_StringToNumber) { RUNTIME_FUNCTION(Runtime_NewString) { HandleScope scope(isolate); - ASSERT(args.length() == 2); + DCHECK(args.length() == 2); CONVERT_SMI_ARG_CHECKED(length, 0); CONVERT_BOOLEAN_ARG_CHECKED(is_one_byte, 1); if (length == 0) return isolate->heap()->empty_string(); @@ -6364,7 +6222,7 @@ RUNTIME_FUNCTION(Runtime_NewString) { RUNTIME_FUNCTION(Runtime_TruncateString) { HandleScope scope(isolate); - ASSERT(args.length() == 2); + DCHECK(args.length() == 2); CONVERT_ARG_HANDLE_CHECKED(SeqString, string, 0); CONVERT_SMI_ARG_CHECKED(new_length, 1); RUNTIME_ASSERT(new_length >= 0); @@ -6374,10 +6232,10 @@ RUNTIME_FUNCTION(Runtime_TruncateString) { RUNTIME_FUNCTION(Runtime_URIEscape) { HandleScope scope(isolate); - ASSERT(args.length() == 1); + DCHECK(args.length() == 1); CONVERT_ARG_HANDLE_CHECKED(String, source, 0); Handle<String> string = String::Flatten(source); - ASSERT(string->IsFlat()); + DCHECK(string->IsFlat()); Handle<String> result; ASSIGN_RETURN_FAILURE_ON_EXCEPTION( isolate, result, @@ -6390,10 +6248,10 @@ RUNTIME_FUNCTION(Runtime_URIEscape) { RUNTIME_FUNCTION(Runtime_URIUnescape) { HandleScope scope(isolate); - ASSERT(args.length() == 1); + DCHECK(args.length() == 1); CONVERT_ARG_HANDLE_CHECKED(String, source, 0); Handle<String> string = String::Flatten(source); - ASSERT(string->IsFlat()); + DCHECK(string->IsFlat()); Handle<String> result; ASSIGN_RETURN_FAILURE_ON_EXCEPTION( isolate, result, @@ -6407,7 +6265,7 @@ RUNTIME_FUNCTION(Runtime_URIUnescape) { RUNTIME_FUNCTION(Runtime_QuoteJSONString) { HandleScope scope(isolate); CONVERT_ARG_HANDLE_CHECKED(String, string, 0); - ASSERT(args.length() == 1); + DCHECK(args.length() == 1); Handle<Object> result; ASSIGN_RETURN_FAILURE_ON_EXCEPTION( isolate, result, BasicJsonStringifier::StringifyString(isolate, string)); @@ -6417,7 +6275,7 @@ RUNTIME_FUNCTION(Runtime_QuoteJSONString) { RUNTIME_FUNCTION(Runtime_BasicJSONStringify) { HandleScope scope(isolate); - ASSERT(args.length() == 1); + DCHECK(args.length() == 1); CONVERT_ARG_HANDLE_CHECKED(Object, object, 0); BasicJsonStringifier stringifier(isolate); Handle<Object> result; @@ -6429,7 +6287,7 @@ RUNTIME_FUNCTION(Runtime_BasicJSONStringify) { RUNTIME_FUNCTION(Runtime_StringParseInt) { HandleScope handle_scope(isolate); - ASSERT(args.length() == 2); + DCHECK(args.length() == 2); CONVERT_ARG_HANDLE_CHECKED(String, subject, 0); CONVERT_NUMBER_CHECKED(int, radix, Int32, args[1]); RUNTIME_ASSERT(radix == 0 || (2 <= radix && radix <= 36)); @@ -6456,12 +6314,12 @@ RUNTIME_FUNCTION(Runtime_StringParseInt) { RUNTIME_FUNCTION(Runtime_StringParseFloat) { HandleScope shs(isolate); - ASSERT(args.length() == 1); + DCHECK(args.length() == 1); CONVERT_ARG_HANDLE_CHECKED(String, subject, 0); subject = String::Flatten(subject); - double value = StringToDouble( - isolate->unicode_cache(), *subject, ALLOW_TRAILING_JUNK, OS::nan_value()); + double value = StringToDouble(isolate->unicode_cache(), *subject, + ALLOW_TRAILING_JUNK, base::OS::nan_value()); return *isolate->factory()->NewNumber(value); } @@ -6515,7 +6373,7 @@ MUST_USE_RESULT static Object* ConvertCaseHelper( } else if (char_length == 1 && (ignore_overflow || !ToUpperOverflows(current))) { // Common case: converting the letter resulted in one character. 
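
AsciiRangeMask above is a word-at-a-time bit trick: it tests m < byte < n for every byte of a machine word simultaneously, setting each byte's high bit when the byte is in range. A standalone, checkable version of the same trick (assuming a 64-bit uintptr_t for the printed value, and ASCII input, which the real code also relies on):

#include <cstdint>
#include <cstring>
#include <iostream>

static const uintptr_t kOneInEveryByte = ~static_cast<uintptr_t>(0) / 0xFF;
static const uintptr_t kMsbInEveryByte = kOneInEveryByte << 7;

// High bit of each result byte is set iff m < byte < n. Bytes must be
// ASCII (< 0x80) so the per-byte arithmetic can neither carry nor borrow.
uintptr_t AsciiRangeMask(uintptr_t w, char m, char n) {
  uintptr_t lt_n = kOneInEveryByte * (0x7F + n) - w;  // high bit: byte < n
  uintptr_t gt_m = w + kOneInEveryByte * (0x7F - m);  // high bit: byte > m
  return lt_n & gt_m & kMsbInEveryByte;
}

int main() {
  char text[8] = {'H', 'e', 'l', 'l', 'o', '!', 'O', 'K'};
  uintptr_t w;
  std::memcpy(&w, text, sizeof(w));
  // Lowercase detection uses the bounds 'a' - 1 and 'z' + 1, as above.
  uintptr_t mask = AsciiRangeMask(w, 'a' - 1, 'z' + 1);
  std::cout << std::hex << mask << "\n";  // bits set only at 'e','l','l','o'
}
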
- ASSERT(static_cast<uc32>(chars[0]) != current); + DCHECK(static_cast<uc32>(chars[0]) != current); result->Set(i, chars[0]); has_changed_character = true; i++; @@ -6593,7 +6451,7 @@ static const uintptr_t kAsciiMask = kOneInEveryByte << 7; static inline uintptr_t AsciiRangeMask(uintptr_t w, char m, char n) { // Use strict inequalities since in edge cases the function could be // further simplified. - ASSERT(0 < m && m < n); + DCHECK(0 < m && m < n); // Has high bit set in every w byte less than n. uintptr_t tmp1 = kOneInEveryByte * (0x7F + n) - w; // Has high bit set in every w byte greater than m. @@ -6613,11 +6471,11 @@ static bool CheckFastAsciiConvert(char* dst, if (dst[i] == src[i]) continue; expected_changed = true; if (is_to_lower) { - ASSERT('A' <= src[i] && src[i] <= 'Z'); - ASSERT(dst[i] == src[i] + ('a' - 'A')); + DCHECK('A' <= src[i] && src[i] <= 'Z'); + DCHECK(dst[i] == src[i] + ('a' - 'A')); } else { - ASSERT('a' <= src[i] && src[i] <= 'z'); - ASSERT(dst[i] == src[i] - ('a' - 'A')); + DCHECK('a' <= src[i] && src[i] <= 'z'); + DCHECK(dst[i] == src[i] - ('a' - 'A')); } } return (expected_changed == changed); @@ -6637,7 +6495,7 @@ static bool FastAsciiConvert(char* dst, DisallowHeapAllocation no_gc; // We rely on the distance between upper and lower case letters // being a known power of 2. - ASSERT('a' - 'A' == (1 << 5)); + DCHECK('a' - 'A' == (1 << 5)); // Boundaries for the range of input characters than require conversion. static const char lo = Converter::kIsToLower ? 'A' - 1 : 'a' - 1; static const char hi = Converter::kIsToLower ? 'Z' + 1 : 'z' + 1; @@ -6689,7 +6547,7 @@ static bool FastAsciiConvert(char* dst, return false; } - ASSERT(CheckFastAsciiConvert( + DCHECK(CheckFastAsciiConvert( saved_dst, saved_src, length, changed, Converter::kIsToLower)); *changed_out = changed; @@ -6721,7 +6579,7 @@ MUST_USE_RESULT static Object* ConvertCase( isolate->factory()->NewRawOneByteString(length).ToHandleChecked(); DisallowHeapAllocation no_gc; String::FlatContent flat_content = s->GetFlatContent(); - ASSERT(flat_content.IsFlat()); + DCHECK(flat_content.IsFlat()); bool has_changed_character = false; bool is_ascii = FastAsciiConvert<Converter>( reinterpret_cast<char*>(result->GetChars()), @@ -6742,7 +6600,7 @@ MUST_USE_RESULT static Object* ConvertCase( Object* answer = ConvertCaseHelper(isolate, *s, *result, length, mapping); if (answer->IsException() || answer->IsString()) return answer; - ASSERT(answer->IsSmi()); + DCHECK(answer->IsSmi()); length = Smi::cast(answer)->value(); if (s->IsOneByteRepresentation() && length > 0) { ASSIGN_RETURN_FAILURE_ON_EXCEPTION( @@ -6758,7 +6616,7 @@ MUST_USE_RESULT static Object* ConvertCase( RUNTIME_FUNCTION(Runtime_StringToLowerCase) { HandleScope scope(isolate); - ASSERT(args.length() == 1); + DCHECK(args.length() == 1); CONVERT_ARG_HANDLE_CHECKED(String, s, 0); return ConvertCase( s, isolate, isolate->runtime_state()->to_lower_mapping()); @@ -6767,7 +6625,7 @@ RUNTIME_FUNCTION(Runtime_StringToLowerCase) { RUNTIME_FUNCTION(Runtime_StringToUpperCase) { HandleScope scope(isolate); - ASSERT(args.length() == 1); + DCHECK(args.length() == 1); CONVERT_ARG_HANDLE_CHECKED(String, s, 0); return ConvertCase( s, isolate, isolate->runtime_state()->to_upper_mapping()); @@ -6776,7 +6634,7 @@ RUNTIME_FUNCTION(Runtime_StringToUpperCase) { RUNTIME_FUNCTION(Runtime_StringTrim) { HandleScope scope(isolate); - ASSERT(args.length() == 3); + DCHECK(args.length() == 3); CONVERT_ARG_HANDLE_CHECKED(String, string, 0); CONVERT_BOOLEAN_ARG_CHECKED(trimLeft, 1); @@ 
-6809,7 +6667,7 @@ RUNTIME_FUNCTION(Runtime_StringTrim) { RUNTIME_FUNCTION(Runtime_StringSplit) { HandleScope handle_scope(isolate); - ASSERT(args.length() == 3); + DCHECK(args.length() == 3); CONVERT_ARG_HANDLE_CHECKED(String, subject, 0); CONVERT_ARG_HANDLE_CHECKED(String, pattern, 1); CONVERT_NUMBER_CHECKED(uint32_t, limit, Uint32, args[2]); @@ -6866,7 +6724,7 @@ RUNTIME_FUNCTION(Runtime_StringSplit) { JSObject::EnsureCanContainHeapObjectElements(result); result->set_length(Smi::FromInt(part_count)); - ASSERT(result->HasFastObjectElements()); + DCHECK(result->HasFastObjectElements()); if (part_count == 1 && indices.at(0) == subject_length) { FixedArray::cast(result->elements())->set(0, *subject); @@ -6917,13 +6775,13 @@ static int CopyCachedAsciiCharsToArray(Heap* heap, elements->set(i, value, mode); } if (i < length) { - ASSERT(Smi::FromInt(0) == 0); + DCHECK(Smi::FromInt(0) == 0); memset(elements->data_start() + i, 0, kPointerSize * (length - i)); } #ifdef DEBUG for (int j = 0; j < length; ++j) { Object* element = elements->get(j); - ASSERT(element == Smi::FromInt(0) || + DCHECK(element == Smi::FromInt(0) || (element->IsString() && String::cast(element)->LooksValid())); } #endif @@ -6935,7 +6793,7 @@ static int CopyCachedAsciiCharsToArray(Heap* heap, // For example, "foo" => ["f", "o", "o"]. RUNTIME_FUNCTION(Runtime_StringToArray) { HandleScope scope(isolate); - ASSERT(args.length() == 2); + DCHECK(args.length() == 2); CONVERT_ARG_HANDLE_CHECKED(String, s, 0); CONVERT_NUMBER_CHECKED(uint32_t, limit, Uint32, args[1]); @@ -6974,7 +6832,7 @@ RUNTIME_FUNCTION(Runtime_StringToArray) { #ifdef DEBUG for (int i = 0; i < length; ++i) { - ASSERT(String::cast(elements->get(i))->length() == 1); + DCHECK(String::cast(elements->get(i))->length() == 1); } #endif @@ -6984,7 +6842,7 @@ RUNTIME_FUNCTION(Runtime_StringToArray) { RUNTIME_FUNCTION(Runtime_NewStringWrapper) { HandleScope scope(isolate); - ASSERT(args.length() == 1); + DCHECK(args.length() == 1); CONVERT_ARG_HANDLE_CHECKED(String, value, 0); return *Object::ToObject(isolate, value).ToHandleChecked(); } @@ -6997,18 +6855,18 @@ bool Runtime::IsUpperCaseChar(RuntimeState* runtime_state, uint16_t ch) { } -RUNTIME_FUNCTION(RuntimeHidden_NumberToString) { +RUNTIME_FUNCTION(Runtime_NumberToStringRT) { HandleScope scope(isolate); - ASSERT(args.length() == 1); + DCHECK(args.length() == 1); CONVERT_NUMBER_ARG_HANDLE_CHECKED(number, 0); return *isolate->factory()->NumberToString(number); } -RUNTIME_FUNCTION(RuntimeHidden_NumberToStringSkipCache) { +RUNTIME_FUNCTION(Runtime_NumberToStringSkipCache) { HandleScope scope(isolate); - ASSERT(args.length() == 1); + DCHECK(args.length() == 1); CONVERT_NUMBER_ARG_HANDLE_CHECKED(number, 0); return *isolate->factory()->NumberToString(number, false); @@ -7017,7 +6875,7 @@ RUNTIME_FUNCTION(RuntimeHidden_NumberToStringSkipCache) { RUNTIME_FUNCTION(Runtime_NumberToInteger) { HandleScope scope(isolate); - ASSERT(args.length() == 1); + DCHECK(args.length() == 1); CONVERT_DOUBLE_ARG_CHECKED(number, 0); return *isolate->factory()->NewNumber(DoubleToInteger(number)); @@ -7026,7 +6884,7 @@ RUNTIME_FUNCTION(Runtime_NumberToInteger) { RUNTIME_FUNCTION(Runtime_NumberToIntegerMapMinusZero) { HandleScope scope(isolate); - ASSERT(args.length() == 1); + DCHECK(args.length() == 1); CONVERT_DOUBLE_ARG_CHECKED(number, 0); double double_value = DoubleToInteger(number); @@ -7039,7 +6897,7 @@ RUNTIME_FUNCTION(Runtime_NumberToIntegerMapMinusZero) { RUNTIME_FUNCTION(Runtime_NumberToJSUint32) { HandleScope scope(isolate); - 
ASSERT(args.length() == 1); + DCHECK(args.length() == 1); CONVERT_NUMBER_CHECKED(int32_t, number, Uint32, args[0]); return *isolate->factory()->NewNumberFromUint(number); @@ -7048,7 +6906,7 @@ RUNTIME_FUNCTION(Runtime_NumberToJSUint32) { RUNTIME_FUNCTION(Runtime_NumberToJSInt32) { HandleScope scope(isolate); - ASSERT(args.length() == 1); + DCHECK(args.length() == 1); CONVERT_DOUBLE_ARG_CHECKED(number, 0); return *isolate->factory()->NewNumberFromInt(DoubleToInt32(number)); @@ -7057,9 +6915,9 @@ RUNTIME_FUNCTION(Runtime_NumberToJSInt32) { // Converts a Number to a Smi, if possible. Returns NaN if the number is not // a small integer. -RUNTIME_FUNCTION(RuntimeHidden_NumberToSmi) { +RUNTIME_FUNCTION(Runtime_NumberToSmi) { SealHandleScope shs(isolate); - ASSERT(args.length() == 1); + DCHECK(args.length() == 1); CONVERT_ARG_CHECKED(Object, obj, 0); if (obj->IsSmi()) { return obj; @@ -7075,16 +6933,16 @@ RUNTIME_FUNCTION(RuntimeHidden_NumberToSmi) { } -RUNTIME_FUNCTION(RuntimeHidden_AllocateHeapNumber) { +RUNTIME_FUNCTION(Runtime_AllocateHeapNumber) { HandleScope scope(isolate); - ASSERT(args.length() == 0); + DCHECK(args.length() == 0); return *isolate->factory()->NewHeapNumber(0); } RUNTIME_FUNCTION(Runtime_NumberAdd) { HandleScope scope(isolate); - ASSERT(args.length() == 2); + DCHECK(args.length() == 2); CONVERT_DOUBLE_ARG_CHECKED(x, 0); CONVERT_DOUBLE_ARG_CHECKED(y, 1); @@ -7094,7 +6952,7 @@ RUNTIME_FUNCTION(Runtime_NumberAdd) { RUNTIME_FUNCTION(Runtime_NumberSub) { HandleScope scope(isolate); - ASSERT(args.length() == 2); + DCHECK(args.length() == 2); CONVERT_DOUBLE_ARG_CHECKED(x, 0); CONVERT_DOUBLE_ARG_CHECKED(y, 1); @@ -7104,7 +6962,7 @@ RUNTIME_FUNCTION(Runtime_NumberSub) { RUNTIME_FUNCTION(Runtime_NumberMul) { HandleScope scope(isolate); - ASSERT(args.length() == 2); + DCHECK(args.length() == 2); CONVERT_DOUBLE_ARG_CHECKED(x, 0); CONVERT_DOUBLE_ARG_CHECKED(y, 1); @@ -7114,7 +6972,7 @@ RUNTIME_FUNCTION(Runtime_NumberMul) { RUNTIME_FUNCTION(Runtime_NumberUnaryMinus) { HandleScope scope(isolate); - ASSERT(args.length() == 1); + DCHECK(args.length() == 1); CONVERT_DOUBLE_ARG_CHECKED(x, 0); return *isolate->factory()->NewNumber(-x); @@ -7123,7 +6981,7 @@ RUNTIME_FUNCTION(Runtime_NumberUnaryMinus) { RUNTIME_FUNCTION(Runtime_NumberDiv) { HandleScope scope(isolate); - ASSERT(args.length() == 2); + DCHECK(args.length() == 2); CONVERT_DOUBLE_ARG_CHECKED(x, 0); CONVERT_DOUBLE_ARG_CHECKED(y, 1); @@ -7133,7 +6991,7 @@ RUNTIME_FUNCTION(Runtime_NumberDiv) { RUNTIME_FUNCTION(Runtime_NumberMod) { HandleScope scope(isolate); - ASSERT(args.length() == 2); + DCHECK(args.length() == 2); CONVERT_DOUBLE_ARG_CHECKED(x, 0); CONVERT_DOUBLE_ARG_CHECKED(y, 1); @@ -7143,17 +7001,20 @@ RUNTIME_FUNCTION(Runtime_NumberMod) { RUNTIME_FUNCTION(Runtime_NumberImul) { HandleScope scope(isolate); - ASSERT(args.length() == 2); + DCHECK(args.length() == 2); - CONVERT_NUMBER_CHECKED(int32_t, x, Int32, args[0]); - CONVERT_NUMBER_CHECKED(int32_t, y, Int32, args[1]); - return *isolate->factory()->NewNumberFromInt(x * y); + // We rely on implementation-defined behavior below, but at least not on + // undefined behavior. 
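
The NumberImul change directly below converts both operands to uint32_t, multiplies, and casts back: unsigned overflow wraps modulo 2^32 by definition, so the multiply is no longer undefined behavior, and only the final cast is implementation-defined (two's complement on every platform V8 targets). A standalone sketch of the same fix:

#include <cstdint>
#include <iostream>

int32_t Imul(int32_t a, int32_t b) {
  uint32_t x = static_cast<uint32_t>(a);
  uint32_t y = static_cast<uint32_t>(b);
  // The unsigned product wraps mod 2^32 (well defined); the cast back to
  // int32_t is implementation-defined rather than undefined.
  return static_cast<int32_t>(x * y);
}

int main() {
  std::cout << Imul(0x7fffffff, 2) << "\n";  // -2, matching JS Math.imul
}
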
+ CONVERT_NUMBER_CHECKED(uint32_t, x, Int32, args[0]); + CONVERT_NUMBER_CHECKED(uint32_t, y, Int32, args[1]); + int32_t product = static_cast<int32_t>(x * y); + return *isolate->factory()->NewNumberFromInt(product); } -RUNTIME_FUNCTION(RuntimeHidden_StringAdd) { +RUNTIME_FUNCTION(Runtime_StringAdd) { HandleScope scope(isolate); - ASSERT(args.length() == 2); + DCHECK(args.length() == 2); CONVERT_ARG_HANDLE_CHECKED(String, str1, 0); CONVERT_ARG_HANDLE_CHECKED(String, str2, 1); isolate->counters()->string_add_runtime()->Increment(); @@ -7185,7 +7046,7 @@ static inline void StringBuilderConcatHelper(String* special, } else { // Position and length encoded in two smis. Object* obj = fixed_array->get(++i); - ASSERT(obj->IsSmi()); + DCHECK(obj->IsSmi()); pos = Smi::cast(obj)->value(); len = -encoded_slice; } @@ -7235,8 +7096,8 @@ static inline int StringBuilderConcatLength(int special_length, pos = Smi::cast(next_smi)->value(); if (pos < 0) return -1; } - ASSERT(pos >= 0); - ASSERT(len >= 0); + DCHECK(pos >= 0); + DCHECK(len >= 0); if (pos > special_length || len > special_length - pos) return -1; increment = len; } else if (elt->IsString()) { @@ -7260,7 +7121,7 @@ static inline int StringBuilderConcatLength(int special_length, RUNTIME_FUNCTION(Runtime_StringBuilderConcat) { HandleScope scope(isolate); - ASSERT(args.length() == 3); + DCHECK(args.length() == 3); CONVERT_ARG_HANDLE_CHECKED(JSArray, array, 0); if (!args[1]->IsSmi()) return isolate->ThrowInvalidStringLength(); CONVERT_SMI_ARG_CHECKED(array_length, 1); @@ -7273,7 +7134,7 @@ RUNTIME_FUNCTION(Runtime_StringBuilderConcat) { RUNTIME_ASSERT(static_cast<size_t>(array_length) <= actual_array_length); // This assumption is used by the slice encoding in one or two smis. - ASSERT(Smi::kMaxValue >= String::kMaxLength); + DCHECK(Smi::kMaxValue >= String::kMaxLength); RUNTIME_ASSERT(array->HasFastElements()); JSObject::EnsureCanContainHeapObjectElements(array); @@ -7332,12 +7193,13 @@ RUNTIME_FUNCTION(Runtime_StringBuilderConcat) { RUNTIME_FUNCTION(Runtime_StringBuilderJoin) { HandleScope scope(isolate); - ASSERT(args.length() == 3); + DCHECK(args.length() == 3); CONVERT_ARG_HANDLE_CHECKED(JSArray, array, 0); if (!args[1]->IsSmi()) return isolate->ThrowInvalidStringLength(); CONVERT_SMI_ARG_CHECKED(array_length, 1); CONVERT_ARG_HANDLE_CHECKED(String, separator, 2); RUNTIME_ASSERT(array->HasFastObjectElements()); + RUNTIME_ASSERT(array_length >= 0); Handle<FixedArray> fixed_array(FixedArray::cast(array->elements())); if (fixed_array->length() < array_length) { @@ -7385,27 +7247,29 @@ RUNTIME_FUNCTION(Runtime_StringBuilderJoin) { uc16* end = sink + length; #endif + RUNTIME_ASSERT(fixed_array->get(0)->IsString()); String* first = String::cast(fixed_array->get(0)); - String* seperator_raw = *separator; + String* separator_raw = *separator; int first_length = first->length(); String::WriteToFlat(first, sink, 0, first_length); sink += first_length; for (int i = 1; i < array_length; i++) { - ASSERT(sink + separator_length <= end); - String::WriteToFlat(seperator_raw, sink, 0, separator_length); + DCHECK(sink + separator_length <= end); + String::WriteToFlat(separator_raw, sink, 0, separator_length); sink += separator_length; + RUNTIME_ASSERT(fixed_array->get(i)->IsString()); String* element = String::cast(fixed_array->get(i)); int element_length = element->length(); - ASSERT(sink + element_length <= end); + DCHECK(sink + element_length <= end); String::WriteToFlat(element, sink, 0, element_length); sink += element_length; } - ASSERT(sink == end); + 
DCHECK(sink == end); // Use %_FastAsciiArrayJoin instead. - ASSERT(!answer->IsOneByteRepresentation()); + DCHECK(!answer->IsOneByteRepresentation()); return *answer; } @@ -7438,7 +7302,7 @@ static void JoinSparseArrayWithSeparator(FixedArray* elements, if (separator_length > 0) { // Array length must be representable as a signed 32-bit number, // otherwise the total string length would have been too large. - ASSERT(array_length <= 0x7fffffff); // Is int32_t. + DCHECK(array_length <= 0x7fffffff); // Is int32_t. int last_array_index = static_cast<int>(array_length - 1); while (previous_separator_position < last_array_index) { String::WriteToFlat<Char>(separator, &buffer[cursor], @@ -7447,13 +7311,13 @@ static void JoinSparseArrayWithSeparator(FixedArray* elements, previous_separator_position++; } } - ASSERT(cursor <= buffer.length()); + DCHECK(cursor <= buffer.length()); } RUNTIME_FUNCTION(Runtime_SparseJoinWithSeparator) { HandleScope scope(isolate); - ASSERT(args.length() == 3); + DCHECK(args.length() == 3); CONVERT_ARG_HANDLE_CHECKED(JSArray, elements_array, 0); CONVERT_NUMBER_CHECKED(uint32_t, array_length, Uint32, args[1]); CONVERT_ARG_HANDLE_CHECKED(String, separator, 2); @@ -7474,6 +7338,8 @@ RUNTIME_FUNCTION(Runtime_SparseJoinWithSeparator) { FixedArray* elements = FixedArray::cast(elements_array->elements()); for (int i = 0; i < elements_length; i += 2) { RUNTIME_ASSERT(elements->get(i)->IsNumber()); + CONVERT_NUMBER_CHECKED(uint32_t, position, Uint32, elements->get(i)); + RUNTIME_ASSERT(position < array_length); RUNTIME_ASSERT(elements->get(i + 1)->IsString()); } @@ -7544,7 +7410,7 @@ RUNTIME_FUNCTION(Runtime_SparseJoinWithSeparator) { RUNTIME_FUNCTION(Runtime_NumberOr) { HandleScope scope(isolate); - ASSERT(args.length() == 2); + DCHECK(args.length() == 2); CONVERT_NUMBER_CHECKED(int32_t, x, Int32, args[0]); CONVERT_NUMBER_CHECKED(int32_t, y, Int32, args[1]); @@ -7554,7 +7420,7 @@ RUNTIME_FUNCTION(Runtime_NumberOr) { RUNTIME_FUNCTION(Runtime_NumberAnd) { HandleScope scope(isolate); - ASSERT(args.length() == 2); + DCHECK(args.length() == 2); CONVERT_NUMBER_CHECKED(int32_t, x, Int32, args[0]); CONVERT_NUMBER_CHECKED(int32_t, y, Int32, args[1]); @@ -7564,7 +7430,7 @@ RUNTIME_FUNCTION(Runtime_NumberAnd) { RUNTIME_FUNCTION(Runtime_NumberXor) { HandleScope scope(isolate); - ASSERT(args.length() == 2); + DCHECK(args.length() == 2); CONVERT_NUMBER_CHECKED(int32_t, x, Int32, args[0]); CONVERT_NUMBER_CHECKED(int32_t, y, Int32, args[1]); @@ -7574,7 +7440,7 @@ RUNTIME_FUNCTION(Runtime_NumberXor) { RUNTIME_FUNCTION(Runtime_NumberShl) { HandleScope scope(isolate); - ASSERT(args.length() == 2); + DCHECK(args.length() == 2); CONVERT_NUMBER_CHECKED(int32_t, x, Int32, args[0]); CONVERT_NUMBER_CHECKED(int32_t, y, Int32, args[1]); @@ -7584,7 +7450,7 @@ RUNTIME_FUNCTION(Runtime_NumberShl) { RUNTIME_FUNCTION(Runtime_NumberShr) { HandleScope scope(isolate); - ASSERT(args.length() == 2); + DCHECK(args.length() == 2); CONVERT_NUMBER_CHECKED(uint32_t, x, Uint32, args[0]); CONVERT_NUMBER_CHECKED(int32_t, y, Int32, args[1]); @@ -7594,7 +7460,7 @@ RUNTIME_FUNCTION(Runtime_NumberShr) { RUNTIME_FUNCTION(Runtime_NumberSar) { HandleScope scope(isolate); - ASSERT(args.length() == 2); + DCHECK(args.length() == 2); CONVERT_NUMBER_CHECKED(int32_t, x, Int32, args[0]); CONVERT_NUMBER_CHECKED(int32_t, y, Int32, args[1]); @@ -7605,7 +7471,7 @@ RUNTIME_FUNCTION(Runtime_NumberSar) { RUNTIME_FUNCTION(Runtime_NumberEquals) { SealHandleScope shs(isolate); - ASSERT(args.length() == 2); + DCHECK(args.length() == 2); 
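
A rough standalone model, under simplified types, of the sparse join that JoinSparseArrayWithSeparator performs above: (index, string) pairs are written into one output buffer, emitting a separator for every array slot passed over, including trailing holes. The RUNTIME_ASSERTs newly added to Runtime_SparseJoinWithSeparator exist precisely so a loop like this can trust that every index is in range.

#include <string>
#include <utility>
#include <vector>
#include <iostream>

std::string SparseJoin(const std::vector<std::pair<int, std::string>>& elems,
                       int array_length, const std::string& sep) {
  std::string out;
  int seps_emitted = 0;
  for (const auto& elem : elems) {  // pairs sorted by index
    while (seps_emitted < elem.first) {
      out += sep;  // one separator per slot skipped (holes included)
      ++seps_emitted;
    }
    out += elem.second;
  }
  while (seps_emitted < array_length - 1) {
    out += sep;  // trailing holes still get separators
    ++seps_emitted;
  }
  return out;
}

int main() {
  // Models ["a", hole, "b", hole].join(",") => "a,,b,"
  std::cout << SparseJoin({{0, "a"}, {2, "b"}}, 4, ",") << "\n";
}
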
CONVERT_DOUBLE_ARG_CHECKED(x, 0); CONVERT_DOUBLE_ARG_CHECKED(y, 1); @@ -7624,7 +7490,7 @@ RUNTIME_FUNCTION(Runtime_NumberEquals) { RUNTIME_FUNCTION(Runtime_StringEquals) { HandleScope handle_scope(isolate); - ASSERT(args.length() == 2); + DCHECK(args.length() == 2); CONVERT_ARG_HANDLE_CHECKED(String, x, 0); CONVERT_ARG_HANDLE_CHECKED(String, y, 1); @@ -7633,16 +7499,16 @@ RUNTIME_FUNCTION(Runtime_StringEquals) { // This is slightly convoluted because the value that signifies // equality is 0 and inequality is 1 so we have to negate the result // from String::Equals. - ASSERT(not_equal == 0 || not_equal == 1); - STATIC_CHECK(EQUAL == 0); - STATIC_CHECK(NOT_EQUAL == 1); + DCHECK(not_equal == 0 || not_equal == 1); + STATIC_ASSERT(EQUAL == 0); + STATIC_ASSERT(NOT_EQUAL == 1); return Smi::FromInt(not_equal); } RUNTIME_FUNCTION(Runtime_NumberCompare) { SealHandleScope shs(isolate); - ASSERT(args.length() == 3); + DCHECK(args.length() == 3); CONVERT_DOUBLE_ARG_CHECKED(x, 0); CONVERT_DOUBLE_ARG_CHECKED(y, 1); @@ -7658,7 +7524,7 @@ RUNTIME_FUNCTION(Runtime_NumberCompare) { // compared lexicographically. RUNTIME_FUNCTION(Runtime_SmiLexicographicCompare) { SealHandleScope shs(isolate); - ASSERT(args.length() == 2); + DCHECK(args.length() == 2); CONVERT_SMI_ARG_CHECKED(x_value, 0); CONVERT_SMI_ARG_CHECKED(y_value, 1); @@ -7731,9 +7597,9 @@ RUNTIME_FUNCTION(Runtime_SmiLexicographicCompare) { } -RUNTIME_FUNCTION(RuntimeHidden_StringCompare) { +RUNTIME_FUNCTION(Runtime_StringCompare) { HandleScope handle_scope(isolate); - ASSERT(args.length() == 2); + DCHECK(args.length() == 2); CONVERT_ARG_HANDLE_CHECKED(String, x, 0); CONVERT_ARG_HANDLE_CHECKED(String, y, 1); @@ -7801,7 +7667,7 @@ RUNTIME_FUNCTION(RuntimeHidden_StringCompare) { #define RUNTIME_UNARY_MATH(Name, name) \ RUNTIME_FUNCTION(Runtime_Math##Name) { \ HandleScope scope(isolate); \ - ASSERT(args.length() == 1); \ + DCHECK(args.length() == 1); \ isolate->counters()->math_##name()->Increment(); \ CONVERT_DOUBLE_ARG_CHECKED(x, 0); \ return *isolate->factory()->NewHeapNumber(std::name(x)); \ @@ -7810,13 +7676,13 @@ RUNTIME_FUNCTION(Runtime_Math##Name) { \ RUNTIME_UNARY_MATH(Acos, acos) RUNTIME_UNARY_MATH(Asin, asin) RUNTIME_UNARY_MATH(Atan, atan) -RUNTIME_UNARY_MATH(Log, log) +RUNTIME_UNARY_MATH(LogRT, log) #undef RUNTIME_UNARY_MATH RUNTIME_FUNCTION(Runtime_DoubleHi) { HandleScope scope(isolate); - ASSERT(args.length() == 1); + DCHECK(args.length() == 1); CONVERT_DOUBLE_ARG_CHECKED(x, 0); uint64_t integer = double_to_uint64(x); integer = (integer >> 32) & 0xFFFFFFFFu; @@ -7826,7 +7692,7 @@ RUNTIME_FUNCTION(Runtime_DoubleHi) { RUNTIME_FUNCTION(Runtime_DoubleLo) { HandleScope scope(isolate); - ASSERT(args.length() == 1); + DCHECK(args.length() == 1); CONVERT_DOUBLE_ARG_CHECKED(x, 0); return *isolate->factory()->NewNumber( static_cast<int32_t>(double_to_uint64(x) & 0xFFFFFFFFu)); @@ -7835,7 +7701,7 @@ RUNTIME_FUNCTION(Runtime_DoubleLo) { RUNTIME_FUNCTION(Runtime_ConstructDouble) { HandleScope scope(isolate); - ASSERT(args.length() == 2); + DCHECK(args.length() == 2); CONVERT_NUMBER_CHECKED(uint32_t, hi, Uint32, args[0]); CONVERT_NUMBER_CHECKED(uint32_t, lo, Uint32, args[1]); uint64_t result = (static_cast<uint64_t>(hi) << 32) | lo; @@ -7843,12 +7709,29 @@ RUNTIME_FUNCTION(Runtime_ConstructDouble) { } +RUNTIME_FUNCTION(Runtime_RemPiO2) { + HandleScope handle_scope(isolate); + DCHECK(args.length() == 1); + CONVERT_DOUBLE_ARG_CHECKED(x, 0); + Factory* factory = isolate->factory(); + double y[2]; + int n = fdlibm::rempio2(x, y); + Handle<FixedArray> array = 
factory->NewFixedArray(3); + Handle<HeapNumber> y0 = factory->NewHeapNumber(y[0]); + Handle<HeapNumber> y1 = factory->NewHeapNumber(y[1]); + array->set(0, Smi::FromInt(n)); + array->set(1, *y0); + array->set(2, *y1); + return *factory->NewJSArrayWithElements(array); +} + + static const double kPiDividedBy4 = 0.78539816339744830962; RUNTIME_FUNCTION(Runtime_MathAtan2) { HandleScope scope(isolate); - ASSERT(args.length() == 2); + DCHECK(args.length() == 2); isolate->counters()->math_atan2()->Increment(); CONVERT_DOUBLE_ARG_CHECKED(x, 0); @@ -7869,9 +7752,9 @@ RUNTIME_FUNCTION(Runtime_MathAtan2) { } -RUNTIME_FUNCTION(Runtime_MathExp) { +RUNTIME_FUNCTION(Runtime_MathExpRT) { HandleScope scope(isolate); - ASSERT(args.length() == 1); + DCHECK(args.length() == 1); isolate->counters()->math_exp()->Increment(); CONVERT_DOUBLE_ARG_CHECKED(x, 0); @@ -7880,21 +7763,21 @@ RUNTIME_FUNCTION(Runtime_MathExp) { } -RUNTIME_FUNCTION(Runtime_MathFloor) { +RUNTIME_FUNCTION(Runtime_MathFloorRT) { HandleScope scope(isolate); - ASSERT(args.length() == 1); + DCHECK(args.length() == 1); isolate->counters()->math_floor()->Increment(); CONVERT_DOUBLE_ARG_CHECKED(x, 0); - return *isolate->factory()->NewNumber(std::floor(x)); + return *isolate->factory()->NewNumber(Floor(x)); } // Slow version of Math.pow. We check for fast paths for special cases. -// Used if SSE2/VFP3 is not available. -RUNTIME_FUNCTION(RuntimeHidden_MathPowSlow) { +// Used if VFP3 is not available. +RUNTIME_FUNCTION(Runtime_MathPowSlow) { HandleScope scope(isolate); - ASSERT(args.length() == 2); + DCHECK(args.length() == 2); isolate->counters()->math_pow()->Increment(); CONVERT_DOUBLE_ARG_CHECKED(x, 0); @@ -7915,9 +7798,9 @@ RUNTIME_FUNCTION(RuntimeHidden_MathPowSlow) { // Fast version of Math.pow if we know that y is not an integer and y is not // -0.5 or 0.5. Used as slow case from full codegen. -RUNTIME_FUNCTION(RuntimeHidden_MathPow) { +RUNTIME_FUNCTION(Runtime_MathPowRT) { HandleScope scope(isolate); - ASSERT(args.length() == 2); + DCHECK(args.length() == 2); isolate->counters()->math_pow()->Increment(); CONVERT_DOUBLE_ARG_CHECKED(x, 0); @@ -7934,12 +7817,12 @@ RUNTIME_FUNCTION(RuntimeHidden_MathPow) { RUNTIME_FUNCTION(Runtime_RoundNumber) { HandleScope scope(isolate); - ASSERT(args.length() == 1); + DCHECK(args.length() == 1); CONVERT_NUMBER_ARG_HANDLE_CHECKED(input, 0); isolate->counters()->math_round()->Increment(); if (!input->IsHeapNumber()) { - ASSERT(input->IsSmi()); + DCHECK(input->IsSmi()); return *input; } @@ -7971,13 +7854,13 @@ RUNTIME_FUNCTION(Runtime_RoundNumber) { if (sign && value >= -0.5) return isolate->heap()->minus_zero_value(); // Do not call NumberFromDouble() to avoid extra checks. 
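
The DoubleHi/DoubleLo/ConstructDouble trio above splits an IEEE double into its two 32-bit halves and reassembles it. A self-contained sketch of the round trip, using memcpy for the bit casts in place of V8's double_to_uint64 helpers:

#include <cstdint>
#include <cstring>
#include <iostream>

uint64_t DoubleToUint64(double x) {
  uint64_t u;
  std::memcpy(&u, &x, sizeof(u));  // bit cast without aliasing UB
  return u;
}

double Uint64ToDouble(uint64_t u) {
  double x;
  std::memcpy(&x, &u, sizeof(x));
  return x;
}

int main() {
  double pi = 3.141592653589793;
  uint32_t hi = static_cast<uint32_t>(DoubleToUint64(pi) >> 32);         // DoubleHi
  uint32_t lo = static_cast<uint32_t>(DoubleToUint64(pi) & 0xFFFFFFFFu); // DoubleLo
  double back = Uint64ToDouble((static_cast<uint64_t>(hi) << 32) | lo);  // ConstructDouble
  std::cout << (back == pi ? "round-trips exactly" : "mismatch") << "\n";
}
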
- return *isolate->factory()->NewNumber(std::floor(value + 0.5)); + return *isolate->factory()->NewNumber(Floor(value + 0.5)); } -RUNTIME_FUNCTION(Runtime_MathSqrt) { +RUNTIME_FUNCTION(Runtime_MathSqrtRT) { HandleScope scope(isolate); - ASSERT(args.length() == 1); + DCHECK(args.length() == 1); isolate->counters()->math_sqrt()->Increment(); CONVERT_DOUBLE_ARG_CHECKED(x, 0); @@ -7987,7 +7870,7 @@ RUNTIME_FUNCTION(Runtime_MathSqrt) { RUNTIME_FUNCTION(Runtime_MathFround) { HandleScope scope(isolate); - ASSERT(args.length() == 1); + DCHECK(args.length() == 1); CONVERT_DOUBLE_ARG_CHECKED(x, 0); float xf = static_cast<float>(x); @@ -7997,7 +7880,7 @@ RUNTIME_FUNCTION(Runtime_MathFround) { RUNTIME_FUNCTION(Runtime_DateMakeDay) { SealHandleScope shs(isolate); - ASSERT(args.length() == 2); + DCHECK(args.length() == 2); CONVERT_SMI_ARG_CHECKED(year, 0); CONVERT_SMI_ARG_CHECKED(month, 1); @@ -8010,7 +7893,7 @@ RUNTIME_FUNCTION(Runtime_DateMakeDay) { RUNTIME_FUNCTION(Runtime_DateSetValue) { HandleScope scope(isolate); - ASSERT(args.length() == 3); + DCHECK(args.length() == 3); CONVERT_ARG_HANDLE_CHECKED(JSDate, date, 0); CONVERT_DOUBLE_ARG_CHECKED(time, 1); @@ -8043,16 +7926,13 @@ RUNTIME_FUNCTION(Runtime_DateSetValue) { } -RUNTIME_FUNCTION(RuntimeHidden_NewArgumentsFast) { - HandleScope scope(isolate); - ASSERT(args.length() == 3); - - CONVERT_ARG_HANDLE_CHECKED(JSFunction, callee, 0); - Object** parameters = reinterpret_cast<Object**>(args[1]); - CONVERT_SMI_ARG_CHECKED(argument_count, 2); - +static Handle<JSObject> NewSloppyArguments(Isolate* isolate, + Handle<JSFunction> callee, + Object** parameters, + int argument_count) { Handle<JSObject> result = isolate->factory()->NewArgumentsObject(callee, argument_count); + // Allocate the elements if needed. int parameter_count = callee->shared()->formal_parameter_count(); if (argument_count > 0) { @@ -8114,7 +7994,7 @@ RUNTIME_FUNCTION(RuntimeHidden_NewArgumentsFast) { break; } } - ASSERT(context_index >= 0); + DCHECK(context_index >= 0); arguments->set_the_hole(index); parameter_map->set(index + 2, Smi::FromInt( Context::MIN_CONTEXT_SLOTS + context_index)); @@ -8133,37 +8013,73 @@ RUNTIME_FUNCTION(RuntimeHidden_NewArgumentsFast) { } } } - return *result; + return result; } -RUNTIME_FUNCTION(RuntimeHidden_NewStrictArgumentsFast) { - HandleScope scope(isolate); - ASSERT(args.length() == 3); - CONVERT_ARG_HANDLE_CHECKED(JSFunction, callee, 0) - Object** parameters = reinterpret_cast<Object**>(args[1]); - CONVERT_SMI_ARG_CHECKED(length, 2); - +static Handle<JSObject> NewStrictArguments(Isolate* isolate, + Handle<JSFunction> callee, + Object** parameters, + int argument_count) { Handle<JSObject> result = - isolate->factory()->NewArgumentsObject(callee, length); + isolate->factory()->NewArgumentsObject(callee, argument_count); - if (length > 0) { + if (argument_count > 0) { Handle<FixedArray> array = - isolate->factory()->NewUninitializedFixedArray(length); + isolate->factory()->NewUninitializedFixedArray(argument_count); DisallowHeapAllocation no_gc; WriteBarrierMode mode = array->GetWriteBarrierMode(no_gc); - for (int i = 0; i < length; i++) { + for (int i = 0; i < argument_count; i++) { array->set(i, *--parameters, mode); } result->set_elements(*array); } - return *result; + return result; +} + + +RUNTIME_FUNCTION(Runtime_NewArguments) { + HandleScope scope(isolate); + DCHECK(args.length() == 1); + CONVERT_ARG_HANDLE_CHECKED(JSFunction, callee, 0); + JavaScriptFrameIterator it(isolate); + + // Find the frame that holds the actual arguments passed to the 
function. + it.AdvanceToArgumentsFrame(); + JavaScriptFrame* frame = it.frame(); + + // Determine parameter location on the stack and dispatch on language mode. + int argument_count = frame->GetArgumentsLength(); + Object** parameters = reinterpret_cast<Object**>(frame->GetParameterSlot(-1)); + return callee->shared()->strict_mode() == STRICT + ? *NewStrictArguments(isolate, callee, parameters, argument_count) + : *NewSloppyArguments(isolate, callee, parameters, argument_count); +} + + +RUNTIME_FUNCTION(Runtime_NewSloppyArguments) { + HandleScope scope(isolate); + DCHECK(args.length() == 3); + CONVERT_ARG_HANDLE_CHECKED(JSFunction, callee, 0); + Object** parameters = reinterpret_cast<Object**>(args[1]); + CONVERT_SMI_ARG_CHECKED(argument_count, 2); + return *NewSloppyArguments(isolate, callee, parameters, argument_count); +} + + +RUNTIME_FUNCTION(Runtime_NewStrictArguments) { + HandleScope scope(isolate); + DCHECK(args.length() == 3); + CONVERT_ARG_HANDLE_CHECKED(JSFunction, callee, 0) + Object** parameters = reinterpret_cast<Object**>(args[1]); + CONVERT_SMI_ARG_CHECKED(argument_count, 2); + return *NewStrictArguments(isolate, callee, parameters, argument_count); } -RUNTIME_FUNCTION(RuntimeHidden_NewClosureFromStubFailure) { +RUNTIME_FUNCTION(Runtime_NewClosureFromStubFailure) { HandleScope scope(isolate); - ASSERT(args.length() == 1); + DCHECK(args.length() == 1); CONVERT_ARG_HANDLE_CHECKED(SharedFunctionInfo, shared, 0); Handle<Context> context(isolate->context()); PretenureFlag pretenure_flag = NOT_TENURED; @@ -8172,9 +8088,9 @@ RUNTIME_FUNCTION(RuntimeHidden_NewClosureFromStubFailure) { } -RUNTIME_FUNCTION(RuntimeHidden_NewClosure) { +RUNTIME_FUNCTION(Runtime_NewClosure) { HandleScope scope(isolate); - ASSERT(args.length() == 3); + DCHECK(args.length() == 3); CONVERT_ARG_HANDLE_CHECKED(Context, context, 0); CONVERT_ARG_HANDLE_CHECKED(SharedFunctionInfo, shared, 1); CONVERT_BOOLEAN_ARG_CHECKED(pretenure, 2); @@ -8239,10 +8155,11 @@ static SmartArrayPointer<Handle<Object> > GetCallerArguments( RUNTIME_FUNCTION(Runtime_FunctionBindArguments) { HandleScope scope(isolate); - ASSERT(args.length() == 4); + DCHECK(args.length() == 4); CONVERT_ARG_HANDLE_CHECKED(JSFunction, bound_function, 0); - RUNTIME_ASSERT(args[3]->IsNumber()); - Handle<Object> bindee = args.at<Object>(1); + CONVERT_ARG_HANDLE_CHECKED(Object, bindee, 1); + CONVERT_ARG_HANDLE_CHECKED(Object, this_object, 2); + CONVERT_NUMBER_ARG_HANDLE_CHECKED(new_length, 3); // TODO(lrn): Create bound function in C++ code from premade shared info. bound_function->shared()->set_bound(true); @@ -8252,10 +8169,10 @@ RUNTIME_FUNCTION(Runtime_FunctionBindArguments) { GetCallerArguments(isolate, 0, &argc); // Don't count the this-arg. if (argc > 0) { - RUNTIME_ASSERT(*arguments[0] == args[2]); + RUNTIME_ASSERT(arguments[0].is_identical_to(this_object)); argc--; } else { - RUNTIME_ASSERT(args[2]->IsUndefined()); + RUNTIME_ASSERT(this_object->IsUndefined()); } // Initialize array of bindings (function, this, and any existing arguments // if the function was already bound). @@ -8277,7 +8194,7 @@ RUNTIME_FUNCTION(Runtime_FunctionBindArguments) { int array_size = JSFunction::kBoundArgumentsStartIndex + argc; new_bindings = isolate->factory()->NewFixedArray(array_size); new_bindings->set(JSFunction::kBoundFunctionIndex, *bindee); - new_bindings->set(JSFunction::kBoundThisIndex, args[2]); + new_bindings->set(JSFunction::kBoundThisIndex, *this_object); i = 2; } // Copy arguments, skipping the first which is "this_arg". 
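
NewStrictArguments above copies with *--parameters: the pointer it receives sits one past the arguments and is walked backwards, so that index 0 of the elements array ends up holding the first argument. A standalone sketch of that idiom, under the assumption (made up for the example) that arguments were pushed in reverse and the first argument therefore sits nearest the handed-in pointer:

#include <iostream>

int main() {
  // Pretend stack slots: arg2, arg1, arg0 (arg0 pushed last, so it is
  // closest to the one-past-the-end pointer we are handed).
  int stack_slots[3] = {30, 20, 10};
  int* parameters = stack_slots + 3;  // one past the end, like the runtime
  int elements[3];
  for (int i = 0; i < 3; i++) {
    elements[i] = *--parameters;  // same idiom as array->set(i, *--parameters)
  }
  std::cout << elements[0] << " " << elements[1] << " " << elements[2]
            << "\n";  // prints: 10 20 30 -- source order restored
}
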
@@ -8288,20 +8205,26 @@ RUNTIME_FUNCTION(Runtime_FunctionBindArguments) { isolate->heap()->fixed_cow_array_map()); bound_function->set_function_bindings(*new_bindings); - // Update length. + // Update length. Have to remove the prototype first so that map migration + // is happy about the number of fields. + RUNTIME_ASSERT(bound_function->RemovePrototype()); + Handle<Map> bound_function_map( + isolate->native_context()->bound_function_map()); + JSObject::MigrateToMap(bound_function, bound_function_map); Handle<String> length_string = isolate->factory()->length_string(); - Handle<Object> new_length(args.at<Object>(3)); PropertyAttributes attr = static_cast<PropertyAttributes>(DONT_DELETE | DONT_ENUM | READ_ONLY); - Runtime::ForceSetObjectProperty( - bound_function, length_string, new_length, attr).Assert(); + RETURN_FAILURE_ON_EXCEPTION( + isolate, + JSObject::SetOwnPropertyIgnoreAttributes( + bound_function, length_string, new_length, attr)); return *bound_function; } RUNTIME_FUNCTION(Runtime_BoundFunctionGetBindings) { HandleScope handles(isolate); - ASSERT(args.length() == 1); + DCHECK(args.length() == 1); CONVERT_ARG_HANDLE_CHECKED(JSReceiver, callable, 0); if (callable->IsJSFunction()) { Handle<JSFunction> function = Handle<JSFunction>::cast(callable); @@ -8317,7 +8240,7 @@ RUNTIME_FUNCTION(Runtime_BoundFunctionGetBindings) { RUNTIME_FUNCTION(Runtime_NewObjectFromBound) { HandleScope scope(isolate); - ASSERT(args.length() == 1); + DCHECK(args.length() == 1); // First argument is a function to use as a constructor. CONVERT_ARG_HANDLE_CHECKED(JSFunction, function, 0); RUNTIME_ASSERT(function->shared()->bound()); @@ -8330,7 +8253,7 @@ RUNTIME_FUNCTION(Runtime_NewObjectFromBound) { Handle<Object> bound_function( JSReceiver::cast(bound_args->get(JSFunction::kBoundFunctionIndex)), isolate); - ASSERT(!bound_function->IsJSFunction() || + DCHECK(!bound_function->IsJSFunction() || !Handle<JSFunction>::cast(bound_function)->shared()->bound()); int total_argc = 0; @@ -8346,7 +8269,7 @@ RUNTIME_FUNCTION(Runtime_NewObjectFromBound) { isolate, bound_function, Execution::TryGetConstructorDelegate(isolate, bound_function)); } - ASSERT(bound_function->IsJSFunction()); + DCHECK(bound_function->IsJSFunction()); Handle<Object> result; ASSIGN_RETURN_FAILURE_ON_EXCEPTION( @@ -8398,7 +8321,7 @@ static Object* Runtime_NewObjectHelper(Isolate* isolate, // instead of a new JSFunction object. This way, errors are // reported the same way whether or not 'Function' is called // using 'new'. - return isolate->context()->global_object(); + return isolate->global_proxy(); } } @@ -8406,15 +8329,6 @@ static Object* Runtime_NewObjectHelper(Isolate* isolate, // available. Compiler::EnsureCompiled(function, CLEAR_EXCEPTION); - Handle<SharedFunctionInfo> shared(function->shared(), isolate); - if (!function->has_initial_map() && - shared->IsInobjectSlackTrackingInProgress()) { - // The tracking is already in progress for another function. We can only - // track one initial_map at a time, so we force the completion before the - // function is called as a constructor for the first time. 
- shared->CompleteInobjectSlackTracking(); - } - Handle<JSObject> result; if (site.is_null()) { result = isolate->factory()->NewJSObject(function); @@ -8429,9 +8343,9 @@ static Object* Runtime_NewObjectHelper(Isolate* isolate, } -RUNTIME_FUNCTION(RuntimeHidden_NewObject) { +RUNTIME_FUNCTION(Runtime_NewObject) { HandleScope scope(isolate); - ASSERT(args.length() == 1); + DCHECK(args.length() == 1); CONVERT_ARG_HANDLE_CHECKED(Object, constructor, 0); return Runtime_NewObjectHelper(isolate, constructor, @@ -8439,9 +8353,9 @@ RUNTIME_FUNCTION(RuntimeHidden_NewObject) { } -RUNTIME_FUNCTION(RuntimeHidden_NewObjectWithAllocationSite) { +RUNTIME_FUNCTION(Runtime_NewObjectWithAllocationSite) { HandleScope scope(isolate); - ASSERT(args.length() == 2); + DCHECK(args.length() == 2); CONVERT_ARG_HANDLE_CHECKED(Object, constructor, 1); CONVERT_ARG_HANDLE_CHECKED(Object, feedback, 0); Handle<AllocationSite> site; @@ -8453,20 +8367,20 @@ RUNTIME_FUNCTION(RuntimeHidden_NewObjectWithAllocationSite) { } -RUNTIME_FUNCTION(RuntimeHidden_FinalizeInstanceSize) { +RUNTIME_FUNCTION(Runtime_FinalizeInstanceSize) { HandleScope scope(isolate); - ASSERT(args.length() == 1); + DCHECK(args.length() == 1); CONVERT_ARG_HANDLE_CHECKED(JSFunction, function, 0); - function->shared()->CompleteInobjectSlackTracking(); + function->CompleteInobjectSlackTracking(); return isolate->heap()->undefined_value(); } -RUNTIME_FUNCTION(RuntimeHidden_CompileUnoptimized) { +RUNTIME_FUNCTION(Runtime_CompileUnoptimized) { HandleScope scope(isolate); - ASSERT(args.length() == 1); + DCHECK(args.length() == 1); CONVERT_ARG_HANDLE_CHECKED(JSFunction, function, 0); #ifdef DEBUG if (FLAG_trace_lazy && !function->shared()->is_compiled()) { @@ -8477,7 +8391,7 @@ RUNTIME_FUNCTION(RuntimeHidden_CompileUnoptimized) { #endif // Compile the target function. - ASSERT(function->shared()->allows_lazy_compilation()); + DCHECK(function->shared()->allows_lazy_compilation()); Handle<Code> code; ASSIGN_RETURN_FAILURE_ON_EXCEPTION(isolate, code, @@ -8485,17 +8399,17 @@ RUNTIME_FUNCTION(RuntimeHidden_CompileUnoptimized) { function->ReplaceCode(*code); // All done. Return the compiled code. 
- ASSERT(function->is_compiled()); - ASSERT(function->code()->kind() == Code::FUNCTION || + DCHECK(function->is_compiled()); + DCHECK(function->code()->kind() == Code::FUNCTION || (FLAG_always_opt && function->code()->kind() == Code::OPTIMIZED_FUNCTION)); return *code; } -RUNTIME_FUNCTION(RuntimeHidden_CompileOptimized) { +RUNTIME_FUNCTION(Runtime_CompileOptimized) { HandleScope scope(isolate); - ASSERT(args.length() == 2); + DCHECK(args.length() == 2); CONVERT_ARG_HANDLE_CHECKED(JSFunction, function, 0); CONVERT_BOOLEAN_ARG_CHECKED(concurrent, 1); @@ -8531,7 +8445,7 @@ RUNTIME_FUNCTION(RuntimeHidden_CompileOptimized) { } } - ASSERT(function->code()->kind() == Code::FUNCTION || + DCHECK(function->code()->kind() == Code::FUNCTION || function->code()->kind() == Code::OPTIMIZED_FUNCTION || function->IsInOptimizationQueue()); return function->code(); @@ -8561,30 +8475,30 @@ class ActivationsFinder : public ThreadVisitor { }; -RUNTIME_FUNCTION(RuntimeHidden_NotifyStubFailure) { +RUNTIME_FUNCTION(Runtime_NotifyStubFailure) { HandleScope scope(isolate); - ASSERT(args.length() == 0); + DCHECK(args.length() == 0); Deoptimizer* deoptimizer = Deoptimizer::Grab(isolate); - ASSERT(AllowHeapAllocation::IsAllowed()); + DCHECK(AllowHeapAllocation::IsAllowed()); delete deoptimizer; return isolate->heap()->undefined_value(); } -RUNTIME_FUNCTION(RuntimeHidden_NotifyDeoptimized) { +RUNTIME_FUNCTION(Runtime_NotifyDeoptimized) { HandleScope scope(isolate); - ASSERT(args.length() == 1); + DCHECK(args.length() == 1); CONVERT_SMI_ARG_CHECKED(type_arg, 0); Deoptimizer::BailoutType type = static_cast<Deoptimizer::BailoutType>(type_arg); Deoptimizer* deoptimizer = Deoptimizer::Grab(isolate); - ASSERT(AllowHeapAllocation::IsAllowed()); + DCHECK(AllowHeapAllocation::IsAllowed()); Handle<JSFunction> function = deoptimizer->function(); Handle<Code> optimized_code = deoptimizer->compiled_code(); - ASSERT(optimized_code->kind() == Code::OPTIMIZED_FUNCTION); - ASSERT(type == deoptimizer->bailout_type()); + DCHECK(optimized_code->kind() == Code::OPTIMIZED_FUNCTION); + DCHECK(type == deoptimizer->bailout_type()); // Make sure to materialize objects before causing any allocation. JavaScriptFrameIterator it(isolate); @@ -8593,7 +8507,7 @@ RUNTIME_FUNCTION(RuntimeHidden_NotifyDeoptimized) { JavaScriptFrame* frame = it.frame(); RUNTIME_ASSERT(frame->function()->IsJSFunction()); - ASSERT(frame->function() == *function); + DCHECK(frame->function() == *function); // Avoid doing too much work when running with --always-opt and keep // the optimized code around. @@ -8632,10 +8546,15 @@ RUNTIME_FUNCTION(RuntimeHidden_NotifyDeoptimized) { RUNTIME_FUNCTION(Runtime_DeoptimizeFunction) { HandleScope scope(isolate); - ASSERT(args.length() == 1); + DCHECK(args.length() == 1); CONVERT_ARG_HANDLE_CHECKED(JSFunction, function, 0); if (!function->IsOptimized()) return isolate->heap()->undefined_value(); + // TODO(turbofan): Deoptimization is not supported yet. 
+ if (function->code()->is_turbofanned() && !FLAG_turbo_deoptimization) { + return isolate->heap()->undefined_value(); + } + Deoptimizer::DeoptimizeFunction(*function); return isolate->heap()->undefined_value(); @@ -8644,7 +8563,7 @@ RUNTIME_FUNCTION(Runtime_DeoptimizeFunction) { RUNTIME_FUNCTION(Runtime_ClearFunctionTypeFeedback) { HandleScope scope(isolate); - ASSERT(args.length() == 1); + DCHECK(args.length() == 1); CONVERT_ARG_HANDLE_CHECKED(JSFunction, function, 0); function->shared()->ClearTypeFeedbackInfo(); Code* unoptimized = function->shared()->code(); @@ -8657,7 +8576,7 @@ RUNTIME_FUNCTION(Runtime_ClearFunctionTypeFeedback) { RUNTIME_FUNCTION(Runtime_RunningInSimulator) { SealHandleScope shs(isolate); - ASSERT(args.length() == 0); + DCHECK(args.length() == 0); #if defined(USE_SIMULATOR) return isolate->heap()->true_value(); #else @@ -8668,7 +8587,7 @@ RUNTIME_FUNCTION(Runtime_RunningInSimulator) { RUNTIME_FUNCTION(Runtime_IsConcurrentRecompilationSupported) { SealHandleScope shs(isolate); - ASSERT(args.length() == 0); + DCHECK(args.length() == 0); return isolate->heap()->ToBoolean( isolate->concurrent_recompilation_enabled()); } @@ -8691,14 +8610,11 @@ RUNTIME_FUNCTION(Runtime_OptimizeFunctionOnNextCall) { if (args.length() == 2 && unoptimized->kind() == Code::FUNCTION) { CONVERT_ARG_HANDLE_CHECKED(String, type, 1); - if (type->IsOneByteEqualTo(STATIC_ASCII_VECTOR("osr"))) { + if (type->IsOneByteEqualTo(STATIC_ASCII_VECTOR("osr")) && FLAG_use_osr) { // Start patching from the currently patched loop nesting level. - int current_level = unoptimized->allow_osr_at_loop_nesting_level(); - ASSERT(BackEdgeTable::Verify(isolate, unoptimized, current_level)); - for (int i = current_level + 1; i <= Code::kMaxLoopNestingMarker; i++) { - unoptimized->set_allow_osr_at_loop_nesting_level(i); - isolate->runtime_profiler()->AttemptOnStackReplacement(*function); - } + DCHECK(BackEdgeTable::Verify(isolate, unoptimized)); + isolate->runtime_profiler()->AttemptOnStackReplacement( + *function, Code::kMaxLoopNestingMarker); } else if (type->IsOneByteEqualTo(STATIC_ASCII_VECTOR("concurrent")) && isolate->concurrent_recompilation_enabled()) { function->MarkForConcurrentOptimization(); @@ -8711,7 +8627,7 @@ RUNTIME_FUNCTION(Runtime_OptimizeFunctionOnNextCall) { RUNTIME_FUNCTION(Runtime_NeverOptimizeFunction) { HandleScope scope(isolate); - ASSERT(args.length() == 1); + DCHECK(args.length() == 1); CONVERT_ARG_CHECKED(JSFunction, function, 0); function->shared()->set_optimization_disabled(true); return isolate->heap()->undefined_value(); @@ -8736,7 +8652,7 @@ RUNTIME_FUNCTION(Runtime_GetOptimizationStatus) { sync_with_compiler_thread) { while (function->IsInOptimizationQueue()) { isolate->optimizing_compiler_thread()->InstallOptimizedFunctions(); - OS::Sleep(50); + base::OS::Sleep(50); } } if (FLAG_always_opt) { @@ -8748,13 +8664,16 @@ RUNTIME_FUNCTION(Runtime_GetOptimizationStatus) { if (FLAG_deopt_every_n_times) { return Smi::FromInt(6); // 6 == "maybe deopted". } + if (function->IsOptimized() && function->code()->is_turbofanned()) { + return Smi::FromInt(7); // 7 == "TurboFan compiler". + } return function->IsOptimized() ? Smi::FromInt(1) // 1 == "yes". : Smi::FromInt(2); // 2 == "no". 
} RUNTIME_FUNCTION(Runtime_UnblockConcurrentRecompilation) { - ASSERT(args.length() == 0); + DCHECK(args.length() == 0); RUNTIME_ASSERT(FLAG_block_concurrent_recompilation); RUNTIME_ASSERT(isolate->concurrent_recompilation_enabled()); isolate->optimizing_compiler_thread()->Unblock(); @@ -8764,7 +8683,7 @@ RUNTIME_FUNCTION(Runtime_UnblockConcurrentRecompilation) { RUNTIME_FUNCTION(Runtime_GetOptimizationCount) { HandleScope scope(isolate); - ASSERT(args.length() == 1); + DCHECK(args.length() == 1); CONVERT_ARG_HANDLE_CHECKED(JSFunction, function, 0); return Smi::FromInt(function->shared()->opt_count()); } @@ -8791,12 +8710,14 @@ static bool IsSuitableForOnStackReplacement(Isolate* isolate, RUNTIME_FUNCTION(Runtime_CompileForOnStackReplacement) { HandleScope scope(isolate); - ASSERT(args.length() == 1); + DCHECK(args.length() == 1); CONVERT_ARG_HANDLE_CHECKED(JSFunction, function, 0); Handle<Code> caller_code(function->shared()->code()); // We're not prepared to handle a function with arguments object. - ASSERT(!function->shared()->uses_arguments()); + DCHECK(!function->shared()->uses_arguments()); + + RUNTIME_ASSERT(FLAG_use_osr); // Passing the PC in the javascript frame from the caller directly is // not GC safe, so we walk the stack to get it. @@ -8812,17 +8733,19 @@ RUNTIME_FUNCTION(Runtime_CompileForOnStackReplacement) { frame->pc() - caller_code->instruction_start()); #ifdef DEBUG - ASSERT_EQ(frame->function(), *function); - ASSERT_EQ(frame->LookupCode(), *caller_code); - ASSERT(caller_code->contains(frame->pc())); + DCHECK_EQ(frame->function(), *function); + DCHECK_EQ(frame->LookupCode(), *caller_code); + DCHECK(caller_code->contains(frame->pc())); #endif // DEBUG BailoutId ast_id = caller_code->TranslatePcOffsetToAstId(pc_offset); - ASSERT(!ast_id.IsNone()); + DCHECK(!ast_id.IsNone()); - Compiler::ConcurrencyMode mode = isolate->concurrent_osr_enabled() - ? Compiler::CONCURRENT : Compiler::NOT_CONCURRENT; + Compiler::ConcurrencyMode mode = + isolate->concurrent_osr_enabled() && + (function->shared()->ast_node_count() > 512) ? 
Compiler::CONCURRENT + : Compiler::NOT_CONCURRENT; Handle<Code> result = Handle<Code>::null(); OptimizedCompileJob* job = NULL; @@ -8874,7 +8797,7 @@ RUNTIME_FUNCTION(Runtime_CompileForOnStackReplacement) { DeoptimizationInputData::cast(result->deoptimization_data()); if (data->OsrPcOffset()->value() >= 0) { - ASSERT(BailoutId(data->OsrAstId()->value()) == ast_id); + DCHECK(BailoutId(data->OsrAstId()->value()) == ast_id); if (FLAG_trace_osr) { PrintF("[OSR - Entry at AST id %d, offset %d in optimized code]\n", ast_id.ToInt(), data->OsrPcOffset()->value()); @@ -8905,7 +8828,7 @@ RUNTIME_FUNCTION(Runtime_CompileForOnStackReplacement) { RUNTIME_FUNCTION(Runtime_SetAllocationTimeout) { SealHandleScope shs(isolate); - ASSERT(args.length() == 2 || args.length() == 3); + DCHECK(args.length() == 2 || args.length() == 3); #ifdef DEBUG CONVERT_SMI_ARG_CHECKED(interval, 0); CONVERT_SMI_ARG_CHECKED(timeout, 1); @@ -8927,7 +8850,7 @@ RUNTIME_FUNCTION(Runtime_SetAllocationTimeout) { RUNTIME_FUNCTION(Runtime_CheckIsBootstrapping) { SealHandleScope shs(isolate); - ASSERT(args.length() == 0); + DCHECK(args.length() == 0); RUNTIME_ASSERT(isolate->bootstrapper()->IsActive()); return isolate->heap()->undefined_value(); } @@ -8935,7 +8858,7 @@ RUNTIME_FUNCTION(Runtime_CheckIsBootstrapping) { RUNTIME_FUNCTION(Runtime_GetRootNaN) { SealHandleScope shs(isolate); - ASSERT(args.length() == 0); + DCHECK(args.length() == 0); RUNTIME_ASSERT(isolate->bootstrapper()->IsActive()); return isolate->heap()->nan_value(); } @@ -8943,7 +8866,7 @@ RUNTIME_FUNCTION(Runtime_GetRootNaN) { RUNTIME_FUNCTION(Runtime_Call) { HandleScope scope(isolate); - ASSERT(args.length() >= 2); + DCHECK(args.length() >= 2); int argc = args.length() - 2; CONVERT_ARG_CHECKED(JSReceiver, fun, argc + 1); Object* receiver = args[0]; @@ -8975,7 +8898,7 @@ RUNTIME_FUNCTION(Runtime_Call) { RUNTIME_FUNCTION(Runtime_Apply) { HandleScope scope(isolate); - ASSERT(args.length() == 5); + DCHECK(args.length() == 5); CONVERT_ARG_HANDLE_CHECKED(JSReceiver, fun, 0); CONVERT_ARG_HANDLE_CHECKED(Object, receiver, 1); CONVERT_ARG_HANDLE_CHECKED(JSObject, arguments, 2); @@ -9014,7 +8937,7 @@ RUNTIME_FUNCTION(Runtime_Apply) { RUNTIME_FUNCTION(Runtime_GetFunctionDelegate) { HandleScope scope(isolate); - ASSERT(args.length() == 1); + DCHECK(args.length() == 1); CONVERT_ARG_HANDLE_CHECKED(Object, object, 0); RUNTIME_ASSERT(!object->IsJSFunction()); return *Execution::GetFunctionDelegate(isolate, object); @@ -9023,42 +8946,44 @@ RUNTIME_FUNCTION(Runtime_GetFunctionDelegate) { RUNTIME_FUNCTION(Runtime_GetConstructorDelegate) { HandleScope scope(isolate); - ASSERT(args.length() == 1); + DCHECK(args.length() == 1); CONVERT_ARG_HANDLE_CHECKED(Object, object, 0); RUNTIME_ASSERT(!object->IsJSFunction()); return *Execution::GetConstructorDelegate(isolate, object); } -RUNTIME_FUNCTION(RuntimeHidden_NewGlobalContext) { +RUNTIME_FUNCTION(Runtime_NewGlobalContext) { HandleScope scope(isolate); - ASSERT(args.length() == 2); + DCHECK(args.length() == 2); CONVERT_ARG_HANDLE_CHECKED(JSFunction, function, 0); CONVERT_ARG_HANDLE_CHECKED(ScopeInfo, scope_info, 1); Handle<Context> result = isolate->factory()->NewGlobalContext(function, scope_info); - ASSERT(function->context() == isolate->context()); - ASSERT(function->context()->global_object() == result->global_object()); + DCHECK(function->context() == isolate->context()); + DCHECK(function->context()->global_object() == result->global_object()); result->global_object()->set_global_context(*result); return *result; } 
-RUNTIME_FUNCTION(RuntimeHidden_NewFunctionContext) { +RUNTIME_FUNCTION(Runtime_NewFunctionContext) { HandleScope scope(isolate); - ASSERT(args.length() == 1); + DCHECK(args.length() == 1); CONVERT_ARG_HANDLE_CHECKED(JSFunction, function, 0); + + DCHECK(function->context() == isolate->context()); int length = function->shared()->scope_info()->ContextLength(); return *isolate->factory()->NewFunctionContext(length, function); } -RUNTIME_FUNCTION(RuntimeHidden_PushWithContext) { +RUNTIME_FUNCTION(Runtime_PushWithContext) { HandleScope scope(isolate); - ASSERT(args.length() == 2); + DCHECK(args.length() == 2); Handle<JSReceiver> extension_object; if (args[0]->IsJSReceiver()) { extension_object = args.at<JSReceiver>(0); @@ -9080,7 +9005,7 @@ RUNTIME_FUNCTION(RuntimeHidden_PushWithContext) { // A smi sentinel indicates a context nested inside global code rather // than some function. There is a canonical empty function that can be // gotten from the native context. - function = handle(isolate->context()->native_context()->closure()); + function = handle(isolate->native_context()->closure()); } else { function = args.at<JSFunction>(1); } @@ -9093,9 +9018,9 @@ RUNTIME_FUNCTION(RuntimeHidden_PushWithContext) { } -RUNTIME_FUNCTION(RuntimeHidden_PushCatchContext) { +RUNTIME_FUNCTION(Runtime_PushCatchContext) { HandleScope scope(isolate); - ASSERT(args.length() == 3); + DCHECK(args.length() == 3); CONVERT_ARG_HANDLE_CHECKED(String, name, 0); CONVERT_ARG_HANDLE_CHECKED(Object, thrown_object, 1); Handle<JSFunction> function; @@ -9103,7 +9028,7 @@ RUNTIME_FUNCTION(RuntimeHidden_PushCatchContext) { // A smi sentinel indicates a context nested inside global code rather // than some function. There is a canonical empty function that can be // gotten from the native context. - function = handle(isolate->context()->native_context()->closure()); + function = handle(isolate->native_context()->closure()); } else { function = args.at<JSFunction>(2); } @@ -9115,16 +9040,16 @@ RUNTIME_FUNCTION(RuntimeHidden_PushCatchContext) { } -RUNTIME_FUNCTION(RuntimeHidden_PushBlockContext) { +RUNTIME_FUNCTION(Runtime_PushBlockContext) { HandleScope scope(isolate); - ASSERT(args.length() == 2); + DCHECK(args.length() == 2); CONVERT_ARG_HANDLE_CHECKED(ScopeInfo, scope_info, 0); Handle<JSFunction> function; if (args[1]->IsSmi()) { // A smi sentinel indicates a context nested inside global code rather // than some function. There is a canonical empty function that can be // gotten from the native context. - function = handle(isolate->context()->native_context()->closure()); + function = handle(isolate->native_context()->closure()); } else { function = args.at<JSFunction>(1); } @@ -9138,22 +9063,22 @@ RUNTIME_FUNCTION(RuntimeHidden_PushBlockContext) { RUNTIME_FUNCTION(Runtime_IsJSModule) { SealHandleScope shs(isolate); - ASSERT(args.length() == 1); + DCHECK(args.length() == 1); CONVERT_ARG_CHECKED(Object, obj, 0); return isolate->heap()->ToBoolean(obj->IsJSModule()); } -RUNTIME_FUNCTION(RuntimeHidden_PushModuleContext) { +RUNTIME_FUNCTION(Runtime_PushModuleContext) { SealHandleScope shs(isolate); - ASSERT(args.length() == 2); + DCHECK(args.length() == 2); CONVERT_SMI_ARG_CHECKED(index, 0); if (!args[1]->IsScopeInfo()) { // Module already initialized. Find hosting context and retrieve context. 
Context* host = Context::cast(isolate->context())->global_context(); Context* context = Context::cast(host->get(index)); - ASSERT(context->previous() == isolate->context()); + DCHECK(context->previous() == isolate->context()); isolate->set_context(context); return context; } @@ -9179,9 +9104,9 @@ RUNTIME_FUNCTION(RuntimeHidden_PushModuleContext) { } -RUNTIME_FUNCTION(RuntimeHidden_DeclareModules) { +RUNTIME_FUNCTION(Runtime_DeclareModules) { HandleScope scope(isolate); - ASSERT(args.length() == 1); + DCHECK(args.length() == 1); CONVERT_ARG_HANDLE_CHECKED(FixedArray, descriptions, 0); Context* host_context = isolate->context(); @@ -9206,14 +9131,15 @@ RUNTIME_FUNCTION(RuntimeHidden_DeclareModules) { Accessors::MakeModuleExport(name, index, attr); Handle<Object> result = JSObject::SetAccessor(module, info).ToHandleChecked(); - ASSERT(!result->IsUndefined()); + DCHECK(!result->IsUndefined()); USE(result); break; } case MODULE: { Object* referenced_context = Context::cast(host_context)->get(index); Handle<JSModule> value(Context::cast(referenced_context)->module()); - JSReceiver::SetProperty(module, name, value, FROZEN, STRICT).Assert(); + JSObject::SetOwnPropertyIgnoreAttributes(module, name, value, FROZEN) + .Assert(); break; } case INTERNAL: @@ -9228,14 +9154,14 @@ RUNTIME_FUNCTION(RuntimeHidden_DeclareModules) { JSObject::PreventExtensions(module).Assert(); } - ASSERT(!isolate->has_pending_exception()); + DCHECK(!isolate->has_pending_exception()); return isolate->heap()->undefined_value(); } -RUNTIME_FUNCTION(RuntimeHidden_DeleteContextSlot) { +RUNTIME_FUNCTION(Runtime_DeleteLookupSlot) { HandleScope scope(isolate); - ASSERT(args.length() == 2); + DCHECK(args.length() == 2); CONVERT_ARG_HANDLE_CHECKED(Context, context, 0); CONVERT_ARG_HANDLE_CHECKED(String, name, 1); @@ -9293,6 +9219,23 @@ static inline ObjectPair MakePair(Object* x, Object* y) { // In Win64 they are assigned to a hidden first argument. return result; } +#elif V8_TARGET_ARCH_X64 && V8_TARGET_ARCH_32_BIT +// For x32 a 128-bit struct return is done as rax and rdx from the ObjectPair +// are used in the full codegen and Crankshaft compiler. An alternative is +// using uint64_t and modifying full codegen and Crankshaft compiler. +struct ObjectPair { + Object* x; + uint32_t x_upper; + Object* y; + uint32_t y_upper; +}; + + +static inline ObjectPair MakePair(Object* x, Object* y) { + ObjectPair result = {x, 0, y, 0}; + // Pointers x and y returned in rax and rdx, in x32-abi. + return result; +} #else typedef uint64_t ObjectPair; static inline ObjectPair MakePair(Object* x, Object* y) { @@ -9311,7 +9254,7 @@ static inline ObjectPair MakePair(Object* x, Object* y) { static Object* ComputeReceiverForNonGlobal(Isolate* isolate, JSObject* holder) { - ASSERT(!holder->IsGlobalObject()); + DCHECK(!holder->IsGlobalObject()); Context* top = isolate->context(); // Get the context extension function. 
JSFunction* context_extension_function = @@ -9329,11 +9272,10 @@ static Object* ComputeReceiverForNonGlobal(Isolate* isolate, } -static ObjectPair LoadContextSlotHelper(Arguments args, - Isolate* isolate, - bool throw_error) { +static ObjectPair LoadLookupSlotHelper(Arguments args, Isolate* isolate, + bool throw_error) { HandleScope scope(isolate); - ASSERT_EQ(2, args.length()); + DCHECK_EQ(2, args.length()); if (!args[0]->IsContext() || !args[1]->IsString()) { return MakePair(isolate->ThrowIllegalOperation(), NULL); @@ -9356,7 +9298,7 @@ static ObjectPair LoadContextSlotHelper(Arguments args, // If the index is non-negative, the slot has been found in a context. if (index >= 0) { - ASSERT(holder->IsContext()); + DCHECK(holder->IsContext()); // If the "property" we were looking for is a local variable, the // receiver is the global object; see ECMA-262, 3rd., 10.1.6 and 10.2.3. Handle<Object> receiver = isolate->factory()->undefined_value(); @@ -9375,11 +9317,11 @@ static ObjectPair LoadContextSlotHelper(Arguments args, case MUTABLE_IS_INITIALIZED: case IMMUTABLE_IS_INITIALIZED: case IMMUTABLE_IS_INITIALIZED_HARMONY: - ASSERT(!value->IsTheHole()); + DCHECK(!value->IsTheHole()); return MakePair(value, *receiver); case IMMUTABLE_CHECK_INITIALIZED: if (value->IsTheHole()) { - ASSERT((attributes & READ_ONLY) != 0); + DCHECK((attributes & READ_ONLY) != 0); value = isolate->heap()->undefined_value(); } return MakePair(value, *receiver); @@ -9394,7 +9336,13 @@ static ObjectPair LoadContextSlotHelper(Arguments args, // property from it. if (!holder.is_null()) { Handle<JSReceiver> object = Handle<JSReceiver>::cast(holder); - ASSERT(object->IsJSProxy() || JSReceiver::HasProperty(object, name)); +#ifdef DEBUG + if (!object->IsJSProxy()) { + Maybe<bool> maybe = JSReceiver::HasProperty(object, name); + DCHECK(maybe.has_value); + DCHECK(maybe.value); + } +#endif // GetProperty below can cause GC. Handle<Object> receiver_handle( object->IsGlobalObject() @@ -9427,19 +9375,19 @@ static ObjectPair LoadContextSlotHelper(Arguments args, } -RUNTIME_FUNCTION_RETURN_PAIR(RuntimeHidden_LoadContextSlot) { - return LoadContextSlotHelper(args, isolate, true); +RUNTIME_FUNCTION_RETURN_PAIR(Runtime_LoadLookupSlot) { + return LoadLookupSlotHelper(args, isolate, true); } -RUNTIME_FUNCTION_RETURN_PAIR(RuntimeHidden_LoadContextSlotNoReferenceError) { - return LoadContextSlotHelper(args, isolate, false); +RUNTIME_FUNCTION_RETURN_PAIR(Runtime_LoadLookupSlotNoReferenceError) { + return LoadLookupSlotHelper(args, isolate, false); } -RUNTIME_FUNCTION(RuntimeHidden_StoreContextSlot) { +RUNTIME_FUNCTION(Runtime_StoreLookupSlot) { HandleScope scope(isolate); - ASSERT(args.length() == 4); + DCHECK(args.length() == 4); CONVERT_ARG_HANDLE_CHECKED(Object, value, 0); CONVERT_ARG_HANDLE_CHECKED(Context, context, 1); @@ -9455,22 +9403,13 @@ RUNTIME_FUNCTION(RuntimeHidden_StoreContextSlot) { &index, &attributes, &binding_flags); + // In case of JSProxy, an exception might have been thrown. if (isolate->has_pending_exception()) return isolate->heap()->exception(); + // The property was found in a context slot. if (index >= 0) { - // The property was found in a context slot. - Handle<Context> context = Handle<Context>::cast(holder); - if (binding_flags == MUTABLE_CHECK_INITIALIZED && - context->get(index)->IsTheHole()) { - Handle<Object> error = - isolate->factory()->NewReferenceError("not_defined", - HandleVector(&name, 1)); - return isolate->Throw(*error); - } - // Ignore if read_only variable. 
if ((attributes & READ_ONLY) == 0) { - // Context is a fixed array and set cannot fail. - context->set(index, *value); + Handle<Context>::cast(holder)->set(index, *value); } else if (strict_mode == STRICT) { // Setting read only property in strict mode. Handle<Object> error = @@ -9485,69 +9424,52 @@ RUNTIME_FUNCTION(RuntimeHidden_StoreContextSlot) { // context extension object, a property of the subject of a with, or a // property of the global object. Handle<JSReceiver> object; - - if (!holder.is_null()) { + if (attributes != ABSENT) { // The property exists on the holder. object = Handle<JSReceiver>::cast(holder); + } else if (strict_mode == STRICT) { + // If absent in strict mode: throw. + Handle<Object> error = isolate->factory()->NewReferenceError( + "not_defined", HandleVector(&name, 1)); + return isolate->Throw(*error); } else { - // The property was not found. - ASSERT(attributes == ABSENT); - - if (strict_mode == STRICT) { - // Throw in strict mode (assignment to undefined variable). - Handle<Object> error = - isolate->factory()->NewReferenceError( - "not_defined", HandleVector(&name, 1)); - return isolate->Throw(*error); - } - // In sloppy mode, the property is added to the global object. - attributes = NONE; - object = Handle<JSReceiver>(isolate->context()->global_object()); + // If absent in sloppy mode: add the property to the global object. + object = Handle<JSReceiver>(context->global_object()); } - // Set the property if it's not read only or doesn't yet exist. - if ((attributes & READ_ONLY) == 0 || - (JSReceiver::GetLocalPropertyAttribute(object, name) == ABSENT)) { - RETURN_FAILURE_ON_EXCEPTION( - isolate, - JSReceiver::SetProperty(object, name, value, NONE, strict_mode)); - } else if (strict_mode == STRICT && (attributes & READ_ONLY) != 0) { - // Setting read only property in strict mode. 
- Handle<Object> error = - isolate->factory()->NewTypeError( - "strict_cannot_assign", HandleVector(&name, 1)); - return isolate->Throw(*error); - } + RETURN_FAILURE_ON_EXCEPTION( + isolate, Object::SetProperty(object, name, value, strict_mode)); + return *value; } -RUNTIME_FUNCTION(RuntimeHidden_Throw) { +RUNTIME_FUNCTION(Runtime_Throw) { HandleScope scope(isolate); - ASSERT(args.length() == 1); + DCHECK(args.length() == 1); return isolate->Throw(args[0]); } -RUNTIME_FUNCTION(RuntimeHidden_ReThrow) { +RUNTIME_FUNCTION(Runtime_ReThrow) { HandleScope scope(isolate); - ASSERT(args.length() == 1); + DCHECK(args.length() == 1); return isolate->ReThrow(args[0]); } -RUNTIME_FUNCTION(RuntimeHidden_PromoteScheduledException) { +RUNTIME_FUNCTION(Runtime_PromoteScheduledException) { SealHandleScope shs(isolate); - ASSERT(args.length() == 0); + DCHECK(args.length() == 0); return isolate->PromoteScheduledException(); } -RUNTIME_FUNCTION(RuntimeHidden_ThrowReferenceError) { +RUNTIME_FUNCTION(Runtime_ThrowReferenceError) { HandleScope scope(isolate); - ASSERT(args.length() == 1); + DCHECK(args.length() == 1); CONVERT_ARG_HANDLE_CHECKED(Object, name, 0); Handle<Object> reference_error = isolate->factory()->NewReferenceError("not_defined", @@ -9556,46 +9478,36 @@ RUNTIME_FUNCTION(RuntimeHidden_ThrowReferenceError) { } -RUNTIME_FUNCTION(RuntimeHidden_ThrowNotDateError) { +RUNTIME_FUNCTION(Runtime_ThrowNotDateError) { HandleScope scope(isolate); - ASSERT(args.length() == 0); + DCHECK(args.length() == 0); return isolate->Throw(*isolate->factory()->NewTypeError( "not_date_object", HandleVector<Object>(NULL, 0))); } -RUNTIME_FUNCTION(RuntimeHidden_ThrowMessage) { - HandleScope scope(isolate); - ASSERT(args.length() == 1); - CONVERT_SMI_ARG_CHECKED(message_id, 0); - const char* message = GetBailoutReason( - static_cast<BailoutReason>(message_id)); - Handle<String> message_handle = - isolate->factory()->NewStringFromAsciiChecked(message); - return isolate->Throw(*message_handle); -} - - -RUNTIME_FUNCTION(RuntimeHidden_StackGuard) { +RUNTIME_FUNCTION(Runtime_StackGuard) { SealHandleScope shs(isolate); - ASSERT(args.length() == 0); + DCHECK(args.length() == 0); // First check if this is a real stack overflow. - if (isolate->stack_guard()->IsStackOverflow()) { + StackLimitCheck check(isolate); + if (check.JsHasOverflowed()) { return isolate->StackOverflow(); } - return Execution::HandleStackGuardInterrupt(isolate); + return isolate->stack_guard()->HandleInterrupts(); } -RUNTIME_FUNCTION(RuntimeHidden_TryInstallOptimizedCode) { +RUNTIME_FUNCTION(Runtime_TryInstallOptimizedCode) { HandleScope scope(isolate); - ASSERT(args.length() == 1); + DCHECK(args.length() == 1); CONVERT_ARG_HANDLE_CHECKED(JSFunction, function, 0); // First check if this is a real stack overflow. 
- if (isolate->stack_guard()->IsStackOverflow()) { + StackLimitCheck check(isolate); + if (check.JsHasOverflowed()) { SealHandleScope shs(isolate); return isolate->StackOverflow(); } @@ -9606,10 +9518,10 @@ RUNTIME_FUNCTION(RuntimeHidden_TryInstallOptimizedCode) { } -RUNTIME_FUNCTION(RuntimeHidden_Interrupt) { +RUNTIME_FUNCTION(Runtime_Interrupt) { SealHandleScope shs(isolate); - ASSERT(args.length() == 0); - return Execution::HandleStackGuardInterrupt(isolate); + DCHECK(args.length() == 0); + return isolate->stack_guard()->HandleInterrupts(); } @@ -9644,7 +9556,7 @@ static void PrintTransition(Isolate* isolate, Object* result) { RUNTIME_FUNCTION(Runtime_TraceEnter) { SealHandleScope shs(isolate); - ASSERT(args.length() == 0); + DCHECK(args.length() == 0); PrintTransition(isolate, NULL); return isolate->heap()->undefined_value(); } @@ -9652,7 +9564,7 @@ RUNTIME_FUNCTION(Runtime_TraceEnter) { RUNTIME_FUNCTION(Runtime_TraceExit) { SealHandleScope shs(isolate); - ASSERT(args.length() == 1); + DCHECK(args.length() == 1); CONVERT_ARG_CHECKED(Object, obj, 0); PrintTransition(isolate, obj); return obj; // return TOS @@ -9661,30 +9573,30 @@ RUNTIME_FUNCTION(Runtime_TraceExit) { RUNTIME_FUNCTION(Runtime_DebugPrint) { SealHandleScope shs(isolate); - ASSERT(args.length() == 1); + DCHECK(args.length() == 1); + OFStream os(stdout); #ifdef DEBUG if (args[0]->IsString()) { // If we have a string, assume it's a code "marker" // and print some interesting cpu debugging info. JavaScriptFrameIterator it(isolate); JavaScriptFrame* frame = it.frame(); - PrintF("fp = %p, sp = %p, caller_sp = %p: ", - frame->fp(), frame->sp(), frame->caller_sp()); + os << "fp = " << frame->fp() << ", sp = " << frame->sp() + << ", caller_sp = " << frame->caller_sp() << ": "; } else { - PrintF("DebugPrint: "); + os << "DebugPrint: "; } - args[0]->Print(); + args[0]->Print(os); if (args[0]->IsHeapObject()) { - PrintF("\n"); - HeapObject::cast(args[0])->map()->Print(); + os << "\n"; + HeapObject::cast(args[0])->map()->Print(os); } #else // ShortPrint is available in release mode. Print is not. - args[0]->ShortPrint(); + os << Brief(args[0]); #endif - PrintF("\n"); - Flush(); + os << endl; return args[0]; // return TOS } @@ -9692,7 +9604,7 @@ RUNTIME_FUNCTION(Runtime_DebugPrint) { RUNTIME_FUNCTION(Runtime_DebugTrace) { SealHandleScope shs(isolate); - ASSERT(args.length() == 0); + DCHECK(args.length() == 0); isolate->PrintStack(stdout); return isolate->heap()->undefined_value(); } @@ -9700,20 +9612,27 @@ RUNTIME_FUNCTION(Runtime_DebugTrace) { RUNTIME_FUNCTION(Runtime_DateCurrentTime) { HandleScope scope(isolate); - ASSERT(args.length() == 0); + DCHECK(args.length() == 0); + if (FLAG_log_timer_events) LOG(isolate, CurrentTimeEvent()); // According to ECMA-262, section 15.9.1, page 117, the precision of // the number in a Date object representing a particular instant in // time is milliseconds. Therefore, we floor the result of getting // the OS time. 
- double millis = std::floor(OS::TimeCurrentMillis()); + double millis; + if (FLAG_verify_predictable) { + millis = 1388534400000.0; // Jan 1 2014 00:00:00 GMT+0000 + millis += Floor(isolate->heap()->synthetic_time()); + } else { + millis = Floor(base::OS::TimeCurrentMillis()); + } return *isolate->factory()->NewNumber(millis); } RUNTIME_FUNCTION(Runtime_DateParseString) { HandleScope scope(isolate); - ASSERT(args.length() == 2); + DCHECK(args.length() == 2); CONVERT_ARG_HANDLE_CHECKED(String, str, 0); CONVERT_ARG_HANDLE_CHECKED(JSArray, output, 1); @@ -9733,7 +9652,7 @@ RUNTIME_FUNCTION(Runtime_DateParseString) { *output_array, isolate->unicode_cache()); } else { - ASSERT(str_content.IsTwoByte()); + DCHECK(str_content.IsTwoByte()); result = DateParser::Parse(str_content.ToUC16Vector(), *output_array, isolate->unicode_cache()); @@ -9749,9 +9668,11 @@ RUNTIME_FUNCTION(Runtime_DateParseString) { RUNTIME_FUNCTION(Runtime_DateLocalTimezone) { HandleScope scope(isolate); - ASSERT(args.length() == 1); + DCHECK(args.length() == 1); CONVERT_DOUBLE_ARG_CHECKED(x, 0); + RUNTIME_ASSERT(x >= -DateCache::kMaxTimeBeforeUTCInMs && + x <= DateCache::kMaxTimeBeforeUTCInMs); const char* zone = isolate->date_cache()->LocalTimezone(static_cast<int64_t>(x)); Handle<String> result = isolate->factory()->NewStringFromUtf8( @@ -9762,9 +9683,11 @@ RUNTIME_FUNCTION(Runtime_DateLocalTimezone) { RUNTIME_FUNCTION(Runtime_DateToUTC) { HandleScope scope(isolate); - ASSERT(args.length() == 1); + DCHECK(args.length() == 1); CONVERT_DOUBLE_ARG_CHECKED(x, 0); + RUNTIME_ASSERT(x >= -DateCache::kMaxTimeBeforeUTCInMs && + x <= DateCache::kMaxTimeBeforeUTCInMs); int64_t time = isolate->date_cache()->ToUTC(static_cast<int64_t>(x)); return *isolate->factory()->NewNumber(static_cast<double>(time)); @@ -9773,7 +9696,7 @@ RUNTIME_FUNCTION(Runtime_DateToUTC) { RUNTIME_FUNCTION(Runtime_DateCacheVersion) { HandleScope hs(isolate); - ASSERT(args.length() == 0); + DCHECK(args.length() == 0); if (!isolate->eternal_handles()->Exists(EternalHandles::DATE_CACHE_VERSION)) { Handle<FixedArray> date_cache_version = isolate->factory()->NewFixedArray(1, TENURED); @@ -9792,18 +9715,18 @@ RUNTIME_FUNCTION(Runtime_DateCacheVersion) { } -RUNTIME_FUNCTION(Runtime_GlobalReceiver) { +RUNTIME_FUNCTION(Runtime_GlobalProxy) { SealHandleScope shs(isolate); - ASSERT(args.length() == 1); + DCHECK(args.length() == 1); CONVERT_ARG_CHECKED(Object, global, 0); if (!global->IsJSGlobalObject()) return isolate->heap()->null_value(); - return JSGlobalObject::cast(global)->global_receiver(); + return JSGlobalObject::cast(global)->global_proxy(); } RUNTIME_FUNCTION(Runtime_IsAttachedGlobal) { SealHandleScope shs(isolate); - ASSERT(args.length() == 1); + DCHECK(args.length() == 1); CONVERT_ARG_CHECKED(Object, global, 0); if (!global->IsJSGlobalObject()) return isolate->heap()->false_value(); return isolate->heap()->ToBoolean( @@ -9813,7 +9736,7 @@ RUNTIME_FUNCTION(Runtime_IsAttachedGlobal) { RUNTIME_FUNCTION(Runtime_ParseJson) { HandleScope scope(isolate); - ASSERT(args.length() == 1); + DCHECK(args.length() == 1); CONVERT_ARG_HANDLE_CHECKED(String, source, 0); source = String::Flatten(source); @@ -9829,7 +9752,7 @@ RUNTIME_FUNCTION(Runtime_ParseJson) { bool CodeGenerationFromStringsAllowed(Isolate* isolate, Handle<Context> context) { - ASSERT(context->allow_code_gen_from_strings()->IsFalse()); + DCHECK(context->allow_code_gen_from_strings()->IsFalse()); // Check with callback if set. 
AllowCodeGenerationFromStringsCallback callback = isolate->allow_code_gen_callback(); @@ -9844,14 +9767,72 @@ bool CodeGenerationFromStringsAllowed(Isolate* isolate, } +// Walk up the stack expecting: +// - Runtime_CompileString +// - JSFunction callee (eval, Function constructor, etc) +// - call() (maybe) +// - apply() (maybe) +// - bind() (maybe) +// - JSFunction caller (maybe) +// +// return true if the caller has the same security token as the callee +// or if an exit frame was hit, in which case allow it through, as it could +// have come through the api. +static bool TokensMatchForCompileString(Isolate* isolate) { + MaybeHandle<JSFunction> callee; + bool exit_handled = true; + bool tokens_match = true; + bool done = false; + for (StackFrameIterator it(isolate); !it.done() && !done; it.Advance()) { + StackFrame* raw_frame = it.frame(); + if (!raw_frame->is_java_script()) { + if (raw_frame->is_exit()) exit_handled = false; + continue; + } + JavaScriptFrame* outer_frame = JavaScriptFrame::cast(raw_frame); + List<FrameSummary> frames(FLAG_max_inlining_levels + 1); + outer_frame->Summarize(&frames); + for (int i = frames.length() - 1; i >= 0 && !done; --i) { + FrameSummary& frame = frames[i]; + Handle<JSFunction> fun = frame.function(); + // Capture the callee function. + if (callee.is_null()) { + callee = fun; + exit_handled = true; + continue; + } + // Exit condition. + Handle<Context> context(callee.ToHandleChecked()->context()); + if (!fun->context()->HasSameSecurityTokenAs(*context)) { + tokens_match = false; + done = true; + continue; + } + // Skip bound functions in correct origin. + if (fun->shared()->bound()) { + exit_handled = true; + continue; + } + done = true; + } + } + return !exit_handled || tokens_match; +} + + RUNTIME_FUNCTION(Runtime_CompileString) { HandleScope scope(isolate); - ASSERT(args.length() == 2); + DCHECK(args.length() == 2); CONVERT_ARG_HANDLE_CHECKED(String, source, 0); CONVERT_BOOLEAN_ARG_CHECKED(function_literal_only, 1); // Extract native context. - Handle<Context> context(isolate->context()->native_context()); + Handle<Context> context(isolate->native_context()); + + // Filter cross security context calls. + if (!TokensMatchForCompileString(isolate)) { + return isolate->heap()->undefined_value(); + } // Check if native context allows code generation from // strings. Throw an exception if it doesn't. 
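// TokensMatchForCompileString above guards eval()/Function() compilation by
// walking the JavaScript stack: the first JS frame is the callee, every
// caller's context must carry the same security token, bound functions in
// the callee's origin are skipped, and an exit frame that was never followed
// by a JS callee means the call arrived through the embedder API and is
// allowed through. The sketch below models that control flow with plain
// data; Frame, FrameKind and the integer token are illustrative stand-ins
// for V8's StackFrameIterator/FrameSummary machinery, not real V8 types.
#include <optional>
#include <vector>

enum class FrameKind { kJavaScript, kExit, kInternal };

struct Frame {
  FrameKind kind;
  int security_token;  // stand-in for the frame context's security token
  bool is_bound;       // true for bind() wrappers, which are skipped
};

// Frames are visited innermost first, mirroring the iterator in the patch.
bool TokensMatchSketch(const std::vector<Frame>& stack) {
  std::optional<int> callee_token;
  bool exit_handled = true;
  bool tokens_match = true;
  for (const Frame& frame : stack) {
    if (frame.kind != FrameKind::kJavaScript) {
      if (frame.kind == FrameKind::kExit) exit_handled = false;
      continue;
    }
    if (!callee_token) {  // first JS frame: capture the callee (eval etc.)
      callee_token = frame.security_token;
      exit_handled = true;
      continue;
    }
    if (frame.security_token != *callee_token) {  // caller in foreign origin
      tokens_match = false;
      break;
    }
    if (frame.is_bound) {  // bound function in the correct origin: keep going
      exit_handled = true;
      continue;
    }
    break;  // genuine caller found and its token matched
  }
  // An unhandled exit frame means the call came in through the API: allow it.
  return !exit_handled || tokens_match;
}

int main() {
  // eval invoked directly by a same-origin caller: tokens match, allowed.
  std::vector<Frame> stack = {{FrameKind::kJavaScript, 1, false},
                              {FrameKind::kJavaScript, 1, false}};
  return TokensMatchSketch(stack) ? 0 : 1;
}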
@@ -9907,9 +9888,9 @@ static ObjectPair CompileGlobalEval(Isolate* isolate, } -RUNTIME_FUNCTION_RETURN_PAIR(RuntimeHidden_ResolvePossiblyDirectEval) { +RUNTIME_FUNCTION_RETURN_PAIR(Runtime_ResolvePossiblyDirectEval) { HandleScope scope(isolate); - ASSERT(args.length() == 5); + DCHECK(args.length() == 5); Handle<Object> callee = args.at<Object>(0); @@ -9923,10 +9904,10 @@ RUNTIME_FUNCTION_RETURN_PAIR(RuntimeHidden_ResolvePossiblyDirectEval) { return MakePair(*callee, isolate->heap()->undefined_value()); } - ASSERT(args[3]->IsSmi()); - ASSERT(args.smi_at(3) == SLOPPY || args.smi_at(3) == STRICT); + DCHECK(args[3]->IsSmi()); + DCHECK(args.smi_at(3) == SLOPPY || args.smi_at(3) == STRICT); StrictMode strict_mode = static_cast<StrictMode>(args.smi_at(3)); - ASSERT(args[4]->IsSmi()); + DCHECK(args[4]->IsSmi()); return CompileGlobalEval(isolate, args.at<String>(1), args.at<Object>(2), @@ -9935,9 +9916,9 @@ RUNTIME_FUNCTION_RETURN_PAIR(RuntimeHidden_ResolvePossiblyDirectEval) { } -RUNTIME_FUNCTION(RuntimeHidden_AllocateInNewSpace) { +RUNTIME_FUNCTION(Runtime_AllocateInNewSpace) { HandleScope scope(isolate); - ASSERT(args.length() == 1); + DCHECK(args.length() == 1); CONVERT_SMI_ARG_CHECKED(size, 0); RUNTIME_ASSERT(IsAligned(size, kPointerSize)); RUNTIME_ASSERT(size > 0); @@ -9946,9 +9927,9 @@ RUNTIME_FUNCTION(RuntimeHidden_AllocateInNewSpace) { } -RUNTIME_FUNCTION(RuntimeHidden_AllocateInTargetSpace) { +RUNTIME_FUNCTION(Runtime_AllocateInTargetSpace) { HandleScope scope(isolate); - ASSERT(args.length() == 2); + DCHECK(args.length() == 2); CONVERT_SMI_ARG_CHECKED(size, 0); CONVERT_SMI_ARG_CHECKED(flags, 1); RUNTIME_ASSERT(IsAligned(size, kPointerSize)); @@ -9965,7 +9946,7 @@ RUNTIME_FUNCTION(RuntimeHidden_AllocateInTargetSpace) { // false otherwise. RUNTIME_FUNCTION(Runtime_PushIfAbsent) { HandleScope scope(isolate); - ASSERT(args.length() == 2); + DCHECK(args.length() == 2); CONVERT_ARG_HANDLE_CHECKED(JSArray, array, 0); CONVERT_ARG_HANDLE_CHECKED(JSReceiver, element, 1); RUNTIME_ASSERT(array->HasFastSmiOrObjectElements()); @@ -10029,7 +10010,7 @@ class ArrayConcatVisitor { SetDictionaryMode(); // Fall-through to dictionary mode. } - ASSERT(!fast_elements_); + DCHECK(!fast_elements_); Handle<SeededNumberDictionary> dict( SeededNumberDictionary::cast(*storage_)); Handle<SeededNumberDictionary> result = @@ -10077,7 +10058,7 @@ class ArrayConcatVisitor { private: // Convert storage to dictionary mode. void SetDictionaryMode() { - ASSERT(fast_elements_); + DCHECK(fast_elements_); Handle<FixedArray> current_storage(*storage_); Handle<SeededNumberDictionary> slow_storage( SeededNumberDictionary::New(isolate_, current_storage->length())); @@ -10127,7 +10108,7 @@ static uint32_t EstimateElementCount(Handle<JSArray> array) { case FAST_HOLEY_ELEMENTS: { // Fast elements can't have lengths that are not representable by // a 32-bit signed integer. - ASSERT(static_cast<int32_t>(FixedArray::kMaxLength) >= 0); + DCHECK(static_cast<int32_t>(FixedArray::kMaxLength) >= 0); int fast_length = static_cast<int>(length); Handle<FixedArray> elements(FixedArray::cast(array->elements())); for (int i = 0; i < fast_length; i++) { @@ -10139,10 +10120,10 @@ static uint32_t EstimateElementCount(Handle<JSArray> array) { case FAST_HOLEY_DOUBLE_ELEMENTS: { // Fast elements can't have lengths that are not representable by // a 32-bit signed integer. 
- ASSERT(static_cast<int32_t>(FixedDoubleArray::kMaxLength) >= 0); + DCHECK(static_cast<int32_t>(FixedDoubleArray::kMaxLength) >= 0); int fast_length = static_cast<int>(length); if (array->elements()->IsFixedArray()) { - ASSERT(FixedArray::cast(array->elements())->length() == 0); + DCHECK(FixedArray::cast(array->elements())->length() == 0); break; } Handle<FixedDoubleArray> elements( @@ -10191,7 +10172,7 @@ static void IterateExternalArrayElements(Isolate* isolate, ExternalArrayClass::cast(receiver->elements())); uint32_t len = static_cast<uint32_t>(array->length()); - ASSERT(visitor != NULL); + DCHECK(visitor != NULL); if (elements_are_ints) { if (elements_are_guaranteed_smis) { for (uint32_t j = 0; j < len; j++) { @@ -10266,7 +10247,7 @@ static void CollectElementIndices(Handle<JSObject> object, HandleScope loop_scope(isolate); Handle<Object> k(dict->KeyAt(j), isolate); if (dict->IsKey(*k)) { - ASSERT(k->IsNumber()); + DCHECK(k->IsNumber()); uint32_t index = static_cast<uint32_t>(k->Number()); if (index < range) { indices->Add(index); @@ -10308,11 +10289,13 @@ static void CollectElementIndices(Handle<JSObject> object, } } - Handle<Object> prototype(object->GetPrototype(), isolate); - if (prototype->IsJSObject()) { + PrototypeIterator iter(isolate, object); + if (!iter.IsAtEnd()) { // The prototype will usually have no inherited element indices, // but we have to check. - CollectElementIndices(Handle<JSObject>::cast(prototype), range, indices); + CollectElementIndices( + Handle<JSObject>::cast(PrototypeIterator::GetCurrent(iter)), range, + indices); } } @@ -10340,20 +10323,23 @@ static bool IterateElements(Isolate* isolate, // to check the prototype for missing elements. Handle<FixedArray> elements(FixedArray::cast(receiver->elements())); int fast_length = static_cast<int>(length); - ASSERT(fast_length <= elements->length()); + DCHECK(fast_length <= elements->length()); for (int j = 0; j < fast_length; j++) { HandleScope loop_scope(isolate); Handle<Object> element_value(elements->get(j), isolate); if (!element_value->IsTheHole()) { visitor->visit(j, element_value); - } else if (JSReceiver::HasElement(receiver, j)) { - // Call GetElement on receiver, not its prototype, or getters won't - // have the correct receiver. - ASSIGN_RETURN_ON_EXCEPTION_VALUE( - isolate, element_value, - Object::GetElement(isolate, receiver, j), - false); - visitor->visit(j, element_value); + } else { + Maybe<bool> maybe = JSReceiver::HasElement(receiver, j); + if (!maybe.has_value) return false; + if (maybe.value) { + // Call GetElement on receiver, not its prototype, or getters won't + // have the correct receiver. + ASSIGN_RETURN_ON_EXCEPTION_VALUE( + isolate, element_value, + Object::GetElement(isolate, receiver, j), false); + visitor->visit(j, element_value); + } } } break; @@ -10364,10 +10350,14 @@ static bool IterateElements(Isolate* isolate, if (length == 0) break; // Run through the elements FixedArray and use HasElement and GetElement // to check the prototype for missing elements. 
+ if (receiver->elements()->IsFixedArray()) { + DCHECK(receiver->elements()->length() == 0); + break; + } Handle<FixedDoubleArray> elements( FixedDoubleArray::cast(receiver->elements())); int fast_length = static_cast<int>(length); - ASSERT(fast_length <= elements->length()); + DCHECK(fast_length <= elements->length()); for (int j = 0; j < fast_length; j++) { HandleScope loop_scope(isolate); if (!elements->is_the_hole(j)) { @@ -10375,15 +10365,18 @@ static bool IterateElements(Isolate* isolate, Handle<Object> element_value = isolate->factory()->NewNumber(double_value); visitor->visit(j, element_value); - } else if (JSReceiver::HasElement(receiver, j)) { - // Call GetElement on receiver, not its prototype, or getters won't - // have the correct receiver. - Handle<Object> element_value; - ASSIGN_RETURN_ON_EXCEPTION_VALUE( - isolate, element_value, - Object::GetElement(isolate, receiver, j), - false); - visitor->visit(j, element_value); + } else { + Maybe<bool> maybe = JSReceiver::HasElement(receiver, j); + if (!maybe.has_value) return false; + if (maybe.value) { + // Call GetElement on receiver, not its prototype, or getters won't + // have the correct receiver. + Handle<Object> element_value; + ASSIGN_RETURN_ON_EXCEPTION_VALUE( + isolate, element_value, + Object::GetElement(isolate, receiver, j), false); + visitor->visit(j, element_value); + } } } break; @@ -10479,7 +10472,7 @@ static bool IterateElements(Isolate* isolate, */ RUNTIME_FUNCTION(Runtime_ArrayConcat) { HandleScope handle_scope(isolate); - ASSERT(args.length() == 1); + DCHECK(args.length() == 1); CONVERT_ARG_HANDLE_CHECKED(JSArray, arguments, 0); int argument_count = static_cast<int>(arguments->length()->Number()); @@ -10598,7 +10591,7 @@ RUNTIME_FUNCTION(Runtime_ArrayConcat) { break; } case FAST_HOLEY_ELEMENTS: - ASSERT_EQ(0, length); + DCHECK_EQ(0, length); break; default: UNREACHABLE(); @@ -10659,7 +10652,7 @@ RUNTIME_FUNCTION(Runtime_ArrayConcat) { // very slowly for very deeply nested ConsStrings. For debugging use only. RUNTIME_FUNCTION(Runtime_GlobalPrint) { SealHandleScope shs(isolate); - ASSERT(args.length() == 1); + DCHECK(args.length() == 1); CONVERT_ARG_CHECKED(String, string, 0); ConsStringIteratorOp op; @@ -10680,7 +10673,7 @@ RUNTIME_FUNCTION(Runtime_GlobalPrint) { // Returns -1 if hole removal is not supported by this method. RUNTIME_FUNCTION(Runtime_RemoveArrayHoles) { HandleScope scope(isolate); - ASSERT(args.length() == 2); + DCHECK(args.length() == 2); CONVERT_ARG_HANDLE_CHECKED(JSObject, object, 0); CONVERT_NUMBER_CHECKED(uint32_t, limit, Uint32, args[1]); return *JSObject::PrepareElementsForSort(object, limit); @@ -10690,7 +10683,7 @@ RUNTIME_FUNCTION(Runtime_RemoveArrayHoles) { // Move contents of argument 0 (an array) to argument 1 (an array) RUNTIME_FUNCTION(Runtime_MoveArrayContents) { HandleScope scope(isolate); - ASSERT(args.length() == 2); + DCHECK(args.length() == 2); CONVERT_ARG_HANDLE_CHECKED(JSArray, from, 0); CONVERT_ARG_HANDLE_CHECKED(JSArray, to, 1); JSObject::ValidateElements(from); @@ -10712,17 +10705,39 @@ RUNTIME_FUNCTION(Runtime_MoveArrayContents) { // How many elements does this object/array have? 
RUNTIME_FUNCTION(Runtime_EstimateNumberOfElements) { + HandleScope scope(isolate); + DCHECK(args.length() == 1); + CONVERT_ARG_HANDLE_CHECKED(JSArray, array, 0); + Handle<FixedArrayBase> elements(array->elements(), isolate); SealHandleScope shs(isolate); - ASSERT(args.length() == 1); - CONVERT_ARG_CHECKED(JSObject, object, 0); - HeapObject* elements = object->elements(); if (elements->IsDictionary()) { - int result = SeededNumberDictionary::cast(elements)->NumberOfElements(); + int result = + Handle<SeededNumberDictionary>::cast(elements)->NumberOfElements(); return Smi::FromInt(result); - } else if (object->IsJSArray()) { - return JSArray::cast(object)->length(); } else { - return Smi::FromInt(FixedArray::cast(elements)->length()); + DCHECK(array->length()->IsSmi()); + // For packed elements, we know the exact number of elements + int length = elements->length(); + ElementsKind kind = array->GetElementsKind(); + if (IsFastPackedElementsKind(kind)) { + return Smi::FromInt(length); + } + // For holey elements, take samples from the buffer checking for holes + // to generate the estimate. + const int kNumberOfHoleCheckSamples = 97; + int increment = (length < kNumberOfHoleCheckSamples) + ? 1 + : static_cast<int>(length / kNumberOfHoleCheckSamples); + ElementsAccessor* accessor = array->GetElementsAccessor(); + int holes = 0; + for (int i = 0; i < length; i += increment) { + if (!accessor->HasElement(array, array, i, elements)) { + ++holes; + } + } + int estimate = static_cast<int>((kNumberOfHoleCheckSamples - holes) / + kNumberOfHoleCheckSamples * length); + return Smi::FromInt(estimate); } } @@ -10734,24 +10749,26 @@ RUNTIME_FUNCTION(Runtime_EstimateNumberOfElements) { // Intervals can span over some keys that are not in the object. RUNTIME_FUNCTION(Runtime_GetArrayKeys) { HandleScope scope(isolate); - ASSERT(args.length() == 2); + DCHECK(args.length() == 2); CONVERT_ARG_HANDLE_CHECKED(JSObject, array, 0); CONVERT_NUMBER_CHECKED(uint32_t, length, Uint32, args[1]); if (array->elements()->IsDictionary()) { Handle<FixedArray> keys = isolate->factory()->empty_fixed_array(); - for (Handle<Object> p = array; - !p->IsNull(); - p = Handle<Object>(p->GetPrototype(isolate), isolate)) { - if (p->IsJSProxy() || JSObject::cast(*p)->HasIndexedInterceptor()) { + for (PrototypeIterator iter(isolate, array, + PrototypeIterator::START_AT_RECEIVER); + !iter.IsAtEnd(); iter.Advance()) { + if (PrototypeIterator::GetCurrent(iter)->IsJSProxy() || + JSObject::cast(*PrototypeIterator::GetCurrent(iter)) + ->HasIndexedInterceptor()) { // Bail out if we find a proxy or interceptor, likely not worth // collecting keys in that case. 
return *isolate->factory()->NewNumberFromUint(length); } - Handle<JSObject> current = Handle<JSObject>::cast(p); + Handle<JSObject> current = + Handle<JSObject>::cast(PrototypeIterator::GetCurrent(iter)); Handle<FixedArray> current_keys = - isolate->factory()->NewFixedArray( - current->NumberOfLocalElements(NONE)); - current->GetLocalElementKeys(*current_keys, NONE); + isolate->factory()->NewFixedArray(current->NumberOfOwnElements(NONE)); + current->GetOwnElementKeys(*current_keys, NONE); ASSIGN_RETURN_FAILURE_ON_EXCEPTION( isolate, keys, FixedArray::UnionOfKeys(keys, current_keys)); } @@ -10763,8 +10780,8 @@ RUNTIME_FUNCTION(Runtime_GetArrayKeys) { } return *isolate->factory()->NewJSArrayWithElements(keys); } else { - ASSERT(array->HasFastSmiOrObjectElements() || - array->HasFastDoubleElements()); + RUNTIME_ASSERT(array->HasFastSmiOrObjectElements() || + array->HasFastDoubleElements()); uint32_t actual_length = static_cast<uint32_t>(array->elements()->length()); return *isolate->factory()->NewNumberFromUint(Min(actual_length, length)); } @@ -10773,7 +10790,7 @@ RUNTIME_FUNCTION(Runtime_GetArrayKeys) { RUNTIME_FUNCTION(Runtime_LookupAccessor) { HandleScope scope(isolate); - ASSERT(args.length() == 3); + DCHECK(args.length() == 3); CONVERT_ARG_HANDLE_CHECKED(JSReceiver, receiver, 0); CONVERT_ARG_HANDLE_CHECKED(Name, name, 1); CONVERT_SMI_ARG_CHECKED(flag, 2); @@ -10789,14 +10806,15 @@ RUNTIME_FUNCTION(Runtime_LookupAccessor) { RUNTIME_FUNCTION(Runtime_DebugBreak) { SealHandleScope shs(isolate); - ASSERT(args.length() == 0); - return Execution::DebugBreakHelper(isolate); + DCHECK(args.length() == 0); + isolate->debug()->HandleDebugBreak(); + return isolate->heap()->undefined_value(); } // Helper functions for wrapping and unwrapping stack frame ids. static Smi* WrapFrameId(StackFrame::Id id) { - ASSERT(IsAligned(OffsetFrom(id), static_cast<intptr_t>(4))); + DCHECK(IsAligned(OffsetFrom(id), static_cast<intptr_t>(4))); return Smi::FromInt(id >> 2); } @@ -10812,13 +10830,13 @@ static StackFrame::Id UnwrapFrameId(int wrapped) { // args[1]: object supplied during callback RUNTIME_FUNCTION(Runtime_SetDebugEventListener) { SealHandleScope shs(isolate); - ASSERT(args.length() == 2); + DCHECK(args.length() == 2); RUNTIME_ASSERT(args[0]->IsJSFunction() || args[0]->IsUndefined() || args[0]->IsNull()); CONVERT_ARG_HANDLE_CHECKED(Object, callback, 0); CONVERT_ARG_HANDLE_CHECKED(Object, data, 1); - isolate->debugger()->SetEventListener(callback, data); + isolate->debug()->SetEventListener(callback, data); return isolate->heap()->undefined_value(); } @@ -10826,8 +10844,8 @@ RUNTIME_FUNCTION(Runtime_SetDebugEventListener) { RUNTIME_FUNCTION(Runtime_Break) { SealHandleScope shs(isolate); - ASSERT(args.length() == 0); - isolate->stack_guard()->DebugBreak(); + DCHECK(args.length() == 0); + isolate->stack_guard()->RequestDebugBreak(); return isolate->heap()->undefined_value(); } @@ -10841,22 +10859,20 @@ static Handle<Object> DebugLookupResultValue(Isolate* isolate, if (!result->IsFound()) return value; switch (result->type()) { case NORMAL: - value = JSObject::GetNormalizedProperty( - handle(result->holder(), isolate), result); - break; + return JSObject::GetNormalizedProperty(handle(result->holder(), isolate), + result); case FIELD: - value = JSObject::FastPropertyAt(handle(result->holder(), isolate), - result->representation(), - result->GetFieldIndex().field_index()); - break; + return JSObject::FastPropertyAt(handle(result->holder(), isolate), + result->representation(), + result->GetFieldIndex()); case 
CONSTANT: return handle(result->GetConstant(), isolate); case CALLBACKS: { Handle<Object> structure(result->GetCallbackObject(), isolate); - ASSERT(!structure->IsForeign()); + DCHECK(!structure->IsForeign()); if (structure->IsAccessorInfo()) { - MaybeHandle<Object> obj = JSObject::GetPropertyWithCallback( - handle(result->holder(), isolate), receiver, structure, name); + MaybeHandle<Object> obj = JSObject::GetPropertyWithAccessor( + receiver, name, handle(result->holder(), isolate), structure); if (!obj.ToHandle(&value)) { value = handle(isolate->pending_exception(), isolate); isolate->clear_pending_exception(); @@ -10867,15 +10883,13 @@ static Handle<Object> DebugLookupResultValue(Isolate* isolate, break; } case INTERCEPTOR: - break; case HANDLER: + break; case NONEXISTENT: UNREACHABLE(); break; } - ASSERT(!value->IsTheHole() || result->IsReadOnly()); - return value->IsTheHole() - ? Handle<Object>::cast(isolate->factory()->undefined_value()) : value; + return value; } @@ -10894,7 +10908,7 @@ static Handle<Object> DebugLookupResultValue(Isolate* isolate, RUNTIME_FUNCTION(Runtime_DebugGetPropertyDetails) { HandleScope scope(isolate); - ASSERT(args.length() == 2); + DCHECK(args.length() == 2); CONVERT_ARG_HANDLE_CHECKED(JSObject, obj, 0); CONVERT_ARG_HANDLE_CHECKED(Name, name, 1); @@ -10906,17 +10920,10 @@ RUNTIME_FUNCTION(Runtime_DebugGetPropertyDetails) { // could have the assumption that its own native context is the current // context and not some internal debugger context. SaveContext save(isolate); - if (isolate->debug()->InDebugger()) { + if (isolate->debug()->in_debug_scope()) { isolate->set_context(*isolate->debug()->debugger_entry()->GetContext()); } - // Skip the global proxy as it has no properties and always delegates to the - // real global object. - if (obj->IsJSGlobalProxy()) { - obj = Handle<JSObject>(JSObject::cast(obj->GetPrototype())); - } - - // Check if the name is trivially convertible to an index and get the element // if so. uint32_t index; @@ -10933,13 +10940,16 @@ RUNTIME_FUNCTION(Runtime_DebugGetPropertyDetails) { } // Find the number of objects making up this. - int length = LocalPrototypeChainLength(*obj); + int length = OwnPrototypeChainLength(*obj); - // Try local lookup on each of the objects. - Handle<JSObject> jsproto = obj; + // Try own lookup on each of the objects. + PrototypeIterator iter(isolate, obj, PrototypeIterator::START_AT_RECEIVER); for (int i = 0; i < length; i++) { + DCHECK(!iter.IsAtEnd()); + Handle<JSObject> jsproto = + Handle<JSObject>::cast(PrototypeIterator::GetCurrent(iter)); LookupResult result(isolate); - jsproto->LocalLookup(name, &result); + jsproto->LookupOwn(name, &result); if (result.IsFound()) { // LookupResult is not GC safe as it holds raw object pointers. // GC can happen later in this code so put the required fields into @@ -10972,9 +10982,7 @@ RUNTIME_FUNCTION(Runtime_DebugGetPropertyDetails) { return *isolate->factory()->NewJSArrayWithElements(details); } - if (i < length - 1) { - jsproto = Handle<JSObject>(JSObject::cast(jsproto->GetPrototype())); - } + iter.Advance(); } return isolate->heap()->undefined_value(); @@ -10984,7 +10992,7 @@ RUNTIME_FUNCTION(Runtime_DebugGetPropertyDetails) { RUNTIME_FUNCTION(Runtime_DebugGetProperty) { HandleScope scope(isolate); - ASSERT(args.length() == 2); + DCHECK(args.length() == 2); CONVERT_ARG_HANDLE_CHECKED(JSObject, obj, 0); CONVERT_ARG_HANDLE_CHECKED(Name, name, 1); @@ -10999,7 +11007,7 @@ RUNTIME_FUNCTION(Runtime_DebugGetProperty) { // args[0]: smi with property details. 
RUNTIME_FUNCTION(Runtime_DebugPropertyTypeFromDetails) { SealHandleScope shs(isolate); - ASSERT(args.length() == 1); + DCHECK(args.length() == 1); CONVERT_PROPERTY_DETAILS_CHECKED(details, 0); return Smi::FromInt(static_cast<int>(details.type())); } @@ -11009,7 +11017,7 @@ RUNTIME_FUNCTION(Runtime_DebugPropertyTypeFromDetails) { // args[0]: smi with property details. RUNTIME_FUNCTION(Runtime_DebugPropertyAttributesFromDetails) { SealHandleScope shs(isolate); - ASSERT(args.length() == 1); + DCHECK(args.length() == 1); CONVERT_PROPERTY_DETAILS_CHECKED(details, 0); return Smi::FromInt(static_cast<int>(details.attributes())); } @@ -11019,7 +11027,7 @@ RUNTIME_FUNCTION(Runtime_DebugPropertyAttributesFromDetails) { // args[0]: smi with property details. RUNTIME_FUNCTION(Runtime_DebugPropertyIndexFromDetails) { SealHandleScope shs(isolate); - ASSERT(args.length() == 1); + DCHECK(args.length() == 1); CONVERT_PROPERTY_DETAILS_CHECKED(details, 0); // TODO(verwaest): Depends on the type of details. return Smi::FromInt(details.dictionary_index()); @@ -11031,16 +11039,14 @@ RUNTIME_FUNCTION(Runtime_DebugPropertyIndexFromDetails) { // args[1]: property name RUNTIME_FUNCTION(Runtime_DebugNamedInterceptorPropertyValue) { HandleScope scope(isolate); - ASSERT(args.length() == 2); + DCHECK(args.length() == 2); CONVERT_ARG_HANDLE_CHECKED(JSObject, obj, 0); RUNTIME_ASSERT(obj->HasNamedInterceptor()); CONVERT_ARG_HANDLE_CHECKED(Name, name, 1); - PropertyAttributes attributes; Handle<Object> result; ASSIGN_RETURN_FAILURE_ON_EXCEPTION( - isolate, result, - JSObject::GetPropertyWithInterceptor(obj, obj, name, &attributes)); + isolate, result, JSObject::GetProperty(obj, name)); return *result; } @@ -11050,7 +11056,7 @@ RUNTIME_FUNCTION(Runtime_DebugNamedInterceptorPropertyValue) { // args[1]: index RUNTIME_FUNCTION(Runtime_DebugIndexedInterceptorElementValue) { HandleScope scope(isolate); - ASSERT(args.length() == 2); + DCHECK(args.length() == 2); CONVERT_ARG_HANDLE_CHECKED(JSObject, obj, 0); RUNTIME_ASSERT(obj->HasIndexedInterceptor()); CONVERT_NUMBER_CHECKED(uint32_t, index, Uint32, args[1]); @@ -11062,14 +11068,15 @@ RUNTIME_FUNCTION(Runtime_DebugIndexedInterceptorElementValue) { static bool CheckExecutionState(Isolate* isolate, int break_id) { - return (isolate->debug()->break_id() != 0 && - break_id == isolate->debug()->break_id()); + return !isolate->debug()->debug_context().is_null() && + isolate->debug()->break_id() != 0 && + isolate->debug()->break_id() == break_id; } RUNTIME_FUNCTION(Runtime_CheckExecutionState) { SealHandleScope shs(isolate); - ASSERT(args.length() == 1); + DCHECK(args.length() == 1); CONVERT_NUMBER_CHECKED(int, break_id, Int32, args[0]); RUNTIME_ASSERT(CheckExecutionState(isolate, break_id)); return isolate->heap()->true_value(); @@ -11078,7 +11085,7 @@ RUNTIME_FUNCTION(Runtime_CheckExecutionState) { RUNTIME_FUNCTION(Runtime_GetFrameCount) { HandleScope scope(isolate); - ASSERT(args.length() == 1); + DCHECK(args.length() == 1); CONVERT_NUMBER_CHECKED(int, break_id, Int32, args[0]); RUNTIME_ASSERT(CheckExecutionState(isolate, break_id)); @@ -11091,7 +11098,12 @@ RUNTIME_FUNCTION(Runtime_GetFrameCount) { } for (JavaScriptFrameIterator it(isolate, id); !it.done(); it.Advance()) { - n += it.frame()->GetInlineCount(); + List<FrameSummary> frames(FLAG_max_inlining_levels + 1); + it.frame()->Summarize(&frames); + for (int i = frames.length() - 1; i >= 0; i--) { + // Omit functions from native scripts. 
+ if (!frames[i].function()->IsFromNativeScript()) n++; + } } return Smi::FromInt(n); } @@ -11156,10 +11168,10 @@ class FrameInspector { // To inspect all the provided arguments the frame might need to be // replaced with the arguments frame. void SetArgumentsFrame(JavaScriptFrame* frame) { - ASSERT(has_adapted_arguments_); + DCHECK(has_adapted_arguments_); frame_ = frame; is_optimized_ = frame_->is_optimized(); - ASSERT(!is_optimized_); + DCHECK(!is_optimized_); } private: @@ -11192,11 +11204,37 @@ static SaveContext* FindSavedContextForFrame(Isolate* isolate, while (save != NULL && !save->IsBelowFrame(frame)) { save = save->prev(); } - ASSERT(save != NULL); + DCHECK(save != NULL); return save; } +RUNTIME_FUNCTION(Runtime_IsOptimized) { + SealHandleScope shs(isolate); + DCHECK(args.length() == 0); + JavaScriptFrameIterator it(isolate); + JavaScriptFrame* frame = it.frame(); + return isolate->heap()->ToBoolean(frame->is_optimized()); +} + + +// Advances the iterator to the frame that matches the index and returns the +// inlined frame index, or -1 if not found. Skips native JS functions. +static int FindIndexedNonNativeFrame(JavaScriptFrameIterator* it, int index) { + int count = -1; + for (; !it->done(); it->Advance()) { + List<FrameSummary> frames(FLAG_max_inlining_levels + 1); + it->frame()->Summarize(&frames); + for (int i = frames.length() - 1; i >= 0; i--) { + // Omit functions from native scripts. + if (frames[i].function()->IsFromNativeScript()) continue; + if (++count == index) return i; + } + } + return -1; +} + + // Return an array with frame details // args[0]: number: break id // args[1]: number: frame index @@ -11216,7 +11254,7 @@ static SaveContext* FindSavedContextForFrame(Isolate* isolate, // Return value if any RUNTIME_FUNCTION(Runtime_GetFrameDetails) { HandleScope scope(isolate); - ASSERT(args.length() == 2); + DCHECK(args.length() == 2); CONVERT_NUMBER_CHECKED(int, break_id, Int32, args[0]); RUNTIME_ASSERT(CheckExecutionState(isolate, break_id)); @@ -11230,22 +11268,13 @@ RUNTIME_FUNCTION(Runtime_GetFrameDetails) { return heap->undefined_value(); } - int count = 0; JavaScriptFrameIterator it(isolate, id); - for (; !it.done(); it.Advance()) { - if (index < count + it.frame()->GetInlineCount()) break; - count += it.frame()->GetInlineCount(); - } - if (it.done()) return heap->undefined_value(); - - bool is_optimized = it.frame()->is_optimized(); + // Inlined frame index in optimized frame, starting from outer function. + int inlined_jsframe_index = FindIndexedNonNativeFrame(&it, index); + if (inlined_jsframe_index == -1) return heap->undefined_value(); - int inlined_jsframe_index = 0; // Inlined frame index in optimized frame. - if (is_optimized) { - inlined_jsframe_index = - it.frame()->GetInlineCount() - (index - count) - 1; - } FrameInspector frame_inspector(it.frame(), inlined_jsframe_index, isolate); + bool is_optimized = it.frame()->is_optimized(); // Traverse the saved contexts chain to find the active context for the // selected frame. @@ -11264,7 +11293,7 @@ RUNTIME_FUNCTION(Runtime_GetFrameDetails) { Handle<JSFunction> function(JSFunction::cast(frame_inspector.GetFunction())); Handle<SharedFunctionInfo> shared(function->shared()); Handle<ScopeInfo> scope_info(shared->scope_info()); - ASSERT(*scope_info != ScopeInfo::Empty(isolate)); + DCHECK(*scope_info != ScopeInfo::Empty(isolate)); // Get the locals names and values into a temporary array. 
int local_count = scope_info->LocalCount(); @@ -11299,9 +11328,10 @@ RUNTIME_FUNCTION(Runtime_GetFrameDetails) { Handle<String> name(scope_info->LocalName(i)); VariableMode mode; InitializationFlag init_flag; + MaybeAssignedFlag maybe_assigned_flag; locals->set(local * 2, *name); - int context_slot_index = - ScopeInfo::ContextSlotIndex(scope_info, name, &mode, &init_flag); + int context_slot_index = ScopeInfo::ContextSlotIndex( + scope_info, name, &mode, &init_flag, &maybe_assigned_flag); Object* value = context->get(context_slot_index); locals->set(local * 2 + 1, value); local++; @@ -11455,10 +11485,9 @@ RUNTIME_FUNCTION(Runtime_GetFrameDetails) { // native context. it.Advance(); if (receiver->IsUndefined()) { - Context* context = function->context(); - receiver = handle(context->global_object()->global_receiver()); + receiver = handle(function->global_proxy()); } else { - ASSERT(!receiver->IsNull()); + DCHECK(!receiver->IsNull()); Context* context = Context::cast(it.frame()->context()); Handle<Context> native_context(Context::cast(context->native_context())); receiver = Object::ToObject( @@ -11467,7 +11496,7 @@ RUNTIME_FUNCTION(Runtime_GetFrameDetails) { } details->set(kFrameDetailsReceiverIndex, *receiver); - ASSERT_EQ(details_size, details_index); + DCHECK_EQ(details_size, details_index); return *isolate->factory()->NewJSArrayWithElements(details); } @@ -11475,8 +11504,10 @@ RUNTIME_FUNCTION(Runtime_GetFrameDetails) { static bool ParameterIsShadowedByContextLocal(Handle<ScopeInfo> info, Handle<String> parameter_name) { VariableMode mode; - InitializationFlag flag; - return ScopeInfo::ContextSlotIndex(info, parameter_name, &mode, &flag) != -1; + InitializationFlag init_flag; + MaybeAssignedFlag maybe_assigned_flag; + return ScopeInfo::ContextSlotIndex(info, parameter_name, &mode, &init_flag, + &maybe_assigned_flag) != -1; } @@ -11502,11 +11533,11 @@ static MaybeHandle<JSObject> MaterializeStackLocalsWithFrameInspector( ? frame_inspector->GetParameter(i) : isolate->heap()->undefined_value(), isolate); - ASSERT(!value->IsTheHole()); + DCHECK(!value->IsTheHole()); RETURN_ON_EXCEPTION( isolate, - Runtime::SetObjectProperty(isolate, target, name, value, NONE, SLOPPY), + Runtime::SetObjectProperty(isolate, target, name, value, SLOPPY), JSObject); } @@ -11519,7 +11550,7 @@ static MaybeHandle<JSObject> MaterializeStackLocalsWithFrameInspector( RETURN_ON_EXCEPTION( isolate, - Runtime::SetObjectProperty(isolate, target, name, value, NONE, SLOPPY), + Runtime::SetObjectProperty(isolate, target, name, value, SLOPPY), JSObject); } @@ -11548,7 +11579,7 @@ static void UpdateStackLocalsFromMaterializedObject(Isolate* isolate, Handle<String> name(scope_info->ParameterName(i)); if (ParameterIsShadowedByContextLocal(scope_info, name)) continue; - ASSERT(!frame->GetParameter(i)->IsTheHole()); + DCHECK(!frame->GetParameter(i)->IsTheHole()); HandleScope scope(isolate); Handle<Object> value = Object::GetPropertyOrElement(target, name).ToHandleChecked(); @@ -11601,15 +11632,14 @@ MUST_USE_RESULT static MaybeHandle<JSObject> MaterializeLocalContext( for (int i = 0; i < keys->length(); i++) { // Names of variables introduced by eval are strings. 
- ASSERT(keys->get(i)->IsString()); + DCHECK(keys->get(i)->IsString()); Handle<String> key(String::cast(keys->get(i))); Handle<Object> value; ASSIGN_RETURN_ON_EXCEPTION( isolate, value, Object::GetPropertyOrElement(ext, key), JSObject); RETURN_ON_EXCEPTION( isolate, - Runtime::SetObjectProperty( - isolate, target, key, value, NONE, SLOPPY), + Runtime::SetObjectProperty(isolate, target, key, value, SLOPPY), JSObject); } } @@ -11649,8 +11679,9 @@ static bool SetContextLocalValue(Isolate* isolate, if (String::Equals(variable_name, next_name)) { VariableMode mode; InitializationFlag init_flag; - int context_index = - ScopeInfo::ContextSlotIndex(scope_info, next_name, &mode, &init_flag); + MaybeAssignedFlag maybe_assigned_flag; + int context_index = ScopeInfo::ContextSlotIndex( + scope_info, next_name, &mode, &init_flag, &maybe_assigned_flag); context->set(context_index, *new_value); return true; } @@ -11710,11 +11741,13 @@ static bool SetLocalVariableValue(Isolate* isolate, !function_context->IsNativeContext()) { Handle<JSObject> ext(JSObject::cast(function_context->extension())); - if (JSReceiver::HasProperty(ext, variable_name)) { + Maybe<bool> maybe = JSReceiver::HasProperty(ext, variable_name); + DCHECK(maybe.has_value); + if (maybe.value) { // We don't expect this to do anything except replacing // property value. Runtime::SetObjectProperty(isolate, ext, variable_name, new_value, - NONE, SLOPPY).Assert(); + SLOPPY).Assert(); return true; } } @@ -11730,7 +11763,7 @@ static bool SetLocalVariableValue(Isolate* isolate, MUST_USE_RESULT static MaybeHandle<JSObject> MaterializeClosure( Isolate* isolate, Handle<Context> context) { - ASSERT(context->IsFunctionContext()); + DCHECK(context->IsFunctionContext()); Handle<SharedFunctionInfo> shared(context->closure()->shared()); Handle<ScopeInfo> scope_info(shared->scope_info()); @@ -11758,15 +11791,14 @@ MUST_USE_RESULT static MaybeHandle<JSObject> MaterializeClosure( for (int i = 0; i < keys->length(); i++) { HandleScope scope(isolate); // Names of variables introduced by eval are strings. - ASSERT(keys->get(i)->IsString()); + DCHECK(keys->get(i)->IsString()); Handle<String> key(String::cast(keys->get(i))); Handle<Object> value; ASSIGN_RETURN_ON_EXCEPTION( isolate, value, Object::GetPropertyOrElement(ext, key), JSObject); RETURN_ON_EXCEPTION( isolate, - Runtime::SetObjectProperty( - isolate, closure_scope, key, value, NONE, SLOPPY), + Runtime::DefineObjectProperty(closure_scope, key, value, NONE), JSObject); } } @@ -11780,7 +11812,7 @@ static bool SetClosureVariableValue(Isolate* isolate, Handle<Context> context, Handle<String> variable_name, Handle<Object> new_value) { - ASSERT(context->IsFunctionContext()); + DCHECK(context->IsFunctionContext()); Handle<SharedFunctionInfo> shared(context->closure()->shared()); Handle<ScopeInfo> scope_info(shared->scope_info()); @@ -11795,10 +11827,12 @@ static bool SetClosureVariableValue(Isolate* isolate, // be variables introduced by eval. if (context->has_extension()) { Handle<JSObject> ext(JSObject::cast(context->extension())); - if (JSReceiver::HasProperty(ext, variable_name)) { + Maybe<bool> maybe = JSReceiver::HasProperty(ext, variable_name); + DCHECK(maybe.has_value); + if (maybe.value) { // We don't expect this to do anything except replacing property value. 
- Runtime::SetObjectProperty(isolate, ext, variable_name, new_value, - NONE, SLOPPY).Assert(); + Runtime::DefineObjectProperty( + ext, variable_name, new_value, NONE).Assert(); return true; } } @@ -11812,7 +11846,7 @@ static bool SetClosureVariableValue(Isolate* isolate, MUST_USE_RESULT static MaybeHandle<JSObject> MaterializeCatchScope( Isolate* isolate, Handle<Context> context) { - ASSERT(context->IsCatchContext()); + DCHECK(context->IsCatchContext()); Handle<String> name(String::cast(context->extension())); Handle<Object> thrown_object(context->get(Context::THROWN_OBJECT_INDEX), isolate); @@ -11820,8 +11854,7 @@ MUST_USE_RESULT static MaybeHandle<JSObject> MaterializeCatchScope( isolate->factory()->NewJSObject(isolate->object_function()); RETURN_ON_EXCEPTION( isolate, - Runtime::SetObjectProperty(isolate, catch_scope, name, thrown_object, - NONE, SLOPPY), + Runtime::DefineObjectProperty(catch_scope, name, thrown_object, NONE), JSObject); return catch_scope; } @@ -11831,7 +11864,7 @@ static bool SetCatchVariableValue(Isolate* isolate, Handle<Context> context, Handle<String> variable_name, Handle<Object> new_value) { - ASSERT(context->IsCatchContext()); + DCHECK(context->IsCatchContext()); Handle<String> name(String::cast(context->extension())); if (!String::Equals(name, variable_name)) { return false; @@ -11846,7 +11879,7 @@ static bool SetCatchVariableValue(Isolate* isolate, MUST_USE_RESULT static MaybeHandle<JSObject> MaterializeBlockScope( Isolate* isolate, Handle<Context> context) { - ASSERT(context->IsBlockContext()); + DCHECK(context->IsBlockContext()); Handle<ScopeInfo> scope_info(ScopeInfo::cast(context->extension())); // Allocate and initialize a JSObject with all the arguments, stack locals @@ -11869,7 +11902,7 @@ MUST_USE_RESULT static MaybeHandle<JSObject> MaterializeBlockScope( MUST_USE_RESULT static MaybeHandle<JSObject> MaterializeModuleScope( Isolate* isolate, Handle<Context> context) { - ASSERT(context->IsModuleContext()); + DCHECK(context->IsModuleContext()); Handle<ScopeInfo> scope_info(ScopeInfo::cast(context->extension())); // Allocate and initialize a JSObject with all the members of the debugged @@ -11978,7 +12011,7 @@ class ScopeIterator { if (scope_info->scope_type() == GLOBAL_SCOPE) { info.MarkAsGlobal(); } else { - ASSERT(scope_info->scope_type() == EVAL_SCOPE); + DCHECK(scope_info->scope_type() == EVAL_SCOPE); info.MarkAsEval(); info.SetContext(Handle<Context>(function_->context())); } @@ -12012,7 +12045,7 @@ class ScopeIterator { // More scopes? bool Done() { - ASSERT(!failed_); + DCHECK(!failed_); return context_.is_null(); } @@ -12020,11 +12053,11 @@ class ScopeIterator { // Move to the next scope. void Next() { - ASSERT(!failed_); + DCHECK(!failed_); ScopeType scope_type = Type(); if (scope_type == ScopeTypeGlobal) { // The global scope is always the last in the chain. - ASSERT(context_->IsNativeContext()); + DCHECK(context_->IsNativeContext()); context_ = Handle<Context>(); return; } @@ -12032,7 +12065,7 @@ class ScopeIterator { context_ = Handle<Context>(context_->previous(), isolate_); } else { if (nested_scope_chain_.last()->HasContext()) { - ASSERT(context_->previous() != NULL); + DCHECK(context_->previous() != NULL); context_ = Handle<Context>(context_->previous(), isolate_); } nested_scope_chain_.RemoveLast(); @@ -12041,28 +12074,28 @@ class ScopeIterator { // Return the type of the current scope. 
  ScopeType Type() {
-    ASSERT(!failed_);
+    DCHECK(!failed_);
     if (!nested_scope_chain_.is_empty()) {
       Handle<ScopeInfo> scope_info = nested_scope_chain_.last();
       switch (scope_info->scope_type()) {
         case FUNCTION_SCOPE:
-          ASSERT(context_->IsFunctionContext() ||
+          DCHECK(context_->IsFunctionContext() ||
                  !scope_info->HasContext());
           return ScopeTypeLocal;
         case MODULE_SCOPE:
-          ASSERT(context_->IsModuleContext());
+          DCHECK(context_->IsModuleContext());
           return ScopeTypeModule;
         case GLOBAL_SCOPE:
-          ASSERT(context_->IsNativeContext());
+          DCHECK(context_->IsNativeContext());
           return ScopeTypeGlobal;
         case WITH_SCOPE:
-          ASSERT(context_->IsWithContext());
+          DCHECK(context_->IsWithContext());
           return ScopeTypeWith;
         case CATCH_SCOPE:
-          ASSERT(context_->IsCatchContext());
+          DCHECK(context_->IsCatchContext());
           return ScopeTypeCatch;
         case BLOCK_SCOPE:
-          ASSERT(!scope_info->HasContext() ||
+          DCHECK(!scope_info->HasContext() ||
                  context_->IsBlockContext());
           return ScopeTypeBlock;
         case EVAL_SCOPE:
@@ -12070,7 +12103,7 @@ class ScopeIterator {
       }
     }
     if (context_->IsNativeContext()) {
-      ASSERT(context_->global_object()->IsGlobalObject());
+      DCHECK(context_->global_object()->IsGlobalObject());
       return ScopeTypeGlobal;
     }
     if (context_->IsFunctionContext()) {
@@ -12085,19 +12118,19 @@ class ScopeIterator {
     if (context_->IsModuleContext()) {
       return ScopeTypeModule;
     }
-    ASSERT(context_->IsWithContext());
+    DCHECK(context_->IsWithContext());
     return ScopeTypeWith;
   }

   // Return the JavaScript object with the content of the current scope.
   MaybeHandle<JSObject> ScopeObject() {
-    ASSERT(!failed_);
+    DCHECK(!failed_);
     switch (Type()) {
       case ScopeIterator::ScopeTypeGlobal:
         return Handle<JSObject>(CurrentContext()->global_object());
       case ScopeIterator::ScopeTypeLocal:
         // Materialize the content of the local scope into a JSObject.
-        ASSERT(nested_scope_chain_.length() == 1);
+        DCHECK(nested_scope_chain_.length() == 1);
         return MaterializeLocalScope(isolate_, frame_, inlined_jsframe_index_);
       case ScopeIterator::ScopeTypeWith:
         // Return the with object.
@@ -12118,7 +12151,7 @@ class ScopeIterator {

   bool SetVariableValue(Handle<String> variable_name,
                         Handle<Object> new_value) {
-    ASSERT(!failed_);
+    DCHECK(!failed_);
     switch (Type()) {
       case ScopeIterator::ScopeTypeGlobal:
         break;
@@ -12144,7 +12177,7 @@ class ScopeIterator {
   }

   Handle<ScopeInfo> CurrentScopeInfo() {
-    ASSERT(!failed_);
+    DCHECK(!failed_);
     if (!nested_scope_chain_.is_empty()) {
       return nested_scope_chain_.last();
     } else if (context_->IsBlockContext()) {
@@ -12158,7 +12191,7 @@ class ScopeIterator {
   // Return the context for this scope. For the local context there might not
   // be an actual context.
   Handle<Context> CurrentContext() {
-    ASSERT(!failed_);
+    DCHECK(!failed_);
     if (Type() == ScopeTypeGlobal ||
         nested_scope_chain_.is_empty()) {
       return context_;
@@ -12172,22 +12205,23 @@ class ScopeIterator {
 #ifdef DEBUG
   // Debug print of the content of the current scope.
  void DebugPrint() {
-    ASSERT(!failed_);
+    OFStream os(stdout);
+    DCHECK(!failed_);
     switch (Type()) {
       case ScopeIterator::ScopeTypeGlobal:
-        PrintF("Global:\n");
-        CurrentContext()->Print();
+        os << "Global:\n";
+        CurrentContext()->Print(os);
         break;

       case ScopeIterator::ScopeTypeLocal: {
-        PrintF("Local:\n");
+        os << "Local:\n";
         function_->shared()->scope_info()->Print();
         if (!CurrentContext().is_null()) {
-          CurrentContext()->Print();
+          CurrentContext()->Print(os);
           if (CurrentContext()->has_extension()) {
             Handle<Object> extension(CurrentContext()->extension(), isolate_);
             if (extension->IsJSContextExtensionObject()) {
-              extension->Print();
+              extension->Print(os);
             }
           }
         }
@@ -12195,23 +12229,23 @@ class ScopeIterator {
       }

       case ScopeIterator::ScopeTypeWith:
-        PrintF("With:\n");
-        CurrentContext()->extension()->Print();
+        os << "With:\n";
+        CurrentContext()->extension()->Print(os);
         break;

       case ScopeIterator::ScopeTypeCatch:
-        PrintF("Catch:\n");
-        CurrentContext()->extension()->Print();
-        CurrentContext()->get(Context::THROWN_OBJECT_INDEX)->Print();
+        os << "Catch:\n";
+        CurrentContext()->extension()->Print(os);
+        CurrentContext()->get(Context::THROWN_OBJECT_INDEX)->Print(os);
         break;

       case ScopeIterator::ScopeTypeClosure:
-        PrintF("Closure:\n");
-        CurrentContext()->Print();
+        os << "Closure:\n";
+        CurrentContext()->Print(os);
         if (CurrentContext()->has_extension()) {
           Handle<Object> extension(CurrentContext()->extension(), isolate_);
           if (extension->IsJSContextExtensionObject()) {
-            extension->Print();
+            extension->Print(os);
           }
         }
         break;
@@ -12244,7 +12278,7 @@ class ScopeIterator {
       // information we get from the context chain but nothing about
       // completely stack allocated scopes or stack allocated locals.
       // Or it could be due to stack overflow.
-      ASSERT(isolate_->has_pending_exception());
+      DCHECK(isolate_->has_pending_exception());
       failed_ = true;
     }
   }
@@ -12255,7 +12289,7 @@ class ScopeIterator {

 RUNTIME_FUNCTION(Runtime_GetScopeCount) {
   HandleScope scope(isolate);
-  ASSERT(args.length() == 2);
+  DCHECK(args.length() == 2);

   CONVERT_NUMBER_CHECKED(int, break_id, Int32, args[0]);
   RUNTIME_ASSERT(CheckExecutionState(isolate, break_id));
@@ -12283,7 +12317,7 @@ RUNTIME_FUNCTION(Runtime_GetScopeCount) {
 // of the corresponding statement.
 RUNTIME_FUNCTION(Runtime_GetStepInPositions) {
   HandleScope scope(isolate);
-  ASSERT(args.length() == 2);
+  DCHECK(args.length() == 2);

   CONVERT_NUMBER_CHECKED(int, break_id, Int32, args[0]);
   RUNTIME_ASSERT(CheckExecutionState(isolate, break_id));
@@ -12390,7 +12424,7 @@ MUST_USE_RESULT static MaybeHandle<JSObject> MaterializeScopeDetails(
 // 1: Scope object
 RUNTIME_FUNCTION(Runtime_GetScopeDetails) {
   HandleScope scope(isolate);
-  ASSERT(args.length() == 4);
+  DCHECK(args.length() == 4);

   CONVERT_NUMBER_CHECKED(int, break_id, Int32, args[0]);
   RUNTIME_ASSERT(CheckExecutionState(isolate, break_id));
@@ -12430,7 +12464,7 @@ RUNTIME_FUNCTION(Runtime_GetScopeDetails) {
 // 1: Scope object
 RUNTIME_FUNCTION(Runtime_GetAllScopesDetails) {
   HandleScope scope(isolate);
-  ASSERT(args.length() == 3 || args.length() == 4);
+  DCHECK(args.length() == 3 || args.length() == 4);

   CONVERT_NUMBER_CHECKED(int, break_id, Int32, args[0]);
   RUNTIME_ASSERT(CheckExecutionState(isolate, break_id));
@@ -12467,7 +12501,7 @@ RUNTIME_FUNCTION(Runtime_GetAllScopesDetails) {

 RUNTIME_FUNCTION(Runtime_GetFunctionScopeCount) {
   HandleScope scope(isolate);
-  ASSERT(args.length() == 1);
+  DCHECK(args.length() == 1);

   // Check arguments.
   CONVERT_ARG_HANDLE_CHECKED(JSFunction, fun, 0);
@@ -12484,7 +12518,7 @@ RUNTIME_FUNCTION(Runtime_GetFunctionScopeCount) {

 RUNTIME_FUNCTION(Runtime_GetFunctionScopeDetails) {
   HandleScope scope(isolate);
-  ASSERT(args.length() == 2);
+  DCHECK(args.length() == 2);

   // Check arguments.
   CONVERT_ARG_HANDLE_CHECKED(JSFunction, fun, 0);
@@ -12531,7 +12565,7 @@ static bool SetScopeVariableValue(ScopeIterator* it, int index,
 // Return true if success and false otherwise
 RUNTIME_FUNCTION(Runtime_SetScopeVariableValue) {
   HandleScope scope(isolate);
-  ASSERT(args.length() == 6);
+  DCHECK(args.length() == 6);

   // Check arguments.
   CONVERT_NUMBER_CHECKED(int, index, Int32, args[3]);
@@ -12565,7 +12599,7 @@ RUNTIME_FUNCTION(Runtime_SetScopeVariableValue) {

 RUNTIME_FUNCTION(Runtime_DebugPrintScopes) {
   HandleScope scope(isolate);
-  ASSERT(args.length() == 0);
+  DCHECK(args.length() == 0);

 #ifdef DEBUG
   // Print the scopes for the top frame.
@@ -12583,7 +12617,7 @@ RUNTIME_FUNCTION(Runtime_DebugPrintScopes) {

 RUNTIME_FUNCTION(Runtime_GetThreadCount) {
   HandleScope scope(isolate);
-  ASSERT(args.length() == 1);
+  DCHECK(args.length() == 1);

   CONVERT_NUMBER_CHECKED(int, break_id, Int32, args[0]);
   RUNTIME_ASSERT(CheckExecutionState(isolate, break_id));
@@ -12614,7 +12648,7 @@ static const int kThreadDetailsSize = 2;
 // 1: Thread id
 RUNTIME_FUNCTION(Runtime_GetThreadDetails) {
   HandleScope scope(isolate);
-  ASSERT(args.length() == 2);
+  DCHECK(args.length() == 2);

   CONVERT_NUMBER_CHECKED(int, break_id, Int32, args[0]);
   RUNTIME_ASSERT(CheckExecutionState(isolate, break_id));
@@ -12660,7 +12694,7 @@ RUNTIME_FUNCTION(Runtime_GetThreadDetails) {
 // args[0]: disable break state
 RUNTIME_FUNCTION(Runtime_SetDisableBreak) {
   HandleScope scope(isolate);
-  ASSERT(args.length() == 1);
+  DCHECK(args.length() == 1);
   CONVERT_BOOLEAN_ARG_CHECKED(disable_break, 0);
   isolate->debug()->set_disable_break(disable_break);
   return isolate->heap()->undefined_value();
 }
@@ -12674,7 +12708,7 @@ static bool IsPositionAlignmentCodeCorrect(int alignment) {

 RUNTIME_FUNCTION(Runtime_GetBreakLocations) {
   HandleScope scope(isolate);
-  ASSERT(args.length() == 2);
+  DCHECK(args.length() == 2);

   CONVERT_ARG_HANDLE_CHECKED(JSFunction, fun, 0);
   CONVERT_NUMBER_CHECKED(int32_t, statement_aligned_code, Int32, args[1]);
@@ -12702,15 +12736,16 @@ RUNTIME_FUNCTION(Runtime_GetBreakLocations) {
 // args[2]: number: break point object
 RUNTIME_FUNCTION(Runtime_SetFunctionBreakPoint) {
   HandleScope scope(isolate);
-  ASSERT(args.length() == 3);
+  DCHECK(args.length() == 3);
   CONVERT_ARG_HANDLE_CHECKED(JSFunction, function, 0);
   CONVERT_NUMBER_CHECKED(int32_t, source_position, Int32, args[1]);
-  RUNTIME_ASSERT(source_position >= 0);
+  RUNTIME_ASSERT(source_position >= function->shared()->start_position() &&
+                 source_position <= function->shared()->end_position());
   CONVERT_ARG_HANDLE_CHECKED(Object, break_point_object_arg, 2);

   // Set break point.
-  isolate->debug()->SetBreakPoint(function, break_point_object_arg,
-                                  &source_position);
+  RUNTIME_ASSERT(isolate->debug()->SetBreakPoint(
+      function, break_point_object_arg, &source_position));

   return Smi::FromInt(source_position);
 }
@@ -12725,7 +12760,7 @@ RUNTIME_FUNCTION(Runtime_SetFunctionBreakPoint) {
 // args[3]: number: break point object
 RUNTIME_FUNCTION(Runtime_SetScriptBreakPoint) {
   HandleScope scope(isolate);
-  ASSERT(args.length() == 4);
+  DCHECK(args.length() == 4);
   CONVERT_ARG_HANDLE_CHECKED(JSValue, wrapper, 0);
   CONVERT_NUMBER_CHECKED(int32_t, source_position, Int32, args[1]);
   RUNTIME_ASSERT(source_position >= 0);
@@ -12757,7 +12792,7 @@ RUNTIME_FUNCTION(Runtime_SetScriptBreakPoint) {
 // args[0]: number: break point object
 RUNTIME_FUNCTION(Runtime_ClearBreakPoint) {
   HandleScope scope(isolate);
-  ASSERT(args.length() == 1);
+  DCHECK(args.length() == 1);
   CONVERT_ARG_HANDLE_CHECKED(Object, break_point_object_arg, 0);

   // Clear break point.
@@ -12772,7 +12807,7 @@ RUNTIME_FUNCTION(Runtime_ClearBreakPoint) {
 // args[1]: Boolean indicating on/off.
 RUNTIME_FUNCTION(Runtime_ChangeBreakOnException) {
   HandleScope scope(isolate);
-  ASSERT(args.length() == 2);
+  DCHECK(args.length() == 2);
   CONVERT_NUMBER_CHECKED(uint32_t, type_arg, Uint32, args[0]);
   CONVERT_BOOLEAN_ARG_CHECKED(enable, 1);
@@ -12789,7 +12824,7 @@ RUNTIME_FUNCTION(Runtime_ChangeBreakOnException) {
 // args[0]: boolean indicating uncaught exceptions
 RUNTIME_FUNCTION(Runtime_IsBreakOnException) {
   HandleScope scope(isolate);
-  ASSERT(args.length() == 1);
+  DCHECK(args.length() == 1);
   CONVERT_NUMBER_CHECKED(uint32_t, type_arg, Uint32, args[0]);

   ExceptionBreakType type = static_cast<ExceptionBreakType>(type_arg);
@@ -12805,7 +12840,7 @@ RUNTIME_FUNCTION(Runtime_IsBreakOnException) {
 // of frames to step down.
 RUNTIME_FUNCTION(Runtime_PrepareStep) {
   HandleScope scope(isolate);
-  ASSERT(args.length() == 4);
+  DCHECK(args.length() == 4);
   CONVERT_NUMBER_CHECKED(int, break_id, Int32, args[0]);
   RUNTIME_ASSERT(CheckExecutionState(isolate, break_id));
@@ -12857,7 +12892,7 @@ RUNTIME_FUNCTION(Runtime_PrepareStep) {
 // Clear all stepping set by PrepareStep.
 RUNTIME_FUNCTION(Runtime_ClearStepping) {
   HandleScope scope(isolate);
-  ASSERT(args.length() == 0);
+  DCHECK(args.length() == 0);
   isolate->debug()->ClearStepping();
   return isolate->heap()->undefined_value();
 }
@@ -12871,11 +12906,11 @@ MUST_USE_RESULT static MaybeHandle<JSObject> MaterializeArgumentsObject(
     Handle<JSFunction> function) {
   // Do not materialize the arguments object for eval or top-level code.
   // Skip if "arguments" is already taken.
-  if (!function->shared()->is_function() ||
-      JSReceiver::HasLocalProperty(target,
-                                   isolate->factory()->arguments_string())) {
-    return target;
-  }
+  if (!function->shared()->is_function()) return target;
+  Maybe<bool> maybe = JSReceiver::HasOwnProperty(
+      target, isolate->factory()->arguments_string());
+  if (!maybe.has_value) return MaybeHandle<JSObject>();
+  if (maybe.value) return target;

   // FunctionGetArguments can't throw an exception.
   Handle<JSObject> arguments = Handle<JSObject>::cast(
@@ -12883,8 +12918,7 @@ MUST_USE_RESULT static MaybeHandle<JSObject> MaterializeArgumentsObject(
   Handle<String> arguments_str = isolate->factory()->arguments_string();
   RETURN_ON_EXCEPTION(
       isolate,
-      Runtime::SetObjectProperty(
-          isolate, target, arguments_str, arguments, ::NONE, SLOPPY),
+      Runtime::DefineObjectProperty(target, arguments_str, arguments, NONE),
       JSObject);
   return target;
 }
@@ -12921,7 +12955,9 @@ static MaybeHandle<Object> DebugEvaluate(Isolate* isolate,
   // Skip the global proxy as it has no properties and always delegates to the
   // real global object.
   if (result->IsJSGlobalProxy()) {
-    result = Handle<JSObject>(JSObject::cast(result->GetPrototype(isolate)));
+    PrototypeIterator iter(isolate, result);
+    // TODO(verwaest): This will crash when the global proxy is detached.
+    result = Handle<JSObject>::cast(PrototypeIterator::GetCurrent(iter));
   }

   // Clear the oneshot breakpoints so that the debugger does not step further.
@@ -12940,7 +12976,7 @@ RUNTIME_FUNCTION(Runtime_DebugEvaluate) {

   // Check the execution state and decode arguments frame and source to be
   // evaluated.
-  ASSERT(args.length() == 6);
+  DCHECK(args.length() == 6);
   CONVERT_NUMBER_CHECKED(int, break_id, Int32, args[0]);
   RUNTIME_ASSERT(CheckExecutionState(isolate, break_id));
@@ -12951,7 +12987,7 @@ RUNTIME_FUNCTION(Runtime_DebugEvaluate) {
   CONVERT_ARG_HANDLE_CHECKED(Object, context_extension, 5);

   // Handle the processing of break.
-  DisableBreak disable_break_save(isolate, disable_break);
+  DisableBreak disable_break_scope(isolate->debug(), disable_break);

   // Get the frame where the debugging is performed.
   StackFrame::Id id = UnwrapFrameId(wrapped_id);
@@ -12969,7 +13005,7 @@ RUNTIME_FUNCTION(Runtime_DebugEvaluate) {

   // Evaluate on the context of the frame.
   Handle<Context> context(Context::cast(frame->context()));
-  ASSERT(!context.is_null());
+  DCHECK(!context.is_null());

   // Materialize stack locals and the arguments object.
   Handle<JSObject> materialized =
@@ -13006,7 +13042,7 @@ RUNTIME_FUNCTION(Runtime_DebugEvaluateGlobal) {

   // Check the execution state and decode arguments frame and source to be
   // evaluated.
-  ASSERT(args.length() == 4);
+  DCHECK(args.length() == 4);
   CONVERT_NUMBER_CHECKED(int, break_id, Int32, args[0]);
   RUNTIME_ASSERT(CheckExecutionState(isolate, break_id));
@@ -13015,7 +13051,7 @@ RUNTIME_FUNCTION(Runtime_DebugEvaluateGlobal) {
   CONVERT_ARG_HANDLE_CHECKED(Object, context_extension, 3);

   // Handle the processing of break.
-  DisableBreak disable_break_save(isolate, disable_break);
+  DisableBreak disable_break_scope(isolate->debug(), disable_break);

   // Enter the top context from before the debugger was invoked.
   SaveContext save(isolate);
@@ -13030,7 +13066,7 @@ RUNTIME_FUNCTION(Runtime_DebugEvaluateGlobal) {
   // Get the native context now set to the top context from before the
   // debugger was invoked.
   Handle<Context> context = isolate->native_context();
-  Handle<Object> receiver = isolate->global_object();
+  Handle<JSObject> receiver(context->global_proxy());
   Handle<Object> result;
   ASSIGN_RETURN_FAILURE_ON_EXCEPTION(
       isolate, result,
@@ -13041,7 +13077,7 @@ RUNTIME_FUNCTION(Runtime_DebugEvaluateGlobal) {

 RUNTIME_FUNCTION(Runtime_DebugGetLoadedScripts) {
   HandleScope scope(isolate);
-  ASSERT(args.length() == 0);
+  DCHECK(args.length() == 0);

   // Fill the script objects.
   Handle<FixedArray> instances = isolate->debug()->GetLoadedScripts();
@@ -13097,17 +13133,12 @@ static int DebugReferencedBy(HeapIterator* iterator,
     // Check instance filter if supplied. This is normally used to avoid
     // references from mirror objects (see Runtime_IsInPrototypeChain).
     if (!instance_filter->IsUndefined()) {
-      Object* V = obj;
-      while (true) {
-        Object* prototype = V->GetPrototype(isolate);
-        if (prototype->IsNull()) {
-          break;
-        }
-        if (instance_filter == prototype) {
+      for (PrototypeIterator iter(isolate, obj); !iter.IsAtEnd();
+           iter.Advance()) {
+        if (iter.GetCurrent() == instance_filter) {
           obj = NULL;  // Don't add this object.
           break;
         }
-        V = prototype;
       }
     }
@@ -13143,15 +13174,7 @@ static int DebugReferencedBy(HeapIterator* iterator,
 // args[2]: the the maximum number of objects to return
 RUNTIME_FUNCTION(Runtime_DebugReferencedBy) {
   HandleScope scope(isolate);
-  ASSERT(args.length() == 3);
-
-  // First perform a full GC in order to avoid references from dead objects.
-  Heap* heap = isolate->heap();
-  heap->CollectAllGarbage(Heap::kMakeHeapIterableMask, "%DebugReferencedBy");
-  // The heap iterator reserves the right to do a GC to make the heap iterable.
-  // Due to the GC above we know it won't need to do that, but it seems cleaner
-  // to get the heap iterator constructed before we start having unprotected
-  // Object* locals that are not protected by handles.
+  DCHECK(args.length() == 3);

   // Check parameters.
   CONVERT_ARG_HANDLE_CHECKED(JSObject, target, 0);
@@ -13163,32 +13186,35 @@ RUNTIME_FUNCTION(Runtime_DebugReferencedBy) {

   // Get the constructor function for context extension and arguments array.
-  Handle<JSObject> arguments_boilerplate(
-      isolate->context()->native_context()->sloppy_arguments_boilerplate());
   Handle<JSFunction> arguments_function(
-      JSFunction::cast(arguments_boilerplate->map()->constructor()));
+      JSFunction::cast(isolate->sloppy_arguments_map()->constructor()));

   // Get the number of referencing objects.
   int count;
-  HeapIterator heap_iterator(heap);
-  count = DebugReferencedBy(&heap_iterator,
-                            *target, *instance_filter, max_references,
-                            NULL, 0, *arguments_function);
+  // First perform a full GC in order to avoid dead objects and to make the heap
+  // iterable.
+  Heap* heap = isolate->heap();
+  heap->CollectAllGarbage(Heap::kMakeHeapIterableMask, "%DebugConstructedBy");
+  {
+    HeapIterator heap_iterator(heap);
+    count = DebugReferencedBy(&heap_iterator,
+                              *target, *instance_filter, max_references,
+                              NULL, 0, *arguments_function);
+  }

   // Allocate an array to hold the result.
   Handle<FixedArray> instances = isolate->factory()->NewFixedArray(count);

   // Fill the referencing objects.
-  // AllocateFixedArray above does not make the heap non-iterable.
-  ASSERT(heap->IsHeapIterable());
-  HeapIterator heap_iterator2(heap);
-  count = DebugReferencedBy(&heap_iterator2,
-                            *target, *instance_filter, max_references,
-                            *instances, count, *arguments_function);
+  {
+    HeapIterator heap_iterator(heap);
+    count = DebugReferencedBy(&heap_iterator,
+                              *target, *instance_filter, max_references,
+                              *instances, count, *arguments_function);
+  }

   // Return result as JS array.
-  Handle<JSFunction> constructor(
-      isolate->context()->native_context()->array_function());
+  Handle<JSFunction> constructor = isolate->array_function();

   Handle<JSObject> result = isolate->factory()->NewJSObject(constructor);
   JSArray::SetContent(Handle<JSArray>::cast(result), instances);
@@ -13233,11 +13259,8 @@ static int DebugConstructedBy(HeapIterator* iterator,
 // args[1]: the the maximum number of objects to return
 RUNTIME_FUNCTION(Runtime_DebugConstructedBy) {
   HandleScope scope(isolate);
-  ASSERT(args.length() == 2);
+  DCHECK(args.length() == 2);

-  // First perform a full GC in order to avoid dead objects.
-  Heap* heap = isolate->heap();
-  heap->CollectAllGarbage(Heap::kMakeHeapIterableMask, "%DebugConstructedBy");

   // Check parameters.
   CONVERT_ARG_HANDLE_CHECKED(JSFunction, constructor, 0);
@@ -13246,28 +13269,34 @@ RUNTIME_FUNCTION(Runtime_DebugConstructedBy) {

   // Get the number of referencing objects.
   int count;
-  HeapIterator heap_iterator(heap);
-  count = DebugConstructedBy(&heap_iterator,
-                             *constructor,
-                             max_references,
-                             NULL,
-                             0);
+  // First perform a full GC in order to avoid dead objects and to make the heap
+  // iterable.
+  Heap* heap = isolate->heap();
+  heap->CollectAllGarbage(Heap::kMakeHeapIterableMask, "%DebugConstructedBy");
+  {
+    HeapIterator heap_iterator(heap);
+    count = DebugConstructedBy(&heap_iterator,
+                               *constructor,
+                               max_references,
+                               NULL,
+                               0);
+  }

   // Allocate an array to hold the result.
   Handle<FixedArray> instances = isolate->factory()->NewFixedArray(count);

-  ASSERT(heap->IsHeapIterable());
   // Fill the referencing objects.
-  HeapIterator heap_iterator2(heap);
-  count = DebugConstructedBy(&heap_iterator2,
-                             *constructor,
-                             max_references,
-                             *instances,
-                             count);
+  {
+    HeapIterator heap_iterator2(heap);
+    count = DebugConstructedBy(&heap_iterator2,
+                               *constructor,
+                               max_references,
+                               *instances,
+                               count);
+  }

   // Return result as JS array.
-  Handle<JSFunction> array_function(
-      isolate->context()->native_context()->array_function());
+  Handle<JSFunction> array_function = isolate->array_function();
   Handle<JSObject> result = isolate->factory()->NewJSObject(array_function);
   JSArray::SetContent(Handle<JSArray>::cast(result), instances);
   return *result;
@@ -13278,7 +13307,7 @@ RUNTIME_FUNCTION(Runtime_DebugConstructedBy) {
 // args[0]: the object to find the prototype for.
 RUNTIME_FUNCTION(Runtime_DebugGetPrototype) {
   HandleScope shs(isolate);
-  ASSERT(args.length() == 1);
+  DCHECK(args.length() == 1);
   CONVERT_ARG_HANDLE_CHECKED(JSObject, obj, 0);
   return *GetPrototypeSkipHiddenPrototypes(isolate, obj);
 }
@@ -13287,7 +13316,7 @@ RUNTIME_FUNCTION(Runtime_DebugGetPrototype) {
 // Patches script source (should be called upon BeforeCompile event).
 RUNTIME_FUNCTION(Runtime_DebugSetScriptSource) {
   HandleScope scope(isolate);
-  ASSERT(args.length() == 2);
+  DCHECK(args.length() == 2);

   CONVERT_ARG_HANDLE_CHECKED(JSValue, script_wrapper, 0);
   CONVERT_ARG_HANDLE_CHECKED(String, source, 1);
@@ -13305,8 +13334,8 @@ RUNTIME_FUNCTION(Runtime_DebugSetScriptSource) {

 RUNTIME_FUNCTION(Runtime_SystemBreak) {
   SealHandleScope shs(isolate);
-  ASSERT(args.length() == 0);
-  OS::DebugBreak();
+  DCHECK(args.length() == 0);
+  base::OS::DebugBreak();
   return isolate->heap()->undefined_value();
 }
@@ -13314,13 +13343,15 @@ RUNTIME_FUNCTION(Runtime_SystemBreak) {

 RUNTIME_FUNCTION(Runtime_DebugDisassembleFunction) {
   HandleScope scope(isolate);
 #ifdef DEBUG
-  ASSERT(args.length() == 1);
+  DCHECK(args.length() == 1);
   // Get the function and make sure it is compiled.
   CONVERT_ARG_HANDLE_CHECKED(JSFunction, func, 0);
   if (!Compiler::EnsureCompiled(func, KEEP_EXCEPTION)) {
     return isolate->heap()->exception();
   }
-  func->code()->PrintLn();
+  OFStream os(stdout);
+  func->code()->Print(os);
+  os << endl;
 #endif  // DEBUG
   return isolate->heap()->undefined_value();
 }
@@ -13329,13 +13360,15 @@ RUNTIME_FUNCTION(Runtime_DebugDisassembleFunction) {
 RUNTIME_FUNCTION(Runtime_DebugDisassembleConstructor) {
   HandleScope scope(isolate);
 #ifdef DEBUG
-  ASSERT(args.length() == 1);
+  DCHECK(args.length() == 1);
   // Get the function and make sure it is compiled.
   CONVERT_ARG_HANDLE_CHECKED(JSFunction, func, 0);
   if (!Compiler::EnsureCompiled(func, KEEP_EXCEPTION)) {
     return isolate->heap()->exception();
   }
-  func->shared()->construct_stub()->PrintLn();
+  OFStream os(stdout);
+  func->shared()->construct_stub()->Print(os);
+  os << endl;
 #endif  // DEBUG
   return isolate->heap()->undefined_value();
 }
@@ -13343,7 +13376,7 @@ RUNTIME_FUNCTION(Runtime_DebugDisassembleConstructor) {

 RUNTIME_FUNCTION(Runtime_FunctionGetInferredName) {
   SealHandleScope shs(isolate);
-  ASSERT(args.length() == 1);
+  DCHECK(args.length() == 1);

   CONVERT_ARG_CHECKED(JSFunction, f, 0);
   return f->shared()->inferred_name();
@@ -13359,7 +13392,7 @@ static int FindSharedFunctionInfosForScript(HeapIterator* iterator,
   for (HeapObject* obj = iterator->next();
        obj != NULL;
        obj = iterator->next()) {
-    ASSERT(obj != NULL);
+    DCHECK(obj != NULL);
     if (!obj->IsSharedFunctionInfo()) {
       continue;
     }
@@ -13381,8 +13414,8 @@ static int FindSharedFunctionInfosForScript(HeapIterator* iterator,
 // in OpaqueReferences.
 RUNTIME_FUNCTION(Runtime_LiveEditFindSharedFunctionInfosForScript) {
   HandleScope scope(isolate);
-  CHECK(isolate->debugger()->live_edit_enabled());
-  ASSERT(args.length() == 1);
+  CHECK(isolate->debug()->live_edit_enabled());
+  DCHECK(args.length() == 1);
   CONVERT_ARG_CHECKED(JSValue, script_value, 0);
   RUNTIME_ASSERT(script_value->value()->IsScript());
@@ -13395,8 +13428,6 @@ RUNTIME_FUNCTION(Runtime_LiveEditFindSharedFunctionInfosForScript) {
   int number;
   Heap* heap = isolate->heap();
   {
-    heap->EnsureHeapIsIterable();
-    DisallowHeapAllocation no_allocation;
     HeapIterator heap_iterator(heap);
     Script* scr = *script;
     FixedArray* arr = *array;
@@ -13404,8 +13435,6 @@ RUNTIME_FUNCTION(Runtime_LiveEditFindSharedFunctionInfosForScript) {
   }
   if (number > kBufferSize) {
     array = isolate->factory()->NewFixedArray(number);
-    heap->EnsureHeapIsIterable();
-    DisallowHeapAllocation no_allocation;
     HeapIterator heap_iterator(heap);
     Script* scr = *script;
     FixedArray* arr = *array;
@@ -13430,8 +13459,8 @@ RUNTIME_FUNCTION(Runtime_LiveEditFindSharedFunctionInfosForScript) {
 // with the function itself going first. The root function is a script function.
 RUNTIME_FUNCTION(Runtime_LiveEditGatherCompileInfo) {
   HandleScope scope(isolate);
-  CHECK(isolate->debugger()->live_edit_enabled());
-  ASSERT(args.length() == 2);
+  CHECK(isolate->debug()->live_edit_enabled());
+  DCHECK(args.length() == 2);
   CONVERT_ARG_CHECKED(JSValue, script, 0);
   CONVERT_ARG_HANDLE_CHECKED(String, source, 1);
@@ -13450,8 +13479,8 @@ RUNTIME_FUNCTION(Runtime_LiveEditGatherCompileInfo) {
 // the script with its original source and sends notification to debugger.
 RUNTIME_FUNCTION(Runtime_LiveEditReplaceScript) {
   HandleScope scope(isolate);
-  CHECK(isolate->debugger()->live_edit_enabled());
-  ASSERT(args.length() == 3);
+  CHECK(isolate->debug()->live_edit_enabled());
+  DCHECK(args.length() == 3);
   CONVERT_ARG_CHECKED(JSValue, original_script_value, 0);
   CONVERT_ARG_HANDLE_CHECKED(String, new_source, 1);
   CONVERT_ARG_HANDLE_CHECKED(Object, old_script_name, 2);
@@ -13473,8 +13502,8 @@ RUNTIME_FUNCTION(Runtime_LiveEditReplaceScript) {

 RUNTIME_FUNCTION(Runtime_LiveEditFunctionSourceUpdated) {
   HandleScope scope(isolate);
-  CHECK(isolate->debugger()->live_edit_enabled());
-  ASSERT(args.length() == 1);
+  CHECK(isolate->debug()->live_edit_enabled());
+  DCHECK(args.length() == 1);
   CONVERT_ARG_HANDLE_CHECKED(JSArray, shared_info, 0);
   RUNTIME_ASSERT(SharedInfoWrapper::IsInstance(shared_info));
@@ -13486,8 +13515,8 @@ RUNTIME_FUNCTION(Runtime_LiveEditFunctionSourceUpdated) {
 // Replaces code of SharedFunctionInfo with a new one.
 RUNTIME_FUNCTION(Runtime_LiveEditReplaceFunctionCode) {
   HandleScope scope(isolate);
-  CHECK(isolate->debugger()->live_edit_enabled());
-  ASSERT(args.length() == 2);
+  CHECK(isolate->debug()->live_edit_enabled());
+  DCHECK(args.length() == 2);
   CONVERT_ARG_HANDLE_CHECKED(JSArray, new_compile_info, 0);
   CONVERT_ARG_HANDLE_CHECKED(JSArray, shared_info, 1);
   RUNTIME_ASSERT(SharedInfoWrapper::IsInstance(shared_info));
@@ -13500,8 +13529,8 @@ RUNTIME_FUNCTION(Runtime_LiveEditReplaceFunctionCode) {
 // Connects SharedFunctionInfo to another script.
 RUNTIME_FUNCTION(Runtime_LiveEditFunctionSetScript) {
   HandleScope scope(isolate);
-  CHECK(isolate->debugger()->live_edit_enabled());
-  ASSERT(args.length() == 2);
+  CHECK(isolate->debug()->live_edit_enabled());
+  DCHECK(args.length() == 2);
   CONVERT_ARG_HANDLE_CHECKED(Object, function_object, 0);
   CONVERT_ARG_HANDLE_CHECKED(Object, script_object, 1);
@@ -13512,7 +13541,7 @@ RUNTIME_FUNCTION(Runtime_LiveEditFunctionSetScript) {
       Script* script = Script::cast(JSValue::cast(*script_object)->value());
       script_object = Handle<Object>(script, isolate);
     }
-
+    RUNTIME_ASSERT(function_wrapper->value()->IsSharedFunctionInfo());
     LiveEdit::SetFunctionScript(function_wrapper, script_object);
   } else {
     // Just ignore this. We may not have a SharedFunctionInfo for some functions
@@ -13527,8 +13556,8 @@ RUNTIME_FUNCTION(Runtime_LiveEditFunctionSetScript) {
 // with a substitution one.
 RUNTIME_FUNCTION(Runtime_LiveEditReplaceRefToNestedFunction) {
   HandleScope scope(isolate);
-  CHECK(isolate->debugger()->live_edit_enabled());
-  ASSERT(args.length() == 3);
+  CHECK(isolate->debug()->live_edit_enabled());
+  DCHECK(args.length() == 3);

   CONVERT_ARG_HANDLE_CHECKED(JSValue, parent_wrapper, 0);
   CONVERT_ARG_HANDLE_CHECKED(JSValue, orig_wrapper, 1);
@@ -13550,8 +13579,8 @@ RUNTIME_FUNCTION(Runtime_LiveEditReplaceRefToNestedFunction) {
 // Each group describes a change in text; groups are sorted by change_begin.
 RUNTIME_FUNCTION(Runtime_LiveEditPatchFunctionPositions) {
   HandleScope scope(isolate);
-  CHECK(isolate->debugger()->live_edit_enabled());
-  ASSERT(args.length() == 2);
+  CHECK(isolate->debug()->live_edit_enabled());
+  DCHECK(args.length() == 2);
   CONVERT_ARG_HANDLE_CHECKED(JSArray, shared_array, 0);
   CONVERT_ARG_HANDLE_CHECKED(JSArray, position_change_array, 1);
   RUNTIME_ASSERT(SharedInfoWrapper::IsInstance(shared_array))
@@ -13567,11 +13596,12 @@ RUNTIME_FUNCTION(Runtime_LiveEditPatchFunctionPositions) {
 // LiveEdit::FunctionPatchabilityStatus type.
 RUNTIME_FUNCTION(Runtime_LiveEditCheckAndDropActivations) {
   HandleScope scope(isolate);
-  CHECK(isolate->debugger()->live_edit_enabled());
-  ASSERT(args.length() == 2);
+  CHECK(isolate->debug()->live_edit_enabled());
+  DCHECK(args.length() == 2);
   CONVERT_ARG_HANDLE_CHECKED(JSArray, shared_array, 0);
   CONVERT_BOOLEAN_ARG_CHECKED(do_drop, 1);
   RUNTIME_ASSERT(shared_array->length()->IsSmi());
+  RUNTIME_ASSERT(shared_array->HasFastElements())
   int array_length = Smi::cast(shared_array->length())->value();
   for (int i = 0; i < array_length; i++) {
     Handle<Object> element =
@@ -13590,8 +13620,8 @@ RUNTIME_FUNCTION(Runtime_LiveEditCheckAndDropActivations) {
 // of diff chunks.
 RUNTIME_FUNCTION(Runtime_LiveEditCompareStrings) {
   HandleScope scope(isolate);
-  CHECK(isolate->debugger()->live_edit_enabled());
-  ASSERT(args.length() == 2);
+  CHECK(isolate->debug()->live_edit_enabled());
+  DCHECK(args.length() == 2);
   CONVERT_ARG_HANDLE_CHECKED(String, s1, 0);
   CONVERT_ARG_HANDLE_CHECKED(String, s2, 1);
@@ -13603,8 +13633,8 @@ RUNTIME_FUNCTION(Runtime_LiveEditCompareStrings) {
 // Returns true if successful. Otherwise returns undefined or an error message.
 RUNTIME_FUNCTION(Runtime_LiveEditRestartFrame) {
   HandleScope scope(isolate);
-  CHECK(isolate->debugger()->live_edit_enabled());
-  ASSERT(args.length() == 2);
+  CHECK(isolate->debug()->live_edit_enabled());
+  DCHECK(args.length() == 2);
   CONVERT_NUMBER_CHECKED(int, break_id, Int32, args[0]);
   RUNTIME_ASSERT(CheckExecutionState(isolate, break_id));
@@ -13618,14 +13648,11 @@ RUNTIME_FUNCTION(Runtime_LiveEditRestartFrame) {
     return heap->undefined_value();
   }

-  int count = 0;
   JavaScriptFrameIterator it(isolate, id);
-  for (; !it.done(); it.Advance()) {
-    if (index < count + it.frame()->GetInlineCount()) break;
-    count += it.frame()->GetInlineCount();
-  }
-  if (it.done()) return heap->undefined_value();
-
+  int inlined_jsframe_index = FindIndexedNonNativeFrame(&it, index);
+  if (inlined_jsframe_index == -1) return heap->undefined_value();
+  // We don't really care what the inlined frame index is, since we are
+  // throwing away the entire frame anyways.
   const char* error_message = LiveEdit::RestartFrame(it.frame());
   if (error_message) {
     return *(isolate->factory()->InternalizeUtf8String(error_message));
@@ -13638,8 +13665,8 @@ RUNTIME_FUNCTION(Runtime_LiveEditRestartFrame) {
 // source_position.
 RUNTIME_FUNCTION(Runtime_GetFunctionCodePositionFromSource) {
   HandleScope scope(isolate);
-  CHECK(isolate->debugger()->live_edit_enabled());
-  ASSERT(args.length() == 2);
+  CHECK(isolate->debug()->live_edit_enabled());
+  DCHECK(args.length() == 2);
   CONVERT_ARG_HANDLE_CHECKED(JSFunction, function, 0);
   CONVERT_NUMBER_CHECKED(int32_t, source_position, Int32, args[1]);
@@ -13676,7 +13703,7 @@ RUNTIME_FUNCTION(Runtime_GetFunctionCodePositionFromSource) {
 // to have a stack with C++ frame in the middle.
 RUNTIME_FUNCTION(Runtime_ExecuteInDebugContext) {
   HandleScope scope(isolate);
-  ASSERT(args.length() == 2);
+  DCHECK(args.length() == 2);
   CONVERT_ARG_HANDLE_CHECKED(JSFunction, function, 0);
   CONVERT_BOOLEAN_ARG_CHECKED(without_debugger, 1);
@@ -13684,14 +13711,14 @@ RUNTIME_FUNCTION(Runtime_ExecuteInDebugContext) {
   if (without_debugger) {
     maybe_result = Execution::Call(isolate,
                                    function,
-                                   isolate->global_object(),
+                                   handle(function->global_proxy()),
                                    0,
                                    NULL);
   } else {
-    EnterDebugger enter_debugger(isolate);
+    DebugScope debug_scope(isolate->debug());
     maybe_result = Execution::Call(isolate,
                                    function,
-                                   isolate->global_object(),
+                                   handle(function->global_proxy()),
                                    0,
                                    NULL);
   }
@@ -13704,7 +13731,7 @@ RUNTIME_FUNCTION(Runtime_ExecuteInDebugContext) {
 // Sets a v8 flag.
 RUNTIME_FUNCTION(Runtime_SetFlags) {
   SealHandleScope shs(isolate);
-  ASSERT(args.length() == 1);
+  DCHECK(args.length() == 1);
   CONVERT_ARG_CHECKED(String, arg, 0);
   SmartArrayPointer<char> flags =
       arg->ToCString(DISALLOW_NULLS, ROBUST_STRING_TRAVERSAL);
@@ -13717,7 +13744,7 @@ RUNTIME_FUNCTION(Runtime_SetFlags) {
 // Presently, it only does a full GC.
 RUNTIME_FUNCTION(Runtime_CollectGarbage) {
   SealHandleScope shs(isolate);
-  ASSERT(args.length() == 1);
+  DCHECK(args.length() == 1);
   isolate->heap()->CollectAllGarbage(Heap::kNoGCFlags, "%CollectGarbage");
   return isolate->heap()->undefined_value();
 }
@@ -13726,7 +13753,7 @@ RUNTIME_FUNCTION(Runtime_CollectGarbage) {
 // Gets the current heap usage.
 RUNTIME_FUNCTION(Runtime_GetHeapUsage) {
   SealHandleScope shs(isolate);
-  ASSERT(args.length() == 0);
+  DCHECK(args.length() == 0);
   int usage = static_cast<int>(isolate->heap()->SizeOfObjects());
   if (!Smi::IsValid(usage)) {
     return *isolate->factory()->NewNumberFromInt(usage);
@@ -13740,7 +13767,7 @@ RUNTIME_FUNCTION(Runtime_CanonicalizeLanguageTag) {
   HandleScope scope(isolate);
   Factory* factory = isolate->factory();

-  ASSERT(args.length() == 1);
+  DCHECK(args.length() == 1);
   CONVERT_ARG_HANDLE_CHECKED(String, locale_id_str, 0);

   v8::String::Utf8Value locale_id(v8::Utils::ToLocal(locale_id_str));
@@ -13775,7 +13802,7 @@ RUNTIME_FUNCTION(Runtime_AvailableLocalesOf) {
   HandleScope scope(isolate);
   Factory* factory = isolate->factory();

-  ASSERT(args.length() == 1);
+  DCHECK(args.length() == 1);
   CONVERT_ARG_HANDLE_CHECKED(String, service, 0);

   const icu::Locale* available_locales = NULL;
@@ -13808,7 +13835,7 @@ RUNTIME_FUNCTION(Runtime_AvailableLocalesOf) {
     }

     RETURN_FAILURE_ON_EXCEPTION(isolate,
-        JSObject::SetLocalPropertyIgnoreAttributes(
+        JSObject::SetOwnPropertyIgnoreAttributes(
             locales,
             factory->NewStringFromAsciiChecked(result),
             factory->NewNumber(i),
             NONE));
   }
@@ -13823,7 +13850,7 @@ RUNTIME_FUNCTION(Runtime_GetDefaultICULocale) {
   HandleScope scope(isolate);
   Factory* factory = isolate->factory();

-  ASSERT(args.length() == 0);
+  DCHECK(args.length() == 0);

   icu::Locale default_locale;
@@ -13844,7 +13871,7 @@ RUNTIME_FUNCTION(Runtime_GetLanguageTagVariants) {
   HandleScope scope(isolate);
   Factory* factory = isolate->factory();

-  ASSERT(args.length() == 1);
+  DCHECK(args.length() == 1);

   CONVERT_ARG_HANDLE_CHECKED(JSArray, input, 0);
@@ -13912,18 +13939,10 @@ RUNTIME_FUNCTION(Runtime_GetLanguageTagVariants) {
     }

     Handle<JSObject> result = factory->NewJSObject(isolate->object_function());
-    RETURN_FAILURE_ON_EXCEPTION(isolate,
-        JSObject::SetLocalPropertyIgnoreAttributes(
-            result,
-            maximized,
-            factory->NewStringFromAsciiChecked(base_max_locale),
-            NONE));
-    RETURN_FAILURE_ON_EXCEPTION(isolate,
-        JSObject::SetLocalPropertyIgnoreAttributes(
-            result,
-            base,
-            factory->NewStringFromAsciiChecked(base_locale),
-            NONE));
+    Handle<String> value = factory->NewStringFromAsciiChecked(base_max_locale);
+    JSObject::AddProperty(result, maximized, value, NONE);
+    value = factory->NewStringFromAsciiChecked(base_locale);
+    JSObject::AddProperty(result, base, value, NONE);

     output->set(i, *result);
   }
@@ -13936,7 +13955,7 @@ RUNTIME_FUNCTION(Runtime_GetLanguageTagVariants) {

 RUNTIME_FUNCTION(Runtime_IsInitializedIntlObject) {
   HandleScope scope(isolate);

-  ASSERT(args.length() == 1);
+  DCHECK(args.length() == 1);

   CONVERT_ARG_HANDLE_CHECKED(Object, input, 0);
@@ -13952,7 +13971,7 @@ RUNTIME_FUNCTION(Runtime_IsInitializedIntlObject) {

 RUNTIME_FUNCTION(Runtime_IsInitializedIntlObjectOfType) {
   HandleScope scope(isolate);

-  ASSERT(args.length() == 2);
+  DCHECK(args.length() == 2);

   CONVERT_ARG_HANDLE_CHECKED(Object, input, 0);
   CONVERT_ARG_HANDLE_CHECKED(String, expected_type, 1);
@@ -13970,7 +13989,7 @@ RUNTIME_FUNCTION(Runtime_IsInitializedIntlObjectOfType) {

 RUNTIME_FUNCTION(Runtime_MarkAsInitializedIntlObjectOfType) {
   HandleScope scope(isolate);

-  ASSERT(args.length() == 3);
+  DCHECK(args.length() == 3);

   CONVERT_ARG_HANDLE_CHECKED(JSObject, input, 0);
   CONVERT_ARG_HANDLE_CHECKED(String, type, 1);
@@ -13989,7 +14008,7 @@ RUNTIME_FUNCTION(Runtime_MarkAsInitializedIntlObjectOfType) {

 RUNTIME_FUNCTION(Runtime_GetImplFromInitializedIntlObject) {
   HandleScope scope(isolate);

-  ASSERT(args.length() == 1);
+  DCHECK(args.length() == 1);

   CONVERT_ARG_HANDLE_CHECKED(Object, input, 0);
@@ -14017,7 +14036,7 @@ RUNTIME_FUNCTION(Runtime_GetImplFromInitializedIntlObject) {

 RUNTIME_FUNCTION(Runtime_CreateDateTimeFormat) {
   HandleScope scope(isolate);

-  ASSERT(args.length() == 3);
+  DCHECK(args.length() == 3);

   CONVERT_ARG_HANDLE_CHECKED(String, locale, 0);
   CONVERT_ARG_HANDLE_CHECKED(JSObject, options, 1);
@@ -14040,12 +14059,10 @@ RUNTIME_FUNCTION(Runtime_CreateDateTimeFormat) {

   local_object->SetInternalField(0, reinterpret_cast<Smi*>(date_format));

-  RETURN_FAILURE_ON_EXCEPTION(isolate,
-      JSObject::SetLocalPropertyIgnoreAttributes(
-          local_object,
-          isolate->factory()->NewStringFromStaticAscii("dateFormat"),
-          isolate->factory()->NewStringFromStaticAscii("valid"),
-          NONE));
+  Factory* factory = isolate->factory();
+  Handle<String> key = factory->NewStringFromStaticAscii("dateFormat");
+  Handle<String> value = factory->NewStringFromStaticAscii("valid");
+  JSObject::AddProperty(local_object, key, value, NONE);

   // Make object handle weak so we can delete the data format once GC kicks in.
   Handle<Object> wrapper = isolate->global_handles()->Create(*local_object);
@@ -14059,7 +14076,7 @@ RUNTIME_FUNCTION(Runtime_CreateDateTimeFormat) {

 RUNTIME_FUNCTION(Runtime_InternalDateFormat) {
   HandleScope scope(isolate);

-  ASSERT(args.length() == 2);
+  DCHECK(args.length() == 2);

   CONVERT_ARG_HANDLE_CHECKED(JSObject, date_format_holder, 0);
   CONVERT_ARG_HANDLE_CHECKED(JSDate, date, 1);
@@ -14089,7 +14106,7 @@ RUNTIME_FUNCTION(Runtime_InternalDateFormat) {

 RUNTIME_FUNCTION(Runtime_InternalDateParse) {
   HandleScope scope(isolate);

-  ASSERT(args.length() == 2);
+  DCHECK(args.length() == 2);

   CONVERT_ARG_HANDLE_CHECKED(JSObject, date_format_holder, 0);
   CONVERT_ARG_HANDLE_CHECKED(String, date_string, 1);
@@ -14108,7 +14125,7 @@ RUNTIME_FUNCTION(Runtime_InternalDateParse) {
   ASSIGN_RETURN_FAILURE_ON_EXCEPTION(
       isolate, result,
       Execution::NewDate(isolate, static_cast<double>(date)));
-  ASSERT(result->IsJSDate());
+  DCHECK(result->IsJSDate());
   return *result;
 }
@@ -14116,7 +14133,7 @@ RUNTIME_FUNCTION(Runtime_InternalDateParse) {

 RUNTIME_FUNCTION(Runtime_CreateNumberFormat) {
   HandleScope scope(isolate);

-  ASSERT(args.length() == 3);
+  DCHECK(args.length() == 3);

   CONVERT_ARG_HANDLE_CHECKED(String, locale, 0);
   CONVERT_ARG_HANDLE_CHECKED(JSObject, options, 1);
@@ -14139,12 +14156,10 @@ RUNTIME_FUNCTION(Runtime_CreateNumberFormat) {

   local_object->SetInternalField(0, reinterpret_cast<Smi*>(number_format));

-  RETURN_FAILURE_ON_EXCEPTION(isolate,
-      JSObject::SetLocalPropertyIgnoreAttributes(
-          local_object,
-          isolate->factory()->NewStringFromStaticAscii("numberFormat"),
-          isolate->factory()->NewStringFromStaticAscii("valid"),
-          NONE));
+  Factory* factory = isolate->factory();
+  Handle<String> key = factory->NewStringFromStaticAscii("numberFormat");
+  Handle<String> value = factory->NewStringFromStaticAscii("valid");
+  JSObject::AddProperty(local_object, key, value, NONE);

   Handle<Object> wrapper = isolate->global_handles()->Create(*local_object);
   GlobalHandles::MakeWeak(wrapper.location(),
@@ -14157,7 +14172,7 @@ RUNTIME_FUNCTION(Runtime_CreateNumberFormat) {

 RUNTIME_FUNCTION(Runtime_InternalNumberFormat) {
   HandleScope scope(isolate);

-  ASSERT(args.length() == 2);
+  DCHECK(args.length() == 2);

   CONVERT_ARG_HANDLE_CHECKED(JSObject, number_format_holder, 0);
   CONVERT_ARG_HANDLE_CHECKED(Object, number, 1);
@@ -14187,7 +14202,7 @@ RUNTIME_FUNCTION(Runtime_InternalNumberFormat) {

 RUNTIME_FUNCTION(Runtime_InternalNumberParse) {
   HandleScope scope(isolate);

-  ASSERT(args.length() == 2);
+  DCHECK(args.length() == 2);

   CONVERT_ARG_HANDLE_CHECKED(JSObject, number_format_holder, 0);
   CONVERT_ARG_HANDLE_CHECKED(String, number_string, 1);
@@ -14226,7 +14241,7 @@ RUNTIME_FUNCTION(Runtime_InternalNumberParse) {

 RUNTIME_FUNCTION(Runtime_CreateCollator) {
   HandleScope scope(isolate);

-  ASSERT(args.length() == 3);
+  DCHECK(args.length() == 3);

   CONVERT_ARG_HANDLE_CHECKED(String, locale, 0);
   CONVERT_ARG_HANDLE_CHECKED(JSObject, options, 1);
@@ -14247,12 +14262,10 @@ RUNTIME_FUNCTION(Runtime_CreateCollator) {

   local_object->SetInternalField(0, reinterpret_cast<Smi*>(collator));

-  RETURN_FAILURE_ON_EXCEPTION(isolate,
-      JSObject::SetLocalPropertyIgnoreAttributes(
-          local_object,
-          isolate->factory()->NewStringFromStaticAscii("collator"),
-          isolate->factory()->NewStringFromStaticAscii("valid"),
-          NONE));
+  Factory* factory = isolate->factory();
+  Handle<String> key = factory->NewStringFromStaticAscii("collator");
+  Handle<String> value = factory->NewStringFromStaticAscii("valid");
+  JSObject::AddProperty(local_object, key, value, NONE);
   Handle<Object> wrapper = isolate->global_handles()->Create(*local_object);
   GlobalHandles::MakeWeak(wrapper.location(),
@@ -14265,7 +14278,7 @@ RUNTIME_FUNCTION(Runtime_CreateCollator) {

 RUNTIME_FUNCTION(Runtime_InternalCompare) {
   HandleScope scope(isolate);

-  ASSERT(args.length() == 3);
+  DCHECK(args.length() == 3);

   CONVERT_ARG_HANDLE_CHECKED(JSObject, collator_holder, 0);
   CONVERT_ARG_HANDLE_CHECKED(String, string1, 1);
@@ -14295,7 +14308,7 @@ RUNTIME_FUNCTION(Runtime_StringNormalize) {
   static const UNormalizationMode normalizationForms[] =
       { UNORM_NFC, UNORM_NFD, UNORM_NFKC, UNORM_NFKD };

-  ASSERT(args.length() == 2);
+  DCHECK(args.length() == 2);

   CONVERT_ARG_HANDLE_CHECKED(String, stringValue, 0);
   CONVERT_NUMBER_CHECKED(int, form_id, Int32, args[1]);
@@ -14328,7 +14341,7 @@ RUNTIME_FUNCTION(Runtime_StringNormalize) {

 RUNTIME_FUNCTION(Runtime_CreateBreakIterator) {
   HandleScope scope(isolate);

-  ASSERT(args.length() == 3);
+  DCHECK(args.length() == 3);

   CONVERT_ARG_HANDLE_CHECKED(String, locale, 0);
   CONVERT_ARG_HANDLE_CHECKED(JSObject, options, 1);
@@ -14353,12 +14366,10 @@ RUNTIME_FUNCTION(Runtime_CreateBreakIterator) {
   // Make sure that the pointer to adopted text is NULL.
   local_object->SetInternalField(1, reinterpret_cast<Smi*>(NULL));

-  RETURN_FAILURE_ON_EXCEPTION(isolate,
-      JSObject::SetLocalPropertyIgnoreAttributes(
-          local_object,
-          isolate->factory()->NewStringFromStaticAscii("breakIterator"),
-          isolate->factory()->NewStringFromStaticAscii("valid"),
-          NONE));
+  Factory* factory = isolate->factory();
+  Handle<String> key = factory->NewStringFromStaticAscii("breakIterator");
+  Handle<String> value = factory->NewStringFromStaticAscii("valid");
+  JSObject::AddProperty(local_object, key, value, NONE);

   // Make object handle weak so we can delete the break iterator once GC kicks
   // in.
@@ -14373,7 +14384,7 @@ RUNTIME_FUNCTION(Runtime_CreateBreakIterator) {

 RUNTIME_FUNCTION(Runtime_BreakIteratorAdoptText) {
   HandleScope scope(isolate);

-  ASSERT(args.length() == 2);
+  DCHECK(args.length() == 2);

   CONVERT_ARG_HANDLE_CHECKED(JSObject, break_iterator_holder, 0);
   CONVERT_ARG_HANDLE_CHECKED(String, text, 1);
@@ -14400,7 +14411,7 @@ RUNTIME_FUNCTION(Runtime_BreakIteratorAdoptText) {

 RUNTIME_FUNCTION(Runtime_BreakIteratorFirst) {
   HandleScope scope(isolate);

-  ASSERT(args.length() == 1);
+  DCHECK(args.length() == 1);

   CONVERT_ARG_HANDLE_CHECKED(JSObject, break_iterator_holder, 0);
@@ -14415,7 +14426,7 @@ RUNTIME_FUNCTION(Runtime_BreakIteratorFirst) {

 RUNTIME_FUNCTION(Runtime_BreakIteratorNext) {
   HandleScope scope(isolate);

-  ASSERT(args.length() == 1);
+  DCHECK(args.length() == 1);

   CONVERT_ARG_HANDLE_CHECKED(JSObject, break_iterator_holder, 0);
@@ -14430,7 +14441,7 @@ RUNTIME_FUNCTION(Runtime_BreakIteratorNext) {

 RUNTIME_FUNCTION(Runtime_BreakIteratorCurrent) {
   HandleScope scope(isolate);

-  ASSERT(args.length() == 1);
+  DCHECK(args.length() == 1);

   CONVERT_ARG_HANDLE_CHECKED(JSObject, break_iterator_holder, 0);
@@ -14445,7 +14456,7 @@ RUNTIME_FUNCTION(Runtime_BreakIteratorCurrent) {

 RUNTIME_FUNCTION(Runtime_BreakIteratorBreakType) {
   HandleScope scope(isolate);

-  ASSERT(args.length() == 1);
+  DCHECK(args.length() == 1);

   CONVERT_ARG_HANDLE_CHECKED(JSObject, break_iterator_holder, 0);
@@ -14488,8 +14499,6 @@ static Handle<Object> Runtime_GetScriptFromScriptName(
   Handle<Script> script;
   Factory* factory = script_name->GetIsolate()->factory();
   Heap* heap = script_name->GetHeap();
-  heap->EnsureHeapIsIterable();
-  DisallowHeapAllocation no_allocation_during_heap_iteration;
   HeapIterator iterator(heap);
   HeapObject* obj = NULL;
   while (script.is_null() && ((obj = iterator.next()) != NULL)) {
@@ -14517,7 +14526,7 @@ static Handle<Object> Runtime_GetScriptFromScriptName(

 RUNTIME_FUNCTION(Runtime_GetScript) {
   HandleScope scope(isolate);

-  ASSERT(args.length() == 1);
+  DCHECK(args.length() == 1);

   CONVERT_ARG_CHECKED(String, script_name, 0);
@@ -14533,37 +14542,24 @@ RUNTIME_FUNCTION(Runtime_GetScript) {
 // native code offset.
 RUNTIME_FUNCTION(Runtime_CollectStackTrace) {
   HandleScope scope(isolate);
-  ASSERT(args.length() == 3);
+  DCHECK(args.length() == 2);
   CONVERT_ARG_HANDLE_CHECKED(JSObject, error_object, 0);
   CONVERT_ARG_HANDLE_CHECKED(Object, caller, 1);
-  CONVERT_NUMBER_CHECKED(int32_t, limit, Int32, args[2]);
-
-  // Optionally capture a more detailed stack trace for the message.
-  isolate->CaptureAndSetDetailedStackTrace(error_object);
-  // Capture a simple stack trace for the stack property.
-  return *isolate->CaptureSimpleStackTrace(error_object, caller, limit);
-}
-
-
-// Retrieve the stack trace.  This is the raw stack trace that yet has to
-// be formatted.  Since we only need this once, clear it afterwards.
-RUNTIME_FUNCTION(Runtime_GetAndClearOverflowedStackTrace) {
-  HandleScope scope(isolate);
-  ASSERT(args.length() == 1);
-  CONVERT_ARG_HANDLE_CHECKED(JSObject, error_object, 0);
-  Handle<String> key = isolate->factory()->hidden_stack_trace_string();
-  Handle<Object> result(error_object->GetHiddenProperty(key), isolate);
-  if (result->IsTheHole()) return isolate->heap()->undefined_value();
-  RUNTIME_ASSERT(result->IsJSArray() || result->IsUndefined());
-  JSObject::DeleteHiddenProperty(error_object, key);
-  return *result;
+  if (!isolate->bootstrapper()->IsActive()) {
+    // Optionally capture a more detailed stack trace for the message.
+    isolate->CaptureAndSetDetailedStackTrace(error_object);
+    // Capture a simple stack trace for the stack property.
+    isolate->CaptureAndSetSimpleStackTrace(error_object, caller);
+  }
+  return isolate->heap()->undefined_value();
 }


 // Returns V8 version as a string.
 RUNTIME_FUNCTION(Runtime_GetV8Version) {
   HandleScope scope(isolate);
-  ASSERT(args.length() == 0);
+  DCHECK(args.length() == 0);

   const char* version_string = v8::V8::GetVersion();
@@ -14573,13 +14569,13 @@ RUNTIME_FUNCTION(Runtime_GetV8Version) {

 RUNTIME_FUNCTION(Runtime_Abort) {
   SealHandleScope shs(isolate);
-  ASSERT(args.length() == 1);
+  DCHECK(args.length() == 1);
   CONVERT_SMI_ARG_CHECKED(message_id, 0);
   const char* message = GetBailoutReason(
       static_cast<BailoutReason>(message_id));
-  OS::PrintError("abort: %s\n", message);
+  base::OS::PrintError("abort: %s\n", message);
   isolate->PrintStack(stderr);
-  OS::Abort();
+  base::OS::Abort();
   UNREACHABLE();
   return NULL;
 }
@@ -14587,11 +14583,11 @@ RUNTIME_FUNCTION(Runtime_Abort) {

 RUNTIME_FUNCTION(Runtime_AbortJS) {
   HandleScope scope(isolate);
-  ASSERT(args.length() == 1);
+  DCHECK(args.length() == 1);
   CONVERT_ARG_HANDLE_CHECKED(String, message, 0);
-  OS::PrintError("abort: %s\n", message->ToCString().get());
+  base::OS::PrintError("abort: %s\n", message->ToCString().get());
   isolate->PrintStack(stderr);
-  OS::Abort();
+  base::OS::Abort();
   UNREACHABLE();
   return NULL;
 }
@@ -14599,7 +14595,7 @@ RUNTIME_FUNCTION(Runtime_AbortJS) {

 RUNTIME_FUNCTION(Runtime_FlattenString) {
   HandleScope scope(isolate);
-  ASSERT(args.length() == 1);
+  DCHECK(args.length() == 1);
   CONVERT_ARG_HANDLE_CHECKED(String, str, 0);
   return *String::Flatten(str);
 }
@@ -14607,7 +14603,7 @@ RUNTIME_FUNCTION(Runtime_FlattenString) {

 RUNTIME_FUNCTION(Runtime_NotifyContextDisposed) {
   HandleScope scope(isolate);
-  ASSERT(args.length() == 0);
+  DCHECK(args.length() == 0);
   isolate->heap()->NotifyContextDisposed();
   return isolate->heap()->undefined_value();
 }
@@ -14615,25 +14611,28 @@ RUNTIME_FUNCTION(Runtime_NotifyContextDisposed) {

 RUNTIME_FUNCTION(Runtime_LoadMutableDouble) {
   HandleScope scope(isolate);
-  ASSERT(args.length() == 2);
+  DCHECK(args.length() == 2);
   CONVERT_ARG_HANDLE_CHECKED(JSObject, object, 0);
   CONVERT_ARG_HANDLE_CHECKED(Smi, index, 1);
-  int idx = index->value() >> 1;
-  int inobject_properties = object->map()->inobject_properties();
-  if (idx < 0) {
-    idx = -idx + inobject_properties - 1;
+  RUNTIME_ASSERT((index->value() & 1) == 1);
+  FieldIndex field_index =
+      FieldIndex::ForLoadByFieldIndex(object->map(), index->value());
+  if (field_index.is_inobject()) {
+    RUNTIME_ASSERT(field_index.property_index() <
+                   object->map()->inobject_properties());
+  } else {
+    RUNTIME_ASSERT(field_index.outobject_array_index() <
+                   object->properties()->length());
   }
-  int max_idx = object->properties()->length() + inobject_properties;
-  RUNTIME_ASSERT(idx < max_idx);
-  Handle<Object> raw_value(object->RawFastPropertyAt(idx), isolate);
-  RUNTIME_ASSERT(raw_value->IsNumber() || raw_value->IsUninitialized());
-  return *Object::NewStorageFor(isolate, raw_value, Representation::Double());
+  Handle<Object> raw_value(object->RawFastPropertyAt(field_index), isolate);
+  RUNTIME_ASSERT(raw_value->IsMutableHeapNumber());
+  return *Object::WrapForRead(isolate, raw_value, Representation::Double());
 }


 RUNTIME_FUNCTION(Runtime_TryMigrateInstance) {
   HandleScope scope(isolate);
-  ASSERT(args.length() == 1);
+  DCHECK(args.length() == 1);
   CONVERT_ARG_HANDLE_CHECKED(Object, object, 0);
   if (!object->IsJSObject()) return Smi::FromInt(0);
   Handle<JSObject> js_object = Handle<JSObject>::cast(object);
@@ -14647,7 +14646,7 @@ RUNTIME_FUNCTION(Runtime_TryMigrateInstance) {
 }


-RUNTIME_FUNCTION(RuntimeHidden_GetFromCache) {
+RUNTIME_FUNCTION(Runtime_GetFromCache) {
   SealHandleScope shs(isolate);
   // This is only called from codegen, so checks might be more lax.
   CONVERT_ARG_CHECKED(JSFunctionResultCache, cache, 0);
@@ -14674,7 +14673,7 @@ RUNTIME_FUNCTION(RuntimeHidden_GetFromCache) {
   }

   int size = cache->size();
-  ASSERT(size <= cache->length());
+  DCHECK(size <= cache->length());

   for (int i = size - 2; i > finger_index; i -= 2) {
     o = cache->get(i);
@@ -14695,8 +14694,7 @@ RUNTIME_FUNCTION(RuntimeHidden_GetFromCache) {
   Handle<JSFunction> factory(JSFunction::cast(
       cache_handle->get(JSFunctionResultCache::kFactoryIndex)));
   // TODO(antonm): consider passing a receiver when constructing a cache.
-  Handle<Object> receiver(isolate->native_context()->global_object(),
-                          isolate);
+  Handle<JSObject> receiver(isolate->global_proxy());
   // This handle is nor shared, nor used later, so it's safe.
   Handle<Object> argv[] = { key_handle };
   ASSIGN_RETURN_FAILURE_ON_EXCEPTION(
@@ -14727,9 +14725,9 @@ RUNTIME_FUNCTION(RuntimeHidden_GetFromCache) {
     }
   }

-  ASSERT(index % 2 == 0);
-  ASSERT(index >= JSFunctionResultCache::kEntriesIndex);
-  ASSERT(index < cache_handle->length());
+  DCHECK(index % 2 == 0);
+  DCHECK(index >= JSFunctionResultCache::kEntriesIndex);
+  DCHECK(index < cache_handle->length());

   cache_handle->set(index, *key_handle);
   cache_handle->set(index + 1, *value);
@@ -14747,7 +14745,7 @@ RUNTIME_FUNCTION(RuntimeHidden_GetFromCache) {

 RUNTIME_FUNCTION(Runtime_MessageGetStartPosition) {
   SealHandleScope shs(isolate);
-  ASSERT(args.length() == 1);
+  DCHECK(args.length() == 1);
   CONVERT_ARG_CHECKED(JSMessageObject, message, 0);
   return Smi::FromInt(message->start_position());
 }
@@ -14755,7 +14753,7 @@ RUNTIME_FUNCTION(Runtime_MessageGetStartPosition) {

 RUNTIME_FUNCTION(Runtime_MessageGetScript) {
   SealHandleScope shs(isolate);
-  ASSERT(args.length() == 1);
+  DCHECK(args.length() == 1);
   CONVERT_ARG_CHECKED(JSMessageObject, message, 0);
   return message->script();
 }
@@ -14766,12 +14764,12 @@ RUNTIME_FUNCTION(Runtime_MessageGetScript) {
 // Exclude the code in release mode.
 RUNTIME_FUNCTION(Runtime_ListNatives) {
   HandleScope scope(isolate);
-  ASSERT(args.length() == 0);
+  DCHECK(args.length() == 0);
 #define COUNT_ENTRY(Name, argc, ressize) + 1
   int entry_count = 0
       RUNTIME_FUNCTION_LIST(COUNT_ENTRY)
-      RUNTIME_HIDDEN_FUNCTION_LIST(COUNT_ENTRY)
-      INLINE_FUNCTION_LIST(COUNT_ENTRY);
+      INLINE_FUNCTION_LIST(COUNT_ENTRY)
+      INLINE_OPTIMIZED_FUNCTION_LIST(COUNT_ENTRY);
 #undef COUNT_ENTRY
   Factory* factory = isolate->factory();
   Handle<FixedArray> elements = factory->NewFixedArray(entry_count);
@@ -14795,12 +14793,11 @@ RUNTIME_FUNCTION(Runtime_ListNatives) {
   }
   inline_runtime_functions = false;
   RUNTIME_FUNCTION_LIST(ADD_ENTRY)
-  // Calling hidden runtime functions should just throw.
-  RUNTIME_HIDDEN_FUNCTION_LIST(ADD_ENTRY)
+  INLINE_OPTIMIZED_FUNCTION_LIST(ADD_ENTRY)
   inline_runtime_functions = true;
   INLINE_FUNCTION_LIST(ADD_ENTRY)
 #undef ADD_ENTRY
-  ASSERT_EQ(index, entry_count);
+  DCHECK_EQ(index, entry_count);
   Handle<JSArray> result = factory->NewJSArrayWithElements(elements);
   return *result;
 }
@@ -14857,7 +14854,7 @@ TYPED_ARRAYS(FIXED_TYPED_ARRAYS_CHECK_RUNTIME_FUNCTION)

 RUNTIME_FUNCTION(Runtime_HaveSameMap) {
   SealHandleScope shs(isolate);
-  ASSERT(args.length() == 2);
+  DCHECK(args.length() == 2);
   CONVERT_ARG_CHECKED(JSObject, obj1, 0);
   CONVERT_ARG_CHECKED(JSObject, obj2, 1);
   return isolate->heap()->ToBoolean(obj1->map() == obj2->map());
@@ -14866,7 +14863,7 @@ RUNTIME_FUNCTION(Runtime_HaveSameMap) {

 RUNTIME_FUNCTION(Runtime_IsJSGlobalProxy) {
   SealHandleScope shs(isolate);
-  ASSERT(args.length() == 1);
+  DCHECK(args.length() == 1);
   CONVERT_ARG_CHECKED(Object, obj, 0);
   return isolate->heap()->ToBoolean(obj->IsJSGlobalProxy());
 }
@@ -14874,64 +14871,56 @@ RUNTIME_FUNCTION(Runtime_IsJSGlobalProxy) {

 RUNTIME_FUNCTION(Runtime_IsObserved) {
   SealHandleScope shs(isolate);
-  ASSERT(args.length() == 1);
+  DCHECK(args.length() == 1);

   if (!args[0]->IsJSReceiver()) return isolate->heap()->false_value();
   CONVERT_ARG_CHECKED(JSReceiver, obj, 0);
-  ASSERT(!obj->IsJSGlobalProxy() || !obj->map()->is_observed());
+  DCHECK(!obj->IsJSGlobalProxy() || !obj->map()->is_observed());
   return isolate->heap()->ToBoolean(obj->map()->is_observed());
 }


 RUNTIME_FUNCTION(Runtime_SetIsObserved) {
   HandleScope scope(isolate);
-  ASSERT(args.length() == 1);
+  DCHECK(args.length() == 1);
   CONVERT_ARG_HANDLE_CHECKED(JSReceiver, obj, 0);
-  ASSERT(!obj->IsJSGlobalProxy());
-  if (obj->IsJSProxy())
-    return isolate->heap()->undefined_value();
+  RUNTIME_ASSERT(!obj->IsJSGlobalProxy());
+  if (obj->IsJSProxy()) return isolate->heap()->undefined_value();
+  RUNTIME_ASSERT(!obj->map()->is_observed());

-  ASSERT(obj->IsJSObject());
+  DCHECK(obj->IsJSObject());
   JSObject::SetObserved(Handle<JSObject>::cast(obj));
   return isolate->heap()->undefined_value();
 }


-RUNTIME_FUNCTION(Runtime_SetMicrotaskPending) {
-  SealHandleScope shs(isolate);
-  ASSERT(args.length() == 1);
-  CONVERT_BOOLEAN_ARG_CHECKED(new_state, 0);
-  bool old_state = isolate->microtask_pending();
-  isolate->set_microtask_pending(new_state);
-  return isolate->heap()->ToBoolean(old_state);
+RUNTIME_FUNCTION(Runtime_EnqueueMicrotask) {
+  HandleScope scope(isolate);
+  DCHECK(args.length() == 1);
+  CONVERT_ARG_HANDLE_CHECKED(JSFunction, microtask, 0);
+  isolate->EnqueueMicrotask(microtask);
+  return isolate->heap()->undefined_value();
 }


 RUNTIME_FUNCTION(Runtime_RunMicrotasks) {
   HandleScope scope(isolate);
-  ASSERT(args.length() == 0);
-  if (isolate->microtask_pending()) Execution::RunMicrotasks(isolate);
+  DCHECK(args.length() == 0);
+  isolate->RunMicrotasks();
   return isolate->heap()->undefined_value();
 }


-RUNTIME_FUNCTION(Runtime_GetMicrotaskState) {
-  SealHandleScope shs(isolate);
-  ASSERT(args.length() == 0);
-  return isolate->heap()->microtask_state();
-}
-
-
 RUNTIME_FUNCTION(Runtime_GetObservationState) {
   SealHandleScope shs(isolate);
-  ASSERT(args.length() == 0);
+  DCHECK(args.length() == 0);
   return isolate->heap()->observation_state();
 }


 RUNTIME_FUNCTION(Runtime_ObservationWeakMapCreate) {
   HandleScope scope(isolate);
-  ASSERT(args.length() == 0);
+  DCHECK(args.length() == 0);
   // TODO(adamk): Currently this runtime function is only called three times per
   // isolate. If it's called more often, the map should be moved into the
   // strong root list.
@@ -14951,13 +14940,12 @@ static bool ContextsHaveSameOrigin(Handle<Context> context1, RUNTIME_FUNCTION(Runtime_ObserverObjectAndRecordHaveSameOrigin) { HandleScope scope(isolate); - ASSERT(args.length() == 3); + DCHECK(args.length() == 3); CONVERT_ARG_HANDLE_CHECKED(JSFunction, observer, 0); CONVERT_ARG_HANDLE_CHECKED(JSObject, object, 1); CONVERT_ARG_HANDLE_CHECKED(JSObject, record, 2); - Handle<Context> observer_context(observer->context()->native_context(), - isolate); + Handle<Context> observer_context(observer->context()->native_context()); Handle<Context> object_context(object->GetCreationContext()); Handle<Context> record_context(record->GetCreationContext()); @@ -14969,7 +14957,7 @@ RUNTIME_FUNCTION(Runtime_ObserverObjectAndRecordHaveSameOrigin) { RUNTIME_FUNCTION(Runtime_ObjectWasCreatedInCurrentOrigin) { HandleScope scope(isolate); - ASSERT(args.length() == 1); + DCHECK(args.length() == 1); CONVERT_ARG_HANDLE_CHECKED(JSObject, object, 0); Handle<Context> creation_context(object->GetCreationContext(), isolate); @@ -14978,65 +14966,33 @@ RUNTIME_FUNCTION(Runtime_ObjectWasCreatedInCurrentOrigin) { } -RUNTIME_FUNCTION(Runtime_ObjectObserveInObjectContext) { +RUNTIME_FUNCTION(Runtime_GetObjectContextObjectObserve) { HandleScope scope(isolate); - ASSERT(args.length() == 3); + DCHECK(args.length() == 1); CONVERT_ARG_HANDLE_CHECKED(JSObject, object, 0); - CONVERT_ARG_HANDLE_CHECKED(JSFunction, callback, 1); - CONVERT_ARG_HANDLE_CHECKED(Object, accept, 2); - RUNTIME_ASSERT(accept->IsUndefined() || accept->IsJSObject()); Handle<Context> context(object->GetCreationContext(), isolate); - Handle<JSFunction> function(context->native_object_observe(), isolate); - Handle<Object> call_args[] = { object, callback, accept }; - Handle<Object> result; - - ASSIGN_RETURN_FAILURE_ON_EXCEPTION( - isolate, result, - Execution::Call(isolate, function, - handle(context->object_function(), isolate), - ARRAY_SIZE(call_args), call_args, true)); - return *result; + return context->native_object_observe(); } -RUNTIME_FUNCTION(Runtime_ObjectGetNotifierInObjectContext) { +RUNTIME_FUNCTION(Runtime_GetObjectContextObjectGetNotifier) { HandleScope scope(isolate); - ASSERT(args.length() == 1); + DCHECK(args.length() == 1); CONVERT_ARG_HANDLE_CHECKED(JSObject, object, 0); Handle<Context> context(object->GetCreationContext(), isolate); - Handle<JSFunction> function(context->native_object_get_notifier(), isolate); - Handle<Object> call_args[] = { object }; - Handle<Object> result; - - ASSIGN_RETURN_FAILURE_ON_EXCEPTION( - isolate, result, - Execution::Call(isolate, function, - handle(context->object_function(), isolate), - ARRAY_SIZE(call_args), call_args, true)); - return *result; + return context->native_object_get_notifier(); } -RUNTIME_FUNCTION(Runtime_ObjectNotifierPerformChangeInObjectContext) { +RUNTIME_FUNCTION(Runtime_GetObjectContextNotifierPerformChange) { HandleScope scope(isolate); - ASSERT(args.length() == 3); + DCHECK(args.length() == 1); CONVERT_ARG_HANDLE_CHECKED(JSObject, object_info, 0); - CONVERT_ARG_HANDLE_CHECKED(String, change_type, 1); - CONVERT_ARG_HANDLE_CHECKED(JSFunction, change_fn, 2); Handle<Context> context(object_info->GetCreationContext(), isolate); - Handle<JSFunction> function(context->native_object_notifier_perform_change(), - isolate); - Handle<Object> call_args[] = { object_info, change_type, change_fn }; - Handle<Object> result; - - ASSIGN_RETURN_FAILURE_ON_EXCEPTION( - isolate, result, - Execution::Call(isolate, function, isolate->factory()->undefined_value(), - 
ARRAY_SIZE(call_args), call_args, true)); - return *result; + return context->native_object_notifier_perform_change(); } @@ -15118,7 +15074,7 @@ static Object* ArrayConstructorCommon(Isolate* isolate, } -RUNTIME_FUNCTION(RuntimeHidden_ArrayConstructor) { +RUNTIME_FUNCTION(Runtime_ArrayConstructor) { HandleScope scope(isolate); // If we get 2 arguments then they are the stub parameters (constructor, type // info). If we get 4, then the first one is a pointer to the arguments @@ -15127,7 +15083,7 @@ RUNTIME_FUNCTION(RuntimeHidden_ArrayConstructor) { // with an assert). Arguments empty_args(0, NULL); bool no_caller_args = args.length() == 2; - ASSERT(no_caller_args || args.length() == 4); + DCHECK(no_caller_args || args.length() == 4); int parameters_start = no_caller_args ? 0 : 1; Arguments* caller_args = no_caller_args ? &empty_args @@ -15137,7 +15093,7 @@ RUNTIME_FUNCTION(RuntimeHidden_ArrayConstructor) { #ifdef DEBUG if (!no_caller_args) { CONVERT_SMI_ARG_CHECKED(arg_count, parameters_start + 2); - ASSERT(arg_count == caller_args->length()); + DCHECK(arg_count == caller_args->length()); } #endif @@ -15145,7 +15101,7 @@ RUNTIME_FUNCTION(RuntimeHidden_ArrayConstructor) { if (!type_info.is_null() && *type_info != isolate->heap()->undefined_value()) { site = Handle<AllocationSite>::cast(type_info); - ASSERT(!site->SitePointsToLiteral()); + DCHECK(!site->SitePointsToLiteral()); } return ArrayConstructorCommon(isolate, @@ -15155,11 +15111,11 @@ RUNTIME_FUNCTION(RuntimeHidden_ArrayConstructor) { } -RUNTIME_FUNCTION(RuntimeHidden_InternalArrayConstructor) { +RUNTIME_FUNCTION(Runtime_InternalArrayConstructor) { HandleScope scope(isolate); Arguments empty_args(0, NULL); bool no_caller_args = args.length() == 1; - ASSERT(no_caller_args || args.length() == 3); + DCHECK(no_caller_args || args.length() == 3); int parameters_start = no_caller_args ? 0 : 1; Arguments* caller_args = no_caller_args ? &empty_args @@ -15168,7 +15124,7 @@ RUNTIME_FUNCTION(RuntimeHidden_InternalArrayConstructor) { #ifdef DEBUG if (!no_caller_args) { CONVERT_SMI_ARG_CHECKED(arg_count, parameters_start + 1); - ASSERT(arg_count == caller_args->length()); + DCHECK(arg_count == caller_args->length()); } #endif return ArrayConstructorCommon(isolate, @@ -15178,51 +15134,467 @@ RUNTIME_FUNCTION(RuntimeHidden_InternalArrayConstructor) { } +RUNTIME_FUNCTION(Runtime_NormalizeElements) { + HandleScope scope(isolate); + DCHECK(args.length() == 1); + CONVERT_ARG_HANDLE_CHECKED(JSObject, array, 0); + RUNTIME_ASSERT(!array->HasExternalArrayElements() && + !array->HasFixedTypedArrayElements()); + JSObject::NormalizeElements(array); + return *array; +} + + RUNTIME_FUNCTION(Runtime_MaxSmi) { - ASSERT(args.length() == 0); + SealHandleScope shs(isolate); + DCHECK(args.length() == 0); return Smi::FromInt(Smi::kMaxValue); } +// TODO(dcarney): remove this function when TurboFan supports it. +// Takes the object to be iterated over and the result of GetPropertyNamesFast +// Returns pair (cache_array, cache_type). +RUNTIME_FUNCTION_RETURN_PAIR(Runtime_ForInInit) { + SealHandleScope scope(isolate); + DCHECK(args.length() == 2); + // This simulates CONVERT_ARG_HANDLE_CHECKED for calls returning pairs. + // Not worth creating a macro atm as this function should be removed. 
+ if (!args[0]->IsJSReceiver() || !args[1]->IsObject()) { + Object* error = isolate->ThrowIllegalOperation(); + return MakePair(error, isolate->heap()->undefined_value()); + } + Handle<JSReceiver> object = args.at<JSReceiver>(0); + Handle<Object> cache_type = args.at<Object>(1); + if (cache_type->IsMap()) { + // Enum cache case. + if (Map::EnumLengthBits::decode(Map::cast(*cache_type)->bit_field3()) == + 0) { + // 0 length enum. + // Can't handle this case in the graph builder, + // so transform it into the empty fixed array case. + return MakePair(isolate->heap()->empty_fixed_array(), Smi::FromInt(1)); + } + return MakePair(object->map()->instance_descriptors()->GetEnumCache(), + *cache_type); + } else { + // FixedArray case. + Smi* new_cache_type = Smi::FromInt(object->IsJSProxy() ? 0 : 1); + return MakePair(*Handle<FixedArray>::cast(cache_type), new_cache_type); + } +} + + +// TODO(dcarney): remove this function when TurboFan supports it. +RUNTIME_FUNCTION(Runtime_ForInCacheArrayLength) { + SealHandleScope shs(isolate); + DCHECK(args.length() == 2); + CONVERT_ARG_HANDLE_CHECKED(Object, cache_type, 0); + CONVERT_ARG_HANDLE_CHECKED(FixedArray, array, 1); + int length = 0; + if (cache_type->IsMap()) { + length = Map::cast(*cache_type)->EnumLength(); + } else { + DCHECK(cache_type->IsSmi()); + length = array->length(); + } + return Smi::FromInt(length); +} + + +// TODO(dcarney): remove this function when TurboFan supports it. +// Takes (the object to be iterated over, +// cache_array from ForInInit, +// cache_type from ForInInit, +// the current index) +// Returns pair (array[index], needs_filtering). +RUNTIME_FUNCTION_RETURN_PAIR(Runtime_ForInNext) { + SealHandleScope scope(isolate); + DCHECK(args.length() == 4); + // This simulates CONVERT_ARG_HANDLE_CHECKED for calls returning pairs. + // Not worth creating a macro atm as this function should be removed. + if (!args[0]->IsJSReceiver() || !args[1]->IsFixedArray() || + !args[2]->IsObject() || !args[3]->IsSmi()) { + Object* error = isolate->ThrowIllegalOperation(); + return MakePair(error, isolate->heap()->undefined_value()); + } + Handle<JSReceiver> object = args.at<JSReceiver>(0); + Handle<FixedArray> array = args.at<FixedArray>(1); + Handle<Object> cache_type = args.at<Object>(2); + int index = args.smi_at(3); + // Figure out first if a slow check is needed for this object. + bool slow_check_needed = false; + if (cache_type->IsMap()) { + if (object->map() != Map::cast(*cache_type)) { + // Object transitioned. Need slow check. + slow_check_needed = true; + } + } else { + // No slow check needed for proxies. + slow_check_needed = Smi::cast(*cache_type)->value() == 1; + } + return MakePair(array->get(index), + isolate->heap()->ToBoolean(slow_check_needed)); +} + + // ---------------------------------------------------------------------------- -// Implementation of Runtime +// Reference implementation for inlined runtime functions. Only used when the +// compiler does not support a certain intrinsic. Don't optimize these, but +// implement the intrinsic in the respective compiler instead. + +// TODO(mstarzinger): These are place-holder stubs for TurboFan and will +// eventually all have a C++ implementation and this macro will be gone. 
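The U(name) macro defined next stamps out one aborting stub per not-yet-implemented intrinsic. The same pattern in self-contained form (illustrative names; a sketch, not V8 code):

#include <cstdio>
#include <cstdlib>

// Each U(name) expands to a stub that must never actually run: the
// compiler is expected to inline the intrinsic before this is reached.
#define U(name)                                                    \
  void RuntimeReference_##name() {                                 \
    std::fprintf(stderr, "intrinsic %s was not inlined\n", #name); \
    std::abort();                                                  \
  }

U(GeneratorNext)
U(GeneratorThrow)

#undef U

int main() { return 0; }  // The stubs only need to compile and link.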
+#define U(name) \ + RUNTIME_FUNCTION(RuntimeReference_##name) { \ + UNIMPLEMENTED(); \ + return NULL; \ + } + +U(IsStringWrapperSafeForDefaultValueOf) +U(GeneratorNext) +U(GeneratorThrow) +U(DebugBreakInOptimizedCode) + +#undef U + + +RUNTIME_FUNCTION(RuntimeReference_IsSmi) { + SealHandleScope shs(isolate); + DCHECK(args.length() == 1); + CONVERT_ARG_CHECKED(Object, obj, 0); + return isolate->heap()->ToBoolean(obj->IsSmi()); +} + + +RUNTIME_FUNCTION(RuntimeReference_IsNonNegativeSmi) { + SealHandleScope shs(isolate); + DCHECK(args.length() == 1); + CONVERT_ARG_CHECKED(Object, obj, 0); + return isolate->heap()->ToBoolean(obj->IsSmi() && + Smi::cast(obj)->value() >= 0); +} + + +RUNTIME_FUNCTION(RuntimeReference_IsArray) { + SealHandleScope shs(isolate); + DCHECK(args.length() == 1); + CONVERT_ARG_CHECKED(Object, obj, 0); + return isolate->heap()->ToBoolean(obj->IsJSArray()); +} + + +RUNTIME_FUNCTION(RuntimeReference_IsRegExp) { + SealHandleScope shs(isolate); + DCHECK(args.length() == 1); + CONVERT_ARG_CHECKED(Object, obj, 0); + return isolate->heap()->ToBoolean(obj->IsJSRegExp()); +} + + +RUNTIME_FUNCTION(RuntimeReference_IsConstructCall) { + SealHandleScope shs(isolate); + DCHECK(args.length() == 0); + JavaScriptFrameIterator it(isolate); + JavaScriptFrame* frame = it.frame(); + return isolate->heap()->ToBoolean(frame->IsConstructor()); +} + + +RUNTIME_FUNCTION(RuntimeReference_CallFunction) { + SealHandleScope shs(isolate); + return __RT_impl_Runtime_Call(args, isolate); +} + + +RUNTIME_FUNCTION(RuntimeReference_ArgumentsLength) { + SealHandleScope shs(isolate); + DCHECK(args.length() == 0); + JavaScriptFrameIterator it(isolate); + JavaScriptFrame* frame = it.frame(); + return Smi::FromInt(frame->GetArgumentsLength()); +} + + +RUNTIME_FUNCTION(RuntimeReference_Arguments) { + SealHandleScope shs(isolate); + return __RT_impl_Runtime_GetArgumentsProperty(args, isolate); +} + + +RUNTIME_FUNCTION(RuntimeReference_ValueOf) { + SealHandleScope shs(isolate); + DCHECK(args.length() == 1); + CONVERT_ARG_CHECKED(Object, obj, 0); + if (!obj->IsJSValue()) return obj; + return JSValue::cast(obj)->value(); +} + + +RUNTIME_FUNCTION(RuntimeReference_SetValueOf) { + SealHandleScope shs(isolate); + DCHECK(args.length() == 2); + CONVERT_ARG_CHECKED(Object, obj, 0); + CONVERT_ARG_CHECKED(Object, value, 1); + if (!obj->IsJSValue()) return value; + JSValue::cast(obj)->set_value(value); + return value; +} + + +RUNTIME_FUNCTION(RuntimeReference_DateField) { + SealHandleScope shs(isolate); + DCHECK(args.length() == 2); + CONVERT_ARG_CHECKED(Object, obj, 0); + CONVERT_SMI_ARG_CHECKED(index, 1); + if (!obj->IsJSDate()) { + HandleScope scope(isolate); + return isolate->Throw(*isolate->factory()->NewTypeError( + "not_date_object", HandleVector<Object>(NULL, 0))); + } + JSDate* date = JSDate::cast(obj); + if (index == 0) return date->value(); + return JSDate::GetField(date, Smi::FromInt(index)); +} + + +RUNTIME_FUNCTION(RuntimeReference_StringCharFromCode) { + SealHandleScope shs(isolate); + return __RT_impl_Runtime_CharFromCode(args, isolate); +} + + +RUNTIME_FUNCTION(RuntimeReference_StringCharAt) { + SealHandleScope shs(isolate); + DCHECK(args.length() == 2); + if (!args[0]->IsString()) return Smi::FromInt(0); + if (!args[1]->IsNumber()) return Smi::FromInt(0); + if (std::isinf(args.number_at(1))) return isolate->heap()->empty_string(); + Object* code = __RT_impl_Runtime_StringCharCodeAtRT(args, isolate); + if (code->IsNaN()) return isolate->heap()->empty_string(); + return 
__RT_impl_Runtime_CharFromCode(Arguments(1, &code), isolate); +} + + +RUNTIME_FUNCTION(RuntimeReference_OneByteSeqStringSetChar) { + SealHandleScope shs(isolate); + DCHECK(args.length() == 3); + CONVERT_ARG_CHECKED(SeqOneByteString, string, 0); + CONVERT_SMI_ARG_CHECKED(index, 1); + CONVERT_SMI_ARG_CHECKED(value, 2); + string->SeqOneByteStringSet(index, value); + return string; +} + + +RUNTIME_FUNCTION(RuntimeReference_TwoByteSeqStringSetChar) { + SealHandleScope shs(isolate); + DCHECK(args.length() == 3); + CONVERT_ARG_CHECKED(SeqTwoByteString, string, 0); + CONVERT_SMI_ARG_CHECKED(index, 1); + CONVERT_SMI_ARG_CHECKED(value, 2); + string->SeqTwoByteStringSet(index, value); + return string; +} + + +RUNTIME_FUNCTION(RuntimeReference_ObjectEquals) { + SealHandleScope shs(isolate); + DCHECK(args.length() == 2); + CONVERT_ARG_CHECKED(Object, obj1, 0); + CONVERT_ARG_CHECKED(Object, obj2, 1); + return isolate->heap()->ToBoolean(obj1 == obj2); +} + + +RUNTIME_FUNCTION(RuntimeReference_IsObject) { + SealHandleScope shs(isolate); + DCHECK(args.length() == 1); + CONVERT_ARG_CHECKED(Object, obj, 0); + if (!obj->IsHeapObject()) return isolate->heap()->false_value(); + if (obj->IsNull()) return isolate->heap()->true_value(); + if (obj->IsUndetectableObject()) return isolate->heap()->false_value(); + Map* map = HeapObject::cast(obj)->map(); + bool is_non_callable_spec_object = + map->instance_type() >= FIRST_NONCALLABLE_SPEC_OBJECT_TYPE && + map->instance_type() <= LAST_NONCALLABLE_SPEC_OBJECT_TYPE; + return isolate->heap()->ToBoolean(is_non_callable_spec_object); +} + + +RUNTIME_FUNCTION(RuntimeReference_IsFunction) { + SealHandleScope shs(isolate); + DCHECK(args.length() == 1); + CONVERT_ARG_CHECKED(Object, obj, 0); + return isolate->heap()->ToBoolean(obj->IsJSFunction()); +} + + +RUNTIME_FUNCTION(RuntimeReference_IsUndetectableObject) { + SealHandleScope shs(isolate); + DCHECK(args.length() == 1); + CONVERT_ARG_CHECKED(Object, obj, 0); + return isolate->heap()->ToBoolean(obj->IsUndetectableObject()); +} + + +RUNTIME_FUNCTION(RuntimeReference_IsSpecObject) { + SealHandleScope shs(isolate); + DCHECK(args.length() == 1); + CONVERT_ARG_CHECKED(Object, obj, 0); + return isolate->heap()->ToBoolean(obj->IsSpecObject()); +} + + +RUNTIME_FUNCTION(RuntimeReference_MathPow) { + SealHandleScope shs(isolate); + return __RT_impl_Runtime_MathPowSlow(args, isolate); +} + + +RUNTIME_FUNCTION(RuntimeReference_IsMinusZero) { + SealHandleScope shs(isolate); + DCHECK(args.length() == 1); + CONVERT_ARG_CHECKED(Object, obj, 0); + if (!obj->IsHeapNumber()) return isolate->heap()->false_value(); + HeapNumber* number = HeapNumber::cast(obj); + return isolate->heap()->ToBoolean(IsMinusZero(number->value())); +} -#define F(name, number_of_args, result_size) \ - { Runtime::k##name, Runtime::RUNTIME, #name, \ - FUNCTION_ADDR(Runtime_##name), number_of_args, result_size }, +RUNTIME_FUNCTION(RuntimeReference_HasCachedArrayIndex) { + SealHandleScope shs(isolate); + DCHECK(args.length() == 1); + return isolate->heap()->false_value(); +} + + +RUNTIME_FUNCTION(RuntimeReference_GetCachedArrayIndex) { + SealHandleScope shs(isolate); + DCHECK(args.length() == 1); + return isolate->heap()->undefined_value(); +} + + +RUNTIME_FUNCTION(RuntimeReference_FastAsciiArrayJoin) { + SealHandleScope shs(isolate); + DCHECK(args.length() == 2); + return isolate->heap()->undefined_value(); +} + + +RUNTIME_FUNCTION(RuntimeReference_ClassOf) { + SealHandleScope shs(isolate); + DCHECK(args.length() == 1); + CONVERT_ARG_CHECKED(Object, obj, 0); + if 
(!obj->IsJSReceiver()) return isolate->heap()->null_value(); + return JSReceiver::cast(obj)->class_name(); +} + + +RUNTIME_FUNCTION(RuntimeReference_StringCharCodeAt) { + SealHandleScope shs(isolate); + DCHECK(args.length() == 2); + if (!args[0]->IsString()) return isolate->heap()->undefined_value(); + if (!args[1]->IsNumber()) return isolate->heap()->undefined_value(); + if (std::isinf(args.number_at(1))) return isolate->heap()->nan_value(); + return __RT_impl_Runtime_StringCharCodeAtRT(args, isolate); +} + + +RUNTIME_FUNCTION(RuntimeReference_StringAdd) { + SealHandleScope shs(isolate); + return __RT_impl_Runtime_StringAdd(args, isolate); +} + + +RUNTIME_FUNCTION(RuntimeReference_SubString) { + SealHandleScope shs(isolate); + return __RT_impl_Runtime_SubString(args, isolate); +} + + +RUNTIME_FUNCTION(RuntimeReference_StringCompare) { + SealHandleScope shs(isolate); + return __RT_impl_Runtime_StringCompare(args, isolate); +} + + +RUNTIME_FUNCTION(RuntimeReference_RegExpExec) { + SealHandleScope shs(isolate); + return __RT_impl_Runtime_RegExpExecRT(args, isolate); +} + + +RUNTIME_FUNCTION(RuntimeReference_RegExpConstructResult) { + SealHandleScope shs(isolate); + return __RT_impl_Runtime_RegExpConstructResult(args, isolate); +} + + +RUNTIME_FUNCTION(RuntimeReference_GetFromCache) { + HandleScope scope(isolate); + DCHECK(args.length() == 2); + CONVERT_SMI_ARG_CHECKED(id, 0); + args[0] = isolate->native_context()->jsfunction_result_caches()->get(id); + return __RT_impl_Runtime_GetFromCache(args, isolate); +} + + +RUNTIME_FUNCTION(RuntimeReference_NumberToString) { + SealHandleScope shs(isolate); + return __RT_impl_Runtime_NumberToStringRT(args, isolate); +} + + +RUNTIME_FUNCTION(RuntimeReference_DebugIsActive) { + SealHandleScope shs(isolate); + return Smi::FromInt(isolate->debug()->is_active()); +} + + +// ---------------------------------------------------------------------------- +// Implementation of Runtime -#define FH(name, number_of_args, result_size) \ - { Runtime::kHidden##name, Runtime::RUNTIME_HIDDEN, NULL, \ - FUNCTION_ADDR(RuntimeHidden_##name), number_of_args, result_size }, +#define F(name, number_of_args, result_size) \ + { \ + Runtime::k##name, Runtime::RUNTIME, #name, FUNCTION_ADDR(Runtime_##name), \ + number_of_args, result_size \ + } \ + , -#define I(name, number_of_args, result_size) \ - { Runtime::kInline##name, Runtime::INLINE, \ - "_" #name, NULL, number_of_args, result_size }, +#define I(name, number_of_args, result_size) \ + { \ + Runtime::kInline##name, Runtime::INLINE, "_" #name, \ + FUNCTION_ADDR(RuntimeReference_##name), number_of_args, result_size \ + } \ + , -#define IO(name, number_of_args, result_size) \ - { Runtime::kInlineOptimized##name, Runtime::INLINE_OPTIMIZED, \ - "_" #name, FUNCTION_ADDR(Runtime_##name), number_of_args, result_size }, +#define IO(name, number_of_args, result_size) \ + { \ + Runtime::kInlineOptimized##name, Runtime::INLINE_OPTIMIZED, "_" #name, \ + FUNCTION_ADDR(Runtime_##name), number_of_args, result_size \ + } \ + , static const Runtime::Function kIntrinsicFunctions[] = { RUNTIME_FUNCTION_LIST(F) - RUNTIME_HIDDEN_FUNCTION_LIST(FH) + INLINE_OPTIMIZED_FUNCTION_LIST(F) INLINE_FUNCTION_LIST(I) INLINE_OPTIMIZED_FUNCTION_LIST(IO) }; #undef IO #undef I -#undef FH #undef F void Runtime::InitializeIntrinsicFunctionNames(Isolate* isolate, Handle<NameDictionary> dict) { - ASSERT(dict->NumberOfElements() == 0); + DCHECK(dict->NumberOfElements() == 0); HandleScope scope(isolate); for (int i = 0; i < kNumFunctions; ++i) { const char* name = 
kIntrinsicFunctions[i].name; @@ -15250,6 +15622,16 @@ const Runtime::Function* Runtime::FunctionForName(Handle<String> name) { } +const Runtime::Function* Runtime::FunctionForEntry(Address entry) { + for (size_t i = 0; i < ARRAY_SIZE(kIntrinsicFunctions); ++i) { + if (entry == kIntrinsicFunctions[i].entry) { + return &(kIntrinsicFunctions[i]); + } + } + return NULL; +} + + const Runtime::Function* Runtime::FunctionForId(Runtime::FunctionId id) { return &(kIntrinsicFunctions[static_cast<int>(id)]); } diff --git a/deps/v8/src/runtime.h b/deps/v8/src/runtime.h index 38d2126f0ea..4a78edb897d 100644 --- a/deps/v8/src/runtime.h +++ b/deps/v8/src/runtime.h @@ -5,8 +5,8 @@ #ifndef V8_RUNTIME_H_ #define V8_RUNTIME_H_ -#include "allocation.h" -#include "zone.h" +#include "src/allocation.h" +#include "src/zone.h" namespace v8 { namespace internal { @@ -21,387 +21,484 @@ namespace internal { // WARNING: RUNTIME_FUNCTION_LIST_ALWAYS_* is a very large macro that caused // MSVC Intellisense to crash. It was broken into two macros to work around // this problem. Please avoid large recursive macros whenever possible. -#define RUNTIME_FUNCTION_LIST_ALWAYS_1(F) \ - /* Property access */ \ - F(GetProperty, 2, 1) \ - F(KeyedGetProperty, 2, 1) \ - F(DeleteProperty, 3, 1) \ - F(HasLocalProperty, 2, 1) \ - F(HasProperty, 2, 1) \ - F(HasElement, 2, 1) \ - F(IsPropertyEnumerable, 2, 1) \ - F(GetPropertyNames, 1, 1) \ - F(GetPropertyNamesFast, 1, 1) \ - F(GetLocalPropertyNames, 2, 1) \ - F(GetLocalElementNames, 1, 1) \ - F(GetInterceptorInfo, 1, 1) \ - F(GetNamedInterceptorPropertyNames, 1, 1) \ - F(GetIndexedInterceptorElementNames, 1, 1) \ - F(GetArgumentsProperty, 1, 1) \ - F(ToFastProperties, 1, 1) \ - F(FinishArrayPrototypeSetup, 1, 1) \ - F(SpecialArrayFunctions, 1, 1) \ - F(IsSloppyModeFunction, 1, 1) \ - F(GetDefaultReceiver, 1, 1) \ - \ - F(GetPrototype, 1, 1) \ - F(SetPrototype, 2, 1) \ - F(IsInPrototypeChain, 2, 1) \ - \ - F(GetOwnProperty, 2, 1) \ - \ - F(IsExtensible, 1, 1) \ - F(PreventExtensions, 1, 1)\ - \ - /* Utilities */ \ - F(CheckIsBootstrapping, 0, 1) \ - F(GetRootNaN, 0, 1) \ - F(Call, -1 /* >= 2 */, 1) \ - F(Apply, 5, 1) \ - F(GetFunctionDelegate, 1, 1) \ - F(GetConstructorDelegate, 1, 1) \ - F(DeoptimizeFunction, 1, 1) \ - F(ClearFunctionTypeFeedback, 1, 1) \ - F(RunningInSimulator, 0, 1) \ - F(IsConcurrentRecompilationSupported, 0, 1) \ - F(OptimizeFunctionOnNextCall, -1, 1) \ - F(NeverOptimizeFunction, 1, 1) \ - F(GetOptimizationStatus, -1, 1) \ - F(GetOptimizationCount, 1, 1) \ - F(UnblockConcurrentRecompilation, 0, 1) \ - F(CompileForOnStackReplacement, 1, 1) \ - F(SetAllocationTimeout, -1 /* 2 || 3 */, 1) \ - F(SetNativeFlag, 1, 1) \ - F(SetInlineBuiltinFlag, 1, 1) \ - F(StoreArrayLiteralElement, 5, 1) \ - F(DebugPrepareStepInIfStepping, 1, 1) \ - F(DebugPromiseHandlePrologue, 1, 1) \ - F(DebugPromiseHandleEpilogue, 0, 1) \ - F(FlattenString, 1, 1) \ - F(LoadMutableDouble, 2, 1) \ - F(TryMigrateInstance, 1, 1) \ - F(NotifyContextDisposed, 0, 1) \ - \ - /* Array join support */ \ - F(PushIfAbsent, 2, 1) \ - F(ArrayConcat, 1, 1) \ - \ - /* Conversions */ \ - F(ToBool, 1, 1) \ - F(Typeof, 1, 1) \ - \ - F(StringToNumber, 1, 1) \ - F(StringParseInt, 2, 1) \ - F(StringParseFloat, 1, 1) \ - F(StringToLowerCase, 1, 1) \ - F(StringToUpperCase, 1, 1) \ - F(StringSplit, 3, 1) \ - F(CharFromCode, 1, 1) \ - F(URIEscape, 1, 1) \ - F(URIUnescape, 1, 1) \ - \ - F(NumberToInteger, 1, 1) \ - F(NumberToIntegerMapMinusZero, 1, 1) \ - F(NumberToJSUint32, 1, 1) \ - F(NumberToJSInt32, 1, 1) \ - \ - /* 
Arithmetic operations */ \ - F(NumberAdd, 2, 1) \ - F(NumberSub, 2, 1) \ - F(NumberMul, 2, 1) \ - F(NumberDiv, 2, 1) \ - F(NumberMod, 2, 1) \ - F(NumberUnaryMinus, 1, 1) \ - F(NumberImul, 2, 1) \ - \ - F(StringBuilderConcat, 3, 1) \ - F(StringBuilderJoin, 3, 1) \ - F(SparseJoinWithSeparator, 3, 1) \ - \ - /* Bit operations */ \ - F(NumberOr, 2, 1) \ - F(NumberAnd, 2, 1) \ - F(NumberXor, 2, 1) \ - \ - F(NumberShl, 2, 1) \ - F(NumberShr, 2, 1) \ - F(NumberSar, 2, 1) \ - \ - /* Comparisons */ \ - F(NumberEquals, 2, 1) \ - F(StringEquals, 2, 1) \ - \ - F(NumberCompare, 3, 1) \ - F(SmiLexicographicCompare, 2, 1) \ - \ - /* Math */ \ - F(MathAcos, 1, 1) \ - F(MathAsin, 1, 1) \ - F(MathAtan, 1, 1) \ - F(MathFloor, 1, 1) \ - F(MathAtan2, 2, 1) \ - F(MathExp, 1, 1) \ - F(RoundNumber, 1, 1) \ - F(MathFround, 1, 1) \ - \ - /* Regular expressions */ \ - F(RegExpCompile, 3, 1) \ - F(RegExpExecMultiple, 4, 1) \ - F(RegExpInitializeObject, 5, 1) \ - \ - /* JSON */ \ - F(ParseJson, 1, 1) \ - F(BasicJSONStringify, 1, 1) \ - F(QuoteJSONString, 1, 1) \ - \ - /* Strings */ \ - F(StringIndexOf, 3, 1) \ - F(StringLastIndexOf, 3, 1) \ - F(StringLocaleCompare, 2, 1) \ - F(StringReplaceGlobalRegExpWithString, 4, 1) \ - F(StringReplaceOneCharWithString, 3, 1) \ - F(StringMatch, 3, 1) \ - F(StringTrim, 3, 1) \ - F(StringToArray, 2, 1) \ - F(NewStringWrapper, 1, 1) \ - F(NewString, 2, 1) \ - F(TruncateString, 2, 1) \ - \ - /* Numbers */ \ - F(NumberToRadixString, 2, 1) \ - F(NumberToFixed, 2, 1) \ - F(NumberToExponential, 2, 1) \ - F(NumberToPrecision, 2, 1) \ +#define RUNTIME_FUNCTION_LIST_ALWAYS_1(F) \ + /* Property access */ \ + F(GetProperty, 2, 1) \ + F(KeyedGetProperty, 2, 1) \ + F(DeleteProperty, 3, 1) \ + F(HasOwnProperty, 2, 1) \ + F(HasProperty, 2, 1) \ + F(HasElement, 2, 1) \ + F(IsPropertyEnumerable, 2, 1) \ + F(GetPropertyNames, 1, 1) \ + F(GetPropertyNamesFast, 1, 1) \ + F(GetOwnPropertyNames, 2, 1) \ + F(GetOwnElementNames, 1, 1) \ + F(GetInterceptorInfo, 1, 1) \ + F(GetNamedInterceptorPropertyNames, 1, 1) \ + F(GetIndexedInterceptorElementNames, 1, 1) \ + F(GetArgumentsProperty, 1, 1) \ + F(ToFastProperties, 1, 1) \ + F(FinishArrayPrototypeSetup, 1, 1) \ + F(SpecialArrayFunctions, 0, 1) \ + F(IsSloppyModeFunction, 1, 1) \ + F(GetDefaultReceiver, 1, 1) \ + \ + F(GetPrototype, 1, 1) \ + F(SetPrototype, 2, 1) \ + F(InternalSetPrototype, 2, 1) \ + F(IsInPrototypeChain, 2, 1) \ + \ + F(GetOwnProperty, 2, 1) \ + \ + F(IsExtensible, 1, 1) \ + F(PreventExtensions, 1, 1) \ + \ + /* Utilities */ \ + F(CheckIsBootstrapping, 0, 1) \ + F(GetRootNaN, 0, 1) \ + F(Call, -1 /* >= 2 */, 1) \ + F(Apply, 5, 1) \ + F(GetFunctionDelegate, 1, 1) \ + F(GetConstructorDelegate, 1, 1) \ + F(DeoptimizeFunction, 1, 1) \ + F(ClearFunctionTypeFeedback, 1, 1) \ + F(RunningInSimulator, 0, 1) \ + F(IsConcurrentRecompilationSupported, 0, 1) \ + F(OptimizeFunctionOnNextCall, -1, 1) \ + F(NeverOptimizeFunction, 1, 1) \ + F(GetOptimizationStatus, -1, 1) \ + F(IsOptimized, 0, 1) /* TODO(turbofan): Only temporary */ \ + F(GetOptimizationCount, 1, 1) \ + F(UnblockConcurrentRecompilation, 0, 1) \ + F(CompileForOnStackReplacement, 1, 1) \ + F(SetAllocationTimeout, -1 /* 2 || 3 */, 1) \ + F(SetNativeFlag, 1, 1) \ + F(SetInlineBuiltinFlag, 1, 1) \ + F(StoreArrayLiteralElement, 5, 1) \ + F(DebugPrepareStepInIfStepping, 1, 1) \ + F(DebugPushPromise, 1, 1) \ + F(DebugPopPromise, 0, 1) \ + F(DebugPromiseEvent, 1, 1) \ + F(DebugPromiseRejectEvent, 2, 1) \ + F(DebugAsyncTaskEvent, 1, 1) \ + F(FlattenString, 1, 1) \ + F(LoadMutableDouble, 2, 1) \ + 
F(TryMigrateInstance, 1, 1) \ + F(NotifyContextDisposed, 0, 1) \ + \ + /* Array join support */ \ + F(PushIfAbsent, 2, 1) \ + F(ArrayConcat, 1, 1) \ + \ + /* Conversions */ \ + F(ToBool, 1, 1) \ + F(Typeof, 1, 1) \ + \ + F(Booleanize, 2, 1) /* TODO(turbofan): Only temporary */ \ + \ + F(StringToNumber, 1, 1) \ + F(StringParseInt, 2, 1) \ + F(StringParseFloat, 1, 1) \ + F(StringToLowerCase, 1, 1) \ + F(StringToUpperCase, 1, 1) \ + F(StringSplit, 3, 1) \ + F(CharFromCode, 1, 1) \ + F(URIEscape, 1, 1) \ + F(URIUnescape, 1, 1) \ + \ + F(NumberToInteger, 1, 1) \ + F(NumberToIntegerMapMinusZero, 1, 1) \ + F(NumberToJSUint32, 1, 1) \ + F(NumberToJSInt32, 1, 1) \ + \ + /* Arithmetic operations */ \ + F(NumberAdd, 2, 1) \ + F(NumberSub, 2, 1) \ + F(NumberMul, 2, 1) \ + F(NumberDiv, 2, 1) \ + F(NumberMod, 2, 1) \ + F(NumberUnaryMinus, 1, 1) \ + F(NumberImul, 2, 1) \ + \ + F(StringBuilderConcat, 3, 1) \ + F(StringBuilderJoin, 3, 1) \ + F(SparseJoinWithSeparator, 3, 1) \ + \ + /* Bit operations */ \ + F(NumberOr, 2, 1) \ + F(NumberAnd, 2, 1) \ + F(NumberXor, 2, 1) \ + \ + F(NumberShl, 2, 1) \ + F(NumberShr, 2, 1) \ + F(NumberSar, 2, 1) \ + \ + /* Comparisons */ \ + F(NumberEquals, 2, 1) \ + F(StringEquals, 2, 1) \ + \ + F(NumberCompare, 3, 1) \ + F(SmiLexicographicCompare, 2, 1) \ + \ + /* Math */ \ + F(MathAcos, 1, 1) \ + F(MathAsin, 1, 1) \ + F(MathAtan, 1, 1) \ + F(MathFloorRT, 1, 1) \ + F(MathAtan2, 2, 1) \ + F(MathExpRT, 1, 1) \ + F(RoundNumber, 1, 1) \ + F(MathFround, 1, 1) \ + F(RemPiO2, 1, 1) \ + \ + /* Regular expressions */ \ + F(RegExpCompile, 3, 1) \ + F(RegExpExecMultiple, 4, 1) \ + F(RegExpInitializeObject, 5, 1) \ + \ + /* JSON */ \ + F(ParseJson, 1, 1) \ + F(BasicJSONStringify, 1, 1) \ + F(QuoteJSONString, 1, 1) \ + \ + /* Strings */ \ + F(StringIndexOf, 3, 1) \ + F(StringLastIndexOf, 3, 1) \ + F(StringLocaleCompare, 2, 1) \ + F(StringReplaceGlobalRegExpWithString, 4, 1) \ + F(StringReplaceOneCharWithString, 3, 1) \ + F(StringMatch, 3, 1) \ + F(StringTrim, 3, 1) \ + F(StringToArray, 2, 1) \ + F(NewStringWrapper, 1, 1) \ + F(NewString, 2, 1) \ + F(TruncateString, 2, 1) \ + \ + /* Numbers */ \ + F(NumberToRadixString, 2, 1) \ + F(NumberToFixed, 2, 1) \ + F(NumberToExponential, 2, 1) \ + F(NumberToPrecision, 2, 1) \ F(IsValidSmi, 1, 1) -#define RUNTIME_FUNCTION_LIST_ALWAYS_2(F) \ - /* Reflection */ \ - F(FunctionSetInstanceClassName, 2, 1) \ - F(FunctionSetLength, 2, 1) \ - F(FunctionSetPrototype, 2, 1) \ - F(FunctionSetReadOnlyPrototype, 1, 1) \ - F(FunctionGetName, 1, 1) \ - F(FunctionSetName, 2, 1) \ - F(FunctionNameShouldPrintAsAnonymous, 1, 1) \ - F(FunctionMarkNameShouldPrintAsAnonymous, 1, 1) \ - F(FunctionIsGenerator, 1, 1) \ - F(FunctionBindArguments, 4, 1) \ - F(BoundFunctionGetBindings, 1, 1) \ - F(FunctionRemovePrototype, 1, 1) \ - F(FunctionGetSourceCode, 1, 1) \ - F(FunctionGetScript, 1, 1) \ - F(FunctionGetScriptSourcePosition, 1, 1) \ - F(FunctionGetPositionForOffset, 2, 1) \ - F(FunctionIsAPIFunction, 1, 1) \ - F(FunctionIsBuiltin, 1, 1) \ - F(GetScript, 1, 1) \ - F(CollectStackTrace, 3, 1) \ - F(GetAndClearOverflowedStackTrace, 1, 1) \ - F(GetV8Version, 0, 1) \ - \ - F(SetCode, 2, 1) \ - F(SetExpectedNumberOfProperties, 2, 1) \ - \ - F(CreateApiFunction, 2, 1) \ - F(IsTemplate, 1, 1) \ - F(GetTemplateField, 2, 1) \ - F(DisableAccessChecks, 1, 1) \ - F(EnableAccessChecks, 1, 1) \ - F(SetAccessorProperty, 6, 1) \ - \ - /* Dates */ \ - F(DateCurrentTime, 0, 1) \ - F(DateParseString, 2, 1) \ - F(DateLocalTimezone, 1, 1) \ - F(DateToUTC, 1, 1) \ - F(DateMakeDay, 2, 1) \ - 
F(DateSetValue, 3, 1) \ - F(DateCacheVersion, 0, 1) \ - \ - /* Globals */ \ - F(CompileString, 2, 1) \ - \ - /* Eval */ \ - F(GlobalReceiver, 1, 1) \ - F(IsAttachedGlobal, 1, 1) \ - \ - F(SetProperty, -1 /* 4 or 5 */, 1) \ - F(DefineOrRedefineDataProperty, 4, 1) \ - F(DefineOrRedefineAccessorProperty, 5, 1) \ - F(IgnoreAttributesAndSetProperty, -1 /* 3 or 4 */, 1) \ - F(GetDataProperty, 2, 1) \ - F(SetHiddenProperty, 3, 1) \ - \ - /* Arrays */ \ - F(RemoveArrayHoles, 2, 1) \ - F(GetArrayKeys, 2, 1) \ - F(MoveArrayContents, 2, 1) \ - F(EstimateNumberOfElements, 1, 1) \ - \ - /* Getters and Setters */ \ - F(LookupAccessor, 3, 1) \ - \ - /* ES5 */ \ - F(ObjectFreeze, 1, 1) \ - \ - /* Harmony microtasks */ \ - F(GetMicrotaskState, 0, 1) \ - \ - /* Harmony modules */ \ - F(IsJSModule, 1, 1) \ - \ - /* Harmony symbols */ \ - F(CreateSymbol, 1, 1) \ - F(CreatePrivateSymbol, 1, 1) \ - F(CreateGlobalPrivateSymbol, 1, 1) \ - F(NewSymbolWrapper, 1, 1) \ - F(SymbolDescription, 1, 1) \ - F(SymbolRegistry, 0, 1) \ - F(SymbolIsPrivate, 1, 1) \ - \ - /* Harmony proxies */ \ - F(CreateJSProxy, 2, 1) \ - F(CreateJSFunctionProxy, 4, 1) \ - F(IsJSProxy, 1, 1) \ - F(IsJSFunctionProxy, 1, 1) \ - F(GetHandler, 1, 1) \ - F(GetCallTrap, 1, 1) \ - F(GetConstructTrap, 1, 1) \ - F(Fix, 1, 1) \ - \ - /* Harmony sets */ \ - F(SetInitialize, 1, 1) \ - F(SetAdd, 2, 1) \ - F(SetHas, 2, 1) \ - F(SetDelete, 2, 1) \ - F(SetClear, 1, 1) \ - F(SetGetSize, 1, 1) \ - F(SetCreateIterator, 2, 1) \ - \ - F(SetIteratorNext, 1, 1) \ - F(SetIteratorClose, 1, 1) \ - \ - /* Harmony maps */ \ - F(MapInitialize, 1, 1) \ - F(MapGet, 2, 1) \ - F(MapHas, 2, 1) \ - F(MapDelete, 2, 1) \ - F(MapClear, 1, 1) \ - F(MapSet, 3, 1) \ - F(MapGetSize, 1, 1) \ - F(MapCreateIterator, 2, 1) \ - \ - F(MapIteratorNext, 1, 1) \ - F(MapIteratorClose, 1, 1) \ - \ - /* Harmony weak maps and sets */ \ - F(WeakCollectionInitialize, 1, 1) \ - F(WeakCollectionGet, 2, 1) \ - F(WeakCollectionHas, 2, 1) \ - F(WeakCollectionDelete, 2, 1) \ - F(WeakCollectionSet, 3, 1) \ - \ - /* Harmony events */ \ - F(SetMicrotaskPending, 1, 1) \ - F(RunMicrotasks, 0, 1) \ - \ - /* Harmony observe */ \ - F(IsObserved, 1, 1) \ - F(SetIsObserved, 1, 1) \ - F(GetObservationState, 0, 1) \ - F(ObservationWeakMapCreate, 0, 1) \ - F(ObserverObjectAndRecordHaveSameOrigin, 3, 1) \ - F(ObjectWasCreatedInCurrentOrigin, 1, 1) \ - F(ObjectObserveInObjectContext, 3, 1) \ - F(ObjectGetNotifierInObjectContext, 1, 1) \ - F(ObjectNotifierPerformChangeInObjectContext, 3, 1) \ - \ - /* Harmony typed arrays */ \ - F(ArrayBufferInitialize, 2, 1)\ - F(ArrayBufferSliceImpl, 3, 1) \ - F(ArrayBufferIsView, 1, 1) \ - F(ArrayBufferNeuter, 1, 1) \ - \ - F(TypedArrayInitializeFromArrayLike, 4, 1) \ - F(TypedArrayGetBuffer, 1, 1) \ - F(TypedArraySetFastCases, 3, 1) \ - \ - F(DataViewGetBuffer, 1, 1) \ - F(DataViewGetInt8, 3, 1) \ - F(DataViewGetUint8, 3, 1) \ - F(DataViewGetInt16, 3, 1) \ - F(DataViewGetUint16, 3, 1) \ - F(DataViewGetInt32, 3, 1) \ - F(DataViewGetUint32, 3, 1) \ - F(DataViewGetFloat32, 3, 1) \ - F(DataViewGetFloat64, 3, 1) \ - \ - F(DataViewSetInt8, 4, 1) \ - F(DataViewSetUint8, 4, 1) \ - F(DataViewSetInt16, 4, 1) \ - F(DataViewSetUint16, 4, 1) \ - F(DataViewSetInt32, 4, 1) \ - F(DataViewSetUint32, 4, 1) \ - F(DataViewSetFloat32, 4, 1) \ - F(DataViewSetFloat64, 4, 1) \ - \ - /* Statements */ \ - F(NewObjectFromBound, 1, 1) \ - \ - /* Declarations and initialization */ \ - F(InitializeVarGlobal, -1 /* 2 or 3 */, 1) \ - F(OptimizeObjectForAddingMultipleProperties, 2, 1) \ - \ - /* Debugging */ \ - 
F(DebugPrint, 1, 1) \ - F(GlobalPrint, 1, 1) \ - F(DebugTrace, 0, 1) \ - F(TraceEnter, 0, 1) \ - F(TraceExit, 1, 1) \ - F(Abort, 1, 1) \ - F(AbortJS, 1, 1) \ - /* ES5 */ \ - F(LocalKeys, 1, 1) \ - \ - /* Message objects */ \ - F(MessageGetStartPosition, 1, 1) \ - F(MessageGetScript, 1, 1) \ - \ - /* Pseudo functions - handled as macros by parser */ \ - F(IS_VAR, 1, 1) \ - \ - /* expose boolean functions from objects-inl.h */ \ - F(HasFastSmiElements, 1, 1) \ - F(HasFastSmiOrObjectElements, 1, 1) \ - F(HasFastObjectElements, 1, 1) \ - F(HasFastDoubleElements, 1, 1) \ - F(HasFastHoleyElements, 1, 1) \ - F(HasDictionaryElements, 1, 1) \ - F(HasSloppyArgumentsElements, 1, 1) \ - F(HasExternalUint8ClampedElements, 1, 1) \ - F(HasExternalArrayElements, 1, 1) \ - F(HasExternalInt8Elements, 1, 1) \ - F(HasExternalUint8Elements, 1, 1) \ - F(HasExternalInt16Elements, 1, 1) \ - F(HasExternalUint16Elements, 1, 1) \ - F(HasExternalInt32Elements, 1, 1) \ - F(HasExternalUint32Elements, 1, 1) \ - F(HasExternalFloat32Elements, 1, 1) \ - F(HasExternalFloat64Elements, 1, 1) \ - F(HasFixedUint8ClampedElements, 1, 1) \ - F(HasFixedInt8Elements, 1, 1) \ - F(HasFixedUint8Elements, 1, 1) \ - F(HasFixedInt16Elements, 1, 1) \ - F(HasFixedUint16Elements, 1, 1) \ - F(HasFixedInt32Elements, 1, 1) \ - F(HasFixedUint32Elements, 1, 1) \ - F(HasFixedFloat32Elements, 1, 1) \ - F(HasFixedFloat64Elements, 1, 1) \ - F(HasFastProperties, 1, 1) \ - F(TransitionElementsKind, 2, 1) \ - F(HaveSameMap, 2, 1) \ - F(IsJSGlobalProxy, 1, 1) +#define RUNTIME_FUNCTION_LIST_ALWAYS_2(F) \ + /* Reflection */ \ + F(FunctionSetInstanceClassName, 2, 1) \ + F(FunctionSetLength, 2, 1) \ + F(FunctionSetPrototype, 2, 1) \ + F(FunctionGetName, 1, 1) \ + F(FunctionSetName, 2, 1) \ + F(FunctionNameShouldPrintAsAnonymous, 1, 1) \ + F(FunctionMarkNameShouldPrintAsAnonymous, 1, 1) \ + F(FunctionIsGenerator, 1, 1) \ + F(FunctionIsArrow, 1, 1) \ + F(FunctionBindArguments, 4, 1) \ + F(BoundFunctionGetBindings, 1, 1) \ + F(FunctionRemovePrototype, 1, 1) \ + F(FunctionGetSourceCode, 1, 1) \ + F(FunctionGetScript, 1, 1) \ + F(FunctionGetScriptSourcePosition, 1, 1) \ + F(FunctionGetPositionForOffset, 2, 1) \ + F(FunctionIsAPIFunction, 1, 1) \ + F(FunctionIsBuiltin, 1, 1) \ + F(GetScript, 1, 1) \ + F(CollectStackTrace, 2, 1) \ + F(GetV8Version, 0, 1) \ + \ + F(SetCode, 2, 1) \ + \ + F(CreateApiFunction, 2, 1) \ + F(IsTemplate, 1, 1) \ + F(GetTemplateField, 2, 1) \ + F(DisableAccessChecks, 1, 1) \ + F(EnableAccessChecks, 1, 1) \ + \ + /* Dates */ \ + F(DateCurrentTime, 0, 1) \ + F(DateParseString, 2, 1) \ + F(DateLocalTimezone, 1, 1) \ + F(DateToUTC, 1, 1) \ + F(DateMakeDay, 2, 1) \ + F(DateSetValue, 3, 1) \ + F(DateCacheVersion, 0, 1) \ + \ + /* Globals */ \ + F(CompileString, 2, 1) \ + \ + /* Eval */ \ + F(GlobalProxy, 1, 1) \ + F(IsAttachedGlobal, 1, 1) \ + \ + F(AddNamedProperty, 4, 1) \ + F(AddPropertyForTemplate, 4, 1) \ + F(SetProperty, 4, 1) \ + F(DefineApiAccessorProperty, 5, 1) \ + F(DefineDataPropertyUnchecked, 4, 1) \ + F(DefineAccessorPropertyUnchecked, 5, 1) \ + F(GetDataProperty, 2, 1) \ + F(SetHiddenProperty, 3, 1) \ + \ + /* Arrays */ \ + F(RemoveArrayHoles, 2, 1) \ + F(GetArrayKeys, 2, 1) \ + F(MoveArrayContents, 2, 1) \ + F(EstimateNumberOfElements, 1, 1) \ + F(NormalizeElements, 1, 1) \ + \ + /* Getters and Setters */ \ + F(LookupAccessor, 3, 1) \ + \ + /* ES5 */ \ + F(ObjectFreeze, 1, 1) \ + \ + /* Harmony modules */ \ + F(IsJSModule, 1, 1) \ + \ + /* Harmony symbols */ \ + F(CreateSymbol, 1, 1) \ + F(CreatePrivateSymbol, 1, 1) \ + 
F(CreateGlobalPrivateSymbol, 1, 1) \ + F(CreatePrivateOwnSymbol, 1, 1) \ + F(NewSymbolWrapper, 1, 1) \ + F(SymbolDescription, 1, 1) \ + F(SymbolRegistry, 0, 1) \ + F(SymbolIsPrivate, 1, 1) \ + \ + /* Harmony proxies */ \ + F(CreateJSProxy, 2, 1) \ + F(CreateJSFunctionProxy, 4, 1) \ + F(IsJSProxy, 1, 1) \ + F(IsJSFunctionProxy, 1, 1) \ + F(GetHandler, 1, 1) \ + F(GetCallTrap, 1, 1) \ + F(GetConstructTrap, 1, 1) \ + F(Fix, 1, 1) \ + \ + /* Harmony sets */ \ + F(SetInitialize, 1, 1) \ + F(SetAdd, 2, 1) \ + F(SetHas, 2, 1) \ + F(SetDelete, 2, 1) \ + F(SetClear, 1, 1) \ + F(SetGetSize, 1, 1) \ + \ + F(SetIteratorInitialize, 3, 1) \ + F(SetIteratorNext, 2, 1) \ + \ + /* Harmony maps */ \ + F(MapInitialize, 1, 1) \ + F(MapGet, 2, 1) \ + F(MapHas, 2, 1) \ + F(MapDelete, 2, 1) \ + F(MapClear, 1, 1) \ + F(MapSet, 3, 1) \ + F(MapGetSize, 1, 1) \ + \ + F(MapIteratorInitialize, 3, 1) \ + F(MapIteratorNext, 2, 1) \ + \ + /* Harmony weak maps and sets */ \ + F(WeakCollectionInitialize, 1, 1) \ + F(WeakCollectionGet, 2, 1) \ + F(WeakCollectionHas, 2, 1) \ + F(WeakCollectionDelete, 2, 1) \ + F(WeakCollectionSet, 3, 1) \ + \ + F(GetWeakMapEntries, 1, 1) \ + F(GetWeakSetValues, 1, 1) \ + \ + /* Harmony events */ \ + F(EnqueueMicrotask, 1, 1) \ + F(RunMicrotasks, 0, 1) \ + \ + /* Harmony observe */ \ + F(IsObserved, 1, 1) \ + F(SetIsObserved, 1, 1) \ + F(GetObservationState, 0, 1) \ + F(ObservationWeakMapCreate, 0, 1) \ + F(ObserverObjectAndRecordHaveSameOrigin, 3, 1) \ + F(ObjectWasCreatedInCurrentOrigin, 1, 1) \ + F(GetObjectContextObjectObserve, 1, 1) \ + F(GetObjectContextObjectGetNotifier, 1, 1) \ + F(GetObjectContextNotifierPerformChange, 1, 1) \ + \ + /* Harmony typed arrays */ \ + F(ArrayBufferInitialize, 2, 1) \ + F(ArrayBufferSliceImpl, 3, 1) \ + F(ArrayBufferIsView, 1, 1) \ + F(ArrayBufferNeuter, 1, 1) \ + \ + F(TypedArrayInitializeFromArrayLike, 4, 1) \ + F(TypedArrayGetBuffer, 1, 1) \ + F(TypedArraySetFastCases, 3, 1) \ + \ + F(DataViewGetBuffer, 1, 1) \ + F(DataViewGetInt8, 3, 1) \ + F(DataViewGetUint8, 3, 1) \ + F(DataViewGetInt16, 3, 1) \ + F(DataViewGetUint16, 3, 1) \ + F(DataViewGetInt32, 3, 1) \ + F(DataViewGetUint32, 3, 1) \ + F(DataViewGetFloat32, 3, 1) \ + F(DataViewGetFloat64, 3, 1) \ + \ + F(DataViewSetInt8, 4, 1) \ + F(DataViewSetUint8, 4, 1) \ + F(DataViewSetInt16, 4, 1) \ + F(DataViewSetUint16, 4, 1) \ + F(DataViewSetInt32, 4, 1) \ + F(DataViewSetUint32, 4, 1) \ + F(DataViewSetFloat32, 4, 1) \ + F(DataViewSetFloat64, 4, 1) \ + \ + /* Statements */ \ + F(NewObjectFromBound, 1, 1) \ + \ + /* Declarations and initialization */ \ + F(InitializeVarGlobal, 3, 1) \ + F(OptimizeObjectForAddingMultipleProperties, 2, 1) \ + \ + /* Debugging */ \ + F(DebugPrint, 1, 1) \ + F(GlobalPrint, 1, 1) \ + F(DebugTrace, 0, 1) \ + F(TraceEnter, 0, 1) \ + F(TraceExit, 1, 1) \ + F(Abort, 1, 1) \ + F(AbortJS, 1, 1) \ + /* ES5 */ \ + F(OwnKeys, 1, 1) \ + \ + /* Message objects */ \ + F(MessageGetStartPosition, 1, 1) \ + F(MessageGetScript, 1, 1) \ + \ + /* Pseudo functions - handled as macros by parser */ \ + F(IS_VAR, 1, 1) \ + \ + /* expose boolean functions from objects-inl.h */ \ + F(HasFastSmiElements, 1, 1) \ + F(HasFastSmiOrObjectElements, 1, 1) \ + F(HasFastObjectElements, 1, 1) \ + F(HasFastDoubleElements, 1, 1) \ + F(HasFastHoleyElements, 1, 1) \ + F(HasDictionaryElements, 1, 1) \ + F(HasSloppyArgumentsElements, 1, 1) \ + F(HasExternalUint8ClampedElements, 1, 1) \ + F(HasExternalArrayElements, 1, 1) \ + F(HasExternalInt8Elements, 1, 1) \ + F(HasExternalUint8Elements, 1, 1) \ + 
F(HasExternalInt16Elements, 1, 1) \ + F(HasExternalUint16Elements, 1, 1) \ + F(HasExternalInt32Elements, 1, 1) \ + F(HasExternalUint32Elements, 1, 1) \ + F(HasExternalFloat32Elements, 1, 1) \ + F(HasExternalFloat64Elements, 1, 1) \ + F(HasFixedUint8ClampedElements, 1, 1) \ + F(HasFixedInt8Elements, 1, 1) \ + F(HasFixedUint8Elements, 1, 1) \ + F(HasFixedInt16Elements, 1, 1) \ + F(HasFixedUint16Elements, 1, 1) \ + F(HasFixedInt32Elements, 1, 1) \ + F(HasFixedUint32Elements, 1, 1) \ + F(HasFixedFloat32Elements, 1, 1) \ + F(HasFixedFloat64Elements, 1, 1) \ + F(HasFastProperties, 1, 1) \ + F(TransitionElementsKind, 2, 1) \ + F(HaveSameMap, 2, 1) \ + F(IsJSGlobalProxy, 1, 1) \ + F(ForInInit, 2, 2) /* TODO(turbofan): Only temporary */ \ + F(ForInNext, 4, 2) /* TODO(turbofan): Only temporary */ \ + F(ForInCacheArrayLength, 2, 1) /* TODO(turbofan): Only temporary */ + + +#define RUNTIME_FUNCTION_LIST_ALWAYS_3(F) \ + /* String and Regexp */ \ + F(NumberToStringRT, 1, 1) \ + F(RegExpConstructResult, 3, 1) \ + F(RegExpExecRT, 4, 1) \ + F(StringAdd, 2, 1) \ + F(SubString, 3, 1) \ + F(InternalizeString, 1, 1) \ + F(StringCompare, 2, 1) \ + F(StringCharCodeAtRT, 2, 1) \ + F(GetFromCache, 2, 1) \ + \ + /* Compilation */ \ + F(CompileUnoptimized, 1, 1) \ + F(CompileOptimized, 2, 1) \ + F(TryInstallOptimizedCode, 1, 1) \ + F(NotifyDeoptimized, 1, 1) \ + F(NotifyStubFailure, 0, 1) \ + \ + /* Utilities */ \ + F(AllocateInNewSpace, 1, 1) \ + F(AllocateInTargetSpace, 2, 1) \ + F(AllocateHeapNumber, 0, 1) \ + F(NumberToSmi, 1, 1) \ + F(NumberToStringSkipCache, 1, 1) \ + \ + F(NewArguments, 1, 1) /* TODO(turbofan): Only temporary */ \ + F(NewSloppyArguments, 3, 1) \ + F(NewStrictArguments, 3, 1) \ + \ + /* Harmony generators */ \ + F(CreateJSGeneratorObject, 0, 1) \ + F(SuspendJSGeneratorObject, 1, 1) \ + F(ResumeJSGeneratorObject, 3, 1) \ + F(ThrowGeneratorStateError, 1, 1) \ + \ + /* Arrays */ \ + F(ArrayConstructor, -1, 1) \ + F(InternalArrayConstructor, -1, 1) \ + \ + /* Literals */ \ + F(MaterializeRegExpLiteral, 4, 1) \ + F(CreateObjectLiteral, 4, 1) \ + F(CreateArrayLiteral, 4, 1) \ + F(CreateArrayLiteralStubBailout, 3, 1) \ + \ + /* Statements */ \ + F(NewClosure, 3, 1) \ + F(NewClosureFromStubFailure, 1, 1) \ + F(NewObject, 1, 1) \ + F(NewObjectWithAllocationSite, 2, 1) \ + F(FinalizeInstanceSize, 1, 1) \ + F(Throw, 1, 1) \ + F(ReThrow, 1, 1) \ + F(ThrowReferenceError, 1, 1) \ + F(ThrowNotDateError, 0, 1) \ + F(StackGuard, 0, 1) \ + F(Interrupt, 0, 1) \ + F(PromoteScheduledException, 0, 1) \ + \ + /* Contexts */ \ + F(NewGlobalContext, 2, 1) \ + F(NewFunctionContext, 1, 1) \ + F(PushWithContext, 2, 1) \ + F(PushCatchContext, 3, 1) \ + F(PushBlockContext, 2, 1) \ + F(PushModuleContext, 2, 1) \ + F(DeleteLookupSlot, 2, 1) \ + F(LoadLookupSlot, 2, 2) \ + F(LoadLookupSlotNoReferenceError, 2, 2) \ + F(StoreLookupSlot, 4, 1) \ + \ + /* Declarations and initialization */ \ + F(DeclareGlobals, 3, 1) \ + F(DeclareModules, 1, 1) \ + F(DeclareLookupSlot, 4, 1) \ + F(InitializeConstGlobal, 2, 1) \ + F(InitializeLegacyConstLookupSlot, 3, 1) \ + \ + /* Eval */ \ + F(ResolvePossiblyDirectEval, 5, 2) \ + \ + /* Maths */ \ + F(MathPowSlow, 2, 1) \ + F(MathPowRT, 2, 1) #define RUNTIME_FUNCTION_LIST_DEBUGGER(F) \ @@ -445,6 +542,7 @@ namespace internal { F(DebugConstructedBy, 2, 1) \ F(DebugGetPrototype, 1, 1) \ F(DebugSetScriptSource, 2, 1) \ + F(DebugCallbackSupportsStepping, 1, 1) \ F(SystemBreak, 0, 1) \ F(DebugDisassembleFunction, 1, 1) \ F(DebugDisassembleConstructor, 1, 1) \ @@ -528,98 +626,11 @@ namespace internal { 
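RUNTIME_FUNCTION_LIST below is an X-macro: a single list expanded several times with different definitions of F, so the FunctionId enum and the kIntrinsicFunctions table back in runtime.cc stay in lockstep. A reduced standalone sketch of the technique, with a two-argument F in place of the real (name, nargs, ressize) triple:

// One list, many expansions: the classic X-macro trick.
#define FUNCTION_LIST(F) \
  F(Add, 2)              \
  F(Sub, 2)              \
  F(Neg, 1)

// Expansion 1: an enum of ids.
enum FunctionId {
#define F(name, nargs) k##name,
  FUNCTION_LIST(F)
#undef F
  kNumFunctions
};

// Expansion 2: a parallel table of metadata.
struct Function {
  const char* name;
  int nargs;
};

static const Function kFunctions[] = {
#define F(name, nargs) {#name, nargs},
  FUNCTION_LIST(F)
#undef F
};

// Adding or removing an entry in FUNCTION_LIST updates both in lockstep.
static_assert(sizeof(kFunctions) / sizeof(kFunctions[0]) == kNumFunctions,
              "enum and table must stay in sync");

int main() { return kFunctions[kAdd].nargs == 2 ? 0 : 1; }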
#define RUNTIME_FUNCTION_LIST(F) \ RUNTIME_FUNCTION_LIST_ALWAYS_1(F) \ RUNTIME_FUNCTION_LIST_ALWAYS_2(F) \ + RUNTIME_FUNCTION_LIST_ALWAYS_3(F) \ RUNTIME_FUNCTION_LIST_DEBUG(F) \ RUNTIME_FUNCTION_LIST_DEBUGGER(F) \ RUNTIME_FUNCTION_LIST_I18N_SUPPORT(F) -// RUNTIME_HIDDEN_FUNCTION_LIST defines all runtime functions accessed -// by id from code generator, but not via native call by name. -// Entries have the form F(name, number of arguments, number of return values). -#define RUNTIME_HIDDEN_FUNCTION_LIST(F) \ - /* String and Regexp */ \ - F(NumberToString, 1, 1) \ - F(RegExpConstructResult, 3, 1) \ - F(RegExpExec, 4, 1) \ - F(StringAdd, 2, 1) \ - F(SubString, 3, 1) \ - F(StringCompare, 2, 1) \ - F(StringCharCodeAt, 2, 1) \ - F(GetFromCache, 2, 1) \ - \ - /* Compilation */ \ - F(CompileUnoptimized, 1, 1) \ - F(CompileOptimized, 2, 1) \ - F(TryInstallOptimizedCode, 1, 1) \ - F(NotifyDeoptimized, 1, 1) \ - F(NotifyStubFailure, 0, 1) \ - \ - /* Utilities */ \ - F(AllocateInNewSpace, 1, 1) \ - F(AllocateInTargetSpace, 2, 1) \ - F(AllocateHeapNumber, 0, 1) \ - F(NumberToSmi, 1, 1) \ - F(NumberToStringSkipCache, 1, 1) \ - \ - F(NewArgumentsFast, 3, 1) \ - F(NewStrictArgumentsFast, 3, 1) \ - \ - /* Harmony generators */ \ - F(CreateJSGeneratorObject, 0, 1) \ - F(SuspendJSGeneratorObject, 1, 1) \ - F(ResumeJSGeneratorObject, 3, 1) \ - F(ThrowGeneratorStateError, 1, 1) \ - \ - /* Arrays */ \ - F(ArrayConstructor, -1, 1) \ - F(InternalArrayConstructor, -1, 1) \ - \ - /* Literals */ \ - F(MaterializeRegExpLiteral, 4, 1)\ - F(CreateObjectLiteral, 4, 1) \ - F(CreateArrayLiteral, 4, 1) \ - F(CreateArrayLiteralStubBailout, 3, 1) \ - \ - /* Statements */ \ - F(NewClosure, 3, 1) \ - F(NewClosureFromStubFailure, 1, 1) \ - F(NewObject, 1, 1) \ - F(NewObjectWithAllocationSite, 2, 1) \ - F(FinalizeInstanceSize, 1, 1) \ - F(Throw, 1, 1) \ - F(ReThrow, 1, 1) \ - F(ThrowReferenceError, 1, 1) \ - F(ThrowNotDateError, 0, 1) \ - F(ThrowMessage, 1, 1) \ - F(StackGuard, 0, 1) \ - F(Interrupt, 0, 1) \ - F(PromoteScheduledException, 0, 1) \ - \ - /* Contexts */ \ - F(NewGlobalContext, 2, 1) \ - F(NewFunctionContext, 1, 1) \ - F(PushWithContext, 2, 1) \ - F(PushCatchContext, 3, 1) \ - F(PushBlockContext, 2, 1) \ - F(PushModuleContext, 2, 1) \ - F(DeleteContextSlot, 2, 1) \ - F(LoadContextSlot, 2, 2) \ - F(LoadContextSlotNoReferenceError, 2, 2) \ - F(StoreContextSlot, 4, 1) \ - \ - /* Declarations and initialization */ \ - F(DeclareGlobals, 3, 1) \ - F(DeclareModules, 1, 1) \ - F(DeclareContextSlot, 4, 1) \ - F(InitializeConstGlobal, 2, 1) \ - F(InitializeConstContextSlot, 3, 1) \ - \ - /* Eval */ \ - F(ResolvePossiblyDirectEval, 5, 2) \ - \ - /* Maths */ \ - F(MathPowSlow, 2, 1) \ - F(MathPow, 2, 1) - // ---------------------------------------------------------------------------- // INLINE_FUNCTION_LIST defines all inlined functions accessed // with a native call of the form %_name from within JS code. @@ -662,13 +673,16 @@ namespace internal { F(RegExpExec, 4, 1) \ F(RegExpConstructResult, 3, 1) \ F(GetFromCache, 2, 1) \ - F(NumberToString, 1, 1) + F(NumberToString, 1, 1) \ + F(DebugIsActive, 0, 1) // ---------------------------------------------------------------------------- // INLINE_OPTIMIZED_FUNCTION_LIST defines all inlined functions accessed // with a native call of the form %_name from within JS code that also have // a corresponding runtime function, that is called from non-optimized code. +// For the benefit of (fuzz) tests, the runtime version can also be called +// directly as %name (i.e. 
without the leading underscore). // Entries have the form F(name, number of arguments, number of return values). #define INLINE_OPTIMIZED_FUNCTION_LIST(F) \ /* Typed Arrays */ \ @@ -685,10 +699,8 @@ namespace internal { F(ConstructDouble, 2, 1) \ F(DoubleHi, 1, 1) \ F(DoubleLo, 1, 1) \ - F(MathSqrt, 1, 1) \ - F(MathLog, 1, 1) \ - /* Debugger */ \ - F(DebugCallbackSupportsStepping, 1, 1) + F(MathSqrtRT, 1, 1) \ + F(MathLogRT, 1, 1) //--------------------------------------------------------------------------- @@ -741,9 +753,7 @@ class Runtime : public AllStatic { enum FunctionId { #define F(name, nargs, ressize) k##name, RUNTIME_FUNCTION_LIST(F) -#undef F -#define F(name, nargs, ressize) kHidden##name, - RUNTIME_HIDDEN_FUNCTION_LIST(F) + INLINE_OPTIMIZED_FUNCTION_LIST(F) #undef F #define F(name, nargs, ressize) kInline##name, INLINE_FUNCTION_LIST(F) @@ -757,7 +767,6 @@ class Runtime : public AllStatic { enum IntrinsicType { RUNTIME, - RUNTIME_HIDDEN, INLINE, INLINE_OPTIMIZED }; @@ -792,6 +801,9 @@ class Runtime : public AllStatic { // Get the intrinsic function with the given FunctionId. static const Function* FunctionForId(FunctionId id); + // Get the intrinsic function with the given function entry address. + static const Function* FunctionForEntry(Address ref); + // General-purpose helper functions for runtime system. static int StringMatch(Isolate* isolate, Handle<String> sub, @@ -811,20 +823,16 @@ class Runtime : public AllStatic { uint32_t index); MUST_USE_RESULT static MaybeHandle<Object> SetObjectProperty( - Isolate* isolate, - Handle<Object> object, - Handle<Object> key, - Handle<Object> value, - PropertyAttributes attr, - StrictMode strict_mode); + Isolate* isolate, Handle<Object> object, Handle<Object> key, + Handle<Object> value, StrictMode strict_mode); - MUST_USE_RESULT static MaybeHandle<Object> ForceSetObjectProperty( + MUST_USE_RESULT static MaybeHandle<Object> DefineObjectProperty( Handle<JSObject> object, Handle<Object> key, Handle<Object> value, PropertyAttributes attr, - JSReceiver::StoreFromKeyed store_from_keyed - = JSReceiver::MAY_BE_STORE_FROM_KEYED); + JSReceiver::StoreFromKeyed store_from_keyed = + JSReceiver::MAY_BE_STORE_FROM_KEYED); MUST_USE_RESULT static MaybeHandle<Object> DeleteObjectProperty( Isolate* isolate, diff --git a/deps/v8/src/runtime.js b/deps/v8/src/runtime.js index 1dee2e08f61..d9e1fe5c994 100644 --- a/deps/v8/src/runtime.js +++ b/deps/v8/src/runtime.js @@ -36,6 +36,7 @@ function EQUALS(y) { while (true) { if (IS_NUMBER(y)) return %NumberEquals(x, y); if (IS_NULL_OR_UNDEFINED(y)) return 1; // not equal + if (IS_SYMBOL(y)) return 1; // not equal if (!IS_SPEC_OBJECT(y)) { // String or boolean. return %NumberEquals(x, %ToNumber(y)); @@ -501,7 +502,7 @@ function ToNumber(x) { } if (IS_BOOLEAN(x)) return x ? 1 : 0; if (IS_UNDEFINED(x)) return NAN; - if (IS_SYMBOL(x)) return NAN; + if (IS_SYMBOL(x)) throw MakeTypeError('symbol_to_number', []); return (IS_NULL(x)) ? 0 : ToNumber(%DefaultNumber(x)); } @@ -512,7 +513,7 @@ function NonNumberToNumber(x) { } if (IS_BOOLEAN(x)) return x ? 1 : 0; if (IS_UNDEFINED(x)) return NAN; - if (IS_SYMBOL(x)) return NAN; + if (IS_SYMBOL(x)) throw MakeTypeError('symbol_to_number', []); return (IS_NULL(x)) ? 0 : ToNumber(%DefaultNumber(x)); } @@ -607,35 +608,37 @@ function IsPrimitive(x) { // ECMA-262, section 8.6.2.6, page 28. 
function DefaultNumber(x) { - var valueOf = x.valueOf; - if (IS_SPEC_FUNCTION(valueOf)) { - var v = %_CallFunction(x, valueOf); - if (%IsPrimitive(v)) return v; - } + if (!IS_SYMBOL_WRAPPER(x)) { + var valueOf = x.valueOf; + if (IS_SPEC_FUNCTION(valueOf)) { + var v = %_CallFunction(x, valueOf); + if (%IsPrimitive(v)) return v; + } - var toString = x.toString; - if (IS_SPEC_FUNCTION(toString)) { - var s = %_CallFunction(x, toString); - if (%IsPrimitive(s)) return s; + var toString = x.toString; + if (IS_SPEC_FUNCTION(toString)) { + var s = %_CallFunction(x, toString); + if (%IsPrimitive(s)) return s; + } } - throw %MakeTypeError('cannot_convert_to_primitive', []); } // ECMA-262, section 8.6.2.6, page 28. function DefaultString(x) { - var toString = x.toString; - if (IS_SPEC_FUNCTION(toString)) { - var s = %_CallFunction(x, toString); - if (%IsPrimitive(s)) return s; - } + if (!IS_SYMBOL_WRAPPER(x)) { + var toString = x.toString; + if (IS_SPEC_FUNCTION(toString)) { + var s = %_CallFunction(x, toString); + if (%IsPrimitive(s)) return s; + } - var valueOf = x.valueOf; - if (IS_SPEC_FUNCTION(valueOf)) { - var v = %_CallFunction(x, valueOf); - if (%IsPrimitive(v)) return v; + var valueOf = x.valueOf; + if (IS_SPEC_FUNCTION(valueOf)) { + var v = %_CallFunction(x, valueOf); + if (%IsPrimitive(v)) return v; + } } - throw %MakeTypeError('cannot_convert_to_primitive', []); } diff --git a/deps/v8/src/safepoint-table.cc b/deps/v8/src/safepoint-table.cc index b7833c232b1..d00b751a058 100644 --- a/deps/v8/src/safepoint-table.cc +++ b/deps/v8/src/safepoint-table.cc @@ -2,22 +2,23 @@ // Use of this source code is governed by a BSD-style license that can be // found in the LICENSE file. -#include "v8.h" +#include "src/v8.h" -#include "safepoint-table.h" +#include "src/safepoint-table.h" -#include "deoptimizer.h" -#include "disasm.h" -#include "macro-assembler.h" -#include "zone-inl.h" +#include "src/deoptimizer.h" +#include "src/disasm.h" +#include "src/macro-assembler.h" +#include "src/ostreams.h" +#include "src/zone-inl.h" namespace v8 { namespace internal { bool SafepointEntry::HasRegisters() const { - ASSERT(is_valid()); - ASSERT(IsAligned(kNumSafepointRegisters, kBitsPerByte)); + DCHECK(is_valid()); + DCHECK(IsAligned(kNumSafepointRegisters, kBitsPerByte)); const int num_reg_bytes = kNumSafepointRegisters >> kBitsPerByteLog2; for (int i = 0; i < num_reg_bytes; i++) { if (bits_[i] != SafepointTable::kNoRegisters) return true; @@ -27,8 +28,8 @@ bool SafepointEntry::HasRegisters() const { bool SafepointEntry::HasRegisterAt(int reg_index) const { - ASSERT(is_valid()); - ASSERT(reg_index >= 0 && reg_index < kNumSafepointRegisters); + DCHECK(is_valid()); + DCHECK(reg_index >= 0 && reg_index < kNumSafepointRegisters); int byte_index = reg_index >> kBitsPerByteLog2; int bit_index = reg_index & (kBitsPerByte - 1); return (bits_[byte_index] & (1 << bit_index)) != 0; @@ -36,7 +37,7 @@ bool SafepointEntry::HasRegisterAt(int reg_index) const { SafepointTable::SafepointTable(Code* code) { - ASSERT(code->is_crankshafted()); + DCHECK(code->is_crankshafted()); code_ = code; Address header = code->instruction_start() + code->safepoint_table_offset(); length_ = Memory::uint32_at(header + kLengthOffset); @@ -44,7 +45,7 @@ SafepointTable::SafepointTable(Code* code) { pc_and_deoptimization_indexes_ = header + kHeaderSize; entries_ = pc_and_deoptimization_indexes_ + (length_ * kPcAndDeoptimizationIndexSize); - ASSERT(entry_size_ > 0); + DCHECK(entry_size_ > 0); STATIC_ASSERT(SafepointEntry::DeoptimizationIndexField::kMax == 
Safepoint::kNoDeoptimizationIndex); } @@ -60,35 +61,36 @@ SafepointEntry SafepointTable::FindEntry(Address pc) const { } -void SafepointTable::PrintEntry(unsigned index, FILE* out) const { +void SafepointTable::PrintEntry(unsigned index, OStream& os) const { // NOLINT disasm::NameConverter converter; SafepointEntry entry = GetEntry(index); uint8_t* bits = entry.bits(); // Print the stack slot bits. if (entry_size_ > 0) { - ASSERT(IsAligned(kNumSafepointRegisters, kBitsPerByte)); + DCHECK(IsAligned(kNumSafepointRegisters, kBitsPerByte)); const int first = kNumSafepointRegisters >> kBitsPerByteLog2; int last = entry_size_ - 1; - for (int i = first; i < last; i++) PrintBits(out, bits[i], kBitsPerByte); + for (int i = first; i < last; i++) PrintBits(os, bits[i], kBitsPerByte); int last_bits = code_->stack_slots() - ((last - first) * kBitsPerByte); - PrintBits(out, bits[last], last_bits); + PrintBits(os, bits[last], last_bits); // Print the registers (if any). if (!entry.HasRegisters()) return; for (int j = 0; j < kNumSafepointRegisters; j++) { if (entry.HasRegisterAt(j)) { - PrintF(out, " | %s", converter.NameOfCPURegister(j)); + os << " | " << converter.NameOfCPURegister(j); } } } } -void SafepointTable::PrintBits(FILE* out, uint8_t byte, int digits) { - ASSERT(digits >= 0 && digits <= kBitsPerByte); +void SafepointTable::PrintBits(OStream& os, // NOLINT + uint8_t byte, int digits) { + DCHECK(digits >= 0 && digits <= kBitsPerByte); for (int i = 0; i < digits; i++) { - PrintF(out, "%c", ((byte & (1 << i)) == 0) ? '0' : '1'); + os << (((byte & (1 << i)) == 0) ? "0" : "1"); } } @@ -103,7 +105,7 @@ Safepoint SafepointTableBuilder::DefineSafepoint( Safepoint::Kind kind, int arguments, Safepoint::DeoptMode deopt_mode) { - ASSERT(arguments >= 0); + DCHECK(arguments >= 0); DeoptimizationInfo info; info.pc = assembler->pc_offset(); info.arguments = arguments; @@ -129,7 +131,7 @@ void SafepointTableBuilder::RecordLazyDeoptimizationIndex(int index) { } unsigned SafepointTableBuilder::GetCodeOffset() const { - ASSERT(emitted_); + DCHECK(emitted_); return offset_; } @@ -168,7 +170,7 @@ void SafepointTableBuilder::Emit(Assembler* assembler, int bits_per_entry) { bits.AddBlock(0, bytes_per_entry, zone_); // Run through the registers (if any). 
- ASSERT(IsAligned(kNumSafepointRegisters, kBitsPerByte)); + DCHECK(IsAligned(kNumSafepointRegisters, kBitsPerByte)); if (registers == NULL) { const int num_reg_bytes = kNumSafepointRegisters >> kBitsPerByteLog2; for (int j = 0; j < num_reg_bytes; j++) { @@ -177,7 +179,7 @@ void SafepointTableBuilder::Emit(Assembler* assembler, int bits_per_entry) { } else { for (int j = 0; j < registers->length(); j++) { int index = registers->at(j); - ASSERT(index >= 0 && index < kNumSafepointRegisters); + DCHECK(index >= 0 && index < kNumSafepointRegisters); int byte_index = index >> kBitsPerByteLog2; int bit_index = index & (kBitsPerByte - 1); bits[byte_index] |= (1 << bit_index); diff --git a/deps/v8/src/safepoint-table.h b/deps/v8/src/safepoint-table.h index 5a1af55cb1c..0bdd43104d5 100644 --- a/deps/v8/src/safepoint-table.h +++ b/deps/v8/src/safepoint-table.h @@ -5,10 +5,10 @@ #ifndef V8_SAFEPOINT_TABLE_H_ #define V8_SAFEPOINT_TABLE_H_ -#include "allocation.h" -#include "heap.h" -#include "v8memory.h" -#include "zone.h" +#include "src/allocation.h" +#include "src/heap/heap.h" +#include "src/v8memory.h" +#include "src/zone.h" namespace v8 { namespace internal { @@ -20,7 +20,7 @@ class SafepointEntry BASE_EMBEDDED { SafepointEntry() : info_(0), bits_(NULL) {} SafepointEntry(unsigned info, uint8_t* bits) : info_(info), bits_(bits) { - ASSERT(is_valid()); + DCHECK(is_valid()); } bool is_valid() const { return bits_ != NULL; } @@ -35,7 +35,7 @@ class SafepointEntry BASE_EMBEDDED { } int deoptimization_index() const { - ASSERT(is_valid()); + DCHECK(is_valid()); return DeoptimizationIndexField::decode(info_); } @@ -55,17 +55,17 @@ class SafepointEntry BASE_EMBEDDED { kSaveDoublesFieldBits> { }; // NOLINT int argument_count() const { - ASSERT(is_valid()); + DCHECK(is_valid()); return ArgumentsField::decode(info_); } bool has_doubles() const { - ASSERT(is_valid()); + DCHECK(is_valid()); return SaveDoublesField::decode(info_); } uint8_t* bits() { - ASSERT(is_valid()); + DCHECK(is_valid()); return bits_; } @@ -89,12 +89,12 @@ class SafepointTable BASE_EMBEDDED { unsigned entry_size() const { return entry_size_; } unsigned GetPcOffset(unsigned index) const { - ASSERT(index < length_); + DCHECK(index < length_); return Memory::uint32_at(GetPcOffsetLocation(index)); } SafepointEntry GetEntry(unsigned index) const { - ASSERT(index < length_); + DCHECK(index < length_); unsigned info = Memory::uint32_at(GetInfoLocation(index)); uint8_t* bits = &Memory::uint8_at(entries_ + (index * entry_size_)); return SafepointEntry(info, bits); @@ -103,7 +103,7 @@ class SafepointTable BASE_EMBEDDED { // Returns the entry for the given pc. SafepointEntry FindEntry(Address pc) const; - void PrintEntry(unsigned index, FILE* out = stdout) const; + void PrintEntry(unsigned index, OStream& os) const; // NOLINT private: static const uint8_t kNoRegisters = 0xFF; @@ -126,7 +126,8 @@ class SafepointTable BASE_EMBEDDED { return GetPcOffsetLocation(index) + kPcSize; } - static void PrintBits(FILE* out, uint8_t byte, int digits); + static void PrintBits(OStream& os, // NOLINT + uint8_t byte, int digits); DisallowHeapAllocation no_allocation_; Code* code_; diff --git a/deps/v8/src/sampler.cc b/deps/v8/src/sampler.cc index 3cb0b749a87..413b6be9060 100644 --- a/deps/v8/src/sampler.cc +++ b/deps/v8/src/sampler.cc @@ -2,7 +2,7 @@ // Use of this source code is governed by a BSD-style license that can be // found in the LICENSE file. 
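A note on the call-site impact of the PrintEntry/PrintBits change above: with the FILE* parameter (and its stdout default) gone, callers must now supply an OStream explicitly. A minimal sketch, assuming the OFStream adapter from src/ostreams.h (which safepoint-table.cc now includes) and an illustrative Code* named code:

  OFStream os(stdout);         // adapts a FILE* to the OStream interface
  SafepointTable table(code);  // 'code' must be crankshafted, per the DCHECK
  for (unsigned i = 0; i < table.length(); i++) {
    table.PrintEntry(i, os);   // stack-slot bits and register names go to 'os'
    os << "\n";
  }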
-#include "sampler.h" +#include "src/sampler.h" #if V8_OS_POSIX && !V8_OS_CYGWIN @@ -14,7 +14,7 @@ #include <sys/time.h> #if !V8_OS_QNX -#include <sys/syscall.h> +#include <sys/syscall.h> // NOLINT #endif #if V8_OS_MACOSX @@ -33,25 +33,25 @@ #if V8_OS_ANDROID && !defined(__BIONIC_HAVE_UCONTEXT_T) && \ (defined(__arm__) || defined(__aarch64__)) && \ !defined(__BIONIC_HAVE_STRUCT_SIGCONTEXT) -#include <asm/sigcontext.h> +#include <asm/sigcontext.h> // NOLINT #endif #elif V8_OS_WIN || V8_OS_CYGWIN -#include "win32-headers.h" +#include "src/base/win32-headers.h" #endif -#include "v8.h" +#include "src/v8.h" -#include "cpu-profiler-inl.h" -#include "flags.h" -#include "frames-inl.h" -#include "log.h" -#include "platform.h" -#include "simulator.h" -#include "v8threads.h" -#include "vm-state-inl.h" +#include "src/base/platform/platform.h" +#include "src/cpu-profiler-inl.h" +#include "src/flags.h" +#include "src/frames-inl.h" +#include "src/log.h" +#include "src/simulator.h" +#include "src/v8threads.h" +#include "src/vm-state-inl.h" #if V8_OS_ANDROID && !defined(__BIONIC_HAVE_UCONTEXT_T) @@ -256,6 +256,12 @@ class SimulatorHelper { Simulator::sp)); state->fp = reinterpret_cast<Address>(simulator_->get_register( Simulator::fp)); +#elif V8_TARGET_ARCH_MIPS64 + state->pc = reinterpret_cast<Address>(simulator_->get_pc()); + state->sp = reinterpret_cast<Address>(simulator_->get_register( + Simulator::sp)); + state->fp = reinterpret_cast<Address>(simulator_->get_register( + Simulator::fp)); #endif } @@ -269,16 +275,16 @@ class SimulatorHelper { class SignalHandler : public AllStatic { public: - static void SetUp() { if (!mutex_) mutex_ = new Mutex(); } + static void SetUp() { if (!mutex_) mutex_ = new base::Mutex(); } static void TearDown() { delete mutex_; } static void IncreaseSamplerCount() { - LockGuard<Mutex> lock_guard(mutex_); + base::LockGuard<base::Mutex> lock_guard(mutex_); if (++client_count_ == 1) Install(); } static void DecreaseSamplerCount() { - LockGuard<Mutex> lock_guard(mutex_); + base::LockGuard<base::Mutex> lock_guard(mutex_); if (--client_count_ == 0) Restore(); } @@ -309,14 +315,14 @@ class SignalHandler : public AllStatic { static void HandleProfilerSignal(int signal, siginfo_t* info, void* context); // Protects the process wide state below. - static Mutex* mutex_; + static base::Mutex* mutex_; static int client_count_; static bool signal_handler_installed_; static struct sigaction old_signal_handler_; }; -Mutex* SignalHandler::mutex_ = NULL; +base::Mutex* SignalHandler::mutex_ = NULL; int SignalHandler::client_count_ = 0; struct sigaction SignalHandler::old_signal_handler_; bool SignalHandler::signal_handler_installed_ = false; @@ -331,7 +337,7 @@ void SignalHandler::HandleProfilerSignal(int signal, siginfo_t* info, #else USE(info); if (signal != SIGPROF) return; - Isolate* isolate = Isolate::UncheckedCurrent(); + Isolate* isolate = Isolate::UnsafeCurrent(); if (isolate == NULL || !isolate->IsInitialized() || !isolate->IsInUse()) { // We require a fully initialized and entered isolate. 
return; @@ -393,6 +399,10 @@ void SignalHandler::HandleProfilerSignal(int signal, siginfo_t* info, state.pc = reinterpret_cast<Address>(mcontext.pc); state.sp = reinterpret_cast<Address>(mcontext.gregs[29]); state.fp = reinterpret_cast<Address>(mcontext.gregs[30]); +#elif V8_HOST_ARCH_MIPS64 + state.pc = reinterpret_cast<Address>(mcontext.pc); + state.sp = reinterpret_cast<Address>(mcontext.gregs[29]); + state.fp = reinterpret_cast<Address>(mcontext.gregs[30]); #endif // V8_HOST_ARCH_* #elif V8_OS_MACOSX #if V8_HOST_ARCH_X64 @@ -473,20 +483,20 @@ void SignalHandler::HandleProfilerSignal(int signal, siginfo_t* info, #endif -class SamplerThread : public Thread { +class SamplerThread : public base::Thread { public: static const int kSamplerThreadStackSize = 64 * KB; explicit SamplerThread(int interval) - : Thread(Thread::Options("SamplerThread", kSamplerThreadStackSize)), + : Thread(base::Thread::Options("SamplerThread", kSamplerThreadStackSize)), interval_(interval) {} - static void SetUp() { if (!mutex_) mutex_ = new Mutex(); } + static void SetUp() { if (!mutex_) mutex_ = new base::Mutex(); } static void TearDown() { delete mutex_; mutex_ = NULL; } static void AddActiveSampler(Sampler* sampler) { bool need_to_start = false; - LockGuard<Mutex> lock_guard(mutex_); + base::LockGuard<base::Mutex> lock_guard(mutex_); if (instance_ == NULL) { // Start a thread that will send SIGPROF signal to VM threads, // when CPU profiling will be enabled. @@ -494,9 +504,9 @@ class SamplerThread : public Thread { need_to_start = true; } - ASSERT(sampler->IsActive()); - ASSERT(!instance_->active_samplers_.Contains(sampler)); - ASSERT(instance_->interval_ == sampler->interval()); + DCHECK(sampler->IsActive()); + DCHECK(!instance_->active_samplers_.Contains(sampler)); + DCHECK(instance_->interval_ == sampler->interval()); instance_->active_samplers_.Add(sampler); if (need_to_start) instance_->StartSynchronously(); @@ -505,11 +515,11 @@ class SamplerThread : public Thread { static void RemoveActiveSampler(Sampler* sampler) { SamplerThread* instance_to_remove = NULL; { - LockGuard<Mutex> lock_guard(mutex_); + base::LockGuard<base::Mutex> lock_guard(mutex_); - ASSERT(sampler->IsActive()); + DCHECK(sampler->IsActive()); bool removed = instance_->active_samplers_.RemoveElement(sampler); - ASSERT(removed); + DCHECK(removed); USE(removed); // We cannot delete the instance immediately as we need to Join() the @@ -529,7 +539,7 @@ class SamplerThread : public Thread { virtual void Run() { while (true) { { - LockGuard<Mutex> lock_guard(mutex_); + base::LockGuard<base::Mutex> lock_guard(mutex_); if (active_samplers_.is_empty()) break; // When CPU profiling is enabled both JavaScript and C++ code is // profiled. We must not suspend. @@ -540,13 +550,13 @@ class SamplerThread : public Thread { sampler->DoSample(); } } - OS::Sleep(interval_); + base::OS::Sleep(interval_); } } private: // Protects the process wide state below. 
- static Mutex* mutex_; + static base::Mutex* mutex_; static SamplerThread* instance_; const int interval_; @@ -556,7 +566,7 @@ class SamplerThread : public Thread { }; -Mutex* SamplerThread::mutex_ = NULL; +base::Mutex* SamplerThread::mutex_ = NULL; SamplerThread* SamplerThread::instance_ = NULL; @@ -565,8 +575,8 @@ SamplerThread* SamplerThread::instance_ = NULL; // DISABLE_ASAN void TickSample::Init(Isolate* isolate, const RegisterState& regs) { - ASSERT(isolate->IsInitialized()); - timestamp = TimeTicks::HighResolutionNow(); + DCHECK(isolate->IsInitialized()); + timestamp = base::TimeTicks::HighResolutionNow(); pc = regs.pc; state = isolate->current_vm_state(); @@ -596,7 +606,7 @@ DISABLE_ASAN void TickSample::Init(Isolate* isolate, SafeStackFrameIterator it(isolate, regs.fp, regs.sp, js_entry_sp); top_frame_type = it.top_frame_type(); - int i = 0; + unsigned i = 0; while (!it.done() && i < TickSample::kMaxFramesCount) { stack[i++] = it.frame()->pc(); it.Advance(); @@ -634,27 +644,27 @@ Sampler::Sampler(Isolate* isolate, int interval) Sampler::~Sampler() { - ASSERT(!IsActive()); + DCHECK(!IsActive()); delete data_; } void Sampler::Start() { - ASSERT(!IsActive()); + DCHECK(!IsActive()); SetActive(true); SamplerThread::AddActiveSampler(this); } void Sampler::Stop() { - ASSERT(IsActive()); + DCHECK(IsActive()); SamplerThread::RemoveActiveSampler(this); SetActive(false); } void Sampler::IncreaseProfilingDepth() { - NoBarrier_AtomicIncrement(&profiling_, 1); + base::NoBarrier_AtomicIncrement(&profiling_, 1); #if defined(USE_SIGNALS) SignalHandler::IncreaseSamplerCount(); #endif @@ -665,7 +675,7 @@ void Sampler::DecreaseProfilingDepth() { #if defined(USE_SIGNALS) SignalHandler::DecreaseSamplerCount(); #endif - NoBarrier_AtomicIncrement(&profiling_, -1); + base::NoBarrier_AtomicIncrement(&profiling_, -1); } diff --git a/deps/v8/src/sampler.h b/deps/v8/src/sampler.h index 41da7494dc4..c3dce4ed7c2 100644 --- a/deps/v8/src/sampler.h +++ b/deps/v8/src/sampler.h @@ -5,9 +5,9 @@ #ifndef V8_SAMPLER_H_ #define V8_SAMPLER_H_ -#include "atomicops.h" -#include "frames.h" -#include "v8globals.h" +#include "src/base/atomicops.h" +#include "src/frames.h" +#include "src/globals.h" namespace v8 { namespace internal { @@ -44,10 +44,11 @@ struct TickSample { Address tos; // Top stack value (*sp). Address external_callback; }; - static const int kMaxFramesCount = 64; + static const unsigned kMaxFramesCountLog2 = 8; + static const unsigned kMaxFramesCount = (1 << kMaxFramesCountLog2) - 1; Address stack[kMaxFramesCount]; // Call stack. - TimeTicks timestamp; - int frames_count : 8; // Number of captured frames. + base::TimeTicks timestamp; + unsigned frames_count : kMaxFramesCountLog2; // Number of captured frames. bool has_external_callback : 1; StackFrame::Type top_frame_type : 4; }; @@ -74,20 +75,20 @@ class Sampler { // Whether the sampling thread should use this Sampler for CPU profiling? bool IsProfiling() const { - return NoBarrier_Load(&profiling_) > 0 && - !NoBarrier_Load(&has_processing_thread_); + return base::NoBarrier_Load(&profiling_) > 0 && + !base::NoBarrier_Load(&has_processing_thread_); } void IncreaseProfilingDepth(); void DecreaseProfilingDepth(); // Whether the sampler is running (that is, consumes resources). - bool IsActive() const { return NoBarrier_Load(&active_); } + bool IsActive() const { return base::NoBarrier_Load(&active_); } void DoSample(); // If true next sample must be initiated on the profiler event processor // thread right after latest sample is processed. 
void SetHasProcessingThread(bool value) { - NoBarrier_Store(&has_processing_thread_, value); + base::NoBarrier_Store(&has_processing_thread_, value); } // Used in tests to make sure that stack sampling is performed. @@ -108,13 +109,13 @@ class Sampler { virtual void Tick(TickSample* sample) = 0; private: - void SetActive(bool value) { NoBarrier_Store(&active_, value); } + void SetActive(bool value) { base::NoBarrier_Store(&active_, value); } Isolate* isolate_; const int interval_; - Atomic32 profiling_; - Atomic32 has_processing_thread_; - Atomic32 active_; + base::Atomic32 profiling_; + base::Atomic32 has_processing_thread_; + base::Atomic32 active_; PlatformData* data_; // Platform specific data. bool is_counting_samples_; // Counts stack samples taken in JS VM state. diff --git a/deps/v8/src/scanner-character-streams.cc b/deps/v8/src/scanner-character-streams.cc index 8fbfe4ee968..9ec0ad10083 100644 --- a/deps/v8/src/scanner-character-streams.cc +++ b/deps/v8/src/scanner-character-streams.cc @@ -2,12 +2,12 @@ // Use of this source code is governed by a BSD-style license that can be // found in the LICENSE file. -#include "v8.h" +#include "src/v8.h" -#include "scanner-character-streams.h" +#include "src/scanner-character-streams.h" -#include "handles.h" -#include "unicode-inl.h" +#include "src/handles.h" +#include "src/unicode-inl.h" namespace v8 { namespace internal { @@ -55,8 +55,8 @@ void BufferedUtf16CharacterStream::SlowPushBack(uc16 character) { buffer_cursor_ = buffer_end_; } // Ensure that there is room for at least one pushback. - ASSERT(buffer_cursor_ > buffer_); - ASSERT(pos_ > 0); + DCHECK(buffer_cursor_ > buffer_); + DCHECK(pos_ > 0); buffer_[--buffer_cursor_ - buffer_] = character; if (buffer_cursor_ == buffer_) { pushback_limit_ = NULL; @@ -78,7 +78,7 @@ bool BufferedUtf16CharacterStream::ReadBlock() { if (buffer_cursor_ < buffer_end_) return true; // Otherwise read a new block. 
} - unsigned length = FillBuffer(pos_, kBufferSize); + unsigned length = FillBuffer(pos_); buffer_end_ = buffer_ + length; return length > 0; } @@ -102,7 +102,7 @@ GenericStringUtf16CharacterStream::GenericStringUtf16CharacterStream( unsigned end_position) : string_(data), length_(end_position) { - ASSERT(end_position >= start_position); + DCHECK(end_position >= start_position); pos_ = start_position; } @@ -118,9 +118,9 @@ unsigned GenericStringUtf16CharacterStream::BufferSeekForward(unsigned delta) { } -unsigned GenericStringUtf16CharacterStream::FillBuffer(unsigned from_pos, - unsigned length) { +unsigned GenericStringUtf16CharacterStream::FillBuffer(unsigned from_pos) { if (from_pos >= length_) return 0; + unsigned length = kBufferSize; if (from_pos + length > length_) { length = length_ - from_pos; } @@ -155,8 +155,7 @@ unsigned Utf8ToUtf16CharacterStream::BufferSeekForward(unsigned delta) { } -unsigned Utf8ToUtf16CharacterStream::FillBuffer(unsigned char_position, - unsigned length) { +unsigned Utf8ToUtf16CharacterStream::FillBuffer(unsigned char_position) { static const unibrow::uchar kMaxUtf16Character = 0xffff; SetRawPosition(char_position); if (raw_character_position_ != char_position) { @@ -165,7 +164,7 @@ unsigned Utf8ToUtf16CharacterStream::FillBuffer(unsigned char_position, return 0u; } unsigned i = 0; - while (i < length - 1) { + while (i < kBufferSize - 1) { if (raw_data_pos_ == raw_data_length_) break; unibrow::uchar c = raw_data_[raw_data_pos_]; if (c <= unibrow::Utf8::kMaxOneByteChar) { @@ -209,12 +208,12 @@ static bool IsUtf8MultiCharacterFollower(byte later_byte) { static inline void Utf8CharacterBack(const byte* buffer, unsigned* cursor) { byte character = buffer[--*cursor]; if (character > unibrow::Utf8::kMaxOneByteChar) { - ASSERT(IsUtf8MultiCharacterFollower(character)); + DCHECK(IsUtf8MultiCharacterFollower(character)); // Last byte of a multi-byte character encoding. Step backwards until // pointing to the first byte of the encoding, recognized by having the // top two bits set. while (IsUtf8MultiCharacterFollower(buffer[--*cursor])) { } - ASSERT(IsUtf8MultiCharacterStart(buffer[*cursor])); + DCHECK(IsUtf8MultiCharacterStart(buffer[*cursor])); } } @@ -230,7 +229,7 @@ static inline void Utf8CharacterForward(const byte* buffer, unsigned* cursor) { // 110..... - (0xCx, 0xDx) one additional byte (minimum). // 1110.... - (0xEx) two additional bytes. // 11110... - (0xFx) three additional bytes (maximum). - ASSERT(IsUtf8MultiCharacterStart(character)); + DCHECK(IsUtf8MultiCharacterStart(character)); // Additional bytes is: // 1 if value in range 0xC0 .. 0xDF. // 2 if value in range 0xE0 .. 0xEF. @@ -239,7 +238,7 @@ static inline void Utf8CharacterForward(const byte* buffer, unsigned* cursor) { unsigned additional_bytes = ((0x3211u) >> (((character - 0xC0) >> 2) & 0xC)) & 0x03; *cursor += additional_bytes; - ASSERT(!IsUtf8MultiCharacterFollower(buffer[1 + additional_bytes])); + DCHECK(!IsUtf8MultiCharacterFollower(buffer[1 + additional_bytes])); } } @@ -255,12 +254,12 @@ void Utf8ToUtf16CharacterStream::SetRawPosition(unsigned target_position) { int old_pos = raw_data_pos_; Utf8CharacterBack(raw_data_, &raw_data_pos_); raw_character_position_--; - ASSERT(old_pos - raw_data_pos_ <= 4); + DCHECK(old_pos - raw_data_pos_ <= 4); // Step back over both code units for surrogate pairs. if (old_pos - raw_data_pos_ == 4) raw_character_position_--; } while (raw_character_position_ > target_position); // No surrogate pair splitting. 
- ASSERT(raw_character_position_ == target_position); + DCHECK(raw_character_position_ == target_position); return; } // Spool forwards in the utf8 buffer. @@ -269,11 +268,11 @@ void Utf8ToUtf16CharacterStream::SetRawPosition(unsigned target_position) { int old_pos = raw_data_pos_; Utf8CharacterForward(raw_data_, &raw_data_pos_); raw_character_position_++; - ASSERT(raw_data_pos_ - old_pos <= 4); + DCHECK(raw_data_pos_ - old_pos <= 4); if (raw_data_pos_ - old_pos == 4) raw_character_position_++; } // No surrogate pair splitting. - ASSERT(raw_character_position_ == target_position); + DCHECK(raw_character_position_ == target_position); } diff --git a/deps/v8/src/scanner-character-streams.h b/deps/v8/src/scanner-character-streams.h index 0d02f020196..eeb40e260f6 100644 --- a/deps/v8/src/scanner-character-streams.h +++ b/deps/v8/src/scanner-character-streams.h @@ -5,7 +5,7 @@ #ifndef V8_SCANNER_CHARACTER_STREAMS_H_ #define V8_SCANNER_CHARACTER_STREAMS_H_ -#include "scanner.h" +#include "src/scanner.h" namespace v8 { namespace internal { @@ -29,7 +29,7 @@ class BufferedUtf16CharacterStream: public Utf16CharacterStream { virtual void SlowPushBack(uc16 character); virtual unsigned BufferSeekForward(unsigned delta) = 0; - virtual unsigned FillBuffer(unsigned position, unsigned length) = 0; + virtual unsigned FillBuffer(unsigned position) = 0; const uc16* pushback_limit_; uc16 buffer_[kBufferSize]; @@ -46,7 +46,7 @@ class GenericStringUtf16CharacterStream: public BufferedUtf16CharacterStream { protected: virtual unsigned BufferSeekForward(unsigned delta); - virtual unsigned FillBuffer(unsigned position, unsigned length); + virtual unsigned FillBuffer(unsigned position); Handle<String> string_; unsigned length_; @@ -61,7 +61,7 @@ class Utf8ToUtf16CharacterStream: public BufferedUtf16CharacterStream { protected: virtual unsigned BufferSeekForward(unsigned delta); - virtual unsigned FillBuffer(unsigned char_position, unsigned length); + virtual unsigned FillBuffer(unsigned char_position); void SetRawPosition(unsigned char_position); const byte* raw_data_; @@ -82,7 +82,7 @@ class ExternalTwoByteStringUtf16CharacterStream: public Utf16CharacterStream { virtual ~ExternalTwoByteStringUtf16CharacterStream(); virtual void PushBack(uc32 character) { - ASSERT(buffer_cursor_ > raw_data_); + DCHECK(buffer_cursor_ > raw_data_); buffer_cursor_--; pos_--; } diff --git a/deps/v8/src/scanner.cc b/deps/v8/src/scanner.cc index 2e039ca4035..2e8e24b06f0 100644 --- a/deps/v8/src/scanner.cc +++ b/deps/v8/src/scanner.cc @@ -6,18 +6,28 @@ #include <cmath> -#include "scanner.h" +#include "src/v8.h" -#include "../include/v8stdint.h" -#include "char-predicates-inl.h" -#include "conversions-inl.h" -#include "list-inl.h" -#include "v8.h" -#include "parser.h" +#include "include/v8stdint.h" +#include "src/ast-value-factory.h" +#include "src/char-predicates-inl.h" +#include "src/conversions-inl.h" +#include "src/list-inl.h" +#include "src/parser.h" +#include "src/scanner.h" namespace v8 { namespace internal { + +Handle<String> LiteralBuffer::Internalize(Isolate* isolate) const { + if (is_one_byte()) { + return isolate->factory()->InternalizeOneByteString(one_byte_literal()); + } + return isolate->factory()->InternalizeTwoByteString(two_byte_literal()); +} + + // ---------------------------------------------------------------------------- // Scanner @@ -43,7 +53,7 @@ void Scanner::Initialize(Utf16CharacterStream* source) { uc32 Scanner::ScanHexNumber(int expected_length) { - ASSERT(expected_length <= 4); // prevent overflow + 
DCHECK(expected_length <= 4); // prevent overflow uc32 digits[4] = { 0, 0, 0, 0 }; uc32 x = 0; @@ -294,8 +304,70 @@ Token::Value Scanner::SkipSingleLineComment() { } +Token::Value Scanner::SkipSourceURLComment() { + TryToParseSourceURLComment(); + while (c0_ >= 0 && !unicode_cache_->IsLineTerminator(c0_)) { + Advance(); + } + + return Token::WHITESPACE; +} + + +void Scanner::TryToParseSourceURLComment() { + // Magic comments are of the form: //[#@]\s<name>=\s*<value>\s*.* and this + // function will just return if it cannot parse a magic comment. + if (!unicode_cache_->IsWhiteSpace(c0_)) + return; + Advance(); + LiteralBuffer name; + while (c0_ >= 0 && !unicode_cache_->IsWhiteSpaceOrLineTerminator(c0_) && + c0_ != '=') { + name.AddChar(c0_); + Advance(); + } + if (!name.is_one_byte()) return; + Vector<const uint8_t> name_literal = name.one_byte_literal(); + LiteralBuffer* value; + if (name_literal == STATIC_ASCII_VECTOR("sourceURL")) { + value = &source_url_; + } else if (name_literal == STATIC_ASCII_VECTOR("sourceMappingURL")) { + value = &source_mapping_url_; + } else { + return; + } + if (c0_ != '=') + return; + Advance(); + value->Reset(); + while (c0_ >= 0 && unicode_cache_->IsWhiteSpace(c0_)) { + Advance(); + } + while (c0_ >= 0 && !unicode_cache_->IsLineTerminator(c0_)) { + // Disallowed characters. + if (c0_ == '"' || c0_ == '\'') { + value->Reset(); + return; + } + if (unicode_cache_->IsWhiteSpace(c0_)) { + break; + } + value->AddChar(c0_); + Advance(); + } + // Allow whitespace at the end. + while (c0_ >= 0 && !unicode_cache_->IsLineTerminator(c0_)) { + if (!unicode_cache_->IsWhiteSpace(c0_)) { + value->Reset(); + break; + } + Advance(); + } +} + + Token::Value Scanner::SkipMultiLineComment() { - ASSERT(c0_ == '*'); + DCHECK(c0_ == '*'); Advance(); while (c0_ >= 0) { @@ -322,7 +394,7 @@ Token::Value Scanner::SkipMultiLineComment() { Token::Value Scanner::ScanHtmlComment() { // Check for <!-- comments. - ASSERT(c0_ == '!'); + DCHECK(c0_ == '!'); Advance(); if (c0_ == '-') { Advance(); @@ -330,7 +402,7 @@ Token::Value Scanner::ScanHtmlComment() { PushBack('-'); // undo Advance() } PushBack('!'); // undo Advance() - ASSERT(c0_ == '!'); + DCHECK(c0_ == '!'); return Token::LT; } @@ -394,10 +466,12 @@ void Scanner::Scan() { break; case '=': - // = == === + // = == === => Advance(); if (c0_ == '=') { token = Select('=', Token::EQ_STRICT, Token::EQ); + } else if (c0_ == '>') { + token = Select(Token::ARROW); } else { token = Token::ASSIGN; } @@ -458,7 +532,14 @@ void Scanner::Scan() { // / // /* /= Advance(); if (c0_ == '/') { - token = SkipSingleLineComment(); + Advance(); + if (c0_ == '@' || c0_ == '#') { + Advance(); + token = SkipSourceURLComment(); + } else { + PushBack(c0_); + token = SkipSingleLineComment(); + } } else if (c0_ == '*') { token = SkipMultiLineComment(); } else if (c0_ == '=') { @@ -580,9 +661,9 @@ void Scanner::SeekForward(int pos) { // the "next" token. The "current" token will be invalid. if (pos == next_.location.beg_pos) return; int current_pos = source_pos(); - ASSERT_EQ(next_.location.end_pos, current_pos); + DCHECK_EQ(next_.location.end_pos, current_pos); // Positions inside the lookahead token aren't supported. 
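  // (SeekForward only ever moves ahead of the current position, as the
  // DCHECK below enforces; seeking backwards would require re-scanning.)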
- ASSERT(pos >= current_pos); + DCHECK(pos >= current_pos); if (pos != current_pos) { source_->SeekForward(pos - source_->pos()); Advance(); @@ -702,7 +783,7 @@ void Scanner::ScanDecimalDigits() { Token::Value Scanner::ScanNumber(bool seen_period) { - ASSERT(IsDecimalDigit(c0_)); // the first digit of the number or the fraction + DCHECK(IsDecimalDigit(c0_)); // the first digit of the number or the fraction enum { DECIMAL, HEX, OCTAL, IMPLICIT_OCTAL, BINARY } kind = DECIMAL; @@ -781,7 +862,7 @@ Token::Value Scanner::ScanNumber(bool seen_period) { // scan exponent, if any if (c0_ == 'e' || c0_ == 'E') { - ASSERT(kind != HEX); // 'e'/'E' must be scanned as part of the hex number + DCHECK(kind != HEX); // 'e'/'E' must be scanned as part of the hex number if (kind != DECIMAL) return Token::ILLEGAL; // scan exponent AddLiteralCharAdvance(); @@ -890,7 +971,7 @@ static Token::Value KeywordOrIdentifierToken(const uint8_t* input, int input_length, bool harmony_scoping, bool harmony_modules) { - ASSERT(input_length >= 1); + DCHECK(input_length >= 1); const int kMinLength = 2; const int kMaxLength = 10; if (input_length < kMinLength || input_length > kMaxLength) { @@ -927,8 +1008,18 @@ static Token::Value KeywordOrIdentifierToken(const uint8_t* input, } +bool Scanner::IdentifierIsFutureStrictReserved( + const AstRawString* string) const { + // Keywords are always 1-byte strings. + return string->is_one_byte() && + Token::FUTURE_STRICT_RESERVED_WORD == + KeywordOrIdentifierToken(string->raw_data(), string->length(), + harmony_scoping_, harmony_modules_); +} + + Token::Value Scanner::ScanIdentifierOrKeyword() { - ASSERT(unicode_cache_->IsIdentifierStart(c0_)); + DCHECK(unicode_cache_->IsIdentifierStart(c0_)); LiteralScope literal(this); // Scan identifier start character. 
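  // An identifier may also begin with a backslash-introduced unicode
  // escape (\uXXXX); that form is handled before the ordinary path.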
if (c0_ == '\\') { @@ -1044,7 +1135,7 @@ bool Scanner::ScanRegExpPattern(bool seen_equal) { bool Scanner::ScanLiteralUnicodeEscape() { - ASSERT(c0_ == '\\'); + DCHECK(c0_ == '\\'); uc32 chars_read[6] = {'\\', 'u', 0, 0, 0, 0}; Advance(); int i = 1; @@ -1093,31 +1184,24 @@ bool Scanner::ScanRegExpFlags() { } -Handle<String> Scanner::AllocateNextLiteralString(Isolate* isolate, - PretenureFlag tenured) { - if (is_next_literal_one_byte()) { - return isolate->factory()->NewStringFromOneByte( - next_literal_one_byte_string(), tenured).ToHandleChecked(); - } else { - return isolate->factory()->NewStringFromTwoByte( - next_literal_two_byte_string(), tenured).ToHandleChecked(); +const AstRawString* Scanner::CurrentSymbol(AstValueFactory* ast_value_factory) { + if (is_literal_one_byte()) { + return ast_value_factory->GetOneByteString(literal_one_byte_string()); } + return ast_value_factory->GetTwoByteString(literal_two_byte_string()); } -Handle<String> Scanner::AllocateInternalizedString(Isolate* isolate) { - if (is_literal_one_byte()) { - return isolate->factory()->InternalizeOneByteString( - literal_one_byte_string()); - } else { - return isolate->factory()->InternalizeTwoByteString( - literal_two_byte_string()); +const AstRawString* Scanner::NextSymbol(AstValueFactory* ast_value_factory) { + if (is_next_literal_one_byte()) { + return ast_value_factory->GetOneByteString(next_literal_one_byte_string()); } + return ast_value_factory->GetTwoByteString(next_literal_two_byte_string()); } double Scanner::DoubleValue() { - ASSERT(is_literal_one_byte()); + DCHECK(is_literal_one_byte()); return StringToDouble( unicode_cache_, literal_one_byte_string(), @@ -1162,7 +1246,7 @@ int DuplicateFinder::AddSymbol(Vector<const uint8_t> key, int DuplicateFinder::AddNumber(Vector<const uint8_t> key, int value) { - ASSERT(key.length() > 0); + DCHECK(key.length() > 0); // Quick check for already being in canonical form. 
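  // (Differently spelled but equal numbers must still collide, so keys not
  // already in canonical form are canonicalized before insertion.)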
if (IsNumberCanonical(key)) { return AddOneByteSymbol(key, value); diff --git a/deps/v8/src/scanner.h b/deps/v8/src/scanner.h index 037da5b1739..d3a6c6b22ee 100644 --- a/deps/v8/src/scanner.h +++ b/deps/v8/src/scanner.h @@ -7,20 +7,22 @@ #ifndef V8_SCANNER_H_ #define V8_SCANNER_H_ -#include "allocation.h" -#include "char-predicates.h" -#include "checks.h" -#include "globals.h" -#include "hashmap.h" -#include "list.h" -#include "token.h" -#include "unicode-inl.h" -#include "utils.h" +#include "src/allocation.h" +#include "src/base/logging.h" +#include "src/char-predicates.h" +#include "src/globals.h" +#include "src/hashmap.h" +#include "src/list.h" +#include "src/token.h" +#include "src/unicode-inl.h" +#include "src/utils.h" namespace v8 { namespace internal { +class AstRawString; +class AstValueFactory; class ParserRecorder; @@ -209,34 +211,34 @@ class LiteralBuffer { } ConvertToTwoByte(); } - ASSERT(code_unit < 0x10000u); + DCHECK(code_unit < 0x10000u); *reinterpret_cast<uint16_t*>(&backing_store_[position_]) = code_unit; position_ += kUC16Size; } - bool is_one_byte() { return is_one_byte_; } + bool is_one_byte() const { return is_one_byte_; } - bool is_contextual_keyword(Vector<const char> keyword) { + bool is_contextual_keyword(Vector<const char> keyword) const { return is_one_byte() && keyword.length() == position_ && (memcmp(keyword.start(), backing_store_.start(), position_) == 0); } - Vector<const uint16_t> two_byte_literal() { - ASSERT(!is_one_byte_); - ASSERT((position_ & 0x1) == 0); + Vector<const uint16_t> two_byte_literal() const { + DCHECK(!is_one_byte_); + DCHECK((position_ & 0x1) == 0); return Vector<const uint16_t>( reinterpret_cast<const uint16_t*>(backing_store_.start()), position_ >> 1); } - Vector<const uint8_t> one_byte_literal() { - ASSERT(is_one_byte_); + Vector<const uint8_t> one_byte_literal() const { + DCHECK(is_one_byte_); return Vector<const uint8_t>( reinterpret_cast<const uint8_t*>(backing_store_.start()), position_); } - int length() { + int length() const { return is_one_byte_ ? 
position_ : (position_ >> 1); } @@ -245,6 +247,8 @@ class LiteralBuffer { is_one_byte_ = true; } + Handle<String> Internalize(Isolate* isolate) const; + private: static const int kInitialCapacity = 16; static const int kGrowthFactory = 4; @@ -258,13 +262,13 @@ class LiteralBuffer { void ExpandBuffer() { Vector<byte> new_store = Vector<byte>::New(NewCapacity(kInitialCapacity)); - OS::MemCopy(new_store.start(), backing_store_.start(), position_); + MemCopy(new_store.start(), backing_store_.start(), position_); backing_store_.Dispose(); backing_store_ = new_store; } void ConvertToTwoByte() { - ASSERT(is_one_byte_); + DCHECK(is_one_byte_); Vector<byte> new_store; int new_content_size = position_ * kUC16Size; if (new_content_size >= backing_store_.length()) { @@ -368,17 +372,16 @@ class Scanner { return current_.literal_chars->length() != source_length; } bool is_literal_contextual_keyword(Vector<const char> keyword) { - ASSERT_NOT_NULL(current_.literal_chars); + DCHECK_NOT_NULL(current_.literal_chars); return current_.literal_chars->is_contextual_keyword(keyword); } bool is_next_contextual_keyword(Vector<const char> keyword) { - ASSERT_NOT_NULL(next_.literal_chars); + DCHECK_NOT_NULL(next_.literal_chars); return next_.literal_chars->is_contextual_keyword(keyword); } - Handle<String> AllocateNextLiteralString(Isolate* isolate, - PretenureFlag tenured); - Handle<String> AllocateInternalizedString(Isolate* isolate); + const AstRawString* CurrentSymbol(AstValueFactory* ast_value_factory); + const AstRawString* NextSymbol(AstValueFactory* ast_value_factory); double DoubleValue(); bool UnescapedLiteralMatches(const char* data, int length) { @@ -450,6 +453,13 @@ class Scanner { // be empty). bool ScanRegExpFlags(); + const LiteralBuffer* source_url() const { return &source_url_; } + const LiteralBuffer* source_mapping_url() const { + return &source_mapping_url_; + } + + bool IdentifierIsFutureStrictReserved(const AstRawString* string) const; + private: // The current and look-ahead token. struct TokenDesc { @@ -481,7 +491,7 @@ class Scanner { } INLINE(void AddLiteralChar(uc32 c)) { - ASSERT_NOT_NULL(next_.literal_chars); + DCHECK_NOT_NULL(next_.literal_chars); next_.literal_chars->AddChar(c); } @@ -530,37 +540,37 @@ class Scanner { // These functions only give the correct result if the literal // was scanned between calls to StartLiteral() and TerminateLiteral(). Vector<const uint8_t> literal_one_byte_string() { - ASSERT_NOT_NULL(current_.literal_chars); + DCHECK_NOT_NULL(current_.literal_chars); return current_.literal_chars->one_byte_literal(); } Vector<const uint16_t> literal_two_byte_string() { - ASSERT_NOT_NULL(current_.literal_chars); + DCHECK_NOT_NULL(current_.literal_chars); return current_.literal_chars->two_byte_literal(); } bool is_literal_one_byte() { - ASSERT_NOT_NULL(current_.literal_chars); + DCHECK_NOT_NULL(current_.literal_chars); return current_.literal_chars->is_one_byte(); } int literal_length() const { - ASSERT_NOT_NULL(current_.literal_chars); + DCHECK_NOT_NULL(current_.literal_chars); return current_.literal_chars->length(); } // Returns the literal string for the next token (the token that // would be returned if Next() were called). 
Vector<const uint8_t> next_literal_one_byte_string() { - ASSERT_NOT_NULL(next_.literal_chars); + DCHECK_NOT_NULL(next_.literal_chars); return next_.literal_chars->one_byte_literal(); } Vector<const uint16_t> next_literal_two_byte_string() { - ASSERT_NOT_NULL(next_.literal_chars); + DCHECK_NOT_NULL(next_.literal_chars); return next_.literal_chars->two_byte_literal(); } bool is_next_literal_one_byte() { - ASSERT_NOT_NULL(next_.literal_chars); + DCHECK_NOT_NULL(next_.literal_chars); return next_.literal_chars->is_one_byte(); } int next_literal_length() const { - ASSERT_NOT_NULL(next_.literal_chars); + DCHECK_NOT_NULL(next_.literal_chars); return next_.literal_chars->length(); } @@ -571,6 +581,8 @@ class Scanner { bool SkipWhiteSpace(); Token::Value SkipSingleLineComment(); + Token::Value SkipSourceURLComment(); + void TryToParseSourceURLComment(); Token::Value SkipMultiLineComment(); // Scans a possible HTML comment -- begins with '<!'. Token::Value ScanHtmlComment(); @@ -605,6 +617,10 @@ class Scanner { LiteralBuffer literal_buffer1_; LiteralBuffer literal_buffer2_; + // Values parsed from magic comments. + LiteralBuffer source_url_; + LiteralBuffer source_mapping_url_; + TokenDesc current_; // desc for current token (as returned by Next()) TokenDesc next_; // desc for next token (one token look-ahead) diff --git a/deps/v8/src/scopeinfo.cc b/deps/v8/src/scopeinfo.cc index 1ed7e0b777c..6aed7252a17 100644 --- a/deps/v8/src/scopeinfo.cc +++ b/deps/v8/src/scopeinfo.cc @@ -4,10 +4,10 @@ #include <stdlib.h> -#include "v8.h" +#include "src/v8.h" -#include "scopeinfo.h" -#include "scopes.h" +#include "src/scopeinfo.h" +#include "src/scopes.h" namespace v8 { namespace internal { @@ -21,8 +21,8 @@ Handle<ScopeInfo> ScopeInfo::Create(Scope* scope, Zone* zone) { const int stack_local_count = stack_locals.length(); const int context_local_count = context_locals.length(); // Make sure we allocate the correct amount. - ASSERT(scope->StackLocalCount() == stack_local_count); - ASSERT(scope->ContextLocalCount() == context_local_count); + DCHECK(scope->StackLocalCount() == stack_local_count); + DCHECK(scope->ContextLocalCount() == context_local_count); // Determine use and location of the function variable if it is present. FunctionVariableInfo function_name_info; @@ -34,7 +34,7 @@ Handle<ScopeInfo> ScopeInfo::Create(Scope* scope, Zone* zone) { } else if (var->IsContextSlot()) { function_name_info = CONTEXT; } else { - ASSERT(var->IsStackLocal()); + DCHECK(var->IsStackLocal()); function_name_info = STACK; } function_variable_mode = var->mode(); @@ -65,7 +65,7 @@ Handle<ScopeInfo> ScopeInfo::Create(Scope* scope, Zone* zone) { int index = kVariablePartIndex; // Add parameters. - ASSERT(index == scope_info->ParameterEntriesIndex()); + DCHECK(index == scope_info->ParameterEntriesIndex()); for (int i = 0; i < parameter_count; ++i) { scope_info->set(index++, *scope->parameter(i)->name()); } @@ -73,9 +73,9 @@ Handle<ScopeInfo> ScopeInfo::Create(Scope* scope, Zone* zone) { // Add stack locals' names. We are assuming that the stack locals' // slots are allocated in increasing order, so we can simply add // them to the ScopeInfo object. 
- ASSERT(index == scope_info->StackLocalEntriesIndex()); + DCHECK(index == scope_info->StackLocalEntriesIndex()); for (int i = 0; i < stack_local_count; ++i) { - ASSERT(stack_locals[i]->index() == i); + DCHECK(stack_locals[i]->index() == i); scope_info->set(index++, *stack_locals[i]->name()); } @@ -88,37 +88,39 @@ Handle<ScopeInfo> ScopeInfo::Create(Scope* scope, Zone* zone) { context_locals.Sort(&Variable::CompareIndex); // Add context locals' names. - ASSERT(index == scope_info->ContextLocalNameEntriesIndex()); + DCHECK(index == scope_info->ContextLocalNameEntriesIndex()); for (int i = 0; i < context_local_count; ++i) { scope_info->set(index++, *context_locals[i]->name()); } // Add context locals' info. - ASSERT(index == scope_info->ContextLocalInfoEntriesIndex()); + DCHECK(index == scope_info->ContextLocalInfoEntriesIndex()); for (int i = 0; i < context_local_count; ++i) { Variable* var = context_locals[i]; - uint32_t value = ContextLocalMode::encode(var->mode()) | - ContextLocalInitFlag::encode(var->initialization_flag()); + uint32_t value = + ContextLocalMode::encode(var->mode()) | + ContextLocalInitFlag::encode(var->initialization_flag()) | + ContextLocalMaybeAssignedFlag::encode(var->maybe_assigned()); scope_info->set(index++, Smi::FromInt(value)); } // If present, add the function variable name and its index. - ASSERT(index == scope_info->FunctionNameEntryIndex()); + DCHECK(index == scope_info->FunctionNameEntryIndex()); if (has_function_name) { int var_index = scope->function()->proxy()->var()->index(); scope_info->set(index++, *scope->function()->proxy()->name()); scope_info->set(index++, Smi::FromInt(var_index)); - ASSERT(function_name_info != STACK || + DCHECK(function_name_info != STACK || (var_index == scope_info->StackLocalCount() && var_index == scope_info->StackSlotCount() - 1)); - ASSERT(function_name_info != CONTEXT || + DCHECK(function_name_info != CONTEXT || var_index == scope_info->ContextLength() - 1); } - ASSERT(index == scope_info->length()); - ASSERT(scope->num_parameters() == scope_info->ParameterCount()); - ASSERT(scope->num_stack_slots() == scope_info->StackSlotCount()); - ASSERT(scope->num_heap_slots() == scope_info->ContextLength() || + DCHECK(index == scope_info->length()); + DCHECK(scope->num_parameters() == scope_info->ParameterCount()); + DCHECK(scope->num_stack_slots() == scope_info->StackSlotCount()); + DCHECK(scope->num_heap_slots() == scope_info->ContextLength() || (scope->num_heap_slots() == kVariablePartIndex && scope_info->ContextLength() == 0)); return scope_info; @@ -131,7 +133,7 @@ ScopeInfo* ScopeInfo::Empty(Isolate* isolate) { ScopeType ScopeInfo::scope_type() { - ASSERT(length() > 0); + DCHECK(length() > 0); return ScopeTypeField::decode(Flags()); } @@ -204,21 +206,21 @@ bool ScopeInfo::HasContext() { String* ScopeInfo::FunctionName() { - ASSERT(HasFunctionName()); + DCHECK(HasFunctionName()); return String::cast(get(FunctionNameEntryIndex())); } String* ScopeInfo::ParameterName(int var) { - ASSERT(0 <= var && var < ParameterCount()); + DCHECK(0 <= var && var < ParameterCount()); int info_index = ParameterEntriesIndex() + var; return String::cast(get(info_index)); } String* ScopeInfo::LocalName(int var) { - ASSERT(0 <= var && var < LocalCount()); - ASSERT(StackLocalEntriesIndex() + StackLocalCount() == + DCHECK(0 <= var && var < LocalCount()); + DCHECK(StackLocalEntriesIndex() + StackLocalCount() == ContextLocalNameEntriesIndex()); int info_index = StackLocalEntriesIndex() + var; return String::cast(get(info_index)); @@ -226,21 +228,21 @@ 
String* ScopeInfo::LocalName(int var) { String* ScopeInfo::StackLocalName(int var) { - ASSERT(0 <= var && var < StackLocalCount()); + DCHECK(0 <= var && var < StackLocalCount()); int info_index = StackLocalEntriesIndex() + var; return String::cast(get(info_index)); } String* ScopeInfo::ContextLocalName(int var) { - ASSERT(0 <= var && var < ContextLocalCount()); + DCHECK(0 <= var && var < ContextLocalCount()); int info_index = ContextLocalNameEntriesIndex() + var; return String::cast(get(info_index)); } VariableMode ScopeInfo::ContextLocalMode(int var) { - ASSERT(0 <= var && var < ContextLocalCount()); + DCHECK(0 <= var && var < ContextLocalCount()); int info_index = ContextLocalInfoEntriesIndex() + var; int value = Smi::cast(get(info_index))->value(); return ContextLocalMode::decode(value); @@ -248,15 +250,23 @@ VariableMode ScopeInfo::ContextLocalMode(int var) { InitializationFlag ScopeInfo::ContextLocalInitFlag(int var) { - ASSERT(0 <= var && var < ContextLocalCount()); + DCHECK(0 <= var && var < ContextLocalCount()); int info_index = ContextLocalInfoEntriesIndex() + var; int value = Smi::cast(get(info_index))->value(); return ContextLocalInitFlag::decode(value); } +MaybeAssignedFlag ScopeInfo::ContextLocalMaybeAssignedFlag(int var) { + DCHECK(0 <= var && var < ContextLocalCount()); + int info_index = ContextLocalInfoEntriesIndex() + var; + int value = Smi::cast(get(info_index))->value(); + return ContextLocalMaybeAssignedFlag::decode(value); +} + + bool ScopeInfo::LocalIsSynthetic(int var) { - ASSERT(0 <= var && var < LocalCount()); + DCHECK(0 <= var && var < LocalCount()); // There's currently no flag stored on the ScopeInfo to indicate that a // variable is a compiler-introduced temporary. However, to avoid conflict // with user declarations, the current temporaries like .generator_object and @@ -267,7 +277,7 @@ bool ScopeInfo::LocalIsSynthetic(int var) { int ScopeInfo::StackSlotIndex(String* name) { - ASSERT(name->IsInternalizedString()); + DCHECK(name->IsInternalizedString()); if (length() > 0) { int start = StackLocalEntriesIndex(); int end = StackLocalEntriesIndex() + StackLocalCount(); @@ -282,19 +292,19 @@ int ScopeInfo::StackSlotIndex(String* name) { int ScopeInfo::ContextSlotIndex(Handle<ScopeInfo> scope_info, - Handle<String> name, - VariableMode* mode, - InitializationFlag* init_flag) { - ASSERT(name->IsInternalizedString()); - ASSERT(mode != NULL); - ASSERT(init_flag != NULL); + Handle<String> name, VariableMode* mode, + InitializationFlag* init_flag, + MaybeAssignedFlag* maybe_assigned_flag) { + DCHECK(name->IsInternalizedString()); + DCHECK(mode != NULL); + DCHECK(init_flag != NULL); if (scope_info->length() > 0) { ContextSlotCache* context_slot_cache = scope_info->GetIsolate()->context_slot_cache(); - int result = - context_slot_cache->Lookup(*scope_info, *name, mode, init_flag); + int result = context_slot_cache->Lookup(*scope_info, *name, mode, init_flag, + maybe_assigned_flag); if (result != ContextSlotCache::kNotFound) { - ASSERT(result < scope_info->ContextLength()); + DCHECK(result < scope_info->ContextLength()); return result; } @@ -306,22 +316,24 @@ int ScopeInfo::ContextSlotIndex(Handle<ScopeInfo> scope_info, int var = i - start; *mode = scope_info->ContextLocalMode(var); *init_flag = scope_info->ContextLocalInitFlag(var); + *maybe_assigned_flag = scope_info->ContextLocalMaybeAssignedFlag(var); result = Context::MIN_CONTEXT_SLOTS + var; - context_slot_cache->Update(scope_info, name, *mode, *init_flag, result); - ASSERT(result < scope_info->ContextLength()); + 
context_slot_cache->Update(scope_info, name, *mode, *init_flag, + *maybe_assigned_flag, result); + DCHECK(result < scope_info->ContextLength()); return result; } } - // Cache as not found. Mode and init flag don't matter. - context_slot_cache->Update( - scope_info, name, INTERNAL, kNeedsInitialization, -1); + // Cache as not found. Mode, init flag and maybe assigned flag don't matter. + context_slot_cache->Update(scope_info, name, INTERNAL, kNeedsInitialization, + kNotAssigned, -1); } return -1; } int ScopeInfo::ParameterIndex(String* name) { - ASSERT(name->IsInternalizedString()); + DCHECK(name->IsInternalizedString()); if (length() > 0) { // We must read parameters from the end since for // multiply declared parameters the value of the @@ -341,8 +353,8 @@ int ScopeInfo::ParameterIndex(String* name) { int ScopeInfo::FunctionContextSlotIndex(String* name, VariableMode* mode) { - ASSERT(name->IsInternalizedString()); - ASSERT(mode != NULL); + DCHECK(name->IsInternalizedString()); + DCHECK(mode != NULL); if (length() > 0) { if (FunctionVariableField::decode(Flags()) == CONTEXT && FunctionName() == name) { @@ -368,13 +380,11 @@ bool ScopeInfo::CopyContextLocalsToScopeObject(Handle<ScopeInfo> scope_info, int context_index = Context::MIN_CONTEXT_SLOTS + i; RETURN_ON_EXCEPTION_VALUE( isolate, - Runtime::SetObjectProperty( - isolate, + Runtime::DefineObjectProperty( scope_object, Handle<String>(String::cast(scope_info->get(i + start))), Handle<Object>(context->get(context_index), isolate), - ::NONE, - SLOPPY), + ::NONE), false); } return true; @@ -382,7 +392,7 @@ bool ScopeInfo::CopyContextLocalsToScopeObject(Handle<ScopeInfo> scope_info, int ScopeInfo::ParameterEntriesIndex() { - ASSERT(length() > 0); + DCHECK(length() > 0); return kVariablePartIndex; } @@ -415,30 +425,30 @@ int ContextSlotCache::Hash(Object* data, String* name) { } -int ContextSlotCache::Lookup(Object* data, - String* name, - VariableMode* mode, - InitializationFlag* init_flag) { +int ContextSlotCache::Lookup(Object* data, String* name, VariableMode* mode, + InitializationFlag* init_flag, + MaybeAssignedFlag* maybe_assigned_flag) { int index = Hash(data, name); Key& key = keys_[index]; if ((key.data == data) && key.name->Equals(name)) { Value result(values_[index]); if (mode != NULL) *mode = result.mode(); if (init_flag != NULL) *init_flag = result.initialization_flag(); + if (maybe_assigned_flag != NULL) + *maybe_assigned_flag = result.maybe_assigned_flag(); return result.index() + kNotFound; } return kNotFound; } -void ContextSlotCache::Update(Handle<Object> data, - Handle<String> name, - VariableMode mode, - InitializationFlag init_flag, +void ContextSlotCache::Update(Handle<Object> data, Handle<String> name, + VariableMode mode, InitializationFlag init_flag, + MaybeAssignedFlag maybe_assigned_flag, int slot_index) { DisallowHeapAllocation no_gc; Handle<String> internalized_name; - ASSERT(slot_index > kNotFound); + DCHECK(slot_index > kNotFound); if (StringTable::InternalizeStringIfExists(name->GetIsolate(), name). ToHandle(&internalized_name)) { int index = Hash(*data, *internalized_name); @@ -446,9 +456,10 @@ void ContextSlotCache::Update(Handle<Object> data, key.data = *data; key.name = *internalized_name; // Please note value only takes a uint as index. 
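    // (kNotFound is -2, so storing slot_index - kNotFound biases the cached
    // value by two and keeps even the "not found" sentinel index of -1
    // non-negative; Lookup() undoes the bias via result.index() + kNotFound.)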
- values_[index] = Value(mode, init_flag, slot_index - kNotFound).raw(); + values_[index] = Value(mode, init_flag, maybe_assigned_flag, + slot_index - kNotFound).raw(); #ifdef DEBUG - ValidateEntry(data, name, mode, init_flag, slot_index); + ValidateEntry(data, name, mode, init_flag, maybe_assigned_flag, slot_index); #endif } } @@ -461,10 +472,10 @@ void ContextSlotCache::Clear() { #ifdef DEBUG -void ContextSlotCache::ValidateEntry(Handle<Object> data, - Handle<String> name, +void ContextSlotCache::ValidateEntry(Handle<Object> data, Handle<String> name, VariableMode mode, InitializationFlag init_flag, + MaybeAssignedFlag maybe_assigned_flag, int slot_index) { DisallowHeapAllocation no_gc; Handle<String> internalized_name; @@ -472,12 +483,13 @@ void ContextSlotCache::ValidateEntry(Handle<Object> data, ToHandle(&internalized_name)) { int index = Hash(*data, *name); Key& key = keys_[index]; - ASSERT(key.data == *data); - ASSERT(key.name->Equals(*name)); + DCHECK(key.data == *data); + DCHECK(key.name->Equals(*name)); Value result(values_[index]); - ASSERT(result.mode() == mode); - ASSERT(result.initialization_flag() == init_flag); - ASSERT(result.index() + kNotFound == slot_index); + DCHECK(result.mode() == mode); + DCHECK(result.initialization_flag() == init_flag); + DCHECK(result.maybe_assigned_flag() == maybe_assigned_flag); + DCHECK(result.index() + kNotFound == slot_index); } } @@ -539,20 +551,20 @@ Handle<ModuleInfo> ModuleInfo::Create( int i = 0; for (Interface::Iterator it = interface->iterator(); !it.done(); it.Advance(), ++i) { - Variable* var = scope->LocalLookup(it.name()); - info->set_name(i, *it.name()); + Variable* var = scope->LookupLocal(it.name()); + info->set_name(i, *(it.name()->string())); info->set_mode(i, var->mode()); - ASSERT((var->mode() == MODULE) == (it.interface()->IsModule())); + DCHECK((var->mode() == MODULE) == (it.interface()->IsModule())); if (var->mode() == MODULE) { - ASSERT(it.interface()->IsFrozen()); - ASSERT(it.interface()->Index() >= 0); + DCHECK(it.interface()->IsFrozen()); + DCHECK(it.interface()->Index() >= 0); info->set_index(i, it.interface()->Index()); } else { - ASSERT(var->index() >= 0); + DCHECK(var->index() >= 0); info->set_index(i, var->index()); } } - ASSERT(i == info->length()); + DCHECK(i == info->length()); return info; } diff --git a/deps/v8/src/scopeinfo.h b/deps/v8/src/scopeinfo.h index 755b6a31013..1d9f06fde8f 100644 --- a/deps/v8/src/scopeinfo.h +++ b/deps/v8/src/scopeinfo.h @@ -5,9 +5,9 @@ #ifndef V8_SCOPEINFO_H_ #define V8_SCOPEINFO_H_ -#include "allocation.h" -#include "variables.h" -#include "zone-inl.h" +#include "src/allocation.h" +#include "src/variables.h" +#include "src/zone-inl.h" namespace v8 { namespace internal { @@ -20,17 +20,14 @@ class ContextSlotCache { public: // Lookup context slot index for (data, name). // If absent, kNotFound is returned. - int Lookup(Object* data, - String* name, - VariableMode* mode, - InitializationFlag* init_flag); + int Lookup(Object* data, String* name, VariableMode* mode, + InitializationFlag* init_flag, + MaybeAssignedFlag* maybe_assigned_flag); // Update an element in the cache. - void Update(Handle<Object> data, - Handle<String> name, - VariableMode mode, + void Update(Handle<Object> data, Handle<String> name, VariableMode mode, InitializationFlag init_flag, - int slot_index); + MaybeAssignedFlag maybe_assigned_flag, int slot_index); // Clear the cache. 
void Clear(); @@ -49,11 +46,9 @@ class ContextSlotCache { inline static int Hash(Object* data, String* name); #ifdef DEBUG - void ValidateEntry(Handle<Object> data, - Handle<String> name, - VariableMode mode, - InitializationFlag init_flag, - int slot_index); + void ValidateEntry(Handle<Object> data, Handle<String> name, + VariableMode mode, InitializationFlag init_flag, + MaybeAssignedFlag maybe_assigned_flag, int slot_index); #endif static const int kLength = 256; @@ -63,18 +58,19 @@ class ContextSlotCache { }; struct Value { - Value(VariableMode mode, - InitializationFlag init_flag, - int index) { - ASSERT(ModeField::is_valid(mode)); - ASSERT(InitField::is_valid(init_flag)); - ASSERT(IndexField::is_valid(index)); - value_ = ModeField::encode(mode) | - IndexField::encode(index) | - InitField::encode(init_flag); - ASSERT(mode == this->mode()); - ASSERT(init_flag == this->initialization_flag()); - ASSERT(index == this->index()); + Value(VariableMode mode, InitializationFlag init_flag, + MaybeAssignedFlag maybe_assigned_flag, int index) { + DCHECK(ModeField::is_valid(mode)); + DCHECK(InitField::is_valid(init_flag)); + DCHECK(MaybeAssignedField::is_valid(maybe_assigned_flag)); + DCHECK(IndexField::is_valid(index)); + value_ = ModeField::encode(mode) | IndexField::encode(index) | + InitField::encode(init_flag) | + MaybeAssignedField::encode(maybe_assigned_flag); + DCHECK(mode == this->mode()); + DCHECK(init_flag == this->initialization_flag()); + DCHECK(maybe_assigned_flag == this->maybe_assigned_flag()); + DCHECK(index == this->index()); } explicit inline Value(uint32_t value) : value_(value) {} @@ -87,13 +83,18 @@ class ContextSlotCache { return InitField::decode(value_); } + MaybeAssignedFlag maybe_assigned_flag() { + return MaybeAssignedField::decode(value_); + } + int index() { return IndexField::decode(value_); } // Bit fields in value_ (type, shift, size). Must be public so the // constants can be embedded in generated code. - class ModeField: public BitField<VariableMode, 0, 4> {}; - class InitField: public BitField<InitializationFlag, 4, 1> {}; - class IndexField: public BitField<int, 5, 32-5> {}; + class ModeField : public BitField<VariableMode, 0, 4> {}; + class InitField : public BitField<InitializationFlag, 4, 1> {}; + class MaybeAssignedField : public BitField<MaybeAssignedFlag, 5, 1> {}; + class IndexField : public BitField<int, 6, 32 - 6> {}; private: uint32_t value_; diff --git a/deps/v8/src/scopes.cc b/deps/v8/src/scopes.cc index 1818909afc5..e810d98800d 100644 --- a/deps/v8/src/scopes.cc +++ b/deps/v8/src/scopes.cc @@ -2,15 +2,15 @@ // Use of this source code is governed by a BSD-style license that can be // found in the LICENSE file. -#include "v8.h" +#include "src/v8.h" -#include "scopes.h" +#include "src/scopes.h" -#include "accessors.h" -#include "bootstrapper.h" -#include "compiler.h" -#include "messages.h" -#include "scopeinfo.h" +#include "src/accessors.h" +#include "src/bootstrapper.h" +#include "src/compiler.h" +#include "src/messages.h" +#include "src/scopeinfo.h" namespace v8 { namespace internal { @@ -24,52 +24,40 @@ namespace internal { // use. Because a Variable holding a handle with the same location exists // this is ensured. 
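The ContextSlotCache::Value repacking above is worth restating: mode keeps bits 0-3, the init flag bit 4, the new maybe-assigned flag takes bit 5, and the slot index moves from bits 5-31 to bits 6-31. A standalone sketch of the same layout using plain shifts instead of V8's BitField templates (illustrative code, not the V8 classes):

  #include <cassert>
  #include <cstdint>

  // Packs (mode, init, maybe_assigned, index) the way the new ModeField /
  // InitField / MaybeAssignedField / IndexField layout does.
  uint32_t Encode(uint32_t mode, bool init, bool maybe_assigned,
                  uint32_t index) {
    return (mode & 0xF) | (uint32_t(init) << 4) |
           (uint32_t(maybe_assigned) << 5) | (index << 6);
  }

  int main() {
    uint32_t v = Encode(3, true, true, 42);
    assert((v & 0xF) == 3);       // ModeField: bits 0-3
    assert(((v >> 4) & 1) == 1);  // InitField: bit 4
    assert(((v >> 5) & 1) == 1);  // MaybeAssignedField: bit 5 (new)
    assert((v >> 6) == 42);       // IndexField: bits 6-31 (was 5-31)
    return 0;
  }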
-static bool Match(void* key1, void* key2) { - String* name1 = *reinterpret_cast<String**>(key1); - String* name2 = *reinterpret_cast<String**>(key2); - ASSERT(name1->IsInternalizedString()); - ASSERT(name2->IsInternalizedString()); - return name1 == name2; -} - - VariableMap::VariableMap(Zone* zone) - : ZoneHashMap(Match, 8, ZoneAllocationPolicy(zone)), + : ZoneHashMap(ZoneHashMap::PointersMatch, 8, ZoneAllocationPolicy(zone)), zone_(zone) {} VariableMap::~VariableMap() {} -Variable* VariableMap::Declare( - Scope* scope, - Handle<String> name, - VariableMode mode, - bool is_valid_lhs, - Variable::Kind kind, - InitializationFlag initialization_flag, - Interface* interface) { - Entry* p = ZoneHashMap::Lookup(name.location(), name->Hash(), true, - ZoneAllocationPolicy(zone())); +Variable* VariableMap::Declare(Scope* scope, const AstRawString* name, + VariableMode mode, bool is_valid_lhs, + Variable::Kind kind, + InitializationFlag initialization_flag, + MaybeAssignedFlag maybe_assigned_flag, + Interface* interface) { + // AstRawStrings are unambiguous, i.e., the same string is always represented + // by the same AstRawString*. + // FIXME(marja): fix the type of Lookup. + Entry* p = ZoneHashMap::Lookup(const_cast<AstRawString*>(name), name->hash(), + true, ZoneAllocationPolicy(zone())); if (p->value == NULL) { // The variable has not been declared yet -> insert it. - ASSERT(p->key == name.location()); - p->value = new(zone()) Variable(scope, - name, - mode, - is_valid_lhs, - kind, - initialization_flag, - interface); + DCHECK(p->key == name); + p->value = new (zone()) + Variable(scope, name, mode, is_valid_lhs, kind, initialization_flag, + maybe_assigned_flag, interface); } return reinterpret_cast<Variable*>(p->value); } -Variable* VariableMap::Lookup(Handle<String> name) { - Entry* p = ZoneHashMap::Lookup(name.location(), name->Hash(), false, - ZoneAllocationPolicy(NULL)); +Variable* VariableMap::Lookup(const AstRawString* name) { + Entry* p = ZoneHashMap::Lookup(const_cast<AstRawString*>(name), name->hash(), + false, ZoneAllocationPolicy(NULL)); if (p != NULL) { - ASSERT(*reinterpret_cast<String**>(p->key) == *name); - ASSERT(p->value != NULL); + DCHECK(reinterpret_cast<const AstRawString*>(p->key) == name); + DCHECK(p->value != NULL); return reinterpret_cast<Variable*>(p->value); } return NULL; @@ -79,7 +67,8 @@ Variable* VariableMap::Lookup(Handle<String> name) { // ---------------------------------------------------------------------------- // Implementation of Scope -Scope::Scope(Scope* outer_scope, ScopeType scope_type, Zone* zone) +Scope::Scope(Scope* outer_scope, ScopeType scope_type, + AstValueFactory* ast_value_factory, Zone* zone) : isolate_(zone->isolate()), inner_scopes_(4, zone), variables_(zone), @@ -92,17 +81,19 @@ Scope::Scope(Scope* outer_scope, ScopeType scope_type, Zone* zone) (scope_type == MODULE_SCOPE || scope_type == GLOBAL_SCOPE) ? Interface::NewModule(zone) : NULL), already_resolved_(false), + ast_value_factory_(ast_value_factory), zone_(zone) { SetDefaults(scope_type, outer_scope, Handle<ScopeInfo>::null()); // The outermost scope must be a global scope. 
- ASSERT(scope_type == GLOBAL_SCOPE || outer_scope != NULL); - ASSERT(!HasIllegalRedeclaration()); + DCHECK(scope_type == GLOBAL_SCOPE || outer_scope != NULL); + DCHECK(!HasIllegalRedeclaration()); } Scope::Scope(Scope* inner_scope, ScopeType scope_type, Handle<ScopeInfo> scope_info, + AstValueFactory* value_factory, Zone* zone) : isolate_(zone->isolate()), inner_scopes_(4, zone), @@ -114,6 +105,7 @@ Scope::Scope(Scope* inner_scope, decls_(4, zone), interface_(NULL), already_resolved_(true), + ast_value_factory_(value_factory), zone_(zone) { SetDefaults(scope_type, NULL, scope_info); if (!scope_info.is_null()) { @@ -126,7 +118,8 @@ Scope::Scope(Scope* inner_scope, } -Scope::Scope(Scope* inner_scope, Handle<String> catch_variable_name, Zone* zone) +Scope::Scope(Scope* inner_scope, const AstRawString* catch_variable_name, + AstValueFactory* value_factory, Zone* zone) : isolate_(zone->isolate()), inner_scopes_(1, zone), variables_(zone), @@ -137,6 +130,7 @@ Scope::Scope(Scope* inner_scope, Handle<String> catch_variable_name, Zone* zone) decls_(0, zone), interface_(NULL), already_resolved_(true), + ast_value_factory_(value_factory), zone_(zone) { SetDefaults(CATCH_SCOPE, NULL, Handle<ScopeInfo>::null()); AddInnerScope(inner_scope); @@ -157,7 +151,7 @@ void Scope::SetDefaults(ScopeType scope_type, Handle<ScopeInfo> scope_info) { outer_scope_ = outer_scope; scope_type_ = scope_type; - scope_name_ = isolate_->factory()->empty_string(); + scope_name_ = ast_value_factory_->empty_string(); dynamics_ = NULL; receiver_ = NULL; function_ = NULL; @@ -199,6 +193,7 @@ Scope* Scope::DeserializeScopeChain(Context* context, Scope* global_scope, Scope* with_scope = new(zone) Scope(current_scope, WITH_SCOPE, Handle<ScopeInfo>::null(), + global_scope->ast_value_factory_, zone); current_scope = with_scope; // All the inner scopes are inside a with. 
@@ -211,30 +206,36 @@ Scope* Scope::DeserializeScopeChain(Context* context, Scope* global_scope, current_scope = new(zone) Scope(current_scope, GLOBAL_SCOPE, Handle<ScopeInfo>(scope_info), + global_scope->ast_value_factory_, zone); } else if (context->IsModuleContext()) { ScopeInfo* scope_info = ScopeInfo::cast(context->module()->scope_info()); current_scope = new(zone) Scope(current_scope, MODULE_SCOPE, Handle<ScopeInfo>(scope_info), + global_scope->ast_value_factory_, zone); } else if (context->IsFunctionContext()) { ScopeInfo* scope_info = context->closure()->shared()->scope_info(); current_scope = new(zone) Scope(current_scope, FUNCTION_SCOPE, Handle<ScopeInfo>(scope_info), + global_scope->ast_value_factory_, zone); } else if (context->IsBlockContext()) { ScopeInfo* scope_info = ScopeInfo::cast(context->extension()); current_scope = new(zone) Scope(current_scope, BLOCK_SCOPE, Handle<ScopeInfo>(scope_info), + global_scope->ast_value_factory_, zone); } else { - ASSERT(context->IsCatchContext()); + DCHECK(context->IsCatchContext()); String* name = String::cast(context->extension()); - current_scope = new(zone) Scope( - current_scope, Handle<String>(name), zone); + current_scope = new (zone) Scope( + current_scope, + global_scope->ast_value_factory_->GetString(Handle<String>(name)), + global_scope->ast_value_factory_, zone); } if (contains_with) current_scope->RecordWithStatement(); if (innermost_scope == NULL) innermost_scope = current_scope; @@ -253,7 +254,7 @@ Scope* Scope::DeserializeScopeChain(Context* context, Scope* global_scope, bool Scope::Analyze(CompilationInfo* info) { - ASSERT(info->function() != NULL); + DCHECK(info->function() != NULL); Scope* scope = info->function()->scope(); Scope* top = scope; @@ -266,7 +267,9 @@ bool Scope::Analyze(CompilationInfo* info) { // Allocate the variables. { - AstNodeFactory<AstNullVisitor> ast_node_factory(info->zone()); + // Passing NULL as AstValueFactory is ok, because AllocateVariables doesn't + // need to create new strings or values. + AstNodeFactory<AstNullVisitor> ast_node_factory(info->zone(), NULL); if (!top->AllocateVariables(info, &ast_node_factory)) return false; } @@ -289,7 +292,7 @@ bool Scope::Analyze(CompilationInfo* info) { void Scope::Initialize() { - ASSERT(!already_resolved()); + DCHECK(!already_resolved()); // Add this scope as a new inner scope of the outer scope. if (outer_scope_ != NULL) { @@ -310,7 +313,7 @@ void Scope::Initialize() { if (is_declaration_scope()) { Variable* var = variables_.Declare(this, - isolate_->factory()->this_string(), + ast_value_factory_->this_string(), VAR, false, Variable::THIS, @@ -318,7 +321,7 @@ void Scope::Initialize() { var->AllocateTo(Variable::PARAMETER, -1); receiver_ = var; } else { - ASSERT(outer_scope() != NULL); + DCHECK(outer_scope() != NULL); receiver_ = outer_scope()->receiver(); } @@ -327,7 +330,7 @@ void Scope::Initialize() { // Note that it might never be accessed, in which case it won't be // allocated during variable allocation. 
variables_.Declare(this, - isolate_->factory()->arguments_string(), + ast_value_factory_->arguments_string(), VAR, true, Variable::ARGUMENTS, @@ -337,10 +340,10 @@ void Scope::Initialize() { Scope* Scope::FinalizeBlockScope() { - ASSERT(is_block_scope()); - ASSERT(internals_.is_empty()); - ASSERT(temps_.is_empty()); - ASSERT(params_.is_empty()); + DCHECK(is_block_scope()); + DCHECK(internals_.is_empty()); + DCHECK(temps_.is_empty()); + DCHECK(params_.is_empty()); if (num_var_or_const() > 0) return this; @@ -366,45 +369,55 @@ Scope* Scope::FinalizeBlockScope() { } -Variable* Scope::LocalLookup(Handle<String> name) { +Variable* Scope::LookupLocal(const AstRawString* name) { Variable* result = variables_.Lookup(name); if (result != NULL || scope_info_.is_null()) { return result; } + // The Scope is backed up by ScopeInfo. This means it cannot operate in a + // heap-independent mode, and all strings must be internalized immediately. So + // it's ok to get the Handle<String> here. + Handle<String> name_handle = name->string(); // If we have a serialized scope info, we might find the variable there. // There should be no local slot with the given name. - ASSERT(scope_info_->StackSlotIndex(*name) < 0); + DCHECK(scope_info_->StackSlotIndex(*name_handle) < 0); // Check context slot lookup. VariableMode mode; Variable::Location location = Variable::CONTEXT; InitializationFlag init_flag; - int index = ScopeInfo::ContextSlotIndex(scope_info_, name, &mode, &init_flag); + MaybeAssignedFlag maybe_assigned_flag; + int index = ScopeInfo::ContextSlotIndex(scope_info_, name_handle, &mode, + &init_flag, &maybe_assigned_flag); if (index < 0) { // Check parameters. - index = scope_info_->ParameterIndex(*name); + index = scope_info_->ParameterIndex(*name_handle); if (index < 0) return NULL; mode = DYNAMIC; location = Variable::LOOKUP; init_flag = kCreatedInitialized; + // Be conservative and flag parameters as maybe assigned. Better information + // would require ScopeInfo to serialize the maybe_assigned bit also for + // parameters. + maybe_assigned_flag = kMaybeAssigned; } Variable* var = variables_.Declare(this, name, mode, true, Variable::NORMAL, - init_flag); + init_flag, maybe_assigned_flag); var->AllocateTo(location, index); return var; } -Variable* Scope::LookupFunctionVar(Handle<String> name, +Variable* Scope::LookupFunctionVar(const AstRawString* name, AstNodeFactory<AstNullVisitor>* factory) { - if (function_ != NULL && function_->proxy()->name().is_identical_to(name)) { + if (function_ != NULL && function_->proxy()->raw_name() == name) { return function_->proxy()->var(); } else if (!scope_info_.is_null()) { // If we are backed by a scope info, try to lookup the variable there. 
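`LookupLocal` above now has two tiers: the zone-allocated variable map first, then a fallback into the serialized `ScopeInfo`, where a hit materializes a `Variable` on the fly (conservatively flagged `kMaybeAssigned` for parameters, since the serialized format does not record that bit for them). A simplified sketch of the two-tier shape, under hypothetical types:

```cpp
#include <cassert>
#include <map>
#include <string>

// Hypothetical two-tier lookup mirroring the patched Scope::LookupLocal:
// variables parsed in this compile live in an in-memory map; variables from
// a previously serialized scope fall back to a "scope info" side table and
// are re-declared on first use.
struct Variable {
  std::string name;
  int context_slot;
};

class MiniScope {
 public:
  explicit MiniScope(std::map<std::string, int> serialized_slots)
      : serialized_slots_(std::move(serialized_slots)) {}

  Variable* LookupLocal(const std::string& name) {
    auto it = locals_.find(name);
    if (it != locals_.end()) return &it->second;
    // Fall back to the serialized scope info, if any.
    auto slot = serialized_slots_.find(name);
    if (slot == serialized_slots_.end()) return nullptr;
    // Materialize a Variable so later lookups hit the fast path.
    Variable var{name, slot->second};
    return &locals_.emplace(name, var).first->second;
  }

 private:
  std::map<std::string, Variable> locals_;
  std::map<std::string, int> serialized_slots_;  // stand-in for ScopeInfo
};

int main() {
  MiniScope scope({{"x", 2}});
  assert(scope.LookupLocal("x") != nullptr);       // found via scope info
  assert(scope.LookupLocal("x")->context_slot == 2);  // fast path now
  assert(scope.LookupLocal("y") == nullptr);
  return 0;
}
```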
VariableMode mode; - int index = scope_info_->FunctionContextSlotIndex(*name, &mode); + int index = scope_info_->FunctionContextSlotIndex(*(name->string()), &mode); if (index < 0) return NULL; Variable* var = new(zone()) Variable( this, name, mode, true /* is valid LHS */, @@ -421,43 +434,44 @@ Variable* Scope::LookupFunctionVar(Handle<String> name, } -Variable* Scope::Lookup(Handle<String> name) { +Variable* Scope::Lookup(const AstRawString* name) { for (Scope* scope = this; scope != NULL; scope = scope->outer_scope()) { - Variable* var = scope->LocalLookup(name); + Variable* var = scope->LookupLocal(name); if (var != NULL) return var; } return NULL; } -void Scope::DeclareParameter(Handle<String> name, VariableMode mode) { - ASSERT(!already_resolved()); - ASSERT(is_function_scope()); +Variable* Scope::DeclareParameter(const AstRawString* name, VariableMode mode) { + DCHECK(!already_resolved()); + DCHECK(is_function_scope()); Variable* var = variables_.Declare(this, name, mode, true, Variable::NORMAL, kCreatedInitialized); params_.Add(var, zone()); + return var; } -Variable* Scope::DeclareLocal(Handle<String> name, - VariableMode mode, +Variable* Scope::DeclareLocal(const AstRawString* name, VariableMode mode, InitializationFlag init_flag, + MaybeAssignedFlag maybe_assigned_flag, Interface* interface) { - ASSERT(!already_resolved()); + DCHECK(!already_resolved()); // This function handles VAR, LET, and CONST modes. DYNAMIC variables are // introduces during variable allocation, INTERNAL variables are allocated // explicitly, and TEMPORARY variables are allocated via NewTemporary(). - ASSERT(IsDeclaredVariableMode(mode)); + DCHECK(IsDeclaredVariableMode(mode)); ++num_var_or_const_; - return variables_.Declare( - this, name, mode, true, Variable::NORMAL, init_flag, interface); + return variables_.Declare(this, name, mode, true, Variable::NORMAL, init_flag, + maybe_assigned_flag, interface); } -Variable* Scope::DeclareDynamicGlobal(Handle<String> name) { - ASSERT(is_global_scope()); +Variable* Scope::DeclareDynamicGlobal(const AstRawString* name) { + DCHECK(is_global_scope()); return variables_.Declare(this, name, DYNAMIC_GLOBAL, @@ -479,8 +493,8 @@ void Scope::RemoveUnresolved(VariableProxy* var) { } -Variable* Scope::NewInternal(Handle<String> name) { - ASSERT(!already_resolved()); +Variable* Scope::NewInternal(const AstRawString* name) { + DCHECK(!already_resolved()); Variable* var = new(zone()) Variable(this, name, INTERNAL, @@ -492,8 +506,8 @@ Variable* Scope::NewInternal(Handle<String> name) { } -Variable* Scope::NewTemporary(Handle<String> name) { - ASSERT(!already_resolved()); +Variable* Scope::NewTemporary(const AstRawString* name) { + DCHECK(!already_resolved()); Variable* var = new(zone()) Variable(this, name, TEMPORARY, @@ -515,12 +529,12 @@ void Scope::SetIllegalRedeclaration(Expression* expression) { if (!HasIllegalRedeclaration()) { illegal_redecl_ = expression; } - ASSERT(HasIllegalRedeclaration()); + DCHECK(HasIllegalRedeclaration()); } void Scope::VisitIllegalRedeclaration(AstVisitor* visitor) { - ASSERT(HasIllegalRedeclaration()); + DCHECK(HasIllegalRedeclaration()); illegal_redecl_->Accept(visitor); } @@ -530,7 +544,7 @@ Declaration* Scope::CheckConflictingVarDeclarations() { for (int i = 0; i < length; i++) { Declaration* decl = decls_[i]; if (decl->mode() != VAR) continue; - Handle<String> name = decl->proxy()->name(); + const AstRawString* name = decl->proxy()->raw_name(); // Iterate through all scopes until and including the declaration scope. 
Scope* previous = NULL; @@ -566,14 +580,14 @@ class VarAndOrder { void Scope::CollectStackAndContextLocals(ZoneList<Variable*>* stack_locals, ZoneList<Variable*>* context_locals) { - ASSERT(stack_locals != NULL); - ASSERT(context_locals != NULL); + DCHECK(stack_locals != NULL); + DCHECK(context_locals != NULL); // Collect internals which are always allocated on the heap. for (int i = 0; i < internals_.length(); i++) { Variable* var = internals_[i]; if (var->is_used()) { - ASSERT(var->IsContextSlot()); + DCHECK(var->IsContextSlot()); context_locals->Add(var, zone()); } } @@ -584,10 +598,10 @@ void Scope::CollectStackAndContextLocals(ZoneList<Variable*>* stack_locals, Variable* var = temps_[i]; if (var->is_used()) { if (var->IsContextSlot()) { - ASSERT(has_forced_context_allocation()); + DCHECK(has_forced_context_allocation()); context_locals->Add(var, zone()); } else { - ASSERT(var->IsStackLocal()); + DCHECK(var->IsStackLocal()); stack_locals->Add(var, zone()); } } @@ -629,7 +643,7 @@ bool Scope::AllocateVariables(CompilationInfo* info, // 2) Allocate module instances. if (FLAG_harmony_modules && (is_global_scope() || is_module_scope())) { - ASSERT(num_modules_ == 0); + DCHECK(num_modules_ == 0); AllocateModulesRecursively(this); } @@ -698,11 +712,11 @@ bool Scope::AllowsLazyCompilationWithoutContext() const { int Scope::ContextChainLength(Scope* scope) { int n = 0; for (Scope* s = this; s != scope; s = s->outer_scope_) { - ASSERT(s != NULL); // scope must be in the scope chain + DCHECK(s != NULL); // scope must be in the scope chain if (s->is_with_scope() || s->num_heap_slots() > 0) n++; // Catch and module scopes always have heap slots. - ASSERT(!s->is_catch_scope() || s->num_heap_slots() > 0); - ASSERT(!s->is_module_scope() || s->num_heap_slots() > 0); + DCHECK(!s->is_catch_scope() || s->num_heap_slots() > 0); + DCHECK(!s->is_module_scope() || s->num_heap_slots() > 0); } return n; } @@ -743,7 +757,7 @@ void Scope::GetNestedScopeChain( Scope* scope = inner_scopes_[i]; int beg_pos = scope->start_position(); int end_pos = scope->end_position(); - ASSERT(beg_pos >= 0 && end_pos >= 0); + DCHECK(beg_pos >= 0 && end_pos >= 0); if (beg_pos <= position && position < end_pos) { scope->GetNestedScopeChain(chain, position); return; @@ -773,9 +787,8 @@ static void Indent(int n, const char* str) { } -static void PrintName(Handle<String> name) { - SmartArrayPointer<char> s = name->ToCString(DISALLOW_NULLS); - PrintF("%s", s.get()); +static void PrintName(const AstRawString* name) { + PrintF("%.*s", name->length(), name->raw_data()); } @@ -803,12 +816,18 @@ static void PrintVar(int indent, Variable* var) { if (var->is_used() || !var->IsUnallocated()) { Indent(indent, Variable::Mode2String(var->mode())); PrintF(" "); - PrintName(var->name()); + PrintName(var->raw_name()); PrintF("; // "); PrintLocation(var); + bool comma = !var->IsUnallocated(); if (var->has_forced_context_allocation()) { - if (!var->IsUnallocated()) PrintF(", "); + if (comma) PrintF(", "); PrintF("forced context allocation"); + comma = true; + } + if (var->maybe_assigned() == kMaybeAssigned) { + if (comma) PrintF(", "); + PrintF("maybe assigned"); } PrintF("\n"); } @@ -829,7 +848,7 @@ void Scope::Print(int n) { // Print header. 
Indent(n0, Header(scope_type_)); - if (scope_name_->length() > 0) { + if (!scope_name_->IsEmpty()) { PrintF(" "); PrintName(scope_name_); } @@ -839,7 +858,7 @@ void Scope::Print(int n) { PrintF(" ("); for (int i = 0; i < params_.length(); i++) { if (i > 0) PrintF(", "); - PrintName(params_[i]->name()); + PrintName(params_[i]->raw_name()); } PrintF(")"); } @@ -849,7 +868,7 @@ void Scope::Print(int n) { // Function name, if any (named function literals, only). if (function_ != NULL) { Indent(n1, "// (local) function name: "); - PrintName(function_->proxy()->name()); + PrintName(function_->proxy()->raw_name()); PrintF("\n"); } @@ -917,8 +936,8 @@ void Scope::Print(int n) { #endif // DEBUG -Variable* Scope::NonLocal(Handle<String> name, VariableMode mode) { - if (dynamics_ == NULL) dynamics_ = new(zone()) DynamicScopePart(zone()); +Variable* Scope::NonLocal(const AstRawString* name, VariableMode mode) { + if (dynamics_ == NULL) dynamics_ = new (zone()) DynamicScopePart(zone()); VariableMap* map = dynamics_->GetMap(mode); Variable* var = map->Lookup(name); if (var == NULL) { @@ -938,10 +957,10 @@ Variable* Scope::NonLocal(Handle<String> name, VariableMode mode) { } -Variable* Scope::LookupRecursive(Handle<String> name, +Variable* Scope::LookupRecursive(VariableProxy* proxy, BindingKind* binding_kind, AstNodeFactory<AstNullVisitor>* factory) { - ASSERT(binding_kind != NULL); + DCHECK(binding_kind != NULL); if (already_resolved() && is_with_scope()) { // Short-cut: if the scope is deserialized from a scope info, variable // allocation is already fixed. We can simply return with dynamic lookup. @@ -950,7 +969,7 @@ Variable* Scope::LookupRecursive(Handle<String> name, } // Try to find the variable in this scope. - Variable* var = LocalLookup(name); + Variable* var = LookupLocal(proxy->raw_name()); // We found a variable and we are done. (Even if there is an 'eval' in // this scope which introduces the same variable again, the resulting @@ -964,26 +983,27 @@ Variable* Scope::LookupRecursive(Handle<String> name, // if any. We can do this for all scopes, since the function variable is // only present - if at all - for function scopes. *binding_kind = UNBOUND; - var = LookupFunctionVar(name, factory); + var = LookupFunctionVar(proxy->raw_name(), factory); if (var != NULL) { *binding_kind = BOUND; } else if (outer_scope_ != NULL) { - var = outer_scope_->LookupRecursive(name, binding_kind, factory); + var = outer_scope_->LookupRecursive(proxy, binding_kind, factory); if (*binding_kind == BOUND && (is_function_scope() || is_with_scope())) { var->ForceContextAllocation(); } } else { - ASSERT(is_global_scope()); + DCHECK(is_global_scope()); } if (is_with_scope()) { - ASSERT(!already_resolved()); + DCHECK(!already_resolved()); // The current scope is a with scope, so the variable binding can not be // statically resolved. However, note that it was necessary to do a lookup // in the outer scope anyway, because if a binding exists in an outer scope, // the associated variable has to be marked as potentially being accessed // from inside of an inner with scope (the property may not be in the 'with' // object). 
+ if (var != NULL && proxy->is_assigned()) var->set_maybe_assigned(); *binding_kind = DYNAMIC_LOOKUP; return NULL; } else if (calls_sloppy_eval()) { @@ -1004,7 +1024,7 @@ Variable* Scope::LookupRecursive(Handle<String> name, bool Scope::ResolveVariable(CompilationInfo* info, VariableProxy* proxy, AstNodeFactory<AstNullVisitor>* factory) { - ASSERT(info->global_scope()->is_global_scope()); + DCHECK(info->global_scope()->is_global_scope()); // If the proxy is already resolved there's nothing to do // (functions and consts may be resolved by the parser). @@ -1012,7 +1032,7 @@ bool Scope::ResolveVariable(CompilationInfo* info, // Otherwise, try to resolve the variable. BindingKind binding_kind; - Variable* var = LookupRecursive(proxy->name(), &binding_kind, factory); + Variable* var = LookupRecursive(proxy, &binding_kind, factory); switch (binding_kind) { case BOUND: // We found a variable binding. @@ -1024,36 +1044,37 @@ bool Scope::ResolveVariable(CompilationInfo* info, // scope which was not promoted to a context, this can happen if we use // debugger to evaluate arbitrary expressions at a break point). if (var->IsGlobalObjectProperty()) { - var = NonLocal(proxy->name(), DYNAMIC_GLOBAL); + var = NonLocal(proxy->raw_name(), DYNAMIC_GLOBAL); } else if (var->is_dynamic()) { - var = NonLocal(proxy->name(), DYNAMIC); + var = NonLocal(proxy->raw_name(), DYNAMIC); } else { Variable* invalidated = var; - var = NonLocal(proxy->name(), DYNAMIC_LOCAL); + var = NonLocal(proxy->raw_name(), DYNAMIC_LOCAL); var->set_local_if_not_shadowed(invalidated); } break; case UNBOUND: // No binding has been found. Declare a variable on the global object. - var = info->global_scope()->DeclareDynamicGlobal(proxy->name()); + var = info->global_scope()->DeclareDynamicGlobal(proxy->raw_name()); break; case UNBOUND_EVAL_SHADOWED: // No binding has been found. But some scope makes a sloppy 'eval' call. - var = NonLocal(proxy->name(), DYNAMIC_GLOBAL); + var = NonLocal(proxy->raw_name(), DYNAMIC_GLOBAL); break; case DYNAMIC_LOOKUP: // The variable could not be resolved statically. - var = NonLocal(proxy->name(), DYNAMIC); + var = NonLocal(proxy->raw_name(), DYNAMIC); break; } - ASSERT(var != NULL); + DCHECK(var != NULL); + if (proxy->is_assigned()) var->set_maybe_assigned(); if (FLAG_harmony_scoping && strict_mode() == STRICT && - var->is_const_mode() && proxy->IsLValue()) { + var->is_const_mode() && proxy->is_assigned()) { // Assignment to const. Throw a syntax error. MessageLocation location( info->script(), proxy->position(), proxy->position()); @@ -1069,8 +1090,10 @@ bool Scope::ResolveVariable(CompilationInfo* info, if (FLAG_harmony_modules) { bool ok; #ifdef DEBUG - if (FLAG_print_interface_details) - PrintF("# Resolve %s:\n", var->name()->ToAsciiArray()); + if (FLAG_print_interface_details) { + PrintF("# Resolve %.*s:\n", var->raw_name()->length(), + var->raw_name()->raw_data()); + } #endif proxy->interface()->Unify(var->interface(), zone(), &ok); if (!ok) { @@ -1108,7 +1131,7 @@ bool Scope::ResolveVariable(CompilationInfo* info, bool Scope::ResolveVariablesRecursively( CompilationInfo* info, AstNodeFactory<AstNullVisitor>* factory) { - ASSERT(info->global_scope()->is_global_scope()); + DCHECK(info->global_scope()->is_global_scope()); // Resolve unresolved variables for this scope. 
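The `proxy->is_assigned()` plumbing above is the heart of the new maybe-assigned analysis: whenever a reference that resolves to a variable is an assignment target, the variable is flagged so later phases must assume its value can change. A minimal sketch of that bookkeeping, deliberately ignoring eval, with-scopes, and the dynamic-lookup cases the real code handles:

```cpp
#include <cassert>
#include <string>
#include <unordered_map>

// Sketch of the maybe_assigned bookkeeping the patch threads through
// variable resolution: resolving walks outward through the scope chain,
// and an assigning reference conservatively flags the variable it binds to.
struct Var {
  bool maybe_assigned = false;
};

struct ScopeNode {
  ScopeNode* outer = nullptr;
  std::unordered_map<std::string, Var> locals;

  Var* Resolve(const std::string& name, bool is_assigned) {
    for (ScopeNode* s = this; s != nullptr; s = s->outer) {
      auto it = s->locals.find(name);
      if (it == s->locals.end()) continue;
      if (is_assigned) it->second.maybe_assigned = true;
      return &it->second;
    }
    return nullptr;  // unbound: the real code declares a dynamic global
  }
};

int main() {
  ScopeNode outer, inner;
  inner.outer = &outer;
  outer.locals["x"];           // declare x in the outer scope
  inner.Resolve("x", false);   // plain read: flag stays clear
  assert(!outer.locals["x"].maybe_assigned);
  inner.Resolve("x", true);    // write: conservatively flag the variable
  assert(outer.locals["x"].maybe_assigned);
  return 0;
}
```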
for (int i = 0; i < unresolved_.length(); i++) { @@ -1125,7 +1148,7 @@ bool Scope::ResolveVariablesRecursively( } -bool Scope::PropagateScopeInfo(bool outer_scope_calls_sloppy_eval ) { +void Scope::PropagateScopeInfo(bool outer_scope_calls_sloppy_eval ) { if (outer_scope_calls_sloppy_eval) { outer_scope_calls_sloppy_eval_ = true; } @@ -1133,16 +1156,15 @@ bool Scope::PropagateScopeInfo(bool outer_scope_calls_sloppy_eval ) { bool calls_sloppy_eval = this->calls_sloppy_eval() || outer_scope_calls_sloppy_eval_; for (int i = 0; i < inner_scopes_.length(); i++) { - Scope* inner_scope = inner_scopes_[i]; - if (inner_scope->PropagateScopeInfo(calls_sloppy_eval)) { + Scope* inner = inner_scopes_[i]; + inner->PropagateScopeInfo(calls_sloppy_eval); + if (inner->scope_calls_eval_ || inner->inner_scope_calls_eval_) { inner_scope_calls_eval_ = true; } - if (inner_scope->force_eager_compilation_) { + if (inner->force_eager_compilation_) { force_eager_compilation_ = true; } } - - return scope_calls_eval_ || inner_scope_calls_eval_; } @@ -1150,7 +1172,7 @@ bool Scope::MustAllocate(Variable* var) { // Give var a read/write use if there is a chance it might be accessed // via an eval() call. This is only possible if the variable has a // visible name. - if ((var->is_this() || var->name()->length() > 0) && + if ((var->is_this() || !var->raw_name()->IsEmpty()) && (var->has_forced_context_allocation() || scope_calls_eval_ || inner_scope_calls_eval_ || @@ -1159,7 +1181,8 @@ bool Scope::MustAllocate(Variable* var) { is_block_scope() || is_module_scope() || is_global_scope())) { - var->set_is_used(true); + var->set_is_used(); + if (scope_calls_eval_ || inner_scope_calls_eval_) var->set_maybe_assigned(); } // Global variables do not need to be allocated. return !var->IsGlobalObjectProperty() && var->is_used(); @@ -1210,9 +1233,9 @@ void Scope::AllocateHeapSlot(Variable* var) { void Scope::AllocateParameterLocals() { - ASSERT(is_function_scope()); - Variable* arguments = LocalLookup(isolate_->factory()->arguments_string()); - ASSERT(arguments != NULL); // functions have 'arguments' declared implicitly + DCHECK(is_function_scope()); + Variable* arguments = LookupLocal(ast_value_factory_->arguments_string()); + DCHECK(arguments != NULL); // functions have 'arguments' declared implicitly bool uses_sloppy_arguments = false; @@ -1242,7 +1265,7 @@ void Scope::AllocateParameterLocals() { // order is relevant! for (int i = params_.length() - 1; i >= 0; --i) { Variable* var = params_[i]; - ASSERT(var->scope() == this); + DCHECK(var->scope() == this); if (uses_sloppy_arguments || has_forced_context_allocation()) { // Force context allocation of the parameter. 
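`PropagateScopeInfo` above also changes shape: instead of returning a bool that the caller folds in, it now returns void and the parent reads the child's `scope_calls_eval_` / `inner_scope_calls_eval_` flags directly after recursing. A self-contained sketch of the same propagation pattern (simplified: it omits the strict-mode check the real `calls_sloppy_eval()` performs):

```cpp
#include <cassert>
#include <vector>

// Sketch of the reworked PropagateScopeInfo: the sloppy-eval bit flows down
// the scope tree, and eval usage anywhere below a scope bubbles back up.
struct S {
  bool scope_calls_eval = false;
  bool outer_calls_sloppy_eval = false;
  bool inner_scope_calls_eval = false;
  std::vector<S*> inner;

  void Propagate(bool outer_sloppy) {
    if (outer_sloppy) outer_calls_sloppy_eval = true;
    bool sloppy = scope_calls_eval || outer_calls_sloppy_eval;
    for (S* child : inner) {
      child->Propagate(sloppy);
      // Read the child's flags directly instead of a boolean return value.
      if (child->scope_calls_eval || child->inner_scope_calls_eval)
        inner_scope_calls_eval = true;
    }
  }
};

int main() {
  S top, mid, leaf;
  top.inner = {&mid};
  mid.inner = {&leaf};
  leaf.scope_calls_eval = true;   // eval() somewhere deep inside
  top.Propagate(false);
  assert(mid.inner_scope_calls_eval);  // bubbled up one level...
  assert(top.inner_scope_calls_eval);  // ...and all the way to the top
  return 0;
}
```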
var->ForceContextAllocation(); @@ -1250,12 +1273,12 @@ void Scope::AllocateParameterLocals() { if (MustAllocate(var)) { if (MustAllocateInContext(var)) { - ASSERT(var->IsUnallocated() || var->IsContextSlot()); + DCHECK(var->IsUnallocated() || var->IsContextSlot()); if (var->IsUnallocated()) { AllocateHeapSlot(var); } } else { - ASSERT(var->IsUnallocated() || var->IsParameter()); + DCHECK(var->IsUnallocated() || var->IsParameter()); if (var->IsUnallocated()) { var->AllocateTo(Variable::PARAMETER, i); } @@ -1266,8 +1289,8 @@ void Scope::AllocateParameterLocals() { void Scope::AllocateNonParameterLocal(Variable* var) { - ASSERT(var->scope() == this); - ASSERT(!var->IsVariable(isolate_->factory()->dot_result_string()) || + DCHECK(var->scope() == this); + DCHECK(!var->IsVariable(isolate_->factory()->dot_result_string()) || !var->IsStackLocal()); if (var->IsUnallocated() && MustAllocate(var)) { if (MustAllocateInContext(var)) { @@ -1344,18 +1367,17 @@ void Scope::AllocateVariablesRecursively() { } // Allocation done. - ASSERT(num_heap_slots_ == 0 || num_heap_slots_ >= Context::MIN_CONTEXT_SLOTS); + DCHECK(num_heap_slots_ == 0 || num_heap_slots_ >= Context::MIN_CONTEXT_SLOTS); } void Scope::AllocateModulesRecursively(Scope* host_scope) { if (already_resolved()) return; if (is_module_scope()) { - ASSERT(interface_->IsFrozen()); - Handle<String> name = isolate_->factory()->InternalizeOneByteString( - STATIC_ASCII_VECTOR(".module")); - ASSERT(module_var_ == NULL); - module_var_ = host_scope->NewInternal(name); + DCHECK(interface_->IsFrozen()); + DCHECK(module_var_ == NULL); + module_var_ = + host_scope->NewInternal(ast_value_factory_->dot_module_string()); ++host_scope->num_modules_; } diff --git a/deps/v8/src/scopes.h b/deps/v8/src/scopes.h index 23a10f19c8d..2757bf2402a 100644 --- a/deps/v8/src/scopes.h +++ b/deps/v8/src/scopes.h @@ -5,8 +5,8 @@ #ifndef V8_SCOPES_H_ #define V8_SCOPES_H_ -#include "ast.h" -#include "zone.h" +#include "src/ast.h" +#include "src/zone.h" namespace v8 { namespace internal { @@ -21,15 +21,13 @@ class VariableMap: public ZoneHashMap { virtual ~VariableMap(); - Variable* Declare(Scope* scope, - Handle<String> name, - VariableMode mode, - bool is_valid_lhs, - Variable::Kind kind, + Variable* Declare(Scope* scope, const AstRawString* name, VariableMode mode, + bool is_valid_lhs, Variable::Kind kind, InitializationFlag initialization_flag, + MaybeAssignedFlag maybe_assigned_flag = kNotAssigned, Interface* interface = Interface::NewValue()); - Variable* Lookup(Handle<String> name); + Variable* Lookup(const AstRawString* name); Zone* zone() const { return zone_; } @@ -51,7 +49,7 @@ class DynamicScopePart : public ZoneObject { VariableMap* GetMap(VariableMode mode) { int index = mode - DYNAMIC; - ASSERT(index >= 0 && index < 3); + DCHECK(index >= 0 && index < 3); return maps_[index]; } @@ -74,7 +72,8 @@ class Scope: public ZoneObject { // --------------------------------------------------------------------------- // Construction - Scope(Scope* outer_scope, ScopeType scope_type, Zone* zone); + Scope(Scope* outer_scope, ScopeType scope_type, + AstValueFactory* value_factory, Zone* zone); // Compute top scope and allocate variables. For lazy compilation the top // scope only contains the single lazily compiled function, so this @@ -85,7 +84,9 @@ class Scope: public ZoneObject { Zone* zone); // The scope name is only used for printing/debugging. 
- void SetScopeName(Handle<String> scope_name) { scope_name_ = scope_name; } + void SetScopeName(const AstRawString* scope_name) { + scope_name_ = scope_name; + } void Initialize(); @@ -100,55 +101,55 @@ class Scope: public ZoneObject { // Declarations // Lookup a variable in this scope. Returns the variable or NULL if not found. - Variable* LocalLookup(Handle<String> name); + Variable* LookupLocal(const AstRawString* name); // This lookup corresponds to a lookup in the "intermediate" scope sitting // between this scope and the outer scope. (ECMA-262, 3rd., requires that // the name of named function literal is kept in an intermediate scope // in between this scope and the next outer scope.) - Variable* LookupFunctionVar(Handle<String> name, + Variable* LookupFunctionVar(const AstRawString* name, AstNodeFactory<AstNullVisitor>* factory); // Lookup a variable in this scope or outer scopes. // Returns the variable or NULL if not found. - Variable* Lookup(Handle<String> name); + Variable* Lookup(const AstRawString* name); // Declare the function variable for a function literal. This variable // is in an intermediate scope between this function scope and the the // outer scope. Only possible for function scopes; at most one variable. void DeclareFunctionVar(VariableDeclaration* declaration) { - ASSERT(is_function_scope()); + DCHECK(is_function_scope()); function_ = declaration; } // Declare a parameter in this scope. When there are duplicated // parameters the rightmost one 'wins'. However, the implementation // expects all parameters to be declared and from left to right. - void DeclareParameter(Handle<String> name, VariableMode mode); + Variable* DeclareParameter(const AstRawString* name, VariableMode mode); // Declare a local variable in this scope. If the variable has been // declared before, the previously declared variable is returned. - Variable* DeclareLocal(Handle<String> name, - VariableMode mode, + Variable* DeclareLocal(const AstRawString* name, VariableMode mode, InitializationFlag init_flag, + MaybeAssignedFlag maybe_assigned_flag = kNotAssigned, Interface* interface = Interface::NewValue()); // Declare an implicit global variable in this scope which must be a // global scope. The variable was introduced (possibly from an inner // scope) by a reference to an unresolved variable with no intervening // with statements or eval calls. - Variable* DeclareDynamicGlobal(Handle<String> name); + Variable* DeclareDynamicGlobal(const AstRawString* name); // Create a new unresolved variable. template<class Visitor> VariableProxy* NewUnresolved(AstNodeFactory<Visitor>* factory, - Handle<String> name, + const AstRawString* name, Interface* interface = Interface::NewValue(), int position = RelocInfo::kNoPosition) { // Note that we must not share the unresolved variables with // the same name because they may be removed selectively via // RemoveUnresolved(). - ASSERT(!already_resolved()); + DCHECK(!already_resolved()); VariableProxy* proxy = factory->NewVariableProxy(name, false, interface, position); unresolved_.Add(proxy, zone_); @@ -167,13 +168,13 @@ class Scope: public ZoneObject { // for printing and cannot be used to find the variable. In particular, // the only way to get hold of the temporary is by keeping the Variable* // around. - Variable* NewInternal(Handle<String> name); + Variable* NewInternal(const AstRawString* name); // Creates a new temporary variable in this scope. The name is only used // for printing and cannot be used to find the variable. 
In particular, // the only way to get hold of the temporary is by keeping the Variable* // around. The name should not clash with a legitimate variable names. - Variable* NewTemporary(Handle<String> name); + Variable* NewTemporary(const AstRawString* name); // Adds the specific declaration node to the list of declarations in // this scope. The declarations are processed as part of entering @@ -246,7 +247,7 @@ class Scope: public ZoneObject { // In some cases we want to force context allocation for a whole scope. void ForceContextAllocation() { - ASSERT(!already_resolved()); + DCHECK(!already_resolved()); force_context_allocation_ = true; } bool has_forced_context_allocation() const { @@ -301,14 +302,14 @@ class Scope: public ZoneObject { // The variable holding the function literal for named function // literals, or NULL. Only valid for function scopes. VariableDeclaration* function() const { - ASSERT(is_function_scope()); + DCHECK(is_function_scope()); return function_; } // Parameters. The left-most parameter has index 0. // Only valid for function scopes. Variable* parameter(int index) const { - ASSERT(is_function_scope()); + DCHECK(is_function_scope()); return params_[index]; } @@ -390,7 +391,7 @@ class Scope: public ZoneObject { // --------------------------------------------------------------------------- // Strict mode support. - bool IsDeclared(Handle<String> name) { + bool IsDeclared(const AstRawString* name) { // During formal parameter list parsing the scope only contains // two variables inserted at initialization: "this" and "arguments". // "this" is an invalid parameter name and "arguments" is invalid parameter @@ -421,7 +422,7 @@ class Scope: public ZoneObject { ScopeType scope_type_; // Debugging support. - Handle<String> scope_name_; + const AstRawString* scope_name_; // The variables declared in this scope: // @@ -497,7 +498,7 @@ class Scope: public ZoneObject { // Create a non-local variable with a given name. // These variables are looked up dynamically at runtime. - Variable* NonLocal(Handle<String> name, VariableMode mode); + Variable* NonLocal(const AstRawString* name, VariableMode mode); // Variable resolution. // Possible results of a recursive variable lookup telling if and how a @@ -548,7 +549,7 @@ class Scope: public ZoneObject { // Lookup a variable reference given by name recursively starting with this // scope. If the code is executed because of a call to 'eval', the context // parameter should be set to the calling context of 'eval'. - Variable* LookupRecursive(Handle<String> name, + Variable* LookupRecursive(VariableProxy* proxy, BindingKind* binding_kind, AstNodeFactory<AstNullVisitor>* factory); MUST_USE_RESULT @@ -560,7 +561,7 @@ class Scope: public ZoneObject { AstNodeFactory<AstNullVisitor>* factory); // Scope analysis. - bool PropagateScopeInfo(bool outer_scope_calls_sloppy_eval); + void PropagateScopeInfo(bool outer_scope_calls_sloppy_eval); bool HasTrivialContext() const; // Predicates. @@ -592,10 +593,12 @@ class Scope: public ZoneObject { private: // Construct a scope based on the scope info. Scope(Scope* inner_scope, ScopeType type, Handle<ScopeInfo> scope_info, - Zone* zone); + AstValueFactory* value_factory, Zone* zone); // Construct a catch scope with a binding for the name. 
- Scope(Scope* inner_scope, Handle<String> catch_variable_name, Zone* zone); + Scope(Scope* inner_scope, + const AstRawString* catch_variable_name, + AstValueFactory* value_factory, Zone* zone); void AddInnerScope(Scope* inner_scope) { if (inner_scope != NULL) { @@ -608,6 +611,7 @@ class Scope: public ZoneObject { Scope* outer_scope, Handle<ScopeInfo> scope_info); + AstValueFactory* ast_value_factory_; Zone* zone_; }; diff --git a/deps/v8/src/serialize.cc b/deps/v8/src/serialize.cc index 2b43c0ee69a..4b28d23fe9a 100644 --- a/deps/v8/src/serialize.cc +++ b/deps/v8/src/serialize.cc @@ -2,22 +2,25 @@ // Use of this source code is governed by a BSD-style license that can be // found in the LICENSE file. -#include "v8.h" - -#include "accessors.h" -#include "api.h" -#include "bootstrapper.h" -#include "deoptimizer.h" -#include "execution.h" -#include "global-handles.h" -#include "ic-inl.h" -#include "natives.h" -#include "platform.h" -#include "runtime.h" -#include "serialize.h" -#include "snapshot.h" -#include "stub-cache.h" -#include "v8threads.h" +#include "src/v8.h" + +#include "src/accessors.h" +#include "src/api.h" +#include "src/base/platform/platform.h" +#include "src/bootstrapper.h" +#include "src/deoptimizer.h" +#include "src/execution.h" +#include "src/global-handles.h" +#include "src/ic-inl.h" +#include "src/natives.h" +#include "src/objects.h" +#include "src/runtime.h" +#include "src/serialize.h" +#include "src/snapshot.h" +#include "src/snapshot-source-sink.h" +#include "src/stub-cache.h" +#include "src/v8threads.h" +#include "src/version.h" namespace v8 { namespace internal { @@ -91,12 +94,14 @@ void ExternalReferenceTable::Add(Address address, TypeCode type, uint16_t id, const char* name) { - ASSERT_NE(NULL, address); + DCHECK_NE(NULL, address); ExternalReferenceEntry entry; entry.address = address; entry.code = EncodeExternal(type, id); entry.name = name; - ASSERT_NE(0, entry.code); + DCHECK_NE(0, entry.code); + // Assert that the code is added in ascending order to rule out duplicates. + DCHECK((size() == 0) || (code(size() - 1) < entry.code)); refs_.Add(entry); if (id > max_id_[type]) max_id_[type] = id; } @@ -107,6 +112,144 @@ void ExternalReferenceTable::PopulateTable(Isolate* isolate) { max_id_[type_code] = 0; } + // Miscellaneous + Add(ExternalReference::roots_array_start(isolate).address(), + "Heap::roots_array_start()"); + Add(ExternalReference::address_of_stack_limit(isolate).address(), + "StackGuard::address_of_jslimit()"); + Add(ExternalReference::address_of_real_stack_limit(isolate).address(), + "StackGuard::address_of_real_jslimit()"); + Add(ExternalReference::new_space_start(isolate).address(), + "Heap::NewSpaceStart()"); + Add(ExternalReference::new_space_mask(isolate).address(), + "Heap::NewSpaceMask()"); + Add(ExternalReference::new_space_allocation_limit_address(isolate).address(), + "Heap::NewSpaceAllocationLimitAddress()"); + Add(ExternalReference::new_space_allocation_top_address(isolate).address(), + "Heap::NewSpaceAllocationTopAddress()"); + Add(ExternalReference::debug_break(isolate).address(), "Debug::Break()"); + Add(ExternalReference::debug_step_in_fp_address(isolate).address(), + "Debug::step_in_fp_addr()"); + Add(ExternalReference::mod_two_doubles_operation(isolate).address(), + "mod_two_doubles"); + // Keyed lookup cache. 
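The new `DCHECK` in `ExternalReferenceTable::Add` above leans on a simple invariant: each entry's code combines a type tag and an id, so if codes are strictly ascending at insertion time, no (type, id) pair can be registered twice. A sketch of that check, with an illustrative bit split rather than V8's actual one:

```cpp
#include <cassert>
#include <cstdint>
#include <vector>

// Sketch of the duplicate check added to ExternalReferenceTable::Add:
// requiring strictly ascending codes on insertion rules out registering
// the same (type, id) pair twice without keeping a separate lookup set.
constexpr uint32_t kIdBits = 16;  // illustrative split, not V8's

constexpr uint32_t Encode(uint16_t type, uint16_t id) {
  return (static_cast<uint32_t>(type) << kIdBits) | id;
}

class RefTable {
 public:
  void Add(uint16_t type, uint16_t id) {
    uint32_t code = Encode(type, id);
    // Ascending order implies no duplicates ever make it in.
    assert(codes_.empty() || codes_.back() < code);
    codes_.push_back(code);
  }

 private:
  std::vector<uint32_t> codes_;
};

int main() {
  RefTable table;
  table.Add(1, 0);
  table.Add(1, 1);
  table.Add(2, 0);  // a higher type tag sorts after all type-1 entries
  return 0;
}
```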
+ Add(ExternalReference::keyed_lookup_cache_keys(isolate).address(), + "KeyedLookupCache::keys()"); + Add(ExternalReference::keyed_lookup_cache_field_offsets(isolate).address(), + "KeyedLookupCache::field_offsets()"); + Add(ExternalReference::handle_scope_next_address(isolate).address(), + "HandleScope::next"); + Add(ExternalReference::handle_scope_limit_address(isolate).address(), + "HandleScope::limit"); + Add(ExternalReference::handle_scope_level_address(isolate).address(), + "HandleScope::level"); + Add(ExternalReference::new_deoptimizer_function(isolate).address(), + "Deoptimizer::New()"); + Add(ExternalReference::compute_output_frames_function(isolate).address(), + "Deoptimizer::ComputeOutputFrames()"); + Add(ExternalReference::address_of_min_int().address(), + "LDoubleConstant::min_int"); + Add(ExternalReference::address_of_one_half().address(), + "LDoubleConstant::one_half"); + Add(ExternalReference::isolate_address(isolate).address(), "isolate"); + Add(ExternalReference::address_of_negative_infinity().address(), + "LDoubleConstant::negative_infinity"); + Add(ExternalReference::power_double_double_function(isolate).address(), + "power_double_double_function"); + Add(ExternalReference::power_double_int_function(isolate).address(), + "power_double_int_function"); + Add(ExternalReference::math_log_double_function(isolate).address(), + "std::log"); + Add(ExternalReference::store_buffer_top(isolate).address(), + "store_buffer_top"); + Add(ExternalReference::address_of_canonical_non_hole_nan().address(), + "canonical_nan"); + Add(ExternalReference::address_of_the_hole_nan().address(), "the_hole_nan"); + Add(ExternalReference::get_date_field_function(isolate).address(), + "JSDate::GetField"); + Add(ExternalReference::date_cache_stamp(isolate).address(), + "date_cache_stamp"); + Add(ExternalReference::address_of_pending_message_obj(isolate).address(), + "address_of_pending_message_obj"); + Add(ExternalReference::address_of_has_pending_message(isolate).address(), + "address_of_has_pending_message"); + Add(ExternalReference::address_of_pending_message_script(isolate).address(), + "pending_message_script"); + Add(ExternalReference::get_make_code_young_function(isolate).address(), + "Code::MakeCodeYoung"); + Add(ExternalReference::cpu_features().address(), "cpu_features"); + Add(ExternalReference(Runtime::kAllocateInNewSpace, isolate).address(), + "Runtime::AllocateInNewSpace"); + Add(ExternalReference(Runtime::kAllocateInTargetSpace, isolate).address(), + "Runtime::AllocateInTargetSpace"); + Add(ExternalReference::old_pointer_space_allocation_top_address(isolate) + .address(), + "Heap::OldPointerSpaceAllocationTopAddress"); + Add(ExternalReference::old_pointer_space_allocation_limit_address(isolate) + .address(), + "Heap::OldPointerSpaceAllocationLimitAddress"); + Add(ExternalReference::old_data_space_allocation_top_address(isolate) + .address(), + "Heap::OldDataSpaceAllocationTopAddress"); + Add(ExternalReference::old_data_space_allocation_limit_address(isolate) + .address(), + "Heap::OldDataSpaceAllocationLimitAddress"); + Add(ExternalReference::allocation_sites_list_address(isolate).address(), + "Heap::allocation_sites_list_address()"); + Add(ExternalReference::address_of_uint32_bias().address(), "uint32_bias"); + Add(ExternalReference::get_mark_code_as_executed_function(isolate).address(), + "Code::MarkCodeAsExecuted"); + Add(ExternalReference::is_profiling_address(isolate).address(), + "CpuProfiler::is_profiling"); + Add(ExternalReference::scheduled_exception_address(isolate).address(), 
+ "Isolate::scheduled_exception"); + Add(ExternalReference::invoke_function_callback(isolate).address(), + "InvokeFunctionCallback"); + Add(ExternalReference::invoke_accessor_getter_callback(isolate).address(), + "InvokeAccessorGetterCallback"); + Add(ExternalReference::flush_icache_function(isolate).address(), + "CpuFeatures::FlushICache"); + Add(ExternalReference::log_enter_external_function(isolate).address(), + "Logger::EnterExternal"); + Add(ExternalReference::log_leave_external_function(isolate).address(), + "Logger::LeaveExternal"); + Add(ExternalReference::address_of_minus_one_half().address(), + "double_constants.minus_one_half"); + Add(ExternalReference::stress_deopt_count(isolate).address(), + "Isolate::stress_deopt_count_address()"); + Add(ExternalReference::incremental_marking_record_write_function(isolate) + .address(), + "IncrementalMarking::RecordWriteFromCode"); + + // Debug addresses + Add(ExternalReference::debug_after_break_target_address(isolate).address(), + "Debug::after_break_target_address()"); + Add(ExternalReference::debug_restarter_frame_function_pointer_address(isolate) + .address(), + "Debug::restarter_frame_function_pointer_address()"); + Add(ExternalReference::debug_is_active_address(isolate).address(), + "Debug::is_active_address()"); + +#ifndef V8_INTERPRETED_REGEXP + Add(ExternalReference::re_case_insensitive_compare_uc16(isolate).address(), + "NativeRegExpMacroAssembler::CaseInsensitiveCompareUC16()"); + Add(ExternalReference::re_check_stack_guard_state(isolate).address(), + "RegExpMacroAssembler*::CheckStackGuardState()"); + Add(ExternalReference::re_grow_stack(isolate).address(), + "NativeRegExpMacroAssembler::GrowStack()"); + Add(ExternalReference::re_word_character_map().address(), + "NativeRegExpMacroAssembler::word_character_map"); + Add(ExternalReference::address_of_regexp_stack_limit(isolate).address(), + "RegExpStack::limit_address()"); + Add(ExternalReference::address_of_regexp_stack_memory_address(isolate) + .address(), + "RegExpStack::memory_address()"); + Add(ExternalReference::address_of_regexp_stack_memory_size(isolate).address(), + "RegExpStack::memory_size()"); + Add(ExternalReference::address_of_static_offsets_vector(isolate).address(), + "OffsetsVector::static_offsets_vector"); +#endif // V8_INTERPRETED_REGEXP + // The following populates all of the different type of external references // into the ExternalReferenceTable. 
// @@ -150,16 +293,9 @@ void ExternalReferenceTable::PopulateTable(Isolate* isolate) { "Runtime::" #name }, RUNTIME_FUNCTION_LIST(RUNTIME_ENTRY) + INLINE_OPTIMIZED_FUNCTION_LIST(RUNTIME_ENTRY) #undef RUNTIME_ENTRY -#define RUNTIME_HIDDEN_ENTRY(name, nargs, ressize) \ - { RUNTIME_FUNCTION, \ - Runtime::kHidden##name, \ - "Runtime::Hidden" #name }, - - RUNTIME_HIDDEN_FUNCTION_LIST(RUNTIME_HIDDEN_ENTRY) -#undef RUNTIME_HIDDEN_ENTRY - #define INLINE_OPTIMIZED_ENTRY(name, nargs, ressize) \ { RUNTIME_FUNCTION, \ Runtime::kInlineOptimized##name, \ @@ -185,16 +321,6 @@ void ExternalReferenceTable::PopulateTable(Isolate* isolate) { isolate); } - // Debug addresses - Add(Debug_Address(Debug::k_after_break_target_address).address(isolate), - DEBUG_ADDRESS, - Debug::k_after_break_target_address << kDebugIdShift, - "Debug::after_break_target_address()"); - Add(Debug_Address(Debug::k_restarter_frame_function_pointer).address(isolate), - DEBUG_ADDRESS, - Debug::k_restarter_frame_function_pointer << kDebugIdShift, - "Debug::restarter_frame_function_pointer_address()"); - // Stat counters struct StatsRefTableEntry { StatsCounter* (Counters::*counter)(); @@ -254,286 +380,26 @@ void ExternalReferenceTable::PopulateTable(Isolate* isolate) { // Stub cache tables Add(stub_cache->key_reference(StubCache::kPrimary).address(), - STUB_CACHE_TABLE, - 1, - "StubCache::primary_->key"); + STUB_CACHE_TABLE, 1, "StubCache::primary_->key"); Add(stub_cache->value_reference(StubCache::kPrimary).address(), - STUB_CACHE_TABLE, - 2, - "StubCache::primary_->value"); + STUB_CACHE_TABLE, 2, "StubCache::primary_->value"); Add(stub_cache->map_reference(StubCache::kPrimary).address(), - STUB_CACHE_TABLE, - 3, - "StubCache::primary_->map"); + STUB_CACHE_TABLE, 3, "StubCache::primary_->map"); Add(stub_cache->key_reference(StubCache::kSecondary).address(), - STUB_CACHE_TABLE, - 4, - "StubCache::secondary_->key"); + STUB_CACHE_TABLE, 4, "StubCache::secondary_->key"); Add(stub_cache->value_reference(StubCache::kSecondary).address(), - STUB_CACHE_TABLE, - 5, - "StubCache::secondary_->value"); + STUB_CACHE_TABLE, 5, "StubCache::secondary_->value"); Add(stub_cache->map_reference(StubCache::kSecondary).address(), - STUB_CACHE_TABLE, - 6, - "StubCache::secondary_->map"); + STUB_CACHE_TABLE, 6, "StubCache::secondary_->map"); // Runtime entries Add(ExternalReference::delete_handle_scope_extensions(isolate).address(), - RUNTIME_ENTRY, - 4, - "HandleScope::DeleteExtensions"); - Add(ExternalReference:: - incremental_marking_record_write_function(isolate).address(), - RUNTIME_ENTRY, - 5, - "IncrementalMarking::RecordWrite"); + RUNTIME_ENTRY, 1, "HandleScope::DeleteExtensions"); + Add(ExternalReference::incremental_marking_record_write_function(isolate) + .address(), + RUNTIME_ENTRY, 2, "IncrementalMarking::RecordWrite"); Add(ExternalReference::store_buffer_overflow_function(isolate).address(), - RUNTIME_ENTRY, - 6, - "StoreBuffer::StoreBufferOverflow"); - - // Miscellaneous - Add(ExternalReference::roots_array_start(isolate).address(), - UNCLASSIFIED, - 3, - "Heap::roots_array_start()"); - Add(ExternalReference::address_of_stack_limit(isolate).address(), - UNCLASSIFIED, - 4, - "StackGuard::address_of_jslimit()"); - Add(ExternalReference::address_of_real_stack_limit(isolate).address(), - UNCLASSIFIED, - 5, - "StackGuard::address_of_real_jslimit()"); -#ifndef V8_INTERPRETED_REGEXP - Add(ExternalReference::address_of_regexp_stack_limit(isolate).address(), - UNCLASSIFIED, - 6, - "RegExpStack::limit_address()"); - 
Add(ExternalReference::address_of_regexp_stack_memory_address( - isolate).address(), - UNCLASSIFIED, - 7, - "RegExpStack::memory_address()"); - Add(ExternalReference::address_of_regexp_stack_memory_size(isolate).address(), - UNCLASSIFIED, - 8, - "RegExpStack::memory_size()"); - Add(ExternalReference::address_of_static_offsets_vector(isolate).address(), - UNCLASSIFIED, - 9, - "OffsetsVector::static_offsets_vector"); -#endif // V8_INTERPRETED_REGEXP - Add(ExternalReference::new_space_start(isolate).address(), - UNCLASSIFIED, - 10, - "Heap::NewSpaceStart()"); - Add(ExternalReference::new_space_mask(isolate).address(), - UNCLASSIFIED, - 11, - "Heap::NewSpaceMask()"); - Add(ExternalReference::new_space_allocation_limit_address(isolate).address(), - UNCLASSIFIED, - 14, - "Heap::NewSpaceAllocationLimitAddress()"); - Add(ExternalReference::new_space_allocation_top_address(isolate).address(), - UNCLASSIFIED, - 15, - "Heap::NewSpaceAllocationTopAddress()"); - Add(ExternalReference::debug_break(isolate).address(), - UNCLASSIFIED, - 16, - "Debug::Break()"); - Add(ExternalReference::debug_step_in_fp_address(isolate).address(), - UNCLASSIFIED, - 17, - "Debug::step_in_fp_addr()"); - Add(ExternalReference::mod_two_doubles_operation(isolate).address(), - UNCLASSIFIED, - 22, - "mod_two_doubles"); -#ifndef V8_INTERPRETED_REGEXP - Add(ExternalReference::re_case_insensitive_compare_uc16(isolate).address(), - UNCLASSIFIED, - 24, - "NativeRegExpMacroAssembler::CaseInsensitiveCompareUC16()"); - Add(ExternalReference::re_check_stack_guard_state(isolate).address(), - UNCLASSIFIED, - 25, - "RegExpMacroAssembler*::CheckStackGuardState()"); - Add(ExternalReference::re_grow_stack(isolate).address(), - UNCLASSIFIED, - 26, - "NativeRegExpMacroAssembler::GrowStack()"); - Add(ExternalReference::re_word_character_map().address(), - UNCLASSIFIED, - 27, - "NativeRegExpMacroAssembler::word_character_map"); -#endif // V8_INTERPRETED_REGEXP - // Keyed lookup cache. 
- Add(ExternalReference::keyed_lookup_cache_keys(isolate).address(), - UNCLASSIFIED, - 28, - "KeyedLookupCache::keys()"); - Add(ExternalReference::keyed_lookup_cache_field_offsets(isolate).address(), - UNCLASSIFIED, - 29, - "KeyedLookupCache::field_offsets()"); - Add(ExternalReference::handle_scope_next_address(isolate).address(), - UNCLASSIFIED, - 31, - "HandleScope::next"); - Add(ExternalReference::handle_scope_limit_address(isolate).address(), - UNCLASSIFIED, - 32, - "HandleScope::limit"); - Add(ExternalReference::handle_scope_level_address(isolate).address(), - UNCLASSIFIED, - 33, - "HandleScope::level"); - Add(ExternalReference::new_deoptimizer_function(isolate).address(), - UNCLASSIFIED, - 34, - "Deoptimizer::New()"); - Add(ExternalReference::compute_output_frames_function(isolate).address(), - UNCLASSIFIED, - 35, - "Deoptimizer::ComputeOutputFrames()"); - Add(ExternalReference::address_of_min_int().address(), - UNCLASSIFIED, - 36, - "LDoubleConstant::min_int"); - Add(ExternalReference::address_of_one_half().address(), - UNCLASSIFIED, - 37, - "LDoubleConstant::one_half"); - Add(ExternalReference::isolate_address(isolate).address(), - UNCLASSIFIED, - 38, - "isolate"); - Add(ExternalReference::address_of_minus_zero().address(), - UNCLASSIFIED, - 39, - "LDoubleConstant::minus_zero"); - Add(ExternalReference::address_of_negative_infinity().address(), - UNCLASSIFIED, - 40, - "LDoubleConstant::negative_infinity"); - Add(ExternalReference::power_double_double_function(isolate).address(), - UNCLASSIFIED, - 41, - "power_double_double_function"); - Add(ExternalReference::power_double_int_function(isolate).address(), - UNCLASSIFIED, - 42, - "power_double_int_function"); - Add(ExternalReference::store_buffer_top(isolate).address(), - UNCLASSIFIED, - 43, - "store_buffer_top"); - Add(ExternalReference::address_of_canonical_non_hole_nan().address(), - UNCLASSIFIED, - 44, - "canonical_nan"); - Add(ExternalReference::address_of_the_hole_nan().address(), - UNCLASSIFIED, - 45, - "the_hole_nan"); - Add(ExternalReference::get_date_field_function(isolate).address(), - UNCLASSIFIED, - 46, - "JSDate::GetField"); - Add(ExternalReference::date_cache_stamp(isolate).address(), - UNCLASSIFIED, - 47, - "date_cache_stamp"); - Add(ExternalReference::address_of_pending_message_obj(isolate).address(), - UNCLASSIFIED, - 48, - "address_of_pending_message_obj"); - Add(ExternalReference::address_of_has_pending_message(isolate).address(), - UNCLASSIFIED, - 49, - "address_of_has_pending_message"); - Add(ExternalReference::address_of_pending_message_script(isolate).address(), - UNCLASSIFIED, - 50, - "pending_message_script"); - Add(ExternalReference::get_make_code_young_function(isolate).address(), - UNCLASSIFIED, - 51, - "Code::MakeCodeYoung"); - Add(ExternalReference::cpu_features().address(), - UNCLASSIFIED, - 52, - "cpu_features"); - Add(ExternalReference(Runtime::kHiddenAllocateInNewSpace, isolate).address(), - UNCLASSIFIED, - 53, - "Runtime::AllocateInNewSpace"); - Add(ExternalReference( - Runtime::kHiddenAllocateInTargetSpace, isolate).address(), - UNCLASSIFIED, - 54, - "Runtime::AllocateInTargetSpace"); - Add(ExternalReference::old_pointer_space_allocation_top_address( - isolate).address(), - UNCLASSIFIED, - 55, - "Heap::OldPointerSpaceAllocationTopAddress"); - Add(ExternalReference::old_pointer_space_allocation_limit_address( - isolate).address(), - UNCLASSIFIED, - 56, - "Heap::OldPointerSpaceAllocationLimitAddress"); - Add(ExternalReference::old_data_space_allocation_top_address( - isolate).address(), - 
UNCLASSIFIED, - 57, - "Heap::OldDataSpaceAllocationTopAddress"); - Add(ExternalReference::old_data_space_allocation_limit_address( - isolate).address(), - UNCLASSIFIED, - 58, - "Heap::OldDataSpaceAllocationLimitAddress"); - Add(ExternalReference::new_space_high_promotion_mode_active_address(isolate). - address(), - UNCLASSIFIED, - 59, - "Heap::NewSpaceAllocationLimitAddress"); - Add(ExternalReference::allocation_sites_list_address(isolate).address(), - UNCLASSIFIED, - 60, - "Heap::allocation_sites_list_address()"); - Add(ExternalReference::address_of_uint32_bias().address(), - UNCLASSIFIED, - 61, - "uint32_bias"); - Add(ExternalReference::get_mark_code_as_executed_function(isolate).address(), - UNCLASSIFIED, - 62, - "Code::MarkCodeAsExecuted"); - - Add(ExternalReference::is_profiling_address(isolate).address(), - UNCLASSIFIED, - 63, - "CpuProfiler::is_profiling"); - - Add(ExternalReference::scheduled_exception_address(isolate).address(), - UNCLASSIFIED, - 64, - "Isolate::scheduled_exception"); - - Add(ExternalReference::invoke_function_callback(isolate).address(), - UNCLASSIFIED, - 65, - "InvokeFunctionCallback"); - - Add(ExternalReference::invoke_accessor_getter_callback(isolate).address(), - UNCLASSIFIED, - 66, - "InvokeAccessorGetterCallback"); + RUNTIME_ENTRY, 3, "StoreBuffer::StoreBufferOverflow"); // Add a small set of deopt entry addresses to encoder without generating the // deopt table code, which isn't possible at deserialization time. @@ -562,16 +428,16 @@ ExternalReferenceEncoder::ExternalReferenceEncoder(Isolate* isolate) uint32_t ExternalReferenceEncoder::Encode(Address key) const { int index = IndexOf(key); - ASSERT(key == NULL || index >= 0); - return index >=0 ? + DCHECK(key == NULL || index >= 0); + return index >= 0 ? ExternalReferenceTable::instance(isolate_)->code(index) : 0; } const char* ExternalReferenceEncoder::NameOfAddress(Address key) const { int index = IndexOf(key); - return index >= 0 ? - ExternalReferenceTable::instance(isolate_)->name(index) : NULL; + return index >= 0 ? ExternalReferenceTable::instance(isolate_)->name(index) + : "<unknown>"; } @@ -613,7 +479,6 @@ ExternalReferenceDecoder::~ExternalReferenceDecoder() { DeleteArray(encodings_); } -AtomicWord Serializer::serialization_state_ = SERIALIZER_STATE_UNINITIALIZED; class CodeAddressMap: public CodeEventLogger { public: @@ -630,6 +495,9 @@ class CodeAddressMap: public CodeEventLogger { address_to_name_map_.Move(from, to); } + virtual void CodeDisableOptEvent(Code* code, SharedFunctionInfo* shared) { + } + virtual void CodeDeleteEvent(Address from) { address_to_name_map_.Remove(from); } @@ -672,11 +540,11 @@ class CodeAddressMap: public CodeEventLogger { void Move(Address from, Address to) { if (from == to) return; HashMap::Entry* from_entry = FindEntry(from); - ASSERT(from_entry != NULL); + DCHECK(from_entry != NULL); void* value = from_entry->value; RemoveEntry(from_entry); HashMap::Entry* to_entry = FindOrCreateEntry(to); - ASSERT(to_entry->value == NULL); + DCHECK(to_entry->value == NULL); to_entry->value = value; } @@ -723,50 +591,9 @@ class CodeAddressMap: public CodeEventLogger { }; -CodeAddressMap* Serializer::code_address_map_ = NULL; - - -void Serializer::RequestEnable(Isolate* isolate) { - isolate->InitializeLoggingAndCounters(); - code_address_map_ = new CodeAddressMap(isolate); -} - - -void Serializer::InitializeOncePerProcess() { - // InitializeOncePerProcess is called by V8::InitializeOncePerProcess, a - // method guaranteed to be called only once in a process lifetime. 
- // serialization_state_ is read by many threads, hence the use of - // Atomic primitives. Here, we don't need a barrier or mutex to - // write it because V8 initialization is done by one thread, and gates - // all reads of serialization_state_. - ASSERT(NoBarrier_Load(&serialization_state_) == - SERIALIZER_STATE_UNINITIALIZED); - SerializationState state = code_address_map_ - ? SERIALIZER_STATE_ENABLED - : SERIALIZER_STATE_DISABLED; - NoBarrier_Store(&serialization_state_, state); -} - - -void Serializer::TearDown() { - // TearDown is called by V8::TearDown() for the default isolate. It's safe - // to shut down the serializer by that point. Just to be safe, we restore - // serialization_state_ to uninitialized. - ASSERT(NoBarrier_Load(&serialization_state_) != - SERIALIZER_STATE_UNINITIALIZED); - if (code_address_map_) { - ASSERT(NoBarrier_Load(&serialization_state_) == - SERIALIZER_STATE_ENABLED); - delete code_address_map_; - code_address_map_ = NULL; - } - - NoBarrier_Store(&serialization_state_, SERIALIZER_STATE_UNINITIALIZED); -} - - Deserializer::Deserializer(SnapshotByteSource* source) : isolate_(NULL), + attached_objects_(NULL), source_(source), external_reference_decoder_(NULL) { for (int i = 0; i < LAST_SPACE + 1; i++) { @@ -779,20 +606,20 @@ void Deserializer::FlushICacheForNewCodeObjects() { PageIterator it(isolate_->heap()->code_space()); while (it.has_next()) { Page* p = it.next(); - CPU::FlushICache(p->area_start(), p->area_end() - p->area_start()); + CpuFeatures::FlushICache(p->area_start(), p->area_end() - p->area_start()); } } void Deserializer::Deserialize(Isolate* isolate) { isolate_ = isolate; - ASSERT(isolate_ != NULL); + DCHECK(isolate_ != NULL); isolate_->heap()->ReserveSpace(reservations_, &high_water_[0]); // No active threads. - ASSERT_EQ(NULL, isolate_->thread_manager()->FirstThreadStateInUse()); + DCHECK_EQ(NULL, isolate_->thread_manager()->FirstThreadStateInUse()); // No active handles. - ASSERT(isolate_->handle_scope_implementer()->blocks()->is_empty()); - ASSERT_EQ(NULL, external_reference_decoder_); + DCHECK(isolate_->handle_scope_implementer()->blocks()->is_empty()); + DCHECK_EQ(NULL, external_reference_decoder_); external_reference_decoder_ = new ExternalReferenceDecoder(isolate); isolate_->heap()->IterateSmiRoots(this); isolate_->heap()->IterateStrongRoots(this, VISIT_ONLY_STRONG); @@ -832,13 +659,15 @@ void Deserializer::Deserialize(Isolate* isolate) { void Deserializer::DeserializePartial(Isolate* isolate, Object** root) { isolate_ = isolate; for (int i = NEW_SPACE; i < kNumberOfSpaces; i++) { - ASSERT(reservations_[i] != kUninitializedReservation); + DCHECK(reservations_[i] != kUninitializedReservation); } isolate_->heap()->ReserveSpace(reservations_, &high_water_[0]); if (external_reference_decoder_ == NULL) { external_reference_decoder_ = new ExternalReferenceDecoder(isolate); } + DisallowHeapAllocation no_gc; + // Keep track of the code space start and end pointers in case new // code objects were unserialized OldSpace* code_space = isolate_->heap()->code_space(); @@ -854,11 +683,12 @@ void Deserializer::DeserializePartial(Isolate* isolate, Object** root) { Deserializer::~Deserializer() { // TODO(svenpanne) Re-enable this assertion when v8 initialization is fixed. 
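One small addition in the hunk above is the `DisallowHeapAllocation no_gc;` guard in `DeserializePartial`. The pattern is a scoped counter that the allocator asserts against; a simplified, single-threaded stand-in (V8's real guard is per-isolate and effective only in debug builds) might look like:

```cpp
#include <cassert>
#include <cstddef>

// Sketch of a scoped "no allocation" guard in the spirit of
// DisallowHeapAllocation: a depth counter is bumped for the guard's
// lifetime and the allocator asserts it is zero.
namespace {
int g_no_alloc_depth = 0;

struct DisallowAlloc {
  DisallowAlloc() { ++g_no_alloc_depth; }
  ~DisallowAlloc() { --g_no_alloc_depth; }
};

void* Allocate(std::size_t /*bytes*/) {
  assert(g_no_alloc_depth == 0 && "allocation inside a no-alloc scope");
  return nullptr;  // a real allocator would return fresh memory here
}
}  // namespace

int main() {
  Allocate(8);  // fine: no guard active
  {
    DisallowAlloc no_gc;
    // Allocate(8) here would trip the assert in a debug build.
  }
  Allocate(8);  // fine again once the guard is gone
  return 0;
}
```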
- // ASSERT(source_->AtEOF()); + // DCHECK(source_->AtEOF()); if (external_reference_decoder_) { delete external_reference_decoder_; external_reference_decoder_ = NULL; } + if (attached_objects_) attached_objects_->Dispose(); } @@ -881,6 +711,64 @@ void Deserializer::RelinkAllocationSite(AllocationSite* site) { } +// Used to insert a deserialized internalized string into the string table. +class StringTableInsertionKey : public HashTableKey { + public: + explicit StringTableInsertionKey(String* string) + : string_(string), hash_(HashForObject(string)) { + DCHECK(string->IsInternalizedString()); + } + + virtual bool IsMatch(Object* string) { + // We know that all entries in a hash table had their hash keys created. + // Use that knowledge to have fast failure. + if (hash_ != HashForObject(string)) return false; + // We want to compare the content of two internalized strings here. + return string_->SlowEquals(String::cast(string)); + } + + virtual uint32_t Hash() V8_OVERRIDE { return hash_; } + + virtual uint32_t HashForObject(Object* key) V8_OVERRIDE { + return String::cast(key)->Hash(); + } + + MUST_USE_RESULT virtual Handle<Object> AsHandle(Isolate* isolate) + V8_OVERRIDE { + return handle(string_, isolate); + } + + String* string_; + uint32_t hash_; +}; + + +HeapObject* Deserializer::ProcessNewObjectFromSerializedCode(HeapObject* obj) { + if (obj->IsString()) { + String* string = String::cast(obj); + // Uninitialize hash field as the hash seed may have changed. + string->set_hash_field(String::kEmptyHashField); + if (string->IsInternalizedString()) { + DisallowHeapAllocation no_gc; + HandleScope scope(isolate_); + StringTableInsertionKey key(string); + String* canonical = *StringTable::LookupKey(isolate_, &key); + string->SetForwardedInternalizedString(canonical); + return canonical; + } + } + return obj; +} + + +Object* Deserializer::ProcessBackRefInSerializedCode(Object* obj) { + if (obj->IsInternalizedString()) { + return String::cast(obj)->GetForwardedInternalizedString(); + } + return obj; +} + + // This routine writes the new object into the pointer provided and then // returns true if the new object was in young space and false otherwise. // The reason for this strange interface is that otherwise the object is @@ -891,7 +779,7 @@ void Deserializer::ReadObject(int space_number, int size = source_->GetInt() << kObjectAlignmentBits; Address address = Allocate(space_number, size); HeapObject* obj = HeapObject::FromAddress(address); - *write_back = obj; + isolate_->heap()->OnAllocationEvent(obj, size); Object** current = reinterpret_cast<Object**>(address); Object** limit = current + (size >> kPointerSizeLog2); if (FLAG_log_snapshot_positions) { @@ -902,13 +790,15 @@ void Deserializer::ReadObject(int space_number, // TODO(mvstanton): consider treating the heap()->allocation_sites_list() // as a (weak) root. If this root is relocated correctly, // RelinkAllocationSite() isn't necessary. - if (obj->IsAllocationSite()) { - RelinkAllocationSite(AllocationSite::cast(obj)); - } + if (obj->IsAllocationSite()) RelinkAllocationSite(AllocationSite::cast(obj)); + + // Fix up strings from serialized user code. 
+ if (deserializing_user_code()) obj = ProcessNewObjectFromSerializedCode(obj); + *write_back = obj; #ifdef DEBUG bool is_codespace = (space_number == CODE_SPACE); - ASSERT(obj->IsCode() == is_codespace); + DCHECK(obj->IsCode() == is_codespace); #endif } @@ -929,91 +819,107 @@ void Deserializer::ReadChunk(Object** current, while (current < limit) { int data = source_->Get(); switch (data) { -#define CASE_STATEMENT(where, how, within, space_number) \ - case where + how + within + space_number: \ - ASSERT((where & ~kPointedToMask) == 0); \ - ASSERT((how & ~kHowToCodeMask) == 0); \ - ASSERT((within & ~kWhereToPointMask) == 0); \ - ASSERT((space_number & ~kSpaceMask) == 0); +#define CASE_STATEMENT(where, how, within, space_number) \ + case where + how + within + space_number: \ + STATIC_ASSERT((where & ~kPointedToMask) == 0); \ + STATIC_ASSERT((how & ~kHowToCodeMask) == 0); \ + STATIC_ASSERT((within & ~kWhereToPointMask) == 0); \ + STATIC_ASSERT((space_number & ~kSpaceMask) == 0); #define CASE_BODY(where, how, within, space_number_if_any) \ - { \ - bool emit_write_barrier = false; \ - bool current_was_incremented = false; \ - int space_number = space_number_if_any == kAnyOldSpace ? \ - (data & kSpaceMask) : space_number_if_any; \ - if (where == kNewObject && how == kPlain && within == kStartOfObject) {\ - ReadObject(space_number, current); \ - emit_write_barrier = (space_number == NEW_SPACE); \ - } else { \ - Object* new_object = NULL; /* May not be a real Object pointer. */ \ - if (where == kNewObject) { \ - ReadObject(space_number, &new_object); \ - } else if (where == kRootArray) { \ - int root_id = source_->GetInt(); \ - new_object = isolate->heap()->roots_array_start()[root_id]; \ - emit_write_barrier = isolate->heap()->InNewSpace(new_object); \ - } else if (where == kPartialSnapshotCache) { \ - int cache_index = source_->GetInt(); \ - new_object = isolate->serialize_partial_snapshot_cache() \ - [cache_index]; \ - emit_write_barrier = isolate->heap()->InNewSpace(new_object); \ - } else if (where == kExternalReference) { \ - int skip = source_->GetInt(); \ - current = reinterpret_cast<Object**>(reinterpret_cast<Address>( \ - current) + skip); \ - int reference_id = source_->GetInt(); \ - Address address = external_reference_decoder_-> \ - Decode(reference_id); \ - new_object = reinterpret_cast<Object*>(address); \ - } else if (where == kBackref) { \ - emit_write_barrier = (space_number == NEW_SPACE); \ - new_object = GetAddressFromEnd(data & kSpaceMask); \ - } else { \ - ASSERT(where == kBackrefWithSkip); \ - int skip = source_->GetInt(); \ - current = reinterpret_cast<Object**>( \ - reinterpret_cast<Address>(current) + skip); \ - emit_write_barrier = (space_number == NEW_SPACE); \ - new_object = GetAddressFromEnd(data & kSpaceMask); \ - } \ - if (within == kInnerPointer) { \ - if (space_number != CODE_SPACE || new_object->IsCode()) { \ - Code* new_code_object = reinterpret_cast<Code*>(new_object); \ - new_object = reinterpret_cast<Object*>( \ - new_code_object->instruction_start()); \ - } else { \ - ASSERT(space_number == CODE_SPACE); \ - Cell* cell = Cell::cast(new_object); \ - new_object = reinterpret_cast<Object*>( \ - cell->ValueAddress()); \ - } \ - } \ - if (how == kFromCode) { \ - Address location_of_branch_data = \ - reinterpret_cast<Address>(current); \ - Assembler::deserialization_set_special_target_at( \ - location_of_branch_data, \ - Code::cast(HeapObject::FromAddress(current_object_address)), \ - reinterpret_cast<Address>(new_object)); \ - location_of_branch_data += 
Assembler::kSpecialTargetSize; \ - current = reinterpret_cast<Object**>(location_of_branch_data); \ - current_was_incremented = true; \ - } else { \ - *current = new_object; \ - } \ + { \ + bool emit_write_barrier = false; \ + bool current_was_incremented = false; \ + int space_number = space_number_if_any == kAnyOldSpace \ + ? (data & kSpaceMask) \ + : space_number_if_any; \ + if (where == kNewObject && how == kPlain && within == kStartOfObject) { \ + ReadObject(space_number, current); \ + emit_write_barrier = (space_number == NEW_SPACE); \ + } else { \ + Object* new_object = NULL; /* May not be a real Object pointer. */ \ + if (where == kNewObject) { \ + ReadObject(space_number, &new_object); \ + } else if (where == kRootArray) { \ + int root_id = source_->GetInt(); \ + new_object = isolate->heap()->roots_array_start()[root_id]; \ + emit_write_barrier = isolate->heap()->InNewSpace(new_object); \ + } else if (where == kPartialSnapshotCache) { \ + int cache_index = source_->GetInt(); \ + new_object = isolate->serialize_partial_snapshot_cache()[cache_index]; \ + emit_write_barrier = isolate->heap()->InNewSpace(new_object); \ + } else if (where == kExternalReference) { \ + int skip = source_->GetInt(); \ + current = reinterpret_cast<Object**>( \ + reinterpret_cast<Address>(current) + skip); \ + int reference_id = source_->GetInt(); \ + Address address = external_reference_decoder_->Decode(reference_id); \ + new_object = reinterpret_cast<Object*>(address); \ + } else if (where == kBackref) { \ + emit_write_barrier = (space_number == NEW_SPACE); \ + new_object = GetAddressFromEnd(data & kSpaceMask); \ + if (deserializing_user_code()) { \ + new_object = ProcessBackRefInSerializedCode(new_object); \ } \ - if (emit_write_barrier && write_barrier_needed) { \ - Address current_address = reinterpret_cast<Address>(current); \ - isolate->heap()->RecordWrite( \ - current_object_address, \ - static_cast<int>(current_address - current_object_address)); \ + } else if (where == kBuiltin) { \ + DCHECK(deserializing_user_code()); \ + int builtin_id = source_->GetInt(); \ + DCHECK_LE(0, builtin_id); \ + DCHECK_LT(builtin_id, Builtins::builtin_count); \ + Builtins::Name name = static_cast<Builtins::Name>(builtin_id); \ + new_object = isolate->builtins()->builtin(name); \ + emit_write_barrier = false; \ + } else if (where == kAttachedReference) { \ + DCHECK(deserializing_user_code()); \ + int index = source_->GetInt(); \ + new_object = attached_objects_->at(index); \ + emit_write_barrier = isolate->heap()->InNewSpace(new_object); \ + } else { \ + DCHECK(where == kBackrefWithSkip); \ + int skip = source_->GetInt(); \ + current = reinterpret_cast<Object**>( \ + reinterpret_cast<Address>(current) + skip); \ + emit_write_barrier = (space_number == NEW_SPACE); \ + new_object = GetAddressFromEnd(data & kSpaceMask); \ + if (deserializing_user_code()) { \ + new_object = ProcessBackRefInSerializedCode(new_object); \ } \ - if (!current_was_incremented) { \ - current++; \ + } \ + if (within == kInnerPointer) { \ + if (space_number != CODE_SPACE || new_object->IsCode()) { \ + Code* new_code_object = reinterpret_cast<Code*>(new_object); \ + new_object = \ + reinterpret_cast<Object*>(new_code_object->instruction_start()); \ + } else { \ + DCHECK(space_number == CODE_SPACE); \ + Cell* cell = Cell::cast(new_object); \ + new_object = reinterpret_cast<Object*>(cell->ValueAddress()); \ } \ - break; \ } \ + if (how == kFromCode) { \ + Address location_of_branch_data = reinterpret_cast<Address>(current); \ + 
Assembler::deserialization_set_special_target_at( \ + location_of_branch_data, \ + Code::cast(HeapObject::FromAddress(current_object_address)), \ + reinterpret_cast<Address>(new_object)); \ + location_of_branch_data += Assembler::kSpecialTargetSize; \ + current = reinterpret_cast<Object**>(location_of_branch_data); \ + current_was_incremented = true; \ + } else { \ + *current = new_object; \ + } \ + } \ + if (emit_write_barrier && write_barrier_needed) { \ + Address current_address = reinterpret_cast<Address>(current); \ + isolate->heap()->RecordWrite( \ + current_object_address, \ + static_cast<int>(current_address - current_object_address)); \ + } \ + if (!current_was_incremented) { \ + current++; \ + } \ + break; \ + } // This generates a case and a body for the new space (which has to do extra // write barrier handling) and handles the other spaces with 8 fall-through @@ -1100,7 +1006,7 @@ void Deserializer::ReadChunk(Object** current, SIXTEEN_CASES(kRootArrayConstants + kNoSkipDistance + 16) { int root_id = RootArrayConstantFromByteCode(data); Object* object = isolate->heap()->roots_array_start()[root_id]; - ASSERT(!isolate->heap()->InNewSpace(object)); + DCHECK(!isolate->heap()->InNewSpace(object)); *current++ = object; break; } @@ -1112,7 +1018,7 @@ void Deserializer::ReadChunk(Object** current, current = reinterpret_cast<Object**>( reinterpret_cast<intptr_t>(current) + skip); Object* object = isolate->heap()->roots_array_start()[root_id]; - ASSERT(!isolate->heap()->InNewSpace(object)); + DCHECK(!isolate->heap()->InNewSpace(object)); *current++ = object; break; } @@ -1120,7 +1026,7 @@ void Deserializer::ReadChunk(Object** current, case kRepeat: { int repeats = source_->GetInt(); Object* object = current[-1]; - ASSERT(!isolate->heap()->InNewSpace(object)); + DCHECK(!isolate->heap()->InNewSpace(object)); for (int i = 0; i < repeats; i++) current[i] = object; current += repeats; break; @@ -1135,7 +1041,7 @@ void Deserializer::ReadChunk(Object** current, FOUR_CASES(kConstantRepeat + 9) { int repeats = RepeatsForCode(data); Object* object = current[-1]; - ASSERT(!isolate->heap()->InNewSpace(object)); + DCHECK(!isolate->heap()->InNewSpace(object)); for (int i = 0; i < repeats; i++) current[i] = object; current += repeats; break; @@ -1156,7 +1062,8 @@ void Deserializer::ReadChunk(Object** current, // allocation point and write a pointer to it to the current object. ALL_SPACES(kBackref, kPlain, kStartOfObject) ALL_SPACES(kBackrefWithSkip, kPlain, kStartOfObject) -#if defined(V8_TARGET_ARCH_MIPS) || V8_OOL_CONSTANT_POOL +#if defined(V8_TARGET_ARCH_MIPS) || V8_OOL_CONSTANT_POOL || \ + defined(V8_TARGET_ARCH_MIPS64) // Deserialize a new object from pointer found in code and write // a pointer to it to the current object. Required only for MIPS or ARM // with ool constant pool, and omitted on the other architectures because @@ -1208,6 +1115,16 @@ void Deserializer::ReadChunk(Object** current, kFromCode, kStartOfObject, 0) + // Find a builtin and write a pointer to it to the current object. + CASE_STATEMENT(kBuiltin, kPlain, kStartOfObject, 0) + CASE_BODY(kBuiltin, kPlain, kStartOfObject, 0) + // Find a builtin and write a pointer to it in the current code object. + CASE_STATEMENT(kBuiltin, kFromCode, kInnerPointer, 0) + CASE_BODY(kBuiltin, kFromCode, kInnerPointer, 0) + // Find an object in the attached references and write a pointer to it to + // the current object. 
+ CASE_STATEMENT(kAttachedReference, kPlain, kStartOfObject, 0) + CASE_BODY(kAttachedReference, kPlain, kStartOfObject, 0) #undef CASE_STATEMENT #undef CASE_BODY @@ -1241,20 +1158,7 @@ void Deserializer::ReadChunk(Object** current, UNREACHABLE(); } } - ASSERT_EQ(limit, current); -} - - -void SnapshotByteSink::PutInt(uintptr_t integer, const char* description) { - ASSERT(integer < 1 << 22); - integer <<= 2; - int bytes = 1; - if (integer > 0xff) bytes = 2; - if (integer > 0xffff) bytes = 3; - integer |= bytes; - Put(static_cast<int>(integer & 0xff), "IntPart1"); - if (bytes > 1) Put(static_cast<int>((integer >> 8) & 0xff), "IntPart2"); - if (bytes > 2) Put(static_cast<int>((integer >> 16) & 0xff), "IntPart3"); + DCHECK_EQ(limit, current); } @@ -1262,7 +1166,8 @@ Serializer::Serializer(Isolate* isolate, SnapshotByteSink* sink) : isolate_(isolate), sink_(sink), external_reference_encoder_(new ExternalReferenceEncoder(isolate)), - root_index_wave_front_(0) { + root_index_wave_front_(0), + code_address_map_(NULL) { // The serializer is meant to be used only to generate initial heap images // from a context in which there is only one isolate. for (int i = 0; i <= LAST_SPACE; i++) { @@ -1273,6 +1178,7 @@ Serializer::Serializer(Isolate* isolate, SnapshotByteSink* sink) Serializer::~Serializer() { delete external_reference_encoder_; + if (code_address_map_ != NULL) delete code_address_map_; } @@ -1338,7 +1244,7 @@ void Serializer::VisitPointers(Object** start, Object** end) { // deserialized objects. void SerializerDeserializer::Iterate(Isolate* isolate, ObjectVisitor* visitor) { - if (Serializer::enabled(isolate)) return; + if (isolate->serializer_enabled()) return; for (int i = 0; ; i++) { if (isolate->serialize_partial_snapshot_cache_length() <= i) { // Extend the array ready to get a value from the visitor when @@ -1374,7 +1280,7 @@ int PartialSerializer::PartialSnapshotCacheIndex(HeapObject* heap_object) { startup_serializer_->VisitPointer(reinterpret_cast<Object**>(&heap_object)); // We don't recurse from the startup snapshot generator into the partial // snapshot generator. - ASSERT(length == isolate->serialize_partial_snapshot_cache_length() - 1); + DCHECK(length == isolate->serialize_partial_snapshot_cache_length() - 1); return length; } @@ -1385,7 +1291,8 @@ int Serializer::RootIndex(HeapObject* heap_object, HowToCode from) { for (int i = 0; i < root_index_wave_front_; i++) { Object* root = heap->roots_array_start()[i]; if (!root->IsSmi() && root == heap_object) { -#if defined(V8_TARGET_ARCH_MIPS) || V8_OOL_CONSTANT_POOL +#if defined(V8_TARGET_ARCH_MIPS) || V8_OOL_CONSTANT_POOL || \ + defined(V8_TARGET_ARCH_MIPS64) if (from == kFromCode) { // In order to avoid code bloat in the deserializer we don't have // support for the encoding that specifies a particular root should @@ -1431,6 +1338,7 @@ void StartupSerializer::SerializeObject( int skip) { CHECK(o->IsHeapObject()); HeapObject* heap_object = HeapObject::cast(o); + DCHECK(!heap_object->IsJSFunction()); int root_index; if ((root_index = RootIndex(heap_object, how_to_code)) != kInvalidRootIndex) { @@ -1515,7 +1423,7 @@ void PartialSerializer::SerializeObject( if (heap_object->IsMap()) { // The code-caches link to context-specific code objects, which // the startup and context serializes cannot currently handle. 
- ASSERT(Map::cast(heap_object)->code_cache() == + DCHECK(Map::cast(heap_object)->code_cache() == heap_object->GetHeap()->empty_fixed_array()); } @@ -1541,10 +1449,10 @@ void PartialSerializer::SerializeObject( // Pointers from the partial snapshot to the objects in the startup snapshot // should go through the root array or through the partial snapshot cache. // If this is not the case you may have to add something to the root array. - ASSERT(!startup_serializer_->address_mapper()->IsMapped(heap_object)); + DCHECK(!startup_serializer_->address_mapper()->IsMapped(heap_object)); // All the internalized strings that the partial snapshot needs should be // either in the root table or in the partial snapshot cache. - ASSERT(!heap_object->IsInternalizedString()); + DCHECK(!heap_object->IsInternalizedString()); if (address_mapper_.IsMapped(heap_object)) { int space = SpaceOfObject(heap_object); @@ -1578,12 +1486,14 @@ void Serializer::ObjectSerializer::Serialize() { "ObjectSerialization"); sink_->PutInt(size >> kObjectAlignmentBits, "Size in words"); - ASSERT(code_address_map_); - const char* code_name = code_address_map_->Lookup(object_->address()); - LOG(serializer_->isolate_, - CodeNameEvent(object_->address(), sink_->Position(), code_name)); - LOG(serializer_->isolate_, - SnapshotPositionEvent(object_->address(), sink_->Position())); + if (serializer_->code_address_map_) { + const char* code_name = + serializer_->code_address_map_->Lookup(object_->address()); + LOG(serializer_->isolate_, + CodeNameEvent(object_->address(), sink_->Position(), code_name)); + LOG(serializer_->isolate_, + SnapshotPositionEvent(object_->address(), sink_->Position())); + } // Mark this object as already serialized. int offset = serializer_->Allocate(space, size); @@ -1617,7 +1527,7 @@ void Serializer::ObjectSerializer::VisitPointers(Object** start, root_index != kInvalidRootIndex && root_index < kRootArrayNumberOfConstantEncodings && current_contents == current[-1]) { - ASSERT(!serializer_->isolate()->heap()->InNewSpace(current_contents)); + DCHECK(!serializer_->isolate()->heap()->InNewSpace(current_contents)); int repeat_count = 1; while (current < end - 1 && current[repeat_count] == current_contents) { repeat_count++; @@ -1746,7 +1656,7 @@ void Serializer::ObjectSerializer::VisitExternalAsciiString( static Code* CloneCodeObject(HeapObject* code) { Address copy = new byte[code->Size()]; - OS::MemCopy(copy, code->address(), code->Size()); + MemCopy(copy, code->address(), code->Size()); return Code::cast(HeapObject::FromAddress(copy)); } @@ -1772,10 +1682,10 @@ int Serializer::ObjectSerializer::OutputRawData( int up_to_offset = static_cast<int>(up_to - object_start); int to_skip = up_to_offset - bytes_processed_so_far_; int bytes_to_output = to_skip; - bytes_processed_so_far_ += to_skip; + bytes_processed_so_far_ += to_skip; // This assert will fail if the reloc info gives us the target_address_address // locations in a non-ascending order. Luckily that doesn't happen. - ASSERT(to_skip >= 0); + DCHECK(to_skip >= 0); bool outputting_code = false; if (to_skip != 0 && code_object_ && !code_has_been_output_) { // Output the code all at once and fix later. 
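// A rough sketch of how the code-caching machinery introduced in the hunks
// below is meant to be used, with names taken from the declarations in this
// patch (illustrative only, not part of the patch itself):
//
//   // Producing a cache entry: the payload's header embeds a checksum of
//   // the source plus per-space reservation sizes.
//   ScriptData* cached = CodeSerializer::Serialize(isolate, info, source);
//
//   // Consuming it later: the source string is handed back to the
//   // deserializer through the attached-objects list.
//   Handle<SharedFunctionInfo> restored =
//       CodeSerializer::Deserialize(isolate, cached, source);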
@@ -1828,7 +1738,7 @@ int Serializer::SpaceOfObject(HeapObject* object) { for (int i = FIRST_SPACE; i <= LAST_SPACE; i++) { AllocationSpace s = static_cast<AllocationSpace>(i); if (object->GetHeap()->InSpace(object, s)) { - ASSERT(i < kNumberOfSpaces); + DCHECK(i < kNumberOfSpaces); return i; } } @@ -1863,12 +1773,182 @@ void Serializer::Pad() { } -bool SnapshotByteSource::AtEOF() { - if (0u + length_ - position_ > 2 * sizeof(uint32_t)) return false; - for (int x = position_; x < length_; x++) { - if (data_[x] != SerializerDeserializer::nop()) return false; +void Serializer::InitializeCodeAddressMap() { + isolate_->InitializeLoggingAndCounters(); + code_address_map_ = new CodeAddressMap(isolate_); +} + + +ScriptData* CodeSerializer::Serialize(Isolate* isolate, + Handle<SharedFunctionInfo> info, + Handle<String> source) { + // Serialize code object. + List<byte> payload; + ListSnapshotSink list_sink(&payload); + CodeSerializer cs(isolate, &list_sink, *source); + DisallowHeapAllocation no_gc; + Object** location = Handle<Object>::cast(info).location(); + cs.VisitPointer(location); + cs.Pad(); + + SerializedCodeData data(&payload, &cs); + return data.GetScriptData(); +} + + +void CodeSerializer::SerializeObject(Object* o, HowToCode how_to_code, + WhereToPoint where_to_point, int skip) { + CHECK(o->IsHeapObject()); + HeapObject* heap_object = HeapObject::cast(o); + + // The code-caches link to context-specific code objects, which + // the startup and context serializes cannot currently handle. + DCHECK(!heap_object->IsMap() || + Map::cast(heap_object)->code_cache() == + heap_object->GetHeap()->empty_fixed_array()); + + int root_index; + if ((root_index = RootIndex(heap_object, how_to_code)) != kInvalidRootIndex) { + PutRoot(root_index, heap_object, how_to_code, where_to_point, skip); + return; + } + + // TODO(yangguo) wire up stubs from stub cache. + // TODO(yangguo) wire up global object. + // TODO(yangguo) We cannot deal with different hash seeds yet. + DCHECK(!heap_object->IsHashTable()); + + if (address_mapper_.IsMapped(heap_object)) { + int space = SpaceOfObject(heap_object); + int address = address_mapper_.MappedTo(heap_object); + SerializeReferenceToPreviousObject(space, address, how_to_code, + where_to_point, skip); + return; + } + + if (heap_object->IsCode()) { + Code* code_object = Code::cast(heap_object); + if (code_object->kind() == Code::BUILTIN) { + SerializeBuiltin(code_object, how_to_code, where_to_point, skip); + return; + } + // TODO(yangguo) figure out whether other code kinds can be handled smarter. + } + + if (heap_object == source_) { + SerializeSourceObject(how_to_code, where_to_point, skip); + return; + } + + if (heap_object->IsScript()) { + // The wrapper cache uses a Foreign object to point to a global handle. + // However, the object visitor expects foreign objects to point to external + // references. Clear the cache to avoid this issue. + Script::cast(heap_object)->ClearWrapperCache(); + } + + if (skip != 0) { + sink_->Put(kSkip, "SkipFromSerializeObject"); + sink_->PutInt(skip, "SkipDistanceFromSerializeObject"); + } + // Object has not yet been serialized. Serialize it here. 
+ ObjectSerializer serializer(this, heap_object, sink_, how_to_code, + where_to_point); + serializer.Serialize(); +} + + +void CodeSerializer::SerializeBuiltin(Code* builtin, HowToCode how_to_code, + WhereToPoint where_to_point, int skip) { + if (skip != 0) { + sink_->Put(kSkip, "SkipFromSerializeBuiltin"); + sink_->PutInt(skip, "SkipDistanceFromSerializeBuiltin"); + } + + DCHECK((how_to_code == kPlain && where_to_point == kStartOfObject) || + (how_to_code == kFromCode && where_to_point == kInnerPointer)); + int builtin_index = builtin->builtin_index(); + DCHECK_LT(builtin_index, Builtins::builtin_count); + DCHECK_LE(0, builtin_index); + sink_->Put(kBuiltin + how_to_code + where_to_point, "Builtin"); + sink_->PutInt(builtin_index, "builtin_index"); +} + + +void CodeSerializer::SerializeSourceObject(HowToCode how_to_code, + WhereToPoint where_to_point, + int skip) { + if (skip != 0) { + sink_->Put(kSkip, "SkipFromSerializeSourceObject"); + sink_->PutInt(skip, "SkipDistanceFromSerializeSourceObject"); + } + + DCHECK(how_to_code == kPlain && where_to_point == kStartOfObject); + sink_->Put(kAttachedReference + how_to_code + where_to_point, "Source"); + sink_->PutInt(kSourceObjectIndex, "kSourceObjectIndex"); +} + + +Handle<SharedFunctionInfo> CodeSerializer::Deserialize(Isolate* isolate, + ScriptData* data, + Handle<String> source) { + base::ElapsedTimer timer; + if (FLAG_profile_deserialization) timer.Start(); + SerializedCodeData scd(data, *source); + SnapshotByteSource payload(scd.Payload(), scd.PayloadLength()); + Deserializer deserializer(&payload); + STATIC_ASSERT(NEW_SPACE == 0); + for (int i = NEW_SPACE; i <= PROPERTY_CELL_SPACE; i++) { + deserializer.set_reservation(i, scd.GetReservation(i)); + } + + // Prepare and register list of attached objects. 
+ Vector<Object*> attached_objects = Vector<Object*>::New(1); + attached_objects[kSourceObjectIndex] = *source; + deserializer.SetAttachedObjects(&attached_objects); + + Object* root; + deserializer.DeserializePartial(isolate, &root); + deserializer.FlushICacheForNewCodeObjects(); + if (FLAG_profile_deserialization) { + double ms = timer.Elapsed().InMillisecondsF(); + int length = data->length(); + PrintF("[Deserializing from %d bytes took %0.3f ms]\n", length, ms); + } + return Handle<SharedFunctionInfo>(SharedFunctionInfo::cast(root), isolate); +} + + +SerializedCodeData::SerializedCodeData(List<byte>* payload, CodeSerializer* cs) + : owns_script_data_(true) { + DisallowHeapAllocation no_gc; + int data_length = payload->length() + kHeaderEntries * kIntSize; + byte* data = NewArray<byte>(data_length); + DCHECK(IsAligned(reinterpret_cast<intptr_t>(data), kPointerAlignment)); + CopyBytes(data + kHeaderEntries * kIntSize, payload->begin(), + static_cast<size_t>(payload->length())); + script_data_ = new ScriptData(data, data_length); + script_data_->AcquireDataOwnership(); + SetHeaderValue(kCheckSumOffset, CheckSum(cs->source())); + STATIC_ASSERT(NEW_SPACE == 0); + for (int i = NEW_SPACE; i <= PROPERTY_CELL_SPACE; i++) { + SetHeaderValue(kReservationsOffset + i, cs->CurrentAllocationAddress(i)); } - return true; } + +bool SerializedCodeData::IsSane(String* source) { + return GetHeaderValue(kCheckSumOffset) == CheckSum(source) && + PayloadLength() >= SharedFunctionInfo::kSize; +} + + +int SerializedCodeData::CheckSum(String* string) { + int checksum = Version::Hash(); +#ifdef DEBUG + uint32_t seed = static_cast<uint32_t>(checksum); + checksum = static_cast<int>(IteratingStringHasher::Hash(string, seed)); +#endif // DEBUG + return checksum; +} } } // namespace v8::internal diff --git a/deps/v8/src/serialize.h b/deps/v8/src/serialize.h index 958f20e24f3..e4e6c3ad86b 100644 --- a/deps/v8/src/serialize.h +++ b/deps/v8/src/serialize.h @@ -5,7 +5,11 @@ #ifndef V8_SERIALIZE_H_ #define V8_SERIALIZE_H_ -#include "hashmap.h" +#include "src/compiler.h" +#include "src/hashmap.h" +#include "src/heap-profiler.h" +#include "src/isolate.h" +#include "src/snapshot-source-sink.h" namespace v8 { namespace internal { @@ -13,18 +17,16 @@ namespace internal { // A TypeCode is used to distinguish different kinds of external reference. // It is a single bit to make testing for types easy. enum TypeCode { - UNCLASSIFIED, // One-of-a-kind references. + UNCLASSIFIED, // One-of-a-kind references. + C_BUILTIN, BUILTIN, RUNTIME_FUNCTION, IC_UTILITY, - DEBUG_ADDRESS, STATS_COUNTER, TOP_ADDRESS, - C_BUILTIN, - EXTENSION, ACCESSOR, - RUNTIME_ENTRY, STUB_CACHE_TABLE, + RUNTIME_ENTRY, LAZY_DEOPTIMIZATION }; @@ -34,10 +36,8 @@ const int kFirstTypeCode = UNCLASSIFIED; const int kReferenceIdBits = 16; const int kReferenceIdMask = (1 << kReferenceIdBits) - 1; const int kReferenceTypeShift = kReferenceIdBits; -const int kDebugRegisterBits = 4; -const int kDebugIdShift = kDebugRegisterBits; -const int kDeoptTableSerializeEntryCount = 12; +const int kDeoptTableSerializeEntryCount = 64; // ExternalReferenceTable is a helper class that defines the relationship // between external references and their encodings. 
It is used to build @@ -60,7 +60,7 @@ class ExternalReferenceTable { private: explicit ExternalReferenceTable(Isolate* isolate) : refs_(64) { - PopulateTable(isolate); + PopulateTable(isolate); } struct ExternalReferenceEntry { @@ -80,8 +80,12 @@ class ExternalReferenceTable { // For other types of references, the caller will figure out the address. void Add(Address address, TypeCode type, uint16_t id, const char* name); + void Add(Address address, const char* name) { + Add(address, UNCLASSIFIED, ++max_id_[UNCLASSIFIED], name); + } + List<ExternalReferenceEntry> refs_; - int max_id_[kTypeCodeCount]; + uint16_t max_id_[kTypeCodeCount]; }; @@ -122,7 +126,7 @@ class ExternalReferenceDecoder { Address* Lookup(uint32_t key) const { int type = key >> kReferenceTypeShift; - ASSERT(kFirstTypeCode <= type && type < kTypeCodeCount); + DCHECK(kFirstTypeCode <= type && type < kTypeCodeCount); int id = key & kReferenceIdMask; return &encodings_[type][id]; } @@ -135,49 +139,6 @@ class ExternalReferenceDecoder { }; -class SnapshotByteSource { - public: - SnapshotByteSource(const byte* array, int length) - : data_(array), length_(length), position_(0) { } - - bool HasMore() { return position_ < length_; } - - int Get() { - ASSERT(position_ < length_); - return data_[position_++]; - } - - int32_t GetUnalignedInt() { -#if defined(V8_HOST_CAN_READ_UNALIGNED) && __BYTE_ORDER == __LITTLE_ENDIAN - int32_t answer; - ASSERT(position_ + sizeof(answer) <= length_ + 0u); - answer = *reinterpret_cast<const int32_t*>(data_ + position_); -#else - int32_t answer = data_[position_]; - answer |= data_[position_ + 1] << 8; - answer |= data_[position_ + 2] << 16; - answer |= data_[position_ + 3] << 24; -#endif - return answer; - } - - void Advance(int by) { position_ += by; } - - inline void CopyRaw(byte* to, int number_of_bytes); - - inline int GetInt(); - - bool AtEOF(); - - int position() { return position_; } - - private: - const byte* data_; - int length_; - int position_; -}; - - // The Serializer/Deserializer class is a common superclass for Serializer and // Deserializer which is used to store common constants and methods used by // both. @@ -190,17 +151,18 @@ class SerializerDeserializer: public ObjectVisitor { protected: // Where the pointed-to object can be found: enum Where { - kNewObject = 0, // Object is next in snapshot. + kNewObject = 0, // Object is next in snapshot. // 1-6 One per space. - kRootArray = 0x9, // Object is found in root array. - kPartialSnapshotCache = 0xa, // Object is in the cache. - kExternalReference = 0xb, // Pointer to an external reference. - kSkip = 0xc, // Skip n bytes. - kNop = 0xd, // Does nothing, used to pad. - // 0xe-0xf Free. - kBackref = 0x10, // Object is described relative to end. + kRootArray = 0x9, // Object is found in root array. + kPartialSnapshotCache = 0xa, // Object is in the cache. + kExternalReference = 0xb, // Pointer to an external reference. + kSkip = 0xc, // Skip n bytes. + kBuiltin = 0xd, // Builtin code object. + kAttachedReference = 0xe, // Object is described in an attached list. + kNop = 0xf, // Does nothing, used to pad. + kBackref = 0x10, // Object is described relative to end. // 0x11-0x16 One per space. - kBackrefWithSkip = 0x18, // Object is described relative to end. + kBackrefWithSkip = 0x18, // Object is described relative to end. // 0x19-0x1e One per space. // 0x20-0x3f Used by misc. tags below. 
kPointedToMask = 0x3f @@ -249,11 +211,11 @@ class SerializerDeserializer: public ObjectVisitor { // 0x73-0x7f Repeat last word (subtract 0x72 to get the count). static const int kMaxRepeats = 0x7f - 0x72; static int CodeForRepeats(int repeats) { - ASSERT(repeats >= 1 && repeats <= kMaxRepeats); + DCHECK(repeats >= 1 && repeats <= kMaxRepeats); return 0x72 + repeats; } static int RepeatsForCode(int byte_code) { - ASSERT(byte_code >= kConstantRepeat && byte_code <= 0x7f); + DCHECK(byte_code >= kConstantRepeat && byte_code <= 0x7f); return byte_code - 0x72; } static const int kRootArrayConstants = 0xa0; @@ -271,26 +233,6 @@ class SerializerDeserializer: public ObjectVisitor { }; -int SnapshotByteSource::GetInt() { - // This way of variable-length encoding integers does not suffer from branch - // mispredictions. - uint32_t answer = GetUnalignedInt(); - int bytes = answer & 3; - Advance(bytes); - uint32_t mask = 0xffffffffu; - mask >>= 32 - (bytes << 3); - answer &= mask; - answer >>= 2; - return answer; -} - - -void SnapshotByteSource::CopyRaw(byte* to, int number_of_bytes) { - OS::MemCopy(to, data_ + position_, number_of_bytes); - position_ += number_of_bytes; -} - - // A Deserializer reads a snapshot and reconstructs the Object graph it defines. class Deserializer: public SerializerDeserializer { public: @@ -306,11 +248,21 @@ class Deserializer: public SerializerDeserializer { void DeserializePartial(Isolate* isolate, Object** root); void set_reservation(int space_number, int reservation) { - ASSERT(space_number >= 0); - ASSERT(space_number <= LAST_SPACE); + DCHECK(space_number >= 0); + DCHECK(space_number <= LAST_SPACE); reservations_[space_number] = reservation; } + void FlushICacheForNewCodeObjects(); + + // Serialized user code references certain objects that are provided in a list. + // By calling this method, we assume that we are deserializing user code. + void SetAttachedObjects(Vector<Object*>* attached_objects) { + attached_objects_ = attached_objects; + } + + bool deserializing_user_code() { return attached_objects_ != NULL; } + private: virtual void VisitPointers(Object** start, Object** end); @@ -331,16 +283,16 @@ class Deserializer: public SerializerDeserializer { Object** start, Object** end, int space, Address object_address); void ReadObject(int space_number, Object** write_back); + // Special handling for serialized code like hooking up internalized strings. + HeapObject* ProcessNewObjectFromSerializedCode(HeapObject* obj); + Object* ProcessBackRefInSerializedCode(Object* obj); + // This routine both allocates a new object, and also keeps // track of where objects have been allocated so that we can // fix back references when deserializing. Address Allocate(int space_index, int size) { Address address = high_water_[space_index]; high_water_[space_index] = address + size; - HeapProfiler* profiler = isolate_->heap_profiler(); - if (profiler->is_tracking_allocations()) { - profiler->AllocationEvent(address, size); - } return address; } @@ -352,11 +304,12 @@ class Deserializer: public SerializerDeserializer { return HeapObject::FromAddress(high_water_[space] - offset); } - void FlushICacheForNewCodeObjects(); - // Cached current isolate. Isolate* isolate_; + // Objects from the attached object descriptions in the serialized user code. + Vector<Object*>* attached_objects_; + SnapshotByteSource* source_; // This is the address of the next object that will be allocated in each // space. It is used to calculate the addresses of back-references.
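// Usage sketch for the attached-objects hooks above (types and indices as
// declared in this patch; illustrative only). This mirrors what
// CodeSerializer::Deserialize does when consuming serialized user code:
//
//   Deserializer deserializer(&payload);
//   Vector<Object*> attached = Vector<Object*>::New(1);
//   attached[CodeSerializer::kSourceObjectIndex] = *source;
//   // Registering the list switches the deserializer into user-code mode,
//   // i.e. deserializing_user_code() now returns true.
//   deserializer.SetAttachedObjects(&attached);
//   Object* root;
//   deserializer.DeserializePartial(isolate, &root);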
@@ -371,18 +324,6 @@ class Deserializer: public SerializerDeserializer { }; -class SnapshotByteSink { - public: - virtual ~SnapshotByteSink() { } - virtual void Put(int byte, const char* description) = 0; - virtual void PutSection(int byte, const char* description) { - Put(byte, description); - } - void PutInt(uintptr_t integer, const char* description); - virtual int Position() = 0; -}; - - // Mapping objects to their location after deserialization. // This is used during building, but not at runtime by V8. class SerializationAddressMapper { @@ -400,13 +341,13 @@ class SerializationAddressMapper { } int MappedTo(HeapObject* obj) { - ASSERT(IsMapped(obj)); + DCHECK(IsMapped(obj)); return static_cast<int>(reinterpret_cast<intptr_t>( serialization_map_->Lookup(Key(obj), Hash(obj), false)->value)); } void AddMapping(HeapObject* obj, int to) { - ASSERT(!IsMapped(obj)); + DCHECK(!IsMapped(obj)); HashMap::Entry* entry = serialization_map_->Lookup(Key(obj), Hash(obj), true); entry->value = Value(to); @@ -442,21 +383,12 @@ class Serializer : public SerializerDeserializer { // You can call this after serialization to find out how much space was used // in each space. int CurrentAllocationAddress(int space) const { - ASSERT(space < kNumberOfSpaces); + DCHECK(space < kNumberOfSpaces); return fullness_[space]; } Isolate* isolate() const { return isolate_; } - static void RequestEnable(Isolate* isolate); - static void InitializeOncePerProcess(); - static void TearDown(); - static bool enabled(Isolate* isolate) { - SerializationState state = static_cast<SerializationState>( - NoBarrier_Load(&serialization_state_)); - ASSERT(state != SERIALIZER_STATE_UNINITIALIZED); - return state == SERIALIZER_STATE_ENABLED; - } SerializationAddressMapper* address_mapper() { return &address_mapper_; } void PutRoot(int index, HeapObject* object, @@ -468,10 +400,9 @@ class Serializer : public SerializerDeserializer { static const int kInvalidRootIndex = -1; int RootIndex(HeapObject* heap_object, HowToCode from); - virtual bool ShouldBeInThePartialSnapshotCache(HeapObject* o) = 0; intptr_t root_index_wave_front() { return root_index_wave_front_; } void set_root_index_wave_front(intptr_t value) { - ASSERT(value >= root_index_wave_front_); + DCHECK(value >= root_index_wave_front_); root_index_wave_front_ = value; } @@ -555,14 +486,6 @@ class Serializer : public SerializerDeserializer { SnapshotByteSink* sink_; ExternalReferenceEncoder* external_reference_encoder_; - enum SerializationState { - SERIALIZER_STATE_UNINITIALIZED = 0, - SERIALIZER_STATE_DISABLED = 1, - SERIALIZER_STATE_ENABLED = 2 - }; - - static AtomicWord serialization_state_; - SerializationAddressMapper address_mapper_; intptr_t root_index_wave_front_; void Pad(); @@ -570,8 +493,12 @@ class Serializer : public SerializerDeserializer { friend class ObjectSerializer; friend class Deserializer; + // We may not need the code address map for logging for every instance + // of the serializer. Initialize it on demand. + void InitializeCodeAddressMap(); + private: - static CodeAddressMap* code_address_map_; + CodeAddressMap* code_address_map_; DISALLOW_COPY_AND_ASSIGN(Serializer); }; @@ -584,23 +511,24 @@ class PartialSerializer : public Serializer { : Serializer(isolate, sink), startup_serializer_(startup_snapshot_serializer) { set_root_index_wave_front(Heap::kStrongRootListLength); + InitializeCodeAddressMap(); } // Serialize the objects reachable from a single object pointer. 
- virtual void Serialize(Object** o); + void Serialize(Object** o); virtual void SerializeObject(Object* o, HowToCode how_to_code, WhereToPoint where_to_point, int skip); - protected: - virtual int PartialSnapshotCacheIndex(HeapObject* o); - virtual bool ShouldBeInThePartialSnapshotCache(HeapObject* o) { + private: + int PartialSnapshotCacheIndex(HeapObject* o); + bool ShouldBeInThePartialSnapshotCache(HeapObject* o) { // Scripts should be referred only through shared function infos. We can't // allow them to be part of the partial snapshot because they contain a // unique ID, and deserializing several partial snapshots containing script // would cause dupes. - ASSERT(!o->IsScript()); + DCHECK(!o->IsScript()); return o->IsName() || o->IsSharedFunctionInfo() || o->IsHeapNumber() || o->IsCode() || o->IsScopeInfo() || @@ -608,7 +536,7 @@ class PartialSerializer : public Serializer { startup_serializer_->isolate()->heap()->fixed_cow_array_map(); } - private: + Serializer* startup_serializer_; DISALLOW_COPY_AND_ASSIGN(PartialSerializer); }; @@ -623,6 +551,7 @@ class StartupSerializer : public Serializer { // which will repopulate the cache with objects needed by that partial // snapshot. isolate->set_serialize_partial_snapshot_cache_length(0); + InitializeCodeAddressMap(); } // Serialize the current state of the heap. The order is: // 1) Strong references. @@ -641,12 +570,110 @@ class StartupSerializer : public Serializer { } private: - virtual bool ShouldBeInThePartialSnapshotCache(HeapObject* o) { - return false; + DISALLOW_COPY_AND_ASSIGN(StartupSerializer); +}; + + +class CodeSerializer : public Serializer { + public: + CodeSerializer(Isolate* isolate, SnapshotByteSink* sink, String* source) + : Serializer(isolate, sink), source_(source) { + set_root_index_wave_front(Heap::kStrongRootListLength); + InitializeCodeAddressMap(); } + + static ScriptData* Serialize(Isolate* isolate, + Handle<SharedFunctionInfo> info, + Handle<String> source); + + virtual void SerializeObject(Object* o, HowToCode how_to_code, + WhereToPoint where_to_point, int skip); + + static Handle<SharedFunctionInfo> Deserialize(Isolate* isolate, + ScriptData* data, + Handle<String> source); + + static const int kSourceObjectIndex = 0; + + String* source() { + DCHECK(!AllowHeapAllocation::IsAllowed()); + return source_; + } + + private: + void SerializeBuiltin(Code* builtin, HowToCode how_to_code, + WhereToPoint where_to_point, int skip); + void SerializeSourceObject(HowToCode how_to_code, WhereToPoint where_to_point, + int skip); + + DisallowHeapAllocation no_gc_; + String* source_; + DISALLOW_COPY_AND_ASSIGN(CodeSerializer); }; +// Wrapper around ScriptData to provide code-serializer-specific functionality. +class SerializedCodeData { + public: + // Used when consuming. + explicit SerializedCodeData(ScriptData* data, String* source) + : script_data_(data), owns_script_data_(false) { + DisallowHeapAllocation no_gc; + CHECK(IsSane(source)); + } + + // Used when producing. + SerializedCodeData(List<byte>* payload, CodeSerializer* cs); + + ~SerializedCodeData() { + if (owns_script_data_) delete script_data_; + } + + // Return ScriptData object and relinquish ownership over it to the caller.
+ ScriptData* GetScriptData() { + ScriptData* result = script_data_; + script_data_ = NULL; + DCHECK(owns_script_data_); + owns_script_data_ = false; + return result; + } + + const byte* Payload() const { + return script_data_->data() + kHeaderEntries * kIntSize; + } + + int PayloadLength() const { + return script_data_->length() - kHeaderEntries * kIntSize; + } + + int GetReservation(int space) const { + return GetHeaderValue(kReservationsOffset + space); + } + + private: + void SetHeaderValue(int offset, int value) { + reinterpret_cast<int*>(const_cast<byte*>(script_data_->data()))[offset] = + value; + } + + int GetHeaderValue(int offset) const { + return reinterpret_cast<const int*>(script_data_->data())[offset]; + } + + bool IsSane(String* source); + + int CheckSum(String* source); + + // The data header consists of int-sized entries: + // [0] version hash + // [1..7] reservation sizes for spaces from NEW_SPACE to PROPERTY_CELL_SPACE. + static const int kCheckSumOffset = 0; + static const int kReservationsOffset = 1; + static const int kHeaderEntries = 8; + + ScriptData* script_data_; + bool owns_script_data_; +}; } } // namespace v8::internal #endif // V8_SERIALIZE_H_ diff --git a/deps/v8/src/simulator.h b/deps/v8/src/simulator.h index 910e1674cac..6dd08f4a5ed 100644 --- a/deps/v8/src/simulator.h +++ b/deps/v8/src/simulator.h @@ -6,15 +6,19 @@ #define V8_SIMULATOR_H_ #if V8_TARGET_ARCH_IA32 -#include "ia32/simulator-ia32.h" +#include "src/ia32/simulator-ia32.h" #elif V8_TARGET_ARCH_X64 -#include "x64/simulator-x64.h" +#include "src/x64/simulator-x64.h" #elif V8_TARGET_ARCH_ARM64 -#include "arm64/simulator-arm64.h" +#include "src/arm64/simulator-arm64.h" #elif V8_TARGET_ARCH_ARM -#include "arm/simulator-arm.h" +#include "src/arm/simulator-arm.h" #elif V8_TARGET_ARCH_MIPS -#include "mips/simulator-mips.h" +#include "src/mips/simulator-mips.h" +#elif V8_TARGET_ARCH_MIPS64 +#include "src/mips64/simulator-mips64.h" +#elif V8_TARGET_ARCH_X87 +#include "src/x87/simulator-x87.h" #else #error Unsupported target architecture. 
#endif diff --git a/deps/v8/src/small-pointer-list.h b/deps/v8/src/small-pointer-list.h index 2359a8b9fc5..241689e5b23 100644 --- a/deps/v8/src/small-pointer-list.h +++ b/deps/v8/src/small-pointer-list.h @@ -5,9 +5,9 @@ #ifndef V8_SMALL_POINTER_LIST_H_ #define V8_SMALL_POINTER_LIST_H_ -#include "checks.h" -#include "v8globals.h" -#include "zone.h" +#include "src/base/logging.h" +#include "src/globals.h" +#include "src/zone.h" namespace v8 { namespace internal { @@ -38,7 +38,7 @@ class SmallPointerList { if ((data_ & kTagMask) == kSingletonTag) { list->Add(single_value(), zone); } - ASSERT(IsAligned(reinterpret_cast<intptr_t>(list), kPointerAlignment)); + DCHECK(IsAligned(reinterpret_cast<intptr_t>(list), kPointerAlignment)); data_ = reinterpret_cast<intptr_t>(list) | kListTag; } @@ -61,7 +61,7 @@ class SmallPointerList { } void Add(T* pointer, Zone* zone) { - ASSERT(IsAligned(reinterpret_cast<intptr_t>(pointer), kPointerAlignment)); + DCHECK(IsAligned(reinterpret_cast<intptr_t>(pointer), kPointerAlignment)); if ((data_ & kTagMask) == kEmptyTag) { data_ = reinterpret_cast<intptr_t>(pointer) | kSingletonTag; return; @@ -70,7 +70,7 @@ class SmallPointerList { PointerList* list = new(zone) PointerList(2, zone); list->Add(single_value(), zone); list->Add(pointer, zone); - ASSERT(IsAligned(reinterpret_cast<intptr_t>(list), kPointerAlignment)); + DCHECK(IsAligned(reinterpret_cast<intptr_t>(list), kPointerAlignment)); data_ = reinterpret_cast<intptr_t>(list) | kListTag; return; } @@ -80,9 +80,9 @@ class SmallPointerList { // Note: returns T* and not T*& (unlike List from list.h). // This makes the implementation simpler and more const correct. T* at(int i) const { - ASSERT((data_ & kTagMask) != kEmptyTag); + DCHECK((data_ & kTagMask) != kEmptyTag); if ((data_ & kTagMask) == kSingletonTag) { - ASSERT(i == 0); + DCHECK(i == 0); return single_value(); } return list()->at(i); @@ -104,7 +104,7 @@ class SmallPointerList { } T* RemoveLast() { - ASSERT((data_ & kTagMask) != kEmptyTag); + DCHECK((data_ & kTagMask) != kEmptyTag); if ((data_ & kTagMask) == kSingletonTag) { T* result = single_value(); data_ = kEmptyTag; @@ -115,11 +115,11 @@ class SmallPointerList { void Rewind(int pos) { if ((data_ & kTagMask) == kEmptyTag) { - ASSERT(pos == 0); + DCHECK(pos == 0); return; } if ((data_ & kTagMask) == kSingletonTag) { - ASSERT(pos == 0 || pos == 1); + DCHECK(pos == 0 || pos == 1); if (pos == 0) { data_ = kEmptyTag; } @@ -155,13 +155,13 @@ class SmallPointerList { STATIC_ASSERT(kTagMask + 1 <= kPointerAlignment); T* single_value() const { - ASSERT((data_ & kTagMask) == kSingletonTag); + DCHECK((data_ & kTagMask) == kSingletonTag); STATIC_ASSERT(kSingletonTag == 0); return reinterpret_cast<T*>(data_); } PointerList* list() const { - ASSERT((data_ & kTagMask) == kListTag); + DCHECK((data_ & kTagMask) == kListTag); return reinterpret_cast<PointerList*>(data_ & kValueMask); } diff --git a/deps/v8/src/smart-pointers.h b/deps/v8/src/smart-pointers.h index db2206a32f5..c4bbd0b45e5 100644 --- a/deps/v8/src/smart-pointers.h +++ b/deps/v8/src/smart-pointers.h @@ -56,7 +56,7 @@ class SmartPointerBase { } void Reset(T* new_value) { - ASSERT(p_ == NULL || p_ != new_value); + DCHECK(p_ == NULL || p_ != new_value); if (p_) Deallocator::Delete(p_); p_ = new_value; } @@ -66,7 +66,7 @@ class SmartPointerBase { // double freeing. 
SmartPointerBase<Deallocator, T>& operator=( const SmartPointerBase<Deallocator, T>& rhs) { - ASSERT(is_empty()); + DCHECK(is_empty()); T* tmp = rhs.p_; // swap to handle self-assignment const_cast<SmartPointerBase<Deallocator, T>&>(rhs).p_ = NULL; p_ = tmp; diff --git a/deps/v8/src/snapshot-common.cc b/deps/v8/src/snapshot-common.cc index b22f2902d37..25193b2ed0d 100644 --- a/deps/v8/src/snapshot-common.cc +++ b/deps/v8/src/snapshot-common.cc @@ -4,54 +4,16 @@ // The common functionality when building with or without snapshots. -#include "v8.h" +#include "src/v8.h" -#include "api.h" -#include "serialize.h" -#include "snapshot.h" -#include "platform.h" +#include "src/api.h" +#include "src/base/platform/platform.h" +#include "src/serialize.h" +#include "src/snapshot.h" namespace v8 { namespace internal { - -static void ReserveSpaceForSnapshot(Deserializer* deserializer, - const char* file_name) { - int file_name_length = StrLength(file_name) + 10; - Vector<char> name = Vector<char>::New(file_name_length + 1); - OS::SNPrintF(name, "%s.size", file_name); - FILE* fp = OS::FOpen(name.start(), "r"); - CHECK_NE(NULL, fp); - int new_size, pointer_size, data_size, code_size, map_size, cell_size, - property_cell_size; -#ifdef _MSC_VER - // Avoid warning about unsafe fscanf from MSVC. - // Please note that this is only fine if %c and %s are not being used. -#define fscanf fscanf_s -#endif - CHECK_EQ(1, fscanf(fp, "new %d\n", &new_size)); - CHECK_EQ(1, fscanf(fp, "pointer %d\n", &pointer_size)); - CHECK_EQ(1, fscanf(fp, "data %d\n", &data_size)); - CHECK_EQ(1, fscanf(fp, "code %d\n", &code_size)); - CHECK_EQ(1, fscanf(fp, "map %d\n", &map_size)); - CHECK_EQ(1, fscanf(fp, "cell %d\n", &cell_size)); - CHECK_EQ(1, fscanf(fp, "property cell %d\n", &property_cell_size)); -#ifdef _MSC_VER -#undef fscanf -#endif - fclose(fp); - deserializer->set_reservation(NEW_SPACE, new_size); - deserializer->set_reservation(OLD_POINTER_SPACE, pointer_size); - deserializer->set_reservation(OLD_DATA_SPACE, data_size); - deserializer->set_reservation(CODE_SPACE, code_size); - deserializer->set_reservation(MAP_SPACE, map_size); - deserializer->set_reservation(CELL_SPACE, cell_size); - deserializer->set_reservation(PROPERTY_CELL_SPACE, - property_cell_size); - name.Dispose(); -} - - void Snapshot::ReserveSpaceForLinkedInSnapshot(Deserializer* deserializer) { deserializer->set_reservation(NEW_SPACE, new_space_used_); deserializer->set_reservation(OLD_POINTER_SPACE, pointer_space_used_); @@ -64,22 +26,9 @@ void Snapshot::ReserveSpaceForLinkedInSnapshot(Deserializer* deserializer) { } -bool Snapshot::Initialize(const char* snapshot_file) { - if (snapshot_file) { - int len; - byte* str = ReadBytes(snapshot_file, &len); - if (!str) return false; - bool success; - { - SnapshotByteSource source(str, len); - Deserializer deserializer(&source); - ReserveSpaceForSnapshot(&deserializer, snapshot_file); - success = V8::Initialize(&deserializer); - } - DeleteArray(str); - return success; - } else if (size_ > 0) { - ElapsedTimer timer; +bool Snapshot::Initialize() { + if (size_ > 0) { + base::ElapsedTimer timer; if (FLAG_profile_deserialization) { timer.Start(); } @@ -123,4 +72,15 @@ Handle<Context> Snapshot::NewContextFromSnapshot(Isolate* isolate) { return Handle<Context>(Context::cast(root)); } + +#ifdef V8_USE_EXTERNAL_STARTUP_DATA +// Dummy implementations of Set*FromFile(..) APIs. +// +// These are meant for use with snapshot-external.cc. 
Should this file +// be compiled with those options we just supply these dummy implementations +// below. This happens when compiling the mksnapshot utility. +void SetNativesFromFile(StartupData* data) { CHECK(false); } +void SetSnapshotFromFile(StartupData* data) { CHECK(false); } +#endif // V8_USE_EXTERNAL_STARTUP_DATA + } } // namespace v8::internal diff --git a/deps/v8/src/snapshot-empty.cc b/deps/v8/src/snapshot-empty.cc index 62b77fe35fb..65207bfc744 100644 --- a/deps/v8/src/snapshot-empty.cc +++ b/deps/v8/src/snapshot-empty.cc @@ -4,9 +4,9 @@ // Used for building without snapshots. -#include "v8.h" +#include "src/v8.h" -#include "snapshot.h" +#include "src/snapshot.h" namespace v8 { namespace internal { diff --git a/deps/v8/src/snapshot-external.cc b/deps/v8/src/snapshot-external.cc new file mode 100644 index 00000000000..38b7cf414c6 --- /dev/null +++ b/deps/v8/src/snapshot-external.cc @@ -0,0 +1,140 @@ +// Copyright 2006-2008 the V8 project authors. All rights reserved. +// Use of this source code is governed by a BSD-style license that can be +// found in the LICENSE file. + +// Used for building with external snapshots. + +#include "src/snapshot.h" + +#include "src/serialize.h" +#include "src/snapshot-source-sink.h" +#include "src/v8.h" // for V8::Initialize + +namespace v8 { +namespace internal { + + +struct SnapshotImpl { + public: + const byte* data; + int size; + int new_space_used; + int pointer_space_used; + int data_space_used; + int code_space_used; + int map_space_used; + int cell_space_used; + int property_cell_space_used; + + const byte* context_data; + int context_size; + int context_new_space_used; + int context_pointer_space_used; + int context_data_space_used; + int context_code_space_used; + int context_map_space_used; + int context_cell_space_used; + int context_property_cell_space_used; +}; + + +static SnapshotImpl* snapshot_impl_ = NULL; + + +bool Snapshot::HaveASnapshotToStartFrom() { + return snapshot_impl_ != NULL; +} + + +bool Snapshot::Initialize() { + if (!HaveASnapshotToStartFrom()) + return false; + + base::ElapsedTimer timer; + if (FLAG_profile_deserialization) { + timer.Start(); + } + SnapshotByteSource source(snapshot_impl_->data, snapshot_impl_->size); + Deserializer deserializer(&source); + deserializer.set_reservation(NEW_SPACE, snapshot_impl_->new_space_used); + deserializer.set_reservation(OLD_POINTER_SPACE, + snapshot_impl_->pointer_space_used); + deserializer.set_reservation(OLD_DATA_SPACE, + snapshot_impl_->data_space_used); + deserializer.set_reservation(CODE_SPACE, snapshot_impl_->code_space_used); + deserializer.set_reservation(MAP_SPACE, snapshot_impl_->map_space_used); + deserializer.set_reservation(CELL_SPACE, snapshot_impl_->cell_space_used); + deserializer.set_reservation(PROPERTY_CELL_SPACE, + snapshot_impl_->property_cell_space_used); + bool success = V8::Initialize(&deserializer); + if (FLAG_profile_deserialization) { + double ms = timer.Elapsed().InMillisecondsF(); + PrintF("[Snapshot loading and deserialization took %0.3f ms]\n", ms); + } + return success; +} + + +Handle<Context> Snapshot::NewContextFromSnapshot(Isolate* isolate) { + if (!HaveASnapshotToStartFrom()) + return Handle<Context>(); + + SnapshotByteSource source(snapshot_impl_->context_data, + snapshot_impl_->context_size); + Deserializer deserializer(&source); + deserializer.set_reservation(NEW_SPACE, + snapshot_impl_->context_new_space_used); + deserializer.set_reservation(OLD_POINTER_SPACE, + snapshot_impl_->context_pointer_space_used); + 
deserializer.set_reservation(OLD_DATA_SPACE, + snapshot_impl_->context_data_space_used); + deserializer.set_reservation(CODE_SPACE, + snapshot_impl_->context_code_space_used); + deserializer.set_reservation(MAP_SPACE, + snapshot_impl_->context_map_space_used); + deserializer.set_reservation(CELL_SPACE, + snapshot_impl_->context_cell_space_used); + deserializer.set_reservation(PROPERTY_CELL_SPACE, + snapshot_impl_-> + context_property_cell_space_used); + Object* root; + deserializer.DeserializePartial(isolate, &root); + CHECK(root->IsContext()); + return Handle<Context>(Context::cast(root)); +} + + +void SetSnapshotFromFile(StartupData* snapshot_blob) { + DCHECK(snapshot_blob); + DCHECK(snapshot_blob->data); + DCHECK(snapshot_blob->raw_size > 0); + DCHECK(!snapshot_impl_); + + snapshot_impl_ = new SnapshotImpl; + SnapshotByteSource source(reinterpret_cast<const byte*>(snapshot_blob->data), + snapshot_blob->raw_size); + + bool success = source.GetBlob(&snapshot_impl_->data, + &snapshot_impl_->size); + snapshot_impl_->new_space_used = source.GetInt(); + snapshot_impl_->pointer_space_used = source.GetInt(); + snapshot_impl_->data_space_used = source.GetInt(); + snapshot_impl_->code_space_used = source.GetInt(); + snapshot_impl_->map_space_used = source.GetInt(); + snapshot_impl_->cell_space_used = source.GetInt(); + snapshot_impl_->property_cell_space_used = source.GetInt(); + + success &= source.GetBlob(&snapshot_impl_->context_data, + &snapshot_impl_->context_size); + snapshot_impl_->context_new_space_used = source.GetInt(); + snapshot_impl_->context_pointer_space_used = source.GetInt(); + snapshot_impl_->context_data_space_used = source.GetInt(); + snapshot_impl_->context_code_space_used = source.GetInt(); + snapshot_impl_->context_map_space_used = source.GetInt(); + snapshot_impl_->context_cell_space_used = source.GetInt(); + snapshot_impl_->context_property_cell_space_used = source.GetInt(); + + DCHECK(success); +} + +} } // namespace v8::internal diff --git a/deps/v8/src/snapshot-source-sink.cc b/deps/v8/src/snapshot-source-sink.cc new file mode 100644 index 00000000000..2be14383fa6 --- /dev/null +++ b/deps/v8/src/snapshot-source-sink.cc @@ -0,0 +1,101 @@ +// Copyright 2014 the V8 project authors. All rights reserved. +// Use of this source code is governed by a BSD-style license that can be +// found in the LICENSE file. + + +#include "src/snapshot-source-sink.h" + +#include "src/base/logging.h" +#include "src/handles-inl.h" +#include "src/serialize.h" // for SerializerDeserializer::nop() in AtEOF() + + +namespace v8 { +namespace internal { + + +SnapshotByteSource::SnapshotByteSource(const byte* array, int length) + : data_(array), length_(length), position_(0) { +} + + +SnapshotByteSource::~SnapshotByteSource() { } + + +int32_t SnapshotByteSource::GetUnalignedInt() { + DCHECK(position_ < length_); // Require at least one byte left. 
+#if defined(V8_HOST_CAN_READ_UNALIGNED) && __BYTE_ORDER == __LITTLE_ENDIAN + int32_t answer = *reinterpret_cast<const int32_t*>(data_ + position_); +#else + int32_t answer = data_[position_]; + answer |= data_[position_ + 1] << 8; + answer |= data_[position_ + 2] << 16; + answer |= data_[position_ + 3] << 24; +#endif + return answer; +} + + +void SnapshotByteSource::CopyRaw(byte* to, int number_of_bytes) { + MemCopy(to, data_ + position_, number_of_bytes); + position_ += number_of_bytes; +} + + +void SnapshotByteSink::PutInt(uintptr_t integer, const char* description) { + DCHECK(integer < 1 << 22); + integer <<= 2; + int bytes = 1; + if (integer > 0xff) bytes = 2; + if (integer > 0xffff) bytes = 3; + integer |= bytes; + Put(static_cast<int>(integer & 0xff), "IntPart1"); + if (bytes > 1) Put(static_cast<int>((integer >> 8) & 0xff), "IntPart2"); + if (bytes > 2) Put(static_cast<int>((integer >> 16) & 0xff), "IntPart3"); +} + +void SnapshotByteSink::PutRaw(byte* data, int number_of_bytes, + const char* description) { + for (int i = 0; i < number_of_bytes; ++i) { + Put(data[i], description); + } +} + +void SnapshotByteSink::PutBlob(byte* data, int number_of_bytes, + const char* description) { + PutInt(number_of_bytes, description); + PutRaw(data, number_of_bytes, description); +} + + +bool SnapshotByteSource::AtEOF() { + if (0u + length_ - position_ > 2 * sizeof(uint32_t)) return false; + for (int x = position_; x < length_; x++) { + if (data_[x] != SerializerDeserializer::nop()) return false; + } + return true; +} + + +bool SnapshotByteSource::GetBlob(const byte** data, int* number_of_bytes) { + int size = GetInt(); + *number_of_bytes = size; + + if (position_ + size < length_) { + *data = &data_[position_]; + Advance(size); + return true; + } else { + Advance(length_ - position_); // proceed until end. + return false; + } +} + + +void DebugSnapshotSink::Put(byte b, const char* description) { + PrintF("%24s: %x\n", description, b); + sink_->Put(b, description); +} + +} // namespace v8::internal +} // namespace v8 diff --git a/deps/v8/src/snapshot-source-sink.h b/deps/v8/src/snapshot-source-sink.h new file mode 100644 index 00000000000..bda6455e8b9 --- /dev/null +++ b/deps/v8/src/snapshot-source-sink.h @@ -0,0 +1,125 @@ +// Copyright 2012 the V8 project authors. All rights reserved. +// Use of this source code is governed by a BSD-style license that can be +// found in the LICENSE file. + +#ifndef V8_SNAPSHOT_SOURCE_SINK_H_ +#define V8_SNAPSHOT_SOURCE_SINK_H_ + +#include "src/base/logging.h" +#include "src/utils.h" + +namespace v8 { +namespace internal { + + +/** + * Source to read snapshot and builtins files from. + * + * Note: Memory ownership remains with callee. + */ +class SnapshotByteSource V8_FINAL { + public: + SnapshotByteSource(const byte* array, int length); + ~SnapshotByteSource(); + + bool HasMore() { return position_ < length_; } + + int Get() { + DCHECK(position_ < length_); + return data_[position_++]; + } + + int32_t GetUnalignedInt(); + + void Advance(int by) { position_ += by; } + + void CopyRaw(byte* to, int number_of_bytes); + + inline int GetInt() { + // This way of variable-length encoding integers does not suffer from branch + // mispredictions. 
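+ // The low two bits of the first byte store the encoded length in bytes
+ // (see SnapshotByteSink::PutInt): e.g. PutInt(5, ...) emits the single
+ // byte (5 << 2) | 1 == 0x15, which is decoded here by masking to one
+ // byte and shifting right by two to recover 5.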
+ uint32_t answer = GetUnalignedInt(); + int bytes = answer & 3; + Advance(bytes); + uint32_t mask = 0xffffffffu; + mask >>= 32 - (bytes << 3); + answer &= mask; + answer >>= 2; + return answer; + } + + bool GetBlob(const byte** data, int* number_of_bytes); + + bool AtEOF(); + + int position() { return position_; } + + private: + const byte* data_; + int length_; + int position_; + + DISALLOW_COPY_AND_ASSIGN(SnapshotByteSource); +}; + + +/** + * Sink to write snapshot files to. + * + * Subclasses must implement actual storage or i/o. + */ +class SnapshotByteSink { + public: + virtual ~SnapshotByteSink() { } + virtual void Put(byte b, const char* description) = 0; + virtual void PutSection(int b, const char* description) { + DCHECK_LE(b, kMaxUInt8); + Put(static_cast<byte>(b), description); + } + void PutInt(uintptr_t integer, const char* description); + void PutRaw(byte* data, int number_of_bytes, const char* description); + void PutBlob(byte* data, int number_of_bytes, const char* description); + virtual int Position() = 0; +}; + + +class DummySnapshotSink : public SnapshotByteSink { + public: + DummySnapshotSink() : length_(0) {} + virtual ~DummySnapshotSink() {} + virtual void Put(byte b, const char* description) { length_++; } + virtual int Position() { return length_; } + + private: + int length_; +}; + + +// Wrap a SnapshotByteSink into a DebugSnapshotSink to get debugging output. +class DebugSnapshotSink : public SnapshotByteSink { + public: + explicit DebugSnapshotSink(SnapshotByteSink* chained) : sink_(chained) {} + virtual void Put(byte b, const char* description) V8_OVERRIDE; + virtual int Position() V8_OVERRIDE { return sink_->Position(); } + + private: + SnapshotByteSink* sink_; +}; + + +class ListSnapshotSink : public i::SnapshotByteSink { + public: + explicit ListSnapshotSink(i::List<byte>* data) : data_(data) {} + virtual void Put(byte b, const char* description) V8_OVERRIDE { + data_->Add(b); + } + virtual int Position() V8_OVERRIDE { return data_->length(); } + + private: + i::List<byte>* data_; +}; + +} // namespace v8::internal +} // namespace v8 + +#endif // V8_SNAPSHOT_SOURCE_SINK_H_ diff --git a/deps/v8/src/snapshot.h b/deps/v8/src/snapshot.h index d5271b2dac7..b785cf57008 100644 --- a/deps/v8/src/snapshot.h +++ b/deps/v8/src/snapshot.h @@ -2,7 +2,7 @@ // Use of this source code is governed by a BSD-style license that can be // found in the LICENSE file. -#include "isolate.h" +#include "src/isolate.h" #ifndef V8_SNAPSHOT_H_ #define V8_SNAPSHOT_H_ @@ -12,23 +12,16 @@ namespace internal { class Snapshot { public: - // Initialize the VM from the given snapshot file. If snapshot_file is - // NULL, use the internal snapshot instead. Returns false if no snapshot + // Initialize the VM from the internal snapshot. Returns false if no snapshot // could be found. - static bool Initialize(const char* snapshot_file = NULL); + static bool Initialize(); static bool HaveASnapshotToStartFrom(); // Create a new context using the internal partial snapshot. static Handle<Context> NewContextFromSnapshot(Isolate* isolate); - // Returns whether or not the snapshot is enabled. - static bool IsEnabled() { return size_ != 0; } - - // Write snapshot to the given file. Returns true if snapshot was written - // successfully. - static bool WriteToFile(const char* snapshot_file); - + // These methods support COMPRESS_STARTUP_DATA_BZ2. 
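+ // (With compression disabled, size() and raw_size() report the same
+ // value; with bz2-compressed startup data, size() is the stored size and
+ // raw_size() the decompressed size.)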
static const byte* data() { return data_; } static int size() { return size_; } static int raw_size() { return raw_size_; } @@ -72,6 +65,10 @@ class Snapshot { DISALLOW_IMPLICIT_CONSTRUCTORS(Snapshot); }; +#ifdef V8_USE_EXTERNAL_STARTUP_DATA +void SetSnapshotFromFile(StartupData* snapshot_blob); +#endif + } } // namespace v8::internal #endif // V8_SNAPSHOT_H_ diff --git a/deps/v8/src/splay-tree-inl.h b/deps/v8/src/splay-tree-inl.h index fe40f276cd5..6c7b4f404ca 100644 --- a/deps/v8/src/splay-tree-inl.h +++ b/deps/v8/src/splay-tree-inl.h @@ -5,7 +5,7 @@ #ifndef V8_SPLAY_TREE_INL_H_ #define V8_SPLAY_TREE_INL_H_ -#include "splay-tree.h" +#include "src/splay-tree.h" namespace v8 { namespace internal { diff --git a/deps/v8/src/splay-tree.h b/deps/v8/src/splay-tree.h index 77f05b010e2..30e5d6787f3 100644 --- a/deps/v8/src/splay-tree.h +++ b/deps/v8/src/splay-tree.h @@ -5,7 +5,7 @@ #ifndef V8_SPLAY_TREE_H_ #define V8_SPLAY_TREE_H_ -#include "allocation.h" +#include "src/allocation.h" namespace v8 { namespace internal { @@ -35,8 +35,8 @@ class SplayTree { class Locator; - SplayTree(AllocationPolicy allocator = AllocationPolicy()) - : root_(NULL), allocator_(allocator) { } + explicit SplayTree(AllocationPolicy allocator = AllocationPolicy()) + : root_(NULL), allocator_(allocator) {} ~SplayTree(); INLINE(void* operator new(size_t size, diff --git a/deps/v8/src/string-iterator.js b/deps/v8/src/string-iterator.js new file mode 100644 index 00000000000..7222885a56f --- /dev/null +++ b/deps/v8/src/string-iterator.js @@ -0,0 +1,106 @@ +// Copyright 2014 the V8 project authors. All rights reserved. +// Use of this source code is governed by a BSD-style license that can be +// found in the LICENSE file. + +'use strict'; + + +// This file relies on the fact that the following declaration has been made +// in runtime.js: +// var $String = global.String; + + +var stringIteratorIteratedStringSymbol = + GLOBAL_PRIVATE("StringIterator#iteratedString"); +var stringIteratorNextIndexSymbol = GLOBAL_PRIVATE("StringIterator#next"); + + +function StringIterator() {} + + +// 21.1.5.1 CreateStringIterator Abstract Operation +function CreateStringIterator(string) { + var s = TO_STRING_INLINE(string); + var iterator = new StringIterator; + SET_PRIVATE(iterator, stringIteratorIteratedStringSymbol, s); + SET_PRIVATE(iterator, stringIteratorNextIndexSymbol, 0); + return iterator; +} + + +// 21.1.5.2.2 %StringIteratorPrototype%[@@iterator] +function StringIteratorIterator() { + return this; +} + + +// 21.1.5.2.1 %StringIteratorPrototype%.next( ) +function StringIteratorNext() { + var iterator = ToObject(this); + + if (!HAS_PRIVATE(iterator, stringIteratorIteratedStringSymbol)) { + throw MakeTypeError('incompatible_method_receiver', + ['String Iterator.prototype.next']); + } + + var s = GET_PRIVATE(iterator, stringIteratorIteratedStringSymbol); + if (IS_UNDEFINED(s)) { + return CreateIteratorResultObject(UNDEFINED, true); + } + + var position = GET_PRIVATE(iterator, stringIteratorNextIndexSymbol); + var length = TO_UINT32(s.length); + + if (position >= length) { + SET_PRIVATE(iterator, stringIteratorIteratedStringSymbol, UNDEFINED); + return CreateIteratorResultObject(UNDEFINED, true); + } + + var first = %_StringCharCodeAt(s, position); + var resultString = %_StringCharFromCode(first); + position++; + + if (first >= 0xD800 && first <= 0xDBFF && position < length) { + var second = %_StringCharCodeAt(s, position); + if (second >= 0xDC00 && second <= 0xDFFF) { + resultString += %_StringCharFromCode(second); + position++; + } + } 
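+ // resultString now holds one whole code point: for "a\uD83D\uDE00" the
+ // first next() yields "a" and the second yields the complete surrogate
+ // pair "\uD83D\uDE00" as a single result value.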
+ + SET_PRIVATE(iterator, stringIteratorNextIndexSymbol, position); + + return CreateIteratorResultObject(resultString, false); +} + + +function SetUpStringIterator() { + %CheckIsBootstrapping(); + + %FunctionSetPrototype(StringIterator, new $Object()); + %FunctionSetInstanceClassName(StringIterator, 'String Iterator'); + + InstallFunctions(StringIterator.prototype, DONT_ENUM, $Array( + 'next', StringIteratorNext + )); + %FunctionSetName(StringIteratorIterator, '[Symbol.iterator]'); + %AddNamedProperty(StringIterator.prototype, symbolIterator, + StringIteratorIterator, DONT_ENUM); +} +SetUpStringIterator(); + + +// 21.1.3.27 String.prototype [ @@iterator ]( ) +function StringPrototypeIterator() { + return CreateStringIterator(this); +} + + +function ExtendStringPrototypeWithIterator() { + %CheckIsBootstrapping(); + + %FunctionSetName(StringPrototypeIterator, '[Symbol.iterator]'); + %AddNamedProperty($String.prototype, symbolIterator, + StringPrototypeIterator, DONT_ENUM); +} +ExtendStringPrototypeWithIterator(); diff --git a/deps/v8/src/string-search.cc b/deps/v8/src/string-search.cc index 38dacc9cc5d..0c18762750c 100644 --- a/deps/v8/src/string-search.cc +++ b/deps/v8/src/string-search.cc @@ -2,8 +2,9 @@ // Use of this source code is governed by a BSD-style license that can be // found in the LICENSE file. -#include "v8.h" -#include "string-search.h" +#include "src/v8.h" + +#include "src/string-search.h" namespace v8 { namespace internal { diff --git a/deps/v8/src/string-search.h b/deps/v8/src/string-search.h index 09bc36ef82e..ef47db6241d 100644 --- a/deps/v8/src/string-search.h +++ b/deps/v8/src/string-search.h @@ -84,7 +84,7 @@ class StringSearch : private StringSearchBase { // ASCII needle. return kAsciiAlphabetSize; } else { - ASSERT(sizeof(PatternChar) == 2); + DCHECK(sizeof(PatternChar) == 2); // UC16 needle. return kUC16AlphabetSize; } @@ -196,7 +196,7 @@ int StringSearch<PatternChar, SubjectChar>::SingleCharSearch( StringSearch<PatternChar, SubjectChar>* search, Vector<const SubjectChar> subject, int index) { - ASSERT_EQ(1, search->pattern_.length()); + DCHECK_EQ(1, search->pattern_.length()); PatternChar pattern_first_char = search->pattern_[0]; int i = index; if (sizeof(SubjectChar) == 1 && sizeof(PatternChar) == 1) { @@ -230,7 +230,7 @@ template <typename PatternChar, typename SubjectChar> inline bool CharCompare(const PatternChar* pattern, const SubjectChar* subject, int length) { - ASSERT(length > 0); + DCHECK(length > 0); int pos = 0; do { if (pattern[pos] != subject[pos]) { @@ -249,7 +249,7 @@ int StringSearch<PatternChar, SubjectChar>::LinearSearch( Vector<const SubjectChar> subject, int index) { Vector<const PatternChar> pattern = search->pattern_; - ASSERT(pattern.length() > 1); + DCHECK(pattern.length() > 1); int pattern_length = pattern.length(); PatternChar pattern_first_char = pattern[0]; int i = index; diff --git a/deps/v8/src/string-stream.cc b/deps/v8/src/string-stream.cc index 25e340b4a56..42c2af7bdff 100644 --- a/deps/v8/src/string-stream.cc +++ b/deps/v8/src/string-stream.cc @@ -2,9 +2,10 @@ // Use of this source code is governed by a BSD-style license that can be // found in the LICENSE file. 
-#include "string-stream.h" +#include "src/string-stream.h" -#include "handles-inl.h" +#include "src/handles-inl.h" +#include "src/prototype.h" namespace v8 { namespace internal { @@ -17,16 +18,9 @@ char* HeapStringAllocator::allocate(unsigned bytes) { } -NoAllocationStringAllocator::NoAllocationStringAllocator(char* memory, - unsigned size) { - size_ = size; - space_ = memory; -} - - bool StringStream::Put(char c) { if (full()) return false; - ASSERT(length_ < capacity_); + DCHECK(length_ < capacity_); // Since the trailing '\0' is not accounted for in length_ fullness is // indicated by a difference of 1 between length_ and capacity_. Thus when // reaching a difference of 2 we need to grow the buffer. @@ -38,7 +32,7 @@ bool StringStream::Put(char c) { buffer_ = new_buffer; } else { // Reached the end of the available buffer. - ASSERT(capacity_ >= 5); + DCHECK(capacity_ >= 5); length_ = capacity_ - 1; // Indicate fullness of the stream. buffer_[length_ - 4] = '.'; buffer_[length_ - 3] = '.'; @@ -96,26 +90,26 @@ void StringStream::Add(Vector<const char> format, Vector<FmtElm> elms) { FmtElm current = elms[elm++]; switch (type) { case 's': { - ASSERT_EQ(FmtElm::C_STR, current.type_); + DCHECK_EQ(FmtElm::C_STR, current.type_); const char* value = current.data_.u_c_str_; Add(value); break; } case 'w': { - ASSERT_EQ(FmtElm::LC_STR, current.type_); + DCHECK_EQ(FmtElm::LC_STR, current.type_); Vector<const uc16> value = *current.data_.u_lc_str_; for (int i = 0; i < value.length(); i++) Put(static_cast<char>(value[i])); break; } case 'o': { - ASSERT_EQ(FmtElm::OBJ, current.type_); + DCHECK_EQ(FmtElm::OBJ, current.type_); Object* obj = current.data_.u_obj_; PrintObject(obj); break; } case 'k': { - ASSERT_EQ(FmtElm::INT, current.type_); + DCHECK_EQ(FmtElm::INT, current.type_); int value = current.data_.u_int_; if (0x20 <= value && value <= 0x7F) { Put(value); @@ -129,21 +123,30 @@ void StringStream::Add(Vector<const char> format, Vector<FmtElm> elms) { case 'i': case 'd': case 'u': case 'x': case 'c': case 'X': { int value = current.data_.u_int_; EmbeddedVector<char, 24> formatted; - int length = OS::SNPrintF(formatted, temp.start(), value); + int length = SNPrintF(formatted, temp.start(), value); Add(Vector<const char>(formatted.start(), length)); break; } case 'f': case 'g': case 'G': case 'e': case 'E': { double value = current.data_.u_double_; - EmbeddedVector<char, 28> formatted; - OS::SNPrintF(formatted, temp.start(), value); - Add(formatted.start()); + int inf = std::isinf(value); + if (inf == -1) { + Add("-inf"); + } else if (inf == 1) { + Add("inf"); + } else if (std::isnan(value)) { + Add("nan"); + } else { + EmbeddedVector<char, 28> formatted; + SNPrintF(formatted, temp.start(), value); + Add(formatted.start()); + } break; } case 'p': { void* value = current.data_.u_pointer_; EmbeddedVector<char, 20> formatted; - OS::SNPrintF(formatted, temp.start(), value); + SNPrintF(formatted, temp.start(), value); Add(formatted.start()); break; } @@ -154,7 +157,7 @@ void StringStream::Add(Vector<const char> format, Vector<FmtElm> elms) { } // Verify that the buffer is 0-terminated - ASSERT(buffer_[length_] == '\0'); + DCHECK(buffer_[length_] == '\0'); } @@ -237,7 +240,7 @@ void StringStream::Add(const char* format, FmtElm arg0, FmtElm arg1, SmartArrayPointer<const char> StringStream::ToCString() const { char* str = NewArray<char>(length_ + 1); - OS::MemCopy(str, buffer_, length_); + MemCopy(str, buffer_, length_); str[length_] = '\0'; return SmartArrayPointer<const char>(str); } @@ -348,7 +351,8 @@ 
void StringStream::PrintUsingMap(JSObject* js_object) { key->ShortPrint(); } Add(": "); - Object* value = js_object->RawFastPropertyAt(descs->GetFieldIndex(i)); + FieldIndex index = FieldIndex::ForDescriptor(map, i); + Object* value = js_object->RawFastPropertyAt(index); Add("%o\n", value); } } @@ -505,11 +509,11 @@ void StringStream::PrintPrototype(JSFunction* fun, Object* receiver) { Object* name = fun->shared()->name(); bool print_name = false; Isolate* isolate = fun->GetIsolate(); - for (Object* p = receiver; - p != isolate->heap()->null_value(); - p = p->GetPrototype(isolate)) { - if (p->IsJSObject()) { - Object* key = JSObject::cast(p)->SlowReverseLookup(fun); + for (PrototypeIterator iter(isolate, receiver, + PrototypeIterator::START_AT_RECEIVER); + !iter.IsAtEnd(); iter.Advance()) { + if (iter.GetCurrent()->IsJSObject()) { + Object* key = JSObject::cast(iter.GetCurrent())->SlowReverseLookup(fun); if (key != isolate->heap()->undefined_value()) { if (!name->IsString() || !key->IsString() || @@ -546,7 +550,7 @@ char* HeapStringAllocator::grow(unsigned* bytes) { if (new_space == NULL) { return space_; } - OS::MemCopy(new_space, space_, *bytes); + MemCopy(new_space, space_, *bytes); *bytes = new_bytes; DeleteArray(space_); space_ = new_space; @@ -554,12 +558,4 @@ char* HeapStringAllocator::grow(unsigned* bytes) { } -// Only grow once to the maximum allowable size. -char* NoAllocationStringAllocator::grow(unsigned* bytes) { - ASSERT(size_ >= *bytes); - *bytes = size_; - return space_; -} - - } } // namespace v8::internal diff --git a/deps/v8/src/string-stream.h b/deps/v8/src/string-stream.h index ecc0e80f5d6..e2475ff3b39 100644 --- a/deps/v8/src/string-stream.h +++ b/deps/v8/src/string-stream.h @@ -5,7 +5,7 @@ #ifndef V8_STRING_STREAM_H_ #define V8_STRING_STREAM_H_ -#include "handles.h" +#include "src/handles.h" namespace v8 { namespace internal { @@ -35,21 +35,6 @@ class HeapStringAllocator V8_FINAL : public StringAllocator { }; -// Allocator for use when no new c++ heap allocation is allowed. -// Given a preallocated buffer up front and does no allocation while -// building message. -class NoAllocationStringAllocator V8_FINAL : public StringAllocator { - public: - NoAllocationStringAllocator(char* memory, unsigned size); - virtual char* allocate(unsigned bytes) V8_OVERRIDE { return space_; } - virtual char* grow(unsigned* bytes) V8_OVERRIDE; - - private: - unsigned size_; - char* space_; -}; - - class FmtElm V8_FINAL { public: FmtElm(int value) : type_(INT) { // NOLINT @@ -168,31 +153,6 @@ class StringStream V8_FINAL { DISALLOW_IMPLICIT_CONSTRUCTORS(StringStream); }; - -// Utility class to print a list of items to a stream, divided by a separator. 
-class SimpleListPrinter V8_FINAL { - public: - explicit SimpleListPrinter(StringStream* stream, char separator = ',') { - separator_ = separator; - stream_ = stream; - first_ = true; - } - - void Add(const char* str) { - if (first_) { - first_ = false; - } else { - stream_->Put(separator_); - } - stream_->Add(str); - } - - private: - bool first_; - char separator_; - StringStream* stream_; -}; - } } // namespace v8::internal #endif // V8_STRING_STREAM_H_ diff --git a/deps/v8/src/string.js b/deps/v8/src/string.js index 9c904278353..ae65264d4a3 100644 --- a/deps/v8/src/string.js +++ b/deps/v8/src/string.js @@ -61,13 +61,13 @@ function StringCharCodeAt(pos) { // ECMA-262, section 15.5.4.6 -function StringConcat() { +function StringConcat(other /* and more */) { // length == 1 CHECK_OBJECT_COERCIBLE(this, "String.prototype.concat"); var len = %_ArgumentsLength(); var this_as_string = TO_STRING_INLINE(this); if (len === 1) { - return this_as_string + %_Arguments(0); + return this_as_string + other; } var parts = new InternalArray(len + 1); parts[0] = this_as_string; @@ -78,12 +78,9 @@ function StringConcat() { return %StringBuilderConcat(parts, len + 1, ""); } -// Match ES3 and Safari -%FunctionSetLength(StringConcat, 1); - // ECMA-262 section 15.5.4.7 -function StringIndexOf(pattern /* position */) { // length == 1 +function StringIndexOfJS(pattern /* position */) { // length == 1 CHECK_OBJECT_COERCIBLE(this, "String.prototype.indexOf"); var subject = TO_STRING_INLINE(this); @@ -100,7 +97,7 @@ function StringIndexOf(pattern /* position */) { // length == 1 // ECMA-262 section 15.5.4.8 -function StringLastIndexOf(pat /* position */) { // length == 1 +function StringLastIndexOfJS(pat /* position */) { // length == 1 CHECK_OBJECT_COERCIBLE(this, "String.prototype.lastIndexOf"); var sub = TO_STRING_INLINE(this); @@ -131,7 +128,7 @@ function StringLastIndexOf(pat /* position */) { // length == 1 // // This function is implementation specific. For now, we do not // do anything locale specific. -function StringLocaleCompare(other) { +function StringLocaleCompareJS(other) { CHECK_OBJECT_COERCIBLE(this, "String.prototype.localeCompare"); return %StringLocaleCompare(TO_STRING_INLINE(this), @@ -140,7 +137,7 @@ function StringLocaleCompare(other) { // ECMA-262 section 15.5.4.10 -function StringMatch(regexp) { +function StringMatchJS(regexp) { CHECK_OBJECT_COERCIBLE(this, "String.prototype.match"); var subject = TO_STRING_INLINE(this); @@ -170,7 +167,7 @@ var NORMALIZATION_FORMS = ['NFC', 'NFD', 'NFKC', 'NFKD']; // For now we do nothing, as proper normalization requires big tables. // If Intl is enabled, then i18n.js will override it and provide the // proper functionality. -function StringNormalize(form) { +function StringNormalizeJS(form) { CHECK_OBJECT_COERCIBLE(this, "String.prototype.normalize"); var form = form ?
TO_STRING_INLINE(form) : 'NFC'; @@ -585,7 +582,7 @@ function StringSlice(start, end) { // ECMA-262 section 15.5.4.14 -function StringSplit(separator, limit) { +function StringSplitJS(separator, limit) { CHECK_OBJECT_COERCIBLE(this, "String.prototype.split"); var subject = TO_STRING_INLINE(this); @@ -618,8 +615,6 @@ function StringSplit(separator, limit) { } -var ArrayPushBuiltin = $Array.prototype.push; - function StringSplitOnRegExp(subject, separator, limit, length) { if (length === 0) { if (DoRegExpExec(separator, subject, 0, 0) != null) { @@ -637,15 +632,13 @@ function StringSplitOnRegExp(subject, separator, limit, length) { while (true) { if (startIndex === length) { - %_CallFunction(result, %_SubString(subject, currentIndex, length), - ArrayPushBuiltin); + result[result.length] = %_SubString(subject, currentIndex, length); break; } var matchInfo = DoRegExpExec(separator, subject, startIndex); if (matchInfo == null || length === (startMatch = matchInfo[CAPTURE0])) { - %_CallFunction(result, %_SubString(subject, currentIndex, length), - ArrayPushBuiltin); + result[result.length] = %_SubString(subject, currentIndex, length); break; } var endIndex = matchInfo[CAPTURE1]; @@ -656,8 +649,7 @@ function StringSplitOnRegExp(subject, separator, limit, length) { continue; } - %_CallFunction(result, %_SubString(subject, currentIndex, startMatch), - ArrayPushBuiltin); + result[result.length] = %_SubString(subject, currentIndex, startMatch); if (result.length === limit) break; @@ -666,10 +658,9 @@ function StringSplitOnRegExp(subject, separator, limit, length) { var start = matchInfo[i++]; var end = matchInfo[i++]; if (end != -1) { - %_CallFunction(result, %_SubString(subject, start, end), - ArrayPushBuiltin); + result[result.length] = %_SubString(subject, start, end); } else { - %_CallFunction(result, UNDEFINED, ArrayPushBuiltin); + result[result.length] = UNDEFINED; } if (result.length === limit) break outer_loop; } @@ -715,7 +706,7 @@ function StringSubstring(start, end) { } -// This is not a part of ECMA-262. +// ES6 draft, revision 26 (2014-07-18), section B.2.3.1 function StringSubstr(start, n) { CHECK_OBJECT_COERCIBLE(this, "String.prototype.substr"); @@ -756,7 +747,7 @@ function StringSubstr(start, n) { // ECMA-262, 15.5.4.16 -function StringToLowerCase() { +function StringToLowerCaseJS() { CHECK_OBJECT_COERCIBLE(this, "String.prototype.toLowerCase"); return %StringToLowerCase(TO_STRING_INLINE(this)); @@ -772,7 +763,7 @@ function StringToLocaleLowerCase() { // ECMA-262, 15.5.4.18 -function StringToUpperCase() { +function StringToUpperCaseJS() { CHECK_OBJECT_COERCIBLE(this, "String.prototype.toUpperCase"); return %StringToUpperCase(TO_STRING_INLINE(this)); @@ -787,7 +778,7 @@ function StringToLocaleUpperCase() { } // ES5, 15.5.4.20 -function StringTrim() { +function StringTrimJS() { CHECK_OBJECT_COERCIBLE(this, "String.prototype.trim"); return %StringTrim(TO_STRING_INLINE(this), true, true); @@ -836,78 +827,99 @@ function StringFromCharCode(code) { } -// Helper function for very basic XSS protection. +// ES6 draft, revision 26 (2014-07-18), section B.2.3.2.1 function HtmlEscape(str) { - return TO_STRING_INLINE(str).replace(/</g, "<") - .replace(/>/g, ">") - .replace(/"/g, """) - .replace(/'/g, "'"); -} - - -// Compatibility support for KJS. -// Tested by mozilla/js/tests/js1_5/Regress/regress-276103.js. 
-function StringLink(s) { - return "<a href=\"" + HtmlEscape(s) + "\">" + this + "</a>"; + return TO_STRING_INLINE(str).replace(/"/g, """); } +// ES6 draft, revision 26 (2014-07-18), section B.2.3.2 function StringAnchor(name) { + CHECK_OBJECT_COERCIBLE(this, "String.prototype.anchor"); return "<a name=\"" + HtmlEscape(name) + "\">" + this + "</a>"; } -function StringFontcolor(color) { - return "<font color=\"" + HtmlEscape(color) + "\">" + this + "</font>"; -} - - -function StringFontsize(size) { - return "<font size=\"" + HtmlEscape(size) + "\">" + this + "</font>"; -} - - +// ES6 draft, revision 26 (2014-07-18), section B.2.3.3 function StringBig() { + CHECK_OBJECT_COERCIBLE(this, "String.prototype.big"); return "<big>" + this + "</big>"; } +// ES6 draft, revision 26 (2014-07-18), section B.2.3.4 function StringBlink() { + CHECK_OBJECT_COERCIBLE(this, "String.prototype.blink"); return "<blink>" + this + "</blink>"; } +// ES6 draft, revision 26 (2014-07-18), section B.2.3.5 function StringBold() { + CHECK_OBJECT_COERCIBLE(this, "String.prototype.bold"); return "<b>" + this + "</b>"; } +// ES6 draft, revision 26 (2014-07-18), section B.2.3.6 function StringFixed() { + CHECK_OBJECT_COERCIBLE(this, "String.prototype.fixed"); return "<tt>" + this + "</tt>"; } +// ES6 draft, revision 26 (2014-07-18), section B.2.3.7 +function StringFontcolor(color) { + CHECK_OBJECT_COERCIBLE(this, "String.prototype.fontcolor"); + return "<font color=\"" + HtmlEscape(color) + "\">" + this + "</font>"; +} + + +// ES6 draft, revision 26 (2014-07-18), section B.2.3.8 +function StringFontsize(size) { + CHECK_OBJECT_COERCIBLE(this, "String.prototype.fontsize"); + return "<font size=\"" + HtmlEscape(size) + "\">" + this + "</font>"; +} + + +// ES6 draft, revision 26 (2014-07-18), section B.2.3.9 function StringItalics() { + CHECK_OBJECT_COERCIBLE(this, "String.prototype.italics"); return "<i>" + this + "</i>"; } +// ES6 draft, revision 26 (2014-07-18), section B.2.3.10 +function StringLink(s) { + CHECK_OBJECT_COERCIBLE(this, "String.prototype.link"); + return "<a href=\"" + HtmlEscape(s) + "\">" + this + "</a>"; +} + + +// ES6 draft, revision 26 (2014-07-18), section B.2.3.11 function StringSmall() { + CHECK_OBJECT_COERCIBLE(this, "String.prototype.small"); return "<small>" + this + "</small>"; } +// ES6 draft, revision 26 (2014-07-18), section B.2.3.12 function StringStrike() { + CHECK_OBJECT_COERCIBLE(this, "String.prototype.strike"); return "<strike>" + this + "</strike>"; } +// ES6 draft, revision 26 (2014-07-18), section B.2.3.13 function StringSub() { + CHECK_OBJECT_COERCIBLE(this, "String.prototype.sub"); return "<sub>" + this + "</sub>"; } +// ES6 draft, revision 26 (2014-07-18), section B.2.3.14 function StringSup() { + CHECK_OBJECT_COERCIBLE(this, "String.prototype.sup"); return "<sup>" + this + "</sup>"; } @@ -921,7 +933,7 @@ function SetUpString() { %FunctionSetPrototype($String, new $String()); // Set up the constructor property on the String prototype object. - %SetProperty($String.prototype, "constructor", $String, DONT_ENUM); + %AddNamedProperty($String.prototype, "constructor", $String, DONT_ENUM); // Set up the non-enumerable functions on the String object. 
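// The renamed *JS implementations are still installed under their original
// property names (e.g. "indexOf" is backed by StringIndexOfJS below), so
// the suffix is not observable from script.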
InstallFunctions($String, DONT_ENUM, $Array( @@ -935,22 +947,22 @@ function SetUpString() { "charAt", StringCharAt, "charCodeAt", StringCharCodeAt, "concat", StringConcat, - "indexOf", StringIndexOf, - "lastIndexOf", StringLastIndexOf, - "localeCompare", StringLocaleCompare, - "match", StringMatch, - "normalize", StringNormalize, + "indexOf", StringIndexOfJS, + "lastIndexOf", StringLastIndexOfJS, + "localeCompare", StringLocaleCompareJS, + "match", StringMatchJS, + "normalize", StringNormalizeJS, "replace", StringReplace, "search", StringSearch, "slice", StringSlice, - "split", StringSplit, + "split", StringSplitJS, "substring", StringSubstring, "substr", StringSubstr, - "toLowerCase", StringToLowerCase, + "toLowerCase", StringToLowerCaseJS, "toLocaleLowerCase", StringToLocaleLowerCase, - "toUpperCase", StringToUpperCase, + "toUpperCase", StringToUpperCaseJS, "toLocaleUpperCase", StringToLocaleUpperCase, - "trim", StringTrim, + "trim", StringTrimJS, "trimLeft", StringTrimLeft, "trimRight", StringTrimRight, "link", StringLink, diff --git a/deps/v8/src/strtod.cc b/deps/v8/src/strtod.cc index aac74199ab3..64b7a29ef03 100644 --- a/deps/v8/src/strtod.cc +++ b/deps/v8/src/strtod.cc @@ -5,12 +5,14 @@ #include <stdarg.h> #include <cmath> -#include "globals.h" -#include "utils.h" -#include "strtod.h" -#include "bignum.h" -#include "cached-powers.h" -#include "double.h" +#include "src/v8.h" + +#include "src/bignum.h" +#include "src/cached-powers.h" +#include "src/double.h" +#include "src/globals.h" +#include "src/strtod.h" +#include "src/utils.h" namespace v8 { namespace internal { @@ -97,7 +99,7 @@ static void TrimToMaxSignificantDigits(Vector<const char> buffer, } // The input buffer has been trimmed. Therefore the last digit must be // different from '0'. - ASSERT(buffer[buffer.length() - 1] != '0'); + DCHECK(buffer[buffer.length() - 1] != '0'); // Set the last digit to be non-zero. This is sufficient to guarantee // correct rounding. significant_buffer[kMaxSignificantDecimalDigits - 1] = '1'; @@ -117,7 +119,7 @@ static uint64_t ReadUint64(Vector<const char> buffer, int i = 0; while (i < buffer.length() && result <= (kMaxUint64 / 10 - 1)) { int digit = buffer[i++] - '0'; - ASSERT(0 <= digit && digit <= 9); + DCHECK(0 <= digit && digit <= 9); result = 10 * result + digit; } *number_of_read_digits = i; @@ -153,7 +155,8 @@ static void ReadDiyFp(Vector<const char> buffer, static bool DoubleStrtod(Vector<const char> trimmed, int exponent, double* result) { -#if (V8_TARGET_ARCH_IA32 || defined(USE_SIMULATOR)) && !defined(_MSC_VER) +#if (V8_TARGET_ARCH_IA32 || V8_TARGET_ARCH_X87 || defined(USE_SIMULATOR)) && \ + !defined(_MSC_VER) // On x86 the floating-point stack can be 64 or 80 bits wide. If it is // 80 bits wide (as is the case on Linux) then double-rounding occurs and the // result is not accurate. @@ -174,14 +177,14 @@ static bool DoubleStrtod(Vector<const char> trimmed, if (exponent < 0 && -exponent < kExactPowersOfTenSize) { // 10^-exponent fits into a double. *result = static_cast<double>(ReadUint64(trimmed, &read_digits)); - ASSERT(read_digits == trimmed.length()); + DCHECK(read_digits == trimmed.length()); *result /= exact_powers_of_ten[-exponent]; return true; } if (0 <= exponent && exponent < kExactPowersOfTenSize) { // 10^exponent fits into a double. 
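// e.g. for "12345e10" both 12345 and 10^10 are exactly representable
// doubles, so a single multiplication gives the correctly rounded result.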
*result = static_cast<double>(ReadUint64(trimmed, &read_digits)); - ASSERT(read_digits == trimmed.length()); + DCHECK(read_digits == trimmed.length()); *result *= exact_powers_of_ten[exponent]; return true; } @@ -193,7 +196,7 @@ static bool DoubleStrtod(Vector<const char> trimmed, // 10^remaining_digits. As a result the remaining exponent now fits // into a double too. *result = static_cast<double>(ReadUint64(trimmed, &read_digits)); - ASSERT(read_digits == trimmed.length()); + DCHECK(read_digits == trimmed.length()); *result *= exact_powers_of_ten[remaining_digits]; *result *= exact_powers_of_ten[exponent - remaining_digits]; return true; @@ -206,11 +209,11 @@ static bool DoubleStrtod(Vector<const char> trimmed, // Returns 10^exponent as an exact DiyFp. // The given exponent must be in the range [1; kDecimalExponentDistance[. static DiyFp AdjustmentPowerOfTen(int exponent) { - ASSERT(0 < exponent); - ASSERT(exponent < PowersOfTenCache::kDecimalExponentDistance); + DCHECK(0 < exponent); + DCHECK(exponent < PowersOfTenCache::kDecimalExponentDistance); // Simply hardcode the remaining powers for the given decimal exponent // distance. - ASSERT(PowersOfTenCache::kDecimalExponentDistance == 8); + DCHECK(PowersOfTenCache::kDecimalExponentDistance == 8); switch (exponent) { case 1: return DiyFp(V8_2PART_UINT64_C(0xa0000000, 00000000), -60); case 2: return DiyFp(V8_2PART_UINT64_C(0xc8000000, 00000000), -57); @@ -244,13 +247,13 @@ static bool DiyFpStrtod(Vector<const char> buffer, const int kDenominator = 1 << kDenominatorLog; // Move the remaining decimals into the exponent. exponent += remaining_decimals; - int error = (remaining_decimals == 0 ? 0 : kDenominator / 2); + int64_t error = (remaining_decimals == 0 ? 0 : kDenominator / 2); int old_e = input.e(); input.Normalize(); error <<= old_e - input.e(); - ASSERT(exponent <= PowersOfTenCache::kMaxDecimalExponent); + DCHECK(exponent <= PowersOfTenCache::kMaxDecimalExponent); if (exponent < PowersOfTenCache::kMinDecimalExponent) { *result = 0.0; return true; @@ -268,7 +271,7 @@ static bool DiyFpStrtod(Vector<const char> buffer, if (kMaxUint64DecimalDigits - buffer.length() >= adjustment_exponent) { // The product of input with the adjustment power fits into a 64 bit // integer. - ASSERT(DiyFp::kSignificandSize == 64); + DCHECK(DiyFp::kSignificandSize == 64); } else { // The adjustment power is exact. There is hence only an error of 0.5. error += kDenominator / 2; @@ -310,8 +313,8 @@ static bool DiyFpStrtod(Vector<const char> buffer, precision_digits_count -= shift_amount; } // We use uint64_ts now. This only works if the DiyFp uses uint64_ts too. - ASSERT(DiyFp::kSignificandSize == 64); - ASSERT(precision_digits_count < 64); + DCHECK(DiyFp::kSignificandSize == 64); + DCHECK(precision_digits_count < 64); uint64_t one64 = 1; uint64_t precision_bits_mask = (one64 << precision_digits_count) - 1; uint64_t precision_bits = input.f() & precision_bits_mask; @@ -355,14 +358,14 @@ static double BignumStrtod(Vector<const char> buffer, DiyFp upper_boundary = Double(guess).UpperBoundary(); - ASSERT(buffer.length() + exponent <= kMaxDecimalPower + 1); - ASSERT(buffer.length() + exponent > kMinDecimalPower); - ASSERT(buffer.length() <= kMaxSignificantDecimalDigits); + DCHECK(buffer.length() + exponent <= kMaxDecimalPower + 1); + DCHECK(buffer.length() + exponent > kMinDecimalPower); + DCHECK(buffer.length() <= kMaxSignificantDecimalDigits); // Make sure that the Bignum will be able to hold all our numbers. 
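// ((kMaxDecimalPower + 1) * 333 / 100 below over-approximates the needed
// bit count (kMaxDecimalPower + 1) * log2(10), as log2(10) == 3.3219....)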
// Our Bignum implementation has a separate field for exponents. Shifts will // consume at most one bigit (< 64 bits). // ln(10) == 3.3219... - ASSERT(((kMaxDecimalPower + 1) * 333 / 100) < Bignum::kMaxSignificantBits); + DCHECK(((kMaxDecimalPower + 1) * 333 / 100) < Bignum::kMaxSignificantBits); Bignum input; Bignum boundary; input.AssignDecimalString(buffer); diff --git a/deps/v8/src/stub-cache.cc b/deps/v8/src/stub-cache.cc index ef9faefc83b..f959e924988 100644 --- a/deps/v8/src/stub-cache.cc +++ b/deps/v8/src/stub-cache.cc @@ -2,18 +2,18 @@ // Use of this source code is governed by a BSD-style license that can be // found in the LICENSE file. -#include "v8.h" - -#include "api.h" -#include "arguments.h" -#include "ast.h" -#include "code-stubs.h" -#include "cpu-profiler.h" -#include "gdb-jit.h" -#include "ic-inl.h" -#include "stub-cache.h" -#include "type-info.h" -#include "vm-state-inl.h" +#include "src/v8.h" + +#include "src/api.h" +#include "src/arguments.h" +#include "src/ast.h" +#include "src/code-stubs.h" +#include "src/cpu-profiler.h" +#include "src/gdb-jit.h" +#include "src/ic-inl.h" +#include "src/stub-cache.h" +#include "src/type-info.h" +#include "src/vm-state-inl.h" namespace v8 { namespace internal { @@ -27,30 +27,37 @@ StubCache::StubCache(Isolate* isolate) void StubCache::Initialize() { - ASSERT(IsPowerOf2(kPrimaryTableSize)); - ASSERT(IsPowerOf2(kSecondaryTableSize)); + DCHECK(IsPowerOf2(kPrimaryTableSize)); + DCHECK(IsPowerOf2(kSecondaryTableSize)); Clear(); } -Code* StubCache::Set(Name* name, Map* map, Code* code) { - // Get the flags from the code. - Code::Flags flags = Code::RemoveTypeFromFlags(code->flags()); +static Code::Flags CommonStubCacheChecks(Name* name, Map* map, + Code::Flags flags) { + flags = Code::RemoveTypeAndHolderFromFlags(flags); // Validate that the name does not move on scavenge, and that we // can use identity checks instead of structural equality checks. - ASSERT(!heap()->InNewSpace(name)); - ASSERT(name->IsUniqueName()); - - // The state bits are not important to the hash function because - // the stub cache only contains monomorphic stubs. Make sure that - // the bits are the least significant so they will be the ones - // masked out. - ASSERT(Code::ExtractICStateFromFlags(flags) == MONOMORPHIC); + DCHECK(!name->GetHeap()->InNewSpace(name)); + DCHECK(name->IsUniqueName()); + + // The state bits are not important to the hash function because the stub + // cache only contains handlers. Make sure that the bits are the least + // significant so they will be the ones masked out. + DCHECK_EQ(Code::HANDLER, Code::ExtractKindFromFlags(flags)); STATIC_ASSERT((Code::ICStateField::kMask & 1) == 1); - // Make sure that the code type is not included in the hash. - ASSERT(Code::ExtractTypeFromFlags(flags) == 0); + // Make sure that the code type and cache holder are not included in the hash. + DCHECK(Code::ExtractTypeFromFlags(flags) == 0); + DCHECK(Code::ExtractCacheHolderFromFlags(flags) == 0); + + return flags; +} + + +Code* StubCache::Set(Name* name, Map* map, Code* code) { + Code::Flags flags = CommonStubCacheChecks(name, map, code->flags()); // Compute the primary entry. int primary_offset = PrimaryOffset(name, flags, map); @@ -61,7 +68,8 @@ Code* StubCache::Set(Name* name, Map* map, Code* code) { // secondary cache before overwriting it. 
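// The stub cache is a two-level table: rather than being dropped outright,
// a colliding primary entry is rehashed into the secondary table below, so
// it survives one eviction.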
if (old_code != isolate_->builtins()->builtin(Builtins::kIllegal)) { Map* old_map = primary->map; - Code::Flags old_flags = Code::RemoveTypeFromFlags(old_code->flags()); + Code::Flags old_flags = + Code::RemoveTypeAndHolderFromFlags(old_code->flags()); int seed = PrimaryOffset(primary->key, old_flags, old_map); int secondary_offset = SecondaryOffset(primary->key, old_flags, seed); Entry* secondary = entry(secondary_, secondary_offset); @@ -77,150 +85,183 @@ Code* StubCache::Set(Name* name, Map* map, Code* code) { } -Handle<Code> StubCache::FindIC(Handle<Name> name, - Handle<Map> stub_holder, - Code::Kind kind, - ExtraICState extra_state, - InlineCacheHolderFlag cache_holder) { +Code* StubCache::Get(Name* name, Map* map, Code::Flags flags) { + flags = CommonStubCacheChecks(name, map, flags); + int primary_offset = PrimaryOffset(name, flags, map); + Entry* primary = entry(primary_, primary_offset); + if (primary->key == name && primary->map == map) { + return primary->value; + } + int secondary_offset = SecondaryOffset(name, flags, primary_offset); + Entry* secondary = entry(secondary_, secondary_offset); + if (secondary->key == name && secondary->map == map) { + return secondary->value; + } + return NULL; +} + + +Handle<Code> PropertyICCompiler::Find(Handle<Name> name, + Handle<Map> stub_holder, Code::Kind kind, + ExtraICState extra_state, + CacheHolderFlag cache_holder) { Code::Flags flags = Code::ComputeMonomorphicFlags( kind, extra_state, cache_holder); - Handle<Object> probe(stub_holder->FindInCodeCache(*name, flags), isolate_); - if (probe->IsCode()) return Handle<Code>::cast(probe); + Object* probe = stub_holder->FindInCodeCache(*name, flags); + if (probe->IsCode()) return handle(Code::cast(probe)); return Handle<Code>::null(); } -Handle<Code> StubCache::FindHandler(Handle<Name> name, - Handle<Map> stub_holder, - Code::Kind kind, - InlineCacheHolderFlag cache_holder, - Code::StubType type) { +Handle<Code> PropertyHandlerCompiler::Find(Handle<Name> name, + Handle<Map> stub_holder, + Code::Kind kind, + CacheHolderFlag cache_holder, + Code::StubType type) { Code::Flags flags = Code::ComputeHandlerFlags(kind, type, cache_holder); - - Handle<Object> probe(stub_holder->FindInCodeCache(*name, flags), isolate_); - if (probe->IsCode()) return Handle<Code>::cast(probe); + Object* probe = stub_holder->FindInCodeCache(*name, flags); + if (probe->IsCode()) return handle(Code::cast(probe)); return Handle<Code>::null(); } -Handle<Code> StubCache::ComputeMonomorphicIC( - Code::Kind kind, - Handle<Name> name, - Handle<HeapType> type, - Handle<Code> handler, - ExtraICState extra_ic_state) { - InlineCacheHolderFlag flag = IC::GetCodeCacheFlag(*type); +Handle<Code> PropertyICCompiler::ComputeMonomorphic( + Code::Kind kind, Handle<Name> name, Handle<HeapType> type, + Handle<Code> handler, ExtraICState extra_ic_state) { + Isolate* isolate = name->GetIsolate(); + if (handler.is_identical_to(isolate->builtins()->LoadIC_Normal()) || + handler.is_identical_to(isolate->builtins()->StoreIC_Normal())) { + name = isolate->factory()->normal_ic_symbol(); + } + + CacheHolderFlag flag; + Handle<Map> stub_holder = IC::GetICCacheHolder(*type, isolate, &flag); - Handle<Map> stub_holder; Handle<Code> ic; // There are multiple string maps that all use the same prototype. That // prototype cannot hold multiple handlers, one for each of the string maps, // for a single name. Hence, turn off caching of the IC. 
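// (One-byte, two-byte, cons and sliced string maps, for example, all
// share String.prototype as the holder of their handler cache.)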
bool can_be_cached = !type->Is(HeapType::String()); if (can_be_cached) { - stub_holder = IC::GetCodeCacheHolder(flag, *type, isolate()); - ic = FindIC(name, stub_holder, kind, extra_ic_state, flag); + ic = Find(name, stub_holder, kind, extra_ic_state, flag); if (!ic.is_null()) return ic; } - if (kind == Code::LOAD_IC) { - LoadStubCompiler ic_compiler(isolate(), extra_ic_state, flag); - ic = ic_compiler.CompileMonomorphicIC(type, handler, name); - } else if (kind == Code::KEYED_LOAD_IC) { - KeyedLoadStubCompiler ic_compiler(isolate(), extra_ic_state, flag); - ic = ic_compiler.CompileMonomorphicIC(type, handler, name); - } else if (kind == Code::STORE_IC) { - StoreStubCompiler ic_compiler(isolate(), extra_ic_state); - ic = ic_compiler.CompileMonomorphicIC(type, handler, name); - } else { - ASSERT(kind == Code::KEYED_STORE_IC); - ASSERT(STANDARD_STORE == +#ifdef DEBUG + if (kind == Code::KEYED_STORE_IC) { + DCHECK(STANDARD_STORE == KeyedStoreIC::GetKeyedAccessStoreMode(extra_ic_state)); - KeyedStoreStubCompiler ic_compiler(isolate(), extra_ic_state); - ic = ic_compiler.CompileMonomorphicIC(type, handler, name); } +#endif + + PropertyICCompiler ic_compiler(isolate, kind, extra_ic_state, flag); + ic = ic_compiler.CompileMonomorphic(type, handler, name, PROPERTY); if (can_be_cached) Map::UpdateCodeCache(stub_holder, name, ic); return ic; } -Handle<Code> StubCache::ComputeLoadNonexistent(Handle<Name> name, - Handle<HeapType> type) { - InlineCacheHolderFlag flag = IC::GetCodeCacheFlag(*type); - Handle<Map> stub_holder = IC::GetCodeCacheHolder(flag, *type, isolate()); +Handle<Code> NamedLoadHandlerCompiler::ComputeLoadNonexistent( + Handle<Name> name, Handle<HeapType> type) { + Isolate* isolate = name->GetIsolate(); + Handle<Map> receiver_map = IC::TypeToMap(*type, isolate); + if (receiver_map->prototype()->IsNull()) { + // TODO(jkummerow/verwaest): If there is no prototype and the property + // is nonexistent, introduce a builtin to handle this (fast properties + // -> return undefined, dictionary properties -> do negative lookup). + return Handle<Code>(); + } + CacheHolderFlag flag; + Handle<Map> stub_holder_map = + IC::GetHandlerCacheHolder(*type, false, isolate, &flag); + // If no dictionary mode objects are present in the prototype chain, the load // nonexistent IC stub can be shared for all names for a given map and we use // the empty string for the map cache in that case. If there are dictionary // mode objects involved, we need to do negative lookups in the stub and // therefore the stub will be specific to the name. - Handle<Map> current_map = stub_holder; - Handle<Name> cache_name = current_map->is_dictionary_map() - ? name : Handle<Name>::cast(isolate()->factory()->nonexistent_symbol()); - Handle<Object> next(current_map->prototype(), isolate()); - Handle<JSObject> last = Handle<JSObject>::null(); - while (!next->IsNull()) { - last = Handle<JSObject>::cast(next); - next = handle(current_map->prototype(), isolate()); - current_map = handle(Handle<HeapObject>::cast(next)->map()); + Handle<Name> cache_name = + receiver_map->is_dictionary_map() + ? 
name + : Handle<Name>::cast(isolate->factory()->nonexistent_symbol()); + Handle<Map> current_map = stub_holder_map; + Handle<JSObject> last(JSObject::cast(receiver_map->prototype())); + while (true) { if (current_map->is_dictionary_map()) cache_name = name; + if (current_map->prototype()->IsNull()) break; + last = handle(JSObject::cast(current_map->prototype())); + current_map = handle(last->map()); } - // Compile the stub that is either shared for all names or // name specific if there are global objects involved. - Handle<Code> handler = FindHandler( - cache_name, stub_holder, Code::LOAD_IC, flag, Code::FAST); - if (!handler.is_null()) { - return handler; - } + Handle<Code> handler = PropertyHandlerCompiler::Find( + cache_name, stub_holder_map, Code::LOAD_IC, flag, Code::FAST); + if (!handler.is_null()) return handler; - LoadStubCompiler compiler(isolate_, kNoExtraICState, flag); - handler = compiler.CompileLoadNonexistent(type, last, cache_name); - Map::UpdateCodeCache(stub_holder, cache_name, handler); + NamedLoadHandlerCompiler compiler(isolate, type, last, flag); + handler = compiler.CompileLoadNonexistent(cache_name); + Map::UpdateCodeCache(stub_holder_map, cache_name, handler); return handler; } -Handle<Code> StubCache::ComputeKeyedLoadElement(Handle<Map> receiver_map) { +Handle<Code> PropertyICCompiler::ComputeKeyedLoadMonomorphic( + Handle<Map> receiver_map) { + Isolate* isolate = receiver_map->GetIsolate(); Code::Flags flags = Code::ComputeMonomorphicFlags(Code::KEYED_LOAD_IC); - Handle<Name> name = - isolate()->factory()->KeyedLoadElementMonomorphic_string(); + Handle<Name> name = isolate->factory()->KeyedLoadMonomorphic_string(); - Handle<Object> probe(receiver_map->FindInCodeCache(*name, flags), isolate_); + Handle<Object> probe(receiver_map->FindInCodeCache(*name, flags), isolate); if (probe->IsCode()) return Handle<Code>::cast(probe); - KeyedLoadStubCompiler compiler(isolate()); - Handle<Code> code = compiler.CompileLoadElement(receiver_map); + ElementsKind elements_kind = receiver_map->elements_kind(); + Handle<Code> stub; + if (receiver_map->has_fast_elements() || + receiver_map->has_external_array_elements() || + receiver_map->has_fixed_typed_array_elements()) { + stub = LoadFastElementStub(isolate, + receiver_map->instance_type() == JS_ARRAY_TYPE, + elements_kind).GetCode(); + } else { + stub = FLAG_compiled_keyed_dictionary_loads + ? 
LoadDictionaryElementStub(isolate).GetCode() + : LoadDictionaryElementPlatformStub(isolate).GetCode(); + } + PropertyICCompiler compiler(isolate, Code::KEYED_LOAD_IC); + Handle<Code> code = + compiler.CompileMonomorphic(HeapType::Class(receiver_map, isolate), stub, + isolate->factory()->empty_string(), ELEMENT); Map::UpdateCodeCache(receiver_map, name, code); return code; } -Handle<Code> StubCache::ComputeKeyedStoreElement( - Handle<Map> receiver_map, - StrictMode strict_mode, +Handle<Code> PropertyICCompiler::ComputeKeyedStoreMonomorphic( + Handle<Map> receiver_map, StrictMode strict_mode, KeyedAccessStoreMode store_mode) { + Isolate* isolate = receiver_map->GetIsolate(); ExtraICState extra_state = KeyedStoreIC::ComputeExtraICState(strict_mode, store_mode); - Code::Flags flags = Code::ComputeMonomorphicFlags( - Code::KEYED_STORE_IC, extra_state); + Code::Flags flags = + Code::ComputeMonomorphicFlags(Code::KEYED_STORE_IC, extra_state); - ASSERT(store_mode == STANDARD_STORE || + DCHECK(store_mode == STANDARD_STORE || store_mode == STORE_AND_GROW_NO_TRANSITION || store_mode == STORE_NO_TRANSITION_IGNORE_OUT_OF_BOUNDS || store_mode == STORE_NO_TRANSITION_HANDLE_COW); - Handle<String> name = - isolate()->factory()->KeyedStoreElementMonomorphic_string(); - Handle<Object> probe(receiver_map->FindInCodeCache(*name, flags), isolate_); + Handle<String> name = isolate->factory()->KeyedStoreMonomorphic_string(); + Handle<Object> probe(receiver_map->FindInCodeCache(*name, flags), isolate); if (probe->IsCode()) return Handle<Code>::cast(probe); - KeyedStoreStubCompiler compiler(isolate(), extra_state); - Handle<Code> code = compiler.CompileStoreElement(receiver_map); + PropertyICCompiler compiler(isolate, Code::KEYED_STORE_IC, extra_state); + Handle<Code> code = + compiler.CompileKeyedStoreMonomorphic(receiver_map, store_mode); Map::UpdateCodeCache(receiver_map, name, code); - ASSERT(KeyedStoreIC::GetKeyedAccessStoreMode(code->extra_ic_state()) + DCHECK(KeyedStoreIC::GetKeyedAccessStoreMode(code->extra_ic_state()) == store_mode); return code; } @@ -237,12 +278,13 @@ static void FillCache(Isolate* isolate, Handle<Code> code) { } -Code* StubCache::FindPreMonomorphicIC(Code::Kind kind, ExtraICState state) { +Code* PropertyICCompiler::FindPreMonomorphic(Isolate* isolate, Code::Kind kind, + ExtraICState state) { Code::Flags flags = Code::ComputeFlags(kind, PREMONOMORPHIC, state); UnseededNumberDictionary* dictionary = - isolate()->heap()->non_monomorphic_cache(); - int entry = dictionary->FindEntry(isolate(), flags); - ASSERT(entry != -1); + isolate->heap()->non_monomorphic_cache(); + int entry = dictionary->FindEntry(isolate, flags); + DCHECK(entry != -1); Object* code = dictionary->ValueAt(entry); // This might be called during the marking phase of the collector // hence the unchecked cast. 
@@ -250,15 +292,16 @@ Code* StubCache::FindPreMonomorphicIC(Code::Kind kind, ExtraICState state) { } -Handle<Code> StubCache::ComputeLoad(InlineCacheState ic_state, - ExtraICState extra_state) { +Handle<Code> PropertyICCompiler::ComputeLoad(Isolate* isolate, + InlineCacheState ic_state, + ExtraICState extra_state) { Code::Flags flags = Code::ComputeFlags(Code::LOAD_IC, ic_state, extra_state); Handle<UnseededNumberDictionary> cache = - isolate_->factory()->non_monomorphic_cache(); - int entry = cache->FindEntry(isolate_, flags); + isolate->factory()->non_monomorphic_cache(); + int entry = cache->FindEntry(isolate, flags); if (entry != -1) return Handle<Code>(Code::cast(cache->ValueAt(entry))); - StubCompiler compiler(isolate_); + PropertyICCompiler compiler(isolate, Code::LOAD_IC); Handle<Code> code; if (ic_state == UNINITIALIZED) { code = compiler.CompileLoadInitialize(flags); @@ -269,20 +312,21 @@ Handle<Code> StubCache::ComputeLoad(InlineCacheState ic_state, } else { UNREACHABLE(); } - FillCache(isolate_, code); + FillCache(isolate, code); return code; } -Handle<Code> StubCache::ComputeStore(InlineCacheState ic_state, - ExtraICState extra_state) { +Handle<Code> PropertyICCompiler::ComputeStore(Isolate* isolate, + InlineCacheState ic_state, + ExtraICState extra_state) { Code::Flags flags = Code::ComputeFlags(Code::STORE_IC, ic_state, extra_state); Handle<UnseededNumberDictionary> cache = - isolate_->factory()->non_monomorphic_cache(); - int entry = cache->FindEntry(isolate_, flags); + isolate->factory()->non_monomorphic_cache(); + int entry = cache->FindEntry(isolate, flags); if (entry != -1) return Handle<Code>(Code::cast(cache->ValueAt(entry))); - StubCompiler compiler(isolate_); + PropertyICCompiler compiler(isolate, Code::STORE_IC); Handle<Code> code; if (ic_state == UNINITIALIZED) { code = compiler.CompileStoreInitialize(flags); @@ -296,25 +340,26 @@ Handle<Code> StubCache::ComputeStore(InlineCacheState ic_state, UNREACHABLE(); } - FillCache(isolate_, code); + FillCache(isolate, code); return code; } -Handle<Code> StubCache::ComputeCompareNil(Handle<Map> receiver_map, - CompareNilICStub& stub) { - Handle<String> name(isolate_->heap()->empty_string()); - if (!receiver_map->is_shared()) { - Handle<Code> cached_ic = FindIC(name, receiver_map, Code::COMPARE_NIL_IC, - stub.GetExtraICState()); +Handle<Code> PropertyICCompiler::ComputeCompareNil(Handle<Map> receiver_map, + CompareNilICStub* stub) { + Isolate* isolate = receiver_map->GetIsolate(); + Handle<String> name(isolate->heap()->empty_string()); + if (!receiver_map->is_dictionary_map()) { + Handle<Code> cached_ic = + Find(name, receiver_map, Code::COMPARE_NIL_IC, stub->GetExtraICState()); if (!cached_ic.is_null()) return cached_ic; } Code::FindAndReplacePattern pattern; - pattern.Add(isolate_->factory()->meta_map(), receiver_map); - Handle<Code> ic = stub.GetCodeCopy(pattern); + pattern.Add(isolate->factory()->meta_map(), receiver_map); + Handle<Code> ic = stub->GetCodeCopy(pattern); - if (!receiver_map->is_shared()) { + if (!receiver_map->is_dictionary_map()) { Map::UpdateCodeCache(receiver_map, name, ic); } @@ -323,64 +368,55 @@ Handle<Code> StubCache::ComputeCompareNil(Handle<Map> receiver_map, // TODO(verwaest): Change this method so it takes in a TypeHandleList. 
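// The compiled stub pairs each receiver map with an element-load handler
// and dispatches on the receiver's actual map; the result is memoized in
// the polymorphic code cache under the (receiver_maps, flags) key.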
-Handle<Code> StubCache::ComputeLoadElementPolymorphic( +Handle<Code> PropertyICCompiler::ComputeKeyedLoadPolymorphic( MapHandleList* receiver_maps) { + Isolate* isolate = receiver_maps->at(0)->GetIsolate(); Code::Flags flags = Code::ComputeFlags(Code::KEYED_LOAD_IC, POLYMORPHIC); Handle<PolymorphicCodeCache> cache = - isolate_->factory()->polymorphic_code_cache(); + isolate->factory()->polymorphic_code_cache(); Handle<Object> probe = cache->Lookup(receiver_maps, flags); if (probe->IsCode()) return Handle<Code>::cast(probe); TypeHandleList types(receiver_maps->length()); for (int i = 0; i < receiver_maps->length(); i++) { - types.Add(HeapType::Class(receiver_maps->at(i), isolate())); + types.Add(HeapType::Class(receiver_maps->at(i), isolate)); } CodeHandleList handlers(receiver_maps->length()); - KeyedLoadStubCompiler compiler(isolate_); + ElementHandlerCompiler compiler(isolate); compiler.CompileElementHandlers(receiver_maps, &handlers); - Handle<Code> code = compiler.CompilePolymorphicIC( - &types, &handlers, factory()->empty_string(), Code::NORMAL, ELEMENT); + PropertyICCompiler ic_compiler(isolate, Code::KEYED_LOAD_IC); + Handle<Code> code = ic_compiler.CompilePolymorphic( + &types, &handlers, isolate->factory()->empty_string(), Code::NORMAL, + ELEMENT); - isolate()->counters()->keyed_load_polymorphic_stubs()->Increment(); + isolate->counters()->keyed_load_polymorphic_stubs()->Increment(); PolymorphicCodeCache::Update(cache, receiver_maps, flags, code); return code; } -Handle<Code> StubCache::ComputePolymorphicIC( - Code::Kind kind, - TypeHandleList* types, - CodeHandleList* handlers, - int number_of_valid_types, - Handle<Name> name, - ExtraICState extra_ic_state) { +Handle<Code> PropertyICCompiler::ComputePolymorphic( + Code::Kind kind, TypeHandleList* types, CodeHandleList* handlers, + int valid_types, Handle<Name> name, ExtraICState extra_ic_state) { Handle<Code> handler = handlers->at(0); - Code::StubType type = number_of_valid_types == 1 ? handler->type() - : Code::NORMAL; - if (kind == Code::LOAD_IC) { - LoadStubCompiler ic_compiler(isolate_, extra_ic_state); - return ic_compiler.CompilePolymorphicIC( - types, handlers, name, type, PROPERTY); - } else { - ASSERT(kind == Code::STORE_IC); - StoreStubCompiler ic_compiler(isolate_, extra_ic_state); - return ic_compiler.CompilePolymorphicIC( - types, handlers, name, type, PROPERTY); - } + Code::StubType type = valid_types == 1 ? 
handler->type() : Code::NORMAL; + DCHECK(kind == Code::LOAD_IC || kind == Code::STORE_IC); + PropertyICCompiler ic_compiler(name->GetIsolate(), kind, extra_ic_state); + return ic_compiler.CompilePolymorphic(types, handlers, name, type, PROPERTY); } -Handle<Code> StubCache::ComputeStoreElementPolymorphic( - MapHandleList* receiver_maps, - KeyedAccessStoreMode store_mode, +Handle<Code> PropertyICCompiler::ComputeKeyedStorePolymorphic( + MapHandleList* receiver_maps, KeyedAccessStoreMode store_mode, StrictMode strict_mode) { - ASSERT(store_mode == STANDARD_STORE || + Isolate* isolate = receiver_maps->at(0)->GetIsolate(); + DCHECK(store_mode == STANDARD_STORE || store_mode == STORE_AND_GROW_NO_TRANSITION || store_mode == STORE_NO_TRANSITION_IGNORE_OUT_OF_BOUNDS || store_mode == STORE_NO_TRANSITION_HANDLE_COW); Handle<PolymorphicCodeCache> cache = - isolate_->factory()->polymorphic_code_cache(); + isolate->factory()->polymorphic_code_cache(); ExtraICState extra_state = KeyedStoreIC::ComputeExtraICState( strict_mode, store_mode); Code::Flags flags = @@ -388,8 +424,9 @@ Handle<Code> StubCache::ComputeStoreElementPolymorphic( Handle<Object> probe = cache->Lookup(receiver_maps, flags); if (probe->IsCode()) return Handle<Code>::cast(probe); - KeyedStoreStubCompiler compiler(isolate_, extra_state); - Handle<Code> code = compiler.CompileStoreElementPolymorphic(receiver_maps); + PropertyICCompiler compiler(isolate, Code::KEYED_STORE_IC, extra_state); + Handle<Code> code = + compiler.CompileKeyedStorePolymorphic(receiver_maps, store_mode); PolymorphicCodeCache::Update(cache, receiver_maps, flags, code); return code; } @@ -398,12 +435,12 @@ Handle<Code> StubCache::ComputeStoreElementPolymorphic( void StubCache::Clear() { Code* empty = isolate_->builtins()->builtin(Builtins::kIllegal); for (int i = 0; i < kPrimaryTableSize; i++) { - primary_[i].key = heap()->empty_string(); + primary_[i].key = isolate()->heap()->empty_string(); primary_[i].map = NULL; primary_[i].value = empty; } for (int j = 0; j < kSecondaryTableSize; j++) { - secondary_[j].key = heap()->empty_string(); + secondary_[j].key = isolate()->heap()->empty_string(); secondary_[j].map = NULL; secondary_[j].value = empty; } @@ -456,25 +493,27 @@ void StubCache::CollectMatchingMaps(SmallMapList* types, RUNTIME_FUNCTION(StoreCallbackProperty) { - JSObject* receiver = JSObject::cast(args[0]); - JSObject* holder = JSObject::cast(args[1]); - ExecutableAccessorInfo* callback = ExecutableAccessorInfo::cast(args[2]); - Address setter_address = v8::ToCData<Address>(callback->setter()); - v8::AccessorSetterCallback fun = - FUNCTION_CAST<v8::AccessorSetterCallback>(setter_address); - ASSERT(fun != NULL); - ASSERT(callback->IsCompatibleReceiver(receiver)); + Handle<JSObject> receiver = args.at<JSObject>(0); + Handle<JSObject> holder = args.at<JSObject>(1); + Handle<ExecutableAccessorInfo> callback = args.at<ExecutableAccessorInfo>(2); Handle<Name> name = args.at<Name>(3); Handle<Object> value = args.at<Object>(4); HandleScope scope(isolate); + DCHECK(callback->IsCompatibleReceiver(*receiver)); + + Address setter_address = v8::ToCData<Address>(callback->setter()); + v8::AccessorSetterCallback fun = + FUNCTION_CAST<v8::AccessorSetterCallback>(setter_address); + DCHECK(fun != NULL); + // TODO(rossberg): Support symbols in the API. 
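+ // Until then, a store to a symbol-named accessor pretends to succeed:
+ // the setter callback is never invoked and the value is returned as is.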
if (name->IsSymbol()) return *value; Handle<String> str = Handle<String>::cast(name); - LOG(isolate, ApiNamedPropertyAccess("store", receiver, *name)); - PropertyCallbackArguments - custom_args(isolate, callback->data(), receiver, holder); + LOG(isolate, ApiNamedPropertyAccess("store", *receiver, *name)); + PropertyCallbackArguments custom_args(isolate, callback->data(), *receiver, + *holder); custom_args.Call(fun, v8::Utils::ToLocal(str), v8::Utils::ToLocal(value)); RETURN_FAILURE_IF_SCHEDULED_EXCEPTION(isolate); return *value; @@ -489,11 +528,11 @@ RUNTIME_FUNCTION(StoreCallbackProperty) { * provide any value for the given name. */ RUNTIME_FUNCTION(LoadPropertyWithInterceptorOnly) { - ASSERT(args.length() == StubCache::kInterceptorArgsLength); + DCHECK(args.length() == NamedLoadHandlerCompiler::kInterceptorArgsLength); Handle<Name> name_handle = - args.at<Name>(StubCache::kInterceptorArgsNameIndex); - Handle<InterceptorInfo> interceptor_info = - args.at<InterceptorInfo>(StubCache::kInterceptorArgsInfoIndex); + args.at<Name>(NamedLoadHandlerCompiler::kInterceptorArgsNameIndex); + Handle<InterceptorInfo> interceptor_info = args.at<InterceptorInfo>( + NamedLoadHandlerCompiler::kInterceptorArgsInfoIndex); // TODO(rossberg): Support symbols in the API. if (name_handle->IsSymbol()) @@ -503,12 +542,12 @@ RUNTIME_FUNCTION(LoadPropertyWithInterceptorOnly) { Address getter_address = v8::ToCData<Address>(interceptor_info->getter()); v8::NamedPropertyGetterCallback getter = FUNCTION_CAST<v8::NamedPropertyGetterCallback>(getter_address); - ASSERT(getter != NULL); + DCHECK(getter != NULL); Handle<JSObject> receiver = - args.at<JSObject>(StubCache::kInterceptorArgsThisIndex); + args.at<JSObject>(NamedLoadHandlerCompiler::kInterceptorArgsThisIndex); Handle<JSObject> holder = - args.at<JSObject>(StubCache::kInterceptorArgsHolderIndex); + args.at<JSObject>(NamedLoadHandlerCompiler::kInterceptorArgsHolderIndex); PropertyCallbackArguments callback_args( isolate, interceptor_info->data(), *receiver, *holder); { @@ -546,119 +585,60 @@ static Object* ThrowReferenceError(Isolate* isolate, Name* name) { } -MUST_USE_RESULT static MaybeHandle<Object> LoadWithInterceptor( - Arguments* args, - PropertyAttributes* attrs) { - ASSERT(args->length() == StubCache::kInterceptorArgsLength); - Handle<Name> name_handle = - args->at<Name>(StubCache::kInterceptorArgsNameIndex); - Handle<InterceptorInfo> interceptor_info = - args->at<InterceptorInfo>(StubCache::kInterceptorArgsInfoIndex); - Handle<JSObject> receiver_handle = - args->at<JSObject>(StubCache::kInterceptorArgsThisIndex); - Handle<JSObject> holder_handle = - args->at<JSObject>(StubCache::kInterceptorArgsHolderIndex); - - Isolate* isolate = receiver_handle->GetIsolate(); - - // TODO(rossberg): Support symbols in the API. - if (name_handle->IsSymbol()) { - return JSObject::GetPropertyPostInterceptor( - holder_handle, receiver_handle, name_handle, attrs); - } - Handle<String> name = Handle<String>::cast(name_handle); - - Address getter_address = v8::ToCData<Address>(interceptor_info->getter()); - v8::NamedPropertyGetterCallback getter = - FUNCTION_CAST<v8::NamedPropertyGetterCallback>(getter_address); - ASSERT(getter != NULL); - - PropertyCallbackArguments callback_args(isolate, - interceptor_info->data(), - *receiver_handle, - *holder_handle); - { - HandleScope scope(isolate); - // Use the interceptor getter. 
- v8::Handle<v8::Value> r = - callback_args.Call(getter, v8::Utils::ToLocal(name)); - RETURN_EXCEPTION_IF_SCHEDULED_EXCEPTION(isolate, Object); - if (!r.IsEmpty()) { - *attrs = NONE; - Handle<Object> result = v8::Utils::OpenHandle(*r); - result->VerifyApiCallResultType(); - return scope.CloseAndEscape(result); - } - } - - return JSObject::GetPropertyPostInterceptor( - holder_handle, receiver_handle, name_handle, attrs); -} - - /** * Loads a property with an interceptor performing post interceptor * lookup if interceptor failed. */ -RUNTIME_FUNCTION(LoadPropertyWithInterceptorForLoad) { - PropertyAttributes attr = NONE; +RUNTIME_FUNCTION(LoadPropertyWithInterceptor) { HandleScope scope(isolate); + DCHECK(args.length() == NamedLoadHandlerCompiler::kInterceptorArgsLength); + Handle<Name> name = + args.at<Name>(NamedLoadHandlerCompiler::kInterceptorArgsNameIndex); + Handle<JSObject> receiver = + args.at<JSObject>(NamedLoadHandlerCompiler::kInterceptorArgsThisIndex); + Handle<JSObject> holder = + args.at<JSObject>(NamedLoadHandlerCompiler::kInterceptorArgsHolderIndex); + Handle<Object> result; + LookupIterator it(receiver, name, holder); ASSIGN_RETURN_FAILURE_ON_EXCEPTION( - isolate, result, LoadWithInterceptor(&args, &attr)); - - // If the property is present, return it. - if (attr != ABSENT) return *result; - return ThrowReferenceError(isolate, Name::cast(args[0])); -} + isolate, result, JSObject::GetProperty(&it)); + if (it.IsFound()) return *result; -RUNTIME_FUNCTION(LoadPropertyWithInterceptorForCall) { - PropertyAttributes attr; - HandleScope scope(isolate); - Handle<Object> result; - ASSIGN_RETURN_FAILURE_ON_EXCEPTION( - isolate, result, LoadWithInterceptor(&args, &attr)); - // This is call IC. In this case, we simply return the undefined result which - // will lead to an exception when trying to invoke the result as a - // function. 
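
The deleted LoadWithInterceptor helper above and the slimmer LoadPropertyWithInterceptor that replaces it implement the same contract: ask the embedder's interceptor getter first, and fall back to an ordinary property lookup only when it declines. A toy model of that contract (all types here are illustrative, not V8's object model):

#include <functional>
#include <optional>
#include <string>
#include <unordered_map>

// Toy object: named properties plus an optional interceptor getter.
struct ToyObject {
  std::unordered_map<std::string, int> properties;
  std::function<std::optional<int>(const std::string&)> interceptor;
};

// Interceptor-first load with post-interceptor fallback, mirroring the
// control flow that LookupIterator + JSObject::GetProperty now provide.
std::optional<int> LoadWithInterceptor(const ToyObject& holder,
                                       const std::string& name) {
  if (holder.interceptor) {
    if (std::optional<int> r = holder.interceptor(name)) return r;
  }
  // Interceptor declined: ordinary lookup.
  auto it = holder.properties.find(name);
  if (it != holder.properties.end()) return it->second;
  return std::nullopt;  // the IC path turns this into a ReferenceError
}
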
- return *result; + return ThrowReferenceError(isolate, Name::cast(args[0])); } -RUNTIME_FUNCTION(StoreInterceptorProperty) { +RUNTIME_FUNCTION(StorePropertyWithInterceptor) { HandleScope scope(isolate); - ASSERT(args.length() == 3); + DCHECK(args.length() == 3); StoreIC ic(IC::NO_EXTRA_FRAME, isolate); Handle<JSObject> receiver = args.at<JSObject>(0); Handle<Name> name = args.at<Name>(1); Handle<Object> value = args.at<Object>(2); - if (receiver->IsJSGlobalProxy()) { - Object* proto = Object::cast(*receiver)->GetPrototype(isolate); #ifdef DEBUG - ASSERT(proto == NULL || - JSGlobalObject::cast(proto)->HasNamedInterceptor()); -#endif - receiver = Handle<JSObject>(JSObject::cast(proto)); + if (receiver->IsJSGlobalProxy()) { + PrototypeIterator iter(isolate, receiver); + DCHECK(iter.IsAtEnd() || + Handle<JSGlobalObject>::cast(PrototypeIterator::GetCurrent(iter)) + ->HasNamedInterceptor()); } else { -#ifdef DEBUG - ASSERT(receiver->HasNamedInterceptor()); -#endif + DCHECK(receiver->HasNamedInterceptor()); } - PropertyAttributes attr = NONE; +#endif Handle<Object> result; ASSIGN_RETURN_FAILURE_ON_EXCEPTION( isolate, result, - JSObject::SetPropertyWithInterceptor( - receiver, name, value, attr, ic.strict_mode())); + JSObject::SetProperty(receiver, name, value, ic.strict_mode())); return *result; } -RUNTIME_FUNCTION(KeyedLoadPropertyWithInterceptor) { +RUNTIME_FUNCTION(LoadElementWithInterceptor) { HandleScope scope(isolate); Handle<JSObject> receiver = args.at<JSObject>(0); - ASSERT(args.smi_at(1) >= 0); + DCHECK(args.smi_at(1) >= 0); uint32_t index = args.smi_at(1); Handle<Object> result; ASSIGN_RETURN_FAILURE_ON_EXCEPTION( @@ -668,74 +648,67 @@ RUNTIME_FUNCTION(KeyedLoadPropertyWithInterceptor) { } -Handle<Code> StubCompiler::CompileLoadInitialize(Code::Flags flags) { +Handle<Code> PropertyICCompiler::CompileLoadInitialize(Code::Flags flags) { LoadIC::GenerateInitialize(masm()); Handle<Code> code = GetCodeWithFlags(flags, "CompileLoadInitialize"); PROFILE(isolate(), CodeCreateEvent(Logger::LOAD_INITIALIZE_TAG, *code, 0)); - GDBJIT(AddCode(GDBJITInterface::LOAD_IC, *code)); return code; } -Handle<Code> StubCompiler::CompileLoadPreMonomorphic(Code::Flags flags) { +Handle<Code> PropertyICCompiler::CompileLoadPreMonomorphic(Code::Flags flags) { LoadIC::GeneratePreMonomorphic(masm()); Handle<Code> code = GetCodeWithFlags(flags, "CompileLoadPreMonomorphic"); PROFILE(isolate(), CodeCreateEvent(Logger::LOAD_PREMONOMORPHIC_TAG, *code, 0)); - GDBJIT(AddCode(GDBJITInterface::LOAD_IC, *code)); return code; } -Handle<Code> StubCompiler::CompileLoadMegamorphic(Code::Flags flags) { +Handle<Code> PropertyICCompiler::CompileLoadMegamorphic(Code::Flags flags) { LoadIC::GenerateMegamorphic(masm()); Handle<Code> code = GetCodeWithFlags(flags, "CompileLoadMegamorphic"); PROFILE(isolate(), CodeCreateEvent(Logger::LOAD_MEGAMORPHIC_TAG, *code, 0)); - GDBJIT(AddCode(GDBJITInterface::LOAD_IC, *code)); return code; } -Handle<Code> StubCompiler::CompileStoreInitialize(Code::Flags flags) { +Handle<Code> PropertyICCompiler::CompileStoreInitialize(Code::Flags flags) { StoreIC::GenerateInitialize(masm()); Handle<Code> code = GetCodeWithFlags(flags, "CompileStoreInitialize"); PROFILE(isolate(), CodeCreateEvent(Logger::STORE_INITIALIZE_TAG, *code, 0)); - GDBJIT(AddCode(GDBJITInterface::STORE_IC, *code)); return code; } -Handle<Code> StubCompiler::CompileStorePreMonomorphic(Code::Flags flags) { +Handle<Code> PropertyICCompiler::CompileStorePreMonomorphic(Code::Flags flags) { StoreIC::GeneratePreMonomorphic(masm()); Handle<Code> 
code = GetCodeWithFlags(flags, "CompileStorePreMonomorphic"); PROFILE(isolate(), CodeCreateEvent(Logger::STORE_PREMONOMORPHIC_TAG, *code, 0)); - GDBJIT(AddCode(GDBJITInterface::STORE_IC, *code)); return code; } -Handle<Code> StubCompiler::CompileStoreGeneric(Code::Flags flags) { +Handle<Code> PropertyICCompiler::CompileStoreGeneric(Code::Flags flags) { ExtraICState extra_state = Code::ExtractExtraICStateFromFlags(flags); StrictMode strict_mode = StoreIC::GetStrictMode(extra_state); StoreIC::GenerateRuntimeSetProperty(masm(), strict_mode); Handle<Code> code = GetCodeWithFlags(flags, "CompileStoreGeneric"); PROFILE(isolate(), CodeCreateEvent(Logger::STORE_GENERIC_TAG, *code, 0)); - GDBJIT(AddCode(GDBJITInterface::STORE_IC, *code)); return code; } -Handle<Code> StubCompiler::CompileStoreMegamorphic(Code::Flags flags) { +Handle<Code> PropertyICCompiler::CompileStoreMegamorphic(Code::Flags flags) { StoreIC::GenerateMegamorphic(masm()); Handle<Code> code = GetCodeWithFlags(flags, "CompileStoreMegamorphic"); PROFILE(isolate(), CodeCreateEvent(Logger::STORE_MEGAMORPHIC_TAG, *code, 0)); - GDBJIT(AddCode(GDBJITInterface::STORE_IC, *code)); return code; } @@ -743,58 +716,46 @@ Handle<Code> StubCompiler::CompileStoreMegamorphic(Code::Flags flags) { #undef CALL_LOGGER_TAG -Handle<Code> StubCompiler::GetCodeWithFlags(Code::Flags flags, - const char* name) { +Handle<Code> PropertyAccessCompiler::GetCodeWithFlags(Code::Flags flags, + const char* name) { // Create code object in the heap. CodeDesc desc; - masm_.GetCode(&desc); - Handle<Code> code = factory()->NewCode(desc, flags, masm_.CodeObject()); - if (code->has_major_key()) { - code->set_major_key(CodeStub::NoCache); - } + masm()->GetCode(&desc); + Handle<Code> code = factory()->NewCode(desc, flags, masm()->CodeObject()); + if (code->IsCodeStubOrIC()) code->set_stub_key(CodeStub::NoCacheKey()); #ifdef ENABLE_DISASSEMBLER - if (FLAG_print_code_stubs) code->Disassemble(name); + if (FLAG_print_code_stubs) { + OFStream os(stdout); + code->Disassemble(name, os); + } #endif return code; } -Handle<Code> StubCompiler::GetCodeWithFlags(Code::Flags flags, - Handle<Name> name) { +Handle<Code> PropertyAccessCompiler::GetCodeWithFlags(Code::Flags flags, + Handle<Name> name) { return (FLAG_print_code_stubs && !name.is_null() && name->IsString()) ? 
GetCodeWithFlags(flags, Handle<String>::cast(name)->ToCString().get()) : GetCodeWithFlags(flags, NULL); } -void StubCompiler::LookupPostInterceptor(Handle<JSObject> holder, - Handle<Name> name, - LookupResult* lookup) { - holder->LocalLookupRealNamedProperty(name, lookup); - if (lookup->IsFound()) return; - if (holder->GetPrototype()->IsNull()) return; - holder->GetPrototype()->Lookup(name, lookup); -} - - #define __ ACCESS_MASM(masm()) -Register LoadStubCompiler::HandlerFrontendHeader( - Handle<HeapType> type, - Register object_reg, - Handle<JSObject> holder, - Handle<Name> name, - Label* miss) { +Register NamedLoadHandlerCompiler::FrontendHeader(Register object_reg, + Handle<Name> name, + Label* miss) { PrototypeCheckType check_type = CHECK_ALL_MAPS; int function_index = -1; - if (type->Is(HeapType::String())) { + if (type()->Is(HeapType::String())) { function_index = Context::STRING_FUNCTION_INDEX; - } else if (type->Is(HeapType::Symbol())) { + } else if (type()->Is(HeapType::Symbol())) { function_index = Context::SYMBOL_FUNCTION_INDEX; - } else if (type->Is(HeapType::Number())) { + } else if (type()->Is(HeapType::Number())) { function_index = Context::NUMBER_FUNCTION_INDEX; - } else if (type->Is(HeapType::Boolean())) { + } else if (type()->Is(HeapType::Boolean())) { function_index = Context::BOOLEAN_FUNCTION_INDEX; } else { check_type = SKIP_RECEIVER; @@ -805,31 +766,27 @@ Register LoadStubCompiler::HandlerFrontendHeader( masm(), function_index, scratch1(), miss); Object* function = isolate()->native_context()->get(function_index); Object* prototype = JSFunction::cast(function)->instance_prototype(); - type = IC::CurrentTypeOf(handle(prototype, isolate()), isolate()); + set_type_for_object(handle(prototype, isolate())); object_reg = scratch1(); } // Check that the maps starting from the prototype haven't changed. - return CheckPrototypes( - type, object_reg, holder, scratch1(), scratch2(), scratch3(), - name, miss, check_type); + return CheckPrototypes(object_reg, scratch1(), scratch2(), scratch3(), name, + miss, check_type); } -// HandlerFrontend for store uses the name register. It has to be restored -// before a miss. -Register StoreStubCompiler::HandlerFrontendHeader( - Handle<HeapType> type, - Register object_reg, - Handle<JSObject> holder, - Handle<Name> name, - Label* miss) { - return CheckPrototypes(type, object_reg, holder, this->name(), - scratch1(), scratch2(), name, miss, SKIP_RECEIVER); +// Frontend for store uses the name register. It has to be restored before a +// miss. 
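
FrontendHeader above keeps the special case for primitive receivers: a load from a string, symbol, number or boolean is compiled against the instance prototype of the matching wrapper function from the native context, and the prototype-chain check starts there. Roughly, as a stand-in for the native-context lookup (names and strings here are illustrative):

#include <stdexcept>
#include <string>

enum class ReceiverKind { kString, kSymbol, kNumber, kBoolean, kObject };

// A primitive receiver is redirected to its wrapper prototype before the
// usual CheckPrototypes walk; object receivers keep their own map.
std::string WrapperPrototypeName(ReceiverKind kind) {
  switch (kind) {
    case ReceiverKind::kString:  return "String.prototype";
    case ReceiverKind::kSymbol:  return "Symbol.prototype";
    case ReceiverKind::kNumber:  return "Number.prototype";
    case ReceiverKind::kBoolean: return "Boolean.prototype";
    case ReceiverKind::kObject:
      throw std::invalid_argument("object receivers need no redirection");
  }
  throw std::logic_error("unreachable");
}
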
+Register NamedStoreHandlerCompiler::FrontendHeader(Register object_reg, + Handle<Name> name, + Label* miss) { + return CheckPrototypes(object_reg, this->name(), scratch1(), scratch2(), name, + miss, SKIP_RECEIVER); } -bool BaseLoadStoreStubCompiler::IncludesNumberType(TypeHandleList* types) { +bool PropertyICCompiler::IncludesNumberType(TypeHandleList* types) { for (int i = 0; i < types->length(); ++i) { if (types->at(i)->Is(HeapType::Number())) return true; } @@ -837,455 +794,301 @@ bool BaseLoadStoreStubCompiler::IncludesNumberType(TypeHandleList* types) { } -Register BaseLoadStoreStubCompiler::HandlerFrontend(Handle<HeapType> type, - Register object_reg, - Handle<JSObject> holder, - Handle<Name> name) { +Register PropertyHandlerCompiler::Frontend(Register object_reg, + Handle<Name> name) { Label miss; - - Register reg = HandlerFrontendHeader(type, object_reg, holder, name, &miss); - - HandlerFrontendFooter(name, &miss); - + Register reg = FrontendHeader(object_reg, name, &miss); + FrontendFooter(name, &miss); return reg; } -void LoadStubCompiler::NonexistentHandlerFrontend(Handle<HeapType> type, - Handle<JSObject> last, - Handle<Name> name) { - Label miss; - - Register holder; +void PropertyHandlerCompiler::NonexistentFrontendHeader(Handle<Name> name, + Label* miss, + Register scratch1, + Register scratch2) { + Register holder_reg; Handle<Map> last_map; - if (last.is_null()) { - holder = receiver(); - last_map = IC::TypeToMap(*type, isolate()); - // If |type| has null as its prototype, |last| is Handle<JSObject>::null(). - ASSERT(last_map->prototype() == isolate()->heap()->null_value()); + if (holder().is_null()) { + holder_reg = receiver(); + last_map = IC::TypeToMap(*type(), isolate()); + // If |type| has null as its prototype, |holder()| is + // Handle<JSObject>::null(). + DCHECK(last_map->prototype() == isolate()->heap()->null_value()); } else { - holder = HandlerFrontendHeader(type, receiver(), last, name, &miss); - last_map = handle(last->map()); + holder_reg = FrontendHeader(receiver(), name, miss); + last_map = handle(holder()->map()); } - if (last_map->is_dictionary_map() && - !last_map->IsJSGlobalObjectMap() && - !last_map->IsJSGlobalProxyMap()) { - if (!name->IsUniqueName()) { - ASSERT(name->IsString()); - name = factory()->InternalizeString(Handle<String>::cast(name)); + if (last_map->is_dictionary_map()) { + if (last_map->IsJSGlobalObjectMap()) { + Handle<JSGlobalObject> global = + holder().is_null() + ? Handle<JSGlobalObject>::cast(type()->AsConstant()->Value()) + : Handle<JSGlobalObject>::cast(holder()); + GenerateCheckPropertyCell(masm(), global, name, scratch1, miss); + } else { + if (!name->IsUniqueName()) { + DCHECK(name->IsString()); + name = factory()->InternalizeString(Handle<String>::cast(name)); + } + DCHECK(holder().is_null() || + holder()->property_dictionary()->FindEntry(name) == + NameDictionary::kNotFound); + GenerateDictionaryNegativeLookup(masm(), miss, holder_reg, name, scratch1, + scratch2); } - ASSERT(last.is_null() || - last->property_dictionary()->FindEntry(name) == - NameDictionary::kNotFound); - GenerateDictionaryNegativeLookup(masm(), &miss, holder, name, - scratch2(), scratch3()); } +} - // If the last object in the prototype chain is a global object, - // check that the global property cell is empty. - if (last_map->IsJSGlobalObjectMap()) { - Handle<JSGlobalObject> global = last.is_null() - ? 
Handle<JSGlobalObject>::cast(type->AsConstant()->Value()) - : Handle<JSGlobalObject>::cast(last); - GenerateCheckPropertyCell(masm(), global, name, scratch2(), &miss); - } - HandlerFrontendFooter(name, &miss); +Handle<Code> NamedLoadHandlerCompiler::CompileLoadField(Handle<Name> name, + FieldIndex field) { + Register reg = Frontend(receiver(), name); + __ Move(receiver(), reg); + LoadFieldStub stub(isolate(), field); + GenerateTailCall(masm(), stub.GetCode()); + return GetCode(kind(), Code::FAST, name); } -Handle<Code> LoadStubCompiler::CompileLoadField( - Handle<HeapType> type, - Handle<JSObject> holder, - Handle<Name> name, - PropertyIndex field, - Representation representation) { - Register reg = HandlerFrontend(type, receiver(), holder, name); - GenerateLoadField(reg, holder, field, representation); - - // Return the generated code. +Handle<Code> NamedLoadHandlerCompiler::CompileLoadConstant(Handle<Name> name, + int constant_index) { + Register reg = Frontend(receiver(), name); + __ Move(receiver(), reg); + LoadConstantStub stub(isolate(), constant_index); + GenerateTailCall(masm(), stub.GetCode()); return GetCode(kind(), Code::FAST, name); } -Handle<Code> LoadStubCompiler::CompileLoadConstant( - Handle<HeapType> type, - Handle<JSObject> holder, - Handle<Name> name, - Handle<Object> value) { - HandlerFrontend(type, receiver(), holder, name); - GenerateLoadConstant(value); - - // Return the generated code. +Handle<Code> NamedLoadHandlerCompiler::CompileLoadNonexistent( + Handle<Name> name) { + Label miss; + NonexistentFrontendHeader(name, &miss, scratch2(), scratch3()); + GenerateLoadConstant(isolate()->factory()->undefined_value()); + FrontendFooter(name, &miss); return GetCode(kind(), Code::FAST, name); } -Handle<Code> LoadStubCompiler::CompileLoadCallback( - Handle<HeapType> type, - Handle<JSObject> holder, - Handle<Name> name, - Handle<ExecutableAccessorInfo> callback) { - Register reg = CallbackHandlerFrontend( - type, receiver(), holder, name, callback); +Handle<Code> NamedLoadHandlerCompiler::CompileLoadCallback( + Handle<Name> name, Handle<ExecutableAccessorInfo> callback) { + Register reg = Frontend(receiver(), name); GenerateLoadCallback(reg, callback); - - // Return the generated code. return GetCode(kind(), Code::FAST, name); } -Handle<Code> LoadStubCompiler::CompileLoadCallback( - Handle<HeapType> type, - Handle<JSObject> holder, - Handle<Name> name, - const CallOptimization& call_optimization) { - ASSERT(call_optimization.is_simple_api_call()); - Handle<JSFunction> callback = call_optimization.constant_function(); - CallbackHandlerFrontend(type, receiver(), holder, name, callback); - Handle<Map>receiver_map = IC::TypeToMap(*type, isolate()); +Handle<Code> NamedLoadHandlerCompiler::CompileLoadCallback( + Handle<Name> name, const CallOptimization& call_optimization) { + DCHECK(call_optimization.is_simple_api_call()); + Frontend(receiver(), name); + Handle<Map> receiver_map = IC::TypeToMap(*type(), isolate()); GenerateFastApiCall( masm(), call_optimization, receiver_map, receiver(), scratch1(), false, 0, NULL); - // Return the generated code. return GetCode(kind(), Code::FAST, name); } -Handle<Code> LoadStubCompiler::CompileLoadInterceptor( - Handle<HeapType> type, - Handle<JSObject> holder, +Handle<Code> NamedLoadHandlerCompiler::CompileLoadInterceptor( Handle<Name> name) { + // Perform a lookup after the interceptor. 
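
NonexistentFrontendHeader above folds the old standalone global-cell check into the dictionary branch, so the absence proof is now a three-way choice driven by the holder's map. Condensed into plain logic (the types and enumerator names are stand-ins):

// Toy holder descriptor for the three cases the frontend distinguishes.
struct HolderShape {
  bool is_dictionary_map;
  bool is_global_object;
};

enum class AbsenceCheck {
  kMapChecksSuffice,         // fast properties: map chain check is enough
  kEmptyPropertyCell,        // global object: verify the cell stays empty
  kDictionaryNegativeLookup  // other dictionary holders: prove name absent
};

AbsenceCheck ChooseAbsenceCheck(const HolderShape& holder) {
  if (!holder.is_dictionary_map) return AbsenceCheck::kMapChecksSuffice;
  if (holder.is_global_object) return AbsenceCheck::kEmptyPropertyCell;
  return AbsenceCheck::kDictionaryNegativeLookup;
}
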
LookupResult lookup(isolate()); - LookupPostInterceptor(holder, name, &lookup); + holder()->LookupOwnRealNamedProperty(name, &lookup); + if (!lookup.IsFound()) { + PrototypeIterator iter(holder()->GetIsolate(), holder()); + if (!iter.IsAtEnd()) { + PrototypeIterator::GetCurrent(iter)->Lookup(name, &lookup); + } + } - Register reg = HandlerFrontend(type, receiver(), holder, name); + Register reg = Frontend(receiver(), name); // TODO(368): Compile in the whole chain: all the interceptors in // prototypes and ultimate answer. - GenerateLoadInterceptor(reg, type, holder, &lookup, name); - - // Return the generated code. + GenerateLoadInterceptor(reg, &lookup, name); return GetCode(kind(), Code::FAST, name); } -void LoadStubCompiler::GenerateLoadPostInterceptor( - Register interceptor_reg, - Handle<JSObject> interceptor_holder, - Handle<Name> name, - LookupResult* lookup) { - Handle<JSObject> holder(lookup->holder()); +void NamedLoadHandlerCompiler::GenerateLoadPostInterceptor( + Register interceptor_reg, Handle<Name> name, LookupResult* lookup) { + Handle<JSObject> real_named_property_holder(lookup->holder()); + + set_type_for_object(holder()); + set_holder(real_named_property_holder); + Register reg = Frontend(interceptor_reg, name); + if (lookup->IsField()) { - PropertyIndex field = lookup->GetFieldIndex(); - if (interceptor_holder.is_identical_to(holder)) { - GenerateLoadField( - interceptor_reg, holder, field, lookup->representation()); - } else { - // We found FIELD property in prototype chain of interceptor's holder. - // Retrieve a field from field's holder. - Register reg = HandlerFrontend( - IC::CurrentTypeOf(interceptor_holder, isolate()), - interceptor_reg, holder, name); - GenerateLoadField( - reg, holder, field, lookup->representation()); - } + __ Move(receiver(), reg); + LoadFieldStub stub(isolate(), lookup->GetFieldIndex()); + GenerateTailCall(masm(), stub.GetCode()); } else { - // We found CALLBACKS property in prototype chain of interceptor's - // holder. - ASSERT(lookup->type() == CALLBACKS); + DCHECK(lookup->type() == CALLBACKS); Handle<ExecutableAccessorInfo> callback( ExecutableAccessorInfo::cast(lookup->GetCallbackObject())); - ASSERT(callback->getter() != NULL); - - Register reg = CallbackHandlerFrontend( - IC::CurrentTypeOf(interceptor_holder, isolate()), - interceptor_reg, holder, name, callback); + DCHECK(callback->getter() != NULL); GenerateLoadCallback(reg, callback); } } -Handle<Code> BaseLoadStoreStubCompiler::CompileMonomorphicIC( - Handle<HeapType> type, - Handle<Code> handler, - Handle<Name> name) { +Handle<Code> PropertyICCompiler::CompileMonomorphic(Handle<HeapType> type, + Handle<Code> handler, + Handle<Name> name, + IcCheckType check) { TypeHandleList types(1); CodeHandleList handlers(1); types.Add(type); handlers.Add(handler); Code::StubType stub_type = handler->type(); - return CompilePolymorphicIC(&types, &handlers, name, stub_type, PROPERTY); + return CompilePolymorphic(&types, &handlers, name, stub_type, check); } -Handle<Code> LoadStubCompiler::CompileLoadViaGetter( - Handle<HeapType> type, - Handle<JSObject> holder, - Handle<Name> name, - Handle<JSFunction> getter) { - HandlerFrontend(type, receiver(), holder, name); - GenerateLoadViaGetter(masm(), type, receiver(), getter); - - // Return the generated code. 
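
The inlined replacement for LookupPostInterceptor at the top of CompileLoadInterceptor reads: try the holder's own real named properties first, and only if that fails continue the lookup from the holder's prototype. The same logic over a toy object model:

#include <string>
#include <unordered_map>

// Toy object with a prototype link (illustrative, not V8's object model).
struct Obj {
  std::unordered_map<std::string, int> own;
  const Obj* prototype = nullptr;
};

// Own-first lookup with a prototype-chain continuation, mirroring the
// post-interceptor lookup above.
const int* LookupPostInterceptor(const Obj* holder, const std::string& name) {
  auto it = holder->own.find(name);
  if (it != holder->own.end()) return &it->second;
  for (const Obj* p = holder->prototype; p != nullptr; p = p->prototype) {
    auto pit = p->own.find(name);
    if (pit != p->own.end()) return &pit->second;
  }
  return nullptr;
}
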
+Handle<Code> NamedLoadHandlerCompiler::CompileLoadViaGetter( + Handle<Name> name, Handle<JSFunction> getter) { + Frontend(receiver(), name); + GenerateLoadViaGetter(masm(), type(), receiver(), getter); return GetCode(kind(), Code::FAST, name); } -Handle<Code> StoreStubCompiler::CompileStoreTransition( - Handle<JSObject> object, - LookupResult* lookup, - Handle<Map> transition, - Handle<Name> name) { +// TODO(verwaest): Cleanup. holder() is actually the receiver. +Handle<Code> NamedStoreHandlerCompiler::CompileStoreTransition( + Handle<Map> transition, Handle<Name> name) { Label miss, slow; // Ensure no transitions to deprecated maps are followed. __ CheckMapDeprecated(transition, scratch1(), &miss); // Check that we are allowed to write this. - if (object->GetPrototype()->IsJSObject()) { - Handle<JSObject> holder; - // holder == object indicates that no property was found. - if (lookup->holder() != *object) { - holder = Handle<JSObject>(lookup->holder()); - } else { - // Find the top object. - holder = object; - do { - holder = Handle<JSObject>(JSObject::cast(holder->GetPrototype())); - } while (holder->GetPrototype()->IsJSObject()); - } - - Register holder_reg = HandlerFrontendHeader( - IC::CurrentTypeOf(object, isolate()), receiver(), holder, name, &miss); - - // If no property was found, and the holder (the last object in the - // prototype chain) is in slow mode, we need to do a negative lookup on the - // holder. - if (lookup->holder() == *object) { - GenerateNegativeHolderLookup(masm(), holder, holder_reg, name, &miss); + bool is_nonexistent = holder()->map() == transition->GetBackPointer(); + if (is_nonexistent) { + // Find the top object. + Handle<JSObject> last; + PrototypeIterator iter(isolate(), holder()); + while (!iter.IsAtEnd()) { + last = Handle<JSObject>::cast(PrototypeIterator::GetCurrent(iter)); + iter.Advance(); } + if (!last.is_null()) set_holder(last); + NonexistentFrontendHeader(name, &miss, scratch1(), scratch2()); + } else { + FrontendHeader(receiver(), name, &miss); + DCHECK(holder()->HasFastProperties()); } - GenerateStoreTransition(masm(), - object, - lookup, - transition, - name, - receiver(), this->name(), value(), - scratch1(), scratch2(), scratch3(), - &miss, - &slow); - - // Handle store cache miss. - GenerateRestoreName(masm(), &miss, name); - TailCallBuiltin(masm(), MissBuiltin(kind())); - - GenerateRestoreName(masm(), &slow, name); - TailCallBuiltin(masm(), SlowBuiltin(kind())); - - // Return the generated code. - return GetCode(kind(), Code::FAST, name); -} + GenerateStoreTransition(transition, name, receiver(), this->name(), value(), + scratch1(), scratch2(), scratch3(), &miss, &slow); - -Handle<Code> StoreStubCompiler::CompileStoreField(Handle<JSObject> object, - LookupResult* lookup, - Handle<Name> name) { - Label miss; - - HandlerFrontendHeader(IC::CurrentTypeOf(object, isolate()), - receiver(), object, name, &miss); - - // Generate store field code. - GenerateStoreField(masm(), - object, - lookup, - receiver(), this->name(), value(), scratch1(), scratch2(), - &miss); - - // Handle store cache miss. - __ bind(&miss); + GenerateRestoreName(&miss, name); TailCallBuiltin(masm(), MissBuiltin(kind())); - // Return the generated code. 
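
A pattern that recurs through these handler compilers: emit the guarded fast path, then bind the miss label and tail-call the miss builtin for the IC kind, so any failed check re-enters the runtime rather than misbehaving. As a skeleton, with a toy assembler standing in for MacroAssembler (purely illustrative):

#include <cstdio>

// The real code emits machine code; this just traces the same sequence.
struct ToyAssembler {
  void EmitChecks()   { std::puts("emit receiver/map checks"); }
  void EmitFastPath() { std::puts("emit fast-path load/store"); }
  void BindMiss()     { std::puts("bind miss label"); }
  void TailCallMiss() { std::puts("tail-call <Kind>IC_Miss builtin"); }
};

void CompileHandler(ToyAssembler* masm) {
  masm->EmitChecks();    // any failing check jumps to the miss label
  masm->EmitFastPath();
  masm->BindMiss();
  masm->TailCallMiss();  // defer to the runtime on miss
}
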
+ GenerateRestoreName(&slow, name); + TailCallBuiltin(masm(), SlowBuiltin(kind())); return GetCode(kind(), Code::FAST, name); } -Handle<Code> StoreStubCompiler::CompileStoreArrayLength(Handle<JSObject> object, - LookupResult* lookup, - Handle<Name> name) { - // This accepts as a receiver anything JSArray::SetElementsLength accepts - // (currently anything except for external arrays which means anything with - // elements of FixedArray type). Value must be a number, but only smis are - // accepted as the most common case. +Handle<Code> NamedStoreHandlerCompiler::CompileStoreField(LookupResult* lookup, + Handle<Name> name) { Label miss; - - // Check that value is a smi. - __ JumpIfNotSmi(value(), &miss); - - // Generate tail call to StoreIC_ArrayLength. - GenerateStoreArrayLength(); - - // Handle miss case. + GenerateStoreField(lookup, value(), &miss); __ bind(&miss); TailCallBuiltin(masm(), MissBuiltin(kind())); - - // Return the generated code. return GetCode(kind(), Code::FAST, name); } -Handle<Code> StoreStubCompiler::CompileStoreViaSetter( - Handle<JSObject> object, - Handle<JSObject> holder, - Handle<Name> name, - Handle<JSFunction> setter) { - Handle<HeapType> type = IC::CurrentTypeOf(object, isolate()); - HandlerFrontend(type, receiver(), holder, name); - GenerateStoreViaSetter(masm(), type, receiver(), setter); +Handle<Code> NamedStoreHandlerCompiler::CompileStoreViaSetter( + Handle<JSObject> object, Handle<Name> name, Handle<JSFunction> setter) { + Frontend(receiver(), name); + GenerateStoreViaSetter(masm(), type(), receiver(), setter); return GetCode(kind(), Code::FAST, name); } -Handle<Code> StoreStubCompiler::CompileStoreCallback( - Handle<JSObject> object, - Handle<JSObject> holder, - Handle<Name> name, +Handle<Code> NamedStoreHandlerCompiler::CompileStoreCallback( + Handle<JSObject> object, Handle<Name> name, const CallOptimization& call_optimization) { - HandlerFrontend(IC::CurrentTypeOf(object, isolate()), - receiver(), holder, name); + Frontend(receiver(), name); Register values[] = { value() }; GenerateFastApiCall( masm(), call_optimization, handle(object->map()), receiver(), scratch1(), true, 1, values); - // Return the generated code. return GetCode(kind(), Code::FAST, name); } -Handle<Code> KeyedLoadStubCompiler::CompileLoadElement( - Handle<Map> receiver_map) { - ElementsKind elements_kind = receiver_map->elements_kind(); - if (receiver_map->has_fast_elements() || - receiver_map->has_external_array_elements() || - receiver_map->has_fixed_typed_array_elements()) { - Handle<Code> stub = KeyedLoadFastElementStub( - isolate(), - receiver_map->instance_type() == JS_ARRAY_TYPE, - elements_kind).GetCode(); - __ DispatchMap(receiver(), scratch1(), receiver_map, stub, DO_SMI_CHECK); - } else { - Handle<Code> stub = FLAG_compiled_keyed_dictionary_loads - ? KeyedLoadDictionaryElementStub(isolate()).GetCode() - : KeyedLoadDictionaryElementPlatformStub(isolate()).GetCode(); - __ DispatchMap(receiver(), scratch1(), receiver_map, stub, DO_SMI_CHECK); - } - - TailCallBuiltin(masm(), Builtins::kKeyedLoadIC_Miss); - - // Return the generated code. 
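
In the rewritten CompileStoreTransition the add-versus-transition decision is a single pointer comparison: if the holder still sits on the transition map's back pointer, the store adds a property that does not exist yet, so absence must be proven along the whole prototype chain. The predicate in isolation (MapInfo is a stand-in for V8's Map):

// Stand-in for V8's Map with its transition back pointer.
struct MapInfo {
  const MapInfo* back_pointer = nullptr;  // map before this transition
};

// True when following `transition` would *add* the property rather than
// change the representation of an existing one.
bool TransitionAddsProperty(const MapInfo* holder_map,
                            const MapInfo* transition) {
  return transition->back_pointer == holder_map;
}
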
- return GetICCode(kind(), Code::NORMAL, factory()->empty_string()); -} - - -Handle<Code> KeyedStoreStubCompiler::CompileStoreElement( - Handle<Map> receiver_map) { +Handle<Code> PropertyICCompiler::CompileKeyedStoreMonomorphic( + Handle<Map> receiver_map, KeyedAccessStoreMode store_mode) { ElementsKind elements_kind = receiver_map->elements_kind(); bool is_jsarray = receiver_map->instance_type() == JS_ARRAY_TYPE; Handle<Code> stub; if (receiver_map->has_fast_elements() || receiver_map->has_external_array_elements() || receiver_map->has_fixed_typed_array_elements()) { - stub = KeyedStoreFastElementStub( - isolate(), - is_jsarray, - elements_kind, - store_mode()).GetCode(); + stub = StoreFastElementStub(isolate(), is_jsarray, elements_kind, + store_mode).GetCode(); } else { - stub = KeyedStoreElementStub(isolate(), - is_jsarray, - elements_kind, - store_mode()).GetCode(); + stub = StoreElementStub(isolate(), is_jsarray, elements_kind, store_mode) + .GetCode(); } __ DispatchMap(receiver(), scratch1(), receiver_map, stub, DO_SMI_CHECK); TailCallBuiltin(masm(), Builtins::kKeyedStoreIC_Miss); - // Return the generated code. - return GetICCode(kind(), Code::NORMAL, factory()->empty_string()); + return GetCode(kind(), Code::NORMAL, factory()->empty_string()); } #undef __ -void StubCompiler::TailCallBuiltin(MacroAssembler* masm, Builtins::Name name) { +void PropertyAccessCompiler::TailCallBuiltin(MacroAssembler* masm, + Builtins::Name name) { Handle<Code> code(masm->isolate()->builtins()->builtin(name)); GenerateTailCall(masm, code); } -void BaseLoadStoreStubCompiler::JitEvent(Handle<Name> name, Handle<Code> code) { -#ifdef ENABLE_GDB_JIT_INTERFACE - GDBJITInterface::CodeTag tag; - if (kind_ == Code::LOAD_IC) { - tag = GDBJITInterface::LOAD_IC; - } else if (kind_ == Code::KEYED_LOAD_IC) { - tag = GDBJITInterface::KEYED_LOAD_IC; - } else if (kind_ == Code::STORE_IC) { - tag = GDBJITInterface::STORE_IC; - } else { - tag = GDBJITInterface::KEYED_STORE_IC; - } - GDBJIT(AddCode(tag, *name, *code)); -#endif -} - - -void BaseLoadStoreStubCompiler::InitializeRegisters() { - if (kind_ == Code::LOAD_IC) { - registers_ = LoadStubCompiler::registers(); - } else if (kind_ == Code::KEYED_LOAD_IC) { - registers_ = KeyedLoadStubCompiler::registers(); - } else if (kind_ == Code::STORE_IC) { - registers_ = StoreStubCompiler::registers(); - } else { - registers_ = KeyedStoreStubCompiler::registers(); +Register* PropertyAccessCompiler::GetCallingConvention(Code::Kind kind) { + if (kind == Code::LOAD_IC || kind == Code::KEYED_LOAD_IC) { + return load_calling_convention(); } + DCHECK(kind == Code::STORE_IC || kind == Code::KEYED_STORE_IC); + return store_calling_convention(); } -Handle<Code> BaseLoadStoreStubCompiler::GetICCode(Code::Kind kind, - Code::StubType type, - Handle<Name> name, - InlineCacheState state) { - Code::Flags flags = Code::ComputeFlags(kind, state, extra_state(), type); +Handle<Code> PropertyICCompiler::GetCode(Code::Kind kind, Code::StubType type, + Handle<Name> name, + InlineCacheState state) { + Code::Flags flags = + Code::ComputeFlags(kind, state, extra_ic_state_, type, cache_holder()); Handle<Code> code = GetCodeWithFlags(flags, name); IC::RegisterWeakMapDependency(code); PROFILE(isolate(), CodeCreateEvent(log_kind(code), *code, *name)); - JitEvent(name, code); return code; } -Handle<Code> BaseLoadStoreStubCompiler::GetCode(Code::Kind kind, - Code::StubType type, - Handle<Name> name) { - ASSERT_EQ(kNoExtraICState, extra_state()); - Code::Flags flags = Code::ComputeHandlerFlags(kind, type, 
cache_holder_); +Handle<Code> PropertyHandlerCompiler::GetCode(Code::Kind kind, + Code::StubType type, + Handle<Name> name) { + Code::Flags flags = Code::ComputeHandlerFlags(kind, type, cache_holder()); Handle<Code> code = GetCodeWithFlags(flags, name); - PROFILE(isolate(), CodeCreateEvent(log_kind(code), *code, *name)); - JitEvent(name, code); + PROFILE(isolate(), CodeCreateEvent(Logger::STUB_TAG, *code, *name)); return code; } -void KeyedLoadStubCompiler::CompileElementHandlers(MapHandleList* receiver_maps, - CodeHandleList* handlers) { +void ElementHandlerCompiler::CompileElementHandlers( + MapHandleList* receiver_maps, CodeHandleList* handlers) { for (int i = 0; i < receiver_maps->length(); ++i) { Handle<Map> receiver_map = receiver_maps->at(i); Handle<Code> cached_stub; @@ -1301,16 +1104,13 @@ void KeyedLoadStubCompiler::CompileElementHandlers(MapHandleList* receiver_maps, if (IsFastElementsKind(elements_kind) || IsExternalArrayElementsKind(elements_kind) || IsFixedTypedArrayElementsKind(elements_kind)) { - cached_stub = - KeyedLoadFastElementStub(isolate(), - is_js_array, - elements_kind).GetCode(); + cached_stub = LoadFastElementStub(isolate(), is_js_array, elements_kind) + .GetCode(); } else if (elements_kind == SLOPPY_ARGUMENTS_ELEMENTS) { cached_stub = isolate()->builtins()->KeyedLoadIC_SloppyArguments(); } else { - ASSERT(elements_kind == DICTIONARY_ELEMENTS); - cached_stub = - KeyedLoadDictionaryElementStub(isolate()).GetCode(); + DCHECK(elements_kind == DICTIONARY_ELEMENTS); + cached_stub = LoadDictionaryElementStub(isolate()).GetCode(); } } @@ -1319,8 +1119,8 @@ void KeyedLoadStubCompiler::CompileElementHandlers(MapHandleList* receiver_maps, } -Handle<Code> KeyedStoreStubCompiler::CompileStoreElementPolymorphic( - MapHandleList* receiver_maps) { +Handle<Code> PropertyICCompiler::CompileKeyedStorePolymorphic( + MapHandleList* receiver_maps, KeyedAccessStoreMode store_mode) { // Collect MONOMORPHIC stubs for all |receiver_maps|. 
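
The per-map stub selection in CompileKeyedStorePolymorphic, shown in the next hunk, is a three-way split: maps with a transition target get a transition-and-store stub, non-receiver types fall back to the slow builtin, and everything else gets a plain element store stub. Condensed into plain logic (the stub names are returned as strings purely for illustration):

#include <string>

struct MapDesc {
  bool has_transition_target;
  bool is_js_receiver;
  bool has_fast_or_typed_elements;
};

// One handler per receiver map, mirroring the selection loop below.
std::string ChooseKeyedStoreStub(const MapDesc& map) {
  if (map.has_transition_target) return "ElementsTransitionAndStoreStub";
  if (!map.is_js_receiver)       return "KeyedStoreIC_Slow";
  return map.has_fast_or_typed_elements ? "StoreFastElementStub"
                                        : "StoreElementStub";
}
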
CodeHandleList handlers(receiver_maps->length()); MapHandleList transitioned_maps(receiver_maps->length()); @@ -1338,45 +1138,37 @@ Handle<Code> KeyedStoreStubCompiler::CompileStoreElementPolymorphic( bool is_js_array = receiver_map->instance_type() == JS_ARRAY_TYPE; ElementsKind elements_kind = receiver_map->elements_kind(); if (!transitioned_map.is_null()) { - cached_stub = ElementsTransitionAndStoreStub( - isolate(), - elements_kind, - transitioned_map->elements_kind(), - is_js_array, - store_mode()).GetCode(); + cached_stub = + ElementsTransitionAndStoreStub(isolate(), elements_kind, + transitioned_map->elements_kind(), + is_js_array, store_mode).GetCode(); } else if (receiver_map->instance_type() < FIRST_JS_RECEIVER_TYPE) { cached_stub = isolate()->builtins()->KeyedStoreIC_Slow(); } else { if (receiver_map->has_fast_elements() || receiver_map->has_external_array_elements() || receiver_map->has_fixed_typed_array_elements()) { - cached_stub = KeyedStoreFastElementStub( - isolate(), - is_js_array, - elements_kind, - store_mode()).GetCode(); + cached_stub = StoreFastElementStub(isolate(), is_js_array, + elements_kind, store_mode).GetCode(); } else { - cached_stub = KeyedStoreElementStub( - isolate(), - is_js_array, - elements_kind, - store_mode()).GetCode(); + cached_stub = StoreElementStub(isolate(), is_js_array, elements_kind, + store_mode).GetCode(); } } - ASSERT(!cached_stub.is_null()); + DCHECK(!cached_stub.is_null()); handlers.Add(cached_stub); transitioned_maps.Add(transitioned_map); } - Handle<Code> code = - CompileStorePolymorphic(receiver_maps, &handlers, &transitioned_maps); + + Handle<Code> code = CompileKeyedStorePolymorphic(receiver_maps, &handlers, + &transitioned_maps); isolate()->counters()->keyed_store_polymorphic_stubs()->Increment(); - PROFILE(isolate(), - CodeCreateEvent(Logger::KEYED_STORE_POLYMORPHIC_IC_TAG, *code, 0)); + PROFILE(isolate(), CodeCreateEvent(log_kind(code), *code, 0)); return code; } -void KeyedStoreStubCompiler::GenerateStoreDictionaryElement( +void ElementHandlerCompiler::GenerateStoreDictionaryElement( MacroAssembler* masm) { KeyedStoreIC::GenerateSlow(masm); } @@ -1402,7 +1194,7 @@ CallOptimization::CallOptimization(Handle<JSFunction> function) { Handle<JSObject> CallOptimization::LookupHolderOfExpectedType( Handle<Map> object_map, HolderLookup* holder_lookup) const { - ASSERT(is_simple_api_call()); + DCHECK(is_simple_api_call()); if (!object_map->IsJSObjectMap()) { *holder_lookup = kHolderNotFound; return Handle<JSObject>::null(); @@ -1429,7 +1221,7 @@ Handle<JSObject> CallOptimization::LookupHolderOfExpectedType( bool CallOptimization::IsCompatibleReceiver(Handle<Object> receiver, Handle<JSObject> holder) const { - ASSERT(is_simple_api_call()); + DCHECK(is_simple_api_call()); if (!receiver->IsJSObject()) return false; Handle<Map> map(JSObject::cast(*receiver)->map()); HolderLookup holder_lookup; diff --git a/deps/v8/src/stub-cache.h b/deps/v8/src/stub-cache.h index 707df6c7bbb..77bd14cba11 100644 --- a/deps/v8/src/stub-cache.h +++ b/deps/v8/src/stub-cache.h @@ -5,25 +5,22 @@ #ifndef V8_STUB_CACHE_H_ #define V8_STUB_CACHE_H_ -#include "allocation.h" -#include "arguments.h" -#include "code-stubs.h" -#include "ic-inl.h" -#include "macro-assembler.h" -#include "objects.h" -#include "zone-inl.h" +#include "src/allocation.h" +#include "src/arguments.h" +#include "src/code-stubs.h" +#include "src/ic-inl.h" +#include "src/macro-assembler.h" +#include "src/objects.h" +#include "src/zone-inl.h" namespace v8 { namespace internal { -// The stub cache is 
used for megamorphic calls and property accesses. -// It maps (map, name, type)->Code* - -// The design of the table uses the inline cache stubs used for -// mono-morphic calls. The beauty of this, we do not have to -// invalidate the cache whenever a prototype map is changed. The stub -// validates the map chain as in the mono-morphic case. +// The stub cache is used for megamorphic property accesses. +// It maps (map, name, type) to property access handlers. The cache does not +// need explicit invalidation when a prototype chain is modified, since the +// handlers verify the chain. class CallOptimization; @@ -53,77 +50,17 @@ class StubCache { }; void Initialize(); - - Handle<JSObject> StubHolder(Handle<JSObject> receiver, - Handle<JSObject> holder); - - Handle<Code> FindIC(Handle<Name> name, - Handle<Map> stub_holder_map, - Code::Kind kind, - ExtraICState extra_state = kNoExtraICState, - InlineCacheHolderFlag cache_holder = OWN_MAP); - - Handle<Code> FindHandler(Handle<Name> name, - Handle<Map> map, - Code::Kind kind, - InlineCacheHolderFlag cache_holder, - Code::StubType type); - - Handle<Code> ComputeMonomorphicIC(Code::Kind kind, - Handle<Name> name, - Handle<HeapType> type, - Handle<Code> handler, - ExtraICState extra_ic_state); - - Handle<Code> ComputeLoadNonexistent(Handle<Name> name, Handle<HeapType> type); - - Handle<Code> ComputeKeyedLoadElement(Handle<Map> receiver_map); - - Handle<Code> ComputeKeyedStoreElement(Handle<Map> receiver_map, - StrictMode strict_mode, - KeyedAccessStoreMode store_mode); - - // --- - - Handle<Code> ComputeLoad(InlineCacheState ic_state, ExtraICState extra_state); - Handle<Code> ComputeStore(InlineCacheState ic_state, - ExtraICState extra_state); - - // --- - - Handle<Code> ComputeCompareNil(Handle<Map> receiver_map, - CompareNilICStub& stub); - - // --- - - Handle<Code> ComputeLoadElementPolymorphic(MapHandleList* receiver_maps); - Handle<Code> ComputeStoreElementPolymorphic(MapHandleList* receiver_maps, - KeyedAccessStoreMode store_mode, - StrictMode strict_mode); - - Handle<Code> ComputePolymorphicIC(Code::Kind kind, - TypeHandleList* types, - CodeHandleList* handlers, - int number_of_valid_maps, - Handle<Name> name, - ExtraICState extra_ic_state); - - // Finds the Code object stored in the Heap::non_monomorphic_cache(). - Code* FindPreMonomorphicIC(Code::Kind kind, ExtraICState extra_ic_state); - - // Update cache for entry hash(name, map). + // Access cache for entry hash(name, map). Code* Set(Name* name, Map* map, Code* code); - + Code* Get(Name* name, Map* map, Code::Flags flags); // Clear the lookup table (@ mark compact collection). void Clear(); - // Collect all maps that match the name and flags. void CollectMatchingMaps(SmallMapList* types, Handle<Name> name, Code::Flags flags, Handle<Context> native_context, Zone* zone); - // Generate code for probing the stub cache table. // Arguments extra, extra2 and extra3 may be used to pass additional scratch // registers. Set to no_reg if not needed. 
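
Operationally, the cache the rewritten class comment describes is two direct-mapped tables probed in sequence, with differently mixed hashes so that a collision in the primary table rarely implies one in the secondary. A host-side schematic (table sizes and the hash mixing are invented here; the real arithmetic is in PrimaryOffset and SecondaryOffset below):

#include <cstdint>
#include <functional>
#include <string>

struct Entry { std::string name; const void* map; const void* code; };

constexpr int kPrimarySize = 2048;
constexpr int kSecondarySize = 512;
Entry primary_table[kPrimarySize];
Entry secondary_table[kSecondarySize];

uint32_t Mix(const std::string& name, const void* map) {
  auto h = static_cast<uint32_t>(std::hash<std::string>{}(name));
  return h + static_cast<uint32_t>(reinterpret_cast<uintptr_t>(map));
}

const void* Probe(const std::string& name, const void* map) {
  Entry& p = primary_table[Mix(name, map) % kPrimarySize];
  if (p.name == name && p.map == map) return p.code;
  // Secondary probe, mixed differently so a primary collision does not
  // automatically collide here as well.
  Entry& s = secondary_table[(Mix(name, map) ^ 0x9e3779b9u) % kSecondarySize];
  if (s.name == name && s.map == map) return s.code;
  return nullptr;  // miss: the IC falls back to the runtime
}

On Set, a displaced primary entry is retired into the secondary table, which is why SecondaryOffset below takes the primary offset as its seed.
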
@@ -141,25 +78,21 @@ class StubCache { kSecondary }; - SCTableReference key_reference(StubCache::Table table) { return SCTableReference( reinterpret_cast<Address>(&first_entry(table)->key)); } - SCTableReference map_reference(StubCache::Table table) { return SCTableReference( reinterpret_cast<Address>(&first_entry(table)->map)); } - SCTableReference value_reference(StubCache::Table table) { return SCTableReference( reinterpret_cast<Address>(&first_entry(table)->value)); } - StubCache::Entry* first_entry(StubCache::Table table) { switch (table) { case StubCache::kPrimary: return StubCache::primary_; @@ -170,18 +103,11 @@ class StubCache { } Isolate* isolate() { return isolate_; } - Heap* heap() { return isolate()->heap(); } - Factory* factory() { return isolate()->factory(); } - // These constants describe the structure of the interceptor arguments on the - // stack. The arguments are pushed by the (platform-specific) - // PushInterceptorArguments and read by LoadPropertyWithInterceptorOnly and - // LoadWithInterceptor. - static const int kInterceptorArgsNameIndex = 0; - static const int kInterceptorArgsInfoIndex = 1; - static const int kInterceptorArgsThisIndex = 2; - static const int kInterceptorArgsHolderIndex = 3; - static const int kInterceptorArgsLength = 4; + // Setting the entry size such that the index is shifted by Name::kHashShift + // is convenient; shifting down the length field (to extract the hash code) + // automatically discards the hash bit field. + static const int kCacheIndexShift = Name::kHashShift; private: explicit StubCache(Isolate* isolate); @@ -195,15 +121,11 @@ class StubCache { // Hash algorithm for the primary table. This algorithm is replicated in // assembler for every architecture. Returns an index into the table that - // is scaled by 1 << kHeapObjectTagSize. + // is scaled by 1 << kCacheIndexShift. static int PrimaryOffset(Name* name, Code::Flags flags, Map* map) { - // This works well because the heap object tag size and the hash - // shift are equal. Shifting down the length field to get the - // hash code would effectively throw away two bits of the hash - // code. - STATIC_ASSERT(kHeapObjectTagSize == Name::kHashShift); + STATIC_ASSERT(kCacheIndexShift == Name::kHashShift); // Compute the hash of the name (use entire hash field). - ASSERT(name->HasHashCode()); + DCHECK(name->HasHashCode()); uint32_t field = name->hash_field(); // Using only the low bits in 64-bit mode is unlikely to increase the // risk of collision even if the heap is spread over an area larger than @@ -216,12 +138,12 @@ class StubCache { (static_cast<uint32_t>(flags) & ~Code::kFlagsNotUsedInLookup); // Base the offset on a simple combination of name, flags, and map. uint32_t key = (map_low32bits + field) ^ iflags; - return key & ((kPrimaryTableSize - 1) << kHeapObjectTagSize); + return key & ((kPrimaryTableSize - 1) << kCacheIndexShift); } // Hash algorithm for the secondary table. This algorithm is replicated in // assembler for every architecture. Returns an index into the table that - // is scaled by 1 << kHeapObjectTagSize. + // is scaled by 1 << kCacheIndexShift. static int SecondaryOffset(Name* name, Code::Flags flags, int seed) { // Use the seed from the primary cache in the secondary cache. 
uint32_t name_low32bits = @@ -231,7 +153,7 @@ class StubCache { uint32_t iflags = (static_cast<uint32_t>(flags) & ~Code::kFlagsNotUsedInLookup); uint32_t key = (seed - name_low32bits) + iflags; - return key & ((kSecondaryTableSize - 1) << kHeapObjectTagSize); + return key & ((kSecondaryTableSize - 1) << kCacheIndexShift); } // Compute the entry for a given offset in exactly the same way as @@ -270,37 +192,214 @@ DECLARE_RUNTIME_FUNCTION(StoreCallbackProperty); // Support functions for IC stubs for interceptors. DECLARE_RUNTIME_FUNCTION(LoadPropertyWithInterceptorOnly); -DECLARE_RUNTIME_FUNCTION(LoadPropertyWithInterceptorForLoad); -DECLARE_RUNTIME_FUNCTION(LoadPropertyWithInterceptorForCall); -DECLARE_RUNTIME_FUNCTION(StoreInterceptorProperty); -DECLARE_RUNTIME_FUNCTION(KeyedLoadPropertyWithInterceptor); +DECLARE_RUNTIME_FUNCTION(LoadPropertyWithInterceptor); +DECLARE_RUNTIME_FUNCTION(LoadElementWithInterceptor); +DECLARE_RUNTIME_FUNCTION(StorePropertyWithInterceptor); enum PrototypeCheckType { CHECK_ALL_MAPS, SKIP_RECEIVER }; enum IcCheckType { ELEMENT, PROPERTY }; -// The stub compilers compile stubs for the stub cache. -class StubCompiler BASE_EMBEDDED { +class PropertyAccessCompiler BASE_EMBEDDED { public: - explicit StubCompiler(Isolate* isolate, - ExtraICState extra_ic_state = kNoExtraICState) - : isolate_(isolate), extra_ic_state_(extra_ic_state), - masm_(isolate, NULL, 256) { } + static Builtins::Name MissBuiltin(Code::Kind kind) { + switch (kind) { + case Code::LOAD_IC: + return Builtins::kLoadIC_Miss; + case Code::STORE_IC: + return Builtins::kStoreIC_Miss; + case Code::KEYED_LOAD_IC: + return Builtins::kKeyedLoadIC_Miss; + case Code::KEYED_STORE_IC: + return Builtins::kKeyedStoreIC_Miss; + default: + UNREACHABLE(); + } + return Builtins::kLoadIC_Miss; + } + + static void TailCallBuiltin(MacroAssembler* masm, Builtins::Name name); + + protected: + PropertyAccessCompiler(Isolate* isolate, Code::Kind kind, + CacheHolderFlag cache_holder) + : registers_(GetCallingConvention(kind)), + kind_(kind), + cache_holder_(cache_holder), + isolate_(isolate), + masm_(isolate, NULL, 256) {} + + Code::Kind kind() const { return kind_; } + CacheHolderFlag cache_holder() const { return cache_holder_; } + MacroAssembler* masm() { return &masm_; } + Isolate* isolate() const { return isolate_; } + Heap* heap() const { return isolate()->heap(); } + Factory* factory() const { return isolate()->factory(); } + + Register receiver() const { return registers_[0]; } + Register name() const { return registers_[1]; } + Register scratch1() const { return registers_[2]; } + Register scratch2() const { return registers_[3]; } + Register scratch3() const { return registers_[4]; } + + // Calling convention between indexed store IC and handler. + Register transition_map() const { return scratch1(); } + + static Register* GetCallingConvention(Code::Kind); + static Register* load_calling_convention(); + static Register* store_calling_convention(); + static Register* keyed_store_calling_convention(); + + Register* registers_; + + static void GenerateTailCall(MacroAssembler* masm, Handle<Code> code); + + Handle<Code> GetCodeWithFlags(Code::Flags flags, const char* name); + Handle<Code> GetCodeWithFlags(Code::Flags flags, Handle<Name> name); + + private: + Code::Kind kind_; + CacheHolderFlag cache_holder_; + + Isolate* isolate_; + MacroAssembler masm_; +}; + + +class PropertyICCompiler : public PropertyAccessCompiler { + public: + // Finds the Code object stored in the Heap::non_monomorphic_cache(). 
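
PrimaryOffset and SecondaryOffset, quoted above, pre-scale the returned index by kCacheIndexShift so the generated probe code can use it as a byte offset directly. The same arithmetic as ordinary host code (the constant values here are invented for illustration; only the mixing matches the source):

#include <cstdint>

// Invented stand-ins for the real constants.
constexpr int kCacheIndexShift = 2;            // Name::kHashShift stand-in
constexpr int kPrimaryTableSize = 2048;
constexpr int kSecondaryTableSize = 512;
constexpr uint32_t kFlagsNotUsedInLookup = 0;  // keep all flag bits here

int PrimaryOffset(uint32_t name_hash_field, uint32_t flags,
                  uint32_t map_low32bits) {
  uint32_t iflags = flags & ~kFlagsNotUsedInLookup;
  uint32_t key = (map_low32bits + name_hash_field) ^ iflags;
  return key & ((kPrimaryTableSize - 1) << kCacheIndexShift);
}

int SecondaryOffset(uint32_t name_low32bits, uint32_t flags, int seed) {
  uint32_t iflags = flags & ~kFlagsNotUsedInLookup;
  uint32_t key = (static_cast<uint32_t>(seed) - name_low32bits) + iflags;
  return key & ((kSecondaryTableSize - 1) << kCacheIndexShift);
}
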
+ static Code* FindPreMonomorphic(Isolate* isolate, Code::Kind kind, + ExtraICState extra_ic_state); + + // Named + static Handle<Code> ComputeLoad(Isolate* isolate, InlineCacheState ic_state, + ExtraICState extra_state); + static Handle<Code> ComputeStore(Isolate* isolate, InlineCacheState ic_state, + ExtraICState extra_state); + + static Handle<Code> ComputeMonomorphic(Code::Kind kind, Handle<Name> name, + Handle<HeapType> type, + Handle<Code> handler, + ExtraICState extra_ic_state); + static Handle<Code> ComputePolymorphic(Code::Kind kind, TypeHandleList* types, + CodeHandleList* handlers, + int number_of_valid_maps, + Handle<Name> name, + ExtraICState extra_ic_state); + + // Keyed + static Handle<Code> ComputeKeyedLoadMonomorphic(Handle<Map> receiver_map); + + static Handle<Code> ComputeKeyedStoreMonomorphic( + Handle<Map> receiver_map, StrictMode strict_mode, + KeyedAccessStoreMode store_mode); + static Handle<Code> ComputeKeyedLoadPolymorphic(MapHandleList* receiver_maps); + static Handle<Code> ComputeKeyedStorePolymorphic( + MapHandleList* receiver_maps, KeyedAccessStoreMode store_mode, + StrictMode strict_mode); + + // Compare nil + static Handle<Code> ComputeCompareNil(Handle<Map> receiver_map, + CompareNilICStub* stub); + + + private: + PropertyICCompiler(Isolate* isolate, Code::Kind kind, + ExtraICState extra_ic_state = kNoExtraICState, + CacheHolderFlag cache_holder = kCacheOnReceiver) + : PropertyAccessCompiler(isolate, kind, cache_holder), + extra_ic_state_(extra_ic_state) {} + + static Handle<Code> Find(Handle<Name> name, Handle<Map> stub_holder_map, + Code::Kind kind, + ExtraICState extra_ic_state = kNoExtraICState, + CacheHolderFlag cache_holder = kCacheOnReceiver); Handle<Code> CompileLoadInitialize(Code::Flags flags); Handle<Code> CompileLoadPreMonomorphic(Code::Flags flags); Handle<Code> CompileLoadMegamorphic(Code::Flags flags); - Handle<Code> CompileStoreInitialize(Code::Flags flags); Handle<Code> CompileStorePreMonomorphic(Code::Flags flags); Handle<Code> CompileStoreGeneric(Code::Flags flags); Handle<Code> CompileStoreMegamorphic(Code::Flags flags); - // Static functions for generating parts of stubs. - static void GenerateLoadGlobalFunctionPrototype(MacroAssembler* masm, - int index, - Register prototype); + Handle<Code> CompileMonomorphic(Handle<HeapType> type, Handle<Code> handler, + Handle<Name> name, IcCheckType check); + Handle<Code> CompilePolymorphic(TypeHandleList* types, + CodeHandleList* handlers, Handle<Name> name, + Code::StubType type, IcCheckType check); + + Handle<Code> CompileKeyedStoreMonomorphic(Handle<Map> receiver_map, + KeyedAccessStoreMode store_mode); + Handle<Code> CompileKeyedStorePolymorphic(MapHandleList* receiver_maps, + KeyedAccessStoreMode store_mode); + Handle<Code> CompileKeyedStorePolymorphic(MapHandleList* receiver_maps, + CodeHandleList* handler_stubs, + MapHandleList* transitioned_maps); + + bool IncludesNumberType(TypeHandleList* types); + + Handle<Code> GetCode(Code::Kind kind, Code::StubType type, Handle<Name> name, + InlineCacheState state = MONOMORPHIC); + + Logger::LogEventsAndTags log_kind(Handle<Code> code) { + if (kind() == Code::LOAD_IC) { + return code->ic_state() == MONOMORPHIC ? Logger::LOAD_IC_TAG + : Logger::LOAD_POLYMORPHIC_IC_TAG; + } else if (kind() == Code::KEYED_LOAD_IC) { + return code->ic_state() == MONOMORPHIC + ? Logger::KEYED_LOAD_IC_TAG + : Logger::KEYED_LOAD_POLYMORPHIC_IC_TAG; + } else if (kind() == Code::STORE_IC) { + return code->ic_state() == MONOMORPHIC ? 
Logger::STORE_IC_TAG + : Logger::STORE_POLYMORPHIC_IC_TAG; + } else { + DCHECK_EQ(Code::KEYED_STORE_IC, kind()); + return code->ic_state() == MONOMORPHIC + ? Logger::KEYED_STORE_IC_TAG + : Logger::KEYED_STORE_POLYMORPHIC_IC_TAG; + } + } + + const ExtraICState extra_ic_state_; +}; + + +class PropertyHandlerCompiler : public PropertyAccessCompiler { + public: + static Handle<Code> Find(Handle<Name> name, Handle<Map> map, Code::Kind kind, + CacheHolderFlag cache_holder, Code::StubType type); + + protected: + PropertyHandlerCompiler(Isolate* isolate, Code::Kind kind, + Handle<HeapType> type, Handle<JSObject> holder, + CacheHolderFlag cache_holder) + : PropertyAccessCompiler(isolate, kind, cache_holder), + type_(type), + holder_(holder) {} + + virtual ~PropertyHandlerCompiler() {} + + virtual Register FrontendHeader(Register object_reg, Handle<Name> name, + Label* miss) { + UNREACHABLE(); + return receiver(); + } + + virtual void FrontendFooter(Handle<Name> name, Label* miss) { UNREACHABLE(); } + + Register Frontend(Register object_reg, Handle<Name> name); + void NonexistentFrontendHeader(Handle<Name> name, Label* miss, + Register scratch1, Register scratch2); + + // TODO(verwaest): Make non-static. + static void GenerateFastApiCall(MacroAssembler* masm, + const CallOptimization& optimization, + Handle<Map> receiver_map, Register receiver, + Register scratch, bool is_store, int argc, + Register* values); // Helper function used to check that the dictionary doesn't contain // the property. This function may return false negatives, so miss_label @@ -314,35 +413,6 @@ class StubCompiler BASE_EMBEDDED { Register r0, Register r1); - // Generates prototype loading code that uses the objects from the - // context we were in when this function was called. If the context - // has changed, a jump to miss is performed. This ties the generated - // code to a particular context and so must not be used in cases - // where the generated code is not allowed to have references to - // objects from a context. - static void GenerateDirectLoadGlobalFunctionPrototype(MacroAssembler* masm, - int index, - Register prototype, - Label* miss); - - static void GenerateFastPropertyLoad(MacroAssembler* masm, - Register dst, - Register src, - bool inobject, - int index, - Representation representation); - - static void GenerateLoadArrayLength(MacroAssembler* masm, - Register receiver, - Register scratch, - Label* miss_label); - - static void GenerateLoadFunctionPrototype(MacroAssembler* masm, - Register receiver, - Register scratch1, - Register scratch2, - Label* miss_label); - // Generate code to check that a global property cell is empty. Create // the property cell at compilation time if no cell exists for the // property. @@ -352,8 +422,6 @@ class StubCompiler BASE_EMBEDDED { Register scratch, Label* miss); - static void TailCallBuiltin(MacroAssembler* masm, Builtins::Name name); - // Generates code that verifies that the property holder has not changed // (checking maps of objects in the prototype chain for fast and global // objects or doing negative lookup for slow objects, ensures that the @@ -366,355 +434,163 @@ class StubCompiler BASE_EMBEDDED { // register is only clobbered if it is the same as the holder register. The // function returns a register containing the holder - either object_reg or // holder_reg.
- Register CheckPrototypes(Handle<HeapType> type, - Register object_reg, - Handle<JSObject> holder, - Register holder_reg, - Register scratch1, - Register scratch2, - Handle<Name> name, - Label* miss, + Register CheckPrototypes(Register object_reg, Register holder_reg, + Register scratch1, Register scratch2, + Handle<Name> name, Label* miss, PrototypeCheckType check = CHECK_ALL_MAPS); - static void GenerateFastApiCall(MacroAssembler* masm, - const CallOptimization& optimization, - Handle<Map> receiver_map, - Register receiver, - Register scratch, - bool is_store, - int argc, - Register* values); - - protected: - Handle<Code> GetCodeWithFlags(Code::Flags flags, const char* name); - Handle<Code> GetCodeWithFlags(Code::Flags flags, Handle<Name> name); - - ExtraICState extra_state() { return extra_ic_state_; } - - MacroAssembler* masm() { return &masm_; } - - static void LookupPostInterceptor(Handle<JSObject> holder, - Handle<Name> name, - LookupResult* lookup); - - Isolate* isolate() { return isolate_; } - Heap* heap() { return isolate()->heap(); } - Factory* factory() { return isolate()->factory(); } - - static void GenerateTailCall(MacroAssembler* masm, Handle<Code> code); + Handle<Code> GetCode(Code::Kind kind, Code::StubType type, Handle<Name> name); + void set_type_for_object(Handle<Object> object) { + type_ = IC::CurrentTypeOf(object, isolate()); + } + void set_holder(Handle<JSObject> holder) { holder_ = holder; } + Handle<HeapType> type() const { return type_; } + Handle<JSObject> holder() const { return holder_; } private: - Isolate* isolate_; - const ExtraICState extra_ic_state_; - MacroAssembler masm_; + Handle<HeapType> type_; + Handle<JSObject> holder_; }; -enum FrontendCheckType { PERFORM_INITIAL_CHECKS, SKIP_INITIAL_CHECKS }; - - -class BaseLoadStoreStubCompiler: public StubCompiler { +class NamedLoadHandlerCompiler : public PropertyHandlerCompiler { public: - BaseLoadStoreStubCompiler(Isolate* isolate, - Code::Kind kind, - ExtraICState extra_ic_state = kNoExtraICState, - InlineCacheHolderFlag cache_holder = OWN_MAP) - : StubCompiler(isolate, extra_ic_state), - kind_(kind), - cache_holder_(cache_holder) { - InitializeRegisters(); - } - virtual ~BaseLoadStoreStubCompiler() { } - - Handle<Code> CompileMonomorphicIC(Handle<HeapType> type, - Handle<Code> handler, - Handle<Name> name); - - Handle<Code> CompilePolymorphicIC(TypeHandleList* types, - CodeHandleList* handlers, - Handle<Name> name, - Code::StubType type, - IcCheckType check); - - static Builtins::Name MissBuiltin(Code::Kind kind) { - switch (kind) { - case Code::LOAD_IC: return Builtins::kLoadIC_Miss; - case Code::STORE_IC: return Builtins::kStoreIC_Miss; - case Code::KEYED_LOAD_IC: return Builtins::kKeyedLoadIC_Miss; - case Code::KEYED_STORE_IC: return Builtins::kKeyedStoreIC_Miss; - default: UNREACHABLE(); - } - return Builtins::kLoadIC_Miss; - } - - protected: - virtual Register HandlerFrontendHeader(Handle<HeapType> type, - Register object_reg, - Handle<JSObject> holder, - Handle<Name> name, - Label* miss) = 0; - - virtual void HandlerFrontendFooter(Handle<Name> name, Label* miss) = 0; - - Register HandlerFrontend(Handle<HeapType> type, - Register object_reg, + NamedLoadHandlerCompiler(Isolate* isolate, Handle<HeapType> type, Handle<JSObject> holder, - Handle<Name> name); + CacheHolderFlag cache_holder) + : PropertyHandlerCompiler(isolate, Code::LOAD_IC, type, holder, + cache_holder) {} - Handle<Code> GetCode(Code::Kind kind, - Code::StubType type, - Handle<Name> name); + virtual ~NamedLoadHandlerCompiler() {} - 
Handle<Code> GetICCode(Code::Kind kind, - Code::StubType type, - Handle<Name> name, - InlineCacheState state = MONOMORPHIC); - Code::Kind kind() { return kind_; } + Handle<Code> CompileLoadField(Handle<Name> name, FieldIndex index); - Logger::LogEventsAndTags log_kind(Handle<Code> code) { - if (!code->is_inline_cache_stub()) return Logger::STUB_TAG; - if (kind_ == Code::LOAD_IC) { - return code->ic_state() == MONOMORPHIC - ? Logger::LOAD_IC_TAG : Logger::LOAD_POLYMORPHIC_IC_TAG; - } else if (kind_ == Code::KEYED_LOAD_IC) { - return code->ic_state() == MONOMORPHIC - ? Logger::KEYED_LOAD_IC_TAG : Logger::KEYED_LOAD_POLYMORPHIC_IC_TAG; - } else if (kind_ == Code::STORE_IC) { - return code->ic_state() == MONOMORPHIC - ? Logger::STORE_IC_TAG : Logger::STORE_POLYMORPHIC_IC_TAG; - } else { - return code->ic_state() == MONOMORPHIC - ? Logger::KEYED_STORE_IC_TAG : Logger::KEYED_STORE_POLYMORPHIC_IC_TAG; - } - } - void JitEvent(Handle<Name> name, Handle<Code> code); - - Register receiver() { return registers_[0]; } - Register name() { return registers_[1]; } - Register scratch1() { return registers_[2]; } - Register scratch2() { return registers_[3]; } - Register scratch3() { return registers_[4]; } - - void InitializeRegisters(); - - bool IncludesNumberType(TypeHandleList* types); - - Code::Kind kind_; - InlineCacheHolderFlag cache_holder_; - Register* registers_; -}; - - -class LoadStubCompiler: public BaseLoadStoreStubCompiler { - public: - LoadStubCompiler(Isolate* isolate, - ExtraICState extra_ic_state = kNoExtraICState, - InlineCacheHolderFlag cache_holder = OWN_MAP, - Code::Kind kind = Code::LOAD_IC) - : BaseLoadStoreStubCompiler(isolate, kind, extra_ic_state, - cache_holder) { } - virtual ~LoadStubCompiler() { } - - Handle<Code> CompileLoadField(Handle<HeapType> type, - Handle<JSObject> holder, - Handle<Name> name, - PropertyIndex index, - Representation representation); - - Handle<Code> CompileLoadCallback(Handle<HeapType> type, - Handle<JSObject> holder, - Handle<Name> name, + Handle<Code> CompileLoadCallback(Handle<Name> name, Handle<ExecutableAccessorInfo> callback); - Handle<Code> CompileLoadCallback(Handle<HeapType> type, - Handle<JSObject> holder, - Handle<Name> name, + Handle<Code> CompileLoadCallback(Handle<Name> name, const CallOptimization& call_optimization); - Handle<Code> CompileLoadConstant(Handle<HeapType> type, - Handle<JSObject> holder, - Handle<Name> name, - Handle<Object> value); + Handle<Code> CompileLoadConstant(Handle<Name> name, int constant_index); - Handle<Code> CompileLoadInterceptor(Handle<HeapType> type, - Handle<JSObject> holder, - Handle<Name> name); + Handle<Code> CompileLoadInterceptor(Handle<Name> name); - Handle<Code> CompileLoadViaGetter(Handle<HeapType> type, - Handle<JSObject> holder, - Handle<Name> name, + Handle<Code> CompileLoadViaGetter(Handle<Name> name, Handle<JSFunction> getter); - static void GenerateLoadViaGetter(MacroAssembler* masm, - Handle<HeapType> type, + Handle<Code> CompileLoadGlobal(Handle<PropertyCell> cell, Handle<Name> name, + bool is_configurable); + + // Static interface + static Handle<Code> ComputeLoadNonexistent(Handle<Name> name, + Handle<HeapType> type); + + static void GenerateLoadViaGetter(MacroAssembler* masm, Handle<HeapType> type, Register receiver, Handle<JSFunction> getter); static void GenerateLoadViaGetterForDeopt(MacroAssembler* masm) { - GenerateLoadViaGetter( - masm, Handle<HeapType>::null(), no_reg, Handle<JSFunction>()); + GenerateLoadViaGetter(masm, Handle<HeapType>::null(), no_reg, + Handle<JSFunction>()); } - 
Handle<Code> CompileLoadNonexistent(Handle<HeapType> type, - Handle<JSObject> last, - Handle<Name> name); + static void GenerateLoadFunctionPrototype(MacroAssembler* masm, + Register receiver, + Register scratch1, + Register scratch2, + Label* miss_label); - Handle<Code> CompileLoadGlobal(Handle<HeapType> type, - Handle<GlobalObject> holder, - Handle<PropertyCell> cell, - Handle<Name> name, - bool is_dont_delete); + // These constants describe the structure of the interceptor arguments on the + // stack. The arguments are pushed by the (platform-specific) + // PushInterceptorArguments and read by LoadPropertyWithInterceptorOnly and + // LoadWithInterceptor. + static const int kInterceptorArgsNameIndex = 0; + static const int kInterceptorArgsInfoIndex = 1; + static const int kInterceptorArgsThisIndex = 2; + static const int kInterceptorArgsHolderIndex = 3; + static const int kInterceptorArgsLength = 4; protected: - ContextualMode contextual_mode() { - return LoadIC::GetContextualMode(extra_state()); - } + virtual Register FrontendHeader(Register object_reg, Handle<Name> name, + Label* miss); - virtual Register HandlerFrontendHeader(Handle<HeapType> type, - Register object_reg, - Handle<JSObject> holder, - Handle<Name> name, - Label* miss); + virtual void FrontendFooter(Handle<Name> name, Label* miss); - virtual void HandlerFrontendFooter(Handle<Name> name, Label* miss); - - Register CallbackHandlerFrontend(Handle<HeapType> type, - Register object_reg, - Handle<JSObject> holder, - Handle<Name> name, - Handle<Object> callback); - void NonexistentHandlerFrontend(Handle<HeapType> type, - Handle<JSObject> last, - Handle<Name> name); - - void GenerateLoadField(Register reg, - Handle<JSObject> holder, - PropertyIndex field, - Representation representation); + private: + Handle<Code> CompileLoadNonexistent(Handle<Name> name); void GenerateLoadConstant(Handle<Object> value); void GenerateLoadCallback(Register reg, Handle<ExecutableAccessorInfo> callback); void GenerateLoadCallback(const CallOptimization& call_optimization, Handle<Map> receiver_map); void GenerateLoadInterceptor(Register holder_reg, - Handle<Object> object, - Handle<JSObject> holder, LookupResult* lookup, Handle<Name> name); void GenerateLoadPostInterceptor(Register reg, - Handle<JSObject> interceptor_holder, Handle<Name> name, LookupResult* lookup); - private: - static Register* registers(); - Register scratch4() { return registers_[5]; } - friend class BaseLoadStoreStubCompiler; -}; - - -class KeyedLoadStubCompiler: public LoadStubCompiler { - public: - KeyedLoadStubCompiler(Isolate* isolate, - ExtraICState extra_ic_state = kNoExtraICState, - InlineCacheHolderFlag cache_holder = OWN_MAP) - : LoadStubCompiler(isolate, extra_ic_state, cache_holder, - Code::KEYED_LOAD_IC) { } - - Handle<Code> CompileLoadElement(Handle<Map> receiver_map); - - void CompileElementHandlers(MapHandleList* receiver_maps, - CodeHandleList* handlers); + // Generates prototype loading code that uses the objects from the + // context we were in when this function was called. If the context + // has changed, a jump to miss is performed. This ties the generated + // code to a particular context and so must not be used in cases + // where the generated code is not allowed to have references to + // objects from a context. 
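// ---------------------------------------------------------------------------
// [Illustrative sketch, not part of the patch] The kInterceptorArgs* constants
// above fix the word-for-word stack layout shared by PushInterceptorArguments
// and the two interceptor load paths. A hypothetical reader over that block
// (the name InterceptorArg and the |args| base pointer are illustration only;
// |args| points at the slot holding the pushed name, index 0):

static Object* InterceptorArg(Object** args, int index) {
  DCHECK(0 <= index &&
         index < NamedLoadHandlerCompiler::kInterceptorArgsLength);
  return args[index];
}
// InterceptorArg(args, kInterceptorArgsNameIndex)   -> property name
// InterceptorArg(args, kInterceptorArgsInfoIndex)   -> InterceptorInfo
// InterceptorArg(args, kInterceptorArgsThisIndex)   -> receiver
// InterceptorArg(args, kInterceptorArgsHolderIndex) -> holder
// ---------------------------------------------------------------------------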
+ static void GenerateDirectLoadGlobalFunctionPrototype(MacroAssembler* masm, + int index, + Register prototype, + Label* miss); - static void GenerateLoadDictionaryElement(MacroAssembler* masm); - private: - static Register* registers(); - friend class BaseLoadStoreStubCompiler; + Register scratch4() { return registers_[5]; } }; -class StoreStubCompiler: public BaseLoadStoreStubCompiler { +class NamedStoreHandlerCompiler : public PropertyHandlerCompiler { public: - StoreStubCompiler(Isolate* isolate, - ExtraICState extra_ic_state, - Code::Kind kind = Code::STORE_IC) - : BaseLoadStoreStubCompiler(isolate, kind, extra_ic_state) {} + explicit NamedStoreHandlerCompiler(Isolate* isolate, Handle<HeapType> type, + Handle<JSObject> holder) + : PropertyHandlerCompiler(isolate, Code::STORE_IC, type, holder, + kCacheOnReceiver) {} - virtual ~StoreStubCompiler() { } + virtual ~NamedStoreHandlerCompiler() {} - Handle<Code> CompileStoreTransition(Handle<JSObject> object, - LookupResult* lookup, - Handle<Map> transition, + Handle<Code> CompileStoreTransition(Handle<Map> transition, Handle<Name> name); - - Handle<Code> CompileStoreField(Handle<JSObject> object, - LookupResult* lookup, - Handle<Name> name); - - Handle<Code> CompileStoreArrayLength(Handle<JSObject> object, - LookupResult* lookup, - Handle<Name> name); - - void GenerateStoreArrayLength(); - - void GenerateNegativeHolderLookup(MacroAssembler* masm, - Handle<JSObject> holder, - Register holder_reg, - Handle<Name> name, - Label* miss); - - void GenerateStoreTransition(MacroAssembler* masm, - Handle<JSObject> object, - LookupResult* lookup, - Handle<Map> transition, - Handle<Name> name, - Register receiver_reg, - Register name_reg, - Register value_reg, - Register scratch1, - Register scratch2, - Register scratch3, - Label* miss_label, - Label* slow); - - void GenerateStoreField(MacroAssembler* masm, - Handle<JSObject> object, - LookupResult* lookup, - Register receiver_reg, - Register name_reg, - Register value_reg, - Register scratch1, - Register scratch2, - Label* miss_label); - - Handle<Code> CompileStoreCallback(Handle<JSObject> object, - Handle<JSObject> holder, - Handle<Name> name, + Handle<Code> CompileStoreField(LookupResult* lookup, Handle<Name> name); + Handle<Code> CompileStoreCallback(Handle<JSObject> object, Handle<Name> name, Handle<ExecutableAccessorInfo> callback); - - Handle<Code> CompileStoreCallback(Handle<JSObject> object, - Handle<JSObject> holder, - Handle<Name> name, + Handle<Code> CompileStoreCallback(Handle<JSObject> object, Handle<Name> name, const CallOptimization& call_optimization); + Handle<Code> CompileStoreViaSetter(Handle<JSObject> object, Handle<Name> name, + Handle<JSFunction> setter); + Handle<Code> CompileStoreInterceptor(Handle<Name> name); static void GenerateStoreViaSetter(MacroAssembler* masm, - Handle<HeapType> type, - Register receiver, + Handle<HeapType> type, Register receiver, Handle<JSFunction> setter); static void GenerateStoreViaSetterForDeopt(MacroAssembler* masm) { - GenerateStoreViaSetter( - masm, Handle<HeapType>::null(), no_reg, Handle<JSFunction>()); + GenerateStoreViaSetter(masm, Handle<HeapType>::null(), no_reg, + Handle<JSFunction>()); } - Handle<Code> CompileStoreViaSetter(Handle<JSObject> object, - Handle<JSObject> holder, - Handle<Name> name, - Handle<JSFunction> setter); + protected: + virtual Register FrontendHeader(Register object_reg, Handle<Name> name, + Label* miss); + + virtual void FrontendFooter(Handle<Name> name, Label* miss); + void GenerateRestoreName(Label* label, 
Handle<Name> name); - Handle<Code> CompileStoreInterceptor(Handle<JSObject> object, - Handle<Name> name); + private: + void GenerateStoreTransition(Handle<Map> transition, Handle<Name> name, + Register receiver_reg, Register name_reg, + Register value_reg, Register scratch1, + Register scratch2, Register scratch3, + Label* miss_label, Label* slow); + + void GenerateStoreField(LookupResult* lookup, Register value_reg, + Label* miss_label); static Builtins::Name SlowBuiltin(Code::Kind kind) { switch (kind) { @@ -725,51 +601,24 @@ class StoreStubCompiler: public BaseLoadStoreStubCompiler { return Builtins::kStoreIC_Slow; } - protected: - virtual Register HandlerFrontendHeader(Handle<HeapType> type, - Register object_reg, - Handle<JSObject> holder, - Handle<Name> name, - Label* miss); - - virtual void HandlerFrontendFooter(Handle<Name> name, Label* miss); - void GenerateRestoreName(MacroAssembler* masm, - Label* label, - Handle<Name> name); - - private: - static Register* registers(); static Register value(); - friend class BaseLoadStoreStubCompiler; }; -class KeyedStoreStubCompiler: public StoreStubCompiler { +class ElementHandlerCompiler : public PropertyHandlerCompiler { public: - KeyedStoreStubCompiler(Isolate* isolate, - ExtraICState extra_ic_state) - : StoreStubCompiler(isolate, extra_ic_state, Code::KEYED_STORE_IC) {} + explicit ElementHandlerCompiler(Isolate* isolate) + : PropertyHandlerCompiler(isolate, Code::KEYED_LOAD_IC, + Handle<HeapType>::null(), + Handle<JSObject>::null(), kCacheOnReceiver) {} - Handle<Code> CompileStoreElement(Handle<Map> receiver_map); + virtual ~ElementHandlerCompiler() {} - Handle<Code> CompileStorePolymorphic(MapHandleList* receiver_maps, - CodeHandleList* handler_stubs, - MapHandleList* transitioned_maps); - - Handle<Code> CompileStoreElementPolymorphic(MapHandleList* receiver_maps); + void CompileElementHandlers(MapHandleList* receiver_maps, + CodeHandleList* handlers); + static void GenerateLoadDictionaryElement(MacroAssembler* masm); static void GenerateStoreDictionaryElement(MacroAssembler* masm); - - private: - static Register* registers(); - - KeyedAccessStoreMode store_mode() { - return KeyedStoreIC::GetKeyedAccessStoreMode(extra_state()); - } - - Register transition_map() { return scratch1(); } - - friend class BaseLoadStoreStubCompiler; }; @@ -785,7 +634,7 @@ class CallOptimization BASE_EMBEDDED { } Handle<JSFunction> constant_function() const { - ASSERT(is_constant_call()); + DCHECK(is_constant_call()); return constant_function_; } @@ -794,12 +643,12 @@ class CallOptimization BASE_EMBEDDED { } Handle<FunctionTemplateInfo> expected_receiver_type() const { - ASSERT(is_simple_api_call()); + DCHECK(is_simple_api_call()); return expected_receiver_type_; } Handle<CallHandlerInfo> api_call_info() const { - ASSERT(is_simple_api_call()); + DCHECK(is_simple_api_call()); return api_call_info_; } diff --git a/deps/v8/src/symbol.js b/deps/v8/src/symbol.js index 1c483020206..ce3327bbdba 100644 --- a/deps/v8/src/symbol.js +++ b/deps/v8/src/symbol.js @@ -82,7 +82,6 @@ function ObjectGetOwnPropertySymbols(obj) { //------------------------------------------------------------------- -var symbolCreate = InternalSymbol("Symbol.create"); var symbolHasInstance = InternalSymbol("Symbol.hasInstance"); var symbolIsConcatSpreadable = InternalSymbol("Symbol.isConcatSpreadable"); var symbolIsRegExp = InternalSymbol("Symbol.isRegExp"); @@ -100,12 +99,12 @@ function SetUpSymbol() { %FunctionSetPrototype($Symbol, new $Object()); InstallConstants($Symbol, $Array( - "create", 
symbolCreate, - "hasInstance", symbolHasInstance, - "isConcatSpreadable", symbolIsConcatSpreadable, - "isRegExp", symbolIsRegExp, + // TODO(rossberg): expose when implemented. + // "hasInstance", symbolHasInstance, + // "isConcatSpreadable", symbolIsConcatSpreadable, + // "isRegExp", symbolIsRegExp, "iterator", symbolIterator, - "toStringTag", symbolToStringTag, + // "toStringTag", symbolToStringTag, "unscopables", symbolUnscopables )); InstallFunctions($Symbol, DONT_ENUM, $Array( @@ -113,7 +112,7 @@ function SetUpSymbol() { "keyFor", SymbolKeyFor )); - %SetProperty($Symbol.prototype, "constructor", $Symbol, DONT_ENUM); + %AddNamedProperty($Symbol.prototype, "constructor", $Symbol, DONT_ENUM); InstallFunctions($Symbol.prototype, DONT_ENUM, $Array( "toString", SymbolToString, "valueOf", SymbolValueOf diff --git a/deps/v8/src/third_party/kernel/tools/perf/util/jitdump.h b/deps/v8/src/third_party/kernel/tools/perf/util/jitdump.h new file mode 100644 index 00000000000..85d51b79af5 --- /dev/null +++ b/deps/v8/src/third_party/kernel/tools/perf/util/jitdump.h @@ -0,0 +1,83 @@ +#ifndef JITDUMP_H +#define JITDUMP_H + +#include <sys/time.h> +#include <time.h> +#include <stdint.h> + +/* JiTD */ +#define JITHEADER_MAGIC 0x4A695444 +#define JITHEADER_MAGIC_SW 0x4454694A + +#define PADDING_8ALIGNED(x) ((((x) + 7) & 7) ^ 7) + +#define JITHEADER_VERSION 1 + +struct jitheader { + uint32_t magic; /* characters "jItD" */ + uint32_t version; /* header version */ + uint32_t total_size; /* total size of header */ + uint32_t elf_mach; /* elf mach target */ + uint32_t pad1; /* reserved */ + uint32_t pid; /* JIT process id */ + uint64_t timestamp; /* timestamp */ +}; + +enum jit_record_type { + JIT_CODE_LOAD = 0, + JIT_CODE_MOVE = 1, + JIT_CODE_DEBUG_INFO = 2, + JIT_CODE_CLOSE = 3, + JIT_CODE_MAX +}; + +/* record prefix (mandatory in each record) */ +struct jr_prefix { + uint32_t id; + uint32_t total_size; + uint64_t timestamp; +}; + +struct jr_code_load { + struct jr_prefix p; + + uint32_t pid; + uint32_t tid; + uint64_t vma; + uint64_t code_addr; + uint64_t code_size; + uint64_t code_index; +}; + +struct jr_code_close { + struct jr_prefix p; +}; + +struct jr_code_move { + struct jr_prefix p; + + uint32_t pid; + uint32_t tid; + uint64_t vma; + uint64_t old_code_addr; + uint64_t new_code_addr; + uint64_t code_size; + uint64_t code_index; +}; + +struct jr_code_debug_info { + struct jr_prefix p; + + uint64_t code_addr; + uint64_t nr_entry; +}; + +union jr_entry { + struct jr_code_debug_info info; + struct jr_code_close close; + struct jr_code_load load; + struct jr_code_move move; + struct jr_prefix prefix; +}; + +#endif /* !JITDUMP_H */ diff --git a/deps/v8/src/third_party/vtune/vtune-jit.cc b/deps/v8/src/third_party/vtune/vtune-jit.cc index 023dd1864be..d62dcfbb719 100644 --- a/deps/v8/src/third_party/vtune/vtune-jit.cc +++ b/deps/v8/src/third_party/vtune/vtune-jit.cc @@ -192,13 +192,12 @@ void VTUNEJITInterface::event_handler(const v8::JitCodeEvent* event) { jmethod.method_size = static_cast<unsigned int>(event->code_len); jmethod.method_name = temp_method_name; - Handle<Script> script = event->script; + Handle<UnboundScript> script = event->script; if (*script != NULL) { // Get the source file name and set it to jmethod.source_file_name - if ((*script->GetUnboundScript()->GetScriptName())->IsString()) { - Handle<String> script_name = - script->GetUnboundScript()->GetScriptName()->ToString(); + if ((*script->GetScriptName())->IsString()) { + Handle<String> script_name = script->GetScriptName()->ToString(); 
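// ---------------------------------------------------------------------------
// [Illustrative sketch, not part of the patch] The new jitdump.h above mirrors
// the on-disk record layout consumed by Linux perf when profiling JIT code.
// A minimal producer might emit the file header plus one JIT_CODE_LOAD record
// as below. Assumption (perf jitdump convention, not stated in the header):
// the fixed-size record is followed by the NUL-terminated function name and
// the raw code bytes, and total_size covers all three parts.

#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include "jitdump.h"

static void WriteFileHeader(FILE* f, uint32_t elf_mach, uint64_t timestamp) {
  struct jitheader h;
  memset(&h, 0, sizeof(h));
  h.magic = JITHEADER_MAGIC;  /* "JiTD" */
  h.version = JITHEADER_VERSION;
  h.total_size = sizeof(h);
  h.elf_mach = elf_mach;  /* e.g. EM_X86_64 for the host */
  h.pid = getpid();
  h.timestamp = timestamp;
  fwrite(&h, sizeof(h), 1, f);
}

static void WriteCodeLoad(FILE* f, const char* name, const void* code,
                          uint64_t code_size, uint64_t code_index,
                          uint64_t timestamp) {
  struct jr_code_load r;
  memset(&r, 0, sizeof(r));
  r.p.id = JIT_CODE_LOAD;
  r.p.total_size = (uint32_t)(sizeof(r) + strlen(name) + 1 + code_size);
  r.p.timestamp = timestamp;
  r.pid = getpid();
  r.tid = getpid();  /* sketch assumes a single-threaded producer */
  r.vma = r.code_addr = (uint64_t)(uintptr_t)code;
  r.code_size = code_size;
  r.code_index = code_index;
  fwrite(&r, sizeof(r), 1, f);
  fwrite(name, strlen(name) + 1, 1, f);  /* NUL-terminated method name */
  fwrite(code, code_size, 1, f);         /* the machine code itself */
}
// ---------------------------------------------------------------------------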
temp_file_name = new char[script_name->Utf8Length() + 1]; script_name->WriteUtf8(temp_file_name); jmethod.source_file_name = temp_file_name; @@ -225,7 +224,7 @@ void VTUNEJITInterface::event_handler(const v8::JitCodeEvent* event) { jmethod.line_number_table[index].Offset = static_cast<unsigned int>(Iter->pc_); jmethod.line_number_table[index++].LineNumber = - script->GetUnboundScript()->GetLineNumber(Iter->pos_)+1; + script->GetLineNumber(Iter->pos_) + 1; } GetEntries()->erase(event->code_start); } diff --git a/deps/v8/src/token.cc b/deps/v8/src/token.cc index 2215d39682d..5dc67bb7212 100644 --- a/deps/v8/src/token.cc +++ b/deps/v8/src/token.cc @@ -2,8 +2,8 @@ // Use of this source code is governed by a BSD-style license that can be // found in the LICENSE file. -#include "../include/v8stdint.h" -#include "token.h" +#include "include/v8stdint.h" +#include "src/token.h" namespace v8 { namespace internal { diff --git a/deps/v8/src/token.h b/deps/v8/src/token.h index 822608d6980..3535e343af8 100644 --- a/deps/v8/src/token.h +++ b/deps/v8/src/token.h @@ -5,7 +5,7 @@ #ifndef V8_TOKEN_H_ #define V8_TOKEN_H_ -#include "checks.h" +#include "src/base/logging.h" namespace v8 { namespace internal { @@ -25,138 +25,139 @@ namespace internal { #define IGNORE_TOKEN(name, string, precedence) -#define TOKEN_LIST(T, K) \ - /* End of source indicator. */ \ - T(EOS, "EOS", 0) \ - \ - /* Punctuators (ECMA-262, section 7.7, page 15). */ \ - T(LPAREN, "(", 0) \ - T(RPAREN, ")", 0) \ - T(LBRACK, "[", 0) \ - T(RBRACK, "]", 0) \ - T(LBRACE, "{", 0) \ - T(RBRACE, "}", 0) \ - T(COLON, ":", 0) \ - T(SEMICOLON, ";", 0) \ - T(PERIOD, ".", 0) \ - T(CONDITIONAL, "?", 3) \ - T(INC, "++", 0) \ - T(DEC, "--", 0) \ - \ - /* Assignment operators. */ \ - /* IsAssignmentOp() and Assignment::is_compound() relies on */ \ - /* this block of enum values being contiguous and sorted in the */ \ - /* same order! */ \ - T(INIT_VAR, "=init_var", 2) /* AST-use only. */ \ - T(INIT_LET, "=init_let", 2) /* AST-use only. */ \ - T(INIT_CONST, "=init_const", 2) /* AST-use only. */ \ - T(INIT_CONST_LEGACY, "=init_const_legacy", 2) /* AST-use only. */ \ - T(ASSIGN, "=", 2) \ - T(ASSIGN_BIT_OR, "|=", 2) \ - T(ASSIGN_BIT_XOR, "^=", 2) \ - T(ASSIGN_BIT_AND, "&=", 2) \ - T(ASSIGN_SHL, "<<=", 2) \ - T(ASSIGN_SAR, ">>=", 2) \ - T(ASSIGN_SHR, ">>>=", 2) \ - T(ASSIGN_ADD, "+=", 2) \ - T(ASSIGN_SUB, "-=", 2) \ - T(ASSIGN_MUL, "*=", 2) \ - T(ASSIGN_DIV, "/=", 2) \ - T(ASSIGN_MOD, "%=", 2) \ - \ - /* Binary operators sorted by precedence. */ \ - /* IsBinaryOp() relies on this block of enum values */ \ - /* being contiguous and sorted in the same order! */ \ - T(COMMA, ",", 1) \ - T(OR, "||", 4) \ - T(AND, "&&", 5) \ - T(BIT_OR, "|", 6) \ - T(BIT_XOR, "^", 7) \ - T(BIT_AND, "&", 8) \ - T(SHL, "<<", 11) \ - T(SAR, ">>", 11) \ - T(SHR, ">>>", 11) \ - T(ROR, "rotate right", 11) /* only used by Crankshaft */ \ - T(ADD, "+", 12) \ - T(SUB, "-", 12) \ - T(MUL, "*", 13) \ - T(DIV, "/", 13) \ - T(MOD, "%", 13) \ - \ - /* Compare operators sorted by precedence. */ \ - /* IsCompareOp() relies on this block of enum values */ \ - /* being contiguous and sorted in the same order! */ \ - T(EQ, "==", 9) \ - T(NE, "!=", 9) \ - T(EQ_STRICT, "===", 9) \ - T(NE_STRICT, "!==", 9) \ - T(LT, "<", 10) \ - T(GT, ">", 10) \ - T(LTE, "<=", 10) \ - T(GTE, ">=", 10) \ - K(INSTANCEOF, "instanceof", 10) \ - K(IN, "in", 10) \ - \ - /* Unary operators. */ \ - /* IsUnaryOp() relies on this block of enum values */ \ - /* being contiguous and sorted in the same order! 
*/ \ - T(NOT, "!", 0) \ - T(BIT_NOT, "~", 0) \ - K(DELETE, "delete", 0) \ - K(TYPEOF, "typeof", 0) \ - K(VOID, "void", 0) \ - \ - /* Keywords (ECMA-262, section 7.5.2, page 13). */ \ - K(BREAK, "break", 0) \ - K(CASE, "case", 0) \ - K(CATCH, "catch", 0) \ - K(CONTINUE, "continue", 0) \ - K(DEBUGGER, "debugger", 0) \ - K(DEFAULT, "default", 0) \ - /* DELETE */ \ - K(DO, "do", 0) \ - K(ELSE, "else", 0) \ - K(FINALLY, "finally", 0) \ - K(FOR, "for", 0) \ - K(FUNCTION, "function", 0) \ - K(IF, "if", 0) \ - /* IN */ \ - /* INSTANCEOF */ \ - K(NEW, "new", 0) \ - K(RETURN, "return", 0) \ - K(SWITCH, "switch", 0) \ - K(THIS, "this", 0) \ - K(THROW, "throw", 0) \ - K(TRY, "try", 0) \ - /* TYPEOF */ \ - K(VAR, "var", 0) \ - /* VOID */ \ - K(WHILE, "while", 0) \ - K(WITH, "with", 0) \ - \ - /* Literals (ECMA-262, section 7.8, page 16). */ \ - K(NULL_LITERAL, "null", 0) \ - K(TRUE_LITERAL, "true", 0) \ - K(FALSE_LITERAL, "false", 0) \ - T(NUMBER, NULL, 0) \ - T(STRING, NULL, 0) \ - \ - /* Identifiers (not keywords or future reserved words). */ \ - T(IDENTIFIER, NULL, 0) \ - \ - /* Future reserved words (ECMA-262, section 7.6.1.2). */ \ - T(FUTURE_RESERVED_WORD, NULL, 0) \ - T(FUTURE_STRICT_RESERVED_WORD, NULL, 0) \ - K(CONST, "const", 0) \ - K(EXPORT, "export", 0) \ - K(IMPORT, "import", 0) \ - K(LET, "let", 0) \ - K(YIELD, "yield", 0) \ - \ - /* Illegal token - not able to scan. */ \ - T(ILLEGAL, "ILLEGAL", 0) \ - \ - /* Scanner-internal use only. */ \ +#define TOKEN_LIST(T, K) \ + /* End of source indicator. */ \ + T(EOS, "EOS", 0) \ + \ + /* Punctuators (ECMA-262, section 7.7, page 15). */ \ + T(LPAREN, "(", 0) \ + T(RPAREN, ")", 0) \ + T(LBRACK, "[", 0) \ + T(RBRACK, "]", 0) \ + T(LBRACE, "{", 0) \ + T(RBRACE, "}", 0) \ + T(COLON, ":", 0) \ + T(SEMICOLON, ";", 0) \ + T(PERIOD, ".", 0) \ + T(CONDITIONAL, "?", 3) \ + T(INC, "++", 0) \ + T(DEC, "--", 0) \ + T(ARROW, "=>", 0) \ + \ + /* Assignment operators. */ \ + /* IsAssignmentOp() and Assignment::is_compound() relies on */ \ + /* this block of enum values being contiguous and sorted in the */ \ + /* same order! */ \ + T(INIT_VAR, "=init_var", 2) /* AST-use only. */ \ + T(INIT_LET, "=init_let", 2) /* AST-use only. */ \ + T(INIT_CONST, "=init_const", 2) /* AST-use only. */ \ + T(INIT_CONST_LEGACY, "=init_const_legacy", 2) /* AST-use only. */ \ + T(ASSIGN, "=", 2) \ + T(ASSIGN_BIT_OR, "|=", 2) \ + T(ASSIGN_BIT_XOR, "^=", 2) \ + T(ASSIGN_BIT_AND, "&=", 2) \ + T(ASSIGN_SHL, "<<=", 2) \ + T(ASSIGN_SAR, ">>=", 2) \ + T(ASSIGN_SHR, ">>>=", 2) \ + T(ASSIGN_ADD, "+=", 2) \ + T(ASSIGN_SUB, "-=", 2) \ + T(ASSIGN_MUL, "*=", 2) \ + T(ASSIGN_DIV, "/=", 2) \ + T(ASSIGN_MOD, "%=", 2) \ + \ + /* Binary operators sorted by precedence. */ \ + /* IsBinaryOp() relies on this block of enum values */ \ + /* being contiguous and sorted in the same order! */ \ + T(COMMA, ",", 1) \ + T(OR, "||", 4) \ + T(AND, "&&", 5) \ + T(BIT_OR, "|", 6) \ + T(BIT_XOR, "^", 7) \ + T(BIT_AND, "&", 8) \ + T(SHL, "<<", 11) \ + T(SAR, ">>", 11) \ + T(SHR, ">>>", 11) \ + T(ROR, "rotate right", 11) /* only used by Crankshaft */ \ + T(ADD, "+", 12) \ + T(SUB, "-", 12) \ + T(MUL, "*", 13) \ + T(DIV, "/", 13) \ + T(MOD, "%", 13) \ + \ + /* Compare operators sorted by precedence. */ \ + /* IsCompareOp() relies on this block of enum values */ \ + /* being contiguous and sorted in the same order! 
*/ \ + T(EQ, "==", 9) \ + T(NE, "!=", 9) \ + T(EQ_STRICT, "===", 9) \ + T(NE_STRICT, "!==", 9) \ + T(LT, "<", 10) \ + T(GT, ">", 10) \ + T(LTE, "<=", 10) \ + T(GTE, ">=", 10) \ + K(INSTANCEOF, "instanceof", 10) \ + K(IN, "in", 10) \ + \ + /* Unary operators. */ \ + /* IsUnaryOp() relies on this block of enum values */ \ + /* being contiguous and sorted in the same order! */ \ + T(NOT, "!", 0) \ + T(BIT_NOT, "~", 0) \ + K(DELETE, "delete", 0) \ + K(TYPEOF, "typeof", 0) \ + K(VOID, "void", 0) \ + \ + /* Keywords (ECMA-262, section 7.5.2, page 13). */ \ + K(BREAK, "break", 0) \ + K(CASE, "case", 0) \ + K(CATCH, "catch", 0) \ + K(CONTINUE, "continue", 0) \ + K(DEBUGGER, "debugger", 0) \ + K(DEFAULT, "default", 0) \ + /* DELETE */ \ + K(DO, "do", 0) \ + K(ELSE, "else", 0) \ + K(FINALLY, "finally", 0) \ + K(FOR, "for", 0) \ + K(FUNCTION, "function", 0) \ + K(IF, "if", 0) \ + /* IN */ \ + /* INSTANCEOF */ \ + K(NEW, "new", 0) \ + K(RETURN, "return", 0) \ + K(SWITCH, "switch", 0) \ + K(THIS, "this", 0) \ + K(THROW, "throw", 0) \ + K(TRY, "try", 0) \ + /* TYPEOF */ \ + K(VAR, "var", 0) \ + /* VOID */ \ + K(WHILE, "while", 0) \ + K(WITH, "with", 0) \ + \ + /* Literals (ECMA-262, section 7.8, page 16). */ \ + K(NULL_LITERAL, "null", 0) \ + K(TRUE_LITERAL, "true", 0) \ + K(FALSE_LITERAL, "false", 0) \ + T(NUMBER, NULL, 0) \ + T(STRING, NULL, 0) \ + \ + /* Identifiers (not keywords or future reserved words). */ \ + T(IDENTIFIER, NULL, 0) \ + \ + /* Future reserved words (ECMA-262, section 7.6.1.2). */ \ + T(FUTURE_RESERVED_WORD, NULL, 0) \ + T(FUTURE_STRICT_RESERVED_WORD, NULL, 0) \ + K(CONST, "const", 0) \ + K(EXPORT, "export", 0) \ + K(IMPORT, "import", 0) \ + K(LET, "let", 0) \ + K(YIELD, "yield", 0) \ + \ + /* Illegal token - not able to scan. */ \ + T(ILLEGAL, "ILLEGAL", 0) \ + \ + /* Scanner-internal use only. */ \ T(WHITESPACE, NULL, 0) @@ -173,7 +174,7 @@ class Token { // Returns a string corresponding to the C++ token name // (e.g. "LT" for the token LT). static const char* Name(Value tok) { - ASSERT(tok < NUM_TOKENS); // tok is unsigned + DCHECK(tok < NUM_TOKENS); // tok is unsigned return name_[tok]; } @@ -216,7 +217,7 @@ class Token { } static Value NegateCompareOp(Value op) { - ASSERT(IsArithmeticCompareOp(op)); + DCHECK(IsArithmeticCompareOp(op)); switch (op) { case EQ: return NE; case NE: return EQ; @@ -233,7 +234,7 @@ class Token { } static Value ReverseCompareOp(Value op) { - ASSERT(IsArithmeticCompareOp(op)); + DCHECK(IsArithmeticCompareOp(op)); switch (op) { case EQ: return EQ; case NE: return NE; @@ -269,14 +270,14 @@ class Token { // (.e., "<" for the token LT) or NULL if the token doesn't // have a (unique) string (e.g. an IDENTIFIER). static const char* String(Value tok) { - ASSERT(tok < NUM_TOKENS); // tok is unsigned. + DCHECK(tok < NUM_TOKENS); // tok is unsigned. return string_[tok]; } // Returns the precedence > 0 for binary and compare // operators; returns 0 otherwise. static int Precedence(Value tok) { - ASSERT(tok < NUM_TOKENS); // tok is unsigned. + DCHECK(tok < NUM_TOKENS); // tok is unsigned. 
return precedence_[tok]; } diff --git a/deps/v8/src/transitions-inl.h b/deps/v8/src/transitions-inl.h index 8998198609e..a16eb4477cd 100644 --- a/deps/v8/src/transitions-inl.h +++ b/deps/v8/src/transitions-inl.h @@ -5,7 +5,7 @@ #ifndef V8_TRANSITIONS_INL_H_ #define V8_TRANSITIONS_INL_H_ -#include "transitions.h" +#include "src/transitions.h" namespace v8 { namespace internal { @@ -28,7 +28,7 @@ namespace internal { TransitionArray* TransitionArray::cast(Object* object) { - ASSERT(object->IsTransitionArray()); + DCHECK(object->IsTransitionArray()); return reinterpret_cast<TransitionArray*>(object); } @@ -59,7 +59,7 @@ bool TransitionArray::HasPrototypeTransitions() { FixedArray* TransitionArray::GetPrototypeTransitions() { - ASSERT(IsFullTransitionArray()); + DCHECK(IsFullTransitionArray()); Object* prototype_transitions = get(kPrototypeTransitionsIndex); return FixedArray::cast(prototype_transitions); } @@ -67,8 +67,8 @@ FixedArray* TransitionArray::GetPrototypeTransitions() { void TransitionArray::SetPrototypeTransitions(FixedArray* transitions, WriteBarrierMode mode) { - ASSERT(IsFullTransitionArray()); - ASSERT(transitions->IsFixedArray()); + DCHECK(IsFullTransitionArray()); + DCHECK(transitions->IsFixedArray()); Heap* heap = GetHeap(); WRITE_FIELD(this, kPrototypeTransitionsOffset, transitions); CONDITIONAL_WRITE_BARRIER( @@ -83,8 +83,8 @@ Object** TransitionArray::GetPrototypeTransitionsSlot() { Object** TransitionArray::GetKeySlot(int transition_number) { - ASSERT(!IsSimpleTransition()); - ASSERT(transition_number < number_of_transitions()); + DCHECK(!IsSimpleTransition()); + DCHECK(transition_number < number_of_transitions()); return RawFieldOfElementAt(ToKeyIndex(transition_number)); } @@ -96,34 +96,34 @@ Name* TransitionArray::GetKey(int transition_number) { Name* key = target->instance_descriptors()->GetKey(descriptor); return key; } - ASSERT(transition_number < number_of_transitions()); + DCHECK(transition_number < number_of_transitions()); return Name::cast(get(ToKeyIndex(transition_number))); } void TransitionArray::SetKey(int transition_number, Name* key) { - ASSERT(!IsSimpleTransition()); - ASSERT(transition_number < number_of_transitions()); + DCHECK(!IsSimpleTransition()); + DCHECK(transition_number < number_of_transitions()); set(ToKeyIndex(transition_number), key); } Map* TransitionArray::GetTarget(int transition_number) { if (IsSimpleTransition()) { - ASSERT(transition_number == kSimpleTransitionIndex); + DCHECK(transition_number == kSimpleTransitionIndex); return Map::cast(get(kSimpleTransitionTarget)); } - ASSERT(transition_number < number_of_transitions()); + DCHECK(transition_number < number_of_transitions()); return Map::cast(get(ToTargetIndex(transition_number))); } void TransitionArray::SetTarget(int transition_number, Map* value) { if (IsSimpleTransition()) { - ASSERT(transition_number == kSimpleTransitionIndex); + DCHECK(transition_number == kSimpleTransitionIndex); return set(kSimpleTransitionTarget, value); } - ASSERT(transition_number < number_of_transitions()); + DCHECK(transition_number < number_of_transitions()); set(ToTargetIndex(transition_number), value); } diff --git a/deps/v8/src/transitions.cc b/deps/v8/src/transitions.cc index 7fd56cbe237..96ed870e07b 100644 --- a/deps/v8/src/transitions.cc +++ b/deps/v8/src/transitions.cc @@ -2,11 +2,11 @@ // Use of this source code is governed by a BSD-style license that can be // found in the LICENSE file. 
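// ---------------------------------------------------------------------------
// [Illustrative sketch, not part of the patch] The TransitionArray accessors
// in transitions-inl.h above hide two encodings behind one interface: a
// simple transition stores its single target inline (GetKey() recovers the
// name from the target map's descriptors), while a full array stores explicit
// (key, target) pairs. A schematic consumer, assuming |map| already has a
// transition array:

void ForEachTransition(Map* map) {
  TransitionArray* transitions = map->transitions();
  for (int i = 0; i < transitions->number_of_transitions(); i++) {
    Name* key = transitions->GetKey(i);       // transition property name
    Map* target = transitions->GetTarget(i);  // map reached by the transition
    // ... use |key| and |target| ...
    USE(key);
    USE(target);
  }
}
// ---------------------------------------------------------------------------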
-#include "v8.h" +#include "src/v8.h" -#include "objects.h" -#include "transitions-inl.h" -#include "utils.h" +#include "src/objects.h" +#include "src/transitions-inl.h" +#include "src/utils.h" namespace v8 { namespace internal { @@ -64,7 +64,7 @@ Handle<TransitionArray> TransitionArray::NewWith(Handle<Map> map, Handle<TransitionArray> TransitionArray::ExtendToFullTransitionArray( Handle<Map> containing_map) { - ASSERT(!containing_map->transitions()->IsFullTransitionArray()); + DCHECK(!containing_map->transitions()->IsFullTransitionArray()); int nof = containing_map->transitions()->number_of_transitions(); // A transition array may shrink during GC. @@ -72,7 +72,7 @@ Handle<TransitionArray> TransitionArray::ExtendToFullTransitionArray( DisallowHeapAllocation no_gc; int new_nof = containing_map->transitions()->number_of_transitions(); if (new_nof != nof) { - ASSERT(new_nof == 0); + DCHECK(new_nof == 0); result->Shrink(ToKeyIndex(0)); } else if (nof == 1) { result->NoIncrementalWriteBarrierCopyFrom( @@ -104,11 +104,11 @@ Handle<TransitionArray> TransitionArray::CopyInsert(Handle<Map> map, // The map's transition array may grown smaller during the allocation above as // it was weakly traversed, though it is guaranteed not to disappear. Trim the // result copy if needed, and recompute variables. - ASSERT(map->HasTransitionArray()); + DCHECK(map->HasTransitionArray()); DisallowHeapAllocation no_gc; TransitionArray* array = map->transitions(); if (array->number_of_transitions() != number_of_transitions) { - ASSERT(array->number_of_transitions() < number_of_transitions); + DCHECK(array->number_of_transitions() < number_of_transitions); number_of_transitions = array->number_of_transitions(); new_size = number_of_transitions; diff --git a/deps/v8/src/transitions.h b/deps/v8/src/transitions.h index 7d8e551a582..21c02ac1ad5 100644 --- a/deps/v8/src/transitions.h +++ b/deps/v8/src/transitions.h @@ -5,11 +5,11 @@ #ifndef V8_TRANSITIONS_H_ #define V8_TRANSITIONS_H_ -#include "elements-kind.h" -#include "heap.h" -#include "isolate.h" -#include "objects.h" -#include "v8checks.h" +#include "src/checks.h" +#include "src/elements-kind.h" +#include "src/heap/heap.h" +#include "src/isolate.h" +#include "src/objects.h" namespace v8 { namespace internal { @@ -140,10 +140,7 @@ class TransitionArray: public FixedArray { #ifdef OBJECT_PRINT // Print all the transitions. - inline void PrintTransitions() { - PrintTransitions(stdout); - } - void PrintTransitions(FILE* out); + void PrintTransitions(OStream& os); // NOLINT #endif #ifdef DEBUG diff --git a/deps/v8/src/trig-table.h b/deps/v8/src/trig-table.h deleted file mode 100644 index 7332152a9df..00000000000 --- a/deps/v8/src/trig-table.h +++ /dev/null @@ -1,38 +0,0 @@ -// Copyright 2013 the V8 project authors. All rights reserved. -// Use of this source code is governed by a BSD-style license that can be -// found in the LICENSE file. - -#ifndef V8_TRIG_TABLE_H_ -#define V8_TRIG_TABLE_H_ - - -namespace v8 { -namespace internal { - -class TrigonometricLookupTable : public AllStatic { - public: - // Casting away const-ness to use as argument for typed array constructor. 
- static void* sin_table() { - return const_cast<double*>(&kSinTable[0]); - } - - static void* cos_x_interval_table() { - return const_cast<double*>(&kCosXIntervalTable[0]); - } - - static double samples_over_pi_half() { return kSamplesOverPiHalf; } - static int samples() { return kSamples; } - static int table_num_bytes() { return kTableSize * sizeof(*kSinTable); } - static int table_size() { return kTableSize; } - - private: - static const double kSinTable[]; - static const double kCosXIntervalTable[]; - static const int kSamples; - static const int kTableSize; - static const double kSamplesOverPiHalf; -}; - -} } // namespace v8::internal - -#endif // V8_TRIG_TABLE_H_ diff --git a/deps/v8/src/type-info.cc b/deps/v8/src/type-info.cc index 83fcd0f25e7..1d2db7c5480 100644 --- a/deps/v8/src/type-info.cc +++ b/deps/v8/src/type-info.cc @@ -2,18 +2,18 @@ // Use of this source code is governed by a BSD-style license that can be // found in the LICENSE file. -#include "v8.h" +#include "src/v8.h" -#include "ast.h" -#include "code-stubs.h" -#include "compiler.h" -#include "ic.h" -#include "macro-assembler.h" -#include "stub-cache.h" -#include "type-info.h" +#include "src/ast.h" +#include "src/code-stubs.h" +#include "src/compiler.h" +#include "src/ic.h" +#include "src/macro-assembler.h" +#include "src/stub-cache.h" +#include "src/type-info.h" -#include "ic-inl.h" -#include "objects-inl.h" +#include "src/ic-inl.h" +#include "src/objects-inl.h" namespace v8 { namespace internal { @@ -26,7 +26,7 @@ TypeFeedbackOracle::TypeFeedbackOracle(Handle<Code> code, : native_context_(native_context), zone_(zone) { BuildDictionary(code); - ASSERT(dictionary_->IsDictionary()); + DCHECK(dictionary_->IsDictionary()); // We make a copy of the feedback vector because a GC could clear // the type feedback info contained therein. // TODO(mvstanton): revisit the decision to copy when we weakly @@ -56,7 +56,7 @@ Handle<Object> TypeFeedbackOracle::GetInfo(TypeFeedbackId ast_id) { Handle<Object> TypeFeedbackOracle::GetInfo(int slot) { - ASSERT(slot >= 0 && slot < feedback_vector_->length()); + DCHECK(slot >= 0 && slot < feedback_vector_->length()); Object* obj = feedback_vector_->get(slot); if (!obj->IsJSFunction() || !CanRetainOtherContext(JSFunction::cast(obj), *native_context_)) { @@ -97,9 +97,7 @@ bool TypeFeedbackOracle::StoreIsKeyedPolymorphic(TypeFeedbackId ast_id) { bool TypeFeedbackOracle::CallIsMonomorphic(int slot) { Handle<Object> value = GetInfo(slot); - return FLAG_pretenuring_call_new - ? 
value->IsJSFunction() - : value->IsAllocationSite() || value->IsJSFunction(); + return value->IsAllocationSite() || value->IsJSFunction(); } @@ -134,12 +132,11 @@ KeyedAccessStoreMode TypeFeedbackOracle::GetStoreMode( Handle<JSFunction> TypeFeedbackOracle::GetCallTarget(int slot) { Handle<Object> info = GetInfo(slot); - if (FLAG_pretenuring_call_new || info->IsJSFunction()) { - return Handle<JSFunction>::cast(info); + if (info->IsAllocationSite()) { + return Handle<JSFunction>(isolate()->native_context()->array_function()); } - ASSERT(info->IsAllocationSite()); - return Handle<JSFunction>(isolate()->native_context()->array_function()); + return Handle<JSFunction>::cast(info); } @@ -149,11 +146,20 @@ Handle<JSFunction> TypeFeedbackOracle::GetCallNewTarget(int slot) { return Handle<JSFunction>::cast(info); } - ASSERT(info->IsAllocationSite()); + DCHECK(info->IsAllocationSite()); return Handle<JSFunction>(isolate()->native_context()->array_function()); } +Handle<AllocationSite> TypeFeedbackOracle::GetCallAllocationSite(int slot) { + Handle<Object> info = GetInfo(slot); + if (info->IsAllocationSite()) { + return Handle<AllocationSite>::cast(info); + } + return Handle<AllocationSite>::null(); +} + + Handle<AllocationSite> TypeFeedbackOracle::GetCallNewAllocationSite(int slot) { Handle<Object> info = GetInfo(slot); if (FLAG_pretenuring_call_new || info->IsAllocationSite()) { @@ -169,16 +175,6 @@ bool TypeFeedbackOracle::LoadIsBuiltin( } -bool TypeFeedbackOracle::LoadIsStub(TypeFeedbackId id, ICStub* stub) { - Handle<Object> object = GetInfo(id); - if (!object->IsCode()) return false; - Handle<Code> code = Handle<Code>::cast(object); - if (!code->is_load_stub()) return false; - if (code->ic_state() != MONOMORPHIC) return false; - return stub->Describes(*code); -} - - void TypeFeedbackOracle::CompareType(TypeFeedbackId id, Type** left_type, Type** right_type, @@ -194,16 +190,15 @@ void TypeFeedbackOracle::CompareType(TypeFeedbackId id, Handle<Map> map; Map* raw_map = code->FindFirstMap(); if (raw_map != NULL) { - if (Map::CurrentMapForDeprecated(handle(raw_map)).ToHandle(&map) && + if (Map::TryUpdate(handle(raw_map)).ToHandle(&map) && CanRetainOtherContext(*map, *native_context_)) { map = Handle<Map>::null(); } } if (code->is_compare_ic_stub()) { - int stub_minor_key = code->stub_info(); - CompareIC::StubInfoToType( - stub_minor_key, left_type, right_type, combined_type, map, zone()); + CompareIC::StubInfoToType(code->stub_key(), left_type, right_type, + combined_type, map, zone()); } else if (code->is_compare_nil_ic_stub()) { CompareNilICStub stub(isolate(), code->extra_ic_state()); *combined_type = stub.GetType(zone(), map); @@ -223,7 +218,7 @@ void TypeFeedbackOracle::BinaryType(TypeFeedbackId id, if (!object->IsCode()) { // For some binary ops we don't have ICs, e.g. Token::COMMA, but for the // operations covered by the BinaryOpIC we should always have them. 
- ASSERT(op < BinaryOpIC::State::FIRST_TOKEN || + DCHECK(op < BinaryOpIC::State::FIRST_TOKEN || op > BinaryOpIC::State::LAST_TOKEN); *left = *right = *result = Type::None(zone()); *fixed_right_arg = Maybe<int>(); @@ -231,9 +226,9 @@ void TypeFeedbackOracle::BinaryType(TypeFeedbackId id, return; } Handle<Code> code = Handle<Code>::cast(object); - ASSERT_EQ(Code::BINARY_OP_IC, code->kind()); + DCHECK_EQ(Code::BINARY_OP_IC, code->kind()); BinaryOpIC::State state(isolate(), code->extra_ic_state()); - ASSERT_EQ(op, state.op()); + DCHECK_EQ(op, state.op()); *left = state.GetLeftType(zone()); *right = state.GetRightType(zone()); @@ -253,22 +248,18 @@ Type* TypeFeedbackOracle::CountType(TypeFeedbackId id) { Handle<Object> object = GetInfo(id); if (!object->IsCode()) return Type::None(zone()); Handle<Code> code = Handle<Code>::cast(object); - ASSERT_EQ(Code::BINARY_OP_IC, code->kind()); + DCHECK_EQ(Code::BINARY_OP_IC, code->kind()); BinaryOpIC::State state(isolate(), code->extra_ic_state()); return state.GetLeftType(zone()); } -void TypeFeedbackOracle::PropertyReceiverTypes( - TypeFeedbackId id, Handle<String> name, - SmallMapList* receiver_types, bool* is_prototype) { +void TypeFeedbackOracle::PropertyReceiverTypes(TypeFeedbackId id, + Handle<String> name, + SmallMapList* receiver_types) { receiver_types->Clear(); - FunctionPrototypeStub proto_stub(isolate(), Code::LOAD_IC); - *is_prototype = LoadIsStub(id, &proto_stub); - if (!*is_prototype) { - Code::Flags flags = Code::ComputeHandlerFlags(Code::LOAD_IC); - CollectReceiverTypes(id, name, flags, receiver_types); - } + Code::Flags flags = Code::ComputeHandlerFlags(Code::LOAD_IC); + CollectReceiverTypes(id, name, flags, receiver_types); } @@ -315,7 +306,7 @@ void TypeFeedbackOracle::CollectReceiverTypes(TypeFeedbackId ast_id, Handle<Object> object = GetInfo(ast_id); if (object->IsUndefined() || object->IsSmi()) return; - ASSERT(object->IsCode()); + DCHECK(object->IsCode()); Handle<Code> code(Handle<Code>::cast(object)); if (FLAG_collect_megamorphic_maps_from_stub_cache && @@ -466,7 +457,7 @@ void TypeFeedbackOracle::ProcessRelocInfos(ZoneList<RelocInfo>* infos) { void TypeFeedbackOracle::SetInfo(TypeFeedbackId ast_id, Object* target) { - ASSERT(dictionary_->FindEntry(IdToKey(ast_id)) == + DCHECK(dictionary_->FindEntry(IdToKey(ast_id)) == UnseededNumberDictionary::kNotFound); // Dictionary has been allocated with sufficient size for all elements. DisallowHeapAllocation no_need_to_resize_dictionary; diff --git a/deps/v8/src/type-info.h b/deps/v8/src/type-info.h index 24a9edbd336..44fecf634c4 100644 --- a/deps/v8/src/type-info.h +++ b/deps/v8/src/type-info.h @@ -5,16 +5,15 @@ #ifndef V8_TYPE_INFO_H_ #define V8_TYPE_INFO_H_ -#include "allocation.h" -#include "globals.h" -#include "types.h" -#include "zone-inl.h" +#include "src/allocation.h" +#include "src/globals.h" +#include "src/types.h" +#include "src/zone-inl.h" namespace v8 { namespace internal { // Forward declarations. 
-class ICStub; class SmallMapList; @@ -41,10 +40,8 @@ class TypeFeedbackOracle: public ZoneObject { KeyedAccessStoreMode GetStoreMode(TypeFeedbackId id); - void PropertyReceiverTypes(TypeFeedbackId id, - Handle<String> name, - SmallMapList* receiver_types, - bool* is_prototype); + void PropertyReceiverTypes(TypeFeedbackId id, Handle<String> name, + SmallMapList* receiver_types); void KeyedPropertyReceiverTypes(TypeFeedbackId id, SmallMapList* receiver_types, bool* is_string); @@ -65,11 +62,11 @@ class TypeFeedbackOracle: public ZoneObject { Context* native_context); Handle<JSFunction> GetCallTarget(int slot); + Handle<AllocationSite> GetCallAllocationSite(int slot); Handle<JSFunction> GetCallNewTarget(int slot); Handle<AllocationSite> GetCallNewAllocationSite(int slot); bool LoadIsBuiltin(TypeFeedbackId id, Builtins::Name builtin_id); - bool LoadIsStub(TypeFeedbackId id, ICStub* stub); // TODO(1571) We can't use ToBooleanStub::Types as the return value because // of various cycles in our headers. Death to tons of implementations in diff --git a/deps/v8/src/typedarray.js b/deps/v8/src/typedarray.js index 267d43d514f..c149b35b98e 100644 --- a/deps/v8/src/typedarray.js +++ b/deps/v8/src/typedarray.js @@ -25,163 +25,163 @@ FUNCTION(9, Uint8ClampedArray, 1) endmacro macro TYPED_ARRAY_CONSTRUCTOR(ARRAY_ID, NAME, ELEMENT_SIZE) - function NAMEConstructByArrayBuffer(obj, buffer, byteOffset, length) { - if (!IS_UNDEFINED(byteOffset)) { - byteOffset = - ToPositiveInteger(byteOffset, "invalid_typed_array_length"); - } - if (!IS_UNDEFINED(length)) { - length = ToPositiveInteger(length, "invalid_typed_array_length"); - } - - var bufferByteLength = %_ArrayBufferGetByteLength(buffer); - var offset; - if (IS_UNDEFINED(byteOffset)) { - offset = 0; - } else { - offset = byteOffset; +function NAMEConstructByArrayBuffer(obj, buffer, byteOffset, length) { + if (!IS_UNDEFINED(byteOffset)) { + byteOffset = + ToPositiveInteger(byteOffset, "invalid_typed_array_length"); + } + if (!IS_UNDEFINED(length)) { + length = ToPositiveInteger(length, "invalid_typed_array_length"); + } - if (offset % ELEMENT_SIZE !== 0) { - throw MakeRangeError("invalid_typed_array_alignment", - ["start offset", "NAME", ELEMENT_SIZE]); - } - if (offset > bufferByteLength) { - throw MakeRangeError("invalid_typed_array_offset"); - } - } + var bufferByteLength = %_ArrayBufferGetByteLength(buffer); + var offset; + if (IS_UNDEFINED(byteOffset)) { + offset = 0; + } else { + offset = byteOffset; - var newByteLength; - var newLength; - if (IS_UNDEFINED(length)) { - if (bufferByteLength % ELEMENT_SIZE !== 0) { - throw MakeRangeError("invalid_typed_array_alignment", - ["byte length", "NAME", ELEMENT_SIZE]); - } - newByteLength = bufferByteLength - offset; - newLength = newByteLength / ELEMENT_SIZE; - } else { - var newLength = length; - newByteLength = newLength * ELEMENT_SIZE; + if (offset % ELEMENT_SIZE !== 0) { + throw MakeRangeError("invalid_typed_array_alignment", + ["start offset", "NAME", ELEMENT_SIZE]); } - if ((offset + newByteLength > bufferByteLength) - || (newLength > %_MaxSmi())) { - throw MakeRangeError("invalid_typed_array_length"); + if (offset > bufferByteLength) { + throw MakeRangeError("invalid_typed_array_offset"); } - %_TypedArrayInitialize(obj, ARRAY_ID, buffer, offset, newByteLength); } - function NAMEConstructByLength(obj, length) { - var l = IS_UNDEFINED(length) ? 
- 0 : ToPositiveInteger(length, "invalid_typed_array_length"); - if (l > %_MaxSmi()) { - throw MakeRangeError("invalid_typed_array_length"); - } - var byteLength = l * ELEMENT_SIZE; - if (byteLength > %_TypedArrayMaxSizeInHeap()) { - var buffer = new $ArrayBuffer(byteLength); - %_TypedArrayInitialize(obj, ARRAY_ID, buffer, 0, byteLength); - } else { - %_TypedArrayInitialize(obj, ARRAY_ID, null, 0, byteLength); + var newByteLength; + var newLength; + if (IS_UNDEFINED(length)) { + if (bufferByteLength % ELEMENT_SIZE !== 0) { + throw MakeRangeError("invalid_typed_array_alignment", + ["byte length", "NAME", ELEMENT_SIZE]); } + newByteLength = bufferByteLength - offset; + newLength = newByteLength / ELEMENT_SIZE; + } else { + var newLength = length; + newByteLength = newLength * ELEMENT_SIZE; } + if ((offset + newByteLength > bufferByteLength) + || (newLength > %_MaxSmi())) { + throw MakeRangeError("invalid_typed_array_length"); + } + %_TypedArrayInitialize(obj, ARRAY_ID, buffer, offset, newByteLength); +} - function NAMEConstructByArrayLike(obj, arrayLike) { - var length = arrayLike.length; - var l = ToPositiveInteger(length, "invalid_typed_array_length"); +function NAMEConstructByLength(obj, length) { + var l = IS_UNDEFINED(length) ? + 0 : ToPositiveInteger(length, "invalid_typed_array_length"); + if (l > %_MaxSmi()) { + throw MakeRangeError("invalid_typed_array_length"); + } + var byteLength = l * ELEMENT_SIZE; + if (byteLength > %_TypedArrayMaxSizeInHeap()) { + var buffer = new $ArrayBuffer(byteLength); + %_TypedArrayInitialize(obj, ARRAY_ID, buffer, 0, byteLength); + } else { + %_TypedArrayInitialize(obj, ARRAY_ID, null, 0, byteLength); + } +} - if (l > %_MaxSmi()) { - throw MakeRangeError("invalid_typed_array_length"); - } - if(!%TypedArrayInitializeFromArrayLike(obj, ARRAY_ID, arrayLike, l)) { - for (var i = 0; i < l; i++) { - // It is crucial that we let any execptions from arrayLike[i] - // propagate outside the function. - obj[i] = arrayLike[i]; - } +function NAMEConstructByArrayLike(obj, arrayLike) { + var length = arrayLike.length; + var l = ToPositiveInteger(length, "invalid_typed_array_length"); + + if (l > %_MaxSmi()) { + throw MakeRangeError("invalid_typed_array_length"); + } + if(!%TypedArrayInitializeFromArrayLike(obj, ARRAY_ID, arrayLike, l)) { + for (var i = 0; i < l; i++) { + // It is crucial that we let any execptions from arrayLike[i] + // propagate outside the function. 
+ obj[i] = arrayLike[i]; } } +} - function NAMEConstructor(arg1, arg2, arg3) { - if (%_IsConstructCall()) { - if (IS_ARRAYBUFFER(arg1)) { - NAMEConstructByArrayBuffer(this, arg1, arg2, arg3); - } else if (IS_NUMBER(arg1) || IS_STRING(arg1) || - IS_BOOLEAN(arg1) || IS_UNDEFINED(arg1)) { - NAMEConstructByLength(this, arg1); - } else { - NAMEConstructByArrayLike(this, arg1); - } +function NAMEConstructor(arg1, arg2, arg3) { + if (%_IsConstructCall()) { + if (IS_ARRAYBUFFER(arg1)) { + NAMEConstructByArrayBuffer(this, arg1, arg2, arg3); + } else if (IS_NUMBER(arg1) || IS_STRING(arg1) || + IS_BOOLEAN(arg1) || IS_UNDEFINED(arg1)) { + NAMEConstructByLength(this, arg1); } else { - throw MakeTypeError("constructor_not_function", ["NAME"]) + NAMEConstructByArrayLike(this, arg1); } + } else { + throw MakeTypeError("constructor_not_function", ["NAME"]) } +} - function NAME_GetBuffer() { - if (!(%_ClassOf(this) === 'NAME')) { - throw MakeTypeError('incompatible_method_receiver', - ["NAME.buffer", this]); - } - return %TypedArrayGetBuffer(this); +function NAME_GetBuffer() { + if (!(%_ClassOf(this) === 'NAME')) { + throw MakeTypeError('incompatible_method_receiver', + ["NAME.buffer", this]); } + return %TypedArrayGetBuffer(this); +} - function NAME_GetByteLength() { - if (!(%_ClassOf(this) === 'NAME')) { - throw MakeTypeError('incompatible_method_receiver', - ["NAME.byteLength", this]); - } - return %_ArrayBufferViewGetByteLength(this); +function NAME_GetByteLength() { + if (!(%_ClassOf(this) === 'NAME')) { + throw MakeTypeError('incompatible_method_receiver', + ["NAME.byteLength", this]); } + return %_ArrayBufferViewGetByteLength(this); +} - function NAME_GetByteOffset() { - if (!(%_ClassOf(this) === 'NAME')) { - throw MakeTypeError('incompatible_method_receiver', - ["NAME.byteOffset", this]); - } - return %_ArrayBufferViewGetByteOffset(this); +function NAME_GetByteOffset() { + if (!(%_ClassOf(this) === 'NAME')) { + throw MakeTypeError('incompatible_method_receiver', + ["NAME.byteOffset", this]); } + return %_ArrayBufferViewGetByteOffset(this); +} - function NAME_GetLength() { - if (!(%_ClassOf(this) === 'NAME')) { - throw MakeTypeError('incompatible_method_receiver', - ["NAME.length", this]); - } - return %_TypedArrayGetLength(this); +function NAME_GetLength() { + if (!(%_ClassOf(this) === 'NAME')) { + throw MakeTypeError('incompatible_method_receiver', + ["NAME.length", this]); } + return %_TypedArrayGetLength(this); +} - var $NAME = global.NAME; +var $NAME = global.NAME; - function NAMESubArray(begin, end) { - if (!(%_ClassOf(this) === 'NAME')) { - throw MakeTypeError('incompatible_method_receiver', - ["NAME.subarray", this]); - } - var beginInt = TO_INTEGER(begin); - if (!IS_UNDEFINED(end)) { - end = TO_INTEGER(end); - } +function NAMESubArray(begin, end) { + if (!(%_ClassOf(this) === 'NAME')) { + throw MakeTypeError('incompatible_method_receiver', + ["NAME.subarray", this]); + } + var beginInt = TO_INTEGER(begin); + if (!IS_UNDEFINED(end)) { + end = TO_INTEGER(end); + } - var srcLength = %_TypedArrayGetLength(this); - if (beginInt < 0) { - beginInt = MathMax(0, srcLength + beginInt); - } else { - beginInt = MathMin(srcLength, beginInt); - } + var srcLength = %_TypedArrayGetLength(this); + if (beginInt < 0) { + beginInt = MathMax(0, srcLength + beginInt); + } else { + beginInt = MathMin(srcLength, beginInt); + } - var endInt = IS_UNDEFINED(end) ? 
srcLength : end; - if (endInt < 0) { - endInt = MathMax(0, srcLength + endInt); - } else { - endInt = MathMin(endInt, srcLength); - } - if (endInt < beginInt) { - endInt = beginInt; - } - var newLength = endInt - beginInt; - var beginByteOffset = - %_ArrayBufferViewGetByteOffset(this) + beginInt * ELEMENT_SIZE; - return new $NAME(%TypedArrayGetBuffer(this), - beginByteOffset, newLength); + var endInt = IS_UNDEFINED(end) ? srcLength : end; + if (endInt < 0) { + endInt = MathMax(0, srcLength + endInt); + } else { + endInt = MathMin(endInt, srcLength); + } + if (endInt < beginInt) { + endInt = beginInt; } + var newLength = endInt - beginInt; + var beginByteOffset = + %_ArrayBufferViewGetByteOffset(this) + beginInt * ELEMENT_SIZE; + return new $NAME(%TypedArrayGetBuffer(this), + beginByteOffset, newLength); +} endmacro TYPED_ARRAYS(TYPED_ARRAY_CONSTRUCTOR) @@ -299,13 +299,13 @@ macro SETUP_TYPED_ARRAY(ARRAY_ID, NAME, ELEMENT_SIZE) %SetCode(global.NAME, NAMEConstructor); %FunctionSetPrototype(global.NAME, new $Object()); - %SetProperty(global.NAME, "BYTES_PER_ELEMENT", ELEMENT_SIZE, - READ_ONLY | DONT_ENUM | DONT_DELETE); - %SetProperty(global.NAME.prototype, - "constructor", global.NAME, DONT_ENUM); - %SetProperty(global.NAME.prototype, - "BYTES_PER_ELEMENT", ELEMENT_SIZE, - READ_ONLY | DONT_ENUM | DONT_DELETE); + %AddNamedProperty(global.NAME, "BYTES_PER_ELEMENT", ELEMENT_SIZE, + READ_ONLY | DONT_ENUM | DONT_DELETE); + %AddNamedProperty(global.NAME.prototype, + "constructor", global.NAME, DONT_ENUM); + %AddNamedProperty(global.NAME.prototype, + "BYTES_PER_ELEMENT", ELEMENT_SIZE, + READ_ONLY | DONT_ENUM | DONT_DELETE); InstallGetter(global.NAME.prototype, "buffer", NAME_GetBuffer); InstallGetter(global.NAME.prototype, "byteOffset", NAME_GetByteOffset); InstallGetter(global.NAME.prototype, "byteLength", NAME_GetByteLength); @@ -357,7 +357,7 @@ function DataViewConstructor(buffer, byteOffset, byteLength) { // length = 3 } } -function DataViewGetBuffer() { +function DataViewGetBufferJS() { if (!IS_DATAVIEW(this)) { throw MakeTypeError('incompatible_method_receiver', ['DataView.buffer', this]); @@ -398,7 +398,7 @@ function ToPositiveDataViewOffset(offset) { macro DATA_VIEW_GETTER_SETTER(TYPENAME) -function DataViewGetTYPENAME(offset, little_endian) { +function DataViewGetTYPENAMEJS(offset, little_endian) { if (!IS_DATAVIEW(this)) { throw MakeTypeError('incompatible_method_receiver', ['DataView.getTYPENAME', this]); @@ -411,7 +411,7 @@ function DataViewGetTYPENAME(offset, little_endian) { !!little_endian); } -function DataViewSetTYPENAME(offset, value, little_endian) { +function DataViewSetTYPENAMEJS(offset, value, little_endian) { if (!IS_DATAVIEW(this)) { throw MakeTypeError('incompatible_method_receiver', ['DataView.setTYPENAME', this]); @@ -436,36 +436,36 @@ function SetupDataView() { %FunctionSetPrototype($DataView, new $Object); // Set up constructor property on the DataView prototype. 
- %SetProperty($DataView.prototype, "constructor", $DataView, DONT_ENUM); + %AddNamedProperty($DataView.prototype, "constructor", $DataView, DONT_ENUM); - InstallGetter($DataView.prototype, "buffer", DataViewGetBuffer); + InstallGetter($DataView.prototype, "buffer", DataViewGetBufferJS); InstallGetter($DataView.prototype, "byteOffset", DataViewGetByteOffset); InstallGetter($DataView.prototype, "byteLength", DataViewGetByteLength); InstallFunctions($DataView.prototype, DONT_ENUM, $Array( - "getInt8", DataViewGetInt8, - "setInt8", DataViewSetInt8, + "getInt8", DataViewGetInt8JS, + "setInt8", DataViewSetInt8JS, - "getUint8", DataViewGetUint8, - "setUint8", DataViewSetUint8, + "getUint8", DataViewGetUint8JS, + "setUint8", DataViewSetUint8JS, - "getInt16", DataViewGetInt16, - "setInt16", DataViewSetInt16, + "getInt16", DataViewGetInt16JS, + "setInt16", DataViewSetInt16JS, - "getUint16", DataViewGetUint16, - "setUint16", DataViewSetUint16, + "getUint16", DataViewGetUint16JS, + "setUint16", DataViewSetUint16JS, - "getInt32", DataViewGetInt32, - "setInt32", DataViewSetInt32, + "getInt32", DataViewGetInt32JS, + "setInt32", DataViewSetInt32JS, - "getUint32", DataViewGetUint32, - "setUint32", DataViewSetUint32, + "getUint32", DataViewGetUint32JS, + "setUint32", DataViewSetUint32JS, - "getFloat32", DataViewGetFloat32, - "setFloat32", DataViewSetFloat32, + "getFloat32", DataViewGetFloat32JS, + "setFloat32", DataViewSetFloat32JS, - "getFloat64", DataViewGetFloat64, - "setFloat64", DataViewSetFloat64 + "getFloat64", DataViewGetFloat64JS, + "setFloat64", DataViewSetFloat64JS )); } diff --git a/deps/v8/src/types-inl.h b/deps/v8/src/types-inl.h index ca4f120c798..f102ae3e137 100644 --- a/deps/v8/src/types-inl.h +++ b/deps/v8/src/types-inl.h @@ -5,26 +5,38 @@ #ifndef V8_TYPES_INL_H_ #define V8_TYPES_INL_H_ -#include "types.h" +#include "src/types.h" -#include "factory.h" -#include "handles-inl.h" +#include "src/factory.h" +#include "src/handles-inl.h" namespace v8 { namespace internal { -// -------------------------------------------------------------------------- // +// ----------------------------------------------------------------------------- // TypeImpl template<class Config> TypeImpl<Config>* TypeImpl<Config>::cast(typename Config::Base* object) { TypeImpl* t = static_cast<TypeImpl*>(object); - ASSERT(t->IsBitset() || t->IsClass() || t->IsConstant() || - t->IsUnion() || t->IsArray() || t->IsFunction()); + DCHECK(t->IsBitset() || t->IsClass() || t->IsConstant() || t->IsRange() || + t->IsUnion() || t->IsArray() || t->IsFunction() || t->IsContext()); return t; } +// Most precise _current_ type of a value (usually its class). 
+template<class Config> +typename TypeImpl<Config>::TypeHandle TypeImpl<Config>::NowOf( + i::Object* value, Region* region) { + if (value->IsSmi() || + i::HeapObject::cast(value)->map()->instance_type() == HEAP_NUMBER_TYPE) { + return Of(value, region); + } + return Class(i::handle(i::HeapObject::cast(value)->map()), region); +} + + template<class Config> bool TypeImpl<Config>::NowContains(i::Object* value) { DisallowHeapAllocation no_allocation; @@ -39,7 +51,7 @@ bool TypeImpl<Config>::NowContains(i::Object* value) { } -// -------------------------------------------------------------------------- // +// ----------------------------------------------------------------------------- // ZoneTypeConfig // static @@ -70,42 +82,28 @@ bool ZoneTypeConfig::is_struct(Type* type, int tag) { // static bool ZoneTypeConfig::is_class(Type* type) { - return is_struct(type, Type::StructuralType::kClassTag); -} - - -// static -bool ZoneTypeConfig::is_constant(Type* type) { - return is_struct(type, Type::StructuralType::kConstantTag); + return false; } // static int ZoneTypeConfig::as_bitset(Type* type) { - ASSERT(is_bitset(type)); + DCHECK(is_bitset(type)); return static_cast<int>(reinterpret_cast<intptr_t>(type) >> 1); } // static ZoneTypeConfig::Struct* ZoneTypeConfig::as_struct(Type* type) { - ASSERT(!is_bitset(type)); + DCHECK(!is_bitset(type)); return reinterpret_cast<Struct*>(type); } // static i::Handle<i::Map> ZoneTypeConfig::as_class(Type* type) { - ASSERT(is_class(type)); - return i::Handle<i::Map>(static_cast<i::Map**>(as_struct(type)[3])); -} - - -// static -i::Handle<i::Object> ZoneTypeConfig::as_constant(Type* type) { - ASSERT(is_constant(type)); - return i::Handle<i::Object>( - static_cast<i::Object**>(as_struct(type)[3])); + UNREACHABLE(); + return i::Handle<i::Map>(); } @@ -122,84 +120,80 @@ ZoneTypeConfig::Type* ZoneTypeConfig::from_bitset(int bitset, Zone* Zone) { // static -ZoneTypeConfig::Type* ZoneTypeConfig::from_struct(Struct* structured) { - return reinterpret_cast<Type*>(structured); +ZoneTypeConfig::Type* ZoneTypeConfig::from_struct(Struct* structure) { + return reinterpret_cast<Type*>(structure); } // static ZoneTypeConfig::Type* ZoneTypeConfig::from_class( - i::Handle<i::Map> map, int lub, Zone* zone) { - Struct* structured = struct_create(Type::StructuralType::kClassTag, 2, zone); - structured[2] = from_bitset(lub); - structured[3] = map.location(); - return from_struct(structured); + i::Handle<i::Map> map, Zone* zone) { + return from_bitset(0); } // static -ZoneTypeConfig::Type* ZoneTypeConfig::from_constant( - i::Handle<i::Object> value, int lub, Zone* zone) { - Struct* structured = - struct_create(Type::StructuralType::kConstantTag, 2, zone); - structured[2] = from_bitset(lub); - structured[3] = value.location(); - return from_struct(structured); +ZoneTypeConfig::Struct* ZoneTypeConfig::struct_create( + int tag, int length, Zone* zone) { + Struct* structure = reinterpret_cast<Struct*>( + zone->New(sizeof(void*) * (length + 2))); // NOLINT + structure[0] = reinterpret_cast<void*>(tag); + structure[1] = reinterpret_cast<void*>(length); + return structure; } // static -ZoneTypeConfig::Struct* ZoneTypeConfig::struct_create( - int tag, int length, Zone* zone) { - Struct* structured = reinterpret_cast<Struct*>( - zone->New(sizeof(void*) * (length + 2))); // NOLINT - structured[0] = reinterpret_cast<void*>(tag); - structured[1] = reinterpret_cast<void*>(length); - return structured; +void ZoneTypeConfig::struct_shrink(Struct* structure, int length) { + DCHECK(0 <= length && length 
<= struct_length(structure)); + structure[1] = reinterpret_cast<void*>(length); } // static -void ZoneTypeConfig::struct_shrink(Struct* structured, int length) { - ASSERT(0 <= length && length <= struct_length(structured)); - structured[1] = reinterpret_cast<void*>(length); +int ZoneTypeConfig::struct_tag(Struct* structure) { + return static_cast<int>(reinterpret_cast<intptr_t>(structure[0])); } // static -int ZoneTypeConfig::struct_tag(Struct* structured) { - return static_cast<int>(reinterpret_cast<intptr_t>(structured[0])); +int ZoneTypeConfig::struct_length(Struct* structure) { + return static_cast<int>(reinterpret_cast<intptr_t>(structure[1])); } // static -int ZoneTypeConfig::struct_length(Struct* structured) { - return static_cast<int>(reinterpret_cast<intptr_t>(structured[1])); +Type* ZoneTypeConfig::struct_get(Struct* structure, int i) { + DCHECK(0 <= i && i <= struct_length(structure)); + return static_cast<Type*>(structure[2 + i]); } // static -Type* ZoneTypeConfig::struct_get(Struct* structured, int i) { - ASSERT(0 <= i && i <= struct_length(structured)); - return static_cast<Type*>(structured[2 + i]); +void ZoneTypeConfig::struct_set(Struct* structure, int i, Type* x) { + DCHECK(0 <= i && i <= struct_length(structure)); + structure[2 + i] = x; } // static -void ZoneTypeConfig::struct_set(Struct* structured, int i, Type* type) { - ASSERT(0 <= i && i <= struct_length(structured)); - structured[2 + i] = type; +template<class V> +i::Handle<V> ZoneTypeConfig::struct_get_value(Struct* structure, int i) { + DCHECK(0 <= i && i <= struct_length(structure)); + return i::Handle<V>(static_cast<V**>(structure[2 + i])); } // static -int ZoneTypeConfig::lub_bitset(Type* type) { - ASSERT(is_class(type) || is_constant(type)); - return as_bitset(struct_get(as_struct(type), 0)); +template<class V> +void ZoneTypeConfig::struct_set_value( + Struct* structure, int i, i::Handle<V> x) { + DCHECK(0 <= i && i <= struct_length(structure)); + structure[2 + i] = x.location(); } -// -------------------------------------------------------------------------- // +// ----------------------------------------------------------------------------- // HeapTypeConfig // static @@ -228,12 +222,6 @@ bool HeapTypeConfig::is_class(Type* type) { } -// static -bool HeapTypeConfig::is_constant(Type* type) { - return type->IsBox(); -} - - // static bool HeapTypeConfig::is_struct(Type* type, int tag) { return type->IsFixedArray() && struct_tag(as_struct(type)) == tag; @@ -252,13 +240,6 @@ i::Handle<i::Map> HeapTypeConfig::as_class(Type* type) { } -// static -i::Handle<i::Object> HeapTypeConfig::as_constant(Type* type) { - i::Box* box = i::Box::cast(type); - return i::handle(box->value(), box->GetIsolate()); -} - - // static i::Handle<HeapTypeConfig::Struct> HeapTypeConfig::as_struct(Type* type) { return i::handle(Struct::cast(type)); @@ -280,71 +261,74 @@ i::Handle<HeapTypeConfig::Type> HeapTypeConfig::from_bitset( // static i::Handle<HeapTypeConfig::Type> HeapTypeConfig::from_class( - i::Handle<i::Map> map, int lub, Isolate* isolate) { + i::Handle<i::Map> map, Isolate* isolate) { return i::Handle<Type>::cast(i::Handle<Object>::cast(map)); } -// static -i::Handle<HeapTypeConfig::Type> HeapTypeConfig::from_constant( - i::Handle<i::Object> value, int lub, Isolate* isolate) { - i::Handle<Box> box = isolate->factory()->NewBox(value); - return i::Handle<Type>::cast(i::Handle<Object>::cast(box)); -} - - // static i::Handle<HeapTypeConfig::Type> HeapTypeConfig::from_struct( - i::Handle<Struct> structured) { - return 
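
The two Config implementations above encode the same record shape differently: ZoneTypeConfig lays it out as raw zone memory with slot 0 holding the tag, slot 1 the length, and payload fields from slot 2 on, while HeapTypeConfig packs the tag plus fields into a FixedArray shifted by one slot. A freestanding sketch of the zone layout (StructCreate and friends are illustrative stand-ins, with malloc in place of Zone::New):

#include <cassert>
#include <cstdint>
#include <cstdlib>

typedef void* Slot;

// Same slot layout as ZoneTypeConfig::struct_create above:
// [0] = tag, [1] = length, payload fields at [2 .. 2 + length).
static Slot* StructCreate(int tag, int length) {
  Slot* s = static_cast<Slot*>(std::malloc(sizeof(Slot) * (length + 2)));
  s[0] = reinterpret_cast<void*>(static_cast<intptr_t>(tag));
  s[1] = reinterpret_cast<void*>(static_cast<intptr_t>(length));
  return s;
}

static int StructTag(Slot* s) {
  return static_cast<int>(reinterpret_cast<intptr_t>(s[0]));
}

static int StructLength(Slot* s) {
  return static_cast<int>(reinterpret_cast<intptr_t>(s[1]));
}

int main() {
  Slot* s = StructCreate(/*tag=*/5, /*length=*/3);
  assert(StructTag(s) == 5);
  assert(StructLength(s) == 3);
  std::free(s);  // the real zone allocator frees in bulk instead
}
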
i::Handle<Type>::cast(i::Handle<Object>::cast(structured)); + i::Handle<Struct> structure) { + return i::Handle<Type>::cast(i::Handle<Object>::cast(structure)); } // static i::Handle<HeapTypeConfig::Struct> HeapTypeConfig::struct_create( int tag, int length, Isolate* isolate) { - i::Handle<Struct> structured = isolate->factory()->NewFixedArray(length + 1); - structured->set(0, i::Smi::FromInt(tag)); - return structured; + i::Handle<Struct> structure = isolate->factory()->NewFixedArray(length + 1); + structure->set(0, i::Smi::FromInt(tag)); + return structure; } // static -void HeapTypeConfig::struct_shrink(i::Handle<Struct> structured, int length) { - structured->Shrink(length + 1); +void HeapTypeConfig::struct_shrink(i::Handle<Struct> structure, int length) { + structure->Shrink(length + 1); } // static -int HeapTypeConfig::struct_tag(i::Handle<Struct> structured) { - return static_cast<i::Smi*>(structured->get(0))->value(); +int HeapTypeConfig::struct_tag(i::Handle<Struct> structure) { + return static_cast<i::Smi*>(structure->get(0))->value(); } // static -int HeapTypeConfig::struct_length(i::Handle<Struct> structured) { - return structured->length() - 1; +int HeapTypeConfig::struct_length(i::Handle<Struct> structure) { + return structure->length() - 1; } // static i::Handle<HeapTypeConfig::Type> HeapTypeConfig::struct_get( - i::Handle<Struct> structured, int i) { - Type* type = static_cast<Type*>(structured->get(i + 1)); - return i::handle(type, structured->GetIsolate()); + i::Handle<Struct> structure, int i) { + Type* type = static_cast<Type*>(structure->get(i + 1)); + return i::handle(type, structure->GetIsolate()); } // static void HeapTypeConfig::struct_set( - i::Handle<Struct> structured, int i, i::Handle<Type> type) { - structured->set(i + 1, *type); + i::Handle<Struct> structure, int i, i::Handle<Type> type) { + structure->set(i + 1, *type); +} + + +// static +template<class V> +i::Handle<V> HeapTypeConfig::struct_get_value( + i::Handle<Struct> structure, int i) { + V* x = static_cast<V*>(structure->get(i + 1)); + return i::handle(x, structure->GetIsolate()); } // static -int HeapTypeConfig::lub_bitset(Type* type) { - return 0; // kNone, which causes recomputation. +template<class V> +void HeapTypeConfig::struct_set_value( + i::Handle<Struct> structure, int i, i::Handle<V> x) { + structure->set(i + 1, *x); } } } // namespace v8::internal diff --git a/deps/v8/src/types.cc b/deps/v8/src/types.cc index 5270c0ea6de..db92f30c47c 100644 --- a/deps/v8/src/types.cc +++ b/deps/v8/src/types.cc @@ -2,132 +2,85 @@ // Use of this source code is governed by a BSD-style license that can be // found in the LICENSE file. -#include "types.h" +#include "src/types.h" -#include "string-stream.h" -#include "types-inl.h" +#include "src/ostreams.h" +#include "src/types-inl.h" namespace v8 { namespace internal { -template<class Config> -int TypeImpl<Config>::NumClasses() { - DisallowHeapAllocation no_allocation; - if (this->IsClass()) { - return 1; - } else if (this->IsUnion()) { - UnionHandle unioned = handle(this->AsUnion()); - int result = 0; - for (int i = 0; i < unioned->Length(); ++i) { - if (unioned->Get(i)->IsClass()) ++result; - } - return result; - } else { - return 0; - } -} +// ----------------------------------------------------------------------------- +// Range-related custom order on doubles. +// We want -0 to be less than +0. 
-template<class Config> -int TypeImpl<Config>::NumConstants() { - DisallowHeapAllocation no_allocation; - if (this->IsConstant()) { - return 1; - } else if (this->IsUnion()) { - UnionHandle unioned = handle(this->AsUnion()); - int result = 0; - for (int i = 0; i < unioned->Length(); ++i) { - if (unioned->Get(i)->IsConstant()) ++result; - } - return result; - } else { - return 0; - } +static bool dle(double x, double y) { + return x <= y && (x != 0 || IsMinusZero(x) || !IsMinusZero(y)); } -template<class Config> template<class T> -typename TypeImpl<Config>::TypeHandle -TypeImpl<Config>::Iterator<T>::get_type() { - ASSERT(!Done()); - return type_->IsUnion() ? type_->AsUnion()->Get(index_) : type_; +static bool deq(double x, double y) { + return dle(x, y) && dle(y, x); } -// C++ cannot specialise nested templates, so we have to go through this -// contortion with an auxiliary template to simulate it. -template<class Config, class T> -struct TypeImplIteratorAux { - static bool matches(typename TypeImpl<Config>::TypeHandle type); - static i::Handle<T> current(typename TypeImpl<Config>::TypeHandle type); -}; - -template<class Config> -struct TypeImplIteratorAux<Config, i::Map> { - static bool matches(typename TypeImpl<Config>::TypeHandle type) { - return type->IsClass(); - } - static i::Handle<i::Map> current(typename TypeImpl<Config>::TypeHandle type) { - return type->AsClass()->Map(); - } -}; +// ----------------------------------------------------------------------------- +// Glb and lub computation. +// The largest bitset subsumed by this type. template<class Config> -struct TypeImplIteratorAux<Config, i::Object> { - static bool matches(typename TypeImpl<Config>::TypeHandle type) { - return type->IsConstant(); - } - static i::Handle<i::Object> current( - typename TypeImpl<Config>::TypeHandle type) { - return type->AsConstant()->Value(); - } -}; - -template<class Config> template<class T> -bool TypeImpl<Config>::Iterator<T>::matches(TypeHandle type) { - return TypeImplIteratorAux<Config, T>::matches(type); -} - -template<class Config> template<class T> -i::Handle<T> TypeImpl<Config>::Iterator<T>::Current() { - return TypeImplIteratorAux<Config, T>::current(get_type()); -} - - -template<class Config> template<class T> -void TypeImpl<Config>::Iterator<T>::Advance() { +int TypeImpl<Config>::BitsetType::Glb(TypeImpl* type) { DisallowHeapAllocation no_allocation; - ++index_; - if (type_->IsUnion()) { - UnionHandle unioned = handle(type_->AsUnion()); - for (; index_ < unioned->Length(); ++index_) { - if (matches(unioned->Get(index_))) return; - } - } else if (index_ == 0 && matches(type_)) { - return; + if (type->IsBitset()) { + return type->AsBitset(); + } else if (type->IsUnion()) { + UnionHandle unioned = handle(type->AsUnion()); + DCHECK(unioned->Wellformed()); + return unioned->Get(0)->BitsetGlb(); // Other BitsetGlb's are kNone anyway. + } else { + return kNone; } - index_ = -1; } -// Get the largest bitset subsumed by this type. +// The smallest bitset subsuming this type. template<class Config> -int TypeImpl<Config>::BitsetType::Glb(TypeImpl* type) { +int TypeImpl<Config>::BitsetType::Lub(TypeImpl* type) { DisallowHeapAllocation no_allocation; if (type->IsBitset()) { return type->AsBitset(); } else if (type->IsUnion()) { - // All but the first are non-bitsets and thus would yield kNone anyway. 
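
dle above refines IEEE <= just enough to order the two zeros, because the new Range types need -0 and +0 distinguished even though plain comparison treats them as equal; deq is the equality it induces. A quick standalone check (std::signbit approximates V8's IsMinusZero):

#include <cassert>
#include <cmath>

// Stand-in for V8's IsMinusZero.
static bool IsMinusZero(double x) { return x == 0 && std::signbit(x); }

// Same definition as dle in the patch: ordinary <=, except that +0 is not
// considered <= -0, which makes -0 strictly smaller than +0.
static bool dle(double x, double y) {
  return x <= y && (x != 0 || IsMinusZero(x) || !IsMinusZero(y));
}

int main() {
  assert(dle(-0.0, 0.0));                   // -0 <= +0
  assert(!dle(0.0, -0.0));                  // but not +0 <= -0
  assert(dle(-0.0, -0.0));                  // reflexive on -0
  assert(dle(1.0, 2.0) && !dle(2.0, 1.0));  // agrees with <= elsewhere
}
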
- return type->AsUnion()->Get(0)->BitsetGlb(); + UnionHandle unioned = handle(type->AsUnion()); + int bitset = kNone; + for (int i = 0; i < unioned->Length(); ++i) { + bitset |= unioned->Get(i)->BitsetLub(); + } + return bitset; + } else if (type->IsClass()) { + // Little hack to avoid the need for a region for handlification here... + return Config::is_class(type) ? Lub(*Config::as_class(type)) : + type->AsClass()->Bound(NULL)->AsBitset(); + } else if (type->IsConstant()) { + return type->AsConstant()->Bound()->AsBitset(); + } else if (type->IsRange()) { + return type->AsRange()->Bound()->AsBitset(); + } else if (type->IsContext()) { + return type->AsContext()->Bound()->AsBitset(); + } else if (type->IsArray()) { + return type->AsArray()->Bound()->AsBitset(); + } else if (type->IsFunction()) { + return type->AsFunction()->Bound()->AsBitset(); } else { + UNREACHABLE(); return kNone; } } -// Get the smallest bitset subsuming this type. +// The smallest bitset subsuming this type, ignoring explicit bounds. template<class Config> -int TypeImpl<Config>::BitsetType::Lub(TypeImpl* type) { +int TypeImpl<Config>::BitsetType::InherentLub(TypeImpl* type) { DisallowHeapAllocation no_allocation; if (type->IsBitset()) { return type->AsBitset(); @@ -135,15 +88,17 @@ int TypeImpl<Config>::BitsetType::Lub(TypeImpl* type) { UnionHandle unioned = handle(type->AsUnion()); int bitset = kNone; for (int i = 0; i < unioned->Length(); ++i) { - bitset |= unioned->Get(i)->BitsetLub(); + bitset |= unioned->Get(i)->InherentBitsetLub(); } return bitset; } else if (type->IsClass()) { - int bitset = Config::lub_bitset(type); - return bitset ? bitset : Lub(*type->AsClass()->Map()); + return Lub(*type->AsClass()->Map()); } else if (type->IsConstant()) { - int bitset = Config::lub_bitset(type); - return bitset ? bitset : Lub(*type->AsConstant()->Value()); + return Lub(*type->AsConstant()->Value()); + } else if (type->IsRange()) { + return Lub(type->AsRange()->Min(), type->AsRange()->Max()); + } else if (type->IsContext()) { + return kInternal & kTaggedPtr; } else if (type->IsArray()) { return kArray; } else if (type->IsFunction()) { @@ -158,16 +113,55 @@ int TypeImpl<Config>::BitsetType::Lub(TypeImpl* type) { template<class Config> int TypeImpl<Config>::BitsetType::Lub(i::Object* value) { DisallowHeapAllocation no_allocation; - if (value->IsSmi()) return kSignedSmall & kTaggedInt; - i::Map* map = i::HeapObject::cast(value)->map(); - if (map->instance_type() == HEAP_NUMBER_TYPE) { - int32_t i; - uint32_t u; - return kTaggedPtr & ( - value->ToInt32(&i) ? (Smi::IsValid(i) ? kSignedSmall : kOtherSigned32) : - value->ToUint32(&u) ? kUnsigned32 : kFloat); + if (value->IsNumber()) { + return Lub(value->Number()) & (value->IsSmi() ? kTaggedInt : kTaggedPtr); + } + return Lub(i::HeapObject::cast(value)->map()); +} + + +template<class Config> +int TypeImpl<Config>::BitsetType::Lub(double value) { + DisallowHeapAllocation no_allocation; + if (i::IsMinusZero(value)) return kMinusZero; + if (std::isnan(value)) return kNaN; + if (IsUint32Double(value)) return Lub(FastD2UI(value)); + if (IsInt32Double(value)) return Lub(FastD2I(value)); + return kOtherNumber; +} + + +template<class Config> +int TypeImpl<Config>::BitsetType::Lub(double min, double max) { + DisallowHeapAllocation no_allocation; + DCHECK(dle(min, max)); + if (deq(min, max)) return BitsetType::Lub(min); // Singleton range. 
+ int bitset = BitsetType::kNumber ^ SEMANTIC(BitsetType::kNaN); + if (dle(0, min) || max < 0) bitset ^= SEMANTIC(BitsetType::kMinusZero); + return bitset; + // TODO(neis): Could refine this further by doing more checks on min/max. +} + + +template<class Config> +int TypeImpl<Config>::BitsetType::Lub(int32_t value) { + if (value >= 0x40000000) { + return i::SmiValuesAre31Bits() ? kOtherUnsigned31 : kUnsignedSmall; + } + if (value >= 0) return kUnsignedSmall; + if (value >= -0x40000000) return kOtherSignedSmall; + return i::SmiValuesAre31Bits() ? kOtherSigned32 : kOtherSignedSmall; +} + + +template<class Config> +int TypeImpl<Config>::BitsetType::Lub(uint32_t value) { + DisallowHeapAllocation no_allocation; + if (value >= 0x80000000u) return kOtherUnsigned32; + if (value >= 0x40000000u) { + return i::SmiValuesAre31Bits() ? kOtherUnsigned31 : kUnsignedSmall; } - return Lub(map); + return kUnsignedSmall; } @@ -201,17 +195,17 @@ int TypeImpl<Config>::BitsetType::Lub(i::Map* map) { case ODDBALL_TYPE: { Heap* heap = map->GetHeap(); if (map == heap->undefined_map()) return kUndefined; - if (map == heap->the_hole_map()) return kAny; // TODO(rossberg): kNone? if (map == heap->null_map()) return kNull; if (map == heap->boolean_map()) return kBoolean; - ASSERT(map == heap->uninitialized_map() || + DCHECK(map == heap->the_hole_map() || + map == heap->uninitialized_map() || map == heap->no_interceptor_result_sentinel_map() || map == heap->termination_exception_map() || map == heap->arguments_marker_map()); return kInternal & kTaggedPtr; } case HEAP_NUMBER_TYPE: - return kFloat & kTaggedPtr; + return kNumber & kTaggedPtr; case JS_VALUE_TYPE: case JS_DATE_TYPE: case JS_OBJECT_TYPE: @@ -254,9 +248,11 @@ int TypeImpl<Config>::BitsetType::Lub(i::Map* map) { return kDetectable; case DECLARED_ACCESSOR_INFO_TYPE: case EXECUTABLE_ACCESSOR_INFO_TYPE: + case SHARED_FUNCTION_INFO_TYPE: case ACCESSOR_PAIR_TYPE: case FIXED_ARRAY_TYPE: case FOREIGN_TYPE: + case CODE_TYPE: return kInternal & kTaggedPtr; default: UNREACHABLE(); @@ -265,17 +261,8 @@ int TypeImpl<Config>::BitsetType::Lub(i::Map* map) { } -// Most precise _current_ type of a value (usually its class). -template<class Config> -typename TypeImpl<Config>::TypeHandle TypeImpl<Config>::NowOf( - i::Object* value, Region* region) { - if (value->IsSmi() || - i::HeapObject::cast(value)->map()->instance_type() == HEAP_NUMBER_TYPE) { - return Of(value, region); - } - return Class(i::handle(i::HeapObject::cast(value)->map()), region); -} - +// ----------------------------------------------------------------------------- +// Predicates. // Check this <= that. template<class Config> @@ -285,16 +272,30 @@ bool TypeImpl<Config>::SlowIs(TypeImpl* that) { // Fast path for bitsets. 
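
The 0x40000000 (2^30) thresholds in Lub(int32_t) and Lub(uint32_t) above are the edges of the 31-bit Smi range [-2^30, 2^30): on 31-bit-Smi builds, integers just outside that range land in dedicated Other* buckets, so the lattice still records that they fit in 31 or 32 bits. A sketch with SmiValuesAre31Bits() fixed to true (the bit values are simplified stand-ins for the types.h constants):

#include <cassert>
#include <cstdint>

// Stand-in semantic bits, matching the shifts in the new types.h list.
enum Bits {
  kUnsignedSmall    = 1 << 3,
  kOtherSignedSmall = 1 << 4,
  kOtherUnsigned31  = 1 << 5,
  kOtherSigned32    = 1 << 7,
};

// Lub(int32_t) for a build where Smis are 31 bits wide.
static int LubInt32(int32_t value) {
  if (value >= 0x40000000) return kOtherUnsigned31;
  if (value >= 0) return kUnsignedSmall;
  if (value >= -0x40000000) return kOtherSignedSmall;
  return kOtherSigned32;
}

int main() {
  assert(LubInt32(42) == kUnsignedSmall);
  assert(LubInt32(0x3fffffff) == kUnsignedSmall);     // 2^30 - 1: largest Smi
  assert(LubInt32(0x40000000) == kOtherUnsigned31);   // 2^30: first non-Smi
  assert(LubInt32(-0x40000000) == kOtherSignedSmall);
  assert(LubInt32(-0x40000001) == kOtherSigned32);    // below -2^30
}
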
if (this->IsNone()) return true; if (that->IsBitset()) { - return (BitsetType::Lub(this) | that->AsBitset()) == that->AsBitset(); + return BitsetType::Is(BitsetType::Lub(this), that->AsBitset()); } if (that->IsClass()) { return this->IsClass() - && *this->AsClass()->Map() == *that->AsClass()->Map(); + && *this->AsClass()->Map() == *that->AsClass()->Map() + && ((Config::is_class(that) && Config::is_class(this)) || + BitsetType::New(this->BitsetLub())->Is( + BitsetType::New(that->BitsetLub()))); } if (that->IsConstant()) { return this->IsConstant() - && *this->AsConstant()->Value() == *that->AsConstant()->Value(); + && *this->AsConstant()->Value() == *that->AsConstant()->Value() + && this->AsConstant()->Bound()->Is(that->AsConstant()->Bound()); + } + if (that->IsRange()) { + return this->IsRange() + && this->AsRange()->Bound()->Is(that->AsRange()->Bound()) + && dle(that->AsRange()->Min(), this->AsRange()->Min()) + && dle(this->AsRange()->Max(), that->AsRange()->Max()); + } + if (that->IsContext()) { + return this->IsContext() + && this->AsContext()->Outer()->Equals(that->AsContext()->Outer()); } if (that->IsArray()) { return this->IsArray() @@ -328,16 +329,12 @@ bool TypeImpl<Config>::SlowIs(TypeImpl* that) { // T <= (T1 \/ ... \/ Tn) <=> (T <= T1) \/ ... \/ (T <= Tn) // (iff T is not a union) - ASSERT(!this->IsUnion()); - if (that->IsUnion()) { - UnionHandle unioned = handle(that->AsUnion()); - for (int i = 0; i < unioned->Length(); ++i) { - if (this->Is(unioned->Get(i))) return true; - if (this->IsBitset()) break; // Fast fail, only first field is a bitset. - } - return false; + DCHECK(!this->IsUnion() && that->IsUnion()); + UnionHandle unioned = handle(that->AsUnion()); + for (int i = 0; i < unioned->Length(); ++i) { + if (this->Is(unioned->Get(i))) return true; + if (this->IsBitset()) break; // Fast fail, only first field is a bitset. } - return false; } @@ -396,12 +393,9 @@ bool TypeImpl<Config>::Maybe(TypeImpl* that) { return false; } - ASSERT(!this->IsUnion() && !that->IsUnion()); - if (this->IsBitset()) { - return BitsetType::IsInhabited(this->AsBitset() & that->BitsetLub()); - } - if (that->IsBitset()) { - return BitsetType::IsInhabited(this->BitsetLub() & that->AsBitset()); + DCHECK(!this->IsUnion() && !that->IsUnion()); + if (this->IsBitset() || that->IsBitset()) { + return BitsetType::IsInhabited(this->BitsetLub() & that->BitsetLub()); } if (this->IsClass()) { return that->IsClass() @@ -411,6 +405,9 @@ bool TypeImpl<Config>::Maybe(TypeImpl* that) { return that->IsConstant() && *this->AsConstant()->Value() == *that->AsConstant()->Value(); } + if (this->IsContext()) { + return this->Equals(that); + } if (this->IsArray()) { // There is no variance! return this->Equals(that); @@ -424,10 +421,16 @@ bool TypeImpl<Config>::Maybe(TypeImpl* that) { } +// Check if value is contained in (inhabits) type. 
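
Contains, directly below, handles the new Range types by reducing membership to two dle comparisons under the signed-zero-aware order, plus a bitset-bound conjunct omitted in this sketch; so Range(+0, 3) rejects -0 while Range(-0, 3) admits +0. Rechecking that with the dle from this patch:

#include <cassert>
#include <cmath>

static bool IsMinusZero(double x) { return x == 0 && std::signbit(x); }
static bool dle(double x, double y) {
  return x <= y && (x != 0 || IsMinusZero(x) || !IsMinusZero(y));
}

// Range membership as in TypeImpl::Contains, minus the bitset-bound check
// (BitsetType::Is(Lub(value), BitsetLub())).
static bool RangeContains(double min, double max, double v) {
  return dle(min, v) && dle(v, max);
}

int main() {
  assert(RangeContains(-0.0, 3.0, 0.0));   // Range(-0, 3) admits +0
  assert(!RangeContains(0.0, 3.0, -0.0));  // Range(+0, 3) rejects -0
  assert(RangeContains(-1.0, 1.0, -0.0));
  assert(!RangeContains(-1.0, 1.0, 2.0));
}
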
template<class Config> bool TypeImpl<Config>::Contains(i::Object* value) { DisallowHeapAllocation no_allocation; - + if (this->IsRange()) { + return value->IsNumber() && + dle(this->AsRange()->Min(), value->Number()) && + dle(value->Number(), this->AsRange()->Max()) && + BitsetType::Is(BitsetType::Lub(value), this->BitsetLub()); + } for (Iterator<i::Object> it = this->Constants(); !it.Done(); it.Advance()) { if (*it.Current() == value) return true; } @@ -436,43 +439,148 @@ bool TypeImpl<Config>::Contains(i::Object* value) { template<class Config> -bool TypeImpl<Config>::InUnion(UnionHandle unioned, int current_size) { - ASSERT(!this->IsUnion()); +bool TypeImpl<Config>::UnionType::Wellformed() { + DCHECK(this->Length() >= 2); + for (int i = 0; i < this->Length(); ++i) { + DCHECK(!this->Get(i)->IsUnion()); + if (i > 0) DCHECK(!this->Get(i)->IsBitset()); + for (int j = 0; j < this->Length(); ++j) { + if (i != j) DCHECK(!this->Get(i)->Is(this->Get(j))); + } + } + return true; +} + + +// ----------------------------------------------------------------------------- +// Union and intersection + +template<class Config> +typename TypeImpl<Config>::TypeHandle TypeImpl<Config>::Rebound( + int bitset, Region* region) { + TypeHandle bound = BitsetType::New(bitset, region); + if (this->IsClass()) { + return ClassType::New(this->AsClass()->Map(), bound, region); + } else if (this->IsConstant()) { + return ConstantType::New(this->AsConstant()->Value(), bound, region); + } else if (this->IsRange()) { + return RangeType::New( + this->AsRange()->Min(), this->AsRange()->Max(), bound, region); + } else if (this->IsContext()) { + return ContextType::New(this->AsContext()->Outer(), bound, region); + } else if (this->IsArray()) { + return ArrayType::New(this->AsArray()->Element(), bound, region); + } else if (this->IsFunction()) { + FunctionHandle function = Config::handle(this->AsFunction()); + int arity = function->Arity(); + FunctionHandle type = FunctionType::New( + function->Result(), function->Receiver(), bound, arity, region); + for (int i = 0; i < arity; ++i) { + type->InitParameter(i, function->Parameter(i)); + } + return type; + } + UNREACHABLE(); + return TypeHandle(); +} + + +template<class Config> +int TypeImpl<Config>::BoundBy(TypeImpl* that) { + DCHECK(!this->IsUnion()); + if (that->IsUnion()) { + UnionType* unioned = that->AsUnion(); + int length = unioned->Length(); + int bitset = BitsetType::kNone; + for (int i = 0; i < length; ++i) { + bitset |= BoundBy(unioned->Get(i)->unhandle()); + } + return bitset; + } else if (that->IsClass() && this->IsClass() && + *this->AsClass()->Map() == *that->AsClass()->Map()) { + return that->BitsetLub(); + } else if (that->IsConstant() && this->IsConstant() && + *this->AsConstant()->Value() == *that->AsConstant()->Value()) { + return that->AsConstant()->Bound()->AsBitset(); + } else if (that->IsContext() && this->IsContext() && this->Is(that)) { + return that->AsContext()->Bound()->AsBitset(); + } else if (that->IsArray() && this->IsArray() && this->Is(that)) { + return that->AsArray()->Bound()->AsBitset(); + } else if (that->IsFunction() && this->IsFunction() && this->Is(that)) { + return that->AsFunction()->Bound()->AsBitset(); + } + return that->BitsetGlb(); +} + + +template<class Config> +int TypeImpl<Config>::IndexInUnion( + int bound, UnionHandle unioned, int current_size) { + DCHECK(!this->IsUnion()); for (int i = 0; i < current_size; ++i) { - if (this->Is(unioned->Get(i))) return true; + TypeHandle that = unioned->Get(i); + if (that->IsBitset()) { + if 
(BitsetType::Is(bound, that->AsBitset())) return i; + } else if (that->IsClass() && this->IsClass()) { + if (*this->AsClass()->Map() == *that->AsClass()->Map()) return i; + } else if (that->IsConstant() && this->IsConstant()) { + if (*this->AsConstant()->Value() == *that->AsConstant()->Value()) + return i; + } else if (that->IsContext() && this->IsContext()) { + if (this->Is(that)) return i; + } else if (that->IsArray() && this->IsArray()) { + if (this->Is(that)) return i; + } else if (that->IsFunction() && this->IsFunction()) { + if (this->Is(that)) return i; + } } - return false; + return -1; } -// Get non-bitsets from this which are not subsumed by union, store at result, -// starting at index. Returns updated index. +// Get non-bitsets from type, bounded by upper. +// Store at result starting at index. Returns updated index. template<class Config> int TypeImpl<Config>::ExtendUnion( - UnionHandle result, TypeHandle type, int current_size) { - int old_size = current_size; + UnionHandle result, int size, TypeHandle type, + TypeHandle other, bool is_intersect, Region* region) { if (type->IsUnion()) { UnionHandle unioned = handle(type->AsUnion()); for (int i = 0; i < unioned->Length(); ++i) { - TypeHandle type = unioned->Get(i); - ASSERT(i == 0 || !(type->IsBitset() || type->Is(unioned->Get(0)))); - if (!type->IsBitset() && !type->InUnion(result, old_size)) { - result->Set(current_size++, type); + TypeHandle type_i = unioned->Get(i); + DCHECK(i == 0 || !(type_i->IsBitset() || type_i->Is(unioned->Get(0)))); + if (!type_i->IsBitset()) { + size = ExtendUnion(result, size, type_i, other, is_intersect, region); } } } else if (!type->IsBitset()) { - // For all structural types, subtyping implies equivalence. - ASSERT(type->IsClass() || type->IsConstant() || - type->IsArray() || type->IsFunction()); - if (!type->InUnion(result, old_size)) { - result->Set(current_size++, type); + DCHECK(type->IsClass() || type->IsConstant() || type->IsRange() || + type->IsContext() || type->IsArray() || type->IsFunction()); + int inherent_bound = type->InherentBitsetLub(); + int old_bound = type->BitsetLub(); + int other_bound = type->BoundBy(other->unhandle()) & inherent_bound; + int new_bound = + is_intersect ? (old_bound & other_bound) : (old_bound | other_bound); + if (new_bound != BitsetType::kNone) { + int i = type->IndexInUnion(new_bound, result, size); + if (i == -1) { + i = size++; + } else if (result->Get(i)->IsBitset()) { + return size; // Already fully subsumed. + } else { + int type_i_bound = result->Get(i)->BitsetLub(); + new_bound |= type_i_bound; + if (new_bound == type_i_bound) return size; + } + if (new_bound != old_bound) type = type->Rebound(new_bound, region); + result->Set(i, type); } } - return current_size; + return size; } -// Union is O(1) on simple bit unions, but O(n*m) on structured unions. +// Union is O(1) on simple bitsets, but O(n*m) on structured unions. 
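
The rewritten ExtendUnion threads bitset bounds through both Union and Intersect via the is_intersect flag: a structured member's new bound is its old bound anded with the other operand's contribution when intersecting, ored with it when uniting, and a member whose bound collapses to kNone is dropped outright. The merge rule in isolation (bit values are simplified stand-ins without the representation bits the real constants carry):

#include <cassert>

const int kNone = 0;

// Bound-merge rule from ExtendUnion above: intersect bounds when
// intersecting types, union them when uniting; kNone means the member
// vanishes from the result.
static int MergeBound(int old_bound, int other_bound, bool is_intersect) {
  return is_intersect ? (old_bound & other_bound) : (old_bound | other_bound);
}

int main() {
  const int kArray = 1 << 15, kFunction = 1 << 17;  // shifts from types.h
  assert(MergeBound(kArray, kFunction, /*is_intersect=*/true) == kNone);
  assert(MergeBound(kArray, kFunction, false) == (kArray | kFunction));
  assert(MergeBound(kArray, kArray, true) == kArray);
}
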
template<class Config> typename TypeImpl<Config>::TypeHandle TypeImpl<Config>::Union( TypeHandle type1, TypeHandle type2, Region* region) { @@ -501,54 +609,27 @@ typename TypeImpl<Config>::TypeHandle TypeImpl<Config>::Union( } int bitset = type1->BitsetGlb() | type2->BitsetGlb(); if (bitset != BitsetType::kNone) ++size; - ASSERT(size >= 1); + DCHECK(size >= 1); UnionHandle unioned = UnionType::New(size, region); size = 0; if (bitset != BitsetType::kNone) { unioned->Set(size++, BitsetType::New(bitset, region)); } - size = ExtendUnion(unioned, type1, size); - size = ExtendUnion(unioned, type2, size); + size = ExtendUnion(unioned, size, type1, type2, false, region); + size = ExtendUnion(unioned, size, type2, type1, false, region); if (size == 1) { return unioned->Get(0); } else { unioned->Shrink(size); + DCHECK(unioned->Wellformed()); return unioned; } } -// Get non-bitsets from type which are also in other, store at result, -// starting at index. Returns updated index. -template<class Config> -int TypeImpl<Config>::ExtendIntersection( - UnionHandle result, TypeHandle type, TypeHandle other, int current_size) { - int old_size = current_size; - if (type->IsUnion()) { - UnionHandle unioned = handle(type->AsUnion()); - for (int i = 0; i < unioned->Length(); ++i) { - TypeHandle type = unioned->Get(i); - ASSERT(i == 0 || !(type->IsBitset() || type->Is(unioned->Get(0)))); - if (!type->IsBitset() && type->Is(other) && - !type->InUnion(result, old_size)) { - result->Set(current_size++, type); - } - } - } else if (!type->IsBitset()) { - // For all structural types, subtyping implies equivalence. - ASSERT(type->IsClass() || type->IsConstant() || - type->IsArray() || type->IsFunction()); - if (type->Is(other) && !type->InUnion(result, old_size)) { - result->Set(current_size++, type); - } - } - return current_size; -} - - -// Intersection is O(1) on simple bit unions, but O(n*m) on structured unions. +// Intersection is O(1) on simple bitsets, but O(n*m) on structured unions. template<class Config> typename TypeImpl<Config>::TypeHandle TypeImpl<Config>::Intersect( TypeHandle type1, TypeHandle type2, Region* region) { @@ -577,15 +658,15 @@ typename TypeImpl<Config>::TypeHandle TypeImpl<Config>::Intersect( } int bitset = type1->BitsetGlb() & type2->BitsetGlb(); if (bitset != BitsetType::kNone) ++size; - ASSERT(size >= 1); + DCHECK(size >= 1); UnionHandle unioned = UnionType::New(size, region); size = 0; if (bitset != BitsetType::kNone) { unioned->Set(size++, BitsetType::New(bitset, region)); } - size = ExtendIntersection(unioned, type1, type2, size); - size = ExtendIntersection(unioned, type2, type1, size); + size = ExtendUnion(unioned, size, type1, type2, true, region); + size = ExtendUnion(unioned, size, type2, type1, true, region); if (size == 0) { return None(region); @@ -593,11 +674,118 @@ typename TypeImpl<Config>::TypeHandle TypeImpl<Config>::Intersect( return unioned->Get(0); } else { unioned->Shrink(size); + DCHECK(unioned->Wellformed()); return unioned; } } +// ----------------------------------------------------------------------------- +// Iteration. 
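
To recap the union/intersection section just closed before iteration begins: as the updated comments say, both operations are O(1) when the operands are plain bitsets, because set union is a single bitwise or and intersection a single bitwise and; the O(n*m) ExtendUnion walk is only needed once structured members (class, constant, range, context, array, function) are involved. The bitset fast path in isolation (a sketch; the actual fast-path lines sit in the elided top of Union and Intersect):

#include <cassert>

// Bitset-only fast paths: set union is bitwise or, intersection bitwise and.
static int BitsetUnion(int b1, int b2) { return b1 | b2; }
static int BitsetIntersect(int b1, int b2) { return b1 & b2; }

int main() {
  const int kNull = 1 << 0, kUndefined = 1 << 1;  // stand-in bit values
  assert(BitsetUnion(kNull, kUndefined) == (kNull | kUndefined));
  assert(BitsetIntersect(kNull | kUndefined, kUndefined) == kUndefined);
  assert(BitsetIntersect(kNull, kUndefined) == 0);  // kNone: uninhabited
}
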
+ +template<class Config> +int TypeImpl<Config>::NumClasses() { + DisallowHeapAllocation no_allocation; + if (this->IsClass()) { + return 1; + } else if (this->IsUnion()) { + UnionHandle unioned = handle(this->AsUnion()); + int result = 0; + for (int i = 0; i < unioned->Length(); ++i) { + if (unioned->Get(i)->IsClass()) ++result; + } + return result; + } else { + return 0; + } +} + + +template<class Config> +int TypeImpl<Config>::NumConstants() { + DisallowHeapAllocation no_allocation; + if (this->IsConstant()) { + return 1; + } else if (this->IsUnion()) { + UnionHandle unioned = handle(this->AsUnion()); + int result = 0; + for (int i = 0; i < unioned->Length(); ++i) { + if (unioned->Get(i)->IsConstant()) ++result; + } + return result; + } else { + return 0; + } +} + + +template<class Config> template<class T> +typename TypeImpl<Config>::TypeHandle +TypeImpl<Config>::Iterator<T>::get_type() { + DCHECK(!Done()); + return type_->IsUnion() ? type_->AsUnion()->Get(index_) : type_; +} + + +// C++ cannot specialise nested templates, so we have to go through this +// contortion with an auxiliary template to simulate it. +template<class Config, class T> +struct TypeImplIteratorAux { + static bool matches(typename TypeImpl<Config>::TypeHandle type); + static i::Handle<T> current(typename TypeImpl<Config>::TypeHandle type); +}; + +template<class Config> +struct TypeImplIteratorAux<Config, i::Map> { + static bool matches(typename TypeImpl<Config>::TypeHandle type) { + return type->IsClass(); + } + static i::Handle<i::Map> current(typename TypeImpl<Config>::TypeHandle type) { + return type->AsClass()->Map(); + } +}; + +template<class Config> +struct TypeImplIteratorAux<Config, i::Object> { + static bool matches(typename TypeImpl<Config>::TypeHandle type) { + return type->IsConstant(); + } + static i::Handle<i::Object> current( + typename TypeImpl<Config>::TypeHandle type) { + return type->AsConstant()->Value(); + } +}; + +template<class Config> template<class T> +bool TypeImpl<Config>::Iterator<T>::matches(TypeHandle type) { + return TypeImplIteratorAux<Config, T>::matches(type); +} + +template<class Config> template<class T> +i::Handle<T> TypeImpl<Config>::Iterator<T>::Current() { + return TypeImplIteratorAux<Config, T>::current(get_type()); +} + + +template<class Config> template<class T> +void TypeImpl<Config>::Iterator<T>::Advance() { + DisallowHeapAllocation no_allocation; + ++index_; + if (type_->IsUnion()) { + UnionHandle unioned = handle(type_->AsUnion()); + for (; index_ < unioned->Length(); ++index_) { + if (matches(unioned->Get(index_))) return; + } + } else if (index_ == 0 && matches(type_)) { + return; + } + index_ = -1; +} + + +// ----------------------------------------------------------------------------- +// Conversion between low-level representations. 
+ template<class Config> template<class OtherType> typename TypeImpl<Config>::TypeHandle TypeImpl<Config>::Convert( @@ -605,27 +793,41 @@ typename TypeImpl<Config>::TypeHandle TypeImpl<Config>::Convert( if (type->IsBitset()) { return BitsetType::New(type->AsBitset(), region); } else if (type->IsClass()) { - return ClassType::New(type->AsClass()->Map(), region); + TypeHandle bound = BitsetType::New(type->BitsetLub(), region); + return ClassType::New(type->AsClass()->Map(), bound, region); } else if (type->IsConstant()) { - return ConstantType::New(type->AsConstant()->Value(), region); + TypeHandle bound = Convert<OtherType>(type->AsConstant()->Bound(), region); + return ConstantType::New(type->AsConstant()->Value(), bound, region); + } else if (type->IsRange()) { + TypeHandle bound = Convert<OtherType>(type->AsRange()->Bound(), region); + return RangeType::New( + type->AsRange()->Min(), type->AsRange()->Max(), bound, region); + } else if (type->IsContext()) { + TypeHandle bound = Convert<OtherType>(type->AsContext()->Bound(), region); + TypeHandle outer = Convert<OtherType>(type->AsContext()->Outer(), region); + return ContextType::New(outer, bound, region); } else if (type->IsUnion()) { int length = type->AsUnion()->Length(); UnionHandle unioned = UnionType::New(length, region); for (int i = 0; i < length; ++i) { - unioned->Set(i, Convert<OtherType>(type->AsUnion()->Get(i), region)); + TypeHandle t = Convert<OtherType>(type->AsUnion()->Get(i), region); + unioned->Set(i, t); } return unioned; } else if (type->IsArray()) { - return ArrayType::New( - Convert<OtherType>(type->AsArray()->Element(), region), region); + TypeHandle element = Convert<OtherType>(type->AsArray()->Element(), region); + TypeHandle bound = Convert<OtherType>(type->AsArray()->Bound(), region); + return ArrayType::New(element, bound, region); } else if (type->IsFunction()) { + TypeHandle res = Convert<OtherType>(type->AsFunction()->Result(), region); + TypeHandle rcv = Convert<OtherType>(type->AsFunction()->Receiver(), region); + TypeHandle bound = Convert<OtherType>(type->AsFunction()->Bound(), region); FunctionHandle function = FunctionType::New( - Convert<OtherType>(type->AsFunction()->Result(), region), - Convert<OtherType>(type->AsFunction()->Receiver(), region), - type->AsFunction()->Arity(), region); + res, rcv, bound, type->AsFunction()->Arity(), region); for (int i = 0; i < function->Arity(); ++i) { - function->InitParameter(i, - Convert<OtherType>(type->AsFunction()->Parameter(i), region)); + TypeHandle param = Convert<OtherType>( + type->AsFunction()->Parameter(i), region); + function->InitParameter(i, param); } return function; } else { @@ -635,38 +837,22 @@ typename TypeImpl<Config>::TypeHandle TypeImpl<Config>::Convert( } -// TODO(rossberg): this does not belong here. -Representation Representation::FromType(Type* type) { - DisallowHeapAllocation no_allocation; - if (type->Is(Type::None())) return Representation::None(); - if (type->Is(Type::SignedSmall())) return Representation::Smi(); - if (type->Is(Type::Signed32())) return Representation::Integer32(); - if (type->Is(Type::Number())) return Representation::Double(); - return Representation::Tagged(); -} - - -template<class Config> -void TypeImpl<Config>::TypePrint(PrintDimension dim) { - TypePrint(stdout, dim); - PrintF(stdout, "\n"); - Flush(stdout); -} - +// ----------------------------------------------------------------------------- +// Printing. 
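
BitsetType::Print, below, renders an anonymous bitset by greedy decomposition: it scans the table of named masks from the most inclusive entry down, emits every name whose mask is fully contained in the remaining bits, and subtracts it, so the DCHECK(bitset == 0) at the end guarantees every bit was printed exactly once. The same loop in isolation (names and masks are stand-ins):

#include <cstdio>

struct Named { int bits; const char* name; };

// Greedy decomposition as in BitsetType::Print: scan named masks from most
// to least inclusive, print each one fully contained in the bitset, then
// strip its bits so nothing is printed twice.
static void PrintBitset(int bitset, const Named* names, int count) {
  bool is_first = true;
  std::printf("(");
  for (int i = count - 1; bitset != 0 && i >= 0; --i) {
    int subset = names[i].bits;
    if ((bitset & subset) == subset) {
      if (!is_first) std::printf(" | ");
      is_first = false;
      std::printf("%s", names[i].name);
      bitset -= subset;
    }
  }
  std::printf(")\n");
}

int main() {
  const Named names[] = {
    {1 << 0, "Null"},
    {1 << 1, "Undefined"},
    {1 << 2, "Boolean"},
    {(1 << 0) | (1 << 1) | (1 << 2), "Oddball"},  // hypothetical union name
  };
  PrintBitset((1 << 0) | (1 << 2), names, 4);             // (Boolean | Null)
  PrintBitset((1 << 0) | (1 << 1) | (1 << 2), names, 4);  // (Oddball)
}
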
template<class Config> const char* TypeImpl<Config>::BitsetType::Name(int bitset) { switch (bitset) { - case kAny & kRepresentation: return "Any"; - #define PRINT_COMPOSED_TYPE(type, value) \ - case k##type & kRepresentation: return #type; - REPRESENTATION_BITSET_TYPE_LIST(PRINT_COMPOSED_TYPE) - #undef PRINT_COMPOSED_TYPE + case REPRESENTATION(kAny): return "Any"; + #define RETURN_NAMED_REPRESENTATION_TYPE(type, value) \ + case REPRESENTATION(k##type): return #type; + REPRESENTATION_BITSET_TYPE_LIST(RETURN_NAMED_REPRESENTATION_TYPE) + #undef RETURN_NAMED_REPRESENTATION_TYPE - #define PRINT_COMPOSED_TYPE(type, value) \ - case k##type & kSemantic: return #type; - SEMANTIC_BITSET_TYPE_LIST(PRINT_COMPOSED_TYPE) - #undef PRINT_COMPOSED_TYPE + #define RETURN_NAMED_SEMANTIC_TYPE(type, value) \ + case SEMANTIC(k##type): return #type; + SEMANTIC_BITSET_TYPE_LIST(RETURN_NAMED_SEMANTIC_TYPE) + #undef RETURN_NAMED_SEMANTIC_TYPE default: return NULL; @@ -674,98 +860,115 @@ const char* TypeImpl<Config>::BitsetType::Name(int bitset) { } -template<class Config> -void TypeImpl<Config>::BitsetType::BitsetTypePrint(FILE* out, int bitset) { +template <class Config> +void TypeImpl<Config>::BitsetType::Print(OStream& os, // NOLINT + int bitset) { DisallowHeapAllocation no_allocation; const char* name = Name(bitset); if (name != NULL) { - PrintF(out, "%s", name); - } else { - static const int named_bitsets[] = { - #define BITSET_CONSTANT(type, value) k##type & kRepresentation, + os << name; + return; + } + + static const int named_bitsets[] = { +#define BITSET_CONSTANT(type, value) REPRESENTATION(k##type), REPRESENTATION_BITSET_TYPE_LIST(BITSET_CONSTANT) - #undef BITSET_CONSTANT +#undef BITSET_CONSTANT - #define BITSET_CONSTANT(type, value) k##type & kSemantic, +#define BITSET_CONSTANT(type, value) SEMANTIC(k##type), SEMANTIC_BITSET_TYPE_LIST(BITSET_CONSTANT) - #undef BITSET_CONSTANT - }; - - bool is_first = true; - PrintF(out, "("); - for (int i(ARRAY_SIZE(named_bitsets) - 1); bitset != 0 && i >= 0; --i) { - int subset = named_bitsets[i]; - if ((bitset & subset) == subset) { - if (!is_first) PrintF(out, " | "); - is_first = false; - PrintF(out, "%s", Name(subset)); - bitset -= subset; - } +#undef BITSET_CONSTANT + }; + + bool is_first = true; + os << "("; + for (int i(ARRAY_SIZE(named_bitsets) - 1); bitset != 0 && i >= 0; --i) { + int subset = named_bitsets[i]; + if ((bitset & subset) == subset) { + if (!is_first) os << " | "; + is_first = false; + os << Name(subset); + bitset -= subset; } - ASSERT(bitset == 0); - PrintF(out, ")"); } + DCHECK(bitset == 0); + os << ")"; } -template<class Config> -void TypeImpl<Config>::TypePrint(FILE* out, PrintDimension dim) { +template <class Config> +void TypeImpl<Config>::PrintTo(OStream& os, PrintDimension dim) { // NOLINT DisallowHeapAllocation no_allocation; - if (this->IsBitset()) { - int bitset = this->AsBitset(); - switch (dim) { - case BOTH_DIMS: - BitsetType::BitsetTypePrint(out, bitset & BitsetType::kSemantic); - PrintF(out, "/"); - BitsetType::BitsetTypePrint(out, bitset & BitsetType::kRepresentation); - break; - case SEMANTIC_DIM: - BitsetType::BitsetTypePrint(out, bitset & BitsetType::kSemantic); - break; - case REPRESENTATION_DIM: - BitsetType::BitsetTypePrint(out, bitset & BitsetType::kRepresentation); - break; - } - } else if (this->IsConstant()) { - PrintF(out, "Constant(%p : ", - static_cast<void*>(*this->AsConstant()->Value())); - BitsetType::New(BitsetType::Lub(this))->TypePrint(out, dim); - PrintF(out, ")"); - } else if (this->IsClass()) { - PrintF(out, 
"Class(%p < ", static_cast<void*>(*this->AsClass()->Map())); - BitsetType::New(BitsetType::Lub(this))->TypePrint(out, dim); - PrintF(out, ")"); - } else if (this->IsUnion()) { - PrintF(out, "("); - UnionHandle unioned = handle(this->AsUnion()); - for (int i = 0; i < unioned->Length(); ++i) { - TypeHandle type_i = unioned->Get(i); - if (i > 0) PrintF(out, " | "); - type_i->TypePrint(out, dim); - } - PrintF(out, ")"); - } else if (this->IsArray()) { - PrintF(out, "["); - AsArray()->Element()->TypePrint(out, dim); - PrintF(out, "]"); - } else if (this->IsFunction()) { - if (!this->AsFunction()->Receiver()->IsAny()) { - this->AsFunction()->Receiver()->TypePrint(out, dim); - PrintF(out, "."); - } - PrintF(out, "("); - for (int i = 0; i < this->AsFunction()->Arity(); ++i) { - if (i > 0) PrintF(out, ", "); - this->AsFunction()->Parameter(i)->TypePrint(out, dim); + if (dim != REPRESENTATION_DIM) { + if (this->IsBitset()) { + BitsetType::Print(os, SEMANTIC(this->AsBitset())); + } else if (this->IsClass()) { + os << "Class(" << static_cast<void*>(*this->AsClass()->Map()) << " < "; + BitsetType::New(BitsetType::Lub(this))->PrintTo(os, dim); + os << ")"; + } else if (this->IsConstant()) { + os << "Constant(" << static_cast<void*>(*this->AsConstant()->Value()) + << " : "; + BitsetType::New(BitsetType::Lub(this))->PrintTo(os, dim); + os << ")"; + } else if (this->IsRange()) { + os << "Range(" << this->AsRange()->Min() + << ".." << this->AsRange()->Max() << " : "; + BitsetType::New(BitsetType::Lub(this))->PrintTo(os, dim); + os << ")"; + } else if (this->IsContext()) { + os << "Context("; + this->AsContext()->Outer()->PrintTo(os, dim); + os << ")"; + } else if (this->IsUnion()) { + os << "("; + UnionHandle unioned = handle(this->AsUnion()); + for (int i = 0; i < unioned->Length(); ++i) { + TypeHandle type_i = unioned->Get(i); + if (i > 0) os << " | "; + type_i->PrintTo(os, dim); + } + os << ")"; + } else if (this->IsArray()) { + os << "Array("; + AsArray()->Element()->PrintTo(os, dim); + os << ")"; + } else if (this->IsFunction()) { + if (!this->AsFunction()->Receiver()->IsAny()) { + this->AsFunction()->Receiver()->PrintTo(os, dim); + os << "."; + } + os << "("; + for (int i = 0; i < this->AsFunction()->Arity(); ++i) { + if (i > 0) os << ", "; + this->AsFunction()->Parameter(i)->PrintTo(os, dim); + } + os << ")->"; + this->AsFunction()->Result()->PrintTo(os, dim); + } else { + UNREACHABLE(); } - PrintF(out, ")->"); - this->AsFunction()->Result()->TypePrint(out, dim); - } else { - UNREACHABLE(); + } + if (dim == BOTH_DIMS) os << "/"; + if (dim != SEMANTIC_DIM) { + BitsetType::Print(os, REPRESENTATION(this->BitsetLub())); } } +#ifdef DEBUG +template <class Config> +void TypeImpl<Config>::Print() { + OFStream os(stdout); + PrintTo(os); + os << endl; +} +#endif + + +// ----------------------------------------------------------------------------- +// Instantiations. + template class TypeImpl<ZoneTypeConfig>; template class TypeImpl<ZoneTypeConfig>::Iterator<i::Map>; template class TypeImpl<ZoneTypeConfig>::Iterator<i::Object>; diff --git a/deps/v8/src/types.h b/deps/v8/src/types.h index 5ca3a81452c..cca8b3167b4 100644 --- a/deps/v8/src/types.h +++ b/deps/v8/src/types.h @@ -5,7 +5,9 @@ #ifndef V8_TYPES_H_ #define V8_TYPES_H_ -#include "handles.h" +#include "src/factory.h" +#include "src/handles.h" +#include "src/ostreams.h" namespace v8 { namespace internal { @@ -45,10 +47,14 @@ namespace internal { // Constant(x) < T iff instance_type(map(x)) < T // Array(T) < Array // Function(R, S, T0, T1, ...) 
< Function +// Context(T) < Internal // -// Both structural Array and Function types are invariant in all parameters. -// Relaxing this would make Union and Intersect operations more involved. -// Note that Constant(x) < Class(map(x)) does _not_ hold, since x's map can +// Both structural Array and Function types are invariant in all parameters; +// relaxing this would make Union and Intersect operations more involved. +// There is no subtyping relation between Array, Function, or Context types +// and respective Constant types, since these types cannot be reconstructed +// for arbitrary heap values. +// Note also that Constant(x) < Class(map(x)) does _not_ hold, since x's map can // change! (Its instance type cannot, however.) // TODO(rossberg): the latter is not currently true for proxies, because of fix, // but will hold once we implement direct proxies. @@ -62,11 +68,12 @@ namespace internal { // None <= R // R <= Any // -// UntaggedInt <= UntaggedInt8 \/ UntaggedInt16 \/ UntaggedInt32) -// UntaggedFloat <= UntaggedFloat32 \/ UntaggedFloat64 -// UntaggedNumber <= UntaggedInt \/ UntaggedFloat -// Untagged <= UntaggedNumber \/ UntaggedPtr -// Tagged <= TaggedInt \/ TaggedPtr +// UntaggedInt = UntaggedInt1 \/ UntaggedInt8 \/ +// UntaggedInt16 \/ UntaggedInt32 +// UntaggedFloat = UntaggedFloat32 \/ UntaggedFloat64 +// UntaggedNumber = UntaggedInt \/ UntaggedFloat +// Untagged = UntaggedNumber \/ UntaggedPtr +// Tagged = TaggedInt \/ TaggedPtr // // Subtyping relates the two dimensions, for example: // @@ -128,15 +135,19 @@ namespace internal { // them. For zone types, no query method touches the heap, only constructors do. +// ----------------------------------------------------------------------------- +// Values for bitset types + #define MASK_BITSET_TYPE_LIST(V) \ - V(Representation, static_cast<int>(0xff800000)) \ - V(Semantic, static_cast<int>(0x007fffff)) + V(Representation, static_cast<int>(0xffc00000)) \ + V(Semantic, static_cast<int>(0x003fffff)) -#define REPRESENTATION(k) ((k) & kRepresentation) -#define SEMANTIC(k) ((k) & kSemantic) +#define REPRESENTATION(k) ((k) & BitsetType::kRepresentation) +#define SEMANTIC(k) ((k) & BitsetType::kSemantic) #define REPRESENTATION_BITSET_TYPE_LIST(V) \ V(None, 0) \ + V(UntaggedInt1, 1 << 22 | kSemantic) \ V(UntaggedInt8, 1 << 23 | kSemantic) \ V(UntaggedInt16, 1 << 24 | kSemantic) \ V(UntaggedInt32, 1 << 25 | kSemantic) \ @@ -144,46 +155,57 @@ namespace internal { V(UntaggedFloat64, 1 << 27 | kSemantic) \ V(UntaggedPtr, 1 << 28 | kSemantic) \ V(TaggedInt, 1 << 29 | kSemantic) \ - V(TaggedPtr, -1 << 30 | kSemantic) /* MSB has to be sign-extended */ \ + /* MSB has to be sign-extended */ \ + V(TaggedPtr, static_cast<int>(~0u << 30) | kSemantic) \ \ - V(UntaggedInt, kUntaggedInt8 | kUntaggedInt16 | kUntaggedInt32) \ - V(UntaggedFloat, kUntaggedFloat32 | kUntaggedFloat64) \ - V(UntaggedNumber, kUntaggedInt | kUntaggedFloat) \ - V(Untagged, kUntaggedNumber | kUntaggedPtr) \ + V(UntaggedInt, kUntaggedInt1 | kUntaggedInt8 | \ + kUntaggedInt16 | kUntaggedInt32) \ + V(UntaggedFloat, kUntaggedFloat32 | kUntaggedFloat64) \ + V(UntaggedNumber, kUntaggedInt | kUntaggedFloat) \ + V(Untagged, kUntaggedNumber | kUntaggedPtr) \ V(Tagged, kTaggedInt | kTaggedPtr) #define SEMANTIC_BITSET_TYPE_LIST(V) \ V(Null, 1 << 0 | REPRESENTATION(kTaggedPtr)) \ V(Undefined, 1 << 1 | REPRESENTATION(kTaggedPtr)) \ V(Boolean, 1 << 2 | REPRESENTATION(kTaggedPtr)) \ - V(SignedSmall, 1 << 3 | REPRESENTATION(kTagged | kUntaggedNumber)) \ - V(OtherSigned32, 1 << 4 | 
REPRESENTATION(kTagged | kUntaggedNumber)) \ - V(Unsigned32, 1 << 5 | REPRESENTATION(kTagged | kUntaggedNumber)) \ - V(Float, 1 << 6 | REPRESENTATION(kTagged | kUntaggedNumber)) \ - V(Symbol, 1 << 7 | REPRESENTATION(kTaggedPtr)) \ - V(InternalizedString, 1 << 8 | REPRESENTATION(kTaggedPtr)) \ - V(OtherString, 1 << 9 | REPRESENTATION(kTaggedPtr)) \ - V(Undetectable, 1 << 10 | REPRESENTATION(kTaggedPtr)) \ - V(Array, 1 << 11 | REPRESENTATION(kTaggedPtr)) \ - V(Function, 1 << 12 | REPRESENTATION(kTaggedPtr)) \ - V(RegExp, 1 << 13 | REPRESENTATION(kTaggedPtr)) \ - V(OtherObject, 1 << 14 | REPRESENTATION(kTaggedPtr)) \ - V(Proxy, 1 << 15 | REPRESENTATION(kTaggedPtr)) \ - V(Internal, 1 << 16 | REPRESENTATION(kTagged | kUntagged)) \ + V(UnsignedSmall, 1 << 3 | REPRESENTATION(kTagged | kUntaggedNumber)) \ + V(OtherSignedSmall, 1 << 4 | REPRESENTATION(kTagged | kUntaggedNumber)) \ + V(OtherUnsigned31, 1 << 5 | REPRESENTATION(kTagged | kUntaggedNumber)) \ + V(OtherUnsigned32, 1 << 6 | REPRESENTATION(kTagged | kUntaggedNumber)) \ + V(OtherSigned32, 1 << 7 | REPRESENTATION(kTagged | kUntaggedNumber)) \ + V(MinusZero, 1 << 8 | REPRESENTATION(kTagged | kUntaggedNumber)) \ + V(NaN, 1 << 9 | REPRESENTATION(kTagged | kUntaggedNumber)) \ + V(OtherNumber, 1 << 10 | REPRESENTATION(kTagged | kUntaggedNumber)) \ + V(Symbol, 1 << 11 | REPRESENTATION(kTaggedPtr)) \ + V(InternalizedString, 1 << 12 | REPRESENTATION(kTaggedPtr)) \ + V(OtherString, 1 << 13 | REPRESENTATION(kTaggedPtr)) \ + V(Undetectable, 1 << 14 | REPRESENTATION(kTaggedPtr)) \ + V(Array, 1 << 15 | REPRESENTATION(kTaggedPtr)) \ + V(Buffer, 1 << 16 | REPRESENTATION(kTaggedPtr)) \ + V(Function, 1 << 17 | REPRESENTATION(kTaggedPtr)) \ + V(RegExp, 1 << 18 | REPRESENTATION(kTaggedPtr)) \ + V(OtherObject, 1 << 19 | REPRESENTATION(kTaggedPtr)) \ + V(Proxy, 1 << 20 | REPRESENTATION(kTaggedPtr)) \ + V(Internal, 1 << 21 | REPRESENTATION(kTagged | kUntagged)) \ \ - V(Signed32, kSignedSmall | kOtherSigned32) \ - V(Number, kSigned32 | kUnsigned32 | kFloat) \ - V(String, kInternalizedString | kOtherString) \ - V(UniqueName, kSymbol | kInternalizedString) \ - V(Name, kSymbol | kString) \ - V(NumberOrString, kNumber | kString) \ - V(DetectableObject, kArray | kFunction | kRegExp | kOtherObject) \ - V(DetectableReceiver, kDetectableObject | kProxy) \ - V(Detectable, kDetectableReceiver | kNumber | kName) \ - V(Object, kDetectableObject | kUndetectable) \ - V(Receiver, kObject | kProxy) \ - V(NonNumber, kBoolean | kName | kNull | kReceiver | \ - kUndefined | kInternal) \ + V(SignedSmall, kUnsignedSmall | kOtherSignedSmall) \ + V(Signed32, kSignedSmall | kOtherUnsigned31 | kOtherSigned32) \ + V(Unsigned32, kUnsignedSmall | kOtherUnsigned31 | kOtherUnsigned32) \ + V(Integral32, kSigned32 | kUnsigned32) \ + V(Number, kIntegral32 | kMinusZero | kNaN | kOtherNumber) \ + V(String, kInternalizedString | kOtherString) \ + V(UniqueName, kSymbol | kInternalizedString) \ + V(Name, kSymbol | kString) \ + V(NumberOrString, kNumber | kString) \ + V(Primitive, kNumber | kName | kBoolean | kNull | kUndefined) \ + V(DetectableObject, kArray | kFunction | kRegExp | kOtherObject) \ + V(DetectableReceiver, kDetectableObject | kProxy) \ + V(Detectable, kDetectableReceiver | kNumber | kName) \ + V(Object, kDetectableObject | kUndetectable) \ + V(Receiver, kObject | kProxy) \ + V(NonNumber, kBoolean | kName | kNull | kReceiver | \ + kUndefined | kInternal) \ V(Any, -1) #define BITSET_TYPE_LIST(V) \ @@ -192,6 +214,9 @@ namespace internal { SEMANTIC_BITSET_TYPE_LIST(V) +// 
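
The widened masks (ten representation bits and 22 semantic bits, versus 9/23 before this patch) still partition the 32-bit word exactly, and every derived semantic constant is a plain bitwise or of leaf bits. A compile-and-run sanity check (the leaf constants here carry only their semantic parts, unlike the real list entries, which also or in representation bits):

#include <cassert>
#include <cstdint>

// Copies of the two masks from MASK_BITSET_TYPE_LIST in this patch.
const int32_t kRepresentation = static_cast<int32_t>(0xffc00000);
const int32_t kSemantic       = 0x003fffff;

int main() {
  // The two dimensions split the word exactly: top 10 bits for
  // representation, low 22 for semantics, no overlap, nothing left over.
  assert((kRepresentation & kSemantic) == 0);
  assert((kRepresentation | kSemantic) == -1);

  // Derived semantic sets compose by bitwise or, e.g. Signed32 above:
  const int kUnsignedSmall = 1 << 3, kOtherSignedSmall = 1 << 4;
  const int kOtherUnsigned31 = 1 << 5, kOtherSigned32 = 1 << 7;
  const int kSignedSmall = kUnsignedSmall | kOtherSignedSmall;
  const int kSigned32 = kSignedSmall | kOtherUnsigned31 | kOtherSigned32;
  assert((kSigned32 & kUnsignedSmall) != 0);  // SignedSmall <= Signed32
}
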
----------------------------------------------------------------------------- +// The abstract Type class, parameterized over the low-level representation. + // struct Config { // typedef TypeImpl<Config> Type; // typedef Base; @@ -202,16 +227,13 @@ namespace internal { // template<class T> static Handle<T>::type cast(Handle<Type>::type); // static bool is_bitset(Type*); // static bool is_class(Type*); -// static bool is_constant(Type*); // static bool is_struct(Type*, int tag); // static int as_bitset(Type*); // static i::Handle<i::Map> as_class(Type*); -// static i::Handle<i::Object> as_constant(Type*); // static Handle<Struct>::type as_struct(Type*); // static Type* from_bitset(int bitset); // static Handle<Type>::type from_bitset(int bitset, Region*); -// static Handle<Type>::type from_class(i::Handle<Map>, int lub, Region*); -// static Handle<Type>::type from_constant(i::Handle<Object>, int, Region*); +// static Handle<Type>::type from_class(i::Handle<Map>, Region*); // static Handle<Type>::type from_struct(Handle<Struct>::type, int tag); // static Handle<Struct>::type struct_create(int tag, int length, Region*); // static void struct_shrink(Handle<Struct>::type, int length); @@ -219,28 +241,39 @@ namespace internal { // static int struct_length(Handle<Struct>::type); // static Handle<Type>::type struct_get(Handle<Struct>::type, int); // static void struct_set(Handle<Struct>::type, int, Handle<Type>::type); -// static int lub_bitset(Type*); +// template<class V> +// static i::Handle<V> struct_get_value(Handle<Struct>::type, int); +// template<class V> +// static void struct_set_value(Handle<Struct>::type, int, i::Handle<V>); // } template<class Config> class TypeImpl : public Config::Base { public: + // Auxiliary types. + class BitsetType; // Internal class StructuralType; // Internal class UnionType; // Internal class ClassType; class ConstantType; + class RangeType; + class ContextType; class ArrayType; class FunctionType; typedef typename Config::template Handle<TypeImpl>::type TypeHandle; typedef typename Config::template Handle<ClassType>::type ClassHandle; typedef typename Config::template Handle<ConstantType>::type ConstantHandle; + typedef typename Config::template Handle<RangeType>::type RangeHandle; + typedef typename Config::template Handle<ContextType>::type ContextHandle; typedef typename Config::template Handle<ArrayType>::type ArrayHandle; typedef typename Config::template Handle<FunctionType>::type FunctionHandle; typedef typename Config::template Handle<UnionType>::type UnionHandle; typedef typename Config::Region Region; + // Constructors. + #define DEFINE_TYPE_CONSTRUCTOR(type, value) \ static TypeImpl* type() { return BitsetType::New(BitsetType::k##type); } \ static TypeHandle type(Region* region) { \ @@ -253,8 +286,15 @@ class TypeImpl : public Config::Base { return ClassType::New(map, region); } static TypeHandle Constant(i::Handle<i::Object> value, Region* region) { + // TODO(neis): Return RangeType for numerical values. 
return ConstantType::New(value, region); } + static TypeHandle Range(double min, double max, Region* region) { + return RangeType::New(min, max, region); + } + static TypeHandle Context(TypeHandle outer, Region* region) { + return ContextType::New(outer, region); + } static TypeHandle Array(TypeHandle element, Region* region) { return ArrayType::New(element, region); } @@ -278,10 +318,22 @@ class TypeImpl : public Config::Base { function->InitParameter(1, param1); return function; } + static TypeHandle Function( + TypeHandle result, TypeHandle param0, TypeHandle param1, + TypeHandle param2, Region* region) { + FunctionHandle function = Function(result, Any(region), 3, region); + function->InitParameter(0, param0); + function->InitParameter(1, param1); + function->InitParameter(2, param2); + return function; + } static TypeHandle Union(TypeHandle type1, TypeHandle type2, Region* reg); static TypeHandle Intersect(TypeHandle type1, TypeHandle type2, Region* reg); + static TypeHandle Of(double value, Region* region) { + return Config::from_bitset(BitsetType::Lub(value), region); + } static TypeHandle Of(i::Object* value, Region* region) { return Config::from_bitset(BitsetType::Lub(value), region); } @@ -289,9 +341,9 @@ class TypeImpl : public Config::Base { return Of(*value, region); } - bool IsInhabited() { - return !this->IsBitset() || BitsetType::IsInhabited(this->AsBitset()); - } + // Predicates. + + bool IsInhabited() { return BitsetType::IsInhabited(this->BitsetLub()); } bool Is(TypeImpl* that) { return this == that || this->SlowIs(that); } template<class TypeHandle> @@ -309,9 +361,9 @@ class TypeImpl : public Config::Base { bool Contains(i::Object* val); bool Contains(i::Handle<i::Object> val) { return this->Contains(*val); } - // State-dependent versions of Of and Is that consider subtyping between + // State-dependent versions of the above that consider subtyping between // a constant and its map class. - static TypeHandle NowOf(i::Object* value, Region* region); + inline static TypeHandle NowOf(i::Object* value, Region* region); static TypeHandle NowOf(i::Handle<i::Object> value, Region* region) { return NowOf(*value, region); } @@ -323,15 +375,32 @@ class TypeImpl : public Config::Base { bool NowStable(); - bool IsClass() { return Config::is_class(this); } - bool IsConstant() { return Config::is_constant(this); } - bool IsArray() { return Config::is_struct(this, StructuralType::kArrayTag); } + // Inspection. + + bool IsClass() { + return Config::is_class(this) + || Config::is_struct(this, StructuralType::kClassTag); + } + bool IsConstant() { + return Config::is_struct(this, StructuralType::kConstantTag); + } + bool IsRange() { + return Config::is_struct(this, StructuralType::kRangeTag); + } + bool IsContext() { + return Config::is_struct(this, StructuralType::kContextTag); + } + bool IsArray() { + return Config::is_struct(this, StructuralType::kArrayTag); + } bool IsFunction() { return Config::is_struct(this, StructuralType::kFunctionTag); } ClassType* AsClass() { return ClassType::cast(this); } ConstantType* AsConstant() { return ConstantType::cast(this); } + RangeType* AsRange() { return RangeType::cast(this); } + ContextType* AsContext() { return ContextType::cast(this); } ArrayType* AsArray() { return ArrayType::cast(this); } FunctionType* AsFunction() { return FunctionType::cast(this); } @@ -348,24 +417,39 @@ class TypeImpl : public Config::Base { return Iterator<i::Object>(Config::handle(this)); } + // Casting and conversion. 
+ static inline TypeImpl* cast(typename Config::Base* object); template<class OtherTypeImpl> static TypeHandle Convert( typename OtherTypeImpl::TypeHandle type, Region* region); + // Printing. + enum PrintDimension { BOTH_DIMS, SEMANTIC_DIM, REPRESENTATION_DIM }; - void TypePrint(PrintDimension = BOTH_DIMS); - void TypePrint(FILE* out, PrintDimension = BOTH_DIMS); + + void PrintTo(OStream& os, PrintDimension dim = BOTH_DIMS); // NOLINT + +#ifdef DEBUG + void Print(); +#endif protected: + // Friends. + template<class> friend class Iterator; template<class> friend class TypeImpl; + // Handle conversion. + template<class T> static typename Config::template Handle<T>::type handle(T* type) { return Config::handle(type); } + TypeImpl* unhandle() { return this; } + + // Internal inspection. bool IsNone() { return this == None(); } bool IsAny() { return this == Any(); } @@ -373,27 +457,34 @@ class TypeImpl : public Config::Base { bool IsUnion() { return Config::is_struct(this, StructuralType::kUnionTag); } int AsBitset() { - ASSERT(this->IsBitset()); + DCHECK(this->IsBitset()); return static_cast<BitsetType*>(this)->Bitset(); } UnionType* AsUnion() { return UnionType::cast(this); } - bool SlowIs(TypeImpl* that); - - bool InUnion(UnionHandle unioned, int current_size); - static int ExtendUnion( - UnionHandle unioned, TypeHandle t, int current_size); - static int ExtendIntersection( - UnionHandle unioned, TypeHandle t, TypeHandle other, int current_size); + // Auxiliary functions. int BitsetGlb() { return BitsetType::Glb(this); } int BitsetLub() { return BitsetType::Lub(this); } + int InherentBitsetLub() { return BitsetType::InherentLub(this); } + + bool SlowIs(TypeImpl* that); + + TypeHandle Rebound(int bitset, Region* region); + int BoundBy(TypeImpl* that); + int IndexInUnion(int bound, UnionHandle unioned, int current_size); + static int ExtendUnion( + UnionHandle unioned, int current_size, TypeHandle t, + TypeHandle other, bool is_intersect, Region* region); }; +// ----------------------------------------------------------------------------- +// Bitset types (internal). + template<class Config> class TypeImpl<Config>::BitsetType : public TypeImpl<Config> { - private: + protected: friend class TypeImpl<Config>; enum { @@ -405,7 +496,7 @@ class TypeImpl<Config>::BitsetType : public TypeImpl<Config> { int Bitset() { return Config::as_bitset(this); } - static BitsetType* New(int bitset) { + static TypeImpl* New(int bitset) { return static_cast<BitsetType*>(Config::from_bitset(bitset)); } static TypeHandle New(int bitset, Region* region) { @@ -416,18 +507,30 @@ class TypeImpl<Config>::BitsetType : public TypeImpl<Config> { return (bitset & kRepresentation) && (bitset & kSemantic); } + static bool Is(int bitset1, int bitset2) { + return (bitset1 | bitset2) == bitset2; + } + static int Glb(TypeImpl* type); // greatest lower bound that's a bitset static int Lub(TypeImpl* type); // least upper bound that's a bitset static int Lub(i::Object* value); + static int Lub(double value); + static int Lub(int32_t value); + static int Lub(uint32_t value); static int Lub(i::Map* map); + static int Lub(double min, double max); + static int InherentLub(TypeImpl* type); static const char* Name(int bitset); - static void BitsetTypePrint(FILE* out, int bitset); + static void Print(OStream& os, int bitset); // NOLINT + using TypeImpl::PrintTo; }; -// Internal -// A structured type contains a tag and a variable number of type fields. 
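The bitset machinery above reduces the common case of subtyping to bit arithmetic: the new Is(bitset1, bitset2) helper tests bitwise subset, and union/intersection of bitset types are plain OR/AND. A minimal standalone sketch of that lattice, using made-up category bits rather than V8's real encoding:

#include <cassert>
#include <cstdint>

// Toy semantic categories; V8's real bitset layout differs.
enum : uint32_t {
  kNone        = 0,
  kSignedSmall = 1u << 0,
  kOtherNumber = 1u << 1,
  kString      = 1u << 2,
  kNumber      = kSignedSmall | kOtherNumber,
};

// Mirrors BitsetType::Is above: b1 <= b2 iff b1 adds no bits to b2.
static bool Is(uint32_t b1, uint32_t b2) { return (b1 | b2) == b2; }

int main() {
  assert(Is(kSignedSmall, kNumber));     // SignedSmall is a subtype of Number
  assert(!Is(kNumber, kSignedSmall));    // ...but not vice versa
  assert((kNumber & kString) == kNone);  // disjoint types intersect to None
  assert(Is(kNone, kString));            // None is a subtype of everything
  return 0;
}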
+// ----------------------------------------------------------------------------- +// Superclass for non-bitset types (internal). +// Contains a tag and a variable number of type or value fields. + template<class Config> class TypeImpl<Config>::StructuralType : public TypeImpl<Config> { protected: @@ -438,6 +541,8 @@ class TypeImpl<Config>::StructuralType : public TypeImpl<Config> { enum Tag { kClassTag, kConstantTag, + kRangeTag, + kContextTag, kArrayTag, kFunctionTag, kUnionTag @@ -447,120 +552,273 @@ class TypeImpl<Config>::StructuralType : public TypeImpl<Config> { return Config::struct_length(Config::as_struct(this)); } TypeHandle Get(int i) { + DCHECK(0 <= i && i < this->Length()); return Config::struct_get(Config::as_struct(this), i); } void Set(int i, TypeHandle type) { + DCHECK(0 <= i && i < this->Length()); Config::struct_set(Config::as_struct(this), i, type); } void Shrink(int length) { + DCHECK(2 <= length && length <= this->Length()); Config::struct_shrink(Config::as_struct(this), length); } + template<class V> i::Handle<V> GetValue(int i) { + DCHECK(0 <= i && i < this->Length()); + return Config::template struct_get_value<V>(Config::as_struct(this), i); + } + template<class V> void SetValue(int i, i::Handle<V> x) { + DCHECK(0 <= i && i < this->Length()); + Config::struct_set_value(Config::as_struct(this), i, x); + } static TypeHandle New(Tag tag, int length, Region* region) { + DCHECK(1 <= length); return Config::from_struct(Config::struct_create(tag, length, region)); } }; +// ----------------------------------------------------------------------------- +// Union types (internal). +// A union is a structured type with the following invariants: +// - its length is at least 2 +// - at most one field is a bitset, and it must go into index 0 +// - no field is a union +// - no field is a subtype of any other field +template<class Config> +class TypeImpl<Config>::UnionType : public StructuralType { + public: + static UnionHandle New(int length, Region* region) { + return Config::template cast<UnionType>( + StructuralType::New(StructuralType::kUnionTag, length, region)); + } + + static UnionType* cast(TypeImpl* type) { + DCHECK(type->IsUnion()); + return static_cast<UnionType*>(type); + } + + bool Wellformed(); +}; + + +// ----------------------------------------------------------------------------- +// Class types. + template<class Config> -class TypeImpl<Config>::ClassType : public TypeImpl<Config> { +class TypeImpl<Config>::ClassType : public StructuralType { public: - i::Handle<i::Map> Map() { return Config::as_class(this); } + TypeHandle Bound(Region* region) { + return Config::is_class(this) + ? BitsetType::New(BitsetType::Lub(*Config::as_class(this)), region) + : this->Get(0); + } + i::Handle<i::Map> Map() { + return Config::is_class(this) + ? 
Config::as_class(this) + : this->template GetValue<i::Map>(1); + } + + static ClassHandle New( + i::Handle<i::Map> map, TypeHandle bound, Region* region) { + DCHECK(BitsetType::Is(bound->AsBitset(), BitsetType::Lub(*map))); + ClassHandle type = Config::template cast<ClassType>( + StructuralType::New(StructuralType::kClassTag, 2, region)); + type->Set(0, bound); + type->SetValue(1, map); + return type; + } static ClassHandle New(i::Handle<i::Map> map, Region* region) { - return Config::template cast<ClassType>( - Config::from_class(map, BitsetType::Lub(*map), region)); + ClassHandle type = + Config::template cast<ClassType>(Config::from_class(map, region)); + if (type->IsClass()) { + return type; + } else { + TypeHandle bound = BitsetType::New(BitsetType::Lub(*map), region); + return New(map, bound, region); + } } static ClassType* cast(TypeImpl* type) { - ASSERT(type->IsClass()); + DCHECK(type->IsClass()); return static_cast<ClassType*>(type); } }; +// ----------------------------------------------------------------------------- +// Constant types. + template<class Config> -class TypeImpl<Config>::ConstantType : public TypeImpl<Config> { +class TypeImpl<Config>::ConstantType : public StructuralType { public: - i::Handle<i::Object> Value() { return Config::as_constant(this); } + TypeHandle Bound() { return this->Get(0); } + i::Handle<i::Object> Value() { return this->template GetValue<i::Object>(1); } + + static ConstantHandle New( + i::Handle<i::Object> value, TypeHandle bound, Region* region) { + DCHECK(BitsetType::Is(bound->AsBitset(), BitsetType::Lub(*value))); + ConstantHandle type = Config::template cast<ConstantType>( + StructuralType::New(StructuralType::kConstantTag, 2, region)); + type->Set(0, bound); + type->SetValue(1, value); + return type; + } static ConstantHandle New(i::Handle<i::Object> value, Region* region) { - return Config::template cast<ConstantType>( - Config::from_constant(value, BitsetType::Lub(*value), region)); + TypeHandle bound = BitsetType::New(BitsetType::Lub(*value), region); + return New(value, bound, region); } static ConstantType* cast(TypeImpl* type) { - ASSERT(type->IsConstant()); + DCHECK(type->IsConstant()); return static_cast<ConstantType*>(type); } }; -// Internal -// A union is a structured type with the following invariants: -// - its length is at least 2 -// - at most one field is a bitset, and it must go into index 0 -// - no field is a union +// ----------------------------------------------------------------------------- +// Range types. 
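The RangeType that follows pairs a numeric interval with a cached bitset bound in field 0, and its checked constructor requires the bound to over-approximate the interval (the DCHECK against BitsetType::Lub(min, max)). A standalone sketch of that invariant with a toy two-level number lattice; Lub here is illustrative, not V8's:

#include <cassert>
#include <cstdint>

enum : uint32_t { kSignedSmall = 1u << 0, kOtherNumber = 1u << 1,
                  kNumber = kSignedSmall | kOtherNumber };

static bool Is(uint32_t b1, uint32_t b2) { return (b1 | b2) == b2; }

// Toy Lub(min, max): the smallest bitset covering every value in the interval.
static uint32_t Lub(double min, double max) {
  bool small = min >= -2147483648.0 && max <= 2147483647.0 &&
               min == static_cast<int32_t>(min) &&
               max == static_cast<int32_t>(max);
  return small ? kSignedSmall : kNumber;
}

int main() {
  // The convenience overload picks bound = Lub(min, max) ...
  uint32_t bound = Lub(0.0, 10.0);
  assert(bound == kSignedSmall);
  // ... which satisfies the bound-taking overload's check by construction.
  assert(Is(Lub(0.0, 10.0), bound));
  // Fractional endpoints widen the bound.
  assert(Lub(0.5, 10.0) == kNumber);
  return 0;
}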
+ template<class Config> -class TypeImpl<Config>::UnionType : public StructuralType { +class TypeImpl<Config>::RangeType : public StructuralType { public: - static UnionHandle New(int length, Region* region) { - return Config::template cast<UnionType>( - StructuralType::New(StructuralType::kUnionTag, length, region)); + TypeHandle Bound() { return this->Get(0); } + double Min() { return this->template GetValue<i::HeapNumber>(1)->value(); } + double Max() { return this->template GetValue<i::HeapNumber>(2)->value(); } + + static RangeHandle New( + double min, double max, TypeHandle bound, Region* region) { + DCHECK(BitsetType::Is(bound->AsBitset(), BitsetType::Lub(min, max))); + RangeHandle type = Config::template cast<RangeType>( + StructuralType::New(StructuralType::kRangeTag, 3, region)); + type->Set(0, bound); + Factory* factory = Config::isolate(region)->factory(); + Handle<HeapNumber> minV = factory->NewHeapNumber(min); + Handle<HeapNumber> maxV = factory->NewHeapNumber(max); + type->SetValue(1, minV); + type->SetValue(2, maxV); + return type; } - static UnionType* cast(TypeImpl* type) { - ASSERT(type->IsUnion()); - return static_cast<UnionType*>(type); + static RangeHandle New(double min, double max, Region* region) { + TypeHandle bound = BitsetType::New(BitsetType::Lub(min, max), region); + return New(min, max, bound, region); + } + + static RangeType* cast(TypeImpl* type) { + DCHECK(type->IsRange()); + return static_cast<RangeType*>(type); } }; +// ----------------------------------------------------------------------------- +// Context types. + +template<class Config> +class TypeImpl<Config>::ContextType : public StructuralType { + public: + TypeHandle Bound() { return this->Get(0); } + TypeHandle Outer() { return this->Get(1); } + + static ContextHandle New(TypeHandle outer, TypeHandle bound, Region* region) { + DCHECK(BitsetType::Is( + bound->AsBitset(), BitsetType::kInternal & BitsetType::kTaggedPtr)); + ContextHandle type = Config::template cast<ContextType>( + StructuralType::New(StructuralType::kContextTag, 2, region)); + type->Set(0, bound); + type->Set(1, outer); + return type; + } + + static ContextHandle New(TypeHandle outer, Region* region) { + TypeHandle bound = BitsetType::New( + BitsetType::kInternal & BitsetType::kTaggedPtr, region); + return New(outer, bound, region); + } + + static ContextType* cast(TypeImpl* type) { + DCHECK(type->IsContext()); + return static_cast<ContextType*>(type); + } +}; + + +// ----------------------------------------------------------------------------- +// Array types. 
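As with Range and Context above, the Array and Function types below now reserve field 0 for a cached bitset bound, so the upper bound of any structural type is a single field read. A standalone sketch of that layout convention, with invented names and a pointer-sized field array for simplicity:

#include <cassert>
#include <cstdint>
#include <vector>

// Each structural type = tag + fields, with field 0 caching the bitset bound.
struct StructSketch {
  int tag;
  std::vector<uintptr_t> fields;
  uintptr_t Bound() const { return fields.at(0); }
};

int main() {
  const uintptr_t kArrayBitset = 1u << 5;  // stand-in for BitsetType::kArray
  // An array type stores {bound, element}, as in ArrayType::New above:
  StructSketch array{/*tag=*/4, {kArrayBitset, /*element type slot*/ 0}};
  assert(array.Bound() == kArrayBitset);   // cheap BitsetLub, no recursion
  return 0;
}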
+ template<class Config> class TypeImpl<Config>::ArrayType : public StructuralType { public: - TypeHandle Element() { return this->Get(0); } + TypeHandle Bound() { return this->Get(0); } + TypeHandle Element() { return this->Get(1); } - static ArrayHandle New(TypeHandle element, Region* region) { + static ArrayHandle New(TypeHandle element, TypeHandle bound, Region* region) { + DCHECK(BitsetType::Is(bound->AsBitset(), BitsetType::kArray)); ArrayHandle type = Config::template cast<ArrayType>( - StructuralType::New(StructuralType::kArrayTag, 1, region)); - type->Set(0, element); + StructuralType::New(StructuralType::kArrayTag, 2, region)); + type->Set(0, bound); + type->Set(1, element); return type; } + static ArrayHandle New(TypeHandle element, Region* region) { + TypeHandle bound = BitsetType::New(BitsetType::kArray, region); + return New(element, bound, region); + } + static ArrayType* cast(TypeImpl* type) { - ASSERT(type->IsArray()); + DCHECK(type->IsArray()); return static_cast<ArrayType*>(type); } }; +// ----------------------------------------------------------------------------- +// Function types. + template<class Config> class TypeImpl<Config>::FunctionType : public StructuralType { public: - int Arity() { return this->Length() - 2; } - TypeHandle Result() { return this->Get(0); } - TypeHandle Receiver() { return this->Get(1); } - TypeHandle Parameter(int i) { return this->Get(2 + i); } + int Arity() { return this->Length() - 3; } + TypeHandle Bound() { return this->Get(0); } + TypeHandle Result() { return this->Get(1); } + TypeHandle Receiver() { return this->Get(2); } + TypeHandle Parameter(int i) { return this->Get(3 + i); } - void InitParameter(int i, TypeHandle type) { this->Set(2 + i, type); } + void InitParameter(int i, TypeHandle type) { this->Set(3 + i, type); } static FunctionHandle New( - TypeHandle result, TypeHandle receiver, int arity, Region* region) { + TypeHandle result, TypeHandle receiver, TypeHandle bound, + int arity, Region* region) { + DCHECK(BitsetType::Is(bound->AsBitset(), BitsetType::kFunction)); FunctionHandle type = Config::template cast<FunctionType>( - StructuralType::New(StructuralType::kFunctionTag, 2 + arity, region)); - type->Set(0, result); - type->Set(1, receiver); + StructuralType::New(StructuralType::kFunctionTag, 3 + arity, region)); + type->Set(0, bound); + type->Set(1, result); + type->Set(2, receiver); return type; } + static FunctionHandle New( + TypeHandle result, TypeHandle receiver, int arity, Region* region) { + TypeHandle bound = BitsetType::New(BitsetType::kFunction, region); + return New(result, receiver, bound, arity, region); + } + static FunctionType* cast(TypeImpl* type) { - ASSERT(type->IsFunction()); + DCHECK(type->IsFunction()); return static_cast<FunctionType*>(type); } }; +// ----------------------------------------------------------------------------- +// Type iterators. + template<class Config> template<class T> class TypeImpl<Config>::Iterator { public: @@ -584,8 +842,10 @@ class TypeImpl<Config>::Iterator { }; -// Zone-allocated types are either (odd) integers to represent bitsets, or +// ----------------------------------------------------------------------------- +// Zone-allocated types; they are either (odd) integers to represent bitsets, or // (even) pointers to structures for everything else. 
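A common way to realize the odd/even split just described is low-bit tagging. A standalone sketch of one such scheme (V8's actual from_bitset/as_bitset encoding may differ in detail):

#include <cassert>
#include <cstdint>

// Odd words encode bitsets inline; even words are aligned heap pointers.
static void* from_bitset(int bitset) {
  return reinterpret_cast<void*>((static_cast<uintptr_t>(bitset) << 1) | 1u);
}
static bool is_bitset(const void* type) {
  return (reinterpret_cast<uintptr_t>(type) & 1u) != 0;
}
static int as_bitset(void* type) {
  return static_cast<int>(reinterpret_cast<uintptr_t>(type) >> 1);
}

int main() {
  void* t = from_bitset(0x42);
  assert(is_bitset(t) && as_bitset(t) == 0x42);
  static int object;                 // any aligned object address is even
  assert(!is_bitset(&object));
  return 0;
}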
+ struct ZoneTypeConfig { typedef TypeImpl<ZoneTypeConfig> Type; class Base {}; @@ -593,41 +853,46 @@ struct ZoneTypeConfig { typedef i::Zone Region; template<class T> struct Handle { typedef T* type; }; + // TODO(neis): This will be removed again once we have struct_get_double(). + static inline i::Isolate* isolate(Region* region) { + return region->isolate(); + } + template<class T> static inline T* handle(T* type); template<class T> static inline T* cast(Type* type); static inline bool is_bitset(Type* type); static inline bool is_class(Type* type); - static inline bool is_constant(Type* type); static inline bool is_struct(Type* type, int tag); static inline int as_bitset(Type* type); - static inline Struct* as_struct(Type* type); static inline i::Handle<i::Map> as_class(Type* type); - static inline i::Handle<i::Object> as_constant(Type* type); + static inline Struct* as_struct(Type* type); static inline Type* from_bitset(int bitset); static inline Type* from_bitset(int bitset, Zone* zone); + static inline Type* from_class(i::Handle<i::Map> map, Zone* zone); static inline Type* from_struct(Struct* structured); - static inline Type* from_class(i::Handle<i::Map> map, int lub, Zone* zone); - static inline Type* from_constant( - i::Handle<i::Object> value, int lub, Zone* zone); static inline Struct* struct_create(int tag, int length, Zone* zone); - static inline void struct_shrink(Struct* structured, int length); - static inline int struct_tag(Struct* structured); - static inline int struct_length(Struct* structured); - static inline Type* struct_get(Struct* structured, int i); - static inline void struct_set(Struct* structured, int i, Type* type); - - static inline int lub_bitset(Type* type); + static inline void struct_shrink(Struct* structure, int length); + static inline int struct_tag(Struct* structure); + static inline int struct_length(Struct* structure); + static inline Type* struct_get(Struct* structure, int i); + static inline void struct_set(Struct* structure, int i, Type* type); + template<class V> + static inline i::Handle<V> struct_get_value(Struct* structure, int i); + template<class V> static inline void struct_set_value( + Struct* structure, int i, i::Handle<V> x); }; typedef TypeImpl<ZoneTypeConfig> Type; -// Heap-allocated types are either smis for bitsets, maps for classes, boxes for +// ----------------------------------------------------------------------------- +// Heap-allocated types; either smis for bitsets, maps for classes, boxes for // constants, or fixed arrays for unions. + struct HeapTypeConfig { typedef TypeImpl<HeapTypeConfig> Type; typedef i::Object Base; @@ -635,43 +900,50 @@ struct HeapTypeConfig { typedef i::Isolate Region; template<class T> struct Handle { typedef i::Handle<T> type; }; + // TODO(neis): This will be removed again once we have struct_get_double(). 
+ static inline i::Isolate* isolate(Region* region) { + return region; + } + template<class T> static inline i::Handle<T> handle(T* type); template<class T> static inline i::Handle<T> cast(i::Handle<Type> type); static inline bool is_bitset(Type* type); static inline bool is_class(Type* type); - static inline bool is_constant(Type* type); static inline bool is_struct(Type* type, int tag); static inline int as_bitset(Type* type); static inline i::Handle<i::Map> as_class(Type* type); - static inline i::Handle<i::Object> as_constant(Type* type); static inline i::Handle<Struct> as_struct(Type* type); static inline Type* from_bitset(int bitset); static inline i::Handle<Type> from_bitset(int bitset, Isolate* isolate); static inline i::Handle<Type> from_class( - i::Handle<i::Map> map, int lub, Isolate* isolate); - static inline i::Handle<Type> from_constant( - i::Handle<i::Object> value, int lub, Isolate* isolate); - static inline i::Handle<Type> from_struct(i::Handle<Struct> structured); + i::Handle<i::Map> map, Isolate* isolate); + static inline i::Handle<Type> from_struct(i::Handle<Struct> structure); static inline i::Handle<Struct> struct_create( int tag, int length, Isolate* isolate); - static inline void struct_shrink(i::Handle<Struct> structured, int length); - static inline int struct_tag(i::Handle<Struct> structured); - static inline int struct_length(i::Handle<Struct> structured); - static inline i::Handle<Type> struct_get(i::Handle<Struct> structured, int i); + static inline void struct_shrink(i::Handle<Struct> structure, int length); + static inline int struct_tag(i::Handle<Struct> structure); + static inline int struct_length(i::Handle<Struct> structure); + static inline i::Handle<Type> struct_get(i::Handle<Struct> structure, int i); static inline void struct_set( - i::Handle<Struct> structured, int i, i::Handle<Type> type); - - static inline int lub_bitset(Type* type); + i::Handle<Struct> structure, int i, i::Handle<Type> type); + template<class V> + static inline i::Handle<V> struct_get_value( + i::Handle<Struct> structure, int i); + template<class V> + static inline void struct_set_value( + i::Handle<Struct> structure, int i, i::Handle<V> x); }; typedef TypeImpl<HeapTypeConfig> HeapType; -// A simple struct to represent a pair of lower/upper type bounds. +// ----------------------------------------------------------------------------- +// Type bounds. A simple struct to represent a pair of lower/upper types. + template<class Config> struct BoundsImpl { typedef TypeImpl<Config> Type; @@ -684,7 +956,7 @@ struct BoundsImpl { BoundsImpl() {} explicit BoundsImpl(TypeHandle t) : lower(t), upper(t) {} BoundsImpl(TypeHandle l, TypeHandle u) : lower(l), upper(u) { - ASSERT(lower->Is(upper)); + DCHECK(lower->Is(upper)); } // Unrestricted bounds. diff --git a/deps/v8/src/typing.cc b/deps/v8/src/typing.cc index 434aff34905..136f72271c8 100644 --- a/deps/v8/src/typing.cc +++ b/deps/v8/src/typing.cc @@ -2,12 +2,13 @@ // Use of this source code is governed by a BSD-style license that can be // found in the LICENSE file. 
-#include "typing.h" +#include "src/typing.h" -#include "frames.h" -#include "frames-inl.h" -#include "parser.h" // for CompileTimeValue; TODO(rossberg): should move -#include "scopes.h" +#include "src/frames.h" +#include "src/frames-inl.h" +#include "src/ostreams.h" +#include "src/parser.h" // for CompileTimeValue; TODO(rossberg): should move +#include "src/scopes.h" namespace v8 { namespace internal { @@ -27,7 +28,7 @@ AstTyper::AstTyper(CompilationInfo* info) #define RECURSE(call) \ do { \ - ASSERT(!visitor->HasStackOverflow()); \ + DCHECK(!visitor->HasStackOverflow()); \ call; \ if (visitor->HasStackOverflow()) return; \ } while (false) @@ -50,12 +51,12 @@ void AstTyper::Run(CompilationInfo* info) { #ifdef OBJECT_PRINT static void PrintObserved(Variable* var, Object* value, Type* type) { - PrintF(" observed %s ", var->IsParameter() ? "param" : "local"); - var->name()->Print(); - PrintF(" : "); - value->ShortPrint(); - PrintF(" -> "); - type->TypePrint(); + OFStream os(stdout); + os << " observed " << (var->IsParameter() ? "param" : "local") << " "; + var->name()->Print(os); + os << " : " << Brief(value) << " -> "; + type->PrintTo(os); + os << endl; } #endif // OBJECT_PRINT @@ -75,7 +76,7 @@ void AstTyper::ObserveTypesAtOsrEntry(IterationStatement* stmt) { Scope* scope = info_->scope(); // Assert that the frame on the stack belongs to the function we want to OSR. - ASSERT_EQ(*info_->closure(), frame->function()); + DCHECK_EQ(*info_->closure(), frame->function()); int params = scope->num_parameters(); int locals = scope->StackLocalCount(); @@ -118,7 +119,7 @@ void AstTyper::ObserveTypesAtOsrEntry(IterationStatement* stmt) { #define RECURSE(call) \ do { \ - ASSERT(!HasStackOverflow()); \ + DCHECK(!HasStackOverflow()); \ call; \ if (HasStackOverflow()) return; \ } while (false) @@ -435,7 +436,7 @@ void AstTyper::VisitAssignment(Assignment* expr) { if (!expr->IsUninitialized()) { if (prop->key()->IsPropertyName()) { Literal* lit_key = prop->key()->AsLiteral(); - ASSERT(lit_key != NULL && lit_key->value()->IsString()); + DCHECK(lit_key != NULL && lit_key->value()->IsString()); Handle<String> name = Handle<String>::cast(lit_key->value()); oracle()->AssignmentReceiverTypes(id, name, expr->GetReceiverTypes()); } else { @@ -483,12 +484,9 @@ void AstTyper::VisitProperty(Property* expr) { if (!expr->IsUninitialized()) { if (expr->key()->IsPropertyName()) { Literal* lit_key = expr->key()->AsLiteral(); - ASSERT(lit_key != NULL && lit_key->value()->IsString()); + DCHECK(lit_key != NULL && lit_key->value()->IsString()); Handle<String> name = Handle<String>::cast(lit_key->value()); - bool is_prototype; - oracle()->PropertyReceiverTypes( - id, name, expr->GetReceiverTypes(), &is_prototype); - expr->set_is_function_prototype(is_prototype); + oracle()->PropertyReceiverTypes(id, name, expr->GetReceiverTypes()); } else { bool is_string; oracle()->KeyedPropertyReceiverTypes( @@ -511,6 +509,9 @@ void AstTyper::VisitCall(Call* expr) { expr->IsUsingCallFeedbackSlot(isolate()) && oracle()->CallIsMonomorphic(expr->CallFeedbackSlot())) { expr->set_target(oracle()->GetCallTarget(expr->CallFeedbackSlot())); + Handle<AllocationSite> site = + oracle()->GetCallAllocationSite(expr->CallFeedbackSlot()); + expr->set_allocation_site(site); } ZoneList<Expression*>* args = expr->arguments(); @@ -672,7 +673,7 @@ void AstTyper::VisitBinaryOperation(BinaryOperation* expr) { Bounds l = expr->left()->bounds(); Bounds r = expr->right()->bounds(); Type* lower = - l.lower->Is(Type::None()) || r.lower->Is(Type::None()) ? 
+ !l.lower->IsInhabited() || !r.lower->IsInhabited() ? Type::None(zone()) : l.lower->Is(Type::String()) || r.lower->Is(Type::String()) ? Type::String(zone()) : diff --git a/deps/v8/src/typing.h b/deps/v8/src/typing.h index 71a63c30908..6f6b0dcb1f9 100644 --- a/deps/v8/src/typing.h +++ b/deps/v8/src/typing.h @@ -5,16 +5,16 @@ #ifndef V8_TYPING_H_ #define V8_TYPING_H_ -#include "v8.h" - -#include "allocation.h" -#include "ast.h" -#include "compiler.h" -#include "type-info.h" -#include "types.h" -#include "effects.h" -#include "zone.h" -#include "scopes.h" +#include "src/v8.h" + +#include "src/allocation.h" +#include "src/ast.h" +#include "src/compiler.h" +#include "src/effects.h" +#include "src/scopes.h" +#include "src/type-info.h" +#include "src/types.h" +#include "src/zone.h" namespace v8 { namespace internal { diff --git a/deps/v8/src/unbound-queue-inl.h b/deps/v8/src/unbound-queue-inl.h index 7c2e8bc4f45..67822816800 100644 --- a/deps/v8/src/unbound-queue-inl.h +++ b/deps/v8/src/unbound-queue-inl.h @@ -5,9 +5,7 @@ #ifndef V8_UNBOUND_QUEUE_INL_H_ #define V8_UNBOUND_QUEUE_INL_H_ -#include "unbound-queue.h" - -#include "atomicops.h" +#include "src/unbound-queue.h" namespace v8 { namespace internal { @@ -26,7 +24,7 @@ struct UnboundQueue<Record>::Node: public Malloced { template<typename Record> UnboundQueue<Record>::UnboundQueue() { first_ = new Node(Record()); - divider_ = last_ = reinterpret_cast<AtomicWord>(first_); + divider_ = last_ = reinterpret_cast<base::AtomicWord>(first_); } @@ -46,10 +44,10 @@ void UnboundQueue<Record>::DeleteFirst() { template<typename Record> bool UnboundQueue<Record>::Dequeue(Record* rec) { - if (divider_ == Acquire_Load(&last_)) return false; + if (divider_ == base::Acquire_Load(&last_)) return false; Node* next = reinterpret_cast<Node*>(divider_)->next; *rec = next->value; - Release_Store(÷r_, reinterpret_cast<AtomicWord>(next)); + base::Release_Store(÷r_, reinterpret_cast<base::AtomicWord>(next)); return true; } @@ -58,9 +56,9 @@ template<typename Record> void UnboundQueue<Record>::Enqueue(const Record& rec) { Node*& next = reinterpret_cast<Node*>(last_)->next; next = new Node(rec); - Release_Store(&last_, reinterpret_cast<AtomicWord>(next)); + base::Release_Store(&last_, reinterpret_cast<base::AtomicWord>(next)); - while (first_ != reinterpret_cast<Node*>(Acquire_Load(÷r_))) { + while (first_ != reinterpret_cast<Node*>(base::Acquire_Load(÷r_))) { DeleteFirst(); } } @@ -68,13 +66,13 @@ void UnboundQueue<Record>::Enqueue(const Record& rec) { template<typename Record> bool UnboundQueue<Record>::IsEmpty() const { - return NoBarrier_Load(÷r_) == NoBarrier_Load(&last_); + return base::NoBarrier_Load(÷r_) == base::NoBarrier_Load(&last_); } template<typename Record> Record* UnboundQueue<Record>::Peek() const { - if (divider_ == Acquire_Load(&last_)) return NULL; + if (divider_ == base::Acquire_Load(&last_)) return NULL; Node* next = reinterpret_cast<Node*>(divider_)->next; return &next->value; } diff --git a/deps/v8/src/unbound-queue.h b/deps/v8/src/unbound-queue.h index 35a3ef499f9..3e129289739 100644 --- a/deps/v8/src/unbound-queue.h +++ b/deps/v8/src/unbound-queue.h @@ -5,7 +5,8 @@ #ifndef V8_UNBOUND_QUEUE_ #define V8_UNBOUND_QUEUE_ -#include "allocation.h" +#include "src/allocation.h" +#include "src/base/atomicops.h" namespace v8 { namespace internal { @@ -34,8 +35,8 @@ class UnboundQueue BASE_EMBEDDED { struct Node; Node* first_; - AtomicWord divider_; // Node* - AtomicWord last_; // Node* + base::AtomicWord divider_; // Node* + base::AtomicWord last_; // 
Node* DISALLOW_COPY_AND_ASSIGN(UnboundQueue); }; diff --git a/deps/v8/src/unicode-inl.h b/deps/v8/src/unicode-inl.h index a0142d2259f..81327d7ad2f 100644 --- a/deps/v8/src/unicode-inl.h +++ b/deps/v8/src/unicode-inl.h @@ -5,9 +5,9 @@ #ifndef V8_UNICODE_INL_H_ #define V8_UNICODE_INL_H_ -#include "unicode.h" -#include "checks.h" -#include "platform.h" +#include "src/unicode.h" +#include "src/base/logging.h" +#include "src/utils.h" namespace unibrow { @@ -58,7 +58,7 @@ template <class T, int s> int Mapping<T, s>::CalculateValue(uchar c, uchar n, uint16_t Latin1::ConvertNonLatin1ToLatin1(uint16_t c) { - ASSERT(c > Latin1::kMaxChar); + DCHECK(c > Latin1::kMaxChar); switch (c) { // This are equivalent characters in unicode. case 0x39c: @@ -184,15 +184,15 @@ void Utf8Decoder<kBufferSize>::Reset(const char* stream, unsigned length) { template <unsigned kBufferSize> unsigned Utf8Decoder<kBufferSize>::WriteUtf16(uint16_t* data, unsigned length) const { - ASSERT(length > 0); + DCHECK(length > 0); if (length > utf16_length_) length = utf16_length_; // memcpy everything in buffer. unsigned buffer_length = last_byte_of_buffer_unused_ ? kBufferSize - 1 : kBufferSize; - unsigned memcpy_length = length <= buffer_length ? length : buffer_length; - v8::internal::OS::MemCopy(data, buffer_, memcpy_length*sizeof(uint16_t)); + unsigned memcpy_length = length <= buffer_length ? length : buffer_length; + v8::internal::MemCopy(data, buffer_, memcpy_length * sizeof(uint16_t)); if (length <= buffer_length) return length; - ASSERT(unbuffered_start_ != NULL); + DCHECK(unbuffered_start_ != NULL); // Copy the rest the slow way. WriteUtf16Slow(unbuffered_start_, data + buffer_length, diff --git a/deps/v8/src/unicode.cc b/deps/v8/src/unicode.cc index 3322110ab2c..a128a6ff09f 100644 --- a/deps/v8/src/unicode.cc +++ b/deps/v8/src/unicode.cc @@ -4,9 +4,9 @@ // // This file was generated at 2014-02-07 15:31:16.733174 -#include "unicode-inl.h" -#include <stdlib.h> +#include "src/unicode-inl.h" #include <stdio.h> +#include <stdlib.h> namespace unibrow { @@ -271,7 +271,7 @@ void Utf8DecoderBase::Reset(uint16_t* buffer, while (stream_length != 0) { unsigned cursor = 0; uint32_t character = Utf8::ValueOf(stream, stream_length, &cursor); - ASSERT(cursor > 0 && cursor <= stream_length); + DCHECK(cursor > 0 && cursor <= stream_length); stream += cursor; stream_length -= cursor; bool is_two_characters = character > Utf16::kMaxNonSurrogateCharCode; @@ -296,7 +296,7 @@ void Utf8DecoderBase::Reset(uint16_t* buffer, } // Have gone over buffer. // Last char of buffer is unused, set cursor back. - ASSERT(is_two_characters); + DCHECK(is_two_characters); writing_to_buffer = false; last_byte_of_buffer_unused_ = true; unbuffered_start_ = stream - cursor; @@ -317,7 +317,7 @@ void Utf8DecoderBase::WriteUtf16Slow(const uint8_t* stream, if (character > unibrow::Utf16::kMaxNonSurrogateCharCode) { *data++ = Utf16::LeadSurrogate(character); *data++ = Utf16::TrailSurrogate(character); - ASSERT(data_length > 1); + DCHECK(data_length > 1); data_length -= 2; } else { *data++ = character; diff --git a/deps/v8/src/unicode.h b/deps/v8/src/unicode.h index ecb5ab4d7da..e2d6b96b972 100644 --- a/deps/v8/src/unicode.h +++ b/deps/v8/src/unicode.h @@ -6,7 +6,7 @@ #define V8_UNICODE_H_ #include <sys/types.h> -#include "globals.h" +#include "src/globals.h" /** * \file * Definitions and convenience functions for working with unicode. 
diff --git a/deps/v8/src/unique.h b/deps/v8/src/unique.h index 8ed26829012..ffc659fa10a 100644 --- a/deps/v8/src/unique.h +++ b/deps/v8/src/unique.h @@ -5,10 +5,11 @@ #ifndef V8_HYDROGEN_UNIQUE_H_ #define V8_HYDROGEN_UNIQUE_H_ -#include "handles.h" -#include "objects.h" -#include "utils.h" -#include "zone.h" +#include "src/handles.h" +#include "src/objects.h" +#include "src/string-stream.h" +#include "src/utils.h" +#include "src/zone.h" namespace v8 { namespace internal { @@ -29,7 +30,7 @@ class UniqueSet; // Careful! Comparison of two Uniques is only correct if both were created // in the same "era" of GC or if at least one is a non-movable object. template <typename T> -class Unique V8_FINAL { +class Unique { public: // TODO(titzer): make private and introduce a uniqueness scope. explicit Unique(Handle<T> handle) { @@ -42,9 +43,9 @@ class Unique V8_FINAL { // NOTE: we currently consider maps to be non-movable, so no special // assurance is required for creating a Unique<Map>. // TODO(titzer): other immortable immovable objects are also fine. - ASSERT(!AllowHeapAllocation::IsAllowed() || handle->IsMap()); + DCHECK(!AllowHeapAllocation::IsAllowed() || handle->IsMap()); raw_address_ = reinterpret_cast<Address>(*handle); - ASSERT_NE(raw_address_, NULL); // Non-null should imply non-zero address. + DCHECK_NE(raw_address_, NULL); // Non-null should imply non-zero address. } handle_ = handle; } @@ -68,28 +69,28 @@ class Unique V8_FINAL { template <typename U> inline bool operator==(const Unique<U>& other) const { - ASSERT(IsInitialized() && other.IsInitialized()); + DCHECK(IsInitialized() && other.IsInitialized()); return raw_address_ == other.raw_address_; } template <typename U> inline bool operator!=(const Unique<U>& other) const { - ASSERT(IsInitialized() && other.IsInitialized()); + DCHECK(IsInitialized() && other.IsInitialized()); return raw_address_ != other.raw_address_; } inline intptr_t Hashcode() const { - ASSERT(IsInitialized()); + DCHECK(IsInitialized()); return reinterpret_cast<intptr_t>(raw_address_); } inline bool IsNull() const { - ASSERT(IsInitialized()); + DCHECK(IsInitialized()); return raw_address_ == NULL; } inline bool IsKnownGlobal(void* global) const { - ASSERT(IsInitialized()); + DCHECK(IsInitialized()); return raw_address_ == reinterpret_cast<Address>(global); } @@ -117,8 +118,10 @@ class Unique V8_FINAL { friend class UniqueSet<T>; // Uses internal details for speed. template <class U> friend class Unique; // For comparing raw_address values. + template <class U> + friend class PrintableUnique; // For automatic up casting. - private: + protected: Unique<T>() : raw_address_(NULL) { } Address raw_address_; @@ -128,6 +131,70 @@ class Unique V8_FINAL { }; +// TODO(danno): At some point if all of the uses of Unique end up using +// PrintableUnique, then we should merge PrintableUnique into Unique and +// predicate generating the printable string on a "am I tracing" check. +template <class T> +class PrintableUnique : public Unique<T> { + public: + // TODO(titzer): make private and introduce a uniqueness scope. + explicit PrintableUnique(Zone* zone, Handle<T> handle) : Unique<T>(handle) { + InitializeString(zone); + } + + // TODO(titzer): this is a hack to migrate to Unique<T> incrementally. + PrintableUnique(Zone* zone, Address raw_address, Handle<T> handle) + : Unique<T>(raw_address, handle) { + InitializeString(zone); + } + + // Constructor for handling automatic up casting. + // Eg. PrintableUnique<JSFunction> can be passed when PrintableUnique<Object> + // is expected. 
+ template <class S> + PrintableUnique(PrintableUnique<S> uniq) // NOLINT + : Unique<T>(Handle<T>()) { +#ifdef DEBUG + T* a = NULL; + S* b = NULL; + a = b; // Fake assignment to enforce type checks. + USE(a); +#endif + this->raw_address_ = uniq.raw_address_; + this->handle_ = uniq.handle_; + string_ = uniq.string(); + } + + // TODO(titzer): this is a hack to migrate to Unique<T> incrementally. + static PrintableUnique<T> CreateUninitialized(Zone* zone, Handle<T> handle) { + return PrintableUnique<T>(zone, reinterpret_cast<Address>(NULL), handle); + } + + static PrintableUnique<T> CreateImmovable(Zone* zone, Handle<T> handle) { + return PrintableUnique<T>(zone, reinterpret_cast<Address>(*handle), handle); + } + + const char* string() const { return string_; } + + private: + const char* string_; + + void InitializeString(Zone* zone) { + // The stringified version of the parameter must be calculated when the + // Operator is constructed to avoid accessing the heap. + HeapStringAllocator temp_allocator; + StringStream stream(&temp_allocator); + this->handle_->ShortPrint(&stream); + SmartArrayPointer<const char> desc_string = stream.ToCString(); + const char* desc_chars = desc_string.get(); + int length = static_cast<int>(strlen(desc_chars)); + char* desc_copy = zone->NewArray<char>(length + 1); + memcpy(desc_copy, desc_chars, length + 1); + string_ = desc_copy; + } +}; + + template <typename T> class UniqueSet V8_FINAL : public ZoneObject { public: @@ -138,7 +205,7 @@ class UniqueSet V8_FINAL : public ZoneObject { UniqueSet(int capacity, Zone* zone) : size_(0), capacity_(capacity), array_(zone->NewArray<Unique<T> >(capacity)) { - ASSERT(capacity <= kMaxCapacity); + DCHECK(capacity <= kMaxCapacity); } // Singleton constructor. @@ -149,7 +216,7 @@ class UniqueSet V8_FINAL : public ZoneObject { // Add a new element to this unique set. Mutates this set. O(|this|). void Add(Unique<T> uniq, Zone* zone) { - ASSERT(uniq.IsInitialized()); + DCHECK(uniq.IsInitialized()); // Keep the set sorted by the {raw_address} of the unique elements. for (int i = 0; i < size_; i++) { if (array_[i] == uniq) return; @@ -275,6 +342,26 @@ class UniqueSet V8_FINAL : public ZoneObject { return out; } + // Returns a new set representing all elements from this set which are not in + // that set. O(|this| * |that|). + UniqueSet<T>* Subtract(const UniqueSet<T>* that, Zone* zone) const { + if (that->size_ == 0) return this->Copy(zone); + + UniqueSet<T>* out = new(zone) UniqueSet<T>(this->size_, zone); + + int i = 0, j = 0; + while (i < this->size_) { + Unique<T> cand = this->array_[i]; + if (!that->Contains(cand)) { + out->array_[j++] = cand; + } + i++; + } + + out->size_ = j; + return out; + } + // Makes an exact copy of this set. O(|this|). 
UniqueSet<T>* Copy(Zone* zone) const { UniqueSet<T>* copy = new(zone) UniqueSet<T>(this->size_, zone); @@ -292,7 +379,7 @@ class UniqueSet V8_FINAL : public ZoneObject { } inline Unique<T> at(int index) const { - ASSERT(index >= 0 && index < size_); + DCHECK(index >= 0 && index < size_); return array_[index]; } @@ -321,7 +408,6 @@ class UniqueSet V8_FINAL : public ZoneObject { } }; - } } // namespace v8::internal #endif // V8_HYDROGEN_UNIQUE_H_ diff --git a/deps/v8/src/uri.h b/deps/v8/src/uri.h index 6b76525fea9..bb5140b8c02 100644 --- a/deps/v8/src/uri.h +++ b/deps/v8/src/uri.h @@ -5,11 +5,11 @@ #ifndef V8_URI_H_ #define V8_URI_H_ -#include "v8.h" +#include "src/v8.h" -#include "conversions.h" -#include "string-search.h" -#include "utils.h" +#include "src/conversions.h" +#include "src/string-search.h" +#include "src/utils.h" namespace v8 { namespace internal { @@ -22,7 +22,7 @@ static INLINE(Vector<const Char> GetCharVector(Handle<String> string)); template <> Vector<const uint8_t> GetCharVector(Handle<String> string) { String::FlatContent flat = string->GetFlatContent(); - ASSERT(flat.IsAscii()); + DCHECK(flat.IsAscii()); return flat.ToOneByteVector(); } @@ -30,7 +30,7 @@ Vector<const uint8_t> GetCharVector(Handle<String> string) { template <> Vector<const uc16> GetCharVector(Handle<String> string) { String::FlatContent flat = string->GetFlatContent(); - ASSERT(flat.IsTwoByte()); + DCHECK(flat.IsTwoByte()); return flat.ToUC16Vector(); } @@ -100,13 +100,13 @@ MaybeHandle<String> URIUnescape::UnescapeSlow( } } - ASSERT(start_index < length); + DCHECK(start_index < length); Handle<String> first_part = isolate->factory()->NewProperSubString(string, 0, start_index); int dest_position = 0; Handle<String> second_part; - ASSERT(unescaped_length <= String::kMaxLength); + DCHECK(unescaped_length <= String::kMaxLength); if (one_byte) { Handle<SeqOneByteString> dest = isolate->factory()->NewRawOneByteString( unescaped_length).ToHandleChecked(); @@ -226,7 +226,7 @@ const char URIEscape::kNotEscaped[] = { template<typename Char> MaybeHandle<String> URIEscape::Escape(Isolate* isolate, Handle<String> string) { - ASSERT(string->IsFlat()); + DCHECK(string->IsFlat()); int escaped_length = 0; int length = string->length(); @@ -243,7 +243,7 @@ MaybeHandle<String> URIEscape::Escape(Isolate* isolate, Handle<String> string) { } // We don't allow strings that are longer than a maximal length. - ASSERT(String::kMaxLength < 0x7fffffff - 6); // Cannot overflow. + DCHECK(String::kMaxLength < 0x7fffffff - 6); // Cannot overflow. if (escaped_length > String::kMaxLength) break; // Provoke exception. } } diff --git a/deps/v8/src/uri.js b/deps/v8/src/uri.js index 0e50f0b7001..4b7d1f7e00d 100644 --- a/deps/v8/src/uri.js +++ b/deps/v8/src/uri.js @@ -2,6 +2,8 @@ // Use of this source code is governed by a BSD-style license that can be // found in the LICENSE file. +"use strict"; + // This file relies on the fact that the following declaration has been made // in runtime.js: // var $Array = global.Array; @@ -11,424 +13,380 @@ // This file contains support for URI manipulations written in // JavaScript. -// Lazily initialized. -var hexCharArray = 0; -var hexCharCodeArray = 0; +(function() { -function URIAddEncodedOctetToBuffer(octet, result, index) { - result[index++] = 37; // Char code of '%'. - result[index++] = hexCharCodeArray[octet >> 4]; - result[index++] = hexCharCodeArray[octet & 0x0F]; - return index; -} + // ------------------------------------------------------------------- + // Define internal helper functions. 
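The helpers that follow hand-roll UTF-8: URIEncodeSingle emits the 1-3 byte forms for BMP code units and URIEncodePair folds a surrogate pair into the 4-byte form. The same arithmetic as a standalone C++ sketch (function names here are illustrative):

#include <cassert>
#include <cstdint>

// Encode one code point (<= 0xFFFF here) into UTF-8; returns byte count.
static int EncodeSingle(uint16_t cc, uint8_t* out) {
  if (cc <= 0x007F) { out[0] = static_cast<uint8_t>(cc); return 1; }
  if (cc <= 0x07FF) {
    out[0] = static_cast<uint8_t>(0xC0 | (cc >> 6));
    out[1] = static_cast<uint8_t>(0x80 | (cc & 0x3F));
    return 2;
  }
  out[0] = static_cast<uint8_t>(0xE0 | (cc >> 12));
  out[1] = static_cast<uint8_t>(0x80 | ((cc >> 6) & 0x3F));
  out[2] = static_cast<uint8_t>(0x80 | (cc & 0x3F));
  return 3;
}

// Combine a surrogate pair and emit the 4-byte form, as URIEncodePair does.
static int EncodePair(uint16_t lead, uint16_t trail, uint8_t* out) {
  uint32_t cp = 0x10000 + ((lead - 0xD800u) << 10) + (trail - 0xDC00u);
  out[0] = static_cast<uint8_t>(0xF0 | (cp >> 18));
  out[1] = static_cast<uint8_t>(0x80 | ((cp >> 12) & 0x3F));
  out[2] = static_cast<uint8_t>(0x80 | ((cp >> 6) & 0x3F));
  out[3] = static_cast<uint8_t>(0x80 | (cp & 0x3F));
  return 4;
}

int main() {
  uint8_t buf[4];
  assert(EncodeSingle(0x20AC, buf) == 3);        // U+20AC EURO SIGN
  assert(buf[0] == 0xE2 && buf[1] == 0x82 && buf[2] == 0xAC);
  assert(EncodePair(0xD83D, 0xDE00, buf) == 4);  // U+1F600
  assert(buf[0] == 0xF0 && buf[1] == 0x9F && buf[2] == 0x98 && buf[3] == 0x80);
  return 0;
}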
+ function HexValueOf(code) { + // 0-9 + if (code >= 48 && code <= 57) return code - 48; + // A-F + if (code >= 65 && code <= 70) return code - 55; + // a-f + if (code >= 97 && code <= 102) return code - 87; -function URIEncodeOctets(octets, result, index) { - if (hexCharCodeArray === 0) { - hexCharCodeArray = [48, 49, 50, 51, 52, 53, 54, 55, 56, 57, - 65, 66, 67, 68, 69, 70]; + return -1; } - index = URIAddEncodedOctetToBuffer(octets[0], result, index); - if (octets[1]) index = URIAddEncodedOctetToBuffer(octets[1], result, index); - if (octets[2]) index = URIAddEncodedOctetToBuffer(octets[2], result, index); - if (octets[3]) index = URIAddEncodedOctetToBuffer(octets[3], result, index); - return index; -} - - -function URIEncodeSingle(cc, result, index) { - var x = (cc >> 12) & 0xF; - var y = (cc >> 6) & 63; - var z = cc & 63; - var octets = new $Array(3); - if (cc <= 0x007F) { - octets[0] = cc; - } else if (cc <= 0x07FF) { - octets[0] = y + 192; - octets[1] = z + 128; - } else { - octets[0] = x + 224; - octets[1] = y + 128; - octets[2] = z + 128; + + // Does the char code correspond to an alpha-numeric char. + function isAlphaNumeric(cc) { + // a - z + if (97 <= cc && cc <= 122) return true; + // A - Z + if (65 <= cc && cc <= 90) return true; + // 0 - 9 + if (48 <= cc && cc <= 57) return true; + + return false; } - return URIEncodeOctets(octets, result, index); -} - - -function URIEncodePair(cc1 , cc2, result, index) { - var u = ((cc1 >> 6) & 0xF) + 1; - var w = (cc1 >> 2) & 0xF; - var x = cc1 & 3; - var y = (cc2 >> 6) & 0xF; - var z = cc2 & 63; - var octets = new $Array(4); - octets[0] = (u >> 2) + 240; - octets[1] = (((u & 3) << 4) | w) + 128; - octets[2] = ((x << 4) | y) + 128; - octets[3] = z + 128; - return URIEncodeOctets(octets, result, index); -} - - -function URIHexCharsToCharCode(highChar, lowChar) { - var highCode = HexValueOf(highChar); - var lowCode = HexValueOf(lowChar); - if (highCode == -1 || lowCode == -1) { - throw new $URIError("URI malformed"); + + // Lazily initialized. + var hexCharCodeArray = 0; + + function URIAddEncodedOctetToBuffer(octet, result, index) { + result[index++] = 37; // Char code of '%'.
+ result[index++] = hexCharCodeArray[octet >> 4]; + result[index++] = hexCharCodeArray[octet & 0x0F]; + return index; } - return (highCode << 4) | lowCode; -} - - -function URIDecodeOctets(octets, result, index) { - var value; - var o0 = octets[0]; - if (o0 < 0x80) { - value = o0; - } else if (o0 < 0xc2) { - throw new $URIError("URI malformed"); - } else { - var o1 = octets[1]; - if (o0 < 0xe0) { - var a = o0 & 0x1f; - if ((o1 < 0x80) || (o1 > 0xbf)) { - throw new $URIError("URI malformed"); - } - var b = o1 & 0x3f; - value = (a << 6) + b; - if (value < 0x80 || value > 0x7ff) { - throw new $URIError("URI malformed"); - } + + function URIEncodeOctets(octets, result, index) { + if (hexCharCodeArray === 0) { + hexCharCodeArray = [48, 49, 50, 51, 52, 53, 54, 55, 56, 57, + 65, 66, 67, 68, 69, 70]; + } + index = URIAddEncodedOctetToBuffer(octets[0], result, index); + if (octets[1]) index = URIAddEncodedOctetToBuffer(octets[1], result, index); + if (octets[2]) index = URIAddEncodedOctetToBuffer(octets[2], result, index); + if (octets[3]) index = URIAddEncodedOctetToBuffer(octets[3], result, index); + return index; + } + + function URIEncodeSingle(cc, result, index) { + var x = (cc >> 12) & 0xF; + var y = (cc >> 6) & 63; + var z = cc & 63; + var octets = new $Array(3); + if (cc <= 0x007F) { + octets[0] = cc; + } else if (cc <= 0x07FF) { + octets[0] = y + 192; + octets[1] = z + 128; } else { - var o2 = octets[2]; - if (o0 < 0xf0) { - var a = o0 & 0x0f; + octets[0] = x + 224; + octets[1] = y + 128; + octets[2] = z + 128; + } + return URIEncodeOctets(octets, result, index); + } + + function URIEncodePair(cc1 , cc2, result, index) { + var u = ((cc1 >> 6) & 0xF) + 1; + var w = (cc1 >> 2) & 0xF; + var x = cc1 & 3; + var y = (cc2 >> 6) & 0xF; + var z = cc2 & 63; + var octets = new $Array(4); + octets[0] = (u >> 2) + 240; + octets[1] = (((u & 3) << 4) | w) + 128; + octets[2] = ((x << 4) | y) + 128; + octets[3] = z + 128; + return URIEncodeOctets(octets, result, index); + } + + function URIHexCharsToCharCode(highChar, lowChar) { + var highCode = HexValueOf(highChar); + var lowCode = HexValueOf(lowChar); + if (highCode == -1 || lowCode == -1) { + throw new $URIError("URI malformed"); + } + return (highCode << 4) | lowCode; + } + + // Callers must ensure that |result| is a sufficiently long sequential + // two-byte string! 
+ function URIDecodeOctets(octets, result, index) { + var value; + var o0 = octets[0]; + if (o0 < 0x80) { + value = o0; + } else if (o0 < 0xc2) { + throw new $URIError("URI malformed"); + } else { + var o1 = octets[1]; + if (o0 < 0xe0) { + var a = o0 & 0x1f; if ((o1 < 0x80) || (o1 > 0xbf)) { throw new $URIError("URI malformed"); } var b = o1 & 0x3f; - if ((o2 < 0x80) || (o2 > 0xbf)) { - throw new $URIError("URI malformed"); - } - var c = o2 & 0x3f; - value = (a << 12) + (b << 6) + c; - if ((value < 0x800) || (value > 0xffff)) { + value = (a << 6) + b; + if (value < 0x80 || value > 0x7ff) { throw new $URIError("URI malformed"); } } else { - var o3 = octets[3]; - if (o0 < 0xf8) { - var a = (o0 & 0x07); + var o2 = octets[2]; + if (o0 < 0xf0) { + var a = o0 & 0x0f; if ((o1 < 0x80) || (o1 > 0xbf)) { throw new $URIError("URI malformed"); } - var b = (o1 & 0x3f); + var b = o1 & 0x3f; if ((o2 < 0x80) || (o2 > 0xbf)) { throw new $URIError("URI malformed"); } - var c = (o2 & 0x3f); - if ((o3 < 0x80) || (o3 > 0xbf)) { + var c = o2 & 0x3f; + value = (a << 12) + (b << 6) + c; + if ((value < 0x800) || (value > 0xffff)) { throw new $URIError("URI malformed"); } - var d = (o3 & 0x3f); - value = (a << 18) + (b << 12) + (c << 6) + d; - if ((value < 0x10000) || (value > 0x10ffff)) { + } else { + var o3 = octets[3]; + if (o0 < 0xf8) { + var a = (o0 & 0x07); + if ((o1 < 0x80) || (o1 > 0xbf)) { + throw new $URIError("URI malformed"); + } + var b = (o1 & 0x3f); + if ((o2 < 0x80) || (o2 > 0xbf)) { + throw new $URIError("URI malformed"); + } + var c = (o2 & 0x3f); + if ((o3 < 0x80) || (o3 > 0xbf)) { + throw new $URIError("URI malformed"); + } + var d = (o3 & 0x3f); + value = (a << 18) + (b << 12) + (c << 6) + d; + if ((value < 0x10000) || (value > 0x10ffff)) { + throw new $URIError("URI malformed"); + } + } else { throw new $URIError("URI malformed"); } - } else { - throw new $URIError("URI malformed"); } } } - } - if (0xD800 <= value && value <= 0xDFFF) { - throw new $URIError("URI malformed"); - } - if (value < 0x10000) { - %_TwoByteSeqStringSetChar(result, index++, value); - return index; - } else { - %_TwoByteSeqStringSetChar(result, index++, (value >> 10) + 0xd7c0); - %_TwoByteSeqStringSetChar(result, index++, (value & 0x3ff) + 0xdc00); + if (0xD800 <= value && value <= 0xDFFF) { + throw new $URIError("URI malformed"); + } + if (value < 0x10000) { + %_TwoByteSeqStringSetChar(result, index++, value); + } else { + %_TwoByteSeqStringSetChar(result, index++, (value >> 10) + 0xd7c0); + %_TwoByteSeqStringSetChar(result, index++, (value & 0x3ff) + 0xdc00); + } return index; } -} - - -// ECMA-262, section 15.1.3 -function Encode(uri, unescape) { - var uriLength = uri.length; - var array = new InternalArray(uriLength); - var index = 0; - for (var k = 0; k < uriLength; k++) { - var cc1 = uri.charCodeAt(k); - if (unescape(cc1)) { - array[index++] = cc1; - } else { - if (cc1 >= 0xDC00 && cc1 <= 0xDFFF) throw new $URIError("URI malformed"); - if (cc1 < 0xD800 || cc1 > 0xDBFF) { - index = URIEncodeSingle(cc1, array, index); + + // ECMA-262, section 15.1.3 + function Encode(uri, unescape) { + var uriLength = uri.length; + var array = new InternalArray(uriLength); + var index = 0; + for (var k = 0; k < uriLength; k++) { + var cc1 = uri.charCodeAt(k); + if (unescape(cc1)) { + array[index++] = cc1; } else { - k++; - if (k == uriLength) throw new $URIError("URI malformed"); - var cc2 = uri.charCodeAt(k); - if (cc2 < 0xDC00 || cc2 > 0xDFFF) throw new $URIError("URI malformed"); - index = URIEncodePair(cc1, cc2, array, index); + 
if (cc1 >= 0xDC00 && cc1 <= 0xDFFF) throw new $URIError("URI malformed"); + if (cc1 < 0xD800 || cc1 > 0xDBFF) { + index = URIEncodeSingle(cc1, array, index); + } else { + k++; + if (k == uriLength) throw new $URIError("URI malformed"); + var cc2 = uri.charCodeAt(k); + if (cc2 < 0xDC00 || cc2 > 0xDFFF) throw new $URIError("URI malformed"); + index = URIEncodePair(cc1, cc2, array, index); + } } } - } - var result = %NewString(array.length, NEW_ONE_BYTE_STRING); - for (var i = 0; i < array.length; i++) { - %_OneByteSeqStringSetChar(result, i, array[i]); + var result = %NewString(array.length, NEW_ONE_BYTE_STRING); + for (var i = 0; i < array.length; i++) { + %_OneByteSeqStringSetChar(result, i, array[i]); + } + return result; } - return result; -} - - -// ECMA-262, section 15.1.3 -function Decode(uri, reserved) { - var uriLength = uri.length; - var one_byte = %NewString(uriLength, NEW_ONE_BYTE_STRING); - var index = 0; - var k = 0; - - // Optimistically assume ascii string. - for ( ; k < uriLength; k++) { - var code = uri.charCodeAt(k); - if (code == 37) { // '%' - if (k + 2 >= uriLength) throw new $URIError("URI malformed"); - var cc = URIHexCharsToCharCode(uri.charCodeAt(k+1), uri.charCodeAt(k+2)); - if (cc >> 7) break; // Assumption wrong, two byte string. - if (reserved(cc)) { - %_OneByteSeqStringSetChar(one_byte, index++, 37); // '%'. - %_OneByteSeqStringSetChar(one_byte, index++, uri.charCodeAt(k+1)); - %_OneByteSeqStringSetChar(one_byte, index++, uri.charCodeAt(k+2)); + + // ECMA-262, section 15.1.3 + function Decode(uri, reserved) { + var uriLength = uri.length; + var one_byte = %NewString(uriLength, NEW_ONE_BYTE_STRING); + var index = 0; + var k = 0; + + // Optimistically assume ascii string. + for ( ; k < uriLength; k++) { + var code = uri.charCodeAt(k); + if (code == 37) { // '%' + if (k + 2 >= uriLength) throw new $URIError("URI malformed"); + var cc = URIHexCharsToCharCode(uri.charCodeAt(k+1), uri.charCodeAt(k+2)); + if (cc >> 7) break; // Assumption wrong, two byte string. + if (reserved(cc)) { + %_OneByteSeqStringSetChar(one_byte, index++, 37); // '%'. + %_OneByteSeqStringSetChar(one_byte, index++, uri.charCodeAt(k+1)); + %_OneByteSeqStringSetChar(one_byte, index++, uri.charCodeAt(k+2)); + } else { + %_OneByteSeqStringSetChar(one_byte, index++, cc); + } + k += 2; } else { - %_OneByteSeqStringSetChar(one_byte, index++, cc); + if (code > 0x7f) break; // Assumption wrong, two byte string. + %_OneByteSeqStringSetChar(one_byte, index++, code); } - k += 2; - } else { - if (code > 0x7f) break; // Assumption wrong, two byte string. - %_OneByteSeqStringSetChar(one_byte, index++, code); } - } - one_byte = %TruncateString(one_byte, index); - if (k == uriLength) return one_byte; - - // Write into two byte string. 
- var two_byte = %NewString(uriLength - k, NEW_TWO_BYTE_STRING); - index = 0; - - for ( ; k < uriLength; k++) { - var code = uri.charCodeAt(k); - if (code == 37) { // '%' - if (k + 2 >= uriLength) throw new $URIError("URI malformed"); - var cc = URIHexCharsToCharCode(uri.charCodeAt(++k), uri.charCodeAt(++k)); - if (cc >> 7) { - var n = 0; - while (((cc << ++n) & 0x80) != 0) { } - if (n == 1 || n > 4) throw new $URIError("URI malformed"); - var octets = new $Array(n); - octets[0] = cc; - if (k + 3 * (n - 1) >= uriLength) throw new $URIError("URI malformed"); - for (var i = 1; i < n; i++) { - if (uri.charAt(++k) != '%') throw new $URIError("URI malformed"); - octets[i] = URIHexCharsToCharCode(uri.charCodeAt(++k), - uri.charCodeAt(++k)); + one_byte = %TruncateString(one_byte, index); + if (k == uriLength) return one_byte; + + // Write into two byte string. + var two_byte = %NewString(uriLength - k, NEW_TWO_BYTE_STRING); + index = 0; + + for ( ; k < uriLength; k++) { + var code = uri.charCodeAt(k); + if (code == 37) { // '%' + if (k + 2 >= uriLength) throw new $URIError("URI malformed"); + var cc = URIHexCharsToCharCode(uri.charCodeAt(++k), uri.charCodeAt(++k)); + if (cc >> 7) { + var n = 0; + while (((cc << ++n) & 0x80) != 0) { } + if (n == 1 || n > 4) throw new $URIError("URI malformed"); + var octets = new $Array(n); + octets[0] = cc; + if (k + 3 * (n - 1) >= uriLength) throw new $URIError("URI malformed"); + for (var i = 1; i < n; i++) { + if (uri.charAt(++k) != '%') throw new $URIError("URI malformed"); + octets[i] = URIHexCharsToCharCode(uri.charCodeAt(++k), + uri.charCodeAt(++k)); + } + index = URIDecodeOctets(octets, two_byte, index); + } else if (reserved(cc)) { + %_TwoByteSeqStringSetChar(two_byte, index++, 37); // '%'. + %_TwoByteSeqStringSetChar(two_byte, index++, uri.charCodeAt(k - 1)); + %_TwoByteSeqStringSetChar(two_byte, index++, uri.charCodeAt(k)); + } else { + %_TwoByteSeqStringSetChar(two_byte, index++, cc); } - index = URIDecodeOctets(octets, two_byte, index); - } else if (reserved(cc)) { - %_TwoByteSeqStringSetChar(two_byte, index++, 37); // '%'. - %_TwoByteSeqStringSetChar(two_byte, index++, uri.charCodeAt(k - 1)); - %_TwoByteSeqStringSetChar(two_byte, index++, uri.charCodeAt(k)); } else { - %_TwoByteSeqStringSetChar(two_byte, index++, cc); + %_TwoByteSeqStringSetChar(two_byte, index++, code); } - } else { - %_TwoByteSeqStringSetChar(two_byte, index++, code); } - } - - two_byte = %TruncateString(two_byte, index); - return one_byte + two_byte; -} - - -// ECMA-262 - 15.1.3.1. -function URIDecode(uri) { - var reservedPredicate = function(cc) { - // #$ - if (35 <= cc && cc <= 36) return true; - // & - if (cc == 38) return true; - // +, - if (43 <= cc && cc <= 44) return true; - // / - if (cc == 47) return true; - // :; - if (58 <= cc && cc <= 59) return true; - // = - if (cc == 61) return true; - // ?@ - if (63 <= cc && cc <= 64) return true; - - return false; - }; - var string = ToString(uri); - return Decode(string, reservedPredicate); -} - - -// ECMA-262 - 15.1.3.2. -function URIDecodeComponent(component) { - var reservedPredicate = function(cc) { return false; }; - var string = ToString(component); - return Decode(string, reservedPredicate); -} - - -// Does the char code correspond to an alpha-numeric char. -function isAlphaNumeric(cc) { - // a - z - if (97 <= cc && cc <= 122) return true; - // A - Z - if (65 <= cc && cc <= 90) return true; - // 0 - 9 - if (48 <= cc && cc <= 57) return true; - - return false; -} - - -// ECMA-262 - 15.1.3.3. 
-function URIEncode(uri) { - var unescapePredicate = function(cc) { - if (isAlphaNumeric(cc)) return true; - // ! - if (cc == 33) return true; - // #$ - if (35 <= cc && cc <= 36) return true; - // &'()*+,-./ - if (38 <= cc && cc <= 47) return true; - // :; - if (58 <= cc && cc <= 59) return true; - // = - if (cc == 61) return true; - // ?@ - if (63 <= cc && cc <= 64) return true; - // _ - if (cc == 95) return true; - // ~ - if (cc == 126) return true; - return false; - }; - - var string = ToString(uri); - return Encode(string, unescapePredicate); -} - - -// ECMA-262 - 15.1.3.4 -function URIEncodeComponent(component) { - var unescapePredicate = function(cc) { - if (isAlphaNumeric(cc)) return true; - // ! - if (cc == 33) return true; - // '()* - if (39 <= cc && cc <= 42) return true; - // -. - if (45 <= cc && cc <= 46) return true; - // _ - if (cc == 95) return true; - // ~ - if (cc == 126) return true; - - return false; - }; - - var string = ToString(component); - return Encode(string, unescapePredicate); -} + two_byte = %TruncateString(two_byte, index); + return one_byte + two_byte; + } + // ------------------------------------------------------------------- + // Define exported functions. -function HexValueOf(code) { - // 0-9 - if (code >= 48 && code <= 57) return code - 48; - // A-F - if (code >= 65 && code <= 70) return code - 55; - // a-f - if (code >= 97 && code <= 102) return code - 87; + // ECMA-262 - B.2.1. + function URIEscapeJS(str) { + var s = ToString(str); + return %URIEscape(s); + } - return -1; -} + // ECMA-262 - B.2.2. + function URIUnescapeJS(str) { + var s = ToString(str); + return %URIUnescape(s); + } + // ECMA-262 - 15.1.3.1. + function URIDecode(uri) { + var reservedPredicate = function(cc) { + // #$ + if (35 <= cc && cc <= 36) return true; + // & + if (cc == 38) return true; + // +, + if (43 <= cc && cc <= 44) return true; + // / + if (cc == 47) return true; + // :; + if (58 <= cc && cc <= 59) return true; + // = + if (cc == 61) return true; + // ?@ + if (63 <= cc && cc <= 64) return true; -// Convert a character code to 4-digit hex string representation -// 64 -> 0040, 62234 -> F31A. -function CharCodeToHex4Str(cc) { - var r = ""; - if (hexCharArray === 0) { - hexCharArray = ["0", "1", "2", "3", "4", "5", "6", "7", "8", "9", - "A", "B", "C", "D", "E", "F"]; - } - for (var i = 0; i < 4; ++i) { - var c = hexCharArray[cc & 0x0F]; - r = c + r; - cc = cc >>> 4; - } - return r; -} - - -// Returns true if all digits in string s are valid hex numbers -function IsValidHex(s) { - for (var i = 0; i < s.length; ++i) { - var cc = s.charCodeAt(i); - if ((48 <= cc && cc <= 57) || - (65 <= cc && cc <= 70) || - (97 <= cc && cc <= 102)) { - // '0'..'9', 'A'..'F' and 'a' .. 'f'. - } else { return false; - } + }; + var string = ToString(uri); + return Decode(string, reservedPredicate); } - return true; -} + // ECMA-262 - 15.1.3.2. + function URIDecodeComponent(component) { + var reservedPredicate = function(cc) { return false; }; + var string = ToString(component); + return Decode(string, reservedPredicate); + } -// ECMA-262 - B.2.1. -function URIEscape(str) { - var s = ToString(str); - return %URIEscape(s); -} + // ECMA-262 - 15.1.3.3. + function URIEncode(uri) { + var unescapePredicate = function(cc) { + if (isAlphaNumeric(cc)) return true; + // ! 
+ if (cc == 33) return true; + // #$ + if (35 <= cc && cc <= 36) return true; + // &'()*+,-./ + if (38 <= cc && cc <= 47) return true; + // :; + if (58 <= cc && cc <= 59) return true; + // = + if (cc == 61) return true; + // ?@ + if (63 <= cc && cc <= 64) return true; + // _ + if (cc == 95) return true; + // ~ + if (cc == 126) return true; + return false; + }; + var string = ToString(uri); + return Encode(string, unescapePredicate); + } -// ECMA-262 - B.2.2. -function URIUnescape(str) { - var s = ToString(str); - return %URIUnescape(s); -} + // ECMA-262 - 15.1.3.4 + function URIEncodeComponent(component) { + var unescapePredicate = function(cc) { + if (isAlphaNumeric(cc)) return true; + // ! + if (cc == 33) return true; + // '()* + if (39 <= cc && cc <= 42) return true; + // -. + if (45 <= cc && cc <= 46) return true; + // _ + if (cc == 95) return true; + // ~ + if (cc == 126) return true; + return false; + }; + var string = ToString(component); + return Encode(string, unescapePredicate); + } -// ------------------------------------------------------------------- + // ------------------------------------------------------------------- + // Install exported functions. -function SetUpUri() { %CheckIsBootstrapping(); // Set up non-enumerable URI functions on the global object and set // their names. InstallFunctions(global, DONT_ENUM, $Array( - "escape", URIEscape, - "unescape", URIUnescape, - "decodeURI", URIDecode, - "decodeURIComponent", URIDecodeComponent, - "encodeURI", URIEncode, - "encodeURIComponent", URIEncodeComponent + "escape", URIEscapeJS, + "unescape", URIUnescapeJS, + "decodeURI", URIDecode, + "decodeURIComponent", URIDecodeComponent, + "encodeURI", URIEncode, + "encodeURIComponent", URIEncodeComponent )); -} -SetUpUri(); +})(); diff --git a/deps/v8/src/utils-inl.h b/deps/v8/src/utils-inl.h index 8c89f65f775..d0c0e3cb2a9 100644 --- a/deps/v8/src/utils-inl.h +++ b/deps/v8/src/utils-inl.h @@ -5,7 +5,7 @@ #ifndef V8_UTILS_INL_H_ #define V8_UTILS_INL_H_ -#include "list-inl.h" +#include "src/list-inl.h" namespace v8 { namespace internal { diff --git a/deps/v8/src/utils.cc b/deps/v8/src/utils.cc index 7af30f27b7b..165855a3ab5 100644 --- a/deps/v8/src/utils.cc +++ b/deps/v8/src/utils.cc @@ -5,11 +5,11 @@ #include <stdarg.h> #include <sys/stat.h> -#include "v8.h" +#include "src/v8.h" -#include "checks.h" -#include "platform.h" -#include "utils.h" +#include "src/base/logging.h" +#include "src/base/platform/platform.h" +#include "src/utils.h" namespace v8 { namespace internal { @@ -27,9 +27,9 @@ void SimpleStringBuilder::AddString(const char* s) { void SimpleStringBuilder::AddSubstring(const char* s, int n) { - ASSERT(!is_finalized() && position_ + n <= buffer_.length()); - ASSERT(static_cast<size_t>(n) <= strlen(s)); - OS::MemCopy(&buffer_[position_], s, n * kCharSize); + DCHECK(!is_finalized() && position_ + n <= buffer_.length()); + DCHECK(static_cast<size_t>(n) <= strlen(s)); + MemCopy(&buffer_[position_], s, n * kCharSize); position_ += n; } @@ -60,7 +60,7 @@ void SimpleStringBuilder::AddDecimalInteger(int32_t value) { char* SimpleStringBuilder::Finalize() { - ASSERT(!is_finalized() && position_ <= buffer_.length()); + DCHECK(!is_finalized() && position_ <= buffer_.length()); // If there is no space for null termination, overwrite last character. if (position_ == buffer_.length()) { position_--; @@ -70,9 +70,9 @@ char* SimpleStringBuilder::Finalize() { buffer_[position_] = '\0'; // Make sure nobody managed to add a 0-character to the // buffer while building the string. 
- ASSERT(strlen(buffer_.start()) == static_cast<size_t>(position_)); + DCHECK(strlen(buffer_.start()) == static_cast<size_t>(position_)); position_ = -1; - ASSERT(is_finalized()); + DCHECK(is_finalized()); return buffer_.start(); } @@ -80,7 +80,7 @@ char* SimpleStringBuilder::Finalize() { void PrintF(const char* format, ...) { va_list arguments; va_start(arguments, format); - OS::VPrint(format, arguments); + base::OS::VPrint(format, arguments); va_end(arguments); } @@ -88,20 +88,39 @@ void PrintF(const char* format, ...) { void PrintF(FILE* out, const char* format, ...) { va_list arguments; va_start(arguments, format); - OS::VFPrint(out, format, arguments); + base::OS::VFPrint(out, format, arguments); va_end(arguments); } void PrintPID(const char* format, ...) { - OS::Print("[%d] ", OS::GetCurrentProcessId()); + base::OS::Print("[%d] ", base::OS::GetCurrentProcessId()); va_list arguments; va_start(arguments, format); - OS::VPrint(format, arguments); + base::OS::VPrint(format, arguments); va_end(arguments); } +int SNPrintF(Vector<char> str, const char* format, ...) { + va_list args; + va_start(args, format); + int result = VSNPrintF(str, format, args); + va_end(args); + return result; +} + + +int VSNPrintF(Vector<char> str, const char* format, va_list args) { + return base::OS::VSNPrintF(str.start(), str.length(), format, args); +} + + +void StrNCpy(Vector<char> dest, const char* src, size_t n) { + base::OS::StrNCpy(dest.start(), dest.length(), src, n); +} + + void Flush(FILE* out) { fflush(out); } @@ -145,15 +164,15 @@ char* ReadLine(const char* prompt) { char* new_result = NewArray<char>(new_len); // Copy the existing input into the new array and set the new // array as the result. - OS::MemCopy(new_result, result, offset * kCharSize); + MemCopy(new_result, result, offset * kCharSize); DeleteArray(result); result = new_result; } // Copy the newly read line into the result. 
- OS::MemCopy(result + offset, line_buf, len * kCharSize); + MemCopy(result + offset, line_buf, len * kCharSize); offset += len; } - ASSERT(result != NULL); + DCHECK(result != NULL); result[offset] = '\0'; return result; } @@ -166,7 +185,7 @@ char* ReadCharsFromFile(FILE* file, const char* filename) { if (file == NULL || fseek(file, 0, SEEK_END) != 0) { if (verbose) { - OS::PrintError("Cannot read from file %s.\n", filename); + base::OS::PrintError("Cannot read from file %s.\n", filename); } return NULL; } @@ -193,7 +212,7 @@ char* ReadCharsFromFile(const char* filename, int* size, int extra_space, bool verbose) { - FILE* file = OS::FOpen(filename, "rb"); + FILE* file = base::OS::FOpen(filename, "rb"); char* result = ReadCharsFromFile(file, size, extra_space, verbose, filename); if (file != NULL) fclose(file); return result; @@ -255,10 +274,10 @@ int AppendChars(const char* filename, const char* str, int size, bool verbose) { - FILE* f = OS::FOpen(filename, "ab"); + FILE* f = base::OS::FOpen(filename, "ab"); if (f == NULL) { if (verbose) { - OS::PrintError("Cannot open file %s for writing.\n", filename); + base::OS::PrintError("Cannot open file %s for writing.\n", filename); } return 0; } @@ -272,10 +291,10 @@ int WriteChars(const char* filename, const char* str, int size, bool verbose) { - FILE* f = OS::FOpen(filename, "wb"); + FILE* f = base::OS::FOpen(filename, "wb"); if (f == NULL) { if (verbose) { - OS::PrintError("Cannot open file %s for writing.\n", filename); + base::OS::PrintError("Cannot open file %s for writing.\n", filename); } return 0; } @@ -304,8 +323,8 @@ void StringBuilder::AddFormatted(const char* format, ...) { void StringBuilder::AddFormattedList(const char* format, va_list list) { - ASSERT(!is_finalized() && position_ <= buffer_.length()); - int n = OS::VSNPrintF(buffer_ + position_, format, list); + DCHECK(!is_finalized() && position_ <= buffer_.length()); + int n = VSNPrintF(buffer_ + position_, format, list); if (n < 0 || n >= (buffer_.length() - position_)) { position_ = buffer_.length(); } else { @@ -314,4 +333,85 @@ void StringBuilder::AddFormattedList(const char* format, va_list list) { } +#if V8_TARGET_ARCH_IA32 || V8_TARGET_ARCH_X87 +static void MemMoveWrapper(void* dest, const void* src, size_t size) { + memmove(dest, src, size); +} + + +// Initialize to library version so we can call this at any time during startup. +static MemMoveFunction memmove_function = &MemMoveWrapper; + +// Defined in codegen-ia32.cc. +MemMoveFunction CreateMemMoveFunction(); + +// Copy memory area to disjoint memory area. +void MemMove(void* dest, const void* src, size_t size) { + if (size == 0) return; + // Note: here we rely on dependent reads being ordered. This is true + // on all architectures we currently support. + (*memmove_function)(dest, src, size); +} + +#elif V8_OS_POSIX && V8_HOST_ARCH_ARM +void MemCopyUint16Uint8Wrapper(uint16_t* dest, const uint8_t* src, + size_t chars) { + uint16_t* limit = dest + chars; + while (dest < limit) { + *dest++ = static_cast<uint16_t>(*src++); + } +} + + +MemCopyUint8Function memcopy_uint8_function = &MemCopyUint8Wrapper; +MemCopyUint16Uint8Function memcopy_uint16_uint8_function = + &MemCopyUint16Uint8Wrapper; +// Defined in codegen-arm.cc. 
+MemCopyUint8Function CreateMemCopyUint8Function(MemCopyUint8Function stub); +MemCopyUint16Uint8Function CreateMemCopyUint16Uint8Function( + MemCopyUint16Uint8Function stub); + +#elif V8_OS_POSIX && V8_HOST_ARCH_MIPS +MemCopyUint8Function memcopy_uint8_function = &MemCopyUint8Wrapper; +// Defined in codegen-mips.cc. +MemCopyUint8Function CreateMemCopyUint8Function(MemCopyUint8Function stub); +#endif + + +void init_memcopy_functions() { +#if V8_TARGET_ARCH_IA32 || V8_TARGET_ARCH_X87 + MemMoveFunction generated_memmove = CreateMemMoveFunction(); + if (generated_memmove != NULL) { + memmove_function = generated_memmove; + } +#elif V8_OS_POSIX && V8_HOST_ARCH_ARM + memcopy_uint8_function = CreateMemCopyUint8Function(&MemCopyUint8Wrapper); + memcopy_uint16_uint8_function = + CreateMemCopyUint16Uint8Function(&MemCopyUint16Uint8Wrapper); +#elif V8_OS_POSIX && V8_HOST_ARCH_MIPS + memcopy_uint8_function = CreateMemCopyUint8Function(&MemCopyUint8Wrapper); +#endif +} + + +bool DoubleToBoolean(double d) { + // NaN, +0, and -0 should return the false object +#if __BYTE_ORDER == __LITTLE_ENDIAN + union IeeeDoubleLittleEndianArchType u; +#elif __BYTE_ORDER == __BIG_ENDIAN + union IeeeDoubleBigEndianArchType u; +#endif + u.d = d; + if (u.bits.exp == 2047) { + // Detect NaN for IEEE double precision floating point. + if ((u.bits.man_low | u.bits.man_high) != 0) return false; + } + if (u.bits.exp == 0) { + // Detect +0, and -0 for IEEE double precision floating point. + if ((u.bits.man_low | u.bits.man_high) == 0) return false; + } + return true; +} + + } } // namespace v8::internal diff --git a/deps/v8/src/utils.h b/deps/v8/src/utils.h index 115f7847584..c5012bbc255 100644 --- a/deps/v8/src/utils.h +++ b/deps/v8/src/utils.h @@ -8,12 +8,16 @@ #include <limits.h> #include <stdlib.h> #include <string.h> +#include <cmath> -#include "allocation.h" -#include "checks.h" -#include "globals.h" -#include "platform.h" -#include "vector.h" +#include "include/v8.h" +#include "src/allocation.h" +#include "src/base/logging.h" +#include "src/base/macros.h" +#include "src/base/platform/platform.h" +#include "src/globals.h" +#include "src/list.h" +#include "src/vector.h" namespace v8 { namespace internal { @@ -21,19 +25,9 @@ namespace internal { // ---------------------------------------------------------------------------- // General helper functions -#define IS_POWER_OF_TWO(x) ((x) != 0 && (((x) & ((x) - 1)) == 0)) - -// Returns true iff x is a power of 2. Cannot be used with the maximally -// negative value of the type T (the -1 overflows). -template <typename T> -inline bool IsPowerOf2(T x) { - return IS_POWER_OF_TWO(x); -} - - // X must be a power of 2. Returns the number of trailing zeros. inline int WhichPowerOf2(uint32_t x) { - ASSERT(IsPowerOf2(x)); + DCHECK(IsPowerOf2(x)); int bits = 0; #ifdef DEBUG int original_x = x; @@ -57,7 +51,7 @@ inline int WhichPowerOf2(uint32_t x) { case 2: bits++; // Fall through. case 1: break; } - ASSERT_EQ(1 << bits, original_x); + DCHECK_EQ(1 << bits, original_x); return bits; return 0; } @@ -90,50 +84,6 @@ inline int ArithmeticShiftRight(int x, int s) { } -// Compute the 0-relative offset of some absolute value x of type T. -// This allows conversion of Addresses and integral types into -// 0-relative int offsets. -template <typename T> -inline intptr_t OffsetFrom(T x) { - return x - static_cast<T>(0); -} - - -// Compute the absolute value of type T for some 0-relative offset x. -// This allows conversion of 0-relative int offsets into Addresses and -// integral types. 
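DoubleToBoolean, added to utils.cc above, implements ECMAScript ToBoolean for doubles by inspecting the IEEE-754 bit pattern directly: an exponent of 2047 with a nonzero mantissa is a NaN, an all-zero exponent and mantissa is +0 or -0, and both map to false while every other value maps to true. A portable sketch of the same truth table, using <cmath> instead of the endian-specific unions (the function name is hypothetical):

#include <cmath>

// NaN, +0 and -0 are the only doubles that convert to false.
bool DoubleToBooleanPortable(double d) {
  // d != 0.0 is false for both +0 and -0; std::isnan covers quiet and
  // signaling NaNs, matching the exponent/mantissa test above.
  return !std::isnan(d) && d != 0.0;
}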
-template <typename T> -inline T AddressFrom(intptr_t x) { - return static_cast<T>(static_cast<T>(0) + x); -} - - -// Return the largest multiple of m which is <= x. -template <typename T> -inline T RoundDown(T x, intptr_t m) { - ASSERT(IsPowerOf2(m)); - return AddressFrom<T>(OffsetFrom(x) & -m); -} - - -// Return the smallest multiple of m which is >= x. -template <typename T> -inline T RoundUp(T x, intptr_t m) { - return RoundDown<T>(static_cast<T>(x + m - 1), m); -} - - -// Increment a pointer until it has the specified alignment. -// This works like RoundUp, but it works correctly on pointer types where -// sizeof(*pointer) might not be 1. -template<class T> -T AlignUp(T pointer, size_t alignment) { - ASSERT(sizeof(pointer) == sizeof(uintptr_t)); - uintptr_t pointer_raw = reinterpret_cast<uintptr_t>(pointer); - return reinterpret_cast<T>(RoundUp(pointer_raw, alignment)); -} - - template <typename T> int Compare(const T& a, const T& b) { if (a == b) @@ -161,35 +111,6 @@ int HandleObjectPointerCompare(const Handle<T>* a, const Handle<T>* b) { } -// Returns the smallest power of two which is >= x. If you pass in a -// number that is already a power of two, it is returned as is. -// Implementation is from "Hacker's Delight" by Henry S. Warren, Jr., -// figure 3-3, page 48, where the function is called clp2. -inline uint32_t RoundUpToPowerOf2(uint32_t x) { - ASSERT(x <= 0x80000000u); - x = x - 1; - x = x | (x >> 1); - x = x | (x >> 2); - x = x | (x >> 4); - x = x | (x >> 8); - x = x | (x >> 16); - return x + 1; -} - - -inline uint32_t RoundDownToPowerOf2(uint32_t x) { - uint32_t rounded_up = RoundUpToPowerOf2(x); - if (rounded_up > x) return rounded_up >> 1; - return rounded_up; -} - - -template <typename T, typename U> -inline bool IsAligned(T value, U alignment) { - return (value & (alignment - 1)) == 0; -} - - // Returns true if (addr + offset) is aligned. inline bool IsAddressAligned(Address addr, intptr_t alignment, @@ -220,10 +141,12 @@ T Abs(T a) { } -// Returns the negative absolute value of its argument. -template <typename T> -T NegAbs(T a) { - return a < 0 ? a : -a; +// Floor(-0.0) == 0.0 +inline double Floor(double x) { +#ifdef _MSC_VER + if (x == 0) return x; // Fix for issue 3477. +#endif + return std::floor(x); } @@ -233,6 +156,25 @@ inline int32_t WhichPowerOf2Abs(int32_t x) { } +// Obtains the unsigned type corresponding to T +// available in C++11 as std::make_unsigned +template<typename T> +struct make_unsigned { + typedef T type; +}; + + +// Template specializations necessary to have make_unsigned work +template<> struct make_unsigned<int32_t> { + typedef uint32_t type; +}; + + +template<> struct make_unsigned<int64_t> { + typedef uint64_t type; +}; + + // ---------------------------------------------------------------------------- // BitField is a help template for encoding and decode bitfield with // unsigned content. @@ -247,6 +189,7 @@ class BitFieldBase { static const U kMask = ((kOne << shift) << size) - (kOne << shift); static const U kShift = shift; static const U kSize = size; + static const U kNext = kShift + kSize; // Value for the field with all bits set. static const T kMax = static_cast<T>((1U << size) - 1); @@ -258,7 +201,7 @@ class BitFieldBase { // Returns a type U with the bit field value encoded. 
static U encode(T value) { - ASSERT(is_valid(value)); + DCHECK(is_valid(value)); return static_cast<U>(value) << shift; } @@ -321,6 +264,85 @@ inline uint32_t ComputePointerHash(void* ptr) { } +// ---------------------------------------------------------------------------- +// Generated memcpy/memmove + +// Initializes the codegen support that depends on CPU features. This is +// called after CPU initialization. +void init_memcopy_functions(); + +#if defined(V8_TARGET_ARCH_IA32) || defined(V8_TARGET_ARCH_X87) +// Limit below which the extra overhead of the MemCopy function is likely +// to outweigh the benefits of faster copying. +const int kMinComplexMemCopy = 64; + +// Copy memory area. No restrictions. +void MemMove(void* dest, const void* src, size_t size); +typedef void (*MemMoveFunction)(void* dest, const void* src, size_t size); + +// Keep the distinction of "move" vs. "copy" for the benefit of other +// architectures. +V8_INLINE void MemCopy(void* dest, const void* src, size_t size) { + MemMove(dest, src, size); +} +#elif defined(V8_HOST_ARCH_ARM) +typedef void (*MemCopyUint8Function)(uint8_t* dest, const uint8_t* src, + size_t size); +extern MemCopyUint8Function memcopy_uint8_function; +V8_INLINE void MemCopyUint8Wrapper(uint8_t* dest, const uint8_t* src, + size_t chars) { + memcpy(dest, src, chars); +} +// For values < 16, the assembler function is slower than the inlined C code. +const int kMinComplexMemCopy = 16; +V8_INLINE void MemCopy(void* dest, const void* src, size_t size) { + (*memcopy_uint8_function)(reinterpret_cast<uint8_t*>(dest), + reinterpret_cast<const uint8_t*>(src), size); +} +V8_INLINE void MemMove(void* dest, const void* src, size_t size) { + memmove(dest, src, size); +} + +typedef void (*MemCopyUint16Uint8Function)(uint16_t* dest, const uint8_t* src, + size_t size); +extern MemCopyUint16Uint8Function memcopy_uint16_uint8_function; +void MemCopyUint16Uint8Wrapper(uint16_t* dest, const uint8_t* src, + size_t chars); +// For values < 12, the assembler function is slower than the inlined C code. +const int kMinComplexConvertMemCopy = 12; +V8_INLINE void MemCopyUint16Uint8(uint16_t* dest, const uint8_t* src, + size_t size) { + (*memcopy_uint16_uint8_function)(dest, src, size); +} +#elif defined(V8_HOST_ARCH_MIPS) +typedef void (*MemCopyUint8Function)(uint8_t* dest, const uint8_t* src, + size_t size); +extern MemCopyUint8Function memcopy_uint8_function; +V8_INLINE void MemCopyUint8Wrapper(uint8_t* dest, const uint8_t* src, + size_t chars) { + memcpy(dest, src, chars); +} +// For values < 16, the assembler function is slower than the inlined C code. +const int kMinComplexMemCopy = 16; +V8_INLINE void MemCopy(void* dest, const void* src, size_t size) { + (*memcopy_uint8_function)(reinterpret_cast<uint8_t*>(dest), + reinterpret_cast<const uint8_t*>(src), size); +} +V8_INLINE void MemMove(void* dest, const void* src, size_t size) { + memmove(dest, src, size); +} +#else +// Copy memory area to disjoint memory area. 
+V8_INLINE void MemCopy(void* dest, const void* src, size_t size) { + memcpy(dest, src, size); +} +V8_INLINE void MemMove(void* dest, const void* src, size_t size) { + memmove(dest, src, size); +} +const int kMinComplexMemCopy = 16 * kPointerSize; +#endif // V8_TARGET_ARCH_IA32 + + // ---------------------------------------------------------------------------- // Miscellaneous @@ -346,7 +368,7 @@ class Access { explicit Access(StaticResource<T>* resource) : resource_(resource) , instance_(&resource->instance_) { - ASSERT(!resource->is_reserved_); + DCHECK(!resource->is_reserved_); resource->is_reserved_ = true; } @@ -374,12 +396,12 @@ class SetOncePointer { bool is_set() const { return pointer_ != NULL; } T* get() const { - ASSERT(pointer_ != NULL); + DCHECK(pointer_ != NULL); return pointer_; } void set(T* value) { - ASSERT(pointer_ == NULL && value != NULL); + DCHECK(pointer_ == NULL && value != NULL); pointer_ = value; } @@ -402,14 +424,14 @@ class EmbeddedVector : public Vector<T> { // When copying, make underlying Vector to reference our buffer. EmbeddedVector(const EmbeddedVector& rhs) : Vector<T>(rhs) { - OS::MemCopy(buffer_, rhs.buffer_, sizeof(T) * kSize); - set_start(buffer_); + MemCopy(buffer_, rhs.buffer_, sizeof(T) * kSize); + this->set_start(buffer_); } EmbeddedVector& operator=(const EmbeddedVector& rhs) { if (this == &rhs) return *this; Vector<T>::operator=(rhs); - OS::MemCopy(buffer_, rhs.buffer_, sizeof(T) * kSize); + MemCopy(buffer_, rhs.buffer_, sizeof(T) * kSize); this->set_start(buffer_); return *this; } @@ -459,7 +481,7 @@ class Collector { // A basic Collector will keep this vector valid as long as the Collector // is alive. inline Vector<T> AddBlock(int size, T initial_value) { - ASSERT(size > 0); + DCHECK(size > 0); if (size > current_chunk_.length() - index_) { Grow(size); } @@ -493,7 +515,7 @@ class Collector { // Write the contents of the collector into the provided vector. void WriteTo(Vector<T> destination) { - ASSERT(size_ <= destination.length()); + DCHECK(size_ <= destination.length()); int position = 0; for (int i = 0; i < chunks_.length(); i++) { Vector<T> chunk = chunks_.at(i); @@ -533,7 +555,7 @@ class Collector { // Creates a new current chunk, and stores the old chunk in the chunks_ list. void Grow(int min_capacity) { - ASSERT(growth_factor > 1); + DCHECK(growth_factor > 1); int new_capacity; int current_length = current_chunk_.length(); if (current_length < kMinCapacity) { @@ -551,7 +573,7 @@ class Collector { } } NewChunk(new_capacity); - ASSERT(index_ + min_capacity <= current_chunk_.length()); + DCHECK(index_ + min_capacity <= current_chunk_.length()); } // Before replacing the current chunk, give a subclass the option to move @@ -590,12 +612,12 @@ class SequenceCollector : public Collector<T, growth_factor, max_growth> { virtual ~SequenceCollector() {} void StartSequence() { - ASSERT(sequence_start_ == kNoSequence); + DCHECK(sequence_start_ == kNoSequence); sequence_start_ = this->index_; } Vector<T> EndSequence() { - ASSERT(sequence_start_ != kNoSequence); + DCHECK(sequence_start_ != kNoSequence); int sequence_start = sequence_start_; sequence_start_ = kNoSequence; if (sequence_start == this->index_) return Vector<T>(); @@ -604,7 +626,7 @@ class SequenceCollector : public Collector<T, growth_factor, max_growth> { // Drops the currently added sequence, and all collected elements in it. 
void DropSequence() { - ASSERT(sequence_start_ != kNoSequence); + DCHECK(sequence_start_ != kNoSequence); int sequence_length = this->index_ - sequence_start_; this->index_ = sequence_start_; this->size_ -= sequence_length; @@ -629,7 +651,7 @@ class SequenceCollector : public Collector<T, growth_factor, max_growth> { } int sequence_length = this->index_ - sequence_start_; Vector<T> new_chunk = Vector<T>::New(sequence_length + new_capacity); - ASSERT(sequence_length < new_chunk.length()); + DCHECK(sequence_length < new_chunk.length()); for (int i = 0; i < sequence_length; i++) { new_chunk[i] = this->current_chunk_[sequence_start_ + i]; } @@ -676,8 +698,8 @@ inline int CompareCharsUnsigned(const lchar* lhs, template<typename lchar, typename rchar> inline int CompareChars(const lchar* lhs, const rchar* rhs, int chars) { - ASSERT(sizeof(lchar) <= 2); - ASSERT(sizeof(rchar) <= 2); + DCHECK(sizeof(lchar) <= 2); + DCHECK(sizeof(rchar) <= 2); if (sizeof(lchar) == 1) { if (sizeof(rchar) == 1) { return CompareCharsUnsigned(reinterpret_cast<const uint8_t*>(lhs), @@ -704,8 +726,8 @@ inline int CompareChars(const lchar* lhs, const rchar* rhs, int chars) { // Calculate 10^exponent. inline int TenToThe(int exponent) { - ASSERT(exponent <= 9); - ASSERT(exponent >= 1); + DCHECK(exponent <= 9); + DCHECK(exponent >= 1); int answer = 10; for (int i = 1; i < exponent; i++) answer *= 10; return answer; @@ -775,11 +797,11 @@ class EmbeddedContainer { int length() const { return NumElements; } const ElementType& operator[](int i) const { - ASSERT(i < length()); + DCHECK(i < length()); return elems_[i]; } ElementType& operator[](int i) { - ASSERT(i < length()); + DCHECK(i < length()); return elems_[i]; } @@ -825,7 +847,7 @@ class SimpleStringBuilder { // Get the current position in the builder. int position() const { - ASSERT(!is_finalized()); + DCHECK(!is_finalized()); return position_; } @@ -836,8 +858,8 @@ class SimpleStringBuilder { // 0-characters; use the Finalize() method to terminate the string // instead. void AddCharacter(char c) { - ASSERT(c != '\0'); - ASSERT(!is_finalized() && position_ < buffer_.length()); + DCHECK(c != '\0'); + DCHECK(!is_finalized() && position_ < buffer_.length()); buffer_[position_++] = c; } @@ -896,9 +918,9 @@ class EnumSet { private: T Mask(E element) const { - // The strange typing in ASSERT is necessary to avoid stupid warnings, see: + // The strange typing in DCHECK is necessary to avoid stupid warnings, see: // http://gcc.gnu.org/bugzilla/show_bug.cgi?id=43680 - ASSERT(static_cast<int>(element) < static_cast<int>(sizeof(T) * CHAR_BIT)); + DCHECK(static_cast<int>(element) < static_cast<int>(sizeof(T) * CHAR_BIT)); return static_cast<T>(1) << element; } @@ -925,19 +947,19 @@ inline int signed_bitextract_64(int msb, int lsb, int x) { // Check number width. inline bool is_intn(int64_t x, unsigned n) { - ASSERT((0 < n) && (n < 64)); + DCHECK((0 < n) && (n < 64)); int64_t limit = static_cast<int64_t>(1) << (n - 1); return (-limit <= x) && (x < limit); } inline bool is_uintn(int64_t x, unsigned n) { - ASSERT((0 < n) && (n < (sizeof(x) * kBitsPerByte))); + DCHECK((0 < n) && (n < (sizeof(x) * kBitsPerByte))); return !(x >> n); } template <class T> inline T truncate_to_intn(T x, unsigned n) { - ASSERT((0 < n) && (n < (sizeof(x) * kBitsPerByte))); + DCHECK((0 < n) && (n < (sizeof(x) * kBitsPerByte))); return (x & ((static_cast<T>(1) << n) - 1)); } @@ -1064,6 +1086,13 @@ void FPRINTF_CHECKING PrintF(FILE* out, const char* format, ...); // Prepends the current process ID to the output. 
void PRINTF_CHECKING PrintPID(const char* format, ...); +// Safe formatting print. Ensures that str is always null-terminated. +// Returns the number of chars written, or -1 if output was truncated. +int FPRINTF_CHECKING SNPrintF(Vector<char> str, const char* format, ...); +int VSNPrintF(Vector<char> str, const char* format, va_list args); + +void StrNCpy(Vector<char> dest, const char* src, size_t n); + // Our version of fflush. void Flush(FILE* out); @@ -1136,11 +1165,11 @@ inline void CopyWords(T* dst, const T* src, size_t num_words) { // TODO(mvstanton): disabled because mac builds are bogus failing on this // assert. They are doing a signed comparison. Investigate in // the morning. - // ASSERT(Min(dst, const_cast<T*>(src)) + num_words <= + // DCHECK(Min(dst, const_cast<T*>(src)) + num_words <= // Max(dst, const_cast<T*>(src))); - ASSERT(num_words > 0); + DCHECK(num_words > 0); - // Use block copying OS::MemCopy if the segment we're copying is + // Use block copying MemCopy if the segment we're copying is // enough to justify the extra call/setup overhead. static const size_t kBlockCopyLimit = 16; @@ -1150,7 +1179,7 @@ inline void CopyWords(T* dst, const T* src, size_t num_words) { *dst++ = *src++; } while (num_words > 0); } else { - OS::MemCopy(dst, src, num_words * kPointerSize); + MemCopy(dst, src, num_words * kPointerSize); } } @@ -1159,9 +1188,9 @@ inline void CopyWords(T* dst, const T* src, size_t num_words) { template <typename T> inline void MoveWords(T* dst, const T* src, size_t num_words) { STATIC_ASSERT(sizeof(T) == kPointerSize); - ASSERT(num_words > 0); + DCHECK(num_words > 0); - // Use block copying OS::MemCopy if the segment we're copying is + // Use block copying MemCopy if the segment we're copying is // enough to justify the extra call/setup overhead. static const size_t kBlockCopyLimit = 16; @@ -1173,7 +1202,7 @@ inline void MoveWords(T* dst, const T* src, size_t num_words) { *dst++ = *src++; } while (num_words > 0); } else { - OS::MemMove(dst, src, num_words * kPointerSize); + MemMove(dst, src, num_words * kPointerSize); } } @@ -1182,13 +1211,13 @@ inline void MoveWords(T* dst, const T* src, size_t num_words) { template <typename T> inline void CopyBytes(T* dst, const T* src, size_t num_bytes) { STATIC_ASSERT(sizeof(T) == 1); - ASSERT(Min(dst, const_cast<T*>(src)) + num_bytes <= + DCHECK(Min(dst, const_cast<T*>(src)) + num_bytes <= Max(dst, const_cast<T*>(src))); if (num_bytes == 0) return; - // Use block copying OS::MemCopy if the segment we're copying is + // Use block copying MemCopy if the segment we're copying is // enough to justify the extra call/setup overhead. - static const int kBlockCopyLimit = OS::kMinComplexMemCopy; + static const int kBlockCopyLimit = kMinComplexMemCopy; if (num_bytes < static_cast<size_t>(kBlockCopyLimit)) { do { @@ -1196,7 +1225,7 @@ inline void CopyBytes(T* dst, const T* src, size_t num_bytes) { *dst++ = *src++; } while (num_bytes > 0); } else { - OS::MemCopy(dst, src, num_bytes); + MemCopy(dst, src, num_bytes); } } @@ -1212,8 +1241,12 @@ inline void MemsetPointer(T** dest, U* value, int counter) { #if V8_HOST_ARCH_IA32 #define STOS "stosl" #elif V8_HOST_ARCH_X64 +#if V8_HOST_ARCH_32_BIT +#define STOS "addr32 stosl" +#else #define STOS "stosq" #endif +#endif #if defined(__native_client__) // This STOS sequence does not validate for x86_64 Native Client. // Here we #undef STOS to force use of the slower C version. 
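The CopyWords, MoveWords and CopyBytes templates in this region all follow the same pattern: below a threshold (kBlockCopyLimit = 16 for word copies, kMinComplexMemCopy for bytes) they copy with a plain assignment loop, since the call and setup overhead of the block-copy routine would dominate; above it they defer to MemCopy/MemMove. A minimal sketch of the pattern (the names and the byte-oriented signature are illustrative, not V8's):

#include <cstddef>
#include <cstring>

const size_t kSmallCopyLimit = 16;  // Tuning point, as in the hunks above.

inline void CopySmallOrLarge(char* dst, const char* src, size_t n) {
  if (n < kSmallCopyLimit) {
    while (n-- > 0) *dst++ = *src++;  // Short copy: no call/setup cost.
  } else {
    memcpy(dst, src, n);  // Long copy: amortizes the bulk routine's overhead.
  }
}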
@@ -1289,8 +1322,8 @@ INLINE(void CopyChars(sinkchar* dest, const sourcechar* src, int chars)); template<typename sourcechar, typename sinkchar> void CopyChars(sinkchar* dest, const sourcechar* src, int chars) { - ASSERT(sizeof(sourcechar) <= 2); - ASSERT(sizeof(sinkchar) <= 2); + DCHECK(sizeof(sourcechar) <= 2); + DCHECK(sizeof(sinkchar) <= 2); if (sizeof(sinkchar) == 1) { if (sizeof(sourcechar) == 1) { CopyCharsUnsigned(reinterpret_cast<uint8_t*>(dest), @@ -1319,13 +1352,13 @@ void CopyCharsUnsigned(sinkchar* dest, const sourcechar* src, int chars) { sinkchar* limit = dest + chars; #ifdef V8_HOST_CAN_READ_UNALIGNED if (sizeof(*dest) == sizeof(*src)) { - if (chars >= static_cast<int>(OS::kMinComplexMemCopy / sizeof(*dest))) { - OS::MemCopy(dest, src, chars * sizeof(*dest)); + if (chars >= static_cast<int>(kMinComplexMemCopy / sizeof(*dest))) { + MemCopy(dest, src, chars * sizeof(*dest)); return; } // Number of characters in a uintptr_t. static const int kStepSize = sizeof(uintptr_t) / sizeof(*dest); // NOLINT - ASSERT(dest + kStepSize > dest); // Check for overflow. + DCHECK(dest + kStepSize > dest); // Check for overflow. while (dest + kStepSize <= limit) { *reinterpret_cast<uintptr_t*>(dest) = *reinterpret_cast<const uintptr_t*>(src); @@ -1391,17 +1424,17 @@ void CopyCharsUnsigned(uint8_t* dest, const uint8_t* src, int chars) { memcpy(dest, src, 15); break; default: - OS::MemCopy(dest, src, chars); + MemCopy(dest, src, chars); break; } } void CopyCharsUnsigned(uint16_t* dest, const uint8_t* src, int chars) { - if (chars >= OS::kMinComplexConvertMemCopy) { - OS::MemCopyUint16Uint8(dest, src, chars); + if (chars >= kMinComplexConvertMemCopy) { + MemCopyUint16Uint8(dest, src, chars); } else { - OS::MemCopyUint16Uint8Wrapper(dest, src, chars); + MemCopyUint16Uint8Wrapper(dest, src, chars); } } @@ -1432,7 +1465,7 @@ void CopyCharsUnsigned(uint16_t* dest, const uint16_t* src, int chars) { memcpy(dest, src, 14); break; default: - OS::MemCopy(dest, src, chars * sizeof(*dest)); + MemCopy(dest, src, chars * sizeof(*dest)); break; } } @@ -1440,18 +1473,18 @@ void CopyCharsUnsigned(uint16_t* dest, const uint16_t* src, int chars) { #elif defined(V8_HOST_ARCH_MIPS) void CopyCharsUnsigned(uint8_t* dest, const uint8_t* src, int chars) { - if (chars < OS::kMinComplexMemCopy) { + if (chars < kMinComplexMemCopy) { memcpy(dest, src, chars); } else { - OS::MemCopy(dest, src, chars); + MemCopy(dest, src, chars); } } void CopyCharsUnsigned(uint16_t* dest, const uint16_t* src, int chars) { - if (chars < OS::kMinComplexMemCopy) { + if (chars < kMinComplexMemCopy) { memcpy(dest, src, chars * sizeof(*dest)); } else { - OS::MemCopy(dest, src, chars * sizeof(*dest)); + MemCopy(dest, src, chars * sizeof(*dest)); } } #endif @@ -1472,6 +1505,36 @@ class StringBuilder : public SimpleStringBuilder { }; +bool DoubleToBoolean(double d); + +template <typename Stream> +bool StringToArrayIndex(Stream* stream, uint32_t* index) { + uint16_t ch = stream->GetNext(); + + // If the string begins with a '0' character, it must only consist + // of it to be a legal array index. + if (ch == '0') { + *index = 0; + return !stream->HasMore(); + } + + // Convert string to uint32 array index; character by character. + int d = ch - '0'; + if (d < 0 || d > 9) return false; + uint32_t result = d; + while (stream->HasMore()) { + d = stream->GetNext() - '0'; + if (d < 0 || d > 9) return false; + // Check that the new result is below the 32 bit limit. + if (result > 429496729U - ((d > 5) ? 
1 : 0)) return false; + result = (result * 10) + d; + } + + *index = result; + return true; +} + + } } // namespace v8::internal #endif // V8_UTILS_H_ diff --git a/deps/v8/src/v8.cc b/deps/v8/src/v8.cc index f8156ecbd71..13a576b1c94 100644 --- a/deps/v8/src/v8.cc +++ b/deps/v8/src/v8.cc @@ -2,28 +2,27 @@ // Use of this source code is governed by a BSD-style license that can be // found in the LICENSE file. -#include "v8.h" - -#include "assembler.h" -#include "isolate.h" -#include "elements.h" -#include "bootstrapper.h" -#include "debug.h" -#include "deoptimizer.h" -#include "frames.h" -#include "heap-profiler.h" -#include "hydrogen.h" -#ifdef V8_USE_DEFAULT_PLATFORM -#include "libplatform/default-platform.h" -#endif -#include "lithium-allocator.h" -#include "objects.h" -#include "once.h" -#include "platform.h" -#include "sampler.h" -#include "runtime-profiler.h" -#include "serialize.h" -#include "store-buffer.h" +#include "src/v8.h" + +#include "src/assembler.h" +#include "src/base/once.h" +#include "src/base/platform/platform.h" +#include "src/bootstrapper.h" +#include "src/compiler/pipeline.h" +#include "src/debug.h" +#include "src/deoptimizer.h" +#include "src/elements.h" +#include "src/frames.h" +#include "src/heap/store-buffer.h" +#include "src/heap-profiler.h" +#include "src/hydrogen.h" +#include "src/isolate.h" +#include "src/lithium-allocator.h" +#include "src/objects.h" +#include "src/runtime-profiler.h" +#include "src/sampler.h" +#include "src/serialize.h" + namespace v8 { namespace internal { @@ -41,43 +40,20 @@ bool V8::Initialize(Deserializer* des) { if (isolate->IsDead()) return false; if (isolate->IsInitialized()) return true; -#ifdef V8_USE_DEFAULT_PLATFORM - DefaultPlatform* platform = static_cast<DefaultPlatform*>(platform_); - platform->SetThreadPoolSize(isolate->max_available_threads()); - // We currently only start the threads early, if we know that we'll use them. - if (FLAG_job_based_sweeping) platform->EnsureInitialized(); -#endif - return isolate->Init(des); } void V8::TearDown() { - Isolate* isolate = Isolate::Current(); - ASSERT(isolate->IsDefaultIsolate()); - if (!isolate->IsInitialized()) return; - - // The isolate has to be torn down before clearing the LOperand - // caches so that the optimizing compiler thread (if running) - // doesn't see an inconsistent view of the lithium instructions. - isolate->TearDown(); - delete isolate; - Bootstrapper::TearDownExtensions(); ElementsAccessor::TearDown(); LOperand::TearDownCaches(); + compiler::Pipeline::TearDown(); ExternalReference::TearDownMathExpData(); RegisteredExtension::UnregisterAll(); Isolate::GlobalTearDown(); Sampler::TearDown(); - Serializer::TearDown(); - -#ifdef V8_USE_DEFAULT_PLATFORM - DefaultPlatform* platform = static_cast<DefaultPlatform*>(platform_); - platform_ = NULL; - delete platform; -#endif } @@ -89,7 +65,6 @@ void V8::SetReturnAddressLocationResolver( void V8::InitializeOncePerProcessImpl() { FlagList::EnforceFlagImplications(); - Serializer::InitializeOncePerProcess(); if (FLAG_predictable && FLAG_random_seed == 0) { // Avoid random seeds in predictable mode. 
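The StringToArrayIndex helper added to utils.h above guards against 32-bit overflow with the condition result > 429496729U - ((d > 5) ? 1 : 0). Since kMaxUInt32 is 4294967295 and 4294967295 / 10 is 429496729 remainder 5, result * 10 + d fits in 32 bits exactly when result <= 429496729 for digits d <= 5, and result <= 429496728 for d > 5. A widened check showing the equivalence (function name hypothetical):

#include <cstdint>

// True iff appending decimal digit d to `result` would exceed kMaxUInt32;
// equivalent to the narrower branch in StringToArrayIndex above.
bool WouldOverflowUint32(uint32_t result, int d) {
  return (static_cast<uint64_t>(result) * 10 + d) > 0xFFFFFFFFull;
}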
@@ -99,17 +74,14 @@ void V8::InitializeOncePerProcessImpl() { if (FLAG_stress_compaction) { FLAG_force_marking_deque_overflows = true; FLAG_gc_global = true; - FLAG_max_new_space_size = 2 * Page::kPageSize; + FLAG_max_semi_space_size = 1; } -#ifdef V8_USE_DEFAULT_PLATFORM - platform_ = new DefaultPlatform; -#endif + base::OS::Initialize(FLAG_random_seed, FLAG_hard_abort, FLAG_gc_fake_mmap); + Sampler::SetUp(); - // TODO(svenpanne) Clean this up when Serializer is a real object. - bool serializer_enabled = Serializer::enabled(NULL); - CpuFeatures::Probe(serializer_enabled); - OS::PostSetUp(); + CpuFeatures::Probe(false); + init_memcopy_functions(); // The custom exp implementation needs 16KB of lookup data; initialize it // on demand. init_fast_sqrt_function(); @@ -118,6 +90,7 @@ void V8::InitializeOncePerProcessImpl() { #endif ElementsAccessor::InitializeOncePerProcess(); LOperand::SetUpCaches(); + compiler::Pipeline::SetUp(); SetUpJSCallerSavedCodeData(); ExternalReference::SetUp(); Bootstrapper::InitializeOncePerProcess(); @@ -125,25 +98,25 @@ void V8::InitializeOncePerProcessImpl() { void V8::InitializeOncePerProcess() { - CallOnce(&init_once, &InitializeOncePerProcessImpl); + base::CallOnce(&init_once, &InitializeOncePerProcessImpl); } void V8::InitializePlatform(v8::Platform* platform) { - ASSERT(!platform_); - ASSERT(platform); + CHECK(!platform_); + CHECK(platform); platform_ = platform; } void V8::ShutdownPlatform() { - ASSERT(platform_); + CHECK(platform_); platform_ = NULL; } v8::Platform* V8::GetCurrentPlatform() { - ASSERT(platform_); + DCHECK(platform_); return platform_; } diff --git a/deps/v8/src/v8.h b/deps/v8/src/v8.h index f019e68b7f0..8ae75fb8547 100644 --- a/deps/v8/src/v8.h +++ b/deps/v8/src/v8.h @@ -26,25 +26,25 @@ #endif // Basic includes -#include "../include/v8.h" -#include "../include/v8-platform.h" -#include "v8globals.h" -#include "v8checks.h" -#include "allocation.h" -#include "assert-scope.h" -#include "utils.h" -#include "flags.h" +#include "include/v8.h" +#include "include/v8-platform.h" +#include "src/checks.h" // NOLINT +#include "src/allocation.h" // NOLINT +#include "src/assert-scope.h" // NOLINT +#include "src/utils.h" // NOLINT +#include "src/flags.h" // NOLINT +#include "src/globals.h" // NOLINT // Objects & heap -#include "objects-inl.h" -#include "spaces-inl.h" -#include "heap-inl.h" -#include "incremental-marking-inl.h" -#include "mark-compact-inl.h" -#include "log-inl.h" -#include "handles-inl.h" -#include "types-inl.h" -#include "zone-inl.h" +#include "src/objects-inl.h" // NOLINT +#include "src/heap/spaces-inl.h" // NOLINT +#include "src/heap/heap-inl.h" // NOLINT +#include "src/heap/incremental-marking-inl.h" // NOLINT +#include "src/heap/mark-compact-inl.h" // NOLINT +#include "src/log-inl.h" // NOLINT +#include "src/handles-inl.h" // NOLINT +#include "src/types-inl.h" // NOLINT +#include "src/zone-inl.h" // NOLINT namespace v8 { namespace internal { diff --git a/deps/v8/src/v8checks.h b/deps/v8/src/v8checks.h deleted file mode 100644 index 79a30886822..00000000000 --- a/deps/v8/src/v8checks.h +++ /dev/null @@ -1,39 +0,0 @@ -// Copyright 2006-2008 the V8 project authors. All rights reserved. -// Use of this source code is governed by a BSD-style license that can be -// found in the LICENSE file. 
- -#ifndef V8_V8CHECKS_H_ -#define V8_V8CHECKS_H_ - -#include "checks.h" - -namespace v8 { - class Value; - template <class T> class Handle; - -namespace internal { - intptr_t HeapObjectTagMask(); - -} } // namespace v8::internal - - -void CheckNonEqualsHelper(const char* file, - int line, - const char* unexpected_source, - v8::Handle<v8::Value> unexpected, - const char* value_source, - v8::Handle<v8::Value> value); - -void CheckEqualsHelper(const char* file, - int line, - const char* expected_source, - v8::Handle<v8::Value> expected, - const char* value_source, - v8::Handle<v8::Value> value); - -#define ASSERT_TAG_ALIGNED(address) \ - ASSERT((reinterpret_cast<intptr_t>(address) & HeapObjectTagMask()) == 0) - -#define ASSERT_SIZE_TAG_ALIGNED(size) ASSERT((size & HeapObjectTagMask()) == 0) - -#endif // V8_V8CHECKS_H_ diff --git a/deps/v8/src/v8dll-main.cc b/deps/v8/src/v8dll-main.cc index dd085690261..6250b3e341d 100644 --- a/deps/v8/src/v8dll-main.cc +++ b/deps/v8/src/v8dll-main.cc @@ -5,10 +5,10 @@ // The GYP based build ends up defining USING_V8_SHARED when compiling this // file. #undef USING_V8_SHARED -#include "../include/v8.h" +#include "include/v8.h" #if V8_OS_WIN -#include "win32-headers.h" +#include "src/base/win32-headers.h" extern "C" { BOOL WINAPI DllMain(HANDLE hinstDLL, diff --git a/deps/v8/src/v8globals.h b/deps/v8/src/v8globals.h deleted file mode 100644 index b9ca952e75d..00000000000 --- a/deps/v8/src/v8globals.h +++ /dev/null @@ -1,554 +0,0 @@ -// Copyright 2012 the V8 project authors. All rights reserved. -// Use of this source code is governed by a BSD-style license that can be -// found in the LICENSE file. - -#ifndef V8_V8GLOBALS_H_ -#define V8_V8GLOBALS_H_ - -#include "globals.h" -#include "checks.h" - -namespace v8 { -namespace internal { - -// This file contains constants and global declarations related to the -// V8 system. - -// Mask for the sign bit in a smi. -const intptr_t kSmiSignMask = kIntptrSignBit; - -const int kObjectAlignmentBits = kPointerSizeLog2; -const intptr_t kObjectAlignment = 1 << kObjectAlignmentBits; -const intptr_t kObjectAlignmentMask = kObjectAlignment - 1; - -// Desired alignment for pointers. -const intptr_t kPointerAlignment = (1 << kPointerSizeLog2); -const intptr_t kPointerAlignmentMask = kPointerAlignment - 1; - -// Desired alignment for double values. -const intptr_t kDoubleAlignment = 8; -const intptr_t kDoubleAlignmentMask = kDoubleAlignment - 1; - -// Desired alignment for generated code is 32 bytes (to improve cache line -// utilization). -const int kCodeAlignmentBits = 5; -const intptr_t kCodeAlignment = 1 << kCodeAlignmentBits; -const intptr_t kCodeAlignmentMask = kCodeAlignment - 1; - -// Tag information for Failure. -// TODO(yangguo): remove this from space owner calculation. -const int kFailureTag = 3; -const int kFailureTagSize = 2; -const intptr_t kFailureTagMask = (1 << kFailureTagSize) - 1; - - -// Zap-value: The value used for zapping dead objects. -// Should be a recognizable hex value tagged as a failure. 
-#ifdef V8_HOST_ARCH_64_BIT -const Address kZapValue = - reinterpret_cast<Address>(V8_UINT64_C(0xdeadbeedbeadbeef)); -const Address kHandleZapValue = - reinterpret_cast<Address>(V8_UINT64_C(0x1baddead0baddeaf)); -const Address kGlobalHandleZapValue = - reinterpret_cast<Address>(V8_UINT64_C(0x1baffed00baffedf)); -const Address kFromSpaceZapValue = - reinterpret_cast<Address>(V8_UINT64_C(0x1beefdad0beefdaf)); -const uint64_t kDebugZapValue = V8_UINT64_C(0xbadbaddbbadbaddb); -const uint64_t kSlotsZapValue = V8_UINT64_C(0xbeefdeadbeefdeef); -const uint64_t kFreeListZapValue = 0xfeed1eaffeed1eaf; -#else -const Address kZapValue = reinterpret_cast<Address>(0xdeadbeef); -const Address kHandleZapValue = reinterpret_cast<Address>(0xbaddeaf); -const Address kGlobalHandleZapValue = reinterpret_cast<Address>(0xbaffedf); -const Address kFromSpaceZapValue = reinterpret_cast<Address>(0xbeefdaf); -const uint32_t kSlotsZapValue = 0xbeefdeef; -const uint32_t kDebugZapValue = 0xbadbaddb; -const uint32_t kFreeListZapValue = 0xfeed1eaf; -#endif - -const int kCodeZapValue = 0xbadc0de; - -// Number of bits to represent the page size for paged spaces. The value of 20 -// gives 1Mb bytes per page. -const int kPageSizeBits = 20; - -// On Intel architecture, cache line size is 64 bytes. -// On ARM it may be less (32 bytes), but as far this constant is -// used for aligning data, it doesn't hurt to align on a greater value. -#define PROCESSOR_CACHE_LINE_SIZE 64 - -// Constants relevant to double precision floating point numbers. -// If looking only at the top 32 bits, the QNaN mask is bits 19 to 30. -const uint32_t kQuietNaNHighBitsMask = 0xfff << (51 - 32); - - -// ----------------------------------------------------------------------------- -// Forward declarations for frequently used classes - -class AccessorInfo; -class Allocation; -class Arguments; -class Assembler; -class Code; -class CodeGenerator; -class CodeStub; -class Context; -class Debug; -class Debugger; -class DebugInfo; -class Descriptor; -class DescriptorArray; -class TransitionArray; -class ExternalReference; -class FixedArray; -class FunctionTemplateInfo; -class MemoryChunk; -class SeededNumberDictionary; -class UnseededNumberDictionary; -class NameDictionary; -template <typename T> class MaybeHandle; -template <typename T> class Handle; -class Heap; -class HeapObject; -class IC; -class InterceptorInfo; -class Isolate; -class JSReceiver; -class JSArray; -class JSFunction; -class JSObject; -class LargeObjectSpace; -class LookupResult; -class MacroAssembler; -class Map; -class MapSpace; -class MarkCompactCollector; -class NewSpace; -class Object; -class OldSpace; -class Foreign; -class Scope; -class ScopeInfo; -class Script; -class Smi; -template <typename Config, class Allocator = FreeStoreAllocationPolicy> - class SplayTree; -class String; -class Name; -class Struct; -class Variable; -class RelocInfo; -class Deserializer; -class MessageLocation; -class VirtualMemory; -class Mutex; -class RecursiveMutex; - -typedef bool (*WeakSlotCallback)(Object** pointer); - -typedef bool (*WeakSlotCallbackWithHeap)(Heap* heap, Object** pointer); - -// ----------------------------------------------------------------------------- -// Miscellaneous - -// NOTE: SpaceIterator depends on AllocationSpace enumeration values being -// consecutive. -enum AllocationSpace { - NEW_SPACE, // Semispaces collected with copying collector. - OLD_POINTER_SPACE, // May contain pointers to new space. - OLD_DATA_SPACE, // Must not have pointers to new space. 
- CODE_SPACE, // No pointers to new space, marked executable. - MAP_SPACE, // Only and all map objects. - CELL_SPACE, // Only and all cell objects. - PROPERTY_CELL_SPACE, // Only and all global property cell objects. - LO_SPACE, // Promoted large objects. - INVALID_SPACE, // Only used in AllocationResult to signal success. - - FIRST_SPACE = NEW_SPACE, - LAST_SPACE = LO_SPACE, - FIRST_PAGED_SPACE = OLD_POINTER_SPACE, - LAST_PAGED_SPACE = PROPERTY_CELL_SPACE -}; -const int kSpaceTagSize = 3; -const int kSpaceTagMask = (1 << kSpaceTagSize) - 1; - - -// A flag that indicates whether objects should be pretenured when -// allocated (allocated directly into the old generation) or not -// (allocated in the young generation if the object size and type -// allows). -enum PretenureFlag { NOT_TENURED, TENURED }; - -enum MinimumCapacity { - USE_DEFAULT_MINIMUM_CAPACITY, - USE_CUSTOM_MINIMUM_CAPACITY -}; - -enum GarbageCollector { SCAVENGER, MARK_COMPACTOR }; - -enum Executability { NOT_EXECUTABLE, EXECUTABLE }; - -enum VisitMode { - VISIT_ALL, - VISIT_ALL_IN_SCAVENGE, - VISIT_ALL_IN_SWEEP_NEWSPACE, - VISIT_ONLY_STRONG -}; - -// Flag indicating whether code is built into the VM (one of the natives files). -enum NativesFlag { NOT_NATIVES_CODE, NATIVES_CODE }; - - -// A CodeDesc describes a buffer holding instructions and relocation -// information. The instructions start at the beginning of the buffer -// and grow forward, the relocation information starts at the end of -// the buffer and grows backward. -// -// |<--------------- buffer_size ---------------->| -// |<-- instr_size -->| |<-- reloc_size -->| -// +==================+========+==================+ -// | instructions | free | reloc info | -// +==================+========+==================+ -// ^ -// | -// buffer - -struct CodeDesc { - byte* buffer; - int buffer_size; - int instr_size; - int reloc_size; - Assembler* origin; -}; - - -// Callback function used for iterating objects in heap spaces, -// for example, scanning heap objects. -typedef int (*HeapObjectCallback)(HeapObject* obj); - - -// Callback function used for checking constraints when copying/relocating -// objects. Returns true if an object can be copied/relocated from its -// old_addr to a new_addr. -typedef bool (*ConstraintCallback)(Address new_addr, Address old_addr); - - -// Callback function on inline caches, used for iterating over inline caches -// in compiled code. -typedef void (*InlineCacheCallback)(Code* code, Address ic); - - -// State for inline cache call sites. Aliased as IC::State. -enum InlineCacheState { - // Has never been executed. - UNINITIALIZED, - // Has been executed but monomorhic state has been delayed. - PREMONOMORPHIC, - // Has been executed and only one receiver type has been seen. - MONOMORPHIC, - // Like MONOMORPHIC but check failed due to prototype. - MONOMORPHIC_PROTOTYPE_FAILURE, - // Multiple receiver types have been seen. - POLYMORPHIC, - // Many receiver types have been seen. - MEGAMORPHIC, - // A generic handler is installed and no extra typefeedback is recorded. - GENERIC, - // Special state for debug break or step in prepare stubs. - DEBUG_STUB -}; - - -enum CallFunctionFlags { - NO_CALL_FUNCTION_FLAGS, - CALL_AS_METHOD, - // Always wrap the receiver and call to the JSFunction. Only use this flag - // both the receiver type and the target method are statically known. - WRAP_AND_CALL -}; - - -enum CallConstructorFlags { - NO_CALL_CONSTRUCTOR_FLAGS, - // The call target is cached in the instruction stream. 
- RECORD_CONSTRUCTOR_TARGET -}; - - -enum InlineCacheHolderFlag { - OWN_MAP, // For fast properties objects. - PROTOTYPE_MAP // For slow properties objects (except GlobalObjects). -}; - - -// The Store Buffer (GC). -typedef enum { - kStoreBufferFullEvent, - kStoreBufferStartScanningPagesEvent, - kStoreBufferScanningPageEvent -} StoreBufferEvent; - - -typedef void (*StoreBufferCallback)(Heap* heap, - MemoryChunk* page, - StoreBufferEvent event); - - -// Union used for fast testing of specific double values. -union DoubleRepresentation { - double value; - int64_t bits; - DoubleRepresentation(double x) { value = x; } - bool operator==(const DoubleRepresentation& other) const { - return bits == other.bits; - } -}; - - -// Union used for customized checking of the IEEE double types -// inlined within v8 runtime, rather than going to the underlying -// platform headers and libraries -union IeeeDoubleLittleEndianArchType { - double d; - struct { - unsigned int man_low :32; - unsigned int man_high :20; - unsigned int exp :11; - unsigned int sign :1; - } bits; -}; - - -union IeeeDoubleBigEndianArchType { - double d; - struct { - unsigned int sign :1; - unsigned int exp :11; - unsigned int man_high :20; - unsigned int man_low :32; - } bits; -}; - - -// AccessorCallback -struct AccessorDescriptor { - Object* (*getter)(Isolate* isolate, Object* object, void* data); - Object* (*setter)( - Isolate* isolate, JSObject* object, Object* value, void* data); - void* data; -}; - - -// Logging and profiling. A StateTag represents a possible state of -// the VM. The logger maintains a stack of these. Creating a VMState -// object enters a state by pushing on the stack, and destroying a -// VMState object leaves a state by popping the current state from the -// stack. - -enum StateTag { - JS, - GC, - COMPILER, - OTHER, - EXTERNAL, - IDLE -}; - - -// ----------------------------------------------------------------------------- -// Macros - -// Testers for test. - -#define HAS_SMI_TAG(value) \ - ((reinterpret_cast<intptr_t>(value) & kSmiTagMask) == kSmiTag) - -#define HAS_FAILURE_TAG(value) \ - ((reinterpret_cast<intptr_t>(value) & kFailureTagMask) == kFailureTag) - -// OBJECT_POINTER_ALIGN returns the value aligned as a HeapObject pointer -#define OBJECT_POINTER_ALIGN(value) \ - (((value) + kObjectAlignmentMask) & ~kObjectAlignmentMask) - -// POINTER_SIZE_ALIGN returns the value aligned as a pointer. -#define POINTER_SIZE_ALIGN(value) \ - (((value) + kPointerAlignmentMask) & ~kPointerAlignmentMask) - -// CODE_POINTER_ALIGN returns the value aligned as a generated code segment. -#define CODE_POINTER_ALIGN(value) \ - (((value) + kCodeAlignmentMask) & ~kCodeAlignmentMask) - -// Support for tracking C++ memory allocation. Insert TRACK_MEMORY("Fisk") -// inside a C++ class and new and delete will be overloaded so logging is -// performed. -// This file (globals.h) is included before log.h, so we use direct calls to -// the Logger rather than the LOG macro. -#ifdef DEBUG -#define TRACK_MEMORY(name) \ - void* operator new(size_t size) { \ - void* result = ::operator new(size); \ - Logger::NewEventStatic(name, result, size); \ - return result; \ - } \ - void operator delete(void* object) { \ - Logger::DeleteEventStatic(name, object); \ - ::operator delete(object); \ - } -#else -#define TRACK_MEMORY(name) -#endif - - -// Feature flags bit positions. They are mostly based on the CPUID spec. -// On X86/X64, values below 32 are bits in EDX, values above 32 are bits in ECX. 
-enum CpuFeature { SSE4_1 = 32 + 19, // x86 - SSE3 = 32 + 0, // x86 - SSE2 = 26, // x86 - CMOV = 15, // x86 - VFP3 = 1, // ARM - ARMv7 = 2, // ARM - SUDIV = 3, // ARM - UNALIGNED_ACCESSES = 4, // ARM - MOVW_MOVT_IMMEDIATE_LOADS = 5, // ARM - VFP32DREGS = 6, // ARM - NEON = 7, // ARM - SAHF = 0, // x86 - FPU = 1}; // MIPS - - -// Used to specify if a macro instruction must perform a smi check on tagged -// values. -enum SmiCheckType { - DONT_DO_SMI_CHECK, - DO_SMI_CHECK -}; - - -enum ScopeType { - EVAL_SCOPE, // The top-level scope for an eval source. - FUNCTION_SCOPE, // The top-level scope for a function. - MODULE_SCOPE, // The scope introduced by a module literal - GLOBAL_SCOPE, // The top-level scope for a program or a top-level eval. - CATCH_SCOPE, // The scope introduced by catch. - BLOCK_SCOPE, // The scope introduced by a new block. - WITH_SCOPE // The scope introduced by with. -}; - - -const uint32_t kHoleNanUpper32 = 0x7FFFFFFF; -const uint32_t kHoleNanLower32 = 0xFFFFFFFF; -const uint32_t kNaNOrInfinityLowerBoundUpper32 = 0x7FF00000; - -const uint64_t kHoleNanInt64 = - (static_cast<uint64_t>(kHoleNanUpper32) << 32) | kHoleNanLower32; -const uint64_t kLastNonNaNInt64 = - (static_cast<uint64_t>(kNaNOrInfinityLowerBoundUpper32) << 32); - - -// The order of this enum has to be kept in sync with the predicates below. -enum VariableMode { - // User declared variables: - VAR, // declared via 'var', and 'function' declarations - - CONST_LEGACY, // declared via legacy 'const' declarations - - LET, // declared via 'let' declarations (first lexical) - - CONST, // declared via 'const' declarations - - MODULE, // declared via 'module' declaration (last lexical) - - // Variables introduced by the compiler: - INTERNAL, // like VAR, but not user-visible (may or may not - // be in a context) - - TEMPORARY, // temporary variables (not user-visible), stack-allocated - // unless the scope as a whole has forced context allocation - - DYNAMIC, // always require dynamic lookup (we don't know - // the declaration) - - DYNAMIC_GLOBAL, // requires dynamic lookup, but we know that the - // variable is global unless it has been shadowed - // by an eval-introduced variable - - DYNAMIC_LOCAL // requires dynamic lookup, but we know that the - // variable is local and where it is unless it - // has been shadowed by an eval-introduced - // variable -}; - - -inline bool IsDynamicVariableMode(VariableMode mode) { - return mode >= DYNAMIC && mode <= DYNAMIC_LOCAL; -} - - -inline bool IsDeclaredVariableMode(VariableMode mode) { - return mode >= VAR && mode <= MODULE; -} - - -inline bool IsLexicalVariableMode(VariableMode mode) { - return mode >= LET && mode <= MODULE; -} - - -inline bool IsImmutableVariableMode(VariableMode mode) { - return (mode >= CONST && mode <= MODULE) || mode == CONST_LEGACY; -} - - -// ES6 Draft Rev3 10.2 specifies declarative environment records with mutable -// and immutable bindings that can be in two states: initialized and -// uninitialized. In ES5 only immutable bindings have these two states. When -// accessing a binding, it needs to be checked for initialization. However in -// the following cases the binding is initialized immediately after creation -// so the initialization check can always be skipped: -// 1. Var declared local variables. -// var foo; -// 2. A local variable introduced by a function declaration. -// function foo() {} -// 3. Parameters -// function x(foo) {} -// 4. Catch bound variables. -// try {} catch (foo) {} -// 6. 
Function variables of named function expressions. -// var x = function foo() {} -// 7. Implicit binding of 'this'. -// 8. Implicit binding of 'arguments' in functions. -// -// ES5 specified object environment records which are introduced by ES elements -// such as Program and WithStatement that associate identifier bindings with the -// properties of some object. In the specification only mutable bindings exist -// (which may be non-writable) and have no distinct initialization step. However -// V8 allows const declarations in global code with distinct creation and -// initialization steps which are represented by non-writable properties in the -// global object. As a result also these bindings need to be checked for -// initialization. -// -// The following enum specifies a flag that indicates if the binding needs a -// distinct initialization step (kNeedsInitialization) or if the binding is -// immediately initialized upon creation (kCreatedInitialized). -enum InitializationFlag { - kNeedsInitialization, - kCreatedInitialized -}; - - -enum ClearExceptionFlag { - KEEP_EXCEPTION, - CLEAR_EXCEPTION -}; - - -enum MinusZeroMode { - TREAT_MINUS_ZERO_AS_ZERO, - FAIL_ON_MINUS_ZERO -}; - -} } // namespace v8::internal - -namespace i = v8::internal; - -#endif // V8_V8GLOBALS_H_ diff --git a/deps/v8/src/v8natives.js b/deps/v8/src/v8natives.js index cd60cb40d81..9612f16f961 100644 --- a/deps/v8/src/v8natives.js +++ b/deps/v8/src/v8natives.js @@ -28,7 +28,7 @@ function InstallFunctions(object, attributes, functions) { var f = functions[i + 1]; %FunctionSetName(f, key); %FunctionRemovePrototype(f); - %SetProperty(object, key, f, attributes); + %AddNamedProperty(object, key, f, attributes); %SetNativeFlag(f); } %ToFastProperties(object); @@ -39,7 +39,7 @@ function InstallFunctions(object, attributes, functions) { function InstallGetter(object, name, getter) { %FunctionSetName(getter, name); %FunctionRemovePrototype(getter); - %DefineOrRedefineAccessorProperty(object, name, getter, null, DONT_ENUM); + %DefineAccessorPropertyUnchecked(object, name, getter, null, DONT_ENUM); %SetNativeFlag(getter); } @@ -50,7 +50,7 @@ function InstallGetterSetter(object, name, getter, setter) { %FunctionSetName(setter, name); %FunctionRemovePrototype(getter); %FunctionRemovePrototype(setter); - %DefineOrRedefineAccessorProperty(object, name, getter, setter, DONT_ENUM); + %DefineAccessorPropertyUnchecked(object, name, getter, setter, DONT_ENUM); %SetNativeFlag(getter); %SetNativeFlag(setter); } @@ -65,7 +65,7 @@ function InstallConstants(object, constants) { for (var i = 0; i < constants.length; i += 2) { var name = constants[i]; var k = constants[i + 1]; - %SetProperty(object, name, k, attributes); + %AddNamedProperty(object, name, k, attributes); } %ToFastProperties(object); } @@ -86,16 +86,17 @@ function SetUpLockedPrototype(constructor, fields, methods) { } if (fields) { for (var i = 0; i < fields.length; i++) { - %SetProperty(prototype, fields[i], UNDEFINED, DONT_ENUM | DONT_DELETE); + %AddNamedProperty(prototype, fields[i], + UNDEFINED, DONT_ENUM | DONT_DELETE); } } for (var i = 0; i < methods.length; i += 2) { var key = methods[i]; var f = methods[i + 1]; - %SetProperty(prototype, key, f, DONT_ENUM | DONT_DELETE | READ_ONLY); + %AddNamedProperty(prototype, key, f, DONT_ENUM | DONT_DELETE | READ_ONLY); %SetNativeFlag(f); } - %SetPrototype(prototype, null); + %InternalSetPrototype(prototype, null); %ToFastProperties(prototype); } @@ -172,12 +173,12 @@ function GlobalEval(x) { 'be the global object from which eval 
originated'); } - var global_receiver = %GlobalReceiver(global); + var global_proxy = %GlobalProxy(global); var f = %CompileString(x, false); if (!IS_FUNCTION(f)) return f; - return %_CallFunction(global_receiver, f); + return %_CallFunction(global_proxy, f); } @@ -190,13 +191,13 @@ function SetUpGlobal() { var attributes = DONT_ENUM | DONT_DELETE | READ_ONLY; // ECMA 262 - 15.1.1.1. - %SetProperty(global, "NaN", NAN, attributes); + %AddNamedProperty(global, "NaN", NAN, attributes); // ECMA-262 - 15.1.1.2. - %SetProperty(global, "Infinity", INFINITY, attributes); + %AddNamedProperty(global, "Infinity", INFINITY, attributes); // ECMA-262 - 15.1.1.3. - %SetProperty(global, "undefined", UNDEFINED, attributes); + %AddNamedProperty(global, "undefined", UNDEFINED, attributes); // Set up non-enumerable function on the global object. InstallFunctions(global, DONT_ENUM, $Array( @@ -244,7 +245,7 @@ function ObjectHasOwnProperty(V) { var handler = %GetHandler(this); return CallTrap1(handler, "hasOwn", DerivedHasOwnTrap, ToName(V)); } - return %HasLocalProperty(TO_OBJECT_INLINE(this), ToName(V)); + return %HasOwnProperty(TO_OBJECT_INLINE(this), ToName(V)); } @@ -263,7 +264,7 @@ function ObjectPropertyIsEnumerable(V) { // TODO(rossberg): adjust once there is a story for symbols vs proxies. if (IS_SYMBOL(V)) return false; - var desc = GetOwnProperty(this, P); + var desc = GetOwnPropertyJS(this, P); return IS_UNDEFINED(desc) ? false : desc.isEnumerable(); } return %IsPropertyEnumerable(ToObject(this), P); @@ -274,7 +275,7 @@ function ObjectPropertyIsEnumerable(V) { function ObjectDefineGetter(name, fun) { var receiver = this; if (receiver == null && !IS_UNDETECTABLE(receiver)) { - receiver = %GlobalReceiver(global); + receiver = %GlobalProxy(global); } if (!IS_SPEC_FUNCTION(fun)) { throw new $TypeError( @@ -291,7 +292,7 @@ function ObjectDefineGetter(name, fun) { function ObjectLookupGetter(name) { var receiver = this; if (receiver == null && !IS_UNDETECTABLE(receiver)) { - receiver = %GlobalReceiver(global); + receiver = %GlobalProxy(global); } return %LookupAccessor(ToObject(receiver), ToName(name), GETTER); } @@ -300,7 +301,7 @@ function ObjectLookupGetter(name) { function ObjectDefineSetter(name, fun) { var receiver = this; if (receiver == null && !IS_UNDETECTABLE(receiver)) { - receiver = %GlobalReceiver(global); + receiver = %GlobalProxy(global); } if (!IS_SPEC_FUNCTION(fun)) { throw new $TypeError( @@ -317,7 +318,7 @@ function ObjectDefineSetter(name, fun) { function ObjectLookupSetter(name) { var receiver = this; if (receiver == null && !IS_UNDETECTABLE(receiver)) { - receiver = %GlobalReceiver(global); + receiver = %GlobalProxy(global); } return %LookupAccessor(ToObject(receiver), ToName(name), SETTER); } @@ -332,7 +333,7 @@ function ObjectKeys(obj) { var names = CallTrap0(handler, "keys", DerivedKeysTrap); return ToNameArray(names, "keys", false); } - return %LocalKeys(obj); + return %OwnKeys(obj); } @@ -386,24 +387,22 @@ function FromGenericPropertyDescriptor(desc) { var obj = new $Object(); if (desc.hasValue()) { - %IgnoreAttributesAndSetProperty(obj, "value", desc.getValue(), NONE); + %AddNamedProperty(obj, "value", desc.getValue(), NONE); } if (desc.hasWritable()) { - %IgnoreAttributesAndSetProperty(obj, "writable", desc.isWritable(), NONE); + %AddNamedProperty(obj, "writable", desc.isWritable(), NONE); } if (desc.hasGetter()) { - %IgnoreAttributesAndSetProperty(obj, "get", desc.getGet(), NONE); + %AddNamedProperty(obj, "get", desc.getGet(), NONE); } if (desc.hasSetter()) { - 
%IgnoreAttributesAndSetProperty(obj, "set", desc.getSet(), NONE); + %AddNamedProperty(obj, "set", desc.getSet(), NONE); } if (desc.hasEnumerable()) { - %IgnoreAttributesAndSetProperty(obj, "enumerable", - desc.isEnumerable(), NONE); + %AddNamedProperty(obj, "enumerable", desc.isEnumerable(), NONE); } if (desc.hasConfigurable()) { - %IgnoreAttributesAndSetProperty(obj, "configurable", - desc.isConfigurable(), NONE); + %AddNamedProperty(obj, "configurable", desc.isConfigurable(), NONE); } return obj; } @@ -627,7 +626,7 @@ function CallTrap2(handler, name, defaultTrap, x, y) { // ES5 section 8.12.1. -function GetOwnProperty(obj, v) { +function GetOwnPropertyJS(obj, v) { var p = ToName(v); if (%IsJSProxy(obj)) { // TODO(rossberg): adjust once there is a story for symbols vs proxies. @@ -659,7 +658,7 @@ function GetOwnProperty(obj, v) { // ES5 section 8.12.7. function Delete(obj, p, should_throw) { - var desc = GetOwnProperty(obj, p); + var desc = GetOwnPropertyJS(obj, p); if (IS_UNDEFINED(desc)) return true; if (desc.isConfigurable()) { %DeleteProperty(obj, p, 0); @@ -833,7 +832,7 @@ function DefineObjectProperty(obj, p, desc, should_throw) { value = current.getValue(); } - %DefineOrRedefineDataProperty(obj, p, value, flag); + %DefineDataPropertyUnchecked(obj, p, value, flag); } else { // There are 3 cases that lead here: // Step 4b - defining a new accessor property. @@ -843,7 +842,7 @@ function DefineObjectProperty(obj, p, desc, should_throw) { // descriptor. var getter = desc.hasGetter() ? desc.getGet() : null; var setter = desc.hasSetter() ? desc.getSet() : null; - %DefineOrRedefineAccessorProperty(obj, p, getter, setter, flag); + %DefineAccessorPropertyUnchecked(obj, p, getter, setter, flag); } return true; } @@ -866,7 +865,7 @@ function DefineArrayProperty(obj, p, desc, should_throw) { if (new_length != ToNumber(desc.getValue())) { throw new $RangeError('defineProperty() array length out of range'); } - var length_desc = GetOwnProperty(obj, "length"); + var length_desc = GetOwnPropertyJS(obj, "length"); if (new_length != length && !length_desc.isWritable()) { if (should_throw) { throw MakeTypeError("redefine_disallowed", [p]); @@ -888,7 +887,7 @@ function DefineArrayProperty(obj, p, desc, should_throw) { while (new_length < length--) { var index = ToString(length); if (emit_splice) { - var deletedDesc = GetOwnProperty(obj, index); + var deletedDesc = GetOwnPropertyJS(obj, index); if (deletedDesc && deletedDesc.hasValue()) removed[length - new_length] = deletedDesc.getValue(); } @@ -901,7 +900,7 @@ function DefineArrayProperty(obj, p, desc, should_throw) { // Make sure the below call to DefineObjectProperty() doesn't overwrite // any magic "length" property by removing the value. // TODO(mstarzinger): This hack should be removed once we have addressed the - // respective TODO in Runtime_DefineOrRedefineDataProperty. + // respective TODO in Runtime_DefineDataPropertyUnchecked. // For the time being, we need a hack to prevent Object.observe from // generating two change records. obj.length = new_length; @@ -926,34 +925,36 @@ function DefineArrayProperty(obj, p, desc, should_throw) { } // Step 4 - Special handling for array index. 
- var index = ToUint32(p); - var emit_splice = false; - if (ToString(index) == p && index != 4294967295) { - var length = obj.length; - if (index >= length && %IsObserved(obj)) { - emit_splice = true; - BeginPerformSplice(obj); - } + if (!IS_SYMBOL(p)) { + var index = ToUint32(p); + var emit_splice = false; + if (ToString(index) == p && index != 4294967295) { + var length = obj.length; + if (index >= length && %IsObserved(obj)) { + emit_splice = true; + BeginPerformSplice(obj); + } - var length_desc = GetOwnProperty(obj, "length"); - if ((index >= length && !length_desc.isWritable()) || - !DefineObjectProperty(obj, p, desc, true)) { - if (emit_splice) + var length_desc = GetOwnPropertyJS(obj, "length"); + if ((index >= length && !length_desc.isWritable()) || + !DefineObjectProperty(obj, p, desc, true)) { + if (emit_splice) + EndPerformSplice(obj); + if (should_throw) { + throw MakeTypeError("define_disallowed", [p]); + } else { + return false; + } + } + if (index >= length) { + obj.length = index + 1; + } + if (emit_splice) { EndPerformSplice(obj); - if (should_throw) { - throw MakeTypeError("define_disallowed", [p]); - } else { - return false; + EnqueueSpliceRecord(obj, length, [], index + 1 - length); } + return true; } - if (index >= length) { - obj.length = index + 1; - } - if (emit_splice) { - EndPerformSplice(obj); - EnqueueSpliceRecord(obj, length, [], index + 1 - length); - } - return true; } // Step 5 - Fallback to default implementation. @@ -1007,7 +1008,7 @@ function ObjectGetOwnPropertyDescriptor(obj, p) { throw MakeTypeError("called_on_non_object", ["Object.getOwnPropertyDescriptor"]); } - var desc = GetOwnProperty(obj, p); + var desc = GetOwnPropertyJS(obj, p); return FromPropertyDescriptor(desc); } @@ -1025,7 +1026,7 @@ function ToNameArray(obj, trap, includeSymbols) { var s = ToName(obj[index]); // TODO(rossberg): adjust once there is a story for symbols vs proxies. if (IS_SYMBOL(s) && !includeSymbols) continue; - if (%HasLocalProperty(names, s)) { + if (%HasOwnProperty(names, s)) { throw MakeTypeError("proxy_repeated_prop_name", [obj, trap, s]); } array[index] = s; @@ -1045,13 +1046,13 @@ function ObjectGetOwnPropertyKeys(obj, symbolsOnly) { // Find all the indexed properties. - // Only get the local element names if we want to include string keys. + // Only get own element names if we want to include string keys. if (!symbolsOnly) { - var localElementNames = %GetLocalElementNames(obj); - for (var i = 0; i < localElementNames.length; ++i) { - localElementNames[i] = %_NumberToString(localElementNames[i]); + var ownElementNames = %GetOwnElementNames(obj); + for (var i = 0; i < ownElementNames.length; ++i) { + ownElementNames[i] = %_NumberToString(ownElementNames[i]); } - nameArrays.push(localElementNames); + nameArrays.push(ownElementNames); // Get names for indexed interceptor properties. var interceptorInfo = %GetInterceptorInfo(obj); @@ -1065,8 +1066,8 @@ function ObjectGetOwnPropertyKeys(obj, symbolsOnly) { // Find all the named properties. - // Get the local property names. - nameArrays.push(%GetLocalPropertyNames(obj, filter)); + // Get own property names. + nameArrays.push(%GetOwnPropertyNames(obj, filter)); // Get names for named interceptor properties if any. 
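ObjectGetOwnPropertyKeys above collects own element (indexed) names first and named properties second, which matches the ordering observable from script. For example (plain JS, not part of the patch):

    var o = { b: 1, 0: "x", a: 2 };
    o[1] = "y";
    Object.getOwnPropertyNames(o);
    // ["0", "1", "b", "a"]: integer indices first, in ascending order,
    // then named keys in insertion order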
if ((interceptorInfo & 2) != 0) { @@ -1126,7 +1127,8 @@ function ObjectCreate(proto, properties) { if (!IS_SPEC_OBJECT(proto) && proto !== null) { throw MakeTypeError("proto_object_or_null", [proto]); } - var obj = { __proto__: proto }; + var obj = {}; + %InternalSetPrototype(obj, proto); if (!IS_UNDEFINED(properties)) ObjectDefineProperties(obj, properties); return obj; } @@ -1156,8 +1158,8 @@ function ObjectDefineProperty(obj, p, attributes) { {value: 0, writable: 0, get: 0, set: 0, enumerable: 0, configurable: 0}; for (var i = 0; i < names.length; i++) { var N = names[i]; - if (!(%HasLocalProperty(standardNames, N))) { - var attr = GetOwnProperty(attributes, N); + if (!(%HasOwnProperty(standardNames, N))) { + var attr = GetOwnPropertyJS(attributes, N); DefineOwnProperty(descObj, N, attr, true); } } @@ -1173,13 +1175,24 @@ function ObjectDefineProperty(obj, p, attributes) { } -function GetOwnEnumerablePropertyNames(properties) { +function GetOwnEnumerablePropertyNames(object) { var names = new InternalArray(); - for (var key in properties) { - if (%HasLocalProperty(properties, key)) { + for (var key in object) { + if (%HasOwnProperty(object, key)) { names.push(key); } } + + var filter = PROPERTY_ATTRIBUTES_STRING | PROPERTY_ATTRIBUTES_PRIVATE_SYMBOL; + var symbols = %GetOwnPropertyNames(object, filter); + for (var i = 0; i < symbols.length; ++i) { + var symbol = symbols[i]; + if (IS_SYMBOL(symbol)) { + var desc = ObjectGetOwnPropertyDescriptor(object, symbol); + if (desc.enumerable) names.push(symbol); + } + } + return names; } @@ -1242,7 +1255,7 @@ function ObjectSeal(obj) { var names = ObjectGetOwnPropertyNames(obj); for (var i = 0; i < names.length; i++) { var name = names[i]; - var desc = GetOwnProperty(obj, name); + var desc = GetOwnPropertyJS(obj, name); if (desc.isConfigurable()) { desc.setConfigurable(false); DefineOwnProperty(obj, name, desc, true); @@ -1254,7 +1267,7 @@ function ObjectSeal(obj) { // ES5 section 15.2.3.9. 
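ObjectSeal above clears only [[Configurable]]; the freeze path below additionally clears [[Writable]] on data properties. The observable difference, in plain JS:

    var s = Object.seal({ x: 1 });
    s.x = 2;             // allowed: sealing keeps data properties writable
    delete s.x;          // false: properties are non-configurable
    var f = Object.freeze({ y: 1 });
    f.y = 2;             // ignored (TypeError in strict mode)
    Object.isFrozen(f);  // true: every own property non-configurable and non-writable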
-function ObjectFreeze(obj) { +function ObjectFreezeJS(obj) { if (!IS_SPEC_OBJECT(obj)) { throw MakeTypeError("called_on_non_object", ["Object.freeze"]); } @@ -1266,7 +1279,7 @@ function ObjectFreeze(obj) { var names = ObjectGetOwnPropertyNames(obj); for (var i = 0; i < names.length; i++) { var name = names[i]; - var desc = GetOwnProperty(obj, name); + var desc = GetOwnPropertyJS(obj, name); if (desc.isWritable() || desc.isConfigurable()) { if (IsDataDescriptor(desc)) desc.setWritable(false); desc.setConfigurable(false); @@ -1310,8 +1323,10 @@ function ObjectIsSealed(obj) { var names = ObjectGetOwnPropertyNames(obj); for (var i = 0; i < names.length; i++) { var name = names[i]; - var desc = GetOwnProperty(obj, name); - if (desc.isConfigurable()) return false; + var desc = GetOwnPropertyJS(obj, name); + if (desc.isConfigurable()) { + return false; + } } return true; } @@ -1331,7 +1346,7 @@ function ObjectIsFrozen(obj) { var names = ObjectGetOwnPropertyNames(obj); for (var i = 0; i < names.length; i++) { var name = names[i]; - var desc = GetOwnProperty(obj, name); + var desc = GetOwnPropertyJS(obj, name); if (IsDataDescriptor(desc) && desc.isWritable()) return false; if (desc.isConfigurable()) return false; } @@ -1396,9 +1411,8 @@ function SetUpObject() { %SetNativeFlag($Object); %SetCode($Object, ObjectConstructor); - %SetExpectedNumberOfProperties($Object, 4); - %SetProperty($Object.prototype, "constructor", $Object, DONT_ENUM); + %AddNamedProperty($Object.prototype, "constructor", $Object, DONT_ENUM); // Set up non-enumerable functions on the Object.prototype object. InstallFunctions($Object.prototype, DONT_ENUM, $Array( @@ -1422,7 +1436,7 @@ function SetUpObject() { "create", ObjectCreate, "defineProperty", ObjectDefineProperty, "defineProperties", ObjectDefineProperties, - "freeze", ObjectFreeze, + "freeze", ObjectFreezeJS, "getPrototypeOf", ObjectGetPrototypeOf, "setPrototypeOf", ObjectSetPrototypeOf, "getOwnPropertyDescriptor", ObjectGetOwnPropertyDescriptor, @@ -1485,7 +1499,7 @@ function SetUpBoolean () { %SetCode($Boolean, BooleanConstructor); %FunctionSetPrototype($Boolean, new $Boolean(false)); - %SetProperty($Boolean.prototype, "constructor", $Boolean, DONT_ENUM); + %AddNamedProperty($Boolean.prototype, "constructor", $Boolean, DONT_ENUM); InstallFunctions($Boolean.prototype, DONT_ENUM, $Array( "toString", BooleanToString, @@ -1554,7 +1568,7 @@ function NumberValueOf() { // ECMA-262 section 15.7.4.5 -function NumberToFixed(fractionDigits) { +function NumberToFixedJS(fractionDigits) { var x = this; if (!IS_NUMBER(this)) { if (!IS_NUMBER_WRAPPER(this)) { @@ -1579,7 +1593,7 @@ function NumberToFixed(fractionDigits) { // ECMA-262 section 15.7.4.6 -function NumberToExponential(fractionDigits) { +function NumberToExponentialJS(fractionDigits) { var x = this; if (!IS_NUMBER(this)) { if (!IS_NUMBER_WRAPPER(this)) { @@ -1605,7 +1619,7 @@ function NumberToExponential(fractionDigits) { // ECMA-262 section 15.7.4.7 -function NumberToPrecision(precision) { +function NumberToPrecisionJS(precision) { var x = this; if (!IS_NUMBER(this)) { if (!IS_NUMBER_WRAPPER(this)) { @@ -1668,7 +1682,7 @@ function SetUpNumber() { %OptimizeObjectForAddingMultipleProperties($Number.prototype, 8); // Set up the constructor property on the Number prototype object. - %SetProperty($Number.prototype, "constructor", $Number, DONT_ENUM); + %AddNamedProperty($Number.prototype, "constructor", $Number, DONT_ENUM); InstallConstants($Number, $Array( // ECMA-262 section 15.7.3.1. 
@@ -1694,9 +1708,9 @@ function SetUpNumber() { "toString", NumberToString, "toLocaleString", NumberToLocaleString, "valueOf", NumberValueOf, - "toFixed", NumberToFixed, - "toExponential", NumberToExponential, - "toPrecision", NumberToPrecision + "toFixed", NumberToFixedJS, + "toExponential", NumberToExponentialJS, + "toPrecision", NumberToPrecisionJS )); // Harmony Number constructor additions @@ -1736,6 +1750,10 @@ function FunctionSourceString(func) { } } + if (%FunctionIsArrow(func)) { + return source; + } + var name = %FunctionNameShouldPrintAsAnonymous(func) ? 'anonymous' : %FunctionGetName(func); @@ -1782,19 +1800,15 @@ function FunctionBind(this_arg) { // Length is 1. return %Apply(bindings[0], bindings[1], argv, 0, bound_argc + argc); }; - %FunctionRemovePrototype(boundFunction); var new_length = 0; - if (%_ClassOf(this) == "Function") { - // Function or FunctionProxy. - var old_length = this.length; - // FunctionProxies might provide a non-UInt32 value. If so, ignore it. - if ((typeof old_length === "number") && - ((old_length >>> 0) === old_length)) { - var argc = %_ArgumentsLength(); - if (argc > 0) argc--; // Don't count the thisArg as parameter. - new_length = old_length - argc; - if (new_length < 0) new_length = 0; - } + var old_length = this.length; + // FunctionProxies might provide a non-UInt32 value. If so, ignore it. + if ((typeof old_length === "number") && + ((old_length >>> 0) === old_length)) { + var argc = %_ArgumentsLength(); + if (argc > 0) argc--; // Don't count the thisArg as parameter. + new_length = old_length - argc; + if (new_length < 0) new_length = 0; } // This runtime function finds any remaining arguments on the stack, // so we don't pass the arguments object. @@ -1823,7 +1837,7 @@ function NewFunctionString(arguments, function_token) { // If the formal parameters string include ) - an illegal // character - it may make the combined function expression // compile. We avoid this problem by checking for this early on. - if (%_CallFunction(p, ')', StringIndexOf) != -1) { + if (%_CallFunction(p, ')', StringIndexOfJS) != -1) { throw MakeSyntaxError('paren_in_arg_string', []); } // If the formal parameters include an unbalanced block comment, the @@ -1838,10 +1852,12 @@ function NewFunctionString(arguments, function_token) { function FunctionConstructor(arg1) { // length == 1 var source = NewFunctionString(arguments, 'function'); - var global_receiver = %GlobalReceiver(global); + var global_proxy = %GlobalProxy(global); // Compile the string in the constructor and not a helper so that errors // appear to come from here. - var f = %_CallFunction(global_receiver, %CompileString(source, true)); + var f = %CompileString(source, true); + if (!IS_FUNCTION(f)) return f; + f = %_CallFunction(global_proxy, f); %FunctionMarkNameShouldPrintAsAnonymous(f); return f; } @@ -1853,7 +1869,7 @@ function SetUpFunction() { %CheckIsBootstrapping(); %SetCode($Function, FunctionConstructor); - %SetProperty($Function.prototype, "constructor", $Function, DONT_ENUM); + %AddNamedProperty($Function.prototype, "constructor", $Function, DONT_ENUM); InstallFunctions($Function.prototype, DONT_ENUM, $Array( "bind", FunctionBind, @@ -1864,32 +1880,31 @@ function SetUpFunction() { SetUpFunction(); -//---------------------------------------------------------------------------- - -// TODO(rossberg): very simple abstraction for generic microtask queue. -// Eventually, we should move to a real event queue that allows to maintain -// relative ordering of different kinds of tasks. 
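The RunMicrotasks/EnqueueMicrotask pair removed below drained a simple array-backed queue, and tasks enqueued while draining ran in the same pass. A minimal plain-JS sketch of that behavior (names ours):

    var queue = null;
    function enqueueMicrotask(fn) {
      if (queue === null) queue = [];
      queue.push(fn);
    }
    function runMicrotasks() {
      while (queue !== null) {
        var tasks = queue;
        queue = null;  // tasks queued by a running task drain in this same pass
        for (var i = 0; i < tasks.length; i++) tasks[i]();
      }
    }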
- -function RunMicrotasks() { - while (%SetMicrotaskPending(false)) { - var microtaskState = %GetMicrotaskState(); - if (IS_UNDEFINED(microtaskState.queue)) - return; - - var microtasks = microtaskState.queue; - microtaskState.queue = null; +// ---------------------------------------------------------------------------- +// Iterator related spec functions. - for (var i = 0; i < microtasks.length; i++) { - microtasks[i](); - } +// ES6 rev 26, 2014-07-18 +// 7.4.1 CheckIterable ( obj ) +function ToIterable(obj) { + if (!IS_SPEC_OBJECT(obj)) { + return UNDEFINED; } + return obj[symbolIterator]; } -function EnqueueMicrotask(fn) { - var microtaskState = %GetMicrotaskState(); - if (IS_UNDEFINED(microtaskState.queue) || IS_NULL(microtaskState.queue)) { - microtaskState.queue = new InternalArray; + +// ES6 rev 26, 2014-07-18 +// 7.4.2 GetIterator ( obj, method ) +function GetIterator(obj, method) { + if (IS_UNDEFINED(method)) { + method = ToIterable(obj); + } + if (!IS_SPEC_FUNCTION(method)) { + throw MakeTypeError('not_iterable', [obj]); + } + var iterator = %_CallFunction(obj, method); + if (!IS_SPEC_OBJECT(iterator)) { + throw MakeTypeError('not_an_iterator', [iterator]); } - microtaskState.queue.push(fn); - %SetMicrotaskPending(true); + return iterator; } diff --git a/deps/v8/src/v8threads.cc b/deps/v8/src/v8threads.cc index 410d0b133c1..010f50b3e4d 100644 --- a/deps/v8/src/v8threads.cc +++ b/deps/v8/src/v8threads.cc @@ -2,14 +2,14 @@ // Use of this source code is governed by a BSD-style license that can be // found in the LICENSE file. -#include "v8.h" +#include "src/v8.h" -#include "api.h" -#include "bootstrapper.h" -#include "debug.h" -#include "execution.h" -#include "v8threads.h" -#include "regexp-stack.h" +#include "src/api.h" +#include "src/bootstrapper.h" +#include "src/debug.h" +#include "src/execution.h" +#include "src/regexp-stack.h" +#include "src/v8threads.h" namespace v8 { @@ -22,7 +22,7 @@ bool Locker::active_ = false; // Once the Locker is initialized, the current thread will be guaranteed to have // the lock for a given isolate. 
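The ToIterable/GetIterator helpers added to v8natives.js above follow ES6 rev 26, sections 7.4.1-7.4.2: fetch the @@iterator method, require it to be callable, and require its result to be an object. In plain JS terms (a sketch, not the runtime code):

    function getIterator(obj, method) {
      if (method === undefined) {
        method = Object(obj) === obj ? obj[Symbol.iterator] : undefined;
      }
      if (typeof method !== "function") throw new TypeError(obj + " is not iterable");
      var iterator = method.call(obj);
      if (Object(iterator) !== iterator) throw new TypeError("not an iterator");
      return iterator;
    }
    // getIterator([10, 20]).next()  -> { value: 10, done: false }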
void Locker::Initialize(v8::Isolate* isolate) { - ASSERT(isolate != NULL); + DCHECK(isolate != NULL); has_lock_= false; top_level_ = true; isolate_ = reinterpret_cast<i::Isolate*>(isolate); @@ -52,12 +52,12 @@ void Locker::Initialize(v8::Isolate* isolate) { isolate_->stack_guard()->InitThread(access); } } - ASSERT(isolate_->thread_manager()->IsLockedByCurrentThread()); + DCHECK(isolate_->thread_manager()->IsLockedByCurrentThread()); } bool Locker::IsLocked(v8::Isolate* isolate) { - ASSERT(isolate != NULL); + DCHECK(isolate != NULL); i::Isolate* internal_isolate = reinterpret_cast<i::Isolate*>(isolate); return internal_isolate->thread_manager()->IsLockedByCurrentThread(); } @@ -69,7 +69,7 @@ bool Locker::IsActive() { Locker::~Locker() { - ASSERT(isolate_->thread_manager()->IsLockedByCurrentThread()); + DCHECK(isolate_->thread_manager()->IsLockedByCurrentThread()); if (has_lock_) { if (top_level_) { isolate_->thread_manager()->FreeThreadResources(); @@ -82,16 +82,16 @@ Locker::~Locker() { void Unlocker::Initialize(v8::Isolate* isolate) { - ASSERT(isolate != NULL); + DCHECK(isolate != NULL); isolate_ = reinterpret_cast<i::Isolate*>(isolate); - ASSERT(isolate_->thread_manager()->IsLockedByCurrentThread()); + DCHECK(isolate_->thread_manager()->IsLockedByCurrentThread()); isolate_->thread_manager()->ArchiveThread(); isolate_->thread_manager()->Unlock(); } Unlocker::~Unlocker() { - ASSERT(!isolate_->thread_manager()->IsLockedByCurrentThread()); + DCHECK(!isolate_->thread_manager()->IsLockedByCurrentThread()); isolate_->thread_manager()->Lock(); isolate_->thread_manager()->RestoreThread(); } @@ -101,7 +101,7 @@ namespace internal { bool ThreadManager::RestoreThread() { - ASSERT(IsLockedByCurrentThread()); + DCHECK(IsLockedByCurrentThread()); // First check whether the current thread has been 'lazily archived', i.e. // not archived at all. If that is the case we put the state storage we // had prepared back in the free list, since we didn't need it after all. 
@@ -109,8 +109,8 @@ bool ThreadManager::RestoreThread() { lazily_archived_thread_ = ThreadId::Invalid(); Isolate::PerIsolateThreadData* per_thread = isolate_->FindPerThreadDataForThisThread(); - ASSERT(per_thread != NULL); - ASSERT(per_thread->thread_state() == lazily_archived_thread_state_); + DCHECK(per_thread != NULL); + DCHECK(per_thread->thread_state() == lazily_archived_thread_state_); lazily_archived_thread_state_->set_id(ThreadId::Invalid()); lazily_archived_thread_state_->LinkInto(ThreadState::FREE_LIST); lazily_archived_thread_state_ = NULL; @@ -145,7 +145,7 @@ bool ThreadManager::RestoreThread() { from = isolate_->bootstrapper()->RestoreState(from); per_thread->set_thread_state(NULL); if (state->terminate_on_restore()) { - isolate_->stack_guard()->TerminateExecution(); + isolate_->stack_guard()->RequestTerminateExecution(); state->set_terminate_on_restore(false); } state->set_id(ThreadId::Invalid()); @@ -158,7 +158,7 @@ bool ThreadManager::RestoreThread() { void ThreadManager::Lock() { mutex_.Lock(); mutex_owner_ = ThreadId::Current(); - ASSERT(IsLockedByCurrentThread()); + DCHECK(IsLockedByCurrentThread()); } @@ -271,9 +271,9 @@ void ThreadManager::DeleteThreadStateList(ThreadState* anchor) { void ThreadManager::ArchiveThread() { - ASSERT(lazily_archived_thread_.Equals(ThreadId::Invalid())); - ASSERT(!IsArchived()); - ASSERT(IsLockedByCurrentThread()); + DCHECK(lazily_archived_thread_.Equals(ThreadId::Invalid())); + DCHECK(!IsArchived()); + DCHECK(IsLockedByCurrentThread()); ThreadState* state = GetFreeThreadState(); state->Unlink(); Isolate::PerIsolateThreadData* per_thread = @@ -281,14 +281,14 @@ void ThreadManager::ArchiveThread() { per_thread->set_thread_state(state); lazily_archived_thread_ = ThreadId::Current(); lazily_archived_thread_state_ = state; - ASSERT(state->id().Equals(ThreadId::Invalid())); + DCHECK(state->id().Equals(ThreadId::Invalid())); state->set_id(CurrentId()); - ASSERT(!state->id().Equals(ThreadId::Invalid())); + DCHECK(!state->id().Equals(ThreadId::Invalid())); } void ThreadManager::EagerlyArchiveThread() { - ASSERT(IsLockedByCurrentThread()); + DCHECK(IsLockedByCurrentThread()); ThreadState* state = lazily_archived_thread_state_; state->LinkInto(ThreadState::IN_USE_LIST); char* to = state->data(); diff --git a/deps/v8/src/v8threads.h b/deps/v8/src/v8threads.h index ca722adc66f..c3ba5173750 100644 --- a/deps/v8/src/v8threads.h +++ b/deps/v8/src/v8threads.h @@ -96,7 +96,7 @@ class ThreadManager { void EagerlyArchiveThread(); - Mutex mutex_; + base::Mutex mutex_; ThreadId mutex_owner_; ThreadId lazily_archived_thread_; ThreadState* lazily_archived_thread_state_; diff --git a/deps/v8/src/variables.cc b/deps/v8/src/variables.cc index 3b90e0eeacd..658831239cf 100644 --- a/deps/v8/src/variables.cc +++ b/deps/v8/src/variables.cc @@ -2,11 +2,11 @@ // Use of this source code is governed by a BSD-style license that can be // found in the LICENSE file. 
-#include "v8.h" +#include "src/v8.h" -#include "ast.h" -#include "scopes.h" -#include "variables.h" +#include "src/ast.h" +#include "src/scopes.h" +#include "src/variables.h" namespace v8 { namespace internal { @@ -32,30 +32,26 @@ const char* Variable::Mode2String(VariableMode mode) { } -Variable::Variable(Scope* scope, - Handle<String> name, - VariableMode mode, - bool is_valid_ref, - Kind kind, +Variable::Variable(Scope* scope, const AstRawString* name, VariableMode mode, + bool is_valid_ref, Kind kind, InitializationFlag initialization_flag, - Interface* interface) - : scope_(scope), - name_(name), - mode_(mode), - kind_(kind), - location_(UNALLOCATED), - index_(-1), - initializer_position_(RelocInfo::kNoPosition), - local_if_not_shadowed_(NULL), - is_valid_ref_(is_valid_ref), - force_context_allocation_(false), - is_used_(false), - initialization_flag_(initialization_flag), - interface_(interface) { - // Names must be canonicalized for fast equality checks. - ASSERT(name->IsInternalizedString()); + MaybeAssignedFlag maybe_assigned_flag, Interface* interface) + : scope_(scope), + name_(name), + mode_(mode), + kind_(kind), + location_(UNALLOCATED), + index_(-1), + initializer_position_(RelocInfo::kNoPosition), + local_if_not_shadowed_(NULL), + is_valid_ref_(is_valid_ref), + force_context_allocation_(false), + is_used_(false), + initialization_flag_(initialization_flag), + maybe_assigned_(maybe_assigned_flag), + interface_(interface) { // Var declared variables never need initialization. - ASSERT(!(mode == VAR && initialization_flag == kNeedsInitialization)); + DCHECK(!(mode == VAR && initialization_flag == kNeedsInitialization)); } diff --git a/deps/v8/src/variables.h b/deps/v8/src/variables.h index 3d8e130d33f..a8cf5e36e40 100644 --- a/deps/v8/src/variables.h +++ b/deps/v8/src/variables.h @@ -5,8 +5,9 @@ #ifndef V8_VARIABLES_H_ #define V8_VARIABLES_H_ -#include "zone.h" -#include "interface.h" +#include "src/ast-value-factory.h" +#include "src/interface.h" +#include "src/zone.h" namespace v8 { namespace internal { @@ -51,12 +52,9 @@ class Variable: public ZoneObject { LOOKUP }; - Variable(Scope* scope, - Handle<String> name, - VariableMode mode, - bool is_valid_ref, - Kind kind, - InitializationFlag initialization_flag, + Variable(Scope* scope, const AstRawString* name, VariableMode mode, + bool is_valid_ref, Kind kind, InitializationFlag initialization_flag, + MaybeAssignedFlag maybe_assigned_flag = kNotAssigned, Interface* interface = Interface::NewValue()); // Printing support @@ -70,17 +68,20 @@ class Variable: public ZoneObject { // scope is only used to follow the context chain length. 
Scope* scope() const { return scope_; } - Handle<String> name() const { return name_; } + Handle<String> name() const { return name_->string(); } + const AstRawString* raw_name() const { return name_; } VariableMode mode() const { return mode_; } bool has_forced_context_allocation() const { return force_context_allocation_; } void ForceContextAllocation() { - ASSERT(mode_ != TEMPORARY); + DCHECK(mode_ != TEMPORARY); force_context_allocation_ = true; } bool is_used() { return is_used_; } - void set_is_used(bool flag) { is_used_ = flag; } + void set_is_used() { is_used_ = true; } + MaybeAssignedFlag maybe_assigned() const { return maybe_assigned_; } + void set_maybe_assigned() { maybe_assigned_ = kMaybeAssigned; } int initializer_position() { return initializer_position_; } void set_initializer_position(int pos) { initializer_position_ = pos; } @@ -112,7 +113,7 @@ class Variable: public ZoneObject { } Variable* local_if_not_shadowed() const { - ASSERT(mode_ == DYNAMIC_LOCAL && local_if_not_shadowed_ != NULL); + DCHECK(mode_ == DYNAMIC_LOCAL && local_if_not_shadowed_ != NULL); return local_if_not_shadowed_; } @@ -136,7 +137,7 @@ class Variable: public ZoneObject { private: Scope* scope_; - Handle<String> name_; + const AstRawString* name_; VariableMode mode_; Kind kind_; Location location_; @@ -156,6 +157,7 @@ class Variable: public ZoneObject { bool force_context_allocation_; // set by variable resolver bool is_used_; InitializationFlag initialization_flag_; + MaybeAssignedFlag maybe_assigned_; // Module type info. Interface* interface_; diff --git a/deps/v8/src/vector.h b/deps/v8/src/vector.h index 9e8c200ed4b..e12b9161064 100644 --- a/deps/v8/src/vector.h +++ b/deps/v8/src/vector.h @@ -8,9 +8,9 @@ #include <string.h> #include <algorithm> -#include "allocation.h" -#include "checks.h" -#include "globals.h" +#include "src/allocation.h" +#include "src/checks.h" +#include "src/globals.h" namespace v8 { namespace internal { @@ -21,7 +21,7 @@ class Vector { public: Vector() : start_(NULL), length_(0) {} Vector(T* data, int length) : start_(data), length_(length) { - ASSERT(length == 0 || (length > 0 && data != NULL)); + DCHECK(length == 0 || (length > 0 && data != NULL)); } static Vector<T> New(int length) { @@ -31,9 +31,9 @@ class Vector { // Returns a vector using the same backing storage as this one, // spanning from and including 'from', to but not including 'to'. Vector<T> SubVector(int from, int to) { - SLOW_ASSERT(to <= length_); - SLOW_ASSERT(from < to); - ASSERT(0 <= from); + SLOW_DCHECK(to <= length_); + SLOW_DCHECK(from < to); + DCHECK(0 <= from); return Vector<T>(start() + from, to - from); } @@ -48,7 +48,7 @@ class Vector { // Access individual vector elements - checks bounds in debug mode. 
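SubVector above spans from and including 'from' to but not including 'to', the same half-open convention as Array.prototype.slice:

    [10, 20, 30, 40].slice(1, 3);  // [20, 30]: index 1 included, index 3 excluded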
T& operator[](int index) const { - ASSERT(0 <= index && index < length_); + DCHECK(0 <= index && index < length_); return start_[index]; } @@ -74,7 +74,7 @@ class Vector { } void Truncate(int length) { - ASSERT(length <= length_); + DCHECK(length <= length_); length_ = length; } @@ -87,7 +87,7 @@ class Vector { } inline Vector<T> operator+(int offset) { - ASSERT(offset < length_); + DCHECK(offset < length_); return Vector<T>(start_ + offset, length_ - offset); } @@ -100,6 +100,17 @@ class Vector { input.length() * sizeof(S) / sizeof(T)); } + bool operator==(const Vector<T>& other) const { + if (length_ != other.length_) return false; + if (start_ == other.start_) return true; + for (int i = 0; i < length_; ++i) { + if (start_[i] != other.start_[i]) { + return false; + } + } + return true; + } + protected: void set_start(T* start) { start_ = start; } @@ -135,7 +146,7 @@ class ScopedVector : public Vector<T> { inline int StrLength(const char* string) { size_t length = strlen(string); - ASSERT(length == static_cast<size_t>(static_cast<int>(length))); + DCHECK(length == static_cast<size_t>(static_cast<int>(length))); return static_cast<int>(length); } diff --git a/deps/v8/src/version.cc b/deps/v8/src/version.cc index 33755eb1fd1..c6f087d04b5 100644 --- a/deps/v8/src/version.cc +++ b/deps/v8/src/version.cc @@ -25,16 +25,16 @@ // (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE // OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. -#include "v8.h" +#include "src/v8.h" -#include "version.h" +#include "src/version.h" // These macros define the version number for the current version. // NOTE these macros are used by some of the tool scripts and the build // system so their names cannot be changed without changing the scripts. #define MAJOR_VERSION 3 -#define MINOR_VERSION 26 -#define BUILD_NUMBER 33 +#define MINOR_VERSION 28 +#define BUILD_NUMBER 73 #define PATCH_LEVEL 0 // Use 1 for candidates and 0 otherwise. // (Boolean macro values are not supported by all preprocessors.) @@ -84,13 +84,13 @@ void Version::GetString(Vector<char> str) { const char* is_simulator = ""; #endif // USE_SIMULATOR if (GetPatch() > 0) { - OS::SNPrintF(str, "%d.%d.%d.%d%s%s", - GetMajor(), GetMinor(), GetBuild(), GetPatch(), candidate, - is_simulator); + SNPrintF(str, "%d.%d.%d.%d%s%s", + GetMajor(), GetMinor(), GetBuild(), GetPatch(), candidate, + is_simulator); } else { - OS::SNPrintF(str, "%d.%d.%d%s%s", - GetMajor(), GetMinor(), GetBuild(), candidate, - is_simulator); + SNPrintF(str, "%d.%d.%d%s%s", + GetMajor(), GetMinor(), GetBuild(), candidate, + is_simulator); } } @@ -101,15 +101,15 @@ void Version::GetSONAME(Vector<char> str) { // Generate generic SONAME if no specific SONAME is defined. const char* candidate = IsCandidate() ? "-candidate" : ""; if (GetPatch() > 0) { - OS::SNPrintF(str, "libv8-%d.%d.%d.%d%s.so", - GetMajor(), GetMinor(), GetBuild(), GetPatch(), candidate); + SNPrintF(str, "libv8-%d.%d.%d.%d%s.so", + GetMajor(), GetMinor(), GetBuild(), GetPatch(), candidate); } else { - OS::SNPrintF(str, "libv8-%d.%d.%d%s.so", - GetMajor(), GetMinor(), GetBuild(), candidate); + SNPrintF(str, "libv8-%d.%d.%d%s.so", + GetMajor(), GetMinor(), GetBuild(), candidate); } } else { // Use specific SONAME. 
- OS::SNPrintF(str, "%s", soname_); + SNPrintF(str, "%s", soname_); } } diff --git a/deps/v8/src/version.h b/deps/v8/src/version.h index b0a60715215..4f600054ec1 100644 --- a/deps/v8/src/version.h +++ b/deps/v8/src/version.h @@ -16,6 +16,7 @@ class Version { static int GetBuild() { return build_; } static int GetPatch() { return patch_; } static bool IsCandidate() { return candidate_; } + static int Hash() { return (major_ << 20) ^ (minor_ << 10) ^ patch_; } // Calculate the V8 version string. static void GetString(Vector<char> str); diff --git a/deps/v8/src/vm-state-inl.h b/deps/v8/src/vm-state-inl.h index f26c48bd7d5..ac3941ea84b 100644 --- a/deps/v8/src/vm-state-inl.h +++ b/deps/v8/src/vm-state-inl.h @@ -5,9 +5,9 @@ #ifndef V8_VM_STATE_INL_H_ #define V8_VM_STATE_INL_H_ -#include "vm-state.h" -#include "log.h" -#include "simulator.h" +#include "src/vm-state.h" +#include "src/log.h" +#include "src/simulator.h" namespace v8 { namespace internal { @@ -40,8 +40,7 @@ template <StateTag Tag> VMState<Tag>::VMState(Isolate* isolate) : isolate_(isolate), previous_tag_(isolate->current_vm_state()) { if (FLAG_log_timer_events && previous_tag_ != EXTERNAL && Tag == EXTERNAL) { - LOG(isolate_, - TimerEvent(Logger::START, Logger::TimerEventScope::v8_external)); + LOG(isolate_, TimerEvent(Logger::START, TimerEventExternal::name())); } isolate_->set_current_vm_state(Tag); } @@ -50,8 +49,7 @@ VMState<Tag>::VMState(Isolate* isolate) template <StateTag Tag> VMState<Tag>::~VMState() { if (FLAG_log_timer_events && previous_tag_ != EXTERNAL && Tag == EXTERNAL) { - LOG(isolate_, - TimerEvent(Logger::END, Logger::TimerEventScope::v8_external)); + LOG(isolate_, TimerEvent(Logger::END, TimerEventExternal::name())); } isolate_->set_current_vm_state(previous_tag_); } diff --git a/deps/v8/src/vm-state.h b/deps/v8/src/vm-state.h index 9b3bed69d1d..a72180ca45f 100644 --- a/deps/v8/src/vm-state.h +++ b/deps/v8/src/vm-state.h @@ -5,8 +5,8 @@ #ifndef V8_VM_STATE_H_ #define V8_VM_STATE_H_ -#include "allocation.h" -#include "isolate.h" +#include "src/allocation.h" +#include "src/isolate.h" namespace v8 { namespace internal { diff --git a/deps/v8/src/weak_collection.js b/deps/v8/src/weak_collection.js index 4c26d257437..73dd9de6ba9 100644 --- a/deps/v8/src/weak_collection.js +++ b/deps/v8/src/weak_collection.js @@ -15,12 +15,36 @@ var $WeakSet = global.WeakSet; // ------------------------------------------------------------------- // Harmony WeakMap -function WeakMapConstructor() { - if (%_IsConstructCall()) { - %WeakCollectionInitialize(this); - } else { +function WeakMapConstructor(iterable) { + if (!%_IsConstructCall()) { throw MakeTypeError('constructor_not_function', ['WeakMap']); } + + var iter, adder; + + if (!IS_NULL_OR_UNDEFINED(iterable)) { + iter = GetIterator(iterable); + adder = this.set; + if (!IS_SPEC_FUNCTION(adder)) { + throw MakeTypeError('property_not_function', ['set', this]); + } + } + + %WeakCollectionInitialize(this); + + if (IS_UNDEFINED(iter)) return; + + var next, done, nextItem; + while (!(next = iter.next()).done) { + if (!IS_SPEC_OBJECT(next)) { + throw MakeTypeError('iterator_result_not_an_object', [next]); + } + nextItem = next.value; + if (!IS_SPEC_OBJECT(nextItem)) { + throw MakeTypeError('iterator_value_not_an_object', [nextItem]); + } + %_CallFunction(this, nextItem[0], nextItem[1], adder); + } } @@ -89,7 +113,7 @@ function SetUpWeakMap() { %SetCode($WeakMap, WeakMapConstructor); %FunctionSetPrototype($WeakMap, new $Object()); - %SetProperty($WeakMap.prototype, "constructor", $WeakMap, 
DONT_ENUM); + %AddNamedProperty($WeakMap.prototype, "constructor", $WeakMap, DONT_ENUM); // Set up the non-enumerable functions on the WeakMap prototype object. InstallFunctions($WeakMap.prototype, DONT_ENUM, $Array( @@ -107,12 +131,32 @@ SetUpWeakMap(); // ------------------------------------------------------------------- // Harmony WeakSet -function WeakSetConstructor() { - if (%_IsConstructCall()) { - %WeakCollectionInitialize(this); - } else { +function WeakSetConstructor(iterable) { + if (!%_IsConstructCall()) { throw MakeTypeError('constructor_not_function', ['WeakSet']); } + + var iter, adder; + + if (!IS_NULL_OR_UNDEFINED(iterable)) { + iter = GetIterator(iterable); + adder = this.add; + if (!IS_SPEC_FUNCTION(adder)) { + throw MakeTypeError('property_not_function', ['add', this]); + } + } + + %WeakCollectionInitialize(this); + + if (IS_UNDEFINED(iter)) return; + + var next, done; + while (!(next = iter.next()).done) { + if (!IS_SPEC_OBJECT(next)) { + throw MakeTypeError('iterator_result_not_an_object', [next]); + } + %_CallFunction(this, next.value, adder); + } } @@ -169,7 +213,7 @@ function SetUpWeakSet() { %SetCode($WeakSet, WeakSetConstructor); %FunctionSetPrototype($WeakSet, new $Object()); - %SetProperty($WeakSet.prototype, "constructor", $WeakSet, DONT_ENUM); + %AddNamedProperty($WeakSet.prototype, "constructor", $WeakSet, DONT_ENUM); // Set up the non-enumerable functions on the WeakSet prototype object. InstallFunctions($WeakSet.prototype, DONT_ENUM, $Array( diff --git a/deps/v8/src/x64/assembler-x64-inl.h b/deps/v8/src/x64/assembler-x64-inl.h index be990228906..b64bbfb664f 100644 --- a/deps/v8/src/x64/assembler-x64-inl.h +++ b/deps/v8/src/x64/assembler-x64-inl.h @@ -5,15 +5,17 @@ #ifndef V8_X64_ASSEMBLER_X64_INL_H_ #define V8_X64_ASSEMBLER_X64_INL_H_ -#include "x64/assembler-x64.h" +#include "src/x64/assembler-x64.h" -#include "cpu.h" -#include "debug.h" -#include "v8memory.h" +#include "src/base/cpu.h" +#include "src/debug.h" +#include "src/v8memory.h" namespace v8 { namespace internal { +bool CpuFeatures::SupportsCrankshaft() { return true; } + // ----------------------------------------------------------------------------- // Implementation of Assembler @@ -55,7 +57,7 @@ void Assembler::emitw(uint16_t x) { void Assembler::emit_code_target(Handle<Code> target, RelocInfo::Mode rmode, TypeFeedbackId ast_id) { - ASSERT(RelocInfo::IsCodeTarget(rmode) || + DCHECK(RelocInfo::IsCodeTarget(rmode) || rmode == RelocInfo::CODE_AGE_SEQUENCE); if (rmode == RelocInfo::CODE_TARGET && !ast_id.IsNone()) { RecordRelocInfo(RelocInfo::CODE_TARGET_WITH_ID, ast_id.ToInt()); @@ -74,8 +76,7 @@ void Assembler::emit_code_target(Handle<Code> target, void Assembler::emit_runtime_entry(Address entry, RelocInfo::Mode rmode) { - ASSERT(RelocInfo::IsRuntimeEntry(rmode)); - ASSERT(isolate()->code_range()->exists()); + DCHECK(RelocInfo::IsRuntimeEntry(rmode)); RecordRelocInfo(rmode); emitl(static_cast<uint32_t>(entry - isolate()->code_range()->start())); } @@ -107,7 +108,7 @@ void Assembler::emit_rex_64(XMMRegister reg, const Operand& op) { void Assembler::emit_rex_64(Register rm_reg) { - ASSERT_EQ(rm_reg.code() & 0xf, rm_reg.code()); + DCHECK_EQ(rm_reg.code() & 0xf, rm_reg.code()); emit(0x48 | rm_reg.high_bit()); } @@ -191,9 +192,12 @@ Address Assembler::target_address_at(Address pc, void Assembler::set_target_address_at(Address pc, ConstantPoolArray* constant_pool, - Address target) { + Address target, + ICacheFlushMode icache_flush_mode) { Memory::int32_at(pc) = static_cast<int32_t>(target - pc - 4); 
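The reworked WeakMap and WeakSet constructors in weak_collection.js above take an optional iterable; each WeakMap entry must itself be an object holding a [key, value] pair, and entries are routed through this.set (or this.add for WeakSet). Usage sketch (plain JS):

    var k1 = {}, k2 = {};
    var wm = new WeakMap([[k1, "a"], [k2, "b"]]);  // entries fed through this.set
    wm.get(k2);                                    // "b"
    var ws = new WeakSet([k1, k2]);                // values fed through this.add
    ws.has(k1);                                    // true
    // new WeakMap([1]) -> TypeError: each entry must be an object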
- CPU::FlushICache(pc, sizeof(int32_t)); + if (icache_flush_mode != SKIP_ICACHE_FLUSH) { + CpuFeatures::FlushICache(pc, sizeof(int32_t)); + } } @@ -202,13 +206,17 @@ Address Assembler::target_address_from_return_address(Address pc) { } +Address Assembler::break_address_from_return_address(Address pc) { + return pc - Assembler::kPatchDebugBreakSlotReturnOffset; +} + + Handle<Object> Assembler::code_target_object_handle_at(Address pc) { return code_targets_[Memory::int32_at(pc)]; } Address Assembler::runtime_entry_at(Address pc) { - ASSERT(isolate()->code_range()->exists()); return Memory::int32_at(pc) + isolate()->code_range()->start(); } @@ -216,32 +224,33 @@ Address Assembler::runtime_entry_at(Address pc) { // Implementation of RelocInfo // The modes possibly affected by apply must be in kApplyMask. -void RelocInfo::apply(intptr_t delta) { +void RelocInfo::apply(intptr_t delta, ICacheFlushMode icache_flush_mode) { + bool flush_icache = icache_flush_mode != SKIP_ICACHE_FLUSH; if (IsInternalReference(rmode_)) { // absolute code pointer inside code object moves with the code object. Memory::Address_at(pc_) += static_cast<int32_t>(delta); - CPU::FlushICache(pc_, sizeof(Address)); + if (flush_icache) CpuFeatures::FlushICache(pc_, sizeof(Address)); } else if (IsCodeTarget(rmode_) || IsRuntimeEntry(rmode_)) { Memory::int32_at(pc_) -= static_cast<int32_t>(delta); - CPU::FlushICache(pc_, sizeof(int32_t)); + if (flush_icache) CpuFeatures::FlushICache(pc_, sizeof(int32_t)); } else if (rmode_ == CODE_AGE_SEQUENCE) { if (*pc_ == kCallOpcode) { int32_t* p = reinterpret_cast<int32_t*>(pc_ + 1); *p -= static_cast<int32_t>(delta); // Relocate entry. - CPU::FlushICache(p, sizeof(uint32_t)); + if (flush_icache) CpuFeatures::FlushICache(p, sizeof(uint32_t)); } } } Address RelocInfo::target_address() { - ASSERT(IsCodeTarget(rmode_) || IsRuntimeEntry(rmode_)); + DCHECK(IsCodeTarget(rmode_) || IsRuntimeEntry(rmode_)); return Assembler::target_address_at(pc_, host_); } Address RelocInfo::target_address_address() { - ASSERT(IsCodeTarget(rmode_) || IsRuntimeEntry(rmode_) + DCHECK(IsCodeTarget(rmode_) || IsRuntimeEntry(rmode_) || rmode_ == EMBEDDED_OBJECT || rmode_ == EXTERNAL_REFERENCE); return reinterpret_cast<Address>(pc_); @@ -263,10 +272,13 @@ int RelocInfo::target_address_size() { } -void RelocInfo::set_target_address(Address target, WriteBarrierMode mode) { - ASSERT(IsCodeTarget(rmode_) || IsRuntimeEntry(rmode_)); - Assembler::set_target_address_at(pc_, host_, target); - if (mode == UPDATE_WRITE_BARRIER && host() != NULL && IsCodeTarget(rmode_)) { +void RelocInfo::set_target_address(Address target, + WriteBarrierMode write_barrier_mode, + ICacheFlushMode icache_flush_mode) { + DCHECK(IsCodeTarget(rmode_) || IsRuntimeEntry(rmode_)); + Assembler::set_target_address_at(pc_, host_, target, icache_flush_mode); + if (write_barrier_mode == UPDATE_WRITE_BARRIER && host() != NULL && + IsCodeTarget(rmode_)) { Object* target_code = Code::GetCodeFromTargetAddress(target); host()->GetHeap()->incremental_marking()->RecordWriteIntoCode( host(), this, HeapObject::cast(target_code)); @@ -275,13 +287,13 @@ void RelocInfo::set_target_address(Address target, WriteBarrierMode mode) { Object* RelocInfo::target_object() { - ASSERT(IsCodeTarget(rmode_) || rmode_ == EMBEDDED_OBJECT); + DCHECK(IsCodeTarget(rmode_) || rmode_ == EMBEDDED_OBJECT); return Memory::Object_at(pc_); } Handle<Object> RelocInfo::target_object_handle(Assembler* origin) { - ASSERT(IsCodeTarget(rmode_) || rmode_ == EMBEDDED_OBJECT); + DCHECK(IsCodeTarget(rmode_) 
|| rmode_ == EMBEDDED_OBJECT); if (rmode_ == EMBEDDED_OBJECT) { return Memory::Object_Handle_at(pc_); } else { @@ -291,17 +303,20 @@ Handle<Object> RelocInfo::target_object_handle(Assembler* origin) { Address RelocInfo::target_reference() { - ASSERT(rmode_ == RelocInfo::EXTERNAL_REFERENCE); + DCHECK(rmode_ == RelocInfo::EXTERNAL_REFERENCE); return Memory::Address_at(pc_); } -void RelocInfo::set_target_object(Object* target, WriteBarrierMode mode) { - ASSERT(IsCodeTarget(rmode_) || rmode_ == EMBEDDED_OBJECT); - ASSERT(!target->IsConsString()); +void RelocInfo::set_target_object(Object* target, + WriteBarrierMode write_barrier_mode, + ICacheFlushMode icache_flush_mode) { + DCHECK(IsCodeTarget(rmode_) || rmode_ == EMBEDDED_OBJECT); Memory::Object_at(pc_) = target; - CPU::FlushICache(pc_, sizeof(Address)); - if (mode == UPDATE_WRITE_BARRIER && + if (icache_flush_mode != SKIP_ICACHE_FLUSH) { + CpuFeatures::FlushICache(pc_, sizeof(Address)); + } + if (write_barrier_mode == UPDATE_WRITE_BARRIER && host() != NULL && target->IsHeapObject()) { host()->GetHeap()->incremental_marking()->RecordWrite( @@ -311,37 +326,44 @@ void RelocInfo::set_target_object(Object* target, WriteBarrierMode mode) { Address RelocInfo::target_runtime_entry(Assembler* origin) { - ASSERT(IsRuntimeEntry(rmode_)); + DCHECK(IsRuntimeEntry(rmode_)); return origin->runtime_entry_at(pc_); } void RelocInfo::set_target_runtime_entry(Address target, - WriteBarrierMode mode) { - ASSERT(IsRuntimeEntry(rmode_)); - if (target_address() != target) set_target_address(target, mode); + WriteBarrierMode write_barrier_mode, + ICacheFlushMode icache_flush_mode) { + DCHECK(IsRuntimeEntry(rmode_)); + if (target_address() != target) { + set_target_address(target, write_barrier_mode, icache_flush_mode); + } } Handle<Cell> RelocInfo::target_cell_handle() { - ASSERT(rmode_ == RelocInfo::CELL); + DCHECK(rmode_ == RelocInfo::CELL); Address address = Memory::Address_at(pc_); return Handle<Cell>(reinterpret_cast<Cell**>(address)); } Cell* RelocInfo::target_cell() { - ASSERT(rmode_ == RelocInfo::CELL); + DCHECK(rmode_ == RelocInfo::CELL); return Cell::FromValueAddress(Memory::Address_at(pc_)); } -void RelocInfo::set_target_cell(Cell* cell, WriteBarrierMode mode) { - ASSERT(rmode_ == RelocInfo::CELL); +void RelocInfo::set_target_cell(Cell* cell, + WriteBarrierMode write_barrier_mode, + ICacheFlushMode icache_flush_mode) { + DCHECK(rmode_ == RelocInfo::CELL); Address address = cell->address() + Cell::kValueOffset; Memory::Address_at(pc_) = address; - CPU::FlushICache(pc_, sizeof(Address)); - if (mode == UPDATE_WRITE_BARRIER && + if (icache_flush_mode != SKIP_ICACHE_FLUSH) { + CpuFeatures::FlushICache(pc_, sizeof(Address)); + } + if (write_barrier_mode == UPDATE_WRITE_BARRIER && host() != NULL) { // TODO(1550) We are passing NULL as a slot because cell can never be on // evacuation candidate. 
@@ -381,29 +403,31 @@ bool RelocInfo::IsPatchedDebugBreakSlotSequence() { Handle<Object> RelocInfo::code_age_stub_handle(Assembler* origin) { - ASSERT(rmode_ == RelocInfo::CODE_AGE_SEQUENCE); - ASSERT(*pc_ == kCallOpcode); + DCHECK(rmode_ == RelocInfo::CODE_AGE_SEQUENCE); + DCHECK(*pc_ == kCallOpcode); return origin->code_target_object_handle_at(pc_ + 1); } Code* RelocInfo::code_age_stub() { - ASSERT(rmode_ == RelocInfo::CODE_AGE_SEQUENCE); - ASSERT(*pc_ == kCallOpcode); + DCHECK(rmode_ == RelocInfo::CODE_AGE_SEQUENCE); + DCHECK(*pc_ == kCallOpcode); return Code::GetCodeFromTargetAddress( Assembler::target_address_at(pc_ + 1, host_)); } -void RelocInfo::set_code_age_stub(Code* stub) { - ASSERT(*pc_ == kCallOpcode); - ASSERT(rmode_ == RelocInfo::CODE_AGE_SEQUENCE); - Assembler::set_target_address_at(pc_ + 1, host_, stub->instruction_start()); +void RelocInfo::set_code_age_stub(Code* stub, + ICacheFlushMode icache_flush_mode) { + DCHECK(*pc_ == kCallOpcode); + DCHECK(rmode_ == RelocInfo::CODE_AGE_SEQUENCE); + Assembler::set_target_address_at(pc_ + 1, host_, stub->instruction_start(), + icache_flush_mode); } Address RelocInfo::call_address() { - ASSERT((IsJSReturn(rmode()) && IsPatchedReturnSequence()) || + DCHECK((IsJSReturn(rmode()) && IsPatchedReturnSequence()) || (IsDebugBreakSlot(rmode()) && IsPatchedDebugBreakSlotSequence())); return Memory::Address_at( pc_ + Assembler::kRealPatchReturnSequenceAddressOffset); @@ -411,12 +435,12 @@ Address RelocInfo::call_address() { void RelocInfo::set_call_address(Address target) { - ASSERT((IsJSReturn(rmode()) && IsPatchedReturnSequence()) || + DCHECK((IsJSReturn(rmode()) && IsPatchedReturnSequence()) || (IsDebugBreakSlot(rmode()) && IsPatchedDebugBreakSlotSequence())); Memory::Address_at(pc_ + Assembler::kRealPatchReturnSequenceAddressOffset) = target; - CPU::FlushICache(pc_ + Assembler::kRealPatchReturnSequenceAddressOffset, - sizeof(Address)); + CpuFeatures::FlushICache( + pc_ + Assembler::kRealPatchReturnSequenceAddressOffset, sizeof(Address)); if (host() != NULL) { Object* target_code = Code::GetCodeFromTargetAddress(target); host()->GetHeap()->incremental_marking()->RecordWriteIntoCode( @@ -436,7 +460,7 @@ void RelocInfo::set_call_object(Object* target) { Object** RelocInfo::call_object_address() { - ASSERT((IsJSReturn(rmode()) && IsPatchedReturnSequence()) || + DCHECK((IsJSReturn(rmode()) && IsPatchedReturnSequence()) || (IsDebugBreakSlot(rmode()) && IsPatchedDebugBreakSlotSequence())); return reinterpret_cast<Object**>( pc_ + Assembler::kPatchReturnSequenceAddressOffset); @@ -447,14 +471,14 @@ void RelocInfo::Visit(Isolate* isolate, ObjectVisitor* visitor) { RelocInfo::Mode mode = rmode(); if (mode == RelocInfo::EMBEDDED_OBJECT) { visitor->VisitEmbeddedPointer(this); - CPU::FlushICache(pc_, sizeof(Address)); + CpuFeatures::FlushICache(pc_, sizeof(Address)); } else if (RelocInfo::IsCodeTarget(mode)) { visitor->VisitCodeTarget(this); } else if (mode == RelocInfo::CELL) { visitor->VisitCell(this); } else if (mode == RelocInfo::EXTERNAL_REFERENCE) { visitor->VisitExternalReference(this); - CPU::FlushICache(pc_, sizeof(Address)); + CpuFeatures::FlushICache(pc_, sizeof(Address)); } else if (RelocInfo::IsCodeAgeSequence(mode)) { visitor->VisitCodeAgeSequence(this); } else if (((RelocInfo::IsJSReturn(mode) && @@ -474,14 +498,14 @@ void RelocInfo::Visit(Heap* heap) { RelocInfo::Mode mode = rmode(); if (mode == RelocInfo::EMBEDDED_OBJECT) { StaticVisitor::VisitEmbeddedPointer(heap, this); - CPU::FlushICache(pc_, sizeof(Address)); + 
CpuFeatures::FlushICache(pc_, sizeof(Address)); } else if (RelocInfo::IsCodeTarget(mode)) { StaticVisitor::VisitCodeTarget(heap, this); } else if (mode == RelocInfo::CELL) { StaticVisitor::VisitCell(heap, this); } else if (mode == RelocInfo::EXTERNAL_REFERENCE) { StaticVisitor::VisitExternalReference(this); - CPU::FlushICache(pc_, sizeof(Address)); + CpuFeatures::FlushICache(pc_, sizeof(Address)); } else if (RelocInfo::IsCodeAgeSequence(mode)) { StaticVisitor::VisitCodeAgeSequence(heap, this); } else if (heap->isolate()->debug()->has_break_points() && @@ -500,7 +524,7 @@ void RelocInfo::Visit(Heap* heap) { // Implementation of Operand void Operand::set_modrm(int mod, Register rm_reg) { - ASSERT(is_uint2(mod)); + DCHECK(is_uint2(mod)); buf_[0] = mod << 6 | rm_reg.low_bits(); // Set REX.B to the high bit of rm.code(). rex_ |= rm_reg.high_bit(); @@ -508,26 +532,26 @@ void Operand::set_modrm(int mod, Register rm_reg) { void Operand::set_sib(ScaleFactor scale, Register index, Register base) { - ASSERT(len_ == 1); - ASSERT(is_uint2(scale)); + DCHECK(len_ == 1); + DCHECK(is_uint2(scale)); // Use SIB with no index register only for base rsp or r12. Otherwise we // would skip the SIB byte entirely. - ASSERT(!index.is(rsp) || base.is(rsp) || base.is(r12)); + DCHECK(!index.is(rsp) || base.is(rsp) || base.is(r12)); buf_[1] = (scale << 6) | (index.low_bits() << 3) | base.low_bits(); rex_ |= index.high_bit() << 1 | base.high_bit(); len_ = 2; } void Operand::set_disp8(int disp) { - ASSERT(is_int8(disp)); - ASSERT(len_ == 1 || len_ == 2); + DCHECK(is_int8(disp)); + DCHECK(len_ == 1 || len_ == 2); int8_t* p = reinterpret_cast<int8_t*>(&buf_[len_]); *p = disp; len_ += sizeof(int8_t); } void Operand::set_disp32(int disp) { - ASSERT(len_ == 1 || len_ == 2); + DCHECK(len_ == 1 || len_ == 2); int32_t* p = reinterpret_cast<int32_t*>(&buf_[len_]); *p = disp; len_ += sizeof(int32_t); diff --git a/deps/v8/src/x64/assembler-x64.cc b/deps/v8/src/x64/assembler-x64.cc index 306a54d82b5..d13c21f4b7c 100644 --- a/deps/v8/src/x64/assembler-x64.cc +++ b/deps/v8/src/x64/assembler-x64.cc @@ -2,12 +2,12 @@ // Use of this source code is governed by a BSD-style license that can be // found in the LICENSE file. -#include "v8.h" +#include "src/v8.h" #if V8_TARGET_ARCH_X64 -#include "macro-assembler.h" -#include "serialize.h" +#include "src/macro-assembler.h" +#include "src/serialize.h" namespace v8 { namespace internal { @@ -15,58 +15,23 @@ namespace internal { // ----------------------------------------------------------------------------- // Implementation of CpuFeatures +void CpuFeatures::ProbeImpl(bool cross_compile) { + base::CPU cpu; + CHECK(cpu.has_sse2()); // SSE2 support is mandatory. + CHECK(cpu.has_cmov()); // CMOV support is mandatory. -#ifdef DEBUG -bool CpuFeatures::initialized_ = false; -#endif -uint64_t CpuFeatures::supported_ = CpuFeatures::kDefaultCpuFeatures; -uint64_t CpuFeatures::found_by_runtime_probing_only_ = 0; -uint64_t CpuFeatures::cross_compile_ = 0; + // Only use statically determined features for cross compile (snapshot). + if (cross_compile) return; -ExternalReference ExternalReference::cpu_features() { - ASSERT(CpuFeatures::initialized_); - return ExternalReference(&CpuFeatures::supported_); + if (cpu.has_sse41() && FLAG_enable_sse4_1) supported_ |= 1u << SSE4_1; + if (cpu.has_sse3() && FLAG_enable_sse3) supported_ |= 1u << SSE3; + // SAHF is not generally available in long mode. 
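ProbeImpl above ORs one bit per detected feature into supported_ (1u << SSE4_1, and so on), so a feature check reduces to a single mask test. The scheme in plain JS, with illustrative bit positions (the real enum values differ):

    var SSE3 = 0, SSE4_1 = 1, SAHF = 2;  // illustrative positions only
    var supported = 0;
    supported |= 1 << SSE4_1;
    function isSupported(f) { return (supported & (1 << f)) !== 0; }
    // isSupported(SSE4_1) -> true; isSupported(SSE3) -> false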
+ if (cpu.has_sahf() && FLAG_enable_sahf) supported_|= 1u << SAHF; } -void CpuFeatures::Probe(bool serializer_enabled) { - ASSERT(supported_ == CpuFeatures::kDefaultCpuFeatures); -#ifdef DEBUG - initialized_ = true; -#endif - supported_ = kDefaultCpuFeatures; - if (serializer_enabled) { - supported_ |= OS::CpuFeaturesImpliedByPlatform(); - return; // No features if we might serialize. - } - - uint64_t probed_features = 0; - CPU cpu; - if (cpu.has_sse41()) { - probed_features |= static_cast<uint64_t>(1) << SSE4_1; - } - if (cpu.has_sse3()) { - probed_features |= static_cast<uint64_t>(1) << SSE3; - } - - // SSE2 must be available on every x64 CPU. - ASSERT(cpu.has_sse2()); - probed_features |= static_cast<uint64_t>(1) << SSE2; - - // CMOV must be available on every x64 CPU. - ASSERT(cpu.has_cmov()); - probed_features |= static_cast<uint64_t>(1) << CMOV; - - // SAHF is not generally available in long mode. - if (cpu.has_sahf()) { - probed_features |= static_cast<uint64_t>(1) << SAHF; - } - - uint64_t platform_features = OS::CpuFeaturesImpliedByPlatform(); - supported_ = probed_features | platform_features; - found_by_runtime_probing_only_ - = probed_features & ~kDefaultCpuFeatures & ~platform_features; -} +void CpuFeatures::PrintTarget() { } +void CpuFeatures::PrintFeatures() { } // ----------------------------------------------------------------------------- @@ -92,7 +57,7 @@ void RelocInfo::PatchCodeWithCall(Address target, int guard_bytes) { patcher.masm()->call(kScratchRegister); // Check that the size of the code generated is as expected. - ASSERT_EQ(Assembler::kCallSequenceLength, + DCHECK_EQ(Assembler::kCallSequenceLength, patcher.masm()->SizeOfCodeGeneratedSince(&check_codesize)); // Add the requested number of int3 instructions after the call. @@ -109,7 +74,7 @@ void RelocInfo::PatchCode(byte* instructions, int instruction_count) { } // Indicate that code has changed. - CPU::FlushICache(pc_, instruction_count); + CpuFeatures::FlushICache(pc_, instruction_count); } @@ -153,7 +118,7 @@ Operand::Operand(Register base, Register index, ScaleFactor scale, int32_t disp) : rex_(0) { - ASSERT(!index.is(rsp)); + DCHECK(!index.is(rsp)); len_ = 1; set_sib(scale, index, base); if (disp == 0 && !base.is(rbp) && !base.is(r13)) { @@ -173,7 +138,7 @@ Operand::Operand(Register base, Operand::Operand(Register index, ScaleFactor scale, int32_t disp) : rex_(0) { - ASSERT(!index.is(rsp)); + DCHECK(!index.is(rsp)); len_ = 1; set_modrm(0, rsp); set_sib(scale, index, rbp); @@ -182,10 +147,10 @@ Operand::Operand(Register index, Operand::Operand(const Operand& operand, int32_t offset) { - ASSERT(operand.len_ >= 1); + DCHECK(operand.len_ >= 1); // Operand encodes REX ModR/M [SIB] [Disp]. byte modrm = operand.buf_[0]; - ASSERT(modrm < 0xC0); // Disallow mode 3 (register target). + DCHECK(modrm < 0xC0); // Disallow mode 3 (register target). bool has_sib = ((modrm & 0x07) == 0x04); byte mode = modrm & 0xC0; int disp_offset = has_sib ? 2 : 1; @@ -203,7 +168,7 @@ Operand::Operand(const Operand& operand, int32_t offset) { } // Write new operand with same registers, but with modified displacement. - ASSERT(offset >= 0 ? disp_value + offset > disp_value + DCHECK(offset >= 0 ? disp_value + offset > disp_value : disp_value + offset < disp_value); // No overflow. 
disp_value += offset; rex_ = operand.rex_; @@ -230,7 +195,7 @@ Operand::Operand(const Operand& operand, int32_t offset) { bool Operand::AddressUsesRegister(Register reg) const { int code = reg.code(); - ASSERT((buf_[0] & 0xC0) != 0xC0); // Always a memory operand. + DCHECK((buf_[0] & 0xC0) != 0xC0); // Always a memory operand. // Start with only low three bits of base register. Initial decoding doesn't // distinguish on the REX.B bit. int base_code = buf_[0] & 0x07; @@ -287,12 +252,12 @@ Assembler::Assembler(Isolate* isolate, void* buffer, int buffer_size) void Assembler::GetCode(CodeDesc* desc) { // Finalize code (at this point overflow() may be true, but the gap ensures // that we are still not overlapping instructions and relocation info). - ASSERT(pc_ <= reloc_info_writer.pos()); // No overlap. + DCHECK(pc_ <= reloc_info_writer.pos()); // No overlap. // Set up code descriptor. desc->buffer = buffer_; desc->buffer_size = buffer_size_; desc->instr_size = pc_offset(); - ASSERT(desc->instr_size > 0); // Zero-size code objects upset the system. + DCHECK(desc->instr_size > 0); // Zero-size code objects upset the system. desc->reloc_size = static_cast<int>((buffer_ + buffer_size_) - reloc_info_writer.pos()); desc->origin = this; @@ -300,7 +265,7 @@ void Assembler::GetCode(CodeDesc* desc) { void Assembler::Align(int m) { - ASSERT(IsPowerOf2(m)); + DCHECK(IsPowerOf2(m)); int delta = (m - (pc_offset() & (m - 1))) & (m - 1); Nop(delta); } @@ -321,8 +286,8 @@ bool Assembler::IsNop(Address addr) { void Assembler::bind_to(Label* L, int pos) { - ASSERT(!L->is_bound()); // Label may only be bound once. - ASSERT(0 <= pos && pos <= pc_offset()); // Position must be valid. + DCHECK(!L->is_bound()); // Label may only be bound once. + DCHECK(0 <= pos && pos <= pc_offset()); // Position must be valid. if (L->is_linked()) { int current = L->pos(); int next = long_at(current); @@ -341,7 +306,7 @@ void Assembler::bind_to(Label* L, int pos) { int fixup_pos = L->near_link_pos(); int offset_to_next = static_cast<int>(*reinterpret_cast<int8_t*>(addr_at(fixup_pos))); - ASSERT(offset_to_next <= 0); + DCHECK(offset_to_next <= 0); int disp = pos - (fixup_pos + sizeof(int8_t)); CHECK(is_int8(disp)); set_byte_at(fixup_pos, disp); @@ -361,16 +326,13 @@ void Assembler::bind(Label* L) { void Assembler::GrowBuffer() { - ASSERT(buffer_overflow()); + DCHECK(buffer_overflow()); if (!own_buffer_) FATAL("external code buffer is too small"); // Compute new buffer size. CodeDesc desc; // the new buffer - if (buffer_size_ < 4*KB) { - desc.buffer_size = 4*KB; - } else { - desc.buffer_size = 2*buffer_size_; - } + desc.buffer_size = 2 * buffer_size_; + // Some internal data structures overflow for very large buffers, // they must ensure that kMaximalBufferSize is not too large. if ((desc.buffer_size > kMaximalBufferSize) || @@ -394,18 +356,12 @@ void Assembler::GrowBuffer() { intptr_t pc_delta = desc.buffer - buffer_; intptr_t rc_delta = (desc.buffer + desc.buffer_size) - (buffer_ + buffer_size_); - OS::MemMove(desc.buffer, buffer_, desc.instr_size); - OS::MemMove(rc_delta + reloc_info_writer.pos(), - reloc_info_writer.pos(), desc.reloc_size); + MemMove(desc.buffer, buffer_, desc.instr_size); + MemMove(rc_delta + reloc_info_writer.pos(), reloc_info_writer.pos(), + desc.reloc_size); // Switch buffers. 
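GrowBuffer() has to preserve an invariant visible in the two MemMove calls above: instructions grow up from the start of the buffer while relocation info is written backwards from the end, so after doubling the allocation the two live regions are copied to opposite ends of the new block. A cut-down model of that move (plain malloc/free stand in for V8's allocator; the struct is a sketch, not V8's class):

    #include <cstdlib>
    #include <cstring>

    // Code grows up from buffer[0]; relocation info grows down from the end.
    struct CodeBuffer {
      char* buffer;
      int size;
      int instr_size;   // bytes of instructions at the front
      int reloc_size;   // bytes of relocation info at the back

      void Grow() {
        int new_size = 2 * size;  // geometric growth, as in GrowBuffer()
        char* nb = static_cast<char*>(malloc(new_size));
        memcpy(nb, buffer, instr_size);               // front block: code
        memcpy(nb + new_size - reloc_size,            // back block: reloc info
               buffer + size - reloc_size, reloc_size);
        free(buffer);
        buffer = nb;
        size = new_size;
      }
    };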
- if (isolate() != NULL && - isolate()->assembler_spare_buffer() == NULL && - buffer_size_ == kMinimalBufferSize) { - isolate()->set_assembler_spare_buffer(buffer_); - } else { - DeleteArray(buffer_); - } + DeleteArray(buffer_); buffer_ = desc.buffer; buffer_size_ = desc.buffer_size; pc_ += pc_delta; @@ -423,17 +379,17 @@ void Assembler::GrowBuffer() { } } - ASSERT(!buffer_overflow()); + DCHECK(!buffer_overflow()); } void Assembler::emit_operand(int code, const Operand& adr) { - ASSERT(is_uint3(code)); + DCHECK(is_uint3(code)); const unsigned length = adr.len_; - ASSERT(length > 0); + DCHECK(length > 0); // Emit updated ModR/M byte containing the given register. - ASSERT((adr.buf_[0] & 0x38) == 0); + DCHECK((adr.buf_[0] & 0x38) == 0); pc_[0] = adr.buf_[0] | code << 3; // Emit the rest of the encoded operand. @@ -460,7 +416,7 @@ void Assembler::arithmetic_op(byte opcode, Register rm_reg, int size) { EnsureSpace ensure_space(this); - ASSERT((opcode & 0xC6) == 2); + DCHECK((opcode & 0xC6) == 2); if (rm_reg.low_bits() == 4) { // Forces SIB byte. // Swap reg and rm_reg and change opcode operand order. emit_rex(rm_reg, reg, size); @@ -476,7 +432,7 @@ void Assembler::arithmetic_op(byte opcode, void Assembler::arithmetic_op_16(byte opcode, Register reg, Register rm_reg) { EnsureSpace ensure_space(this); - ASSERT((opcode & 0xC6) == 2); + DCHECK((opcode & 0xC6) == 2); if (rm_reg.low_bits() == 4) { // Forces SIB byte. // Swap reg and rm_reg and change opcode operand order. emit(0x66); @@ -516,7 +472,7 @@ void Assembler::arithmetic_op_8(byte opcode, Register reg, const Operand& op) { void Assembler::arithmetic_op_8(byte opcode, Register reg, Register rm_reg) { EnsureSpace ensure_space(this); - ASSERT((opcode & 0xC6) == 2); + DCHECK((opcode & 0xC6) == 2); if (rm_reg.low_bits() == 4) { // Forces SIB byte. // Swap reg and rm_reg and change opcode operand order. if (!rm_reg.is_byte_register() || !reg.is_byte_register()) { @@ -618,7 +574,7 @@ void Assembler::immediate_arithmetic_op_8(byte subcode, Immediate src) { EnsureSpace ensure_space(this); emit_optional_rex_32(dst); - ASSERT(is_int8(src.value_) || is_uint8(src.value_)); + DCHECK(is_int8(src.value_) || is_uint8(src.value_)); emit(0x80); emit_operand(subcode, dst); emit(src.value_); @@ -633,7 +589,7 @@ void Assembler::immediate_arithmetic_op_8(byte subcode, // Register is not one of al, bl, cl, dl. Its encoding needs REX. emit_rex_32(dst); } - ASSERT(is_int8(src.value_) || is_uint8(src.value_)); + DCHECK(is_int8(src.value_) || is_uint8(src.value_)); emit(0x80); emit_modrm(subcode, dst); emit(src.value_); @@ -645,7 +601,7 @@ void Assembler::shift(Register dst, int subcode, int size) { EnsureSpace ensure_space(this); - ASSERT(size == kInt64Size ? is_uint6(shift_amount.value_) + DCHECK(size == kInt64Size ? 
is_uint6(shift_amount.value_) : is_uint5(shift_amount.value_)); if (shift_amount.value_ == 1) { emit_rex(dst, size); @@ -702,13 +658,13 @@ void Assembler::call(Label* L) { emit(0xE8); if (L->is_bound()) { int offset = L->pos() - pc_offset() - sizeof(int32_t); - ASSERT(offset <= 0); + DCHECK(offset <= 0); emitl(offset); } else if (L->is_linked()) { emitl(L->pos()); L->link_to(pc_offset() - sizeof(int32_t)); } else { - ASSERT(L->is_unused()); + DCHECK(L->is_unused()); int32_t current = pc_offset(); emitl(current); L->link_to(current); @@ -717,7 +673,7 @@ void Assembler::call(Label* L) { void Assembler::call(Address entry, RelocInfo::Mode rmode) { - ASSERT(RelocInfo::IsRuntimeEntry(rmode)); + DCHECK(RelocInfo::IsRuntimeEntry(rmode)); positions_recorder()->WriteRecordedPositions(); EnsureSpace ensure_space(this); // 1110 1000 #32-bit disp. @@ -768,7 +724,7 @@ void Assembler::call(Address target) { emit(0xE8); Address source = pc_ + 4; intptr_t displacement = target - source; - ASSERT(is_int32(displacement)); + DCHECK(is_int32(displacement)); emitl(static_cast<int32_t>(displacement)); } @@ -799,7 +755,7 @@ void Assembler::cmovq(Condition cc, Register dst, Register src) { } // No need to check CpuInfo for CMOV support, it's a required part of the // 64-bit architecture. - ASSERT(cc >= 0); // Use mov for unconditional moves. + DCHECK(cc >= 0); // Use mov for unconditional moves. EnsureSpace ensure_space(this); // Opcode: REX.W 0f 40 + cc /r. emit_rex_64(dst, src); @@ -815,7 +771,7 @@ void Assembler::cmovq(Condition cc, Register dst, const Operand& src) { } else if (cc == never) { return; } - ASSERT(cc >= 0); + DCHECK(cc >= 0); EnsureSpace ensure_space(this); // Opcode: REX.W 0f 40 + cc /r. emit_rex_64(dst, src); @@ -831,7 +787,7 @@ void Assembler::cmovl(Condition cc, Register dst, Register src) { } else if (cc == never) { return; } - ASSERT(cc >= 0); + DCHECK(cc >= 0); EnsureSpace ensure_space(this); // Opcode: 0f 40 + cc /r. emit_optional_rex_32(dst, src); @@ -847,7 +803,7 @@ void Assembler::cmovl(Condition cc, Register dst, const Operand& src) { } else if (cc == never) { return; } - ASSERT(cc >= 0); + DCHECK(cc >= 0); EnsureSpace ensure_space(this); // Opcode: 0f 40 + cc /r. emit_optional_rex_32(dst, src); @@ -858,7 +814,7 @@ void Assembler::cmovl(Condition cc, Register dst, const Operand& src) { void Assembler::cmpb_al(Immediate imm8) { - ASSERT(is_int8(imm8.value_) || is_uint8(imm8.value_)); + DCHECK(is_int8(imm8.value_) || is_uint8(imm8.value_)); EnsureSpace ensure_space(this); emit(0x3c); emit(imm8.value_); @@ -936,6 +892,14 @@ void Assembler::emit_idiv(Register src, int size) { } +void Assembler::emit_div(Register src, int size) { + EnsureSpace ensure_space(this); + emit_rex(src, size); + emit(0xF7); + emit_modrm(0x6, src); +} + + void Assembler::emit_imul(Register src, int size) { EnsureSpace ensure_space(this); emit_rex(src, size); @@ -1007,12 +971,12 @@ void Assembler::j(Condition cc, Label* L, Label::Distance distance) { return; } EnsureSpace ensure_space(this); - ASSERT(is_uint4(cc)); + DCHECK(is_uint4(cc)); if (L->is_bound()) { const int short_size = 2; const int long_size = 6; int offs = L->pos() - pc_offset(); - ASSERT(offs <= 0); + DCHECK(offs <= 0); // Determine whether we can use 1-byte offsets for backwards branches, // which have a max range of 128 bytes. 
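That comment is the crux of jump emission: displacements are measured from the end of the instruction, and the encoder only falls back to the 6-byte conditional (or 5-byte unconditional) form when the 8-bit displacement of the short form cannot reach. A sketch for an unconditional backwards jmp (the helper name is assumed; V8's jmp() additionally handles unbound labels):

    #include <cstdint>
    #include <vector>

    // Emit a jmp to 'target', a position already generated (offs <= 0),
    // preferring the 2-byte rel8 form when the displacement reaches.
    static void EmitJmp(std::vector<uint8_t>* code, int target) {
      int offs = target - static_cast<int>(code->size());
      if (offs - 2 >= -128) {           // rel8, measured from end of 2-byte insn
        code->push_back(0xEB);          // jmp rel8
        code->push_back(static_cast<uint8_t>(offs - 2));
      } else {
        code->push_back(0xE9);          // jmp rel32
        int32_t disp = offs - 5;        // measured from end of 5-byte insn
        for (int i = 0; i < 4; i++) {   // little-endian immediate
          code->push_back(static_cast<uint8_t>((disp >> (8 * i)) & 0xFF));
        }
      }
    }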
@@ -1038,7 +1002,7 @@ void Assembler::j(Condition cc, Label* L, Label::Distance distance) { byte disp = 0x00; if (L->is_near_linked()) { int offset = L->near_link_pos() - pc_offset(); - ASSERT(is_int8(offset)); + DCHECK(is_int8(offset)); disp = static_cast<byte>(offset & 0xFF); } L->link_to(pc_offset(), Label::kNear); @@ -1050,7 +1014,7 @@ void Assembler::j(Condition cc, Label* L, Label::Distance distance) { emitl(L->pos()); L->link_to(pc_offset() - sizeof(int32_t)); } else { - ASSERT(L->is_unused()); + DCHECK(L->is_unused()); emit(0x0F); emit(0x80 | cc); int32_t current = pc_offset(); @@ -1061,9 +1025,9 @@ void Assembler::j(Condition cc, Label* L, Label::Distance distance) { void Assembler::j(Condition cc, Address entry, RelocInfo::Mode rmode) { - ASSERT(RelocInfo::IsRuntimeEntry(rmode)); + DCHECK(RelocInfo::IsRuntimeEntry(rmode)); EnsureSpace ensure_space(this); - ASSERT(is_uint4(cc)); + DCHECK(is_uint4(cc)); emit(0x0F); emit(0x80 | cc); emit_runtime_entry(entry, rmode); @@ -1074,7 +1038,7 @@ void Assembler::j(Condition cc, Handle<Code> target, RelocInfo::Mode rmode) { EnsureSpace ensure_space(this); - ASSERT(is_uint4(cc)); + DCHECK(is_uint4(cc)); // 0000 1111 1000 tttn #32-bit disp. emit(0x0F); emit(0x80 | cc); @@ -1088,7 +1052,7 @@ void Assembler::jmp(Label* L, Label::Distance distance) { const int long_size = sizeof(int32_t); if (L->is_bound()) { int offs = L->pos() - pc_offset() - 1; - ASSERT(offs <= 0); + DCHECK(offs <= 0); if (is_int8(offs - short_size) && !predictable_code_size()) { // 1110 1011 #8-bit disp. emit(0xEB); @@ -1103,7 +1067,7 @@ void Assembler::jmp(Label* L, Label::Distance distance) { byte disp = 0x00; if (L->is_near_linked()) { int offset = L->near_link_pos() - pc_offset(); - ASSERT(is_int8(offset)); + DCHECK(is_int8(offset)); disp = static_cast<byte>(offset & 0xFF); } L->link_to(pc_offset(), Label::kNear); @@ -1115,7 +1079,7 @@ void Assembler::jmp(Label* L, Label::Distance distance) { L->link_to(pc_offset() - long_size); } else { // 1110 1001 #32-bit disp. - ASSERT(L->is_unused()); + DCHECK(L->is_unused()); emit(0xE9); int32_t current = pc_offset(); emitl(current); @@ -1133,9 +1097,9 @@ void Assembler::jmp(Handle<Code> target, RelocInfo::Mode rmode) { void Assembler::jmp(Address entry, RelocInfo::Mode rmode) { - ASSERT(RelocInfo::IsRuntimeEntry(rmode)); + DCHECK(RelocInfo::IsRuntimeEntry(rmode)); EnsureSpace ensure_space(this); - ASSERT(RelocInfo::IsRuntimeEntry(rmode)); + DCHECK(RelocInfo::IsRuntimeEntry(rmode)); emit(0xE9); emit_runtime_entry(entry, rmode); } @@ -1174,7 +1138,7 @@ void Assembler::load_rax(void* value, RelocInfo::Mode mode) { emit(0xA1); emitp(value, mode); } else { - ASSERT(kPointerSize == kInt32Size); + DCHECK(kPointerSize == kInt32Size); emit(0xA1); emitp(value, mode); // In 64-bit mode, need to zero extend the operand to 8 bytes. 
@@ -1306,7 +1270,7 @@ void Assembler::emit_mov(Register dst, Immediate value, int size) { emit(0xC7); emit_modrm(0x0, dst); } else { - ASSERT(size == kInt32Size); + DCHECK(size == kInt32Size); emit(0xB8 + dst.low_bits()); } emit(value); @@ -1352,13 +1316,13 @@ void Assembler::movl(const Operand& dst, Label* src) { emit_operand(0, dst); if (src->is_bound()) { int offset = src->pos() - pc_offset() - sizeof(int32_t); - ASSERT(offset <= 0); + DCHECK(offset <= 0); emitl(offset); } else if (src->is_linked()) { emitl(src->pos()); src->link_to(pc_offset() - sizeof(int32_t)); } else { - ASSERT(src->is_unused()); + DCHECK(src->is_unused()); int32_t current = pc_offset(); emitl(current); src->link_to(current); @@ -1429,6 +1393,17 @@ void Assembler::emit_movzxb(Register dst, const Operand& src, int size) { } +void Assembler::emit_movzxb(Register dst, Register src, int size) { + EnsureSpace ensure_space(this); + // 32 bit operations zero the top 32 bits of 64 bit registers. Therefore + // there is no need to make this a 64 bit operation. + emit_optional_rex_32(dst, src); + emit(0x0F); + emit(0xB6); + emit_modrm(dst, src); +} + + void Assembler::emit_movzxw(Register dst, const Operand& src, int size) { EnsureSpace ensure_space(this); // 32 bit operations zero the top 32 bits of 64 bit registers. Therefore @@ -1660,7 +1635,7 @@ void Assembler::pushfq() { void Assembler::ret(int imm16) { EnsureSpace ensure_space(this); - ASSERT(is_uint16(imm16)); + DCHECK(is_uint16(imm16)); if (imm16 == 0) { emit(0xC3); } else { @@ -1677,7 +1652,7 @@ void Assembler::setcc(Condition cc, Register reg) { return; } EnsureSpace ensure_space(this); - ASSERT(is_uint4(cc)); + DCHECK(is_uint4(cc)); if (!reg.is_byte_register()) { // Use x64 byte registers, where different. emit_rex_32(reg); } @@ -1723,6 +1698,14 @@ void Assembler::emit_xchg(Register dst, Register src, int size) { } +void Assembler::emit_xchg(Register dst, const Operand& src, int size) { + EnsureSpace ensure_space(this); + emit_rex(dst, src, size); + emit(0x87); + emit_operand(dst, src); +} + + void Assembler::store_rax(void* dst, RelocInfo::Mode mode) { EnsureSpace ensure_space(this); if (kPointerSize == kInt64Size) { @@ -1730,7 +1713,7 @@ void Assembler::store_rax(void* dst, RelocInfo::Mode mode) { emit(0xA3); emitp(dst, mode); } else { - ASSERT(kPointerSize == kInt32Size); + DCHECK(kPointerSize == kInt32Size); emit(0xA3); emitp(dst, mode); // In 64-bit mode, need to zero extend the operand to 8 bytes. 
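The three-way bound/linked/unused handling in movl() and friends is the whole label machinery: a forward reference to an unbound label stores, in its own 32-bit patch slot, the position of the previous unresolved reference, and bind_to() later walks that in-code chain replacing each slot with a real displacement. A simplified model (V8 additionally keeps 8-bit "near" links, omitted here):

    #include <cstdint>
    #include <cstring>
    #include <vector>

    struct Label { int pos = -1; bool linked = false; };

    // Reserve a 32-bit slot referencing 'l'; while 'l' is unbound, the slot
    // temporarily stores the previous fixup position (-1 ends the chain).
    static void EmitBranchSlot(std::vector<uint8_t>* code, Label* l) {
      int32_t prev = l->linked ? l->pos : -1;
      l->pos = static_cast<int>(code->size());
      l->linked = true;
      code->resize(code->size() + 4);
      memcpy(&(*code)[l->pos], &prev, 4);
    }

    // Bind the label: walk the chain, patching real pc-relative displacements.
    static void Bind(std::vector<uint8_t>* code, Label* l, int target) {
      int cur = l->linked ? l->pos : -1;
      while (cur != -1) {
        int32_t next;
        memcpy(&next, &(*code)[cur], 4);
        int32_t disp = target - (cur + 4);  // relative to end of the slot
        memcpy(&(*code)[cur], &disp, 4);
        cur = next;
      }
      l->linked = false;
      l->pos = target;
    }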
@@ -1764,7 +1747,7 @@ void Assembler::testb(Register dst, Register src) { void Assembler::testb(Register reg, Immediate mask) { - ASSERT(is_int8(mask.value_) || is_uint8(mask.value_)); + DCHECK(is_int8(mask.value_) || is_uint8(mask.value_)); EnsureSpace ensure_space(this); if (reg.is(rax)) { emit(0xA8); @@ -1782,7 +1765,7 @@ void Assembler::testb(Register reg, Immediate mask) { void Assembler::testb(const Operand& op, Immediate mask) { - ASSERT(is_int8(mask.value_) || is_uint8(mask.value_)); + DCHECK(is_int8(mask.value_) || is_uint8(mask.value_)); EnsureSpace ensure_space(this); emit_optional_rex_32(rax, op); emit(0xF6); @@ -1930,7 +1913,7 @@ void Assembler::fstp_d(const Operand& adr) { void Assembler::fstp(int index) { - ASSERT(is_uint3(index)); + DCHECK(is_uint3(index)); EnsureSpace ensure_space(this); emit_farith(0xDD, 0xD8, index); } @@ -1961,7 +1944,7 @@ void Assembler::fistp_s(const Operand& adr) { void Assembler::fisttp_s(const Operand& adr) { - ASSERT(IsEnabled(SSE3)); + DCHECK(IsEnabled(SSE3)); EnsureSpace ensure_space(this); emit_optional_rex_32(adr); emit(0xDB); @@ -1970,7 +1953,7 @@ void Assembler::fisttp_s(const Operand& adr) { void Assembler::fisttp_d(const Operand& adr) { - ASSERT(IsEnabled(SSE3)); + DCHECK(IsEnabled(SSE3)); EnsureSpace ensure_space(this); emit_optional_rex_32(adr); emit(0xDD); @@ -2223,14 +2206,15 @@ void Assembler::fnclex() { void Assembler::sahf() { // TODO(X64): Test for presence. Not all 64-bit Intel CPUs have sahf // in 64-bit mode. Test CPUID. + DCHECK(IsEnabled(SAHF)); EnsureSpace ensure_space(this); emit(0x9E); } void Assembler::emit_farith(int b1, int b2, int i) { - ASSERT(is_uint8(b1) && is_uint8(b2)); // wrong opcode - ASSERT(is_uint3(i)); // illegal stack offset + DCHECK(is_uint8(b1) && is_uint8(b2)); // wrong opcode + DCHECK(is_uint3(i)); // illegal stack offset emit(b1); emit(b2 + i); } @@ -2466,8 +2450,8 @@ void Assembler::movdqu(XMMRegister dst, const Operand& src) { void Assembler::extractps(Register dst, XMMRegister src, byte imm8) { - ASSERT(IsEnabled(SSE4_1)); - ASSERT(is_uint8(imm8)); + DCHECK(IsEnabled(SSE4_1)); + DCHECK(is_uint8(imm8)); EnsureSpace ensure_space(this); emit(0x66); emit_optional_rex_32(src, dst); @@ -2527,7 +2511,7 @@ void Assembler::movaps(XMMRegister dst, XMMRegister src) { void Assembler::shufps(XMMRegister dst, XMMRegister src, byte imm8) { - ASSERT(is_uint8(imm8)); + DCHECK(is_uint8(imm8)); EnsureSpace ensure_space(this); emit_optional_rex_32(src, dst); emit(0x0F); @@ -2826,6 +2810,16 @@ void Assembler::sqrtsd(XMMRegister dst, XMMRegister src) { } +void Assembler::sqrtsd(XMMRegister dst, const Operand& src) { + EnsureSpace ensure_space(this); + emit(0xF2); + emit_optional_rex_32(dst, src); + emit(0x0F); + emit(0x51); + emit_sse_operand(dst, src); +} + + void Assembler::ucomisd(XMMRegister dst, XMMRegister src) { EnsureSpace ensure_space(this); emit(0x66); @@ -2859,7 +2853,7 @@ void Assembler::cmpltsd(XMMRegister dst, XMMRegister src) { void Assembler::roundsd(XMMRegister dst, XMMRegister src, Assembler::RoundingMode mode) { - ASSERT(IsEnabled(SSE4_1)); + DCHECK(IsEnabled(SSE4_1)); EnsureSpace ensure_space(this); emit(0x66); emit_optional_rex_32(dst, src); @@ -2927,12 +2921,11 @@ void Assembler::dd(uint32_t data) { // Relocation information implementations. void Assembler::RecordRelocInfo(RelocInfo::Mode rmode, intptr_t data) { - ASSERT(!RelocInfo::IsNone(rmode)); - if (rmode == RelocInfo::EXTERNAL_REFERENCE) { - // Don't record external references unless the heap will be serialized.
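The new Operand overload of sqrtsd() follows the encoding shared by the scalar-double SSE2 instructions in this file: the mandatory F2 prefix, an optional REX byte, the 0F-escaped opcode, then the ModR/M-encoded operand. For a register-register form on xmm0..xmm7, where no REX byte is needed, that is just four bytes (an illustrative emitter, not V8's):

    #include <cstdint>
    #include <vector>

    // F2 0F 51 /r : sqrtsd xmm_dst, xmm_src (no REX needed for xmm0..xmm7).
    static void EmitSqrtsd(std::vector<uint8_t>* code, int dst, int src) {
      code->push_back(0xF2);                     // scalar-double prefix
      code->push_back(0x0F);
      code->push_back(0x51);                     // sqrtsd opcode
      code->push_back(static_cast<uint8_t>(
          0xC0 | (dst << 3) | src));             // mod=11, reg=dst, rm=src
    }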
- if (!Serializer::enabled(isolate()) && !emit_debug_code()) { - return; - } + DCHECK(!RelocInfo::IsNone(rmode)); + // Don't record external references unless the heap will be serialized. + if (rmode == RelocInfo::EXTERNAL_REFERENCE && + !serializer_enabled() && !emit_debug_code()) { + return; } else if (rmode == RelocInfo::CODE_AGE_SEQUENCE) { // Don't record pseudo relocation info for code age sequence mode. return; @@ -2966,14 +2959,14 @@ void Assembler::RecordComment(const char* msg, bool force) { Handle<ConstantPoolArray> Assembler::NewConstantPool(Isolate* isolate) { // No out-of-line constant pool support. - ASSERT(!FLAG_enable_ool_constant_pool); + DCHECK(!FLAG_enable_ool_constant_pool); return isolate->factory()->empty_constant_pool_array(); } void Assembler::PopulateConstantPool(ConstantPoolArray* constant_pool) { // No out-of-line constant pool support. - ASSERT(!FLAG_enable_ool_constant_pool); + DCHECK(!FLAG_enable_ool_constant_pool); return; } diff --git a/deps/v8/src/x64/assembler-x64.h b/deps/v8/src/x64/assembler-x64.h index 685d46c094a..3896f8923da 100644 --- a/deps/v8/src/x64/assembler-x64.h +++ b/deps/v8/src/x64/assembler-x64.h @@ -37,7 +37,7 @@ #ifndef V8_X64_ASSEMBLER_X64_H_ #define V8_X64_ASSEMBLER_X64_H_ -#include "serialize.h" +#include "src/serialize.h" namespace v8 { namespace internal { @@ -84,13 +84,13 @@ struct Register { } static Register FromAllocationIndex(int index) { - ASSERT(index >= 0 && index < kMaxNumAllocatableRegisters); + DCHECK(index >= 0 && index < kMaxNumAllocatableRegisters); Register result = { kRegisterCodeByAllocationIndex[index] }; return result; } static const char* AllocationIndexToString(int index) { - ASSERT(index >= 0 && index < kMaxNumAllocatableRegisters); + DCHECK(index >= 0 && index < kMaxNumAllocatableRegisters); const char* const names[] = { "rax", "rbx", @@ -116,7 +116,7 @@ struct Register { // rax, rbx, rcx and rdx are byte registers, the rest are not. bool is_byte_register() const { return code_ <= 3; } int code() const { - ASSERT(is_valid()); + DCHECK(is_valid()); return code_; } int bit() const { @@ -201,18 +201,18 @@ struct XMMRegister { } static int ToAllocationIndex(XMMRegister reg) { - ASSERT(reg.code() != 0); + DCHECK(reg.code() != 0); return reg.code() - 1; } static XMMRegister FromAllocationIndex(int index) { - ASSERT(0 <= index && index < kMaxNumAllocatableRegisters); + DCHECK(0 <= index && index < kMaxNumAllocatableRegisters); XMMRegister result = { index + 1 }; return result; } static const char* AllocationIndexToString(int index) { - ASSERT(index >= 0 && index < kMaxNumAllocatableRegisters); + DCHECK(index >= 0 && index < kMaxNumAllocatableRegisters); const char* const names[] = { "xmm1", "xmm2", @@ -234,15 +234,15 @@ struct XMMRegister { } static XMMRegister from_code(int code) { - ASSERT(code >= 0); - ASSERT(code < kMaxNumRegisters); + DCHECK(code >= 0); + DCHECK(code < kMaxNumRegisters); XMMRegister r = { code }; return r; } bool is_valid() const { return 0 <= code_ && code_ < kMaxNumRegisters; } bool is(XMMRegister reg) const { return code_ == reg.code_; } int code() const { - ASSERT(is_valid()); + DCHECK(is_valid()); return code_; } @@ -326,8 +326,8 @@ inline Condition NegateCondition(Condition cc) { } -// Corresponds to transposing the operands of a comparison. -inline Condition ReverseCondition(Condition cc) { +// Commute a condition such that {a cond b == b cond' a}.
+inline Condition CommuteCondition(Condition cc) { switch (cc) { case below: return above; @@ -347,7 +347,7 @@ inline Condition ReverseCondition(Condition cc) { return greater_equal; default: return cc; - }; + } } @@ -358,7 +358,7 @@ class Immediate BASE_EMBEDDED { public: explicit Immediate(int32_t value) : value_(value) {} explicit Immediate(Smi* value) { - ASSERT(SmiValuesAre31Bits()); // Only available for 31-bit SMI. + DCHECK(SmiValuesAre31Bits()); // Only available for 31-bit SMI. value_ = static_cast<int32_t>(reinterpret_cast<intptr_t>(value)); } @@ -437,100 +437,27 @@ class Operand BASE_EMBEDDED { }; -// CpuFeatures keeps track of which features are supported by the target CPU. -// Supported features must be enabled by a CpuFeatureScope before use. -// Example: -// if (assembler->IsSupported(SSE3)) { -// CpuFeatureScope fscope(assembler, SSE3); -// // Generate SSE3 floating point code. -// } else { -// // Generate standard SSE2 floating point code. -// } -class CpuFeatures : public AllStatic { - public: - // Detect features of the target CPU. Set safe defaults if the serializer - // is enabled (snapshots must be portable). - static void Probe(bool serializer_enabled); - - // Check whether a feature is supported by the target CPU. - static bool IsSupported(CpuFeature f) { - if (Check(f, cross_compile_)) return true; - ASSERT(initialized_); - if (f == SSE3 && !FLAG_enable_sse3) return false; - if (f == SSE4_1 && !FLAG_enable_sse4_1) return false; - if (f == CMOV && !FLAG_enable_cmov) return false; - if (f == SAHF && !FLAG_enable_sahf) return false; - return Check(f, supported_); - } - - static bool IsSafeForSnapshot(Isolate* isolate, CpuFeature f) { - return Check(f, cross_compile_) || - (IsSupported(f) && - !(Serializer::enabled(isolate) && - Check(f, found_by_runtime_probing_only_))); - } - - static bool VerifyCrossCompiling() { - return cross_compile_ == 0; - } - - static bool VerifyCrossCompiling(CpuFeature f) { - uint64_t mask = flag2set(f); - return cross_compile_ == 0 || - (cross_compile_ & mask) == mask; - } - - static bool SupportsCrankshaft() { return true; } - - private: - static bool Check(CpuFeature f, uint64_t set) { - return (set & flag2set(f)) != 0; - } - - static uint64_t flag2set(CpuFeature f) { - return static_cast<uint64_t>(1) << f; - } - - // Safe defaults include CMOV for X64. It is always available, if - // anyone checks, but they shouldn't need to check. 
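The rename from ReverseCondition to CommuteCondition is more than cosmetic: commuting swaps the operands of the comparison, which is not the same as negating it, and the old name invited exactly that confusion. In plain C++ terms:

    #include <cassert>

    int main() {
      unsigned a = 3, b = 7;
      // Commuted: below (a < b) becomes above (b > a); same truth value.
      assert((a < b) == (b > a));
      // Negated is a different operation: below becomes above_equal.
      assert((a < b) == !(a >= b));
      return 0;
    }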
- // The required user mode extensions in X64 are (from AMD64 ABI Table A.1): - // fpu, tsc, cx8, cmov, mmx, sse, sse2, fxsr, syscall - static const uint64_t kDefaultCpuFeatures = (1 << CMOV); - -#ifdef DEBUG - static bool initialized_; -#endif - static uint64_t supported_; - static uint64_t found_by_runtime_probing_only_; - - static uint64_t cross_compile_; - - friend class ExternalReference; - friend class PlatformFeatureScope; - DISALLOW_COPY_AND_ASSIGN(CpuFeatures); -}; - - -#define ASSEMBLER_INSTRUCTION_LIST(V) \ - V(add) \ - V(and) \ - V(cmp) \ - V(dec) \ - V(idiv) \ - V(imul) \ - V(inc) \ - V(lea) \ - V(mov) \ - V(movzxb) \ - V(movzxw) \ - V(neg) \ - V(not) \ - V(or) \ - V(repmovs) \ - V(sbb) \ - V(sub) \ - V(test) \ - V(xchg) \ +#define ASSEMBLER_INSTRUCTION_LIST(V) \ + V(add) \ + V(and) \ + V(cmp) \ + V(dec) \ + V(idiv) \ + V(div) \ + V(imul) \ + V(inc) \ + V(lea) \ + V(mov) \ + V(movzxb) \ + V(movzxw) \ + V(neg) \ + V(not) \ + V(or) \ + V(repmovs) \ + V(sbb) \ + V(sub) \ + V(test) \ + V(xchg) \ V(xor) @@ -592,22 +519,29 @@ class Assembler : public AssemblerBase { ConstantPoolArray* constant_pool); static inline void set_target_address_at(Address pc, ConstantPoolArray* constant_pool, - Address target); + Address target, + ICacheFlushMode icache_flush_mode = + FLUSH_ICACHE_IF_NEEDED); static inline Address target_address_at(Address pc, Code* code) { ConstantPoolArray* constant_pool = code ? code->constant_pool() : NULL; return target_address_at(pc, constant_pool); } static inline void set_target_address_at(Address pc, Code* code, - Address target) { + Address target, + ICacheFlushMode icache_flush_mode = + FLUSH_ICACHE_IF_NEEDED) { ConstantPoolArray* constant_pool = code ? code->constant_pool() : NULL; - set_target_address_at(pc, constant_pool, target); + set_target_address_at(pc, constant_pool, target, icache_flush_mode); } // Return the code target address at a call site from the return address // of that call in the instruction stream. static inline Address target_address_from_return_address(Address pc); + // Return the code target address of the patch debug break slot + inline static Address break_address_from_return_address(Address pc); + // This sets the branch destination (which is in the instruction on x64). // This is for calls and branches within generated code.
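ASSEMBLER_INSTRUCTION_LIST is an X macro: every call site supplies its own V and gets one expansion per instruction, which is how the new V(div) entry picks up declarations and definitions for each operand size without hand-written boilerplate. A cut-down illustration with a simplified emitter signature:

    // Each expansion site defines V, instantiates the list, then undefines V.
    #define INSTRUCTION_LIST(V) V(add) V(sub) V(div)

    #define DECLARE_EMITTER(name) void emit_##name(int src, int size);
    INSTRUCTION_LIST(DECLARE_EMITTER)  // declares emit_add, emit_sub, emit_div
    #undef DECLARE_EMITTER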
inline static void deserialization_set_special_target_at( @@ -619,7 +553,7 @@ class Assembler : public AssemblerBase { if (kPointerSize == kInt64Size) { return RelocInfo::NONE64; } else { - ASSERT(kPointerSize == kInt32Size); + DCHECK(kPointerSize == kInt32Size); return RelocInfo::NONE32; } } @@ -1139,6 +1073,7 @@ class Assembler : public AssemblerBase { void orpd(XMMRegister dst, XMMRegister src); void xorpd(XMMRegister dst, XMMRegister src); void sqrtsd(XMMRegister dst, XMMRegister src); + void sqrtsd(XMMRegister dst, const Operand& src); void ucomisd(XMMRegister dst, XMMRegister src); void ucomisd(XMMRegister dst, const Operand& src); @@ -1326,7 +1261,7 @@ class Assembler : public AssemblerBase { if (size == kInt64Size) { emit_rex_64(); } else { - ASSERT(size == kInt32Size); + DCHECK(size == kInt32Size); } } @@ -1335,7 +1270,7 @@ class Assembler : public AssemblerBase { if (size == kInt64Size) { emit_rex_64(p1); } else { - ASSERT(size == kInt32Size); + DCHECK(size == kInt32Size); emit_optional_rex_32(p1); } } @@ -1345,7 +1280,7 @@ class Assembler : public AssemblerBase { if (size == kInt64Size) { emit_rex_64(p1, p2); } else { - ASSERT(size == kInt32Size); + DCHECK(size == kInt32Size); emit_optional_rex_32(p1, p2); } } @@ -1371,7 +1306,7 @@ class Assembler : public AssemblerBase { // Emit a ModR/M byte with an operation subcode in the reg field and // a register in the rm_reg field. void emit_modrm(int code, Register rm_reg) { - ASSERT(is_uint3(code)); + DCHECK(is_uint3(code)); emit(0xC0 | code << 3 | rm_reg.low_bits()); } @@ -1504,6 +1439,7 @@ class Assembler : public AssemblerBase { // Divide edx:eax by lower 32 bits of src. Quotient in eax, remainder in edx // when size is 32. void emit_idiv(Register src, int size); + void emit_div(Register src, int size); // Signed multiply instructions. // rdx:rax = rax * src when size is 64 or edx:eax = eax * src when size is 32. @@ -1524,6 +1460,7 @@ class Assembler : public AssemblerBase { void emit_mov(const Operand& dst, Immediate value, int size); void emit_movzxb(Register dst, const Operand& src, int size); + void emit_movzxb(Register dst, Register src, int size); void emit_movzxw(Register dst, const Operand& src, int size); void emit_movzxw(Register dst, Register src, int size); @@ -1583,9 +1520,12 @@ class Assembler : public AssemblerBase { void emit_test(Register reg, Immediate mask, int size); void emit_test(const Operand& op, Register reg, int size); void emit_test(const Operand& op, Immediate mask, int size); + void emit_test(Register reg, const Operand& op, int size) { + return emit_test(op, reg, size); + } - // Exchange two registers void emit_xchg(Register dst, Register src, int size); + void emit_xchg(Register dst, const Operand& src, int size); void emit_xor(Register dst, Register src, int size) { if (size == kInt64Size && dst.code() == src.code()) { @@ -1643,7 +1583,7 @@ class EnsureSpace BASE_EMBEDDED { #ifdef DEBUG ~EnsureSpace() { int bytes_generated = space_before_ - assembler_->available_space(); - ASSERT(bytes_generated < assembler_->kGap); + DCHECK(bytes_generated < assembler_->kGap); } #endif diff --git a/deps/v8/src/x64/builtins-x64.cc b/deps/v8/src/x64/builtins-x64.cc index 9e3b89ac617..a18747d50d7 100644 --- a/deps/v8/src/x64/builtins-x64.cc +++ b/deps/v8/src/x64/builtins-x64.cc @@ -2,14 +2,14 @@ // Use of this source code is governed by a BSD-style license that can be // found in the LICENSE file. 
-#include "v8.h" +#include "src/v8.h" #if V8_TARGET_ARCH_X64 -#include "codegen.h" -#include "deoptimizer.h" -#include "full-codegen.h" -#include "stub-cache.h" +#include "src/codegen.h" +#include "src/deoptimizer.h" +#include "src/full-codegen.h" +#include "src/stub-cache.h" namespace v8 { namespace internal { @@ -41,7 +41,7 @@ void Builtins::Generate_Adaptor(MacroAssembler* masm, __ Push(rdi); __ PushReturnAddressFrom(kScratchRegister); } else { - ASSERT(extra_args == NO_EXTRA_ARGUMENTS); + DCHECK(extra_args == NO_EXTRA_ARGUMENTS); } // JumpToExternalReference expects rax to contain the number of arguments @@ -91,7 +91,7 @@ void Builtins::Generate_InOptimizationQueue(MacroAssembler* masm) { __ CompareRoot(rsp, Heap::kStackLimitRootIndex); __ j(above_equal, &ok); - CallRuntimePassFunction(masm, Runtime::kHiddenTryInstallOptimizedCode); + CallRuntimePassFunction(masm, Runtime::kTryInstallOptimizedCode); GenerateTailCallToReturnedCode(masm); __ bind(&ok); @@ -101,7 +101,6 @@ void Builtins::Generate_InOptimizationQueue(MacroAssembler* masm) { static void Generate_JSConstructStubHelper(MacroAssembler* masm, bool is_api_function, - bool count_constructions, bool create_memento) { // ----------- S t a t e ------------- // -- rax: number of arguments @@ -109,14 +108,8 @@ static void Generate_JSConstructStubHelper(MacroAssembler* masm, // -- rbx: allocation site or undefined // ----------------------------------- - // Should never count constructions for api objects. - ASSERT(!is_api_function || !count_constructions);\ - // Should never create mementos for api functions. - ASSERT(!is_api_function || !create_memento); - - // Should never create mementos before slack tracking is finished. - ASSERT(!count_constructions || !create_memento); + DCHECK(!is_api_function || !create_memento); // Enter a construct frame. { @@ -151,7 +144,7 @@ static void Generate_JSConstructStubHelper(MacroAssembler* masm, // rdi: constructor __ movp(rax, FieldOperand(rdi, JSFunction::kPrototypeOrInitialMapOffset)); // Will both indicate a NULL and a Smi - ASSERT(kSmiTag == 0); + DCHECK(kSmiTag == 0); __ JumpIfSmi(rax, &rt_call); // rdi: constructor // rax: initial map (if proven valid below) @@ -166,23 +159,32 @@ static void Generate_JSConstructStubHelper(MacroAssembler* masm, __ CmpInstanceType(rax, JS_FUNCTION_TYPE); __ j(equal, &rt_call); - if (count_constructions) { + if (!is_api_function) { Label allocate; + // The code below relies on these assumptions. + STATIC_ASSERT(JSFunction::kNoSlackTracking == 0); + STATIC_ASSERT(Map::ConstructionCount::kShift + + Map::ConstructionCount::kSize == 32); + // Check if slack tracking is enabled. + __ movl(rsi, FieldOperand(rax, Map::kBitField3Offset)); + __ shrl(rsi, Immediate(Map::ConstructionCount::kShift)); + __ j(zero, &allocate); // JSFunction::kNoSlackTracking // Decrease generous allocation count. - __ movp(rcx, FieldOperand(rdi, JSFunction::kSharedFunctionInfoOffset)); - __ decb(FieldOperand(rcx, - SharedFunctionInfo::kConstructionCountOffset)); - __ j(not_zero, &allocate); + __ subl(FieldOperand(rax, Map::kBitField3Offset), + Immediate(1 << Map::ConstructionCount::kShift)); + + __ cmpl(rsi, Immediate(JSFunction::kFinishSlackTracking)); + __ j(not_equal, &allocate); __ Push(rax); __ Push(rdi); __ Push(rdi); // constructor - // The call will replace the stub, so the countdown is only done once. 
- __ CallRuntime(Runtime::kHiddenFinalizeInstanceSize, 1); + __ CallRuntime(Runtime::kFinalizeInstanceSize, 1); __ Pop(rdi); __ Pop(rax); + __ xorl(rsi, rsi); // JSFunction::kNoSlackTracking __ bind(&allocate); } @@ -213,9 +215,17 @@ static void Generate_JSConstructStubHelper(MacroAssembler* masm, // rax: initial map // rbx: JSObject // rdi: start of next object (including memento if create_memento) + // rsi: slack tracking counter (non-API function case) __ leap(rcx, Operand(rbx, JSObject::kHeaderSize)); __ LoadRoot(rdx, Heap::kUndefinedValueRootIndex); - if (count_constructions) { + if (!is_api_function) { + Label no_inobject_slack_tracking; + + // Check if slack tracking is enabled. + __ cmpl(rsi, Immediate(JSFunction::kNoSlackTracking)); + __ j(equal, &no_inobject_slack_tracking); + + // Allocate object with a slack. __ movzxbp(rsi, FieldOperand(rax, Map::kPreAllocatedPropertyFieldsOffset)); __ leap(rsi, @@ -228,20 +238,21 @@ static void Generate_JSConstructStubHelper(MacroAssembler* masm, } __ InitializeFieldsWithFiller(rcx, rsi, rdx); __ LoadRoot(rdx, Heap::kOnePointerFillerMapRootIndex); - __ InitializeFieldsWithFiller(rcx, rdi, rdx); - } else if (create_memento) { + // Fill the remaining fields with one pointer filler map. + + __ bind(&no_inobject_slack_tracking); + } + if (create_memento) { __ leap(rsi, Operand(rdi, -AllocationMemento::kSize)); __ InitializeFieldsWithFiller(rcx, rsi, rdx); // Fill in memento fields if necessary. // rsi: points to the allocated but uninitialized memento. - Handle<Map> allocation_memento_map = factory->allocation_memento_map(); __ Move(Operand(rsi, AllocationMemento::kMapOffset), - allocation_memento_map); + factory->allocation_memento_map()); // Get the cell or undefined. __ movp(rdx, Operand(rsp, kPointerSize*2)); - __ movp(Operand(rsi, AllocationMemento::kAllocationSiteOffset), - rdx); + __ movp(Operand(rsi, AllocationMemento::kAllocationSiteOffset), rdx); } else { __ InitializeFieldsWithFiller(rcx, rdi, rdx); } @@ -344,13 +355,14 @@ static void Generate_JSConstructStubHelper(MacroAssembler* masm, offset = kPointerSize; } - // Must restore rdi (constructor) before calling runtime. + // Must restore rsi (context) and rdi (constructor) before calling runtime. + __ movp(rsi, Operand(rbp, StandardFrameConstants::kContextOffset)); __ movp(rdi, Operand(rsp, offset)); __ Push(rdi); if (create_memento) { - __ CallRuntime(Runtime::kHiddenNewObjectWithAllocationSite, 2); + __ CallRuntime(Runtime::kNewObjectWithAllocationSite, 2); } else { - __ CallRuntime(Runtime::kHiddenNewObject, 1); + __ CallRuntime(Runtime::kNewObject, 1); } __ movp(rbx, rax); // store result in rbx @@ -416,7 +428,7 @@ static void Generate_JSConstructStubHelper(MacroAssembler* masm, } // Store offset of return address for deoptimizer. 
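The rewritten slack tracking works because the construction counter sits in the topmost bits of the map's bit_field3 (the STATIC_ASSERT that kShift + kSize == 32 guarantees it), so a single 32-bit subl of 1 << kShift decrements the counter without disturbing the flag bits below it. A model of that packing, where the shift value is illustrative rather than the real Map constant:

    #include <cassert>
    #include <cstdint>

    // Counter packed in the top bits of bit_field3; kShift chosen so that
    // kShift + kSize == 32, as the STATIC_ASSERT in the stub requires.
    const int kShift = 29;
    const uint32_t kOne = 1u << kShift;

    int main() {
      uint32_t bit_field3 = (5u << kShift) | 0x1234;  // counter=5, flags=0x1234
      bit_field3 -= kOne;                             // the stub's single subl
      assert((bit_field3 >> kShift) == 4);            // counter decremented
      assert((bit_field3 & (kOne - 1)) == 0x1234);    // low flags untouched
      return 0;
    }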
- if (!is_api_function && !count_constructions) { + if (!is_api_function) { masm->isolate()->heap()->SetConstructStubDeoptPCOffset(masm->pc_offset()); } @@ -459,18 +471,13 @@ static void Generate_JSConstructStubHelper(MacroAssembler* masm, } -void Builtins::Generate_JSConstructStubCountdown(MacroAssembler* masm) { - Generate_JSConstructStubHelper(masm, false, true, false); -} - - void Builtins::Generate_JSConstructStubGeneric(MacroAssembler* masm) { - Generate_JSConstructStubHelper(masm, false, false, FLAG_pretenuring_call_new); + Generate_JSConstructStubHelper(masm, false, FLAG_pretenuring_call_new); } void Builtins::Generate_JSConstructStubApi(MacroAssembler* masm) { - Generate_JSConstructStubHelper(masm, true, false, false); + Generate_JSConstructStubHelper(masm, true, false); } @@ -603,7 +610,7 @@ void Builtins::Generate_JSConstructEntryTrampoline(MacroAssembler* masm) { void Builtins::Generate_CompileUnoptimized(MacroAssembler* masm) { - CallRuntimePassFunction(masm, Runtime::kHiddenCompileUnoptimized); + CallRuntimePassFunction(masm, Runtime::kCompileUnoptimized); GenerateTailCallToReturnedCode(masm); } @@ -618,7 +625,7 @@ static void CallCompileOptimized(MacroAssembler* masm, // Whether to compile in a background thread. __ Push(masm->isolate()->factory()->ToBoolean(concurrent)); - __ CallRuntime(Runtime::kHiddenCompileOptimized, 2); + __ CallRuntime(Runtime::kCompileOptimized, 2); // Restore receiver. __ Pop(rdi); } @@ -719,7 +726,7 @@ static void Generate_NotifyStubFailureHelper(MacroAssembler* masm, // stubs that tail call the runtime on deopts passing their parameters in // registers. __ Pushad(); - __ CallRuntime(Runtime::kHiddenNotifyStubFailure, 0, save_doubles); + __ CallRuntime(Runtime::kNotifyStubFailure, 0, save_doubles); __ Popad(); // Tear down internal frame. } @@ -748,7 +755,7 @@ static void Generate_NotifyDeoptimizedHelper(MacroAssembler* masm, // Pass the deoptimization type to the runtime system. __ Push(Smi::FromInt(static_cast<int>(type))); - __ CallRuntime(Runtime::kHiddenNotifyDeoptimized, 1); + __ CallRuntime(Runtime::kNotifyDeoptimized, 1); // Tear down internal frame. } @@ -821,7 +828,7 @@ void Builtins::Generate_FunctionCall(MacroAssembler* masm) { // 3a. Patch the first argument if necessary when calling a function. Label shift_arguments; __ Set(rdx, 0); // indicate regular JS_FUNCTION - { Label convert_to_object, use_global_receiver, patch_receiver; + { Label convert_to_object, use_global_proxy, patch_receiver; // Change context eagerly in case we need the global receiver. 
__ movp(rsi, FieldOperand(rdi, JSFunction::kContextOffset)); @@ -842,9 +849,9 @@ void Builtins::Generate_FunctionCall(MacroAssembler* masm) { __ JumpIfSmi(rbx, &convert_to_object, Label::kNear); __ CompareRoot(rbx, Heap::kNullValueRootIndex); - __ j(equal, &use_global_receiver); + __ j(equal, &use_global_proxy); __ CompareRoot(rbx, Heap::kUndefinedValueRootIndex); - __ j(equal, &use_global_receiver); + __ j(equal, &use_global_proxy); STATIC_ASSERT(LAST_SPEC_OBJECT_TYPE == LAST_TYPE); __ CmpObjectType(rbx, FIRST_SPEC_OBJECT_TYPE, rcx); @@ -870,10 +877,10 @@ void Builtins::Generate_FunctionCall(MacroAssembler* masm) { __ movp(rdi, args.GetReceiverOperand()); __ jmp(&patch_receiver, Label::kNear); - __ bind(&use_global_receiver); + __ bind(&use_global_proxy); __ movp(rbx, Operand(rsi, Context::SlotOffset(Context::GLOBAL_OBJECT_INDEX))); - __ movp(rbx, FieldOperand(rbx, GlobalObject::kGlobalReceiverOffset)); + __ movp(rbx, FieldOperand(rbx, GlobalObject::kGlobalProxyOffset)); __ bind(&patch_receiver); __ movp(args.GetArgumentOperand(1), rbx); @@ -1017,7 +1024,7 @@ void Builtins::Generate_FunctionApply(MacroAssembler* masm) { __ movp(rsi, FieldOperand(rdi, JSFunction::kContextOffset)); // Do not transform the receiver for strict mode functions. - Label call_to_object, use_global_receiver; + Label call_to_object, use_global_proxy; __ movp(rdx, FieldOperand(rdi, JSFunction::kSharedFunctionInfoOffset)); __ testb(FieldOperand(rdx, SharedFunctionInfo::kStrictModeByteOffset), Immediate(1 << SharedFunctionInfo::kStrictModeBitWithinByte)); @@ -1031,9 +1038,9 @@ void Builtins::Generate_FunctionApply(MacroAssembler* masm) { // Compute the receiver in sloppy mode. __ JumpIfSmi(rbx, &call_to_object, Label::kNear); __ CompareRoot(rbx, Heap::kNullValueRootIndex); - __ j(equal, &use_global_receiver); + __ j(equal, &use_global_proxy); __ CompareRoot(rbx, Heap::kUndefinedValueRootIndex); - __ j(equal, &use_global_receiver); + __ j(equal, &use_global_proxy); // If given receiver is already a JavaScript object then there's no // reason for converting it. @@ -1048,10 +1055,10 @@ void Builtins::Generate_FunctionApply(MacroAssembler* masm) { __ movp(rbx, rax); __ jmp(&push_receiver, Label::kNear); - __ bind(&use_global_receiver); + __ bind(&use_global_proxy); __ movp(rbx, Operand(rsi, Context::SlotOffset(Context::GLOBAL_OBJECT_INDEX))); - __ movp(rbx, FieldOperand(rbx, GlobalObject::kGlobalReceiverOffset)); + __ movp(rbx, FieldOperand(rbx, GlobalObject::kGlobalProxyOffset)); // Push the receiver. __ bind(&push_receiver); @@ -1059,12 +1066,17 @@ void Builtins::Generate_FunctionApply(MacroAssembler* masm) { // Copy all arguments from the array to the stack. Label entry, loop; - __ movp(rax, Operand(rbp, kIndexOffset)); + Register receiver = LoadIC::ReceiverRegister(); + Register key = LoadIC::NameRegister(); + __ movp(key, Operand(rbp, kIndexOffset)); __ jmp(&entry); __ bind(&loop); - __ movp(rdx, Operand(rbp, kArgumentsOffset)); // load arguments + __ movp(receiver, Operand(rbp, kArgumentsOffset)); // load arguments // Use inline caching to speed up access to arguments. + if (FLAG_vector_ics) { + __ Move(LoadIC::SlotRegister(), Smi::FromInt(0)); + } Handle<Code> ic = masm->isolate()->builtins()->KeyedLoadIC_Initialize(); __ Call(ic, RelocInfo::CODE_TARGET); @@ -1076,19 +1088,19 @@ void Builtins::Generate_FunctionApply(MacroAssembler* masm) { // Push the nth argument. __ Push(rax); - // Update the index on the stack and in register rax. 
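The index handling in this apply loop, a SmiAddConstant on the stored value and a SmiToInteger32 just before the call, leans on the Smi representation asserted by Immediate(Smi*) earlier in the patch: with 31-bit Smis the payload sits above a zero tag bit, so tagged values add linearly and untagging is a single arithmetic shift. A sketch with illustrative helper names:

    #include <cassert>
    #include <cstdint>

    // 31-bit Smi: payload in bits 31..1, tag bit 0 == 0.
    static int32_t SmiTag(int32_t v) {
      return static_cast<int32_t>(static_cast<uint32_t>(v) << 1);
    }
    static int32_t SmiUntag(int32_t smi) { return smi >> 1; }

    int main() {
      int32_t index = SmiTag(0);       // Smi::FromInt(0)
      index += SmiTag(1);              // SmiAddConstant: tagged values add
      assert(SmiUntag(index) == 1);    // SmiToInteger32
      return 0;
    }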
- __ movp(rax, Operand(rbp, kIndexOffset)); - __ SmiAddConstant(rax, rax, Smi::FromInt(1)); - __ movp(Operand(rbp, kIndexOffset), rax); + // Update the index on the stack and in register key. + __ movp(key, Operand(rbp, kIndexOffset)); + __ SmiAddConstant(key, key, Smi::FromInt(1)); + __ movp(Operand(rbp, kIndexOffset), key); __ bind(&entry); - __ cmpp(rax, Operand(rbp, kLimitOffset)); + __ cmpp(key, Operand(rbp, kLimitOffset)); __ j(not_equal, &loop); // Call the function. Label call_proxy; ParameterCount actual(rax); - __ SmiToInteger32(rax, rax); + __ SmiToInteger32(rax, key); __ movp(rdi, Operand(rbp, kFunctionOffset)); __ CmpObjectType(rdi, JS_FUNCTION_TYPE, rcx); __ j(not_equal, &call_proxy); @@ -1498,7 +1510,7 @@ void Builtins::Generate_OsrAfterStackCheck(MacroAssembler* masm) { __ j(above_equal, &ok); { FrameScope scope(masm, StackFrame::INTERNAL); - __ CallRuntime(Runtime::kHiddenStackGuard, 0); + __ CallRuntime(Runtime::kStackGuard, 0); } __ jmp(masm->isolate()->builtins()->OnStackReplacement(), RelocInfo::CODE_TARGET); diff --git a/deps/v8/src/x64/code-stubs-x64.cc b/deps/v8/src/x64/code-stubs-x64.cc index 546595ad419..5a30ab70a8e 100644 --- a/deps/v8/src/x64/code-stubs-x64.cc +++ b/deps/v8/src/x64/code-stubs-x64.cc @@ -2,15 +2,15 @@ // Use of this source code is governed by a BSD-style license that can be // found in the LICENSE file. -#include "v8.h" +#include "src/v8.h" #if V8_TARGET_ARCH_X64 -#include "bootstrapper.h" -#include "code-stubs.h" -#include "regexp-macro-assembler.h" -#include "stub-cache.h" -#include "runtime.h" +#include "src/bootstrapper.h" +#include "src/code-stubs.h" +#include "src/regexp-macro-assembler.h" +#include "src/runtime.h" +#include "src/stub-cache.h" namespace v8 { namespace internal { @@ -18,256 +18,212 @@ namespace internal { void FastNewClosureStub::InitializeInterfaceDescriptor( CodeStubInterfaceDescriptor* descriptor) { - static Register registers[] = { rbx }; - descriptor->register_param_count_ = 1; - descriptor->register_params_ = registers; - descriptor->deoptimization_handler_ = - Runtime::FunctionForId(Runtime::kHiddenNewClosureFromStubFailure)->entry; + Register registers[] = { rsi, rbx }; + descriptor->Initialize( + MajorKey(), ARRAY_SIZE(registers), registers, + Runtime::FunctionForId(Runtime::kNewClosureFromStubFailure)->entry); } void FastNewContextStub::InitializeInterfaceDescriptor( CodeStubInterfaceDescriptor* descriptor) { - static Register registers[] = { rdi }; - descriptor->register_param_count_ = 1; - descriptor->register_params_ = registers; - descriptor->deoptimization_handler_ = NULL; + Register registers[] = { rsi, rdi }; + descriptor->Initialize(MajorKey(), ARRAY_SIZE(registers), registers); } void ToNumberStub::InitializeInterfaceDescriptor( CodeStubInterfaceDescriptor* descriptor) { - static Register registers[] = { rax }; - descriptor->register_param_count_ = 1; - descriptor->register_params_ = registers; - descriptor->deoptimization_handler_ = NULL; + Register registers[] = { rsi, rax }; + descriptor->Initialize(MajorKey(), ARRAY_SIZE(registers), registers); } void NumberToStringStub::InitializeInterfaceDescriptor( CodeStubInterfaceDescriptor* descriptor) { - static Register registers[] = { rax }; - descriptor->register_param_count_ = 1; - descriptor->register_params_ = registers; - descriptor->deoptimization_handler_ = - Runtime::FunctionForId(Runtime::kHiddenNumberToString)->entry; + Register registers[] = { rsi, rax }; + descriptor->Initialize( + MajorKey(), ARRAY_SIZE(registers), registers, + 
Runtime::FunctionForId(Runtime::kNumberToStringRT)->entry); } void FastCloneShallowArrayStub::InitializeInterfaceDescriptor( CodeStubInterfaceDescriptor* descriptor) { - static Register registers[] = { rax, rbx, rcx }; - descriptor->register_param_count_ = 3; - descriptor->register_params_ = registers; - descriptor->deoptimization_handler_ = - Runtime::FunctionForId( - Runtime::kHiddenCreateArrayLiteralStubBailout)->entry; + Register registers[] = { rsi, rax, rbx, rcx }; + Representation representations[] = { + Representation::Tagged(), + Representation::Tagged(), + Representation::Smi(), + Representation::Tagged() }; + + descriptor->Initialize( + MajorKey(), ARRAY_SIZE(registers), registers, + Runtime::FunctionForId(Runtime::kCreateArrayLiteralStubBailout)->entry, + representations); } void FastCloneShallowObjectStub::InitializeInterfaceDescriptor( CodeStubInterfaceDescriptor* descriptor) { - static Register registers[] = { rax, rbx, rcx, rdx }; - descriptor->register_param_count_ = 4; - descriptor->register_params_ = registers; - descriptor->deoptimization_handler_ = - Runtime::FunctionForId(Runtime::kHiddenCreateObjectLiteral)->entry; + Register registers[] = { rsi, rax, rbx, rcx, rdx }; + descriptor->Initialize( + MajorKey(), ARRAY_SIZE(registers), registers, + Runtime::FunctionForId(Runtime::kCreateObjectLiteral)->entry); } void CreateAllocationSiteStub::InitializeInterfaceDescriptor( CodeStubInterfaceDescriptor* descriptor) { - static Register registers[] = { rbx, rdx }; - descriptor->register_param_count_ = 2; - descriptor->register_params_ = registers; - descriptor->deoptimization_handler_ = NULL; + Register registers[] = { rsi, rbx, rdx }; + descriptor->Initialize(MajorKey(), ARRAY_SIZE(registers), registers); } -void KeyedLoadFastElementStub::InitializeInterfaceDescriptor( +void CallFunctionStub::InitializeInterfaceDescriptor( CodeStubInterfaceDescriptor* descriptor) { - static Register registers[] = { rdx, rax }; - descriptor->register_param_count_ = 2; - descriptor->register_params_ = registers; - descriptor->deoptimization_handler_ = - FUNCTION_ADDR(KeyedLoadIC_MissFromStubFailure); + Register registers[] = {rsi, rdi}; + descriptor->Initialize(MajorKey(), ARRAY_SIZE(registers), registers); } -void KeyedLoadDictionaryElementStub::InitializeInterfaceDescriptor( +void CallConstructStub::InitializeInterfaceDescriptor( CodeStubInterfaceDescriptor* descriptor) { - static Register registers[] = { rdx, rax }; - descriptor->register_param_count_ = 2; - descriptor->register_params_ = registers; - descriptor->deoptimization_handler_ = - FUNCTION_ADDR(KeyedLoadIC_MissFromStubFailure); + // rax : number of arguments + // rbx : feedback vector + // rdx : (only if rbx is not the megamorphic symbol) slot in feedback + // vector (Smi) + // rdi : constructor function + // TODO(turbofan): So far we don't gather type feedback and hence skip the + // slot parameter, but ArrayConstructStub needs the vector to be undefined. 
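All the descriptor rewrites in this file converge on one shape: a local Register array with the context register rsi in slot zero, sized via ARRAY_SIZE and handed to a single Initialize() call along with an optional deoptimization handler, replacing the old style of assigning individual descriptor fields. The convention in miniature, with stand-in types rather than V8's:

    struct Register { int code; };
    #define ARRAY_SIZE(a) (sizeof(a) / sizeof((a)[0]))

    struct Descriptor {
      void Initialize(int major, int register_count, const Register* registers,
                      void* deopt_handler = nullptr) {
        major_ = major;
        register_count_ = register_count;
        registers_ = registers;
        deopt_handler_ = deopt_handler;
      }
      int major_;
      int register_count_;
      const Register* registers_;
      void* deopt_handler_;
    };

    void InitializeExample(Descriptor* descriptor) {
      static const Register registers[] = { {6 /* rsi */}, {0 /* rax */} };
      descriptor->Initialize(/* MajorKey() */ 0,
                             ARRAY_SIZE(registers), registers);
    }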
+ Register registers[] = {rsi, rax, rdi, rbx}; + descriptor->Initialize(MajorKey(), ARRAY_SIZE(registers), registers); } void RegExpConstructResultStub::InitializeInterfaceDescriptor( CodeStubInterfaceDescriptor* descriptor) { - static Register registers[] = { rcx, rbx, rax }; - descriptor->register_param_count_ = 3; - descriptor->register_params_ = registers; - descriptor->deoptimization_handler_ = - Runtime::FunctionForId(Runtime::kHiddenRegExpConstructResult)->entry; -} - - -void LoadFieldStub::InitializeInterfaceDescriptor( - CodeStubInterfaceDescriptor* descriptor) { - static Register registers[] = { rax }; - descriptor->register_param_count_ = 1; - descriptor->register_params_ = registers; - descriptor->deoptimization_handler_ = NULL; -} - - -void KeyedLoadFieldStub::InitializeInterfaceDescriptor( - CodeStubInterfaceDescriptor* descriptor) { - static Register registers[] = { rdx }; - descriptor->register_param_count_ = 1; - descriptor->register_params_ = registers; - descriptor->deoptimization_handler_ = NULL; -} - - -void StringLengthStub::InitializeInterfaceDescriptor( - CodeStubInterfaceDescriptor* descriptor) { - static Register registers[] = { rax, rcx }; - descriptor->register_param_count_ = 2; - descriptor->register_params_ = registers; - descriptor->deoptimization_handler_ = NULL; -} - - -void KeyedStringLengthStub::InitializeInterfaceDescriptor( - CodeStubInterfaceDescriptor* descriptor) { - static Register registers[] = { rdx, rax }; - descriptor->register_param_count_ = 2; - descriptor->register_params_ = registers; - descriptor->deoptimization_handler_ = NULL; + Register registers[] = { rsi, rcx, rbx, rax }; + descriptor->Initialize( + MajorKey(), ARRAY_SIZE(registers), registers, + Runtime::FunctionForId(Runtime::kRegExpConstructResult)->entry); } -void KeyedStoreFastElementStub::InitializeInterfaceDescriptor( +void TransitionElementsKindStub::InitializeInterfaceDescriptor( CodeStubInterfaceDescriptor* descriptor) { - static Register registers[] = { rdx, rcx, rax }; - descriptor->register_param_count_ = 3; - descriptor->register_params_ = registers; - descriptor->deoptimization_handler_ = - FUNCTION_ADDR(KeyedStoreIC_MissFromStubFailure); + Register registers[] = { rsi, rax, rbx }; + descriptor->Initialize( + MajorKey(), ARRAY_SIZE(registers), registers, + Runtime::FunctionForId(Runtime::kTransitionElementsKind)->entry); } -void TransitionElementsKindStub::InitializeInterfaceDescriptor( - CodeStubInterfaceDescriptor* descriptor) { - static Register registers[] = { rax, rbx }; - descriptor->register_param_count_ = 2; - descriptor->register_params_ = registers; - descriptor->deoptimization_handler_ = - Runtime::FunctionForId(Runtime::kTransitionElementsKind)->entry; -} +const Register InterfaceDescriptor::ContextRegister() { return rsi; } static void InitializeArrayConstructorDescriptor( - CodeStubInterfaceDescriptor* descriptor, + CodeStub::Major major, CodeStubInterfaceDescriptor* descriptor, int constant_stack_parameter_count) { // register state // rax -- number of arguments // rdi -- function // rbx -- allocation site with elements kind - static Register registers_variable_args[] = { rdi, rbx, rax }; - static Register registers_no_args[] = { rdi, rbx }; + Address deopt_handler = Runtime::FunctionForId( + Runtime::kArrayConstructor)->entry; if (constant_stack_parameter_count == 0) { - descriptor->register_param_count_ = 2; - descriptor->register_params_ = registers_no_args; + Register registers[] = { rsi, rdi, rbx }; + descriptor->Initialize(major, 
ARRAY_SIZE(registers), registers, + deopt_handler, NULL, constant_stack_parameter_count, + JS_FUNCTION_STUB_MODE); } else { // stack param count needs (constructor pointer, and single argument) - descriptor->handler_arguments_mode_ = PASS_ARGUMENTS; - descriptor->stack_parameter_count_ = rax; - descriptor->register_param_count_ = 3; - descriptor->register_params_ = registers_variable_args; + Register registers[] = { rsi, rdi, rbx, rax }; + Representation representations[] = { + Representation::Tagged(), + Representation::Tagged(), + Representation::Tagged(), + Representation::Integer32() }; + descriptor->Initialize(major, ARRAY_SIZE(registers), registers, rax, + deopt_handler, representations, + constant_stack_parameter_count, + JS_FUNCTION_STUB_MODE, PASS_ARGUMENTS); } - - descriptor->hint_stack_parameter_count_ = constant_stack_parameter_count; - descriptor->function_mode_ = JS_FUNCTION_STUB_MODE; - descriptor->deoptimization_handler_ = - Runtime::FunctionForId(Runtime::kHiddenArrayConstructor)->entry; } static void InitializeInternalArrayConstructorDescriptor( - CodeStubInterfaceDescriptor* descriptor, + CodeStub::Major major, CodeStubInterfaceDescriptor* descriptor, int constant_stack_parameter_count) { // register state + // rsi -- context // rax -- number of arguments // rdi -- constructor function - static Register registers_variable_args[] = { rdi, rax }; - static Register registers_no_args[] = { rdi }; + Address deopt_handler = Runtime::FunctionForId( + Runtime::kInternalArrayConstructor)->entry; if (constant_stack_parameter_count == 0) { - descriptor->register_param_count_ = 1; - descriptor->register_params_ = registers_no_args; + Register registers[] = { rsi, rdi }; + descriptor->Initialize(major, ARRAY_SIZE(registers), registers, + deopt_handler, NULL, constant_stack_parameter_count, + JS_FUNCTION_STUB_MODE); } else { // stack param count needs (constructor pointer, and single argument) - descriptor->handler_arguments_mode_ = PASS_ARGUMENTS; - descriptor->stack_parameter_count_ = rax; - descriptor->register_param_count_ = 2; - descriptor->register_params_ = registers_variable_args; + Register registers[] = { rsi, rdi, rax }; + Representation representations[] = { + Representation::Tagged(), + Representation::Tagged(), + Representation::Integer32() }; + descriptor->Initialize(major, ARRAY_SIZE(registers), registers, rax, + deopt_handler, representations, + constant_stack_parameter_count, + JS_FUNCTION_STUB_MODE, PASS_ARGUMENTS); } - - descriptor->hint_stack_parameter_count_ = constant_stack_parameter_count; - descriptor->function_mode_ = JS_FUNCTION_STUB_MODE; - descriptor->deoptimization_handler_ = - Runtime::FunctionForId(Runtime::kHiddenInternalArrayConstructor)->entry; } void ArrayNoArgumentConstructorStub::InitializeInterfaceDescriptor( CodeStubInterfaceDescriptor* descriptor) { - InitializeArrayConstructorDescriptor(descriptor, 0); + InitializeArrayConstructorDescriptor(MajorKey(), descriptor, 0); } void ArraySingleArgumentConstructorStub::InitializeInterfaceDescriptor( CodeStubInterfaceDescriptor* descriptor) { - InitializeArrayConstructorDescriptor(descriptor, 1); + InitializeArrayConstructorDescriptor(MajorKey(), descriptor, 1); } void ArrayNArgumentsConstructorStub::InitializeInterfaceDescriptor( CodeStubInterfaceDescriptor* descriptor) { - InitializeArrayConstructorDescriptor(descriptor, -1); + InitializeArrayConstructorDescriptor(MajorKey(), descriptor, -1); } void InternalArrayNoArgumentConstructorStub::InitializeInterfaceDescriptor( CodeStubInterfaceDescriptor* 
descriptor) { - InitializeInternalArrayConstructorDescriptor(descriptor, 0); + InitializeInternalArrayConstructorDescriptor(MajorKey(), descriptor, 0); } void InternalArraySingleArgumentConstructorStub::InitializeInterfaceDescriptor( CodeStubInterfaceDescriptor* descriptor) { - InitializeInternalArrayConstructorDescriptor(descriptor, 1); + InitializeInternalArrayConstructorDescriptor(MajorKey(), descriptor, 1); } void InternalArrayNArgumentsConstructorStub::InitializeInterfaceDescriptor( CodeStubInterfaceDescriptor* descriptor) { - InitializeInternalArrayConstructorDescriptor(descriptor, -1); + InitializeInternalArrayConstructorDescriptor(MajorKey(), descriptor, -1); } void CompareNilICStub::InitializeInterfaceDescriptor( CodeStubInterfaceDescriptor* descriptor) { - static Register registers[] = { rax }; - descriptor->register_param_count_ = 1; - descriptor->register_params_ = registers; - descriptor->deoptimization_handler_ = - FUNCTION_ADDR(CompareNilIC_Miss); + Register registers[] = { rsi, rax }; + descriptor->Initialize(MajorKey(), ARRAY_SIZE(registers), registers, + FUNCTION_ADDR(CompareNilIC_Miss)); descriptor->SetMissHandler( ExternalReference(IC_Utility(IC::kCompareNilIC_Miss), isolate())); } @@ -275,42 +231,19 @@ void CompareNilICStub::InitializeInterfaceDescriptor( void ToBooleanStub::InitializeInterfaceDescriptor( CodeStubInterfaceDescriptor* descriptor) { - static Register registers[] = { rax }; - descriptor->register_param_count_ = 1; - descriptor->register_params_ = registers; - descriptor->deoptimization_handler_ = - FUNCTION_ADDR(ToBooleanIC_Miss); + Register registers[] = { rsi, rax }; + descriptor->Initialize(MajorKey(), ARRAY_SIZE(registers), registers, + FUNCTION_ADDR(ToBooleanIC_Miss)); descriptor->SetMissHandler( ExternalReference(IC_Utility(IC::kToBooleanIC_Miss), isolate())); } -void StoreGlobalStub::InitializeInterfaceDescriptor( - CodeStubInterfaceDescriptor* descriptor) { - static Register registers[] = { rdx, rcx, rax }; - descriptor->register_param_count_ = 3; - descriptor->register_params_ = registers; - descriptor->deoptimization_handler_ = - FUNCTION_ADDR(StoreIC_MissFromStubFailure); -} - - -void ElementsTransitionAndStoreStub::InitializeInterfaceDescriptor( - CodeStubInterfaceDescriptor* descriptor) { - static Register registers[] = { rax, rbx, rcx, rdx }; - descriptor->register_param_count_ = 4; - descriptor->register_params_ = registers; - descriptor->deoptimization_handler_ = - FUNCTION_ADDR(ElementsTransitionAndStoreIC_Miss); -} - - void BinaryOpICStub::InitializeInterfaceDescriptor( CodeStubInterfaceDescriptor* descriptor) { - static Register registers[] = { rdx, rax }; - descriptor->register_param_count_ = 2; - descriptor->register_params_ = registers; - descriptor->deoptimization_handler_ = FUNCTION_ADDR(BinaryOpIC_Miss); + Register registers[] = { rsi, rdx, rax }; + descriptor->Initialize(MajorKey(), ARRAY_SIZE(registers), registers, + FUNCTION_ADDR(BinaryOpIC_Miss)); descriptor->SetMissHandler( ExternalReference(IC_Utility(IC::kBinaryOpIC_Miss), isolate())); } @@ -318,21 +251,17 @@ void BinaryOpICStub::InitializeInterfaceDescriptor( void BinaryOpWithAllocationSiteStub::InitializeInterfaceDescriptor( CodeStubInterfaceDescriptor* descriptor) { - static Register registers[] = { rcx, rdx, rax }; - descriptor->register_param_count_ = 3; - descriptor->register_params_ = registers; - descriptor->deoptimization_handler_ = - FUNCTION_ADDR(BinaryOpIC_MissWithAllocationSite); + Register registers[] = { rsi, rcx, rdx, rax }; + 
descriptor->Initialize(MajorKey(), ARRAY_SIZE(registers), registers, + FUNCTION_ADDR(BinaryOpIC_MissWithAllocationSite)); } void StringAddStub::InitializeInterfaceDescriptor( CodeStubInterfaceDescriptor* descriptor) { - static Register registers[] = { rdx, rax }; - descriptor->register_param_count_ = 2; - descriptor->register_params_ = registers; - descriptor->deoptimization_handler_ = - Runtime::FunctionForId(Runtime::kHiddenStringAdd)->entry; + Register registers[] = { rsi, rdx, rax }; + descriptor->Initialize(MajorKey(), ARRAY_SIZE(registers), registers, + Runtime::FunctionForId(Runtime::kStringAdd)->entry); } @@ -340,82 +269,72 @@ void CallDescriptors::InitializeForIsolate(Isolate* isolate) { { CallInterfaceDescriptor* descriptor = isolate->call_descriptor(Isolate::ArgumentAdaptorCall); - static Register registers[] = { rdi, // JSFunction - rsi, // context - rax, // actual number of arguments - rbx, // expected number of arguments + Register registers[] = { rsi, // context + rdi, // JSFunction + rax, // actual number of arguments + rbx, // expected number of arguments }; - static Representation representations[] = { - Representation::Tagged(), // JSFunction + Representation representations[] = { Representation::Tagged(), // context + Representation::Tagged(), // JSFunction Representation::Integer32(), // actual number of arguments Representation::Integer32(), // expected number of arguments }; - descriptor->register_param_count_ = 4; - descriptor->register_params_ = registers; - descriptor->param_representations_ = representations; + descriptor->Initialize(ARRAY_SIZE(registers), registers, representations); } { CallInterfaceDescriptor* descriptor = isolate->call_descriptor(Isolate::KeyedCall); - static Register registers[] = { rsi, // context - rcx, // key + Register registers[] = { rsi, // context + rcx, // key }; - static Representation representations[] = { + Representation representations[] = { Representation::Tagged(), // context Representation::Tagged(), // key }; - descriptor->register_param_count_ = 2; - descriptor->register_params_ = registers; - descriptor->param_representations_ = representations; + descriptor->Initialize(ARRAY_SIZE(registers), registers, representations); } { CallInterfaceDescriptor* descriptor = isolate->call_descriptor(Isolate::NamedCall); - static Register registers[] = { rsi, // context - rcx, // name + Register registers[] = { rsi, // context + rcx, // name }; - static Representation representations[] = { + Representation representations[] = { Representation::Tagged(), // context Representation::Tagged(), // name }; - descriptor->register_param_count_ = 2; - descriptor->register_params_ = registers; - descriptor->param_representations_ = representations; + descriptor->Initialize(ARRAY_SIZE(registers), registers, representations); } { CallInterfaceDescriptor* descriptor = isolate->call_descriptor(Isolate::CallHandler); - static Register registers[] = { rsi, // context - rdx, // receiver + Register registers[] = { rsi, // context + rdx, // receiver }; - static Representation representations[] = { + Representation representations[] = { Representation::Tagged(), // context Representation::Tagged(), // receiver }; - descriptor->register_param_count_ = 2; - descriptor->register_params_ = registers; - descriptor->param_representations_ = representations; + descriptor->Initialize(ARRAY_SIZE(registers), registers, representations); } { CallInterfaceDescriptor* descriptor = isolate->call_descriptor(Isolate::ApiFunctionCall); - static Register registers[] = { 
rax, // callee - rbx, // call_data - rcx, // holder - rdx, // api_function_address - rsi, // context + Register registers[] = { rsi, // context + rax, // callee + rbx, // call_data + rcx, // holder + rdx, // api_function_address }; - static Representation representations[] = { + Representation representations[] = { + Representation::Tagged(), // context Representation::Tagged(), // callee Representation::Tagged(), // call_data Representation::Tagged(), // holder Representation::External(), // api_function_address - Representation::Tagged(), // context }; - descriptor->register_param_count_ = 5; - descriptor->register_params_ = registers; - descriptor->param_representations_ = representations; + descriptor->Initialize(ARRAY_SIZE(registers), registers, representations); } } @@ -428,18 +347,19 @@ void HydrogenCodeStub::GenerateLightweightMiss(MacroAssembler* masm) { isolate()->counters()->code_stubs()->Increment(); CodeStubInterfaceDescriptor* descriptor = GetInterfaceDescriptor(); - int param_count = descriptor->register_param_count_; + int param_count = descriptor->GetEnvironmentParameterCount(); { // Call the runtime system in a fresh internal frame. FrameScope scope(masm, StackFrame::INTERNAL); - ASSERT(descriptor->register_param_count_ == 0 || - rax.is(descriptor->register_params_[param_count - 1])); + DCHECK(param_count == 0 || + rax.is(descriptor->GetEnvironmentParameterRegister( + param_count - 1))); // Push arguments for (int i = 0; i < param_count; ++i) { - __ Push(descriptor->register_params_[i]); + __ Push(descriptor->GetEnvironmentParameterRegister(i)); } ExternalReference miss = descriptor->miss_handler(); - __ CallExternalReference(miss, descriptor->register_param_count_); + __ CallExternalReference(miss, param_count); } __ Ret(); @@ -480,7 +400,7 @@ class FloatingPointHelper : public AllStatic { void DoubleToIStub::Generate(MacroAssembler* masm) { Register input_reg = this->source(); Register final_result_reg = this->destination(); - ASSERT(is_truncating()); + DCHECK(is_truncating()); Label check_negative, process_64_bits, done; @@ -552,7 +472,7 @@ void DoubleToIStub::Generate(MacroAssembler* masm) { __ addp(rsp, Immediate(kDoubleSize)); } if (!final_result_reg.is(result_reg)) { - ASSERT(final_result_reg.is(rcx)); + DCHECK(final_result_reg.is(rcx)); __ movl(final_result_reg, result_reg); } __ popq(save_reg); @@ -820,7 +740,7 @@ void MathPowStub::Generate(MacroAssembler* masm) { if (exponent_type_ == ON_STACK) { // The arguments are still on the stack. __ bind(&call_runtime); - __ TailCallRuntime(Runtime::kHiddenMathPow, 2, 1); + __ TailCallRuntime(Runtime::kMathPowRT, 2, 1); // The stub is called from non-optimized code, which expects the result // as heap number in rax. @@ -833,7 +753,7 @@ void MathPowStub::Generate(MacroAssembler* masm) { __ bind(&call_runtime); // Move base to the correct argument register. Exponent is already in xmm1. 
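The descriptor hunks above all apply one refactoring: instead of poking `register_param_count_`, `register_params_` and `deoptimization_handler_` field by field, each stub now makes a single `Initialize()` call, and the context register `rsi` is listed explicitly as the first parameter of every descriptor. A minimal sketch of such a descriptor, with stand-in types for V8's `Register` and `Address` (member names here are illustrative, not V8's exact signatures):

```cpp
// Toy model of the CodeStubInterfaceDescriptor refactoring; Register and
// Address are stand-ins, and the member names are illustrative only.
#include <cassert>
#include <cstddef>
#include <vector>

typedef int Register;   // stand-in for v8::internal::Register
typedef void* Address;  // stand-in for a runtime-entry address

class DescriptorSketch {
 public:
  DescriptorSketch() : deopt_handler_(NULL) {}

  // One call replaces the old field-by-field setup. The register array is
  // copied here, so callers may pass a plain function-local array.
  void Initialize(int register_param_count, const Register* registers,
                  Address deopt_handler) {
    params_.assign(registers, registers + register_param_count);
    deopt_handler_ = deopt_handler;
  }

  int GetEnvironmentParameterCount() const {
    return static_cast<int>(params_.size());
  }

  Register GetEnvironmentParameterRegister(int i) const {
    assert(0 <= i && i < GetEnvironmentParameterCount());
    return params_[i];
  }

 private:
  std::vector<Register> params_;
  Address deopt_handler_;
};

int main() {
  const Register rsi = 6, rdx = 2, rax = 0;  // illustrative register codes
  DescriptorSketch d;
  Register registers[] = { rsi, rdx, rax };  // context always listed first
  d.Initialize(3, registers, NULL);
  assert(d.GetEnvironmentParameterCount() == 3);
  return 0;
}
```

Because `Initialize()` copies the array, the register tables no longer need to outlive the call, which is why the diff consistently drops the `static` qualifier from the `Register registers[]` declarations.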
__ movsd(xmm0, double_base); - ASSERT(double_exponent.is(xmm1)); + DCHECK(double_exponent.is(xmm1)); { AllowExternalCallThatCantCauseGC scope(masm); __ PrepareCallCFunction(2); @@ -852,30 +772,13 @@ void MathPowStub::Generate(MacroAssembler* masm) { void FunctionPrototypeStub::Generate(MacroAssembler* masm) { Label miss; - Register receiver; - if (kind() == Code::KEYED_LOAD_IC) { - // ----------- S t a t e ------------- - // -- rax : key - // -- rdx : receiver - // -- rsp[0] : return address - // ----------------------------------- - __ Cmp(rax, isolate()->factory()->prototype_string()); - __ j(not_equal, &miss); - receiver = rdx; - } else { - ASSERT(kind() == Code::LOAD_IC); - // ----------- S t a t e ------------- - // -- rax : receiver - // -- rcx : name - // -- rsp[0] : return address - // ----------------------------------- - receiver = rax; - } + Register receiver = LoadIC::ReceiverRegister(); - StubCompiler::GenerateLoadFunctionPrototype(masm, receiver, r8, r9, &miss); + NamedLoadHandlerCompiler::GenerateLoadFunctionPrototype(masm, receiver, r8, + r9, &miss); __ bind(&miss); - StubCompiler::TailCallBuiltin( - masm, BaseLoadStoreStubCompiler::MissBuiltin(kind())); + PropertyAccessCompiler::TailCallBuiltin( + masm, PropertyAccessCompiler::MissBuiltin(Code::LOAD_IC)); } @@ -1003,35 +906,35 @@ void ArgumentsAccessStub::GenerateNewSloppyFast(MacroAssembler* masm) { // rax = address of new object(s) (tagged) // rcx = argument count (untagged) - // Get the arguments boilerplate from the current native context into rdi. - Label has_mapped_parameters, copy; + // Get the arguments map from the current native context into rdi. + Label has_mapped_parameters, instantiate; __ movp(rdi, Operand(rsi, Context::SlotOffset(Context::GLOBAL_OBJECT_INDEX))); __ movp(rdi, FieldOperand(rdi, GlobalObject::kNativeContextOffset)); __ testp(rbx, rbx); __ j(not_zero, &has_mapped_parameters, Label::kNear); - const int kIndex = Context::SLOPPY_ARGUMENTS_BOILERPLATE_INDEX; + const int kIndex = Context::SLOPPY_ARGUMENTS_MAP_INDEX; __ movp(rdi, Operand(rdi, Context::SlotOffset(kIndex))); - __ jmp(&copy, Label::kNear); + __ jmp(&instantiate, Label::kNear); - const int kAliasedIndex = Context::ALIASED_ARGUMENTS_BOILERPLATE_INDEX; + const int kAliasedIndex = Context::ALIASED_ARGUMENTS_MAP_INDEX; __ bind(&has_mapped_parameters); __ movp(rdi, Operand(rdi, Context::SlotOffset(kAliasedIndex))); - __ bind(&copy); + __ bind(&instantiate); // rax = address of new object (tagged) // rbx = mapped parameter count (untagged) // rcx = argument count (untagged) - // rdi = address of boilerplate object (tagged) - // Copy the JS object part. - for (int i = 0; i < JSObject::kHeaderSize; i += kPointerSize) { - __ movp(rdx, FieldOperand(rdi, i)); - __ movp(FieldOperand(rax, i), rdx); - } + // rdi = address of arguments map (tagged) + __ movp(FieldOperand(rax, JSObject::kMapOffset), rdi); + __ LoadRoot(kScratchRegister, Heap::kEmptyFixedArrayRootIndex); + __ movp(FieldOperand(rax, JSObject::kPropertiesOffset), kScratchRegister); + __ movp(FieldOperand(rax, JSObject::kElementsOffset), kScratchRegister); // Set up the callee in-object property.
STATIC_ASSERT(Heap::kArgumentsCalleeIndex == 1); __ movp(rdx, args.GetArgumentOperand(0)); + __ AssertNotSmi(rdx); __ movp(FieldOperand(rax, JSObject::kHeaderSize + Heap::kArgumentsCalleeIndex * kPointerSize), rdx); @@ -1149,7 +1052,7 @@ void ArgumentsAccessStub::GenerateNewSloppyFast(MacroAssembler* masm) { __ bind(&runtime); __ Integer32ToSmi(rcx, rcx); __ movp(args.GetArgumentOperand(2), rcx); // Patch argument count. - __ TailCallRuntime(Runtime::kHiddenNewArgumentsFast, 3, 1); + __ TailCallRuntime(Runtime::kNewSloppyArguments, 3, 1); } @@ -1176,7 +1079,7 @@ void ArgumentsAccessStub::GenerateNewSloppySlow(MacroAssembler* masm) { __ movp(args.GetArgumentOperand(1), rdx); __ bind(&runtime); - __ TailCallRuntime(Runtime::kHiddenNewArgumentsFast, 3, 1); + __ TailCallRuntime(Runtime::kNewSloppyArguments, 3, 1); } @@ -1221,18 +1124,16 @@ void ArgumentsAccessStub::GenerateNewStrict(MacroAssembler* masm) { // Do the allocation of both objects in one go. __ Allocate(rcx, rax, rdx, rbx, &runtime, TAG_OBJECT); - // Get the arguments boilerplate from the current native context. + // Get the arguments map from the current native context. __ movp(rdi, Operand(rsi, Context::SlotOffset(Context::GLOBAL_OBJECT_INDEX))); __ movp(rdi, FieldOperand(rdi, GlobalObject::kNativeContextOffset)); - const int offset = - Context::SlotOffset(Context::STRICT_ARGUMENTS_BOILERPLATE_INDEX); + const int offset = Context::SlotOffset(Context::STRICT_ARGUMENTS_MAP_INDEX); __ movp(rdi, Operand(rdi, offset)); - // Copy the JS object part. - for (int i = 0; i < JSObject::kHeaderSize; i += kPointerSize) { - __ movp(rbx, FieldOperand(rdi, i)); - __ movp(FieldOperand(rax, i), rbx); - } + __ movp(FieldOperand(rax, JSObject::kMapOffset), rdi); + __ LoadRoot(kScratchRegister, Heap::kEmptyFixedArrayRootIndex); + __ movp(FieldOperand(rax, JSObject::kPropertiesOffset), kScratchRegister); + __ movp(FieldOperand(rax, JSObject::kElementsOffset), kScratchRegister); // Get the length (smi tagged) and set that as an in-object property too. STATIC_ASSERT(Heap::kArgumentsLengthIndex == 0); @@ -1277,7 +1178,7 @@ void ArgumentsAccessStub::GenerateNewStrict(MacroAssembler* masm) { // Do the runtime call to allocate the arguments object. __ bind(&runtime); - __ TailCallRuntime(Runtime::kHiddenNewStrictArgumentsFast, 3, 1); + __ TailCallRuntime(Runtime::kNewStrictArguments, 3, 1); } @@ -1286,7 +1187,7 @@ void RegExpExecStub::Generate(MacroAssembler* masm) { // time or if regexp entry in generated code is turned off runtime switch or // at compilation. #ifdef V8_INTERPRETED_REGEXP - __ TailCallRuntime(Runtime::kHiddenRegExpExec, 4, 1); + __ TailCallRuntime(Runtime::kRegExpExecRT, 4, 1); #else // V8_INTERPRETED_REGEXP // Stack frame on entry. @@ -1425,8 +1326,8 @@ void RegExpExecStub::Generate(MacroAssembler* masm) { // (5b) Is subject external? If yes, go to (8). __ testb(rbx, Immediate(kStringRepresentationMask)); // The underlying external string is never a short external string. - STATIC_CHECK(ExternalString::kMaxShortLength < ConsString::kMinLength); - STATIC_CHECK(ExternalString::kMaxShortLength < SlicedString::kMinLength); + STATIC_ASSERT(ExternalString::kMaxShortLength < ConsString::kMinLength); + STATIC_ASSERT(ExternalString::kMaxShortLength < SlicedString::kMinLength); __ j(not_zero, &external_string); // Go to (8) // (6) One byte sequential. Load regexp code for one byte. @@ -1679,7 +1580,7 @@ void RegExpExecStub::Generate(MacroAssembler* masm) { // Do the runtime call to execute the regexp. 
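Both arguments-object fast paths above (`GenerateNewSloppyFast` and, further down, `GenerateNewStrict`) drop the word-by-word copy of a pre-built boilerplate object in favor of writing the three object-header fields directly: the map fetched from the native context, plus the shared empty fixed array for both properties and elements. In plain C++ the before/after reduces to roughly this (structs are stand-ins for V8's tagged heap layout):

```cpp
#include <cstring>

struct Map;         // stand-in for v8::internal::Map
struct FixedArray;  // stand-in for the empty fixed array root

struct JSObjectHeader {
  Map* map;
  FixedArray* properties;
  FixedArray* elements;
};

// Old scheme: memcpy the header of a boilerplate arguments object.
void InitFromBoilerplate(JSObjectHeader* obj, const JSObjectHeader* boiler) {
  std::memcpy(obj, boiler, sizeof(*obj));
}

// New scheme: no boilerplate object exists; store the arguments *map* from
// the native context and the shared empty FixedArray, mirroring the three
// movp stores emitted by the patched stubs.
void InitFromMap(JSObjectHeader* obj, Map* arguments_map, FixedArray* empty) {
  obj->map = arguments_map;
  obj->properties = empty;
  obj->elements = empty;
}
```

This is also why the context constants are renamed from `*_BOILERPLATE_INDEX` to `*_MAP_INDEX`: the native-context slot now holds a map rather than a template object.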
__ bind(&runtime); - __ TailCallRuntime(Runtime::kHiddenRegExpExec, 4, 1); + __ TailCallRuntime(Runtime::kRegExpExecRT, 4, 1); // Deferred code for string handling. // (7) Not a long external string? If yes, go to (10). @@ -1731,8 +1632,8 @@ void RegExpExecStub::Generate(MacroAssembler* masm) { static int NegativeComparisonResult(Condition cc) { - ASSERT(cc != equal); - ASSERT((cc == less) || (cc == less_equal) + DCHECK(cc != equal); + DCHECK((cc == less) || (cc == less_equal) || (cc == greater) || (cc == greater_equal)); return (cc == greater || cc == greater_equal) ? LESS : GREATER; } @@ -1922,7 +1823,7 @@ void ICCompareStub::GenerateGeneric(MacroAssembler* masm) { // If one of the numbers was NaN, then the result is always false. // The cc is never not-equal. __ bind(&unordered); - ASSERT(cc != not_equal); + DCHECK(cc != not_equal); if (cc == less || cc == less_equal) { __ Set(rax, 1); } else { @@ -2204,16 +2105,17 @@ static void EmitWrapCase(MacroAssembler* masm, } -void CallFunctionStub::Generate(MacroAssembler* masm) { +static void CallFunctionNoFeedback(MacroAssembler* masm, + int argc, bool needs_checks, + bool call_as_method) { // rdi : the function to call // wrap_and_call can only be true if we are compiling a monomorphic method. Isolate* isolate = masm->isolate(); Label slow, non_function, wrap, cont; - int argc = argc_; StackArgumentsAccessor args(rsp, argc); - if (NeedsChecks()) { + if (needs_checks) { // Check that the function really is a JavaScript function. __ JumpIfSmi(rdi, &non_function); @@ -2225,15 +2127,15 @@ void CallFunctionStub::Generate(MacroAssembler* masm) { // Fast-case: Just invoke the function. ParameterCount actual(argc); - if (CallAsMethod()) { - if (NeedsChecks()) { + if (call_as_method) { + if (needs_checks) { EmitContinueIfStrictOrNative(masm, &cont); } // Load the receiver from the stack. __ movp(rax, args.GetReceiverOperand()); - if (NeedsChecks()) { + if (needs_checks) { __ JumpIfSmi(rax, &wrap); __ CmpObjectType(rax, FIRST_SPEC_OBJECT_TYPE, rcx); @@ -2247,19 +2149,24 @@ void CallFunctionStub::Generate(MacroAssembler* masm) { __ InvokeFunction(rdi, actual, JUMP_FUNCTION, NullCallWrapper()); - if (NeedsChecks()) { + if (needs_checks) { // Slow-case: Non-function called. 
__ bind(&slow); EmitSlowCase(isolate, masm, &args, argc, &non_function); } - if (CallAsMethod()) { + if (call_as_method) { __ bind(&wrap); EmitWrapCase(masm, &args, &cont); } } +void CallFunctionStub::Generate(MacroAssembler* masm) { + CallFunctionNoFeedback(masm, argc_, NeedsChecks(), CallAsMethod()); +} + + void CallConstructStub::Generate(MacroAssembler* masm) { // rax : number of arguments // rbx : feedback vector @@ -2334,6 +2241,47 @@ static void EmitLoadTypeFeedbackVector(MacroAssembler* masm, Register vector) { } +void CallIC_ArrayStub::Generate(MacroAssembler* masm) { + // rdi - function + // rdx - slot id (as integer) + Label miss; + int argc = state_.arg_count(); + ParameterCount actual(argc); + + EmitLoadTypeFeedbackVector(masm, rbx); + __ SmiToInteger32(rdx, rdx); + + __ LoadGlobalFunction(Context::ARRAY_FUNCTION_INDEX, rcx); + __ cmpp(rdi, rcx); + __ j(not_equal, &miss); + + __ movp(rax, Immediate(arg_count())); + __ movp(rcx, FieldOperand(rbx, rdx, times_pointer_size, + FixedArray::kHeaderSize)); + // Verify that ecx contains an AllocationSite + Factory* factory = masm->isolate()->factory(); + __ Cmp(FieldOperand(rcx, HeapObject::kMapOffset), + factory->allocation_site_map()); + __ j(not_equal, &miss); + + __ movp(rbx, rcx); + ArrayConstructorStub stub(masm->isolate(), arg_count()); + __ TailCallStub(&stub); + + __ bind(&miss); + GenerateMiss(masm, IC::kCallIC_Customization_Miss); + + // The slow case, we need this no matter what to complete a call after a miss. + CallFunctionNoFeedback(masm, + arg_count(), + true, + CallAsMethod()); + + // Unreachable. + __ int3(); +} + + void CallICStub::Generate(MacroAssembler* masm) { // rdi - function // rbx - vector @@ -2350,7 +2298,7 @@ void CallICStub::Generate(MacroAssembler* masm) { // The checks. First, does rdi match the recorded monomorphic target? __ SmiToInteger32(rdx, rdx); - __ cmpq(rdi, FieldOperand(rbx, rdx, times_pointer_size, + __ cmpp(rdi, FieldOperand(rbx, rdx, times_pointer_size, FixedArray::kHeaderSize)); __ j(not_equal, &extra_checks_or_miss); @@ -2390,7 +2338,11 @@ void CallICStub::Generate(MacroAssembler* masm) { __ j(equal, &miss); if (!FLAG_trace_ic) { - // We are going megamorphic, and we don't want to visit the runtime. + // We are going megamorphic. If the feedback is a JSFunction, it is fine + // to handle it here. More complex cases are dealt with in the runtime. + __ AssertNotSmi(rcx); + __ CmpObjectType(rcx, JS_FUNCTION_TYPE, rcx); + __ j(not_equal, &miss); __ Move(FieldOperand(rbx, rdx, times_pointer_size, FixedArray::kHeaderSize), TypeFeedbackInfo::MegamorphicSentinel(isolate)); @@ -2399,7 +2351,7 @@ void CallICStub::Generate(MacroAssembler* masm) { // We are here because tracing is on or we are going monomorphic. __ bind(&miss); - GenerateMiss(masm); + GenerateMiss(masm, IC::kCallIC_Miss); // the slow case __ bind(&slow_start); @@ -2415,7 +2367,7 @@ void CallICStub::Generate(MacroAssembler* masm) { } -void CallICStub::GenerateMiss(MacroAssembler* masm) { +void CallICStub::GenerateMiss(MacroAssembler* masm, IC::UtilityId id) { // Get the receiver of the function from the stack; 1 ~ return address. __ movp(rcx, Operand(rsp, (state_.arg_count() + 1) * kPointerSize)); @@ -2430,7 +2382,7 @@ void CallICStub::GenerateMiss(MacroAssembler* masm) { __ Push(rdx); // Call the entry. 
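The new `CallIC_ArrayStub` above takes its Array-specific fast path only when two guards pass: the callee is the native Array function, and the feedback-vector slot for this call site holds an AllocationSite (verified via its map). A sketch of that decision logic, with simplified stand-in types (the names here are illustrative, not V8's real signatures):

```cpp
// Hypothetical simplification of the guards CallIC_ArrayStub emits.
struct HeapObject { const void* map; };

bool CanUseArrayFastPath(HeapObject* const* feedback_vector, int slot_id,
                         const void* callee, const void* array_function,
                         const void* allocation_site_map) {
  if (callee != array_function) return false;  // cmpp rdi, rcx; j not_equal
  const HeapObject* feedback = feedback_vector[slot_id];  // load vector slot
  return feedback->map == allocation_site_map;  // verify it is an AllocationSite
}
```

When either guard fails, the stub falls through to `GenerateMiss` and then to `CallFunctionNoFeedback`, the file-static helper that the old `CallFunctionStub::Generate` body was extracted into so both paths can share it.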
- ExternalReference miss = ExternalReference(IC_Utility(IC::kCallIC_Miss), + ExternalReference miss = ExternalReference(IC_Utility(id), masm->isolate()); __ CallExternalReference(miss, 4); @@ -2513,7 +2465,7 @@ void CEntryStub::Generate(MacroAssembler* masm) { __ movp(rdx, r15); // argv. __ Move(r8, ExternalReference::isolate_address(isolate())); } else { - ASSERT_EQ(2, result_size_); + DCHECK_EQ(2, result_size_); // Pass a pointer to the result location as the first argument. __ leap(rcx, StackSpaceOperand(2)); // Pass a pointer to the Arguments object as the second argument. @@ -2534,7 +2486,7 @@ void CEntryStub::Generate(MacroAssembler* masm) { #ifdef _WIN64 // If return value is on the stack, pop it to registers. if (result_size_ > 1) { - ASSERT_EQ(2, result_size_); + DCHECK_EQ(2, result_size_); // Read result values stored on stack. Result is stored // above the four argument mirror slots and the two // Arguments object slots. @@ -2784,6 +2736,13 @@ void InstanceofStub::Generate(MacroAssembler* masm) { // is and instance of the function and anything else to // indicate that the value is not an instance. + // Fixed register usage throughout the stub. + Register object = rax; // Object (lhs). + Register map = rbx; // Map of the object. + Register function = rdx; // Function (rhs). + Register prototype = rdi; // Prototype of the function. + Register scratch = rcx; + static const int kOffsetToMapCheckValue = 2; static const int kOffsetToResultValue = kPointerSize == kInt64Size ? 18 : 14; // The last 4 bytes of the instruction sequence @@ -2798,85 +2757,88 @@ void InstanceofStub::Generate(MacroAssembler* masm) { // before the offset of the hole value in the root array. static const unsigned int kWordBeforeResultValue = kPointerSize == kInt64Size ? 0x458B4906 : 0x458B4106; - // Only the inline check flag is supported on X64. - ASSERT(flags_ == kNoFlags || HasCallSiteInlineCheck()); + int extra_argument_offset = HasCallSiteInlineCheck() ? 1 : 0; - // Get the object - go slow case if it's a smi. + DCHECK_EQ(object.code(), InstanceofStub::left().code()); + DCHECK_EQ(function.code(), InstanceofStub::right().code()); + + // Get the object and function - they are always both needed. + // Go slow case if the object is a smi. Label slow; StackArgumentsAccessor args(rsp, 2 + extra_argument_offset, ARGUMENTS_DONT_CONTAIN_RECEIVER); - __ movp(rax, args.GetArgumentOperand(0)); - __ JumpIfSmi(rax, &slow); + if (!HasArgsInRegisters()) { + __ movp(object, args.GetArgumentOperand(0)); + __ movp(function, args.GetArgumentOperand(1)); + } + __ JumpIfSmi(object, &slow); // Check that the left hand is a JS object. Leave its map in rax. - __ CmpObjectType(rax, FIRST_SPEC_OBJECT_TYPE, rax); + __ CmpObjectType(object, FIRST_SPEC_OBJECT_TYPE, map); __ j(below, &slow); - __ CmpInstanceType(rax, LAST_SPEC_OBJECT_TYPE); + __ CmpInstanceType(map, LAST_SPEC_OBJECT_TYPE); __ j(above, &slow); - // Get the prototype of the function. - __ movp(rdx, args.GetArgumentOperand(1)); - // rdx is function, rax is map. - // If there is a call site cache don't look in the global cache, but do the // real lookup and update the call site cache. - if (!HasCallSiteInlineCheck()) { + if (!HasCallSiteInlineCheck() && !ReturnTrueFalseObject()) { // Look up the function and the map in the instanceof cache. 
Label miss; - __ CompareRoot(rdx, Heap::kInstanceofCacheFunctionRootIndex); + __ CompareRoot(function, Heap::kInstanceofCacheFunctionRootIndex); __ j(not_equal, &miss, Label::kNear); - __ CompareRoot(rax, Heap::kInstanceofCacheMapRootIndex); + __ CompareRoot(map, Heap::kInstanceofCacheMapRootIndex); __ j(not_equal, &miss, Label::kNear); __ LoadRoot(rax, Heap::kInstanceofCacheAnswerRootIndex); - __ ret(2 * kPointerSize); + __ ret((HasArgsInRegisters() ? 0 : 2) * kPointerSize); __ bind(&miss); } - __ TryGetFunctionPrototype(rdx, rbx, &slow, true); + // Get the prototype of the function. + __ TryGetFunctionPrototype(function, prototype, &slow, true); // Check that the function prototype is a JS object. - __ JumpIfSmi(rbx, &slow); - __ CmpObjectType(rbx, FIRST_SPEC_OBJECT_TYPE, kScratchRegister); + __ JumpIfSmi(prototype, &slow); + __ CmpObjectType(prototype, FIRST_SPEC_OBJECT_TYPE, kScratchRegister); __ j(below, &slow); __ CmpInstanceType(kScratchRegister, LAST_SPEC_OBJECT_TYPE); __ j(above, &slow); - // Register mapping: - // rax is object map. - // rdx is function. - // rbx is function prototype. + // Update the global instanceof or call site inlined cache with the current + // map and function. The cached answer will be set when it is known below. if (!HasCallSiteInlineCheck()) { - __ StoreRoot(rdx, Heap::kInstanceofCacheFunctionRootIndex); - __ StoreRoot(rax, Heap::kInstanceofCacheMapRootIndex); + __ StoreRoot(function, Heap::kInstanceofCacheFunctionRootIndex); + __ StoreRoot(map, Heap::kInstanceofCacheMapRootIndex); } else { + // The constants for the code patching are based on push instructions + // at the call site. + DCHECK(!HasArgsInRegisters()); // Get return address and delta to inlined map check. __ movq(kScratchRegister, StackOperandForReturnAddress(0)); __ subp(kScratchRegister, args.GetArgumentOperand(2)); if (FLAG_debug_code) { - __ movl(rdi, Immediate(kWordBeforeMapCheckValue)); - __ cmpl(Operand(kScratchRegister, kOffsetToMapCheckValue - 4), rdi); + __ movl(scratch, Immediate(kWordBeforeMapCheckValue)); + __ cmpl(Operand(kScratchRegister, kOffsetToMapCheckValue - 4), scratch); __ Assert(equal, kInstanceofStubUnexpectedCallSiteCacheCheck); } __ movp(kScratchRegister, Operand(kScratchRegister, kOffsetToMapCheckValue)); - __ movp(Operand(kScratchRegister, 0), rax); + __ movp(Operand(kScratchRegister, 0), map); } - __ movp(rcx, FieldOperand(rax, Map::kPrototypeOffset)); - // Loop through the prototype chain looking for the function prototype. + __ movp(scratch, FieldOperand(map, Map::kPrototypeOffset)); Label loop, is_instance, is_not_instance; __ LoadRoot(kScratchRegister, Heap::kNullValueRootIndex); __ bind(&loop); - __ cmpp(rcx, rbx); + __ cmpp(scratch, prototype); __ j(equal, &is_instance, Label::kNear); - __ cmpp(rcx, kScratchRegister); + __ cmpp(scratch, kScratchRegister); // The code at is_not_instance assumes that kScratchRegister contains a // non-zero GCable value (the null object in this case). __ j(equal, &is_not_instance, Label::kNear); - __ movp(rcx, FieldOperand(rcx, HeapObject::kMapOffset)); - __ movp(rcx, FieldOperand(rcx, Map::kPrototypeOffset)); + __ movp(scratch, FieldOperand(scratch, HeapObject::kMapOffset)); + __ movp(scratch, FieldOperand(scratch, Map::kPrototypeOffset)); __ jmp(&loop); __ bind(&is_instance); @@ -2885,12 +2847,15 @@ void InstanceofStub::Generate(MacroAssembler* masm) { // Store bitwise zero in the cache. This is a Smi in GC terms. 
STATIC_ASSERT(kSmiTag == 0); __ StoreRoot(rax, Heap::kInstanceofCacheAnswerRootIndex); + if (ReturnTrueFalseObject()) { + __ LoadRoot(rax, Heap::kTrueValueRootIndex); + } } else { // Store offset of true in the root array at the inline check site. int true_offset = 0x100 + (Heap::kTrueValueRootIndex << kPointerSizeLog2) - kRootRegisterBias; // Assert it is a 1-byte signed value. - ASSERT(true_offset >= 0 && true_offset < 0x100); + DCHECK(true_offset >= 0 && true_offset < 0x100); __ movl(rax, Immediate(true_offset)); __ movq(kScratchRegister, StackOperandForReturnAddress(0)); __ subp(kScratchRegister, args.GetArgumentOperand(2)); @@ -2900,20 +2865,26 @@ void InstanceofStub::Generate(MacroAssembler* masm) { __ cmpl(Operand(kScratchRegister, kOffsetToResultValue - 4), rax); __ Assert(equal, kInstanceofStubUnexpectedCallSiteCacheMov); } - __ Set(rax, 0); + if (!ReturnTrueFalseObject()) { + __ Set(rax, 0); + } } - __ ret((2 + extra_argument_offset) * kPointerSize); + __ ret(((HasArgsInRegisters() ? 0 : 2) + extra_argument_offset) * + kPointerSize); __ bind(&is_not_instance); if (!HasCallSiteInlineCheck()) { // We have to store a non-zero value in the cache. __ StoreRoot(kScratchRegister, Heap::kInstanceofCacheAnswerRootIndex); + if (ReturnTrueFalseObject()) { + __ LoadRoot(rax, Heap::kFalseValueRootIndex); + } } else { // Store offset of false in the root array at the inline check site. int false_offset = 0x100 + (Heap::kFalseValueRootIndex << kPointerSizeLog2) - kRootRegisterBias; // Assert it is a 1-byte signed value. - ASSERT(false_offset >= 0 && false_offset < 0x100); + DCHECK(false_offset >= 0 && false_offset < 0x100); __ movl(rax, Immediate(false_offset)); __ movq(kScratchRegister, StackOperandForReturnAddress(0)); __ subp(kScratchRegister, args.GetArgumentOperand(2)); @@ -2924,25 +2895,48 @@ void InstanceofStub::Generate(MacroAssembler* masm) { __ Assert(equal, kInstanceofStubUnexpectedCallSiteCacheMov); } } - __ ret((2 + extra_argument_offset) * kPointerSize); + __ ret(((HasArgsInRegisters() ? 0 : 2) + extra_argument_offset) * + kPointerSize); // Slow-case: Go through the JavaScript implementation. __ bind(&slow); - if (HasCallSiteInlineCheck()) { - // Remove extra value from the stack. - __ PopReturnAddressTo(rcx); - __ Pop(rax); - __ PushReturnAddressFrom(rcx); + if (!ReturnTrueFalseObject()) { + // Tail call the builtin which returns 0 or 1. + DCHECK(!HasArgsInRegisters()); + if (HasCallSiteInlineCheck()) { + // Remove extra value from the stack. + __ PopReturnAddressTo(rcx); + __ Pop(rax); + __ PushReturnAddressFrom(rcx); + } + __ InvokeBuiltin(Builtins::INSTANCE_OF, JUMP_FUNCTION); + } else { + // Call the builtin and convert 0/1 to true/false. + { + FrameScope scope(masm, StackFrame::INTERNAL); + __ Push(object); + __ Push(function); + __ InvokeBuiltin(Builtins::INSTANCE_OF, CALL_FUNCTION); + } + Label true_value, done; + __ testq(rax, rax); + __ j(zero, &true_value, Label::kNear); + __ LoadRoot(rax, Heap::kFalseValueRootIndex); + __ jmp(&done, Label::kNear); + __ bind(&true_value); + __ LoadRoot(rax, Heap::kTrueValueRootIndex); + __ bind(&done); + __ ret(((HasArgsInRegisters() ? 0 : 2) + extra_argument_offset) * + kPointerSize); } - __ InvokeBuiltin(Builtins::INSTANCE_OF, JUMP_FUNCTION); } // Passing arguments in registers is not supported. 
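Behind all the calling-convention changes in `InstanceofStub`, the core algorithm is untouched: walk the prototype chain starting from the object's map and compare each link against the function's instance prototype, stopping at null. The `cmpp`/`j`/`movp` loop between the `is_instance` and `is_not_instance` labels corresponds to this sketch (simplified object model, with V8's null sentinel modeled as `NULL`):

```cpp
#include <cstddef>

struct ObjectSketch {
  ObjectSketch* prototype;  // next link in the chain; NULL plays V8's null
};

// Follow prototype links until we hit the function's prototype (instance)
// or fall off the end of the chain (not an instance).
bool IsInstanceOf(ObjectSketch* start_prototype,
                  ObjectSketch* function_prototype) {
  for (ObjectSketch* p = start_prototype; p != NULL; p = p->prototype) {
    if (p == function_prototype) return true;
  }
  return false;
}
```

The new `ReturnTrueFalseObject()` mode only changes what is materialized in rax afterwards: the true/false heap objects instead of the 0/1 encoding patched into inlined call sites.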
-Register InstanceofStub::left() { return no_reg; } +Register InstanceofStub::left() { return rax; } -Register InstanceofStub::right() { return no_reg; } +Register InstanceofStub::right() { return rdx; } // ------------------------------------------------------------------------- @@ -3001,9 +2995,9 @@ void StringCharCodeAtGenerator::GenerateSlow( if (index_flags_ == STRING_INDEX_IS_NUMBER) { __ CallRuntime(Runtime::kNumberToIntegerMapMinusZero, 1); } else { - ASSERT(index_flags_ == STRING_INDEX_IS_ARRAY_INDEX); + DCHECK(index_flags_ == STRING_INDEX_IS_ARRAY_INDEX); // NumberToSmi discards numbers that are not exact integers. - __ CallRuntime(Runtime::kHiddenNumberToSmi, 1); + __ CallRuntime(Runtime::kNumberToSmi, 1); } if (!index_.is(rax)) { // Save the conversion result before the pop instructions below @@ -3028,7 +3022,7 @@ void StringCharCodeAtGenerator::GenerateSlow( __ Push(object_); __ Integer32ToSmi(index_, index_); __ Push(index_); - __ CallRuntime(Runtime::kHiddenStringCharCodeAt, 2); + __ CallRuntime(Runtime::kStringCharCodeAtRT, 2); if (!result_.is(rax)) { __ movp(result_, rax); } @@ -3077,50 +3071,22 @@ void StringCharFromCodeGenerator::GenerateSlow( } -void StringHelper::GenerateCopyCharactersREP(MacroAssembler* masm, - Register dest, - Register src, - Register count, - bool ascii) { - // Copy characters using rep movs of doublewords. Align destination on 4 byte - // boundary before starting rep movs. Copy remaining characters after running - // rep movs. - // Count is positive int32, dest and src are character pointers. - ASSERT(dest.is(rdi)); // rep movs destination - ASSERT(src.is(rsi)); // rep movs source - ASSERT(count.is(rcx)); // rep movs count - +void StringHelper::GenerateCopyCharacters(MacroAssembler* masm, + Register dest, + Register src, + Register count, + String::Encoding encoding) { // Nothing to do for zero characters. Label done; __ testl(count, count); __ j(zero, &done, Label::kNear); // Make count the number of bytes to copy. - if (!ascii) { + if (encoding == String::TWO_BYTE_ENCODING) { STATIC_ASSERT(2 == sizeof(uc16)); __ addl(count, count); } - // Don't enter the rep movs if there are less than 4 bytes to copy. - Label last_bytes; - __ testl(count, Immediate(~(kPointerSize - 1))); - __ j(zero, &last_bytes, Label::kNear); - - // Copy from edi to esi using rep movs instruction. - __ movl(kScratchRegister, count); - // Number of doublewords to copy. - __ shrl(count, Immediate(kPointerSizeLog2)); - __ repmovsp(); - - // Find number of bytes left. - __ movl(count, kScratchRegister); - __ andp(count, Immediate(kPointerSize - 1)); - - // Check if there are more bytes to copy. - __ bind(&last_bytes); - __ testl(count, count); - __ j(zero, &done, Label::kNear); - // Copy remaining characters. Label loop; __ bind(&loop); @@ -3340,7 +3306,7 @@ void SubStringStub::Generate(MacroAssembler* masm) { // Handle external string. // Rule out short external strings. - STATIC_CHECK(kShortExternalStringTag != 0); + STATIC_ASSERT(kShortExternalStringTag != 0); __ testb(rbx, Immediate(kShortExternalStringMask)); __ j(not_zero, &runtime); __ movp(rdi, FieldOperand(rdi, ExternalString::kResourceDataOffset)); @@ -3358,10 +3324,9 @@ void SubStringStub::Generate(MacroAssembler* masm) { // rax: result string // rcx: result string length - __ movp(r14, rsi); // esi used by following code. { // Locate character of sub string start. 
SmiIndex smi_as_index = masm->SmiToIndex(rdx, rdx, times_1); - __ leap(rsi, Operand(rdi, smi_as_index.reg, smi_as_index.scale, + __ leap(r14, Operand(rdi, smi_as_index.reg, smi_as_index.scale, SeqOneByteString::kHeaderSize - kHeapObjectTag)); } // Locate first character of result. @@ -3369,11 +3334,10 @@ void SubStringStub::Generate(MacroAssembler* masm) { // rax: result string // rcx: result length - // rdi: first character of result + // r14: first character of result // rsi: character of sub string start - // r14: original value of rsi - StringHelper::GenerateCopyCharactersREP(masm, rdi, rsi, rcx, true); - __ movp(rsi, r14); // Restore rsi. + StringHelper::GenerateCopyCharacters( + masm, rdi, r14, rcx, String::ONE_BYTE_ENCODING); __ IncrementCounter(counters->sub_string_native(), 1); __ ret(SUB_STRING_ARGUMENT_COUNT * kPointerSize); @@ -3383,10 +3347,9 @@ void SubStringStub::Generate(MacroAssembler* masm) { // rax: result string // rcx: result string length - __ movp(r14, rsi); // esi used by following code. { // Locate character of sub string start. SmiIndex smi_as_index = masm->SmiToIndex(rdx, rdx, times_2); - __ leap(rsi, Operand(rdi, smi_as_index.reg, smi_as_index.scale, + __ leap(r14, Operand(rdi, smi_as_index.reg, smi_as_index.scale, SeqOneByteString::kHeaderSize - kHeapObjectTag)); } // Locate first character of result. @@ -3395,16 +3358,15 @@ void SubStringStub::Generate(MacroAssembler* masm) { // rax: result string // rcx: result length // rdi: first character of result - // rsi: character of sub string start - // r14: original value of rsi - StringHelper::GenerateCopyCharactersREP(masm, rdi, rsi, rcx, false); - __ movp(rsi, r14); // Restore esi. + // r14: character of sub string start + StringHelper::GenerateCopyCharacters( + masm, rdi, r14, rcx, String::TWO_BYTE_ENCODING); __ IncrementCounter(counters->sub_string_native(), 1); __ ret(SUB_STRING_ARGUMENT_COUNT * kPointerSize); // Just jump to runtime to create the sub string. __ bind(&runtime); - __ TailCallRuntime(Runtime::kHiddenSubString, 3, 1); + __ TailCallRuntime(Runtime::kSubString, 3, 1); __ bind(&single_char); // rax: string @@ -3601,7 +3563,7 @@ void StringCompareStub::Generate(MacroAssembler* masm) { // Call the runtime; it returns -1 (less), 0 (equal), or 1 (greater) // tagged as a small integer. __ bind(&runtime); - __ TailCallRuntime(Runtime::kHiddenStringCompare, 2, 1); + __ TailCallRuntime(Runtime::kStringCompare, 2, 1); } @@ -3634,7 +3596,7 @@ void BinaryOpICWithAllocationSiteStub::Generate(MacroAssembler* masm) { void ICCompareStub::GenerateSmis(MacroAssembler* masm) { - ASSERT(state_ == CompareIC::SMI); + DCHECK(state_ == CompareIC::SMI); Label miss; __ JumpIfNotBothSmi(rdx, rax, &miss, Label::kNear); @@ -3658,7 +3620,7 @@ void ICCompareStub::GenerateSmis(MacroAssembler* masm) { void ICCompareStub::GenerateNumbers(MacroAssembler* masm) { - ASSERT(state_ == CompareIC::NUMBER); + DCHECK(state_ == CompareIC::NUMBER); Label generic_stub; Label unordered, maybe_undefined1, maybe_undefined2; @@ -3735,8 +3697,8 @@ void ICCompareStub::GenerateNumbers(MacroAssembler* masm) { void ICCompareStub::GenerateInternalizedStrings(MacroAssembler* masm) { - ASSERT(state_ == CompareIC::INTERNALIZED_STRING); - ASSERT(GetCondition() == equal); + DCHECK(state_ == CompareIC::INTERNALIZED_STRING); + DCHECK(GetCondition() == equal); // Registers containing left and right operands respectively. 
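The `StringHelper` rewrite above trades the `rep movs` fast path (which needed alignment setup and tail fix-up code, and pinned dest/src/count to rdi/rsi/rcx) for a simple byte-copy loop parameterized by a `String::Encoding`. Functionally it reduces to something like this (a sketch; the real helper emits assembly rather than running C++):

```cpp
#include <cstddef>
#include <cstdint>

enum Encoding { ONE_BYTE_ENCODING, TWO_BYTE_ENCODING };

void CopyCharacters(uint8_t* dest, const uint8_t* src, size_t count,
                    Encoding encoding) {
  if (count == 0) return;               // testl count, count; j zero, done
  if (encoding == TWO_BYTE_ENCODING) {
    count *= 2;                         // uc16 is two bytes: addl count, count
  }
  for (size_t i = 0; i < count; ++i) {  // the emitted label/loop body
    dest[i] = src[i];
  }
}
```

Dropping the hard rsi requirement is what lets `SubStringStub` above stop saving and restoring rsi around the copy; it now stages the source pointer in r14 and passes that directly.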
Register left = rdx; @@ -3764,7 +3726,7 @@ void ICCompareStub::GenerateInternalizedStrings(MacroAssembler* masm) { __ cmpp(left, right); // Make sure rax is non-zero. At this point input operands are // guaranteed to be non-zero. - ASSERT(right.is(rax)); + DCHECK(right.is(rax)); __ j(not_equal, &done, Label::kNear); STATIC_ASSERT(EQUAL == 0); STATIC_ASSERT(kSmiTag == 0); @@ -3778,8 +3740,8 @@ void ICCompareStub::GenerateInternalizedStrings(MacroAssembler* masm) { void ICCompareStub::GenerateUniqueNames(MacroAssembler* masm) { - ASSERT(state_ == CompareIC::UNIQUE_NAME); - ASSERT(GetCondition() == equal); + DCHECK(state_ == CompareIC::UNIQUE_NAME); + DCHECK(GetCondition() == equal); // Registers containing left and right operands respectively. Register left = rdx; @@ -3807,7 +3769,7 @@ void ICCompareStub::GenerateUniqueNames(MacroAssembler* masm) { __ cmpp(left, right); // Make sure rax is non-zero. At this point input operands are // guaranteed to be non-zero. - ASSERT(right.is(rax)); + DCHECK(right.is(rax)); __ j(not_equal, &done, Label::kNear); STATIC_ASSERT(EQUAL == 0); STATIC_ASSERT(kSmiTag == 0); @@ -3821,7 +3783,7 @@ void ICCompareStub::GenerateUniqueNames(MacroAssembler* masm) { void ICCompareStub::GenerateStrings(MacroAssembler* masm) { - ASSERT(state_ == CompareIC::STRING); + DCHECK(state_ == CompareIC::STRING); Label miss; bool equality = Token::IsEqualityOp(op_); @@ -3872,7 +3834,7 @@ void ICCompareStub::GenerateStrings(MacroAssembler* masm) { __ j(not_zero, &do_compare, Label::kNear); // Make sure rax is non-zero. At this point input operands are // guaranteed to be non-zero. - ASSERT(right.is(rax)); + DCHECK(right.is(rax)); __ ret(0); __ bind(&do_compare); } @@ -3899,7 +3861,7 @@ void ICCompareStub::GenerateStrings(MacroAssembler* masm) { if (equality) { __ TailCallRuntime(Runtime::kStringEquals, 2, 1); } else { - __ TailCallRuntime(Runtime::kHiddenStringCompare, 2, 1); + __ TailCallRuntime(Runtime::kStringCompare, 2, 1); } __ bind(&miss); @@ -3908,7 +3870,7 @@ void ICCompareStub::GenerateStrings(MacroAssembler* masm) { void ICCompareStub::GenerateObjects(MacroAssembler* masm) { - ASSERT(state_ == CompareIC::OBJECT); + DCHECK(state_ == CompareIC::OBJECT); Label miss; Condition either_smi = masm->CheckEitherSmi(rdx, rax); __ j(either_smi, &miss, Label::kNear); @@ -3918,7 +3880,7 @@ void ICCompareStub::GenerateObjects(MacroAssembler* masm) { __ CmpObjectType(rdx, JS_OBJECT_TYPE, rcx); __ j(not_equal, &miss, Label::kNear); - ASSERT(GetCondition() == equal); + DCHECK(GetCondition() == equal); __ subp(rax, rdx); __ ret(0); @@ -3978,7 +3940,7 @@ void NameDictionaryLookupStub::GenerateNegativeLookup(MacroAssembler* masm, Register properties, Handle<Name> name, Register r0) { - ASSERT(name->IsUniqueName()); + DCHECK(name->IsUniqueName()); // If names of slots in range from 1 to kProbes - 1 for the hash value are // not equal to the name and kProbes-th slot is not used (its name is the // undefined value), it guarantees the hash table doesn't contain the @@ -3995,12 +3957,12 @@ void NameDictionaryLookupStub::GenerateNegativeLookup(MacroAssembler* masm, Immediate(name->Hash() + NameDictionary::GetProbeOffset(i))); // Scale the index by multiplying by the entry size. - ASSERT(NameDictionary::kEntrySize == 3); + DCHECK(NameDictionary::kEntrySize == 3); __ leap(index, Operand(index, index, times_2, 0)); // index *= 3. Register entity_name = r0; // Having undefined at this place means the name is not contained. 
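The `NameDictionaryLookupStub` hunks keep returning to one arithmetic detail: dictionary entries are three words wide (`NameDictionary::kEntrySize == 3`), so after offsetting the hash by a per-iteration probe offset and masking by the power-of-two capacity, the stub scales the entry index by three with a single `lea`. In C++ terms (a sketch of the index math only, not the full probing protocol):

```cpp
#include <cstdint>

const uint32_t kEntrySize = 3;  // key, value and details words per entry

uint32_t EntryToWordIndex(uint32_t hash, uint32_t probe_offset,
                          uint32_t capacity_mask) {
  uint32_t entry = (hash + probe_offset) & capacity_mask;  // andp r1, r0
  return entry * kEntrySize;  // emitted as leap(r1, [r1 + r1*2]): no multiply
}
```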
- ASSERT_EQ(kSmiTagSize, 1); + DCHECK_EQ(kSmiTagSize, 1); __ movp(entity_name, Operand(properties, index, times_pointer_size, @@ -4046,10 +4008,10 @@ void NameDictionaryLookupStub::GeneratePositiveLookup(MacroAssembler* masm, Register name, Register r0, Register r1) { - ASSERT(!elements.is(r0)); - ASSERT(!elements.is(r1)); - ASSERT(!name.is(r0)); - ASSERT(!name.is(r1)); + DCHECK(!elements.is(r0)); + DCHECK(!elements.is(r1)); + DCHECK(!name.is(r0)); + DCHECK(!name.is(r1)); __ AssertName(name); @@ -4066,7 +4028,7 @@ void NameDictionaryLookupStub::GeneratePositiveLookup(MacroAssembler* masm, __ andp(r1, r0); // Scale the index by multiplying by the entry size. - ASSERT(NameDictionary::kEntrySize == 3); + DCHECK(NameDictionary::kEntrySize == 3); __ leap(r1, Operand(r1, r1, times_2, 0)); // r1 = r1 * 3 // Check if the key is identical to the name. @@ -4128,7 +4090,7 @@ void NameDictionaryLookupStub::Generate(MacroAssembler* masm) { __ andp(scratch, Operand(rsp, 0)); // Scale the index by multiplying by the entry size. - ASSERT(NameDictionary::kEntrySize == 3); + DCHECK(NameDictionary::kEntrySize == 3); __ leap(index_, Operand(scratch, scratch, times_2, 0)); // index *= 3. // Having undefined at this place means the name is not contained. @@ -4187,11 +4149,6 @@ void StoreBufferOverflowStub::GenerateFixedRegStubsAheadOfTime( } -bool CodeStub::CanUseFPRegisters() { - return true; // Always have SSE2 on x64. -} - - // Takes the input in 3 registers: address_ value_ and object_. A pointer to // the value has just been written into the object, now this stub makes sure // we keep the GC informed. The word in the object where the value has been @@ -4275,8 +4232,8 @@ void RecordWriteStub::InformIncrementalMarker(MacroAssembler* masm) { regs_.SaveCallerSaveRegisters(masm, save_fp_regs_mode_); Register address = arg_reg_1.is(regs_.address()) ? kScratchRegister : regs_.address(); - ASSERT(!address.is(regs_.object())); - ASSERT(!address.is(arg_reg_1)); + DCHECK(!address.is(regs_.object())); + DCHECK(!address.is(arg_reg_1)); __ Move(address, regs_.address()); __ Move(arg_reg_1, regs_.object()); // TODO(gc) Can we just set address arg2 in the beginning? @@ -4466,7 +4423,7 @@ void StoreArrayLiteralElementStub::Generate(MacroAssembler* masm) { void StubFailureTrampolineStub::Generate(MacroAssembler* masm) { - CEntryStub ces(isolate(), 1, fp_registers_ ? kSaveFPRegs : kDontSaveFPRegs); + CEntryStub ces(isolate(), 1, kSaveFPRegs); __ Call(ces.GetCode(), RelocInfo::CODE_TARGET); int parameter_count_offset = StubFailureTrampolineFrame::kCallerStackParameterCountFrameOffset; @@ -4567,12 +4524,12 @@ static void CreateArrayDispatchOneArgument(MacroAssembler* masm, Label normal_sequence; if (mode == DONT_OVERRIDE) { - ASSERT(FAST_SMI_ELEMENTS == 0); - ASSERT(FAST_HOLEY_SMI_ELEMENTS == 1); - ASSERT(FAST_ELEMENTS == 2); - ASSERT(FAST_HOLEY_ELEMENTS == 3); - ASSERT(FAST_DOUBLE_ELEMENTS == 4); - ASSERT(FAST_HOLEY_DOUBLE_ELEMENTS == 5); + DCHECK(FAST_SMI_ELEMENTS == 0); + DCHECK(FAST_HOLEY_SMI_ELEMENTS == 1); + DCHECK(FAST_ELEMENTS == 2); + DCHECK(FAST_HOLEY_ELEMENTS == 3); + DCHECK(FAST_DOUBLE_ELEMENTS == 4); + DCHECK(FAST_HOLEY_DOUBLE_ELEMENTS == 5); // is the low bit set? If so, we are holey and that is good. __ testb(rdx, Immediate(1)); @@ -4817,8 +4774,7 @@ void InternalArrayConstructorStub::Generate(MacroAssembler* masm) { // but the following masking takes care of that anyway. __ movzxbp(rcx, FieldOperand(rcx, Map::kBitField2Offset)); // Retrieve elements_kind from bit field 2. 
- __ andp(rcx, Immediate(Map::kElementsKindMask)); - __ shrp(rcx, Immediate(Map::kElementsKindShift)); + __ DecodeField<Map::ElementsKindBits>(rcx); if (FLAG_debug_code) { Label done; @@ -4932,7 +4888,7 @@ void CallApiFunctionStub::Generate(MacroAssembler* masm) { // It's okay if api_function_address == callback_arg // but not arguments_arg - ASSERT(!api_function_address.is(arguments_arg)); + DCHECK(!api_function_address.is(arguments_arg)); // v8::InvocationCallback's argument. __ leap(arguments_arg, StackSpaceOperand(0)); @@ -5002,7 +4958,7 @@ void CallApiGetterStub::Generate(MacroAssembler* masm) { // It's okay if api_function_address == getter_arg // but not accessor_info_arg or name_arg - ASSERT(!api_function_address.is(accessor_info_arg) && + DCHECK(!api_function_address.is(accessor_info_arg) && !api_function_address.is(name_arg)); // The name handler is counted as an argument. diff --git a/deps/v8/src/x64/code-stubs-x64.h b/deps/v8/src/x64/code-stubs-x64.h index 2d6d21d0ab6..71fc5aba593 100644 --- a/deps/v8/src/x64/code-stubs-x64.h +++ b/deps/v8/src/x64/code-stubs-x64.h @@ -5,7 +5,7 @@ #ifndef V8_X64_CODE_STUBS_X64_H_ #define V8_X64_CODE_STUBS_X64_H_ -#include "ic-inl.h" +#include "src/ic-inl.h" namespace v8 { namespace internal { @@ -26,8 +26,8 @@ class StoreBufferOverflowStub: public PlatformCodeStub { private: SaveFPRegsMode save_doubles_; - Major MajorKey() { return StoreBufferOverflow; } - int MinorKey() { return (save_doubles_ == kSaveFPRegs) ? 1 : 0; } + Major MajorKey() const { return StoreBufferOverflow; } + int MinorKey() const { return (save_doubles_ == kSaveFPRegs) ? 1 : 0; } }; @@ -36,11 +36,11 @@ class StringHelper : public AllStatic { // Generate code for copying characters using the rep movs instruction. // Copies rcx characters from rsi to rdi. Copying of overlapping regions is // not supported. - static void GenerateCopyCharactersREP(MacroAssembler* masm, - Register dest, // Must be rdi. - Register src, // Must be rsi. - Register count, // Must be rcx. - bool ascii); + static void GenerateCopyCharacters(MacroAssembler* masm, + Register dest, + Register src, + Register count, + String::Encoding encoding); // Generate string hash. 
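The `DecodeField<Map::ElementsKindBits>` change just above replaces a hand-written mask-and-shift pair with V8's `BitField` template, so the mask and the shift can never drift apart. A reduced version of the pattern (the shift and size values here are illustrative, not Map's actual bit layout):

```cpp
#include <cstdint>

template <class T, int kShift, int kSize>
struct BitField {
  static T decode(uint32_t value) {
    const uint32_t mask = ((1u << kSize) - 1) << kShift;
    return static_cast<T>((value & mask) >> kShift);
  }
};

enum ElementsKindSketch { FAST_SMI_ELEMENTS = 0, FAST_ELEMENTS = 2 };
// Illustrative field position only; the real constants live on Map.
typedef BitField<ElementsKindSketch, 3, 5> ElementsKindBitsSketch;

ElementsKindSketch KindFromBitField2(uint32_t bit_field2) {
  // Equivalent to: andp(reg, kElementsKindMask); shrp(reg, kElementsKindShift);
  return ElementsKindBitsSketch::decode(bit_field2);
}
```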
@@ -66,8 +66,8 @@ class SubStringStub: public PlatformCodeStub { explicit SubStringStub(Isolate* isolate) : PlatformCodeStub(isolate) {} private: - Major MajorKey() { return SubString; } - int MinorKey() { return 0; } + Major MajorKey() const { return SubString; } + int MinorKey() const { return 0; } void Generate(MacroAssembler* masm); }; @@ -95,8 +95,8 @@ class StringCompareStub: public PlatformCodeStub { Register scratch2); private: - virtual Major MajorKey() { return StringCompare; } - virtual int MinorKey() { return 0; } + virtual Major MajorKey() const { return StringCompare; } + virtual int MinorKey() const { return 0; } virtual void Generate(MacroAssembler* masm); static void GenerateAsciiCharsCompareLoop( @@ -156,9 +156,9 @@ class NameDictionaryLookupStub: public PlatformCodeStub { NameDictionary::kHeaderSize + NameDictionary::kElementsStartIndex * kPointerSize; - Major MajorKey() { return NameDictionaryLookup; } + Major MajorKey() const { return NameDictionaryLookup; } - int MinorKey() { + int MinorKey() const { return DictionaryBits::encode(dictionary_.code()) | ResultBits::encode(result_.code()) | IndexBits::encode(index_.code()) | @@ -218,13 +218,13 @@ class RecordWriteStub: public PlatformCodeStub { return INCREMENTAL; } - ASSERT(first_instruction == kTwoByteNopInstruction); + DCHECK(first_instruction == kTwoByteNopInstruction); if (second_instruction == kFiveByteJumpInstruction) { return INCREMENTAL_COMPACTION; } - ASSERT(second_instruction == kFiveByteNopInstruction); + DCHECK(second_instruction == kFiveByteNopInstruction); return STORE_BUFFER_ONLY; } @@ -232,23 +232,23 @@ class RecordWriteStub: public PlatformCodeStub { static void Patch(Code* stub, Mode mode) { switch (mode) { case STORE_BUFFER_ONLY: - ASSERT(GetMode(stub) == INCREMENTAL || + DCHECK(GetMode(stub) == INCREMENTAL || GetMode(stub) == INCREMENTAL_COMPACTION); stub->instruction_start()[0] = kTwoByteNopInstruction; stub->instruction_start()[2] = kFiveByteNopInstruction; break; case INCREMENTAL: - ASSERT(GetMode(stub) == STORE_BUFFER_ONLY); + DCHECK(GetMode(stub) == STORE_BUFFER_ONLY); stub->instruction_start()[0] = kTwoByteJumpInstruction; break; case INCREMENTAL_COMPACTION: - ASSERT(GetMode(stub) == STORE_BUFFER_ONLY); + DCHECK(GetMode(stub) == STORE_BUFFER_ONLY); stub->instruction_start()[0] = kTwoByteNopInstruction; stub->instruction_start()[2] = kFiveByteJumpInstruction; break; } - ASSERT(GetMode(stub) == mode); - CPU::FlushICache(stub->instruction_start(), 7); + DCHECK(GetMode(stub) == mode); + CpuFeatures::FlushICache(stub->instruction_start(), 7); } private: @@ -266,7 +266,7 @@ class RecordWriteStub: public PlatformCodeStub { object_(object), address_(address), scratch0_(scratch0) { - ASSERT(!AreAliased(scratch0, object, address, no_reg)); + DCHECK(!AreAliased(scratch0, object, address, no_reg)); scratch1_ = GetRegThatIsNotRcxOr(object_, address_, scratch0_); if (scratch0.is(rcx)) { scratch0_ = GetRegThatIsNotRcxOr(object_, address_, scratch1_); @@ -277,15 +277,15 @@ class RecordWriteStub: public PlatformCodeStub { if (address.is(rcx)) { address_ = GetRegThatIsNotRcxOr(object_, scratch0_, scratch1_); } - ASSERT(!AreAliased(scratch0_, object_, address_, rcx)); + DCHECK(!AreAliased(scratch0_, object_, address_, rcx)); } void Save(MacroAssembler* masm) { - ASSERT(!address_orig_.is(object_)); - ASSERT(object_.is(object_orig_) || address_.is(address_orig_)); - ASSERT(!AreAliased(object_, address_, scratch1_, scratch0_)); - ASSERT(!AreAliased(object_orig_, address_, scratch1_, scratch0_)); - 
ASSERT(!AreAliased(object_, address_orig_, scratch1_, scratch0_)); + DCHECK(!address_orig_.is(object_)); + DCHECK(object_.is(object_orig_) || address_.is(address_orig_)); + DCHECK(!AreAliased(object_, address_, scratch1_, scratch0_)); + DCHECK(!AreAliased(object_orig_, address_, scratch1_, scratch0_)); + DCHECK(!AreAliased(object_, address_orig_, scratch1_, scratch0_)); // We don't have to save scratch0_orig_ because it was given to us as // a scratch register. But if we had to switch to a different reg then // we should save the new scratch0_. @@ -387,9 +387,9 @@ class RecordWriteStub: public PlatformCodeStub { Mode mode); void InformIncrementalMarker(MacroAssembler* masm); - Major MajorKey() { return RecordWrite; } + Major MajorKey() const { return RecordWrite; } - int MinorKey() { + int MinorKey() const { return ObjectBits::encode(object_.code()) | ValueBits::encode(value_.code()) | AddressBits::encode(address_.code()) | diff --git a/deps/v8/src/x64/codegen-x64.cc b/deps/v8/src/x64/codegen-x64.cc index 9903017700a..01cb512d02b 100644 --- a/deps/v8/src/x64/codegen-x64.cc +++ b/deps/v8/src/x64/codegen-x64.cc @@ -2,12 +2,12 @@ // Use of this source code is governed by a BSD-style license that can be // found in the LICENSE file. -#include "v8.h" +#include "src/v8.h" #if V8_TARGET_ARCH_X64 -#include "codegen.h" -#include "macro-assembler.h" +#include "src/codegen.h" +#include "src/macro-assembler.h" namespace v8 { namespace internal { @@ -17,14 +17,14 @@ namespace internal { void StubRuntimeCallHelper::BeforeCall(MacroAssembler* masm) const { masm->EnterFrame(StackFrame::INTERNAL); - ASSERT(!masm->has_frame()); + DCHECK(!masm->has_frame()); masm->set_has_frame(true); } void StubRuntimeCallHelper::AfterCall(MacroAssembler* masm) const { masm->LeaveFrame(StackFrame::INTERNAL); - ASSERT(masm->has_frame()); + DCHECK(masm->has_frame()); masm->set_has_frame(false); } @@ -35,7 +35,8 @@ void StubRuntimeCallHelper::AfterCall(MacroAssembler* masm) const { UnaryMathFunction CreateExpFunction() { if (!FLAG_fast_math) return &std::exp; size_t actual_size; - byte* buffer = static_cast<byte*>(OS::Allocate(1 * KB, &actual_size, true)); + byte* buffer = + static_cast<byte*>(base::OS::Allocate(1 * KB, &actual_size, true)); if (buffer == NULL) return &std::exp; ExternalReference::InitializeMathExpData(); @@ -55,10 +56,10 @@ UnaryMathFunction CreateExpFunction() { CodeDesc desc; masm.GetCode(&desc); - ASSERT(!RelocInfo::RequiresRelocation(desc)); + DCHECK(!RelocInfo::RequiresRelocation(desc)); - CPU::FlushICache(buffer, actual_size); - OS::ProtectCode(buffer, actual_size); + CpuFeatures::FlushICache(buffer, actual_size); + base::OS::ProtectCode(buffer, actual_size); return FUNCTION_CAST<UnaryMathFunction>(buffer); } @@ -66,9 +67,8 @@ UnaryMathFunction CreateExpFunction() { UnaryMathFunction CreateSqrtFunction() { size_t actual_size; // Allocate buffer in executable space. 
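`RecordWriteStub`, in the header hunks above, encodes its current mode in its own first instructions: a two-byte nop followed by a five-byte nop means STORE_BUFFER_ONLY, and either slot can be patched to a jump that routes execution through the incremental-marking paths. The `GetMode`/`Patch` pair amounts to this three-state machine (byte values below are placeholders for the real nop/jump encodings):

```cpp
#include <cassert>
#include <cstdint>

enum Mode { STORE_BUFFER_ONLY, INCREMENTAL, INCREMENTAL_COMPACTION };

// Placeholder encodings; the stub reads and writes real two-byte/five-byte
// nop and jump instructions at offsets 0 and 2 of its code start.
const uint8_t kTwoByteNop = 0x01, kTwoByteJump = 0x02;
const uint8_t kFiveByteNop = 0x03, kFiveByteJump = 0x04;

Mode GetMode(const uint8_t* code) {
  if (code[0] == kTwoByteJump) return INCREMENTAL;
  assert(code[0] == kTwoByteNop);
  if (code[2] == kFiveByteJump) return INCREMENTAL_COMPACTION;
  assert(code[2] == kFiveByteNop);
  return STORE_BUFFER_ONLY;
}

void Patch(uint8_t* code, Mode mode) {
  switch (mode) {
    case STORE_BUFFER_ONLY:
      code[0] = kTwoByteNop;
      code[2] = kFiveByteNop;
      break;
    case INCREMENTAL:
      code[0] = kTwoByteJump;
      break;
    case INCREMENTAL_COMPACTION:
      code[0] = kTwoByteNop;
      code[2] = kFiveByteJump;
      break;
  }
  assert(GetMode(code) == mode);
  // The real Patch ends with CpuFeatures::FlushICache(code, 7), which is a
  // no-op on x64 but required by the shared interface.
}
```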
- byte* buffer = static_cast<byte*>(OS::Allocate(1 * KB, - &actual_size, - true)); + byte* buffer = + static_cast<byte*>(base::OS::Allocate(1 * KB, &actual_size, true)); if (buffer == NULL) return &std::sqrt; MacroAssembler masm(NULL, buffer, static_cast<int>(actual_size)); @@ -79,10 +79,10 @@ UnaryMathFunction CreateSqrtFunction() { CodeDesc desc; masm.GetCode(&desc); - ASSERT(!RelocInfo::RequiresRelocation(desc)); + DCHECK(!RelocInfo::RequiresRelocation(desc)); - CPU::FlushICache(buffer, actual_size); - OS::ProtectCode(buffer, actual_size); + CpuFeatures::FlushICache(buffer, actual_size); + base::OS::ProtectCode(buffer, actual_size); return FUNCTION_CAST<UnaryMathFunction>(buffer); } @@ -92,9 +92,8 @@ typedef double (*ModuloFunction)(double, double); // Define custom fmod implementation. ModuloFunction CreateModuloFunction() { size_t actual_size; - byte* buffer = static_cast<byte*>(OS::Allocate(Assembler::kMinimalBufferSize, - &actual_size, - true)); + byte* buffer = static_cast<byte*>( + base::OS::Allocate(Assembler::kMinimalBufferSize, &actual_size, true)); CHECK(buffer); Assembler masm(NULL, buffer, static_cast<int>(actual_size)); // Generated code is put into a fixed, unmovable, buffer, and not into @@ -170,7 +169,7 @@ ModuloFunction CreateModuloFunction() { CodeDesc desc; masm.GetCode(&desc); - OS::ProtectCode(buffer, actual_size); + base::OS::ProtectCode(buffer, actual_size); // Call the function from C++ through this pointer. return FUNCTION_CAST<ModuloFunction>(buffer); } @@ -185,26 +184,29 @@ ModuloFunction CreateModuloFunction() { #define __ ACCESS_MASM(masm) void ElementsTransitionGenerator::GenerateMapChangeElementsTransition( - MacroAssembler* masm, AllocationSiteMode mode, + MacroAssembler* masm, + Register receiver, + Register key, + Register value, + Register target_map, + AllocationSiteMode mode, Label* allocation_memento_found) { - // ----------- S t a t e ------------- - // -- rax : value - // -- rbx : target map - // -- rcx : key - // -- rdx : receiver - // -- rsp[0] : return address - // ----------------------------------- + // Return address is on the stack. + Register scratch = rdi; + DCHECK(!AreAliased(receiver, key, value, target_map, scratch)); + if (mode == TRACK_ALLOCATION_SITE) { - ASSERT(allocation_memento_found != NULL); - __ JumpIfJSArrayHasAllocationMemento(rdx, rdi, allocation_memento_found); + DCHECK(allocation_memento_found != NULL); + __ JumpIfJSArrayHasAllocationMemento( + receiver, scratch, allocation_memento_found); } // Set transitioned map. - __ movp(FieldOperand(rdx, HeapObject::kMapOffset), rbx); - __ RecordWriteField(rdx, + __ movp(FieldOperand(receiver, HeapObject::kMapOffset), target_map); + __ RecordWriteField(receiver, HeapObject::kMapOffset, - rbx, - rdi, + target_map, + scratch, kDontSaveFPRegs, EMIT_REMEMBERED_SET, OMIT_SMI_CHECK); @@ -212,14 +214,19 @@ void ElementsTransitionGenerator::GenerateMapChangeElementsTransition( void ElementsTransitionGenerator::GenerateSmiToDouble( - MacroAssembler* masm, AllocationSiteMode mode, Label* fail) { - // ----------- S t a t e ------------- - // -- rax : value - // -- rbx : target map - // -- rcx : key - // -- rdx : receiver - // -- rsp[0] : return address - // ----------------------------------- + MacroAssembler* masm, + Register receiver, + Register key, + Register value, + Register target_map, + AllocationSiteMode mode, + Label* fail) { + // Return address is on the stack. 
+ DCHECK(receiver.is(rdx)); + DCHECK(key.is(rcx)); + DCHECK(value.is(rax)); + DCHECK(target_map.is(rbx)); + // The fail label is not actually used since we do not allocate. Label allocated, new_backing_store, only_change_map, done; @@ -243,7 +250,7 @@ void ElementsTransitionGenerator::GenerateSmiToDouble( } else { // For x32 port we have to allocate a new backing store as SMI size is // not equal with double size. - ASSERT(kDoubleSize == 2 * kPointerSize); + DCHECK(kDoubleSize == 2 * kPointerSize); __ jmp(&new_backing_store); } @@ -346,14 +353,19 @@ void ElementsTransitionGenerator::GenerateSmiToDouble( void ElementsTransitionGenerator::GenerateDoubleToObject( - MacroAssembler* masm, AllocationSiteMode mode, Label* fail) { - // ----------- S t a t e ------------- - // -- rax : value - // -- rbx : target map - // -- rcx : key - // -- rdx : receiver - // -- rsp[0] : return address - // ----------------------------------- + MacroAssembler* masm, + Register receiver, + Register key, + Register value, + Register target_map, + AllocationSiteMode mode, + Label* fail) { + // Return address is on the stack. + DCHECK(receiver.is(rdx)); + DCHECK(key.is(rcx)); + DCHECK(value.is(rax)); + DCHECK(target_map.is(rbx)); + Label loop, entry, convert_hole, gc_required, only_change_map; if (mode == TRACK_ALLOCATION_SITE) { @@ -518,7 +530,7 @@ void StringCharLoadGenerator::Generate(MacroAssembler* masm, __ Assert(zero, kExternalStringExpectedButNotFound); } // Rule out short external strings. - STATIC_CHECK(kShortExternalStringTag != 0); + STATIC_ASSERT(kShortExternalStringTag != 0); __ testb(result, Immediate(kShortExternalStringTag)); __ j(not_zero, call_runtime); // Check encoding. @@ -568,11 +580,12 @@ void MathExpGenerator::EmitMathExp(MacroAssembler* masm, XMMRegister double_scratch, Register temp1, Register temp2) { - ASSERT(!input.is(result)); - ASSERT(!input.is(double_scratch)); - ASSERT(!result.is(double_scratch)); - ASSERT(!temp1.is(temp2)); - ASSERT(ExternalReference::math_exp_constants(0).address() != NULL); + DCHECK(!input.is(result)); + DCHECK(!input.is(double_scratch)); + DCHECK(!result.is(double_scratch)); + DCHECK(!temp1.is(temp2)); + DCHECK(ExternalReference::math_exp_constants(0).address() != NULL); + DCHECK(!masm->serializer_enabled()); // External references not serializable. 
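`CreateExpFunction`, `CreateSqrtFunction` and `CreateModuloFunction` in the codegen hunks above all follow one pattern: allocate a small executable buffer via `base::OS::Allocate`, assemble a free-standing function into it, flush the instruction cache, write-protect the code, and `FUNCTION_CAST` the buffer to a C function pointer. Outside V8 the same trick looks like this on POSIX (x86-64 only, and purely illustrative: strict W^X systems refuse a writable-and-executable mapping):

```cpp
#include <sys/mman.h>
#include <cstdio>
#include <cstring>

int main() {
  // Machine code for: mov eax, edi; ret -- returns its first int argument
  // under the System V AMD64 calling convention.
  const unsigned char code[] = { 0x89, 0xF8, 0xC3 };

  void* buf = mmap(NULL, 4096, PROT_READ | PROT_WRITE | PROT_EXEC,
                   MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
  if (buf == MAP_FAILED) return 1;  // mirror the NULL-buffer fallback above
  std::memcpy(buf, code, sizeof(code));
  // No explicit icache flush is needed on x86-64 for same-core execution,
  // which is exactly why FlushICache is a no-op in cpu-x64.cc below.

  int (*identity)(int) = reinterpret_cast<int (*)(int)>(buf);
  std::printf("%d\n", identity(42));  // prints 42
  munmap(buf, 4096);
  return 0;
}
```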
Label done; @@ -617,7 +630,7 @@ void MathExpGenerator::EmitMathExp(MacroAssembler* masm, CodeAgingHelper::CodeAgingHelper() { - ASSERT(young_sequence_.length() == kNoCodeAgeSequenceLength); + DCHECK(young_sequence_.length() == kNoCodeAgeSequenceLength); // The sequence of instructions that is patched out for aging code is the // following boilerplate stack-building prologue that is found both in // FUNCTION and OPTIMIZED_FUNCTION code: @@ -638,7 +651,7 @@ bool CodeAgingHelper::IsOld(byte* candidate) const { bool Code::IsYoungSequence(Isolate* isolate, byte* sequence) { bool result = isolate->code_aging_helper()->IsYoung(sequence); - ASSERT(result || isolate->code_aging_helper()->IsOld(sequence)); + DCHECK(result || isolate->code_aging_helper()->IsOld(sequence)); return result; } @@ -665,7 +678,7 @@ void Code::PatchPlatformCodeAge(Isolate* isolate, uint32_t young_length = isolate->code_aging_helper()->young_sequence_length(); if (age == kNoAgeCodeAge) { isolate->code_aging_helper()->CopyYoungSequenceTo(sequence); - CPU::FlushICache(sequence, young_length); + CpuFeatures::FlushICache(sequence, young_length); } else { Code* stub = GetCodeAgeStub(isolate, age, parity); CodePatcher patcher(sequence, young_length); @@ -677,7 +690,7 @@ void Code::PatchPlatformCodeAge(Isolate* isolate, Operand StackArgumentsAccessor::GetArgumentOperand(int index) { - ASSERT(index >= 0); + DCHECK(index >= 0); int receiver = (receiver_mode_ == ARGUMENTS_CONTAIN_RECEIVER) ? 1 : 0; int displacement_to_last_argument = base_reg_.is(rsp) ? kPCOnStackSize : kFPOnStackSize + kPCOnStackSize; @@ -685,7 +698,7 @@ Operand StackArgumentsAccessor::GetArgumentOperand(int index) { if (argument_count_reg_.is(no_reg)) { // argument[0] is at base_reg_ + displacement_to_last_argument + // (argument_count_immediate_ + receiver - 1) * kPointerSize. - ASSERT(argument_count_immediate_ + receiver > 0); + DCHECK(argument_count_immediate_ + receiver > 0); return Operand(base_reg_, displacement_to_last_argument + (argument_count_immediate_ + receiver - 1 - index) * kPointerSize); } else { diff --git a/deps/v8/src/x64/codegen-x64.h b/deps/v8/src/x64/codegen-x64.h index 540bba77c82..8bfd7f4c583 100644 --- a/deps/v8/src/x64/codegen-x64.h +++ b/deps/v8/src/x64/codegen-x64.h @@ -5,8 +5,8 @@ #ifndef V8_X64_CODEGEN_X64_H_ #define V8_X64_CODEGEN_X64_H_ -#include "ast.h" -#include "ic-inl.h" +#include "src/ast.h" +#include "src/ic-inl.h" namespace v8 { namespace internal { @@ -96,7 +96,7 @@ class StackArgumentsAccessor BASE_EMBEDDED { Operand GetArgumentOperand(int index); Operand GetReceiverOperand() { - ASSERT(receiver_mode_ == ARGUMENTS_CONTAIN_RECEIVER); + DCHECK(receiver_mode_ == ARGUMENTS_CONTAIN_RECEIVER); return GetArgumentOperand(0); } diff --git a/deps/v8/src/x64/cpu-x64.cc b/deps/v8/src/x64/cpu-x64.cc index 9243e2fb5ee..59a187f14c7 100644 --- a/deps/v8/src/x64/cpu-x64.cc +++ b/deps/v8/src/x64/cpu-x64.cc @@ -5,20 +5,20 @@ // CPU specific code for x64 independent of OS goes here. #if defined(__GNUC__) && !defined(__MINGW64__) -#include "third_party/valgrind/valgrind.h" +#include "src/third_party/valgrind/valgrind.h" #endif -#include "v8.h" +#include "src/v8.h" #if V8_TARGET_ARCH_X64 -#include "cpu.h" -#include "macro-assembler.h" +#include "src/assembler.h" +#include "src/macro-assembler.h" namespace v8 { namespace internal { -void CPU::FlushICache(void* start, size_t size) { +void CpuFeatures::FlushICache(void* start, size_t size) { // No need to flush the instruction cache on Intel. 
On Intel instruction
// cache flushing is only necessary when multiple cores are running the same
// code simultaneously. V8 (and JavaScript) is single threaded and when code
diff --git a/deps/v8/src/x64/debug-x64.cc b/deps/v8/src/x64/debug-x64.cc
index 6e0e05fda89..fe78e372145 100644
--- a/deps/v8/src/x64/debug-x64.cc
+++ b/deps/v8/src/x64/debug-x64.cc
@@ -2,13 +2,13 @@
 // Use of this source code is governed by a BSD-style license that can be
 // found in the LICENSE file.
-#include "v8.h"
+#include "src/v8.h"
 #if V8_TARGET_ARCH_X64
-#include "assembler.h"
-#include "codegen.h"
-#include "debug.h"
+#include "src/assembler.h"
+#include "src/codegen.h"
+#include "src/debug.h"
 namespace v8 {
@@ -23,7 +23,7 @@ bool BreakLocationIterator::IsDebugBreakAtReturn() {
 // CodeGenerator::VisitReturnStatement and VirtualFrame::Exit in codegen-x64.cc
 // for the precise return instructions sequence.
 void BreakLocationIterator::SetDebugBreakAtReturn() {
-  ASSERT(Assembler::kJSReturnSequenceLength >= Assembler::kCallSequenceLength);
+  DCHECK(Assembler::kJSReturnSequenceLength >= Assembler::kCallSequenceLength);
   rinfo()->PatchCodeWithCall(
       debug_info_->GetIsolate()->builtins()->Return_DebugBreak()->entry(),
       Assembler::kJSReturnSequenceLength - Assembler::kCallSequenceLength);
@@ -40,20 +40,20 @@ void BreakLocationIterator::ClearDebugBreakAtReturn() {
 // A debug break in the frame exit code is identified by the JS frame exit code
 // having been patched with a call instruction.
 bool Debug::IsDebugBreakAtReturn(v8::internal::RelocInfo* rinfo) {
-  ASSERT(RelocInfo::IsJSReturn(rinfo->rmode()));
+  DCHECK(RelocInfo::IsJSReturn(rinfo->rmode()));
   return rinfo->IsPatchedReturnSequence();
 }
 bool BreakLocationIterator::IsDebugBreakAtSlot() {
-  ASSERT(IsDebugBreakSlot());
+  DCHECK(IsDebugBreakSlot());
   // Check whether the debug break slot instructions have been patched.
-  return !Assembler::IsNop(rinfo()->pc());
+  return rinfo()->IsPatchedDebugBreakSlotSequence();
 }
 void BreakLocationIterator::SetDebugBreakAtSlot() {
-  ASSERT(IsDebugBreakSlot());
+  DCHECK(IsDebugBreakSlot());
   rinfo()->PatchCodeWithCall(
       debug_info_->GetIsolate()->builtins()->Slot_DebugBreak()->entry(),
       Assembler::kDebugBreakSlotLength - Assembler::kCallSequenceLength);
@@ -61,12 +61,10 @@ void BreakLocationIterator::SetDebugBreakAtSlot() {
 void BreakLocationIterator::ClearDebugBreakAtSlot() {
-  ASSERT(IsDebugBreakSlot());
+  DCHECK(IsDebugBreakSlot());
   rinfo()->PatchCode(original_rinfo()->pc(), Assembler::kDebugBreakSlotLength);
 }
-const bool Debug::FramePaddingLayout::kIsSupported = true;
-
 #define __ ACCESS_MASM(masm)
@@ -80,21 +78,21 @@ static void Generate_DebugBreakCallHelper(MacroAssembler* masm,
     FrameScope scope(masm, StackFrame::INTERNAL);
     // Load padding words on stack.
-    for (int i = 0; i < Debug::FramePaddingLayout::kInitialSize; i++) {
-      __ Push(Smi::FromInt(Debug::FramePaddingLayout::kPaddingValue));
+    for (int i = 0; i < LiveEdit::kFramePaddingInitialSize; i++) {
+      __ Push(Smi::FromInt(LiveEdit::kFramePaddingValue));
     }
-    __ Push(Smi::FromInt(Debug::FramePaddingLayout::kInitialSize));
+    __ Push(Smi::FromInt(LiveEdit::kFramePaddingInitialSize));
     // Store the registers containing live values on the expression stack to
     // make sure that these are correctly updated during GC. Non-object values
    // are stored as two smis, causing them to be untouched by GC.
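
// Hedged illustration of the "two smis" trick the comment above refers to:
// a raw 64-bit value is split into two halves, each tagged as a smi (small
// integer, payload in the upper 32 bits on x64), so the GC never mistakes
// raw bits for a heap pointer. Simplified stand-in, not V8's exact layout.
#include <cstdint>

static inline uint64_t TagAsSmi(uint32_t payload) {
  return static_cast<uint64_t>(payload) << 32;  // low tag bits stay zero
}

static inline void SplitIntoSmis(uint64_t raw, uint64_t* lo, uint64_t* hi) {
  *lo = TagAsSmi(static_cast<uint32_t>(raw));        // low 32 bits
  *hi = TagAsSmi(static_cast<uint32_t>(raw >> 32));  // high 32 bits
}

static inline uint64_t JoinFromSmis(uint64_t lo, uint64_t hi) {
  return (lo >> 32) | ((hi >> 32) << 32);  // reassemble the raw value
}
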
- ASSERT((object_regs & ~kJSCallerSaved) == 0); - ASSERT((non_object_regs & ~kJSCallerSaved) == 0); - ASSERT((object_regs & non_object_regs) == 0); + DCHECK((object_regs & ~kJSCallerSaved) == 0); + DCHECK((non_object_regs & ~kJSCallerSaved) == 0); + DCHECK((object_regs & non_object_regs) == 0); for (int i = 0; i < kNumJSCallerSaved; i++) { int r = JSCallerSavedCode(i); Register reg = { r }; - ASSERT(!reg.is(kScratchRegister)); + DCHECK(!reg.is(kScratchRegister)); if ((object_regs & (1 << r)) != 0) { __ Push(reg); } @@ -146,13 +144,13 @@ static void Generate_DebugBreakCallHelper(MacroAssembler* masm, // jumping to the target address intended by the caller and that was // overwritten by the address of DebugBreakXXX. ExternalReference after_break_target = - ExternalReference(Debug_Address::AfterBreakTarget(), masm->isolate()); + ExternalReference::debug_after_break_target_address(masm->isolate()); __ Move(kScratchRegister, after_break_target); __ Jump(Operand(kScratchRegister, 0)); } -void Debug::GenerateCallICStubDebugBreak(MacroAssembler* masm) { +void DebugCodegen::GenerateCallICStubDebugBreak(MacroAssembler* masm) { // Register state for CallICStub // ----------- S t a t e ------------- // -- rdx : type feedback slot (smi) @@ -162,51 +160,41 @@ void Debug::GenerateCallICStubDebugBreak(MacroAssembler* masm) { } -void Debug::GenerateLoadICDebugBreak(MacroAssembler* masm) { +void DebugCodegen::GenerateLoadICDebugBreak(MacroAssembler* masm) { // Register state for IC load call (from ic-x64.cc). - // ----------- S t a t e ------------- - // -- rax : receiver - // -- rcx : name - // ----------------------------------- - Generate_DebugBreakCallHelper(masm, rax.bit() | rcx.bit(), 0, false); + Register receiver = LoadIC::ReceiverRegister(); + Register name = LoadIC::NameRegister(); + Generate_DebugBreakCallHelper(masm, receiver.bit() | name.bit(), 0, false); } -void Debug::GenerateStoreICDebugBreak(MacroAssembler* masm) { +void DebugCodegen::GenerateStoreICDebugBreak(MacroAssembler* masm) { // Register state for IC store call (from ic-x64.cc). - // ----------- S t a t e ------------- - // -- rax : value - // -- rcx : name - // -- rdx : receiver - // ----------------------------------- + Register receiver = StoreIC::ReceiverRegister(); + Register name = StoreIC::NameRegister(); + Register value = StoreIC::ValueRegister(); Generate_DebugBreakCallHelper( - masm, rax.bit() | rcx.bit() | rdx.bit(), 0, false); + masm, receiver.bit() | name.bit() | value.bit(), 0, false); } -void Debug::GenerateKeyedLoadICDebugBreak(MacroAssembler* masm) { +void DebugCodegen::GenerateKeyedLoadICDebugBreak(MacroAssembler* masm) { // Register state for keyed IC load call (from ic-x64.cc). - // ----------- S t a t e ------------- - // -- rax : key - // -- rdx : receiver - // ----------------------------------- - Generate_DebugBreakCallHelper(masm, rax.bit() | rdx.bit(), 0, false); + GenerateLoadICDebugBreak(masm); } -void Debug::GenerateKeyedStoreICDebugBreak(MacroAssembler* masm) { - // Register state for keyed IC load call (from ic-x64.cc). - // ----------- S t a t e ------------- - // -- rax : value - // -- rcx : key - // -- rdx : receiver - // ----------------------------------- +void DebugCodegen::GenerateKeyedStoreICDebugBreak(MacroAssembler* masm) { + // Register state for keyed IC store call (from ic-x64.cc). 
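
// Hedged sketch of the convention change in these hunks: the debug-break
// generators no longer hard-code rax/rcx/rdx but ask each IC class for its
// calling convention through named accessors, so a single definition controls
// every call site. "Reg" and the register codes are illustrative stand-ins.
struct RegSketch {
  int code;
  int bit() const { return 1 << code; }
};

struct StoreICSketch {
  // One authoritative statement of the convention:
  static RegSketch ReceiverRegister() { return RegSketch{2}; }
  static RegSketch NameRegister() { return RegSketch{1}; }
  static RegSketch ValueRegister() { return RegSketch{0}; }
};

// Call sites combine roles rather than register names, mirroring
// receiver.bit() | name.bit() | value.bit() in the patch:
// int regs = StoreICSketch::ReceiverRegister().bit() |
//            StoreICSketch::NameRegister().bit() |
//            StoreICSketch::ValueRegister().bit();
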
+ Register receiver = KeyedStoreIC::ReceiverRegister(); + Register name = KeyedStoreIC::NameRegister(); + Register value = KeyedStoreIC::ValueRegister(); Generate_DebugBreakCallHelper( - masm, rax.bit() | rcx.bit() | rdx.bit(), 0, false); + masm, receiver.bit() | name.bit() | value.bit(), 0, false); } -void Debug::GenerateCompareNilICDebugBreak(MacroAssembler* masm) { +void DebugCodegen::GenerateCompareNilICDebugBreak(MacroAssembler* masm) { // Register state for CompareNil IC // ----------- S t a t e ------------- // -- rax : value @@ -215,7 +203,7 @@ void Debug::GenerateCompareNilICDebugBreak(MacroAssembler* masm) { } -void Debug::GenerateReturnDebugBreak(MacroAssembler* masm) { +void DebugCodegen::GenerateReturnDebugBreak(MacroAssembler* masm) { // Register state just before return from JS function (from codegen-x64.cc). // ----------- S t a t e ------------- // -- rax: return value @@ -224,7 +212,7 @@ void Debug::GenerateReturnDebugBreak(MacroAssembler* masm) { } -void Debug::GenerateCallFunctionStubDebugBreak(MacroAssembler* masm) { +void DebugCodegen::GenerateCallFunctionStubDebugBreak(MacroAssembler* masm) { // Register state for CallFunctionStub (from code-stubs-x64.cc). // ----------- S t a t e ------------- // -- rdi : function @@ -233,7 +221,7 @@ void Debug::GenerateCallFunctionStubDebugBreak(MacroAssembler* masm) { } -void Debug::GenerateCallConstructStubDebugBreak(MacroAssembler* masm) { +void DebugCodegen::GenerateCallConstructStubDebugBreak(MacroAssembler* masm) { // Register state for CallConstructStub (from code-stubs-x64.cc). // rax is the actual number of arguments not encoded as a smi, see comment // above IC call. @@ -245,7 +233,8 @@ void Debug::GenerateCallConstructStubDebugBreak(MacroAssembler* masm) { } -void Debug::GenerateCallConstructStubRecordDebugBreak(MacroAssembler* masm) { +void DebugCodegen::GenerateCallConstructStubRecordDebugBreak( + MacroAssembler* masm) { // Register state for CallConstructStub (from code-stubs-x64.cc). // rax is the actual number of arguments not encoded as a smi, see comment // above IC call. @@ -260,33 +249,33 @@ void Debug::GenerateCallConstructStubRecordDebugBreak(MacroAssembler* masm) { } -void Debug::GenerateSlot(MacroAssembler* masm) { +void DebugCodegen::GenerateSlot(MacroAssembler* masm) { // Generate enough nop's to make space for a call instruction. Label check_codesize; __ bind(&check_codesize); __ RecordDebugBreakSlot(); __ Nop(Assembler::kDebugBreakSlotLength); - ASSERT_EQ(Assembler::kDebugBreakSlotLength, + DCHECK_EQ(Assembler::kDebugBreakSlotLength, masm->SizeOfCodeGeneratedSince(&check_codesize)); } -void Debug::GenerateSlotDebugBreak(MacroAssembler* masm) { +void DebugCodegen::GenerateSlotDebugBreak(MacroAssembler* masm) { // In the places where a debug break slot is inserted no registers can contain // object pointers. 
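
// Hedged sketch of the slot mechanism used here: GenerateSlot emits enough
// nops to hold a call instruction, and setting a breakpoint patches the slot
// with a call. "Is this slot patched?" then reduces to checking for the x64
// call-rel32 opcode at the slot's pc. Simplified stand-in for what
// RelocInfo::IsPatchedDebugBreakSlotSequence() decides.
#include <cstdint>

static bool IsPatchedDebugBreakSlotSketch(const uint8_t* pc) {
  return pc[0] == 0xE8;  // call rel32; an unpatched slot starts with nops
}
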
Generate_DebugBreakCallHelper(masm, 0, 0, true); } -void Debug::GeneratePlainReturnLiveEdit(MacroAssembler* masm) { +void DebugCodegen::GeneratePlainReturnLiveEdit(MacroAssembler* masm) { masm->ret(0); } -void Debug::GenerateFrameDropperLiveEdit(MacroAssembler* masm) { +void DebugCodegen::GenerateFrameDropperLiveEdit(MacroAssembler* masm) { ExternalReference restarter_frame_function_slot = - ExternalReference(Debug_Address::RestarterFrameFunctionPointer(), - masm->isolate()); + ExternalReference::debug_restarter_frame_function_pointer_address( + masm->isolate()); __ Move(rax, restarter_frame_function_slot); __ movp(Operand(rax, 0), Immediate(0)); @@ -308,7 +297,7 @@ void Debug::GenerateFrameDropperLiveEdit(MacroAssembler* masm) { __ jmp(rdx); } -const bool Debug::kFrameDropperSupported = true; +const bool LiveEdit::kFrameDropperSupported = true; #undef __ diff --git a/deps/v8/src/x64/deoptimizer-x64.cc b/deps/v8/src/x64/deoptimizer-x64.cc index 9016d4b7542..a2f9faa0026 100644 --- a/deps/v8/src/x64/deoptimizer-x64.cc +++ b/deps/v8/src/x64/deoptimizer-x64.cc @@ -2,14 +2,14 @@ // Use of this source code is governed by a BSD-style license that can be // found in the LICENSE file. -#include "v8.h" +#include "src/v8.h" #if V8_TARGET_ARCH_X64 -#include "codegen.h" -#include "deoptimizer.h" -#include "full-codegen.h" -#include "safepoint-table.h" +#include "src/codegen.h" +#include "src/deoptimizer.h" +#include "src/full-codegen.h" +#include "src/safepoint-table.h" namespace v8 { namespace internal { @@ -60,9 +60,6 @@ void Deoptimizer::PatchCodeForDeoptimization(Isolate* isolate, Code* code) { #endif DeoptimizationInputData* deopt_data = DeoptimizationInputData::cast(code->deoptimization_data()); - SharedFunctionInfo* shared = - SharedFunctionInfo::cast(deopt_data->SharedFunctionInfo()); - shared->EvictFromOptimizedCodeMap(code, "deoptimized code"); deopt_data->SetSharedFunctionInfo(Smi::FromInt(0)); // For each LLazyBailout instruction insert a call to the corresponding // deoptimization entry. 
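
// Hedged sketch of the patching described above: for every lazy-bailout site
// recorded in the deopt data, a call to the matching deoptimization entry is
// written over the generated code, with checks that successive patches stay
// inside the code object. Types and helpers are stand-ins, not V8's
// CodePatcher API.
#include <cstdint>
#include <cstring>

static void PatchLazyDeoptSketch(uint8_t* call_address, uint8_t* entry,
                                 uint8_t* code_end, int patch_size) {
  // The patch must fit inside the code object (DCHECKed in the real code).
  if (call_address + patch_size > code_end) return;
  // Emit call rel32: opcode 0xE8, displacement relative to the end of the
  // 5-byte instruction.
  int32_t disp = static_cast<int32_t>(entry - (call_address + 5));
  call_address[0] = 0xE8;
  std::memcpy(call_address + 1, &disp, 4);
}
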
@@ -75,9 +72,9 @@ void Deoptimizer::PatchCodeForDeoptimization(Isolate* isolate, Code* code) { CodePatcher patcher(call_address, Assembler::kCallSequenceLength); patcher.masm()->Call(GetDeoptimizationEntry(isolate, i, LAZY), Assembler::RelocInfoNone()); - ASSERT(prev_call_address == NULL || + DCHECK(prev_call_address == NULL || call_address >= prev_call_address + patch_size()); - ASSERT(call_address + patch_size() <= code->instruction_end()); + DCHECK(call_address + patch_size() <= code->instruction_end()); #ifdef DEBUG prev_call_address = call_address; #endif @@ -108,7 +105,7 @@ void Deoptimizer::FillInputFrame(Address tos, JavaScriptFrame* frame) { void Deoptimizer::SetPlatformCompiledStubRegisters( FrameDescription* output_frame, CodeStubInterfaceDescriptor* descriptor) { intptr_t handler = - reinterpret_cast<intptr_t>(descriptor->deoptimization_handler_); + reinterpret_cast<intptr_t>(descriptor->deoptimization_handler()); int params = descriptor->GetHandlerParameterCount(); output_frame->SetRegister(rax.code(), params); output_frame->SetRegister(rbx.code(), handler); @@ -129,11 +126,6 @@ bool Deoptimizer::HasAlignmentPadding(JSFunction* function) { } -Code* Deoptimizer::NotifyStubFailureBuiltin() { - return isolate_->builtins()->builtin(Builtins::kNotifyStubFailureSaveDoubles); -} - - #define __ masm()-> void Deoptimizer::EntryGenerator::Generate() { @@ -299,7 +291,7 @@ void Deoptimizer::EntryGenerator::Generate() { // Do not restore rsp, simply pop the value into the next register // and overwrite this afterwards. if (r.is(rsp)) { - ASSERT(i > 0); + DCHECK(i > 0); r = Register::from_code(i - 1); } __ popq(r); @@ -322,7 +314,7 @@ void Deoptimizer::TableEntryGenerator::GeneratePrologue() { USE(start); __ pushq_imm32(i); __ jmp(&done); - ASSERT(masm()->pc_offset() - start == table_entry_size_); + DCHECK(masm()->pc_offset() - start == table_entry_size_); } __ bind(&done); } diff --git a/deps/v8/src/x64/disasm-x64.cc b/deps/v8/src/x64/disasm-x64.cc index bef2f82dfa7..2b8fc2d4dcf 100644 --- a/deps/v8/src/x64/disasm-x64.cc +++ b/deps/v8/src/x64/disasm-x64.cc @@ -3,15 +3,15 @@ // found in the LICENSE file. 
#include <assert.h> -#include <stdio.h> #include <stdarg.h> +#include <stdio.h> -#include "v8.h" +#include "src/v8.h" #if V8_TARGET_ARCH_X64 -#include "disasm.h" -#include "lazy-instance.h" +#include "src/base/lazy-instance.h" +#include "src/disasm.h" namespace disasm { @@ -216,7 +216,7 @@ void InstructionTable::CopyTable(const ByteMnemonic bm[], OperandType op_order = bm[i].op_order_; id->op_order_ = static_cast<OperandType>(op_order & ~BYTE_SIZE_OPERAND_FLAG); - ASSERT_EQ(NO_INSTR, id->type); // Information not already entered + DCHECK_EQ(NO_INSTR, id->type); // Information not already entered id->type = type; id->byte_size_operation = ((op_order & BYTE_SIZE_OPERAND_FLAG) != 0); } @@ -230,7 +230,7 @@ void InstructionTable::SetTableRange(InstructionType type, const char* mnem) { for (byte b = start; b <= end; b++) { InstructionDesc* id = &instructions_[b]; - ASSERT_EQ(NO_INSTR, id->type); // Information not already entered + DCHECK_EQ(NO_INSTR, id->type); // Information not already entered id->mnem = mnem; id->type = type; id->byte_size_operation = byte_size; @@ -241,14 +241,14 @@ void InstructionTable::SetTableRange(InstructionType type, void InstructionTable::AddJumpConditionalShort() { for (byte b = 0x70; b <= 0x7F; b++) { InstructionDesc* id = &instructions_[b]; - ASSERT_EQ(NO_INSTR, id->type); // Information not already entered + DCHECK_EQ(NO_INSTR, id->type); // Information not already entered id->mnem = NULL; // Computed depending on condition code. id->type = JUMP_CONDITIONAL_SHORT_INSTR; } } -static v8::internal::LazyInstance<InstructionTable>::type instruction_table = +static v8::base::LazyInstance<InstructionTable>::type instruction_table = LAZY_INSTANCE_INITIALIZER; @@ -328,7 +328,7 @@ class DisassemblerX64 { const InstructionTable* const instruction_table_; void setRex(byte rex) { - ASSERT_EQ(0x40, rex & 0xF0); + DCHECK_EQ(0x40, rex & 0xF0); rex_ = rex; } @@ -430,7 +430,7 @@ void DisassemblerX64::AppendToBuffer(const char* format, ...) { v8::internal::Vector<char> buf = tmp_buffer_ + tmp_buffer_pos_; va_list args; va_start(args, format); - int result = v8::internal::OS::VSNPrintF(buf, format, args); + int result = v8::internal::VSNPrintF(buf, format, args); va_end(args); tmp_buffer_pos_ += result; } @@ -661,7 +661,7 @@ int DisassemblerX64::PrintImmediateOp(byte* data) { // Returns number of bytes used, including *data. int DisassemblerX64::F6F7Instruction(byte* data) { - ASSERT(*data == 0xF7 || *data == 0xF6); + DCHECK(*data == 0xF7 || *data == 0xF6); byte modrm = *(data + 1); int mod, regop, rm; get_modrm(modrm, &mod, ®op, &rm); @@ -680,6 +680,9 @@ int DisassemblerX64::F6F7Instruction(byte* data) { case 5: mnem = "imul"; break; + case 6: + mnem = "div"; + break; case 7: mnem = "idiv"; break; @@ -747,7 +750,7 @@ int DisassemblerX64::ShiftInstruction(byte* data) { UnimplementedInstruction(); return num_bytes; } - ASSERT_NE(NULL, mnem); + DCHECK_NE(NULL, mnem); if (op == 0xD0) { imm8 = 1; } else if (op == 0xC0) { @@ -770,7 +773,7 @@ int DisassemblerX64::ShiftInstruction(byte* data) { // Returns number of bytes used, including *data. int DisassemblerX64::JumpShort(byte* data) { - ASSERT_EQ(0xEB, *data); + DCHECK_EQ(0xEB, *data); byte b = *(data + 1); byte* dest = data + static_cast<int8_t>(b) + 2; AppendToBuffer("jmp %s", NameOfAddress(dest)); @@ -780,7 +783,7 @@ int DisassemblerX64::JumpShort(byte* data) { // Returns number of bytes used, including *data. 
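
// Hedged sketch of the table this disassembler hunk extends: for opcodes
// 0xF6/0xF7 the operation is selected by the reg field (bits 3..5) of the
// ModR/M byte, and the patch fills in the previously missing /6 ("div") case
// alongside /7 ("idiv"). Only the cases visible in the hunk are shown.
#include <cstdint>

static const char* F6F7MnemonicSketch(uint8_t modrm) {
  switch ((modrm >> 3) & 0x7) {  // ModR/M reg field selects the operation
    // ... other cases elided ...
    case 5: return "imul";
    case 6: return "div";   // the case this hunk adds
    case 7: return "idiv";
    default: return nullptr;  // not handled by the disassembler
  }
}
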
int DisassemblerX64::JumpConditional(byte* data) { - ASSERT_EQ(0x0F, *data); + DCHECK_EQ(0x0F, *data); byte cond = *(data + 1) & 0x0F; byte* dest = data + *reinterpret_cast<int32_t*>(data + 2) + 6; const char* mnem = conditional_code_suffix[cond]; @@ -802,7 +805,7 @@ int DisassemblerX64::JumpConditionalShort(byte* data) { // Returns number of bytes used, including *data. int DisassemblerX64::SetCC(byte* data) { - ASSERT_EQ(0x0F, *data); + DCHECK_EQ(0x0F, *data); byte cond = *(data + 1) & 0x0F; const char* mnem = conditional_code_suffix[cond]; AppendToBuffer("set%s%c ", mnem, operand_size_code()); @@ -814,7 +817,7 @@ int DisassemblerX64::SetCC(byte* data) { // Returns number of bytes used, including *data. int DisassemblerX64::FPUInstruction(byte* data) { byte escape_opcode = *data; - ASSERT_EQ(0xD8, escape_opcode & 0xF8); + DCHECK_EQ(0xD8, escape_opcode & 0xF8); byte modrm_byte = *(data+1); if (modrm_byte >= 0xC0) { @@ -1068,7 +1071,7 @@ int DisassemblerX64::TwoByteOpcodeInstruction(byte* data) { current += PrintRightXMMOperand(current); } else if (opcode == 0x73) { current += 1; - ASSERT(regop == 6); + DCHECK(regop == 6); AppendToBuffer("psllq,%s,%d", NameOfXMMRegister(rm), *current & 0x7f); current += 1; } else { @@ -1788,19 +1791,19 @@ int DisassemblerX64::InstructionDecode(v8::internal::Vector<char> out_buffer, } int instr_len = static_cast<int>(data - instr); - ASSERT(instr_len > 0); // Ensure progress. + DCHECK(instr_len > 0); // Ensure progress. int outp = 0; // Instruction bytes. for (byte* bp = instr; bp < data; bp++) { - outp += v8::internal::OS::SNPrintF(out_buffer + outp, "%02x", *bp); + outp += v8::internal::SNPrintF(out_buffer + outp, "%02x", *bp); } for (int i = 6 - instr_len; i >= 0; i--) { - outp += v8::internal::OS::SNPrintF(out_buffer + outp, " "); + outp += v8::internal::SNPrintF(out_buffer + outp, " "); } - outp += v8::internal::OS::SNPrintF(out_buffer + outp, " %s", - tmp_buffer_.start()); + outp += v8::internal::SNPrintF(out_buffer + outp, " %s", + tmp_buffer_.start()); return instr_len; } @@ -1827,7 +1830,7 @@ static const char* xmm_regs[16] = { const char* NameConverter::NameOfAddress(byte* addr) const { - v8::internal::OS::SNPrintF(tmp_buffer_, "%p", addr); + v8::internal::SNPrintF(tmp_buffer_, "%p", addr); return tmp_buffer_.start(); } diff --git a/deps/v8/src/x64/frames-x64.cc b/deps/v8/src/x64/frames-x64.cc index 7121d68cd39..114945b49fc 100644 --- a/deps/v8/src/x64/frames-x64.cc +++ b/deps/v8/src/x64/frames-x64.cc @@ -2,14 +2,14 @@ // Use of this source code is governed by a BSD-style license that can be // found in the LICENSE file. -#include "v8.h" +#include "src/v8.h" #if V8_TARGET_ARCH_X64 -#include "assembler.h" -#include "assembler-x64.h" -#include "assembler-x64-inl.h" -#include "frames.h" +#include "src/assembler.h" +#include "src/frames.h" +#include "src/x64/assembler-x64-inl.h" +#include "src/x64/assembler-x64.h" namespace v8 { namespace internal { diff --git a/deps/v8/src/x64/frames-x64.h b/deps/v8/src/x64/frames-x64.h index 43c11961c36..88130302849 100644 --- a/deps/v8/src/x64/frames-x64.h +++ b/deps/v8/src/x64/frames-x64.h @@ -18,8 +18,6 @@ const RegList kJSCallerSaved = const int kNumJSCallerSaved = 5; -typedef Object* JSCallerSavedBuffer[kNumJSCallerSaved]; - // Number of registers for which space is reserved in safepoints. 
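
// Hedged sketch of the decoding done in JumpConditional above: a near jcc is
// 0x0F 0x8x, with the condition in the low nibble of the second opcode byte,
// followed by a 32-bit displacement relative to the end of the 6-byte
// instruction (hence the "+ 6").
#include <cstdint>
#include <cstring>

static const uint8_t* JccTargetSketch(const uint8_t* insn, int* cond_out) {
  *cond_out = insn[1] & 0x0F;       // condition code nibble
  int32_t disp;
  std::memcpy(&disp, insn + 2, 4);  // rel32 displacement, little-endian
  return insn + 6 + disp;           // target = next-instruction pc + disp
}
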
const int kNumSafepointRegisters = 16; diff --git a/deps/v8/src/x64/full-codegen-x64.cc b/deps/v8/src/x64/full-codegen-x64.cc index 475080553ac..38b594c2bc2 100644 --- a/deps/v8/src/x64/full-codegen-x64.cc +++ b/deps/v8/src/x64/full-codegen-x64.cc @@ -2,19 +2,19 @@ // Use of this source code is governed by a BSD-style license that can be // found in the LICENSE file. -#include "v8.h" +#include "src/v8.h" #if V8_TARGET_ARCH_X64 -#include "code-stubs.h" -#include "codegen.h" -#include "compiler.h" -#include "debug.h" -#include "full-codegen.h" -#include "isolate-inl.h" -#include "parser.h" -#include "scopes.h" -#include "stub-cache.h" +#include "src/code-stubs.h" +#include "src/codegen.h" +#include "src/compiler.h" +#include "src/debug.h" +#include "src/full-codegen.h" +#include "src/isolate-inl.h" +#include "src/parser.h" +#include "src/scopes.h" +#include "src/stub-cache.h" namespace v8 { namespace internal { @@ -31,7 +31,7 @@ class JumpPatchSite BASE_EMBEDDED { } ~JumpPatchSite() { - ASSERT(patch_site_.is_bound() == info_emitted_); + DCHECK(patch_site_.is_bound() == info_emitted_); } void EmitJumpIfNotSmi(Register reg, @@ -51,7 +51,7 @@ class JumpPatchSite BASE_EMBEDDED { void EmitPatchInfo() { if (patch_site_.is_bound()) { int delta_to_patch_site = masm_->SizeOfCodeGeneratedSince(&patch_site_); - ASSERT(is_uint8(delta_to_patch_site)); + DCHECK(is_uint8(delta_to_patch_site)); __ testl(rax, Immediate(delta_to_patch_site)); #ifdef DEBUG info_emitted_ = true; @@ -64,8 +64,8 @@ class JumpPatchSite BASE_EMBEDDED { private: // jc will be patched with jz, jnc will become jnz. void EmitJump(Condition cc, Label* target, Label::Distance near_jump) { - ASSERT(!patch_site_.is_bound() && !info_emitted_); - ASSERT(cc == carry || cc == not_carry); + DCHECK(!patch_site_.is_bound() && !info_emitted_); + DCHECK(cc == carry || cc == not_carry); __ bind(&patch_site_); __ j(cc, target, near_jump); } @@ -78,27 +78,6 @@ class JumpPatchSite BASE_EMBEDDED { }; -static void EmitStackCheck(MacroAssembler* masm_, - int pointers = 0, - Register scratch = rsp) { - Isolate* isolate = masm_->isolate(); - Label ok; - ASSERT(scratch.is(rsp) == (pointers == 0)); - Heap::RootListIndex index; - if (pointers != 0) { - __ movp(scratch, rsp); - __ subp(scratch, Immediate(pointers * kPointerSize)); - index = Heap::kRealStackLimitRootIndex; - } else { - index = Heap::kStackLimitRootIndex; - } - __ CompareRoot(scratch, index); - __ j(above_equal, &ok, Label::kNear); - __ call(isolate->builtins()->StackCheck(), RelocInfo::CODE_TARGET); - __ bind(&ok); -} - - // Generate code for a JS function. On entry to the function the receiver // and arguments have been pushed on the stack left to right, with the // return address on top of them. The actual argument count matches the @@ -144,7 +123,7 @@ void FullCodeGenerator::Generate() { __ j(not_equal, &ok, Label::kNear); __ movp(rcx, GlobalObjectOperand()); - __ movp(rcx, FieldOperand(rcx, GlobalObject::kGlobalReceiverOffset)); + __ movp(rcx, FieldOperand(rcx, GlobalObject::kGlobalProxyOffset)); __ movp(args.GetReceiverOperand(), rcx); @@ -157,18 +136,24 @@ void FullCodeGenerator::Generate() { FrameScope frame_scope(masm_, StackFrame::MANUAL); info->set_prologue_offset(masm_->pc_offset()); - __ Prologue(BUILD_FUNCTION_FRAME); + __ Prologue(info->IsCodePreAgingActive()); info->AddNoFrameRange(0, masm_->pc_offset()); { Comment cmnt(masm_, "[ Allocate locals"); int locals_count = info->scope()->num_stack_slots(); // Generators allocate locals, if any, in context slots. 
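
// Hedged sketch of the choice the removed EmitStackCheck helper made, now
// inlined at its call sites further down: with a known number of new locals,
// compare (rsp - locals * kPointerSize) against the *real* stack limit; with
// none, compare rsp itself against the interrupt-triggering stack limit.
// Plain C++ stand-in for the emitted assembly.
#include <cstddef>
#include <cstdint>

static bool NeedsStackGuardSketch(uintptr_t sp, size_t locals,
                                  uintptr_t real_limit, uintptr_t limit) {
  if (locals != 0) return sp - locals * 8 < real_limit;  // kPointerSize == 8
  return sp < limit;  // also trips when an interrupt lowered the limit
}
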
- ASSERT(!info->function()->is_generator() || locals_count == 0); + DCHECK(!info->function()->is_generator() || locals_count == 0); if (locals_count == 1) { __ PushRoot(Heap::kUndefinedValueRootIndex); } else if (locals_count > 1) { if (locals_count >= 128) { - EmitStackCheck(masm_, locals_count, rcx); + Label ok; + __ movp(rcx, rsp); + __ subp(rcx, Immediate(locals_count * kPointerSize)); + __ CompareRoot(rcx, Heap::kRealStackLimitRootIndex); + __ j(above_equal, &ok, Label::kNear); + __ InvokeBuiltin(Builtins::STACK_OVERFLOW, CALL_FUNCTION); + __ bind(&ok); } __ LoadRoot(rdx, Heap::kUndefinedValueRootIndex); const int kMaxPushes = 32; @@ -199,17 +184,20 @@ void FullCodeGenerator::Generate() { int heap_slots = info->scope()->num_heap_slots() - Context::MIN_CONTEXT_SLOTS; if (heap_slots > 0) { Comment cmnt(masm_, "[ Allocate context"); + bool need_write_barrier = true; // Argument to NewContext is the function, which is still in rdi. if (FLAG_harmony_scoping && info->scope()->is_global_scope()) { __ Push(rdi); __ Push(info->scope()->GetScopeInfo()); - __ CallRuntime(Runtime::kHiddenNewGlobalContext, 2); + __ CallRuntime(Runtime::kNewGlobalContext, 2); } else if (heap_slots <= FastNewContextStub::kMaximumSlots) { FastNewContextStub stub(isolate(), heap_slots); __ CallStub(&stub); + // Result of FastNewContextStub is always in new space. + need_write_barrier = false; } else { __ Push(rdi); - __ CallRuntime(Runtime::kHiddenNewFunctionContext, 1); + __ CallRuntime(Runtime::kNewFunctionContext, 1); } function_in_register = false; // Context is returned in rax. It replaces the context passed to us. @@ -230,8 +218,15 @@ void FullCodeGenerator::Generate() { int context_offset = Context::SlotOffset(var->index()); __ movp(Operand(rsi, context_offset), rax); // Update the write barrier. This clobbers rax and rbx. - __ RecordWriteContextSlot( - rsi, context_offset, rax, rbx, kDontSaveFPRegs); + if (need_write_barrier) { + __ RecordWriteContextSlot( + rsi, context_offset, rax, rbx, kDontSaveFPRegs); + } else if (FLAG_debug_code) { + Label done; + __ JumpIfInNewSpace(rsi, rax, &done, Label::kNear); + __ Abort(kExpectedNewSpaceObject); + __ bind(&done); + } } } } @@ -289,9 +284,9 @@ void FullCodeGenerator::Generate() { // constant. 
if (scope()->is_function_scope() && scope()->function() != NULL) { VariableDeclaration* function = scope()->function(); - ASSERT(function->proxy()->var()->mode() == CONST || + DCHECK(function->proxy()->var()->mode() == CONST || function->proxy()->var()->mode() == CONST_LEGACY); - ASSERT(function->proxy()->var()->location() != Variable::UNALLOCATED); + DCHECK(function->proxy()->var()->location() != Variable::UNALLOCATED); VisitVariableDeclaration(function); } VisitDeclarations(scope()->declarations()); @@ -299,13 +294,17 @@ void FullCodeGenerator::Generate() { { Comment cmnt(masm_, "[ Stack check"); PrepareForBailoutForId(BailoutId::Declarations(), NO_REGISTERS); - EmitStackCheck(masm_); + Label ok; + __ CompareRoot(rsp, Heap::kStackLimitRootIndex); + __ j(above_equal, &ok, Label::kNear); + __ call(isolate()->builtins()->StackCheck(), RelocInfo::CODE_TARGET); + __ bind(&ok); } { Comment cmnt(masm_, "[ Body"); - ASSERT(loop_depth() == 0); + DCHECK(loop_depth() == 0); VisitStatements(function()->body()); - ASSERT(loop_depth() == 0); + DCHECK(loop_depth() == 0); } } @@ -346,7 +345,7 @@ void FullCodeGenerator::EmitBackEdgeBookkeeping(IterationStatement* stmt, Comment cmnt(masm_, "[ Back edge bookkeeping"); Label ok; - ASSERT(back_edge_target->is_bound()); + DCHECK(back_edge_target->is_bound()); int distance = masm_->SizeOfCodeGeneratedSince(back_edge_target); int weight = Min(kMaxBackEdgeWeight, Max(1, distance / kCodeSizeMultiplier)); @@ -429,7 +428,7 @@ void FullCodeGenerator::EmitReturnSequence() { } // Check that the size of the code used for returning is large enough // for the debugger's requirements. - ASSERT(Assembler::kJSReturnSequenceLength <= + DCHECK(Assembler::kJSReturnSequenceLength <= masm_->SizeOfCodeGeneratedSince(&check_exit_codesize)); info_->AddNoFrameRange(no_frame_start, masm_->pc_offset()); @@ -438,18 +437,18 @@ void FullCodeGenerator::EmitReturnSequence() { void FullCodeGenerator::EffectContext::Plug(Variable* var) const { - ASSERT(var->IsStackAllocated() || var->IsContextSlot()); + DCHECK(var->IsStackAllocated() || var->IsContextSlot()); } void FullCodeGenerator::AccumulatorValueContext::Plug(Variable* var) const { - ASSERT(var->IsStackAllocated() || var->IsContextSlot()); + DCHECK(var->IsStackAllocated() || var->IsContextSlot()); codegen()->GetVar(result_register(), var); } void FullCodeGenerator::StackValueContext::Plug(Variable* var) const { - ASSERT(var->IsStackAllocated() || var->IsContextSlot()); + DCHECK(var->IsStackAllocated() || var->IsContextSlot()); MemOperand operand = codegen()->VarOperand(var, result_register()); __ Push(operand); } @@ -524,7 +523,7 @@ void FullCodeGenerator::TestContext::Plug(Handle<Object> lit) const { true, true_label_, false_label_); - ASSERT(!lit->IsUndetectableObject()); // There are no undetectable literals. + DCHECK(!lit->IsUndetectableObject()); // There are no undetectable literals. 
if (lit->IsUndefined() || lit->IsNull() || lit->IsFalse()) { if (false_label_ != fall_through_) __ jmp(false_label_); } else if (lit->IsTrue() || lit->IsJSObject()) { @@ -551,7 +550,7 @@ void FullCodeGenerator::TestContext::Plug(Handle<Object> lit) const { void FullCodeGenerator::EffectContext::DropAndPlug(int count, Register reg) const { - ASSERT(count > 0); + DCHECK(count > 0); __ Drop(count); } @@ -559,7 +558,7 @@ void FullCodeGenerator::EffectContext::DropAndPlug(int count, void FullCodeGenerator::AccumulatorValueContext::DropAndPlug( int count, Register reg) const { - ASSERT(count > 0); + DCHECK(count > 0); __ Drop(count); __ Move(result_register(), reg); } @@ -567,7 +566,7 @@ void FullCodeGenerator::AccumulatorValueContext::DropAndPlug( void FullCodeGenerator::StackValueContext::DropAndPlug(int count, Register reg) const { - ASSERT(count > 0); + DCHECK(count > 0); if (count > 1) __ Drop(count - 1); __ movp(Operand(rsp, 0), reg); } @@ -575,7 +574,7 @@ void FullCodeGenerator::StackValueContext::DropAndPlug(int count, void FullCodeGenerator::TestContext::DropAndPlug(int count, Register reg) const { - ASSERT(count > 0); + DCHECK(count > 0); // For simplicity we always test the accumulator register. __ Drop(count); __ Move(result_register(), reg); @@ -586,7 +585,7 @@ void FullCodeGenerator::TestContext::DropAndPlug(int count, void FullCodeGenerator::EffectContext::Plug(Label* materialize_true, Label* materialize_false) const { - ASSERT(materialize_true == materialize_false); + DCHECK(materialize_true == materialize_false); __ bind(materialize_true); } @@ -619,8 +618,8 @@ void FullCodeGenerator::StackValueContext::Plug( void FullCodeGenerator::TestContext::Plug(Label* materialize_true, Label* materialize_false) const { - ASSERT(materialize_true == true_label_); - ASSERT(materialize_false == false_label_); + DCHECK(materialize_true == true_label_); + DCHECK(materialize_false == false_label_); } @@ -683,7 +682,7 @@ void FullCodeGenerator::Split(Condition cc, MemOperand FullCodeGenerator::StackOperand(Variable* var) { - ASSERT(var->IsStackAllocated()); + DCHECK(var->IsStackAllocated()); // Offset is negative because higher indexes are at lower addresses. int offset = -var->index() * kPointerSize; // Adjust by a (parameter or local) base offset. 
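
// Hedged sketch of StackOperand's arithmetic above: stack-allocated variables
// are addressed frame-relative, and higher variable indexes live at lower
// addresses, so the slot index is negated and then adjusted by a base offset
// that differs for parameters (above the saved fp and return address) and
// locals. The base constants here are illustrative stand-ins.
static int StackSlotOffsetSketch(int index, bool is_parameter) {
  const int kPointerSize = 8;          // x64
  int offset = -index * kPointerSize;  // higher index => lower address
  // Parameters sit above the saved frame pointer and return address.
  offset += is_parameter ? 2 * kPointerSize : 0;  // illustrative base only
  return offset;
}
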
@@ -698,7 +697,7 @@ MemOperand FullCodeGenerator::StackOperand(Variable* var) { MemOperand FullCodeGenerator::VarOperand(Variable* var, Register scratch) { - ASSERT(var->IsContextSlot() || var->IsStackAllocated()); + DCHECK(var->IsContextSlot() || var->IsStackAllocated()); if (var->IsContextSlot()) { int context_chain_length = scope()->ContextChainLength(var->scope()); __ LoadContext(scratch, context_chain_length); @@ -710,7 +709,7 @@ MemOperand FullCodeGenerator::VarOperand(Variable* var, Register scratch) { void FullCodeGenerator::GetVar(Register dest, Variable* var) { - ASSERT(var->IsContextSlot() || var->IsStackAllocated()); + DCHECK(var->IsContextSlot() || var->IsStackAllocated()); MemOperand location = VarOperand(var, dest); __ movp(dest, location); } @@ -720,10 +719,10 @@ void FullCodeGenerator::SetVar(Variable* var, Register src, Register scratch0, Register scratch1) { - ASSERT(var->IsContextSlot() || var->IsStackAllocated()); - ASSERT(!scratch0.is(src)); - ASSERT(!scratch0.is(scratch1)); - ASSERT(!scratch1.is(src)); + DCHECK(var->IsContextSlot() || var->IsStackAllocated()); + DCHECK(!scratch0.is(src)); + DCHECK(!scratch0.is(scratch1)); + DCHECK(!scratch1.is(src)); MemOperand location = VarOperand(var, scratch0); __ movp(location, src); @@ -757,7 +756,7 @@ void FullCodeGenerator::PrepareForBailoutBeforeSplit(Expression* expr, void FullCodeGenerator::EmitDebugCheckDeclarationContext(Variable* variable) { // The variable in the declaration always resides in the current context. - ASSERT_EQ(0, scope()->ContextChainLength(variable->scope())); + DCHECK_EQ(0, scope()->ContextChainLength(variable->scope())); if (generate_debug_code_) { // Check that we're not inside a with or catch context. __ movp(rbx, FieldOperand(rsi, HeapObject::kMapOffset)); @@ -812,7 +811,7 @@ void FullCodeGenerator::VisitVariableDeclaration( __ Push(rsi); __ Push(variable->name()); // Declaration nodes are always introduced in one of four modes. - ASSERT(IsDeclaredVariableMode(mode)); + DCHECK(IsDeclaredVariableMode(mode)); PropertyAttributes attr = IsImmutableVariableMode(mode) ? READ_ONLY : NONE; __ Push(Smi::FromInt(attr)); @@ -825,7 +824,7 @@ void FullCodeGenerator::VisitVariableDeclaration( } else { __ Push(Smi::FromInt(0)); // Indicates no initial value. } - __ CallRuntime(Runtime::kHiddenDeclareContextSlot, 4); + __ CallRuntime(Runtime::kDeclareLookupSlot, 4); break; } } @@ -840,7 +839,7 @@ void FullCodeGenerator::VisitFunctionDeclaration( case Variable::UNALLOCATED: { globals_->Add(variable->name(), zone()); Handle<SharedFunctionInfo> function = - Compiler::BuildFunctionInfo(declaration->fun(), script()); + Compiler::BuildFunctionInfo(declaration->fun(), script(), info_); // Check for stack-overflow exception. 
if (function.is_null()) return SetStackOverflow(); globals_->Add(function, zone()); @@ -879,7 +878,7 @@ void FullCodeGenerator::VisitFunctionDeclaration( __ Push(variable->name()); __ Push(Smi::FromInt(NONE)); VisitForStackValue(declaration->fun()); - __ CallRuntime(Runtime::kHiddenDeclareContextSlot, 4); + __ CallRuntime(Runtime::kDeclareLookupSlot, 4); break; } } @@ -888,8 +887,8 @@ void FullCodeGenerator::VisitFunctionDeclaration( void FullCodeGenerator::VisitModuleDeclaration(ModuleDeclaration* declaration) { Variable* variable = declaration->proxy()->var(); - ASSERT(variable->location() == Variable::CONTEXT); - ASSERT(variable->interface()->IsFrozen()); + DCHECK(variable->location() == Variable::CONTEXT); + DCHECK(variable->interface()->IsFrozen()); Comment cmnt(masm_, "[ ModuleDeclaration"); EmitDebugCheckDeclarationContext(variable); @@ -949,7 +948,7 @@ void FullCodeGenerator::DeclareGlobals(Handle<FixedArray> pairs) { __ Push(rsi); // The context is the first argument. __ Push(pairs); __ Push(Smi::FromInt(DeclareGlobalsFlags())); - __ CallRuntime(Runtime::kHiddenDeclareGlobals, 3); + __ CallRuntime(Runtime::kDeclareGlobals, 3); // Return value is ignored. } @@ -957,7 +956,7 @@ void FullCodeGenerator::DeclareGlobals(Handle<FixedArray> pairs) { void FullCodeGenerator::DeclareModules(Handle<FixedArray> descriptions) { // Call the runtime to declare the modules. __ Push(descriptions); - __ CallRuntime(Runtime::kHiddenDeclareModules, 1); + __ CallRuntime(Runtime::kDeclareModules, 1); // Return value is ignored. } @@ -1242,24 +1241,8 @@ void FullCodeGenerator::VisitForOfStatement(ForOfStatement* stmt) { Iteration loop_statement(this, stmt); increment_loop_depth(); - // var iterator = iterable[@@iterator]() - VisitForAccumulatorValue(stmt->assign_iterator()); - - // As with for-in, skip the loop if the iterator is null or undefined. - __ CompareRoot(rax, Heap::kUndefinedValueRootIndex); - __ j(equal, loop_statement.break_label()); - __ CompareRoot(rax, Heap::kNullValueRootIndex); - __ j(equal, loop_statement.break_label()); - - // Convert the iterator to a JS object. - Label convert, done_convert; - __ JumpIfSmi(rax, &convert); - __ CmpObjectType(rax, FIRST_SPEC_OBJECT_TYPE, rcx); - __ j(above_equal, &done_convert); - __ bind(&convert); - __ Push(rax); - __ InvokeBuiltin(Builtins::TO_OBJECT, CALL_FUNCTION); - __ bind(&done_convert); + // var iterator = iterable[Symbol.iterator](); + VisitForEffect(stmt->assign_iterator()); // Loop entry. __ bind(loop_statement.continue_label()); @@ -1317,7 +1300,7 @@ void FullCodeGenerator::EmitNewClosure(Handle<SharedFunctionInfo> info, __ Push(pretenure ? isolate()->factory()->true_value() : isolate()->factory()->false_value()); - __ CallRuntime(Runtime::kHiddenNewClosure, 3); + __ CallRuntime(Runtime::kNewClosure, 3); } context()->Plug(rax); } @@ -1329,7 +1312,7 @@ void FullCodeGenerator::VisitVariableProxy(VariableProxy* expr) { } -void FullCodeGenerator::EmitLoadGlobalCheckExtensions(Variable* var, +void FullCodeGenerator::EmitLoadGlobalCheckExtensions(VariableProxy* proxy, TypeofState typeof_state, Label* slow) { Register context = rsi; @@ -1380,8 +1363,13 @@ void FullCodeGenerator::EmitLoadGlobalCheckExtensions(Variable* var, // All extension objects were empty and it is safe to use a global // load IC call. 
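
// Hedged sketch of the FLAG_vector_ics plumbing threaded through this file:
// each load site carries a slot index into a per-function type-feedback
// vector, passed to the IC as a smi in LoadIC::SlotRegister(), so feedback
// lands in the vector rather than in the IC stub itself. Stand-in types only,
// not V8's TypeFeedbackVector.
#include <cstddef>
#include <vector>

struct FeedbackVectorSketch {
  std::vector<const void*> slots_;  // one entry per IC that has a slot
  const void* Get(size_t slot) const { return slots_[slot]; }
  void Set(size_t slot, const void* feedback) { slots_[slot] = feedback; }
};
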
- __ movp(rax, GlobalObjectOperand()); - __ Move(rcx, var->name()); + __ movp(LoadIC::ReceiverRegister(), GlobalObjectOperand()); + __ Move(LoadIC::NameRegister(), proxy->var()->name()); + if (FLAG_vector_ics) { + __ Move(LoadIC::SlotRegister(), + Smi::FromInt(proxy->VariableFeedbackSlot())); + } + ContextualMode mode = (typeof_state == INSIDE_TYPEOF) ? NOT_CONTEXTUAL : CONTEXTUAL; @@ -1391,7 +1379,7 @@ void FullCodeGenerator::EmitLoadGlobalCheckExtensions(Variable* var, MemOperand FullCodeGenerator::ContextSlotOperandCheckExtensions(Variable* var, Label* slow) { - ASSERT(var->IsContextSlot()); + DCHECK(var->IsContextSlot()); Register context = rsi; Register temp = rbx; @@ -1419,7 +1407,7 @@ MemOperand FullCodeGenerator::ContextSlotOperandCheckExtensions(Variable* var, } -void FullCodeGenerator::EmitDynamicLookupFastCase(Variable* var, +void FullCodeGenerator::EmitDynamicLookupFastCase(VariableProxy* proxy, TypeofState typeof_state, Label* slow, Label* done) { @@ -1428,8 +1416,9 @@ void FullCodeGenerator::EmitDynamicLookupFastCase(Variable* var, // introducing variables. In those cases, we do not want to // perform a runtime call for all variables in the scope // containing the eval. + Variable* var = proxy->var(); if (var->mode() == DYNAMIC_GLOBAL) { - EmitLoadGlobalCheckExtensions(var, typeof_state, slow); + EmitLoadGlobalCheckExtensions(proxy, typeof_state, slow); __ jmp(done); } else if (var->mode() == DYNAMIC_LOCAL) { Variable* local = var->local_if_not_shadowed(); @@ -1442,7 +1431,7 @@ void FullCodeGenerator::EmitDynamicLookupFastCase(Variable* var, __ LoadRoot(rax, Heap::kUndefinedValueRootIndex); } else { // LET || CONST __ Push(var->name()); - __ CallRuntime(Runtime::kHiddenThrowReferenceError, 1); + __ CallRuntime(Runtime::kThrowReferenceError, 1); } } __ jmp(done); @@ -1460,10 +1449,12 @@ void FullCodeGenerator::EmitVariableLoad(VariableProxy* proxy) { switch (var->location()) { case Variable::UNALLOCATED: { Comment cmnt(masm_, "[ Global variable"); - // Use inline caching. Variable name is passed in rcx and the global - // object on the stack. - __ Move(rcx, var->name()); - __ movp(rax, GlobalObjectOperand()); + __ Move(LoadIC::NameRegister(), var->name()); + __ movp(LoadIC::ReceiverRegister(), GlobalObjectOperand()); + if (FLAG_vector_ics) { + __ Move(LoadIC::SlotRegister(), + Smi::FromInt(proxy->VariableFeedbackSlot())); + } CallLoadIC(CONTEXTUAL); context()->Plug(rax); break; @@ -1480,7 +1471,7 @@ void FullCodeGenerator::EmitVariableLoad(VariableProxy* proxy) { // always looked up dynamically, i.e. in that case // var->location() == LOOKUP. // always holds. - ASSERT(var->scope() != NULL); + DCHECK(var->scope() != NULL); // Check if the binding really needs an initialization check. The check // can be skipped in the following situation: we have a LET or CONST @@ -1503,8 +1494,8 @@ void FullCodeGenerator::EmitVariableLoad(VariableProxy* proxy) { skip_init_check = false; } else { // Check that we always have valid source position. - ASSERT(var->initializer_position() != RelocInfo::kNoPosition); - ASSERT(proxy->position() != RelocInfo::kNoPosition); + DCHECK(var->initializer_position() != RelocInfo::kNoPosition); + DCHECK(proxy->position() != RelocInfo::kNoPosition); skip_init_check = var->mode() != CONST_LEGACY && var->initializer_position() < proxy->position(); } @@ -1519,10 +1510,10 @@ void FullCodeGenerator::EmitVariableLoad(VariableProxy* proxy) { // Throw a reference error when using an uninitialized let/const // binding in harmony mode. 
__ Push(var->name());
-        __ CallRuntime(Runtime::kHiddenThrowReferenceError, 1);
+        __ CallRuntime(Runtime::kThrowReferenceError, 1);
       } else {
         // Uninitialized const bindings outside of harmony mode are unholed.
-        ASSERT(var->mode() == CONST_LEGACY);
+        DCHECK(var->mode() == CONST_LEGACY);
         __ LoadRoot(rax, Heap::kUndefinedValueRootIndex);
       }
       __ bind(&done);
@@ -1539,11 +1530,11 @@ void FullCodeGenerator::EmitVariableLoad(VariableProxy* proxy) {
       Label done, slow;
       // Generate code for loading from variables potentially shadowed
       // by eval-introduced variables.
-      EmitDynamicLookupFastCase(var, NOT_INSIDE_TYPEOF, &slow, &done);
+      EmitDynamicLookupFastCase(proxy, NOT_INSIDE_TYPEOF, &slow, &done);
      __ bind(&slow);
      __ Push(rsi);  // Context.
      __ Push(var->name());
-      __ CallRuntime(Runtime::kHiddenLoadContextSlot, 2);
+      __ CallRuntime(Runtime::kLoadLookupSlot, 2);
      __ bind(&done);
      context()->Plug(rax);
      break;
@@ -1574,7 +1565,7 @@ void FullCodeGenerator::VisitRegExpLiteral(RegExpLiteral* expr) {
  __ Push(Smi::FromInt(expr->literal_index()));
  __ Push(expr->pattern());
  __ Push(expr->flags());
-  __ CallRuntime(Runtime::kHiddenMaterializeRegExpLiteral, 4);
+  __ CallRuntime(Runtime::kMaterializeRegExpLiteral, 4);
  __ movp(rbx, rax);
  __ bind(&materialized);
@@ -1586,7 +1577,7 @@
  __ bind(&runtime_allocate);
  __ Push(rbx);
  __ Push(Smi::FromInt(size));
-  __ CallRuntime(Runtime::kHiddenAllocateInNewSpace, 1);
+  __ CallRuntime(Runtime::kAllocateInNewSpace, 1);
  __ Pop(rbx);
  __ bind(&allocated);
@@ -1628,14 +1619,14 @@ void FullCodeGenerator::VisitObjectLiteral(ObjectLiteral* expr) {
      : ObjectLiteral::kNoFlags;
  int properties_count = constant_properties->length() / 2;
  if (expr->may_store_doubles() || expr->depth() > 1 ||
-      Serializer::enabled(isolate()) || flags != ObjectLiteral::kFastElements ||
+      masm()->serializer_enabled() || flags != ObjectLiteral::kFastElements ||
      properties_count > FastCloneShallowObjectStub::kMaximumClonedProperties) {
    __ movp(rdi, Operand(rbp, JavaScriptFrameConstants::kFunctionOffset));
    __ Push(FieldOperand(rdi, JSFunction::kLiteralsOffset));
    __ Push(Smi::FromInt(expr->literal_index()));
    __ Push(constant_properties);
    __ Push(Smi::FromInt(flags));
-    __ CallRuntime(Runtime::kHiddenCreateObjectLiteral, 4);
+    __ CallRuntime(Runtime::kCreateObjectLiteral, 4);
  } else {
    __ movp(rdi, Operand(rbp, JavaScriptFrameConstants::kFunctionOffset));
    __ movp(rax, FieldOperand(rdi, JSFunction::kLiteralsOffset));
@@ -1670,14 +1661,15 @@ void FullCodeGenerator::VisitObjectLiteral(ObjectLiteral* expr) {
      case ObjectLiteral::Property::CONSTANT:
        UNREACHABLE();
      case ObjectLiteral::Property::MATERIALIZED_LITERAL:
-        ASSERT(!CompileTimeValue::IsCompileTimeValue(value));
+        DCHECK(!CompileTimeValue::IsCompileTimeValue(value));
        // Fall through.
case ObjectLiteral::Property::COMPUTED: if (key->value()->IsInternalizedString()) { if (property->emit_store()) { VisitForAccumulatorValue(value); - __ Move(rcx, key->value()); - __ movp(rdx, Operand(rsp, 0)); + DCHECK(StoreIC::ValueRegister().is(rax)); + __ Move(StoreIC::NameRegister(), key->value()); + __ movp(StoreIC::ReceiverRegister(), Operand(rsp, 0)); CallStoreIC(key->LiteralFeedbackId()); PrepareForBailoutForId(key->id(), NO_REGISTERS); } else { @@ -1689,7 +1681,7 @@ void FullCodeGenerator::VisitObjectLiteral(ObjectLiteral* expr) { VisitForStackValue(key); VisitForStackValue(value); if (property->emit_store()) { - __ Push(Smi::FromInt(NONE)); // PropertyAttributes + __ Push(Smi::FromInt(SLOPPY)); // Strict mode __ CallRuntime(Runtime::kSetProperty, 4); } else { __ Drop(3); @@ -1723,11 +1715,11 @@ void FullCodeGenerator::VisitObjectLiteral(ObjectLiteral* expr) { EmitAccessor(it->second->getter); EmitAccessor(it->second->setter); __ Push(Smi::FromInt(NONE)); - __ CallRuntime(Runtime::kDefineOrRedefineAccessorProperty, 5); + __ CallRuntime(Runtime::kDefineAccessorPropertyUnchecked, 5); } if (expr->has_function()) { - ASSERT(result_saved); + DCHECK(result_saved); __ Push(Operand(rsp, 0)); __ CallRuntime(Runtime::kToFastProperties, 1); } @@ -1751,7 +1743,7 @@ void FullCodeGenerator::VisitArrayLiteral(ArrayLiteral* expr) { ZoneList<Expression*>* subexprs = expr->values(); int length = subexprs->length(); Handle<FixedArray> constant_elements = expr->constant_elements(); - ASSERT_EQ(2, constant_elements->length()); + DCHECK_EQ(2, constant_elements->length()); ElementsKind constant_elements_kind = static_cast<ElementsKind>(Smi::cast(constant_elements->get(0))->value()); bool has_constant_fast_elements = @@ -1766,49 +1758,19 @@ void FullCodeGenerator::VisitArrayLiteral(ArrayLiteral* expr) { allocation_site_mode = DONT_TRACK_ALLOCATION_SITE; } - Heap* heap = isolate()->heap(); - if (has_constant_fast_elements && - constant_elements_values->map() == heap->fixed_cow_array_map()) { - // If the elements are already FAST_*_ELEMENTS, the boilerplate cannot - // change, so it's possible to specialize the stub in advance. - __ IncrementCounter(isolate()->counters()->cow_arrays_created_stub(), 1); - __ movp(rbx, Operand(rbp, JavaScriptFrameConstants::kFunctionOffset)); - __ movp(rax, FieldOperand(rbx, JSFunction::kLiteralsOffset)); - __ Move(rbx, Smi::FromInt(expr->literal_index())); - __ Move(rcx, constant_elements); - FastCloneShallowArrayStub stub( - isolate(), - FastCloneShallowArrayStub::COPY_ON_WRITE_ELEMENTS, - allocation_site_mode, - length); - __ CallStub(&stub); - } else if (expr->depth() > 1 || Serializer::enabled(isolate()) || - length > FastCloneShallowArrayStub::kMaximumClonedLength) { + if (expr->depth() > 1 || length > JSObject::kInitialMaxFastElementArray) { __ movp(rbx, Operand(rbp, JavaScriptFrameConstants::kFunctionOffset)); __ Push(FieldOperand(rbx, JSFunction::kLiteralsOffset)); __ Push(Smi::FromInt(expr->literal_index())); __ Push(constant_elements); __ Push(Smi::FromInt(flags)); - __ CallRuntime(Runtime::kHiddenCreateArrayLiteral, 4); + __ CallRuntime(Runtime::kCreateArrayLiteral, 4); } else { - ASSERT(IsFastSmiOrObjectElementsKind(constant_elements_kind) || - FLAG_smi_only_arrays); - FastCloneShallowArrayStub::Mode mode = - FastCloneShallowArrayStub::CLONE_ANY_ELEMENTS; - - // If the elements are already FAST_*_ELEMENTS, the boilerplate cannot - // change, so it's possible to specialize the stub in advance. 
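
// Hedged sketch of the copy-on-write idea behind the fast path removed
// above (COPY_ON_WRITE_ELEMENTS / cow_arrays_created_stub): clones of an
// array literal can share the boilerplate's backing store, and a private
// copy is made only when a clone writes. Simplified stand-in, not V8's
// FixedArray machinery.
#include <cstddef>
#include <memory>
#include <vector>

struct CowElementsSketch {
  std::shared_ptr<std::vector<int>> store;  // shared until first write

  void Write(size_t i, int v) {
    if (store.use_count() > 1)  // still shared: copy before writing
      store = std::make_shared<std::vector<int>>(*store);
    (*store)[i] = v;
  }
};
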
- if (has_constant_fast_elements) { - mode = FastCloneShallowArrayStub::CLONE_ELEMENTS; - } - __ movp(rbx, Operand(rbp, JavaScriptFrameConstants::kFunctionOffset)); __ movp(rax, FieldOperand(rbx, JSFunction::kLiteralsOffset)); __ Move(rbx, Smi::FromInt(expr->literal_index())); __ Move(rcx, constant_elements); - FastCloneShallowArrayStub stub(isolate(), - mode, - allocation_site_mode, length); + FastCloneShallowArrayStub stub(isolate(), allocation_site_mode); __ CallStub(&stub); } @@ -1862,7 +1824,7 @@ void FullCodeGenerator::VisitArrayLiteral(ArrayLiteral* expr) { void FullCodeGenerator::VisitAssignment(Assignment* expr) { - ASSERT(expr->target()->IsValidReferenceExpression()); + DCHECK(expr->target()->IsValidReferenceExpression()); Comment cmnt(masm_, "[ Assignment"); @@ -1884,9 +1846,9 @@ void FullCodeGenerator::VisitAssignment(Assignment* expr) { break; case NAMED_PROPERTY: if (expr->is_compound()) { - // We need the receiver both on the stack and in the accumulator. - VisitForAccumulatorValue(property->obj()); - __ Push(result_register()); + // We need the receiver both on the stack and in the register. + VisitForStackValue(property->obj()); + __ movp(LoadIC::ReceiverRegister(), Operand(rsp, 0)); } else { VisitForStackValue(property->obj()); } @@ -1894,9 +1856,9 @@ void FullCodeGenerator::VisitAssignment(Assignment* expr) { case KEYED_PROPERTY: { if (expr->is_compound()) { VisitForStackValue(property->obj()); - VisitForAccumulatorValue(property->key()); - __ movp(rdx, Operand(rsp, 0)); - __ Push(rax); + VisitForStackValue(property->key()); + __ movp(LoadIC::ReceiverRegister(), Operand(rsp, kPointerSize)); + __ movp(LoadIC::NameRegister(), Operand(rsp, 0)); } else { VisitForStackValue(property->obj()); VisitForStackValue(property->key()); @@ -1992,7 +1954,7 @@ void FullCodeGenerator::VisitYield(Yield* expr) { __ bind(&suspend); VisitForAccumulatorValue(expr->generator_object()); - ASSERT(continuation.pos() > 0 && Smi::IsValid(continuation.pos())); + DCHECK(continuation.pos() > 0 && Smi::IsValid(continuation.pos())); __ Move(FieldOperand(rax, JSGeneratorObject::kContinuationOffset), Smi::FromInt(continuation.pos())); __ movp(FieldOperand(rax, JSGeneratorObject::kContextOffset), rsi); @@ -2003,7 +1965,7 @@ void FullCodeGenerator::VisitYield(Yield* expr) { __ cmpp(rsp, rbx); __ j(equal, &post_runtime); __ Push(rax); // generator object - __ CallRuntime(Runtime::kHiddenSuspendJSGeneratorObject, 1); + __ CallRuntime(Runtime::kSuspendJSGeneratorObject, 1); __ movp(context_register(), Operand(rbp, StandardFrameConstants::kContextOffset)); __ bind(&post_runtime); @@ -2037,6 +1999,9 @@ void FullCodeGenerator::VisitYield(Yield* expr) { Label l_catch, l_try, l_suspend, l_continuation, l_resume; Label l_next, l_call, l_loop; + Register load_receiver = LoadIC::ReceiverRegister(); + Register load_name = LoadIC::NameRegister(); + // Initial send value is undefined. 
__ LoadRoot(rax, Heap::kUndefinedValueRootIndex); __ jmp(&l_next); @@ -2044,10 +2009,10 @@ void FullCodeGenerator::VisitYield(Yield* expr) { // catch (e) { receiver = iter; f = 'throw'; arg = e; goto l_call; } __ bind(&l_catch); handler_table()->set(expr->index(), Smi::FromInt(l_catch.pos())); - __ LoadRoot(rcx, Heap::kthrow_stringRootIndex); // "throw" - __ Push(rcx); - __ Push(Operand(rsp, 2 * kPointerSize)); // iter - __ Push(rax); // exception + __ LoadRoot(load_name, Heap::kthrow_stringRootIndex); // "throw" + __ Push(load_name); + __ Push(Operand(rsp, 2 * kPointerSize)); // iter + __ Push(rax); // exception __ jmp(&l_call); // try { received = %yield result } @@ -2065,14 +2030,14 @@ void FullCodeGenerator::VisitYield(Yield* expr) { const int generator_object_depth = kPointerSize + handler_size; __ movp(rax, Operand(rsp, generator_object_depth)); __ Push(rax); // g - ASSERT(l_continuation.pos() > 0 && Smi::IsValid(l_continuation.pos())); + DCHECK(l_continuation.pos() > 0 && Smi::IsValid(l_continuation.pos())); __ Move(FieldOperand(rax, JSGeneratorObject::kContinuationOffset), Smi::FromInt(l_continuation.pos())); __ movp(FieldOperand(rax, JSGeneratorObject::kContextOffset), rsi); __ movp(rcx, rsi); __ RecordWriteField(rax, JSGeneratorObject::kContextOffset, rcx, rdx, kDontSaveFPRegs); - __ CallRuntime(Runtime::kHiddenSuspendJSGeneratorObject, 1); + __ CallRuntime(Runtime::kSuspendJSGeneratorObject, 1); __ movp(context_register(), Operand(rbp, StandardFrameConstants::kContextOffset)); __ Pop(rax); // result @@ -2082,15 +2047,19 @@ void FullCodeGenerator::VisitYield(Yield* expr) { // receiver = iter; f = 'next'; arg = received; __ bind(&l_next); - __ LoadRoot(rcx, Heap::knext_stringRootIndex); // "next" - __ Push(rcx); - __ Push(Operand(rsp, 2 * kPointerSize)); // iter - __ Push(rax); // received + + __ LoadRoot(load_name, Heap::knext_stringRootIndex); + __ Push(load_name); // "next" + __ Push(Operand(rsp, 2 * kPointerSize)); // iter + __ Push(rax); // received // result = receiver[f](arg); __ bind(&l_call); - __ movp(rdx, Operand(rsp, kPointerSize)); - __ movp(rax, Operand(rsp, 2 * kPointerSize)); + __ movp(load_receiver, Operand(rsp, kPointerSize)); + if (FLAG_vector_ics) { + __ Move(LoadIC::SlotRegister(), + Smi::FromInt(expr->KeyedLoadFeedbackSlot())); + } Handle<Code> ic = isolate()->builtins()->KeyedLoadIC_Initialize(); CallIC(ic, TypeFeedbackId::None()); __ movp(rdi, rax); @@ -2103,17 +2072,25 @@ void FullCodeGenerator::VisitYield(Yield* expr) { // if (!result.done) goto l_try; __ bind(&l_loop); - __ Push(rax); // save result - __ LoadRoot(rcx, Heap::kdone_stringRootIndex); // "done" - CallLoadIC(NOT_CONTEXTUAL); // result.done in rax + __ Move(load_receiver, rax); + __ Push(load_receiver); // save result + __ LoadRoot(load_name, Heap::kdone_stringRootIndex); // "done" + if (FLAG_vector_ics) { + __ Move(LoadIC::SlotRegister(), Smi::FromInt(expr->DoneFeedbackSlot())); + } + CallLoadIC(NOT_CONTEXTUAL); // rax=result.done Handle<Code> bool_ic = ToBooleanStub::GetUninitialized(isolate()); CallIC(bool_ic); __ testp(result_register(), result_register()); __ j(zero, &l_try); // result.value - __ Pop(rax); // result - __ LoadRoot(rcx, Heap::kvalue_stringRootIndex); // "value" + __ Pop(load_receiver); // result + __ LoadRoot(load_name, Heap::kvalue_stringRootIndex); // "value" + if (FLAG_vector_ics) { + __ Move(LoadIC::SlotRegister(), + Smi::FromInt(expr->ValueFeedbackSlot())); + } CallLoadIC(NOT_CONTEXTUAL); // result.value in rax context()->DropAndPlug(2, rax); // drop iter and g break; 
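
// Hedged sketch of the protocol the delegating-yield code above drives:
// repeatedly invoke the iterator's "next" (or "throw" after an exception),
// test the result's .done property, and surface .value until done. A C++
// analog with stand-in types; the real code yields each value back out.
#include <functional>

struct IterResultSketch {
  int value;
  bool done;
};

static int DelegateSketch(std::function<IterResultSketch(int)> next,
                          int first) {
  IterResultSketch r = next(first);  // result = receiver[f](arg)
  while (!r.done) {                  // if (!result.done) goto l_try
    int received = r.value;          // %yield result.value (simplified)
    r = next(received);
  }
  return r.value;                    // final result.value
}
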
@@ -2126,7 +2103,7 @@ void FullCodeGenerator::EmitGeneratorResume(Expression *generator, Expression *value, JSGeneratorObject::ResumeMode resume_mode) { // The value stays in rax, and is ultimately read by the resumed generator, as - // if CallRuntime(Runtime::kHiddenSuspendJSGeneratorObject) returned it. Or it + // if CallRuntime(Runtime::kSuspendJSGeneratorObject) returned it. Or it // is read to throw the value when the resumed generator is already closed. // rbx will hold the generator object until the activation has been resumed. VisitForStackValue(generator); @@ -2206,7 +2183,7 @@ void FullCodeGenerator::EmitGeneratorResume(Expression *generator, __ Push(rbx); __ Push(result_register()); __ Push(Smi::FromInt(resume_mode)); - __ CallRuntime(Runtime::kHiddenResumeJSGeneratorObject, 3); + __ CallRuntime(Runtime::kResumeJSGeneratorObject, 3); // Not reached: the runtime call returns elsewhere. __ Abort(kGeneratorFailedToResume); @@ -2220,14 +2197,14 @@ void FullCodeGenerator::EmitGeneratorResume(Expression *generator, } else { // Throw the provided value. __ Push(rax); - __ CallRuntime(Runtime::kHiddenThrow, 1); + __ CallRuntime(Runtime::kThrow, 1); } __ jmp(&done); // Throw error if we attempt to operate on a running generator. __ bind(&wrong_state); __ Push(rbx); - __ CallRuntime(Runtime::kHiddenThrowGeneratorStateError, 1); + __ CallRuntime(Runtime::kThrowGeneratorStateError, 1); __ bind(&done); context()->Plug(result_register()); @@ -2245,7 +2222,7 @@ void FullCodeGenerator::EmitCreateIteratorResult(bool done) { __ bind(&gc_required); __ Push(Smi::FromInt(map->instance_size())); - __ CallRuntime(Runtime::kHiddenAllocateInNewSpace, 1); + __ CallRuntime(Runtime::kAllocateInNewSpace, 1); __ movp(context_register(), Operand(rbp, StandardFrameConstants::kContextOffset)); @@ -2253,7 +2230,7 @@ void FullCodeGenerator::EmitCreateIteratorResult(bool done) { __ Move(rbx, map); __ Pop(rcx); __ Move(rdx, isolate()->factory()->ToBoolean(done)); - ASSERT_EQ(map->instance_size(), 5 * kPointerSize); + DCHECK_EQ(map->instance_size(), 5 * kPointerSize); __ movp(FieldOperand(rax, HeapObject::kMapOffset), rbx); __ Move(FieldOperand(rax, JSObject::kPropertiesOffset), isolate()->factory()->empty_fixed_array()); @@ -2274,15 +2251,25 @@ void FullCodeGenerator::EmitCreateIteratorResult(bool done) { void FullCodeGenerator::EmitNamedPropertyLoad(Property* prop) { SetSourcePosition(prop->position()); Literal* key = prop->key()->AsLiteral(); - __ Move(rcx, key->value()); - CallLoadIC(NOT_CONTEXTUAL, prop->PropertyFeedbackId()); + __ Move(LoadIC::NameRegister(), key->value()); + if (FLAG_vector_ics) { + __ Move(LoadIC::SlotRegister(), Smi::FromInt(prop->PropertyFeedbackSlot())); + CallLoadIC(NOT_CONTEXTUAL); + } else { + CallLoadIC(NOT_CONTEXTUAL, prop->PropertyFeedbackId()); + } } void FullCodeGenerator::EmitKeyedPropertyLoad(Property* prop) { SetSourcePosition(prop->position()); Handle<Code> ic = isolate()->builtins()->KeyedLoadIC_Initialize(); - CallIC(ic, prop->PropertyFeedbackId()); + if (FLAG_vector_ics) { + __ Move(LoadIC::SlotRegister(), Smi::FromInt(prop->PropertyFeedbackSlot())); + CallIC(ic); + } else { + CallIC(ic, prop->PropertyFeedbackId()); + } } @@ -2314,7 +2301,7 @@ void FullCodeGenerator::EmitInlineSmiBinaryOp(BinaryOperation* expr, __ SmiShiftArithmeticRight(rax, rdx, rcx); break; case Token::SHL: - __ SmiShiftLeft(rax, rdx, rcx); + __ SmiShiftLeft(rax, rdx, rcx, &stub_call); break; case Token::SHR: __ SmiShiftLogicalRight(rax, rdx, rcx, &stub_call); @@ -2360,7 +2347,7 @@ void 
FullCodeGenerator::EmitBinaryOp(BinaryOperation* expr, void FullCodeGenerator::EmitAssignment(Expression* expr) { - ASSERT(expr->IsValidReferenceExpression()); + DCHECK(expr->IsValidReferenceExpression()); // Left-hand side can only be a property, a global or a (parameter or local) // slot. @@ -2383,9 +2370,9 @@ void FullCodeGenerator::EmitAssignment(Expression* expr) { case NAMED_PROPERTY: { __ Push(rax); // Preserve value. VisitForAccumulatorValue(prop->obj()); - __ movp(rdx, rax); - __ Pop(rax); // Restore value. - __ Move(rcx, prop->key()->AsLiteral()->value()); + __ Move(StoreIC::ReceiverRegister(), rax); + __ Pop(StoreIC::ValueRegister()); // Restore value. + __ Move(StoreIC::NameRegister(), prop->key()->AsLiteral()->value()); CallStoreIC(); break; } @@ -2393,9 +2380,9 @@ void FullCodeGenerator::EmitAssignment(Expression* expr) { __ Push(rax); // Preserve value. VisitForStackValue(prop->obj()); VisitForAccumulatorValue(prop->key()); - __ movp(rcx, rax); - __ Pop(rdx); - __ Pop(rax); // Restore value. + __ Move(KeyedStoreIC::NameRegister(), rax); + __ Pop(KeyedStoreIC::ReceiverRegister()); + __ Pop(KeyedStoreIC::ValueRegister()); // Restore value. Handle<Code> ic = strict_mode() == SLOPPY ? isolate()->builtins()->KeyedStoreIC_Initialize() : isolate()->builtins()->KeyedStoreIC_Initialize_Strict(); @@ -2418,34 +2405,24 @@ void FullCodeGenerator::EmitStoreToStackLocalOrContextSlot( } -void FullCodeGenerator::EmitCallStoreContextSlot( - Handle<String> name, StrictMode strict_mode) { - __ Push(rax); // Value. - __ Push(rsi); // Context. - __ Push(name); - __ Push(Smi::FromInt(strict_mode)); - __ CallRuntime(Runtime::kHiddenStoreContextSlot, 4); -} - - void FullCodeGenerator::EmitVariableAssignment(Variable* var, Token::Value op) { if (var->IsUnallocated()) { // Global var, const, or let. - __ Move(rcx, var->name()); - __ movp(rdx, GlobalObjectOperand()); + __ Move(StoreIC::NameRegister(), var->name()); + __ movp(StoreIC::ReceiverRegister(), GlobalObjectOperand()); CallStoreIC(); } else if (op == Token::INIT_CONST_LEGACY) { // Const initializers need a write barrier. - ASSERT(!var->IsParameter()); // No const parameters. + DCHECK(!var->IsParameter()); // No const parameters. if (var->IsLookupSlot()) { __ Push(rax); __ Push(rsi); __ Push(var->name()); - __ CallRuntime(Runtime::kHiddenInitializeConstContextSlot, 3); + __ CallRuntime(Runtime::kInitializeLegacyConstLookupSlot, 3); } else { - ASSERT(var->IsStackLocal() || var->IsContextSlot()); + DCHECK(var->IsStackLocal() || var->IsContextSlot()); Label skip; MemOperand location = VarOperand(var, rcx); __ movp(rdx, location); @@ -2457,28 +2434,30 @@ void FullCodeGenerator::EmitVariableAssignment(Variable* var, } else if (var->mode() == LET && op != Token::INIT_LET) { // Non-initializing assignment to let variable needs a write barrier. 
- if (var->IsLookupSlot()) { - EmitCallStoreContextSlot(var->name(), strict_mode()); - } else { - ASSERT(var->IsStackAllocated() || var->IsContextSlot()); - Label assign; - MemOperand location = VarOperand(var, rcx); - __ movp(rdx, location); - __ CompareRoot(rdx, Heap::kTheHoleValueRootIndex); - __ j(not_equal, &assign, Label::kNear); - __ Push(var->name()); - __ CallRuntime(Runtime::kHiddenThrowReferenceError, 1); - __ bind(&assign); - EmitStoreToStackLocalOrContextSlot(var, location); - } + DCHECK(!var->IsLookupSlot()); + DCHECK(var->IsStackAllocated() || var->IsContextSlot()); + Label assign; + MemOperand location = VarOperand(var, rcx); + __ movp(rdx, location); + __ CompareRoot(rdx, Heap::kTheHoleValueRootIndex); + __ j(not_equal, &assign, Label::kNear); + __ Push(var->name()); + __ CallRuntime(Runtime::kThrowReferenceError, 1); + __ bind(&assign); + EmitStoreToStackLocalOrContextSlot(var, location); } else if (!var->is_const_mode() || op == Token::INIT_CONST) { - // Assignment to var or initializing assignment to let/const - // in harmony mode. if (var->IsLookupSlot()) { - EmitCallStoreContextSlot(var->name(), strict_mode()); + // Assignment to var. + __ Push(rax); // Value. + __ Push(rsi); // Context. + __ Push(var->name()); + __ Push(Smi::FromInt(strict_mode())); + __ CallRuntime(Runtime::kStoreLookupSlot, 4); } else { - ASSERT(var->IsStackAllocated() || var->IsContextSlot()); + // Assignment to var or initializing assignment to let/const in harmony + // mode. + DCHECK(var->IsStackAllocated() || var->IsContextSlot()); MemOperand location = VarOperand(var, rcx); if (generate_debug_code_ && op == Token::INIT_LET) { // Check for an uninitialized let binding. @@ -2496,13 +2475,13 @@ void FullCodeGenerator::EmitVariableAssignment(Variable* var, void FullCodeGenerator::EmitNamedPropertyAssignment(Assignment* expr) { // Assignment to a property, using a named store IC. Property* prop = expr->target()->AsProperty(); - ASSERT(prop != NULL); - ASSERT(prop->key()->AsLiteral() != NULL); + DCHECK(prop != NULL); + DCHECK(prop->key()->IsLiteral()); // Record source code position before IC call. SetSourcePosition(expr->position()); - __ Move(rcx, prop->key()->AsLiteral()->value()); - __ Pop(rdx); + __ Move(StoreIC::NameRegister(), prop->key()->AsLiteral()->value()); + __ Pop(StoreIC::ReceiverRegister()); CallStoreIC(expr->AssignmentFeedbackId()); PrepareForBailoutForId(expr->AssignmentId(), TOS_REG); @@ -2513,8 +2492,9 @@ void FullCodeGenerator::EmitNamedPropertyAssignment(Assignment* expr) { void FullCodeGenerator::EmitKeyedPropertyAssignment(Assignment* expr) { // Assignment to a property, using a keyed store IC. - __ Pop(rcx); - __ Pop(rdx); + __ Pop(KeyedStoreIC::NameRegister()); // Key. + __ Pop(KeyedStoreIC::ReceiverRegister()); + DCHECK(KeyedStoreIC::ValueRegister().is(rax)); // Record source code position before IC call. 
SetSourcePosition(expr->position()); Handle<Code> ic = strict_mode() == SLOPPY @@ -2533,13 +2513,16 @@ void FullCodeGenerator::VisitProperty(Property* expr) { if (key->IsPropertyName()) { VisitForAccumulatorValue(expr->obj()); + DCHECK(!rax.is(LoadIC::ReceiverRegister())); + __ movp(LoadIC::ReceiverRegister(), rax); EmitNamedPropertyLoad(expr); PrepareForBailoutForId(expr->LoadId(), TOS_REG); context()->Plug(rax); } else { VisitForStackValue(expr->obj()); VisitForAccumulatorValue(expr->key()); - __ Pop(rdx); + __ Move(LoadIC::NameRegister(), rax); + __ Pop(LoadIC::ReceiverRegister()); EmitKeyedPropertyLoad(expr); context()->Plug(rax); } @@ -2571,8 +2554,8 @@ void FullCodeGenerator::EmitCallWithLoadIC(Call* expr) { __ Push(isolate()->factory()->undefined_value()); } else { // Load the function from the receiver. - ASSERT(callee->IsProperty()); - __ movp(rax, Operand(rsp, 0)); + DCHECK(callee->IsProperty()); + __ movp(LoadIC::ReceiverRegister(), Operand(rsp, 0)); EmitNamedPropertyLoad(callee->AsProperty()); PrepareForBailoutForId(callee->AsProperty()->LoadId(), TOS_REG); // Push the target function under the receiver. @@ -2593,8 +2576,9 @@ void FullCodeGenerator::EmitKeyedCallWithLoadIC(Call* expr, Expression* callee = expr->expression(); // Load the function from the receiver. - ASSERT(callee->IsProperty()); - __ movp(rdx, Operand(rsp, 0)); + DCHECK(callee->IsProperty()); + __ movp(LoadIC::ReceiverRegister(), Operand(rsp, 0)); + __ Move(LoadIC::NameRegister(), rax); EmitKeyedPropertyLoad(callee->AsProperty()); PrepareForBailoutForId(callee->AsProperty()->LoadId(), TOS_REG); @@ -2654,7 +2638,7 @@ void FullCodeGenerator::EmitResolvePossiblyDirectEval(int arg_count) { __ Push(Smi::FromInt(scope()->start_position())); // Do the runtime call. - __ CallRuntime(Runtime::kHiddenResolvePossiblyDirectEval, 5); + __ CallRuntime(Runtime::kResolvePossiblyDirectEval, 5); } @@ -2714,14 +2698,14 @@ void FullCodeGenerator::VisitCall(Call* expr) { { PreservePositionScope scope(masm()->positions_recorder()); // Generate code for loading from variables potentially shadowed by // eval-introduced variables. - EmitDynamicLookupFastCase(proxy->var(), NOT_INSIDE_TYPEOF, &slow, &done); + EmitDynamicLookupFastCase(proxy, NOT_INSIDE_TYPEOF, &slow, &done); } __ bind(&slow); // Call the runtime to find the function to call (returned in rax) and // the object holding it (returned in rdx). __ Push(context_register()); __ Push(proxy->name()); - __ CallRuntime(Runtime::kHiddenLoadContextSlot, 2); + __ CallRuntime(Runtime::kLoadLookupSlot, 2); __ Push(rax); // Function. __ Push(rdx); // Receiver. @@ -2753,7 +2737,7 @@ void FullCodeGenerator::VisitCall(Call* expr) { EmitKeyedCallWithLoadIC(expr, property->key()); } } else { - ASSERT(call_type == Call::OTHER_CALL); + DCHECK(call_type == Call::OTHER_CALL); // Call to an arbitrary expression not handled specially above. { PreservePositionScope scope(masm()->positions_recorder()); VisitForStackValue(callee); @@ -2765,7 +2749,7 @@ void FullCodeGenerator::VisitCall(Call* expr) { #ifdef DEBUG // RecordJSReturnSite should have been called. - ASSERT(expr->return_is_recorded_); + DCHECK(expr->return_is_recorded_); #endif } @@ -2799,7 +2783,7 @@ void FullCodeGenerator::VisitCallNew(CallNew* expr) { // Record call targets in unoptimized code, but not in the snapshot. 
if (FLAG_pretenuring_call_new) { EnsureSlotContainsAllocationSite(expr->AllocationSiteFeedbackSlot()); - ASSERT(expr->AllocationSiteFeedbackSlot() == + DCHECK(expr->AllocationSiteFeedbackSlot() == expr->CallNewFeedbackSlot() + 1); } @@ -2815,7 +2799,7 @@ void FullCodeGenerator::VisitCallNew(CallNew* expr) { void FullCodeGenerator::EmitIsSmi(CallRuntime* expr) { ZoneList<Expression*>* args = expr->arguments(); - ASSERT(args->length() == 1); + DCHECK(args->length() == 1); VisitForAccumulatorValue(args->at(0)); @@ -2836,7 +2820,7 @@ void FullCodeGenerator::EmitIsSmi(CallRuntime* expr) { void FullCodeGenerator::EmitIsNonNegativeSmi(CallRuntime* expr) { ZoneList<Expression*>* args = expr->arguments(); - ASSERT(args->length() == 1); + DCHECK(args->length() == 1); VisitForAccumulatorValue(args->at(0)); @@ -2857,7 +2841,7 @@ void FullCodeGenerator::EmitIsNonNegativeSmi(CallRuntime* expr) { void FullCodeGenerator::EmitIsObject(CallRuntime* expr) { ZoneList<Expression*>* args = expr->arguments(); - ASSERT(args->length() == 1); + DCHECK(args->length() == 1); VisitForAccumulatorValue(args->at(0)); @@ -2889,7 +2873,7 @@ void FullCodeGenerator::EmitIsObject(CallRuntime* expr) { void FullCodeGenerator::EmitIsSpecObject(CallRuntime* expr) { ZoneList<Expression*>* args = expr->arguments(); - ASSERT(args->length() == 1); + DCHECK(args->length() == 1); VisitForAccumulatorValue(args->at(0)); @@ -2911,7 +2895,7 @@ void FullCodeGenerator::EmitIsSpecObject(CallRuntime* expr) { void FullCodeGenerator::EmitIsUndetectableObject(CallRuntime* expr) { ZoneList<Expression*>* args = expr->arguments(); - ASSERT(args->length() == 1); + DCHECK(args->length() == 1); VisitForAccumulatorValue(args->at(0)); @@ -2936,7 +2920,7 @@ void FullCodeGenerator::EmitIsUndetectableObject(CallRuntime* expr) { void FullCodeGenerator::EmitIsStringWrapperSafeForDefaultValueOf( CallRuntime* expr) { ZoneList<Expression*>* args = expr->arguments(); - ASSERT(args->length() == 1); + DCHECK(args->length() == 1); VisitForAccumulatorValue(args->at(0)); @@ -2977,10 +2961,8 @@ void FullCodeGenerator::EmitIsStringWrapperSafeForDefaultValueOf( // rcx: valid entries in the descriptor array. // Calculate the end of the descriptor array. __ imulp(rcx, rcx, Immediate(DescriptorArray::kDescriptorSize)); - SmiIndex index = masm_->SmiToIndex(rdx, rcx, kPointerSizeLog2); __ leap(rcx, - Operand( - r8, index.reg, index.scale, DescriptorArray::kFirstOffset)); + Operand(r8, rcx, times_pointer_size, DescriptorArray::kFirstOffset)); // Calculate location of the first key name. __ addp(r8, Immediate(DescriptorArray::kFirstOffset)); // Loop through all the keys in the descriptor array. 
If one of these is the @@ -3022,7 +3004,7 @@ void FullCodeGenerator::EmitIsStringWrapperSafeForDefaultValueOf( void FullCodeGenerator::EmitIsFunction(CallRuntime* expr) { ZoneList<Expression*>* args = expr->arguments(); - ASSERT(args->length() == 1); + DCHECK(args->length() == 1); VisitForAccumulatorValue(args->at(0)); @@ -3044,7 +3026,7 @@ void FullCodeGenerator::EmitIsFunction(CallRuntime* expr) { void FullCodeGenerator::EmitIsMinusZero(CallRuntime* expr) { ZoneList<Expression*>* args = expr->arguments(); - ASSERT(args->length() == 1); + DCHECK(args->length() == 1); VisitForAccumulatorValue(args->at(0)); @@ -3071,7 +3053,7 @@ void FullCodeGenerator::EmitIsMinusZero(CallRuntime* expr) { void FullCodeGenerator::EmitIsArray(CallRuntime* expr) { ZoneList<Expression*>* args = expr->arguments(); - ASSERT(args->length() == 1); + DCHECK(args->length() == 1); VisitForAccumulatorValue(args->at(0)); @@ -3093,7 +3075,7 @@ void FullCodeGenerator::EmitIsArray(CallRuntime* expr) { void FullCodeGenerator::EmitIsRegExp(CallRuntime* expr) { ZoneList<Expression*>* args = expr->arguments(); - ASSERT(args->length() == 1); + DCHECK(args->length() == 1); VisitForAccumulatorValue(args->at(0)); @@ -3115,7 +3097,7 @@ void FullCodeGenerator::EmitIsRegExp(CallRuntime* expr) { void FullCodeGenerator::EmitIsConstructCall(CallRuntime* expr) { - ASSERT(expr->arguments()->length() == 0); + DCHECK(expr->arguments()->length() == 0); Label materialize_true, materialize_false; Label* if_true = NULL; @@ -3147,7 +3129,7 @@ void FullCodeGenerator::EmitIsConstructCall(CallRuntime* expr) { void FullCodeGenerator::EmitObjectEquals(CallRuntime* expr) { ZoneList<Expression*>* args = expr->arguments(); - ASSERT(args->length() == 2); + DCHECK(args->length() == 2); // Load the two objects into registers and perform the comparison. VisitForStackValue(args->at(0)); @@ -3171,7 +3153,7 @@ void FullCodeGenerator::EmitObjectEquals(CallRuntime* expr) { void FullCodeGenerator::EmitArguments(CallRuntime* expr) { ZoneList<Expression*>* args = expr->arguments(); - ASSERT(args->length() == 1); + DCHECK(args->length() == 1); // ArgumentsAccessStub expects the key in rdx and the formal // parameter count in rax. @@ -3185,7 +3167,7 @@ void FullCodeGenerator::EmitArguments(CallRuntime* expr) { void FullCodeGenerator::EmitArgumentsLength(CallRuntime* expr) { - ASSERT(expr->arguments()->length() == 0); + DCHECK(expr->arguments()->length() == 0); Label exit; // Get the number of formal parameters. @@ -3209,7 +3191,7 @@ void FullCodeGenerator::EmitArgumentsLength(CallRuntime* expr) { void FullCodeGenerator::EmitClassOf(CallRuntime* expr) { ZoneList<Expression*>* args = expr->arguments(); - ASSERT(args->length() == 1); + DCHECK(args->length() == 1); Label done, null, function, non_function_constructor; VisitForAccumulatorValue(args->at(0)); @@ -3272,7 +3254,7 @@ void FullCodeGenerator::EmitSubString(CallRuntime* expr) { // Load the arguments on the stack and call the stub. SubStringStub stub(isolate()); ZoneList<Expression*>* args = expr->arguments(); - ASSERT(args->length() == 3); + DCHECK(args->length() == 3); VisitForStackValue(args->at(0)); VisitForStackValue(args->at(1)); VisitForStackValue(args->at(2)); @@ -3285,7 +3267,7 @@ void FullCodeGenerator::EmitRegExpExec(CallRuntime* expr) { // Load the arguments on the stack and call the stub. 
RegExpExecStub stub(isolate()); ZoneList<Expression*>* args = expr->arguments(); - ASSERT(args->length() == 4); + DCHECK(args->length() == 4); VisitForStackValue(args->at(0)); VisitForStackValue(args->at(1)); VisitForStackValue(args->at(2)); @@ -3297,7 +3279,7 @@ void FullCodeGenerator::EmitRegExpExec(CallRuntime* expr) { void FullCodeGenerator::EmitValueOf(CallRuntime* expr) { ZoneList<Expression*>* args = expr->arguments(); - ASSERT(args->length() == 1); + DCHECK(args->length() == 1); VisitForAccumulatorValue(args->at(0)); // Load the object. @@ -3316,8 +3298,8 @@ void FullCodeGenerator::EmitValueOf(CallRuntime* expr) { void FullCodeGenerator::EmitDateField(CallRuntime* expr) { ZoneList<Expression*>* args = expr->arguments(); - ASSERT(args->length() == 2); - ASSERT_NE(NULL, args->at(1)->AsLiteral()); + DCHECK(args->length() == 2); + DCHECK_NE(NULL, args->at(1)->AsLiteral()); Smi* index = Smi::cast(*(args->at(1)->AsLiteral()->value())); VisitForAccumulatorValue(args->at(0)); // Load the object. @@ -3355,7 +3337,7 @@ void FullCodeGenerator::EmitDateField(CallRuntime* expr) { } __ bind(&not_date_object); - __ CallRuntime(Runtime::kHiddenThrowNotDateError, 0); + __ CallRuntime(Runtime::kThrowNotDateError, 0); __ bind(&done); context()->Plug(rax); } @@ -3363,7 +3345,7 @@ void FullCodeGenerator::EmitDateField(CallRuntime* expr) { void FullCodeGenerator::EmitOneByteSeqStringSetChar(CallRuntime* expr) { ZoneList<Expression*>* args = expr->arguments(); - ASSERT_EQ(3, args->length()); + DCHECK_EQ(3, args->length()); Register string = rax; Register index = rbx; @@ -3396,7 +3378,7 @@ void FullCodeGenerator::EmitOneByteSeqStringSetChar(CallRuntime* expr) { void FullCodeGenerator::EmitTwoByteSeqStringSetChar(CallRuntime* expr) { ZoneList<Expression*>* args = expr->arguments(); - ASSERT_EQ(3, args->length()); + DCHECK_EQ(3, args->length()); Register string = rax; Register index = rbx; @@ -3430,7 +3412,7 @@ void FullCodeGenerator::EmitTwoByteSeqStringSetChar(CallRuntime* expr) { void FullCodeGenerator::EmitMathPow(CallRuntime* expr) { // Load the arguments on the stack and call the runtime function. ZoneList<Expression*>* args = expr->arguments(); - ASSERT(args->length() == 2); + DCHECK(args->length() == 2); VisitForStackValue(args->at(0)); VisitForStackValue(args->at(1)); MathPowStub stub(isolate(), MathPowStub::ON_STACK); @@ -3441,7 +3423,7 @@ void FullCodeGenerator::EmitMathPow(CallRuntime* expr) { void FullCodeGenerator::EmitSetValueOf(CallRuntime* expr) { ZoneList<Expression*>* args = expr->arguments(); - ASSERT(args->length() == 2); + DCHECK(args->length() == 2); VisitForStackValue(args->at(0)); // Load the object. VisitForAccumulatorValue(args->at(1)); // Load the value. @@ -3469,7 +3451,7 @@ void FullCodeGenerator::EmitSetValueOf(CallRuntime* expr) { void FullCodeGenerator::EmitNumberToString(CallRuntime* expr) { ZoneList<Expression*>* args = expr->arguments(); - ASSERT_EQ(args->length(), 1); + DCHECK_EQ(args->length(), 1); // Load the argument into rax and call the stub.
VisitForAccumulatorValue(args->at(0)); @@ -3482,7 +3464,7 @@ void FullCodeGenerator::EmitNumberToString(CallRuntime* expr) { void FullCodeGenerator::EmitStringCharFromCode(CallRuntime* expr) { ZoneList<Expression*>* args = expr->arguments(); - ASSERT(args->length() == 1); + DCHECK(args->length() == 1); VisitForAccumulatorValue(args->at(0)); @@ -3501,7 +3483,7 @@ void FullCodeGenerator::EmitStringCharFromCode(CallRuntime* expr) { void FullCodeGenerator::EmitStringCharCodeAt(CallRuntime* expr) { ZoneList<Expression*>* args = expr->arguments(); - ASSERT(args->length() == 2); + DCHECK(args->length() == 2); VisitForStackValue(args->at(0)); VisitForAccumulatorValue(args->at(1)); @@ -3547,7 +3529,7 @@ void FullCodeGenerator::EmitStringCharCodeAt(CallRuntime* expr) { void FullCodeGenerator::EmitStringCharAt(CallRuntime* expr) { ZoneList<Expression*>* args = expr->arguments(); - ASSERT(args->length() == 2); + DCHECK(args->length() == 2); VisitForStackValue(args->at(0)); VisitForAccumulatorValue(args->at(1)); @@ -3595,7 +3577,7 @@ void FullCodeGenerator::EmitStringCharAt(CallRuntime* expr) { void FullCodeGenerator::EmitStringAdd(CallRuntime* expr) { ZoneList<Expression*>* args = expr->arguments(); - ASSERT_EQ(2, args->length()); + DCHECK_EQ(2, args->length()); VisitForStackValue(args->at(0)); VisitForAccumulatorValue(args->at(1)); @@ -3608,7 +3590,7 @@ void FullCodeGenerator::EmitStringAdd(CallRuntime* expr) { void FullCodeGenerator::EmitStringCompare(CallRuntime* expr) { ZoneList<Expression*>* args = expr->arguments(); - ASSERT_EQ(2, args->length()); + DCHECK_EQ(2, args->length()); VisitForStackValue(args->at(0)); VisitForStackValue(args->at(1)); @@ -3621,7 +3603,7 @@ void FullCodeGenerator::EmitStringCompare(CallRuntime* expr) { void FullCodeGenerator::EmitCallFunction(CallRuntime* expr) { ZoneList<Expression*>* args = expr->arguments(); - ASSERT(args->length() >= 2); + DCHECK(args->length() >= 2); int arg_count = args->length() - 2; // 2 ~ receiver and function. for (int i = 0; i < arg_count + 1; i++) { @@ -3654,7 +3636,7 @@ void FullCodeGenerator::EmitCallFunction(CallRuntime* expr) { void FullCodeGenerator::EmitRegExpConstructResult(CallRuntime* expr) { RegExpConstructResultStub stub(isolate()); ZoneList<Expression*>* args = expr->arguments(); - ASSERT(args->length() == 3); + DCHECK(args->length() == 3); VisitForStackValue(args->at(0)); VisitForStackValue(args->at(1)); VisitForAccumulatorValue(args->at(2)); @@ -3667,9 +3649,9 @@ void FullCodeGenerator::EmitRegExpConstructResult(CallRuntime* expr) { void FullCodeGenerator::EmitGetFromCache(CallRuntime* expr) { ZoneList<Expression*>* args = expr->arguments(); - ASSERT_EQ(2, args->length()); + DCHECK_EQ(2, args->length()); - ASSERT_NE(NULL, args->at(0)->AsLiteral()); + DCHECK_NE(NULL, args->at(0)->AsLiteral()); int cache_id = Smi::cast(*(args->at(0)->AsLiteral()->value()))->value(); Handle<FixedArray> jsfunction_result_caches( @@ -3715,7 +3697,7 @@ void FullCodeGenerator::EmitGetFromCache(CallRuntime* expr) { // Call runtime to perform the lookup. 
__ Push(cache); __ Push(key); - __ CallRuntime(Runtime::kHiddenGetFromCache, 2); + __ CallRuntime(Runtime::kGetFromCache, 2); __ bind(&done); context()->Plug(rax); @@ -3724,7 +3706,7 @@ void FullCodeGenerator::EmitGetFromCache(CallRuntime* expr) { void FullCodeGenerator::EmitHasCachedArrayIndex(CallRuntime* expr) { ZoneList<Expression*>* args = expr->arguments(); - ASSERT(args->length() == 1); + DCHECK(args->length() == 1); VisitForAccumulatorValue(args->at(0)); @@ -3747,13 +3729,13 @@ void FullCodeGenerator::EmitHasCachedArrayIndex(CallRuntime* expr) { void FullCodeGenerator::EmitGetCachedArrayIndex(CallRuntime* expr) { ZoneList<Expression*>* args = expr->arguments(); - ASSERT(args->length() == 1); + DCHECK(args->length() == 1); VisitForAccumulatorValue(args->at(0)); __ AssertString(rax); __ movl(rax, FieldOperand(rax, String::kHashFieldOffset)); - ASSERT(String::kHashShift >= kSmiTagSize); + DCHECK(String::kHashShift >= kSmiTagSize); __ IndexFromHash(rax, rax); context()->Plug(rax); @@ -3765,7 +3747,7 @@ void FullCodeGenerator::EmitFastAsciiArrayJoin(CallRuntime* expr) { non_trivial_array, not_size_one_array, loop, loop_1, loop_1_condition, loop_2, loop_2_entry, loop_3, loop_3_entry; ZoneList<Expression*>* args = expr->arguments(); - ASSERT(args->length() == 2); + DCHECK(args->length() == 2); // We will leave the separator on the stack until the end of the function. VisitForStackValue(args->at(1)); // Load this to rax (= array) @@ -4045,6 +4027,17 @@ void FullCodeGenerator::EmitFastAsciiArrayJoin(CallRuntime* expr) { } +void FullCodeGenerator::EmitDebugIsActive(CallRuntime* expr) { + DCHECK(expr->arguments()->length() == 0); + ExternalReference debug_is_active = + ExternalReference::debug_is_active_address(isolate()); + __ Move(kScratchRegister, debug_is_active); + __ movzxbp(rax, Operand(kScratchRegister, 0)); + __ Integer32ToSmi(rax, rax); + context()->Plug(rax); +} + + void FullCodeGenerator::VisitCallRuntime(CallRuntime* expr) { if (expr->function() != NULL && expr->function()->intrinsic_type == Runtime::INLINE) { @@ -4063,9 +4056,15 @@ void FullCodeGenerator::VisitCallRuntime(CallRuntime* expr) { __ Push(FieldOperand(rax, GlobalObject::kBuiltinsOffset)); // Load the function from the receiver. - __ movp(rax, Operand(rsp, 0)); - __ Move(rcx, expr->name()); - CallLoadIC(NOT_CONTEXTUAL, expr->CallRuntimeFeedbackId()); + __ movp(LoadIC::ReceiverRegister(), Operand(rsp, 0)); + __ Move(LoadIC::NameRegister(), expr->name()); + if (FLAG_vector_ics) { + __ Move(LoadIC::SlotRegister(), + Smi::FromInt(expr->CallRuntimeFeedbackSlot())); + CallLoadIC(NOT_CONTEXTUAL); + } else { + CallLoadIC(NOT_CONTEXTUAL, expr->CallRuntimeFeedbackId()); + } // Push the target function under the receiver. __ Push(Operand(rsp, 0)); @@ -4116,7 +4115,7 @@ void FullCodeGenerator::VisitUnaryOperation(UnaryOperation* expr) { Variable* var = proxy->var(); // Delete of an unqualified identifier is disallowed in strict mode // but "delete this" is allowed. - ASSERT(strict_mode() == SLOPPY || var->is_this()); + DCHECK(strict_mode() == SLOPPY || var->is_this()); if (var->IsUnallocated()) { __ Push(GlobalObjectOperand()); __ Push(var->name()); @@ -4133,7 +4132,7 @@ void FullCodeGenerator::VisitUnaryOperation(UnaryOperation* expr) { // context where the variable was introduced. 
__ Push(context_register()); __ Push(var->name()); - __ CallRuntime(Runtime::kHiddenDeleteContextSlot, 2); + __ CallRuntime(Runtime::kDeleteLookupSlot, 2); context()->Plug(rax); } } else { @@ -4171,7 +4170,7 @@ void FullCodeGenerator::VisitUnaryOperation(UnaryOperation* expr) { // for control and plugging the control flow into the context, // because we need to prepare a pair of extra administrative AST ids // for the optimizing compiler. - ASSERT(context()->IsAccumulatorValue() || context()->IsStackValue()); + DCHECK(context()->IsAccumulatorValue() || context()->IsStackValue()); Label materialize_true, materialize_false, done; VisitForControl(expr->expression(), &materialize_false, @@ -4214,7 +4213,7 @@ void FullCodeGenerator::VisitUnaryOperation(UnaryOperation* expr) { void FullCodeGenerator::VisitCountOperation(CountOperation* expr) { - ASSERT(expr->expression()->IsValidReferenceExpression()); + DCHECK(expr->expression()->IsValidReferenceExpression()); Comment cmnt(masm_, "[ CountOperation"); SetSourcePosition(expr->position()); @@ -4233,7 +4232,7 @@ void FullCodeGenerator::VisitCountOperation(CountOperation* expr) { // Evaluate expression and get value. if (assign_type == VARIABLE) { - ASSERT(expr->expression()->AsVariableProxy()->var() != NULL); + DCHECK(expr->expression()->AsVariableProxy()->var() != NULL); AccumulatorValueContext context(this); EmitVariableLoad(expr->expression()->AsVariableProxy()); } else { @@ -4242,14 +4241,16 @@ void FullCodeGenerator::VisitCountOperation(CountOperation* expr) { __ Push(Smi::FromInt(0)); } if (assign_type == NAMED_PROPERTY) { - VisitForAccumulatorValue(prop->obj()); - __ Push(rax); // Copy of receiver, needed for later store. + VisitForStackValue(prop->obj()); + __ movp(LoadIC::ReceiverRegister(), Operand(rsp, 0)); EmitNamedPropertyLoad(prop); } else { VisitForStackValue(prop->obj()); - VisitForAccumulatorValue(prop->key()); - __ movp(rdx, Operand(rsp, 0)); // Leave receiver on stack - __ Push(rax); // Copy of key, needed for later store. + VisitForStackValue(prop->key()); + // Leave receiver on stack + __ movp(LoadIC::ReceiverRegister(), Operand(rsp, kPointerSize)); + // Copy of key, needed for later store. + __ movp(LoadIC::NameRegister(), Operand(rsp, 0)); EmitKeyedPropertyLoad(prop); } } @@ -4361,8 +4362,8 @@ void FullCodeGenerator::VisitCountOperation(CountOperation* expr) { } break; case NAMED_PROPERTY: { - __ Move(rcx, prop->key()->AsLiteral()->value()); - __ Pop(rdx); + __ Move(StoreIC::NameRegister(), prop->key()->AsLiteral()->value()); + __ Pop(StoreIC::ReceiverRegister()); CallStoreIC(expr->CountStoreFeedbackId()); PrepareForBailoutForId(expr->AssignmentId(), TOS_REG); if (expr->is_postfix()) { @@ -4375,8 +4376,8 @@ void FullCodeGenerator::VisitCountOperation(CountOperation* expr) { break; } case KEYED_PROPERTY: { - __ Pop(rcx); - __ Pop(rdx); + __ Pop(KeyedStoreIC::NameRegister()); + __ Pop(KeyedStoreIC::ReceiverRegister()); Handle<Code> ic = strict_mode() == SLOPPY ? 
isolate()->builtins()->KeyedStoreIC_Initialize() : isolate()->builtins()->KeyedStoreIC_Initialize_Strict(); @@ -4397,13 +4398,17 @@ void FullCodeGenerator::VisitCountOperation(CountOperation* expr) { void FullCodeGenerator::VisitForTypeofValue(Expression* expr) { VariableProxy* proxy = expr->AsVariableProxy(); - ASSERT(!context()->IsEffect()); - ASSERT(!context()->IsTest()); + DCHECK(!context()->IsEffect()); + DCHECK(!context()->IsTest()); if (proxy != NULL && proxy->var()->IsUnallocated()) { Comment cmnt(masm_, "[ Global variable"); - __ Move(rcx, proxy->name()); - __ movp(rax, GlobalObjectOperand()); + __ Move(LoadIC::NameRegister(), proxy->name()); + __ movp(LoadIC::ReceiverRegister(), GlobalObjectOperand()); + if (FLAG_vector_ics) { + __ Move(LoadIC::SlotRegister(), + Smi::FromInt(proxy->VariableFeedbackSlot())); + } // Use a regular load, not a contextual load, to avoid a reference // error. CallLoadIC(NOT_CONTEXTUAL); @@ -4415,12 +4420,12 @@ void FullCodeGenerator::VisitForTypeofValue(Expression* expr) { // Generate code for loading from variables potentially shadowed // by eval-introduced variables. - EmitDynamicLookupFastCase(proxy->var(), INSIDE_TYPEOF, &slow, &done); + EmitDynamicLookupFastCase(proxy, INSIDE_TYPEOF, &slow, &done); __ bind(&slow); __ Push(rsi); __ Push(proxy->name()); - __ CallRuntime(Runtime::kHiddenLoadContextSlotNoReferenceError, 2); + __ CallRuntime(Runtime::kLoadLookupSlotNoReferenceError, 2); PrepareForBailout(expr, TOS_REG); __ bind(&done); @@ -4470,10 +4475,6 @@ void FullCodeGenerator::EmitLiteralCompareTypeof(Expression* expr, __ j(equal, if_true); __ CompareRoot(rax, Heap::kFalseValueRootIndex); Split(equal, if_true, if_false, fall_through); - } else if (FLAG_harmony_typeof && - String::Equals(check, factory->null_string())) { - __ CompareRoot(rax, Heap::kNullValueRootIndex); - Split(equal, if_true, if_false, fall_through); } else if (String::Equals(check, factory->undefined_string())) { __ CompareRoot(rax, Heap::kUndefinedValueRootIndex); __ j(equal, if_true); @@ -4492,10 +4493,8 @@ void FullCodeGenerator::EmitLiteralCompareTypeof(Expression* expr, Split(equal, if_true, if_false, fall_through); } else if (String::Equals(check, factory->object_string())) { __ JumpIfSmi(rax, if_false); - if (!FLAG_harmony_typeof) { - __ CompareRoot(rax, Heap::kNullValueRootIndex); - __ j(equal, if_true); - } + __ CompareRoot(rax, Heap::kNullValueRootIndex); + __ j(equal, if_true); __ CmpObjectType(rax, FIRST_NONCALLABLE_SPEC_OBJECT_TYPE, rdx); __ j(below, if_false); __ CmpInstanceType(rdx, LAST_NONCALLABLE_SPEC_OBJECT_TYPE); @@ -4630,7 +4629,7 @@ Register FullCodeGenerator::context_register() { void FullCodeGenerator::StoreToFrameField(int frame_offset, Register value) { - ASSERT(IsAligned(frame_offset, kPointerSize)); + DCHECK(IsAligned(frame_offset, kPointerSize)); __ movp(Operand(rbp, frame_offset), value); } @@ -4655,7 +4654,7 @@ void FullCodeGenerator::PushFunctionArgumentForContextAllocation() { // code. Fetch it from the context. 
__ Push(ContextOperand(rsi, Context::CLOSURE_INDEX)); } else { - ASSERT(declaration_scope->is_function_scope()); + DCHECK(declaration_scope->is_function_scope()); __ Push(Operand(rbp, JavaScriptFrameConstants::kFunctionOffset)); } } @@ -4666,8 +4665,8 @@ void FullCodeGenerator::PushFunctionArgumentForContextAllocation() { void FullCodeGenerator::EnterFinallyBlock() { - ASSERT(!result_register().is(rdx)); - ASSERT(!result_register().is(rcx)); + DCHECK(!result_register().is(rdx)); + DCHECK(!result_register().is(rcx)); // Cook return address on top of stack (smi encoded Code* delta) __ PopReturnAddressTo(rdx); __ Move(rcx, masm_->CodeObject()); @@ -4698,8 +4697,8 @@ void FullCodeGenerator::EnterFinallyBlock() { void FullCodeGenerator::ExitFinallyBlock() { - ASSERT(!result_register().is(rdx)); - ASSERT(!result_register().is(rcx)); + DCHECK(!result_register().is(rdx)); + DCHECK(!result_register().is(rcx)); // Restore pending message from stack. __ Pop(rdx); ExternalReference pending_message_script = @@ -4811,18 +4810,18 @@ BackEdgeTable::BackEdgeState BackEdgeTable::GetBackEdgeState( Address pc) { Address call_target_address = pc - kIntSize; Address jns_instr_address = call_target_address - 3; - ASSERT_EQ(kCallInstruction, *(call_target_address - 1)); + DCHECK_EQ(kCallInstruction, *(call_target_address - 1)); if (*jns_instr_address == kJnsInstruction) { - ASSERT_EQ(kJnsOffset, *(call_target_address - 2)); - ASSERT_EQ(isolate->builtins()->InterruptCheck()->entry(), + DCHECK_EQ(kJnsOffset, *(call_target_address - 2)); + DCHECK_EQ(isolate->builtins()->InterruptCheck()->entry(), Assembler::target_address_at(call_target_address, unoptimized_code)); return INTERRUPT; } - ASSERT_EQ(kNopByteOne, *jns_instr_address); - ASSERT_EQ(kNopByteTwo, *(call_target_address - 2)); + DCHECK_EQ(kNopByteOne, *jns_instr_address); + DCHECK_EQ(kNopByteTwo, *(call_target_address - 2)); if (Assembler::target_address_at(call_target_address, unoptimized_code) == @@ -4830,7 +4829,7 @@ BackEdgeTable::BackEdgeState BackEdgeTable::GetBackEdgeState( return ON_STACK_REPLACEMENT; } - ASSERT_EQ(isolate->builtins()->OsrAfterStackCheck()->entry(), + DCHECK_EQ(isolate->builtins()->OsrAfterStackCheck()->entry(), Assembler::target_address_at(call_target_address, unoptimized_code)); return OSR_AFTER_STACK_CHECK; diff --git a/deps/v8/src/x64/ic-x64.cc b/deps/v8/src/x64/ic-x64.cc index 90a303dbae1..69e14135b0d 100644 --- a/deps/v8/src/x64/ic-x64.cc +++ b/deps/v8/src/x64/ic-x64.cc @@ -2,14 +2,14 @@ // Use of this source code is governed by a BSD-style license that can be // found in the LICENSE file. -#include "v8.h" +#include "src/v8.h" #if V8_TARGET_ARCH_X64 -#include "codegen.h" -#include "ic-inl.h" -#include "runtime.h" -#include "stub-cache.h" +#include "src/codegen.h" +#include "src/ic-inl.h" +#include "src/runtime.h" +#include "src/stub-cache.h" namespace v8 { namespace internal { @@ -35,46 +35,6 @@ static void GenerateGlobalInstanceTypeCheck(MacroAssembler* masm, } -// Generated code falls through if the receiver is a regular non-global -// JS object with slow properties and no interceptors. -static void GenerateNameDictionaryReceiverCheck(MacroAssembler* masm, - Register receiver, - Register r0, - Register r1, - Label* miss) { - // Register usage: - // receiver: holds the receiver on entry and is unchanged. - // r0: used to hold receiver instance type. - // Holds the property dictionary on fall through. - // r1: used to hold receivers map. - - __ JumpIfSmi(receiver, miss); - - // Check that the receiver is a valid JS object. 
- __ movp(r1, FieldOperand(receiver, HeapObject::kMapOffset)); - __ movb(r0, FieldOperand(r1, Map::kInstanceTypeOffset)); - __ cmpb(r0, Immediate(FIRST_SPEC_OBJECT_TYPE)); - __ j(below, miss); - - // If this assert fails, we have to check upper bound too. - STATIC_ASSERT(LAST_TYPE == LAST_SPEC_OBJECT_TYPE); - - GenerateGlobalInstanceTypeCheck(masm, r0, miss); - - // Check for non-global object that requires access check. - __ testb(FieldOperand(r1, Map::kBitFieldOffset), - Immediate((1 << Map::kIsAccessCheckNeeded) | - (1 << Map::kHasNamedInterceptor))); - __ j(not_zero, miss); - - __ movp(r0, FieldOperand(receiver, JSObject::kPropertiesOffset)); - __ CompareRoot(FieldOperand(r0, HeapObject::kMapOffset), - Heap::kHashTableMapRootIndex); - __ j(not_equal, miss); -} - - - // Helper function used to load a property from a dictionary backing storage. // This function may return false negatives, so miss_label // must always call a backup property load that is complete. @@ -220,7 +180,7 @@ static void GenerateKeyedLoadReceiverCheck(MacroAssembler* masm, // In the case that the object is a value-wrapper object, // we enter the runtime system to make sure that indexing // into string objects work as intended. - ASSERT(JS_OBJECT_TYPE > JS_VALUE_TYPE); + DCHECK(JS_OBJECT_TYPE > JS_VALUE_TYPE); __ CmpObjectType(receiver, JS_OBJECT_TYPE, map); __ j(below, slow); @@ -327,30 +287,31 @@ static void GenerateKeyNameCheck(MacroAssembler* masm, void KeyedLoadIC::GenerateGeneric(MacroAssembler* masm) { - // ----------- S t a t e ------------- - // -- rax : key - // -- rdx : receiver - // -- rsp[0] : return address - // ----------------------------------- + // The return address is on the stack. Label slow, check_name, index_smi, index_name, property_array_property; Label probe_dictionary, check_number_dictionary; + Register receiver = ReceiverRegister(); + Register key = NameRegister(); + DCHECK(receiver.is(rdx)); + DCHECK(key.is(rcx)); + // Check that the key is a smi. - __ JumpIfNotSmi(rax, &check_name); + __ JumpIfNotSmi(key, &check_name); __ bind(&index_smi); // Now the key is known to be a smi. This place is also jumped to from below // where a numeric string is converted to a smi. GenerateKeyedLoadReceiverCheck( - masm, rdx, rcx, Map::kHasIndexedInterceptor, &slow); + masm, receiver, rax, Map::kHasIndexedInterceptor, &slow); // Check the receiver's map to see if it has fast elements. - __ CheckFastElements(rcx, &check_number_dictionary); + __ CheckFastElements(rax, &check_number_dictionary); GenerateFastArrayLoad(masm, - rdx, + receiver, + key, rax, - rcx, rbx, rax, NULL, @@ -360,50 +321,46 @@ void KeyedLoadIC::GenerateGeneric(MacroAssembler* masm) { __ ret(0); __ bind(&check_number_dictionary); - __ SmiToInteger32(rbx, rax); - __ movp(rcx, FieldOperand(rdx, JSObject::kElementsOffset)); + __ SmiToInteger32(rbx, key); + __ movp(rax, FieldOperand(receiver, JSObject::kElementsOffset)); // Check whether the elements is a number dictionary. - // rdx: receiver - // rax: key // rbx: key as untagged int32 - // rcx: elements - __ CompareRoot(FieldOperand(rcx, HeapObject::kMapOffset), + // rax: elements + __ CompareRoot(FieldOperand(rax, HeapObject::kMapOffset), Heap::kHashTableMapRootIndex); __ j(not_equal, &slow); - __ LoadFromNumberDictionary(&slow, rcx, rax, rbx, r9, rdi, rax); + __ LoadFromNumberDictionary(&slow, rax, key, rbx, r9, rdi, rax); __ ret(0); __ bind(&slow); // Slow case: Jump to runtime. 
- // rdx: receiver - // rax: key __ IncrementCounter(counters->keyed_load_generic_slow(), 1); GenerateRuntimeGetProperty(masm); __ bind(&check_name); - GenerateKeyNameCheck(masm, rax, rcx, rbx, &index_name, &slow); + GenerateKeyNameCheck(masm, key, rax, rbx, &index_name, &slow); GenerateKeyedLoadReceiverCheck( - masm, rdx, rcx, Map::kHasNamedInterceptor, &slow); + masm, receiver, rax, Map::kHasNamedInterceptor, &slow); // If the receiver is a fast-case object, check the keyed lookup - // cache. Otherwise probe the dictionary leaving result in rcx. - __ movp(rbx, FieldOperand(rdx, JSObject::kPropertiesOffset)); + // cache. Otherwise probe the dictionary leaving result in key. + __ movp(rbx, FieldOperand(receiver, JSObject::kPropertiesOffset)); __ CompareRoot(FieldOperand(rbx, HeapObject::kMapOffset), Heap::kHashTableMapRootIndex); __ j(equal, &probe_dictionary); // Load the map of the receiver, compute the keyed lookup cache hash // based on 32 bits of the map pointer and the string hash. - __ movp(rbx, FieldOperand(rdx, HeapObject::kMapOffset)); - __ movl(rcx, rbx); - __ shrl(rcx, Immediate(KeyedLookupCache::kMapHashShift)); - __ movl(rdi, FieldOperand(rax, String::kHashFieldOffset)); + __ movp(rbx, FieldOperand(receiver, HeapObject::kMapOffset)); + __ movl(rax, rbx); + __ shrl(rax, Immediate(KeyedLookupCache::kMapHashShift)); + __ movl(rdi, FieldOperand(key, String::kHashFieldOffset)); __ shrl(rdi, Immediate(String::kHashShift)); - __ xorp(rcx, rdi); + __ xorp(rax, rdi); int mask = (KeyedLookupCache::kCapacityMask & KeyedLookupCache::kHashMask); - __ andp(rcx, Immediate(mask)); + __ andp(rax, Immediate(mask)); // Load the key (consisting of map and internalized string) from the cache and // check for match. @@ -415,13 +372,13 @@ void KeyedLoadIC::GenerateGeneric(MacroAssembler* masm) { for (int i = 0; i < kEntriesPerBucket - 1; i++) { Label try_next_entry; - __ movp(rdi, rcx); + __ movp(rdi, rax); __ shlp(rdi, Immediate(kPointerSizeLog2 + 1)); __ LoadAddress(kScratchRegister, cache_keys); int off = kPointerSize * i * 2; __ cmpp(rbx, Operand(kScratchRegister, rdi, times_1, off)); __ j(not_equal, &try_next_entry); - __ cmpp(rax, Operand(kScratchRegister, rdi, times_1, off + kPointerSize)); + __ cmpp(key, Operand(kScratchRegister, rdi, times_1, off + kPointerSize)); __ j(equal, &hit_on_nth_entry[i]); __ bind(&try_next_entry); } @@ -429,7 +386,7 @@ void KeyedLoadIC::GenerateGeneric(MacroAssembler* masm) { int off = kPointerSize * (kEntriesPerBucket - 1) * 2; __ cmpp(rbx, Operand(kScratchRegister, rdi, times_1, off)); __ j(not_equal, &slow); - __ cmpp(rax, Operand(kScratchRegister, rdi, times_1, off + kPointerSize)); + __ cmpp(key, Operand(kScratchRegister, rdi, times_1, off + kPointerSize)); __ j(not_equal, &slow); // Get field offset, which is a 32-bit integer. 
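The rewritten keyed-lookup-cache probe above frees rax from holding the key, so rax becomes the scratch register: the bucket index mixes shifted bits of the receiver's map pointer with the key's hash field and masks the result to the cache capacity. A standalone sketch of that computation, using illustrative constants rather than V8's actual values:

    #include <cstdint>
    #include <cstdio>

    const int kMapHashShift = 5;            // drop low alignment/tag bits of the map pointer
    const int kHashShift = 2;               // stands in for String::kHashShift
    const uint32_t kCapacityMask = 64 - 1;  // capacity must be a power of two

    uint32_t BucketIndex(uintptr_t map_ptr, uint32_t hash_field) {
      uint32_t h = static_cast<uint32_t>(map_ptr) >> kMapHashShift;  // shrl
      h ^= hash_field >> kHashShift;                                 // xorp
      return h & kCapacityMask;                                      // andp
    }

    int main() {
      std::printf("%u\n", BucketIndex(0x7f12ab40u, 0x9e3779b9u));
      return 0;
    }

Because the capacity is a power of two, the probe reduces to a handful of cheap ALU operations on the scratch register, which is what the generated sequence above emits inline.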
@@ -440,12 +397,12 @@ void KeyedLoadIC::GenerateGeneric(MacroAssembler* masm) { for (int i = kEntriesPerBucket - 1; i >= 0; i--) { __ bind(&hit_on_nth_entry[i]); if (i != 0) { - __ addl(rcx, Immediate(i)); + __ addl(rax, Immediate(i)); } __ LoadAddress(kScratchRegister, cache_field_offsets); - __ movl(rdi, Operand(kScratchRegister, rcx, times_4, 0)); - __ movzxbp(rcx, FieldOperand(rbx, Map::kInObjectPropertiesOffset)); - __ subp(rdi, rcx); + __ movl(rdi, Operand(kScratchRegister, rax, times_4, 0)); + __ movzxbp(rax, FieldOperand(rbx, Map::kInObjectPropertiesOffset)); + __ subp(rdi, rax); __ j(above_equal, &property_array_property); if (i != 0) { __ jmp(&load_in_object_property); @@ -454,15 +411,15 @@ void KeyedLoadIC::GenerateGeneric(MacroAssembler* masm) { // Load in-object property. __ bind(&load_in_object_property); - __ movzxbp(rcx, FieldOperand(rbx, Map::kInstanceSizeOffset)); - __ addp(rcx, rdi); - __ movp(rax, FieldOperand(rdx, rcx, times_pointer_size, 0)); + __ movzxbp(rax, FieldOperand(rbx, Map::kInstanceSizeOffset)); + __ addp(rax, rdi); + __ movp(rax, FieldOperand(receiver, rax, times_pointer_size, 0)); __ IncrementCounter(counters->keyed_load_generic_lookup_cache(), 1); __ ret(0); // Load property array property. __ bind(&property_array_property); - __ movp(rax, FieldOperand(rdx, JSObject::kPropertiesOffset)); + __ movp(rax, FieldOperand(receiver, JSObject::kPropertiesOffset)); __ movp(rax, FieldOperand(rax, rdi, times_pointer_size, FixedArray::kHeaderSize)); __ IncrementCounter(counters->keyed_load_generic_lookup_cache(), 1); @@ -471,36 +428,31 @@ void KeyedLoadIC::GenerateGeneric(MacroAssembler* masm) { // Do a quick inline probe of the receiver's dictionary, if it // exists. __ bind(&probe_dictionary); - // rdx: receiver - // rax: key // rbx: elements - __ movp(rcx, FieldOperand(rdx, JSObject::kMapOffset)); - __ movb(rcx, FieldOperand(rcx, Map::kInstanceTypeOffset)); - GenerateGlobalInstanceTypeCheck(masm, rcx, &slow); + __ movp(rax, FieldOperand(receiver, JSObject::kMapOffset)); + __ movb(rax, FieldOperand(rax, Map::kInstanceTypeOffset)); + GenerateGlobalInstanceTypeCheck(masm, rax, &slow); - GenerateDictionaryLoad(masm, &slow, rbx, rax, rcx, rdi, rax); + GenerateDictionaryLoad(masm, &slow, rbx, key, rax, rdi, rax); __ IncrementCounter(counters->keyed_load_generic_symbol(), 1); __ ret(0); __ bind(&index_name); - __ IndexFromHash(rbx, rax); + __ IndexFromHash(rbx, key); __ jmp(&index_smi); } void KeyedLoadIC::GenerateString(MacroAssembler* masm) { - // ----------- S t a t e ------------- - // -- rax : key - // -- rdx : receiver - // -- rsp[0] : return address - // ----------------------------------- + // Return address is on the stack. Label miss; - Register receiver = rdx; - Register index = rax; - Register scratch = rcx; + Register receiver = ReceiverRegister(); + Register index = NameRegister(); + Register scratch = rbx; Register result = rax; + DCHECK(!scratch.is(receiver) && !scratch.is(index)); StringCharAtGenerator char_at_generator(receiver, index, @@ -522,42 +474,42 @@ void KeyedLoadIC::GenerateString(MacroAssembler* masm) { void KeyedLoadIC::GenerateIndexedInterceptor(MacroAssembler* masm) { - // ----------- S t a t e ------------- - // -- rax : key - // -- rdx : receiver - // -- rsp[0] : return address - // ----------------------------------- + // Return address is on the stack. 
Label slow; + Register receiver = ReceiverRegister(); + Register key = NameRegister(); + Register scratch = rax; + DCHECK(!scratch.is(receiver) && !scratch.is(key)); + // Check that the receiver isn't a smi. - __ JumpIfSmi(rdx, &slow); + __ JumpIfSmi(receiver, &slow); // Check that the key is an array index, that is Uint32. STATIC_ASSERT(kSmiValueSize <= 32); - __ JumpUnlessNonNegativeSmi(rax, &slow); + __ JumpUnlessNonNegativeSmi(key, &slow); // Get the map of the receiver. - __ movp(rcx, FieldOperand(rdx, HeapObject::kMapOffset)); + __ movp(scratch, FieldOperand(receiver, HeapObject::kMapOffset)); // Check that it has indexed interceptor and access checks // are not enabled for this object. - __ movb(rcx, FieldOperand(rcx, Map::kBitFieldOffset)); - __ andb(rcx, Immediate(kSlowCaseBitFieldMask)); - __ cmpb(rcx, Immediate(1 << Map::kHasIndexedInterceptor)); + __ movb(scratch, FieldOperand(scratch, Map::kBitFieldOffset)); + __ andb(scratch, Immediate(kSlowCaseBitFieldMask)); + __ cmpb(scratch, Immediate(1 << Map::kHasIndexedInterceptor)); __ j(not_zero, &slow); // Everything is fine, call runtime. - __ PopReturnAddressTo(rcx); - __ Push(rdx); // receiver - __ Push(rax); // key - __ PushReturnAddressFrom(rcx); + __ PopReturnAddressTo(scratch); + __ Push(receiver); // receiver + __ Push(key); // key + __ PushReturnAddressFrom(scratch); // Perform tail call to the entry. __ TailCallExternalReference( - ExternalReference(IC_Utility(kKeyedLoadPropertyWithInterceptor), + ExternalReference(IC_Utility(kLoadElementWithInterceptor), masm->isolate()), - 2, - 1); + 2, 1); __ bind(&slow); GenerateMiss(masm); @@ -574,12 +526,16 @@ static void KeyedStoreGenerateGenericHelper( Label transition_smi_elements; Label finish_object_store, non_double_value, transition_double_elements; Label fast_double_without_map_check; + Register receiver = KeyedStoreIC::ReceiverRegister(); + Register key = KeyedStoreIC::NameRegister(); + Register value = KeyedStoreIC::ValueRegister(); + DCHECK(receiver.is(rdx)); + DCHECK(key.is(rcx)); + DCHECK(value.is(rax)); // Fast case: Do the store, could be either Object or double. __ bind(fast_object); - // rax: value // rbx: receiver's elements array (a FixedArray) - // rcx: index - // rdx: receiver (a JSArray) + // receiver is a JSArray. // r9: map of receiver if (check_map == kCheckMap) { __ movp(rdi, FieldOperand(rbx, HeapObject::kMapOffset)); @@ -592,26 +548,26 @@ static void KeyedStoreGenerateGenericHelper( // there may be a callback on the element Label holecheck_passed1; __ movp(kScratchRegister, FieldOperand(rbx, - rcx, + key, times_pointer_size, FixedArray::kHeaderSize)); __ CompareRoot(kScratchRegister, Heap::kTheHoleValueRootIndex); __ j(not_equal, &holecheck_passed1); - __ JumpIfDictionaryInPrototypeChain(rdx, rdi, kScratchRegister, slow); + __ JumpIfDictionaryInPrototypeChain(receiver, rdi, kScratchRegister, slow); __ bind(&holecheck_passed1); // Smi stores don't require further checks. Label non_smi_value; - __ JumpIfNotSmi(rax, &non_smi_value); + __ JumpIfNotSmi(value, &non_smi_value); if (increment_length == kIncrementLength) { // Add 1 to receiver->length. - __ leal(rdi, Operand(rcx, 1)); - __ Integer32ToSmiField(FieldOperand(rdx, JSArray::kLengthOffset), rdi); + __ leal(rdi, Operand(key, 1)); + __ Integer32ToSmiField(FieldOperand(receiver, JSArray::kLengthOffset), rdi); } // It's irrelevant whether array is smi-only or not when writing a smi. 
- __ movp(FieldOperand(rbx, rcx, times_pointer_size, FixedArray::kHeaderSize), - rax); + __ movp(FieldOperand(rbx, key, times_pointer_size, FixedArray::kHeaderSize), + value); __ ret(0); __ bind(&non_smi_value); @@ -622,14 +578,14 @@ static void KeyedStoreGenerateGenericHelper( __ bind(&finish_object_store); if (increment_length == kIncrementLength) { // Add 1 to receiver->length. - __ leal(rdi, Operand(rcx, 1)); - __ Integer32ToSmiField(FieldOperand(rdx, JSArray::kLengthOffset), rdi); + __ leal(rdi, Operand(key, 1)); + __ Integer32ToSmiField(FieldOperand(receiver, JSArray::kLengthOffset), rdi); } - __ movp(FieldOperand(rbx, rcx, times_pointer_size, FixedArray::kHeaderSize), - rax); - __ movp(rdx, rax); // Preserve the value which is returned. + __ movp(FieldOperand(rbx, key, times_pointer_size, FixedArray::kHeaderSize), + value); + __ movp(rdx, value); // Preserve the value which is returned. __ RecordWriteArray( - rbx, rdx, rcx, kDontSaveFPRegs, EMIT_REMEMBERED_SET, OMIT_SMI_CHECK); + rbx, rdx, key, kDontSaveFPRegs, EMIT_REMEMBERED_SET, OMIT_SMI_CHECK); __ ret(0); __ bind(fast_double); @@ -645,25 +601,25 @@ static void KeyedStoreGenerateGenericHelper( // We have to see if the double version of the hole is present. If so // go to the runtime. uint32_t offset = FixedDoubleArray::kHeaderSize + sizeof(kHoleNanLower32); - __ cmpl(FieldOperand(rbx, rcx, times_8, offset), Immediate(kHoleNanUpper32)); + __ cmpl(FieldOperand(rbx, key, times_8, offset), Immediate(kHoleNanUpper32)); __ j(not_equal, &fast_double_without_map_check); - __ JumpIfDictionaryInPrototypeChain(rdx, rdi, kScratchRegister, slow); + __ JumpIfDictionaryInPrototypeChain(receiver, rdi, kScratchRegister, slow); __ bind(&fast_double_without_map_check); - __ StoreNumberToDoubleElements(rax, rbx, rcx, xmm0, + __ StoreNumberToDoubleElements(value, rbx, key, xmm0, &transition_double_elements); if (increment_length == kIncrementLength) { // Add 1 to receiver->length. - __ leal(rdi, Operand(rcx, 1)); - __ Integer32ToSmiField(FieldOperand(rdx, JSArray::kLengthOffset), rdi); + __ leal(rdi, Operand(key, 1)); + __ Integer32ToSmiField(FieldOperand(receiver, JSArray::kLengthOffset), rdi); } __ ret(0); __ bind(&transition_smi_elements); - __ movp(rbx, FieldOperand(rdx, HeapObject::kMapOffset)); + __ movp(rbx, FieldOperand(receiver, HeapObject::kMapOffset)); // Transition the array appropriately depending on the value type. 
- __ movp(r9, FieldOperand(rax, HeapObject::kMapOffset)); + __ movp(r9, FieldOperand(value, HeapObject::kMapOffset)); __ CompareRoot(r9, Heap::kHeapNumberMapRootIndex); __ j(not_equal, &non_double_value); @@ -676,8 +632,9 @@ static void KeyedStoreGenerateGenericHelper( slow); AllocationSiteMode mode = AllocationSite::GetMode(FAST_SMI_ELEMENTS, FAST_DOUBLE_ELEMENTS); - ElementsTransitionGenerator::GenerateSmiToDouble(masm, mode, slow); - __ movp(rbx, FieldOperand(rdx, JSObject::kElementsOffset)); + ElementsTransitionGenerator::GenerateSmiToDouble( + masm, receiver, key, value, rbx, mode, slow); + __ movp(rbx, FieldOperand(receiver, JSObject::kElementsOffset)); __ jmp(&fast_double_without_map_check); __ bind(&non_double_value); @@ -688,52 +645,52 @@ static void KeyedStoreGenerateGenericHelper( rdi, slow); mode = AllocationSite::GetMode(FAST_SMI_ELEMENTS, FAST_ELEMENTS); - ElementsTransitionGenerator::GenerateMapChangeElementsTransition(masm, mode, - slow); - __ movp(rbx, FieldOperand(rdx, JSObject::kElementsOffset)); + ElementsTransitionGenerator::GenerateMapChangeElementsTransition( + masm, receiver, key, value, rbx, mode, slow); + __ movp(rbx, FieldOperand(receiver, JSObject::kElementsOffset)); __ jmp(&finish_object_store); __ bind(&transition_double_elements); // Elements are FAST_DOUBLE_ELEMENTS, but value is an Object that's not a // HeapNumber. Make sure that the receiver is a Array with FAST_ELEMENTS and // transition array from FAST_DOUBLE_ELEMENTS to FAST_ELEMENTS - __ movp(rbx, FieldOperand(rdx, HeapObject::kMapOffset)); + __ movp(rbx, FieldOperand(receiver, HeapObject::kMapOffset)); __ LoadTransitionedArrayMapConditional(FAST_DOUBLE_ELEMENTS, FAST_ELEMENTS, rbx, rdi, slow); mode = AllocationSite::GetMode(FAST_DOUBLE_ELEMENTS, FAST_ELEMENTS); - ElementsTransitionGenerator::GenerateDoubleToObject(masm, mode, slow); - __ movp(rbx, FieldOperand(rdx, JSObject::kElementsOffset)); + ElementsTransitionGenerator::GenerateDoubleToObject( + masm, receiver, key, value, rbx, mode, slow); + __ movp(rbx, FieldOperand(receiver, JSObject::kElementsOffset)); __ jmp(&finish_object_store); } void KeyedStoreIC::GenerateGeneric(MacroAssembler* masm, StrictMode strict_mode) { - // ----------- S t a t e ------------- - // -- rax : value - // -- rcx : key - // -- rdx : receiver - // -- rsp[0] : return address - // ----------------------------------- + // Return address is on the stack. Label slow, slow_with_tagged_index, fast_object, fast_object_grow; Label fast_double, fast_double_grow; Label array, extra, check_if_double_array; + Register receiver = ReceiverRegister(); + Register key = NameRegister(); + DCHECK(receiver.is(rdx)); + DCHECK(key.is(rcx)); // Check that the object isn't a smi. - __ JumpIfSmi(rdx, &slow_with_tagged_index); + __ JumpIfSmi(receiver, &slow_with_tagged_index); // Get the map from the receiver. - __ movp(r9, FieldOperand(rdx, HeapObject::kMapOffset)); + __ movp(r9, FieldOperand(receiver, HeapObject::kMapOffset)); // Check that the receiver does not require access checks and is not observed. // The generic stub does not perform map checks or handle observed objects. __ testb(FieldOperand(r9, Map::kBitFieldOffset), Immediate(1 << Map::kIsAccessCheckNeeded | 1 << Map::kIsObserved)); __ j(not_zero, &slow_with_tagged_index); // Check that the key is a smi. 
- __ JumpIfNotSmi(rcx, &slow_with_tagged_index); - __ SmiToInteger32(rcx, rcx); + __ JumpIfNotSmi(key, &slow_with_tagged_index); + __ SmiToInteger32(key, key); __ CmpInstanceType(r9, JS_ARRAY_TYPE); __ j(equal, &array); @@ -742,20 +699,15 @@ void KeyedStoreIC::GenerateGeneric(MacroAssembler* masm, __ j(below, &slow); // Object case: Check key against length in the elements array. - // rax: value - // rdx: JSObject - // rcx: index - __ movp(rbx, FieldOperand(rdx, JSObject::kElementsOffset)); + __ movp(rbx, FieldOperand(receiver, JSObject::kElementsOffset)); // Check array bounds. - __ SmiCompareInteger32(FieldOperand(rbx, FixedArray::kLengthOffset), rcx); - // rax: value + __ SmiCompareInteger32(FieldOperand(rbx, FixedArray::kLengthOffset), key); // rbx: FixedArray - // rcx: index __ j(above, &fast_object); // Slow case: call runtime. __ bind(&slow); - __ Integer32ToSmi(rcx, rcx); + __ Integer32ToSmi(key, key); __ bind(&slow_with_tagged_index); GenerateRuntimeSetProperty(masm, strict_mode); // Never returns to here. @@ -764,13 +716,11 @@ void KeyedStoreIC::GenerateGeneric(MacroAssembler* masm, // perform the store and update the length. Used for adding one // element to the array by writing to array[array.length]. __ bind(&extra); - // rax: value - // rdx: receiver (a JSArray) + // receiver is a JSArray. // rbx: receiver's elements array (a FixedArray) - // rcx: index - // flags: smicompare (rdx.length(), rbx) + // flags: smicompare (receiver.length(), rbx) __ j(not_equal, &slow); // do not leave holes in the array - __ SmiCompareInteger32(FieldOperand(rbx, FixedArray::kLengthOffset), rcx); + __ SmiCompareInteger32(FieldOperand(rbx, FixedArray::kLengthOffset), key); __ j(below_equal, &slow); // Increment index to get new length. __ movp(rdi, FieldOperand(rbx, HeapObject::kMapOffset)); @@ -788,14 +738,12 @@ void KeyedStoreIC::GenerateGeneric(MacroAssembler* masm, // array. Check that the array is in fast mode (and writable); if it // is the length is always a smi. __ bind(&array); - // rax: value - // rdx: receiver (a JSArray) - // rcx: index - __ movp(rbx, FieldOperand(rdx, JSObject::kElementsOffset)); + // receiver is a JSArray. + __ movp(rbx, FieldOperand(receiver, JSObject::kElementsOffset)); // Check the key against the length in the array, compute the // address to store into and fall through to fast case. - __ SmiCompareInteger32(FieldOperand(rdx, JSArray::kLengthOffset), rcx); + __ SmiCompareInteger32(FieldOperand(receiver, JSArray::kLengthOffset), key); __ j(below_equal, &extra); KeyedStoreGenerateGenericHelper(masm, &fast_object, &fast_double, @@ -887,21 +835,22 @@ static Operand GenerateUnmappedArgumentsLookup(MacroAssembler* masm, void KeyedLoadIC::GenerateSloppyArguments(MacroAssembler* masm) { - // ----------- S t a t e ------------- - // -- rax : key - // -- rdx : receiver - // -- rsp[0] : return address - // ----------------------------------- + // The return address is on the stack. + Register receiver = ReceiverRegister(); + Register key = NameRegister(); + DCHECK(receiver.is(rdx)); + DCHECK(key.is(rcx)); + Label slow, notin; Operand mapped_location = GenerateMappedArgumentsLookup( - masm, rdx, rax, rbx, rcx, rdi, &notin, &slow); + masm, receiver, key, rbx, rax, rdi, &notin, &slow); __ movp(rax, mapped_location); __ Ret(); __ bind(&notin); // The unmapped lookup expects that the parameter map is in rbx.
Operand unmapped_location = - GenerateUnmappedArgumentsLookup(masm, rax, rbx, rcx, &slow); + GenerateUnmappedArgumentsLookup(masm, key, rbx, rax, &slow); __ CompareRoot(unmapped_location, Heap::kTheHoleValueRootIndex); __ j(equal, &slow); __ movp(rax, unmapped_location); @@ -912,18 +861,20 @@ void KeyedLoadIC::GenerateSloppyArguments(MacroAssembler* masm) { void KeyedStoreIC::GenerateSloppyArguments(MacroAssembler* masm) { - // ----------- S t a t e ------------- - // -- rax : value - // -- rcx : key - // -- rdx : receiver - // -- rsp[0] : return address - // ----------------------------------- + // The return address is on the stack. Label slow, notin; + Register receiver = ReceiverRegister(); + Register name = NameRegister(); + Register value = ValueRegister(); + DCHECK(receiver.is(rdx)); + DCHECK(name.is(rcx)); + DCHECK(value.is(rax)); + Operand mapped_location = GenerateMappedArgumentsLookup( - masm, rdx, rcx, rbx, rdi, r8, ¬in, &slow); - __ movp(mapped_location, rax); + masm, receiver, name, rbx, rdi, r8, ¬in, &slow); + __ movp(mapped_location, value); __ leap(r9, mapped_location); - __ movp(r8, rax); + __ movp(r8, value); __ RecordWrite(rbx, r9, r8, @@ -934,10 +885,10 @@ void KeyedStoreIC::GenerateSloppyArguments(MacroAssembler* masm) { __ bind(¬in); // The unmapped lookup expects that the parameter map is in rbx. Operand unmapped_location = - GenerateUnmappedArgumentsLookup(masm, rcx, rbx, rdi, &slow); - __ movp(unmapped_location, rax); + GenerateUnmappedArgumentsLookup(masm, name, rbx, rdi, &slow); + __ movp(unmapped_location, value); __ leap(r9, unmapped_location); - __ movp(r8, rax); + __ movp(r8, value); __ RecordWrite(rbx, r9, r8, @@ -951,56 +902,60 @@ void KeyedStoreIC::GenerateSloppyArguments(MacroAssembler* masm) { void LoadIC::GenerateMegamorphic(MacroAssembler* masm) { - // ----------- S t a t e ------------- - // -- rax : receiver - // -- rcx : name - // -- rsp[0] : return address - // ----------------------------------- + // The return address is on the stack. + Register receiver = ReceiverRegister(); + Register name = NameRegister(); + DCHECK(receiver.is(rdx)); + DCHECK(name.is(rcx)); // Probe the stub cache. - Code::Flags flags = Code::ComputeHandlerFlags(Code::LOAD_IC); + Code::Flags flags = Code::RemoveTypeAndHolderFromFlags( + Code::ComputeHandlerFlags(Code::LOAD_IC)); masm->isolate()->stub_cache()->GenerateProbe( - masm, flags, rax, rcx, rbx, rdx); + masm, flags, receiver, name, rbx, rax); GenerateMiss(masm); } void LoadIC::GenerateNormal(MacroAssembler* masm) { - // ----------- S t a t e ------------- - // -- rax : receiver - // -- rcx : name - // -- rsp[0] : return address - // ----------------------------------- - Label miss; + Register dictionary = rax; + DCHECK(!dictionary.is(ReceiverRegister())); + DCHECK(!dictionary.is(NameRegister())); - GenerateNameDictionaryReceiverCheck(masm, rax, rdx, rbx, &miss); + Label slow; - // rdx: elements - // Search the dictionary placing the result in rax. - GenerateDictionaryLoad(masm, &miss, rdx, rcx, rbx, rdi, rax); + __ movp(dictionary, + FieldOperand(ReceiverRegister(), JSObject::kPropertiesOffset)); + GenerateDictionaryLoad(masm, &slow, dictionary, NameRegister(), rbx, rdi, + rax); __ ret(0); - // Cache miss: Jump to runtime. - __ bind(&miss); - GenerateMiss(masm); + // Dictionary load failed, go slow (but don't miss). + __ bind(&slow); + GenerateRuntimeGetProperty(masm); +} + + +// A register that isn't one of the parameters to the load ic. 
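// Such a temp register must not alias any IC parameter register: it holds
// the popped return address while the receiver and name (which live in the
// parameter registers) are re-pushed as call arguments. A compilable
// illustration of the invariant the DCHECKs encode (the enum is a toy
// stand-in for the real Register type):
enum class ToyRegister { rax, rbx, rcx, rdx };

constexpr ToyRegister kLoadReceiver = ToyRegister::rdx;  // ReceiverRegister()
constexpr ToyRegister kLoadName = ToyRegister::rcx;      // NameRegister()

constexpr bool IsSafeLoadTemp(ToyRegister temp) {
  return temp != kLoadReceiver && temp != kLoadName;
}

static_assert(IsSafeLoadTemp(ToyRegister::rbx), "rbx is free for LoadIC temps");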
+static const Register LoadIC_TempRegister() { return rbx; } + + +static const Register KeyedLoadIC_TempRegister() { + return rbx; } void LoadIC::GenerateMiss(MacroAssembler* masm) { - // ----------- S t a t e ------------- - // -- rax : receiver - // -- rcx : name - // -- rsp[0] : return address - // ----------------------------------- + // The return address is on the stack. Counters* counters = masm->isolate()->counters(); __ IncrementCounter(counters->load_miss(), 1); - __ PopReturnAddressTo(rbx); - __ Push(rax); // receiver - __ Push(rcx); // name - __ PushReturnAddressFrom(rbx); + __ PopReturnAddressTo(LoadIC_TempRegister()); + __ Push(ReceiverRegister()); // receiver + __ Push(NameRegister()); // name + __ PushReturnAddressFrom(LoadIC_TempRegister()); // Perform tail call to the entry. ExternalReference ref = @@ -1010,16 +965,12 @@ void LoadIC::GenerateMiss(MacroAssembler* masm) { void LoadIC::GenerateRuntimeGetProperty(MacroAssembler* masm) { - // ----------- S t a t e ------------- - // -- rax : receiver - // -- rcx : name - // -- rsp[0] : return address - // ----------------------------------- + // The return address is on the stack. - __ PopReturnAddressTo(rbx); - __ Push(rax); // receiver - __ Push(rcx); // name - __ PushReturnAddressFrom(rbx); + __ PopReturnAddressTo(LoadIC_TempRegister()); + __ Push(ReceiverRegister()); // receiver + __ Push(NameRegister()); // name + __ PushReturnAddressFrom(LoadIC_TempRegister()); // Perform tail call to the entry. __ TailCallRuntime(Runtime::kGetProperty, 2, 1); @@ -1027,19 +978,14 @@ void LoadIC::GenerateRuntimeGetProperty(MacroAssembler* masm) { void KeyedLoadIC::GenerateMiss(MacroAssembler* masm) { - // ----------- S t a t e ------------- - // -- rax : key - // -- rdx : receiver - // -- rsp[0] : return address - // ----------------------------------- - + // The return address is on the stack. Counters* counters = masm->isolate()->counters(); __ IncrementCounter(counters->keyed_load_miss(), 1); - __ PopReturnAddressTo(rbx); - __ Push(rdx); // receiver - __ Push(rax); // name - __ PushReturnAddressFrom(rbx); + __ PopReturnAddressTo(KeyedLoadIC_TempRegister()); + __ Push(ReceiverRegister()); // receiver + __ Push(NameRegister()); // name + __ PushReturnAddressFrom(KeyedLoadIC_TempRegister()); // Perform tail call to the entry. ExternalReference ref = @@ -1048,17 +994,40 @@ void KeyedLoadIC::GenerateMiss(MacroAssembler* masm) { } +// IC register specifications +const Register LoadIC::ReceiverRegister() { return rdx; } +const Register LoadIC::NameRegister() { return rcx; } + + +const Register LoadIC::SlotRegister() { + DCHECK(FLAG_vector_ics); + return rax; +} + + +const Register LoadIC::VectorRegister() { + DCHECK(FLAG_vector_ics); + return rbx; +} + + +const Register StoreIC::ReceiverRegister() { return rdx; } +const Register StoreIC::NameRegister() { return rcx; } +const Register StoreIC::ValueRegister() { return rax; } + + +const Register KeyedStoreIC::MapRegister() { + return rbx; +} + + void KeyedLoadIC::GenerateRuntimeGetProperty(MacroAssembler* masm) { - // ----------- S t a t e ------------- - // -- rax : key - // -- rdx : receiver - // -- rsp[0] : return address - // ----------------------------------- + // The return address is on the stack. 
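// The Pop/Push/PushReturnAddressFrom sequence below rewrites the stack so
// the runtime call behaves like a tail call: the return address is lifted
// into the temp register, the IC's arguments are pushed underneath it, and
// it is pushed back on top, so the runtime function returns directly to the
// IC's caller. A toy model of the resulting layout (push_back = push):
#include <cstdint>
#include <vector>

std::vector<uint64_t> BuildTailCallFrame(uint64_t return_address,
                                         uint64_t receiver, uint64_t name) {
  std::vector<uint64_t> stack;       // bottom ... top
  stack.push_back(receiver);         // __ Push(ReceiverRegister())
  stack.push_back(name);             // __ Push(NameRegister())
  stack.push_back(return_address);   // __ PushReturnAddressFrom(temp)
  return stack;                      // callee sees [receiver, name] below ret
}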
- __ PopReturnAddressTo(rbx); - __ Push(rdx); // receiver - __ Push(rax); // name - __ PushReturnAddressFrom(rbx); + __ PopReturnAddressTo(KeyedLoadIC_TempRegister()); + __ Push(ReceiverRegister()); // receiver + __ Push(NameRegister()); // name + __ PushReturnAddressFrom(KeyedLoadIC_TempRegister()); // Perform tail call to the entry. __ TailCallRuntime(Runtime::kKeyedGetProperty, 2, 1); @@ -1066,36 +1035,37 @@ void KeyedLoadIC::GenerateRuntimeGetProperty(MacroAssembler* masm) { void StoreIC::GenerateMegamorphic(MacroAssembler* masm) { - // ----------- S t a t e ------------- - // -- rax : value - // -- rcx : name - // -- rdx : receiver - // -- rsp[0] : return address - // ----------------------------------- + // The return address is on the stack. // Get the receiver from the stack and probe the stub cache. - Code::Flags flags = Code::ComputeHandlerFlags(Code::STORE_IC); + Code::Flags flags = Code::RemoveTypeAndHolderFromFlags( + Code::ComputeHandlerFlags(Code::STORE_IC)); masm->isolate()->stub_cache()->GenerateProbe( - masm, flags, rdx, rcx, rbx, no_reg); + masm, flags, ReceiverRegister(), NameRegister(), rbx, no_reg); // Cache miss: Jump to runtime. GenerateMiss(masm); } -void StoreIC::GenerateMiss(MacroAssembler* masm) { - // ----------- S t a t e ------------- - // -- rax : value - // -- rcx : name - // -- rdx : receiver - // -- rsp[0] : return address - // ----------------------------------- +static void StoreIC_PushArgs(MacroAssembler* masm) { + Register receiver = StoreIC::ReceiverRegister(); + Register name = StoreIC::NameRegister(); + Register value = StoreIC::ValueRegister(); + + DCHECK(!rbx.is(receiver) && !rbx.is(name) && !rbx.is(value)); __ PopReturnAddressTo(rbx); - __ Push(rdx); // receiver - __ Push(rcx); // name - __ Push(rax); // value + __ Push(receiver); + __ Push(name); + __ Push(value); __ PushReturnAddressFrom(rbx); +} + + +void StoreIC::GenerateMiss(MacroAssembler* masm) { + // Return address is on the stack. + StoreIC_PushArgs(masm); // Perform tail call to the entry. ExternalReference ref = @@ -1105,18 +1075,15 @@ void StoreIC::GenerateMiss(MacroAssembler* masm) { void StoreIC::GenerateNormal(MacroAssembler* masm) { - // ----------- S t a t e ------------- - // -- rax : value - // -- rcx : name - // -- rdx : receiver - // -- rsp[0] : return address - // ----------------------------------- + Register receiver = ReceiverRegister(); + Register name = NameRegister(); + Register value = ValueRegister(); + Register dictionary = rbx; Label miss; - GenerateNameDictionaryReceiverCheck(masm, rdx, rbx, rdi, &miss); - - GenerateDictionaryStore(masm, &miss, rbx, rcx, rax, r8, r9); + __ movp(dictionary, FieldOperand(receiver, JSObject::kPropertiesOffset)); + GenerateDictionaryStore(masm, &miss, dictionary, name, value, r8, r9); Counters* counters = masm->isolate()->counters(); __ IncrementCounter(counters->store_normal_hit(), 1); __ ret(0); @@ -1129,60 +1096,43 @@ void StoreIC::GenerateNormal(MacroAssembler* masm) { void StoreIC::GenerateRuntimeSetProperty(MacroAssembler* masm, StrictMode strict_mode) { - // ----------- S t a t e ------------- - // -- rax : value - // -- rcx : name - // -- rdx : receiver - // -- rsp[0] : return address - // ----------------------------------- + // Return address is on the stack. 
+ DCHECK(!rbx.is(ReceiverRegister()) && !rbx.is(NameRegister()) && + !rbx.is(ValueRegister())); + __ PopReturnAddressTo(rbx); - __ Push(rdx); - __ Push(rcx); - __ Push(rax); - __ Push(Smi::FromInt(NONE)); // PropertyAttributes + __ Push(ReceiverRegister()); + __ Push(NameRegister()); + __ Push(ValueRegister()); __ Push(Smi::FromInt(strict_mode)); __ PushReturnAddressFrom(rbx); // Do tail-call to runtime routine. - __ TailCallRuntime(Runtime::kSetProperty, 5, 1); + __ TailCallRuntime(Runtime::kSetProperty, 4, 1); } void KeyedStoreIC::GenerateRuntimeSetProperty(MacroAssembler* masm, StrictMode strict_mode) { - // ----------- S t a t e ------------- - // -- rax : value - // -- rcx : key - // -- rdx : receiver - // -- rsp[0] : return address - // ----------------------------------- + // Return address is on the stack. + DCHECK(!rbx.is(ReceiverRegister()) && !rbx.is(NameRegister()) && + !rbx.is(ValueRegister())); __ PopReturnAddressTo(rbx); - __ Push(rdx); // receiver - __ Push(rcx); // key - __ Push(rax); // value - __ Push(Smi::FromInt(NONE)); // PropertyAttributes + __ Push(ReceiverRegister()); + __ Push(NameRegister()); + __ Push(ValueRegister()); __ Push(Smi::FromInt(strict_mode)); // Strict mode. __ PushReturnAddressFrom(rbx); // Do tail-call to runtime routine. - __ TailCallRuntime(Runtime::kSetProperty, 5, 1); + __ TailCallRuntime(Runtime::kSetProperty, 4, 1); } void StoreIC::GenerateSlow(MacroAssembler* masm) { - // ----------- S t a t e ------------- - // -- rax : value - // -- rcx : key - // -- rdx : receiver - // -- rsp[0] : return address - // ----------------------------------- - - __ PopReturnAddressTo(rbx); - __ Push(rdx); // receiver - __ Push(rcx); // key - __ Push(rax); // value - __ PushReturnAddressFrom(rbx); + // Return address is on the stack. + StoreIC_PushArgs(masm); // Do tail-call to runtime routine. ExternalReference ref(IC_Utility(kStoreIC_Slow), masm->isolate()); @@ -1191,18 +1141,8 @@ void StoreIC::GenerateSlow(MacroAssembler* masm) { void KeyedStoreIC::GenerateSlow(MacroAssembler* masm) { - // ----------- S t a t e ------------- - // -- rax : value - // -- rcx : key - // -- rdx : receiver - // -- rsp[0] : return address - // ----------------------------------- - - __ PopReturnAddressTo(rbx); - __ Push(rdx); // receiver - __ Push(rcx); // key - __ Push(rax); // value - __ PushReturnAddressFrom(rbx); + // Return address is on the stack. + StoreIC_PushArgs(masm); // Do tail-call to runtime routine. ExternalReference ref(IC_Utility(kKeyedStoreIC_Slow), masm->isolate()); @@ -1211,18 +1151,8 @@ void KeyedStoreIC::GenerateSlow(MacroAssembler* masm) { void KeyedStoreIC::GenerateMiss(MacroAssembler* masm) { - // ----------- S t a t e ------------- - // -- rax : value - // -- rcx : key - // -- rdx : receiver - // -- rsp[0] : return address - // ----------------------------------- - - __ PopReturnAddressTo(rbx); - __ Push(rdx); // receiver - __ Push(rcx); // key - __ Push(rax); // value - __ PushReturnAddressFrom(rbx); + // Return address is on the stack. + StoreIC_PushArgs(masm); // Do tail-call to runtime routine. ExternalReference ref = @@ -1273,7 +1203,7 @@ void PatchInlinedSmiCode(Address address, InlinedSmiCheck check) { // If the instruction following the call is not a test al, nothing // was inlined. 
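// PatchInlinedSmiCode rewrites one byte of already-generated code: a short
// conditional jump is toggled between its carry-flag and zero-flag forms to
// enable or disable the inlined smi check. A sketch of the byte patch,
// using the standard x86 short-jcc opcodes (the values the Assembler
// constants kJcShortOpcode etc. are expected to hold):
#include <cstdint>

constexpr uint8_t kJcShort = 0x72;   // jc  rel8
constexpr uint8_t kJncShort = 0x73;  // jnc rel8
constexpr uint8_t kJzShort = 0x74;   // jz  rel8
constexpr uint8_t kJnzShort = 0x75;  // jnz rel8

void ToggleSmiCheck(uint8_t* jmp_opcode, bool enable) {
  if (enable)   // jc -> jz, jnc -> jnz (same polarity, different flag)
    *jmp_opcode = (*jmp_opcode == kJcShort) ? kJzShort : kJnzShort;
  else          // the reverse rewrite disables the inlined check
    *jmp_opcode = (*jmp_opcode == kJzShort) ? kJcShort : kJncShort;
}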
if (*test_instruction_address != Assembler::kTestAlByte) { - ASSERT(*test_instruction_address == Assembler::kNopByte); + DCHECK(*test_instruction_address == Assembler::kNopByte); return; } @@ -1290,7 +1220,7 @@ void PatchInlinedSmiCode(Address address, InlinedSmiCheck check) { // jump-if-carry/not-carry to jump-if-zero/not-zero, whereas disabling is the // reverse operation of that. Address jmp_address = test_instruction_address - delta; - ASSERT((check == ENABLE_INLINED_SMI_CHECK) + DCHECK((check == ENABLE_INLINED_SMI_CHECK) ? (*jmp_address == Assembler::kJncShortOpcode || *jmp_address == Assembler::kJcShortOpcode) : (*jmp_address == Assembler::kJnzShortOpcode || diff --git a/deps/v8/src/x64/lithium-codegen-x64.cc b/deps/v8/src/x64/lithium-codegen-x64.cc index 57fbc931865..4457c202973 100644 --- a/deps/v8/src/x64/lithium-codegen-x64.cc +++ b/deps/v8/src/x64/lithium-codegen-x64.cc @@ -2,14 +2,14 @@ // Use of this source code is governed by a BSD-style license that can be // found in the LICENSE file. -#include "v8.h" +#include "src/v8.h" #if V8_TARGET_ARCH_X64 -#include "x64/lithium-codegen-x64.h" -#include "code-stubs.h" -#include "stub-cache.h" -#include "hydrogen-osr.h" +#include "src/code-stubs.h" +#include "src/hydrogen-osr.h" +#include "src/stub-cache.h" +#include "src/x64/lithium-codegen-x64.h" namespace v8 { namespace internal { @@ -44,7 +44,7 @@ class SafepointGenerator V8_FINAL : public CallWrapper { bool LCodeGen::GenerateCode() { LPhase phase("Z_Code generation", chunk()); - ASSERT(is_unused()); + DCHECK(is_unused()); status_ = GENERATING; // Open a frame scope to indicate that there is a frame on the stack. The @@ -61,7 +61,7 @@ bool LCodeGen::GenerateCode() { void LCodeGen::FinishCode(Handle<Code> code) { - ASSERT(is_done()); + DCHECK(is_done()); code->set_stack_slots(GetStackSlotCount()); code->set_safepoint_table_offset(safepoints_.GetCodeOffset()); if (code->is_optimized_code()) RegisterWeakObjectsInOptimizedCode(code); @@ -80,8 +80,8 @@ void LCodeGen::MakeSureStackPagesMapped(int offset) { void LCodeGen::SaveCallerDoubles() { - ASSERT(info()->saves_caller_doubles()); - ASSERT(NeedsEagerFrame()); + DCHECK(info()->saves_caller_doubles()); + DCHECK(NeedsEagerFrame()); Comment(";;; Save clobbered callee double registers"); int count = 0; BitVector* doubles = chunk()->allocated_double_registers(); @@ -96,8 +96,8 @@ void LCodeGen::SaveCallerDoubles() { void LCodeGen::RestoreCallerDoubles() { - ASSERT(info()->saves_caller_doubles()); - ASSERT(NeedsEagerFrame()); + DCHECK(info()->saves_caller_doubles()); + DCHECK(NeedsEagerFrame()); Comment(";;; Restore clobbered callee double registers"); BitVector* doubles = chunk()->allocated_double_registers(); BitVector::Iterator save_iterator(doubles); @@ -112,7 +112,7 @@ void LCodeGen::RestoreCallerDoubles() { bool LCodeGen::GeneratePrologue() { - ASSERT(is_generating()); + DCHECK(is_generating()); if (info()->IsOptimizing()) { ProfileEntryHookStub::MaybeCallEntryHook(masm_); @@ -137,7 +137,7 @@ bool LCodeGen::GeneratePrologue() { __ j(not_equal, &ok, Label::kNear); __ movp(rcx, GlobalObjectOperand()); - __ movp(rcx, FieldOperand(rcx, GlobalObject::kGlobalReceiverOffset)); + __ movp(rcx, FieldOperand(rcx, GlobalObject::kGlobalProxyOffset)); __ movp(args.GetReceiverOperand(), rcx); @@ -147,9 +147,13 @@ bool LCodeGen::GeneratePrologue() { info()->set_prologue_offset(masm_->pc_offset()); if (NeedsEagerFrame()) { - ASSERT(!frame_is_built_); + DCHECK(!frame_is_built_); frame_is_built_ = true; - __ Prologue(info()->IsStub() ? 
BUILD_STUB_FRAME : BUILD_FUNCTION_FRAME); + if (info()->IsStub()) { + __ StubPrologue(); + } else { + __ Prologue(info()->IsCodePreAgingActive()); + } info()->AddNoFrameRange(0, masm_->pc_offset()); } @@ -163,7 +167,7 @@ bool LCodeGen::GeneratePrologue() { #endif __ Push(rax); __ Set(rax, slots); - __ movq(kScratchRegister, kSlotsZapValue); + __ Set(kScratchRegister, kSlotsZapValue); Label loop; __ bind(&loop); __ movp(MemOperand(rsp, rax, times_pointer_size, 0), @@ -187,13 +191,16 @@ bool LCodeGen::GeneratePrologue() { int heap_slots = info_->num_heap_slots() - Context::MIN_CONTEXT_SLOTS; if (heap_slots > 0) { Comment(";;; Allocate local context"); + bool need_write_barrier = true; // Argument to NewContext is the function, which is still in rdi. if (heap_slots <= FastNewContextStub::kMaximumSlots) { FastNewContextStub stub(isolate(), heap_slots); __ CallStub(&stub); + // Result of FastNewContextStub is always in new space. + need_write_barrier = false; } else { __ Push(rdi); - __ CallRuntime(Runtime::kHiddenNewFunctionContext, 1); + __ CallRuntime(Runtime::kNewFunctionContext, 1); } RecordSafepoint(Safepoint::kNoLazyDeopt); // Context is returned in rax. It replaces the context passed to us. @@ -214,7 +221,14 @@ bool LCodeGen::GeneratePrologue() { int context_offset = Context::SlotOffset(var->index()); __ movp(Operand(rsi, context_offset), rax); // Update the write barrier. This clobbers rax and rbx. - __ RecordWriteContextSlot(rsi, context_offset, rax, rbx, kSaveFPRegs); + if (need_write_barrier) { + __ RecordWriteContextSlot(rsi, context_offset, rax, rbx, kSaveFPRegs); + } else if (FLAG_debug_code) { + Label done; + __ JumpIfInNewSpace(rsi, rax, &done, Label::kNear); + __ Abort(kExpectedNewSpaceObject); + __ bind(&done); + } } } Comment(";;; End allocate local context"); @@ -238,7 +252,7 @@ void LCodeGen::GenerateOsrPrologue() { // Adjust the frame size, subsuming the unoptimized frame into the // optimized frame. int slots = GetStackSlotCount() - graph()->osr()->UnoptimizedFrameSlots(); - ASSERT(slots >= 0); + DCHECK(slots >= 0); __ subp(rsp, Immediate(slots * kPointerSize)); } @@ -261,12 +275,17 @@ void LCodeGen::GenerateBodyInstructionPost(LInstruction* instr) { } if (instr->HasResult() && instr->MustSignExtendResult(chunk())) { + // We sign extend the dehoisted key at the definition point when the pointer + // size is 64-bit. For x32 port, we sign extend the dehoisted key at the use + // points and MustSignExtendResult is always false. We can't use + // STATIC_ASSERT here as the pointer size is 32-bit for x32. + DCHECK(kPointerSize == kInt64Size); if (instr->result()->IsRegister()) { Register result_reg = ToRegister(instr->result()); __ movsxlq(result_reg, result_reg); } else { // Sign extend the 32bit result in the stack slots. - ASSERT(instr->result()->IsStackSlot()); + DCHECK(instr->result()->IsStackSlot()); Operand src = ToOperand(instr->result()); __ movsxlq(kScratchRegister, src); __ movq(src, kScratchRegister); @@ -291,7 +310,7 @@ bool LCodeGen::GenerateJumpTable() { Comment(";;; jump table entry %d: deoptimization bailout %d.", i, id); } if (jump_table_[i].needs_frame) { - ASSERT(!info()->saves_caller_doubles()); + DCHECK(!info()->saves_caller_doubles()); __ Move(kScratchRegister, ExternalReference::ForDeoptEntry(entry)); if (needs_frame.is_bound()) { __ jmp(&needs_frame); @@ -304,7 +323,7 @@ bool LCodeGen::GenerateJumpTable() { // This variant of deopt can only be used with stubs. 
Since we don't // have a function pointer to install in the stack frame that we're // building, install a special marker there instead. - ASSERT(info()->IsStub()); + DCHECK(info()->IsStub()); __ Move(rsi, Smi::FromInt(StackFrame::STUB)); __ Push(rsi); __ movp(rsi, MemOperand(rsp, kPointerSize)); @@ -312,7 +331,7 @@ bool LCodeGen::GenerateJumpTable() { } } else { if (info()->saves_caller_doubles()) { - ASSERT(info()->IsStub()); + DCHECK(info()->IsStub()); RestoreCallerDoubles(); } __ call(entry, RelocInfo::RUNTIME_ENTRY); @@ -323,7 +342,7 @@ bool LCodeGen::GenerateJumpTable() { bool LCodeGen::GenerateDeferredCode() { - ASSERT(is_generating()); + DCHECK(is_generating()); if (deferred_.length() > 0) { for (int i = 0; !is_aborted() && i < deferred_.length(); i++) { LDeferredCode* code = deferred_[i]; @@ -341,8 +360,8 @@ bool LCodeGen::GenerateDeferredCode() { __ bind(code->entry()); if (NeedsDeferredFrame()) { Comment(";;; Build frame"); - ASSERT(!frame_is_built_); - ASSERT(info()->IsStub()); + DCHECK(!frame_is_built_); + DCHECK(info()->IsStub()); frame_is_built_ = true; // Build the frame in such a way that esi isn't trashed. __ pushq(rbp); // Caller's frame pointer. @@ -355,7 +374,7 @@ bool LCodeGen::GenerateDeferredCode() { if (NeedsDeferredFrame()) { __ bind(code->done()); Comment(";;; Destroy frame"); - ASSERT(frame_is_built_); + DCHECK(frame_is_built_); frame_is_built_ = false; __ movp(rsp, rbp); __ popq(rbp); @@ -372,7 +391,7 @@ bool LCodeGen::GenerateDeferredCode() { bool LCodeGen::GenerateSafepointTable() { - ASSERT(is_done()); + DCHECK(is_done()); safepoints_.Emit(masm(), GetStackSlotCount()); return !is_aborted(); } @@ -389,13 +408,13 @@ XMMRegister LCodeGen::ToDoubleRegister(int index) const { Register LCodeGen::ToRegister(LOperand* op) const { - ASSERT(op->IsRegister()); + DCHECK(op->IsRegister()); return ToRegister(op->index()); } XMMRegister LCodeGen::ToDoubleRegister(LOperand* op) const { - ASSERT(op->IsDoubleRegister()); + DCHECK(op->IsDoubleRegister()); return ToDoubleRegister(op->index()); } @@ -417,8 +436,17 @@ bool LCodeGen::IsSmiConstant(LConstantOperand* op) const { int32_t LCodeGen::ToInteger32(LConstantOperand* op) const { + return ToRepresentation(op, Representation::Integer32()); +} + + +int32_t LCodeGen::ToRepresentation(LConstantOperand* op, + const Representation& r) const { HConstant* constant = chunk_->LookupConstant(op); - return constant->Integer32Value(); + int32_t value = constant->Integer32Value(); + if (r.IsInteger32()) return value; + DCHECK(SmiValuesAre31Bits() && r.IsSmiOrTagged()); + return static_cast<int32_t>(reinterpret_cast<intptr_t>(Smi::FromInt(value))); } @@ -430,27 +458,27 @@ Smi* LCodeGen::ToSmi(LConstantOperand* op) const { double LCodeGen::ToDouble(LConstantOperand* op) const { HConstant* constant = chunk_->LookupConstant(op); - ASSERT(constant->HasDoubleValue()); + DCHECK(constant->HasDoubleValue()); return constant->DoubleValue(); } ExternalReference LCodeGen::ToExternalReference(LConstantOperand* op) const { HConstant* constant = chunk_->LookupConstant(op); - ASSERT(constant->HasExternalReferenceValue()); + DCHECK(constant->HasExternalReferenceValue()); return constant->ExternalReferenceValue(); } Handle<Object> LCodeGen::ToHandle(LConstantOperand* op) const { HConstant* constant = chunk_->LookupConstant(op); - ASSERT(chunk_->LookupLiteralRepresentation(op).IsSmiOrTagged()); + DCHECK(chunk_->LookupLiteralRepresentation(op).IsSmiOrTagged()); return constant->handle(isolate()); } static int ArgumentsOffsetWithoutFrame(int index) { - 
ASSERT(index < 0); + DCHECK(index < 0); return -(index + 1) * kPointerSize + kPCOnStackSize; } @@ -458,7 +486,7 @@ static int ArgumentsOffsetWithoutFrame(int index) { Operand LCodeGen::ToOperand(LOperand* op) const { // Does not handle registers. In X64 assembler, plain registers are not // representable as an Operand. - ASSERT(op->IsStackSlot() || op->IsDoubleStackSlot()); + DCHECK(op->IsStackSlot() || op->IsDoubleStackSlot()); if (NeedsEagerFrame()) { return Operand(rbp, StackSlotOffset(op->index())); } else { @@ -493,13 +521,13 @@ void LCodeGen::WriteTranslation(LEnvironment* environment, translation->BeginConstructStubFrame(closure_id, translation_size); break; case JS_GETTER: - ASSERT(translation_size == 1); - ASSERT(height == 0); + DCHECK(translation_size == 1); + DCHECK(height == 0); translation->BeginGetterStubFrame(closure_id); break; case JS_SETTER: - ASSERT(translation_size == 2); - ASSERT(height == 0); + DCHECK(translation_size == 2); + DCHECK(height == 0); translation->BeginSetterStubFrame(closure_id); break; case ARGUMENTS_ADAPTOR: @@ -598,7 +626,7 @@ void LCodeGen::CallCodeGeneric(Handle<Code> code, LInstruction* instr, SafepointMode safepoint_mode, int argc) { - ASSERT(instr != NULL); + DCHECK(instr != NULL); __ call(code, mode); RecordSafepointWithLazyDeopt(instr, safepoint_mode, argc); @@ -622,8 +650,8 @@ void LCodeGen::CallRuntime(const Runtime::Function* function, int num_arguments, LInstruction* instr, SaveFPRegsMode save_doubles) { - ASSERT(instr != NULL); - ASSERT(instr->HasPointerMap()); + DCHECK(instr != NULL); + DCHECK(instr->HasPointerMap()); __ CallRuntime(function, num_arguments, save_doubles); @@ -702,9 +730,9 @@ void LCodeGen::DeoptimizeIf(Condition cc, LEnvironment* environment, Deoptimizer::BailoutType bailout_type) { RegisterEnvironmentForDeoptimization(environment, Safepoint::kNoLazyDeopt); - ASSERT(environment->HasBeenRegistered()); + DCHECK(environment->HasBeenRegistered()); int id = environment->deoptimization_index(); - ASSERT(info()->IsOptimizing() || info()->IsStub()); + DCHECK(info()->IsOptimizing() || info()->IsStub()); Address entry = Deoptimizer::GetDeoptimizationEntry(isolate(), id, bailout_type); if (entry == NULL) { @@ -716,7 +744,7 @@ void LCodeGen::DeoptimizeIf(Condition cc, ExternalReference count = ExternalReference::stress_deopt_count(isolate()); Label no_deopt; __ pushfq(); - __ Push(rax); + __ pushq(rax); Operand count_operand = masm()->ExternalOperand(count, kScratchRegister); __ movl(rax, count_operand); __ subl(rax, Immediate(1)); @@ -724,13 +752,13 @@ void LCodeGen::DeoptimizeIf(Condition cc, if (FLAG_trap_on_deopt) __ int3(); __ movl(rax, Immediate(FLAG_deopt_every_n_times)); __ movl(count_operand, rax); - __ Pop(rax); + __ popq(rax); __ popfq(); - ASSERT(frame_is_built_); + DCHECK(frame_is_built_); __ call(entry, RelocInfo::RUNTIME_ENTRY); __ bind(&no_deopt); __ movl(count_operand, rax); - __ Pop(rax); + __ popq(rax); __ popfq(); } @@ -743,7 +771,7 @@ void LCodeGen::DeoptimizeIf(Condition cc, __ bind(&done); } - ASSERT(info()->IsStub() || frame_is_built_); + DCHECK(info()->IsStub() || frame_is_built_); // Go through jump table if we need to handle condition, build frame, or // restore caller doubles. 
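// The pushfq/pushq block above implements --deopt-every-n-times: the
// generated code decrements a global counter at each eligible deopt point
// and, when it reaches zero, resets it and deoptimizes unconditionally
// (flags and rax are saved around the bookkeeping). The logic it encodes,
// as a plain C++ sketch:
#include <cstdint>

bool ShouldForceDeopt(uint32_t* counter, uint32_t deopt_every_n_times) {
  if (--*counter != 0) return false;     // common case: keep optimized code
  *counter = deopt_every_n_times;        // reset the countdown...
  return true;                           // ...and take the deopt entry
}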
if (cc == no_condition && frame_is_built_ && @@ -783,7 +811,7 @@ void LCodeGen::PopulateDeoptimizationData(Handle<Code> code) { int length = deoptimizations_.length(); if (length == 0) return; Handle<DeoptimizationInputData> data = - DeoptimizationInputData::New(isolate(), length, TENURED); + DeoptimizationInputData::New(isolate(), length, 0, TENURED); Handle<ByteArray> translations = translations_.CreateByteArray(isolate()->factory()); @@ -834,7 +862,7 @@ int LCodeGen::DefineDeoptimizationLiteral(Handle<Object> literal) { void LCodeGen::PopulateDeoptimizationLiteralsWithInlinedFunctions() { - ASSERT(deoptimization_literals_.length() == 0); + DCHECK(deoptimization_literals_.length() == 0); const ZoneList<Handle<JSFunction> >* inlined_closures = chunk()->inlined_closures(); @@ -854,7 +882,7 @@ void LCodeGen::RecordSafepointWithLazyDeopt( if (safepoint_mode == RECORD_SIMPLE_SAFEPOINT) { RecordSafepoint(instr->pointer_map(), Safepoint::kLazyDeopt); } else { - ASSERT(safepoint_mode == RECORD_SAFEPOINT_WITH_REGISTERS); + DCHECK(safepoint_mode == RECORD_SAFEPOINT_WITH_REGISTERS); RecordSafepointWithRegisters( instr->pointer_map(), argc, Safepoint::kLazyDeopt); } @@ -866,7 +894,7 @@ void LCodeGen::RecordSafepoint( Safepoint::Kind kind, int arguments, Safepoint::DeoptMode deopt_mode) { - ASSERT(kind == expected_safepoint_kind_); + DCHECK(kind == expected_safepoint_kind_); const ZoneList<LOperand*>* operands = pointers->GetNormalizedOperands(); @@ -955,8 +983,8 @@ void LCodeGen::DoParameter(LParameter* instr) { void LCodeGen::DoCallStub(LCallStub* instr) { - ASSERT(ToRegister(instr->context()).is(rsi)); - ASSERT(ToRegister(instr->result()).is(rax)); + DCHECK(ToRegister(instr->context()).is(rsi)); + DCHECK(ToRegister(instr->result()).is(rax)); switch (instr->hydrogen()->major_key()) { case CodeStub::RegExpExec: { RegExpExecStub stub(isolate()); @@ -987,7 +1015,7 @@ void LCodeGen::DoUnknownOSRValue(LUnknownOSRValue* instr) { void LCodeGen::DoModByPowerOf2I(LModByPowerOf2I* instr) { Register dividend = ToRegister(instr->dividend()); int32_t divisor = instr->divisor(); - ASSERT(dividend.is(ToRegister(instr->result()))); + DCHECK(dividend.is(ToRegister(instr->result()))); // Theoretically, a variation of the branch-free code for integer division by // a power of 2 (calculating the remainder via an additional multiplication @@ -1020,7 +1048,7 @@ void LCodeGen::DoModByPowerOf2I(LModByPowerOf2I* instr) { void LCodeGen::DoModByConstI(LModByConstI* instr) { Register dividend = ToRegister(instr->dividend()); int32_t divisor = instr->divisor(); - ASSERT(ToRegister(instr->result()).is(rax)); + DCHECK(ToRegister(instr->result()).is(rax)); if (divisor == 0) { DeoptimizeIf(no_condition, instr->environment()); @@ -1048,12 +1076,12 @@ void LCodeGen::DoModI(LModI* instr) { HMod* hmod = instr->hydrogen(); Register left_reg = ToRegister(instr->left()); - ASSERT(left_reg.is(rax)); + DCHECK(left_reg.is(rax)); Register right_reg = ToRegister(instr->right()); - ASSERT(!right_reg.is(rax)); - ASSERT(!right_reg.is(rdx)); + DCHECK(!right_reg.is(rax)); + DCHECK(!right_reg.is(rdx)); Register result_reg = ToRegister(instr->result()); - ASSERT(result_reg.is(rdx)); + DCHECK(result_reg.is(rdx)); Label done; // Check for x % 0, idiv would signal a divide error. 
We have to @@ -1103,7 +1131,7 @@ void LCodeGen::DoModI(LModI* instr) { void LCodeGen::DoFlooringDivByPowerOf2I(LFlooringDivByPowerOf2I* instr) { Register dividend = ToRegister(instr->dividend()); int32_t divisor = instr->divisor(); - ASSERT(dividend.is(ToRegister(instr->result()))); + DCHECK(dividend.is(ToRegister(instr->result()))); // If the divisor is positive, things are easy: There can be no deopts and we // can simply do an arithmetic right shift. @@ -1120,16 +1148,17 @@ void LCodeGen::DoFlooringDivByPowerOf2I(LFlooringDivByPowerOf2I* instr) { DeoptimizeIf(zero, instr->environment()); } - // If the negation could not overflow, simply shifting is OK. - if (!instr->hydrogen()->CheckFlag(HValue::kLeftCanBeMinInt)) { - __ sarl(dividend, Immediate(shift)); + // Dividing by -1 is basically negation, unless we overflow. + if (divisor == -1) { + if (instr->hydrogen()->CheckFlag(HValue::kLeftCanBeMinInt)) { + DeoptimizeIf(overflow, instr->environment()); + } return; } - // Note that we could emit branch-free code, but that would need one more - // register. - if (divisor == -1) { - DeoptimizeIf(overflow, instr->environment()); + // If the negation could not overflow, simply shifting is OK. + if (!instr->hydrogen()->CheckFlag(HValue::kLeftCanBeMinInt)) { + __ sarl(dividend, Immediate(shift)); return; } @@ -1146,7 +1175,7 @@ void LCodeGen::DoFlooringDivByPowerOf2I(LFlooringDivByPowerOf2I* instr) { void LCodeGen::DoFlooringDivByConstI(LFlooringDivByConstI* instr) { Register dividend = ToRegister(instr->dividend()); int32_t divisor = instr->divisor(); - ASSERT(ToRegister(instr->result()).is(rdx)); + DCHECK(ToRegister(instr->result()).is(rdx)); if (divisor == 0) { DeoptimizeIf(no_condition, instr->environment()); @@ -1172,7 +1201,7 @@ void LCodeGen::DoFlooringDivByConstI(LFlooringDivByConstI* instr) { // In the general case we may need to adjust before and after the truncating // division to get a flooring division. Register temp = ToRegister(instr->temp3()); - ASSERT(!temp.is(dividend) && !temp.is(rax) && !temp.is(rdx)); + DCHECK(!temp.is(dividend) && !temp.is(rax) && !temp.is(rdx)); Label needs_adjustment, done; __ cmpl(dividend, Immediate(0)); __ j(divisor > 0 ? less : greater, &needs_adjustment, Label::kNear); @@ -1195,11 +1224,11 @@ void LCodeGen::DoFlooringDivI(LFlooringDivI* instr) { Register divisor = ToRegister(instr->divisor()); Register remainder = ToRegister(instr->temp()); Register result = ToRegister(instr->result()); - ASSERT(dividend.is(rax)); - ASSERT(remainder.is(rdx)); - ASSERT(result.is(rax)); - ASSERT(!divisor.is(rax)); - ASSERT(!divisor.is(rdx)); + DCHECK(dividend.is(rax)); + DCHECK(remainder.is(rdx)); + DCHECK(result.is(rax)); + DCHECK(!divisor.is(rax)); + DCHECK(!divisor.is(rdx)); // Check for x / 0. if (hdiv->CheckFlag(HValue::kCanBeDivByZero)) { @@ -1245,8 +1274,8 @@ void LCodeGen::DoDivByPowerOf2I(LDivByPowerOf2I* instr) { Register dividend = ToRegister(instr->dividend()); int32_t divisor = instr->divisor(); Register result = ToRegister(instr->result()); - ASSERT(divisor == kMinInt || IsPowerOf2(Abs(divisor))); - ASSERT(!result.is(dividend)); + DCHECK(divisor == kMinInt || IsPowerOf2(Abs(divisor))); + DCHECK(!result.is(dividend)); // Check for (0 / -x) that will produce negative zero. 
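// DoFlooringDivByPowerOf2I above leans on two facts: an arithmetic right
// shift of a two's-complement int32 rounds toward minus infinity, so
// "x >> k" already is the flooring quotient for a positive divisor 2^k; and
// dividing by -1 is negation, which overflows only for INT32_MIN (hence the
// overflow deopt). A sketch, assuming the usual arithmetic-shift targets:
#include <cstdint>
#include <limits>

int32_t FlooringDivByPowerOfTwo(int32_t x, int shift) {
  return x >> shift;  // rounds toward -infinity, unlike C's x / (1 << shift)
}

bool NegationOverflows(int32_t x) {  // the divisor == -1 case
  return x == std::numeric_limits<int32_t>::min();
}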
HDiv* hdiv = instr->hydrogen(); @@ -1282,7 +1311,7 @@ void LCodeGen::DoDivByPowerOf2I(LDivByPowerOf2I* instr) { void LCodeGen::DoDivByConstI(LDivByConstI* instr) { Register dividend = ToRegister(instr->dividend()); int32_t divisor = instr->divisor(); - ASSERT(ToRegister(instr->result()).is(rdx)); + DCHECK(ToRegister(instr->result()).is(rdx)); if (divisor == 0) { DeoptimizeIf(no_condition, instr->environment()); @@ -1314,11 +1343,11 @@ void LCodeGen::DoDivI(LDivI* instr) { Register dividend = ToRegister(instr->dividend()); Register divisor = ToRegister(instr->divisor()); Register remainder = ToRegister(instr->temp()); - ASSERT(dividend.is(rax)); - ASSERT(remainder.is(rdx)); - ASSERT(ToRegister(instr->result()).is(rax)); - ASSERT(!divisor.is(rax)); - ASSERT(!divisor.is(rdx)); + DCHECK(dividend.is(rax)); + DCHECK(remainder.is(rdx)); + DCHECK(ToRegister(instr->result()).is(rax)); + DCHECK(!divisor.is(rax)); + DCHECK(!divisor.is(rdx)); // Check for x / 0. if (hdiv->CheckFlag(HValue::kCanBeDivByZero)) { @@ -1443,8 +1472,11 @@ void LCodeGen::DoMulI(LMulI* instr) { } __ j(not_zero, &done, Label::kNear); if (right->IsConstantOperand()) { - // Constant can't be represented as Smi due to immediate size limit. - ASSERT(!instr->hydrogen_value()->representation().IsSmi()); + // Constant can't be represented as 32-bit Smi due to immediate size + // limit. + DCHECK(SmiValuesAre32Bits() + ? !instr->hydrogen_value()->representation().IsSmi() + : SmiValuesAre31Bits()); if (ToInteger32(LConstantOperand::cast(right)) < 0) { DeoptimizeIf(no_condition, instr->environment()); } else if (ToInteger32(LConstantOperand::cast(right)) == 0) { @@ -1475,11 +1507,13 @@ void LCodeGen::DoMulI(LMulI* instr) { void LCodeGen::DoBitI(LBitI* instr) { LOperand* left = instr->left(); LOperand* right = instr->right(); - ASSERT(left->Equals(instr->result())); - ASSERT(left->IsRegister()); + DCHECK(left->Equals(instr->result())); + DCHECK(left->IsRegister()); if (right->IsConstantOperand()) { - int32_t right_operand = ToInteger32(LConstantOperand::cast(right)); + int32_t right_operand = + ToRepresentation(LConstantOperand::cast(right), + instr->hydrogen()->right()->representation()); switch (instr->op()) { case Token::BIT_AND: __ andl(ToRegister(left), Immediate(right_operand)); @@ -1526,7 +1560,7 @@ void LCodeGen::DoBitI(LBitI* instr) { break; } } else { - ASSERT(right->IsRegister()); + DCHECK(right->IsRegister()); switch (instr->op()) { case Token::BIT_AND: if (instr->IsInteger32()) { @@ -1560,10 +1594,10 @@ void LCodeGen::DoBitI(LBitI* instr) { void LCodeGen::DoShiftI(LShiftI* instr) { LOperand* left = instr->left(); LOperand* right = instr->right(); - ASSERT(left->Equals(instr->result())); - ASSERT(left->IsRegister()); + DCHECK(left->Equals(instr->result())); + DCHECK(left->IsRegister()); if (right->IsRegister()) { - ASSERT(ToRegister(right).is(rcx)); + DCHECK(ToRegister(right).is(rcx)); switch (instr->op()) { case Token::ROR: @@ -1601,17 +1635,30 @@ void LCodeGen::DoShiftI(LShiftI* instr) { } break; case Token::SHR: - if (shift_count == 0 && instr->can_deopt()) { + if (shift_count != 0) { + __ shrl(ToRegister(left), Immediate(shift_count)); + } else if (instr->can_deopt()) { __ testl(ToRegister(left), ToRegister(left)); DeoptimizeIf(negative, instr->environment()); - } else { - __ shrl(ToRegister(left), Immediate(shift_count)); } break; case Token::SHL: if (shift_count != 0) { if (instr->hydrogen_value()->representation().IsSmi()) { - __ shlp(ToRegister(left), Immediate(shift_count)); + if (SmiValuesAre32Bits()) { + __ 
shlp(ToRegister(left), Immediate(shift_count)); + } else { + DCHECK(SmiValuesAre31Bits()); + if (instr->can_deopt()) { + if (shift_count != 1) { + __ shll(ToRegister(left), Immediate(shift_count - 1)); + } + __ Integer32ToSmi(ToRegister(left), ToRegister(left)); + DeoptimizeIf(overflow, instr->environment()); + } else { + __ shll(ToRegister(left), Immediate(shift_count)); + } + } } else { __ shll(ToRegister(left), Immediate(shift_count)); } @@ -1628,11 +1675,13 @@ void LCodeGen::DoShiftI(LShiftI* instr) { void LCodeGen::DoSubI(LSubI* instr) { LOperand* left = instr->left(); LOperand* right = instr->right(); - ASSERT(left->Equals(instr->result())); + DCHECK(left->Equals(instr->result())); if (right->IsConstantOperand()) { - __ subl(ToRegister(left), - Immediate(ToInteger32(LConstantOperand::cast(right)))); + int32_t right_operand = + ToRepresentation(LConstantOperand::cast(right), + instr->hydrogen()->right()->representation()); + __ subl(ToRegister(left), Immediate(right_operand)); } else if (right->IsRegister()) { if (instr->hydrogen_value()->representation().IsSmi()) { __ subp(ToRegister(left), ToRegister(right)); @@ -1669,7 +1718,7 @@ void LCodeGen::DoConstantS(LConstantS* instr) { void LCodeGen::DoConstantD(LConstantD* instr) { - ASSERT(instr->result()->IsDoubleRegister()); + DCHECK(instr->result()->IsDoubleRegister()); XMMRegister res = ToDoubleRegister(instr->result()); double v = instr->value(); uint64_t int_val = BitCast<uint64_t, double>(v); @@ -1693,13 +1742,6 @@ void LCodeGen::DoConstantE(LConstantE* instr) { void LCodeGen::DoConstantT(LConstantT* instr) { Handle<Object> object = instr->value(isolate()); AllowDeferredHandleDereference smi_check; - if (instr->hydrogen()->HasObjectMap()) { - Handle<Map> object_map = instr->hydrogen()->ObjectMap().handle(); - ASSERT(object->IsHeapObject()); - ASSERT(!object_map->is_stable() || - *object_map == Handle<HeapObject>::cast(object)->map()); - USE(object_map); - } __ Move(ToRegister(instr->result()), object); } @@ -1716,8 +1758,8 @@ void LCodeGen::DoDateField(LDateField* instr) { Register result = ToRegister(instr->result()); Smi* index = instr->index(); Label runtime, done, not_date_object; - ASSERT(object.is(result)); - ASSERT(object.is(rax)); + DCHECK(object.is(result)); + DCHECK(object.is(rax)); Condition cc = masm()->CheckSmi(object); DeoptimizeIf(cc, instr->environment()); @@ -1812,12 +1854,12 @@ void LCodeGen::DoSeqStringSetChar(LSeqStringSetChar* instr) { Operand operand = BuildSeqStringOperand(string, instr->index(), encoding); if (instr->value()->IsConstantOperand()) { int value = ToInteger32(LConstantOperand::cast(instr->value())); - ASSERT_LE(0, value); + DCHECK_LE(0, value); if (encoding == String::ONE_BYTE_ENCODING) { - ASSERT_LE(value, String::kMaxOneByteCharCode); + DCHECK_LE(value, String::kMaxOneByteCharCode); __ movb(operand, Immediate(value)); } else { - ASSERT_LE(value, String::kMaxUtf16CodeUnit); + DCHECK_LE(value, String::kMaxUtf16CodeUnit); __ movw(operand, Immediate(value)); } } else { @@ -1840,8 +1882,11 @@ void LCodeGen::DoAddI(LAddI* instr) { if (LAddI::UseLea(instr->hydrogen()) && !left->Equals(instr->result())) { if (right->IsConstantOperand()) { - ASSERT(!target_rep.IsSmi()); // No support for smi-immediates. - int32_t offset = ToInteger32(LConstantOperand::cast(right)); + // No support for smi-immediates for 32-bit SMI. + DCHECK(SmiValuesAre32Bits() ? 
!target_rep.IsSmi() : SmiValuesAre31Bits()); + int32_t offset = + ToRepresentation(LConstantOperand::cast(right), + instr->hydrogen()->right()->representation()); if (is_p) { __ leap(ToRegister(instr->result()), MemOperand(ToRegister(left), offset)); @@ -1859,13 +1904,15 @@ void LCodeGen::DoAddI(LAddI* instr) { } } else { if (right->IsConstantOperand()) { - ASSERT(!target_rep.IsSmi()); // No support for smi-immediates. + // No support for smi-immediates for 32-bit SMI. + DCHECK(SmiValuesAre32Bits() ? !target_rep.IsSmi() : SmiValuesAre31Bits()); + int32_t right_operand = + ToRepresentation(LConstantOperand::cast(right), + instr->hydrogen()->right()->representation()); if (is_p) { - __ addp(ToRegister(left), - Immediate(ToInteger32(LConstantOperand::cast(right)))); + __ addp(ToRegister(left), Immediate(right_operand)); } else { - __ addl(ToRegister(left), - Immediate(ToInteger32(LConstantOperand::cast(right)))); + __ addl(ToRegister(left), Immediate(right_operand)); } } else if (right->IsRegister()) { if (is_p) { @@ -1890,7 +1937,7 @@ void LCodeGen::DoAddI(LAddI* instr) { void LCodeGen::DoMathMinMax(LMathMinMax* instr) { LOperand* left = instr->left(); LOperand* right = instr->right(); - ASSERT(left->Equals(instr->result())); + DCHECK(left->Equals(instr->result())); HMathMinMax::Operation operation = instr->hydrogen()->operation(); if (instr->hydrogen()->representation().IsSmiOrInteger32()) { Label return_left; @@ -1899,9 +1946,12 @@ void LCodeGen::DoMathMinMax(LMathMinMax* instr) { : greater_equal; Register left_reg = ToRegister(left); if (right->IsConstantOperand()) { - Immediate right_imm = - Immediate(ToInteger32(LConstantOperand::cast(right))); - ASSERT(!instr->hydrogen_value()->representation().IsSmi()); + Immediate right_imm = Immediate( + ToRepresentation(LConstantOperand::cast(right), + instr->hydrogen()->right()->representation())); + DCHECK(SmiValuesAre32Bits() + ? !instr->hydrogen()->representation().IsSmi() + : SmiValuesAre31Bits()); __ cmpl(left_reg, right_imm); __ j(condition, &return_left, Label::kNear); __ movp(left_reg, right_imm); @@ -1926,7 +1976,7 @@ void LCodeGen::DoMathMinMax(LMathMinMax* instr) { } __ bind(&return_left); } else { - ASSERT(instr->hydrogen()->representation().IsDouble()); + DCHECK(instr->hydrogen()->representation().IsDouble()); Label check_nan_left, check_zero, return_left, return_right; Condition condition = (operation == HMathMinMax::kMathMin) ? below : above; XMMRegister left_reg = ToDoubleRegister(left); @@ -1967,7 +2017,7 @@ void LCodeGen::DoArithmeticD(LArithmeticD* instr) { XMMRegister right = ToDoubleRegister(instr->right()); XMMRegister result = ToDoubleRegister(instr->result()); // All operations except MOD are computed in-place. 
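// MOD is the odd one out in DoArithmeticD below: ADD/SUB/MUL/DIV each map
// to a single SSE2 instruction (addsd/subsd/mulsd/divsd), but there is no
// floating-point remainder instruction, so the generated code calls the
// mod_two_doubles_operation C helper instead. What that helper computes is
// essentially the standard truncating remainder:
#include <cmath>

double ModTwoDoubles(double dividend, double divisor) {
  return std::fmod(dividend, divisor);  // result carries the dividend's sign
}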
- ASSERT(instr->op() == Token::MOD || left.is(result)); + DCHECK(instr->op() == Token::MOD || left.is(result)); switch (instr->op()) { case Token::ADD: __ addsd(left, right); @@ -1988,7 +2038,7 @@ void LCodeGen::DoArithmeticD(LArithmeticD* instr) { XMMRegister xmm_scratch = double_scratch0(); __ PrepareCallCFunction(2); __ movaps(xmm_scratch, left); - ASSERT(right.is(xmm1)); + DCHECK(right.is(xmm1)); __ CallCFunction( ExternalReference::mod_two_doubles_operation(isolate()), 2); __ movaps(result, xmm_scratch); @@ -2002,10 +2052,10 @@ void LCodeGen::DoArithmeticD(LArithmeticD* instr) { void LCodeGen::DoArithmeticT(LArithmeticT* instr) { - ASSERT(ToRegister(instr->context()).is(rsi)); - ASSERT(ToRegister(instr->left()).is(rdx)); - ASSERT(ToRegister(instr->right()).is(rax)); - ASSERT(ToRegister(instr->result()).is(rax)); + DCHECK(ToRegister(instr->context()).is(rsi)); + DCHECK(ToRegister(instr->left()).is(rdx)); + DCHECK(ToRegister(instr->right()).is(rax)); + DCHECK(ToRegister(instr->result()).is(rax)); BinaryOpICStub stub(isolate(), instr->op(), NO_OVERWRITE); CallCode(stub.GetCode(), RelocInfo::CODE_TARGET, instr); @@ -2049,45 +2099,45 @@ void LCodeGen::DoDebugBreak(LDebugBreak* instr) { void LCodeGen::DoBranch(LBranch* instr) { Representation r = instr->hydrogen()->value()->representation(); if (r.IsInteger32()) { - ASSERT(!info()->IsStub()); + DCHECK(!info()->IsStub()); Register reg = ToRegister(instr->value()); __ testl(reg, reg); EmitBranch(instr, not_zero); } else if (r.IsSmi()) { - ASSERT(!info()->IsStub()); + DCHECK(!info()->IsStub()); Register reg = ToRegister(instr->value()); __ testp(reg, reg); EmitBranch(instr, not_zero); } else if (r.IsDouble()) { - ASSERT(!info()->IsStub()); + DCHECK(!info()->IsStub()); XMMRegister reg = ToDoubleRegister(instr->value()); XMMRegister xmm_scratch = double_scratch0(); __ xorps(xmm_scratch, xmm_scratch); __ ucomisd(reg, xmm_scratch); EmitBranch(instr, not_equal); } else { - ASSERT(r.IsTagged()); + DCHECK(r.IsTagged()); Register reg = ToRegister(instr->value()); HType type = instr->hydrogen()->value()->type(); if (type.IsBoolean()) { - ASSERT(!info()->IsStub()); + DCHECK(!info()->IsStub()); __ CompareRoot(reg, Heap::kTrueValueRootIndex); EmitBranch(instr, equal); } else if (type.IsSmi()) { - ASSERT(!info()->IsStub()); + DCHECK(!info()->IsStub()); __ SmiCompare(reg, Smi::FromInt(0)); EmitBranch(instr, not_equal); } else if (type.IsJSArray()) { - ASSERT(!info()->IsStub()); + DCHECK(!info()->IsStub()); EmitBranch(instr, no_condition); } else if (type.IsHeapNumber()) { - ASSERT(!info()->IsStub()); + DCHECK(!info()->IsStub()); XMMRegister xmm_scratch = double_scratch0(); __ xorps(xmm_scratch, xmm_scratch); __ ucomisd(xmm_scratch, FieldOperand(reg, HeapNumber::kValueOffset)); EmitBranch(instr, not_equal); } else if (type.IsString()) { - ASSERT(!info()->IsStub()); + DCHECK(!info()->IsStub()); __ cmpp(FieldOperand(reg, String::kLengthOffset), Immediate(0)); EmitBranch(instr, not_equal); } else { @@ -2230,7 +2280,11 @@ inline Condition LCodeGen::TokenToCondition(Token::Value op, bool is_unsigned) { void LCodeGen::DoCompareNumericAndBranch(LCompareNumericAndBranch* instr) { LOperand* left = instr->left(); LOperand* right = instr->right(); - Condition cc = TokenToCondition(instr->op(), instr->is_double()); + bool is_unsigned = + instr->is_double() || + instr->hydrogen()->left()->CheckFlag(HInstruction::kUint32) || + instr->hydrogen()->right()->CheckFlag(HInstruction::kUint32); + Condition cc = TokenToCondition(instr->op(), is_unsigned); if 
(left->IsConstantOperand() && right->IsConstantOperand()) { // We can statically evaluate the comparison. @@ -2267,8 +2321,8 @@ void LCodeGen::DoCompareNumericAndBranch(LCompareNumericAndBranch* instr) { } else { __ cmpl(ToOperand(right), Immediate(value)); } - // We transposed the operands. Reverse the condition. - cc = ReverseCondition(cc); + // We commuted the operands, so commute the condition. + cc = CommuteCondition(cc); } else if (instr->hydrogen_value()->representation().IsSmi()) { if (right->IsRegister()) { __ cmpp(ToRegister(left), ToRegister(right)); @@ -2326,7 +2380,7 @@ void LCodeGen::DoCmpHoleAndBranch(LCmpHoleAndBranch* instr) { void LCodeGen::DoCompareMinusZeroAndBranch(LCompareMinusZeroAndBranch* instr) { Representation rep = instr->hydrogen()->value()->representation(); - ASSERT(!rep.IsInteger32()); + DCHECK(!rep.IsInteger32()); if (rep.IsDouble()) { XMMRegister value = ToDoubleRegister(instr->value()); @@ -2354,7 +2408,7 @@ void LCodeGen::DoCompareMinusZeroAndBranch(LCompareMinusZeroAndBranch* instr) { Condition LCodeGen::EmitIsObject(Register input, Label* is_not_object, Label* is_object) { - ASSERT(!input.is(kScratchRegister)); + DCHECK(!input.is(kScratchRegister)); __ JumpIfSmi(input, is_not_object); @@ -2405,7 +2459,7 @@ void LCodeGen::DoIsStringAndBranch(LIsStringAndBranch* instr) { Register temp = ToRegister(instr->temp()); SmiCheck check_needed = - instr->hydrogen()->value()->IsHeapObject() + instr->hydrogen()->value()->type().IsHeapObject() ? OMIT_SMI_CHECK : INLINE_SMI_CHECK; Condition true_cond = EmitIsString( @@ -2432,7 +2486,7 @@ void LCodeGen::DoIsUndetectableAndBranch(LIsUndetectableAndBranch* instr) { Register input = ToRegister(instr->value()); Register temp = ToRegister(instr->temp()); - if (!instr->hydrogen()->value()->IsHeapObject()) { + if (!instr->hydrogen()->value()->type().IsHeapObject()) { __ JumpIfSmi(input, instr->FalseLabel(chunk_)); } __ movp(temp, FieldOperand(input, HeapObject::kMapOffset)); @@ -2443,7 +2497,7 @@ void LCodeGen::DoIsUndetectableAndBranch(LIsUndetectableAndBranch* instr) { void LCodeGen::DoStringCompareAndBranch(LStringCompareAndBranch* instr) { - ASSERT(ToRegister(instr->context()).is(rsi)); + DCHECK(ToRegister(instr->context()).is(rsi)); Token::Value op = instr->op(); Handle<Code> ic = CompareIC::GetUninitialized(isolate(), op); @@ -2460,7 +2514,7 @@ static InstanceType TestType(HHasInstanceTypeAndBranch* instr) { InstanceType from = instr->from(); InstanceType to = instr->to(); if (from == FIRST_TYPE) return to; - ASSERT(from == to || to == LAST_TYPE); + DCHECK(from == to || to == LAST_TYPE); return from; } @@ -2479,7 +2533,7 @@ static Condition BranchCondition(HHasInstanceTypeAndBranch* instr) { void LCodeGen::DoHasInstanceTypeAndBranch(LHasInstanceTypeAndBranch* instr) { Register input = ToRegister(instr->value()); - if (!instr->hydrogen()->value()->IsHeapObject()) { + if (!instr->hydrogen()->value()->type().IsHeapObject()) { __ JumpIfSmi(input, instr->FalseLabel(chunk_)); } @@ -2495,7 +2549,7 @@ void LCodeGen::DoGetCachedArrayIndex(LGetCachedArrayIndex* instr) { __ AssertString(input); __ movl(result, FieldOperand(input, String::kHashFieldOffset)); - ASSERT(String::kHashShift >= kSmiTagSize); + DCHECK(String::kHashShift >= kSmiTagSize); __ IndexFromHash(result, result); } @@ -2518,9 +2572,9 @@ void LCodeGen::EmitClassOfTest(Label* is_true, Register input, Register temp, Register temp2) { - ASSERT(!input.is(temp)); - ASSERT(!input.is(temp2)); - ASSERT(!temp.is(temp2)); + DCHECK(!input.is(temp)); + 
DCHECK(!input.is(temp2)); + DCHECK(!temp.is(temp2)); __ JumpIfSmi(input, is_false); @@ -2572,7 +2626,7 @@ void LCodeGen::EmitClassOfTest(Label* is_true, // classes and it doesn't have to because you can't access it with natives // syntax. Since both sides are internalized it is sufficient to use an // identity comparison. - ASSERT(class_name->IsInternalizedString()); + DCHECK(class_name->IsInternalizedString()); __ Cmp(temp, class_name); // End with the answer in the z flag. } @@ -2600,7 +2654,7 @@ void LCodeGen::DoCmpMapAndBranch(LCmpMapAndBranch* instr) { void LCodeGen::DoInstanceOf(LInstanceOf* instr) { - ASSERT(ToRegister(instr->context()).is(rsi)); + DCHECK(ToRegister(instr->context()).is(rsi)); InstanceofStub stub(isolate(), InstanceofStub::kNoFlags); __ Push(ToRegister(instr->left())); __ Push(ToRegister(instr->right())); @@ -2632,7 +2686,7 @@ void LCodeGen::DoInstanceOfKnownGlobal(LInstanceOfKnownGlobal* instr) { Label map_check_; }; - ASSERT(ToRegister(instr->context()).is(rsi)); + DCHECK(ToRegister(instr->context()).is(rsi)); DeferredInstanceOfKnownGlobal* deferred; deferred = new(zone()) DeferredInstanceOfKnownGlobal(this, instr); @@ -2660,7 +2714,7 @@ void LCodeGen::DoInstanceOfKnownGlobal(LInstanceOfKnownGlobal* instr) { // Check that the code size between patch label and patch sites is invariant. Label end_of_patched_code; __ bind(&end_of_patched_code); - ASSERT(true); + DCHECK(true); #endif __ jmp(&done, Label::kNear); @@ -2692,10 +2746,10 @@ void LCodeGen::DoDeferredInstanceOfKnownGlobal(LInstanceOfKnownGlobal* instr, __ Push(ToRegister(instr->value())); __ Push(instr->function()); - static const int kAdditionalDelta = 10; + static const int kAdditionalDelta = kPointerSize == kInt64Size ? 10 : 16; int delta = masm_->SizeOfCodeGeneratedSince(map_check) + kAdditionalDelta; - ASSERT(delta >= 0); + DCHECK(delta >= 0); __ PushImm32(delta); // We are pushing three values on the stack but recording a @@ -2707,7 +2761,7 @@ void LCodeGen::DoDeferredInstanceOfKnownGlobal(LInstanceOfKnownGlobal* instr, instr, RECORD_SAFEPOINT_WITH_REGISTERS, 2); - ASSERT(delta == masm_->SizeOfCodeGeneratedSince(map_check)); + DCHECK(delta == masm_->SizeOfCodeGeneratedSince(map_check)); LEnvironment* env = instr->GetDeferredLazyDeoptimizationEnvironment(); safepoints_.RecordLazyDeoptimizationIndex(env->deoptimization_index()); // Move result to a register that survives the end of the @@ -2727,7 +2781,7 @@ void LCodeGen::DoDeferredInstanceOfKnownGlobal(LInstanceOfKnownGlobal* instr, void LCodeGen::DoCmpT(LCmpT* instr) { - ASSERT(ToRegister(instr->context()).is(rsi)); + DCHECK(ToRegister(instr->context()).is(rsi)); Token::Value op = instr->op(); Handle<Code> ic = CompareIC::GetUninitialized(isolate(), op); @@ -2794,11 +2848,19 @@ void LCodeGen::DoLoadGlobalCell(LLoadGlobalCell* instr) { void LCodeGen::DoLoadGlobalGeneric(LLoadGlobalGeneric* instr) { - ASSERT(ToRegister(instr->context()).is(rsi)); - ASSERT(ToRegister(instr->global_object()).is(rax)); - ASSERT(ToRegister(instr->result()).is(rax)); - - __ Move(rcx, instr->name()); + DCHECK(ToRegister(instr->context()).is(rsi)); + DCHECK(ToRegister(instr->global_object()).is(LoadIC::ReceiverRegister())); + DCHECK(ToRegister(instr->result()).is(rax)); + + __ Move(LoadIC::NameRegister(), instr->name()); + if (FLAG_vector_ics) { + Register vector = ToRegister(instr->temp_vector()); + DCHECK(vector.is(LoadIC::VectorRegister())); + __ Move(vector, instr->hydrogen()->feedback_vector()); + // No need to allocate this register. 
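// With --vector-ics, the load IC receives two extra inputs: the function's
// feedback vector and a slot index (passed as a Smi). Conceptually, the IC
// reads and records per-call-site feedback at vector[slot]. A toy model of
// that indexing (FeedbackKind is an assumed stand-in for the real feedback
// encoding, not V8's):
#include <vector>

enum class FeedbackKind { kUninitialized, kMonomorphic, kMegamorphic };

struct FeedbackVectorToy {
  std::vector<FeedbackKind> slots;              // one entry per IC site
  FeedbackKind& at(int slot) { return slots[slot]; }
};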
+ DCHECK(LoadIC::SlotRegister().is(rax)); + __ Move(LoadIC::SlotRegister(), Smi::FromInt(instr->hydrogen()->slot())); + } ContextualMode mode = instr->for_typeof() ? NOT_CONTEXTUAL : CONTEXTUAL; Handle<Code> ic = LoadIC::initialize_stub(isolate(), mode); CallCode(ic, RelocInfo::CODE_TARGET, instr); @@ -2816,7 +2878,7 @@ void LCodeGen::DoStoreGlobalCell(LStoreGlobalCell* instr) { if (instr->hydrogen()->RequiresHoleCheck()) { // We have a temp because CompareRoot might clobber kScratchRegister. Register cell = ToRegister(instr->temp()); - ASSERT(!value.is(cell)); + DCHECK(!value.is(cell)); __ Move(cell, cell_handle, RelocInfo::CELL); __ CompareRoot(Operand(cell, 0), Heap::kTheHoleValueRootIndex); DeoptimizeIf(equal, instr->environment()); @@ -2868,7 +2930,7 @@ void LCodeGen::DoStoreContextSlot(LStoreContextSlot* instr) { if (instr->hydrogen()->NeedsWriteBarrier()) { SmiCheck check_needed = - instr->hydrogen()->value()->IsHeapObject() + instr->hydrogen()->value()->type().IsHeapObject() ? OMIT_SMI_CHECK : INLINE_SMI_CHECK; int offset = Context::SlotOffset(instr->slot_index()); Register scratch = ToRegister(instr->temp()); @@ -2892,7 +2954,7 @@ void LCodeGen::DoLoadNamedField(LLoadNamedField* instr) { if (access.IsExternalMemory()) { Register result = ToRegister(instr->result()); if (instr->object()->IsConstantOperand()) { - ASSERT(result.is(rax)); + DCHECK(result.is(rax)); __ load_rax(ToExternalReference(LConstantOperand::cast(instr->object()))); } else { Register object = ToRegister(instr->object()); @@ -2925,7 +2987,7 @@ void LCodeGen::DoLoadNamedField(LLoadNamedField* instr) { // Read int value directly from upper half of the smi. STATIC_ASSERT(kSmiTag == 0); - ASSERT(kSmiTagSize + kSmiShiftSize == 32); + DCHECK(kSmiTagSize + kSmiShiftSize == 32); offset += kPointerSize / 2; representation = Representation::Integer32(); } @@ -2934,11 +2996,19 @@ void LCodeGen::DoLoadNamedField(LLoadNamedField* instr) { void LCodeGen::DoLoadNamedGeneric(LLoadNamedGeneric* instr) { - ASSERT(ToRegister(instr->context()).is(rsi)); - ASSERT(ToRegister(instr->object()).is(rax)); - ASSERT(ToRegister(instr->result()).is(rax)); - - __ Move(rcx, instr->name()); + DCHECK(ToRegister(instr->context()).is(rsi)); + DCHECK(ToRegister(instr->object()).is(LoadIC::ReceiverRegister())); + DCHECK(ToRegister(instr->result()).is(rax)); + + __ Move(LoadIC::NameRegister(), instr->name()); + if (FLAG_vector_ics) { + Register vector = ToRegister(instr->temp_vector()); + DCHECK(vector.is(LoadIC::VectorRegister())); + __ Move(vector, instr->hydrogen()->feedback_vector()); + // No need to allocate this register. + DCHECK(LoadIC::SlotRegister().is(rax)); + __ Move(LoadIC::SlotRegister(), Smi::FromInt(instr->hydrogen()->slot())); + } Handle<Code> ic = LoadIC::initialize_stub(isolate(), NOT_CONTEXTUAL); CallCode(ic, RelocInfo::CODE_TARGET, instr); } @@ -2948,16 +3018,6 @@ void LCodeGen::DoLoadFunctionPrototype(LLoadFunctionPrototype* instr) { Register function = ToRegister(instr->function()); Register result = ToRegister(instr->result()); - // Check that the function really is a function. - __ CmpObjectType(function, JS_FUNCTION_TYPE, result); - DeoptimizeIf(not_equal, instr->environment()); - - // Check whether the function has an instance prototype. - Label non_instance; - __ testb(FieldOperand(result, Map::kBitFieldOffset), - Immediate(1 << Map::kHasNonInstancePrototype)); - __ j(not_zero, &non_instance, Label::kNear); - // Get the prototype or initial map from the function. 
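// The "read int value directly from upper half of the smi" loads above rely
// on the 64-bit Smi encoding: with kSmiTagSize + kSmiShiftSize == 32, the
// integer payload occupies the high 32 bits of the tagged word, so loading
// the field at offset + kPointerSize / 2 yields the int32 with no untagging
// shift. A sketch of that encoding:
#include <cstdint>

uint64_t EncodeSmi64(int32_t value) {
  return static_cast<uint64_t>(static_cast<uint32_t>(value)) << 32;
}

int32_t ReadSmiUpperHalf(uint64_t tagged_word) {
  return static_cast<int32_t>(tagged_word >> 32);  // what the +4-byte load does
}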
__ movp(result, FieldOperand(function, JSFunction::kPrototypeOrInitialMapOffset)); @@ -2973,12 +3033,6 @@ void LCodeGen::DoLoadFunctionPrototype(LLoadFunctionPrototype* instr) { // Get the prototype from the initial map. __ movp(result, FieldOperand(result, Map::kPrototypeOffset)); - __ jmp(&done, Label::kNear); - - // Non-instance prototype: Fetch prototype from constructor field - // in the function's map. - __ bind(&non_instance); - __ movp(result, FieldOperand(result, Map::kConstructorOffset)); // All done. __ bind(&done); @@ -3025,15 +3079,24 @@ void LCodeGen::DoAccessArgumentsAt(LAccessArgumentsAt* instr) { void LCodeGen::DoLoadKeyedExternalArray(LLoadKeyed* instr) { ElementsKind elements_kind = instr->elements_kind(); LOperand* key = instr->key(); - int base_offset = instr->is_fixed_typed_array() - ? FixedTypedArrayBase::kDataOffset - kHeapObjectTag - : 0; + if (kPointerSize == kInt32Size && !key->IsConstantOperand()) { + Register key_reg = ToRegister(key); + Representation key_representation = + instr->hydrogen()->key()->representation(); + if (ExternalArrayOpRequiresTemp(key_representation, elements_kind)) { + __ SmiToInteger64(key_reg, key_reg); + } else if (instr->hydrogen()->IsDehoisted()) { + // Sign extend key because it could be a 32 bit negative value + // and the dehoisted address computation happens in 64 bits + __ movsxlq(key_reg, key_reg); + } + } Operand operand(BuildFastArrayOperand( instr->elements(), key, + instr->hydrogen()->key()->representation(), elements_kind, - base_offset, - instr->additional_index())); + instr->base_offset())); if (elements_kind == EXTERNAL_FLOAT32_ELEMENTS || elements_kind == FLOAT32_ELEMENTS) { @@ -3098,15 +3161,19 @@ void LCodeGen::DoLoadKeyedExternalArray(LLoadKeyed* instr) { void LCodeGen::DoLoadKeyedFixedDoubleArray(LLoadKeyed* instr) { XMMRegister result(ToDoubleRegister(instr->result())); LOperand* key = instr->key(); + if (kPointerSize == kInt32Size && !key->IsConstantOperand() && + instr->hydrogen()->IsDehoisted()) { + // Sign extend key because it could be a 32 bit negative value + // and the dehoisted address computation happens in 64 bits + __ movsxlq(ToRegister(key), ToRegister(key)); + } if (instr->hydrogen()->RequiresHoleCheck()) { - int offset = FixedDoubleArray::kHeaderSize - kHeapObjectTag + - sizeof(kHoleNanLower32); Operand hole_check_operand = BuildFastArrayOperand( instr->elements(), key, + instr->hydrogen()->key()->representation(), FAST_DOUBLE_ELEMENTS, - offset, - instr->additional_index()); + instr->base_offset() + sizeof(kHoleNanLower32)); __ cmpl(hole_check_operand, Immediate(kHoleNanUpper32)); DeoptimizeIf(equal, instr->environment()); } @@ -3114,9 +3181,9 @@ void LCodeGen::DoLoadKeyedFixedDoubleArray(LLoadKeyed* instr) { Operand double_load_operand = BuildFastArrayOperand( instr->elements(), key, + instr->hydrogen()->key()->representation(), FAST_DOUBLE_ELEMENTS, - FixedDoubleArray::kHeaderSize - kHeapObjectTag, - instr->additional_index()); + instr->base_offset()); __ movsd(result, double_load_operand); } @@ -3126,35 +3193,41 @@ void LCodeGen::DoLoadKeyedFixedArray(LLoadKeyed* instr) { Register result = ToRegister(instr->result()); LOperand* key = instr->key(); bool requires_hole_check = hinstr->RequiresHoleCheck(); - int offset = FixedArray::kHeaderSize - kHeapObjectTag; Representation representation = hinstr->representation(); + int offset = instr->base_offset(); + if (kPointerSize == kInt32Size && !key->IsConstantOperand() && + instr->hydrogen()->IsDehoisted()) { + // Sign extend key because it could be a 
32 bit negative value + // and the dehoisted address computation happens in 64 bits + __ movsxlq(ToRegister(key), ToRegister(key)); + } if (representation.IsInteger32() && SmiValuesAre32Bits() && hinstr->elements_kind() == FAST_SMI_ELEMENTS) { - ASSERT(!requires_hole_check); + DCHECK(!requires_hole_check); if (FLAG_debug_code) { Register scratch = kScratchRegister; __ Load(scratch, BuildFastArrayOperand(instr->elements(), key, + instr->hydrogen()->key()->representation(), FAST_ELEMENTS, - offset, - instr->additional_index()), + offset), Representation::Smi()); __ AssertSmi(scratch); } // Read int value directly from upper half of the smi. STATIC_ASSERT(kSmiTag == 0); - ASSERT(kSmiTagSize + kSmiShiftSize == 32); + DCHECK(kSmiTagSize + kSmiShiftSize == 32); offset += kPointerSize / 2; } __ Load(result, BuildFastArrayOperand(instr->elements(), key, + instr->hydrogen()->key()->representation(), FAST_ELEMENTS, - offset, - instr->additional_index()), + offset), representation); // Check for the hole value. @@ -3184,9 +3257,9 @@ void LCodeGen::DoLoadKeyed(LLoadKeyed* instr) { Operand LCodeGen::BuildFastArrayOperand( LOperand* elements_pointer, LOperand* key, + Representation key_representation, ElementsKind elements_kind, - uint32_t offset, - uint32_t additional_index) { + uint32_t offset) { Register elements_pointer_reg = ToRegister(elements_pointer); int shift_size = ElementsKindToShiftSize(elements_kind); if (key->IsConstantOperand()) { @@ -3195,22 +3268,35 @@ Operand LCodeGen::BuildFastArrayOperand( Abort(kArrayIndexConstantValueTooBig); } return Operand(elements_pointer_reg, - ((constant_value + additional_index) << shift_size) - + offset); + (constant_value << shift_size) + offset); } else { + // Take the tag bit into account while computing the shift size. + if (key_representation.IsSmi() && (shift_size >= 1)) { + DCHECK(SmiValuesAre31Bits()); + shift_size -= kSmiTagSize; + } ScaleFactor scale_factor = static_cast<ScaleFactor>(shift_size); return Operand(elements_pointer_reg, ToRegister(key), scale_factor, - offset + (additional_index << shift_size)); + offset); } } void LCodeGen::DoLoadKeyedGeneric(LLoadKeyedGeneric* instr) { - ASSERT(ToRegister(instr->context()).is(rsi)); - ASSERT(ToRegister(instr->object()).is(rdx)); - ASSERT(ToRegister(instr->key()).is(rax)); + DCHECK(ToRegister(instr->context()).is(rsi)); + DCHECK(ToRegister(instr->object()).is(LoadIC::ReceiverRegister())); + DCHECK(ToRegister(instr->key()).is(LoadIC::NameRegister())); + + if (FLAG_vector_ics) { + Register vector = ToRegister(instr->temp_vector()); + DCHECK(vector.is(LoadIC::VectorRegister())); + __ Move(vector, instr->hydrogen()->feedback_vector()); + // No need to allocate this register. 
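The BuildFastArrayOperand change above drops the additional_index parameter in favour of a precomputed base_offset, and shrinks the scale factor by kSmiTagSize when the key register still holds a tagged smi: with 31-bit smis the key is index << 1, so scaling it by the full element shift would double-count the tag bit. A runnable sketch of that arithmetic (constant values are assumptions for a 31-bit-smi build):

#include <cassert>
#include <cstdint>

int main() {
  const int kSmiTagSize = 1;      // one tag bit, as in the hunk's DCHECK
  const int shift_size = 3;       // 8-byte elements, e.g. FAST_DOUBLE_ELEMENTS
  const intptr_t base = 0x1000;   // elements pointer plus base_offset
  const intptr_t index = 42;
  const intptr_t smi_key = index << kSmiTagSize;  // tagged key register

  intptr_t untagged_addr = base + (index << shift_size);
  intptr_t tagged_addr = base + (smi_key << (shift_size - kSmiTagSize));
  assert(untagged_addr == tagged_addr);  // reduced scale cancels the tag bit
  return 0;
}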
+ DCHECK(LoadIC::SlotRegister().is(rax)); + __ Move(LoadIC::SlotRegister(), Smi::FromInt(instr->hydrogen()->slot())); + } Handle<Code> ic = isolate()->builtins()->KeyedLoadIC_Initialize(); CallCode(ic, RelocInfo::CODE_TARGET, instr); @@ -3315,8 +3401,7 @@ void LCodeGen::DoWrapReceiver(LWrapReceiver* instr) { __ movp(receiver, Operand(receiver, Context::SlotOffset(Context::GLOBAL_OBJECT_INDEX))); - __ movp(receiver, - FieldOperand(receiver, GlobalObject::kGlobalReceiverOffset)); + __ movp(receiver, FieldOperand(receiver, GlobalObject::kGlobalProxyOffset)); __ bind(&receiver_ok); } @@ -3327,9 +3412,9 @@ void LCodeGen::DoApplyArguments(LApplyArguments* instr) { Register function = ToRegister(instr->function()); Register length = ToRegister(instr->length()); Register elements = ToRegister(instr->elements()); - ASSERT(receiver.is(rax)); // Used for parameter count. - ASSERT(function.is(rdi)); // Required by InvokeFunction. - ASSERT(ToRegister(instr->result()).is(rax)); + DCHECK(receiver.is(rax)); // Used for parameter count. + DCHECK(function.is(rdi)); // Required by InvokeFunction. + DCHECK(ToRegister(instr->result()).is(rax)); // Copy the arguments to this function possibly from the // adaptor frame below it. @@ -3355,7 +3440,7 @@ void LCodeGen::DoApplyArguments(LApplyArguments* instr) { // Invoke the function. __ bind(&invoke); - ASSERT(instr->HasPointerMap()); + DCHECK(instr->HasPointerMap()); LPointerMap* pointers = instr->pointer_map(); SafepointGenerator safepoint_generator( this, pointers, Safepoint::kLazyDeopt); @@ -3387,17 +3472,17 @@ void LCodeGen::DoContext(LContext* instr) { __ movp(result, Operand(rbp, StandardFrameConstants::kContextOffset)); } else { // If there is no frame, the context must be in rsi. - ASSERT(result.is(rsi)); + DCHECK(result.is(rsi)); } } void LCodeGen::DoDeclareGlobals(LDeclareGlobals* instr) { - ASSERT(ToRegister(instr->context()).is(rsi)); + DCHECK(ToRegister(instr->context()).is(rsi)); __ Push(rsi); // The context is the first argument. __ Push(instr->hydrogen()->pairs()); __ Push(Smi::FromInt(instr->hydrogen()->flags())); - CallRuntime(Runtime::kHiddenDeclareGlobals, 3, instr); + CallRuntime(Runtime::kDeclareGlobals, 3, instr); } @@ -3448,7 +3533,7 @@ void LCodeGen::CallKnownFunction(Handle<JSFunction> function, void LCodeGen::DoCallWithDescriptor(LCallWithDescriptor* instr) { - ASSERT(ToRegister(instr->result()).is(rax)); + DCHECK(ToRegister(instr->result()).is(rax)); LPointerMap* pointers = instr->pointer_map(); SafepointGenerator generator(this, pointers, Safepoint::kLazyDeopt); @@ -3459,7 +3544,7 @@ void LCodeGen::DoCallWithDescriptor(LCallWithDescriptor* instr) { generator.BeforeCall(__ CallSize(code)); __ call(code, RelocInfo::CODE_TARGET); } else { - ASSERT(instr->target()->IsRegister()); + DCHECK(instr->target()->IsRegister()); Register target = ToRegister(instr->target()); generator.BeforeCall(__ CallSize(target)); __ addp(target, Immediate(Code::kHeaderSize - kHeapObjectTag)); @@ -3470,8 +3555,8 @@ void LCodeGen::DoCallWithDescriptor(LCallWithDescriptor* instr) { void LCodeGen::DoCallJSFunction(LCallJSFunction* instr) { - ASSERT(ToRegister(instr->function()).is(rdi)); - ASSERT(ToRegister(instr->result()).is(rax)); + DCHECK(ToRegister(instr->function()).is(rdi)); + DCHECK(ToRegister(instr->result()).is(rax)); if (instr->hydrogen()->pass_argument_count()) { __ Set(rax, instr->arity()); @@ -3529,7 +3614,7 @@ void LCodeGen::DoDeferredMathAbsTaggedHeapNumber(LMathAbs* instr) { // Slow case: Call the runtime system to do the number allocation. 
__ bind(&slow); CallRuntimeFromDeferred( - Runtime::kHiddenAllocateHeapNumber, 0, instr, instr->context()); + Runtime::kAllocateHeapNumber, 0, instr, instr->context()); // Set the pointer to the new heap number in tmp. if (!tmp.is(rax)) __ movp(tmp, rax); // Restore input_reg after call to runtime. @@ -3582,7 +3667,7 @@ void LCodeGen::DoMathAbs(LMathAbs* instr) { LMathAbs* instr_; }; - ASSERT(instr->value()->Equals(instr->result())); + DCHECK(instr->value()->Equals(instr->result())); Representation r = instr->hydrogen()->value()->representation(); if (r.IsDouble()) { @@ -3727,17 +3812,30 @@ void LCodeGen::DoMathRound(LMathRound* instr) { } -void LCodeGen::DoMathSqrt(LMathSqrt* instr) { +void LCodeGen::DoMathFround(LMathFround* instr) { XMMRegister input_reg = ToDoubleRegister(instr->value()); - ASSERT(ToDoubleRegister(instr->result()).is(input_reg)); - __ sqrtsd(input_reg, input_reg); + XMMRegister output_reg = ToDoubleRegister(instr->result()); + __ cvtsd2ss(output_reg, input_reg); + __ cvtss2sd(output_reg, output_reg); +} + + +void LCodeGen::DoMathSqrt(LMathSqrt* instr) { + XMMRegister output = ToDoubleRegister(instr->result()); + if (instr->value()->IsDoubleRegister()) { + XMMRegister input = ToDoubleRegister(instr->value()); + __ sqrtsd(output, input); + } else { + Operand input = ToOperand(instr->value()); + __ sqrtsd(output, input); + } } void LCodeGen::DoMathPowHalf(LMathPowHalf* instr) { XMMRegister xmm_scratch = double_scratch0(); XMMRegister input_reg = ToDoubleRegister(instr->value()); - ASSERT(ToDoubleRegister(instr->result()).is(input_reg)); + DCHECK(ToDoubleRegister(instr->result()).is(input_reg)); // Note that according to ECMA-262 15.8.2.13: // Math.pow(-Infinity, 0.5) == Infinity @@ -3772,12 +3870,12 @@ void LCodeGen::DoPower(LPower* instr) { // Just make sure that the input/output registers are the expected ones. 
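DoMathFround is new in this diff: it lowers Math.fround to a cvtsd2ss/cvtss2sd pair, a round trip through single precision that performs exactly one IEEE rounding step. The same operation in portable C++, as a sanity check of the semantics:

#include <cassert>

// Math.fround as emitted above: narrow to float (this is where the
// rounding happens), then widen back to double.
double Fround(double x) {
  return static_cast<double>(static_cast<float>(x));
}

int main() {
  assert(Fround(1.5) == 1.5);  // exactly representable as a float
  assert(Fround(0.1) != 0.1);  // 0.1 is not; the round trip perturbs it
  return 0;
}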
Register exponent = rdx; - ASSERT(!instr->right()->IsRegister() || + DCHECK(!instr->right()->IsRegister() || ToRegister(instr->right()).is(exponent)); - ASSERT(!instr->right()->IsDoubleRegister() || + DCHECK(!instr->right()->IsDoubleRegister() || ToDoubleRegister(instr->right()).is(xmm1)); - ASSERT(ToDoubleRegister(instr->left()).is(xmm2)); - ASSERT(ToDoubleRegister(instr->result()).is(xmm3)); + DCHECK(ToDoubleRegister(instr->left()).is(xmm2)); + DCHECK(ToDoubleRegister(instr->result()).is(xmm3)); if (exponent_type.IsSmi()) { MathPowStub stub(isolate(), MathPowStub::TAGGED); @@ -3794,7 +3892,7 @@ void LCodeGen::DoPower(LPower* instr) { MathPowStub stub(isolate(), MathPowStub::INTEGER); __ CallStub(&stub); } else { - ASSERT(exponent_type.IsDouble()); + DCHECK(exponent_type.IsDouble()); MathPowStub stub(isolate(), MathPowStub::DOUBLE); __ CallStub(&stub); } @@ -3813,7 +3911,7 @@ void LCodeGen::DoMathExp(LMathExp* instr) { void LCodeGen::DoMathLog(LMathLog* instr) { - ASSERT(instr->value()->Equals(instr->result())); + DCHECK(instr->value()->Equals(instr->result())); XMMRegister input_reg = ToDoubleRegister(instr->value()); XMMRegister xmm_scratch = double_scratch0(); Label positive, done, zero; @@ -3860,9 +3958,9 @@ void LCodeGen::DoMathClz32(LMathClz32* instr) { void LCodeGen::DoInvokeFunction(LInvokeFunction* instr) { - ASSERT(ToRegister(instr->context()).is(rsi)); - ASSERT(ToRegister(instr->function()).is(rdi)); - ASSERT(instr->HasPointerMap()); + DCHECK(ToRegister(instr->context()).is(rsi)); + DCHECK(ToRegister(instr->function()).is(rdi)); + DCHECK(instr->HasPointerMap()); Handle<JSFunction> known_function = instr->hydrogen()->known_function(); if (known_function.is_null()) { @@ -3881,9 +3979,9 @@ void LCodeGen::DoInvokeFunction(LInvokeFunction* instr) { void LCodeGen::DoCallFunction(LCallFunction* instr) { - ASSERT(ToRegister(instr->context()).is(rsi)); - ASSERT(ToRegister(instr->function()).is(rdi)); - ASSERT(ToRegister(instr->result()).is(rax)); + DCHECK(ToRegister(instr->context()).is(rsi)); + DCHECK(ToRegister(instr->function()).is(rdi)); + DCHECK(ToRegister(instr->result()).is(rax)); int arity = instr->arity(); CallFunctionStub stub(isolate(), arity, instr->hydrogen()->function_flags()); @@ -3892,9 +3990,9 @@ void LCodeGen::DoCallFunction(LCallFunction* instr) { void LCodeGen::DoCallNew(LCallNew* instr) { - ASSERT(ToRegister(instr->context()).is(rsi)); - ASSERT(ToRegister(instr->constructor()).is(rdi)); - ASSERT(ToRegister(instr->result()).is(rax)); + DCHECK(ToRegister(instr->context()).is(rsi)); + DCHECK(ToRegister(instr->constructor()).is(rdi)); + DCHECK(ToRegister(instr->result()).is(rax)); __ Set(rax, instr->arity()); // No cell in ebx for construct type feedback in optimized code @@ -3905,9 +4003,9 @@ void LCodeGen::DoCallNew(LCallNew* instr) { void LCodeGen::DoCallNewArray(LCallNewArray* instr) { - ASSERT(ToRegister(instr->context()).is(rsi)); - ASSERT(ToRegister(instr->constructor()).is(rdi)); - ASSERT(ToRegister(instr->result()).is(rax)); + DCHECK(ToRegister(instr->context()).is(rsi)); + DCHECK(ToRegister(instr->constructor()).is(rdi)); + DCHECK(ToRegister(instr->result()).is(rax)); __ Set(rax, instr->arity()); __ LoadRoot(rbx, Heap::kUndefinedValueRootIndex); @@ -3950,7 +4048,7 @@ void LCodeGen::DoCallNewArray(LCallNewArray* instr) { void LCodeGen::DoCallRuntime(LCallRuntime* instr) { - ASSERT(ToRegister(instr->context()).is(rsi)); + DCHECK(ToRegister(instr->context()).is(rsi)); CallRuntime(instr->function(), instr->arity(), instr, instr->save_doubles()); } @@ -3984,10 
+4082,10 @@ void LCodeGen::DoStoreNamedField(LStoreNamedField* instr) { int offset = access.offset(); if (access.IsExternalMemory()) { - ASSERT(!hinstr->NeedsWriteBarrier()); + DCHECK(!hinstr->NeedsWriteBarrier()); Register value = ToRegister(instr->value()); if (instr->object()->IsConstantOperand()) { - ASSERT(value.is(rax)); + DCHECK(value.is(rax)); LConstantOperand* object = LConstantOperand::cast(instr->object()); __ store_rax(ToExternalReference(object)); } else { @@ -4000,13 +4098,13 @@ void LCodeGen::DoStoreNamedField(LStoreNamedField* instr) { Register object = ToRegister(instr->object()); __ AssertNotSmi(object); - ASSERT(!representation.IsSmi() || + DCHECK(!representation.IsSmi() || !instr->value()->IsConstantOperand() || IsInteger32Constant(LConstantOperand::cast(instr->value()))); if (representation.IsDouble()) { - ASSERT(access.IsInobject()); - ASSERT(!hinstr->has_transition()); - ASSERT(!hinstr->NeedsWriteBarrier()); + DCHECK(access.IsInobject()); + DCHECK(!hinstr->has_transition()); + DCHECK(!hinstr->NeedsWriteBarrier()); XMMRegister value = ToDoubleRegister(instr->value()); __ movsd(FieldOperand(object, offset), value); return; @@ -4022,13 +4120,10 @@ void LCodeGen::DoStoreNamedField(LStoreNamedField* instr) { __ Move(kScratchRegister, transition); __ movp(FieldOperand(object, HeapObject::kMapOffset), kScratchRegister); // Update the write barrier for the map field. - __ RecordWriteField(object, - HeapObject::kMapOffset, - kScratchRegister, - temp, - kSaveFPRegs, - OMIT_REMEMBERED_SET, - OMIT_SMI_CHECK); + __ RecordWriteForMap(object, + kScratchRegister, + temp, + kSaveFPRegs); } } @@ -4041,7 +4136,7 @@ void LCodeGen::DoStoreNamedField(LStoreNamedField* instr) { if (representation.IsSmi() && SmiValuesAre32Bits() && hinstr->value()->representation().IsInteger32()) { - ASSERT(hinstr->store_mode() == STORE_TO_INITIALIZED_ENTRY); + DCHECK(hinstr->store_mode() == STORE_TO_INITIALIZED_ENTRY); if (FLAG_debug_code) { Register scratch = kScratchRegister; __ Load(scratch, FieldOperand(write_register, offset), representation); @@ -4049,7 +4144,7 @@ void LCodeGen::DoStoreNamedField(LStoreNamedField* instr) { } // Store int value directly to upper half of the smi. 
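The "store int value directly to upper half of the smi" trick above relies on the 32-bit smi layout that the adjacent DCHECK pins down: kSmiTagSize + kSmiShiftSize == 32, so a tagged smi is the 32-bit payload shifted into the high half of the 8-byte word with zeros below. On a little-endian target the payload can therefore be read or written straight at offset + kPointerSize / 2. A runnable little-endian demonstration:

#include <cassert>
#include <cstdint>
#include <cstring>

int main() {
  // Tagged smi for -7 under the 32-bit-smi layout: payload << 32, tag == 0.
  const int64_t smi = -7LL * (int64_t{1} << 32);

  // Read the int32 straight out of the upper four bytes (little-endian).
  int32_t payload;
  std::memcpy(&payload, reinterpret_cast<const char*>(&smi) + 4, 4);
  assert(payload == -7);

  // Writing the upper half in place yields another valid smi, no retagging.
  int64_t field = smi;
  const int32_t replacement = 123;
  std::memcpy(reinterpret_cast<char*>(&field) + 4, &replacement, 4);
  assert(field == int64_t{123} << 32);
  return 0;
}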
STATIC_ASSERT(kSmiTag == 0); - ASSERT(kSmiTagSize + kSmiShiftSize == 32); + DCHECK(kSmiTagSize + kSmiShiftSize == 32); offset += kPointerSize / 2; representation = Representation::Integer32(); } @@ -4062,7 +4157,7 @@ void LCodeGen::DoStoreNamedField(LStoreNamedField* instr) { } else { LConstantOperand* operand_value = LConstantOperand::cast(instr->value()); if (IsInteger32Constant(operand_value)) { - ASSERT(!hinstr->NeedsWriteBarrier()); + DCHECK(!hinstr->NeedsWriteBarrier()); int32_t value = ToInteger32(operand_value); if (representation.IsSmi()) { __ Move(operand, Smi::FromInt(value)); @@ -4073,7 +4168,7 @@ void LCodeGen::DoStoreNamedField(LStoreNamedField* instr) { } else { Handle<Object> handle_value = ToHandle(operand_value); - ASSERT(!hinstr->NeedsWriteBarrier()); + DCHECK(!hinstr->NeedsWriteBarrier()); __ Move(operand, handle_value); } } @@ -4088,17 +4183,18 @@ void LCodeGen::DoStoreNamedField(LStoreNamedField* instr) { temp, kSaveFPRegs, EMIT_REMEMBERED_SET, - hinstr->SmiCheckForWriteBarrier()); + hinstr->SmiCheckForWriteBarrier(), + hinstr->PointersToHereCheckForValue()); } } void LCodeGen::DoStoreNamedGeneric(LStoreNamedGeneric* instr) { - ASSERT(ToRegister(instr->context()).is(rsi)); - ASSERT(ToRegister(instr->object()).is(rdx)); - ASSERT(ToRegister(instr->value()).is(rax)); + DCHECK(ToRegister(instr->context()).is(rsi)); + DCHECK(ToRegister(instr->object()).is(StoreIC::ReceiverRegister())); + DCHECK(ToRegister(instr->value()).is(StoreIC::ValueRegister())); - __ Move(rcx, instr->hydrogen()->name()); + __ Move(StoreIC::NameRegister(), instr->hydrogen()->name()); Handle<Code> ic = StoreIC::initialize_stub(isolate(), instr->strict_mode()); CallCode(ic, RelocInfo::CODE_TARGET, instr); } @@ -4106,8 +4202,8 @@ void LCodeGen::DoStoreNamedGeneric(LStoreNamedGeneric* instr) { void LCodeGen::DoBoundsCheck(LBoundsCheck* instr) { Representation representation = instr->hydrogen()->length()->representation(); - ASSERT(representation.Equals(instr->hydrogen()->index()->representation())); - ASSERT(representation.IsSmiOrInteger32()); + DCHECK(representation.Equals(instr->hydrogen()->index()->representation())); + DCHECK(representation.IsSmiOrInteger32()); Condition cc = instr->hydrogen()->allow_equality() ? below : below_equal; if (instr->length()->IsConstantOperand()) { @@ -4118,7 +4214,7 @@ void LCodeGen::DoBoundsCheck(LBoundsCheck* instr) { } else { __ cmpl(index, Immediate(length)); } - cc = ReverseCondition(cc); + cc = CommuteCondition(cc); } else if (instr->index()->IsConstantOperand()) { int32_t index = ToInteger32(LConstantOperand::cast(instr->index())); if (instr->length()->IsRegister()) { @@ -4168,15 +4264,24 @@ void LCodeGen::DoBoundsCheck(LBoundsCheck* instr) { void LCodeGen::DoStoreKeyedExternalArray(LStoreKeyed* instr) { ElementsKind elements_kind = instr->elements_kind(); LOperand* key = instr->key(); - int base_offset = instr->is_fixed_typed_array() - ? 
FixedTypedArrayBase::kDataOffset - kHeapObjectTag - : 0; + if (kPointerSize == kInt32Size && !key->IsConstantOperand()) { + Register key_reg = ToRegister(key); + Representation key_representation = + instr->hydrogen()->key()->representation(); + if (ExternalArrayOpRequiresTemp(key_representation, elements_kind)) { + __ SmiToInteger64(key_reg, key_reg); + } else if (instr->hydrogen()->IsDehoisted()) { + // Sign extend key because it could be a 32 bit negative value + // and the dehoisted address computation happens in 64 bits + __ movsxlq(key_reg, key_reg); + } + } Operand operand(BuildFastArrayOperand( instr->elements(), key, + instr->hydrogen()->key()->representation(), elements_kind, - base_offset, - instr->additional_index())); + instr->base_offset())); if (elements_kind == EXTERNAL_FLOAT32_ELEMENTS || elements_kind == FLOAT32_ELEMENTS) { @@ -4231,6 +4336,12 @@ void LCodeGen::DoStoreKeyedExternalArray(LStoreKeyed* instr) { void LCodeGen::DoStoreKeyedFixedDoubleArray(LStoreKeyed* instr) { XMMRegister value = ToDoubleRegister(instr->value()); LOperand* key = instr->key(); + if (kPointerSize == kInt32Size && !key->IsConstantOperand() && + instr->hydrogen()->IsDehoisted()) { + // Sign extend key because it could be a 32 bit negative value + // and the dehoisted address computation happens in 64 bits + __ movsxlq(ToRegister(key), ToRegister(key)); + } if (instr->NeedsCanonicalization()) { Label have_value; @@ -4247,9 +4358,9 @@ void LCodeGen::DoStoreKeyedFixedDoubleArray(LStoreKeyed* instr) { Operand double_store_operand = BuildFastArrayOperand( instr->elements(), key, + instr->hydrogen()->key()->representation(), FAST_DOUBLE_ELEMENTS, - FixedDoubleArray::kHeaderSize - kHeapObjectTag, - instr->additional_index()); + instr->base_offset()); __ movsd(double_store_operand, value); } @@ -4258,36 +4369,41 @@ void LCodeGen::DoStoreKeyedFixedDoubleArray(LStoreKeyed* instr) { void LCodeGen::DoStoreKeyedFixedArray(LStoreKeyed* instr) { HStoreKeyed* hinstr = instr->hydrogen(); LOperand* key = instr->key(); - int offset = FixedArray::kHeaderSize - kHeapObjectTag; + int offset = instr->base_offset(); Representation representation = hinstr->value()->representation(); + if (kPointerSize == kInt32Size && !key->IsConstantOperand() && + instr->hydrogen()->IsDehoisted()) { + // Sign extend key because it could be a 32 bit negative value + // and the dehoisted address computation happens in 64 bits + __ movsxlq(ToRegister(key), ToRegister(key)); + } if (representation.IsInteger32() && SmiValuesAre32Bits()) { - ASSERT(hinstr->store_mode() == STORE_TO_INITIALIZED_ENTRY); - ASSERT(hinstr->elements_kind() == FAST_SMI_ELEMENTS); + DCHECK(hinstr->store_mode() == STORE_TO_INITIALIZED_ENTRY); + DCHECK(hinstr->elements_kind() == FAST_SMI_ELEMENTS); if (FLAG_debug_code) { Register scratch = kScratchRegister; __ Load(scratch, BuildFastArrayOperand(instr->elements(), key, + instr->hydrogen()->key()->representation(), FAST_ELEMENTS, - offset, - instr->additional_index()), + offset), Representation::Smi()); __ AssertSmi(scratch); } // Store int value directly to upper half of the smi. 
STATIC_ASSERT(kSmiTag == 0); - ASSERT(kSmiTagSize + kSmiShiftSize == 32); + DCHECK(kSmiTagSize + kSmiShiftSize == 32); offset += kPointerSize / 2; } Operand operand = BuildFastArrayOperand(instr->elements(), key, + instr->hydrogen()->key()->representation(), FAST_ELEMENTS, - offset, - instr->additional_index()); - + offset); if (instr->value()->IsRegister()) { __ Store(operand, ToRegister(instr->value()), representation); } else { @@ -4308,10 +4424,10 @@ void LCodeGen::DoStoreKeyedFixedArray(LStoreKeyed* instr) { if (hinstr->NeedsWriteBarrier()) { Register elements = ToRegister(instr->elements()); - ASSERT(instr->value()->IsRegister()); + DCHECK(instr->value()->IsRegister()); Register value = ToRegister(instr->value()); - ASSERT(!key->IsConstantOperand()); - SmiCheck check_needed = hinstr->value()->IsHeapObject() + DCHECK(!key->IsConstantOperand()); + SmiCheck check_needed = hinstr->value()->type().IsHeapObject() ? OMIT_SMI_CHECK : INLINE_SMI_CHECK; // Compute address of modified element and store it into key register. Register key_reg(ToRegister(key)); @@ -4321,7 +4437,8 @@ void LCodeGen::DoStoreKeyedFixedArray(LStoreKeyed* instr) { value, kSaveFPRegs, EMIT_REMEMBERED_SET, - check_needed); + check_needed, + hinstr->PointersToHereCheckForValue()); } } @@ -4338,10 +4455,10 @@ void LCodeGen::DoStoreKeyed(LStoreKeyed* instr) { void LCodeGen::DoStoreKeyedGeneric(LStoreKeyedGeneric* instr) { - ASSERT(ToRegister(instr->context()).is(rsi)); - ASSERT(ToRegister(instr->object()).is(rdx)); - ASSERT(ToRegister(instr->key()).is(rcx)); - ASSERT(ToRegister(instr->value()).is(rax)); + DCHECK(ToRegister(instr->context()).is(rsi)); + DCHECK(ToRegister(instr->object()).is(KeyedStoreIC::ReceiverRegister())); + DCHECK(ToRegister(instr->key()).is(KeyedStoreIC::NameRegister())); + DCHECK(ToRegister(instr->value()).is(KeyedStoreIC::ValueRegister())); Handle<Code> ic = instr->strict_mode() == STRICT ? isolate()->builtins()->KeyedStoreIC_Initialize_Strict() @@ -4366,12 +4483,11 @@ void LCodeGen::DoTransitionElementsKind(LTransitionElementsKind* instr) { __ Move(new_map_reg, to_map, RelocInfo::EMBEDDED_OBJECT); __ movp(FieldOperand(object_reg, HeapObject::kMapOffset), new_map_reg); // Write barrier. 
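Two write-barrier changes recur in these hunks: map updates now go through the specialized RecordWriteForMap instead of the generic RecordWriteField, and value stores pass an extra PointersToHereCheckForValue() argument so the barrier can be skipped when the stored value can never be a pointer the collector has to track. Conceptually, the bookkeeping these helpers maintain looks like this (toy types and predicate, not V8 API):

#include <unordered_set>

struct Heap {
  std::unordered_set<void**> remembered_slots;

  // Stand-in predicate; V8 tests page flags instead.
  bool InNewSpace(const void* p) const { return false; }

  void WriteField(void* object, void** slot, void* value) {
    *slot = value;
    // Record only old->new pointers, so a scavenge can find them without
    // scanning all of old space. The extra hints in the hunks above let
    // the compiler omit this check entirely when it cannot apply.
    if (!InNewSpace(object) && InNewSpace(value)) {
      remembered_slots.insert(slot);
    }
  }
};

int main() { return 0; }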
- ASSERT_NE(instr->temp(), NULL); - __ RecordWriteField(object_reg, HeapObject::kMapOffset, new_map_reg, - ToRegister(instr->temp()), kDontSaveFPRegs); + __ RecordWriteForMap(object_reg, new_map_reg, ToRegister(instr->temp()), + kDontSaveFPRegs); } else { - ASSERT(object_reg.is(rax)); - ASSERT(ToRegister(instr->context()).is(rsi)); + DCHECK(object_reg.is(rax)); + DCHECK(ToRegister(instr->context()).is(rsi)); PushSafepointRegistersScope scope(this); __ Move(rbx, to_map); bool is_js_array = from_map->instance_type() == JS_ARRAY_TYPE; @@ -4394,9 +4510,9 @@ void LCodeGen::DoTrapAllocationMemento(LTrapAllocationMemento* instr) { void LCodeGen::DoStringAdd(LStringAdd* instr) { - ASSERT(ToRegister(instr->context()).is(rsi)); - ASSERT(ToRegister(instr->left()).is(rdx)); - ASSERT(ToRegister(instr->right()).is(rax)); + DCHECK(ToRegister(instr->context()).is(rsi)); + DCHECK(ToRegister(instr->left()).is(rdx)); + DCHECK(ToRegister(instr->right()).is(rax)); StringAddStub stub(isolate(), instr->hydrogen()->flags(), instr->hydrogen()->pretenure_flag()); @@ -4452,7 +4568,7 @@ void LCodeGen::DoDeferredStringCharCodeAt(LStringCharCodeAt* instr) { __ Push(index); } CallRuntimeFromDeferred( - Runtime::kHiddenStringCharCodeAt, 2, instr, instr->context()); + Runtime::kStringCharCodeAtRT, 2, instr, instr->context()); __ AssertSmi(rax); __ SmiToInteger32(rax, rax); __ StoreToSafepointRegisterSlot(result, rax); @@ -4475,10 +4591,10 @@ void LCodeGen::DoStringCharFromCode(LStringCharFromCode* instr) { DeferredStringCharFromCode* deferred = new(zone()) DeferredStringCharFromCode(this, instr); - ASSERT(instr->hydrogen()->value()->representation().IsInteger32()); + DCHECK(instr->hydrogen()->value()->representation().IsInteger32()); Register char_code = ToRegister(instr->char_code()); Register result = ToRegister(instr->result()); - ASSERT(!char_code.is(result)); + DCHECK(!char_code.is(result)); __ cmpl(char_code, Immediate(String::kMaxOneByteCharCode)); __ j(above, deferred->entry()); @@ -4512,9 +4628,9 @@ void LCodeGen::DoDeferredStringCharFromCode(LStringCharFromCode* instr) { void LCodeGen::DoInteger32ToDouble(LInteger32ToDouble* instr) { LOperand* input = instr->value(); - ASSERT(input->IsRegister() || input->IsStackSlot()); + DCHECK(input->IsRegister() || input->IsStackSlot()); LOperand* output = instr->result(); - ASSERT(output->IsDoubleRegister()); + DCHECK(output->IsDoubleRegister()); if (input->IsRegister()) { __ Cvtlsi2sd(ToDoubleRegister(output), ToRegister(input)); } else { @@ -4526,20 +4642,38 @@ void LCodeGen::DoInteger32ToDouble(LInteger32ToDouble* instr) { void LCodeGen::DoUint32ToDouble(LUint32ToDouble* instr) { LOperand* input = instr->value(); LOperand* output = instr->result(); - LOperand* temp = instr->temp(); - __ LoadUint32(ToDoubleRegister(output), - ToRegister(input), - ToDoubleRegister(temp)); + __ LoadUint32(ToDoubleRegister(output), ToRegister(input)); } void LCodeGen::DoNumberTagI(LNumberTagI* instr) { + class DeferredNumberTagI V8_FINAL : public LDeferredCode { + public: + DeferredNumberTagI(LCodeGen* codegen, LNumberTagI* instr) + : LDeferredCode(codegen), instr_(instr) { } + virtual void Generate() V8_OVERRIDE { + codegen()->DoDeferredNumberTagIU(instr_, instr_->value(), instr_->temp1(), + instr_->temp2(), SIGNED_INT32); + } + virtual LInstruction* instr() V8_OVERRIDE { return instr_; } + private: + LNumberTagI* instr_; + }; + LOperand* input = instr->value(); - ASSERT(input->IsRegister() && input->Equals(instr->result())); + DCHECK(input->IsRegister() && input->Equals(instr->result())); 
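DoNumberTagI only needs a deferred path when SmiValuesAre31Bits(): tagging is then a 32-bit left shift by one, which overflows exactly when bits 30 and 31 of the input disagree. The recovery in DoDeferredNumberTagIU below (shift back, then xor 0x80000000) works because the arithmetic shift reconstructs every bit except bit 31, which overflow guarantees came back inverted. A runnable check of that identity:

#include <cassert>
#include <cstdint>

int main() {
  const int32_t value = 0x40000000;  // bit 30 set, bit 31 clear: tagging overflows
  const uint32_t tagged = static_cast<uint32_t>(value) << 1;   // wraps mod 2^32
  const int32_t untagged = static_cast<int32_t>(tagged) >> 1;  // arithmetic shift
  assert(untagged != value);                // bit 31 is wrong after the round trip
  assert((untagged ^ INT32_MIN) == value);  // one xor restores the original
  return 0;
}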
Register reg = ToRegister(input); - __ Integer32ToSmi(reg, reg); + if (SmiValuesAre32Bits()) { + __ Integer32ToSmi(reg, reg); + } else { + DCHECK(SmiValuesAre31Bits()); + DeferredNumberTagI* deferred = new(zone()) DeferredNumberTagI(this, instr); + __ Integer32ToSmi(reg, reg); + __ j(overflow, deferred->entry()); + __ bind(deferred->exit()); + } } @@ -4549,7 +4683,8 @@ void LCodeGen::DoNumberTagU(LNumberTagU* instr) { DeferredNumberTagU(LCodeGen* codegen, LNumberTagU* instr) : LDeferredCode(codegen), instr_(instr) { } virtual void Generate() V8_OVERRIDE { - codegen()->DoDeferredNumberTagU(instr_); + codegen()->DoDeferredNumberTagIU(instr_, instr_->value(), instr_->temp1(), + instr_->temp2(), UNSIGNED_INT32); } virtual LInstruction* instr() V8_OVERRIDE { return instr_; } private: @@ -4557,7 +4692,7 @@ void LCodeGen::DoNumberTagU(LNumberTagU* instr) { }; LOperand* input = instr->value(); - ASSERT(input->IsRegister() && input->Equals(instr->result())); + DCHECK(input->IsRegister() && input->Equals(instr->result())); Register reg = ToRegister(input); DeferredNumberTagU* deferred = new(zone()) DeferredNumberTagU(this, instr); @@ -4568,21 +4703,35 @@ void LCodeGen::DoNumberTagU(LNumberTagU* instr) { } -void LCodeGen::DoDeferredNumberTagU(LNumberTagU* instr) { +void LCodeGen::DoDeferredNumberTagIU(LInstruction* instr, + LOperand* value, + LOperand* temp1, + LOperand* temp2, + IntegerSignedness signedness) { Label done, slow; - Register reg = ToRegister(instr->value()); - Register tmp = ToRegister(instr->temp1()); - XMMRegister temp_xmm = ToDoubleRegister(instr->temp2()); + Register reg = ToRegister(value); + Register tmp = ToRegister(temp1); + XMMRegister temp_xmm = ToDoubleRegister(temp2); // Load value into temp_xmm which will be preserved across potential call to // runtime (MacroAssembler::EnterExitFrameEpilogue preserves only allocatable // XMM registers on x64). - XMMRegister xmm_scratch = double_scratch0(); - __ LoadUint32(temp_xmm, reg, xmm_scratch); + if (signedness == SIGNED_INT32) { + DCHECK(SmiValuesAre31Bits()); + // There was overflow, so bits 30 and 31 of the original integer + // disagree. Try to allocate a heap number in new space and store + // the value in there. If that fails, call the runtime system. + __ SmiToInteger32(reg, reg); + __ xorl(reg, Immediate(0x80000000)); + __ cvtlsi2sd(temp_xmm, reg); + } else { + DCHECK(signedness == UNSIGNED_INT32); + __ LoadUint32(temp_xmm, reg); + } if (FLAG_inline_new) { __ AllocateHeapNumber(reg, tmp, &slow); - __ jmp(&done, Label::kNear); + __ jmp(&done, kPointerSize == kInt64Size ? Label::kNear : Label::kFar); } // Slow case: Call the runtime system to do the number allocation. @@ -4596,13 +4745,13 @@ void LCodeGen::DoDeferredNumberTagU(LNumberTagU* instr) { // Preserve the value of all registers. PushSafepointRegistersScope scope(this); - // NumberTagU uses the context from the frame, rather than + // NumberTagIU uses the context from the frame, rather than // the environment's HContext or HInlinedContext value. - // They only call Runtime::kHiddenAllocateHeapNumber. + // They only call Runtime::kAllocateHeapNumber. // The corresponding HChange instructions are added in a phase that does // not have easy access to the local context. 
__ movp(rsi, Operand(rbp, StandardFrameConstants::kContextOffset)); - __ CallRuntimeSaveDoubles(Runtime::kHiddenAllocateHeapNumber); + __ CallRuntimeSaveDoubles(Runtime::kAllocateHeapNumber); RecordSafepointWithRegisters( instr->pointer_map(), 0, Safepoint::kNoLazyDeopt); __ StoreToSafepointRegisterSlot(reg, rax); @@ -4654,11 +4803,11 @@ void LCodeGen::DoDeferredNumberTagD(LNumberTagD* instr) { PushSafepointRegistersScope scope(this); // NumberTagD uses the context from the frame, rather than // the environment's HContext or HInlinedContext value. - // They only call Runtime::kHiddenAllocateHeapNumber. + // They only call Runtime::kAllocateHeapNumber. // The corresponding HChange instructions are added in a phase that does // not have easy access to the local context. __ movp(rsi, Operand(rbp, StandardFrameConstants::kContextOffset)); - __ CallRuntimeSaveDoubles(Runtime::kHiddenAllocateHeapNumber); + __ CallRuntimeSaveDoubles(Runtime::kAllocateHeapNumber); RecordSafepointWithRegisters( instr->pointer_map(), 0, Safepoint::kNoLazyDeopt); __ movp(kScratchRegister, rax); @@ -4673,8 +4822,8 @@ void LCodeGen::DoSmiTag(LSmiTag* instr) { Register output = ToRegister(instr->result()); if (hchange->CheckFlag(HValue::kCanOverflow) && hchange->value()->CheckFlag(HValue::kUint32)) { - __ testl(input, input); - DeoptimizeIf(sign, instr->environment()); + Condition is_smi = __ CheckUInteger32ValidSmiValue(input); + DeoptimizeIf(NegateCondition(is_smi), instr->environment()); } __ Integer32ToSmi(output, input); if (hchange->CheckFlag(HValue::kCanOverflow) && @@ -4685,7 +4834,7 @@ void LCodeGen::DoSmiTag(LSmiTag* instr) { void LCodeGen::DoSmiUntag(LSmiUntag* instr) { - ASSERT(instr->value()->Equals(instr->result())); + DCHECK(instr->value()->Equals(instr->result())); Register input = ToRegister(instr->value()); if (instr->needs_check()) { Condition is_smi = __ CheckSmi(input); @@ -4746,7 +4895,7 @@ void LCodeGen::EmitNumberUntagD(Register input_reg, __ jmp(&done, Label::kNear); } } else { - ASSERT(mode == NUMBER_CANDIDATE_IS_SMI); + DCHECK(mode == NUMBER_CANDIDATE_IS_SMI); } // Smi to XMM conversion @@ -4817,8 +4966,8 @@ void LCodeGen::DoTaggedToI(LTaggedToI* instr) { }; LOperand* input = instr->value(); - ASSERT(input->IsRegister()); - ASSERT(input->Equals(instr->result())); + DCHECK(input->IsRegister()); + DCHECK(input->Equals(instr->result())); Register input_reg = ToRegister(input); if (instr->hydrogen()->value()->representation().IsSmi()) { @@ -4834,9 +4983,9 @@ void LCodeGen::DoTaggedToI(LTaggedToI* instr) { void LCodeGen::DoNumberUntagD(LNumberUntagD* instr) { LOperand* input = instr->value(); - ASSERT(input->IsRegister()); + DCHECK(input->IsRegister()); LOperand* result = instr->result(); - ASSERT(result->IsDoubleRegister()); + DCHECK(result->IsDoubleRegister()); Register input_reg = ToRegister(input); XMMRegister result_reg = ToDoubleRegister(result); @@ -4855,9 +5004,9 @@ void LCodeGen::DoNumberUntagD(LNumberUntagD* instr) { void LCodeGen::DoDoubleToI(LDoubleToI* instr) { LOperand* input = instr->value(); - ASSERT(input->IsDoubleRegister()); + DCHECK(input->IsDoubleRegister()); LOperand* result = instr->result(); - ASSERT(result->IsRegister()); + DCHECK(result->IsRegister()); XMMRegister input_reg = ToDoubleRegister(input); Register result_reg = ToRegister(result); @@ -4880,9 +5029,9 @@ void LCodeGen::DoDoubleToI(LDoubleToI* instr) { void LCodeGen::DoDoubleToSmi(LDoubleToSmi* instr) { LOperand* input = instr->value(); - ASSERT(input->IsDoubleRegister()); + DCHECK(input->IsDoubleRegister()); 
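The DoSmiTag hunk above replaces the old sign test on uint32 inputs with CheckUInteger32ValidSmiValue. The sign test only rejected values of 2^31 and up, which is the right bound for 32-bit smi payloads but not for 31-bit smis, where anything at or above 2^30 is out of range. A small runnable model of the bound (limits assume a signed payload of kSmiValueSize bits):

#include <cassert>
#include <cstdint>

// An unsigned value fits a smi only if it does not exceed the largest
// positive payload, 2^(kSmiValueSize - 1) - 1.
bool FitsSmi(uint32_t v, int smi_value_size) {
  return v <= (uint32_t{1} << (smi_value_size - 1)) - 1;
}

int main() {
  const uint32_t v = uint32_t{1} << 30;  // 2^30
  assert(FitsSmi(v, 32));                // fits a 32-bit smi payload
  assert(!FitsSmi(v, 31));               // too big for a 31-bit smi
  return 0;
}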
LOperand* result = instr->result(); - ASSERT(result->IsRegister()); + DCHECK(result->IsRegister()); XMMRegister input_reg = ToDoubleRegister(input); Register result_reg = ToRegister(result); @@ -4910,7 +5059,7 @@ void LCodeGen::DoCheckSmi(LCheckSmi* instr) { void LCodeGen::DoCheckNonSmi(LCheckNonSmi* instr) { - if (!instr->hydrogen()->value()->IsHeapObject()) { + if (!instr->hydrogen()->value()->type().IsHeapObject()) { LOperand* input = instr->value(); Condition cc = masm()->CheckSmi(ToRegister(input)); DeoptimizeIf(cc, instr->environment()); @@ -4949,7 +5098,7 @@ void LCodeGen::DoCheckInstanceType(LCheckInstanceType* instr) { instr->hydrogen()->GetCheckMaskAndTag(&mask, &tag); if (IsPowerOf2(mask)) { - ASSERT(tag == 0 || IsPowerOf2(tag)); + DCHECK(tag == 0 || IsPowerOf2(tag)); __ testb(FieldOperand(kScratchRegister, Map::kInstanceTypeOffset), Immediate(mask)); DeoptimizeIf(tag == 0 ? not_zero : zero, instr->environment()); @@ -5013,7 +5162,7 @@ void LCodeGen::DoCheckMaps(LCheckMaps* instr) { } LOperand* input = instr->value(); - ASSERT(input->IsRegister()); + DCHECK(input->IsRegister()); Register reg = ToRegister(input); DeferredCheckMaps* deferred = NULL; @@ -5051,14 +5200,14 @@ void LCodeGen::DoClampDToUint8(LClampDToUint8* instr) { void LCodeGen::DoClampIToUint8(LClampIToUint8* instr) { - ASSERT(instr->unclamped()->Equals(instr->result())); + DCHECK(instr->unclamped()->Equals(instr->result())); Register value_reg = ToRegister(instr->result()); __ ClampUint8(value_reg); } void LCodeGen::DoClampTToUint8(LClampTToUint8* instr) { - ASSERT(instr->unclamped()->Equals(instr->result())); + DCHECK(instr->unclamped()->Equals(instr->result())); Register input_reg = ToRegister(instr->unclamped()); XMMRegister temp_xmm_reg = ToDoubleRegister(instr->temp_xmm()); XMMRegister xmm_scratch = double_scratch0(); @@ -5142,11 +5291,11 @@ void LCodeGen::DoAllocate(LAllocate* instr) { flags = static_cast<AllocationFlags>(flags | DOUBLE_ALIGNMENT); } if (instr->hydrogen()->IsOldPointerSpaceAllocation()) { - ASSERT(!instr->hydrogen()->IsOldDataSpaceAllocation()); - ASSERT(!instr->hydrogen()->IsNewSpaceAllocation()); + DCHECK(!instr->hydrogen()->IsOldDataSpaceAllocation()); + DCHECK(!instr->hydrogen()->IsNewSpaceAllocation()); flags = static_cast<AllocationFlags>(flags | PRETENURE_OLD_POINTER_SPACE); } else if (instr->hydrogen()->IsOldDataSpaceAllocation()) { - ASSERT(!instr->hydrogen()->IsNewSpaceAllocation()); + DCHECK(!instr->hydrogen()->IsNewSpaceAllocation()); flags = static_cast<AllocationFlags>(flags | PRETENURE_OLD_DATA_SPACE); } @@ -5194,7 +5343,7 @@ void LCodeGen::DoDeferredAllocate(LAllocate* instr) { PushSafepointRegistersScope scope(this); if (instr->size()->IsRegister()) { Register size = ToRegister(instr->size()); - ASSERT(!size.is(result)); + DCHECK(!size.is(result)); __ Integer32ToSmi(size, size); __ Push(size); } else { @@ -5204,11 +5353,11 @@ void LCodeGen::DoDeferredAllocate(LAllocate* instr) { int flags = 0; if (instr->hydrogen()->IsOldPointerSpaceAllocation()) { - ASSERT(!instr->hydrogen()->IsOldDataSpaceAllocation()); - ASSERT(!instr->hydrogen()->IsNewSpaceAllocation()); + DCHECK(!instr->hydrogen()->IsOldDataSpaceAllocation()); + DCHECK(!instr->hydrogen()->IsNewSpaceAllocation()); flags = AllocateTargetSpace::update(flags, OLD_POINTER_SPACE); } else if (instr->hydrogen()->IsOldDataSpaceAllocation()) { - ASSERT(!instr->hydrogen()->IsNewSpaceAllocation()); + DCHECK(!instr->hydrogen()->IsNewSpaceAllocation()); flags = AllocateTargetSpace::update(flags, OLD_DATA_SPACE); } else { flags = 
AllocateTargetSpace::update(flags, NEW_SPACE); @@ -5216,20 +5365,20 @@ void LCodeGen::DoDeferredAllocate(LAllocate* instr) { __ Push(Smi::FromInt(flags)); CallRuntimeFromDeferred( - Runtime::kHiddenAllocateInTargetSpace, 2, instr, instr->context()); + Runtime::kAllocateInTargetSpace, 2, instr, instr->context()); __ StoreToSafepointRegisterSlot(result, rax); } void LCodeGen::DoToFastProperties(LToFastProperties* instr) { - ASSERT(ToRegister(instr->value()).is(rax)); + DCHECK(ToRegister(instr->value()).is(rax)); __ Push(rax); CallRuntime(Runtime::kToFastProperties, 1, instr); } void LCodeGen::DoRegExpLiteral(LRegExpLiteral* instr) { - ASSERT(ToRegister(instr->context()).is(rsi)); + DCHECK(ToRegister(instr->context()).is(rsi)); Label materialized; // Registers will be used as follows: // rcx = literals array. @@ -5248,7 +5397,7 @@ void LCodeGen::DoRegExpLiteral(LRegExpLiteral* instr) { __ Push(Smi::FromInt(instr->hydrogen()->literal_index())); __ Push(instr->hydrogen()->pattern()); __ Push(instr->hydrogen()->flags()); - CallRuntime(Runtime::kHiddenMaterializeRegExpLiteral, 4, instr); + CallRuntime(Runtime::kMaterializeRegExpLiteral, 4, instr); __ movp(rbx, rax); __ bind(&materialized); @@ -5260,7 +5409,7 @@ void LCodeGen::DoRegExpLiteral(LRegExpLiteral* instr) { __ bind(&runtime_allocate); __ Push(rbx); __ Push(Smi::FromInt(size)); - CallRuntime(Runtime::kHiddenAllocateInNewSpace, 1, instr); + CallRuntime(Runtime::kAllocateInNewSpace, 1, instr); __ Pop(rbx); __ bind(&allocated); @@ -5280,7 +5429,7 @@ void LCodeGen::DoRegExpLiteral(LRegExpLiteral* instr) { void LCodeGen::DoFunctionLiteral(LFunctionLiteral* instr) { - ASSERT(ToRegister(instr->context()).is(rsi)); + DCHECK(ToRegister(instr->context()).is(rsi)); // Use the fast case closure allocation code that allocates in new // space for nested functions that don't need literals cloning. bool pretenure = instr->hydrogen()->pretenure(); @@ -5295,13 +5444,13 @@ void LCodeGen::DoFunctionLiteral(LFunctionLiteral* instr) { __ Push(instr->hydrogen()->shared_info()); __ PushRoot(pretenure ? 
Heap::kTrueValueRootIndex : Heap::kFalseValueRootIndex); - CallRuntime(Runtime::kHiddenNewClosure, 3, instr); + CallRuntime(Runtime::kNewClosure, 3, instr); } } void LCodeGen::DoTypeof(LTypeof* instr) { - ASSERT(ToRegister(instr->context()).is(rsi)); + DCHECK(ToRegister(instr->context()).is(rsi)); LOperand* input = instr->value(); EmitPushTaggedOperand(input); CallRuntime(Runtime::kTypeof, 1, instr); @@ -5309,7 +5458,7 @@ void LCodeGen::DoTypeof(LTypeof* instr) { void LCodeGen::EmitPushTaggedOperand(LOperand* operand) { - ASSERT(!operand->IsDoubleRegister()); + DCHECK(!operand->IsDoubleRegister()); if (operand->IsConstantOperand()) { __ Push(ToHandle(LConstantOperand::cast(operand))); } else if (operand->IsRegister()) { @@ -5369,11 +5518,6 @@ Condition LCodeGen::EmitTypeofIs(LTypeofIsAndBranch* instr, Register input) { __ CompareRoot(input, Heap::kFalseValueRootIndex); final_branch_condition = equal; - } else if (FLAG_harmony_typeof && - String::Equals(type_name, factory->null_string())) { - __ CompareRoot(input, Heap::kNullValueRootIndex); - final_branch_condition = equal; - } else if (String::Equals(type_name, factory->undefined_string())) { __ CompareRoot(input, Heap::kUndefinedValueRootIndex); __ j(equal, true_label, true_distance); @@ -5394,10 +5538,8 @@ Condition LCodeGen::EmitTypeofIs(LTypeofIsAndBranch* instr, Register input) { } else if (String::Equals(type_name, factory->object_string())) { __ JumpIfSmi(input, false_label, false_distance); - if (!FLAG_harmony_typeof) { - __ CompareRoot(input, Heap::kNullValueRootIndex); - __ j(equal, true_label, true_distance); - } + __ CompareRoot(input, Heap::kNullValueRootIndex); + __ j(equal, true_label, true_distance); __ CmpObjectType(input, FIRST_NONCALLABLE_SPEC_OBJECT_TYPE, input); __ j(below, false_label, false_distance); __ CmpInstanceType(input, LAST_NONCALLABLE_SPEC_OBJECT_TYPE); @@ -5457,7 +5599,7 @@ void LCodeGen::EnsureSpaceForLazyDeopt(int space_needed) { void LCodeGen::DoLazyBailout(LLazyBailout* instr) { last_lazy_deopt_pc_ = masm()->pc_offset(); - ASSERT(instr->HasEnvironment()); + DCHECK(instr->HasEnvironment()); LEnvironment* env = instr->environment(); RegisterEnvironmentForDeoptimization(env, Safepoint::kLazyDeopt); safepoints_.RecordLazyDeoptimizationIndex(env->deoptimization_index()); @@ -5492,9 +5634,9 @@ void LCodeGen::DoDummyUse(LDummyUse* instr) { void LCodeGen::DoDeferredStackCheck(LStackCheck* instr) { PushSafepointRegistersScope scope(this); __ movp(rsi, Operand(rbp, StandardFrameConstants::kContextOffset)); - __ CallRuntimeSaveDoubles(Runtime::kHiddenStackGuard); + __ CallRuntimeSaveDoubles(Runtime::kStackGuard); RecordSafepointWithLazyDeopt(instr, RECORD_SAFEPOINT_WITH_REGISTERS, 0); - ASSERT(instr->HasEnvironment()); + DCHECK(instr->HasEnvironment()); LEnvironment* env = instr->environment(); safepoints_.RecordLazyDeoptimizationIndex(env->deoptimization_index()); } @@ -5513,7 +5655,7 @@ void LCodeGen::DoStackCheck(LStackCheck* instr) { LStackCheck* instr_; }; - ASSERT(instr->HasEnvironment()); + DCHECK(instr->HasEnvironment()); LEnvironment* env = instr->environment(); // There is no LLazyBailout instruction for stack-checks. We have to // prepare for lazy deoptimization explicitly here. 
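DoStackCheck's fast path is a single comparison against a limit the heap maintains near the end of the stack: because x64 stacks grow downward, "rsp above_equal limit" means there is still headroom and the StackCheck builtin call is skipped. A plain-integer model of that test:

#include <cassert>
#include <cstdint>

// Mirror of the check above: the slow path (builtin call) is taken only
// when the stack pointer has dropped below the guard limit.
bool NeedsStackGuard(uintptr_t rsp, uintptr_t stack_limit) {
  return rsp < stack_limit;
}

int main() {
  const uintptr_t limit = 0x70000000;
  assert(!NeedsStackGuard(0x70001000, limit));  // headroom left: fall through
  assert(NeedsStackGuard(0x6ffff000, limit));   // past the limit: call StackCheck
  return 0;
}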
@@ -5523,14 +5665,14 @@ void LCodeGen::DoStackCheck(LStackCheck* instr) { __ CompareRoot(rsp, Heap::kStackLimitRootIndex); __ j(above_equal, &done, Label::kNear); - ASSERT(instr->context()->IsRegister()); - ASSERT(ToRegister(instr->context()).is(rsi)); + DCHECK(instr->context()->IsRegister()); + DCHECK(ToRegister(instr->context()).is(rsi)); CallCode(isolate()->builtins()->StackCheck(), RelocInfo::CODE_TARGET, instr); __ bind(&done); } else { - ASSERT(instr->hydrogen()->is_backwards_branch()); + DCHECK(instr->hydrogen()->is_backwards_branch()); // Perform stack overflow check if this goto needs it before jumping. DeferredStackCheck* deferred_stack_check = new(zone()) DeferredStackCheck(this, instr); @@ -5555,7 +5697,7 @@ void LCodeGen::DoOsrEntry(LOsrEntry* instr) { // If the environment were already registered, we would have no way of // backpatching it with the spill slot operands. - ASSERT(!environment->HasBeenRegistered()); + DCHECK(!environment->HasBeenRegistered()); RegisterEnvironmentForDeoptimization(environment, Safepoint::kNoLazyDeopt); GenerateOsrPrologue(); @@ -5563,7 +5705,7 @@ void LCodeGen::DoOsrEntry(LOsrEntry* instr) { void LCodeGen::DoForInPrepareMap(LForInPrepareMap* instr) { - ASSERT(ToRegister(instr->context()).is(rsi)); + DCHECK(ToRegister(instr->context()).is(rsi)); __ CompareRoot(rax, Heap::kUndefinedValueRootIndex); DeoptimizeIf(equal, instr->environment()); @@ -5697,6 +5839,21 @@ void LCodeGen::DoLoadFieldByIndex(LLoadFieldByIndex* instr) { } +void LCodeGen::DoStoreFrameContext(LStoreFrameContext* instr) { + Register context = ToRegister(instr->context()); + __ movp(Operand(rbp, StandardFrameConstants::kContextOffset), context); +} + + +void LCodeGen::DoAllocateBlockContext(LAllocateBlockContext* instr) { + Handle<ScopeInfo> scope_info = instr->scope_info(); + __ Push(scope_info); + __ Push(ToRegister(instr->function())); + CallRuntime(Runtime::kPushBlockContext, 2, instr); + RecordSafepoint(Safepoint::kNoLazyDeopt); +} + + #undef __ } } // namespace v8::internal diff --git a/deps/v8/src/x64/lithium-codegen-x64.h b/deps/v8/src/x64/lithium-codegen-x64.h index 686dc857aa5..b3070c01892 100644 --- a/deps/v8/src/x64/lithium-codegen-x64.h +++ b/deps/v8/src/x64/lithium-codegen-x64.h @@ -5,15 +5,15 @@ #ifndef V8_X64_LITHIUM_CODEGEN_X64_H_ #define V8_X64_LITHIUM_CODEGEN_X64_H_ -#include "x64/lithium-x64.h" +#include "src/x64/lithium-x64.h" -#include "checks.h" -#include "deoptimizer.h" -#include "lithium-codegen.h" -#include "safepoint-table.h" -#include "scopes.h" -#include "utils.h" -#include "x64/lithium-gap-resolver-x64.h" +#include "src/base/logging.h" +#include "src/deoptimizer.h" +#include "src/lithium-codegen.h" +#include "src/safepoint-table.h" +#include "src/scopes.h" +#include "src/utils.h" +#include "src/x64/lithium-gap-resolver-x64.h" namespace v8 { namespace internal { @@ -65,6 +65,7 @@ class LCodeGen: public LCodeGenBase { bool IsInteger32Constant(LConstantOperand* op) const; bool IsDehoistedKeyConstant(LConstantOperand* op) const; bool IsSmiConstant(LConstantOperand* op) const; + int32_t ToRepresentation(LConstantOperand* op, const Representation& r) const; int32_t ToInteger32(LConstantOperand* op) const; Smi* ToSmi(LConstantOperand* op) const; double ToDouble(LConstantOperand* op) const; @@ -83,7 +84,14 @@ class LCodeGen: public LCodeGenBase { // Deferred code support. 
void DoDeferredNumberTagD(LNumberTagD* instr); - void DoDeferredNumberTagU(LNumberTagU* instr); + + enum IntegerSignedness { SIGNED_INT32, UNSIGNED_INT32 }; + void DoDeferredNumberTagIU(LInstruction* instr, + LOperand* value, + LOperand* temp1, + LOperand* temp2, + IntegerSignedness signedness); + void DoDeferredTaggedToI(LTaggedToI* instr, Label* done); void DoDeferredMathAbsTaggedHeapNumber(LMathAbs* instr); void DoDeferredStackCheck(LStackCheck* instr); @@ -224,9 +232,9 @@ class LCodeGen: public LCodeGenBase { Operand BuildFastArrayOperand( LOperand* elements_pointer, LOperand* key, + Representation key_representation, ElementsKind elements_kind, - uint32_t offset, - uint32_t additional_index = 0); + uint32_t base_offset); Operand BuildSeqStringOperand(Register string, LOperand* index, @@ -337,14 +345,14 @@ class LCodeGen: public LCodeGenBase { public: explicit PushSafepointRegistersScope(LCodeGen* codegen) : codegen_(codegen) { - ASSERT(codegen_->info()->is_calling()); - ASSERT(codegen_->expected_safepoint_kind_ == Safepoint::kSimple); + DCHECK(codegen_->info()->is_calling()); + DCHECK(codegen_->expected_safepoint_kind_ == Safepoint::kSimple); codegen_->masm_->PushSafepointRegisters(); codegen_->expected_safepoint_kind_ = Safepoint::kWithRegisters; } ~PushSafepointRegistersScope() { - ASSERT(codegen_->expected_safepoint_kind_ == Safepoint::kWithRegisters); + DCHECK(codegen_->expected_safepoint_kind_ == Safepoint::kWithRegisters); codegen_->masm_->PopSafepointRegisters(); codegen_->expected_safepoint_kind_ = Safepoint::kSimple; } diff --git a/deps/v8/src/x64/lithium-gap-resolver-x64.cc b/deps/v8/src/x64/lithium-gap-resolver-x64.cc index 7827abd168f..bfc2ec0e7d4 100644 --- a/deps/v8/src/x64/lithium-gap-resolver-x64.cc +++ b/deps/v8/src/x64/lithium-gap-resolver-x64.cc @@ -2,12 +2,12 @@ // Use of this source code is governed by a BSD-style license that can be // found in the LICENSE file. -#include "v8.h" +#include "src/v8.h" #if V8_TARGET_ARCH_X64 -#include "x64/lithium-gap-resolver-x64.h" -#include "x64/lithium-codegen-x64.h" +#include "src/x64/lithium-codegen-x64.h" +#include "src/x64/lithium-gap-resolver-x64.h" namespace v8 { namespace internal { @@ -17,7 +17,7 @@ LGapResolver::LGapResolver(LCodeGen* owner) void LGapResolver::Resolve(LParallelMove* parallel_move) { - ASSERT(moves_.is_empty()); + DCHECK(moves_.is_empty()); // Build up a worklist of moves. BuildInitialMoveList(parallel_move); @@ -34,7 +34,7 @@ void LGapResolver::Resolve(LParallelMove* parallel_move) { // Perform the moves with constant sources. for (int i = 0; i < moves_.length(); ++i) { if (!moves_[i].IsEliminated()) { - ASSERT(moves_[i].source()->IsConstantOperand()); + DCHECK(moves_[i].source()->IsConstantOperand()); EmitMove(i); } } @@ -65,13 +65,13 @@ void LGapResolver::PerformMove(int index) { // which means that a call to PerformMove could change any source operand // in the move graph. - ASSERT(!moves_[index].IsPending()); - ASSERT(!moves_[index].IsRedundant()); + DCHECK(!moves_[index].IsPending()); + DCHECK(!moves_[index].IsRedundant()); // Clear this move's destination to indicate a pending move. The actual // destination is saved in a stack-allocated local. Recursion may allow // multiple moves to be pending. - ASSERT(moves_[index].source() != NULL); // Or else it will look eliminated. + DCHECK(moves_[index].source() != NULL); // Or else it will look eliminated. 
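PerformMove in the gap-resolver hunks below resolves one move of a parallel move by first recursing on any move that reads this move's destination, and by breaking cycles with a swap when it meets a move it has marked pending (the real code marks pending by clearing the destination, hence the "look eliminated" comment). A compact runnable model with registers as plain integers; after a swap it redirects only the single pending reader, which suffices for simple cycles, whereas the real resolver patches every blocked move:

#include <cassert>
#include <cstddef>
#include <utility>
#include <vector>

struct Move { int src, dst; bool pending = false, done = false; };

void Perform(std::vector<Move>& moves, std::vector<int>& regs, size_t i) {
  moves[i].pending = true;
  // First perform moves that read our destination, so we don't clobber them.
  for (size_t j = 0; j < moves.size(); ++j) {
    if (j != i && !moves[j].done && !moves[j].pending &&
        moves[j].src == moves[i].dst) {
      Perform(moves, regs, j);
    }
  }
  // A pending reader of our destination means a cycle: break it with a swap.
  for (size_t j = 0; j < moves.size(); ++j) {
    if (j != i && !moves[j].done && moves[j].pending &&
        moves[j].src == moves[i].dst) {
      std::swap(regs[moves[i].src], regs[moves[i].dst]);
      moves[j].src = moves[i].src;  // its value now lives in our old source
      moves[i].pending = false;
      moves[i].done = true;
      return;
    }
  }
  regs[moves[i].dst] = regs[moves[i].src];
  moves[i].pending = false;
  moves[i].done = true;
}

int main() {
  std::vector<int> regs = {10, 20, 30};
  std::vector<Move> moves = {{0, 1}, {1, 2}, {2, 0}};  // a three-register cycle
  for (size_t i = 0; i < moves.size(); ++i) {
    if (!moves[i].done) Perform(moves, regs, i);
  }
  assert((regs == std::vector<int>{30, 10, 20}));
  return 0;
}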
LOperand* destination = moves_[index].destination(); moves_[index].set_destination(NULL); @@ -112,7 +112,7 @@ void LGapResolver::PerformMove(int index) { for (int i = 0; i < moves_.length(); ++i) { LMoveOperands other_move = moves_[i]; if (other_move.Blocks(destination)) { - ASSERT(other_move.IsPending()); + DCHECK(other_move.IsPending()); EmitSwap(index); return; } @@ -124,12 +124,12 @@ void LGapResolver::PerformMove(int index) { void LGapResolver::Verify() { -#ifdef ENABLE_SLOW_ASSERTS +#ifdef ENABLE_SLOW_DCHECKS // No operand should be the destination for more than one move. for (int i = 0; i < moves_.length(); ++i) { LOperand* destination = moves_[i].destination(); for (int j = i + 1; j < moves_.length(); ++j) { - SLOW_ASSERT(!destination->Equals(moves_[j].destination())); + SLOW_DCHECK(!destination->Equals(moves_[j].destination())); } } #endif @@ -151,7 +151,7 @@ void LGapResolver::EmitMove(int index) { Register dst = cgen_->ToRegister(destination); __ movp(dst, src); } else { - ASSERT(destination->IsStackSlot()); + DCHECK(destination->IsStackSlot()); Operand dst = cgen_->ToOperand(destination); __ movp(dst, src); } @@ -162,7 +162,7 @@ void LGapResolver::EmitMove(int index) { Register dst = cgen_->ToRegister(destination); __ movp(dst, src); } else { - ASSERT(destination->IsStackSlot()); + DCHECK(destination->IsStackSlot()); Operand dst = cgen_->ToOperand(destination); __ movp(kScratchRegister, src); __ movp(dst, kScratchRegister); @@ -197,7 +197,7 @@ void LGapResolver::EmitMove(int index) { __ movq(dst, kScratchRegister); } } else { - ASSERT(destination->IsStackSlot()); + DCHECK(destination->IsStackSlot()); Operand dst = cgen_->ToOperand(destination); if (cgen_->IsSmiConstant(constant_source)) { __ Move(dst, cgen_->ToSmi(constant_source)); @@ -215,7 +215,7 @@ void LGapResolver::EmitMove(int index) { if (destination->IsDoubleRegister()) { __ movaps(cgen_->ToDoubleRegister(destination), src); } else { - ASSERT(destination->IsDoubleStackSlot()); + DCHECK(destination->IsDoubleStackSlot()); __ movsd(cgen_->ToOperand(destination), src); } } else if (source->IsDoubleStackSlot()) { @@ -223,7 +223,7 @@ void LGapResolver::EmitMove(int index) { if (destination->IsDoubleRegister()) { __ movsd(cgen_->ToDoubleRegister(destination), src); } else { - ASSERT(destination->IsDoubleStackSlot()); + DCHECK(destination->IsDoubleStackSlot()); __ movsd(xmm0, src); __ movsd(cgen_->ToOperand(destination), xmm0); } @@ -278,13 +278,13 @@ void LGapResolver::EmitSwap(int index) { } else if (source->IsDoubleRegister() || destination->IsDoubleRegister()) { // Swap a double register and a double stack slot. - ASSERT((source->IsDoubleRegister() && destination->IsDoubleStackSlot()) || + DCHECK((source->IsDoubleRegister() && destination->IsDoubleStackSlot()) || (source->IsDoubleStackSlot() && destination->IsDoubleRegister())); XMMRegister reg = cgen_->ToDoubleRegister(source->IsDoubleRegister() ? source : destination); LOperand* other = source->IsDoubleRegister() ? 
destination : source; - ASSERT(other->IsDoubleStackSlot()); + DCHECK(other->IsDoubleStackSlot()); Operand other_operand = cgen_->ToOperand(other); __ movsd(xmm0, other_operand); __ movsd(other_operand, reg); diff --git a/deps/v8/src/x64/lithium-gap-resolver-x64.h b/deps/v8/src/x64/lithium-gap-resolver-x64.h index 5ceacb17d45..fd4b91ab348 100644 --- a/deps/v8/src/x64/lithium-gap-resolver-x64.h +++ b/deps/v8/src/x64/lithium-gap-resolver-x64.h @@ -5,9 +5,9 @@ #ifndef V8_X64_LITHIUM_GAP_RESOLVER_X64_H_ #define V8_X64_LITHIUM_GAP_RESOLVER_X64_H_ -#include "v8.h" +#include "src/v8.h" -#include "lithium.h" +#include "src/lithium.h" namespace v8 { namespace internal { diff --git a/deps/v8/src/x64/lithium-x64.cc b/deps/v8/src/x64/lithium-x64.cc index a5ef1192e96..0575166fa4b 100644 --- a/deps/v8/src/x64/lithium-x64.cc +++ b/deps/v8/src/x64/lithium-x64.cc @@ -2,14 +2,13 @@ // Use of this source code is governed by a BSD-style license that can be // found in the LICENSE file. -#include "v8.h" +#include "src/v8.h" #if V8_TARGET_ARCH_X64 -#include "lithium-allocator-inl.h" -#include "x64/lithium-x64.h" -#include "x64/lithium-codegen-x64.h" -#include "hydrogen-osr.h" +#include "src/hydrogen-osr.h" +#include "src/lithium-inl.h" +#include "src/x64/lithium-codegen-x64.h" namespace v8 { namespace internal { @@ -28,17 +27,17 @@ void LInstruction::VerifyCall() { // outputs because all registers are blocked by the calling convention. // Input operands must use a fixed register or use-at-start policy or // a non-register policy. - ASSERT(Output() == NULL || + DCHECK(Output() == NULL || LUnallocated::cast(Output())->HasFixedPolicy() || !LUnallocated::cast(Output())->HasRegisterPolicy()); for (UseIterator it(this); !it.Done(); it.Advance()) { LUnallocated* operand = LUnallocated::cast(it.Current()); - ASSERT(operand->HasFixedPolicy() || + DCHECK(operand->HasFixedPolicy() || operand->IsUsedAtStart()); } for (TempIterator it(this); !it.Done(); it.Advance()) { LUnallocated* operand = LUnallocated::cast(it.Current()); - ASSERT(operand->HasFixedPolicy() ||!operand->HasRegisterPolicy()); + DCHECK(operand->HasFixedPolicy() ||!operand->HasRegisterPolicy()); } } #endif @@ -331,6 +330,16 @@ void LAccessArgumentsAt::PrintDataTo(StringStream* stream) { int LPlatformChunk::GetNextSpillIndex(RegisterKind kind) { + if (kind == DOUBLE_REGISTERS && kDoubleSize == 2 * kPointerSize) { + // Skip a slot for a double-width spill slot on the x32 port. + spill_slot_count_++; + // The spill slot's address is at rbp - (index + 1) * kPointerSize - + // StandardFrameConstants::kFixedFrameSizeFromFp. kFixedFrameSizeFromFp is + // 2 * kPointerSize; if rbp is aligned at an 8-byte boundary, the "|= 1" + // below makes sure the spilled doubles are aligned at an 8-byte boundary. + // TODO(haitao): make sure rbp is aligned at 8-byte boundary for x32 port.
+ spill_slot_count_ |= 1; + } return spill_slot_count_++; } @@ -343,7 +352,7 @@ LOperand* LPlatformChunk::GetNextSpillSlot(RegisterKind kind) { if (kind == DOUBLE_REGISTERS) { return LDoubleStackSlot::Create(index, zone()); } else { - ASSERT(kind == GENERAL_REGISTERS); + DCHECK(kind == GENERAL_REGISTERS); return LStackSlot::Create(index, zone()); } } @@ -351,8 +360,9 @@ LOperand* LPlatformChunk::GetNextSpillSlot(RegisterKind kind) { void LStoreNamedField::PrintDataTo(StringStream* stream) { object()->PrintTo(stream); - hydrogen()->access().PrintTo(stream); - stream->Add(" <- "); + OStringStream os; + os << hydrogen()->access() << " <- "; + stream->Add(os.c_str()); value()->PrintTo(stream); } @@ -371,7 +381,7 @@ void LLoadKeyed::PrintDataTo(StringStream* stream) { stream->Add("["); key()->PrintTo(stream); if (hydrogen()->IsDehoisted()) { - stream->Add(" + %d]", additional_index()); + stream->Add(" + %d]", base_offset()); } else { stream->Add("]"); } @@ -383,13 +393,13 @@ void LStoreKeyed::PrintDataTo(StringStream* stream) { stream->Add("["); key()->PrintTo(stream); if (hydrogen()->IsDehoisted()) { - stream->Add(" + %d] <-", additional_index()); + stream->Add(" + %d] <-", base_offset()); } else { stream->Add("] <- "); } if (value() == NULL) { - ASSERT(hydrogen()->IsConstantHoleStore() && + DCHECK(hydrogen()->IsConstantHoleStore() && hydrogen()->value()->representation().IsDouble()); stream->Add("<the hole(nan)>"); } else { @@ -414,7 +424,7 @@ void LTransitionElementsKind::PrintDataTo(StringStream* stream) { LPlatformChunk* LChunkBuilder::Build() { - ASSERT(is_unused()); + DCHECK(is_unused()); chunk_ = new(zone()) LPlatformChunk(info(), graph()); LPhase phase("L_Building chunk", chunk_); status_ = BUILDING; @@ -635,7 +645,7 @@ LInstruction* LChunkBuilder::MarkAsCall(LInstruction* instr, LInstruction* LChunkBuilder::AssignPointerMap(LInstruction* instr) { - ASSERT(!instr->HasPointerMap()); + DCHECK(!instr->HasPointerMap()); instr->set_pointer_map(new(zone()) LPointerMap(zone())); return instr; } @@ -656,14 +666,14 @@ LUnallocated* LChunkBuilder::TempRegister() { LOperand* LChunkBuilder::FixedTemp(Register reg) { LUnallocated* operand = ToUnallocated(reg); - ASSERT(operand->HasFixedPolicy()); + DCHECK(operand->HasFixedPolicy()); return operand; } LOperand* LChunkBuilder::FixedTemp(XMMRegister reg) { LUnallocated* operand = ToUnallocated(reg); - ASSERT(operand->HasFixedPolicy()); + DCHECK(operand->HasFixedPolicy()); return operand; } @@ -692,24 +702,30 @@ LInstruction* LChunkBuilder::DoDeoptimize(HDeoptimize* instr) { LInstruction* LChunkBuilder::DoShift(Token::Value op, HBitwiseBinaryOperation* instr) { if (instr->representation().IsSmiOrInteger32()) { - ASSERT(instr->left()->representation().Equals(instr->representation())); - ASSERT(instr->right()->representation().Equals(instr->representation())); + DCHECK(instr->left()->representation().Equals(instr->representation())); + DCHECK(instr->right()->representation().Equals(instr->representation())); LOperand* left = UseRegisterAtStart(instr->left()); HValue* right_value = instr->right(); LOperand* right = NULL; int constant_value = 0; + bool does_deopt = false; if (right_value->IsConstant()) { HConstant* constant = HConstant::cast(right_value); right = chunk_->DefineConstantOperand(constant); constant_value = constant->Integer32Value() & 0x1f; + if (SmiValuesAre31Bits() && instr->representation().IsSmi() && + constant_value > 0) { + // Left shift can deoptimize if we shift by > 0 and the result + // cannot be truncated to smi. 
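
// Why the does_deopt path above only matters for 31-bit smis: a single
// left shift can push a value out of the smi payload, so unless every use
// truncates back to smi the builder must allow a deopt. Self-contained
// illustration (FitsInSmi31 is a stand-in, not a V8 helper):
#include <cstdint>
inline bool FitsInSmi31(int64_t v) {
  // A 31-bit smi payload covers [-2^30, 2^30 - 1].
  return v >= -(int64_t{1} << 30) && v < (int64_t{1} << 30);
}
// FitsInSmi31(1 << 29) is true; FitsInSmi31(int64_t{1} << 30), the same
// value after one left shift, is false.
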
+ does_deopt = !instr->CheckUsesForFlag(HValue::kTruncatingToSmi); + } } else { right = UseFixed(right_value, rcx); } // Shift operations can only deoptimize if we do a logical shift by 0 and // the result cannot be truncated to int32. - bool does_deopt = false; if (op == Token::SHR && constant_value == 0) { if (FLAG_opt_safe_uint32_operations) { does_deopt = !instr->CheckFlag(HInstruction::kUint32); @@ -729,9 +745,9 @@ LInstruction* LChunkBuilder::DoShift(Token::Value op, LInstruction* LChunkBuilder::DoArithmeticD(Token::Value op, HArithmeticBinaryOperation* instr) { - ASSERT(instr->representation().IsDouble()); - ASSERT(instr->left()->representation().IsDouble()); - ASSERT(instr->right()->representation().IsDouble()); + DCHECK(instr->representation().IsDouble()); + DCHECK(instr->left()->representation().IsDouble()); + DCHECK(instr->right()->representation().IsDouble()); if (op == Token::MOD) { LOperand* left = UseRegisterAtStart(instr->BetterLeftOperand()); LOperand* right = UseFixedDouble(instr->BetterRightOperand(), xmm1); @@ -750,8 +766,8 @@ LInstruction* LChunkBuilder::DoArithmeticT(Token::Value op, HBinaryOperation* instr) { HValue* left = instr->left(); HValue* right = instr->right(); - ASSERT(left->representation().IsTagged()); - ASSERT(right->representation().IsTagged()); + DCHECK(left->representation().IsTagged()); + DCHECK(right->representation().IsTagged()); LOperand* context = UseFixed(instr->context(), rsi); LOperand* left_operand = UseFixed(left, rdx); LOperand* right_operand = UseFixed(right, rax); @@ -762,7 +778,7 @@ LInstruction* LChunkBuilder::DoArithmeticT(Token::Value op, void LChunkBuilder::DoBasicBlock(HBasicBlock* block, HBasicBlock* next_block) { - ASSERT(is_building()); + DCHECK(is_building()); current_block_ = block; next_block_ = next_block; if (block->IsStartBlock()) { @@ -771,13 +787,13 @@ void LChunkBuilder::DoBasicBlock(HBasicBlock* block, HBasicBlock* next_block) { } else if (block->predecessors()->length() == 1) { // We have a single predecessor => copy environment and outgoing // argument count from the predecessor. - ASSERT(block->phis()->length() == 0); + DCHECK(block->phis()->length() == 0); HBasicBlock* pred = block->predecessors()->at(0); HEnvironment* last_environment = pred->last_environment(); - ASSERT(last_environment != NULL); + DCHECK(last_environment != NULL); // Only copy the environment, if it is later used again. if (pred->end()->SecondSuccessor() == NULL) { - ASSERT(pred->end()->FirstSuccessor() == block); + DCHECK(pred->end()->FirstSuccessor() == block); } else { if (pred->end()->FirstSuccessor()->block_id() > block->block_id() || pred->end()->SecondSuccessor()->block_id() > block->block_id()) { @@ -785,7 +801,7 @@ void LChunkBuilder::DoBasicBlock(HBasicBlock* block, HBasicBlock* next_block) { } } block->UpdateEnvironment(last_environment); - ASSERT(pred->argument_count() >= 0); + DCHECK(pred->argument_count() >= 0); argument_count_ = pred->argument_count(); } else { // We are at a state join => process phis. 
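
One detail worth calling out in the DoArithmeticD hunk above: Token::MOD is the one double operation that pins its operands to fixed double registers and is marked as a call, because double modulus is computed out of line rather than by inline SSE arithmetic. Its result matches C's fmod; a behavioural stand-in, not V8's actual runtime entry:

#include <cmath>

// What the out-of-line double-modulus call computes, behaviourally.
inline double DoubleMod(double left, double right) {
  return std::fmod(left, right);
}
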
@@ -837,7 +853,7 @@ void LChunkBuilder::VisitInstruction(HInstruction* current) { if (current->OperandCount() == 0) { instr = DefineAsRegister(new(zone()) LDummy()); } else { - ASSERT(!current->OperandAt(0)->IsControlInstruction()); + DCHECK(!current->OperandAt(0)->IsControlInstruction()); instr = DefineAsRegister(new(zone()) LDummyUse(UseAny(current->OperandAt(0)))); } @@ -849,76 +865,90 @@ void LChunkBuilder::VisitInstruction(HInstruction* current) { chunk_->AddInstruction(dummy, current_block_); } } else { - instr = current->CompileToLithium(this); + HBasicBlock* successor; + if (current->IsControlInstruction() && + HControlInstruction::cast(current)->KnownSuccessorBlock(&successor) && + successor != NULL) { + instr = new(zone()) LGoto(successor); + } else { + instr = current->CompileToLithium(this); + } } argument_count_ += current->argument_delta(); - ASSERT(argument_count_ >= 0); + DCHECK(argument_count_ >= 0); if (instr != NULL) { - // Associate the hydrogen instruction first, since we may need it for - // the ClobbersRegisters() or ClobbersDoubleRegisters() calls below. - instr->set_hydrogen_value(current); + AddInstruction(instr, current); + } + + current_instruction_ = old_current; +} + + +void LChunkBuilder::AddInstruction(LInstruction* instr, + HInstruction* hydrogen_val) { + // Associate the hydrogen instruction first, since we may need it for + // the ClobbersRegisters() or ClobbersDoubleRegisters() calls below. + instr->set_hydrogen_value(hydrogen_val); #if DEBUG - // Make sure that the lithium instruction has either no fixed register - // constraints in temps or the result OR no uses that are only used at - // start. If this invariant doesn't hold, the register allocator can decide - // to insert a split of a range immediately before the instruction due to an - // already allocated register needing to be used for the instruction's fixed - // register constraint. In this case, The register allocator won't see an - // interference between the split child and the use-at-start (it would if - // the it was just a plain use), so it is free to move the split child into - // the same register that is used for the use-at-start. - // See https://code.google.com/p/chromium/issues/detail?id=201590 - if (!(instr->ClobbersRegisters() && - instr->ClobbersDoubleRegisters(isolate()))) { - int fixed = 0; - int used_at_start = 0; - for (UseIterator it(instr); !it.Done(); it.Advance()) { - LUnallocated* operand = LUnallocated::cast(it.Current()); - if (operand->IsUsedAtStart()) ++used_at_start; - } - if (instr->Output() != NULL) { - if (LUnallocated::cast(instr->Output())->HasFixedPolicy()) ++fixed; - } - for (TempIterator it(instr); !it.Done(); it.Advance()) { - LUnallocated* operand = LUnallocated::cast(it.Current()); - if (operand->HasFixedPolicy()) ++fixed; - } - ASSERT(fixed == 0 || used_at_start == 0); + // Make sure that the lithium instruction has either no fixed register + // constraints in temps or the result OR no uses that are only used at + // start. If this invariant doesn't hold, the register allocator can decide + // to insert a split of a range immediately before the instruction due to an + // already allocated register needing to be used for the instruction's fixed + // register constraint. In this case, The register allocator won't see an + // interference between the split child and the use-at-start (it would if + // the it was just a plain use), so it is free to move the split child into + // the same register that is used for the use-at-start. 
+ // See https://code.google.com/p/chromium/issues/detail?id=201590 + if (!(instr->ClobbersRegisters() && + instr->ClobbersDoubleRegisters(isolate()))) { + int fixed = 0; + int used_at_start = 0; + for (UseIterator it(instr); !it.Done(); it.Advance()) { + LUnallocated* operand = LUnallocated::cast(it.Current()); + if (operand->IsUsedAtStart()) ++used_at_start; + } + if (instr->Output() != NULL) { + if (LUnallocated::cast(instr->Output())->HasFixedPolicy()) ++fixed; } + for (TempIterator it(instr); !it.Done(); it.Advance()) { + LUnallocated* operand = LUnallocated::cast(it.Current()); + if (operand->HasFixedPolicy()) ++fixed; + } + DCHECK(fixed == 0 || used_at_start == 0); + } #endif - if (FLAG_stress_pointer_maps && !instr->HasPointerMap()) { - instr = AssignPointerMap(instr); - } - if (FLAG_stress_environments && !instr->HasEnvironment()) { - instr = AssignEnvironment(instr); + if (FLAG_stress_pointer_maps && !instr->HasPointerMap()) { + instr = AssignPointerMap(instr); + } + if (FLAG_stress_environments && !instr->HasEnvironment()) { + instr = AssignEnvironment(instr); + } + chunk_->AddInstruction(instr, current_block_); + + if (instr->IsCall()) { + HValue* hydrogen_value_for_lazy_bailout = hydrogen_val; + LInstruction* instruction_needing_environment = NULL; + if (hydrogen_val->HasObservableSideEffects()) { + HSimulate* sim = HSimulate::cast(hydrogen_val->next()); + instruction_needing_environment = instr; + sim->ReplayEnvironment(current_block_->last_environment()); + hydrogen_value_for_lazy_bailout = sim; } - chunk_->AddInstruction(instr, current_block_); - - if (instr->IsCall()) { - HValue* hydrogen_value_for_lazy_bailout = current; - LInstruction* instruction_needing_environment = NULL; - if (current->HasObservableSideEffects()) { - HSimulate* sim = HSimulate::cast(current->next()); - instruction_needing_environment = instr; - sim->ReplayEnvironment(current_block_->last_environment()); - hydrogen_value_for_lazy_bailout = sim; - } - LInstruction* bailout = AssignEnvironment(new(zone()) LLazyBailout()); - bailout->set_hydrogen_value(hydrogen_value_for_lazy_bailout); - chunk_->AddInstruction(bailout, current_block_); - if (instruction_needing_environment != NULL) { - // Store the lazy deopt environment with the instruction if needed. - // Right now it is only used for LInstanceOfKnownGlobal. - instruction_needing_environment-> - SetDeferredLazyDeoptimizationEnvironment(bailout->environment()); - } + LInstruction* bailout = AssignEnvironment(new(zone()) LLazyBailout()); + bailout->set_hydrogen_value(hydrogen_value_for_lazy_bailout); + chunk_->AddInstruction(bailout, current_block_); + if (instruction_needing_environment != NULL) { + // Store the lazy deopt environment with the instruction if needed. + // Right now it is only used for LInstanceOfKnownGlobal. 
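
// The call/bailout pairing this block maintains, as a toy model (not V8
// code): every call-like instruction is immediately followed in the chunk
// by a lazy-bailout record, and when the call has observable side effects
// the bailout's environment is taken from the HSimulate that follows it.
#include <string>
#include <vector>
struct ToyInstr { std::string name; bool is_call; };
inline void ToyAddInstruction(std::vector<ToyInstr>* chunk,
                              const ToyInstr& instr) {
  chunk->push_back(instr);
  if (instr.is_call) {
    chunk->push_back({"lazy-bailout for " + instr.name, false});
  }
}
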
+ instruction_needing_environment-> + SetDeferredLazyDeoptimizationEnvironment(bailout->environment()); } } - current_instruction_ = old_current; } @@ -933,9 +963,6 @@ LInstruction* LChunkBuilder::DoDebugBreak(HDebugBreak* instr) { LInstruction* LChunkBuilder::DoBranch(HBranch* instr) { - LInstruction* goto_instr = CheckElideControlInstruction(instr); - if (goto_instr != NULL) return goto_instr; - HValue* value = instr->value(); Representation r = value->representation(); HType type = value->type(); @@ -955,10 +982,7 @@ LInstruction* LChunkBuilder::DoBranch(HBranch* instr) { LInstruction* LChunkBuilder::DoCompareMap(HCompareMap* instr) { - LInstruction* goto_instr = CheckElideControlInstruction(instr); - if (goto_instr != NULL) return goto_instr; - - ASSERT(instr->value()->representation().IsTagged()); + DCHECK(instr->value()->representation().IsTagged()); LOperand* value = UseRegisterAtStart(instr->value()); return new(zone()) LCmpMapAndBranch(value); } @@ -1016,9 +1040,13 @@ LInstruction* LChunkBuilder::DoApplyArguments(HApplyArguments* instr) { } -LInstruction* LChunkBuilder::DoPushArgument(HPushArgument* instr) { - LOperand* argument = UseOrConstant(instr->argument()); - return new(zone()) LPushArgument(argument); +LInstruction* LChunkBuilder::DoPushArguments(HPushArguments* instr) { + int argc = instr->OperandCount(); + for (int i = 0; i < argc; ++i) { + LOperand* argument = UseOrConstant(instr->argument(i)); + AddInstruction(new(zone()) LPushArgument(argument), instr); + } + return NULL; } @@ -1075,7 +1103,7 @@ LInstruction* LChunkBuilder::DoCallJSFunction( LInstruction* LChunkBuilder::DoCallWithDescriptor( HCallWithDescriptor* instr) { - const CallInterfaceDescriptor* descriptor = instr->descriptor(); + const InterfaceDescriptor* descriptor = instr->descriptor(); LOperand* target = UseRegisterOrConstantAtStart(instr->target()); ZoneList<LOperand*> ops(instr->OperandCount(), zone()); @@ -1102,14 +1130,24 @@ LInstruction* LChunkBuilder::DoInvokeFunction(HInvokeFunction* instr) { LInstruction* LChunkBuilder::DoUnaryMathOperation(HUnaryMathOperation* instr) { switch (instr->op()) { - case kMathFloor: return DoMathFloor(instr); - case kMathRound: return DoMathRound(instr); - case kMathAbs: return DoMathAbs(instr); - case kMathLog: return DoMathLog(instr); - case kMathExp: return DoMathExp(instr); - case kMathSqrt: return DoMathSqrt(instr); - case kMathPowHalf: return DoMathPowHalf(instr); - case kMathClz32: return DoMathClz32(instr); + case kMathFloor: + return DoMathFloor(instr); + case kMathRound: + return DoMathRound(instr); + case kMathFround: + return DoMathFround(instr); + case kMathAbs: + return DoMathAbs(instr); + case kMathLog: + return DoMathLog(instr); + case kMathExp: + return DoMathExp(instr); + case kMathSqrt: + return DoMathSqrt(instr); + case kMathPowHalf: + return DoMathPowHalf(instr); + case kMathClz32: + return DoMathClz32(instr); default: UNREACHABLE(); return NULL; @@ -1132,6 +1170,13 @@ LInstruction* LChunkBuilder::DoMathRound(HUnaryMathOperation* instr) { } +LInstruction* LChunkBuilder::DoMathFround(HUnaryMathOperation* instr) { + LOperand* input = UseRegister(instr->value()); + LMathFround* result = new (zone()) LMathFround(input); + return DefineAsRegister(result); +} + + LInstruction* LChunkBuilder::DoMathAbs(HUnaryMathOperation* instr) { LOperand* context = UseAny(instr->context()); LOperand* input = UseRegisterAtStart(instr->value()); @@ -1145,8 +1190,8 @@ LInstruction* LChunkBuilder::DoMathAbs(HUnaryMathOperation* instr) { LInstruction* 
LChunkBuilder::DoMathLog(HUnaryMathOperation* instr) { - ASSERT(instr->representation().IsDouble()); - ASSERT(instr->value()->representation().IsDouble()); + DCHECK(instr->representation().IsDouble()); + DCHECK(instr->value()->representation().IsDouble()); LOperand* input = UseRegisterAtStart(instr->value()); return MarkAsCall(DefineSameAsFirst(new(zone()) LMathLog(input)), instr); } @@ -1160,8 +1205,8 @@ LInstruction* LChunkBuilder::DoMathClz32(HUnaryMathOperation* instr) { LInstruction* LChunkBuilder::DoMathExp(HUnaryMathOperation* instr) { - ASSERT(instr->representation().IsDouble()); - ASSERT(instr->value()->representation().IsDouble()); + DCHECK(instr->representation().IsDouble()); + DCHECK(instr->value()->representation().IsDouble()); LOperand* value = UseTempRegister(instr->value()); LOperand* temp1 = TempRegister(); LOperand* temp2 = TempRegister(); @@ -1171,9 +1216,8 @@ LInstruction* LChunkBuilder::DoMathExp(HUnaryMathOperation* instr) { LInstruction* LChunkBuilder::DoMathSqrt(HUnaryMathOperation* instr) { - LOperand* input = UseRegisterAtStart(instr->value()); - LMathSqrt* result = new(zone()) LMathSqrt(input); - return DefineSameAsFirst(result); + LOperand* input = UseAtStart(instr->value()); + return DefineAsRegister(new(zone()) LMathSqrt(input)); } @@ -1237,9 +1281,9 @@ LInstruction* LChunkBuilder::DoShl(HShl* instr) { LInstruction* LChunkBuilder::DoBitwise(HBitwise* instr) { if (instr->representation().IsSmiOrInteger32()) { - ASSERT(instr->left()->representation().Equals(instr->representation())); - ASSERT(instr->right()->representation().Equals(instr->representation())); - ASSERT(instr->CheckFlag(HValue::kTruncatingToInt32)); + DCHECK(instr->left()->representation().Equals(instr->representation())); + DCHECK(instr->right()->representation().Equals(instr->representation())); + DCHECK(instr->CheckFlag(HValue::kTruncatingToInt32)); LOperand* left = UseRegisterAtStart(instr->BetterLeftOperand()); LOperand* right = UseOrConstantAtStart(instr->BetterRightOperand()); @@ -1251,9 +1295,9 @@ LInstruction* LChunkBuilder::DoBitwise(HBitwise* instr) { LInstruction* LChunkBuilder::DoDivByPowerOf2I(HDiv* instr) { - ASSERT(instr->representation().IsSmiOrInteger32()); - ASSERT(instr->left()->representation().Equals(instr->representation())); - ASSERT(instr->right()->representation().Equals(instr->representation())); + DCHECK(instr->representation().IsSmiOrInteger32()); + DCHECK(instr->left()->representation().Equals(instr->representation())); + DCHECK(instr->right()->representation().Equals(instr->representation())); LOperand* dividend = UseRegister(instr->left()); int32_t divisor = instr->right()->GetInteger32Constant(); LInstruction* result = DefineAsRegister(new(zone()) LDivByPowerOf2I( @@ -1269,9 +1313,9 @@ LInstruction* LChunkBuilder::DoDivByPowerOf2I(HDiv* instr) { LInstruction* LChunkBuilder::DoDivByConstI(HDiv* instr) { - ASSERT(instr->representation().IsInteger32()); - ASSERT(instr->left()->representation().Equals(instr->representation())); - ASSERT(instr->right()->representation().Equals(instr->representation())); + DCHECK(instr->representation().IsInteger32()); + DCHECK(instr->left()->representation().Equals(instr->representation())); + DCHECK(instr->right()->representation().Equals(instr->representation())); LOperand* dividend = UseRegister(instr->left()); int32_t divisor = instr->right()->GetInteger32Constant(); LOperand* temp1 = FixedTemp(rax); @@ -1288,9 +1332,9 @@ LInstruction* LChunkBuilder::DoDivByConstI(HDiv* instr) { LInstruction* LChunkBuilder::DoDivI(HDiv* instr) { - 
ASSERT(instr->representation().IsSmiOrInteger32()); - ASSERT(instr->left()->representation().Equals(instr->representation())); - ASSERT(instr->right()->representation().Equals(instr->representation())); + DCHECK(instr->representation().IsSmiOrInteger32()); + DCHECK(instr->left()->representation().Equals(instr->representation())); + DCHECK(instr->right()->representation().Equals(instr->representation())); LOperand* dividend = UseFixed(instr->left(), rax); LOperand* divisor = UseRegister(instr->right()); LOperand* temp = FixedTemp(rdx); @@ -1337,9 +1381,9 @@ LInstruction* LChunkBuilder::DoFlooringDivByPowerOf2I(HMathFloorOfDiv* instr) { LInstruction* LChunkBuilder::DoFlooringDivByConstI(HMathFloorOfDiv* instr) { - ASSERT(instr->representation().IsInteger32()); - ASSERT(instr->left()->representation().Equals(instr->representation())); - ASSERT(instr->right()->representation().Equals(instr->representation())); + DCHECK(instr->representation().IsInteger32()); + DCHECK(instr->left()->representation().Equals(instr->representation())); + DCHECK(instr->right()->representation().Equals(instr->representation())); LOperand* dividend = UseRegister(instr->left()); int32_t divisor = instr->right()->GetInteger32Constant(); LOperand* temp1 = FixedTemp(rax); @@ -1364,9 +1408,9 @@ LInstruction* LChunkBuilder::DoFlooringDivByConstI(HMathFloorOfDiv* instr) { LInstruction* LChunkBuilder::DoFlooringDivI(HMathFloorOfDiv* instr) { - ASSERT(instr->representation().IsSmiOrInteger32()); - ASSERT(instr->left()->representation().Equals(instr->representation())); - ASSERT(instr->right()->representation().Equals(instr->representation())); + DCHECK(instr->representation().IsSmiOrInteger32()); + DCHECK(instr->left()->representation().Equals(instr->representation())); + DCHECK(instr->right()->representation().Equals(instr->representation())); LOperand* dividend = UseFixed(instr->left(), rax); LOperand* divisor = UseRegister(instr->right()); LOperand* temp = FixedTemp(rdx); @@ -1393,14 +1437,15 @@ LInstruction* LChunkBuilder::DoMathFloorOfDiv(HMathFloorOfDiv* instr) { LInstruction* LChunkBuilder::DoModByPowerOf2I(HMod* instr) { - ASSERT(instr->representation().IsSmiOrInteger32()); - ASSERT(instr->left()->representation().Equals(instr->representation())); - ASSERT(instr->right()->representation().Equals(instr->representation())); + DCHECK(instr->representation().IsSmiOrInteger32()); + DCHECK(instr->left()->representation().Equals(instr->representation())); + DCHECK(instr->right()->representation().Equals(instr->representation())); LOperand* dividend = UseRegisterAtStart(instr->left()); int32_t divisor = instr->right()->GetInteger32Constant(); LInstruction* result = DefineSameAsFirst(new(zone()) LModByPowerOf2I( dividend, divisor)); - if (instr->CheckFlag(HValue::kBailoutOnMinusZero)) { + if (instr->CheckFlag(HValue::kLeftCanBeNegative) && + instr->CheckFlag(HValue::kBailoutOnMinusZero)) { result = AssignEnvironment(result); } return result; @@ -1408,9 +1453,9 @@ LInstruction* LChunkBuilder::DoModByPowerOf2I(HMod* instr) { LInstruction* LChunkBuilder::DoModByConstI(HMod* instr) { - ASSERT(instr->representation().IsSmiOrInteger32()); - ASSERT(instr->left()->representation().Equals(instr->representation())); - ASSERT(instr->right()->representation().Equals(instr->representation())); + DCHECK(instr->representation().IsSmiOrInteger32()); + DCHECK(instr->left()->representation().Equals(instr->representation())); + DCHECK(instr->right()->representation().Equals(instr->representation())); LOperand* dividend = 
UseRegister(instr->left()); int32_t divisor = instr->right()->GetInteger32Constant(); LOperand* temp1 = FixedTemp(rax); @@ -1425,9 +1470,9 @@ LInstruction* LChunkBuilder::DoModByConstI(HMod* instr) { LInstruction* LChunkBuilder::DoModI(HMod* instr) { - ASSERT(instr->representation().IsSmiOrInteger32()); - ASSERT(instr->left()->representation().Equals(instr->representation())); - ASSERT(instr->right()->representation().Equals(instr->representation())); + DCHECK(instr->representation().IsSmiOrInteger32()); + DCHECK(instr->left()->representation().Equals(instr->representation())); + DCHECK(instr->right()->representation().Equals(instr->representation())); LOperand* dividend = UseFixed(instr->left(), rax); LOperand* divisor = UseRegister(instr->right()); LOperand* temp = FixedTemp(rdx); @@ -1460,8 +1505,8 @@ LInstruction* LChunkBuilder::DoMod(HMod* instr) { LInstruction* LChunkBuilder::DoMul(HMul* instr) { if (instr->representation().IsSmiOrInteger32()) { - ASSERT(instr->left()->representation().Equals(instr->representation())); - ASSERT(instr->right()->representation().Equals(instr->representation())); + DCHECK(instr->left()->representation().Equals(instr->representation())); + DCHECK(instr->right()->representation().Equals(instr->representation())); LOperand* left = UseRegisterAtStart(instr->BetterLeftOperand()); LOperand* right = UseOrConstant(instr->BetterRightOperand()); LMulI* mul = new(zone()) LMulI(left, right); @@ -1480,8 +1525,8 @@ LInstruction* LChunkBuilder::DoMul(HMul* instr) { LInstruction* LChunkBuilder::DoSub(HSub* instr) { if (instr->representation().IsSmiOrInteger32()) { - ASSERT(instr->left()->representation().Equals(instr->representation())); - ASSERT(instr->right()->representation().Equals(instr->representation())); + DCHECK(instr->left()->representation().Equals(instr->representation())); + DCHECK(instr->right()->representation().Equals(instr->representation())); LOperand* left = UseRegisterAtStart(instr->left()); LOperand* right = UseOrConstantAtStart(instr->right()); LSubI* sub = new(zone()) LSubI(left, right); @@ -1505,12 +1550,12 @@ LInstruction* LChunkBuilder::DoAdd(HAdd* instr) { // are multiple uses of the add's inputs, so using a 3-register add will // preserve all input values for later uses. bool use_lea = LAddI::UseLea(instr); - ASSERT(instr->left()->representation().Equals(instr->representation())); - ASSERT(instr->right()->representation().Equals(instr->representation())); + DCHECK(instr->left()->representation().Equals(instr->representation())); + DCHECK(instr->right()->representation().Equals(instr->representation())); LOperand* left = UseRegisterAtStart(instr->BetterLeftOperand()); HValue* right_candidate = instr->BetterRightOperand(); LOperand* right; - if (instr->representation().IsSmi()) { + if (SmiValuesAre32Bits() && instr->representation().IsSmi()) { // We cannot add a tagged immediate to a tagged value, // so we request it in a register. 
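
// Background for the SmiValuesAre32Bits() case above (a sketch of the
// representation, not V8's helpers): with 32-bit smi values the payload
// lives in the upper half of the 64-bit word, so a tagged constant such
// as SmiTag(7) == 0x0000000700000000 cannot be encoded as an imm32
// operand and has to be materialized in a register before the add.
#include <cstdint>
inline int64_t SmiTag(int32_t value) {
  return static_cast<int64_t>(value) << 32;
}
inline int32_t SmiUntag(int64_t smi) {
  return static_cast<int32_t>(smi >> 32);
}
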
right = UseRegisterAtStart(right_candidate); @@ -1527,9 +1572,9 @@ LInstruction* LChunkBuilder::DoAdd(HAdd* instr) { } return result; } else if (instr->representation().IsExternal()) { - ASSERT(instr->left()->representation().IsExternal()); - ASSERT(instr->right()->representation().IsInteger32()); - ASSERT(!instr->CheckFlag(HValue::kCanOverflow)); + DCHECK(instr->left()->representation().IsExternal()); + DCHECK(instr->right()->representation().IsInteger32()); + DCHECK(!instr->CheckFlag(HValue::kCanOverflow)); bool use_lea = LAddI::UseLea(instr); LOperand* left = UseRegisterAtStart(instr->left()); HValue* right_candidate = instr->right(); @@ -1553,8 +1598,8 @@ LInstruction* LChunkBuilder::DoAdd(HAdd* instr) { LInstruction* LChunkBuilder::DoMathMinMax(HMathMinMax* instr) { LOperand* left = NULL; LOperand* right = NULL; - ASSERT(instr->left()->representation().Equals(instr->representation())); - ASSERT(instr->right()->representation().Equals(instr->representation())); + DCHECK(instr->left()->representation().Equals(instr->representation())); + DCHECK(instr->right()->representation().Equals(instr->representation())); if (instr->representation().IsSmi()) { left = UseRegisterAtStart(instr->BetterLeftOperand()); right = UseAtStart(instr->BetterRightOperand()); @@ -1562,7 +1607,7 @@ LInstruction* LChunkBuilder::DoMathMinMax(HMathMinMax* instr) { left = UseRegisterAtStart(instr->BetterLeftOperand()); right = UseOrConstantAtStart(instr->BetterRightOperand()); } else { - ASSERT(instr->representation().IsDouble()); + DCHECK(instr->representation().IsDouble()); left = UseRegisterAtStart(instr->left()); right = UseRegisterAtStart(instr->right()); } @@ -1572,11 +1617,11 @@ LInstruction* LChunkBuilder::DoMathMinMax(HMathMinMax* instr) { LInstruction* LChunkBuilder::DoPower(HPower* instr) { - ASSERT(instr->representation().IsDouble()); + DCHECK(instr->representation().IsDouble()); // We call a C function for double power. It can't trigger a GC. // We need to use fixed result register for the call. Representation exponent_type = instr->right()->representation(); - ASSERT(instr->left()->representation().IsDouble()); + DCHECK(instr->left()->representation().IsDouble()); LOperand* left = UseFixedDouble(instr->left(), xmm2); LOperand* right = exponent_type.IsDouble() ? 
UseFixedDouble(instr->right(), xmm1) : UseFixed(instr->right(), rdx); @@ -1587,8 +1632,8 @@ LInstruction* LChunkBuilder::DoPower(HPower* instr) { LInstruction* LChunkBuilder::DoCompareGeneric(HCompareGeneric* instr) { - ASSERT(instr->left()->representation().IsTagged()); - ASSERT(instr->right()->representation().IsTagged()); + DCHECK(instr->left()->representation().IsTagged()); + DCHECK(instr->right()->representation().IsTagged()); LOperand* context = UseFixed(instr->context(), rsi); LOperand* left = UseFixed(instr->left(), rdx); LOperand* right = UseFixed(instr->right(), rax); @@ -1599,19 +1644,17 @@ LInstruction* LChunkBuilder::DoCompareGeneric(HCompareGeneric* instr) { LInstruction* LChunkBuilder::DoCompareNumericAndBranch( HCompareNumericAndBranch* instr) { - LInstruction* goto_instr = CheckElideControlInstruction(instr); - if (goto_instr != NULL) return goto_instr; Representation r = instr->representation(); if (r.IsSmiOrInteger32()) { - ASSERT(instr->left()->representation().Equals(r)); - ASSERT(instr->right()->representation().Equals(r)); + DCHECK(instr->left()->representation().Equals(r)); + DCHECK(instr->right()->representation().Equals(r)); LOperand* left = UseRegisterOrConstantAtStart(instr->left()); LOperand* right = UseOrConstantAtStart(instr->right()); return new(zone()) LCompareNumericAndBranch(left, right); } else { - ASSERT(r.IsDouble()); - ASSERT(instr->left()->representation().IsDouble()); - ASSERT(instr->right()->representation().IsDouble()); + DCHECK(r.IsDouble()); + DCHECK(instr->left()->representation().IsDouble()); + DCHECK(instr->right()->representation().IsDouble()); LOperand* left; LOperand* right; if (instr->left()->IsConstant() && instr->right()->IsConstant()) { @@ -1628,8 +1671,6 @@ LInstruction* LChunkBuilder::DoCompareNumericAndBranch( LInstruction* LChunkBuilder::DoCompareObjectEqAndBranch( HCompareObjectEqAndBranch* instr) { - LInstruction* goto_instr = CheckElideControlInstruction(instr); - if (goto_instr != NULL) return goto_instr; LOperand* left = UseRegisterAtStart(instr->left()); LOperand* right = UseRegisterOrConstantAtStart(instr->right()); return new(zone()) LCmpObjectEqAndBranch(left, right); @@ -1645,21 +1686,19 @@ LInstruction* LChunkBuilder::DoCompareHoleAndBranch( LInstruction* LChunkBuilder::DoCompareMinusZeroAndBranch( HCompareMinusZeroAndBranch* instr) { - LInstruction* goto_instr = CheckElideControlInstruction(instr); - if (goto_instr != NULL) return goto_instr; LOperand* value = UseRegister(instr->value()); return new(zone()) LCompareMinusZeroAndBranch(value); } LInstruction* LChunkBuilder::DoIsObjectAndBranch(HIsObjectAndBranch* instr) { - ASSERT(instr->value()->representation().IsTagged()); + DCHECK(instr->value()->representation().IsTagged()); return new(zone()) LIsObjectAndBranch(UseRegisterAtStart(instr->value())); } LInstruction* LChunkBuilder::DoIsStringAndBranch(HIsStringAndBranch* instr) { - ASSERT(instr->value()->representation().IsTagged()); + DCHECK(instr->value()->representation().IsTagged()); LOperand* value = UseRegisterAtStart(instr->value()); LOperand* temp = TempRegister(); return new(zone()) LIsStringAndBranch(value, temp); @@ -1667,14 +1706,14 @@ LInstruction* LChunkBuilder::DoIsStringAndBranch(HIsStringAndBranch* instr) { LInstruction* LChunkBuilder::DoIsSmiAndBranch(HIsSmiAndBranch* instr) { - ASSERT(instr->value()->representation().IsTagged()); + DCHECK(instr->value()->representation().IsTagged()); return new(zone()) LIsSmiAndBranch(Use(instr->value())); } LInstruction* LChunkBuilder::DoIsUndetectableAndBranch( 
HIsUndetectableAndBranch* instr) { - ASSERT(instr->value()->representation().IsTagged()); + DCHECK(instr->value()->representation().IsTagged()); LOperand* value = UseRegisterAtStart(instr->value()); LOperand* temp = TempRegister(); return new(zone()) LIsUndetectableAndBranch(value, temp); @@ -1684,8 +1723,8 @@ LInstruction* LChunkBuilder::DoIsUndetectableAndBranch( LInstruction* LChunkBuilder::DoStringCompareAndBranch( HStringCompareAndBranch* instr) { - ASSERT(instr->left()->representation().IsTagged()); - ASSERT(instr->right()->representation().IsTagged()); + DCHECK(instr->left()->representation().IsTagged()); + DCHECK(instr->right()->representation().IsTagged()); LOperand* context = UseFixed(instr->context(), rsi); LOperand* left = UseFixed(instr->left(), rdx); LOperand* right = UseFixed(instr->right(), rax); @@ -1698,7 +1737,7 @@ LInstruction* LChunkBuilder::DoStringCompareAndBranch( LInstruction* LChunkBuilder::DoHasInstanceTypeAndBranch( HHasInstanceTypeAndBranch* instr) { - ASSERT(instr->value()->representation().IsTagged()); + DCHECK(instr->value()->representation().IsTagged()); LOperand* value = UseRegisterAtStart(instr->value()); return new(zone()) LHasInstanceTypeAndBranch(value); } @@ -1706,7 +1745,7 @@ LInstruction* LChunkBuilder::DoHasInstanceTypeAndBranch( LInstruction* LChunkBuilder::DoGetCachedArrayIndex( HGetCachedArrayIndex* instr) { - ASSERT(instr->value()->representation().IsTagged()); + DCHECK(instr->value()->representation().IsTagged()); LOperand* value = UseRegisterAtStart(instr->value()); return DefineAsRegister(new(zone()) LGetCachedArrayIndex(value)); @@ -1715,7 +1754,7 @@ LInstruction* LChunkBuilder::DoGetCachedArrayIndex( LInstruction* LChunkBuilder::DoHasCachedArrayIndexAndBranch( HHasCachedArrayIndexAndBranch* instr) { - ASSERT(instr->value()->representation().IsTagged()); + DCHECK(instr->value()->representation().IsTagged()); LOperand* value = UseRegisterAtStart(instr->value()); return new(zone()) LHasCachedArrayIndexAndBranch(value); } @@ -1833,7 +1872,7 @@ LInstruction* LChunkBuilder::DoChange(HChange* instr) { } return AssignEnvironment(DefineSameAsFirst(new(zone()) LCheckSmi(value))); } else { - ASSERT(to.IsInteger32()); + DCHECK(to.IsInteger32()); if (val->type().IsSmi() || val->representation().IsSmi()) { LOperand* value = UseRegister(val); return DefineSameAsFirst(new(zone()) LSmiUntag(value, false)); @@ -1860,7 +1899,7 @@ LInstruction* LChunkBuilder::DoChange(HChange* instr) { return AssignEnvironment( DefineAsRegister(new(zone()) LDoubleToSmi(value))); } else { - ASSERT(to.IsInteger32()); + DCHECK(to.IsInteger32()); LOperand* value = UseRegister(val); LInstruction* result = DefineAsRegister(new(zone()) LDoubleToI(value)); if (!instr->CanTruncateToInt32()) result = AssignEnvironment(result); @@ -1880,7 +1919,9 @@ LInstruction* LChunkBuilder::DoChange(HChange* instr) { return AssignPointerMap(DefineSameAsFirst(result)); } else { LOperand* value = UseRegister(val); - LNumberTagI* result = new(zone()) LNumberTagI(value); + LOperand* temp1 = SmiValuesAre32Bits() ? NULL : TempRegister(); + LOperand* temp2 = SmiValuesAre32Bits() ? 
NULL : FixedTemp(xmm1); + LNumberTagI* result = new(zone()) LNumberTagI(value, temp1, temp2); return AssignPointerMap(DefineSameAsFirst(result)); } } else if (to.IsSmi()) { @@ -1891,11 +1932,9 @@ LInstruction* LChunkBuilder::DoChange(HChange* instr) { } return result; } else { - ASSERT(to.IsDouble()); + DCHECK(to.IsDouble()); if (val->CheckFlag(HInstruction::kUint32)) { - LOperand* temp = FixedTemp(xmm1); - return DefineAsRegister( - new(zone()) LUint32ToDouble(UseRegister(val), temp)); + return DefineAsRegister(new(zone()) LUint32ToDouble(UseRegister(val))); } else { LOperand* value = Use(val); return DefineAsRegister(new(zone()) LInteger32ToDouble(value)); @@ -1910,7 +1949,9 @@ LInstruction* LChunkBuilder::DoChange(HChange* instr) { LInstruction* LChunkBuilder::DoCheckHeapObject(HCheckHeapObject* instr) { LOperand* value = UseRegisterAtStart(instr->value()); LInstruction* result = new(zone()) LCheckNonSmi(value); - if (!instr->value()->IsHeapObject()) result = AssignEnvironment(result); + if (!instr->value()->type().IsHeapObject()) { + result = AssignEnvironment(result); + } return result; } @@ -1955,7 +1996,7 @@ LInstruction* LChunkBuilder::DoClampToUint8(HClampToUint8* instr) { } else if (input_rep.IsInteger32()) { return DefineSameAsFirst(new(zone()) LClampIToUint8(reg)); } else { - ASSERT(input_rep.IsSmiOrTagged()); + DCHECK(input_rep.IsSmiOrTagged()); // Register allocator doesn't (yet) support allocation of double // temps. Reserve xmm1 explicitly. LClampTToUint8* result = new(zone()) LClampTToUint8(reg, @@ -1967,7 +2008,7 @@ LInstruction* LChunkBuilder::DoClampToUint8(HClampToUint8* instr) { LInstruction* LChunkBuilder::DoDoubleBits(HDoubleBits* instr) { HValue* value = instr->value(); - ASSERT(value->representation().IsDouble()); + DCHECK(value->representation().IsDouble()); return DefineAsRegister(new(zone()) LDoubleBits(UseRegister(value))); } @@ -2017,9 +2058,15 @@ LInstruction* LChunkBuilder::DoLoadGlobalCell(HLoadGlobalCell* instr) { LInstruction* LChunkBuilder::DoLoadGlobalGeneric(HLoadGlobalGeneric* instr) { LOperand* context = UseFixed(instr->context(), rsi); - LOperand* global_object = UseFixed(instr->global_object(), rax); + LOperand* global_object = UseFixed(instr->global_object(), + LoadIC::ReceiverRegister()); + LOperand* vector = NULL; + if (FLAG_vector_ics) { + vector = FixedTemp(LoadIC::VectorRegister()); + } + LLoadGlobalGeneric* result = - new(zone()) LLoadGlobalGeneric(context, global_object); + new(zone()) LLoadGlobalGeneric(context, global_object, vector); return MarkAsCall(DefineFixed(result, rax), instr); } @@ -2084,8 +2131,13 @@ LInstruction* LChunkBuilder::DoLoadNamedField(HLoadNamedField* instr) { LInstruction* LChunkBuilder::DoLoadNamedGeneric(HLoadNamedGeneric* instr) { LOperand* context = UseFixed(instr->context(), rsi); - LOperand* object = UseFixed(instr->object(), rax); - LLoadNamedGeneric* result = new(zone()) LLoadNamedGeneric(context, object); + LOperand* object = UseFixed(instr->object(), LoadIC::ReceiverRegister()); + LOperand* vector = NULL; + if (FLAG_vector_ics) { + vector = FixedTemp(LoadIC::VectorRegister()); + } + LLoadNamedGeneric* result = new(zone()) LLoadNamedGeneric( + context, object, vector); return MarkAsCall(DefineFixed(result, rax), instr); } @@ -2103,6 +2155,11 @@ LInstruction* LChunkBuilder::DoLoadRoot(HLoadRoot* instr) { void LChunkBuilder::FindDehoistedKeyDefinitions(HValue* candidate) { + // We sign extend the dehoisted key at the definition point when the pointer + // size is 64-bit. 
For x32 port, we sign extend the dehoisted key at the use + // points and should not invoke this function. We can't use STATIC_ASSERT + // here as the pointer size is 32-bit for x32. + DCHECK(kPointerSize == kInt64Size); BitVector* dehoisted_key_ids = chunk_->GetDehoistedKeyIds(); if (dehoisted_key_ids->Contains(candidate->id())) return; dehoisted_key_ids->Add(candidate->id()); @@ -2114,12 +2171,25 @@ void LChunkBuilder::FindDehoistedKeyDefinitions(HValue* candidate) { LInstruction* LChunkBuilder::DoLoadKeyed(HLoadKeyed* instr) { - ASSERT(instr->key()->representation().IsInteger32()); + DCHECK((kPointerSize == kInt64Size && + instr->key()->representation().IsInteger32()) || + (kPointerSize == kInt32Size && + instr->key()->representation().IsSmiOrInteger32())); ElementsKind elements_kind = instr->elements_kind(); - LOperand* key = UseRegisterOrConstantAtStart(instr->key()); + LOperand* key = NULL; LInstruction* result = NULL; - if (instr->IsDehoisted()) { + if (kPointerSize == kInt64Size) { + key = UseRegisterOrConstantAtStart(instr->key()); + } else { + bool clobbers_key = ExternalArrayOpRequiresTemp( + instr->key()->representation(), elements_kind); + key = clobbers_key + ? UseTempRegister(instr->key()) + : UseRegisterOrConstantAtStart(instr->key()); + } + + if ((kPointerSize == kInt64Size) && instr->IsDehoisted()) { FindDehoistedKeyDefinitions(instr->key()); } @@ -2127,7 +2197,7 @@ LInstruction* LChunkBuilder::DoLoadKeyed(HLoadKeyed* instr) { LOperand* obj = UseRegisterAtStart(instr->elements()); result = DefineAsRegister(new(zone()) LLoadKeyed(obj, key)); } else { - ASSERT( + DCHECK( (instr->representation().IsInteger32() && !(IsDoubleOrFloatElementsKind(elements_kind))) || (instr->representation().IsDouble() && @@ -2152,11 +2222,15 @@ LInstruction* LChunkBuilder::DoLoadKeyed(HLoadKeyed* instr) { LInstruction* LChunkBuilder::DoLoadKeyedGeneric(HLoadKeyedGeneric* instr) { LOperand* context = UseFixed(instr->context(), rsi); - LOperand* object = UseFixed(instr->object(), rdx); - LOperand* key = UseFixed(instr->key(), rax); + LOperand* object = UseFixed(instr->object(), LoadIC::ReceiverRegister()); + LOperand* key = UseFixed(instr->key(), LoadIC::NameRegister()); + LOperand* vector = NULL; + if (FLAG_vector_ics) { + vector = FixedTemp(LoadIC::VectorRegister()); + } LLoadKeyedGeneric* result = - new(zone()) LLoadKeyedGeneric(context, object, key); + new(zone()) LLoadKeyedGeneric(context, object, key, vector); return MarkAsCall(DefineFixed(result, rax), instr); } @@ -2164,12 +2238,12 @@ LInstruction* LChunkBuilder::DoLoadKeyedGeneric(HLoadKeyedGeneric* instr) { LInstruction* LChunkBuilder::DoStoreKeyed(HStoreKeyed* instr) { ElementsKind elements_kind = instr->elements_kind(); - if (instr->IsDehoisted()) { + if ((kPointerSize == kInt64Size) && instr->IsDehoisted()) { FindDehoistedKeyDefinitions(instr->key()); } if (!instr->is_typed_elements()) { - ASSERT(instr->elements()->representation().IsTagged()); + DCHECK(instr->elements()->representation().IsTagged()); bool needs_write_barrier = instr->NeedsWriteBarrier(); LOperand* object = NULL; LOperand* key = NULL; @@ -2181,7 +2255,7 @@ LInstruction* LChunkBuilder::DoStoreKeyed(HStoreKeyed* instr) { val = UseRegisterAtStart(instr->value()); key = UseRegisterOrConstantAtStart(instr->key()); } else { - ASSERT(value_representation.IsSmiOrTagged() || + DCHECK(value_representation.IsSmiOrTagged() || value_representation.IsInteger32()); if (needs_write_barrier) { object = UseTempRegister(instr->elements()); @@ -2197,12 +2271,12 @@ LInstruction* 
LChunkBuilder::DoStoreKeyed(HStoreKeyed* instr) { return new(zone()) LStoreKeyed(object, key, val); } - ASSERT( + DCHECK( (instr->value()->representation().IsInteger32() && !IsDoubleOrFloatElementsKind(elements_kind)) || (instr->value()->representation().IsDouble() && IsDoubleOrFloatElementsKind(elements_kind))); - ASSERT((instr->is_fixed_typed_array() && + DCHECK((instr->is_fixed_typed_array() && instr->elements()->representation().IsTagged()) || (instr->is_external() && instr->elements()->representation().IsExternal())); @@ -2212,7 +2286,16 @@ LInstruction* LChunkBuilder::DoStoreKeyed(HStoreKeyed* instr) { elements_kind == FLOAT32_ELEMENTS; LOperand* val = val_is_temp_register ? UseTempRegister(instr->value()) : UseRegister(instr->value()); - LOperand* key = UseRegisterOrConstantAtStart(instr->key()); + LOperand* key = NULL; + if (kPointerSize == kInt64Size) { + key = UseRegisterOrConstantAtStart(instr->key()); + } else { + bool clobbers_key = ExternalArrayOpRequiresTemp( + instr->key()->representation(), elements_kind); + key = clobbers_key + ? UseTempRegister(instr->key()) + : UseRegisterOrConstantAtStart(instr->key()); + } LOperand* backing_store = UseRegister(instr->elements()); return new(zone()) LStoreKeyed(backing_store, key, val); } @@ -2220,13 +2303,14 @@ LInstruction* LChunkBuilder::DoStoreKeyed(HStoreKeyed* instr) { LInstruction* LChunkBuilder::DoStoreKeyedGeneric(HStoreKeyedGeneric* instr) { LOperand* context = UseFixed(instr->context(), rsi); - LOperand* object = UseFixed(instr->object(), rdx); - LOperand* key = UseFixed(instr->key(), rcx); - LOperand* value = UseFixed(instr->value(), rax); + LOperand* object = UseFixed(instr->object(), + KeyedStoreIC::ReceiverRegister()); + LOperand* key = UseFixed(instr->key(), KeyedStoreIC::NameRegister()); + LOperand* value = UseFixed(instr->value(), KeyedStoreIC::ValueRegister()); - ASSERT(instr->object()->representation().IsTagged()); - ASSERT(instr->key()->representation().IsTagged()); - ASSERT(instr->value()->representation().IsTagged()); + DCHECK(instr->object()->representation().IsTagged()); + DCHECK(instr->key()->representation().IsTagged()); + DCHECK(instr->value()->representation().IsTagged()); LStoreKeyedGeneric* result = new(zone()) LStoreKeyedGeneric(context, object, key, value); @@ -2277,9 +2361,9 @@ LInstruction* LChunkBuilder::DoStoreNamedField(HStoreNamedField* instr) { ? 
UseRegister(instr->object()) : UseTempRegister(instr->object()); } else if (is_external_location) { - ASSERT(!is_in_object); - ASSERT(!needs_write_barrier); - ASSERT(!needs_write_barrier_for_map); + DCHECK(!is_in_object); + DCHECK(!needs_write_barrier); + DCHECK(!needs_write_barrier_for_map); obj = UseRegisterOrConstant(instr->object()); } else { obj = needs_write_barrier_for_map @@ -2317,8 +2401,8 @@ LInstruction* LChunkBuilder::DoStoreNamedField(HStoreNamedField* instr) { LInstruction* LChunkBuilder::DoStoreNamedGeneric(HStoreNamedGeneric* instr) { LOperand* context = UseFixed(instr->context(), rsi); - LOperand* object = UseFixed(instr->object(), rdx); - LOperand* value = UseFixed(instr->value(), rax); + LOperand* object = UseFixed(instr->object(), StoreIC::ReceiverRegister()); + LOperand* value = UseFixed(instr->value(), StoreIC::ValueRegister()); LStoreNamedGeneric* result = new(zone()) LStoreNamedGeneric(context, object, value); @@ -2381,7 +2465,7 @@ LInstruction* LChunkBuilder::DoFunctionLiteral(HFunctionLiteral* instr) { LInstruction* LChunkBuilder::DoOsrEntry(HOsrEntry* instr) { - ASSERT(argument_count_ == 0); + DCHECK(argument_count_ == 0); allocator_->MarkAsOsrEntry(); current_block_->last_environment()->set_ast_id(instr->ast_id()); return AssignEnvironment(new(zone()) LOsrEntry); @@ -2394,11 +2478,11 @@ LInstruction* LChunkBuilder::DoParameter(HParameter* instr) { int spill_index = chunk()->GetParameterStackSlot(instr->index()); return DefineAsSpilled(result, spill_index); } else { - ASSERT(info()->IsStub()); + DCHECK(info()->IsStub()); CodeStubInterfaceDescriptor* descriptor = info()->code_stub()->GetInterfaceDescriptor(); int index = static_cast<int>(instr->index()); - Register reg = descriptor->GetParameterRegister(index); + Register reg = descriptor->GetEnvironmentParameterRegister(index); return DefineFixed(result, reg); } } @@ -2478,9 +2562,6 @@ LInstruction* LChunkBuilder::DoTypeof(HTypeof* instr) { LInstruction* LChunkBuilder::DoTypeofIsAndBranch(HTypeofIsAndBranch* instr) { - LInstruction* goto_instr = CheckElideControlInstruction(instr); - if (goto_instr != NULL) return goto_instr; - return new(zone()) LTypeofIsAndBranch(UseTempRegister(instr->value())); } @@ -2503,7 +2584,7 @@ LInstruction* LChunkBuilder::DoStackCheck(HStackCheck* instr) { LOperand* context = UseFixed(instr->context(), rsi); return MarkAsCall(new(zone()) LStackCheck(context), instr); } else { - ASSERT(instr->is_backwards_branch()); + DCHECK(instr->is_backwards_branch()); LOperand* context = UseAny(instr->context()); return AssignEnvironment( AssignPointerMap(new(zone()) LStackCheck(context))); @@ -2539,7 +2620,7 @@ LInstruction* LChunkBuilder::DoLeaveInlined(HLeaveInlined* instr) { if (env->entry()->arguments_pushed()) { int argument_count = env->arguments_environment()->parameter_count(); pop = new(zone()) LDrop(argument_count); - ASSERT(instr->argument_delta() == -argument_count); + DCHECK(instr->argument_delta() == -argument_count); } HEnvironment* outer = current_block_->last_environment()-> @@ -2581,6 +2662,22 @@ LInstruction* LChunkBuilder::DoLoadFieldByIndex(HLoadFieldByIndex* instr) { } +LInstruction* LChunkBuilder::DoStoreFrameContext(HStoreFrameContext* instr) { + LOperand* context = UseRegisterAtStart(instr->context()); + return new(zone()) LStoreFrameContext(context); +} + + +LInstruction* LChunkBuilder::DoAllocateBlockContext( + HAllocateBlockContext* instr) { + LOperand* context = UseFixed(instr->context(), rsi); + LOperand* function = UseRegisterAtStart(instr->function()); + 
LAllocateBlockContext* result = + new(zone()) LAllocateBlockContext(context, function); + return MarkAsCall(DefineFixed(result, rsi), instr); +} + + } } // namespace v8::internal #endif // V8_TARGET_ARCH_X64 diff --git a/deps/v8/src/x64/lithium-x64.h b/deps/v8/src/x64/lithium-x64.h index 093b95b4dd0..a1c563f8825 100644 --- a/deps/v8/src/x64/lithium-x64.h +++ b/deps/v8/src/x64/lithium-x64.h @@ -5,11 +5,11 @@ #ifndef V8_X64_LITHIUM_X64_H_ #define V8_X64_LITHIUM_X64_H_ -#include "hydrogen.h" -#include "lithium-allocator.h" -#include "lithium.h" -#include "safepoint-table.h" -#include "utils.h" +#include "src/hydrogen.h" +#include "src/lithium.h" +#include "src/lithium-allocator.h" +#include "src/safepoint-table.h" +#include "src/utils.h" namespace v8 { namespace internal { @@ -17,145 +17,148 @@ namespace internal { // Forward declarations. class LCodeGen; -#define LITHIUM_CONCRETE_INSTRUCTION_LIST(V) \ - V(AccessArgumentsAt) \ - V(AddI) \ - V(Allocate) \ - V(ApplyArguments) \ - V(ArgumentsElements) \ - V(ArgumentsLength) \ - V(ArithmeticD) \ - V(ArithmeticT) \ - V(BitI) \ - V(BoundsCheck) \ - V(Branch) \ - V(CallJSFunction) \ - V(CallWithDescriptor) \ - V(CallFunction) \ - V(CallNew) \ - V(CallNewArray) \ - V(CallRuntime) \ - V(CallStub) \ - V(CheckInstanceType) \ - V(CheckMaps) \ - V(CheckMapValue) \ - V(CheckNonSmi) \ - V(CheckSmi) \ - V(CheckValue) \ - V(ClampDToUint8) \ - V(ClampIToUint8) \ - V(ClampTToUint8) \ - V(ClassOfTestAndBranch) \ - V(CompareMinusZeroAndBranch) \ - V(CompareNumericAndBranch) \ - V(CmpObjectEqAndBranch) \ - V(CmpHoleAndBranch) \ - V(CmpMapAndBranch) \ - V(CmpT) \ - V(ConstantD) \ - V(ConstantE) \ - V(ConstantI) \ - V(ConstantS) \ - V(ConstantT) \ - V(ConstructDouble) \ - V(Context) \ - V(DateField) \ - V(DebugBreak) \ - V(DeclareGlobals) \ - V(Deoptimize) \ - V(DivByConstI) \ - V(DivByPowerOf2I) \ - V(DivI) \ - V(DoubleBits) \ - V(DoubleToI) \ - V(DoubleToSmi) \ - V(Drop) \ - V(DummyUse) \ - V(Dummy) \ - V(FlooringDivByConstI) \ - V(FlooringDivByPowerOf2I) \ - V(FlooringDivI) \ - V(ForInCacheArray) \ - V(ForInPrepareMap) \ - V(FunctionLiteral) \ - V(GetCachedArrayIndex) \ - V(Goto) \ - V(HasCachedArrayIndexAndBranch) \ - V(HasInstanceTypeAndBranch) \ - V(InnerAllocatedObject) \ - V(InstanceOf) \ - V(InstanceOfKnownGlobal) \ - V(InstructionGap) \ - V(Integer32ToDouble) \ - V(InvokeFunction) \ - V(IsConstructCallAndBranch) \ - V(IsObjectAndBranch) \ - V(IsStringAndBranch) \ - V(IsSmiAndBranch) \ - V(IsUndetectableAndBranch) \ - V(Label) \ - V(LazyBailout) \ - V(LoadContextSlot) \ - V(LoadRoot) \ - V(LoadFieldByIndex) \ - V(LoadFunctionPrototype) \ - V(LoadGlobalCell) \ - V(LoadGlobalGeneric) \ - V(LoadKeyed) \ - V(LoadKeyedGeneric) \ - V(LoadNamedField) \ - V(LoadNamedGeneric) \ - V(MapEnumLength) \ - V(MathAbs) \ - V(MathClz32) \ - V(MathExp) \ - V(MathFloor) \ - V(MathLog) \ - V(MathMinMax) \ - V(MathPowHalf) \ - V(MathRound) \ - V(MathSqrt) \ - V(ModByConstI) \ - V(ModByPowerOf2I) \ - V(ModI) \ - V(MulI) \ - V(NumberTagD) \ - V(NumberTagI) \ - V(NumberTagU) \ - V(NumberUntagD) \ - V(OsrEntry) \ - V(Parameter) \ - V(Power) \ - V(PushArgument) \ - V(RegExpLiteral) \ - V(Return) \ - V(SeqStringGetChar) \ - V(SeqStringSetChar) \ - V(ShiftI) \ - V(SmiTag) \ - V(SmiUntag) \ - V(StackCheck) \ - V(StoreCodeEntry) \ - V(StoreContextSlot) \ - V(StoreGlobalCell) \ - V(StoreKeyed) \ - V(StoreKeyedGeneric) \ - V(StoreNamedField) \ - V(StoreNamedGeneric) \ - V(StringAdd) \ - V(StringCharCodeAt) \ - V(StringCharFromCode) \ - V(StringCompareAndBranch) \ - V(SubI) \ - V(TaggedToI) 
\ - V(ThisFunction) \ - V(ToFastProperties) \ - V(TransitionElementsKind) \ - V(TrapAllocationMemento) \ - V(Typeof) \ - V(TypeofIsAndBranch) \ - V(Uint32ToDouble) \ - V(UnknownOSRValue) \ +#define LITHIUM_CONCRETE_INSTRUCTION_LIST(V) \ + V(AccessArgumentsAt) \ + V(AddI) \ + V(Allocate) \ + V(AllocateBlockContext) \ + V(ApplyArguments) \ + V(ArgumentsElements) \ + V(ArgumentsLength) \ + V(ArithmeticD) \ + V(ArithmeticT) \ + V(BitI) \ + V(BoundsCheck) \ + V(Branch) \ + V(CallJSFunction) \ + V(CallWithDescriptor) \ + V(CallFunction) \ + V(CallNew) \ + V(CallNewArray) \ + V(CallRuntime) \ + V(CallStub) \ + V(CheckInstanceType) \ + V(CheckMaps) \ + V(CheckMapValue) \ + V(CheckNonSmi) \ + V(CheckSmi) \ + V(CheckValue) \ + V(ClampDToUint8) \ + V(ClampIToUint8) \ + V(ClampTToUint8) \ + V(ClassOfTestAndBranch) \ + V(CompareMinusZeroAndBranch) \ + V(CompareNumericAndBranch) \ + V(CmpObjectEqAndBranch) \ + V(CmpHoleAndBranch) \ + V(CmpMapAndBranch) \ + V(CmpT) \ + V(ConstantD) \ + V(ConstantE) \ + V(ConstantI) \ + V(ConstantS) \ + V(ConstantT) \ + V(ConstructDouble) \ + V(Context) \ + V(DateField) \ + V(DebugBreak) \ + V(DeclareGlobals) \ + V(Deoptimize) \ + V(DivByConstI) \ + V(DivByPowerOf2I) \ + V(DivI) \ + V(DoubleBits) \ + V(DoubleToI) \ + V(DoubleToSmi) \ + V(Drop) \ + V(DummyUse) \ + V(Dummy) \ + V(FlooringDivByConstI) \ + V(FlooringDivByPowerOf2I) \ + V(FlooringDivI) \ + V(ForInCacheArray) \ + V(ForInPrepareMap) \ + V(FunctionLiteral) \ + V(GetCachedArrayIndex) \ + V(Goto) \ + V(HasCachedArrayIndexAndBranch) \ + V(HasInstanceTypeAndBranch) \ + V(InnerAllocatedObject) \ + V(InstanceOf) \ + V(InstanceOfKnownGlobal) \ + V(InstructionGap) \ + V(Integer32ToDouble) \ + V(InvokeFunction) \ + V(IsConstructCallAndBranch) \ + V(IsObjectAndBranch) \ + V(IsStringAndBranch) \ + V(IsSmiAndBranch) \ + V(IsUndetectableAndBranch) \ + V(Label) \ + V(LazyBailout) \ + V(LoadContextSlot) \ + V(LoadRoot) \ + V(LoadFieldByIndex) \ + V(LoadFunctionPrototype) \ + V(LoadGlobalCell) \ + V(LoadGlobalGeneric) \ + V(LoadKeyed) \ + V(LoadKeyedGeneric) \ + V(LoadNamedField) \ + V(LoadNamedGeneric) \ + V(MapEnumLength) \ + V(MathAbs) \ + V(MathClz32) \ + V(MathExp) \ + V(MathFloor) \ + V(MathFround) \ + V(MathLog) \ + V(MathMinMax) \ + V(MathPowHalf) \ + V(MathRound) \ + V(MathSqrt) \ + V(ModByConstI) \ + V(ModByPowerOf2I) \ + V(ModI) \ + V(MulI) \ + V(NumberTagD) \ + V(NumberTagI) \ + V(NumberTagU) \ + V(NumberUntagD) \ + V(OsrEntry) \ + V(Parameter) \ + V(Power) \ + V(PushArgument) \ + V(RegExpLiteral) \ + V(Return) \ + V(SeqStringGetChar) \ + V(SeqStringSetChar) \ + V(ShiftI) \ + V(SmiTag) \ + V(SmiUntag) \ + V(StackCheck) \ + V(StoreCodeEntry) \ + V(StoreContextSlot) \ + V(StoreFrameContext) \ + V(StoreGlobalCell) \ + V(StoreKeyed) \ + V(StoreKeyedGeneric) \ + V(StoreNamedField) \ + V(StoreNamedGeneric) \ + V(StringAdd) \ + V(StringCharCodeAt) \ + V(StringCharFromCode) \ + V(StringCompareAndBranch) \ + V(SubI) \ + V(TaggedToI) \ + V(ThisFunction) \ + V(ToFastProperties) \ + V(TransitionElementsKind) \ + V(TrapAllocationMemento) \ + V(Typeof) \ + V(TypeofIsAndBranch) \ + V(Uint32ToDouble) \ + V(UnknownOSRValue) \ V(WrapReceiver) @@ -168,7 +171,7 @@ class LCodeGen; return mnemonic; \ } \ static L##type* cast(LInstruction* instr) { \ - ASSERT(instr->Is##type()); \ + DCHECK(instr->Is##type()); \ return reinterpret_cast<L##type*>(instr); \ } @@ -217,6 +220,9 @@ class LInstruction : public ZoneObject { virtual bool IsControl() const { return false; } + // Try deleting this instruction if possible. 
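
// The ASSERT -> DCHECK rename running through these hunks is mechanical:
// DCHECK is V8's debug-only check, compiled out of release builds. A
// simplified model of its semantics (the real definition also reports the
// source location and lives in V8's checks/logging headers):
#include <cstdio>
#include <cstdlib>
#ifdef DEBUG
#define DCHECK(condition)                                         \
  do {                                                             \
    if (!(condition)) {                                            \
      std::fprintf(stderr, "Check failed: %s\n", #condition);      \
      std::abort();                                                \
    }                                                              \
  } while (false)
#else
#define DCHECK(condition) ((void)0)
#endif
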
+ virtual bool TryDelete() { return false; } + void set_environment(LEnvironment* env) { environment_ = env; } LEnvironment* environment() const { return environment_; } bool HasEnvironment() const { return environment_ != NULL; } @@ -259,11 +265,12 @@ class LInstruction : public ZoneObject { void VerifyCall(); #endif + virtual int InputCount() = 0; + virtual LOperand* InputAt(int i) = 0; + private: // Iterator support. friend class InputIterator; - virtual int InputCount() = 0; - virtual LOperand* InputAt(int i) = 0; friend class TempIterator; virtual int TempCount() = 0; @@ -331,7 +338,7 @@ class LGap : public LTemplateInstruction<0, 0, 0> { virtual bool IsGap() const V8_FINAL V8_OVERRIDE { return true; } virtual void PrintDataTo(StringStream* stream) V8_OVERRIDE; static LGap* cast(LInstruction* instr) { - ASSERT(instr->IsGap()); + DCHECK(instr->IsGap()); return reinterpret_cast<LGap*>(instr); } @@ -412,7 +419,7 @@ class LLazyBailout V8_FINAL : public LTemplateInstruction<0, 0, 0> { class LDummy V8_FINAL : public LTemplateInstruction<1, 0, 0> { public: - explicit LDummy() { } + LDummy() {} DECLARE_CONCRETE_INSTRUCTION(Dummy, "dummy") }; @@ -428,6 +435,7 @@ class LDummyUse V8_FINAL : public LTemplateInstruction<1, 1, 0> { class LDeoptimize V8_FINAL : public LTemplateInstruction<0, 0, 0> { public: + virtual bool IsControl() const V8_OVERRIDE { return true; } DECLARE_CONCRETE_INSTRUCTION(Deoptimize, "deoptimize") DECLARE_HYDROGEN_ACCESSOR(Deoptimize) }; @@ -844,7 +852,7 @@ class LMathFloor V8_FINAL : public LTemplateInstruction<1, 1, 0> { class LMathRound V8_FINAL : public LTemplateInstruction<1, 1, 1> { public: - explicit LMathRound(LOperand* value, LOperand* temp) { + LMathRound(LOperand* value, LOperand* temp) { inputs_[0] = value; temps_[0] = temp; } @@ -857,6 +865,16 @@ class LMathRound V8_FINAL : public LTemplateInstruction<1, 1, 1> { }; +class LMathFround V8_FINAL : public LTemplateInstruction<1, 1, 0> { + public: + explicit LMathFround(LOperand* value) { inputs_[0] = value; } + + LOperand* value() { return inputs_[0]; } + + DECLARE_CONCRETE_INSTRUCTION(MathFround, "math-fround") +}; + + class LMathAbs V8_FINAL : public LTemplateInstruction<1, 2, 0> { public: explicit LMathAbs(LOperand* context, LOperand* value) { @@ -1544,7 +1562,7 @@ class LReturn V8_FINAL : public LTemplateInstruction<0, 3, 0> { return parameter_count()->IsConstantOperand(); } LConstantOperand* constant_parameter_count() { - ASSERT(has_constant_parameter_count()); + DCHECK(has_constant_parameter_count()); return LConstantOperand::cast(parameter_count()); } LOperand* parameter_count() { return inputs_[2]; } @@ -1567,11 +1585,13 @@ class LLoadNamedField V8_FINAL : public LTemplateInstruction<1, 1, 0> { }; -class LLoadNamedGeneric V8_FINAL : public LTemplateInstruction<1, 2, 0> { +class LLoadNamedGeneric V8_FINAL : public LTemplateInstruction<1, 2, 1> { public: - explicit LLoadNamedGeneric(LOperand* context, LOperand* object) { + explicit LLoadNamedGeneric(LOperand* context, LOperand* object, + LOperand* vector) { inputs_[0] = context; inputs_[1] = object; + temps_[0] = vector; } DECLARE_CONCRETE_INSTRUCTION(LoadNamedGeneric, "load-named-generic") @@ -1579,6 +1599,8 @@ class LLoadNamedGeneric V8_FINAL : public LTemplateInstruction<1, 2, 0> { LOperand* context() { return inputs_[0]; } LOperand* object() { return inputs_[1]; } + LOperand* temp_vector() { return temps_[0]; } + Handle<Object> name() const { return hydrogen()->name(); } }; @@ -1605,6 +1627,22 @@ class LLoadRoot V8_FINAL : public LTemplateInstruction<1, 0, 
0> { }; +inline static bool ExternalArrayOpRequiresTemp( + Representation key_representation, + ElementsKind elements_kind) { + // Operations that require the key to be divided by two to be converted into + // an index cannot fold the scale operation into a load and need an extra + // temp register to do the work. + return SmiValuesAre31Bits() && key_representation.IsSmi() && + (elements_kind == EXTERNAL_INT8_ELEMENTS || + elements_kind == EXTERNAL_UINT8_ELEMENTS || + elements_kind == EXTERNAL_UINT8_CLAMPED_ELEMENTS || + elements_kind == UINT8_ELEMENTS || + elements_kind == INT8_ELEMENTS || + elements_kind == UINT8_CLAMPED_ELEMENTS); +} + + class LLoadKeyed V8_FINAL : public LTemplateInstruction<1, 2, 0> { public: LLoadKeyed(LOperand* elements, LOperand* key) { @@ -1627,26 +1665,30 @@ class LLoadKeyed V8_FINAL : public LTemplateInstruction<1, 2, 0> { LOperand* elements() { return inputs_[0]; } LOperand* key() { return inputs_[1]; } virtual void PrintDataTo(StringStream* stream) V8_OVERRIDE; - uint32_t additional_index() const { return hydrogen()->index_offset(); } + uint32_t base_offset() const { return hydrogen()->base_offset(); } ElementsKind elements_kind() const { return hydrogen()->elements_kind(); } }; -class LLoadKeyedGeneric V8_FINAL : public LTemplateInstruction<1, 3, 0> { +class LLoadKeyedGeneric V8_FINAL : public LTemplateInstruction<1, 3, 1> { public: - LLoadKeyedGeneric(LOperand* context, LOperand* obj, LOperand* key) { + LLoadKeyedGeneric(LOperand* context, LOperand* obj, LOperand* key, + LOperand* vector) { inputs_[0] = context; inputs_[1] = obj; inputs_[2] = key; + temps_[0] = vector; } DECLARE_CONCRETE_INSTRUCTION(LoadKeyedGeneric, "load-keyed-generic") + DECLARE_HYDROGEN_ACCESSOR(LoadKeyedGeneric) LOperand* context() { return inputs_[0]; } LOperand* object() { return inputs_[1]; } LOperand* key() { return inputs_[2]; } + LOperand* temp_vector() { return temps_[0]; } }; @@ -1657,11 +1699,13 @@ class LLoadGlobalCell V8_FINAL : public LTemplateInstruction<1, 0, 0> { }; -class LLoadGlobalGeneric V8_FINAL : public LTemplateInstruction<1, 2, 0> { +class LLoadGlobalGeneric V8_FINAL : public LTemplateInstruction<1, 2, 1> { public: - explicit LLoadGlobalGeneric(LOperand* context, LOperand* global_object) { + explicit LLoadGlobalGeneric(LOperand* context, LOperand* global_object, + LOperand* vector) { inputs_[0] = context; inputs_[1] = global_object; + temps_[0] = vector; } DECLARE_CONCRETE_INSTRUCTION(LoadGlobalGeneric, "load-global-generic") @@ -1669,6 +1713,8 @@ class LLoadGlobalGeneric V8_FINAL : public LTemplateInstruction<1, 2, 0> { LOperand* context() { return inputs_[0]; } LOperand* global_object() { return inputs_[1]; } + LOperand* temp_vector() { return temps_[0]; } + Handle<Object> name() const { return hydrogen()->name(); } bool for_typeof() const { return hydrogen()->for_typeof(); } }; @@ -1752,15 +1798,15 @@ class LDrop V8_FINAL : public LTemplateInstruction<0, 0, 0> { }; -class LStoreCodeEntry V8_FINAL: public LTemplateInstruction<0, 1, 1> { +class LStoreCodeEntry V8_FINAL: public LTemplateInstruction<0, 2, 0> { public: LStoreCodeEntry(LOperand* function, LOperand* code_object) { inputs_[0] = function; - temps_[0] = code_object; + inputs_[1] = code_object; } LOperand* function() { return inputs_[0]; } - LOperand* code_object() { return temps_[0]; } + LOperand* code_object() { return inputs_[1]; } virtual void PrintDataTo(StringStream* stream); @@ -1831,11 +1877,11 @@ class LCallJSFunction V8_FINAL : public LTemplateInstruction<1, 1, 0> { class LCallWithDescriptor 
V8_FINAL : public LTemplateResultInstruction<1> { public: - LCallWithDescriptor(const CallInterfaceDescriptor* descriptor, - ZoneList<LOperand*>& operands, + LCallWithDescriptor(const InterfaceDescriptor* descriptor, + const ZoneList<LOperand*>& operands, Zone* zone) - : inputs_(descriptor->environment_length() + 1, zone) { - ASSERT(descriptor->environment_length() + 1 == operands.length()); + : inputs_(descriptor->GetRegisterParameterCount() + 1, zone) { + DCHECK(descriptor->GetRegisterParameterCount() + 1 == operands.length()); inputs_.AddAll(operands, zone); } @@ -1966,27 +2012,29 @@ class LInteger32ToDouble V8_FINAL : public LTemplateInstruction<1, 1, 0> { }; -class LUint32ToDouble V8_FINAL : public LTemplateInstruction<1, 1, 1> { +class LUint32ToDouble V8_FINAL : public LTemplateInstruction<1, 1, 0> { public: - explicit LUint32ToDouble(LOperand* value, LOperand* temp) { + explicit LUint32ToDouble(LOperand* value) { inputs_[0] = value; - temps_[0] = temp; } LOperand* value() { return inputs_[0]; } - LOperand* temp() { return temps_[0]; } DECLARE_CONCRETE_INSTRUCTION(Uint32ToDouble, "uint32-to-double") }; -class LNumberTagI V8_FINAL : public LTemplateInstruction<1, 1, 0> { +class LNumberTagI V8_FINAL : public LTemplateInstruction<1, 1, 2> { public: - explicit LNumberTagI(LOperand* value) { + LNumberTagI(LOperand* value, LOperand* temp1, LOperand* temp2) { inputs_[0] = value; + temps_[0] = temp1; + temps_[1] = temp2; } LOperand* value() { return inputs_[0]; } + LOperand* temp1() { return temps_[0]; } + LOperand* temp2() { return temps_[1]; } DECLARE_CONCRETE_INSTRUCTION(NumberTagI, "number-tag-i") }; @@ -2183,7 +2231,7 @@ class LStoreKeyed V8_FINAL : public LTemplateInstruction<0, 3, 0> { virtual void PrintDataTo(StringStream* stream) V8_OVERRIDE; bool NeedsCanonicalization() { return hydrogen()->NeedsCanonicalization(); } - uint32_t additional_index() const { return hydrogen()->index_offset(); } + uint32_t base_offset() const { return hydrogen()->base_offset(); } }; @@ -2628,6 +2676,35 @@ class LLoadFieldByIndex V8_FINAL : public LTemplateInstruction<1, 2, 0> { }; +class LStoreFrameContext: public LTemplateInstruction<0, 1, 0> { + public: + explicit LStoreFrameContext(LOperand* context) { + inputs_[0] = context; + } + + LOperand* context() { return inputs_[0]; } + + DECLARE_CONCRETE_INSTRUCTION(StoreFrameContext, "store-frame-context") +}; + + +class LAllocateBlockContext: public LTemplateInstruction<1, 2, 0> { + public: + LAllocateBlockContext(LOperand* context, LOperand* function) { + inputs_[0] = context; + inputs_[1] = function; + } + + LOperand* context() { return inputs_[0]; } + LOperand* function() { return inputs_[1]; } + + Handle<ScopeInfo> scope_info() { return hydrogen()->scope_info(); } + + DECLARE_CONCRETE_INSTRUCTION(AllocateBlockContext, "allocate-block-context") + DECLARE_HYDROGEN_ACCESSOR(AllocateBlockContext) +}; + + class LChunkBuilder; class LPlatformChunk V8_FINAL : public LChunk { public: @@ -2665,8 +2742,6 @@ class LChunkBuilder V8_FINAL : public LChunkBuilderBase { // Build the sequence for the graph. LPlatformChunk* Build(); - LInstruction* CheckElideControlInstruction(HControlInstruction* instr); - // Declare methods that deal with the individual node types. 
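// A minimal sketch of what the DECLARE_DO X-macro below expands to; DoAdd
// and DoBranch are illustrative picks, the real set comes from
// HYDROGEN_CONCRETE_INSTRUCTION_LIST, which is defined elsewhere:
//
//   #define DECLARE_DO(type) LInstruction* Do##type(H##type* node);
//   HYDROGEN_CONCRETE_INSTRUCTION_LIST(DECLARE_DO)
//   // ...yields one declaration per hydrogen instruction, e.g.:
//   //   LInstruction* DoAdd(HAdd* node);
//   //   LInstruction* DoBranch(HBranch* node);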
#define DECLARE_DO(type) LInstruction* Do##type(H##type* node); HYDROGEN_CONCRETE_INSTRUCTION_LIST(DECLARE_DO) @@ -2674,6 +2749,7 @@ class LChunkBuilder V8_FINAL : public LChunkBuilderBase { LInstruction* DoMathFloor(HUnaryMathOperation* instr); LInstruction* DoMathRound(HUnaryMathOperation* instr); + LInstruction* DoMathFround(HUnaryMathOperation* instr); LInstruction* DoMathAbs(HUnaryMathOperation* instr); LInstruction* DoMathLog(HUnaryMathOperation* instr); LInstruction* DoMathExp(HUnaryMathOperation* instr); @@ -2790,6 +2866,7 @@ class LChunkBuilder V8_FINAL : public LChunkBuilderBase { CanDeoptimize can_deoptimize = CANNOT_DEOPTIMIZE_EAGERLY); void VisitInstruction(HInstruction* current); + void AddInstruction(LInstruction* instr, HInstruction* current); void DoBasicBlock(HBasicBlock* block, HBasicBlock* next_block); LInstruction* DoShift(Token::Value op, HBitwiseBinaryOperation* instr); diff --git a/deps/v8/src/x64/macro-assembler-x64.cc b/deps/v8/src/x64/macro-assembler-x64.cc index 832cf52c663..7a37fb3e3a3 100644 --- a/deps/v8/src/x64/macro-assembler-x64.cc +++ b/deps/v8/src/x64/macro-assembler-x64.cc @@ -2,19 +2,19 @@ // Use of this source code is governed by a BSD-style license that can be // found in the LICENSE file. -#include "v8.h" +#include "src/v8.h" #if V8_TARGET_ARCH_X64 -#include "bootstrapper.h" -#include "codegen.h" -#include "cpu-profiler.h" -#include "assembler-x64.h" -#include "macro-assembler-x64.h" -#include "serialize.h" -#include "debug.h" -#include "heap.h" -#include "isolate-inl.h" +#include "src/bootstrapper.h" +#include "src/codegen.h" +#include "src/cpu-profiler.h" +#include "src/debug.h" +#include "src/heap/heap.h" +#include "src/isolate-inl.h" +#include "src/serialize.h" +#include "src/x64/assembler-x64.h" +#include "src/x64/macro-assembler-x64.h" namespace v8 { namespace internal { @@ -60,7 +60,7 @@ int64_t MacroAssembler::RootRegisterDelta(ExternalReference other) { Operand MacroAssembler::ExternalOperand(ExternalReference target, Register scratch) { - if (root_array_available_ && !Serializer::enabled(isolate())) { + if (root_array_available_ && !serializer_enabled()) { int64_t delta = RootRegisterDelta(target); if (delta != kInvalidRootRegisterDelta && is_int32(delta)) { return Operand(kRootRegister, static_cast<int32_t>(delta)); @@ -72,7 +72,7 @@ Operand MacroAssembler::ExternalOperand(ExternalReference target, void MacroAssembler::Load(Register destination, ExternalReference source) { - if (root_array_available_ && !Serializer::enabled(isolate())) { + if (root_array_available_ && !serializer_enabled()) { int64_t delta = RootRegisterDelta(source); if (delta != kInvalidRootRegisterDelta && is_int32(delta)) { movp(destination, Operand(kRootRegister, static_cast<int32_t>(delta))); @@ -90,7 +90,7 @@ void MacroAssembler::Load(Register destination, ExternalReference source) { void MacroAssembler::Store(ExternalReference destination, Register source) { - if (root_array_available_ && !Serializer::enabled(isolate())) { + if (root_array_available_ && !serializer_enabled()) { int64_t delta = RootRegisterDelta(destination); if (delta != kInvalidRootRegisterDelta && is_int32(delta)) { movp(Operand(kRootRegister, static_cast<int32_t>(delta)), source); @@ -109,7 +109,7 @@ void MacroAssembler::Store(ExternalReference destination, Register source) { void MacroAssembler::LoadAddress(Register destination, ExternalReference source) { - if (root_array_available_ && !Serializer::enabled(isolate())) { + if (root_array_available_ && !serializer_enabled()) { int64_t delta = 
RootRegisterDelta(source); if (delta != kInvalidRootRegisterDelta && is_int32(delta)) { leap(destination, Operand(kRootRegister, static_cast<int32_t>(delta))); @@ -122,7 +122,7 @@ void MacroAssembler::LoadAddress(Register destination, int MacroAssembler::LoadAddressSize(ExternalReference source) { - if (root_array_available_ && !Serializer::enabled(isolate())) { + if (root_array_available_ && !serializer_enabled()) { // This calculation depends on the internals of LoadAddress. // Its correctness is ensured by the asserts in the Call // instruction below. @@ -144,7 +144,7 @@ int MacroAssembler::LoadAddressSize(ExternalReference source) { void MacroAssembler::PushAddress(ExternalReference source) { int64_t address = reinterpret_cast<int64_t>(source.address()); - if (is_int32(address) && !Serializer::enabled(isolate())) { + if (is_int32(address) && !serializer_enabled()) { if (emit_debug_code()) { Move(kScratchRegister, kZapValue, Assembler::RelocInfoNone()); } @@ -157,7 +157,7 @@ void MacroAssembler::PushAddress(ExternalReference source) { void MacroAssembler::LoadRoot(Register destination, Heap::RootListIndex index) { - ASSERT(root_array_available_); + DCHECK(root_array_available_); movp(destination, Operand(kRootRegister, (index << kPointerSizeLog2) - kRootRegisterBias)); } @@ -166,7 +166,7 @@ void MacroAssembler::LoadRoot(Register destination, Heap::RootListIndex index) { void MacroAssembler::LoadRootIndexed(Register destination, Register variable_offset, int fixed_offset) { - ASSERT(root_array_available_); + DCHECK(root_array_available_); movp(destination, Operand(kRootRegister, variable_offset, times_pointer_size, @@ -175,20 +175,20 @@ void MacroAssembler::LoadRootIndexed(Register destination, void MacroAssembler::StoreRoot(Register source, Heap::RootListIndex index) { - ASSERT(root_array_available_); + DCHECK(root_array_available_); movp(Operand(kRootRegister, (index << kPointerSizeLog2) - kRootRegisterBias), source); } void MacroAssembler::PushRoot(Heap::RootListIndex index) { - ASSERT(root_array_available_); + DCHECK(root_array_available_); Push(Operand(kRootRegister, (index << kPointerSizeLog2) - kRootRegisterBias)); } void MacroAssembler::CompareRoot(Register with, Heap::RootListIndex index) { - ASSERT(root_array_available_); + DCHECK(root_array_available_); cmpp(with, Operand(kRootRegister, (index << kPointerSizeLog2) - kRootRegisterBias)); } @@ -196,8 +196,8 @@ void MacroAssembler::CompareRoot(Register with, Heap::RootListIndex index) { void MacroAssembler::CompareRoot(const Operand& with, Heap::RootListIndex index) { - ASSERT(root_array_available_); - ASSERT(!with.AddressUsesRegister(kScratchRegister)); + DCHECK(root_array_available_); + DCHECK(!with.AddressUsesRegister(kScratchRegister)); LoadRoot(kScratchRegister, index); cmpp(with, kScratchRegister); } @@ -232,7 +232,7 @@ void MacroAssembler::RememberedSetHelper(Register object, // For debug tests. ret(0); bind(&buffer_overflowed); } else { - ASSERT(and_then == kFallThroughAtEnd); + DCHECK(and_then == kFallThroughAtEnd); j(equal, &done, Label::kNear); } StoreBufferOverflowStub store_buffer_overflow = @@ -241,7 +241,7 @@ void MacroAssembler::RememberedSetHelper(Register object, // For debug tests.
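// The ASSERT -> DCHECK renaming applied throughout this file follows the
// Chromium logging convention; a simplified sketch of the assumed
// semantics (the real macros also capture file and line information):
//
//   #ifdef DEBUG
//   #define DCHECK(condition) CHECK(condition)  // fatal on failure
//   #else
//   #define DCHECK(condition) ((void) 0)        // compiled out in release
//   #endif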
if (and_then == kReturnAtEnd) { ret(0); } else { - ASSERT(and_then == kFallThroughAtEnd); + DCHECK(and_then == kFallThroughAtEnd); bind(&done); } } @@ -252,7 +252,7 @@ void MacroAssembler::InNewSpace(Register object, Condition cc, Label* branch, Label::Distance distance) { - if (Serializer::enabled(isolate())) { + if (serializer_enabled()) { // Can't do arithmetic on external references if it might get serialized. // The mask isn't really an address. We load it as an external reference in // case the size of the new space is different between the snapshot maker @@ -268,7 +268,9 @@ void MacroAssembler::InNewSpace(Register object, cmpp(scratch, kScratchRegister); j(cc, branch, distance); } else { - ASSERT(is_int32(static_cast<int64_t>(isolate()->heap()->NewSpaceMask()))); + DCHECK(kPointerSize == kInt64Size + ? is_int32(static_cast<int64_t>(isolate()->heap()->NewSpaceMask())) + : kPointerSize == kInt32Size); intptr_t new_space_start = reinterpret_cast<intptr_t>(isolate()->heap()->NewSpaceStart()); Move(kScratchRegister, reinterpret_cast<Address>(-new_space_start), @@ -292,7 +294,8 @@ void MacroAssembler::RecordWriteField( Register dst, SaveFPRegsMode save_fp, RememberedSetAction remembered_set_action, - SmiCheck smi_check) { + SmiCheck smi_check, + PointersToHereCheck pointers_to_here_check_for_value) { // First, check if a write barrier is even needed. The tests below // catch stores of Smis. Label done; @@ -304,7 +307,7 @@ // Although the object register is tagged, the offset is relative to the start // of the object, so the offset must be a multiple of kPointerSize. - ASSERT(IsAligned(offset, kPointerSize)); + DCHECK(IsAligned(offset, kPointerSize)); leap(dst, FieldOperand(object, offset)); if (emit_debug_code()) { @@ -315,8 +318,8 @@ bind(&ok); } - RecordWrite( - object, dst, value, save_fp, remembered_set_action, OMIT_SMI_CHECK); + RecordWrite(object, dst, value, save_fp, remembered_set_action, + OMIT_SMI_CHECK, pointers_to_here_check_for_value); bind(&done); @@ -329,12 +332,14 @@ } -void MacroAssembler::RecordWriteArray(Register object, - Register value, - Register index, - SaveFPRegsMode save_fp, - RememberedSetAction remembered_set_action, - SmiCheck smi_check) { +void MacroAssembler::RecordWriteArray( + Register object, + Register value, + Register index, + SaveFPRegsMode save_fp, + RememberedSetAction remembered_set_action, + SmiCheck smi_check, + PointersToHereCheck pointers_to_here_check_for_value) { // First, check if a write barrier is even needed. The tests below // catch stores of Smis.
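// Why a store of a smi needs no barrier: the smi is encoded in the stored
// word itself (low tag bit 0), so it never references the heap and the GC
// has nothing to track. The smi test these functions rely on (JumpIfSmi)
// boils down to roughly:
//
//   testb(value, Immediate(kSmiTagMask));  // low bit clear => smi
//   j(zero, &done);                        // not a heap pointer, skip barrier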
Label done; @@ -349,8 +354,8 @@ void MacroAssembler::RecordWriteArray(Register object, leap(dst, Operand(object, index, times_pointer_size, FixedArray::kHeaderSize - kHeapObjectTag)); - RecordWrite( - object, dst, value, save_fp, remembered_set_action, OMIT_SMI_CHECK); + RecordWrite(object, dst, value, save_fp, remembered_set_action, + OMIT_SMI_CHECK, pointers_to_here_check_for_value); bind(&done); @@ -363,34 +368,103 @@ void MacroAssembler::RecordWriteArray(Register object, } -void MacroAssembler::RecordWrite(Register object, - Register address, - Register value, - SaveFPRegsMode fp_mode, - RememberedSetAction remembered_set_action, - SmiCheck smi_check) { - ASSERT(!object.is(value)); - ASSERT(!object.is(address)); - ASSERT(!value.is(address)); +void MacroAssembler::RecordWriteForMap(Register object, + Register map, + Register dst, + SaveFPRegsMode fp_mode) { + DCHECK(!object.is(kScratchRegister)); + DCHECK(!object.is(map)); + DCHECK(!object.is(dst)); + DCHECK(!map.is(dst)); AssertNotSmi(object); - if (remembered_set_action == OMIT_REMEMBERED_SET && - !FLAG_incremental_marking) { + if (emit_debug_code()) { + Label ok; + if (map.is(kScratchRegister)) pushq(map); + CompareMap(map, isolate()->factory()->meta_map()); + if (map.is(kScratchRegister)) popq(map); + j(equal, &ok, Label::kNear); + int3(); + bind(&ok); + } + + if (!FLAG_incremental_marking) { return; } if (emit_debug_code()) { Label ok; - cmpp(value, Operand(address, 0)); + if (map.is(kScratchRegister)) pushq(map); + cmpp(map, FieldOperand(object, HeapObject::kMapOffset)); + if (map.is(kScratchRegister)) popq(map); j(equal, &ok, Label::kNear); int3(); bind(&ok); } + // Compute the address. + leap(dst, FieldOperand(object, HeapObject::kMapOffset)); + + // First, check if a write barrier is even needed. The tests below + // catch stores of smis and stores into the young generation. + Label done; + + // A single check of the map's pages interesting flag suffices, since it is + // only set during incremental collection, and then it's also guaranteed that + // the from object's page's interesting flag is also set. This optimization + // relies on the fact that maps can never be in new space. + CheckPageFlag(map, + map, // Used as scratch. + MemoryChunk::kPointersToHereAreInterestingMask, + zero, + &done, + Label::kNear); + + RecordWriteStub stub(isolate(), object, map, dst, OMIT_REMEMBERED_SET, + fp_mode); + CallStub(&stub); + + bind(&done); + // Count number of write barriers in generated code. isolate()->counters()->write_barriers_static()->Increment(); IncrementCounter(isolate()->counters()->write_barriers_dynamic(), 1); + // Clobber clobbered registers when running with the debug-code flag + // turned on to provoke errors. + if (emit_debug_code()) { + Move(dst, kZapValue, Assembler::RelocInfoNone()); + Move(map, kZapValue, Assembler::RelocInfoNone()); + } +} + + +void MacroAssembler::RecordWrite( + Register object, + Register address, + Register value, + SaveFPRegsMode fp_mode, + RememberedSetAction remembered_set_action, + SmiCheck smi_check, + PointersToHereCheck pointers_to_here_check_for_value) { + DCHECK(!object.is(value)); + DCHECK(!object.is(address)); + DCHECK(!value.is(address)); + AssertNotSmi(object); + + if (remembered_set_action == OMIT_REMEMBERED_SET && + !FLAG_incremental_marking) { + return; + } + + if (emit_debug_code()) { + Label ok; + cmpp(value, Operand(address, 0)); + j(equal, &ok, Label::kNear); + int3(); + bind(&ok); + } + // First, check if a write barrier is even needed. 
The tests below // catch stores of smis and stores into the young generation. Label done; @@ -400,12 +474,14 @@ void MacroAssembler::RecordWrite(Register object, JumpIfSmi(value, &done); } - CheckPageFlag(value, - value, // Used as scratch. - MemoryChunk::kPointersToHereAreInterestingMask, - zero, - &done, - Label::kNear); + if (pointers_to_here_check_for_value != kPointersToHereAreAlwaysInteresting) { + CheckPageFlag(value, + value, // Used as scratch. + MemoryChunk::kPointersToHereAreInterestingMask, + zero, + &done, + Label::kNear); + } CheckPageFlag(object, value, // Used as scratch. @@ -420,6 +496,10 @@ void MacroAssembler::RecordWrite(Register object, bind(&done); + // Count number of write barriers in generated code. + isolate()->counters()->write_barriers_static()->Increment(); + IncrementCounter(isolate()->counters()->write_barriers_dynamic(), 1); + // Clobber clobbered registers when running with the debug-code flag // turned on to provoke errors. if (emit_debug_code()) { @@ -462,10 +542,10 @@ void MacroAssembler::Check(Condition cc, BailoutReason reason) { void MacroAssembler::CheckStackAlignment() { - int frame_alignment = OS::ActivationFrameAlignment(); + int frame_alignment = base::OS::ActivationFrameAlignment(); int frame_alignment_mask = frame_alignment - 1; if (frame_alignment > kPointerSize) { - ASSERT(IsPowerOf2(frame_alignment)); + DCHECK(IsPowerOf2(frame_alignment)); Label alignment_as_expected; testp(rsp, Immediate(frame_alignment_mask)); j(zero, &alignment_as_expected, Label::kNear); @@ -502,7 +582,6 @@ void MacroAssembler::Abort(BailoutReason reason) { } #endif - Push(rax); Move(kScratchRegister, Smi::FromInt(static_cast<int>(reason)), Assembler::RelocInfoNone()); Push(kScratchRegister); @@ -521,7 +600,7 @@ void MacroAssembler::Abort(BailoutReason reason) { void MacroAssembler::CallStub(CodeStub* stub, TypeFeedbackId ast_id) { - ASSERT(AllowThisStubCall(stub)); // Calls are not allowed in some stubs + DCHECK(AllowThisStubCall(stub)); // Calls are not allowed in some stubs Call(stub->GetCode(), RelocInfo::CODE_TARGET, ast_id); } @@ -532,7 +611,7 @@ void MacroAssembler::TailCallStub(CodeStub* stub) { void MacroAssembler::StubReturn(int argc) { - ASSERT(argc >= 1 && generating_stub()); + DCHECK(argc >= 1 && generating_stub()); ret((argc - 1) * kPointerSize); } @@ -546,18 +625,12 @@ void MacroAssembler::IndexFromHash(Register hash, Register index) { // The assert checks that the constants for the maximum number of digits // for an array index cached in the hash field and the number of bits // reserved for it does not conflict. - ASSERT(TenToThe(String::kMaxCachedArrayIndexLength) < + DCHECK(TenToThe(String::kMaxCachedArrayIndexLength) < (1 << String::kArrayIndexValueBits)); - // We want the smi-tagged index in key. Even if we subsequently go to - // the slow case, converting the key to a smi is always valid. - // key: string key - // hash: key's hash field, including its array index value. - andp(hash, Immediate(String::kArrayIndexValueMask)); - shrp(hash, Immediate(String::kHashShift)); - // Here we actually clobber the key which will be used if calling into - // runtime later. However as the new key is the numeric value of a string key - // there is no difference in using either key. 
- Integer32ToSmi(index, hash); + if (!hash.is(index)) { + movl(index, hash); + } + DecodeFieldToSmi<String::ArrayIndexValueBits>(index); } @@ -621,7 +694,7 @@ void MacroAssembler::TailCallRuntime(Runtime::FunctionId fid, static int Offset(ExternalReference ref0, ExternalReference ref1) { int64_t offset = (ref0.address() - ref1.address()); // Check that fits into int. - ASSERT(static_cast<int>(offset) == offset); + DCHECK(static_cast<int>(offset) == offset); return static_cast<int>(offset); } @@ -658,7 +731,7 @@ void MacroAssembler::CallApiFunctionAndReturn( ExternalReference scheduled_exception_address = ExternalReference::scheduled_exception_address(isolate()); - ASSERT(rdx.is(function_address) || r8.is(function_address)); + DCHECK(rdx.is(function_address) || r8.is(function_address)); // Allocate HandleScope in callee-save registers. Register prev_next_address_reg = r14; Register prev_limit_reg = rbx; @@ -770,7 +843,7 @@ void MacroAssembler::CallApiFunctionAndReturn( bind(&promote_scheduled_exception); { FrameScope frame(this, StackFrame::INTERNAL); - CallRuntime(Runtime::kHiddenPromoteScheduledException, 0); + CallRuntime(Runtime::kPromoteScheduledException, 0); } jmp(&exception_handled); @@ -800,7 +873,7 @@ void MacroAssembler::InvokeBuiltin(Builtins::JavaScript id, InvokeFlag flag, const CallWrapper& call_wrapper) { // You can't call a builtin without a valid frame. - ASSERT(flag == JUMP_FUNCTION || has_frame()); + DCHECK(flag == JUMP_FUNCTION || has_frame()); // Rely on the assertion to check that the number of provided // arguments match the expected number of arguments. Fake a @@ -822,7 +895,7 @@ void MacroAssembler::GetBuiltinFunction(Register target, void MacroAssembler::GetBuiltinEntry(Register target, Builtins::JavaScript id) { - ASSERT(!target.is(rdi)); + DCHECK(!target.is(rdi)); // Load the JavaScript builtin function from the builtins object. GetBuiltinFunction(rdi, id); movp(target, FieldOperand(rdi, JSFunction::kCodeEntryOffset)); @@ -898,7 +971,7 @@ void MacroAssembler::Cvtlsi2sd(XMMRegister dst, const Operand& src) { void MacroAssembler::Load(Register dst, const Operand& src, Representation r) { - ASSERT(!r.IsDouble()); + DCHECK(!r.IsDouble()); if (r.IsInteger8()) { movsxbq(dst, src); } else if (r.IsUInteger8()) { @@ -916,7 +989,7 @@ void MacroAssembler::Load(Register dst, const Operand& src, Representation r) { void MacroAssembler::Store(const Operand& dst, Register src, Representation r) { - ASSERT(!r.IsDouble()); + DCHECK(!r.IsDouble()); if (r.IsInteger8() || r.IsUInteger8()) { movb(dst, src); } else if (r.IsInteger16() || r.IsUInteger16()) { @@ -971,7 +1044,7 @@ bool MacroAssembler::IsUnsafeInt(const int32_t x) { void MacroAssembler::SafeMove(Register dst, Smi* src) { - ASSERT(!dst.is(kScratchRegister)); + DCHECK(!dst.is(kScratchRegister)); if (IsUnsafeInt(src->value()) && jit_cookie() != 0) { if (SmiValuesAre32Bits()) { // JIT cookie can be converted to Smi. 
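// The jit_cookie() masking above is a JIT-spray mitigation: a 32-bit
// immediate that an attacker can influence might encode executable payload
// bytes inside the generated code, so unsafe immediates are written to the
// instruction stream XOR-masked and unmasked at run time. A sketch with
// made-up values:
//
//   int32_t value  = 0x41414141;  // attacker-influenced payload
//   int32_t cookie = 0x5a5a5a5a;  // per-isolate random jit cookie
//   movp(dst, Immediate(value ^ cookie));  // stream holds 0x1b1b1b1b
//   xorp(dst, Immediate(cookie));          // dst == value again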
@@ -979,7 +1052,7 @@ void MacroAssembler::SafeMove(Register dst, Smi* src) { Move(kScratchRegister, Smi::FromInt(jit_cookie())); xorp(dst, kScratchRegister); } else { - ASSERT(SmiValuesAre31Bits()); + DCHECK(SmiValuesAre31Bits()); int32_t value = static_cast<int32_t>(reinterpret_cast<intptr_t>(src)); movp(dst, Immediate(value ^ jit_cookie())); xorp(dst, Immediate(jit_cookie())); @@ -998,7 +1071,7 @@ void MacroAssembler::SafePush(Smi* src) { Move(kScratchRegister, Smi::FromInt(jit_cookie())); xorp(Operand(rsp, 0), kScratchRegister); } else { - ASSERT(SmiValuesAre31Bits()); + DCHECK(SmiValuesAre31Bits()); int32_t value = static_cast<int32_t>(reinterpret_cast<intptr_t>(src)); Push(Immediate(value ^ jit_cookie())); xorp(Operand(rsp, 0), Immediate(jit_cookie())); @@ -1027,7 +1100,7 @@ void MacroAssembler::LoadSmiConstant(Register dst, Smi* source) { if (emit_debug_code()) { Move(dst, Smi::FromInt(kSmiConstantRegisterValue), Assembler::RelocInfoNone()); - cmpq(dst, kSmiConstantRegister); + cmpp(dst, kSmiConstantRegister); Assert(equal, kUninitializedKSmiConstantRegister); } int value = source->value(); @@ -1098,10 +1171,10 @@ void MacroAssembler::Integer32ToSmiField(const Operand& dst, Register src) { } if (SmiValuesAre32Bits()) { - ASSERT(kSmiShift % kBitsPerByte == 0); + DCHECK(kSmiShift % kBitsPerByte == 0); movl(Operand(dst, kSmiShift / kBitsPerByte), src); } else { - ASSERT(SmiValuesAre31Bits()); + DCHECK(SmiValuesAre31Bits()); Integer32ToSmi(kScratchRegister, src); movp(dst, kScratchRegister); } @@ -1129,7 +1202,7 @@ void MacroAssembler::SmiToInteger32(Register dst, Register src) { if (SmiValuesAre32Bits()) { shrp(dst, Immediate(kSmiShift)); } else { - ASSERT(SmiValuesAre31Bits()); + DCHECK(SmiValuesAre31Bits()); sarl(dst, Immediate(kSmiShift)); } } @@ -1139,7 +1212,7 @@ void MacroAssembler::SmiToInteger32(Register dst, const Operand& src) { if (SmiValuesAre32Bits()) { movl(dst, Operand(src, kSmiShift / kBitsPerByte)); } else { - ASSERT(SmiValuesAre31Bits()); + DCHECK(SmiValuesAre31Bits()); movl(dst, src); sarl(dst, Immediate(kSmiShift)); } @@ -1163,7 +1236,7 @@ void MacroAssembler::SmiToInteger64(Register dst, const Operand& src) { if (SmiValuesAre32Bits()) { movsxlq(dst, Operand(src, kSmiShift / kBitsPerByte)); } else { - ASSERT(SmiValuesAre31Bits()); + DCHECK(SmiValuesAre31Bits()); movp(dst, src); SmiToInteger64(dst, dst); } @@ -1190,7 +1263,7 @@ void MacroAssembler::SmiCompare(Register dst, Smi* src) { void MacroAssembler::Cmp(Register dst, Smi* src) { - ASSERT(!dst.is(kScratchRegister)); + DCHECK(!dst.is(kScratchRegister)); if (src->value() == 0) { testp(dst, dst); } else { @@ -1219,7 +1292,7 @@ void MacroAssembler::SmiCompare(const Operand& dst, Smi* src) { if (SmiValuesAre32Bits()) { cmpl(Operand(dst, kSmiShift / kBitsPerByte), Immediate(src->value())); } else { - ASSERT(SmiValuesAre31Bits()); + DCHECK(SmiValuesAre31Bits()); cmpl(dst, Immediate(src)); } } @@ -1228,7 +1301,7 @@ void MacroAssembler::SmiCompare(const Operand& dst, Smi* src) { void MacroAssembler::Cmp(const Operand& dst, Smi* src) { // The Operand cannot use the smi register. 
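// The SmiValuesAre32Bits()/SmiValuesAre31Bits() forks in these helpers
// mirror the two smi encodings; a sketch of the layouts as assumed here:
//
//   32-bit smi values (kSmiShift == 32, regular x64):
//     [ 32-bit signed value | 32 zero bits ]
//     so a cmpl/movl at byte offset kSmiShift / kBitsPerByte operates
//     directly on the untagged value.
//
//   31-bit smi values (kSmiShift == 1, e.g. x32):
//     [ 31-bit signed value | tag bit 0 ]
//     the classic value << 1 scheme, hence sarl/shll by one to untag/tag.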
Register smi_reg = GetSmiConstant(src); - ASSERT(!dst.AddressUsesRegister(smi_reg)); + DCHECK(!dst.AddressUsesRegister(smi_reg)); cmpp(dst, smi_reg); } @@ -1237,7 +1310,7 @@ void MacroAssembler::SmiCompareInteger32(const Operand& dst, Register src) { if (SmiValuesAre32Bits()) { cmpl(Operand(dst, kSmiShift / kBitsPerByte), src); } else { - ASSERT(SmiValuesAre31Bits()); + DCHECK(SmiValuesAre31Bits()); SmiToInteger32(kScratchRegister, dst); cmpl(kScratchRegister, src); } @@ -1247,8 +1320,8 @@ void MacroAssembler::SmiCompareInteger32(const Operand& dst, Register src) { void MacroAssembler::PositiveSmiTimesPowerOfTwoToInteger64(Register dst, Register src, int power) { - ASSERT(power >= 0); - ASSERT(power < 64); + DCHECK(power >= 0); + DCHECK(power < 64); if (power == 0) { SmiToInteger64(dst, src); return; @@ -1267,7 +1340,7 @@ void MacroAssembler::PositiveSmiTimesPowerOfTwoToInteger64(Register dst, void MacroAssembler::PositiveSmiDivPowerOfTwoToInteger32(Register dst, Register src, int power) { - ASSERT((0 <= power) && (power < 32)); + DCHECK((0 <= power) && (power < 32)); if (dst.is(src)) { shrp(dst, Immediate(power + kSmiShift)); } else { @@ -1280,8 +1353,8 @@ void MacroAssembler::SmiOrIfSmis(Register dst, Register src1, Register src2, Label* on_not_smis, Label::Distance near_jump) { if (dst.is(src1) || dst.is(src2)) { - ASSERT(!src1.is(kScratchRegister)); - ASSERT(!src2.is(kScratchRegister)); + DCHECK(!src1.is(kScratchRegister)); + DCHECK(!src2.is(kScratchRegister)); movp(kScratchRegister, src1); orp(kScratchRegister, src2); JumpIfNotSmi(kScratchRegister, on_not_smis, near_jump); @@ -1327,7 +1400,7 @@ Condition MacroAssembler::CheckBothSmi(Register first, Register second) { leal(kScratchRegister, Operand(first, second, times_1, 0)); testb(kScratchRegister, Immediate(0x03)); } else { - ASSERT(SmiValuesAre31Bits()); + DCHECK(SmiValuesAre31Bits()); movl(kScratchRegister, first); orl(kScratchRegister, second); testb(kScratchRegister, Immediate(kSmiTagMask)); @@ -1369,7 +1442,7 @@ Condition MacroAssembler::CheckEitherSmi(Register first, Condition MacroAssembler::CheckIsMinSmi(Register src) { - ASSERT(!src.is(kScratchRegister)); + DCHECK(!src.is(kScratchRegister)); // If we overflow by subtracting one, it's the minimal smi value. cmpp(src, kSmiConstantRegister); return overflow; @@ -1381,7 +1454,7 @@ Condition MacroAssembler::CheckInteger32ValidSmiValue(Register src) { // A 32-bit integer value can always be converted to a smi. 
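// The 31-bit validity check below leans on 32-bit wrap-around:
// cmpl(src, Immediate(0xc0000000)) computes src - 0xc0000000, which equals
// src + 0x40000000 mod 2^32. Every value in the valid smi range
// [-0x40000000, 0x3fffffff] maps into [0, 0x7fffffff], so the sign flag
// stays clear and the "positive" condition holds. Worked example:
//
//   src = 0x3fffffff (largest smi)  -> 0x7fffffff, sign clear, valid
//   src = 0x40000000 (one too big)  -> 0x80000000, sign set, not a smi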
return always; } else { - ASSERT(SmiValuesAre31Bits()); + DCHECK(SmiValuesAre31Bits()); cmpl(src, Immediate(0xc0000000)); return positive; } @@ -1395,7 +1468,7 @@ Condition MacroAssembler::CheckUInteger32ValidSmiValue(Register src) { testl(src, src); return positive; } else { - ASSERT(SmiValuesAre31Bits()); + DCHECK(SmiValuesAre31Bits()); testl(src, Immediate(0xc0000000)); return zero; } @@ -1423,6 +1496,14 @@ void MacroAssembler::CheckSmiToIndicator(Register dst, const Operand& src) { } +void MacroAssembler::JumpIfValidSmiValue(Register src, + Label* on_valid, + Label::Distance near_jump) { + Condition is_valid = CheckInteger32ValidSmiValue(src); + j(is_valid, on_valid, near_jump); +} + + void MacroAssembler::JumpIfNotValidSmiValue(Register src, Label* on_invalid, Label::Distance near_jump) { @@ -1431,6 +1512,14 @@ void MacroAssembler::JumpIfNotValidSmiValue(Register src, } +void MacroAssembler::JumpIfUIntValidSmiValue(Register src, + Label* on_valid, + Label::Distance near_jump) { + Condition is_valid = CheckUInteger32ValidSmiValue(src); + j(is_valid, on_valid, near_jump); +} + + void MacroAssembler::JumpIfUIntNotValidSmiValue(Register src, Label* on_invalid, Label::Distance near_jump) { @@ -1497,7 +1586,7 @@ void MacroAssembler::SmiAddConstant(Register dst, Register src, Smi* constant) { } return; } else if (dst.is(src)) { - ASSERT(!dst.is(kScratchRegister)); + DCHECK(!dst.is(kScratchRegister)); switch (constant->value()) { case 1: addp(dst, kSmiConstantRegister); @@ -1545,7 +1634,7 @@ void MacroAssembler::SmiAddConstant(const Operand& dst, Smi* constant) { addl(Operand(dst, kSmiShift / kBitsPerByte), Immediate(constant->value())); } else { - ASSERT(SmiValuesAre31Bits()); + DCHECK(SmiValuesAre31Bits()); addp(dst, Immediate(constant)); } } @@ -1563,12 +1652,12 @@ void MacroAssembler::SmiAddConstant(Register dst, movp(dst, src); } } else if (dst.is(src)) { - ASSERT(!dst.is(kScratchRegister)); + DCHECK(!dst.is(kScratchRegister)); LoadSmiConstant(kScratchRegister, constant); addp(dst, kScratchRegister); if (mode.Contains(BAILOUT_ON_NO_OVERFLOW)) { j(no_overflow, bailout_label, near_jump); - ASSERT(mode.Contains(PRESERVE_SOURCE_REGISTER)); + DCHECK(mode.Contains(PRESERVE_SOURCE_REGISTER)); subp(dst, kScratchRegister); } else if (mode.Contains(BAILOUT_ON_OVERFLOW)) { if (mode.Contains(PRESERVE_SOURCE_REGISTER)) { @@ -1585,8 +1674,8 @@ void MacroAssembler::SmiAddConstant(Register dst, CHECK(mode.IsEmpty()); } } else { - ASSERT(mode.Contains(PRESERVE_SOURCE_REGISTER)); - ASSERT(mode.Contains(BAILOUT_ON_OVERFLOW)); + DCHECK(mode.Contains(PRESERVE_SOURCE_REGISTER)); + DCHECK(mode.Contains(BAILOUT_ON_OVERFLOW)); LoadSmiConstant(dst, constant); addp(dst, src); j(overflow, bailout_label, near_jump); @@ -1600,7 +1689,7 @@ void MacroAssembler::SmiSubConstant(Register dst, Register src, Smi* constant) { movp(dst, src); } } else if (dst.is(src)) { - ASSERT(!dst.is(kScratchRegister)); + DCHECK(!dst.is(kScratchRegister)); Register constant_reg = GetSmiConstant(constant); subp(dst, constant_reg); } else { @@ -1629,12 +1718,12 @@ void MacroAssembler::SmiSubConstant(Register dst, movp(dst, src); } } else if (dst.is(src)) { - ASSERT(!dst.is(kScratchRegister)); + DCHECK(!dst.is(kScratchRegister)); LoadSmiConstant(kScratchRegister, constant); subp(dst, kScratchRegister); if (mode.Contains(BAILOUT_ON_NO_OVERFLOW)) { j(no_overflow, bailout_label, near_jump); - ASSERT(mode.Contains(PRESERVE_SOURCE_REGISTER)); + DCHECK(mode.Contains(PRESERVE_SOURCE_REGISTER)); addp(dst, kScratchRegister); } else if 
(mode.Contains(BAILOUT_ON_OVERFLOW)) { if (mode.Contains(PRESERVE_SOURCE_REGISTER)) { @@ -1651,10 +1740,10 @@ void MacroAssembler::SmiSubConstant(Register dst, CHECK(mode.IsEmpty()); } } else { - ASSERT(mode.Contains(PRESERVE_SOURCE_REGISTER)); - ASSERT(mode.Contains(BAILOUT_ON_OVERFLOW)); + DCHECK(mode.Contains(PRESERVE_SOURCE_REGISTER)); + DCHECK(mode.Contains(BAILOUT_ON_OVERFLOW)); if (constant->value() == Smi::kMinValue) { - ASSERT(!dst.is(kScratchRegister)); + DCHECK(!dst.is(kScratchRegister)); movp(dst, src); LoadSmiConstant(kScratchRegister, constant); subp(dst, kScratchRegister); @@ -1674,7 +1763,7 @@ void MacroAssembler::SmiNeg(Register dst, Label* on_smi_result, Label::Distance near_jump) { if (dst.is(src)) { - ASSERT(!dst.is(kScratchRegister)); + DCHECK(!dst.is(kScratchRegister)); movp(kScratchRegister, src); negp(dst); // Low 32 bits are retained as zero by negation. // Test if result is zero or Smi::kMinValue. @@ -1719,8 +1808,8 @@ void MacroAssembler::SmiAdd(Register dst, Register src2, Label* on_not_smi_result, Label::Distance near_jump) { - ASSERT_NOT_NULL(on_not_smi_result); - ASSERT(!dst.is(src2)); + DCHECK_NOT_NULL(on_not_smi_result); + DCHECK(!dst.is(src2)); SmiAddHelper<Register>(this, dst, src1, src2, on_not_smi_result, near_jump); } @@ -1730,8 +1819,8 @@ void MacroAssembler::SmiAdd(Register dst, const Operand& src2, Label* on_not_smi_result, Label::Distance near_jump) { - ASSERT_NOT_NULL(on_not_smi_result); - ASSERT(!src2.AddressUsesRegister(dst)); + DCHECK_NOT_NULL(on_not_smi_result); + DCHECK(!src2.AddressUsesRegister(dst)); SmiAddHelper<Operand>(this, dst, src1, src2, on_not_smi_result, near_jump); } @@ -1783,8 +1872,8 @@ void MacroAssembler::SmiSub(Register dst, Register src2, Label* on_not_smi_result, Label::Distance near_jump) { - ASSERT_NOT_NULL(on_not_smi_result); - ASSERT(!dst.is(src2)); + DCHECK_NOT_NULL(on_not_smi_result); + DCHECK(!dst.is(src2)); SmiSubHelper<Register>(this, dst, src1, src2, on_not_smi_result, near_jump); } @@ -1794,8 +1883,8 @@ void MacroAssembler::SmiSub(Register dst, const Operand& src2, Label* on_not_smi_result, Label::Distance near_jump) { - ASSERT_NOT_NULL(on_not_smi_result); - ASSERT(!src2.AddressUsesRegister(dst)); + DCHECK_NOT_NULL(on_not_smi_result); + DCHECK(!src2.AddressUsesRegister(dst)); SmiSubHelper<Operand>(this, dst, src1, src2, on_not_smi_result, near_jump); } @@ -1816,7 +1905,7 @@ static void SmiSubNoOverflowHelper(MacroAssembler* masm, void MacroAssembler::SmiSub(Register dst, Register src1, Register src2) { - ASSERT(!dst.is(src2)); + DCHECK(!dst.is(src2)); SmiSubNoOverflowHelper<Register>(this, dst, src1, src2); } @@ -1833,10 +1922,10 @@ void MacroAssembler::SmiMul(Register dst, Register src2, Label* on_not_smi_result, Label::Distance near_jump) { - ASSERT(!dst.is(src2)); - ASSERT(!dst.is(kScratchRegister)); - ASSERT(!src1.is(kScratchRegister)); - ASSERT(!src2.is(kScratchRegister)); + DCHECK(!dst.is(src2)); + DCHECK(!dst.is(kScratchRegister)); + DCHECK(!src1.is(kScratchRegister)); + DCHECK(!src2.is(kScratchRegister)); if (dst.is(src1)) { Label failure, zero_correct_result; @@ -1888,12 +1977,12 @@ void MacroAssembler::SmiDiv(Register dst, Register src2, Label* on_not_smi_result, Label::Distance near_jump) { - ASSERT(!src1.is(kScratchRegister)); - ASSERT(!src2.is(kScratchRegister)); - ASSERT(!dst.is(kScratchRegister)); - ASSERT(!src2.is(rax)); - ASSERT(!src2.is(rdx)); - ASSERT(!src1.is(rdx)); + DCHECK(!src1.is(kScratchRegister)); + DCHECK(!src2.is(kScratchRegister)); + DCHECK(!dst.is(kScratchRegister)); + 
DCHECK(!src2.is(rax)); + DCHECK(!src2.is(rdx)); + DCHECK(!src1.is(rdx)); // Check for 0 divisor (result is +/-Infinity). testp(src2, src2); @@ -1911,7 +2000,7 @@ void MacroAssembler::SmiDiv(Register dst, // We overshoot a little and go to slow case if we divide min-value // by any negative value, not just -1. Label safe_div; - testl(rax, Immediate(0x7fffffff)); + testl(rax, Immediate(~Smi::kMinValue)); j(not_zero, &safe_div, Label::kNear); testp(src2, src2); if (src1.is(rax)) { @@ -1951,13 +2040,13 @@ void MacroAssembler::SmiMod(Register dst, Register src2, Label* on_not_smi_result, Label::Distance near_jump) { - ASSERT(!dst.is(kScratchRegister)); - ASSERT(!src1.is(kScratchRegister)); - ASSERT(!src2.is(kScratchRegister)); - ASSERT(!src2.is(rax)); - ASSERT(!src2.is(rdx)); - ASSERT(!src1.is(rdx)); - ASSERT(!src1.is(src2)); + DCHECK(!dst.is(kScratchRegister)); + DCHECK(!src1.is(kScratchRegister)); + DCHECK(!src2.is(kScratchRegister)); + DCHECK(!src2.is(rax)); + DCHECK(!src2.is(rdx)); + DCHECK(!src1.is(rdx)); + DCHECK(!src1.is(src2)); testp(src2, src2); j(zero, on_not_smi_result, near_jump); @@ -2003,14 +2092,14 @@ void MacroAssembler::SmiMod(Register dst, void MacroAssembler::SmiNot(Register dst, Register src) { - ASSERT(!dst.is(kScratchRegister)); - ASSERT(!src.is(kScratchRegister)); + DCHECK(!dst.is(kScratchRegister)); + DCHECK(!src.is(kScratchRegister)); if (SmiValuesAre32Bits()) { // Set tag and padding bits before negating, so that they are zero // afterwards. movl(kScratchRegister, Immediate(~0)); } else { - ASSERT(SmiValuesAre31Bits()); + DCHECK(SmiValuesAre31Bits()); movl(kScratchRegister, Immediate(1)); } if (dst.is(src)) { @@ -2023,7 +2112,7 @@ void MacroAssembler::SmiNot(Register dst, Register src) { void MacroAssembler::SmiAnd(Register dst, Register src1, Register src2) { - ASSERT(!dst.is(src2)); + DCHECK(!dst.is(src2)); if (!dst.is(src1)) { movp(dst, src1); } @@ -2035,7 +2124,7 @@ void MacroAssembler::SmiAndConstant(Register dst, Register src, Smi* constant) { if (constant->value() == 0) { Set(dst, 0); } else if (dst.is(src)) { - ASSERT(!dst.is(kScratchRegister)); + DCHECK(!dst.is(kScratchRegister)); Register constant_reg = GetSmiConstant(constant); andp(dst, constant_reg); } else { @@ -2047,7 +2136,7 @@ void MacroAssembler::SmiAndConstant(Register dst, Register src, Smi* constant) { void MacroAssembler::SmiOr(Register dst, Register src1, Register src2) { if (!dst.is(src1)) { - ASSERT(!src1.is(src2)); + DCHECK(!src1.is(src2)); movp(dst, src1); } orp(dst, src2); @@ -2056,7 +2145,7 @@ void MacroAssembler::SmiOr(Register dst, Register src1, Register src2) { void MacroAssembler::SmiOrConstant(Register dst, Register src, Smi* constant) { if (dst.is(src)) { - ASSERT(!dst.is(kScratchRegister)); + DCHECK(!dst.is(kScratchRegister)); Register constant_reg = GetSmiConstant(constant); orp(dst, constant_reg); } else { @@ -2068,7 +2157,7 @@ void MacroAssembler::SmiOrConstant(Register dst, Register src, Smi* constant) { void MacroAssembler::SmiXor(Register dst, Register src1, Register src2) { if (!dst.is(src1)) { - ASSERT(!src1.is(src2)); + DCHECK(!src1.is(src2)); movp(dst, src1); } xorp(dst, src2); @@ -2077,7 +2166,7 @@ void MacroAssembler::SmiXor(Register dst, Register src1, Register src2) { void MacroAssembler::SmiXorConstant(Register dst, Register src, Smi* constant) { if (dst.is(src)) { - ASSERT(!dst.is(kScratchRegister)); + DCHECK(!dst.is(kScratchRegister)); Register constant_reg = GetSmiConstant(constant); xorp(dst, constant_reg); } else { @@ -2090,7 +2179,7 @@ void 
MacroAssembler::SmiXorConstant(Register dst, Register src, Smi* constant) { void MacroAssembler::SmiShiftArithmeticRightConstant(Register dst, Register src, int shift_value) { - ASSERT(is_uint5(shift_value)); + DCHECK(is_uint5(shift_value)); if (shift_value > 0) { if (dst.is(src)) { sarp(dst, Immediate(shift_value + kSmiShift)); @@ -2104,12 +2193,27 @@ void MacroAssembler::SmiShiftArithmeticRightConstant(Register dst, void MacroAssembler::SmiShiftLeftConstant(Register dst, Register src, - int shift_value) { - if (!dst.is(src)) { - movp(dst, src); - } - if (shift_value > 0) { - shlp(dst, Immediate(shift_value)); + int shift_value, + Label* on_not_smi_result, + Label::Distance near_jump) { + if (SmiValuesAre32Bits()) { + if (!dst.is(src)) { + movp(dst, src); + } + if (shift_value > 0) { + // Shift amount specified by lower 5 bits, not six as the shl opcode. + shlq(dst, Immediate(shift_value & 0x1f)); + } + } else { + DCHECK(SmiValuesAre31Bits()); + if (dst.is(src)) { + UNIMPLEMENTED(); // Not used. + } else { + SmiToInteger32(dst, src); + shll(dst, Immediate(shift_value)); + JumpIfNotValidSmiValue(dst, on_not_smi_result, near_jump); + Integer32ToSmi(dst, dst); + } } } @@ -2121,29 +2225,73 @@ void MacroAssembler::SmiShiftLogicalRightConstant( if (dst.is(src)) { UNIMPLEMENTED(); // Not used. } else { - movp(dst, src); if (shift_value == 0) { - testp(dst, dst); + testp(src, src); j(negative, on_not_smi_result, near_jump); } - shrq(dst, Immediate(shift_value + kSmiShift)); - shlq(dst, Immediate(kSmiShift)); + if (SmiValuesAre32Bits()) { + movp(dst, src); + shrp(dst, Immediate(shift_value + kSmiShift)); + shlp(dst, Immediate(kSmiShift)); + } else { + DCHECK(SmiValuesAre31Bits()); + SmiToInteger32(dst, src); + shrp(dst, Immediate(shift_value)); + JumpIfUIntNotValidSmiValue(dst, on_not_smi_result, near_jump); + Integer32ToSmi(dst, dst); + } } } void MacroAssembler::SmiShiftLeft(Register dst, Register src1, - Register src2) { - ASSERT(!dst.is(rcx)); - // Untag shift amount. - if (!dst.is(src1)) { - movq(dst, src1); + Register src2, + Label* on_not_smi_result, + Label::Distance near_jump) { + if (SmiValuesAre32Bits()) { + DCHECK(!dst.is(rcx)); + if (!dst.is(src1)) { + movp(dst, src1); + } + // Untag shift amount. + SmiToInteger32(rcx, src2); + // Shift amount specified by lower 5 bits, not six as the shl opcode. + andp(rcx, Immediate(0x1f)); + shlq_cl(dst); + } else { + DCHECK(SmiValuesAre31Bits()); + DCHECK(!dst.is(kScratchRegister)); + DCHECK(!src1.is(kScratchRegister)); + DCHECK(!src2.is(kScratchRegister)); + DCHECK(!dst.is(src2)); + DCHECK(!dst.is(rcx)); + + if (src1.is(rcx) || src2.is(rcx)) { + movq(kScratchRegister, rcx); + } + if (dst.is(src1)) { + UNIMPLEMENTED(); // Not used. + } else { + Label valid_result; + SmiToInteger32(dst, src1); + SmiToInteger32(rcx, src2); + shll_cl(dst); + JumpIfValidSmiValue(dst, &valid_result, Label::kNear); + // As src1 or src2 could not be dst, we do not need to restore them for + // clobbering dst. + if (src1.is(rcx) || src2.is(rcx)) { + if (src1.is(rcx)) { + movq(src1, kScratchRegister); + } else { + movq(src2, kScratchRegister); + } + } + jmp(on_not_smi_result, near_jump); + bind(&valid_result); + Integer32ToSmi(dst, dst); + } } - SmiToInteger32(rcx, src2); - // Shift amount specified by lower 5 bits, not six as the shl opcode. 
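// On the 31-bit-smi path a left shift can overflow the smi payload even
// though the 32-bit register does not, hence the untag/shift/check/retag
// sequence introduced above. A sketch with shift_value == 4:
//
//   // src holds Smi(0x08000000); the payload is in range, the result is not.
//   SmiToInteger32(dst, src);   // dst = 0x08000000
//   shll(dst, Immediate(4));    // dst = 0x80000000, outside [-2^30, 2^30)
//   JumpIfNotValidSmiValue(dst, on_not_smi_result, near_jump);  // bails out
//   Integer32ToSmi(dst, dst);   // only reached for in-range results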
- andq(rcx, Immediate(0x1f)); - shlq_cl(dst); } @@ -2152,36 +2300,34 @@ void MacroAssembler::SmiShiftLogicalRight(Register dst, Register src2, Label* on_not_smi_result, Label::Distance near_jump) { - ASSERT(!dst.is(kScratchRegister)); - ASSERT(!src1.is(kScratchRegister)); - ASSERT(!src2.is(kScratchRegister)); - ASSERT(!dst.is(rcx)); - // dst and src1 can be the same, because the one case that bails out - // is a shift by 0, which leaves dst, and therefore src1, unchanged. + DCHECK(!dst.is(kScratchRegister)); + DCHECK(!src1.is(kScratchRegister)); + DCHECK(!src2.is(kScratchRegister)); + DCHECK(!dst.is(src2)); + DCHECK(!dst.is(rcx)); if (src1.is(rcx) || src2.is(rcx)) { movq(kScratchRegister, rcx); } - if (!dst.is(src1)) { - movq(dst, src1); - } - SmiToInteger32(rcx, src2); - orl(rcx, Immediate(kSmiShift)); - shrq_cl(dst); // Shift is rcx modulo 0x1f + 32. - shlq(dst, Immediate(kSmiShift)); - testq(dst, dst); - if (src1.is(rcx) || src2.is(rcx)) { - Label positive_result; - j(positive, &positive_result, Label::kNear); - if (src1.is(rcx)) { - movq(src1, kScratchRegister); - } else { - movq(src2, kScratchRegister); - } - jmp(on_not_smi_result, near_jump); - bind(&positive_result); + if (dst.is(src1)) { + UNIMPLEMENTED(); // Not used. } else { - // src2 was zero and src1 negative. - j(negative, on_not_smi_result, near_jump); + Label valid_result; + SmiToInteger32(dst, src1); + SmiToInteger32(rcx, src2); + shrl_cl(dst); + JumpIfUIntValidSmiValue(dst, &valid_result, Label::kNear); + // As src1 or src2 could not be dst, we do not need to restore them for + // clobbering dst. + if (src1.is(rcx) || src2.is(rcx)) { + if (src1.is(rcx)) { + movq(src1, kScratchRegister); + } else { + movq(src2, kScratchRegister); + } + } + jmp(on_not_smi_result, near_jump); + bind(&valid_result); + Integer32ToSmi(dst, dst); } } @@ -2189,27 +2335,18 @@ void MacroAssembler::SmiShiftLogicalRight(Register dst, void MacroAssembler::SmiShiftArithmeticRight(Register dst, Register src1, Register src2) { - ASSERT(!dst.is(kScratchRegister)); - ASSERT(!src1.is(kScratchRegister)); - ASSERT(!src2.is(kScratchRegister)); - ASSERT(!dst.is(rcx)); - if (src1.is(rcx)) { - movp(kScratchRegister, src1); - } else if (src2.is(rcx)) { - movp(kScratchRegister, src2); - } + DCHECK(!dst.is(kScratchRegister)); + DCHECK(!src1.is(kScratchRegister)); + DCHECK(!src2.is(kScratchRegister)); + DCHECK(!dst.is(rcx)); + + SmiToInteger32(rcx, src2); if (!dst.is(src1)) { movp(dst, src1); } - SmiToInteger32(rcx, src2); - orl(rcx, Immediate(kSmiShift)); - sarp_cl(dst); // Shift 32 + original rcx & 0x1f. - shlp(dst, Immediate(kSmiShift)); - if (src1.is(rcx)) { - movp(src1, kScratchRegister); - } else if (src2.is(rcx)) { - movp(src2, kScratchRegister); - } + SmiToInteger32(dst, dst); + sarl_cl(dst); + Integer32ToSmi(dst, dst); } @@ -2218,18 +2355,18 @@ void MacroAssembler::SelectNonSmi(Register dst, Register src2, Label* on_not_smis, Label::Distance near_jump) { - ASSERT(!dst.is(kScratchRegister)); - ASSERT(!src1.is(kScratchRegister)); - ASSERT(!src2.is(kScratchRegister)); - ASSERT(!dst.is(src1)); - ASSERT(!dst.is(src2)); + DCHECK(!dst.is(kScratchRegister)); + DCHECK(!src1.is(kScratchRegister)); + DCHECK(!src2.is(kScratchRegister)); + DCHECK(!dst.is(src1)); + DCHECK(!dst.is(src2)); // Both operands must not be smis. 
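// SelectNonSmi picks whichever operand is not a smi without branching; a
// sketch of the mask trick used in the body that follows (smi tag is 0,
// kSmiTagMask is 1):
//
//   scratch = (src1 & kSmiTagMask) - 1;       // src1 smi: all ones; else 0
//   dst = ((src1 ^ src2) & scratch) ^ src1;
//   // scratch all ones  -> dst == src2 (src1 was the smi)
//   // scratch all zeros -> dst == src1 (src1 was the heap object)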
#ifdef DEBUG Condition not_both_smis = NegateCondition(CheckBothSmi(src1, src2)); Check(not_both_smis, kBothRegistersWereSmisInSelectNonSmi); #endif STATIC_ASSERT(kSmiTag == 0); - ASSERT_EQ(0, Smi::FromInt(0)); + DCHECK_EQ(0, Smi::FromInt(0)); movl(kScratchRegister, Immediate(kSmiTagMask)); andp(kScratchRegister, src1); testl(kScratchRegister, src2); @@ -2237,7 +2374,7 @@ void MacroAssembler::SelectNonSmi(Register dst, j(not_zero, on_not_smis, near_jump); // Exactly one operand is a smi. - ASSERT_EQ(1, static_cast<int>(kSmiTagMask)); + DCHECK_EQ(1, static_cast<int>(kSmiTagMask)); // kScratchRegister still holds src1 & kSmiTag, which is either zero or one. subp(kScratchRegister, Immediate(1)); // If src1 is a smi, then scratch register all 1s, else it is all 0s. @@ -2254,7 +2391,7 @@ SmiIndex MacroAssembler::SmiToIndex(Register dst, Register src, int shift) { if (SmiValuesAre32Bits()) { - ASSERT(is_uint6(shift)); + DCHECK(is_uint6(shift)); // There is a possible optimization if shift is in the range 60-63, but that // will (and must) never happen. if (!dst.is(src)) { @@ -2267,8 +2404,8 @@ SmiIndex MacroAssembler::SmiToIndex(Register dst, } return SmiIndex(dst, times_1); } else { - ASSERT(SmiValuesAre31Bits()); - ASSERT(shift >= times_1 && shift <= (static_cast<int>(times_8) + 1)); + DCHECK(SmiValuesAre31Bits()); + DCHECK(shift >= times_1 && shift <= (static_cast<int>(times_8) + 1)); if (!dst.is(src)) { movp(dst, src); } @@ -2289,7 +2426,7 @@ SmiIndex MacroAssembler::SmiToNegativeIndex(Register dst, int shift) { if (SmiValuesAre32Bits()) { // Register src holds a positive smi. - ASSERT(is_uint6(shift)); + DCHECK(is_uint6(shift)); if (!dst.is(src)) { movp(dst, src); } @@ -2301,8 +2438,8 @@ SmiIndex MacroAssembler::SmiToNegativeIndex(Register dst, } return SmiIndex(dst, times_1); } else { - ASSERT(SmiValuesAre31Bits()); - ASSERT(shift >= times_1 && shift <= (static_cast<int>(times_8) + 1)); + DCHECK(SmiValuesAre31Bits()); + DCHECK(shift >= times_1 && shift <= (static_cast<int>(times_8) + 1)); if (!dst.is(src)) { movp(dst, src); } @@ -2318,10 +2455,10 @@ SmiIndex MacroAssembler::SmiToNegativeIndex(Register dst, void MacroAssembler::AddSmiField(Register dst, const Operand& src) { if (SmiValuesAre32Bits()) { - ASSERT_EQ(0, kSmiShift % kBitsPerByte); + DCHECK_EQ(0, kSmiShift % kBitsPerByte); addl(dst, Operand(src, kSmiShift / kBitsPerByte)); } else { - ASSERT(SmiValuesAre31Bits()); + DCHECK(SmiValuesAre31Bits()); SmiToInteger32(kScratchRegister, src); addl(dst, kScratchRegister); } @@ -2340,7 +2477,7 @@ void MacroAssembler::Push(Smi* source) { void MacroAssembler::PushRegisterAsTwoSmis(Register src, Register scratch) { - ASSERT(!src.is(scratch)); + DCHECK(!src.is(scratch)); movp(scratch, src); // High bits. shrp(src, Immediate(kPointerSize * kBitsPerByte - kSmiShift)); @@ -2353,7 +2490,7 @@ void MacroAssembler::PushRegisterAsTwoSmis(Register src, Register scratch) { void MacroAssembler::PopRegisterAsTwoSmis(Register dst, Register scratch) { - ASSERT(!dst.is(scratch)); + DCHECK(!dst.is(scratch)); Pop(scratch); // Low bits. 
shrp(scratch, Immediate(kSmiShift)); @@ -2369,7 +2506,7 @@ void MacroAssembler::Test(const Operand& src, Smi* source) { if (SmiValuesAre32Bits()) { testl(Operand(src, kIntSize), Immediate(source->value())); } else { - ASSERT(SmiValuesAre31Bits()); + DCHECK(SmiValuesAre31Bits()); testl(src, Immediate(source)); } } @@ -2491,7 +2628,7 @@ void MacroAssembler::JumpIfNotBothSequentialAsciiStrings( movzxbl(scratch2, FieldOperand(scratch2, Map::kInstanceTypeOffset)); // Check that both are flat ASCII strings. - ASSERT(kNotStringTag != 0); + DCHECK(kNotStringTag != 0); const int kFlatAsciiStringMask = kIsNotStringMask | kStringRepresentationMask | kStringEncodingMask; const int kFlatAsciiStringTag = @@ -2500,7 +2637,7 @@ void MacroAssembler::JumpIfNotBothSequentialAsciiStrings( andl(scratch1, Immediate(kFlatAsciiStringMask)); andl(scratch2, Immediate(kFlatAsciiStringMask)); // Interleave the bits to check both scratch1 and scratch2 in one test. - ASSERT_EQ(0, kFlatAsciiStringMask & (kFlatAsciiStringMask << 3)); + DCHECK_EQ(0, kFlatAsciiStringMask & (kFlatAsciiStringMask << 3)); leap(scratch1, Operand(scratch1, scratch2, times_8, 0)); cmpl(scratch1, Immediate(kFlatAsciiStringTag + (kFlatAsciiStringTag << 3))); @@ -2538,7 +2675,7 @@ void MacroAssembler::JumpIfBothInstanceTypesAreNotSequentialAscii( movp(scratch2, second_object_instance_type); // Check that both are flat ASCII strings. - ASSERT(kNotStringTag != 0); + DCHECK(kNotStringTag != 0); const int kFlatAsciiStringMask = kIsNotStringMask | kStringRepresentationMask | kStringEncodingMask; const int kFlatAsciiStringTag = @@ -2547,7 +2684,7 @@ void MacroAssembler::JumpIfBothInstanceTypesAreNotSequentialAscii( andl(scratch1, Immediate(kFlatAsciiStringMask)); andl(scratch2, Immediate(kFlatAsciiStringMask)); // Interleave the bits to check both scratch1 and scratch2 in one test. - ASSERT_EQ(0, kFlatAsciiStringMask & (kFlatAsciiStringMask << 3)); + DCHECK_EQ(0, kFlatAsciiStringMask & (kFlatAsciiStringMask << 3)); leap(scratch1, Operand(scratch1, scratch2, times_8, 0)); cmpl(scratch1, Immediate(kFlatAsciiStringTag + (kFlatAsciiStringTag << 3))); @@ -2650,7 +2787,7 @@ void MacroAssembler::Push(Handle<Object> source) { void MacroAssembler::MoveHeapObject(Register result, Handle<Object> object) { AllowDeferredHandleDereference using_raw_address; - ASSERT(object->IsHeapObject()); + DCHECK(object->IsHeapObject()); if (isolate()->heap()->InNewSpace(*object)) { Handle<Cell> cell = isolate()->factory()->NewCell(object); Move(result, cell, RelocInfo::CELL); @@ -2681,7 +2818,7 @@ void MacroAssembler::Drop(int stack_elements) { void MacroAssembler::DropUnderReturnAddress(int stack_elements, Register scratch) { - ASSERT(stack_elements > 0); + DCHECK(stack_elements > 0); if (kPointerSize == kInt64Size && stack_elements == 1) { popq(MemOperand(rsp, 0)); return; @@ -2698,7 +2835,7 @@ void MacroAssembler::Push(Register src) { pushq(src); } else { // x32 uses 64-bit push for rbp in the prologue. - ASSERT(src.code() != rbp.code()); + DCHECK(src.code() != rbp.code()); leal(rsp, Operand(rsp, -4)); movp(Operand(rsp, 0), src); } @@ -2751,7 +2888,7 @@ void MacroAssembler::Pop(Register dst) { popq(dst); } else { // x32 uses 64-bit pop for rbp in the epilogue. 
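// On x32 (32-bit pointers on the 64-bit ISA) pushq/popq still move 8
// bytes, so Push/Pop synthesize 4-byte stack slots by hand, as in the
// kInt32Size paths here. Sketch of the 4-byte pop:
//
//   movp(dst, Operand(rsp, 0));   // read the 32-bit pointer
//   leal(rsp, Operand(rsp, 4));   // free the slot; lea leaves flags alone
//
// rbp is deliberately excluded (the DCHECKs on src/dst): the prologue and
// epilogue keep rbp in a full 8-byte slot even on x32.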
- ASSERT(dst.code() != rbp.code()); + DCHECK(dst.code() != rbp.code()); movp(dst, Operand(rsp, 0)); leal(rsp, Operand(rsp, 4)); } @@ -2790,7 +2927,7 @@ void MacroAssembler::PopQuad(const Operand& dst) { void MacroAssembler::LoadSharedFunctionInfoSpecialField(Register dst, Register base, int offset) { - ASSERT(offset > SharedFunctionInfo::kLengthOffset && + DCHECK(offset > SharedFunctionInfo::kLengthOffset && offset <= SharedFunctionInfo::kSize && (((offset - SharedFunctionInfo::kLengthOffset) / kIntSize) % 2 == 1)); if (kPointerSize == kInt64Size) { @@ -2805,7 +2942,7 @@ void MacroAssembler::LoadSharedFunctionInfoSpecialField(Register dst, void MacroAssembler::TestBitSharedFunctionInfoSpecialField(Register base, int offset, int bits) { - ASSERT(offset > SharedFunctionInfo::kLengthOffset && + DCHECK(offset > SharedFunctionInfo::kLengthOffset && offset <= SharedFunctionInfo::kSize && (((offset - SharedFunctionInfo::kLengthOffset) / kIntSize) % 2 == 1)); if (kPointerSize == kInt32Size) { @@ -2893,7 +3030,7 @@ void MacroAssembler::Call(Handle<Code> code_object, #ifdef DEBUG int end_position = pc_offset() + CallSize(code_object); #endif - ASSERT(RelocInfo::IsCodeTarget(rmode) || + DCHECK(RelocInfo::IsCodeTarget(rmode) || rmode == RelocInfo::CODE_AGE_SEQUENCE); call(code_object, rmode, ast_id); #ifdef DEBUG @@ -3328,8 +3465,7 @@ void MacroAssembler::ClampDoubleToUint8(XMMRegister input_reg, void MacroAssembler::LoadUint32(XMMRegister dst, - Register src, - XMMRegister scratch) { + Register src) { if (FLAG_debug_code) { cmpq(src, Immediate(0xffffffff)); Assert(below_equal, kInputGPRIsExpectedToHaveUpper32Cleared); @@ -3423,7 +3559,7 @@ void MacroAssembler::TaggedToI(Register result_reg, Label* lost_precision, Label::Distance dst) { Label done; - ASSERT(!temp.is(xmm0)); + DCHECK(!temp.is(xmm0)); // Heap number map check. CompareRoot(FieldOperand(input_reg, HeapObject::kMapOffset), @@ -3449,39 +3585,6 @@ void MacroAssembler::TaggedToI(Register result_reg, } -void MacroAssembler::Throw(BailoutReason reason) { -#ifdef DEBUG - const char* msg = GetBailoutReason(reason); - if (msg != NULL) { - RecordComment("Throw message: "); - RecordComment(msg); - } -#endif - - Push(rax); - Push(Smi::FromInt(reason)); - if (!has_frame_) { - // We don't actually want to generate a pile of code for this, so just - // claim there is a stack frame, without generating one. - FrameScope scope(this, StackFrame::NONE); - CallRuntime(Runtime::kHiddenThrowMessage, 1); - } else { - CallRuntime(Runtime::kHiddenThrowMessage, 1); - } - // Control will not return here. 
- int3(); -} - - -void MacroAssembler::ThrowIf(Condition cc, BailoutReason reason) { - Label L; - j(NegateCondition(cc), &L); - Throw(reason); - // will not return here - bind(&L); -} - - void MacroAssembler::LoadInstanceDescriptors(Register map, Register descriptors) { movp(descriptors, FieldOperand(map, Map::kDescriptorsOffset)); @@ -3489,16 +3592,16 @@ void MacroAssembler::LoadInstanceDescriptors(Register map, void MacroAssembler::NumberOfOwnDescriptors(Register dst, Register map) { - movp(dst, FieldOperand(map, Map::kBitField3Offset)); + movl(dst, FieldOperand(map, Map::kBitField3Offset)); DecodeField<Map::NumberOfOwnDescriptorsBits>(dst); } void MacroAssembler::EnumLength(Register dst, Register map) { STATIC_ASSERT(Map::EnumLengthBits::kShift == 0); - movp(dst, FieldOperand(map, Map::kBitField3Offset)); - Move(kScratchRegister, Smi::FromInt(Map::EnumLengthBits::kMask)); - andp(dst, kScratchRegister); + movl(dst, FieldOperand(map, Map::kBitField3Offset)); + andl(dst, Immediate(Map::EnumLengthBits::kMask)); + Integer32ToSmi(dst, dst); } @@ -3557,7 +3660,7 @@ void MacroAssembler::AssertSmi(const Operand& object) { void MacroAssembler::AssertZeroExtended(Register int32_register) { if (emit_debug_code()) { - ASSERT(!int32_register.is(kScratchRegister)); + DCHECK(!int32_register.is(kScratchRegister)); movq(kScratchRegister, V8_INT64_C(0x0000000100000000)); cmpq(kScratchRegister, int32_register); Check(above_equal, k32BitValueInRegisterIsNotZeroExtended); @@ -3608,7 +3711,7 @@ void MacroAssembler::AssertRootValue(Register src, Heap::RootListIndex root_value_index, BailoutReason reason) { if (emit_debug_code()) { - ASSERT(!src.is(kScratchRegister)); + DCHECK(!src.is(kScratchRegister)); LoadRoot(kScratchRegister, root_value_index); cmpp(src, kScratchRegister); Check(equal, reason); @@ -3642,15 +3745,16 @@ void MacroAssembler::TryGetFunctionPrototype(Register function, Register result, Label* miss, bool miss_on_bound_function) { - // Check that the receiver isn't a smi. - testl(function, Immediate(kSmiTagMask)); - j(zero, miss); + Label non_instance; + if (miss_on_bound_function) { + // Check that the receiver isn't a smi. + testl(function, Immediate(kSmiTagMask)); + j(zero, miss); - // Check that the function really is a function. - CmpObjectType(function, JS_FUNCTION_TYPE, result); - j(not_equal, miss); + // Check that the function really is a function. + CmpObjectType(function, JS_FUNCTION_TYPE, result); + j(not_equal, miss); - if (miss_on_bound_function) { movp(kScratchRegister, FieldOperand(function, JSFunction::kSharedFunctionInfoOffset)); // It's not smi-tagged (stored in the top half of a smi-tagged 8-byte @@ -3659,13 +3763,12 @@ void MacroAssembler::TryGetFunctionPrototype(Register function, SharedFunctionInfo::kCompilerHintsOffset, SharedFunctionInfo::kBoundFunction); j(not_zero, miss); - } - // Make sure that the function has an instance prototype. - Label non_instance; - testb(FieldOperand(result, Map::kBitFieldOffset), - Immediate(1 << Map::kHasNonInstancePrototype)); - j(not_zero, &non_instance, Label::kNear); + // Make sure that the function has an instance prototype. + testb(FieldOperand(result, Map::kBitFieldOffset), + Immediate(1 << Map::kHasNonInstancePrototype)); + j(not_zero, &non_instance, Label::kNear); + } // Get the prototype or initial map from the function. movp(result, @@ -3684,12 +3787,15 @@ void MacroAssembler::TryGetFunctionPrototype(Register function, // Get the prototype from the initial map. 
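// The NumberOfOwnDescriptors/EnumLength hunks above switch the
// Map::kBitField3Offset reads from smi decoding to a plain 32-bit load plus
// mask, suggesting the field is now stored untagged; EnumLength re-tags only
// at the end. A host-side model, assuming 31-bit smis where tagging is a
// one-bit shift (with 32-bit smis the shift would be 32):
#include <cstdint>
static int32_t EnumLengthAsSmi(uint32_t bit_field3, uint32_t enum_length_mask) {
  uint32_t value = bit_field3 & enum_length_mask;  // andl(dst, Immediate(kMask))
  return static_cast<int32_t>(value << 1);         // Integer32ToSmi shifts in the tag
}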
movp(result, FieldOperand(result, Map::kPrototypeOffset)); - jmp(&done, Label::kNear); - // Non-instance prototype: Fetch prototype from constructor field - // in initial map. - bind(&non_instance); - movp(result, FieldOperand(result, Map::kConstructorOffset)); + if (miss_on_bound_function) { + jmp(&done, Label::kNear); + + // Non-instance prototype: Fetch prototype from constructor field + // in initial map. + bind(&non_instance); + movp(result, FieldOperand(result, Map::kConstructorOffset)); + } // All done. bind(&done); @@ -3705,7 +3811,7 @@ void MacroAssembler::SetCounter(StatsCounter* counter, int value) { void MacroAssembler::IncrementCounter(StatsCounter* counter, int value) { - ASSERT(value > 0); + DCHECK(value > 0); if (FLAG_native_code_counters && counter->Enabled()) { Operand counter_operand = ExternalOperand(ExternalReference(counter)); if (value == 1) { @@ -3718,7 +3824,7 @@ void MacroAssembler::IncrementCounter(StatsCounter* counter, int value) { void MacroAssembler::DecrementCounter(StatsCounter* counter, int value) { - ASSERT(value > 0); + DCHECK(value > 0); if (FLAG_native_code_counters && counter->Enabled()) { Operand counter_operand = ExternalOperand(ExternalReference(counter)); if (value == 1) { @@ -3734,7 +3840,7 @@ void MacroAssembler::DebugBreak() { Set(rax, 0); // No arguments. LoadAddress(rbx, ExternalReference(Runtime::kDebugBreak, isolate())); CEntryStub ces(isolate(), 1); - ASSERT(AllowThisStubCall(&ces)); + DCHECK(AllowThisStubCall(&ces)); Call(ces.GetCode(), RelocInfo::DEBUG_BREAK); } @@ -3745,7 +3851,7 @@ void MacroAssembler::InvokeCode(Register code, InvokeFlag flag, const CallWrapper& call_wrapper) { // You can't call a function without a valid frame. - ASSERT(flag == JUMP_FUNCTION || has_frame()); + DCHECK(flag == JUMP_FUNCTION || has_frame()); Label done; bool definitely_mismatches = false; @@ -3764,7 +3870,7 @@ void MacroAssembler::InvokeCode(Register code, call(code); call_wrapper.AfterCall(); } else { - ASSERT(flag == JUMP_FUNCTION); + DCHECK(flag == JUMP_FUNCTION); jmp(code); } bind(&done); @@ -3777,9 +3883,9 @@ void MacroAssembler::InvokeFunction(Register function, InvokeFlag flag, const CallWrapper& call_wrapper) { // You can't call a function without a valid frame. - ASSERT(flag == JUMP_FUNCTION || has_frame()); + DCHECK(flag == JUMP_FUNCTION || has_frame()); - ASSERT(function.is(rdi)); + DCHECK(function.is(rdi)); movp(rdx, FieldOperand(function, JSFunction::kSharedFunctionInfoOffset)); movp(rsi, FieldOperand(function, JSFunction::kContextOffset)); LoadSharedFunctionInfoSpecialField(rbx, rdx, @@ -3799,9 +3905,9 @@ void MacroAssembler::InvokeFunction(Register function, InvokeFlag flag, const CallWrapper& call_wrapper) { // You can't call a function without a valid frame. - ASSERT(flag == JUMP_FUNCTION || has_frame()); + DCHECK(flag == JUMP_FUNCTION || has_frame()); - ASSERT(function.is(rdi)); + DCHECK(function.is(rdi)); movp(rsi, FieldOperand(function, JSFunction::kContextOffset)); // Advances rdx to the end of the Code object header, to the start of // the executable code. @@ -3834,7 +3940,7 @@ void MacroAssembler::InvokePrologue(const ParameterCount& expected, *definitely_mismatches = false; Label invoke; if (expected.is_immediate()) { - ASSERT(actual.is_immediate()); + DCHECK(actual.is_immediate()); if (expected.immediate() == actual.immediate()) { definitely_matches = true; } else { @@ -3858,15 +3964,15 @@ void MacroAssembler::InvokePrologue(const ParameterCount& expected, // IC mechanism. 
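// Read together, the DCHECKs in the InvokeFunction hunks above and the
// InvokePrologue hunk that continues below document the x64 JS calling
// convention these helpers rely on:
//   rdi - JSFunction being invoked      rsi - context
//   rax - actual argument count         rbx - expected argument count
// When the two counts cannot be proven equal, control falls through to
// V8's arguments adaptor rather than calling the code directly.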
cmpp(expected.reg(), Immediate(actual.immediate())); j(equal, &invoke, Label::kNear); - ASSERT(expected.reg().is(rbx)); + DCHECK(expected.reg().is(rbx)); Set(rax, actual.immediate()); } else if (!expected.reg().is(actual.reg())) { // Both expected and actual are in (different) registers. This // is the case when we invoke functions using call and apply. cmpp(expected.reg(), actual.reg()); j(equal, &invoke, Label::kNear); - ASSERT(actual.reg().is(rax)); - ASSERT(expected.reg().is(rbx)); + DCHECK(actual.reg().is(rax)); + DCHECK(expected.reg().is(rbx)); } } @@ -3894,26 +4000,27 @@ void MacroAssembler::InvokePrologue(const ParameterCount& expected, } -void MacroAssembler::Prologue(PrologueFrameMode frame_mode) { - if (frame_mode == BUILD_STUB_FRAME) { +void MacroAssembler::StubPrologue() { pushq(rbp); // Caller's frame pointer. movp(rbp, rsp); Push(rsi); // Callee's context. Push(Smi::FromInt(StackFrame::STUB)); +} + + +void MacroAssembler::Prologue(bool code_pre_aging) { + PredictableCodeSizeScope predictible_code_size_scope(this, + kNoCodeAgeSequenceLength); + if (code_pre_aging) { + // Pre-age the code. + Call(isolate()->builtins()->MarkCodeAsExecutedOnce(), + RelocInfo::CODE_AGE_SEQUENCE); + Nop(kNoCodeAgeSequenceLength - Assembler::kShortCallInstructionLength); } else { - PredictableCodeSizeScope predictible_code_size_scope(this, - kNoCodeAgeSequenceLength); - if (isolate()->IsCodePreAgingActive()) { - // Pre-age the code. - Call(isolate()->builtins()->MarkCodeAsExecutedOnce(), - RelocInfo::CODE_AGE_SEQUENCE); - Nop(kNoCodeAgeSequenceLength - Assembler::kShortCallInstructionLength); - } else { - pushq(rbp); // Caller's frame pointer. - movp(rbp, rsp); - Push(rsi); // Callee's context. - Push(rdi); // Callee's JS function. - } + pushq(rbp); // Caller's frame pointer. + movp(rbp, rsp); + Push(rsi); // Callee's context. + Push(rdi); // Callee's JS function. } } @@ -3949,15 +4056,15 @@ void MacroAssembler::LeaveFrame(StackFrame::Type type) { void MacroAssembler::EnterExitFramePrologue(bool save_rax) { // Set up the frame structure on the stack. // All constants are relative to the frame pointer of the exit frame. - ASSERT(ExitFrameConstants::kCallerSPDisplacement == + DCHECK(ExitFrameConstants::kCallerSPDisplacement == kFPOnStackSize + kPCOnStackSize); - ASSERT(ExitFrameConstants::kCallerPCOffset == kFPOnStackSize); - ASSERT(ExitFrameConstants::kCallerFPOffset == 0 * kPointerSize); + DCHECK(ExitFrameConstants::kCallerPCOffset == kFPOnStackSize); + DCHECK(ExitFrameConstants::kCallerFPOffset == 0 * kPointerSize); pushq(rbp); movp(rbp, rsp); // Reserve room for entry stack pointer and push the code object. - ASSERT(ExitFrameConstants::kSPOffset == -1 * kPointerSize); + DCHECK(ExitFrameConstants::kSPOffset == -1 * kPointerSize); Push(Immediate(0)); // Saved entry sp, patched before call. Move(kScratchRegister, CodeObject(), RelocInfo::EMBEDDED_OBJECT); Push(kScratchRegister); // Accessed from ExitFrame::code_slot. @@ -3993,10 +4100,10 @@ void MacroAssembler::EnterExitFrameEpilogue(int arg_stack_space, } // Get the required frame alignment for the OS.
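// The DCHECKed ExitFrameConstants in EnterExitFramePrologue above pin the
// frame layout built by "pushq rbp; movp rbp, rsp". Assuming 8-byte PC and
// FP slots on x64, the slots relative to rbp are:
//   rbp + 16 : caller's stack area    (kCallerSPDisplacement = FP + PC sizes)
//   rbp +  8 : return address         (kCallerPCOffset == kFPOnStackSize)
//   rbp +  0 : saved caller rbp       (kCallerFPOffset == 0)
//   rbp -  8 : saved entry sp slot    (kSPOffset == -1 * kPointerSize)
//   rbp - 16 : code object pushed via kScratchRegister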
- const int kFrameAlignment = OS::ActivationFrameAlignment(); + const int kFrameAlignment = base::OS::ActivationFrameAlignment(); if (kFrameAlignment > 0) { - ASSERT(IsPowerOf2(kFrameAlignment)); - ASSERT(is_int8(kFrameAlignment)); + DCHECK(IsPowerOf2(kFrameAlignment)); + DCHECK(is_int8(kFrameAlignment)); andp(rsp, Immediate(-kFrameAlignment)); } @@ -4079,8 +4186,8 @@ void MacroAssembler::CheckAccessGlobalProxy(Register holder_reg, Label* miss) { Label same_contexts; - ASSERT(!holder_reg.is(scratch)); - ASSERT(!scratch.is(kScratchRegister)); + DCHECK(!holder_reg.is(scratch)); + DCHECK(!scratch.is(kScratchRegister)); // Load current lexical context from the stack frame. movp(scratch, Operand(rbp, StandardFrameConstants::kContextOffset)); @@ -4140,7 +4247,7 @@ void MacroAssembler::CheckAccessGlobalProxy(Register holder_reg, // Compute the hash code from the untagged key. This must be kept in sync with -// ComputeIntegerHash in utils.h and KeyedLoadGenericElementStub in +// ComputeIntegerHash in utils.h and KeyedLoadGenericStub in // code-stub-hydrogen.cc void MacroAssembler::GetNumberHash(Register r0, Register scratch) { // First of all we assign the hash seed to scratch. @@ -4226,7 +4333,7 @@ void MacroAssembler::LoadFromNumberDictionary(Label* miss, andp(r2, r1); // Scale the index by multiplying by the entry size. - ASSERT(SeededNumberDictionary::kEntrySize == 3); + DCHECK(SeededNumberDictionary::kEntrySize == 3); leap(r2, Operand(r2, r2, times_2, 0)); // r2 = r2 * 3 // Check if the key matches. @@ -4245,7 +4352,7 @@ void MacroAssembler::LoadFromNumberDictionary(Label* miss, // Check that the value is a normal property. const int kDetailsOffset = SeededNumberDictionary::kElementsStartOffset + 2 * kPointerSize; - ASSERT_EQ(NORMAL, 0); + DCHECK_EQ(NORMAL, 0); Test(FieldOperand(elements, r2, times_pointer_size, kDetailsOffset), Smi::FromInt(PropertyDetails::TypeField::kMask)); j(not_zero, miss); @@ -4266,7 +4373,7 @@ void MacroAssembler::LoadAllocationTopHelper(Register result, // Just return if allocation top is already known. if ((flags & RESULT_CONTAINS_TOP) != 0) { // No use of scratch if allocation top is provided. - ASSERT(!scratch.is_valid()); + DCHECK(!scratch.is_valid()); #ifdef DEBUG // Assert that result actually contains top on entry. Operand top_operand = ExternalOperand(allocation_top); @@ -4287,6 +4394,41 @@ void MacroAssembler::LoadAllocationTopHelper(Register result, } +void MacroAssembler::MakeSureDoubleAlignedHelper(Register result, + Register scratch, + Label* gc_required, + AllocationFlags flags) { + if (kPointerSize == kDoubleSize) { + if (FLAG_debug_code) { + testl(result, Immediate(kDoubleAlignmentMask)); + Check(zero, kAllocationIsNotDoubleAligned); + } + } else { + // Align the next allocation. Storing the filler map without checking top + // is safe in new-space because the limit of the heap is aligned there. + DCHECK(kPointerSize * 2 == kDoubleSize); + DCHECK((flags & PRETENURE_OLD_POINTER_SPACE) == 0); + DCHECK(kPointerAlignment * 2 == kDoubleAlignment); + // Make sure scratch is not clobbered by this function as it might be + // used in UpdateAllocationTopHelper later.
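// In LoadFromNumberDictionary above, DCHECK(SeededNumberDictionary::kEntrySize == 3)
// protects the scaling trick leap(r2, Operand(r2, r2, times_2, 0)), which
// multiplies by three in a single instruction. Host-side model:
#include <cstdint>
static uint32_t ScaleByEntrySize(uint32_t index) {
  return index + index * 2;  // lea: base + index*2, i.e. index * 3
}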
+ DCHECK(!scratch.is(kScratchRegister)); + Label aligned; + testl(result, Immediate(kDoubleAlignmentMask)); + j(zero, &aligned, Label::kNear); + if ((flags & PRETENURE_OLD_DATA_SPACE) != 0) { + ExternalReference allocation_limit = + AllocationUtils::GetAllocationLimitReference(isolate(), flags); + cmpp(result, ExternalOperand(allocation_limit)); + j(above_equal, gc_required); + } + LoadRoot(kScratchRegister, Heap::kOnePointerFillerMapRootIndex); + movp(Operand(result, 0), kScratchRegister); + addp(result, Immediate(kDoubleSize / 2)); + bind(&aligned); + } +} + + void MacroAssembler::UpdateAllocationTopHelper(Register result_end, Register scratch, AllocationFlags flags) { @@ -4314,8 +4456,8 @@ void MacroAssembler::Allocate(int object_size, Register scratch, Label* gc_required, AllocationFlags flags) { - ASSERT((flags & (RESULT_CONTAINS_TOP | SIZE_IN_WORDS)) == 0); - ASSERT(object_size <= Page::kMaxRegularHeapObjectSize); + DCHECK((flags & (RESULT_CONTAINS_TOP | SIZE_IN_WORDS)) == 0); + DCHECK(object_size <= Page::kMaxRegularHeapObjectSize); if (!FLAG_inline_new) { if (emit_debug_code()) { // Trash the registers to simulate an allocation failure. @@ -4330,16 +4472,13 @@ void MacroAssembler::Allocate(int object_size, jmp(gc_required); return; } - ASSERT(!result.is(result_end)); + DCHECK(!result.is(result_end)); // Load address of new object into result. LoadAllocationTopHelper(result, scratch, flags); - // Align the next allocation. Storing the filler map without checking top is - // safe in new-space because the limit of the heap is aligned there. - if (((flags & DOUBLE_ALIGNMENT) != 0) && FLAG_debug_code) { - testq(result, Immediate(kDoubleAlignmentMask)); - Check(zero, kAllocationIsNotDoubleAligned); + if ((flags & DOUBLE_ALIGNMENT) != 0) { + MakeSureDoubleAlignedHelper(result, scratch, gc_required, flags); } // Calculate new top and bail out if new space is exhausted. @@ -4369,7 +4508,7 @@ void MacroAssembler::Allocate(int object_size, } } else if (tag_result) { // Tag the result if requested. - ASSERT(kHeapObjectTag == 1); + DCHECK(kHeapObjectTag == 1); incp(result); } } @@ -4383,7 +4522,7 @@ void MacroAssembler::Allocate(int header_size, Register scratch, Label* gc_required, AllocationFlags flags) { - ASSERT((flags & SIZE_IN_WORDS) == 0); + DCHECK((flags & SIZE_IN_WORDS) == 0); leap(result_end, Operand(element_count, element_size, header_size)); Allocate(result_end, result, result_end, scratch, gc_required, flags); } @@ -4395,7 +4534,7 @@ void MacroAssembler::Allocate(Register object_size, Register scratch, Label* gc_required, AllocationFlags flags) { - ASSERT((flags & SIZE_IN_WORDS) == 0); + DCHECK((flags & SIZE_IN_WORDS) == 0); if (!FLAG_inline_new) { if (emit_debug_code()) { // Trash the registers to simulate an allocation failure. @@ -4409,16 +4548,13 @@ void MacroAssembler::Allocate(Register object_size, jmp(gc_required); return; } - ASSERT(!result.is(result_end)); + DCHECK(!result.is(result_end)); // Load address of new object into result. LoadAllocationTopHelper(result, scratch, flags); - // Align the next allocation. Storing the filler map without checking top is - // safe in new-space because the limit of the heap is aligned there. - if (((flags & DOUBLE_ALIGNMENT) != 0) && FLAG_debug_code) { - testq(result, Immediate(kDoubleAlignmentMask)); - Check(zero, kAllocationIsNotDoubleAligned); + if ((flags & DOUBLE_ALIGNMENT) != 0) { + MakeSureDoubleAlignedHelper(result, scratch, gc_required, flags); } // Calculate new top and bail out if new space is exhausted. 
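// MakeSureDoubleAlignedHelper (above) aligns the allocation top on targets
// where a pointer is half a double: a misaligned top gets a one-pointer
// filler object so the heap stays iterable, then moves up by half a double.
// Host-side model of the bump, assuming kDoubleAlignment == kDoubleSize:
#include <cstdint>
static uintptr_t AlignAllocationTop(uintptr_t top, uintptr_t double_size) {
  if (top & (double_size - 1)) {  // testl(result, Immediate(kDoubleAlignmentMask))
    // the generated code stores the one-pointer filler map at `top` here
    top += double_size / 2;       // addp(result, Immediate(kDoubleSize / 2))
  }
  return top;
}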
@@ -4460,12 +4596,17 @@ void MacroAssembler::UndoAllocationInNewSpace(Register object) { void MacroAssembler::AllocateHeapNumber(Register result, Register scratch, - Label* gc_required) { + Label* gc_required, + MutableMode mode) { // Allocate heap number in new space. Allocate(HeapNumber::kSize, result, scratch, no_reg, gc_required, TAG_OBJECT); + Heap::RootListIndex map_index = mode == MUTABLE + ? Heap::kMutableHeapNumberMapRootIndex + : Heap::kHeapNumberMapRootIndex; + // Set the map. - LoadRoot(kScratchRegister, Heap::kHeapNumberMapRootIndex); + LoadRoot(kScratchRegister, map_index); movp(FieldOperand(result, HeapObject::kMapOffset), kScratchRegister); } @@ -4480,7 +4621,7 @@ void MacroAssembler::AllocateTwoByteString(Register result, // observing object alignment. const int kHeaderAlignment = SeqTwoByteString::kHeaderSize & kObjectAlignmentMask; - ASSERT(kShortSize == 2); + DCHECK(kShortSize == 2); // scratch1 = length * 2 + kObjectAlignmentMask. leap(scratch1, Operand(length, length, times_1, kObjectAlignmentMask + kHeaderAlignment)); @@ -4520,7 +4661,7 @@ void MacroAssembler::AllocateAsciiString(Register result, const int kHeaderAlignment = SeqOneByteString::kHeaderSize & kObjectAlignmentMask; movl(scratch1, length); - ASSERT(kCharSize == 1); + DCHECK(kCharSize == 1); addp(scratch1, Immediate(kObjectAlignmentMask + kHeaderAlignment)); andp(scratch1, Immediate(~kObjectAlignmentMask)); if (kHeaderAlignment > 0) { @@ -4565,33 +4706,12 @@ void MacroAssembler::AllocateAsciiConsString(Register result, Register scratch1, Register scratch2, Label* gc_required) { - Label allocate_new_space, install_map; - AllocationFlags flags = TAG_OBJECT; - - ExternalReference high_promotion_mode = ExternalReference:: - new_space_high_promotion_mode_active_address(isolate()); - - Load(scratch1, high_promotion_mode); - testb(scratch1, Immediate(1)); - j(zero, &allocate_new_space); - Allocate(ConsString::kSize, - result, - scratch1, - scratch2, - gc_required, - static_cast<AllocationFlags>(flags | PRETENURE_OLD_POINTER_SPACE)); - - jmp(&install_map); - - bind(&allocate_new_space); Allocate(ConsString::kSize, result, scratch1, scratch2, gc_required, - flags); - - bind(&install_map); + TAG_OBJECT); // Set the map. The other fields are left uninitialized. LoadRoot(kScratchRegister, Heap::kConsAsciiStringMapRootIndex); @@ -4639,7 +4759,7 @@ void MacroAssembler::CopyBytes(Register destination, Register length, int min_length, Register scratch) { - ASSERT(min_length >= 0); + DCHECK(min_length >= 0); if (emit_debug_code()) { cmpl(length, Immediate(min_length)); Assert(greater_equal, kInvalidMinLength); @@ -4652,9 +4772,9 @@ void MacroAssembler::CopyBytes(Register destination, j(below, &short_string, Label::kNear); } - ASSERT(source.is(rsi)); - ASSERT(destination.is(rdi)); - ASSERT(length.is(rcx)); + DCHECK(source.is(rsi)); + DCHECK(destination.is(rdi)); + DCHECK(length.is(rcx)); if (min_length <= kLongStringLimit) { cmpl(length, Immediate(2 * kPointerSize)); @@ -4819,7 +4939,7 @@ int MacroAssembler::ArgumentStackSlotsForCFunctionCall(int num_arguments) { // arguments. // On AMD64 ABI (Linux/Mac) the first six arguments are passed in registers // and the caller does not reserve stack slots for them. 
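// The AllocateTwoByteString/AllocateAsciiString hunks above size the backing
// store by rounding the payload up to object alignment; the lea (or add)
// folds in kObjectAlignmentMask plus a header-alignment fudge, and the and
// clears the low bits. The core round-up as a host-side helper:
#include <cstddef>
static size_t RoundUpToObjectAlignment(size_t bytes, size_t alignment_mask) {
  return (bytes + alignment_mask) & ~alignment_mask;  // add mask, clear low bits
}
// For two-byte strings bytes is length * 2, which DCHECK(kShortSize == 2) pins.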
- ASSERT(num_arguments >= 0); + DCHECK(num_arguments >= 0); #ifdef _WIN64 const int kMinimumStackSlots = kRegisterPassedArguments; if (num_arguments < kMinimumStackSlots) return kMinimumStackSlots; @@ -4865,13 +4985,13 @@ void MacroAssembler::EmitSeqStringSetCharCheck(Register string, void MacroAssembler::PrepareCallCFunction(int num_arguments) { - int frame_alignment = OS::ActivationFrameAlignment(); - ASSERT(frame_alignment != 0); - ASSERT(num_arguments >= 0); + int frame_alignment = base::OS::ActivationFrameAlignment(); + DCHECK(frame_alignment != 0); + DCHECK(num_arguments >= 0); // Make stack end at alignment and allocate space for arguments and old rsp. movp(kScratchRegister, rsp); - ASSERT(IsPowerOf2(frame_alignment)); + DCHECK(IsPowerOf2(frame_alignment)); int argument_slots_on_stack = ArgumentStackSlotsForCFunctionCall(num_arguments); subp(rsp, Immediate((argument_slots_on_stack + 1) * kRegisterSize)); @@ -4888,30 +5008,48 @@ void MacroAssembler::CallCFunction(ExternalReference function, void MacroAssembler::CallCFunction(Register function, int num_arguments) { - ASSERT(has_frame()); + DCHECK(has_frame()); // Check stack alignment. if (emit_debug_code()) { CheckStackAlignment(); } call(function); - ASSERT(OS::ActivationFrameAlignment() != 0); - ASSERT(num_arguments >= 0); + DCHECK(base::OS::ActivationFrameAlignment() != 0); + DCHECK(num_arguments >= 0); int argument_slots_on_stack = ArgumentStackSlotsForCFunctionCall(num_arguments); movp(rsp, Operand(rsp, argument_slots_on_stack * kRegisterSize)); } -bool AreAliased(Register r1, Register r2, Register r3, Register r4) { - if (r1.is(r2)) return true; - if (r1.is(r3)) return true; - if (r1.is(r4)) return true; - if (r2.is(r3)) return true; - if (r2.is(r4)) return true; - if (r3.is(r4)) return true; - return false; +#ifdef DEBUG +bool AreAliased(Register reg1, + Register reg2, + Register reg3, + Register reg4, + Register reg5, + Register reg6, + Register reg7, + Register reg8) { + int n_of_valid_regs = reg1.is_valid() + reg2.is_valid() + + reg3.is_valid() + reg4.is_valid() + reg5.is_valid() + reg6.is_valid() + + reg7.is_valid() + reg8.is_valid(); + + RegList regs = 0; + if (reg1.is_valid()) regs |= reg1.bit(); + if (reg2.is_valid()) regs |= reg2.bit(); + if (reg3.is_valid()) regs |= reg3.bit(); + if (reg4.is_valid()) regs |= reg4.bit(); + if (reg5.is_valid()) regs |= reg5.bit(); + if (reg6.is_valid()) regs |= reg6.bit(); + if (reg7.is_valid()) regs |= reg7.bit(); + if (reg8.is_valid()) regs |= reg8.bit(); + int n_of_non_aliasing_regs = NumRegs(regs); + + return n_of_valid_regs != n_of_non_aliasing_regs; } +#endif CodePatcher::CodePatcher(byte* address, int size) @@ -4921,17 +5059,17 @@ CodePatcher::CodePatcher(byte* address, int size) // Create a new macro assembler pointing to the address of the code to patch. // The size is adjusted with kGap in order for the assembler to generate size // bytes of instructions without failing with buffer size constraints. - ASSERT(masm_.reloc_info_writer.pos() == address_ + size_ + Assembler::kGap); + DCHECK(masm_.reloc_info_writer.pos() == address_ + size_ + Assembler::kGap); } CodePatcher::~CodePatcher() { // Indicate that code has changed. - CPU::FlushICache(address_, size_); + CpuFeatures::FlushICache(address_, size_); // Check that the code was patched as expected.
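// The rewritten AreAliased above compares how many register arguments are
// valid with how many distinct bits their combined RegList sets; an alias
// collapses two bits into one, so the counts differ. Host-side model using
// integer register codes (negative standing in for no_reg):
#include <bitset>
#include <initializer_list>
static bool AreAliasedModel(std::initializer_list<int> reg_codes) {
  int n_valid = 0;
  unsigned long long regs = 0;
  for (int code : reg_codes) {
    if (code < 0) continue;  // skip invalid (no_reg) arguments
    ++n_valid;
    regs |= 1ULL << code;    // regs |= reg.bit()
  }
  int n_distinct = static_cast<int>(std::bitset<64>(regs).count());  // NumRegs(regs)
  return n_valid != n_distinct;
}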
- ASSERT(masm_.pc_ == address_ + size_); - ASSERT(masm_.reloc_info_writer.pos() == address_ + size_ + Assembler::kGap); + DCHECK(masm_.pc_ == address_ + size_); + DCHECK(masm_.reloc_info_writer.pos() == address_ + size_ + Assembler::kGap); } @@ -4942,7 +5080,7 @@ void MacroAssembler::CheckPageFlag( Condition cc, Label* condition_met, Label::Distance condition_met_distance) { - ASSERT(cc == zero || cc == not_zero); + DCHECK(cc == zero || cc == not_zero); if (scratch.is(object)) { andp(scratch, Immediate(~Page::kPageAlignmentMask)); } else { @@ -4964,9 +5102,8 @@ void MacroAssembler::CheckMapDeprecated(Handle<Map> map, Label* if_deprecated) { if (map->CanBeDeprecated()) { Move(scratch, map); - movp(scratch, FieldOperand(scratch, Map::kBitField3Offset)); - SmiToInteger32(scratch, scratch); - andp(scratch, Immediate(Map::Deprecated::kMask)); + movl(scratch, FieldOperand(scratch, Map::kBitField3Offset)); + andl(scratch, Immediate(Map::Deprecated::kMask)); j(not_zero, if_deprecated); } } @@ -4977,10 +5114,10 @@ void MacroAssembler::JumpIfBlack(Register object, Register mask_scratch, Label* on_black, Label::Distance on_black_distance) { - ASSERT(!AreAliased(object, bitmap_scratch, mask_scratch, rcx)); + DCHECK(!AreAliased(object, bitmap_scratch, mask_scratch, rcx)); GetMarkBits(object, bitmap_scratch, mask_scratch); - ASSERT(strcmp(Marking::kBlackBitPattern, "10") == 0); + DCHECK(strcmp(Marking::kBlackBitPattern, "10") == 0); // The mask_scratch register contains a 1 at the position of the first bit // and a 0 at all other positions, including the position of the second bit. movp(rcx, mask_scratch); @@ -5006,8 +5143,8 @@ void MacroAssembler::JumpIfDataObject( movp(scratch, FieldOperand(value, HeapObject::kMapOffset)); CompareRoot(scratch, Heap::kHeapNumberMapRootIndex); j(equal, &is_data_object, Label::kNear); - ASSERT(kIsIndirectStringTag == 1 && kIsIndirectStringMask == 1); - ASSERT(kNotStringTag == 0x80 && kIsNotStringMask == 0x80); + DCHECK(kIsIndirectStringTag == 1 && kIsIndirectStringMask == 1); + DCHECK(kNotStringTag == 0x80 && kIsNotStringMask == 0x80); // If it's a string and it's not a cons string then it's an object containing // no GC pointers. testb(FieldOperand(scratch, Map::kInstanceTypeOffset), @@ -5020,7 +5157,7 @@ void MacroAssembler::JumpIfDataObject( void MacroAssembler::GetMarkBits(Register addr_reg, Register bitmap_reg, Register mask_reg) { - ASSERT(!AreAliased(addr_reg, bitmap_reg, mask_reg, rcx)); + DCHECK(!AreAliased(addr_reg, bitmap_reg, mask_reg, rcx)); movp(bitmap_reg, addr_reg); // Sign extended 32 bit immediate. andp(bitmap_reg, Immediate(~Page::kPageAlignmentMask)); @@ -5047,14 +5184,14 @@ void MacroAssembler::EnsureNotWhite( Register mask_scratch, Label* value_is_white_and_not_data, Label::Distance distance) { - ASSERT(!AreAliased(value, bitmap_scratch, mask_scratch, rcx)); + DCHECK(!AreAliased(value, bitmap_scratch, mask_scratch, rcx)); GetMarkBits(value, bitmap_scratch, mask_scratch); // If the value is black or grey we don't need to do anything. 
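// The Marking DCHECKs here and in EnsureNotWhite just below pin the
// incremental marker's two-bit color encoding, written "<first bit><second
// bit>": white is 00, black is 10, grey is 11, and 01 cannot occur.
// Host-side predicates over the two mark bits:
static bool IsBlackOrGrey(bool first_bit) { return first_bit; }
static bool IsBlack(bool first_bit, bool second_bit) {
  return first_bit && !second_bit;  // pattern "10"
}
static bool IsGrey(bool first_bit, bool second_bit) {
  return first_bit && second_bit;   // pattern "11"
}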
- ASSERT(strcmp(Marking::kWhiteBitPattern, "00") == 0); - ASSERT(strcmp(Marking::kBlackBitPattern, "10") == 0); - ASSERT(strcmp(Marking::kGreyBitPattern, "11") == 0); - ASSERT(strcmp(Marking::kImpossibleBitPattern, "01") == 0); + DCHECK(strcmp(Marking::kWhiteBitPattern, "00") == 0); + DCHECK(strcmp(Marking::kBlackBitPattern, "10") == 0); + DCHECK(strcmp(Marking::kGreyBitPattern, "11") == 0); + DCHECK(strcmp(Marking::kImpossibleBitPattern, "01") == 0); Label done; @@ -5092,8 +5229,8 @@ void MacroAssembler::EnsureNotWhite( bind(&not_heap_number); // Check for strings. - ASSERT(kIsIndirectStringTag == 1 && kIsIndirectStringMask == 1); - ASSERT(kNotStringTag == 0x80 && kIsNotStringMask == 0x80); + DCHECK(kIsIndirectStringTag == 1 && kIsIndirectStringMask == 1); + DCHECK(kNotStringTag == 0x80 && kIsNotStringMask == 0x80); // If it's a string and it's not a cons string then it's an object containing // no GC pointers. Register instance_type = rcx; @@ -5106,8 +5243,8 @@ void MacroAssembler::EnsureNotWhite( Label not_external; // External strings are the only ones with the kExternalStringTag bit // set. - ASSERT_EQ(0, kSeqStringTag & kExternalStringTag); - ASSERT_EQ(0, kConsStringTag & kExternalStringTag); + DCHECK_EQ(0, kSeqStringTag & kExternalStringTag); + DCHECK_EQ(0, kConsStringTag & kExternalStringTag); testb(instance_type, Immediate(kExternalStringTag)); j(zero, &not_external, Label::kNear); movp(length, Immediate(ExternalString::kSize)); @@ -5115,7 +5252,7 @@ void MacroAssembler::EnsureNotWhite( bind(&not_external); // Sequential string, either ASCII or UC16. - ASSERT(kOneByteStringTag == 0x04); + DCHECK(kOneByteStringTag == 0x04); andp(length, Immediate(kStringEncodingMask)); xorp(length, Immediate(kStringEncodingMask)); addp(length, Immediate(0x04)); @@ -5208,8 +5345,8 @@ void MacroAssembler::JumpIfDictionaryInPrototypeChain( Register scratch0, Register scratch1, Label* found) { - ASSERT(!(scratch0.is(kScratchRegister) && scratch1.is(kScratchRegister))); - ASSERT(!scratch1.is(scratch0)); + DCHECK(!(scratch0.is(kScratchRegister) && scratch1.is(kScratchRegister))); + DCHECK(!scratch1.is(scratch0)); Register current = scratch0; Label loop_again; @@ -5219,8 +5356,7 @@ void MacroAssembler::JumpIfDictionaryInPrototypeChain( bind(&loop_again); movp(current, FieldOperand(current, HeapObject::kMapOffset)); movp(scratch1, FieldOperand(current, Map::kBitField2Offset)); - andp(scratch1, Immediate(Map::kElementsKindMask)); - shrp(scratch1, Immediate(Map::kElementsKindShift)); + DecodeField<Map::ElementsKindBits>(scratch1); cmpp(scratch1, Immediate(DICTIONARY_ELEMENTS)); j(equal, found); movp(current, FieldOperand(current, Map::kPrototypeOffset)); @@ -5230,8 +5366,8 @@ void MacroAssembler::JumpIfDictionaryInPrototypeChain( void MacroAssembler::TruncatingDiv(Register dividend, int32_t divisor) { - ASSERT(!dividend.is(rax)); - ASSERT(!dividend.is(rdx)); + DCHECK(!dividend.is(rax)); + DCHECK(!dividend.is(rdx)); MultiplierAndShift ms(divisor); movl(rax, Immediate(ms.multiplier())); imull(dividend); diff --git a/deps/v8/src/x64/macro-assembler-x64.h b/deps/v8/src/x64/macro-assembler-x64.h index d9893d62109..2ab05cf1ac9 100644 --- a/deps/v8/src/x64/macro-assembler-x64.h +++ b/deps/v8/src/x64/macro-assembler-x64.h @@ -5,9 +5,9 @@ #ifndef V8_X64_MACRO_ASSEMBLER_X64_H_ #define V8_X64_MACRO_ASSEMBLER_X64_H_ -#include "assembler.h" -#include "frames.h" -#include "v8globals.h" +#include "src/assembler.h" +#include "src/frames.h" +#include "src/globals.h" namespace v8 { namespace internal { @@ -29,6 +29,10 @@ typedef
Operand MemOperand; enum RememberedSetAction { EMIT_REMEMBERED_SET, OMIT_REMEMBERED_SET }; enum SmiCheck { INLINE_SMI_CHECK, OMIT_SMI_CHECK }; +enum PointersToHereCheck { + kPointersToHereMaybeInteresting, + kPointersToHereAreAlwaysInteresting +}; enum SmiOperationConstraint { PRESERVE_SOURCE_REGISTER, @@ -46,7 +50,16 @@ class SmiOperationExecutionMode : public EnumSet<SmiOperationConstraint, byte> { : EnumSet<SmiOperationConstraint, byte>(bits) { } }; -bool AreAliased(Register r1, Register r2, Register r3, Register r4); +#ifdef DEBUG +bool AreAliased(Register reg1, + Register reg2, + Register reg3 = no_reg, + Register reg4 = no_reg, + Register reg5 = no_reg, + Register reg6 = no_reg, + Register reg7 = no_reg, + Register reg8 = no_reg); +#endif // Forward declaration. class JumpTarget; @@ -220,7 +233,9 @@ class MacroAssembler: public Assembler { Register scratch, SaveFPRegsMode save_fp, RememberedSetAction remembered_set_action = EMIT_REMEMBERED_SET, - SmiCheck smi_check = INLINE_SMI_CHECK); + SmiCheck smi_check = INLINE_SMI_CHECK, + PointersToHereCheck pointers_to_here_check_for_value = + kPointersToHereMaybeInteresting); // As above, but the offset has the tag presubtracted. For use with // Operand(reg, off). @@ -231,14 +246,17 @@ class MacroAssembler: public Assembler { Register scratch, SaveFPRegsMode save_fp, RememberedSetAction remembered_set_action = EMIT_REMEMBERED_SET, - SmiCheck smi_check = INLINE_SMI_CHECK) { + SmiCheck smi_check = INLINE_SMI_CHECK, + PointersToHereCheck pointers_to_here_check_for_value = + kPointersToHereMaybeInteresting) { RecordWriteField(context, offset + kHeapObjectTag, value, scratch, save_fp, remembered_set_action, - smi_check); + smi_check, + pointers_to_here_check_for_value); } // Notify the garbage collector that we wrote a pointer into a fixed array. @@ -253,7 +271,15 @@ class MacroAssembler: public Assembler { Register index, SaveFPRegsMode save_fp, RememberedSetAction remembered_set_action = EMIT_REMEMBERED_SET, - SmiCheck smi_check = INLINE_SMI_CHECK); + SmiCheck smi_check = INLINE_SMI_CHECK, + PointersToHereCheck pointers_to_here_check_for_value = + kPointersToHereMaybeInteresting); + + void RecordWriteForMap( + Register object, + Register map, + Register dst, + SaveFPRegsMode save_fp); // For page containing |object| mark region covering |address| // dirty. |object| is the object being stored into, |value| is the @@ -266,7 +292,9 @@ class MacroAssembler: public Assembler { Register value, SaveFPRegsMode save_fp, RememberedSetAction remembered_set_action = EMIT_REMEMBERED_SET, - SmiCheck smi_check = INLINE_SMI_CHECK); + SmiCheck smi_check = INLINE_SMI_CHECK, + PointersToHereCheck pointers_to_here_check_for_value = + kPointersToHereMaybeInteresting); // --------------------------------------------------------------------------- // Debugger Support @@ -274,7 +302,8 @@ class MacroAssembler: public Assembler { void DebugBreak(); // Generates function and stub prologue code. - void Prologue(PrologueFrameMode frame_mode); + void StubPrologue(); + void Prologue(bool code_pre_aging); // Enter specific kind of exit frame; either in normal or // debug mode. Expects the number of arguments in register rax and @@ -469,10 +498,18 @@ class MacroAssembler: public Assembler { // Test-and-jump functions. Typically combines a check function // above with a conditional jump. + // Jump if the value can be represented by a smi. 
+ void JumpIfValidSmiValue(Register src, Label* on_valid, + Label::Distance near_jump = Label::kFar); + // Jump if the value cannot be represented by a smi. void JumpIfNotValidSmiValue(Register src, Label* on_invalid, Label::Distance near_jump = Label::kFar); + // Jump if the unsigned integer value can be represented by a smi. + void JumpIfUIntValidSmiValue(Register src, Label* on_valid, + Label::Distance near_jump = Label::kFar); + // Jump if the unsigned integer value cannot be represented by a smi. void JumpIfUIntNotValidSmiValue(Register src, Label* on_invalid, Label::Distance near_jump = Label::kFar); @@ -630,7 +667,9 @@ class MacroAssembler: public Assembler { void SmiShiftLeftConstant(Register dst, Register src, - int shift_value); + int shift_value, + Label* on_not_smi_result = NULL, + Label::Distance near_jump = Label::kFar); void SmiShiftLogicalRightConstant(Register dst, Register src, int shift_value, @@ -644,7 +683,9 @@ class MacroAssembler: public Assembler { // Uses and clobbers rcx, so dst may not be rcx. void SmiShiftLeft(Register dst, Register src1, - Register src2); + Register src2, + Label* on_not_smi_result = NULL, + Label::Distance near_jump = Label::kFar); // Shifts a smi value to the right, shifting in zero bits at the top, and // returns the unsigned interpretation of the result if that is a smi. // Uses and clobbers rcx, so dst may not be rcx. @@ -841,15 +882,15 @@ class MacroAssembler: public Assembler { void Move(Register dst, void* ptr, RelocInfo::Mode rmode) { // This method must not be used with heap object references. The stored // address is not GC safe. Use the handle version instead. - ASSERT(rmode > RelocInfo::LAST_GCED_ENUM); + DCHECK(rmode > RelocInfo::LAST_GCED_ENUM); movp(dst, ptr, rmode); } void Move(Register dst, Handle<Object> value, RelocInfo::Mode rmode) { AllowDeferredHandleDereference using_raw_address; - ASSERT(!RelocInfo::IsNone(rmode)); - ASSERT(value->IsHeapObject()); - ASSERT(!isolate()->heap()->InNewSpace(*value)); + DCHECK(!RelocInfo::IsNone(rmode)); + DCHECK(value->IsHeapObject()); + DCHECK(!isolate()->heap()->InNewSpace(*value)); movp(dst, reinterpret_cast<void*>(value.location()), rmode); } @@ -1003,7 +1044,7 @@ class MacroAssembler: public Assembler { MinusZeroMode minus_zero_mode, Label* lost_precision, Label::Distance dst = Label::kFar); - void LoadUint32(XMMRegister dst, Register src, XMMRegister scratch); + void LoadUint32(XMMRegister dst, Register src); void LoadInstanceDescriptors(Register map, Register descriptors); void EnumLength(Register dst, Register map); @@ -1011,11 +1052,32 @@ class MacroAssembler: public Assembler { template<typename Field> void DecodeField(Register reg) { - static const int shift = Field::kShift + kSmiShift; + static const int shift = Field::kShift; static const int mask = Field::kMask >> Field::kShift; - shrp(reg, Immediate(shift)); + if (shift != 0) { + shrp(reg, Immediate(shift)); + } andp(reg, Immediate(mask)); - shlp(reg, Immediate(kSmiShift)); + } + + template<typename Field> + void DecodeFieldToSmi(Register reg) { + if (SmiValuesAre32Bits()) { + andp(reg, Immediate(Field::kMask)); + shlp(reg, Immediate(kSmiShift - Field::kShift)); + } else { + static const int shift = Field::kShift; + static const int mask = (Field::kMask >> Field::kShift) << kSmiTagSize; + DCHECK(SmiValuesAre31Bits()); + DCHECK(kSmiShift == kSmiTagSize); + DCHECK((mask & 0x80000000u) == 0); + if (shift < kSmiShift) { + shlp(reg, Immediate(kSmiShift - shift)); + } else if (shift > kSmiShift) { + sarp(reg, Immediate(shift -
kSmiShift)); + } + andp(reg, Immediate(mask)); + } } // Abort execution if argument is not a number, enabled via --debug-code. @@ -1064,12 +1126,6 @@ class MacroAssembler: public Assembler { // Propagate an uncatchable exception out of the current JS stack. void ThrowUncatchable(Register value); - // Throw a message string as an exception. - void Throw(BailoutReason reason); - - // Throw a message string as an exception if a condition is not true. - void ThrowIf(Condition cc, BailoutReason reason); - // --------------------------------------------------------------------------- // Inline caching support @@ -1139,7 +1195,8 @@ class MacroAssembler: public Assembler { // space is full. void AllocateHeapNumber(Register result, Register scratch, - Label* gc_required); + Label* gc_required, + MutableMode mode = IMMUTABLE); // Allocate a sequential string. All the header fields of the string object // are initialized. @@ -1330,7 +1387,7 @@ class MacroAssembler: public Assembler { void Ret(int bytes_dropped, Register scratch); Handle<Object> CodeObject() { - ASSERT(!code_object_.is_null()); + DCHECK(!code_object_.is_null()); return code_object_; } @@ -1478,6 +1535,11 @@ class MacroAssembler: public Assembler { Register scratch, AllocationFlags flags); + void MakeSureDoubleAlignedHelper(Register result, + Register scratch, + Label* gc_required, + AllocationFlags flags); + // Update allocation top with value in result_end register. // If scratch is valid, it contains the address of the allocation top. void UpdateAllocationTopHelper(Register result_end, diff --git a/deps/v8/src/x64/regexp-macro-assembler-x64.cc b/deps/v8/src/x64/regexp-macro-assembler-x64.cc index 6a9a264f974..731089a508e 100644 --- a/deps/v8/src/x64/regexp-macro-assembler-x64.cc +++ b/deps/v8/src/x64/regexp-macro-assembler-x64.cc @@ -2,18 +2,18 @@ // Use of this source code is governed by a BSD-style license that can be // found in the LICENSE file. -#include "v8.h" +#include "src/v8.h" #if V8_TARGET_ARCH_X64 -#include "cpu-profiler.h" -#include "serialize.h" -#include "unicode.h" -#include "log.h" -#include "regexp-stack.h" -#include "macro-assembler.h" -#include "regexp-macro-assembler.h" -#include "x64/regexp-macro-assembler-x64.h" +#include "src/cpu-profiler.h" +#include "src/log.h" +#include "src/macro-assembler.h" +#include "src/regexp-macro-assembler.h" +#include "src/regexp-stack.h" +#include "src/serialize.h" +#include "src/unicode.h" +#include "src/x64/regexp-macro-assembler-x64.h" namespace v8 { namespace internal { @@ -109,7 +109,7 @@ RegExpMacroAssemblerX64::RegExpMacroAssemblerX64( success_label_(), backtrack_label_(), exit_label_() { - ASSERT_EQ(0, registers_to_save % 2); + DCHECK_EQ(0, registers_to_save % 2); __ jmp(&entry_label_); // We'll write the entry code when we know more. __ bind(&start_label_); // And then continue from here. } @@ -140,8 +140,8 @@ void RegExpMacroAssemblerX64::AdvanceCurrentPosition(int by) { void RegExpMacroAssemblerX64::AdvanceRegister(int reg, int by) { - ASSERT(reg >= 0); - ASSERT(reg < num_registers_); + DCHECK(reg >= 0); + DCHECK(reg < num_registers_); if (by != 0) { __ addp(register_location(reg), Immediate(by)); } @@ -295,7 +295,7 @@ void RegExpMacroAssemblerX64::CheckNotBackReferenceIgnoreCase( __ movp(rdi, r11); __ subq(rdi, rsi); } else { - ASSERT(mode_ == UC16); + DCHECK(mode_ == UC16); // Save important/volatile registers before calling C function. #ifndef _WIN64 // Caller save on Linux and callee save in Windows. 
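// The DecodeField/DecodeFieldToSmi templates in the macro-assembler header
// hunk above now operate on untagged words: shift the field down, then mask.
// Host-side model of DecodeField for a Field described by kShift and kMask:
#include <cstdint>
static uint32_t DecodeFieldModel(uint32_t word, int shift, uint32_t field_mask) {
  return (word >> shift) & (field_mask >> shift);  // shrp then andp
}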
@@ -404,7 +404,7 @@ void RegExpMacroAssemblerX64::CheckNotBackReference( __ movzxbl(rax, Operand(rdx, 0)); __ cmpb(rax, Operand(rbx, 0)); } else { - ASSERT(mode_ == UC16); + DCHECK(mode_ == UC16); __ movzxwl(rax, Operand(rdx, 0)); __ cmpw(rax, Operand(rbx, 0)); } @@ -465,7 +465,7 @@ void RegExpMacroAssemblerX64::CheckNotCharacterAfterMinusAnd( uc16 minus, uc16 mask, Label* on_not_equal) { - ASSERT(minus < String::kMaxUtf16CodeUnit); + DCHECK(minus < String::kMaxUtf16CodeUnit); __ leap(rax, Operand(current_character(), -minus)); __ andp(rax, Immediate(mask)); __ cmpl(rax, Immediate(c)); @@ -596,7 +596,7 @@ bool RegExpMacroAssemblerX64::CheckSpecialCharacterClass(uc16 type, BranchOrBacktrack(above, on_no_match); } __ Move(rbx, ExternalReference::re_word_character_map()); - ASSERT_EQ(0, word_character_map[0]); // Character '\0' is not a word char. + DCHECK_EQ(0, word_character_map[0]); // Character '\0' is not a word char. __ testb(Operand(rbx, current_character(), times_1, 0), current_character()); BranchOrBacktrack(zero, on_no_match); @@ -610,7 +610,7 @@ bool RegExpMacroAssemblerX64::CheckSpecialCharacterClass(uc16 type, __ j(above, &done); } __ Move(rbx, ExternalReference::re_word_character_map()); - ASSERT_EQ(0, word_character_map[0]); // Character '\0' is not a word char. + DCHECK_EQ(0, word_character_map[0]); // Character '\0' is not a word char. __ testb(Operand(rbx, current_character(), times_1, 0), current_character()); BranchOrBacktrack(not_zero, on_no_match); @@ -669,12 +669,12 @@ Handle<HeapObject> RegExpMacroAssemblerX64::GetCode(Handle<String> source) { #else // GCC passes arguments in rdi, rsi, rdx, rcx, r8, r9 (and then on stack). // Push register parameters on stack for reference. - ASSERT_EQ(kInputString, -1 * kRegisterSize); - ASSERT_EQ(kStartIndex, -2 * kRegisterSize); - ASSERT_EQ(kInputStart, -3 * kRegisterSize); - ASSERT_EQ(kInputEnd, -4 * kRegisterSize); - ASSERT_EQ(kRegisterOutput, -5 * kRegisterSize); - ASSERT_EQ(kNumOutputRegisters, -6 * kRegisterSize); + DCHECK_EQ(kInputString, -1 * kRegisterSize); + DCHECK_EQ(kStartIndex, -2 * kRegisterSize); + DCHECK_EQ(kInputStart, -3 * kRegisterSize); + DCHECK_EQ(kInputEnd, -4 * kRegisterSize); + DCHECK_EQ(kRegisterOutput, -5 * kRegisterSize); + DCHECK_EQ(kNumOutputRegisters, -6 * kRegisterSize); __ pushq(rdi); __ pushq(rsi); __ pushq(rdx); @@ -1022,8 +1022,8 @@ void RegExpMacroAssemblerX64::LoadCurrentCharacter(int cp_offset, Label* on_end_of_input, bool check_bounds, int characters) { - ASSERT(cp_offset >= -1); // ^ and \b can look behind one character. - ASSERT(cp_offset < (1<<30)); // Be sane! (And ensure negation works) + DCHECK(cp_offset >= -1); // ^ and \b can look behind one character. + DCHECK(cp_offset < (1<<30)); // Be sane! (And ensure negation works) if (check_bounds) { CheckPosition(cp_offset + characters - 1, on_end_of_input); } @@ -1104,7 +1104,7 @@ void RegExpMacroAssemblerX64::SetCurrentPositionFromEnd(int by) { void RegExpMacroAssemblerX64::SetRegister(int register_index, int to) { - ASSERT(register_index >= num_saved_registers_); // Reserved for positions! + DCHECK(register_index >= num_saved_registers_); // Reserved for positions! 
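// CheckNotCharacterAfterMinusAnd above is the regexp engine's
// subtract-then-mask character test: it backtracks unless
// ((current - minus) & mask) == c. Host-side model:
#include <cstdint>
static bool MatchesMinusAnd(uint16_t current, uint16_t c, uint16_t minus,
                            uint16_t mask) {
  return static_cast<uint16_t>((current - minus) & mask) == c;  // lea, and, cmp
}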
__ movp(register_location(register_index), Immediate(to)); } @@ -1127,7 +1127,7 @@ void RegExpMacroAssemblerX64::WriteCurrentPositionToRegister(int reg, void RegExpMacroAssemblerX64::ClearRegisters(int reg_from, int reg_to) { - ASSERT(reg_from <= reg_to); + DCHECK(reg_from <= reg_to); __ movp(rax, Operand(rbp, kInputStartMinusOne)); for (int reg = reg_from; reg <= reg_to; reg++) { __ movp(register_location(reg), rax); @@ -1183,7 +1183,8 @@ int RegExpMacroAssemblerX64::CheckStackGuardState(Address* return_address, Code* re_code, Address re_frame) { Isolate* isolate = frame_entry<Isolate*>(re_frame, kIsolate); - if (isolate->stack_guard()->IsStackOverflow()) { + StackLimitCheck check(isolate); + if (check.JsHasOverflowed()) { isolate->StackOverflow(); return EXCEPTION; } @@ -1206,11 +1207,11 @@ int RegExpMacroAssemblerX64::CheckStackGuardState(Address* return_address, // Current string. bool is_ascii = subject->IsOneByteRepresentationUnderneath(); - ASSERT(re_code->instruction_start() <= *return_address); - ASSERT(*return_address <= + DCHECK(re_code->instruction_start() <= *return_address); + DCHECK(*return_address <= re_code->instruction_start() + re_code->instruction_size()); - Object* result = Execution::HandleStackGuardInterrupt(isolate); + Object* result = isolate->stack_guard()->HandleInterrupts(); if (*code_handle != re_code) { // Return address no longer valid intptr_t delta = code_handle->address() - re_code->address(); @@ -1246,7 +1247,7 @@ int RegExpMacroAssemblerX64::CheckStackGuardState(Address* return_address, // be a sequential or external string with the same content. // Update the start and end pointers in the stack frame to the current // location (whether it has actually moved or not). - ASSERT(StringShape(*subject_tmp).IsSequential() || + DCHECK(StringShape(*subject_tmp).IsSequential() || StringShape(*subject_tmp).IsExternal()); // The original start address of the characters to match. @@ -1278,7 +1279,7 @@ int RegExpMacroAssemblerX64::CheckStackGuardState(Address* return_address, Operand RegExpMacroAssemblerX64::register_location(int register_index) { - ASSERT(register_index < (1<<30)); + DCHECK(register_index < (1<<30)); if (num_registers_ <= register_index) { num_registers_ = register_index + 1; } @@ -1329,7 +1330,7 @@ void RegExpMacroAssemblerX64::SafeReturn() { void RegExpMacroAssemblerX64::Push(Register source) { - ASSERT(!source.is(backtrack_stackpointer())); + DCHECK(!source.is(backtrack_stackpointer())); // Notice: This updates flags, unlike normal Push. __ subp(backtrack_stackpointer(), Immediate(kIntSize)); __ movl(Operand(backtrack_stackpointer(), 0), source); @@ -1369,7 +1370,7 @@ void RegExpMacroAssemblerX64::Push(Label* backtrack_target) { void RegExpMacroAssemblerX64::Pop(Register target) { - ASSERT(!target.is(backtrack_stackpointer())); + DCHECK(!target.is(backtrack_stackpointer())); __ movsxlq(target, Operand(backtrack_stackpointer(), 0)); // Notice: This updates flags, unlike normal Pop. 
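// The backtrack stack Push/Pop around this point store 32-bit entries on a
// downward-growing stack, and Pop sign-extends on the way out (movsxlq).
// Host-side model:
#include <cstdint>
static void PushBacktrack(int32_t*& sp, int32_t value) {
  *--sp = value;                       // subp sp, kIntSize; movl [sp], value
}
static int64_t PopBacktrack(int32_t*& sp) {
  return static_cast<int64_t>(*sp++);  // movsxlq dst, [sp]; addp sp, kIntSize
}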
__ addp(backtrack_stackpointer(), Immediate(kIntSize)); @@ -1418,16 +1419,16 @@ void RegExpMacroAssemblerX64::LoadCurrentCharacterUnchecked(int cp_offset, } else if (characters == 2) { __ movzxwl(current_character(), Operand(rsi, rdi, times_1, cp_offset)); } else { - ASSERT(characters == 1); + DCHECK(characters == 1); __ movzxbl(current_character(), Operand(rsi, rdi, times_1, cp_offset)); } } else { - ASSERT(mode_ == UC16); + DCHECK(mode_ == UC16); if (characters == 2) { __ movl(current_character(), Operand(rsi, rdi, times_1, cp_offset * sizeof(uc16))); } else { - ASSERT(characters == 1); + DCHECK(characters == 1); __ movzxwl(current_character(), Operand(rsi, rdi, times_1, cp_offset * sizeof(uc16))); } diff --git a/deps/v8/src/x64/regexp-macro-assembler-x64.h b/deps/v8/src/x64/regexp-macro-assembler-x64.h index e9f6a35ddb3..2e2e45e35f9 100644 --- a/deps/v8/src/x64/regexp-macro-assembler-x64.h +++ b/deps/v8/src/x64/regexp-macro-assembler-x64.h @@ -5,11 +5,10 @@ #ifndef V8_X64_REGEXP_MACRO_ASSEMBLER_X64_H_ #define V8_X64_REGEXP_MACRO_ASSEMBLER_X64_H_ -#include "x64/assembler-x64.h" -#include "x64/assembler-x64-inl.h" -#include "macro-assembler.h" -#include "code.h" -#include "x64/macro-assembler-x64.h" +#include "src/macro-assembler.h" +#include "src/x64/assembler-x64-inl.h" +#include "src/x64/assembler-x64.h" +#include "src/x64/macro-assembler-x64.h" namespace v8 { namespace internal { diff --git a/deps/v8/src/x64/simulator-x64.h b/deps/v8/src/x64/simulator-x64.h index a43728f01dc..35cbdc78884 100644 --- a/deps/v8/src/x64/simulator-x64.h +++ b/deps/v8/src/x64/simulator-x64.h @@ -5,7 +5,7 @@ #ifndef V8_X64_SIMULATOR_X64_H_ #define V8_X64_SIMULATOR_X64_H_ -#include "allocation.h" +#include "src/allocation.h" namespace v8 { namespace internal { @@ -24,9 +24,6 @@ typedef int (*regexp_matcher)(String*, int, const byte*, #define CALL_GENERATED_REGEXP_CODE(entry, p0, p1, p2, p3, p4, p5, p6, p7, p8) \ (FUNCTION_CAST<regexp_matcher>(entry)(p0, p1, p2, p3, p4, p5, p6, p7, p8)) -#define TRY_CATCH_FROM_ADDRESS(try_catch_address) \ - (reinterpret_cast<TryCatch*>(try_catch_address)) - // The stack limit beyond which we will throw stack overflow errors in // generated code. Because generated code on x64 uses the C stack, we // just use the C stack limit. diff --git a/deps/v8/src/x64/stub-cache-x64.cc b/deps/v8/src/x64/stub-cache-x64.cc index 537f4123517..504482d9314 100644 --- a/deps/v8/src/x64/stub-cache-x64.cc +++ b/deps/v8/src/x64/stub-cache-x64.cc @@ -2,14 +2,14 @@ // Use of this source code is governed by a BSD-style license that can be // found in the LICENSE file. -#include "v8.h" +#include "src/v8.h" #if V8_TARGET_ARCH_X64 -#include "arguments.h" -#include "ic-inl.h" -#include "codegen.h" -#include "stub-cache.h" +#include "src/arguments.h" +#include "src/codegen.h" +#include "src/ic-inl.h" +#include "src/stub-cache.h" namespace v8 { namespace internal { @@ -24,16 +24,16 @@ static void ProbeTable(Isolate* isolate, Register receiver, Register name, // The offset is scaled by 4, based on - // kHeapObjectTagSize, which is two bits + // kCacheIndexShift, which is two bits Register offset) { // We need to scale up the pointer by 2 when the offset is scaled by less // than the pointer size. - ASSERT(kPointerSize == kInt64Size - ? kPointerSizeLog2 == kHeapObjectTagSize + 1 - : kPointerSizeLog2 == kHeapObjectTagSize); + DCHECK(kPointerSize == kInt64Size + ? 
kPointerSizeLog2 == StubCache::kCacheIndexShift + 1 + : kPointerSizeLog2 == StubCache::kCacheIndexShift); ScaleFactor scale_factor = kPointerSize == kInt64Size ? times_2 : times_1; - ASSERT_EQ(3 * kPointerSize, sizeof(StubCache::Entry)); + DCHECK_EQ(3 * kPointerSize, sizeof(StubCache::Entry)); // The offset register holds the entry offset times four (due to masking // and shifting optimizations). ExternalReference key_offset(isolate->stub_cache()->key_reference(table)); @@ -86,14 +86,11 @@ static void ProbeTable(Isolate* isolate, } -void StubCompiler::GenerateDictionaryNegativeLookup(MacroAssembler* masm, - Label* miss_label, - Register receiver, - Handle<Name> name, - Register scratch0, - Register scratch1) { - ASSERT(name->IsUniqueName()); - ASSERT(!receiver.is(scratch0)); +void PropertyHandlerCompiler::GenerateDictionaryNegativeLookup( + MacroAssembler* masm, Label* miss_label, Register receiver, + Handle<Name> name, Register scratch0, Register scratch1) { + DCHECK(name->IsUniqueName()); + DCHECK(!receiver.is(scratch0)); Counters* counters = masm->isolate()->counters(); __ IncrementCounter(counters->negative_lookups(), 1); __ IncrementCounter(counters->negative_lookups_miss(), 1); @@ -148,19 +145,19 @@ void StubCache::GenerateProbe(MacroAssembler* masm, USE(extra3); // The register extra2 is not used on the X64 platform. // Make sure that code is valid. The multiplying code relies on the // entry size being 3 * kPointerSize. - ASSERT(sizeof(Entry) == 3 * kPointerSize); + DCHECK(sizeof(Entry) == 3 * kPointerSize); // Make sure the flags do not name a specific type. - ASSERT(Code::ExtractTypeFromFlags(flags) == 0); + DCHECK(Code::ExtractTypeFromFlags(flags) == 0); // Make sure that there are no register conflicts. - ASSERT(!scratch.is(receiver)); - ASSERT(!scratch.is(name)); + DCHECK(!scratch.is(receiver)); + DCHECK(!scratch.is(name)); // Check scratch register is valid, extra and extra2 are unused. - ASSERT(!scratch.is(no_reg)); - ASSERT(extra2.is(no_reg)); - ASSERT(extra3.is(no_reg)); + DCHECK(!scratch.is(no_reg)); + DCHECK(extra2.is(no_reg)); + DCHECK(extra3.is(no_reg)); Counters* counters = masm->isolate()->counters(); __ IncrementCounter(counters->megamorphic_stub_cache_probes(), 1); @@ -175,7 +172,7 @@ void StubCache::GenerateProbe(MacroAssembler* masm, __ xorp(scratch, Immediate(flags)); // We mask out the last two bits because they are not part of the hash and // they are always 01 for maps. Also in the two 'and' instructions below. - __ andp(scratch, Immediate((kPrimaryTableSize - 1) << kHeapObjectTagSize)); + __ andp(scratch, Immediate((kPrimaryTableSize - 1) << kCacheIndexShift)); // Probe the primary table. ProbeTable(isolate, masm, flags, kPrimary, receiver, name, scratch); @@ -184,10 +181,10 @@ void StubCache::GenerateProbe(MacroAssembler* masm, __ movl(scratch, FieldOperand(name, Name::kHashFieldOffset)); __ addl(scratch, FieldOperand(receiver, HeapObject::kMapOffset)); __ xorp(scratch, Immediate(flags)); - __ andp(scratch, Immediate((kPrimaryTableSize - 1) << kHeapObjectTagSize)); + __ andp(scratch, Immediate((kPrimaryTableSize - 1) << kCacheIndexShift)); __ subl(scratch, name); __ addl(scratch, Immediate(flags)); - __ andp(scratch, Immediate((kSecondaryTableSize - 1) << kHeapObjectTagSize)); + __ andp(scratch, Immediate((kSecondaryTableSize - 1) << kCacheIndexShift)); // Probe the secondary table. 
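// StubCache::GenerateProbe above hashes (name hash, map, flags) into table
// offsets that stay pre-scaled by kCacheIndexShift (two bits), which is why
// every mask has the form (table_size - 1) << kCacheIndexShift. Host-side
// model of the primary-table offset:
#include <cstdint>
static uint32_t PrimaryOffsetModel(uint32_t name_hash, uint32_t map_bits,
                                   uint32_t flags, uint32_t table_size,
                                   int cache_index_shift) {
  uint32_t scratch = name_hash + map_bits;  // addl(scratch, receiver's map word)
  scratch ^= flags;                         // xorp(scratch, Immediate(flags))
  return scratch & ((table_size - 1) << cache_index_shift);
}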
ProbeTable(isolate, masm, flags, kSecondary, receiver, name, scratch); @@ -199,30 +196,8 @@ void StubCache::GenerateProbe(MacroAssembler* masm, } -void StubCompiler::GenerateLoadGlobalFunctionPrototype(MacroAssembler* masm, - int index, - Register prototype) { - // Load the global or builtins object from the current context. - __ movp(prototype, - Operand(rsi, Context::SlotOffset(Context::GLOBAL_OBJECT_INDEX))); - // Load the native context from the global or builtins object. - __ movp(prototype, - FieldOperand(prototype, GlobalObject::kNativeContextOffset)); - // Load the function from the native context. - __ movp(prototype, Operand(prototype, Context::SlotOffset(index))); - // Load the initial map. The global functions all have initial maps. - __ movp(prototype, - FieldOperand(prototype, JSFunction::kPrototypeOrInitialMapOffset)); - // Load the prototype from the initial map. - __ movp(prototype, FieldOperand(prototype, Map::kPrototypeOffset)); -} - - -void StubCompiler::GenerateDirectLoadGlobalFunctionPrototype( - MacroAssembler* masm, - int index, - Register prototype, - Label* miss) { +void NamedLoadHandlerCompiler::GenerateDirectLoadGlobalFunctionPrototype( + MacroAssembler* masm, int index, Register prototype, Label* miss) { Isolate* isolate = masm->isolate(); // Get the global function with the given index. Handle<JSFunction> function( @@ -243,65 +218,28 @@ void StubCompiler::GenerateDirectLoadGlobalFunctionPrototype( } -void StubCompiler::GenerateLoadArrayLength(MacroAssembler* masm, - Register receiver, - Register scratch, - Label* miss_label) { - // Check that the receiver isn't a smi. - __ JumpIfSmi(receiver, miss_label); - - // Check that the object is a JS array. - __ CmpObjectType(receiver, JS_ARRAY_TYPE, scratch); - __ j(not_equal, miss_label); - - // Load length directly from the JS array. - __ movp(rax, FieldOperand(receiver, JSArray::kLengthOffset)); - __ ret(0); -} - - -void StubCompiler::GenerateLoadFunctionPrototype(MacroAssembler* masm, - Register receiver, - Register result, - Register scratch, - Label* miss_label) { +void NamedLoadHandlerCompiler::GenerateLoadFunctionPrototype( + MacroAssembler* masm, Register receiver, Register result, Register scratch, + Label* miss_label) { __ TryGetFunctionPrototype(receiver, result, miss_label); if (!result.is(rax)) __ movp(rax, result); __ ret(0); } -void StubCompiler::GenerateFastPropertyLoad(MacroAssembler* masm, - Register dst, - Register src, - bool inobject, - int index, - Representation representation) { - ASSERT(!representation.IsDouble()); - int offset = index * kPointerSize; - if (!inobject) { - // Calculate the offset into the properties array. 
- offset = offset + FixedArray::kHeaderSize; - __ movp(dst, FieldOperand(src, JSObject::kPropertiesOffset)); - src = dst; - } - __ movp(dst, FieldOperand(src, offset)); -} - - static void PushInterceptorArguments(MacroAssembler* masm, Register receiver, Register holder, Register name, Handle<JSObject> holder_obj) { - STATIC_ASSERT(StubCache::kInterceptorArgsNameIndex == 0); - STATIC_ASSERT(StubCache::kInterceptorArgsInfoIndex == 1); - STATIC_ASSERT(StubCache::kInterceptorArgsThisIndex == 2); - STATIC_ASSERT(StubCache::kInterceptorArgsHolderIndex == 3); - STATIC_ASSERT(StubCache::kInterceptorArgsLength == 4); + STATIC_ASSERT(NamedLoadHandlerCompiler::kInterceptorArgsNameIndex == 0); + STATIC_ASSERT(NamedLoadHandlerCompiler::kInterceptorArgsInfoIndex == 1); + STATIC_ASSERT(NamedLoadHandlerCompiler::kInterceptorArgsThisIndex == 2); + STATIC_ASSERT(NamedLoadHandlerCompiler::kInterceptorArgsHolderIndex == 3); + STATIC_ASSERT(NamedLoadHandlerCompiler::kInterceptorArgsLength == 4); __ Push(name); Handle<InterceptorInfo> interceptor(holder_obj->GetNamedInterceptor()); - ASSERT(!masm->isolate()->heap()->InNewSpace(*interceptor)); + DCHECK(!masm->isolate()->heap()->InNewSpace(*interceptor)); __ Move(kScratchRegister, interceptor); __ Push(kScratchRegister); __ Push(receiver); @@ -317,22 +255,17 @@ static void CompileCallLoadPropertyWithInterceptor( Handle<JSObject> holder_obj, IC::UtilityId id) { PushInterceptorArguments(masm, receiver, holder, name, holder_obj); - __ CallExternalReference( - ExternalReference(IC_Utility(id), masm->isolate()), - StubCache::kInterceptorArgsLength); + __ CallExternalReference(ExternalReference(IC_Utility(id), masm->isolate()), + NamedLoadHandlerCompiler::kInterceptorArgsLength); } // Generate call to api function. -void StubCompiler::GenerateFastApiCall(MacroAssembler* masm, - const CallOptimization& optimization, - Handle<Map> receiver_map, - Register receiver, - Register scratch_in, - bool is_store, - int argc, - Register* values) { - ASSERT(optimization.is_simple_api_call()); +void PropertyHandlerCompiler::GenerateFastApiCall( + MacroAssembler* masm, const CallOptimization& optimization, + Handle<Map> receiver_map, Register receiver, Register scratch_in, + bool is_store, int argc, Register* values) { + DCHECK(optimization.is_simple_api_call()); __ PopReturnAddressTo(scratch_in); // receiver @@ -340,8 +273,8 @@ void StubCompiler::GenerateFastApiCall(MacroAssembler* masm, // Write the arguments to stack frame. 
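// The STATIC_ASSERTs in PushInterceptorArguments above pin the argument
// layout the interceptor IC expects; mirrored here as an enum for reference
// (names illustrative, values taken from the asserts):
enum InterceptorArgsLayout {
  kInterceptorName = 0,    // kInterceptorArgsNameIndex
  kInterceptorInfo = 1,    // kInterceptorArgsInfoIndex
  kInterceptorThis = 2,    // kInterceptorArgsThisIndex
  kInterceptorHolder = 3,  // kInterceptorArgsHolderIndex
  kInterceptorCount = 4    // kInterceptorArgsLength
};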
for (int i = 0; i < argc; i++) { Register arg = values[argc-1-i]; - ASSERT(!receiver.is(arg)); - ASSERT(!scratch_in.is(arg)); + DCHECK(!receiver.is(arg)); + DCHECK(!scratch_in.is(arg)); __ Push(arg); } __ PushReturnAddressFrom(scratch_in); @@ -402,24 +335,12 @@ void StubCompiler::GenerateFastApiCall(MacroAssembler* masm, } -void StoreStubCompiler::GenerateRestoreName(MacroAssembler* masm, - Label* label, - Handle<Name> name) { - if (!label->is_unused()) { - __ bind(label); - __ Move(this->name(), name); - } -} - - -void StubCompiler::GenerateCheckPropertyCell(MacroAssembler* masm, - Handle<JSGlobalObject> global, - Handle<Name> name, - Register scratch, - Label* miss) { +void PropertyHandlerCompiler::GenerateCheckPropertyCell( + MacroAssembler* masm, Handle<JSGlobalObject> global, Handle<Name> name, + Register scratch, Label* miss) { Handle<PropertyCell> cell = JSGlobalObject::EnsurePropertyCell(global, name); - ASSERT(cell->value()->IsTheHole()); + DCHECK(cell->value()->IsTheHole()); __ Move(scratch, cell); __ Cmp(FieldOperand(scratch, Cell::kValueOffset), masm->isolate()->factory()->the_hole_value()); @@ -427,45 +348,39 @@ void StubCompiler::GenerateCheckPropertyCell(MacroAssembler* masm, } -void StoreStubCompiler::GenerateNegativeHolderLookup( - MacroAssembler* masm, - Handle<JSObject> holder, - Register holder_reg, - Handle<Name> name, - Label* miss) { - if (holder->IsJSGlobalObject()) { - GenerateCheckPropertyCell( - masm, Handle<JSGlobalObject>::cast(holder), name, scratch1(), miss); - } else if (!holder->HasFastProperties() && !holder->IsJSGlobalProxy()) { - GenerateDictionaryNegativeLookup( - masm, miss, holder_reg, name, scratch1(), scratch2()); +void PropertyAccessCompiler::GenerateTailCall(MacroAssembler* masm, + Handle<Code> code) { + __ jmp(code, RelocInfo::CODE_TARGET); +} + + +#undef __ +#define __ ACCESS_MASM((masm())) + + +void NamedStoreHandlerCompiler::GenerateRestoreName(Label* label, + Handle<Name> name) { + if (!label->is_unused()) { + __ bind(label); + __ Move(this->name(), name); } } // Receiver_reg is preserved on jumps to miss_label, but may be destroyed if // store is successful. 
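// A transition store adds the property by installing the transition map on
// the receiver once the value has been checked against the field
// representation recorded in that map's descriptors.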
-void StoreStubCompiler::GenerateStoreTransition(MacroAssembler* masm, - Handle<JSObject> object, - LookupResult* lookup, - Handle<Map> transition, - Handle<Name> name, - Register receiver_reg, - Register storage_reg, - Register value_reg, - Register scratch1, - Register scratch2, - Register unused, - Label* miss_label, - Label* slow) { +void NamedStoreHandlerCompiler::GenerateStoreTransition( + Handle<Map> transition, Handle<Name> name, Register receiver_reg, + Register storage_reg, Register value_reg, Register scratch1, + Register scratch2, Register unused, Label* miss_label, Label* slow) { int descriptor = transition->LastAdded(); DescriptorArray* descriptors = transition->instance_descriptors(); PropertyDetails details = descriptors->GetDetails(descriptor); Representation representation = details.representation(); - ASSERT(!representation.IsNone()); + DCHECK(!representation.IsNone()); if (details.type() == CONSTANT) { - Handle<Object> constant(descriptors->GetValue(descriptor), masm->isolate()); + Handle<Object> constant(descriptors->GetValue(descriptor), isolate()); __ Cmp(value_reg, constant); __ j(not_equal, miss_label); } else if (representation.IsSmi()) { @@ -489,7 +404,7 @@ void StoreStubCompiler::GenerateStoreTransition(MacroAssembler* masm, } } else if (representation.IsDouble()) { Label do_store, heap_number; - __ AllocateHeapNumber(storage_reg, scratch1, slow); + __ AllocateHeapNumber(storage_reg, scratch1, slow, MUTABLE); __ JumpIfNotSmi(value_reg, &heap_number); __ SmiToInteger32(scratch1, value_reg); @@ -497,21 +412,20 @@ void StoreStubCompiler::GenerateStoreTransition(MacroAssembler* masm, __ jmp(&do_store); __ bind(&heap_number); - __ CheckMap(value_reg, masm->isolate()->factory()->heap_number_map(), - miss_label, DONT_DO_SMI_CHECK); + __ CheckMap(value_reg, isolate()->factory()->heap_number_map(), miss_label, + DONT_DO_SMI_CHECK); __ movsd(xmm0, FieldOperand(value_reg, HeapNumber::kValueOffset)); __ bind(&do_store); __ movsd(FieldOperand(storage_reg, HeapNumber::kValueOffset), xmm0); } - // Stub never generated for non-global objects that require access - // checks. - ASSERT(object->IsJSGlobalProxy() || !object->IsAccessCheckNeeded()); + // Stub never generated for objects that require access checks. + DCHECK(!transition->is_access_check_needed()); // Perform map transition for the receiver if necessary. if (details.type() == FIELD && - object->map()->unused_property_fields() == 0) { + Map::cast(transition->GetBackPointer())->unused_property_fields() == 0) { // The properties must be extended before we can store the value. // We jump to a runtime call that extends the properties array. __ PopReturnAddressTo(scratch1); @@ -521,9 +435,8 @@ void StoreStubCompiler::GenerateStoreTransition(MacroAssembler* masm, __ PushReturnAddressFrom(scratch1); __ TailCallExternalReference( ExternalReference(IC_Utility(IC::kSharedStoreIC_ExtendStorage), - masm->isolate()), - 3, - 1); + isolate()), + 3, 1); return; } @@ -541,7 +454,7 @@ void StoreStubCompiler::GenerateStoreTransition(MacroAssembler* masm, OMIT_SMI_CHECK); if (details.type() == CONSTANT) { - ASSERT(value_reg.is(rax)); + DCHECK(value_reg.is(rax)); __ ret(0); return; } @@ -552,14 +465,14 @@ void StoreStubCompiler::GenerateStoreTransition(MacroAssembler* masm, // Adjust for the number of properties stored in the object. Even in the // face of a transition we can use the old map here because the size of the // object and the number of in-object properties is not going to change. 
- index -= object->map()->inobject_properties(); + index -= transition->inobject_properties(); // TODO(verwaest): Share this code as a code stub. SmiCheck smi_check = representation.IsTagged() ? INLINE_SMI_CHECK : OMIT_SMI_CHECK; if (index < 0) { // Set the property straight into the object. - int offset = object->map()->instance_size() + (index * kPointerSize); + int offset = transition->instance_size() + (index * kPointerSize); if (representation.IsDouble()) { __ movp(FieldOperand(receiver_reg, offset), storage_reg); } else { @@ -598,147 +511,44 @@ void StoreStubCompiler::GenerateStoreTransition(MacroAssembler* masm, } // Return the value (register rax). - ASSERT(value_reg.is(rax)); + DCHECK(value_reg.is(rax)); __ ret(0); } -// Both name_reg and receiver_reg are preserved on jumps to miss_label, -// but may be destroyed if store is successful. -void StoreStubCompiler::GenerateStoreField(MacroAssembler* masm, - Handle<JSObject> object, - LookupResult* lookup, - Register receiver_reg, - Register name_reg, - Register value_reg, - Register scratch1, - Register scratch2, - Label* miss_label) { - // Stub never generated for non-global objects that require access - // checks. - ASSERT(object->IsJSGlobalProxy() || !object->IsAccessCheckNeeded()); - - int index = lookup->GetFieldIndex().field_index(); - - // Adjust for the number of properties stored in the object. Even in the - // face of a transition we can use the old map here because the size of the - // object and the number of in-object properties is not going to change. - index -= object->map()->inobject_properties(); - - Representation representation = lookup->representation(); - ASSERT(!representation.IsNone()); - if (representation.IsSmi()) { - __ JumpIfNotSmi(value_reg, miss_label); - } else if (representation.IsHeapObject()) { - __ JumpIfSmi(value_reg, miss_label); - HeapType* field_type = lookup->GetFieldType(); - HeapType::Iterator<Map> it = field_type->Classes(); - if (!it.Done()) { - Label do_store; - while (true) { - __ CompareMap(value_reg, it.Current()); - it.Advance(); - if (it.Done()) { - __ j(not_equal, miss_label); - break; - } - __ j(equal, &do_store, Label::kNear); - } - __ bind(&do_store); - } - } else if (representation.IsDouble()) { - // Load the double storage. - if (index < 0) { - int offset = object->map()->instance_size() + (index * kPointerSize); - __ movp(scratch1, FieldOperand(receiver_reg, offset)); - } else { - __ movp(scratch1, - FieldOperand(receiver_reg, JSObject::kPropertiesOffset)); - int offset = index * kPointerSize + FixedArray::kHeaderSize; - __ movp(scratch1, FieldOperand(scratch1, offset)); - } - - // Store the value into the storage. - Label do_store, heap_number; - __ JumpIfNotSmi(value_reg, &heap_number); - __ SmiToInteger32(scratch2, value_reg); - __ Cvtlsi2sd(xmm0, scratch2); - __ jmp(&do_store); - - __ bind(&heap_number); - __ CheckMap(value_reg, masm->isolate()->factory()->heap_number_map(), - miss_label, DONT_DO_SMI_CHECK); - __ movsd(xmm0, FieldOperand(value_reg, HeapNumber::kValueOffset)); - __ bind(&do_store); - __ movsd(FieldOperand(scratch1, HeapNumber::kValueOffset), xmm0); - // Return the value (register rax). - ASSERT(value_reg.is(rax)); - __ ret(0); - return; - } - - // TODO(verwaest): Share this code as a code stub. - SmiCheck smi_check = representation.IsTagged() - ? INLINE_SMI_CHECK : OMIT_SMI_CHECK; - if (index < 0) { - // Set the property straight into the object. 
- int offset = object->map()->instance_size() + (index * kPointerSize); - __ movp(FieldOperand(receiver_reg, offset), value_reg); - - if (!representation.IsSmi()) { - // Update the write barrier for the array address. - // Pass the value being stored in the now unused name_reg. - __ movp(name_reg, value_reg); - __ RecordWriteField( - receiver_reg, offset, name_reg, scratch1, kDontSaveFPRegs, - EMIT_REMEMBERED_SET, smi_check); - } - } else { - // Write to the properties array. - int offset = index * kPointerSize + FixedArray::kHeaderSize; - // Get the properties array (optimistically). - __ movp(scratch1, FieldOperand(receiver_reg, JSObject::kPropertiesOffset)); - __ movp(FieldOperand(scratch1, offset), value_reg); - - if (!representation.IsSmi()) { - // Update the write barrier for the array address. - // Pass the value being stored in the now unused name_reg. - __ movp(name_reg, value_reg); - __ RecordWriteField( - scratch1, offset, name_reg, receiver_reg, kDontSaveFPRegs, - EMIT_REMEMBERED_SET, smi_check); +void NamedStoreHandlerCompiler::GenerateStoreField(LookupResult* lookup, + Register value_reg, + Label* miss_label) { + DCHECK(lookup->representation().IsHeapObject()); + __ JumpIfSmi(value_reg, miss_label); + HeapType::Iterator<Map> it = lookup->GetFieldType()->Classes(); + Label do_store; + while (true) { + __ CompareMap(value_reg, it.Current()); + it.Advance(); + if (it.Done()) { + __ j(not_equal, miss_label); + break; } + __ j(equal, &do_store, Label::kNear); } + __ bind(&do_store); - // Return the value (register rax). - ASSERT(value_reg.is(rax)); - __ ret(0); -} - - -void StubCompiler::GenerateTailCall(MacroAssembler* masm, Handle<Code> code) { - __ jmp(code, RelocInfo::CODE_TARGET); + StoreFieldStub stub(isolate(), lookup->GetFieldIndex(), + lookup->representation()); + GenerateTailCall(masm(), stub.GetCode()); } -#undef __ -#define __ ACCESS_MASM((masm())) - - -Register StubCompiler::CheckPrototypes(Handle<HeapType> type, - Register object_reg, - Handle<JSObject> holder, - Register holder_reg, - Register scratch1, - Register scratch2, - Handle<Name> name, - Label* miss, - PrototypeCheckType check) { - Handle<Map> receiver_map(IC::TypeToMap(*type, isolate())); +Register PropertyHandlerCompiler::CheckPrototypes( + Register object_reg, Register holder_reg, Register scratch1, + Register scratch2, Handle<Name> name, Label* miss, + PrototypeCheckType check) { + Handle<Map> receiver_map(IC::TypeToMap(*type(), isolate())); // Make sure there's no overlap between holder and object registers. - ASSERT(!scratch1.is(object_reg) && !scratch1.is(holder_reg)); - ASSERT(!scratch2.is(object_reg) && !scratch2.is(holder_reg) + DCHECK(!scratch1.is(object_reg) && !scratch1.is(holder_reg)); + DCHECK(!scratch2.is(object_reg) && !scratch2.is(holder_reg) && !scratch2.is(scratch1)); // Keep track of the current object in register reg. 
On the first @@ -748,12 +558,12 @@ Register StubCompiler::CheckPrototypes(Handle<HeapType> type, int depth = 0; Handle<JSObject> current = Handle<JSObject>::null(); - if (type->IsConstant()) { - current = Handle<JSObject>::cast(type->AsConstant()->Value()); + if (type()->IsConstant()) { + current = Handle<JSObject>::cast(type()->AsConstant()->Value()); } Handle<JSObject> prototype = Handle<JSObject>::null(); Handle<Map> current_map = receiver_map; - Handle<Map> holder_map(holder->map()); + Handle<Map> holder_map(holder()->map()); // Traverse the prototype chain and check the maps in the prototype chain for // fast and global objects or do negative lookup for normal objects. while (!current_map.is_identical_to(holder_map)) { @@ -761,18 +571,18 @@ Register StubCompiler::CheckPrototypes(Handle<HeapType> type, // Only global objects and objects that do not require access // checks are allowed in stubs. - ASSERT(current_map->IsJSGlobalProxyMap() || + DCHECK(current_map->IsJSGlobalProxyMap() || !current_map->is_access_check_needed()); prototype = handle(JSObject::cast(current_map->prototype())); if (current_map->is_dictionary_map() && - !current_map->IsJSGlobalObjectMap() && - !current_map->IsJSGlobalProxyMap()) { + !current_map->IsJSGlobalObjectMap()) { + DCHECK(!current_map->IsJSGlobalProxyMap()); // Proxy maps are fast. if (!name->IsUniqueName()) { - ASSERT(name->IsString()); + DCHECK(name->IsString()); name = factory()->InternalizeString(Handle<String>::cast(name)); } - ASSERT(current.is_null() || + DCHECK(current.is_null() || current->property_dictionary()->FindEntry(name) == NameDictionary::kNotFound); @@ -784,7 +594,12 @@ Register StubCompiler::CheckPrototypes(Handle<HeapType> type, __ movp(reg, FieldOperand(scratch1, Map::kPrototypeOffset)); } else { bool in_new_space = heap()->InNewSpace(*prototype); - if (in_new_space) { + // Two possible reasons for loading the prototype from the map: + // (1) Can't store references to new space in code. + // (2) Handler is shared for all receivers with the same prototype + // map (but not necessarily the same prototype instance). + bool load_prototype_from_map = in_new_space || depth == 1; + if (load_prototype_from_map) { // Save the map in scratch1 for later. __ movp(scratch1, FieldOperand(reg, HeapObject::kMapOffset)); } @@ -795,6 +610,9 @@ Register StubCompiler::CheckPrototypes(Handle<HeapType> type, // Check access rights to the global object. This has to happen after // the map check so that we know that the object is actually a global // object. + // This allows us to install generated handlers for accesses to the + // global proxy (as opposed to using slow ICs). See corresponding code + // in LookupForRead(). if (current_map->IsJSGlobalProxyMap()) { __ CheckAccessGlobalProxy(reg, scratch2, miss); } else if (current_map->IsJSGlobalObjectMap()) { @@ -804,12 +622,9 @@ Register StubCompiler::CheckPrototypes(Handle<HeapType> type, } reg = holder_reg; // From now on the object will be in holder_reg. - if (in_new_space) { - // The prototype is in new space; we cannot store a reference to it - // in the code. Load it from the map. + if (load_prototype_from_map) { __ movp(reg, FieldOperand(scratch1, Map::kPrototypeOffset)); } else { - // The prototype is in old space; load it directly. __ Move(reg, prototype); } } @@ -828,7 +643,7 @@ Register StubCompiler::CheckPrototypes(Handle<HeapType> type, } // Perform security check for access to the global object. 
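// The loop above has already validated every map between the receiver and
// the holder, so only the holder itself still needs the check here.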
- ASSERT(current_map->IsJSGlobalProxyMap() || + DCHECK(current_map->IsJSGlobalProxyMap() || !current_map->is_access_check_needed()); if (current_map->IsJSGlobalProxyMap()) { __ CheckAccessGlobalProxy(reg, scratch1, miss); @@ -839,7 +654,7 @@ Register StubCompiler::CheckPrototypes(Handle<HeapType> type, } -void LoadStubCompiler::HandlerFrontendFooter(Handle<Name> name, Label* miss) { +void NamedLoadHandlerCompiler::FrontendFooter(Handle<Name> name, Label* miss) { if (!miss->is_unused()) { Label success; __ jmp(&success); @@ -850,93 +665,21 @@ void LoadStubCompiler::HandlerFrontendFooter(Handle<Name> name, Label* miss) { } -void StoreStubCompiler::HandlerFrontendFooter(Handle<Name> name, Label* miss) { +void NamedStoreHandlerCompiler::FrontendFooter(Handle<Name> name, Label* miss) { if (!miss->is_unused()) { Label success; __ jmp(&success); - GenerateRestoreName(masm(), miss, name); + GenerateRestoreName(miss, name); TailCallBuiltin(masm(), MissBuiltin(kind())); __ bind(&success); } } -Register LoadStubCompiler::CallbackHandlerFrontend( - Handle<HeapType> type, - Register object_reg, - Handle<JSObject> holder, - Handle<Name> name, - Handle<Object> callback) { - Label miss; - - Register reg = HandlerFrontendHeader(type, object_reg, holder, name, &miss); - - if (!holder->HasFastProperties() && !holder->IsJSGlobalObject()) { - ASSERT(!reg.is(scratch2())); - ASSERT(!reg.is(scratch3())); - ASSERT(!reg.is(scratch4())); - - // Load the properties dictionary. - Register dictionary = scratch4(); - __ movp(dictionary, FieldOperand(reg, JSObject::kPropertiesOffset)); - - // Probe the dictionary. - Label probe_done; - NameDictionaryLookupStub::GeneratePositiveLookup(masm(), - &miss, - &probe_done, - dictionary, - this->name(), - scratch2(), - scratch3()); - __ bind(&probe_done); - - // If probing finds an entry in the dictionary, scratch3 contains the - // index into the dictionary. Check that the value is the callback. - Register index = scratch3(); - const int kElementsStartOffset = - NameDictionary::kHeaderSize + - NameDictionary::kElementsStartIndex * kPointerSize; - const int kValueOffset = kElementsStartOffset + kPointerSize; - __ movp(scratch2(), - Operand(dictionary, index, times_pointer_size, - kValueOffset - kHeapObjectTag)); - __ Move(scratch3(), callback, RelocInfo::EMBEDDED_OBJECT); - __ cmpp(scratch2(), scratch3()); - __ j(not_equal, &miss); - } - - HandlerFrontendFooter(name, &miss); - return reg; -} - - -void LoadStubCompiler::GenerateLoadField(Register reg, - Handle<JSObject> holder, - PropertyIndex field, - Representation representation) { - if (!reg.is(receiver())) __ movp(receiver(), reg); - if (kind() == Code::LOAD_IC) { - LoadFieldStub stub(isolate(), - field.is_inobject(holder), - field.translate(holder), - representation); - GenerateTailCall(masm(), stub.GetCode()); - } else { - KeyedLoadFieldStub stub(isolate(), - field.is_inobject(holder), - field.translate(holder), - representation); - GenerateTailCall(masm(), stub.GetCode()); - } -} - - -void LoadStubCompiler::GenerateLoadCallback( - Register reg, - Handle<ExecutableAccessorInfo> callback) { +void NamedLoadHandlerCompiler::GenerateLoadCallback( + Register reg, Handle<ExecutableAccessorInfo> callback) { // Insert additional parameters into the stack frame above return address. 
- ASSERT(!scratch4().is(reg)); + DCHECK(!scratch4().is(reg)); __ PopReturnAddressTo(scratch4()); STATIC_ASSERT(PropertyCallbackArguments::kHolderIndex == 0); @@ -948,14 +691,14 @@ void LoadStubCompiler::GenerateLoadCallback( STATIC_ASSERT(PropertyCallbackArguments::kArgsLength == 6); __ Push(receiver()); // receiver if (heap()->InNewSpace(callback->data())) { - ASSERT(!scratch2().is(reg)); + DCHECK(!scratch2().is(reg)); __ Move(scratch2(), callback); __ Push(FieldOperand(scratch2(), ExecutableAccessorInfo::kDataOffset)); // data } else { __ Push(Handle<Object>(callback->data(), isolate())); } - ASSERT(!kScratchRegister.is(reg)); + DCHECK(!kScratchRegister.is(reg)); __ LoadRoot(kScratchRegister, Heap::kUndefinedValueRootIndex); __ Push(kScratchRegister); // return value __ Push(kScratchRegister); // return value default @@ -977,21 +720,18 @@ void LoadStubCompiler::GenerateLoadCallback( } -void LoadStubCompiler::GenerateLoadConstant(Handle<Object> value) { +void NamedLoadHandlerCompiler::GenerateLoadConstant(Handle<Object> value) { // Return the constant value. __ Move(rax, value); __ ret(0); } -void LoadStubCompiler::GenerateLoadInterceptor( - Register holder_reg, - Handle<Object> object, - Handle<JSObject> interceptor_holder, - LookupResult* lookup, - Handle<Name> name) { - ASSERT(interceptor_holder->HasNamedInterceptor()); - ASSERT(!interceptor_holder->GetNamedInterceptor()->getter()->IsUndefined()); +void NamedLoadHandlerCompiler::GenerateLoadInterceptor(Register holder_reg, + LookupResult* lookup, + Handle<Name> name) { + DCHECK(holder()->HasNamedInterceptor()); + DCHECK(!holder()->GetNamedInterceptor()->getter()->IsUndefined()); // So far the most popular follow ups for interceptor loads are FIELD // and CALLBACKS, so inline only them, other cases may be added @@ -1002,10 +742,12 @@ void LoadStubCompiler::GenerateLoadInterceptor( compile_followup_inline = true; } else if (lookup->type() == CALLBACKS && lookup->GetCallbackObject()->IsExecutableAccessorInfo()) { - ExecutableAccessorInfo* callback = - ExecutableAccessorInfo::cast(lookup->GetCallbackObject()); - compile_followup_inline = callback->getter() != NULL && - callback->IsCompatibleReceiver(*object); + Handle<ExecutableAccessorInfo> callback( + ExecutableAccessorInfo::cast(lookup->GetCallbackObject())); + compile_followup_inline = + callback->getter() != NULL && + ExecutableAccessorInfo::IsCompatibleReceiverType(isolate(), callback, + type()); } } @@ -1013,13 +755,13 @@ void LoadStubCompiler::GenerateLoadInterceptor( // Compile the interceptor call, followed by inline code to load the // property from further up the prototype chain if the call fails. // Check that the maps haven't changed. - ASSERT(holder_reg.is(receiver()) || holder_reg.is(scratch1())); + DCHECK(holder_reg.is(receiver()) || holder_reg.is(scratch1())); // Preserve the receiver register explicitly whenever it is different from // the holder and it is needed should the interceptor return without any // result. The CALLBACKS case needs the receiver to be passed into C++ code, // the FIELD case might cause a miss during the prototype check. - bool must_perfrom_prototype_check = *interceptor_holder != lookup->holder(); + bool must_perfrom_prototype_check = *holder() != lookup->holder(); bool must_preserve_receiver_reg = !receiver().is(holder_reg) && (lookup->type() == CALLBACKS || must_perfrom_prototype_check); @@ -1038,7 +780,7 @@ void LoadStubCompiler::GenerateLoadInterceptor( // interceptor's holder has been compiled before (see a caller // of this method.) 
CompileCallLoadPropertyWithInterceptor( - masm(), receiver(), holder_reg, this->name(), interceptor_holder, + masm(), receiver(), holder_reg, this->name(), holder(), IC::kLoadPropertyWithInterceptorOnly); // Check if interceptor provided a value for property. If it's @@ -1059,29 +801,27 @@ void LoadStubCompiler::GenerateLoadInterceptor( // Leave the internal frame. } - GenerateLoadPostInterceptor(holder_reg, interceptor_holder, name, lookup); + GenerateLoadPostInterceptor(holder_reg, name, lookup); } else { // !compile_followup_inline // Call the runtime system to load the interceptor. // Check that the maps haven't changed. __ PopReturnAddressTo(scratch2()); - PushInterceptorArguments(masm(), receiver(), holder_reg, - this->name(), interceptor_holder); + PushInterceptorArguments(masm(), receiver(), holder_reg, this->name(), + holder()); __ PushReturnAddressFrom(scratch2()); ExternalReference ref = ExternalReference( - IC_Utility(IC::kLoadPropertyWithInterceptorForLoad), isolate()); - __ TailCallExternalReference(ref, StubCache::kInterceptorArgsLength, 1); + IC_Utility(IC::kLoadPropertyWithInterceptor), isolate()); + __ TailCallExternalReference( + ref, NamedLoadHandlerCompiler::kInterceptorArgsLength, 1); } } -Handle<Code> StoreStubCompiler::CompileStoreCallback( - Handle<JSObject> object, - Handle<JSObject> holder, - Handle<Name> name, +Handle<Code> NamedStoreHandlerCompiler::CompileStoreCallback( + Handle<JSObject> object, Handle<Name> name, Handle<ExecutableAccessorInfo> callback) { - Register holder_reg = HandlerFrontend( - IC::CurrentTypeOf(object, isolate()), receiver(), holder, name); + Register holder_reg = Frontend(receiver(), name); __ PopReturnAddressTo(scratch1()); __ Push(receiver()); @@ -1105,10 +845,8 @@ Handle<Code> StoreStubCompiler::CompileStoreCallback( #define __ ACCESS_MASM(masm) -void StoreStubCompiler::GenerateStoreViaSetter( - MacroAssembler* masm, - Handle<HeapType> type, - Register receiver, +void NamedStoreHandlerCompiler::GenerateStoreViaSetter( + MacroAssembler* masm, Handle<HeapType> type, Register receiver, Handle<JSFunction> setter) { // ----------- S t a t e ------------- // -- rsp[0] : return address @@ -1124,7 +862,7 @@ void StoreStubCompiler::GenerateStoreViaSetter( if (IC::TypeToMap(*type, masm->isolate())->IsJSGlobalObjectMap()) { // Swap in the global receiver. __ movp(receiver, - FieldOperand(receiver, JSGlobalObject::kGlobalReceiverOffset)); + FieldOperand(receiver, JSGlobalObject::kGlobalProxyOffset)); } __ Push(receiver); __ Push(value()); @@ -1152,8 +890,7 @@ void StoreStubCompiler::GenerateStoreViaSetter( #define __ ACCESS_MASM(masm()) -Handle<Code> StoreStubCompiler::CompileStoreInterceptor( - Handle<JSObject> object, +Handle<Code> NamedStoreHandlerCompiler::CompileStoreInterceptor( Handle<Name> name) { __ PopReturnAddressTo(scratch1()); __ Push(receiver()); @@ -1162,8 +899,8 @@ Handle<Code> StoreStubCompiler::CompileStoreInterceptor( __ PushReturnAddressFrom(scratch1()); // Do tail-call to the runtime system. - ExternalReference store_ic_property = - ExternalReference(IC_Utility(IC::kStoreInterceptorProperty), isolate()); + ExternalReference store_ic_property = ExternalReference( + IC_Utility(IC::kStorePropertyWithInterceptor), isolate()); __ TailCallExternalReference(store_ic_property, 3, 1); // Return the generated code. @@ -1171,23 +908,8 @@ Handle<Code> StoreStubCompiler::CompileStoreInterceptor( } -void StoreStubCompiler::GenerateStoreArrayLength() { - // Prepare tail call to StoreIC_ArrayLength. 
- __ PopReturnAddressTo(scratch1()); - __ Push(receiver()); - __ Push(value()); - __ PushReturnAddressFrom(scratch1()); - - ExternalReference ref = - ExternalReference(IC_Utility(IC::kStoreIC_ArrayLength), - masm()->isolate()); - __ TailCallExternalReference(ref, 2, 1); -} - - -Handle<Code> KeyedStoreStubCompiler::CompileStorePolymorphic( - MapHandleList* receiver_maps, - CodeHandleList* handler_stubs, +Handle<Code> PropertyICCompiler::CompileKeyedStorePolymorphic( + MapHandleList* receiver_maps, CodeHandleList* handler_stubs, MapHandleList* transitioned_maps) { Label miss; __ JumpIfSmi(receiver(), &miss, Label::kNear); @@ -1215,67 +937,39 @@ Handle<Code> KeyedStoreStubCompiler::CompileStorePolymorphic( TailCallBuiltin(masm(), MissBuiltin(kind())); // Return the generated code. - return GetICCode( - kind(), Code::NORMAL, factory()->empty_string(), POLYMORPHIC); + return GetCode(kind(), Code::NORMAL, factory()->empty_string(), POLYMORPHIC); } -Handle<Code> LoadStubCompiler::CompileLoadNonexistent(Handle<HeapType> type, - Handle<JSObject> last, - Handle<Name> name) { - NonexistentHandlerFrontend(type, last, name); - - // Return undefined if maps of the full prototype chain are still the - // same and no global property with this name contains a value. - __ LoadRoot(rax, Heap::kUndefinedValueRootIndex); - __ ret(0); - - // Return the generated code. - return GetCode(kind(), Code::FAST, name); -} - - -Register* LoadStubCompiler::registers() { - // receiver, name, scratch1, scratch2, scratch3, scratch4. - static Register registers[] = { rax, rcx, rdx, rbx, rdi, r8 }; - return registers; -} - - -Register* KeyedLoadStubCompiler::registers() { +Register* PropertyAccessCompiler::load_calling_convention() { // receiver, name, scratch1, scratch2, scratch3, scratch4. - static Register registers[] = { rdx, rax, rbx, rcx, rdi, r8 }; + Register receiver = LoadIC::ReceiverRegister(); + Register name = LoadIC::NameRegister(); + static Register registers[] = { receiver, name, rax, rbx, rdi, r8 }; return registers; } -Register StoreStubCompiler::value() { - return rax; -} - - -Register* StoreStubCompiler::registers() { +Register* PropertyAccessCompiler::store_calling_convention() { // receiver, name, scratch1, scratch2, scratch3. - static Register registers[] = { rdx, rcx, rbx, rdi, r8 }; + Register receiver = KeyedStoreIC::ReceiverRegister(); + Register name = KeyedStoreIC::NameRegister(); + DCHECK(rbx.is(KeyedStoreIC::MapRegister())); + static Register registers[] = { receiver, name, rbx, rdi, r8 }; return registers; } -Register* KeyedStoreStubCompiler::registers() { - // receiver, name, scratch1, scratch2, scratch3. - static Register registers[] = { rdx, rcx, rbx, rdi, r8 }; - return registers; -} +Register NamedStoreHandlerCompiler::value() { return StoreIC::ValueRegister(); } #undef __ #define __ ACCESS_MASM(masm) -void LoadStubCompiler::GenerateLoadViaGetter(MacroAssembler* masm, - Handle<HeapType> type, - Register receiver, - Handle<JSFunction> getter) { +void NamedLoadHandlerCompiler::GenerateLoadViaGetter( + MacroAssembler* masm, Handle<HeapType> type, Register receiver, + Handle<JSFunction> getter) { // ----------- S t a t e ------------- // -- rax : receiver // -- rcx : name @@ -1289,7 +983,7 @@ void LoadStubCompiler::GenerateLoadViaGetter(MacroAssembler* masm, if (IC::TypeToMap(*type, masm->isolate())->IsJSGlobalObjectMap()) { // Swap in the global receiver. 
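// Script code must only ever observe the global proxy, never the bare
// global object, so the getter is invoked with the proxy as receiver.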
__ movp(receiver, - FieldOperand(receiver, JSGlobalObject::kGlobalReceiverOffset)); + FieldOperand(receiver, JSGlobalObject::kGlobalProxyOffset)); } __ Push(receiver); ParameterCount actual(0); @@ -1313,62 +1007,63 @@ void LoadStubCompiler::GenerateLoadViaGetter(MacroAssembler* masm, #define __ ACCESS_MASM(masm()) -Handle<Code> LoadStubCompiler::CompileLoadGlobal( - Handle<HeapType> type, - Handle<GlobalObject> global, - Handle<PropertyCell> cell, - Handle<Name> name, - bool is_dont_delete) { +Handle<Code> NamedLoadHandlerCompiler::CompileLoadGlobal( + Handle<PropertyCell> cell, Handle<Name> name, bool is_configurable) { Label miss; - // TODO(verwaest): Directly store to rax. Currently we cannot do this, since - // rax is used as receiver(), which we would otherwise clobber before a - // potential miss. - HandlerFrontendHeader(type, receiver(), global, name, &miss); + FrontendHeader(receiver(), name, &miss); // Get the value from the cell. - __ Move(rbx, cell); - __ movp(rbx, FieldOperand(rbx, PropertyCell::kValueOffset)); + Register result = StoreIC::ValueRegister(); + __ Move(result, cell); + __ movp(result, FieldOperand(result, PropertyCell::kValueOffset)); // Check for deleted property if property can actually be deleted. - if (!is_dont_delete) { - __ CompareRoot(rbx, Heap::kTheHoleValueRootIndex); + if (is_configurable) { + __ CompareRoot(result, Heap::kTheHoleValueRootIndex); __ j(equal, &miss); } else if (FLAG_debug_code) { - __ CompareRoot(rbx, Heap::kTheHoleValueRootIndex); + __ CompareRoot(result, Heap::kTheHoleValueRootIndex); __ Check(not_equal, kDontDeleteCellsCannotContainTheHole); } Counters* counters = isolate()->counters(); __ IncrementCounter(counters->named_load_global_stub(), 1); - __ movp(rax, rbx); __ ret(0); - HandlerFrontendFooter(name, &miss); + FrontendFooter(name, &miss); // Return the generated code. return GetCode(kind(), Code::NORMAL, name); } -Handle<Code> BaseLoadStoreStubCompiler::CompilePolymorphicIC( - TypeHandleList* types, - CodeHandleList* handlers, - Handle<Name> name, - Code::StubType type, - IcCheckType check) { +Handle<Code> PropertyICCompiler::CompilePolymorphic(TypeHandleList* types, + CodeHandleList* handlers, + Handle<Name> name, + Code::StubType type, + IcCheckType check) { Label miss; if (check == PROPERTY && (kind() == Code::KEYED_LOAD_IC || kind() == Code::KEYED_STORE_IC)) { - __ Cmp(this->name(), name); - __ j(not_equal, &miss); + // In case we are compiling an IC for dictionary loads and stores, just + // check whether the name is unique. + if (name.is_identical_to(isolate()->factory()->normal_ic_symbol())) { + __ JumpIfNotUniqueName(this->name(), &miss); + } else { + __ Cmp(this->name(), name); + __ j(not_equal, &miss); + } } Label number_case; Label* smi_target = IncludesNumberType(types) ? 
&number_case : &miss; __ JumpIfSmi(receiver(), smi_target); + // Polymorphic keyed stores may use the map register Register map_reg = scratch1(); + DCHECK(kind() != Code::KEYED_STORE_IC || + map_reg.is(KeyedStoreIC::MapRegister())); __ movp(map_reg, FieldOperand(receiver(), HeapObject::kMapOffset)); int receiver_count = types->length(); int number_of_handled_maps = 0; @@ -1380,13 +1075,13 @@ Handle<Code> BaseLoadStoreStubCompiler::CompilePolymorphicIC( // Check map and tail call if there's a match __ Cmp(map_reg, map); if (type->Is(HeapType::Number())) { - ASSERT(!number_case.is_unused()); + DCHECK(!number_case.is_unused()); __ bind(&number_case); } __ j(equal, handlers->at(current), RelocInfo::CODE_TARGET); } } - ASSERT(number_of_handled_maps > 0); + DCHECK(number_of_handled_maps > 0); __ bind(&miss); TailCallBuiltin(masm(), MissBuiltin(kind())); @@ -1394,7 +1089,7 @@ Handle<Code> BaseLoadStoreStubCompiler::CompilePolymorphicIC( // Return the generated code. InlineCacheState state = number_of_handled_maps > 1 ? POLYMORPHIC : MONOMORPHIC; - return GetICCode(kind(), type, name, state); + return GetCode(kind(), type, name, state); } @@ -1402,33 +1097,35 @@ Handle<Code> BaseLoadStoreStubCompiler::CompilePolymorphicIC( #define __ ACCESS_MASM(masm) -void KeyedLoadStubCompiler::GenerateLoadDictionaryElement( +void ElementHandlerCompiler::GenerateLoadDictionaryElement( MacroAssembler* masm) { // ----------- S t a t e ------------- - // -- rax : key + // -- rcx : key // -- rdx : receiver // -- rsp[0] : return address // ----------------------------------- + DCHECK(rdx.is(LoadIC::ReceiverRegister())); + DCHECK(rcx.is(LoadIC::NameRegister())); Label slow, miss; // This stub is meant to be tail-jumped to, the receiver must already // have been verified by the caller to not be a smi. - __ JumpIfNotSmi(rax, &miss); - __ SmiToInteger32(rbx, rax); - __ movp(rcx, FieldOperand(rdx, JSObject::kElementsOffset)); + __ JumpIfNotSmi(rcx, &miss); + __ SmiToInteger32(rbx, rcx); + __ movp(rax, FieldOperand(rdx, JSObject::kElementsOffset)); // Check whether the elements is a number dictionary. // rdx: receiver - // rax: key + // rcx: key // rbx: key as untagged int32 - // rcx: elements - __ LoadFromNumberDictionary(&slow, rcx, rax, rbx, r9, rdi, rax); + // rax: elements + __ LoadFromNumberDictionary(&slow, rax, rcx, rbx, r9, rdi, rax); __ ret(0); __ bind(&slow); // ----------- S t a t e ------------- - // -- rax : key + // -- rcx : key // -- rdx : receiver // -- rsp[0] : return address // ----------------------------------- @@ -1436,7 +1133,7 @@ void KeyedLoadStubCompiler::GenerateLoadDictionaryElement( __ bind(&miss); // ----------- S t a t e ------------- - // -- rax : key + // -- rcx : key // -- rdx : receiver // -- rsp[0] : return address // ----------------------------------- diff --git a/deps/v8/src/x87/OWNERS b/deps/v8/src/x87/OWNERS new file mode 100644 index 00000000000..dd9998b2610 --- /dev/null +++ b/deps/v8/src/x87/OWNERS @@ -0,0 +1 @@ +weiliang.lin@intel.com diff --git a/deps/v8/src/x87/assembler-x87-inl.h b/deps/v8/src/x87/assembler-x87-inl.h new file mode 100644 index 00000000000..25ecfcf1379 --- /dev/null +++ b/deps/v8/src/x87/assembler-x87-inl.h @@ -0,0 +1,571 @@ +// Copyright (c) 1994-2006 Sun Microsystems Inc. +// All Rights Reserved. 
+// +// Redistribution and use in source and binary forms, with or without +// modification, are permitted provided that the following conditions are +// met: +// +// - Redistributions of source code must retain the above copyright notice, +// this list of conditions and the following disclaimer. +// +// - Redistribution in binary form must reproduce the above copyright +// notice, this list of conditions and the following disclaimer in the +// documentation and/or other materials provided with the distribution. +// +// - Neither the name of Sun Microsystems or the names of contributors may +// be used to endorse or promote products derived from this software without +// specific prior written permission. +// +// THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS +// IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, +// THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR +// PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR +// CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, +// EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, +// PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR +// PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF +// LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING +// NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS +// SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. + +// The original source code covered by the above license above has been +// modified significantly by Google Inc. +// Copyright 2012 the V8 project authors. All rights reserved. + +// A light-weight IA32 Assembler. + +#ifndef V8_X87_ASSEMBLER_X87_INL_H_ +#define V8_X87_ASSEMBLER_X87_INL_H_ + +#include "src/x87/assembler-x87.h" + +#include "src/assembler.h" +#include "src/debug.h" + +namespace v8 { +namespace internal { + +bool CpuFeatures::SupportsCrankshaft() { return false; } + + +static const byte kCallOpcode = 0xE8; +static const int kNoCodeAgeSequenceLength = 5; + + +// The modes possibly affected by apply must be in kApplyMask. +void RelocInfo::apply(intptr_t delta, ICacheFlushMode icache_flush_mode) { + bool flush_icache = icache_flush_mode != SKIP_ICACHE_FLUSH; + if (IsRuntimeEntry(rmode_) || IsCodeTarget(rmode_)) { + int32_t* p = reinterpret_cast<int32_t*>(pc_); + *p -= delta; // Relocate entry. + if (flush_icache) CpuFeatures::FlushICache(p, sizeof(uint32_t)); + } else if (rmode_ == CODE_AGE_SEQUENCE) { + if (*pc_ == kCallOpcode) { + int32_t* p = reinterpret_cast<int32_t*>(pc_ + 1); + *p -= delta; // Relocate entry. + if (flush_icache) CpuFeatures::FlushICache(p, sizeof(uint32_t)); + } + } else if (rmode_ == JS_RETURN && IsPatchedReturnSequence()) { + // Special handling of js_return when a break point is set (call + // instruction has been inserted). + int32_t* p = reinterpret_cast<int32_t*>(pc_ + 1); + *p -= delta; // Relocate entry. + if (flush_icache) CpuFeatures::FlushICache(p, sizeof(uint32_t)); + } else if (rmode_ == DEBUG_BREAK_SLOT && IsPatchedDebugBreakSlotSequence()) { + // Special handling of a debug break slot when a break point is set (call + // instruction has been inserted). + int32_t* p = reinterpret_cast<int32_t*>(pc_ + 1); + *p -= delta; // Relocate entry. + if (flush_icache) CpuFeatures::FlushICache(p, sizeof(uint32_t)); + } else if (IsInternalReference(rmode_)) { + // absolute code pointer inside code object moves with the code object. 
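+ // (The entries above are pc-relative: the pc moves with the code while
+ // the target stays put, hence "-= delta". An absolute pointer into the
+ // moving object must be shifted the other way, hence "+= delta" below.)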
+ int32_t* p = reinterpret_cast<int32_t*>(pc_);
+ *p += delta; // Relocate entry.
+ if (flush_icache) CpuFeatures::FlushICache(p, sizeof(uint32_t));
+ }
+}
+
+
+Address RelocInfo::target_address() {
+ DCHECK(IsCodeTarget(rmode_) || IsRuntimeEntry(rmode_));
+ return Assembler::target_address_at(pc_, host_);
+}
+
+
+Address RelocInfo::target_address_address() {
+ DCHECK(IsCodeTarget(rmode_) || IsRuntimeEntry(rmode_)
+ || rmode_ == EMBEDDED_OBJECT
+ || rmode_ == EXTERNAL_REFERENCE);
+ return reinterpret_cast<Address>(pc_);
+}
+
+
+Address RelocInfo::constant_pool_entry_address() {
+ UNREACHABLE();
+ return NULL;
+}
+
+
+int RelocInfo::target_address_size() {
+ return Assembler::kSpecialTargetSize;
+}
+
+
+void RelocInfo::set_target_address(Address target,
+ WriteBarrierMode write_barrier_mode,
+ ICacheFlushMode icache_flush_mode) {
+ Assembler::set_target_address_at(pc_, host_, target, icache_flush_mode);
+ DCHECK(IsCodeTarget(rmode_) || IsRuntimeEntry(rmode_));
+ if (write_barrier_mode == UPDATE_WRITE_BARRIER && host() != NULL &&
+ IsCodeTarget(rmode_)) {
+ Object* target_code = Code::GetCodeFromTargetAddress(target);
+ host()->GetHeap()->incremental_marking()->RecordWriteIntoCode(
+ host(), this, HeapObject::cast(target_code));
+ }
+}
+
+
+Object* RelocInfo::target_object() {
+ DCHECK(IsCodeTarget(rmode_) || rmode_ == EMBEDDED_OBJECT);
+ return Memory::Object_at(pc_);
+}
+
+
+Handle<Object> RelocInfo::target_object_handle(Assembler* origin) {
+ DCHECK(IsCodeTarget(rmode_) || rmode_ == EMBEDDED_OBJECT);
+ return Memory::Object_Handle_at(pc_);
+}
+
+
+void RelocInfo::set_target_object(Object* target,
+ WriteBarrierMode write_barrier_mode,
+ ICacheFlushMode icache_flush_mode) {
+ DCHECK(IsCodeTarget(rmode_) || rmode_ == EMBEDDED_OBJECT);
+ Memory::Object_at(pc_) = target;
+ if (icache_flush_mode != SKIP_ICACHE_FLUSH) {
+ CpuFeatures::FlushICache(pc_, sizeof(Address));
+ }
+ if (write_barrier_mode == UPDATE_WRITE_BARRIER &&
+ host() != NULL &&
+ target->IsHeapObject()) {
+ host()->GetHeap()->incremental_marking()->RecordWrite(
+ host(), &Memory::Object_at(pc_), HeapObject::cast(target));
+ }
+}
+
+
+Address RelocInfo::target_reference() {
+ DCHECK(rmode_ == RelocInfo::EXTERNAL_REFERENCE);
+ return Memory::Address_at(pc_);
+}
+
+
+Address RelocInfo::target_runtime_entry(Assembler* origin) {
+ DCHECK(IsRuntimeEntry(rmode_));
+ return reinterpret_cast<Address>(*reinterpret_cast<int32_t*>(pc_));
+}
+
+
+void RelocInfo::set_target_runtime_entry(Address target,
+ WriteBarrierMode write_barrier_mode,
+ ICacheFlushMode icache_flush_mode) {
+ DCHECK(IsRuntimeEntry(rmode_));
+ if (target_address() != target) {
+ set_target_address(target, write_barrier_mode, icache_flush_mode);
+ }
+}
+
+
+Handle<Cell> RelocInfo::target_cell_handle() {
+ DCHECK(rmode_ == RelocInfo::CELL);
+ Address address = Memory::Address_at(pc_);
+ return Handle<Cell>(reinterpret_cast<Cell**>(address));
+}
+
+
+Cell* RelocInfo::target_cell() {
+ DCHECK(rmode_ == RelocInfo::CELL);
+ return Cell::FromValueAddress(Memory::Address_at(pc_));
+}
+
+
+void RelocInfo::set_target_cell(Cell* cell,
+ WriteBarrierMode write_barrier_mode,
+ ICacheFlushMode icache_flush_mode) {
+ DCHECK(rmode_ == RelocInfo::CELL);
+ Address address = cell->address() + Cell::kValueOffset;
+ Memory::Address_at(pc_) = address;
+ if (icache_flush_mode != SKIP_ICACHE_FLUSH) {
+ CpuFeatures::FlushICache(pc_, sizeof(Address));
+ }
+ if (write_barrier_mode == UPDATE_WRITE_BARRIER && host() != NULL) {
+ // TODO(1550)
We are passing NULL as a slot because cell can never be on + // evacuation candidate. + host()->GetHeap()->incremental_marking()->RecordWrite( + host(), NULL, cell); + } +} + + +Handle<Object> RelocInfo::code_age_stub_handle(Assembler* origin) { + DCHECK(rmode_ == RelocInfo::CODE_AGE_SEQUENCE); + DCHECK(*pc_ == kCallOpcode); + return Memory::Object_Handle_at(pc_ + 1); +} + + +Code* RelocInfo::code_age_stub() { + DCHECK(rmode_ == RelocInfo::CODE_AGE_SEQUENCE); + DCHECK(*pc_ == kCallOpcode); + return Code::GetCodeFromTargetAddress( + Assembler::target_address_at(pc_ + 1, host_)); +} + + +void RelocInfo::set_code_age_stub(Code* stub, + ICacheFlushMode icache_flush_mode) { + DCHECK(*pc_ == kCallOpcode); + DCHECK(rmode_ == RelocInfo::CODE_AGE_SEQUENCE); + Assembler::set_target_address_at(pc_ + 1, host_, stub->instruction_start(), + icache_flush_mode); +} + + +Address RelocInfo::call_address() { + DCHECK((IsJSReturn(rmode()) && IsPatchedReturnSequence()) || + (IsDebugBreakSlot(rmode()) && IsPatchedDebugBreakSlotSequence())); + return Assembler::target_address_at(pc_ + 1, host_); +} + + +void RelocInfo::set_call_address(Address target) { + DCHECK((IsJSReturn(rmode()) && IsPatchedReturnSequence()) || + (IsDebugBreakSlot(rmode()) && IsPatchedDebugBreakSlotSequence())); + Assembler::set_target_address_at(pc_ + 1, host_, target); + if (host() != NULL) { + Object* target_code = Code::GetCodeFromTargetAddress(target); + host()->GetHeap()->incremental_marking()->RecordWriteIntoCode( + host(), this, HeapObject::cast(target_code)); + } +} + + +Object* RelocInfo::call_object() { + return *call_object_address(); +} + + +void RelocInfo::set_call_object(Object* target) { + *call_object_address() = target; +} + + +Object** RelocInfo::call_object_address() { + DCHECK((IsJSReturn(rmode()) && IsPatchedReturnSequence()) || + (IsDebugBreakSlot(rmode()) && IsPatchedDebugBreakSlotSequence())); + return reinterpret_cast<Object**>(pc_ + 1); +} + + +void RelocInfo::WipeOut() { + if (IsEmbeddedObject(rmode_) || IsExternalReference(rmode_)) { + Memory::Address_at(pc_) = NULL; + } else if (IsCodeTarget(rmode_) || IsRuntimeEntry(rmode_)) { + // Effectively write zero into the relocation. 
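+ // A pc-relative entry stores target - (pc + sizeof(int32_t)), so
+ // pointing the target at pc_ + sizeof(int32_t) encodes a displacement
+ // of zero.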
+ Assembler::set_target_address_at(pc_, host_, pc_ + sizeof(int32_t)); + } else { + UNREACHABLE(); + } +} + + +bool RelocInfo::IsPatchedReturnSequence() { + return *pc_ == kCallOpcode; +} + + +bool RelocInfo::IsPatchedDebugBreakSlotSequence() { + return !Assembler::IsNop(pc()); +} + + +void RelocInfo::Visit(Isolate* isolate, ObjectVisitor* visitor) { + RelocInfo::Mode mode = rmode(); + if (mode == RelocInfo::EMBEDDED_OBJECT) { + visitor->VisitEmbeddedPointer(this); + CpuFeatures::FlushICache(pc_, sizeof(Address)); + } else if (RelocInfo::IsCodeTarget(mode)) { + visitor->VisitCodeTarget(this); + } else if (mode == RelocInfo::CELL) { + visitor->VisitCell(this); + } else if (mode == RelocInfo::EXTERNAL_REFERENCE) { + visitor->VisitExternalReference(this); + CpuFeatures::FlushICache(pc_, sizeof(Address)); + } else if (RelocInfo::IsCodeAgeSequence(mode)) { + visitor->VisitCodeAgeSequence(this); + } else if (((RelocInfo::IsJSReturn(mode) && + IsPatchedReturnSequence()) || + (RelocInfo::IsDebugBreakSlot(mode) && + IsPatchedDebugBreakSlotSequence())) && + isolate->debug()->has_break_points()) { + visitor->VisitDebugTarget(this); + } else if (IsRuntimeEntry(mode)) { + visitor->VisitRuntimeEntry(this); + } +} + + +template<typename StaticVisitor> +void RelocInfo::Visit(Heap* heap) { + RelocInfo::Mode mode = rmode(); + if (mode == RelocInfo::EMBEDDED_OBJECT) { + StaticVisitor::VisitEmbeddedPointer(heap, this); + CpuFeatures::FlushICache(pc_, sizeof(Address)); + } else if (RelocInfo::IsCodeTarget(mode)) { + StaticVisitor::VisitCodeTarget(heap, this); + } else if (mode == RelocInfo::CELL) { + StaticVisitor::VisitCell(heap, this); + } else if (mode == RelocInfo::EXTERNAL_REFERENCE) { + StaticVisitor::VisitExternalReference(this); + CpuFeatures::FlushICache(pc_, sizeof(Address)); + } else if (RelocInfo::IsCodeAgeSequence(mode)) { + StaticVisitor::VisitCodeAgeSequence(heap, this); + } else if (heap->isolate()->debug()->has_break_points() && + ((RelocInfo::IsJSReturn(mode) && + IsPatchedReturnSequence()) || + (RelocInfo::IsDebugBreakSlot(mode) && + IsPatchedDebugBreakSlotSequence()))) { + StaticVisitor::VisitDebugTarget(heap, this); + } else if (IsRuntimeEntry(mode)) { + StaticVisitor::VisitRuntimeEntry(this); + } +} + + + +Immediate::Immediate(int x) { + x_ = x; + rmode_ = RelocInfo::NONE32; +} + + +Immediate::Immediate(const ExternalReference& ext) { + x_ = reinterpret_cast<int32_t>(ext.address()); + rmode_ = RelocInfo::EXTERNAL_REFERENCE; +} + + +Immediate::Immediate(Label* internal_offset) { + x_ = reinterpret_cast<int32_t>(internal_offset); + rmode_ = RelocInfo::INTERNAL_REFERENCE; +} + + +Immediate::Immediate(Handle<Object> handle) { + AllowDeferredHandleDereference using_raw_address; + // Verify all Objects referred by code are NOT in new space. 
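+ // Heap objects are embedded indirectly via the handle location and get
+ // EMBEDDED_OBJECT reloc info; smis are encoded directly and need no
+ // relocation.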
+ Object* obj = *handle; + if (obj->IsHeapObject()) { + DCHECK(!HeapObject::cast(obj)->GetHeap()->InNewSpace(obj)); + x_ = reinterpret_cast<intptr_t>(handle.location()); + rmode_ = RelocInfo::EMBEDDED_OBJECT; + } else { + // no relocation needed + x_ = reinterpret_cast<intptr_t>(obj); + rmode_ = RelocInfo::NONE32; + } +} + + +Immediate::Immediate(Smi* value) { + x_ = reinterpret_cast<intptr_t>(value); + rmode_ = RelocInfo::NONE32; +} + + +Immediate::Immediate(Address addr) { + x_ = reinterpret_cast<int32_t>(addr); + rmode_ = RelocInfo::NONE32; +} + + +void Assembler::emit(uint32_t x) { + *reinterpret_cast<uint32_t*>(pc_) = x; + pc_ += sizeof(uint32_t); +} + + +void Assembler::emit(Handle<Object> handle) { + AllowDeferredHandleDereference heap_object_check; + // Verify all Objects referred by code are NOT in new space. + Object* obj = *handle; + DCHECK(!isolate()->heap()->InNewSpace(obj)); + if (obj->IsHeapObject()) { + emit(reinterpret_cast<intptr_t>(handle.location()), + RelocInfo::EMBEDDED_OBJECT); + } else { + // no relocation needed + emit(reinterpret_cast<intptr_t>(obj)); + } +} + + +void Assembler::emit(uint32_t x, RelocInfo::Mode rmode, TypeFeedbackId id) { + if (rmode == RelocInfo::CODE_TARGET && !id.IsNone()) { + RecordRelocInfo(RelocInfo::CODE_TARGET_WITH_ID, id.ToInt()); + } else if (!RelocInfo::IsNone(rmode) + && rmode != RelocInfo::CODE_AGE_SEQUENCE) { + RecordRelocInfo(rmode); + } + emit(x); +} + + +void Assembler::emit(Handle<Code> code, + RelocInfo::Mode rmode, + TypeFeedbackId id) { + AllowDeferredHandleDereference embedding_raw_address; + emit(reinterpret_cast<intptr_t>(code.location()), rmode, id); +} + + +void Assembler::emit(const Immediate& x) { + if (x.rmode_ == RelocInfo::INTERNAL_REFERENCE) { + Label* label = reinterpret_cast<Label*>(x.x_); + emit_code_relative_offset(label); + return; + } + if (!RelocInfo::IsNone(x.rmode_)) RecordRelocInfo(x.rmode_); + emit(x.x_); +} + + +void Assembler::emit_code_relative_offset(Label* label) { + if (label->is_bound()) { + int32_t pos; + pos = label->pos() + Code::kHeaderSize - kHeapObjectTag; + emit(pos); + } else { + emit_disp(label, Displacement::CODE_RELATIVE); + } +} + + +void Assembler::emit_w(const Immediate& x) { + DCHECK(RelocInfo::IsNone(x.rmode_)); + uint16_t value = static_cast<uint16_t>(x.x_); + reinterpret_cast<uint16_t*>(pc_)[0] = value; + pc_ += sizeof(uint16_t); +} + + +Address Assembler::target_address_at(Address pc, + ConstantPoolArray* constant_pool) { + return pc + sizeof(int32_t) + *reinterpret_cast<int32_t*>(pc); +} + + +void Assembler::set_target_address_at(Address pc, + ConstantPoolArray* constant_pool, + Address target, + ICacheFlushMode icache_flush_mode) { + int32_t* p = reinterpret_cast<int32_t*>(pc); + *p = target - (pc + sizeof(int32_t)); + if (icache_flush_mode != SKIP_ICACHE_FLUSH) { + CpuFeatures::FlushICache(p, sizeof(int32_t)); + } +} + + +Address Assembler::target_address_from_return_address(Address pc) { + return pc - kCallTargetAddressOffset; +} + + +Address Assembler::break_address_from_return_address(Address pc) { + return pc - Assembler::kPatchDebugBreakSlotReturnOffset; +} + + +Displacement Assembler::disp_at(Label* L) { + return Displacement(long_at(L->pos())); +} + + +void Assembler::disp_at_put(Label* L, Displacement disp) { + long_at_put(L->pos(), disp.data()); +} + + +void Assembler::emit_disp(Label* L, Displacement::Type type) { + Displacement disp(L, type); + L->link_to(pc_offset()); + emit(static_cast<int>(disp.data())); +} + + +void Assembler::emit_near_disp(Label* L) { + byte 
disp = 0x00; + if (L->is_near_linked()) { + int offset = L->near_link_pos() - pc_offset(); + DCHECK(is_int8(offset)); + disp = static_cast<byte>(offset & 0xFF); + } + L->link_to(pc_offset(), Label::kNear); + *pc_++ = disp; +} + + +void Operand::set_modrm(int mod, Register rm) { + DCHECK((mod & -4) == 0); + buf_[0] = mod << 6 | rm.code(); + len_ = 1; +} + + +void Operand::set_sib(ScaleFactor scale, Register index, Register base) { + DCHECK(len_ == 1); + DCHECK((scale & -4) == 0); + // Use SIB with no index register only for base esp. + DCHECK(!index.is(esp) || base.is(esp)); + buf_[1] = scale << 6 | index.code() << 3 | base.code(); + len_ = 2; +} + + +void Operand::set_disp8(int8_t disp) { + DCHECK(len_ == 1 || len_ == 2); + *reinterpret_cast<int8_t*>(&buf_[len_++]) = disp; +} + + +void Operand::set_dispr(int32_t disp, RelocInfo::Mode rmode) { + DCHECK(len_ == 1 || len_ == 2); + int32_t* p = reinterpret_cast<int32_t*>(&buf_[len_]); + *p = disp; + len_ += sizeof(int32_t); + rmode_ = rmode; +} + +Operand::Operand(Register reg) { + // reg + set_modrm(3, reg); +} + + +Operand::Operand(int32_t disp, RelocInfo::Mode rmode) { + // [disp/r] + set_modrm(0, ebp); + set_dispr(disp, rmode); +} + + +Operand::Operand(Immediate imm) { + // [disp/r] + set_modrm(0, ebp); + set_dispr(imm.x_, imm.rmode_); +} +} } // namespace v8::internal + +#endif // V8_X87_ASSEMBLER_X87_INL_H_ diff --git a/deps/v8/src/x87/assembler-x87.cc b/deps/v8/src/x87/assembler-x87.cc new file mode 100644 index 00000000000..0e3ff25fa43 --- /dev/null +++ b/deps/v8/src/x87/assembler-x87.cc @@ -0,0 +1,2053 @@ +// Copyright (c) 1994-2006 Sun Microsystems Inc. +// All Rights Reserved. +// +// Redistribution and use in source and binary forms, with or without +// modification, are permitted provided that the following conditions +// are met: +// +// - Redistributions of source code must retain the above copyright notice, +// this list of conditions and the following disclaimer. +// +// - Redistribution in binary form must reproduce the above copyright +// notice, this list of conditions and the following disclaimer in the +// documentation and/or other materials provided with the +// distribution. +// +// - Neither the name of Sun Microsystems or the names of contributors may +// be used to endorse or promote products derived from this software without +// specific prior written permission. +// +// THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS +// "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT +// LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS +// FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE +// COPYRIGHT OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, +// INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES +// (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR +// SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) +// HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, +// STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) +// ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED +// OF THE POSSIBILITY OF SUCH DAMAGE. + +// The original source code covered by the above license above has been modified +// significantly by Google Inc. +// Copyright 2012 the V8 project authors. All rights reserved. 
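+
+// The x87 port mirrors the ia32 assembler but targets IA32 CPUs without
+// SSE2, falling back to the x87 FPU stack for floating-point code.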
+ +#include "src/v8.h" + +#if V8_TARGET_ARCH_X87 + +#include "src/base/cpu.h" +#include "src/disassembler.h" +#include "src/macro-assembler.h" +#include "src/serialize.h" + +namespace v8 { +namespace internal { + +// ----------------------------------------------------------------------------- +// Implementation of CpuFeatures + +void CpuFeatures::ProbeImpl(bool cross_compile) { + base::CPU cpu; + + // Only use statically determined features for cross compile (snapshot). + if (cross_compile) return; +} + + +void CpuFeatures::PrintTarget() { } +void CpuFeatures::PrintFeatures() { } + + +// ----------------------------------------------------------------------------- +// Implementation of Displacement + +void Displacement::init(Label* L, Type type) { + DCHECK(!L->is_bound()); + int next = 0; + if (L->is_linked()) { + next = L->pos(); + DCHECK(next > 0); // Displacements must be at positions > 0 + } + // Ensure that we _never_ overflow the next field. + DCHECK(NextField::is_valid(Assembler::kMaximalBufferSize)); + data_ = NextField::encode(next) | TypeField::encode(type); +} + + +// ----------------------------------------------------------------------------- +// Implementation of RelocInfo + + +const int RelocInfo::kApplyMask = + RelocInfo::kCodeTargetMask | 1 << RelocInfo::RUNTIME_ENTRY | + 1 << RelocInfo::JS_RETURN | 1 << RelocInfo::INTERNAL_REFERENCE | + 1 << RelocInfo::DEBUG_BREAK_SLOT | 1 << RelocInfo::CODE_AGE_SEQUENCE; + + +bool RelocInfo::IsCodedSpecially() { + // The deserializer needs to know whether a pointer is specially coded. Being + // specially coded on IA32 means that it is a relative address, as used by + // branch instructions. These are also the ones that need changing when a + // code object moves. + return (1 << rmode_) & kApplyMask; +} + + +bool RelocInfo::IsInConstantPool() { + return false; +} + + +void RelocInfo::PatchCode(byte* instructions, int instruction_count) { + // Patch the code at the current address with the supplied instructions. + for (int i = 0; i < instruction_count; i++) { + *(pc_ + i) = *(instructions + i); + } + + // Indicate that code has changed. + CpuFeatures::FlushICache(pc_, instruction_count); +} + + +// Patch the code at the current PC with a call to the target address. +// Additional guard int3 instructions can be added if required. +void RelocInfo::PatchCodeWithCall(Address target, int guard_bytes) { + // Call instruction takes up 5 bytes and int3 takes up one byte. + static const int kCallCodeSize = 5; + int code_size = kCallCodeSize + guard_bytes; + + // Create a code patcher. + CodePatcher patcher(pc_, code_size); + + // Add a label for checking the size of the code used for returning. +#ifdef DEBUG + Label check_codesize; + patcher.masm()->bind(&check_codesize); +#endif + + // Patch the code. + patcher.masm()->call(target, RelocInfo::NONE32); + + // Check that the size of the code generated is as expected. + DCHECK_EQ(kCallCodeSize, + patcher.masm()->SizeOfCodeGeneratedSince(&check_codesize)); + + // Add the requested number of int3 instructions after the call. 
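+ // (int3 is the one-byte 0xCC breakpoint instruction, used here as guard
+ // filler.)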
+ DCHECK_GE(guard_bytes, 0); + for (int i = 0; i < guard_bytes; i++) { + patcher.masm()->int3(); + } +} + + +// ----------------------------------------------------------------------------- +// Implementation of Operand + +Operand::Operand(Register base, int32_t disp, RelocInfo::Mode rmode) { + // [base + disp/r] + if (disp == 0 && RelocInfo::IsNone(rmode) && !base.is(ebp)) { + // [base] + set_modrm(0, base); + if (base.is(esp)) set_sib(times_1, esp, base); + } else if (is_int8(disp) && RelocInfo::IsNone(rmode)) { + // [base + disp8] + set_modrm(1, base); + if (base.is(esp)) set_sib(times_1, esp, base); + set_disp8(disp); + } else { + // [base + disp/r] + set_modrm(2, base); + if (base.is(esp)) set_sib(times_1, esp, base); + set_dispr(disp, rmode); + } +} + + +Operand::Operand(Register base, + Register index, + ScaleFactor scale, + int32_t disp, + RelocInfo::Mode rmode) { + DCHECK(!index.is(esp)); // illegal addressing mode + // [base + index*scale + disp/r] + if (disp == 0 && RelocInfo::IsNone(rmode) && !base.is(ebp)) { + // [base + index*scale] + set_modrm(0, esp); + set_sib(scale, index, base); + } else if (is_int8(disp) && RelocInfo::IsNone(rmode)) { + // [base + index*scale + disp8] + set_modrm(1, esp); + set_sib(scale, index, base); + set_disp8(disp); + } else { + // [base + index*scale + disp/r] + set_modrm(2, esp); + set_sib(scale, index, base); + set_dispr(disp, rmode); + } +} + + +Operand::Operand(Register index, + ScaleFactor scale, + int32_t disp, + RelocInfo::Mode rmode) { + DCHECK(!index.is(esp)); // illegal addressing mode + // [index*scale + disp/r] + set_modrm(0, esp); + set_sib(scale, index, ebp); + set_dispr(disp, rmode); +} + + +bool Operand::is_reg(Register reg) const { + return ((buf_[0] & 0xF8) == 0xC0) // addressing mode is register only. + && ((buf_[0] & 0x07) == reg.code()); // register codes match. +} + + +bool Operand::is_reg_only() const { + return (buf_[0] & 0xF8) == 0xC0; // Addressing mode is register only. +} + + +Register Operand::reg() const { + DCHECK(is_reg_only()); + return Register::from_code(buf_[0] & 0x07); +} + + +// ----------------------------------------------------------------------------- +// Implementation of Assembler. + +// Emit a single byte. Must always be inlined. +#define EMIT(x) \ + *pc_++ = (x) + + +#ifdef GENERATED_CODE_COVERAGE +static void InitCoverageLog(); +#endif + +Assembler::Assembler(Isolate* isolate, void* buffer, int buffer_size) + : AssemblerBase(isolate, buffer, buffer_size), + positions_recorder_(this) { + // Clear the buffer in debug mode unless it was provided by the + // caller in which case we can't be sure it's okay to overwrite + // existing code in it; see CodePatcher::CodePatcher(...). +#ifdef DEBUG + if (own_buffer_) { + memset(buffer_, 0xCC, buffer_size_); // int3 + } +#endif + + reloc_info_writer.Reposition(buffer_ + buffer_size_, pc_); + +#ifdef GENERATED_CODE_COVERAGE + InitCoverageLog(); +#endif +} + + +void Assembler::GetCode(CodeDesc* desc) { + // Finalize code (at this point overflow() may be true, but the gap ensures + // that we are still not overlapping instructions and relocation info). + DCHECK(pc_ <= reloc_info_writer.pos()); // No overlap. + // Set up code descriptor. 
+ desc->buffer = buffer_; + desc->buffer_size = buffer_size_; + desc->instr_size = pc_offset(); + desc->reloc_size = (buffer_ + buffer_size_) - reloc_info_writer.pos(); + desc->origin = this; +} + + +void Assembler::Align(int m) { + DCHECK(IsPowerOf2(m)); + int mask = m - 1; + int addr = pc_offset(); + Nop((m - (addr & mask)) & mask); +} + + +bool Assembler::IsNop(Address addr) { + Address a = addr; + while (*a == 0x66) a++; + if (*a == 0x90) return true; + if (a[0] == 0xf && a[1] == 0x1f) return true; + return false; +} + + +void Assembler::Nop(int bytes) { + EnsureSpace ensure_space(this); + + // Older CPUs that do not support SSE2 may not support multibyte NOP + // instructions. + for (; bytes > 0; bytes--) { + EMIT(0x90); + } + return; +} + + +void Assembler::CodeTargetAlign() { + Align(16); // Preferred alignment of jump targets on ia32. +} + + +void Assembler::cpuid() { + EnsureSpace ensure_space(this); + EMIT(0x0F); + EMIT(0xA2); +} + + +void Assembler::pushad() { + EnsureSpace ensure_space(this); + EMIT(0x60); +} + + +void Assembler::popad() { + EnsureSpace ensure_space(this); + EMIT(0x61); +} + + +void Assembler::pushfd() { + EnsureSpace ensure_space(this); + EMIT(0x9C); +} + + +void Assembler::popfd() { + EnsureSpace ensure_space(this); + EMIT(0x9D); +} + + +void Assembler::push(const Immediate& x) { + EnsureSpace ensure_space(this); + if (x.is_int8()) { + EMIT(0x6a); + EMIT(x.x_); + } else { + EMIT(0x68); + emit(x); + } +} + + +void Assembler::push_imm32(int32_t imm32) { + EnsureSpace ensure_space(this); + EMIT(0x68); + emit(imm32); +} + + +void Assembler::push(Register src) { + EnsureSpace ensure_space(this); + EMIT(0x50 | src.code()); +} + + +void Assembler::push(const Operand& src) { + EnsureSpace ensure_space(this); + EMIT(0xFF); + emit_operand(esi, src); +} + + +void Assembler::pop(Register dst) { + DCHECK(reloc_info_writer.last_pc() != NULL); + EnsureSpace ensure_space(this); + EMIT(0x58 | dst.code()); +} + + +void Assembler::pop(const Operand& dst) { + EnsureSpace ensure_space(this); + EMIT(0x8F); + emit_operand(eax, dst); +} + + +void Assembler::enter(const Immediate& size) { + EnsureSpace ensure_space(this); + EMIT(0xC8); + emit_w(size); + EMIT(0); +} + + +void Assembler::leave() { + EnsureSpace ensure_space(this); + EMIT(0xC9); +} + + +void Assembler::mov_b(Register dst, const Operand& src) { + CHECK(dst.is_byte_register()); + EnsureSpace ensure_space(this); + EMIT(0x8A); + emit_operand(dst, src); +} + + +void Assembler::mov_b(const Operand& dst, int8_t imm8) { + EnsureSpace ensure_space(this); + EMIT(0xC6); + emit_operand(eax, dst); + EMIT(imm8); +} + + +void Assembler::mov_b(const Operand& dst, Register src) { + CHECK(src.is_byte_register()); + EnsureSpace ensure_space(this); + EMIT(0x88); + emit_operand(src, dst); +} + + +void Assembler::mov_w(Register dst, const Operand& src) { + EnsureSpace ensure_space(this); + EMIT(0x66); + EMIT(0x8B); + emit_operand(dst, src); +} + + +void Assembler::mov_w(const Operand& dst, Register src) { + EnsureSpace ensure_space(this); + EMIT(0x66); + EMIT(0x89); + emit_operand(src, dst); +} + + +void Assembler::mov_w(const Operand& dst, int16_t imm16) { + EnsureSpace ensure_space(this); + EMIT(0x66); + EMIT(0xC7); + emit_operand(eax, dst); + EMIT(static_cast<int8_t>(imm16 & 0xff)); + EMIT(static_cast<int8_t>(imm16 >> 8)); +} + + +void Assembler::mov(Register dst, int32_t imm32) { + EnsureSpace ensure_space(this); + EMIT(0xB8 | dst.code()); + emit(imm32); +} + + +void Assembler::mov(Register dst, const Immediate& x) { + EnsureSpace 
ensure_space(this); + EMIT(0xB8 | dst.code()); + emit(x); +} + + +void Assembler::mov(Register dst, Handle<Object> handle) { + EnsureSpace ensure_space(this); + EMIT(0xB8 | dst.code()); + emit(handle); +} + + +void Assembler::mov(Register dst, const Operand& src) { + EnsureSpace ensure_space(this); + EMIT(0x8B); + emit_operand(dst, src); +} + + +void Assembler::mov(Register dst, Register src) { + EnsureSpace ensure_space(this); + EMIT(0x89); + EMIT(0xC0 | src.code() << 3 | dst.code()); +} + + +void Assembler::mov(const Operand& dst, const Immediate& x) { + EnsureSpace ensure_space(this); + EMIT(0xC7); + emit_operand(eax, dst); + emit(x); +} + + +void Assembler::mov(const Operand& dst, Handle<Object> handle) { + EnsureSpace ensure_space(this); + EMIT(0xC7); + emit_operand(eax, dst); + emit(handle); +} + + +void Assembler::mov(const Operand& dst, Register src) { + EnsureSpace ensure_space(this); + EMIT(0x89); + emit_operand(src, dst); +} + + +void Assembler::movsx_b(Register dst, const Operand& src) { + EnsureSpace ensure_space(this); + EMIT(0x0F); + EMIT(0xBE); + emit_operand(dst, src); +} + + +void Assembler::movsx_w(Register dst, const Operand& src) { + EnsureSpace ensure_space(this); + EMIT(0x0F); + EMIT(0xBF); + emit_operand(dst, src); +} + + +void Assembler::movzx_b(Register dst, const Operand& src) { + EnsureSpace ensure_space(this); + EMIT(0x0F); + EMIT(0xB6); + emit_operand(dst, src); +} + + +void Assembler::movzx_w(Register dst, const Operand& src) { + EnsureSpace ensure_space(this); + EMIT(0x0F); + EMIT(0xB7); + emit_operand(dst, src); +} + + +void Assembler::cld() { + EnsureSpace ensure_space(this); + EMIT(0xFC); +} + + +void Assembler::rep_movs() { + EnsureSpace ensure_space(this); + EMIT(0xF3); + EMIT(0xA5); +} + + +void Assembler::rep_stos() { + EnsureSpace ensure_space(this); + EMIT(0xF3); + EMIT(0xAB); +} + + +void Assembler::stos() { + EnsureSpace ensure_space(this); + EMIT(0xAB); +} + + +void Assembler::xchg(Register dst, Register src) { + EnsureSpace ensure_space(this); + if (src.is(eax) || dst.is(eax)) { // Single-byte encoding. + EMIT(0x90 | (src.is(eax) ? 
dst.code() : src.code())); + } else { + EMIT(0x87); + EMIT(0xC0 | src.code() << 3 | dst.code()); + } +} + + +void Assembler::xchg(Register dst, const Operand& src) { + EnsureSpace ensure_space(this); + EMIT(0x87); + emit_operand(dst, src); +} + + +void Assembler::adc(Register dst, int32_t imm32) { + EnsureSpace ensure_space(this); + emit_arith(2, Operand(dst), Immediate(imm32)); +} + + +void Assembler::adc(Register dst, const Operand& src) { + EnsureSpace ensure_space(this); + EMIT(0x13); + emit_operand(dst, src); +} + + +void Assembler::add(Register dst, const Operand& src) { + EnsureSpace ensure_space(this); + EMIT(0x03); + emit_operand(dst, src); +} + + +void Assembler::add(const Operand& dst, Register src) { + EnsureSpace ensure_space(this); + EMIT(0x01); + emit_operand(src, dst); +} + + +void Assembler::add(const Operand& dst, const Immediate& x) { + DCHECK(reloc_info_writer.last_pc() != NULL); + EnsureSpace ensure_space(this); + emit_arith(0, dst, x); +} + + +void Assembler::and_(Register dst, int32_t imm32) { + and_(dst, Immediate(imm32)); +} + + +void Assembler::and_(Register dst, const Immediate& x) { + EnsureSpace ensure_space(this); + emit_arith(4, Operand(dst), x); +} + + +void Assembler::and_(Register dst, const Operand& src) { + EnsureSpace ensure_space(this); + EMIT(0x23); + emit_operand(dst, src); +} + + +void Assembler::and_(const Operand& dst, const Immediate& x) { + EnsureSpace ensure_space(this); + emit_arith(4, dst, x); +} + + +void Assembler::and_(const Operand& dst, Register src) { + EnsureSpace ensure_space(this); + EMIT(0x21); + emit_operand(src, dst); +} + + +void Assembler::cmpb(const Operand& op, int8_t imm8) { + EnsureSpace ensure_space(this); + if (op.is_reg(eax)) { + EMIT(0x3C); + } else { + EMIT(0x80); + emit_operand(edi, op); // edi == 7 + } + EMIT(imm8); +} + + +void Assembler::cmpb(const Operand& op, Register reg) { + CHECK(reg.is_byte_register()); + EnsureSpace ensure_space(this); + EMIT(0x38); + emit_operand(reg, op); +} + + +void Assembler::cmpb(Register reg, const Operand& op) { + CHECK(reg.is_byte_register()); + EnsureSpace ensure_space(this); + EMIT(0x3A); + emit_operand(reg, op); +} + + +void Assembler::cmpw(const Operand& op, Immediate imm16) { + DCHECK(imm16.is_int16()); + EnsureSpace ensure_space(this); + EMIT(0x66); + EMIT(0x81); + emit_operand(edi, op); + emit_w(imm16); +} + + +void Assembler::cmp(Register reg, int32_t imm32) { + EnsureSpace ensure_space(this); + emit_arith(7, Operand(reg), Immediate(imm32)); +} + + +void Assembler::cmp(Register reg, Handle<Object> handle) { + EnsureSpace ensure_space(this); + emit_arith(7, Operand(reg), Immediate(handle)); +} + + +void Assembler::cmp(Register reg, const Operand& op) { + EnsureSpace ensure_space(this); + EMIT(0x3B); + emit_operand(reg, op); +} + + +void Assembler::cmp(const Operand& op, const Immediate& imm) { + EnsureSpace ensure_space(this); + emit_arith(7, op, imm); +} + + +void Assembler::cmp(const Operand& op, Handle<Object> handle) { + EnsureSpace ensure_space(this); + emit_arith(7, op, Immediate(handle)); +} + + +void Assembler::cmpb_al(const Operand& op) { + EnsureSpace ensure_space(this); + EMIT(0x38); // CMP r/m8, r8 + emit_operand(eax, op); // eax has same code as register al. +} + + +void Assembler::cmpw_ax(const Operand& op) { + EnsureSpace ensure_space(this); + EMIT(0x66); + EMIT(0x39); // CMP r/m16, r16 + emit_operand(eax, op); // eax has same code as register ax. 
+} + + +void Assembler::dec_b(Register dst) { + CHECK(dst.is_byte_register()); + EnsureSpace ensure_space(this); + EMIT(0xFE); + EMIT(0xC8 | dst.code()); +} + + +void Assembler::dec_b(const Operand& dst) { + EnsureSpace ensure_space(this); + EMIT(0xFE); + emit_operand(ecx, dst); +} + + +void Assembler::dec(Register dst) { + EnsureSpace ensure_space(this); + EMIT(0x48 | dst.code()); +} + + +void Assembler::dec(const Operand& dst) { + EnsureSpace ensure_space(this); + EMIT(0xFF); + emit_operand(ecx, dst); +} + + +void Assembler::cdq() { + EnsureSpace ensure_space(this); + EMIT(0x99); +} + + +void Assembler::idiv(const Operand& src) { + EnsureSpace ensure_space(this); + EMIT(0xF7); + emit_operand(edi, src); +} + + +void Assembler::div(const Operand& src) { + EnsureSpace ensure_space(this); + EMIT(0xF7); + emit_operand(esi, src); +} + + +void Assembler::imul(Register reg) { + EnsureSpace ensure_space(this); + EMIT(0xF7); + EMIT(0xE8 | reg.code()); +} + + +void Assembler::imul(Register dst, const Operand& src) { + EnsureSpace ensure_space(this); + EMIT(0x0F); + EMIT(0xAF); + emit_operand(dst, src); +} + + +void Assembler::imul(Register dst, Register src, int32_t imm32) { + imul(dst, Operand(src), imm32); +} + + +void Assembler::imul(Register dst, const Operand& src, int32_t imm32) { + EnsureSpace ensure_space(this); + if (is_int8(imm32)) { + EMIT(0x6B); + emit_operand(dst, src); + EMIT(imm32); + } else { + EMIT(0x69); + emit_operand(dst, src); + emit(imm32); + } +} + + +void Assembler::inc(Register dst) { + EnsureSpace ensure_space(this); + EMIT(0x40 | dst.code()); +} + + +void Assembler::inc(const Operand& dst) { + EnsureSpace ensure_space(this); + EMIT(0xFF); + emit_operand(eax, dst); +} + + +void Assembler::lea(Register dst, const Operand& src) { + EnsureSpace ensure_space(this); + EMIT(0x8D); + emit_operand(dst, src); +} + + +void Assembler::mul(Register src) { + EnsureSpace ensure_space(this); + EMIT(0xF7); + EMIT(0xE0 | src.code()); +} + + +void Assembler::neg(Register dst) { + EnsureSpace ensure_space(this); + EMIT(0xF7); + EMIT(0xD8 | dst.code()); +} + + +void Assembler::neg(const Operand& dst) { + EnsureSpace ensure_space(this); + EMIT(0xF7); + emit_operand(ebx, dst); +} + + +void Assembler::not_(Register dst) { + EnsureSpace ensure_space(this); + EMIT(0xF7); + EMIT(0xD0 | dst.code()); +} + + +void Assembler::not_(const Operand& dst) { + EnsureSpace ensure_space(this); + EMIT(0xF7); + emit_operand(edx, dst); +} + + +void Assembler::or_(Register dst, int32_t imm32) { + EnsureSpace ensure_space(this); + emit_arith(1, Operand(dst), Immediate(imm32)); +} + + +void Assembler::or_(Register dst, const Operand& src) { + EnsureSpace ensure_space(this); + EMIT(0x0B); + emit_operand(dst, src); +} + + +void Assembler::or_(const Operand& dst, const Immediate& x) { + EnsureSpace ensure_space(this); + emit_arith(1, dst, x); +} + + +void Assembler::or_(const Operand& dst, Register src) { + EnsureSpace ensure_space(this); + EMIT(0x09); + emit_operand(src, dst); +} + + +void Assembler::rcl(Register dst, uint8_t imm8) { + EnsureSpace ensure_space(this); + DCHECK(is_uint5(imm8)); // illegal shift count + if (imm8 == 1) { + EMIT(0xD1); + EMIT(0xD0 | dst.code()); + } else { + EMIT(0xC1); + EMIT(0xD0 | dst.code()); + EMIT(imm8); + } +} + + +void Assembler::rcr(Register dst, uint8_t imm8) { + EnsureSpace ensure_space(this); + DCHECK(is_uint5(imm8)); // illegal shift count + if (imm8 == 1) { + EMIT(0xD1); + EMIT(0xD8 | dst.code()); + } else { + EMIT(0xC1); + EMIT(0xD8 | dst.code()); + EMIT(imm8); + } +} + + 
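The shift and rotate emitters here (rcl and rcr above, and the ror/sar/shl/shr family that follows) all make the same encoding choice: x86 has a dedicated opcode (0xD1) for a count of 1 that saves the immediate byte, while the general form (0xC1) carries an explicit imm8. A minimal standalone sketch of that decision, using ror as the example; EncodeRor and the plain byte vector are illustrative stand-ins, not part of this patch:

#include <cstdint>
#include <cstdio>
#include <vector>

// ROR r32, imm8: opcode 0xD1 /1 when the count is 1, else 0xC1 /1 ib.
// "/1" means the ModRM reg field is 1, so the register form of the
// ModRM byte is 0xC8 | reg -- the same 0xC8 | dst.code() byte the
// Assembler::ror() emitter produces.
std::vector<uint8_t> EncodeRor(int reg_code, uint8_t imm8) {
  std::vector<uint8_t> bytes;
  if (imm8 == 1) {
    bytes.push_back(0xD1);                                // shift-by-one form
    bytes.push_back(static_cast<uint8_t>(0xC8 | reg_code));
  } else {
    bytes.push_back(0xC1);                                // general form
    bytes.push_back(static_cast<uint8_t>(0xC8 | reg_code));
    bytes.push_back(imm8);                                // explicit count
  }
  return bytes;
}

int main() {
  for (uint8_t b : EncodeRor(0, 1)) printf("%02X ", b);   // ror eax, 1 -> D1 C8
  printf("\n");
  for (uint8_t b : EncodeRor(0, 3)) printf("%02X ", b);   // ror eax, 3 -> C1 C8 03
  printf("\n");
  return 0;
}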
+void Assembler::ror(Register dst, uint8_t imm8) { + EnsureSpace ensure_space(this); + DCHECK(is_uint5(imm8)); // illegal shift count + if (imm8 == 1) { + EMIT(0xD1); + EMIT(0xC8 | dst.code()); + } else { + EMIT(0xC1); + EMIT(0xC8 | dst.code()); + EMIT(imm8); + } +} + + +void Assembler::ror_cl(Register dst) { + EnsureSpace ensure_space(this); + EMIT(0xD3); + EMIT(0xC8 | dst.code()); +} + + +void Assembler::sar(const Operand& dst, uint8_t imm8) { + EnsureSpace ensure_space(this); + DCHECK(is_uint5(imm8)); // illegal shift count + if (imm8 == 1) { + EMIT(0xD1); + emit_operand(edi, dst); + } else { + EMIT(0xC1); + emit_operand(edi, dst); + EMIT(imm8); + } +} + + +void Assembler::sar_cl(const Operand& dst) { + EnsureSpace ensure_space(this); + EMIT(0xD3); + emit_operand(edi, dst); +} + + +void Assembler::sbb(Register dst, const Operand& src) { + EnsureSpace ensure_space(this); + EMIT(0x1B); + emit_operand(dst, src); +} + + +void Assembler::shld(Register dst, const Operand& src) { + EnsureSpace ensure_space(this); + EMIT(0x0F); + EMIT(0xA5); + emit_operand(dst, src); +} + + +void Assembler::shl(const Operand& dst, uint8_t imm8) { + EnsureSpace ensure_space(this); + DCHECK(is_uint5(imm8)); // illegal shift count + if (imm8 == 1) { + EMIT(0xD1); + emit_operand(esp, dst); + } else { + EMIT(0xC1); + emit_operand(esp, dst); + EMIT(imm8); + } +} + + +void Assembler::shl_cl(const Operand& dst) { + EnsureSpace ensure_space(this); + EMIT(0xD3); + emit_operand(esp, dst); +} + + +void Assembler::shrd(Register dst, const Operand& src) { + EnsureSpace ensure_space(this); + EMIT(0x0F); + EMIT(0xAD); + emit_operand(dst, src); +} + + +void Assembler::shr(const Operand& dst, uint8_t imm8) { + EnsureSpace ensure_space(this); + DCHECK(is_uint5(imm8)); // illegal shift count + if (imm8 == 1) { + EMIT(0xD1); + emit_operand(ebp, dst); + } else { + EMIT(0xC1); + emit_operand(ebp, dst); + EMIT(imm8); + } +} + + +void Assembler::shr_cl(const Operand& dst) { + EnsureSpace ensure_space(this); + EMIT(0xD3); + emit_operand(ebp, dst); +} + + +void Assembler::sub(const Operand& dst, const Immediate& x) { + EnsureSpace ensure_space(this); + emit_arith(5, dst, x); +} + + +void Assembler::sub(Register dst, const Operand& src) { + EnsureSpace ensure_space(this); + EMIT(0x2B); + emit_operand(dst, src); +} + + +void Assembler::sub(const Operand& dst, Register src) { + EnsureSpace ensure_space(this); + EMIT(0x29); + emit_operand(src, dst); +} + + +void Assembler::test(Register reg, const Immediate& imm) { + if (RelocInfo::IsNone(imm.rmode_) && is_uint8(imm.x_)) { + test_b(reg, imm.x_); + return; + } + + EnsureSpace ensure_space(this); + // This is not using emit_arith because test doesn't support + // sign-extension of 8-bit operands. 
+ if (reg.is(eax)) { + EMIT(0xA9); + } else { + EMIT(0xF7); + EMIT(0xC0 | reg.code()); + } + emit(imm); +} + + +void Assembler::test(Register reg, const Operand& op) { + EnsureSpace ensure_space(this); + EMIT(0x85); + emit_operand(reg, op); +} + + +void Assembler::test_b(Register reg, const Operand& op) { + CHECK(reg.is_byte_register()); + EnsureSpace ensure_space(this); + EMIT(0x84); + emit_operand(reg, op); +} + + +void Assembler::test(const Operand& op, const Immediate& imm) { + if (op.is_reg_only()) { + test(op.reg(), imm); + return; + } + if (RelocInfo::IsNone(imm.rmode_) && is_uint8(imm.x_)) { + return test_b(op, imm.x_); + } + EnsureSpace ensure_space(this); + EMIT(0xF7); + emit_operand(eax, op); + emit(imm); +} + + +void Assembler::test_b(Register reg, uint8_t imm8) { + EnsureSpace ensure_space(this); + // Only use test against byte for registers that have a byte + // variant: eax, ebx, ecx, and edx. + if (reg.is(eax)) { + EMIT(0xA8); + EMIT(imm8); + } else if (reg.is_byte_register()) { + emit_arith_b(0xF6, 0xC0, reg, imm8); + } else { + EMIT(0xF7); + EMIT(0xC0 | reg.code()); + emit(imm8); + } +} + + +void Assembler::test_b(const Operand& op, uint8_t imm8) { + if (op.is_reg_only()) { + test_b(op.reg(), imm8); + return; + } + EnsureSpace ensure_space(this); + EMIT(0xF6); + emit_operand(eax, op); + EMIT(imm8); +} + + +void Assembler::xor_(Register dst, int32_t imm32) { + EnsureSpace ensure_space(this); + emit_arith(6, Operand(dst), Immediate(imm32)); +} + + +void Assembler::xor_(Register dst, const Operand& src) { + EnsureSpace ensure_space(this); + EMIT(0x33); + emit_operand(dst, src); +} + + +void Assembler::xor_(const Operand& dst, Register src) { + EnsureSpace ensure_space(this); + EMIT(0x31); + emit_operand(src, dst); +} + + +void Assembler::xor_(const Operand& dst, const Immediate& x) { + EnsureSpace ensure_space(this); + emit_arith(6, dst, x); +} + + +void Assembler::bt(const Operand& dst, Register src) { + EnsureSpace ensure_space(this); + EMIT(0x0F); + EMIT(0xA3); + emit_operand(src, dst); +} + + +void Assembler::bts(const Operand& dst, Register src) { + EnsureSpace ensure_space(this); + EMIT(0x0F); + EMIT(0xAB); + emit_operand(src, dst); +} + + +void Assembler::bsr(Register dst, const Operand& src) { + EnsureSpace ensure_space(this); + EMIT(0x0F); + EMIT(0xBD); + emit_operand(dst, src); +} + + +void Assembler::hlt() { + EnsureSpace ensure_space(this); + EMIT(0xF4); +} + + +void Assembler::int3() { + EnsureSpace ensure_space(this); + EMIT(0xCC); +} + + +void Assembler::nop() { + EnsureSpace ensure_space(this); + EMIT(0x90); +} + + +void Assembler::ret(int imm16) { + EnsureSpace ensure_space(this); + DCHECK(is_uint16(imm16)); + if (imm16 == 0) { + EMIT(0xC3); + } else { + EMIT(0xC2); + EMIT(imm16 & 0xFF); + EMIT((imm16 >> 8) & 0xFF); + } +} + + +// Labels refer to positions in the (to be) generated code. +// There are bound, linked, and unused labels. +// +// Bound labels refer to known positions in the already +// generated code. pos() is the position the label refers to. +// +// Linked labels refer to unknown positions in the code +// to be generated; pos() is the position of the 32bit +// Displacement of the last instruction using the label. 
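The comment block above describes the core trick behind forward references: until a label is bound, each instruction that uses it stores, in its own 32-bit displacement slot, the position of the previous unresolved use, so the pending uses form a linked list threaded through the code buffer itself, terminated by 0 (safe because no displacement can sit at offset 0). A self-contained toy version of that chain; ToyLabel, EmitJmpTo, and Bind are invented names rather than the V8 API, and little-endian byte order is assumed as on ia32:

#include <cstdint>
#include <cstdio>
#include <cstring>
#include <vector>

struct ToyLabel { int last_slot = 0; };  // 0 == no pending uses

// Emit "jmp rel32" to a not-yet-bound label: the rel32 slot temporarily
// holds the buffer offset of the previous unresolved use of the label.
void EmitJmpTo(std::vector<uint8_t>* code, ToyLabel* l) {
  code->push_back(0xE9);                         // jmp rel32 opcode
  int slot = static_cast<int>(code->size());
  int32_t prev = l->last_slot;                   // link to previous use
  code->resize(code->size() + 4);
  memcpy(&(*code)[slot], &prev, 4);
  l->last_slot = slot;
}

// Bind the label at position pos: walk the chain and patch every slot
// with a real displacement, relative to the end of the rel32 field.
void Bind(std::vector<uint8_t>* code, ToyLabel* l, int pos) {
  int slot = l->last_slot;
  while (slot != 0) {
    int32_t prev;
    memcpy(&prev, &(*code)[slot], 4);
    int32_t rel = pos - (slot + 4);
    memcpy(&(*code)[slot], &rel, 4);
    slot = prev;                                 // follow the chain backwards
  }
  l->last_slot = 0;
}

int main() {
  std::vector<uint8_t> code;
  ToyLabel target;
  EmitJmpTo(&code, &target);                     // two forward jumps...
  EmitJmpTo(&code, &target);
  Bind(&code, &target, static_cast<int>(code.size()));  // ...bound here
  for (uint8_t b : code) printf("%02X ", b);     // E9 05 00 00 00 E9 00 00 00 00
  printf("\n");
  return 0;
}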
+ + +void Assembler::print(Label* L) { + if (L->is_unused()) { + PrintF("unused label\n"); + } else if (L->is_bound()) { + PrintF("bound label to %d\n", L->pos()); + } else if (L->is_linked()) { + Label l = *L; + PrintF("unbound label"); + while (l.is_linked()) { + Displacement disp = disp_at(&l); + PrintF("@ %d ", l.pos()); + disp.print(); + PrintF("\n"); + disp.next(&l); + } + } else { + PrintF("label in inconsistent state (pos = %d)\n", L->pos_); + } +} + + +void Assembler::bind_to(Label* L, int pos) { + EnsureSpace ensure_space(this); + DCHECK(0 <= pos && pos <= pc_offset()); // must have a valid binding position + while (L->is_linked()) { + Displacement disp = disp_at(L); + int fixup_pos = L->pos(); + if (disp.type() == Displacement::CODE_RELATIVE) { + // Relative to Code* heap object pointer. + long_at_put(fixup_pos, pos + Code::kHeaderSize - kHeapObjectTag); + } else { + if (disp.type() == Displacement::UNCONDITIONAL_JUMP) { + DCHECK(byte_at(fixup_pos - 1) == 0xE9); // jmp expected + } + // Relative address, relative to point after address. + int imm32 = pos - (fixup_pos + sizeof(int32_t)); + long_at_put(fixup_pos, imm32); + } + disp.next(L); + } + while (L->is_near_linked()) { + int fixup_pos = L->near_link_pos(); + int offset_to_next = + static_cast<int>(*reinterpret_cast<int8_t*>(addr_at(fixup_pos))); + DCHECK(offset_to_next <= 0); + // Relative address, relative to point after address. + int disp = pos - fixup_pos - sizeof(int8_t); + CHECK(0 <= disp && disp <= 127); + set_byte_at(fixup_pos, disp); + if (offset_to_next < 0) { + L->link_to(fixup_pos + offset_to_next, Label::kNear); + } else { + L->UnuseNear(); + } + } + L->bind_to(pos); +} + + +void Assembler::bind(Label* L) { + EnsureSpace ensure_space(this); + DCHECK(!L->is_bound()); // label can only be bound once + bind_to(L, pc_offset()); +} + + +void Assembler::call(Label* L) { + positions_recorder()->WriteRecordedPositions(); + EnsureSpace ensure_space(this); + if (L->is_bound()) { + const int long_size = 5; + int offs = L->pos() - pc_offset(); + DCHECK(offs <= 0); + // 1110 1000 #32-bit disp. + EMIT(0xE8); + emit(offs - long_size); + } else { + // 1110 1000 #32-bit disp. + EMIT(0xE8); + emit_disp(L, Displacement::OTHER); + } +} + + +void Assembler::call(byte* entry, RelocInfo::Mode rmode) { + positions_recorder()->WriteRecordedPositions(); + EnsureSpace ensure_space(this); + DCHECK(!RelocInfo::IsCodeTarget(rmode)); + EMIT(0xE8); + if (RelocInfo::IsRuntimeEntry(rmode)) { + emit(reinterpret_cast<uint32_t>(entry), rmode); + } else { + emit(entry - (pc_ + sizeof(int32_t)), rmode); + } +} + + +int Assembler::CallSize(const Operand& adr) { + // Call size is 1 (opcode) + adr.len_ (operand). 
+ return 1 + adr.len_; +} + + +void Assembler::call(const Operand& adr) { + positions_recorder()->WriteRecordedPositions(); + EnsureSpace ensure_space(this); + EMIT(0xFF); + emit_operand(edx, adr); +} + + +int Assembler::CallSize(Handle<Code> code, RelocInfo::Mode rmode) { + return 1 /* EMIT */ + sizeof(uint32_t) /* emit */; +} + + +void Assembler::call(Handle<Code> code, + RelocInfo::Mode rmode, + TypeFeedbackId ast_id) { + positions_recorder()->WriteRecordedPositions(); + EnsureSpace ensure_space(this); + DCHECK(RelocInfo::IsCodeTarget(rmode) + || rmode == RelocInfo::CODE_AGE_SEQUENCE); + EMIT(0xE8); + emit(code, rmode, ast_id); +} + + +void Assembler::jmp(Label* L, Label::Distance distance) { + EnsureSpace ensure_space(this); + if (L->is_bound()) { + const int short_size = 2; + const int long_size = 5; + int offs = L->pos() - pc_offset(); + DCHECK(offs <= 0); + if (is_int8(offs - short_size)) { + // 1110 1011 #8-bit disp. + EMIT(0xEB); + EMIT((offs - short_size) & 0xFF); + } else { + // 1110 1001 #32-bit disp. + EMIT(0xE9); + emit(offs - long_size); + } + } else if (distance == Label::kNear) { + EMIT(0xEB); + emit_near_disp(L); + } else { + // 1110 1001 #32-bit disp. + EMIT(0xE9); + emit_disp(L, Displacement::UNCONDITIONAL_JUMP); + } +} + + +void Assembler::jmp(byte* entry, RelocInfo::Mode rmode) { + EnsureSpace ensure_space(this); + DCHECK(!RelocInfo::IsCodeTarget(rmode)); + EMIT(0xE9); + if (RelocInfo::IsRuntimeEntry(rmode)) { + emit(reinterpret_cast<uint32_t>(entry), rmode); + } else { + emit(entry - (pc_ + sizeof(int32_t)), rmode); + } +} + + +void Assembler::jmp(const Operand& adr) { + EnsureSpace ensure_space(this); + EMIT(0xFF); + emit_operand(esp, adr); +} + + +void Assembler::jmp(Handle<Code> code, RelocInfo::Mode rmode) { + EnsureSpace ensure_space(this); + DCHECK(RelocInfo::IsCodeTarget(rmode)); + EMIT(0xE9); + emit(code, rmode); +} + + +void Assembler::j(Condition cc, Label* L, Label::Distance distance) { + EnsureSpace ensure_space(this); + DCHECK(0 <= cc && static_cast<int>(cc) < 16); + if (L->is_bound()) { + const int short_size = 2; + const int long_size = 6; + int offs = L->pos() - pc_offset(); + DCHECK(offs <= 0); + if (is_int8(offs - short_size)) { + // 0111 tttn #8-bit disp + EMIT(0x70 | cc); + EMIT((offs - short_size) & 0xFF); + } else { + // 0000 1111 1000 tttn #32-bit disp + EMIT(0x0F); + EMIT(0x80 | cc); + emit(offs - long_size); + } + } else if (distance == Label::kNear) { + EMIT(0x70 | cc); + emit_near_disp(L); + } else { + // 0000 1111 1000 tttn #32-bit disp + // Note: we could eliminate conditional jumps to this jump if the + // condition is the same; however, that seems to be a rather unlikely case. + EMIT(0x0F); + EMIT(0x80 | cc); + emit_disp(L, Displacement::OTHER); + } +} + + +void Assembler::j(Condition cc, byte* entry, RelocInfo::Mode rmode) { + EnsureSpace ensure_space(this); + DCHECK((0 <= cc) && (static_cast<int>(cc) < 16)); + // 0000 1111 1000 tttn #32-bit disp. + EMIT(0x0F); + EMIT(0x80 | cc); + if (RelocInfo::IsRuntimeEntry(rmode)) { + emit(reinterpret_cast<uint32_t>(entry), rmode); + } else { + emit(entry - (pc_ + sizeof(int32_t)), rmode); + } +} + + +void Assembler::j(Condition cc, Handle<Code> code) { + EnsureSpace ensure_space(this); + // 0000 1111 1000 tttn #32-bit disp + EMIT(0x0F); + EMIT(0x80 | cc); + emit(code, RelocInfo::CODE_TARGET); +} + + +// FPU instructions.
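The x87 is a register stack, so almost every arithmetic emitter in the FPU section that follows reduces to the two-byte pattern handled by emit_farith() further down: an opcode-group byte plus a base byte offset by the stack index i, selecting st(i). A short sketch of that encoding; EncodeFarith is an invented standalone helper standing in for the member function:

#include <cstdint>
#include <cstdio>

// Mirror of emit_farith(b1, b2, i): b1 picks the opcode group and
// b2 + i selects stack slot st(i); the DCHECKs in emit_farith()
// require 0 <= i < 8.
void EncodeFarith(uint8_t b1, uint8_t b2, int i, uint8_t out[2]) {
  out[0] = b1;
  out[1] = static_cast<uint8_t>(b2 + i);
}

int main() {
  uint8_t insn[2];
  EncodeFarith(0xD8, 0xC0, 3, insn);   // fadd st, st(3) -> D8 C3 (cf. fadd_i)
  printf("%02X %02X\n", insn[0], insn[1]);
  EncodeFarith(0xDE, 0xC0, 1, insn);   // faddp st(1), st -> DE C1 (cf. faddp)
  printf("%02X %02X\n", insn[0], insn[1]);
  return 0;
}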
+ +void Assembler::fld(int i) { + EnsureSpace ensure_space(this); + emit_farith(0xD9, 0xC0, i); +} + + +void Assembler::fstp(int i) { + EnsureSpace ensure_space(this); + emit_farith(0xDD, 0xD8, i); +} + + +void Assembler::fld1() { + EnsureSpace ensure_space(this); + EMIT(0xD9); + EMIT(0xE8); +} + + +void Assembler::fldpi() { + EnsureSpace ensure_space(this); + EMIT(0xD9); + EMIT(0xEB); +} + + +void Assembler::fldz() { + EnsureSpace ensure_space(this); + EMIT(0xD9); + EMIT(0xEE); +} + + +void Assembler::fldln2() { + EnsureSpace ensure_space(this); + EMIT(0xD9); + EMIT(0xED); +} + + +void Assembler::fld_s(const Operand& adr) { + EnsureSpace ensure_space(this); + EMIT(0xD9); + emit_operand(eax, adr); +} + + +void Assembler::fld_d(const Operand& adr) { + EnsureSpace ensure_space(this); + EMIT(0xDD); + emit_operand(eax, adr); +} + + +void Assembler::fstp_s(const Operand& adr) { + EnsureSpace ensure_space(this); + EMIT(0xD9); + emit_operand(ebx, adr); +} + + +void Assembler::fst_s(const Operand& adr) { + EnsureSpace ensure_space(this); + EMIT(0xD9); + emit_operand(edx, adr); +} + + +void Assembler::fstp_d(const Operand& adr) { + EnsureSpace ensure_space(this); + EMIT(0xDD); + emit_operand(ebx, adr); +} + + +void Assembler::fst_d(const Operand& adr) { + EnsureSpace ensure_space(this); + EMIT(0xDD); + emit_operand(edx, adr); +} + + +void Assembler::fild_s(const Operand& adr) { + EnsureSpace ensure_space(this); + EMIT(0xDB); + emit_operand(eax, adr); +} + + +void Assembler::fild_d(const Operand& adr) { + EnsureSpace ensure_space(this); + EMIT(0xDF); + emit_operand(ebp, adr); +} + + +void Assembler::fistp_s(const Operand& adr) { + EnsureSpace ensure_space(this); + EMIT(0xDB); + emit_operand(ebx, adr); +} + + +void Assembler::fisttp_s(const Operand& adr) { + DCHECK(IsEnabled(SSE3)); + EnsureSpace ensure_space(this); + EMIT(0xDB); + emit_operand(ecx, adr); +} + + +void Assembler::fisttp_d(const Operand& adr) { + DCHECK(IsEnabled(SSE3)); + EnsureSpace ensure_space(this); + EMIT(0xDD); + emit_operand(ecx, adr); +} + + +void Assembler::fist_s(const Operand& adr) { + EnsureSpace ensure_space(this); + EMIT(0xDB); + emit_operand(edx, adr); +} + + +void Assembler::fistp_d(const Operand& adr) { + EnsureSpace ensure_space(this); + EMIT(0xDF); + emit_operand(edi, adr); +} + + +void Assembler::fabs() { + EnsureSpace ensure_space(this); + EMIT(0xD9); + EMIT(0xE1); +} + + +void Assembler::fchs() { + EnsureSpace ensure_space(this); + EMIT(0xD9); + EMIT(0xE0); +} + + +void Assembler::fcos() { + EnsureSpace ensure_space(this); + EMIT(0xD9); + EMIT(0xFF); +} + + +void Assembler::fsin() { + EnsureSpace ensure_space(this); + EMIT(0xD9); + EMIT(0xFE); +} + + +void Assembler::fptan() { + EnsureSpace ensure_space(this); + EMIT(0xD9); + EMIT(0xF2); +} + + +void Assembler::fyl2x() { + EnsureSpace ensure_space(this); + EMIT(0xD9); + EMIT(0xF1); +} + + +void Assembler::f2xm1() { + EnsureSpace ensure_space(this); + EMIT(0xD9); + EMIT(0xF0); +} + + +void Assembler::fscale() { + EnsureSpace ensure_space(this); + EMIT(0xD9); + EMIT(0xFD); +} + + +void Assembler::fninit() { + EnsureSpace ensure_space(this); + EMIT(0xDB); + EMIT(0xE3); +} + + +void Assembler::fadd(int i) { + EnsureSpace ensure_space(this); + emit_farith(0xDC, 0xC0, i); +} + + +void Assembler::fadd_i(int i) { + EnsureSpace ensure_space(this); + emit_farith(0xD8, 0xC0, i); +} + + +void Assembler::fsub(int i) { + EnsureSpace ensure_space(this); + emit_farith(0xDC, 0xE8, i); +} + + +void Assembler::fsub_i(int i) { + EnsureSpace ensure_space(this); + emit_farith(0xD8, 
0xE0, i); +} + + +void Assembler::fisub_s(const Operand& adr) { + EnsureSpace ensure_space(this); + EMIT(0xDA); + emit_operand(esp, adr); +} + + +void Assembler::fmul_i(int i) { + EnsureSpace ensure_space(this); + emit_farith(0xD8, 0xC8, i); +} + + +void Assembler::fmul(int i) { + EnsureSpace ensure_space(this); + emit_farith(0xDC, 0xC8, i); +} + + +void Assembler::fdiv(int i) { + EnsureSpace ensure_space(this); + emit_farith(0xDC, 0xF8, i); +} + + +void Assembler::fdiv_i(int i) { + EnsureSpace ensure_space(this); + emit_farith(0xD8, 0xF0, i); +} + + +void Assembler::faddp(int i) { + EnsureSpace ensure_space(this); + emit_farith(0xDE, 0xC0, i); +} + + +void Assembler::fsubp(int i) { + EnsureSpace ensure_space(this); + emit_farith(0xDE, 0xE8, i); +} + + +void Assembler::fsubrp(int i) { + EnsureSpace ensure_space(this); + emit_farith(0xDE, 0xE0, i); +} + + +void Assembler::fmulp(int i) { + EnsureSpace ensure_space(this); + emit_farith(0xDE, 0xC8, i); +} + + +void Assembler::fdivp(int i) { + EnsureSpace ensure_space(this); + emit_farith(0xDE, 0xF8, i); +} + + +void Assembler::fprem() { + EnsureSpace ensure_space(this); + EMIT(0xD9); + EMIT(0xF8); +} + + +void Assembler::fprem1() { + EnsureSpace ensure_space(this); + EMIT(0xD9); + EMIT(0xF5); +} + + +void Assembler::fxch(int i) { + EnsureSpace ensure_space(this); + emit_farith(0xD9, 0xC8, i); +} + + +void Assembler::fincstp() { + EnsureSpace ensure_space(this); + EMIT(0xD9); + EMIT(0xF7); +} + + +void Assembler::ffree(int i) { + EnsureSpace ensure_space(this); + emit_farith(0xDD, 0xC0, i); +} + + +void Assembler::ftst() { + EnsureSpace ensure_space(this); + EMIT(0xD9); + EMIT(0xE4); +} + + +void Assembler::fucomp(int i) { + EnsureSpace ensure_space(this); + emit_farith(0xDD, 0xE8, i); +} + + +void Assembler::fucompp() { + EnsureSpace ensure_space(this); + EMIT(0xDA); + EMIT(0xE9); +} + + +void Assembler::fucomi(int i) { + EnsureSpace ensure_space(this); + EMIT(0xDB); + EMIT(0xE8 + i); +} + + +void Assembler::fucomip() { + EnsureSpace ensure_space(this); + EMIT(0xDF); + EMIT(0xE9); +} + + +void Assembler::fcompp() { + EnsureSpace ensure_space(this); + EMIT(0xDE); + EMIT(0xD9); +} + + +void Assembler::fnstsw_ax() { + EnsureSpace ensure_space(this); + EMIT(0xDF); + EMIT(0xE0); +} + + +void Assembler::fwait() { + EnsureSpace ensure_space(this); + EMIT(0x9B); +} + + +void Assembler::frndint() { + EnsureSpace ensure_space(this); + EMIT(0xD9); + EMIT(0xFC); +} + + +void Assembler::fnclex() { + EnsureSpace ensure_space(this); + EMIT(0xDB); + EMIT(0xE2); +} + + +void Assembler::sahf() { + EnsureSpace ensure_space(this); + EMIT(0x9E); +} + + +void Assembler::setcc(Condition cc, Register reg) { + DCHECK(reg.is_byte_register()); + EnsureSpace ensure_space(this); + EMIT(0x0F); + EMIT(0x90 | cc); + EMIT(0xC0 | reg.code()); +} + + +void Assembler::Print() { + Disassembler::Decode(isolate(), stdout, buffer_, pc_); +} + + +void Assembler::RecordJSReturn() { + positions_recorder()->WriteRecordedPositions(); + EnsureSpace ensure_space(this); + RecordRelocInfo(RelocInfo::JS_RETURN); +} + + +void Assembler::RecordDebugBreakSlot() { + positions_recorder()->WriteRecordedPositions(); + EnsureSpace ensure_space(this); + RecordRelocInfo(RelocInfo::DEBUG_BREAK_SLOT); +} + + +void Assembler::RecordComment(const char* msg, bool force) { + if (FLAG_code_comments || force) { + EnsureSpace ensure_space(this); + RecordRelocInfo(RelocInfo::COMMENT, reinterpret_cast<intptr_t>(msg)); + } +} + + +void Assembler::GrowBuffer() { + DCHECK(buffer_overflow()); + if (!own_buffer_) 
FATAL("external code buffer is too small"); + + // Compute new buffer size. + CodeDesc desc; // the new buffer + desc.buffer_size = 2 * buffer_size_; + + // Some internal data structures overflow for very large buffers, + // they must ensure that kMaximalBufferSize is not too large. + if ((desc.buffer_size > kMaximalBufferSize) || + (desc.buffer_size > isolate()->heap()->MaxOldGenerationSize())) { + V8::FatalProcessOutOfMemory("Assembler::GrowBuffer"); + } + + // Set up new buffer. + desc.buffer = NewArray<byte>(desc.buffer_size); + desc.instr_size = pc_offset(); + desc.reloc_size = (buffer_ + buffer_size_) - (reloc_info_writer.pos()); + + // Clear the buffer in debug mode. Use 'int3' instructions to make + // sure to get into problems if we ever run uninitialized code. +#ifdef DEBUG + memset(desc.buffer, 0xCC, desc.buffer_size); +#endif + + // Copy the data. + int pc_delta = desc.buffer - buffer_; + int rc_delta = (desc.buffer + desc.buffer_size) - (buffer_ + buffer_size_); + MemMove(desc.buffer, buffer_, desc.instr_size); + MemMove(rc_delta + reloc_info_writer.pos(), reloc_info_writer.pos(), + desc.reloc_size); + + DeleteArray(buffer_); + buffer_ = desc.buffer; + buffer_size_ = desc.buffer_size; + pc_ += pc_delta; + reloc_info_writer.Reposition(reloc_info_writer.pos() + rc_delta, + reloc_info_writer.last_pc() + pc_delta); + + // Relocate runtime entries. + for (RelocIterator it(desc); !it.done(); it.next()) { + RelocInfo::Mode rmode = it.rinfo()->rmode(); + if (rmode == RelocInfo::INTERNAL_REFERENCE) { + int32_t* p = reinterpret_cast<int32_t*>(it.rinfo()->pc()); + if (*p != 0) { // 0 means uninitialized. + *p += pc_delta; + } + } + } + + DCHECK(!buffer_overflow()); +} + + +void Assembler::emit_arith_b(int op1, int op2, Register dst, int imm8) { + DCHECK(is_uint8(op1) && is_uint8(op2)); // wrong opcode + DCHECK(is_uint8(imm8)); + DCHECK((op1 & 0x01) == 0); // should be 8bit operation + EMIT(op1); + EMIT(op2 | dst.code()); + EMIT(imm8); +} + + +void Assembler::emit_arith(int sel, Operand dst, const Immediate& x) { + DCHECK((0 <= sel) && (sel <= 7)); + Register ireg = { sel }; + if (x.is_int8()) { + EMIT(0x83); // using a sign-extended 8-bit immediate. + emit_operand(ireg, dst); + EMIT(x.x_ & 0xFF); + } else if (dst.is_reg(eax)) { + EMIT((sel << 3) | 0x05); // short form if the destination is eax. + emit(x); + } else { + EMIT(0x81); // using a literal 32-bit immediate. + emit_operand(ireg, dst); + emit(x); + } +} + + +void Assembler::emit_operand(Register reg, const Operand& adr) { + const unsigned length = adr.len_; + DCHECK(length > 0); + + // Emit updated ModRM byte containing the given register. + pc_[0] = (adr.buf_[0] & ~0x38) | (reg.code() << 3); + + // Emit the rest of the encoded operand. + for (unsigned i = 1; i < length; i++) pc_[i] = adr.buf_[i]; + pc_ += length; + + // Emit relocation information if necessary. 
+ if (length >= sizeof(int32_t) && !RelocInfo::IsNone(adr.rmode_)) { + pc_ -= sizeof(int32_t); // pc_ must be *at* disp32 + RecordRelocInfo(adr.rmode_); + pc_ += sizeof(int32_t); + } +} + + +void Assembler::emit_farith(int b1, int b2, int i) { + DCHECK(is_uint8(b1) && is_uint8(b2)); // wrong opcode + DCHECK(0 <= i && i < 8); // illegal stack offset + EMIT(b1); + EMIT(b2 + i); +} + + +void Assembler::db(uint8_t data) { + EnsureSpace ensure_space(this); + EMIT(data); +} + + +void Assembler::dd(uint32_t data) { + EnsureSpace ensure_space(this); + emit(data); +} + + +void Assembler::RecordRelocInfo(RelocInfo::Mode rmode, intptr_t data) { + DCHECK(!RelocInfo::IsNone(rmode)); + // Don't record external references unless the heap will be serialized. + if (rmode == RelocInfo::EXTERNAL_REFERENCE && + !serializer_enabled() && !emit_debug_code()) { + return; + } + RelocInfo rinfo(pc_, rmode, data, NULL); + reloc_info_writer.Write(&rinfo); +} + + +Handle<ConstantPoolArray> Assembler::NewConstantPool(Isolate* isolate) { + // No out-of-line constant pool support. + DCHECK(!FLAG_enable_ool_constant_pool); + return isolate->factory()->empty_constant_pool_array(); +} + + +void Assembler::PopulateConstantPool(ConstantPoolArray* constant_pool) { + // No out-of-line constant pool support. + DCHECK(!FLAG_enable_ool_constant_pool); + return; +} + + +#ifdef GENERATED_CODE_COVERAGE +static FILE* coverage_log = NULL; + + +static void InitCoverageLog() { + char* file_name = getenv("V8_GENERATED_CODE_COVERAGE_LOG"); + if (file_name != NULL) { + coverage_log = fopen(file_name, "aw+"); + } +} + + +void LogGeneratedCodeCoverage(const char* file_line) { + const char* return_address = (&file_line)[-1]; + char* push_insn = const_cast<char*>(return_address - 12); + push_insn[0] = 0xeb; // Relative branch insn. + push_insn[1] = 13; // Skip over coverage insns. + if (coverage_log != NULL) { + fprintf(coverage_log, "%s\n", file_line); + fflush(coverage_log); + } +} + +#endif + +} } // namespace v8::internal + +#endif // V8_TARGET_ARCH_X87 diff --git a/deps/v8/src/x87/assembler-x87.h b/deps/v8/src/x87/assembler-x87.h new file mode 100644 index 00000000000..a2bedcc3cc5 --- /dev/null +++ b/deps/v8/src/x87/assembler-x87.h @@ -0,0 +1,1053 @@ +// Copyright (c) 1994-2006 Sun Microsystems Inc. +// All Rights Reserved. +// +// Redistribution and use in source and binary forms, with or without +// modification, are permitted provided that the following conditions are +// met: +// +// - Redistributions of source code must retain the above copyright notice, +// this list of conditions and the following disclaimer. +// +// - Redistribution in binary form must reproduce the above copyright +// notice, this list of conditions and the following disclaimer in the +// documentation and/or other materials provided with the distribution. +// +// - Neither the name of Sun Microsystems or the names of contributors may +// be used to endorse or promote products derived from this software without +// specific prior written permission. +// +// THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS +// IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, +// THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR +// PURPOSE ARE DISCLAIMED. 
IN NO EVENT SHALL THE COPYRIGHT OWNER OR +// CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, +// EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, +// PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR +// PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF +// LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING +// NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS +// SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. + +// The original source code covered by the above license has been +// modified significantly by Google Inc. +// Copyright 2011 the V8 project authors. All rights reserved. + +// A light-weight IA32 Assembler. + +#ifndef V8_X87_ASSEMBLER_X87_H_ +#define V8_X87_ASSEMBLER_X87_H_ + +#include "src/isolate.h" +#include "src/serialize.h" + +namespace v8 { +namespace internal { + +// CPU Registers. +// +// 1) We would prefer to use an enum, but enum values are assignment- +// compatible with int, which has caused code-generation bugs. +// +// 2) We would prefer to use a class instead of a struct but we don't like +// the register initialization to depend on the particular initialization +// order (which appears to be different on OS X, Linux, and Windows for the +// installed versions of C++ we tried). Using a struct permits C-style +// "initialization". Also, the Register objects cannot be const as this +// forces initialization stubs in MSVC, making us dependent on initialization +// order. +// +// 3) By not using an enum, we are possibly preventing the compiler from +// doing certain constant folds, which may significantly reduce the +// code generated for some assembly instructions (because they boil down +// to a few constants). If this is a problem, we could change the code +// such that we use an enum in optimized mode, and the struct in debug +// mode. This way we get the compile-time error checking in debug mode +// and best performance in optimized code. +// +struct Register { + static const int kMaxNumAllocatableRegisters = 6; + static int NumAllocatableRegisters() { + return kMaxNumAllocatableRegisters; + } + static const int kNumRegisters = 8; + + static inline const char* AllocationIndexToString(int index); + + static inline int ToAllocationIndex(Register reg); + + static inline Register FromAllocationIndex(int index); + + static Register from_code(int code) { + DCHECK(code >= 0); + DCHECK(code < kNumRegisters); + Register r = { code }; + return r; + } + bool is_valid() const { return 0 <= code_ && code_ < kNumRegisters; } + bool is(Register reg) const { return code_ == reg.code_; } + // eax, ebx, ecx and edx are byte registers, the rest are not. + bool is_byte_register() const { return code_ <= 3; } + int code() const { + DCHECK(is_valid()); + return code_; + } + int bit() const { + DCHECK(is_valid()); + return 1 << code_; + } + + // Unfortunately we can't make this private in a struct.
+ int code_; +}; + +const int kRegister_eax_Code = 0; +const int kRegister_ecx_Code = 1; +const int kRegister_edx_Code = 2; +const int kRegister_ebx_Code = 3; +const int kRegister_esp_Code = 4; +const int kRegister_ebp_Code = 5; +const int kRegister_esi_Code = 6; +const int kRegister_edi_Code = 7; +const int kRegister_no_reg_Code = -1; + +const Register eax = { kRegister_eax_Code }; +const Register ecx = { kRegister_ecx_Code }; +const Register edx = { kRegister_edx_Code }; +const Register ebx = { kRegister_ebx_Code }; +const Register esp = { kRegister_esp_Code }; +const Register ebp = { kRegister_ebp_Code }; +const Register esi = { kRegister_esi_Code }; +const Register edi = { kRegister_edi_Code }; +const Register no_reg = { kRegister_no_reg_Code }; + + +inline const char* Register::AllocationIndexToString(int index) { + DCHECK(index >= 0 && index < kMaxNumAllocatableRegisters); + // This is the mapping of allocation indices to registers. + const char* const kNames[] = { "eax", "ecx", "edx", "ebx", "esi", "edi" }; + return kNames[index]; +} + + +inline int Register::ToAllocationIndex(Register reg) { + DCHECK(reg.is_valid() && !reg.is(esp) && !reg.is(ebp)); + return (reg.code() >= 6) ? reg.code() - 2 : reg.code(); +} + + +inline Register Register::FromAllocationIndex(int index) { + DCHECK(index >= 0 && index < kMaxNumAllocatableRegisters); + return (index >= 4) ? from_code(index + 2) : from_code(index); +} + + +struct X87Register { + static const int kMaxNumAllocatableRegisters = 8; + static const int kMaxNumRegisters = 8; + static int NumAllocatableRegisters() { + return kMaxNumAllocatableRegisters; + } + + static int ToAllocationIndex(X87Register reg) { + return reg.code_; + } + + static const char* AllocationIndexToString(int index) { + DCHECK(index >= 0 && index < kMaxNumAllocatableRegisters); + const char* const names[] = { + "stX_0", "stX_1", "stX_2", "stX_3", "stX_4", + "stX_5", "stX_6", "stX_7" + }; + return names[index]; + } + + static X87Register FromAllocationIndex(int index) { + DCHECK(index >= 0 && index < kMaxNumAllocatableRegisters); + X87Register result; + result.code_ = index; + return result; + } + + bool is_valid() const { + return 0 <= code_ && code_ < kMaxNumRegisters; + } + + int code() const { + DCHECK(is_valid()); + return code_; + } + + bool is(X87Register reg) const { + return code_ == reg.code_; + } + + int code_; +}; + + +typedef X87Register DoubleRegister; + + +const X87Register stX_0 = { 0 }; +const X87Register stX_1 = { 1 }; +const X87Register stX_2 = { 2 }; +const X87Register stX_3 = { 3 }; +const X87Register stX_4 = { 4 }; +const X87Register stX_5 = { 5 }; +const X87Register stX_6 = { 6 }; +const X87Register stX_7 = { 7 }; + + +enum Condition { + // any value < 0 is considered no_condition + no_condition = -1, + + overflow = 0, + no_overflow = 1, + below = 2, + above_equal = 3, + equal = 4, + not_equal = 5, + below_equal = 6, + above = 7, + negative = 8, + positive = 9, + parity_even = 10, + parity_odd = 11, + less = 12, + greater_equal = 13, + less_equal = 14, + greater = 15, + + // aliases + carry = below, + not_carry = above_equal, + zero = equal, + not_zero = not_equal, + sign = negative, + not_sign = positive +}; + + +// Returns the equivalent of !cc. +// Negation of the default no_condition (-1) results in a non-default +// no_condition value (-2). As long as tests for no_condition check +// for condition < 0, this will work as expected. 
+inline Condition NegateCondition(Condition cc) { + return static_cast<Condition>(cc ^ 1); +} + + +// Commute a condition such that {a cond b == b cond' a}. +inline Condition CommuteCondition(Condition cc) { + switch (cc) { + case below: + return above; + case above: + return below; + case above_equal: + return below_equal; + case below_equal: + return above_equal; + case less: + return greater; + case greater: + return less; + case greater_equal: + return less_equal; + case less_equal: + return greater_equal; + default: + return cc; + } +} + + +// ----------------------------------------------------------------------------- +// Machine instruction Immediates + +class Immediate BASE_EMBEDDED { + public: + inline explicit Immediate(int x); + inline explicit Immediate(const ExternalReference& ext); + inline explicit Immediate(Handle<Object> handle); + inline explicit Immediate(Smi* value); + inline explicit Immediate(Address addr); + + static Immediate CodeRelativeOffset(Label* label) { + return Immediate(label); + } + + bool is_zero() const { return x_ == 0 && RelocInfo::IsNone(rmode_); } + bool is_int8() const { + return -128 <= x_ && x_ < 128 && RelocInfo::IsNone(rmode_); + } + bool is_int16() const { + return -32768 <= x_ && x_ < 32768 && RelocInfo::IsNone(rmode_); + } + + private: + inline explicit Immediate(Label* value); + + int x_; + RelocInfo::Mode rmode_; + + friend class Operand; + friend class Assembler; + friend class MacroAssembler; +}; + + +// ----------------------------------------------------------------------------- +// Machine instruction Operands + +enum ScaleFactor { + times_1 = 0, + times_2 = 1, + times_4 = 2, + times_8 = 3, + times_int_size = times_4, + times_half_pointer_size = times_2, + times_pointer_size = times_4, + times_twice_pointer_size = times_8 +}; + + +class Operand BASE_EMBEDDED { + public: + // reg + INLINE(explicit Operand(Register reg)); + + // [disp/r] + INLINE(explicit Operand(int32_t disp, RelocInfo::Mode rmode)); + + // [disp/r] + INLINE(explicit Operand(Immediate imm)); + + // [base + disp/r] + explicit Operand(Register base, int32_t disp, + RelocInfo::Mode rmode = RelocInfo::NONE32); + + // [base + index*scale + disp/r] + explicit Operand(Register base, + Register index, + ScaleFactor scale, + int32_t disp, + RelocInfo::Mode rmode = RelocInfo::NONE32); + + // [index*scale + disp/r] + explicit Operand(Register index, + ScaleFactor scale, + int32_t disp, + RelocInfo::Mode rmode = RelocInfo::NONE32); + + static Operand StaticVariable(const ExternalReference& ext) { + return Operand(reinterpret_cast<int32_t>(ext.address()), + RelocInfo::EXTERNAL_REFERENCE); + } + + static Operand StaticArray(Register index, + ScaleFactor scale, + const ExternalReference& arr) { + return Operand(index, scale, reinterpret_cast<int32_t>(arr.address()), + RelocInfo::EXTERNAL_REFERENCE); + } + + static Operand ForCell(Handle<Cell> cell) { + AllowDeferredHandleDereference embedding_raw_address; + return Operand(reinterpret_cast<int32_t>(cell.location()), + RelocInfo::CELL); + } + + static Operand ForRegisterPlusImmediate(Register base, Immediate imm) { + return Operand(base, imm.x_, imm.rmode_); + } + + // Returns true if this Operand is a wrapper for the specified register. + bool is_reg(Register reg) const; + + // Returns true if this Operand is a wrapper for one register. + bool is_reg_only() const; + + // Asserts that this Operand is a wrapper for one register and returns the + // register. 
+ Register reg() const; + + private: + // Set the ModRM byte without an encoded 'reg' register. The + // register is encoded later as part of the emit_operand operation. + inline void set_modrm(int mod, Register rm); + + inline void set_sib(ScaleFactor scale, Register index, Register base); + inline void set_disp8(int8_t disp); + inline void set_dispr(int32_t disp, RelocInfo::Mode rmode); + + byte buf_[6]; + // The number of bytes in buf_. + unsigned int len_; + // Only valid if len_ > 4. + RelocInfo::Mode rmode_; + + friend class Assembler; + friend class MacroAssembler; +}; + + +// ----------------------------------------------------------------------------- +// A Displacement describes the 32bit immediate field of an instruction which +// may be used together with a Label in order to refer to a yet unknown code +// position. Displacements stored in the instruction stream are used to describe +// the instruction and to chain a list of instructions using the same Label. +// A Displacement contains 2 different fields: +// +// next field: position of next displacement in the chain (0 = end of list) +// type field: instruction type +// +// A next value of null (0) indicates the end of a chain (note that there can +// be no displacement at position zero, because there is always at least one +// instruction byte before the displacement). +// +// Displacement _data field layout +// +// |31.....2|1......0| +// [ next | type | + +class Displacement BASE_EMBEDDED { + public: + enum Type { + UNCONDITIONAL_JUMP, + CODE_RELATIVE, + OTHER + }; + + int data() const { return data_; } + Type type() const { return TypeField::decode(data_); } + void next(Label* L) const { + int n = NextField::decode(data_); + n > 0 ? L->link_to(n) : L->Unuse(); + } + void link_to(Label* L) { init(L, type()); } + + explicit Displacement(int data) { data_ = data; } + + Displacement(Label* L, Type type) { init(L, type); } + + void print() { + PrintF("%s (%x) ", (type() == UNCONDITIONAL_JUMP ? "jmp" : "[other]"), + NextField::decode(data_)); + } + + private: + int data_; + + class TypeField: public BitField<Type, 0, 2> {}; + class NextField: public BitField<int, 2, 32-2> {}; + + void init(Label* L, Type type); +}; + + +class Assembler : public AssemblerBase { + private: + // We check before assembling an instruction that there is sufficient + // space to write an instruction and its relocation information. + // The relocation writer's position must be kGap bytes above the end of + // the generated instructions. This leaves enough space for the + // longest possible ia32 instruction, 15 bytes, and the longest possible + // relocation information encoding, RelocInfoWriter::kMaxLength == 16. + // (There is a 15 byte limit on ia32 instruction length that rules out some + // otherwise valid instructions.) + // This allows for a single, fast space check per instruction. + static const int kGap = 32; + + public: + // Create an assembler. Instructions and relocation information are emitted + // into a buffer, with the instructions starting from the beginning and the + // relocation information starting from the end of the buffer. See CodeDesc + // for a detailed comment on the layout (globals.h). + // + // If the provided buffer is NULL, the assembler allocates and grows its own + // buffer, and buffer_size determines the initial buffer size. The buffer is + // owned by the assembler and deallocated upon destruction of the assembler. 
+ // + // If the provided buffer is not NULL, the assembler uses the provided buffer + // for code generation and assumes its size to be buffer_size. If the buffer + // is too small, a fatal error occurs. No deallocation of the buffer is done + // upon destruction of the assembler. + // TODO(vitalyr): the assembler does not need an isolate. + Assembler(Isolate* isolate, void* buffer, int buffer_size); + virtual ~Assembler() { } + + // GetCode emits any pending (non-emitted) code and fills the descriptor + // desc. GetCode() is idempotent; it returns the same result if no other + // Assembler functions are invoked in between GetCode() calls. + void GetCode(CodeDesc* desc); + + // Read/Modify the code target in the branch/call instruction at pc. + inline static Address target_address_at(Address pc, + ConstantPoolArray* constant_pool); + inline static void set_target_address_at(Address pc, + ConstantPoolArray* constant_pool, + Address target, + ICacheFlushMode icache_flush_mode = + FLUSH_ICACHE_IF_NEEDED); + static inline Address target_address_at(Address pc, Code* code) { + ConstantPoolArray* constant_pool = code ? code->constant_pool() : NULL; + return target_address_at(pc, constant_pool); + } + static inline void set_target_address_at(Address pc, + Code* code, + Address target, + ICacheFlushMode icache_flush_mode = + FLUSH_ICACHE_IF_NEEDED) { + ConstantPoolArray* constant_pool = code ? code->constant_pool() : NULL; + set_target_address_at(pc, constant_pool, target); + } + + // Return the code target address at a call site from the return address + // of that call in the instruction stream. + inline static Address target_address_from_return_address(Address pc); + + // Return the code target address of the patch debug break slot + inline static Address break_address_from_return_address(Address pc); + + // This sets the branch destination (which is in the instruction on x86). + // This is for calls and branches within generated code. + inline static void deserialization_set_special_target_at( + Address instruction_payload, Code* code, Address target) { + set_target_address_at(instruction_payload, code, target); + } + + static const int kSpecialTargetSize = kPointerSize; + + // Distance between the address of the code target in the call instruction + // and the return address + static const int kCallTargetAddressOffset = kPointerSize; + // Distance between start of patched return sequence and the emitted address + // to jump to. + static const int kPatchReturnSequenceAddressOffset = 1; // JMP imm32. + + // Distance between start of patched debug break slot and the emitted address + // to jump to. + static const int kPatchDebugBreakSlotAddressOffset = 1; // JMP imm32. + + static const int kCallInstructionLength = 5; + static const int kPatchDebugBreakSlotReturnOffset = kPointerSize; + static const int kJSReturnSequenceLength = 6; + + // The debug break slot must be able to contain a call instruction. + static const int kDebugBreakSlotLength = kCallInstructionLength; + + // One byte opcode for test al, 0xXX. + static const byte kTestAlByte = 0xA8; + // One byte opcode for nop. + static const byte kNopByte = 0x90; + + // One byte opcode for a short unconditional jump. + static const byte kJmpShortOpcode = 0xEB; + // One byte prefix for a short conditional jump. 
+  static const byte kJccShortPrefix = 0x70;
+  static const byte kJncShortOpcode = kJccShortPrefix | not_carry;
+  static const byte kJcShortOpcode = kJccShortPrefix | carry;
+  static const byte kJnzShortOpcode = kJccShortPrefix | not_zero;
+  static const byte kJzShortOpcode = kJccShortPrefix | zero;
+
+
+  // ---------------------------------------------------------------------------
+  // Code generation
+  //
+  // - function names correspond one-to-one to ia32 instruction mnemonics
+  // - unless specified otherwise, instructions operate on 32bit operands
+  // - instructions on 8bit (byte) operands/registers have a trailing '_b'
+  // - instructions on 16bit (word) operands/registers have a trailing '_w'
+  // - naming conflicts with C++ keywords are resolved via a trailing '_'
+
+  // NOTE ON INTERFACE: Currently, the interface is not very consistent
+  // in the sense that some operations (e.g. mov()) can be called in more
+  // than one way to generate the same instruction: The Register argument
+  // can in some cases be replaced with an Operand(Register) argument.
+  // This should be cleaned up and made more orthogonal. The question
+  // is: should we always use Operands instead of Registers where an
+  // Operand is possible, or should we have a Register (overloaded) form
+  // instead? We must be careful to make sure that the selected instruction
+  // is obvious from the parameters to avoid hard-to-find code generation
+  // bugs.
+
+  // Insert the smallest number of nop instructions
+  // possible to align the pc offset to a multiple
+  // of m. m must be a power of 2.
+  void Align(int m);
+  void Nop(int bytes = 1);
+  // Aligns code to something that's optimal for a jump target for the platform.
+  void CodeTargetAlign();
+
+  // Stack
+  void pushad();
+  void popad();
+
+  void pushfd();
+  void popfd();
+
+  void push(const Immediate& x);
+  void push_imm32(int32_t imm32);
+  void push(Register src);
+  void push(const Operand& src);
+
+  void pop(Register dst);
+  void pop(const Operand& dst);
+
+  void enter(const Immediate& size);
+  void leave();
+
+  // Moves
+  void mov_b(Register dst, Register src) { mov_b(dst, Operand(src)); }
+  void mov_b(Register dst, const Operand& src);
+  void mov_b(Register dst, int8_t imm8) { mov_b(Operand(dst), imm8); }
+  void mov_b(const Operand& dst, int8_t imm8);
+  void mov_b(const Operand& dst, Register src);
+
+  void mov_w(Register dst, const Operand& src);
+  void mov_w(const Operand& dst, Register src);
+  void mov_w(const Operand& dst, int16_t imm16);
+
+  void mov(Register dst, int32_t imm32);
+  void mov(Register dst, const Immediate& x);
+  void mov(Register dst, Handle<Object> handle);
+  void mov(Register dst, const Operand& src);
+  void mov(Register dst, Register src);
+  void mov(const Operand& dst, const Immediate& x);
+  void mov(const Operand& dst, Handle<Object> handle);
+  void mov(const Operand& dst, Register src);
+
+  void movsx_b(Register dst, Register src) { movsx_b(dst, Operand(src)); }
+  void movsx_b(Register dst, const Operand& src);
+
+  void movsx_w(Register dst, Register src) { movsx_w(dst, Operand(src)); }
+  void movsx_w(Register dst, const Operand& src);
+
+  void movzx_b(Register dst, Register src) { movzx_b(dst, Operand(src)); }
+  void movzx_b(Register dst, const Operand& src);
+
+  void movzx_w(Register dst, Register src) { movzx_w(dst, Operand(src)); }
+  void movzx_w(Register dst, const Operand& src);
+
+  // Flag management.
+  void cld();
+
+  // Repetitive string instructions.
+ void rep_movs(); + void rep_stos(); + void stos(); + + // Exchange + void xchg(Register dst, Register src); + void xchg(Register dst, const Operand& src); + + // Arithmetics + void adc(Register dst, int32_t imm32); + void adc(Register dst, const Operand& src); + + void add(Register dst, Register src) { add(dst, Operand(src)); } + void add(Register dst, const Operand& src); + void add(const Operand& dst, Register src); + void add(Register dst, const Immediate& imm) { add(Operand(dst), imm); } + void add(const Operand& dst, const Immediate& x); + + void and_(Register dst, int32_t imm32); + void and_(Register dst, const Immediate& x); + void and_(Register dst, Register src) { and_(dst, Operand(src)); } + void and_(Register dst, const Operand& src); + void and_(const Operand& dst, Register src); + void and_(const Operand& dst, const Immediate& x); + + void cmpb(Register reg, int8_t imm8) { cmpb(Operand(reg), imm8); } + void cmpb(const Operand& op, int8_t imm8); + void cmpb(Register reg, const Operand& op); + void cmpb(const Operand& op, Register reg); + void cmpb_al(const Operand& op); + void cmpw_ax(const Operand& op); + void cmpw(const Operand& op, Immediate imm16); + void cmp(Register reg, int32_t imm32); + void cmp(Register reg, Handle<Object> handle); + void cmp(Register reg0, Register reg1) { cmp(reg0, Operand(reg1)); } + void cmp(Register reg, const Operand& op); + void cmp(Register reg, const Immediate& imm) { cmp(Operand(reg), imm); } + void cmp(const Operand& op, const Immediate& imm); + void cmp(const Operand& op, Handle<Object> handle); + + void dec_b(Register dst); + void dec_b(const Operand& dst); + + void dec(Register dst); + void dec(const Operand& dst); + + void cdq(); + + void idiv(Register src) { idiv(Operand(src)); } + void idiv(const Operand& src); + void div(Register src) { div(Operand(src)); } + void div(const Operand& src); + + // Signed multiply instructions. + void imul(Register src); // edx:eax = eax * src. + void imul(Register dst, Register src) { imul(dst, Operand(src)); } + void imul(Register dst, const Operand& src); // dst = dst * src. + void imul(Register dst, Register src, int32_t imm32); // dst = src * imm32. + void imul(Register dst, const Operand& src, int32_t imm32); + + void inc(Register dst); + void inc(const Operand& dst); + + void lea(Register dst, const Operand& src); + + // Unsigned multiply instruction. + void mul(Register src); // edx:eax = eax * reg. 
+ + void neg(Register dst); + void neg(const Operand& dst); + + void not_(Register dst); + void not_(const Operand& dst); + + void or_(Register dst, int32_t imm32); + void or_(Register dst, Register src) { or_(dst, Operand(src)); } + void or_(Register dst, const Operand& src); + void or_(const Operand& dst, Register src); + void or_(Register dst, const Immediate& imm) { or_(Operand(dst), imm); } + void or_(const Operand& dst, const Immediate& x); + + void rcl(Register dst, uint8_t imm8); + void rcr(Register dst, uint8_t imm8); + void ror(Register dst, uint8_t imm8); + void ror_cl(Register dst); + + void sar(Register dst, uint8_t imm8) { sar(Operand(dst), imm8); } + void sar(const Operand& dst, uint8_t imm8); + void sar_cl(Register dst) { sar_cl(Operand(dst)); } + void sar_cl(const Operand& dst); + + void sbb(Register dst, const Operand& src); + + void shld(Register dst, Register src) { shld(dst, Operand(src)); } + void shld(Register dst, const Operand& src); + + void shl(Register dst, uint8_t imm8) { shl(Operand(dst), imm8); } + void shl(const Operand& dst, uint8_t imm8); + void shl_cl(Register dst) { shl_cl(Operand(dst)); } + void shl_cl(const Operand& dst); + + void shrd(Register dst, Register src) { shrd(dst, Operand(src)); } + void shrd(Register dst, const Operand& src); + + void shr(Register dst, uint8_t imm8) { shr(Operand(dst), imm8); } + void shr(const Operand& dst, uint8_t imm8); + void shr_cl(Register dst) { shr_cl(Operand(dst)); } + void shr_cl(const Operand& dst); + + void sub(Register dst, const Immediate& imm) { sub(Operand(dst), imm); } + void sub(const Operand& dst, const Immediate& x); + void sub(Register dst, Register src) { sub(dst, Operand(src)); } + void sub(Register dst, const Operand& src); + void sub(const Operand& dst, Register src); + + void test(Register reg, const Immediate& imm); + void test(Register reg0, Register reg1) { test(reg0, Operand(reg1)); } + void test(Register reg, const Operand& op); + void test_b(Register reg, const Operand& op); + void test(const Operand& op, const Immediate& imm); + void test_b(Register reg, uint8_t imm8); + void test_b(const Operand& op, uint8_t imm8); + + void xor_(Register dst, int32_t imm32); + void xor_(Register dst, Register src) { xor_(dst, Operand(src)); } + void xor_(Register dst, const Operand& src); + void xor_(const Operand& dst, Register src); + void xor_(Register dst, const Immediate& imm) { xor_(Operand(dst), imm); } + void xor_(const Operand& dst, const Immediate& x); + + // Bit operations. + void bt(const Operand& dst, Register src); + void bts(Register dst, Register src) { bts(Operand(dst), src); } + void bts(const Operand& dst, Register src); + void bsr(Register dst, Register src) { bsr(dst, Operand(src)); } + void bsr(Register dst, const Operand& src); + + // Miscellaneous + void hlt(); + void int3(); + void nop(); + void ret(int imm16); + + // Label operations & relative jumps (PPUM Appendix D) + // + // Takes a branch opcode (cc) and a label (L) and generates + // either a backward branch or a forward branch and links it + // to the label fixup chain. Usage: + // + // Label L; // unbound label + // j(cc, &L); // forward branch to unbound label + // bind(&L); // bind label to the current pc + // j(cc, &L); // backward branch to bound label + // bind(&L); // illegal: a label may be bound only once + // + // Note: The same Label can be used for forward and backward branches + // but it may be bound only once. 
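+  //
+  // With the default Label::kFar distance, a branch to an unbound label
+  // emits a 32-bit Displacement that links the site into the label's chain
+  // (see the Displacement class above); bind() walks that chain and patches
+  // every site. Label::kNear emits an 8-bit displacement instead, e.g.:
+  //
+  //   Label done;
+  //   test(eax, eax);
+  //   j(zero, &done, Label::kNear);  // forward branch, 8-bit displacement
+  //   inc(eax);
+  //   bind(&done);                   // patch the branch to land here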
+ + void bind(Label* L); // binds an unbound label L to the current code position + + // Calls + void call(Label* L); + void call(byte* entry, RelocInfo::Mode rmode); + int CallSize(const Operand& adr); + void call(Register reg) { call(Operand(reg)); } + void call(const Operand& adr); + int CallSize(Handle<Code> code, RelocInfo::Mode mode); + void call(Handle<Code> code, + RelocInfo::Mode rmode, + TypeFeedbackId id = TypeFeedbackId::None()); + + // Jumps + // unconditional jump to L + void jmp(Label* L, Label::Distance distance = Label::kFar); + void jmp(byte* entry, RelocInfo::Mode rmode); + void jmp(Register reg) { jmp(Operand(reg)); } + void jmp(const Operand& adr); + void jmp(Handle<Code> code, RelocInfo::Mode rmode); + + // Conditional jumps + void j(Condition cc, + Label* L, + Label::Distance distance = Label::kFar); + void j(Condition cc, byte* entry, RelocInfo::Mode rmode); + void j(Condition cc, Handle<Code> code); + + // Floating-point operations + void fld(int i); + void fstp(int i); + + void fld1(); + void fldz(); + void fldpi(); + void fldln2(); + + void fld_s(const Operand& adr); + void fld_d(const Operand& adr); + + void fstp_s(const Operand& adr); + void fst_s(const Operand& adr); + void fstp_d(const Operand& adr); + void fst_d(const Operand& adr); + + void fild_s(const Operand& adr); + void fild_d(const Operand& adr); + + void fist_s(const Operand& adr); + + void fistp_s(const Operand& adr); + void fistp_d(const Operand& adr); + + // The fisttp instructions require SSE3. + void fisttp_s(const Operand& adr); + void fisttp_d(const Operand& adr); + + void fabs(); + void fchs(); + void fcos(); + void fsin(); + void fptan(); + void fyl2x(); + void f2xm1(); + void fscale(); + void fninit(); + + void fadd(int i); + void fadd_i(int i); + void fsub(int i); + void fsub_i(int i); + void fmul(int i); + void fmul_i(int i); + void fdiv(int i); + void fdiv_i(int i); + + void fisub_s(const Operand& adr); + + void faddp(int i = 1); + void fsubp(int i = 1); + void fsubrp(int i = 1); + void fmulp(int i = 1); + void fdivp(int i = 1); + void fprem(); + void fprem1(); + + void fxch(int i = 1); + void fincstp(); + void ffree(int i = 0); + + void ftst(); + void fucomp(int i); + void fucompp(); + void fucomi(int i); + void fucomip(); + void fcompp(); + void fnstsw_ax(); + void fwait(); + void fnclex(); + + void frndint(); + + void sahf(); + void setcc(Condition cc, Register reg); + + void cpuid(); + + // TODO(lrn): Need SFENCE for movnt? + + // Debugging + void Print(); + + // Check the code size generated from label to here. + int SizeOfCodeGeneratedSince(Label* label) { + return pc_offset() - label->pos(); + } + + // Mark address of the ExitJSFrame code. + void RecordJSReturn(); + + // Mark address of a debug break slot. + void RecordDebugBreakSlot(); + + // Record a comment relocation entry that can be used by a disassembler. + // Use --code-comments to enable, or provide "force = true" flag to always + // write a comment. + void RecordComment(const char* msg, bool force = false); + + // Writes a single byte or word of data in the code stream. Used for + // inline tables, e.g., jump-tables. + void db(uint8_t data); + void dd(uint32_t data); + + // Check if there is less than kGap bytes available in the buffer. + // If this is the case, we need to grow the buffer before emitting + // an instruction or relocation information. + inline bool buffer_overflow() const { + return pc_ >= reloc_info_writer.pos() - kGap; + } + + // Get the number of bytes available in the buffer. 
+ inline int available_space() const { return reloc_info_writer.pos() - pc_; } + + static bool IsNop(Address addr); + + PositionsRecorder* positions_recorder() { return &positions_recorder_; } + + int relocation_writer_size() { + return (buffer_ + buffer_size_) - reloc_info_writer.pos(); + } + + // Avoid overflows for displacements etc. + static const int kMaximalBufferSize = 512*MB; + + byte byte_at(int pos) { return buffer_[pos]; } + void set_byte_at(int pos, byte value) { buffer_[pos] = value; } + + // Allocate a constant pool of the correct size for the generated code. + Handle<ConstantPoolArray> NewConstantPool(Isolate* isolate); + + // Generate the constant pool for the generated code. + void PopulateConstantPool(ConstantPoolArray* constant_pool); + + protected: + byte* addr_at(int pos) { return buffer_ + pos; } + + + private: + uint32_t long_at(int pos) { + return *reinterpret_cast<uint32_t*>(addr_at(pos)); + } + void long_at_put(int pos, uint32_t x) { + *reinterpret_cast<uint32_t*>(addr_at(pos)) = x; + } + + // code emission + void GrowBuffer(); + inline void emit(uint32_t x); + inline void emit(Handle<Object> handle); + inline void emit(uint32_t x, + RelocInfo::Mode rmode, + TypeFeedbackId id = TypeFeedbackId::None()); + inline void emit(Handle<Code> code, + RelocInfo::Mode rmode, + TypeFeedbackId id = TypeFeedbackId::None()); + inline void emit(const Immediate& x); + inline void emit_w(const Immediate& x); + + // Emit the code-object-relative offset of the label's position + inline void emit_code_relative_offset(Label* label); + + // instruction generation + void emit_arith_b(int op1, int op2, Register dst, int imm8); + + // Emit a basic arithmetic instruction (i.e. first byte of the family is 0x81) + // with a given destination expression and an immediate operand. It attempts + // to use the shortest encoding possible. + // sel specifies the /n in the modrm byte (see the Intel PRM). + void emit_arith(int sel, Operand dst, const Immediate& x); + + void emit_operand(Register reg, const Operand& adr); + + void emit_farith(int b1, int b2, int i); + + // labels + void print(Label* L); + void bind_to(Label* L, int pos); + + // displacements + inline Displacement disp_at(Label* L); + inline void disp_at_put(Label* L, Displacement disp); + inline void emit_disp(Label* L, Displacement::Type type); + inline void emit_near_disp(Label* L); + + // record reloc info for current pc_ + void RecordRelocInfo(RelocInfo::Mode rmode, intptr_t data = 0); + + friend class CodePatcher; + friend class EnsureSpace; + + // code generation + RelocInfoWriter reloc_info_writer; + + PositionsRecorder positions_recorder_; + friend class PositionsRecorder; +}; + + +// Helper class that ensures that there is enough space for generating +// instructions and relocation information. The constructor makes +// sure that there is enough space and (in debug mode) the destructor +// checks that we did not generate too much. 
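+//
+// For illustration, the intended usage pattern inside the instruction
+// emitters (a sketch; the real emitters live in assembler-x87.cc and use
+// its local EMIT byte helper):
+//
+//   void Assembler::nop() {
+//     EnsureSpace ensure_space(this);  // grow buffer if < kGap bytes remain
+//     EMIT(0x90);                      // one-byte nop, see kNopByte above
+//   }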
+class EnsureSpace BASE_EMBEDDED { + public: + explicit EnsureSpace(Assembler* assembler) : assembler_(assembler) { + if (assembler_->buffer_overflow()) assembler_->GrowBuffer(); +#ifdef DEBUG + space_before_ = assembler_->available_space(); +#endif + } + +#ifdef DEBUG + ~EnsureSpace() { + int bytes_generated = space_before_ - assembler_->available_space(); + DCHECK(bytes_generated < assembler_->kGap); + } +#endif + + private: + Assembler* assembler_; +#ifdef DEBUG + int space_before_; +#endif +}; + +} } // namespace v8::internal + +#endif // V8_X87_ASSEMBLER_X87_H_ diff --git a/deps/v8/src/x87/builtins-x87.cc b/deps/v8/src/x87/builtins-x87.cc new file mode 100644 index 00000000000..59ecda3a95b --- /dev/null +++ b/deps/v8/src/x87/builtins-x87.cc @@ -0,0 +1,1457 @@ +// Copyright 2012 the V8 project authors. All rights reserved. +// Use of this source code is governed by a BSD-style license that can be +// found in the LICENSE file. + +#include "src/v8.h" + +#if V8_TARGET_ARCH_X87 + +#include "src/codegen.h" +#include "src/deoptimizer.h" +#include "src/full-codegen.h" +#include "src/stub-cache.h" + +namespace v8 { +namespace internal { + + +#define __ ACCESS_MASM(masm) + + +void Builtins::Generate_Adaptor(MacroAssembler* masm, + CFunctionId id, + BuiltinExtraArguments extra_args) { + // ----------- S t a t e ------------- + // -- eax : number of arguments excluding receiver + // -- edi : called function (only guaranteed when + // extra_args requires it) + // -- esi : context + // -- esp[0] : return address + // -- esp[4] : last argument + // -- ... + // -- esp[4 * argc] : first argument (argc == eax) + // -- esp[4 * (argc +1)] : receiver + // ----------------------------------- + + // Insert extra arguments. + int num_extra_args = 0; + if (extra_args == NEEDS_CALLED_FUNCTION) { + num_extra_args = 1; + Register scratch = ebx; + __ pop(scratch); // Save return address. + __ push(edi); + __ push(scratch); // Restore return address. + } else { + DCHECK(extra_args == NO_EXTRA_ARGUMENTS); + } + + // JumpToExternalReference expects eax to contain the number of arguments + // including the receiver and the extra arguments. + __ add(eax, Immediate(num_extra_args + 1)); + __ JumpToExternalReference(ExternalReference(id, masm->isolate())); +} + + +static void CallRuntimePassFunction( + MacroAssembler* masm, Runtime::FunctionId function_id) { + FrameScope scope(masm, StackFrame::INTERNAL); + // Push a copy of the function. + __ push(edi); + // Function is also the parameter to the runtime call. + __ push(edi); + + __ CallRuntime(function_id, 1); + // Restore receiver. + __ pop(edi); +} + + +static void GenerateTailCallToSharedCode(MacroAssembler* masm) { + __ mov(eax, FieldOperand(edi, JSFunction::kSharedFunctionInfoOffset)); + __ mov(eax, FieldOperand(eax, SharedFunctionInfo::kCodeOffset)); + __ lea(eax, FieldOperand(eax, Code::kHeaderSize)); + __ jmp(eax); +} + + +static void GenerateTailCallToReturnedCode(MacroAssembler* masm) { + __ lea(eax, FieldOperand(eax, Code::kHeaderSize)); + __ jmp(eax); +} + + +void Builtins::Generate_InOptimizationQueue(MacroAssembler* masm) { + // Checking whether the queued function is ready for install is optional, + // since we come across interrupts and stack checks elsewhere. However, + // not checking may delay installing ready functions, and always checking + // would be quite expensive. A good compromise is to first check against + // stack limit as a cue for an interrupt signal. 
+ Label ok; + ExternalReference stack_limit = + ExternalReference::address_of_stack_limit(masm->isolate()); + __ cmp(esp, Operand::StaticVariable(stack_limit)); + __ j(above_equal, &ok, Label::kNear); + + CallRuntimePassFunction(masm, Runtime::kTryInstallOptimizedCode); + GenerateTailCallToReturnedCode(masm); + + __ bind(&ok); + GenerateTailCallToSharedCode(masm); +} + + +static void Generate_JSConstructStubHelper(MacroAssembler* masm, + bool is_api_function, + bool create_memento) { + // ----------- S t a t e ------------- + // -- eax: number of arguments + // -- edi: constructor function + // -- ebx: allocation site or undefined + // ----------------------------------- + + // Should never create mementos for api functions. + DCHECK(!is_api_function || !create_memento); + + // Enter a construct frame. + { + FrameScope scope(masm, StackFrame::CONSTRUCT); + + if (create_memento) { + __ AssertUndefinedOrAllocationSite(ebx); + __ push(ebx); + } + + // Store a smi-tagged arguments count on the stack. + __ SmiTag(eax); + __ push(eax); + + // Push the function to invoke on the stack. + __ push(edi); + + // Try to allocate the object without transitioning into C code. If any of + // the preconditions is not met, the code bails out to the runtime call. + Label rt_call, allocated; + if (FLAG_inline_new) { + Label undo_allocation; + ExternalReference debug_step_in_fp = + ExternalReference::debug_step_in_fp_address(masm->isolate()); + __ cmp(Operand::StaticVariable(debug_step_in_fp), Immediate(0)); + __ j(not_equal, &rt_call); + + // Verified that the constructor is a JSFunction. + // Load the initial map and verify that it is in fact a map. + // edi: constructor + __ mov(eax, FieldOperand(edi, JSFunction::kPrototypeOrInitialMapOffset)); + // Will both indicate a NULL and a Smi + __ JumpIfSmi(eax, &rt_call); + // edi: constructor + // eax: initial map (if proven valid below) + __ CmpObjectType(eax, MAP_TYPE, ebx); + __ j(not_equal, &rt_call); + + // Check that the constructor is not constructing a JSFunction (see + // comments in Runtime_NewObject in runtime.cc). In which case the + // initial map's instance type would be JS_FUNCTION_TYPE. + // edi: constructor + // eax: initial map + __ CmpInstanceType(eax, JS_FUNCTION_TYPE); + __ j(equal, &rt_call); + + if (!is_api_function) { + Label allocate; + // The code below relies on these assumptions. + STATIC_ASSERT(JSFunction::kNoSlackTracking == 0); + STATIC_ASSERT(Map::ConstructionCount::kShift + + Map::ConstructionCount::kSize == 32); + // Check if slack tracking is enabled. + __ mov(esi, FieldOperand(eax, Map::kBitField3Offset)); + __ shr(esi, Map::ConstructionCount::kShift); + __ j(zero, &allocate); // JSFunction::kNoSlackTracking + // Decrease generous allocation count. + __ sub(FieldOperand(eax, Map::kBitField3Offset), + Immediate(1 << Map::ConstructionCount::kShift)); + + __ cmp(esi, JSFunction::kFinishSlackTracking); + __ j(not_equal, &allocate); + + __ push(eax); + __ push(edi); + + __ push(edi); // constructor + __ CallRuntime(Runtime::kFinalizeInstanceSize, 1); + + __ pop(edi); + __ pop(eax); + __ xor_(esi, esi); // JSFunction::kNoSlackTracking + + __ bind(&allocate); + } + + // Now allocate the JSObject on the heap. 
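+      // (The map stores the instance size in words, so the shl by
+      // kPointerSizeLog2 below scales it to a byte count before the
+      // Allocate() call; AllocationMemento::kSize is a byte count as well.)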
+ // edi: constructor + // eax: initial map + __ movzx_b(edi, FieldOperand(eax, Map::kInstanceSizeOffset)); + __ shl(edi, kPointerSizeLog2); + if (create_memento) { + __ add(edi, Immediate(AllocationMemento::kSize)); + } + + __ Allocate(edi, ebx, edi, no_reg, &rt_call, NO_ALLOCATION_FLAGS); + + Factory* factory = masm->isolate()->factory(); + + // Allocated the JSObject, now initialize the fields. + // eax: initial map + // ebx: JSObject + // edi: start of next object (including memento if create_memento) + __ mov(Operand(ebx, JSObject::kMapOffset), eax); + __ mov(ecx, factory->empty_fixed_array()); + __ mov(Operand(ebx, JSObject::kPropertiesOffset), ecx); + __ mov(Operand(ebx, JSObject::kElementsOffset), ecx); + // Set extra fields in the newly allocated object. + // eax: initial map + // ebx: JSObject + // edi: start of next object (including memento if create_memento) + // esi: slack tracking counter (non-API function case) + __ mov(edx, factory->undefined_value()); + __ lea(ecx, Operand(ebx, JSObject::kHeaderSize)); + if (!is_api_function) { + Label no_inobject_slack_tracking; + + // Check if slack tracking is enabled. + __ cmp(esi, JSFunction::kNoSlackTracking); + __ j(equal, &no_inobject_slack_tracking); + + // Allocate object with a slack. + __ movzx_b(esi, + FieldOperand(eax, Map::kPreAllocatedPropertyFieldsOffset)); + __ lea(esi, + Operand(ebx, esi, times_pointer_size, JSObject::kHeaderSize)); + // esi: offset of first field after pre-allocated fields + if (FLAG_debug_code) { + __ cmp(esi, edi); + __ Assert(less_equal, + kUnexpectedNumberOfPreAllocatedPropertyFields); + } + __ InitializeFieldsWithFiller(ecx, esi, edx); + __ mov(edx, factory->one_pointer_filler_map()); + // Fill the remaining fields with one pointer filler map. + + __ bind(&no_inobject_slack_tracking); + } + + if (create_memento) { + __ lea(esi, Operand(edi, -AllocationMemento::kSize)); + __ InitializeFieldsWithFiller(ecx, esi, edx); + + // Fill in memento fields if necessary. + // esi: points to the allocated but uninitialized memento. + __ mov(Operand(esi, AllocationMemento::kMapOffset), + factory->allocation_memento_map()); + // Get the cell or undefined. + __ mov(edx, Operand(esp, kPointerSize*2)); + __ mov(Operand(esi, AllocationMemento::kAllocationSiteOffset), + edx); + } else { + __ InitializeFieldsWithFiller(ecx, edi, edx); + } + + // Add the object tag to make the JSObject real, so that we can continue + // and jump into the continuation code at any time from now on. Any + // failures need to undo the allocation, so that the heap is in a + // consistent state and verifiable. + // eax: initial map + // ebx: JSObject + // edi: start of next object + __ or_(ebx, Immediate(kHeapObjectTag)); + + // Check if a non-empty properties array is needed. + // Allocate and initialize a FixedArray if it is. + // eax: initial map + // ebx: JSObject + // edi: start of next object + // Calculate the total number of properties described by the map. + __ movzx_b(edx, FieldOperand(eax, Map::kUnusedPropertyFieldsOffset)); + __ movzx_b(ecx, + FieldOperand(eax, Map::kPreAllocatedPropertyFieldsOffset)); + __ add(edx, ecx); + // Calculate unused properties past the end of the in-object properties. + __ movzx_b(ecx, FieldOperand(eax, Map::kInObjectPropertiesOffset)); + __ sub(edx, ecx); + // Done if no extra properties are to be allocated. 
+ __ j(zero, &allocated); + __ Assert(positive, kPropertyAllocationCountFailed); + + // Scale the number of elements by pointer size and add the header for + // FixedArrays to the start of the next object calculation from above. + // ebx: JSObject + // edi: start of next object (will be start of FixedArray) + // edx: number of elements in properties array + __ Allocate(FixedArray::kHeaderSize, + times_pointer_size, + edx, + REGISTER_VALUE_IS_INT32, + edi, + ecx, + no_reg, + &undo_allocation, + RESULT_CONTAINS_TOP); + + // Initialize the FixedArray. + // ebx: JSObject + // edi: FixedArray + // edx: number of elements + // ecx: start of next object + __ mov(eax, factory->fixed_array_map()); + __ mov(Operand(edi, FixedArray::kMapOffset), eax); // setup the map + __ SmiTag(edx); + __ mov(Operand(edi, FixedArray::kLengthOffset), edx); // and length + + // Initialize the fields to undefined. + // ebx: JSObject + // edi: FixedArray + // ecx: start of next object + { Label loop, entry; + __ mov(edx, factory->undefined_value()); + __ lea(eax, Operand(edi, FixedArray::kHeaderSize)); + __ jmp(&entry); + __ bind(&loop); + __ mov(Operand(eax, 0), edx); + __ add(eax, Immediate(kPointerSize)); + __ bind(&entry); + __ cmp(eax, ecx); + __ j(below, &loop); + } + + // Store the initialized FixedArray into the properties field of + // the JSObject + // ebx: JSObject + // edi: FixedArray + __ or_(edi, Immediate(kHeapObjectTag)); // add the heap tag + __ mov(FieldOperand(ebx, JSObject::kPropertiesOffset), edi); + + + // Continue with JSObject being successfully allocated + // ebx: JSObject + __ jmp(&allocated); + + // Undo the setting of the new top so that the heap is verifiable. For + // example, the map's unused properties potentially do not match the + // allocated objects unused properties. + // ebx: JSObject (previous new top) + __ bind(&undo_allocation); + __ UndoAllocationInNewSpace(ebx); + } + + // Allocate the new receiver object using the runtime call. + __ bind(&rt_call); + int offset = 0; + if (create_memento) { + // Get the cell or allocation site. + __ mov(edi, Operand(esp, kPointerSize * 2)); + __ push(edi); + offset = kPointerSize; + } + + // Must restore esi (context) and edi (constructor) before calling runtime. + __ mov(esi, Operand(ebp, StandardFrameConstants::kContextOffset)); + __ mov(edi, Operand(esp, offset)); + // edi: function (constructor) + __ push(edi); + if (create_memento) { + __ CallRuntime(Runtime::kNewObjectWithAllocationSite, 2); + } else { + __ CallRuntime(Runtime::kNewObject, 1); + } + __ mov(ebx, eax); // store result in ebx + + // If we ended up using the runtime, and we want a memento, then the + // runtime call made it for us, and we shouldn't do create count + // increment. + Label count_incremented; + if (create_memento) { + __ jmp(&count_incremented); + } + + // New object allocated. + // ebx: newly allocated object + __ bind(&allocated); + + if (create_memento) { + __ mov(ecx, Operand(esp, kPointerSize * 2)); + __ cmp(ecx, masm->isolate()->factory()->undefined_value()); + __ j(equal, &count_incremented); + // ecx is an AllocationSite. We are creating a memento from it, so we + // need to increment the memento create count. + __ add(FieldOperand(ecx, AllocationSite::kPretenureCreateCountOffset), + Immediate(Smi::FromInt(1))); + __ bind(&count_incremented); + } + + // Retrieve the function from the stack. + __ pop(edi); + + // Retrieve smi-tagged arguments count from the stack. 
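+    // (A smi on this 32-bit port is the integer value shifted left by one
+    // with a zero tag bit, matching the STATIC_ASSERT(kSmiTagSize == 1 &&
+    // kSmiTag == 0) at the end of this function, so SmiUntag below is an
+    // arithmetic right shift by one.)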
+ __ mov(eax, Operand(esp, 0)); + __ SmiUntag(eax); + + // Push the allocated receiver to the stack. We need two copies + // because we may have to return the original one and the calling + // conventions dictate that the called function pops the receiver. + __ push(ebx); + __ push(ebx); + + // Set up pointer to last argument. + __ lea(ebx, Operand(ebp, StandardFrameConstants::kCallerSPOffset)); + + // Copy arguments and receiver to the expression stack. + Label loop, entry; + __ mov(ecx, eax); + __ jmp(&entry); + __ bind(&loop); + __ push(Operand(ebx, ecx, times_4, 0)); + __ bind(&entry); + __ dec(ecx); + __ j(greater_equal, &loop); + + // Call the function. + if (is_api_function) { + __ mov(esi, FieldOperand(edi, JSFunction::kContextOffset)); + Handle<Code> code = + masm->isolate()->builtins()->HandleApiCallConstruct(); + __ call(code, RelocInfo::CODE_TARGET); + } else { + ParameterCount actual(eax); + __ InvokeFunction(edi, actual, CALL_FUNCTION, + NullCallWrapper()); + } + + // Store offset of return address for deoptimizer. + if (!is_api_function) { + masm->isolate()->heap()->SetConstructStubDeoptPCOffset(masm->pc_offset()); + } + + // Restore context from the frame. + __ mov(esi, Operand(ebp, StandardFrameConstants::kContextOffset)); + + // If the result is an object (in the ECMA sense), we should get rid + // of the receiver and use the result; see ECMA-262 section 13.2.2-7 + // on page 74. + Label use_receiver, exit; + + // If the result is a smi, it is *not* an object in the ECMA sense. + __ JumpIfSmi(eax, &use_receiver); + + // If the type of the result (stored in its map) is less than + // FIRST_SPEC_OBJECT_TYPE, it is not an object in the ECMA sense. + __ CmpObjectType(eax, FIRST_SPEC_OBJECT_TYPE, ecx); + __ j(above_equal, &exit); + + // Throw away the result of the constructor invocation and use the + // on-stack receiver as the result. + __ bind(&use_receiver); + __ mov(eax, Operand(esp, 0)); + + // Restore the arguments count and leave the construct frame. + __ bind(&exit); + __ mov(ebx, Operand(esp, kPointerSize)); // Get arguments count. + + // Leave construct frame. + } + + // Remove caller arguments from the stack and return. + STATIC_ASSERT(kSmiTagSize == 1 && kSmiTag == 0); + __ pop(ecx); + __ lea(esp, Operand(esp, ebx, times_2, 1 * kPointerSize)); // 1 ~ receiver + __ push(ecx); + __ IncrementCounter(masm->isolate()->counters()->constructed_objects(), 1); + __ ret(0); +} + + +void Builtins::Generate_JSConstructStubGeneric(MacroAssembler* masm) { + Generate_JSConstructStubHelper(masm, false, FLAG_pretenuring_call_new); +} + + +void Builtins::Generate_JSConstructStubApi(MacroAssembler* masm) { + Generate_JSConstructStubHelper(masm, true, false); +} + + +static void Generate_JSEntryTrampolineHelper(MacroAssembler* masm, + bool is_construct) { + ProfileEntryHookStub::MaybeCallEntryHook(masm); + + // Clear the context before we push it when entering the internal frame. + __ Move(esi, Immediate(0)); + + { + FrameScope scope(masm, StackFrame::INTERNAL); + + // Load the previous frame pointer (ebx) to access C arguments + __ mov(ebx, Operand(ebp, 0)); + + // Get the function from the frame and setup the context. + __ mov(ecx, Operand(ebx, EntryFrameConstants::kFunctionArgOffset)); + __ mov(esi, FieldOperand(ecx, JSFunction::kContextOffset)); + + // Push the function and the receiver onto the stack. + __ push(ecx); + __ push(Operand(ebx, EntryFrameConstants::kReceiverArgOffset)); + + // Load the number of arguments and setup pointer to the arguments. 
+    __ mov(eax, Operand(ebx, EntryFrameConstants::kArgcOffset));
+    __ mov(ebx, Operand(ebx, EntryFrameConstants::kArgvOffset));
+
+    // Copy arguments to the stack in a loop.
+    Label loop, entry;
+    __ Move(ecx, Immediate(0));
+    __ jmp(&entry);
+    __ bind(&loop);
+    __ mov(edx, Operand(ebx, ecx, times_4, 0));  // push parameter from argv
+    __ push(Operand(edx, 0));  // dereference handle
+    __ inc(ecx);
+    __ bind(&entry);
+    __ cmp(ecx, eax);
+    __ j(not_equal, &loop);
+
+    // Get the function from the stack and call it.
+    // kPointerSize for the receiver.
+    __ mov(edi, Operand(esp, eax, times_4, kPointerSize));
+
+    // Invoke the code.
+    if (is_construct) {
+      // No type feedback cell is available
+      __ mov(ebx, masm->isolate()->factory()->undefined_value());
+      CallConstructStub stub(masm->isolate(), NO_CALL_CONSTRUCTOR_FLAGS);
+      __ CallStub(&stub);
+    } else {
+      ParameterCount actual(eax);
+      __ InvokeFunction(edi, actual, CALL_FUNCTION,
+                        NullCallWrapper());
+    }
+
+    // Exit the internal frame. Notice that this also removes the empty
+    // context and the function left on the stack by the code
+    // invocation.
+  }
+  __ ret(kPointerSize);  // Remove receiver.
+}
+
+
+void Builtins::Generate_JSEntryTrampoline(MacroAssembler* masm) {
+  Generate_JSEntryTrampolineHelper(masm, false);
+}
+
+
+void Builtins::Generate_JSConstructEntryTrampoline(MacroAssembler* masm) {
+  Generate_JSEntryTrampolineHelper(masm, true);
+}
+
+
+void Builtins::Generate_CompileUnoptimized(MacroAssembler* masm) {
+  CallRuntimePassFunction(masm, Runtime::kCompileUnoptimized);
+  GenerateTailCallToReturnedCode(masm);
+}
+
+
+
+static void CallCompileOptimized(MacroAssembler* masm, bool concurrent) {
+  FrameScope scope(masm, StackFrame::INTERNAL);
+  // Push a copy of the function.
+  __ push(edi);
+  // Function is also the parameter to the runtime call.
+  __ push(edi);
+  // Whether to compile in a background thread.
+  __ Push(masm->isolate()->factory()->ToBoolean(concurrent));
+
+  __ CallRuntime(Runtime::kCompileOptimized, 2);
+  // Restore receiver.
+  __ pop(edi);
+}
+
+
+void Builtins::Generate_CompileOptimized(MacroAssembler* masm) {
+  CallCompileOptimized(masm, false);
+  GenerateTailCallToReturnedCode(masm);
+}
+
+
+void Builtins::Generate_CompileOptimizedConcurrent(MacroAssembler* masm) {
+  CallCompileOptimized(masm, true);
+  GenerateTailCallToReturnedCode(masm);
+}
+
+
+static void GenerateMakeCodeYoungAgainCommon(MacroAssembler* masm) {
+  // For now, we are relying on the fact that make_code_young doesn't do any
+  // garbage collection which allows us to save/restore the registers without
+  // worrying about which of them contain pointers. We also don't build an
+  // internal frame to make the code faster, since we shouldn't have to do stack
+  // crawls in MakeCodeYoung. This seems a bit fragile.
+
+  // Re-execute the code that was patched back to the young age when
+  // the stub returns.
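+  // (The 5 below is the length of the call instruction that entered this
+  // stub, compare Assembler::kCallInstructionLength, so rewinding the saved
+  // return address by 5 makes ret(0) re-enter the restored young sequence.)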
+  __ sub(Operand(esp, 0), Immediate(5));
+  __ pushad();
+  __ mov(eax, Operand(esp, 8 * kPointerSize));
+  {
+    FrameScope scope(masm, StackFrame::MANUAL);
+    __ PrepareCallCFunction(2, ebx);
+    __ mov(Operand(esp, 1 * kPointerSize),
+           Immediate(ExternalReference::isolate_address(masm->isolate())));
+    __ mov(Operand(esp, 0), eax);
+    __ CallCFunction(
+        ExternalReference::get_make_code_young_function(masm->isolate()), 2);
+  }
+  __ popad();
+  __ ret(0);
+}
+
+#define DEFINE_CODE_AGE_BUILTIN_GENERATOR(C)                 \
+void Builtins::Generate_Make##C##CodeYoungAgainEvenMarking(  \
+    MacroAssembler* masm) {                                  \
+  GenerateMakeCodeYoungAgainCommon(masm);                    \
+}                                                            \
+void Builtins::Generate_Make##C##CodeYoungAgainOddMarking(   \
+    MacroAssembler* masm) {                                  \
+  GenerateMakeCodeYoungAgainCommon(masm);                    \
+}
+CODE_AGE_LIST(DEFINE_CODE_AGE_BUILTIN_GENERATOR)
+#undef DEFINE_CODE_AGE_BUILTIN_GENERATOR
+
+
+void Builtins::Generate_MarkCodeAsExecutedOnce(MacroAssembler* masm) {
+  // For now, as in GenerateMakeCodeYoungAgainCommon, we are relying on the fact
+  // that make_code_young doesn't do any garbage collection which allows us to
+  // save/restore the registers without worrying about which of them contain
+  // pointers.
+  __ pushad();
+  __ mov(eax, Operand(esp, 8 * kPointerSize));
+  __ sub(eax, Immediate(Assembler::kCallInstructionLength));
+  {  // NOLINT
+    FrameScope scope(masm, StackFrame::MANUAL);
+    __ PrepareCallCFunction(2, ebx);
+    __ mov(Operand(esp, 1 * kPointerSize),
+           Immediate(ExternalReference::isolate_address(masm->isolate())));
+    __ mov(Operand(esp, 0), eax);
+    __ CallCFunction(
+        ExternalReference::get_mark_code_as_executed_function(masm->isolate()),
+        2);
+  }
+  __ popad();
+
+  // Perform prologue operations usually performed by the young code stub.
+  __ pop(eax);   // Pop return address into scratch register.
+  __ push(ebp);  // Caller's frame pointer.
+  __ mov(ebp, esp);
+  __ push(esi);  // Callee's context.
+  __ push(edi);  // Callee's JS Function.
+  __ push(eax);  // Push return address after frame prologue.
+
+  // Jump to point after the code-age stub.
+  __ ret(0);
+}
+
+
+void Builtins::Generate_MarkCodeAsExecutedTwice(MacroAssembler* masm) {
+  GenerateMakeCodeYoungAgainCommon(masm);
+}
+
+
+static void Generate_NotifyStubFailureHelper(MacroAssembler* masm) {
+  // Enter an internal frame.
+  {
+    FrameScope scope(masm, StackFrame::INTERNAL);
+
+    // Preserve registers across notification, this is important for compiled
+    // stubs that tail call the runtime on deopts passing their parameters in
+    // registers.
+    __ pushad();
+    __ CallRuntime(Runtime::kNotifyStubFailure, 0);
+    __ popad();
+    // Tear down internal frame.
+  }
+
+  __ pop(MemOperand(esp, 0));  // Ignore state offset
+  __ ret(0);  // Return to IC Miss stub, continuation still on stack.
+}
+
+
+void Builtins::Generate_NotifyStubFailure(MacroAssembler* masm) {
+  Generate_NotifyStubFailureHelper(masm);
+}
+
+
+void Builtins::Generate_NotifyStubFailureSaveDoubles(MacroAssembler* masm) {
+  // SaveDoubles is meaningless for X87; it is just used by deoptimizer.cc.
+  Generate_NotifyStubFailureHelper(masm);
+}
+
+
+static void Generate_NotifyDeoptimizedHelper(MacroAssembler* masm,
+                                             Deoptimizer::BailoutType type) {
+  {
+    FrameScope scope(masm, StackFrame::INTERNAL);
+
+    // Pass deoptimization type to the runtime system.
+    __ push(Immediate(Smi::FromInt(static_cast<int>(type))));
+    __ CallRuntime(Runtime::kNotifyDeoptimized, 1);
+
+    // Tear down internal frame.
+  }
+
+  // Get the full codegen state from the stack and untag it.
+  __ mov(ecx, Operand(esp, 1 * kPointerSize));
+  __ SmiUntag(ecx);
+
+  // Switch on the state.
+  Label not_no_registers, not_tos_eax;
+  __ cmp(ecx, FullCodeGenerator::NO_REGISTERS);
+  __ j(not_equal, &not_no_registers, Label::kNear);
+  __ ret(1 * kPointerSize);  // Remove state.
+
+  __ bind(&not_no_registers);
+  __ mov(eax, Operand(esp, 2 * kPointerSize));
+  __ cmp(ecx, FullCodeGenerator::TOS_REG);
+  __ j(not_equal, &not_tos_eax, Label::kNear);
+  __ ret(2 * kPointerSize);  // Remove state, eax.
+
+  __ bind(&not_tos_eax);
+  __ Abort(kNoCasesLeft);
+}
+
+
+void Builtins::Generate_NotifyDeoptimized(MacroAssembler* masm) {
+  Generate_NotifyDeoptimizedHelper(masm, Deoptimizer::EAGER);
+}
+
+
+void Builtins::Generate_NotifySoftDeoptimized(MacroAssembler* masm) {
+  Generate_NotifyDeoptimizedHelper(masm, Deoptimizer::SOFT);
+}
+
+
+void Builtins::Generate_NotifyLazyDeoptimized(MacroAssembler* masm) {
+  Generate_NotifyDeoptimizedHelper(masm, Deoptimizer::LAZY);
+}
+
+
+void Builtins::Generate_FunctionCall(MacroAssembler* masm) {
+  Factory* factory = masm->isolate()->factory();
+
+  // 1. Make sure we have at least one argument.
+  { Label done;
+    __ test(eax, eax);
+    __ j(not_zero, &done);
+    __ pop(ebx);
+    __ push(Immediate(factory->undefined_value()));
+    __ push(ebx);
+    __ inc(eax);
+    __ bind(&done);
+  }
+
+  // 2. Get the function to call (passed as receiver) from the stack, check
+  //    if it is a function.
+  Label slow, non_function;
+  // 1 ~ return address.
+  __ mov(edi, Operand(esp, eax, times_4, 1 * kPointerSize));
+  __ JumpIfSmi(edi, &non_function);
+  __ CmpObjectType(edi, JS_FUNCTION_TYPE, ecx);
+  __ j(not_equal, &slow);
+
+
+  // 3a. Patch the first argument if necessary when calling a function.
+  Label shift_arguments;
+  __ Move(edx, Immediate(0));  // indicate regular JS_FUNCTION
+  { Label convert_to_object, use_global_proxy, patch_receiver;
+    // Change context eagerly in case we need the global receiver.
+    __ mov(esi, FieldOperand(edi, JSFunction::kContextOffset));
+
+    // Do not transform the receiver for strict mode functions.
+    __ mov(ebx, FieldOperand(edi, JSFunction::kSharedFunctionInfoOffset));
+    __ test_b(FieldOperand(ebx, SharedFunctionInfo::kStrictModeByteOffset),
+              1 << SharedFunctionInfo::kStrictModeBitWithinByte);
+    __ j(not_equal, &shift_arguments);
+
+    // Do not transform the receiver for natives (shared already in ebx).
+    __ test_b(FieldOperand(ebx, SharedFunctionInfo::kNativeByteOffset),
+              1 << SharedFunctionInfo::kNativeBitWithinByte);
+    __ j(not_equal, &shift_arguments);
+
+    // Compute the receiver in sloppy mode.
+    __ mov(ebx, Operand(esp, eax, times_4, 0));  // First argument.
+
+    // Call ToObject on the receiver if it is not an object, or use the
+    // global object if it is null or undefined.
+    __ JumpIfSmi(ebx, &convert_to_object);
+    __ cmp(ebx, factory->null_value());
+    __ j(equal, &use_global_proxy);
+    __ cmp(ebx, factory->undefined_value());
+    __ j(equal, &use_global_proxy);
+    STATIC_ASSERT(LAST_SPEC_OBJECT_TYPE == LAST_TYPE);
+    __ CmpObjectType(ebx, FIRST_SPEC_OBJECT_TYPE, ecx);
+    __ j(above_equal, &shift_arguments);
+
+    __ bind(&convert_to_object);
+
+    {  // In order to preserve argument count.
+      FrameScope scope(masm, StackFrame::INTERNAL);
+      __ SmiTag(eax);
+      __ push(eax);
+
+      __ push(ebx);
+      __ InvokeBuiltin(Builtins::TO_OBJECT, CALL_FUNCTION);
+      __ mov(ebx, eax);
+      __ Move(edx, Immediate(0));  // restore
+
+      __ pop(eax);
+      __ SmiUntag(eax);
+    }
+
+    // Restore the function to edi.
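+    // (eax still holds the argument count, so esp + eax * 4 + kPointerSize
+    // skips the arguments plus the return address and addresses the function
+    // slot again, as in step 2 above.)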
+ __ mov(edi, Operand(esp, eax, times_4, 1 * kPointerSize)); + __ jmp(&patch_receiver); + + __ bind(&use_global_proxy); + __ mov(ebx, + Operand(esi, Context::SlotOffset(Context::GLOBAL_OBJECT_INDEX))); + __ mov(ebx, FieldOperand(ebx, GlobalObject::kGlobalProxyOffset)); + + __ bind(&patch_receiver); + __ mov(Operand(esp, eax, times_4, 0), ebx); + + __ jmp(&shift_arguments); + } + + // 3b. Check for function proxy. + __ bind(&slow); + __ Move(edx, Immediate(1)); // indicate function proxy + __ CmpInstanceType(ecx, JS_FUNCTION_PROXY_TYPE); + __ j(equal, &shift_arguments); + __ bind(&non_function); + __ Move(edx, Immediate(2)); // indicate non-function + + // 3c. Patch the first argument when calling a non-function. The + // CALL_NON_FUNCTION builtin expects the non-function callee as + // receiver, so overwrite the first argument which will ultimately + // become the receiver. + __ mov(Operand(esp, eax, times_4, 0), edi); + + // 4. Shift arguments and return address one slot down on the stack + // (overwriting the original receiver). Adjust argument count to make + // the original first argument the new receiver. + __ bind(&shift_arguments); + { Label loop; + __ mov(ecx, eax); + __ bind(&loop); + __ mov(ebx, Operand(esp, ecx, times_4, 0)); + __ mov(Operand(esp, ecx, times_4, kPointerSize), ebx); + __ dec(ecx); + __ j(not_sign, &loop); // While non-negative (to copy return address). + __ pop(ebx); // Discard copy of return address. + __ dec(eax); // One fewer argument (first argument is new receiver). + } + + // 5a. Call non-function via tail call to CALL_NON_FUNCTION builtin, + // or a function proxy via CALL_FUNCTION_PROXY. + { Label function, non_proxy; + __ test(edx, edx); + __ j(zero, &function); + __ Move(ebx, Immediate(0)); + __ cmp(edx, Immediate(1)); + __ j(not_equal, &non_proxy); + + __ pop(edx); // return address + __ push(edi); // re-add proxy object as additional argument + __ push(edx); + __ inc(eax); + __ GetBuiltinEntry(edx, Builtins::CALL_FUNCTION_PROXY); + __ jmp(masm->isolate()->builtins()->ArgumentsAdaptorTrampoline(), + RelocInfo::CODE_TARGET); + + __ bind(&non_proxy); + __ GetBuiltinEntry(edx, Builtins::CALL_NON_FUNCTION); + __ jmp(masm->isolate()->builtins()->ArgumentsAdaptorTrampoline(), + RelocInfo::CODE_TARGET); + __ bind(&function); + } + + // 5b. Get the code to call from the function and check that the number of + // expected arguments matches what we're providing. If so, jump + // (tail-call) to the code in register edx without checking arguments. + __ mov(edx, FieldOperand(edi, JSFunction::kSharedFunctionInfoOffset)); + __ mov(ebx, + FieldOperand(edx, SharedFunctionInfo::kFormalParameterCountOffset)); + __ mov(edx, FieldOperand(edi, JSFunction::kCodeEntryOffset)); + __ SmiUntag(ebx); + __ cmp(eax, ebx); + __ j(not_equal, + masm->isolate()->builtins()->ArgumentsAdaptorTrampoline()); + + ParameterCount expected(0); + __ InvokeCode(edx, expected, expected, JUMP_FUNCTION, NullCallWrapper()); +} + + +void Builtins::Generate_FunctionApply(MacroAssembler* masm) { + static const int kArgumentsOffset = 2 * kPointerSize; + static const int kReceiverOffset = 3 * kPointerSize; + static const int kFunctionOffset = 4 * kPointerSize; + { + FrameScope frame_scope(masm, StackFrame::INTERNAL); + + __ push(Operand(ebp, kFunctionOffset)); // push this + __ push(Operand(ebp, kArgumentsOffset)); // push arguments + __ InvokeBuiltin(Builtins::APPLY_PREPARE, CALL_FUNCTION); + + // Check the stack for overflow. We are not trying to catch + // interruptions (e.g. 
debug break and preemption) here, so the "real stack + // limit" is checked. + Label okay; + ExternalReference real_stack_limit = + ExternalReference::address_of_real_stack_limit(masm->isolate()); + __ mov(edi, Operand::StaticVariable(real_stack_limit)); + // Make ecx the space we have left. The stack might already be overflowed + // here which will cause ecx to become negative. + __ mov(ecx, esp); + __ sub(ecx, edi); + // Make edx the space we need for the array when it is unrolled onto the + // stack. + __ mov(edx, eax); + __ shl(edx, kPointerSizeLog2 - kSmiTagSize); + // Check if the arguments will overflow the stack. + __ cmp(ecx, edx); + __ j(greater, &okay); // Signed comparison. + + // Out of stack space. + __ push(Operand(ebp, 4 * kPointerSize)); // push this + __ push(eax); + __ InvokeBuiltin(Builtins::STACK_OVERFLOW, CALL_FUNCTION); + __ bind(&okay); + // End of stack check. + + // Push current index and limit. + const int kLimitOffset = + StandardFrameConstants::kExpressionsOffset - 1 * kPointerSize; + const int kIndexOffset = kLimitOffset - 1 * kPointerSize; + __ push(eax); // limit + __ push(Immediate(0)); // index + + // Get the receiver. + __ mov(ebx, Operand(ebp, kReceiverOffset)); + + // Check that the function is a JS function (otherwise it must be a proxy). + Label push_receiver, use_global_proxy; + __ mov(edi, Operand(ebp, kFunctionOffset)); + __ CmpObjectType(edi, JS_FUNCTION_TYPE, ecx); + __ j(not_equal, &push_receiver); + + // Change context eagerly to get the right global object if necessary. + __ mov(esi, FieldOperand(edi, JSFunction::kContextOffset)); + + // Compute the receiver. + // Do not transform the receiver for strict mode functions. + Label call_to_object; + __ mov(ecx, FieldOperand(edi, JSFunction::kSharedFunctionInfoOffset)); + __ test_b(FieldOperand(ecx, SharedFunctionInfo::kStrictModeByteOffset), + 1 << SharedFunctionInfo::kStrictModeBitWithinByte); + __ j(not_equal, &push_receiver); + + Factory* factory = masm->isolate()->factory(); + + // Do not transform the receiver for natives (shared already in ecx). + __ test_b(FieldOperand(ecx, SharedFunctionInfo::kNativeByteOffset), + 1 << SharedFunctionInfo::kNativeBitWithinByte); + __ j(not_equal, &push_receiver); + + // Compute the receiver in sloppy mode. + // Call ToObject on the receiver if it is not an object, or use the + // global object if it is null or undefined. + __ JumpIfSmi(ebx, &call_to_object); + __ cmp(ebx, factory->null_value()); + __ j(equal, &use_global_proxy); + __ cmp(ebx, factory->undefined_value()); + __ j(equal, &use_global_proxy); + STATIC_ASSERT(LAST_SPEC_OBJECT_TYPE == LAST_TYPE); + __ CmpObjectType(ebx, FIRST_SPEC_OBJECT_TYPE, ecx); + __ j(above_equal, &push_receiver); + + __ bind(&call_to_object); + __ push(ebx); + __ InvokeBuiltin(Builtins::TO_OBJECT, CALL_FUNCTION); + __ mov(ebx, eax); + __ jmp(&push_receiver); + + __ bind(&use_global_proxy); + __ mov(ebx, + Operand(esi, Context::SlotOffset(Context::GLOBAL_OBJECT_INDEX))); + __ mov(ebx, FieldOperand(ebx, GlobalObject::kGlobalProxyOffset)); + + // Push the receiver. + __ bind(&push_receiver); + __ push(ebx); + + // Copy all arguments from the array to the stack. + Label entry, loop; + Register receiver = LoadIC::ReceiverRegister(); + Register key = LoadIC::NameRegister(); + __ mov(key, Operand(ebp, kIndexOffset)); + __ jmp(&entry); + __ bind(&loop); + __ mov(receiver, Operand(ebp, kArgumentsOffset)); // load arguments + + // Use inline caching to speed up access to arguments. 
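+    // (The call below initially targets the generic KeyedLoadIC stub; once
+    // the IC has been patched for this access pattern, later loop iterations
+    // can take its faster specialized path.)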
+ if (FLAG_vector_ics) { + __ mov(LoadIC::SlotRegister(), Immediate(Smi::FromInt(0))); + } + Handle<Code> ic = masm->isolate()->builtins()->KeyedLoadIC_Initialize(); + __ call(ic, RelocInfo::CODE_TARGET); + // It is important that we do not have a test instruction after the + // call. A test instruction after the call is used to indicate that + // we have generated an inline version of the keyed load. In this + // case, we know that we are not generating a test instruction next. + + // Push the nth argument. + __ push(eax); + + // Update the index on the stack and in register key. + __ mov(key, Operand(ebp, kIndexOffset)); + __ add(key, Immediate(1 << kSmiTagSize)); + __ mov(Operand(ebp, kIndexOffset), key); + + __ bind(&entry); + __ cmp(key, Operand(ebp, kLimitOffset)); + __ j(not_equal, &loop); + + // Call the function. + Label call_proxy; + ParameterCount actual(eax); + __ Move(eax, key); + __ SmiUntag(eax); + __ mov(edi, Operand(ebp, kFunctionOffset)); + __ CmpObjectType(edi, JS_FUNCTION_TYPE, ecx); + __ j(not_equal, &call_proxy); + __ InvokeFunction(edi, actual, CALL_FUNCTION, NullCallWrapper()); + + frame_scope.GenerateLeaveFrame(); + __ ret(3 * kPointerSize); // remove this, receiver, and arguments + + // Call the function proxy. + __ bind(&call_proxy); + __ push(edi); // add function proxy as last argument + __ inc(eax); + __ Move(ebx, Immediate(0)); + __ GetBuiltinEntry(edx, Builtins::CALL_FUNCTION_PROXY); + __ call(masm->isolate()->builtins()->ArgumentsAdaptorTrampoline(), + RelocInfo::CODE_TARGET); + + // Leave internal frame. + } + __ ret(3 * kPointerSize); // remove this, receiver, and arguments +} + + +void Builtins::Generate_InternalArrayCode(MacroAssembler* masm) { + // ----------- S t a t e ------------- + // -- eax : argc + // -- esp[0] : return address + // -- esp[4] : last argument + // ----------------------------------- + Label generic_array_code; + + // Get the InternalArray function. + __ LoadGlobalFunction(Context::INTERNAL_ARRAY_FUNCTION_INDEX, edi); + + if (FLAG_debug_code) { + // Initial map for the builtin InternalArray function should be a map. + __ mov(ebx, FieldOperand(edi, JSFunction::kPrototypeOrInitialMapOffset)); + // Will both indicate a NULL and a Smi. + __ test(ebx, Immediate(kSmiTagMask)); + __ Assert(not_zero, kUnexpectedInitialMapForInternalArrayFunction); + __ CmpObjectType(ebx, MAP_TYPE, ecx); + __ Assert(equal, kUnexpectedInitialMapForInternalArrayFunction); + } + + // Run the native code for the InternalArray function called as a normal + // function. + // tail call a stub + InternalArrayConstructorStub stub(masm->isolate()); + __ TailCallStub(&stub); +} + + +void Builtins::Generate_ArrayCode(MacroAssembler* masm) { + // ----------- S t a t e ------------- + // -- eax : argc + // -- esp[0] : return address + // -- esp[4] : last argument + // ----------------------------------- + Label generic_array_code; + + // Get the Array function. + __ LoadGlobalFunction(Context::ARRAY_FUNCTION_INDEX, edi); + + if (FLAG_debug_code) { + // Initial map for the builtin Array function should be a map. + __ mov(ebx, FieldOperand(edi, JSFunction::kPrototypeOrInitialMapOffset)); + // Will both indicate a NULL and a Smi. + __ test(ebx, Immediate(kSmiTagMask)); + __ Assert(not_zero, kUnexpectedInitialMapForArrayFunction); + __ CmpObjectType(ebx, MAP_TYPE, ecx); + __ Assert(equal, kUnexpectedInitialMapForArrayFunction); + } + + // Run the native code for the Array function called as a normal function. 
+  // tail call a stub
+  __ mov(ebx, masm->isolate()->factory()->undefined_value());
+  ArrayConstructorStub stub(masm->isolate());
+  __ TailCallStub(&stub);
+}
+
+
+void Builtins::Generate_StringConstructCode(MacroAssembler* masm) {
+  // ----------- S t a t e -------------
+  //  -- eax                 : number of arguments
+  //  -- edi                 : constructor function
+  //  -- esp[0]              : return address
+  //  -- esp[(argc - n) * 4] : arg[n] (zero-based)
+  //  -- esp[(argc + 1) * 4] : receiver
+  // -----------------------------------
+  Counters* counters = masm->isolate()->counters();
+  __ IncrementCounter(counters->string_ctor_calls(), 1);
+
+  if (FLAG_debug_code) {
+    __ LoadGlobalFunction(Context::STRING_FUNCTION_INDEX, ecx);
+    __ cmp(edi, ecx);
+    __ Assert(equal, kUnexpectedStringFunction);
+  }
+
+  // Load the first argument into eax and get rid of the rest
+  // (including the receiver).
+  Label no_arguments;
+  __ test(eax, eax);
+  __ j(zero, &no_arguments);
+  __ mov(ebx, Operand(esp, eax, times_pointer_size, 0));
+  __ pop(ecx);
+  __ lea(esp, Operand(esp, eax, times_pointer_size, kPointerSize));
+  __ push(ecx);
+  __ mov(eax, ebx);
+
+  // Lookup the argument in the number to string cache.
+  Label not_cached, argument_is_string;
+  __ LookupNumberStringCache(eax,  // Input.
+                             ebx,  // Result.
+                             ecx,  // Scratch 1.
+                             edx,  // Scratch 2.
+                             &not_cached);
+  __ IncrementCounter(counters->string_ctor_cached_number(), 1);
+  __ bind(&argument_is_string);
+  // ----------- S t a t e -------------
+  //  -- ebx    : argument converted to string
+  //  -- edi    : constructor function
+  //  -- esp[0] : return address
+  // -----------------------------------
+
+  // Allocate a JSValue and put the tagged pointer into eax.
+  Label gc_required;
+  __ Allocate(JSValue::kSize,
+              eax,  // Result.
+              ecx,  // New allocation top (we ignore it).
+              no_reg,
+              &gc_required,
+              TAG_OBJECT);
+
+  // Set the map.
+  __ LoadGlobalFunctionInitialMap(edi, ecx);
+  if (FLAG_debug_code) {
+    __ cmpb(FieldOperand(ecx, Map::kInstanceSizeOffset),
+            JSValue::kSize >> kPointerSizeLog2);
+    __ Assert(equal, kUnexpectedStringWrapperInstanceSize);
+    __ cmpb(FieldOperand(ecx, Map::kUnusedPropertyFieldsOffset), 0);
+    __ Assert(equal, kUnexpectedUnusedPropertiesOfStringWrapper);
+  }
+  __ mov(FieldOperand(eax, HeapObject::kMapOffset), ecx);
+
+  // Set properties and elements.
+  Factory* factory = masm->isolate()->factory();
+  __ Move(ecx, Immediate(factory->empty_fixed_array()));
+  __ mov(FieldOperand(eax, JSObject::kPropertiesOffset), ecx);
+  __ mov(FieldOperand(eax, JSObject::kElementsOffset), ecx);
+
+  // Set the value.
+  __ mov(FieldOperand(eax, JSValue::kValueOffset), ebx);
+
+  // Ensure the object is fully initialized.
+  STATIC_ASSERT(JSValue::kSize == 4 * kPointerSize);
+
+  // We're done. Return.
+  __ ret(0);
+
+  // The argument was not found in the number to string cache. Check
+  // if it's a string already before calling the conversion builtin.
+  Label convert_argument;
+  __ bind(&not_cached);
+  STATIC_ASSERT(kSmiTag == 0);
+  __ JumpIfSmi(eax, &convert_argument);
+  Condition is_string = masm->IsObjectStringType(eax, ebx, ecx);
+  __ j(NegateCondition(is_string), &convert_argument);
+  __ mov(ebx, eax);
+  __ IncrementCounter(counters->string_ctor_string_value(), 1);
+  __ jmp(&argument_is_string);
+
+  // Invoke the conversion builtin and put the result into ebx.
+  __ bind(&convert_argument);
+  __ IncrementCounter(counters->string_ctor_conversions(), 1);
+  {
+    FrameScope scope(masm, StackFrame::INTERNAL);
+    __ push(edi);  // Preserve the function.
+ __ push(eax); + __ InvokeBuiltin(Builtins::TO_STRING, CALL_FUNCTION); + __ pop(edi); + } + __ mov(ebx, eax); + __ jmp(&argument_is_string); + + // Load the empty string into ebx, remove the receiver from the + // stack, and jump back to the case where the argument is a string. + __ bind(&no_arguments); + __ Move(ebx, Immediate(factory->empty_string())); + __ pop(ecx); + __ lea(esp, Operand(esp, kPointerSize)); + __ push(ecx); + __ jmp(&argument_is_string); + + // At this point the argument is already a string. Call runtime to + // create a string wrapper. + __ bind(&gc_required); + __ IncrementCounter(counters->string_ctor_gc_required(), 1); + { + FrameScope scope(masm, StackFrame::INTERNAL); + __ push(ebx); + __ CallRuntime(Runtime::kNewStringWrapper, 1); + } + __ ret(0); +} + + +static void ArgumentsAdaptorStackCheck(MacroAssembler* masm, + Label* stack_overflow) { + // ----------- S t a t e ------------- + // -- eax : actual number of arguments + // -- ebx : expected number of arguments + // -- edi : function (passed through to callee) + // ----------------------------------- + // Check the stack for overflow. We are not trying to catch + // interruptions (e.g. debug break and preemption) here, so the "real stack + // limit" is checked. + ExternalReference real_stack_limit = + ExternalReference::address_of_real_stack_limit(masm->isolate()); + __ mov(edx, Operand::StaticVariable(real_stack_limit)); + // Make ecx the space we have left. The stack might already be overflowed + // here which will cause ecx to become negative. + __ mov(ecx, esp); + __ sub(ecx, edx); + // Make edx the space we need for the array when it is unrolled onto the + // stack. + __ mov(edx, ebx); + __ shl(edx, kPointerSizeLog2); + // Check if the arguments will overflow the stack. + __ cmp(ecx, edx); + __ j(less_equal, stack_overflow); // Signed comparison. +} + + +static void EnterArgumentsAdaptorFrame(MacroAssembler* masm) { + __ push(ebp); + __ mov(ebp, esp); + + // Store the arguments adaptor context sentinel. + __ push(Immediate(Smi::FromInt(StackFrame::ARGUMENTS_ADAPTOR))); + + // Push the function on the stack. + __ push(edi); + + // Preserve the number of arguments on the stack. Must preserve eax, + // ebx and ecx because these registers are used when copying the + // arguments and the receiver. + STATIC_ASSERT(kSmiTagSize == 1); + __ lea(edi, Operand(eax, eax, times_1, kSmiTag)); + __ push(edi); +} + + +static void LeaveArgumentsAdaptorFrame(MacroAssembler* masm) { + // Retrieve the number of arguments from the stack. + __ mov(ebx, Operand(ebp, ArgumentsAdaptorFrameConstants::kLengthOffset)); + + // Leave the frame. + __ leave(); + + // Remove caller arguments from the stack. 
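+  // (ebx holds the smi-tagged count, so with kSmiTagSize == 1 the times_2
+  // scale below multiplies the untagged count by kPointerSize; the extra
+  // 1 * kPointerSize drops the receiver as well.)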
+  STATIC_ASSERT(kSmiTagSize == 1 && kSmiTag == 0);
+  __ pop(ecx);
+  __ lea(esp, Operand(esp, ebx, times_2, 1 * kPointerSize));  // 1 ~ receiver
+  __ push(ecx);
+}
+
+
+void Builtins::Generate_ArgumentsAdaptorTrampoline(MacroAssembler* masm) {
+  // ----------- S t a t e -------------
+  //  -- eax : actual number of arguments
+  //  -- ebx : expected number of arguments
+  //  -- edi : function (passed through to callee)
+  // -----------------------------------
+
+  Label invoke, dont_adapt_arguments;
+  __ IncrementCounter(masm->isolate()->counters()->arguments_adaptors(), 1);
+
+  Label stack_overflow;
+  ArgumentsAdaptorStackCheck(masm, &stack_overflow);
+
+  Label enough, too_few;
+  __ mov(edx, FieldOperand(edi, JSFunction::kCodeEntryOffset));
+  __ cmp(eax, ebx);
+  __ j(less, &too_few);
+  __ cmp(ebx, SharedFunctionInfo::kDontAdaptArgumentsSentinel);
+  __ j(equal, &dont_adapt_arguments);
+
+  {  // Enough parameters: Actual >= expected.
+    __ bind(&enough);
+    EnterArgumentsAdaptorFrame(masm);
+
+    // Copy receiver and all expected arguments.
+    const int offset = StandardFrameConstants::kCallerSPOffset;
+    __ lea(eax, Operand(ebp, eax, times_4, offset));
+    __ mov(edi, -1);  // account for receiver
+
+    Label copy;
+    __ bind(&copy);
+    __ inc(edi);
+    __ push(Operand(eax, 0));
+    __ sub(eax, Immediate(kPointerSize));
+    __ cmp(edi, ebx);
+    __ j(less, &copy);
+    __ jmp(&invoke);
+  }
+
+  {  // Too few parameters: Actual < expected.
+    __ bind(&too_few);
+    EnterArgumentsAdaptorFrame(masm);
+
+    // Copy receiver and all actual arguments.
+    const int offset = StandardFrameConstants::kCallerSPOffset;
+    __ lea(edi, Operand(ebp, eax, times_4, offset));
+    // ebx = expected - actual.
+    __ sub(ebx, eax);
+    // eax = -actual - 1
+    __ neg(eax);
+    __ sub(eax, Immediate(1));
+
+    Label copy;
+    __ bind(&copy);
+    __ inc(eax);
+    __ push(Operand(edi, 0));
+    __ sub(edi, Immediate(kPointerSize));
+    __ test(eax, eax);
+    __ j(not_zero, &copy);
+
+    // Fill remaining expected arguments with undefined values.
+    Label fill;
+    __ bind(&fill);
+    __ inc(eax);
+    __ push(Immediate(masm->isolate()->factory()->undefined_value()));
+    __ cmp(eax, ebx);
+    __ j(less, &fill);
+  }
+
+  // Call the entry point.
+  __ bind(&invoke);
+  // Restore function pointer.
+  __ mov(edi, Operand(ebp, JavaScriptFrameConstants::kFunctionOffset));
+  __ call(edx);
+
+  // Store offset of return address for deoptimizer.
+  masm->isolate()->heap()->SetArgumentsAdaptorDeoptPCOffset(masm->pc_offset());
+
+  // Leave frame and return.
+  LeaveArgumentsAdaptorFrame(masm);
+  __ ret(0);
+
+  // -------------------------------------------
+  // Don't adapt arguments.
+  // -------------------------------------------
+  __ bind(&dont_adapt_arguments);
+  __ jmp(edx);
+
+  __ bind(&stack_overflow);
+  {
+    FrameScope frame(masm, StackFrame::MANUAL);
+    EnterArgumentsAdaptorFrame(masm);
+    __ InvokeBuiltin(Builtins::STACK_OVERFLOW, CALL_FUNCTION);
+    __ int3();
+  }
+}
+
+
+void Builtins::Generate_OnStackReplacement(MacroAssembler* masm) {
+  // Look up the function in the JavaScript frame.
+  __ mov(eax, Operand(ebp, JavaScriptFrameConstants::kFunctionOffset));
+  {
+    FrameScope scope(masm, StackFrame::INTERNAL);
+    // Pass function as argument.
+    __ push(eax);
+    __ CallRuntime(Runtime::kCompileForOnStackReplacement, 1);
+  }
+
+  Label skip;
+  // If the code object is null, just return to the unoptimized code.
+  __ cmp(eax, Immediate(0));
+  __ j(not_equal, &skip, Label::kNear);
+  __ ret(0);
+
+  __ bind(&skip);
+
+  // Load deoptimization data from the code object.
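+  // eax holds a tagged Code pointer (low bit set, kHeapObjectTag == 1), so
+  // subtracting the tag below turns a field offset into a valid address
+  // without first untagging the pointer.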
+ __ mov(ebx, Operand(eax, Code::kDeoptimizationDataOffset - kHeapObjectTag)); + + // Load the OSR entrypoint offset from the deoptimization data. + __ mov(ebx, Operand(ebx, FixedArray::OffsetOfElementAt( + DeoptimizationInputData::kOsrPcOffsetIndex) - kHeapObjectTag)); + __ SmiUntag(ebx); + + // Compute the target address = code_obj + header_size + osr_offset + __ lea(eax, Operand(eax, ebx, times_1, Code::kHeaderSize - kHeapObjectTag)); + + // Overwrite the return address on the stack. + __ mov(Operand(esp, 0), eax); + + // And "return" to the OSR entry point of the function. + __ ret(0); +} + + +void Builtins::Generate_OsrAfterStackCheck(MacroAssembler* masm) { + // We check the stack limit as indicator that recompilation might be done. + Label ok; + ExternalReference stack_limit = + ExternalReference::address_of_stack_limit(masm->isolate()); + __ cmp(esp, Operand::StaticVariable(stack_limit)); + __ j(above_equal, &ok, Label::kNear); + { + FrameScope scope(masm, StackFrame::INTERNAL); + __ CallRuntime(Runtime::kStackGuard, 0); + } + __ jmp(masm->isolate()->builtins()->OnStackReplacement(), + RelocInfo::CODE_TARGET); + + __ bind(&ok); + __ ret(0); +} + +#undef __ +} +} // namespace v8::internal + +#endif // V8_TARGET_ARCH_X87 diff --git a/deps/v8/src/x87/code-stubs-x87.cc b/deps/v8/src/x87/code-stubs-x87.cc new file mode 100644 index 00000000000..6191aaf4e67 --- /dev/null +++ b/deps/v8/src/x87/code-stubs-x87.cc @@ -0,0 +1,4654 @@ +// Copyright 2012 the V8 project authors. All rights reserved. +// Use of this source code is governed by a BSD-style license that can be +// found in the LICENSE file. + +#include "src/v8.h" + +#if V8_TARGET_ARCH_X87 + +#include "src/bootstrapper.h" +#include "src/code-stubs.h" +#include "src/codegen.h" +#include "src/isolate.h" +#include "src/jsregexp.h" +#include "src/regexp-macro-assembler.h" +#include "src/runtime.h" +#include "src/stub-cache.h" + +namespace v8 { +namespace internal { + + +void FastNewClosureStub::InitializeInterfaceDescriptor( + CodeStubInterfaceDescriptor* descriptor) { + Register registers[] = { esi, ebx }; + descriptor->Initialize( + MajorKey(), ARRAY_SIZE(registers), registers, + Runtime::FunctionForId(Runtime::kNewClosureFromStubFailure)->entry); +} + + +void FastNewContextStub::InitializeInterfaceDescriptor( + CodeStubInterfaceDescriptor* descriptor) { + Register registers[] = { esi, edi }; + descriptor->Initialize(MajorKey(), ARRAY_SIZE(registers), registers); +} + + +void ToNumberStub::InitializeInterfaceDescriptor( + CodeStubInterfaceDescriptor* descriptor) { + // ToNumberStub invokes a function, and therefore needs a context. 
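+  // esi is the context register on this architecture (see
+  // InterfaceDescriptor::ContextRegister() below), and eax carries the
+  // value to convert.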
+ Register registers[] = { esi, eax }; + descriptor->Initialize(MajorKey(), ARRAY_SIZE(registers), registers); +} + + +void NumberToStringStub::InitializeInterfaceDescriptor( + CodeStubInterfaceDescriptor* descriptor) { + Register registers[] = { esi, eax }; + descriptor->Initialize( + MajorKey(), ARRAY_SIZE(registers), registers, + Runtime::FunctionForId(Runtime::kNumberToStringRT)->entry); +} + + +void FastCloneShallowArrayStub::InitializeInterfaceDescriptor( + CodeStubInterfaceDescriptor* descriptor) { + Register registers[] = { esi, eax, ebx, ecx }; + Representation representations[] = { + Representation::Tagged(), + Representation::Tagged(), + Representation::Smi(), + Representation::Tagged() }; + + descriptor->Initialize( + MajorKey(), ARRAY_SIZE(registers), registers, + Runtime::FunctionForId(Runtime::kCreateArrayLiteralStubBailout)->entry, + representations); +} + + +void FastCloneShallowObjectStub::InitializeInterfaceDescriptor( + CodeStubInterfaceDescriptor* descriptor) { + Register registers[] = { esi, eax, ebx, ecx, edx }; + descriptor->Initialize( + MajorKey(), ARRAY_SIZE(registers), registers, + Runtime::FunctionForId(Runtime::kCreateObjectLiteral)->entry); +} + + +void CreateAllocationSiteStub::InitializeInterfaceDescriptor( + CodeStubInterfaceDescriptor* descriptor) { + Register registers[] = { esi, ebx, edx }; + descriptor->Initialize(MajorKey(), ARRAY_SIZE(registers), registers); +} + + +void CallFunctionStub::InitializeInterfaceDescriptor( + CodeStubInterfaceDescriptor* descriptor) { + Register registers[] = {esi, edi}; + descriptor->Initialize(MajorKey(), ARRAY_SIZE(registers), registers); +} + + +void CallConstructStub::InitializeInterfaceDescriptor( + CodeStubInterfaceDescriptor* descriptor) { + // eax : number of arguments + // ebx : feedback vector + // edx : (only if ebx is not the megamorphic symbol) slot in feedback + // vector (Smi) + // edi : constructor function + // TODO(turbofan): So far we don't gather type feedback and hence skip the + // slot parameter, but ArrayConstructStub needs the vector to be undefined. 
+ Register registers[] = {esi, eax, edi, ebx}; + descriptor->Initialize(MajorKey(), ARRAY_SIZE(registers), registers); +} + + +void RegExpConstructResultStub::InitializeInterfaceDescriptor( + CodeStubInterfaceDescriptor* descriptor) { + Register registers[] = { esi, ecx, ebx, eax }; + descriptor->Initialize( + MajorKey(), ARRAY_SIZE(registers), registers, + Runtime::FunctionForId(Runtime::kRegExpConstructResult)->entry); +} + + +void TransitionElementsKindStub::InitializeInterfaceDescriptor( + CodeStubInterfaceDescriptor* descriptor) { + Register registers[] = { esi, eax, ebx }; + descriptor->Initialize( + MajorKey(), ARRAY_SIZE(registers), registers, + Runtime::FunctionForId(Runtime::kTransitionElementsKind)->entry); +} + + +const Register InterfaceDescriptor::ContextRegister() { return esi; } + + +static void InitializeArrayConstructorDescriptor( + Isolate* isolate, CodeStub::Major major, + CodeStubInterfaceDescriptor* descriptor, + int constant_stack_parameter_count) { + // register state + // eax -- number of arguments + // edi -- function + // ebx -- allocation site with elements kind + Address deopt_handler = Runtime::FunctionForId( + Runtime::kArrayConstructor)->entry; + + if (constant_stack_parameter_count == 0) { + Register registers[] = { esi, edi, ebx }; + descriptor->Initialize(major, ARRAY_SIZE(registers), registers, + deopt_handler, NULL, constant_stack_parameter_count, + JS_FUNCTION_STUB_MODE); + } else { + // stack param count needs (constructor pointer, and single argument) + Register registers[] = { esi, edi, ebx, eax }; + Representation representations[] = { + Representation::Tagged(), + Representation::Tagged(), + Representation::Tagged(), + Representation::Integer32() }; + descriptor->Initialize(major, ARRAY_SIZE(registers), registers, eax, + deopt_handler, representations, + constant_stack_parameter_count, + JS_FUNCTION_STUB_MODE, PASS_ARGUMENTS); + } +} + + +static void InitializeInternalArrayConstructorDescriptor( + CodeStub::Major major, CodeStubInterfaceDescriptor* descriptor, + int constant_stack_parameter_count) { + // register state + // eax -- number of arguments + // edi -- constructor function + Address deopt_handler = Runtime::FunctionForId( + Runtime::kInternalArrayConstructor)->entry; + + if (constant_stack_parameter_count == 0) { + Register registers[] = { esi, edi }; + descriptor->Initialize(major, ARRAY_SIZE(registers), registers, + deopt_handler, NULL, constant_stack_parameter_count, + JS_FUNCTION_STUB_MODE); + } else { + // stack param count needs (constructor pointer, and single argument) + Register registers[] = { esi, edi, eax }; + Representation representations[] = { + Representation::Tagged(), + Representation::Tagged(), + Representation::Integer32() }; + descriptor->Initialize(major, ARRAY_SIZE(registers), registers, eax, + deopt_handler, representations, + constant_stack_parameter_count, + JS_FUNCTION_STUB_MODE, PASS_ARGUMENTS); + } +} + + +void ArrayNoArgumentConstructorStub::InitializeInterfaceDescriptor( + CodeStubInterfaceDescriptor* descriptor) { + InitializeArrayConstructorDescriptor(isolate(), MajorKey(), descriptor, 0); +} + + +void ArraySingleArgumentConstructorStub::InitializeInterfaceDescriptor( + CodeStubInterfaceDescriptor* descriptor) { + InitializeArrayConstructorDescriptor(isolate(), MajorKey(), descriptor, 1); +} + + +void ArrayNArgumentsConstructorStub::InitializeInterfaceDescriptor( + CodeStubInterfaceDescriptor* descriptor) { + InitializeArrayConstructorDescriptor(isolate(), MajorKey(), descriptor, -1); +} + + +void 
InternalArrayNoArgumentConstructorStub::InitializeInterfaceDescriptor( + CodeStubInterfaceDescriptor* descriptor) { + InitializeInternalArrayConstructorDescriptor(MajorKey(), descriptor, 0); +} + + +void InternalArraySingleArgumentConstructorStub::InitializeInterfaceDescriptor( + CodeStubInterfaceDescriptor* descriptor) { + InitializeInternalArrayConstructorDescriptor(MajorKey(), descriptor, 1); +} + + +void InternalArrayNArgumentsConstructorStub::InitializeInterfaceDescriptor( + CodeStubInterfaceDescriptor* descriptor) { + InitializeInternalArrayConstructorDescriptor(MajorKey(), descriptor, -1); +} + + +void CompareNilICStub::InitializeInterfaceDescriptor( + CodeStubInterfaceDescriptor* descriptor) { + Register registers[] = { esi, eax }; + descriptor->Initialize(MajorKey(), ARRAY_SIZE(registers), registers, + FUNCTION_ADDR(CompareNilIC_Miss)); + descriptor->SetMissHandler( + ExternalReference(IC_Utility(IC::kCompareNilIC_Miss), isolate())); +} + +void ToBooleanStub::InitializeInterfaceDescriptor( + CodeStubInterfaceDescriptor* descriptor) { + Register registers[] = { esi, eax }; + descriptor->Initialize(MajorKey(), ARRAY_SIZE(registers), registers, + FUNCTION_ADDR(ToBooleanIC_Miss)); + descriptor->SetMissHandler( + ExternalReference(IC_Utility(IC::kToBooleanIC_Miss), isolate())); +} + + +void BinaryOpICStub::InitializeInterfaceDescriptor( + CodeStubInterfaceDescriptor* descriptor) { + Register registers[] = { esi, edx, eax }; + descriptor->Initialize(MajorKey(), ARRAY_SIZE(registers), registers, + FUNCTION_ADDR(BinaryOpIC_Miss)); + descriptor->SetMissHandler( + ExternalReference(IC_Utility(IC::kBinaryOpIC_Miss), isolate())); +} + + +void BinaryOpWithAllocationSiteStub::InitializeInterfaceDescriptor( + CodeStubInterfaceDescriptor* descriptor) { + Register registers[] = { esi, ecx, edx, eax }; + descriptor->Initialize(MajorKey(), ARRAY_SIZE(registers), registers, + FUNCTION_ADDR(BinaryOpIC_MissWithAllocationSite)); +} + + +void StringAddStub::InitializeInterfaceDescriptor( + CodeStubInterfaceDescriptor* descriptor) { + Register registers[] = { esi, edx, eax }; + descriptor->Initialize(MajorKey(), ARRAY_SIZE(registers), registers, + Runtime::FunctionForId(Runtime::kStringAdd)->entry); +} + + +void CallDescriptors::InitializeForIsolate(Isolate* isolate) { + { + CallInterfaceDescriptor* descriptor = + isolate->call_descriptor(Isolate::ArgumentAdaptorCall); + Register registers[] = { esi, // context + edi, // JSFunction + eax, // actual number of arguments + ebx, // expected number of arguments + }; + Representation representations[] = { + Representation::Tagged(), // context + Representation::Tagged(), // JSFunction + Representation::Integer32(), // actual number of arguments + Representation::Integer32(), // expected number of arguments + }; + descriptor->Initialize(ARRAY_SIZE(registers), registers, representations); + } + { + CallInterfaceDescriptor* descriptor = + isolate->call_descriptor(Isolate::KeyedCall); + Register registers[] = { esi, // context + ecx, // key + }; + Representation representations[] = { + Representation::Tagged(), // context + Representation::Tagged(), // key + }; + descriptor->Initialize(ARRAY_SIZE(registers), registers, representations); + } + { + CallInterfaceDescriptor* descriptor = + isolate->call_descriptor(Isolate::NamedCall); + Register registers[] = { esi, // context + ecx, // name + }; + Representation representations[] = { + Representation::Tagged(), // context + Representation::Tagged(), // name + }; + descriptor->Initialize(ARRAY_SIZE(registers), 
registers, representations); + } + { + CallInterfaceDescriptor* descriptor = + isolate->call_descriptor(Isolate::CallHandler); + Register registers[] = { esi, // context + edx, // name + }; + Representation representations[] = { + Representation::Tagged(), // context + Representation::Tagged(), // receiver + }; + descriptor->Initialize(ARRAY_SIZE(registers), registers, representations); + } + { + CallInterfaceDescriptor* descriptor = + isolate->call_descriptor(Isolate::ApiFunctionCall); + Register registers[] = { esi, // context + eax, // callee + ebx, // call_data + ecx, // holder + edx, // api_function_address + }; + Representation representations[] = { + Representation::Tagged(), // context + Representation::Tagged(), // callee + Representation::Tagged(), // call_data + Representation::Tagged(), // holder + Representation::External(), // api_function_address + }; + descriptor->Initialize(ARRAY_SIZE(registers), registers, representations); + } +} + + +#define __ ACCESS_MASM(masm) + + +void HydrogenCodeStub::GenerateLightweightMiss(MacroAssembler* masm) { + // Update the static counter each time a new code stub is generated. + isolate()->counters()->code_stubs()->Increment(); + + CodeStubInterfaceDescriptor* descriptor = GetInterfaceDescriptor(); + int param_count = descriptor->GetEnvironmentParameterCount(); + { + // Call the runtime system in a fresh internal frame. + FrameScope scope(masm, StackFrame::INTERNAL); + DCHECK(param_count == 0 || + eax.is(descriptor->GetEnvironmentParameterRegister( + param_count - 1))); + // Push arguments + for (int i = 0; i < param_count; ++i) { + __ push(descriptor->GetEnvironmentParameterRegister(i)); + } + ExternalReference miss = descriptor->miss_handler(); + __ CallExternalReference(miss, param_count); + } + + __ ret(0); +} + + +void StoreBufferOverflowStub::Generate(MacroAssembler* masm) { + // We don't allow a GC during a store buffer overflow so there is no need to + // store the registers in any particular way, but we do have to store and + // restore them. + __ pushad(); + const int argument_count = 1; + + AllowExternalCallThatCantCauseGC scope(masm); + __ PrepareCallCFunction(argument_count, ecx); + __ mov(Operand(esp, 0 * kPointerSize), + Immediate(ExternalReference::isolate_address(isolate()))); + __ CallCFunction( + ExternalReference::store_buffer_overflow_function(isolate()), + argument_count); + __ popad(); + __ ret(0); +} + + +class FloatingPointHelper : public AllStatic { + public: + enum ArgLocation { + ARGS_ON_STACK, + ARGS_IN_REGISTERS + }; + + // Code pattern for loading a floating point value. Input value must + // be either a smi or a heap number object (fp value). Requirements: + // operand in register number. Returns operand as floating point number + // on FPU stack. + static void LoadFloatOperand(MacroAssembler* masm, Register number); + + // Test if operands are smi or number objects (fp). Requirements: + // operand_1 in eax, operand_2 in edx; falls through on float + // operands, jumps to the non_float label otherwise. + static void CheckFloatOperands(MacroAssembler* masm, + Label* non_float, + Register scratch); +}; + + +void DoubleToIStub::Generate(MacroAssembler* masm) { + Register input_reg = this->source(); + Register final_result_reg = this->destination(); + DCHECK(is_truncating()); + + Label check_negative, process_64_bits, done, done_no_stash; + + int double_offset = offset(); + + // Account for return address and saved regs if input is esp. 
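+  // 3 slots: the return address plus the two registers (scratch1 and
+  // save_reg) pushed below before the mantissa/exponent operands are read.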
+ if (input_reg.is(esp)) double_offset += 3 * kPointerSize; + + MemOperand mantissa_operand(MemOperand(input_reg, double_offset)); + MemOperand exponent_operand(MemOperand(input_reg, + double_offset + kDoubleSize / 2)); + + Register scratch1; + { + Register scratch_candidates[3] = { ebx, edx, edi }; + for (int i = 0; i < 3; i++) { + scratch1 = scratch_candidates[i]; + if (!final_result_reg.is(scratch1) && !input_reg.is(scratch1)) break; + } + } + // Since we must use ecx for shifts below, use some other register (eax) + // to calculate the result if ecx is the requested return register. + Register result_reg = final_result_reg.is(ecx) ? eax : final_result_reg; + // Save ecx if it isn't the return register and therefore volatile, or if it + // is the return register, then save the temp register we use in its stead for + // the result. + Register save_reg = final_result_reg.is(ecx) ? eax : ecx; + __ push(scratch1); + __ push(save_reg); + + bool stash_exponent_copy = !input_reg.is(esp); + __ mov(scratch1, mantissa_operand); + __ mov(ecx, exponent_operand); + if (stash_exponent_copy) __ push(ecx); + + __ and_(ecx, HeapNumber::kExponentMask); + __ shr(ecx, HeapNumber::kExponentShift); + __ lea(result_reg, MemOperand(ecx, -HeapNumber::kExponentBias)); + __ cmp(result_reg, Immediate(HeapNumber::kMantissaBits)); + __ j(below, &process_64_bits); + + // Result is entirely in lower 32-bits of mantissa + int delta = HeapNumber::kExponentBias + Double::kPhysicalSignificandSize; + __ sub(ecx, Immediate(delta)); + __ xor_(result_reg, result_reg); + __ cmp(ecx, Immediate(31)); + __ j(above, &done); + __ shl_cl(scratch1); + __ jmp(&check_negative); + + __ bind(&process_64_bits); + // Result must be extracted from shifted 32-bit mantissa + __ sub(ecx, Immediate(delta)); + __ neg(ecx); + if (stash_exponent_copy) { + __ mov(result_reg, MemOperand(esp, 0)); + } else { + __ mov(result_reg, exponent_operand); + } + __ and_(result_reg, + Immediate(static_cast<uint32_t>(Double::kSignificandMask >> 32))); + __ add(result_reg, + Immediate(static_cast<uint32_t>(Double::kHiddenBit >> 32))); + __ shrd(result_reg, scratch1); + __ shr_cl(result_reg); + __ test(ecx, Immediate(32)); + { + Label skip_mov; + __ j(equal, &skip_mov, Label::kNear); + __ mov(scratch1, result_reg); + __ bind(&skip_mov); + } + + // If the double was negative, negate the integer result. 
+ __ bind(&check_negative); + __ mov(result_reg, scratch1); + __ neg(result_reg); + if (stash_exponent_copy) { + __ cmp(MemOperand(esp, 0), Immediate(0)); + } else { + __ cmp(exponent_operand, Immediate(0)); + } + { + Label skip_mov; + __ j(less_equal, &skip_mov, Label::kNear); + __ mov(result_reg, scratch1); + __ bind(&skip_mov); + } + + // Restore registers + __ bind(&done); + if (stash_exponent_copy) { + __ add(esp, Immediate(kDoubleSize / 2)); + } + __ bind(&done_no_stash); + if (!final_result_reg.is(result_reg)) { + DCHECK(final_result_reg.is(ecx)); + __ mov(final_result_reg, result_reg); + } + __ pop(save_reg); + __ pop(scratch1); + __ ret(0); +} + + +void FloatingPointHelper::LoadFloatOperand(MacroAssembler* masm, + Register number) { + Label load_smi, done; + + __ JumpIfSmi(number, &load_smi, Label::kNear); + __ fld_d(FieldOperand(number, HeapNumber::kValueOffset)); + __ jmp(&done, Label::kNear); + + __ bind(&load_smi); + __ SmiUntag(number); + __ push(number); + __ fild_s(Operand(esp, 0)); + __ pop(number); + + __ bind(&done); +} + + +void FloatingPointHelper::CheckFloatOperands(MacroAssembler* masm, + Label* non_float, + Register scratch) { + Label test_other, done; + // Test if both operands are floats or smi -> scratch=k_is_float; + // Otherwise scratch = k_not_float. + __ JumpIfSmi(edx, &test_other, Label::kNear); + __ mov(scratch, FieldOperand(edx, HeapObject::kMapOffset)); + Factory* factory = masm->isolate()->factory(); + __ cmp(scratch, factory->heap_number_map()); + __ j(not_equal, non_float); // argument in edx is not a number -> NaN + + __ bind(&test_other); + __ JumpIfSmi(eax, &done, Label::kNear); + __ mov(scratch, FieldOperand(eax, HeapObject::kMapOffset)); + __ cmp(scratch, factory->heap_number_map()); + __ j(not_equal, non_float); // argument in eax is not a number -> NaN + + // Fall-through: Both operands are numbers. + __ bind(&done); +} + + +void MathPowStub::Generate(MacroAssembler* masm) { + // No SSE2 support + UNREACHABLE(); +} + + +void FunctionPrototypeStub::Generate(MacroAssembler* masm) { + Label miss; + Register receiver = LoadIC::ReceiverRegister(); + + NamedLoadHandlerCompiler::GenerateLoadFunctionPrototype(masm, receiver, eax, + ebx, &miss); + __ bind(&miss); + PropertyAccessCompiler::TailCallBuiltin( + masm, PropertyAccessCompiler::MissBuiltin(Code::LOAD_IC)); +} + + +void ArgumentsAccessStub::GenerateReadElement(MacroAssembler* masm) { + // The key is in edx and the parameter count is in eax. + + // The displacement is used for skipping the frame pointer on the + // stack. It is the offset of the last parameter (if any) relative + // to the frame pointer. + static const int kDisplacement = 1 * kPointerSize; + + // Check that the key is a smi. + Label slow; + __ JumpIfNotSmi(edx, &slow, Label::kNear); + + // Check if the calling frame is an arguments adaptor frame. + Label adaptor; + __ mov(ebx, Operand(ebp, StandardFrameConstants::kCallerFPOffset)); + __ mov(ecx, Operand(ebx, StandardFrameConstants::kContextOffset)); + __ cmp(ecx, Immediate(Smi::FromInt(StackFrame::ARGUMENTS_ADAPTOR))); + __ j(equal, &adaptor, Label::kNear); + + // Check index against formal parameters count limit passed in + // through register eax. Use unsigned comparison to get negative + // check for free. + __ cmp(edx, eax); + __ j(above_equal, &slow, Label::kNear); + + // Read the argument from the stack and return it. + STATIC_ASSERT(kSmiTagSize == 1); + STATIC_ASSERT(kSmiTag == 0); // Shifting code depends on these. 
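+  // eax and edx are smis (value << 1), so scaling them by times_2 produces
+  // byte offsets of value * kPointerSize directly, without untagging first.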
+ __ lea(ebx, Operand(ebp, eax, times_2, 0)); + __ neg(edx); + __ mov(eax, Operand(ebx, edx, times_2, kDisplacement)); + __ ret(0); + + // Arguments adaptor case: Check index against actual arguments + // limit found in the arguments adaptor frame. Use unsigned + // comparison to get negative check for free. + __ bind(&adaptor); + __ mov(ecx, Operand(ebx, ArgumentsAdaptorFrameConstants::kLengthOffset)); + __ cmp(edx, ecx); + __ j(above_equal, &slow, Label::kNear); + + // Read the argument from the stack and return it. + STATIC_ASSERT(kSmiTagSize == 1); + STATIC_ASSERT(kSmiTag == 0); // Shifting code depends on these. + __ lea(ebx, Operand(ebx, ecx, times_2, 0)); + __ neg(edx); + __ mov(eax, Operand(ebx, edx, times_2, kDisplacement)); + __ ret(0); + + // Slow-case: Handle non-smi or out-of-bounds access to arguments + // by calling the runtime system. + __ bind(&slow); + __ pop(ebx); // Return address. + __ push(edx); + __ push(ebx); + __ TailCallRuntime(Runtime::kGetArgumentsProperty, 1, 1); +} + + +void ArgumentsAccessStub::GenerateNewSloppySlow(MacroAssembler* masm) { + // esp[0] : return address + // esp[4] : number of parameters + // esp[8] : receiver displacement + // esp[12] : function + + // Check if the calling frame is an arguments adaptor frame. + Label runtime; + __ mov(edx, Operand(ebp, StandardFrameConstants::kCallerFPOffset)); + __ mov(ecx, Operand(edx, StandardFrameConstants::kContextOffset)); + __ cmp(ecx, Immediate(Smi::FromInt(StackFrame::ARGUMENTS_ADAPTOR))); + __ j(not_equal, &runtime, Label::kNear); + + // Patch the arguments.length and the parameters pointer. + __ mov(ecx, Operand(edx, ArgumentsAdaptorFrameConstants::kLengthOffset)); + __ mov(Operand(esp, 1 * kPointerSize), ecx); + __ lea(edx, Operand(edx, ecx, times_2, + StandardFrameConstants::kCallerSPOffset)); + __ mov(Operand(esp, 2 * kPointerSize), edx); + + __ bind(&runtime); + __ TailCallRuntime(Runtime::kNewSloppyArguments, 3, 1); +} + + +void ArgumentsAccessStub::GenerateNewSloppyFast(MacroAssembler* masm) { + // esp[0] : return address + // esp[4] : number of parameters (tagged) + // esp[8] : receiver displacement + // esp[12] : function + + // ebx = parameter count (tagged) + __ mov(ebx, Operand(esp, 1 * kPointerSize)); + + // Check if the calling frame is an arguments adaptor frame. + // TODO(rossberg): Factor out some of the bits that are shared with the other + // Generate* functions. + Label runtime; + Label adaptor_frame, try_allocate; + __ mov(edx, Operand(ebp, StandardFrameConstants::kCallerFPOffset)); + __ mov(ecx, Operand(edx, StandardFrameConstants::kContextOffset)); + __ cmp(ecx, Immediate(Smi::FromInt(StackFrame::ARGUMENTS_ADAPTOR))); + __ j(equal, &adaptor_frame, Label::kNear); + + // No adaptor, parameter count = argument count. + __ mov(ecx, ebx); + __ jmp(&try_allocate, Label::kNear); + + // We have an adaptor frame. Patch the parameters pointer. + __ bind(&adaptor_frame); + __ mov(ecx, Operand(edx, ArgumentsAdaptorFrameConstants::kLengthOffset)); + __ lea(edx, Operand(edx, ecx, times_2, + StandardFrameConstants::kCallerSPOffset)); + __ mov(Operand(esp, 2 * kPointerSize), edx); + + // ebx = parameter count (tagged) + // ecx = argument count (smi-tagged) + // esp[4] = parameter count (tagged) + // esp[8] = address of receiver argument + // Compute the mapped parameter count = min(ebx, ecx) in ebx. + __ cmp(ebx, ecx); + __ j(less_equal, &try_allocate, Label::kNear); + __ mov(ebx, ecx); + + __ bind(&try_allocate); + + // Save mapped parameter count. 
+ __ push(ebx); + + // Compute the sizes of backing store, parameter map, and arguments object. + // 1. Parameter map, has 2 extra words containing context and backing store. + const int kParameterMapHeaderSize = + FixedArray::kHeaderSize + 2 * kPointerSize; + Label no_parameter_map; + __ test(ebx, ebx); + __ j(zero, &no_parameter_map, Label::kNear); + __ lea(ebx, Operand(ebx, times_2, kParameterMapHeaderSize)); + __ bind(&no_parameter_map); + + // 2. Backing store. + __ lea(ebx, Operand(ebx, ecx, times_2, FixedArray::kHeaderSize)); + + // 3. Arguments object. + __ add(ebx, Immediate(Heap::kSloppyArgumentsObjectSize)); + + // Do the allocation of all three objects in one go. + __ Allocate(ebx, eax, edx, edi, &runtime, TAG_OBJECT); + + // eax = address of new object(s) (tagged) + // ecx = argument count (smi-tagged) + // esp[0] = mapped parameter count (tagged) + // esp[8] = parameter count (tagged) + // esp[12] = address of receiver argument + // Get the arguments map from the current native context into edi. + Label has_mapped_parameters, instantiate; + __ mov(edi, Operand(esi, Context::SlotOffset(Context::GLOBAL_OBJECT_INDEX))); + __ mov(edi, FieldOperand(edi, GlobalObject::kNativeContextOffset)); + __ mov(ebx, Operand(esp, 0 * kPointerSize)); + __ test(ebx, ebx); + __ j(not_zero, &has_mapped_parameters, Label::kNear); + __ mov( + edi, + Operand(edi, Context::SlotOffset(Context::SLOPPY_ARGUMENTS_MAP_INDEX))); + __ jmp(&instantiate, Label::kNear); + + __ bind(&has_mapped_parameters); + __ mov( + edi, + Operand(edi, Context::SlotOffset(Context::ALIASED_ARGUMENTS_MAP_INDEX))); + __ bind(&instantiate); + + // eax = address of new object (tagged) + // ebx = mapped parameter count (tagged) + // ecx = argument count (smi-tagged) + // edi = address of arguments map (tagged) + // esp[0] = mapped parameter count (tagged) + // esp[8] = parameter count (tagged) + // esp[12] = address of receiver argument + // Copy the JS object part. + __ mov(FieldOperand(eax, JSObject::kMapOffset), edi); + __ mov(FieldOperand(eax, JSObject::kPropertiesOffset), + masm->isolate()->factory()->empty_fixed_array()); + __ mov(FieldOperand(eax, JSObject::kElementsOffset), + masm->isolate()->factory()->empty_fixed_array()); + + // Set up the callee in-object property. + STATIC_ASSERT(Heap::kArgumentsCalleeIndex == 1); + __ mov(edx, Operand(esp, 4 * kPointerSize)); + __ AssertNotSmi(edx); + __ mov(FieldOperand(eax, JSObject::kHeaderSize + + Heap::kArgumentsCalleeIndex * kPointerSize), + edx); + + // Use the length (smi tagged) and set that as an in-object property too. + __ AssertSmi(ecx); + STATIC_ASSERT(Heap::kArgumentsLengthIndex == 0); + __ mov(FieldOperand(eax, JSObject::kHeaderSize + + Heap::kArgumentsLengthIndex * kPointerSize), + ecx); + + // Set up the elements pointer in the allocated arguments object. + // If we allocated a parameter map, edi will point there, otherwise to the + // backing store. + __ lea(edi, Operand(eax, Heap::kSloppyArgumentsObjectSize)); + __ mov(FieldOperand(eax, JSObject::kElementsOffset), edi); + + // eax = address of new object (tagged) + // ebx = mapped parameter count (tagged) + // ecx = argument count (tagged) + // edi = address of parameter map or backing store (tagged) + // esp[0] = mapped parameter count (tagged) + // esp[8] = parameter count (tagged) + // esp[12] = address of receiver argument + // Free a register. + __ push(eax); + + // Initialize parameter map. If there are no mapped arguments, we're done. 
+  Label skip_parameter_map;
+  __ test(ebx, ebx);
+  __ j(zero, &skip_parameter_map);
+
+  __ mov(FieldOperand(edi, FixedArray::kMapOffset),
+         Immediate(isolate()->factory()->sloppy_arguments_elements_map()));
+  __ lea(eax, Operand(ebx, reinterpret_cast<intptr_t>(Smi::FromInt(2))));
+  __ mov(FieldOperand(edi, FixedArray::kLengthOffset), eax);
+  __ mov(FieldOperand(edi, FixedArray::kHeaderSize + 0 * kPointerSize), esi);
+  __ lea(eax, Operand(edi, ebx, times_2, kParameterMapHeaderSize));
+  __ mov(FieldOperand(edi, FixedArray::kHeaderSize + 1 * kPointerSize), eax);
+
+  // Copy the parameter slots and the holes in the arguments.
+  // We need to fill in mapped_parameter_count slots. They index the context,
+  // where parameters are stored in reverse order, at
+  //   MIN_CONTEXT_SLOTS .. MIN_CONTEXT_SLOTS+parameter_count-1
+  // The mapped parameters thus need to get indices
+  //   MIN_CONTEXT_SLOTS+parameter_count-1 ..
+  //       MIN_CONTEXT_SLOTS+parameter_count-mapped_parameter_count
+  // We loop from right to left.
+  Label parameters_loop, parameters_test;
+  __ push(ecx);
+  __ mov(eax, Operand(esp, 2 * kPointerSize));
+  __ mov(ebx, Immediate(Smi::FromInt(Context::MIN_CONTEXT_SLOTS)));
+  __ add(ebx, Operand(esp, 4 * kPointerSize));
+  __ sub(ebx, eax);
+  __ mov(ecx, isolate()->factory()->the_hole_value());
+  __ mov(edx, edi);
+  __ lea(edi, Operand(edi, eax, times_2, kParameterMapHeaderSize));
+  // eax = loop variable (tagged)
+  // ebx = mapping index (tagged)
+  // ecx = the hole value
+  // edx = address of parameter map (tagged)
+  // edi = address of backing store (tagged)
+  // esp[0] = argument count (tagged)
+  // esp[4] = address of new object (tagged)
+  // esp[8] = mapped parameter count (tagged)
+  // esp[16] = parameter count (tagged)
+  // esp[20] = address of receiver argument
+  __ jmp(&parameters_test, Label::kNear);
+
+  __ bind(&parameters_loop);
+  __ sub(eax, Immediate(Smi::FromInt(1)));
+  __ mov(FieldOperand(edx, eax, times_2, kParameterMapHeaderSize), ebx);
+  __ mov(FieldOperand(edi, eax, times_2, FixedArray::kHeaderSize), ecx);
+  __ add(ebx, Immediate(Smi::FromInt(1)));
+  __ bind(&parameters_test);
+  __ test(eax, eax);
+  __ j(not_zero, &parameters_loop, Label::kNear);
+  __ pop(ecx);
+
+  __ bind(&skip_parameter_map);
+
+  // ecx = argument count (tagged)
+  // edi = address of backing store (tagged)
+  // esp[0] = address of new object (tagged)
+  // esp[4] = mapped parameter count (tagged)
+  // esp[12] = parameter count (tagged)
+  // esp[16] = address of receiver argument
+  // Copy arguments header and remaining slots (if there are any).
+  __ mov(FieldOperand(edi, FixedArray::kMapOffset),
+         Immediate(isolate()->factory()->fixed_array_map()));
+  __ mov(FieldOperand(edi, FixedArray::kLengthOffset), ecx);
+
+  Label arguments_loop, arguments_test;
+  __ mov(ebx, Operand(esp, 1 * kPointerSize));
+  __ mov(edx, Operand(esp, 4 * kPointerSize));
+  __ sub(edx, ebx);  // Is there a smarter way to do negative scaling?
+  __ sub(edx, ebx);
+  __ jmp(&arguments_test, Label::kNear);
+
+  __ bind(&arguments_loop);
+  __ sub(edx, Immediate(kPointerSize));
+  __ mov(eax, Operand(edx, 0));
+  __ mov(FieldOperand(edi, ebx, times_2, FixedArray::kHeaderSize), eax);
+  __ add(ebx, Immediate(Smi::FromInt(1)));
+
+  __ bind(&arguments_test);
+  __ cmp(ebx, ecx);
+  __ j(less, &arguments_loop, Label::kNear);
+
+  // Restore.
+  __ pop(eax);  // Address of arguments object.
+  __ pop(ebx);  // Parameter count.
+
+  // Return and remove the on-stack parameters.
+  __ ret(3 * kPointerSize);
+
+  // Do the runtime call to allocate the arguments object.
+ __ bind(&runtime); + __ pop(eax); // Remove saved parameter count. + __ mov(Operand(esp, 1 * kPointerSize), ecx); // Patch argument count. + __ TailCallRuntime(Runtime::kNewSloppyArguments, 3, 1); +} + + +void ArgumentsAccessStub::GenerateNewStrict(MacroAssembler* masm) { + // esp[0] : return address + // esp[4] : number of parameters + // esp[8] : receiver displacement + // esp[12] : function + + // Check if the calling frame is an arguments adaptor frame. + Label adaptor_frame, try_allocate, runtime; + __ mov(edx, Operand(ebp, StandardFrameConstants::kCallerFPOffset)); + __ mov(ecx, Operand(edx, StandardFrameConstants::kContextOffset)); + __ cmp(ecx, Immediate(Smi::FromInt(StackFrame::ARGUMENTS_ADAPTOR))); + __ j(equal, &adaptor_frame, Label::kNear); + + // Get the length from the frame. + __ mov(ecx, Operand(esp, 1 * kPointerSize)); + __ jmp(&try_allocate, Label::kNear); + + // Patch the arguments.length and the parameters pointer. + __ bind(&adaptor_frame); + __ mov(ecx, Operand(edx, ArgumentsAdaptorFrameConstants::kLengthOffset)); + __ mov(Operand(esp, 1 * kPointerSize), ecx); + __ lea(edx, Operand(edx, ecx, times_2, + StandardFrameConstants::kCallerSPOffset)); + __ mov(Operand(esp, 2 * kPointerSize), edx); + + // Try the new space allocation. Start out with computing the size of + // the arguments object and the elements array. + Label add_arguments_object; + __ bind(&try_allocate); + __ test(ecx, ecx); + __ j(zero, &add_arguments_object, Label::kNear); + __ lea(ecx, Operand(ecx, times_2, FixedArray::kHeaderSize)); + __ bind(&add_arguments_object); + __ add(ecx, Immediate(Heap::kStrictArgumentsObjectSize)); + + // Do the allocation of both objects in one go. + __ Allocate(ecx, eax, edx, ebx, &runtime, TAG_OBJECT); + + // Get the arguments map from the current native context. + __ mov(edi, Operand(esi, Context::SlotOffset(Context::GLOBAL_OBJECT_INDEX))); + __ mov(edi, FieldOperand(edi, GlobalObject::kNativeContextOffset)); + const int offset = Context::SlotOffset(Context::STRICT_ARGUMENTS_MAP_INDEX); + __ mov(edi, Operand(edi, offset)); + + __ mov(FieldOperand(eax, JSObject::kMapOffset), edi); + __ mov(FieldOperand(eax, JSObject::kPropertiesOffset), + masm->isolate()->factory()->empty_fixed_array()); + __ mov(FieldOperand(eax, JSObject::kElementsOffset), + masm->isolate()->factory()->empty_fixed_array()); + + // Get the length (smi tagged) and set that as an in-object property too. + STATIC_ASSERT(Heap::kArgumentsLengthIndex == 0); + __ mov(ecx, Operand(esp, 1 * kPointerSize)); + __ AssertSmi(ecx); + __ mov(FieldOperand(eax, JSObject::kHeaderSize + + Heap::kArgumentsLengthIndex * kPointerSize), + ecx); + + // If there are no actual arguments, we're done. + Label done; + __ test(ecx, ecx); + __ j(zero, &done, Label::kNear); + + // Get the parameters pointer from the stack. + __ mov(edx, Operand(esp, 2 * kPointerSize)); + + // Set up the elements pointer in the allocated arguments object and + // initialize the header in the elements fixed array. + __ lea(edi, Operand(eax, Heap::kStrictArgumentsObjectSize)); + __ mov(FieldOperand(eax, JSObject::kElementsOffset), edi); + __ mov(FieldOperand(edi, FixedArray::kMapOffset), + Immediate(isolate()->factory()->fixed_array_map())); + + __ mov(FieldOperand(edi, FixedArray::kLengthOffset), ecx); + // Untag the length for the loop below. + __ SmiUntag(ecx); + + // Copy the fixed array slots. + Label loop; + __ bind(&loop); + __ mov(ebx, Operand(edx, -1 * kPointerSize)); // Skip receiver. 
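+  // edx steps down the stack toward earlier arguments while edi advances
+  // through the fixed array; ecx counts the slots still to copy.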
+  __ mov(FieldOperand(edi, FixedArray::kHeaderSize), ebx);
+  __ add(edi, Immediate(kPointerSize));
+  __ sub(edx, Immediate(kPointerSize));
+  __ dec(ecx);
+  __ j(not_zero, &loop);
+
+  // Return and remove the on-stack parameters.
+  __ bind(&done);
+  __ ret(3 * kPointerSize);
+
+  // Do the runtime call to allocate the arguments object.
+  __ bind(&runtime);
+  __ TailCallRuntime(Runtime::kNewStrictArguments, 3, 1);
+}
+
+
+void RegExpExecStub::Generate(MacroAssembler* masm) {
+  // Jump straight to the runtime if native RegExp support was not selected
+  // at compile time, or if the regexp entry in generated code has been
+  // turned off by a runtime switch or at compilation.
+#ifdef V8_INTERPRETED_REGEXP
+  __ TailCallRuntime(Runtime::kRegExpExecRT, 4, 1);
+#else  // V8_INTERPRETED_REGEXP
+
+  // Stack frame on entry.
+  //  esp[0]: return address
+  //  esp[4]: last_match_info (expected JSArray)
+  //  esp[8]: previous index
+  //  esp[12]: subject string
+  //  esp[16]: JSRegExp object
+
+  static const int kLastMatchInfoOffset = 1 * kPointerSize;
+  static const int kPreviousIndexOffset = 2 * kPointerSize;
+  static const int kSubjectOffset = 3 * kPointerSize;
+  static const int kJSRegExpOffset = 4 * kPointerSize;
+
+  Label runtime;
+  Factory* factory = isolate()->factory();
+
+  // Ensure that a RegExp stack is allocated.
+  ExternalReference address_of_regexp_stack_memory_address =
+      ExternalReference::address_of_regexp_stack_memory_address(isolate());
+  ExternalReference address_of_regexp_stack_memory_size =
+      ExternalReference::address_of_regexp_stack_memory_size(isolate());
+  __ mov(ebx, Operand::StaticVariable(address_of_regexp_stack_memory_size));
+  __ test(ebx, ebx);
+  __ j(zero, &runtime);
+
+  // Check that the first argument is a JSRegExp object.
+  __ mov(eax, Operand(esp, kJSRegExpOffset));
+  STATIC_ASSERT(kSmiTag == 0);
+  __ JumpIfSmi(eax, &runtime);
+  __ CmpObjectType(eax, JS_REGEXP_TYPE, ecx);
+  __ j(not_equal, &runtime);
+
+  // Check that the RegExp has been compiled (data contains a fixed array).
+  __ mov(ecx, FieldOperand(eax, JSRegExp::kDataOffset));
+  if (FLAG_debug_code) {
+    __ test(ecx, Immediate(kSmiTagMask));
+    __ Check(not_zero, kUnexpectedTypeForRegExpDataFixedArrayExpected);
+    __ CmpObjectType(ecx, FIXED_ARRAY_TYPE, ebx);
+    __ Check(equal, kUnexpectedTypeForRegExpDataFixedArrayExpected);
+  }
+
+  // ecx: RegExp data (FixedArray)
+  // Check the type of the RegExp. Only continue if type is JSRegExp::IRREGEXP.
+  __ mov(ebx, FieldOperand(ecx, JSRegExp::kDataTagOffset));
+  __ cmp(ebx, Immediate(Smi::FromInt(JSRegExp::IRREGEXP)));
+  __ j(not_equal, &runtime);
+
+  // ecx: RegExp data (FixedArray)
+  // Check that the number of captures fits in the static offsets vector
+  // buffer.
+  __ mov(edx, FieldOperand(ecx, JSRegExp::kIrregexpCaptureCountOffset));
+  // Check (number_of_captures + 1) * 2 <= offsets vector size
+  // Or    number_of_captures * 2 <= offsets vector size - 2
+  // Multiplying by 2 comes for free since edx is smi-tagged.
+  STATIC_ASSERT(kSmiTag == 0);
+  STATIC_ASSERT(kSmiTagSize + kSmiShiftSize == 1);
+  STATIC_ASSERT(Isolate::kJSRegexpStaticOffsetsVectorSize >= 2);
+  __ cmp(edx, Isolate::kJSRegexpStaticOffsetsVectorSize - 2);
+  __ j(above, &runtime);
+
+  // Reset offset for possibly sliced string.
+  __ Move(edi, Immediate(0));
+  __ mov(eax, Operand(esp, kSubjectOffset));
+  __ JumpIfSmi(eax, &runtime);
+  __ mov(edx, eax);  // Make a copy of the original subject string.
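+  // The copy in edx survives the cons/sliced unwrapping below: the later
+  // bounds checks read the length from edx because eax may be replaced by
+  // the underlying string.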
+  __ mov(ebx, FieldOperand(eax, HeapObject::kMapOffset));
+  __ movzx_b(ebx, FieldOperand(ebx, Map::kInstanceTypeOffset));
+
+  // eax: subject string
+  // edx: subject string
+  // ebx: subject string instance type
+  // ecx: RegExp data (FixedArray)
+  // Handle subject string according to its encoding and representation:
+  // (1) Sequential two byte? If yes, go to (9).
+  // (2) Sequential one byte? If yes, go to (6).
+  // (3) Anything but sequential or cons? If yes, go to (7).
+  // (4) Cons string. If the string is flat, replace subject with first string.
+  //     Otherwise bail out.
+  // (5a) Is subject sequential two byte? If yes, go to (9).
+  // (5b) Is subject external? If yes, go to (8).
+  // (6) One byte sequential. Load regexp code for one byte.
+  // (E) Carry on.
+  /// [...]
+
+  // Deferred code at the end of the stub:
+  // (7) Not a long external string? If yes, go to (10).
+  // (8) External string. Make it, offset-wise, look like a sequential string.
+  // (8a) Is the external string one byte? If yes, go to (6).
+  // (9) Two byte sequential. Load regexp code for two byte. Go to (E).
+  // (10) Short external string or not a string? If yes, bail out to runtime.
+  // (11) Sliced string. Replace subject with parent. Go to (5a).
+
+  Label seq_one_byte_string /* 6 */, seq_two_byte_string /* 9 */,
+      external_string /* 8 */, check_underlying /* 5a */,
+      not_seq_nor_cons /* 7 */, check_code /* E */,
+      not_long_external /* 10 */;
+
+  // (1) Sequential two byte? If yes, go to (9).
+  __ and_(ebx, kIsNotStringMask |
+               kStringRepresentationMask |
+               kStringEncodingMask |
+               kShortExternalStringMask);
+  STATIC_ASSERT((kStringTag | kSeqStringTag | kTwoByteStringTag) == 0);
+  __ j(zero, &seq_two_byte_string);  // Go to (9).
+
+  // (2) Sequential one byte? If yes, go to (6).
+  // Any other sequential string must be one byte.
+  __ and_(ebx, Immediate(kIsNotStringMask |
+                         kStringRepresentationMask |
+                         kShortExternalStringMask));
+  __ j(zero, &seq_one_byte_string, Label::kNear);  // Go to (6).
+
+  // (3) Anything but sequential or cons? If yes, go to (7).
+  // We check whether the subject string is a cons, since sequential strings
+  // have already been covered.
+  STATIC_ASSERT(kConsStringTag < kExternalStringTag);
+  STATIC_ASSERT(kSlicedStringTag > kExternalStringTag);
+  STATIC_ASSERT(kIsNotStringMask > kExternalStringTag);
+  STATIC_ASSERT(kShortExternalStringTag > kExternalStringTag);
+  __ cmp(ebx, Immediate(kExternalStringTag));
+  __ j(greater_equal, &not_seq_nor_cons);  // Go to (7).
+
+  // (4) Cons string. Check that it's flat.
+  // Replace subject with first string and reload instance type.
+  __ cmp(FieldOperand(eax, ConsString::kSecondOffset), factory->empty_string());
+  __ j(not_equal, &runtime);
+  __ mov(eax, FieldOperand(eax, ConsString::kFirstOffset));
+  __ bind(&check_underlying);
+  __ mov(ebx, FieldOperand(eax, HeapObject::kMapOffset));
+  __ mov(ebx, FieldOperand(ebx, Map::kInstanceTypeOffset));
+
+  // (5a) Is subject sequential two byte? If yes, go to (9).
+  __ test_b(ebx, kStringRepresentationMask | kStringEncodingMask);
+  STATIC_ASSERT((kSeqStringTag | kTwoByteStringTag) == 0);
+  __ j(zero, &seq_two_byte_string);  // Go to (9).
+  // (5b) Is subject external? If yes, go to (8).
+  __ test_b(ebx, kStringRepresentationMask);
+  // The underlying external string is never a short external string.
+  STATIC_ASSERT(ExternalString::kMaxShortLength < ConsString::kMinLength);
+  STATIC_ASSERT(ExternalString::kMaxShortLength < SlicedString::kMinLength);
+  __ j(not_zero, &external_string);  // Go to (8).
+ + // eax: sequential subject string (or look-alike, external string) + // edx: original subject string + // ecx: RegExp data (FixedArray) + // (6) One byte sequential. Load regexp code for one byte. + __ bind(&seq_one_byte_string); + // Load previous index and check range before edx is overwritten. We have + // to use edx instead of eax here because it might have been only made to + // look like a sequential string when it actually is an external string. + __ mov(ebx, Operand(esp, kPreviousIndexOffset)); + __ JumpIfNotSmi(ebx, &runtime); + __ cmp(ebx, FieldOperand(edx, String::kLengthOffset)); + __ j(above_equal, &runtime); + __ mov(edx, FieldOperand(ecx, JSRegExp::kDataAsciiCodeOffset)); + __ Move(ecx, Immediate(1)); // Type is one byte. + + // (E) Carry on. String handling is done. + __ bind(&check_code); + // edx: irregexp code + // Check that the irregexp code has been generated for the actual string + // encoding. If it has, the field contains a code object otherwise it contains + // a smi (code flushing support). + __ JumpIfSmi(edx, &runtime); + + // eax: subject string + // ebx: previous index (smi) + // edx: code + // ecx: encoding of subject string (1 if ASCII, 0 if two_byte); + // All checks done. Now push arguments for native regexp code. + Counters* counters = isolate()->counters(); + __ IncrementCounter(counters->regexp_entry_native(), 1); + + // Isolates: note we add an additional parameter here (isolate pointer). + static const int kRegExpExecuteArguments = 9; + __ EnterApiExitFrame(kRegExpExecuteArguments); + + // Argument 9: Pass current isolate address. + __ mov(Operand(esp, 8 * kPointerSize), + Immediate(ExternalReference::isolate_address(isolate()))); + + // Argument 8: Indicate that this is a direct call from JavaScript. + __ mov(Operand(esp, 7 * kPointerSize), Immediate(1)); + + // Argument 7: Start (high end) of backtracking stack memory area. + __ mov(esi, Operand::StaticVariable(address_of_regexp_stack_memory_address)); + __ add(esi, Operand::StaticVariable(address_of_regexp_stack_memory_size)); + __ mov(Operand(esp, 6 * kPointerSize), esi); + + // Argument 6: Set the number of capture registers to zero to force global + // regexps to behave as non-global. This does not affect non-global regexps. + __ mov(Operand(esp, 5 * kPointerSize), Immediate(0)); + + // Argument 5: static offsets vector buffer. + __ mov(Operand(esp, 4 * kPointerSize), + Immediate(ExternalReference::address_of_static_offsets_vector( + isolate()))); + + // Argument 2: Previous index. + __ SmiUntag(ebx); + __ mov(Operand(esp, 1 * kPointerSize), ebx); + + // Argument 1: Original subject string. + // The original subject is in the previous stack frame. Therefore we have to + // use ebp, which points exactly to one pointer size below the previous esp. + // (Because creating a new stack frame pushes the previous ebp onto the stack + // and thereby moves up esp by one kPointerSize.) + __ mov(esi, Operand(ebp, kSubjectOffset + kPointerSize)); + __ mov(Operand(esp, 0 * kPointerSize), esi); + + // esi: original subject string + // eax: underlying subject string + // ebx: previous index + // ecx: encoding of subject string (1 if ASCII 0 if two_byte); + // edx: code + // Argument 4: End of string data + // Argument 3: Start of string data + // Prepare start and end index of the input. + // Load the length from the original sliced string if that is the case. + __ mov(esi, FieldOperand(esi, String::kLengthOffset)); + __ add(esi, edi); // Calculate input end wrt offset. 
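+  // Length (esi) and slice offset (edi) are both smis, so their sum is the
+  // smi-tagged end index; ebx was untagged above, so edi must be untagged
+  // before it is added to ebx below.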
+  __ SmiUntag(edi);
+  __ add(ebx, edi);  // Calculate input start wrt offset.
+
+  // ebx: start index of the input string
+  // esi: end index of the input string
+  Label setup_two_byte, setup_rest;
+  __ test(ecx, ecx);
+  __ j(zero, &setup_two_byte, Label::kNear);
+  __ SmiUntag(esi);
+  __ lea(ecx, FieldOperand(eax, esi, times_1, SeqOneByteString::kHeaderSize));
+  __ mov(Operand(esp, 3 * kPointerSize), ecx);  // Argument 4.
+  __ lea(ecx, FieldOperand(eax, ebx, times_1, SeqOneByteString::kHeaderSize));
+  __ mov(Operand(esp, 2 * kPointerSize), ecx);  // Argument 3.
+  __ jmp(&setup_rest, Label::kNear);
+
+  __ bind(&setup_two_byte);
+  STATIC_ASSERT(kSmiTag == 0);
+  STATIC_ASSERT(kSmiTagSize == 1);  // esi is smi (powered by 2).
+  __ lea(ecx, FieldOperand(eax, esi, times_1, SeqTwoByteString::kHeaderSize));
+  __ mov(Operand(esp, 3 * kPointerSize), ecx);  // Argument 4.
+  __ lea(ecx, FieldOperand(eax, ebx, times_2, SeqTwoByteString::kHeaderSize));
+  __ mov(Operand(esp, 2 * kPointerSize), ecx);  // Argument 3.
+
+  __ bind(&setup_rest);
+
+  // Locate the code entry and call it.
+  __ add(edx, Immediate(Code::kHeaderSize - kHeapObjectTag));
+  __ call(edx);
+
+  // Drop arguments and come back to JS mode.
+  __ LeaveApiExitFrame(true);
+
+  // Check the result.
+  Label success;
+  __ cmp(eax, 1);
+  // We expect exactly one result since we force the called regexp to behave
+  // as non-global.
+  __ j(equal, &success);
+  Label failure;
+  __ cmp(eax, NativeRegExpMacroAssembler::FAILURE);
+  __ j(equal, &failure);
+  __ cmp(eax, NativeRegExpMacroAssembler::EXCEPTION);
+  // If not exception it can only be retry. Handle that in the runtime system.
+  __ j(not_equal, &runtime);
+  // Result must now be exception. If there is no pending exception, a stack
+  // overflow (on the backtrack stack) was detected in RegExp code, but the
+  // exception has not been created yet. Handle that in the runtime system.
+  // TODO(592): Rerunning the RegExp to get the stack overflow exception.
+  ExternalReference pending_exception(Isolate::kPendingExceptionAddress,
+                                      isolate());
+  __ mov(edx, Immediate(isolate()->factory()->the_hole_value()));
+  __ mov(eax, Operand::StaticVariable(pending_exception));
+  __ cmp(edx, eax);
+  __ j(equal, &runtime);
+  // For exception, throw the exception again.
+
+  // Clear the pending exception variable.
+  __ mov(Operand::StaticVariable(pending_exception), edx);
+
+  // Special handling of termination exceptions, which are uncatchable
+  // by JavaScript code.
+  __ cmp(eax, factory->termination_exception());
+  Label throw_termination_exception;
+  __ j(equal, &throw_termination_exception, Label::kNear);
+
+  // Handle normal exception by following handler chain.
+  __ Throw(eax);
+
+  __ bind(&throw_termination_exception);
+  __ ThrowUncatchable(eax);
+
+  __ bind(&failure);
+  // For failure to match, return null.
+  __ mov(eax, factory->null_value());
+  __ ret(4 * kPointerSize);
+
+  // Load RegExp data.
+  __ bind(&success);
+  __ mov(eax, Operand(esp, kJSRegExpOffset));
+  __ mov(ecx, FieldOperand(eax, JSRegExp::kDataOffset));
+  __ mov(edx, FieldOperand(ecx, JSRegExp::kIrregexpCaptureCountOffset));
+  // Calculate number of capture registers (number_of_captures + 1) * 2.
+  STATIC_ASSERT(kSmiTag == 0);
+  STATIC_ASSERT(kSmiTagSize + kSmiShiftSize == 1);
+  __ add(edx, Immediate(2));  // edx was a smi.
+
+  // edx: Number of capture registers
+  // Load last_match_info which is still known to be a fast case JSArray.
+  // Check that the fourth object is a JSArray object.
+  __ mov(eax, Operand(esp, kLastMatchInfoOffset));
+  __ JumpIfSmi(eax, &runtime);
+  __ CmpObjectType(eax, JS_ARRAY_TYPE, ebx);
+  __ j(not_equal, &runtime);
+  // Check that the JSArray is in fast case.
+  __ mov(ebx, FieldOperand(eax, JSArray::kElementsOffset));
+  __ mov(eax, FieldOperand(ebx, HeapObject::kMapOffset));
+  __ cmp(eax, factory->fixed_array_map());
+  __ j(not_equal, &runtime);
+  // Check that the last match info has space for the capture registers and the
+  // additional information.
+  __ mov(eax, FieldOperand(ebx, FixedArray::kLengthOffset));
+  __ SmiUntag(eax);
+  __ sub(eax, Immediate(RegExpImpl::kLastMatchOverhead));
+  __ cmp(edx, eax);
+  __ j(greater, &runtime);
+
+  // ebx: last_match_info backing store (FixedArray)
+  // edx: number of capture registers
+  // Store the capture count.
+  __ SmiTag(edx);  // Number of capture registers to smi.
+  __ mov(FieldOperand(ebx, RegExpImpl::kLastCaptureCountOffset), edx);
+  __ SmiUntag(edx);  // Number of capture registers back from smi.
+  // Store last subject and last input.
+  __ mov(eax, Operand(esp, kSubjectOffset));
+  __ mov(ecx, eax);
+  __ mov(FieldOperand(ebx, RegExpImpl::kLastSubjectOffset), eax);
+  __ RecordWriteField(ebx,
+                      RegExpImpl::kLastSubjectOffset,
+                      eax,
+                      edi);
+  __ mov(eax, ecx);
+  __ mov(FieldOperand(ebx, RegExpImpl::kLastInputOffset), eax);
+  __ RecordWriteField(ebx,
+                      RegExpImpl::kLastInputOffset,
+                      eax,
+                      edi);
+
+  // Get the static offsets vector filled by the native regexp code.
+  ExternalReference address_of_static_offsets_vector =
+      ExternalReference::address_of_static_offsets_vector(isolate());
+  __ mov(ecx, Immediate(address_of_static_offsets_vector));
+
+  // ebx: last_match_info backing store (FixedArray)
+  // ecx: offsets vector
+  // edx: number of capture registers
+  Label next_capture, done;
+  // Capture register counter starts from number of capture registers and
+  // counts down until wrapping after zero.
+  __ bind(&next_capture);
+  __ sub(edx, Immediate(1));
+  __ j(negative, &done, Label::kNear);
+  // Read the value from the static offsets vector buffer.
+  __ mov(edi, Operand(ecx, edx, times_int_size, 0));
+  __ SmiTag(edi);
+  // Store the smi value in the last match info.
+  __ mov(FieldOperand(ebx,
+                      edx,
+                      times_pointer_size,
+                      RegExpImpl::kFirstCaptureOffset),
+         edi);
+  __ jmp(&next_capture);
+  __ bind(&done);
+
+  // Return last match info.
+  __ mov(eax, Operand(esp, kLastMatchInfoOffset));
+  __ ret(4 * kPointerSize);
+
+  // Do the runtime call to execute the regexp.
+  __ bind(&runtime);
+  __ TailCallRuntime(Runtime::kRegExpExecRT, 4, 1);
+
+  // Deferred code for string handling.
+  // (7) Not a long external string? If yes, go to (10).
+  __ bind(&not_seq_nor_cons);
+  // Compare flags are still set from (3).
+  __ j(greater, &not_long_external, Label::kNear);  // Go to (10).
+
+  // (8) External string. Short external strings have been ruled out.
+  __ bind(&external_string);
+  // Reload instance type.
+  __ mov(ebx, FieldOperand(eax, HeapObject::kMapOffset));
+  __ movzx_b(ebx, FieldOperand(ebx, Map::kInstanceTypeOffset));
+  if (FLAG_debug_code) {
+    // Assert that we do not have a cons or slice (indirect strings) here.
+    // Sequential strings have already been ruled out.
+    __ test_b(ebx, kIsIndirectStringMask);
+    __ Assert(zero, kExternalStringExpectedButNotFound);
+  }
+  __ mov(eax, FieldOperand(eax, ExternalString::kResourceDataOffset));
+  // Move the pointer so that offset-wise, it looks like a sequential string.
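+  // Subtracting (header size - tag) below biases the raw character pointer
+  // so that the header-relative addressing used for sequential strings
+  // (the SeqOneByteString/SeqTwoByteString header offsets, asserted equal
+  // below) lands exactly on the external string's character data.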
+  STATIC_ASSERT(SeqTwoByteString::kHeaderSize == SeqOneByteString::kHeaderSize);
+  __ sub(eax, Immediate(SeqTwoByteString::kHeaderSize - kHeapObjectTag));
+  STATIC_ASSERT(kTwoByteStringTag == 0);
+  // (8a) Is the external string one byte? If yes, go to (6).
+  __ test_b(ebx, kStringEncodingMask);
+  __ j(not_zero, &seq_one_byte_string);  // Go to (6).
+
+  // eax: sequential subject string (or look-alike, external string)
+  // edx: original subject string
+  // ecx: RegExp data (FixedArray)
+  // (9) Two byte sequential. Load regexp code for two byte. Go to (E).
+  __ bind(&seq_two_byte_string);
+  // Load previous index and check range before edx is overwritten. We have
+  // to use edx instead of eax here because it might have been only made to
+  // look like a sequential string when it actually is an external string.
+  __ mov(ebx, Operand(esp, kPreviousIndexOffset));
+  __ JumpIfNotSmi(ebx, &runtime);
+  __ cmp(ebx, FieldOperand(edx, String::kLengthOffset));
+  __ j(above_equal, &runtime);
+  __ mov(edx, FieldOperand(ecx, JSRegExp::kDataUC16CodeOffset));
+  __ Move(ecx, Immediate(0));  // Type is two byte.
+  __ jmp(&check_code);  // Go to (E).
+
+  // (10) Not a string or a short external string? If yes, bail out to runtime.
+  __ bind(&not_long_external);
+  // Catch non-string subject or short external string.
+  STATIC_ASSERT(kNotStringTag != 0 && kShortExternalStringTag != 0);
+  __ test(ebx, Immediate(kIsNotStringMask | kShortExternalStringTag));
+  __ j(not_zero, &runtime);
+
+  // (11) Sliced string. Replace subject with parent. Go to (5a).
+  // Load offset into edi and replace subject string with parent.
+  __ mov(edi, FieldOperand(eax, SlicedString::kOffsetOffset));
+  __ mov(eax, FieldOperand(eax, SlicedString::kParentOffset));
+  __ jmp(&check_underlying);  // Go to (5a).
+#endif  // V8_INTERPRETED_REGEXP
+}
+
+
+static int NegativeComparisonResult(Condition cc) {
+  DCHECK(cc != equal);
+  DCHECK((cc == less) || (cc == less_equal)
+      || (cc == greater) || (cc == greater_equal));
+  return (cc == greater || cc == greater_equal) ? LESS : GREATER;
+}
+
+
+static void CheckInputType(MacroAssembler* masm,
+                           Register input,
+                           CompareIC::State expected,
+                           Label* fail) {
+  Label ok;
+  if (expected == CompareIC::SMI) {
+    __ JumpIfNotSmi(input, fail);
+  } else if (expected == CompareIC::NUMBER) {
+    __ JumpIfSmi(input, &ok);
+    __ cmp(FieldOperand(input, HeapObject::kMapOffset),
+           Immediate(masm->isolate()->factory()->heap_number_map()));
+    __ j(not_equal, fail);
+  }
+  // We could be strict about internalized/non-internalized here, but as long as
+  // hydrogen doesn't care, the stub doesn't have to care either.
+  __ bind(&ok);
+}
+
+
+static void BranchIfNotInternalizedString(MacroAssembler* masm,
+                                          Label* label,
+                                          Register object,
+                                          Register scratch) {
+  __ JumpIfSmi(object, label);
+  __ mov(scratch, FieldOperand(object, HeapObject::kMapOffset));
+  __ movzx_b(scratch, FieldOperand(scratch, Map::kInstanceTypeOffset));
+  STATIC_ASSERT(kInternalizedTag == 0 && kStringTag == 0);
+  __ test(scratch, Immediate(kIsNotStringMask | kIsNotInternalizedMask));
+  __ j(not_zero, label);
+}
+
+
+void ICCompareStub::GenerateGeneric(MacroAssembler* masm) {
+  Label check_unequal_objects;
+  Condition cc = GetCondition();
+
+  Label miss;
+  CheckInputType(masm, edx, left_, &miss);
+  CheckInputType(masm, eax, right_, &miss);
+
+  // Compare two smis.
+  Label non_smi, smi_done;
+  __ mov(ecx, edx);
+  __ or_(ecx, eax);
+  __ JumpIfNotSmi(ecx, &non_smi, Label::kNear);
+  __ sub(edx, eax);  // Return on the result of the subtraction.
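+  // For smis, the tagged difference edx - eax already has the sign of the
+  // comparison and is itself a valid smi; the only hazard is 32-bit
+  // overflow, which the not_ below corrects by flipping the sign bit.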
+ __ j(no_overflow, &smi_done, Label::kNear);
+ __ not_(edx); // Correct sign in case of overflow. edx is never 0 here.
+ __ bind(&smi_done);
+ __ mov(eax, edx);
+ __ ret(0);
+ __ bind(&non_smi);
+
+ // NOTICE! This code is only reached after a smi-fast-case check, so
+ // it is certain that at least one operand isn't a smi.
+
+ // Identical objects can be compared fast, but there are some tricky cases
+ // for NaN and undefined.
+ Label generic_heap_number_comparison;
+ {
+ Label not_identical;
+ __ cmp(eax, edx);
+ __ j(not_equal, &not_identical);
+
+ if (cc != equal) {
+ // Check for undefined. undefined OP undefined is false even though
+ // undefined == undefined.
+ Label check_for_nan;
+ __ cmp(edx, isolate()->factory()->undefined_value());
+ __ j(not_equal, &check_for_nan, Label::kNear);
+ __ Move(eax, Immediate(Smi::FromInt(NegativeComparisonResult(cc))));
+ __ ret(0);
+ __ bind(&check_for_nan);
+ }
+
+ // Test for NaN. Compare heap numbers in a general way,
+ // to handle NaNs correctly.
+ __ cmp(FieldOperand(edx, HeapObject::kMapOffset),
+ Immediate(isolate()->factory()->heap_number_map()));
+ __ j(equal, &generic_heap_number_comparison, Label::kNear);
+ if (cc != equal) {
+ // Call runtime on identical JSObjects. Otherwise return equal.
+ __ CmpObjectType(eax, FIRST_SPEC_OBJECT_TYPE, ecx);
+ __ j(above_equal, &not_identical);
+ }
+ __ Move(eax, Immediate(Smi::FromInt(EQUAL)));
+ __ ret(0);
+
+ __ bind(&not_identical);
+ }
+
+ // Strict equality can quickly decide whether objects are equal.
+ // Non-strict object equality is slower, so it is handled later in the stub.
+ if (cc == equal && strict()) {
+ Label slow; // Fallthrough label.
+ Label not_smis;
+ // If we're doing a strict equality comparison, we don't have to do
+ // type conversion, so we generate code to do fast comparison for objects
+ // and oddballs. Non-smi numbers and strings still go through the usual
+ // slow-case code.
+ // If either is a Smi (we know that not both are), then they can only
+ // be equal if the other is a HeapNumber. If so, use the slow case.
+ STATIC_ASSERT(kSmiTag == 0);
+ DCHECK_EQ(0, Smi::FromInt(0));
+ __ mov(ecx, Immediate(kSmiTagMask));
+ __ and_(ecx, eax);
+ __ test(ecx, edx);
+ __ j(not_zero, &not_smis, Label::kNear);
+ // One operand is a smi.
+
+ // Check whether the non-smi is a heap number.
+ STATIC_ASSERT(kSmiTagMask == 1);
+ // ecx still holds eax & kSmiTag, which is either zero or one.
+ __ sub(ecx, Immediate(0x01));
+ __ mov(ebx, edx);
+ __ xor_(ebx, eax);
+ __ and_(ebx, ecx); // ebx holds either 0 or eax ^ edx.
+ __ xor_(ebx, eax);
+ // if eax was smi, ebx is now edx, else eax.
+
+ // Check if the non-smi operand is a heap number.
+ __ cmp(FieldOperand(ebx, HeapObject::kMapOffset),
+ Immediate(isolate()->factory()->heap_number_map()));
+ // If heap number, handle it in the slow case.
+ __ j(equal, &slow, Label::kNear);
+ // Return non-equal (ebx is not zero)
+ __ mov(eax, ebx);
+ __ ret(0);
+
+ __ bind(&not_smis);
+ // If either operand is a JSObject or an oddball value, then they are not
+ // equal since their pointers are different.
+ // There is no test for undetectability in strict equality.
+
+ // Get the type of the first operand.
+ // If the first object is a JS object, we have done pointer comparison.
+ Label first_non_object; + STATIC_ASSERT(LAST_TYPE == LAST_SPEC_OBJECT_TYPE); + __ CmpObjectType(eax, FIRST_SPEC_OBJECT_TYPE, ecx); + __ j(below, &first_non_object, Label::kNear); + + // Return non-zero (eax is not zero) + Label return_not_equal; + STATIC_ASSERT(kHeapObjectTag != 0); + __ bind(&return_not_equal); + __ ret(0); + + __ bind(&first_non_object); + // Check for oddballs: true, false, null, undefined. + __ CmpInstanceType(ecx, ODDBALL_TYPE); + __ j(equal, &return_not_equal); + + __ CmpObjectType(edx, FIRST_SPEC_OBJECT_TYPE, ecx); + __ j(above_equal, &return_not_equal); + + // Check for oddballs: true, false, null, undefined. + __ CmpInstanceType(ecx, ODDBALL_TYPE); + __ j(equal, &return_not_equal); + + // Fall through to the general case. + __ bind(&slow); + } + + // Generate the number comparison code. + Label non_number_comparison; + Label unordered; + __ bind(&generic_heap_number_comparison); + FloatingPointHelper::CheckFloatOperands( + masm, &non_number_comparison, ebx); + FloatingPointHelper::LoadFloatOperand(masm, eax); + FloatingPointHelper::LoadFloatOperand(masm, edx); + __ FCmp(); + + // Don't base result on EFLAGS when a NaN is involved. + __ j(parity_even, &unordered, Label::kNear); + + Label below_label, above_label; + // Return a result of -1, 0, or 1, based on EFLAGS. + __ j(below, &below_label, Label::kNear); + __ j(above, &above_label, Label::kNear); + + __ Move(eax, Immediate(0)); + __ ret(0); + + __ bind(&below_label); + __ mov(eax, Immediate(Smi::FromInt(-1))); + __ ret(0); + + __ bind(&above_label); + __ mov(eax, Immediate(Smi::FromInt(1))); + __ ret(0); + + // If one of the numbers was NaN, then the result is always false. + // The cc is never not-equal. + __ bind(&unordered); + DCHECK(cc != not_equal); + if (cc == less || cc == less_equal) { + __ mov(eax, Immediate(Smi::FromInt(1))); + } else { + __ mov(eax, Immediate(Smi::FromInt(-1))); + } + __ ret(0); + + // The number comparison code did not provide a valid result. + __ bind(&non_number_comparison); + + // Fast negative check for internalized-to-internalized equality. + Label check_for_strings; + if (cc == equal) { + BranchIfNotInternalizedString(masm, &check_for_strings, eax, ecx); + BranchIfNotInternalizedString(masm, &check_for_strings, edx, ecx); + + // We've already checked for object identity, so if both operands + // are internalized they aren't equal. Register eax already holds a + // non-zero value, which indicates not equal, so just return. + __ ret(0); + } + + __ bind(&check_for_strings); + + __ JumpIfNotBothSequentialAsciiStrings(edx, eax, ecx, ebx, + &check_unequal_objects); + + // Inline comparison of ASCII strings. + if (cc == equal) { + StringCompareStub::GenerateFlatAsciiStringEquals(masm, + edx, + eax, + ecx, + ebx); + } else { + StringCompareStub::GenerateCompareFlatAsciiStrings(masm, + edx, + eax, + ecx, + ebx, + edi); + } +#ifdef DEBUG + __ Abort(kUnexpectedFallThroughFromStringComparison); +#endif + + __ bind(&check_unequal_objects); + if (cc == equal && !strict()) { + // Non-strict equality. Objects are unequal if + // they are both JSObjects and not undetectable, + // and their pointers are different. + Label not_both_objects; + Label return_unequal; + // At most one is a smi, so we can test for smi by adding the two. + // A smi plus a heap object has the low bit set, a heap object plus + // a heap object has the low bit clear. 
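+ // For example, with smis tagged ...0 and heap objects tagged ...1:
+ // smi + heap -> low bit set (at least one operand is a smi);
+ // heap + heap -> low bit clear (1 + 1 carries out of the low bit).
+ // (smi + smi would also clear the bit, but the smi fast case above has
+ // already ruled that combination out.)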
+ STATIC_ASSERT(kSmiTag == 0);
+ STATIC_ASSERT(kSmiTagMask == 1);
+ __ lea(ecx, Operand(eax, edx, times_1, 0));
+ __ test(ecx, Immediate(kSmiTagMask));
+ __ j(not_zero, &not_both_objects, Label::kNear);
+ __ CmpObjectType(eax, FIRST_SPEC_OBJECT_TYPE, ecx);
+ __ j(below, &not_both_objects, Label::kNear);
+ __ CmpObjectType(edx, FIRST_SPEC_OBJECT_TYPE, ebx);
+ __ j(below, &not_both_objects, Label::kNear);
+ // We do not bail out after this point. Both are JSObjects, and
+ // they are equal if and only if both are undetectable.
+ // The and of the undetectable flags is 1 if and only if they are equal.
+ __ test_b(FieldOperand(ecx, Map::kBitFieldOffset),
+ 1 << Map::kIsUndetectable);
+ __ j(zero, &return_unequal, Label::kNear);
+ __ test_b(FieldOperand(ebx, Map::kBitFieldOffset),
+ 1 << Map::kIsUndetectable);
+ __ j(zero, &return_unequal, Label::kNear);
+ // The objects are both undetectable, so they both compare as the value
+ // undefined, and are equal.
+ __ Move(eax, Immediate(EQUAL));
+ __ bind(&return_unequal);
+ // Return non-equal by returning the non-zero object pointer in eax,
+ // or return equal if we fell through to here.
+ __ ret(0);
+ __ bind(&not_both_objects);
+ }
+
+ // Push arguments below the return address.
+ __ pop(ecx);
+ __ push(edx);
+ __ push(eax);
+
+ // Figure out which native to call and setup the arguments.
+ Builtins::JavaScript builtin;
+ if (cc == equal) {
+ builtin = strict() ? Builtins::STRICT_EQUALS : Builtins::EQUALS;
+ } else {
+ builtin = Builtins::COMPARE;
+ __ push(Immediate(Smi::FromInt(NegativeComparisonResult(cc))));
+ }
+
+ // Restore return address on the stack.
+ __ push(ecx);
+
+ // Call the native; it returns -1 (less), 0 (equal), or 1 (greater)
+ // tagged as a small integer.
+ __ InvokeBuiltin(builtin, JUMP_FUNCTION);
+
+ __ bind(&miss);
+ GenerateMiss(masm);
+}
+
+
+static void GenerateRecordCallTarget(MacroAssembler* masm) {
+ // Cache the called function in a feedback vector slot. Cache states
+ // are uninitialized, monomorphic (indicated by a JSFunction), and
+ // megamorphic.
+ // eax : number of arguments to the construct function
+ // ebx : Feedback vector
+ // edx : slot in feedback vector (Smi)
+ // edi : the function to call
+ Isolate* isolate = masm->isolate();
+ Label initialize, done, miss, megamorphic, not_array_function;
+
+ // Load the cache state into ecx.
+ __ mov(ecx, FieldOperand(ebx, edx, times_half_pointer_size,
+ FixedArray::kHeaderSize));
+
+ // A monomorphic cache hit or an already megamorphic state: invoke the
+ // function without changing the state.
+ __ cmp(ecx, edi);
+ __ j(equal, &done, Label::kFar);
+ __ cmp(ecx, Immediate(TypeFeedbackInfo::MegamorphicSentinel(isolate)));
+ __ j(equal, &done, Label::kFar);
+
+ if (!FLAG_pretenuring_call_new) {
+ // If we came here, we need to see if we are the array function.
+ // If we didn't have a matching function, and we didn't find the
+ // megamorphic sentinel, then we have in the slot either some other
+ // function or an AllocationSite. Do a map check on the object in ecx.
+ Handle<Map> allocation_site_map = isolate->factory()->allocation_site_map();
+ __ cmp(FieldOperand(ecx, 0), Immediate(allocation_site_map));
+ __ j(not_equal, &miss);
+
+ // Make sure the function is the Array() function.
+ __ LoadGlobalFunction(Context::ARRAY_FUNCTION_INDEX, ecx);
+ __ cmp(edi, ecx);
+ __ j(not_equal, &megamorphic);
+ __ jmp(&done, Label::kFar);
+ }
+
+ __ bind(&miss);
+
+ // A monomorphic miss (i.e., here the cache is not uninitialized) goes
+ // megamorphic.
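+ // Roughly, the slot's state machine is:
+ // uninitialized --(first call)--> JSFunction (or AllocationSite for
+ // the Array function)
+ // anything else --(mismatch)----> megamorphic sentinel (terminal)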
+ __ cmp(ecx, Immediate(TypeFeedbackInfo::UninitializedSentinel(isolate)));
+ __ j(equal, &initialize);
+ // MegamorphicSentinel is an immortal immovable object (undefined) so no
+ // write-barrier is needed.
+ __ bind(&megamorphic);
+ __ mov(FieldOperand(ebx, edx, times_half_pointer_size,
+ FixedArray::kHeaderSize),
+ Immediate(TypeFeedbackInfo::MegamorphicSentinel(isolate)));
+ __ jmp(&done, Label::kFar);
+
+ // An uninitialized cache is patched with the function or sentinel to
+ // indicate the ElementsKind if function is the Array constructor.
+ __ bind(&initialize);
+ if (!FLAG_pretenuring_call_new) {
+ // Make sure the function is the Array() function.
+ __ LoadGlobalFunction(Context::ARRAY_FUNCTION_INDEX, ecx);
+ __ cmp(edi, ecx);
+ __ j(not_equal, &not_array_function);
+
+ // The target function is the Array constructor.
+ // Create an AllocationSite if we don't already have it, store it in the
+ // slot.
+ {
+ FrameScope scope(masm, StackFrame::INTERNAL);
+
+ // Arguments register must be smi-tagged to call out.
+ __ SmiTag(eax);
+ __ push(eax);
+ __ push(edi);
+ __ push(edx);
+ __ push(ebx);
+
+ CreateAllocationSiteStub create_stub(isolate);
+ __ CallStub(&create_stub);
+
+ __ pop(ebx);
+ __ pop(edx);
+ __ pop(edi);
+ __ pop(eax);
+ __ SmiUntag(eax);
+ }
+ __ jmp(&done);
+
+ __ bind(&not_array_function);
+ }
+
+ __ mov(FieldOperand(ebx, edx, times_half_pointer_size,
+ FixedArray::kHeaderSize),
+ edi);
+ // We won't need edx or ebx anymore, just save edi.
+ __ push(edi);
+ __ push(ebx);
+ __ push(edx);
+ __ RecordWriteArray(ebx, edi, edx, EMIT_REMEMBERED_SET, OMIT_SMI_CHECK);
+ __ pop(edx);
+ __ pop(ebx);
+ __ pop(edi);
+
+ __ bind(&done);
+}
+
+
+static void EmitContinueIfStrictOrNative(MacroAssembler* masm, Label* cont) {
+ // Do not transform the receiver for strict mode functions.
+ __ mov(ecx, FieldOperand(edi, JSFunction::kSharedFunctionInfoOffset));
+ __ test_b(FieldOperand(ecx, SharedFunctionInfo::kStrictModeByteOffset),
+ 1 << SharedFunctionInfo::kStrictModeBitWithinByte);
+ __ j(not_equal, cont);
+
+ // Do not transform the receiver for natives (shared already in ecx).
+ __ test_b(FieldOperand(ecx, SharedFunctionInfo::kNativeByteOffset),
+ 1 << SharedFunctionInfo::kNativeBitWithinByte);
+ __ j(not_equal, cont);
+}
+
+
+static void EmitSlowCase(Isolate* isolate,
+ MacroAssembler* masm,
+ int argc,
+ Label* non_function) {
+ // Check for function proxy.
+ __ CmpInstanceType(ecx, JS_FUNCTION_PROXY_TYPE);
+ __ j(not_equal, non_function);
+ __ pop(ecx);
+ __ push(edi); // put proxy as additional argument under return address
+ __ push(ecx);
+ __ Move(eax, Immediate(argc + 1));
+ __ Move(ebx, Immediate(0));
+ __ GetBuiltinEntry(edx, Builtins::CALL_FUNCTION_PROXY);
+ {
+ Handle<Code> adaptor = isolate->builtins()->ArgumentsAdaptorTrampoline();
+ __ jmp(adaptor, RelocInfo::CODE_TARGET);
+ }
+
+ // CALL_NON_FUNCTION expects the non-function callee as receiver (instead
+ // of the original receiver from the call site).
+ __ bind(non_function);
+ __ mov(Operand(esp, (argc + 1) * kPointerSize), edi);
+ __ Move(eax, Immediate(argc));
+ __ Move(ebx, Immediate(0));
+ __ GetBuiltinEntry(edx, Builtins::CALL_NON_FUNCTION);
+ Handle<Code> adaptor = isolate->builtins()->ArgumentsAdaptorTrampoline();
+ __ jmp(adaptor, RelocInfo::CODE_TARGET);
+}
+
+
+static void EmitWrapCase(MacroAssembler* masm, int argc, Label* cont) {
+ // Wrap the receiver and patch it back onto the stack.
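+ // Roughly the sloppy-mode receiver conversion: a primitive receiver is
+ // boxed via Builtins::TO_OBJECT and the wrapper is written back into the
+ // receiver slot at esp[(argc + 1) * kPointerSize].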
+ { FrameScope frame_scope(masm, StackFrame::INTERNAL); + __ push(edi); + __ push(eax); + __ InvokeBuiltin(Builtins::TO_OBJECT, CALL_FUNCTION); + __ pop(edi); + } + __ mov(Operand(esp, (argc + 1) * kPointerSize), eax); + __ jmp(cont); +} + + +static void CallFunctionNoFeedback(MacroAssembler* masm, + int argc, bool needs_checks, + bool call_as_method) { + // edi : the function to call + Label slow, non_function, wrap, cont; + + if (needs_checks) { + // Check that the function really is a JavaScript function. + __ JumpIfSmi(edi, &non_function); + + // Goto slow case if we do not have a function. + __ CmpObjectType(edi, JS_FUNCTION_TYPE, ecx); + __ j(not_equal, &slow); + } + + // Fast-case: Just invoke the function. + ParameterCount actual(argc); + + if (call_as_method) { + if (needs_checks) { + EmitContinueIfStrictOrNative(masm, &cont); + } + + // Load the receiver from the stack. + __ mov(eax, Operand(esp, (argc + 1) * kPointerSize)); + + if (needs_checks) { + __ JumpIfSmi(eax, &wrap); + + __ CmpObjectType(eax, FIRST_SPEC_OBJECT_TYPE, ecx); + __ j(below, &wrap); + } else { + __ jmp(&wrap); + } + + __ bind(&cont); + } + + __ InvokeFunction(edi, actual, JUMP_FUNCTION, NullCallWrapper()); + + if (needs_checks) { + // Slow-case: Non-function called. + __ bind(&slow); + // (non_function is bound in EmitSlowCase) + EmitSlowCase(masm->isolate(), masm, argc, &non_function); + } + + if (call_as_method) { + __ bind(&wrap); + EmitWrapCase(masm, argc, &cont); + } +} + + +void CallFunctionStub::Generate(MacroAssembler* masm) { + CallFunctionNoFeedback(masm, argc_, NeedsChecks(), CallAsMethod()); +} + + +void CallConstructStub::Generate(MacroAssembler* masm) { + // eax : number of arguments + // ebx : feedback vector + // edx : (only if ebx is not the megamorphic symbol) slot in feedback + // vector (Smi) + // edi : constructor function + Label slow, non_function_call; + + // Check that function is not a smi. + __ JumpIfSmi(edi, &non_function_call); + // Check that function is a JSFunction. + __ CmpObjectType(edi, JS_FUNCTION_TYPE, ecx); + __ j(not_equal, &slow); + + if (RecordCallTarget()) { + GenerateRecordCallTarget(masm); + + if (FLAG_pretenuring_call_new) { + // Put the AllocationSite from the feedback vector into ebx. + // By adding kPointerSize we encode that we know the AllocationSite + // entry is at the feedback vector slot given by edx + 1. + __ mov(ebx, FieldOperand(ebx, edx, times_half_pointer_size, + FixedArray::kHeaderSize + kPointerSize)); + } else { + Label feedback_register_initialized; + // Put the AllocationSite from the feedback vector into ebx, or undefined. + __ mov(ebx, FieldOperand(ebx, edx, times_half_pointer_size, + FixedArray::kHeaderSize)); + Handle<Map> allocation_site_map = + isolate()->factory()->allocation_site_map(); + __ cmp(FieldOperand(ebx, 0), Immediate(allocation_site_map)); + __ j(equal, &feedback_register_initialized); + __ mov(ebx, isolate()->factory()->undefined_value()); + __ bind(&feedback_register_initialized); + } + + __ AssertUndefinedOrAllocationSite(ebx); + } + + // Jump to the function-specific construct stub. 
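+ // In effect: tail-jump to
+ // function->shared()->construct_stub() code, just past the Code header.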
+ Register jmp_reg = ecx;
+ __ mov(jmp_reg, FieldOperand(edi, JSFunction::kSharedFunctionInfoOffset));
+ __ mov(jmp_reg, FieldOperand(jmp_reg,
+ SharedFunctionInfo::kConstructStubOffset));
+ __ lea(jmp_reg, FieldOperand(jmp_reg, Code::kHeaderSize));
+ __ jmp(jmp_reg);
+
+ // edi: called object
+ // eax: number of arguments
+ // ecx: object map
+ Label do_call;
+ __ bind(&slow);
+ __ CmpInstanceType(ecx, JS_FUNCTION_PROXY_TYPE);
+ __ j(not_equal, &non_function_call);
+ __ GetBuiltinEntry(edx, Builtins::CALL_FUNCTION_PROXY_AS_CONSTRUCTOR);
+ __ jmp(&do_call);
+
+ __ bind(&non_function_call);
+ __ GetBuiltinEntry(edx, Builtins::CALL_NON_FUNCTION_AS_CONSTRUCTOR);
+ __ bind(&do_call);
+ // Set expected number of arguments to zero (not changing eax).
+ __ Move(ebx, Immediate(0));
+ Handle<Code> arguments_adaptor =
+ isolate()->builtins()->ArgumentsAdaptorTrampoline();
+ __ jmp(arguments_adaptor, RelocInfo::CODE_TARGET);
+}
+
+
+static void EmitLoadTypeFeedbackVector(MacroAssembler* masm, Register vector) {
+ __ mov(vector, Operand(ebp, JavaScriptFrameConstants::kFunctionOffset));
+ __ mov(vector, FieldOperand(vector, JSFunction::kSharedFunctionInfoOffset));
+ __ mov(vector, FieldOperand(vector,
+ SharedFunctionInfo::kFeedbackVectorOffset));
+}
+
+
+void CallIC_ArrayStub::Generate(MacroAssembler* masm) {
+ // edi - function
+ // edx - slot id
+ Label miss;
+ int argc = state_.arg_count();
+ ParameterCount actual(argc);
+
+ EmitLoadTypeFeedbackVector(masm, ebx);
+
+ __ LoadGlobalFunction(Context::ARRAY_FUNCTION_INDEX, ecx);
+ __ cmp(edi, ecx);
+ __ j(not_equal, &miss);
+
+ __ mov(eax, arg_count());
+ __ mov(ecx, FieldOperand(ebx, edx, times_half_pointer_size,
+ FixedArray::kHeaderSize));
+
+ // Verify that ecx contains an AllocationSite.
+ Factory* factory = masm->isolate()->factory();
+ __ cmp(FieldOperand(ecx, HeapObject::kMapOffset),
+ factory->allocation_site_map());
+ __ j(not_equal, &miss);
+
+ __ mov(ebx, ecx);
+ ArrayConstructorStub stub(masm->isolate(), arg_count());
+ __ TailCallStub(&stub);
+
+ __ bind(&miss);
+ GenerateMiss(masm, IC::kCallIC_Customization_Miss);
+
+ // The slow case; we need this no matter what to complete a call after a miss.
+ CallFunctionNoFeedback(masm,
+ arg_count(),
+ true,
+ CallAsMethod());
+
+ // Unreachable.
+ __ int3();
+}
+
+
+void CallICStub::Generate(MacroAssembler* masm) {
+ // edi - function
+ // edx - slot id
+ Isolate* isolate = masm->isolate();
+ Label extra_checks_or_miss, slow_start;
+ Label slow, non_function, wrap, cont;
+ Label have_js_function;
+ int argc = state_.arg_count();
+ ParameterCount actual(argc);
+
+ EmitLoadTypeFeedbackVector(masm, ebx);
+
+ // The checks. First, does edi match the recorded monomorphic target?
+ __ cmp(edi, FieldOperand(ebx, edx, times_half_pointer_size,
+ FixedArray::kHeaderSize));
+ __ j(not_equal, &extra_checks_or_miss);
+
+ __ bind(&have_js_function);
+ if (state_.CallAsMethod()) {
+ EmitContinueIfStrictOrNative(masm, &cont);
+
+ // Load the receiver from the stack.
+ __ mov(eax, Operand(esp, (argc + 1) * kPointerSize)); + + __ JumpIfSmi(eax, &wrap); + + __ CmpObjectType(eax, FIRST_SPEC_OBJECT_TYPE, ecx); + __ j(below, &wrap); + + __ bind(&cont); + } + + __ InvokeFunction(edi, actual, JUMP_FUNCTION, NullCallWrapper()); + + __ bind(&slow); + EmitSlowCase(isolate, masm, argc, &non_function); + + if (state_.CallAsMethod()) { + __ bind(&wrap); + EmitWrapCase(masm, argc, &cont); + } + + __ bind(&extra_checks_or_miss); + Label miss; + + __ mov(ecx, FieldOperand(ebx, edx, times_half_pointer_size, + FixedArray::kHeaderSize)); + __ cmp(ecx, Immediate(TypeFeedbackInfo::MegamorphicSentinel(isolate))); + __ j(equal, &slow_start); + __ cmp(ecx, Immediate(TypeFeedbackInfo::UninitializedSentinel(isolate))); + __ j(equal, &miss); + + if (!FLAG_trace_ic) { + // We are going megamorphic. If the feedback is a JSFunction, it is fine + // to handle it here. More complex cases are dealt with in the runtime. + __ AssertNotSmi(ecx); + __ CmpObjectType(ecx, JS_FUNCTION_TYPE, ecx); + __ j(not_equal, &miss); + __ mov(FieldOperand(ebx, edx, times_half_pointer_size, + FixedArray::kHeaderSize), + Immediate(TypeFeedbackInfo::MegamorphicSentinel(isolate))); + __ jmp(&slow_start); + } + + // We are here because tracing is on or we are going monomorphic. + __ bind(&miss); + GenerateMiss(masm, IC::kCallIC_Miss); + + // the slow case + __ bind(&slow_start); + + // Check that the function really is a JavaScript function. + __ JumpIfSmi(edi, &non_function); + + // Goto slow case if we do not have a function. + __ CmpObjectType(edi, JS_FUNCTION_TYPE, ecx); + __ j(not_equal, &slow); + __ jmp(&have_js_function); + + // Unreachable + __ int3(); +} + + +void CallICStub::GenerateMiss(MacroAssembler* masm, IC::UtilityId id) { + // Get the receiver of the function from the stack; 1 ~ return address. + __ mov(ecx, Operand(esp, (state_.arg_count() + 1) * kPointerSize)); + + { + FrameScope scope(masm, StackFrame::INTERNAL); + + // Push the receiver and the function and feedback info. + __ push(ecx); + __ push(edi); + __ push(ebx); + __ push(edx); + + // Call the entry. + ExternalReference miss = ExternalReference(IC_Utility(id), + masm->isolate()); + __ CallExternalReference(miss, 4); + + // Move result to edi and exit the internal frame. + __ mov(edi, eax); + } +} + + +bool CEntryStub::NeedsImmovableCode() { + return false; +} + + +void CodeStub::GenerateStubsAheadOfTime(Isolate* isolate) { + CEntryStub::GenerateAheadOfTime(isolate); + StoreBufferOverflowStub::GenerateFixedRegStubsAheadOfTime(isolate); + StubFailureTrampolineStub::GenerateAheadOfTime(isolate); + // It is important that the store buffer overflow stubs are generated first. + ArrayConstructorStubBase::GenerateStubsAheadOfTime(isolate); + CreateAllocationSiteStub::GenerateAheadOfTime(isolate); + BinaryOpICStub::GenerateAheadOfTime(isolate); + BinaryOpICWithAllocationSiteStub::GenerateAheadOfTime(isolate); +} + + +void CodeStub::GenerateFPStubs(Isolate* isolate) { + // Do nothing. 
+} + + +void CEntryStub::GenerateAheadOfTime(Isolate* isolate) { + CEntryStub stub(isolate, 1); + stub.GetCode(); +} + + +void CEntryStub::Generate(MacroAssembler* masm) { + // eax: number of arguments including receiver + // ebx: pointer to C function (C callee-saved) + // ebp: frame pointer (restored after C call) + // esp: stack pointer (restored after C call) + // esi: current context (C callee-saved) + // edi: JS function of the caller (C callee-saved) + + ProfileEntryHookStub::MaybeCallEntryHook(masm); + + // Enter the exit frame that transitions from JavaScript to C++. + __ EnterExitFrame(); + + // ebx: pointer to C function (C callee-saved) + // ebp: frame pointer (restored after C call) + // esp: stack pointer (restored after C call) + // edi: number of arguments including receiver (C callee-saved) + // esi: pointer to the first argument (C callee-saved) + + // Result returned in eax, or eax+edx if result_size_ is 2. + + // Check stack alignment. + if (FLAG_debug_code) { + __ CheckStackAlignment(); + } + + // Call C function. + __ mov(Operand(esp, 0 * kPointerSize), edi); // argc. + __ mov(Operand(esp, 1 * kPointerSize), esi); // argv. + __ mov(Operand(esp, 2 * kPointerSize), + Immediate(ExternalReference::isolate_address(isolate()))); + __ call(ebx); + // Result is in eax or edx:eax - do not destroy these registers! + + // Runtime functions should not return 'the hole'. Allowing it to escape may + // lead to crashes in the IC code later. + if (FLAG_debug_code) { + Label okay; + __ cmp(eax, isolate()->factory()->the_hole_value()); + __ j(not_equal, &okay, Label::kNear); + __ int3(); + __ bind(&okay); + } + + // Check result for exception sentinel. + Label exception_returned; + __ cmp(eax, isolate()->factory()->exception()); + __ j(equal, &exception_returned); + + ExternalReference pending_exception_address( + Isolate::kPendingExceptionAddress, isolate()); + + // Check that there is no pending exception, otherwise we + // should have returned the exception sentinel. + if (FLAG_debug_code) { + __ push(edx); + __ mov(edx, Immediate(isolate()->factory()->the_hole_value())); + Label okay; + __ cmp(edx, Operand::StaticVariable(pending_exception_address)); + // Cannot use check here as it attempts to generate call into runtime. + __ j(equal, &okay, Label::kNear); + __ int3(); + __ bind(&okay); + __ pop(edx); + } + + // Exit the JavaScript to C++ exit frame. + __ LeaveExitFrame(); + __ ret(0); + + // Handling of exception. + __ bind(&exception_returned); + + // Retrieve the pending exception. + __ mov(eax, Operand::StaticVariable(pending_exception_address)); + + // Clear the pending exception. + __ mov(edx, Immediate(isolate()->factory()->the_hole_value())); + __ mov(Operand::StaticVariable(pending_exception_address), edx); + + // Special handling of termination exceptions which are uncatchable + // by javascript code. + Label throw_termination_exception; + __ cmp(eax, isolate()->factory()->termination_exception()); + __ j(equal, &throw_termination_exception); + + // Handle normal exception. + __ Throw(eax); + + __ bind(&throw_termination_exception); + __ ThrowUncatchable(eax); +} + + +void JSEntryStub::GenerateBody(MacroAssembler* masm, bool is_construct) { + Label invoke, handler_entry, exit; + Label not_outermost_js, not_outermost_js_2; + + ProfileEntryHookStub::MaybeCallEntryHook(masm); + + // Set up frame. + __ push(ebp); + __ mov(ebp, esp); + + // Push marker in two places. + int marker = is_construct ? 
StackFrame::ENTRY_CONSTRUCT : StackFrame::ENTRY;
+ __ push(Immediate(Smi::FromInt(marker))); // context slot
+ __ push(Immediate(Smi::FromInt(marker))); // function slot
+ // Save callee-saved registers (C calling conventions).
+ __ push(edi);
+ __ push(esi);
+ __ push(ebx);
+
+ // Save copies of the top frame descriptor on the stack.
+ ExternalReference c_entry_fp(Isolate::kCEntryFPAddress, isolate());
+ __ push(Operand::StaticVariable(c_entry_fp));
+
+ // If this is the outermost JS call, set js_entry_sp value.
+ ExternalReference js_entry_sp(Isolate::kJSEntrySPAddress, isolate());
+ __ cmp(Operand::StaticVariable(js_entry_sp), Immediate(0));
+ __ j(not_equal, &not_outermost_js, Label::kNear);
+ __ mov(Operand::StaticVariable(js_entry_sp), ebp);
+ __ push(Immediate(Smi::FromInt(StackFrame::OUTERMOST_JSENTRY_FRAME)));
+ __ jmp(&invoke, Label::kNear);
+ __ bind(&not_outermost_js);
+ __ push(Immediate(Smi::FromInt(StackFrame::INNER_JSENTRY_FRAME)));
+
+ // Jump to a faked try block that does the invoke, with a faked catch
+ // block that sets the pending exception.
+ __ jmp(&invoke);
+ __ bind(&handler_entry);
+ handler_offset_ = handler_entry.pos();
+ // Caught exception: Store result (exception) in the pending exception
+ // field in the JSEnv and return a failure sentinel.
+ ExternalReference pending_exception(Isolate::kPendingExceptionAddress,
+ isolate());
+ __ mov(Operand::StaticVariable(pending_exception), eax);
+ __ mov(eax, Immediate(isolate()->factory()->exception()));
+ __ jmp(&exit);
+
+ // Invoke: Link this frame into the handler chain. There's only one
+ // handler block in this code object, so its index is 0.
+ __ bind(&invoke);
+ __ PushTryHandler(StackHandler::JS_ENTRY, 0);
+
+ // Clear any pending exceptions.
+ __ mov(edx, Immediate(isolate()->factory()->the_hole_value()));
+ __ mov(Operand::StaticVariable(pending_exception), edx);
+
+ // Fake a receiver (NULL).
+ __ push(Immediate(0)); // receiver
+
+ // Invoke the function by calling through JS entry trampoline builtin and
+ // pop the faked function when we return. Notice that we cannot store a
+ // reference to the trampoline code directly in this stub, because the
+ // builtin stubs may not have been generated yet.
+ if (is_construct) {
+ ExternalReference construct_entry(Builtins::kJSConstructEntryTrampoline,
+ isolate());
+ __ mov(edx, Immediate(construct_entry));
+ } else {
+ ExternalReference entry(Builtins::kJSEntryTrampoline, isolate());
+ __ mov(edx, Immediate(entry));
+ }
+ __ mov(edx, Operand(edx, 0)); // deref address
+ __ lea(edx, FieldOperand(edx, Code::kHeaderSize));
+ __ call(edx);
+
+ // Unlink this frame from the handler chain.
+ __ PopTryHandler();
+
+ __ bind(&exit);
+ // Check if the current stack frame is marked as the outermost JS frame.
+ __ pop(ebx);
+ __ cmp(ebx, Immediate(Smi::FromInt(StackFrame::OUTERMOST_JSENTRY_FRAME)));
+ __ j(not_equal, &not_outermost_js_2);
+ __ mov(Operand::StaticVariable(js_entry_sp), Immediate(0));
+ __ bind(&not_outermost_js_2);
+
+ // Restore the top frame descriptor from the stack.
+ __ pop(Operand::StaticVariable(ExternalReference(
+ Isolate::kCEntryFPAddress, isolate())));
+
+ // Restore callee-saved registers (C calling conventions).
+ __ pop(ebx);
+ __ pop(esi);
+ __ pop(edi);
+ __ add(esp, Immediate(2 * kPointerSize)); // remove markers
+
+ // Restore frame pointer and return.
+ __ pop(ebp);
+ __ ret(0);
+}
+
+
+// Generate stub code for instanceof.
+// This code can patch a call site inline cache of the instanceof check,
+// which looks like this.
+//
+// 81 ff XX XX XX XX cmp edi, <the hole, patched to a map>
+// 75 0a jne <some near label>
+// b8 XX XX XX XX mov eax, <the hole, patched to either true or false>
+//
+// If call site patching is requested the stack will have the delta from the
+// return address to the cmp instruction just below the return address. This
+// also means that call site patching can only take place with arguments in
+// registers. TOS looks like this when call site patching is requested
+//
+// esp[0] : return address
+// esp[4] : delta from return address to cmp instruction
+//
+void InstanceofStub::Generate(MacroAssembler* masm) {
+ // Call site inlining and patching implies arguments in registers.
+ DCHECK(HasArgsInRegisters() || !HasCallSiteInlineCheck());
+
+ // Fixed register usage throughout the stub.
+ Register object = eax; // Object (lhs).
+ Register map = ebx; // Map of the object.
+ Register function = edx; // Function (rhs).
+ Register prototype = edi; // Prototype of the function.
+ Register scratch = ecx;
+
+ // Constants describing the call site code to patch.
+ static const int kDeltaToCmpImmediate = 2;
+ static const int kDeltaToMov = 8;
+ static const int kDeltaToMovImmediate = 9;
+ static const int8_t kCmpEdiOperandByte1 = BitCast<int8_t, uint8_t>(0x3b);
+ static const int8_t kCmpEdiOperandByte2 = BitCast<int8_t, uint8_t>(0x3d);
+ static const int8_t kMovEaxImmediateByte = BitCast<int8_t, uint8_t>(0xb8);
+
+ DCHECK_EQ(object.code(), InstanceofStub::left().code());
+ DCHECK_EQ(function.code(), InstanceofStub::right().code());
+
+ // Get the object and function - they are always both needed.
+ Label slow, not_js_object;
+ if (!HasArgsInRegisters()) {
+ __ mov(object, Operand(esp, 2 * kPointerSize));
+ __ mov(function, Operand(esp, 1 * kPointerSize));
+ }
+
+ // Check that the left hand is a JS object.
+ __ JumpIfSmi(object, &not_js_object);
+ __ IsObjectJSObjectType(object, map, scratch, &not_js_object);
+
+ // If there is a call site cache don't look in the global cache, but do the
+ // real lookup and update the call site cache.
+ if (!HasCallSiteInlineCheck() && !ReturnTrueFalseObject()) {
+ // Look up the function and the map in the instanceof cache.
+ Label miss;
+ __ CompareRoot(function, scratch, Heap::kInstanceofCacheFunctionRootIndex);
+ __ j(not_equal, &miss, Label::kNear);
+ __ CompareRoot(map, scratch, Heap::kInstanceofCacheMapRootIndex);
+ __ j(not_equal, &miss, Label::kNear);
+ __ LoadRoot(eax, Heap::kInstanceofCacheAnswerRootIndex);
+ __ ret((HasArgsInRegisters() ? 0 : 2) * kPointerSize);
+ __ bind(&miss);
+ }
+
+ // Get the prototype of the function.
+ __ TryGetFunctionPrototype(function, prototype, scratch, &slow, true);
+
+ // Check that the function prototype is a JS object.
+ __ JumpIfSmi(prototype, &slow);
+ __ IsObjectJSObjectType(prototype, scratch, scratch, &slow);
+
+ // Update the global instanceof or call site inlined cache with the current
+ // map and function. The cached answer will be set when it is known below.
+ if (!HasCallSiteInlineCheck()) {
+ __ StoreRoot(map, scratch, Heap::kInstanceofCacheMapRootIndex);
+ __ StoreRoot(function, scratch, Heap::kInstanceofCacheFunctionRootIndex);
+ } else {
+ // The constants for the code patching are based on no push instructions
+ // at the call site.
+ DCHECK(HasArgsInRegisters());
+ // Get return address and delta to inlined map check.
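+ // In effect: scratch = return_address - delta, the address of the inlined
+ // cmp instruction; the patched map is then stored through the cmp's 32-bit
+ // immediate, which begins kDeltaToCmpImmediate bytes into the instruction.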
+ __ mov(scratch, Operand(esp, 0 * kPointerSize));
+ __ sub(scratch, Operand(esp, 1 * kPointerSize));
+ if (FLAG_debug_code) {
+ __ cmpb(Operand(scratch, 0), kCmpEdiOperandByte1);
+ __ Assert(equal, kInstanceofStubUnexpectedCallSiteCacheCmp1);
+ __ cmpb(Operand(scratch, 1), kCmpEdiOperandByte2);
+ __ Assert(equal, kInstanceofStubUnexpectedCallSiteCacheCmp2);
+ }
+ __ mov(scratch, Operand(scratch, kDeltaToCmpImmediate));
+ __ mov(Operand(scratch, 0), map);
+ }
+
+ // Loop through the prototype chain of the object looking for the function
+ // prototype.
+ __ mov(scratch, FieldOperand(map, Map::kPrototypeOffset));
+ Label loop, is_instance, is_not_instance;
+ __ bind(&loop);
+ __ cmp(scratch, prototype);
+ __ j(equal, &is_instance, Label::kNear);
+ Factory* factory = isolate()->factory();
+ __ cmp(scratch, Immediate(factory->null_value()));
+ __ j(equal, &is_not_instance, Label::kNear);
+ __ mov(scratch, FieldOperand(scratch, HeapObject::kMapOffset));
+ __ mov(scratch, FieldOperand(scratch, Map::kPrototypeOffset));
+ __ jmp(&loop);
+
+ __ bind(&is_instance);
+ if (!HasCallSiteInlineCheck()) {
+ __ mov(eax, Immediate(0));
+ __ StoreRoot(eax, scratch, Heap::kInstanceofCacheAnswerRootIndex);
+ if (ReturnTrueFalseObject()) {
+ __ mov(eax, factory->true_value());
+ }
+ } else {
+ // Get return address and delta to inlined map check.
+ __ mov(eax, factory->true_value());
+ __ mov(scratch, Operand(esp, 0 * kPointerSize));
+ __ sub(scratch, Operand(esp, 1 * kPointerSize));
+ if (FLAG_debug_code) {
+ __ cmpb(Operand(scratch, kDeltaToMov), kMovEaxImmediateByte);
+ __ Assert(equal, kInstanceofStubUnexpectedCallSiteCacheMov);
+ }
+ __ mov(Operand(scratch, kDeltaToMovImmediate), eax);
+ if (!ReturnTrueFalseObject()) {
+ __ Move(eax, Immediate(0));
+ }
+ }
+ __ ret((HasArgsInRegisters() ? 0 : 2) * kPointerSize);
+
+ __ bind(&is_not_instance);
+ if (!HasCallSiteInlineCheck()) {
+ __ mov(eax, Immediate(Smi::FromInt(1)));
+ __ StoreRoot(eax, scratch, Heap::kInstanceofCacheAnswerRootIndex);
+ if (ReturnTrueFalseObject()) {
+ __ mov(eax, factory->false_value());
+ }
+ } else {
+ // Get return address and delta to inlined map check.
+ __ mov(eax, factory->false_value());
+ __ mov(scratch, Operand(esp, 0 * kPointerSize));
+ __ sub(scratch, Operand(esp, 1 * kPointerSize));
+ if (FLAG_debug_code) {
+ __ cmpb(Operand(scratch, kDeltaToMov), kMovEaxImmediateByte);
+ __ Assert(equal, kInstanceofStubUnexpectedCallSiteCacheMov);
+ }
+ __ mov(Operand(scratch, kDeltaToMovImmediate), eax);
+ if (!ReturnTrueFalseObject()) {
+ __ Move(eax, Immediate(Smi::FromInt(1)));
+ }
+ }
+ __ ret((HasArgsInRegisters() ? 0 : 2) * kPointerSize);
+
+ Label object_not_null, object_not_null_or_smi;
+ __ bind(&not_js_object);
+ // Before the null, smi and string value checks, check that the rhs is a
+ // function; for a non-function rhs an exception needs to be thrown.
+ __ JumpIfSmi(function, &slow, Label::kNear);
+ __ CmpObjectType(function, JS_FUNCTION_TYPE, scratch);
+ __ j(not_equal, &slow, Label::kNear);
+
+ // Null is not an instance of anything.
+ __ cmp(object, factory->null_value());
+ __ j(not_equal, &object_not_null, Label::kNear);
+ if (ReturnTrueFalseObject()) {
+ __ mov(eax, factory->false_value());
+ } else {
+ __ Move(eax, Immediate(Smi::FromInt(1)));
+ }
+ __ ret((HasArgsInRegisters() ? 0 : 2) * kPointerSize);
+
+ __ bind(&object_not_null);
+ // A smi value is not an instance of anything.
+ __ JumpIfNotSmi(object, &object_not_null_or_smi, Label::kNear);
+ if (ReturnTrueFalseObject()) {
+ __ mov(eax, factory->false_value());
+ } else {
+ __ Move(eax, Immediate(Smi::FromInt(1)));
+ }
+ __ ret((HasArgsInRegisters() ? 0 : 2) * kPointerSize);
+
+ __ bind(&object_not_null_or_smi);
+ // A string value is not an instance of anything.
+ Condition is_string = masm->IsObjectStringType(object, scratch, scratch);
+ __ j(NegateCondition(is_string), &slow, Label::kNear);
+ if (ReturnTrueFalseObject()) {
+ __ mov(eax, factory->false_value());
+ } else {
+ __ Move(eax, Immediate(Smi::FromInt(1)));
+ }
+ __ ret((HasArgsInRegisters() ? 0 : 2) * kPointerSize);
+
+ // Slow-case: Go through the JavaScript implementation.
+ __ bind(&slow);
+ if (!ReturnTrueFalseObject()) {
+ // Tail call the builtin which returns 0 or 1.
+ if (HasArgsInRegisters()) {
+ // Push arguments below return address.
+ __ pop(scratch);
+ __ push(object);
+ __ push(function);
+ __ push(scratch);
+ }
+ __ InvokeBuiltin(Builtins::INSTANCE_OF, JUMP_FUNCTION);
+ } else {
+ // Call the builtin and convert 0/1 to true/false.
+ {
+ FrameScope scope(masm, StackFrame::INTERNAL);
+ __ push(object);
+ __ push(function);
+ __ InvokeBuiltin(Builtins::INSTANCE_OF, CALL_FUNCTION);
+ }
+ Label true_value, done;
+ __ test(eax, eax);
+ __ j(zero, &true_value, Label::kNear);
+ __ mov(eax, factory->false_value());
+ __ jmp(&done, Label::kNear);
+ __ bind(&true_value);
+ __ mov(eax, factory->true_value());
+ __ bind(&done);
+ __ ret((HasArgsInRegisters() ? 0 : 2) * kPointerSize);
+ }
+}
+
+
+Register InstanceofStub::left() { return eax; }
+
+
+Register InstanceofStub::right() { return edx; }
+
+
+// -------------------------------------------------------------------------
+// StringCharCodeAtGenerator
+
+void StringCharCodeAtGenerator::GenerateFast(MacroAssembler* masm) {
+ // If the receiver is a smi, trigger the non-string case.
+ STATIC_ASSERT(kSmiTag == 0);
+ __ JumpIfSmi(object_, receiver_not_string_);
+
+ // Fetch the instance type of the receiver into result register.
+ __ mov(result_, FieldOperand(object_, HeapObject::kMapOffset));
+ __ movzx_b(result_, FieldOperand(result_, Map::kInstanceTypeOffset));
+ // If the receiver is not a string, trigger the non-string case.
+ __ test(result_, Immediate(kIsNotStringMask));
+ __ j(not_zero, receiver_not_string_);
+
+ // If the index is non-smi, trigger the non-smi case.
+ STATIC_ASSERT(kSmiTag == 0);
+ __ JumpIfNotSmi(index_, &index_not_smi_);
+ __ bind(&got_smi_index_);
+
+ // Check for index out of range.
+ __ cmp(index_, FieldOperand(object_, String::kLengthOffset));
+ __ j(above_equal, index_out_of_range_);
+
+ __ SmiUntag(index_);
+
+ Factory* factory = masm->isolate()->factory();
+ StringCharLoadGenerator::Generate(
+ masm, factory, object_, index_, result_, &call_runtime_);
+
+ __ SmiTag(result_);
+ __ bind(&exit_);
+}
+
+
+void StringCharCodeAtGenerator::GenerateSlow(
+ MacroAssembler* masm,
+ const RuntimeCallHelper& call_helper) {
+ __ Abort(kUnexpectedFallthroughToCharCodeAtSlowCase);
+
+ // Index is not a smi.
+ __ bind(&index_not_smi_);
+ // If index is a heap number, try converting it to an integer.
+ __ CheckMap(index_,
+ masm->isolate()->factory()->heap_number_map(),
+ index_not_number_,
+ DONT_DO_SMI_CHECK);
+ call_helper.BeforeCall(masm);
+ __ push(object_);
+ __ push(index_); // Consumed by runtime conversion function.
+ if (index_flags_ == STRING_INDEX_IS_NUMBER) { + __ CallRuntime(Runtime::kNumberToIntegerMapMinusZero, 1); + } else { + DCHECK(index_flags_ == STRING_INDEX_IS_ARRAY_INDEX); + // NumberToSmi discards numbers that are not exact integers. + __ CallRuntime(Runtime::kNumberToSmi, 1); + } + if (!index_.is(eax)) { + // Save the conversion result before the pop instructions below + // have a chance to overwrite it. + __ mov(index_, eax); + } + __ pop(object_); + // Reload the instance type. + __ mov(result_, FieldOperand(object_, HeapObject::kMapOffset)); + __ movzx_b(result_, FieldOperand(result_, Map::kInstanceTypeOffset)); + call_helper.AfterCall(masm); + // If index is still not a smi, it must be out of range. + STATIC_ASSERT(kSmiTag == 0); + __ JumpIfNotSmi(index_, index_out_of_range_); + // Otherwise, return to the fast path. + __ jmp(&got_smi_index_); + + // Call runtime. We get here when the receiver is a string and the + // index is a number, but the code of getting the actual character + // is too complex (e.g., when the string needs to be flattened). + __ bind(&call_runtime_); + call_helper.BeforeCall(masm); + __ push(object_); + __ SmiTag(index_); + __ push(index_); + __ CallRuntime(Runtime::kStringCharCodeAtRT, 2); + if (!result_.is(eax)) { + __ mov(result_, eax); + } + call_helper.AfterCall(masm); + __ jmp(&exit_); + + __ Abort(kUnexpectedFallthroughFromCharCodeAtSlowCase); +} + + +// ------------------------------------------------------------------------- +// StringCharFromCodeGenerator + +void StringCharFromCodeGenerator::GenerateFast(MacroAssembler* masm) { + // Fast case of Heap::LookupSingleCharacterStringFromCode. + STATIC_ASSERT(kSmiTag == 0); + STATIC_ASSERT(kSmiShiftSize == 0); + DCHECK(IsPowerOf2(String::kMaxOneByteCharCode + 1)); + __ test(code_, + Immediate(kSmiTagMask | + ((~String::kMaxOneByteCharCode) << kSmiTagSize))); + __ j(not_zero, &slow_case_); + + Factory* factory = masm->isolate()->factory(); + __ Move(result_, Immediate(factory->single_character_string_cache())); + STATIC_ASSERT(kSmiTag == 0); + STATIC_ASSERT(kSmiTagSize == 1); + STATIC_ASSERT(kSmiShiftSize == 0); + // At this point code register contains smi tagged ASCII char code. + __ mov(result_, FieldOperand(result_, + code_, times_half_pointer_size, + FixedArray::kHeaderSize)); + __ cmp(result_, factory->undefined_value()); + __ j(equal, &slow_case_); + __ bind(&exit_); +} + + +void StringCharFromCodeGenerator::GenerateSlow( + MacroAssembler* masm, + const RuntimeCallHelper& call_helper) { + __ Abort(kUnexpectedFallthroughToCharFromCodeSlowCase); + + __ bind(&slow_case_); + call_helper.BeforeCall(masm); + __ push(code_); + __ CallRuntime(Runtime::kCharFromCode, 1); + if (!result_.is(eax)) { + __ mov(result_, eax); + } + call_helper.AfterCall(masm); + __ jmp(&exit_); + + __ Abort(kUnexpectedFallthroughFromCharFromCodeSlowCase); +} + + +void StringHelper::GenerateCopyCharacters(MacroAssembler* masm, + Register dest, + Register src, + Register count, + Register scratch, + String::Encoding encoding) { + DCHECK(!scratch.is(dest)); + DCHECK(!scratch.is(src)); + DCHECK(!scratch.is(count)); + + // Nothing to do for zero characters. + Label done; + __ test(count, count); + __ j(zero, &done); + + // Make count the number of bytes to copy. 
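+ // Roughly, in C:
+ // size_t bytes = (encoding == TWO_BYTE) ? count * 2 : count;
+ // while (bytes--) *dest++ = *src++; // byte-wise, encoding-agnostic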
+ if (encoding == String::TWO_BYTE_ENCODING) {
+ __ shl(count, 1);
+ }
+
+ Label loop;
+ __ bind(&loop);
+ __ mov_b(scratch, Operand(src, 0));
+ __ mov_b(Operand(dest, 0), scratch);
+ __ inc(src);
+ __ inc(dest);
+ __ dec(count);
+ __ j(not_zero, &loop);
+
+ __ bind(&done);
+}
+
+
+void StringHelper::GenerateHashInit(MacroAssembler* masm,
+ Register hash,
+ Register character,
+ Register scratch) {
+ // hash = (seed + character) + ((seed + character) << 10);
+ if (masm->serializer_enabled()) {
+ __ LoadRoot(scratch, Heap::kHashSeedRootIndex);
+ __ SmiUntag(scratch);
+ __ add(scratch, character);
+ __ mov(hash, scratch);
+ __ shl(scratch, 10);
+ __ add(hash, scratch);
+ } else {
+ int32_t seed = masm->isolate()->heap()->HashSeed();
+ __ lea(scratch, Operand(character, seed));
+ __ shl(scratch, 10);
+ __ lea(hash, Operand(scratch, character, times_1, seed));
+ }
+ // hash ^= hash >> 6;
+ __ mov(scratch, hash);
+ __ shr(scratch, 6);
+ __ xor_(hash, scratch);
+}
+
+
+void StringHelper::GenerateHashAddCharacter(MacroAssembler* masm,
+ Register hash,
+ Register character,
+ Register scratch) {
+ // hash += character;
+ __ add(hash, character);
+ // hash += hash << 10;
+ __ mov(scratch, hash);
+ __ shl(scratch, 10);
+ __ add(hash, scratch);
+ // hash ^= hash >> 6;
+ __ mov(scratch, hash);
+ __ shr(scratch, 6);
+ __ xor_(hash, scratch);
+}
+
+
+void StringHelper::GenerateHashGetHash(MacroAssembler* masm,
+ Register hash,
+ Register scratch) {
+ // hash += hash << 3;
+ __ mov(scratch, hash);
+ __ shl(scratch, 3);
+ __ add(hash, scratch);
+ // hash ^= hash >> 11;
+ __ mov(scratch, hash);
+ __ shr(scratch, 11);
+ __ xor_(hash, scratch);
+ // hash += hash << 15;
+ __ mov(scratch, hash);
+ __ shl(scratch, 15);
+ __ add(hash, scratch);
+
+ __ and_(hash, String::kHashBitMask);
+
+ // if (hash == 0) hash = 27;
+ Label hash_not_zero;
+ __ j(not_zero, &hash_not_zero, Label::kNear);
+ __ mov(hash, Immediate(StringHasher::kZeroHash));
+ __ bind(&hash_not_zero);
+}
+
+
+void SubStringStub::Generate(MacroAssembler* masm) {
+ Label runtime;
+
+ // Stack frame on entry.
+ // esp[0]: return address
+ // esp[4]: to
+ // esp[8]: from
+ // esp[12]: string
+
+ // Make sure first argument is a string.
+ __ mov(eax, Operand(esp, 3 * kPointerSize));
+ STATIC_ASSERT(kSmiTag == 0);
+ __ JumpIfSmi(eax, &runtime);
+ Condition is_string = masm->IsObjectStringType(eax, ebx, ebx);
+ __ j(NegateCondition(is_string), &runtime);
+
+ // eax: string
+ // ebx: instance type
+
+ // Calculate length of sub string using the smi values.
+ __ mov(ecx, Operand(esp, 1 * kPointerSize)); // To index.
+ __ JumpIfNotSmi(ecx, &runtime);
+ __ mov(edx, Operand(esp, 2 * kPointerSize)); // From index.
+ __ JumpIfNotSmi(edx, &runtime);
+ __ sub(ecx, edx);
+ __ cmp(ecx, FieldOperand(eax, String::kLengthOffset));
+ Label not_original_string;
+ // Shorter than original string's length: an actual substring.
+ __ j(below, &not_original_string, Label::kNear);
+ // Longer than original string's length or negative: unsafe arguments.
+ __ j(above, &runtime);
+ // Return original string.
+ Counters* counters = isolate()->counters();
+ __ IncrementCounter(counters->sub_string_native(), 1);
+ __ ret(3 * kPointerSize);
+ __ bind(&not_original_string);
+
+ Label single_char;
+ __ cmp(ecx, Immediate(Smi::FromInt(1)));
+ __ j(equal, &single_char);
+
+ // eax: string
+ // ebx: instance type
+ // ecx: sub string length (smi)
+ // edx: from index (smi)
+ // Deal with different string types: update the index if necessary
+ // and put the underlying string into edi.
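+ // Roughly:
+ // if (cons) { require second == empty; string = first; }
+ // else if (sliced) { from += offset; string = parent; }
+ // afterwards the string is sequential or external ("underlying unpacked")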
+ Label underlying_unpacked, sliced_string, seq_or_external_string;
+ // If the string is not indirect, it can only be sequential or external.
+ STATIC_ASSERT(kIsIndirectStringMask == (kSlicedStringTag & kConsStringTag));
+ STATIC_ASSERT(kIsIndirectStringMask != 0);
+ __ test(ebx, Immediate(kIsIndirectStringMask));
+ __ j(zero, &seq_or_external_string, Label::kNear);
+
+ Factory* factory = isolate()->factory();
+ __ test(ebx, Immediate(kSlicedNotConsMask));
+ __ j(not_zero, &sliced_string, Label::kNear);
+ // Cons string. Check whether it is flat, then fetch first part.
+ // Flat cons strings have an empty second part.
+ __ cmp(FieldOperand(eax, ConsString::kSecondOffset),
+ factory->empty_string());
+ __ j(not_equal, &runtime);
+ __ mov(edi, FieldOperand(eax, ConsString::kFirstOffset));
+ // Update instance type.
+ __ mov(ebx, FieldOperand(edi, HeapObject::kMapOffset));
+ __ movzx_b(ebx, FieldOperand(ebx, Map::kInstanceTypeOffset));
+ __ jmp(&underlying_unpacked, Label::kNear);
+
+ __ bind(&sliced_string);
+ // Sliced string. Fetch parent and adjust start index by offset.
+ __ add(edx, FieldOperand(eax, SlicedString::kOffsetOffset));
+ __ mov(edi, FieldOperand(eax, SlicedString::kParentOffset));
+ // Update instance type.
+ __ mov(ebx, FieldOperand(edi, HeapObject::kMapOffset));
+ __ movzx_b(ebx, FieldOperand(ebx, Map::kInstanceTypeOffset));
+ __ jmp(&underlying_unpacked, Label::kNear);
+
+ __ bind(&seq_or_external_string);
+ // Sequential or external string. Just move string to the expected register.
+ __ mov(edi, eax);
+
+ __ bind(&underlying_unpacked);
+
+ if (FLAG_string_slices) {
+ Label copy_routine;
+ // edi: underlying subject string
+ // ebx: instance type of underlying subject string
+ // edx: adjusted start index (smi)
+ // ecx: length (smi)
+ __ cmp(ecx, Immediate(Smi::FromInt(SlicedString::kMinLength)));
+ // Short slice. Copy instead of slicing.
+ __ j(less, &copy_routine);
+ // Allocate new sliced string. At this point we do not reload the instance
+ // type including the string encoding because we simply rely on the info
+ // provided by the original string. It does not matter if the original
+ // string's encoding is wrong because we always have to recheck encoding of
+ // the newly created string's parent anyway due to externalized strings.
+ Label two_byte_slice, set_slice_header;
+ STATIC_ASSERT((kStringEncodingMask & kOneByteStringTag) != 0);
+ STATIC_ASSERT((kStringEncodingMask & kTwoByteStringTag) == 0);
+ __ test(ebx, Immediate(kStringEncodingMask));
+ __ j(zero, &two_byte_slice, Label::kNear);
+ __ AllocateAsciiSlicedString(eax, ebx, no_reg, &runtime);
+ __ jmp(&set_slice_header, Label::kNear);
+ __ bind(&two_byte_slice);
+ __ AllocateTwoByteSlicedString(eax, ebx, no_reg, &runtime);
+ __ bind(&set_slice_header);
+ __ mov(FieldOperand(eax, SlicedString::kLengthOffset), ecx);
+ __ mov(FieldOperand(eax, SlicedString::kHashFieldOffset),
+ Immediate(String::kEmptyHashField));
+ __ mov(FieldOperand(eax, SlicedString::kParentOffset), edi);
+ __ mov(FieldOperand(eax, SlicedString::kOffsetOffset), edx);
+ __ IncrementCounter(counters->sub_string_native(), 1);
+ __ ret(3 * kPointerSize);
+
+ __ bind(&copy_routine);
+ }
+
+ // edi: underlying subject string
+ // ebx: instance type of underlying subject string
+ // edx: adjusted start index (smi)
+ // ecx: length (smi)
+ // The subject string can only be an external or sequential string of either
+ // encoding at this point.
+ Label two_byte_sequential, runtime_drop_two, sequential_string;
+ STATIC_ASSERT(kExternalStringTag != 0);
+ STATIC_ASSERT(kSeqStringTag == 0);
+ __ test_b(ebx, kExternalStringTag);
+ __ j(zero, &sequential_string);
+
+ // Handle external string.
+ // Rule out short external strings.
+ STATIC_ASSERT(kShortExternalStringTag != 0);
+ __ test_b(ebx, kShortExternalStringMask);
+ __ j(not_zero, &runtime);
+ __ mov(edi, FieldOperand(edi, ExternalString::kResourceDataOffset));
+ // Move the pointer so that offset-wise, it looks like a sequential string.
+ STATIC_ASSERT(SeqTwoByteString::kHeaderSize == SeqOneByteString::kHeaderSize);
+ __ sub(edi, Immediate(SeqTwoByteString::kHeaderSize - kHeapObjectTag));
+
+ __ bind(&sequential_string);
+ // Stash away (adjusted) index and (underlying) string.
+ __ push(edx);
+ __ push(edi);
+ __ SmiUntag(ecx);
+ STATIC_ASSERT((kOneByteStringTag & kStringEncodingMask) != 0);
+ __ test_b(ebx, kStringEncodingMask);
+ __ j(zero, &two_byte_sequential);
+
+ // Sequential ASCII string. Allocate the result.
+ __ AllocateAsciiString(eax, ecx, ebx, edx, edi, &runtime_drop_two);
+
+ // eax: result string
+ // ecx: result string length
+ // Locate first character of result.
+ __ mov(edi, eax);
+ __ add(edi, Immediate(SeqOneByteString::kHeaderSize - kHeapObjectTag));
+ // Load string argument and locate character of sub string start.
+ __ pop(edx);
+ __ pop(ebx);
+ __ SmiUntag(ebx);
+ __ lea(edx, FieldOperand(edx, ebx, times_1, SeqOneByteString::kHeaderSize));
+
+ // eax: result string
+ // ecx: result length
+ // edi: first character of result
+ // edx: character of sub string start
+ StringHelper::GenerateCopyCharacters(
+ masm, edi, edx, ecx, ebx, String::ONE_BYTE_ENCODING);
+ __ IncrementCounter(counters->sub_string_native(), 1);
+ __ ret(3 * kPointerSize);
+
+ __ bind(&two_byte_sequential);
+ // Sequential two-byte string. Allocate the result.
+ __ AllocateTwoByteString(eax, ecx, ebx, edx, edi, &runtime_drop_two);
+
+ // eax: result string
+ // ecx: result string length
+ // Locate first character of result.
+ __ mov(edi, eax);
+ __ add(edi,
+ Immediate(SeqTwoByteString::kHeaderSize - kHeapObjectTag));
+ // Load string argument and locate character of sub string start.
+ __ pop(edx);
+ __ pop(ebx);
+ // As the from index is a smi, its tagged value is 2 times the index, which
+ // matches the byte size of a two-byte character.
+ STATIC_ASSERT(kSmiTag == 0);
+ STATIC_ASSERT(kSmiTagSize + kSmiShiftSize == 1);
+ __ lea(edx, FieldOperand(edx, ebx, times_1, SeqTwoByteString::kHeaderSize));
+
+ // eax: result string
+ // ecx: result length
+ // edi: first character of result
+ // edx: character of sub string start
+ StringHelper::GenerateCopyCharacters(
+ masm, edi, edx, ecx, ebx, String::TWO_BYTE_ENCODING);
+ __ IncrementCounter(counters->sub_string_native(), 1);
+ __ ret(3 * kPointerSize);
+
+ // Drop pushed values on the stack before tail call.
+ __ bind(&runtime_drop_two);
+ __ Drop(2);
+
+ // Just jump to runtime to create the sub string.
+ __ bind(&runtime); + __ TailCallRuntime(Runtime::kSubString, 3, 1); + + __ bind(&single_char); + // eax: string + // ebx: instance type + // ecx: sub string length (smi) + // edx: from index (smi) + StringCharAtGenerator generator( + eax, edx, ecx, eax, &runtime, &runtime, &runtime, STRING_INDEX_IS_NUMBER); + generator.GenerateFast(masm); + __ ret(3 * kPointerSize); + generator.SkipSlow(masm, &runtime); +} + + +void StringCompareStub::GenerateFlatAsciiStringEquals(MacroAssembler* masm, + Register left, + Register right, + Register scratch1, + Register scratch2) { + Register length = scratch1; + + // Compare lengths. + Label strings_not_equal, check_zero_length; + __ mov(length, FieldOperand(left, String::kLengthOffset)); + __ cmp(length, FieldOperand(right, String::kLengthOffset)); + __ j(equal, &check_zero_length, Label::kNear); + __ bind(&strings_not_equal); + __ Move(eax, Immediate(Smi::FromInt(NOT_EQUAL))); + __ ret(0); + + // Check if the length is zero. + Label compare_chars; + __ bind(&check_zero_length); + STATIC_ASSERT(kSmiTag == 0); + __ test(length, length); + __ j(not_zero, &compare_chars, Label::kNear); + __ Move(eax, Immediate(Smi::FromInt(EQUAL))); + __ ret(0); + + // Compare characters. + __ bind(&compare_chars); + GenerateAsciiCharsCompareLoop(masm, left, right, length, scratch2, + &strings_not_equal, Label::kNear); + + // Characters are equal. + __ Move(eax, Immediate(Smi::FromInt(EQUAL))); + __ ret(0); +} + + +void StringCompareStub::GenerateCompareFlatAsciiStrings(MacroAssembler* masm, + Register left, + Register right, + Register scratch1, + Register scratch2, + Register scratch3) { + Counters* counters = masm->isolate()->counters(); + __ IncrementCounter(counters->string_compare_native(), 1); + + // Find minimum length. + Label left_shorter; + __ mov(scratch1, FieldOperand(left, String::kLengthOffset)); + __ mov(scratch3, scratch1); + __ sub(scratch3, FieldOperand(right, String::kLengthOffset)); + + Register length_delta = scratch3; + + __ j(less_equal, &left_shorter, Label::kNear); + // Right string is shorter. Change scratch1 to be length of right string. + __ sub(scratch1, length_delta); + __ bind(&left_shorter); + + Register min_length = scratch1; + + // If either length is zero, just compare lengths. + Label compare_lengths; + __ test(min_length, min_length); + __ j(zero, &compare_lengths, Label::kNear); + + // Compare characters. + Label result_not_equal; + GenerateAsciiCharsCompareLoop(masm, left, right, min_length, scratch2, + &result_not_equal, Label::kNear); + + // Compare lengths - strings up to min-length are equal. + __ bind(&compare_lengths); + __ test(length_delta, length_delta); + Label length_not_equal; + __ j(not_zero, &length_not_equal, Label::kNear); + + // Result is EQUAL. + STATIC_ASSERT(EQUAL == 0); + STATIC_ASSERT(kSmiTag == 0); + __ Move(eax, Immediate(Smi::FromInt(EQUAL))); + __ ret(0); + + Label result_greater; + Label result_less; + __ bind(&length_not_equal); + __ j(greater, &result_greater, Label::kNear); + __ jmp(&result_less, Label::kNear); + __ bind(&result_not_equal); + __ j(above, &result_greater, Label::kNear); + __ bind(&result_less); + + // Result is LESS. + __ Move(eax, Immediate(Smi::FromInt(LESS))); + __ ret(0); + + // Result is GREATER. 
+ __ bind(&result_greater);
+ __ Move(eax, Immediate(Smi::FromInt(GREATER)));
+ __ ret(0);
+}
+
+
+void StringCompareStub::GenerateAsciiCharsCompareLoop(
+ MacroAssembler* masm,
+ Register left,
+ Register right,
+ Register length,
+ Register scratch,
+ Label* chars_not_equal,
+ Label::Distance chars_not_equal_near) {
+ // Change index to run from -length to -1 by adding length to string
+ // start. This means that loop ends when index reaches zero, which
+ // doesn't need an additional compare.
+ __ SmiUntag(length);
+ __ lea(left,
+ FieldOperand(left, length, times_1, SeqOneByteString::kHeaderSize));
+ __ lea(right,
+ FieldOperand(right, length, times_1, SeqOneByteString::kHeaderSize));
+ __ neg(length);
+ Register index = length; // index = -length;
+
+ // Compare loop.
+ Label loop;
+ __ bind(&loop);
+ __ mov_b(scratch, Operand(left, index, times_1, 0));
+ __ cmpb(scratch, Operand(right, index, times_1, 0));
+ __ j(not_equal, chars_not_equal, chars_not_equal_near);
+ __ inc(index);
+ __ j(not_zero, &loop);
+}
+
+
+void StringCompareStub::Generate(MacroAssembler* masm) {
+ Label runtime;
+
+ // Stack frame on entry.
+ // esp[0]: return address
+ // esp[4]: right string
+ // esp[8]: left string
+
+ __ mov(edx, Operand(esp, 2 * kPointerSize)); // left
+ __ mov(eax, Operand(esp, 1 * kPointerSize)); // right
+
+ Label not_same;
+ __ cmp(edx, eax);
+ __ j(not_equal, &not_same, Label::kNear);
+ STATIC_ASSERT(EQUAL == 0);
+ STATIC_ASSERT(kSmiTag == 0);
+ __ Move(eax, Immediate(Smi::FromInt(EQUAL)));
+ __ IncrementCounter(isolate()->counters()->string_compare_native(), 1);
+ __ ret(2 * kPointerSize);
+
+ __ bind(&not_same);
+
+ // Check that both objects are sequential ASCII strings.
+ __ JumpIfNotBothSequentialAsciiStrings(edx, eax, ecx, ebx, &runtime);
+
+ // Compare flat ASCII strings.
+ // Drop arguments from the stack.
+ __ pop(ecx);
+ __ add(esp, Immediate(2 * kPointerSize));
+ __ push(ecx);
+ GenerateCompareFlatAsciiStrings(masm, edx, eax, ecx, ebx, edi);
+
+ // Call the runtime; it returns -1 (less), 0 (equal), or 1 (greater)
+ // tagged as a small integer.
+ __ bind(&runtime);
+ __ TailCallRuntime(Runtime::kStringCompare, 2, 1);
+}
+
+
+void BinaryOpICWithAllocationSiteStub::Generate(MacroAssembler* masm) {
+ // ----------- S t a t e -------------
+ // -- edx : left
+ // -- eax : right
+ // -- esp[0] : return address
+ // -----------------------------------
+
+ // Load ecx with the allocation site. We stick an undefined dummy value here
+ // and replace it with the real allocation site later when we instantiate this
+ // stub in BinaryOpICWithAllocationSiteStub::GetCodeCopyFromTemplate().
+ __ mov(ecx, handle(isolate()->heap()->undefined_value()));
+
+ // Make sure that we actually patched the allocation site.
+ if (FLAG_debug_code) {
+ __ test(ecx, Immediate(kSmiTagMask));
+ __ Assert(not_equal, kExpectedAllocationSite);
+ __ cmp(FieldOperand(ecx, HeapObject::kMapOffset),
+ isolate()->factory()->allocation_site_map());
+ __ Assert(equal, kExpectedAllocationSite);
+ }
+
+ // Tail call into the stub that handles binary operations with allocation
+ // sites.
+ BinaryOpWithAllocationSiteStub stub(isolate(), state_);
+ __ TailCallStub(&stub);
+}
+
+
+void ICCompareStub::GenerateSmis(MacroAssembler* masm) {
+ DCHECK(state_ == CompareIC::SMI);
+ Label miss;
+ __ mov(ecx, edx);
+ __ or_(ecx, eax);
+ __ JumpIfNotSmi(ecx, &miss, Label::kNear);
+
+ if (GetCondition() == equal) {
+ // For equality we do not care about the sign of the result.
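+ // (Overflow can flip the sign but never turns unequal smis into zero,
+ // so the raw difference below is still a valid equal/not-equal answer.)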
+ __ sub(eax, edx); + } else { + Label done; + __ sub(edx, eax); + __ j(no_overflow, &done, Label::kNear); + // Correct sign of result in case of overflow. + __ not_(edx); + __ bind(&done); + __ mov(eax, edx); + } + __ ret(0); + + __ bind(&miss); + GenerateMiss(masm); +} + + +void ICCompareStub::GenerateNumbers(MacroAssembler* masm) { + DCHECK(state_ == CompareIC::NUMBER); + + Label generic_stub; + Label unordered, maybe_undefined1, maybe_undefined2; + Label miss; + + if (left_ == CompareIC::SMI) { + __ JumpIfNotSmi(edx, &miss); + } + if (right_ == CompareIC::SMI) { + __ JumpIfNotSmi(eax, &miss); + } + + // Inlining the double comparison and falling back to the general compare + // stub if NaN is involved or SSE2 or CMOV is unsupported. + __ mov(ecx, edx); + __ and_(ecx, eax); + __ JumpIfSmi(ecx, &generic_stub, Label::kNear); + + __ cmp(FieldOperand(eax, HeapObject::kMapOffset), + isolate()->factory()->heap_number_map()); + __ j(not_equal, &maybe_undefined1, Label::kNear); + __ cmp(FieldOperand(edx, HeapObject::kMapOffset), + isolate()->factory()->heap_number_map()); + __ j(not_equal, &maybe_undefined2, Label::kNear); + + __ bind(&unordered); + __ bind(&generic_stub); + ICCompareStub stub(isolate(), op_, CompareIC::GENERIC, CompareIC::GENERIC, + CompareIC::GENERIC); + __ jmp(stub.GetCode(), RelocInfo::CODE_TARGET); + + __ bind(&maybe_undefined1); + if (Token::IsOrderedRelationalCompareOp(op_)) { + __ cmp(eax, Immediate(isolate()->factory()->undefined_value())); + __ j(not_equal, &miss); + __ JumpIfSmi(edx, &unordered); + __ CmpObjectType(edx, HEAP_NUMBER_TYPE, ecx); + __ j(not_equal, &maybe_undefined2, Label::kNear); + __ jmp(&unordered); + } + + __ bind(&maybe_undefined2); + if (Token::IsOrderedRelationalCompareOp(op_)) { + __ cmp(edx, Immediate(isolate()->factory()->undefined_value())); + __ j(equal, &unordered); + } + + __ bind(&miss); + GenerateMiss(masm); +} + + +void ICCompareStub::GenerateInternalizedStrings(MacroAssembler* masm) { + DCHECK(state_ == CompareIC::INTERNALIZED_STRING); + DCHECK(GetCondition() == equal); + + // Registers containing left and right operands respectively. + Register left = edx; + Register right = eax; + Register tmp1 = ecx; + Register tmp2 = ebx; + + // Check that both operands are heap objects. + Label miss; + __ mov(tmp1, left); + STATIC_ASSERT(kSmiTag == 0); + __ and_(tmp1, right); + __ JumpIfSmi(tmp1, &miss, Label::kNear); + + // Check that both operands are internalized strings. + __ mov(tmp1, FieldOperand(left, HeapObject::kMapOffset)); + __ mov(tmp2, FieldOperand(right, HeapObject::kMapOffset)); + __ movzx_b(tmp1, FieldOperand(tmp1, Map::kInstanceTypeOffset)); + __ movzx_b(tmp2, FieldOperand(tmp2, Map::kInstanceTypeOffset)); + STATIC_ASSERT(kInternalizedTag == 0 && kStringTag == 0); + __ or_(tmp1, tmp2); + __ test(tmp1, Immediate(kIsNotStringMask | kIsNotInternalizedMask)); + __ j(not_zero, &miss, Label::kNear); + + // Internalized strings are compared by identity. + Label done; + __ cmp(left, right); + // Make sure eax is non-zero. At this point input operands are + // guaranteed to be non-zero. 
+  DCHECK(right.is(eax));
+  __ j(not_equal, &done, Label::kNear);
+  STATIC_ASSERT(EQUAL == 0);
+  STATIC_ASSERT(kSmiTag == 0);
+  __ Move(eax, Immediate(Smi::FromInt(EQUAL)));
+  __ bind(&done);
+  __ ret(0);
+
+  __ bind(&miss);
+  GenerateMiss(masm);
+}
+
+
+void ICCompareStub::GenerateUniqueNames(MacroAssembler* masm) {
+  DCHECK(state_ == CompareIC::UNIQUE_NAME);
+  DCHECK(GetCondition() == equal);
+
+  // Registers containing left and right operands respectively.
+  Register left = edx;
+  Register right = eax;
+  Register tmp1 = ecx;
+  Register tmp2 = ebx;
+
+  // Check that both operands are heap objects.
+  Label miss;
+  __ mov(tmp1, left);
+  STATIC_ASSERT(kSmiTag == 0);
+  __ and_(tmp1, right);
+  __ JumpIfSmi(tmp1, &miss, Label::kNear);
+
+  // Check that both operands are unique names. This leaves the instance
+  // types loaded in tmp1 and tmp2.
+  __ mov(tmp1, FieldOperand(left, HeapObject::kMapOffset));
+  __ mov(tmp2, FieldOperand(right, HeapObject::kMapOffset));
+  __ movzx_b(tmp1, FieldOperand(tmp1, Map::kInstanceTypeOffset));
+  __ movzx_b(tmp2, FieldOperand(tmp2, Map::kInstanceTypeOffset));
+
+  __ JumpIfNotUniqueName(tmp1, &miss, Label::kNear);
+  __ JumpIfNotUniqueName(tmp2, &miss, Label::kNear);
+
+  // Unique names are compared by identity.
+  Label done;
+  __ cmp(left, right);
+  // Make sure eax is non-zero. At this point input operands are
+  // guaranteed to be non-zero.
+  DCHECK(right.is(eax));
+  __ j(not_equal, &done, Label::kNear);
+  STATIC_ASSERT(EQUAL == 0);
+  STATIC_ASSERT(kSmiTag == 0);
+  __ Move(eax, Immediate(Smi::FromInt(EQUAL)));
+  __ bind(&done);
+  __ ret(0);
+
+  __ bind(&miss);
+  GenerateMiss(masm);
+}
+
+
+void ICCompareStub::GenerateStrings(MacroAssembler* masm) {
+  DCHECK(state_ == CompareIC::STRING);
+  Label miss;
+
+  bool equality = Token::IsEqualityOp(op_);
+
+  // Registers containing left and right operands respectively.
+  Register left = edx;
+  Register right = eax;
+  Register tmp1 = ecx;
+  Register tmp2 = ebx;
+  Register tmp3 = edi;
+
+  // Check that both operands are heap objects.
+  __ mov(tmp1, left);
+  STATIC_ASSERT(kSmiTag == 0);
+  __ and_(tmp1, right);
+  __ JumpIfSmi(tmp1, &miss);
+
+  // Check that both operands are strings. This leaves the instance
+  // types loaded in tmp1 and tmp2.
+  __ mov(tmp1, FieldOperand(left, HeapObject::kMapOffset));
+  __ mov(tmp2, FieldOperand(right, HeapObject::kMapOffset));
+  __ movzx_b(tmp1, FieldOperand(tmp1, Map::kInstanceTypeOffset));
+  __ movzx_b(tmp2, FieldOperand(tmp2, Map::kInstanceTypeOffset));
+  __ mov(tmp3, tmp1);
+  STATIC_ASSERT(kNotStringTag != 0);
+  __ or_(tmp3, tmp2);
+  __ test(tmp3, Immediate(kIsNotStringMask));
+  __ j(not_zero, &miss);
+
+  // Fast check for identical strings.
+  Label not_same;
+  __ cmp(left, right);
+  __ j(not_equal, &not_same, Label::kNear);
+  STATIC_ASSERT(EQUAL == 0);
+  STATIC_ASSERT(kSmiTag == 0);
+  __ Move(eax, Immediate(Smi::FromInt(EQUAL)));
+  __ ret(0);
+
+  // Handle not identical strings.
+  __ bind(&not_same);
+
+  // Check that both strings are internalized. If they are, we're done
+  // because we already know they are not identical. But in the case of
+  // non-equality compare, we still need to determine the order. We
+  // also know they are both strings.
+  if (equality) {
+    Label do_compare;
+    STATIC_ASSERT(kInternalizedTag == 0);
+    __ or_(tmp1, tmp2);
+    __ test(tmp1, Immediate(kIsNotInternalizedMask));
+    __ j(not_zero, &do_compare, Label::kNear);
+    // Make sure eax is non-zero. At this point input operands are
+    // guaranteed to be non-zero.
+ DCHECK(right.is(eax)); + __ ret(0); + __ bind(&do_compare); + } + + // Check that both strings are sequential ASCII. + Label runtime; + __ JumpIfNotBothSequentialAsciiStrings(left, right, tmp1, tmp2, &runtime); + + // Compare flat ASCII strings. Returns when done. + if (equality) { + StringCompareStub::GenerateFlatAsciiStringEquals( + masm, left, right, tmp1, tmp2); + } else { + StringCompareStub::GenerateCompareFlatAsciiStrings( + masm, left, right, tmp1, tmp2, tmp3); + } + + // Handle more complex cases in runtime. + __ bind(&runtime); + __ pop(tmp1); // Return address. + __ push(left); + __ push(right); + __ push(tmp1); + if (equality) { + __ TailCallRuntime(Runtime::kStringEquals, 2, 1); + } else { + __ TailCallRuntime(Runtime::kStringCompare, 2, 1); + } + + __ bind(&miss); + GenerateMiss(masm); +} + + +void ICCompareStub::GenerateObjects(MacroAssembler* masm) { + DCHECK(state_ == CompareIC::OBJECT); + Label miss; + __ mov(ecx, edx); + __ and_(ecx, eax); + __ JumpIfSmi(ecx, &miss, Label::kNear); + + __ CmpObjectType(eax, JS_OBJECT_TYPE, ecx); + __ j(not_equal, &miss, Label::kNear); + __ CmpObjectType(edx, JS_OBJECT_TYPE, ecx); + __ j(not_equal, &miss, Label::kNear); + + DCHECK(GetCondition() == equal); + __ sub(eax, edx); + __ ret(0); + + __ bind(&miss); + GenerateMiss(masm); +} + + +void ICCompareStub::GenerateKnownObjects(MacroAssembler* masm) { + Label miss; + __ mov(ecx, edx); + __ and_(ecx, eax); + __ JumpIfSmi(ecx, &miss, Label::kNear); + + __ mov(ecx, FieldOperand(eax, HeapObject::kMapOffset)); + __ mov(ebx, FieldOperand(edx, HeapObject::kMapOffset)); + __ cmp(ecx, known_map_); + __ j(not_equal, &miss, Label::kNear); + __ cmp(ebx, known_map_); + __ j(not_equal, &miss, Label::kNear); + + __ sub(eax, edx); + __ ret(0); + + __ bind(&miss); + GenerateMiss(masm); +} + + +void ICCompareStub::GenerateMiss(MacroAssembler* masm) { + { + // Call the runtime system in a fresh internal frame. + ExternalReference miss = ExternalReference(IC_Utility(IC::kCompareIC_Miss), + isolate()); + FrameScope scope(masm, StackFrame::INTERNAL); + __ push(edx); // Preserve edx and eax. + __ push(eax); + __ push(edx); // And also use them as the arguments. + __ push(eax); + __ push(Immediate(Smi::FromInt(op_))); + __ CallExternalReference(miss, 3); + // Compute the entry point of the rewritten stub. + __ lea(edi, FieldOperand(eax, Code::kHeaderSize)); + __ pop(eax); + __ pop(edx); + } + + // Do a tail call to the rewritten stub. + __ jmp(edi); +} + + +// Helper function used to check that the dictionary doesn't contain +// the property. This function may return false negatives, so miss_label +// must always call a backup property check that is complete. +// This function is safe to call if the receiver has fast properties. +// Name must be a unique name and receiver must be a heap object. +void NameDictionaryLookupStub::GenerateNegativeLookup(MacroAssembler* masm, + Label* miss, + Label* done, + Register properties, + Handle<Name> name, + Register r0) { + DCHECK(name->IsUniqueName()); + + // If names of slots in range from 1 to kProbes - 1 for the hash value are + // not equal to the name and kProbes-th slot is not used (its name is the + // undefined value), it guarantees the hash table doesn't contain the + // property. It's true even if some slots represent deleted properties + // (their names are the hole value). + for (int i = 0; i < kInlinedProbes; i++) { + // Compute the masked index: (hash + i + i * i) & mask. + Register index = r0; + // Capacity is smi 2^n. 
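+    // The next three instructions compute, still smi-tagged,
+    //   index = (name->Hash() + GetProbeOffset(i)) & (capacity - 1);
+    // the power-of-two capacity makes the mask a simple decrement.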
+ __ mov(index, FieldOperand(properties, kCapacityOffset)); + __ dec(index); + __ and_(index, + Immediate(Smi::FromInt(name->Hash() + + NameDictionary::GetProbeOffset(i)))); + + // Scale the index by multiplying by the entry size. + DCHECK(NameDictionary::kEntrySize == 3); + __ lea(index, Operand(index, index, times_2, 0)); // index *= 3. + Register entity_name = r0; + // Having undefined at this place means the name is not contained. + DCHECK_EQ(kSmiTagSize, 1); + __ mov(entity_name, Operand(properties, index, times_half_pointer_size, + kElementsStartOffset - kHeapObjectTag)); + __ cmp(entity_name, masm->isolate()->factory()->undefined_value()); + __ j(equal, done); + + // Stop if found the property. + __ cmp(entity_name, Handle<Name>(name)); + __ j(equal, miss); + + Label good; + // Check for the hole and skip. + __ cmp(entity_name, masm->isolate()->factory()->the_hole_value()); + __ j(equal, &good, Label::kNear); + + // Check if the entry name is not a unique name. + __ mov(entity_name, FieldOperand(entity_name, HeapObject::kMapOffset)); + __ JumpIfNotUniqueName(FieldOperand(entity_name, Map::kInstanceTypeOffset), + miss); + __ bind(&good); + } + + NameDictionaryLookupStub stub(masm->isolate(), properties, r0, r0, + NEGATIVE_LOOKUP); + __ push(Immediate(Handle<Object>(name))); + __ push(Immediate(name->Hash())); + __ CallStub(&stub); + __ test(r0, r0); + __ j(not_zero, miss); + __ jmp(done); +} + + +// Probe the name dictionary in the |elements| register. Jump to the +// |done| label if a property with the given name is found leaving the +// index into the dictionary in |r0|. Jump to the |miss| label +// otherwise. +void NameDictionaryLookupStub::GeneratePositiveLookup(MacroAssembler* masm, + Label* miss, + Label* done, + Register elements, + Register name, + Register r0, + Register r1) { + DCHECK(!elements.is(r0)); + DCHECK(!elements.is(r1)); + DCHECK(!name.is(r0)); + DCHECK(!name.is(r1)); + + __ AssertName(name); + + __ mov(r1, FieldOperand(elements, kCapacityOffset)); + __ shr(r1, kSmiTagSize); // convert smi to int + __ dec(r1); + + // Generate an unrolled loop that performs a few probes before + // giving up. Measurements done on Gmail indicate that 2 probes + // cover ~93% of loads from dictionaries. + for (int i = 0; i < kInlinedProbes; i++) { + // Compute the masked index: (hash + i + i * i) & mask. + __ mov(r0, FieldOperand(name, Name::kHashFieldOffset)); + __ shr(r0, Name::kHashShift); + if (i > 0) { + __ add(r0, Immediate(NameDictionary::GetProbeOffset(i))); + } + __ and_(r0, r1); + + // Scale the index by multiplying by the entry size. + DCHECK(NameDictionary::kEntrySize == 3); + __ lea(r0, Operand(r0, r0, times_2, 0)); // r0 = r0 * 3 + + // Check if the key is identical to the name. + __ cmp(name, Operand(elements, + r0, + times_4, + kElementsStartOffset - kHeapObjectTag)); + __ j(equal, done); + } + + NameDictionaryLookupStub stub(masm->isolate(), elements, r1, r0, + POSITIVE_LOOKUP); + __ push(name); + __ mov(r0, FieldOperand(name, Name::kHashFieldOffset)); + __ shr(r0, Name::kHashShift); + __ push(r0); + __ CallStub(&stub); + + __ test(r1, r1); + __ j(zero, miss); + __ jmp(done); +} + + +void NameDictionaryLookupStub::Generate(MacroAssembler* masm) { + // This stub overrides SometimesSetsUpAFrame() to return false. That means + // we cannot call anything that could cause a GC from this stub. + // Stack frame on entry: + // esp[0 * kPointerSize]: return address. + // esp[1 * kPointerSize]: key's hash. + // esp[2 * kPointerSize]: key. 
+  // Registers:
+  //  dictionary_: NameDictionary to probe.
+  //  result_: used as scratch.
+  //  index_: will hold an index of entry if lookup is successful.
+  //          might alias with result_.
+  // Returns:
+  //  result_ is zero if lookup failed, non-zero otherwise.
+
+  Label in_dictionary, maybe_in_dictionary, not_in_dictionary;
+
+  Register scratch = result_;
+
+  __ mov(scratch, FieldOperand(dictionary_, kCapacityOffset));
+  __ dec(scratch);
+  __ SmiUntag(scratch);
+  __ push(scratch);
+
+  // If names of slots in range from 1 to kProbes - 1 for the hash value are
+  // not equal to the name and kProbes-th slot is not used (its name is the
+  // undefined value), it guarantees the hash table doesn't contain the
+  // property. It's true even if some slots represent deleted properties
+  // (their names are the hole value).
+  for (int i = kInlinedProbes; i < kTotalProbes; i++) {
+    // Compute the masked index: (hash + i + i * i) & mask.
+    __ mov(scratch, Operand(esp, 2 * kPointerSize));
+    if (i > 0) {
+      __ add(scratch, Immediate(NameDictionary::GetProbeOffset(i)));
+    }
+    __ and_(scratch, Operand(esp, 0));
+
+    // Scale the index by multiplying by the entry size.
+    DCHECK(NameDictionary::kEntrySize == 3);
+    __ lea(index_, Operand(scratch, scratch, times_2, 0));  // index *= 3.
+
+    // Having undefined at this place means the name is not contained.
+    DCHECK_EQ(kSmiTagSize, 1);
+    __ mov(scratch, Operand(dictionary_,
+                            index_,
+                            times_pointer_size,
+                            kElementsStartOffset - kHeapObjectTag));
+    __ cmp(scratch, isolate()->factory()->undefined_value());
+    __ j(equal, &not_in_dictionary);
+
+    // Stop if found the property.
+    __ cmp(scratch, Operand(esp, 3 * kPointerSize));
+    __ j(equal, &in_dictionary);
+
+    if (i != kTotalProbes - 1 && mode_ == NEGATIVE_LOOKUP) {
+      // If we hit a key that is not a unique name during negative
+      // lookup we have to bail out as this key might be equal to the
+      // key we are looking for.
+
+      // Check if the entry name is not a unique name.
+      __ mov(scratch, FieldOperand(scratch, HeapObject::kMapOffset));
+      __ JumpIfNotUniqueName(FieldOperand(scratch, Map::kInstanceTypeOffset),
+                             &maybe_in_dictionary);
+    }
+  }
+
+  __ bind(&maybe_in_dictionary);
+  // If we are doing negative lookup then probing failure should be
+  // treated as a lookup success. For positive lookup probing failure
+  // should be treated as lookup failure.
+  if (mode_ == POSITIVE_LOOKUP) {
+    __ mov(result_, Immediate(0));
+    __ Drop(1);
+    __ ret(2 * kPointerSize);
+  }
+
+  __ bind(&in_dictionary);
+  __ mov(result_, Immediate(1));
+  __ Drop(1);
+  __ ret(2 * kPointerSize);
+
+  __ bind(&not_in_dictionary);
+  __ mov(result_, Immediate(0));
+  __ Drop(1);
+  __ ret(2 * kPointerSize);
+}
+
+
+void StoreBufferOverflowStub::GenerateFixedRegStubsAheadOfTime(
+    Isolate* isolate) {
+  StoreBufferOverflowStub stub(isolate);
+  stub.GetCode();
+}
+
+
+// Takes the input in 3 registers: address_, value_ and object_. A pointer to
+// the value has just been written into the object; now this stub makes sure
+// we keep the GC informed. The word in the object where the value has been
+// written is in the address register.
+void RecordWriteStub::Generate(MacroAssembler* masm) {
+  Label skip_to_incremental_noncompacting;
+  Label skip_to_incremental_compacting;
+
+  // The first two instructions are generated with labels so as to get the
+  // offset fixed up correctly by the bind(Label*) call. We patch it back and
+  // forth between compare instructions (nops in this position) and the
+  // real branches when we start and stop incremental heap marking.
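+  // Layout the patching relies on (see GetMode()/Patch() in the header):
+  // offset 0 holds either the two-byte nop (cmpb al, #imm8) or a short jmp,
+  // and offset 2 holds either the five-byte nop (cmpl eax, #imm32) or a
+  // near jmp.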
+ __ jmp(&skip_to_incremental_noncompacting, Label::kNear); + __ jmp(&skip_to_incremental_compacting, Label::kFar); + + if (remembered_set_action_ == EMIT_REMEMBERED_SET) { + __ RememberedSetHelper(object_, + address_, + value_, + MacroAssembler::kReturnAtEnd); + } else { + __ ret(0); + } + + __ bind(&skip_to_incremental_noncompacting); + GenerateIncremental(masm, INCREMENTAL); + + __ bind(&skip_to_incremental_compacting); + GenerateIncremental(masm, INCREMENTAL_COMPACTION); + + // Initial mode of the stub is expected to be STORE_BUFFER_ONLY. + // Will be checked in IncrementalMarking::ActivateGeneratedStub. + masm->set_byte_at(0, kTwoByteNopInstruction); + masm->set_byte_at(2, kFiveByteNopInstruction); +} + + +void RecordWriteStub::GenerateIncremental(MacroAssembler* masm, Mode mode) { + regs_.Save(masm); + + if (remembered_set_action_ == EMIT_REMEMBERED_SET) { + Label dont_need_remembered_set; + + __ mov(regs_.scratch0(), Operand(regs_.address(), 0)); + __ JumpIfNotInNewSpace(regs_.scratch0(), // Value. + regs_.scratch0(), + &dont_need_remembered_set); + + __ CheckPageFlag(regs_.object(), + regs_.scratch0(), + 1 << MemoryChunk::SCAN_ON_SCAVENGE, + not_zero, + &dont_need_remembered_set); + + // First notify the incremental marker if necessary, then update the + // remembered set. + CheckNeedsToInformIncrementalMarker( + masm, + kUpdateRememberedSetOnNoNeedToInformIncrementalMarker, + mode); + InformIncrementalMarker(masm); + regs_.Restore(masm); + __ RememberedSetHelper(object_, + address_, + value_, + MacroAssembler::kReturnAtEnd); + + __ bind(&dont_need_remembered_set); + } + + CheckNeedsToInformIncrementalMarker( + masm, + kReturnOnNoNeedToInformIncrementalMarker, + mode); + InformIncrementalMarker(masm); + regs_.Restore(masm); + __ ret(0); +} + + +void RecordWriteStub::InformIncrementalMarker(MacroAssembler* masm) { + regs_.SaveCallerSaveRegisters(masm); + int argument_count = 3; + __ PrepareCallCFunction(argument_count, regs_.scratch0()); + __ mov(Operand(esp, 0 * kPointerSize), regs_.object()); + __ mov(Operand(esp, 1 * kPointerSize), regs_.address()); // Slot. + __ mov(Operand(esp, 2 * kPointerSize), + Immediate(ExternalReference::isolate_address(isolate()))); + + AllowExternalCallThatCantCauseGC scope(masm); + __ CallCFunction( + ExternalReference::incremental_marking_record_write_function(isolate()), + argument_count); + + regs_.RestoreCallerSaveRegisters(masm); +} + + +void RecordWriteStub::CheckNeedsToInformIncrementalMarker( + MacroAssembler* masm, + OnNoNeedToInformIncrementalMarker on_no_need, + Mode mode) { + Label object_is_black, need_incremental, need_incremental_pop_object; + + __ mov(regs_.scratch0(), Immediate(~Page::kPageAlignmentMask)); + __ and_(regs_.scratch0(), regs_.object()); + __ mov(regs_.scratch1(), + Operand(regs_.scratch0(), + MemoryChunk::kWriteBarrierCounterOffset)); + __ sub(regs_.scratch1(), Immediate(1)); + __ mov(Operand(regs_.scratch0(), + MemoryChunk::kWriteBarrierCounterOffset), + regs_.scratch1()); + __ j(negative, &need_incremental); + + // Let's look at the color of the object: If it is not black we don't have + // to inform the incremental marker. + __ JumpIfBlack(regs_.object(), + regs_.scratch0(), + regs_.scratch1(), + &object_is_black, + Label::kNear); + + regs_.Restore(masm); + if (on_no_need == kUpdateRememberedSetOnNoNeedToInformIncrementalMarker) { + __ RememberedSetHelper(object_, + address_, + value_, + MacroAssembler::kReturnAtEnd); + } else { + __ ret(0); + } + + __ bind(&object_is_black); + + // Get the value from the slot. 
+  __ mov(regs_.scratch0(), Operand(regs_.address(), 0));
+
+  if (mode == INCREMENTAL_COMPACTION) {
+    Label ensure_not_white;
+
+    __ CheckPageFlag(regs_.scratch0(),  // Contains value.
+                     regs_.scratch1(),  // Scratch.
+                     MemoryChunk::kEvacuationCandidateMask,
+                     zero,
+                     &ensure_not_white,
+                     Label::kNear);
+
+    __ CheckPageFlag(regs_.object(),
+                     regs_.scratch1(),  // Scratch.
+                     MemoryChunk::kSkipEvacuationSlotsRecordingMask,
+                     not_zero,
+                     &ensure_not_white,
+                     Label::kNear);
+
+    __ jmp(&need_incremental);
+
+    __ bind(&ensure_not_white);
+  }
+
+  // We need an extra register for this, so we push the object register
+  // temporarily.
+  __ push(regs_.object());
+  __ EnsureNotWhite(regs_.scratch0(),  // The value.
+                    regs_.scratch1(),  // Scratch.
+                    regs_.object(),  // Scratch.
+                    &need_incremental_pop_object,
+                    Label::kNear);
+  __ pop(regs_.object());
+
+  regs_.Restore(masm);
+  if (on_no_need == kUpdateRememberedSetOnNoNeedToInformIncrementalMarker) {
+    __ RememberedSetHelper(object_,
+                           address_,
+                           value_,
+                           MacroAssembler::kReturnAtEnd);
+  } else {
+    __ ret(0);
+  }
+
+  __ bind(&need_incremental_pop_object);
+  __ pop(regs_.object());
+
+  __ bind(&need_incremental);
+
+  // Fall through when we need to inform the incremental marker.
+}
+
+
+void StoreArrayLiteralElementStub::Generate(MacroAssembler* masm) {
+  // ----------- S t a t e -------------
+  //  -- eax    : element value to store
+  //  -- ecx    : element index as smi
+  //  -- esp[0] : return address
+  //  -- esp[4] : array literal index in function
+  //  -- esp[8] : array literal
+  // clobbers ebx, edx, edi
+  // -----------------------------------
+
+  Label element_done;
+  Label double_elements;
+  Label smi_element;
+  Label slow_elements;
+  Label slow_elements_from_double;
+  Label fast_elements;
+
+  // Get array literal index, array literal and its map.
+  __ mov(edx, Operand(esp, 1 * kPointerSize));
+  __ mov(ebx, Operand(esp, 2 * kPointerSize));
+  __ mov(edi, FieldOperand(ebx, JSObject::kMapOffset));
+
+  __ CheckFastElements(edi, &double_elements);
+
+  // Check for FAST_*_SMI_ELEMENTS or FAST_*_ELEMENTS elements.
+  __ JumpIfSmi(eax, &smi_element);
+  __ CheckFastSmiElements(edi, &fast_elements, Label::kNear);
+
+  // Storing into the array literal requires an elements transition. Call
+  // into the runtime.
+
+  __ bind(&slow_elements);
+  __ pop(edi);  // Pop the return address; it is pushed back below so the
+                // tail call returns to the right place.
+  __ push(ebx);
+  __ push(ecx);
+  __ push(eax);
+  __ mov(ebx, Operand(ebp, JavaScriptFrameConstants::kFunctionOffset));
+  __ push(FieldOperand(ebx, JSFunction::kLiteralsOffset));
+  __ push(edx);
+  __ push(edi);  // Push the return address back so that the tail call
+                 // returns to the right place.
+  __ TailCallRuntime(Runtime::kStoreArrayLiteralElement, 5, 1);
+
+  __ bind(&slow_elements_from_double);
+  __ pop(edx);
+  __ jmp(&slow_elements);
+
+  // Array literal has ElementsKind of FAST_*_ELEMENTS and value is an object.
+  __ bind(&fast_elements);
+  __ mov(ebx, FieldOperand(ebx, JSObject::kElementsOffset));
+  __ lea(ecx, FieldOperand(ebx, ecx, times_half_pointer_size,
+                           FixedArrayBase::kHeaderSize));
+  __ mov(Operand(ecx, 0), eax);
+  // Update the write barrier for the array store.
+  __ RecordWrite(ebx, ecx, eax,
+                 EMIT_REMEMBERED_SET,
+                 OMIT_SMI_CHECK);
+  __ ret(0);
+
+  // Array literal has ElementsKind of FAST_*_SMI_ELEMENTS or FAST_*_ELEMENTS,
+  // and value is Smi.
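+  // A smi carries no heap pointer, so unlike the fast_elements path above
+  // this store can skip RecordWrite entirely.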
+ __ bind(&smi_element); + __ mov(ebx, FieldOperand(ebx, JSObject::kElementsOffset)); + __ mov(FieldOperand(ebx, ecx, times_half_pointer_size, + FixedArrayBase::kHeaderSize), eax); + __ ret(0); + + // Array literal has ElementsKind of FAST_*_DOUBLE_ELEMENTS. + __ bind(&double_elements); + + __ push(edx); + __ mov(edx, FieldOperand(ebx, JSObject::kElementsOffset)); + __ StoreNumberToDoubleElements(eax, + edx, + ecx, + edi, + &slow_elements_from_double, + false); + __ pop(edx); + __ ret(0); +} + + +void StubFailureTrampolineStub::Generate(MacroAssembler* masm) { + CEntryStub ces(isolate(), 1); + __ call(ces.GetCode(), RelocInfo::CODE_TARGET); + int parameter_count_offset = + StubFailureTrampolineFrame::kCallerStackParameterCountFrameOffset; + __ mov(ebx, MemOperand(ebp, parameter_count_offset)); + masm->LeaveFrame(StackFrame::STUB_FAILURE_TRAMPOLINE); + __ pop(ecx); + int additional_offset = function_mode_ == JS_FUNCTION_STUB_MODE + ? kPointerSize + : 0; + __ lea(esp, MemOperand(esp, ebx, times_pointer_size, additional_offset)); + __ jmp(ecx); // Return to IC Miss stub, continuation still on stack. +} + + +void ProfileEntryHookStub::MaybeCallEntryHook(MacroAssembler* masm) { + if (masm->isolate()->function_entry_hook() != NULL) { + ProfileEntryHookStub stub(masm->isolate()); + masm->CallStub(&stub); + } +} + + +void ProfileEntryHookStub::Generate(MacroAssembler* masm) { + // Save volatile registers. + const int kNumSavedRegisters = 3; + __ push(eax); + __ push(ecx); + __ push(edx); + + // Calculate and push the original stack pointer. + __ lea(eax, Operand(esp, (kNumSavedRegisters + 1) * kPointerSize)); + __ push(eax); + + // Retrieve our return address and use it to calculate the calling + // function's address. + __ mov(eax, Operand(esp, (kNumSavedRegisters + 1) * kPointerSize)); + __ sub(eax, Immediate(Assembler::kCallInstructionLength)); + __ push(eax); + + // Call the entry hook. + DCHECK(isolate()->function_entry_hook() != NULL); + __ call(FUNCTION_ADDR(isolate()->function_entry_hook()), + RelocInfo::RUNTIME_ENTRY); + __ add(esp, Immediate(2 * kPointerSize)); + + // Restore ecx. + __ pop(edx); + __ pop(ecx); + __ pop(eax); + + __ ret(0); +} + + +template<class T> +static void CreateArrayDispatch(MacroAssembler* masm, + AllocationSiteOverrideMode mode) { + if (mode == DISABLE_ALLOCATION_SITES) { + T stub(masm->isolate(), + GetInitialFastElementsKind(), + mode); + __ TailCallStub(&stub); + } else if (mode == DONT_OVERRIDE) { + int last_index = GetSequenceIndexFromFastElementsKind( + TERMINAL_FAST_ELEMENTS_KIND); + for (int i = 0; i <= last_index; ++i) { + Label next; + ElementsKind kind = GetFastElementsKindFromSequenceIndex(i); + __ cmp(edx, kind); + __ j(not_equal, &next); + T stub(masm->isolate(), kind); + __ TailCallStub(&stub); + __ bind(&next); + } + + // If we reached this point there is a problem. + __ Abort(kUnexpectedElementsKindInArrayConstructor); + } else { + UNREACHABLE(); + } +} + + +static void CreateArrayDispatchOneArgument(MacroAssembler* masm, + AllocationSiteOverrideMode mode) { + // ebx - allocation site (if mode != DISABLE_ALLOCATION_SITES) + // edx - kind (if mode != DISABLE_ALLOCATION_SITES) + // eax - number of arguments + // edi - constructor? 
+  // esp[0] - return address
+  // esp[4] - last argument
+  Label normal_sequence;
+  if (mode == DONT_OVERRIDE) {
+    DCHECK(FAST_SMI_ELEMENTS == 0);
+    DCHECK(FAST_HOLEY_SMI_ELEMENTS == 1);
+    DCHECK(FAST_ELEMENTS == 2);
+    DCHECK(FAST_HOLEY_ELEMENTS == 3);
+    DCHECK(FAST_DOUBLE_ELEMENTS == 4);
+    DCHECK(FAST_HOLEY_DOUBLE_ELEMENTS == 5);
+
+    // Is the low bit set? If so, we are holey and that is good.
+    __ test_b(edx, 1);
+    __ j(not_zero, &normal_sequence);
+  }
+
+  // Look at the first argument.
+  __ mov(ecx, Operand(esp, kPointerSize));
+  __ test(ecx, ecx);
+  __ j(zero, &normal_sequence);
+
+  if (mode == DISABLE_ALLOCATION_SITES) {
+    ElementsKind initial = GetInitialFastElementsKind();
+    ElementsKind holey_initial = GetHoleyElementsKind(initial);
+
+    ArraySingleArgumentConstructorStub stub_holey(masm->isolate(),
+                                                  holey_initial,
+                                                  DISABLE_ALLOCATION_SITES);
+    __ TailCallStub(&stub_holey);
+
+    __ bind(&normal_sequence);
+    ArraySingleArgumentConstructorStub stub(masm->isolate(),
+                                            initial,
+                                            DISABLE_ALLOCATION_SITES);
+    __ TailCallStub(&stub);
+  } else if (mode == DONT_OVERRIDE) {
+    // We are going to create a holey array, but our kind is non-holey.
+    // Fix kind and retry.
+    __ inc(edx);
+
+    if (FLAG_debug_code) {
+      Handle<Map> allocation_site_map =
+          masm->isolate()->factory()->allocation_site_map();
+      __ cmp(FieldOperand(ebx, 0), Immediate(allocation_site_map));
+      __ Assert(equal, kExpectedAllocationSite);
+    }
+
+    // Save the resulting elements kind in type info. We can't just store the
+    // kind in the AllocationSite::transition_info field because elements kind
+    // is restricted to a portion of the field; the upper bits need to be left
+    // alone.
+    STATIC_ASSERT(AllocationSite::ElementsKindBits::kShift == 0);
+    __ add(FieldOperand(ebx, AllocationSite::kTransitionInfoOffset),
+           Immediate(Smi::FromInt(kFastElementsKindPackedToHoley)));
+
+    __ bind(&normal_sequence);
+    int last_index = GetSequenceIndexFromFastElementsKind(
+        TERMINAL_FAST_ELEMENTS_KIND);
+    for (int i = 0; i <= last_index; ++i) {
+      Label next;
+      ElementsKind kind = GetFastElementsKindFromSequenceIndex(i);
+      __ cmp(edx, kind);
+      __ j(not_equal, &next);
+      ArraySingleArgumentConstructorStub stub(masm->isolate(), kind);
+      __ TailCallStub(&stub);
+      __ bind(&next);
+    }
+
+    // If we reached this point there is a problem.
+    __ Abort(kUnexpectedElementsKindInArrayConstructor);
+  } else {
+    UNREACHABLE();
+  }
+}
+
+
+template<class T>
+static void ArrayConstructorStubAheadOfTimeHelper(Isolate* isolate) {
+  int to_index = GetSequenceIndexFromFastElementsKind(
+      TERMINAL_FAST_ELEMENTS_KIND);
+  for (int i = 0; i <= to_index; ++i) {
+    ElementsKind kind = GetFastElementsKindFromSequenceIndex(i);
+    T stub(isolate, kind);
+    stub.GetCode();
+    if (AllocationSite::GetMode(kind) != DONT_TRACK_ALLOCATION_SITE) {
+      T stub1(isolate, kind, DISABLE_ALLOCATION_SITES);
+      stub1.GetCode();
+    }
+  }
+}
+
+
+void ArrayConstructorStubBase::GenerateStubsAheadOfTime(Isolate* isolate) {
+  ArrayConstructorStubAheadOfTimeHelper<ArrayNoArgumentConstructorStub>(
+      isolate);
+  ArrayConstructorStubAheadOfTimeHelper<ArraySingleArgumentConstructorStub>(
+      isolate);
+  ArrayConstructorStubAheadOfTimeHelper<ArrayNArgumentsConstructorStub>(
+      isolate);
+}
+
+
+void InternalArrayConstructorStubBase::GenerateStubsAheadOfTime(
+    Isolate* isolate) {
+  ElementsKind kinds[2] = { FAST_ELEMENTS, FAST_HOLEY_ELEMENTS };
+  for (int i = 0; i < 2; i++) {
+    // For internal arrays we only need a few things.
+    InternalArrayNoArgumentConstructorStub stubh1(isolate, kinds[i]);
+    stubh1.GetCode();
+    InternalArraySingleArgumentConstructorStub stubh2(isolate, kinds[i]);
+    stubh2.GetCode();
+    InternalArrayNArgumentsConstructorStub stubh3(isolate, kinds[i]);
+    stubh3.GetCode();
+  }
+}
+
+
+void ArrayConstructorStub::GenerateDispatchToArrayStub(
+    MacroAssembler* masm,
+    AllocationSiteOverrideMode mode) {
+  if (argument_count_ == ANY) {
+    Label not_zero_case, not_one_case;
+    __ test(eax, eax);
+    __ j(not_zero, &not_zero_case);
+    CreateArrayDispatch<ArrayNoArgumentConstructorStub>(masm, mode);
+
+    __ bind(&not_zero_case);
+    __ cmp(eax, 1);
+    __ j(greater, &not_one_case);
+    CreateArrayDispatchOneArgument(masm, mode);
+
+    __ bind(&not_one_case);
+    CreateArrayDispatch<ArrayNArgumentsConstructorStub>(masm, mode);
+  } else if (argument_count_ == NONE) {
+    CreateArrayDispatch<ArrayNoArgumentConstructorStub>(masm, mode);
+  } else if (argument_count_ == ONE) {
+    CreateArrayDispatchOneArgument(masm, mode);
+  } else if (argument_count_ == MORE_THAN_ONE) {
+    CreateArrayDispatch<ArrayNArgumentsConstructorStub>(masm, mode);
+  } else {
+    UNREACHABLE();
+  }
+}
+
+
+void ArrayConstructorStub::Generate(MacroAssembler* masm) {
+  // ----------- S t a t e -------------
+  //  -- eax    : argc (only if argument_count_ == ANY)
+  //  -- ebx    : AllocationSite or undefined
+  //  -- edi    : constructor
+  //  -- esp[0] : return address
+  //  -- esp[4] : last argument
+  // -----------------------------------
+  if (FLAG_debug_code) {
+    // The array construct code is only set for the global and natives
+    // builtin Array functions which always have maps.
+
+    // Initial map for the builtin Array function should be a map.
+    __ mov(ecx, FieldOperand(edi, JSFunction::kPrototypeOrInitialMapOffset));
+    // Will both indicate a NULL and a Smi.
+    __ test(ecx, Immediate(kSmiTagMask));
+    __ Assert(not_zero, kUnexpectedInitialMapForArrayFunction);
+    __ CmpObjectType(ecx, MAP_TYPE, ecx);
+    __ Assert(equal, kUnexpectedInitialMapForArrayFunction);
+
+    // We should either have undefined in ebx or a valid AllocationSite.
+    __ AssertUndefinedOrAllocationSite(ebx);
+  }
+
+  Label no_info;
+  // If the feedback vector is the undefined value call an array constructor
+  // that doesn't use AllocationSites.
+  __ cmp(ebx, isolate()->factory()->undefined_value());
+  __ j(equal, &no_info);
+
+  // Only look at the lower 16 bits of the transition info.
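+  // ElementsKindBits sit at the bottom of transition_info (the STATIC_ASSERT
+  // below checks the shift is zero), so untag and mask to recover the kind.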
+  __ mov(edx, FieldOperand(ebx, AllocationSite::kTransitionInfoOffset));
+  __ SmiUntag(edx);
+  STATIC_ASSERT(AllocationSite::ElementsKindBits::kShift == 0);
+  __ and_(edx, Immediate(AllocationSite::ElementsKindBits::kMask));
+  GenerateDispatchToArrayStub(masm, DONT_OVERRIDE);
+
+  __ bind(&no_info);
+  GenerateDispatchToArrayStub(masm, DISABLE_ALLOCATION_SITES);
+}
+
+
+void InternalArrayConstructorStub::GenerateCase(
+    MacroAssembler* masm, ElementsKind kind) {
+  Label not_zero_case, not_one_case;
+  Label normal_sequence;
+
+  __ test(eax, eax);
+  __ j(not_zero, &not_zero_case);
+  InternalArrayNoArgumentConstructorStub stub0(isolate(), kind);
+  __ TailCallStub(&stub0);
+
+  __ bind(&not_zero_case);
+  __ cmp(eax, 1);
+  __ j(greater, &not_one_case);
+
+  if (IsFastPackedElementsKind(kind)) {
+    // We might need to create a holey array; look at the first argument.
+    __ mov(ecx, Operand(esp, kPointerSize));
+    __ test(ecx, ecx);
+    __ j(zero, &normal_sequence);
+
+    InternalArraySingleArgumentConstructorStub
+        stub1_holey(isolate(), GetHoleyElementsKind(kind));
+    __ TailCallStub(&stub1_holey);
+  }
+
+  __ bind(&normal_sequence);
+  InternalArraySingleArgumentConstructorStub stub1(isolate(), kind);
+  __ TailCallStub(&stub1);
+
+  __ bind(&not_one_case);
+  InternalArrayNArgumentsConstructorStub stubN(isolate(), kind);
+  __ TailCallStub(&stubN);
+}
+
+
+void InternalArrayConstructorStub::Generate(MacroAssembler* masm) {
+  // ----------- S t a t e -------------
+  //  -- eax    : argc
+  //  -- edi    : constructor
+  //  -- esp[0] : return address
+  //  -- esp[4] : last argument
+  // -----------------------------------
+
+  if (FLAG_debug_code) {
+    // The array construct code is only set for the global and natives
+    // builtin Array functions which always have maps.
+
+    // Initial map for the builtin Array function should be a map.
+    __ mov(ecx, FieldOperand(edi, JSFunction::kPrototypeOrInitialMapOffset));
+    // Will both indicate a NULL and a Smi.
+    __ test(ecx, Immediate(kSmiTagMask));
+    __ Assert(not_zero, kUnexpectedInitialMapForArrayFunction);
+    __ CmpObjectType(ecx, MAP_TYPE, ecx);
+    __ Assert(equal, kUnexpectedInitialMapForArrayFunction);
+  }
+
+  // Figure out the right elements kind.
+  __ mov(ecx, FieldOperand(edi, JSFunction::kPrototypeOrInitialMapOffset));
+
+  // Load the map's "bit field 2" into ecx. We only need the first byte,
+  // but the following masking takes care of that anyway.
+  __ mov(ecx, FieldOperand(ecx, Map::kBitField2Offset));
+  // Retrieve elements_kind from bit field 2.
+  __ DecodeField<Map::ElementsKindBits>(ecx);
+
+  if (FLAG_debug_code) {
+    Label done;
+    __ cmp(ecx, Immediate(FAST_ELEMENTS));
+    __ j(equal, &done);
+    __ cmp(ecx, Immediate(FAST_HOLEY_ELEMENTS));
+    __ Assert(equal,
+              kInvalidElementsKindForInternalArrayOrInternalPackedArray);
+    __ bind(&done);
+  }
+
+  Label fast_elements_case;
+  __ cmp(ecx, Immediate(FAST_ELEMENTS));
+  __ j(equal, &fast_elements_case);
+  GenerateCase(masm, FAST_HOLEY_ELEMENTS);
+
+  __ bind(&fast_elements_case);
+  GenerateCase(masm, FAST_ELEMENTS);
+}
+
+
+void CallApiFunctionStub::Generate(MacroAssembler* masm) {
+  // ----------- S t a t e -------------
+  //  -- eax                 : callee
+  //  -- ebx                 : call_data
+  //  -- ecx                 : holder
+  //  -- edx                 : api_function_address
+  //  -- esi                 : context
+  //  --
+  //  -- esp[0]              : return address
+  //  -- esp[4]              : last argument
+  //  -- ...
+ // -- esp[argc * 4] : first argument + // -- esp[(argc + 1) * 4] : receiver + // ----------------------------------- + + Register callee = eax; + Register call_data = ebx; + Register holder = ecx; + Register api_function_address = edx; + Register return_address = edi; + Register context = esi; + + int argc = ArgumentBits::decode(bit_field_); + bool is_store = IsStoreBits::decode(bit_field_); + bool call_data_undefined = CallDataUndefinedBits::decode(bit_field_); + + typedef FunctionCallbackArguments FCA; + + STATIC_ASSERT(FCA::kContextSaveIndex == 6); + STATIC_ASSERT(FCA::kCalleeIndex == 5); + STATIC_ASSERT(FCA::kDataIndex == 4); + STATIC_ASSERT(FCA::kReturnValueOffset == 3); + STATIC_ASSERT(FCA::kReturnValueDefaultValueIndex == 2); + STATIC_ASSERT(FCA::kIsolateIndex == 1); + STATIC_ASSERT(FCA::kHolderIndex == 0); + STATIC_ASSERT(FCA::kArgsLength == 7); + + __ pop(return_address); + + // context save + __ push(context); + // load context from callee + __ mov(context, FieldOperand(callee, JSFunction::kContextOffset)); + + // callee + __ push(callee); + + // call data + __ push(call_data); + + Register scratch = call_data; + if (!call_data_undefined) { + // return value + __ push(Immediate(isolate()->factory()->undefined_value())); + // return value default + __ push(Immediate(isolate()->factory()->undefined_value())); + } else { + // return value + __ push(scratch); + // return value default + __ push(scratch); + } + // isolate + __ push(Immediate(reinterpret_cast<int>(isolate()))); + // holder + __ push(holder); + + __ mov(scratch, esp); + + // return address + __ push(return_address); + + // API function gets reference to the v8::Arguments. If CPU profiler + // is enabled wrapper function will be called and we need to pass + // address of the callback as additional parameter, always allocate + // space for it. + const int kApiArgc = 1 + 1; + + // Allocate the v8::Arguments structure in the arguments' space since + // it's not controlled by GC. + const int kApiStackSpace = 4; + + __ PrepareCallApiFunction(kApiArgc + kApiStackSpace); + + // FunctionCallbackInfo::implicit_args_. + __ mov(ApiParameterOperand(2), scratch); + __ add(scratch, Immediate((argc + FCA::kArgsLength - 1) * kPointerSize)); + // FunctionCallbackInfo::values_. + __ mov(ApiParameterOperand(3), scratch); + // FunctionCallbackInfo::length_. + __ Move(ApiParameterOperand(4), Immediate(argc)); + // FunctionCallbackInfo::is_construct_call_. + __ Move(ApiParameterOperand(5), Immediate(0)); + + // v8::InvocationCallback's argument. + __ lea(scratch, ApiParameterOperand(2)); + __ mov(ApiParameterOperand(0), scratch); + + ExternalReference thunk_ref = + ExternalReference::invoke_function_callback(isolate()); + + Operand context_restore_operand(ebp, + (2 + FCA::kContextSaveIndex) * kPointerSize); + // Stores return the first js argument + int return_value_offset = 0; + if (is_store) { + return_value_offset = 2 + FCA::kArgsLength; + } else { + return_value_offset = 2 + FCA::kReturnValueOffset; + } + Operand return_value_operand(ebp, return_value_offset * kPointerSize); + __ CallApiFunctionAndReturn(api_function_address, + thunk_ref, + ApiParameterOperand(1), + argc + FCA::kArgsLength + 1, + return_value_operand, + &context_restore_operand); +} + + +void CallApiGetterStub::Generate(MacroAssembler* masm) { + // ----------- S t a t e ------------- + // -- esp[0] : return address + // -- esp[4] : name + // -- esp[8 - kArgsLength*4] : PropertyCallbackArguments object + // -- ... 
+  //  -- edx                 : api_function_address
+  // -----------------------------------
+
+  // Array for v8::Arguments::values_, handler for the name, and pointer to
+  // the values (treated as a smi by the GC).
+  const int kStackSpace = PropertyCallbackArguments::kArgsLength + 2;
+  // Allocate space for an optional callback address parameter in case the
+  // CPU profiler is active.
+  const int kApiArgc = 2 + 1;
+
+  Register api_function_address = edx;
+  Register scratch = ebx;
+
+  // Load the address of the name.
+  __ lea(scratch, Operand(esp, 1 * kPointerSize));
+
+  __ PrepareCallApiFunction(kApiArgc);
+  __ mov(ApiParameterOperand(0), scratch);  // name.
+  __ add(scratch, Immediate(kPointerSize));
+  __ mov(ApiParameterOperand(1), scratch);  // arguments pointer.
+
+  ExternalReference thunk_ref =
+      ExternalReference::invoke_accessor_getter_callback(isolate());
+
+  __ CallApiFunctionAndReturn(api_function_address,
+                              thunk_ref,
+                              ApiParameterOperand(2),
+                              kStackSpace,
+                              Operand(ebp, 7 * kPointerSize),
+                              NULL);
+}
+
+
+#undef __
+
+} }  // namespace v8::internal
+
+#endif  // V8_TARGET_ARCH_X87
diff --git a/deps/v8/src/x87/code-stubs-x87.h b/deps/v8/src/x87/code-stubs-x87.h
new file mode 100644
index 00000000000..e32902f27c1
--- /dev/null
+++ b/deps/v8/src/x87/code-stubs-x87.h
@@ -0,0 +1,413 @@
+// Copyright 2011 the V8 project authors. All rights reserved.
+// Use of this source code is governed by a BSD-style license that can be
+// found in the LICENSE file.
+
+#ifndef V8_X87_CODE_STUBS_X87_H_
+#define V8_X87_CODE_STUBS_X87_H_
+
+#include "src/ic-inl.h"
+#include "src/macro-assembler.h"
+
+namespace v8 {
+namespace internal {
+
+
+void ArrayNativeCode(MacroAssembler* masm,
+                     bool construct_call,
+                     Label* call_generic_code);
+
+
+class StoreBufferOverflowStub: public PlatformCodeStub {
+ public:
+  explicit StoreBufferOverflowStub(Isolate* isolate)
+      : PlatformCodeStub(isolate) { }
+
+  void Generate(MacroAssembler* masm);
+
+  static void GenerateFixedRegStubsAheadOfTime(Isolate* isolate);
+  virtual bool SometimesSetsUpAFrame() { return false; }
+
+ private:
+  Major MajorKey() const { return StoreBufferOverflow; }
+  int MinorKey() const { return 0; }
+};
+
+
+class StringHelper : public AllStatic {
+ public:
+  // Generate code for copying characters using the rep movs instruction.
+  // Copies ecx characters from esi to edi. Copying of overlapping regions is
+  // not supported.
+  static void GenerateCopyCharacters(MacroAssembler* masm,
+                                     Register dest,
+                                     Register src,
+                                     Register count,
+                                     Register scratch,
+                                     String::Encoding encoding);
+
+  // Generate string hash.
+  static void GenerateHashInit(MacroAssembler* masm,
+                               Register hash,
+                               Register character,
+                               Register scratch);
+  static void GenerateHashAddCharacter(MacroAssembler* masm,
+                                       Register hash,
+                                       Register character,
+                                       Register scratch);
+  static void GenerateHashGetHash(MacroAssembler* masm,
+                                  Register hash,
+                                  Register scratch);
+
+ private:
+  DISALLOW_IMPLICIT_CONSTRUCTORS(StringHelper);
+};
+
+
+class SubStringStub: public PlatformCodeStub {
+ public:
+  explicit SubStringStub(Isolate* isolate) : PlatformCodeStub(isolate) {}
+
+ private:
+  Major MajorKey() const { return SubString; }
+  int MinorKey() const { return 0; }
+
+  void Generate(MacroAssembler* masm);
+};
+
+
+class StringCompareStub: public PlatformCodeStub {
+ public:
+  explicit StringCompareStub(Isolate* isolate) : PlatformCodeStub(isolate) { }
+
+  // Compares two flat ASCII strings and returns result in eax.
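+  // The result is a smi: Smi::FromInt(LESS), EQUAL or GREATER, matching
+  // the convention of Runtime::kStringCompare.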
+ static void GenerateCompareFlatAsciiStrings(MacroAssembler* masm, + Register left, + Register right, + Register scratch1, + Register scratch2, + Register scratch3); + + // Compares two flat ASCII strings for equality and returns result + // in eax. + static void GenerateFlatAsciiStringEquals(MacroAssembler* masm, + Register left, + Register right, + Register scratch1, + Register scratch2); + + private: + virtual Major MajorKey() const { return StringCompare; } + virtual int MinorKey() const { return 0; } + virtual void Generate(MacroAssembler* masm); + + static void GenerateAsciiCharsCompareLoop( + MacroAssembler* masm, + Register left, + Register right, + Register length, + Register scratch, + Label* chars_not_equal, + Label::Distance chars_not_equal_near = Label::kFar); +}; + + +class NameDictionaryLookupStub: public PlatformCodeStub { + public: + enum LookupMode { POSITIVE_LOOKUP, NEGATIVE_LOOKUP }; + + NameDictionaryLookupStub(Isolate* isolate, + Register dictionary, + Register result, + Register index, + LookupMode mode) + : PlatformCodeStub(isolate), + dictionary_(dictionary), result_(result), index_(index), mode_(mode) { } + + void Generate(MacroAssembler* masm); + + static void GenerateNegativeLookup(MacroAssembler* masm, + Label* miss, + Label* done, + Register properties, + Handle<Name> name, + Register r0); + + static void GeneratePositiveLookup(MacroAssembler* masm, + Label* miss, + Label* done, + Register elements, + Register name, + Register r0, + Register r1); + + virtual bool SometimesSetsUpAFrame() { return false; } + + private: + static const int kInlinedProbes = 4; + static const int kTotalProbes = 20; + + static const int kCapacityOffset = + NameDictionary::kHeaderSize + + NameDictionary::kCapacityIndex * kPointerSize; + + static const int kElementsStartOffset = + NameDictionary::kHeaderSize + + NameDictionary::kElementsStartIndex * kPointerSize; + + Major MajorKey() const { return NameDictionaryLookup; } + + int MinorKey() const { + return DictionaryBits::encode(dictionary_.code()) | + ResultBits::encode(result_.code()) | + IndexBits::encode(index_.code()) | + LookupModeBits::encode(mode_); + } + + class DictionaryBits: public BitField<int, 0, 3> {}; + class ResultBits: public BitField<int, 3, 3> {}; + class IndexBits: public BitField<int, 6, 3> {}; + class LookupModeBits: public BitField<LookupMode, 9, 1> {}; + + Register dictionary_; + Register result_; + Register index_; + LookupMode mode_; +}; + + +class RecordWriteStub: public PlatformCodeStub { + public: + RecordWriteStub(Isolate* isolate, + Register object, + Register value, + Register address, + RememberedSetAction remembered_set_action) + : PlatformCodeStub(isolate), + object_(object), + value_(value), + address_(address), + remembered_set_action_(remembered_set_action), + regs_(object, // An input reg. + address, // An input reg. + value) { // One scratch reg. + } + + enum Mode { + STORE_BUFFER_ONLY, + INCREMENTAL, + INCREMENTAL_COMPACTION + }; + + virtual bool SometimesSetsUpAFrame() { return false; } + + static const byte kTwoByteNopInstruction = 0x3c; // Cmpb al, #imm8. + static const byte kTwoByteJumpInstruction = 0xeb; // Jmp #imm8. + + static const byte kFiveByteNopInstruction = 0x3d; // Cmpl eax, #imm32. + static const byte kFiveByteJumpInstruction = 0xe9; // Jmp #imm32. 
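+  // GetMode() below decodes the stub's state from these first bytes: a jump
+  // in either slot means incremental marking is active; the nop-like
+  // compares mean the stub only updates the store buffer.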
+ + static Mode GetMode(Code* stub) { + byte first_instruction = stub->instruction_start()[0]; + byte second_instruction = stub->instruction_start()[2]; + + if (first_instruction == kTwoByteJumpInstruction) { + return INCREMENTAL; + } + + DCHECK(first_instruction == kTwoByteNopInstruction); + + if (second_instruction == kFiveByteJumpInstruction) { + return INCREMENTAL_COMPACTION; + } + + DCHECK(second_instruction == kFiveByteNopInstruction); + + return STORE_BUFFER_ONLY; + } + + static void Patch(Code* stub, Mode mode) { + switch (mode) { + case STORE_BUFFER_ONLY: + DCHECK(GetMode(stub) == INCREMENTAL || + GetMode(stub) == INCREMENTAL_COMPACTION); + stub->instruction_start()[0] = kTwoByteNopInstruction; + stub->instruction_start()[2] = kFiveByteNopInstruction; + break; + case INCREMENTAL: + DCHECK(GetMode(stub) == STORE_BUFFER_ONLY); + stub->instruction_start()[0] = kTwoByteJumpInstruction; + break; + case INCREMENTAL_COMPACTION: + DCHECK(GetMode(stub) == STORE_BUFFER_ONLY); + stub->instruction_start()[0] = kTwoByteNopInstruction; + stub->instruction_start()[2] = kFiveByteJumpInstruction; + break; + } + DCHECK(GetMode(stub) == mode); + CpuFeatures::FlushICache(stub->instruction_start(), 7); + } + + private: + // This is a helper class for freeing up 3 scratch registers, where the third + // is always ecx (needed for shift operations). The input is two registers + // that must be preserved and one scratch register provided by the caller. + class RegisterAllocation { + public: + RegisterAllocation(Register object, + Register address, + Register scratch0) + : object_orig_(object), + address_orig_(address), + scratch0_orig_(scratch0), + object_(object), + address_(address), + scratch0_(scratch0) { + DCHECK(!AreAliased(scratch0, object, address, no_reg)); + scratch1_ = GetRegThatIsNotEcxOr(object_, address_, scratch0_); + if (scratch0.is(ecx)) { + scratch0_ = GetRegThatIsNotEcxOr(object_, address_, scratch1_); + } + if (object.is(ecx)) { + object_ = GetRegThatIsNotEcxOr(address_, scratch0_, scratch1_); + } + if (address.is(ecx)) { + address_ = GetRegThatIsNotEcxOr(object_, scratch0_, scratch1_); + } + DCHECK(!AreAliased(scratch0_, object_, address_, ecx)); + } + + void Save(MacroAssembler* masm) { + DCHECK(!address_orig_.is(object_)); + DCHECK(object_.is(object_orig_) || address_.is(address_orig_)); + DCHECK(!AreAliased(object_, address_, scratch1_, scratch0_)); + DCHECK(!AreAliased(object_orig_, address_, scratch1_, scratch0_)); + DCHECK(!AreAliased(object_, address_orig_, scratch1_, scratch0_)); + // We don't have to save scratch0_orig_ because it was given to us as + // a scratch register. But if we had to switch to a different reg then + // we should save the new scratch0_. + if (!scratch0_.is(scratch0_orig_)) masm->push(scratch0_); + if (!ecx.is(scratch0_orig_) && + !ecx.is(object_orig_) && + !ecx.is(address_orig_)) { + masm->push(ecx); + } + masm->push(scratch1_); + if (!address_.is(address_orig_)) { + masm->push(address_); + masm->mov(address_, address_orig_); + } + if (!object_.is(object_orig_)) { + masm->push(object_); + masm->mov(object_, object_orig_); + } + } + + void Restore(MacroAssembler* masm) { + // These will have been preserved the entire time, so we just need to move + // them back. Only in one case is the orig_ reg different from the plain + // one, since only one of them can alias with ecx. 
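+    // Restore() mirrors Save() in reverse, so every pop below matches a
+    // push above.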
+    if (!object_.is(object_orig_)) {
+      masm->mov(object_orig_, object_);
+      masm->pop(object_);
+    }
+    if (!address_.is(address_orig_)) {
+      masm->mov(address_orig_, address_);
+      masm->pop(address_);
+    }
+    masm->pop(scratch1_);
+    if (!ecx.is(scratch0_orig_) &&
+        !ecx.is(object_orig_) &&
+        !ecx.is(address_orig_)) {
+      masm->pop(ecx);
+    }
+    if (!scratch0_.is(scratch0_orig_)) masm->pop(scratch0_);
+  }
+
+  // If we have to call into C then we need to save and restore all caller-
+  // saved registers that were not already preserved. The caller saved
+  // registers are eax, ecx and edx. The three scratch registers (incl. ecx)
+  // will be restored by other means so we don't bother pushing them here.
+  void SaveCallerSaveRegisters(MacroAssembler* masm) {
+    if (!scratch0_.is(eax) && !scratch1_.is(eax)) masm->push(eax);
+    if (!scratch0_.is(edx) && !scratch1_.is(edx)) masm->push(edx);
+  }
+
+  inline void RestoreCallerSaveRegisters(MacroAssembler* masm) {
+    if (!scratch0_.is(edx) && !scratch1_.is(edx)) masm->pop(edx);
+    if (!scratch0_.is(eax) && !scratch1_.is(eax)) masm->pop(eax);
+  }
+
+  inline Register object() { return object_; }
+  inline Register address() { return address_; }
+  inline Register scratch0() { return scratch0_; }
+  inline Register scratch1() { return scratch1_; }
+
+ private:
+  Register object_orig_;
+  Register address_orig_;
+  Register scratch0_orig_;
+  Register object_;
+  Register address_;
+  Register scratch0_;
+  Register scratch1_;
+  // Third scratch register is always ecx.
+
+  Register GetRegThatIsNotEcxOr(Register r1,
+                                Register r2,
+                                Register r3) {
+    for (int i = 0; i < Register::NumAllocatableRegisters(); i++) {
+      Register candidate = Register::FromAllocationIndex(i);
+      if (candidate.is(ecx)) continue;
+      if (candidate.is(r1)) continue;
+      if (candidate.is(r2)) continue;
+      if (candidate.is(r3)) continue;
+      return candidate;
+    }
+    UNREACHABLE();
+    return no_reg;
+  }
+  friend class RecordWriteStub;
+  };
+
+  enum OnNoNeedToInformIncrementalMarker {
+    kReturnOnNoNeedToInformIncrementalMarker,
+    kUpdateRememberedSetOnNoNeedToInformIncrementalMarker
+  };
+
+  void Generate(MacroAssembler* masm);
+  void GenerateIncremental(MacroAssembler* masm, Mode mode);
+  void CheckNeedsToInformIncrementalMarker(
+      MacroAssembler* masm,
+      OnNoNeedToInformIncrementalMarker on_no_need,
+      Mode mode);
+  void InformIncrementalMarker(MacroAssembler* masm);
+
+  Major MajorKey() const { return RecordWrite; }
+
+  int MinorKey() const {
+    return ObjectBits::encode(object_.code()) |
+        ValueBits::encode(value_.code()) |
+        AddressBits::encode(address_.code()) |
+        RememberedSetActionBits::encode(remembered_set_action_);
+  }
+
+  void Activate(Code* code) {
+    code->GetHeap()->incremental_marking()->ActivateGeneratedStub(code);
+  }
+
+  class ObjectBits: public BitField<int, 0, 3> {};
+  class ValueBits: public BitField<int, 3, 3> {};
+  class AddressBits: public BitField<int, 6, 3> {};
+  class RememberedSetActionBits: public BitField<RememberedSetAction, 9, 1> {};
+
+  Register object_;
+  Register value_;
+  Register address_;
+  RememberedSetAction remembered_set_action_;
+  RegisterAllocation regs_;
+};
+
+
+} }  // namespace v8::internal
+
+#endif  // V8_X87_CODE_STUBS_X87_H_
diff --git a/deps/v8/src/x87/codegen-x87.cc b/deps/v8/src/x87/codegen-x87.cc
new file mode 100644
index 00000000000..f6b8fc4f2ab
--- /dev/null
+++ b/deps/v8/src/x87/codegen-x87.cc
@@ -0,0 +1,645 @@
+// Copyright 2012 the V8 project authors. All rights reserved.
+// Use of this source code is governed by a BSD-style license that can be +// found in the LICENSE file. + +#include "src/v8.h" + +#if V8_TARGET_ARCH_X87 + +#include "src/codegen.h" +#include "src/heap/heap.h" +#include "src/macro-assembler.h" + +namespace v8 { +namespace internal { + + +// ------------------------------------------------------------------------- +// Platform-specific RuntimeCallHelper functions. + +void StubRuntimeCallHelper::BeforeCall(MacroAssembler* masm) const { + masm->EnterFrame(StackFrame::INTERNAL); + DCHECK(!masm->has_frame()); + masm->set_has_frame(true); +} + + +void StubRuntimeCallHelper::AfterCall(MacroAssembler* masm) const { + masm->LeaveFrame(StackFrame::INTERNAL); + DCHECK(masm->has_frame()); + masm->set_has_frame(false); +} + + +#define __ masm. + + +UnaryMathFunction CreateExpFunction() { + // No SSE2 support + return &std::exp; +} + + +UnaryMathFunction CreateSqrtFunction() { + // No SSE2 support + return &std::sqrt; +} + + +// Helper functions for CreateMemMoveFunction. +#undef __ +#define __ ACCESS_MASM(masm) + +enum Direction { FORWARD, BACKWARD }; +enum Alignment { MOVE_ALIGNED, MOVE_UNALIGNED }; + + +void MemMoveEmitPopAndReturn(MacroAssembler* masm) { + __ pop(esi); + __ pop(edi); + __ ret(0); +} + + +#undef __ +#define __ masm. + + +class LabelConverter { + public: + explicit LabelConverter(byte* buffer) : buffer_(buffer) {} + int32_t address(Label* l) const { + return reinterpret_cast<int32_t>(buffer_) + l->pos(); + } + private: + byte* buffer_; +}; + + +MemMoveFunction CreateMemMoveFunction() { + size_t actual_size; + // Allocate buffer in executable space. + byte* buffer = + static_cast<byte*>(base::OS::Allocate(1 * KB, &actual_size, true)); + if (buffer == NULL) return NULL; + MacroAssembler masm(NULL, buffer, static_cast<int>(actual_size)); + LabelConverter conv(buffer); + + // Generated code is put into a fixed, unmovable buffer, and not into + // the V8 heap. We can't, and don't, refer to any relocatable addresses + // (e.g. the JavaScript nan-object). + + // 32-bit C declaration function calls pass arguments on stack. + + // Stack layout: + // esp[12]: Third argument, size. + // esp[8]: Second argument, source pointer. + // esp[4]: First argument, destination pointer. + // esp[0]: return address + + const int kDestinationOffset = 1 * kPointerSize; + const int kSourceOffset = 2 * kPointerSize; + const int kSizeOffset = 3 * kPointerSize; + + int stack_offset = 0; // Update if we change the stack height. + + Label backward, backward_much_overlap; + Label forward_much_overlap, small_size, medium_size, pop_and_return; + __ push(edi); + __ push(esi); + stack_offset += 2 * kPointerSize; + Register dst = edi; + Register src = esi; + Register count = ecx; + __ mov(dst, Operand(esp, stack_offset + kDestinationOffset)); + __ mov(src, Operand(esp, stack_offset + kSourceOffset)); + __ mov(count, Operand(esp, stack_offset + kSizeOffset)); + + __ cmp(dst, src); + __ j(equal, &pop_and_return); + + // No SSE2. + Label forward; + __ cmp(count, 0); + __ j(equal, &pop_and_return); + __ cmp(dst, src); + __ j(above, &backward); + __ jmp(&forward); + { + // Simple forward copier. + Label forward_loop_1byte, forward_loop_4byte; + __ bind(&forward_loop_4byte); + __ mov(eax, Operand(src, 0)); + __ sub(count, Immediate(4)); + __ add(src, Immediate(4)); + __ mov(Operand(dst, 0), eax); + __ add(dst, Immediate(4)); + __ bind(&forward); // Entry point. 
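+    // While more than 3 bytes remain, copy one 4-byte word per iteration;
+    // the 1-byte loop below mops up the remaining 0..3 tail bytes.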
+ __ cmp(count, 3); + __ j(above, &forward_loop_4byte); + __ bind(&forward_loop_1byte); + __ cmp(count, 0); + __ j(below_equal, &pop_and_return); + __ mov_b(eax, Operand(src, 0)); + __ dec(count); + __ inc(src); + __ mov_b(Operand(dst, 0), eax); + __ inc(dst); + __ jmp(&forward_loop_1byte); + } + { + // Simple backward copier. + Label backward_loop_1byte, backward_loop_4byte, entry_shortcut; + __ bind(&backward); + __ add(src, count); + __ add(dst, count); + __ cmp(count, 3); + __ j(below_equal, &entry_shortcut); + + __ bind(&backward_loop_4byte); + __ sub(src, Immediate(4)); + __ sub(count, Immediate(4)); + __ mov(eax, Operand(src, 0)); + __ sub(dst, Immediate(4)); + __ mov(Operand(dst, 0), eax); + __ cmp(count, 3); + __ j(above, &backward_loop_4byte); + __ bind(&backward_loop_1byte); + __ cmp(count, 0); + __ j(below_equal, &pop_and_return); + __ bind(&entry_shortcut); + __ dec(src); + __ dec(count); + __ mov_b(eax, Operand(src, 0)); + __ dec(dst); + __ mov_b(Operand(dst, 0), eax); + __ jmp(&backward_loop_1byte); + } + + __ bind(&pop_and_return); + MemMoveEmitPopAndReturn(&masm); + + CodeDesc desc; + masm.GetCode(&desc); + DCHECK(!RelocInfo::RequiresRelocation(desc)); + CpuFeatures::FlushICache(buffer, actual_size); + base::OS::ProtectCode(buffer, actual_size); + // TODO(jkummerow): It would be nice to register this code creation event + // with the PROFILE / GDBJIT system. + return FUNCTION_CAST<MemMoveFunction>(buffer); +} + + +#undef __ + +// ------------------------------------------------------------------------- +// Code generators + +#define __ ACCESS_MASM(masm) + + +void ElementsTransitionGenerator::GenerateMapChangeElementsTransition( + MacroAssembler* masm, + Register receiver, + Register key, + Register value, + Register target_map, + AllocationSiteMode mode, + Label* allocation_memento_found) { + Register scratch = edi; + DCHECK(!AreAliased(receiver, key, value, target_map, scratch)); + + if (mode == TRACK_ALLOCATION_SITE) { + DCHECK(allocation_memento_found != NULL); + __ JumpIfJSArrayHasAllocationMemento( + receiver, scratch, allocation_memento_found); + } + + // Set transitioned map. + __ mov(FieldOperand(receiver, HeapObject::kMapOffset), target_map); + __ RecordWriteField(receiver, + HeapObject::kMapOffset, + target_map, + scratch, + EMIT_REMEMBERED_SET, + OMIT_SMI_CHECK); +} + + +void ElementsTransitionGenerator::GenerateSmiToDouble( + MacroAssembler* masm, + Register receiver, + Register key, + Register value, + Register target_map, + AllocationSiteMode mode, + Label* fail) { + // Return address is on the stack. + DCHECK(receiver.is(edx)); + DCHECK(key.is(ecx)); + DCHECK(value.is(eax)); + DCHECK(target_map.is(ebx)); + + Label loop, entry, convert_hole, gc_required, only_change_map; + + if (mode == TRACK_ALLOCATION_SITE) { + __ JumpIfJSArrayHasAllocationMemento(edx, edi, fail); + } + + // Check for empty arrays, which only require a map transition and no changes + // to the backing store. + __ mov(edi, FieldOperand(edx, JSObject::kElementsOffset)); + __ cmp(edi, Immediate(masm->isolate()->factory()->empty_fixed_array())); + __ j(equal, &only_change_map); + + __ push(eax); + __ push(ebx); + + __ mov(edi, FieldOperand(edi, FixedArray::kLengthOffset)); + + // Allocate new FixedDoubleArray. 
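+ // On allocation failure, the gc_required path below restores the
+ // clobbered registers and falls back to the runtime.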
+ // edx: receiver + // edi: length of source FixedArray (smi-tagged) + AllocationFlags flags = + static_cast<AllocationFlags>(TAG_OBJECT | DOUBLE_ALIGNMENT); + __ Allocate(FixedDoubleArray::kHeaderSize, times_8, edi, + REGISTER_VALUE_IS_SMI, eax, ebx, no_reg, &gc_required, flags); + + // eax: destination FixedDoubleArray + // edi: number of elements + // edx: receiver + __ mov(FieldOperand(eax, HeapObject::kMapOffset), + Immediate(masm->isolate()->factory()->fixed_double_array_map())); + __ mov(FieldOperand(eax, FixedDoubleArray::kLengthOffset), edi); + __ mov(esi, FieldOperand(edx, JSObject::kElementsOffset)); + // Replace receiver's backing store with newly created FixedDoubleArray. + __ mov(FieldOperand(edx, JSObject::kElementsOffset), eax); + __ mov(ebx, eax); + __ RecordWriteField(edx, + JSObject::kElementsOffset, + ebx, + edi, + EMIT_REMEMBERED_SET, + OMIT_SMI_CHECK); + + __ mov(edi, FieldOperand(esi, FixedArray::kLengthOffset)); + + // Prepare for conversion loop. + ExternalReference canonical_the_hole_nan_reference = + ExternalReference::address_of_the_hole_nan(); + __ jmp(&entry); + + // Call into runtime if GC is required. + __ bind(&gc_required); + // Restore registers before jumping into runtime. + __ mov(esi, Operand(ebp, StandardFrameConstants::kContextOffset)); + __ pop(ebx); + __ pop(eax); + __ jmp(fail); + + // Convert and copy elements + // esi: source FixedArray + __ bind(&loop); + __ mov(ebx, FieldOperand(esi, edi, times_2, FixedArray::kHeaderSize)); + // ebx: current element from source + // edi: index of current element + __ JumpIfNotSmi(ebx, &convert_hole); + + // Normal smi, convert it to double and store. + __ SmiUntag(ebx); + __ push(ebx); + __ fild_s(Operand(esp, 0)); + __ pop(ebx); + __ fstp_d(FieldOperand(eax, edi, times_4, FixedDoubleArray::kHeaderSize)); + __ jmp(&entry); + + // Found hole, store hole_nan_as_double instead. + __ bind(&convert_hole); + + if (FLAG_debug_code) { + __ cmp(ebx, masm->isolate()->factory()->the_hole_value()); + __ Assert(equal, kObjectFoundInSmiOnlyArray); + } + + __ fld_d(Operand::StaticVariable(canonical_the_hole_nan_reference)); + __ fstp_d(FieldOperand(eax, edi, times_4, FixedDoubleArray::kHeaderSize)); + + __ bind(&entry); + __ sub(edi, Immediate(Smi::FromInt(1))); + __ j(not_sign, &loop); + + __ pop(ebx); + __ pop(eax); + + // Restore esi. + __ mov(esi, Operand(ebp, StandardFrameConstants::kContextOffset)); + + __ bind(&only_change_map); + // eax: value + // ebx: target map + // Set transitioned map. + __ mov(FieldOperand(edx, HeapObject::kMapOffset), ebx); + __ RecordWriteField(edx, + HeapObject::kMapOffset, + ebx, + edi, + OMIT_REMEMBERED_SET, + OMIT_SMI_CHECK); +} + + +void ElementsTransitionGenerator::GenerateDoubleToObject( + MacroAssembler* masm, + Register receiver, + Register key, + Register value, + Register target_map, + AllocationSiteMode mode, + Label* fail) { + // Return address is on the stack. + DCHECK(receiver.is(edx)); + DCHECK(key.is(ecx)); + DCHECK(value.is(eax)); + DCHECK(target_map.is(ebx)); + + Label loop, entry, convert_hole, gc_required, only_change_map, success; + + if (mode == TRACK_ALLOCATION_SITE) { + __ JumpIfJSArrayHasAllocationMemento(edx, edi, fail); + } + + // Check for empty arrays, which only require a map transition and no changes + // to the backing store. 
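+ // The empty fixed array is a canonical singleton, so an identity compare
+ // of the elements pointer is sufficient.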
+ __ mov(edi, FieldOperand(edx, JSObject::kElementsOffset)); + __ cmp(edi, Immediate(masm->isolate()->factory()->empty_fixed_array())); + __ j(equal, &only_change_map); + + __ push(eax); + __ push(edx); + __ push(ebx); + + __ mov(ebx, FieldOperand(edi, FixedDoubleArray::kLengthOffset)); + + // Allocate new FixedArray. + // ebx: length of source FixedDoubleArray (smi-tagged) + __ lea(edi, Operand(ebx, times_2, FixedArray::kHeaderSize)); + __ Allocate(edi, eax, esi, no_reg, &gc_required, TAG_OBJECT); + + // eax: destination FixedArray + // ebx: number of elements + __ mov(FieldOperand(eax, HeapObject::kMapOffset), + Immediate(masm->isolate()->factory()->fixed_array_map())); + __ mov(FieldOperand(eax, FixedArray::kLengthOffset), ebx); + __ mov(edi, FieldOperand(edx, JSObject::kElementsOffset)); + + __ jmp(&entry); + + // ebx: target map + // edx: receiver + // Set transitioned map. + __ bind(&only_change_map); + __ mov(FieldOperand(edx, HeapObject::kMapOffset), ebx); + __ RecordWriteField(edx, + HeapObject::kMapOffset, + ebx, + edi, + OMIT_REMEMBERED_SET, + OMIT_SMI_CHECK); + __ jmp(&success); + + // Call into runtime if GC is required. + __ bind(&gc_required); + __ mov(esi, Operand(ebp, StandardFrameConstants::kContextOffset)); + __ pop(ebx); + __ pop(edx); + __ pop(eax); + __ jmp(fail); + + // Box doubles into heap numbers. + // edi: source FixedDoubleArray + // eax: destination FixedArray + __ bind(&loop); + // ebx: index of current element (smi-tagged) + uint32_t offset = FixedDoubleArray::kHeaderSize + sizeof(kHoleNanLower32); + __ cmp(FieldOperand(edi, ebx, times_4, offset), Immediate(kHoleNanUpper32)); + __ j(equal, &convert_hole); + + // Non-hole double, copy value into a heap number. + __ AllocateHeapNumber(edx, esi, no_reg, &gc_required); + // edx: new heap number + __ mov(esi, FieldOperand(edi, ebx, times_4, FixedDoubleArray::kHeaderSize)); + __ mov(FieldOperand(edx, HeapNumber::kValueOffset), esi); + __ mov(esi, FieldOperand(edi, ebx, times_4, offset)); + __ mov(FieldOperand(edx, HeapNumber::kValueOffset + kPointerSize), esi); + __ mov(FieldOperand(eax, ebx, times_2, FixedArray::kHeaderSize), edx); + __ mov(esi, ebx); + __ RecordWriteArray(eax, + edx, + esi, + EMIT_REMEMBERED_SET, + OMIT_SMI_CHECK); + __ jmp(&entry, Label::kNear); + + // Replace the-hole NaN with the-hole pointer. + __ bind(&convert_hole); + __ mov(FieldOperand(eax, ebx, times_2, FixedArray::kHeaderSize), + masm->isolate()->factory()->the_hole_value()); + + __ bind(&entry); + __ sub(ebx, Immediate(Smi::FromInt(1))); + __ j(not_sign, &loop); + + __ pop(ebx); + __ pop(edx); + // ebx: target map + // edx: receiver + // Set transitioned map. + __ mov(FieldOperand(edx, HeapObject::kMapOffset), ebx); + __ RecordWriteField(edx, + HeapObject::kMapOffset, + ebx, + edi, + OMIT_REMEMBERED_SET, + OMIT_SMI_CHECK); + // Replace receiver's backing store with newly created and filled FixedArray. + __ mov(FieldOperand(edx, JSObject::kElementsOffset), eax); + __ RecordWriteField(edx, + JSObject::kElementsOffset, + eax, + edi, + EMIT_REMEMBERED_SET, + OMIT_SMI_CHECK); + + // Restore registers. + __ pop(eax); + __ mov(esi, Operand(ebp, StandardFrameConstants::kContextOffset)); + + __ bind(&success); +} + + +void StringCharLoadGenerator::Generate(MacroAssembler* masm, + Factory* factory, + Register string, + Register index, + Register result, + Label* call_runtime) { + // Fetch the instance type of the receiver into result register. 
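+ // The instance type carries the representation bits (sequential, external,
+ // cons, sliced) and the encoding bit (one- vs two-byte) that drive the
+ // dispatch below.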
+ __ mov(result, FieldOperand(string, HeapObject::kMapOffset)); + __ movzx_b(result, FieldOperand(result, Map::kInstanceTypeOffset)); + + // We need special handling for indirect strings. + Label check_sequential; + __ test(result, Immediate(kIsIndirectStringMask)); + __ j(zero, &check_sequential, Label::kNear); + + // Dispatch on the indirect string shape: slice or cons. + Label cons_string; + __ test(result, Immediate(kSlicedNotConsMask)); + __ j(zero, &cons_string, Label::kNear); + + // Handle slices. + Label indirect_string_loaded; + __ mov(result, FieldOperand(string, SlicedString::kOffsetOffset)); + __ SmiUntag(result); + __ add(index, result); + __ mov(string, FieldOperand(string, SlicedString::kParentOffset)); + __ jmp(&indirect_string_loaded, Label::kNear); + + // Handle cons strings. + // Check whether the right hand side is the empty string (i.e. if + // this is really a flat string in a cons string). If that is not + // the case we would rather go to the runtime system now to flatten + // the string. + __ bind(&cons_string); + __ cmp(FieldOperand(string, ConsString::kSecondOffset), + Immediate(factory->empty_string())); + __ j(not_equal, call_runtime); + __ mov(string, FieldOperand(string, ConsString::kFirstOffset)); + + __ bind(&indirect_string_loaded); + __ mov(result, FieldOperand(string, HeapObject::kMapOffset)); + __ movzx_b(result, FieldOperand(result, Map::kInstanceTypeOffset)); + + // Distinguish sequential and external strings. Only these two string + // representations can reach here (slices and flat cons strings have been + // reduced to the underlying sequential or external string). + Label seq_string; + __ bind(&check_sequential); + STATIC_ASSERT(kSeqStringTag == 0); + __ test(result, Immediate(kStringRepresentationMask)); + __ j(zero, &seq_string, Label::kNear); + + // Handle external strings. + Label ascii_external, done; + if (FLAG_debug_code) { + // Assert that we do not have a cons or slice (indirect strings) here. + // Sequential strings have already been ruled out. + __ test(result, Immediate(kIsIndirectStringMask)); + __ Assert(zero, kExternalStringExpectedButNotFound); + } + // Rule out short external strings. + STATIC_ASSERT(kShortExternalStringTag != 0); + __ test_b(result, kShortExternalStringMask); + __ j(not_zero, call_runtime); + // Check encoding. + STATIC_ASSERT(kTwoByteStringTag == 0); + __ test_b(result, kStringEncodingMask); + __ mov(result, FieldOperand(string, ExternalString::kResourceDataOffset)); + __ j(not_equal, &ascii_external, Label::kNear); + // Two-byte string. + __ movzx_w(result, Operand(result, index, times_2, 0)); + __ jmp(&done, Label::kNear); + __ bind(&ascii_external); + // Ascii string. + __ movzx_b(result, Operand(result, index, times_1, 0)); + __ jmp(&done, Label::kNear); + + // Dispatch on the encoding: ASCII or two-byte. + Label ascii; + __ bind(&seq_string); + STATIC_ASSERT((kStringEncodingMask & kOneByteStringTag) != 0); + STATIC_ASSERT((kStringEncodingMask & kTwoByteStringTag) == 0); + __ test(result, Immediate(kStringEncodingMask)); + __ j(not_zero, &ascii, Label::kNear); + + // Two-byte string. + // Load the two-byte character code into the result register. + __ movzx_w(result, FieldOperand(string, + index, + times_2, + SeqTwoByteString::kHeaderSize)); + __ jmp(&done, Label::kNear); + + // Ascii string. + // Load the byte into the result register. 
+ __ bind(&ascii); + __ movzx_b(result, FieldOperand(string, + index, + times_1, + SeqOneByteString::kHeaderSize)); + __ bind(&done); +} + + +#undef __ + + +CodeAgingHelper::CodeAgingHelper() { + DCHECK(young_sequence_.length() == kNoCodeAgeSequenceLength); + CodePatcher patcher(young_sequence_.start(), young_sequence_.length()); + patcher.masm()->push(ebp); + patcher.masm()->mov(ebp, esp); + patcher.masm()->push(esi); + patcher.masm()->push(edi); +} + + +#ifdef DEBUG +bool CodeAgingHelper::IsOld(byte* candidate) const { + return *candidate == kCallOpcode; +} +#endif + + +bool Code::IsYoungSequence(Isolate* isolate, byte* sequence) { + bool result = isolate->code_aging_helper()->IsYoung(sequence); + DCHECK(result || isolate->code_aging_helper()->IsOld(sequence)); + return result; +} + + +void Code::GetCodeAgeAndParity(Isolate* isolate, byte* sequence, Age* age, + MarkingParity* parity) { + if (IsYoungSequence(isolate, sequence)) { + *age = kNoAgeCodeAge; + *parity = NO_MARKING_PARITY; + } else { + sequence++; // Skip the kCallOpcode byte + Address target_address = sequence + *reinterpret_cast<int*>(sequence) + + Assembler::kCallTargetAddressOffset; + Code* stub = GetCodeFromTargetAddress(target_address); + GetCodeAgeAndParity(stub, age, parity); + } +} + + +void Code::PatchPlatformCodeAge(Isolate* isolate, + byte* sequence, + Code::Age age, + MarkingParity parity) { + uint32_t young_length = isolate->code_aging_helper()->young_sequence_length(); + if (age == kNoAgeCodeAge) { + isolate->code_aging_helper()->CopyYoungSequenceTo(sequence); + CpuFeatures::FlushICache(sequence, young_length); + } else { + Code* stub = GetCodeAgeStub(isolate, age, parity); + CodePatcher patcher(sequence, young_length); + patcher.masm()->call(stub->instruction_start(), RelocInfo::NONE32); + } +} + + +} } // namespace v8::internal + +#endif // V8_TARGET_ARCH_X87 diff --git a/deps/v8/src/x87/codegen-x87.h b/deps/v8/src/x87/codegen-x87.h new file mode 100644 index 00000000000..15b2702407f --- /dev/null +++ b/deps/v8/src/x87/codegen-x87.h @@ -0,0 +1,33 @@ +// Copyright 2011 the V8 project authors. All rights reserved. +// Use of this source code is governed by a BSD-style license that can be +// found in the LICENSE file. + +#ifndef V8_X87_CODEGEN_X87_H_ +#define V8_X87_CODEGEN_X87_H_ + +#include "src/ast.h" +#include "src/ic-inl.h" + +namespace v8 { +namespace internal { + + +class StringCharLoadGenerator : public AllStatic { + public: + // Generates the code for handling different string types and loading the + // indexed character into |result|. We expect |index| as untagged input and + // |result| as untagged output. + static void Generate(MacroAssembler* masm, + Factory* factory, + Register string, + Register index, + Register result, + Label* call_runtime); + + private: + DISALLOW_COPY_AND_ASSIGN(StringCharLoadGenerator); +}; + +} } // namespace v8::internal + +#endif // V8_X87_CODEGEN_X87_H_ diff --git a/deps/v8/src/x87/cpu-x87.cc b/deps/v8/src/x87/cpu-x87.cc new file mode 100644 index 00000000000..03816dff6b2 --- /dev/null +++ b/deps/v8/src/x87/cpu-x87.cc @@ -0,0 +1,44 @@ +// Copyright 2011 the V8 project authors. All rights reserved. +// Use of this source code is governed by a BSD-style license that can be +// found in the LICENSE file. + +// CPU specific code for ia32 independent of OS goes here. 
+
+#ifdef __GNUC__
+#include "src/third_party/valgrind/valgrind.h"
+#endif
+
+#include "src/v8.h"
+
+#if V8_TARGET_ARCH_X87
+
+#include "src/assembler.h"
+#include "src/macro-assembler.h"
+
+namespace v8 {
+namespace internal {
+
+void CpuFeatures::FlushICache(void* start, size_t size) {
+ // No need to flush the instruction cache on Intel. On Intel instruction
+ // cache flushing is only necessary when multiple cores are running the same
+ // code simultaneously. V8 (and JavaScript) is single threaded and when code
+ // is patched on an Intel CPU the core performing the patching will have its
+ // own instruction cache updated automatically.
+
+ // If flushing of the instruction cache becomes necessary Windows has the
+ // API function FlushInstructionCache.
+
+ // By default, valgrind only checks the stack for writes that might need to
+ // invalidate already cached translated code. This leads to random
+ // instability when code patches or moves sometimes go unnoticed. One
+ // solution is to run valgrind with --smc-check=all, but this comes at a big
+ // performance cost. We can notify valgrind to invalidate its cache.
+#ifdef VALGRIND_DISCARD_TRANSLATIONS
+ unsigned res = VALGRIND_DISCARD_TRANSLATIONS(start, size);
+ USE(res);
+#endif
+}
+
+} } // namespace v8::internal
+
+#endif // V8_TARGET_ARCH_X87
diff --git a/deps/v8/src/x87/debug-x87.cc b/deps/v8/src/x87/debug-x87.cc
new file mode 100644
index 00000000000..3f94edd2177
--- /dev/null
+++ b/deps/v8/src/x87/debug-x87.cc
@@ -0,0 +1,326 @@
+// Copyright 2012 the V8 project authors. All rights reserved.
+// Use of this source code is governed by a BSD-style license that can be
+// found in the LICENSE file.
+
+#include "src/v8.h"
+
+#if V8_TARGET_ARCH_X87
+
+#include "src/codegen.h"
+#include "src/debug.h"
+
+
+namespace v8 {
+namespace internal {
+
+bool BreakLocationIterator::IsDebugBreakAtReturn() {
+ return Debug::IsDebugBreakAtReturn(rinfo());
+}
+
+
+// Patch the JS frame exit code with a debug break call. See
+// CodeGenerator::VisitReturnStatement and VirtualFrame::Exit in codegen-x87.cc
+// for the precise return instruction sequence.
+void BreakLocationIterator::SetDebugBreakAtReturn() {
+ DCHECK(Assembler::kJSReturnSequenceLength >=
+ Assembler::kCallInstructionLength);
+ rinfo()->PatchCodeWithCall(
+ debug_info_->GetIsolate()->builtins()->Return_DebugBreak()->entry(),
+ Assembler::kJSReturnSequenceLength - Assembler::kCallInstructionLength);
+}
+
+
+// Restore the JS frame exit code.
+void BreakLocationIterator::ClearDebugBreakAtReturn() {
+ rinfo()->PatchCode(original_rinfo()->pc(),
+ Assembler::kJSReturnSequenceLength);
+}
+
+
+// A debug break in the frame exit code is identified by the JS frame exit code
+// having been patched with a call instruction.
+bool Debug::IsDebugBreakAtReturn(RelocInfo* rinfo) {
+ DCHECK(RelocInfo::IsJSReturn(rinfo->rmode()));
+ return rinfo->IsPatchedReturnSequence();
+}
+
+
+bool BreakLocationIterator::IsDebugBreakAtSlot() {
+ DCHECK(IsDebugBreakSlot());
+ // Check whether the debug break slot instructions have been patched.
+ return rinfo()->IsPatchedDebugBreakSlotSequence(); +} + + +void BreakLocationIterator::SetDebugBreakAtSlot() { + DCHECK(IsDebugBreakSlot()); + Isolate* isolate = debug_info_->GetIsolate(); + rinfo()->PatchCodeWithCall( + isolate->builtins()->Slot_DebugBreak()->entry(), + Assembler::kDebugBreakSlotLength - Assembler::kCallInstructionLength); +} + + +void BreakLocationIterator::ClearDebugBreakAtSlot() { + DCHECK(IsDebugBreakSlot()); + rinfo()->PatchCode(original_rinfo()->pc(), Assembler::kDebugBreakSlotLength); +} + + +#define __ ACCESS_MASM(masm) + +static void Generate_DebugBreakCallHelper(MacroAssembler* masm, + RegList object_regs, + RegList non_object_regs, + bool convert_call_to_jmp) { + // Enter an internal frame. + { + FrameScope scope(masm, StackFrame::INTERNAL); + + // Load padding words on stack. + for (int i = 0; i < LiveEdit::kFramePaddingInitialSize; i++) { + __ push(Immediate(Smi::FromInt(LiveEdit::kFramePaddingValue))); + } + __ push(Immediate(Smi::FromInt(LiveEdit::kFramePaddingInitialSize))); + + // Store the registers containing live values on the expression stack to + // make sure that these are correctly updated during GC. Non object values + // are stored as a smi causing it to be untouched by GC. + DCHECK((object_regs & ~kJSCallerSaved) == 0); + DCHECK((non_object_regs & ~kJSCallerSaved) == 0); + DCHECK((object_regs & non_object_regs) == 0); + for (int i = 0; i < kNumJSCallerSaved; i++) { + int r = JSCallerSavedCode(i); + Register reg = { r }; + if ((object_regs & (1 << r)) != 0) { + __ push(reg); + } + if ((non_object_regs & (1 << r)) != 0) { + if (FLAG_debug_code) { + __ test(reg, Immediate(0xc0000000)); + __ Assert(zero, kUnableToEncodeValueAsSmi); + } + __ SmiTag(reg); + __ push(reg); + } + } + +#ifdef DEBUG + __ RecordComment("// Calling from debug break to runtime - come in - over"); +#endif + __ Move(eax, Immediate(0)); // No arguments. + __ mov(ebx, Immediate(ExternalReference::debug_break(masm->isolate()))); + + CEntryStub ceb(masm->isolate(), 1); + __ CallStub(&ceb); + + // Automatically find register that could be used after register restore. + // We need one register for padding skip instructions. + Register unused_reg = { -1 }; + + // Restore the register values containing object pointers from the + // expression stack. + for (int i = kNumJSCallerSaved; --i >= 0;) { + int r = JSCallerSavedCode(i); + Register reg = { r }; + if (FLAG_debug_code) { + __ Move(reg, Immediate(kDebugZapValue)); + } + bool taken = reg.code() == esi.code(); + if ((object_regs & (1 << r)) != 0) { + __ pop(reg); + taken = true; + } + if ((non_object_regs & (1 << r)) != 0) { + __ pop(reg); + __ SmiUntag(reg); + taken = true; + } + if (!taken) { + unused_reg = reg; + } + } + + DCHECK(unused_reg.code() != -1); + + // Read current padding counter and skip corresponding number of words. + __ pop(unused_reg); + // We divide stored value by 2 (untagging) and multiply it by word's size. + STATIC_ASSERT(kSmiTagSize == 1 && kSmiShiftSize == 0); + __ lea(esp, Operand(esp, unused_reg, times_half_pointer_size, 0)); + + // Get rid of the internal frame. + } + + // If this call did not replace a call but patched other code then there will + // be an unwanted return address left on the stack. Here we get rid of that. + if (convert_call_to_jmp) { + __ add(esp, Immediate(kPointerSize)); + } + + // Now that the break point has been handled, resume normal execution by + // jumping to the target address intended by the caller and that was + // overwritten by the address of DebugBreakXXX. 
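+ // That address was saved off in a per-isolate debug slot; load it and jump.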
+ ExternalReference after_break_target = + ExternalReference::debug_after_break_target_address(masm->isolate()); + __ jmp(Operand::StaticVariable(after_break_target)); +} + + +void DebugCodegen::GenerateCallICStubDebugBreak(MacroAssembler* masm) { + // Register state for CallICStub + // ----------- S t a t e ------------- + // -- edx : type feedback slot (smi) + // -- edi : function + // ----------------------------------- + Generate_DebugBreakCallHelper(masm, edx.bit() | edi.bit(), + 0, false); +} + + +void DebugCodegen::GenerateLoadICDebugBreak(MacroAssembler* masm) { + // Register state for IC load call (from ic-x87.cc). + Register receiver = LoadIC::ReceiverRegister(); + Register name = LoadIC::NameRegister(); + Generate_DebugBreakCallHelper(masm, receiver.bit() | name.bit(), 0, false); +} + + +void DebugCodegen::GenerateStoreICDebugBreak(MacroAssembler* masm) { + // Register state for IC store call (from ic-x87.cc). + Register receiver = StoreIC::ReceiverRegister(); + Register name = StoreIC::NameRegister(); + Register value = StoreIC::ValueRegister(); + Generate_DebugBreakCallHelper( + masm, receiver.bit() | name.bit() | value.bit(), 0, false); +} + + +void DebugCodegen::GenerateKeyedLoadICDebugBreak(MacroAssembler* masm) { + // Register state for keyed IC load call (from ic-x87.cc). + GenerateLoadICDebugBreak(masm); +} + + +void DebugCodegen::GenerateKeyedStoreICDebugBreak(MacroAssembler* masm) { + // Register state for keyed IC store call (from ic-x87.cc). + Register receiver = KeyedStoreIC::ReceiverRegister(); + Register name = KeyedStoreIC::NameRegister(); + Register value = KeyedStoreIC::ValueRegister(); + Generate_DebugBreakCallHelper( + masm, receiver.bit() | name.bit() | value.bit(), 0, false); +} + + +void DebugCodegen::GenerateCompareNilICDebugBreak(MacroAssembler* masm) { + // Register state for CompareNil IC + // ----------- S t a t e ------------- + // -- eax : value + // ----------------------------------- + Generate_DebugBreakCallHelper(masm, eax.bit(), 0, false); +} + + +void DebugCodegen::GenerateReturnDebugBreak(MacroAssembler* masm) { + // Register state just before return from JS function (from codegen-x87.cc). + // ----------- S t a t e ------------- + // -- eax: return value + // ----------------------------------- + Generate_DebugBreakCallHelper(masm, eax.bit(), 0, true); +} + + +void DebugCodegen::GenerateCallFunctionStubDebugBreak(MacroAssembler* masm) { + // Register state for CallFunctionStub (from code-stubs-x87.cc). + // ----------- S t a t e ------------- + // -- edi: function + // ----------------------------------- + Generate_DebugBreakCallHelper(masm, edi.bit(), 0, false); +} + + +void DebugCodegen::GenerateCallConstructStubDebugBreak(MacroAssembler* masm) { + // Register state for CallConstructStub (from code-stubs-x87.cc). + // eax is the actual number of arguments not encoded as a smi see comment + // above IC call. + // ----------- S t a t e ------------- + // -- eax: number of arguments (not smi) + // -- edi: constructor function + // ----------------------------------- + // The number of arguments in eax is not smi encoded. + Generate_DebugBreakCallHelper(masm, edi.bit(), eax.bit(), false); +} + + +void DebugCodegen::GenerateCallConstructStubRecordDebugBreak( + MacroAssembler* masm) { + // Register state for CallConstructStub (from code-stubs-x87.cc). + // eax is the actual number of arguments not encoded as a smi see comment + // above IC call. 
+ // ----------- S t a t e ------------- + // -- eax: number of arguments (not smi) + // -- ebx: feedback array + // -- edx: feedback slot (smi) + // -- edi: constructor function + // ----------------------------------- + // The number of arguments in eax is not smi encoded. + Generate_DebugBreakCallHelper(masm, ebx.bit() | edx.bit() | edi.bit(), + eax.bit(), false); +} + + +void DebugCodegen::GenerateSlot(MacroAssembler* masm) { + // Generate enough nop's to make space for a call instruction. + Label check_codesize; + __ bind(&check_codesize); + __ RecordDebugBreakSlot(); + __ Nop(Assembler::kDebugBreakSlotLength); + DCHECK_EQ(Assembler::kDebugBreakSlotLength, + masm->SizeOfCodeGeneratedSince(&check_codesize)); +} + + +void DebugCodegen::GenerateSlotDebugBreak(MacroAssembler* masm) { + // In the places where a debug break slot is inserted no registers can contain + // object pointers. + Generate_DebugBreakCallHelper(masm, 0, 0, true); +} + + +void DebugCodegen::GeneratePlainReturnLiveEdit(MacroAssembler* masm) { + masm->ret(0); +} + + +void DebugCodegen::GenerateFrameDropperLiveEdit(MacroAssembler* masm) { + ExternalReference restarter_frame_function_slot = + ExternalReference::debug_restarter_frame_function_pointer_address( + masm->isolate()); + __ mov(Operand::StaticVariable(restarter_frame_function_slot), Immediate(0)); + + // We do not know our frame height, but set esp based on ebp. + __ lea(esp, Operand(ebp, -1 * kPointerSize)); + + __ pop(edi); // Function. + __ pop(ebp); + + // Load context from the function. + __ mov(esi, FieldOperand(edi, JSFunction::kContextOffset)); + + // Get function code. + __ mov(edx, FieldOperand(edi, JSFunction::kSharedFunctionInfoOffset)); + __ mov(edx, FieldOperand(edx, SharedFunctionInfo::kCodeOffset)); + __ lea(edx, FieldOperand(edx, Code::kHeaderSize)); + + // Re-run JSFunction, edi is function, esi is context. + __ jmp(edx); +} + + +const bool LiveEdit::kFrameDropperSupported = true; + +#undef __ + +} } // namespace v8::internal + +#endif // V8_TARGET_ARCH_X87 diff --git a/deps/v8/src/x87/deoptimizer-x87.cc b/deps/v8/src/x87/deoptimizer-x87.cc new file mode 100644 index 00000000000..96698a13259 --- /dev/null +++ b/deps/v8/src/x87/deoptimizer-x87.cc @@ -0,0 +1,403 @@ +// Copyright 2012 the V8 project authors. All rights reserved. +// Use of this source code is governed by a BSD-style license that can be +// found in the LICENSE file. + +#include "src/v8.h" + +#if V8_TARGET_ARCH_X87 + +#include "src/codegen.h" +#include "src/deoptimizer.h" +#include "src/full-codegen.h" +#include "src/safepoint-table.h" + +namespace v8 { +namespace internal { + +const int Deoptimizer::table_entry_size_ = 10; + + +int Deoptimizer::patch_size() { + return Assembler::kCallInstructionLength; +} + + +void Deoptimizer::EnsureRelocSpaceForLazyDeoptimization(Handle<Code> code) { + Isolate* isolate = code->GetIsolate(); + HandleScope scope(isolate); + + // Compute the size of relocation information needed for the code + // patching in Deoptimizer::DeoptimizeFunction. + int min_reloc_size = 0; + int prev_pc_offset = 0; + DeoptimizationInputData* deopt_data = + DeoptimizationInputData::cast(code->deoptimization_data()); + for (int i = 0; i < deopt_data->DeoptCount(); i++) { + int pc_offset = deopt_data->Pc(i)->value(); + if (pc_offset == -1) continue; + DCHECK_GE(pc_offset, prev_pc_offset); + int pc_delta = pc_offset - prev_pc_offset; + // We use RUNTIME_ENTRY reloc info which has a size of 2 bytes + // if encodable with small pc delta encoding and up to 6 bytes + // otherwise. 
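+ // Accumulate the exact per-deopt-point cost accordingly: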
+ if (pc_delta <= RelocInfo::kMaxSmallPCDelta) {
+ min_reloc_size += 2;
+ } else {
+ min_reloc_size += 6;
+ }
+ prev_pc_offset = pc_offset;
+ }
+
+ // If the relocation information is not big enough, we create a new
+ // relocation info object that is padded with comments to make it
+ // big enough for lazy deoptimization.
+ int reloc_length = code->relocation_info()->length();
+ if (min_reloc_size > reloc_length) {
+ int comment_reloc_size = RelocInfo::kMinRelocCommentSize;
+ // Padding needed.
+ int min_padding = min_reloc_size - reloc_length;
+ // Number of comments needed to take up at least that much space.
+ int additional_comments =
+ (min_padding + comment_reloc_size - 1) / comment_reloc_size;
+ // Actual padding size.
+ int padding = additional_comments * comment_reloc_size;
+ // Allocate new relocation info and copy old relocation to the end
+ // of the new relocation info array because relocation info is
+ // written and read backwards.
+ Factory* factory = isolate->factory();
+ Handle<ByteArray> new_reloc =
+ factory->NewByteArray(reloc_length + padding, TENURED);
+ MemCopy(new_reloc->GetDataStartAddress() + padding,
+ code->relocation_info()->GetDataStartAddress(), reloc_length);
+ // Create a relocation writer to write the comments in the padding
+ // space. Use position 0 for everything to ensure short encoding.
+ RelocInfoWriter reloc_info_writer(
+ new_reloc->GetDataStartAddress() + padding, 0);
+ intptr_t comment_string
+ = reinterpret_cast<intptr_t>(RelocInfo::kFillerCommentString);
+ RelocInfo rinfo(0, RelocInfo::COMMENT, comment_string, NULL);
+ for (int i = 0; i < additional_comments; ++i) {
+#ifdef DEBUG
+ byte* pos_before = reloc_info_writer.pos();
+#endif
+ reloc_info_writer.Write(&rinfo);
+ DCHECK(RelocInfo::kMinRelocCommentSize ==
+ pos_before - reloc_info_writer.pos());
+ }
+ // Replace relocation information on the code object.
+ code->set_relocation_info(*new_reloc);
+ }
+}
+
+
+void Deoptimizer::PatchCodeForDeoptimization(Isolate* isolate, Code* code) {
+ Address code_start_address = code->instruction_start();
+
+ if (FLAG_zap_code_space) {
+ // Fail hard and early if we enter this code object again.
+ byte* pointer = code->FindCodeAgeSequence();
+ if (pointer != NULL) {
+ pointer += kNoCodeAgeSequenceLength;
+ } else {
+ pointer = code->instruction_start();
+ }
+ CodePatcher patcher(pointer, 1);
+ patcher.masm()->int3();
+
+ DeoptimizationInputData* data =
+ DeoptimizationInputData::cast(code->deoptimization_data());
+ int osr_offset = data->OsrPcOffset()->value();
+ if (osr_offset > 0) {
+ CodePatcher osr_patcher(code->instruction_start() + osr_offset, 1);
+ osr_patcher.masm()->int3();
+ }
+ }
+
+ // We will overwrite the code's relocation info in-place. Relocation info
+ // is written backward. The relocation info is the payload of a byte
+ // array. Later on we will slide this to the start of the byte array and
+ // create a filler object in the remaining space.
+ ByteArray* reloc_info = code->relocation_info();
+ Address reloc_end_address = reloc_info->address() + reloc_info->Size();
+ RelocInfoWriter reloc_info_writer(reloc_end_address, code_start_address);
+
+ // Since the call uses a relative encoding, we write new
+ // reloc info. We do not need any of the existing reloc info because the
+ // existing code will not be used again (we zap it in debug builds).
+ //
+ // Emit call to lazy deoptimization at all lazy deopt points.
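+ // Each patched site becomes a near call; its RUNTIME_ENTRY reloc entry is
+ // written (backwards) through the writer set up above.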
+ DeoptimizationInputData* deopt_data = + DeoptimizationInputData::cast(code->deoptimization_data()); +#ifdef DEBUG + Address prev_call_address = NULL; +#endif + // For each LLazyBailout instruction insert a call to the corresponding + // deoptimization entry. + for (int i = 0; i < deopt_data->DeoptCount(); i++) { + if (deopt_data->Pc(i)->value() == -1) continue; + // Patch lazy deoptimization entry. + Address call_address = code_start_address + deopt_data->Pc(i)->value(); + CodePatcher patcher(call_address, patch_size()); + Address deopt_entry = GetDeoptimizationEntry(isolate, i, LAZY); + patcher.masm()->call(deopt_entry, RelocInfo::NONE32); + // We use RUNTIME_ENTRY for deoptimization bailouts. + RelocInfo rinfo(call_address + 1, // 1 after the call opcode. + RelocInfo::RUNTIME_ENTRY, + reinterpret_cast<intptr_t>(deopt_entry), + NULL); + reloc_info_writer.Write(&rinfo); + DCHECK_GE(reloc_info_writer.pos(), + reloc_info->address() + ByteArray::kHeaderSize); + DCHECK(prev_call_address == NULL || + call_address >= prev_call_address + patch_size()); + DCHECK(call_address + patch_size() <= code->instruction_end()); +#ifdef DEBUG + prev_call_address = call_address; +#endif + } + + // Move the relocation info to the beginning of the byte array. + int new_reloc_size = reloc_end_address - reloc_info_writer.pos(); + MemMove(code->relocation_start(), reloc_info_writer.pos(), new_reloc_size); + + // The relocation info is in place, update the size. + reloc_info->set_length(new_reloc_size); + + // Handle the junk part after the new relocation info. We will create + // a non-live object in the extra space at the end of the former reloc info. + Address junk_address = reloc_info->address() + reloc_info->Size(); + DCHECK(junk_address <= reloc_end_address); + isolate->heap()->CreateFillerObjectAt(junk_address, + reloc_end_address - junk_address); +} + + +void Deoptimizer::FillInputFrame(Address tos, JavaScriptFrame* frame) { + // Set the register values. The values are not important as there are no + // callee saved registers in JavaScript frames, so all registers are + // spilled. Registers ebp and esp are set to the correct values though. + + for (int i = 0; i < Register::kNumRegisters; i++) { + input_->SetRegister(i, i * 4); + } + input_->SetRegister(esp.code(), reinterpret_cast<intptr_t>(frame->sp())); + input_->SetRegister(ebp.code(), reinterpret_cast<intptr_t>(frame->fp())); + for (int i = 0; i < DoubleRegister::NumAllocatableRegisters(); i++) { + input_->SetDoubleRegister(i, 0.0); + } + + // Fill the frame content from the actual data on the frame. + for (unsigned i = 0; i < input_->GetFrameSize(); i += kPointerSize) { + input_->SetFrameSlot(i, Memory::uint32_at(tos + i)); + } +} + + +void Deoptimizer::SetPlatformCompiledStubRegisters( + FrameDescription* output_frame, CodeStubInterfaceDescriptor* descriptor) { + intptr_t handler = + reinterpret_cast<intptr_t>(descriptor->deoptimization_handler()); + int params = descriptor->GetHandlerParameterCount(); + output_frame->SetRegister(eax.code(), params); + output_frame->SetRegister(ebx.code(), handler); +} + + +void Deoptimizer::CopyDoubleRegisters(FrameDescription* output_frame) { + // Do nothing for X87. 
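+ // The X87 port does not materialize double registers in the input frame
+ // (FillInputFrame zeroes them), so there is nothing to copy.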
+ return;
+}
+
+
+bool Deoptimizer::HasAlignmentPadding(JSFunction* function) {
+ int parameter_count = function->shared()->formal_parameter_count() + 1;
+ unsigned input_frame_size = input_->GetFrameSize();
+ unsigned alignment_state_offset =
+ input_frame_size - parameter_count * kPointerSize -
+ StandardFrameConstants::kFixedFrameSize -
+ kPointerSize;
+ DCHECK(JavaScriptFrameConstants::kDynamicAlignmentStateOffset ==
+ JavaScriptFrameConstants::kLocal0Offset);
+ int32_t alignment_state = input_->GetFrameSlot(alignment_state_offset);
+ return (alignment_state == kAlignmentPaddingPushed);
+}
+
+
+#define __ masm()->
+
+void Deoptimizer::EntryGenerator::Generate() {
+ GeneratePrologue();
+
+ // Save all general purpose registers before messing with them.
+ const int kNumberOfRegisters = Register::kNumRegisters;
+ __ pushad();
+
+ const int kSavedRegistersAreaSize = kNumberOfRegisters * kPointerSize;
+
+ // Get the bailout id from the stack.
+ __ mov(ebx, Operand(esp, kSavedRegistersAreaSize));
+
+ // Get the address of the location in the code object
+ // and compute the fp-to-sp delta in register edx.
+ __ mov(ecx, Operand(esp, kSavedRegistersAreaSize + 1 * kPointerSize));
+ __ lea(edx, Operand(esp, kSavedRegistersAreaSize + 2 * kPointerSize));
+
+ __ sub(edx, ebp);
+ __ neg(edx);
+
+ // Allocate a new deoptimizer object.
+ __ PrepareCallCFunction(6, eax);
+ __ mov(eax, Operand(ebp, JavaScriptFrameConstants::kFunctionOffset));
+ __ mov(Operand(esp, 0 * kPointerSize), eax); // Function.
+ __ mov(Operand(esp, 1 * kPointerSize), Immediate(type())); // Bailout type.
+ __ mov(Operand(esp, 2 * kPointerSize), ebx); // Bailout id.
+ __ mov(Operand(esp, 3 * kPointerSize), ecx); // Code address or 0.
+ __ mov(Operand(esp, 4 * kPointerSize), edx); // Fp-to-sp delta.
+ __ mov(Operand(esp, 5 * kPointerSize),
+ Immediate(ExternalReference::isolate_address(isolate())));
+ {
+ AllowExternalCallThatCantCauseGC scope(masm());
+ __ CallCFunction(ExternalReference::new_deoptimizer_function(isolate()), 6);
+ }
+
+ // Preserve deoptimizer object in register eax and get the input
+ // frame descriptor pointer.
+ __ mov(ebx, Operand(eax, Deoptimizer::input_offset()));
+
+ // Fill in the input registers.
+ for (int i = kNumberOfRegisters - 1; i >= 0; i--) {
+ int offset = (i * kPointerSize) + FrameDescription::registers_offset();
+ __ pop(Operand(ebx, offset));
+ }
+
+ // Clear all FPU exceptions.
+ // TODO(ulan): Find out why the TOP register is not zero here in some cases,
+ // and check that the generated code never deoptimizes with unbalanced stack.
+ __ fnclex();
+
+ // Remove the bailout id and the return address.
+ __ add(esp, Immediate(2 * kPointerSize));
+
+ // Compute a pointer to the unwinding limit in register ecx; that is
+ // the first stack slot not part of the input frame.
+ __ mov(ecx, Operand(ebx, FrameDescription::frame_size_offset()));
+ __ add(ecx, esp);
+
+ // Unwind the stack down to - but not including - the unwinding
+ // limit and copy the contents of the activation frame to the input
+ // frame description.
+ __ lea(edx, Operand(ebx, FrameDescription::frame_content_offset()));
+ Label pop_loop_header;
+ __ jmp(&pop_loop_header);
+ Label pop_loop;
+ __ bind(&pop_loop);
+ __ pop(Operand(edx, 0));
+ __ add(edx, Immediate(sizeof(uint32_t)));
+ __ bind(&pop_loop_header);
+ __ cmp(ecx, esp);
+ __ j(not_equal, &pop_loop);
+
+ // Compute the output frame in the deoptimizer.
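+ // ComputeOutputFrames is a C function; eax (the deoptimizer object) must
+ // survive the call, hence the push/pop around it.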
+ __ push(eax); + __ PrepareCallCFunction(1, ebx); + __ mov(Operand(esp, 0 * kPointerSize), eax); + { + AllowExternalCallThatCantCauseGC scope(masm()); + __ CallCFunction( + ExternalReference::compute_output_frames_function(isolate()), 1); + } + __ pop(eax); + + // If frame was dynamically aligned, pop padding. + Label no_padding; + __ cmp(Operand(eax, Deoptimizer::has_alignment_padding_offset()), + Immediate(0)); + __ j(equal, &no_padding); + __ pop(ecx); + if (FLAG_debug_code) { + __ cmp(ecx, Immediate(kAlignmentZapValue)); + __ Assert(equal, kAlignmentMarkerExpected); + } + __ bind(&no_padding); + + // Replace the current frame with the output frames. + Label outer_push_loop, inner_push_loop, + outer_loop_header, inner_loop_header; + // Outer loop state: eax = current FrameDescription**, edx = one past the + // last FrameDescription**. + __ mov(edx, Operand(eax, Deoptimizer::output_count_offset())); + __ mov(eax, Operand(eax, Deoptimizer::output_offset())); + __ lea(edx, Operand(eax, edx, times_4, 0)); + __ jmp(&outer_loop_header); + __ bind(&outer_push_loop); + // Inner loop state: ebx = current FrameDescription*, ecx = loop index. + __ mov(ebx, Operand(eax, 0)); + __ mov(ecx, Operand(ebx, FrameDescription::frame_size_offset())); + __ jmp(&inner_loop_header); + __ bind(&inner_push_loop); + __ sub(ecx, Immediate(sizeof(uint32_t))); + __ push(Operand(ebx, ecx, times_1, FrameDescription::frame_content_offset())); + __ bind(&inner_loop_header); + __ test(ecx, ecx); + __ j(not_zero, &inner_push_loop); + __ add(eax, Immediate(kPointerSize)); + __ bind(&outer_loop_header); + __ cmp(eax, edx); + __ j(below, &outer_push_loop); + + // Push state, pc, and continuation from the last output frame. + __ push(Operand(ebx, FrameDescription::state_offset())); + __ push(Operand(ebx, FrameDescription::pc_offset())); + __ push(Operand(ebx, FrameDescription::continuation_offset())); + + + // Push the registers from the last output frame. + for (int i = 0; i < kNumberOfRegisters; i++) { + int offset = (i * kPointerSize) + FrameDescription::registers_offset(); + __ push(Operand(ebx, offset)); + } + + // Restore the registers from the stack. + __ popad(); + + // Return to the continuation point. + __ ret(0); +} + + +void Deoptimizer::TableEntryGenerator::GeneratePrologue() { + // Create a sequence of deoptimization entries. + Label done; + for (int i = 0; i < count(); i++) { + int start = masm()->pc_offset(); + USE(start); + __ push_imm32(i); + __ jmp(&done); + DCHECK(masm()->pc_offset() - start == table_entry_size_); + } + __ bind(&done); +} + + +void FrameDescription::SetCallerPc(unsigned offset, intptr_t value) { + SetFrameSlot(offset, value); +} + + +void FrameDescription::SetCallerFp(unsigned offset, intptr_t value) { + SetFrameSlot(offset, value); +} + + +void FrameDescription::SetCallerConstantPool(unsigned offset, intptr_t value) { + // No out-of-line constant pool support. + UNREACHABLE(); +} + + +#undef __ + + +} } // namespace v8::internal + +#endif // V8_TARGET_ARCH_X87 diff --git a/deps/v8/src/x87/disasm-x87.cc b/deps/v8/src/x87/disasm-x87.cc new file mode 100644 index 00000000000..53a8c290670 --- /dev/null +++ b/deps/v8/src/x87/disasm-x87.cc @@ -0,0 +1,1775 @@ +// Copyright 2011 the V8 project authors. All rights reserved. +// Use of this source code is governed by a BSD-style license that can be +// found in the LICENSE file. 
+ +#include <assert.h> +#include <stdarg.h> +#include <stdio.h> + +#include "src/v8.h" + +#if V8_TARGET_ARCH_X87 + +#include "src/disasm.h" + +namespace disasm { + +enum OperandOrder { + UNSET_OP_ORDER = 0, + REG_OPER_OP_ORDER, + OPER_REG_OP_ORDER +}; + + +//------------------------------------------------------------------ +// Tables +//------------------------------------------------------------------ +struct ByteMnemonic { + int b; // -1 terminates, otherwise must be in range (0..255) + const char* mnem; + OperandOrder op_order_; +}; + + +static const ByteMnemonic two_operands_instr[] = { + {0x01, "add", OPER_REG_OP_ORDER}, + {0x03, "add", REG_OPER_OP_ORDER}, + {0x09, "or", OPER_REG_OP_ORDER}, + {0x0B, "or", REG_OPER_OP_ORDER}, + {0x1B, "sbb", REG_OPER_OP_ORDER}, + {0x21, "and", OPER_REG_OP_ORDER}, + {0x23, "and", REG_OPER_OP_ORDER}, + {0x29, "sub", OPER_REG_OP_ORDER}, + {0x2A, "subb", REG_OPER_OP_ORDER}, + {0x2B, "sub", REG_OPER_OP_ORDER}, + {0x31, "xor", OPER_REG_OP_ORDER}, + {0x33, "xor", REG_OPER_OP_ORDER}, + {0x38, "cmpb", OPER_REG_OP_ORDER}, + {0x3A, "cmpb", REG_OPER_OP_ORDER}, + {0x3B, "cmp", REG_OPER_OP_ORDER}, + {0x84, "test_b", REG_OPER_OP_ORDER}, + {0x85, "test", REG_OPER_OP_ORDER}, + {0x87, "xchg", REG_OPER_OP_ORDER}, + {0x8A, "mov_b", REG_OPER_OP_ORDER}, + {0x8B, "mov", REG_OPER_OP_ORDER}, + {0x8D, "lea", REG_OPER_OP_ORDER}, + {-1, "", UNSET_OP_ORDER} +}; + + +static const ByteMnemonic zero_operands_instr[] = { + {0xC3, "ret", UNSET_OP_ORDER}, + {0xC9, "leave", UNSET_OP_ORDER}, + {0x90, "nop", UNSET_OP_ORDER}, + {0xF4, "hlt", UNSET_OP_ORDER}, + {0xCC, "int3", UNSET_OP_ORDER}, + {0x60, "pushad", UNSET_OP_ORDER}, + {0x61, "popad", UNSET_OP_ORDER}, + {0x9C, "pushfd", UNSET_OP_ORDER}, + {0x9D, "popfd", UNSET_OP_ORDER}, + {0x9E, "sahf", UNSET_OP_ORDER}, + {0x99, "cdq", UNSET_OP_ORDER}, + {0x9B, "fwait", UNSET_OP_ORDER}, + {0xFC, "cld", UNSET_OP_ORDER}, + {0xAB, "stos", UNSET_OP_ORDER}, + {-1, "", UNSET_OP_ORDER} +}; + + +static const ByteMnemonic call_jump_instr[] = { + {0xE8, "call", UNSET_OP_ORDER}, + {0xE9, "jmp", UNSET_OP_ORDER}, + {-1, "", UNSET_OP_ORDER} +}; + + +static const ByteMnemonic short_immediate_instr[] = { + {0x05, "add", UNSET_OP_ORDER}, + {0x0D, "or", UNSET_OP_ORDER}, + {0x15, "adc", UNSET_OP_ORDER}, + {0x25, "and", UNSET_OP_ORDER}, + {0x2D, "sub", UNSET_OP_ORDER}, + {0x35, "xor", UNSET_OP_ORDER}, + {0x3D, "cmp", UNSET_OP_ORDER}, + {-1, "", UNSET_OP_ORDER} +}; + + +// Generally we don't want to generate these because they are subject to partial +// register stalls. They are included for completeness and because the cmp +// variant is used by the RecordWrite stub. Because it does not update the +// register it is not subject to partial register stalls. 
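+// For example, 'or al, 0x1' writes only the low byte of eax; a subsequent
+// read of the full eax register stalls on older IA-32 pipelines until the
+// partial write retires. 'cmp al, 0x1' only reads al and sets flags, so it
+// avoids the stall.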
+static ByteMnemonic byte_immediate_instr[] = { + {0x0c, "or", UNSET_OP_ORDER}, + {0x24, "and", UNSET_OP_ORDER}, + {0x34, "xor", UNSET_OP_ORDER}, + {0x3c, "cmp", UNSET_OP_ORDER}, + {-1, "", UNSET_OP_ORDER} +}; + + +static const char* const jump_conditional_mnem[] = { + /*0*/ "jo", "jno", "jc", "jnc", + /*4*/ "jz", "jnz", "jna", "ja", + /*8*/ "js", "jns", "jpe", "jpo", + /*12*/ "jl", "jnl", "jng", "jg" +}; + + +static const char* const set_conditional_mnem[] = { + /*0*/ "seto", "setno", "setc", "setnc", + /*4*/ "setz", "setnz", "setna", "seta", + /*8*/ "sets", "setns", "setpe", "setpo", + /*12*/ "setl", "setnl", "setng", "setg" +}; + + +static const char* const conditional_move_mnem[] = { + /*0*/ "cmovo", "cmovno", "cmovc", "cmovnc", + /*4*/ "cmovz", "cmovnz", "cmovna", "cmova", + /*8*/ "cmovs", "cmovns", "cmovpe", "cmovpo", + /*12*/ "cmovl", "cmovnl", "cmovng", "cmovg" +}; + + +enum InstructionType { + NO_INSTR, + ZERO_OPERANDS_INSTR, + TWO_OPERANDS_INSTR, + JUMP_CONDITIONAL_SHORT_INSTR, + REGISTER_INSTR, + MOVE_REG_INSTR, + CALL_JUMP_INSTR, + SHORT_IMMEDIATE_INSTR, + BYTE_IMMEDIATE_INSTR +}; + + +struct InstructionDesc { + const char* mnem; + InstructionType type; + OperandOrder op_order_; +}; + + +class InstructionTable { + public: + InstructionTable(); + const InstructionDesc& Get(byte x) const { return instructions_[x]; } + static InstructionTable* get_instance() { + static InstructionTable table; + return &table; + } + + private: + InstructionDesc instructions_[256]; + void Clear(); + void Init(); + void CopyTable(const ByteMnemonic bm[], InstructionType type); + void SetTableRange(InstructionType type, + byte start, + byte end, + const char* mnem); + void AddJumpConditionalShort(); +}; + + +InstructionTable::InstructionTable() { + Clear(); + Init(); +} + + +void InstructionTable::Clear() { + for (int i = 0; i < 256; i++) { + instructions_[i].mnem = ""; + instructions_[i].type = NO_INSTR; + instructions_[i].op_order_ = UNSET_OP_ORDER; + } +} + + +void InstructionTable::Init() { + CopyTable(two_operands_instr, TWO_OPERANDS_INSTR); + CopyTable(zero_operands_instr, ZERO_OPERANDS_INSTR); + CopyTable(call_jump_instr, CALL_JUMP_INSTR); + CopyTable(short_immediate_instr, SHORT_IMMEDIATE_INSTR); + CopyTable(byte_immediate_instr, BYTE_IMMEDIATE_INSTR); + AddJumpConditionalShort(); + SetTableRange(REGISTER_INSTR, 0x40, 0x47, "inc"); + SetTableRange(REGISTER_INSTR, 0x48, 0x4F, "dec"); + SetTableRange(REGISTER_INSTR, 0x50, 0x57, "push"); + SetTableRange(REGISTER_INSTR, 0x58, 0x5F, "pop"); + SetTableRange(REGISTER_INSTR, 0x91, 0x97, "xchg eax,"); // 0x90 is nop. + SetTableRange(MOVE_REG_INSTR, 0xB8, 0xBF, "mov"); +} + + +void InstructionTable::CopyTable(const ByteMnemonic bm[], + InstructionType type) { + for (int i = 0; bm[i].b >= 0; i++) { + InstructionDesc* id = &instructions_[bm[i].b]; + id->mnem = bm[i].mnem; + id->op_order_ = bm[i].op_order_; + DCHECK_EQ(NO_INSTR, id->type); // Information not already entered. + id->type = type; + } +} + + +void InstructionTable::SetTableRange(InstructionType type, + byte start, + byte end, + const char* mnem) { + for (byte b = start; b <= end; b++) { + InstructionDesc* id = &instructions_[b]; + DCHECK_EQ(NO_INSTR, id->type); // Information not already entered. + id->mnem = mnem; + id->type = type; + } +} + + +void InstructionTable::AddJumpConditionalShort() { + for (byte b = 0x70; b <= 0x7F; b++) { + InstructionDesc* id = &instructions_[b]; + DCHECK_EQ(NO_INSTR, id->type); // Information not already entered. 
+ id->mnem = jump_conditional_mnem[b & 0x0F]; + id->type = JUMP_CONDITIONAL_SHORT_INSTR; + } +} + + +// The X87 disassembler implementation. +class DisassemblerX87 { + public: + DisassemblerX87(const NameConverter& converter, + bool abort_on_unimplemented = true) + : converter_(converter), + instruction_table_(InstructionTable::get_instance()), + tmp_buffer_pos_(0), + abort_on_unimplemented_(abort_on_unimplemented) { + tmp_buffer_[0] = '\0'; + } + + virtual ~DisassemblerX87() {} + + // Writes one disassembled instruction into 'buffer' (0-terminated). + // Returns the length of the disassembled machine instruction in bytes. + int InstructionDecode(v8::internal::Vector<char> buffer, byte* instruction); + + private: + const NameConverter& converter_; + InstructionTable* instruction_table_; + v8::internal::EmbeddedVector<char, 128> tmp_buffer_; + unsigned int tmp_buffer_pos_; + bool abort_on_unimplemented_; + + enum { + eax = 0, + ecx = 1, + edx = 2, + ebx = 3, + esp = 4, + ebp = 5, + esi = 6, + edi = 7 + }; + + + enum ShiftOpcodeExtension { + kROL = 0, + kROR = 1, + kRCL = 2, + kRCR = 3, + kSHL = 4, + KSHR = 5, + kSAR = 7 + }; + + + const char* NameOfCPURegister(int reg) const { + return converter_.NameOfCPURegister(reg); + } + + + const char* NameOfByteCPURegister(int reg) const { + return converter_.NameOfByteCPURegister(reg); + } + + + const char* NameOfXMMRegister(int reg) const { + return converter_.NameOfXMMRegister(reg); + } + + + const char* NameOfAddress(byte* addr) const { + return converter_.NameOfAddress(addr); + } + + + // Disassembler helper functions. + static void get_modrm(byte data, int* mod, int* regop, int* rm) { + *mod = (data >> 6) & 3; + *regop = (data & 0x38) >> 3; + *rm = data & 7; + } + + + static void get_sib(byte data, int* scale, int* index, int* base) { + *scale = (data >> 6) & 3; + *index = (data >> 3) & 7; + *base = data & 7; + } + + typedef const char* (DisassemblerX87::*RegisterNameMapping)(int reg) const; + + int PrintRightOperandHelper(byte* modrmp, RegisterNameMapping register_name); + int PrintRightOperand(byte* modrmp); + int PrintRightByteOperand(byte* modrmp); + int PrintRightXMMOperand(byte* modrmp); + int PrintOperands(const char* mnem, OperandOrder op_order, byte* data); + int PrintImmediateOp(byte* data); + int F7Instruction(byte* data); + int D1D3C1Instruction(byte* data); + int JumpShort(byte* data); + int JumpConditional(byte* data, const char* comment); + int JumpConditionalShort(byte* data, const char* comment); + int SetCC(byte* data); + int CMov(byte* data); + int FPUInstruction(byte* data); + int MemoryFPUInstruction(int escape_opcode, int regop, byte* modrm_start); + int RegisterFPUInstruction(int escape_opcode, byte modrm_byte); + void AppendToBuffer(const char* format, ...); + + + void UnimplementedInstruction() { + if (abort_on_unimplemented_) { + UNIMPLEMENTED(); + } else { + AppendToBuffer("'Unimplemented Instruction'"); + } + } +}; + + +void DisassemblerX87::AppendToBuffer(const char* format, ...) { + v8::internal::Vector<char> buf = tmp_buffer_ + tmp_buffer_pos_; + va_list args; + va_start(args, format); + int result = v8::internal::VSNPrintF(buf, format, args); + va_end(args); + tmp_buffer_pos_ += result; +} + +int DisassemblerX87::PrintRightOperandHelper( + byte* modrmp, + RegisterNameMapping direct_register_name) { + int mod, regop, rm; + get_modrm(*modrmp, &mod, ®op, &rm); + RegisterNameMapping register_name = (mod == 3) ? 
direct_register_name : + &DisassemblerX87::NameOfCPURegister; + switch (mod) { + case 0: + if (rm == ebp) { + int32_t disp = *reinterpret_cast<int32_t*>(modrmp+1); + AppendToBuffer("[0x%x]", disp); + return 5; + } else if (rm == esp) { + byte sib = *(modrmp + 1); + int scale, index, base; + get_sib(sib, &scale, &index, &base); + if (index == esp && base == esp && scale == 0 /*times_1*/) { + AppendToBuffer("[%s]", (this->*register_name)(rm)); + return 2; + } else if (base == ebp) { + int32_t disp = *reinterpret_cast<int32_t*>(modrmp + 2); + AppendToBuffer("[%s*%d%s0x%x]", + (this->*register_name)(index), + 1 << scale, + disp < 0 ? "-" : "+", + disp < 0 ? -disp : disp); + return 6; + } else if (index != esp && base != ebp) { + // [base+index*scale] + AppendToBuffer("[%s+%s*%d]", + (this->*register_name)(base), + (this->*register_name)(index), + 1 << scale); + return 2; + } else { + UnimplementedInstruction(); + return 1; + } + } else { + AppendToBuffer("[%s]", (this->*register_name)(rm)); + return 1; + } + break; + case 1: // fall through + case 2: + if (rm == esp) { + byte sib = *(modrmp + 1); + int scale, index, base; + get_sib(sib, &scale, &index, &base); + int disp = mod == 2 ? *reinterpret_cast<int32_t*>(modrmp + 2) + : *reinterpret_cast<int8_t*>(modrmp + 2); + if (index == base && index == rm /*esp*/ && scale == 0 /*times_1*/) { + AppendToBuffer("[%s%s0x%x]", + (this->*register_name)(rm), + disp < 0 ? "-" : "+", + disp < 0 ? -disp : disp); + } else { + AppendToBuffer("[%s+%s*%d%s0x%x]", + (this->*register_name)(base), + (this->*register_name)(index), + 1 << scale, + disp < 0 ? "-" : "+", + disp < 0 ? -disp : disp); + } + return mod == 2 ? 6 : 3; + } else { + // No sib. + int disp = mod == 2 ? *reinterpret_cast<int32_t*>(modrmp + 1) + : *reinterpret_cast<int8_t*>(modrmp + 1); + AppendToBuffer("[%s%s0x%x]", + (this->*register_name)(rm), + disp < 0 ? "-" : "+", + disp < 0 ? -disp : disp); + return mod == 2 ? 5 : 2; + } + break; + case 3: + AppendToBuffer("%s", (this->*register_name)(rm)); + return 1; + default: + UnimplementedInstruction(); + return 1; + } + UNREACHABLE(); +} + + +int DisassemblerX87::PrintRightOperand(byte* modrmp) { + return PrintRightOperandHelper(modrmp, &DisassemblerX87::NameOfCPURegister); +} + + +int DisassemblerX87::PrintRightByteOperand(byte* modrmp) { + return PrintRightOperandHelper(modrmp, + &DisassemblerX87::NameOfByteCPURegister); +} + + +int DisassemblerX87::PrintRightXMMOperand(byte* modrmp) { + return PrintRightOperandHelper(modrmp, + &DisassemblerX87::NameOfXMMRegister); +} + + +// Returns number of bytes used including the current *data. +// Writes instruction's mnemonic, left and right operands to 'tmp_buffer_'. +int DisassemblerX87::PrintOperands(const char* mnem, + OperandOrder op_order, + byte* data) { + byte modrm = *data; + int mod, regop, rm; + get_modrm(modrm, &mod, ®op, &rm); + int advance = 0; + switch (op_order) { + case REG_OPER_OP_ORDER: { + AppendToBuffer("%s %s,", mnem, NameOfCPURegister(regop)); + advance = PrintRightOperand(data); + break; + } + case OPER_REG_OP_ORDER: { + AppendToBuffer("%s ", mnem); + advance = PrintRightOperand(data); + AppendToBuffer(",%s", NameOfCPURegister(regop)); + break; + } + default: + UNREACHABLE(); + break; + } + return advance; +} + + +// Returns number of bytes used by machine instruction, including *data byte. +// Writes immediate instructions to 'tmp_buffer_'. 
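+// For the 0x81/0x83 ALU-immediate group the reg field of the ModR/M byte
+// selects the operation, and opcode bit 1 selects between a sign-extended
+// 8-bit immediate (0x83) and a full 32-bit immediate (0x81).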
+int DisassemblerX87::PrintImmediateOp(byte* data) { + bool sign_extension_bit = (*data & 0x02) != 0; + byte modrm = *(data+1); + int mod, regop, rm; + get_modrm(modrm, &mod, ®op, &rm); + const char* mnem = "Imm???"; + switch (regop) { + case 0: mnem = "add"; break; + case 1: mnem = "or"; break; + case 2: mnem = "adc"; break; + case 4: mnem = "and"; break; + case 5: mnem = "sub"; break; + case 6: mnem = "xor"; break; + case 7: mnem = "cmp"; break; + default: UnimplementedInstruction(); + } + AppendToBuffer("%s ", mnem); + int count = PrintRightOperand(data+1); + if (sign_extension_bit) { + AppendToBuffer(",0x%x", *(data + 1 + count)); + return 1 + count + 1 /*int8*/; + } else { + AppendToBuffer(",0x%x", *reinterpret_cast<int32_t*>(data + 1 + count)); + return 1 + count + 4 /*int32_t*/; + } +} + + +// Returns number of bytes used, including *data. +int DisassemblerX87::F7Instruction(byte* data) { + DCHECK_EQ(0xF7, *data); + byte modrm = *++data; + int mod, regop, rm; + get_modrm(modrm, &mod, ®op, &rm); + const char* mnem = NULL; + switch (regop) { + case 0: + mnem = "test"; + break; + case 2: + mnem = "not"; + break; + case 3: + mnem = "neg"; + break; + case 4: + mnem = "mul"; + break; + case 5: + mnem = "imul"; + break; + case 6: + mnem = "div"; + break; + case 7: + mnem = "idiv"; + break; + default: + UnimplementedInstruction(); + } + AppendToBuffer("%s ", mnem); + int count = PrintRightOperand(data); + if (regop == 0) { + AppendToBuffer(",0x%x", *reinterpret_cast<int32_t*>(data + count)); + count += 4; + } + return 1 + count; +} + + +int DisassemblerX87::D1D3C1Instruction(byte* data) { + byte op = *data; + DCHECK(op == 0xD1 || op == 0xD3 || op == 0xC1); + byte modrm = *++data; + int mod, regop, rm; + get_modrm(modrm, &mod, ®op, &rm); + int imm8 = -1; + const char* mnem = NULL; + switch (regop) { + case kROL: + mnem = "rol"; + break; + case kROR: + mnem = "ror"; + break; + case kRCL: + mnem = "rcl"; + break; + case kRCR: + mnem = "rcr"; + break; + case kSHL: + mnem = "shl"; + break; + case KSHR: + mnem = "shr"; + break; + case kSAR: + mnem = "sar"; + break; + default: + UnimplementedInstruction(); + } + AppendToBuffer("%s ", mnem); + int count = PrintRightOperand(data); + if (op == 0xD1) { + imm8 = 1; + } else if (op == 0xC1) { + imm8 = *(data + 1); + count++; + } else if (op == 0xD3) { + // Shift/rotate by cl. + } + if (imm8 >= 0) { + AppendToBuffer(",%d", imm8); + } else { + AppendToBuffer(",cl"); + } + return 1 + count; +} + + +// Returns number of bytes used, including *data. +int DisassemblerX87::JumpShort(byte* data) { + DCHECK_EQ(0xEB, *data); + byte b = *(data+1); + byte* dest = data + static_cast<int8_t>(b) + 2; + AppendToBuffer("jmp %s", NameOfAddress(dest)); + return 2; +} + + +// Returns number of bytes used, including *data. +int DisassemblerX87::JumpConditional(byte* data, const char* comment) { + DCHECK_EQ(0x0F, *data); + byte cond = *(data+1) & 0x0F; + byte* dest = data + *reinterpret_cast<int32_t*>(data+2) + 6; + const char* mnem = jump_conditional_mnem[cond]; + AppendToBuffer("%s %s", mnem, NameOfAddress(dest)); + if (comment != NULL) { + AppendToBuffer(", %s", comment); + } + return 6; // includes 0x0F +} + + +// Returns number of bytes used, including *data. 
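All of the jump helpers here follow the same rule: an x86 relative branch is taken from the end of the instruction, so the printed target is start + displacement + instruction length (2 bytes for the short forms, 6 for the 0x0F-prefixed long form). A standalone illustration of that arithmetic, written by the editor against the logic above (the byte values are made up for the example and are not taken from the patch):

#include <cstdint>
#include <cstdio>

int main() {
  // "EB FE" at address 0x1000 is "jmp $": the sign-extended disp8 is -2
  // and the next instruction starts at 0x1002, so the target is
  // 0x1002 + (-2) = 0x1000, i.e. the jump branches to itself.
  const uint8_t code[] = {0xEB, 0xFE};
  const uintptr_t pc = 0x1000;
  const int8_t disp = static_cast<int8_t>(code[1]);  // sign-extend disp8
  const uintptr_t dest = pc + sizeof(code) + disp;   // end-of-instruction relative
  printf("jmp 0x%x\n", static_cast<unsigned>(dest));
  return 0;
}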
+int DisassemblerX87::JumpConditionalShort(byte* data, const char* comment) { + byte cond = *data & 0x0F; + byte b = *(data+1); + byte* dest = data + static_cast<int8_t>(b) + 2; + const char* mnem = jump_conditional_mnem[cond]; + AppendToBuffer("%s %s", mnem, NameOfAddress(dest)); + if (comment != NULL) { + AppendToBuffer(", %s", comment); + } + return 2; +} + + +// Returns number of bytes used, including *data. +int DisassemblerX87::SetCC(byte* data) { + DCHECK_EQ(0x0F, *data); + byte cond = *(data+1) & 0x0F; + const char* mnem = set_conditional_mnem[cond]; + AppendToBuffer("%s ", mnem); + PrintRightByteOperand(data+2); + return 3; // Includes 0x0F. +} + + +// Returns number of bytes used, including *data. +int DisassemblerX87::CMov(byte* data) { + DCHECK_EQ(0x0F, *data); + byte cond = *(data + 1) & 0x0F; + const char* mnem = conditional_move_mnem[cond]; + int op_size = PrintOperands(mnem, REG_OPER_OP_ORDER, data + 2); + return 2 + op_size; // includes 0x0F +} + + +// Returns number of bytes used, including *data. +int DisassemblerX87::FPUInstruction(byte* data) { + byte escape_opcode = *data; + DCHECK_EQ(0xD8, escape_opcode & 0xF8); + byte modrm_byte = *(data+1); + + if (modrm_byte >= 0xC0) { + return RegisterFPUInstruction(escape_opcode, modrm_byte); + } else { + return MemoryFPUInstruction(escape_opcode, modrm_byte, data+1); + } +} + +int DisassemblerX87::MemoryFPUInstruction(int escape_opcode, + int modrm_byte, + byte* modrm_start) { + const char* mnem = "?"; + int regop = (modrm_byte >> 3) & 0x7; // reg/op field of modrm byte. + switch (escape_opcode) { + case 0xD9: switch (regop) { + case 0: mnem = "fld_s"; break; + case 2: mnem = "fst_s"; break; + case 3: mnem = "fstp_s"; break; + case 7: mnem = "fstcw"; break; + default: UnimplementedInstruction(); + } + break; + + case 0xDB: switch (regop) { + case 0: mnem = "fild_s"; break; + case 1: mnem = "fisttp_s"; break; + case 2: mnem = "fist_s"; break; + case 3: mnem = "fistp_s"; break; + default: UnimplementedInstruction(); + } + break; + + case 0xDD: switch (regop) { + case 0: mnem = "fld_d"; break; + case 1: mnem = "fisttp_d"; break; + case 2: mnem = "fst_d"; break; + case 3: mnem = "fstp_d"; break; + default: UnimplementedInstruction(); + } + break; + + case 0xDF: switch (regop) { + case 5: mnem = "fild_d"; break; + case 7: mnem = "fistp_d"; break; + default: UnimplementedInstruction(); + } + break; + + default: UnimplementedInstruction(); + } + AppendToBuffer("%s ", mnem); + int count = PrintRightOperand(modrm_start); + return count + 1; +} + +int DisassemblerX87::RegisterFPUInstruction(int escape_opcode, + byte modrm_byte) { + bool has_register = false; // Is the FPU register encoded in modrm_byte? 
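// Editor's note on the encoding decoded below (not part of the upstream
// patch): for the x87 escape opcodes 0xD8-0xDF, a ModRM byte of 0xC0 or
// above selects the register form, which is why FPUInstruction() routed
// such bytes here. The low three bits of modrm_byte then name the FPU
// stack slot st(0)-st(7) -- e.g. 0xD8 0xC1 encodes "fadd st,st(1)" --
// while the remaining bits, masked with 0xF8, select the operation.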
+ const char* mnem = "?"; + + switch (escape_opcode) { + case 0xD8: + has_register = true; + switch (modrm_byte & 0xF8) { + case 0xC0: mnem = "fadd_i"; break; + case 0xE0: mnem = "fsub_i"; break; + case 0xC8: mnem = "fmul_i"; break; + case 0xF0: mnem = "fdiv_i"; break; + default: UnimplementedInstruction(); + } + break; + + case 0xD9: + switch (modrm_byte & 0xF8) { + case 0xC0: + mnem = "fld"; + has_register = true; + break; + case 0xC8: + mnem = "fxch"; + has_register = true; + break; + default: + switch (modrm_byte) { + case 0xE0: mnem = "fchs"; break; + case 0xE1: mnem = "fabs"; break; + case 0xE4: mnem = "ftst"; break; + case 0xE8: mnem = "fld1"; break; + case 0xEB: mnem = "fldpi"; break; + case 0xED: mnem = "fldln2"; break; + case 0xEE: mnem = "fldz"; break; + case 0xF0: mnem = "f2xm1"; break; + case 0xF1: mnem = "fyl2x"; break; + case 0xF4: mnem = "fxtract"; break; + case 0xF5: mnem = "fprem1"; break; + case 0xF7: mnem = "fincstp"; break; + case 0xF8: mnem = "fprem"; break; + case 0xFC: mnem = "frndint"; break; + case 0xFD: mnem = "fscale"; break; + case 0xFE: mnem = "fsin"; break; + case 0xFF: mnem = "fcos"; break; + default: UnimplementedInstruction(); + } + } + break; + + case 0xDA: + if (modrm_byte == 0xE9) { + mnem = "fucompp"; + } else { + UnimplementedInstruction(); + } + break; + + case 0xDB: + if ((modrm_byte & 0xF8) == 0xE8) { + mnem = "fucomi"; + has_register = true; + } else if (modrm_byte == 0xE2) { + mnem = "fclex"; + } else if (modrm_byte == 0xE3) { + mnem = "fninit"; + } else { + UnimplementedInstruction(); + } + break; + + case 0xDC: + has_register = true; + switch (modrm_byte & 0xF8) { + case 0xC0: mnem = "fadd"; break; + case 0xE8: mnem = "fsub"; break; + case 0xC8: mnem = "fmul"; break; + case 0xF8: mnem = "fdiv"; break; + default: UnimplementedInstruction(); + } + break; + + case 0xDD: + has_register = true; + switch (modrm_byte & 0xF8) { + case 0xC0: mnem = "ffree"; break; + case 0xD0: mnem = "fst"; break; + case 0xD8: mnem = "fstp"; break; + default: UnimplementedInstruction(); + } + break; + + case 0xDE: + if (modrm_byte == 0xD9) { + mnem = "fcompp"; + } else { + has_register = true; + switch (modrm_byte & 0xF8) { + case 0xC0: mnem = "faddp"; break; + case 0xE8: mnem = "fsubp"; break; + case 0xC8: mnem = "fmulp"; break; + case 0xF8: mnem = "fdivp"; break; + default: UnimplementedInstruction(); + } + } + break; + + case 0xDF: + if (modrm_byte == 0xE0) { + mnem = "fnstsw_ax"; + } else if ((modrm_byte & 0xF8) == 0xE8) { + mnem = "fucomip"; + has_register = true; + } + break; + + default: UnimplementedInstruction(); + } + + if (has_register) { + AppendToBuffer("%s st%d", mnem, modrm_byte & 0x7); + } else { + AppendToBuffer("%s", mnem); + } + return 2; +} + + +// Mnemonics for instructions 0xF0 byte. +// Returns NULL if the instruction is not handled here. +static const char* F0Mnem(byte f0byte) { + switch (f0byte) { + case 0x18: return "prefetch"; + case 0xA2: return "cpuid"; + case 0xBE: return "movsx_b"; + case 0xBF: return "movsx_w"; + case 0xB6: return "movzx_b"; + case 0xB7: return "movzx_w"; + case 0xAF: return "imul"; + case 0xA5: return "shld"; + case 0xAD: return "shrd"; + case 0xAC: return "shrd"; // 3-operand version. + case 0xAB: return "bts"; + case 0xBD: return "bsr"; + default: return NULL; + } +} + + +// Disassembled instruction '*instr' and writes it into 'out_buffer'. 
+int DisassemblerX87::InstructionDecode(v8::internal::Vector<char> out_buffer, + byte* instr) { + tmp_buffer_pos_ = 0; // starting to write as position 0 + byte* data = instr; + // Check for hints. + const char* branch_hint = NULL; + // We use these two prefixes only with branch prediction + if (*data == 0x3E /*ds*/) { + branch_hint = "predicted taken"; + data++; + } else if (*data == 0x2E /*cs*/) { + branch_hint = "predicted not taken"; + data++; + } + bool processed = true; // Will be set to false if the current instruction + // is not in 'instructions' table. + const InstructionDesc& idesc = instruction_table_->Get(*data); + switch (idesc.type) { + case ZERO_OPERANDS_INSTR: + AppendToBuffer(idesc.mnem); + data++; + break; + + case TWO_OPERANDS_INSTR: + data++; + data += PrintOperands(idesc.mnem, idesc.op_order_, data); + break; + + case JUMP_CONDITIONAL_SHORT_INSTR: + data += JumpConditionalShort(data, branch_hint); + break; + + case REGISTER_INSTR: + AppendToBuffer("%s %s", idesc.mnem, NameOfCPURegister(*data & 0x07)); + data++; + break; + + case MOVE_REG_INSTR: { + byte* addr = reinterpret_cast<byte*>(*reinterpret_cast<int32_t*>(data+1)); + AppendToBuffer("mov %s,%s", + NameOfCPURegister(*data & 0x07), + NameOfAddress(addr)); + data += 5; + break; + } + + case CALL_JUMP_INSTR: { + byte* addr = data + *reinterpret_cast<int32_t*>(data+1) + 5; + AppendToBuffer("%s %s", idesc.mnem, NameOfAddress(addr)); + data += 5; + break; + } + + case SHORT_IMMEDIATE_INSTR: { + byte* addr = reinterpret_cast<byte*>(*reinterpret_cast<int32_t*>(data+1)); + AppendToBuffer("%s eax,%s", idesc.mnem, NameOfAddress(addr)); + data += 5; + break; + } + + case BYTE_IMMEDIATE_INSTR: { + AppendToBuffer("%s al,0x%x", idesc.mnem, data[1]); + data += 2; + break; + } + + case NO_INSTR: + processed = false; + break; + + default: + UNIMPLEMENTED(); // This type is not implemented. + } + //---------------------------- + if (!processed) { + switch (*data) { + case 0xC2: + AppendToBuffer("ret 0x%x", *reinterpret_cast<uint16_t*>(data+1)); + data += 3; + break; + + case 0x6B: { + data++; + data += PrintOperands("imul", REG_OPER_OP_ORDER, data); + AppendToBuffer(",%d", *data); + data++; + } break; + + case 0x69: { + data++; + data += PrintOperands("imul", REG_OPER_OP_ORDER, data); + AppendToBuffer(",%d", *reinterpret_cast<int32_t*>(data)); + data += 4; + } + break; + + case 0xF6: + { data++; + int mod, regop, rm; + get_modrm(*data, &mod, ®op, &rm); + if (regop == eax) { + AppendToBuffer("test_b "); + data += PrintRightByteOperand(data); + int32_t imm = *data; + AppendToBuffer(",0x%x", imm); + data++; + } else { + UnimplementedInstruction(); + } + } + break; + + case 0x81: // fall through + case 0x83: // 0x81 with sign extension bit set + data += PrintImmediateOp(data); + break; + + case 0x0F: + { byte f0byte = data[1]; + const char* f0mnem = F0Mnem(f0byte); + if (f0byte == 0x18) { + data += 2; + int mod, regop, rm; + get_modrm(*data, &mod, ®op, &rm); + const char* suffix[] = {"nta", "1", "2", "3"}; + AppendToBuffer("%s%s ", f0mnem, suffix[regop & 0x03]); + data += PrintRightOperand(data); + } else if (f0byte == 0x1F && data[2] == 0) { + AppendToBuffer("nop"); // 3 byte nop. + data += 3; + } else if (f0byte == 0x1F && data[2] == 0x40 && data[3] == 0) { + AppendToBuffer("nop"); // 4 byte nop. + data += 4; + } else if (f0byte == 0x1F && data[2] == 0x44 && data[3] == 0 && + data[4] == 0) { + AppendToBuffer("nop"); // 5 byte nop. 
+ data += 5; + } else if (f0byte == 0x1F && data[2] == 0x80 && data[3] == 0 && + data[4] == 0 && data[5] == 0 && data[6] == 0) { + AppendToBuffer("nop"); // 7 byte nop. + data += 7; + } else if (f0byte == 0x1F && data[2] == 0x84 && data[3] == 0 && + data[4] == 0 && data[5] == 0 && data[6] == 0 && + data[7] == 0) { + AppendToBuffer("nop"); // 8 byte nop. + data += 8; + } else if (f0byte == 0xA2 || f0byte == 0x31) { + AppendToBuffer("%s", f0mnem); + data += 2; + } else if (f0byte == 0x28) { + data += 2; + int mod, regop, rm; + get_modrm(*data, &mod, ®op, &rm); + AppendToBuffer("movaps %s,%s", + NameOfXMMRegister(regop), + NameOfXMMRegister(rm)); + data++; + } else if (f0byte >= 0x53 && f0byte <= 0x5F) { + const char* const pseudo_op[] = { + "rcpps", + "andps", + "andnps", + "orps", + "xorps", + "addps", + "mulps", + "cvtps2pd", + "cvtdq2ps", + "subps", + "minps", + "divps", + "maxps", + }; + + data += 2; + int mod, regop, rm; + get_modrm(*data, &mod, ®op, &rm); + AppendToBuffer("%s %s,", + pseudo_op[f0byte - 0x53], + NameOfXMMRegister(regop)); + data += PrintRightXMMOperand(data); + } else if (f0byte == 0x50) { + data += 2; + int mod, regop, rm; + get_modrm(*data, &mod, ®op, &rm); + AppendToBuffer("movmskps %s,%s", + NameOfCPURegister(regop), + NameOfXMMRegister(rm)); + data++; + } else if (f0byte== 0xC6) { + // shufps xmm, xmm/m128, imm8 + data += 2; + int mod, regop, rm; + get_modrm(*data, &mod, ®op, &rm); + int8_t imm8 = static_cast<int8_t>(data[1]); + AppendToBuffer("shufps %s,%s,%d", + NameOfXMMRegister(rm), + NameOfXMMRegister(regop), + static_cast<int>(imm8)); + data += 2; + } else if ((f0byte & 0xF0) == 0x80) { + data += JumpConditional(data, branch_hint); + } else if (f0byte == 0xBE || f0byte == 0xBF || f0byte == 0xB6 || + f0byte == 0xB7 || f0byte == 0xAF) { + data += 2; + data += PrintOperands(f0mnem, REG_OPER_OP_ORDER, data); + } else if ((f0byte & 0xF0) == 0x90) { + data += SetCC(data); + } else if ((f0byte & 0xF0) == 0x40) { + data += CMov(data); + } else if (f0byte == 0xAB || f0byte == 0xA5 || f0byte == 0xAD) { + // shrd, shld, bts + data += 2; + AppendToBuffer("%s ", f0mnem); + int mod, regop, rm; + get_modrm(*data, &mod, ®op, &rm); + data += PrintRightOperand(data); + if (f0byte == 0xAB) { + AppendToBuffer(",%s", NameOfCPURegister(regop)); + } else { + AppendToBuffer(",%s,cl", NameOfCPURegister(regop)); + } + } else if (f0byte == 0xBD) { + data += 2; + int mod, regop, rm; + get_modrm(*data, &mod, ®op, &rm); + AppendToBuffer("%s %s,", f0mnem, NameOfCPURegister(regop)); + data += PrintRightOperand(data); + } else { + UnimplementedInstruction(); + } + } + break; + + case 0x8F: + { data++; + int mod, regop, rm; + get_modrm(*data, &mod, ®op, &rm); + if (regop == eax) { + AppendToBuffer("pop "); + data += PrintRightOperand(data); + } + } + break; + + case 0xFF: + { data++; + int mod, regop, rm; + get_modrm(*data, &mod, ®op, &rm); + const char* mnem = NULL; + switch (regop) { + case esi: mnem = "push"; break; + case eax: mnem = "inc"; break; + case ecx: mnem = "dec"; break; + case edx: mnem = "call"; break; + case esp: mnem = "jmp"; break; + default: mnem = "???"; + } + AppendToBuffer("%s ", mnem); + data += PrintRightOperand(data); + } + break; + + case 0xC7: // imm32, fall through + case 0xC6: // imm8 + { bool is_byte = *data == 0xC6; + data++; + if (is_byte) { + AppendToBuffer("%s ", "mov_b"); + data += PrintRightByteOperand(data); + int32_t imm = *data; + AppendToBuffer(",0x%x", imm); + data++; + } else { + AppendToBuffer("%s ", "mov"); + data += PrintRightOperand(data); + 
int32_t imm = *reinterpret_cast<int32_t*>(data); + AppendToBuffer(",0x%x", imm); + data += 4; + } + } + break; + + case 0x80: + { data++; + int mod, regop, rm; + get_modrm(*data, &mod, ®op, &rm); + const char* mnem = NULL; + switch (regop) { + case 5: mnem = "subb"; break; + case 7: mnem = "cmpb"; break; + default: UnimplementedInstruction(); + } + AppendToBuffer("%s ", mnem); + data += PrintRightByteOperand(data); + int32_t imm = *data; + AppendToBuffer(",0x%x", imm); + data++; + } + break; + + case 0x88: // 8bit, fall through + case 0x89: // 32bit + { bool is_byte = *data == 0x88; + int mod, regop, rm; + data++; + get_modrm(*data, &mod, ®op, &rm); + if (is_byte) { + AppendToBuffer("%s ", "mov_b"); + data += PrintRightByteOperand(data); + AppendToBuffer(",%s", NameOfByteCPURegister(regop)); + } else { + AppendToBuffer("%s ", "mov"); + data += PrintRightOperand(data); + AppendToBuffer(",%s", NameOfCPURegister(regop)); + } + } + break; + + case 0x66: // prefix + while (*data == 0x66) data++; + if (*data == 0xf && data[1] == 0x1f) { + AppendToBuffer("nop"); // 0x66 prefix + } else if (*data == 0x90) { + AppendToBuffer("nop"); // 0x66 prefix + } else if (*data == 0x8B) { + data++; + data += PrintOperands("mov_w", REG_OPER_OP_ORDER, data); + } else if (*data == 0x89) { + data++; + int mod, regop, rm; + get_modrm(*data, &mod, ®op, &rm); + AppendToBuffer("mov_w "); + data += PrintRightOperand(data); + AppendToBuffer(",%s", NameOfCPURegister(regop)); + } else if (*data == 0xC7) { + data++; + AppendToBuffer("%s ", "mov_w"); + data += PrintRightOperand(data); + int imm = *reinterpret_cast<int16_t*>(data); + AppendToBuffer(",0x%x", imm); + data += 2; + } else if (*data == 0x0F) { + data++; + if (*data == 0x38) { + data++; + if (*data == 0x17) { + data++; + int mod, regop, rm; + get_modrm(*data, &mod, ®op, &rm); + AppendToBuffer("ptest %s,%s", + NameOfXMMRegister(regop), + NameOfXMMRegister(rm)); + data++; + } else if (*data == 0x2A) { + // movntdqa + data++; + int mod, regop, rm; + get_modrm(*data, &mod, ®op, &rm); + AppendToBuffer("movntdqa %s,", NameOfXMMRegister(regop)); + data += PrintRightOperand(data); + } else { + UnimplementedInstruction(); + } + } else if (*data == 0x3A) { + data++; + if (*data == 0x0B) { + data++; + int mod, regop, rm; + get_modrm(*data, &mod, ®op, &rm); + int8_t imm8 = static_cast<int8_t>(data[1]); + AppendToBuffer("roundsd %s,%s,%d", + NameOfXMMRegister(regop), + NameOfXMMRegister(rm), + static_cast<int>(imm8)); + data += 2; + } else if (*data == 0x16) { + data++; + int mod, regop, rm; + get_modrm(*data, &mod, ®op, &rm); + int8_t imm8 = static_cast<int8_t>(data[1]); + AppendToBuffer("pextrd %s,%s,%d", + NameOfCPURegister(regop), + NameOfXMMRegister(rm), + static_cast<int>(imm8)); + data += 2; + } else if (*data == 0x17) { + data++; + int mod, regop, rm; + get_modrm(*data, &mod, ®op, &rm); + int8_t imm8 = static_cast<int8_t>(data[1]); + AppendToBuffer("extractps %s,%s,%d", + NameOfCPURegister(rm), + NameOfXMMRegister(regop), + static_cast<int>(imm8)); + data += 2; + } else if (*data == 0x22) { + data++; + int mod, regop, rm; + get_modrm(*data, &mod, ®op, &rm); + int8_t imm8 = static_cast<int8_t>(data[1]); + AppendToBuffer("pinsrd %s,%s,%d", + NameOfXMMRegister(regop), + NameOfCPURegister(rm), + static_cast<int>(imm8)); + data += 2; + } else { + UnimplementedInstruction(); + } + } else if (*data == 0x2E || *data == 0x2F) { + const char* mnem = (*data == 0x2E) ? 
"ucomisd" : "comisd"; + data++; + int mod, regop, rm; + get_modrm(*data, &mod, ®op, &rm); + if (mod == 0x3) { + AppendToBuffer("%s %s,%s", mnem, + NameOfXMMRegister(regop), + NameOfXMMRegister(rm)); + data++; + } else { + AppendToBuffer("%s %s,", mnem, NameOfXMMRegister(regop)); + data += PrintRightOperand(data); + } + } else if (*data == 0x50) { + data++; + int mod, regop, rm; + get_modrm(*data, &mod, ®op, &rm); + AppendToBuffer("movmskpd %s,%s", + NameOfCPURegister(regop), + NameOfXMMRegister(rm)); + data++; + } else if (*data == 0x54) { + data++; + int mod, regop, rm; + get_modrm(*data, &mod, ®op, &rm); + AppendToBuffer("andpd %s,%s", + NameOfXMMRegister(regop), + NameOfXMMRegister(rm)); + data++; + } else if (*data == 0x56) { + data++; + int mod, regop, rm; + get_modrm(*data, &mod, ®op, &rm); + AppendToBuffer("orpd %s,%s", + NameOfXMMRegister(regop), + NameOfXMMRegister(rm)); + data++; + } else if (*data == 0x57) { + data++; + int mod, regop, rm; + get_modrm(*data, &mod, ®op, &rm); + AppendToBuffer("xorpd %s,%s", + NameOfXMMRegister(regop), + NameOfXMMRegister(rm)); + data++; + } else if (*data == 0x6E) { + data++; + int mod, regop, rm; + get_modrm(*data, &mod, ®op, &rm); + AppendToBuffer("movd %s,", NameOfXMMRegister(regop)); + data += PrintRightOperand(data); + } else if (*data == 0x6F) { + data++; + int mod, regop, rm; + get_modrm(*data, &mod, ®op, &rm); + AppendToBuffer("movdqa %s,", NameOfXMMRegister(regop)); + data += PrintRightXMMOperand(data); + } else if (*data == 0x70) { + data++; + int mod, regop, rm; + get_modrm(*data, &mod, ®op, &rm); + int8_t imm8 = static_cast<int8_t>(data[1]); + AppendToBuffer("pshufd %s,%s,%d", + NameOfXMMRegister(regop), + NameOfXMMRegister(rm), + static_cast<int>(imm8)); + data += 2; + } else if (*data == 0x76) { + data++; + int mod, regop, rm; + get_modrm(*data, &mod, ®op, &rm); + AppendToBuffer("pcmpeqd %s,%s", + NameOfXMMRegister(regop), + NameOfXMMRegister(rm)); + data++; + } else if (*data == 0x90) { + data++; + AppendToBuffer("nop"); // 2 byte nop. + } else if (*data == 0xF3) { + data++; + int mod, regop, rm; + get_modrm(*data, &mod, ®op, &rm); + AppendToBuffer("psllq %s,%s", + NameOfXMMRegister(regop), + NameOfXMMRegister(rm)); + data++; + } else if (*data == 0x73) { + data++; + int mod, regop, rm; + get_modrm(*data, &mod, ®op, &rm); + int8_t imm8 = static_cast<int8_t>(data[1]); + DCHECK(regop == esi || regop == edx); + AppendToBuffer("%s %s,%d", + (regop == esi) ? 
"psllq" : "psrlq", + NameOfXMMRegister(rm), + static_cast<int>(imm8)); + data += 2; + } else if (*data == 0xD3) { + data++; + int mod, regop, rm; + get_modrm(*data, &mod, ®op, &rm); + AppendToBuffer("psrlq %s,%s", + NameOfXMMRegister(regop), + NameOfXMMRegister(rm)); + data++; + } else if (*data == 0x7F) { + AppendToBuffer("movdqa "); + data++; + int mod, regop, rm; + get_modrm(*data, &mod, ®op, &rm); + data += PrintRightXMMOperand(data); + AppendToBuffer(",%s", NameOfXMMRegister(regop)); + } else if (*data == 0x7E) { + data++; + int mod, regop, rm; + get_modrm(*data, &mod, ®op, &rm); + AppendToBuffer("movd "); + data += PrintRightOperand(data); + AppendToBuffer(",%s", NameOfXMMRegister(regop)); + } else if (*data == 0xDB) { + data++; + int mod, regop, rm; + get_modrm(*data, &mod, ®op, &rm); + AppendToBuffer("pand %s,%s", + NameOfXMMRegister(regop), + NameOfXMMRegister(rm)); + data++; + } else if (*data == 0xE7) { + data++; + int mod, regop, rm; + get_modrm(*data, &mod, ®op, &rm); + if (mod == 3) { + AppendToBuffer("movntdq "); + data += PrintRightOperand(data); + AppendToBuffer(",%s", NameOfXMMRegister(regop)); + } else { + UnimplementedInstruction(); + } + } else if (*data == 0xEF) { + data++; + int mod, regop, rm; + get_modrm(*data, &mod, ®op, &rm); + AppendToBuffer("pxor %s,%s", + NameOfXMMRegister(regop), + NameOfXMMRegister(rm)); + data++; + } else if (*data == 0xEB) { + data++; + int mod, regop, rm; + get_modrm(*data, &mod, ®op, &rm); + AppendToBuffer("por %s,%s", + NameOfXMMRegister(regop), + NameOfXMMRegister(rm)); + data++; + } else { + UnimplementedInstruction(); + } + } else { + UnimplementedInstruction(); + } + break; + + case 0xFE: + { data++; + int mod, regop, rm; + get_modrm(*data, &mod, ®op, &rm); + if (regop == ecx) { + AppendToBuffer("dec_b "); + data += PrintRightOperand(data); + } else { + UnimplementedInstruction(); + } + } + break; + + case 0x68: + AppendToBuffer("push 0x%x", *reinterpret_cast<int32_t*>(data+1)); + data += 5; + break; + + case 0x6A: + AppendToBuffer("push 0x%x", *reinterpret_cast<int8_t*>(data + 1)); + data += 2; + break; + + case 0xA8: + AppendToBuffer("test al,0x%x", *reinterpret_cast<uint8_t*>(data+1)); + data += 2; + break; + + case 0xA9: + AppendToBuffer("test eax,0x%x", *reinterpret_cast<int32_t*>(data+1)); + data += 5; + break; + + case 0xD1: // fall through + case 0xD3: // fall through + case 0xC1: + data += D1D3C1Instruction(data); + break; + + case 0xD8: // fall through + case 0xD9: // fall through + case 0xDA: // fall through + case 0xDB: // fall through + case 0xDC: // fall through + case 0xDD: // fall through + case 0xDE: // fall through + case 0xDF: + data += FPUInstruction(data); + break; + + case 0xEB: + data += JumpShort(data); + break; + + case 0xF2: + if (*(data+1) == 0x0F) { + byte b2 = *(data+2); + if (b2 == 0x11) { + AppendToBuffer("movsd "); + data += 3; + int mod, regop, rm; + get_modrm(*data, &mod, ®op, &rm); + data += PrintRightXMMOperand(data); + AppendToBuffer(",%s", NameOfXMMRegister(regop)); + } else if (b2 == 0x10) { + data += 3; + int mod, regop, rm; + get_modrm(*data, &mod, ®op, &rm); + AppendToBuffer("movsd %s,", NameOfXMMRegister(regop)); + data += PrintRightXMMOperand(data); + } else if (b2 == 0x5A) { + data += 3; + int mod, regop, rm; + get_modrm(*data, &mod, ®op, &rm); + AppendToBuffer("cvtsd2ss %s,", NameOfXMMRegister(regop)); + data += PrintRightXMMOperand(data); + } else { + const char* mnem = "?"; + switch (b2) { + case 0x2A: mnem = "cvtsi2sd"; break; + case 0x2C: mnem = "cvttsd2si"; break; + case 0x2D: mnem 
= "cvtsd2si"; break; + case 0x51: mnem = "sqrtsd"; break; + case 0x58: mnem = "addsd"; break; + case 0x59: mnem = "mulsd"; break; + case 0x5C: mnem = "subsd"; break; + case 0x5E: mnem = "divsd"; break; + } + data += 3; + int mod, regop, rm; + get_modrm(*data, &mod, ®op, &rm); + if (b2 == 0x2A) { + AppendToBuffer("%s %s,", mnem, NameOfXMMRegister(regop)); + data += PrintRightOperand(data); + } else if (b2 == 0x2C || b2 == 0x2D) { + AppendToBuffer("%s %s,", mnem, NameOfCPURegister(regop)); + data += PrintRightXMMOperand(data); + } else if (b2 == 0xC2) { + // Intel manual 2A, Table 3-18. + const char* const pseudo_op[] = { + "cmpeqsd", + "cmpltsd", + "cmplesd", + "cmpunordsd", + "cmpneqsd", + "cmpnltsd", + "cmpnlesd", + "cmpordsd" + }; + AppendToBuffer("%s %s,%s", + pseudo_op[data[1]], + NameOfXMMRegister(regop), + NameOfXMMRegister(rm)); + data += 2; + } else { + AppendToBuffer("%s %s,", mnem, NameOfXMMRegister(regop)); + data += PrintRightXMMOperand(data); + } + } + } else { + UnimplementedInstruction(); + } + break; + + case 0xF3: + if (*(data+1) == 0x0F) { + byte b2 = *(data+2); + if (b2 == 0x11) { + AppendToBuffer("movss "); + data += 3; + int mod, regop, rm; + get_modrm(*data, &mod, ®op, &rm); + data += PrintRightXMMOperand(data); + AppendToBuffer(",%s", NameOfXMMRegister(regop)); + } else if (b2 == 0x10) { + data += 3; + int mod, regop, rm; + get_modrm(*data, &mod, ®op, &rm); + AppendToBuffer("movss %s,", NameOfXMMRegister(regop)); + data += PrintRightXMMOperand(data); + } else if (b2 == 0x2C) { + data += 3; + int mod, regop, rm; + get_modrm(*data, &mod, ®op, &rm); + AppendToBuffer("cvttss2si %s,", NameOfCPURegister(regop)); + data += PrintRightXMMOperand(data); + } else if (b2 == 0x5A) { + data += 3; + int mod, regop, rm; + get_modrm(*data, &mod, ®op, &rm); + AppendToBuffer("cvtss2sd %s,", NameOfXMMRegister(regop)); + data += PrintRightXMMOperand(data); + } else if (b2 == 0x6F) { + data += 3; + int mod, regop, rm; + get_modrm(*data, &mod, ®op, &rm); + AppendToBuffer("movdqu %s,", NameOfXMMRegister(regop)); + data += PrintRightXMMOperand(data); + } else if (b2 == 0x7F) { + AppendToBuffer("movdqu "); + data += 3; + int mod, regop, rm; + get_modrm(*data, &mod, ®op, &rm); + data += PrintRightXMMOperand(data); + AppendToBuffer(",%s", NameOfXMMRegister(regop)); + } else { + UnimplementedInstruction(); + } + } else if (*(data+1) == 0xA5) { + data += 2; + AppendToBuffer("rep_movs"); + } else if (*(data+1) == 0xAB) { + data += 2; + AppendToBuffer("rep_stos"); + } else { + UnimplementedInstruction(); + } + break; + + case 0xF7: + data += F7Instruction(data); + break; + + default: + UnimplementedInstruction(); + } + } + + if (tmp_buffer_pos_ < sizeof tmp_buffer_) { + tmp_buffer_[tmp_buffer_pos_] = '\0'; + } + + int instr_len = data - instr; + if (instr_len == 0) { + printf("%02x", *data); + } + DCHECK(instr_len > 0); // Ensure progress. + + int outp = 0; + // Instruction bytes. 
+ for (byte* bp = instr; bp < data; bp++) { + outp += v8::internal::SNPrintF(out_buffer + outp, "%02x", *bp); + } + for (int i = 6 - instr_len; i >= 0; i--) { + outp += v8::internal::SNPrintF(out_buffer + outp, " "); + } + + outp += v8::internal::SNPrintF(out_buffer + outp, " %s", tmp_buffer_.start()); + return instr_len; +} // NOLINT (function is too long) + + +//------------------------------------------------------------------------------ + + +static const char* cpu_regs[8] = { + "eax", "ecx", "edx", "ebx", "esp", "ebp", "esi", "edi" +}; + + +static const char* byte_cpu_regs[8] = { + "al", "cl", "dl", "bl", "ah", "ch", "dh", "bh" +}; + + +static const char* xmm_regs[8] = { + "xmm0", "xmm1", "xmm2", "xmm3", "xmm4", "xmm5", "xmm6", "xmm7" +}; + + +const char* NameConverter::NameOfAddress(byte* addr) const { + v8::internal::SNPrintF(tmp_buffer_, "%p", addr); + return tmp_buffer_.start(); +} + + +const char* NameConverter::NameOfConstant(byte* addr) const { + return NameOfAddress(addr); +} + + +const char* NameConverter::NameOfCPURegister(int reg) const { + if (0 <= reg && reg < 8) return cpu_regs[reg]; + return "noreg"; +} + + +const char* NameConverter::NameOfByteCPURegister(int reg) const { + if (0 <= reg && reg < 8) return byte_cpu_regs[reg]; + return "noreg"; +} + + +const char* NameConverter::NameOfXMMRegister(int reg) const { + if (0 <= reg && reg < 8) return xmm_regs[reg]; + return "noxmmreg"; +} + + +const char* NameConverter::NameInCode(byte* addr) const { + // X87 does not embed debug strings at the moment. + UNREACHABLE(); + return ""; +} + + +//------------------------------------------------------------------------------ + +Disassembler::Disassembler(const NameConverter& converter) + : converter_(converter) {} + + +Disassembler::~Disassembler() {} + + +int Disassembler::InstructionDecode(v8::internal::Vector<char> buffer, + byte* instruction) { + DisassemblerX87 d(converter_, false /*do not crash if unimplemented*/); + return d.InstructionDecode(buffer, instruction); +} + + +// The IA-32 assembler does not currently use constant pools. +int Disassembler::ConstantPoolSizeAt(byte* instruction) { return -1; } + + +/*static*/ void Disassembler::Disassemble(FILE* f, byte* begin, byte* end) { + NameConverter converter; + Disassembler d(converter); + for (byte* pc = begin; pc < end;) { + v8::internal::EmbeddedVector<char, 128> buffer; + buffer[0] = '\0'; + byte* prev_pc = pc; + pc += d.InstructionDecode(buffer, pc); + fprintf(f, "%p", prev_pc); + fprintf(f, " "); + + for (byte* bp = prev_pc; bp < pc; bp++) { + fprintf(f, "%02x", *bp); + } + for (int i = 6 - (pc - prev_pc); i >= 0; i--) { + fprintf(f, " "); + } + fprintf(f, " %s\n", buffer.start()); + } +} + + +} // namespace disasm + +#endif // V8_TARGET_ARCH_X87 diff --git a/deps/v8/src/x87/frames-x87.cc b/deps/v8/src/x87/frames-x87.cc new file mode 100644 index 00000000000..6091b4599b6 --- /dev/null +++ b/deps/v8/src/x87/frames-x87.cc @@ -0,0 +1,42 @@ +// Copyright 2006-2008 the V8 project authors. All rights reserved. +// Use of this source code is governed by a BSD-style license that can be +// found in the LICENSE file. 
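The frames-x87 files introduced here and in the next hunk pin down the 32-bit frame layout as kPointerSize-scaled offsets from the frame pointer. Since x87 is an ia32-class (32-bit) target, kPointerSize is 4 bytes, which makes the header's constants easy to sanity-check by hand. A tiny editor's sketch (the kPointerSize value is an assumption inferred from the 32-bit target; the constant names mirror those in frames-x87.h below):

#include <cstdio>

int main() {
  const int kPointerSize = 4;  // 32-bit target: one stack slot = 4 bytes.
  // EntryFrameConstants from frames-x87.h, expanded to byte offsets:
  const int kFunctionArgOffset = +3 * kPointerSize;  // ebp + 12
  const int kArgvOffset = +6 * kPointerSize;         // ebp + 24
  printf("function arg at ebp%+d, argv at ebp%+d\n",
         kFunctionArgOffset, kArgvOffset);
  return 0;
}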
+ +#include "src/v8.h" + +#if V8_TARGET_ARCH_X87 + +#include "src/assembler.h" +#include "src/frames.h" +#include "src/x87/assembler-x87-inl.h" +#include "src/x87/assembler-x87.h" + +namespace v8 { +namespace internal { + + +Register JavaScriptFrame::fp_register() { return ebp; } +Register JavaScriptFrame::context_register() { return esi; } +Register JavaScriptFrame::constant_pool_pointer_register() { + UNREACHABLE(); + return no_reg; +} + + +Register StubFailureTrampolineFrame::fp_register() { return ebp; } +Register StubFailureTrampolineFrame::context_register() { return esi; } +Register StubFailureTrampolineFrame::constant_pool_pointer_register() { + UNREACHABLE(); + return no_reg; +} + + +Object*& ExitFrame::constant_pool_slot() const { + UNREACHABLE(); + return Memory::Object_at(NULL); +} + + +} } // namespace v8::internal + +#endif // V8_TARGET_ARCH_X87 diff --git a/deps/v8/src/x87/frames-x87.h b/deps/v8/src/x87/frames-x87.h new file mode 100644 index 00000000000..5b91baf3858 --- /dev/null +++ b/deps/v8/src/x87/frames-x87.h @@ -0,0 +1,125 @@ +// Copyright 2012 the V8 project authors. All rights reserved. +// Use of this source code is governed by a BSD-style license that can be +// found in the LICENSE file. + +#ifndef V8_X87_FRAMES_X87_H_ +#define V8_X87_FRAMES_X87_H_ + +namespace v8 { +namespace internal { + + +// Register lists +// Note that the bit values must match those used in actual instruction encoding +const int kNumRegs = 8; + + +// Caller-saved registers +const RegList kJSCallerSaved = + 1 << 0 | // eax + 1 << 1 | // ecx + 1 << 2 | // edx + 1 << 3 | // ebx - used as a caller-saved register in JavaScript code + 1 << 7; // edi - callee function + +const int kNumJSCallerSaved = 5; + + +// Number of registers for which space is reserved in safepoints. +const int kNumSafepointRegisters = 8; + +const int kNoAlignmentPadding = 0; +const int kAlignmentPaddingPushed = 2; +const int kAlignmentZapValue = 0x12345678; // Not heap object tagged. + +// ---------------------------------------------------- + + +class EntryFrameConstants : public AllStatic { + public: + static const int kCallerFPOffset = -6 * kPointerSize; + + static const int kFunctionArgOffset = +3 * kPointerSize; + static const int kReceiverArgOffset = +4 * kPointerSize; + static const int kArgcOffset = +5 * kPointerSize; + static const int kArgvOffset = +6 * kPointerSize; +}; + + +class ExitFrameConstants : public AllStatic { + public: + static const int kFrameSize = 2 * kPointerSize; + + static const int kCodeOffset = -2 * kPointerSize; + static const int kSPOffset = -1 * kPointerSize; + + static const int kCallerFPOffset = 0 * kPointerSize; + static const int kCallerPCOffset = +1 * kPointerSize; + + // FP-relative displacement of the caller's SP. It points just + // below the saved PC. + static const int kCallerSPDisplacement = +2 * kPointerSize; + + static const int kConstantPoolOffset = 0; // Not used +}; + + +class JavaScriptFrameConstants : public AllStatic { + public: + // FP-relative. + static const int kLocal0Offset = StandardFrameConstants::kExpressionsOffset; + static const int kLastParameterOffset = +2 * kPointerSize; + static const int kFunctionOffset = StandardFrameConstants::kMarkerOffset; + + // Caller SP-relative. + static const int kParam0Offset = -2 * kPointerSize; + static const int kReceiverOffset = -1 * kPointerSize; + + static const int kDynamicAlignmentStateOffset = kLocal0Offset; +}; + + +class ArgumentsAdaptorFrameConstants : public AllStatic { + public: + // FP-relative. 
+ static const int kLengthOffset = StandardFrameConstants::kExpressionsOffset; + + static const int kFrameSize = + StandardFrameConstants::kFixedFrameSize + kPointerSize; +}; + + +class ConstructFrameConstants : public AllStatic { + public: + // FP-relative. + static const int kImplicitReceiverOffset = -5 * kPointerSize; + static const int kConstructorOffset = kMinInt; + static const int kLengthOffset = -4 * kPointerSize; + static const int kCodeOffset = StandardFrameConstants::kExpressionsOffset; + + static const int kFrameSize = + StandardFrameConstants::kFixedFrameSize + 3 * kPointerSize; +}; + + +class InternalFrameConstants : public AllStatic { + public: + // FP-relative. + static const int kCodeOffset = StandardFrameConstants::kExpressionsOffset; +}; + + +inline Object* JavaScriptFrame::function_slot_object() const { + const int offset = JavaScriptFrameConstants::kFunctionOffset; + return Memory::Object_at(fp() + offset); +} + + +inline void StackHandler::SetFp(Address slot, Address fp) { + Memory::Address_at(slot) = fp; +} + + +} } // namespace v8::internal + +#endif // V8_X87_FRAMES_X87_H_ diff --git a/deps/v8/src/x87/full-codegen-x87.cc b/deps/v8/src/x87/full-codegen-x87.cc new file mode 100644 index 00000000000..2e8b8651ffd --- /dev/null +++ b/deps/v8/src/x87/full-codegen-x87.cc @@ -0,0 +1,4827 @@ +// Copyright 2012 the V8 project authors. All rights reserved. +// Use of this source code is governed by a BSD-style license that can be +// found in the LICENSE file. + +#include "src/v8.h" + +#if V8_TARGET_ARCH_X87 + +#include "src/code-stubs.h" +#include "src/codegen.h" +#include "src/compiler.h" +#include "src/debug.h" +#include "src/full-codegen.h" +#include "src/isolate-inl.h" +#include "src/parser.h" +#include "src/scopes.h" +#include "src/stub-cache.h" + +namespace v8 { +namespace internal { + +#define __ ACCESS_MASM(masm_) + + +class JumpPatchSite BASE_EMBEDDED { + public: + explicit JumpPatchSite(MacroAssembler* masm) : masm_(masm) { +#ifdef DEBUG + info_emitted_ = false; +#endif + } + + ~JumpPatchSite() { + DCHECK(patch_site_.is_bound() == info_emitted_); + } + + void EmitJumpIfNotSmi(Register reg, + Label* target, + Label::Distance distance = Label::kFar) { + __ test(reg, Immediate(kSmiTagMask)); + EmitJump(not_carry, target, distance); // Always taken before patched. + } + + void EmitJumpIfSmi(Register reg, + Label* target, + Label::Distance distance = Label::kFar) { + __ test(reg, Immediate(kSmiTagMask)); + EmitJump(carry, target, distance); // Never taken before patched. + } + + void EmitPatchInfo() { + if (patch_site_.is_bound()) { + int delta_to_patch_site = masm_->SizeOfCodeGeneratedSince(&patch_site_); + DCHECK(is_uint8(delta_to_patch_site)); + __ test(eax, Immediate(delta_to_patch_site)); +#ifdef DEBUG + info_emitted_ = true; +#endif + } else { + __ nop(); // Signals no inlined code. + } + } + + private: + // jc will be patched with jz, jnc will become jnz. + void EmitJump(Condition cc, Label* target, Label::Distance distance) { + DCHECK(!patch_site_.is_bound() && !info_emitted_); + DCHECK(cc == carry || cc == not_carry); + __ bind(&patch_site_); + __ j(cc, target, distance); + } + + MacroAssembler* masm_; + Label patch_site_; +#ifdef DEBUG + bool info_emitted_; +#endif +}; + + +// Generate code for a JS function. On entry to the function the receiver +// and arguments have been pushed on the stack left to right, with the +// return address on top of them. The actual argument count matches the +// formal parameter count expected by the function. 
+// +// The live registers are: +// o edi: the JS function object being called (i.e. ourselves) +// o esi: our context +// o ebp: our caller's frame pointer +// o esp: stack pointer (pointing to return address) +// +// The function builds a JS frame. Please see JavaScriptFrameConstants in +// frames-x87.h for its layout. +void FullCodeGenerator::Generate() { + CompilationInfo* info = info_; + handler_table_ = + isolate()->factory()->NewFixedArray(function()->handler_count(), TENURED); + + profiling_counter_ = isolate()->factory()->NewCell( + Handle<Smi>(Smi::FromInt(FLAG_interrupt_budget), isolate())); + SetFunctionPosition(function()); + Comment cmnt(masm_, "[ function compiled by full code generator"); + + ProfileEntryHookStub::MaybeCallEntryHook(masm_); + +#ifdef DEBUG + if (strlen(FLAG_stop_at) > 0 && + info->function()->name()->IsUtf8EqualTo(CStrVector(FLAG_stop_at))) { + __ int3(); + } +#endif + + // Sloppy mode functions and builtins need to replace the receiver with the + // global proxy when called as functions (without an explicit receiver + // object). + if (info->strict_mode() == SLOPPY && !info->is_native()) { + Label ok; + // +1 for return address. + int receiver_offset = (info->scope()->num_parameters() + 1) * kPointerSize; + __ mov(ecx, Operand(esp, receiver_offset)); + + __ cmp(ecx, isolate()->factory()->undefined_value()); + __ j(not_equal, &ok, Label::kNear); + + __ mov(ecx, GlobalObjectOperand()); + __ mov(ecx, FieldOperand(ecx, GlobalObject::kGlobalProxyOffset)); + + __ mov(Operand(esp, receiver_offset), ecx); + + __ bind(&ok); + } + + // Open a frame scope to indicate that there is a frame on the stack. The + // MANUAL indicates that the scope shouldn't actually generate code to set up + // the frame (that is done below). + FrameScope frame_scope(masm_, StackFrame::MANUAL); + + info->set_prologue_offset(masm_->pc_offset()); + __ Prologue(info->IsCodePreAgingActive()); + info->AddNoFrameRange(0, masm_->pc_offset()); + + { Comment cmnt(masm_, "[ Allocate locals"); + int locals_count = info->scope()->num_stack_slots(); + // Generators allocate locals, if any, in context slots. + DCHECK(!info->function()->is_generator() || locals_count == 0); + if (locals_count == 1) { + __ push(Immediate(isolate()->factory()->undefined_value())); + } else if (locals_count > 1) { + if (locals_count >= 128) { + Label ok; + __ mov(ecx, esp); + __ sub(ecx, Immediate(locals_count * kPointerSize)); + ExternalReference stack_limit = + ExternalReference::address_of_real_stack_limit(isolate()); + __ cmp(ecx, Operand::StaticVariable(stack_limit)); + __ j(above_equal, &ok, Label::kNear); + __ InvokeBuiltin(Builtins::STACK_OVERFLOW, CALL_FUNCTION); + __ bind(&ok); + } + __ mov(eax, Immediate(isolate()->factory()->undefined_value())); + const int kMaxPushes = 32; + if (locals_count >= kMaxPushes) { + int loop_iterations = locals_count / kMaxPushes; + __ mov(ecx, loop_iterations); + Label loop_header; + __ bind(&loop_header); + // Do pushes. + for (int i = 0; i < kMaxPushes; i++) { + __ push(eax); + } + __ dec(ecx); + __ j(not_zero, &loop_header, Label::kNear); + } + int remaining = locals_count % kMaxPushes; + // Emit the remaining pushes. + for (int i = 0; i < remaining; i++) { + __ push(eax); + } + } + } + + bool function_in_register = true; + + // Possibly allocate a local context. 
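// Editor's gloss on the computation below (not part of the upstream
// patch): num_heap_slots() counts Context::MIN_CONTEXT_SLOTS bookkeeping
// slots plus one slot for every variable that must live in a
// heap-allocated context (typically locals captured by a closure), so
// after the subtraction heap_slots is the number of such variables, and
// zero means no context object needs to be allocated at all.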
+ int heap_slots = info->scope()->num_heap_slots() - Context::MIN_CONTEXT_SLOTS; + if (heap_slots > 0) { + Comment cmnt(masm_, "[ Allocate context"); + bool need_write_barrier = true; + // Argument to NewContext is the function, which is still in edi. + if (FLAG_harmony_scoping && info->scope()->is_global_scope()) { + __ push(edi); + __ Push(info->scope()->GetScopeInfo()); + __ CallRuntime(Runtime::kNewGlobalContext, 2); + } else if (heap_slots <= FastNewContextStub::kMaximumSlots) { + FastNewContextStub stub(isolate(), heap_slots); + __ CallStub(&stub); + // Result of FastNewContextStub is always in new space. + need_write_barrier = false; + } else { + __ push(edi); + __ CallRuntime(Runtime::kNewFunctionContext, 1); + } + function_in_register = false; + // Context is returned in eax. It replaces the context passed to us. + // It's saved in the stack and kept live in esi. + __ mov(esi, eax); + __ mov(Operand(ebp, StandardFrameConstants::kContextOffset), eax); + + // Copy parameters into context if necessary. + int num_parameters = info->scope()->num_parameters(); + for (int i = 0; i < num_parameters; i++) { + Variable* var = scope()->parameter(i); + if (var->IsContextSlot()) { + int parameter_offset = StandardFrameConstants::kCallerSPOffset + + (num_parameters - 1 - i) * kPointerSize; + // Load parameter from stack. + __ mov(eax, Operand(ebp, parameter_offset)); + // Store it in the context. + int context_offset = Context::SlotOffset(var->index()); + __ mov(Operand(esi, context_offset), eax); + // Update the write barrier. This clobbers eax and ebx. + if (need_write_barrier) { + __ RecordWriteContextSlot(esi, + context_offset, + eax, + ebx); + } else if (FLAG_debug_code) { + Label done; + __ JumpIfInNewSpace(esi, eax, &done, Label::kNear); + __ Abort(kExpectedNewSpaceObject); + __ bind(&done); + } + } + } + } + + Variable* arguments = scope()->arguments(); + if (arguments != NULL) { + // Function uses arguments object. + Comment cmnt(masm_, "[ Allocate arguments object"); + if (function_in_register) { + __ push(edi); + } else { + __ push(Operand(ebp, JavaScriptFrameConstants::kFunctionOffset)); + } + // Receiver is just before the parameters on the caller's stack. + int num_parameters = info->scope()->num_parameters(); + int offset = num_parameters * kPointerSize; + __ lea(edx, + Operand(ebp, StandardFrameConstants::kCallerSPOffset + offset)); + __ push(edx); + __ push(Immediate(Smi::FromInt(num_parameters))); + // Arguments to ArgumentsAccessStub: + // function, receiver address, parameter count. + // The stub will rewrite receiver and parameter count if the previous + // stack frame was an arguments adapter frame. + ArgumentsAccessStub::Type type; + if (strict_mode() == STRICT) { + type = ArgumentsAccessStub::NEW_STRICT; + } else if (function()->has_duplicate_parameters()) { + type = ArgumentsAccessStub::NEW_SLOPPY_SLOW; + } else { + type = ArgumentsAccessStub::NEW_SLOPPY_FAST; + } + ArgumentsAccessStub stub(isolate(), type); + __ CallStub(&stub); + + SetVar(arguments, eax, ebx, edx); + } + + if (FLAG_trace) { + __ CallRuntime(Runtime::kTraceEnter, 0); + } + + // Visit the declarations and body unless there is an illegal + // redeclaration. + if (scope()->HasIllegalRedeclaration()) { + Comment cmnt(masm_, "[ Declarations"); + scope()->VisitIllegalRedeclaration(this); + + } else { + PrepareForBailoutForId(BailoutId::FunctionEntry(), NO_REGISTERS); + { Comment cmnt(masm_, "[ Declarations"); + // For named function expressions, declare the function name as a + // constant. 
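// (Editor's gloss: in a named function expression such as
//   var f = function g() { return g; };
// the name "g" is bound inside the function's own scope as a read-only
// constant, which is what the CONST/CONST_LEGACY assertion below checks.)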
+ if (scope()->is_function_scope() && scope()->function() != NULL) { + VariableDeclaration* function = scope()->function(); + DCHECK(function->proxy()->var()->mode() == CONST || + function->proxy()->var()->mode() == CONST_LEGACY); + DCHECK(function->proxy()->var()->location() != Variable::UNALLOCATED); + VisitVariableDeclaration(function); + } + VisitDeclarations(scope()->declarations()); + } + + { Comment cmnt(masm_, "[ Stack check"); + PrepareForBailoutForId(BailoutId::Declarations(), NO_REGISTERS); + Label ok; + ExternalReference stack_limit + = ExternalReference::address_of_stack_limit(isolate()); + __ cmp(esp, Operand::StaticVariable(stack_limit)); + __ j(above_equal, &ok, Label::kNear); + __ call(isolate()->builtins()->StackCheck(), RelocInfo::CODE_TARGET); + __ bind(&ok); + } + + { Comment cmnt(masm_, "[ Body"); + DCHECK(loop_depth() == 0); + VisitStatements(function()->body()); + DCHECK(loop_depth() == 0); + } + } + + // Always emit a 'return undefined' in case control fell off the end of + // the body. + { Comment cmnt(masm_, "[ return <undefined>;"); + __ mov(eax, isolate()->factory()->undefined_value()); + EmitReturnSequence(); + } +} + + +void FullCodeGenerator::ClearAccumulator() { + __ Move(eax, Immediate(Smi::FromInt(0))); +} + + +void FullCodeGenerator::EmitProfilingCounterDecrement(int delta) { + __ mov(ebx, Immediate(profiling_counter_)); + __ sub(FieldOperand(ebx, Cell::kValueOffset), + Immediate(Smi::FromInt(delta))); +} + + +void FullCodeGenerator::EmitProfilingCounterReset() { + int reset_value = FLAG_interrupt_budget; + __ mov(ebx, Immediate(profiling_counter_)); + __ mov(FieldOperand(ebx, Cell::kValueOffset), + Immediate(Smi::FromInt(reset_value))); +} + + +void FullCodeGenerator::EmitBackEdgeBookkeeping(IterationStatement* stmt, + Label* back_edge_target) { + Comment cmnt(masm_, "[ Back edge bookkeeping"); + Label ok; + + DCHECK(back_edge_target->is_bound()); + int distance = masm_->SizeOfCodeGeneratedSince(back_edge_target); + int weight = Min(kMaxBackEdgeWeight, + Max(1, distance / kCodeSizeMultiplier)); + EmitProfilingCounterDecrement(weight); + __ j(positive, &ok, Label::kNear); + __ call(isolate()->builtins()->InterruptCheck(), RelocInfo::CODE_TARGET); + + // Record a mapping of this PC offset to the OSR id. This is used to find + // the AST id from the unoptimized code in order to use it as a key into + // the deoptimization input data found in the optimized code. + RecordBackEdge(stmt->OsrEntryId()); + + EmitProfilingCounterReset(); + + __ bind(&ok); + PrepareForBailoutForId(stmt->EntryId(), NO_REGISTERS); + // Record a mapping of the OSR id to this PC. This is used if the OSR + // entry becomes the target of a bailout. We don't expect it to be, but + // we want it to work if it is. + PrepareForBailoutForId(stmt->OsrEntryId(), NO_REGISTERS); +} + + +void FullCodeGenerator::EmitReturnSequence() { + Comment cmnt(masm_, "[ Return sequence"); + if (return_label_.is_bound()) { + __ jmp(&return_label_); + } else { + // Common return label + __ bind(&return_label_); + if (FLAG_trace) { + __ push(eax); + __ CallRuntime(Runtime::kTraceExit, 1); + } + // Pretend that the exit is a backwards jump to the entry. 
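// Editor's note: the weight computed below charges the interrupt budget
// as if the whole function body were one back edge -- it scales with the
// emitted code size (distance / kCodeSizeMultiplier) and is clamped to
// the range [1, kMaxBackEdgeWeight], so larger functions drain their
// profiling counter in fewer returns.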
+ int weight = 1; + if (info_->ShouldSelfOptimize()) { + weight = FLAG_interrupt_budget / FLAG_self_opt_count; + } else { + int distance = masm_->pc_offset(); + weight = Min(kMaxBackEdgeWeight, + Max(1, distance / kCodeSizeMultiplier)); + } + EmitProfilingCounterDecrement(weight); + Label ok; + __ j(positive, &ok, Label::kNear); + __ push(eax); + __ call(isolate()->builtins()->InterruptCheck(), + RelocInfo::CODE_TARGET); + __ pop(eax); + EmitProfilingCounterReset(); + __ bind(&ok); +#ifdef DEBUG + // Add a label for checking the size of the code used for returning. + Label check_exit_codesize; + masm_->bind(&check_exit_codesize); +#endif + SetSourcePosition(function()->end_position() - 1); + __ RecordJSReturn(); + // Do not use the leave instruction here because it is too short to + // patch with the code required by the debugger. + __ mov(esp, ebp); + int no_frame_start = masm_->pc_offset(); + __ pop(ebp); + + int arguments_bytes = (info_->scope()->num_parameters() + 1) * kPointerSize; + __ Ret(arguments_bytes, ecx); + // Check that the size of the code used for returning is large enough + // for the debugger's requirements. + DCHECK(Assembler::kJSReturnSequenceLength <= + masm_->SizeOfCodeGeneratedSince(&check_exit_codesize)); + info_->AddNoFrameRange(no_frame_start, masm_->pc_offset()); + } +} + + +void FullCodeGenerator::EffectContext::Plug(Variable* var) const { + DCHECK(var->IsStackAllocated() || var->IsContextSlot()); +} + + +void FullCodeGenerator::AccumulatorValueContext::Plug(Variable* var) const { + DCHECK(var->IsStackAllocated() || var->IsContextSlot()); + codegen()->GetVar(result_register(), var); +} + + +void FullCodeGenerator::StackValueContext::Plug(Variable* var) const { + DCHECK(var->IsStackAllocated() || var->IsContextSlot()); + MemOperand operand = codegen()->VarOperand(var, result_register()); + // Memory operands can be pushed directly. + __ push(operand); +} + + +void FullCodeGenerator::TestContext::Plug(Variable* var) const { + // For simplicity we always test the accumulator register. + codegen()->GetVar(result_register(), var); + codegen()->PrepareForBailoutBeforeSplit(condition(), false, NULL, NULL); + codegen()->DoTest(this); +} + + +void FullCodeGenerator::EffectContext::Plug(Heap::RootListIndex index) const { + UNREACHABLE(); // Not used on X87. +} + + +void FullCodeGenerator::AccumulatorValueContext::Plug( + Heap::RootListIndex index) const { + UNREACHABLE(); // Not used on X87. +} + + +void FullCodeGenerator::StackValueContext::Plug( + Heap::RootListIndex index) const { + UNREACHABLE(); // Not used on X87. +} + + +void FullCodeGenerator::TestContext::Plug(Heap::RootListIndex index) const { + UNREACHABLE(); // Not used on X87. +} + + +void FullCodeGenerator::EffectContext::Plug(Handle<Object> lit) const { +} + + +void FullCodeGenerator::AccumulatorValueContext::Plug( + Handle<Object> lit) const { + if (lit->IsSmi()) { + __ SafeMove(result_register(), Immediate(lit)); + } else { + __ Move(result_register(), Immediate(lit)); + } +} + + +void FullCodeGenerator::StackValueContext::Plug(Handle<Object> lit) const { + if (lit->IsSmi()) { + __ SafePush(Immediate(lit)); + } else { + __ push(Immediate(lit)); + } +} + + +void FullCodeGenerator::TestContext::Plug(Handle<Object> lit) const { + codegen()->PrepareForBailoutBeforeSplit(condition(), + true, + true_label_, + false_label_); + DCHECK(!lit->IsUndetectableObject()); // There are no undetectable literals. 
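// Editor's note: the chain below hard-codes ECMAScript ToBoolean for the
// literals the compiler can see statically -- undefined, null, false, the
// empty string and the Smi 0 branch to the false label; true, object
// literals, non-empty strings and non-zero Smis branch to the true label.
// Anything else (a heap number, say) falls through to the generic
// DoTest() path at the end.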
+ if (lit->IsUndefined() || lit->IsNull() || lit->IsFalse()) { + if (false_label_ != fall_through_) __ jmp(false_label_); + } else if (lit->IsTrue() || lit->IsJSObject()) { + if (true_label_ != fall_through_) __ jmp(true_label_); + } else if (lit->IsString()) { + if (String::cast(*lit)->length() == 0) { + if (false_label_ != fall_through_) __ jmp(false_label_); + } else { + if (true_label_ != fall_through_) __ jmp(true_label_); + } + } else if (lit->IsSmi()) { + if (Smi::cast(*lit)->value() == 0) { + if (false_label_ != fall_through_) __ jmp(false_label_); + } else { + if (true_label_ != fall_through_) __ jmp(true_label_); + } + } else { + // For simplicity we always test the accumulator register. + __ mov(result_register(), lit); + codegen()->DoTest(this); + } +} + + +void FullCodeGenerator::EffectContext::DropAndPlug(int count, + Register reg) const { + DCHECK(count > 0); + __ Drop(count); +} + + +void FullCodeGenerator::AccumulatorValueContext::DropAndPlug( + int count, + Register reg) const { + DCHECK(count > 0); + __ Drop(count); + __ Move(result_register(), reg); +} + + +void FullCodeGenerator::StackValueContext::DropAndPlug(int count, + Register reg) const { + DCHECK(count > 0); + if (count > 1) __ Drop(count - 1); + __ mov(Operand(esp, 0), reg); +} + + +void FullCodeGenerator::TestContext::DropAndPlug(int count, + Register reg) const { + DCHECK(count > 0); + // For simplicity we always test the accumulator register. + __ Drop(count); + __ Move(result_register(), reg); + codegen()->PrepareForBailoutBeforeSplit(condition(), false, NULL, NULL); + codegen()->DoTest(this); +} + + +void FullCodeGenerator::EffectContext::Plug(Label* materialize_true, + Label* materialize_false) const { + DCHECK(materialize_true == materialize_false); + __ bind(materialize_true); +} + + +void FullCodeGenerator::AccumulatorValueContext::Plug( + Label* materialize_true, + Label* materialize_false) const { + Label done; + __ bind(materialize_true); + __ mov(result_register(), isolate()->factory()->true_value()); + __ jmp(&done, Label::kNear); + __ bind(materialize_false); + __ mov(result_register(), isolate()->factory()->false_value()); + __ bind(&done); +} + + +void FullCodeGenerator::StackValueContext::Plug( + Label* materialize_true, + Label* materialize_false) const { + Label done; + __ bind(materialize_true); + __ push(Immediate(isolate()->factory()->true_value())); + __ jmp(&done, Label::kNear); + __ bind(materialize_false); + __ push(Immediate(isolate()->factory()->false_value())); + __ bind(&done); +} + + +void FullCodeGenerator::TestContext::Plug(Label* materialize_true, + Label* materialize_false) const { + DCHECK(materialize_true == true_label_); + DCHECK(materialize_false == false_label_); +} + + +void FullCodeGenerator::EffectContext::Plug(bool flag) const { +} + + +void FullCodeGenerator::AccumulatorValueContext::Plug(bool flag) const { + Handle<Object> value = flag + ? isolate()->factory()->true_value() + : isolate()->factory()->false_value(); + __ mov(result_register(), value); +} + + +void FullCodeGenerator::StackValueContext::Plug(bool flag) const { + Handle<Object> value = flag + ? 
isolate()->factory()->true_value() + : isolate()->factory()->false_value(); + __ push(Immediate(value)); +} + + +void FullCodeGenerator::TestContext::Plug(bool flag) const { + codegen()->PrepareForBailoutBeforeSplit(condition(), + true, + true_label_, + false_label_); + if (flag) { + if (true_label_ != fall_through_) __ jmp(true_label_); + } else { + if (false_label_ != fall_through_) __ jmp(false_label_); + } +} + + +void FullCodeGenerator::DoTest(Expression* condition, + Label* if_true, + Label* if_false, + Label* fall_through) { + Handle<Code> ic = ToBooleanStub::GetUninitialized(isolate()); + CallIC(ic, condition->test_id()); + __ test(result_register(), result_register()); + // The stub returns nonzero for true. + Split(not_zero, if_true, if_false, fall_through); +} + + +void FullCodeGenerator::Split(Condition cc, + Label* if_true, + Label* if_false, + Label* fall_through) { + if (if_false == fall_through) { + __ j(cc, if_true); + } else if (if_true == fall_through) { + __ j(NegateCondition(cc), if_false); + } else { + __ j(cc, if_true); + __ jmp(if_false); + } +} + + +MemOperand FullCodeGenerator::StackOperand(Variable* var) { + DCHECK(var->IsStackAllocated()); + // Offset is negative because higher indexes are at lower addresses. + int offset = -var->index() * kPointerSize; + // Adjust by a (parameter or local) base offset. + if (var->IsParameter()) { + offset += (info_->scope()->num_parameters() + 1) * kPointerSize; + } else { + offset += JavaScriptFrameConstants::kLocal0Offset; + } + return Operand(ebp, offset); +} + + +MemOperand FullCodeGenerator::VarOperand(Variable* var, Register scratch) { + DCHECK(var->IsContextSlot() || var->IsStackAllocated()); + if (var->IsContextSlot()) { + int context_chain_length = scope()->ContextChainLength(var->scope()); + __ LoadContext(scratch, context_chain_length); + return ContextOperand(scratch, var->index()); + } else { + return StackOperand(var); + } +} + + +void FullCodeGenerator::GetVar(Register dest, Variable* var) { + DCHECK(var->IsContextSlot() || var->IsStackAllocated()); + MemOperand location = VarOperand(var, dest); + __ mov(dest, location); +} + + +void FullCodeGenerator::SetVar(Variable* var, + Register src, + Register scratch0, + Register scratch1) { + DCHECK(var->IsContextSlot() || var->IsStackAllocated()); + DCHECK(!scratch0.is(src)); + DCHECK(!scratch0.is(scratch1)); + DCHECK(!scratch1.is(src)); + MemOperand location = VarOperand(var, scratch0); + __ mov(location, src); + + // Emit the write barrier code if the location is in the heap. + if (var->IsContextSlot()) { + int offset = Context::SlotOffset(var->index()); + DCHECK(!scratch0.is(esi) && !src.is(esi) && !scratch1.is(esi)); + __ RecordWriteContextSlot(scratch0, offset, src, scratch1); + } +} + + +void FullCodeGenerator::PrepareForBailoutBeforeSplit(Expression* expr, + bool should_normalize, + Label* if_true, + Label* if_false) { + // Only prepare for bailouts before splits if we're in a test + // context. Otherwise, we let the Visit function deal with the + // preparation to avoid preparing with the same AST id twice. 
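// Editor's note: "normalize" here appears to refer to deopt resumption.
// In normal execution the jmp below skips straight past the bailout
// point; if optimized code ever bails out to it, eax holds a raw value,
// so the cmp-against-true and re-Split convert it back into the
// canonical control flow the test context expects.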
+ if (!context()->IsTest() || !info_->IsOptimizable()) return; + + Label skip; + if (should_normalize) __ jmp(&skip, Label::kNear); + PrepareForBailout(expr, TOS_REG); + if (should_normalize) { + __ cmp(eax, isolate()->factory()->true_value()); + Split(equal, if_true, if_false, NULL); + __ bind(&skip); + } +} + + +void FullCodeGenerator::EmitDebugCheckDeclarationContext(Variable* variable) { + // The variable in the declaration always resides in the current context. + DCHECK_EQ(0, scope()->ContextChainLength(variable->scope())); + if (generate_debug_code_) { + // Check that we're not inside a with or catch context. + __ mov(ebx, FieldOperand(esi, HeapObject::kMapOffset)); + __ cmp(ebx, isolate()->factory()->with_context_map()); + __ Check(not_equal, kDeclarationInWithContext); + __ cmp(ebx, isolate()->factory()->catch_context_map()); + __ Check(not_equal, kDeclarationInCatchContext); + } +} + + +void FullCodeGenerator::VisitVariableDeclaration( + VariableDeclaration* declaration) { + // If it was not possible to allocate the variable at compile time, we + // need to "declare" it at runtime to make sure it actually exists in the + // local context. + VariableProxy* proxy = declaration->proxy(); + VariableMode mode = declaration->mode(); + Variable* variable = proxy->var(); + bool hole_init = mode == LET || mode == CONST || mode == CONST_LEGACY; + switch (variable->location()) { + case Variable::UNALLOCATED: + globals_->Add(variable->name(), zone()); + globals_->Add(variable->binding_needs_init() + ? isolate()->factory()->the_hole_value() + : isolate()->factory()->undefined_value(), zone()); + break; + + case Variable::PARAMETER: + case Variable::LOCAL: + if (hole_init) { + Comment cmnt(masm_, "[ VariableDeclaration"); + __ mov(StackOperand(variable), + Immediate(isolate()->factory()->the_hole_value())); + } + break; + + case Variable::CONTEXT: + if (hole_init) { + Comment cmnt(masm_, "[ VariableDeclaration"); + EmitDebugCheckDeclarationContext(variable); + __ mov(ContextOperand(esi, variable->index()), + Immediate(isolate()->factory()->the_hole_value())); + // No write barrier since the hole value is in old space. + PrepareForBailoutForId(proxy->id(), NO_REGISTERS); + } + break; + + case Variable::LOOKUP: { + Comment cmnt(masm_, "[ VariableDeclaration"); + __ push(esi); + __ push(Immediate(variable->name())); + // VariableDeclaration nodes are always introduced in one of four modes. + DCHECK(IsDeclaredVariableMode(mode)); + PropertyAttributes attr = + IsImmutableVariableMode(mode) ? READ_ONLY : NONE; + __ push(Immediate(Smi::FromInt(attr))); + // Push initial value, if any. + // Note: For variables we must not push an initial value (such as + // 'undefined') because we may have a (legal) redeclaration and we + // must not destroy the current value. + if (hole_init) { + __ push(Immediate(isolate()->factory()->the_hole_value())); + } else { + __ push(Immediate(Smi::FromInt(0))); // Indicates no initial value. + } + __ CallRuntime(Runtime::kDeclareLookupSlot, 4); + break; + } + } +} + + +void FullCodeGenerator::VisitFunctionDeclaration( + FunctionDeclaration* declaration) { + VariableProxy* proxy = declaration->proxy(); + Variable* variable = proxy->var(); + switch (variable->location()) { + case Variable::UNALLOCATED: { + globals_->Add(variable->name(), zone()); + Handle<SharedFunctionInfo> function = + Compiler::BuildFunctionInfo(declaration->fun(), script(), info_); + // Check for stack-overflow exception. 
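+      // Compiler::BuildFunctionInfo returns a null handle if compiling the
+      // function literal overflowed the stack; propagate that failure.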
+ if (function.is_null()) return SetStackOverflow(); + globals_->Add(function, zone()); + break; + } + + case Variable::PARAMETER: + case Variable::LOCAL: { + Comment cmnt(masm_, "[ FunctionDeclaration"); + VisitForAccumulatorValue(declaration->fun()); + __ mov(StackOperand(variable), result_register()); + break; + } + + case Variable::CONTEXT: { + Comment cmnt(masm_, "[ FunctionDeclaration"); + EmitDebugCheckDeclarationContext(variable); + VisitForAccumulatorValue(declaration->fun()); + __ mov(ContextOperand(esi, variable->index()), result_register()); + // We know that we have written a function, which is not a smi. + __ RecordWriteContextSlot(esi, + Context::SlotOffset(variable->index()), + result_register(), + ecx, + EMIT_REMEMBERED_SET, + OMIT_SMI_CHECK); + PrepareForBailoutForId(proxy->id(), NO_REGISTERS); + break; + } + + case Variable::LOOKUP: { + Comment cmnt(masm_, "[ FunctionDeclaration"); + __ push(esi); + __ push(Immediate(variable->name())); + __ push(Immediate(Smi::FromInt(NONE))); + VisitForStackValue(declaration->fun()); + __ CallRuntime(Runtime::kDeclareLookupSlot, 4); + break; + } + } +} + + +void FullCodeGenerator::VisitModuleDeclaration(ModuleDeclaration* declaration) { + Variable* variable = declaration->proxy()->var(); + DCHECK(variable->location() == Variable::CONTEXT); + DCHECK(variable->interface()->IsFrozen()); + + Comment cmnt(masm_, "[ ModuleDeclaration"); + EmitDebugCheckDeclarationContext(variable); + + // Load instance object. + __ LoadContext(eax, scope_->ContextChainLength(scope_->GlobalScope())); + __ mov(eax, ContextOperand(eax, variable->interface()->Index())); + __ mov(eax, ContextOperand(eax, Context::EXTENSION_INDEX)); + + // Assign it. + __ mov(ContextOperand(esi, variable->index()), eax); + // We know that we have written a module, which is not a smi. + __ RecordWriteContextSlot(esi, + Context::SlotOffset(variable->index()), + eax, + ecx, + EMIT_REMEMBERED_SET, + OMIT_SMI_CHECK); + PrepareForBailoutForId(declaration->proxy()->id(), NO_REGISTERS); + + // Traverse into body. + Visit(declaration->module()); +} + + +void FullCodeGenerator::VisitImportDeclaration(ImportDeclaration* declaration) { + VariableProxy* proxy = declaration->proxy(); + Variable* variable = proxy->var(); + switch (variable->location()) { + case Variable::UNALLOCATED: + // TODO(rossberg) + break; + + case Variable::CONTEXT: { + Comment cmnt(masm_, "[ ImportDeclaration"); + EmitDebugCheckDeclarationContext(variable); + // TODO(rossberg) + break; + } + + case Variable::PARAMETER: + case Variable::LOCAL: + case Variable::LOOKUP: + UNREACHABLE(); + } +} + + +void FullCodeGenerator::VisitExportDeclaration(ExportDeclaration* declaration) { + // TODO(rossberg) +} + + +void FullCodeGenerator::DeclareGlobals(Handle<FixedArray> pairs) { + // Call the runtime to declare the globals. + __ push(esi); // The context is the first argument. + __ Push(pairs); + __ Push(Smi::FromInt(DeclareGlobalsFlags())); + __ CallRuntime(Runtime::kDeclareGlobals, 3); + // Return value is ignored. +} + + +void FullCodeGenerator::DeclareModules(Handle<FixedArray> descriptions) { + // Call the runtime to declare the modules. + __ Push(descriptions); + __ CallRuntime(Runtime::kDeclareModules, 1); + // Return value is ignored. +} + + +void FullCodeGenerator::VisitSwitchStatement(SwitchStatement* stmt) { + Comment cmnt(masm_, "[ SwitchStatement"); + Breakable nested_statement(this, stmt); + SetStatementPosition(stmt); + + // Keep the switch value on the stack until a case matches. 
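+  // Layout of the emitted code: all case comparisons come first, each
+  // jumping to its body on a match; then the jump to the default clause (or
+  // the break label); then the case bodies in source order, so fall-through
+  // between consecutive bodies works without extra jumps.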
+ VisitForStackValue(stmt->tag()); + PrepareForBailoutForId(stmt->EntryId(), NO_REGISTERS); + + ZoneList<CaseClause*>* clauses = stmt->cases(); + CaseClause* default_clause = NULL; // Can occur anywhere in the list. + + Label next_test; // Recycled for each test. + // Compile all the tests with branches to their bodies. + for (int i = 0; i < clauses->length(); i++) { + CaseClause* clause = clauses->at(i); + clause->body_target()->Unuse(); + + // The default is not a test, but remember it as final fall through. + if (clause->is_default()) { + default_clause = clause; + continue; + } + + Comment cmnt(masm_, "[ Case comparison"); + __ bind(&next_test); + next_test.Unuse(); + + // Compile the label expression. + VisitForAccumulatorValue(clause->label()); + + // Perform the comparison as if via '==='. + __ mov(edx, Operand(esp, 0)); // Switch value. + bool inline_smi_code = ShouldInlineSmiCase(Token::EQ_STRICT); + JumpPatchSite patch_site(masm_); + if (inline_smi_code) { + Label slow_case; + __ mov(ecx, edx); + __ or_(ecx, eax); + patch_site.EmitJumpIfNotSmi(ecx, &slow_case, Label::kNear); + + __ cmp(edx, eax); + __ j(not_equal, &next_test); + __ Drop(1); // Switch value is no longer needed. + __ jmp(clause->body_target()); + __ bind(&slow_case); + } + + // Record position before stub call for type feedback. + SetSourcePosition(clause->position()); + Handle<Code> ic = CompareIC::GetUninitialized(isolate(), Token::EQ_STRICT); + CallIC(ic, clause->CompareId()); + patch_site.EmitPatchInfo(); + + Label skip; + __ jmp(&skip, Label::kNear); + PrepareForBailout(clause, TOS_REG); + __ cmp(eax, isolate()->factory()->true_value()); + __ j(not_equal, &next_test); + __ Drop(1); + __ jmp(clause->body_target()); + __ bind(&skip); + + __ test(eax, eax); + __ j(not_equal, &next_test); + __ Drop(1); // Switch value is no longer needed. + __ jmp(clause->body_target()); + } + + // Discard the test value and jump to the default if present, otherwise to + // the end of the statement. + __ bind(&next_test); + __ Drop(1); // Switch value is no longer needed. + if (default_clause == NULL) { + __ jmp(nested_statement.break_label()); + } else { + __ jmp(default_clause->body_target()); + } + + // Compile all the case bodies. + for (int i = 0; i < clauses->length(); i++) { + Comment cmnt(masm_, "[ Case body"); + CaseClause* clause = clauses->at(i); + __ bind(clause->body_target()); + PrepareForBailoutForId(clause->EntryId(), NO_REGISTERS); + VisitStatements(clause->statements()); + } + + __ bind(nested_statement.break_label()); + PrepareForBailoutForId(stmt->ExitId(), NO_REGISTERS); +} + + +void FullCodeGenerator::VisitForInStatement(ForInStatement* stmt) { + Comment cmnt(masm_, "[ ForInStatement"); + int slot = stmt->ForInFeedbackSlot(); + + SetStatementPosition(stmt); + + Label loop, exit; + ForIn loop_statement(this, stmt); + increment_loop_depth(); + + // Get the object to enumerate over. If the object is null or undefined, skip + // over the loop. See ECMA-262 version 5, section 12.6.4. + VisitForAccumulatorValue(stmt->enumerable()); + __ cmp(eax, isolate()->factory()->undefined_value()); + __ j(equal, &exit); + __ cmp(eax, isolate()->factory()->null_value()); + __ j(equal, &exit); + + PrepareForBailoutForId(stmt->PrepareId(), TOS_REG); + + // Convert the object to a JS object. 
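+  // Primitives (smis, strings, etc.) are boxed via Builtins::TO_OBJECT;
+  // values at or above FIRST_SPEC_OBJECT_TYPE are already objects.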
+ Label convert, done_convert; + __ JumpIfSmi(eax, &convert, Label::kNear); + __ CmpObjectType(eax, FIRST_SPEC_OBJECT_TYPE, ecx); + __ j(above_equal, &done_convert, Label::kNear); + __ bind(&convert); + __ push(eax); + __ InvokeBuiltin(Builtins::TO_OBJECT, CALL_FUNCTION); + __ bind(&done_convert); + __ push(eax); + + // Check for proxies. + Label call_runtime, use_cache, fixed_array; + STATIC_ASSERT(FIRST_JS_PROXY_TYPE == FIRST_SPEC_OBJECT_TYPE); + __ CmpObjectType(eax, LAST_JS_PROXY_TYPE, ecx); + __ j(below_equal, &call_runtime); + + // Check cache validity in generated code. This is a fast case for + // the JSObject::IsSimpleEnum cache validity checks. If we cannot + // guarantee cache validity, call the runtime system to check cache + // validity or get the property names in a fixed array. + __ CheckEnumCache(&call_runtime); + + __ mov(eax, FieldOperand(eax, HeapObject::kMapOffset)); + __ jmp(&use_cache, Label::kNear); + + // Get the set of properties to enumerate. + __ bind(&call_runtime); + __ push(eax); + __ CallRuntime(Runtime::kGetPropertyNamesFast, 1); + __ cmp(FieldOperand(eax, HeapObject::kMapOffset), + isolate()->factory()->meta_map()); + __ j(not_equal, &fixed_array); + + + // We got a map in register eax. Get the enumeration cache from it. + Label no_descriptors; + __ bind(&use_cache); + + __ EnumLength(edx, eax); + __ cmp(edx, Immediate(Smi::FromInt(0))); + __ j(equal, &no_descriptors); + + __ LoadInstanceDescriptors(eax, ecx); + __ mov(ecx, FieldOperand(ecx, DescriptorArray::kEnumCacheOffset)); + __ mov(ecx, FieldOperand(ecx, DescriptorArray::kEnumCacheBridgeCacheOffset)); + + // Set up the four remaining stack slots. + __ push(eax); // Map. + __ push(ecx); // Enumeration cache. + __ push(edx); // Number of valid entries for the map in the enum cache. + __ push(Immediate(Smi::FromInt(0))); // Initial index. + __ jmp(&loop); + + __ bind(&no_descriptors); + __ add(esp, Immediate(kPointerSize)); + __ jmp(&exit); + + // We got a fixed array in register eax. Iterate through that. + Label non_proxy; + __ bind(&fixed_array); + + // No need for a write barrier, we are storing a Smi in the feedback vector. + __ LoadHeapObject(ebx, FeedbackVector()); + __ mov(FieldOperand(ebx, FixedArray::OffsetOfElementAt(slot)), + Immediate(TypeFeedbackInfo::MegamorphicSentinel(isolate()))); + + __ mov(ebx, Immediate(Smi::FromInt(1))); // Smi indicates slow check + __ mov(ecx, Operand(esp, 0 * kPointerSize)); // Get enumerated object + STATIC_ASSERT(FIRST_JS_PROXY_TYPE == FIRST_SPEC_OBJECT_TYPE); + __ CmpObjectType(ecx, LAST_JS_PROXY_TYPE, ecx); + __ j(above, &non_proxy); + __ Move(ebx, Immediate(Smi::FromInt(0))); // Zero indicates proxy + __ bind(&non_proxy); + __ push(ebx); // Smi + __ push(eax); // Array + __ mov(eax, FieldOperand(eax, FixedArray::kLengthOffset)); + __ push(eax); // Fixed array length (as smi). + __ push(Immediate(Smi::FromInt(0))); // Initial index. + + // Generate code for doing the condition check. + PrepareForBailoutForId(stmt->BodyId(), NO_REGISTERS); + __ bind(&loop); + __ mov(eax, Operand(esp, 0 * kPointerSize)); // Get the current index. + __ cmp(eax, Operand(esp, 1 * kPointerSize)); // Compare to the array length. + __ j(above_equal, loop_statement.break_label()); + + // Get the current entry of the array into register ebx. + __ mov(ebx, Operand(esp, 2 * kPointerSize)); + __ mov(ebx, FieldOperand(ebx, eax, times_2, FixedArray::kHeaderSize)); + + // Get the expected map from the stack or a smi in the + // permanent slow case into register edx. 
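+  // Loop stack layout, top to bottom:
+  //   esp[0 * kPointerSize]: current index (smi)
+  //   esp[1 * kPointerSize]: number of entries / array length (smi)
+  //   esp[2 * kPointerSize]: enum cache or fixed array of keys
+  //   esp[3 * kPointerSize]: expected map, or smi 0 (proxy) / 1 (slow case)
+  //   esp[4 * kPointerSize]: the enumerable object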
+ __ mov(edx, Operand(esp, 3 * kPointerSize)); + + // Check if the expected map still matches that of the enumerable. + // If not, we may have to filter the key. + Label update_each; + __ mov(ecx, Operand(esp, 4 * kPointerSize)); + __ cmp(edx, FieldOperand(ecx, HeapObject::kMapOffset)); + __ j(equal, &update_each, Label::kNear); + + // For proxies, no filtering is done. + // TODO(rossberg): What if only a prototype is a proxy? Not specified yet. + DCHECK(Smi::FromInt(0) == 0); + __ test(edx, edx); + __ j(zero, &update_each); + + // Convert the entry to a string or null if it isn't a property + // anymore. If the property has been removed while iterating, we + // just skip it. + __ push(ecx); // Enumerable. + __ push(ebx); // Current entry. + __ InvokeBuiltin(Builtins::FILTER_KEY, CALL_FUNCTION); + __ test(eax, eax); + __ j(equal, loop_statement.continue_label()); + __ mov(ebx, eax); + + // Update the 'each' property or variable from the possibly filtered + // entry in register ebx. + __ bind(&update_each); + __ mov(result_register(), ebx); + // Perform the assignment as if via '='. + { EffectContext context(this); + EmitAssignment(stmt->each()); + } + + // Generate code for the body of the loop. + Visit(stmt->body()); + + // Generate code for going to the next element by incrementing the + // index (smi) stored on top of the stack. + __ bind(loop_statement.continue_label()); + __ add(Operand(esp, 0 * kPointerSize), Immediate(Smi::FromInt(1))); + + EmitBackEdgeBookkeeping(stmt, &loop); + __ jmp(&loop); + + // Remove the pointers stored on the stack. + __ bind(loop_statement.break_label()); + __ add(esp, Immediate(5 * kPointerSize)); + + // Exit and decrement the loop depth. + PrepareForBailoutForId(stmt->ExitId(), NO_REGISTERS); + __ bind(&exit); + decrement_loop_depth(); +} + + +void FullCodeGenerator::VisitForOfStatement(ForOfStatement* stmt) { + Comment cmnt(masm_, "[ ForOfStatement"); + SetStatementPosition(stmt); + + Iteration loop_statement(this, stmt); + increment_loop_depth(); + + // var iterator = iterable[Symbol.iterator](); + VisitForEffect(stmt->assign_iterator()); + + // Loop entry. + __ bind(loop_statement.continue_label()); + + // result = iterator.next() + VisitForEffect(stmt->next_result()); + + // if (result.done) break; + Label result_not_done; + VisitForControl(stmt->result_done(), + loop_statement.break_label(), + &result_not_done, + &result_not_done); + __ bind(&result_not_done); + + // each = result.value + VisitForEffect(stmt->assign_each()); + + // Generate code for the body of the loop. + Visit(stmt->body()); + + // Check stack before looping. + PrepareForBailoutForId(stmt->BackEdgeId(), NO_REGISTERS); + EmitBackEdgeBookkeeping(stmt, loop_statement.continue_label()); + __ jmp(loop_statement.continue_label()); + + // Exit and decrement the loop depth. + PrepareForBailoutForId(stmt->ExitId(), NO_REGISTERS); + __ bind(loop_statement.break_label()); + decrement_loop_depth(); +} + + +void FullCodeGenerator::EmitNewClosure(Handle<SharedFunctionInfo> info, + bool pretenure) { + // Use the fast case closure allocation code that allocates in new + // space for nested functions that don't need literals cloning. If + // we're running with the --always-opt or the --prepare-always-opt + // flag, we need to use the runtime function so that the new function + // we are creating here gets a chance to have its code optimized and + // doesn't just get a copy of the existing unoptimized code. 
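+  // Fast path: FastNewClosureStub allocates the closure inline in new space.
+  // Slow path: Runtime::kNewClosure, called with (context, info, pretenure).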
+ if (!FLAG_always_opt && + !FLAG_prepare_always_opt && + !pretenure && + scope()->is_function_scope() && + info->num_literals() == 0) { + FastNewClosureStub stub(isolate(), + info->strict_mode(), + info->is_generator()); + __ mov(ebx, Immediate(info)); + __ CallStub(&stub); + } else { + __ push(esi); + __ push(Immediate(info)); + __ push(Immediate(pretenure + ? isolate()->factory()->true_value() + : isolate()->factory()->false_value())); + __ CallRuntime(Runtime::kNewClosure, 3); + } + context()->Plug(eax); +} + + +void FullCodeGenerator::VisitVariableProxy(VariableProxy* expr) { + Comment cmnt(masm_, "[ VariableProxy"); + EmitVariableLoad(expr); +} + + +void FullCodeGenerator::EmitLoadGlobalCheckExtensions(VariableProxy* proxy, + TypeofState typeof_state, + Label* slow) { + Register context = esi; + Register temp = edx; + + Scope* s = scope(); + while (s != NULL) { + if (s->num_heap_slots() > 0) { + if (s->calls_sloppy_eval()) { + // Check that extension is NULL. + __ cmp(ContextOperand(context, Context::EXTENSION_INDEX), + Immediate(0)); + __ j(not_equal, slow); + } + // Load next context in chain. + __ mov(temp, ContextOperand(context, Context::PREVIOUS_INDEX)); + // Walk the rest of the chain without clobbering esi. + context = temp; + } + // If no outer scope calls eval, we do not need to check more + // context extensions. If we have reached an eval scope, we check + // all extensions from this point. + if (!s->outer_scope_calls_sloppy_eval() || s->is_eval_scope()) break; + s = s->outer_scope(); + } + + if (s != NULL && s->is_eval_scope()) { + // Loop up the context chain. There is no frame effect so it is + // safe to use raw labels here. + Label next, fast; + if (!context.is(temp)) { + __ mov(temp, context); + } + __ bind(&next); + // Terminate at native context. + __ cmp(FieldOperand(temp, HeapObject::kMapOffset), + Immediate(isolate()->factory()->native_context_map())); + __ j(equal, &fast, Label::kNear); + // Check that extension is NULL. + __ cmp(ContextOperand(temp, Context::EXTENSION_INDEX), Immediate(0)); + __ j(not_equal, slow); + // Load next context in chain. + __ mov(temp, ContextOperand(temp, Context::PREVIOUS_INDEX)); + __ jmp(&next); + __ bind(&fast); + } + + // All extension objects were empty and it is safe to use a global + // load IC call. + __ mov(LoadIC::ReceiverRegister(), GlobalObjectOperand()); + __ mov(LoadIC::NameRegister(), proxy->var()->name()); + if (FLAG_vector_ics) { + __ mov(LoadIC::SlotRegister(), + Immediate(Smi::FromInt(proxy->VariableFeedbackSlot()))); + } + + ContextualMode mode = (typeof_state == INSIDE_TYPEOF) + ? NOT_CONTEXTUAL + : CONTEXTUAL; + + CallLoadIC(mode); +} + + +MemOperand FullCodeGenerator::ContextSlotOperandCheckExtensions(Variable* var, + Label* slow) { + DCHECK(var->IsContextSlot()); + Register context = esi; + Register temp = ebx; + + for (Scope* s = scope(); s != var->scope(); s = s->outer_scope()) { + if (s->num_heap_slots() > 0) { + if (s->calls_sloppy_eval()) { + // Check that extension is NULL. + __ cmp(ContextOperand(context, Context::EXTENSION_INDEX), + Immediate(0)); + __ j(not_equal, slow); + } + __ mov(temp, ContextOperand(context, Context::PREVIOUS_INDEX)); + // Walk the rest of the chain without clobbering esi. + context = temp; + } + } + // Check that last extension is NULL. 
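+  // |context| now designates the context holding the slot itself; one final
+  // extension check guards against an eval-introduced binding there.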
+ __ cmp(ContextOperand(context, Context::EXTENSION_INDEX), Immediate(0)); + __ j(not_equal, slow); + + // This function is used only for loads, not stores, so it's safe to + // return an esi-based operand (the write barrier cannot be allowed to + // destroy the esi register). + return ContextOperand(context, var->index()); +} + + +void FullCodeGenerator::EmitDynamicLookupFastCase(VariableProxy* proxy, + TypeofState typeof_state, + Label* slow, + Label* done) { + // Generate fast-case code for variables that might be shadowed by + // eval-introduced variables. Eval is used a lot without + // introducing variables. In those cases, we do not want to + // perform a runtime call for all variables in the scope + // containing the eval. + Variable* var = proxy->var(); + if (var->mode() == DYNAMIC_GLOBAL) { + EmitLoadGlobalCheckExtensions(proxy, typeof_state, slow); + __ jmp(done); + } else if (var->mode() == DYNAMIC_LOCAL) { + Variable* local = var->local_if_not_shadowed(); + __ mov(eax, ContextSlotOperandCheckExtensions(local, slow)); + if (local->mode() == LET || local->mode() == CONST || + local->mode() == CONST_LEGACY) { + __ cmp(eax, isolate()->factory()->the_hole_value()); + __ j(not_equal, done); + if (local->mode() == CONST_LEGACY) { + __ mov(eax, isolate()->factory()->undefined_value()); + } else { // LET || CONST + __ push(Immediate(var->name())); + __ CallRuntime(Runtime::kThrowReferenceError, 1); + } + } + __ jmp(done); + } +} + + +void FullCodeGenerator::EmitVariableLoad(VariableProxy* proxy) { + // Record position before possible IC call. + SetSourcePosition(proxy->position()); + Variable* var = proxy->var(); + + // Three cases: global variables, lookup variables, and all other types of + // variables. + switch (var->location()) { + case Variable::UNALLOCATED: { + Comment cmnt(masm_, "[ Global variable"); + __ mov(LoadIC::ReceiverRegister(), GlobalObjectOperand()); + __ mov(LoadIC::NameRegister(), var->name()); + if (FLAG_vector_ics) { + __ mov(LoadIC::SlotRegister(), + Immediate(Smi::FromInt(proxy->VariableFeedbackSlot()))); + } + CallLoadIC(CONTEXTUAL); + context()->Plug(eax); + break; + } + + case Variable::PARAMETER: + case Variable::LOCAL: + case Variable::CONTEXT: { + Comment cmnt(masm_, var->IsContextSlot() ? "[ Context variable" + : "[ Stack variable"); + if (var->binding_needs_init()) { + // var->scope() may be NULL when the proxy is located in eval code and + // refers to a potential outside binding. Currently those bindings are + // always looked up dynamically, i.e. in that case + // var->location() == LOOKUP. + // always holds. + DCHECK(var->scope() != NULL); + + // Check if the binding really needs an initialization check. The check + // can be skipped in the following situation: we have a LET or CONST + // binding in harmony mode, both the Variable and the VariableProxy have + // the same declaration scope (i.e. they are both in global code, in the + // same function or in the same eval code) and the VariableProxy is in + // the source physically located after the initializer of the variable. 
+        //
+        // We cannot skip any initialization checks for CONST in non-harmony
+        // mode because const variables may be declared but never initialized:
+        //   if (false) { const x; }; var y = x;
+        //
+        // The condition on the declaration scopes is a conservative check for
+        // nested functions that access a binding and are called before the
+        // binding is initialized:
+        //   function() { f(); let x = 1; function f() { x = 2; } }
+        //
+        bool skip_init_check;
+        if (var->scope()->DeclarationScope() != scope()->DeclarationScope()) {
+          skip_init_check = false;
+        } else {
+          // Check that we always have valid source position.
+          DCHECK(var->initializer_position() != RelocInfo::kNoPosition);
+          DCHECK(proxy->position() != RelocInfo::kNoPosition);
+          skip_init_check = var->mode() != CONST_LEGACY &&
+              var->initializer_position() < proxy->position();
+        }
+
+        if (!skip_init_check) {
+          // Let and const need a read barrier.
+          Label done;
+          GetVar(eax, var);
+          __ cmp(eax, isolate()->factory()->the_hole_value());
+          __ j(not_equal, &done, Label::kNear);
+          if (var->mode() == LET || var->mode() == CONST) {
+            // Throw a reference error when using an uninitialized let/const
+            // binding in harmony mode.
+            __ push(Immediate(var->name()));
+            __ CallRuntime(Runtime::kThrowReferenceError, 1);
+          } else {
+            // Uninitialized const bindings outside of harmony mode are
+            // unholed.
+            DCHECK(var->mode() == CONST_LEGACY);
+            __ mov(eax, isolate()->factory()->undefined_value());
+          }
+          __ bind(&done);
+          context()->Plug(eax);
+          break;
+        }
+      }
+      context()->Plug(var);
+      break;
+    }
+
+    case Variable::LOOKUP: {
+      Comment cmnt(masm_, "[ Lookup variable");
+      Label done, slow;
+      // Generate code for loading from variables potentially shadowed
+      // by eval-introduced variables.
+      EmitDynamicLookupFastCase(proxy, NOT_INSIDE_TYPEOF, &slow, &done);
+      __ bind(&slow);
+      __ push(esi);  // Context.
+      __ push(Immediate(var->name()));
+      __ CallRuntime(Runtime::kLoadLookupSlot, 2);
+      __ bind(&done);
+      context()->Plug(eax);
+      break;
+    }
+  }
+}
+
+
+void FullCodeGenerator::VisitRegExpLiteral(RegExpLiteral* expr) {
+  Comment cmnt(masm_, "[ RegExpLiteral");
+  Label materialized;
+  // Registers will be used as follows:
+  // edi = JS function.
+  // ecx = literals array.
+  // ebx = regexp literal.
+  // eax = regexp literal clone.
+  __ mov(edi, Operand(ebp, JavaScriptFrameConstants::kFunctionOffset));
+  __ mov(ecx, FieldOperand(edi, JSFunction::kLiteralsOffset));
+  int literal_offset =
+      FixedArray::kHeaderSize + expr->literal_index() * kPointerSize;
+  __ mov(ebx, FieldOperand(ecx, literal_offset));
+  __ cmp(ebx, isolate()->factory()->undefined_value());
+  __ j(not_equal, &materialized, Label::kNear);
+
+  // Create regexp literal using runtime function.
+  // Result will be in eax.
+  __ push(ecx);
+  __ push(Immediate(Smi::FromInt(expr->literal_index())));
+  __ push(Immediate(expr->pattern()));
+  __ push(Immediate(expr->flags()));
+  __ CallRuntime(Runtime::kMaterializeRegExpLiteral, 4);
+  __ mov(ebx, eax);
+
+  __ bind(&materialized);
+  int size = JSRegExp::kSize + JSRegExp::kInObjectFieldCount * kPointerSize;
+  Label allocated, runtime_allocate;
+  __ Allocate(size, eax, ecx, edx, &runtime_allocate, TAG_OBJECT);
+  __ jmp(&allocated);
+
+  __ bind(&runtime_allocate);
+  __ push(ebx);
+  __ push(Immediate(Smi::FromInt(size)));
+  __ CallRuntime(Runtime::kAllocateInNewSpace, 1);
+  __ pop(ebx);
+
+  __ bind(&allocated);
+  // Copy the content into the newly allocated memory.
+  // (Unroll copy loop once for better throughput).
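+  // Two pointer-sized words are copied per iteration; a trailing odd word,
+  // if any, is copied after the loop.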
+ for (int i = 0; i < size - kPointerSize; i += 2 * kPointerSize) { + __ mov(edx, FieldOperand(ebx, i)); + __ mov(ecx, FieldOperand(ebx, i + kPointerSize)); + __ mov(FieldOperand(eax, i), edx); + __ mov(FieldOperand(eax, i + kPointerSize), ecx); + } + if ((size % (2 * kPointerSize)) != 0) { + __ mov(edx, FieldOperand(ebx, size - kPointerSize)); + __ mov(FieldOperand(eax, size - kPointerSize), edx); + } + context()->Plug(eax); +} + + +void FullCodeGenerator::EmitAccessor(Expression* expression) { + if (expression == NULL) { + __ push(Immediate(isolate()->factory()->null_value())); + } else { + VisitForStackValue(expression); + } +} + + +void FullCodeGenerator::VisitObjectLiteral(ObjectLiteral* expr) { + Comment cmnt(masm_, "[ ObjectLiteral"); + + expr->BuildConstantProperties(isolate()); + Handle<FixedArray> constant_properties = expr->constant_properties(); + int flags = expr->fast_elements() + ? ObjectLiteral::kFastElements + : ObjectLiteral::kNoFlags; + flags |= expr->has_function() + ? ObjectLiteral::kHasFunction + : ObjectLiteral::kNoFlags; + int properties_count = constant_properties->length() / 2; + if (expr->may_store_doubles() || expr->depth() > 1 || + masm()->serializer_enabled() || + flags != ObjectLiteral::kFastElements || + properties_count > FastCloneShallowObjectStub::kMaximumClonedProperties) { + __ mov(edi, Operand(ebp, JavaScriptFrameConstants::kFunctionOffset)); + __ push(FieldOperand(edi, JSFunction::kLiteralsOffset)); + __ push(Immediate(Smi::FromInt(expr->literal_index()))); + __ push(Immediate(constant_properties)); + __ push(Immediate(Smi::FromInt(flags))); + __ CallRuntime(Runtime::kCreateObjectLiteral, 4); + } else { + __ mov(edi, Operand(ebp, JavaScriptFrameConstants::kFunctionOffset)); + __ mov(eax, FieldOperand(edi, JSFunction::kLiteralsOffset)); + __ mov(ebx, Immediate(Smi::FromInt(expr->literal_index()))); + __ mov(ecx, Immediate(constant_properties)); + __ mov(edx, Immediate(Smi::FromInt(flags))); + FastCloneShallowObjectStub stub(isolate(), properties_count); + __ CallStub(&stub); + } + + // If result_saved is true the result is on top of the stack. If + // result_saved is false the result is in eax. + bool result_saved = false; + + // Mark all computed expressions that are bound to a key that + // is shadowed by a later occurrence of the same key. For the + // marked expressions, no store code is emitted. + expr->CalculateEmitStore(zone()); + + AccessorTable accessor_table(zone()); + for (int i = 0; i < expr->properties()->length(); i++) { + ObjectLiteral::Property* property = expr->properties()->at(i); + if (property->IsCompileTimeValue()) continue; + + Literal* key = property->key(); + Expression* value = property->value(); + if (!result_saved) { + __ push(eax); // Save result on the stack + result_saved = true; + } + switch (property->kind()) { + case ObjectLiteral::Property::CONSTANT: + UNREACHABLE(); + case ObjectLiteral::Property::MATERIALIZED_LITERAL: + DCHECK(!CompileTimeValue::IsCompileTimeValue(value)); + // Fall through. + case ObjectLiteral::Property::COMPUTED: + if (key->value()->IsInternalizedString()) { + if (property->emit_store()) { + VisitForAccumulatorValue(value); + DCHECK(StoreIC::ValueRegister().is(eax)); + __ mov(StoreIC::NameRegister(), Immediate(key->value())); + __ mov(StoreIC::ReceiverRegister(), Operand(esp, 0)); + CallStoreIC(key->LiteralFeedbackId()); + PrepareForBailoutForId(key->id(), NO_REGISTERS); + } else { + VisitForEffect(value); + } + break; + } + __ push(Operand(esp, 0)); // Duplicate receiver. 
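+          // Non-internalized-string key: store through Runtime::kSetProperty,
+          // which takes (receiver, key, value, language mode) off the stack.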
+ VisitForStackValue(key); + VisitForStackValue(value); + if (property->emit_store()) { + __ push(Immediate(Smi::FromInt(SLOPPY))); // Strict mode + __ CallRuntime(Runtime::kSetProperty, 4); + } else { + __ Drop(3); + } + break; + case ObjectLiteral::Property::PROTOTYPE: + __ push(Operand(esp, 0)); // Duplicate receiver. + VisitForStackValue(value); + if (property->emit_store()) { + __ CallRuntime(Runtime::kSetPrototype, 2); + } else { + __ Drop(2); + } + break; + case ObjectLiteral::Property::GETTER: + accessor_table.lookup(key)->second->getter = value; + break; + case ObjectLiteral::Property::SETTER: + accessor_table.lookup(key)->second->setter = value; + break; + } + } + + // Emit code to define accessors, using only a single call to the runtime for + // each pair of corresponding getters and setters. + for (AccessorTable::Iterator it = accessor_table.begin(); + it != accessor_table.end(); + ++it) { + __ push(Operand(esp, 0)); // Duplicate receiver. + VisitForStackValue(it->first); + EmitAccessor(it->second->getter); + EmitAccessor(it->second->setter); + __ push(Immediate(Smi::FromInt(NONE))); + __ CallRuntime(Runtime::kDefineAccessorPropertyUnchecked, 5); + } + + if (expr->has_function()) { + DCHECK(result_saved); + __ push(Operand(esp, 0)); + __ CallRuntime(Runtime::kToFastProperties, 1); + } + + if (result_saved) { + context()->PlugTOS(); + } else { + context()->Plug(eax); + } +} + + +void FullCodeGenerator::VisitArrayLiteral(ArrayLiteral* expr) { + Comment cmnt(masm_, "[ ArrayLiteral"); + + expr->BuildConstantElements(isolate()); + int flags = expr->depth() == 1 + ? ArrayLiteral::kShallowElements + : ArrayLiteral::kNoFlags; + + ZoneList<Expression*>* subexprs = expr->values(); + int length = subexprs->length(); + Handle<FixedArray> constant_elements = expr->constant_elements(); + DCHECK_EQ(2, constant_elements->length()); + ElementsKind constant_elements_kind = + static_cast<ElementsKind>(Smi::cast(constant_elements->get(0))->value()); + bool has_constant_fast_elements = + IsFastObjectElementsKind(constant_elements_kind); + Handle<FixedArrayBase> constant_elements_values( + FixedArrayBase::cast(constant_elements->get(1))); + + AllocationSiteMode allocation_site_mode = TRACK_ALLOCATION_SITE; + if (has_constant_fast_elements && !FLAG_allocation_site_pretenuring) { + // If the only customer of allocation sites is transitioning, then + // we can turn it off if we don't have anywhere else to transition to. + allocation_site_mode = DONT_TRACK_ALLOCATION_SITE; + } + + if (expr->depth() > 1 || length > JSObject::kInitialMaxFastElementArray) { + __ mov(ebx, Operand(ebp, JavaScriptFrameConstants::kFunctionOffset)); + __ push(FieldOperand(ebx, JSFunction::kLiteralsOffset)); + __ push(Immediate(Smi::FromInt(expr->literal_index()))); + __ push(Immediate(constant_elements)); + __ push(Immediate(Smi::FromInt(flags))); + __ CallRuntime(Runtime::kCreateArrayLiteral, 4); + } else { + __ mov(ebx, Operand(ebp, JavaScriptFrameConstants::kFunctionOffset)); + __ mov(eax, FieldOperand(ebx, JSFunction::kLiteralsOffset)); + __ mov(ebx, Immediate(Smi::FromInt(expr->literal_index()))); + __ mov(ecx, Immediate(constant_elements)); + FastCloneShallowArrayStub stub(isolate(), allocation_site_mode); + __ CallStub(&stub); + } + + bool result_saved = false; // Is the result saved to the stack? + + // Emit code to evaluate all the non-constant subexpressions and to store + // them into the newly cloned array. 
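+  // Subexpressions that are compile-time values were already materialized in
+  // the boilerplate by the stub or runtime call above and are skipped below.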
+ for (int i = 0; i < length; i++) { + Expression* subexpr = subexprs->at(i); + // If the subexpression is a literal or a simple materialized literal it + // is already set in the cloned array. + if (CompileTimeValue::IsCompileTimeValue(subexpr)) continue; + + if (!result_saved) { + __ push(eax); // array literal. + __ push(Immediate(Smi::FromInt(expr->literal_index()))); + result_saved = true; + } + VisitForAccumulatorValue(subexpr); + + if (IsFastObjectElementsKind(constant_elements_kind)) { + // Fast-case array literal with ElementsKind of FAST_*_ELEMENTS, they + // cannot transition and don't need to call the runtime stub. + int offset = FixedArray::kHeaderSize + (i * kPointerSize); + __ mov(ebx, Operand(esp, kPointerSize)); // Copy of array literal. + __ mov(ebx, FieldOperand(ebx, JSObject::kElementsOffset)); + // Store the subexpression value in the array's elements. + __ mov(FieldOperand(ebx, offset), result_register()); + // Update the write barrier for the array store. + __ RecordWriteField(ebx, offset, result_register(), ecx, + EMIT_REMEMBERED_SET, + INLINE_SMI_CHECK); + } else { + // Store the subexpression value in the array's elements. + __ mov(ecx, Immediate(Smi::FromInt(i))); + StoreArrayLiteralElementStub stub(isolate()); + __ CallStub(&stub); + } + + PrepareForBailoutForId(expr->GetIdForElement(i), NO_REGISTERS); + } + + if (result_saved) { + __ add(esp, Immediate(kPointerSize)); // literal index + context()->PlugTOS(); + } else { + context()->Plug(eax); + } +} + + +void FullCodeGenerator::VisitAssignment(Assignment* expr) { + DCHECK(expr->target()->IsValidReferenceExpression()); + + Comment cmnt(masm_, "[ Assignment"); + + // Left-hand side can only be a property, a global or a (parameter or local) + // slot. + enum LhsKind { VARIABLE, NAMED_PROPERTY, KEYED_PROPERTY }; + LhsKind assign_type = VARIABLE; + Property* property = expr->target()->AsProperty(); + if (property != NULL) { + assign_type = (property->key()->IsPropertyName()) + ? NAMED_PROPERTY + : KEYED_PROPERTY; + } + + // Evaluate LHS expression. + switch (assign_type) { + case VARIABLE: + // Nothing to do here. + break; + case NAMED_PROPERTY: + if (expr->is_compound()) { + // We need the receiver both on the stack and in the register. + VisitForStackValue(property->obj()); + __ mov(LoadIC::ReceiverRegister(), Operand(esp, 0)); + } else { + VisitForStackValue(property->obj()); + } + break; + case KEYED_PROPERTY: { + if (expr->is_compound()) { + VisitForStackValue(property->obj()); + VisitForStackValue(property->key()); + __ mov(LoadIC::ReceiverRegister(), Operand(esp, kPointerSize)); + __ mov(LoadIC::NameRegister(), Operand(esp, 0)); + } else { + VisitForStackValue(property->obj()); + VisitForStackValue(property->key()); + } + break; + } + } + + // For compound assignments we need another deoptimization point after the + // variable/property load. + if (expr->is_compound()) { + AccumulatorValueContext result_context(this); + { AccumulatorValueContext left_operand_context(this); + switch (assign_type) { + case VARIABLE: + EmitVariableLoad(expr->target()->AsVariableProxy()); + PrepareForBailout(expr->target(), TOS_REG); + break; + case NAMED_PROPERTY: + EmitNamedPropertyLoad(property); + PrepareForBailoutForId(property->LoadId(), TOS_REG); + break; + case KEYED_PROPERTY: + EmitKeyedPropertyLoad(property); + PrepareForBailoutForId(property->LoadId(), TOS_REG); + break; + } + } + + Token::Value op = expr->binary_op(); + __ push(eax); // Left operand goes on the stack. 
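+    // Compound assignment (e.g. x += y): the loaded target value waits on
+    // the stack while the right-hand side is evaluated into eax.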
+ VisitForAccumulatorValue(expr->value()); + + OverwriteMode mode = expr->value()->ResultOverwriteAllowed() + ? OVERWRITE_RIGHT + : NO_OVERWRITE; + SetSourcePosition(expr->position() + 1); + if (ShouldInlineSmiCase(op)) { + EmitInlineSmiBinaryOp(expr->binary_operation(), + op, + mode, + expr->target(), + expr->value()); + } else { + EmitBinaryOp(expr->binary_operation(), op, mode); + } + + // Deoptimization point in case the binary operation may have side effects. + PrepareForBailout(expr->binary_operation(), TOS_REG); + } else { + VisitForAccumulatorValue(expr->value()); + } + + // Record source position before possible IC call. + SetSourcePosition(expr->position()); + + // Store the value. + switch (assign_type) { + case VARIABLE: + EmitVariableAssignment(expr->target()->AsVariableProxy()->var(), + expr->op()); + PrepareForBailoutForId(expr->AssignmentId(), TOS_REG); + context()->Plug(eax); + break; + case NAMED_PROPERTY: + EmitNamedPropertyAssignment(expr); + break; + case KEYED_PROPERTY: + EmitKeyedPropertyAssignment(expr); + break; + } +} + + +void FullCodeGenerator::VisitYield(Yield* expr) { + Comment cmnt(masm_, "[ Yield"); + // Evaluate yielded value first; the initial iterator definition depends on + // this. It stays on the stack while we update the iterator. + VisitForStackValue(expr->expression()); + + switch (expr->yield_kind()) { + case Yield::SUSPEND: + // Pop value from top-of-stack slot; box result into result register. + EmitCreateIteratorResult(false); + __ push(result_register()); + // Fall through. + case Yield::INITIAL: { + Label suspend, continuation, post_runtime, resume; + + __ jmp(&suspend); + + __ bind(&continuation); + __ jmp(&resume); + + __ bind(&suspend); + VisitForAccumulatorValue(expr->generator_object()); + DCHECK(continuation.pos() > 0 && Smi::IsValid(continuation.pos())); + __ mov(FieldOperand(eax, JSGeneratorObject::kContinuationOffset), + Immediate(Smi::FromInt(continuation.pos()))); + __ mov(FieldOperand(eax, JSGeneratorObject::kContextOffset), esi); + __ mov(ecx, esi); + __ RecordWriteField(eax, JSGeneratorObject::kContextOffset, ecx, edx); + __ lea(ebx, Operand(ebp, StandardFrameConstants::kExpressionsOffset)); + __ cmp(esp, ebx); + __ j(equal, &post_runtime); + __ push(eax); // generator object + __ CallRuntime(Runtime::kSuspendJSGeneratorObject, 1); + __ mov(context_register(), + Operand(ebp, StandardFrameConstants::kContextOffset)); + __ bind(&post_runtime); + __ pop(result_register()); + EmitReturnSequence(); + + __ bind(&resume); + context()->Plug(result_register()); + break; + } + + case Yield::FINAL: { + VisitForAccumulatorValue(expr->generator_object()); + __ mov(FieldOperand(result_register(), + JSGeneratorObject::kContinuationOffset), + Immediate(Smi::FromInt(JSGeneratorObject::kGeneratorClosed))); + // Pop value from top-of-stack slot, box result into result register. + EmitCreateIteratorResult(true); + EmitUnwindBeforeReturn(); + EmitReturnSequence(); + break; + } + + case Yield::DELEGATING: { + VisitForStackValue(expr->generator_object()); + + // Initial stack layout is as follows: + // [sp + 1 * kPointerSize] iter + // [sp + 0 * kPointerSize] g + + Label l_catch, l_try, l_suspend, l_continuation, l_resume; + Label l_next, l_call, l_loop; + Register load_receiver = LoadIC::ReceiverRegister(); + Register load_name = LoadIC::NameRegister(); + + // Initial send value is undefined. 
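+      // The delegating loop below implements, roughly:
+      //   while (true) {
+      //     result = iter.next(received);   // l_next / l_call
+      //     if (result.done) break;         // l_loop
+      //     received = yield result;        // l_try / l_suspend / l_resume
+      //   }
+      // with exceptions thrown into the generator forwarded to iter['throw']
+      // by the catch handler at l_catch.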
+ __ mov(eax, isolate()->factory()->undefined_value()); + __ jmp(&l_next); + + // catch (e) { receiver = iter; f = 'throw'; arg = e; goto l_call; } + __ bind(&l_catch); + handler_table()->set(expr->index(), Smi::FromInt(l_catch.pos())); + __ mov(load_name, isolate()->factory()->throw_string()); // "throw" + __ push(load_name); // "throw" + __ push(Operand(esp, 2 * kPointerSize)); // iter + __ push(eax); // exception + __ jmp(&l_call); + + // try { received = %yield result } + // Shuffle the received result above a try handler and yield it without + // re-boxing. + __ bind(&l_try); + __ pop(eax); // result + __ PushTryHandler(StackHandler::CATCH, expr->index()); + const int handler_size = StackHandlerConstants::kSize; + __ push(eax); // result + __ jmp(&l_suspend); + __ bind(&l_continuation); + __ jmp(&l_resume); + __ bind(&l_suspend); + const int generator_object_depth = kPointerSize + handler_size; + __ mov(eax, Operand(esp, generator_object_depth)); + __ push(eax); // g + DCHECK(l_continuation.pos() > 0 && Smi::IsValid(l_continuation.pos())); + __ mov(FieldOperand(eax, JSGeneratorObject::kContinuationOffset), + Immediate(Smi::FromInt(l_continuation.pos()))); + __ mov(FieldOperand(eax, JSGeneratorObject::kContextOffset), esi); + __ mov(ecx, esi); + __ RecordWriteField(eax, JSGeneratorObject::kContextOffset, ecx, edx); + __ CallRuntime(Runtime::kSuspendJSGeneratorObject, 1); + __ mov(context_register(), + Operand(ebp, StandardFrameConstants::kContextOffset)); + __ pop(eax); // result + EmitReturnSequence(); + __ bind(&l_resume); // received in eax + __ PopTryHandler(); + + // receiver = iter; f = iter.next; arg = received; + __ bind(&l_next); + + __ mov(load_name, isolate()->factory()->next_string()); + __ push(load_name); // "next" + __ push(Operand(esp, 2 * kPointerSize)); // iter + __ push(eax); // received + + // result = receiver[f](arg); + __ bind(&l_call); + __ mov(load_receiver, Operand(esp, kPointerSize)); + if (FLAG_vector_ics) { + __ mov(LoadIC::SlotRegister(), + Immediate(Smi::FromInt(expr->KeyedLoadFeedbackSlot()))); + } + Handle<Code> ic = isolate()->builtins()->KeyedLoadIC_Initialize(); + CallIC(ic, TypeFeedbackId::None()); + __ mov(edi, eax); + __ mov(Operand(esp, 2 * kPointerSize), edi); + CallFunctionStub stub(isolate(), 1, CALL_AS_METHOD); + __ CallStub(&stub); + + __ mov(esi, Operand(ebp, StandardFrameConstants::kContextOffset)); + __ Drop(1); // The function is still on the stack; drop it. 
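+      // eax now holds the iterator result object returned by the call.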
+ + // if (!result.done) goto l_try; + __ bind(&l_loop); + __ push(eax); // save result + __ Move(load_receiver, eax); // result + __ mov(load_name, + isolate()->factory()->done_string()); // "done" + if (FLAG_vector_ics) { + __ mov(LoadIC::SlotRegister(), + Immediate(Smi::FromInt(expr->DoneFeedbackSlot()))); + } + CallLoadIC(NOT_CONTEXTUAL); // result.done in eax + Handle<Code> bool_ic = ToBooleanStub::GetUninitialized(isolate()); + CallIC(bool_ic); + __ test(eax, eax); + __ j(zero, &l_try); + + // result.value + __ pop(load_receiver); // result + __ mov(load_name, + isolate()->factory()->value_string()); // "value" + if (FLAG_vector_ics) { + __ mov(LoadIC::SlotRegister(), + Immediate(Smi::FromInt(expr->ValueFeedbackSlot()))); + } + CallLoadIC(NOT_CONTEXTUAL); // result.value in eax + context()->DropAndPlug(2, eax); // drop iter and g + break; + } + } +} + + +void FullCodeGenerator::EmitGeneratorResume(Expression *generator, + Expression *value, + JSGeneratorObject::ResumeMode resume_mode) { + // The value stays in eax, and is ultimately read by the resumed generator, as + // if CallRuntime(Runtime::kSuspendJSGeneratorObject) returned it. Or it + // is read to throw the value when the resumed generator is already closed. + // ebx will hold the generator object until the activation has been resumed. + VisitForStackValue(generator); + VisitForAccumulatorValue(value); + __ pop(ebx); + + // Check generator state. + Label wrong_state, closed_state, done; + STATIC_ASSERT(JSGeneratorObject::kGeneratorExecuting < 0); + STATIC_ASSERT(JSGeneratorObject::kGeneratorClosed == 0); + __ cmp(FieldOperand(ebx, JSGeneratorObject::kContinuationOffset), + Immediate(Smi::FromInt(0))); + __ j(equal, &closed_state); + __ j(less, &wrong_state); + + // Load suspended function and context. + __ mov(esi, FieldOperand(ebx, JSGeneratorObject::kContextOffset)); + __ mov(edi, FieldOperand(ebx, JSGeneratorObject::kFunctionOffset)); + + // Push receiver. + __ push(FieldOperand(ebx, JSGeneratorObject::kReceiverOffset)); + + // Push holes for arguments to generator function. + __ mov(edx, FieldOperand(edi, JSFunction::kSharedFunctionInfoOffset)); + __ mov(edx, + FieldOperand(edx, SharedFunctionInfo::kFormalParameterCountOffset)); + __ mov(ecx, isolate()->factory()->the_hole_value()); + Label push_argument_holes, push_frame; + __ bind(&push_argument_holes); + __ sub(edx, Immediate(Smi::FromInt(1))); + __ j(carry, &push_frame); + __ push(ecx); + __ jmp(&push_argument_holes); + + // Enter a new JavaScript frame, and initialize its slots as they were when + // the generator was suspended. + Label resume_frame; + __ bind(&push_frame); + __ call(&resume_frame); + __ jmp(&done); + __ bind(&resume_frame); + __ push(ebp); // Caller's frame pointer. + __ mov(ebp, esp); + __ push(esi); // Callee's context. + __ push(edi); // Callee's JS Function. + + // Load the operand stack size. + __ mov(edx, FieldOperand(ebx, JSGeneratorObject::kOperandStackOffset)); + __ mov(edx, FieldOperand(edx, FixedArray::kLengthOffset)); + __ SmiUntag(edx); + + // If we are sending a value and there is no operand stack, we can jump back + // in directly. 
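+  // Fast resume: compute code entry + continuation offset, mark the
+  // generator as executing, and jump straight back into the frame.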
+ if (resume_mode == JSGeneratorObject::NEXT) { + Label slow_resume; + __ cmp(edx, Immediate(0)); + __ j(not_zero, &slow_resume); + __ mov(edx, FieldOperand(edi, JSFunction::kCodeEntryOffset)); + __ mov(ecx, FieldOperand(ebx, JSGeneratorObject::kContinuationOffset)); + __ SmiUntag(ecx); + __ add(edx, ecx); + __ mov(FieldOperand(ebx, JSGeneratorObject::kContinuationOffset), + Immediate(Smi::FromInt(JSGeneratorObject::kGeneratorExecuting))); + __ jmp(edx); + __ bind(&slow_resume); + } + + // Otherwise, we push holes for the operand stack and call the runtime to fix + // up the stack and the handlers. + Label push_operand_holes, call_resume; + __ bind(&push_operand_holes); + __ sub(edx, Immediate(1)); + __ j(carry, &call_resume); + __ push(ecx); + __ jmp(&push_operand_holes); + __ bind(&call_resume); + __ push(ebx); + __ push(result_register()); + __ Push(Smi::FromInt(resume_mode)); + __ CallRuntime(Runtime::kResumeJSGeneratorObject, 3); + // Not reached: the runtime call returns elsewhere. + __ Abort(kGeneratorFailedToResume); + + // Reach here when generator is closed. + __ bind(&closed_state); + if (resume_mode == JSGeneratorObject::NEXT) { + // Return completed iterator result when generator is closed. + __ push(Immediate(isolate()->factory()->undefined_value())); + // Pop value from top-of-stack slot; box result into result register. + EmitCreateIteratorResult(true); + } else { + // Throw the provided value. + __ push(eax); + __ CallRuntime(Runtime::kThrow, 1); + } + __ jmp(&done); + + // Throw error if we attempt to operate on a running generator. + __ bind(&wrong_state); + __ push(ebx); + __ CallRuntime(Runtime::kThrowGeneratorStateError, 1); + + __ bind(&done); + context()->Plug(result_register()); +} + + +void FullCodeGenerator::EmitCreateIteratorResult(bool done) { + Label gc_required; + Label allocated; + + Handle<Map> map(isolate()->native_context()->iterator_result_map()); + + __ Allocate(map->instance_size(), eax, ecx, edx, &gc_required, TAG_OBJECT); + __ jmp(&allocated); + + __ bind(&gc_required); + __ Push(Smi::FromInt(map->instance_size())); + __ CallRuntime(Runtime::kAllocateInNewSpace, 1); + __ mov(context_register(), + Operand(ebp, StandardFrameConstants::kContextOffset)); + + __ bind(&allocated); + __ mov(ebx, map); + __ pop(ecx); + __ mov(edx, isolate()->factory()->ToBoolean(done)); + DCHECK_EQ(map->instance_size(), 5 * kPointerSize); + __ mov(FieldOperand(eax, HeapObject::kMapOffset), ebx); + __ mov(FieldOperand(eax, JSObject::kPropertiesOffset), + isolate()->factory()->empty_fixed_array()); + __ mov(FieldOperand(eax, JSObject::kElementsOffset), + isolate()->factory()->empty_fixed_array()); + __ mov(FieldOperand(eax, JSGeneratorObject::kResultValuePropertyOffset), ecx); + __ mov(FieldOperand(eax, JSGeneratorObject::kResultDonePropertyOffset), edx); + + // Only the value field needs a write barrier, as the other values are in the + // root set. 
+ __ RecordWriteField(eax, JSGeneratorObject::kResultValuePropertyOffset, + ecx, edx); +} + + +void FullCodeGenerator::EmitNamedPropertyLoad(Property* prop) { + SetSourcePosition(prop->position()); + Literal* key = prop->key()->AsLiteral(); + DCHECK(!key->value()->IsSmi()); + __ mov(LoadIC::NameRegister(), Immediate(key->value())); + if (FLAG_vector_ics) { + __ mov(LoadIC::SlotRegister(), + Immediate(Smi::FromInt(prop->PropertyFeedbackSlot()))); + CallLoadIC(NOT_CONTEXTUAL); + } else { + CallLoadIC(NOT_CONTEXTUAL, prop->PropertyFeedbackId()); + } +} + + +void FullCodeGenerator::EmitKeyedPropertyLoad(Property* prop) { + SetSourcePosition(prop->position()); + Handle<Code> ic = isolate()->builtins()->KeyedLoadIC_Initialize(); + if (FLAG_vector_ics) { + __ mov(LoadIC::SlotRegister(), + Immediate(Smi::FromInt(prop->PropertyFeedbackSlot()))); + CallIC(ic); + } else { + CallIC(ic, prop->PropertyFeedbackId()); + } +} + + +void FullCodeGenerator::EmitInlineSmiBinaryOp(BinaryOperation* expr, + Token::Value op, + OverwriteMode mode, + Expression* left, + Expression* right) { + // Do combined smi check of the operands. Left operand is on the + // stack. Right operand is in eax. + Label smi_case, done, stub_call; + __ pop(edx); + __ mov(ecx, eax); + __ or_(eax, edx); + JumpPatchSite patch_site(masm_); + patch_site.EmitJumpIfSmi(eax, &smi_case, Label::kNear); + + __ bind(&stub_call); + __ mov(eax, ecx); + BinaryOpICStub stub(isolate(), op, mode); + CallIC(stub.GetCode(), expr->BinaryOperationFeedbackId()); + patch_site.EmitPatchInfo(); + __ jmp(&done, Label::kNear); + + // Smi case. + __ bind(&smi_case); + __ mov(eax, edx); // Copy left operand in case of a stub call. + + switch (op) { + case Token::SAR: + __ SmiUntag(ecx); + __ sar_cl(eax); // No checks of result necessary + __ and_(eax, Immediate(~kSmiTagMask)); + break; + case Token::SHL: { + Label result_ok; + __ SmiUntag(eax); + __ SmiUntag(ecx); + __ shl_cl(eax); + // Check that the *signed* result fits in a smi. + __ cmp(eax, 0xc0000000); + __ j(positive, &result_ok); + __ SmiTag(ecx); + __ jmp(&stub_call); + __ bind(&result_ok); + __ SmiTag(eax); + break; + } + case Token::SHR: { + Label result_ok; + __ SmiUntag(eax); + __ SmiUntag(ecx); + __ shr_cl(eax); + __ test(eax, Immediate(0xc0000000)); + __ j(zero, &result_ok); + __ SmiTag(ecx); + __ jmp(&stub_call); + __ bind(&result_ok); + __ SmiTag(eax); + break; + } + case Token::ADD: + __ add(eax, ecx); + __ j(overflow, &stub_call); + break; + case Token::SUB: + __ sub(eax, ecx); + __ j(overflow, &stub_call); + break; + case Token::MUL: { + __ SmiUntag(eax); + __ imul(eax, ecx); + __ j(overflow, &stub_call); + __ test(eax, eax); + __ j(not_zero, &done, Label::kNear); + __ mov(ebx, edx); + __ or_(ebx, ecx); + __ j(negative, &stub_call); + break; + } + case Token::BIT_OR: + __ or_(eax, ecx); + break; + case Token::BIT_AND: + __ and_(eax, ecx); + break; + case Token::BIT_XOR: + __ xor_(eax, ecx); + break; + default: + UNREACHABLE(); + } + + __ bind(&done); + context()->Plug(eax); +} + + +void FullCodeGenerator::EmitBinaryOp(BinaryOperation* expr, + Token::Value op, + OverwriteMode mode) { + __ pop(edx); + BinaryOpICStub stub(isolate(), op, mode); + JumpPatchSite patch_site(masm_); // unbound, signals no inlined smi code. 
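+  // Leaving the patch site unbound makes EmitPatchInfo record that there is
+  // no inlined smi code for this IC to patch.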
+ CallIC(stub.GetCode(), expr->BinaryOperationFeedbackId()); + patch_site.EmitPatchInfo(); + context()->Plug(eax); +} + + +void FullCodeGenerator::EmitAssignment(Expression* expr) { + DCHECK(expr->IsValidReferenceExpression()); + + // Left-hand side can only be a property, a global or a (parameter or local) + // slot. + enum LhsKind { VARIABLE, NAMED_PROPERTY, KEYED_PROPERTY }; + LhsKind assign_type = VARIABLE; + Property* prop = expr->AsProperty(); + if (prop != NULL) { + assign_type = (prop->key()->IsPropertyName()) + ? NAMED_PROPERTY + : KEYED_PROPERTY; + } + + switch (assign_type) { + case VARIABLE: { + Variable* var = expr->AsVariableProxy()->var(); + EffectContext context(this); + EmitVariableAssignment(var, Token::ASSIGN); + break; + } + case NAMED_PROPERTY: { + __ push(eax); // Preserve value. + VisitForAccumulatorValue(prop->obj()); + __ Move(StoreIC::ReceiverRegister(), eax); + __ pop(StoreIC::ValueRegister()); // Restore value. + __ mov(StoreIC::NameRegister(), prop->key()->AsLiteral()->value()); + CallStoreIC(); + break; + } + case KEYED_PROPERTY: { + __ push(eax); // Preserve value. + VisitForStackValue(prop->obj()); + VisitForAccumulatorValue(prop->key()); + __ Move(KeyedStoreIC::NameRegister(), eax); + __ pop(KeyedStoreIC::ReceiverRegister()); // Receiver. + __ pop(KeyedStoreIC::ValueRegister()); // Restore value. + Handle<Code> ic = strict_mode() == SLOPPY + ? isolate()->builtins()->KeyedStoreIC_Initialize() + : isolate()->builtins()->KeyedStoreIC_Initialize_Strict(); + CallIC(ic); + break; + } + } + context()->Plug(eax); +} + + +void FullCodeGenerator::EmitStoreToStackLocalOrContextSlot( + Variable* var, MemOperand location) { + __ mov(location, eax); + if (var->IsContextSlot()) { + __ mov(edx, eax); + int offset = Context::SlotOffset(var->index()); + __ RecordWriteContextSlot(ecx, offset, edx, ebx); + } +} + + +void FullCodeGenerator::EmitVariableAssignment(Variable* var, + Token::Value op) { + if (var->IsUnallocated()) { + // Global var, const, or let. + __ mov(StoreIC::NameRegister(), var->name()); + __ mov(StoreIC::ReceiverRegister(), GlobalObjectOperand()); + CallStoreIC(); + + } else if (op == Token::INIT_CONST_LEGACY) { + // Const initializers need a write barrier. + DCHECK(!var->IsParameter()); // No const parameters. + if (var->IsLookupSlot()) { + __ push(eax); + __ push(esi); + __ push(Immediate(var->name())); + __ CallRuntime(Runtime::kInitializeLegacyConstLookupSlot, 3); + } else { + DCHECK(var->IsStackLocal() || var->IsContextSlot()); + Label skip; + MemOperand location = VarOperand(var, ecx); + __ mov(edx, location); + __ cmp(edx, isolate()->factory()->the_hole_value()); + __ j(not_equal, &skip, Label::kNear); + EmitStoreToStackLocalOrContextSlot(var, location); + __ bind(&skip); + } + + } else if (var->mode() == LET && op != Token::INIT_LET) { + // Non-initializing assignment to let variable needs a write barrier. + DCHECK(!var->IsLookupSlot()); + DCHECK(var->IsStackAllocated() || var->IsContextSlot()); + Label assign; + MemOperand location = VarOperand(var, ecx); + __ mov(edx, location); + __ cmp(edx, isolate()->factory()->the_hole_value()); + __ j(not_equal, &assign, Label::kNear); + __ push(Immediate(var->name())); + __ CallRuntime(Runtime::kThrowReferenceError, 1); + __ bind(&assign); + EmitStoreToStackLocalOrContextSlot(var, location); + + } else if (!var->is_const_mode() || op == Token::INIT_CONST) { + if (var->IsLookupSlot()) { + // Assignment to var. + __ push(eax); // Value. + __ push(esi); // Context. 
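+      // Runtime::kStoreLookupSlot expects (value, context, name, strict mode)
+      // on the stack, pushed in that order.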
+ __ push(Immediate(var->name())); + __ push(Immediate(Smi::FromInt(strict_mode()))); + __ CallRuntime(Runtime::kStoreLookupSlot, 4); + } else { + // Assignment to var or initializing assignment to let/const in harmony + // mode. + DCHECK(var->IsStackAllocated() || var->IsContextSlot()); + MemOperand location = VarOperand(var, ecx); + if (generate_debug_code_ && op == Token::INIT_LET) { + // Check for an uninitialized let binding. + __ mov(edx, location); + __ cmp(edx, isolate()->factory()->the_hole_value()); + __ Check(equal, kLetBindingReInitialization); + } + EmitStoreToStackLocalOrContextSlot(var, location); + } + } + // Non-initializing assignments to consts are ignored. +} + + +void FullCodeGenerator::EmitNamedPropertyAssignment(Assignment* expr) { + // Assignment to a property, using a named store IC. + // eax : value + // esp[0] : receiver + + Property* prop = expr->target()->AsProperty(); + DCHECK(prop != NULL); + DCHECK(prop->key()->IsLiteral()); + + // Record source code position before IC call. + SetSourcePosition(expr->position()); + __ mov(StoreIC::NameRegister(), prop->key()->AsLiteral()->value()); + __ pop(StoreIC::ReceiverRegister()); + CallStoreIC(expr->AssignmentFeedbackId()); + PrepareForBailoutForId(expr->AssignmentId(), TOS_REG); + context()->Plug(eax); +} + + +void FullCodeGenerator::EmitKeyedPropertyAssignment(Assignment* expr) { + // Assignment to a property, using a keyed store IC. + // eax : value + // esp[0] : key + // esp[kPointerSize] : receiver + + __ pop(KeyedStoreIC::NameRegister()); // Key. + __ pop(KeyedStoreIC::ReceiverRegister()); + DCHECK(KeyedStoreIC::ValueRegister().is(eax)); + // Record source code position before IC call. + SetSourcePosition(expr->position()); + Handle<Code> ic = strict_mode() == SLOPPY + ? isolate()->builtins()->KeyedStoreIC_Initialize() + : isolate()->builtins()->KeyedStoreIC_Initialize_Strict(); + CallIC(ic, expr->AssignmentFeedbackId()); + + PrepareForBailoutForId(expr->AssignmentId(), TOS_REG); + context()->Plug(eax); +} + + +void FullCodeGenerator::VisitProperty(Property* expr) { + Comment cmnt(masm_, "[ Property"); + Expression* key = expr->key(); + + if (key->IsPropertyName()) { + VisitForAccumulatorValue(expr->obj()); + __ Move(LoadIC::ReceiverRegister(), result_register()); + EmitNamedPropertyLoad(expr); + PrepareForBailoutForId(expr->LoadId(), TOS_REG); + context()->Plug(eax); + } else { + VisitForStackValue(expr->obj()); + VisitForAccumulatorValue(expr->key()); + __ pop(LoadIC::ReceiverRegister()); // Object. + __ Move(LoadIC::NameRegister(), result_register()); // Key. + EmitKeyedPropertyLoad(expr); + context()->Plug(eax); + } +} + + +void FullCodeGenerator::CallIC(Handle<Code> code, + TypeFeedbackId ast_id) { + ic_total_count_++; + __ call(code, RelocInfo::CODE_TARGET, ast_id); +} + + +// Code common for calls using the IC. +void FullCodeGenerator::EmitCallWithLoadIC(Call* expr) { + Expression* callee = expr->expression(); + + CallIC::CallType call_type = callee->IsVariableProxy() + ? CallIC::FUNCTION + : CallIC::METHOD; + // Get the target function. + if (call_type == CallIC::FUNCTION) { + { StackValueContext context(this); + EmitVariableLoad(callee->AsVariableProxy()); + PrepareForBailout(callee, NO_REGISTERS); + } + // Push undefined as receiver. This is patched in the method prologue if it + // is a sloppy mode method. + __ push(Immediate(isolate()->factory()->undefined_value())); + } else { + // Load the function from the receiver. 
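+    // Named method call: the receiver is on top of the stack; the load IC
+    // leaves the target function in eax, which is then slotted in beneath
+    // the receiver before EmitCall.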
+ DCHECK(callee->IsProperty()); + __ mov(LoadIC::ReceiverRegister(), Operand(esp, 0)); + EmitNamedPropertyLoad(callee->AsProperty()); + PrepareForBailoutForId(callee->AsProperty()->LoadId(), TOS_REG); + // Push the target function under the receiver. + __ push(Operand(esp, 0)); + __ mov(Operand(esp, kPointerSize), eax); + + EmitCall(expr, call_type); +} + + +// Code common for calls using the IC. +void FullCodeGenerator::EmitKeyedCallWithLoadIC(Call* expr, + Expression* key) { + // Load the key. + VisitForAccumulatorValue(key); + + Expression* callee = expr->expression(); + + // Load the function from the receiver. + DCHECK(callee->IsProperty()); + __ mov(LoadIC::ReceiverRegister(), Operand(esp, 0)); + __ mov(LoadIC::NameRegister(), eax); + EmitKeyedPropertyLoad(callee->AsProperty()); + PrepareForBailoutForId(callee->AsProperty()->LoadId(), TOS_REG); + + // Push the target function under the receiver. + __ push(Operand(esp, 0)); + __ mov(Operand(esp, kPointerSize), eax); + + EmitCall(expr, CallIC::METHOD); +} + + +void FullCodeGenerator::EmitCall(Call* expr, CallIC::CallType call_type) { + // Load the arguments. + ZoneList<Expression*>* args = expr->arguments(); + int arg_count = args->length(); + { PreservePositionScope scope(masm()->positions_recorder()); + for (int i = 0; i < arg_count; i++) { + VisitForStackValue(args->at(i)); + } + } + + // Record source position of the IC call. + SetSourcePosition(expr->position()); + Handle<Code> ic = CallIC::initialize_stub( + isolate(), arg_count, call_type); + __ Move(edx, Immediate(Smi::FromInt(expr->CallFeedbackSlot()))); + __ mov(edi, Operand(esp, (arg_count + 1) * kPointerSize)); + // Don't assign a type feedback id to the IC, since type feedback is provided + // by the vector above. + CallIC(ic); + + RecordJSReturnSite(expr); + + // Restore context register. + __ mov(esi, Operand(ebp, StandardFrameConstants::kContextOffset)); + + context()->DropAndPlug(1, eax); +} + + +void FullCodeGenerator::EmitResolvePossiblyDirectEval(int arg_count) { + // Push copy of the first argument or undefined if it doesn't exist. + if (arg_count > 0) { + __ push(Operand(esp, arg_count * kPointerSize)); + } else { + __ push(Immediate(isolate()->factory()->undefined_value())); + } + + // Push the receiver of the enclosing function. + __ push(Operand(ebp, (2 + info_->scope()->num_parameters()) * kPointerSize)); + // Push the language mode. + __ push(Immediate(Smi::FromInt(strict_mode()))); + + // Push the start position of the scope the call resides in. + __ push(Immediate(Smi::FromInt(scope()->start_position()))); + + // Do the runtime call. + __ CallRuntime(Runtime::kResolvePossiblyDirectEval, 5); +} + + +void FullCodeGenerator::VisitCall(Call* expr) { +#ifdef DEBUG + // We want to verify that RecordJSReturnSite gets called on all paths + // through this function. Avoid early returns. + expr->return_is_recorded_ = false; +#endif + + Comment cmnt(masm_, "[ Call"); + Expression* callee = expr->expression(); + Call::CallType call_type = expr->GetCallType(isolate()); + + if (call_type == Call::POSSIBLY_EVAL_CALL) { + // In a call to eval, we first call RuntimeHidden_ResolvePossiblyDirectEval + // to resolve the function we need to call and the receiver of the call. + // Then we call the resolved function using the given arguments. + ZoneList<Expression*>* args = expr->arguments(); + int arg_count = args->length(); + { PreservePositionScope pos_scope(masm()->positions_recorder()); + VisitForStackValue(callee); + // Reserved receiver slot.
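// ---------------------------------------------------------------------------
// [Editorial aside, not part of the upstream diff.] The recurring pair
//   __ push(Operand(esp, 0));                 // duplicate the stack top
//   __ mov(Operand(esp, kPointerSize), eax);  // overwrite the old copy
// slots the target function *under* the receiver without any exchange
// instruction. A stack-model sketch of the same transformation:
#include <vector>
namespace sketch_push_under {
inline void PushFunctionUnderReceiver(std::vector<int>* stack, int function) {
  int receiver = stack->back();   // stack: ..., receiver
  stack->back() = function;       // stack: ..., function
  stack->push_back(receiver);     // stack: ..., function, receiver
}
}  // namespace sketch_push_under
// ---------------------------------------------------------------------------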
+ __ push(Immediate(isolate()->factory()->undefined_value())); + // Push the arguments. + for (int i = 0; i < arg_count; i++) { + VisitForStackValue(args->at(i)); + } + + // Push a copy of the function (found below the arguments) and + // resolve eval. + __ push(Operand(esp, (arg_count + 1) * kPointerSize)); + EmitResolvePossiblyDirectEval(arg_count); + + // The runtime call returns a pair of values in eax (function) and + // edx (receiver). Touch up the stack with the right values. + __ mov(Operand(esp, (arg_count + 0) * kPointerSize), edx); + __ mov(Operand(esp, (arg_count + 1) * kPointerSize), eax); + } + // Record source position for debugger. + SetSourcePosition(expr->position()); + CallFunctionStub stub(isolate(), arg_count, NO_CALL_FUNCTION_FLAGS); + __ mov(edi, Operand(esp, (arg_count + 1) * kPointerSize)); + __ CallStub(&stub); + RecordJSReturnSite(expr); + // Restore context register. + __ mov(esi, Operand(ebp, StandardFrameConstants::kContextOffset)); + context()->DropAndPlug(1, eax); + + } else if (call_type == Call::GLOBAL_CALL) { + EmitCallWithLoadIC(expr); + + } else if (call_type == Call::LOOKUP_SLOT_CALL) { + // Call to a lookup slot (dynamically introduced variable). + VariableProxy* proxy = callee->AsVariableProxy(); + Label slow, done; + { PreservePositionScope scope(masm()->positions_recorder()); + // Generate code for loading from variables potentially shadowed by + // eval-introduced variables. + EmitDynamicLookupFastCase(proxy, NOT_INSIDE_TYPEOF, &slow, &done); + } + __ bind(&slow); + // Call the runtime to find the function to call (returned in eax) and + // the object holding it (returned in edx). + __ push(context_register()); + __ push(Immediate(proxy->name())); + __ CallRuntime(Runtime::kLoadLookupSlot, 2); + __ push(eax); // Function. + __ push(edx); // Receiver. + + // If fast case code has been generated, emit code to push the function + // and receiver and have the slow path jump around this code. + if (done.is_linked()) { + Label call; + __ jmp(&call, Label::kNear); + __ bind(&done); + // Push function. + __ push(eax); + // The receiver is implicitly the global receiver. Indicate this by + // passing the hole to the call function stub. + __ push(Immediate(isolate()->factory()->undefined_value())); + __ bind(&call); + } + + // The receiver is either the global receiver or an object found by + // LoadContextSlot. + EmitCall(expr); + + } else if (call_type == Call::PROPERTY_CALL) { + Property* property = callee->AsProperty(); + { PreservePositionScope scope(masm()->positions_recorder()); + VisitForStackValue(property->obj()); + } + if (property->key()->IsPropertyName()) { + EmitCallWithLoadIC(expr); + } else { + EmitKeyedCallWithLoadIC(expr, property->key()); + } + + } else { + DCHECK(call_type == Call::OTHER_CALL); + // Call to an arbitrary expression not handled specially above. + { PreservePositionScope scope(masm()->positions_recorder()); + VisitForStackValue(callee); + } + __ push(Immediate(isolate()->factory()->undefined_value())); + // Emit function call. + EmitCall(expr); + } + +#ifdef DEBUG + // RecordJSReturnSite should have been called. + DCHECK(expr->return_is_recorded_); +#endif +} + + +void FullCodeGenerator::VisitCallNew(CallNew* expr) { + Comment cmnt(masm_, "[ CallNew"); + // According to ECMA-262, section 11.2.2, page 44, the function + // expression in new calls must be evaluated before the + // arguments. + + // Push constructor on the stack. 
If it's not a function it's used as + // receiver for CALL_NON_FUNCTION, otherwise the value on the stack is + // ignored. + VisitForStackValue(expr->expression()); + + // Push the arguments ("left-to-right") on the stack. + ZoneList<Expression*>* args = expr->arguments(); + int arg_count = args->length(); + for (int i = 0; i < arg_count; i++) { + VisitForStackValue(args->at(i)); + } + + // Call the construct call builtin that handles allocation and + // constructor invocation. + SetSourcePosition(expr->position()); + + // Load function and argument count into edi and eax. + __ Move(eax, Immediate(arg_count)); + __ mov(edi, Operand(esp, arg_count * kPointerSize)); + + // Record call targets in unoptimized code. + if (FLAG_pretenuring_call_new) { + EnsureSlotContainsAllocationSite(expr->AllocationSiteFeedbackSlot()); + DCHECK(expr->AllocationSiteFeedbackSlot() == + expr->CallNewFeedbackSlot() + 1); + } + + __ LoadHeapObject(ebx, FeedbackVector()); + __ mov(edx, Immediate(Smi::FromInt(expr->CallNewFeedbackSlot()))); + + CallConstructStub stub(isolate(), RECORD_CONSTRUCTOR_TARGET); + __ call(stub.GetCode(), RelocInfo::CONSTRUCT_CALL); + PrepareForBailoutForId(expr->ReturnId(), TOS_REG); + context()->Plug(eax); +} + + +void FullCodeGenerator::EmitIsSmi(CallRuntime* expr) { + ZoneList<Expression*>* args = expr->arguments(); + DCHECK(args->length() == 1); + + VisitForAccumulatorValue(args->at(0)); + + Label materialize_true, materialize_false; + Label* if_true = NULL; + Label* if_false = NULL; + Label* fall_through = NULL; + context()->PrepareTest(&materialize_true, &materialize_false, + &if_true, &if_false, &fall_through); + + PrepareForBailoutBeforeSplit(expr, true, if_true, if_false); + __ test(eax, Immediate(kSmiTagMask)); + Split(zero, if_true, if_false, fall_through); + + context()->Plug(if_true, if_false); +} + + +void FullCodeGenerator::EmitIsNonNegativeSmi(CallRuntime* expr) { + ZoneList<Expression*>* args = expr->arguments(); + DCHECK(args->length() == 1); + + VisitForAccumulatorValue(args->at(0)); + + Label materialize_true, materialize_false; + Label* if_true = NULL; + Label* if_false = NULL; + Label* fall_through = NULL; + context()->PrepareTest(&materialize_true, &materialize_false, + &if_true, &if_false, &fall_through); + + PrepareForBailoutBeforeSplit(expr, true, if_true, if_false); + __ test(eax, Immediate(kSmiTagMask | 0x80000000)); + Split(zero, if_true, if_false, fall_through); + + context()->Plug(if_true, if_false); +} + + +void FullCodeGenerator::EmitIsObject(CallRuntime* expr) { + ZoneList<Expression*>* args = expr->arguments(); + DCHECK(args->length() == 1); + + VisitForAccumulatorValue(args->at(0)); + + Label materialize_true, materialize_false; + Label* if_true = NULL; + Label* if_false = NULL; + Label* fall_through = NULL; + context()->PrepareTest(&materialize_true, &materialize_false, + &if_true, &if_false, &fall_through); + + __ JumpIfSmi(eax, if_false); + __ cmp(eax, isolate()->factory()->null_value()); + __ j(equal, if_true); + __ mov(ebx, FieldOperand(eax, HeapObject::kMapOffset)); + // Undetectable objects behave like undefined when tested with typeof. 
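// ---------------------------------------------------------------------------
// [Editorial aside, not part of the upstream diff.] EmitIsSmi and
// EmitIsNonNegativeSmi above rely on the 32-bit Smi encoding: a Smi is the
// payload shifted left by one with tag bit 0 clear, so the sign bit of the
// tagged word is also the sign of the payload. That is why a single test
// against kSmiTagMask | 0x80000000 answers both questions at once:
#include <cstdint>
namespace sketch_smi {
inline uint32_t SmiFromInt(int32_t v) { return static_cast<uint32_t>(v) << 1; }
inline bool IsSmi(uint32_t word) { return (word & 0x1u) == 0; }
inline bool IsNonNegativeSmi(uint32_t word) {
  return (word & 0x80000001u) == 0;  // tag bit and sign bit both clear
}
}  // namespace sketch_smi
// ---------------------------------------------------------------------------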
+ __ movzx_b(ecx, FieldOperand(ebx, Map::kBitFieldOffset)); + __ test(ecx, Immediate(1 << Map::kIsUndetectable)); + __ j(not_zero, if_false); + __ movzx_b(ecx, FieldOperand(ebx, Map::kInstanceTypeOffset)); + __ cmp(ecx, FIRST_NONCALLABLE_SPEC_OBJECT_TYPE); + __ j(below, if_false); + __ cmp(ecx, LAST_NONCALLABLE_SPEC_OBJECT_TYPE); + PrepareForBailoutBeforeSplit(expr, true, if_true, if_false); + Split(below_equal, if_true, if_false, fall_through); + + context()->Plug(if_true, if_false); +} + + +void FullCodeGenerator::EmitIsSpecObject(CallRuntime* expr) { + ZoneList<Expression*>* args = expr->arguments(); + DCHECK(args->length() == 1); + + VisitForAccumulatorValue(args->at(0)); + + Label materialize_true, materialize_false; + Label* if_true = NULL; + Label* if_false = NULL; + Label* fall_through = NULL; + context()->PrepareTest(&materialize_true, &materialize_false, + &if_true, &if_false, &fall_through); + + __ JumpIfSmi(eax, if_false); + __ CmpObjectType(eax, FIRST_SPEC_OBJECT_TYPE, ebx); + PrepareForBailoutBeforeSplit(expr, true, if_true, if_false); + Split(above_equal, if_true, if_false, fall_through); + + context()->Plug(if_true, if_false); +} + + +void FullCodeGenerator::EmitIsUndetectableObject(CallRuntime* expr) { + ZoneList<Expression*>* args = expr->arguments(); + DCHECK(args->length() == 1); + + VisitForAccumulatorValue(args->at(0)); + + Label materialize_true, materialize_false; + Label* if_true = NULL; + Label* if_false = NULL; + Label* fall_through = NULL; + context()->PrepareTest(&materialize_true, &materialize_false, + &if_true, &if_false, &fall_through); + + __ JumpIfSmi(eax, if_false); + __ mov(ebx, FieldOperand(eax, HeapObject::kMapOffset)); + __ movzx_b(ebx, FieldOperand(ebx, Map::kBitFieldOffset)); + __ test(ebx, Immediate(1 << Map::kIsUndetectable)); + PrepareForBailoutBeforeSplit(expr, true, if_true, if_false); + Split(not_zero, if_true, if_false, fall_through); + + context()->Plug(if_true, if_false); +} + + +void FullCodeGenerator::EmitIsStringWrapperSafeForDefaultValueOf( + CallRuntime* expr) { + ZoneList<Expression*>* args = expr->arguments(); + DCHECK(args->length() == 1); + + VisitForAccumulatorValue(args->at(0)); + + Label materialize_true, materialize_false, skip_lookup; + Label* if_true = NULL; + Label* if_false = NULL; + Label* fall_through = NULL; + context()->PrepareTest(&materialize_true, &materialize_false, + &if_true, &if_false, &fall_through); + + __ AssertNotSmi(eax); + + // Check whether this map has already been checked to be safe for default + // valueOf. + __ mov(ebx, FieldOperand(eax, HeapObject::kMapOffset)); + __ test_b(FieldOperand(ebx, Map::kBitField2Offset), + 1 << Map::kStringWrapperSafeForDefaultValueOf); + __ j(not_zero, &skip_lookup); + + // Check for fast case object. Return false for slow case objects. + __ mov(ecx, FieldOperand(eax, JSObject::kPropertiesOffset)); + __ mov(ecx, FieldOperand(ecx, HeapObject::kMapOffset)); + __ cmp(ecx, isolate()->factory()->hash_table_map()); + __ j(equal, if_false); + + // Look for valueOf string in the descriptor array, and indicate false if + // found. Since we omit an enumeration index check, if it is added via a + // transition that shares its descriptor array, this is a false positive. + Label entry, loop, done; + + // Skip loop if no descriptors are valid. + __ NumberOfOwnDescriptors(ecx, ebx); + __ cmp(ecx, 0); + __ j(equal, &done); + + __ LoadInstanceDescriptors(ebx, ebx); + // ebx: descriptor array. + // ecx: valid entries in the descriptor array. + // Calculate the end of the descriptor array. 
+ STATIC_ASSERT(kSmiTag == 0); + STATIC_ASSERT(kSmiTagSize == 1); + STATIC_ASSERT(kPointerSize == 4); + __ imul(ecx, ecx, DescriptorArray::kDescriptorSize); + __ lea(ecx, Operand(ebx, ecx, times_4, DescriptorArray::kFirstOffset)); + // Calculate location of the first key name. + __ add(ebx, Immediate(DescriptorArray::kFirstOffset)); + // Loop through all the keys in the descriptor array. If one of these is the + // internalized string "valueOf" the result is false. + __ jmp(&entry); + __ bind(&loop); + __ mov(edx, FieldOperand(ebx, 0)); + __ cmp(edx, isolate()->factory()->value_of_string()); + __ j(equal, if_false); + __ add(ebx, Immediate(DescriptorArray::kDescriptorSize * kPointerSize)); + __ bind(&entry); + __ cmp(ebx, ecx); + __ j(not_equal, &loop); + + __ bind(&done); + + // Reload map as register ebx was used as temporary above. + __ mov(ebx, FieldOperand(eax, HeapObject::kMapOffset)); + + // Set the bit in the map to indicate that there is no local valueOf field. + __ or_(FieldOperand(ebx, Map::kBitField2Offset), + Immediate(1 << Map::kStringWrapperSafeForDefaultValueOf)); + + __ bind(&skip_lookup); + + // If a valueOf property is not found on the object check that its + // prototype is the un-modified String prototype. If not result is false. + __ mov(ecx, FieldOperand(ebx, Map::kPrototypeOffset)); + __ JumpIfSmi(ecx, if_false); + __ mov(ecx, FieldOperand(ecx, HeapObject::kMapOffset)); + __ mov(edx, Operand(esi, Context::SlotOffset(Context::GLOBAL_OBJECT_INDEX))); + __ mov(edx, + FieldOperand(edx, GlobalObject::kNativeContextOffset)); + __ cmp(ecx, + ContextOperand(edx, + Context::STRING_FUNCTION_PROTOTYPE_MAP_INDEX)); + PrepareForBailoutBeforeSplit(expr, true, if_true, if_false); + Split(equal, if_true, if_false, fall_through); + + context()->Plug(if_true, if_false); +} + + +void FullCodeGenerator::EmitIsFunction(CallRuntime* expr) { + ZoneList<Expression*>* args = expr->arguments(); + DCHECK(args->length() == 1); + + VisitForAccumulatorValue(args->at(0)); + + Label materialize_true, materialize_false; + Label* if_true = NULL; + Label* if_false = NULL; + Label* fall_through = NULL; + context()->PrepareTest(&materialize_true, &materialize_false, + &if_true, &if_false, &fall_through); + + __ JumpIfSmi(eax, if_false); + __ CmpObjectType(eax, JS_FUNCTION_TYPE, ebx); + PrepareForBailoutBeforeSplit(expr, true, if_true, if_false); + Split(equal, if_true, if_false, fall_through); + + context()->Plug(if_true, if_false); +} + + +void FullCodeGenerator::EmitIsMinusZero(CallRuntime* expr) { + ZoneList<Expression*>* args = expr->arguments(); + DCHECK(args->length() == 1); + + VisitForAccumulatorValue(args->at(0)); + + Label materialize_true, materialize_false; + Label* if_true = NULL; + Label* if_false = NULL; + Label* fall_through = NULL; + context()->PrepareTest(&materialize_true, &materialize_false, + &if_true, &if_false, &fall_through); + + Handle<Map> map = masm()->isolate()->factory()->heap_number_map(); + __ CheckMap(eax, map, if_false, DO_SMI_CHECK); + // Check if the exponent half is 0x80000000. Comparing against 1 and + // checking for overflow is the shortest possible encoding. 
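// ---------------------------------------------------------------------------
// [Editorial aside, not part of the upstream diff.] -0.0 is the unique double
// whose bit pattern is 0x80000000'00000000, so EmitIsMinusZero only needs to
// check that the exponent (high) word is 0x80000000 and the mantissa (low)
// word is 0. "cmp high_word, 1" sets the overflow flag exactly when the word
// is INT_MIN (0x80000000), which is why j(no_overflow, if_false) is the
// shortest possible rejection. Reference predicate in plain C++:
#include <cstdint>
#include <cstring>
namespace sketch_minus_zero {
inline bool IsMinusZero(double d) {
  uint64_t bits;
  std::memcpy(&bits, &d, sizeof bits);  // well-defined type pun
  return bits == 0x8000000000000000ull;
}
}  // namespace sketch_minus_zero
// ---------------------------------------------------------------------------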
+ __ cmp(FieldOperand(eax, HeapNumber::kExponentOffset), Immediate(0x1)); + __ j(no_overflow, if_false); + __ cmp(FieldOperand(eax, HeapNumber::kMantissaOffset), Immediate(0x0)); + PrepareForBailoutBeforeSplit(expr, true, if_true, if_false); + Split(equal, if_true, if_false, fall_through); + + context()->Plug(if_true, if_false); +} + + + +void FullCodeGenerator::EmitIsArray(CallRuntime* expr) { + ZoneList<Expression*>* args = expr->arguments(); + DCHECK(args->length() == 1); + + VisitForAccumulatorValue(args->at(0)); + + Label materialize_true, materialize_false; + Label* if_true = NULL; + Label* if_false = NULL; + Label* fall_through = NULL; + context()->PrepareTest(&materialize_true, &materialize_false, + &if_true, &if_false, &fall_through); + + __ JumpIfSmi(eax, if_false); + __ CmpObjectType(eax, JS_ARRAY_TYPE, ebx); + PrepareForBailoutBeforeSplit(expr, true, if_true, if_false); + Split(equal, if_true, if_false, fall_through); + + context()->Plug(if_true, if_false); +} + + +void FullCodeGenerator::EmitIsRegExp(CallRuntime* expr) { + ZoneList<Expression*>* args = expr->arguments(); + DCHECK(args->length() == 1); + + VisitForAccumulatorValue(args->at(0)); + + Label materialize_true, materialize_false; + Label* if_true = NULL; + Label* if_false = NULL; + Label* fall_through = NULL; + context()->PrepareTest(&materialize_true, &materialize_false, + &if_true, &if_false, &fall_through); + + __ JumpIfSmi(eax, if_false); + __ CmpObjectType(eax, JS_REGEXP_TYPE, ebx); + PrepareForBailoutBeforeSplit(expr, true, if_true, if_false); + Split(equal, if_true, if_false, fall_through); + + context()->Plug(if_true, if_false); +} + + + +void FullCodeGenerator::EmitIsConstructCall(CallRuntime* expr) { + DCHECK(expr->arguments()->length() == 0); + + Label materialize_true, materialize_false; + Label* if_true = NULL; + Label* if_false = NULL; + Label* fall_through = NULL; + context()->PrepareTest(&materialize_true, &materialize_false, + &if_true, &if_false, &fall_through); + + // Get the frame pointer for the calling frame. + __ mov(eax, Operand(ebp, StandardFrameConstants::kCallerFPOffset)); + + // Skip the arguments adaptor frame if it exists. + Label check_frame_marker; + __ cmp(Operand(eax, StandardFrameConstants::kContextOffset), + Immediate(Smi::FromInt(StackFrame::ARGUMENTS_ADAPTOR))); + __ j(not_equal, &check_frame_marker); + __ mov(eax, Operand(eax, StandardFrameConstants::kCallerFPOffset)); + + // Check the marker in the calling frame. + __ bind(&check_frame_marker); + __ cmp(Operand(eax, StandardFrameConstants::kMarkerOffset), + Immediate(Smi::FromInt(StackFrame::CONSTRUCT))); + PrepareForBailoutBeforeSplit(expr, true, if_true, if_false); + Split(equal, if_true, if_false, fall_through); + + context()->Plug(if_true, if_false); +} + + +void FullCodeGenerator::EmitObjectEquals(CallRuntime* expr) { + ZoneList<Expression*>* args = expr->arguments(); + DCHECK(args->length() == 2); + + // Load the two objects into registers and perform the comparison. 
+ VisitForStackValue(args->at(0)); + VisitForAccumulatorValue(args->at(1)); + + Label materialize_true, materialize_false; + Label* if_true = NULL; + Label* if_false = NULL; + Label* fall_through = NULL; + context()->PrepareTest(&materialize_true, &materialize_false, + &if_true, &if_false, &fall_through); + + __ pop(ebx); + __ cmp(eax, ebx); + PrepareForBailoutBeforeSplit(expr, true, if_true, if_false); + Split(equal, if_true, if_false, fall_through); + + context()->Plug(if_true, if_false); +} + + +void FullCodeGenerator::EmitArguments(CallRuntime* expr) { + ZoneList<Expression*>* args = expr->arguments(); + DCHECK(args->length() == 1); + + // ArgumentsAccessStub expects the key in edx and the formal + // parameter count in eax. + VisitForAccumulatorValue(args->at(0)); + __ mov(edx, eax); + __ Move(eax, Immediate(Smi::FromInt(info_->scope()->num_parameters()))); + ArgumentsAccessStub stub(isolate(), ArgumentsAccessStub::READ_ELEMENT); + __ CallStub(&stub); + context()->Plug(eax); +} + + +void FullCodeGenerator::EmitArgumentsLength(CallRuntime* expr) { + DCHECK(expr->arguments()->length() == 0); + + Label exit; + // Get the number of formal parameters. + __ Move(eax, Immediate(Smi::FromInt(info_->scope()->num_parameters()))); + + // Check if the calling frame is an arguments adaptor frame. + __ mov(ebx, Operand(ebp, StandardFrameConstants::kCallerFPOffset)); + __ cmp(Operand(ebx, StandardFrameConstants::kContextOffset), + Immediate(Smi::FromInt(StackFrame::ARGUMENTS_ADAPTOR))); + __ j(not_equal, &exit); + + // Arguments adaptor case: Read the arguments length from the + // adaptor frame. + __ mov(eax, Operand(ebx, ArgumentsAdaptorFrameConstants::kLengthOffset)); + + __ bind(&exit); + __ AssertSmi(eax); + context()->Plug(eax); +} + + +void FullCodeGenerator::EmitClassOf(CallRuntime* expr) { + ZoneList<Expression*>* args = expr->arguments(); + DCHECK(args->length() == 1); + Label done, null, function, non_function_constructor; + + VisitForAccumulatorValue(args->at(0)); + + // If the object is a smi, we return null. + __ JumpIfSmi(eax, &null); + + // Check that the object is a JS object but take special care of JS + // functions to make sure they have 'Function' as their class. + // Assume that there are only two callable types, and one of them is at + // either end of the type range for JS object types. Saves extra comparisons. + STATIC_ASSERT(NUM_OF_CALLABLE_SPEC_OBJECT_TYPES == 2); + __ CmpObjectType(eax, FIRST_SPEC_OBJECT_TYPE, eax); + // Map is now in eax. + __ j(below, &null); + STATIC_ASSERT(FIRST_NONCALLABLE_SPEC_OBJECT_TYPE == + FIRST_SPEC_OBJECT_TYPE + 1); + __ j(equal, &function); + + __ CmpInstanceType(eax, LAST_SPEC_OBJECT_TYPE); + STATIC_ASSERT(LAST_NONCALLABLE_SPEC_OBJECT_TYPE == + LAST_SPEC_OBJECT_TYPE - 1); + __ j(equal, &function); + // Assume that there is no larger type. + STATIC_ASSERT(LAST_NONCALLABLE_SPEC_OBJECT_TYPE == LAST_TYPE - 1); + + // Check if the constructor in the map is a JS function. + __ mov(eax, FieldOperand(eax, Map::kConstructorOffset)); + __ CmpObjectType(eax, JS_FUNCTION_TYPE, ebx); + __ j(not_equal, &non_function_constructor); + + // eax now contains the constructor function. Grab the + // instance class name from there. + __ mov(eax, FieldOperand(eax, JSFunction::kSharedFunctionInfoOffset)); + __ mov(eax, FieldOperand(eax, SharedFunctionInfo::kInstanceClassNameOffset)); + __ jmp(&done); + + // Functions have class 'Function'. 
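// ---------------------------------------------------------------------------
// [Editorial aside, not part of the upstream diff.] The STATIC_ASSERTs in
// EmitClassOf pin the two callable instance types to the very ends of the
// spec-object type range, so deciding "is this a function?" costs one compare
// per end of the range instead of a table lookup. Sketch with hypothetical
// numeric stand-ins for the real type constants:
namespace sketch_class_of {
const int kFirstSpecObjectType = 0x80;  // callable (proxy) end of the range
const int kLastSpecObjectType = 0x87;   // callable (function) end
inline bool IsCallableSpecObject(int instance_type) {
  return instance_type == kFirstSpecObjectType ||
         instance_type == kLastSpecObjectType;
}
}  // namespace sketch_class_of
// ---------------------------------------------------------------------------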
+ __ bind(&function); + __ mov(eax, isolate()->factory()->function_class_string()); + __ jmp(&done); + + // Objects with a non-function constructor have class 'Object'. + __ bind(&non_function_constructor); + __ mov(eax, isolate()->factory()->Object_string()); + __ jmp(&done); + + // Non-JS objects have class null. + __ bind(&null); + __ mov(eax, isolate()->factory()->null_value()); + + // All done. + __ bind(&done); + + context()->Plug(eax); +} + + +void FullCodeGenerator::EmitSubString(CallRuntime* expr) { + // Load the arguments on the stack and call the stub. + SubStringStub stub(isolate()); + ZoneList<Expression*>* args = expr->arguments(); + DCHECK(args->length() == 3); + VisitForStackValue(args->at(0)); + VisitForStackValue(args->at(1)); + VisitForStackValue(args->at(2)); + __ CallStub(&stub); + context()->Plug(eax); +} + + +void FullCodeGenerator::EmitRegExpExec(CallRuntime* expr) { + // Load the arguments on the stack and call the stub. + RegExpExecStub stub(isolate()); + ZoneList<Expression*>* args = expr->arguments(); + DCHECK(args->length() == 4); + VisitForStackValue(args->at(0)); + VisitForStackValue(args->at(1)); + VisitForStackValue(args->at(2)); + VisitForStackValue(args->at(3)); + __ CallStub(&stub); + context()->Plug(eax); +} + + +void FullCodeGenerator::EmitValueOf(CallRuntime* expr) { + ZoneList<Expression*>* args = expr->arguments(); + DCHECK(args->length() == 1); + + VisitForAccumulatorValue(args->at(0)); // Load the object. + + Label done; + // If the object is a smi return the object. + __ JumpIfSmi(eax, &done, Label::kNear); + // If the object is not a value type, return the object. + __ CmpObjectType(eax, JS_VALUE_TYPE, ebx); + __ j(not_equal, &done, Label::kNear); + __ mov(eax, FieldOperand(eax, JSValue::kValueOffset)); + + __ bind(&done); + context()->Plug(eax); +} + + +void FullCodeGenerator::EmitDateField(CallRuntime* expr) { + ZoneList<Expression*>* args = expr->arguments(); + DCHECK(args->length() == 2); + DCHECK_NE(NULL, args->at(1)->AsLiteral()); + Smi* index = Smi::cast(*(args->at(1)->AsLiteral()->value())); + + VisitForAccumulatorValue(args->at(0)); // Load the object. 
+ + Label runtime, done, not_date_object; + Register object = eax; + Register result = eax; + Register scratch = ecx; + + __ JumpIfSmi(object, &not_date_object); + __ CmpObjectType(object, JS_DATE_TYPE, scratch); + __ j(not_equal, &not_date_object); + + if (index->value() == 0) { + __ mov(result, FieldOperand(object, JSDate::kValueOffset)); + __ jmp(&done); + } else { + if (index->value() < JSDate::kFirstUncachedField) { + ExternalReference stamp = ExternalReference::date_cache_stamp(isolate()); + __ mov(scratch, Operand::StaticVariable(stamp)); + __ cmp(scratch, FieldOperand(object, JSDate::kCacheStampOffset)); + __ j(not_equal, &runtime, Label::kNear); + __ mov(result, FieldOperand(object, JSDate::kValueOffset + + kPointerSize * index->value())); + __ jmp(&done); + } + __ bind(&runtime); + __ PrepareCallCFunction(2, scratch); + __ mov(Operand(esp, 0), object); + __ mov(Operand(esp, 1 * kPointerSize), Immediate(index)); + __ CallCFunction(ExternalReference::get_date_field_function(isolate()), 2); + __ jmp(&done); + } + + __ bind(&not_date_object); + __ CallRuntime(Runtime::kThrowNotDateError, 0); + __ bind(&done); + context()->Plug(result); +} + + +void FullCodeGenerator::EmitOneByteSeqStringSetChar(CallRuntime* expr) { + ZoneList<Expression*>* args = expr->arguments(); + DCHECK_EQ(3, args->length()); + + Register string = eax; + Register index = ebx; + Register value = ecx; + + VisitForStackValue(args->at(1)); // index + VisitForStackValue(args->at(2)); // value + VisitForAccumulatorValue(args->at(0)); // string + + __ pop(value); + __ pop(index); + + if (FLAG_debug_code) { + __ test(value, Immediate(kSmiTagMask)); + __ Check(zero, kNonSmiValue); + __ test(index, Immediate(kSmiTagMask)); + __ Check(zero, kNonSmiValue); + } + + __ SmiUntag(value); + __ SmiUntag(index); + + if (FLAG_debug_code) { + static const uint32_t one_byte_seq_type = kSeqStringTag | kOneByteStringTag; + __ EmitSeqStringSetCharCheck(string, index, value, one_byte_seq_type); + } + + __ mov_b(FieldOperand(string, index, times_1, SeqOneByteString::kHeaderSize), + value); + context()->Plug(string); +} + + +void FullCodeGenerator::EmitTwoByteSeqStringSetChar(CallRuntime* expr) { + ZoneList<Expression*>* args = expr->arguments(); + DCHECK_EQ(3, args->length()); + + Register string = eax; + Register index = ebx; + Register value = ecx; + + VisitForStackValue(args->at(1)); // index + VisitForStackValue(args->at(2)); // value + VisitForAccumulatorValue(args->at(0)); // string + __ pop(value); + __ pop(index); + + if (FLAG_debug_code) { + __ test(value, Immediate(kSmiTagMask)); + __ Check(zero, kNonSmiValue); + __ test(index, Immediate(kSmiTagMask)); + __ Check(zero, kNonSmiValue); + __ SmiUntag(index); + static const uint32_t two_byte_seq_type = kSeqStringTag | kTwoByteStringTag; + __ EmitSeqStringSetCharCheck(string, index, value, two_byte_seq_type); + __ SmiTag(index); + } + + __ SmiUntag(value); + // No need to untag a smi for two-byte addressing. + __ mov_w(FieldOperand(string, index, times_1, SeqTwoByteString::kHeaderSize), + value); + context()->Plug(string); +} + + +void FullCodeGenerator::EmitMathPow(CallRuntime* expr) { + // Load the arguments on the stack and call the runtime function.
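// ---------------------------------------------------------------------------
// [Editorial aside, not part of the upstream diff.] "No need to untag a smi
// for two-byte addressing" above works because a tagged Smi index equals the
// real index times two, which is exactly the byte offset of a 16-bit
// character, so the tagged value feeds straight into times_1 addressing:
#include <cstdint>
namespace sketch_two_byte {
inline void SetChar(uint16_t* chars, uint32_t smi_index, uint16_t value) {
  uint8_t* base = reinterpret_cast<uint8_t*>(chars);
  // smi_index == index << 1 == index * sizeof(uint16_t)
  *reinterpret_cast<uint16_t*>(base + smi_index) = value;
}
}  // namespace sketch_two_byte
// ---------------------------------------------------------------------------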
+ ZoneList<Expression*>* args = expr->arguments(); + DCHECK(args->length() == 2); + VisitForStackValue(args->at(0)); + VisitForStackValue(args->at(1)); + + __ CallRuntime(Runtime::kMathPowSlow, 2); + context()->Plug(eax); +} + + +void FullCodeGenerator::EmitSetValueOf(CallRuntime* expr) { + ZoneList<Expression*>* args = expr->arguments(); + DCHECK(args->length() == 2); + + VisitForStackValue(args->at(0)); // Load the object. + VisitForAccumulatorValue(args->at(1)); // Load the value. + __ pop(ebx); // eax = value. ebx = object. + + Label done; + // If the object is a smi, return the value. + __ JumpIfSmi(ebx, &done, Label::kNear); + + // If the object is not a value type, return the value. + __ CmpObjectType(ebx, JS_VALUE_TYPE, ecx); + __ j(not_equal, &done, Label::kNear); + + // Store the value. + __ mov(FieldOperand(ebx, JSValue::kValueOffset), eax); + + // Update the write barrier. Save the value as it will be + // overwritten by the write barrier code and is needed afterward. + __ mov(edx, eax); + __ RecordWriteField(ebx, JSValue::kValueOffset, edx, ecx); + + __ bind(&done); + context()->Plug(eax); +} + + +void FullCodeGenerator::EmitNumberToString(CallRuntime* expr) { + ZoneList<Expression*>* args = expr->arguments(); + DCHECK_EQ(args->length(), 1); + + // Load the argument into eax and call the stub. + VisitForAccumulatorValue(args->at(0)); + + NumberToStringStub stub(isolate()); + __ CallStub(&stub); + context()->Plug(eax); +} + + +void FullCodeGenerator::EmitStringCharFromCode(CallRuntime* expr) { + ZoneList<Expression*>* args = expr->arguments(); + DCHECK(args->length() == 1); + + VisitForAccumulatorValue(args->at(0)); + + Label done; + StringCharFromCodeGenerator generator(eax, ebx); + generator.GenerateFast(masm_); + __ jmp(&done); + + NopRuntimeCallHelper call_helper; + generator.GenerateSlow(masm_, call_helper); + + __ bind(&done); + context()->Plug(ebx); +} + + +void FullCodeGenerator::EmitStringCharCodeAt(CallRuntime* expr) { + ZoneList<Expression*>* args = expr->arguments(); + DCHECK(args->length() == 2); + + VisitForStackValue(args->at(0)); + VisitForAccumulatorValue(args->at(1)); + + Register object = ebx; + Register index = eax; + Register result = edx; + + __ pop(object); + + Label need_conversion; + Label index_out_of_range; + Label done; + StringCharCodeAtGenerator generator(object, + index, + result, + &need_conversion, + &need_conversion, + &index_out_of_range, + STRING_INDEX_IS_NUMBER); + generator.GenerateFast(masm_); + __ jmp(&done); + + __ bind(&index_out_of_range); + // When the index is out of range, the spec requires us to return + // NaN. + __ Move(result, Immediate(isolate()->factory()->nan_value())); + __ jmp(&done); + + __ bind(&need_conversion); + // Move the undefined value into the result register, which will + // trigger conversion. 
+ __ Move(result, Immediate(isolate()->factory()->undefined_value())); + __ jmp(&done); + + NopRuntimeCallHelper call_helper; + generator.GenerateSlow(masm_, call_helper); + + __ bind(&done); + context()->Plug(result); +} + + +void FullCodeGenerator::EmitStringCharAt(CallRuntime* expr) { + ZoneList<Expression*>* args = expr->arguments(); + DCHECK(args->length() == 2); + + VisitForStackValue(args->at(0)); + VisitForAccumulatorValue(args->at(1)); + + Register object = ebx; + Register index = eax; + Register scratch = edx; + Register result = eax; + + __ pop(object); + + Label need_conversion; + Label index_out_of_range; + Label done; + StringCharAtGenerator generator(object, + index, + scratch, + result, + &need_conversion, + &need_conversion, + &index_out_of_range, + STRING_INDEX_IS_NUMBER); + generator.GenerateFast(masm_); + __ jmp(&done); + + __ bind(&index_out_of_range); + // When the index is out of range, the spec requires us to return + // the empty string. + __ Move(result, Immediate(isolate()->factory()->empty_string())); + __ jmp(&done); + + __ bind(&need_conversion); + // Move smi zero into the result register, which will trigger + // conversion. + __ Move(result, Immediate(Smi::FromInt(0))); + __ jmp(&done); + + NopRuntimeCallHelper call_helper; + generator.GenerateSlow(masm_, call_helper); + + __ bind(&done); + context()->Plug(result); +} + + +void FullCodeGenerator::EmitStringAdd(CallRuntime* expr) { + ZoneList<Expression*>* args = expr->arguments(); + DCHECK_EQ(2, args->length()); + VisitForStackValue(args->at(0)); + VisitForAccumulatorValue(args->at(1)); + + __ pop(edx); + StringAddStub stub(isolate(), STRING_ADD_CHECK_BOTH, NOT_TENURED); + __ CallStub(&stub); + context()->Plug(eax); +} + + +void FullCodeGenerator::EmitStringCompare(CallRuntime* expr) { + ZoneList<Expression*>* args = expr->arguments(); + DCHECK_EQ(2, args->length()); + + VisitForStackValue(args->at(0)); + VisitForStackValue(args->at(1)); + + StringCompareStub stub(isolate()); + __ CallStub(&stub); + context()->Plug(eax); +} + + +void FullCodeGenerator::EmitCallFunction(CallRuntime* expr) { + ZoneList<Expression*>* args = expr->arguments(); + DCHECK(args->length() >= 2); + + int arg_count = args->length() - 2; // 2 ~ receiver and function. + for (int i = 0; i < arg_count + 1; ++i) { + VisitForStackValue(args->at(i)); + } + VisitForAccumulatorValue(args->last()); // Function. + + Label runtime, done; + // Check for non-function argument (including proxy). + __ JumpIfSmi(eax, &runtime); + __ CmpObjectType(eax, JS_FUNCTION_TYPE, ebx); + __ j(not_equal, &runtime); + + // InvokeFunction requires the function in edi. Move it in there. + __ mov(edi, result_register()); + ParameterCount count(arg_count); + __ InvokeFunction(edi, count, CALL_FUNCTION, NullCallWrapper()); + __ mov(esi, Operand(ebp, StandardFrameConstants::kContextOffset)); + __ jmp(&done); + + __ bind(&runtime); + __ push(eax); + __ CallRuntime(Runtime::kCall, args->length()); + __ bind(&done); + + context()->Plug(eax); +} + + +void FullCodeGenerator::EmitRegExpConstructResult(CallRuntime* expr) { + // Load the arguments on the stack and call the stub. 
+ RegExpConstructResultStub stub(isolate()); + ZoneList<Expression*>* args = expr->arguments(); + DCHECK(args->length() == 3); + VisitForStackValue(args->at(0)); + VisitForStackValue(args->at(1)); + VisitForAccumulatorValue(args->at(2)); + __ pop(ebx); + __ pop(ecx); + __ CallStub(&stub); + context()->Plug(eax); +} + + +void FullCodeGenerator::EmitGetFromCache(CallRuntime* expr) { + ZoneList<Expression*>* args = expr->arguments(); + DCHECK_EQ(2, args->length()); + + DCHECK_NE(NULL, args->at(0)->AsLiteral()); + int cache_id = Smi::cast(*(args->at(0)->AsLiteral()->value()))->value(); + + Handle<FixedArray> jsfunction_result_caches( + isolate()->native_context()->jsfunction_result_caches()); + if (jsfunction_result_caches->length() <= cache_id) { + __ Abort(kAttemptToUseUndefinedCache); + __ mov(eax, isolate()->factory()->undefined_value()); + context()->Plug(eax); + return; + } + + VisitForAccumulatorValue(args->at(1)); + + Register key = eax; + Register cache = ebx; + Register tmp = ecx; + __ mov(cache, ContextOperand(esi, Context::GLOBAL_OBJECT_INDEX)); + __ mov(cache, + FieldOperand(cache, GlobalObject::kNativeContextOffset)); + __ mov(cache, ContextOperand(cache, Context::JSFUNCTION_RESULT_CACHES_INDEX)); + __ mov(cache, + FieldOperand(cache, FixedArray::OffsetOfElementAt(cache_id))); + + Label done, not_found; + STATIC_ASSERT(kSmiTag == 0 && kSmiTagSize == 1); + __ mov(tmp, FieldOperand(cache, JSFunctionResultCache::kFingerOffset)); + // tmp now holds finger offset as a smi. + __ cmp(key, FixedArrayElementOperand(cache, tmp)); + __ j(not_equal, &not_found); + + __ mov(eax, FixedArrayElementOperand(cache, tmp, 1)); + __ jmp(&done); + + __ bind(&not_found); + // Call runtime to perform the lookup. + __ push(cache); + __ push(key); + __ CallRuntime(Runtime::kGetFromCache, 2); + + __ bind(&done); + context()->Plug(eax); +} + + +void FullCodeGenerator::EmitHasCachedArrayIndex(CallRuntime* expr) { + ZoneList<Expression*>* args = expr->arguments(); + DCHECK(args->length() == 1); + + VisitForAccumulatorValue(args->at(0)); + + __ AssertString(eax); + + Label materialize_true, materialize_false; + Label* if_true = NULL; + Label* if_false = NULL; + Label* fall_through = NULL; + context()->PrepareTest(&materialize_true, &materialize_false, + &if_true, &if_false, &fall_through); + + __ test(FieldOperand(eax, String::kHashFieldOffset), + Immediate(String::kContainsCachedArrayIndexMask)); + PrepareForBailoutBeforeSplit(expr, true, if_true, if_false); + Split(zero, if_true, if_false, fall_through); + + context()->Plug(if_true, if_false); +} + + +void FullCodeGenerator::EmitGetCachedArrayIndex(CallRuntime* expr) { + ZoneList<Expression*>* args = expr->arguments(); + DCHECK(args->length() == 1); + VisitForAccumulatorValue(args->at(0)); + + __ AssertString(eax); + + __ mov(eax, FieldOperand(eax, String::kHashFieldOffset)); + __ IndexFromHash(eax, eax); + + context()->Plug(eax); +} + + +void FullCodeGenerator::EmitFastAsciiArrayJoin(CallRuntime* expr) { + Label bailout, done, one_char_separator, long_separator, + non_trivial_array, not_size_one_array, loop, + loop_1, loop_1_condition, loop_2, loop_2_entry, loop_3, loop_3_entry; + + ZoneList<Expression*>* args = expr->arguments(); + DCHECK(args->length() == 2); + // We will leave the separator on the stack until the end of the function. + VisitForStackValue(args->at(1)); + // Load this to eax (= array) + VisitForAccumulatorValue(args->at(0)); + // All aliases of the same register have disjoint lifetimes.
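// ---------------------------------------------------------------------------
// [Editorial aside, not part of the upstream diff.] EmitGetFromCache above
// reads the cache's "finger" (a Smi index of the most recently used key),
// compares the key stored at that element, and on a hit loads the element
// immediately after it; anything else falls back to Runtime::kGetFromCache.
// A flat-array model of the fast path (layout hypothetical):
namespace sketch_result_cache {
struct Cache {
  int finger;       // index of the last-hit key
  int entries[16];  // alternating key, value pairs
};
inline bool FastLookup(const Cache& cache, int key, int* value_out) {
  if (cache.entries[cache.finger] != key) return false;  // runtime fallback
  *value_out = cache.entries[cache.finger + 1];
  return true;
}
}  // namespace sketch_result_cache
// ---------------------------------------------------------------------------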
+ Register array = eax; + Register elements = no_reg; // Will be eax. + + Register index = edx; + + Register string_length = ecx; + + Register string = esi; + + Register scratch = ebx; + + Register array_length = edi; + Register result_pos = no_reg; // Will be edi. + + // Separator operand is already pushed. + Operand separator_operand = Operand(esp, 2 * kPointerSize); + Operand result_operand = Operand(esp, 1 * kPointerSize); + Operand array_length_operand = Operand(esp, 0); + __ sub(esp, Immediate(2 * kPointerSize)); + __ cld(); + // Check that the array is a JSArray + __ JumpIfSmi(array, &bailout); + __ CmpObjectType(array, JS_ARRAY_TYPE, scratch); + __ j(not_equal, &bailout); + + // Check that the array has fast elements. + __ CheckFastElements(scratch, &bailout); + + // If the array has length zero, return the empty string. + __ mov(array_length, FieldOperand(array, JSArray::kLengthOffset)); + __ SmiUntag(array_length); + __ j(not_zero, &non_trivial_array); + __ mov(result_operand, isolate()->factory()->empty_string()); + __ jmp(&done); + + // Save the array length. + __ bind(&non_trivial_array); + __ mov(array_length_operand, array_length); + + // Save the FixedArray containing array's elements. + // End of array's live range. + elements = array; + __ mov(elements, FieldOperand(array, JSArray::kElementsOffset)); + array = no_reg; + + + // Check that all array elements are sequential ASCII strings, and + // accumulate the sum of their lengths, as a smi-encoded value. + __ Move(index, Immediate(0)); + __ Move(string_length, Immediate(0)); + // Loop condition: while (index < length). + // Live loop registers: index, array_length, string, + // scratch, string_length, elements. + if (generate_debug_code_) { + __ cmp(index, array_length); + __ Assert(less, kNoEmptyArraysHereInEmitFastAsciiArrayJoin); + } + __ bind(&loop); + __ mov(string, FieldOperand(elements, + index, + times_pointer_size, + FixedArray::kHeaderSize)); + __ JumpIfSmi(string, &bailout); + __ mov(scratch, FieldOperand(string, HeapObject::kMapOffset)); + __ movzx_b(scratch, FieldOperand(scratch, Map::kInstanceTypeOffset)); + __ and_(scratch, Immediate( + kIsNotStringMask | kStringEncodingMask | kStringRepresentationMask)); + __ cmp(scratch, kStringTag | kOneByteStringTag | kSeqStringTag); + __ j(not_equal, &bailout); + __ add(string_length, + FieldOperand(string, SeqOneByteString::kLengthOffset)); + __ j(overflow, &bailout); + __ add(index, Immediate(1)); + __ cmp(index, array_length); + __ j(less, &loop); + + // If array_length is 1, return elements[0], a string. + __ cmp(array_length, 1); + __ j(not_equal, &not_size_one_array); + __ mov(scratch, FieldOperand(elements, FixedArray::kHeaderSize)); + __ mov(result_operand, scratch); + __ jmp(&done); + + __ bind(&not_size_one_array); + + // End of array_length live range. + result_pos = array_length; + array_length = no_reg; + + // Live registers: + // string_length: Sum of string lengths, as a smi. + // elements: FixedArray of strings. + + // Check that the separator is a flat ASCII string. + __ mov(string, separator_operand); + __ JumpIfSmi(string, &bailout); + __ mov(scratch, FieldOperand(string, HeapObject::kMapOffset)); + __ movzx_b(scratch, FieldOperand(scratch, Map::kInstanceTypeOffset)); + __ and_(scratch, Immediate( + kIsNotStringMask | kStringEncodingMask | kStringRepresentationMask)); + __ cmp(scratch, kStringTag | kOneByteStringTag | kSeqStringTag); + __ j(not_equal, &bailout); + + // Add (separator length times array_length) - separator length + // to string_length.
+ __ mov(scratch, separator_operand); + __ mov(scratch, FieldOperand(scratch, SeqOneByteString::kLengthOffset)); + __ sub(string_length, scratch); // May be negative, temporarily. + __ imul(scratch, array_length_operand); + __ j(overflow, &bailout); + __ add(string_length, scratch); + __ j(overflow, &bailout); + + __ shr(string_length, 1); + // Live registers and stack values: + // string_length + // elements + __ AllocateAsciiString(result_pos, string_length, scratch, + index, string, &bailout); + __ mov(result_operand, result_pos); + __ lea(result_pos, FieldOperand(result_pos, SeqOneByteString::kHeaderSize)); + + + __ mov(string, separator_operand); + __ cmp(FieldOperand(string, SeqOneByteString::kLengthOffset), + Immediate(Smi::FromInt(1))); + __ j(equal, &one_char_separator); + __ j(greater, &long_separator); + + + // Empty separator case + __ mov(index, Immediate(0)); + __ jmp(&loop_1_condition); + // Loop condition: while (index < length). + __ bind(&loop_1); + // Each iteration of the loop concatenates one string to the result. + // Live values in registers: + // index: which element of the elements array we are adding to the result. + // result_pos: the position to which we are currently copying characters. + // elements: the FixedArray of strings we are joining. + + // Get string = array[index]. + __ mov(string, FieldOperand(elements, index, + times_pointer_size, + FixedArray::kHeaderSize)); + __ mov(string_length, + FieldOperand(string, String::kLengthOffset)); + __ shr(string_length, 1); + __ lea(string, + FieldOperand(string, SeqOneByteString::kHeaderSize)); + __ CopyBytes(string, result_pos, string_length, scratch); + __ add(index, Immediate(1)); + __ bind(&loop_1_condition); + __ cmp(index, array_length_operand); + __ j(less, &loop_1); // End while (index < length). + __ jmp(&done); + + + + // One-character separator case + __ bind(&one_char_separator); + // Replace separator with its ASCII character value. + __ mov_b(scratch, FieldOperand(string, SeqOneByteString::kHeaderSize)); + __ mov_b(separator_operand, scratch); + + __ Move(index, Immediate(0)); + // Jump into the loop after the code that copies the separator, so the first + // element is not preceded by a separator + __ jmp(&loop_2_entry); + // Loop condition: while (index < length). + __ bind(&loop_2); + // Each iteration of the loop concatenates one string to the result. + // Live values in registers: + // index: which element of the elements array we are adding to the result. + // result_pos: the position to which we are currently copying characters. + + // Copy the separator character to the result. + __ mov_b(scratch, separator_operand); + __ mov_b(Operand(result_pos, 0), scratch); + __ inc(result_pos); + + __ bind(&loop_2_entry); + // Get string = array[index]. + __ mov(string, FieldOperand(elements, index, + times_pointer_size, + FixedArray::kHeaderSize)); + __ mov(string_length, + FieldOperand(string, String::kLengthOffset)); + __ shr(string_length, 1); + __ lea(string, + FieldOperand(string, SeqOneByteString::kHeaderSize)); + __ CopyBytes(string, result_pos, string_length, scratch); + __ add(index, Immediate(1)); + + __ cmp(index, array_length_operand); + __ j(less, &loop_2); // End while (index < length). + __ jmp(&done); + + + // Long separator case (separator is more than one character). 
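// ---------------------------------------------------------------------------
// [Editorial aside, not part of the upstream diff.] The length arithmetic
// above stays in Smi-tagged form the whole way: sum the element lengths, then
// add separator_length * (n - 1) via "subtract once, multiply, add", bailing
// out on any signed overflow, and finally shift right by one to untag.
// Equivalent checked arithmetic in C++:
#include <cstdint>
namespace sketch_join_length {
inline bool TotalLength(const int32_t* smi_lens, int n, int32_t smi_sep,
                        int32_t* smi_total_out) {
  int64_t total = 0;
  for (int i = 0; i < n; ++i) total += smi_lens[i];
  total += static_cast<int64_t>(smi_sep) * (n - 1);
  if (total < 0 || total > INT32_MAX) return false;  // j(overflow, &bailout)
  *smi_total_out = static_cast<int32_t>(total);      // still Smi-tagged (2x)
  return true;
}
}  // namespace sketch_join_length
// ---------------------------------------------------------------------------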
+ __ bind(&long_separator); + + __ Move(index, Immediate(0)); + // Jump into the loop after the code that copies the separator, so the first + // element is not preceded by a separator + __ jmp(&loop_3_entry); + // Loop condition: while (index < length). + __ bind(&loop_3); + // Each iteration of the loop concatenates one string to the result. + // Live values in registers: + // index: which element of the elements array we are adding to the result. + // result_pos: the position to which we are currently copying characters. + + // Copy the separator to the result. + __ mov(string, separator_operand); + __ mov(string_length, + FieldOperand(string, String::kLengthOffset)); + __ shr(string_length, 1); + __ lea(string, + FieldOperand(string, SeqOneByteString::kHeaderSize)); + __ CopyBytes(string, result_pos, string_length, scratch); + + __ bind(&loop_3_entry); + // Get string = array[index]. + __ mov(string, FieldOperand(elements, index, + times_pointer_size, + FixedArray::kHeaderSize)); + __ mov(string_length, + FieldOperand(string, String::kLengthOffset)); + __ shr(string_length, 1); + __ lea(string, + FieldOperand(string, SeqOneByteString::kHeaderSize)); + __ CopyBytes(string, result_pos, string_length, scratch); + __ add(index, Immediate(1)); + + __ cmp(index, array_length_operand); + __ j(less, &loop_3); // End while (index < length). + __ jmp(&done); + + + __ bind(&bailout); + __ mov(result_operand, isolate()->factory()->undefined_value()); + __ bind(&done); + __ mov(eax, result_operand); + // Drop temp values from the stack, and restore context register. + __ add(esp, Immediate(3 * kPointerSize)); + + __ mov(esi, Operand(ebp, StandardFrameConstants::kContextOffset)); + context()->Plug(eax); +} + + +void FullCodeGenerator::EmitDebugIsActive(CallRuntime* expr) { + DCHECK(expr->arguments()->length() == 0); + ExternalReference debug_is_active = + ExternalReference::debug_is_active_address(isolate()); + __ movzx_b(eax, Operand::StaticVariable(debug_is_active)); + __ SmiTag(eax); + context()->Plug(eax); +} + + +void FullCodeGenerator::VisitCallRuntime(CallRuntime* expr) { + if (expr->function() != NULL && + expr->function()->intrinsic_type == Runtime::INLINE) { + Comment cmnt(masm_, "[ InlineRuntimeCall"); + EmitInlineRuntimeCall(expr); + return; + } + + Comment cmnt(masm_, "[ CallRuntime"); + ZoneList<Expression*>* args = expr->arguments(); + + if (expr->is_jsruntime()) { + // Push the builtins object as receiver. + __ mov(eax, GlobalObjectOperand()); + __ push(FieldOperand(eax, GlobalObject::kBuiltinsOffset)); + + // Load the function from the receiver. + __ mov(LoadIC::ReceiverRegister(), Operand(esp, 0)); + __ mov(LoadIC::NameRegister(), Immediate(expr->name())); + if (FLAG_vector_ics) { + __ mov(LoadIC::SlotRegister(), + Immediate(Smi::FromInt(expr->CallRuntimeFeedbackSlot()))); + CallLoadIC(NOT_CONTEXTUAL); + } else { + CallLoadIC(NOT_CONTEXTUAL, expr->CallRuntimeFeedbackId()); + } + + // Push the target function under the receiver. + __ push(Operand(esp, 0)); + __ mov(Operand(esp, kPointerSize), eax); + + // Code common for calls using the IC. + ZoneList<Expression*>* args = expr->arguments(); + int arg_count = args->length(); + for (int i = 0; i < arg_count; i++) { + VisitForStackValue(args->at(i)); + } + + // Record source position of the IC call. + SetSourcePosition(expr->position()); + CallFunctionStub stub(isolate(), arg_count, NO_CALL_FUNCTION_FLAGS); + __ mov(edi, Operand(esp, (arg_count + 1) * kPointerSize)); + __ CallStub(&stub); + // Restore context register. 
+ __ mov(esi, Operand(ebp, StandardFrameConstants::kContextOffset)); + context()->DropAndPlug(1, eax); + + } else { + // Push the arguments ("left-to-right"). + int arg_count = args->length(); + for (int i = 0; i < arg_count; i++) { + VisitForStackValue(args->at(i)); + } + + // Call the C runtime function. + __ CallRuntime(expr->function(), arg_count); + + context()->Plug(eax); + } +} + + +void FullCodeGenerator::VisitUnaryOperation(UnaryOperation* expr) { + switch (expr->op()) { + case Token::DELETE: { + Comment cmnt(masm_, "[ UnaryOperation (DELETE)"); + Property* property = expr->expression()->AsProperty(); + VariableProxy* proxy = expr->expression()->AsVariableProxy(); + + if (property != NULL) { + VisitForStackValue(property->obj()); + VisitForStackValue(property->key()); + __ push(Immediate(Smi::FromInt(strict_mode()))); + __ InvokeBuiltin(Builtins::DELETE, CALL_FUNCTION); + context()->Plug(eax); + } else if (proxy != NULL) { + Variable* var = proxy->var(); + // Delete of an unqualified identifier is disallowed in strict mode + // but "delete this" is allowed. + DCHECK(strict_mode() == SLOPPY || var->is_this()); + if (var->IsUnallocated()) { + __ push(GlobalObjectOperand()); + __ push(Immediate(var->name())); + __ push(Immediate(Smi::FromInt(SLOPPY))); + __ InvokeBuiltin(Builtins::DELETE, CALL_FUNCTION); + context()->Plug(eax); + } else if (var->IsStackAllocated() || var->IsContextSlot()) { + // Result of deleting non-global variables is false. 'this' is + // not really a variable, though we implement it as one. The + // subexpression does not have side effects. + context()->Plug(var->is_this()); + } else { + // Non-global variable. Call the runtime to try to delete from the + // context where the variable was introduced. + __ push(context_register()); + __ push(Immediate(var->name())); + __ CallRuntime(Runtime::kDeleteLookupSlot, 2); + context()->Plug(eax); + } + } else { + // Result of deleting non-property, non-variable reference is true. + // The subexpression may have side effects. + VisitForEffect(expr->expression()); + context()->Plug(true); + } + break; + } + + case Token::VOID: { + Comment cmnt(masm_, "[ UnaryOperation (VOID)"); + VisitForEffect(expr->expression()); + context()->Plug(isolate()->factory()->undefined_value()); + break; + } + + case Token::NOT: { + Comment cmnt(masm_, "[ UnaryOperation (NOT)"); + if (context()->IsEffect()) { + // Unary NOT has no side effects so it's only necessary to visit the + // subexpression. Match the optimizing compiler by not branching. + VisitForEffect(expr->expression()); + } else if (context()->IsTest()) { + const TestContext* test = TestContext::cast(context()); + // The labels are swapped for the recursive call. + VisitForControl(expr->expression(), + test->false_label(), + test->true_label(), + test->fall_through()); + context()->Plug(test->true_label(), test->false_label()); + } else { + // We handle value contexts explicitly rather than simply visiting + // for control and plugging the control flow into the context, + // because we need to prepare a pair of extra administrative AST ids + // for the optimizing compiler. 
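// ---------------------------------------------------------------------------
// [Editorial aside, not part of the upstream diff.] "The labels are swapped
// for the recursive call" above is the whole implementation of unary "!" in
// test position: the operand is compiled with its true/false jump targets
// exchanged, so the negation itself emits zero instructions. Sketch of the
// control-flow bookkeeping:
namespace sketch_not {
struct TestTargets { int true_label, false_label, fall_through; };
inline TestTargets Negate(const TestTargets& t) {
  // Swap targets, keep the fall-through; no code is generated for "!".
  return TestTargets{t.false_label, t.true_label, t.fall_through};
}
}  // namespace sketch_not
// ---------------------------------------------------------------------------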
+ DCHECK(context()->IsAccumulatorValue() || context()->IsStackValue()); + Label materialize_true, materialize_false, done; + VisitForControl(expr->expression(), + &materialize_false, + &materialize_true, + &materialize_true); + __ bind(&materialize_true); + PrepareForBailoutForId(expr->MaterializeTrueId(), NO_REGISTERS); + if (context()->IsAccumulatorValue()) { + __ mov(eax, isolate()->factory()->true_value()); + } else { + __ Push(isolate()->factory()->true_value()); + } + __ jmp(&done, Label::kNear); + __ bind(&materialize_false); + PrepareForBailoutForId(expr->MaterializeFalseId(), NO_REGISTERS); + if (context()->IsAccumulatorValue()) { + __ mov(eax, isolate()->factory()->false_value()); + } else { + __ Push(isolate()->factory()->false_value()); + } + __ bind(&done); + } + break; + } + + case Token::TYPEOF: { + Comment cmnt(masm_, "[ UnaryOperation (TYPEOF)"); + { StackValueContext context(this); + VisitForTypeofValue(expr->expression()); + } + __ CallRuntime(Runtime::kTypeof, 1); + context()->Plug(eax); + break; + } + + default: + UNREACHABLE(); + } +} + + +void FullCodeGenerator::VisitCountOperation(CountOperation* expr) { + DCHECK(expr->expression()->IsValidReferenceExpression()); + + Comment cmnt(masm_, "[ CountOperation"); + SetSourcePosition(expr->position()); + + // Expression can only be a property, a global or a (parameter or local) + // slot. + enum LhsKind { VARIABLE, NAMED_PROPERTY, KEYED_PROPERTY }; + LhsKind assign_type = VARIABLE; + Property* prop = expr->expression()->AsProperty(); + // In case of a property we use the uninitialized expression context + // of the key to detect a named property. + if (prop != NULL) { + assign_type = + (prop->key()->IsPropertyName()) ? NAMED_PROPERTY : KEYED_PROPERTY; + } + + // Evaluate expression and get value. + if (assign_type == VARIABLE) { + DCHECK(expr->expression()->AsVariableProxy()->var() != NULL); + AccumulatorValueContext context(this); + EmitVariableLoad(expr->expression()->AsVariableProxy()); + } else { + // Reserve space for result of postfix operation. + if (expr->is_postfix() && !context()->IsEffect()) { + __ push(Immediate(Smi::FromInt(0))); + } + if (assign_type == NAMED_PROPERTY) { + // Put the object both on the stack and in the register. + VisitForStackValue(prop->obj()); + __ mov(LoadIC::ReceiverRegister(), Operand(esp, 0)); + EmitNamedPropertyLoad(prop); + } else { + VisitForStackValue(prop->obj()); + VisitForStackValue(prop->key()); + __ mov(LoadIC::ReceiverRegister(), + Operand(esp, kPointerSize)); // Object. + __ mov(LoadIC::NameRegister(), Operand(esp, 0)); // Key. + EmitKeyedPropertyLoad(prop); + } + } + + // We need a second deoptimization point after loading the value + // in case evaluating the property load may have a side effect. + if (assign_type == VARIABLE) { + PrepareForBailout(expr->expression(), TOS_REG); + } else { + PrepareForBailoutForId(prop->LoadId(), TOS_REG); + } + + // Inline smi case if we are in a loop. + Label done, stub_call; + JumpPatchSite patch_site(masm_); + if (ShouldInlineSmiCase(expr->op())) { + Label slow; + patch_site.EmitJumpIfNotSmi(eax, &slow, Label::kNear); + + // Save result for postfix expressions. + if (expr->is_postfix()) { + if (!context()->IsEffect()) { + // Save the result on the stack. If we have a named or keyed property + // we store the result under the receiver that is currently on top + // of the stack.
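// ---------------------------------------------------------------------------
// [Editorial aside, not part of the upstream diff.] The inline Smi fast path
// that follows adds Smi(1) (raw value 2) directly and, if the 31-bit payload
// overflows, undoes the add and falls through to the generic BinaryOpIC
// stub. Sketch using a GCC/Clang overflow builtin:
#include <cstdint>
namespace sketch_count_op {
inline bool FastIncrement(int32_t* smi) {
  int32_t result;
  if (__builtin_add_overflow(*smi, 2, &result)) return false;  // stub path
  *smi = result;  // corresponds to j(no_overflow, &done)
  return true;
}
}  // namespace sketch_count_op
// ---------------------------------------------------------------------------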
+ switch (assign_type) { + case VARIABLE: + __ push(eax); + break; + case NAMED_PROPERTY: + __ mov(Operand(esp, kPointerSize), eax); + break; + case KEYED_PROPERTY: + __ mov(Operand(esp, 2 * kPointerSize), eax); + break; + } + } + } + + if (expr->op() == Token::INC) { + __ add(eax, Immediate(Smi::FromInt(1))); + } else { + __ sub(eax, Immediate(Smi::FromInt(1))); + } + __ j(no_overflow, &done, Label::kNear); + // Call stub. Undo operation first. + if (expr->op() == Token::INC) { + __ sub(eax, Immediate(Smi::FromInt(1))); + } else { + __ add(eax, Immediate(Smi::FromInt(1))); + } + __ jmp(&stub_call, Label::kNear); + __ bind(&slow); + } + ToNumberStub convert_stub(isolate()); + __ CallStub(&convert_stub); + + // Save result for postfix expressions. + if (expr->is_postfix()) { + if (!context()->IsEffect()) { + // Save the result on the stack. If we have a named or keyed property + // we store the result under the receiver that is currently on top + // of the stack. + switch (assign_type) { + case VARIABLE: + __ push(eax); + break; + case NAMED_PROPERTY: + __ mov(Operand(esp, kPointerSize), eax); + break; + case KEYED_PROPERTY: + __ mov(Operand(esp, 2 * kPointerSize), eax); + break; + } + } + } + + // Record position before stub call. + SetSourcePosition(expr->position()); + + // Call stub for +1/-1. + __ bind(&stub_call); + __ mov(edx, eax); + __ mov(eax, Immediate(Smi::FromInt(1))); + BinaryOpICStub stub(isolate(), expr->binary_op(), NO_OVERWRITE); + CallIC(stub.GetCode(), expr->CountBinOpFeedbackId()); + patch_site.EmitPatchInfo(); + __ bind(&done); + + // Store the value returned in eax. + switch (assign_type) { + case VARIABLE: + if (expr->is_postfix()) { + // Perform the assignment as if via '='. + { EffectContext context(this); + EmitVariableAssignment(expr->expression()->AsVariableProxy()->var(), + Token::ASSIGN); + PrepareForBailoutForId(expr->AssignmentId(), TOS_REG); + context.Plug(eax); + } + // For all contexts except EffectContext We have the result on + // top of the stack. + if (!context()->IsEffect()) { + context()->PlugTOS(); + } + } else { + // Perform the assignment as if via '='. + EmitVariableAssignment(expr->expression()->AsVariableProxy()->var(), + Token::ASSIGN); + PrepareForBailoutForId(expr->AssignmentId(), TOS_REG); + context()->Plug(eax); + } + break; + case NAMED_PROPERTY: { + __ mov(StoreIC::NameRegister(), prop->key()->AsLiteral()->value()); + __ pop(StoreIC::ReceiverRegister()); + CallStoreIC(expr->CountStoreFeedbackId()); + PrepareForBailoutForId(expr->AssignmentId(), TOS_REG); + if (expr->is_postfix()) { + if (!context()->IsEffect()) { + context()->PlugTOS(); + } + } else { + context()->Plug(eax); + } + break; + } + case KEYED_PROPERTY: { + __ pop(KeyedStoreIC::NameRegister()); + __ pop(KeyedStoreIC::ReceiverRegister()); + Handle<Code> ic = strict_mode() == SLOPPY + ? 
isolate()->builtins()->KeyedStoreIC_Initialize() + : isolate()->builtins()->KeyedStoreIC_Initialize_Strict(); + CallIC(ic, expr->CountStoreFeedbackId()); + PrepareForBailoutForId(expr->AssignmentId(), TOS_REG); + if (expr->is_postfix()) { + // Result is on the stack + if (!context()->IsEffect()) { + context()->PlugTOS(); + } + } else { + context()->Plug(eax); + } + break; + } + } +} + + +void FullCodeGenerator::VisitForTypeofValue(Expression* expr) { + VariableProxy* proxy = expr->AsVariableProxy(); + DCHECK(!context()->IsEffect()); + DCHECK(!context()->IsTest()); + + if (proxy != NULL && proxy->var()->IsUnallocated()) { + Comment cmnt(masm_, "[ Global variable"); + __ mov(LoadIC::ReceiverRegister(), GlobalObjectOperand()); + __ mov(LoadIC::NameRegister(), Immediate(proxy->name())); + if (FLAG_vector_ics) { + __ mov(LoadIC::SlotRegister(), + Immediate(Smi::FromInt(proxy->VariableFeedbackSlot()))); + } + // Use a regular load, not a contextual load, to avoid a reference + // error. + CallLoadIC(NOT_CONTEXTUAL); + PrepareForBailout(expr, TOS_REG); + context()->Plug(eax); + } else if (proxy != NULL && proxy->var()->IsLookupSlot()) { + Comment cmnt(masm_, "[ Lookup slot"); + Label done, slow; + + // Generate code for loading from variables potentially shadowed + // by eval-introduced variables. + EmitDynamicLookupFastCase(proxy, INSIDE_TYPEOF, &slow, &done); + + __ bind(&slow); + __ push(esi); + __ push(Immediate(proxy->name())); + __ CallRuntime(Runtime::kLoadLookupSlotNoReferenceError, 2); + PrepareForBailout(expr, TOS_REG); + __ bind(&done); + + context()->Plug(eax); + } else { + // This expression cannot throw a reference error at the top level. + VisitInDuplicateContext(expr); + } +} + + +void FullCodeGenerator::EmitLiteralCompareTypeof(Expression* expr, + Expression* sub_expr, + Handle<String> check) { + Label materialize_true, materialize_false; + Label* if_true = NULL; + Label* if_false = NULL; + Label* fall_through = NULL; + context()->PrepareTest(&materialize_true, &materialize_false, + &if_true, &if_false, &fall_through); + + { AccumulatorValueContext context(this); + VisitForTypeofValue(sub_expr); + } + PrepareForBailoutBeforeSplit(expr, true, if_true, if_false); + + Factory* factory = isolate()->factory(); + if (String::Equals(check, factory->number_string())) { + __ JumpIfSmi(eax, if_true); + __ cmp(FieldOperand(eax, HeapObject::kMapOffset), + isolate()->factory()->heap_number_map()); + Split(equal, if_true, if_false, fall_through); + } else if (String::Equals(check, factory->string_string())) { + __ JumpIfSmi(eax, if_false); + __ CmpObjectType(eax, FIRST_NONSTRING_TYPE, edx); + __ j(above_equal, if_false); + // Check for undetectable objects => false. + __ test_b(FieldOperand(edx, Map::kBitFieldOffset), + 1 << Map::kIsUndetectable); + Split(zero, if_true, if_false, fall_through); + } else if (String::Equals(check, factory->symbol_string())) { + __ JumpIfSmi(eax, if_false); + __ CmpObjectType(eax, SYMBOL_TYPE, edx); + Split(equal, if_true, if_false, fall_through); + } else if (String::Equals(check, factory->boolean_string())) { + __ cmp(eax, isolate()->factory()->true_value()); + __ j(equal, if_true); + __ cmp(eax, isolate()->factory()->false_value()); + Split(equal, if_true, if_false, fall_through); + } else if (String::Equals(check, factory->undefined_string())) { + __ cmp(eax, isolate()->factory()->undefined_value()); + __ j(equal, if_true); + __ JumpIfSmi(eax, if_false); + // Check for undetectable objects => true. 
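+    // (Undetectable objects such as document.all deliberately report
+    // "undefined" from typeof, hence the map bit check below.)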
+ __ mov(edx, FieldOperand(eax, HeapObject::kMapOffset)); + __ movzx_b(ecx, FieldOperand(edx, Map::kBitFieldOffset)); + __ test(ecx, Immediate(1 << Map::kIsUndetectable)); + Split(not_zero, if_true, if_false, fall_through); + } else if (String::Equals(check, factory->function_string())) { + __ JumpIfSmi(eax, if_false); + STATIC_ASSERT(NUM_OF_CALLABLE_SPEC_OBJECT_TYPES == 2); + __ CmpObjectType(eax, JS_FUNCTION_TYPE, edx); + __ j(equal, if_true); + __ CmpInstanceType(edx, JS_FUNCTION_PROXY_TYPE); + Split(equal, if_true, if_false, fall_through); + } else if (String::Equals(check, factory->object_string())) { + __ JumpIfSmi(eax, if_false); + __ cmp(eax, isolate()->factory()->null_value()); + __ j(equal, if_true); + __ CmpObjectType(eax, FIRST_NONCALLABLE_SPEC_OBJECT_TYPE, edx); + __ j(below, if_false); + __ CmpInstanceType(edx, LAST_NONCALLABLE_SPEC_OBJECT_TYPE); + __ j(above, if_false); + // Check for undetectable objects => false. + __ test_b(FieldOperand(edx, Map::kBitFieldOffset), + 1 << Map::kIsUndetectable); + Split(zero, if_true, if_false, fall_through); + } else { + if (if_false != fall_through) __ jmp(if_false); + } + context()->Plug(if_true, if_false); +} + + +void FullCodeGenerator::VisitCompareOperation(CompareOperation* expr) { + Comment cmnt(masm_, "[ CompareOperation"); + SetSourcePosition(expr->position()); + + // First we try a fast inlined version of the compare when one of + // the operands is a literal. + if (TryLiteralCompare(expr)) return; + + // Always perform the comparison for its control flow. Pack the result + // into the expression's context after the comparison is performed. + Label materialize_true, materialize_false; + Label* if_true = NULL; + Label* if_false = NULL; + Label* fall_through = NULL; + context()->PrepareTest(&materialize_true, &materialize_false, + &if_true, &if_false, &fall_through); + + Token::Value op = expr->op(); + VisitForStackValue(expr->left()); + switch (op) { + case Token::IN: + VisitForStackValue(expr->right()); + __ InvokeBuiltin(Builtins::IN, CALL_FUNCTION); + PrepareForBailoutBeforeSplit(expr, false, NULL, NULL); + __ cmp(eax, isolate()->factory()->true_value()); + Split(equal, if_true, if_false, fall_through); + break; + + case Token::INSTANCEOF: { + VisitForStackValue(expr->right()); + InstanceofStub stub(isolate(), InstanceofStub::kNoFlags); + __ CallStub(&stub); + PrepareForBailoutBeforeSplit(expr, true, if_true, if_false); + __ test(eax, eax); + // The stub returns 0 for true. + Split(zero, if_true, if_false, fall_through); + break; + } + + default: { + VisitForAccumulatorValue(expr->right()); + Condition cc = CompareIC::ComputeCondition(op); + __ pop(edx); + + bool inline_smi_code = ShouldInlineSmiCase(op); + JumpPatchSite patch_site(masm_); + if (inline_smi_code) { + Label slow_case; + __ mov(ecx, edx); + __ or_(ecx, eax); + patch_site.EmitJumpIfNotSmi(ecx, &slow_case, Label::kNear); + __ cmp(edx, eax); + Split(cc, if_true, if_false, NULL); + __ bind(&slow_case); + } + + // Record position and call the compare IC. + SetSourcePosition(expr->position()); + Handle<Code> ic = CompareIC::GetUninitialized(isolate(), op); + CallIC(ic, expr->CompareOperationFeedbackId()); + patch_site.EmitPatchInfo(); + + PrepareForBailoutBeforeSplit(expr, true, if_true, if_false); + __ test(eax, eax); + Split(cc, if_true, if_false, fall_through); + } + } + + // Convert the result of the comparison into one expected for this + // expression's context. 
+ context()->Plug(if_true, if_false); +} + + +void FullCodeGenerator::EmitLiteralCompareNil(CompareOperation* expr, + Expression* sub_expr, + NilValue nil) { + Label materialize_true, materialize_false; + Label* if_true = NULL; + Label* if_false = NULL; + Label* fall_through = NULL; + context()->PrepareTest(&materialize_true, &materialize_false, + &if_true, &if_false, &fall_through); + + VisitForAccumulatorValue(sub_expr); + PrepareForBailoutBeforeSplit(expr, true, if_true, if_false); + + Handle<Object> nil_value = nil == kNullValue + ? isolate()->factory()->null_value() + : isolate()->factory()->undefined_value(); + if (expr->op() == Token::EQ_STRICT) { + __ cmp(eax, nil_value); + Split(equal, if_true, if_false, fall_through); + } else { + Handle<Code> ic = CompareNilICStub::GetUninitialized(isolate(), nil); + CallIC(ic, expr->CompareOperationFeedbackId()); + __ test(eax, eax); + Split(not_zero, if_true, if_false, fall_through); + } + context()->Plug(if_true, if_false); +} + + +void FullCodeGenerator::VisitThisFunction(ThisFunction* expr) { + __ mov(eax, Operand(ebp, JavaScriptFrameConstants::kFunctionOffset)); + context()->Plug(eax); +} + + +Register FullCodeGenerator::result_register() { + return eax; +} + + +Register FullCodeGenerator::context_register() { + return esi; +} + + +void FullCodeGenerator::StoreToFrameField(int frame_offset, Register value) { + DCHECK_EQ(POINTER_SIZE_ALIGN(frame_offset), frame_offset); + __ mov(Operand(ebp, frame_offset), value); +} + + +void FullCodeGenerator::LoadContextField(Register dst, int context_index) { + __ mov(dst, ContextOperand(esi, context_index)); +} + + +void FullCodeGenerator::PushFunctionArgumentForContextAllocation() { + Scope* declaration_scope = scope()->DeclarationScope(); + if (declaration_scope->is_global_scope() || + declaration_scope->is_module_scope()) { + // Contexts nested in the native context have a canonical empty function + // as their closure, not the anonymous closure containing the global + // code. Pass a smi sentinel and let the runtime look up the empty + // function. + __ push(Immediate(Smi::FromInt(0))); + } else if (declaration_scope->is_eval_scope()) { + // Contexts nested inside eval code have the same closure as the context + // calling eval, not the anonymous closure containing the eval code. + // Fetch it from the context. + __ push(ContextOperand(esi, Context::CLOSURE_INDEX)); + } else { + DCHECK(declaration_scope->is_function_scope()); + __ push(Operand(ebp, JavaScriptFrameConstants::kFunctionOffset)); + } +} + + +// ---------------------------------------------------------------------------- +// Non-local control flow support. + +void FullCodeGenerator::EnterFinallyBlock() { + // Cook return address on top of stack (smi encoded Code* delta) + DCHECK(!result_register().is(edx)); + __ pop(edx); + __ sub(edx, Immediate(masm_->CodeObject())); + STATIC_ASSERT(kSmiTagSize + kSmiShiftSize == 1); + STATIC_ASSERT(kSmiTag == 0); + __ SmiTag(edx); + __ push(edx); + + // Store result register while executing finally block. + __ push(result_register()); + + // Store pending message while executing finally block. 
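+  // (The three pieces of pending-message state live in per-isolate
+  // external references; ExitFinallyBlock below restores them in the
+  // reverse order.)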
+ ExternalReference pending_message_obj = + ExternalReference::address_of_pending_message_obj(isolate()); + __ mov(edx, Operand::StaticVariable(pending_message_obj)); + __ push(edx); + + ExternalReference has_pending_message = + ExternalReference::address_of_has_pending_message(isolate()); + __ mov(edx, Operand::StaticVariable(has_pending_message)); + __ SmiTag(edx); + __ push(edx); + + ExternalReference pending_message_script = + ExternalReference::address_of_pending_message_script(isolate()); + __ mov(edx, Operand::StaticVariable(pending_message_script)); + __ push(edx); +} + + +void FullCodeGenerator::ExitFinallyBlock() { + DCHECK(!result_register().is(edx)); + // Restore pending message from stack. + __ pop(edx); + ExternalReference pending_message_script = + ExternalReference::address_of_pending_message_script(isolate()); + __ mov(Operand::StaticVariable(pending_message_script), edx); + + __ pop(edx); + __ SmiUntag(edx); + ExternalReference has_pending_message = + ExternalReference::address_of_has_pending_message(isolate()); + __ mov(Operand::StaticVariable(has_pending_message), edx); + + __ pop(edx); + ExternalReference pending_message_obj = + ExternalReference::address_of_pending_message_obj(isolate()); + __ mov(Operand::StaticVariable(pending_message_obj), edx); + + // Restore result register from stack. + __ pop(result_register()); + + // Uncook return address. + __ pop(edx); + __ SmiUntag(edx); + __ add(edx, Immediate(masm_->CodeObject())); + __ jmp(edx); +} + + +#undef __ + +#define __ ACCESS_MASM(masm()) + +FullCodeGenerator::NestedStatement* FullCodeGenerator::TryFinally::Exit( + int* stack_depth, + int* context_length) { + // The macros used here must preserve the result register. + + // Because the handler block contains the context of the finally + // code, we can restore it directly from there for the finally code + // rather than iteratively unwinding contexts via their previous + // links. + __ Drop(*stack_depth); // Down to the handler block. + if (*context_length > 0) { + // Restore the context to its dedicated register and the stack. 
+    __ mov(esi, Operand(esp, StackHandlerConstants::kContextOffset));
+    __ mov(Operand(ebp, StandardFrameConstants::kContextOffset), esi);
+  }
+  __ PopTryHandler();
+  __ call(finally_entry_);
+
+  *stack_depth = 0;
+  *context_length = 0;
+  return previous_;
+}
+
+#undef __
+
+
+static const byte kJnsInstruction = 0x79;
+static const byte kJnsOffset = 0x11;
+static const byte kNopByteOne = 0x66;
+static const byte kNopByteTwo = 0x90;
+#ifdef DEBUG
+static const byte kCallInstruction = 0xe8;
+#endif
+
+
+void BackEdgeTable::PatchAt(Code* unoptimized_code,
+                            Address pc,
+                            BackEdgeState target_state,
+                            Code* replacement_code) {
+  Address call_target_address = pc - kIntSize;
+  Address jns_instr_address = call_target_address - 3;
+  Address jns_offset_address = call_target_address - 2;
+
+  switch (target_state) {
+    case INTERRUPT:
+      // sub <profiling_counter>, <delta>  ;; Not changed
+      // jns ok
+      // call <interrupt stub>
+      // ok:
+      *jns_instr_address = kJnsInstruction;
+      *jns_offset_address = kJnsOffset;
+      break;
+    case ON_STACK_REPLACEMENT:
+    case OSR_AFTER_STACK_CHECK:
+      // sub <profiling_counter>, <delta>  ;; Not changed
+      // nop
+      // nop
+      // call <on-stack replacement>
+      // ok:
+      *jns_instr_address = kNopByteOne;
+      *jns_offset_address = kNopByteTwo;
+      break;
+  }
+
+  Assembler::set_target_address_at(call_target_address,
+                                   unoptimized_code,
+                                   replacement_code->entry());
+  unoptimized_code->GetHeap()->incremental_marking()->RecordCodeTargetPatch(
+      unoptimized_code, call_target_address, replacement_code);
+}
+
+
+BackEdgeTable::BackEdgeState BackEdgeTable::GetBackEdgeState(
+    Isolate* isolate,
+    Code* unoptimized_code,
+    Address pc) {
+  Address call_target_address = pc - kIntSize;
+  Address jns_instr_address = call_target_address - 3;
+  DCHECK_EQ(kCallInstruction, *(call_target_address - 1));
+
+  if (*jns_instr_address == kJnsInstruction) {
+    DCHECK_EQ(kJnsOffset, *(call_target_address - 2));
+    DCHECK_EQ(isolate->builtins()->InterruptCheck()->entry(),
+              Assembler::target_address_at(call_target_address,
+                                           unoptimized_code));
+    return INTERRUPT;
+  }
+
+  DCHECK_EQ(kNopByteOne, *jns_instr_address);
+  DCHECK_EQ(kNopByteTwo, *(call_target_address - 2));
+
+  if (Assembler::target_address_at(call_target_address, unoptimized_code) ==
+      isolate->builtins()->OnStackReplacement()->entry()) {
+    return ON_STACK_REPLACEMENT;
+  }
+
+  DCHECK_EQ(isolate->builtins()->OsrAfterStackCheck()->entry(),
+            Assembler::target_address_at(call_target_address,
+                                         unoptimized_code));
+  return OSR_AFTER_STACK_CHECK;
+}
+
+
+} }  // namespace v8::internal
+
+#endif  // V8_TARGET_ARCH_X87
diff --git a/deps/v8/src/x87/ic-x87.cc b/deps/v8/src/x87/ic-x87.cc
new file mode 100644
index 00000000000..4f7d8101335
--- /dev/null
+++ b/deps/v8/src/x87/ic-x87.cc
@@ -0,0 +1,1211 @@
+// Copyright 2012 the V8 project authors. All rights reserved.
+// Use of this source code is governed by a BSD-style license that can be
+// found in the LICENSE file.
+
+#include "src/v8.h"
+
+#if V8_TARGET_ARCH_X87
+
+#include "src/codegen.h"
+#include "src/ic-inl.h"
+#include "src/runtime.h"
+#include "src/stub-cache.h"
+
+namespace v8 {
+namespace internal {
+
+// ----------------------------------------------------------------------------
+// Static IC stub generators.
+//
+
+#define __ ACCESS_MASM(masm)
+
+
+static void GenerateGlobalInstanceTypeCheck(MacroAssembler* masm,
+                                            Register type,
+                                            Label* global_object) {
+  // Register usage:
+  //   type: holds the receiver instance type on entry.
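+  // Any of the three global object instance types diverts to the
+  // global_object label; callers of this check treat such receivers
+  // specially.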
+  __ cmp(type, JS_GLOBAL_OBJECT_TYPE);
+  __ j(equal, global_object);
+  __ cmp(type, JS_BUILTINS_OBJECT_TYPE);
+  __ j(equal, global_object);
+  __ cmp(type, JS_GLOBAL_PROXY_TYPE);
+  __ j(equal, global_object);
+}
+
+
+// Helper function used to load a property from a dictionary backing
+// storage. This function may fail to load a property even though it is
+// in the dictionary, so code at miss_label must always call a backup
+// property load that is complete. This function is safe to call if
+// name is not internalized, and will jump to the miss_label in that
+// case. The generated code assumes that the receiver has slow
+// properties, is not a global object and does not have interceptors.
+static void GenerateDictionaryLoad(MacroAssembler* masm,
+                                   Label* miss_label,
+                                   Register elements,
+                                   Register name,
+                                   Register r0,
+                                   Register r1,
+                                   Register result) {
+  // Register use:
+  //
+  // elements - holds the property dictionary on entry and is unchanged.
+  //
+  // name - holds the name of the property on entry and is unchanged.
+  //
+  // Scratch registers:
+  //
+  // r0 - used for the index into the property dictionary
+  //
+  // r1 - used to hold the capacity of the property dictionary.
+  //
+  // result - holds the result on exit.
+
+  Label done;
+
+  // Probe the dictionary.
+  NameDictionaryLookupStub::GeneratePositiveLookup(masm,
+                                                   miss_label,
+                                                   &done,
+                                                   elements,
+                                                   name,
+                                                   r0,
+                                                   r1);
+
+  // If probing finds an entry in the dictionary, r0 contains the
+  // index into the dictionary. Check that the value is a normal
+  // property.
+  __ bind(&done);
+  const int kElementsStartOffset =
+      NameDictionary::kHeaderSize +
+      NameDictionary::kElementsStartIndex * kPointerSize;
+  const int kDetailsOffset = kElementsStartOffset + 2 * kPointerSize;
+  __ test(Operand(elements, r0, times_4, kDetailsOffset - kHeapObjectTag),
+          Immediate(PropertyDetails::TypeField::kMask << kSmiTagSize));
+  __ j(not_zero, miss_label);
+
+  // Get the value at the masked, scaled index.
+  const int kValueOffset = kElementsStartOffset + kPointerSize;
+  __ mov(result, Operand(elements, r0, times_4, kValueOffset - kHeapObjectTag));
+}
+
+
+// Helper function used to store a property to a dictionary backing
+// storage. This function may fail to store a property even though it
+// is in the dictionary, so code at miss_label must always call a
+// backup property store that is complete. This function is safe to
+// call if name is not internalized, and will jump to the miss_label in
+// that case. The generated code assumes that the receiver has slow
+// properties, is not a global object and does not have interceptors.
+static void GenerateDictionaryStore(MacroAssembler* masm,
+                                    Label* miss_label,
+                                    Register elements,
+                                    Register name,
+                                    Register value,
+                                    Register r0,
+                                    Register r1) {
+  // Register use:
+  //
+  // elements - holds the property dictionary on entry and is clobbered.
+  //
+  // name - holds the name of the property on entry and is unchanged.
+  //
+  // value - holds the value to store and is unchanged.
+  //
+  // r0 - used for index into the property dictionary and is clobbered.
+  //
+  // r1 - used to hold the capacity of the property dictionary and is clobbered.
+  Label done;
+
+  // Probe the dictionary.
+  NameDictionaryLookupStub::GeneratePositiveLookup(masm,
+                                                   miss_label,
+                                                   &done,
+                                                   elements,
+                                                   name,
+                                                   r0,
+                                                   r1);
+
+  // If probing finds an entry in the dictionary, r0 contains the
+  // index into the dictionary. Check that the value is a normal
+  // property that is not read only.
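+  // (PropertyDetails packs the property type and attribute bits into a
+  // smi; a nonzero result for the combined type/READ_ONLY mask below
+  // means the entry cannot simply be overwritten in place.)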
+ __ bind(&done); + const int kElementsStartOffset = + NameDictionary::kHeaderSize + + NameDictionary::kElementsStartIndex * kPointerSize; + const int kDetailsOffset = kElementsStartOffset + 2 * kPointerSize; + const int kTypeAndReadOnlyMask = + (PropertyDetails::TypeField::kMask | + PropertyDetails::AttributesField::encode(READ_ONLY)) << kSmiTagSize; + __ test(Operand(elements, r0, times_4, kDetailsOffset - kHeapObjectTag), + Immediate(kTypeAndReadOnlyMask)); + __ j(not_zero, miss_label); + + // Store the value at the masked, scaled index. + const int kValueOffset = kElementsStartOffset + kPointerSize; + __ lea(r0, Operand(elements, r0, times_4, kValueOffset - kHeapObjectTag)); + __ mov(Operand(r0, 0), value); + + // Update write barrier. Make sure not to clobber the value. + __ mov(r1, value); + __ RecordWrite(elements, r0, r1); +} + + +// Checks the receiver for special cases (value type, slow case bits). +// Falls through for regular JS object. +static void GenerateKeyedLoadReceiverCheck(MacroAssembler* masm, + Register receiver, + Register map, + int interceptor_bit, + Label* slow) { + // Register use: + // receiver - holds the receiver and is unchanged. + // Scratch registers: + // map - used to hold the map of the receiver. + + // Check that the object isn't a smi. + __ JumpIfSmi(receiver, slow); + + // Get the map of the receiver. + __ mov(map, FieldOperand(receiver, HeapObject::kMapOffset)); + + // Check bit field. + __ test_b(FieldOperand(map, Map::kBitFieldOffset), + (1 << Map::kIsAccessCheckNeeded) | (1 << interceptor_bit)); + __ j(not_zero, slow); + // Check that the object is some kind of JS object EXCEPT JS Value type. + // In the case that the object is a value-wrapper object, + // we enter the runtime system to make sure that indexing + // into string objects works as intended. + DCHECK(JS_OBJECT_TYPE > JS_VALUE_TYPE); + + __ CmpInstanceType(map, JS_OBJECT_TYPE); + __ j(below, slow); +} + + +// Loads an indexed element from a fast case array. +// If not_fast_array is NULL, doesn't perform the elements map check. +static void GenerateFastArrayLoad(MacroAssembler* masm, + Register receiver, + Register key, + Register scratch, + Register result, + Label* not_fast_array, + Label* out_of_range) { + // Register use: + // receiver - holds the receiver and is unchanged. + // key - holds the key and is unchanged (must be a smi). + // Scratch registers: + // scratch - used to hold elements of the receiver and the loaded value. + // result - holds the result on exit if the load succeeds and + // we fall through. + + __ mov(scratch, FieldOperand(receiver, JSObject::kElementsOffset)); + if (not_fast_array != NULL) { + // Check that the object is in fast mode and writable. + __ CheckMap(scratch, + masm->isolate()->factory()->fixed_array_map(), + not_fast_array, + DONT_DO_SMI_CHECK); + } else { + __ AssertFastElements(scratch); + } + // Check that the key (index) is within bounds. + __ cmp(key, FieldOperand(scratch, FixedArray::kLengthOffset)); + __ j(above_equal, out_of_range); + // Fast case: Do the load. + STATIC_ASSERT((kPointerSize == 4) && (kSmiTagSize == 1) && (kSmiTag == 0)); + __ mov(scratch, FieldOperand(scratch, key, times_2, FixedArray::kHeaderSize)); + __ cmp(scratch, Immediate(masm->isolate()->factory()->the_hole_value())); + // In case the loaded value is the_hole we have to consult GetProperty + // to ensure the prototype chain is searched. 
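+  // (A hole marks a deleted or never-initialized element; the visible
+  // value may come from a prototype, so the fast path cannot answer.)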
+ __ j(equal, out_of_range); + if (!result.is(scratch)) { + __ mov(result, scratch); + } +} + + +// Checks whether a key is an array index string or a unique name. +// Falls through if the key is a unique name. +static void GenerateKeyNameCheck(MacroAssembler* masm, + Register key, + Register map, + Register hash, + Label* index_string, + Label* not_unique) { + // Register use: + // key - holds the key and is unchanged. Assumed to be non-smi. + // Scratch registers: + // map - used to hold the map of the key. + // hash - used to hold the hash of the key. + Label unique; + __ CmpObjectType(key, LAST_UNIQUE_NAME_TYPE, map); + __ j(above, not_unique); + STATIC_ASSERT(LAST_UNIQUE_NAME_TYPE == FIRST_NONSTRING_TYPE); + __ j(equal, &unique); + + // Is the string an array index, with cached numeric value? + __ mov(hash, FieldOperand(key, Name::kHashFieldOffset)); + __ test(hash, Immediate(Name::kContainsCachedArrayIndexMask)); + __ j(zero, index_string); + + // Is the string internalized? We already know it's a string so a single + // bit test is enough. + STATIC_ASSERT(kNotInternalizedTag != 0); + __ test_b(FieldOperand(map, Map::kInstanceTypeOffset), + kIsNotInternalizedMask); + __ j(not_zero, not_unique); + + __ bind(&unique); +} + + +static Operand GenerateMappedArgumentsLookup(MacroAssembler* masm, + Register object, + Register key, + Register scratch1, + Register scratch2, + Label* unmapped_case, + Label* slow_case) { + Heap* heap = masm->isolate()->heap(); + Factory* factory = masm->isolate()->factory(); + + // Check that the receiver is a JSObject. Because of the elements + // map check later, we do not need to check for interceptors or + // whether it requires access checks. + __ JumpIfSmi(object, slow_case); + // Check that the object is some kind of JSObject. + __ CmpObjectType(object, FIRST_JS_RECEIVER_TYPE, scratch1); + __ j(below, slow_case); + + // Check that the key is a positive smi. + __ test(key, Immediate(0x80000001)); + __ j(not_zero, slow_case); + + // Load the elements into scratch1 and check its map. + Handle<Map> arguments_map(heap->sloppy_arguments_elements_map()); + __ mov(scratch1, FieldOperand(object, JSObject::kElementsOffset)); + __ CheckMap(scratch1, arguments_map, slow_case, DONT_DO_SMI_CHECK); + + // Check if element is in the range of mapped arguments. If not, jump + // to the unmapped lookup with the parameter map in scratch1. + __ mov(scratch2, FieldOperand(scratch1, FixedArray::kLengthOffset)); + __ sub(scratch2, Immediate(Smi::FromInt(2))); + __ cmp(key, scratch2); + __ j(above_equal, unmapped_case); + + // Load element index and check whether it is the hole. + const int kHeaderSize = FixedArray::kHeaderSize + 2 * kPointerSize; + __ mov(scratch2, FieldOperand(scratch1, + key, + times_half_pointer_size, + kHeaderSize)); + __ cmp(scratch2, factory->the_hole_value()); + __ j(equal, unmapped_case); + + // Load value from context and return it. We can reuse scratch1 because + // we do not jump to the unmapped lookup (which requires the parameter + // map in scratch1). + const int kContextOffset = FixedArray::kHeaderSize; + __ mov(scratch1, FieldOperand(scratch1, kContextOffset)); + return FieldOperand(scratch1, + scratch2, + times_half_pointer_size, + Context::kHeaderSize); +} + + +static Operand GenerateUnmappedArgumentsLookup(MacroAssembler* masm, + Register key, + Register parameter_map, + Register scratch, + Label* slow_case) { + // Element is in arguments backing store, which is referenced by the + // second element of the parameter_map. 
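+  // (Slot 0 of the parameter map holds the context and slot 1 the backing
+  // store; mapped entries start at slot 2, which is why the mapped lookup
+  // subtracts 2 from the map's length.)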
+ const int kBackingStoreOffset = FixedArray::kHeaderSize + kPointerSize; + Register backing_store = parameter_map; + __ mov(backing_store, FieldOperand(parameter_map, kBackingStoreOffset)); + Handle<Map> fixed_array_map(masm->isolate()->heap()->fixed_array_map()); + __ CheckMap(backing_store, fixed_array_map, slow_case, DONT_DO_SMI_CHECK); + __ mov(scratch, FieldOperand(backing_store, FixedArray::kLengthOffset)); + __ cmp(key, scratch); + __ j(greater_equal, slow_case); + return FieldOperand(backing_store, + key, + times_half_pointer_size, + FixedArray::kHeaderSize); +} + + +void KeyedLoadIC::GenerateGeneric(MacroAssembler* masm) { + // The return address is on the stack. + Label slow, check_name, index_smi, index_name, property_array_property; + Label probe_dictionary, check_number_dictionary; + + Register receiver = ReceiverRegister(); + Register key = NameRegister(); + DCHECK(receiver.is(edx)); + DCHECK(key.is(ecx)); + + // Check that the key is a smi. + __ JumpIfNotSmi(key, &check_name); + __ bind(&index_smi); + // Now the key is known to be a smi. This place is also jumped to from + // where a numeric string is converted to a smi. + + GenerateKeyedLoadReceiverCheck( + masm, receiver, eax, Map::kHasIndexedInterceptor, &slow); + + // Check the receiver's map to see if it has fast elements. + __ CheckFastElements(eax, &check_number_dictionary); + + GenerateFastArrayLoad(masm, receiver, key, eax, eax, NULL, &slow); + Isolate* isolate = masm->isolate(); + Counters* counters = isolate->counters(); + __ IncrementCounter(counters->keyed_load_generic_smi(), 1); + __ ret(0); + + __ bind(&check_number_dictionary); + __ mov(ebx, key); + __ SmiUntag(ebx); + __ mov(eax, FieldOperand(receiver, JSObject::kElementsOffset)); + + // Check whether the elements is a number dictionary. + // ebx: untagged index + // eax: elements + __ CheckMap(eax, + isolate->factory()->hash_table_map(), + &slow, + DONT_DO_SMI_CHECK); + Label slow_pop_receiver; + // Push receiver on the stack to free up a register for the dictionary + // probing. + __ push(receiver); + __ LoadFromNumberDictionary(&slow_pop_receiver, eax, key, ebx, edx, edi, eax); + // Pop receiver before returning. + __ pop(receiver); + __ ret(0); + + __ bind(&slow_pop_receiver); + // Pop the receiver from the stack and jump to runtime. + __ pop(receiver); + + __ bind(&slow); + // Slow case: jump to runtime. + __ IncrementCounter(counters->keyed_load_generic_slow(), 1); + GenerateRuntimeGetProperty(masm); + + __ bind(&check_name); + GenerateKeyNameCheck(masm, key, eax, ebx, &index_name, &slow); + + GenerateKeyedLoadReceiverCheck( + masm, receiver, eax, Map::kHasNamedInterceptor, &slow); + + // If the receiver is a fast-case object, check the keyed lookup + // cache. Otherwise probe the dictionary. + __ mov(ebx, FieldOperand(receiver, JSObject::kPropertiesOffset)); + __ cmp(FieldOperand(ebx, HeapObject::kMapOffset), + Immediate(isolate->factory()->hash_table_map())); + __ j(equal, &probe_dictionary); + + // The receiver's map is still in eax, compute the keyed lookup cache hash + // based on 32 bits of the map pointer and the string hash. + if (FLAG_debug_code) { + __ cmp(eax, FieldOperand(receiver, HeapObject::kMapOffset)); + __ Check(equal, kMapIsNoLongerInEax); + } + __ mov(ebx, eax); // Keep the map around for later. 
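+  // The lookup cache index is a hash of the map pointer and the name's
+  // hash field, masked to the cache capacity; each bucket holds
+  // kEntriesPerBucket (map, name) pairs that are probed in order below.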
+ __ shr(eax, KeyedLookupCache::kMapHashShift); + __ mov(edi, FieldOperand(key, String::kHashFieldOffset)); + __ shr(edi, String::kHashShift); + __ xor_(eax, edi); + __ and_(eax, KeyedLookupCache::kCapacityMask & KeyedLookupCache::kHashMask); + + // Load the key (consisting of map and internalized string) from the cache and + // check for match. + Label load_in_object_property; + static const int kEntriesPerBucket = KeyedLookupCache::kEntriesPerBucket; + Label hit_on_nth_entry[kEntriesPerBucket]; + ExternalReference cache_keys = + ExternalReference::keyed_lookup_cache_keys(masm->isolate()); + + for (int i = 0; i < kEntriesPerBucket - 1; i++) { + Label try_next_entry; + __ mov(edi, eax); + __ shl(edi, kPointerSizeLog2 + 1); + if (i != 0) { + __ add(edi, Immediate(kPointerSize * i * 2)); + } + __ cmp(ebx, Operand::StaticArray(edi, times_1, cache_keys)); + __ j(not_equal, &try_next_entry); + __ add(edi, Immediate(kPointerSize)); + __ cmp(key, Operand::StaticArray(edi, times_1, cache_keys)); + __ j(equal, &hit_on_nth_entry[i]); + __ bind(&try_next_entry); + } + + __ lea(edi, Operand(eax, 1)); + __ shl(edi, kPointerSizeLog2 + 1); + __ add(edi, Immediate(kPointerSize * (kEntriesPerBucket - 1) * 2)); + __ cmp(ebx, Operand::StaticArray(edi, times_1, cache_keys)); + __ j(not_equal, &slow); + __ add(edi, Immediate(kPointerSize)); + __ cmp(key, Operand::StaticArray(edi, times_1, cache_keys)); + __ j(not_equal, &slow); + + // Get field offset. + // ebx : receiver's map + // eax : lookup cache index + ExternalReference cache_field_offsets = + ExternalReference::keyed_lookup_cache_field_offsets(masm->isolate()); + + // Hit on nth entry. + for (int i = kEntriesPerBucket - 1; i >= 0; i--) { + __ bind(&hit_on_nth_entry[i]); + if (i != 0) { + __ add(eax, Immediate(i)); + } + __ mov(edi, + Operand::StaticArray(eax, times_pointer_size, cache_field_offsets)); + __ movzx_b(eax, FieldOperand(ebx, Map::kInObjectPropertiesOffset)); + __ sub(edi, eax); + __ j(above_equal, &property_array_property); + if (i != 0) { + __ jmp(&load_in_object_property); + } + } + + // Load in-object property. + __ bind(&load_in_object_property); + __ movzx_b(eax, FieldOperand(ebx, Map::kInstanceSizeOffset)); + __ add(eax, edi); + __ mov(eax, FieldOperand(receiver, eax, times_pointer_size, 0)); + __ IncrementCounter(counters->keyed_load_generic_lookup_cache(), 1); + __ ret(0); + + // Load property array property. + __ bind(&property_array_property); + __ mov(eax, FieldOperand(receiver, JSObject::kPropertiesOffset)); + __ mov(eax, FieldOperand(eax, edi, times_pointer_size, + FixedArray::kHeaderSize)); + __ IncrementCounter(counters->keyed_load_generic_lookup_cache(), 1); + __ ret(0); + + // Do a quick inline probe of the receiver's dictionary, if it + // exists. + __ bind(&probe_dictionary); + + __ mov(eax, FieldOperand(receiver, JSObject::kMapOffset)); + __ movzx_b(eax, FieldOperand(eax, Map::kInstanceTypeOffset)); + GenerateGlobalInstanceTypeCheck(masm, eax, &slow); + + GenerateDictionaryLoad(masm, &slow, ebx, key, eax, edi, eax); + __ IncrementCounter(counters->keyed_load_generic_symbol(), 1); + __ ret(0); + + __ bind(&index_name); + __ IndexFromHash(ebx, key); + // Now jump to the place where smi keys are handled. + __ jmp(&index_smi); +} + + +void KeyedLoadIC::GenerateString(MacroAssembler* masm) { + // Return address is on the stack. 
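+  // String receivers with smi array-index keys are answered directly by
+  // StringCharAtGenerator; every other case falls through to the miss
+  // handler below.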
+ Label miss; + + Register receiver = ReceiverRegister(); + Register index = NameRegister(); + Register scratch = ebx; + DCHECK(!scratch.is(receiver) && !scratch.is(index)); + Register result = eax; + DCHECK(!result.is(scratch)); + + StringCharAtGenerator char_at_generator(receiver, + index, + scratch, + result, + &miss, // When not a string. + &miss, // When not a number. + &miss, // When index out of range. + STRING_INDEX_IS_ARRAY_INDEX); + char_at_generator.GenerateFast(masm); + __ ret(0); + + StubRuntimeCallHelper call_helper; + char_at_generator.GenerateSlow(masm, call_helper); + + __ bind(&miss); + GenerateMiss(masm); +} + + +void KeyedLoadIC::GenerateIndexedInterceptor(MacroAssembler* masm) { + // Return address is on the stack. + Label slow; + + Register receiver = ReceiverRegister(); + Register key = NameRegister(); + Register scratch = eax; + DCHECK(!scratch.is(receiver) && !scratch.is(key)); + + // Check that the receiver isn't a smi. + __ JumpIfSmi(receiver, &slow); + + // Check that the key is an array index, that is Uint32. + __ test(key, Immediate(kSmiTagMask | kSmiSignMask)); + __ j(not_zero, &slow); + + // Get the map of the receiver. + __ mov(scratch, FieldOperand(receiver, HeapObject::kMapOffset)); + + // Check that it has indexed interceptor and access checks + // are not enabled for this object. + __ movzx_b(scratch, FieldOperand(scratch, Map::kBitFieldOffset)); + __ and_(scratch, Immediate(kSlowCaseBitFieldMask)); + __ cmp(scratch, Immediate(1 << Map::kHasIndexedInterceptor)); + __ j(not_zero, &slow); + + // Everything is fine, call runtime. + __ pop(scratch); + __ push(receiver); // receiver + __ push(key); // key + __ push(scratch); // return address + + // Perform tail call to the entry. + ExternalReference ref = ExternalReference( + IC_Utility(kLoadElementWithInterceptor), masm->isolate()); + __ TailCallExternalReference(ref, 2, 1); + + __ bind(&slow); + GenerateMiss(masm); +} + + +void KeyedLoadIC::GenerateSloppyArguments(MacroAssembler* masm) { + // The return address is on the stack. + Register receiver = ReceiverRegister(); + Register key = NameRegister(); + DCHECK(receiver.is(edx)); + DCHECK(key.is(ecx)); + + Label slow, notin; + Factory* factory = masm->isolate()->factory(); + Operand mapped_location = + GenerateMappedArgumentsLookup( + masm, receiver, key, ebx, eax, ¬in, &slow); + __ mov(eax, mapped_location); + __ Ret(); + __ bind(¬in); + // The unmapped lookup expects that the parameter map is in ebx. + Operand unmapped_location = + GenerateUnmappedArgumentsLookup(masm, key, ebx, eax, &slow); + __ cmp(unmapped_location, factory->the_hole_value()); + __ j(equal, &slow); + __ mov(eax, unmapped_location); + __ Ret(); + __ bind(&slow); + GenerateMiss(masm); +} + + +void KeyedStoreIC::GenerateSloppyArguments(MacroAssembler* masm) { + // Return address is on the stack. + Label slow, notin; + Register receiver = ReceiverRegister(); + Register name = NameRegister(); + Register value = ValueRegister(); + DCHECK(receiver.is(edx)); + DCHECK(name.is(ecx)); + DCHECK(value.is(eax)); + + Operand mapped_location = + GenerateMappedArgumentsLookup(masm, receiver, name, ebx, edi, ¬in, + &slow); + __ mov(mapped_location, value); + __ lea(ecx, mapped_location); + __ mov(edx, value); + __ RecordWrite(ebx, ecx, edx); + __ Ret(); + __ bind(¬in); + // The unmapped lookup expects that the parameter map is in ebx. 
+  Operand unmapped_location =
+      GenerateUnmappedArgumentsLookup(masm, name, ebx, edi, &slow);
+  __ mov(unmapped_location, value);
+  __ lea(edi, unmapped_location);
+  __ mov(edx, value);
+  __ RecordWrite(ebx, edi, edx);
+  __ Ret();
+  __ bind(&slow);
+  GenerateMiss(masm);
+}
+
+
+static void KeyedStoreGenerateGenericHelper(
+    MacroAssembler* masm,
+    Label* fast_object,
+    Label* fast_double,
+    Label* slow,
+    KeyedStoreCheckMap check_map,
+    KeyedStoreIncrementLength increment_length) {
+  Label transition_smi_elements;
+  Label finish_object_store, non_double_value, transition_double_elements;
+  Label fast_double_without_map_check;
+  Register receiver = KeyedStoreIC::ReceiverRegister();
+  Register key = KeyedStoreIC::NameRegister();
+  Register value = KeyedStoreIC::ValueRegister();
+  DCHECK(receiver.is(edx));
+  DCHECK(key.is(ecx));
+  DCHECK(value.is(eax));
+  // key is a smi.
+  // ebx: FixedArray receiver->elements
+  // edi: receiver map
+  // Fast case: Do the store, could be either Object or double.
+  __ bind(fast_object);
+  if (check_map == kCheckMap) {
+    __ mov(edi, FieldOperand(ebx, HeapObject::kMapOffset));
+    __ cmp(edi, masm->isolate()->factory()->fixed_array_map());
+    __ j(not_equal, fast_double);
+  }
+
+  // HOLECHECK: guards "A[i] = V"
+  // We have to go to the runtime if the current value is the hole because
+  // there may be a callback on the element.
+  Label holecheck_passed1;
+  __ cmp(FixedArrayElementOperand(ebx, key),
+         masm->isolate()->factory()->the_hole_value());
+  __ j(not_equal, &holecheck_passed1);
+  __ JumpIfDictionaryInPrototypeChain(receiver, ebx, edi, slow);
+  __ mov(ebx, FieldOperand(receiver, JSObject::kElementsOffset));
+
+  __ bind(&holecheck_passed1);
+
+  // Smi stores don't require further checks.
+  Label non_smi_value;
+  __ JumpIfNotSmi(value, &non_smi_value);
+  if (increment_length == kIncrementLength) {
+    // Add 1 to receiver->length.
+    __ add(FieldOperand(receiver, JSArray::kLengthOffset),
+           Immediate(Smi::FromInt(1)));
+  }
+  // It's irrelevant whether array is smi-only or not when writing a smi.
+  __ mov(FixedArrayElementOperand(ebx, key), value);
+  __ ret(0);
+
+  __ bind(&non_smi_value);
+  // Escape to elements kind transition case.
+  __ mov(edi, FieldOperand(receiver, HeapObject::kMapOffset));
+  __ CheckFastObjectElements(edi, &transition_smi_elements);
+
+  // Fast elements array, store the value to the elements backing store.
+  __ bind(&finish_object_store);
+  if (increment_length == kIncrementLength) {
+    // Add 1 to receiver->length.
+    __ add(FieldOperand(receiver, JSArray::kLengthOffset),
+           Immediate(Smi::FromInt(1)));
+  }
+  __ mov(FixedArrayElementOperand(ebx, key), value);
+  // Update write barrier for the elements array address.
+  __ mov(edx, value);  // Preserve the value which is returned.
+  __ RecordWriteArray(
+      ebx, edx, key, EMIT_REMEMBERED_SET, OMIT_SMI_CHECK);
+  __ ret(0);
+
+  __ bind(fast_double);
+  if (check_map == kCheckMap) {
+    // Check for fast double array case. If this fails, call through to the
+    // runtime.
+    __ cmp(edi, masm->isolate()->factory()->fixed_double_array_map());
+    __ j(not_equal, slow);
+    // If the value is a number, store it as a double in the FastDoubleElements
+    // array.
+  }
+
+  // HOLECHECK: guards "A[i] double hole?"
+  // We have to see if the double version of the hole is present. If so,
+  // go to the runtime.
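+  // (The double hole is encoded as a NaN with a distinguished upper word,
+  // so comparing the upper 32 bits against kHoleNanUpper32 suffices.)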
+  uint32_t offset = FixedDoubleArray::kHeaderSize + sizeof(kHoleNanLower32);
+  __ cmp(FieldOperand(ebx, key, times_4, offset), Immediate(kHoleNanUpper32));
+  __ j(not_equal, &fast_double_without_map_check);
+  __ JumpIfDictionaryInPrototypeChain(receiver, ebx, edi, slow);
+  __ mov(ebx, FieldOperand(receiver, JSObject::kElementsOffset));
+
+  __ bind(&fast_double_without_map_check);
+  __ StoreNumberToDoubleElements(value, ebx, key, edi,
+                                 &transition_double_elements, false);
+  if (increment_length == kIncrementLength) {
+    // Add 1 to receiver->length.
+    __ add(FieldOperand(receiver, JSArray::kLengthOffset),
+           Immediate(Smi::FromInt(1)));
+  }
+  __ ret(0);
+
+  __ bind(&transition_smi_elements);
+  __ mov(ebx, FieldOperand(receiver, HeapObject::kMapOffset));
+
+  // Transition the array appropriately depending on the value type.
+  __ CheckMap(value,
+              masm->isolate()->factory()->heap_number_map(),
+              &non_double_value,
+              DONT_DO_SMI_CHECK);
+
+  // Value is a double. Transition FAST_SMI_ELEMENTS -> FAST_DOUBLE_ELEMENTS
+  // and complete the store.
+  __ LoadTransitionedArrayMapConditional(FAST_SMI_ELEMENTS,
+                                         FAST_DOUBLE_ELEMENTS,
+                                         ebx,
+                                         edi,
+                                         slow);
+  AllocationSiteMode mode = AllocationSite::GetMode(FAST_SMI_ELEMENTS,
+                                                    FAST_DOUBLE_ELEMENTS);
+  ElementsTransitionGenerator::GenerateSmiToDouble(
+      masm, receiver, key, value, ebx, mode, slow);
+  __ mov(ebx, FieldOperand(receiver, JSObject::kElementsOffset));
+  __ jmp(&fast_double_without_map_check);
+
+  __ bind(&non_double_value);
+  // Value is not a double, FAST_SMI_ELEMENTS -> FAST_ELEMENTS
+  __ LoadTransitionedArrayMapConditional(FAST_SMI_ELEMENTS,
+                                         FAST_ELEMENTS,
+                                         ebx,
+                                         edi,
+                                         slow);
+  mode = AllocationSite::GetMode(FAST_SMI_ELEMENTS, FAST_ELEMENTS);
+  ElementsTransitionGenerator::GenerateMapChangeElementsTransition(
+      masm, receiver, key, value, ebx, mode, slow);
+  __ mov(ebx, FieldOperand(receiver, JSObject::kElementsOffset));
+  __ jmp(&finish_object_store);
+
+  __ bind(&transition_double_elements);
+  // Elements are FAST_DOUBLE_ELEMENTS, but value is an Object that's not a
+  // HeapNumber. Make sure that the receiver is an Array with FAST_ELEMENTS
+  // and transition the array from FAST_DOUBLE_ELEMENTS to FAST_ELEMENTS.
+  __ mov(ebx, FieldOperand(receiver, HeapObject::kMapOffset));
+  __ LoadTransitionedArrayMapConditional(FAST_DOUBLE_ELEMENTS,
+                                         FAST_ELEMENTS,
+                                         ebx,
+                                         edi,
+                                         slow);
+  mode = AllocationSite::GetMode(FAST_DOUBLE_ELEMENTS, FAST_ELEMENTS);
+  ElementsTransitionGenerator::GenerateDoubleToObject(
+      masm, receiver, key, value, ebx, mode, slow);
+  __ mov(ebx, FieldOperand(receiver, JSObject::kElementsOffset));
+  __ jmp(&finish_object_store);
+}
+
+
+void KeyedStoreIC::GenerateGeneric(MacroAssembler* masm,
+                                   StrictMode strict_mode) {
+  // Return address is on the stack.
+  Label slow, fast_object, fast_object_grow;
+  Label fast_double, fast_double_grow;
+  Label array, extra, check_if_double_array;
+  Register receiver = ReceiverRegister();
+  Register key = NameRegister();
+  DCHECK(receiver.is(edx));
+  DCHECK(key.is(ecx));
+
+  // Check that the object isn't a smi.
+  __ JumpIfSmi(receiver, &slow);
+  // Get the map from the receiver.
+  __ mov(edi, FieldOperand(receiver, HeapObject::kMapOffset));
+  // Check that the receiver does not require access checks and is not observed.
+  // The generic stub does not perform map checks or handle observed objects.
+  __ test_b(FieldOperand(edi, Map::kBitFieldOffset),
+            1 << Map::kIsAccessCheckNeeded | 1 << Map::kIsObserved);
+  __ j(not_zero, &slow);
+  // Check that the key is a smi.
+  __ JumpIfNotSmi(key, &slow);
+  __ CmpInstanceType(edi, JS_ARRAY_TYPE);
+  __ j(equal, &array);
+  // Check that the object is some kind of JSObject.
+  __ CmpInstanceType(edi, FIRST_JS_OBJECT_TYPE);
+  __ j(below, &slow);
+
+  // Object case: Check key against length in the elements array.
+  // Key is a smi.
+  // edi: receiver map
+  __ mov(ebx, FieldOperand(receiver, JSObject::kElementsOffset));
+  // Check array bounds. Both the key and the length of FixedArray are smis.
+  __ cmp(key, FieldOperand(ebx, FixedArray::kLengthOffset));
+  __ j(below, &fast_object);
+
+  // Slow case: call runtime.
+  __ bind(&slow);
+  GenerateRuntimeSetProperty(masm, strict_mode);
+
+  // Extra capacity case: Check if there is extra capacity to
+  // perform the store and update the length. Used for adding one
+  // element to the array by writing to array[array.length].
+  __ bind(&extra);
+  // receiver is a JSArray.
+  // key is a smi.
+  // ebx: receiver->elements, a FixedArray
+  // edi: receiver map
+  // flags: compare (key, receiver.length())
+  // do not leave holes in the array:
+  __ j(not_equal, &slow);
+  __ cmp(key, FieldOperand(ebx, FixedArray::kLengthOffset));
+  __ j(above_equal, &slow);
+  __ mov(edi, FieldOperand(ebx, HeapObject::kMapOffset));
+  __ cmp(edi, masm->isolate()->factory()->fixed_array_map());
+  __ j(not_equal, &check_if_double_array);
+  __ jmp(&fast_object_grow);
+
+  __ bind(&check_if_double_array);
+  __ cmp(edi, masm->isolate()->factory()->fixed_double_array_map());
+  __ j(not_equal, &slow);
+  __ jmp(&fast_double_grow);
+
+  // Array case: Get the length and the elements array from the JS
+  // array. Check that the array is in fast mode (and writable); if it
+  // is, the length is always a smi.
+  __ bind(&array);
+  // receiver is a JSArray.
+  // key is a smi.
+  // edi: receiver map
+  __ mov(ebx, FieldOperand(receiver, JSObject::kElementsOffset));
+
+  // Check the key against the length in the array and fall through to the
+  // common store code.
+  __ cmp(key, FieldOperand(receiver, JSArray::kLengthOffset));  // Compare smis.
+  __ j(above_equal, &extra);
+
+  KeyedStoreGenerateGenericHelper(masm, &fast_object, &fast_double,
+                                  &slow, kCheckMap, kDontIncrementLength);
+  KeyedStoreGenerateGenericHelper(masm, &fast_object_grow, &fast_double_grow,
+                                  &slow, kDontCheckMap, kIncrementLength);
+}
+
+
+void LoadIC::GenerateMegamorphic(MacroAssembler* masm) {
+  // The return address is on the stack.
+  Register receiver = ReceiverRegister();
+  Register name = NameRegister();
+  DCHECK(receiver.is(edx));
+  DCHECK(name.is(ecx));
+
+  // Probe the stub cache.
+  Code::Flags flags = Code::RemoveTypeAndHolderFromFlags(
+      Code::ComputeHandlerFlags(Code::LOAD_IC));
+  masm->isolate()->stub_cache()->GenerateProbe(
+      masm, flags, receiver, name, ebx, eax);
+
+  // Cache miss: Jump to runtime.
+  GenerateMiss(masm);
+}
+
+
+void LoadIC::GenerateNormal(MacroAssembler* masm) {
+  Register dictionary = eax;
+  DCHECK(!dictionary.is(ReceiverRegister()));
+  DCHECK(!dictionary.is(NameRegister()));
+
+  Label slow;
+
+  __ mov(dictionary,
+         FieldOperand(ReceiverRegister(), JSObject::kPropertiesOffset));
+  GenerateDictionaryLoad(masm, &slow, dictionary, NameRegister(), edi, ebx,
+                         eax);
+  __ ret(0);
+
+  // Dictionary load failed, go slow (but don't miss).
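+  // (Taking the runtime path instead of a miss presumably avoids
+  // repeatedly re-patching the IC for receivers that legitimately stay
+  // in dictionary mode.)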
+ __ bind(&slow); + GenerateRuntimeGetProperty(masm); +} + + +static void LoadIC_PushArgs(MacroAssembler* masm) { + Register receiver = LoadIC::ReceiverRegister(); + Register name = LoadIC::NameRegister(); + DCHECK(!ebx.is(receiver) && !ebx.is(name)); + + __ pop(ebx); + __ push(receiver); + __ push(name); + __ push(ebx); +} + + +void LoadIC::GenerateMiss(MacroAssembler* masm) { + // Return address is on the stack. + __ IncrementCounter(masm->isolate()->counters()->load_miss(), 1); + + LoadIC_PushArgs(masm); + + // Perform tail call to the entry. + ExternalReference ref = + ExternalReference(IC_Utility(kLoadIC_Miss), masm->isolate()); + __ TailCallExternalReference(ref, 2, 1); +} + + +void LoadIC::GenerateRuntimeGetProperty(MacroAssembler* masm) { + // Return address is on the stack. + LoadIC_PushArgs(masm); + + // Perform tail call to the entry. + __ TailCallRuntime(Runtime::kGetProperty, 2, 1); +} + + +void KeyedLoadIC::GenerateMiss(MacroAssembler* masm) { + // Return address is on the stack. + __ IncrementCounter(masm->isolate()->counters()->keyed_load_miss(), 1); + + LoadIC_PushArgs(masm); + + // Perform tail call to the entry. + ExternalReference ref = + ExternalReference(IC_Utility(kKeyedLoadIC_Miss), masm->isolate()); + __ TailCallExternalReference(ref, 2, 1); +} + + +// IC register specifications +const Register LoadIC::ReceiverRegister() { return edx; } +const Register LoadIC::NameRegister() { return ecx; } + + +const Register LoadIC::SlotRegister() { + DCHECK(FLAG_vector_ics); + return eax; +} + + +const Register LoadIC::VectorRegister() { + DCHECK(FLAG_vector_ics); + return ebx; +} + + +const Register StoreIC::ReceiverRegister() { return edx; } +const Register StoreIC::NameRegister() { return ecx; } +const Register StoreIC::ValueRegister() { return eax; } + + +const Register KeyedStoreIC::MapRegister() { + return ebx; +} + + +void KeyedLoadIC::GenerateRuntimeGetProperty(MacroAssembler* masm) { + // Return address is on the stack. + LoadIC_PushArgs(masm); + + // Perform tail call to the entry. + __ TailCallRuntime(Runtime::kKeyedGetProperty, 2, 1); +} + + +void StoreIC::GenerateMegamorphic(MacroAssembler* masm) { + // Return address is on the stack. + Code::Flags flags = Code::RemoveTypeAndHolderFromFlags( + Code::ComputeHandlerFlags(Code::STORE_IC)); + masm->isolate()->stub_cache()->GenerateProbe( + masm, flags, ReceiverRegister(), NameRegister(), + ebx, no_reg); + + // Cache miss: Jump to runtime. + GenerateMiss(masm); +} + + +static void StoreIC_PushArgs(MacroAssembler* masm) { + Register receiver = StoreIC::ReceiverRegister(); + Register name = StoreIC::NameRegister(); + Register value = StoreIC::ValueRegister(); + + DCHECK(!ebx.is(receiver) && !ebx.is(name) && !ebx.is(value)); + + __ pop(ebx); + __ push(receiver); + __ push(name); + __ push(value); + __ push(ebx); +} + + +void StoreIC::GenerateMiss(MacroAssembler* masm) { + // Return address is on the stack. + StoreIC_PushArgs(masm); + + // Perform tail call to the entry. + ExternalReference ref = + ExternalReference(IC_Utility(kStoreIC_Miss), masm->isolate()); + __ TailCallExternalReference(ref, 3, 1); +} + + +void StoreIC::GenerateNormal(MacroAssembler* masm) { + Label restore_miss; + Register receiver = ReceiverRegister(); + Register name = NameRegister(); + Register value = ValueRegister(); + Register dictionary = ebx; + + __ mov(dictionary, FieldOperand(receiver, JSObject::kPropertiesOffset)); + + // A lot of registers are needed for storing to slow case + // objects. 
Push and restore receiver but rely on + // GenerateDictionaryStore preserving the value and name. + __ push(receiver); + GenerateDictionaryStore(masm, &restore_miss, dictionary, name, value, + receiver, edi); + __ Drop(1); + Counters* counters = masm->isolate()->counters(); + __ IncrementCounter(counters->store_normal_hit(), 1); + __ ret(0); + + __ bind(&restore_miss); + __ pop(receiver); + __ IncrementCounter(counters->store_normal_miss(), 1); + GenerateMiss(masm); +} + + +void StoreIC::GenerateRuntimeSetProperty(MacroAssembler* masm, + StrictMode strict_mode) { + // Return address is on the stack. + DCHECK(!ebx.is(ReceiverRegister()) && !ebx.is(NameRegister()) && + !ebx.is(ValueRegister())); + __ pop(ebx); + __ push(ReceiverRegister()); + __ push(NameRegister()); + __ push(ValueRegister()); + __ push(Immediate(Smi::FromInt(strict_mode))); + __ push(ebx); // return address + + // Do tail-call to runtime routine. + __ TailCallRuntime(Runtime::kSetProperty, 4, 1); +} + + +void KeyedStoreIC::GenerateRuntimeSetProperty(MacroAssembler* masm, + StrictMode strict_mode) { + // Return address is on the stack. + DCHECK(!ebx.is(ReceiverRegister()) && !ebx.is(NameRegister()) && + !ebx.is(ValueRegister())); + __ pop(ebx); + __ push(ReceiverRegister()); + __ push(NameRegister()); + __ push(ValueRegister()); + __ push(Immediate(Smi::FromInt(strict_mode))); + __ push(ebx); // return address + + // Do tail-call to runtime routine. + __ TailCallRuntime(Runtime::kSetProperty, 4, 1); +} + + +void KeyedStoreIC::GenerateMiss(MacroAssembler* masm) { + // Return address is on the stack. + StoreIC_PushArgs(masm); + + // Do tail-call to runtime routine. + ExternalReference ref = + ExternalReference(IC_Utility(kKeyedStoreIC_Miss), masm->isolate()); + __ TailCallExternalReference(ref, 3, 1); +} + + +void StoreIC::GenerateSlow(MacroAssembler* masm) { + // Return address is on the stack. + StoreIC_PushArgs(masm); + + // Do tail-call to runtime routine. + ExternalReference ref(IC_Utility(kStoreIC_Slow), masm->isolate()); + __ TailCallExternalReference(ref, 3, 1); +} + + +void KeyedStoreIC::GenerateSlow(MacroAssembler* masm) { + // Return address is on the stack. + StoreIC_PushArgs(masm); + + // Do tail-call to runtime routine. + ExternalReference ref(IC_Utility(kKeyedStoreIC_Slow), masm->isolate()); + __ TailCallExternalReference(ref, 3, 1); +} + + +#undef __ + + +Condition CompareIC::ComputeCondition(Token::Value op) { + switch (op) { + case Token::EQ_STRICT: + case Token::EQ: + return equal; + case Token::LT: + return less; + case Token::GT: + return greater; + case Token::LTE: + return less_equal; + case Token::GTE: + return greater_equal; + default: + UNREACHABLE(); + return no_condition; + } +} + + +bool CompareIC::HasInlinedSmiCode(Address address) { + // The address of the instruction following the call. + Address test_instruction_address = + address + Assembler::kCallTargetAddressOffset; + + // If the instruction following the call is not a test al, nothing + // was inlined. + return *test_instruction_address == Assembler::kTestAlByte; +} + + +void PatchInlinedSmiCode(Address address, InlinedSmiCheck check) { + // The address of the instruction following the call. + Address test_instruction_address = + address + Assembler::kCallTargetAddressOffset; + + // If the instruction following the call is not a test al, nothing + // was inlined. 
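+  // (The full code generator emits either a one-byte nop or a test-al
+  // instruction after the IC call, so this byte reliably distinguishes
+  // patchable from non-patchable call sites.)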
+  if (*test_instruction_address != Assembler::kTestAlByte) {
+    DCHECK(*test_instruction_address == Assembler::kNopByte);
+    return;
+  }
+
+  Address delta_address = test_instruction_address + 1;
+  // The delta to the start of the map check instruction and the
+  // condition code used at the patched jump.
+  uint8_t delta = *reinterpret_cast<uint8_t*>(delta_address);
+  if (FLAG_trace_ic) {
+    PrintF("[ patching ic at %p, test=%p, delta=%d\n",
+           address, test_instruction_address, delta);
+  }
+
+  // Patch with a short conditional jump. Enabling means switching from a short
+  // jump-if-carry/not-carry to jump-if-zero/not-zero, whereas disabling is the
+  // reverse operation of that.
+  Address jmp_address = test_instruction_address - delta;
+  DCHECK((check == ENABLE_INLINED_SMI_CHECK)
+         ? (*jmp_address == Assembler::kJncShortOpcode ||
+            *jmp_address == Assembler::kJcShortOpcode)
+         : (*jmp_address == Assembler::kJnzShortOpcode ||
+            *jmp_address == Assembler::kJzShortOpcode));
+  Condition cc = (check == ENABLE_INLINED_SMI_CHECK)
+      ? (*jmp_address == Assembler::kJncShortOpcode ? not_zero : zero)
+      : (*jmp_address == Assembler::kJnzShortOpcode ? not_carry : carry);
+  *jmp_address = static_cast<byte>(Assembler::kJccShortPrefix | cc);
+}
+
+
+} }  // namespace v8::internal
+
+#endif  // V8_TARGET_ARCH_X87
diff --git a/deps/v8/src/x87/lithium-codegen-x87.cc b/deps/v8/src/x87/lithium-codegen-x87.cc
new file mode 100644
index 00000000000..8ba73c62367
--- /dev/null
+++ b/deps/v8/src/x87/lithium-codegen-x87.cc
@@ -0,0 +1,5717 @@
+// Copyright 2012 the V8 project authors. All rights reserved.
+// Use of this source code is governed by a BSD-style license that can be
+// found in the LICENSE file.
+
+#include "src/v8.h"
+
+#if V8_TARGET_ARCH_X87
+
+#include "src/code-stubs.h"
+#include "src/codegen.h"
+#include "src/deoptimizer.h"
+#include "src/hydrogen-osr.h"
+#include "src/ic.h"
+#include "src/stub-cache.h"
+#include "src/x87/lithium-codegen-x87.h"
+
+namespace v8 {
+namespace internal {
+
+
+// When invoking builtins, we need to record the safepoint in the middle of
+// the invoke instruction sequence generated by the macro assembler.
+class SafepointGenerator V8_FINAL : public CallWrapper {
+ public:
+  SafepointGenerator(LCodeGen* codegen,
+                     LPointerMap* pointers,
+                     Safepoint::DeoptMode mode)
+      : codegen_(codegen),
+        pointers_(pointers),
+        deopt_mode_(mode) {}
+  virtual ~SafepointGenerator() {}
+
+  virtual void BeforeCall(int call_size) const V8_OVERRIDE {}
+
+  virtual void AfterCall() const V8_OVERRIDE {
+    codegen_->RecordSafepoint(pointers_, deopt_mode_);
+  }
+
+ private:
+  LCodeGen* codegen_;
+  LPointerMap* pointers_;
+  Safepoint::DeoptMode deopt_mode_;
+};
+
+
+#define __ masm()->
+
+bool LCodeGen::GenerateCode() {
+  LPhase phase("Z_Code generation", chunk());
+  DCHECK(is_unused());
+  status_ = GENERATING;
+
+  // Open a frame scope to indicate that there is a frame on the stack. The
+  // MANUAL indicates that the scope shouldn't actually generate code to set up
+  // the frame (that is done in GeneratePrologue).
+ FrameScope frame_scope(masm_, StackFrame::MANUAL); + + support_aligned_spilled_doubles_ = info()->IsOptimizing(); + + dynamic_frame_alignment_ = info()->IsOptimizing() && + ((chunk()->num_double_slots() > 2 && + !chunk()->graph()->is_recursive()) || + !info()->osr_ast_id().IsNone()); + + return GeneratePrologue() && + GenerateBody() && + GenerateDeferredCode() && + GenerateJumpTable() && + GenerateSafepointTable(); +} + + +void LCodeGen::FinishCode(Handle<Code> code) { + DCHECK(is_done()); + code->set_stack_slots(GetStackSlotCount()); + code->set_safepoint_table_offset(safepoints_.GetCodeOffset()); + if (code->is_optimized_code()) RegisterWeakObjectsInOptimizedCode(code); + PopulateDeoptimizationData(code); + if (!info()->IsStub()) { + Deoptimizer::EnsureRelocSpaceForLazyDeoptimization(code); + } +} + + +#ifdef _MSC_VER +void LCodeGen::MakeSureStackPagesMapped(int offset) { + const int kPageSize = 4 * KB; + for (offset -= kPageSize; offset > 0; offset -= kPageSize) { + __ mov(Operand(esp, offset), eax); + } +} +#endif + + +bool LCodeGen::GeneratePrologue() { + DCHECK(is_generating()); + + if (info()->IsOptimizing()) { + ProfileEntryHookStub::MaybeCallEntryHook(masm_); + +#ifdef DEBUG + if (strlen(FLAG_stop_at) > 0 && + info_->function()->name()->IsUtf8EqualTo(CStrVector(FLAG_stop_at))) { + __ int3(); + } +#endif + + // Sloppy mode functions and builtins need to replace the receiver with the + // global proxy when called as functions (without an explicit receiver + // object). + if (info_->this_has_uses() && + info_->strict_mode() == SLOPPY && + !info_->is_native()) { + Label ok; + // +1 for return address. + int receiver_offset = (scope()->num_parameters() + 1) * kPointerSize; + __ mov(ecx, Operand(esp, receiver_offset)); + + __ cmp(ecx, isolate()->factory()->undefined_value()); + __ j(not_equal, &ok, Label::kNear); + + __ mov(ecx, GlobalObjectOperand()); + __ mov(ecx, FieldOperand(ecx, GlobalObject::kGlobalProxyOffset)); + + __ mov(Operand(esp, receiver_offset), ecx); + + __ bind(&ok); + } + + if (support_aligned_spilled_doubles_ && dynamic_frame_alignment_) { + // Move state of dynamic frame alignment into edx. + __ Move(edx, Immediate(kNoAlignmentPadding)); + + Label do_not_pad, align_loop; + STATIC_ASSERT(kDoubleSize == 2 * kPointerSize); + // Align esp + 4 to a multiple of 2 * kPointerSize. + __ test(esp, Immediate(kPointerSize)); + __ j(not_zero, &do_not_pad, Label::kNear); + __ push(Immediate(0)); + __ mov(ebx, esp); + __ mov(edx, Immediate(kAlignmentPaddingPushed)); + // Copy arguments, receiver, and return address. + __ mov(ecx, Immediate(scope()->num_parameters() + 2)); + + __ bind(&align_loop); + __ mov(eax, Operand(ebx, 1 * kPointerSize)); + __ mov(Operand(ebx, 0), eax); + __ add(Operand(ebx), Immediate(kPointerSize)); + __ dec(ecx); + __ j(not_zero, &align_loop, Label::kNear); + __ mov(Operand(ebx, 0), Immediate(kAlignmentZapValue)); + __ bind(&do_not_pad); + } + } + + info()->set_prologue_offset(masm_->pc_offset()); + if (NeedsEagerFrame()) { + DCHECK(!frame_is_built_); + frame_is_built_ = true; + if (info()->IsStub()) { + __ StubPrologue(); + } else { + __ Prologue(info()->IsCodePreAgingActive()); + } + info()->AddNoFrameRange(0, masm_->pc_offset()); + } + + if (info()->IsOptimizing() && + dynamic_frame_alignment_ && + FLAG_debug_code) { + __ test(esp, Immediate(kPointerSize)); + __ Assert(zero, kFrameIsExpectedToBeAligned); + } + + // Reserve space for the stack slots needed by the code. 
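+  // (In debug builds the reserved slots are filled with kSlotsZapValue so
+  // that reads from uninitialized spill slots are easy to recognize.)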
+ int slots = GetStackSlotCount(); + DCHECK(slots != 0 || !info()->IsOptimizing()); + if (slots > 0) { + if (slots == 1) { + if (dynamic_frame_alignment_) { + __ push(edx); + } else { + __ push(Immediate(kNoAlignmentPadding)); + } + } else { + if (FLAG_debug_code) { + __ sub(Operand(esp), Immediate(slots * kPointerSize)); +#ifdef _MSC_VER + MakeSureStackPagesMapped(slots * kPointerSize); +#endif + __ push(eax); + __ mov(Operand(eax), Immediate(slots)); + Label loop; + __ bind(&loop); + __ mov(MemOperand(esp, eax, times_4, 0), + Immediate(kSlotsZapValue)); + __ dec(eax); + __ j(not_zero, &loop); + __ pop(eax); + } else { + __ sub(Operand(esp), Immediate(slots * kPointerSize)); +#ifdef _MSC_VER + MakeSureStackPagesMapped(slots * kPointerSize); +#endif + } + + if (support_aligned_spilled_doubles_) { + Comment(";;; Store dynamic frame alignment tag for spilled doubles"); + // Store dynamic frame alignment state in the first local. + int offset = JavaScriptFrameConstants::kDynamicAlignmentStateOffset; + if (dynamic_frame_alignment_) { + __ mov(Operand(ebp, offset), edx); + } else { + __ mov(Operand(ebp, offset), Immediate(kNoAlignmentPadding)); + } + } + } + } + + // Possibly allocate a local context. + int heap_slots = info_->num_heap_slots() - Context::MIN_CONTEXT_SLOTS; + if (heap_slots > 0) { + Comment(";;; Allocate local context"); + bool need_write_barrier = true; + // Argument to NewContext is the function, which is still in edi. + if (heap_slots <= FastNewContextStub::kMaximumSlots) { + FastNewContextStub stub(isolate(), heap_slots); + __ CallStub(&stub); + // Result of FastNewContextStub is always in new space. + need_write_barrier = false; + } else { + __ push(edi); + __ CallRuntime(Runtime::kNewFunctionContext, 1); + } + RecordSafepoint(Safepoint::kNoLazyDeopt); + // Context is returned in eax. It replaces the context passed to us. + // It's saved in the stack and kept live in esi. + __ mov(esi, eax); + __ mov(Operand(ebp, StandardFrameConstants::kContextOffset), eax); + + // Copy parameters into context if necessary. + int num_parameters = scope()->num_parameters(); + for (int i = 0; i < num_parameters; i++) { + Variable* var = scope()->parameter(i); + if (var->IsContextSlot()) { + int parameter_offset = StandardFrameConstants::kCallerSPOffset + + (num_parameters - 1 - i) * kPointerSize; + // Load parameter from stack. + __ mov(eax, Operand(ebp, parameter_offset)); + // Store it in the context. + int context_offset = Context::SlotOffset(var->index()); + __ mov(Operand(esi, context_offset), eax); + // Update the write barrier. This clobbers eax and ebx. + if (need_write_barrier) { + __ RecordWriteContextSlot(esi, + context_offset, + eax, + ebx); + } else if (FLAG_debug_code) { + Label done; + __ JumpIfInNewSpace(esi, eax, &done, Label::kNear); + __ Abort(kExpectedNewSpaceObject); + __ bind(&done); + } + } + } + Comment(";;; End allocate local context"); + } + + // Trace the call. + if (FLAG_trace && info()->IsOptimizing()) { + // We have not executed any compiled code yet, so esi still holds the + // incoming context. + __ CallRuntime(Runtime::kTraceEnter, 0); + } + return !is_aborted(); +} + + +void LCodeGen::GenerateOsrPrologue() { + // Generate the OSR entry prologue at the first unknown OSR value, or if there + // are none, at the OSR entrypoint instruction. + if (osr_pc_offset_ >= 0) return; + + osr_pc_offset_ = masm()->pc_offset(); + + // Move state of dynamic frame alignment into edx. 
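+ // edx acts as a flag here: it stays kNoAlignmentPadding unless the
+ // alignment loop below pushes a padding word, in which case it is switched
+ // to kAlignmentPaddingPushed; further down it is saved into the dynamic
+ // frame alignment slot of the frame.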
+ __ Move(edx, Immediate(kNoAlignmentPadding)); + + if (support_aligned_spilled_doubles_ && dynamic_frame_alignment_) { + Label do_not_pad, align_loop; + // Align ebp + 4 to a multiple of 2 * kPointerSize. + __ test(ebp, Immediate(kPointerSize)); + __ j(zero, &do_not_pad, Label::kNear); + __ push(Immediate(0)); + __ mov(ebx, esp); + __ mov(edx, Immediate(kAlignmentPaddingPushed)); + + // Move all parts of the frame over one word. The frame consists of: + // unoptimized frame slots, alignment state, context, frame pointer, return + // address, receiver, and the arguments. + __ mov(ecx, Immediate(scope()->num_parameters() + + 5 + graph()->osr()->UnoptimizedFrameSlots())); + + __ bind(&align_loop); + __ mov(eax, Operand(ebx, 1 * kPointerSize)); + __ mov(Operand(ebx, 0), eax); + __ add(Operand(ebx), Immediate(kPointerSize)); + __ dec(ecx); + __ j(not_zero, &align_loop, Label::kNear); + __ mov(Operand(ebx, 0), Immediate(kAlignmentZapValue)); + __ sub(Operand(ebp), Immediate(kPointerSize)); + __ bind(&do_not_pad); + } + + // Save the first local, which is overwritten by the alignment state. + Operand alignment_loc = MemOperand(ebp, -3 * kPointerSize); + __ push(alignment_loc); + + // Set the dynamic frame alignment state. + __ mov(alignment_loc, edx); + + // Adjust the frame size, subsuming the unoptimized frame into the + // optimized frame. + int slots = GetStackSlotCount() - graph()->osr()->UnoptimizedFrameSlots(); + DCHECK(slots >= 1); + __ sub(esp, Immediate((slots - 1) * kPointerSize)); +} + + +void LCodeGen::GenerateBodyInstructionPre(LInstruction* instr) { + if (instr->IsCall()) { + EnsureSpaceForLazyDeopt(Deoptimizer::patch_size()); + } + if (!instr->IsLazyBailout() && !instr->IsGap()) { + safepoints_.BumpLastLazySafepointIndex(); + } + FlushX87StackIfNecessary(instr); +} + + +void LCodeGen::GenerateBodyInstructionPost(LInstruction* instr) { + if (instr->IsGoto()) { + x87_stack_.LeavingBlock(current_block_, LGoto::cast(instr)); + } else if (FLAG_debug_code && FLAG_enable_slow_asserts && + !instr->IsGap() && !instr->IsReturn()) { + if (instr->ClobbersDoubleRegisters(isolate())) { + if (instr->HasDoubleRegisterResult()) { + DCHECK_EQ(1, x87_stack_.depth()); + } else { + DCHECK_EQ(0, x87_stack_.depth()); + } + } + __ VerifyX87StackDepth(x87_stack_.depth()); + } +} + + +bool LCodeGen::GenerateJumpTable() { + Label needs_frame; + if (jump_table_.length() > 0) { + Comment(";;; -------------------- Jump table --------------------"); + } + for (int i = 0; i < jump_table_.length(); i++) { + __ bind(&jump_table_[i].label); + Address entry = jump_table_[i].address; + Deoptimizer::BailoutType type = jump_table_[i].bailout_type; + int id = Deoptimizer::GetDeoptimizationId(isolate(), entry, type); + if (id == Deoptimizer::kNotDeoptimizationEntry) { + Comment(";;; jump table entry %d.", i); + } else { + Comment(";;; jump table entry %d: deoptimization bailout %d.", i, id); + } + if (jump_table_[i].needs_frame) { + DCHECK(!info()->saves_caller_doubles()); + __ push(Immediate(ExternalReference::ForDeoptEntry(entry))); + if (needs_frame.is_bound()) { + __ jmp(&needs_frame); + } else { + __ bind(&needs_frame); + __ push(MemOperand(ebp, StandardFrameConstants::kContextOffset)); + // This variant of deopt can only be used with stubs. Since we don't + // have a function pointer to install in the stack frame that we're + // building, install a special marker there instead. 
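+ // Roughly, the synthetic stub frame built below ends up as (higher
+ // addresses first):
+ //   [ deopt entry ]  <- this slot is later overwritten with the saved ebp
+ //   [ context     ]
+ //   [ STUB marker ]
+ //   [ approx pc   ]
+ // so that the stack walker sees an ordinary stub frame.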
+ DCHECK(info()->IsStub());
+ __ push(Immediate(Smi::FromInt(StackFrame::STUB)));
+ // Push a PC inside the function so that the deopt code can find where
+ // the deopt comes from. It doesn't have to be the precise return
+ // address of a "calling" LAZY deopt, it only has to be somewhere
+ // inside the code body.
+ Label push_approx_pc;
+ __ call(&push_approx_pc);
+ __ bind(&push_approx_pc);
+ // Push the continuation which was stashed where the ebp should
+ // be. Replace it with the saved ebp.
+ __ push(MemOperand(esp, 3 * kPointerSize));
+ __ mov(MemOperand(esp, 4 * kPointerSize), ebp);
+ __ lea(ebp, MemOperand(esp, 4 * kPointerSize));
+ __ ret(0); // Call the continuation without clobbering registers.
+ }
+ } else {
+ __ call(entry, RelocInfo::RUNTIME_ENTRY);
+ }
+ }
+ return !is_aborted();
+}
+
+
+bool LCodeGen::GenerateDeferredCode() {
+ DCHECK(is_generating());
+ if (deferred_.length() > 0) {
+ for (int i = 0; !is_aborted() && i < deferred_.length(); i++) {
+ LDeferredCode* code = deferred_[i];
+ X87Stack copy(code->x87_stack());
+ x87_stack_ = copy;
+
+ HValue* value =
+ instructions_->at(code->instruction_index())->hydrogen_value();
+ RecordAndWritePosition(
+ chunk()->graph()->SourcePositionToScriptPosition(value->position()));
+
+ Comment(";;; <@%d,#%d> "
+ "-------------------- Deferred %s --------------------",
+ code->instruction_index(),
+ code->instr()->hydrogen_value()->id(),
+ code->instr()->Mnemonic());
+ __ bind(code->entry());
+ if (NeedsDeferredFrame()) {
+ Comment(";;; Build frame");
+ DCHECK(!frame_is_built_);
+ DCHECK(info()->IsStub());
+ frame_is_built_ = true;
+ // Build the frame in such a way that esi isn't trashed.
+ __ push(ebp); // Caller's frame pointer.
+ __ push(Operand(ebp, StandardFrameConstants::kContextOffset));
+ __ push(Immediate(Smi::FromInt(StackFrame::STUB)));
+ __ lea(ebp, Operand(esp, 2 * kPointerSize));
+ Comment(";;; Deferred code");
+ }
+ code->Generate();
+ if (NeedsDeferredFrame()) {
+ __ bind(code->done());
+ Comment(";;; Destroy frame");
+ DCHECK(frame_is_built_);
+ frame_is_built_ = false;
+ __ mov(esp, ebp);
+ __ pop(ebp);
+ }
+ __ jmp(code->exit());
+ }
+ }
+
+ // Deferred code is the last part of the instruction sequence. Mark
+ // the generated code as done unless we bailed out.
+ if (!is_aborted()) status_ = DONE;
+ return !is_aborted();
+}
+
+
+bool LCodeGen::GenerateSafepointTable() {
+ DCHECK(is_done());
+ if (!info()->IsStub()) {
+ // For lazy deoptimization we need space to patch a call after every call.
+ // Ensure there is always space for such patching, even if the code ends
+ // in a call.
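+ // In other words: pad with single-byte nops until pc_offset() reaches
+ // target_offset, so that at least patch_size() bytes follow the last call
+ // and the call patched in by lazy deoptimization always fits.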
+ int target_offset = masm()->pc_offset() + Deoptimizer::patch_size(); + while (masm()->pc_offset() < target_offset) { + masm()->nop(); + } + } + safepoints_.Emit(masm(), GetStackSlotCount()); + return !is_aborted(); +} + + +Register LCodeGen::ToRegister(int index) const { + return Register::FromAllocationIndex(index); +} + + +X87Register LCodeGen::ToX87Register(int index) const { + return X87Register::FromAllocationIndex(index); +} + + +void LCodeGen::X87LoadForUsage(X87Register reg) { + DCHECK(x87_stack_.Contains(reg)); + x87_stack_.Fxch(reg); + x87_stack_.pop(); +} + + +void LCodeGen::X87LoadForUsage(X87Register reg1, X87Register reg2) { + DCHECK(x87_stack_.Contains(reg1)); + DCHECK(x87_stack_.Contains(reg2)); + x87_stack_.Fxch(reg1, 1); + x87_stack_.Fxch(reg2); + x87_stack_.pop(); + x87_stack_.pop(); +} + + +void LCodeGen::X87Stack::Fxch(X87Register reg, int other_slot) { + DCHECK(is_mutable_); + DCHECK(Contains(reg) && stack_depth_ > other_slot); + int i = ArrayIndex(reg); + int st = st2idx(i); + if (st != other_slot) { + int other_i = st2idx(other_slot); + X87Register other = stack_[other_i]; + stack_[other_i] = reg; + stack_[i] = other; + if (st == 0) { + __ fxch(other_slot); + } else if (other_slot == 0) { + __ fxch(st); + } else { + __ fxch(st); + __ fxch(other_slot); + __ fxch(st); + } + } +} + + +int LCodeGen::X87Stack::st2idx(int pos) { + return stack_depth_ - pos - 1; +} + + +int LCodeGen::X87Stack::ArrayIndex(X87Register reg) { + for (int i = 0; i < stack_depth_; i++) { + if (stack_[i].is(reg)) return i; + } + UNREACHABLE(); + return -1; +} + + +bool LCodeGen::X87Stack::Contains(X87Register reg) { + for (int i = 0; i < stack_depth_; i++) { + if (stack_[i].is(reg)) return true; + } + return false; +} + + +void LCodeGen::X87Stack::Free(X87Register reg) { + DCHECK(is_mutable_); + DCHECK(Contains(reg)); + int i = ArrayIndex(reg); + int st = st2idx(i); + if (st > 0) { + // keep track of how fstp(i) changes the order of elements + int tos_i = st2idx(0); + stack_[i] = stack_[tos_i]; + } + pop(); + __ fstp(st); +} + + +void LCodeGen::X87Mov(X87Register dst, Operand src, X87OperandType opts) { + if (x87_stack_.Contains(dst)) { + x87_stack_.Fxch(dst); + __ fstp(0); + } else { + x87_stack_.push(dst); + } + X87Fld(src, opts); +} + + +void LCodeGen::X87Fld(Operand src, X87OperandType opts) { + DCHECK(!src.is_reg_only()); + switch (opts) { + case kX87DoubleOperand: + __ fld_d(src); + break; + case kX87FloatOperand: + __ fld_s(src); + break; + case kX87IntOperand: + __ fild_s(src); + break; + default: + UNREACHABLE(); + } +} + + +void LCodeGen::X87Mov(Operand dst, X87Register src, X87OperandType opts) { + DCHECK(!dst.is_reg_only()); + x87_stack_.Fxch(src); + switch (opts) { + case kX87DoubleOperand: + __ fst_d(dst); + break; + case kX87IntOperand: + __ fist_s(dst); + break; + default: + UNREACHABLE(); + } +} + + +void LCodeGen::X87Stack::PrepareToWrite(X87Register reg) { + DCHECK(is_mutable_); + if (Contains(reg)) { + Free(reg); + } + // Mark this register as the next register to write to + stack_[stack_depth_] = reg; +} + + +void LCodeGen::X87Stack::CommitWrite(X87Register reg) { + DCHECK(is_mutable_); + // Assert the reg is prepared to write, but not on the virtual stack yet + DCHECK(!Contains(reg) && stack_[stack_depth_].is(reg) && + stack_depth_ < X87Register::kMaxNumAllocatableRegisters); + stack_depth_++; +} + + +void LCodeGen::X87PrepareBinaryOp( + X87Register left, X87Register right, X87Register result) { + // You need to use DefineSameAsFirst for x87 instructions + 
DCHECK(result.is(left));
+ x87_stack_.Fxch(right, 1);
+ x87_stack_.Fxch(left);
+}
+
+
+void LCodeGen::X87Stack::FlushIfNecessary(LInstruction* instr, LCodeGen* cgen) {
+ if (stack_depth_ > 0 && instr->ClobbersDoubleRegisters(isolate())) {
+ bool double_inputs = instr->HasDoubleRegisterInput();
+
+ // Flush stack from tos down, since FreeX87() will mess with tos
+ for (int i = stack_depth_-1; i >= 0; i--) {
+ X87Register reg = stack_[i];
+ // Skip registers which contain the inputs for the next instruction
+ // when flushing the stack
+ if (double_inputs && instr->IsDoubleInput(reg, cgen)) {
+ continue;
+ }
+ Free(reg);
+ if (i < stack_depth_-1) i++;
+ }
+ }
+ if (instr->IsReturn()) {
+ while (stack_depth_ > 0) {
+ __ fstp(0);
+ stack_depth_--;
+ }
+ if (FLAG_debug_code && FLAG_enable_slow_asserts) __ VerifyX87StackDepth(0);
+ }
+}
+
+
+void LCodeGen::X87Stack::LeavingBlock(int current_block_id, LGoto* goto_instr) {
+ DCHECK(stack_depth_ <= 1);
+ // If ever used for new stubs producing two pairs of doubles joined into two
+ // phis this assert hits. That situation is not handled, since the two stacks
+ // might have st0 and st1 swapped.
+ if (current_block_id + 1 != goto_instr->block_id()) {
+ // If we have a value on the x87 stack on leaving a block, it must be a
+ // phi input. If the next block we compile is not the join block, we have
+ // to discard the stack state.
+ stack_depth_ = 0;
+ }
+}
+
+
+void LCodeGen::EmitFlushX87ForDeopt() {
+ // The deoptimizer does not support X87 Registers. But as long as we
+ // deopt from a stub it's not a problem, since we will re-materialize the
+ // original stub inputs, which can't be double registers.
+ DCHECK(info()->IsStub());
+ if (FLAG_debug_code && FLAG_enable_slow_asserts) {
+ __ pushfd();
+ __ VerifyX87StackDepth(x87_stack_.depth());
+ __ popfd();
+ }
+ for (int i = 0; i < x87_stack_.depth(); i++) __ fstp(0);
+}
+
+
+Register LCodeGen::ToRegister(LOperand* op) const {
+ DCHECK(op->IsRegister());
+ return ToRegister(op->index());
+}
+
+
+X87Register LCodeGen::ToX87Register(LOperand* op) const {
+ DCHECK(op->IsDoubleRegister());
+ return ToX87Register(op->index());
+}
+
+
+int32_t LCodeGen::ToInteger32(LConstantOperand* op) const {
+ return ToRepresentation(op, Representation::Integer32());
+}
+
+
+int32_t LCodeGen::ToRepresentation(LConstantOperand* op,
+ const Representation& r) const {
+ HConstant* constant = chunk_->LookupConstant(op);
+ int32_t value = constant->Integer32Value();
+ if (r.IsInteger32()) return value;
+ DCHECK(r.IsSmiOrTagged());
+ return reinterpret_cast<int32_t>(Smi::FromInt(value));
+}
+
+
+Handle<Object> LCodeGen::ToHandle(LConstantOperand* op) const {
+ HConstant* constant = chunk_->LookupConstant(op);
+ DCHECK(chunk_->LookupLiteralRepresentation(op).IsSmiOrTagged());
+ return constant->handle(isolate());
+}
+
+
+double LCodeGen::ToDouble(LConstantOperand* op) const {
+ HConstant* constant = chunk_->LookupConstant(op);
+ DCHECK(constant->HasDoubleValue());
+ return constant->DoubleValue();
+}
+
+
+ExternalReference LCodeGen::ToExternalReference(LConstantOperand* op) const {
+ HConstant* constant = chunk_->LookupConstant(op);
+ DCHECK(constant->HasExternalReferenceValue());
+ return constant->ExternalReferenceValue();
+}
+
+
+bool LCodeGen::IsInteger32(LConstantOperand* op) const {
+ return chunk_->LookupLiteralRepresentation(op).IsSmiOrInteger32();
+}
+
+
+bool LCodeGen::IsSmi(LConstantOperand* op) const {
+ return chunk_->LookupLiteralRepresentation(op).IsSmi();
+}
+
+
+static int ArgumentsOffsetWithoutFrame(int
index) {
+ DCHECK(index < 0);
+ return -(index + 1) * kPointerSize + kPCOnStackSize;
+}
+
+
+Operand LCodeGen::ToOperand(LOperand* op) const {
+ if (op->IsRegister()) return Operand(ToRegister(op));
+ DCHECK(!op->IsDoubleRegister());
+ DCHECK(op->IsStackSlot() || op->IsDoubleStackSlot());
+ if (NeedsEagerFrame()) {
+ return Operand(ebp, StackSlotOffset(op->index()));
+ } else {
+ // There is no eager frame: retrieve the parameter relative to the
+ // stack pointer.
+ return Operand(esp, ArgumentsOffsetWithoutFrame(op->index()));
+ }
+}
+
+
+Operand LCodeGen::HighOperand(LOperand* op) {
+ DCHECK(op->IsDoubleStackSlot());
+ if (NeedsEagerFrame()) {
+ return Operand(ebp, StackSlotOffset(op->index()) + kPointerSize);
+ } else {
+ // There is no eager frame: retrieve the parameter relative to the
+ // stack pointer.
+ return Operand(
+ esp, ArgumentsOffsetWithoutFrame(op->index()) + kPointerSize);
+ }
+}
+
+
+void LCodeGen::WriteTranslation(LEnvironment* environment,
+ Translation* translation) {
+ if (environment == NULL) return;
+
+ // The translation includes one command per value in the environment.
+ int translation_size = environment->translation_size();
+ // The output frame height does not include the parameters.
+ int height = translation_size - environment->parameter_count();
+
+ WriteTranslation(environment->outer(), translation);
+ bool has_closure_id = !info()->closure().is_null() &&
+ !info()->closure().is_identical_to(environment->closure());
+ int closure_id = has_closure_id
+ ? DefineDeoptimizationLiteral(environment->closure())
+ : Translation::kSelfLiteralId;
+ switch (environment->frame_type()) {
+ case JS_FUNCTION:
+ translation->BeginJSFrame(environment->ast_id(), closure_id, height);
+ break;
+ case JS_CONSTRUCT:
+ translation->BeginConstructStubFrame(closure_id, translation_size);
+ break;
+ case JS_GETTER:
+ DCHECK(translation_size == 1);
+ DCHECK(height == 0);
+ translation->BeginGetterStubFrame(closure_id);
+ break;
+ case JS_SETTER:
+ DCHECK(translation_size == 2);
+ DCHECK(height == 0);
+ translation->BeginSetterStubFrame(closure_id);
+ break;
+ case ARGUMENTS_ADAPTOR:
+ translation->BeginArgumentsAdaptorFrame(closure_id, translation_size);
+ break;
+ case STUB:
+ translation->BeginCompiledStubFrame();
+ break;
+ default:
+ UNREACHABLE();
+ }
+
+ int object_index = 0;
+ int dematerialized_index = 0;
+ for (int i = 0; i < translation_size; ++i) {
+ LOperand* value = environment->values()->at(i);
+ AddToTranslation(environment,
+ translation,
+ value,
+ environment->HasTaggedValueAt(i),
+ environment->HasUint32ValueAt(i),
+ &object_index,
+ &dematerialized_index);
+ }
+}
+
+
+void LCodeGen::AddToTranslation(LEnvironment* environment,
+ Translation* translation,
+ LOperand* op,
+ bool is_tagged,
+ bool is_uint32,
+ int* object_index_pointer,
+ int* dematerialized_index_pointer) {
+ if (op == LEnvironment::materialization_marker()) {
+ int object_index = (*object_index_pointer)++;
+ if (environment->ObjectIsDuplicateAt(object_index)) {
+ int dupe_of = environment->ObjectDuplicateOfAt(object_index);
+ translation->DuplicateObject(dupe_of);
+ return;
+ }
+ int object_length = environment->ObjectLengthAt(object_index);
+ if (environment->ObjectIsArgumentsAt(object_index)) {
+ translation->BeginArgumentsObject(object_length);
+ } else {
+ translation->BeginCapturedObject(object_length);
+ }
+ int dematerialized_index = *dematerialized_index_pointer;
+ int env_offset = environment->translation_size() + dematerialized_index;
+ *dematerialized_index_pointer += object_length;
+ for
(int i = 0; i < object_length; ++i) { + LOperand* value = environment->values()->at(env_offset + i); + AddToTranslation(environment, + translation, + value, + environment->HasTaggedValueAt(env_offset + i), + environment->HasUint32ValueAt(env_offset + i), + object_index_pointer, + dematerialized_index_pointer); + } + return; + } + + if (op->IsStackSlot()) { + if (is_tagged) { + translation->StoreStackSlot(op->index()); + } else if (is_uint32) { + translation->StoreUint32StackSlot(op->index()); + } else { + translation->StoreInt32StackSlot(op->index()); + } + } else if (op->IsDoubleStackSlot()) { + translation->StoreDoubleStackSlot(op->index()); + } else if (op->IsRegister()) { + Register reg = ToRegister(op); + if (is_tagged) { + translation->StoreRegister(reg); + } else if (is_uint32) { + translation->StoreUint32Register(reg); + } else { + translation->StoreInt32Register(reg); + } + } else if (op->IsConstantOperand()) { + HConstant* constant = chunk()->LookupConstant(LConstantOperand::cast(op)); + int src_index = DefineDeoptimizationLiteral(constant->handle(isolate())); + translation->StoreLiteral(src_index); + } else { + UNREACHABLE(); + } +} + + +void LCodeGen::CallCodeGeneric(Handle<Code> code, + RelocInfo::Mode mode, + LInstruction* instr, + SafepointMode safepoint_mode) { + DCHECK(instr != NULL); + __ call(code, mode); + RecordSafepointWithLazyDeopt(instr, safepoint_mode); + + // Signal that we don't inline smi code before these stubs in the + // optimizing code generator. + if (code->kind() == Code::BINARY_OP_IC || + code->kind() == Code::COMPARE_IC) { + __ nop(); + } +} + + +void LCodeGen::CallCode(Handle<Code> code, + RelocInfo::Mode mode, + LInstruction* instr) { + CallCodeGeneric(code, mode, instr, RECORD_SIMPLE_SAFEPOINT); +} + + +void LCodeGen::CallRuntime(const Runtime::Function* fun, + int argc, + LInstruction* instr) { + DCHECK(instr != NULL); + DCHECK(instr->HasPointerMap()); + + __ CallRuntime(fun, argc); + + RecordSafepointWithLazyDeopt(instr, RECORD_SIMPLE_SAFEPOINT); + + DCHECK(info()->is_calling()); +} + + +void LCodeGen::LoadContextFromDeferred(LOperand* context) { + if (context->IsRegister()) { + if (!ToRegister(context).is(esi)) { + __ mov(esi, ToRegister(context)); + } + } else if (context->IsStackSlot()) { + __ mov(esi, ToOperand(context)); + } else if (context->IsConstantOperand()) { + HConstant* constant = + chunk_->LookupConstant(LConstantOperand::cast(context)); + __ LoadObject(esi, Handle<Object>::cast(constant->handle(isolate()))); + } else { + UNREACHABLE(); + } +} + +void LCodeGen::CallRuntimeFromDeferred(Runtime::FunctionId id, + int argc, + LInstruction* instr, + LOperand* context) { + LoadContextFromDeferred(context); + + __ CallRuntime(id); + RecordSafepointWithRegisters( + instr->pointer_map(), argc, Safepoint::kNoLazyDeopt); + + DCHECK(info()->is_calling()); +} + + +void LCodeGen::RegisterEnvironmentForDeoptimization( + LEnvironment* environment, Safepoint::DeoptMode mode) { + environment->set_has_been_used(); + if (!environment->HasBeenRegistered()) { + // Physical stack frame layout: + // -x ............. -4 0 ..................................... y + // [incoming arguments] [spill slots] [pushed outgoing arguments] + + // Layout of the environment: + // 0 ..................................................... size-1 + // [parameters] [locals] [expression stack including arguments] + + // Layout of the translation: + // 0 ........................................................ 
size - 1 + 4
+ // [expression stack including arguments] [locals] [4 words] [parameters]
+ // |>------------ translation_size ------------<|
+
+ int frame_count = 0;
+ int jsframe_count = 0;
+ for (LEnvironment* e = environment; e != NULL; e = e->outer()) {
+ ++frame_count;
+ if (e->frame_type() == JS_FUNCTION) {
+ ++jsframe_count;
+ }
+ }
+ Translation translation(&translations_, frame_count, jsframe_count, zone());
+ WriteTranslation(environment, &translation);
+ int deoptimization_index = deoptimizations_.length();
+ int pc_offset = masm()->pc_offset();
+ environment->Register(deoptimization_index,
+ translation.index(),
+ (mode == Safepoint::kLazyDeopt) ? pc_offset : -1);
+ deoptimizations_.Add(environment, zone());
+ }
+}
+
+
+void LCodeGen::DeoptimizeIf(Condition cc,
+ LEnvironment* environment,
+ Deoptimizer::BailoutType bailout_type) {
+ RegisterEnvironmentForDeoptimization(environment, Safepoint::kNoLazyDeopt);
+ DCHECK(environment->HasBeenRegistered());
+ int id = environment->deoptimization_index();
+ DCHECK(info()->IsOptimizing() || info()->IsStub());
+ Address entry =
+ Deoptimizer::GetDeoptimizationEntry(isolate(), id, bailout_type);
+ if (entry == NULL) {
+ Abort(kBailoutWasNotPrepared);
+ return;
+ }
+
+ if (DeoptEveryNTimes()) {
+ ExternalReference count = ExternalReference::stress_deopt_count(isolate());
+ Label no_deopt;
+ __ pushfd();
+ __ push(eax);
+ __ mov(eax, Operand::StaticVariable(count));
+ __ sub(eax, Immediate(1));
+ __ j(not_zero, &no_deopt, Label::kNear);
+ if (FLAG_trap_on_deopt) __ int3();
+ __ mov(eax, Immediate(FLAG_deopt_every_n_times));
+ __ mov(Operand::StaticVariable(count), eax);
+ __ pop(eax);
+ __ popfd();
+ DCHECK(frame_is_built_);
+ __ call(entry, RelocInfo::RUNTIME_ENTRY);
+ __ bind(&no_deopt);
+ __ mov(Operand::StaticVariable(count), eax);
+ __ pop(eax);
+ __ popfd();
+ }
+
+ // Before instructions that can deopt, we normally flush the x87 stack. But
+ // we can have inputs or outputs of the current instruction on the stack,
+ // thus we need to flush them here from the physical stack to leave it in a
+ // consistent state.
+ if (x87_stack_.depth() > 0) {
+ Label done;
+ if (cc != no_condition) __ j(NegateCondition(cc), &done, Label::kNear);
+ EmitFlushX87ForDeopt();
+ __ bind(&done);
+ }
+
+ if (info()->ShouldTrapOnDeopt()) {
+ Label done;
+ if (cc != no_condition) __ j(NegateCondition(cc), &done, Label::kNear);
+ __ int3();
+ __ bind(&done);
+ }
+
+ DCHECK(info()->IsStub() || frame_is_built_);
+ if (cc == no_condition && frame_is_built_) {
+ __ call(entry, RelocInfo::RUNTIME_ENTRY);
+ } else {
+ // We often have several deopts to the same entry; reuse the last
+ // jump entry if this is the case.
+ if (jump_table_.is_empty() ||
+ jump_table_.last().address != entry ||
+ jump_table_.last().needs_frame != !frame_is_built_ ||
+ jump_table_.last().bailout_type != bailout_type) {
+ Deoptimizer::JumpTableEntry table_entry(entry,
+ bailout_type,
+ !frame_is_built_);
+ jump_table_.Add(table_entry, zone());
+ }
+ if (cc == no_condition) {
+ __ jmp(&jump_table_.last().label);
+ } else {
+ __ j(cc, &jump_table_.last().label);
+ }
+ }
+}
+
+
+void LCodeGen::DeoptimizeIf(Condition cc,
+ LEnvironment* environment) {
+ Deoptimizer::BailoutType bailout_type = info()->IsStub()
+ ?
Deoptimizer::LAZY + : Deoptimizer::EAGER; + DeoptimizeIf(cc, environment, bailout_type); +} + + +void LCodeGen::PopulateDeoptimizationData(Handle<Code> code) { + int length = deoptimizations_.length(); + if (length == 0) return; + Handle<DeoptimizationInputData> data = + DeoptimizationInputData::New(isolate(), length, 0, TENURED); + + Handle<ByteArray> translations = + translations_.CreateByteArray(isolate()->factory()); + data->SetTranslationByteArray(*translations); + data->SetInlinedFunctionCount(Smi::FromInt(inlined_function_count_)); + data->SetOptimizationId(Smi::FromInt(info_->optimization_id())); + if (info_->IsOptimizing()) { + // Reference to shared function info does not change between phases. + AllowDeferredHandleDereference allow_handle_dereference; + data->SetSharedFunctionInfo(*info_->shared_info()); + } else { + data->SetSharedFunctionInfo(Smi::FromInt(0)); + } + + Handle<FixedArray> literals = + factory()->NewFixedArray(deoptimization_literals_.length(), TENURED); + { AllowDeferredHandleDereference copy_handles; + for (int i = 0; i < deoptimization_literals_.length(); i++) { + literals->set(i, *deoptimization_literals_[i]); + } + data->SetLiteralArray(*literals); + } + + data->SetOsrAstId(Smi::FromInt(info_->osr_ast_id().ToInt())); + data->SetOsrPcOffset(Smi::FromInt(osr_pc_offset_)); + + // Populate the deoptimization entries. + for (int i = 0; i < length; i++) { + LEnvironment* env = deoptimizations_[i]; + data->SetAstId(i, env->ast_id()); + data->SetTranslationIndex(i, Smi::FromInt(env->translation_index())); + data->SetArgumentsStackHeight(i, + Smi::FromInt(env->arguments_stack_height())); + data->SetPc(i, Smi::FromInt(env->pc_offset())); + } + code->set_deoptimization_data(*data); +} + + +int LCodeGen::DefineDeoptimizationLiteral(Handle<Object> literal) { + int result = deoptimization_literals_.length(); + for (int i = 0; i < deoptimization_literals_.length(); ++i) { + if (deoptimization_literals_[i].is_identical_to(literal)) return i; + } + deoptimization_literals_.Add(literal, zone()); + return result; +} + + +void LCodeGen::PopulateDeoptimizationLiteralsWithInlinedFunctions() { + DCHECK(deoptimization_literals_.length() == 0); + + const ZoneList<Handle<JSFunction> >* inlined_closures = + chunk()->inlined_closures(); + + for (int i = 0, length = inlined_closures->length(); + i < length; + i++) { + DefineDeoptimizationLiteral(inlined_closures->at(i)); + } + + inlined_function_count_ = deoptimization_literals_.length(); +} + + +void LCodeGen::RecordSafepointWithLazyDeopt( + LInstruction* instr, SafepointMode safepoint_mode) { + if (safepoint_mode == RECORD_SIMPLE_SAFEPOINT) { + RecordSafepoint(instr->pointer_map(), Safepoint::kLazyDeopt); + } else { + DCHECK(safepoint_mode == RECORD_SAFEPOINT_WITH_REGISTERS_AND_NO_ARGUMENTS); + RecordSafepointWithRegisters( + instr->pointer_map(), 0, Safepoint::kLazyDeopt); + } +} + + +void LCodeGen::RecordSafepoint( + LPointerMap* pointers, + Safepoint::Kind kind, + int arguments, + Safepoint::DeoptMode deopt_mode) { + DCHECK(kind == expected_safepoint_kind_); + const ZoneList<LOperand*>* operands = pointers->GetNormalizedOperands(); + Safepoint safepoint = + safepoints_.DefineSafepoint(masm(), kind, arguments, deopt_mode); + for (int i = 0; i < operands->length(); i++) { + LOperand* pointer = operands->at(i); + if (pointer->IsStackSlot()) { + safepoint.DefinePointerSlot(pointer->index(), zone()); + } else if (pointer->IsRegister() && (kind & Safepoint::kWithRegisters)) { + safepoint.DefinePointerRegister(ToRegister(pointer), 
zone());
+ }
+ }
+}
+
+
+void LCodeGen::RecordSafepoint(LPointerMap* pointers,
+ Safepoint::DeoptMode mode) {
+ RecordSafepoint(pointers, Safepoint::kSimple, 0, mode);
+}
+
+
+void LCodeGen::RecordSafepoint(Safepoint::DeoptMode mode) {
+ LPointerMap empty_pointers(zone());
+ RecordSafepoint(&empty_pointers, mode);
+}
+
+
+void LCodeGen::RecordSafepointWithRegisters(LPointerMap* pointers,
+ int arguments,
+ Safepoint::DeoptMode mode) {
+ RecordSafepoint(pointers, Safepoint::kWithRegisters, arguments, mode);
+}
+
+
+void LCodeGen::RecordAndWritePosition(int position) {
+ if (position == RelocInfo::kNoPosition) return;
+ masm()->positions_recorder()->RecordPosition(position);
+ masm()->positions_recorder()->WriteRecordedPositions();
+}
+
+
+static const char* LabelType(LLabel* label) {
+ if (label->is_loop_header()) return " (loop header)";
+ if (label->is_osr_entry()) return " (OSR entry)";
+ return "";
+}
+
+
+void LCodeGen::DoLabel(LLabel* label) {
+ Comment(";;; <@%d,#%d> -------------------- B%d%s --------------------",
+ current_instruction_,
+ label->hydrogen_value()->id(),
+ label->block_id(),
+ LabelType(label));
+ __ bind(label->label());
+ current_block_ = label->block_id();
+ DoGap(label);
+}
+
+
+void LCodeGen::DoParallelMove(LParallelMove* move) {
+ resolver_.Resolve(move);
+}
+
+
+void LCodeGen::DoGap(LGap* gap) {
+ for (int i = LGap::FIRST_INNER_POSITION;
+ i <= LGap::LAST_INNER_POSITION;
+ i++) {
+ LGap::InnerPosition inner_pos = static_cast<LGap::InnerPosition>(i);
+ LParallelMove* move = gap->GetParallelMove(inner_pos);
+ if (move != NULL) DoParallelMove(move);
+ }
+}
+
+
+void LCodeGen::DoInstructionGap(LInstructionGap* instr) {
+ DoGap(instr);
+}
+
+
+void LCodeGen::DoParameter(LParameter* instr) {
+ // Nothing to do.
+}
+
+
+void LCodeGen::DoCallStub(LCallStub* instr) {
+ DCHECK(ToRegister(instr->context()).is(esi));
+ DCHECK(ToRegister(instr->result()).is(eax));
+ switch (instr->hydrogen()->major_key()) {
+ case CodeStub::RegExpExec: {
+ RegExpExecStub stub(isolate());
+ CallCode(stub.GetCode(), RelocInfo::CODE_TARGET, instr);
+ break;
+ }
+ case CodeStub::SubString: {
+ SubStringStub stub(isolate());
+ CallCode(stub.GetCode(), RelocInfo::CODE_TARGET, instr);
+ break;
+ }
+ case CodeStub::StringCompare: {
+ StringCompareStub stub(isolate());
+ CallCode(stub.GetCode(), RelocInfo::CODE_TARGET, instr);
+ break;
+ }
+ default:
+ UNREACHABLE();
+ }
+}
+
+
+void LCodeGen::DoUnknownOSRValue(LUnknownOSRValue* instr) {
+ GenerateOsrPrologue();
+}
+
+
+void LCodeGen::DoModByPowerOf2I(LModByPowerOf2I* instr) {
+ Register dividend = ToRegister(instr->dividend());
+ int32_t divisor = instr->divisor();
+ DCHECK(dividend.is(ToRegister(instr->result())));
+
+ // Theoretically, a variation of the branch-free code for integer division by
+ // a power of 2 (calculating the remainder via an additional multiplication
+ // (which gets simplified to an 'and') and subtraction) should be faster, and
+ // this is exactly what GCC and clang emit. Nevertheless, benchmarks seem to
+ // indicate that positive dividends are heavily favored, so the branching
+ // version performs better.
+ HMod* hmod = instr->hydrogen();
+ int32_t mask = divisor < 0 ? -(divisor + 1) : (divisor - 1);
+ Label dividend_is_not_negative, done;
+ if (hmod->CheckFlag(HValue::kLeftCanBeNegative)) {
+ __ test(dividend, dividend);
+ __ j(not_sign, &dividend_is_not_negative, Label::kNear);
+ // Note that this is correct even for kMinInt operands.
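+ // A worked example of the neg/and/neg sequence below, assuming C-style
+ // truncating modulus: dividend == -5 and divisor == 4 give mask == 3, and
+ // -(5 & 3) == -1 == -5 % 4. For dividend == kMinInt the neg overflows and
+ // leaves the value unchanged, but kMinInt & 3 == 0, so the result 0 is
+ // still correct.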
+ __ neg(dividend);
+ __ and_(dividend, mask);
+ __ neg(dividend);
+ if (hmod->CheckFlag(HValue::kBailoutOnMinusZero)) {
+ DeoptimizeIf(zero, instr->environment());
+ }
+ __ jmp(&done, Label::kNear);
+ }
+
+ __ bind(&dividend_is_not_negative);
+ __ and_(dividend, mask);
+ __ bind(&done);
+}
+
+
+void LCodeGen::DoModByConstI(LModByConstI* instr) {
+ Register dividend = ToRegister(instr->dividend());
+ int32_t divisor = instr->divisor();
+ DCHECK(ToRegister(instr->result()).is(eax));
+
+ if (divisor == 0) {
+ DeoptimizeIf(no_condition, instr->environment());
+ return;
+ }
+
+ __ TruncatingDiv(dividend, Abs(divisor));
+ __ imul(edx, edx, Abs(divisor));
+ __ mov(eax, dividend);
+ __ sub(eax, edx);
+
+ // Check for negative zero.
+ HMod* hmod = instr->hydrogen();
+ if (hmod->CheckFlag(HValue::kBailoutOnMinusZero)) {
+ Label remainder_not_zero;
+ __ j(not_zero, &remainder_not_zero, Label::kNear);
+ __ cmp(dividend, Immediate(0));
+ DeoptimizeIf(less, instr->environment());
+ __ bind(&remainder_not_zero);
+ }
+}
+
+
+void LCodeGen::DoModI(LModI* instr) {
+ HMod* hmod = instr->hydrogen();
+
+ Register left_reg = ToRegister(instr->left());
+ DCHECK(left_reg.is(eax));
+ Register right_reg = ToRegister(instr->right());
+ DCHECK(!right_reg.is(eax));
+ DCHECK(!right_reg.is(edx));
+ Register result_reg = ToRegister(instr->result());
+ DCHECK(result_reg.is(edx));
+
+ Label done;
+ // Check for x % 0; idiv would signal a divide error. We have to
+ // deopt in this case because we can't return a NaN.
+ if (hmod->CheckFlag(HValue::kCanBeDivByZero)) {
+ __ test(right_reg, Operand(right_reg));
+ DeoptimizeIf(zero, instr->environment());
+ }
+
+ // Check for kMinInt % -1; idiv would signal a divide error. We
+ // have to deopt if we care about -0, because we can't return that.
+ if (hmod->CheckFlag(HValue::kCanOverflow)) {
+ Label no_overflow_possible;
+ __ cmp(left_reg, kMinInt);
+ __ j(not_equal, &no_overflow_possible, Label::kNear);
+ __ cmp(right_reg, -1);
+ if (hmod->CheckFlag(HValue::kBailoutOnMinusZero)) {
+ DeoptimizeIf(equal, instr->environment());
+ } else {
+ __ j(not_equal, &no_overflow_possible, Label::kNear);
+ __ Move(result_reg, Immediate(0));
+ __ jmp(&done, Label::kNear);
+ }
+ __ bind(&no_overflow_possible);
+ }
+
+ // Sign extend dividend in eax into edx:eax.
+ __ cdq();
+
+ // If we care about -0, test if the dividend is <0 and the result is 0.
+ if (hmod->CheckFlag(HValue::kBailoutOnMinusZero)) {
+ Label positive_left;
+ __ test(left_reg, Operand(left_reg));
+ __ j(not_sign, &positive_left, Label::kNear);
+ __ idiv(right_reg);
+ __ test(result_reg, Operand(result_reg));
+ DeoptimizeIf(zero, instr->environment());
+ __ jmp(&done, Label::kNear);
+ __ bind(&positive_left);
+ }
+ __ idiv(right_reg);
+ __ bind(&done);
+}
+
+
+void LCodeGen::DoDivByPowerOf2I(LDivByPowerOf2I* instr) {
+ Register dividend = ToRegister(instr->dividend());
+ int32_t divisor = instr->divisor();
+ Register result = ToRegister(instr->result());
+ DCHECK(divisor == kMinInt || IsPowerOf2(Abs(divisor)));
+ DCHECK(!result.is(dividend));
+
+ // Check for (0 / -x) that will produce negative zero.
+ HDiv* hdiv = instr->hydrogen();
+ if (hdiv->CheckFlag(HValue::kBailoutOnMinusZero) && divisor < 0) {
+ __ test(dividend, dividend);
+ DeoptimizeIf(zero, instr->environment());
+ }
+ // Check for (kMinInt / -1).
+ if (hdiv->CheckFlag(HValue::kCanOverflow) && divisor == -1) {
+ __ cmp(dividend, kMinInt);
+ DeoptimizeIf(zero, instr->environment());
+ }
+ // Deoptimize if remainder will not be 0.
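+ // For example, divisor == 8 gives mask == 7: any non-zero low bits mean
+ // the division has a remainder (13 & 7 == 5, so 13 / 8 deopts here), and
+ // only exact quotients may proceed.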
+ if (!hdiv->CheckFlag(HInstruction::kAllUsesTruncatingToInt32) &&
+ divisor != 1 && divisor != -1) {
+ int32_t mask = divisor < 0 ? -(divisor + 1) : (divisor - 1);
+ __ test(dividend, Immediate(mask));
+ DeoptimizeIf(not_zero, instr->environment());
+ }
+ __ Move(result, dividend);
+ int32_t shift = WhichPowerOf2Abs(divisor);
+ if (shift > 0) {
+ // The arithmetic shift is always OK, the 'if' is an optimization only.
+ if (shift > 1) __ sar(result, 31);
+ __ shr(result, 32 - shift);
+ __ add(result, dividend);
+ __ sar(result, shift);
+ }
+ if (divisor < 0) __ neg(result);
+}
+
+
+void LCodeGen::DoDivByConstI(LDivByConstI* instr) {
+ Register dividend = ToRegister(instr->dividend());
+ int32_t divisor = instr->divisor();
+ DCHECK(ToRegister(instr->result()).is(edx));
+
+ if (divisor == 0) {
+ DeoptimizeIf(no_condition, instr->environment());
+ return;
+ }
+
+ // Check for (0 / -x) that will produce negative zero.
+ HDiv* hdiv = instr->hydrogen();
+ if (hdiv->CheckFlag(HValue::kBailoutOnMinusZero) && divisor < 0) {
+ __ test(dividend, dividend);
+ DeoptimizeIf(zero, instr->environment());
+ }
+
+ __ TruncatingDiv(dividend, Abs(divisor));
+ if (divisor < 0) __ neg(edx);
+
+ if (!hdiv->CheckFlag(HInstruction::kAllUsesTruncatingToInt32)) {
+ __ mov(eax, edx);
+ __ imul(eax, eax, divisor);
+ __ sub(eax, dividend);
+ DeoptimizeIf(not_equal, instr->environment());
+ }
+}
+
+
+// TODO(svenpanne) Refactor this to avoid code duplication with DoFlooringDivI.
+void LCodeGen::DoDivI(LDivI* instr) {
+ HBinaryOperation* hdiv = instr->hydrogen();
+ Register dividend = ToRegister(instr->dividend());
+ Register divisor = ToRegister(instr->divisor());
+ Register remainder = ToRegister(instr->temp());
+ DCHECK(dividend.is(eax));
+ DCHECK(remainder.is(edx));
+ DCHECK(ToRegister(instr->result()).is(eax));
+ DCHECK(!divisor.is(eax));
+ DCHECK(!divisor.is(edx));
+
+ // Check for x / 0.
+ if (hdiv->CheckFlag(HValue::kCanBeDivByZero)) {
+ __ test(divisor, divisor);
+ DeoptimizeIf(zero, instr->environment());
+ }
+
+ // Check for (0 / -x) that will produce negative zero.
+ if (hdiv->CheckFlag(HValue::kBailoutOnMinusZero)) {
+ Label dividend_not_zero;
+ __ test(dividend, dividend);
+ __ j(not_zero, &dividend_not_zero, Label::kNear);
+ __ test(divisor, divisor);
+ DeoptimizeIf(sign, instr->environment());
+ __ bind(&dividend_not_zero);
+ }
+
+ // Check for (kMinInt / -1).
+ if (hdiv->CheckFlag(HValue::kCanOverflow)) {
+ Label dividend_not_min_int;
+ __ cmp(dividend, kMinInt);
+ __ j(not_zero, &dividend_not_min_int, Label::kNear);
+ __ cmp(divisor, -1);
+ DeoptimizeIf(zero, instr->environment());
+ __ bind(&dividend_not_min_int);
+ }
+
+ // Sign extend to edx (= remainder).
+ __ cdq();
+ __ idiv(divisor);
+
+ if (!hdiv->CheckFlag(HValue::kAllUsesTruncatingToInt32)) {
+ // Deoptimize if remainder is not 0.
+ __ test(remainder, remainder);
+ DeoptimizeIf(not_zero, instr->environment());
+ }
+}
+
+
+void LCodeGen::DoFlooringDivByPowerOf2I(LFlooringDivByPowerOf2I* instr) {
+ Register dividend = ToRegister(instr->dividend());
+ int32_t divisor = instr->divisor();
+ DCHECK(dividend.is(ToRegister(instr->result())));
+
+ // If the divisor is positive, things are easy: There can be no deopts and we
+ // can simply do an arithmetic right shift.
+ if (divisor == 1) return;
+ int32_t shift = WhichPowerOf2Abs(divisor);
+ if (divisor > 1) {
+ __ sar(dividend, shift);
+ return;
+ }
+
+ // If the divisor is negative, we have to negate and handle edge cases.
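+ // A small example of the negate-then-shift trick: dividend == 5 and
+ // divisor == -4 negate to -5, and sar by 2 yields -2, matching
+ // floor(5 / -4) == floor(-1.25) == -2; the arithmetic shift rounds toward
+ // negative infinity, which is exactly the flooring behaviour needed.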
+ __ neg(dividend);
+ if (instr->hydrogen()->CheckFlag(HValue::kBailoutOnMinusZero)) {
+ DeoptimizeIf(zero, instr->environment());
+ }
+
+ // Dividing by -1 is basically negation, unless we overflow.
+ if (divisor == -1) {
+ if (instr->hydrogen()->CheckFlag(HValue::kLeftCanBeMinInt)) {
+ DeoptimizeIf(overflow, instr->environment());
+ }
+ return;
+ }
+
+ // If the negation could not overflow, simply shifting is OK.
+ if (!instr->hydrogen()->CheckFlag(HValue::kLeftCanBeMinInt)) {
+ __ sar(dividend, shift);
+ return;
+ }
+
+ Label not_kmin_int, done;
+ __ j(no_overflow, &not_kmin_int, Label::kNear);
+ __ mov(dividend, Immediate(kMinInt / divisor));
+ __ jmp(&done, Label::kNear);
+ __ bind(&not_kmin_int);
+ __ sar(dividend, shift);
+ __ bind(&done);
+}
+
+
+void LCodeGen::DoFlooringDivByConstI(LFlooringDivByConstI* instr) {
+ Register dividend = ToRegister(instr->dividend());
+ int32_t divisor = instr->divisor();
+ DCHECK(ToRegister(instr->result()).is(edx));
+
+ if (divisor == 0) {
+ DeoptimizeIf(no_condition, instr->environment());
+ return;
+ }
+
+ // Check for (0 / -x) that will produce negative zero.
+ HMathFloorOfDiv* hdiv = instr->hydrogen();
+ if (hdiv->CheckFlag(HValue::kBailoutOnMinusZero) && divisor < 0) {
+ __ test(dividend, dividend);
+ DeoptimizeIf(zero, instr->environment());
+ }
+
+ // Easy case: We need no dynamic check for the dividend and the flooring
+ // division is the same as the truncating division.
+ if ((divisor > 0 && !hdiv->CheckFlag(HValue::kLeftCanBeNegative)) ||
+ (divisor < 0 && !hdiv->CheckFlag(HValue::kLeftCanBePositive))) {
+ __ TruncatingDiv(dividend, Abs(divisor));
+ if (divisor < 0) __ neg(edx);
+ return;
+ }
+
+ // In the general case we may need to adjust before and after the truncating
+ // division to get a flooring division.
+ Register temp = ToRegister(instr->temp3());
+ DCHECK(!temp.is(dividend) && !temp.is(eax) && !temp.is(edx));
+ Label needs_adjustment, done;
+ __ cmp(dividend, Immediate(0));
+ __ j(divisor > 0 ? less : greater, &needs_adjustment, Label::kNear);
+ __ TruncatingDiv(dividend, Abs(divisor));
+ if (divisor < 0) __ neg(edx);
+ __ jmp(&done, Label::kNear);
+ __ bind(&needs_adjustment);
+ __ lea(temp, Operand(dividend, divisor > 0 ? 1 : -1));
+ __ TruncatingDiv(temp, Abs(divisor));
+ if (divisor < 0) __ neg(edx);
+ __ dec(edx);
+ __ bind(&done);
+}
+
+
+// TODO(svenpanne) Refactor this to avoid code duplication with DoDivI.
+void LCodeGen::DoFlooringDivI(LFlooringDivI* instr) {
+ HBinaryOperation* hdiv = instr->hydrogen();
+ Register dividend = ToRegister(instr->dividend());
+ Register divisor = ToRegister(instr->divisor());
+ Register remainder = ToRegister(instr->temp());
+ Register result = ToRegister(instr->result());
+ DCHECK(dividend.is(eax));
+ DCHECK(remainder.is(edx));
+ DCHECK(result.is(eax));
+ DCHECK(!divisor.is(eax));
+ DCHECK(!divisor.is(edx));
+
+ // Check for x / 0.
+ if (hdiv->CheckFlag(HValue::kCanBeDivByZero)) {
+ __ test(divisor, divisor);
+ DeoptimizeIf(zero, instr->environment());
+ }
+
+ // Check for (0 / -x) that will produce negative zero.
+ if (hdiv->CheckFlag(HValue::kBailoutOnMinusZero)) {
+ Label dividend_not_zero;
+ __ test(dividend, dividend);
+ __ j(not_zero, &dividend_not_zero, Label::kNear);
+ __ test(divisor, divisor);
+ DeoptimizeIf(sign, instr->environment());
+ __ bind(&dividend_not_zero);
+ }
+
+ // Check for (kMinInt / -1).
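+ // kMinInt / -1 is the one quotient that does not fit in a signed 32-bit
+ // register (it would be 2^31), and idiv raises a divide error for it just
+ // as it does for division by zero, hence the deopt before the idiv below.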
+ if (hdiv->CheckFlag(HValue::kCanOverflow)) {
+ Label dividend_not_min_int;
+ __ cmp(dividend, kMinInt);
+ __ j(not_zero, &dividend_not_min_int, Label::kNear);
+ __ cmp(divisor, -1);
+ DeoptimizeIf(zero, instr->environment());
+ __ bind(&dividend_not_min_int);
+ }
+
+ // Sign extend to edx (= remainder).
+ __ cdq();
+ __ idiv(divisor);
+
+ Label done;
+ __ test(remainder, remainder);
+ __ j(zero, &done, Label::kNear);
+ __ xor_(remainder, divisor);
+ __ sar(remainder, 31);
+ __ add(result, remainder);
+ __ bind(&done);
+}
+
+
+void LCodeGen::DoMulI(LMulI* instr) {
+ Register left = ToRegister(instr->left());
+ LOperand* right = instr->right();
+
+ if (instr->hydrogen()->CheckFlag(HValue::kBailoutOnMinusZero)) {
+ __ mov(ToRegister(instr->temp()), left);
+ }
+
+ if (right->IsConstantOperand()) {
+ // Try strength reductions on the multiplication.
+ // All replacement instructions are at most as long as the imul
+ // and have better latency.
+ int constant = ToInteger32(LConstantOperand::cast(right));
+ if (constant == -1) {
+ __ neg(left);
+ } else if (constant == 0) {
+ __ xor_(left, Operand(left));
+ } else if (constant == 2) {
+ __ add(left, Operand(left));
+ } else if (!instr->hydrogen()->CheckFlag(HValue::kCanOverflow)) {
+ // If we know that the multiplication can't overflow, it's safe to
+ // use instructions that don't set the overflow flag for the
+ // multiplication.
+ switch (constant) {
+ case 1:
+ // Do nothing.
+ break;
+ case 3:
+ __ lea(left, Operand(left, left, times_2, 0));
+ break;
+ case 4:
+ __ shl(left, 2);
+ break;
+ case 5:
+ __ lea(left, Operand(left, left, times_4, 0));
+ break;
+ case 8:
+ __ shl(left, 3);
+ break;
+ case 9:
+ __ lea(left, Operand(left, left, times_8, 0));
+ break;
+ case 16:
+ __ shl(left, 4);
+ break;
+ default:
+ __ imul(left, left, constant);
+ break;
+ }
+ } else {
+ __ imul(left, left, constant);
+ }
+ } else {
+ if (instr->hydrogen()->representation().IsSmi()) {
+ __ SmiUntag(left);
+ }
+ __ imul(left, ToOperand(right));
+ }
+
+ if (instr->hydrogen()->CheckFlag(HValue::kCanOverflow)) {
+ DeoptimizeIf(overflow, instr->environment());
+ }
+
+ if (instr->hydrogen()->CheckFlag(HValue::kBailoutOnMinusZero)) {
+ // Bail out if the result is supposed to be negative zero.
+ Label done;
+ __ test(left, Operand(left));
+ __ j(not_zero, &done, Label::kNear);
+ if (right->IsConstantOperand()) {
+ if (ToInteger32(LConstantOperand::cast(right)) < 0) {
+ DeoptimizeIf(no_condition, instr->environment());
+ } else if (ToInteger32(LConstantOperand::cast(right)) == 0) {
+ __ cmp(ToRegister(instr->temp()), Immediate(0));
+ DeoptimizeIf(less, instr->environment());
+ }
+ } else {
+ // Test the non-zero operand for negative sign.
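+ // temp still holds the original left operand here. If the product is zero
+ // but either factor was negative, the exact result is -0 (e.g. 0 * -3 in
+ // JavaScript), which an int32 cannot represent; or-ing the operands makes
+ // the sign flag say "at least one factor was negative" and triggers the
+ // deopt.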
+ __ or_(ToRegister(instr->temp()), ToOperand(right)); + DeoptimizeIf(sign, instr->environment()); + } + __ bind(&done); + } +} + + +void LCodeGen::DoBitI(LBitI* instr) { + LOperand* left = instr->left(); + LOperand* right = instr->right(); + DCHECK(left->Equals(instr->result())); + DCHECK(left->IsRegister()); + + if (right->IsConstantOperand()) { + int32_t right_operand = + ToRepresentation(LConstantOperand::cast(right), + instr->hydrogen()->representation()); + switch (instr->op()) { + case Token::BIT_AND: + __ and_(ToRegister(left), right_operand); + break; + case Token::BIT_OR: + __ or_(ToRegister(left), right_operand); + break; + case Token::BIT_XOR: + if (right_operand == int32_t(~0)) { + __ not_(ToRegister(left)); + } else { + __ xor_(ToRegister(left), right_operand); + } + break; + default: + UNREACHABLE(); + break; + } + } else { + switch (instr->op()) { + case Token::BIT_AND: + __ and_(ToRegister(left), ToOperand(right)); + break; + case Token::BIT_OR: + __ or_(ToRegister(left), ToOperand(right)); + break; + case Token::BIT_XOR: + __ xor_(ToRegister(left), ToOperand(right)); + break; + default: + UNREACHABLE(); + break; + } + } +} + + +void LCodeGen::DoShiftI(LShiftI* instr) { + LOperand* left = instr->left(); + LOperand* right = instr->right(); + DCHECK(left->Equals(instr->result())); + DCHECK(left->IsRegister()); + if (right->IsRegister()) { + DCHECK(ToRegister(right).is(ecx)); + + switch (instr->op()) { + case Token::ROR: + __ ror_cl(ToRegister(left)); + if (instr->can_deopt()) { + __ test(ToRegister(left), ToRegister(left)); + DeoptimizeIf(sign, instr->environment()); + } + break; + case Token::SAR: + __ sar_cl(ToRegister(left)); + break; + case Token::SHR: + __ shr_cl(ToRegister(left)); + if (instr->can_deopt()) { + __ test(ToRegister(left), ToRegister(left)); + DeoptimizeIf(sign, instr->environment()); + } + break; + case Token::SHL: + __ shl_cl(ToRegister(left)); + break; + default: + UNREACHABLE(); + break; + } + } else { + int value = ToInteger32(LConstantOperand::cast(right)); + uint8_t shift_count = static_cast<uint8_t>(value & 0x1F); + switch (instr->op()) { + case Token::ROR: + if (shift_count == 0 && instr->can_deopt()) { + __ test(ToRegister(left), ToRegister(left)); + DeoptimizeIf(sign, instr->environment()); + } else { + __ ror(ToRegister(left), shift_count); + } + break; + case Token::SAR: + if (shift_count != 0) { + __ sar(ToRegister(left), shift_count); + } + break; + case Token::SHR: + if (shift_count != 0) { + __ shr(ToRegister(left), shift_count); + } else if (instr->can_deopt()) { + __ test(ToRegister(left), ToRegister(left)); + DeoptimizeIf(sign, instr->environment()); + } + break; + case Token::SHL: + if (shift_count != 0) { + if (instr->hydrogen_value()->representation().IsSmi() && + instr->can_deopt()) { + if (shift_count != 1) { + __ shl(ToRegister(left), shift_count - 1); + } + __ SmiTag(ToRegister(left)); + DeoptimizeIf(overflow, instr->environment()); + } else { + __ shl(ToRegister(left), shift_count); + } + } + break; + default: + UNREACHABLE(); + break; + } + } +} + + +void LCodeGen::DoSubI(LSubI* instr) { + LOperand* left = instr->left(); + LOperand* right = instr->right(); + DCHECK(left->Equals(instr->result())); + + if (right->IsConstantOperand()) { + __ sub(ToOperand(left), + ToImmediate(right, instr->hydrogen()->representation())); + } else { + __ sub(ToRegister(left), ToOperand(right)); + } + if (instr->hydrogen()->CheckFlag(HValue::kCanOverflow)) { + DeoptimizeIf(overflow, instr->environment()); + } +} + + +void 
LCodeGen::DoConstantI(LConstantI* instr) { + __ Move(ToRegister(instr->result()), Immediate(instr->value())); +} + + +void LCodeGen::DoConstantS(LConstantS* instr) { + __ Move(ToRegister(instr->result()), Immediate(instr->value())); +} + + +void LCodeGen::DoConstantD(LConstantD* instr) { + double v = instr->value(); + uint64_t int_val = BitCast<uint64_t, double>(v); + int32_t lower = static_cast<int32_t>(int_val); + int32_t upper = static_cast<int32_t>(int_val >> (kBitsPerInt)); + DCHECK(instr->result()->IsDoubleRegister()); + + __ push(Immediate(upper)); + __ push(Immediate(lower)); + X87Register reg = ToX87Register(instr->result()); + X87Mov(reg, Operand(esp, 0)); + __ add(Operand(esp), Immediate(kDoubleSize)); +} + + +void LCodeGen::DoConstantE(LConstantE* instr) { + __ lea(ToRegister(instr->result()), Operand::StaticVariable(instr->value())); +} + + +void LCodeGen::DoConstantT(LConstantT* instr) { + Register reg = ToRegister(instr->result()); + Handle<Object> object = instr->value(isolate()); + AllowDeferredHandleDereference smi_check; + __ LoadObject(reg, object); +} + + +void LCodeGen::DoMapEnumLength(LMapEnumLength* instr) { + Register result = ToRegister(instr->result()); + Register map = ToRegister(instr->value()); + __ EnumLength(result, map); +} + + +void LCodeGen::DoDateField(LDateField* instr) { + Register object = ToRegister(instr->date()); + Register result = ToRegister(instr->result()); + Register scratch = ToRegister(instr->temp()); + Smi* index = instr->index(); + Label runtime, done; + DCHECK(object.is(result)); + DCHECK(object.is(eax)); + + __ test(object, Immediate(kSmiTagMask)); + DeoptimizeIf(zero, instr->environment()); + __ CmpObjectType(object, JS_DATE_TYPE, scratch); + DeoptimizeIf(not_equal, instr->environment()); + + if (index->value() == 0) { + __ mov(result, FieldOperand(object, JSDate::kValueOffset)); + } else { + if (index->value() < JSDate::kFirstUncachedField) { + ExternalReference stamp = ExternalReference::date_cache_stamp(isolate()); + __ mov(scratch, Operand::StaticVariable(stamp)); + __ cmp(scratch, FieldOperand(object, JSDate::kCacheStampOffset)); + __ j(not_equal, &runtime, Label::kNear); + __ mov(result, FieldOperand(object, JSDate::kValueOffset + + kPointerSize * index->value())); + __ jmp(&done, Label::kNear); + } + __ bind(&runtime); + __ PrepareCallCFunction(2, scratch); + __ mov(Operand(esp, 0), object); + __ mov(Operand(esp, 1 * kPointerSize), Immediate(index)); + __ CallCFunction(ExternalReference::get_date_field_function(isolate()), 2); + __ bind(&done); + } +} + + +Operand LCodeGen::BuildSeqStringOperand(Register string, + LOperand* index, + String::Encoding encoding) { + if (index->IsConstantOperand()) { + int offset = ToRepresentation(LConstantOperand::cast(index), + Representation::Integer32()); + if (encoding == String::TWO_BYTE_ENCODING) { + offset *= kUC16Size; + } + STATIC_ASSERT(kCharSize == 1); + return FieldOperand(string, SeqString::kHeaderSize + offset); + } + return FieldOperand( + string, ToRegister(index), + encoding == String::ONE_BYTE_ENCODING ? 
times_1 : times_2, + SeqString::kHeaderSize); +} + + +void LCodeGen::DoSeqStringGetChar(LSeqStringGetChar* instr) { + String::Encoding encoding = instr->hydrogen()->encoding(); + Register result = ToRegister(instr->result()); + Register string = ToRegister(instr->string()); + + if (FLAG_debug_code) { + __ push(string); + __ mov(string, FieldOperand(string, HeapObject::kMapOffset)); + __ movzx_b(string, FieldOperand(string, Map::kInstanceTypeOffset)); + + __ and_(string, Immediate(kStringRepresentationMask | kStringEncodingMask)); + static const uint32_t one_byte_seq_type = kSeqStringTag | kOneByteStringTag; + static const uint32_t two_byte_seq_type = kSeqStringTag | kTwoByteStringTag; + __ cmp(string, Immediate(encoding == String::ONE_BYTE_ENCODING + ? one_byte_seq_type : two_byte_seq_type)); + __ Check(equal, kUnexpectedStringType); + __ pop(string); + } + + Operand operand = BuildSeqStringOperand(string, instr->index(), encoding); + if (encoding == String::ONE_BYTE_ENCODING) { + __ movzx_b(result, operand); + } else { + __ movzx_w(result, operand); + } +} + + +void LCodeGen::DoSeqStringSetChar(LSeqStringSetChar* instr) { + String::Encoding encoding = instr->hydrogen()->encoding(); + Register string = ToRegister(instr->string()); + + if (FLAG_debug_code) { + Register value = ToRegister(instr->value()); + Register index = ToRegister(instr->index()); + static const uint32_t one_byte_seq_type = kSeqStringTag | kOneByteStringTag; + static const uint32_t two_byte_seq_type = kSeqStringTag | kTwoByteStringTag; + int encoding_mask = + instr->hydrogen()->encoding() == String::ONE_BYTE_ENCODING + ? one_byte_seq_type : two_byte_seq_type; + __ EmitSeqStringSetCharCheck(string, index, value, encoding_mask); + } + + Operand operand = BuildSeqStringOperand(string, instr->index(), encoding); + if (instr->value()->IsConstantOperand()) { + int value = ToRepresentation(LConstantOperand::cast(instr->value()), + Representation::Integer32()); + DCHECK_LE(0, value); + if (encoding == String::ONE_BYTE_ENCODING) { + DCHECK_LE(value, String::kMaxOneByteCharCode); + __ mov_b(operand, static_cast<int8_t>(value)); + } else { + DCHECK_LE(value, String::kMaxUtf16CodeUnit); + __ mov_w(operand, static_cast<int16_t>(value)); + } + } else { + Register value = ToRegister(instr->value()); + if (encoding == String::ONE_BYTE_ENCODING) { + __ mov_b(operand, value); + } else { + __ mov_w(operand, value); + } + } +} + + +void LCodeGen::DoAddI(LAddI* instr) { + LOperand* left = instr->left(); + LOperand* right = instr->right(); + + if (LAddI::UseLea(instr->hydrogen()) && !left->Equals(instr->result())) { + if (right->IsConstantOperand()) { + int32_t offset = ToRepresentation(LConstantOperand::cast(right), + instr->hydrogen()->representation()); + __ lea(ToRegister(instr->result()), MemOperand(ToRegister(left), offset)); + } else { + Operand address(ToRegister(left), ToRegister(right), times_1, 0); + __ lea(ToRegister(instr->result()), address); + } + } else { + if (right->IsConstantOperand()) { + __ add(ToOperand(left), + ToImmediate(right, instr->hydrogen()->representation())); + } else { + __ add(ToRegister(left), ToOperand(right)); + } + if (instr->hydrogen()->CheckFlag(HValue::kCanOverflow)) { + DeoptimizeIf(overflow, instr->environment()); + } + } +} + + +void LCodeGen::DoMathMinMax(LMathMinMax* instr) { + LOperand* left = instr->left(); + LOperand* right = instr->right(); + DCHECK(left->Equals(instr->result())); + HMathMinMax::Operation operation = instr->hydrogen()->operation(); + if 
(instr->hydrogen()->representation().IsSmiOrInteger32()) { + Label return_left; + Condition condition = (operation == HMathMinMax::kMathMin) + ? less_equal + : greater_equal; + if (right->IsConstantOperand()) { + Operand left_op = ToOperand(left); + Immediate immediate = ToImmediate(LConstantOperand::cast(instr->right()), + instr->hydrogen()->representation()); + __ cmp(left_op, immediate); + __ j(condition, &return_left, Label::kNear); + __ mov(left_op, immediate); + } else { + Register left_reg = ToRegister(left); + Operand right_op = ToOperand(right); + __ cmp(left_reg, right_op); + __ j(condition, &return_left, Label::kNear); + __ mov(left_reg, right_op); + } + __ bind(&return_left); + } else { + // TODO(weiliang) use X87 for double representation. + UNIMPLEMENTED(); + } +} + + +void LCodeGen::DoArithmeticD(LArithmeticD* instr) { + X87Register left = ToX87Register(instr->left()); + X87Register right = ToX87Register(instr->right()); + X87Register result = ToX87Register(instr->result()); + if (instr->op() != Token::MOD) { + X87PrepareBinaryOp(left, right, result); + } + switch (instr->op()) { + case Token::ADD: + __ fadd_i(1); + break; + case Token::SUB: + __ fsub_i(1); + break; + case Token::MUL: + __ fmul_i(1); + break; + case Token::DIV: + __ fdiv_i(1); + break; + case Token::MOD: { + // Pass two doubles as arguments on the stack. + __ PrepareCallCFunction(4, eax); + X87Mov(Operand(esp, 1 * kDoubleSize), right); + X87Mov(Operand(esp, 0), left); + X87Free(right); + DCHECK(left.is(result)); + X87PrepareToWrite(result); + __ CallCFunction( + ExternalReference::mod_two_doubles_operation(isolate()), + 4); + + // Return value is in st(0) on ia32. + X87CommitWrite(result); + break; + } + default: + UNREACHABLE(); + break; + } +} + + +void LCodeGen::DoArithmeticT(LArithmeticT* instr) { + DCHECK(ToRegister(instr->context()).is(esi)); + DCHECK(ToRegister(instr->left()).is(edx)); + DCHECK(ToRegister(instr->right()).is(eax)); + DCHECK(ToRegister(instr->result()).is(eax)); + + BinaryOpICStub stub(isolate(), instr->op(), NO_OVERWRITE); + CallCode(stub.GetCode(), RelocInfo::CODE_TARGET, instr); +} + + +template<class InstrType> +void LCodeGen::EmitBranch(InstrType instr, Condition cc) { + int left_block = instr->TrueDestination(chunk_); + int right_block = instr->FalseDestination(chunk_); + + int next_block = GetNextEmittedBlock(); + + if (right_block == left_block || cc == no_condition) { + EmitGoto(left_block); + } else if (left_block == next_block) { + __ j(NegateCondition(cc), chunk_->GetAssemblyLabel(right_block)); + } else if (right_block == next_block) { + __ j(cc, chunk_->GetAssemblyLabel(left_block)); + } else { + __ j(cc, chunk_->GetAssemblyLabel(left_block)); + __ jmp(chunk_->GetAssemblyLabel(right_block)); + } +} + + +template<class InstrType> +void LCodeGen::EmitFalseBranch(InstrType instr, Condition cc) { + int false_block = instr->FalseDestination(chunk_); + if (cc == no_condition) { + __ jmp(chunk_->GetAssemblyLabel(false_block)); + } else { + __ j(cc, chunk_->GetAssemblyLabel(false_block)); + } +} + + +void LCodeGen::DoBranch(LBranch* instr) { + Representation r = instr->hydrogen()->value()->representation(); + if (r.IsSmiOrInteger32()) { + Register reg = ToRegister(instr->value()); + __ test(reg, Operand(reg)); + EmitBranch(instr, not_zero); + } else if (r.IsDouble()) { + UNREACHABLE(); + } else { + DCHECK(r.IsTagged()); + Register reg = ToRegister(instr->value()); + HType type = instr->hydrogen()->value()->type(); + if (type.IsBoolean()) { + DCHECK(!info()->IsStub()); + __ 
cmp(reg, factory()->true_value());
+      EmitBranch(instr, equal);
+    } else if (type.IsSmi()) {
+      DCHECK(!info()->IsStub());
+      __ test(reg, Operand(reg));
+      EmitBranch(instr, not_equal);
+    } else if (type.IsJSArray()) {
+      DCHECK(!info()->IsStub());
+      EmitBranch(instr, no_condition);
+    } else if (type.IsHeapNumber()) {
+      UNREACHABLE();
+    } else if (type.IsString()) {
+      DCHECK(!info()->IsStub());
+      __ cmp(FieldOperand(reg, String::kLengthOffset), Immediate(0));
+      EmitBranch(instr, not_equal);
+    } else {
+      ToBooleanStub::Types expected = instr->hydrogen()->expected_input_types();
+      if (expected.IsEmpty()) expected = ToBooleanStub::Types::Generic();
+
+      if (expected.Contains(ToBooleanStub::UNDEFINED)) {
+        // undefined -> false.
+        __ cmp(reg, factory()->undefined_value());
+        __ j(equal, instr->FalseLabel(chunk_));
+      }
+      if (expected.Contains(ToBooleanStub::BOOLEAN)) {
+        // true -> true.
+        __ cmp(reg, factory()->true_value());
+        __ j(equal, instr->TrueLabel(chunk_));
+        // false -> false.
+        __ cmp(reg, factory()->false_value());
+        __ j(equal, instr->FalseLabel(chunk_));
+      }
+      if (expected.Contains(ToBooleanStub::NULL_TYPE)) {
+        // 'null' -> false.
+        __ cmp(reg, factory()->null_value());
+        __ j(equal, instr->FalseLabel(chunk_));
+      }
+
+      if (expected.Contains(ToBooleanStub::SMI)) {
+        // Smis: 0 -> false, all other -> true.
+        __ test(reg, Operand(reg));
+        __ j(equal, instr->FalseLabel(chunk_));
+        __ JumpIfSmi(reg, instr->TrueLabel(chunk_));
+      } else if (expected.NeedsMap()) {
+        // If we need a map later and have a Smi -> deopt.
+        __ test(reg, Immediate(kSmiTagMask));
+        DeoptimizeIf(zero, instr->environment());
+      }
+
+      Register map = no_reg;  // Keep the compiler happy.
+      if (expected.NeedsMap()) {
+        map = ToRegister(instr->temp());
+        DCHECK(!map.is(reg));
+        __ mov(map, FieldOperand(reg, HeapObject::kMapOffset));
+
+        if (expected.CanBeUndetectable()) {
+          // Undetectable -> false.
+          __ test_b(FieldOperand(map, Map::kBitFieldOffset),
+                    1 << Map::kIsUndetectable);
+          __ j(not_zero, instr->FalseLabel(chunk_));
+        }
+      }
+
+      if (expected.Contains(ToBooleanStub::SPEC_OBJECT)) {
+        // spec object -> true.
+        __ CmpInstanceType(map, FIRST_SPEC_OBJECT_TYPE);
+        __ j(above_equal, instr->TrueLabel(chunk_));
+      }
+
+      if (expected.Contains(ToBooleanStub::STRING)) {
+        // String value -> false iff empty.
+        Label not_string;
+        __ CmpInstanceType(map, FIRST_NONSTRING_TYPE);
+        __ j(above_equal, &not_string, Label::kNear);
+        __ cmp(FieldOperand(reg, String::kLengthOffset), Immediate(0));
+        __ j(not_zero, instr->TrueLabel(chunk_));
+        __ jmp(instr->FalseLabel(chunk_));
+        __ bind(&not_string);
+      }
+
+      if (expected.Contains(ToBooleanStub::SYMBOL)) {
+        // Symbol value -> true.
+        __ CmpInstanceType(map, SYMBOL_TYPE);
+        __ j(equal, instr->TrueLabel(chunk_));
+      }
+
+      if (expected.Contains(ToBooleanStub::HEAP_NUMBER)) {
+        // heap number -> false iff +0, -0, or NaN.
+        Label not_heap_number;
+        __ cmp(FieldOperand(reg, HeapObject::kMapOffset),
+               factory()->heap_number_map());
+        __ j(not_equal, &not_heap_number, Label::kNear);
+        __ fldz();
+        __ fld_d(FieldOperand(reg, HeapNumber::kValueOffset));
+        __ FCmp();
+        __ j(zero, instr->FalseLabel(chunk_));
+        __ jmp(instr->TrueLabel(chunk_));
+        __ bind(&not_heap_number);
+      }
+
+      if (!expected.IsGeneric()) {
+        // We've seen something for the first time -> deopt.
+        // This can only happen if we are not generic already.
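+        // Deoptimizing re-enters the unoptimized code, whose ToBoolean IC
+        // then records the newly seen type before the next optimization.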
+ DeoptimizeIf(no_condition, instr->environment()); + } + } + } +} + + +void LCodeGen::EmitGoto(int block) { + if (!IsNextEmittedBlock(block)) { + __ jmp(chunk_->GetAssemblyLabel(LookupDestination(block))); + } +} + + +void LCodeGen::DoClobberDoubles(LClobberDoubles* instr) { +} + + +void LCodeGen::DoGoto(LGoto* instr) { + EmitGoto(instr->block_id()); +} + + +Condition LCodeGen::TokenToCondition(Token::Value op, bool is_unsigned) { + Condition cond = no_condition; + switch (op) { + case Token::EQ: + case Token::EQ_STRICT: + cond = equal; + break; + case Token::NE: + case Token::NE_STRICT: + cond = not_equal; + break; + case Token::LT: + cond = is_unsigned ? below : less; + break; + case Token::GT: + cond = is_unsigned ? above : greater; + break; + case Token::LTE: + cond = is_unsigned ? below_equal : less_equal; + break; + case Token::GTE: + cond = is_unsigned ? above_equal : greater_equal; + break; + case Token::IN: + case Token::INSTANCEOF: + default: + UNREACHABLE(); + } + return cond; +} + + +void LCodeGen::DoCompareNumericAndBranch(LCompareNumericAndBranch* instr) { + LOperand* left = instr->left(); + LOperand* right = instr->right(); + bool is_unsigned = + instr->is_double() || + instr->hydrogen()->left()->CheckFlag(HInstruction::kUint32) || + instr->hydrogen()->right()->CheckFlag(HInstruction::kUint32); + Condition cc = TokenToCondition(instr->op(), is_unsigned); + + if (left->IsConstantOperand() && right->IsConstantOperand()) { + // We can statically evaluate the comparison. + double left_val = ToDouble(LConstantOperand::cast(left)); + double right_val = ToDouble(LConstantOperand::cast(right)); + int next_block = EvalComparison(instr->op(), left_val, right_val) ? + instr->TrueDestination(chunk_) : instr->FalseDestination(chunk_); + EmitGoto(next_block); + } else { + if (instr->is_double()) { + X87LoadForUsage(ToX87Register(right), ToX87Register(left)); + __ FCmp(); + // Don't base result on EFLAGS when a NaN is involved. Instead + // jump to the false block. + __ j(parity_even, instr->FalseLabel(chunk_)); + } else { + if (right->IsConstantOperand()) { + __ cmp(ToOperand(left), + ToImmediate(right, instr->hydrogen()->representation())); + } else if (left->IsConstantOperand()) { + __ cmp(ToOperand(right), + ToImmediate(left, instr->hydrogen()->representation())); + // We commuted the operands, so commute the condition. 
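+        // (The immediate has to be the second operand of cmp, so the
+        // comparison above was emitted with left and right swapped.)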
+ cc = CommuteCondition(cc); + } else { + __ cmp(ToRegister(left), ToOperand(right)); + } + } + EmitBranch(instr, cc); + } +} + + +void LCodeGen::DoCmpObjectEqAndBranch(LCmpObjectEqAndBranch* instr) { + Register left = ToRegister(instr->left()); + + if (instr->right()->IsConstantOperand()) { + Handle<Object> right = ToHandle(LConstantOperand::cast(instr->right())); + __ CmpObject(left, right); + } else { + Operand right = ToOperand(instr->right()); + __ cmp(left, right); + } + EmitBranch(instr, equal); +} + + +void LCodeGen::DoCmpHoleAndBranch(LCmpHoleAndBranch* instr) { + if (instr->hydrogen()->representation().IsTagged()) { + Register input_reg = ToRegister(instr->object()); + __ cmp(input_reg, factory()->the_hole_value()); + EmitBranch(instr, equal); + return; + } + + // Put the value to the top of stack + X87Register src = ToX87Register(instr->object()); + X87LoadForUsage(src); + __ fld(0); + __ fld(0); + __ FCmp(); + Label ok; + __ j(parity_even, &ok, Label::kNear); + __ fstp(0); + EmitFalseBranch(instr, no_condition); + __ bind(&ok); + + + __ sub(esp, Immediate(kDoubleSize)); + __ fstp_d(MemOperand(esp, 0)); + + __ add(esp, Immediate(kDoubleSize)); + int offset = sizeof(kHoleNanUpper32); + __ cmp(MemOperand(esp, -offset), Immediate(kHoleNanUpper32)); + EmitBranch(instr, equal); +} + + +void LCodeGen::DoCompareMinusZeroAndBranch(LCompareMinusZeroAndBranch* instr) { + Representation rep = instr->hydrogen()->value()->representation(); + DCHECK(!rep.IsInteger32()); + + if (rep.IsDouble()) { + UNREACHABLE(); + } else { + Register value = ToRegister(instr->value()); + Handle<Map> map = masm()->isolate()->factory()->heap_number_map(); + __ CheckMap(value, map, instr->FalseLabel(chunk()), DO_SMI_CHECK); + __ cmp(FieldOperand(value, HeapNumber::kExponentOffset), + Immediate(0x1)); + EmitFalseBranch(instr, no_overflow); + __ cmp(FieldOperand(value, HeapNumber::kMantissaOffset), + Immediate(0x00000000)); + EmitBranch(instr, equal); + } +} + + +Condition LCodeGen::EmitIsObject(Register input, + Register temp1, + Label* is_not_object, + Label* is_object) { + __ JumpIfSmi(input, is_not_object); + + __ cmp(input, isolate()->factory()->null_value()); + __ j(equal, is_object); + + __ mov(temp1, FieldOperand(input, HeapObject::kMapOffset)); + // Undetectable objects behave like undefined. + __ test_b(FieldOperand(temp1, Map::kBitFieldOffset), + 1 << Map::kIsUndetectable); + __ j(not_zero, is_not_object); + + __ movzx_b(temp1, FieldOperand(temp1, Map::kInstanceTypeOffset)); + __ cmp(temp1, FIRST_NONCALLABLE_SPEC_OBJECT_TYPE); + __ j(below, is_not_object); + __ cmp(temp1, LAST_NONCALLABLE_SPEC_OBJECT_TYPE); + return below_equal; +} + + +void LCodeGen::DoIsObjectAndBranch(LIsObjectAndBranch* instr) { + Register reg = ToRegister(instr->value()); + Register temp = ToRegister(instr->temp()); + + Condition true_cond = EmitIsObject( + reg, temp, instr->FalseLabel(chunk_), instr->TrueLabel(chunk_)); + + EmitBranch(instr, true_cond); +} + + +Condition LCodeGen::EmitIsString(Register input, + Register temp1, + Label* is_not_string, + SmiCheck check_needed = INLINE_SMI_CHECK) { + if (check_needed == INLINE_SMI_CHECK) { + __ JumpIfSmi(input, is_not_string); + } + + Condition cond = masm_->IsObjectStringType(input, temp1, temp1); + + return cond; +} + + +void LCodeGen::DoIsStringAndBranch(LIsStringAndBranch* instr) { + Register reg = ToRegister(instr->value()); + Register temp = ToRegister(instr->temp()); + + SmiCheck check_needed = + instr->hydrogen()->value()->type().IsHeapObject() + ? 
OMIT_SMI_CHECK : INLINE_SMI_CHECK; + + Condition true_cond = EmitIsString( + reg, temp, instr->FalseLabel(chunk_), check_needed); + + EmitBranch(instr, true_cond); +} + + +void LCodeGen::DoIsSmiAndBranch(LIsSmiAndBranch* instr) { + Operand input = ToOperand(instr->value()); + + __ test(input, Immediate(kSmiTagMask)); + EmitBranch(instr, zero); +} + + +void LCodeGen::DoIsUndetectableAndBranch(LIsUndetectableAndBranch* instr) { + Register input = ToRegister(instr->value()); + Register temp = ToRegister(instr->temp()); + + if (!instr->hydrogen()->value()->type().IsHeapObject()) { + STATIC_ASSERT(kSmiTag == 0); + __ JumpIfSmi(input, instr->FalseLabel(chunk_)); + } + __ mov(temp, FieldOperand(input, HeapObject::kMapOffset)); + __ test_b(FieldOperand(temp, Map::kBitFieldOffset), + 1 << Map::kIsUndetectable); + EmitBranch(instr, not_zero); +} + + +static Condition ComputeCompareCondition(Token::Value op) { + switch (op) { + case Token::EQ_STRICT: + case Token::EQ: + return equal; + case Token::LT: + return less; + case Token::GT: + return greater; + case Token::LTE: + return less_equal; + case Token::GTE: + return greater_equal; + default: + UNREACHABLE(); + return no_condition; + } +} + + +void LCodeGen::DoStringCompareAndBranch(LStringCompareAndBranch* instr) { + Token::Value op = instr->op(); + + Handle<Code> ic = CompareIC::GetUninitialized(isolate(), op); + CallCode(ic, RelocInfo::CODE_TARGET, instr); + + Condition condition = ComputeCompareCondition(op); + __ test(eax, Operand(eax)); + + EmitBranch(instr, condition); +} + + +static InstanceType TestType(HHasInstanceTypeAndBranch* instr) { + InstanceType from = instr->from(); + InstanceType to = instr->to(); + if (from == FIRST_TYPE) return to; + DCHECK(from == to || to == LAST_TYPE); + return from; +} + + +static Condition BranchCondition(HHasInstanceTypeAndBranch* instr) { + InstanceType from = instr->from(); + InstanceType to = instr->to(); + if (from == to) return equal; + if (to == LAST_TYPE) return above_equal; + if (from == FIRST_TYPE) return below_equal; + UNREACHABLE(); + return equal; +} + + +void LCodeGen::DoHasInstanceTypeAndBranch(LHasInstanceTypeAndBranch* instr) { + Register input = ToRegister(instr->value()); + Register temp = ToRegister(instr->temp()); + + if (!instr->hydrogen()->value()->type().IsHeapObject()) { + __ JumpIfSmi(input, instr->FalseLabel(chunk_)); + } + + __ CmpObjectType(input, TestType(instr->hydrogen()), temp); + EmitBranch(instr, BranchCondition(instr->hydrogen())); +} + + +void LCodeGen::DoGetCachedArrayIndex(LGetCachedArrayIndex* instr) { + Register input = ToRegister(instr->value()); + Register result = ToRegister(instr->result()); + + __ AssertString(input); + + __ mov(result, FieldOperand(input, String::kHashFieldOffset)); + __ IndexFromHash(result, result); +} + + +void LCodeGen::DoHasCachedArrayIndexAndBranch( + LHasCachedArrayIndexAndBranch* instr) { + Register input = ToRegister(instr->value()); + + __ test(FieldOperand(input, String::kHashFieldOffset), + Immediate(String::kContainsCachedArrayIndexMask)); + EmitBranch(instr, equal); +} + + +// Branches to a label or falls through with the answer in the z flag. Trashes +// the temp registers, but not the input. 
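+// Callers branch on the z flag afterwards, e.g. DoClassOfTestAndBranch
+// follows this with EmitBranch(instr, equal).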
+void LCodeGen::EmitClassOfTest(Label* is_true, + Label* is_false, + Handle<String>class_name, + Register input, + Register temp, + Register temp2) { + DCHECK(!input.is(temp)); + DCHECK(!input.is(temp2)); + DCHECK(!temp.is(temp2)); + __ JumpIfSmi(input, is_false); + + if (class_name->IsOneByteEqualTo(STATIC_ASCII_VECTOR("Function"))) { + // Assuming the following assertions, we can use the same compares to test + // for both being a function type and being in the object type range. + STATIC_ASSERT(NUM_OF_CALLABLE_SPEC_OBJECT_TYPES == 2); + STATIC_ASSERT(FIRST_NONCALLABLE_SPEC_OBJECT_TYPE == + FIRST_SPEC_OBJECT_TYPE + 1); + STATIC_ASSERT(LAST_NONCALLABLE_SPEC_OBJECT_TYPE == + LAST_SPEC_OBJECT_TYPE - 1); + STATIC_ASSERT(LAST_SPEC_OBJECT_TYPE == LAST_TYPE); + __ CmpObjectType(input, FIRST_SPEC_OBJECT_TYPE, temp); + __ j(below, is_false); + __ j(equal, is_true); + __ CmpInstanceType(temp, LAST_SPEC_OBJECT_TYPE); + __ j(equal, is_true); + } else { + // Faster code path to avoid two compares: subtract lower bound from the + // actual type and do a signed compare with the width of the type range. + __ mov(temp, FieldOperand(input, HeapObject::kMapOffset)); + __ movzx_b(temp2, FieldOperand(temp, Map::kInstanceTypeOffset)); + __ sub(Operand(temp2), Immediate(FIRST_NONCALLABLE_SPEC_OBJECT_TYPE)); + __ cmp(Operand(temp2), Immediate(LAST_NONCALLABLE_SPEC_OBJECT_TYPE - + FIRST_NONCALLABLE_SPEC_OBJECT_TYPE)); + __ j(above, is_false); + } + + // Now we are in the FIRST-LAST_NONCALLABLE_SPEC_OBJECT_TYPE range. + // Check if the constructor in the map is a function. + __ mov(temp, FieldOperand(temp, Map::kConstructorOffset)); + // Objects with a non-function constructor have class 'Object'. + __ CmpObjectType(temp, JS_FUNCTION_TYPE, temp2); + if (class_name->IsOneByteEqualTo(STATIC_ASCII_VECTOR("Object"))) { + __ j(not_equal, is_true); + } else { + __ j(not_equal, is_false); + } + + // temp now contains the constructor function. Grab the + // instance class name from there. + __ mov(temp, FieldOperand(temp, JSFunction::kSharedFunctionInfoOffset)); + __ mov(temp, FieldOperand(temp, + SharedFunctionInfo::kInstanceClassNameOffset)); + // The class name we are testing against is internalized since it's a literal. + // The name in the constructor is internalized because of the way the context + // is booted. This routine isn't expected to work for random API-created + // classes and it doesn't have to because you can't access it with natives + // syntax. Since both sides are internalized it is sufficient to use an + // identity comparison. + __ cmp(temp, class_name); + // End with the answer in the z flag. +} + + +void LCodeGen::DoClassOfTestAndBranch(LClassOfTestAndBranch* instr) { + Register input = ToRegister(instr->value()); + Register temp = ToRegister(instr->temp()); + Register temp2 = ToRegister(instr->temp2()); + + Handle<String> class_name = instr->hydrogen()->class_name(); + + EmitClassOfTest(instr->TrueLabel(chunk_), instr->FalseLabel(chunk_), + class_name, input, temp, temp2); + + EmitBranch(instr, equal); +} + + +void LCodeGen::DoCmpMapAndBranch(LCmpMapAndBranch* instr) { + Register reg = ToRegister(instr->value()); + __ cmp(FieldOperand(reg, HeapObject::kMapOffset), instr->map()); + EmitBranch(instr, equal); +} + + +void LCodeGen::DoInstanceOf(LInstanceOf* instr) { + // Object and function are in fixed registers defined by the stub. 
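+  // The stub leaves 0 in eax when the object is an instance of the
+  // function, so a zero result selects true_value below.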
+  DCHECK(ToRegister(instr->context()).is(esi));
+  InstanceofStub stub(isolate(), InstanceofStub::kArgsInRegisters);
+  CallCode(stub.GetCode(), RelocInfo::CODE_TARGET, instr);
+
+  Label true_value, done;
+  __ test(eax, Operand(eax));
+  __ j(zero, &true_value, Label::kNear);
+  __ mov(ToRegister(instr->result()), factory()->false_value());
+  __ jmp(&done, Label::kNear);
+  __ bind(&true_value);
+  __ mov(ToRegister(instr->result()), factory()->true_value());
+  __ bind(&done);
+}
+
+
+void LCodeGen::DoInstanceOfKnownGlobal(LInstanceOfKnownGlobal* instr) {
+  class DeferredInstanceOfKnownGlobal V8_FINAL : public LDeferredCode {
+   public:
+    DeferredInstanceOfKnownGlobal(LCodeGen* codegen,
+                                  LInstanceOfKnownGlobal* instr,
+                                  const X87Stack& x87_stack)
+        : LDeferredCode(codegen, x87_stack), instr_(instr) { }
+    virtual void Generate() V8_OVERRIDE {
+      codegen()->DoDeferredInstanceOfKnownGlobal(instr_, &map_check_);
+    }
+    virtual LInstruction* instr() V8_OVERRIDE { return instr_; }
+    Label* map_check() { return &map_check_; }
+   private:
+    LInstanceOfKnownGlobal* instr_;
+    Label map_check_;
+  };
+
+  DeferredInstanceOfKnownGlobal* deferred;
+  deferred = new(zone()) DeferredInstanceOfKnownGlobal(this, instr, x87_stack_);
+
+  Label done, false_result;
+  Register object = ToRegister(instr->value());
+  Register temp = ToRegister(instr->temp());
+
+  // A Smi is not an instance of anything.
+  __ JumpIfSmi(object, &false_result, Label::kNear);
+
+  // This is the inlined call site instanceof cache. The two occurrences of the
+  // hole value will be patched to the last map/result pair generated by the
+  // instanceof stub.
+  Label cache_miss;
+  Register map = ToRegister(instr->temp());
+  __ mov(map, FieldOperand(object, HeapObject::kMapOffset));
+  __ bind(deferred->map_check());  // Label for calculating code patching.
+  Handle<Cell> cache_cell = factory()->NewCell(factory()->the_hole_value());
+  __ cmp(map, Operand::ForCell(cache_cell));  // Patched to cached map.
+  __ j(not_equal, &cache_miss, Label::kNear);
+  __ mov(eax, factory()->the_hole_value());  // Patched to either true or false.
+  __ jmp(&done, Label::kNear);
+
+  // The inlined call site cache did not match. Check for null and string
+  // before calling the deferred code.
+  __ bind(&cache_miss);
+  // Null is not an instance of anything.
+  __ cmp(object, factory()->null_value());
+  __ j(equal, &false_result, Label::kNear);
+
+  // String values are not instances of anything.
+  Condition is_string = masm_->IsObjectStringType(object, temp, temp);
+  __ j(is_string, &false_result, Label::kNear);
+
+  // Go to the deferred code.
+  __ jmp(deferred->entry());
+
+  __ bind(&false_result);
+  __ mov(ToRegister(instr->result()), factory()->false_value());
+
+  // Here result has either true or false. The deferred code also produces a
+  // true or false object.
+  __ bind(deferred->exit());
+  __ bind(&done);
+}
+
+
+void LCodeGen::DoDeferredInstanceOfKnownGlobal(LInstanceOfKnownGlobal* instr,
+                                               Label* map_check) {
+  PushSafepointRegistersScope scope(this);
+
+  InstanceofStub::Flags flags = InstanceofStub::kNoFlags;
+  flags = static_cast<InstanceofStub::Flags>(
+      flags | InstanceofStub::kArgsInRegisters);
+  flags = static_cast<InstanceofStub::Flags>(
+      flags | InstanceofStub::kCallSiteInlineCheck);
+  flags = static_cast<InstanceofStub::Flags>(
+      flags | InstanceofStub::kReturnTrueFalseObject);
+  InstanceofStub stub(isolate(), flags);
+
+  // Get the temp register reserved by the instruction.
This needs to be a + // register which is pushed last by PushSafepointRegisters as top of the + // stack is used to pass the offset to the location of the map check to + // the stub. + Register temp = ToRegister(instr->temp()); + DCHECK(MacroAssembler::SafepointRegisterStackIndex(temp) == 0); + __ LoadHeapObject(InstanceofStub::right(), instr->function()); + static const int kAdditionalDelta = 13; + int delta = masm_->SizeOfCodeGeneratedSince(map_check) + kAdditionalDelta; + __ mov(temp, Immediate(delta)); + __ StoreToSafepointRegisterSlot(temp, temp); + CallCodeGeneric(stub.GetCode(), + RelocInfo::CODE_TARGET, + instr, + RECORD_SAFEPOINT_WITH_REGISTERS_AND_NO_ARGUMENTS); + // Get the deoptimization index of the LLazyBailout-environment that + // corresponds to this instruction. + LEnvironment* env = instr->GetDeferredLazyDeoptimizationEnvironment(); + safepoints_.RecordLazyDeoptimizationIndex(env->deoptimization_index()); + + // Put the result value into the eax slot and restore all registers. + __ StoreToSafepointRegisterSlot(eax, eax); +} + + +void LCodeGen::DoCmpT(LCmpT* instr) { + Token::Value op = instr->op(); + + Handle<Code> ic = CompareIC::GetUninitialized(isolate(), op); + CallCode(ic, RelocInfo::CODE_TARGET, instr); + + Condition condition = ComputeCompareCondition(op); + Label true_value, done; + __ test(eax, Operand(eax)); + __ j(condition, &true_value, Label::kNear); + __ mov(ToRegister(instr->result()), factory()->false_value()); + __ jmp(&done, Label::kNear); + __ bind(&true_value); + __ mov(ToRegister(instr->result()), factory()->true_value()); + __ bind(&done); +} + + +void LCodeGen::EmitReturn(LReturn* instr, bool dynamic_frame_alignment) { + int extra_value_count = dynamic_frame_alignment ? 2 : 1; + + if (instr->has_constant_parameter_count()) { + int parameter_count = ToInteger32(instr->constant_parameter_count()); + if (dynamic_frame_alignment && FLAG_debug_code) { + __ cmp(Operand(esp, + (parameter_count + extra_value_count) * kPointerSize), + Immediate(kAlignmentZapValue)); + __ Assert(equal, kExpectedAlignmentMarker); + } + __ Ret((parameter_count + extra_value_count) * kPointerSize, ecx); + } else { + Register reg = ToRegister(instr->parameter_count()); + // The argument count parameter is a smi + __ SmiUntag(reg); + Register return_addr_reg = reg.is(ecx) ? ebx : ecx; + if (dynamic_frame_alignment && FLAG_debug_code) { + DCHECK(extra_value_count == 2); + __ cmp(Operand(esp, reg, times_pointer_size, + extra_value_count * kPointerSize), + Immediate(kAlignmentZapValue)); + __ Assert(equal, kExpectedAlignmentMarker); + } + + // emit code to restore stack based on instr->parameter_count() + __ pop(return_addr_reg); // save return address + if (dynamic_frame_alignment) { + __ inc(reg); // 1 more for alignment + } + __ shl(reg, kPointerSizeLog2); + __ add(esp, reg); + __ jmp(return_addr_reg); + } +} + + +void LCodeGen::DoReturn(LReturn* instr) { + if (FLAG_trace && info()->IsOptimizing()) { + // Preserve the return value on the stack and rely on the runtime call + // to return the value in the same register. We're leaving the code + // managed by the register allocator and tearing down the frame, it's + // safe to write to the context register. + __ push(eax); + __ mov(esi, Operand(ebp, StandardFrameConstants::kContextOffset)); + __ CallRuntime(Runtime::kTraceExit, 1); + } + if (dynamic_frame_alignment_) { + // Fetch the state of the dynamic frame alignment. 
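+    // The state word records whether an alignment padding slot was pushed
+    // at frame entry; if so, EmitReturn(instr, true) below pops one extra
+    // word before returning.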
+ __ mov(edx, Operand(ebp, + JavaScriptFrameConstants::kDynamicAlignmentStateOffset)); + } + int no_frame_start = -1; + if (NeedsEagerFrame()) { + __ mov(esp, ebp); + __ pop(ebp); + no_frame_start = masm_->pc_offset(); + } + if (dynamic_frame_alignment_) { + Label no_padding; + __ cmp(edx, Immediate(kNoAlignmentPadding)); + __ j(equal, &no_padding, Label::kNear); + + EmitReturn(instr, true); + __ bind(&no_padding); + } + + EmitReturn(instr, false); + if (no_frame_start != -1) { + info()->AddNoFrameRange(no_frame_start, masm_->pc_offset()); + } +} + + +void LCodeGen::DoLoadGlobalCell(LLoadGlobalCell* instr) { + Register result = ToRegister(instr->result()); + __ mov(result, Operand::ForCell(instr->hydrogen()->cell().handle())); + if (instr->hydrogen()->RequiresHoleCheck()) { + __ cmp(result, factory()->the_hole_value()); + DeoptimizeIf(equal, instr->environment()); + } +} + + +void LCodeGen::DoLoadGlobalGeneric(LLoadGlobalGeneric* instr) { + DCHECK(ToRegister(instr->context()).is(esi)); + DCHECK(ToRegister(instr->global_object()).is(LoadIC::ReceiverRegister())); + DCHECK(ToRegister(instr->result()).is(eax)); + + __ mov(LoadIC::NameRegister(), instr->name()); + if (FLAG_vector_ics) { + Register vector = ToRegister(instr->temp_vector()); + DCHECK(vector.is(LoadIC::VectorRegister())); + __ mov(vector, instr->hydrogen()->feedback_vector()); + // No need to allocate this register. + DCHECK(LoadIC::SlotRegister().is(eax)); + __ mov(LoadIC::SlotRegister(), + Immediate(Smi::FromInt(instr->hydrogen()->slot()))); + } + ContextualMode mode = instr->for_typeof() ? NOT_CONTEXTUAL : CONTEXTUAL; + Handle<Code> ic = LoadIC::initialize_stub(isolate(), mode); + CallCode(ic, RelocInfo::CODE_TARGET, instr); +} + + +void LCodeGen::DoStoreGlobalCell(LStoreGlobalCell* instr) { + Register value = ToRegister(instr->value()); + Handle<PropertyCell> cell_handle = instr->hydrogen()->cell().handle(); + + // If the cell we are storing to contains the hole it could have + // been deleted from the property dictionary. In that case, we need + // to update the property details in the property dictionary to mark + // it as no longer deleted. We deoptimize in that case. + if (instr->hydrogen()->RequiresHoleCheck()) { + __ cmp(Operand::ForCell(cell_handle), factory()->the_hole_value()); + DeoptimizeIf(equal, instr->environment()); + } + + // Store the value. + __ mov(Operand::ForCell(cell_handle), value); + // Cells are always rescanned, so no write barrier here. 
+} + + +void LCodeGen::DoLoadContextSlot(LLoadContextSlot* instr) { + Register context = ToRegister(instr->context()); + Register result = ToRegister(instr->result()); + __ mov(result, ContextOperand(context, instr->slot_index())); + + if (instr->hydrogen()->RequiresHoleCheck()) { + __ cmp(result, factory()->the_hole_value()); + if (instr->hydrogen()->DeoptimizesOnHole()) { + DeoptimizeIf(equal, instr->environment()); + } else { + Label is_not_hole; + __ j(not_equal, &is_not_hole, Label::kNear); + __ mov(result, factory()->undefined_value()); + __ bind(&is_not_hole); + } + } +} + + +void LCodeGen::DoStoreContextSlot(LStoreContextSlot* instr) { + Register context = ToRegister(instr->context()); + Register value = ToRegister(instr->value()); + + Label skip_assignment; + + Operand target = ContextOperand(context, instr->slot_index()); + if (instr->hydrogen()->RequiresHoleCheck()) { + __ cmp(target, factory()->the_hole_value()); + if (instr->hydrogen()->DeoptimizesOnHole()) { + DeoptimizeIf(equal, instr->environment()); + } else { + __ j(not_equal, &skip_assignment, Label::kNear); + } + } + + __ mov(target, value); + if (instr->hydrogen()->NeedsWriteBarrier()) { + SmiCheck check_needed = + instr->hydrogen()->value()->type().IsHeapObject() + ? OMIT_SMI_CHECK : INLINE_SMI_CHECK; + Register temp = ToRegister(instr->temp()); + int offset = Context::SlotOffset(instr->slot_index()); + __ RecordWriteContextSlot(context, + offset, + value, + temp, + EMIT_REMEMBERED_SET, + check_needed); + } + + __ bind(&skip_assignment); +} + + +void LCodeGen::DoLoadNamedField(LLoadNamedField* instr) { + HObjectAccess access = instr->hydrogen()->access(); + int offset = access.offset(); + + if (access.IsExternalMemory()) { + Register result = ToRegister(instr->result()); + MemOperand operand = instr->object()->IsConstantOperand() + ? MemOperand::StaticVariable(ToExternalReference( + LConstantOperand::cast(instr->object()))) + : MemOperand(ToRegister(instr->object()), offset); + __ Load(result, operand, access.representation()); + return; + } + + Register object = ToRegister(instr->object()); + if (instr->hydrogen()->representation().IsDouble()) { + X87Mov(ToX87Register(instr->result()), FieldOperand(object, offset)); + return; + } + + Register result = ToRegister(instr->result()); + if (!access.IsInobject()) { + __ mov(result, FieldOperand(object, JSObject::kPropertiesOffset)); + object = result; + } + __ Load(result, FieldOperand(object, offset), access.representation()); +} + + +void LCodeGen::EmitPushTaggedOperand(LOperand* operand) { + DCHECK(!operand->IsDoubleRegister()); + if (operand->IsConstantOperand()) { + Handle<Object> object = ToHandle(LConstantOperand::cast(operand)); + AllowDeferredHandleDereference smi_check; + if (object->IsSmi()) { + __ Push(Handle<Smi>::cast(object)); + } else { + __ PushHeapObject(Handle<HeapObject>::cast(object)); + } + } else if (operand->IsRegister()) { + __ push(ToRegister(operand)); + } else { + __ push(ToOperand(operand)); + } +} + + +void LCodeGen::DoLoadNamedGeneric(LLoadNamedGeneric* instr) { + DCHECK(ToRegister(instr->context()).is(esi)); + DCHECK(ToRegister(instr->object()).is(LoadIC::ReceiverRegister())); + DCHECK(ToRegister(instr->result()).is(eax)); + + __ mov(LoadIC::NameRegister(), instr->name()); + if (FLAG_vector_ics) { + Register vector = ToRegister(instr->temp_vector()); + DCHECK(vector.is(LoadIC::VectorRegister())); + __ mov(vector, instr->hydrogen()->feedback_vector()); + // No need to allocate this register. 
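+    // LoadIC::SlotRegister() aliases eax, which is already fixed as the
+    // result register of this instruction.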
+    DCHECK(LoadIC::SlotRegister().is(eax));
+    __ mov(LoadIC::SlotRegister(),
+           Immediate(Smi::FromInt(instr->hydrogen()->slot())));
+  }
+  Handle<Code> ic = LoadIC::initialize_stub(isolate(), NOT_CONTEXTUAL);
+  CallCode(ic, RelocInfo::CODE_TARGET, instr);
+}
+
+
+void LCodeGen::DoLoadFunctionPrototype(LLoadFunctionPrototype* instr) {
+  Register function = ToRegister(instr->function());
+  Register temp = ToRegister(instr->temp());
+  Register result = ToRegister(instr->result());
+
+  // Get the prototype or initial map from the function.
+  __ mov(result,
+         FieldOperand(function, JSFunction::kPrototypeOrInitialMapOffset));
+
+  // Check that the function has a prototype or an initial map.
+  __ cmp(Operand(result), Immediate(factory()->the_hole_value()));
+  DeoptimizeIf(equal, instr->environment());
+
+  // If the function does not have an initial map, we're done.
+  Label done;
+  __ CmpObjectType(result, MAP_TYPE, temp);
+  __ j(not_equal, &done, Label::kNear);
+
+  // Get the prototype from the initial map.
+  __ mov(result, FieldOperand(result, Map::kPrototypeOffset));
+
+  // All done.
+  __ bind(&done);
+}
+
+
+void LCodeGen::DoLoadRoot(LLoadRoot* instr) {
+  Register result = ToRegister(instr->result());
+  __ LoadRoot(result, instr->index());
+}
+
+
+void LCodeGen::DoAccessArgumentsAt(LAccessArgumentsAt* instr) {
+  Register arguments = ToRegister(instr->arguments());
+  Register result = ToRegister(instr->result());
+  if (instr->length()->IsConstantOperand() &&
+      instr->index()->IsConstantOperand()) {
+    int const_index = ToInteger32(LConstantOperand::cast(instr->index()));
+    int const_length = ToInteger32(LConstantOperand::cast(instr->length()));
+    int index = (const_length - const_index) + 1;
+    __ mov(result, Operand(arguments, index * kPointerSize));
+  } else {
+    Register length = ToRegister(instr->length());
+    Operand index = ToOperand(instr->index());
+    // There are two words between the frame pointer and the last argument.
+    // Subtracting from length accounts for one of them; add one more.
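+    // The load below reads from
+    // arguments + (length - index) * kPointerSize + kPointerSize.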
+ __ sub(length, index); + __ mov(result, Operand(arguments, length, times_4, kPointerSize)); + } +} + + +void LCodeGen::DoLoadKeyedExternalArray(LLoadKeyed* instr) { + ElementsKind elements_kind = instr->elements_kind(); + LOperand* key = instr->key(); + if (!key->IsConstantOperand() && + ExternalArrayOpRequiresTemp(instr->hydrogen()->key()->representation(), + elements_kind)) { + __ SmiUntag(ToRegister(key)); + } + Operand operand(BuildFastArrayOperand( + instr->elements(), + key, + instr->hydrogen()->key()->representation(), + elements_kind, + instr->base_offset())); + if (elements_kind == EXTERNAL_FLOAT32_ELEMENTS || + elements_kind == FLOAT32_ELEMENTS) { + X87Mov(ToX87Register(instr->result()), operand, kX87FloatOperand); + } else if (elements_kind == EXTERNAL_FLOAT64_ELEMENTS || + elements_kind == FLOAT64_ELEMENTS) { + X87Mov(ToX87Register(instr->result()), operand); + } else { + Register result(ToRegister(instr->result())); + switch (elements_kind) { + case EXTERNAL_INT8_ELEMENTS: + case INT8_ELEMENTS: + __ movsx_b(result, operand); + break; + case EXTERNAL_UINT8_CLAMPED_ELEMENTS: + case EXTERNAL_UINT8_ELEMENTS: + case UINT8_ELEMENTS: + case UINT8_CLAMPED_ELEMENTS: + __ movzx_b(result, operand); + break; + case EXTERNAL_INT16_ELEMENTS: + case INT16_ELEMENTS: + __ movsx_w(result, operand); + break; + case EXTERNAL_UINT16_ELEMENTS: + case UINT16_ELEMENTS: + __ movzx_w(result, operand); + break; + case EXTERNAL_INT32_ELEMENTS: + case INT32_ELEMENTS: + __ mov(result, operand); + break; + case EXTERNAL_UINT32_ELEMENTS: + case UINT32_ELEMENTS: + __ mov(result, operand); + if (!instr->hydrogen()->CheckFlag(HInstruction::kUint32)) { + __ test(result, Operand(result)); + DeoptimizeIf(negative, instr->environment()); + } + break; + case EXTERNAL_FLOAT32_ELEMENTS: + case EXTERNAL_FLOAT64_ELEMENTS: + case FLOAT32_ELEMENTS: + case FLOAT64_ELEMENTS: + case FAST_SMI_ELEMENTS: + case FAST_ELEMENTS: + case FAST_DOUBLE_ELEMENTS: + case FAST_HOLEY_SMI_ELEMENTS: + case FAST_HOLEY_ELEMENTS: + case FAST_HOLEY_DOUBLE_ELEMENTS: + case DICTIONARY_ELEMENTS: + case SLOPPY_ARGUMENTS_ELEMENTS: + UNREACHABLE(); + break; + } + } +} + + +void LCodeGen::DoLoadKeyedFixedDoubleArray(LLoadKeyed* instr) { + if (instr->hydrogen()->RequiresHoleCheck()) { + Operand hole_check_operand = BuildFastArrayOperand( + instr->elements(), instr->key(), + instr->hydrogen()->key()->representation(), + FAST_DOUBLE_ELEMENTS, + instr->base_offset() + sizeof(kHoleNanLower32)); + __ cmp(hole_check_operand, Immediate(kHoleNanUpper32)); + DeoptimizeIf(equal, instr->environment()); + } + + Operand double_load_operand = BuildFastArrayOperand( + instr->elements(), + instr->key(), + instr->hydrogen()->key()->representation(), + FAST_DOUBLE_ELEMENTS, + instr->base_offset()); + X87Mov(ToX87Register(instr->result()), double_load_operand); +} + + +void LCodeGen::DoLoadKeyedFixedArray(LLoadKeyed* instr) { + Register result = ToRegister(instr->result()); + + // Load the result. + __ mov(result, + BuildFastArrayOperand(instr->elements(), + instr->key(), + instr->hydrogen()->key()->representation(), + FAST_ELEMENTS, + instr->base_offset())); + + // Check for the hole value. 
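+  // Smi-only element kinds deoptimize on any non-Smi value; other fast
+  // kinds deoptimize only on the hole sentinel.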
+ if (instr->hydrogen()->RequiresHoleCheck()) { + if (IsFastSmiElementsKind(instr->hydrogen()->elements_kind())) { + __ test(result, Immediate(kSmiTagMask)); + DeoptimizeIf(not_equal, instr->environment()); + } else { + __ cmp(result, factory()->the_hole_value()); + DeoptimizeIf(equal, instr->environment()); + } + } +} + + +void LCodeGen::DoLoadKeyed(LLoadKeyed* instr) { + if (instr->is_typed_elements()) { + DoLoadKeyedExternalArray(instr); + } else if (instr->hydrogen()->representation().IsDouble()) { + DoLoadKeyedFixedDoubleArray(instr); + } else { + DoLoadKeyedFixedArray(instr); + } +} + + +Operand LCodeGen::BuildFastArrayOperand( + LOperand* elements_pointer, + LOperand* key, + Representation key_representation, + ElementsKind elements_kind, + uint32_t base_offset) { + Register elements_pointer_reg = ToRegister(elements_pointer); + int element_shift_size = ElementsKindToShiftSize(elements_kind); + int shift_size = element_shift_size; + if (key->IsConstantOperand()) { + int constant_value = ToInteger32(LConstantOperand::cast(key)); + if (constant_value & 0xF0000000) { + Abort(kArrayIndexConstantValueTooBig); + } + return Operand(elements_pointer_reg, + ((constant_value) << shift_size) + + base_offset); + } else { + // Take the tag bit into account while computing the shift size. + if (key_representation.IsSmi() && (shift_size >= 1)) { + shift_size -= kSmiTagSize; + } + ScaleFactor scale_factor = static_cast<ScaleFactor>(shift_size); + return Operand(elements_pointer_reg, + ToRegister(key), + scale_factor, + base_offset); + } +} + + +void LCodeGen::DoLoadKeyedGeneric(LLoadKeyedGeneric* instr) { + DCHECK(ToRegister(instr->context()).is(esi)); + DCHECK(ToRegister(instr->object()).is(LoadIC::ReceiverRegister())); + DCHECK(ToRegister(instr->key()).is(LoadIC::NameRegister())); + + if (FLAG_vector_ics) { + Register vector = ToRegister(instr->temp_vector()); + DCHECK(vector.is(LoadIC::VectorRegister())); + __ mov(vector, instr->hydrogen()->feedback_vector()); + // No need to allocate this register. + DCHECK(LoadIC::SlotRegister().is(eax)); + __ mov(LoadIC::SlotRegister(), + Immediate(Smi::FromInt(instr->hydrogen()->slot()))); + } + + Handle<Code> ic = isolate()->builtins()->KeyedLoadIC_Initialize(); + CallCode(ic, RelocInfo::CODE_TARGET, instr); +} + + +void LCodeGen::DoArgumentsElements(LArgumentsElements* instr) { + Register result = ToRegister(instr->result()); + + if (instr->hydrogen()->from_inlined()) { + __ lea(result, Operand(esp, -2 * kPointerSize)); + } else { + // Check for arguments adapter frame. + Label done, adapted; + __ mov(result, Operand(ebp, StandardFrameConstants::kCallerFPOffset)); + __ mov(result, Operand(result, StandardFrameConstants::kContextOffset)); + __ cmp(Operand(result), + Immediate(Smi::FromInt(StackFrame::ARGUMENTS_ADAPTOR))); + __ j(equal, &adapted, Label::kNear); + + // No arguments adaptor frame. + __ mov(result, Operand(ebp)); + __ jmp(&done, Label::kNear); + + // Arguments adaptor frame present. + __ bind(&adapted); + __ mov(result, Operand(ebp, StandardFrameConstants::kCallerFPOffset)); + + // Result is the frame pointer for the frame if not adapted and for the real + // frame below the adaptor frame if adapted. + __ bind(&done); + } +} + + +void LCodeGen::DoArgumentsLength(LArgumentsLength* instr) { + Operand elem = ToOperand(instr->elements()); + Register result = ToRegister(instr->result()); + + Label done; + + // If no arguments adaptor frame the number of arguments is fixed. 
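+  // DoArgumentsElements yields ebp itself when no adaptor frame exists, so
+  // comparing the elements operand against ebp distinguishes the two cases.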
+ __ cmp(ebp, elem); + __ mov(result, Immediate(scope()->num_parameters())); + __ j(equal, &done, Label::kNear); + + // Arguments adaptor frame present. Get argument length from there. + __ mov(result, Operand(ebp, StandardFrameConstants::kCallerFPOffset)); + __ mov(result, Operand(result, + ArgumentsAdaptorFrameConstants::kLengthOffset)); + __ SmiUntag(result); + + // Argument length is in result register. + __ bind(&done); +} + + +void LCodeGen::DoWrapReceiver(LWrapReceiver* instr) { + Register receiver = ToRegister(instr->receiver()); + Register function = ToRegister(instr->function()); + + // If the receiver is null or undefined, we have to pass the global + // object as a receiver to normal functions. Values have to be + // passed unchanged to builtins and strict-mode functions. + Label receiver_ok, global_object; + Label::Distance dist = DeoptEveryNTimes() ? Label::kFar : Label::kNear; + Register scratch = ToRegister(instr->temp()); + + if (!instr->hydrogen()->known_function()) { + // Do not transform the receiver to object for strict mode + // functions. + __ mov(scratch, + FieldOperand(function, JSFunction::kSharedFunctionInfoOffset)); + __ test_b(FieldOperand(scratch, SharedFunctionInfo::kStrictModeByteOffset), + 1 << SharedFunctionInfo::kStrictModeBitWithinByte); + __ j(not_equal, &receiver_ok, dist); + + // Do not transform the receiver to object for builtins. + __ test_b(FieldOperand(scratch, SharedFunctionInfo::kNativeByteOffset), + 1 << SharedFunctionInfo::kNativeBitWithinByte); + __ j(not_equal, &receiver_ok, dist); + } + + // Normal function. Replace undefined or null with global receiver. + __ cmp(receiver, factory()->null_value()); + __ j(equal, &global_object, Label::kNear); + __ cmp(receiver, factory()->undefined_value()); + __ j(equal, &global_object, Label::kNear); + + // The receiver should be a JS object. + __ test(receiver, Immediate(kSmiTagMask)); + DeoptimizeIf(equal, instr->environment()); + __ CmpObjectType(receiver, FIRST_SPEC_OBJECT_TYPE, scratch); + DeoptimizeIf(below, instr->environment()); + + __ jmp(&receiver_ok, Label::kNear); + __ bind(&global_object); + __ mov(receiver, FieldOperand(function, JSFunction::kContextOffset)); + const int global_offset = Context::SlotOffset(Context::GLOBAL_OBJECT_INDEX); + __ mov(receiver, Operand(receiver, global_offset)); + const int proxy_offset = GlobalObject::kGlobalProxyOffset; + __ mov(receiver, FieldOperand(receiver, proxy_offset)); + __ bind(&receiver_ok); +} + + +void LCodeGen::DoApplyArguments(LApplyArguments* instr) { + Register receiver = ToRegister(instr->receiver()); + Register function = ToRegister(instr->function()); + Register length = ToRegister(instr->length()); + Register elements = ToRegister(instr->elements()); + DCHECK(receiver.is(eax)); // Used for parameter count. + DCHECK(function.is(edi)); // Required by InvokeFunction. + DCHECK(ToRegister(instr->result()).is(eax)); + + // Copy the arguments to this function possibly from the + // adaptor frame below it. + const uint32_t kArgumentsLimit = 1 * KB; + __ cmp(length, kArgumentsLimit); + DeoptimizeIf(above, instr->environment()); + + __ push(receiver); + __ mov(receiver, length); + + // Loop through the arguments pushing them onto the execution + // stack. + Label invoke, loop; + // length is a small non-negative integer, due to the test above. 
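+  // The loop below pushes arguments starting from the last one, so the
+  // first argument ends up on top of the stack.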
+ __ test(length, Operand(length)); + __ j(zero, &invoke, Label::kNear); + __ bind(&loop); + __ push(Operand(elements, length, times_pointer_size, 1 * kPointerSize)); + __ dec(length); + __ j(not_zero, &loop); + + // Invoke the function. + __ bind(&invoke); + DCHECK(instr->HasPointerMap()); + LPointerMap* pointers = instr->pointer_map(); + SafepointGenerator safepoint_generator( + this, pointers, Safepoint::kLazyDeopt); + ParameterCount actual(eax); + __ InvokeFunction(function, actual, CALL_FUNCTION, safepoint_generator); +} + + +void LCodeGen::DoDebugBreak(LDebugBreak* instr) { + __ int3(); +} + + +void LCodeGen::DoPushArgument(LPushArgument* instr) { + LOperand* argument = instr->value(); + EmitPushTaggedOperand(argument); +} + + +void LCodeGen::DoDrop(LDrop* instr) { + __ Drop(instr->count()); +} + + +void LCodeGen::DoThisFunction(LThisFunction* instr) { + Register result = ToRegister(instr->result()); + __ mov(result, Operand(ebp, JavaScriptFrameConstants::kFunctionOffset)); +} + + +void LCodeGen::DoContext(LContext* instr) { + Register result = ToRegister(instr->result()); + if (info()->IsOptimizing()) { + __ mov(result, Operand(ebp, StandardFrameConstants::kContextOffset)); + } else { + // If there is no frame, the context must be in esi. + DCHECK(result.is(esi)); + } +} + + +void LCodeGen::DoDeclareGlobals(LDeclareGlobals* instr) { + DCHECK(ToRegister(instr->context()).is(esi)); + __ push(esi); // The context is the first argument. + __ push(Immediate(instr->hydrogen()->pairs())); + __ push(Immediate(Smi::FromInt(instr->hydrogen()->flags()))); + CallRuntime(Runtime::kDeclareGlobals, 3, instr); +} + + +void LCodeGen::CallKnownFunction(Handle<JSFunction> function, + int formal_parameter_count, + int arity, + LInstruction* instr, + EDIState edi_state) { + bool dont_adapt_arguments = + formal_parameter_count == SharedFunctionInfo::kDontAdaptArgumentsSentinel; + bool can_invoke_directly = + dont_adapt_arguments || formal_parameter_count == arity; + + if (can_invoke_directly) { + if (edi_state == EDI_UNINITIALIZED) { + __ LoadHeapObject(edi, function); + } + + // Change context. + __ mov(esi, FieldOperand(edi, JSFunction::kContextOffset)); + + // Set eax to arguments count if adaption is not needed. Assumes that eax + // is available to write to at this point. + if (dont_adapt_arguments) { + __ mov(eax, arity); + } + + // Invoke function directly. + if (function.is_identical_to(info()->closure())) { + __ CallSelf(); + } else { + __ call(FieldOperand(edi, JSFunction::kCodeEntryOffset)); + } + RecordSafepointWithLazyDeopt(instr, RECORD_SIMPLE_SAFEPOINT); + } else { + // We need to adapt arguments. 
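+    // InvokeFunction goes through the arguments adaptor trampoline to
+    // reconcile the actual argument count with the formal parameter count.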
+ LPointerMap* pointers = instr->pointer_map(); + SafepointGenerator generator( + this, pointers, Safepoint::kLazyDeopt); + ParameterCount count(arity); + ParameterCount expected(formal_parameter_count); + __ InvokeFunction(function, expected, count, CALL_FUNCTION, generator); + } +} + + +void LCodeGen::DoCallWithDescriptor(LCallWithDescriptor* instr) { + DCHECK(ToRegister(instr->result()).is(eax)); + + LPointerMap* pointers = instr->pointer_map(); + SafepointGenerator generator(this, pointers, Safepoint::kLazyDeopt); + + if (instr->target()->IsConstantOperand()) { + LConstantOperand* target = LConstantOperand::cast(instr->target()); + Handle<Code> code = Handle<Code>::cast(ToHandle(target)); + generator.BeforeCall(__ CallSize(code, RelocInfo::CODE_TARGET)); + __ call(code, RelocInfo::CODE_TARGET); + } else { + DCHECK(instr->target()->IsRegister()); + Register target = ToRegister(instr->target()); + generator.BeforeCall(__ CallSize(Operand(target))); + __ add(target, Immediate(Code::kHeaderSize - kHeapObjectTag)); + __ call(target); + } + generator.AfterCall(); +} + + +void LCodeGen::DoCallJSFunction(LCallJSFunction* instr) { + DCHECK(ToRegister(instr->function()).is(edi)); + DCHECK(ToRegister(instr->result()).is(eax)); + + if (instr->hydrogen()->pass_argument_count()) { + __ mov(eax, instr->arity()); + } + + // Change context. + __ mov(esi, FieldOperand(edi, JSFunction::kContextOffset)); + + bool is_self_call = false; + if (instr->hydrogen()->function()->IsConstant()) { + HConstant* fun_const = HConstant::cast(instr->hydrogen()->function()); + Handle<JSFunction> jsfun = + Handle<JSFunction>::cast(fun_const->handle(isolate())); + is_self_call = jsfun.is_identical_to(info()->closure()); + } + + if (is_self_call) { + __ CallSelf(); + } else { + __ call(FieldOperand(edi, JSFunction::kCodeEntryOffset)); + } + + RecordSafepointWithLazyDeopt(instr, RECORD_SIMPLE_SAFEPOINT); +} + + +void LCodeGen::DoDeferredMathAbsTaggedHeapNumber(LMathAbs* instr) { + Register input_reg = ToRegister(instr->value()); + __ cmp(FieldOperand(input_reg, HeapObject::kMapOffset), + factory()->heap_number_map()); + DeoptimizeIf(not_equal, instr->environment()); + + Label slow, allocated, done; + Register tmp = input_reg.is(eax) ? ecx : eax; + Register tmp2 = tmp.is(ecx) ? edx : input_reg.is(ecx) ? edx : ecx; + + // Preserve the value of all registers. + PushSafepointRegistersScope scope(this); + + __ mov(tmp, FieldOperand(input_reg, HeapNumber::kExponentOffset)); + // Check the sign of the argument. If the argument is positive, just + // return it. We do not need to patch the stack since |input| and + // |result| are the same register and |input| will be restored + // unchanged by popping safepoint registers. + __ test(tmp, Immediate(HeapNumber::kSignMask)); + __ j(zero, &done, Label::kNear); + + __ AllocateHeapNumber(tmp, tmp2, no_reg, &slow); + __ jmp(&allocated, Label::kNear); + + // Slow case: Call the runtime system to do the number allocation. + __ bind(&slow); + CallRuntimeFromDeferred(Runtime::kAllocateHeapNumber, 0, + instr, instr->context()); + // Set the pointer to the new heap number in tmp. + if (!tmp.is(eax)) __ mov(tmp, eax); + // Restore input_reg after call to runtime. 
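+  // The runtime call may have clobbered input_reg; reload the original
+  // value from the slot saved by PushSafepointRegistersScope.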
+ __ LoadFromSafepointRegisterSlot(input_reg, input_reg); + + __ bind(&allocated); + __ mov(tmp2, FieldOperand(input_reg, HeapNumber::kExponentOffset)); + __ and_(tmp2, ~HeapNumber::kSignMask); + __ mov(FieldOperand(tmp, HeapNumber::kExponentOffset), tmp2); + __ mov(tmp2, FieldOperand(input_reg, HeapNumber::kMantissaOffset)); + __ mov(FieldOperand(tmp, HeapNumber::kMantissaOffset), tmp2); + __ StoreToSafepointRegisterSlot(input_reg, tmp); + + __ bind(&done); +} + + +void LCodeGen::EmitIntegerMathAbs(LMathAbs* instr) { + Register input_reg = ToRegister(instr->value()); + __ test(input_reg, Operand(input_reg)); + Label is_positive; + __ j(not_sign, &is_positive, Label::kNear); + __ neg(input_reg); // Sets flags. + DeoptimizeIf(negative, instr->environment()); + __ bind(&is_positive); +} + + +void LCodeGen::DoMathAbs(LMathAbs* instr) { + // Class for deferred case. + class DeferredMathAbsTaggedHeapNumber V8_FINAL : public LDeferredCode { + public: + DeferredMathAbsTaggedHeapNumber(LCodeGen* codegen, + LMathAbs* instr, + const X87Stack& x87_stack) + : LDeferredCode(codegen, x87_stack), instr_(instr) { } + virtual void Generate() V8_OVERRIDE { + codegen()->DoDeferredMathAbsTaggedHeapNumber(instr_); + } + virtual LInstruction* instr() V8_OVERRIDE { return instr_; } + private: + LMathAbs* instr_; + }; + + DCHECK(instr->value()->Equals(instr->result())); + Representation r = instr->hydrogen()->value()->representation(); + + if (r.IsDouble()) { + UNIMPLEMENTED(); + } else if (r.IsSmiOrInteger32()) { + EmitIntegerMathAbs(instr); + } else { // Tagged case. + DeferredMathAbsTaggedHeapNumber* deferred = + new(zone()) DeferredMathAbsTaggedHeapNumber(this, instr, x87_stack_); + Register input_reg = ToRegister(instr->value()); + // Smi check. + __ JumpIfNotSmi(input_reg, deferred->entry()); + EmitIntegerMathAbs(instr); + __ bind(deferred->exit()); + } +} + + +void LCodeGen::DoMathFloor(LMathFloor* instr) { + UNIMPLEMENTED(); +} + + +void LCodeGen::DoMathRound(LMathRound* instr) { + UNIMPLEMENTED(); +} + + +void LCodeGen::DoMathFround(LMathFround* instr) { + UNIMPLEMENTED(); +} + + +void LCodeGen::DoMathSqrt(LMathSqrt* instr) { + UNIMPLEMENTED(); +} + + +void LCodeGen::DoMathPowHalf(LMathPowHalf* instr) { + UNIMPLEMENTED(); +} + + +void LCodeGen::DoPower(LPower* instr) { + UNIMPLEMENTED(); +} + + +void LCodeGen::DoMathLog(LMathLog* instr) { + UNIMPLEMENTED(); +} + + +void LCodeGen::DoMathClz32(LMathClz32* instr) { + UNIMPLEMENTED(); +} + + +void LCodeGen::DoMathExp(LMathExp* instr) { + UNIMPLEMENTED(); +} + + +void LCodeGen::DoInvokeFunction(LInvokeFunction* instr) { + DCHECK(ToRegister(instr->context()).is(esi)); + DCHECK(ToRegister(instr->function()).is(edi)); + DCHECK(instr->HasPointerMap()); + + Handle<JSFunction> known_function = instr->hydrogen()->known_function(); + if (known_function.is_null()) { + LPointerMap* pointers = instr->pointer_map(); + SafepointGenerator generator( + this, pointers, Safepoint::kLazyDeopt); + ParameterCount count(instr->arity()); + __ InvokeFunction(edi, count, CALL_FUNCTION, generator); + } else { + CallKnownFunction(known_function, + instr->hydrogen()->formal_parameter_count(), + instr->arity(), + instr, + EDI_CONTAINS_TARGET); + } +} + + +void LCodeGen::DoCallFunction(LCallFunction* instr) { + DCHECK(ToRegister(instr->context()).is(esi)); + DCHECK(ToRegister(instr->function()).is(edi)); + DCHECK(ToRegister(instr->result()).is(eax)); + + int arity = instr->arity(); + CallFunctionStub stub(isolate(), arity, instr->hydrogen()->function_flags()); + CallCode(stub.GetCode(), 
RelocInfo::CODE_TARGET, instr); +} + + +void LCodeGen::DoCallNew(LCallNew* instr) { + DCHECK(ToRegister(instr->context()).is(esi)); + DCHECK(ToRegister(instr->constructor()).is(edi)); + DCHECK(ToRegister(instr->result()).is(eax)); + + // No cell in ebx for construct type feedback in optimized code + __ mov(ebx, isolate()->factory()->undefined_value()); + CallConstructStub stub(isolate(), NO_CALL_CONSTRUCTOR_FLAGS); + __ Move(eax, Immediate(instr->arity())); + CallCode(stub.GetCode(), RelocInfo::CONSTRUCT_CALL, instr); +} + + +void LCodeGen::DoCallNewArray(LCallNewArray* instr) { + DCHECK(ToRegister(instr->context()).is(esi)); + DCHECK(ToRegister(instr->constructor()).is(edi)); + DCHECK(ToRegister(instr->result()).is(eax)); + + __ Move(eax, Immediate(instr->arity())); + __ mov(ebx, isolate()->factory()->undefined_value()); + ElementsKind kind = instr->hydrogen()->elements_kind(); + AllocationSiteOverrideMode override_mode = + (AllocationSite::GetMode(kind) == TRACK_ALLOCATION_SITE) + ? DISABLE_ALLOCATION_SITES + : DONT_OVERRIDE; + + if (instr->arity() == 0) { + ArrayNoArgumentConstructorStub stub(isolate(), kind, override_mode); + CallCode(stub.GetCode(), RelocInfo::CONSTRUCT_CALL, instr); + } else if (instr->arity() == 1) { + Label done; + if (IsFastPackedElementsKind(kind)) { + Label packed_case; + // We might need a change here + // look at the first argument + __ mov(ecx, Operand(esp, 0)); + __ test(ecx, ecx); + __ j(zero, &packed_case, Label::kNear); + + ElementsKind holey_kind = GetHoleyElementsKind(kind); + ArraySingleArgumentConstructorStub stub(isolate(), + holey_kind, + override_mode); + CallCode(stub.GetCode(), RelocInfo::CONSTRUCT_CALL, instr); + __ jmp(&done, Label::kNear); + __ bind(&packed_case); + } + + ArraySingleArgumentConstructorStub stub(isolate(), kind, override_mode); + CallCode(stub.GetCode(), RelocInfo::CONSTRUCT_CALL, instr); + __ bind(&done); + } else { + ArrayNArgumentsConstructorStub stub(isolate(), kind, override_mode); + CallCode(stub.GetCode(), RelocInfo::CONSTRUCT_CALL, instr); + } +} + + +void LCodeGen::DoCallRuntime(LCallRuntime* instr) { + DCHECK(ToRegister(instr->context()).is(esi)); + CallRuntime(instr->function(), instr->arity(), instr); +} + + +void LCodeGen::DoStoreCodeEntry(LStoreCodeEntry* instr) { + Register function = ToRegister(instr->function()); + Register code_object = ToRegister(instr->code_object()); + __ lea(code_object, FieldOperand(code_object, Code::kHeaderSize)); + __ mov(FieldOperand(function, JSFunction::kCodeEntryOffset), code_object); +} + + +void LCodeGen::DoInnerAllocatedObject(LInnerAllocatedObject* instr) { + Register result = ToRegister(instr->result()); + Register base = ToRegister(instr->base_object()); + if (instr->offset()->IsConstantOperand()) { + LConstantOperand* offset = LConstantOperand::cast(instr->offset()); + __ lea(result, Operand(base, ToInteger32(offset))); + } else { + Register offset = ToRegister(instr->offset()); + __ lea(result, Operand(base, offset, times_1, 0)); + } +} + + +void LCodeGen::DoStoreNamedField(LStoreNamedField* instr) { + Representation representation = instr->hydrogen()->field_representation(); + + HObjectAccess access = instr->hydrogen()->access(); + int offset = access.offset(); + + if (access.IsExternalMemory()) { + DCHECK(!instr->hydrogen()->NeedsWriteBarrier()); + MemOperand operand = instr->object()->IsConstantOperand() + ? 
MemOperand::StaticVariable( + ToExternalReference(LConstantOperand::cast(instr->object()))) + : MemOperand(ToRegister(instr->object()), offset); + if (instr->value()->IsConstantOperand()) { + LConstantOperand* operand_value = LConstantOperand::cast(instr->value()); + __ mov(operand, Immediate(ToInteger32(operand_value))); + } else { + Register value = ToRegister(instr->value()); + __ Store(value, operand, representation); + } + return; + } + + Register object = ToRegister(instr->object()); + __ AssertNotSmi(object); + DCHECK(!representation.IsSmi() || + !instr->value()->IsConstantOperand() || + IsSmi(LConstantOperand::cast(instr->value()))); + if (representation.IsDouble()) { + DCHECK(access.IsInobject()); + DCHECK(!instr->hydrogen()->has_transition()); + DCHECK(!instr->hydrogen()->NeedsWriteBarrier()); + X87Register value = ToX87Register(instr->value()); + X87Mov(FieldOperand(object, offset), value); + return; + } + + if (instr->hydrogen()->has_transition()) { + Handle<Map> transition = instr->hydrogen()->transition_map(); + AddDeprecationDependency(transition); + __ mov(FieldOperand(object, HeapObject::kMapOffset), transition); + if (instr->hydrogen()->NeedsWriteBarrierForMap()) { + Register temp = ToRegister(instr->temp()); + Register temp_map = ToRegister(instr->temp_map()); + __ mov(temp_map, transition); + __ mov(FieldOperand(object, HeapObject::kMapOffset), temp_map); + // Update the write barrier for the map field. + __ RecordWriteForMap(object, transition, temp_map, temp); + } + } + + // Do the store. + Register write_register = object; + if (!access.IsInobject()) { + write_register = ToRegister(instr->temp()); + __ mov(write_register, FieldOperand(object, JSObject::kPropertiesOffset)); + } + + MemOperand operand = FieldOperand(write_register, offset); + if (instr->value()->IsConstantOperand()) { + LConstantOperand* operand_value = LConstantOperand::cast(instr->value()); + if (operand_value->IsRegister()) { + Register value = ToRegister(operand_value); + __ Store(value, operand, representation); + } else if (representation.IsInteger32()) { + Immediate immediate = ToImmediate(operand_value, representation); + DCHECK(!instr->hydrogen()->NeedsWriteBarrier()); + __ mov(operand, immediate); + } else { + Handle<Object> handle_value = ToHandle(operand_value); + DCHECK(!instr->hydrogen()->NeedsWriteBarrier()); + __ mov(operand, handle_value); + } + } else { + Register value = ToRegister(instr->value()); + __ Store(value, operand, representation); + } + + if (instr->hydrogen()->NeedsWriteBarrier()) { + Register value = ToRegister(instr->value()); + Register temp = access.IsInobject() ? ToRegister(instr->temp()) : object; + // Update the write barrier for the object for in-object properties. + __ RecordWriteField(write_register, + offset, + value, + temp, + EMIT_REMEMBERED_SET, + instr->hydrogen()->SmiCheckForWriteBarrier(), + instr->hydrogen()->PointersToHereCheckForValue()); + } +} + + +void LCodeGen::DoStoreNamedGeneric(LStoreNamedGeneric* instr) { + DCHECK(ToRegister(instr->context()).is(esi)); + DCHECK(ToRegister(instr->object()).is(StoreIC::ReceiverRegister())); + DCHECK(ToRegister(instr->value()).is(StoreIC::ValueRegister())); + + __ mov(StoreIC::NameRegister(), instr->name()); + Handle<Code> ic = StoreIC::initialize_stub(isolate(), instr->strict_mode()); + CallCode(ic, RelocInfo::CODE_TARGET, instr); +} + + +void LCodeGen::DoBoundsCheck(LBoundsCheck* instr) { + Condition cc = instr->hydrogen()->allow_equality() ? 
above : above_equal; + if (instr->index()->IsConstantOperand()) { + __ cmp(ToOperand(instr->length()), + ToImmediate(LConstantOperand::cast(instr->index()), + instr->hydrogen()->length()->representation())); + cc = CommuteCondition(cc); + } else if (instr->length()->IsConstantOperand()) { + __ cmp(ToOperand(instr->index()), + ToImmediate(LConstantOperand::cast(instr->length()), + instr->hydrogen()->index()->representation())); + } else { + __ cmp(ToRegister(instr->index()), ToOperand(instr->length())); + } + if (FLAG_debug_code && instr->hydrogen()->skip_check()) { + Label done; + __ j(NegateCondition(cc), &done, Label::kNear); + __ int3(); + __ bind(&done); + } else { + DeoptimizeIf(cc, instr->environment()); + } +} + + +void LCodeGen::DoStoreKeyedExternalArray(LStoreKeyed* instr) { + ElementsKind elements_kind = instr->elements_kind(); + LOperand* key = instr->key(); + if (!key->IsConstantOperand() && + ExternalArrayOpRequiresTemp(instr->hydrogen()->key()->representation(), + elements_kind)) { + __ SmiUntag(ToRegister(key)); + } + Operand operand(BuildFastArrayOperand( + instr->elements(), + key, + instr->hydrogen()->key()->representation(), + elements_kind, + instr->base_offset())); + if (elements_kind == EXTERNAL_FLOAT32_ELEMENTS || + elements_kind == FLOAT32_ELEMENTS) { + __ fld(0); + __ fstp_s(operand); + } else if (elements_kind == EXTERNAL_FLOAT64_ELEMENTS || + elements_kind == FLOAT64_ELEMENTS) { + X87Mov(operand, ToX87Register(instr->value())); + } else { + Register value = ToRegister(instr->value()); + switch (elements_kind) { + case EXTERNAL_UINT8_CLAMPED_ELEMENTS: + case EXTERNAL_UINT8_ELEMENTS: + case EXTERNAL_INT8_ELEMENTS: + case UINT8_ELEMENTS: + case INT8_ELEMENTS: + case UINT8_CLAMPED_ELEMENTS: + __ mov_b(operand, value); + break; + case EXTERNAL_INT16_ELEMENTS: + case EXTERNAL_UINT16_ELEMENTS: + case UINT16_ELEMENTS: + case INT16_ELEMENTS: + __ mov_w(operand, value); + break; + case EXTERNAL_INT32_ELEMENTS: + case EXTERNAL_UINT32_ELEMENTS: + case UINT32_ELEMENTS: + case INT32_ELEMENTS: + __ mov(operand, value); + break; + case EXTERNAL_FLOAT32_ELEMENTS: + case EXTERNAL_FLOAT64_ELEMENTS: + case FLOAT32_ELEMENTS: + case FLOAT64_ELEMENTS: + case FAST_SMI_ELEMENTS: + case FAST_ELEMENTS: + case FAST_DOUBLE_ELEMENTS: + case FAST_HOLEY_SMI_ELEMENTS: + case FAST_HOLEY_ELEMENTS: + case FAST_HOLEY_DOUBLE_ELEMENTS: + case DICTIONARY_ELEMENTS: + case SLOPPY_ARGUMENTS_ELEMENTS: + UNREACHABLE(); + break; + } + } +} + + +void LCodeGen::DoStoreKeyedFixedDoubleArray(LStoreKeyed* instr) { + ExternalReference canonical_nan_reference = + ExternalReference::address_of_canonical_non_hole_nan(); + Operand double_store_operand = BuildFastArrayOperand( + instr->elements(), + instr->key(), + instr->hydrogen()->key()->representation(), + FAST_DOUBLE_ELEMENTS, + instr->base_offset()); + + // Can't use SSE2 in the serializer + if (instr->hydrogen()->IsConstantHoleStore()) { + // This means we should store the (double) hole. No floating point + // registers required. 
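+  // The hole is just a particular NaN bit pattern, so it can be written as
+  // two 32-bit halves with plain integer moves: the low word goes at the
+  // element's base offset and the high word one pointer above it.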
+ double nan_double = FixedDoubleArray::hole_nan_as_double(); + uint64_t int_val = BitCast<uint64_t, double>(nan_double); + int32_t lower = static_cast<int32_t>(int_val); + int32_t upper = static_cast<int32_t>(int_val >> (kBitsPerInt)); + + __ mov(double_store_operand, Immediate(lower)); + Operand double_store_operand2 = BuildFastArrayOperand( + instr->elements(), + instr->key(), + instr->hydrogen()->key()->representation(), + FAST_DOUBLE_ELEMENTS, + instr->base_offset() + kPointerSize); + __ mov(double_store_operand2, Immediate(upper)); + } else { + Label no_special_nan_handling; + X87Register value = ToX87Register(instr->value()); + X87Fxch(value); + + if (instr->NeedsCanonicalization()) { + __ fld(0); + __ fld(0); + __ FCmp(); + + __ j(parity_odd, &no_special_nan_handling, Label::kNear); + __ sub(esp, Immediate(kDoubleSize)); + __ fst_d(MemOperand(esp, 0)); + __ cmp(MemOperand(esp, sizeof(kHoleNanLower32)), + Immediate(kHoleNanUpper32)); + __ add(esp, Immediate(kDoubleSize)); + Label canonicalize; + __ j(not_equal, &canonicalize, Label::kNear); + __ jmp(&no_special_nan_handling, Label::kNear); + __ bind(&canonicalize); + __ fstp(0); + __ fld_d(Operand::StaticVariable(canonical_nan_reference)); + } + + __ bind(&no_special_nan_handling); + __ fst_d(double_store_operand); + } +} + + +void LCodeGen::DoStoreKeyedFixedArray(LStoreKeyed* instr) { + Register elements = ToRegister(instr->elements()); + Register key = instr->key()->IsRegister() ? ToRegister(instr->key()) : no_reg; + + Operand operand = BuildFastArrayOperand( + instr->elements(), + instr->key(), + instr->hydrogen()->key()->representation(), + FAST_ELEMENTS, + instr->base_offset()); + if (instr->value()->IsRegister()) { + __ mov(operand, ToRegister(instr->value())); + } else { + LConstantOperand* operand_value = LConstantOperand::cast(instr->value()); + if (IsSmi(operand_value)) { + Immediate immediate = ToImmediate(operand_value, Representation::Smi()); + __ mov(operand, immediate); + } else { + DCHECK(!IsInteger32(operand_value)); + Handle<Object> handle_value = ToHandle(operand_value); + __ mov(operand, handle_value); + } + } + + if (instr->hydrogen()->NeedsWriteBarrier()) { + DCHECK(instr->value()->IsRegister()); + Register value = ToRegister(instr->value()); + DCHECK(!instr->key()->IsConstantOperand()); + SmiCheck check_needed = + instr->hydrogen()->value()->type().IsHeapObject() + ? OMIT_SMI_CHECK : INLINE_SMI_CHECK; + // Compute address of modified element and store it into key register. + __ lea(key, operand); + __ RecordWrite(elements, + key, + value, + EMIT_REMEMBERED_SET, + check_needed, + instr->hydrogen()->PointersToHereCheckForValue()); + } +} + + +void LCodeGen::DoStoreKeyed(LStoreKeyed* instr) { + // By cases...external, fast-double, fast + if (instr->is_typed_elements()) { + DoStoreKeyedExternalArray(instr); + } else if (instr->hydrogen()->value()->representation().IsDouble()) { + DoStoreKeyedFixedDoubleArray(instr); + } else { + DoStoreKeyedFixedArray(instr); + } +} + + +void LCodeGen::DoStoreKeyedGeneric(LStoreKeyedGeneric* instr) { + DCHECK(ToRegister(instr->context()).is(esi)); + DCHECK(ToRegister(instr->object()).is(KeyedStoreIC::ReceiverRegister())); + DCHECK(ToRegister(instr->key()).is(KeyedStoreIC::NameRegister())); + DCHECK(ToRegister(instr->value()).is(KeyedStoreIC::ValueRegister())); + + Handle<Code> ic = instr->strict_mode() == STRICT + ? 
isolate()->builtins()->KeyedStoreIC_Initialize_Strict()
+      : isolate()->builtins()->KeyedStoreIC_Initialize();
+  CallCode(ic, RelocInfo::CODE_TARGET, instr);
+}
+
+
+void LCodeGen::DoTrapAllocationMemento(LTrapAllocationMemento* instr) {
+  Register object = ToRegister(instr->object());
+  Register temp = ToRegister(instr->temp());
+  Label no_memento_found;
+  __ TestJSArrayForAllocationMemento(object, temp, &no_memento_found);
+  DeoptimizeIf(equal, instr->environment());
+  __ bind(&no_memento_found);
+}
+
+
+void LCodeGen::DoTransitionElementsKind(LTransitionElementsKind* instr) {
+  Register object_reg = ToRegister(instr->object());
+
+  Handle<Map> from_map = instr->original_map();
+  Handle<Map> to_map = instr->transitioned_map();
+  ElementsKind from_kind = instr->from_kind();
+  ElementsKind to_kind = instr->to_kind();
+
+  Label not_applicable;
+  bool is_simple_map_transition =
+      IsSimpleMapChangeTransition(from_kind, to_kind);
+  Label::Distance branch_distance =
+      is_simple_map_transition ? Label::kNear : Label::kFar;
+  __ cmp(FieldOperand(object_reg, HeapObject::kMapOffset), from_map);
+  __ j(not_equal, &not_applicable, branch_distance);
+  if (is_simple_map_transition) {
+    Register new_map_reg = ToRegister(instr->new_map_temp());
+    __ mov(FieldOperand(object_reg, HeapObject::kMapOffset),
+           Immediate(to_map));
+    // Write barrier.
+    DCHECK_NE(instr->temp(), NULL);
+    __ RecordWriteForMap(object_reg, to_map, new_map_reg,
+                         ToRegister(instr->temp()));
+  } else {
+    DCHECK(ToRegister(instr->context()).is(esi));
+    DCHECK(object_reg.is(eax));
+    PushSafepointRegistersScope scope(this);
+    __ mov(ebx, to_map);
+    bool is_js_array = from_map->instance_type() == JS_ARRAY_TYPE;
+    TransitionElementsKindStub stub(isolate(), from_kind, to_kind, is_js_array);
+    __ CallStub(&stub);
+    RecordSafepointWithLazyDeopt(instr,
+        RECORD_SAFEPOINT_WITH_REGISTERS_AND_NO_ARGUMENTS);
+  }
+  __ bind(&not_applicable);
+}
+
+
+void LCodeGen::DoStringCharCodeAt(LStringCharCodeAt* instr) {
+  class DeferredStringCharCodeAt V8_FINAL : public LDeferredCode {
+   public:
+    DeferredStringCharCodeAt(LCodeGen* codegen,
+                             LStringCharCodeAt* instr,
+                             const X87Stack& x87_stack)
+        : LDeferredCode(codegen, x87_stack), instr_(instr) { }
+    virtual void Generate() V8_OVERRIDE {
+      codegen()->DoDeferredStringCharCodeAt(instr_);
+    }
+    virtual LInstruction* instr() V8_OVERRIDE { return instr_; }
+   private:
+    LStringCharCodeAt* instr_;
+  };
+
+  DeferredStringCharCodeAt* deferred =
+      new(zone()) DeferredStringCharCodeAt(this, instr, x87_stack_);
+
+  StringCharLoadGenerator::Generate(masm(),
+                                    factory(),
+                                    ToRegister(instr->string()),
+                                    ToRegister(instr->index()),
+                                    ToRegister(instr->result()),
+                                    deferred->entry());
+  __ bind(deferred->exit());
+}
+
+
+void LCodeGen::DoDeferredStringCharCodeAt(LStringCharCodeAt* instr) {
+  Register string = ToRegister(instr->string());
+  Register result = ToRegister(instr->result());
+
+  // TODO(3095996): Get rid of this. For now, we need to make the
+  // result register contain a valid pointer because it is already
+  // contained in the register pointer map.
+  __ Move(result, Immediate(0));
+
+  PushSafepointRegistersScope scope(this);
+  __ push(string);
+  // Push the index as a smi. This is safe because of the checks in
+  // DoStringCharCodeAt above.
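+  // The STATIC_ASSERT below is what makes the tagging safe: any valid
+  // string index is bounded by String::kMaxLength, so SmiTag cannot
+  // overflow it.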
+ STATIC_ASSERT(String::kMaxLength <= Smi::kMaxValue); + if (instr->index()->IsConstantOperand()) { + Immediate immediate = ToImmediate(LConstantOperand::cast(instr->index()), + Representation::Smi()); + __ push(immediate); + } else { + Register index = ToRegister(instr->index()); + __ SmiTag(index); + __ push(index); + } + CallRuntimeFromDeferred(Runtime::kStringCharCodeAtRT, 2, + instr, instr->context()); + __ AssertSmi(eax); + __ SmiUntag(eax); + __ StoreToSafepointRegisterSlot(result, eax); +} + + +void LCodeGen::DoStringCharFromCode(LStringCharFromCode* instr) { + class DeferredStringCharFromCode V8_FINAL : public LDeferredCode { + public: + DeferredStringCharFromCode(LCodeGen* codegen, + LStringCharFromCode* instr, + const X87Stack& x87_stack) + : LDeferredCode(codegen, x87_stack), instr_(instr) { } + virtual void Generate() V8_OVERRIDE { + codegen()->DoDeferredStringCharFromCode(instr_); + } + virtual LInstruction* instr() V8_OVERRIDE { return instr_; } + private: + LStringCharFromCode* instr_; + }; + + DeferredStringCharFromCode* deferred = + new(zone()) DeferredStringCharFromCode(this, instr, x87_stack_); + + DCHECK(instr->hydrogen()->value()->representation().IsInteger32()); + Register char_code = ToRegister(instr->char_code()); + Register result = ToRegister(instr->result()); + DCHECK(!char_code.is(result)); + + __ cmp(char_code, String::kMaxOneByteCharCode); + __ j(above, deferred->entry()); + __ Move(result, Immediate(factory()->single_character_string_cache())); + __ mov(result, FieldOperand(result, + char_code, times_pointer_size, + FixedArray::kHeaderSize)); + __ cmp(result, factory()->undefined_value()); + __ j(equal, deferred->entry()); + __ bind(deferred->exit()); +} + + +void LCodeGen::DoDeferredStringCharFromCode(LStringCharFromCode* instr) { + Register char_code = ToRegister(instr->char_code()); + Register result = ToRegister(instr->result()); + + // TODO(3095996): Get rid of this. For now, we need to make the + // result register contain a valid pointer because it is already + // contained in the register pointer map. 
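+  // Zero is a valid smi, so the GC never sees a stale heap pointer in the
+  // result register's safepoint slot across the runtime call below.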
+ __ Move(result, Immediate(0)); + + PushSafepointRegistersScope scope(this); + __ SmiTag(char_code); + __ push(char_code); + CallRuntimeFromDeferred(Runtime::kCharFromCode, 1, instr, instr->context()); + __ StoreToSafepointRegisterSlot(result, eax); +} + + +void LCodeGen::DoStringAdd(LStringAdd* instr) { + DCHECK(ToRegister(instr->context()).is(esi)); + DCHECK(ToRegister(instr->left()).is(edx)); + DCHECK(ToRegister(instr->right()).is(eax)); + StringAddStub stub(isolate(), + instr->hydrogen()->flags(), + instr->hydrogen()->pretenure_flag()); + CallCode(stub.GetCode(), RelocInfo::CODE_TARGET, instr); +} + + +void LCodeGen::DoInteger32ToDouble(LInteger32ToDouble* instr) { + LOperand* input = instr->value(); + LOperand* output = instr->result(); + DCHECK(input->IsRegister() || input->IsStackSlot()); + DCHECK(output->IsDoubleRegister()); + if (input->IsRegister()) { + Register input_reg = ToRegister(input); + __ push(input_reg); + X87Mov(ToX87Register(output), Operand(esp, 0), kX87IntOperand); + __ pop(input_reg); + } else { + X87Mov(ToX87Register(output), ToOperand(input), kX87IntOperand); + } +} + + +void LCodeGen::DoUint32ToDouble(LUint32ToDouble* instr) { + LOperand* input = instr->value(); + LOperand* output = instr->result(); + X87Register res = ToX87Register(output); + X87PrepareToWrite(res); + __ LoadUint32NoSSE2(ToRegister(input)); + X87CommitWrite(res); +} + + +void LCodeGen::DoNumberTagI(LNumberTagI* instr) { + class DeferredNumberTagI V8_FINAL : public LDeferredCode { + public: + DeferredNumberTagI(LCodeGen* codegen, + LNumberTagI* instr, + const X87Stack& x87_stack) + : LDeferredCode(codegen, x87_stack), instr_(instr) { } + virtual void Generate() V8_OVERRIDE { + codegen()->DoDeferredNumberTagIU(instr_, instr_->value(), instr_->temp(), + SIGNED_INT32); + } + virtual LInstruction* instr() V8_OVERRIDE { return instr_; } + private: + LNumberTagI* instr_; + }; + + LOperand* input = instr->value(); + DCHECK(input->IsRegister() && input->Equals(instr->result())); + Register reg = ToRegister(input); + + DeferredNumberTagI* deferred = + new(zone()) DeferredNumberTagI(this, instr, x87_stack_); + __ SmiTag(reg); + __ j(overflow, deferred->entry()); + __ bind(deferred->exit()); +} + + +void LCodeGen::DoNumberTagU(LNumberTagU* instr) { + class DeferredNumberTagU V8_FINAL : public LDeferredCode { + public: + DeferredNumberTagU(LCodeGen* codegen, + LNumberTagU* instr, + const X87Stack& x87_stack) + : LDeferredCode(codegen, x87_stack), instr_(instr) { } + virtual void Generate() V8_OVERRIDE { + codegen()->DoDeferredNumberTagIU(instr_, instr_->value(), instr_->temp(), + UNSIGNED_INT32); + } + virtual LInstruction* instr() V8_OVERRIDE { return instr_; } + private: + LNumberTagU* instr_; + }; + + LOperand* input = instr->value(); + DCHECK(input->IsRegister() && input->Equals(instr->result())); + Register reg = ToRegister(input); + + DeferredNumberTagU* deferred = + new(zone()) DeferredNumberTagU(this, instr, x87_stack_); + __ cmp(reg, Immediate(Smi::kMaxValue)); + __ j(above, deferred->entry()); + __ SmiTag(reg); + __ bind(deferred->exit()); +} + + +void LCodeGen::DoDeferredNumberTagIU(LInstruction* instr, + LOperand* value, + LOperand* temp, + IntegerSignedness signedness) { + Label done, slow; + Register reg = ToRegister(value); + Register tmp = ToRegister(temp); + + if (signedness == SIGNED_INT32) { + // There was overflow, so bits 30 and 31 of the original integer + // disagree. Try to allocate a heap number in new space and store + // the value in there. If that fails, call the runtime system. 
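+  // SmiUntag shifts the overflowed tagged value back down; since bits 30
+  // and 31 disagreed, the arithmetic shift leaves the wrong sign bit, and
+  // xor-ing with 0x80000000 flips it back, recovering the original int32
+  // before it is pushed and loaded onto the FPU stack with fild_s.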
+ __ SmiUntag(reg); + __ xor_(reg, 0x80000000); + __ push(reg); + __ fild_s(Operand(esp, 0)); + __ pop(reg); + } else { + // There's no fild variant for unsigned values, so zero-extend to a 64-bit + // int manually. + __ push(Immediate(0)); + __ push(reg); + __ fild_d(Operand(esp, 0)); + __ pop(reg); + __ pop(reg); + } + + if (FLAG_inline_new) { + __ AllocateHeapNumber(reg, tmp, no_reg, &slow); + __ jmp(&done, Label::kNear); + } + + // Slow case: Call the runtime system to do the number allocation. + __ bind(&slow); + { + // TODO(3095996): Put a valid pointer value in the stack slot where the + // result register is stored, as this register is in the pointer map, but + // contains an integer value. + __ Move(reg, Immediate(0)); + + // Preserve the value of all registers. + PushSafepointRegistersScope scope(this); + + // NumberTagI and NumberTagD use the context from the frame, rather than + // the environment's HContext or HInlinedContext value. + // They only call Runtime::kAllocateHeapNumber. + // The corresponding HChange instructions are added in a phase that does + // not have easy access to the local context. + __ mov(esi, Operand(ebp, StandardFrameConstants::kContextOffset)); + __ CallRuntime(Runtime::kAllocateHeapNumber); + RecordSafepointWithRegisters( + instr->pointer_map(), 0, Safepoint::kNoLazyDeopt); + __ StoreToSafepointRegisterSlot(reg, eax); + } + + __ bind(&done); + __ fstp_d(FieldOperand(reg, HeapNumber::kValueOffset)); +} + + +void LCodeGen::DoNumberTagD(LNumberTagD* instr) { + class DeferredNumberTagD V8_FINAL : public LDeferredCode { + public: + DeferredNumberTagD(LCodeGen* codegen, + LNumberTagD* instr, + const X87Stack& x87_stack) + : LDeferredCode(codegen, x87_stack), instr_(instr) { } + virtual void Generate() V8_OVERRIDE { + codegen()->DoDeferredNumberTagD(instr_); + } + virtual LInstruction* instr() V8_OVERRIDE { return instr_; } + private: + LNumberTagD* instr_; + }; + + Register reg = ToRegister(instr->result()); + + // Put the value to the top of stack + X87Register src = ToX87Register(instr->value()); + X87LoadForUsage(src); + + DeferredNumberTagD* deferred = + new(zone()) DeferredNumberTagD(this, instr, x87_stack_); + if (FLAG_inline_new) { + Register tmp = ToRegister(instr->temp()); + __ AllocateHeapNumber(reg, tmp, no_reg, deferred->entry()); + } else { + __ jmp(deferred->entry()); + } + __ bind(deferred->exit()); + __ fstp_d(FieldOperand(reg, HeapNumber::kValueOffset)); +} + + +void LCodeGen::DoDeferredNumberTagD(LNumberTagD* instr) { + // TODO(3095996): Get rid of this. For now, we need to make the + // result register contain a valid pointer because it is already + // contained in the register pointer map. + Register reg = ToRegister(instr->result()); + __ Move(reg, Immediate(0)); + + PushSafepointRegistersScope scope(this); + // NumberTagI and NumberTagD use the context from the frame, rather than + // the environment's HContext or HInlinedContext value. + // They only call Runtime::kAllocateHeapNumber. + // The corresponding HChange instructions are added in a phase that does + // not have easy access to the local context. 
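+  // The double being boxed is still live on top of the x87 stack here; the
+  // fstp_d at the caller's deferred exit spills it into the value field of
+  // the HeapNumber allocated below.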
+ __ mov(esi, Operand(ebp, StandardFrameConstants::kContextOffset)); + __ CallRuntime(Runtime::kAllocateHeapNumber); + RecordSafepointWithRegisters( + instr->pointer_map(), 0, Safepoint::kNoLazyDeopt); + __ StoreToSafepointRegisterSlot(reg, eax); +} + + +void LCodeGen::DoSmiTag(LSmiTag* instr) { + HChange* hchange = instr->hydrogen(); + Register input = ToRegister(instr->value()); + if (hchange->CheckFlag(HValue::kCanOverflow) && + hchange->value()->CheckFlag(HValue::kUint32)) { + __ test(input, Immediate(0xc0000000)); + DeoptimizeIf(not_zero, instr->environment()); + } + __ SmiTag(input); + if (hchange->CheckFlag(HValue::kCanOverflow) && + !hchange->value()->CheckFlag(HValue::kUint32)) { + DeoptimizeIf(overflow, instr->environment()); + } +} + + +void LCodeGen::DoSmiUntag(LSmiUntag* instr) { + LOperand* input = instr->value(); + Register result = ToRegister(input); + DCHECK(input->IsRegister() && input->Equals(instr->result())); + if (instr->needs_check()) { + __ test(result, Immediate(kSmiTagMask)); + DeoptimizeIf(not_zero, instr->environment()); + } else { + __ AssertSmi(result); + } + __ SmiUntag(result); +} + + +void LCodeGen::EmitNumberUntagDNoSSE2(Register input_reg, + Register temp_reg, + X87Register res_reg, + bool can_convert_undefined_to_nan, + bool deoptimize_on_minus_zero, + LEnvironment* env, + NumberUntagDMode mode) { + Label load_smi, done; + + X87PrepareToWrite(res_reg); + if (mode == NUMBER_CANDIDATE_IS_ANY_TAGGED) { + // Smi check. + __ JumpIfSmi(input_reg, &load_smi, Label::kNear); + + // Heap number map check. + __ cmp(FieldOperand(input_reg, HeapObject::kMapOffset), + factory()->heap_number_map()); + if (!can_convert_undefined_to_nan) { + DeoptimizeIf(not_equal, env); + } else { + Label heap_number, convert; + __ j(equal, &heap_number, Label::kNear); + + // Convert undefined (or hole) to NaN. + __ cmp(input_reg, factory()->undefined_value()); + DeoptimizeIf(not_equal, env); + + __ bind(&convert); + ExternalReference nan = + ExternalReference::address_of_canonical_non_hole_nan(); + __ fld_d(Operand::StaticVariable(nan)); + __ jmp(&done, Label::kNear); + + __ bind(&heap_number); + } + // Heap number to x87 conversion. + __ fld_d(FieldOperand(input_reg, HeapNumber::kValueOffset)); + if (deoptimize_on_minus_zero) { + __ fldz(); + __ FCmp(); + __ fld_d(FieldOperand(input_reg, HeapNumber::kValueOffset)); + __ j(not_zero, &done, Label::kNear); + + // Use general purpose registers to check if we have -0.0 + __ mov(temp_reg, FieldOperand(input_reg, HeapNumber::kExponentOffset)); + __ test(temp_reg, Immediate(HeapNumber::kSignMask)); + __ j(zero, &done, Label::kNear); + + // Pop FPU stack before deoptimizing. + __ fstp(0); + DeoptimizeIf(not_zero, env); + } + __ jmp(&done, Label::kNear); + } else { + DCHECK(mode == NUMBER_CANDIDATE_IS_SMI); + } + + __ bind(&load_smi); + // Clobbering a temp is faster than re-tagging the + // input register since we avoid dependencies. + __ mov(temp_reg, input_reg); + __ SmiUntag(temp_reg); // Untag smi before converting to float. + __ push(temp_reg); + __ fild_s(Operand(esp, 0)); + __ add(esp, Immediate(kPointerSize)); + __ bind(&done); + X87CommitWrite(res_reg); +} + + +void LCodeGen::DoDeferredTaggedToI(LTaggedToI* instr, Label* done) { + Register input_reg = ToRegister(instr->value()); + + // The input was optimistically untagged; revert it. 
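+  // The caller's SmiUntag halved the value; for a heap object pointer the
+  // lea below doubles it again and re-adds kHeapObjectTag, reconstructing
+  // the original pointer.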
+ STATIC_ASSERT(kSmiTagSize == 1); + __ lea(input_reg, Operand(input_reg, times_2, kHeapObjectTag)); + + if (instr->truncating()) { + Label no_heap_number, check_bools, check_false; + + // Heap number map check. + __ cmp(FieldOperand(input_reg, HeapObject::kMapOffset), + factory()->heap_number_map()); + __ j(not_equal, &no_heap_number, Label::kNear); + __ TruncateHeapNumberToI(input_reg, input_reg); + __ jmp(done); + + __ bind(&no_heap_number); + // Check for Oddballs. Undefined/False is converted to zero and True to one + // for truncating conversions. + __ cmp(input_reg, factory()->undefined_value()); + __ j(not_equal, &check_bools, Label::kNear); + __ Move(input_reg, Immediate(0)); + __ jmp(done); + + __ bind(&check_bools); + __ cmp(input_reg, factory()->true_value()); + __ j(not_equal, &check_false, Label::kNear); + __ Move(input_reg, Immediate(1)); + __ jmp(done); + + __ bind(&check_false); + __ cmp(input_reg, factory()->false_value()); + __ RecordComment("Deferred TaggedToI: cannot truncate"); + DeoptimizeIf(not_equal, instr->environment()); + __ Move(input_reg, Immediate(0)); + } else { + Label bailout; + __ TaggedToI(input_reg, input_reg, + instr->hydrogen()->GetMinusZeroMode(), &bailout); + __ jmp(done); + __ bind(&bailout); + DeoptimizeIf(no_condition, instr->environment()); + } +} + + +void LCodeGen::DoTaggedToI(LTaggedToI* instr) { + class DeferredTaggedToI V8_FINAL : public LDeferredCode { + public: + DeferredTaggedToI(LCodeGen* codegen, + LTaggedToI* instr, + const X87Stack& x87_stack) + : LDeferredCode(codegen, x87_stack), instr_(instr) { } + virtual void Generate() V8_OVERRIDE { + codegen()->DoDeferredTaggedToI(instr_, done()); + } + virtual LInstruction* instr() V8_OVERRIDE { return instr_; } + private: + LTaggedToI* instr_; + }; + + LOperand* input = instr->value(); + DCHECK(input->IsRegister()); + Register input_reg = ToRegister(input); + DCHECK(input_reg.is(ToRegister(instr->result()))); + + if (instr->hydrogen()->value()->representation().IsSmi()) { + __ SmiUntag(input_reg); + } else { + DeferredTaggedToI* deferred = + new(zone()) DeferredTaggedToI(this, instr, x87_stack_); + // Optimistically untag the input. + // If the input is a HeapObject, SmiUntag will set the carry flag. + STATIC_ASSERT(kSmiTagSize == 1 && kSmiTag == 0); + __ SmiUntag(input_reg); + // Branch to deferred code if the input was tagged. + // The deferred code will take care of restoring the tag. + __ j(carry, deferred->entry()); + __ bind(deferred->exit()); + } +} + + +void LCodeGen::DoNumberUntagD(LNumberUntagD* instr) { + LOperand* input = instr->value(); + DCHECK(input->IsRegister()); + LOperand* temp = instr->temp(); + DCHECK(temp->IsRegister()); + LOperand* result = instr->result(); + DCHECK(result->IsDoubleRegister()); + + Register input_reg = ToRegister(input); + bool deoptimize_on_minus_zero = + instr->hydrogen()->deoptimize_on_minus_zero(); + Register temp_reg = ToRegister(temp); + + HValue* value = instr->hydrogen()->value(); + NumberUntagDMode mode = value->representation().IsSmi() + ? 
NUMBER_CANDIDATE_IS_SMI : NUMBER_CANDIDATE_IS_ANY_TAGGED; + + EmitNumberUntagDNoSSE2(input_reg, + temp_reg, + ToX87Register(result), + instr->hydrogen()->can_convert_undefined_to_nan(), + deoptimize_on_minus_zero, + instr->environment(), + mode); +} + + +void LCodeGen::DoDoubleToI(LDoubleToI* instr) { + LOperand* input = instr->value(); + DCHECK(input->IsDoubleRegister()); + LOperand* result = instr->result(); + DCHECK(result->IsRegister()); + Register result_reg = ToRegister(result); + + if (instr->truncating()) { + X87Register input_reg = ToX87Register(input); + X87Fxch(input_reg); + __ TruncateX87TOSToI(result_reg); + } else { + Label bailout, done; + X87Register input_reg = ToX87Register(input); + X87Fxch(input_reg); + __ X87TOSToI(result_reg, instr->hydrogen()->GetMinusZeroMode(), + &bailout, Label::kNear); + __ jmp(&done, Label::kNear); + __ bind(&bailout); + DeoptimizeIf(no_condition, instr->environment()); + __ bind(&done); + } +} + + +void LCodeGen::DoDoubleToSmi(LDoubleToSmi* instr) { + LOperand* input = instr->value(); + DCHECK(input->IsDoubleRegister()); + LOperand* result = instr->result(); + DCHECK(result->IsRegister()); + Register result_reg = ToRegister(result); + + Label bailout, done; + X87Register input_reg = ToX87Register(input); + X87Fxch(input_reg); + __ X87TOSToI(result_reg, instr->hydrogen()->GetMinusZeroMode(), + &bailout, Label::kNear); + __ jmp(&done, Label::kNear); + __ bind(&bailout); + DeoptimizeIf(no_condition, instr->environment()); + __ bind(&done); + + __ SmiTag(result_reg); + DeoptimizeIf(overflow, instr->environment()); +} + + +void LCodeGen::DoCheckSmi(LCheckSmi* instr) { + LOperand* input = instr->value(); + __ test(ToOperand(input), Immediate(kSmiTagMask)); + DeoptimizeIf(not_zero, instr->environment()); +} + + +void LCodeGen::DoCheckNonSmi(LCheckNonSmi* instr) { + if (!instr->hydrogen()->value()->type().IsHeapObject()) { + LOperand* input = instr->value(); + __ test(ToOperand(input), Immediate(kSmiTagMask)); + DeoptimizeIf(zero, instr->environment()); + } +} + + +void LCodeGen::DoCheckInstanceType(LCheckInstanceType* instr) { + Register input = ToRegister(instr->value()); + Register temp = ToRegister(instr->temp()); + + __ mov(temp, FieldOperand(input, HeapObject::kMapOffset)); + + if (instr->hydrogen()->is_interval_check()) { + InstanceType first; + InstanceType last; + instr->hydrogen()->GetCheckInterval(&first, &last); + + __ cmpb(FieldOperand(temp, Map::kInstanceTypeOffset), + static_cast<int8_t>(first)); + + // If there is only one type in the interval check for equality. + if (first == last) { + DeoptimizeIf(not_equal, instr->environment()); + } else { + DeoptimizeIf(below, instr->environment()); + // Omit check for the last type. + if (last != LAST_TYPE) { + __ cmpb(FieldOperand(temp, Map::kInstanceTypeOffset), + static_cast<int8_t>(last)); + DeoptimizeIf(above, instr->environment()); + } + } + } else { + uint8_t mask; + uint8_t tag; + instr->hydrogen()->GetCheckMaskAndTag(&mask, &tag); + + if (IsPowerOf2(mask)) { + DCHECK(tag == 0 || IsPowerOf2(tag)); + __ test_b(FieldOperand(temp, Map::kInstanceTypeOffset), mask); + DeoptimizeIf(tag == 0 ? 
not_zero : zero, instr->environment()); + } else { + __ movzx_b(temp, FieldOperand(temp, Map::kInstanceTypeOffset)); + __ and_(temp, mask); + __ cmp(temp, tag); + DeoptimizeIf(not_equal, instr->environment()); + } + } +} + + +void LCodeGen::DoCheckValue(LCheckValue* instr) { + Handle<HeapObject> object = instr->hydrogen()->object().handle(); + if (instr->hydrogen()->object_in_new_space()) { + Register reg = ToRegister(instr->value()); + Handle<Cell> cell = isolate()->factory()->NewCell(object); + __ cmp(reg, Operand::ForCell(cell)); + } else { + Operand operand = ToOperand(instr->value()); + __ cmp(operand, object); + } + DeoptimizeIf(not_equal, instr->environment()); +} + + +void LCodeGen::DoDeferredInstanceMigration(LCheckMaps* instr, Register object) { + { + PushSafepointRegistersScope scope(this); + __ push(object); + __ xor_(esi, esi); + __ CallRuntime(Runtime::kTryMigrateInstance); + RecordSafepointWithRegisters( + instr->pointer_map(), 1, Safepoint::kNoLazyDeopt); + + __ test(eax, Immediate(kSmiTagMask)); + } + DeoptimizeIf(zero, instr->environment()); +} + + +void LCodeGen::DoCheckMaps(LCheckMaps* instr) { + class DeferredCheckMaps V8_FINAL : public LDeferredCode { + public: + DeferredCheckMaps(LCodeGen* codegen, + LCheckMaps* instr, + Register object, + const X87Stack& x87_stack) + : LDeferredCode(codegen, x87_stack), instr_(instr), object_(object) { + SetExit(check_maps()); + } + virtual void Generate() V8_OVERRIDE { + codegen()->DoDeferredInstanceMigration(instr_, object_); + } + Label* check_maps() { return &check_maps_; } + virtual LInstruction* instr() V8_OVERRIDE { return instr_; } + private: + LCheckMaps* instr_; + Label check_maps_; + Register object_; + }; + + if (instr->hydrogen()->IsStabilityCheck()) { + const UniqueSet<Map>* maps = instr->hydrogen()->maps(); + for (int i = 0; i < maps->size(); ++i) { + AddStabilityDependency(maps->at(i).handle()); + } + return; + } + + LOperand* input = instr->value(); + DCHECK(input->IsRegister()); + Register reg = ToRegister(input); + + DeferredCheckMaps* deferred = NULL; + if (instr->hydrogen()->HasMigrationTarget()) { + deferred = new(zone()) DeferredCheckMaps(this, instr, reg, x87_stack_); + __ bind(deferred->check_maps()); + } + + const UniqueSet<Map>* maps = instr->hydrogen()->maps(); + Label success; + for (int i = 0; i < maps->size() - 1; i++) { + Handle<Map> map = maps->at(i).handle(); + __ CompareMap(reg, map); + __ j(equal, &success, Label::kNear); + } + + Handle<Map> map = maps->at(maps->size() - 1).handle(); + __ CompareMap(reg, map); + if (instr->hydrogen()->HasMigrationTarget()) { + __ j(not_equal, deferred->entry()); + } else { + DeoptimizeIf(not_equal, instr->environment()); + } + + __ bind(&success); +} + + +void LCodeGen::DoClampDToUint8(LClampDToUint8* instr) { + UNREACHABLE(); +} + + +void LCodeGen::DoClampIToUint8(LClampIToUint8* instr) { + DCHECK(instr->unclamped()->Equals(instr->result())); + Register value_reg = ToRegister(instr->result()); + __ ClampUint8(value_reg); +} + + +void LCodeGen::DoClampTToUint8NoSSE2(LClampTToUint8NoSSE2* instr) { + Register input_reg = ToRegister(instr->unclamped()); + Register result_reg = ToRegister(instr->result()); + Register scratch = ToRegister(instr->scratch()); + Register scratch2 = ToRegister(instr->scratch2()); + Register scratch3 = ToRegister(instr->scratch3()); + Label is_smi, done, heap_number, valid_exponent, + largest_value, zero_result, maybe_nan_or_infinity; + + __ JumpIfSmi(input_reg, &is_smi); + + // Check for heap number + __ cmp(FieldOperand(input_reg, 
HeapObject::kMapOffset),
+         factory()->heap_number_map());
+  __ j(equal, &heap_number, Label::kNear);
+
+  // Check for undefined. Undefined is converted to zero for clamping
+  // conversions.
+  __ cmp(input_reg, factory()->undefined_value());
+  DeoptimizeIf(not_equal, instr->environment());
+  __ jmp(&zero_result, Label::kNear);
+
+  // Heap number.
+  __ bind(&heap_number);
+
+  // Surprisingly, all of the hand-crafted bit manipulations below are much
+  // faster than the x86 FPU built-in instruction, especially since "banker's
+  // rounding" would be very expensive on top of that.
+
+  // Get exponent word.
+  __ mov(scratch, FieldOperand(input_reg, HeapNumber::kExponentOffset));
+  __ mov(scratch3, FieldOperand(input_reg, HeapNumber::kMantissaOffset));
+
+  // Test for negative values --> clamp to zero.
+  __ test(scratch, scratch);
+  __ j(negative, &zero_result, Label::kNear);
+
+  // Get exponent alone in scratch2.
+  __ mov(scratch2, scratch);
+  __ and_(scratch2, HeapNumber::kExponentMask);
+  __ shr(scratch2, HeapNumber::kExponentShift);
+  __ j(zero, &zero_result, Label::kNear);
+  __ sub(scratch2, Immediate(HeapNumber::kExponentBias - 1));
+  __ j(negative, &zero_result, Label::kNear);
+
+  const uint32_t non_int8_exponent = 7;
+  __ cmp(scratch2, Immediate(non_int8_exponent + 1));
+  // If the exponent is too big, check for special values.
+  __ j(greater, &maybe_nan_or_infinity, Label::kNear);
+
+  __ bind(&valid_exponent);
+  // Exponent word in scratch, exponent in scratch2. We know that
+  // 0 <= exponent < 7. The shift bias is the number of bits to shift the
+  // mantissa such that, with an exponent of 7, the top-most one ends up in
+  // bit 30, allowing detection of the rounding overflow of 255.5 to 256
+  // (bit 31 goes from 0 to 1).
+  int shift_bias = (30 - HeapNumber::kExponentShift) - 7 - 1;
+  __ lea(result_reg, MemOperand(scratch2, shift_bias));
+  // Here result_reg (ecx) is the shift, scratch is the exponent word. Get the
+  // top bits of the mantissa.
+  __ and_(scratch, HeapNumber::kMantissaMask);
+  // Put back the implicit 1 of the mantissa.
+  __ or_(scratch, 1 << HeapNumber::kExponentShift);
+  // Shift up to round.
+  __ shl_cl(scratch);
+  // Use "banker's rounding" per spec: if the fractional part of the number
+  // is exactly 0.5, take the bit in the "ones" place and add it to the
+  // "halves" place, which has the effect of rounding to even.
+  __ mov(scratch2, scratch);
+  const uint32_t one_half_bit_shift = 30 - sizeof(uint8_t) * 8;
+  const uint32_t one_bit_shift = one_half_bit_shift + 1;
+  __ and_(scratch2, Immediate((1 << one_bit_shift) - 1));
+  __ cmp(scratch2, Immediate(1 << one_half_bit_shift));
+  Label no_round;
+  __ j(less, &no_round, Label::kNear);
+  Label round_up;
+  __ mov(scratch2, Immediate(1 << one_half_bit_shift));
+  __ j(greater, &round_up, Label::kNear);
+  __ test(scratch3, scratch3);
+  __ j(not_zero, &round_up, Label::kNear);
+  __ mov(scratch2, scratch);
+  __ and_(scratch2, Immediate(1 << one_bit_shift));
+  __ shr(scratch2, 1);
+  __ bind(&round_up);
+  __ add(scratch, scratch2);
+  __ j(overflow, &largest_value, Label::kNear);
+  __ bind(&no_round);
+  __ shr(scratch, 23);
+  __ mov(result_reg, scratch);
+  __ jmp(&done, Label::kNear);
+
+  __ bind(&maybe_nan_or_infinity);
+  // Check for NaN/Infinity; all other values map to 255.
+  __ cmp(scratch2, Immediate(HeapNumber::kInfinityOrNanExponent + 1));
+  __ j(not_equal, &largest_value, Label::kNear);
+
+  // Check for NaN, which differs from Infinity in that at least one mantissa
+  // bit is set.
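+  // A non-zero mantissa below therefore means NaN, which clamps to zero;
+  // Infinity falls through to &largest_value and clamps to 255.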
+ __ and_(scratch, HeapNumber::kMantissaMask); + __ or_(scratch, FieldOperand(input_reg, HeapNumber::kMantissaOffset)); + __ j(not_zero, &zero_result, Label::kNear); // M!=0 --> NaN + // Infinity -> Fall through to map to 255. + + __ bind(&largest_value); + __ mov(result_reg, Immediate(255)); + __ jmp(&done, Label::kNear); + + __ bind(&zero_result); + __ xor_(result_reg, result_reg); + __ jmp(&done, Label::kNear); + + // smi + __ bind(&is_smi); + if (!input_reg.is(result_reg)) { + __ mov(result_reg, input_reg); + } + __ SmiUntag(result_reg); + __ ClampUint8(result_reg); + __ bind(&done); +} + + +void LCodeGen::DoDoubleBits(LDoubleBits* instr) { + UNREACHABLE(); +} + + +void LCodeGen::DoConstructDouble(LConstructDouble* instr) { + UNREACHABLE(); +} + + +void LCodeGen::DoAllocate(LAllocate* instr) { + class DeferredAllocate V8_FINAL : public LDeferredCode { + public: + DeferredAllocate(LCodeGen* codegen, + LAllocate* instr, + const X87Stack& x87_stack) + : LDeferredCode(codegen, x87_stack), instr_(instr) { } + virtual void Generate() V8_OVERRIDE { + codegen()->DoDeferredAllocate(instr_); + } + virtual LInstruction* instr() V8_OVERRIDE { return instr_; } + private: + LAllocate* instr_; + }; + + DeferredAllocate* deferred = + new(zone()) DeferredAllocate(this, instr, x87_stack_); + + Register result = ToRegister(instr->result()); + Register temp = ToRegister(instr->temp()); + + // Allocate memory for the object. + AllocationFlags flags = TAG_OBJECT; + if (instr->hydrogen()->MustAllocateDoubleAligned()) { + flags = static_cast<AllocationFlags>(flags | DOUBLE_ALIGNMENT); + } + if (instr->hydrogen()->IsOldPointerSpaceAllocation()) { + DCHECK(!instr->hydrogen()->IsOldDataSpaceAllocation()); + DCHECK(!instr->hydrogen()->IsNewSpaceAllocation()); + flags = static_cast<AllocationFlags>(flags | PRETENURE_OLD_POINTER_SPACE); + } else if (instr->hydrogen()->IsOldDataSpaceAllocation()) { + DCHECK(!instr->hydrogen()->IsNewSpaceAllocation()); + flags = static_cast<AllocationFlags>(flags | PRETENURE_OLD_DATA_SPACE); + } + + if (instr->size()->IsConstantOperand()) { + int32_t size = ToInteger32(LConstantOperand::cast(instr->size())); + if (size <= Page::kMaxRegularHeapObjectSize) { + __ Allocate(size, result, temp, no_reg, deferred->entry(), flags); + } else { + __ jmp(deferred->entry()); + } + } else { + Register size = ToRegister(instr->size()); + __ Allocate(size, result, temp, no_reg, deferred->entry(), flags); + } + + __ bind(deferred->exit()); + + if (instr->hydrogen()->MustPrefillWithFiller()) { + if (instr->size()->IsConstantOperand()) { + int32_t size = ToInteger32(LConstantOperand::cast(instr->size())); + __ mov(temp, (size / kPointerSize) - 1); + } else { + temp = ToRegister(instr->size()); + __ shr(temp, kPointerSizeLog2); + __ dec(temp); + } + Label loop; + __ bind(&loop); + __ mov(FieldOperand(result, temp, times_pointer_size, 0), + isolate()->factory()->one_pointer_filler_map()); + __ dec(temp); + __ j(not_zero, &loop); + } +} + + +void LCodeGen::DoDeferredAllocate(LAllocate* instr) { + Register result = ToRegister(instr->result()); + + // TODO(3095996): Get rid of this. For now, we need to make the + // result register contain a valid pointer because it is already + // contained in the register pointer map. 
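+  // The runtime call below expects smi arguments, which is why the size is
+  // tagged before being pushed and why a constant size outside the smi
+  // range aborts with int3 instead.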
+ __ Move(result, Immediate(Smi::FromInt(0))); + + PushSafepointRegistersScope scope(this); + if (instr->size()->IsRegister()) { + Register size = ToRegister(instr->size()); + DCHECK(!size.is(result)); + __ SmiTag(ToRegister(instr->size())); + __ push(size); + } else { + int32_t size = ToInteger32(LConstantOperand::cast(instr->size())); + if (size >= 0 && size <= Smi::kMaxValue) { + __ push(Immediate(Smi::FromInt(size))); + } else { + // We should never get here at runtime => abort + __ int3(); + return; + } + } + + int flags = AllocateDoubleAlignFlag::encode( + instr->hydrogen()->MustAllocateDoubleAligned()); + if (instr->hydrogen()->IsOldPointerSpaceAllocation()) { + DCHECK(!instr->hydrogen()->IsOldDataSpaceAllocation()); + DCHECK(!instr->hydrogen()->IsNewSpaceAllocation()); + flags = AllocateTargetSpace::update(flags, OLD_POINTER_SPACE); + } else if (instr->hydrogen()->IsOldDataSpaceAllocation()) { + DCHECK(!instr->hydrogen()->IsNewSpaceAllocation()); + flags = AllocateTargetSpace::update(flags, OLD_DATA_SPACE); + } else { + flags = AllocateTargetSpace::update(flags, NEW_SPACE); + } + __ push(Immediate(Smi::FromInt(flags))); + + CallRuntimeFromDeferred( + Runtime::kAllocateInTargetSpace, 2, instr, instr->context()); + __ StoreToSafepointRegisterSlot(result, eax); +} + + +void LCodeGen::DoToFastProperties(LToFastProperties* instr) { + DCHECK(ToRegister(instr->value()).is(eax)); + __ push(eax); + CallRuntime(Runtime::kToFastProperties, 1, instr); +} + + +void LCodeGen::DoRegExpLiteral(LRegExpLiteral* instr) { + DCHECK(ToRegister(instr->context()).is(esi)); + Label materialized; + // Registers will be used as follows: + // ecx = literals array. + // ebx = regexp literal. + // eax = regexp literal clone. + // esi = context. + int literal_offset = + FixedArray::OffsetOfElementAt(instr->hydrogen()->literal_index()); + __ LoadHeapObject(ecx, instr->hydrogen()->literals()); + __ mov(ebx, FieldOperand(ecx, literal_offset)); + __ cmp(ebx, factory()->undefined_value()); + __ j(not_equal, &materialized, Label::kNear); + + // Create regexp literal using runtime function + // Result will be in eax. + __ push(ecx); + __ push(Immediate(Smi::FromInt(instr->hydrogen()->literal_index()))); + __ push(Immediate(instr->hydrogen()->pattern())); + __ push(Immediate(instr->hydrogen()->flags())); + CallRuntime(Runtime::kMaterializeRegExpLiteral, 4, instr); + __ mov(ebx, eax); + + __ bind(&materialized); + int size = JSRegExp::kSize + JSRegExp::kInObjectFieldCount * kPointerSize; + Label allocated, runtime_allocate; + __ Allocate(size, eax, ecx, edx, &runtime_allocate, TAG_OBJECT); + __ jmp(&allocated, Label::kNear); + + __ bind(&runtime_allocate); + __ push(ebx); + __ push(Immediate(Smi::FromInt(size))); + CallRuntime(Runtime::kAllocateInNewSpace, 1, instr); + __ pop(ebx); + + __ bind(&allocated); + // Copy the content into the newly allocated memory. + // (Unroll copy loop once for better throughput). 
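+  // The loop copies two words per iteration; the trailing check picks up
+  // the odd word when the object size is not a multiple of
+  // 2 * kPointerSize.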
+ for (int i = 0; i < size - kPointerSize; i += 2 * kPointerSize) { + __ mov(edx, FieldOperand(ebx, i)); + __ mov(ecx, FieldOperand(ebx, i + kPointerSize)); + __ mov(FieldOperand(eax, i), edx); + __ mov(FieldOperand(eax, i + kPointerSize), ecx); + } + if ((size % (2 * kPointerSize)) != 0) { + __ mov(edx, FieldOperand(ebx, size - kPointerSize)); + __ mov(FieldOperand(eax, size - kPointerSize), edx); + } +} + + +void LCodeGen::DoFunctionLiteral(LFunctionLiteral* instr) { + DCHECK(ToRegister(instr->context()).is(esi)); + // Use the fast case closure allocation code that allocates in new + // space for nested functions that don't need literals cloning. + bool pretenure = instr->hydrogen()->pretenure(); + if (!pretenure && instr->hydrogen()->has_no_literals()) { + FastNewClosureStub stub(isolate(), + instr->hydrogen()->strict_mode(), + instr->hydrogen()->is_generator()); + __ mov(ebx, Immediate(instr->hydrogen()->shared_info())); + CallCode(stub.GetCode(), RelocInfo::CODE_TARGET, instr); + } else { + __ push(esi); + __ push(Immediate(instr->hydrogen()->shared_info())); + __ push(Immediate(pretenure ? factory()->true_value() + : factory()->false_value())); + CallRuntime(Runtime::kNewClosure, 3, instr); + } +} + + +void LCodeGen::DoTypeof(LTypeof* instr) { + DCHECK(ToRegister(instr->context()).is(esi)); + LOperand* input = instr->value(); + EmitPushTaggedOperand(input); + CallRuntime(Runtime::kTypeof, 1, instr); +} + + +void LCodeGen::DoTypeofIsAndBranch(LTypeofIsAndBranch* instr) { + Register input = ToRegister(instr->value()); + Condition final_branch_condition = EmitTypeofIs(instr, input); + if (final_branch_condition != no_condition) { + EmitBranch(instr, final_branch_condition); + } +} + + +Condition LCodeGen::EmitTypeofIs(LTypeofIsAndBranch* instr, Register input) { + Label* true_label = instr->TrueLabel(chunk_); + Label* false_label = instr->FalseLabel(chunk_); + Handle<String> type_name = instr->type_literal(); + int left_block = instr->TrueDestination(chunk_); + int right_block = instr->FalseDestination(chunk_); + int next_block = GetNextEmittedBlock(); + + Label::Distance true_distance = left_block == next_block ? Label::kNear + : Label::kFar; + Label::Distance false_distance = right_block == next_block ? 
Label::kNear + : Label::kFar; + Condition final_branch_condition = no_condition; + if (String::Equals(type_name, factory()->number_string())) { + __ JumpIfSmi(input, true_label, true_distance); + __ cmp(FieldOperand(input, HeapObject::kMapOffset), + factory()->heap_number_map()); + final_branch_condition = equal; + + } else if (String::Equals(type_name, factory()->string_string())) { + __ JumpIfSmi(input, false_label, false_distance); + __ CmpObjectType(input, FIRST_NONSTRING_TYPE, input); + __ j(above_equal, false_label, false_distance); + __ test_b(FieldOperand(input, Map::kBitFieldOffset), + 1 << Map::kIsUndetectable); + final_branch_condition = zero; + + } else if (String::Equals(type_name, factory()->symbol_string())) { + __ JumpIfSmi(input, false_label, false_distance); + __ CmpObjectType(input, SYMBOL_TYPE, input); + final_branch_condition = equal; + + } else if (String::Equals(type_name, factory()->boolean_string())) { + __ cmp(input, factory()->true_value()); + __ j(equal, true_label, true_distance); + __ cmp(input, factory()->false_value()); + final_branch_condition = equal; + + } else if (String::Equals(type_name, factory()->undefined_string())) { + __ cmp(input, factory()->undefined_value()); + __ j(equal, true_label, true_distance); + __ JumpIfSmi(input, false_label, false_distance); + // Check for undetectable objects => true. + __ mov(input, FieldOperand(input, HeapObject::kMapOffset)); + __ test_b(FieldOperand(input, Map::kBitFieldOffset), + 1 << Map::kIsUndetectable); + final_branch_condition = not_zero; + + } else if (String::Equals(type_name, factory()->function_string())) { + STATIC_ASSERT(NUM_OF_CALLABLE_SPEC_OBJECT_TYPES == 2); + __ JumpIfSmi(input, false_label, false_distance); + __ CmpObjectType(input, JS_FUNCTION_TYPE, input); + __ j(equal, true_label, true_distance); + __ CmpInstanceType(input, JS_FUNCTION_PROXY_TYPE); + final_branch_condition = equal; + + } else if (String::Equals(type_name, factory()->object_string())) { + __ JumpIfSmi(input, false_label, false_distance); + __ cmp(input, factory()->null_value()); + __ j(equal, true_label, true_distance); + __ CmpObjectType(input, FIRST_NONCALLABLE_SPEC_OBJECT_TYPE, input); + __ j(below, false_label, false_distance); + __ CmpInstanceType(input, LAST_NONCALLABLE_SPEC_OBJECT_TYPE); + __ j(above, false_label, false_distance); + // Check for undetectable objects => false. + __ test_b(FieldOperand(input, Map::kBitFieldOffset), + 1 << Map::kIsUndetectable); + final_branch_condition = zero; + + } else { + __ jmp(false_label, false_distance); + } + return final_branch_condition; +} + + +void LCodeGen::DoIsConstructCallAndBranch(LIsConstructCallAndBranch* instr) { + Register temp = ToRegister(instr->temp()); + + EmitIsConstructCall(temp); + EmitBranch(instr, equal); +} + + +void LCodeGen::EmitIsConstructCall(Register temp) { + // Get the frame pointer for the calling frame. + __ mov(temp, Operand(ebp, StandardFrameConstants::kCallerFPOffset)); + + // Skip the arguments adaptor frame if it exists. + Label check_frame_marker; + __ cmp(Operand(temp, StandardFrameConstants::kContextOffset), + Immediate(Smi::FromInt(StackFrame::ARGUMENTS_ADAPTOR))); + __ j(not_equal, &check_frame_marker, Label::kNear); + __ mov(temp, Operand(temp, StandardFrameConstants::kCallerFPOffset)); + + // Check the marker in the calling frame. 
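+  // Only the flags are set here; DoIsConstructCallAndBranch branches on
+  // equal, i.e. on whether the marker is StackFrame::CONSTRUCT.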
+ __ bind(&check_frame_marker); + __ cmp(Operand(temp, StandardFrameConstants::kMarkerOffset), + Immediate(Smi::FromInt(StackFrame::CONSTRUCT))); +} + + +void LCodeGen::EnsureSpaceForLazyDeopt(int space_needed) { + if (!info()->IsStub()) { + // Ensure that we have enough space after the previous lazy-bailout + // instruction for patching the code here. + int current_pc = masm()->pc_offset(); + if (current_pc < last_lazy_deopt_pc_ + space_needed) { + int padding_size = last_lazy_deopt_pc_ + space_needed - current_pc; + __ Nop(padding_size); + } + } + last_lazy_deopt_pc_ = masm()->pc_offset(); +} + + +void LCodeGen::DoLazyBailout(LLazyBailout* instr) { + last_lazy_deopt_pc_ = masm()->pc_offset(); + DCHECK(instr->HasEnvironment()); + LEnvironment* env = instr->environment(); + RegisterEnvironmentForDeoptimization(env, Safepoint::kLazyDeopt); + safepoints_.RecordLazyDeoptimizationIndex(env->deoptimization_index()); +} + + +void LCodeGen::DoDeoptimize(LDeoptimize* instr) { + Deoptimizer::BailoutType type = instr->hydrogen()->type(); + // TODO(danno): Stubs expect all deopts to be lazy for historical reasons (the + // needed return address), even though the implementation of LAZY and EAGER is + // now identical. When LAZY is eventually completely folded into EAGER, remove + // the special case below. + if (info()->IsStub() && type == Deoptimizer::EAGER) { + type = Deoptimizer::LAZY; + } + Comment(";;; deoptimize: %s", instr->hydrogen()->reason()); + DeoptimizeIf(no_condition, instr->environment(), type); +} + + +void LCodeGen::DoDummy(LDummy* instr) { + // Nothing to see here, move on! +} + + +void LCodeGen::DoDummyUse(LDummyUse* instr) { + // Nothing to see here, move on! +} + + +void LCodeGen::DoDeferredStackCheck(LStackCheck* instr) { + PushSafepointRegistersScope scope(this); + __ mov(esi, Operand(ebp, StandardFrameConstants::kContextOffset)); + __ CallRuntime(Runtime::kStackGuard); + RecordSafepointWithLazyDeopt( + instr, RECORD_SAFEPOINT_WITH_REGISTERS_AND_NO_ARGUMENTS); + DCHECK(instr->HasEnvironment()); + LEnvironment* env = instr->environment(); + safepoints_.RecordLazyDeoptimizationIndex(env->deoptimization_index()); +} + + +void LCodeGen::DoStackCheck(LStackCheck* instr) { + class DeferredStackCheck V8_FINAL : public LDeferredCode { + public: + DeferredStackCheck(LCodeGen* codegen, + LStackCheck* instr, + const X87Stack& x87_stack) + : LDeferredCode(codegen, x87_stack), instr_(instr) { } + virtual void Generate() V8_OVERRIDE { + codegen()->DoDeferredStackCheck(instr_); + } + virtual LInstruction* instr() V8_OVERRIDE { return instr_; } + private: + LStackCheck* instr_; + }; + + DCHECK(instr->HasEnvironment()); + LEnvironment* env = instr->environment(); + // There is no LLazyBailout instruction for stack-checks. We have to + // prepare for lazy deoptimization explicitly here. + if (instr->hydrogen()->is_function_entry()) { + // Perform stack overflow check. + Label done; + ExternalReference stack_limit = + ExternalReference::address_of_stack_limit(isolate()); + __ cmp(esp, Operand::StaticVariable(stack_limit)); + __ j(above_equal, &done, Label::kNear); + + DCHECK(instr->context()->IsRegister()); + DCHECK(ToRegister(instr->context()).is(esi)); + CallCode(isolate()->builtins()->StackCheck(), + RelocInfo::CODE_TARGET, + instr); + __ bind(&done); + } else { + DCHECK(instr->hydrogen()->is_backwards_branch()); + // Perform stack overflow check if this goto needs it before jumping. 
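+  // The stack grows downward, so esp below the limit means overflow and
+  // the unsigned "below" condition routes to the deferred runtime call.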
+ DeferredStackCheck* deferred_stack_check = + new(zone()) DeferredStackCheck(this, instr, x87_stack_); + ExternalReference stack_limit = + ExternalReference::address_of_stack_limit(isolate()); + __ cmp(esp, Operand::StaticVariable(stack_limit)); + __ j(below, deferred_stack_check->entry()); + EnsureSpaceForLazyDeopt(Deoptimizer::patch_size()); + __ bind(instr->done_label()); + deferred_stack_check->SetExit(instr->done_label()); + RegisterEnvironmentForDeoptimization(env, Safepoint::kLazyDeopt); + // Don't record a deoptimization index for the safepoint here. + // This will be done explicitly when emitting call and the safepoint in + // the deferred code. + } +} + + +void LCodeGen::DoOsrEntry(LOsrEntry* instr) { + // This is a pseudo-instruction that ensures that the environment here is + // properly registered for deoptimization and records the assembler's PC + // offset. + LEnvironment* environment = instr->environment(); + + // If the environment were already registered, we would have no way of + // backpatching it with the spill slot operands. + DCHECK(!environment->HasBeenRegistered()); + RegisterEnvironmentForDeoptimization(environment, Safepoint::kNoLazyDeopt); + + GenerateOsrPrologue(); +} + + +void LCodeGen::DoForInPrepareMap(LForInPrepareMap* instr) { + DCHECK(ToRegister(instr->context()).is(esi)); + __ cmp(eax, isolate()->factory()->undefined_value()); + DeoptimizeIf(equal, instr->environment()); + + __ cmp(eax, isolate()->factory()->null_value()); + DeoptimizeIf(equal, instr->environment()); + + __ test(eax, Immediate(kSmiTagMask)); + DeoptimizeIf(zero, instr->environment()); + + STATIC_ASSERT(FIRST_JS_PROXY_TYPE == FIRST_SPEC_OBJECT_TYPE); + __ CmpObjectType(eax, LAST_JS_PROXY_TYPE, ecx); + DeoptimizeIf(below_equal, instr->environment()); + + Label use_cache, call_runtime; + __ CheckEnumCache(&call_runtime); + + __ mov(eax, FieldOperand(eax, HeapObject::kMapOffset)); + __ jmp(&use_cache, Label::kNear); + + // Get the set of properties to enumerate. 
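+  // Runtime::kGetPropertyNamesFast presumably returns either a map (whose
+  // own map is the meta map, letting the enum cache be used) or a fixed
+  // array of names; the check below deoptimizes in the latter case.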
+ __ bind(&call_runtime); + __ push(eax); + CallRuntime(Runtime::kGetPropertyNamesFast, 1, instr); + + __ cmp(FieldOperand(eax, HeapObject::kMapOffset), + isolate()->factory()->meta_map()); + DeoptimizeIf(not_equal, instr->environment()); + __ bind(&use_cache); +} + + +void LCodeGen::DoForInCacheArray(LForInCacheArray* instr) { + Register map = ToRegister(instr->map()); + Register result = ToRegister(instr->result()); + Label load_cache, done; + __ EnumLength(result, map); + __ cmp(result, Immediate(Smi::FromInt(0))); + __ j(not_equal, &load_cache, Label::kNear); + __ mov(result, isolate()->factory()->empty_fixed_array()); + __ jmp(&done, Label::kNear); + + __ bind(&load_cache); + __ LoadInstanceDescriptors(map, result); + __ mov(result, + FieldOperand(result, DescriptorArray::kEnumCacheOffset)); + __ mov(result, + FieldOperand(result, FixedArray::SizeFor(instr->idx()))); + __ bind(&done); + __ test(result, result); + DeoptimizeIf(equal, instr->environment()); +} + + +void LCodeGen::DoCheckMapValue(LCheckMapValue* instr) { + Register object = ToRegister(instr->value()); + __ cmp(ToRegister(instr->map()), + FieldOperand(object, HeapObject::kMapOffset)); + DeoptimizeIf(not_equal, instr->environment()); +} + + +void LCodeGen::DoDeferredLoadMutableDouble(LLoadFieldByIndex* instr, + Register object, + Register index) { + PushSafepointRegistersScope scope(this); + __ push(object); + __ push(index); + __ xor_(esi, esi); + __ CallRuntime(Runtime::kLoadMutableDouble); + RecordSafepointWithRegisters( + instr->pointer_map(), 2, Safepoint::kNoLazyDeopt); + __ StoreToSafepointRegisterSlot(object, eax); +} + + +void LCodeGen::DoLoadFieldByIndex(LLoadFieldByIndex* instr) { + class DeferredLoadMutableDouble V8_FINAL : public LDeferredCode { + public: + DeferredLoadMutableDouble(LCodeGen* codegen, + LLoadFieldByIndex* instr, + Register object, + Register index, + const X87Stack& x87_stack) + : LDeferredCode(codegen, x87_stack), + instr_(instr), + object_(object), + index_(index) { + } + virtual void Generate() V8_OVERRIDE { + codegen()->DoDeferredLoadMutableDouble(instr_, object_, index_); + } + virtual LInstruction* instr() V8_OVERRIDE { return instr_; } + private: + LLoadFieldByIndex* instr_; + Register object_; + Register index_; + }; + + Register object = ToRegister(instr->object()); + Register index = ToRegister(instr->index()); + + DeferredLoadMutableDouble* deferred; + deferred = new(zone()) DeferredLoadMutableDouble( + this, instr, object, index, x87_stack_); + + Label out_of_object, done; + __ test(index, Immediate(Smi::FromInt(1))); + __ j(not_zero, deferred->entry()); + + __ sar(index, 1); + + __ cmp(index, Immediate(0)); + __ j(less, &out_of_object, Label::kNear); + __ mov(object, FieldOperand(object, + index, + times_half_pointer_size, + JSObject::kHeaderSize)); + __ jmp(&done, Label::kNear); + + __ bind(&out_of_object); + __ mov(object, FieldOperand(object, JSObject::kPropertiesOffset)); + __ neg(index); + // Index is now equal to out of object property index plus 1. 
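+  // Since index was negated, it now equals (out-of-object index + 1); the
+  // load below compensates by starting one pointer before the FixedArray
+  // header.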
+ __ mov(object, FieldOperand(object, + index, + times_half_pointer_size, + FixedArray::kHeaderSize - kPointerSize)); + __ bind(deferred->exit()); + __ bind(&done); +} + + +void LCodeGen::DoStoreFrameContext(LStoreFrameContext* instr) { + Register context = ToRegister(instr->context()); + __ mov(Operand(ebp, StandardFrameConstants::kContextOffset), context); +} + + +void LCodeGen::DoAllocateBlockContext(LAllocateBlockContext* instr) { + Handle<ScopeInfo> scope_info = instr->scope_info(); + __ Push(scope_info); + __ push(ToRegister(instr->function())); + CallRuntime(Runtime::kPushBlockContext, 2, instr); + RecordSafepoint(Safepoint::kNoLazyDeopt); +} + + +#undef __ + +} } // namespace v8::internal + +#endif // V8_TARGET_ARCH_X87 diff --git a/deps/v8/src/x87/lithium-codegen-x87.h b/deps/v8/src/x87/lithium-codegen-x87.h new file mode 100644 index 00000000000..327d5398e04 --- /dev/null +++ b/deps/v8/src/x87/lithium-codegen-x87.h @@ -0,0 +1,504 @@ +// Copyright 2012 the V8 project authors. All rights reserved. +// Use of this source code is governed by a BSD-style license that can be +// found in the LICENSE file. + +#ifndef V8_X87_LITHIUM_CODEGEN_X87_H_ +#define V8_X87_LITHIUM_CODEGEN_X87_H_ + +#include "src/x87/lithium-x87.h" + +#include "src/base/logging.h" +#include "src/deoptimizer.h" +#include "src/lithium-codegen.h" +#include "src/safepoint-table.h" +#include "src/scopes.h" +#include "src/utils.h" +#include "src/x87/lithium-gap-resolver-x87.h" + +namespace v8 { +namespace internal { + +// Forward declarations. +class LDeferredCode; +class LGapNode; +class SafepointGenerator; + +class LCodeGen: public LCodeGenBase { + public: + LCodeGen(LChunk* chunk, MacroAssembler* assembler, CompilationInfo* info) + : LCodeGenBase(chunk, assembler, info), + deoptimizations_(4, info->zone()), + jump_table_(4, info->zone()), + deoptimization_literals_(8, info->zone()), + inlined_function_count_(0), + scope_(info->scope()), + translations_(info->zone()), + deferred_(8, info->zone()), + dynamic_frame_alignment_(false), + support_aligned_spilled_doubles_(false), + osr_pc_offset_(-1), + frame_is_built_(false), + x87_stack_(assembler), + safepoints_(info->zone()), + resolver_(this), + expected_safepoint_kind_(Safepoint::kSimple) { + PopulateDeoptimizationLiteralsWithInlinedFunctions(); + } + + int LookupDestination(int block_id) const { + return chunk()->LookupDestination(block_id); + } + + bool IsNextEmittedBlock(int block_id) const { + return LookupDestination(block_id) == GetNextEmittedBlock(); + } + + bool NeedsEagerFrame() const { + return GetStackSlotCount() > 0 || + info()->is_non_deferred_calling() || + !info()->IsStub() || + info()->requires_frame(); + } + bool NeedsDeferredFrame() const { + return !NeedsEagerFrame() && info()->is_deferred_calling(); + } + + // Support for converting LOperands to assembler types. + Operand ToOperand(LOperand* op) const; + Register ToRegister(LOperand* op) const; + X87Register ToX87Register(LOperand* op) const; + + bool IsInteger32(LConstantOperand* op) const; + bool IsSmi(LConstantOperand* op) const; + Immediate ToImmediate(LOperand* op, const Representation& r) const { + return Immediate(ToRepresentation(LConstantOperand::cast(op), r)); + } + double ToDouble(LConstantOperand* op) const; + + // Support for non-sse2 (x87) floating point stack handling. + // These functions maintain the mapping of physical stack registers to our + // virtual registers between instructions. 
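+ // A typical write sequence (a sketch only, not taken from any single
+ // instruction) is:
+ //   X87PrepareToWrite(reg);  // drop any stale copy from the virtual stack
+ //   __ fld_d(src);           // push the value on the physical FPU stack
+ //   X87CommitWrite(reg);     // record reg as the new top of stack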
+ enum X87OperandType { kX87DoubleOperand, kX87FloatOperand, kX87IntOperand }; + + void X87Mov(X87Register reg, Operand src, + X87OperandType operand = kX87DoubleOperand); + void X87Mov(Operand src, X87Register reg, + X87OperandType operand = kX87DoubleOperand); + + void X87PrepareBinaryOp( + X87Register left, X87Register right, X87Register result); + + void X87LoadForUsage(X87Register reg); + void X87LoadForUsage(X87Register reg1, X87Register reg2); + void X87PrepareToWrite(X87Register reg) { x87_stack_.PrepareToWrite(reg); } + void X87CommitWrite(X87Register reg) { x87_stack_.CommitWrite(reg); } + + void X87Fxch(X87Register reg, int other_slot = 0) { + x87_stack_.Fxch(reg, other_slot); + } + void X87Free(X87Register reg) { + x87_stack_.Free(reg); + } + + + bool X87StackEmpty() { + return x87_stack_.depth() == 0; + } + + Handle<Object> ToHandle(LConstantOperand* op) const; + + // The operand denoting the second word (the one with a higher address) of + // a double stack slot. + Operand HighOperand(LOperand* op); + + // Try to generate code for the entire chunk, but it may fail if the + // chunk contains constructs we cannot handle. Returns true if the + // code generation attempt succeeded. + bool GenerateCode(); + + // Finish the code by setting stack height, safepoint, and bailout + // information on it. + void FinishCode(Handle<Code> code); + + // Deferred code support. + void DoDeferredNumberTagD(LNumberTagD* instr); + + enum IntegerSignedness { SIGNED_INT32, UNSIGNED_INT32 }; + void DoDeferredNumberTagIU(LInstruction* instr, + LOperand* value, + LOperand* temp, + IntegerSignedness signedness); + + void DoDeferredTaggedToI(LTaggedToI* instr, Label* done); + void DoDeferredMathAbsTaggedHeapNumber(LMathAbs* instr); + void DoDeferredStackCheck(LStackCheck* instr); + void DoDeferredStringCharCodeAt(LStringCharCodeAt* instr); + void DoDeferredStringCharFromCode(LStringCharFromCode* instr); + void DoDeferredAllocate(LAllocate* instr); + void DoDeferredInstanceOfKnownGlobal(LInstanceOfKnownGlobal* instr, + Label* map_check); + void DoDeferredInstanceMigration(LCheckMaps* instr, Register object); + void DoDeferredLoadMutableDouble(LLoadFieldByIndex* instr, + Register object, + Register index); + + // Parallel move support. + void DoParallelMove(LParallelMove* move); + void DoGap(LGap* instr); + + // Emit frame translation commands for an environment. + void WriteTranslation(LEnvironment* environment, Translation* translation); + + void EnsureRelocSpaceForDeoptimization(); + + // Declare methods that deal with the individual node types. +#define DECLARE_DO(type) void Do##type(L##type* node); + LITHIUM_CONCRETE_INSTRUCTION_LIST(DECLARE_DO) +#undef DECLARE_DO + + private: + StrictMode strict_mode() const { return info()->strict_mode(); } + + Scope* scope() const { return scope_; } + + void EmitClassOfTest(Label* if_true, + Label* if_false, + Handle<String> class_name, + Register input, + Register temporary, + Register temporary2); + + int GetStackSlotCount() const { return chunk()->spill_slot_count(); } + + void AddDeferredCode(LDeferredCode* code) { deferred_.Add(code, zone()); } + + // Code generation passes. Returns true if code generation should + // continue. + void GenerateBodyInstructionPre(LInstruction* instr) V8_OVERRIDE; + void GenerateBodyInstructionPost(LInstruction* instr) V8_OVERRIDE; + bool GeneratePrologue(); + bool GenerateDeferredCode(); + bool GenerateJumpTable(); + bool GenerateSafepointTable(); + + // Generates the custom OSR entrypoint and sets the osr_pc_offset. 
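+ // (Called from DoOsrEntry once the environment has been registered.)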
+ void GenerateOsrPrologue(); + + enum SafepointMode { + RECORD_SIMPLE_SAFEPOINT, + RECORD_SAFEPOINT_WITH_REGISTERS_AND_NO_ARGUMENTS + }; + + void CallCode(Handle<Code> code, + RelocInfo::Mode mode, + LInstruction* instr); + + void CallCodeGeneric(Handle<Code> code, + RelocInfo::Mode mode, + LInstruction* instr, + SafepointMode safepoint_mode); + + void CallRuntime(const Runtime::Function* fun, + int argc, + LInstruction* instr); + + void CallRuntime(Runtime::FunctionId id, + int argc, + LInstruction* instr) { + const Runtime::Function* function = Runtime::FunctionForId(id); + CallRuntime(function, argc, instr); + } + + void CallRuntimeFromDeferred(Runtime::FunctionId id, + int argc, + LInstruction* instr, + LOperand* context); + + void LoadContextFromDeferred(LOperand* context); + + enum EDIState { + EDI_UNINITIALIZED, + EDI_CONTAINS_TARGET + }; + + // Generate a direct call to a known function. Expects the function + // to be in edi. + void CallKnownFunction(Handle<JSFunction> function, + int formal_parameter_count, + int arity, + LInstruction* instr, + EDIState edi_state); + + void RecordSafepointWithLazyDeopt(LInstruction* instr, + SafepointMode safepoint_mode); + + void RegisterEnvironmentForDeoptimization(LEnvironment* environment, + Safepoint::DeoptMode mode); + void DeoptimizeIf(Condition cc, + LEnvironment* environment, + Deoptimizer::BailoutType bailout_type); + void DeoptimizeIf(Condition cc, LEnvironment* environment); + + bool DeoptEveryNTimes() { + return FLAG_deopt_every_n_times != 0 && !info()->IsStub(); + } + + void AddToTranslation(LEnvironment* environment, + Translation* translation, + LOperand* op, + bool is_tagged, + bool is_uint32, + int* object_index_pointer, + int* dematerialized_index_pointer); + void PopulateDeoptimizationData(Handle<Code> code); + int DefineDeoptimizationLiteral(Handle<Object> literal); + + void PopulateDeoptimizationLiteralsWithInlinedFunctions(); + + Register ToRegister(int index) const; + X87Register ToX87Register(int index) const; + int32_t ToRepresentation(LConstantOperand* op, const Representation& r) const; + int32_t ToInteger32(LConstantOperand* op) const; + ExternalReference ToExternalReference(LConstantOperand* op) const; + + Operand BuildFastArrayOperand(LOperand* elements_pointer, + LOperand* key, + Representation key_representation, + ElementsKind elements_kind, + uint32_t base_offset); + + Operand BuildSeqStringOperand(Register string, + LOperand* index, + String::Encoding encoding); + + void EmitIntegerMathAbs(LMathAbs* instr); + + // Support for recording safepoint and position information. + void RecordSafepoint(LPointerMap* pointers, + Safepoint::Kind kind, + int arguments, + Safepoint::DeoptMode mode); + void RecordSafepoint(LPointerMap* pointers, Safepoint::DeoptMode mode); + void RecordSafepoint(Safepoint::DeoptMode mode); + void RecordSafepointWithRegisters(LPointerMap* pointers, + int arguments, + Safepoint::DeoptMode mode); + + void RecordAndWritePosition(int position) V8_OVERRIDE; + + static Condition TokenToCondition(Token::Value op, bool is_unsigned); + void EmitGoto(int block); + + // EmitBranch expects to be the last instruction of a block. 
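+ // Callers compute a condition and then delegate to it, e.g. (roughly)
+ //   EmitBranch(instr, TokenToCondition(instr->op(), is_unsigned));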
+ template<class InstrType> + void EmitBranch(InstrType instr, Condition cc); + template<class InstrType> + void EmitFalseBranch(InstrType instr, Condition cc); + void EmitNumberUntagDNoSSE2( + Register input, + Register temp, + X87Register res_reg, + bool allow_undefined_as_nan, + bool deoptimize_on_minus_zero, + LEnvironment* env, + NumberUntagDMode mode = NUMBER_CANDIDATE_IS_ANY_TAGGED); + + // Emits optimized code for typeof x == "y". Modifies input register. + // Returns the condition on which a final split to + // true and false label should be made, to optimize fallthrough. + Condition EmitTypeofIs(LTypeofIsAndBranch* instr, Register input); + + // Emits optimized code for %_IsObject(x). Preserves input register. + // Returns the condition on which a final split to + // true and false label should be made, to optimize fallthrough. + Condition EmitIsObject(Register input, + Register temp1, + Label* is_not_object, + Label* is_object); + + // Emits optimized code for %_IsString(x). Preserves input register. + // Returns the condition on which a final split to + // true and false label should be made, to optimize fallthrough. + Condition EmitIsString(Register input, + Register temp1, + Label* is_not_string, + SmiCheck check_needed); + + // Emits optimized code for %_IsConstructCall(). + // Caller should branch on equal condition. + void EmitIsConstructCall(Register temp); + + // Emits optimized code to deep-copy the contents of statically known + // object graphs (e.g. object literal boilerplate). + void EmitDeepCopy(Handle<JSObject> object, + Register result, + Register source, + int* offset, + AllocationSiteMode mode); + + void EnsureSpaceForLazyDeopt(int space_needed) V8_OVERRIDE; + void DoLoadKeyedExternalArray(LLoadKeyed* instr); + void DoLoadKeyedFixedDoubleArray(LLoadKeyed* instr); + void DoLoadKeyedFixedArray(LLoadKeyed* instr); + void DoStoreKeyedExternalArray(LStoreKeyed* instr); + void DoStoreKeyedFixedDoubleArray(LStoreKeyed* instr); + void DoStoreKeyedFixedArray(LStoreKeyed* instr); + + void EmitReturn(LReturn* instr, bool dynamic_frame_alignment); + + // Emits code for pushing either a tagged constant, a (non-double) + // register, or a stack slot operand. + void EmitPushTaggedOperand(LOperand* operand); + + void X87Fld(Operand src, X87OperandType opts); + + void EmitFlushX87ForDeopt(); + void FlushX87StackIfNecessary(LInstruction* instr) { + x87_stack_.FlushIfNecessary(instr, this); + } + friend class LGapResolver; + +#ifdef _MSC_VER + // On windows, you may not access the stack more than one page below + // the most recently mapped page. To make the allocated area randomly + // accessible, we write an arbitrary value to each page in range + // esp + offset - page_size .. esp in turn. 
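+ // Roughly (a sketch assuming a 4KB page size):
+ //   for (int probe = kPageSize; probe < offset; probe += kPageSize) {
+ //     __ mov(Operand(esp, probe), eax);
+ //   }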
+ void MakeSureStackPagesMapped(int offset); +#endif + + ZoneList<LEnvironment*> deoptimizations_; + ZoneList<Deoptimizer::JumpTableEntry> jump_table_; + ZoneList<Handle<Object> > deoptimization_literals_; + int inlined_function_count_; + Scope* const scope_; + TranslationBuffer translations_; + ZoneList<LDeferredCode*> deferred_; + bool dynamic_frame_alignment_; + bool support_aligned_spilled_doubles_; + int osr_pc_offset_; + bool frame_is_built_; + + class X87Stack { + public: + explicit X87Stack(MacroAssembler* masm) + : stack_depth_(0), is_mutable_(true), masm_(masm) { } + explicit X87Stack(const X87Stack& other) + : stack_depth_(other.stack_depth_), is_mutable_(false), masm_(masm()) { + for (int i = 0; i < stack_depth_; i++) { + stack_[i] = other.stack_[i]; + } + } + bool operator==(const X87Stack& other) const { + if (stack_depth_ != other.stack_depth_) return false; + for (int i = 0; i < stack_depth_; i++) { + if (!stack_[i].is(other.stack_[i])) return false; + } + return true; + } + bool Contains(X87Register reg); + void Fxch(X87Register reg, int other_slot = 0); + void Free(X87Register reg); + void PrepareToWrite(X87Register reg); + void CommitWrite(X87Register reg); + void FlushIfNecessary(LInstruction* instr, LCodeGen* cgen); + void LeavingBlock(int current_block_id, LGoto* goto_instr); + int depth() const { return stack_depth_; } + void pop() { + DCHECK(is_mutable_); + stack_depth_--; + } + void push(X87Register reg) { + DCHECK(is_mutable_); + DCHECK(stack_depth_ < X87Register::kMaxNumAllocatableRegisters); + stack_[stack_depth_] = reg; + stack_depth_++; + } + + MacroAssembler* masm() const { return masm_; } + Isolate* isolate() const { return masm_->isolate(); } + + private: + int ArrayIndex(X87Register reg); + int st2idx(int pos); + + X87Register stack_[X87Register::kMaxNumAllocatableRegisters]; + int stack_depth_; + bool is_mutable_; + MacroAssembler* masm_; + }; + X87Stack x87_stack_; + + // Builder that keeps track of safepoints in the code. The table + // itself is emitted at the end of the generated code. + SafepointTableBuilder safepoints_; + + // Compiler from a set of parallel moves to a sequential list of moves. 
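+ // Driven from DoParallelMove, which hands each gap's LParallelMove to
+ // resolver_.Resolve().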
+ LGapResolver resolver_; + + Safepoint::Kind expected_safepoint_kind_; + + class PushSafepointRegistersScope V8_FINAL BASE_EMBEDDED { + public: + explicit PushSafepointRegistersScope(LCodeGen* codegen) + : codegen_(codegen) { + DCHECK(codegen_->expected_safepoint_kind_ == Safepoint::kSimple); + codegen_->masm_->PushSafepointRegisters(); + codegen_->expected_safepoint_kind_ = Safepoint::kWithRegisters; + DCHECK(codegen_->info()->is_calling()); + } + + ~PushSafepointRegistersScope() { + DCHECK(codegen_->expected_safepoint_kind_ == Safepoint::kWithRegisters); + codegen_->masm_->PopSafepointRegisters(); + codegen_->expected_safepoint_kind_ = Safepoint::kSimple; + } + + private: + LCodeGen* codegen_; + }; + + friend class LDeferredCode; + friend class LEnvironment; + friend class SafepointGenerator; + DISALLOW_COPY_AND_ASSIGN(LCodeGen); +}; + + +class LDeferredCode : public ZoneObject { + public: + explicit LDeferredCode(LCodeGen* codegen, const LCodeGen::X87Stack& x87_stack) + : codegen_(codegen), + external_exit_(NULL), + instruction_index_(codegen->current_instruction_), + x87_stack_(x87_stack) { + codegen->AddDeferredCode(this); + } + + virtual ~LDeferredCode() {} + virtual void Generate() = 0; + virtual LInstruction* instr() = 0; + + void SetExit(Label* exit) { external_exit_ = exit; } + Label* entry() { return &entry_; } + Label* exit() { return external_exit_ != NULL ? external_exit_ : &exit_; } + Label* done() { return codegen_->NeedsDeferredFrame() ? &done_ : exit(); } + int instruction_index() const { return instruction_index_; } + const LCodeGen::X87Stack& x87_stack() const { return x87_stack_; } + + protected: + LCodeGen* codegen() const { return codegen_; } + MacroAssembler* masm() const { return codegen_->masm(); } + + private: + LCodeGen* codegen_; + Label entry_; + Label exit_; + Label* external_exit_; + Label done_; + int instruction_index_; + LCodeGen::X87Stack x87_stack_; +}; + +} } // namespace v8::internal + +#endif // V8_X87_LITHIUM_CODEGEN_X87_H_ diff --git a/deps/v8/src/x87/lithium-gap-resolver-x87.cc b/deps/v8/src/x87/lithium-gap-resolver-x87.cc new file mode 100644 index 00000000000..e25c78c9937 --- /dev/null +++ b/deps/v8/src/x87/lithium-gap-resolver-x87.cc @@ -0,0 +1,445 @@ +// Copyright 2011 the V8 project authors. All rights reserved. +// Use of this source code is governed by a BSD-style license that can be +// found in the LICENSE file. + +#include "src/v8.h" + +#if V8_TARGET_ARCH_X87 + +#include "src/x87/lithium-codegen-x87.h" +#include "src/x87/lithium-gap-resolver-x87.h" + +namespace v8 { +namespace internal { + +LGapResolver::LGapResolver(LCodeGen* owner) + : cgen_(owner), + moves_(32, owner->zone()), + source_uses_(), + destination_uses_(), + spilled_register_(-1) {} + + +void LGapResolver::Resolve(LParallelMove* parallel_move) { + DCHECK(HasBeenReset()); + // Build up a worklist of moves. + BuildInitialMoveList(parallel_move); + + for (int i = 0; i < moves_.length(); ++i) { + LMoveOperands move = moves_[i]; + // Skip constants to perform them last. They don't block other moves + // and skipping such moves with register destinations keeps those + // registers free for the whole algorithm. + if (!move.IsEliminated() && !move.source()->IsConstantOperand()) { + PerformMove(i); + } + } + + // Perform the moves with constant sources. 
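+ // Constant sources never block other moves, so they can now be emitted
+ // in any order.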
+ for (int i = 0; i < moves_.length(); ++i) { + if (!moves_[i].IsEliminated()) { + DCHECK(moves_[i].source()->IsConstantOperand()); + EmitMove(i); + } + } + + Finish(); + DCHECK(HasBeenReset()); +} + + +void LGapResolver::BuildInitialMoveList(LParallelMove* parallel_move) { + // Perform a linear sweep of the moves to add them to the initial list of + // moves to perform, ignoring any move that is redundant (the source is + // the same as the destination, the destination is ignored and + // unallocated, or the move was already eliminated). + const ZoneList<LMoveOperands>* moves = parallel_move->move_operands(); + for (int i = 0; i < moves->length(); ++i) { + LMoveOperands move = moves->at(i); + if (!move.IsRedundant()) AddMove(move); + } + Verify(); +} + + +void LGapResolver::PerformMove(int index) { + // Each call to this function performs a move and deletes it from the move + // graph. We first recursively perform any move blocking this one. We + // mark a move as "pending" on entry to PerformMove in order to detect + // cycles in the move graph. We use operand swaps to resolve cycles, + // which means that a call to PerformMove could change any source operand + // in the move graph. + + DCHECK(!moves_[index].IsPending()); + DCHECK(!moves_[index].IsRedundant()); + + // Clear this move's destination to indicate a pending move. The actual + // destination is saved on the side. + DCHECK(moves_[index].source() != NULL); // Or else it will look eliminated. + LOperand* destination = moves_[index].destination(); + moves_[index].set_destination(NULL); + + // Perform a depth-first traversal of the move graph to resolve + // dependencies. Any unperformed, unpending move with a source the same + // as this one's destination blocks this one so recursively perform all + // such moves. + for (int i = 0; i < moves_.length(); ++i) { + LMoveOperands other_move = moves_[i]; + if (other_move.Blocks(destination) && !other_move.IsPending()) { + // Though PerformMove can change any source operand in the move graph, + // this call cannot create a blocking move via a swap (this loop does + // not miss any). Assume there is a non-blocking move with source A + // and this move is blocked on source B and there is a swap of A and + // B. Then A and B must be involved in the same cycle (or they would + // not be swapped). Since this move's destination is B and there is + // only a single incoming edge to an operand, this move must also be + // involved in the same cycle. In that case, the blocking move will + // be created but will be "pending" when we return from PerformMove. + PerformMove(i); + } + } + + // We are about to resolve this move and don't need it marked as + // pending, so restore its destination. + moves_[index].set_destination(destination); + + // This move's source may have changed due to swaps to resolve cycles and + // so it may now be the last move in the cycle. If so remove it. + if (moves_[index].source()->Equals(destination)) { + RemoveMove(index); + return; + } + + // The move may be blocked on a (at most one) pending move, in which case + // we have a cycle. Search for such a blocking move and perform a swap to + // resolve it. + for (int i = 0; i < moves_.length(); ++i) { + LMoveOperands other_move = moves_[i]; + if (other_move.Blocks(destination)) { + DCHECK(other_move.IsPending()); + EmitSwap(index); + return; + } + } + + // This move is not blocked. 
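+ // For a simple cycle such as {eax -> ebx, ebx -> eax}, the recursion
+ // above leaves one move pending, the loop just above finds it blocking
+ // and resolves both moves with EmitSwap; only a genuinely unblocked
+ // move falls through to the plain EmitMove here.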
+ EmitMove(index); +} + + +void LGapResolver::AddMove(LMoveOperands move) { + LOperand* source = move.source(); + if (source->IsRegister()) ++source_uses_[source->index()]; + + LOperand* destination = move.destination(); + if (destination->IsRegister()) ++destination_uses_[destination->index()]; + + moves_.Add(move, cgen_->zone()); +} + + +void LGapResolver::RemoveMove(int index) { + LOperand* source = moves_[index].source(); + if (source->IsRegister()) { + --source_uses_[source->index()]; + DCHECK(source_uses_[source->index()] >= 0); + } + + LOperand* destination = moves_[index].destination(); + if (destination->IsRegister()) { + --destination_uses_[destination->index()]; + DCHECK(destination_uses_[destination->index()] >= 0); + } + + moves_[index].Eliminate(); +} + + +int LGapResolver::CountSourceUses(LOperand* operand) { + int count = 0; + for (int i = 0; i < moves_.length(); ++i) { + if (!moves_[i].IsEliminated() && moves_[i].source()->Equals(operand)) { + ++count; + } + } + return count; +} + + +Register LGapResolver::GetFreeRegisterNot(Register reg) { + int skip_index = reg.is(no_reg) ? -1 : Register::ToAllocationIndex(reg); + for (int i = 0; i < Register::NumAllocatableRegisters(); ++i) { + if (source_uses_[i] == 0 && destination_uses_[i] > 0 && i != skip_index) { + return Register::FromAllocationIndex(i); + } + } + return no_reg; +} + + +bool LGapResolver::HasBeenReset() { + if (!moves_.is_empty()) return false; + if (spilled_register_ >= 0) return false; + + for (int i = 0; i < Register::NumAllocatableRegisters(); ++i) { + if (source_uses_[i] != 0) return false; + if (destination_uses_[i] != 0) return false; + } + return true; +} + + +void LGapResolver::Verify() { +#ifdef ENABLE_SLOW_DCHECKS + // No operand should be the destination for more than one move. + for (int i = 0; i < moves_.length(); ++i) { + LOperand* destination = moves_[i].destination(); + for (int j = i + 1; j < moves_.length(); ++j) { + SLOW_DCHECK(!destination->Equals(moves_[j].destination())); + } + } +#endif +} + + +#define __ ACCESS_MASM(cgen_->masm()) + +void LGapResolver::Finish() { + if (spilled_register_ >= 0) { + __ pop(Register::FromAllocationIndex(spilled_register_)); + spilled_register_ = -1; + } + moves_.Rewind(0); +} + + +void LGapResolver::EnsureRestored(LOperand* operand) { + if (operand->IsRegister() && operand->index() == spilled_register_) { + __ pop(Register::FromAllocationIndex(spilled_register_)); + spilled_register_ = -1; + } +} + + +Register LGapResolver::EnsureTempRegister() { + // 1. We may have already spilled to create a temp register. + if (spilled_register_ >= 0) { + return Register::FromAllocationIndex(spilled_register_); + } + + // 2. We may have a free register that we can use without spilling. + Register free = GetFreeRegisterNot(no_reg); + if (!free.is(no_reg)) return free; + + // 3. Prefer to spill a register that is not used in any remaining move + // because it will not need to be restored until the end. + for (int i = 0; i < Register::NumAllocatableRegisters(); ++i) { + if (source_uses_[i] == 0 && destination_uses_[i] == 0) { + Register scratch = Register::FromAllocationIndex(i); + __ push(scratch); + spilled_register_ = i; + return scratch; + } + } + + // 4. Use an arbitrary register. Register 0 is as arbitrary as any other. 
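+ // (Allocation index 0 is eax on this port; it is pushed here and popped
+ // again by Finish() or EnsureRestored() before it is next needed.)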
+ Register scratch = Register::FromAllocationIndex(0); + __ push(scratch); + spilled_register_ = 0; + return scratch; +} + + +void LGapResolver::EmitMove(int index) { + LOperand* source = moves_[index].source(); + LOperand* destination = moves_[index].destination(); + EnsureRestored(source); + EnsureRestored(destination); + + // Dispatch on the source and destination operand kinds. Not all + // combinations are possible. + if (source->IsRegister()) { + DCHECK(destination->IsRegister() || destination->IsStackSlot()); + Register src = cgen_->ToRegister(source); + Operand dst = cgen_->ToOperand(destination); + __ mov(dst, src); + + } else if (source->IsStackSlot()) { + DCHECK(destination->IsRegister() || destination->IsStackSlot()); + Operand src = cgen_->ToOperand(source); + if (destination->IsRegister()) { + Register dst = cgen_->ToRegister(destination); + __ mov(dst, src); + } else { + // Spill on demand to use a temporary register for memory-to-memory + // moves. + Register tmp = EnsureTempRegister(); + Operand dst = cgen_->ToOperand(destination); + __ mov(tmp, src); + __ mov(dst, tmp); + } + + } else if (source->IsConstantOperand()) { + LConstantOperand* constant_source = LConstantOperand::cast(source); + if (destination->IsRegister()) { + Register dst = cgen_->ToRegister(destination); + Representation r = cgen_->IsSmi(constant_source) + ? Representation::Smi() : Representation::Integer32(); + if (cgen_->IsInteger32(constant_source)) { + __ Move(dst, cgen_->ToImmediate(constant_source, r)); + } else { + __ LoadObject(dst, cgen_->ToHandle(constant_source)); + } + } else if (destination->IsDoubleRegister()) { + double v = cgen_->ToDouble(constant_source); + uint64_t int_val = BitCast<uint64_t, double>(v); + int32_t lower = static_cast<int32_t>(int_val); + int32_t upper = static_cast<int32_t>(int_val >> kBitsPerInt); + __ push(Immediate(upper)); + __ push(Immediate(lower)); + X87Register dst = cgen_->ToX87Register(destination); + cgen_->X87Mov(dst, MemOperand(esp, 0)); + __ add(esp, Immediate(kDoubleSize)); + } else { + DCHECK(destination->IsStackSlot()); + Operand dst = cgen_->ToOperand(destination); + Representation r = cgen_->IsSmi(constant_source) + ? Representation::Smi() : Representation::Integer32(); + if (cgen_->IsInteger32(constant_source)) { + __ Move(dst, cgen_->ToImmediate(constant_source, r)); + } else { + Register tmp = EnsureTempRegister(); + __ LoadObject(tmp, cgen_->ToHandle(constant_source)); + __ mov(dst, tmp); + } + } + + } else if (source->IsDoubleRegister()) { + // load from the register onto the stack, store in destination, which must + // be a double stack slot in the non-SSE2 case. + DCHECK(destination->IsDoubleStackSlot()); + Operand dst = cgen_->ToOperand(destination); + X87Register src = cgen_->ToX87Register(source); + cgen_->X87Mov(dst, src); + } else if (source->IsDoubleStackSlot()) { + // load from the stack slot on top of the floating point stack, and then + // store in destination. If destination is a double register, then it + // represents the top of the stack and nothing needs to be done. + if (destination->IsDoubleStackSlot()) { + Register tmp = EnsureTempRegister(); + Operand src0 = cgen_->ToOperand(source); + Operand src1 = cgen_->HighOperand(source); + Operand dst0 = cgen_->ToOperand(destination); + Operand dst1 = cgen_->HighOperand(destination); + __ mov(tmp, src0); // Then use tmp to copy source to destination. 
+ __ mov(dst0, tmp); + __ mov(tmp, src1); + __ mov(dst1, tmp); + } else { + Operand src = cgen_->ToOperand(source); + X87Register dst = cgen_->ToX87Register(destination); + cgen_->X87Mov(dst, src); + } + } else { + UNREACHABLE(); + } + + RemoveMove(index); +} + + +void LGapResolver::EmitSwap(int index) { + LOperand* source = moves_[index].source(); + LOperand* destination = moves_[index].destination(); + EnsureRestored(source); + EnsureRestored(destination); + + // Dispatch on the source and destination operand kinds. Not all + // combinations are possible. + if (source->IsRegister() && destination->IsRegister()) { + // Register-register. + Register src = cgen_->ToRegister(source); + Register dst = cgen_->ToRegister(destination); + __ xchg(dst, src); + + } else if ((source->IsRegister() && destination->IsStackSlot()) || + (source->IsStackSlot() && destination->IsRegister())) { + // Register-memory. Use a free register as a temp if possible. Do not + // spill on demand because the simple spill implementation cannot avoid + // spilling src at this point. + Register tmp = GetFreeRegisterNot(no_reg); + Register reg = + cgen_->ToRegister(source->IsRegister() ? source : destination); + Operand mem = + cgen_->ToOperand(source->IsRegister() ? destination : source); + if (tmp.is(no_reg)) { + __ xor_(reg, mem); + __ xor_(mem, reg); + __ xor_(reg, mem); + } else { + __ mov(tmp, mem); + __ mov(mem, reg); + __ mov(reg, tmp); + } + + } else if (source->IsStackSlot() && destination->IsStackSlot()) { + // Memory-memory. Spill on demand to use a temporary. If there is a + // free register after that, use it as a second temporary. + Register tmp0 = EnsureTempRegister(); + Register tmp1 = GetFreeRegisterNot(tmp0); + Operand src = cgen_->ToOperand(source); + Operand dst = cgen_->ToOperand(destination); + if (tmp1.is(no_reg)) { + // Only one temp register available to us. + __ mov(tmp0, dst); + __ xor_(tmp0, src); + __ xor_(src, tmp0); + __ xor_(tmp0, src); + __ mov(dst, tmp0); + } else { + __ mov(tmp0, dst); + __ mov(tmp1, src); + __ mov(dst, tmp1); + __ mov(src, tmp0); + } + } else { + // No other combinations are possible. + UNREACHABLE(); + } + + // The swap of source and destination has executed a move from source to + // destination. + RemoveMove(index); + + // Any unperformed (including pending) move with a source of either + // this move's source or destination needs to have their source + // changed to reflect the state of affairs after the swap. + for (int i = 0; i < moves_.length(); ++i) { + LMoveOperands other_move = moves_[i]; + if (other_move.Blocks(source)) { + moves_[i].set_source(destination); + } else if (other_move.Blocks(destination)) { + moves_[i].set_source(source); + } + } + + // In addition to swapping the actual uses as sources, we need to update + // the use counts. + if (source->IsRegister() && destination->IsRegister()) { + int temp = source_uses_[source->index()]; + source_uses_[source->index()] = source_uses_[destination->index()]; + source_uses_[destination->index()] = temp; + } else if (source->IsRegister()) { + // We don't have use counts for non-register operands like destination. + // Compute those counts now. 
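+ // CountSourceUses rescans the remaining move list, so the counts stay
+ // exact even though the swap may have rewritten several sources above.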
+ source_uses_[source->index()] = CountSourceUses(source); + } else if (destination->IsRegister()) { + source_uses_[destination->index()] = CountSourceUses(destination); + } +} + +#undef __ + +} } // namespace v8::internal + +#endif // V8_TARGET_ARCH_X87 diff --git a/deps/v8/src/x87/lithium-gap-resolver-x87.h b/deps/v8/src/x87/lithium-gap-resolver-x87.h new file mode 100644 index 00000000000..737660c71ac --- /dev/null +++ b/deps/v8/src/x87/lithium-gap-resolver-x87.h @@ -0,0 +1,87 @@ +// Copyright 2011 the V8 project authors. All rights reserved. +// Use of this source code is governed by a BSD-style license that can be +// found in the LICENSE file. + +#ifndef V8_X87_LITHIUM_GAP_RESOLVER_X87_H_ +#define V8_X87_LITHIUM_GAP_RESOLVER_X87_H_ + +#include "src/v8.h" + +#include "src/lithium.h" + +namespace v8 { +namespace internal { + +class LCodeGen; +class LGapResolver; + +class LGapResolver V8_FINAL BASE_EMBEDDED { + public: + explicit LGapResolver(LCodeGen* owner); + + // Resolve a set of parallel moves, emitting assembler instructions. + void Resolve(LParallelMove* parallel_move); + + private: + // Build the initial list of moves. + void BuildInitialMoveList(LParallelMove* parallel_move); + + // Perform the move at the moves_ index in question (possibly requiring + // other moves to satisfy dependencies). + void PerformMove(int index); + + // Emit any code necessary at the end of a gap move. + void Finish(); + + // Add or delete a move from the move graph without emitting any code. + // Used to build up the graph and remove trivial moves. + void AddMove(LMoveOperands move); + void RemoveMove(int index); + + // Report the count of uses of operand as a source in a not-yet-performed + // move. Used to rebuild use counts. + int CountSourceUses(LOperand* operand); + + // Emit a move and remove it from the move graph. + void EmitMove(int index); + + // Execute a move by emitting a swap of two operands. The move from + // source to destination is removed from the move graph. + void EmitSwap(int index); + + // Ensure that the given operand is not spilled. + void EnsureRestored(LOperand* operand); + + // Return a register that can be used as a temp register, spilling + // something if necessary. + Register EnsureTempRegister(); + + // Return a known free register different from the given one (which could + // be no_reg---returning any free register), or no_reg if there is no such + // register. + Register GetFreeRegisterNot(Register reg); + + // Verify that the state is the initial one, ready to resolve a single + // parallel move. + bool HasBeenReset(); + + // Verify the move list before performing moves. + void Verify(); + + LCodeGen* cgen_; + + // List of moves not yet resolved. + ZoneList<LMoveOperands> moves_; + + // Source and destination use counts for the general purpose registers. + int source_uses_[Register::kMaxNumAllocatableRegisters]; + int destination_uses_[Register::kMaxNumAllocatableRegisters]; + + // If we had to spill on demand, the currently spilled register's + // allocation index. + int spilled_register_; +}; + +} } // namespace v8::internal + +#endif // V8_X87_LITHIUM_GAP_RESOLVER_X87_H_ diff --git a/deps/v8/src/x87/lithium-x87.cc b/deps/v8/src/x87/lithium-x87.cc new file mode 100644 index 00000000000..f2eb9f0a00e --- /dev/null +++ b/deps/v8/src/x87/lithium-x87.cc @@ -0,0 +1,2683 @@ +// Copyright 2012 the V8 project authors. All rights reserved. +// Use of this source code is governed by a BSD-style license that can be +// found in the LICENSE file. 
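+ // Lithium chunk building for the x87 port: LChunkBuilder translates each
+ // hydrogen instruction into lithium instructions with x87-suitable
+ // operand policies.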
+ +#include "src/v8.h" + +#if V8_TARGET_ARCH_X87 + +#include "src/hydrogen-osr.h" +#include "src/lithium-inl.h" +#include "src/x87/lithium-codegen-x87.h" + +namespace v8 { +namespace internal { + +#define DEFINE_COMPILE(type) \ + void L##type::CompileToNative(LCodeGen* generator) { \ + generator->Do##type(this); \ + } +LITHIUM_CONCRETE_INSTRUCTION_LIST(DEFINE_COMPILE) +#undef DEFINE_COMPILE + + +#ifdef DEBUG +void LInstruction::VerifyCall() { + // Call instructions can use only fixed registers as temporaries and + // outputs because all registers are blocked by the calling convention. + // Inputs operands must use a fixed register or use-at-start policy or + // a non-register policy. + DCHECK(Output() == NULL || + LUnallocated::cast(Output())->HasFixedPolicy() || + !LUnallocated::cast(Output())->HasRegisterPolicy()); + for (UseIterator it(this); !it.Done(); it.Advance()) { + LUnallocated* operand = LUnallocated::cast(it.Current()); + DCHECK(operand->HasFixedPolicy() || + operand->IsUsedAtStart()); + } + for (TempIterator it(this); !it.Done(); it.Advance()) { + LUnallocated* operand = LUnallocated::cast(it.Current()); + DCHECK(operand->HasFixedPolicy() ||!operand->HasRegisterPolicy()); + } +} +#endif + + +bool LInstruction::HasDoubleRegisterResult() { + return HasResult() && result()->IsDoubleRegister(); +} + + +bool LInstruction::HasDoubleRegisterInput() { + for (int i = 0; i < InputCount(); i++) { + LOperand* op = InputAt(i); + if (op != NULL && op->IsDoubleRegister()) { + return true; + } + } + return false; +} + + +bool LInstruction::IsDoubleInput(X87Register reg, LCodeGen* cgen) { + for (int i = 0; i < InputCount(); i++) { + LOperand* op = InputAt(i); + if (op != NULL && op->IsDoubleRegister()) { + if (cgen->ToX87Register(op).is(reg)) return true; + } + } + return false; +} + + +void LInstruction::PrintTo(StringStream* stream) { + stream->Add("%s ", this->Mnemonic()); + + PrintOutputOperandTo(stream); + + PrintDataTo(stream); + + if (HasEnvironment()) { + stream->Add(" "); + environment()->PrintTo(stream); + } + + if (HasPointerMap()) { + stream->Add(" "); + pointer_map()->PrintTo(stream); + } +} + + +void LInstruction::PrintDataTo(StringStream* stream) { + stream->Add("= "); + for (int i = 0; i < InputCount(); i++) { + if (i > 0) stream->Add(" "); + if (InputAt(i) == NULL) { + stream->Add("NULL"); + } else { + InputAt(i)->PrintTo(stream); + } + } +} + + +void LInstruction::PrintOutputOperandTo(StringStream* stream) { + if (HasResult()) result()->PrintTo(stream); +} + + +void LLabel::PrintDataTo(StringStream* stream) { + LGap::PrintDataTo(stream); + LLabel* rep = replacement(); + if (rep != NULL) { + stream->Add(" Dead block replaced with B%d", rep->block_id()); + } +} + + +bool LGap::IsRedundant() const { + for (int i = 0; i < 4; i++) { + if (parallel_moves_[i] != NULL && !parallel_moves_[i]->IsRedundant()) { + return false; + } + } + + return true; +} + + +void LGap::PrintDataTo(StringStream* stream) { + for (int i = 0; i < 4; i++) { + stream->Add("("); + if (parallel_moves_[i] != NULL) { + parallel_moves_[i]->PrintDataTo(stream); + } + stream->Add(") "); + } +} + + +const char* LArithmeticD::Mnemonic() const { + switch (op()) { + case Token::ADD: return "add-d"; + case Token::SUB: return "sub-d"; + case Token::MUL: return "mul-d"; + case Token::DIV: return "div-d"; + case Token::MOD: return "mod-d"; + default: + UNREACHABLE(); + return NULL; + } +} + + +const char* LArithmeticT::Mnemonic() const { + switch (op()) { + case Token::ADD: return "add-t"; + case Token::SUB: return 
"sub-t"; + case Token::MUL: return "mul-t"; + case Token::MOD: return "mod-t"; + case Token::DIV: return "div-t"; + case Token::BIT_AND: return "bit-and-t"; + case Token::BIT_OR: return "bit-or-t"; + case Token::BIT_XOR: return "bit-xor-t"; + case Token::ROR: return "ror-t"; + case Token::SHL: return "sal-t"; + case Token::SAR: return "sar-t"; + case Token::SHR: return "shr-t"; + default: + UNREACHABLE(); + return NULL; + } +} + + +bool LGoto::HasInterestingComment(LCodeGen* gen) const { + return !gen->IsNextEmittedBlock(block_id()); +} + + +void LGoto::PrintDataTo(StringStream* stream) { + stream->Add("B%d", block_id()); +} + + +void LBranch::PrintDataTo(StringStream* stream) { + stream->Add("B%d | B%d on ", true_block_id(), false_block_id()); + value()->PrintTo(stream); +} + + +void LCompareNumericAndBranch::PrintDataTo(StringStream* stream) { + stream->Add("if "); + left()->PrintTo(stream); + stream->Add(" %s ", Token::String(op())); + right()->PrintTo(stream); + stream->Add(" then B%d else B%d", true_block_id(), false_block_id()); +} + + +void LIsObjectAndBranch::PrintDataTo(StringStream* stream) { + stream->Add("if is_object("); + value()->PrintTo(stream); + stream->Add(") then B%d else B%d", true_block_id(), false_block_id()); +} + + +void LIsStringAndBranch::PrintDataTo(StringStream* stream) { + stream->Add("if is_string("); + value()->PrintTo(stream); + stream->Add(") then B%d else B%d", true_block_id(), false_block_id()); +} + + +void LIsSmiAndBranch::PrintDataTo(StringStream* stream) { + stream->Add("if is_smi("); + value()->PrintTo(stream); + stream->Add(") then B%d else B%d", true_block_id(), false_block_id()); +} + + +void LIsUndetectableAndBranch::PrintDataTo(StringStream* stream) { + stream->Add("if is_undetectable("); + value()->PrintTo(stream); + stream->Add(") then B%d else B%d", true_block_id(), false_block_id()); +} + + +void LStringCompareAndBranch::PrintDataTo(StringStream* stream) { + stream->Add("if string_compare("); + left()->PrintTo(stream); + right()->PrintTo(stream); + stream->Add(") then B%d else B%d", true_block_id(), false_block_id()); +} + + +void LHasInstanceTypeAndBranch::PrintDataTo(StringStream* stream) { + stream->Add("if has_instance_type("); + value()->PrintTo(stream); + stream->Add(") then B%d else B%d", true_block_id(), false_block_id()); +} + + +void LHasCachedArrayIndexAndBranch::PrintDataTo(StringStream* stream) { + stream->Add("if has_cached_array_index("); + value()->PrintTo(stream); + stream->Add(") then B%d else B%d", true_block_id(), false_block_id()); +} + + +void LClassOfTestAndBranch::PrintDataTo(StringStream* stream) { + stream->Add("if class_of_test("); + value()->PrintTo(stream); + stream->Add(", \"%o\") then B%d else B%d", + *hydrogen()->class_name(), + true_block_id(), + false_block_id()); +} + + +void LTypeofIsAndBranch::PrintDataTo(StringStream* stream) { + stream->Add("if typeof "); + value()->PrintTo(stream); + stream->Add(" == \"%s\" then B%d else B%d", + hydrogen()->type_literal()->ToCString().get(), + true_block_id(), false_block_id()); +} + + +void LStoreCodeEntry::PrintDataTo(StringStream* stream) { + stream->Add(" = "); + function()->PrintTo(stream); + stream->Add(".code_entry = "); + code_object()->PrintTo(stream); +} + + +void LInnerAllocatedObject::PrintDataTo(StringStream* stream) { + stream->Add(" = "); + base_object()->PrintTo(stream); + stream->Add(" + "); + offset()->PrintTo(stream); +} + + +void LCallJSFunction::PrintDataTo(StringStream* stream) { + stream->Add("= "); + function()->PrintTo(stream); + 
stream->Add("#%d / ", arity()); +} + + +void LCallWithDescriptor::PrintDataTo(StringStream* stream) { + for (int i = 0; i < InputCount(); i++) { + InputAt(i)->PrintTo(stream); + stream->Add(" "); + } + stream->Add("#%d / ", arity()); +} + + +void LLoadContextSlot::PrintDataTo(StringStream* stream) { + context()->PrintTo(stream); + stream->Add("[%d]", slot_index()); +} + + +void LStoreContextSlot::PrintDataTo(StringStream* stream) { + context()->PrintTo(stream); + stream->Add("[%d] <- ", slot_index()); + value()->PrintTo(stream); +} + + +void LInvokeFunction::PrintDataTo(StringStream* stream) { + stream->Add("= "); + context()->PrintTo(stream); + stream->Add(" "); + function()->PrintTo(stream); + stream->Add(" #%d / ", arity()); +} + + +void LCallNew::PrintDataTo(StringStream* stream) { + stream->Add("= "); + context()->PrintTo(stream); + stream->Add(" "); + constructor()->PrintTo(stream); + stream->Add(" #%d / ", arity()); +} + + +void LCallNewArray::PrintDataTo(StringStream* stream) { + stream->Add("= "); + context()->PrintTo(stream); + stream->Add(" "); + constructor()->PrintTo(stream); + stream->Add(" #%d / ", arity()); + ElementsKind kind = hydrogen()->elements_kind(); + stream->Add(" (%s) ", ElementsKindToString(kind)); +} + + +void LAccessArgumentsAt::PrintDataTo(StringStream* stream) { + arguments()->PrintTo(stream); + + stream->Add(" length "); + length()->PrintTo(stream); + + stream->Add(" index "); + index()->PrintTo(stream); +} + + +int LPlatformChunk::GetNextSpillIndex(RegisterKind kind) { + // Skip a slot if for a double-width slot. + if (kind == DOUBLE_REGISTERS) { + spill_slot_count_++; + spill_slot_count_ |= 1; + num_double_slots_++; + } + return spill_slot_count_++; +} + + +LOperand* LPlatformChunk::GetNextSpillSlot(RegisterKind kind) { + int index = GetNextSpillIndex(kind); + if (kind == DOUBLE_REGISTERS) { + return LDoubleStackSlot::Create(index, zone()); + } else { + DCHECK(kind == GENERAL_REGISTERS); + return LStackSlot::Create(index, zone()); + } +} + + +void LStoreNamedField::PrintDataTo(StringStream* stream) { + object()->PrintTo(stream); + OStringStream os; + os << hydrogen()->access() << " <- "; + stream->Add(os.c_str()); + value()->PrintTo(stream); +} + + +void LStoreNamedGeneric::PrintDataTo(StringStream* stream) { + object()->PrintTo(stream); + stream->Add("."); + stream->Add(String::cast(*name())->ToCString().get()); + stream->Add(" <- "); + value()->PrintTo(stream); +} + + +void LLoadKeyed::PrintDataTo(StringStream* stream) { + elements()->PrintTo(stream); + stream->Add("["); + key()->PrintTo(stream); + if (hydrogen()->IsDehoisted()) { + stream->Add(" + %d]", base_offset()); + } else { + stream->Add("]"); + } +} + + +void LStoreKeyed::PrintDataTo(StringStream* stream) { + elements()->PrintTo(stream); + stream->Add("["); + key()->PrintTo(stream); + if (hydrogen()->IsDehoisted()) { + stream->Add(" + %d] <-", base_offset()); + } else { + stream->Add("] <- "); + } + + if (value() == NULL) { + DCHECK(hydrogen()->IsConstantHoleStore() && + hydrogen()->value()->representation().IsDouble()); + stream->Add("<the hole(nan)>"); + } else { + value()->PrintTo(stream); + } +} + + +void LStoreKeyedGeneric::PrintDataTo(StringStream* stream) { + object()->PrintTo(stream); + stream->Add("["); + key()->PrintTo(stream); + stream->Add("] <- "); + value()->PrintTo(stream); +} + + +void LTransitionElementsKind::PrintDataTo(StringStream* stream) { + object()->PrintTo(stream); + stream->Add(" %p -> %p", *original_map(), *transitioned_map()); +} + + +LPlatformChunk* 
LChunkBuilder::Build() { + DCHECK(is_unused()); + chunk_ = new(zone()) LPlatformChunk(info(), graph()); + LPhase phase("L_Building chunk", chunk_); + status_ = BUILDING; + + // Reserve the first spill slot for the state of dynamic alignment. + if (info()->IsOptimizing()) { + int alignment_state_index = chunk_->GetNextSpillIndex(GENERAL_REGISTERS); + DCHECK_EQ(alignment_state_index, 0); + USE(alignment_state_index); + } + + // If compiling for OSR, reserve space for the unoptimized frame, + // which will be subsumed into this frame. + if (graph()->has_osr()) { + for (int i = graph()->osr()->UnoptimizedFrameSlots(); i > 0; i--) { + chunk_->GetNextSpillIndex(GENERAL_REGISTERS); + } + } + + const ZoneList<HBasicBlock*>* blocks = graph()->blocks(); + for (int i = 0; i < blocks->length(); i++) { + HBasicBlock* next = NULL; + if (i < blocks->length() - 1) next = blocks->at(i + 1); + DoBasicBlock(blocks->at(i), next); + if (is_aborted()) return NULL; + } + status_ = DONE; + return chunk_; +} + + +void LChunkBuilder::Abort(BailoutReason reason) { + info()->set_bailout_reason(reason); + status_ = ABORTED; +} + + +LUnallocated* LChunkBuilder::ToUnallocated(Register reg) { + return new(zone()) LUnallocated(LUnallocated::FIXED_REGISTER, + Register::ToAllocationIndex(reg)); +} + + +LOperand* LChunkBuilder::UseFixed(HValue* value, Register fixed_register) { + return Use(value, ToUnallocated(fixed_register)); +} + + +LOperand* LChunkBuilder::UseRegister(HValue* value) { + return Use(value, new(zone()) LUnallocated(LUnallocated::MUST_HAVE_REGISTER)); +} + + +LOperand* LChunkBuilder::UseRegisterAtStart(HValue* value) { + return Use(value, + new(zone()) LUnallocated(LUnallocated::MUST_HAVE_REGISTER, + LUnallocated::USED_AT_START)); +} + + +LOperand* LChunkBuilder::UseTempRegister(HValue* value) { + return Use(value, new(zone()) LUnallocated(LUnallocated::WRITABLE_REGISTER)); +} + + +LOperand* LChunkBuilder::Use(HValue* value) { + return Use(value, new(zone()) LUnallocated(LUnallocated::NONE)); +} + + +LOperand* LChunkBuilder::UseAtStart(HValue* value) { + return Use(value, new(zone()) LUnallocated(LUnallocated::NONE, + LUnallocated::USED_AT_START)); +} + + +static inline bool CanBeImmediateConstant(HValue* value) { + return value->IsConstant() && HConstant::cast(value)->NotInNewSpace(); +} + + +LOperand* LChunkBuilder::UseOrConstant(HValue* value) { + return CanBeImmediateConstant(value) + ? chunk_->DefineConstantOperand(HConstant::cast(value)) + : Use(value); +} + + +LOperand* LChunkBuilder::UseOrConstantAtStart(HValue* value) { + return CanBeImmediateConstant(value) + ? chunk_->DefineConstantOperand(HConstant::cast(value)) + : UseAtStart(value); +} + + +LOperand* LChunkBuilder::UseFixedOrConstant(HValue* value, + Register fixed_register) { + return CanBeImmediateConstant(value) + ? chunk_->DefineConstantOperand(HConstant::cast(value)) + : UseFixed(value, fixed_register); +} + + +LOperand* LChunkBuilder::UseRegisterOrConstant(HValue* value) { + return CanBeImmediateConstant(value) + ? chunk_->DefineConstantOperand(HConstant::cast(value)) + : UseRegister(value); +} + + +LOperand* LChunkBuilder::UseRegisterOrConstantAtStart(HValue* value) { + return CanBeImmediateConstant(value) + ? chunk_->DefineConstantOperand(HConstant::cast(value)) + : UseRegisterAtStart(value); +} + + +LOperand* LChunkBuilder::UseConstant(HValue* value) { + return chunk_->DefineConstantOperand(HConstant::cast(value)); +} + + +LOperand* LChunkBuilder::UseAny(HValue* value) { + return value->IsConstant() + ? 
chunk_->DefineConstantOperand(HConstant::cast(value)) + : Use(value, new(zone()) LUnallocated(LUnallocated::ANY)); +} + + +LOperand* LChunkBuilder::Use(HValue* value, LUnallocated* operand) { + if (value->EmitAtUses()) { + HInstruction* instr = HInstruction::cast(value); + VisitInstruction(instr); + } + operand->set_virtual_register(value->id()); + return operand; +} + + +LInstruction* LChunkBuilder::Define(LTemplateResultInstruction<1>* instr, + LUnallocated* result) { + result->set_virtual_register(current_instruction_->id()); + instr->set_result(result); + return instr; +} + + +LInstruction* LChunkBuilder::DefineAsRegister( + LTemplateResultInstruction<1>* instr) { + return Define(instr, + new(zone()) LUnallocated(LUnallocated::MUST_HAVE_REGISTER)); +} + + +LInstruction* LChunkBuilder::DefineAsSpilled( + LTemplateResultInstruction<1>* instr, + int index) { + return Define(instr, + new(zone()) LUnallocated(LUnallocated::FIXED_SLOT, index)); +} + + +LInstruction* LChunkBuilder::DefineSameAsFirst( + LTemplateResultInstruction<1>* instr) { + return Define(instr, + new(zone()) LUnallocated(LUnallocated::SAME_AS_FIRST_INPUT)); +} + + +LInstruction* LChunkBuilder::DefineFixed(LTemplateResultInstruction<1>* instr, + Register reg) { + return Define(instr, ToUnallocated(reg)); +} + + +LInstruction* LChunkBuilder::AssignEnvironment(LInstruction* instr) { + HEnvironment* hydrogen_env = current_block_->last_environment(); + int argument_index_accumulator = 0; + ZoneList<HValue*> objects_to_materialize(0, zone()); + instr->set_environment(CreateEnvironment(hydrogen_env, + &argument_index_accumulator, + &objects_to_materialize)); + return instr; +} + + +LInstruction* LChunkBuilder::MarkAsCall(LInstruction* instr, + HInstruction* hinstr, + CanDeoptimize can_deoptimize) { + info()->MarkAsNonDeferredCalling(); + +#ifdef DEBUG + instr->VerifyCall(); +#endif + instr->MarkAsCall(); + instr = AssignPointerMap(instr); + + // If instruction does not have side-effects lazy deoptimization + // after the call will try to deoptimize to the point before the call. + // Thus we still need to attach environment to this call even if + // call sequence can not deoptimize eagerly. + bool needs_environment = + (can_deoptimize == CAN_DEOPTIMIZE_EAGERLY) || + !hinstr->HasObservableSideEffects(); + if (needs_environment && !instr->HasEnvironment()) { + instr = AssignEnvironment(instr); + // We can't really figure out if the environment is needed or not. 
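+ // Conservatively mark it as used: a redundant environment only costs
+ // space, while a missing one would break lazy deoptimization.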
+ instr->environment()->set_has_been_used(); + } + + return instr; +} + + +LInstruction* LChunkBuilder::AssignPointerMap(LInstruction* instr) { + DCHECK(!instr->HasPointerMap()); + instr->set_pointer_map(new(zone()) LPointerMap(zone())); + return instr; +} + + +LUnallocated* LChunkBuilder::TempRegister() { + LUnallocated* operand = + new(zone()) LUnallocated(LUnallocated::MUST_HAVE_REGISTER); + int vreg = allocator_->GetVirtualRegister(); + if (!allocator_->AllocationOk()) { + Abort(kOutOfVirtualRegistersWhileTryingToAllocateTempRegister); + vreg = 0; + } + operand->set_virtual_register(vreg); + return operand; +} + + +LOperand* LChunkBuilder::FixedTemp(Register reg) { + LUnallocated* operand = ToUnallocated(reg); + DCHECK(operand->HasFixedPolicy()); + return operand; +} + + +LInstruction* LChunkBuilder::DoBlockEntry(HBlockEntry* instr) { + return new(zone()) LLabel(instr->block()); +} + + +LInstruction* LChunkBuilder::DoDummyUse(HDummyUse* instr) { + return DefineAsRegister(new(zone()) LDummyUse(UseAny(instr->value()))); +} + + +LInstruction* LChunkBuilder::DoEnvironmentMarker(HEnvironmentMarker* instr) { + UNREACHABLE(); + return NULL; +} + + +LInstruction* LChunkBuilder::DoDeoptimize(HDeoptimize* instr) { + return AssignEnvironment(new(zone()) LDeoptimize); +} + + +LInstruction* LChunkBuilder::DoShift(Token::Value op, + HBitwiseBinaryOperation* instr) { + if (instr->representation().IsSmiOrInteger32()) { + DCHECK(instr->left()->representation().Equals(instr->representation())); + DCHECK(instr->right()->representation().Equals(instr->representation())); + LOperand* left = UseRegisterAtStart(instr->left()); + + HValue* right_value = instr->right(); + LOperand* right = NULL; + int constant_value = 0; + bool does_deopt = false; + if (right_value->IsConstant()) { + HConstant* constant = HConstant::cast(right_value); + right = chunk_->DefineConstantOperand(constant); + constant_value = constant->Integer32Value() & 0x1f; + // Left shifts can deoptimize if we shift by > 0 and the result cannot be + // truncated to smi. + if (instr->representation().IsSmi() && constant_value > 0) { + does_deopt = !instr->CheckUsesForFlag(HValue::kTruncatingToSmi); + } + } else { + right = UseFixed(right_value, ecx); + } + + // Shift operations can only deoptimize if we do a logical shift by 0 and + // the result cannot be truncated to int32. + if (op == Token::SHR && constant_value == 0) { + if (FLAG_opt_safe_uint32_operations) { + does_deopt = !instr->CheckFlag(HInstruction::kUint32); + } else { + does_deopt = !instr->CheckUsesForFlag(HValue::kTruncatingToInt32); + } + } + + LInstruction* result = + DefineSameAsFirst(new(zone()) LShiftI(op, left, right, does_deopt)); + return does_deopt ? 
AssignEnvironment(result) : result; + } else { + return DoArithmeticT(op, instr); + } +} + + +LInstruction* LChunkBuilder::DoArithmeticD(Token::Value op, + HArithmeticBinaryOperation* instr) { + DCHECK(instr->representation().IsDouble()); + DCHECK(instr->left()->representation().IsDouble()); + DCHECK(instr->right()->representation().IsDouble()); + if (op == Token::MOD) { + LOperand* left = UseRegisterAtStart(instr->BetterLeftOperand()); + LOperand* right = UseRegisterAtStart(instr->BetterRightOperand()); + LArithmeticD* result = new(zone()) LArithmeticD(op, left, right); + return MarkAsCall(DefineSameAsFirst(result), instr); + } else { + LOperand* left = UseRegisterAtStart(instr->BetterLeftOperand()); + LOperand* right = UseRegisterAtStart(instr->BetterRightOperand()); + LArithmeticD* result = new(zone()) LArithmeticD(op, left, right); + return DefineSameAsFirst(result); + } +} + + +LInstruction* LChunkBuilder::DoArithmeticT(Token::Value op, + HBinaryOperation* instr) { + HValue* left = instr->left(); + HValue* right = instr->right(); + DCHECK(left->representation().IsTagged()); + DCHECK(right->representation().IsTagged()); + LOperand* context = UseFixed(instr->context(), esi); + LOperand* left_operand = UseFixed(left, edx); + LOperand* right_operand = UseFixed(right, eax); + LArithmeticT* result = + new(zone()) LArithmeticT(op, context, left_operand, right_operand); + return MarkAsCall(DefineFixed(result, eax), instr); +} + + +void LChunkBuilder::DoBasicBlock(HBasicBlock* block, HBasicBlock* next_block) { + DCHECK(is_building()); + current_block_ = block; + next_block_ = next_block; + if (block->IsStartBlock()) { + block->UpdateEnvironment(graph_->start_environment()); + argument_count_ = 0; + } else if (block->predecessors()->length() == 1) { + // We have a single predecessor => copy environment and outgoing + // argument count from the predecessor. + DCHECK(block->phis()->length() == 0); + HBasicBlock* pred = block->predecessors()->at(0); + HEnvironment* last_environment = pred->last_environment(); + DCHECK(last_environment != NULL); + // Only copy the environment, if it is later used again. + if (pred->end()->SecondSuccessor() == NULL) { + DCHECK(pred->end()->FirstSuccessor() == block); + } else { + if (pred->end()->FirstSuccessor()->block_id() > block->block_id() || + pred->end()->SecondSuccessor()->block_id() > block->block_id()) { + last_environment = last_environment->Copy(); + } + } + block->UpdateEnvironment(last_environment); + DCHECK(pred->argument_count() >= 0); + argument_count_ = pred->argument_count(); + } else { + // We are at a state join => process phis. + HBasicBlock* pred = block->predecessors()->at(0); + // No need to copy the environment, it cannot be used later. + HEnvironment* last_environment = pred->last_environment(); + for (int i = 0; i < block->phis()->length(); ++i) { + HPhi* phi = block->phis()->at(i); + if (phi->HasMergedIndex()) { + last_environment->SetValueAt(phi->merged_index(), phi); + } + } + for (int i = 0; i < block->deleted_phis()->length(); ++i) { + if (block->deleted_phis()->at(i) < last_environment->length()) { + last_environment->SetValueAt(block->deleted_phis()->at(i), + graph_->GetConstantUndefined()); + } + } + block->UpdateEnvironment(last_environment); + // Pick up the outgoing argument count of one of the predecessors. 
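+ // (At a join all predecessors must agree on it, so any one will do.)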
+    argument_count_ = pred->argument_count();
+  }
+  HInstruction* current = block->first();
+  int start = chunk_->instructions()->length();
+  while (current != NULL && !is_aborted()) {
+    // Code for constants in registers is generated lazily.
+    if (!current->EmitAtUses()) {
+      VisitInstruction(current);
+    }
+    current = current->next();
+  }
+  int end = chunk_->instructions()->length() - 1;
+  if (end >= start) {
+    block->set_first_instruction_index(start);
+    block->set_last_instruction_index(end);
+  }
+  block->set_argument_count(argument_count_);
+  next_block_ = NULL;
+  current_block_ = NULL;
+}
+
+
+void LChunkBuilder::VisitInstruction(HInstruction* current) {
+  HInstruction* old_current = current_instruction_;
+  current_instruction_ = current;
+
+  LInstruction* instr = NULL;
+  if (current->CanReplaceWithDummyUses()) {
+    if (current->OperandCount() == 0) {
+      instr = DefineAsRegister(new(zone()) LDummy());
+    } else {
+      DCHECK(!current->OperandAt(0)->IsControlInstruction());
+      instr = DefineAsRegister(new(zone())
+          LDummyUse(UseAny(current->OperandAt(0))));
+    }
+    for (int i = 1; i < current->OperandCount(); ++i) {
+      if (current->OperandAt(i)->IsControlInstruction()) continue;
+      LInstruction* dummy =
+          new(zone()) LDummyUse(UseAny(current->OperandAt(i)));
+      dummy->set_hydrogen_value(current);
+      chunk_->AddInstruction(dummy, current_block_);
+    }
+  } else {
+    HBasicBlock* successor;
+    if (current->IsControlInstruction() &&
+        HControlInstruction::cast(current)->KnownSuccessorBlock(&successor) &&
+        successor != NULL) {
+      instr = new(zone()) LGoto(successor);
+    } else {
+      instr = current->CompileToLithium(this);
+    }
+  }
+
+  argument_count_ += current->argument_delta();
+  DCHECK(argument_count_ >= 0);
+
+  if (instr != NULL) {
+    AddInstruction(instr, current);
+  }
+
+  current_instruction_ = old_current;
+}
+
+
+void LChunkBuilder::AddInstruction(LInstruction* instr,
+                                   HInstruction* hydrogen_val) {
+  // Associate the hydrogen instruction first, since we may need it for
+  // the ClobbersRegisters() or ClobbersDoubleRegisters() calls below.
+  instr->set_hydrogen_value(hydrogen_val);
+
+#if DEBUG
+  // Make sure that the lithium instruction has either no fixed register
+  // constraints in temps or the result OR no uses that are only used at
+  // start. If this invariant doesn't hold, the register allocator can decide
+  // to insert a split of a range immediately before the instruction due to an
+  // already allocated register needing to be used for the instruction's fixed
+  // register constraint. In this case, the register allocator won't see an
+  // interference between the split child and the use-at-start (it would if
+  // it was just a plain use), so it is free to move the split child into
+  // the same register that is used for the use-at-start.
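+  // Put differently, a fixed-register constraint and a use-at-start could
+  // then silently alias the same register and clobber an input value. The
+  // counting below therefore asserts that no single instruction combines
+  // fixed-policy operands with use-at-start operands.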
+ // See https://code.google.com/p/chromium/issues/detail?id=201590 + if (!(instr->ClobbersRegisters() && + instr->ClobbersDoubleRegisters(isolate()))) { + int fixed = 0; + int used_at_start = 0; + for (UseIterator it(instr); !it.Done(); it.Advance()) { + LUnallocated* operand = LUnallocated::cast(it.Current()); + if (operand->IsUsedAtStart()) ++used_at_start; + } + if (instr->Output() != NULL) { + if (LUnallocated::cast(instr->Output())->HasFixedPolicy()) ++fixed; + } + for (TempIterator it(instr); !it.Done(); it.Advance()) { + LUnallocated* operand = LUnallocated::cast(it.Current()); + if (operand->HasFixedPolicy()) ++fixed; + } + DCHECK(fixed == 0 || used_at_start == 0); + } +#endif + + if (FLAG_stress_pointer_maps && !instr->HasPointerMap()) { + instr = AssignPointerMap(instr); + } + if (FLAG_stress_environments && !instr->HasEnvironment()) { + instr = AssignEnvironment(instr); + } + if (instr->IsGoto() && LGoto::cast(instr)->jumps_to_join()) { + // TODO(olivf) Since phis of spilled values are joined as registers + // (not in the stack slot), we need to allow the goto gaps to keep one + // x87 register alive. To ensure all other values are still spilled, we + // insert a fpu register barrier right before. + LClobberDoubles* clobber = new(zone()) LClobberDoubles(isolate()); + clobber->set_hydrogen_value(hydrogen_val); + chunk_->AddInstruction(clobber, current_block_); + } + chunk_->AddInstruction(instr, current_block_); + + if (instr->IsCall()) { + HValue* hydrogen_value_for_lazy_bailout = hydrogen_val; + LInstruction* instruction_needing_environment = NULL; + if (hydrogen_val->HasObservableSideEffects()) { + HSimulate* sim = HSimulate::cast(hydrogen_val->next()); + instruction_needing_environment = instr; + sim->ReplayEnvironment(current_block_->last_environment()); + hydrogen_value_for_lazy_bailout = sim; + } + LInstruction* bailout = AssignEnvironment(new(zone()) LLazyBailout()); + bailout->set_hydrogen_value(hydrogen_value_for_lazy_bailout); + chunk_->AddInstruction(bailout, current_block_); + if (instruction_needing_environment != NULL) { + // Store the lazy deopt environment with the instruction if needed. + // Right now it is only used for LInstanceOfKnownGlobal. + instruction_needing_environment-> + SetDeferredLazyDeoptimizationEnvironment(bailout->environment()); + } + } +} + + +LInstruction* LChunkBuilder::DoGoto(HGoto* instr) { + return new(zone()) LGoto(instr->FirstSuccessor()); +} + + +LInstruction* LChunkBuilder::DoBranch(HBranch* instr) { + HValue* value = instr->value(); + Representation r = value->representation(); + HType type = value->type(); + ToBooleanStub::Types expected = instr->expected_input_types(); + if (expected.IsEmpty()) expected = ToBooleanStub::Types::Generic(); + + bool easy_case = !r.IsTagged() || type.IsBoolean() || type.IsSmi() || + type.IsJSArray() || type.IsHeapNumber() || type.IsString(); + LOperand* temp = !easy_case && expected.NeedsMap() ? 
TempRegister() : NULL; + LInstruction* branch = new(zone()) LBranch(UseRegister(value), temp); + if (!easy_case && + ((!expected.Contains(ToBooleanStub::SMI) && expected.NeedsMap()) || + !expected.IsGeneric())) { + branch = AssignEnvironment(branch); + } + return branch; +} + + +LInstruction* LChunkBuilder::DoDebugBreak(HDebugBreak* instr) { + return new(zone()) LDebugBreak(); +} + + +LInstruction* LChunkBuilder::DoCompareMap(HCompareMap* instr) { + DCHECK(instr->value()->representation().IsTagged()); + LOperand* value = UseRegisterAtStart(instr->value()); + return new(zone()) LCmpMapAndBranch(value); +} + + +LInstruction* LChunkBuilder::DoArgumentsLength(HArgumentsLength* length) { + info()->MarkAsRequiresFrame(); + return DefineAsRegister(new(zone()) LArgumentsLength(Use(length->value()))); +} + + +LInstruction* LChunkBuilder::DoArgumentsElements(HArgumentsElements* elems) { + info()->MarkAsRequiresFrame(); + return DefineAsRegister(new(zone()) LArgumentsElements); +} + + +LInstruction* LChunkBuilder::DoInstanceOf(HInstanceOf* instr) { + LOperand* left = UseFixed(instr->left(), InstanceofStub::left()); + LOperand* right = UseFixed(instr->right(), InstanceofStub::right()); + LOperand* context = UseFixed(instr->context(), esi); + LInstanceOf* result = new(zone()) LInstanceOf(context, left, right); + return MarkAsCall(DefineFixed(result, eax), instr); +} + + +LInstruction* LChunkBuilder::DoInstanceOfKnownGlobal( + HInstanceOfKnownGlobal* instr) { + LInstanceOfKnownGlobal* result = + new(zone()) LInstanceOfKnownGlobal( + UseFixed(instr->context(), esi), + UseFixed(instr->left(), InstanceofStub::left()), + FixedTemp(edi)); + return MarkAsCall(DefineFixed(result, eax), instr); +} + + +LInstruction* LChunkBuilder::DoWrapReceiver(HWrapReceiver* instr) { + LOperand* receiver = UseRegister(instr->receiver()); + LOperand* function = UseRegister(instr->function()); + LOperand* temp = TempRegister(); + LWrapReceiver* result = + new(zone()) LWrapReceiver(receiver, function, temp); + return AssignEnvironment(DefineSameAsFirst(result)); +} + + +LInstruction* LChunkBuilder::DoApplyArguments(HApplyArguments* instr) { + LOperand* function = UseFixed(instr->function(), edi); + LOperand* receiver = UseFixed(instr->receiver(), eax); + LOperand* length = UseFixed(instr->length(), ebx); + LOperand* elements = UseFixed(instr->elements(), ecx); + LApplyArguments* result = new(zone()) LApplyArguments(function, + receiver, + length, + elements); + return MarkAsCall(DefineFixed(result, eax), instr, CAN_DEOPTIMIZE_EAGERLY); +} + + +LInstruction* LChunkBuilder::DoPushArguments(HPushArguments* instr) { + int argc = instr->OperandCount(); + for (int i = 0; i < argc; ++i) { + LOperand* argument = UseAny(instr->argument(i)); + AddInstruction(new(zone()) LPushArgument(argument), instr); + } + return NULL; +} + + +LInstruction* LChunkBuilder::DoStoreCodeEntry( + HStoreCodeEntry* store_code_entry) { + LOperand* function = UseRegister(store_code_entry->function()); + LOperand* code_object = UseTempRegister(store_code_entry->code_object()); + return new(zone()) LStoreCodeEntry(function, code_object); +} + + +LInstruction* LChunkBuilder::DoInnerAllocatedObject( + HInnerAllocatedObject* instr) { + LOperand* base_object = UseRegisterAtStart(instr->base_object()); + LOperand* offset = UseRegisterOrConstantAtStart(instr->offset()); + return DefineAsRegister( + new(zone()) LInnerAllocatedObject(base_object, offset)); +} + + +LInstruction* LChunkBuilder::DoThisFunction(HThisFunction* instr) { + return instr->HasNoUses() + ? 
NULL + : DefineAsRegister(new(zone()) LThisFunction); +} + + +LInstruction* LChunkBuilder::DoContext(HContext* instr) { + if (instr->HasNoUses()) return NULL; + + if (info()->IsStub()) { + return DefineFixed(new(zone()) LContext, esi); + } + + return DefineAsRegister(new(zone()) LContext); +} + + +LInstruction* LChunkBuilder::DoDeclareGlobals(HDeclareGlobals* instr) { + LOperand* context = UseFixed(instr->context(), esi); + return MarkAsCall(new(zone()) LDeclareGlobals(context), instr); +} + + +LInstruction* LChunkBuilder::DoCallJSFunction( + HCallJSFunction* instr) { + LOperand* function = UseFixed(instr->function(), edi); + + LCallJSFunction* result = new(zone()) LCallJSFunction(function); + + return MarkAsCall(DefineFixed(result, eax), instr, CANNOT_DEOPTIMIZE_EAGERLY); +} + + +LInstruction* LChunkBuilder::DoCallWithDescriptor( + HCallWithDescriptor* instr) { + const InterfaceDescriptor* descriptor = instr->descriptor(); + LOperand* target = UseRegisterOrConstantAtStart(instr->target()); + ZoneList<LOperand*> ops(instr->OperandCount(), zone()); + ops.Add(target, zone()); + for (int i = 1; i < instr->OperandCount(); i++) { + LOperand* op = UseFixed(instr->OperandAt(i), + descriptor->GetParameterRegister(i - 1)); + ops.Add(op, zone()); + } + + LCallWithDescriptor* result = new(zone()) LCallWithDescriptor( + descriptor, ops, zone()); + return MarkAsCall(DefineFixed(result, eax), instr, CANNOT_DEOPTIMIZE_EAGERLY); +} + + +LInstruction* LChunkBuilder::DoInvokeFunction(HInvokeFunction* instr) { + LOperand* context = UseFixed(instr->context(), esi); + LOperand* function = UseFixed(instr->function(), edi); + LInvokeFunction* result = new(zone()) LInvokeFunction(context, function); + return MarkAsCall(DefineFixed(result, eax), instr, CANNOT_DEOPTIMIZE_EAGERLY); +} + + +LInstruction* LChunkBuilder::DoUnaryMathOperation(HUnaryMathOperation* instr) { + switch (instr->op()) { + case kMathFloor: return DoMathFloor(instr); + case kMathRound: return DoMathRound(instr); + case kMathFround: return DoMathFround(instr); + case kMathAbs: return DoMathAbs(instr); + case kMathLog: return DoMathLog(instr); + case kMathExp: return DoMathExp(instr); + case kMathSqrt: return DoMathSqrt(instr); + case kMathPowHalf: return DoMathPowHalf(instr); + case kMathClz32: return DoMathClz32(instr); + default: + UNREACHABLE(); + return NULL; + } +} + + +LInstruction* LChunkBuilder::DoMathFloor(HUnaryMathOperation* instr) { + LOperand* input = UseRegisterAtStart(instr->value()); + LMathFloor* result = new(zone()) LMathFloor(input); + return AssignEnvironment(DefineAsRegister(result)); +} + + +LInstruction* LChunkBuilder::DoMathRound(HUnaryMathOperation* instr) { + // Crankshaft is turned off for nosse2. + UNREACHABLE(); + return NULL; +} + + +LInstruction* LChunkBuilder::DoMathFround(HUnaryMathOperation* instr) { + LOperand* input = UseRegisterAtStart(instr->value()); + LMathFround* result = new (zone()) LMathFround(input); + return AssignEnvironment(DefineAsRegister(result)); +} + + +LInstruction* LChunkBuilder::DoMathAbs(HUnaryMathOperation* instr) { + LOperand* context = UseAny(instr->context()); // Deferred use. 
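+  // UseAny suffices here: the context is only consumed on the deferred
+  // (runtime call) path for tagged inputs, so it may live in a register, a
+  // stack slot, or be a constant without constraining the fast path.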
+ LOperand* input = UseRegisterAtStart(instr->value()); + LInstruction* result = + DefineSameAsFirst(new(zone()) LMathAbs(context, input)); + Representation r = instr->value()->representation(); + if (!r.IsDouble() && !r.IsSmiOrInteger32()) result = AssignPointerMap(result); + if (!r.IsDouble()) result = AssignEnvironment(result); + return result; +} + + +LInstruction* LChunkBuilder::DoMathLog(HUnaryMathOperation* instr) { + DCHECK(instr->representation().IsDouble()); + DCHECK(instr->value()->representation().IsDouble()); + LOperand* input = UseRegisterAtStart(instr->value()); + return MarkAsCall(DefineSameAsFirst(new(zone()) LMathLog(input)), instr); +} + + +LInstruction* LChunkBuilder::DoMathClz32(HUnaryMathOperation* instr) { + LOperand* input = UseRegisterAtStart(instr->value()); + LMathClz32* result = new(zone()) LMathClz32(input); + return DefineAsRegister(result); +} + + +LInstruction* LChunkBuilder::DoMathExp(HUnaryMathOperation* instr) { + DCHECK(instr->representation().IsDouble()); + DCHECK(instr->value()->representation().IsDouble()); + LOperand* value = UseTempRegister(instr->value()); + LOperand* temp1 = TempRegister(); + LOperand* temp2 = TempRegister(); + LMathExp* result = new(zone()) LMathExp(value, temp1, temp2); + return DefineAsRegister(result); +} + + +LInstruction* LChunkBuilder::DoMathSqrt(HUnaryMathOperation* instr) { + LOperand* input = UseRegisterAtStart(instr->value()); + LMathSqrt* result = new(zone()) LMathSqrt(input); + return DefineSameAsFirst(result); +} + + +LInstruction* LChunkBuilder::DoMathPowHalf(HUnaryMathOperation* instr) { + LOperand* input = UseRegisterAtStart(instr->value()); + LOperand* temp = TempRegister(); + LMathPowHalf* result = new(zone()) LMathPowHalf(input, temp); + return DefineSameAsFirst(result); +} + + +LInstruction* LChunkBuilder::DoCallNew(HCallNew* instr) { + LOperand* context = UseFixed(instr->context(), esi); + LOperand* constructor = UseFixed(instr->constructor(), edi); + LCallNew* result = new(zone()) LCallNew(context, constructor); + return MarkAsCall(DefineFixed(result, eax), instr); +} + + +LInstruction* LChunkBuilder::DoCallNewArray(HCallNewArray* instr) { + LOperand* context = UseFixed(instr->context(), esi); + LOperand* constructor = UseFixed(instr->constructor(), edi); + LCallNewArray* result = new(zone()) LCallNewArray(context, constructor); + return MarkAsCall(DefineFixed(result, eax), instr); +} + + +LInstruction* LChunkBuilder::DoCallFunction(HCallFunction* instr) { + LOperand* context = UseFixed(instr->context(), esi); + LOperand* function = UseFixed(instr->function(), edi); + LCallFunction* call = new(zone()) LCallFunction(context, function); + return MarkAsCall(DefineFixed(call, eax), instr); +} + + +LInstruction* LChunkBuilder::DoCallRuntime(HCallRuntime* instr) { + LOperand* context = UseFixed(instr->context(), esi); + return MarkAsCall(DefineFixed(new(zone()) LCallRuntime(context), eax), instr); +} + + +LInstruction* LChunkBuilder::DoRor(HRor* instr) { + return DoShift(Token::ROR, instr); +} + + +LInstruction* LChunkBuilder::DoShr(HShr* instr) { + return DoShift(Token::SHR, instr); +} + + +LInstruction* LChunkBuilder::DoSar(HSar* instr) { + return DoShift(Token::SAR, instr); +} + + +LInstruction* LChunkBuilder::DoShl(HShl* instr) { + return DoShift(Token::SHL, instr); +} + + +LInstruction* LChunkBuilder::DoBitwise(HBitwise* instr) { + if (instr->representation().IsSmiOrInteger32()) { + DCHECK(instr->left()->representation().Equals(instr->representation())); + 
DCHECK(instr->right()->representation().Equals(instr->representation())); + DCHECK(instr->CheckFlag(HValue::kTruncatingToInt32)); + + LOperand* left = UseRegisterAtStart(instr->BetterLeftOperand()); + LOperand* right = UseOrConstantAtStart(instr->BetterRightOperand()); + return DefineSameAsFirst(new(zone()) LBitI(left, right)); + } else { + return DoArithmeticT(instr->op(), instr); + } +} + + +LInstruction* LChunkBuilder::DoDivByPowerOf2I(HDiv* instr) { + DCHECK(instr->representation().IsSmiOrInteger32()); + DCHECK(instr->left()->representation().Equals(instr->representation())); + DCHECK(instr->right()->representation().Equals(instr->representation())); + LOperand* dividend = UseRegister(instr->left()); + int32_t divisor = instr->right()->GetInteger32Constant(); + LInstruction* result = DefineAsRegister(new(zone()) LDivByPowerOf2I( + dividend, divisor)); + if ((instr->CheckFlag(HValue::kBailoutOnMinusZero) && divisor < 0) || + (instr->CheckFlag(HValue::kCanOverflow) && divisor == -1) || + (!instr->CheckFlag(HInstruction::kAllUsesTruncatingToInt32) && + divisor != 1 && divisor != -1)) { + result = AssignEnvironment(result); + } + return result; +} + + +LInstruction* LChunkBuilder::DoDivByConstI(HDiv* instr) { + DCHECK(instr->representation().IsInteger32()); + DCHECK(instr->left()->representation().Equals(instr->representation())); + DCHECK(instr->right()->representation().Equals(instr->representation())); + LOperand* dividend = UseRegister(instr->left()); + int32_t divisor = instr->right()->GetInteger32Constant(); + LOperand* temp1 = FixedTemp(eax); + LOperand* temp2 = FixedTemp(edx); + LInstruction* result = DefineFixed(new(zone()) LDivByConstI( + dividend, divisor, temp1, temp2), edx); + if (divisor == 0 || + (instr->CheckFlag(HValue::kBailoutOnMinusZero) && divisor < 0) || + !instr->CheckFlag(HInstruction::kAllUsesTruncatingToInt32)) { + result = AssignEnvironment(result); + } + return result; +} + + +LInstruction* LChunkBuilder::DoDivI(HDiv* instr) { + DCHECK(instr->representation().IsSmiOrInteger32()); + DCHECK(instr->left()->representation().Equals(instr->representation())); + DCHECK(instr->right()->representation().Equals(instr->representation())); + LOperand* dividend = UseFixed(instr->left(), eax); + LOperand* divisor = UseRegister(instr->right()); + LOperand* temp = FixedTemp(edx); + LInstruction* result = DefineFixed(new(zone()) LDivI( + dividend, divisor, temp), eax); + if (instr->CheckFlag(HValue::kCanBeDivByZero) || + instr->CheckFlag(HValue::kBailoutOnMinusZero) || + instr->CheckFlag(HValue::kCanOverflow) || + !instr->CheckFlag(HValue::kAllUsesTruncatingToInt32)) { + result = AssignEnvironment(result); + } + return result; +} + + +LInstruction* LChunkBuilder::DoDiv(HDiv* instr) { + if (instr->representation().IsSmiOrInteger32()) { + if (instr->RightIsPowerOf2()) { + return DoDivByPowerOf2I(instr); + } else if (instr->right()->IsConstant()) { + return DoDivByConstI(instr); + } else { + return DoDivI(instr); + } + } else if (instr->representation().IsDouble()) { + return DoArithmeticD(Token::DIV, instr); + } else { + return DoArithmeticT(Token::DIV, instr); + } +} + + +LInstruction* LChunkBuilder::DoFlooringDivByPowerOf2I(HMathFloorOfDiv* instr) { + LOperand* dividend = UseRegisterAtStart(instr->left()); + int32_t divisor = instr->right()->GetInteger32Constant(); + LInstruction* result = DefineSameAsFirst(new(zone()) LFlooringDivByPowerOf2I( + dividend, divisor)); + if ((instr->CheckFlag(HValue::kBailoutOnMinusZero) && divisor < 0) || + 
(instr->CheckFlag(HValue::kLeftCanBeMinInt) && divisor == -1)) { + result = AssignEnvironment(result); + } + return result; +} + + +LInstruction* LChunkBuilder::DoFlooringDivByConstI(HMathFloorOfDiv* instr) { + DCHECK(instr->representation().IsInteger32()); + DCHECK(instr->left()->representation().Equals(instr->representation())); + DCHECK(instr->right()->representation().Equals(instr->representation())); + LOperand* dividend = UseRegister(instr->left()); + int32_t divisor = instr->right()->GetInteger32Constant(); + LOperand* temp1 = FixedTemp(eax); + LOperand* temp2 = FixedTemp(edx); + LOperand* temp3 = + ((divisor > 0 && !instr->CheckFlag(HValue::kLeftCanBeNegative)) || + (divisor < 0 && !instr->CheckFlag(HValue::kLeftCanBePositive))) ? + NULL : TempRegister(); + LInstruction* result = + DefineFixed(new(zone()) LFlooringDivByConstI(dividend, + divisor, + temp1, + temp2, + temp3), + edx); + if (divisor == 0 || + (instr->CheckFlag(HValue::kBailoutOnMinusZero) && divisor < 0)) { + result = AssignEnvironment(result); + } + return result; +} + + +LInstruction* LChunkBuilder::DoFlooringDivI(HMathFloorOfDiv* instr) { + DCHECK(instr->representation().IsSmiOrInteger32()); + DCHECK(instr->left()->representation().Equals(instr->representation())); + DCHECK(instr->right()->representation().Equals(instr->representation())); + LOperand* dividend = UseFixed(instr->left(), eax); + LOperand* divisor = UseRegister(instr->right()); + LOperand* temp = FixedTemp(edx); + LInstruction* result = DefineFixed(new(zone()) LFlooringDivI( + dividend, divisor, temp), eax); + if (instr->CheckFlag(HValue::kCanBeDivByZero) || + instr->CheckFlag(HValue::kBailoutOnMinusZero) || + instr->CheckFlag(HValue::kCanOverflow)) { + result = AssignEnvironment(result); + } + return result; +} + + +LInstruction* LChunkBuilder::DoMathFloorOfDiv(HMathFloorOfDiv* instr) { + if (instr->RightIsPowerOf2()) { + return DoFlooringDivByPowerOf2I(instr); + } else if (instr->right()->IsConstant()) { + return DoFlooringDivByConstI(instr); + } else { + return DoFlooringDivI(instr); + } +} + + +LInstruction* LChunkBuilder::DoModByPowerOf2I(HMod* instr) { + DCHECK(instr->representation().IsSmiOrInteger32()); + DCHECK(instr->left()->representation().Equals(instr->representation())); + DCHECK(instr->right()->representation().Equals(instr->representation())); + LOperand* dividend = UseRegisterAtStart(instr->left()); + int32_t divisor = instr->right()->GetInteger32Constant(); + LInstruction* result = DefineSameAsFirst(new(zone()) LModByPowerOf2I( + dividend, divisor)); + if (instr->CheckFlag(HValue::kLeftCanBeNegative) && + instr->CheckFlag(HValue::kBailoutOnMinusZero)) { + result = AssignEnvironment(result); + } + return result; +} + + +LInstruction* LChunkBuilder::DoModByConstI(HMod* instr) { + DCHECK(instr->representation().IsSmiOrInteger32()); + DCHECK(instr->left()->representation().Equals(instr->representation())); + DCHECK(instr->right()->representation().Equals(instr->representation())); + LOperand* dividend = UseRegister(instr->left()); + int32_t divisor = instr->right()->GetInteger32Constant(); + LOperand* temp1 = FixedTemp(eax); + LOperand* temp2 = FixedTemp(edx); + LInstruction* result = DefineFixed(new(zone()) LModByConstI( + dividend, divisor, temp1, temp2), eax); + if (divisor == 0 || instr->CheckFlag(HValue::kBailoutOnMinusZero)) { + result = AssignEnvironment(result); + } + return result; +} + + +LInstruction* LChunkBuilder::DoModI(HMod* instr) { + DCHECK(instr->representation().IsSmiOrInteger32()); + 
DCHECK(instr->left()->representation().Equals(instr->representation())); + DCHECK(instr->right()->representation().Equals(instr->representation())); + LOperand* dividend = UseFixed(instr->left(), eax); + LOperand* divisor = UseRegister(instr->right()); + LOperand* temp = FixedTemp(edx); + LInstruction* result = DefineFixed(new(zone()) LModI( + dividend, divisor, temp), edx); + if (instr->CheckFlag(HValue::kCanBeDivByZero) || + instr->CheckFlag(HValue::kBailoutOnMinusZero)) { + result = AssignEnvironment(result); + } + return result; +} + + +LInstruction* LChunkBuilder::DoMod(HMod* instr) { + if (instr->representation().IsSmiOrInteger32()) { + if (instr->RightIsPowerOf2()) { + return DoModByPowerOf2I(instr); + } else if (instr->right()->IsConstant()) { + return DoModByConstI(instr); + } else { + return DoModI(instr); + } + } else if (instr->representation().IsDouble()) { + return DoArithmeticD(Token::MOD, instr); + } else { + return DoArithmeticT(Token::MOD, instr); + } +} + + +LInstruction* LChunkBuilder::DoMul(HMul* instr) { + if (instr->representation().IsSmiOrInteger32()) { + DCHECK(instr->left()->representation().Equals(instr->representation())); + DCHECK(instr->right()->representation().Equals(instr->representation())); + LOperand* left = UseRegisterAtStart(instr->BetterLeftOperand()); + LOperand* right = UseOrConstant(instr->BetterRightOperand()); + LOperand* temp = NULL; + if (instr->CheckFlag(HValue::kBailoutOnMinusZero)) { + temp = TempRegister(); + } + LMulI* mul = new(zone()) LMulI(left, right, temp); + if (instr->CheckFlag(HValue::kCanOverflow) || + instr->CheckFlag(HValue::kBailoutOnMinusZero)) { + AssignEnvironment(mul); + } + return DefineSameAsFirst(mul); + } else if (instr->representation().IsDouble()) { + return DoArithmeticD(Token::MUL, instr); + } else { + return DoArithmeticT(Token::MUL, instr); + } +} + + +LInstruction* LChunkBuilder::DoSub(HSub* instr) { + if (instr->representation().IsSmiOrInteger32()) { + DCHECK(instr->left()->representation().Equals(instr->representation())); + DCHECK(instr->right()->representation().Equals(instr->representation())); + LOperand* left = UseRegisterAtStart(instr->left()); + LOperand* right = UseOrConstantAtStart(instr->right()); + LSubI* sub = new(zone()) LSubI(left, right); + LInstruction* result = DefineSameAsFirst(sub); + if (instr->CheckFlag(HValue::kCanOverflow)) { + result = AssignEnvironment(result); + } + return result; + } else if (instr->representation().IsDouble()) { + return DoArithmeticD(Token::SUB, instr); + } else { + return DoArithmeticT(Token::SUB, instr); + } +} + + +LInstruction* LChunkBuilder::DoAdd(HAdd* instr) { + if (instr->representation().IsSmiOrInteger32()) { + DCHECK(instr->left()->representation().Equals(instr->representation())); + DCHECK(instr->right()->representation().Equals(instr->representation())); + // Check to see if it would be advantageous to use an lea instruction rather + // than an add. This is the case when no overflow check is needed and there + // are multiple uses of the add's inputs, so using a 3-register add will + // preserve all input values for later uses. + bool use_lea = LAddI::UseLea(instr); + LOperand* left = UseRegisterAtStart(instr->BetterLeftOperand()); + HValue* right_candidate = instr->BetterRightOperand(); + LOperand* right = use_lea + ? UseRegisterOrConstantAtStart(right_candidate) + : UseOrConstantAtStart(right_candidate); + LAddI* add = new(zone()) LAddI(left, right); + bool can_overflow = instr->CheckFlag(HValue::kCanOverflow); + LInstruction* result = use_lea + ? 
DefineAsRegister(add) + : DefineSameAsFirst(add); + if (can_overflow) { + result = AssignEnvironment(result); + } + return result; + } else if (instr->representation().IsDouble()) { + return DoArithmeticD(Token::ADD, instr); + } else if (instr->representation().IsExternal()) { + DCHECK(instr->left()->representation().IsExternal()); + DCHECK(instr->right()->representation().IsInteger32()); + DCHECK(!instr->CheckFlag(HValue::kCanOverflow)); + bool use_lea = LAddI::UseLea(instr); + LOperand* left = UseRegisterAtStart(instr->left()); + HValue* right_candidate = instr->right(); + LOperand* right = use_lea + ? UseRegisterOrConstantAtStart(right_candidate) + : UseOrConstantAtStart(right_candidate); + LAddI* add = new(zone()) LAddI(left, right); + LInstruction* result = use_lea + ? DefineAsRegister(add) + : DefineSameAsFirst(add); + return result; + } else { + return DoArithmeticT(Token::ADD, instr); + } +} + + +LInstruction* LChunkBuilder::DoMathMinMax(HMathMinMax* instr) { + LOperand* left = NULL; + LOperand* right = NULL; + if (instr->representation().IsSmiOrInteger32()) { + DCHECK(instr->left()->representation().Equals(instr->representation())); + DCHECK(instr->right()->representation().Equals(instr->representation())); + left = UseRegisterAtStart(instr->BetterLeftOperand()); + right = UseOrConstantAtStart(instr->BetterRightOperand()); + } else { + DCHECK(instr->representation().IsDouble()); + DCHECK(instr->left()->representation().IsDouble()); + DCHECK(instr->right()->representation().IsDouble()); + left = UseRegisterAtStart(instr->left()); + right = UseRegisterAtStart(instr->right()); + } + LMathMinMax* minmax = new(zone()) LMathMinMax(left, right); + return DefineSameAsFirst(minmax); +} + + +LInstruction* LChunkBuilder::DoPower(HPower* instr) { + // Crankshaft is turned off for nosse2. + UNREACHABLE(); + return NULL; +} + + +LInstruction* LChunkBuilder::DoCompareGeneric(HCompareGeneric* instr) { + DCHECK(instr->left()->representation().IsSmiOrTagged()); + DCHECK(instr->right()->representation().IsSmiOrTagged()); + LOperand* context = UseFixed(instr->context(), esi); + LOperand* left = UseFixed(instr->left(), edx); + LOperand* right = UseFixed(instr->right(), eax); + LCmpT* result = new(zone()) LCmpT(context, left, right); + return MarkAsCall(DefineFixed(result, eax), instr); +} + + +LInstruction* LChunkBuilder::DoCompareNumericAndBranch( + HCompareNumericAndBranch* instr) { + Representation r = instr->representation(); + if (r.IsSmiOrInteger32()) { + DCHECK(instr->left()->representation().Equals(r)); + DCHECK(instr->right()->representation().Equals(r)); + LOperand* left = UseRegisterOrConstantAtStart(instr->left()); + LOperand* right = UseOrConstantAtStart(instr->right()); + return new(zone()) LCompareNumericAndBranch(left, right); + } else { + DCHECK(r.IsDouble()); + DCHECK(instr->left()->representation().IsDouble()); + DCHECK(instr->right()->representation().IsDouble()); + LOperand* left; + LOperand* right; + if (CanBeImmediateConstant(instr->left()) && + CanBeImmediateConstant(instr->right())) { + // The code generator requires either both inputs to be constant + // operands, or neither. 
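+      // With only one constant, the allocator could hand the code generator
+      // an immediate on one side and a register on the other, a combination
+      // the comparison code generator does not expect.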
+ left = UseConstant(instr->left()); + right = UseConstant(instr->right()); + } else { + left = UseRegisterAtStart(instr->left()); + right = UseRegisterAtStart(instr->right()); + } + return new(zone()) LCompareNumericAndBranch(left, right); + } +} + + +LInstruction* LChunkBuilder::DoCompareObjectEqAndBranch( + HCompareObjectEqAndBranch* instr) { + LOperand* left = UseRegisterAtStart(instr->left()); + LOperand* right = UseOrConstantAtStart(instr->right()); + return new(zone()) LCmpObjectEqAndBranch(left, right); +} + + +LInstruction* LChunkBuilder::DoCompareHoleAndBranch( + HCompareHoleAndBranch* instr) { + LOperand* value = UseRegisterAtStart(instr->value()); + return new(zone()) LCmpHoleAndBranch(value); +} + + +LInstruction* LChunkBuilder::DoCompareMinusZeroAndBranch( + HCompareMinusZeroAndBranch* instr) { + LOperand* value = UseRegister(instr->value()); + LOperand* scratch = TempRegister(); + return new(zone()) LCompareMinusZeroAndBranch(value, scratch); +} + + +LInstruction* LChunkBuilder::DoIsObjectAndBranch(HIsObjectAndBranch* instr) { + DCHECK(instr->value()->representation().IsSmiOrTagged()); + LOperand* temp = TempRegister(); + return new(zone()) LIsObjectAndBranch(UseRegister(instr->value()), temp); +} + + +LInstruction* LChunkBuilder::DoIsStringAndBranch(HIsStringAndBranch* instr) { + DCHECK(instr->value()->representation().IsTagged()); + LOperand* temp = TempRegister(); + return new(zone()) LIsStringAndBranch(UseRegister(instr->value()), temp); +} + + +LInstruction* LChunkBuilder::DoIsSmiAndBranch(HIsSmiAndBranch* instr) { + DCHECK(instr->value()->representation().IsTagged()); + return new(zone()) LIsSmiAndBranch(Use(instr->value())); +} + + +LInstruction* LChunkBuilder::DoIsUndetectableAndBranch( + HIsUndetectableAndBranch* instr) { + DCHECK(instr->value()->representation().IsTagged()); + return new(zone()) LIsUndetectableAndBranch( + UseRegisterAtStart(instr->value()), TempRegister()); +} + + +LInstruction* LChunkBuilder::DoStringCompareAndBranch( + HStringCompareAndBranch* instr) { + DCHECK(instr->left()->representation().IsTagged()); + DCHECK(instr->right()->representation().IsTagged()); + LOperand* context = UseFixed(instr->context(), esi); + LOperand* left = UseFixed(instr->left(), edx); + LOperand* right = UseFixed(instr->right(), eax); + + LStringCompareAndBranch* result = new(zone()) + LStringCompareAndBranch(context, left, right); + + return MarkAsCall(result, instr); +} + + +LInstruction* LChunkBuilder::DoHasInstanceTypeAndBranch( + HHasInstanceTypeAndBranch* instr) { + DCHECK(instr->value()->representation().IsTagged()); + return new(zone()) LHasInstanceTypeAndBranch( + UseRegisterAtStart(instr->value()), + TempRegister()); +} + + +LInstruction* LChunkBuilder::DoGetCachedArrayIndex( + HGetCachedArrayIndex* instr) { + DCHECK(instr->value()->representation().IsTagged()); + LOperand* value = UseRegisterAtStart(instr->value()); + + return DefineAsRegister(new(zone()) LGetCachedArrayIndex(value)); +} + + +LInstruction* LChunkBuilder::DoHasCachedArrayIndexAndBranch( + HHasCachedArrayIndexAndBranch* instr) { + DCHECK(instr->value()->representation().IsTagged()); + return new(zone()) LHasCachedArrayIndexAndBranch( + UseRegisterAtStart(instr->value())); +} + + +LInstruction* LChunkBuilder::DoClassOfTestAndBranch( + HClassOfTestAndBranch* instr) { + DCHECK(instr->value()->representation().IsTagged()); + return new(zone()) LClassOfTestAndBranch(UseRegister(instr->value()), + TempRegister(), + TempRegister()); +} + + +LInstruction* LChunkBuilder::DoMapEnumLength(HMapEnumLength* 
instr) { + LOperand* map = UseRegisterAtStart(instr->value()); + return DefineAsRegister(new(zone()) LMapEnumLength(map)); +} + + +LInstruction* LChunkBuilder::DoDateField(HDateField* instr) { + LOperand* date = UseFixed(instr->value(), eax); + LDateField* result = + new(zone()) LDateField(date, FixedTemp(ecx), instr->index()); + return MarkAsCall(DefineFixed(result, eax), instr, CAN_DEOPTIMIZE_EAGERLY); +} + + +LInstruction* LChunkBuilder::DoSeqStringGetChar(HSeqStringGetChar* instr) { + LOperand* string = UseRegisterAtStart(instr->string()); + LOperand* index = UseRegisterOrConstantAtStart(instr->index()); + return DefineAsRegister(new(zone()) LSeqStringGetChar(string, index)); +} + + +LOperand* LChunkBuilder::GetSeqStringSetCharOperand(HSeqStringSetChar* instr) { + if (instr->encoding() == String::ONE_BYTE_ENCODING) { + if (FLAG_debug_code) { + return UseFixed(instr->value(), eax); + } else { + return UseFixedOrConstant(instr->value(), eax); + } + } else { + if (FLAG_debug_code) { + return UseRegisterAtStart(instr->value()); + } else { + return UseRegisterOrConstantAtStart(instr->value()); + } + } +} + + +LInstruction* LChunkBuilder::DoSeqStringSetChar(HSeqStringSetChar* instr) { + LOperand* string = UseRegisterAtStart(instr->string()); + LOperand* index = FLAG_debug_code + ? UseRegisterAtStart(instr->index()) + : UseRegisterOrConstantAtStart(instr->index()); + LOperand* value = GetSeqStringSetCharOperand(instr); + LOperand* context = FLAG_debug_code ? UseFixed(instr->context(), esi) : NULL; + LInstruction* result = new(zone()) LSeqStringSetChar(context, string, + index, value); + if (FLAG_debug_code) { + result = MarkAsCall(result, instr); + } + return result; +} + + +LInstruction* LChunkBuilder::DoBoundsCheck(HBoundsCheck* instr) { + if (!FLAG_debug_code && instr->skip_check()) return NULL; + LOperand* index = UseRegisterOrConstantAtStart(instr->index()); + LOperand* length = !index->IsConstantOperand() + ? UseOrConstantAtStart(instr->length()) + : UseAtStart(instr->length()); + LInstruction* result = new(zone()) LBoundsCheck(index, length); + if (!FLAG_debug_code || !instr->skip_check()) { + result = AssignEnvironment(result); + } + return result; +} + + +LInstruction* LChunkBuilder::DoBoundsCheckBaseIndexInformation( + HBoundsCheckBaseIndexInformation* instr) { + UNREACHABLE(); + return NULL; +} + + +LInstruction* LChunkBuilder::DoAbnormalExit(HAbnormalExit* instr) { + // The control instruction marking the end of a block that completed + // abruptly (e.g., threw an exception). There is nothing specific to do. + return NULL; +} + + +LInstruction* LChunkBuilder::DoUseConst(HUseConst* instr) { + return NULL; +} + + +LInstruction* LChunkBuilder::DoForceRepresentation(HForceRepresentation* bad) { + // All HForceRepresentation instructions should be eliminated in the + // representation change phase of Hydrogen. 
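+  // If one survives to instruction selection, the Hydrogen pipeline is
+  // broken, so fail hard rather than emit wrong code.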
+ UNREACHABLE(); + return NULL; +} + + +LInstruction* LChunkBuilder::DoChange(HChange* instr) { + Representation from = instr->from(); + Representation to = instr->to(); + HValue* val = instr->value(); + if (from.IsSmi()) { + if (to.IsTagged()) { + LOperand* value = UseRegister(val); + return DefineSameAsFirst(new(zone()) LDummyUse(value)); + } + from = Representation::Tagged(); + } + if (from.IsTagged()) { + if (to.IsDouble()) { + LOperand* value = UseRegister(val); + LOperand* temp = TempRegister(); + LInstruction* result = + DefineAsRegister(new(zone()) LNumberUntagD(value, temp)); + if (!val->representation().IsSmi()) result = AssignEnvironment(result); + return result; + } else if (to.IsSmi()) { + LOperand* value = UseRegister(val); + if (val->type().IsSmi()) { + return DefineSameAsFirst(new(zone()) LDummyUse(value)); + } + return AssignEnvironment(DefineSameAsFirst(new(zone()) LCheckSmi(value))); + } else { + DCHECK(to.IsInteger32()); + if (val->type().IsSmi() || val->representation().IsSmi()) { + LOperand* value = UseRegister(val); + return DefineSameAsFirst(new(zone()) LSmiUntag(value, false)); + } else { + LOperand* value = UseRegister(val); + LInstruction* result = DefineSameAsFirst(new(zone()) LTaggedToI(value)); + if (!val->representation().IsSmi()) result = AssignEnvironment(result); + return result; + } + } + } else if (from.IsDouble()) { + if (to.IsTagged()) { + info()->MarkAsDeferredCalling(); + LOperand* value = UseRegisterAtStart(val); + LOperand* temp = FLAG_inline_new ? TempRegister() : NULL; + LUnallocated* result_temp = TempRegister(); + LNumberTagD* result = new(zone()) LNumberTagD(value, temp); + return AssignPointerMap(Define(result, result_temp)); + } else if (to.IsSmi()) { + LOperand* value = UseRegister(val); + return AssignEnvironment( + DefineAsRegister(new(zone()) LDoubleToSmi(value))); + } else { + DCHECK(to.IsInteger32()); + bool truncating = instr->CanTruncateToInt32(); + LOperand* value = UseRegister(val); + LInstruction* result = DefineAsRegister(new(zone()) LDoubleToI(value)); + if (!truncating) result = AssignEnvironment(result); + return result; + } + } else if (from.IsInteger32()) { + info()->MarkAsDeferredCalling(); + if (to.IsTagged()) { + if (!instr->CheckFlag(HValue::kCanOverflow)) { + LOperand* value = UseRegister(val); + return DefineSameAsFirst(new(zone()) LSmiTag(value)); + } else if (val->CheckFlag(HInstruction::kUint32)) { + LOperand* value = UseRegister(val); + LOperand* temp = TempRegister(); + LNumberTagU* result = new(zone()) LNumberTagU(value, temp); + return AssignPointerMap(DefineSameAsFirst(result)); + } else { + LOperand* value = UseRegister(val); + LOperand* temp = TempRegister(); + LNumberTagI* result = new(zone()) LNumberTagI(value, temp); + return AssignPointerMap(DefineSameAsFirst(result)); + } + } else if (to.IsSmi()) { + LOperand* value = UseRegister(val); + LInstruction* result = DefineSameAsFirst(new(zone()) LSmiTag(value)); + if (instr->CheckFlag(HValue::kCanOverflow)) { + result = AssignEnvironment(result); + } + return result; + } else { + DCHECK(to.IsDouble()); + if (val->CheckFlag(HInstruction::kUint32)) { + return DefineAsRegister(new(zone()) LUint32ToDouble(UseRegister(val))); + } else { + return DefineAsRegister(new(zone()) LInteger32ToDouble(Use(val))); + } + } + } + UNREACHABLE(); + return NULL; +} + + +LInstruction* LChunkBuilder::DoCheckHeapObject(HCheckHeapObject* instr) { + LOperand* value = UseAtStart(instr->value()); + LInstruction* result = new(zone()) LCheckNonSmi(value); + if 
(!instr->value()->type().IsHeapObject()) { + result = AssignEnvironment(result); + } + return result; +} + + +LInstruction* LChunkBuilder::DoCheckSmi(HCheckSmi* instr) { + LOperand* value = UseRegisterAtStart(instr->value()); + return AssignEnvironment(new(zone()) LCheckSmi(value)); +} + + +LInstruction* LChunkBuilder::DoCheckInstanceType(HCheckInstanceType* instr) { + LOperand* value = UseRegisterAtStart(instr->value()); + LOperand* temp = TempRegister(); + LCheckInstanceType* result = new(zone()) LCheckInstanceType(value, temp); + return AssignEnvironment(result); +} + + +LInstruction* LChunkBuilder::DoCheckValue(HCheckValue* instr) { + // If the object is in new space, we'll emit a global cell compare and so + // want the value in a register. If the object gets promoted before we + // emit code, we will still get the register but will do an immediate + // compare instead of the cell compare. This is safe. + LOperand* value = instr->object_in_new_space() + ? UseRegisterAtStart(instr->value()) : UseAtStart(instr->value()); + return AssignEnvironment(new(zone()) LCheckValue(value)); +} + + +LInstruction* LChunkBuilder::DoCheckMaps(HCheckMaps* instr) { + if (instr->IsStabilityCheck()) return new(zone()) LCheckMaps; + LOperand* value = UseRegisterAtStart(instr->value()); + LInstruction* result = AssignEnvironment(new(zone()) LCheckMaps(value)); + if (instr->HasMigrationTarget()) { + info()->MarkAsDeferredCalling(); + result = AssignPointerMap(result); + } + return result; +} + + +LInstruction* LChunkBuilder::DoClampToUint8(HClampToUint8* instr) { + HValue* value = instr->value(); + Representation input_rep = value->representation(); + if (input_rep.IsDouble()) { + UNREACHABLE(); + return NULL; + } else if (input_rep.IsInteger32()) { + LOperand* reg = UseFixed(value, eax); + return DefineFixed(new(zone()) LClampIToUint8(reg), eax); + } else { + DCHECK(input_rep.IsSmiOrTagged()); + LOperand* value = UseRegister(instr->value()); + LClampTToUint8NoSSE2* res = + new(zone()) LClampTToUint8NoSSE2(value, TempRegister(), + TempRegister(), TempRegister()); + return AssignEnvironment(DefineFixed(res, ecx)); + } +} + + +LInstruction* LChunkBuilder::DoDoubleBits(HDoubleBits* instr) { + HValue* value = instr->value(); + DCHECK(value->representation().IsDouble()); + return DefineAsRegister(new(zone()) LDoubleBits(UseRegister(value))); +} + + +LInstruction* LChunkBuilder::DoConstructDouble(HConstructDouble* instr) { + LOperand* lo = UseRegister(instr->lo()); + LOperand* hi = UseRegister(instr->hi()); + return DefineAsRegister(new(zone()) LConstructDouble(hi, lo)); +} + + +LInstruction* LChunkBuilder::DoReturn(HReturn* instr) { + LOperand* context = info()->IsStub() ? UseFixed(instr->context(), esi) : NULL; + LOperand* parameter_count = UseRegisterOrConstant(instr->parameter_count()); + return new(zone()) LReturn( + UseFixed(instr->value(), eax), context, parameter_count); +} + + +LInstruction* LChunkBuilder::DoConstant(HConstant* instr) { + Representation r = instr->representation(); + if (r.IsSmi()) { + return DefineAsRegister(new(zone()) LConstantS); + } else if (r.IsInteger32()) { + return DefineAsRegister(new(zone()) LConstantI); + } else if (r.IsDouble()) { + double value = instr->DoubleValue(); + bool value_is_zero = BitCast<uint64_t, double>(value) == 0; + LOperand* temp = value_is_zero ? 
NULL : TempRegister(); + return DefineAsRegister(new(zone()) LConstantD(temp)); + } else if (r.IsExternal()) { + return DefineAsRegister(new(zone()) LConstantE); + } else if (r.IsTagged()) { + return DefineAsRegister(new(zone()) LConstantT); + } else { + UNREACHABLE(); + return NULL; + } +} + + +LInstruction* LChunkBuilder::DoLoadGlobalCell(HLoadGlobalCell* instr) { + LLoadGlobalCell* result = new(zone()) LLoadGlobalCell; + return instr->RequiresHoleCheck() + ? AssignEnvironment(DefineAsRegister(result)) + : DefineAsRegister(result); +} + + +LInstruction* LChunkBuilder::DoLoadGlobalGeneric(HLoadGlobalGeneric* instr) { + LOperand* context = UseFixed(instr->context(), esi); + LOperand* global_object = UseFixed(instr->global_object(), + LoadIC::ReceiverRegister()); + LOperand* vector = NULL; + if (FLAG_vector_ics) { + vector = FixedTemp(LoadIC::VectorRegister()); + } + + LLoadGlobalGeneric* result = + new(zone()) LLoadGlobalGeneric(context, global_object, vector); + return MarkAsCall(DefineFixed(result, eax), instr); +} + + +LInstruction* LChunkBuilder::DoStoreGlobalCell(HStoreGlobalCell* instr) { + LStoreGlobalCell* result = + new(zone()) LStoreGlobalCell(UseRegister(instr->value())); + return instr->RequiresHoleCheck() ? AssignEnvironment(result) : result; +} + + +LInstruction* LChunkBuilder::DoLoadContextSlot(HLoadContextSlot* instr) { + LOperand* context = UseRegisterAtStart(instr->value()); + LInstruction* result = + DefineAsRegister(new(zone()) LLoadContextSlot(context)); + if (instr->RequiresHoleCheck() && instr->DeoptimizesOnHole()) { + result = AssignEnvironment(result); + } + return result; +} + + +LInstruction* LChunkBuilder::DoStoreContextSlot(HStoreContextSlot* instr) { + LOperand* value; + LOperand* temp; + LOperand* context = UseRegister(instr->context()); + if (instr->NeedsWriteBarrier()) { + value = UseTempRegister(instr->value()); + temp = TempRegister(); + } else { + value = UseRegister(instr->value()); + temp = NULL; + } + LInstruction* result = new(zone()) LStoreContextSlot(context, value, temp); + if (instr->RequiresHoleCheck() && instr->DeoptimizesOnHole()) { + result = AssignEnvironment(result); + } + return result; +} + + +LInstruction* LChunkBuilder::DoLoadNamedField(HLoadNamedField* instr) { + LOperand* obj = (instr->access().IsExternalMemory() && + instr->access().offset() == 0) + ? 
UseRegisterOrConstantAtStart(instr->object()) + : UseRegisterAtStart(instr->object()); + return DefineAsRegister(new(zone()) LLoadNamedField(obj)); +} + + +LInstruction* LChunkBuilder::DoLoadNamedGeneric(HLoadNamedGeneric* instr) { + LOperand* context = UseFixed(instr->context(), esi); + LOperand* object = UseFixed(instr->object(), LoadIC::ReceiverRegister()); + LOperand* vector = NULL; + if (FLAG_vector_ics) { + vector = FixedTemp(LoadIC::VectorRegister()); + } + LLoadNamedGeneric* result = new(zone()) LLoadNamedGeneric( + context, object, vector); + return MarkAsCall(DefineFixed(result, eax), instr); +} + + +LInstruction* LChunkBuilder::DoLoadFunctionPrototype( + HLoadFunctionPrototype* instr) { + return AssignEnvironment(DefineAsRegister( + new(zone()) LLoadFunctionPrototype(UseRegister(instr->function()), + TempRegister()))); +} + + +LInstruction* LChunkBuilder::DoLoadRoot(HLoadRoot* instr) { + return DefineAsRegister(new(zone()) LLoadRoot); +} + + +LInstruction* LChunkBuilder::DoLoadKeyed(HLoadKeyed* instr) { + DCHECK(instr->key()->representation().IsSmiOrInteger32()); + ElementsKind elements_kind = instr->elements_kind(); + bool clobbers_key = ExternalArrayOpRequiresTemp( + instr->key()->representation(), elements_kind); + LOperand* key = clobbers_key + ? UseTempRegister(instr->key()) + : UseRegisterOrConstantAtStart(instr->key()); + LInstruction* result = NULL; + + if (!instr->is_typed_elements()) { + LOperand* obj = UseRegisterAtStart(instr->elements()); + result = DefineAsRegister(new(zone()) LLoadKeyed(obj, key)); + } else { + DCHECK( + (instr->representation().IsInteger32() && + !(IsDoubleOrFloatElementsKind(instr->elements_kind()))) || + (instr->representation().IsDouble() && + (IsDoubleOrFloatElementsKind(instr->elements_kind())))); + LOperand* backing_store = UseRegister(instr->elements()); + result = DefineAsRegister(new(zone()) LLoadKeyed(backing_store, key)); + } + + if ((instr->is_external() || instr->is_fixed_typed_array()) ? + // see LCodeGen::DoLoadKeyedExternalArray + ((instr->elements_kind() == EXTERNAL_UINT32_ELEMENTS || + instr->elements_kind() == UINT32_ELEMENTS) && + !instr->CheckFlag(HInstruction::kUint32)) : + // see LCodeGen::DoLoadKeyedFixedDoubleArray and + // LCodeGen::DoLoadKeyedFixedArray + instr->RequiresHoleCheck()) { + result = AssignEnvironment(result); + } + return result; +} + + +LInstruction* LChunkBuilder::DoLoadKeyedGeneric(HLoadKeyedGeneric* instr) { + LOperand* context = UseFixed(instr->context(), esi); + LOperand* object = UseFixed(instr->object(), LoadIC::ReceiverRegister()); + LOperand* key = UseFixed(instr->key(), LoadIC::NameRegister()); + LOperand* vector = NULL; + if (FLAG_vector_ics) { + vector = FixedTemp(LoadIC::VectorRegister()); + } + LLoadKeyedGeneric* result = + new(zone()) LLoadKeyedGeneric(context, object, key, vector); + return MarkAsCall(DefineFixed(result, eax), instr); +} + + +LOperand* LChunkBuilder::GetStoreKeyedValueOperand(HStoreKeyed* instr) { + ElementsKind elements_kind = instr->elements_kind(); + + // Determine if we need a byte register in this case for the value. 
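+  // On ia32/x87 only eax, ebx, ecx and edx have byte-addressable low halves
+  // (al, bl, cl, dl), and 8-bit element stores are emitted as byte moves,
+  // so such values are pinned to eax below.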
+  bool val_is_fixed_register =
+      elements_kind == EXTERNAL_INT8_ELEMENTS ||
+      elements_kind == EXTERNAL_UINT8_ELEMENTS ||
+      elements_kind == EXTERNAL_UINT8_CLAMPED_ELEMENTS ||
+      elements_kind == UINT8_ELEMENTS ||
+      elements_kind == INT8_ELEMENTS ||
+      elements_kind == UINT8_CLAMPED_ELEMENTS;
+  if (val_is_fixed_register) {
+    return UseFixed(instr->value(), eax);
+  }
+
+  if (IsDoubleOrFloatElementsKind(elements_kind)) {
+    return UseRegisterAtStart(instr->value());
+  }
+
+  return UseRegister(instr->value());
+}
+
+
+LInstruction* LChunkBuilder::DoStoreKeyed(HStoreKeyed* instr) {
+  if (!instr->is_typed_elements()) {
+    DCHECK(instr->elements()->representation().IsTagged());
+    DCHECK(instr->key()->representation().IsInteger32() ||
+           instr->key()->representation().IsSmi());
+
+    if (instr->value()->representation().IsDouble()) {
+      LOperand* object = UseRegisterAtStart(instr->elements());
+      LOperand* val = UseRegisterAtStart(instr->value());
+      LOperand* key = UseRegisterOrConstantAtStart(instr->key());
+      return new(zone()) LStoreKeyed(object, key, val);
+    } else {
+      DCHECK(instr->value()->representation().IsSmiOrTagged());
+      bool needs_write_barrier = instr->NeedsWriteBarrier();
+
+      LOperand* obj = UseRegister(instr->elements());
+      LOperand* val;
+      LOperand* key;
+      if (needs_write_barrier) {
+        val = UseTempRegister(instr->value());
+        key = UseTempRegister(instr->key());
+      } else {
+        val = UseRegisterOrConstantAtStart(instr->value());
+        key = UseRegisterOrConstantAtStart(instr->key());
+      }
+      return new(zone()) LStoreKeyed(obj, key, val);
+    }
+  }
+
+  ElementsKind elements_kind = instr->elements_kind();
+  DCHECK(
+      (instr->value()->representation().IsInteger32() &&
+       !IsDoubleOrFloatElementsKind(elements_kind)) ||
+      (instr->value()->representation().IsDouble() &&
+       IsDoubleOrFloatElementsKind(elements_kind)));
+  DCHECK((instr->is_fixed_typed_array() &&
+          instr->elements()->representation().IsTagged()) ||
+         (instr->is_external() &&
+          instr->elements()->representation().IsExternal()));
+
+  LOperand* backing_store = UseRegister(instr->elements());
+  LOperand* val = GetStoreKeyedValueOperand(instr);
+  bool clobbers_key = ExternalArrayOpRequiresTemp(
+      instr->key()->representation(), elements_kind);
+  LOperand* key = clobbers_key
+      ? 
UseTempRegister(instr->key()) + : UseRegisterOrConstantAtStart(instr->key()); + return new(zone()) LStoreKeyed(backing_store, key, val); +} + + +LInstruction* LChunkBuilder::DoStoreKeyedGeneric(HStoreKeyedGeneric* instr) { + LOperand* context = UseFixed(instr->context(), esi); + LOperand* object = UseFixed(instr->object(), + KeyedStoreIC::ReceiverRegister()); + LOperand* key = UseFixed(instr->key(), KeyedStoreIC::NameRegister()); + LOperand* value = UseFixed(instr->value(), KeyedStoreIC::ValueRegister()); + + DCHECK(instr->object()->representation().IsTagged()); + DCHECK(instr->key()->representation().IsTagged()); + DCHECK(instr->value()->representation().IsTagged()); + + LStoreKeyedGeneric* result = + new(zone()) LStoreKeyedGeneric(context, object, key, value); + return MarkAsCall(result, instr); +} + + +LInstruction* LChunkBuilder::DoTransitionElementsKind( + HTransitionElementsKind* instr) { + if (IsSimpleMapChangeTransition(instr->from_kind(), instr->to_kind())) { + LOperand* object = UseRegister(instr->object()); + LOperand* new_map_reg = TempRegister(); + LOperand* temp_reg = TempRegister(); + LTransitionElementsKind* result = + new(zone()) LTransitionElementsKind(object, NULL, + new_map_reg, temp_reg); + return result; + } else { + LOperand* object = UseFixed(instr->object(), eax); + LOperand* context = UseFixed(instr->context(), esi); + LTransitionElementsKind* result = + new(zone()) LTransitionElementsKind(object, context, NULL, NULL); + return MarkAsCall(result, instr); + } +} + + +LInstruction* LChunkBuilder::DoTrapAllocationMemento( + HTrapAllocationMemento* instr) { + LOperand* object = UseRegister(instr->object()); + LOperand* temp = TempRegister(); + LTrapAllocationMemento* result = + new(zone()) LTrapAllocationMemento(object, temp); + return AssignEnvironment(result); +} + + +LInstruction* LChunkBuilder::DoStoreNamedField(HStoreNamedField* instr) { + bool is_in_object = instr->access().IsInobject(); + bool is_external_location = instr->access().IsExternalMemory() && + instr->access().offset() == 0; + bool needs_write_barrier = instr->NeedsWriteBarrier(); + bool needs_write_barrier_for_map = instr->has_transition() && + instr->NeedsWriteBarrierForMap(); + + LOperand* obj; + if (needs_write_barrier) { + obj = is_in_object + ? UseRegister(instr->object()) + : UseTempRegister(instr->object()); + } else if (is_external_location) { + DCHECK(!is_in_object); + DCHECK(!needs_write_barrier); + DCHECK(!needs_write_barrier_for_map); + obj = UseRegisterOrConstant(instr->object()); + } else { + obj = needs_write_barrier_for_map + ? UseRegister(instr->object()) + : UseRegisterAtStart(instr->object()); + } + + bool can_be_constant = instr->value()->IsConstant() && + HConstant::cast(instr->value())->NotInNewSpace() && + !instr->field_representation().IsDouble(); + + LOperand* val; + if (instr->field_representation().IsInteger8() || + instr->field_representation().IsUInteger8()) { + // mov_b requires a byte register (i.e. any of eax, ebx, ecx, edx). + // Just force the value to be in eax and we're safe here. 
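+    // (Pinning one fixed register here is presumably simpler than teaching
+    // the allocator a dedicated byte-register policy for this rare case.)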
+ val = UseFixed(instr->value(), eax); + } else if (needs_write_barrier) { + val = UseTempRegister(instr->value()); + } else if (can_be_constant) { + val = UseRegisterOrConstant(instr->value()); + } else if (instr->field_representation().IsSmi()) { + val = UseTempRegister(instr->value()); + } else if (instr->field_representation().IsDouble()) { + val = UseRegisterAtStart(instr->value()); + } else { + val = UseRegister(instr->value()); + } + + // We only need a scratch register if we have a write barrier or we + // have a store into the properties array (not in-object-property). + LOperand* temp = (!is_in_object || needs_write_barrier || + needs_write_barrier_for_map) ? TempRegister() : NULL; + + // We need a temporary register for write barrier of the map field. + LOperand* temp_map = needs_write_barrier_for_map ? TempRegister() : NULL; + + return new(zone()) LStoreNamedField(obj, val, temp, temp_map); +} + + +LInstruction* LChunkBuilder::DoStoreNamedGeneric(HStoreNamedGeneric* instr) { + LOperand* context = UseFixed(instr->context(), esi); + LOperand* object = UseFixed(instr->object(), StoreIC::ReceiverRegister()); + LOperand* value = UseFixed(instr->value(), StoreIC::ValueRegister()); + + LStoreNamedGeneric* result = + new(zone()) LStoreNamedGeneric(context, object, value); + return MarkAsCall(result, instr); +} + + +LInstruction* LChunkBuilder::DoStringAdd(HStringAdd* instr) { + LOperand* context = UseFixed(instr->context(), esi); + LOperand* left = UseFixed(instr->left(), edx); + LOperand* right = UseFixed(instr->right(), eax); + LStringAdd* string_add = new(zone()) LStringAdd(context, left, right); + return MarkAsCall(DefineFixed(string_add, eax), instr); +} + + +LInstruction* LChunkBuilder::DoStringCharCodeAt(HStringCharCodeAt* instr) { + LOperand* string = UseTempRegister(instr->string()); + LOperand* index = UseTempRegister(instr->index()); + LOperand* context = UseAny(instr->context()); + LStringCharCodeAt* result = + new(zone()) LStringCharCodeAt(context, string, index); + return AssignPointerMap(DefineAsRegister(result)); +} + + +LInstruction* LChunkBuilder::DoStringCharFromCode(HStringCharFromCode* instr) { + LOperand* char_code = UseRegister(instr->value()); + LOperand* context = UseAny(instr->context()); + LStringCharFromCode* result = + new(zone()) LStringCharFromCode(context, char_code); + return AssignPointerMap(DefineAsRegister(result)); +} + + +LInstruction* LChunkBuilder::DoAllocate(HAllocate* instr) { + info()->MarkAsDeferredCalling(); + LOperand* context = UseAny(instr->context()); + LOperand* size = instr->size()->IsConstant() + ? 
UseConstant(instr->size()) + : UseTempRegister(instr->size()); + LOperand* temp = TempRegister(); + LAllocate* result = new(zone()) LAllocate(context, size, temp); + return AssignPointerMap(DefineAsRegister(result)); +} + + +LInstruction* LChunkBuilder::DoRegExpLiteral(HRegExpLiteral* instr) { + LOperand* context = UseFixed(instr->context(), esi); + return MarkAsCall( + DefineFixed(new(zone()) LRegExpLiteral(context), eax), instr); +} + + +LInstruction* LChunkBuilder::DoFunctionLiteral(HFunctionLiteral* instr) { + LOperand* context = UseFixed(instr->context(), esi); + return MarkAsCall( + DefineFixed(new(zone()) LFunctionLiteral(context), eax), instr); +} + + +LInstruction* LChunkBuilder::DoOsrEntry(HOsrEntry* instr) { + DCHECK(argument_count_ == 0); + allocator_->MarkAsOsrEntry(); + current_block_->last_environment()->set_ast_id(instr->ast_id()); + return AssignEnvironment(new(zone()) LOsrEntry); +} + + +LInstruction* LChunkBuilder::DoParameter(HParameter* instr) { + LParameter* result = new(zone()) LParameter; + if (instr->kind() == HParameter::STACK_PARAMETER) { + int spill_index = chunk()->GetParameterStackSlot(instr->index()); + return DefineAsSpilled(result, spill_index); + } else { + DCHECK(info()->IsStub()); + CodeStubInterfaceDescriptor* descriptor = + info()->code_stub()->GetInterfaceDescriptor(); + int index = static_cast<int>(instr->index()); + Register reg = descriptor->GetEnvironmentParameterRegister(index); + return DefineFixed(result, reg); + } +} + + +LInstruction* LChunkBuilder::DoUnknownOSRValue(HUnknownOSRValue* instr) { + // Use an index that corresponds to the location in the unoptimized frame, + // which the optimized frame will subsume. + int env_index = instr->index(); + int spill_index = 0; + if (instr->environment()->is_parameter_index(env_index)) { + spill_index = chunk()->GetParameterStackSlot(env_index); + } else { + spill_index = env_index - instr->environment()->first_local_index(); + if (spill_index > LUnallocated::kMaxFixedSlotIndex) { + Abort(kNotEnoughSpillSlotsForOsr); + spill_index = 0; + } + if (spill_index == 0) { + // The dynamic frame alignment state overwrites the first local. + // The first local is saved at the end of the unoptimized frame. + spill_index = graph()->osr()->UnoptimizedFrameSlots(); + } + } + return DefineAsSpilled(new(zone()) LUnknownOSRValue, spill_index); +} + + +LInstruction* LChunkBuilder::DoCallStub(HCallStub* instr) { + LOperand* context = UseFixed(instr->context(), esi); + LCallStub* result = new(zone()) LCallStub(context); + return MarkAsCall(DefineFixed(result, eax), instr); +} + + +LInstruction* LChunkBuilder::DoArgumentsObject(HArgumentsObject* instr) { + // There are no real uses of the arguments object. + // arguments.length and element access are supported directly on + // stack arguments, and any real arguments object use causes a bailout. + // So this value is never used. + return NULL; +} + + +LInstruction* LChunkBuilder::DoCapturedObject(HCapturedObject* instr) { + instr->ReplayEnvironment(current_block_->last_environment()); + + // There are no real uses of a captured object. 
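+  // Captured objects only feed deoptimization environments; replaying the
+  // environment above records them there, so no Lithium instruction needs
+  // to be emitted.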
+ return NULL; +} + + +LInstruction* LChunkBuilder::DoAccessArgumentsAt(HAccessArgumentsAt* instr) { + info()->MarkAsRequiresFrame(); + LOperand* args = UseRegister(instr->arguments()); + LOperand* length; + LOperand* index; + if (instr->length()->IsConstant() && instr->index()->IsConstant()) { + length = UseRegisterOrConstant(instr->length()); + index = UseOrConstant(instr->index()); + } else { + length = UseTempRegister(instr->length()); + index = Use(instr->index()); + } + return DefineAsRegister(new(zone()) LAccessArgumentsAt(args, length, index)); +} + + +LInstruction* LChunkBuilder::DoToFastProperties(HToFastProperties* instr) { + LOperand* object = UseFixed(instr->value(), eax); + LToFastProperties* result = new(zone()) LToFastProperties(object); + return MarkAsCall(DefineFixed(result, eax), instr); +} + + +LInstruction* LChunkBuilder::DoTypeof(HTypeof* instr) { + LOperand* context = UseFixed(instr->context(), esi); + LOperand* value = UseAtStart(instr->value()); + LTypeof* result = new(zone()) LTypeof(context, value); + return MarkAsCall(DefineFixed(result, eax), instr); +} + + +LInstruction* LChunkBuilder::DoTypeofIsAndBranch(HTypeofIsAndBranch* instr) { + return new(zone()) LTypeofIsAndBranch(UseTempRegister(instr->value())); +} + + +LInstruction* LChunkBuilder::DoIsConstructCallAndBranch( + HIsConstructCallAndBranch* instr) { + return new(zone()) LIsConstructCallAndBranch(TempRegister()); +} + + +LInstruction* LChunkBuilder::DoSimulate(HSimulate* instr) { + instr->ReplayEnvironment(current_block_->last_environment()); + return NULL; +} + + +LInstruction* LChunkBuilder::DoStackCheck(HStackCheck* instr) { + info()->MarkAsDeferredCalling(); + if (instr->is_function_entry()) { + LOperand* context = UseFixed(instr->context(), esi); + return MarkAsCall(new(zone()) LStackCheck(context), instr); + } else { + DCHECK(instr->is_backwards_branch()); + LOperand* context = UseAny(instr->context()); + return AssignEnvironment( + AssignPointerMap(new(zone()) LStackCheck(context))); + } +} + + +LInstruction* LChunkBuilder::DoEnterInlined(HEnterInlined* instr) { + HEnvironment* outer = current_block_->last_environment(); + outer->set_ast_id(instr->ReturnId()); + HConstant* undefined = graph()->GetConstantUndefined(); + HEnvironment* inner = outer->CopyForInlining(instr->closure(), + instr->arguments_count(), + instr->function(), + undefined, + instr->inlining_kind()); + // Only replay binding of arguments object if it wasn't removed from graph. 
+ if (instr->arguments_var() != NULL && instr->arguments_object()->IsLinked()) { + inner->Bind(instr->arguments_var(), instr->arguments_object()); + } + inner->set_entry(instr); + current_block_->UpdateEnvironment(inner); + chunk_->AddInlinedClosure(instr->closure()); + return NULL; +} + + +LInstruction* LChunkBuilder::DoLeaveInlined(HLeaveInlined* instr) { + LInstruction* pop = NULL; + + HEnvironment* env = current_block_->last_environment(); + + if (env->entry()->arguments_pushed()) { + int argument_count = env->arguments_environment()->parameter_count(); + pop = new(zone()) LDrop(argument_count); + DCHECK(instr->argument_delta() == -argument_count); + } + + HEnvironment* outer = current_block_->last_environment()-> + DiscardInlined(false); + current_block_->UpdateEnvironment(outer); + return pop; +} + + +LInstruction* LChunkBuilder::DoForInPrepareMap(HForInPrepareMap* instr) { + LOperand* context = UseFixed(instr->context(), esi); + LOperand* object = UseFixed(instr->enumerable(), eax); + LForInPrepareMap* result = new(zone()) LForInPrepareMap(context, object); + return MarkAsCall(DefineFixed(result, eax), instr, CAN_DEOPTIMIZE_EAGERLY); +} + + +LInstruction* LChunkBuilder::DoForInCacheArray(HForInCacheArray* instr) { + LOperand* map = UseRegister(instr->map()); + return AssignEnvironment(DefineAsRegister( + new(zone()) LForInCacheArray(map))); +} + + +LInstruction* LChunkBuilder::DoCheckMapValue(HCheckMapValue* instr) { + LOperand* value = UseRegisterAtStart(instr->value()); + LOperand* map = UseRegisterAtStart(instr->map()); + return AssignEnvironment(new(zone()) LCheckMapValue(value, map)); +} + + +LInstruction* LChunkBuilder::DoLoadFieldByIndex(HLoadFieldByIndex* instr) { + LOperand* object = UseRegister(instr->object()); + LOperand* index = UseTempRegister(instr->index()); + LLoadFieldByIndex* load = new(zone()) LLoadFieldByIndex(object, index); + LInstruction* result = DefineSameAsFirst(load); + return AssignPointerMap(result); +} + + +LInstruction* LChunkBuilder::DoStoreFrameContext(HStoreFrameContext* instr) { + LOperand* context = UseRegisterAtStart(instr->context()); + return new(zone()) LStoreFrameContext(context); +} + + +LInstruction* LChunkBuilder::DoAllocateBlockContext( + HAllocateBlockContext* instr) { + LOperand* context = UseFixed(instr->context(), esi); + LOperand* function = UseRegisterAtStart(instr->function()); + LAllocateBlockContext* result = + new(zone()) LAllocateBlockContext(context, function); + return MarkAsCall(DefineFixed(result, esi), instr); +} + + +} } // namespace v8::internal + +#endif // V8_TARGET_ARCH_X87 diff --git a/deps/v8/src/x87/lithium-x87.h b/deps/v8/src/x87/lithium-x87.h new file mode 100644 index 00000000000..56d7c640ffc --- /dev/null +++ b/deps/v8/src/x87/lithium-x87.h @@ -0,0 +1,2917 @@ +// Copyright 2012 the V8 project authors. All rights reserved. +// Use of this source code is governed by a BSD-style license that can be +// found in the LICENSE file. + +#ifndef V8_X87_LITHIUM_X87_H_ +#define V8_X87_LITHIUM_X87_H_ + +#include "src/hydrogen.h" +#include "src/lithium.h" +#include "src/lithium-allocator.h" +#include "src/safepoint-table.h" +#include "src/utils.h" + +namespace v8 { +namespace internal { + +namespace compiler { +class RCodeVisualizer; +} + +// Forward declarations. 
+class LCodeGen; + +#define LITHIUM_CONCRETE_INSTRUCTION_LIST(V) \ + V(AccessArgumentsAt) \ + V(AddI) \ + V(AllocateBlockContext) \ + V(Allocate) \ + V(ApplyArguments) \ + V(ArgumentsElements) \ + V(ArgumentsLength) \ + V(ArithmeticD) \ + V(ArithmeticT) \ + V(BitI) \ + V(BoundsCheck) \ + V(Branch) \ + V(CallJSFunction) \ + V(CallWithDescriptor) \ + V(CallFunction) \ + V(CallNew) \ + V(CallNewArray) \ + V(CallRuntime) \ + V(CallStub) \ + V(CheckInstanceType) \ + V(CheckMaps) \ + V(CheckMapValue) \ + V(CheckNonSmi) \ + V(CheckSmi) \ + V(CheckValue) \ + V(ClampDToUint8) \ + V(ClampIToUint8) \ + V(ClampTToUint8NoSSE2) \ + V(ClassOfTestAndBranch) \ + V(ClobberDoubles) \ + V(CompareMinusZeroAndBranch) \ + V(CompareNumericAndBranch) \ + V(CmpObjectEqAndBranch) \ + V(CmpHoleAndBranch) \ + V(CmpMapAndBranch) \ + V(CmpT) \ + V(ConstantD) \ + V(ConstantE) \ + V(ConstantI) \ + V(ConstantS) \ + V(ConstantT) \ + V(ConstructDouble) \ + V(Context) \ + V(DateField) \ + V(DebugBreak) \ + V(DeclareGlobals) \ + V(Deoptimize) \ + V(DivByConstI) \ + V(DivByPowerOf2I) \ + V(DivI) \ + V(DoubleBits) \ + V(DoubleToI) \ + V(DoubleToSmi) \ + V(Drop) \ + V(Dummy) \ + V(DummyUse) \ + V(FlooringDivByConstI) \ + V(FlooringDivByPowerOf2I) \ + V(FlooringDivI) \ + V(ForInCacheArray) \ + V(ForInPrepareMap) \ + V(FunctionLiteral) \ + V(GetCachedArrayIndex) \ + V(Goto) \ + V(HasCachedArrayIndexAndBranch) \ + V(HasInstanceTypeAndBranch) \ + V(InnerAllocatedObject) \ + V(InstanceOf) \ + V(InstanceOfKnownGlobal) \ + V(InstructionGap) \ + V(Integer32ToDouble) \ + V(InvokeFunction) \ + V(IsConstructCallAndBranch) \ + V(IsObjectAndBranch) \ + V(IsStringAndBranch) \ + V(IsSmiAndBranch) \ + V(IsUndetectableAndBranch) \ + V(Label) \ + V(LazyBailout) \ + V(LoadContextSlot) \ + V(LoadFieldByIndex) \ + V(LoadFunctionPrototype) \ + V(LoadGlobalCell) \ + V(LoadGlobalGeneric) \ + V(LoadKeyed) \ + V(LoadKeyedGeneric) \ + V(LoadNamedField) \ + V(LoadNamedGeneric) \ + V(LoadRoot) \ + V(MapEnumLength) \ + V(MathAbs) \ + V(MathClz32) \ + V(MathExp) \ + V(MathFloor) \ + V(MathFround) \ + V(MathLog) \ + V(MathMinMax) \ + V(MathPowHalf) \ + V(MathRound) \ + V(MathSqrt) \ + V(ModByConstI) \ + V(ModByPowerOf2I) \ + V(ModI) \ + V(MulI) \ + V(NumberTagD) \ + V(NumberTagI) \ + V(NumberTagU) \ + V(NumberUntagD) \ + V(OsrEntry) \ + V(Parameter) \ + V(Power) \ + V(PushArgument) \ + V(RegExpLiteral) \ + V(Return) \ + V(SeqStringGetChar) \ + V(SeqStringSetChar) \ + V(ShiftI) \ + V(SmiTag) \ + V(SmiUntag) \ + V(StackCheck) \ + V(StoreCodeEntry) \ + V(StoreContextSlot) \ + V(StoreFrameContext) \ + V(StoreGlobalCell) \ + V(StoreKeyed) \ + V(StoreKeyedGeneric) \ + V(StoreNamedField) \ + V(StoreNamedGeneric) \ + V(StringAdd) \ + V(StringCharCodeAt) \ + V(StringCharFromCode) \ + V(StringCompareAndBranch) \ + V(SubI) \ + V(TaggedToI) \ + V(ThisFunction) \ + V(ToFastProperties) \ + V(TransitionElementsKind) \ + V(TrapAllocationMemento) \ + V(Typeof) \ + V(TypeofIsAndBranch) \ + V(Uint32ToDouble) \ + V(UnknownOSRValue) \ + V(WrapReceiver) + + +#define DECLARE_CONCRETE_INSTRUCTION(type, mnemonic) \ + virtual Opcode opcode() const V8_FINAL V8_OVERRIDE { \ + return LInstruction::k##type; \ + } \ + virtual void CompileToNative(LCodeGen* generator) V8_FINAL V8_OVERRIDE; \ + virtual const char* Mnemonic() const V8_FINAL V8_OVERRIDE { \ + return mnemonic; \ + } \ + static L##type* cast(LInstruction* instr) { \ + DCHECK(instr->Is##type()); \ + return reinterpret_cast<L##type*>(instr); \ + } + + +#define DECLARE_HYDROGEN_ACCESSOR(type) \ + H##type* hydrogen() const { \ + 
return H##type::cast(hydrogen_value()); \ + } + + +class LInstruction : public ZoneObject { + public: + LInstruction() + : environment_(NULL), + hydrogen_value_(NULL), + bit_field_(IsCallBits::encode(false)) { + } + + virtual ~LInstruction() {} + + virtual void CompileToNative(LCodeGen* generator) = 0; + virtual const char* Mnemonic() const = 0; + virtual void PrintTo(StringStream* stream); + virtual void PrintDataTo(StringStream* stream); + virtual void PrintOutputOperandTo(StringStream* stream); + + enum Opcode { + // Declare a unique enum value for each instruction. +#define DECLARE_OPCODE(type) k##type, + LITHIUM_CONCRETE_INSTRUCTION_LIST(DECLARE_OPCODE) kAdapter, + kNumberOfInstructions +#undef DECLARE_OPCODE + }; + + virtual Opcode opcode() const = 0; + + // Declare non-virtual type testers for all leaf IR classes. +#define DECLARE_PREDICATE(type) \ + bool Is##type() const { return opcode() == k##type; } + LITHIUM_CONCRETE_INSTRUCTION_LIST(DECLARE_PREDICATE) +#undef DECLARE_PREDICATE + + // Declare virtual predicates for instructions that don't have + // an opcode. + virtual bool IsGap() const { return false; } + + virtual bool IsControl() const { return false; } + + // Try deleting this instruction if possible. + virtual bool TryDelete() { return false; } + + void set_environment(LEnvironment* env) { environment_ = env; } + LEnvironment* environment() const { return environment_; } + bool HasEnvironment() const { return environment_ != NULL; } + + void set_pointer_map(LPointerMap* p) { pointer_map_.set(p); } + LPointerMap* pointer_map() const { return pointer_map_.get(); } + bool HasPointerMap() const { return pointer_map_.is_set(); } + + void set_hydrogen_value(HValue* value) { hydrogen_value_ = value; } + HValue* hydrogen_value() const { return hydrogen_value_; } + + virtual void SetDeferredLazyDeoptimizationEnvironment(LEnvironment* env) { } + + void MarkAsCall() { bit_field_ = IsCallBits::update(bit_field_, true); } + bool IsCall() const { return IsCallBits::decode(bit_field_); } + + // Interface to the register allocator and iterators. + bool ClobbersTemps() const { return IsCall(); } + bool ClobbersRegisters() const { return IsCall(); } + virtual bool ClobbersDoubleRegisters(Isolate* isolate) const { + return IsCall() || + // We only have rudimentary X87Stack tracking, thus in general + // cannot handle phi-nodes. + (IsControl()); + } + + virtual bool HasResult() const = 0; + virtual LOperand* result() const = 0; + + bool HasDoubleRegisterResult(); + bool HasDoubleRegisterInput(); + bool IsDoubleInput(X87Register reg, LCodeGen* cgen); + + LOperand* FirstInput() { return InputAt(0); } + LOperand* Output() { return HasResult() ? result() : NULL; } + + virtual bool HasInterestingComment(LCodeGen* gen) const { return true; } + +#ifdef DEBUG + void VerifyCall(); +#endif + + virtual int InputCount() = 0; + virtual LOperand* InputAt(int i) = 0; + + private: + // Iterator support. + friend class InputIterator; + + friend class TempIterator; + virtual int TempCount() = 0; + virtual LOperand* TempAt(int i) = 0; + + class IsCallBits: public BitField<bool, 0, 1> {}; + + LEnvironment* environment_; + SetOncePointer<LPointerMap> pointer_map_; + HValue* hydrogen_value_; + int bit_field_; +}; + + +// R = number of result operands (0 or 1). +template<int R> +class LTemplateResultInstruction : public LInstruction { + public: + // Allow 0 or 1 output operands. 
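+  // (A Lithium instruction defines at most one result; results_ below is an
+  // EmbeddedContainer sized by R, so it is empty when R == 0.)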
+ STATIC_ASSERT(R == 0 || R == 1); + virtual bool HasResult() const V8_FINAL V8_OVERRIDE { + return R != 0 && result() != NULL; + } + void set_result(LOperand* operand) { results_[0] = operand; } + LOperand* result() const { return results_[0]; } + + protected: + EmbeddedContainer<LOperand*, R> results_; +}; + + +// R = number of result operands (0 or 1). +// I = number of input operands. +// T = number of temporary operands. +template<int R, int I, int T> +class LTemplateInstruction : public LTemplateResultInstruction<R> { + protected: + EmbeddedContainer<LOperand*, I> inputs_; + EmbeddedContainer<LOperand*, T> temps_; + + private: + // Iterator support. + virtual int InputCount() V8_FINAL V8_OVERRIDE { return I; } + virtual LOperand* InputAt(int i) V8_FINAL V8_OVERRIDE { return inputs_[i]; } + + virtual int TempCount() V8_FINAL V8_OVERRIDE { return T; } + virtual LOperand* TempAt(int i) V8_FINAL V8_OVERRIDE { return temps_[i]; } +}; + + +class LGap : public LTemplateInstruction<0, 0, 0> { + public: + explicit LGap(HBasicBlock* block) : block_(block) { + parallel_moves_[BEFORE] = NULL; + parallel_moves_[START] = NULL; + parallel_moves_[END] = NULL; + parallel_moves_[AFTER] = NULL; + } + + // Can't use the DECLARE-macro here because of sub-classes. + virtual bool IsGap() const V8_FINAL V8_OVERRIDE { return true; } + virtual void PrintDataTo(StringStream* stream) V8_OVERRIDE; + static LGap* cast(LInstruction* instr) { + DCHECK(instr->IsGap()); + return reinterpret_cast<LGap*>(instr); + } + + bool IsRedundant() const; + + HBasicBlock* block() const { return block_; } + + enum InnerPosition { + BEFORE, + START, + END, + AFTER, + FIRST_INNER_POSITION = BEFORE, + LAST_INNER_POSITION = AFTER + }; + + LParallelMove* GetOrCreateParallelMove(InnerPosition pos, Zone* zone) { + if (parallel_moves_[pos] == NULL) { + parallel_moves_[pos] = new(zone) LParallelMove(zone); + } + return parallel_moves_[pos]; + } + + LParallelMove* GetParallelMove(InnerPosition pos) { + return parallel_moves_[pos]; + } + + private: + LParallelMove* parallel_moves_[LAST_INNER_POSITION + 1]; + HBasicBlock* block_; +}; + + +class LInstructionGap V8_FINAL : public LGap { + public: + explicit LInstructionGap(HBasicBlock* block) : LGap(block) { } + + virtual bool HasInterestingComment(LCodeGen* gen) const V8_OVERRIDE { + return !IsRedundant(); + } + + DECLARE_CONCRETE_INSTRUCTION(InstructionGap, "gap") +}; + + +class LClobberDoubles V8_FINAL : public LTemplateInstruction<0, 0, 0> { + public: + explicit LClobberDoubles(Isolate* isolate) { } + + virtual bool ClobbersDoubleRegisters(Isolate* isolate) const V8_OVERRIDE { + return true; + } + + DECLARE_CONCRETE_INSTRUCTION(ClobberDoubles, "clobber-d") +}; + + +class LGoto V8_FINAL : public LTemplateInstruction<0, 0, 0> { + public: + explicit LGoto(HBasicBlock* block) : block_(block) { } + + virtual bool HasInterestingComment(LCodeGen* gen) const V8_OVERRIDE; + DECLARE_CONCRETE_INSTRUCTION(Goto, "goto") + virtual void PrintDataTo(StringStream* stream) V8_OVERRIDE; + virtual bool IsControl() const V8_OVERRIDE { return true; } + + int block_id() const { return block_->block_id(); } + virtual bool ClobbersDoubleRegisters(Isolate* isolate) const V8_OVERRIDE { + return false; + } + + bool jumps_to_join() const { return block_->predecessors()->length() > 1; } + + private: + HBasicBlock* block_; +}; + + +class LLazyBailout V8_FINAL : public LTemplateInstruction<0, 0, 0> { + public: + DECLARE_CONCRETE_INSTRUCTION(LazyBailout, "lazy-bailout") +}; + + +class LDummy V8_FINAL : public 
LTemplateInstruction<1, 0, 0> { + public: + LDummy() {} + DECLARE_CONCRETE_INSTRUCTION(Dummy, "dummy") +}; + + +class LDummyUse V8_FINAL : public LTemplateInstruction<1, 1, 0> { + public: + explicit LDummyUse(LOperand* value) { + inputs_[0] = value; + } + DECLARE_CONCRETE_INSTRUCTION(DummyUse, "dummy-use") +}; + + +class LDeoptimize V8_FINAL : public LTemplateInstruction<0, 0, 0> { + public: + virtual bool IsControl() const V8_OVERRIDE { return true; } + DECLARE_CONCRETE_INSTRUCTION(Deoptimize, "deoptimize") + DECLARE_HYDROGEN_ACCESSOR(Deoptimize) +}; + + +class LLabel V8_FINAL : public LGap { + public: + explicit LLabel(HBasicBlock* block) + : LGap(block), replacement_(NULL) { } + + virtual bool HasInterestingComment(LCodeGen* gen) const V8_OVERRIDE { + return false; + } + DECLARE_CONCRETE_INSTRUCTION(Label, "label") + + virtual void PrintDataTo(StringStream* stream) V8_OVERRIDE; + + int block_id() const { return block()->block_id(); } + bool is_loop_header() const { return block()->IsLoopHeader(); } + bool is_osr_entry() const { return block()->is_osr_entry(); } + Label* label() { return &label_; } + LLabel* replacement() const { return replacement_; } + void set_replacement(LLabel* label) { replacement_ = label; } + bool HasReplacement() const { return replacement_ != NULL; } + + private: + Label label_; + LLabel* replacement_; +}; + + +class LParameter V8_FINAL : public LTemplateInstruction<1, 0, 0> { + public: + virtual bool HasInterestingComment(LCodeGen* gen) const V8_OVERRIDE { + return false; + } + DECLARE_CONCRETE_INSTRUCTION(Parameter, "parameter") +}; + + +class LCallStub V8_FINAL : public LTemplateInstruction<1, 1, 0> { + public: + explicit LCallStub(LOperand* context) { + inputs_[0] = context; + } + + LOperand* context() { return inputs_[0]; } + + DECLARE_CONCRETE_INSTRUCTION(CallStub, "call-stub") + DECLARE_HYDROGEN_ACCESSOR(CallStub) +}; + + +class LUnknownOSRValue V8_FINAL : public LTemplateInstruction<1, 0, 0> { + public: + virtual bool HasInterestingComment(LCodeGen* gen) const V8_OVERRIDE { + return false; + } + DECLARE_CONCRETE_INSTRUCTION(UnknownOSRValue, "unknown-osr-value") +}; + + +template<int I, int T> +class LControlInstruction: public LTemplateInstruction<0, I, T> { + public: + LControlInstruction() : false_label_(NULL), true_label_(NULL) { } + + virtual bool IsControl() const V8_FINAL V8_OVERRIDE { return true; } + + int SuccessorCount() { return hydrogen()->SuccessorCount(); } + HBasicBlock* SuccessorAt(int i) { return hydrogen()->SuccessorAt(i); } + + int TrueDestination(LChunk* chunk) { + return chunk->LookupDestination(true_block_id()); + } + int FalseDestination(LChunk* chunk) { + return chunk->LookupDestination(false_block_id()); + } + + Label* TrueLabel(LChunk* chunk) { + if (true_label_ == NULL) { + true_label_ = chunk->GetAssemblyLabel(TrueDestination(chunk)); + } + return true_label_; + } + Label* FalseLabel(LChunk* chunk) { + if (false_label_ == NULL) { + false_label_ = chunk->GetAssemblyLabel(FalseDestination(chunk)); + } + return false_label_; + } + + protected: + int true_block_id() { return SuccessorAt(0)->block_id(); } + int false_block_id() { return SuccessorAt(1)->block_id(); } + + private: + HControlInstruction* hydrogen() { + return HControlInstruction::cast(this->hydrogen_value()); + } + + Label* false_label_; + Label* true_label_; +}; + + +class LWrapReceiver V8_FINAL : public LTemplateInstruction<1, 2, 1> { + public: + LWrapReceiver(LOperand* receiver, + LOperand* function, + LOperand* temp) { + inputs_[0] = receiver; + inputs_[1] = 
function; + temps_[0] = temp; + } + + LOperand* receiver() { return inputs_[0]; } + LOperand* function() { return inputs_[1]; } + LOperand* temp() { return temps_[0]; } + + DECLARE_CONCRETE_INSTRUCTION(WrapReceiver, "wrap-receiver") + DECLARE_HYDROGEN_ACCESSOR(WrapReceiver) +}; + + +class LApplyArguments V8_FINAL : public LTemplateInstruction<1, 4, 0> { + public: + LApplyArguments(LOperand* function, + LOperand* receiver, + LOperand* length, + LOperand* elements) { + inputs_[0] = function; + inputs_[1] = receiver; + inputs_[2] = length; + inputs_[3] = elements; + } + + LOperand* function() { return inputs_[0]; } + LOperand* receiver() { return inputs_[1]; } + LOperand* length() { return inputs_[2]; } + LOperand* elements() { return inputs_[3]; } + + DECLARE_CONCRETE_INSTRUCTION(ApplyArguments, "apply-arguments") +}; + + +class LAccessArgumentsAt V8_FINAL : public LTemplateInstruction<1, 3, 0> { + public: + LAccessArgumentsAt(LOperand* arguments, LOperand* length, LOperand* index) { + inputs_[0] = arguments; + inputs_[1] = length; + inputs_[2] = index; + } + + LOperand* arguments() { return inputs_[0]; } + LOperand* length() { return inputs_[1]; } + LOperand* index() { return inputs_[2]; } + + DECLARE_CONCRETE_INSTRUCTION(AccessArgumentsAt, "access-arguments-at") + + virtual void PrintDataTo(StringStream* stream) V8_OVERRIDE; +}; + + +class LArgumentsLength V8_FINAL : public LTemplateInstruction<1, 1, 0> { + public: + explicit LArgumentsLength(LOperand* elements) { + inputs_[0] = elements; + } + + LOperand* elements() { return inputs_[0]; } + + DECLARE_CONCRETE_INSTRUCTION(ArgumentsLength, "arguments-length") +}; + + +class LArgumentsElements V8_FINAL : public LTemplateInstruction<1, 0, 0> { + public: + DECLARE_CONCRETE_INSTRUCTION(ArgumentsElements, "arguments-elements") + DECLARE_HYDROGEN_ACCESSOR(ArgumentsElements) +}; + + +class LDebugBreak V8_FINAL : public LTemplateInstruction<0, 0, 0> { + public: + DECLARE_CONCRETE_INSTRUCTION(DebugBreak, "break") +}; + + +class LModByPowerOf2I V8_FINAL : public LTemplateInstruction<1, 1, 0> { + public: + LModByPowerOf2I(LOperand* dividend, int32_t divisor) { + inputs_[0] = dividend; + divisor_ = divisor; + } + + LOperand* dividend() { return inputs_[0]; } + int32_t divisor() const { return divisor_; } + + DECLARE_CONCRETE_INSTRUCTION(ModByPowerOf2I, "mod-by-power-of-2-i") + DECLARE_HYDROGEN_ACCESSOR(Mod) + + private: + int32_t divisor_; +}; + + +class LModByConstI V8_FINAL : public LTemplateInstruction<1, 1, 2> { + public: + LModByConstI(LOperand* dividend, + int32_t divisor, + LOperand* temp1, + LOperand* temp2) { + inputs_[0] = dividend; + divisor_ = divisor; + temps_[0] = temp1; + temps_[1] = temp2; + } + + LOperand* dividend() { return inputs_[0]; } + int32_t divisor() const { return divisor_; } + LOperand* temp1() { return temps_[0]; } + LOperand* temp2() { return temps_[1]; } + + DECLARE_CONCRETE_INSTRUCTION(ModByConstI, "mod-by-const-i") + DECLARE_HYDROGEN_ACCESSOR(Mod) + + private: + int32_t divisor_; +}; + + +class LModI V8_FINAL : public LTemplateInstruction<1, 2, 1> { + public: + LModI(LOperand* left, LOperand* right, LOperand* temp) { + inputs_[0] = left; + inputs_[1] = right; + temps_[0] = temp; + } + + LOperand* left() { return inputs_[0]; } + LOperand* right() { return inputs_[1]; } + LOperand* temp() { return temps_[0]; } + + DECLARE_CONCRETE_INSTRUCTION(ModI, "mod-i") + DECLARE_HYDROGEN_ACCESSOR(Mod) +}; + + +class LDivByPowerOf2I V8_FINAL : public LTemplateInstruction<1, 1, 0> { + public: + LDivByPowerOf2I(LOperand* dividend, 
int32_t divisor) { + inputs_[0] = dividend; + divisor_ = divisor; + } + + LOperand* dividend() { return inputs_[0]; } + int32_t divisor() const { return divisor_; } + + DECLARE_CONCRETE_INSTRUCTION(DivByPowerOf2I, "div-by-power-of-2-i") + DECLARE_HYDROGEN_ACCESSOR(Div) + + private: + int32_t divisor_; +}; + + +class LDivByConstI V8_FINAL : public LTemplateInstruction<1, 1, 2> { + public: + LDivByConstI(LOperand* dividend, + int32_t divisor, + LOperand* temp1, + LOperand* temp2) { + inputs_[0] = dividend; + divisor_ = divisor; + temps_[0] = temp1; + temps_[1] = temp2; + } + + LOperand* dividend() { return inputs_[0]; } + int32_t divisor() const { return divisor_; } + LOperand* temp1() { return temps_[0]; } + LOperand* temp2() { return temps_[1]; } + + DECLARE_CONCRETE_INSTRUCTION(DivByConstI, "div-by-const-i") + DECLARE_HYDROGEN_ACCESSOR(Div) + + private: + int32_t divisor_; +}; + + +class LDivI V8_FINAL : public LTemplateInstruction<1, 2, 1> { + public: + LDivI(LOperand* dividend, LOperand* divisor, LOperand* temp) { + inputs_[0] = dividend; + inputs_[1] = divisor; + temps_[0] = temp; + } + + LOperand* dividend() { return inputs_[0]; } + LOperand* divisor() { return inputs_[1]; } + LOperand* temp() { return temps_[0]; } + + DECLARE_CONCRETE_INSTRUCTION(DivI, "div-i") + DECLARE_HYDROGEN_ACCESSOR(BinaryOperation) +}; + + +class LFlooringDivByPowerOf2I V8_FINAL : public LTemplateInstruction<1, 1, 0> { + public: + LFlooringDivByPowerOf2I(LOperand* dividend, int32_t divisor) { + inputs_[0] = dividend; + divisor_ = divisor; + } + + LOperand* dividend() { return inputs_[0]; } + int32_t divisor() const { return divisor_; } + + DECLARE_CONCRETE_INSTRUCTION(FlooringDivByPowerOf2I, + "flooring-div-by-power-of-2-i") + DECLARE_HYDROGEN_ACCESSOR(MathFloorOfDiv) + + private: + int32_t divisor_; +}; + + +class LFlooringDivByConstI V8_FINAL : public LTemplateInstruction<1, 1, 3> { + public: + LFlooringDivByConstI(LOperand* dividend, + int32_t divisor, + LOperand* temp1, + LOperand* temp2, + LOperand* temp3) { + inputs_[0] = dividend; + divisor_ = divisor; + temps_[0] = temp1; + temps_[1] = temp2; + temps_[2] = temp3; + } + + LOperand* dividend() { return inputs_[0]; } + int32_t divisor() const { return divisor_; } + LOperand* temp1() { return temps_[0]; } + LOperand* temp2() { return temps_[1]; } + LOperand* temp3() { return temps_[2]; } + + DECLARE_CONCRETE_INSTRUCTION(FlooringDivByConstI, "flooring-div-by-const-i") + DECLARE_HYDROGEN_ACCESSOR(MathFloorOfDiv) + + private: + int32_t divisor_; +}; + + +class LFlooringDivI V8_FINAL : public LTemplateInstruction<1, 2, 1> { + public: + LFlooringDivI(LOperand* dividend, LOperand* divisor, LOperand* temp) { + inputs_[0] = dividend; + inputs_[1] = divisor; + temps_[0] = temp; + } + + LOperand* dividend() { return inputs_[0]; } + LOperand* divisor() { return inputs_[1]; } + LOperand* temp() { return temps_[0]; } + + DECLARE_CONCRETE_INSTRUCTION(FlooringDivI, "flooring-div-i") + DECLARE_HYDROGEN_ACCESSOR(MathFloorOfDiv) +}; + + +class LMulI V8_FINAL : public LTemplateInstruction<1, 2, 1> { + public: + LMulI(LOperand* left, LOperand* right, LOperand* temp) { + inputs_[0] = left; + inputs_[1] = right; + temps_[0] = temp; + } + + LOperand* left() { return inputs_[0]; } + LOperand* right() { return inputs_[1]; } + LOperand* temp() { return temps_[0]; } + + DECLARE_CONCRETE_INSTRUCTION(MulI, "mul-i") + DECLARE_HYDROGEN_ACCESSOR(Mul) +}; + + +class LCompareNumericAndBranch V8_FINAL : public LControlInstruction<2, 0> { + public: + LCompareNumericAndBranch(LOperand* left, 
LOperand* right) { + inputs_[0] = left; + inputs_[1] = right; + } + + LOperand* left() { return inputs_[0]; } + LOperand* right() { return inputs_[1]; } + + DECLARE_CONCRETE_INSTRUCTION(CompareNumericAndBranch, + "compare-numeric-and-branch") + DECLARE_HYDROGEN_ACCESSOR(CompareNumericAndBranch) + + Token::Value op() const { return hydrogen()->token(); } + bool is_double() const { + return hydrogen()->representation().IsDouble(); + } + + virtual void PrintDataTo(StringStream* stream); +}; + + +class LMathFloor V8_FINAL : public LTemplateInstruction<1, 1, 0> { + public: + explicit LMathFloor(LOperand* value) { + inputs_[0] = value; + } + + LOperand* value() { return inputs_[0]; } + + DECLARE_CONCRETE_INSTRUCTION(MathFloor, "math-floor") + DECLARE_HYDROGEN_ACCESSOR(UnaryMathOperation) +}; + + +class LMathRound V8_FINAL : public LTemplateInstruction<1, 1, 0> { + public: + explicit LMathRound(LOperand* value) { + inputs_[0] = value; + } + + LOperand* value() { return inputs_[0]; } + + DECLARE_CONCRETE_INSTRUCTION(MathRound, "math-round") + DECLARE_HYDROGEN_ACCESSOR(UnaryMathOperation) +}; + + +class LMathFround V8_FINAL : public LTemplateInstruction<1, 1, 0> { + public: + explicit LMathFround(LOperand* value) { inputs_[0] = value; } + + LOperand* value() { return inputs_[0]; } + + DECLARE_CONCRETE_INSTRUCTION(MathFround, "math-fround") +}; + + +class LMathAbs V8_FINAL : public LTemplateInstruction<1, 2, 0> { + public: + LMathAbs(LOperand* context, LOperand* value) { + inputs_[1] = context; + inputs_[0] = value; + } + + LOperand* context() { return inputs_[1]; } + LOperand* value() { return inputs_[0]; } + + DECLARE_CONCRETE_INSTRUCTION(MathAbs, "math-abs") + DECLARE_HYDROGEN_ACCESSOR(UnaryMathOperation) +}; + + +class LMathLog V8_FINAL : public LTemplateInstruction<1, 1, 0> { + public: + explicit LMathLog(LOperand* value) { + inputs_[0] = value; + } + + LOperand* value() { return inputs_[0]; } + + DECLARE_CONCRETE_INSTRUCTION(MathLog, "math-log") +}; + + +class LMathClz32 V8_FINAL : public LTemplateInstruction<1, 1, 0> { + public: + explicit LMathClz32(LOperand* value) { + inputs_[0] = value; + } + + LOperand* value() { return inputs_[0]; } + + DECLARE_CONCRETE_INSTRUCTION(MathClz32, "math-clz32") +}; + + +class LMathExp V8_FINAL : public LTemplateInstruction<1, 1, 2> { + public: + LMathExp(LOperand* value, + LOperand* temp1, + LOperand* temp2) { + inputs_[0] = value; + temps_[0] = temp1; + temps_[1] = temp2; + ExternalReference::InitializeMathExpData(); + } + + LOperand* value() { return inputs_[0]; } + LOperand* temp1() { return temps_[0]; } + LOperand* temp2() { return temps_[1]; } + + DECLARE_CONCRETE_INSTRUCTION(MathExp, "math-exp") +}; + + +class LMathSqrt V8_FINAL : public LTemplateInstruction<1, 1, 0> { + public: + explicit LMathSqrt(LOperand* value) { + inputs_[0] = value; + } + + LOperand* value() { return inputs_[0]; } + + DECLARE_CONCRETE_INSTRUCTION(MathSqrt, "math-sqrt") +}; + + +class LMathPowHalf V8_FINAL : public LTemplateInstruction<1, 1, 1> { + public: + LMathPowHalf(LOperand* value, LOperand* temp) { + inputs_[0] = value; + temps_[0] = temp; + } + + LOperand* value() { return inputs_[0]; } + LOperand* temp() { return temps_[0]; } + + DECLARE_CONCRETE_INSTRUCTION(MathPowHalf, "math-pow-half") +}; + + +class LCmpObjectEqAndBranch V8_FINAL : public LControlInstruction<2, 0> { + public: + LCmpObjectEqAndBranch(LOperand* left, LOperand* right) { + inputs_[0] = left; + inputs_[1] = right; + } + + LOperand* left() { return inputs_[0]; } + LOperand* right() { return inputs_[1]; } + 
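+  // Branches on bitwise equality of the two tagged inputs (identity for
+  // heap objects, value equality for smis); no type feedback is involved.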
+  DECLARE_CONCRETE_INSTRUCTION(CmpObjectEqAndBranch, "cmp-object-eq-and-branch")
+};
+
+
+class LCmpHoleAndBranch V8_FINAL : public LControlInstruction<1, 0> {
+ public:
+  explicit LCmpHoleAndBranch(LOperand* object) {
+    inputs_[0] = object;
+  }
+
+  LOperand* object() { return inputs_[0]; }
+
+  DECLARE_CONCRETE_INSTRUCTION(CmpHoleAndBranch, "cmp-hole-and-branch")
+  DECLARE_HYDROGEN_ACCESSOR(CompareHoleAndBranch)
+};
+
+
+class LCompareMinusZeroAndBranch V8_FINAL : public LControlInstruction<1, 1> {
+ public:
+  LCompareMinusZeroAndBranch(LOperand* value, LOperand* temp) {
+    inputs_[0] = value;
+    temps_[0] = temp;
+  }
+
+  LOperand* value() { return inputs_[0]; }
+  LOperand* temp() { return temps_[0]; }
+
+  DECLARE_CONCRETE_INSTRUCTION(CompareMinusZeroAndBranch,
+                               "cmp-minus-zero-and-branch")
+  DECLARE_HYDROGEN_ACCESSOR(CompareMinusZeroAndBranch)
+};
+
+
+class LIsObjectAndBranch V8_FINAL : public LControlInstruction<1, 1> {
+ public:
+  LIsObjectAndBranch(LOperand* value, LOperand* temp) {
+    inputs_[0] = value;
+    temps_[0] = temp;
+  }
+
+  LOperand* value() { return inputs_[0]; }
+  LOperand* temp() { return temps_[0]; }
+
+  DECLARE_CONCRETE_INSTRUCTION(IsObjectAndBranch, "is-object-and-branch")
+
+  virtual void PrintDataTo(StringStream* stream) V8_OVERRIDE;
+};
+
+
+class LIsStringAndBranch V8_FINAL : public LControlInstruction<1, 1> {
+ public:
+  LIsStringAndBranch(LOperand* value, LOperand* temp) {
+    inputs_[0] = value;
+    temps_[0] = temp;
+  }
+
+  LOperand* value() { return inputs_[0]; }
+  LOperand* temp() { return temps_[0]; }
+
+  DECLARE_CONCRETE_INSTRUCTION(IsStringAndBranch, "is-string-and-branch")
+  DECLARE_HYDROGEN_ACCESSOR(IsStringAndBranch)
+
+  virtual void PrintDataTo(StringStream* stream) V8_OVERRIDE;
+};
+
+
+class LIsSmiAndBranch V8_FINAL : public LControlInstruction<1, 0> {
+ public:
+  explicit LIsSmiAndBranch(LOperand* value) {
+    inputs_[0] = value;
+  }
+
+  LOperand* value() { return inputs_[0]; }
+
+  DECLARE_CONCRETE_INSTRUCTION(IsSmiAndBranch, "is-smi-and-branch")
+  DECLARE_HYDROGEN_ACCESSOR(IsSmiAndBranch)
+
+  virtual void PrintDataTo(StringStream* stream) V8_OVERRIDE;
+};
+
+
+class LIsUndetectableAndBranch V8_FINAL : public LControlInstruction<1, 1> {
+ public:
+  LIsUndetectableAndBranch(LOperand* value, LOperand* temp) {
+    inputs_[0] = value;
+    temps_[0] = temp;
+  }
+
+  LOperand* value() { return inputs_[0]; }
+  LOperand* temp() { return temps_[0]; }
+
+  DECLARE_CONCRETE_INSTRUCTION(IsUndetectableAndBranch,
+                               "is-undetectable-and-branch")
+  DECLARE_HYDROGEN_ACCESSOR(IsUndetectableAndBranch)
+
+  virtual void PrintDataTo(StringStream* stream) V8_OVERRIDE;
+};
+
+
+class LStringCompareAndBranch V8_FINAL : public LControlInstruction<3, 0> {
+ public:
+  LStringCompareAndBranch(LOperand* context, LOperand* left, LOperand* right) {
+    inputs_[0] = context;
+    inputs_[1] = left;
+    inputs_[2] = right;
+  }
+
+  LOperand* context() { return inputs_[0]; }
+  LOperand* left() { return inputs_[1]; }
+  LOperand* right() { return inputs_[2]; }
+
+  DECLARE_CONCRETE_INSTRUCTION(StringCompareAndBranch,
+                               "string-compare-and-branch")
+  DECLARE_HYDROGEN_ACCESSOR(StringCompareAndBranch)
+
+  virtual void PrintDataTo(StringStream* stream) V8_OVERRIDE;
+
+  Token::Value op() const { return hydrogen()->token(); }
+};
+
+
+class LHasInstanceTypeAndBranch V8_FINAL : public LControlInstruction<1, 1> {
+ public:
+  LHasInstanceTypeAndBranch(LOperand* value, LOperand* temp) {
+    inputs_[0] = value;
+    temps_[0] = temp;
+  }
+
+  LOperand* value() { return inputs_[0]; }
+  LOperand* temp() { return temps_[0]; }
+
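+  // The temp register holds the object's map while its instance-type byte
+  // is compared against the range requested by the hydrogen instruction.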
+ DECLARE_CONCRETE_INSTRUCTION(HasInstanceTypeAndBranch, + "has-instance-type-and-branch") + DECLARE_HYDROGEN_ACCESSOR(HasInstanceTypeAndBranch) + + virtual void PrintDataTo(StringStream* stream) V8_OVERRIDE; +}; + + +class LGetCachedArrayIndex V8_FINAL : public LTemplateInstruction<1, 1, 0> { + public: + explicit LGetCachedArrayIndex(LOperand* value) { + inputs_[0] = value; + } + + LOperand* value() { return inputs_[0]; } + + DECLARE_CONCRETE_INSTRUCTION(GetCachedArrayIndex, "get-cached-array-index") + DECLARE_HYDROGEN_ACCESSOR(GetCachedArrayIndex) +}; + + +class LHasCachedArrayIndexAndBranch V8_FINAL + : public LControlInstruction<1, 0> { + public: + explicit LHasCachedArrayIndexAndBranch(LOperand* value) { + inputs_[0] = value; + } + + LOperand* value() { return inputs_[0]; } + + DECLARE_CONCRETE_INSTRUCTION(HasCachedArrayIndexAndBranch, + "has-cached-array-index-and-branch") + + virtual void PrintDataTo(StringStream* stream) V8_OVERRIDE; +}; + + +class LIsConstructCallAndBranch V8_FINAL : public LControlInstruction<0, 1> { + public: + explicit LIsConstructCallAndBranch(LOperand* temp) { + temps_[0] = temp; + } + + LOperand* temp() { return temps_[0]; } + + DECLARE_CONCRETE_INSTRUCTION(IsConstructCallAndBranch, + "is-construct-call-and-branch") +}; + + +class LClassOfTestAndBranch V8_FINAL : public LControlInstruction<1, 2> { + public: + LClassOfTestAndBranch(LOperand* value, LOperand* temp, LOperand* temp2) { + inputs_[0] = value; + temps_[0] = temp; + temps_[1] = temp2; + } + + LOperand* value() { return inputs_[0]; } + LOperand* temp() { return temps_[0]; } + LOperand* temp2() { return temps_[1]; } + + DECLARE_CONCRETE_INSTRUCTION(ClassOfTestAndBranch, + "class-of-test-and-branch") + DECLARE_HYDROGEN_ACCESSOR(ClassOfTestAndBranch) + + virtual void PrintDataTo(StringStream* stream) V8_OVERRIDE; +}; + + +class LCmpT V8_FINAL : public LTemplateInstruction<1, 3, 0> { + public: + LCmpT(LOperand* context, LOperand* left, LOperand* right) { + inputs_[0] = context; + inputs_[1] = left; + inputs_[2] = right; + } + + DECLARE_CONCRETE_INSTRUCTION(CmpT, "cmp-t") + DECLARE_HYDROGEN_ACCESSOR(CompareGeneric) + + LOperand* context() { return inputs_[0]; } + Token::Value op() const { return hydrogen()->token(); } +}; + + +class LInstanceOf V8_FINAL : public LTemplateInstruction<1, 3, 0> { + public: + LInstanceOf(LOperand* context, LOperand* left, LOperand* right) { + inputs_[0] = context; + inputs_[1] = left; + inputs_[2] = right; + } + + LOperand* context() { return inputs_[0]; } + + DECLARE_CONCRETE_INSTRUCTION(InstanceOf, "instance-of") +}; + + +class LInstanceOfKnownGlobal V8_FINAL : public LTemplateInstruction<1, 2, 1> { + public: + LInstanceOfKnownGlobal(LOperand* context, LOperand* value, LOperand* temp) { + inputs_[0] = context; + inputs_[1] = value; + temps_[0] = temp; + } + + LOperand* context() { return inputs_[0]; } + LOperand* value() { return inputs_[1]; } + LOperand* temp() { return temps_[0]; } + + DECLARE_CONCRETE_INSTRUCTION(InstanceOfKnownGlobal, + "instance-of-known-global") + DECLARE_HYDROGEN_ACCESSOR(InstanceOfKnownGlobal) + + Handle<JSFunction> function() const { return hydrogen()->function(); } + LEnvironment* GetDeferredLazyDeoptimizationEnvironment() { + return lazy_deopt_env_; + } + virtual void SetDeferredLazyDeoptimizationEnvironment( + LEnvironment* env) V8_OVERRIDE { + lazy_deopt_env_ = env; + } + + private: + LEnvironment* lazy_deopt_env_; +}; + + +class LBoundsCheck V8_FINAL : public LTemplateInstruction<0, 2, 0> { + public: + LBoundsCheck(LOperand* index, LOperand* 
length) { + inputs_[0] = index; + inputs_[1] = length; + } + + LOperand* index() { return inputs_[0]; } + LOperand* length() { return inputs_[1]; } + + DECLARE_CONCRETE_INSTRUCTION(BoundsCheck, "bounds-check") + DECLARE_HYDROGEN_ACCESSOR(BoundsCheck) +}; + + +class LBitI V8_FINAL : public LTemplateInstruction<1, 2, 0> { + public: + LBitI(LOperand* left, LOperand* right) { + inputs_[0] = left; + inputs_[1] = right; + } + + LOperand* left() { return inputs_[0]; } + LOperand* right() { return inputs_[1]; } + + DECLARE_CONCRETE_INSTRUCTION(BitI, "bit-i") + DECLARE_HYDROGEN_ACCESSOR(Bitwise) + + Token::Value op() const { return hydrogen()->op(); } +}; + + +class LShiftI V8_FINAL : public LTemplateInstruction<1, 2, 0> { + public: + LShiftI(Token::Value op, LOperand* left, LOperand* right, bool can_deopt) + : op_(op), can_deopt_(can_deopt) { + inputs_[0] = left; + inputs_[1] = right; + } + + LOperand* left() { return inputs_[0]; } + LOperand* right() { return inputs_[1]; } + + DECLARE_CONCRETE_INSTRUCTION(ShiftI, "shift-i") + + Token::Value op() const { return op_; } + bool can_deopt() const { return can_deopt_; } + + private: + Token::Value op_; + bool can_deopt_; +}; + + +class LSubI V8_FINAL : public LTemplateInstruction<1, 2, 0> { + public: + LSubI(LOperand* left, LOperand* right) { + inputs_[0] = left; + inputs_[1] = right; + } + + LOperand* left() { return inputs_[0]; } + LOperand* right() { return inputs_[1]; } + + DECLARE_CONCRETE_INSTRUCTION(SubI, "sub-i") + DECLARE_HYDROGEN_ACCESSOR(Sub) +}; + + +class LConstantI V8_FINAL : public LTemplateInstruction<1, 0, 0> { + public: + DECLARE_CONCRETE_INSTRUCTION(ConstantI, "constant-i") + DECLARE_HYDROGEN_ACCESSOR(Constant) + + int32_t value() const { return hydrogen()->Integer32Value(); } +}; + + +class LConstantS V8_FINAL : public LTemplateInstruction<1, 0, 0> { + public: + DECLARE_CONCRETE_INSTRUCTION(ConstantS, "constant-s") + DECLARE_HYDROGEN_ACCESSOR(Constant) + + Smi* value() const { return Smi::FromInt(hydrogen()->Integer32Value()); } +}; + + +class LConstantD V8_FINAL : public LTemplateInstruction<1, 0, 1> { + public: + explicit LConstantD(LOperand* temp) { + temps_[0] = temp; + } + + LOperand* temp() { return temps_[0]; } + + DECLARE_CONCRETE_INSTRUCTION(ConstantD, "constant-d") + DECLARE_HYDROGEN_ACCESSOR(Constant) + + double value() const { return hydrogen()->DoubleValue(); } +}; + + +class LConstantE V8_FINAL : public LTemplateInstruction<1, 0, 0> { + public: + DECLARE_CONCRETE_INSTRUCTION(ConstantE, "constant-e") + DECLARE_HYDROGEN_ACCESSOR(Constant) + + ExternalReference value() const { + return hydrogen()->ExternalReferenceValue(); + } +}; + + +class LConstantT V8_FINAL : public LTemplateInstruction<1, 0, 0> { + public: + DECLARE_CONCRETE_INSTRUCTION(ConstantT, "constant-t") + DECLARE_HYDROGEN_ACCESSOR(Constant) + + Handle<Object> value(Isolate* isolate) const { + return hydrogen()->handle(isolate); + } +}; + + +class LBranch V8_FINAL : public LControlInstruction<1, 1> { + public: + LBranch(LOperand* value, LOperand* temp) { + inputs_[0] = value; + temps_[0] = temp; + } + + LOperand* value() { return inputs_[0]; } + LOperand* temp() { return temps_[0]; } + + DECLARE_CONCRETE_INSTRUCTION(Branch, "branch") + DECLARE_HYDROGEN_ACCESSOR(Branch) + + virtual void PrintDataTo(StringStream* stream) V8_OVERRIDE; +}; + + +class LCmpMapAndBranch V8_FINAL : public LControlInstruction<1, 0> { + public: + explicit LCmpMapAndBranch(LOperand* value) { + inputs_[0] = value; + } + + LOperand* value() { return inputs_[0]; } + + 
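+  // Compares the object's map word directly against the known map() handle;
+  // no temp is needed because the map is embedded as an immediate.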
DECLARE_CONCRETE_INSTRUCTION(CmpMapAndBranch, "cmp-map-and-branch") + DECLARE_HYDROGEN_ACCESSOR(CompareMap) + + Handle<Map> map() const { return hydrogen()->map().handle(); } +}; + + +class LMapEnumLength V8_FINAL : public LTemplateInstruction<1, 1, 0> { + public: + explicit LMapEnumLength(LOperand* value) { + inputs_[0] = value; + } + + LOperand* value() { return inputs_[0]; } + + DECLARE_CONCRETE_INSTRUCTION(MapEnumLength, "map-enum-length") +}; + + +class LDateField V8_FINAL : public LTemplateInstruction<1, 1, 1> { + public: + LDateField(LOperand* date, LOperand* temp, Smi* index) + : index_(index) { + inputs_[0] = date; + temps_[0] = temp; + } + + LOperand* date() { return inputs_[0]; } + LOperand* temp() { return temps_[0]; } + + DECLARE_CONCRETE_INSTRUCTION(DateField, "date-field") + DECLARE_HYDROGEN_ACCESSOR(DateField) + + Smi* index() const { return index_; } + + private: + Smi* index_; +}; + + +class LSeqStringGetChar V8_FINAL : public LTemplateInstruction<1, 2, 0> { + public: + LSeqStringGetChar(LOperand* string, LOperand* index) { + inputs_[0] = string; + inputs_[1] = index; + } + + LOperand* string() const { return inputs_[0]; } + LOperand* index() const { return inputs_[1]; } + + DECLARE_CONCRETE_INSTRUCTION(SeqStringGetChar, "seq-string-get-char") + DECLARE_HYDROGEN_ACCESSOR(SeqStringGetChar) +}; + + +class LSeqStringSetChar V8_FINAL : public LTemplateInstruction<1, 4, 0> { + public: + LSeqStringSetChar(LOperand* context, + LOperand* string, + LOperand* index, + LOperand* value) { + inputs_[0] = context; + inputs_[1] = string; + inputs_[2] = index; + inputs_[3] = value; + } + + LOperand* string() { return inputs_[1]; } + LOperand* index() { return inputs_[2]; } + LOperand* value() { return inputs_[3]; } + + DECLARE_CONCRETE_INSTRUCTION(SeqStringSetChar, "seq-string-set-char") + DECLARE_HYDROGEN_ACCESSOR(SeqStringSetChar) +}; + + +class LAddI V8_FINAL : public LTemplateInstruction<1, 2, 0> { + public: + LAddI(LOperand* left, LOperand* right) { + inputs_[0] = left; + inputs_[1] = right; + } + + LOperand* left() { return inputs_[0]; } + LOperand* right() { return inputs_[1]; } + + static bool UseLea(HAdd* add) { + return !add->CheckFlag(HValue::kCanOverflow) && + add->BetterLeftOperand()->UseCount() > 1; + } + + DECLARE_CONCRETE_INSTRUCTION(AddI, "add-i") + DECLARE_HYDROGEN_ACCESSOR(Add) +}; + + +class LMathMinMax V8_FINAL : public LTemplateInstruction<1, 2, 0> { + public: + LMathMinMax(LOperand* left, LOperand* right) { + inputs_[0] = left; + inputs_[1] = right; + } + + LOperand* left() { return inputs_[0]; } + LOperand* right() { return inputs_[1]; } + + DECLARE_CONCRETE_INSTRUCTION(MathMinMax, "math-min-max") + DECLARE_HYDROGEN_ACCESSOR(MathMinMax) +}; + + +class LPower V8_FINAL : public LTemplateInstruction<1, 2, 0> { + public: + LPower(LOperand* left, LOperand* right) { + inputs_[0] = left; + inputs_[1] = right; + } + + LOperand* left() { return inputs_[0]; } + LOperand* right() { return inputs_[1]; } + + DECLARE_CONCRETE_INSTRUCTION(Power, "power") + DECLARE_HYDROGEN_ACCESSOR(Power) +}; + + +class LArithmeticD V8_FINAL : public LTemplateInstruction<1, 2, 0> { + public: + LArithmeticD(Token::Value op, LOperand* left, LOperand* right) + : op_(op) { + inputs_[0] = left; + inputs_[1] = right; + } + + LOperand* left() { return inputs_[0]; } + LOperand* right() { return inputs_[1]; } + + Token::Value op() const { return op_; } + + virtual Opcode opcode() const V8_OVERRIDE { + return LInstruction::kArithmeticD; + } + virtual void CompileToNative(LCodeGen* generator) V8_OVERRIDE; 
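+  // LArithmeticD multiplexes several Token ops (ADD, SUB, MUL, DIV, MOD),
+  // so it implements opcode()/CompileToNative()/Mnemonic() by hand rather
+  // than using DECLARE_CONCRETE_INSTRUCTION, whose mnemonic string is fixed.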
+ virtual const char* Mnemonic() const V8_OVERRIDE; + + private: + Token::Value op_; +}; + + +class LArithmeticT V8_FINAL : public LTemplateInstruction<1, 3, 0> { + public: + LArithmeticT(Token::Value op, + LOperand* context, + LOperand* left, + LOperand* right) + : op_(op) { + inputs_[0] = context; + inputs_[1] = left; + inputs_[2] = right; + } + + LOperand* context() { return inputs_[0]; } + LOperand* left() { return inputs_[1]; } + LOperand* right() { return inputs_[2]; } + + virtual Opcode opcode() const V8_OVERRIDE { + return LInstruction::kArithmeticT; + } + virtual void CompileToNative(LCodeGen* generator) V8_OVERRIDE; + virtual const char* Mnemonic() const V8_OVERRIDE; + + Token::Value op() const { return op_; } + + private: + Token::Value op_; +}; + + +class LReturn V8_FINAL : public LTemplateInstruction<0, 3, 0> { + public: + explicit LReturn(LOperand* value, + LOperand* context, + LOperand* parameter_count) { + inputs_[0] = value; + inputs_[1] = context; + inputs_[2] = parameter_count; + } + + bool has_constant_parameter_count() { + return parameter_count()->IsConstantOperand(); + } + LConstantOperand* constant_parameter_count() { + DCHECK(has_constant_parameter_count()); + return LConstantOperand::cast(parameter_count()); + } + LOperand* parameter_count() { return inputs_[2]; } + + DECLARE_CONCRETE_INSTRUCTION(Return, "return") + DECLARE_HYDROGEN_ACCESSOR(Return) +}; + + +class LLoadNamedField V8_FINAL : public LTemplateInstruction<1, 1, 0> { + public: + explicit LLoadNamedField(LOperand* object) { + inputs_[0] = object; + } + + LOperand* object() { return inputs_[0]; } + + DECLARE_CONCRETE_INSTRUCTION(LoadNamedField, "load-named-field") + DECLARE_HYDROGEN_ACCESSOR(LoadNamedField) +}; + + +class LLoadNamedGeneric V8_FINAL : public LTemplateInstruction<1, 2, 1> { + public: + LLoadNamedGeneric(LOperand* context, LOperand* object, LOperand* vector) { + inputs_[0] = context; + inputs_[1] = object; + temps_[0] = vector; + } + + LOperand* context() { return inputs_[0]; } + LOperand* object() { return inputs_[1]; } + LOperand* temp_vector() { return temps_[0]; } + + DECLARE_CONCRETE_INSTRUCTION(LoadNamedGeneric, "load-named-generic") + DECLARE_HYDROGEN_ACCESSOR(LoadNamedGeneric) + + Handle<Object> name() const { return hydrogen()->name(); } +}; + + +class LLoadFunctionPrototype V8_FINAL : public LTemplateInstruction<1, 1, 1> { + public: + LLoadFunctionPrototype(LOperand* function, LOperand* temp) { + inputs_[0] = function; + temps_[0] = temp; + } + + LOperand* function() { return inputs_[0]; } + LOperand* temp() { return temps_[0]; } + + DECLARE_CONCRETE_INSTRUCTION(LoadFunctionPrototype, "load-function-prototype") + DECLARE_HYDROGEN_ACCESSOR(LoadFunctionPrototype) +}; + + +class LLoadRoot V8_FINAL : public LTemplateInstruction<1, 0, 0> { + public: + DECLARE_CONCRETE_INSTRUCTION(LoadRoot, "load-root") + DECLARE_HYDROGEN_ACCESSOR(LoadRoot) + + Heap::RootListIndex index() const { return hydrogen()->index(); } +}; + + +class LLoadKeyed V8_FINAL : public LTemplateInstruction<1, 2, 0> { + public: + LLoadKeyed(LOperand* elements, LOperand* key) { + inputs_[0] = elements; + inputs_[1] = key; + } + LOperand* elements() { return inputs_[0]; } + LOperand* key() { return inputs_[1]; } + ElementsKind elements_kind() const { + return hydrogen()->elements_kind(); + } + bool is_external() const { + return hydrogen()->is_external(); + } + bool is_fixed_typed_array() const { + return hydrogen()->is_fixed_typed_array(); + } + bool is_typed_elements() const { + return is_external() || 
is_fixed_typed_array(); + } + + DECLARE_CONCRETE_INSTRUCTION(LoadKeyed, "load-keyed") + DECLARE_HYDROGEN_ACCESSOR(LoadKeyed) + + virtual void PrintDataTo(StringStream* stream) V8_OVERRIDE; + uint32_t base_offset() const { return hydrogen()->base_offset(); } + bool key_is_smi() { + return hydrogen()->key()->representation().IsTagged(); + } +}; + + +inline static bool ExternalArrayOpRequiresTemp( + Representation key_representation, + ElementsKind elements_kind) { + // Operations that require the key to be divided by two to be converted into + // an index cannot fold the scale operation into a load and need an extra + // temp register to do the work. + return key_representation.IsSmi() && + (elements_kind == EXTERNAL_INT8_ELEMENTS || + elements_kind == EXTERNAL_UINT8_ELEMENTS || + elements_kind == EXTERNAL_UINT8_CLAMPED_ELEMENTS || + elements_kind == UINT8_ELEMENTS || + elements_kind == INT8_ELEMENTS || + elements_kind == UINT8_CLAMPED_ELEMENTS); +} + + +class LLoadKeyedGeneric V8_FINAL : public LTemplateInstruction<1, 3, 1> { + public: + LLoadKeyedGeneric(LOperand* context, LOperand* obj, LOperand* key, + LOperand* vector) { + inputs_[0] = context; + inputs_[1] = obj; + inputs_[2] = key; + temps_[0] = vector; + } + + LOperand* context() { return inputs_[0]; } + LOperand* object() { return inputs_[1]; } + LOperand* key() { return inputs_[2]; } + LOperand* temp_vector() { return temps_[0]; } + + DECLARE_CONCRETE_INSTRUCTION(LoadKeyedGeneric, "load-keyed-generic") + DECLARE_HYDROGEN_ACCESSOR(LoadKeyedGeneric) +}; + + +class LLoadGlobalCell V8_FINAL : public LTemplateInstruction<1, 0, 0> { + public: + DECLARE_CONCRETE_INSTRUCTION(LoadGlobalCell, "load-global-cell") + DECLARE_HYDROGEN_ACCESSOR(LoadGlobalCell) +}; + + +class LLoadGlobalGeneric V8_FINAL : public LTemplateInstruction<1, 2, 1> { + public: + LLoadGlobalGeneric(LOperand* context, LOperand* global_object, + LOperand* vector) { + inputs_[0] = context; + inputs_[1] = global_object; + temps_[0] = vector; + } + + LOperand* context() { return inputs_[0]; } + LOperand* global_object() { return inputs_[1]; } + LOperand* temp_vector() { return temps_[0]; } + + DECLARE_CONCRETE_INSTRUCTION(LoadGlobalGeneric, "load-global-generic") + DECLARE_HYDROGEN_ACCESSOR(LoadGlobalGeneric) + + Handle<Object> name() const { return hydrogen()->name(); } + bool for_typeof() const { return hydrogen()->for_typeof(); } +}; + + +class LStoreGlobalCell V8_FINAL : public LTemplateInstruction<0, 1, 0> { + public: + explicit LStoreGlobalCell(LOperand* value) { + inputs_[0] = value; + } + + LOperand* value() { return inputs_[0]; } + + DECLARE_CONCRETE_INSTRUCTION(StoreGlobalCell, "store-global-cell") + DECLARE_HYDROGEN_ACCESSOR(StoreGlobalCell) +}; + + +class LLoadContextSlot V8_FINAL : public LTemplateInstruction<1, 1, 0> { + public: + explicit LLoadContextSlot(LOperand* context) { + inputs_[0] = context; + } + + LOperand* context() { return inputs_[0]; } + + DECLARE_CONCRETE_INSTRUCTION(LoadContextSlot, "load-context-slot") + DECLARE_HYDROGEN_ACCESSOR(LoadContextSlot) + + int slot_index() { return hydrogen()->slot_index(); } + + virtual void PrintDataTo(StringStream* stream) V8_OVERRIDE; +}; + + +class LStoreContextSlot V8_FINAL : public LTemplateInstruction<0, 2, 1> { + public: + LStoreContextSlot(LOperand* context, LOperand* value, LOperand* temp) { + inputs_[0] = context; + inputs_[1] = value; + temps_[0] = temp; + } + + LOperand* context() { return inputs_[0]; } + LOperand* value() { return inputs_[1]; } + LOperand* temp() { return temps_[0]; } + + 
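+  // The temp register is used by the write barrier (RecordWrite) when a
+  // tagged value is stored into the context slot.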
DECLARE_CONCRETE_INSTRUCTION(StoreContextSlot, "store-context-slot") + DECLARE_HYDROGEN_ACCESSOR(StoreContextSlot) + + int slot_index() { return hydrogen()->slot_index(); } + + virtual void PrintDataTo(StringStream* stream) V8_OVERRIDE; +}; + + +class LPushArgument V8_FINAL : public LTemplateInstruction<0, 1, 0> { + public: + explicit LPushArgument(LOperand* value) { + inputs_[0] = value; + } + + LOperand* value() { return inputs_[0]; } + + DECLARE_CONCRETE_INSTRUCTION(PushArgument, "push-argument") +}; + + +class LDrop V8_FINAL : public LTemplateInstruction<0, 0, 0> { + public: + explicit LDrop(int count) : count_(count) { } + + int count() const { return count_; } + + DECLARE_CONCRETE_INSTRUCTION(Drop, "drop") + + private: + int count_; +}; + + +class LStoreCodeEntry V8_FINAL: public LTemplateInstruction<0, 2, 0> { + public: + LStoreCodeEntry(LOperand* function, LOperand* code_object) { + inputs_[0] = function; + inputs_[1] = code_object; + } + + LOperand* function() { return inputs_[0]; } + LOperand* code_object() { return inputs_[1]; } + + virtual void PrintDataTo(StringStream* stream); + + DECLARE_CONCRETE_INSTRUCTION(StoreCodeEntry, "store-code-entry") + DECLARE_HYDROGEN_ACCESSOR(StoreCodeEntry) +}; + + +class LInnerAllocatedObject V8_FINAL: public LTemplateInstruction<1, 2, 0> { + public: + LInnerAllocatedObject(LOperand* base_object, LOperand* offset) { + inputs_[0] = base_object; + inputs_[1] = offset; + } + + LOperand* base_object() const { return inputs_[0]; } + LOperand* offset() const { return inputs_[1]; } + + virtual void PrintDataTo(StringStream* stream); + + DECLARE_CONCRETE_INSTRUCTION(InnerAllocatedObject, "inner-allocated-object") +}; + + +class LThisFunction V8_FINAL : public LTemplateInstruction<1, 0, 0> { + public: + DECLARE_CONCRETE_INSTRUCTION(ThisFunction, "this-function") + DECLARE_HYDROGEN_ACCESSOR(ThisFunction) +}; + + +class LContext V8_FINAL : public LTemplateInstruction<1, 0, 0> { + public: + DECLARE_CONCRETE_INSTRUCTION(Context, "context") + DECLARE_HYDROGEN_ACCESSOR(Context) +}; + + +class LDeclareGlobals V8_FINAL : public LTemplateInstruction<0, 1, 0> { + public: + explicit LDeclareGlobals(LOperand* context) { + inputs_[0] = context; + } + + LOperand* context() { return inputs_[0]; } + + DECLARE_CONCRETE_INSTRUCTION(DeclareGlobals, "declare-globals") + DECLARE_HYDROGEN_ACCESSOR(DeclareGlobals) +}; + + +class LCallJSFunction V8_FINAL : public LTemplateInstruction<1, 1, 0> { + public: + explicit LCallJSFunction(LOperand* function) { + inputs_[0] = function; + } + + LOperand* function() { return inputs_[0]; } + + DECLARE_CONCRETE_INSTRUCTION(CallJSFunction, "call-js-function") + DECLARE_HYDROGEN_ACCESSOR(CallJSFunction) + + virtual void PrintDataTo(StringStream* stream) V8_OVERRIDE; + + int arity() const { return hydrogen()->argument_count() - 1; } +}; + + +class LCallWithDescriptor V8_FINAL : public LTemplateResultInstruction<1> { + public: + LCallWithDescriptor(const InterfaceDescriptor* descriptor, + const ZoneList<LOperand*>& operands, + Zone* zone) + : inputs_(descriptor->GetRegisterParameterCount() + 1, zone) { + DCHECK(descriptor->GetRegisterParameterCount() + 1 == operands.length()); + inputs_.AddAll(operands, zone); + } + + LOperand* target() const { return inputs_[0]; } + + private: + DECLARE_CONCRETE_INSTRUCTION(CallWithDescriptor, "call-with-descriptor") + DECLARE_HYDROGEN_ACCESSOR(CallWithDescriptor) + + virtual void PrintDataTo(StringStream* stream) V8_OVERRIDE; + + int arity() const { return hydrogen()->argument_count() - 1; } + + 
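+  // Unlike the fixed-arity LTemplateInstruction classes, the operands here
+  // form a variable-length list (the code target plus the descriptor's
+  // register parameters), hence the hand-written InputCount()/InputAt()
+  // below.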
ZoneList<LOperand*> inputs_; + + // Iterator support. + virtual int InputCount() V8_FINAL V8_OVERRIDE { return inputs_.length(); } + virtual LOperand* InputAt(int i) V8_FINAL V8_OVERRIDE { return inputs_[i]; } + + virtual int TempCount() V8_FINAL V8_OVERRIDE { return 0; } + virtual LOperand* TempAt(int i) V8_FINAL V8_OVERRIDE { return NULL; } +}; + + +class LInvokeFunction V8_FINAL : public LTemplateInstruction<1, 2, 0> { + public: + LInvokeFunction(LOperand* context, LOperand* function) { + inputs_[0] = context; + inputs_[1] = function; + } + + LOperand* context() { return inputs_[0]; } + LOperand* function() { return inputs_[1]; } + + DECLARE_CONCRETE_INSTRUCTION(InvokeFunction, "invoke-function") + DECLARE_HYDROGEN_ACCESSOR(InvokeFunction) + + virtual void PrintDataTo(StringStream* stream) V8_OVERRIDE; + + int arity() const { return hydrogen()->argument_count() - 1; } +}; + + +class LCallFunction V8_FINAL : public LTemplateInstruction<1, 2, 0> { + public: + explicit LCallFunction(LOperand* context, LOperand* function) { + inputs_[0] = context; + inputs_[1] = function; + } + + LOperand* context() { return inputs_[0]; } + LOperand* function() { return inputs_[1]; } + + DECLARE_CONCRETE_INSTRUCTION(CallFunction, "call-function") + DECLARE_HYDROGEN_ACCESSOR(CallFunction) + + int arity() const { return hydrogen()->argument_count() - 1; } +}; + + +class LCallNew V8_FINAL : public LTemplateInstruction<1, 2, 0> { + public: + LCallNew(LOperand* context, LOperand* constructor) { + inputs_[0] = context; + inputs_[1] = constructor; + } + + LOperand* context() { return inputs_[0]; } + LOperand* constructor() { return inputs_[1]; } + + DECLARE_CONCRETE_INSTRUCTION(CallNew, "call-new") + DECLARE_HYDROGEN_ACCESSOR(CallNew) + + virtual void PrintDataTo(StringStream* stream) V8_OVERRIDE; + + int arity() const { return hydrogen()->argument_count() - 1; } +}; + + +class LCallNewArray V8_FINAL : public LTemplateInstruction<1, 2, 0> { + public: + LCallNewArray(LOperand* context, LOperand* constructor) { + inputs_[0] = context; + inputs_[1] = constructor; + } + + LOperand* context() { return inputs_[0]; } + LOperand* constructor() { return inputs_[1]; } + + DECLARE_CONCRETE_INSTRUCTION(CallNewArray, "call-new-array") + DECLARE_HYDROGEN_ACCESSOR(CallNewArray) + + virtual void PrintDataTo(StringStream* stream) V8_OVERRIDE; + + int arity() const { return hydrogen()->argument_count() - 1; } +}; + + +class LCallRuntime V8_FINAL : public LTemplateInstruction<1, 1, 0> { + public: + explicit LCallRuntime(LOperand* context) { + inputs_[0] = context; + } + + LOperand* context() { return inputs_[0]; } + + DECLARE_CONCRETE_INSTRUCTION(CallRuntime, "call-runtime") + DECLARE_HYDROGEN_ACCESSOR(CallRuntime) + + virtual bool ClobbersDoubleRegisters(Isolate* isolate) const V8_OVERRIDE { + return true; + } + + const Runtime::Function* function() const { return hydrogen()->function(); } + int arity() const { return hydrogen()->argument_count(); } +}; + + +class LInteger32ToDouble V8_FINAL : public LTemplateInstruction<1, 1, 0> { + public: + explicit LInteger32ToDouble(LOperand* value) { + inputs_[0] = value; + } + + LOperand* value() { return inputs_[0]; } + + DECLARE_CONCRETE_INSTRUCTION(Integer32ToDouble, "int32-to-double") +}; + + +class LUint32ToDouble V8_FINAL : public LTemplateInstruction<1, 1, 1> { + public: + explicit LUint32ToDouble(LOperand* value) { + inputs_[0] = value; + } + + LOperand* value() { return inputs_[0]; } + + DECLARE_CONCRETE_INSTRUCTION(Uint32ToDouble, "uint32-to-double") +}; + + +class LNumberTagI 
V8_FINAL : public LTemplateInstruction<1, 1, 1> {
+ public:
+  LNumberTagI(LOperand* value, LOperand* temp) {
+    inputs_[0] = value;
+    temps_[0] = temp;
+  }
+
+  LOperand* value() { return inputs_[0]; }
+  LOperand* temp() { return temps_[0]; }
+
+  DECLARE_CONCRETE_INSTRUCTION(NumberTagI, "number-tag-i")
+};
+
+
+class LNumberTagU V8_FINAL : public LTemplateInstruction<1, 1, 1> {
+ public:
+  LNumberTagU(LOperand* value, LOperand* temp) {
+    inputs_[0] = value;
+    temps_[0] = temp;
+  }
+
+  LOperand* value() { return inputs_[0]; }
+  LOperand* temp() { return temps_[0]; }
+
+  DECLARE_CONCRETE_INSTRUCTION(NumberTagU, "number-tag-u")
+};
+
+
+class LNumberTagD V8_FINAL : public LTemplateInstruction<1, 1, 1> {
+ public:
+  LNumberTagD(LOperand* value, LOperand* temp) {
+    inputs_[0] = value;
+    temps_[0] = temp;
+  }
+
+  LOperand* value() { return inputs_[0]; }
+  LOperand* temp() { return temps_[0]; }
+
+  DECLARE_CONCRETE_INSTRUCTION(NumberTagD, "number-tag-d")
+  DECLARE_HYDROGEN_ACCESSOR(Change)
+};
+
+
+// Sometimes truncating conversion from a double value to an int32.
+class LDoubleToI V8_FINAL : public LTemplateInstruction<1, 1, 0> {
+ public:
+  explicit LDoubleToI(LOperand* value) {
+    inputs_[0] = value;
+  }
+
+  LOperand* value() { return inputs_[0]; }
+
+  DECLARE_CONCRETE_INSTRUCTION(DoubleToI, "double-to-i")
+  DECLARE_HYDROGEN_ACCESSOR(UnaryOperation)
+
+  bool truncating() { return hydrogen()->CanTruncateToInt32(); }
+};
+
+
+class LDoubleToSmi V8_FINAL : public LTemplateInstruction<1, 1, 0> {
+ public:
+  explicit LDoubleToSmi(LOperand* value) {
+    inputs_[0] = value;
+  }
+
+  LOperand* value() { return inputs_[0]; }
+
+  DECLARE_CONCRETE_INSTRUCTION(DoubleToSmi, "double-to-smi")
+  DECLARE_HYDROGEN_ACCESSOR(UnaryOperation)
+};
+
+
+// Truncating conversion from a tagged value to an int32.
+class LTaggedToI V8_FINAL : public LTemplateInstruction<1, 1, 0> { + public: + explicit LTaggedToI(LOperand* value) { + inputs_[0] = value; + } + + LOperand* value() { return inputs_[0]; } + + DECLARE_CONCRETE_INSTRUCTION(TaggedToI, "tagged-to-i") + DECLARE_HYDROGEN_ACCESSOR(Change) + + bool truncating() { return hydrogen()->CanTruncateToInt32(); } +}; + + +class LSmiTag V8_FINAL : public LTemplateInstruction<1, 1, 0> { + public: + explicit LSmiTag(LOperand* value) { + inputs_[0] = value; + } + + LOperand* value() { return inputs_[0]; } + + DECLARE_CONCRETE_INSTRUCTION(SmiTag, "smi-tag") + DECLARE_HYDROGEN_ACCESSOR(Change) +}; + + +class LNumberUntagD V8_FINAL : public LTemplateInstruction<1, 1, 1> { + public: + explicit LNumberUntagD(LOperand* value, LOperand* temp) { + inputs_[0] = value; + temps_[0] = temp; + } + + LOperand* value() { return inputs_[0]; } + LOperand* temp() { return temps_[0]; } + + DECLARE_CONCRETE_INSTRUCTION(NumberUntagD, "double-untag") + DECLARE_HYDROGEN_ACCESSOR(Change); +}; + + +class LSmiUntag V8_FINAL : public LTemplateInstruction<1, 1, 0> { + public: + LSmiUntag(LOperand* value, bool needs_check) + : needs_check_(needs_check) { + inputs_[0] = value; + } + + LOperand* value() { return inputs_[0]; } + + DECLARE_CONCRETE_INSTRUCTION(SmiUntag, "smi-untag") + + bool needs_check() const { return needs_check_; } + + private: + bool needs_check_; +}; + + +class LStoreNamedField V8_FINAL : public LTemplateInstruction<0, 2, 2> { + public: + LStoreNamedField(LOperand* obj, + LOperand* val, + LOperand* temp, + LOperand* temp_map) { + inputs_[0] = obj; + inputs_[1] = val; + temps_[0] = temp; + temps_[1] = temp_map; + } + + LOperand* object() { return inputs_[0]; } + LOperand* value() { return inputs_[1]; } + LOperand* temp() { return temps_[0]; } + LOperand* temp_map() { return temps_[1]; } + + DECLARE_CONCRETE_INSTRUCTION(StoreNamedField, "store-named-field") + DECLARE_HYDROGEN_ACCESSOR(StoreNamedField) + + virtual void PrintDataTo(StringStream* stream) V8_OVERRIDE; +}; + + +class LStoreNamedGeneric V8_FINAL : public LTemplateInstruction<0, 3, 0> { + public: + LStoreNamedGeneric(LOperand* context, LOperand* object, LOperand* value) { + inputs_[0] = context; + inputs_[1] = object; + inputs_[2] = value; + } + + LOperand* context() { return inputs_[0]; } + LOperand* object() { return inputs_[1]; } + LOperand* value() { return inputs_[2]; } + + DECLARE_CONCRETE_INSTRUCTION(StoreNamedGeneric, "store-named-generic") + DECLARE_HYDROGEN_ACCESSOR(StoreNamedGeneric) + + virtual void PrintDataTo(StringStream* stream) V8_OVERRIDE; + Handle<Object> name() const { return hydrogen()->name(); } + StrictMode strict_mode() { return hydrogen()->strict_mode(); } +}; + + +class LStoreKeyed V8_FINAL : public LTemplateInstruction<0, 3, 0> { + public: + LStoreKeyed(LOperand* obj, LOperand* key, LOperand* val) { + inputs_[0] = obj; + inputs_[1] = key; + inputs_[2] = val; + } + + bool is_external() const { return hydrogen()->is_external(); } + bool is_fixed_typed_array() const { + return hydrogen()->is_fixed_typed_array(); + } + bool is_typed_elements() const { + return is_external() || is_fixed_typed_array(); + } + LOperand* elements() { return inputs_[0]; } + LOperand* key() { return inputs_[1]; } + LOperand* value() { return inputs_[2]; } + ElementsKind elements_kind() const { + return hydrogen()->elements_kind(); + } + + DECLARE_CONCRETE_INSTRUCTION(StoreKeyed, "store-keyed") + DECLARE_HYDROGEN_ACCESSOR(StoreKeyed) + + virtual void PrintDataTo(StringStream* stream) V8_OVERRIDE; + uint32_t 
base_offset() const { return hydrogen()->base_offset(); } + bool NeedsCanonicalization() { return hydrogen()->NeedsCanonicalization(); } +}; + + +class LStoreKeyedGeneric V8_FINAL : public LTemplateInstruction<0, 4, 0> { + public: + LStoreKeyedGeneric(LOperand* context, + LOperand* object, + LOperand* key, + LOperand* value) { + inputs_[0] = context; + inputs_[1] = object; + inputs_[2] = key; + inputs_[3] = value; + } + + LOperand* context() { return inputs_[0]; } + LOperand* object() { return inputs_[1]; } + LOperand* key() { return inputs_[2]; } + LOperand* value() { return inputs_[3]; } + + DECLARE_CONCRETE_INSTRUCTION(StoreKeyedGeneric, "store-keyed-generic") + DECLARE_HYDROGEN_ACCESSOR(StoreKeyedGeneric) + + virtual void PrintDataTo(StringStream* stream) V8_OVERRIDE; + + StrictMode strict_mode() { return hydrogen()->strict_mode(); } +}; + + +class LTransitionElementsKind V8_FINAL : public LTemplateInstruction<0, 2, 2> { + public: + LTransitionElementsKind(LOperand* object, + LOperand* context, + LOperand* new_map_temp, + LOperand* temp) { + inputs_[0] = object; + inputs_[1] = context; + temps_[0] = new_map_temp; + temps_[1] = temp; + } + + LOperand* context() { return inputs_[1]; } + LOperand* object() { return inputs_[0]; } + LOperand* new_map_temp() { return temps_[0]; } + LOperand* temp() { return temps_[1]; } + + DECLARE_CONCRETE_INSTRUCTION(TransitionElementsKind, + "transition-elements-kind") + DECLARE_HYDROGEN_ACCESSOR(TransitionElementsKind) + + virtual void PrintDataTo(StringStream* stream) V8_OVERRIDE; + + Handle<Map> original_map() { return hydrogen()->original_map().handle(); } + Handle<Map> transitioned_map() { + return hydrogen()->transitioned_map().handle(); + } + ElementsKind from_kind() { return hydrogen()->from_kind(); } + ElementsKind to_kind() { return hydrogen()->to_kind(); } +}; + + +class LTrapAllocationMemento V8_FINAL : public LTemplateInstruction<0, 1, 1> { + public: + LTrapAllocationMemento(LOperand* object, + LOperand* temp) { + inputs_[0] = object; + temps_[0] = temp; + } + + LOperand* object() { return inputs_[0]; } + LOperand* temp() { return temps_[0]; } + + DECLARE_CONCRETE_INSTRUCTION(TrapAllocationMemento, + "trap-allocation-memento") +}; + + +class LStringAdd V8_FINAL : public LTemplateInstruction<1, 3, 0> { + public: + LStringAdd(LOperand* context, LOperand* left, LOperand* right) { + inputs_[0] = context; + inputs_[1] = left; + inputs_[2] = right; + } + + LOperand* context() { return inputs_[0]; } + LOperand* left() { return inputs_[1]; } + LOperand* right() { return inputs_[2]; } + + DECLARE_CONCRETE_INSTRUCTION(StringAdd, "string-add") + DECLARE_HYDROGEN_ACCESSOR(StringAdd) +}; + + +class LStringCharCodeAt V8_FINAL : public LTemplateInstruction<1, 3, 0> { + public: + LStringCharCodeAt(LOperand* context, LOperand* string, LOperand* index) { + inputs_[0] = context; + inputs_[1] = string; + inputs_[2] = index; + } + + LOperand* context() { return inputs_[0]; } + LOperand* string() { return inputs_[1]; } + LOperand* index() { return inputs_[2]; } + + DECLARE_CONCRETE_INSTRUCTION(StringCharCodeAt, "string-char-code-at") + DECLARE_HYDROGEN_ACCESSOR(StringCharCodeAt) +}; + + +class LStringCharFromCode V8_FINAL : public LTemplateInstruction<1, 2, 0> { + public: + LStringCharFromCode(LOperand* context, LOperand* char_code) { + inputs_[0] = context; + inputs_[1] = char_code; + } + + LOperand* context() { return inputs_[0]; } + LOperand* char_code() { return inputs_[1]; } + + DECLARE_CONCRETE_INSTRUCTION(StringCharFromCode, "string-char-from-code") + 
DECLARE_HYDROGEN_ACCESSOR(StringCharFromCode)
+};
+
+
+class LCheckValue V8_FINAL : public LTemplateInstruction<0, 1, 0> {
+ public:
+  explicit LCheckValue(LOperand* value) {
+    inputs_[0] = value;
+  }
+
+  LOperand* value() { return inputs_[0]; }
+
+  DECLARE_CONCRETE_INSTRUCTION(CheckValue, "check-value")
+  DECLARE_HYDROGEN_ACCESSOR(CheckValue)
+};
+
+
+class LCheckInstanceType V8_FINAL : public LTemplateInstruction<0, 1, 1> {
+ public:
+  LCheckInstanceType(LOperand* value, LOperand* temp) {
+    inputs_[0] = value;
+    temps_[0] = temp;
+  }
+
+  LOperand* value() { return inputs_[0]; }
+  LOperand* temp() { return temps_[0]; }
+
+  DECLARE_CONCRETE_INSTRUCTION(CheckInstanceType, "check-instance-type")
+  DECLARE_HYDROGEN_ACCESSOR(CheckInstanceType)
+};
+
+
+class LCheckMaps V8_FINAL : public LTemplateInstruction<0, 1, 0> {
+ public:
+  explicit LCheckMaps(LOperand* value = NULL) {
+    inputs_[0] = value;
+  }
+
+  LOperand* value() { return inputs_[0]; }
+
+  DECLARE_CONCRETE_INSTRUCTION(CheckMaps, "check-maps")
+  DECLARE_HYDROGEN_ACCESSOR(CheckMaps)
+};
+
+
+class LCheckSmi V8_FINAL : public LTemplateInstruction<1, 1, 0> {
+ public:
+  explicit LCheckSmi(LOperand* value) {
+    inputs_[0] = value;
+  }
+
+  LOperand* value() { return inputs_[0]; }
+
+  DECLARE_CONCRETE_INSTRUCTION(CheckSmi, "check-smi")
+};
+
+
+class LClampDToUint8 V8_FINAL : public LTemplateInstruction<1, 1, 0> {
+ public:
+  explicit LClampDToUint8(LOperand* value) {
+    inputs_[0] = value;
+  }
+
+  LOperand* unclamped() { return inputs_[0]; }
+
+  DECLARE_CONCRETE_INSTRUCTION(ClampDToUint8, "clamp-d-to-uint8")
+};
+
+
+class LClampIToUint8 V8_FINAL : public LTemplateInstruction<1, 1, 0> {
+ public:
+  explicit LClampIToUint8(LOperand* value) {
+    inputs_[0] = value;
+  }
+
+  LOperand* unclamped() { return inputs_[0]; }
+
+  DECLARE_CONCRETE_INSTRUCTION(ClampIToUint8, "clamp-i-to-uint8")
+};
+
+
+// Clamping conversion from a tagged value to a uint8.
+class LClampTToUint8NoSSE2 V8_FINAL : public LTemplateInstruction<1, 1, 3> { + public: + LClampTToUint8NoSSE2(LOperand* unclamped, + LOperand* temp1, + LOperand* temp2, + LOperand* temp3) { + inputs_[0] = unclamped; + temps_[0] = temp1; + temps_[1] = temp2; + temps_[2] = temp3; + } + + LOperand* unclamped() { return inputs_[0]; } + LOperand* scratch() { return temps_[0]; } + LOperand* scratch2() { return temps_[1]; } + LOperand* scratch3() { return temps_[2]; } + + DECLARE_CONCRETE_INSTRUCTION(ClampTToUint8NoSSE2, + "clamp-t-to-uint8-nosse2") + DECLARE_HYDROGEN_ACCESSOR(UnaryOperation) +}; + + +class LCheckNonSmi V8_FINAL : public LTemplateInstruction<0, 1, 0> { + public: + explicit LCheckNonSmi(LOperand* value) { + inputs_[0] = value; + } + + LOperand* value() { return inputs_[0]; } + + DECLARE_CONCRETE_INSTRUCTION(CheckNonSmi, "check-non-smi") + DECLARE_HYDROGEN_ACCESSOR(CheckHeapObject) +}; + + +class LDoubleBits V8_FINAL : public LTemplateInstruction<1, 1, 0> { + public: + explicit LDoubleBits(LOperand* value) { + inputs_[0] = value; + } + + LOperand* value() { return inputs_[0]; } + + DECLARE_CONCRETE_INSTRUCTION(DoubleBits, "double-bits") + DECLARE_HYDROGEN_ACCESSOR(DoubleBits) +}; + + +class LConstructDouble V8_FINAL : public LTemplateInstruction<1, 2, 0> { + public: + LConstructDouble(LOperand* hi, LOperand* lo) { + inputs_[0] = hi; + inputs_[1] = lo; + } + + LOperand* hi() { return inputs_[0]; } + LOperand* lo() { return inputs_[1]; } + + DECLARE_CONCRETE_INSTRUCTION(ConstructDouble, "construct-double") +}; + + +class LAllocate V8_FINAL : public LTemplateInstruction<1, 2, 1> { + public: + LAllocate(LOperand* context, LOperand* size, LOperand* temp) { + inputs_[0] = context; + inputs_[1] = size; + temps_[0] = temp; + } + + LOperand* context() { return inputs_[0]; } + LOperand* size() { return inputs_[1]; } + LOperand* temp() { return temps_[0]; } + + DECLARE_CONCRETE_INSTRUCTION(Allocate, "allocate") + DECLARE_HYDROGEN_ACCESSOR(Allocate) +}; + + +class LRegExpLiteral V8_FINAL : public LTemplateInstruction<1, 1, 0> { + public: + explicit LRegExpLiteral(LOperand* context) { + inputs_[0] = context; + } + + LOperand* context() { return inputs_[0]; } + + DECLARE_CONCRETE_INSTRUCTION(RegExpLiteral, "regexp-literal") + DECLARE_HYDROGEN_ACCESSOR(RegExpLiteral) +}; + + +class LFunctionLiteral V8_FINAL : public LTemplateInstruction<1, 1, 0> { + public: + explicit LFunctionLiteral(LOperand* context) { + inputs_[0] = context; + } + + LOperand* context() { return inputs_[0]; } + + DECLARE_CONCRETE_INSTRUCTION(FunctionLiteral, "function-literal") + DECLARE_HYDROGEN_ACCESSOR(FunctionLiteral) +}; + + +class LToFastProperties V8_FINAL : public LTemplateInstruction<1, 1, 0> { + public: + explicit LToFastProperties(LOperand* value) { + inputs_[0] = value; + } + + LOperand* value() { return inputs_[0]; } + + DECLARE_CONCRETE_INSTRUCTION(ToFastProperties, "to-fast-properties") + DECLARE_HYDROGEN_ACCESSOR(ToFastProperties) +}; + + +class LTypeof V8_FINAL : public LTemplateInstruction<1, 2, 0> { + public: + LTypeof(LOperand* context, LOperand* value) { + inputs_[0] = context; + inputs_[1] = value; + } + + LOperand* context() { return inputs_[0]; } + LOperand* value() { return inputs_[1]; } + + DECLARE_CONCRETE_INSTRUCTION(Typeof, "typeof") +}; + + +class LTypeofIsAndBranch V8_FINAL : public LControlInstruction<1, 0> { + public: + explicit LTypeofIsAndBranch(LOperand* value) { + inputs_[0] = value; + } + + LOperand* value() { return inputs_[0]; } + + DECLARE_CONCRETE_INSTRUCTION(TypeofIsAndBranch, 
"typeof-is-and-branch") + DECLARE_HYDROGEN_ACCESSOR(TypeofIsAndBranch) + + Handle<String> type_literal() { return hydrogen()->type_literal(); } + + virtual void PrintDataTo(StringStream* stream) V8_OVERRIDE; +}; + + +class LOsrEntry V8_FINAL : public LTemplateInstruction<0, 0, 0> { + public: + virtual bool HasInterestingComment(LCodeGen* gen) const V8_OVERRIDE { + return false; + } + DECLARE_CONCRETE_INSTRUCTION(OsrEntry, "osr-entry") +}; + + +class LStackCheck V8_FINAL : public LTemplateInstruction<0, 1, 0> { + public: + explicit LStackCheck(LOperand* context) { + inputs_[0] = context; + } + + LOperand* context() { return inputs_[0]; } + + DECLARE_CONCRETE_INSTRUCTION(StackCheck, "stack-check") + DECLARE_HYDROGEN_ACCESSOR(StackCheck) + + Label* done_label() { return &done_label_; } + + private: + Label done_label_; +}; + + +class LForInPrepareMap V8_FINAL : public LTemplateInstruction<1, 2, 0> { + public: + LForInPrepareMap(LOperand* context, LOperand* object) { + inputs_[0] = context; + inputs_[1] = object; + } + + LOperand* context() { return inputs_[0]; } + LOperand* object() { return inputs_[1]; } + + DECLARE_CONCRETE_INSTRUCTION(ForInPrepareMap, "for-in-prepare-map") +}; + + +class LForInCacheArray V8_FINAL : public LTemplateInstruction<1, 1, 0> { + public: + explicit LForInCacheArray(LOperand* map) { + inputs_[0] = map; + } + + LOperand* map() { return inputs_[0]; } + + DECLARE_CONCRETE_INSTRUCTION(ForInCacheArray, "for-in-cache-array") + + int idx() { + return HForInCacheArray::cast(this->hydrogen_value())->idx(); + } +}; + + +class LCheckMapValue V8_FINAL : public LTemplateInstruction<0, 2, 0> { + public: + LCheckMapValue(LOperand* value, LOperand* map) { + inputs_[0] = value; + inputs_[1] = map; + } + + LOperand* value() { return inputs_[0]; } + LOperand* map() { return inputs_[1]; } + + DECLARE_CONCRETE_INSTRUCTION(CheckMapValue, "check-map-value") +}; + + +class LLoadFieldByIndex V8_FINAL : public LTemplateInstruction<1, 2, 0> { + public: + LLoadFieldByIndex(LOperand* object, LOperand* index) { + inputs_[0] = object; + inputs_[1] = index; + } + + LOperand* object() { return inputs_[0]; } + LOperand* index() { return inputs_[1]; } + + DECLARE_CONCRETE_INSTRUCTION(LoadFieldByIndex, "load-field-by-index") +}; + + +class LStoreFrameContext: public LTemplateInstruction<0, 1, 0> { + public: + explicit LStoreFrameContext(LOperand* context) { + inputs_[0] = context; + } + + LOperand* context() { return inputs_[0]; } + + DECLARE_CONCRETE_INSTRUCTION(StoreFrameContext, "store-frame-context") +}; + + +class LAllocateBlockContext: public LTemplateInstruction<1, 2, 0> { + public: + LAllocateBlockContext(LOperand* context, LOperand* function) { + inputs_[0] = context; + inputs_[1] = function; + } + + LOperand* context() { return inputs_[0]; } + LOperand* function() { return inputs_[1]; } + + Handle<ScopeInfo> scope_info() { return hydrogen()->scope_info(); } + + DECLARE_CONCRETE_INSTRUCTION(AllocateBlockContext, "allocate-block-context") + DECLARE_HYDROGEN_ACCESSOR(AllocateBlockContext) +}; + + +class LChunkBuilder; +class LPlatformChunk V8_FINAL : public LChunk { + public: + LPlatformChunk(CompilationInfo* info, HGraph* graph) + : LChunk(info, graph), + num_double_slots_(0) { } + + int GetNextSpillIndex(RegisterKind kind); + LOperand* GetNextSpillSlot(RegisterKind kind); + + int num_double_slots() const { return num_double_slots_; } + + private: + int num_double_slots_; +}; + + +class LChunkBuilder V8_FINAL : public LChunkBuilderBase { + public: + LChunkBuilder(CompilationInfo* info, 
HGraph* graph, LAllocator* allocator)
+      : LChunkBuilderBase(graph->zone()),
+        chunk_(NULL),
+        info_(info),
+        graph_(graph),
+        status_(UNUSED),
+        current_instruction_(NULL),
+        current_block_(NULL),
+        next_block_(NULL),
+        allocator_(allocator) { }
+
+  Isolate* isolate() const { return graph_->isolate(); }
+
+  // Build the sequence for the graph.
+  LPlatformChunk* Build();
+
+  // Declare methods that deal with the individual node types.
+#define DECLARE_DO(type) LInstruction* Do##type(H##type* node);
+  HYDROGEN_CONCRETE_INSTRUCTION_LIST(DECLARE_DO)
+#undef DECLARE_DO
+
+  LInstruction* DoMathFloor(HUnaryMathOperation* instr);
+  LInstruction* DoMathRound(HUnaryMathOperation* instr);
+  LInstruction* DoMathFround(HUnaryMathOperation* instr);
+  LInstruction* DoMathAbs(HUnaryMathOperation* instr);
+  LInstruction* DoMathLog(HUnaryMathOperation* instr);
+  LInstruction* DoMathExp(HUnaryMathOperation* instr);
+  LInstruction* DoMathSqrt(HUnaryMathOperation* instr);
+  LInstruction* DoMathPowHalf(HUnaryMathOperation* instr);
+  LInstruction* DoMathClz32(HUnaryMathOperation* instr);
+  LInstruction* DoDivByPowerOf2I(HDiv* instr);
+  LInstruction* DoDivByConstI(HDiv* instr);
+  LInstruction* DoDivI(HDiv* instr);
+  LInstruction* DoModByPowerOf2I(HMod* instr);
+  LInstruction* DoModByConstI(HMod* instr);
+  LInstruction* DoModI(HMod* instr);
+  LInstruction* DoFlooringDivByPowerOf2I(HMathFloorOfDiv* instr);
+  LInstruction* DoFlooringDivByConstI(HMathFloorOfDiv* instr);
+  LInstruction* DoFlooringDivI(HMathFloorOfDiv* instr);
+
+ private:
+  enum Status {
+    UNUSED,
+    BUILDING,
+    DONE,
+    ABORTED
+  };
+
+  LPlatformChunk* chunk() const { return chunk_; }
+  CompilationInfo* info() const { return info_; }
+  HGraph* graph() const { return graph_; }
+
+  bool is_unused() const { return status_ == UNUSED; }
+  bool is_building() const { return status_ == BUILDING; }
+  bool is_done() const { return status_ == DONE; }
+  bool is_aborted() const { return status_ == ABORTED; }
+
+  void Abort(BailoutReason reason);
+
+  // Methods for getting operands for Use / Define / Temp.
+  LUnallocated* ToUnallocated(Register reg);
+  LUnallocated* ToUnallocated(X87Register reg);
+
+  // Methods for setting up define-use relationships.
+  MUST_USE_RESULT LOperand* Use(HValue* value, LUnallocated* operand);
+  MUST_USE_RESULT LOperand* UseFixed(HValue* value, Register fixed_register);
+
+  // A value that is guaranteed to be allocated to a register.
+  // An operand created by UseRegister is guaranteed to be live until the end
+  // of the instruction, so the register allocator will not reuse its
+  // register for any other operand inside the instruction.
+  // An operand created by UseRegisterAtStart is guaranteed to be live only
+  // at the instruction start, so the register allocator is free to assign
+  // the same register to some other operand used inside the instruction
+  // (i.e. a temporary or the output).
+  MUST_USE_RESULT LOperand* UseRegister(HValue* value);
+  MUST_USE_RESULT LOperand* UseRegisterAtStart(HValue* value);
+
+  // An input operand in a register that may be trashed.
+  MUST_USE_RESULT LOperand* UseTempRegister(HValue* value);
+
+  // An input operand in a register or stack slot.
+  MUST_USE_RESULT LOperand* Use(HValue* value);
+  MUST_USE_RESULT LOperand* UseAtStart(HValue* value);
+
+  // An input operand in a register, stack slot or a constant operand.
+  MUST_USE_RESULT LOperand* UseOrConstant(HValue* value);
+  MUST_USE_RESULT LOperand* UseOrConstantAtStart(HValue* value);
+
+  // An input operand in a fixed register or a constant operand.
+ MUST_USE_RESULT LOperand* UseFixedOrConstant(HValue* value, + Register fixed_register); + + // An input operand in a register or a constant operand. + MUST_USE_RESULT LOperand* UseRegisterOrConstant(HValue* value); + MUST_USE_RESULT LOperand* UseRegisterOrConstantAtStart(HValue* value); + + // An input operand in a constant operand. + MUST_USE_RESULT LOperand* UseConstant(HValue* value); + + // An input operand in register, stack slot or a constant operand. + // Will not be moved to a register even if one is freely available. + virtual MUST_USE_RESULT LOperand* UseAny(HValue* value) V8_OVERRIDE; + + // Temporary operand that must be in a register. + MUST_USE_RESULT LUnallocated* TempRegister(); + MUST_USE_RESULT LOperand* FixedTemp(Register reg); + + // Methods for setting up define-use relationships. + // Return the same instruction that they are passed. + LInstruction* Define(LTemplateResultInstruction<1>* instr, + LUnallocated* result); + LInstruction* DefineAsRegister(LTemplateResultInstruction<1>* instr); + LInstruction* DefineAsSpilled(LTemplateResultInstruction<1>* instr, + int index); + LInstruction* DefineSameAsFirst(LTemplateResultInstruction<1>* instr); + LInstruction* DefineFixed(LTemplateResultInstruction<1>* instr, + Register reg); + LInstruction* DefineX87TOS(LTemplateResultInstruction<1>* instr); + // Assigns an environment to an instruction. An instruction which can + // deoptimize must have an environment. + LInstruction* AssignEnvironment(LInstruction* instr); + // Assigns a pointer map to an instruction. An instruction which can + // trigger a GC or a lazy deoptimization must have a pointer map. + LInstruction* AssignPointerMap(LInstruction* instr); + + enum CanDeoptimize { CAN_DEOPTIMIZE_EAGERLY, CANNOT_DEOPTIMIZE_EAGERLY }; + + LOperand* GetSeqStringSetCharOperand(HSeqStringSetChar* instr); + + // Marks a call for the register allocator. Assigns a pointer map to + // support GC and lazy deoptimization. Assigns an environment to support + // eager deoptimization if CAN_DEOPTIMIZE_EAGERLY. + LInstruction* MarkAsCall( + LInstruction* instr, + HInstruction* hinstr, + CanDeoptimize can_deoptimize = CANNOT_DEOPTIMIZE_EAGERLY); + + void VisitInstruction(HInstruction* current); + void AddInstruction(LInstruction* instr, HInstruction* current); + + void DoBasicBlock(HBasicBlock* block, HBasicBlock* next_block); + LInstruction* DoShift(Token::Value op, HBitwiseBinaryOperation* instr); + LInstruction* DoArithmeticD(Token::Value op, + HArithmeticBinaryOperation* instr); + LInstruction* DoArithmeticT(Token::Value op, + HBinaryOperation* instr); + + LOperand* GetStoreKeyedValueOperand(HStoreKeyed* instr); + + LPlatformChunk* chunk_; + CompilationInfo* info_; + HGraph* const graph_; + Status status_; + HInstruction* current_instruction_; + HBasicBlock* current_block_; + HBasicBlock* next_block_; + LAllocator* allocator_; + + DISALLOW_COPY_AND_ASSIGN(LChunkBuilder); +}; + +#undef DECLARE_HYDROGEN_ACCESSOR +#undef DECLARE_CONCRETE_INSTRUCTION + +} } // namespace v8::internal + +#endif // V8_X87_LITHIUM_X87_H_ diff --git a/deps/v8/src/x87/macro-assembler-x87.cc b/deps/v8/src/x87/macro-assembler-x87.cc new file mode 100644 index 00000000000..6196d8f7ae4 --- /dev/null +++ b/deps/v8/src/x87/macro-assembler-x87.cc @@ -0,0 +1,3327 @@ +// Copyright 2012 the V8 project authors. All rights reserved. +// Use of this source code is governed by a BSD-style license that can be +// found in the LICENSE file. 
+ +#include "src/v8.h" + +#if V8_TARGET_ARCH_X87 + +#include "src/bootstrapper.h" +#include "src/codegen.h" +#include "src/cpu-profiler.h" +#include "src/debug.h" +#include "src/isolate-inl.h" +#include "src/runtime.h" +#include "src/serialize.h" + +namespace v8 { +namespace internal { + +// ------------------------------------------------------------------------- +// MacroAssembler implementation. + +MacroAssembler::MacroAssembler(Isolate* arg_isolate, void* buffer, int size) + : Assembler(arg_isolate, buffer, size), + generating_stub_(false), + has_frame_(false) { + if (isolate() != NULL) { + // TODO(titzer): should we just use a null handle here instead? + code_object_ = Handle<Object>(isolate()->heap()->undefined_value(), + isolate()); + } +} + + +void MacroAssembler::Load(Register dst, const Operand& src, Representation r) { + DCHECK(!r.IsDouble()); + if (r.IsInteger8()) { + movsx_b(dst, src); + } else if (r.IsUInteger8()) { + movzx_b(dst, src); + } else if (r.IsInteger16()) { + movsx_w(dst, src); + } else if (r.IsUInteger16()) { + movzx_w(dst, src); + } else { + mov(dst, src); + } +} + + +void MacroAssembler::Store(Register src, const Operand& dst, Representation r) { + DCHECK(!r.IsDouble()); + if (r.IsInteger8() || r.IsUInteger8()) { + mov_b(dst, src); + } else if (r.IsInteger16() || r.IsUInteger16()) { + mov_w(dst, src); + } else { + if (r.IsHeapObject()) { + AssertNotSmi(src); + } else if (r.IsSmi()) { + AssertSmi(src); + } + mov(dst, src); + } +} + + +void MacroAssembler::LoadRoot(Register destination, Heap::RootListIndex index) { + if (isolate()->heap()->RootCanBeTreatedAsConstant(index)) { + Handle<Object> value(&isolate()->heap()->roots_array_start()[index]); + mov(destination, value); + return; + } + ExternalReference roots_array_start = + ExternalReference::roots_array_start(isolate()); + mov(destination, Immediate(index)); + mov(destination, Operand::StaticArray(destination, + times_pointer_size, + roots_array_start)); +} + + +void MacroAssembler::StoreRoot(Register source, + Register scratch, + Heap::RootListIndex index) { + DCHECK(Heap::RootCanBeWrittenAfterInitialization(index)); + ExternalReference roots_array_start = + ExternalReference::roots_array_start(isolate()); + mov(scratch, Immediate(index)); + mov(Operand::StaticArray(scratch, times_pointer_size, roots_array_start), + source); +} + + +void MacroAssembler::CompareRoot(Register with, + Register scratch, + Heap::RootListIndex index) { + ExternalReference roots_array_start = + ExternalReference::roots_array_start(isolate()); + mov(scratch, Immediate(index)); + cmp(with, Operand::StaticArray(scratch, + times_pointer_size, + roots_array_start)); +} + + +void MacroAssembler::CompareRoot(Register with, Heap::RootListIndex index) { + DCHECK(isolate()->heap()->RootCanBeTreatedAsConstant(index)); + Handle<Object> value(&isolate()->heap()->roots_array_start()[index]); + cmp(with, value); +} + + +void MacroAssembler::CompareRoot(const Operand& with, + Heap::RootListIndex index) { + DCHECK(isolate()->heap()->RootCanBeTreatedAsConstant(index)); + Handle<Object> value(&isolate()->heap()->roots_array_start()[index]); + cmp(with, value); +} + + +void MacroAssembler::InNewSpace( + Register object, + Register scratch, + Condition cc, + Label* condition_met, + Label::Distance condition_met_distance) { + DCHECK(cc == equal || cc == not_equal); + if (scratch.is(object)) { + and_(scratch, Immediate(~Page::kPageAlignmentMask)); + } else { + mov(scratch, Immediate(~Page::kPageAlignmentMask)); + and_(scratch, object); + } + // Check 
that we can use a test_b. + DCHECK(MemoryChunk::IN_FROM_SPACE < 8); + DCHECK(MemoryChunk::IN_TO_SPACE < 8); + int mask = (1 << MemoryChunk::IN_FROM_SPACE) + | (1 << MemoryChunk::IN_TO_SPACE); + // If non-zero, the page belongs to new-space. + test_b(Operand(scratch, MemoryChunk::kFlagsOffset), + static_cast<uint8_t>(mask)); + j(cc, condition_met, condition_met_distance); +} + + +void MacroAssembler::RememberedSetHelper( + Register object, // Only used for debug checks. + Register addr, + Register scratch, + MacroAssembler::RememberedSetFinalAction and_then) { + Label done; + if (emit_debug_code()) { + Label ok; + JumpIfNotInNewSpace(object, scratch, &ok, Label::kNear); + int3(); + bind(&ok); + } + // Load store buffer top. + ExternalReference store_buffer = + ExternalReference::store_buffer_top(isolate()); + mov(scratch, Operand::StaticVariable(store_buffer)); + // Store pointer to buffer. + mov(Operand(scratch, 0), addr); + // Increment buffer top. + add(scratch, Immediate(kPointerSize)); + // Write back new top of buffer. + mov(Operand::StaticVariable(store_buffer), scratch); + // Call stub on end of buffer. + // Check for end of buffer. + test(scratch, Immediate(StoreBuffer::kStoreBufferOverflowBit)); + if (and_then == kReturnAtEnd) { + Label buffer_overflowed; + j(not_equal, &buffer_overflowed, Label::kNear); + ret(0); + bind(&buffer_overflowed); + } else { + DCHECK(and_then == kFallThroughAtEnd); + j(equal, &done, Label::kNear); + } + StoreBufferOverflowStub store_buffer_overflow = + StoreBufferOverflowStub(isolate()); + CallStub(&store_buffer_overflow); + if (and_then == kReturnAtEnd) { + ret(0); + } else { + DCHECK(and_then == kFallThroughAtEnd); + bind(&done); + } +} + + +void MacroAssembler::ClampUint8(Register reg) { + Label done; + test(reg, Immediate(0xFFFFFF00)); + j(zero, &done, Label::kNear); + setcc(negative, reg); // 1 if negative, 0 if positive. + dec_b(reg); // 0 if negative, 255 if positive. + bind(&done); +} + + +void MacroAssembler::SlowTruncateToI(Register result_reg, + Register input_reg, + int offset) { + DoubleToIStub stub(isolate(), input_reg, result_reg, offset, true); + call(stub.GetCode(), RelocInfo::CODE_TARGET); +} + + +void MacroAssembler::TruncateX87TOSToI(Register result_reg) { + sub(esp, Immediate(kDoubleSize)); + fst_d(MemOperand(esp, 0)); + SlowTruncateToI(result_reg, esp, 0); + add(esp, Immediate(kDoubleSize)); +} + + +void MacroAssembler::X87TOSToI(Register result_reg, + MinusZeroMode minus_zero_mode, + Label* conversion_failed, + Label::Distance dst) { + Label done; + sub(esp, Immediate(kPointerSize)); + fld(0); + fist_s(MemOperand(esp, 0)); + fild_s(MemOperand(esp, 0)); + pop(result_reg); + FCmp(); + j(not_equal, conversion_failed, dst); + j(parity_even, conversion_failed, dst); + if (minus_zero_mode == FAIL_ON_MINUS_ZERO) { + test(result_reg, Operand(result_reg)); + j(not_zero, &done, Label::kNear); + // To check for minus zero, we load the value again as float, and check + // if that is still 0. 
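+  // Both +0.0 and -0.0 convert to the integer 0, but they differ in their
+  // raw bits: -0.0 stored as a single is 0x80000000 (only the sign bit
+  // set), which the test below treats as a failed conversion.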
+ sub(esp, Immediate(kPointerSize)); + fst_s(MemOperand(esp, 0)); + pop(result_reg); + test(result_reg, Operand(result_reg)); + j(not_zero, conversion_failed, dst); + } + bind(&done); +} + + +void MacroAssembler::TruncateHeapNumberToI(Register result_reg, + Register input_reg) { + Label done, slow_case; + + SlowTruncateToI(result_reg, input_reg); + bind(&done); +} + + +void MacroAssembler::TaggedToI(Register result_reg, + Register input_reg, + MinusZeroMode minus_zero_mode, + Label* lost_precision) { + Label done; + + cmp(FieldOperand(input_reg, HeapObject::kMapOffset), + isolate()->factory()->heap_number_map()); + j(not_equal, lost_precision, Label::kNear); + + // TODO(olivf) Converting a number on the fpu is actually quite slow. We + // should first try a fast conversion and then bailout to this slow case. + Label lost_precision_pop, zero_check; + Label* lost_precision_int = (minus_zero_mode == FAIL_ON_MINUS_ZERO) + ? &lost_precision_pop : lost_precision; + sub(esp, Immediate(kPointerSize)); + fld_d(FieldOperand(input_reg, HeapNumber::kValueOffset)); + if (minus_zero_mode == FAIL_ON_MINUS_ZERO) fld(0); + fist_s(MemOperand(esp, 0)); + fild_s(MemOperand(esp, 0)); + FCmp(); + pop(result_reg); + j(not_equal, lost_precision_int, Label::kNear); + j(parity_even, lost_precision_int, Label::kNear); // NaN. + if (minus_zero_mode == FAIL_ON_MINUS_ZERO) { + test(result_reg, Operand(result_reg)); + j(zero, &zero_check, Label::kNear); + fstp(0); + jmp(&done, Label::kNear); + bind(&zero_check); + // To check for minus zero, we load the value again as float, and check + // if that is still 0. + sub(esp, Immediate(kPointerSize)); + fstp_s(Operand(esp, 0)); + pop(result_reg); + test(result_reg, Operand(result_reg)); + j(zero, &done, Label::kNear); + jmp(lost_precision, Label::kNear); + + bind(&lost_precision_pop); + fstp(0); + jmp(lost_precision, Label::kNear); + } + bind(&done); +} + + +void MacroAssembler::LoadUint32NoSSE2(Register src) { + Label done; + push(src); + fild_s(Operand(esp, 0)); + cmp(src, Immediate(0)); + j(not_sign, &done, Label::kNear); + ExternalReference uint32_bias = + ExternalReference::address_of_uint32_bias(); + fld_d(Operand::StaticVariable(uint32_bias)); + faddp(1); + bind(&done); + add(esp, Immediate(kPointerSize)); +} + + +void MacroAssembler::RecordWriteArray( + Register object, + Register value, + Register index, + RememberedSetAction remembered_set_action, + SmiCheck smi_check, + PointersToHereCheck pointers_to_here_check_for_value) { + // First, check if a write barrier is even needed. The tests below + // catch stores of Smis. + Label done; + + // Skip barrier if writing a smi. + if (smi_check == INLINE_SMI_CHECK) { + DCHECK_EQ(0, kSmiTag); + test(value, Immediate(kSmiTagMask)); + j(zero, &done); + } + + // Array access: calculate the destination address in the same manner as + // KeyedStoreIC::GenerateGeneric. Multiply a smi by 2 to get an offset + // into an array of words. + Register dst = index; + lea(dst, Operand(object, index, times_half_pointer_size, + FixedArray::kHeaderSize - kHeapObjectTag)); + + RecordWrite(object, dst, value, remembered_set_action, OMIT_SMI_CHECK, + pointers_to_here_check_for_value); + + bind(&done); + + // Clobber clobbered input registers when running with the debug-code flag + // turned on to provoke errors. 
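+  // kZapValue is a distinctive garbage pattern, so a stale use of one of
+  // these registers fails loudly instead of silently reading a live value.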
+  if (emit_debug_code()) {
+    mov(value, Immediate(BitCast<int32_t>(kZapValue)));
+    mov(index, Immediate(BitCast<int32_t>(kZapValue)));
+  }
+}
+
+
+void MacroAssembler::RecordWriteField(
+    Register object,
+    int offset,
+    Register value,
+    Register dst,
+    RememberedSetAction remembered_set_action,
+    SmiCheck smi_check,
+    PointersToHereCheck pointers_to_here_check_for_value) {
+  // First, check if a write barrier is even needed. The tests below
+  // catch stores of Smis.
+  Label done;
+
+  // Skip barrier if writing a smi.
+  if (smi_check == INLINE_SMI_CHECK) {
+    JumpIfSmi(value, &done, Label::kNear);
+  }
+
+  // Although the object register is tagged, the offset is relative to the
+  // start of the object, so the offset must be a multiple of kPointerSize.
+  DCHECK(IsAligned(offset, kPointerSize));
+
+  lea(dst, FieldOperand(object, offset));
+  if (emit_debug_code()) {
+    Label ok;
+    test_b(dst, (1 << kPointerSizeLog2) - 1);
+    j(zero, &ok, Label::kNear);
+    int3();
+    bind(&ok);
+  }
+
+  RecordWrite(object, dst, value, remembered_set_action, OMIT_SMI_CHECK,
+              pointers_to_here_check_for_value);
+
+  bind(&done);
+
+  // Clobber clobbered input registers when running with the debug-code flag
+  // turned on to provoke errors.
+  if (emit_debug_code()) {
+    mov(value, Immediate(BitCast<int32_t>(kZapValue)));
+    mov(dst, Immediate(BitCast<int32_t>(kZapValue)));
+  }
+}
+
+
+void MacroAssembler::RecordWriteForMap(
+    Register object,
+    Handle<Map> map,
+    Register scratch1,
+    Register scratch2) {
+  Label done;
+
+  Register address = scratch1;
+  Register value = scratch2;
+  if (emit_debug_code()) {
+    Label ok;
+    lea(address, FieldOperand(object, HeapObject::kMapOffset));
+    test_b(address, (1 << kPointerSizeLog2) - 1);
+    j(zero, &ok, Label::kNear);
+    int3();
+    bind(&ok);
+  }
+
+  DCHECK(!object.is(value));
+  DCHECK(!object.is(address));
+  DCHECK(!value.is(address));
+  AssertNotSmi(object);
+
+  if (!FLAG_incremental_marking) {
+    return;
+  }
+
+  // Compute the address.
+  lea(address, FieldOperand(object, HeapObject::kMapOffset));
+
+  // A single check of the map's page's interesting flag suffices, since it
+  // is only set during incremental collection, and then it's also guaranteed
+  // that the from object's page's interesting flag is also set. This
+  // optimization relies on the fact that maps can never be in new space.
+  DCHECK(!isolate()->heap()->InNewSpace(*map));
+  CheckPageFlagForMap(map,
+                      MemoryChunk::kPointersToHereAreInterestingMask,
+                      zero,
+                      &done,
+                      Label::kNear);
+
+  RecordWriteStub stub(isolate(), object, value, address, OMIT_REMEMBERED_SET);
+  CallStub(&stub);
+
+  bind(&done);
+
+  // Count number of write barriers in generated code.
+  isolate()->counters()->write_barriers_static()->Increment();
+  IncrementCounter(isolate()->counters()->write_barriers_dynamic(), 1);
+
+  // Clobber clobbered input registers when running with the debug-code flag
+  // turned on to provoke errors.
+ if (emit_debug_code()) { + mov(value, Immediate(BitCast<int32_t>(kZapValue))); + mov(scratch1, Immediate(BitCast<int32_t>(kZapValue))); + mov(scratch2, Immediate(BitCast<int32_t>(kZapValue))); + } +} + + +void MacroAssembler::RecordWrite( + Register object, + Register address, + Register value, + RememberedSetAction remembered_set_action, + SmiCheck smi_check, + PointersToHereCheck pointers_to_here_check_for_value) { + DCHECK(!object.is(value)); + DCHECK(!object.is(address)); + DCHECK(!value.is(address)); + AssertNotSmi(object); + + if (remembered_set_action == OMIT_REMEMBERED_SET && + !FLAG_incremental_marking) { + return; + } + + if (emit_debug_code()) { + Label ok; + cmp(value, Operand(address, 0)); + j(equal, &ok, Label::kNear); + int3(); + bind(&ok); + } + + // First, check if a write barrier is even needed. The tests below + // catch stores of Smis and stores into young gen. + Label done; + + if (smi_check == INLINE_SMI_CHECK) { + // Skip barrier if writing a smi. + JumpIfSmi(value, &done, Label::kNear); + } + + if (pointers_to_here_check_for_value != kPointersToHereAreAlwaysInteresting) { + CheckPageFlag(value, + value, // Used as scratch. + MemoryChunk::kPointersToHereAreInterestingMask, + zero, + &done, + Label::kNear); + } + CheckPageFlag(object, + value, // Used as scratch. + MemoryChunk::kPointersFromHereAreInterestingMask, + zero, + &done, + Label::kNear); + + RecordWriteStub stub(isolate(), object, value, address, + remembered_set_action); + CallStub(&stub); + + bind(&done); + + // Count number of write barriers in generated code. + isolate()->counters()->write_barriers_static()->Increment(); + IncrementCounter(isolate()->counters()->write_barriers_dynamic(), 1); + + // Clobber clobbered registers when running with the debug-code flag + // turned on to provoke errors. 
+ if (emit_debug_code()) { + mov(address, Immediate(BitCast<int32_t>(kZapValue))); + mov(value, Immediate(BitCast<int32_t>(kZapValue))); + } +} + + +void MacroAssembler::DebugBreak() { + Move(eax, Immediate(0)); + mov(ebx, Immediate(ExternalReference(Runtime::kDebugBreak, isolate()))); + CEntryStub ces(isolate(), 1); + call(ces.GetCode(), RelocInfo::DEBUG_BREAK); +} + + +bool MacroAssembler::IsUnsafeImmediate(const Immediate& x) { + static const int kMaxImmediateBits = 17; + if (!RelocInfo::IsNone(x.rmode_)) return false; + return !is_intn(x.x_, kMaxImmediateBits); +} + + +void MacroAssembler::SafeMove(Register dst, const Immediate& x) { + if (IsUnsafeImmediate(x) && jit_cookie() != 0) { + Move(dst, Immediate(x.x_ ^ jit_cookie())); + xor_(dst, jit_cookie()); + } else { + Move(dst, x); + } +} + + +void MacroAssembler::SafePush(const Immediate& x) { + if (IsUnsafeImmediate(x) && jit_cookie() != 0) { + push(Immediate(x.x_ ^ jit_cookie())); + xor_(Operand(esp, 0), Immediate(jit_cookie())); + } else { + push(x); + } +} + + +void MacroAssembler::CmpObjectType(Register heap_object, + InstanceType type, + Register map) { + mov(map, FieldOperand(heap_object, HeapObject::kMapOffset)); + CmpInstanceType(map, type); +} + + +void MacroAssembler::CmpInstanceType(Register map, InstanceType type) { + cmpb(FieldOperand(map, Map::kInstanceTypeOffset), + static_cast<int8_t>(type)); +} + + +void MacroAssembler::CheckFastElements(Register map, + Label* fail, + Label::Distance distance) { + STATIC_ASSERT(FAST_SMI_ELEMENTS == 0); + STATIC_ASSERT(FAST_HOLEY_SMI_ELEMENTS == 1); + STATIC_ASSERT(FAST_ELEMENTS == 2); + STATIC_ASSERT(FAST_HOLEY_ELEMENTS == 3); + cmpb(FieldOperand(map, Map::kBitField2Offset), + Map::kMaximumBitField2FastHoleyElementValue); + j(above, fail, distance); +} + + +void MacroAssembler::CheckFastObjectElements(Register map, + Label* fail, + Label::Distance distance) { + STATIC_ASSERT(FAST_SMI_ELEMENTS == 0); + STATIC_ASSERT(FAST_HOLEY_SMI_ELEMENTS == 1); + STATIC_ASSERT(FAST_ELEMENTS == 2); + STATIC_ASSERT(FAST_HOLEY_ELEMENTS == 3); + cmpb(FieldOperand(map, Map::kBitField2Offset), + Map::kMaximumBitField2FastHoleySmiElementValue); + j(below_equal, fail, distance); + cmpb(FieldOperand(map, Map::kBitField2Offset), + Map::kMaximumBitField2FastHoleyElementValue); + j(above, fail, distance); +} + + +void MacroAssembler::CheckFastSmiElements(Register map, + Label* fail, + Label::Distance distance) { + STATIC_ASSERT(FAST_SMI_ELEMENTS == 0); + STATIC_ASSERT(FAST_HOLEY_SMI_ELEMENTS == 1); + cmpb(FieldOperand(map, Map::kBitField2Offset), + Map::kMaximumBitField2FastHoleySmiElementValue); + j(above, fail, distance); +} + + +void MacroAssembler::StoreNumberToDoubleElements( + Register maybe_number, + Register elements, + Register key, + Register scratch, + Label* fail, + int elements_offset) { + Label smi_value, done, maybe_nan, not_nan, is_nan, have_double_value; + JumpIfSmi(maybe_number, &smi_value, Label::kNear); + + CheckMap(maybe_number, + isolate()->factory()->heap_number_map(), + fail, + DONT_DO_SMI_CHECK); + + // Double value, canonicalize NaN. 
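+  // The upper 32 bits of a double hold the sign, the exponent and the top
+  // of the mantissa; an all-ones exponent marks a NaN or an Infinity. NaNs
+  // are rewritten below to one canonical bit pattern so that the hole NaN
+  // can never be stored as an ordinary element value.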
+  uint32_t offset = HeapNumber::kValueOffset + sizeof(kHoleNanLower32);
+  cmp(FieldOperand(maybe_number, offset),
+      Immediate(kNaNOrInfinityLowerBoundUpper32));
+  j(greater_equal, &maybe_nan, Label::kNear);
+
+  bind(&not_nan);
+  ExternalReference canonical_nan_reference =
+      ExternalReference::address_of_canonical_non_hole_nan();
+  fld_d(FieldOperand(maybe_number, HeapNumber::kValueOffset));
+  bind(&have_double_value);
+  fstp_d(FieldOperand(elements, key, times_4,
+                      FixedDoubleArray::kHeaderSize - elements_offset));
+  jmp(&done);
+
+  bind(&maybe_nan);
+  // Could be NaN or Infinity. If fraction is not zero, it's NaN, otherwise
+  // it's an Infinity, and the non-NaN code path applies.
+  j(greater, &is_nan, Label::kNear);
+  cmp(FieldOperand(maybe_number, HeapNumber::kValueOffset), Immediate(0));
+  j(zero, &not_nan);
+  bind(&is_nan);
+  fld_d(Operand::StaticVariable(canonical_nan_reference));
+  jmp(&have_double_value, Label::kNear);
+
+  bind(&smi_value);
+  // Value is a smi. Convert to a double and store.
+  // Preserve original value.
+  mov(scratch, maybe_number);
+  SmiUntag(scratch);
+  push(scratch);
+  fild_s(Operand(esp, 0));
+  pop(scratch);
+  fstp_d(FieldOperand(elements, key, times_4,
+                      FixedDoubleArray::kHeaderSize - elements_offset));
+  bind(&done);
+}
+
+
+void MacroAssembler::CompareMap(Register obj, Handle<Map> map) {
+  cmp(FieldOperand(obj, HeapObject::kMapOffset), map);
+}
+
+
+void MacroAssembler::CheckMap(Register obj,
+                              Handle<Map> map,
+                              Label* fail,
+                              SmiCheckType smi_check_type) {
+  if (smi_check_type == DO_SMI_CHECK) {
+    JumpIfSmi(obj, fail);
+  }
+
+  CompareMap(obj, map);
+  j(not_equal, fail);
+}
+
+
+void MacroAssembler::DispatchMap(Register obj,
+                                 Register unused,
+                                 Handle<Map> map,
+                                 Handle<Code> success,
+                                 SmiCheckType smi_check_type) {
+  Label fail;
+  if (smi_check_type == DO_SMI_CHECK) {
+    JumpIfSmi(obj, &fail);
+  }
+  cmp(FieldOperand(obj, HeapObject::kMapOffset), Immediate(map));
+  j(equal, success);
+
+  bind(&fail);
+}
+
+
+Condition MacroAssembler::IsObjectStringType(Register heap_object,
+                                             Register map,
+                                             Register instance_type) {
+  mov(map, FieldOperand(heap_object, HeapObject::kMapOffset));
+  movzx_b(instance_type, FieldOperand(map, Map::kInstanceTypeOffset));
+  STATIC_ASSERT(kNotStringTag != 0);
+  test(instance_type, Immediate(kIsNotStringMask));
+  return zero;
+}
+
+
+Condition MacroAssembler::IsObjectNameType(Register heap_object,
+                                           Register map,
+                                           Register instance_type) {
+  mov(map, FieldOperand(heap_object, HeapObject::kMapOffset));
+  movzx_b(instance_type, FieldOperand(map, Map::kInstanceTypeOffset));
+  cmpb(instance_type, static_cast<uint8_t>(LAST_NAME_TYPE));
+  return below_equal;
+}
+
+
+void MacroAssembler::IsObjectJSObjectType(Register heap_object,
+                                          Register map,
+                                          Register scratch,
+                                          Label* fail) {
+  mov(map, FieldOperand(heap_object, HeapObject::kMapOffset));
+  IsInstanceJSObjectType(map, scratch, fail);
+}
+
+
+void MacroAssembler::IsInstanceJSObjectType(Register map,
+                                            Register scratch,
+                                            Label* fail) {
+  movzx_b(scratch, FieldOperand(map, Map::kInstanceTypeOffset));
+  sub(scratch, Immediate(FIRST_NONCALLABLE_SPEC_OBJECT_TYPE));
+  cmp(scratch,
+      LAST_NONCALLABLE_SPEC_OBJECT_TYPE - FIRST_NONCALLABLE_SPEC_OBJECT_TYPE);
+  j(above, fail);
+}
+
+
+void MacroAssembler::FCmp() {
+  fucompp();
+  push(eax);
+  fnstsw_ax();
+  sahf();
+  pop(eax);
+}
+
+
+void MacroAssembler::AssertNumber(Register object) {
+  if (emit_debug_code()) {
+    Label ok;
+    JumpIfSmi(object, &ok);
+    cmp(FieldOperand(object, HeapObject::kMapOffset),
+        isolate()->factory()->heap_number_map());
+    Check(equal, kOperandNotANumber);
+    bind(&ok);
+  }
+}
+
+
+void MacroAssembler::AssertSmi(Register object) {
+  if (emit_debug_code()) {
+    test(object, Immediate(kSmiTagMask));
+    Check(equal, kOperandIsNotASmi);
+  }
+}
+
+
+void MacroAssembler::AssertString(Register object) {
+  if (emit_debug_code()) {
+    test(object, Immediate(kSmiTagMask));
+    Check(not_equal, kOperandIsASmiAndNotAString);
+    push(object);
+    mov(object, FieldOperand(object, HeapObject::kMapOffset));
+    CmpInstanceType(object, FIRST_NONSTRING_TYPE);
+    pop(object);
+    Check(below, kOperandIsNotAString);
+  }
+}
+
+
+void MacroAssembler::AssertName(Register object) {
+  if (emit_debug_code()) {
+    test(object, Immediate(kSmiTagMask));
+    Check(not_equal, kOperandIsASmiAndNotAName);
+    push(object);
+    mov(object, FieldOperand(object, HeapObject::kMapOffset));
+    CmpInstanceType(object, LAST_NAME_TYPE);
+    pop(object);
+    Check(below_equal, kOperandIsNotAName);
+  }
+}
+
+
+void MacroAssembler::AssertUndefinedOrAllocationSite(Register object) {
+  if (emit_debug_code()) {
+    Label done_checking;
+    AssertNotSmi(object);
+    cmp(object, isolate()->factory()->undefined_value());
+    j(equal, &done_checking);
+    cmp(FieldOperand(object, 0),
+        Immediate(isolate()->factory()->allocation_site_map()));
+    Assert(equal, kExpectedUndefinedOrCell);
+    bind(&done_checking);
+  }
+}
+
+
+void MacroAssembler::AssertNotSmi(Register object) {
+  if (emit_debug_code()) {
+    test(object, Immediate(kSmiTagMask));
+    Check(not_equal, kOperandIsASmi);
+  }
+}
+
+
+void MacroAssembler::StubPrologue() {
+  push(ebp);  // Caller's frame pointer.
+  mov(ebp, esp);
+  push(esi);  // Callee's context.
+  push(Immediate(Smi::FromInt(StackFrame::STUB)));
+}
+
+
+void MacroAssembler::Prologue(bool code_pre_aging) {
+  PredictableCodeSizeScope predictable_code_size_scope(this,
+      kNoCodeAgeSequenceLength);
+  if (code_pre_aging) {
+    // Pre-age the code.
+    call(isolate()->builtins()->MarkCodeAsExecutedOnce(),
+         RelocInfo::CODE_AGE_SEQUENCE);
+    Nop(kNoCodeAgeSequenceLength - Assembler::kCallInstructionLength);
+  } else {
+    push(ebp);  // Caller's frame pointer.
+    mov(ebp, esp);
+    push(esi);  // Callee's context.
+    push(edi);  // Callee's JS function.
+  }
+}
+
+
+void MacroAssembler::EnterFrame(StackFrame::Type type) {
+  push(ebp);
+  mov(ebp, esp);
+  push(esi);
+  push(Immediate(Smi::FromInt(type)));
+  push(Immediate(CodeObject()));
+  if (emit_debug_code()) {
+    cmp(Operand(esp, 0), Immediate(isolate()->factory()->undefined_value()));
+    Check(not_equal, kCodeObjectNotProperlyPatched);
+  }
+}
+
+
+void MacroAssembler::LeaveFrame(StackFrame::Type type) {
+  if (emit_debug_code()) {
+    cmp(Operand(ebp, StandardFrameConstants::kMarkerOffset),
+        Immediate(Smi::FromInt(type)));
+    Check(equal, kStackFrameTypesMustMatch);
+  }
+  leave();
+}
+
+
+void MacroAssembler::EnterExitFramePrologue() {
+  // Set up the frame structure on the stack.
+  DCHECK(ExitFrameConstants::kCallerSPDisplacement == +2 * kPointerSize);
+  DCHECK(ExitFrameConstants::kCallerPCOffset == +1 * kPointerSize);
+  DCHECK(ExitFrameConstants::kCallerFPOffset == 0 * kPointerSize);
+  push(ebp);
+  mov(ebp, esp);
+
+  // Reserve room for entry stack pointer and push the code object.
+  DCHECK(ExitFrameConstants::kSPOffset == -1 * kPointerSize);
+  push(Immediate(0));  // Saved entry sp, patched before call.
+  push(Immediate(CodeObject()));  // Accessed from ExitFrame::code_slot.
+
+  // Save the frame pointer and the context in top.
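+  // "Top" refers to the per-isolate Isolate::kCEntryFPAddress and
+  // kContextAddress slots; stack walkers use them to locate the most
+  // recent exit frame when unwinding from C++ code.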
+ ExternalReference c_entry_fp_address(Isolate::kCEntryFPAddress, isolate()); + ExternalReference context_address(Isolate::kContextAddress, isolate()); + mov(Operand::StaticVariable(c_entry_fp_address), ebp); + mov(Operand::StaticVariable(context_address), esi); +} + + +void MacroAssembler::EnterExitFrameEpilogue(int argc) { + sub(esp, Immediate(argc * kPointerSize)); + + // Get the required frame alignment for the OS. + const int kFrameAlignment = base::OS::ActivationFrameAlignment(); + if (kFrameAlignment > 0) { + DCHECK(IsPowerOf2(kFrameAlignment)); + and_(esp, -kFrameAlignment); + } + + // Patch the saved entry sp. + mov(Operand(ebp, ExitFrameConstants::kSPOffset), esp); +} + + +void MacroAssembler::EnterExitFrame() { + EnterExitFramePrologue(); + + // Set up argc and argv in callee-saved registers. + int offset = StandardFrameConstants::kCallerSPOffset - kPointerSize; + mov(edi, eax); + lea(esi, Operand(ebp, eax, times_4, offset)); + + // Reserve space for argc, argv and isolate. + EnterExitFrameEpilogue(3); +} + + +void MacroAssembler::EnterApiExitFrame(int argc) { + EnterExitFramePrologue(); + EnterExitFrameEpilogue(argc); +} + + +void MacroAssembler::LeaveExitFrame() { + // Get the return address from the stack and restore the frame pointer. + mov(ecx, Operand(ebp, 1 * kPointerSize)); + mov(ebp, Operand(ebp, 0 * kPointerSize)); + + // Pop the arguments and the receiver from the caller stack. + lea(esp, Operand(esi, 1 * kPointerSize)); + + // Push the return address to get ready to return. + push(ecx); + + LeaveExitFrameEpilogue(true); +} + + +void MacroAssembler::LeaveExitFrameEpilogue(bool restore_context) { + // Restore current context from top and clear it in debug mode. + ExternalReference context_address(Isolate::kContextAddress, isolate()); + if (restore_context) { + mov(esi, Operand::StaticVariable(context_address)); + } +#ifdef DEBUG + mov(Operand::StaticVariable(context_address), Immediate(0)); +#endif + + // Clear the top frame. + ExternalReference c_entry_fp_address(Isolate::kCEntryFPAddress, + isolate()); + mov(Operand::StaticVariable(c_entry_fp_address), Immediate(0)); +} + + +void MacroAssembler::LeaveApiExitFrame(bool restore_context) { + mov(esp, ebp); + pop(ebp); + + LeaveExitFrameEpilogue(restore_context); +} + + +void MacroAssembler::PushTryHandler(StackHandler::Kind kind, + int handler_index) { + // Adjust this code if not the case. + STATIC_ASSERT(StackHandlerConstants::kSize == 5 * kPointerSize); + STATIC_ASSERT(StackHandlerConstants::kNextOffset == 0); + STATIC_ASSERT(StackHandlerConstants::kCodeOffset == 1 * kPointerSize); + STATIC_ASSERT(StackHandlerConstants::kStateOffset == 2 * kPointerSize); + STATIC_ASSERT(StackHandlerConstants::kContextOffset == 3 * kPointerSize); + STATIC_ASSERT(StackHandlerConstants::kFPOffset == 4 * kPointerSize); + + // We will build up the handler from the bottom by pushing on the stack. + // First push the frame pointer and context. + if (kind == StackHandler::JS_ENTRY) { + // The frame pointer does not point to a JS frame so we save NULL for + // ebp. We expect the code throwing an exception to check ebp before + // dereferencing it to restore the context. + push(Immediate(0)); // NULL frame pointer. + push(Immediate(Smi::FromInt(0))); // No context. + } else { + push(ebp); + push(esi); + } + // Push the state and the code object. 
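+  // Together with the next-handler link pushed below, this completes the
+  // five-word layout asserted above: the next handler at esp + 0, then the
+  // code object, the state word, the context and the frame pointer.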
+ unsigned state = + StackHandler::IndexField::encode(handler_index) | + StackHandler::KindField::encode(kind); + push(Immediate(state)); + Push(CodeObject()); + + // Link the current handler as the next handler. + ExternalReference handler_address(Isolate::kHandlerAddress, isolate()); + push(Operand::StaticVariable(handler_address)); + // Set this new handler as the current one. + mov(Operand::StaticVariable(handler_address), esp); +} + + +void MacroAssembler::PopTryHandler() { + STATIC_ASSERT(StackHandlerConstants::kNextOffset == 0); + ExternalReference handler_address(Isolate::kHandlerAddress, isolate()); + pop(Operand::StaticVariable(handler_address)); + add(esp, Immediate(StackHandlerConstants::kSize - kPointerSize)); +} + + +void MacroAssembler::JumpToHandlerEntry() { + // Compute the handler entry address and jump to it. The handler table is + // a fixed array of (smi-tagged) code offsets. + // eax = exception, edi = code object, edx = state. + mov(ebx, FieldOperand(edi, Code::kHandlerTableOffset)); + shr(edx, StackHandler::kKindWidth); + mov(edx, FieldOperand(ebx, edx, times_4, FixedArray::kHeaderSize)); + SmiUntag(edx); + lea(edi, FieldOperand(edi, edx, times_1, Code::kHeaderSize)); + jmp(edi); +} + + +void MacroAssembler::Throw(Register value) { + // Adjust this code if not the case. + STATIC_ASSERT(StackHandlerConstants::kSize == 5 * kPointerSize); + STATIC_ASSERT(StackHandlerConstants::kNextOffset == 0); + STATIC_ASSERT(StackHandlerConstants::kCodeOffset == 1 * kPointerSize); + STATIC_ASSERT(StackHandlerConstants::kStateOffset == 2 * kPointerSize); + STATIC_ASSERT(StackHandlerConstants::kContextOffset == 3 * kPointerSize); + STATIC_ASSERT(StackHandlerConstants::kFPOffset == 4 * kPointerSize); + + // The exception is expected in eax. + if (!value.is(eax)) { + mov(eax, value); + } + // Drop the stack pointer to the top of the top handler. + ExternalReference handler_address(Isolate::kHandlerAddress, isolate()); + mov(esp, Operand::StaticVariable(handler_address)); + // Restore the next handler. + pop(Operand::StaticVariable(handler_address)); + + // Remove the code object and state, compute the handler address in edi. + pop(edi); // Code object. + pop(edx); // Index and state. + + // Restore the context and frame pointer. + pop(esi); // Context. + pop(ebp); // Frame pointer. + + // If the handler is a JS frame, restore the context to the frame. + // (kind == ENTRY) == (ebp == 0) == (esi == 0), so we could test either + // ebp or esi. + Label skip; + test(esi, esi); + j(zero, &skip, Label::kNear); + mov(Operand(ebp, StandardFrameConstants::kContextOffset), esi); + bind(&skip); + + JumpToHandlerEntry(); +} + + +void MacroAssembler::ThrowUncatchable(Register value) { + // Adjust this code if not the case. + STATIC_ASSERT(StackHandlerConstants::kSize == 5 * kPointerSize); + STATIC_ASSERT(StackHandlerConstants::kNextOffset == 0); + STATIC_ASSERT(StackHandlerConstants::kCodeOffset == 1 * kPointerSize); + STATIC_ASSERT(StackHandlerConstants::kStateOffset == 2 * kPointerSize); + STATIC_ASSERT(StackHandlerConstants::kContextOffset == 3 * kPointerSize); + STATIC_ASSERT(StackHandlerConstants::kFPOffset == 4 * kPointerSize); + + // The exception is expected in eax. + if (!value.is(eax)) { + mov(eax, value); + } + // Drop the stack pointer to the top of the top stack handler. + ExternalReference handler_address(Isolate::kHandlerAddress, isolate()); + mov(esp, Operand::StaticVariable(handler_address)); + + // Unwind the handlers until the top ENTRY handler is found. 
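+  // The handlers form a singly linked list through their kNextOffset
+  // words; JS_ENTRY is kind 0, so a clear KindField terminates the walk.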
+  Label fetch_next, check_kind;
+  jmp(&check_kind, Label::kNear);
+  bind(&fetch_next);
+  mov(esp, Operand(esp, StackHandlerConstants::kNextOffset));
+
+  bind(&check_kind);
+  STATIC_ASSERT(StackHandler::JS_ENTRY == 0);
+  test(Operand(esp, StackHandlerConstants::kStateOffset),
+       Immediate(StackHandler::KindField::kMask));
+  j(not_zero, &fetch_next);
+
+  // Set the top handler address to next handler past the top ENTRY handler.
+  pop(Operand::StaticVariable(handler_address));
+
+  // Remove the code object and state, compute the handler address in edi.
+  pop(edi);  // Code object.
+  pop(edx);  // Index and state.
+
+  // Clear the context pointer and frame pointer (0 was saved in the handler).
+  pop(esi);
+  pop(ebp);
+
+  JumpToHandlerEntry();
+}
+
+
+void MacroAssembler::CheckAccessGlobalProxy(Register holder_reg,
+                                            Register scratch1,
+                                            Register scratch2,
+                                            Label* miss) {
+  Label same_contexts;
+
+  DCHECK(!holder_reg.is(scratch1));
+  DCHECK(!holder_reg.is(scratch2));
+  DCHECK(!scratch1.is(scratch2));
+
+  // Load current lexical context from the stack frame.
+  mov(scratch1, Operand(ebp, StandardFrameConstants::kContextOffset));
+
+  // When generating debug code, make sure the lexical context is set.
+  if (emit_debug_code()) {
+    cmp(scratch1, Immediate(0));
+    Check(not_equal, kWeShouldNotHaveAnEmptyLexicalContext);
+  }
+  // Load the native context of the current context.
+  int offset =
+      Context::kHeaderSize + Context::GLOBAL_OBJECT_INDEX * kPointerSize;
+  mov(scratch1, FieldOperand(scratch1, offset));
+  mov(scratch1, FieldOperand(scratch1, GlobalObject::kNativeContextOffset));
+
+  // Check the context is a native context.
+  if (emit_debug_code()) {
+    // Read the first word and compare to native_context_map.
+    cmp(FieldOperand(scratch1, HeapObject::kMapOffset),
+        isolate()->factory()->native_context_map());
+    Check(equal, kJSGlobalObjectNativeContextShouldBeANativeContext);
+  }
+
+  // Check if both contexts are the same.
+  cmp(scratch1, FieldOperand(holder_reg, JSGlobalProxy::kNativeContextOffset));
+  j(equal, &same_contexts);
+
+  // Compare security tokens, save holder_reg on the stack so we can use it
+  // as a temporary register.
+  //
+  // Check that the security token in the calling global object is
+  // compatible with the security token in the receiving global
+  // object.
+  mov(scratch2,
+      FieldOperand(holder_reg, JSGlobalProxy::kNativeContextOffset));
+
+  // Check the context is a native context.
+  if (emit_debug_code()) {
+    cmp(scratch2, isolate()->factory()->null_value());
+    Check(not_equal, kJSGlobalProxyContextShouldNotBeNull);
+
+    // Read the first word and compare to native_context_map().
+    cmp(FieldOperand(scratch2, HeapObject::kMapOffset),
+        isolate()->factory()->native_context_map());
+    Check(equal, kJSGlobalObjectNativeContextShouldBeANativeContext);
+  }
+
+  int token_offset = Context::kHeaderSize +
+                     Context::SECURITY_TOKEN_INDEX * kPointerSize;
+  mov(scratch1, FieldOperand(scratch1, token_offset));
+  cmp(scratch1, FieldOperand(scratch2, token_offset));
+  j(not_equal, miss);
+
+  bind(&same_contexts);
+}
+
+
+// Compute the hash code from the untagged key. This must be kept in sync
+// with ComputeIntegerHash in utils.h and KeyedLoadGenericStub in
+// code-stubs-hydrogen.cc.
+//
+// Note: r0 will contain hash code
+void MacroAssembler::GetNumberHash(Register r0, Register scratch) {
+  // Xor original key with a seed.
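+  // When the serializer is enabled this code may end up in a snapshot, so
+  // the heap-specific seed cannot be baked in as an immediate; load it
+  // from the roots array at runtime instead.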
+ if (serializer_enabled()) {
+ ExternalReference roots_array_start =
+ ExternalReference::roots_array_start(isolate());
+ mov(scratch, Immediate(Heap::kHashSeedRootIndex));
+ mov(scratch,
+ Operand::StaticArray(scratch, times_pointer_size, roots_array_start));
+ SmiUntag(scratch);
+ xor_(r0, scratch);
+ } else {
+ int32_t seed = isolate()->heap()->HashSeed();
+ xor_(r0, Immediate(seed));
+ }
+
+ // hash = ~hash + (hash << 15);
+ mov(scratch, r0);
+ not_(r0);
+ shl(scratch, 15);
+ add(r0, scratch);
+ // hash = hash ^ (hash >> 12);
+ mov(scratch, r0);
+ shr(scratch, 12);
+ xor_(r0, scratch);
+ // hash = hash + (hash << 2);
+ lea(r0, Operand(r0, r0, times_4, 0));
+ // hash = hash ^ (hash >> 4);
+ mov(scratch, r0);
+ shr(scratch, 4);
+ xor_(r0, scratch);
+ // hash = hash * 2057;
+ imul(r0, r0, 2057);
+ // hash = hash ^ (hash >> 16);
+ mov(scratch, r0);
+ shr(scratch, 16);
+ xor_(r0, scratch);
+}
+
+
+
+void MacroAssembler::LoadFromNumberDictionary(Label* miss,
+ Register elements,
+ Register key,
+ Register r0,
+ Register r1,
+ Register r2,
+ Register result) {
+ // Register use:
+ //
+ // elements - holds the slow-case elements of the receiver and is unchanged.
+ //
+ // key - holds the smi key on entry and is unchanged.
+ //
+ // Scratch registers:
+ //
+ // r0 - holds the untagged key on entry and holds the hash once computed.
+ //
+ // r1 - used to hold the capacity mask of the dictionary
+ //
+ // r2 - used for the index into the dictionary.
+ //
+ // result - holds the result on exit if the load succeeds and we fall through.
+
+ Label done;
+
+ GetNumberHash(r0, r1);
+
+ // Compute capacity mask.
+ mov(r1, FieldOperand(elements, SeededNumberDictionary::kCapacityOffset));
+ shr(r1, kSmiTagSize); // convert smi to int
+ dec(r1);
+
+ // Generate an unrolled loop that performs a few probes before giving up.
+ for (int i = 0; i < kNumberDictionaryProbes; i++) {
+ // Use r2 for index calculations and keep the hash intact in r0.
+ mov(r2, r0);
+ // Compute the masked index: (hash + i + i * i) & mask.
+ if (i > 0) {
+ add(r2, Immediate(SeededNumberDictionary::GetProbeOffset(i)));
+ }
+ and_(r2, r1);
+
+ // Scale the index by multiplying by the entry size.
+ DCHECK(SeededNumberDictionary::kEntrySize == 3);
+ lea(r2, Operand(r2, r2, times_2, 0)); // r2 = r2 * 3
+
+ // Check if the key matches.
+ cmp(key, FieldOperand(elements,
+ r2,
+ times_pointer_size,
+ SeededNumberDictionary::kElementsStartOffset));
+ if (i != (kNumberDictionaryProbes - 1)) {
+ j(equal, &done);
+ } else {
+ j(not_equal, miss);
+ }
+ }
+
+ bind(&done);
+ // Check that the value is a normal property.
+ const int kDetailsOffset =
+ SeededNumberDictionary::kElementsStartOffset + 2 * kPointerSize;
+ DCHECK_EQ(NORMAL, 0);
+ test(FieldOperand(elements, r2, times_pointer_size, kDetailsOffset),
+ Immediate(PropertyDetails::TypeField::kMask << kSmiTagSize));
+ j(not_zero, miss);
+
+ // Get the value at the masked, scaled index.
+ const int kValueOffset =
+ SeededNumberDictionary::kElementsStartOffset + kPointerSize;
+ mov(result, FieldOperand(elements, r2, times_pointer_size, kValueOffset));
+}
+
+
+void MacroAssembler::LoadAllocationTopHelper(Register result,
+ Register scratch,
+ AllocationFlags flags) {
+ ExternalReference allocation_top =
+ AllocationUtils::GetAllocationTopReference(isolate(), flags);
+
+ // Just return if allocation top is already known.
+ if ((flags & RESULT_CONTAINS_TOP) != 0) {
+ // No use of scratch if allocation top is provided.
+ DCHECK(scratch.is(no_reg)); +#ifdef DEBUG + // Assert that result actually contains top on entry. + cmp(result, Operand::StaticVariable(allocation_top)); + Check(equal, kUnexpectedAllocationTop); +#endif + return; + } + + // Move address of new object to result. Use scratch register if available. + if (scratch.is(no_reg)) { + mov(result, Operand::StaticVariable(allocation_top)); + } else { + mov(scratch, Immediate(allocation_top)); + mov(result, Operand(scratch, 0)); + } +} + + +void MacroAssembler::UpdateAllocationTopHelper(Register result_end, + Register scratch, + AllocationFlags flags) { + if (emit_debug_code()) { + test(result_end, Immediate(kObjectAlignmentMask)); + Check(zero, kUnalignedAllocationInNewSpace); + } + + ExternalReference allocation_top = + AllocationUtils::GetAllocationTopReference(isolate(), flags); + + // Update new top. Use scratch if available. + if (scratch.is(no_reg)) { + mov(Operand::StaticVariable(allocation_top), result_end); + } else { + mov(Operand(scratch, 0), result_end); + } +} + + +void MacroAssembler::Allocate(int object_size, + Register result, + Register result_end, + Register scratch, + Label* gc_required, + AllocationFlags flags) { + DCHECK((flags & (RESULT_CONTAINS_TOP | SIZE_IN_WORDS)) == 0); + DCHECK(object_size <= Page::kMaxRegularHeapObjectSize); + if (!FLAG_inline_new) { + if (emit_debug_code()) { + // Trash the registers to simulate an allocation failure. + mov(result, Immediate(0x7091)); + if (result_end.is_valid()) { + mov(result_end, Immediate(0x7191)); + } + if (scratch.is_valid()) { + mov(scratch, Immediate(0x7291)); + } + } + jmp(gc_required); + return; + } + DCHECK(!result.is(result_end)); + + // Load address of new object into result. + LoadAllocationTopHelper(result, scratch, flags); + + ExternalReference allocation_limit = + AllocationUtils::GetAllocationLimitReference(isolate(), flags); + + // Align the next allocation. Storing the filler map without checking top is + // safe in new-space because the limit of the heap is aligned there. + if ((flags & DOUBLE_ALIGNMENT) != 0) { + DCHECK((flags & PRETENURE_OLD_POINTER_SPACE) == 0); + DCHECK(kPointerAlignment * 2 == kDoubleAlignment); + Label aligned; + test(result, Immediate(kDoubleAlignmentMask)); + j(zero, &aligned, Label::kNear); + if ((flags & PRETENURE_OLD_DATA_SPACE) != 0) { + cmp(result, Operand::StaticVariable(allocation_limit)); + j(above_equal, gc_required); + } + mov(Operand(result, 0), + Immediate(isolate()->factory()->one_pointer_filler_map())); + add(result, Immediate(kDoubleSize / 2)); + bind(&aligned); + } + + // Calculate new top and bail out if space is exhausted. + Register top_reg = result_end.is_valid() ? result_end : result; + if (!top_reg.is(result)) { + mov(top_reg, result); + } + add(top_reg, Immediate(object_size)); + j(carry, gc_required); + cmp(top_reg, Operand::StaticVariable(allocation_limit)); + j(above, gc_required); + + // Update allocation top. + UpdateAllocationTopHelper(top_reg, scratch, flags); + + // Tag result if requested. 
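+ // kHeapObjectTag == 1, so the aligned address becomes a tagged pointer
+ // either by folding the tag into the size subtraction or with a single inc.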
+ bool tag_result = (flags & TAG_OBJECT) != 0; + if (top_reg.is(result)) { + if (tag_result) { + sub(result, Immediate(object_size - kHeapObjectTag)); + } else { + sub(result, Immediate(object_size)); + } + } else if (tag_result) { + DCHECK(kHeapObjectTag == 1); + inc(result); + } +} + + +void MacroAssembler::Allocate(int header_size, + ScaleFactor element_size, + Register element_count, + RegisterValueType element_count_type, + Register result, + Register result_end, + Register scratch, + Label* gc_required, + AllocationFlags flags) { + DCHECK((flags & SIZE_IN_WORDS) == 0); + if (!FLAG_inline_new) { + if (emit_debug_code()) { + // Trash the registers to simulate an allocation failure. + mov(result, Immediate(0x7091)); + mov(result_end, Immediate(0x7191)); + if (scratch.is_valid()) { + mov(scratch, Immediate(0x7291)); + } + // Register element_count is not modified by the function. + } + jmp(gc_required); + return; + } + DCHECK(!result.is(result_end)); + + // Load address of new object into result. + LoadAllocationTopHelper(result, scratch, flags); + + ExternalReference allocation_limit = + AllocationUtils::GetAllocationLimitReference(isolate(), flags); + + // Align the next allocation. Storing the filler map without checking top is + // safe in new-space because the limit of the heap is aligned there. + if ((flags & DOUBLE_ALIGNMENT) != 0) { + DCHECK((flags & PRETENURE_OLD_POINTER_SPACE) == 0); + DCHECK(kPointerAlignment * 2 == kDoubleAlignment); + Label aligned; + test(result, Immediate(kDoubleAlignmentMask)); + j(zero, &aligned, Label::kNear); + if ((flags & PRETENURE_OLD_DATA_SPACE) != 0) { + cmp(result, Operand::StaticVariable(allocation_limit)); + j(above_equal, gc_required); + } + mov(Operand(result, 0), + Immediate(isolate()->factory()->one_pointer_filler_map())); + add(result, Immediate(kDoubleSize / 2)); + bind(&aligned); + } + + // Calculate new top and bail out if space is exhausted. + // We assume that element_count*element_size + header_size does not + // overflow. + if (element_count_type == REGISTER_VALUE_IS_SMI) { + STATIC_ASSERT(static_cast<ScaleFactor>(times_2 - 1) == times_1); + STATIC_ASSERT(static_cast<ScaleFactor>(times_4 - 1) == times_2); + STATIC_ASSERT(static_cast<ScaleFactor>(times_8 - 1) == times_4); + DCHECK(element_size >= times_2); + DCHECK(kSmiTagSize == 1); + element_size = static_cast<ScaleFactor>(element_size - 1); + } else { + DCHECK(element_count_type == REGISTER_VALUE_IS_INT32); + } + lea(result_end, Operand(element_count, element_size, header_size)); + add(result_end, result); + j(carry, gc_required); + cmp(result_end, Operand::StaticVariable(allocation_limit)); + j(above, gc_required); + + if ((flags & TAG_OBJECT) != 0) { + DCHECK(kHeapObjectTag == 1); + inc(result); + } + + // Update allocation top. + UpdateAllocationTopHelper(result_end, scratch, flags); +} + + +void MacroAssembler::Allocate(Register object_size, + Register result, + Register result_end, + Register scratch, + Label* gc_required, + AllocationFlags flags) { + DCHECK((flags & (RESULT_CONTAINS_TOP | SIZE_IN_WORDS)) == 0); + if (!FLAG_inline_new) { + if (emit_debug_code()) { + // Trash the registers to simulate an allocation failure. + mov(result, Immediate(0x7091)); + mov(result_end, Immediate(0x7191)); + if (scratch.is_valid()) { + mov(scratch, Immediate(0x7291)); + } + // object_size is left unchanged by this function. + } + jmp(gc_required); + return; + } + DCHECK(!result.is(result_end)); + + // Load address of new object into result. 
+ LoadAllocationTopHelper(result, scratch, flags); + + ExternalReference allocation_limit = + AllocationUtils::GetAllocationLimitReference(isolate(), flags); + + // Align the next allocation. Storing the filler map without checking top is + // safe in new-space because the limit of the heap is aligned there. + if ((flags & DOUBLE_ALIGNMENT) != 0) { + DCHECK((flags & PRETENURE_OLD_POINTER_SPACE) == 0); + DCHECK(kPointerAlignment * 2 == kDoubleAlignment); + Label aligned; + test(result, Immediate(kDoubleAlignmentMask)); + j(zero, &aligned, Label::kNear); + if ((flags & PRETENURE_OLD_DATA_SPACE) != 0) { + cmp(result, Operand::StaticVariable(allocation_limit)); + j(above_equal, gc_required); + } + mov(Operand(result, 0), + Immediate(isolate()->factory()->one_pointer_filler_map())); + add(result, Immediate(kDoubleSize / 2)); + bind(&aligned); + } + + // Calculate new top and bail out if space is exhausted. + if (!object_size.is(result_end)) { + mov(result_end, object_size); + } + add(result_end, result); + j(carry, gc_required); + cmp(result_end, Operand::StaticVariable(allocation_limit)); + j(above, gc_required); + + // Tag result if requested. + if ((flags & TAG_OBJECT) != 0) { + DCHECK(kHeapObjectTag == 1); + inc(result); + } + + // Update allocation top. + UpdateAllocationTopHelper(result_end, scratch, flags); +} + + +void MacroAssembler::UndoAllocationInNewSpace(Register object) { + ExternalReference new_space_allocation_top = + ExternalReference::new_space_allocation_top_address(isolate()); + + // Make sure the object has no tag before resetting top. + and_(object, Immediate(~kHeapObjectTagMask)); +#ifdef DEBUG + cmp(object, Operand::StaticVariable(new_space_allocation_top)); + Check(below, kUndoAllocationOfNonAllocatedMemory); +#endif + mov(Operand::StaticVariable(new_space_allocation_top), object); +} + + +void MacroAssembler::AllocateHeapNumber(Register result, + Register scratch1, + Register scratch2, + Label* gc_required, + MutableMode mode) { + // Allocate heap number in new space. + Allocate(HeapNumber::kSize, result, scratch1, scratch2, gc_required, + TAG_OBJECT); + + Handle<Map> map = mode == MUTABLE + ? isolate()->factory()->mutable_heap_number_map() + : isolate()->factory()->heap_number_map(); + + // Set the map. + mov(FieldOperand(result, HeapObject::kMapOffset), Immediate(map)); +} + + +void MacroAssembler::AllocateTwoByteString(Register result, + Register length, + Register scratch1, + Register scratch2, + Register scratch3, + Label* gc_required) { + // Calculate the number of bytes needed for the characters in the string while + // observing object alignment. + DCHECK((SeqTwoByteString::kHeaderSize & kObjectAlignmentMask) == 0); + DCHECK(kShortSize == 2); + // scratch1 = length * 2 + kObjectAlignmentMask. + lea(scratch1, Operand(length, length, times_1, kObjectAlignmentMask)); + and_(scratch1, Immediate(~kObjectAlignmentMask)); + + // Allocate two byte string in new space. + Allocate(SeqTwoByteString::kHeaderSize, + times_1, + scratch1, + REGISTER_VALUE_IS_INT32, + result, + scratch2, + scratch3, + gc_required, + TAG_OBJECT); + + // Set the map, length and hash field. 
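+ // The length is stored as a smi and the hash field is left empty; the hash
+ // is computed lazily the first time it is needed.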
+ mov(FieldOperand(result, HeapObject::kMapOffset),
+ Immediate(isolate()->factory()->string_map()));
+ mov(scratch1, length);
+ SmiTag(scratch1);
+ mov(FieldOperand(result, String::kLengthOffset), scratch1);
+ mov(FieldOperand(result, String::kHashFieldOffset),
+ Immediate(String::kEmptyHashField));
+}
+
+
+void MacroAssembler::AllocateAsciiString(Register result,
+ Register length,
+ Register scratch1,
+ Register scratch2,
+ Register scratch3,
+ Label* gc_required) {
+ // Calculate the number of bytes needed for the characters in the string while
+ // observing object alignment.
+ DCHECK((SeqOneByteString::kHeaderSize & kObjectAlignmentMask) == 0);
+ mov(scratch1, length);
+ DCHECK(kCharSize == 1);
+ add(scratch1, Immediate(kObjectAlignmentMask));
+ and_(scratch1, Immediate(~kObjectAlignmentMask));
+
+ // Allocate ASCII string in new space.
+ Allocate(SeqOneByteString::kHeaderSize,
+ times_1,
+ scratch1,
+ REGISTER_VALUE_IS_INT32,
+ result,
+ scratch2,
+ scratch3,
+ gc_required,
+ TAG_OBJECT);
+
+ // Set the map, length and hash field.
+ mov(FieldOperand(result, HeapObject::kMapOffset),
+ Immediate(isolate()->factory()->ascii_string_map()));
+ mov(scratch1, length);
+ SmiTag(scratch1);
+ mov(FieldOperand(result, String::kLengthOffset), scratch1);
+ mov(FieldOperand(result, String::kHashFieldOffset),
+ Immediate(String::kEmptyHashField));
+}
+
+
+void MacroAssembler::AllocateAsciiString(Register result,
+ int length,
+ Register scratch1,
+ Register scratch2,
+ Label* gc_required) {
+ DCHECK(length > 0);
+
+ // Allocate ASCII string in new space.
+ Allocate(SeqOneByteString::SizeFor(length), result, scratch1, scratch2,
+ gc_required, TAG_OBJECT);
+
+ // Set the map, length and hash field.
+ mov(FieldOperand(result, HeapObject::kMapOffset),
+ Immediate(isolate()->factory()->ascii_string_map()));
+ mov(FieldOperand(result, String::kLengthOffset),
+ Immediate(Smi::FromInt(length)));
+ mov(FieldOperand(result, String::kHashFieldOffset),
+ Immediate(String::kEmptyHashField));
+}
+
+
+void MacroAssembler::AllocateTwoByteConsString(Register result,
+ Register scratch1,
+ Register scratch2,
+ Label* gc_required) {
+ // Allocate the cons string object in new space.
+ Allocate(ConsString::kSize, result, scratch1, scratch2, gc_required,
+ TAG_OBJECT);
+
+ // Set the map. The other fields are left uninitialized.
+ mov(FieldOperand(result, HeapObject::kMapOffset),
+ Immediate(isolate()->factory()->cons_string_map()));
+}
+
+
+void MacroAssembler::AllocateAsciiConsString(Register result,
+ Register scratch1,
+ Register scratch2,
+ Label* gc_required) {
+ Allocate(ConsString::kSize,
+ result,
+ scratch1,
+ scratch2,
+ gc_required,
+ TAG_OBJECT);
+
+ // Set the map. The other fields are left uninitialized.
+ mov(FieldOperand(result, HeapObject::kMapOffset),
+ Immediate(isolate()->factory()->cons_ascii_string_map()));
+}
+
+
+void MacroAssembler::AllocateTwoByteSlicedString(Register result,
+ Register scratch1,
+ Register scratch2,
+ Label* gc_required) {
+ // Allocate the sliced string object in new space.
+ Allocate(SlicedString::kSize, result, scratch1, scratch2, gc_required,
+ TAG_OBJECT);
+
+ // Set the map. The other fields are left uninitialized.
+ mov(FieldOperand(result, HeapObject::kMapOffset),
+ Immediate(isolate()->factory()->sliced_string_map()));
+}
+
+
+void MacroAssembler::AllocateAsciiSlicedString(Register result,
+ Register scratch1,
+ Register scratch2,
+ Label* gc_required) {
+ // Allocate the sliced string object in new space.
+ Allocate(SlicedString::kSize, result, scratch1, scratch2, gc_required,
+ TAG_OBJECT);
+
+ // Set the map. The other fields are left uninitialized.
+ mov(FieldOperand(result, HeapObject::kMapOffset),
+ Immediate(isolate()->factory()->sliced_ascii_string_map()));
+}
+
+
+// Copy memory, byte-by-byte, from source to destination. Not optimized for
+// long or aligned copies. The contents of scratch and length are destroyed.
+// Source and destination are incremented by length.
+// Many variants of movsb, loop unrolling, word moves, and indexed operands
+// have been tried here already, and this is fastest.
+// A simpler loop is faster on small copies, but 30% slower on large ones.
+// The cld() instruction must have been emitted, to set the direction flag,
+// before calling this function.
+void MacroAssembler::CopyBytes(Register source,
+ Register destination,
+ Register length,
+ Register scratch) {
+ Label short_loop, len4, len8, len12, done, short_string;
+ DCHECK(source.is(esi));
+ DCHECK(destination.is(edi));
+ DCHECK(length.is(ecx));
+ cmp(length, Immediate(4));
+ j(below, &short_string, Label::kNear);
+
+ // Because source is 4-byte aligned in our uses of this function,
+ // we keep source aligned for the rep_movs call by copying the odd bytes
+ // at the end of the ranges.
+ mov(scratch, Operand(source, length, times_1, -4));
+ mov(Operand(destination, length, times_1, -4), scratch);
+
+ cmp(length, Immediate(8));
+ j(below_equal, &len4, Label::kNear);
+ cmp(length, Immediate(12));
+ j(below_equal, &len8, Label::kNear);
+ cmp(length, Immediate(16));
+ j(below_equal, &len12, Label::kNear);
+
+ mov(scratch, ecx);
+ shr(ecx, 2);
+ rep_movs();
+ and_(scratch, Immediate(0x3));
+ add(destination, scratch);
+ jmp(&done, Label::kNear);
+
+ bind(&len12);
+ mov(scratch, Operand(source, 8));
+ mov(Operand(destination, 8), scratch);
+ bind(&len8);
+ mov(scratch, Operand(source, 4));
+ mov(Operand(destination, 4), scratch);
+ bind(&len4);
+ mov(scratch, Operand(source, 0));
+ mov(Operand(destination, 0), scratch);
+ add(destination, length);
+ jmp(&done, Label::kNear);
+
+ bind(&short_string);
+ test(length, length);
+ j(zero, &done, Label::kNear);
+
+ bind(&short_loop);
+ mov_b(scratch, Operand(source, 0));
+ mov_b(Operand(destination, 0), scratch);
+ inc(source);
+ inc(destination);
+ dec(length);
+ j(not_zero, &short_loop);
+
+ bind(&done);
+}
+
+
+void MacroAssembler::InitializeFieldsWithFiller(Register start_offset,
+ Register end_offset,
+ Register filler) {
+ Label loop, entry;
+ jmp(&entry);
+ bind(&loop);
+ mov(Operand(start_offset, 0), filler);
+ add(start_offset, Immediate(kPointerSize));
+ bind(&entry);
+ cmp(start_offset, end_offset);
+ j(less, &loop);
+}
+
+
+void MacroAssembler::BooleanBitTest(Register object,
+ int field_offset,
+ int bit_index) {
+ bit_index += kSmiTagSize + kSmiShiftSize;
+ DCHECK(IsPowerOf2(kBitsPerByte));
+ int byte_index = bit_index / kBitsPerByte;
+ int byte_bit_index = bit_index & (kBitsPerByte - 1);
+ test_b(FieldOperand(object, field_offset + byte_index),
+ static_cast<byte>(1 << byte_bit_index));
+}
+
+
+
+void MacroAssembler::NegativeZeroTest(Register result,
+ Register op,
+ Label* then_label) {
+ Label ok;
+ test(result, result);
+ j(not_zero, &ok);
+ test(op, op);
+ j(sign, then_label);
+ bind(&ok);
+}
+
+
+void MacroAssembler::NegativeZeroTest(Register result,
+ Register op1,
+ Register op2,
+ Register scratch,
+ Label* then_label) {
+ Label ok;
+ test(result, result);
+ j(not_zero, &ok);
+ mov(scratch, op1);
+ or_(scratch, op2);
+ j(sign, then_label);
+ bind(&ok);
+}
+
+
+void MacroAssembler::TryGetFunctionPrototype(Register function,
+ Register result,
+ Register scratch,
+ Label* miss,
+ bool miss_on_bound_function) {
+ Label non_instance;
+ if (miss_on_bound_function) {
+ // Check that the receiver isn't a smi.
+ JumpIfSmi(function, miss);
+
+ // Check that the function really is a function.
+ CmpObjectType(function, JS_FUNCTION_TYPE, result);
+ j(not_equal, miss);
+
+ // If a bound function, go to miss label.
+ mov(scratch,
+ FieldOperand(function, JSFunction::kSharedFunctionInfoOffset));
+ BooleanBitTest(scratch, SharedFunctionInfo::kCompilerHintsOffset,
+ SharedFunctionInfo::kBoundFunction);
+ j(not_zero, miss);
+
+ // Make sure that the function has an instance prototype.
+ movzx_b(scratch, FieldOperand(result, Map::kBitFieldOffset));
+ test(scratch, Immediate(1 << Map::kHasNonInstancePrototype));
+ j(not_zero, &non_instance);
+ }
+
+ // Get the prototype or initial map from the function.
+ mov(result,
+ FieldOperand(function, JSFunction::kPrototypeOrInitialMapOffset));
+
+ // If the prototype or initial map is the hole, don't return it and
+ // simply miss the cache instead. This will allow us to allocate a
+ // prototype object on-demand in the runtime system.
+ cmp(result, Immediate(isolate()->factory()->the_hole_value()));
+ j(equal, miss);
+
+ // If the function does not have an initial map, we're done.
+ Label done;
+ CmpObjectType(result, MAP_TYPE, scratch);
+ j(not_equal, &done);
+
+ // Get the prototype from the initial map.
+ mov(result, FieldOperand(result, Map::kPrototypeOffset));
+
+ if (miss_on_bound_function) {
+ jmp(&done);
+
+ // Non-instance prototype: Fetch prototype from constructor field
+ // in initial map.
+ bind(&non_instance);
+ mov(result, FieldOperand(result, Map::kConstructorOffset));
+ }
+
+ // All done.
+ bind(&done);
+}
+
+
+void MacroAssembler::CallStub(CodeStub* stub, TypeFeedbackId ast_id) {
+ DCHECK(AllowThisStubCall(stub)); // Calls are not allowed in some stubs.
+ call(stub->GetCode(), RelocInfo::CODE_TARGET, ast_id);
+}
+
+
+void MacroAssembler::TailCallStub(CodeStub* stub) {
+ jmp(stub->GetCode(), RelocInfo::CODE_TARGET);
+}
+
+
+void MacroAssembler::StubReturn(int argc) {
+ DCHECK(argc >= 1 && generating_stub());
+ ret((argc - 1) * kPointerSize);
+}
+
+
+bool MacroAssembler::AllowThisStubCall(CodeStub* stub) {
+ return has_frame_ || !stub->SometimesSetsUpAFrame();
+}
+
+
+void MacroAssembler::IndexFromHash(Register hash, Register index) {
+ // The assert checks that the constants for the maximum number of digits
+ // for an array index cached in the hash field and the number of bits
+ // reserved for it do not conflict.
+ DCHECK(TenToThe(String::kMaxCachedArrayIndexLength) <
+ (1 << String::kArrayIndexValueBits));
+ if (!index.is(hash)) {
+ mov(index, hash);
+ }
+ DecodeFieldToSmi<String::ArrayIndexValueBits>(index);
+}
+
+
+void MacroAssembler::CallRuntime(const Runtime::Function* f,
+ int num_arguments) {
+ // If the expected number of arguments of the runtime function is
+ // constant, we check that the actual number of arguments matches the
+ // expectation.
+ CHECK(f->nargs < 0 || f->nargs == num_arguments);
+
+ // TODO(1236192): Most runtime routines don't need the number of
+ // arguments passed in because it is constant. At some point we
+ // should remove this need and make the runtime routine entry code
+ // smarter.
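+ // The C entry stub expects the argument count in eax and the runtime
+ // function's entry address in ebx.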
+ Move(eax, Immediate(num_arguments)); + mov(ebx, Immediate(ExternalReference(f, isolate()))); + CEntryStub ces(isolate(), 1); + CallStub(&ces); +} + + +void MacroAssembler::CallExternalReference(ExternalReference ref, + int num_arguments) { + mov(eax, Immediate(num_arguments)); + mov(ebx, Immediate(ref)); + + CEntryStub stub(isolate(), 1); + CallStub(&stub); +} + + +void MacroAssembler::TailCallExternalReference(const ExternalReference& ext, + int num_arguments, + int result_size) { + // TODO(1236192): Most runtime routines don't need the number of + // arguments passed in because it is constant. At some point we + // should remove this need and make the runtime routine entry code + // smarter. + Move(eax, Immediate(num_arguments)); + JumpToExternalReference(ext); +} + + +void MacroAssembler::TailCallRuntime(Runtime::FunctionId fid, + int num_arguments, + int result_size) { + TailCallExternalReference(ExternalReference(fid, isolate()), + num_arguments, + result_size); +} + + +Operand ApiParameterOperand(int index) { + return Operand(esp, index * kPointerSize); +} + + +void MacroAssembler::PrepareCallApiFunction(int argc) { + EnterApiExitFrame(argc); + if (emit_debug_code()) { + mov(esi, Immediate(BitCast<int32_t>(kZapValue))); + } +} + + +void MacroAssembler::CallApiFunctionAndReturn( + Register function_address, + ExternalReference thunk_ref, + Operand thunk_last_arg, + int stack_space, + Operand return_value_operand, + Operand* context_restore_operand) { + ExternalReference next_address = + ExternalReference::handle_scope_next_address(isolate()); + ExternalReference limit_address = + ExternalReference::handle_scope_limit_address(isolate()); + ExternalReference level_address = + ExternalReference::handle_scope_level_address(isolate()); + + DCHECK(edx.is(function_address)); + // Allocate HandleScope in callee-save registers. + mov(ebx, Operand::StaticVariable(next_address)); + mov(edi, Operand::StaticVariable(limit_address)); + add(Operand::StaticVariable(level_address), Immediate(1)); + + if (FLAG_log_timer_events) { + FrameScope frame(this, StackFrame::MANUAL); + PushSafepointRegisters(); + PrepareCallCFunction(1, eax); + mov(Operand(esp, 0), + Immediate(ExternalReference::isolate_address(isolate()))); + CallCFunction(ExternalReference::log_enter_external_function(isolate()), 1); + PopSafepointRegisters(); + } + + + Label profiler_disabled; + Label end_profiler_check; + mov(eax, Immediate(ExternalReference::is_profiling_address(isolate()))); + cmpb(Operand(eax, 0), 0); + j(zero, &profiler_disabled); + + // Additional parameter is the address of the actual getter function. + mov(thunk_last_arg, function_address); + // Call the api function. + mov(eax, Immediate(thunk_ref)); + call(eax); + jmp(&end_profiler_check); + + bind(&profiler_disabled); + // Call the api function. + call(function_address); + bind(&end_profiler_check); + + if (FLAG_log_timer_events) { + FrameScope frame(this, StackFrame::MANUAL); + PushSafepointRegisters(); + PrepareCallCFunction(1, eax); + mov(Operand(esp, 0), + Immediate(ExternalReference::isolate_address(isolate()))); + CallCFunction(ExternalReference::log_leave_external_function(isolate()), 1); + PopSafepointRegisters(); + } + + Label prologue; + // Load the value from ReturnValue + mov(eax, return_value_operand); + + Label promote_scheduled_exception; + Label exception_handled; + Label delete_allocated_handles; + Label leave_exit_frame; + + bind(&prologue); + // No more valid handles (the result handle was the last one). Restore + // previous handle scope. 
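+ // ebx and edi still hold the next and limit fields saved before the call,
+ // so the scope can be restored without reloading them.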
+ mov(Operand::StaticVariable(next_address), ebx); + sub(Operand::StaticVariable(level_address), Immediate(1)); + Assert(above_equal, kInvalidHandleScopeLevel); + cmp(edi, Operand::StaticVariable(limit_address)); + j(not_equal, &delete_allocated_handles); + bind(&leave_exit_frame); + + // Check if the function scheduled an exception. + ExternalReference scheduled_exception_address = + ExternalReference::scheduled_exception_address(isolate()); + cmp(Operand::StaticVariable(scheduled_exception_address), + Immediate(isolate()->factory()->the_hole_value())); + j(not_equal, &promote_scheduled_exception); + bind(&exception_handled); + +#if ENABLE_EXTRA_CHECKS + // Check if the function returned a valid JavaScript value. + Label ok; + Register return_value = eax; + Register map = ecx; + + JumpIfSmi(return_value, &ok, Label::kNear); + mov(map, FieldOperand(return_value, HeapObject::kMapOffset)); + + CmpInstanceType(map, FIRST_NONSTRING_TYPE); + j(below, &ok, Label::kNear); + + CmpInstanceType(map, FIRST_SPEC_OBJECT_TYPE); + j(above_equal, &ok, Label::kNear); + + cmp(map, isolate()->factory()->heap_number_map()); + j(equal, &ok, Label::kNear); + + cmp(return_value, isolate()->factory()->undefined_value()); + j(equal, &ok, Label::kNear); + + cmp(return_value, isolate()->factory()->true_value()); + j(equal, &ok, Label::kNear); + + cmp(return_value, isolate()->factory()->false_value()); + j(equal, &ok, Label::kNear); + + cmp(return_value, isolate()->factory()->null_value()); + j(equal, &ok, Label::kNear); + + Abort(kAPICallReturnedInvalidObject); + + bind(&ok); +#endif + + bool restore_context = context_restore_operand != NULL; + if (restore_context) { + mov(esi, *context_restore_operand); + } + LeaveApiExitFrame(!restore_context); + ret(stack_space * kPointerSize); + + bind(&promote_scheduled_exception); + { + FrameScope frame(this, StackFrame::INTERNAL); + CallRuntime(Runtime::kPromoteScheduledException, 0); + } + jmp(&exception_handled); + + // HandleScope limit has changed. Delete allocated extensions. + ExternalReference delete_extensions = + ExternalReference::delete_handle_scope_extensions(isolate()); + bind(&delete_allocated_handles); + mov(Operand::StaticVariable(limit_address), edi); + mov(edi, eax); + mov(Operand(esp, 0), + Immediate(ExternalReference::isolate_address(isolate()))); + mov(eax, Immediate(delete_extensions)); + call(eax); + mov(eax, edi); + jmp(&leave_exit_frame); +} + + +void MacroAssembler::JumpToExternalReference(const ExternalReference& ext) { + // Set the entry point and jump to the C entry runtime stub. + mov(ebx, Immediate(ext)); + CEntryStub ces(isolate(), 1); + jmp(ces.GetCode(), RelocInfo::CODE_TARGET); +} + + +void MacroAssembler::InvokePrologue(const ParameterCount& expected, + const ParameterCount& actual, + Handle<Code> code_constant, + const Operand& code_operand, + Label* done, + bool* definitely_mismatches, + InvokeFlag flag, + Label::Distance done_near, + const CallWrapper& call_wrapper) { + bool definitely_matches = false; + *definitely_mismatches = false; + Label invoke; + if (expected.is_immediate()) { + DCHECK(actual.is_immediate()); + if (expected.immediate() == actual.immediate()) { + definitely_matches = true; + } else { + mov(eax, actual.immediate()); + const int sentinel = SharedFunctionInfo::kDontAdaptArgumentsSentinel; + if (expected.immediate() == sentinel) { + // Don't worry about adapting arguments for builtins that + // don't want that done. 
Skip adaptation code by making it look
+ // like we have a match between expected and actual number of
+ // arguments.
+ definitely_matches = true;
+ } else {
+ *definitely_mismatches = true;
+ mov(ebx, expected.immediate());
+ }
+ }
+ } else {
+ if (actual.is_immediate()) {
+ // Expected is in register, actual is immediate. This is the
+ // case when we invoke function values without going through the
+ // IC mechanism.
+ cmp(expected.reg(), actual.immediate());
+ j(equal, &invoke);
+ DCHECK(expected.reg().is(ebx));
+ mov(eax, actual.immediate());
+ } else if (!expected.reg().is(actual.reg())) {
+ // Both expected and actual are in (different) registers. This
+ // is the case when we invoke functions using call and apply.
+ cmp(expected.reg(), actual.reg());
+ j(equal, &invoke);
+ DCHECK(actual.reg().is(eax));
+ DCHECK(expected.reg().is(ebx));
+ }
+ }
+
+ if (!definitely_matches) {
+ Handle<Code> adaptor =
+ isolate()->builtins()->ArgumentsAdaptorTrampoline();
+ if (!code_constant.is_null()) {
+ mov(edx, Immediate(code_constant));
+ add(edx, Immediate(Code::kHeaderSize - kHeapObjectTag));
+ } else if (!code_operand.is_reg(edx)) {
+ mov(edx, code_operand);
+ }
+
+ if (flag == CALL_FUNCTION) {
+ call_wrapper.BeforeCall(CallSize(adaptor, RelocInfo::CODE_TARGET));
+ call(adaptor, RelocInfo::CODE_TARGET);
+ call_wrapper.AfterCall();
+ if (!*definitely_mismatches) {
+ jmp(done, done_near);
+ }
+ } else {
+ jmp(adaptor, RelocInfo::CODE_TARGET);
+ }
+ bind(&invoke);
+ }
+}
+
+
+void MacroAssembler::InvokeCode(const Operand& code,
+ const ParameterCount& expected,
+ const ParameterCount& actual,
+ InvokeFlag flag,
+ const CallWrapper& call_wrapper) {
+ // You can't call a function without a valid frame.
+ DCHECK(flag == JUMP_FUNCTION || has_frame());
+
+ Label done;
+ bool definitely_mismatches = false;
+ InvokePrologue(expected, actual, Handle<Code>::null(), code,
+ &done, &definitely_mismatches, flag, Label::kNear,
+ call_wrapper);
+ if (!definitely_mismatches) {
+ if (flag == CALL_FUNCTION) {
+ call_wrapper.BeforeCall(CallSize(code));
+ call(code);
+ call_wrapper.AfterCall();
+ } else {
+ DCHECK(flag == JUMP_FUNCTION);
+ jmp(code);
+ }
+ bind(&done);
+ }
+}
+
+
+void MacroAssembler::InvokeFunction(Register fun,
+ const ParameterCount& actual,
+ InvokeFlag flag,
+ const CallWrapper& call_wrapper) {
+ // You can't call a function without a valid frame.
+ DCHECK(flag == JUMP_FUNCTION || has_frame());
+
+ DCHECK(fun.is(edi));
+ mov(edx, FieldOperand(edi, JSFunction::kSharedFunctionInfoOffset));
+ mov(esi, FieldOperand(edi, JSFunction::kContextOffset));
+ mov(ebx, FieldOperand(edx, SharedFunctionInfo::kFormalParameterCountOffset));
+ SmiUntag(ebx);
+
+ ParameterCount expected(ebx);
+ InvokeCode(FieldOperand(edi, JSFunction::kCodeEntryOffset),
+ expected, actual, flag, call_wrapper);
+}
+
+
+void MacroAssembler::InvokeFunction(Register fun,
+ const ParameterCount& expected,
+ const ParameterCount& actual,
+ InvokeFlag flag,
+ const CallWrapper& call_wrapper) {
+ // You can't call a function without a valid frame.
+ DCHECK(flag == JUMP_FUNCTION || has_frame()); + + DCHECK(fun.is(edi)); + mov(esi, FieldOperand(edi, JSFunction::kContextOffset)); + + InvokeCode(FieldOperand(edi, JSFunction::kCodeEntryOffset), + expected, actual, flag, call_wrapper); +} + + +void MacroAssembler::InvokeFunction(Handle<JSFunction> function, + const ParameterCount& expected, + const ParameterCount& actual, + InvokeFlag flag, + const CallWrapper& call_wrapper) { + LoadHeapObject(edi, function); + InvokeFunction(edi, expected, actual, flag, call_wrapper); +} + + +void MacroAssembler::InvokeBuiltin(Builtins::JavaScript id, + InvokeFlag flag, + const CallWrapper& call_wrapper) { + // You can't call a builtin without a valid frame. + DCHECK(flag == JUMP_FUNCTION || has_frame()); + + // Rely on the assertion to check that the number of provided + // arguments match the expected number of arguments. Fake a + // parameter count to avoid emitting code to do the check. + ParameterCount expected(0); + GetBuiltinFunction(edi, id); + InvokeCode(FieldOperand(edi, JSFunction::kCodeEntryOffset), + expected, expected, flag, call_wrapper); +} + + +void MacroAssembler::GetBuiltinFunction(Register target, + Builtins::JavaScript id) { + // Load the JavaScript builtin function from the builtins object. + mov(target, Operand(esi, Context::SlotOffset(Context::GLOBAL_OBJECT_INDEX))); + mov(target, FieldOperand(target, GlobalObject::kBuiltinsOffset)); + mov(target, FieldOperand(target, + JSBuiltinsObject::OffsetOfFunctionWithId(id))); +} + + +void MacroAssembler::GetBuiltinEntry(Register target, Builtins::JavaScript id) { + DCHECK(!target.is(edi)); + // Load the JavaScript builtin function from the builtins object. + GetBuiltinFunction(edi, id); + // Load the code entry point from the function into the target register. + mov(target, FieldOperand(edi, JSFunction::kCodeEntryOffset)); +} + + +void MacroAssembler::LoadContext(Register dst, int context_chain_length) { + if (context_chain_length > 0) { + // Move up the chain of contexts to the context containing the slot. + mov(dst, Operand(esi, Context::SlotOffset(Context::PREVIOUS_INDEX))); + for (int i = 1; i < context_chain_length; i++) { + mov(dst, Operand(dst, Context::SlotOffset(Context::PREVIOUS_INDEX))); + } + } else { + // Slot is in the current function context. Move it into the + // destination register in case we store into it (the write barrier + // cannot be allowed to destroy the context in esi). + mov(dst, esi); + } + + // We should not have found a with context by walking the context chain + // (i.e., the static scope chain and runtime context chain do not agree). + // A variable occurring in such a scope should have slot type LOOKUP and + // not CONTEXT. + if (emit_debug_code()) { + cmp(FieldOperand(dst, HeapObject::kMapOffset), + isolate()->factory()->with_context_map()); + Check(not_equal, kVariableResolvedToWithContext); + } +} + + +void MacroAssembler::LoadTransitionedArrayMapConditional( + ElementsKind expected_kind, + ElementsKind transitioned_kind, + Register map_in_out, + Register scratch, + Label* no_map_match) { + // Load the global or builtins object from the current context. + mov(scratch, Operand(esi, Context::SlotOffset(Context::GLOBAL_OBJECT_INDEX))); + mov(scratch, FieldOperand(scratch, GlobalObject::kNativeContextOffset)); + + // Check that the function's map is the same as the expected cached map. 
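+ // The native context keeps a fixed array of JSArray maps, one per
+ // ElementsKind, so both the expected and the transitioned map are found by
+ // indexing with the elements kind.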
+ mov(scratch, Operand(scratch, + Context::SlotOffset(Context::JS_ARRAY_MAPS_INDEX))); + + size_t offset = expected_kind * kPointerSize + + FixedArrayBase::kHeaderSize; + cmp(map_in_out, FieldOperand(scratch, offset)); + j(not_equal, no_map_match); + + // Use the transitioned cached map. + offset = transitioned_kind * kPointerSize + + FixedArrayBase::kHeaderSize; + mov(map_in_out, FieldOperand(scratch, offset)); +} + + +void MacroAssembler::LoadGlobalFunction(int index, Register function) { + // Load the global or builtins object from the current context. + mov(function, + Operand(esi, Context::SlotOffset(Context::GLOBAL_OBJECT_INDEX))); + // Load the native context from the global or builtins object. + mov(function, + FieldOperand(function, GlobalObject::kNativeContextOffset)); + // Load the function from the native context. + mov(function, Operand(function, Context::SlotOffset(index))); +} + + +void MacroAssembler::LoadGlobalFunctionInitialMap(Register function, + Register map) { + // Load the initial map. The global functions all have initial maps. + mov(map, FieldOperand(function, JSFunction::kPrototypeOrInitialMapOffset)); + if (emit_debug_code()) { + Label ok, fail; + CheckMap(map, isolate()->factory()->meta_map(), &fail, DO_SMI_CHECK); + jmp(&ok); + bind(&fail); + Abort(kGlobalFunctionsMustHaveInitialMap); + bind(&ok); + } +} + + +// Store the value in register src in the safepoint register stack +// slot for register dst. +void MacroAssembler::StoreToSafepointRegisterSlot(Register dst, Register src) { + mov(SafepointRegisterSlot(dst), src); +} + + +void MacroAssembler::StoreToSafepointRegisterSlot(Register dst, Immediate src) { + mov(SafepointRegisterSlot(dst), src); +} + + +void MacroAssembler::LoadFromSafepointRegisterSlot(Register dst, Register src) { + mov(dst, SafepointRegisterSlot(src)); +} + + +Operand MacroAssembler::SafepointRegisterSlot(Register reg) { + return Operand(esp, SafepointRegisterStackIndex(reg.code()) * kPointerSize); +} + + +int MacroAssembler::SafepointRegisterStackIndex(int reg_code) { + // The registers are pushed starting with the lowest encoding, + // which means that lowest encodings are furthest away from + // the stack pointer. 
+ DCHECK(reg_code >= 0 && reg_code < kNumSafepointRegisters); + return kNumSafepointRegisters - reg_code - 1; +} + + +void MacroAssembler::LoadHeapObject(Register result, + Handle<HeapObject> object) { + AllowDeferredHandleDereference embedding_raw_address; + if (isolate()->heap()->InNewSpace(*object)) { + Handle<Cell> cell = isolate()->factory()->NewCell(object); + mov(result, Operand::ForCell(cell)); + } else { + mov(result, object); + } +} + + +void MacroAssembler::CmpHeapObject(Register reg, Handle<HeapObject> object) { + AllowDeferredHandleDereference using_raw_address; + if (isolate()->heap()->InNewSpace(*object)) { + Handle<Cell> cell = isolate()->factory()->NewCell(object); + cmp(reg, Operand::ForCell(cell)); + } else { + cmp(reg, object); + } +} + + +void MacroAssembler::PushHeapObject(Handle<HeapObject> object) { + AllowDeferredHandleDereference using_raw_address; + if (isolate()->heap()->InNewSpace(*object)) { + Handle<Cell> cell = isolate()->factory()->NewCell(object); + push(Operand::ForCell(cell)); + } else { + Push(object); + } +} + + +void MacroAssembler::Ret() { + ret(0); +} + + +void MacroAssembler::Ret(int bytes_dropped, Register scratch) { + if (is_uint16(bytes_dropped)) { + ret(bytes_dropped); + } else { + pop(scratch); + add(esp, Immediate(bytes_dropped)); + push(scratch); + ret(0); + } +} + + +void MacroAssembler::VerifyX87StackDepth(uint32_t depth) { + // Make sure the floating point stack is either empty or has depth items. + DCHECK(depth <= 7); + // This is very expensive. + DCHECK(FLAG_debug_code && FLAG_enable_slow_asserts); + + // The top-of-stack (tos) is 7 if there is one item pushed. + int tos = (8 - depth) % 8; + const int kTopMask = 0x3800; + push(eax); + fwait(); + fnstsw_ax(); + and_(eax, kTopMask); + shr(eax, 11); + cmp(eax, Immediate(tos)); + Check(equal, kUnexpectedFPUStackDepthAfterInstruction); + fnclex(); + pop(eax); +} + + +void MacroAssembler::Drop(int stack_elements) { + if (stack_elements > 0) { + add(esp, Immediate(stack_elements * kPointerSize)); + } +} + + +void MacroAssembler::Move(Register dst, Register src) { + if (!dst.is(src)) { + mov(dst, src); + } +} + + +void MacroAssembler::Move(Register dst, const Immediate& x) { + if (x.is_zero()) { + xor_(dst, dst); // Shorter than mov of 32-bit immediate 0. 
+ } else { + mov(dst, x); + } +} + + +void MacroAssembler::Move(const Operand& dst, const Immediate& x) { + mov(dst, x); +} + + +void MacroAssembler::SetCounter(StatsCounter* counter, int value) { + if (FLAG_native_code_counters && counter->Enabled()) { + mov(Operand::StaticVariable(ExternalReference(counter)), Immediate(value)); + } +} + + +void MacroAssembler::IncrementCounter(StatsCounter* counter, int value) { + DCHECK(value > 0); + if (FLAG_native_code_counters && counter->Enabled()) { + Operand operand = Operand::StaticVariable(ExternalReference(counter)); + if (value == 1) { + inc(operand); + } else { + add(operand, Immediate(value)); + } + } +} + + +void MacroAssembler::DecrementCounter(StatsCounter* counter, int value) { + DCHECK(value > 0); + if (FLAG_native_code_counters && counter->Enabled()) { + Operand operand = Operand::StaticVariable(ExternalReference(counter)); + if (value == 1) { + dec(operand); + } else { + sub(operand, Immediate(value)); + } + } +} + + +void MacroAssembler::IncrementCounter(Condition cc, + StatsCounter* counter, + int value) { + DCHECK(value > 0); + if (FLAG_native_code_counters && counter->Enabled()) { + Label skip; + j(NegateCondition(cc), &skip); + pushfd(); + IncrementCounter(counter, value); + popfd(); + bind(&skip); + } +} + + +void MacroAssembler::DecrementCounter(Condition cc, + StatsCounter* counter, + int value) { + DCHECK(value > 0); + if (FLAG_native_code_counters && counter->Enabled()) { + Label skip; + j(NegateCondition(cc), &skip); + pushfd(); + DecrementCounter(counter, value); + popfd(); + bind(&skip); + } +} + + +void MacroAssembler::Assert(Condition cc, BailoutReason reason) { + if (emit_debug_code()) Check(cc, reason); +} + + +void MacroAssembler::AssertFastElements(Register elements) { + if (emit_debug_code()) { + Factory* factory = isolate()->factory(); + Label ok; + cmp(FieldOperand(elements, HeapObject::kMapOffset), + Immediate(factory->fixed_array_map())); + j(equal, &ok); + cmp(FieldOperand(elements, HeapObject::kMapOffset), + Immediate(factory->fixed_double_array_map())); + j(equal, &ok); + cmp(FieldOperand(elements, HeapObject::kMapOffset), + Immediate(factory->fixed_cow_array_map())); + j(equal, &ok); + Abort(kJSObjectWithFastElementsMapHasSlowElements); + bind(&ok); + } +} + + +void MacroAssembler::Check(Condition cc, BailoutReason reason) { + Label L; + j(cc, &L); + Abort(reason); + // will not return here + bind(&L); +} + + +void MacroAssembler::CheckStackAlignment() { + int frame_alignment = base::OS::ActivationFrameAlignment(); + int frame_alignment_mask = frame_alignment - 1; + if (frame_alignment > kPointerSize) { + DCHECK(IsPowerOf2(frame_alignment)); + Label alignment_as_expected; + test(esp, Immediate(frame_alignment_mask)); + j(zero, &alignment_as_expected); + // Abort if stack is not aligned. + int3(); + bind(&alignment_as_expected); + } +} + + +void MacroAssembler::Abort(BailoutReason reason) { +#ifdef DEBUG + const char* msg = GetBailoutReason(reason); + if (msg != NULL) { + RecordComment("Abort message: "); + RecordComment(msg); + } + + if (FLAG_trap_on_abort) { + int3(); + return; + } +#endif + + push(Immediate(reinterpret_cast<intptr_t>(Smi::FromInt(reason)))); + // Disable stub call restrictions to always allow calls to abort. + if (!has_frame_) { + // We don't actually want to generate a pile of code for this, so just + // claim there is a stack frame, without generating one. 
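+ // A FrameScope with StackFrame::NONE claims that a frame exists without
+ // emitting any frame setup code, which satisfies the CallRuntime checks.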
+ FrameScope scope(this, StackFrame::NONE);
+ CallRuntime(Runtime::kAbort, 1);
+ } else {
+ CallRuntime(Runtime::kAbort, 1);
+ }
+ // will not return here
+ int3();
+}
+
+
+void MacroAssembler::LoadInstanceDescriptors(Register map,
+ Register descriptors) {
+ mov(descriptors, FieldOperand(map, Map::kDescriptorsOffset));
+}
+
+
+void MacroAssembler::NumberOfOwnDescriptors(Register dst, Register map) {
+ mov(dst, FieldOperand(map, Map::kBitField3Offset));
+ DecodeField<Map::NumberOfOwnDescriptorsBits>(dst);
+}
+
+
+void MacroAssembler::LookupNumberStringCache(Register object,
+ Register result,
+ Register scratch1,
+ Register scratch2,
+ Label* not_found) {
+ // Use of registers. Register result is used as a temporary.
+ Register number_string_cache = result;
+ Register mask = scratch1;
+ Register scratch = scratch2;
+
+ // Load the number string cache.
+ LoadRoot(number_string_cache, Heap::kNumberStringCacheRootIndex);
+ // Make the hash mask from the length of the number string cache. It
+ // contains two elements (number and string) for each cache entry.
+ mov(mask, FieldOperand(number_string_cache, FixedArray::kLengthOffset));
+ shr(mask, kSmiTagSize + 1); // Untag length and divide it by two.
+ sub(mask, Immediate(1)); // Make mask.
+
+ // Calculate the entry in the number string cache. The hash value in the
+ // number string cache for smis is just the smi value, and the hash for
+ // doubles is the xor of the upper and lower words. See
+ // Heap::GetNumberStringCache.
+ Label smi_hash_calculated;
+ Label load_result_from_cache;
+ Label not_smi;
+ STATIC_ASSERT(kSmiTag == 0);
+ JumpIfNotSmi(object, &not_smi, Label::kNear);
+ mov(scratch, object);
+ SmiUntag(scratch);
+ jmp(&smi_hash_calculated, Label::kNear);
+ bind(&not_smi);
+ cmp(FieldOperand(object, HeapObject::kMapOffset),
+ isolate()->factory()->heap_number_map());
+ j(not_equal, not_found);
+ STATIC_ASSERT(8 == kDoubleSize);
+ mov(scratch, FieldOperand(object, HeapNumber::kValueOffset));
+ xor_(scratch, FieldOperand(object, HeapNumber::kValueOffset + 4));
+ // Object is heap number and hash is now in scratch. Calculate cache index.
+ and_(scratch, mask);
+ Register index = scratch;
+ Register probe = mask;
+ mov(probe,
+ FieldOperand(number_string_cache,
+ index,
+ times_twice_pointer_size,
+ FixedArray::kHeaderSize));
+ JumpIfSmi(probe, not_found);
+ fld_d(FieldOperand(object, HeapNumber::kValueOffset));
+ fld_d(FieldOperand(probe, HeapNumber::kValueOffset));
+ FCmp();
+ j(parity_even, not_found); // Bail out if NaN is involved.
+ j(not_equal, not_found); // The cache did not contain this value.
+ jmp(&load_result_from_cache, Label::kNear);
+
+ bind(&smi_hash_calculated);
+ // Object is smi and hash is now in scratch. Calculate cache index.
+ and_(scratch, mask);
+ // Check if the entry is the smi we are looking for.
+ cmp(object,
+ FieldOperand(number_string_cache,
+ index,
+ times_twice_pointer_size,
+ FixedArray::kHeaderSize));
+ j(not_equal, not_found);
+
+ // Get the result from the cache.
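+ // Each cache entry is a (number, string) pair; the string sits one pointer
+ // past the key that was just matched.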
+ bind(&load_result_from_cache);
+ mov(result,
+ FieldOperand(number_string_cache,
+ index,
+ times_twice_pointer_size,
+ FixedArray::kHeaderSize + kPointerSize));
+ IncrementCounter(isolate()->counters()->number_to_string_native(), 1);
+}
+
+
+void MacroAssembler::JumpIfInstanceTypeIsNotSequentialAscii(
+ Register instance_type,
+ Register scratch,
+ Label* failure) {
+ if (!scratch.is(instance_type)) {
+ mov(scratch, instance_type);
+ }
+ and_(scratch,
+ kIsNotStringMask | kStringRepresentationMask | kStringEncodingMask);
+ cmp(scratch, kStringTag | kSeqStringTag | kOneByteStringTag);
+ j(not_equal, failure);
+}
+
+
+void MacroAssembler::JumpIfNotBothSequentialAsciiStrings(Register object1,
+ Register object2,
+ Register scratch1,
+ Register scratch2,
+ Label* failure) {
+ // Check that both objects are not smis.
+ STATIC_ASSERT(kSmiTag == 0);
+ mov(scratch1, object1);
+ and_(scratch1, object2);
+ JumpIfSmi(scratch1, failure);
+
+ // Load instance type for both strings.
+ mov(scratch1, FieldOperand(object1, HeapObject::kMapOffset));
+ mov(scratch2, FieldOperand(object2, HeapObject::kMapOffset));
+ movzx_b(scratch1, FieldOperand(scratch1, Map::kInstanceTypeOffset));
+ movzx_b(scratch2, FieldOperand(scratch2, Map::kInstanceTypeOffset));
+
+ // Check that both are flat ASCII strings.
+ const int kFlatAsciiStringMask =
+ kIsNotStringMask | kStringRepresentationMask | kStringEncodingMask;
+ const int kFlatAsciiStringTag =
+ kStringTag | kOneByteStringTag | kSeqStringTag;
+ // Interleave bits from both instance types and compare them in one check.
+ DCHECK_EQ(0, kFlatAsciiStringMask & (kFlatAsciiStringMask << 3));
+ and_(scratch1, kFlatAsciiStringMask);
+ and_(scratch2, kFlatAsciiStringMask);
+ lea(scratch1, Operand(scratch1, scratch2, times_8, 0));
+ cmp(scratch1, kFlatAsciiStringTag | (kFlatAsciiStringTag << 3));
+ j(not_equal, failure);
+}
+
+
+void MacroAssembler::JumpIfNotUniqueName(Operand operand,
+ Label* not_unique_name,
+ Label::Distance distance) {
+ STATIC_ASSERT(kInternalizedTag == 0 && kStringTag == 0);
+ Label succeed;
+ test(operand, Immediate(kIsNotStringMask | kIsNotInternalizedMask));
+ j(zero, &succeed);
+ cmpb(operand, static_cast<uint8_t>(SYMBOL_TYPE));
+ j(not_equal, not_unique_name, distance);
+
+ bind(&succeed);
+}
+
+
+void MacroAssembler::EmitSeqStringSetCharCheck(Register string,
+ Register index,
+ Register value,
+ uint32_t encoding_mask) {
+ Label is_object;
+ JumpIfNotSmi(string, &is_object, Label::kNear);
+ Abort(kNonObject);
+ bind(&is_object);
+
+ push(value);
+ mov(value, FieldOperand(string, HeapObject::kMapOffset));
+ movzx_b(value, FieldOperand(value, Map::kInstanceTypeOffset));
+
+ and_(value, Immediate(kStringRepresentationMask | kStringEncodingMask));
+ cmp(value, Immediate(encoding_mask));
+ pop(value);
+ Check(equal, kUnexpectedStringType);
+
+ // The index is assumed to be untagged coming in; tag it to compare with the
+ // string length without using a temp register. It is restored at the end of
+ // this function.
+ SmiTag(index);
+ Check(no_overflow, kIndexIsTooLarge);
+
+ cmp(index, FieldOperand(string, String::kLengthOffset));
+ Check(less, kIndexIsTooLarge);
+
+ cmp(index, Immediate(Smi::FromInt(0)));
+ Check(greater_equal, kIndexIsNegative);
+
+ // Restore the index.
+ SmiUntag(index);
+}
+
+
+void MacroAssembler::PrepareCallCFunction(int num_arguments, Register scratch) {
+ int frame_alignment = base::OS::ActivationFrameAlignment();
+ if (frame_alignment != 0) {
+ // Make stack end at alignment and make room for num_arguments words
+ // and the original value of esp.
+ mov(scratch, esp);
+ sub(esp, Immediate((num_arguments + 1) * kPointerSize));
+ DCHECK(IsPowerOf2(frame_alignment));
+ and_(esp, -frame_alignment);
+ mov(Operand(esp, num_arguments * kPointerSize), scratch);
+ } else {
+ sub(esp, Immediate(num_arguments * kPointerSize));
+ }
+}
+
+
+void MacroAssembler::CallCFunction(ExternalReference function,
+ int num_arguments) {
+ // Trashing eax is ok as it will be the return value.
+ mov(eax, Immediate(function));
+ CallCFunction(eax, num_arguments);
+}
+
+
+void MacroAssembler::CallCFunction(Register function,
+ int num_arguments) {
+ DCHECK(has_frame());
+ // Check stack alignment.
+ if (emit_debug_code()) {
+ CheckStackAlignment();
+ }
+
+ call(function);
+ if (base::OS::ActivationFrameAlignment() != 0) {
+ mov(esp, Operand(esp, num_arguments * kPointerSize));
+ } else {
+ add(esp, Immediate(num_arguments * kPointerSize));
+ }
+}
+
+
+#ifdef DEBUG
+bool AreAliased(Register reg1,
+ Register reg2,
+ Register reg3,
+ Register reg4,
+ Register reg5,
+ Register reg6,
+ Register reg7,
+ Register reg8) {
+ int n_of_valid_regs = reg1.is_valid() + reg2.is_valid() +
+ reg3.is_valid() + reg4.is_valid() + reg5.is_valid() + reg6.is_valid() +
+ reg7.is_valid() + reg8.is_valid();
+
+ RegList regs = 0;
+ if (reg1.is_valid()) regs |= reg1.bit();
+ if (reg2.is_valid()) regs |= reg2.bit();
+ if (reg3.is_valid()) regs |= reg3.bit();
+ if (reg4.is_valid()) regs |= reg4.bit();
+ if (reg5.is_valid()) regs |= reg5.bit();
+ if (reg6.is_valid()) regs |= reg6.bit();
+ if (reg7.is_valid()) regs |= reg7.bit();
+ if (reg8.is_valid()) regs |= reg8.bit();
+ int n_of_non_aliasing_regs = NumRegs(regs);
+
+ return n_of_valid_regs != n_of_non_aliasing_regs;
+}
+#endif
+
+
+CodePatcher::CodePatcher(byte* address, int size)
+ : address_(address),
+ size_(size),
+ masm_(NULL, address, size + Assembler::kGap) {
+ // Create a new macro assembler pointing to the address of the code to patch.
+ // The size is adjusted with kGap in order for the assembler to generate size
+ // bytes of instructions without failing with buffer size constraints.
+ DCHECK(masm_.reloc_info_writer.pos() == address_ + size_ + Assembler::kGap);
+}
+
+
+CodePatcher::~CodePatcher() {
+ // Indicate that code has changed.
+ CpuFeatures::FlushICache(address_, size_);
+
+ // Check that the code was patched as expected.
+ DCHECK(masm_.pc_ == address_ + size_); + DCHECK(masm_.reloc_info_writer.pos() == address_ + size_ + Assembler::kGap); +} + + +void MacroAssembler::CheckPageFlag( + Register object, + Register scratch, + int mask, + Condition cc, + Label* condition_met, + Label::Distance condition_met_distance) { + DCHECK(cc == zero || cc == not_zero); + if (scratch.is(object)) { + and_(scratch, Immediate(~Page::kPageAlignmentMask)); + } else { + mov(scratch, Immediate(~Page::kPageAlignmentMask)); + and_(scratch, object); + } + if (mask < (1 << kBitsPerByte)) { + test_b(Operand(scratch, MemoryChunk::kFlagsOffset), + static_cast<uint8_t>(mask)); + } else { + test(Operand(scratch, MemoryChunk::kFlagsOffset), Immediate(mask)); + } + j(cc, condition_met, condition_met_distance); +} + + +void MacroAssembler::CheckPageFlagForMap( + Handle<Map> map, + int mask, + Condition cc, + Label* condition_met, + Label::Distance condition_met_distance) { + DCHECK(cc == zero || cc == not_zero); + Page* page = Page::FromAddress(map->address()); + DCHECK(!serializer_enabled()); // Serializer cannot match page_flags. + ExternalReference reference(ExternalReference::page_flags(page)); + // The inlined static address check of the page's flags relies + // on maps never being compacted. + DCHECK(!isolate()->heap()->mark_compact_collector()-> + IsOnEvacuationCandidate(*map)); + if (mask < (1 << kBitsPerByte)) { + test_b(Operand::StaticVariable(reference), static_cast<uint8_t>(mask)); + } else { + test(Operand::StaticVariable(reference), Immediate(mask)); + } + j(cc, condition_met, condition_met_distance); +} + + +void MacroAssembler::CheckMapDeprecated(Handle<Map> map, + Register scratch, + Label* if_deprecated) { + if (map->CanBeDeprecated()) { + mov(scratch, map); + mov(scratch, FieldOperand(scratch, Map::kBitField3Offset)); + and_(scratch, Immediate(Map::Deprecated::kMask)); + j(not_zero, if_deprecated); + } +} + + +void MacroAssembler::JumpIfBlack(Register object, + Register scratch0, + Register scratch1, + Label* on_black, + Label::Distance on_black_near) { + HasColor(object, scratch0, scratch1, + on_black, on_black_near, + 1, 0); // kBlackBitPattern. + DCHECK(strcmp(Marking::kBlackBitPattern, "10") == 0); +} + + +void MacroAssembler::HasColor(Register object, + Register bitmap_scratch, + Register mask_scratch, + Label* has_color, + Label::Distance has_color_distance, + int first_bit, + int second_bit) { + DCHECK(!AreAliased(object, bitmap_scratch, mask_scratch, ecx)); + + GetMarkBits(object, bitmap_scratch, mask_scratch); + + Label other_color, word_boundary; + test(mask_scratch, Operand(bitmap_scratch, MemoryChunk::kHeaderSize)); + j(first_bit == 1 ? zero : not_zero, &other_color, Label::kNear); + add(mask_scratch, mask_scratch); // Shift left 1 by adding. + j(zero, &word_boundary, Label::kNear); + test(mask_scratch, Operand(bitmap_scratch, MemoryChunk::kHeaderSize)); + j(second_bit == 1 ? not_zero : zero, has_color, has_color_distance); + jmp(&other_color, Label::kNear); + + bind(&word_boundary); + test_b(Operand(bitmap_scratch, MemoryChunk::kHeaderSize + kPointerSize), 1); + + j(second_bit == 1 ? 
+
+
+void MacroAssembler::GetMarkBits(Register addr_reg,
+                                 Register bitmap_reg,
+                                 Register mask_reg) {
+  DCHECK(!AreAliased(addr_reg, mask_reg, bitmap_reg, ecx));
+  mov(bitmap_reg, Immediate(~Page::kPageAlignmentMask));
+  and_(bitmap_reg, addr_reg);
+  mov(ecx, addr_reg);
+  int shift =
+      Bitmap::kBitsPerCellLog2 + kPointerSizeLog2 - Bitmap::kBytesPerCellLog2;
+  shr(ecx, shift);
+  and_(ecx,
+       (Page::kPageAlignmentMask >> shift) & ~(Bitmap::kBytesPerCell - 1));
+
+  add(bitmap_reg, ecx);
+  mov(ecx, addr_reg);
+  shr(ecx, kPointerSizeLog2);
+  and_(ecx, (1 << Bitmap::kBitsPerCellLog2) - 1);
+  mov(mask_reg, Immediate(1));
+  shl_cl(mask_reg);
+}
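GetMarkBits above converts an object address into the address of the 32-bit bitmap cell that holds its mark bits plus a one-bit mask within that cell: the page base comes from masking the address, the cell's byte offset from the word index divided by the cell width, and the mask from the word index modulo the cell width. The same arithmetic as standalone C++; the page and bitmap constants below are assumed stand-ins for Page/Bitmap's real values, used only to make the shifts concrete:

    #include <cstdint>

    const uint32_t kPageAlignmentMask = (1u << 20) - 1;  // assumed 1 MB pages
    const int kPointerSizeLog2 = 2;   // 4-byte words
    const int kBitsPerCellLog2 = 5;   // 32 mark bits per bitmap cell
    const int kBytesPerCellLog2 = 2;  // each cell is 4 bytes

    void GetMarkBits(uint32_t addr, uint32_t* cell_addr, uint32_t* mask) {
      uint32_t page = addr & ~kPageAlignmentMask;  // bitmap_reg: page start
      // Byte offset of the cell covering |addr| (the first shr/and pair);
      // the real code adds MemoryChunk::kHeaderSize when dereferencing.
      int shift = kBitsPerCellLog2 + kPointerSizeLog2 - kBytesPerCellLog2;
      uint32_t cell = (addr & kPageAlignmentMask) >> shift;
      cell &= ~((1u << kBytesPerCellLog2) - 1);  // round down to a cell start
      *cell_addr = page + cell;
      // mask_reg: one mark bit per word, so take the word index mod 32.
      *mask = 1u << ((addr >> kPointerSizeLog2) & ((1u << kBitsPerCellLog2) - 1));
    }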
+
+
+void MacroAssembler::EnsureNotWhite(
+    Register value,
+    Register bitmap_scratch,
+    Register mask_scratch,
+    Label* value_is_white_and_not_data,
+    Label::Distance distance) {
+  DCHECK(!AreAliased(value, bitmap_scratch, mask_scratch, ecx));
+  GetMarkBits(value, bitmap_scratch, mask_scratch);
+
+  // If the value is black or grey we don't need to do anything.
+  DCHECK(strcmp(Marking::kWhiteBitPattern, "00") == 0);
+  DCHECK(strcmp(Marking::kBlackBitPattern, "10") == 0);
+  DCHECK(strcmp(Marking::kGreyBitPattern, "11") == 0);
+  DCHECK(strcmp(Marking::kImpossibleBitPattern, "01") == 0);
+
+  Label done;
+
+  // Since both black and grey have a 1 in the first position and white does
+  // not have a 1 there we only need to check one bit.
+  test(mask_scratch, Operand(bitmap_scratch, MemoryChunk::kHeaderSize));
+  j(not_zero, &done, Label::kNear);
+
+  if (emit_debug_code()) {
+    // Check for impossible bit pattern.
+    Label ok;
+    push(mask_scratch);
+    // shl. May overflow making the check conservative.
+    add(mask_scratch, mask_scratch);
+    test(mask_scratch, Operand(bitmap_scratch, MemoryChunk::kHeaderSize));
+    j(zero, &ok, Label::kNear);
+    int3();
+    bind(&ok);
+    pop(mask_scratch);
+  }
+
+  // Value is white. We check whether it is data that doesn't need scanning.
+  // Currently only checks for HeapNumber and non-cons strings.
+  Register map = ecx;  // Holds map while checking type.
+  Register length = ecx;  // Holds length of object after checking type.
+  Label not_heap_number;
+  Label is_data_object;
+
+  // Check for heap-number
+  mov(map, FieldOperand(value, HeapObject::kMapOffset));
+  cmp(map, isolate()->factory()->heap_number_map());
+  j(not_equal, &not_heap_number, Label::kNear);
+  mov(length, Immediate(HeapNumber::kSize));
+  jmp(&is_data_object, Label::kNear);
+
+  bind(&not_heap_number);
+  // Check for strings.
+  DCHECK(kIsIndirectStringTag == 1 && kIsIndirectStringMask == 1);
+  DCHECK(kNotStringTag == 0x80 && kIsNotStringMask == 0x80);
+  // If it's a string and it's not a cons string then it's an object containing
+  // no GC pointers.
+  Register instance_type = ecx;
+  movzx_b(instance_type, FieldOperand(map, Map::kInstanceTypeOffset));
+  test_b(instance_type, kIsIndirectStringMask | kIsNotStringMask);
+  j(not_zero, value_is_white_and_not_data);
+  // It's a non-indirect (non-cons and non-slice) string.
+  // If it's external, the length is just ExternalString::kSize.
+  // Otherwise it's String::kHeaderSize + string->length() * (1 or 2).
+  Label not_external;
+  // External strings are the only ones with the kExternalStringTag bit
+  // set.
+  DCHECK_EQ(0, kSeqStringTag & kExternalStringTag);
+  DCHECK_EQ(0, kConsStringTag & kExternalStringTag);
+  test_b(instance_type, kExternalStringTag);
+  j(zero, &not_external, Label::kNear);
+  mov(length, Immediate(ExternalString::kSize));
+  jmp(&is_data_object, Label::kNear);
+
+  bind(&not_external);
+  // Sequential string, either ASCII or UC16.
+  DCHECK(kOneByteStringTag == 0x04);
+  and_(length, Immediate(kStringEncodingMask));
+  xor_(length, Immediate(kStringEncodingMask));
+  add(length, Immediate(0x04));
+  // Value now either 4 (if ASCII) or 8 (if UC16), i.e., char-size shifted
+  // by 2. If we multiply the string length as smi by this, it still
+  // won't overflow a 32-bit value.
+  DCHECK_EQ(SeqOneByteString::kMaxSize, SeqTwoByteString::kMaxSize);
+  DCHECK(SeqOneByteString::kMaxSize <=
+         static_cast<int>(0xffffffffu >> (2 + kSmiTagSize)));
+  imul(length, FieldOperand(value, String::kLengthOffset));
+  shr(length, 2 + kSmiTagSize + kSmiShiftSize);
+  add(length, Immediate(SeqString::kHeaderSize + kObjectAlignmentMask));
+  and_(length, Immediate(~kObjectAlignmentMask));
+
+  bind(&is_data_object);
+  // Value is a data object, and it is white. Mark it black. Since we know
+  // that the object is white we can make it black by flipping one bit.
+  or_(Operand(bitmap_scratch, MemoryChunk::kHeaderSize), mask_scratch);
+
+  and_(bitmap_scratch, Immediate(~Page::kPageAlignmentMask));
+  add(Operand(bitmap_scratch, MemoryChunk::kLiveBytesOffset),
+      length);
+  if (emit_debug_code()) {
+    mov(length, Operand(bitmap_scratch, MemoryChunk::kLiveBytesOffset));
+    cmp(length, Operand(bitmap_scratch, MemoryChunk::kSizeOffset));
+    Check(less_equal, kLiveBytesCountOverflowChunkSize);
+  }
+
+  bind(&done);
+}
+
+
+void MacroAssembler::EnumLength(Register dst, Register map) {
+  STATIC_ASSERT(Map::EnumLengthBits::kShift == 0);
+  mov(dst, FieldOperand(map, Map::kBitField3Offset));
+  and_(dst, Immediate(Map::EnumLengthBits::kMask));
+  SmiTag(dst);
+}
+
+
+void MacroAssembler::CheckEnumCache(Label* call_runtime) {
+  Label next, start;
+  mov(ecx, eax);
+
+  // Check if the enum length field is properly initialized, indicating that
+  // there is an enum cache.
+  mov(ebx, FieldOperand(ecx, HeapObject::kMapOffset));
+
+  EnumLength(edx, ebx);
+  cmp(edx, Immediate(Smi::FromInt(kInvalidEnumCacheSentinel)));
+  j(equal, call_runtime);
+
+  jmp(&start);
+
+  bind(&next);
+  mov(ebx, FieldOperand(ecx, HeapObject::kMapOffset));
+
+  // For all objects but the receiver, check that the cache is empty.
+  EnumLength(edx, ebx);
+  cmp(edx, Immediate(Smi::FromInt(0)));
+  j(not_equal, call_runtime);
+
+  bind(&start);
+
+  // Check that there are no elements. Register ecx contains the current JS
+  // object we've reached through the prototype chain.
+  Label no_elements;
+  mov(ecx, FieldOperand(ecx, JSObject::kElementsOffset));
+  cmp(ecx, isolate()->factory()->empty_fixed_array());
+  j(equal, &no_elements);
+
+  // Second chance, the object may be using the empty slow element dictionary.
+  cmp(ecx, isolate()->factory()->empty_slow_element_dictionary());
+  j(not_equal, call_runtime);
+
+  bind(&no_elements);
+  mov(ecx, FieldOperand(ebx, Map::kPrototypeOffset));
+  cmp(ecx, isolate()->factory()->null_value());
+  j(not_equal, &next);
+}
+
+
+void MacroAssembler::TestJSArrayForAllocationMemento(
+    Register receiver_reg,
+    Register scratch_reg,
+    Label* no_memento_found) {
+  ExternalReference new_space_start =
+      ExternalReference::new_space_start(isolate());
+  ExternalReference new_space_allocation_top =
+      ExternalReference::new_space_allocation_top_address(isolate());
+
+  lea(scratch_reg, Operand(receiver_reg,
+      JSArray::kSize + AllocationMemento::kSize - kHeapObjectTag));
+  cmp(scratch_reg, Immediate(new_space_start));
+  j(less, no_memento_found);
+  cmp(scratch_reg, Operand::StaticVariable(new_space_allocation_top));
+  j(greater, no_memento_found);
+  cmp(MemOperand(scratch_reg, -AllocationMemento::kSize),
+      Immediate(isolate()->factory()->allocation_memento_map()));
+}
+
+
+void MacroAssembler::JumpIfDictionaryInPrototypeChain(
+    Register object,
+    Register scratch0,
+    Register scratch1,
+    Label* found) {
+  DCHECK(!scratch1.is(scratch0));
+  Factory* factory = isolate()->factory();
+  Register current = scratch0;
+  Label loop_again;
+
+  // The scratch register starts out holding the object itself.
+  mov(current, object);
+
+  // Loop based on the map going up the prototype chain.
+  bind(&loop_again);
+  mov(current, FieldOperand(current, HeapObject::kMapOffset));
+  mov(scratch1, FieldOperand(current, Map::kBitField2Offset));
+  DecodeField<Map::ElementsKindBits>(scratch1);
+  cmp(scratch1, Immediate(DICTIONARY_ELEMENTS));
+  j(equal, found);
+  mov(current, FieldOperand(current, Map::kPrototypeOffset));
+  cmp(current, Immediate(factory->null_value()));
+  j(not_equal, &loop_again);
+}
+
+
+void MacroAssembler::TruncatingDiv(Register dividend, int32_t divisor) {
+  DCHECK(!dividend.is(eax));
+  DCHECK(!dividend.is(edx));
+  MultiplierAndShift ms(divisor);
+  mov(eax, Immediate(ms.multiplier()));
+  imul(dividend);
+  if (divisor > 0 && ms.multiplier() < 0) add(edx, dividend);
+  if (divisor < 0 && ms.multiplier() > 0) sub(edx, dividend);
+  if (ms.shift() > 0) sar(edx, ms.shift());
+  mov(eax, dividend);
+  shr(eax, 31);
+  add(edx, eax);
+}
+
+
+} }  // namespace v8::internal
+
+#endif  // V8_TARGET_ARCH_X87
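TruncatingDiv above implements signed division by a constant as a multiply-high plus fix-ups, the classic magic-number scheme; MultiplierAndShift supplies the constants. A standalone sketch of the same identity for divisor 7, using the textbook multiplier/shift pair for that divisor (the constants come from the standard construction, not from V8 itself, and an arithmetic right shift of a negative int64 is assumed, as on all mainstream compilers):

    #include <cstdint>

    // Truncating division by 7 via multiply-high, mirroring TruncatingDiv's
    // fix-up steps on ia32.
    int32_t TruncatingDiv7(int32_t n) {
      const int32_t kMultiplier = static_cast<int32_t>(0x92492493u);  // magic
      const int kShift = 2;
      int32_t hi = static_cast<int32_t>(
          (static_cast<int64_t>(kMultiplier) * n) >> 32);  // imul: high half
      hi += n;         // divisor > 0 && multiplier < 0: add(edx, dividend)
      hi >>= kShift;   // sar(edx, shift)
      hi += static_cast<uint32_t>(n) >> 31;  // shr(eax, 31); add(edx, eax)
      return hi;       // equals n / 7 with truncation toward zero
    }

For example, TruncatingDiv7(22) == 3 and TruncatingDiv7(-22) == -3, matching the behavior of C's / operator.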
diff --git a/deps/v8/src/x87/macro-assembler-x87.h b/deps/v8/src/x87/macro-assembler-x87.h
new file mode 100644
index 00000000000..743bebdfe72
--- /dev/null
+++ b/deps/v8/src/x87/macro-assembler-x87.h
@@ -0,0 +1,1100 @@
+// Copyright 2012 the V8 project authors. All rights reserved.
+// Use of this source code is governed by a BSD-style license that can be
+// found in the LICENSE file.
+
+#ifndef V8_X87_MACRO_ASSEMBLER_X87_H_
+#define V8_X87_MACRO_ASSEMBLER_X87_H_
+
+#include "src/assembler.h"
+#include "src/frames.h"
+#include "src/globals.h"
+
+namespace v8 {
+namespace internal {
+
+// Convenience for platform-independent signatures. We do not normally
+// distinguish memory operands from other operands on ia32.
+typedef Operand MemOperand;
+
+enum RememberedSetAction { EMIT_REMEMBERED_SET, OMIT_REMEMBERED_SET };
+enum SmiCheck { INLINE_SMI_CHECK, OMIT_SMI_CHECK };
+enum PointersToHereCheck {
+  kPointersToHereMaybeInteresting,
+  kPointersToHereAreAlwaysInteresting
+};
+
+
+enum RegisterValueType {
+  REGISTER_VALUE_IS_SMI,
+  REGISTER_VALUE_IS_INT32
+};
+
+
+#ifdef DEBUG
+bool AreAliased(Register reg1,
+                Register reg2,
+                Register reg3 = no_reg,
+                Register reg4 = no_reg,
+                Register reg5 = no_reg,
+                Register reg6 = no_reg,
+                Register reg7 = no_reg,
+                Register reg8 = no_reg);
+#endif
+
+
+// MacroAssembler implements a collection of frequently used macros.
+class MacroAssembler: public Assembler {
+ public:
+  // The isolate parameter can be NULL if the macro assembler should
+  // not use isolate-dependent functionality. In this case, it's the
+  // responsibility of the caller to never invoke such a function on the
+  // macro assembler.
+  MacroAssembler(Isolate* isolate, void* buffer, int size);
+
+  void Load(Register dst, const Operand& src, Representation r);
+  void Store(Register src, const Operand& dst, Representation r);
+
+  // Operations on roots in the root-array.
+  void LoadRoot(Register destination, Heap::RootListIndex index);
+  void StoreRoot(Register source, Register scratch, Heap::RootListIndex index);
+  void CompareRoot(Register with, Register scratch, Heap::RootListIndex index);
+  // These methods can only be used with constant roots (i.e. non-writable
+  // and not in new space).
+  void CompareRoot(Register with, Heap::RootListIndex index);
+  void CompareRoot(const Operand& with, Heap::RootListIndex index);
+
+  // ---------------------------------------------------------------------------
+  // GC Support
+  enum RememberedSetFinalAction {
+    kReturnAtEnd,
+    kFallThroughAtEnd
+  };
+
+  // Record in the remembered set the fact that we have a pointer to new space
+  // at the address pointed to by the addr register. Only works if addr is not
+  // in new space.
+  void RememberedSetHelper(Register object,  // Used for debug code.
+                           Register addr,
+                           Register scratch,
+                           RememberedSetFinalAction and_then);
+
+  void CheckPageFlag(Register object,
+                     Register scratch,
+                     int mask,
+                     Condition cc,
+                     Label* condition_met,
+                     Label::Distance condition_met_distance = Label::kFar);
+
+  void CheckPageFlagForMap(
+      Handle<Map> map,
+      int mask,
+      Condition cc,
+      Label* condition_met,
+      Label::Distance condition_met_distance = Label::kFar);
+
+  void CheckMapDeprecated(Handle<Map> map,
+                          Register scratch,
+                          Label* if_deprecated);
+
+  // Check if object is in new space. Jumps if the object is not in new space.
+  // The register scratch can be object itself, but scratch will be clobbered.
+  void JumpIfNotInNewSpace(Register object,
+                           Register scratch,
+                           Label* branch,
+                           Label::Distance distance = Label::kFar) {
+    InNewSpace(object, scratch, zero, branch, distance);
+  }
+
+  // Check if object is in new space. Jumps if the object is in new space.
+  // The register scratch can be object itself, but it will be clobbered.
+  void JumpIfInNewSpace(Register object,
+                        Register scratch,
+                        Label* branch,
+                        Label::Distance distance = Label::kFar) {
+    InNewSpace(object, scratch, not_zero, branch, distance);
+  }
+
+  // Check if an object has a given incremental marking color. Also uses ecx!
+  void HasColor(Register object,
+                Register scratch0,
+                Register scratch1,
+                Label* has_color,
+                Label::Distance has_color_distance,
+                int first_bit,
+                int second_bit);
+
+  void JumpIfBlack(Register object,
+                   Register scratch0,
+                   Register scratch1,
+                   Label* on_black,
+                   Label::Distance on_black_distance = Label::kFar);
+
+  // Checks the color of an object. If the object is already grey or black
+  // then we just fall through, since it is already live. If it is white and
+  // we can determine that it doesn't need to be scanned, then we just mark it
+  // black and fall through. For the rest we jump to the label so the
+  // incremental marker can fix its assumptions.
+  void EnsureNotWhite(Register object,
+                      Register scratch1,
+                      Register scratch2,
+                      Label* object_is_white_and_not_data,
+                      Label::Distance distance);
+
+  // Notify the garbage collector that we wrote a pointer into an object.
+  // |object| is the object being stored into, |value| is the object being
+  // stored. value and scratch registers are clobbered by the operation.
+  // The offset is the offset from the start of the object, not the offset from
+  // the tagged HeapObject pointer. For use with FieldOperand(reg, off).
+  void RecordWriteField(
+      Register object,
+      int offset,
+      Register value,
+      Register scratch,
+      RememberedSetAction remembered_set_action = EMIT_REMEMBERED_SET,
+      SmiCheck smi_check = INLINE_SMI_CHECK,
+      PointersToHereCheck pointers_to_here_check_for_value =
+          kPointersToHereMaybeInteresting);
+
+  // As above, but the offset has the tag presubtracted. For use with
+  // Operand(reg, off).
+  void RecordWriteContextSlot(
+      Register context,
+      int offset,
+      Register value,
+      Register scratch,
+      RememberedSetAction remembered_set_action = EMIT_REMEMBERED_SET,
+      SmiCheck smi_check = INLINE_SMI_CHECK,
+      PointersToHereCheck pointers_to_here_check_for_value =
+          kPointersToHereMaybeInteresting) {
+    RecordWriteField(context,
+                     offset + kHeapObjectTag,
+                     value,
+                     scratch,
+                     remembered_set_action,
+                     smi_check,
+                     pointers_to_here_check_for_value);
+  }
+
+  // Notify the garbage collector that we wrote a pointer into a fixed array.
+  // |array| is the array being stored into, |value| is the
+  // object being stored. |index| is the array index represented as a
+  // Smi. All registers are clobbered by the operation. RecordWriteArray
+  // filters out smis so it does not update the write barrier if the
+  // value is a smi.
+  void RecordWriteArray(
+      Register array,
+      Register value,
+      Register index,
+      RememberedSetAction remembered_set_action = EMIT_REMEMBERED_SET,
+      SmiCheck smi_check = INLINE_SMI_CHECK,
+      PointersToHereCheck pointers_to_here_check_for_value =
+          kPointersToHereMaybeInteresting);
+
+  // For page containing |object| mark region covering |address|
+  // dirty. |object| is the object being stored into, |value| is the
+  // object being stored. The address and value registers are clobbered by the
+  // operation. RecordWrite filters out smis so it does not update the
+  // write barrier if the value is a smi.
+  void RecordWrite(
+      Register object,
+      Register address,
+      Register value,
+      RememberedSetAction remembered_set_action = EMIT_REMEMBERED_SET,
+      SmiCheck smi_check = INLINE_SMI_CHECK,
+      PointersToHereCheck pointers_to_here_check_for_value =
+          kPointersToHereMaybeInteresting);
+
+  // For page containing |object| mark the region covering the object's map
+  // dirty. |object| is the object being stored into, |map| is the Map object
+  // that was stored.
+ void RecordWriteForMap( + Register object, + Handle<Map> map, + Register scratch1, + Register scratch2); + + // --------------------------------------------------------------------------- + // Debugger Support + + void DebugBreak(); + + // Generates function and stub prologue code. + void StubPrologue(); + void Prologue(bool code_pre_aging); + + // Enter specific kind of exit frame. Expects the number of + // arguments in register eax and sets up the number of arguments in + // register edi and the pointer to the first argument in register + // esi. + void EnterExitFrame(); + + void EnterApiExitFrame(int argc); + + // Leave the current exit frame. Expects the return value in + // register eax:edx (untouched) and the pointer to the first + // argument in register esi. + void LeaveExitFrame(); + + // Leave the current exit frame. Expects the return value in + // register eax (untouched). + void LeaveApiExitFrame(bool restore_context); + + // Find the function context up the context chain. + void LoadContext(Register dst, int context_chain_length); + + // Conditionally load the cached Array transitioned map of type + // transitioned_kind from the native context if the map in register + // map_in_out is the cached Array map in the native context of + // expected_kind. + void LoadTransitionedArrayMapConditional( + ElementsKind expected_kind, + ElementsKind transitioned_kind, + Register map_in_out, + Register scratch, + Label* no_map_match); + + // Load the global function with the given index. + void LoadGlobalFunction(int index, Register function); + + // Load the initial map from the global function. The registers + // function and map can be the same. + void LoadGlobalFunctionInitialMap(Register function, Register map); + + // Push and pop the registers that can hold pointers. + void PushSafepointRegisters() { pushad(); } + void PopSafepointRegisters() { popad(); } + // Store the value in register/immediate src in the safepoint + // register stack slot for register dst. + void StoreToSafepointRegisterSlot(Register dst, Register src); + void StoreToSafepointRegisterSlot(Register dst, Immediate src); + void LoadFromSafepointRegisterSlot(Register dst, Register src); + + void LoadHeapObject(Register result, Handle<HeapObject> object); + void CmpHeapObject(Register reg, Handle<HeapObject> object); + void PushHeapObject(Handle<HeapObject> object); + + void LoadObject(Register result, Handle<Object> object) { + AllowDeferredHandleDereference heap_object_check; + if (object->IsHeapObject()) { + LoadHeapObject(result, Handle<HeapObject>::cast(object)); + } else { + Move(result, Immediate(object)); + } + } + + void CmpObject(Register reg, Handle<Object> object) { + AllowDeferredHandleDereference heap_object_check; + if (object->IsHeapObject()) { + CmpHeapObject(reg, Handle<HeapObject>::cast(object)); + } else { + cmp(reg, Immediate(object)); + } + } + + // --------------------------------------------------------------------------- + // JavaScript invokes + + // Invoke the JavaScript function code by either calling or jumping. + void InvokeCode(Register code, + const ParameterCount& expected, + const ParameterCount& actual, + InvokeFlag flag, + const CallWrapper& call_wrapper) { + InvokeCode(Operand(code), expected, actual, flag, call_wrapper); + } + + void InvokeCode(const Operand& code, + const ParameterCount& expected, + const ParameterCount& actual, + InvokeFlag flag, + const CallWrapper& call_wrapper); + + // Invoke the JavaScript function in the given register. 
Changes the
+  // current context to the context in the function before invoking.
+  void InvokeFunction(Register function,
+                      const ParameterCount& actual,
+                      InvokeFlag flag,
+                      const CallWrapper& call_wrapper);
+
+  void InvokeFunction(Register function,
+                      const ParameterCount& expected,
+                      const ParameterCount& actual,
+                      InvokeFlag flag,
+                      const CallWrapper& call_wrapper);
+
+  void InvokeFunction(Handle<JSFunction> function,
+                      const ParameterCount& expected,
+                      const ParameterCount& actual,
+                      InvokeFlag flag,
+                      const CallWrapper& call_wrapper);
+
+  // Invoke specified builtin JavaScript function. Adds an entry to
+  // the unresolved list if the name does not resolve.
+  void InvokeBuiltin(Builtins::JavaScript id,
+                     InvokeFlag flag,
+                     const CallWrapper& call_wrapper = NullCallWrapper());
+
+  // Store the function for the given builtin in the target register.
+  void GetBuiltinFunction(Register target, Builtins::JavaScript id);
+
+  // Store the code object for the given builtin in the target register.
+  void GetBuiltinEntry(Register target, Builtins::JavaScript id);
+
+  // Expression support
+  // Support for constant splitting.
+  bool IsUnsafeImmediate(const Immediate& x);
+  void SafeMove(Register dst, const Immediate& x);
+  void SafePush(const Immediate& x);
+
+  // Compare object type for heap object.
+  // Incoming register is heap_object and outgoing register is map.
+  void CmpObjectType(Register heap_object, InstanceType type, Register map);
+
+  // Compare instance type for map.
+  void CmpInstanceType(Register map, InstanceType type);
+
+  // Check if a map for a JSObject indicates that the object has fast elements.
+  // Jump to the specified label if it does not.
+  void CheckFastElements(Register map,
+                         Label* fail,
+                         Label::Distance distance = Label::kFar);
+
+  // Check if a map for a JSObject indicates that the object can have both smi
+  // and HeapObject elements. Jump to the specified label if it does not.
+  void CheckFastObjectElements(Register map,
+                               Label* fail,
+                               Label::Distance distance = Label::kFar);
+
+  // Check if a map for a JSObject indicates that the object has fast smi only
+  // elements. Jump to the specified label if it does not.
+  void CheckFastSmiElements(Register map,
+                            Label* fail,
+                            Label::Distance distance = Label::kFar);
+
+  // Check to see if maybe_number can be stored as a double in
+  // FastDoubleElements. If it can, store it at the index specified by key in
+  // the FastDoubleElements array elements, otherwise jump to fail.
+  void StoreNumberToDoubleElements(Register maybe_number,
+                                   Register elements,
+                                   Register key,
+                                   Register scratch,
+                                   Label* fail,
+                                   int offset = 0);
+
+  // Compare an object's map with the specified map.
+  void CompareMap(Register obj, Handle<Map> map);
+
+  // Check if the map of an object is equal to a specified map and branch to
+  // label if not. Skip the smi check if not required (object is known to be a
+  // heap object). If mode is ALLOW_ELEMENT_TRANSITION_MAPS, then also match
+  // against maps that are ElementsKind transition maps of the specified map.
+  void CheckMap(Register obj,
+                Handle<Map> map,
+                Label* fail,
+                SmiCheckType smi_check_type);
+
+  // Check if the map of an object is equal to a specified map and branch to a
+  // specified target if equal. Skip the smi check if not required (object is
+  // known to be a heap object).
+  void DispatchMap(Register obj,
+                   Register unused,
+                   Handle<Map> map,
+                   Handle<Code> success,
+                   SmiCheckType smi_check_type);
+
+  // Check if the object in register heap_object is a string. Afterwards the
+  // register map contains the object map and the register instance_type
+  // contains the instance_type. The registers map and instance_type can be the
+  // same in which case it contains the instance type afterwards. Either of the
+  // registers map and instance_type can be the same as heap_object.
+  Condition IsObjectStringType(Register heap_object,
+                               Register map,
+                               Register instance_type);
+
+  // Check if the object in register heap_object is a name. Afterwards the
+  // register map contains the object map and the register instance_type
+  // contains the instance_type. The registers map and instance_type can be the
+  // same in which case it contains the instance type afterwards. Either of the
+  // registers map and instance_type can be the same as heap_object.
+  Condition IsObjectNameType(Register heap_object,
+                             Register map,
+                             Register instance_type);
+
+  // Check if a heap object's type is in the JSObject range, not including
+  // JSFunction. The object's map will be loaded in the map register.
+  // Any or all of the three registers may be the same.
+  // The contents of the scratch register will always be overwritten.
+  void IsObjectJSObjectType(Register heap_object,
+                            Register map,
+                            Register scratch,
+                            Label* fail);
+
+  // The contents of the scratch register will be overwritten.
+  void IsInstanceJSObjectType(Register map, Register scratch, Label* fail);
+
+  // FCmp is similar to integer cmp, but requires unsigned
+  // jcc instructions (je, ja, jae, jb, jbe, and jz).
+  void FCmp();
+
+  void ClampUint8(Register reg);
+
+  void SlowTruncateToI(Register result_reg, Register input_reg,
+      int offset = HeapNumber::kValueOffset - kHeapObjectTag);
+
+  void TruncateHeapNumberToI(Register result_reg, Register input_reg);
+  void TruncateX87TOSToI(Register result_reg);
+
+  void X87TOSToI(Register result_reg, MinusZeroMode minus_zero_mode,
+      Label* conversion_failed, Label::Distance dst = Label::kFar);
+
+  void TaggedToI(Register result_reg, Register input_reg,
+      MinusZeroMode minus_zero_mode, Label* lost_precision);
+
+  // Smi tagging support.
+  void SmiTag(Register reg) {
+    STATIC_ASSERT(kSmiTag == 0);
+    STATIC_ASSERT(kSmiTagSize == 1);
+    add(reg, reg);
+  }
+  void SmiUntag(Register reg) {
+    sar(reg, kSmiTagSize);
+  }
+
+  // Modifies the register even if it does not contain a Smi!
+  void SmiUntag(Register reg, Label* is_smi) {
+    STATIC_ASSERT(kSmiTagSize == 1);
+    sar(reg, kSmiTagSize);
+    STATIC_ASSERT(kSmiTag == 0);
+    j(not_carry, is_smi);
+  }
+
+  void LoadUint32NoSSE2(Register src);
+
+  // Jump if the register contains a smi.
+  inline void JumpIfSmi(Register value,
+                        Label* smi_label,
+                        Label::Distance distance = Label::kFar) {
+    test(value, Immediate(kSmiTagMask));
+    j(zero, smi_label, distance);
+  }
+  // Jump if the operand is a smi.
+  inline void JumpIfSmi(Operand value,
+                        Label* smi_label,
+                        Label::Distance distance = Label::kFar) {
+    test(value, Immediate(kSmiTagMask));
+    j(zero, smi_label, distance);
+  }
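The smi helpers above rely on the tagging scheme asserted by the STATIC_ASSERTs: a small integer is stored shifted left by one with tag bit 0, so add(reg, reg) tags, an arithmetic right shift untags, and test(value, kSmiTagMask) distinguishes smis from heap pointers (whose low bit is 1). A plain C++ sketch of the encoding, with the constants assumed to match the ia32 layout:

    #include <cstdint>

    const int32_t kSmiTag = 0;     // low bit of a smi
    const int kSmiTagSize = 1;
    const int32_t kSmiTagMask = 1;

    int32_t SmiTagValue(int32_t v) { return v + v; }               // add(reg, reg)
    int32_t SmiUntagValue(int32_t s) { return s >> kSmiTagSize; }  // sar
    bool IsSmi(int32_t word) { return (word & kSmiTagMask) == kSmiTag; }

    // IsSmi(SmiTagValue(-42)) is true, and SmiUntagValue(SmiTagValue(-42))
    // yields -42 again; any tagged heap pointer fails the IsSmi test.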
+  // Jump if the register contains a non-smi.
+  inline void JumpIfNotSmi(Register value,
+                           Label* not_smi_label,
+                           Label::Distance distance = Label::kFar) {
+    test(value, Immediate(kSmiTagMask));
+    j(not_zero, not_smi_label, distance);
+  }
+
+  void LoadInstanceDescriptors(Register map, Register descriptors);
+  void EnumLength(Register dst, Register map);
+  void NumberOfOwnDescriptors(Register dst, Register map);
+
+  template<typename Field>
+  void DecodeField(Register reg) {
+    static const int shift = Field::kShift;
+    static const int mask = Field::kMask >> Field::kShift;
+    if (shift != 0) {
+      sar(reg, shift);
+    }
+    and_(reg, Immediate(mask));
+  }
+
+  template<typename Field>
+  void DecodeFieldToSmi(Register reg) {
+    static const int shift = Field::kShift;
+    static const int mask = (Field::kMask >> Field::kShift) << kSmiTagSize;
+    STATIC_ASSERT((mask & (0x80000000u >> (kSmiTagSize - 1))) == 0);
+    STATIC_ASSERT(kSmiTag == 0);
+    if (shift < kSmiTagSize) {
+      shl(reg, kSmiTagSize - shift);
+    } else if (shift > kSmiTagSize) {
+      sar(reg, shift - kSmiTagSize);
+    }
+    and_(reg, Immediate(mask));
+  }
+
+  // Abort execution if argument is not a number, enabled via --debug-code.
+  void AssertNumber(Register object);
+
+  // Abort execution if argument is not a smi, enabled via --debug-code.
+  void AssertSmi(Register object);
+
+  // Abort execution if argument is a smi, enabled via --debug-code.
+  void AssertNotSmi(Register object);
+
+  // Abort execution if argument is not a string, enabled via --debug-code.
+  void AssertString(Register object);
+
+  // Abort execution if argument is not a name, enabled via --debug-code.
+  void AssertName(Register object);
+
+  // Abort execution if argument is not undefined or an AllocationSite, enabled
+  // via --debug-code.
+  void AssertUndefinedOrAllocationSite(Register object);
+
+  // ---------------------------------------------------------------------------
+  // Exception handling
+
+  // Push a new try handler and link it into try handler chain.
+  void PushTryHandler(StackHandler::Kind kind, int handler_index);
+
+  // Unlink the stack handler on top of the stack from the try handler chain.
+  void PopTryHandler();
+
+  // Throw to the top handler in the try handler chain.
+  void Throw(Register value);
+
+  // Throw past all JS frames to the top JS entry frame.
+  void ThrowUncatchable(Register value);
+
+  // ---------------------------------------------------------------------------
+  // Inline caching support
+
+  // Generate code for checking access rights - used for security checks
+  // on access to global objects across environments. The holder register
+  // is left untouched, but the scratch register is clobbered.
+  void CheckAccessGlobalProxy(Register holder_reg,
+                              Register scratch1,
+                              Register scratch2,
+                              Label* miss);
+
+  void GetNumberHash(Register r0, Register scratch);
+
+  void LoadFromNumberDictionary(Label* miss,
+                                Register elements,
+                                Register key,
+                                Register r0,
+                                Register r1,
+                                Register r2,
+                                Register result);
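DecodeField<Field> above extracts a packed bitfield by shifting the word right by Field::kShift and masking with Field::kMask shifted down to bit zero. A minimal sketch with a made-up field layout; FakeField is a hypothetical stand-in for a V8 BitField, not a real one:

    #include <cstdint>

    // Hypothetical 5-bit field occupying bits 3..7 of a packed word.
    struct FakeField {
      static const int kShift = 3;
      static const int kMask = 0x1F << 3;
    };

    template <typename Field>
    uint32_t DecodeField(uint32_t word) {
      // Same two steps as the assembler version: shift, then mask.
      return (word >> Field::kShift) & (Field::kMask >> Field::kShift);
    }

    // DecodeField<FakeField>(0xFFu) == 0x1F; DecodeField<FakeField>(0x08u) == 1.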
+
+
+  // ---------------------------------------------------------------------------
+  // Allocation support
+
+  // Allocate an object in new space or old pointer space. If the given space
+  // is exhausted control continues at the gc_required label. The allocated
+  // object is returned in result and end of the new object is returned in
+  // result_end. The register scratch can be passed as no_reg in which case
+  // an additional object reference will be added to the reloc info. The
+  // returned pointers in result and result_end have not yet been tagged as
+  // heap objects. If result_contains_top_on_entry is true the content of
+  // result is known to be the allocation top on entry (could be result_end
+  // from a previous call). If result_contains_top_on_entry is true scratch
+  // should be no_reg as it is never used.
+  void Allocate(int object_size,
+                Register result,
+                Register result_end,
+                Register scratch,
+                Label* gc_required,
+                AllocationFlags flags);
+
+  void Allocate(int header_size,
+                ScaleFactor element_size,
+                Register element_count,
+                RegisterValueType element_count_type,
+                Register result,
+                Register result_end,
+                Register scratch,
+                Label* gc_required,
+                AllocationFlags flags);
+
+  void Allocate(Register object_size,
+                Register result,
+                Register result_end,
+                Register scratch,
+                Label* gc_required,
+                AllocationFlags flags);
+
+  // Undo allocation in new space. The object passed and objects allocated after
+  // it will no longer be allocated. Make sure that no pointers are left to the
+  // object(s) no longer allocated as they would be invalid when allocation is
+  // undone.
+  void UndoAllocationInNewSpace(Register object);
+
+  // Allocate a heap number in new space with undefined value. The
+  // register scratch2 can be passed as no_reg; the others must be
+  // valid registers. Returns tagged pointer in result register, or
+  // jumps to gc_required if new space is full.
+  void AllocateHeapNumber(Register result,
+                          Register scratch1,
+                          Register scratch2,
+                          Label* gc_required,
+                          MutableMode mode = IMMUTABLE);
+
+  // Allocate a sequential string. All the header fields of the string object
+  // are initialized.
+  void AllocateTwoByteString(Register result,
+                             Register length,
+                             Register scratch1,
+                             Register scratch2,
+                             Register scratch3,
+                             Label* gc_required);
+  void AllocateAsciiString(Register result,
+                           Register length,
+                           Register scratch1,
+                           Register scratch2,
+                           Register scratch3,
+                           Label* gc_required);
+  void AllocateAsciiString(Register result,
+                           int length,
+                           Register scratch1,
+                           Register scratch2,
+                           Label* gc_required);
+
+  // Allocate a raw cons string object. Only the map field of the result is
+  // initialized.
+  void AllocateTwoByteConsString(Register result,
+                                 Register scratch1,
+                                 Register scratch2,
+                                 Label* gc_required);
+  void AllocateAsciiConsString(Register result,
+                               Register scratch1,
+                               Register scratch2,
+                               Label* gc_required);
+
+  // Allocate a raw sliced string object. Only the map field of the result is
+  // initialized.
+  void AllocateTwoByteSlicedString(Register result,
+                                   Register scratch1,
+                                   Register scratch2,
+                                   Label* gc_required);
+  void AllocateAsciiSlicedString(Register result,
+                                 Register scratch1,
+                                 Register scratch2,
+                                 Label* gc_required);
+
+  // Copy memory, byte-by-byte, from source to destination. Not optimized for
+  // long or aligned copies.
+  // The contents of index and scratch are destroyed.
+  void CopyBytes(Register source,
+                 Register destination,
+                 Register length,
+                 Register scratch);
+
+  // Initialize fields with filler values. Fields starting at |start_offset|
+  // not including end_offset are overwritten with the value in |filler|. At
+  // the end of the loop, |start_offset| takes the value of |end_offset|.
+  void InitializeFieldsWithFiller(Register start_offset,
+                                  Register end_offset,
+                                  Register filler);
+
+  // ---------------------------------------------------------------------------
+  // Support functions.
+
+  // Check a boolean-bit of a Smi field.
+  void BooleanBitTest(Register object, int field_offset, int bit_index);
+
+  // Check if result is zero and op is negative.
+  void NegativeZeroTest(Register result, Register op, Label* then_label);
+
+  // Check if result is zero and any of op1 and op2 are negative.
+  // Register scratch is destroyed, and it must be different from op2.
+  void NegativeZeroTest(Register result, Register op1, Register op2,
+                        Register scratch, Label* then_label);
+
+  // Try to get the prototype of a function and put the value in the result
+  // register. Checks that the function really is a function and jumps to the
+  // miss label if the fast checks fail. The function register will be
+  // untouched; the other registers may be clobbered.
+  void TryGetFunctionPrototype(Register function,
+                               Register result,
+                               Register scratch,
+                               Label* miss,
+                               bool miss_on_bound_function = false);
+
+  // Picks out an array index from the hash field.
+  // Register use:
+  //   hash - holds the index's hash. Clobbered.
+  //   index - holds the overwritten index on exit.
+  void IndexFromHash(Register hash, Register index);
+
+  // ---------------------------------------------------------------------------
+  // Runtime calls
+
+  // Call a code stub. Generate the code if necessary.
+  void CallStub(CodeStub* stub, TypeFeedbackId ast_id = TypeFeedbackId::None());
+
+  // Tail call a code stub (jump). Generate the code if necessary.
+  void TailCallStub(CodeStub* stub);
+
+  // Return from a code stub after popping its arguments.
+  void StubReturn(int argc);
+
+  // Call a runtime routine.
+  void CallRuntime(const Runtime::Function* f, int num_arguments);
+  // Convenience function: Same as above, but takes the fid instead.
+  void CallRuntime(Runtime::FunctionId id) {
+    const Runtime::Function* function = Runtime::FunctionForId(id);
+    CallRuntime(function, function->nargs);
+  }
+  void CallRuntime(Runtime::FunctionId id, int num_arguments) {
+    CallRuntime(Runtime::FunctionForId(id), num_arguments);
+  }
+
+  // Convenience function: call an external reference.
+  void CallExternalReference(ExternalReference ref, int num_arguments);
+
+  // Tail call of a runtime routine (jump).
+  // Like JumpToExternalReference, but also takes care of passing the number
+  // of parameters.
+  void TailCallExternalReference(const ExternalReference& ext,
+                                 int num_arguments,
+                                 int result_size);
+
+  // Convenience function: tail call a runtime routine (jump).
+  void TailCallRuntime(Runtime::FunctionId fid,
+                       int num_arguments,
+                       int result_size);
+
+  // Before calling a C-function from generated code, align arguments on stack.
+  // After aligning the frame, arguments must be stored in esp[0], esp[4],
+  // etc., not pushed. The argument count assumes all arguments are word sized.
+  // Some compilers/platforms require the stack to be aligned when calling
+  // C++ code.
+  // Needs a scratch register to do some arithmetic. This register will be
+  // trashed.
+  void PrepareCallCFunction(int num_arguments, Register scratch);
+
+  // Calls a C function and cleans up the space for arguments allocated
+  // by PrepareCallCFunction. The called function is not allowed to trigger a
+  // garbage collection, since that might move the code and invalidate the
+  // return address (unless this is somehow accounted for by the called
+  // function).
+  void CallCFunction(ExternalReference function, int num_arguments);
+  void CallCFunction(Register function, int num_arguments);
+
+  // Prepares stack to put arguments (aligns and so on). Reserves
+  // space for return value if needed (assumes the return value is a handle).
+ // Arguments must be stored in ApiParameterOperand(0), ApiParameterOperand(1) + // etc. Saves context (esi). If space was reserved for return value then + // stores the pointer to the reserved slot into esi. + void PrepareCallApiFunction(int argc); + + // Calls an API function. Allocates HandleScope, extracts returned value + // from handle and propagates exceptions. Clobbers ebx, edi and + // caller-save registers. Restores context. On return removes + // stack_space * kPointerSize (GCed). + void CallApiFunctionAndReturn(Register function_address, + ExternalReference thunk_ref, + Operand thunk_last_arg, + int stack_space, + Operand return_value_operand, + Operand* context_restore_operand); + + // Jump to a runtime routine. + void JumpToExternalReference(const ExternalReference& ext); + + // --------------------------------------------------------------------------- + // Utilities + + void Ret(); + + // Return and drop arguments from stack, where the number of arguments + // may be bigger than 2^16 - 1. Requires a scratch register. + void Ret(int bytes_dropped, Register scratch); + + // Emit code to discard a non-negative number of pointer-sized elements + // from the stack, clobbering only the esp register. + void Drop(int element_count); + + void Call(Label* target) { call(target); } + void Push(Register src) { push(src); } + void Pop(Register dst) { pop(dst); } + + // Emit call to the code we are currently generating. + void CallSelf() { + Handle<Code> self(reinterpret_cast<Code**>(CodeObject().location())); + call(self, RelocInfo::CODE_TARGET); + } + + // Move if the registers are not identical. + void Move(Register target, Register source); + + // Move a constant into a destination using the most efficient encoding. + void Move(Register dst, const Immediate& x); + void Move(const Operand& dst, const Immediate& x); + + // Push a handle value. + void Push(Handle<Object> handle) { push(Immediate(handle)); } + void Push(Smi* smi) { Push(Handle<Smi>(smi, isolate())); } + + Handle<Object> CodeObject() { + DCHECK(!code_object_.is_null()); + return code_object_; + } + + // Insert code to verify that the x87 stack has the specified depth (0-7) + void VerifyX87StackDepth(uint32_t depth); + + // Emit code for a truncating division by a constant. The dividend register is + // unchanged, the result is in edx, and eax gets clobbered. + void TruncatingDiv(Register dividend, int32_t divisor); + + // --------------------------------------------------------------------------- + // StatsCounter support + + void SetCounter(StatsCounter* counter, int value); + void IncrementCounter(StatsCounter* counter, int value); + void DecrementCounter(StatsCounter* counter, int value); + void IncrementCounter(Condition cc, StatsCounter* counter, int value); + void DecrementCounter(Condition cc, StatsCounter* counter, int value); + + + // --------------------------------------------------------------------------- + // Debugging + + // Calls Abort(msg) if the condition cc is not satisfied. + // Use --debug_code to enable. + void Assert(Condition cc, BailoutReason reason); + + void AssertFastElements(Register elements); + + // Like Assert(), but always enabled. + void Check(Condition cc, BailoutReason reason); + + // Print a message to stdout and abort execution. + void Abort(BailoutReason reason); + + // Check that the stack is aligned. + void CheckStackAlignment(); + + // Verify restrictions about code generated in stubs. 
+ void set_generating_stub(bool value) { generating_stub_ = value; } + bool generating_stub() { return generating_stub_; } + void set_has_frame(bool value) { has_frame_ = value; } + bool has_frame() { return has_frame_; } + inline bool AllowThisStubCall(CodeStub* stub); + + // --------------------------------------------------------------------------- + // String utilities. + + // Generate code to do a lookup in the number string cache. If the number in + // the register object is found in the cache the generated code falls through + // with the result in the result register. The object and the result register + // can be the same. If the number is not found in the cache the code jumps to + // the label not_found with only the content of register object unchanged. + void LookupNumberStringCache(Register object, + Register result, + Register scratch1, + Register scratch2, + Label* not_found); + + // Check whether the instance type represents a flat ASCII string. Jump to the + // label if not. If the instance type can be scratched specify same register + // for both instance type and scratch. + void JumpIfInstanceTypeIsNotSequentialAscii(Register instance_type, + Register scratch, + Label* on_not_flat_ascii_string); + + // Checks if both objects are sequential ASCII strings, and jumps to label + // if either is not. + void JumpIfNotBothSequentialAsciiStrings(Register object1, + Register object2, + Register scratch1, + Register scratch2, + Label* on_not_flat_ascii_strings); + + // Checks if the given register or operand is a unique name + void JumpIfNotUniqueName(Register reg, Label* not_unique_name, + Label::Distance distance = Label::kFar) { + JumpIfNotUniqueName(Operand(reg), not_unique_name, distance); + } + + void JumpIfNotUniqueName(Operand operand, Label* not_unique_name, + Label::Distance distance = Label::kFar); + + void EmitSeqStringSetCharCheck(Register string, + Register index, + Register value, + uint32_t encoding_mask); + + static int SafepointRegisterStackIndex(Register reg) { + return SafepointRegisterStackIndex(reg.code()); + } + + // Activation support. + void EnterFrame(StackFrame::Type type); + void LeaveFrame(StackFrame::Type type); + + // Expects object in eax and returns map with validated enum cache + // in eax. Assumes that any other register can be used as a scratch. + void CheckEnumCache(Label* call_runtime); + + // AllocationMemento support. Arrays may have an associated + // AllocationMemento object that can be checked for in order to pretransition + // to another type. + // On entry, receiver_reg should point to the array object. + // scratch_reg gets clobbered. + // If allocation info is present, conditional code is set to equal. + void TestJSArrayForAllocationMemento(Register receiver_reg, + Register scratch_reg, + Label* no_memento_found); + + void JumpIfJSArrayHasAllocationMemento(Register receiver_reg, + Register scratch_reg, + Label* memento_found) { + Label no_memento_found; + TestJSArrayForAllocationMemento(receiver_reg, scratch_reg, + &no_memento_found); + j(equal, memento_found); + bind(&no_memento_found); + } + + // Jumps to found label if a prototype map has dictionary elements. + void JumpIfDictionaryInPrototypeChain(Register object, Register scratch0, + Register scratch1, Label* found); + + private: + bool generating_stub_; + bool has_frame_; + // This handle will be patched with the code object on installation. + Handle<Object> code_object_; + + // Helper functions for generating invokes. 
+  void InvokePrologue(const ParameterCount& expected,
+                      const ParameterCount& actual,
+                      Handle<Code> code_constant,
+                      const Operand& code_operand,
+                      Label* done,
+                      bool* definitely_mismatches,
+                      InvokeFlag flag,
+                      Label::Distance done_distance,
+                      const CallWrapper& call_wrapper = NullCallWrapper());
+
+  void EnterExitFramePrologue();
+  void EnterExitFrameEpilogue(int argc);
+
+  void LeaveExitFrameEpilogue(bool restore_context);
+
+  // Allocation support helpers.
+  void LoadAllocationTopHelper(Register result,
+                               Register scratch,
+                               AllocationFlags flags);
+
+  void UpdateAllocationTopHelper(Register result_end,
+                                 Register scratch,
+                                 AllocationFlags flags);
+
+  // Helper for implementing JumpIfNotInNewSpace and JumpIfInNewSpace.
+  void InNewSpace(Register object,
+                  Register scratch,
+                  Condition cc,
+                  Label* condition_met,
+                  Label::Distance condition_met_distance = Label::kFar);
+
+  // Helper for finding the mark bits for an address. Afterwards, the
+  // bitmap register points at the word with the mark bits and the mask
+  // register holds the position of the first bit. Uses ecx as scratch and
+  // leaves addr_reg unchanged.
+  inline void GetMarkBits(Register addr_reg,
+                          Register bitmap_reg,
+                          Register mask_reg);
+
+  // Helper for throwing exceptions. Compute a handler address and jump to
+  // it. See the implementation for register usage.
+  void JumpToHandlerEntry();
+
+  // Compute memory operands for safepoint stack slots.
+  Operand SafepointRegisterSlot(Register reg);
+  static int SafepointRegisterStackIndex(int reg_code);
+
+  // Needs access to SafepointRegisterStackIndex for compiled frame
+  // traversal.
+  friend class StandardFrame;
+};
+
+
+// The code patcher is used to patch (typically) small parts of code e.g. for
+// debugging and other types of instrumentation. When using the code patcher
+// the exact number of bytes specified must be emitted. It is not legal to
+// emit relocation information. If any of these constraints are violated it
+// causes an assertion to fail.
+class CodePatcher {
+ public:
+  CodePatcher(byte* address, int size);
+  virtual ~CodePatcher();
+
+  // Macro assembler to emit code.
+  MacroAssembler* masm() { return &masm_; }
+
+ private:
+  byte* address_;  // The address of the code being patched.
+  int size_;  // Number of bytes of the expected patch size.
+  MacroAssembler masm_;  // Macro assembler used to generate the code.
+};
+
+
+// -----------------------------------------------------------------------------
+// Static helper functions.
+
+// Generate an Operand for loading a field from an object.
+inline Operand FieldOperand(Register object, int offset) {
+  return Operand(object, offset - kHeapObjectTag);
+}
+
+
+// Generate an Operand for loading an indexed field from an object.
+inline Operand FieldOperand(Register object,
+                            Register index,
+                            ScaleFactor scale,
+                            int offset) {
+  return Operand(object, index, scale, offset - kHeapObjectTag);
+}
+
+
+inline Operand FixedArrayElementOperand(Register array,
+                                        Register index_as_smi,
+                                        int additional_offset = 0) {
+  int offset = FixedArray::kHeaderSize + additional_offset * kPointerSize;
+  return FieldOperand(array, index_as_smi, times_half_pointer_size, offset);
+}
+
+
+inline Operand ContextOperand(Register context, int index) {
+  return Operand(context, Context::SlotOffset(index));
+}
+
+
+inline Operand GlobalObjectOperand() {
+  return ContextOperand(esi, Context::GLOBAL_OBJECT_INDEX);
+}
+
+
+// Generates an Operand for saving parameters after PrepareCallApiFunction.
+Operand ApiParameterOperand(int index); + + +#ifdef GENERATED_CODE_COVERAGE +extern void LogGeneratedCodeCoverage(const char* file_line); +#define CODE_COVERAGE_STRINGIFY(x) #x +#define CODE_COVERAGE_TOSTRING(x) CODE_COVERAGE_STRINGIFY(x) +#define __FILE_LINE__ __FILE__ ":" CODE_COVERAGE_TOSTRING(__LINE__) +#define ACCESS_MASM(masm) { \ + byte* ia32_coverage_function = \ + reinterpret_cast<byte*>(FUNCTION_ADDR(LogGeneratedCodeCoverage)); \ + masm->pushfd(); \ + masm->pushad(); \ + masm->push(Immediate(reinterpret_cast<int>(&__FILE_LINE__))); \ + masm->call(ia32_coverage_function, RelocInfo::RUNTIME_ENTRY); \ + masm->pop(eax); \ + masm->popad(); \ + masm->popfd(); \ + } \ + masm-> +#else +#define ACCESS_MASM(masm) masm-> +#endif + + +} } // namespace v8::internal + +#endif // V8_X87_MACRO_ASSEMBLER_X87_H_ diff --git a/deps/v8/src/x87/regexp-macro-assembler-x87.cc b/deps/v8/src/x87/regexp-macro-assembler-x87.cc new file mode 100644 index 00000000000..54dd52f23aa --- /dev/null +++ b/deps/v8/src/x87/regexp-macro-assembler-x87.cc @@ -0,0 +1,1309 @@ +// Copyright 2012 the V8 project authors. All rights reserved. +// Use of this source code is governed by a BSD-style license that can be +// found in the LICENSE file. + +#include "src/v8.h" + +#if V8_TARGET_ARCH_X87 + +#include "src/cpu-profiler.h" +#include "src/log.h" +#include "src/macro-assembler.h" +#include "src/regexp-macro-assembler.h" +#include "src/regexp-stack.h" +#include "src/unicode.h" +#include "src/x87/regexp-macro-assembler-x87.h" + +namespace v8 { +namespace internal { + +#ifndef V8_INTERPRETED_REGEXP +/* + * This assembler uses the following register assignment convention + * - edx : Current character. Must be loaded using LoadCurrentCharacter + * before using any of the dispatch methods. Temporarily stores the + * index of capture start after a matching pass for a global regexp. + * - edi : Current position in input, as negative offset from end of string. + * Please notice that this is the byte offset, not the character offset! + * - esi : end of input (points to byte after last character in input). + * - ebp : Frame pointer. Used to access arguments, local variables and + * RegExp registers. + * - esp : Points to tip of C stack. + * - ecx : Points to tip of backtrack stack + * + * The registers eax and ebx are free to use for computations. + * + * Each call to a public method should retain this convention. + * The stack will have the following structure: + * - Isolate* isolate (address of the current isolate) + * - direct_call (if 1, direct call from JavaScript code, if 0 + * call through the runtime system) + * - stack_area_base (high end of the memory area to use as + * backtracking stack) + * - capture array size (may fit multiple sets of matches) + * - int* capture_array (int[num_saved_registers_], for output). + * - end of input (address of end of string) + * - start of input (address of first character in string) + * - start index (character index of start) + * - String* input_string (location of a handle containing the string) + * --- frame alignment (if applicable) --- + * - return address + * ebp-> - old ebp + * - backup of caller esi + * - backup of caller edi + * - backup of caller ebx + * - success counter (only for global regexps to count matches). + * - Offset of location before start of input (effectively character + * position -1). Used to initialize capture registers to a non-position. 
+ *       - register 0  ebp[-4]  (only positions must be stored in the first
+ *       - register 1  ebp[-8]   num_saved_registers_ registers)
+ *       - ...
+ *
+ * The first num_saved_registers_ registers are initialized to point to
+ * "character -1" in the string (i.e., char_size() bytes before the first
+ * character of the string). The remaining registers start out as garbage.
+ *
+ * The data up to the return address must be placed there by the calling
+ * code, by calling the code entry as cast to a function with the signature:
+ * int (*match)(String* input_string,
+ *              int start_index,
+ *              Address start,
+ *              Address end,
+ *              int* capture_output_array,
+ *              bool at_start,
+ *              byte* stack_area_base,
+ *              bool direct_call)
+ */
+
+#define __ ACCESS_MASM(masm_)
+
+RegExpMacroAssemblerX87::RegExpMacroAssemblerX87(
+    Mode mode,
+    int registers_to_save,
+    Zone* zone)
+    : NativeRegExpMacroAssembler(zone),
+      masm_(new MacroAssembler(zone->isolate(), NULL, kRegExpCodeSize)),
+      mode_(mode),
+      num_registers_(registers_to_save),
+      num_saved_registers_(registers_to_save),
+      entry_label_(),
+      start_label_(),
+      success_label_(),
+      backtrack_label_(),
+      exit_label_() {
+  DCHECK_EQ(0, registers_to_save % 2);
+  __ jmp(&entry_label_);   // We'll write the entry code later.
+  __ bind(&start_label_);  // And then continue from here.
+}
+
+
+RegExpMacroAssemblerX87::~RegExpMacroAssemblerX87() {
+  delete masm_;
+  // Unuse labels in case we throw away the assembler without calling GetCode.
+  entry_label_.Unuse();
+  start_label_.Unuse();
+  success_label_.Unuse();
+  backtrack_label_.Unuse();
+  exit_label_.Unuse();
+  check_preempt_label_.Unuse();
+  stack_overflow_label_.Unuse();
+}
+
+
+int RegExpMacroAssemblerX87::stack_limit_slack()  {
+  return RegExpStack::kStackLimitSlack;
+}
+
+
+void RegExpMacroAssemblerX87::AdvanceCurrentPosition(int by) {
+  if (by != 0) {
+    __ add(edi, Immediate(by * char_size()));
+  }
+}
+
+
+void RegExpMacroAssemblerX87::AdvanceRegister(int reg, int by) {
+  DCHECK(reg >= 0);
+  DCHECK(reg < num_registers_);
+  if (by != 0) {
+    __ add(register_location(reg), Immediate(by));
+  }
+}
+
+
+void RegExpMacroAssemblerX87::Backtrack() {
+  CheckPreemption();
+  // Pop Code* offset from backtrack stack, add Code* and jump to location.
+  Pop(ebx);
+  __ add(ebx, Immediate(masm_->CodeObject()));
+  __ jmp(ebx);
+}
+
+
+void RegExpMacroAssemblerX87::Bind(Label* label) {
+  __ bind(label);
+}
+
+
+void RegExpMacroAssemblerX87::CheckCharacter(uint32_t c, Label* on_equal) {
+  __ cmp(current_character(), c);
+  BranchOrBacktrack(equal, on_equal);
+}
+
+
+void RegExpMacroAssemblerX87::CheckCharacterGT(uc16 limit, Label* on_greater) {
+  __ cmp(current_character(), limit);
+  BranchOrBacktrack(greater, on_greater);
+}
+
+
+void RegExpMacroAssemblerX87::CheckAtStart(Label* on_at_start) {
+  Label not_at_start;
+  // Did we start the match at the start of the string at all?
+  __ cmp(Operand(ebp, kStartIndex), Immediate(0));
+  BranchOrBacktrack(not_equal, &not_at_start);
+  // If we did, are we still at the start of the input?
+  __ lea(eax, Operand(esi, edi, times_1, 0));
+  __ cmp(eax, Operand(ebp, kInputStart));
+  BranchOrBacktrack(equal, on_at_start);
+  __ bind(&not_at_start);
+}
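Per the register-convention comment above, the current position (edi) is kept as a negative byte offset from the end of the input (esi), so lea(eax, Operand(esi, edi, times_1, 0)) materializes the absolute address and comparing it against kInputStart answers whether the match is still at the start. A tiny C++ illustration of that representation, using plain pointers rather than anything V8-specific:

    #include <cassert>
    #include <cstring>

    int main() {
      const char* input = "hello";
      const char* end = input + std::strlen(input);     // esi: end of input
      int pos = -static_cast<int>(std::strlen(input));  // edi: first character
      assert(end + pos == input);  // esi + edi == start of input
      assert(end[pos] == 'h');     // the current character
      pos += 1;                    // AdvanceCurrentPosition(1), one-byte mode
      assert(end[pos] == 'e');
      return 0;
    }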
+
+
+void RegExpMacroAssemblerX87::CheckNotAtStart(Label* on_not_at_start) {
+  // Did we start the match at the start of the string at all?
+  __ cmp(Operand(ebp, kStartIndex), Immediate(0));
+  BranchOrBacktrack(not_equal, on_not_at_start);
+  // If we did, are we still at the start of the input?
+  __ lea(eax, Operand(esi, edi, times_1, 0));
+  __ cmp(eax, Operand(ebp, kInputStart));
+  BranchOrBacktrack(not_equal, on_not_at_start);
+}
+
+
+void RegExpMacroAssemblerX87::CheckCharacterLT(uc16 limit, Label* on_less) {
+  __ cmp(current_character(), limit);
+  BranchOrBacktrack(less, on_less);
+}
+
+
+void RegExpMacroAssemblerX87::CheckGreedyLoop(Label* on_equal) {
+  Label fallthrough;
+  __ cmp(edi, Operand(backtrack_stackpointer(), 0));
+  __ j(not_equal, &fallthrough);
+  __ add(backtrack_stackpointer(), Immediate(kPointerSize));  // Pop.
+  BranchOrBacktrack(no_condition, on_equal);
+  __ bind(&fallthrough);
+}
+
+
+void RegExpMacroAssemblerX87::CheckNotBackReferenceIgnoreCase(
+    int start_reg,
+    Label* on_no_match) {
+  Label fallthrough;
+  __ mov(edx, register_location(start_reg));  // Index of start of capture
+  __ mov(ebx, register_location(start_reg + 1));  // Index of end of capture
+  __ sub(ebx, edx);  // Length of capture.
+
+  // The length of a capture should not be negative. This can only happen
+  // if the end of the capture is unrecorded, or at a point earlier than
+  // the start of the capture.
+  BranchOrBacktrack(less, on_no_match);
+
+  // If length is zero, either the capture is empty or it is completely
+  // uncaptured. In either case succeed immediately.
+  __ j(equal, &fallthrough);
+
+  // Check that there are sufficient characters left in the input.
+  __ mov(eax, edi);
+  __ add(eax, ebx);
+  BranchOrBacktrack(greater, on_no_match);
+
+  if (mode_ == ASCII) {
+    Label success;
+    Label fail;
+    Label loop_increment;
+    // Save register contents to make the registers available below.
+    __ push(edi);
+    __ push(backtrack_stackpointer());
+    // After this, the eax, ecx, and edi registers are available.
+
+    __ add(edx, esi);  // Start of capture
+    __ add(edi, esi);  // Start of text to match against capture.
+    __ add(ebx, edi);  // End of text to match against capture.
+
+    Label loop;
+    __ bind(&loop);
+    __ movzx_b(eax, Operand(edi, 0));
+    __ cmpb_al(Operand(edx, 0));
+    __ j(equal, &loop_increment);
+
+    // Mismatch, try case-insensitive match (converting letters to lower-case).
+    __ or_(eax, 0x20);  // Convert match character to lower-case.
+    __ lea(ecx, Operand(eax, -'a'));
+    __ cmp(ecx, static_cast<int32_t>('z' - 'a'));  // Is eax a lowercase letter?
+    Label convert_capture;
+    __ j(below_equal, &convert_capture);  // In range 'a'-'z'.
+    // Latin-1: Check for values in range [224,254] but not 247.
+    __ sub(ecx, Immediate(224 - 'a'));
+    __ cmp(ecx, Immediate(254 - 224));
+    __ j(above, &fail);  // Weren't Latin-1 letters.
+    __ cmp(ecx, Immediate(247 - 224));  // Check for 247.
+    __ j(equal, &fail);
+    __ bind(&convert_capture);
+    // Also convert capture character.
+    __ movzx_b(ecx, Operand(edx, 0));
+    __ or_(ecx, 0x20);
+
+    __ cmp(eax, ecx);
+    __ j(not_equal, &fail);
+
+    __ bind(&loop_increment);
+    // Increment pointers into match and capture strings.
+    __ add(edx, Immediate(1));
+    __ add(edi, Immediate(1));
+    // Compare to end of match, and loop if not done.
+    __ cmp(edi, ebx);
+    __ j(below, &loop);
+    __ jmp(&success);
+
+    __ bind(&fail);
+    // Restore original values before failing.
+    __ pop(backtrack_stackpointer());
+    __ pop(edi);
+    BranchOrBacktrack(no_condition, on_no_match);
+
+    __ bind(&success);
+    // Restore original value before continuing.
+    __ pop(backtrack_stackpointer());
+    // Drop original value of character position.
+    __ add(esp, Immediate(kPointerSize));
+    // Compute new value of character position after the matched part.
+ __ sub(edi, esi); + } else { + DCHECK(mode_ == UC16); + // Save registers before calling C function. + __ push(esi); + __ push(edi); + __ push(backtrack_stackpointer()); + __ push(ebx); + + static const int argument_count = 4; + __ PrepareCallCFunction(argument_count, ecx); + // Put arguments into allocated stack area, last argument highest on stack. + // Parameters are + // Address byte_offset1 - Address captured substring's start. + // Address byte_offset2 - Address of current character position. + // size_t byte_length - length of capture in bytes(!) + // Isolate* isolate + + // Set isolate. + __ mov(Operand(esp, 3 * kPointerSize), + Immediate(ExternalReference::isolate_address(isolate()))); + // Set byte_length. + __ mov(Operand(esp, 2 * kPointerSize), ebx); + // Set byte_offset2. + // Found by adding negative string-end offset of current position (edi) + // to end of string. + __ add(edi, esi); + __ mov(Operand(esp, 1 * kPointerSize), edi); + // Set byte_offset1. + // Start of capture, where edx already holds string-end negative offset. + __ add(edx, esi); + __ mov(Operand(esp, 0 * kPointerSize), edx); + + { + AllowExternalCallThatCantCauseGC scope(masm_); + ExternalReference compare = + ExternalReference::re_case_insensitive_compare_uc16(isolate()); + __ CallCFunction(compare, argument_count); + } + // Pop original values before reacting on result value. + __ pop(ebx); + __ pop(backtrack_stackpointer()); + __ pop(edi); + __ pop(esi); + + // Check if function returned non-zero for success or zero for failure. + __ or_(eax, eax); + BranchOrBacktrack(zero, on_no_match); + // On success, increment position by length of capture. + __ add(edi, ebx); + } + __ bind(&fallthrough); +} + + +void RegExpMacroAssemblerX87::CheckNotBackReference( + int start_reg, + Label* on_no_match) { + Label fallthrough; + Label success; + Label fail; + + // Find length of back-referenced capture. + __ mov(edx, register_location(start_reg)); + __ mov(eax, register_location(start_reg + 1)); + __ sub(eax, edx); // Length to check. + // Fail on partial or illegal capture (start of capture after end of capture). + BranchOrBacktrack(less, on_no_match); + // Succeed on empty capture (including no capture) + __ j(equal, &fallthrough); + + // Check that there are sufficient characters left in the input. + __ mov(ebx, edi); + __ add(ebx, eax); + BranchOrBacktrack(greater, on_no_match); + + // Save register to make it available below. + __ push(backtrack_stackpointer()); + + // Compute pointers to match string and capture string + __ lea(ebx, Operand(esi, edi, times_1, 0)); // Start of match. + __ add(edx, esi); // Start of capture. + __ lea(ecx, Operand(eax, ebx, times_1, 0)); // End of match + + Label loop; + __ bind(&loop); + if (mode_ == ASCII) { + __ movzx_b(eax, Operand(edx, 0)); + __ cmpb_al(Operand(ebx, 0)); + } else { + DCHECK(mode_ == UC16); + __ movzx_w(eax, Operand(edx, 0)); + __ cmpw_ax(Operand(ebx, 0)); + } + __ j(not_equal, &fail); + // Increment pointers into capture and match string. + __ add(edx, Immediate(char_size())); + __ add(ebx, Immediate(char_size())); + // Check if we have reached end of match area. + __ cmp(ebx, ecx); + __ j(below, &loop); + __ jmp(&success); + + __ bind(&fail); + // Restore backtrack stackpointer. + __ pop(backtrack_stackpointer()); + BranchOrBacktrack(no_condition, on_no_match); + + __ bind(&success); + // Move current character position to position after match. + __ mov(edi, ecx); + __ sub(edi, esi); + // Restore backtrack stackpointer. 
+ __ pop(backtrack_stackpointer()); + + __ bind(&fallthrough); +} + + +void RegExpMacroAssemblerX87::CheckNotCharacter(uint32_t c, + Label* on_not_equal) { + __ cmp(current_character(), c); + BranchOrBacktrack(not_equal, on_not_equal); +} + + +void RegExpMacroAssemblerX87::CheckCharacterAfterAnd(uint32_t c, + uint32_t mask, + Label* on_equal) { + if (c == 0) { + __ test(current_character(), Immediate(mask)); + } else { + __ mov(eax, mask); + __ and_(eax, current_character()); + __ cmp(eax, c); + } + BranchOrBacktrack(equal, on_equal); +} + + +void RegExpMacroAssemblerX87::CheckNotCharacterAfterAnd(uint32_t c, + uint32_t mask, + Label* on_not_equal) { + if (c == 0) { + __ test(current_character(), Immediate(mask)); + } else { + __ mov(eax, mask); + __ and_(eax, current_character()); + __ cmp(eax, c); + } + BranchOrBacktrack(not_equal, on_not_equal); +} + + +void RegExpMacroAssemblerX87::CheckNotCharacterAfterMinusAnd( + uc16 c, + uc16 minus, + uc16 mask, + Label* on_not_equal) { + DCHECK(minus < String::kMaxUtf16CodeUnit); + __ lea(eax, Operand(current_character(), -minus)); + if (c == 0) { + __ test(eax, Immediate(mask)); + } else { + __ and_(eax, mask); + __ cmp(eax, c); + } + BranchOrBacktrack(not_equal, on_not_equal); +} + + +void RegExpMacroAssemblerX87::CheckCharacterInRange( + uc16 from, + uc16 to, + Label* on_in_range) { + __ lea(eax, Operand(current_character(), -from)); + __ cmp(eax, to - from); + BranchOrBacktrack(below_equal, on_in_range); +} + + +void RegExpMacroAssemblerX87::CheckCharacterNotInRange( + uc16 from, + uc16 to, + Label* on_not_in_range) { + __ lea(eax, Operand(current_character(), -from)); + __ cmp(eax, to - from); + BranchOrBacktrack(above, on_not_in_range); +} + + +void RegExpMacroAssemblerX87::CheckBitInTable( + Handle<ByteArray> table, + Label* on_bit_set) { + __ mov(eax, Immediate(table)); + Register index = current_character(); + if (mode_ != ASCII || kTableMask != String::kMaxOneByteCharCode) { + __ mov(ebx, kTableSize - 1); + __ and_(ebx, current_character()); + index = ebx; + } + __ cmpb(FieldOperand(eax, index, times_1, ByteArray::kHeaderSize), 0); + BranchOrBacktrack(not_equal, on_bit_set); +} + + +bool RegExpMacroAssemblerX87::CheckSpecialCharacterClass(uc16 type, + Label* on_no_match) { + // Range checks (c in min..max) are generally implemented by an unsigned + // (c - min) <= (max - min) check + switch (type) { + case 's': + // Match space-characters + if (mode_ == ASCII) { + // One byte space characters are '\t'..'\r', ' ' and \u00a0. + Label success; + __ cmp(current_character(), ' '); + __ j(equal, &success, Label::kNear); + // Check range 0x09..0x0d + __ lea(eax, Operand(current_character(), -'\t')); + __ cmp(eax, '\r' - '\t'); + __ j(below_equal, &success, Label::kNear); + // \u00a0 (NBSP). + __ cmp(eax, 0x00a0 - '\t'); + BranchOrBacktrack(not_equal, on_no_match); + __ bind(&success); + return true; + } + return false; + case 'S': + // The emitted code for generic character classes is good enough. 
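+    // Returning false here makes the compiler fall back on that generic
+    // character-class code instead of emitting a special-case sequence.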
+ return false; + case 'd': + // Match ASCII digits ('0'..'9') + __ lea(eax, Operand(current_character(), -'0')); + __ cmp(eax, '9' - '0'); + BranchOrBacktrack(above, on_no_match); + return true; + case 'D': + // Match non ASCII-digits + __ lea(eax, Operand(current_character(), -'0')); + __ cmp(eax, '9' - '0'); + BranchOrBacktrack(below_equal, on_no_match); + return true; + case '.': { + // Match non-newlines (not 0x0a('\n'), 0x0d('\r'), 0x2028 and 0x2029) + __ mov(eax, current_character()); + __ xor_(eax, Immediate(0x01)); + // See if current character is '\n'^1 or '\r'^1, i.e., 0x0b or 0x0c + __ sub(eax, Immediate(0x0b)); + __ cmp(eax, 0x0c - 0x0b); + BranchOrBacktrack(below_equal, on_no_match); + if (mode_ == UC16) { + // Compare original value to 0x2028 and 0x2029, using the already + // computed (current_char ^ 0x01 - 0x0b). I.e., check for + // 0x201d (0x2028 - 0x0b) or 0x201e. + __ sub(eax, Immediate(0x2028 - 0x0b)); + __ cmp(eax, 0x2029 - 0x2028); + BranchOrBacktrack(below_equal, on_no_match); + } + return true; + } + case 'w': { + if (mode_ != ASCII) { + // Table is 128 entries, so all ASCII characters can be tested. + __ cmp(current_character(), Immediate('z')); + BranchOrBacktrack(above, on_no_match); + } + DCHECK_EQ(0, word_character_map[0]); // Character '\0' is not a word char. + ExternalReference word_map = ExternalReference::re_word_character_map(); + __ test_b(current_character(), + Operand::StaticArray(current_character(), times_1, word_map)); + BranchOrBacktrack(zero, on_no_match); + return true; + } + case 'W': { + Label done; + if (mode_ != ASCII) { + // Table is 128 entries, so all ASCII characters can be tested. + __ cmp(current_character(), Immediate('z')); + __ j(above, &done); + } + DCHECK_EQ(0, word_character_map[0]); // Character '\0' is not a word char. + ExternalReference word_map = ExternalReference::re_word_character_map(); + __ test_b(current_character(), + Operand::StaticArray(current_character(), times_1, word_map)); + BranchOrBacktrack(not_zero, on_no_match); + if (mode_ != ASCII) { + __ bind(&done); + } + return true; + } + // Non-standard classes (with no syntactic shorthand) used internally. + case '*': + // Match any character. + return true; + case 'n': { + // Match newlines (0x0a('\n'), 0x0d('\r'), 0x2028 or 0x2029). + // The opposite of '.'. + __ mov(eax, current_character()); + __ xor_(eax, Immediate(0x01)); + // See if current character is '\n'^1 or '\r'^1, i.e., 0x0b or 0x0c + __ sub(eax, Immediate(0x0b)); + __ cmp(eax, 0x0c - 0x0b); + if (mode_ == ASCII) { + BranchOrBacktrack(above, on_no_match); + } else { + Label done; + BranchOrBacktrack(below_equal, &done); + DCHECK_EQ(UC16, mode_); + // Compare original value to 0x2028 and 0x2029, using the already + // computed (current_char ^ 0x01 - 0x0b). I.e., check for + // 0x201d (0x2028 - 0x0b) or 0x201e. + __ sub(eax, Immediate(0x2028 - 0x0b)); + __ cmp(eax, 1); + BranchOrBacktrack(above, on_no_match); + __ bind(&done); + } + return true; + } + // No custom implementation (yet): s(UC16), S(UC16). + default: + return false; + } +} + + +void RegExpMacroAssemblerX87::Fail() { + STATIC_ASSERT(FAILURE == 0); // Return value for failure is zero. + if (!global()) { + __ Move(eax, Immediate(FAILURE)); + } + __ jmp(&exit_label_); +} + + +Handle<HeapObject> RegExpMacroAssemblerX87::GetCode(Handle<String> source) { + Label return_eax; + // Finalize code - write the entry point code now we know how many + // registers we need. 
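+  // (The constructor emitted a jmp to entry_label_ before any regexp body
+  // code, so although this prologue is written last, it still executes
+  // first at run time, ahead of the code bound after start_label_.)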
+ + // Entry code: + __ bind(&entry_label_); + + // Tell the system that we have a stack frame. Because the type is MANUAL, no + // code is generated. + FrameScope scope(masm_, StackFrame::MANUAL); + + // Actually emit code to start a new stack frame. + __ push(ebp); + __ mov(ebp, esp); + // Save callee-save registers. Order here should correspond to order of + // kBackup_ebx etc. + __ push(esi); + __ push(edi); + __ push(ebx); // Callee-save on MacOS. + __ push(Immediate(0)); // Number of successful matches in a global regexp. + __ push(Immediate(0)); // Make room for "input start - 1" constant. + + // Check if we have space on the stack for registers. + Label stack_limit_hit; + Label stack_ok; + + ExternalReference stack_limit = + ExternalReference::address_of_stack_limit(isolate()); + __ mov(ecx, esp); + __ sub(ecx, Operand::StaticVariable(stack_limit)); + // Handle it if the stack pointer is already below the stack limit. + __ j(below_equal, &stack_limit_hit); + // Check if there is room for the variable number of registers above + // the stack limit. + __ cmp(ecx, num_registers_ * kPointerSize); + __ j(above_equal, &stack_ok); + // Exit with OutOfMemory exception. There is not enough space on the stack + // for our working registers. + __ mov(eax, EXCEPTION); + __ jmp(&return_eax); + + __ bind(&stack_limit_hit); + CallCheckStackGuardState(ebx); + __ or_(eax, eax); + // If returned value is non-zero, we exit with the returned value as result. + __ j(not_zero, &return_eax); + + __ bind(&stack_ok); + // Load start index for later use. + __ mov(ebx, Operand(ebp, kStartIndex)); + + // Allocate space on stack for registers. + __ sub(esp, Immediate(num_registers_ * kPointerSize)); + // Load string length. + __ mov(esi, Operand(ebp, kInputEnd)); + // Load input position. + __ mov(edi, Operand(ebp, kInputStart)); + // Set up edi to be negative offset from string end. + __ sub(edi, esi); + + // Set eax to address of char before start of the string. + // (effectively string position -1). + __ neg(ebx); + if (mode_ == UC16) { + __ lea(eax, Operand(edi, ebx, times_2, -char_size())); + } else { + __ lea(eax, Operand(edi, ebx, times_1, -char_size())); + } + // Store this value in a local variable, for use when clearing + // position registers. + __ mov(Operand(ebp, kInputStartMinusOne), eax); + +#if V8_OS_WIN + // Ensure that we write to each stack page, in order. Skipping a page + // on Windows can cause segmentation faults. Assuming page size is 4k. + const int kPageSize = 4096; + const int kRegistersPerPage = kPageSize / kPointerSize; + for (int i = num_saved_registers_ + kRegistersPerPage - 1; + i < num_registers_; + i += kRegistersPerPage) { + __ mov(register_location(i), eax); // One write every page. + } +#endif // V8_OS_WIN + + Label load_char_start_regexp, start_regexp; + // Load newline if index is at start, previous character otherwise. + __ cmp(Operand(ebp, kStartIndex), Immediate(0)); + __ j(not_equal, &load_char_start_regexp, Label::kNear); + __ mov(current_character(), '\n'); + __ jmp(&start_regexp, Label::kNear); + + // Global regexp restarts matching here. + __ bind(&load_char_start_regexp); + // Load previous char as initial value of current character register. + LoadCurrentCharacterUnchecked(-1, 1); + __ bind(&start_regexp); + + // Initialize on-stack registers. + if (num_saved_registers_ > 0) { // Always is, if generated from a regexp. 
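+    // Note: eax still holds the "input start - 1" value stored at
+    // kInputStartMinusOne above; it is the value used for captures that
+    // have not (yet) matched.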
+ // Fill saved registers with initial value = start offset - 1 + // Fill in stack push order, to avoid accessing across an unwritten + // page (a problem on Windows). + if (num_saved_registers_ > 8) { + __ mov(ecx, kRegisterZero); + Label init_loop; + __ bind(&init_loop); + __ mov(Operand(ebp, ecx, times_1, 0), eax); + __ sub(ecx, Immediate(kPointerSize)); + __ cmp(ecx, kRegisterZero - num_saved_registers_ * kPointerSize); + __ j(greater, &init_loop); + } else { // Unroll the loop. + for (int i = 0; i < num_saved_registers_; i++) { + __ mov(register_location(i), eax); + } + } + } + + // Initialize backtrack stack pointer. + __ mov(backtrack_stackpointer(), Operand(ebp, kStackHighEnd)); + + __ jmp(&start_label_); + + // Exit code: + if (success_label_.is_linked()) { + // Save captures when successful. + __ bind(&success_label_); + if (num_saved_registers_ > 0) { + // copy captures to output + __ mov(ebx, Operand(ebp, kRegisterOutput)); + __ mov(ecx, Operand(ebp, kInputEnd)); + __ mov(edx, Operand(ebp, kStartIndex)); + __ sub(ecx, Operand(ebp, kInputStart)); + if (mode_ == UC16) { + __ lea(ecx, Operand(ecx, edx, times_2, 0)); + } else { + __ add(ecx, edx); + } + for (int i = 0; i < num_saved_registers_; i++) { + __ mov(eax, register_location(i)); + if (i == 0 && global_with_zero_length_check()) { + // Keep capture start in edx for the zero-length check later. + __ mov(edx, eax); + } + // Convert to index from start of string, not end. + __ add(eax, ecx); + if (mode_ == UC16) { + __ sar(eax, 1); // Convert byte index to character index. + } + __ mov(Operand(ebx, i * kPointerSize), eax); + } + } + + if (global()) { + // Restart matching if the regular expression is flagged as global. + // Increment success counter. + __ inc(Operand(ebp, kSuccessfulCaptures)); + // Capture results have been stored, so the number of remaining global + // output registers is reduced by the number of stored captures. + __ mov(ecx, Operand(ebp, kNumOutputRegisters)); + __ sub(ecx, Immediate(num_saved_registers_)); + // Check whether we have enough room for another set of capture results. + __ cmp(ecx, Immediate(num_saved_registers_)); + __ j(less, &exit_label_); + + __ mov(Operand(ebp, kNumOutputRegisters), ecx); + // Advance the location for output. + __ add(Operand(ebp, kRegisterOutput), + Immediate(num_saved_registers_ * kPointerSize)); + + // Prepare eax to initialize registers with its value in the next run. + __ mov(eax, Operand(ebp, kInputStartMinusOne)); + + if (global_with_zero_length_check()) { + // Special case for zero-length matches. + // edx: capture start index + __ cmp(edi, edx); + // Not a zero-length match, restart. + __ j(not_equal, &load_char_start_regexp); + // edi (offset from the end) is zero if we already reached the end. + __ test(edi, edi); + __ j(zero, &exit_label_, Label::kNear); + // Advance current position after a zero-length match. + if (mode_ == UC16) { + __ add(edi, Immediate(2)); + } else { + __ inc(edi); + } + } + + __ jmp(&load_char_start_regexp); + } else { + __ mov(eax, Immediate(SUCCESS)); + } + } + + __ bind(&exit_label_); + if (global()) { + // Return the number of successful captures. + __ mov(eax, Operand(ebp, kSuccessfulCaptures)); + } + + __ bind(&return_eax); + // Skip esp past regexp registers. + __ lea(esp, Operand(ebp, kBackup_ebx)); + // Restore callee-save registers. + __ pop(ebx); + __ pop(edi); + __ pop(esi); + // Exit function frame, restore previous one. + __ pop(ebp); + __ ret(0); + + // Backtrack code (branch target for conditional backtracks). 
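+  // BranchOrBacktrack branches here whenever a check fails and no explicit
+  // jump target was supplied.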
+ if (backtrack_label_.is_linked()) { + __ bind(&backtrack_label_); + Backtrack(); + } + + Label exit_with_exception; + + // Preempt-code + if (check_preempt_label_.is_linked()) { + SafeCallTarget(&check_preempt_label_); + + __ push(backtrack_stackpointer()); + __ push(edi); + + CallCheckStackGuardState(ebx); + __ or_(eax, eax); + // If returning non-zero, we should end execution with the given + // result as return value. + __ j(not_zero, &return_eax); + + __ pop(edi); + __ pop(backtrack_stackpointer()); + // String might have moved: Reload esi from frame. + __ mov(esi, Operand(ebp, kInputEnd)); + SafeReturn(); + } + + // Backtrack stack overflow code. + if (stack_overflow_label_.is_linked()) { + SafeCallTarget(&stack_overflow_label_); + // Reached if the backtrack-stack limit has been hit. + + Label grow_failed; + // Save registers before calling C function + __ push(esi); + __ push(edi); + + // Call GrowStack(backtrack_stackpointer()) + static const int num_arguments = 3; + __ PrepareCallCFunction(num_arguments, ebx); + __ mov(Operand(esp, 2 * kPointerSize), + Immediate(ExternalReference::isolate_address(isolate()))); + __ lea(eax, Operand(ebp, kStackHighEnd)); + __ mov(Operand(esp, 1 * kPointerSize), eax); + __ mov(Operand(esp, 0 * kPointerSize), backtrack_stackpointer()); + ExternalReference grow_stack = + ExternalReference::re_grow_stack(isolate()); + __ CallCFunction(grow_stack, num_arguments); + // If return NULL, we have failed to grow the stack, and + // must exit with a stack-overflow exception. + __ or_(eax, eax); + __ j(equal, &exit_with_exception); + // Otherwise use return value as new stack pointer. + __ mov(backtrack_stackpointer(), eax); + // Restore saved registers and continue. + __ pop(edi); + __ pop(esi); + SafeReturn(); + } + + if (exit_with_exception.is_linked()) { + // If any of the code above needed to exit with an exception. + __ bind(&exit_with_exception); + // Exit with Result EXCEPTION(-1) to signal thrown exception. + __ mov(eax, EXCEPTION); + __ jmp(&return_eax); + } + + CodeDesc code_desc; + masm_->GetCode(&code_desc); + Handle<Code> code = + isolate()->factory()->NewCode(code_desc, + Code::ComputeFlags(Code::REGEXP), + masm_->CodeObject()); + PROFILE(isolate(), RegExpCodeCreateEvent(*code, *source)); + return Handle<HeapObject>::cast(code); +} + + +void RegExpMacroAssemblerX87::GoTo(Label* to) { + BranchOrBacktrack(no_condition, to); +} + + +void RegExpMacroAssemblerX87::IfRegisterGE(int reg, + int comparand, + Label* if_ge) { + __ cmp(register_location(reg), Immediate(comparand)); + BranchOrBacktrack(greater_equal, if_ge); +} + + +void RegExpMacroAssemblerX87::IfRegisterLT(int reg, + int comparand, + Label* if_lt) { + __ cmp(register_location(reg), Immediate(comparand)); + BranchOrBacktrack(less, if_lt); +} + + +void RegExpMacroAssemblerX87::IfRegisterEqPos(int reg, + Label* if_eq) { + __ cmp(edi, register_location(reg)); + BranchOrBacktrack(equal, if_eq); +} + + +RegExpMacroAssembler::IrregexpImplementation + RegExpMacroAssemblerX87::Implementation() { + return kX87Implementation; +} + + +void RegExpMacroAssemblerX87::LoadCurrentCharacter(int cp_offset, + Label* on_end_of_input, + bool check_bounds, + int characters) { + DCHECK(cp_offset >= -1); // ^ and \b can look behind one character. + DCHECK(cp_offset < (1<<30)); // Be sane! 
(And ensure negation works) + if (check_bounds) { + CheckPosition(cp_offset + characters - 1, on_end_of_input); + } + LoadCurrentCharacterUnchecked(cp_offset, characters); +} + + +void RegExpMacroAssemblerX87::PopCurrentPosition() { + Pop(edi); +} + + +void RegExpMacroAssemblerX87::PopRegister(int register_index) { + Pop(eax); + __ mov(register_location(register_index), eax); +} + + +void RegExpMacroAssemblerX87::PushBacktrack(Label* label) { + Push(Immediate::CodeRelativeOffset(label)); + CheckStackLimit(); +} + + +void RegExpMacroAssemblerX87::PushCurrentPosition() { + Push(edi); +} + + +void RegExpMacroAssemblerX87::PushRegister(int register_index, + StackCheckFlag check_stack_limit) { + __ mov(eax, register_location(register_index)); + Push(eax); + if (check_stack_limit) CheckStackLimit(); +} + + +void RegExpMacroAssemblerX87::ReadCurrentPositionFromRegister(int reg) { + __ mov(edi, register_location(reg)); +} + + +void RegExpMacroAssemblerX87::ReadStackPointerFromRegister(int reg) { + __ mov(backtrack_stackpointer(), register_location(reg)); + __ add(backtrack_stackpointer(), Operand(ebp, kStackHighEnd)); +} + +void RegExpMacroAssemblerX87::SetCurrentPositionFromEnd(int by) { + Label after_position; + __ cmp(edi, -by * char_size()); + __ j(greater_equal, &after_position, Label::kNear); + __ mov(edi, -by * char_size()); + // On RegExp code entry (where this operation is used), the character before + // the current position is expected to be already loaded. + // We have advanced the position, so it's safe to read backwards. + LoadCurrentCharacterUnchecked(-1, 1); + __ bind(&after_position); +} + + +void RegExpMacroAssemblerX87::SetRegister(int register_index, int to) { + DCHECK(register_index >= num_saved_registers_); // Reserved for positions! + __ mov(register_location(register_index), Immediate(to)); +} + + +bool RegExpMacroAssemblerX87::Succeed() { + __ jmp(&success_label_); + return global(); +} + + +void RegExpMacroAssemblerX87::WriteCurrentPositionToRegister(int reg, + int cp_offset) { + if (cp_offset == 0) { + __ mov(register_location(reg), edi); + } else { + __ lea(eax, Operand(edi, cp_offset * char_size())); + __ mov(register_location(reg), eax); + } +} + + +void RegExpMacroAssemblerX87::ClearRegisters(int reg_from, int reg_to) { + DCHECK(reg_from <= reg_to); + __ mov(eax, Operand(ebp, kInputStartMinusOne)); + for (int reg = reg_from; reg <= reg_to; reg++) { + __ mov(register_location(reg), eax); + } +} + + +void RegExpMacroAssemblerX87::WriteStackPointerToRegister(int reg) { + __ mov(eax, backtrack_stackpointer()); + __ sub(eax, Operand(ebp, kStackHighEnd)); + __ mov(register_location(reg), eax); +} + + +// Private methods: + +void RegExpMacroAssemblerX87::CallCheckStackGuardState(Register scratch) { + static const int num_arguments = 3; + __ PrepareCallCFunction(num_arguments, scratch); + // RegExp code frame pointer. + __ mov(Operand(esp, 2 * kPointerSize), ebp); + // Code* of self. + __ mov(Operand(esp, 1 * kPointerSize), Immediate(masm_->CodeObject())); + // Next address on the stack (will be address of return address). + __ lea(eax, Operand(esp, -kPointerSize)); + __ mov(Operand(esp, 0 * kPointerSize), eax); + ExternalReference check_stack_guard = + ExternalReference::re_check_stack_guard_state(isolate()); + __ CallCFunction(check_stack_guard, num_arguments); +} + + +// Helper function for reading a value out of a stack frame. 
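+// The returned reference allows both reads and writes of a frame slot, e.g.
+//   int start_index = frame_entry<int>(re_frame, kStartIndex);
+//   frame_entry<const String*>(re_frame, kInputString) = *subject;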
+template <typename T> +static T& frame_entry(Address re_frame, int frame_offset) { + return reinterpret_cast<T&>(Memory::int32_at(re_frame + frame_offset)); +} + + +int RegExpMacroAssemblerX87::CheckStackGuardState(Address* return_address, + Code* re_code, + Address re_frame) { + Isolate* isolate = frame_entry<Isolate*>(re_frame, kIsolate); + StackLimitCheck check(isolate); + if (check.JsHasOverflowed()) { + isolate->StackOverflow(); + return EXCEPTION; + } + + // If not real stack overflow the stack guard was used to interrupt + // execution for another purpose. + + // If this is a direct call from JavaScript retry the RegExp forcing the call + // through the runtime system. Currently the direct call cannot handle a GC. + if (frame_entry<int>(re_frame, kDirectCall) == 1) { + return RETRY; + } + + // Prepare for possible GC. + HandleScope handles(isolate); + Handle<Code> code_handle(re_code); + + Handle<String> subject(frame_entry<String*>(re_frame, kInputString)); + + // Current string. + bool is_ascii = subject->IsOneByteRepresentationUnderneath(); + + DCHECK(re_code->instruction_start() <= *return_address); + DCHECK(*return_address <= + re_code->instruction_start() + re_code->instruction_size()); + + Object* result = isolate->stack_guard()->HandleInterrupts(); + + if (*code_handle != re_code) { // Return address no longer valid + int delta = code_handle->address() - re_code->address(); + // Overwrite the return address on the stack. + *return_address += delta; + } + + if (result->IsException()) { + return EXCEPTION; + } + + Handle<String> subject_tmp = subject; + int slice_offset = 0; + + // Extract the underlying string and the slice offset. + if (StringShape(*subject_tmp).IsCons()) { + subject_tmp = Handle<String>(ConsString::cast(*subject_tmp)->first()); + } else if (StringShape(*subject_tmp).IsSliced()) { + SlicedString* slice = SlicedString::cast(*subject_tmp); + subject_tmp = Handle<String>(slice->parent()); + slice_offset = slice->offset(); + } + + // String might have changed. + if (subject_tmp->IsOneByteRepresentation() != is_ascii) { + // If we changed between an ASCII and an UC16 string, the specialized + // code cannot be used, and we need to restart regexp matching from + // scratch (including, potentially, compiling a new version of the code). + return RETRY; + } + + // Otherwise, the content of the string might have moved. It must still + // be a sequential or external string with the same content. + // Update the start and end pointers in the stack frame to the current + // location (whether it has actually moved or not). + DCHECK(StringShape(*subject_tmp).IsSequential() || + StringShape(*subject_tmp).IsExternal()); + + // The original start address of the characters to match. + const byte* start_address = frame_entry<const byte*>(re_frame, kInputStart); + + // Find the current start address of the same character at the current string + // position. + int start_index = frame_entry<int>(re_frame, kStartIndex); + const byte* new_address = StringCharacterPosition(*subject_tmp, + start_index + slice_offset); + + if (start_address != new_address) { + // If there is a difference, update the object pointer and start and end + // addresses in the RegExp stack frame to match the new value. 
+ const byte* end_address = frame_entry<const byte* >(re_frame, kInputEnd); + int byte_length = static_cast<int>(end_address - start_address); + frame_entry<const String*>(re_frame, kInputString) = *subject; + frame_entry<const byte*>(re_frame, kInputStart) = new_address; + frame_entry<const byte*>(re_frame, kInputEnd) = new_address + byte_length; + } else if (frame_entry<const String*>(re_frame, kInputString) != *subject) { + // Subject string might have been a ConsString that underwent + // short-circuiting during GC. That will not change start_address but + // will change pointer inside the subject handle. + frame_entry<const String*>(re_frame, kInputString) = *subject; + } + + return 0; +} + + +Operand RegExpMacroAssemblerX87::register_location(int register_index) { + DCHECK(register_index < (1<<30)); + if (num_registers_ <= register_index) { + num_registers_ = register_index + 1; + } + return Operand(ebp, kRegisterZero - register_index * kPointerSize); +} + + +void RegExpMacroAssemblerX87::CheckPosition(int cp_offset, + Label* on_outside_input) { + __ cmp(edi, -cp_offset * char_size()); + BranchOrBacktrack(greater_equal, on_outside_input); +} + + +void RegExpMacroAssemblerX87::BranchOrBacktrack(Condition condition, + Label* to) { + if (condition < 0) { // No condition + if (to == NULL) { + Backtrack(); + return; + } + __ jmp(to); + return; + } + if (to == NULL) { + __ j(condition, &backtrack_label_); + return; + } + __ j(condition, to); +} + + +void RegExpMacroAssemblerX87::SafeCall(Label* to) { + Label return_to; + __ push(Immediate::CodeRelativeOffset(&return_to)); + __ jmp(to); + __ bind(&return_to); +} + + +void RegExpMacroAssemblerX87::SafeReturn() { + __ pop(ebx); + __ add(ebx, Immediate(masm_->CodeObject())); + __ jmp(ebx); +} + + +void RegExpMacroAssemblerX87::SafeCallTarget(Label* name) { + __ bind(name); +} + + +void RegExpMacroAssemblerX87::Push(Register source) { + DCHECK(!source.is(backtrack_stackpointer())); + // Notice: This updates flags, unlike normal Push. + __ sub(backtrack_stackpointer(), Immediate(kPointerSize)); + __ mov(Operand(backtrack_stackpointer(), 0), source); +} + + +void RegExpMacroAssemblerX87::Push(Immediate value) { + // Notice: This updates flags, unlike normal Push. + __ sub(backtrack_stackpointer(), Immediate(kPointerSize)); + __ mov(Operand(backtrack_stackpointer(), 0), value); +} + + +void RegExpMacroAssemblerX87::Pop(Register target) { + DCHECK(!target.is(backtrack_stackpointer())); + __ mov(target, Operand(backtrack_stackpointer(), 0)); + // Notice: This updates flags, unlike normal Pop. + __ add(backtrack_stackpointer(), Immediate(kPointerSize)); +} + + +void RegExpMacroAssemblerX87::CheckPreemption() { + // Check for preemption. 
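+  // A pending interrupt lowers the recorded stack limit, so the comparison
+  // below fails and we call through to CheckStackGuardState.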
+ Label no_preempt; + ExternalReference stack_limit = + ExternalReference::address_of_stack_limit(isolate()); + __ cmp(esp, Operand::StaticVariable(stack_limit)); + __ j(above, &no_preempt); + + SafeCall(&check_preempt_label_); + + __ bind(&no_preempt); +} + + +void RegExpMacroAssemblerX87::CheckStackLimit() { + Label no_stack_overflow; + ExternalReference stack_limit = + ExternalReference::address_of_regexp_stack_limit(isolate()); + __ cmp(backtrack_stackpointer(), Operand::StaticVariable(stack_limit)); + __ j(above, &no_stack_overflow); + + SafeCall(&stack_overflow_label_); + + __ bind(&no_stack_overflow); +} + + +void RegExpMacroAssemblerX87::LoadCurrentCharacterUnchecked(int cp_offset, + int characters) { + if (mode_ == ASCII) { + if (characters == 4) { + __ mov(current_character(), Operand(esi, edi, times_1, cp_offset)); + } else if (characters == 2) { + __ movzx_w(current_character(), Operand(esi, edi, times_1, cp_offset)); + } else { + DCHECK(characters == 1); + __ movzx_b(current_character(), Operand(esi, edi, times_1, cp_offset)); + } + } else { + DCHECK(mode_ == UC16); + if (characters == 2) { + __ mov(current_character(), + Operand(esi, edi, times_1, cp_offset * sizeof(uc16))); + } else { + DCHECK(characters == 1); + __ movzx_w(current_character(), + Operand(esi, edi, times_1, cp_offset * sizeof(uc16))); + } + } +} + + +#undef __ + +#endif // V8_INTERPRETED_REGEXP + +}} // namespace v8::internal + +#endif // V8_TARGET_ARCH_X87 diff --git a/deps/v8/src/x87/regexp-macro-assembler-x87.h b/deps/v8/src/x87/regexp-macro-assembler-x87.h new file mode 100644 index 00000000000..3c98dfff674 --- /dev/null +++ b/deps/v8/src/x87/regexp-macro-assembler-x87.h @@ -0,0 +1,200 @@ +// Copyright 2012 the V8 project authors. All rights reserved. +// Use of this source code is governed by a BSD-style license that can be +// found in the LICENSE file. + +#ifndef V8_X87_REGEXP_MACRO_ASSEMBLER_X87_H_ +#define V8_X87_REGEXP_MACRO_ASSEMBLER_X87_H_ + +#include "src/macro-assembler.h" +#include "src/x87/assembler-x87-inl.h" +#include "src/x87/assembler-x87.h" + +namespace v8 { +namespace internal { + +#ifndef V8_INTERPRETED_REGEXP +class RegExpMacroAssemblerX87: public NativeRegExpMacroAssembler { + public: + RegExpMacroAssemblerX87(Mode mode, int registers_to_save, Zone* zone); + virtual ~RegExpMacroAssemblerX87(); + virtual int stack_limit_slack(); + virtual void AdvanceCurrentPosition(int by); + virtual void AdvanceRegister(int reg, int by); + virtual void Backtrack(); + virtual void Bind(Label* label); + virtual void CheckAtStart(Label* on_at_start); + virtual void CheckCharacter(uint32_t c, Label* on_equal); + virtual void CheckCharacterAfterAnd(uint32_t c, + uint32_t mask, + Label* on_equal); + virtual void CheckCharacterGT(uc16 limit, Label* on_greater); + virtual void CheckCharacterLT(uc16 limit, Label* on_less); + // A "greedy loop" is a loop that is both greedy and with a simple + // body. It has a particularly simple implementation. 
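+  // (E.g. the leading .* of /.*foo/: backtracking only needs to pop the
+  // previously pushed position from the backtrack stack.)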
+ virtual void CheckGreedyLoop(Label* on_tos_equals_current_position); + virtual void CheckNotAtStart(Label* on_not_at_start); + virtual void CheckNotBackReference(int start_reg, Label* on_no_match); + virtual void CheckNotBackReferenceIgnoreCase(int start_reg, + Label* on_no_match); + virtual void CheckNotCharacter(uint32_t c, Label* on_not_equal); + virtual void CheckNotCharacterAfterAnd(uint32_t c, + uint32_t mask, + Label* on_not_equal); + virtual void CheckNotCharacterAfterMinusAnd(uc16 c, + uc16 minus, + uc16 mask, + Label* on_not_equal); + virtual void CheckCharacterInRange(uc16 from, + uc16 to, + Label* on_in_range); + virtual void CheckCharacterNotInRange(uc16 from, + uc16 to, + Label* on_not_in_range); + virtual void CheckBitInTable(Handle<ByteArray> table, Label* on_bit_set); + + // Checks whether the given offset from the current position is before + // the end of the string. + virtual void CheckPosition(int cp_offset, Label* on_outside_input); + virtual bool CheckSpecialCharacterClass(uc16 type, Label* on_no_match); + virtual void Fail(); + virtual Handle<HeapObject> GetCode(Handle<String> source); + virtual void GoTo(Label* label); + virtual void IfRegisterGE(int reg, int comparand, Label* if_ge); + virtual void IfRegisterLT(int reg, int comparand, Label* if_lt); + virtual void IfRegisterEqPos(int reg, Label* if_eq); + virtual IrregexpImplementation Implementation(); + virtual void LoadCurrentCharacter(int cp_offset, + Label* on_end_of_input, + bool check_bounds = true, + int characters = 1); + virtual void PopCurrentPosition(); + virtual void PopRegister(int register_index); + virtual void PushBacktrack(Label* label); + virtual void PushCurrentPosition(); + virtual void PushRegister(int register_index, + StackCheckFlag check_stack_limit); + virtual void ReadCurrentPositionFromRegister(int reg); + virtual void ReadStackPointerFromRegister(int reg); + virtual void SetCurrentPositionFromEnd(int by); + virtual void SetRegister(int register_index, int to); + virtual bool Succeed(); + virtual void WriteCurrentPositionToRegister(int reg, int cp_offset); + virtual void ClearRegisters(int reg_from, int reg_to); + virtual void WriteStackPointerToRegister(int reg); + + // Called from RegExp if the stack-guard is triggered. + // If the code object is relocated, the return address is fixed before + // returning. + static int CheckStackGuardState(Address* return_address, + Code* re_code, + Address re_frame); + + private: + // Offsets from ebp of function parameters and stored registers. + static const int kFramePointer = 0; + // Above the frame pointer - function parameters and return address. + static const int kReturn_eip = kFramePointer + kPointerSize; + static const int kFrameAlign = kReturn_eip + kPointerSize; + // Parameters. + static const int kInputString = kFrameAlign; + static const int kStartIndex = kInputString + kPointerSize; + static const int kInputStart = kStartIndex + kPointerSize; + static const int kInputEnd = kInputStart + kPointerSize; + static const int kRegisterOutput = kInputEnd + kPointerSize; + // For the case of global regular expression, we have room to store at least + // one set of capture results. For the case of non-global regexp, we ignore + // this value. 
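+  // (GetCode decrements this count by num_saved_registers_ after every
+  // successful match of a global regexp and exits once no room remains.)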
+ static const int kNumOutputRegisters = kRegisterOutput + kPointerSize; + static const int kStackHighEnd = kNumOutputRegisters + kPointerSize; + static const int kDirectCall = kStackHighEnd + kPointerSize; + static const int kIsolate = kDirectCall + kPointerSize; + // Below the frame pointer - local stack variables. + // When adding local variables remember to push space for them in + // the frame in GetCode. + static const int kBackup_esi = kFramePointer - kPointerSize; + static const int kBackup_edi = kBackup_esi - kPointerSize; + static const int kBackup_ebx = kBackup_edi - kPointerSize; + static const int kSuccessfulCaptures = kBackup_ebx - kPointerSize; + static const int kInputStartMinusOne = kSuccessfulCaptures - kPointerSize; + // First register address. Following registers are below it on the stack. + static const int kRegisterZero = kInputStartMinusOne - kPointerSize; + + // Initial size of code buffer. + static const size_t kRegExpCodeSize = 1024; + + // Load a number of characters at the given offset from the + // current position, into the current-character register. + void LoadCurrentCharacterUnchecked(int cp_offset, int character_count); + + // Check whether preemption has been requested. + void CheckPreemption(); + + // Check whether we are exceeding the stack limit on the backtrack stack. + void CheckStackLimit(); + + // Generate a call to CheckStackGuardState. + void CallCheckStackGuardState(Register scratch); + + // The ebp-relative location of a regexp register. + Operand register_location(int register_index); + + // The register containing the current character after LoadCurrentCharacter. + inline Register current_character() { return edx; } + + // The register containing the backtrack stack top. Provides a meaningful + // name to the register. + inline Register backtrack_stackpointer() { return ecx; } + + // Byte size of chars in the string to match (decided by the Mode argument) + inline int char_size() { return static_cast<int>(mode_); } + + // Equivalent to a conditional branch to the label, unless the label + // is NULL, in which case it is a conditional Backtrack. + void BranchOrBacktrack(Condition condition, Label* to); + + // Call and return internally in the generated code in a way that + // is GC-safe (i.e., doesn't leave absolute code addresses on the stack) + inline void SafeCall(Label* to); + inline void SafeReturn(); + inline void SafeCallTarget(Label* name); + + // Pushes the value of a register on the backtrack stack. Decrements the + // stack pointer (ecx) by a word size and stores the register's value there. + inline void Push(Register source); + + // Pushes a value on the backtrack stack. Decrements the stack pointer (ecx) + // by a word size and stores the value there. + inline void Push(Immediate value); + + // Pops a value from the backtrack stack. Reads the word at the stack pointer + // (ecx) and increments it by a word size. + inline void Pop(Register target); + + Isolate* isolate() const { return masm_->isolate(); } + + MacroAssembler* masm_; + + // Which mode to generate code for (ASCII or UC16). + Mode mode_; + + // One greater than maximal register index actually used. + int num_registers_; + + // Number of registers to output at the end (the saved registers + // are always 0..num_saved_registers_-1) + int num_saved_registers_; + + // Labels used internally. 
+ Label entry_label_; + Label start_label_; + Label success_label_; + Label backtrack_label_; + Label exit_label_; + Label check_preempt_label_; + Label stack_overflow_label_; +}; +#endif // V8_INTERPRETED_REGEXP + +}} // namespace v8::internal + +#endif // V8_X87_REGEXP_MACRO_ASSEMBLER_X87_H_ diff --git a/deps/v8/src/x87/simulator-x87.cc b/deps/v8/src/x87/simulator-x87.cc new file mode 100644 index 00000000000..20edae83a2a --- /dev/null +++ b/deps/v8/src/x87/simulator-x87.cc @@ -0,0 +1,6 @@ +// Copyright 2008 the V8 project authors. All rights reserved. +// Use of this source code is governed by a BSD-style license that can be +// found in the LICENSE file. + + +// Since there is no simulator for the ia32 architecture this file is empty. diff --git a/deps/v8/src/x87/simulator-x87.h b/deps/v8/src/x87/simulator-x87.h new file mode 100644 index 00000000000..a780e839d2d --- /dev/null +++ b/deps/v8/src/x87/simulator-x87.h @@ -0,0 +1,48 @@ +// Copyright 2012 the V8 project authors. All rights reserved. +// Use of this source code is governed by a BSD-style license that can be +// found in the LICENSE file. + +#ifndef V8_X87_SIMULATOR_X87_H_ +#define V8_X87_SIMULATOR_X87_H_ + +#include "src/allocation.h" + +namespace v8 { +namespace internal { + +// Since there is no simulator for the ia32 architecture the only thing we can +// do is to call the entry directly. +#define CALL_GENERATED_CODE(entry, p0, p1, p2, p3, p4) \ + (entry(p0, p1, p2, p3, p4)) + + +typedef int (*regexp_matcher)(String*, int, const byte*, + const byte*, int*, int, Address, int, Isolate*); + +// Call the generated regexp code directly. The code at the entry address should +// expect eight int/pointer sized arguments and return an int. +#define CALL_GENERATED_REGEXP_CODE(entry, p0, p1, p2, p3, p4, p5, p6, p7, p8) \ + (FUNCTION_CAST<regexp_matcher>(entry)(p0, p1, p2, p3, p4, p5, p6, p7, p8)) + + +// The stack limit beyond which we will throw stack overflow errors in +// generated code. Because generated code on ia32 uses the C stack, we +// just use the C stack limit. +class SimulatorStack : public v8::internal::AllStatic { + public: + static inline uintptr_t JsLimitFromCLimit(Isolate* isolate, + uintptr_t c_limit) { + USE(isolate); + return c_limit; + } + + static inline uintptr_t RegisterCTryCatch(uintptr_t try_catch_address) { + return try_catch_address; + } + + static inline void UnregisterCTryCatch() { } +}; + +} } // namespace v8::internal + +#endif // V8_X87_SIMULATOR_X87_H_ diff --git a/deps/v8/src/x87/stub-cache-x87.cc b/deps/v8/src/x87/stub-cache-x87.cc new file mode 100644 index 00000000000..0fc450a56f4 --- /dev/null +++ b/deps/v8/src/x87/stub-cache-x87.cc @@ -0,0 +1,1201 @@ +// Copyright 2012 the V8 project authors. All rights reserved. +// Use of this source code is governed by a BSD-style license that can be +// found in the LICENSE file. + +#include "src/v8.h" + +#if V8_TARGET_ARCH_X87 + +#include "src/codegen.h" +#include "src/ic-inl.h" +#include "src/stub-cache.h" + +namespace v8 { +namespace internal { + +#define __ ACCESS_MASM(masm) + + +static void ProbeTable(Isolate* isolate, + MacroAssembler* masm, + Code::Flags flags, + StubCache::Table table, + Register name, + Register receiver, + // Number of the cache entry pointer-size scaled. 
+ Register offset, + Register extra) { + ExternalReference key_offset(isolate->stub_cache()->key_reference(table)); + ExternalReference value_offset(isolate->stub_cache()->value_reference(table)); + ExternalReference map_offset(isolate->stub_cache()->map_reference(table)); + + Label miss; + + // Multiply by 3 because there are 3 fields per entry (name, code, map). + __ lea(offset, Operand(offset, offset, times_2, 0)); + + if (extra.is_valid()) { + // Get the code entry from the cache. + __ mov(extra, Operand::StaticArray(offset, times_1, value_offset)); + + // Check that the key in the entry matches the name. + __ cmp(name, Operand::StaticArray(offset, times_1, key_offset)); + __ j(not_equal, &miss); + + // Check the map matches. + __ mov(offset, Operand::StaticArray(offset, times_1, map_offset)); + __ cmp(offset, FieldOperand(receiver, HeapObject::kMapOffset)); + __ j(not_equal, &miss); + + // Check that the flags match what we're looking for. + __ mov(offset, FieldOperand(extra, Code::kFlagsOffset)); + __ and_(offset, ~Code::kFlagsNotUsedInLookup); + __ cmp(offset, flags); + __ j(not_equal, &miss); + +#ifdef DEBUG + if (FLAG_test_secondary_stub_cache && table == StubCache::kPrimary) { + __ jmp(&miss); + } else if (FLAG_test_primary_stub_cache && table == StubCache::kSecondary) { + __ jmp(&miss); + } +#endif + + // Jump to the first instruction in the code stub. + __ add(extra, Immediate(Code::kHeaderSize - kHeapObjectTag)); + __ jmp(extra); + + __ bind(&miss); + } else { + // Save the offset on the stack. + __ push(offset); + + // Check that the key in the entry matches the name. + __ cmp(name, Operand::StaticArray(offset, times_1, key_offset)); + __ j(not_equal, &miss); + + // Check the map matches. + __ mov(offset, Operand::StaticArray(offset, times_1, map_offset)); + __ cmp(offset, FieldOperand(receiver, HeapObject::kMapOffset)); + __ j(not_equal, &miss); + + // Restore offset register. + __ mov(offset, Operand(esp, 0)); + + // Get the code entry from the cache. + __ mov(offset, Operand::StaticArray(offset, times_1, value_offset)); + + // Check that the flags match what we're looking for. + __ mov(offset, FieldOperand(offset, Code::kFlagsOffset)); + __ and_(offset, ~Code::kFlagsNotUsedInLookup); + __ cmp(offset, flags); + __ j(not_equal, &miss); + +#ifdef DEBUG + if (FLAG_test_secondary_stub_cache && table == StubCache::kPrimary) { + __ jmp(&miss); + } else if (FLAG_test_primary_stub_cache && table == StubCache::kSecondary) { + __ jmp(&miss); + } +#endif + + // Restore offset and re-load code entry from cache. + __ pop(offset); + __ mov(offset, Operand::StaticArray(offset, times_1, value_offset)); + + // Jump to the first instruction in the code stub. + __ add(offset, Immediate(Code::kHeaderSize - kHeapObjectTag)); + __ jmp(offset); + + // Pop at miss. + __ bind(&miss); + __ pop(offset); + } +} + + +void PropertyHandlerCompiler::GenerateDictionaryNegativeLookup( + MacroAssembler* masm, Label* miss_label, Register receiver, + Handle<Name> name, Register scratch0, Register scratch1) { + DCHECK(name->IsUniqueName()); + DCHECK(!receiver.is(scratch0)); + Counters* counters = masm->isolate()->counters(); + __ IncrementCounter(counters->negative_lookups(), 1); + __ IncrementCounter(counters->negative_lookups_miss(), 1); + + __ mov(scratch0, FieldOperand(receiver, HeapObject::kMapOffset)); + + const int kInterceptorOrAccessCheckNeededMask = + (1 << Map::kHasNamedInterceptor) | (1 << Map::kIsAccessCheckNeeded); + + // Bail out if the receiver has a named interceptor or requires access checks. 
+ __ test_b(FieldOperand(scratch0, Map::kBitFieldOffset), + kInterceptorOrAccessCheckNeededMask); + __ j(not_zero, miss_label); + + // Check that receiver is a JSObject. + __ CmpInstanceType(scratch0, FIRST_SPEC_OBJECT_TYPE); + __ j(below, miss_label); + + // Load properties array. + Register properties = scratch0; + __ mov(properties, FieldOperand(receiver, JSObject::kPropertiesOffset)); + + // Check that the properties array is a dictionary. + __ cmp(FieldOperand(properties, HeapObject::kMapOffset), + Immediate(masm->isolate()->factory()->hash_table_map())); + __ j(not_equal, miss_label); + + Label done; + NameDictionaryLookupStub::GenerateNegativeLookup(masm, + miss_label, + &done, + properties, + name, + scratch1); + __ bind(&done); + __ DecrementCounter(counters->negative_lookups_miss(), 1); +} + + +void StubCache::GenerateProbe(MacroAssembler* masm, + Code::Flags flags, + Register receiver, + Register name, + Register scratch, + Register extra, + Register extra2, + Register extra3) { + Label miss; + + // Assert that code is valid. The multiplying code relies on the entry size + // being 12. + DCHECK(sizeof(Entry) == 12); + + // Assert the flags do not name a specific type. + DCHECK(Code::ExtractTypeFromFlags(flags) == 0); + + // Assert that there are no register conflicts. + DCHECK(!scratch.is(receiver)); + DCHECK(!scratch.is(name)); + DCHECK(!extra.is(receiver)); + DCHECK(!extra.is(name)); + DCHECK(!extra.is(scratch)); + + // Assert scratch and extra registers are valid, and extra2/3 are unused. + DCHECK(!scratch.is(no_reg)); + DCHECK(extra2.is(no_reg)); + DCHECK(extra3.is(no_reg)); + + Register offset = scratch; + scratch = no_reg; + + Counters* counters = masm->isolate()->counters(); + __ IncrementCounter(counters->megamorphic_stub_cache_probes(), 1); + + // Check that the receiver isn't a smi. + __ JumpIfSmi(receiver, &miss); + + // Get the map of the receiver and compute the hash. + __ mov(offset, FieldOperand(name, Name::kHashFieldOffset)); + __ add(offset, FieldOperand(receiver, HeapObject::kMapOffset)); + __ xor_(offset, flags); + // We mask out the last two bits because they are not part of the hash and + // they are always 01 for maps. Also in the two 'and' instructions below. + __ and_(offset, (kPrimaryTableSize - 1) << kCacheIndexShift); + // ProbeTable expects the offset to be pointer scaled, which it is, because + // the heap object tag size is 2 and the pointer size log 2 is also 2. + DCHECK(kCacheIndexShift == kPointerSizeLog2); + + // Probe the primary table. + ProbeTable(isolate(), masm, flags, kPrimary, name, receiver, offset, extra); + + // Primary miss: Compute hash for secondary probe. + __ mov(offset, FieldOperand(name, Name::kHashFieldOffset)); + __ add(offset, FieldOperand(receiver, HeapObject::kMapOffset)); + __ xor_(offset, flags); + __ and_(offset, (kPrimaryTableSize - 1) << kCacheIndexShift); + __ sub(offset, name); + __ add(offset, Immediate(flags)); + __ and_(offset, (kSecondaryTableSize - 1) << kCacheIndexShift); + + // Probe the secondary table. + ProbeTable( + isolate(), masm, flags, kSecondary, name, receiver, offset, extra); + + // Cache miss: Fall-through and let caller handle the miss by + // entering the runtime system. + __ bind(&miss); + __ IncrementCounter(counters->megamorphic_stub_cache_misses(), 1); +} + + +void NamedLoadHandlerCompiler::GenerateDirectLoadGlobalFunctionPrototype( + MacroAssembler* masm, int index, Register prototype, Label* miss) { + // Get the global function with the given index. 
+  Handle<JSFunction> function(
+      JSFunction::cast(masm->isolate()->native_context()->get(index)));
+  // Check we're still in the same context.
+  Register scratch = prototype;
+  const int offset = Context::SlotOffset(Context::GLOBAL_OBJECT_INDEX);
+  __ mov(scratch, Operand(esi, offset));
+  __ mov(scratch, FieldOperand(scratch, GlobalObject::kNativeContextOffset));
+  __ cmp(Operand(scratch, Context::SlotOffset(index)), function);
+  __ j(not_equal, miss);
+
+  // Load its initial map. The global functions all have initial maps.
+  __ Move(prototype, Immediate(Handle<Map>(function->initial_map())));
+  // Load the prototype from the initial map.
+  __ mov(prototype, FieldOperand(prototype, Map::kPrototypeOffset));
+}
+
+
+void NamedLoadHandlerCompiler::GenerateLoadFunctionPrototype(
+    MacroAssembler* masm, Register receiver, Register scratch1,
+    Register scratch2, Label* miss_label) {
+  __ TryGetFunctionPrototype(receiver, scratch1, scratch2, miss_label);
+  __ mov(eax, scratch1);
+  __ ret(0);
+}
+
+
+static void PushInterceptorArguments(MacroAssembler* masm,
+                                     Register receiver,
+                                     Register holder,
+                                     Register name,
+                                     Handle<JSObject> holder_obj) {
+  STATIC_ASSERT(NamedLoadHandlerCompiler::kInterceptorArgsNameIndex == 0);
+  STATIC_ASSERT(NamedLoadHandlerCompiler::kInterceptorArgsInfoIndex == 1);
+  STATIC_ASSERT(NamedLoadHandlerCompiler::kInterceptorArgsThisIndex == 2);
+  STATIC_ASSERT(NamedLoadHandlerCompiler::kInterceptorArgsHolderIndex == 3);
+  STATIC_ASSERT(NamedLoadHandlerCompiler::kInterceptorArgsLength == 4);
+  __ push(name);
+  Handle<InterceptorInfo> interceptor(holder_obj->GetNamedInterceptor());
+  DCHECK(!masm->isolate()->heap()->InNewSpace(*interceptor));
+  Register scratch = name;
+  __ mov(scratch, Immediate(interceptor));
+  __ push(scratch);
+  __ push(receiver);
+  __ push(holder);
+}
+
+
+static void CompileCallLoadPropertyWithInterceptor(
+    MacroAssembler* masm,
+    Register receiver,
+    Register holder,
+    Register name,
+    Handle<JSObject> holder_obj,
+    IC::UtilityId id) {
+  PushInterceptorArguments(masm, receiver, holder, name, holder_obj);
+  __ CallExternalReference(ExternalReference(IC_Utility(id), masm->isolate()),
+                           NamedLoadHandlerCompiler::kInterceptorArgsLength);
+}
+
+
+// Generate call to api function.
+// This function uses push() to generate smaller, faster code than
+// the version above. It is an optimization that will be removed
+// when api call ICs are generated in hydrogen.
+void PropertyHandlerCompiler::GenerateFastApiCall(
+    MacroAssembler* masm, const CallOptimization& optimization,
+    Handle<Map> receiver_map, Register receiver, Register scratch_in,
+    bool is_store, int argc, Register* values) {
+  // Copy return value.
+  __ pop(scratch_in);
+  // receiver
+  __ push(receiver);
+  // Write the arguments to stack frame.
+  for (int i = 0; i < argc; i++) {
+    Register arg = values[argc-1-i];
+    DCHECK(!receiver.is(arg));
+    DCHECK(!scratch_in.is(arg));
+    __ push(arg);
+  }
+  __ push(scratch_in);
+  // Stack now matches JSFunction abi.
+  DCHECK(optimization.is_simple_api_call());
+
+  // Abi for CallApiFunctionStub.
+  Register callee = eax;
+  Register call_data = ebx;
+  Register holder = ecx;
+  Register api_function_address = edx;
+  Register scratch = edi;  // scratch_in is no longer valid.
+
+  // Put holder in place.
+ CallOptimization::HolderLookup holder_lookup; + Handle<JSObject> api_holder = optimization.LookupHolderOfExpectedType( + receiver_map, + &holder_lookup); + switch (holder_lookup) { + case CallOptimization::kHolderIsReceiver: + __ Move(holder, receiver); + break; + case CallOptimization::kHolderFound: + __ LoadHeapObject(holder, api_holder); + break; + case CallOptimization::kHolderNotFound: + UNREACHABLE(); + break; + } + + Isolate* isolate = masm->isolate(); + Handle<JSFunction> function = optimization.constant_function(); + Handle<CallHandlerInfo> api_call_info = optimization.api_call_info(); + Handle<Object> call_data_obj(api_call_info->data(), isolate); + + // Put callee in place. + __ LoadHeapObject(callee, function); + + bool call_data_undefined = false; + // Put call_data in place. + if (isolate->heap()->InNewSpace(*call_data_obj)) { + __ mov(scratch, api_call_info); + __ mov(call_data, FieldOperand(scratch, CallHandlerInfo::kDataOffset)); + } else if (call_data_obj->IsUndefined()) { + call_data_undefined = true; + __ mov(call_data, Immediate(isolate->factory()->undefined_value())); + } else { + __ mov(call_data, call_data_obj); + } + + // Put api_function_address in place. + Address function_address = v8::ToCData<Address>(api_call_info->callback()); + __ mov(api_function_address, Immediate(function_address)); + + // Jump to stub. + CallApiFunctionStub stub(isolate, is_store, call_data_undefined, argc); + __ TailCallStub(&stub); +} + + +// Generate code to check that a global property cell is empty. Create +// the property cell at compilation time if no cell exists for the +// property. +void PropertyHandlerCompiler::GenerateCheckPropertyCell( + MacroAssembler* masm, Handle<JSGlobalObject> global, Handle<Name> name, + Register scratch, Label* miss) { + Handle<PropertyCell> cell = + JSGlobalObject::EnsurePropertyCell(global, name); + DCHECK(cell->value()->IsTheHole()); + Handle<Oddball> the_hole = masm->isolate()->factory()->the_hole_value(); + if (masm->serializer_enabled()) { + __ mov(scratch, Immediate(cell)); + __ cmp(FieldOperand(scratch, PropertyCell::kValueOffset), + Immediate(the_hole)); + } else { + __ cmp(Operand::ForCell(cell), Immediate(the_hole)); + } + __ j(not_equal, miss); +} + + +void PropertyAccessCompiler::GenerateTailCall(MacroAssembler* masm, + Handle<Code> code) { + __ jmp(code, RelocInfo::CODE_TARGET); +} + + +#undef __ +#define __ ACCESS_MASM(masm()) + + +void NamedStoreHandlerCompiler::GenerateRestoreName(Label* label, + Handle<Name> name) { + if (!label->is_unused()) { + __ bind(label); + __ mov(this->name(), Immediate(name)); + } +} + + +// Receiver_reg is preserved on jumps to miss_label, but may be destroyed if +// store is successful. 
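+// The transition map is installed on the receiver only after the value has
+// passed all representation checks, so a miss leaves the object untouched.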
+void NamedStoreHandlerCompiler::GenerateStoreTransition( + Handle<Map> transition, Handle<Name> name, Register receiver_reg, + Register storage_reg, Register value_reg, Register scratch1, + Register scratch2, Register unused, Label* miss_label, Label* slow) { + int descriptor = transition->LastAdded(); + DescriptorArray* descriptors = transition->instance_descriptors(); + PropertyDetails details = descriptors->GetDetails(descriptor); + Representation representation = details.representation(); + DCHECK(!representation.IsNone()); + + if (details.type() == CONSTANT) { + Handle<Object> constant(descriptors->GetValue(descriptor), isolate()); + __ CmpObject(value_reg, constant); + __ j(not_equal, miss_label); + } else if (representation.IsSmi()) { + __ JumpIfNotSmi(value_reg, miss_label); + } else if (representation.IsHeapObject()) { + __ JumpIfSmi(value_reg, miss_label); + HeapType* field_type = descriptors->GetFieldType(descriptor); + HeapType::Iterator<Map> it = field_type->Classes(); + if (!it.Done()) { + Label do_store; + while (true) { + __ CompareMap(value_reg, it.Current()); + it.Advance(); + if (it.Done()) { + __ j(not_equal, miss_label); + break; + } + __ j(equal, &do_store, Label::kNear); + } + __ bind(&do_store); + } + } else if (representation.IsDouble()) { + Label do_store, heap_number; + __ AllocateHeapNumber(storage_reg, scratch1, scratch2, slow, MUTABLE); + + __ JumpIfNotSmi(value_reg, &heap_number); + __ SmiUntag(value_reg); + __ push(value_reg); + __ fild_s(Operand(esp, 0)); + __ pop(value_reg); + __ SmiTag(value_reg); + __ jmp(&do_store); + + __ bind(&heap_number); + __ CheckMap(value_reg, isolate()->factory()->heap_number_map(), miss_label, + DONT_DO_SMI_CHECK); + __ fld_d(FieldOperand(value_reg, HeapNumber::kValueOffset)); + + __ bind(&do_store); + __ fstp_d(FieldOperand(storage_reg, HeapNumber::kValueOffset)); + } + + // Stub never generated for objects that require access checks. + DCHECK(!transition->is_access_check_needed()); + + // Perform map transition for the receiver if necessary. + if (details.type() == FIELD && + Map::cast(transition->GetBackPointer())->unused_property_fields() == 0) { + // The properties must be extended before we can store the value. + // We jump to a runtime call that extends the properties array. + __ pop(scratch1); // Return address. + __ push(receiver_reg); + __ push(Immediate(transition)); + __ push(value_reg); + __ push(scratch1); + __ TailCallExternalReference( + ExternalReference(IC_Utility(IC::kSharedStoreIC_ExtendStorage), + isolate()), + 3, 1); + return; + } + + // Update the map of the object. + __ mov(scratch1, Immediate(transition)); + __ mov(FieldOperand(receiver_reg, HeapObject::kMapOffset), scratch1); + + // Update the write barrier for the map field. + __ RecordWriteField(receiver_reg, + HeapObject::kMapOffset, + scratch1, + scratch2, + OMIT_REMEMBERED_SET, + OMIT_SMI_CHECK); + + if (details.type() == CONSTANT) { + DCHECK(value_reg.is(eax)); + __ ret(0); + return; + } + + int index = transition->instance_descriptors()->GetFieldIndex( + transition->LastAdded()); + + // Adjust for the number of properties stored in the object. Even in the + // face of a transition we can use the old map here because the size of the + // object and the number of in-object properties is not going to change. + index -= transition->inobject_properties(); + + SmiCheck smi_check = representation.IsTagged() + ? INLINE_SMI_CHECK : OMIT_SMI_CHECK; + // TODO(verwaest): Share this code as a code stub. 
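+  // A negative index denotes an in-object property (stored inside the object
+  // at a fixed offset); a non-negative index is a slot in the separate
+  // properties backing store.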
+ if (index < 0) { + // Set the property straight into the object. + int offset = transition->instance_size() + (index * kPointerSize); + if (representation.IsDouble()) { + __ mov(FieldOperand(receiver_reg, offset), storage_reg); + } else { + __ mov(FieldOperand(receiver_reg, offset), value_reg); + } + + if (!representation.IsSmi()) { + // Update the write barrier for the array address. + if (!representation.IsDouble()) { + __ mov(storage_reg, value_reg); + } + __ RecordWriteField(receiver_reg, + offset, + storage_reg, + scratch1, + EMIT_REMEMBERED_SET, + smi_check); + } + } else { + // Write to the properties array. + int offset = index * kPointerSize + FixedArray::kHeaderSize; + // Get the properties array (optimistically). + __ mov(scratch1, FieldOperand(receiver_reg, JSObject::kPropertiesOffset)); + if (representation.IsDouble()) { + __ mov(FieldOperand(scratch1, offset), storage_reg); + } else { + __ mov(FieldOperand(scratch1, offset), value_reg); + } + + if (!representation.IsSmi()) { + // Update the write barrier for the array address. + if (!representation.IsDouble()) { + __ mov(storage_reg, value_reg); + } + __ RecordWriteField(scratch1, + offset, + storage_reg, + receiver_reg, + EMIT_REMEMBERED_SET, + smi_check); + } + } + + // Return the value (register eax). + DCHECK(value_reg.is(eax)); + __ ret(0); +} + + +void NamedStoreHandlerCompiler::GenerateStoreField(LookupResult* lookup, + Register value_reg, + Label* miss_label) { + DCHECK(lookup->representation().IsHeapObject()); + __ JumpIfSmi(value_reg, miss_label); + HeapType::Iterator<Map> it = lookup->GetFieldType()->Classes(); + Label do_store; + while (true) { + __ CompareMap(value_reg, it.Current()); + it.Advance(); + if (it.Done()) { + __ j(not_equal, miss_label); + break; + } + __ j(equal, &do_store, Label::kNear); + } + __ bind(&do_store); + + StoreFieldStub stub(isolate(), lookup->GetFieldIndex(), + lookup->representation()); + GenerateTailCall(masm(), stub.GetCode()); +} + + +Register PropertyHandlerCompiler::CheckPrototypes( + Register object_reg, Register holder_reg, Register scratch1, + Register scratch2, Handle<Name> name, Label* miss, + PrototypeCheckType check) { + Handle<Map> receiver_map(IC::TypeToMap(*type(), isolate())); + + // Make sure there's no overlap between holder and object registers. + DCHECK(!scratch1.is(object_reg) && !scratch1.is(holder_reg)); + DCHECK(!scratch2.is(object_reg) && !scratch2.is(holder_reg) + && !scratch2.is(scratch1)); + + // Keep track of the current object in register reg. + Register reg = object_reg; + int depth = 0; + + Handle<JSObject> current = Handle<JSObject>::null(); + if (type()->IsConstant()) + current = Handle<JSObject>::cast(type()->AsConstant()->Value()); + Handle<JSObject> prototype = Handle<JSObject>::null(); + Handle<Map> current_map = receiver_map; + Handle<Map> holder_map(holder()->map()); + // Traverse the prototype chain and check the maps in the prototype chain for + // fast and global objects or do negative lookup for normal objects. + while (!current_map.is_identical_to(holder_map)) { + ++depth; + + // Only global objects and objects that do not require access + // checks are allowed in stubs. + DCHECK(current_map->IsJSGlobalProxyMap() || + !current_map->is_access_check_needed()); + + prototype = handle(JSObject::cast(current_map->prototype())); + if (current_map->is_dictionary_map() && + !current_map->IsJSGlobalObjectMap()) { + DCHECK(!current_map->IsJSGlobalProxyMap()); // Proxy maps are fast. 
+ if (!name->IsUniqueName()) { + DCHECK(name->IsString()); + name = factory()->InternalizeString(Handle<String>::cast(name)); + } + DCHECK(current.is_null() || + current->property_dictionary()->FindEntry(name) == + NameDictionary::kNotFound); + + GenerateDictionaryNegativeLookup(masm(), miss, reg, name, + scratch1, scratch2); + + __ mov(scratch1, FieldOperand(reg, HeapObject::kMapOffset)); + reg = holder_reg; // From now on the object will be in holder_reg. + __ mov(reg, FieldOperand(scratch1, Map::kPrototypeOffset)); + } else { + bool in_new_space = heap()->InNewSpace(*prototype); + // Two possible reasons for loading the prototype from the map: + // (1) Can't store references to new space in code. + // (2) Handler is shared for all receivers with the same prototype + // map (but not necessarily the same prototype instance). + bool load_prototype_from_map = in_new_space || depth == 1; + if (depth != 1 || check == CHECK_ALL_MAPS) { + __ CheckMap(reg, current_map, miss, DONT_DO_SMI_CHECK); + } + + // Check access rights to the global object. This has to happen after + // the map check so that we know that the object is actually a global + // object. + // This allows us to install generated handlers for accesses to the + // global proxy (as opposed to using slow ICs). See corresponding code + // in LookupForRead(). + if (current_map->IsJSGlobalProxyMap()) { + __ CheckAccessGlobalProxy(reg, scratch1, scratch2, miss); + } else if (current_map->IsJSGlobalObjectMap()) { + GenerateCheckPropertyCell( + masm(), Handle<JSGlobalObject>::cast(current), name, + scratch2, miss); + } + + if (load_prototype_from_map) { + // Save the map in scratch1 for later. + __ mov(scratch1, FieldOperand(reg, HeapObject::kMapOffset)); + } + + reg = holder_reg; // From now on the object will be in holder_reg. + + if (load_prototype_from_map) { + __ mov(reg, FieldOperand(scratch1, Map::kPrototypeOffset)); + } else { + __ mov(reg, prototype); + } + } + + // Go to the next object in the prototype chain. + current = prototype; + current_map = handle(current->map()); + } + + // Log the check depth. + LOG(isolate(), IntEvent("check-maps-depth", depth + 1)); + + if (depth != 0 || check == CHECK_ALL_MAPS) { + // Check the holder map. + __ CheckMap(reg, current_map, miss, DONT_DO_SMI_CHECK); + } + + // Perform security check for access to the global object. + DCHECK(current_map->IsJSGlobalProxyMap() || + !current_map->is_access_check_needed()); + if (current_map->IsJSGlobalProxyMap()) { + __ CheckAccessGlobalProxy(reg, scratch1, scratch2, miss); + } + + // Return the register containing the holder. + return reg; +} + + +void NamedLoadHandlerCompiler::FrontendFooter(Handle<Name> name, Label* miss) { + if (!miss->is_unused()) { + Label success; + __ jmp(&success); + __ bind(miss); + TailCallBuiltin(masm(), MissBuiltin(kind())); + __ bind(&success); + } +} + + +void NamedStoreHandlerCompiler::FrontendFooter(Handle<Name> name, Label* miss) { + if (!miss->is_unused()) { + Label success; + __ jmp(&success); + GenerateRestoreName(miss, name); + TailCallBuiltin(masm(), MissBuiltin(kind())); + __ bind(&success); + } +} + + +void NamedLoadHandlerCompiler::GenerateLoadCallback( + Register reg, Handle<ExecutableAccessorInfo> callback) { + // Insert additional parameters into the stack frame above return address. + DCHECK(!scratch3().is(reg)); + __ pop(scratch3()); // Get return address to place it below. 
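// ---------------------------------------------------------------------------
// Editorial aside (illustrative, not part of the upstream patch): the
// STATIC_ASSERTs and pushes that follow build the PropertyCallbackArguments
// block on the stack. Relative to the args pointer saved further down by
// `__ push(esp)`, slot i lives at [args + i * kPointerSize]:
//
//   args[0] -> holder                (kHolderIndex == 0, pushed last)
//   args[1] -> isolate               (kIsolateIndex == 1)
//   args[2] -> return-value default  (kReturnValueDefaultValueIndex == 2)
//   args[3] -> return value          (kReturnValueOffset == 3)
//   args[4] -> callback data         (kDataIndex == 4)
//   args[5] -> receiver              (kThisIndex == 5, pushed first)
// ---------------------------------------------------------------------------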
+
+  STATIC_ASSERT(PropertyCallbackArguments::kHolderIndex == 0);
+  STATIC_ASSERT(PropertyCallbackArguments::kIsolateIndex == 1);
+  STATIC_ASSERT(PropertyCallbackArguments::kReturnValueDefaultValueIndex == 2);
+  STATIC_ASSERT(PropertyCallbackArguments::kReturnValueOffset == 3);
+  STATIC_ASSERT(PropertyCallbackArguments::kDataIndex == 4);
+  STATIC_ASSERT(PropertyCallbackArguments::kThisIndex == 5);
+  __ push(receiver());  // receiver
+  // Push data from ExecutableAccessorInfo.
+  if (isolate()->heap()->InNewSpace(callback->data())) {
+    DCHECK(!scratch2().is(reg));
+    __ mov(scratch2(), Immediate(callback));
+    __ push(FieldOperand(scratch2(), ExecutableAccessorInfo::kDataOffset));
+  } else {
+    __ push(Immediate(Handle<Object>(callback->data(), isolate())));
+  }
+  __ push(Immediate(isolate()->factory()->undefined_value()));  // ReturnValue
+  // ReturnValue default value
+  __ push(Immediate(isolate()->factory()->undefined_value()));
+  __ push(Immediate(reinterpret_cast<int>(isolate())));
+  __ push(reg);  // holder
+
+  // Save a pointer to where we pushed the arguments. This will be
+  // passed as the const PropertyAccessorInfo& to the C++ callback.
+  __ push(esp);
+
+  __ push(name());  // name
+
+  __ push(scratch3());  // Restore return address.
+
+  // ABI for CallApiGetter.
+  Register getter_address = edx;
+  Address function_address = v8::ToCData<Address>(callback->getter());
+  __ mov(getter_address, Immediate(function_address));
+
+  CallApiGetterStub stub(isolate());
+  __ TailCallStub(&stub);
+}
+
+
+void NamedLoadHandlerCompiler::GenerateLoadConstant(Handle<Object> value) {
+  // Return the constant value.
+  __ LoadObject(eax, value);
+  __ ret(0);
+}
+
+
+void NamedLoadHandlerCompiler::GenerateLoadInterceptor(Register holder_reg,
+                                                       LookupResult* lookup,
+                                                       Handle<Name> name) {
+  DCHECK(holder()->HasNamedInterceptor());
+  DCHECK(!holder()->GetNamedInterceptor()->getter()->IsUndefined());
+
+  // So far the most popular follow-ups for interceptor loads are FIELD and
+  // CALLBACKS, so inline only them; other cases may be added later.
+  bool compile_followup_inline = false;
+  if (lookup->IsFound() && lookup->IsCacheable()) {
+    if (lookup->IsField()) {
+      compile_followup_inline = true;
+    } else if (lookup->type() == CALLBACKS &&
+               lookup->GetCallbackObject()->IsExecutableAccessorInfo()) {
+      Handle<ExecutableAccessorInfo> callback(
+          ExecutableAccessorInfo::cast(lookup->GetCallbackObject()));
+      compile_followup_inline =
+          callback->getter() != NULL &&
+          ExecutableAccessorInfo::IsCompatibleReceiverType(isolate(), callback,
+                                                           type());
+    }
+  }
+
+  if (compile_followup_inline) {
+    // Compile the interceptor call, followed by inline code to load the
+    // property from further up the prototype chain if the call fails.
+    // Check that the maps haven't changed.
+    DCHECK(holder_reg.is(receiver()) || holder_reg.is(scratch1()));
+
+    // Preserve the receiver register explicitly whenever it is different from
+    // the holder and it is needed should the interceptor return without any
+    // result. The CALLBACKS case needs the receiver to be passed into C++
+    // code, while the FIELD case might cause a miss during the prototype
+    // check.
+    bool must_perform_prototype_check = *holder() != lookup->holder();
+    bool must_preserve_receiver_reg = !receiver().is(holder_reg) &&
+        (lookup->type() == CALLBACKS || must_perform_prototype_check);
+
+    // Save necessary data before invoking an interceptor.
+    // Requires a frame to make GC aware of pushed pointers.
+    {
+      FrameScope frame_scope(masm(), StackFrame::INTERNAL);
+
+      if (must_preserve_receiver_reg) {
+        __ push(receiver());
+      }
+      __ push(holder_reg);
+      __ push(this->name());
+
+      // Invoke the interceptor. Note: map checks from the receiver to the
+      // interceptor's holder have been compiled before (see the caller of
+      // this method).
+      CompileCallLoadPropertyWithInterceptor(
+          masm(), receiver(), holder_reg, this->name(), holder(),
+          IC::kLoadPropertyWithInterceptorOnly);
+
+      // Check if the interceptor provided a value for the property. If so,
+      // return immediately.
+      Label interceptor_failed;
+      __ cmp(eax, factory()->no_interceptor_result_sentinel());
+      __ j(equal, &interceptor_failed);
+      frame_scope.GenerateLeaveFrame();
+      __ ret(0);
+
+      // Clobber registers when generating debug-code to provoke errors.
+      __ bind(&interceptor_failed);
+      if (FLAG_debug_code) {
+        __ mov(receiver(), Immediate(BitCast<int32_t>(kZapValue)));
+        __ mov(holder_reg, Immediate(BitCast<int32_t>(kZapValue)));
+        __ mov(this->name(), Immediate(BitCast<int32_t>(kZapValue)));
+      }
+
+      __ pop(this->name());
+      __ pop(holder_reg);
+      if (must_preserve_receiver_reg) {
+        __ pop(receiver());
+      }
+
+      // Leave the internal frame.
+    }
+
+    GenerateLoadPostInterceptor(holder_reg, name, lookup);
+  } else {  // !compile_followup_inline
+    // Call the runtime system to load the interceptor.
+    // Check that the maps haven't changed.
+    __ pop(scratch2());  // save old return address
+    PushInterceptorArguments(masm(), receiver(), holder_reg, this->name(),
+                             holder());
+    __ push(scratch2());  // restore old return address
+
+    ExternalReference ref =
+        ExternalReference(IC_Utility(IC::kLoadPropertyWithInterceptor),
+                          isolate());
+    __ TailCallExternalReference(
+        ref, NamedLoadHandlerCompiler::kInterceptorArgsLength, 1);
+  }
+}
+
+
+Handle<Code> NamedStoreHandlerCompiler::CompileStoreCallback(
+    Handle<JSObject> object, Handle<Name> name,
+    Handle<ExecutableAccessorInfo> callback) {
+  Register holder_reg = Frontend(receiver(), name);
+
+  __ pop(scratch1());  // remove the return address
+  __ push(receiver());
+  __ push(holder_reg);
+  __ Push(callback);
+  __ Push(name);
+  __ push(value());
+  __ push(scratch1());  // restore return address
+
+  // Do tail-call to the runtime system.
+  ExternalReference store_callback_property =
+      ExternalReference(IC_Utility(IC::kStoreCallbackProperty), isolate());
+  __ TailCallExternalReference(store_callback_property, 5, 1);
+
+  // Return the generated code.
+  return GetCode(kind(), Code::FAST, name);
+}
+
+
+#undef __
+#define __ ACCESS_MASM(masm)
+
+
+void NamedStoreHandlerCompiler::GenerateStoreViaSetter(
+    MacroAssembler* masm, Handle<HeapType> type, Register receiver,
+    Handle<JSFunction> setter) {
+  // ----------- S t a t e -------------
+  //  -- esp[0] : return address
+  // -----------------------------------
+  {
+    FrameScope scope(masm, StackFrame::INTERNAL);
+
+    // Save value register, so we can restore it later.
+    __ push(value());
+
+    if (!setter.is_null()) {
+      // Call the JavaScript setter with receiver and value on the stack.
+      if (IC::TypeToMap(*type, masm->isolate())->IsJSGlobalObjectMap()) {
+        // Swap in the global receiver.
+ __ mov(receiver, + FieldOperand(receiver, JSGlobalObject::kGlobalProxyOffset)); + } + __ push(receiver); + __ push(value()); + ParameterCount actual(1); + ParameterCount expected(setter); + __ InvokeFunction(setter, expected, actual, + CALL_FUNCTION, NullCallWrapper()); + } else { + // If we generate a global code snippet for deoptimization only, remember + // the place to continue after deoptimization. + masm->isolate()->heap()->SetSetterStubDeoptPCOffset(masm->pc_offset()); + } + + // We have to return the passed value, not the return value of the setter. + __ pop(eax); + + // Restore context register. + __ mov(esi, Operand(ebp, StandardFrameConstants::kContextOffset)); + } + __ ret(0); +} + + +#undef __ +#define __ ACCESS_MASM(masm()) + + +Handle<Code> NamedStoreHandlerCompiler::CompileStoreInterceptor( + Handle<Name> name) { + __ pop(scratch1()); // remove the return address + __ push(receiver()); + __ push(this->name()); + __ push(value()); + __ push(scratch1()); // restore return address + + // Do tail-call to the runtime system. + ExternalReference store_ic_property = ExternalReference( + IC_Utility(IC::kStorePropertyWithInterceptor), isolate()); + __ TailCallExternalReference(store_ic_property, 3, 1); + + // Return the generated code. + return GetCode(kind(), Code::FAST, name); +} + + +Handle<Code> PropertyICCompiler::CompileKeyedStorePolymorphic( + MapHandleList* receiver_maps, CodeHandleList* handler_stubs, + MapHandleList* transitioned_maps) { + Label miss; + __ JumpIfSmi(receiver(), &miss, Label::kNear); + __ mov(scratch1(), FieldOperand(receiver(), HeapObject::kMapOffset)); + for (int i = 0; i < receiver_maps->length(); ++i) { + __ cmp(scratch1(), receiver_maps->at(i)); + if (transitioned_maps->at(i).is_null()) { + __ j(equal, handler_stubs->at(i)); + } else { + Label next_map; + __ j(not_equal, &next_map, Label::kNear); + __ mov(transition_map(), Immediate(transitioned_maps->at(i))); + __ jmp(handler_stubs->at(i), RelocInfo::CODE_TARGET); + __ bind(&next_map); + } + } + __ bind(&miss); + TailCallBuiltin(masm(), MissBuiltin(kind())); + + // Return the generated code. + return GetCode(kind(), Code::NORMAL, factory()->empty_string(), POLYMORPHIC); +} + + +Register* PropertyAccessCompiler::load_calling_convention() { + // receiver, name, scratch1, scratch2, scratch3, scratch4. + Register receiver = LoadIC::ReceiverRegister(); + Register name = LoadIC::NameRegister(); + static Register registers[] = { receiver, name, ebx, eax, edi, no_reg }; + return registers; +} + + +Register* PropertyAccessCompiler::store_calling_convention() { + // receiver, name, scratch1, scratch2, scratch3. + Register receiver = StoreIC::ReceiverRegister(); + Register name = StoreIC::NameRegister(); + DCHECK(ebx.is(KeyedStoreIC::MapRegister())); + static Register registers[] = { receiver, name, ebx, edi, no_reg }; + return registers; +} + + +Register NamedStoreHandlerCompiler::value() { return StoreIC::ValueRegister(); } + + +#undef __ +#define __ ACCESS_MASM(masm) + + +void NamedLoadHandlerCompiler::GenerateLoadViaGetter( + MacroAssembler* masm, Handle<HeapType> type, Register receiver, + Handle<JSFunction> getter) { + { + FrameScope scope(masm, StackFrame::INTERNAL); + + if (!getter.is_null()) { + // Call the JavaScript getter with the receiver on the stack. + if (IC::TypeToMap(*type, masm->isolate())->IsJSGlobalObjectMap()) { + // Swap in the global receiver. 
+ __ mov(receiver, + FieldOperand(receiver, JSGlobalObject::kGlobalProxyOffset)); + } + __ push(receiver); + ParameterCount actual(0); + ParameterCount expected(getter); + __ InvokeFunction(getter, expected, actual, + CALL_FUNCTION, NullCallWrapper()); + } else { + // If we generate a global code snippet for deoptimization only, remember + // the place to continue after deoptimization. + masm->isolate()->heap()->SetGetterStubDeoptPCOffset(masm->pc_offset()); + } + + // Restore context register. + __ mov(esi, Operand(ebp, StandardFrameConstants::kContextOffset)); + } + __ ret(0); +} + + +#undef __ +#define __ ACCESS_MASM(masm()) + + +Handle<Code> NamedLoadHandlerCompiler::CompileLoadGlobal( + Handle<PropertyCell> cell, Handle<Name> name, bool is_configurable) { + Label miss; + + FrontendHeader(receiver(), name, &miss); + // Get the value from the cell. + Register result = StoreIC::ValueRegister(); + if (masm()->serializer_enabled()) { + __ mov(result, Immediate(cell)); + __ mov(result, FieldOperand(result, PropertyCell::kValueOffset)); + } else { + __ mov(result, Operand::ForCell(cell)); + } + + // Check for deleted property if property can actually be deleted. + if (is_configurable) { + __ cmp(result, factory()->the_hole_value()); + __ j(equal, &miss); + } else if (FLAG_debug_code) { + __ cmp(result, factory()->the_hole_value()); + __ Check(not_equal, kDontDeleteCellsCannotContainTheHole); + } + + Counters* counters = isolate()->counters(); + __ IncrementCounter(counters->named_load_global_stub(), 1); + // The code above already loads the result into the return register. + __ ret(0); + + FrontendFooter(name, &miss); + + // Return the generated code. + return GetCode(kind(), Code::NORMAL, name); +} + + +Handle<Code> PropertyICCompiler::CompilePolymorphic(TypeHandleList* types, + CodeHandleList* handlers, + Handle<Name> name, + Code::StubType type, + IcCheckType check) { + Label miss; + + if (check == PROPERTY && + (kind() == Code::KEYED_LOAD_IC || kind() == Code::KEYED_STORE_IC)) { + // In case we are compiling an IC for dictionary loads and stores, just + // check whether the name is unique. + if (name.is_identical_to(isolate()->factory()->normal_ic_symbol())) { + __ JumpIfNotUniqueName(this->name(), &miss); + } else { + __ cmp(this->name(), Immediate(name)); + __ j(not_equal, &miss); + } + } + + Label number_case; + Label* smi_target = IncludesNumberType(types) ? &number_case : &miss; + __ JumpIfSmi(receiver(), smi_target); + + // Polymorphic keyed stores may use the map register + Register map_reg = scratch1(); + DCHECK(kind() != Code::KEYED_STORE_IC || + map_reg.is(KeyedStoreIC::MapRegister())); + __ mov(map_reg, FieldOperand(receiver(), HeapObject::kMapOffset)); + int receiver_count = types->length(); + int number_of_handled_maps = 0; + for (int current = 0; current < receiver_count; ++current) { + Handle<HeapType> type = types->at(current); + Handle<Map> map = IC::TypeToMap(*type, isolate()); + if (!map->is_deprecated()) { + number_of_handled_maps++; + __ cmp(map_reg, map); + if (type->Is(HeapType::Number())) { + DCHECK(!number_case.is_unused()); + __ bind(&number_case); + } + __ j(equal, handlers->at(current)); + } + } + DCHECK(number_of_handled_maps != 0); + + __ bind(&miss); + TailCallBuiltin(masm(), MissBuiltin(kind())); + + // Return the generated code. + InlineCacheState state = + number_of_handled_maps > 1 ? 
POLYMORPHIC : MONOMORPHIC; + return GetCode(kind(), type, name, state); +} + + +#undef __ +#define __ ACCESS_MASM(masm) + + +void ElementHandlerCompiler::GenerateLoadDictionaryElement( + MacroAssembler* masm) { + // ----------- S t a t e ------------- + // -- ecx : key + // -- edx : receiver + // -- esp[0] : return address + // ----------------------------------- + DCHECK(edx.is(LoadIC::ReceiverRegister())); + DCHECK(ecx.is(LoadIC::NameRegister())); + Label slow, miss; + + // This stub is meant to be tail-jumped to, the receiver must already + // have been verified by the caller to not be a smi. + __ JumpIfNotSmi(ecx, &miss); + __ mov(ebx, ecx); + __ SmiUntag(ebx); + __ mov(eax, FieldOperand(edx, JSObject::kElementsOffset)); + + // Push receiver on the stack to free up a register for the dictionary + // probing. + __ push(edx); + __ LoadFromNumberDictionary(&slow, eax, ecx, ebx, edx, edi, eax); + // Pop receiver before returning. + __ pop(edx); + __ ret(0); + + __ bind(&slow); + __ pop(edx); + + // ----------- S t a t e ------------- + // -- ecx : key + // -- edx : receiver + // -- esp[0] : return address + // ----------------------------------- + TailCallBuiltin(masm, Builtins::kKeyedLoadIC_Slow); + + __ bind(&miss); + // ----------- S t a t e ------------- + // -- ecx : key + // -- edx : receiver + // -- esp[0] : return address + // ----------------------------------- + TailCallBuiltin(masm, Builtins::kKeyedLoadIC_Miss); +} + + +#undef __ + +} } // namespace v8::internal + +#endif // V8_TARGET_ARCH_X87 diff --git a/deps/v8/src/zone-allocator.h b/deps/v8/src/zone-allocator.h index 8501c35b272..ab0ae9cf606 100644 --- a/deps/v8/src/zone-allocator.h +++ b/deps/v8/src/zone-allocator.h @@ -7,7 +7,7 @@ #include <limits> -#include "zone.h" +#include "src/zone.h" namespace v8 { namespace internal { @@ -62,6 +62,8 @@ class zone_allocator { Zone* zone_; }; +typedef zone_allocator<bool> ZoneBoolAllocator; +typedef zone_allocator<int> ZoneIntAllocator; } } // namespace v8::internal #endif // V8_ZONE_ALLOCATOR_H_ diff --git a/deps/v8/src/zone-containers.h b/deps/v8/src/zone-containers.h index c4a1055f9c8..1295ed7ab95 100644 --- a/deps/v8/src/zone-containers.h +++ b/deps/v8/src/zone-containers.h @@ -6,14 +6,14 @@ #define V8_ZONE_CONTAINERS_H_ #include <vector> -#include <set> -#include "zone.h" +#include "src/zone-allocator.h" namespace v8 { namespace internal { -typedef zone_allocator<int> ZoneIntAllocator; +typedef std::vector<bool, ZoneBoolAllocator> BoolVector; + typedef std::vector<int, ZoneIntAllocator> IntVector; typedef IntVector::iterator IntVectorIter; typedef IntVector::reverse_iterator IntVectorRIter; diff --git a/deps/v8/src/zone-inl.h b/deps/v8/src/zone-inl.h index c17f33c4aa1..cf037b59bc3 100644 --- a/deps/v8/src/zone-inl.h +++ b/deps/v8/src/zone-inl.h @@ -5,7 +5,7 @@ #ifndef V8_ZONE_INL_H_ #define V8_ZONE_INL_H_ -#include "zone.h" +#include "src/zone.h" #ifdef V8_USE_ADDRESS_SANITIZER #include <sanitizer/asan_interface.h> @@ -13,9 +13,9 @@ #define ASAN_UNPOISON_MEMORY_REGION(start, size) ((void) 0) #endif -#include "counters.h" -#include "isolate.h" -#include "utils.h" +#include "src/counters.h" +#include "src/isolate.h" +#include "src/utils.h" namespace v8 { namespace internal { @@ -24,54 +24,6 @@ namespace internal { static const int kASanRedzoneBytes = 24; // Must be a multiple of 8. -inline void* Zone::New(int size) { - // Round up the requested size to fit the alignment. 
- size = RoundUp(size, kAlignment); - - // If the allocation size is divisible by 8 then we return an 8-byte aligned - // address. - if (kPointerSize == 4 && kAlignment == 4) { - position_ += ((~size) & 4) & (reinterpret_cast<intptr_t>(position_) & 4); - } else { - ASSERT(kAlignment >= kPointerSize); - } - - // Check if the requested size is available without expanding. - Address result = position_; - - int size_with_redzone = -#ifdef V8_USE_ADDRESS_SANITIZER - size + kASanRedzoneBytes; -#else - size; -#endif - - if (size_with_redzone > limit_ - position_) { - result = NewExpand(size_with_redzone); - } else { - position_ += size_with_redzone; - } - -#ifdef V8_USE_ADDRESS_SANITIZER - Address redzone_position = result + size; - ASSERT(redzone_position + kASanRedzoneBytes == position_); - ASAN_POISON_MEMORY_REGION(redzone_position, kASanRedzoneBytes); -#endif - - // Check that the result has the proper alignment and return it. - ASSERT(IsAddressAligned(result, kAlignment, 0)); - allocation_size_ += size; - return reinterpret_cast<void*>(result); -} - - -template <typename T> -T* Zone::NewArray(int length) { - CHECK(std::numeric_limits<int>::max() / static_cast<int>(sizeof(T)) > length); - return static_cast<T*>(New(length * sizeof(T))); -} - - bool Zone::excess_allocation() { return segment_bytes_allocated_ > kExcessLimit; } @@ -97,7 +49,7 @@ void* ZoneObject::operator new(size_t size, Zone* zone) { } inline void* ZoneAllocationPolicy::New(size_t size) { - ASSERT(zone_); + DCHECK(zone_); return zone_->New(static_cast<int>(size)); } diff --git a/deps/v8/src/zone.cc b/deps/v8/src/zone.cc index 49efc5a7463..48d8c7b171d 100644 --- a/deps/v8/src/zone.cc +++ b/deps/v8/src/zone.cc @@ -4,8 +4,8 @@ #include <string.h> -#include "v8.h" -#include "zone-inl.h" +#include "src/v8.h" +#include "src/zone-inl.h" namespace v8 { namespace internal { @@ -58,7 +58,48 @@ Zone::~Zone() { DeleteAll(); DeleteKeptSegment(); - ASSERT(segment_bytes_allocated_ == 0); + DCHECK(segment_bytes_allocated_ == 0); +} + + +void* Zone::New(int size) { + // Round up the requested size to fit the alignment. + size = RoundUp(size, kAlignment); + + // If the allocation size is divisible by 8 then we return an 8-byte aligned + // address. + if (kPointerSize == 4 && kAlignment == 4) { + position_ += ((~size) & 4) & (reinterpret_cast<intptr_t>(position_) & 4); + } else { + DCHECK(kAlignment >= kPointerSize); + } + + // Check if the requested size is available without expanding. + Address result = position_; + + int size_with_redzone = +#ifdef V8_USE_ADDRESS_SANITIZER + size + kASanRedzoneBytes; +#else + size; +#endif + + if (size_with_redzone > limit_ - position_) { + result = NewExpand(size_with_redzone); + } else { + position_ += size_with_redzone; + } + +#ifdef V8_USE_ADDRESS_SANITIZER + Address redzone_position = result + size; + DCHECK(redzone_position + kASanRedzoneBytes == position_); + ASAN_POISON_MEMORY_REGION(redzone_position, kASanRedzoneBytes); +#endif + + // Check that the result has the proper alignment and return it. 
+ DCHECK(IsAddressAligned(result, kAlignment, 0)); + allocation_size_ += size; + return reinterpret_cast<void*>(result); } @@ -120,7 +161,7 @@ void Zone::DeleteKeptSegment() { static const unsigned char kZapDeadByte = 0xcd; #endif - ASSERT(segment_head_ == NULL || segment_head_->next() == NULL); + DCHECK(segment_head_ == NULL || segment_head_->next() == NULL); if (segment_head_ != NULL) { int size = segment_head_->size(); #ifdef DEBUG @@ -133,7 +174,7 @@ void Zone::DeleteKeptSegment() { segment_head_ = NULL; } - ASSERT(segment_bytes_allocated_ == 0); + DCHECK(segment_bytes_allocated_ == 0); } @@ -160,8 +201,8 @@ void Zone::DeleteSegment(Segment* segment, int size) { Address Zone::NewExpand(int size) { // Make sure the requested size is already properly aligned and that // there isn't enough room in the Zone to satisfy the request. - ASSERT(size == RoundDown(size, kAlignment)); - ASSERT(size > limit_ - position_); + DCHECK(size == RoundDown(size, kAlignment)); + DCHECK(size > limit_ - position_); // Compute the new segment size. We use a 'high water mark' // strategy, where we increase the segment size every time we expand @@ -210,7 +251,7 @@ Address Zone::NewExpand(int size) { return NULL; } limit_ = segment->end(); - ASSERT(position_ <= limit_); + DCHECK(position_ <= limit_); return result; } diff --git a/deps/v8/src/zone.h b/deps/v8/src/zone.h index 573e13e1d4a..a690b8d8caf 100644 --- a/deps/v8/src/zone.h +++ b/deps/v8/src/zone.h @@ -5,21 +5,18 @@ #ifndef V8_ZONE_H_ #define V8_ZONE_H_ -#include "allocation.h" -#include "checks.h" -#include "hashmap.h" -#include "globals.h" -#include "list.h" -#include "splay-tree.h" +#include <limits> + +#include "src/allocation.h" +#include "src/base/logging.h" +#include "src/globals.h" +#include "src/hashmap.h" +#include "src/list.h" +#include "src/splay-tree.h" namespace v8 { namespace internal { -#if defined(__has_feature) - #if __has_feature(address_sanitizer) - #define V8_USE_ADDRESS_SANITIZER - #endif -#endif class Segment; class Isolate; @@ -43,10 +40,14 @@ class Zone { ~Zone(); // Allocate 'size' bytes of memory in the Zone; expands the Zone by // allocating new segments of memory on demand using malloc(). - inline void* New(int size); + void* New(int size); template <typename T> - inline T* NewArray(int length); + T* NewArray(int length) { + CHECK(std::numeric_limits<int>::max() / static_cast<int>(sizeof(T)) > + length); + return static_cast<T*>(New(length * sizeof(T))); + } // Deletes all objects and free all memory allocated in the Zone. Keeps one // small (size <= kMaximumKeptSegmentSize) segment around if it finds one. diff --git a/deps/v8/test/base-unittests/DEPS b/deps/v8/test/base-unittests/DEPS new file mode 100644 index 00000000000..90b080063f6 --- /dev/null +++ b/deps/v8/test/base-unittests/DEPS @@ -0,0 +1,8 @@ +include_rules = [ + "-include", + "+include/v8config.h", + "+include/v8stdint.h", + "-src", + "+src/base", + "+testing/gtest", +] diff --git a/deps/v8/test/base-unittests/base-unittests.gyp b/deps/v8/test/base-unittests/base-unittests.gyp new file mode 100644 index 00000000000..339269db1bc --- /dev/null +++ b/deps/v8/test/base-unittests/base-unittests.gyp @@ -0,0 +1,41 @@ +# Copyright 2014 the V8 project authors. All rights reserved. +# Use of this source code is governed by a BSD-style license that can be +# found in the LICENSE file. 
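# Editorial aside (not part of the upstream patch): this new target
# deliberately links only gtest, gtest_main and v8_libbase, so src/base can
# be unit-tested without pulling in the full VM; the DEPS file added above
# enforces the same boundary ("-src", "+src/base").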
+ +{ + 'variables': { + 'v8_code': 1, + }, + 'includes': ['../../build/toolchain.gypi', '../../build/features.gypi'], + 'targets': [ + { + 'target_name': 'base-unittests', + 'type': 'executable', + 'dependencies': [ + '../../testing/gtest.gyp:gtest', + '../../testing/gtest.gyp:gtest_main', + '../../tools/gyp/v8.gyp:v8_libbase', + ], + 'include_dirs': [ + '../..', + ], + 'sources': [ ### gcmole(all) ### + 'cpu-unittest.cc', + 'platform/condition-variable-unittest.cc', + 'platform/mutex-unittest.cc', + 'platform/platform-unittest.cc', + 'platform/time-unittest.cc', + 'utils/random-number-generator-unittest.cc', + ], + 'conditions': [ + ['os_posix == 1', { + # TODO(svenpanne): This is a temporary work-around to fix the warnings + # that show up because we use -std=gnu++0x instead of -std=c++11. + 'cflags!': [ + '-pedantic', + ], + }], + ], + }, + ], +} diff --git a/deps/v8/test/base-unittests/base-unittests.status b/deps/v8/test/base-unittests/base-unittests.status new file mode 100644 index 00000000000..d439913ccf6 --- /dev/null +++ b/deps/v8/test/base-unittests/base-unittests.status @@ -0,0 +1,6 @@ +# Copyright 2014 the V8 project authors. All rights reserved. +# Use of this source code is governed by a BSD-style license that can be +# found in the LICENSE file. + +[ +] diff --git a/deps/v8/test/base-unittests/cpu-unittest.cc b/deps/v8/test/base-unittests/cpu-unittest.cc new file mode 100644 index 00000000000..5c58f862381 --- /dev/null +++ b/deps/v8/test/base-unittests/cpu-unittest.cc @@ -0,0 +1,49 @@ +// Copyright 2014 the V8 project authors. All rights reserved. +// Use of this source code is governed by a BSD-style license that can be +// found in the LICENSE file. + +#include "src/base/cpu.h" +#include "testing/gtest/include/gtest/gtest.h" + +namespace v8 { +namespace base { + +TEST(CPUTest, FeatureImplications) { + CPU cpu; + + // ia32 and x64 features + EXPECT_TRUE(!cpu.has_sse() || cpu.has_mmx()); + EXPECT_TRUE(!cpu.has_sse2() || cpu.has_sse()); + EXPECT_TRUE(!cpu.has_sse3() || cpu.has_sse2()); + EXPECT_TRUE(!cpu.has_ssse3() || cpu.has_sse3()); + EXPECT_TRUE(!cpu.has_sse41() || cpu.has_sse3()); + EXPECT_TRUE(!cpu.has_sse42() || cpu.has_sse41()); + + // arm features + EXPECT_TRUE(!cpu.has_vfp3_d32() || cpu.has_vfp3()); +} + + +TEST(CPUTest, RequiredFeatures) { + CPU cpu; + +#if V8_HOST_ARCH_ARM + EXPECT_TRUE(cpu.has_fpu()); +#endif + +#if V8_HOST_ARCH_IA32 + EXPECT_TRUE(cpu.has_fpu()); + EXPECT_TRUE(cpu.has_sahf()); +#endif + +#if V8_HOST_ARCH_X64 + EXPECT_TRUE(cpu.has_fpu()); + EXPECT_TRUE(cpu.has_cmov()); + EXPECT_TRUE(cpu.has_mmx()); + EXPECT_TRUE(cpu.has_sse()); + EXPECT_TRUE(cpu.has_sse2()); +#endif +} + +} // namespace base +} // namespace v8 diff --git a/deps/v8/test/cctest/test-condition-variable.cc b/deps/v8/test/base-unittests/platform/condition-variable-unittest.cc similarity index 58% rename from deps/v8/test/cctest/test-condition-variable.cc rename to deps/v8/test/base-unittests/platform/condition-variable-unittest.cc index a7bd6500dc9..ea1efd0d5b9 100644 --- a/deps/v8/test/cctest/test-condition-variable.cc +++ b/deps/v8/test/base-unittests/platform/condition-variable-unittest.cc @@ -1,40 +1,17 @@ -// Copyright 2013 the V8 project authors. All rights reserved. -// Redistribution and use in source and binary forms, with or without -// modification, are permitted provided that the following conditions are -// met: -// -// * Redistributions of source code must retain the above copyright -// notice, this list of conditions and the following disclaimer. 
-// * Redistributions in binary form must reproduce the above
-// copyright notice, this list of conditions and the following
-// disclaimer in the documentation and/or other materials provided
-// with the distribution.
-// * Neither the name of Google Inc. nor the names of its
-// contributors may be used to endorse or promote products derived
-// from this software without specific prior written permission.
-//
-// THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
-// "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
-// LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
-// A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
-// OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
-// SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
-// LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
-// DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
-// THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
-// (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
-// OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
-
-#include "v8.h"
-
-#include "cctest.h"
-#include "platform/condition-variable.h"
-#include "platform/time.h"
-
-using namespace ::v8::internal;
-
-
-TEST(WaitForAfterNofityOnSameThread) {
+// Copyright 2014 the V8 project authors. All rights reserved.
+// Use of this source code is governed by a BSD-style license that can be
+// found in the LICENSE file.
+
+#include "src/base/platform/condition-variable.h"
+
+#include "src/base/platform/platform.h"
+#include "src/base/platform/time.h"
+#include "testing/gtest/include/gtest/gtest.h"
+
+namespace v8 {
+namespace base {
+
+TEST(ConditionVariable, WaitForAfterNotifyOnSameThread) {
   for (int n = 0; n < 10; ++n) {
     Mutex mutex;
     ConditionVariable cv;
@@ -42,19 +19,22 @@ TEST(WaitForAfterNofityOnSameThread) {
     LockGuard<Mutex> lock_guard(&mutex);
 
     cv.NotifyOne();
-    CHECK_EQ(false, cv.WaitFor(&mutex, TimeDelta::FromMicroseconds(n)));
+    EXPECT_FALSE(cv.WaitFor(&mutex, TimeDelta::FromMicroseconds(n)));
 
     cv.NotifyAll();
-    CHECK_EQ(false, cv.WaitFor(&mutex, TimeDelta::FromMicroseconds(n)));
+    EXPECT_FALSE(cv.WaitFor(&mutex, TimeDelta::FromMicroseconds(n)));
   }
 }
 
+namespace {
+
 class ThreadWithMutexAndConditionVariable V8_FINAL : public Thread {
  public:
   ThreadWithMutexAndConditionVariable()
-      : Thread("ThreadWithMutexAndConditionVariable"),
-        running_(false), finished_(false) {}
+      : Thread(Options("ThreadWithMutexAndConditionVariable")),
+        running_(false),
+        finished_(false) {}
   virtual ~ThreadWithMutexAndConditionVariable() {}
 
   virtual void Run() V8_OVERRIDE {
@@ -74,15 +54,17 @@ class ThreadWithMutexAndConditionVariable V8_FINAL : public Thread {
   Mutex mutex_;
 };
 
+}
+
 
-TEST(MultipleThreadsWithSeparateConditionVariables) {
+TEST(ConditionVariable, MultipleThreadsWithSeparateConditionVariables) {
   static const int kThreadCount = 128;
   ThreadWithMutexAndConditionVariable threads[kThreadCount];
 
   for (int n = 0; n < kThreadCount; ++n) {
     LockGuard<Mutex> lock_guard(&threads[n].mutex_);
-    CHECK(!threads[n].running_);
-    CHECK(!threads[n].finished_);
+    EXPECT_FALSE(threads[n].running_);
+    EXPECT_FALSE(threads[n].finished_);
     threads[n].Start();
     // Wait for nth thread to start.
while (!threads[n].running_) { @@ -92,14 +74,14 @@ TEST(MultipleThreadsWithSeparateConditionVariables) { for (int n = kThreadCount - 1; n >= 0; --n) { LockGuard<Mutex> lock_guard(&threads[n].mutex_); - CHECK(threads[n].running_); - CHECK(!threads[n].finished_); + EXPECT_TRUE(threads[n].running_); + EXPECT_FALSE(threads[n].finished_); } for (int n = 0; n < kThreadCount; ++n) { LockGuard<Mutex> lock_guard(&threads[n].mutex_); - CHECK(threads[n].running_); - CHECK(!threads[n].finished_); + EXPECT_TRUE(threads[n].running_); + EXPECT_FALSE(threads[n].finished_); // Tell the nth thread to quit. threads[n].running_ = false; threads[n].cv_.NotifyOne(); @@ -111,24 +93,29 @@ TEST(MultipleThreadsWithSeparateConditionVariables) { while (!threads[n].finished_) { threads[n].cv_.Wait(&threads[n].mutex_); } - CHECK(!threads[n].running_); - CHECK(threads[n].finished_); + EXPECT_FALSE(threads[n].running_); + EXPECT_TRUE(threads[n].finished_); } for (int n = 0; n < kThreadCount; ++n) { threads[n].Join(); LockGuard<Mutex> lock_guard(&threads[n].mutex_); - CHECK(!threads[n].running_); - CHECK(threads[n].finished_); + EXPECT_FALSE(threads[n].running_); + EXPECT_TRUE(threads[n].finished_); } } +namespace { + class ThreadWithSharedMutexAndConditionVariable V8_FINAL : public Thread { public: ThreadWithSharedMutexAndConditionVariable() - : Thread("ThreadWithSharedMutexAndConditionVariable"), - running_(false), finished_(false), cv_(NULL), mutex_(NULL) {} + : Thread(Options("ThreadWithSharedMutexAndConditionVariable")), + running_(false), + finished_(false), + cv_(NULL), + mutex_(NULL) {} virtual ~ThreadWithSharedMutexAndConditionVariable() {} virtual void Run() V8_OVERRIDE { @@ -148,8 +135,10 @@ class ThreadWithSharedMutexAndConditionVariable V8_FINAL : public Thread { Mutex* mutex_; }; +} + -TEST(MultipleThreadsWithSharedSeparateConditionVariables) { +TEST(ConditionVariable, MultipleThreadsWithSharedSeparateConditionVariables) { static const int kThreadCount = 128; ThreadWithSharedMutexAndConditionVariable threads[kThreadCount]; ConditionVariable cv; @@ -164,8 +153,8 @@ TEST(MultipleThreadsWithSharedSeparateConditionVariables) { { LockGuard<Mutex> lock_guard(&mutex); for (int n = 0; n < kThreadCount; ++n) { - CHECK(!threads[n].running_); - CHECK(!threads[n].finished_); + EXPECT_FALSE(threads[n].running_); + EXPECT_FALSE(threads[n].finished_); threads[n].Start(); } } @@ -184,8 +173,8 @@ TEST(MultipleThreadsWithSharedSeparateConditionVariables) { { LockGuard<Mutex> lock_guard(&mutex); for (int n = 0; n < kThreadCount; ++n) { - CHECK(threads[n].running_); - CHECK(!threads[n].finished_); + EXPECT_TRUE(threads[n].running_); + EXPECT_FALSE(threads[n].finished_); } } @@ -193,8 +182,8 @@ TEST(MultipleThreadsWithSharedSeparateConditionVariables) { { LockGuard<Mutex> lock_guard(&mutex); for (int n = kThreadCount - 1; n >= 0; --n) { - CHECK(threads[n].running_); - CHECK(!threads[n].finished_); + EXPECT_TRUE(threads[n].running_); + EXPECT_FALSE(threads[n].finished_); // Tell the nth thread to quit. 
threads[n].running_ = false; } @@ -215,8 +204,8 @@ TEST(MultipleThreadsWithSharedSeparateConditionVariables) { { LockGuard<Mutex> lock_guard(&mutex); for (int n = kThreadCount - 1; n >= 0; --n) { - CHECK(!threads[n].running_); - CHECK(threads[n].finished_); + EXPECT_FALSE(threads[n].running_); + EXPECT_TRUE(threads[n].finished_); } } @@ -227,18 +216,21 @@ TEST(MultipleThreadsWithSharedSeparateConditionVariables) { } +namespace { + class LoopIncrementThread V8_FINAL : public Thread { public: - LoopIncrementThread(int rem, - int* counter, - int limit, - int thread_count, - ConditionVariable* cv, - Mutex* mutex) - : Thread("LoopIncrementThread"), rem_(rem), counter_(counter), - limit_(limit), thread_count_(thread_count), cv_(cv), mutex_(mutex) { - CHECK_LT(rem, thread_count); - CHECK_EQ(0, limit % thread_count); + LoopIncrementThread(int rem, int* counter, int limit, int thread_count, + ConditionVariable* cv, Mutex* mutex) + : Thread(Options("LoopIncrementThread")), + rem_(rem), + counter_(counter), + limit_(limit), + thread_count_(thread_count), + cv_(cv), + mutex_(mutex) { + EXPECT_LT(rem, thread_count); + EXPECT_EQ(0, limit % thread_count); } virtual void Run() V8_OVERRIDE { @@ -251,9 +243,9 @@ class LoopIncrementThread V8_FINAL : public Thread { count = *counter_; } if (count >= limit_) break; - CHECK_EQ(*counter_, count); + EXPECT_EQ(*counter_, count); if (last_count != -1) { - CHECK_EQ(last_count + (thread_count_ - 1), count); + EXPECT_EQ(last_count + (thread_count_ - 1), count); } count++; *counter_ = count; @@ -271,13 +263,15 @@ class LoopIncrementThread V8_FINAL : public Thread { Mutex* mutex_; }; +} + -TEST(LoopIncrement) { +TEST(ConditionVariable, LoopIncrement) { static const int kMaxThreadCount = 16; Mutex mutex; ConditionVariable cv; for (int thread_count = 1; thread_count < kMaxThreadCount; ++thread_count) { - int limit = thread_count * 100; + int limit = thread_count * 10; int counter = 0; // Setup the threads. @@ -299,6 +293,9 @@ TEST(LoopIncrement) { } delete[] threads; - CHECK_EQ(limit, counter); + EXPECT_EQ(limit, counter); } } + +} // namespace base +} // namespace v8 diff --git a/deps/v8/test/base-unittests/platform/mutex-unittest.cc b/deps/v8/test/base-unittests/platform/mutex-unittest.cc new file mode 100644 index 00000000000..5af5efb5a9d --- /dev/null +++ b/deps/v8/test/base-unittests/platform/mutex-unittest.cc @@ -0,0 +1,91 @@ +// Copyright 2014 the V8 project authors. All rights reserved. +// Use of this source code is governed by a BSD-style license that can be +// found in the LICENSE file. 
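// Editorial aside (illustrative, not part of the upstream patch): the tests
// in this new file exercise v8::base's RAII locking wrapper. The pattern
// under test, in miniature:
//
//   Mutex mutex;
//   {
//     LockGuard<Mutex> guard(&mutex);  // acquires the lock in the ctor
//   }                                  // releases it in the dtor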
+ +#include "src/base/platform/mutex.h" + +#include "testing/gtest/include/gtest/gtest.h" + +namespace v8 { +namespace base { + +TEST(Mutex, LockGuardMutex) { + Mutex mutex; + { LockGuard<Mutex> lock_guard(&mutex); } + { LockGuard<Mutex> lock_guard(&mutex); } +} + + +TEST(Mutex, LockGuardRecursiveMutex) { + RecursiveMutex recursive_mutex; + { LockGuard<RecursiveMutex> lock_guard(&recursive_mutex); } + { + LockGuard<RecursiveMutex> lock_guard1(&recursive_mutex); + LockGuard<RecursiveMutex> lock_guard2(&recursive_mutex); + } +} + + +TEST(Mutex, LockGuardLazyMutex) { + LazyMutex lazy_mutex = LAZY_MUTEX_INITIALIZER; + { LockGuard<Mutex> lock_guard(lazy_mutex.Pointer()); } + { LockGuard<Mutex> lock_guard(lazy_mutex.Pointer()); } +} + + +TEST(Mutex, LockGuardLazyRecursiveMutex) { + LazyRecursiveMutex lazy_recursive_mutex = LAZY_RECURSIVE_MUTEX_INITIALIZER; + { LockGuard<RecursiveMutex> lock_guard(lazy_recursive_mutex.Pointer()); } + { + LockGuard<RecursiveMutex> lock_guard1(lazy_recursive_mutex.Pointer()); + LockGuard<RecursiveMutex> lock_guard2(lazy_recursive_mutex.Pointer()); + } +} + + +TEST(Mutex, MultipleMutexes) { + Mutex mutex1; + Mutex mutex2; + Mutex mutex3; + // Order 1 + mutex1.Lock(); + mutex2.Lock(); + mutex3.Lock(); + mutex1.Unlock(); + mutex2.Unlock(); + mutex3.Unlock(); + // Order 2 + mutex1.Lock(); + mutex2.Lock(); + mutex3.Lock(); + mutex3.Unlock(); + mutex2.Unlock(); + mutex1.Unlock(); +} + + +TEST(Mutex, MultipleRecursiveMutexes) { + RecursiveMutex recursive_mutex1; + RecursiveMutex recursive_mutex2; + // Order 1 + recursive_mutex1.Lock(); + recursive_mutex2.Lock(); + EXPECT_TRUE(recursive_mutex1.TryLock()); + EXPECT_TRUE(recursive_mutex2.TryLock()); + recursive_mutex1.Unlock(); + recursive_mutex1.Unlock(); + recursive_mutex2.Unlock(); + recursive_mutex2.Unlock(); + // Order 2 + recursive_mutex1.Lock(); + EXPECT_TRUE(recursive_mutex1.TryLock()); + recursive_mutex2.Lock(); + EXPECT_TRUE(recursive_mutex2.TryLock()); + recursive_mutex2.Unlock(); + recursive_mutex1.Unlock(); + recursive_mutex2.Unlock(); + recursive_mutex1.Unlock(); +} + +} // namespace base +} // namespace v8 diff --git a/deps/v8/test/base-unittests/platform/platform-unittest.cc b/deps/v8/test/base-unittests/platform/platform-unittest.cc new file mode 100644 index 00000000000..3530ff8073a --- /dev/null +++ b/deps/v8/test/base-unittests/platform/platform-unittest.cc @@ -0,0 +1,115 @@ +// Copyright 2014 the V8 project authors. All rights reserved. +// Use of this source code is governed by a BSD-style license that can be +// found in the LICENSE file. 
+ +#include "src/base/platform/platform.h" + +#if V8_OS_POSIX +#include <unistd.h> // NOLINT +#endif + +#if V8_OS_WIN +#include "src/base/win32-headers.h" +#endif +#include "testing/gtest/include/gtest/gtest.h" + +namespace v8 { +namespace base { + +TEST(OS, GetCurrentProcessId) { +#if V8_OS_POSIX + EXPECT_EQ(static_cast<int>(getpid()), OS::GetCurrentProcessId()); +#endif + +#if V8_OS_WIN + EXPECT_EQ(static_cast<int>(::GetCurrentProcessId()), + OS::GetCurrentProcessId()); +#endif +} + + +TEST(OS, NumberOfProcessorsOnline) { + EXPECT_GT(OS::NumberOfProcessorsOnline(), 0); +} + + +namespace { + +class SelfJoinThread V8_FINAL : public Thread { + public: + SelfJoinThread() : Thread(Options("SelfJoinThread")) {} + virtual void Run() V8_OVERRIDE { Join(); } +}; + +} + + +TEST(Thread, SelfJoin) { + SelfJoinThread thread; + thread.Start(); + thread.Join(); +} + + +namespace { + +class ThreadLocalStorageTest : public Thread, public ::testing::Test { + public: + ThreadLocalStorageTest() : Thread(Options("ThreadLocalStorageTest")) { + for (size_t i = 0; i < ARRAY_SIZE(keys_); ++i) { + keys_[i] = Thread::CreateThreadLocalKey(); + } + } + ~ThreadLocalStorageTest() { + for (size_t i = 0; i < ARRAY_SIZE(keys_); ++i) { + Thread::DeleteThreadLocalKey(keys_[i]); + } + } + + virtual void Run() V8_FINAL V8_OVERRIDE { + for (size_t i = 0; i < ARRAY_SIZE(keys_); i++) { + CHECK(!Thread::HasThreadLocal(keys_[i])); + } + for (size_t i = 0; i < ARRAY_SIZE(keys_); i++) { + Thread::SetThreadLocal(keys_[i], GetValue(i)); + } + for (size_t i = 0; i < ARRAY_SIZE(keys_); i++) { + CHECK(Thread::HasThreadLocal(keys_[i])); + } + for (size_t i = 0; i < ARRAY_SIZE(keys_); i++) { + CHECK_EQ(GetValue(i), Thread::GetThreadLocal(keys_[i])); + CHECK_EQ(GetValue(i), Thread::GetExistingThreadLocal(keys_[i])); + } + for (size_t i = 0; i < ARRAY_SIZE(keys_); i++) { + Thread::SetThreadLocal(keys_[i], GetValue(ARRAY_SIZE(keys_) - i - 1)); + } + for (size_t i = 0; i < ARRAY_SIZE(keys_); i++) { + CHECK(Thread::HasThreadLocal(keys_[i])); + } + for (size_t i = 0; i < ARRAY_SIZE(keys_); i++) { + CHECK_EQ(GetValue(ARRAY_SIZE(keys_) - i - 1), + Thread::GetThreadLocal(keys_[i])); + CHECK_EQ(GetValue(ARRAY_SIZE(keys_) - i - 1), + Thread::GetExistingThreadLocal(keys_[i])); + } + } + + private: + static void* GetValue(size_t x) { + return reinterpret_cast<void*>(static_cast<uintptr_t>(x + 1)); + } + + Thread::LocalStorageKey keys_[256]; +}; + +} + + +TEST_F(ThreadLocalStorageTest, DoTest) { + Run(); + Start(); + Join(); +} + +} // namespace base +} // namespace v8 diff --git a/deps/v8/test/base-unittests/platform/time-unittest.cc b/deps/v8/test/base-unittests/platform/time-unittest.cc new file mode 100644 index 00000000000..409323a8d60 --- /dev/null +++ b/deps/v8/test/base-unittests/platform/time-unittest.cc @@ -0,0 +1,186 @@ +// Copyright 2014 the V8 project authors. All rights reserved. +// Use of this source code is governed by a BSD-style license that can be +// found in the LICENSE file. 
+ +#include "src/base/platform/time.h" + +#if V8_OS_MACOSX +#include <mach/mach_time.h> +#endif +#if V8_OS_POSIX +#include <sys/time.h> +#endif + +#if V8_OS_WIN +#include "src/base/win32-headers.h" +#endif + +#include "src/base/platform/elapsed-timer.h" +#include "testing/gtest/include/gtest/gtest.h" + +namespace v8 { +namespace base { + +TEST(TimeDelta, FromAndIn) { + EXPECT_EQ(TimeDelta::FromDays(2), TimeDelta::FromHours(48)); + EXPECT_EQ(TimeDelta::FromHours(3), TimeDelta::FromMinutes(180)); + EXPECT_EQ(TimeDelta::FromMinutes(2), TimeDelta::FromSeconds(120)); + EXPECT_EQ(TimeDelta::FromSeconds(2), TimeDelta::FromMilliseconds(2000)); + EXPECT_EQ(TimeDelta::FromMilliseconds(2), TimeDelta::FromMicroseconds(2000)); + EXPECT_EQ(static_cast<int>(13), TimeDelta::FromDays(13).InDays()); + EXPECT_EQ(static_cast<int>(13), TimeDelta::FromHours(13).InHours()); + EXPECT_EQ(static_cast<int>(13), TimeDelta::FromMinutes(13).InMinutes()); + EXPECT_EQ(static_cast<int64_t>(13), TimeDelta::FromSeconds(13).InSeconds()); + EXPECT_DOUBLE_EQ(13.0, TimeDelta::FromSeconds(13).InSecondsF()); + EXPECT_EQ(static_cast<int64_t>(13), + TimeDelta::FromMilliseconds(13).InMilliseconds()); + EXPECT_DOUBLE_EQ(13.0, TimeDelta::FromMilliseconds(13).InMillisecondsF()); + EXPECT_EQ(static_cast<int64_t>(13), + TimeDelta::FromMicroseconds(13).InMicroseconds()); +} + + +#if V8_OS_MACOSX +TEST(TimeDelta, MachTimespec) { + TimeDelta null = TimeDelta(); + EXPECT_EQ(null, TimeDelta::FromMachTimespec(null.ToMachTimespec())); + TimeDelta delta1 = TimeDelta::FromMilliseconds(42); + EXPECT_EQ(delta1, TimeDelta::FromMachTimespec(delta1.ToMachTimespec())); + TimeDelta delta2 = TimeDelta::FromDays(42); + EXPECT_EQ(delta2, TimeDelta::FromMachTimespec(delta2.ToMachTimespec())); +} +#endif + + +TEST(Time, JsTime) { + Time t = Time::FromJsTime(700000.3); + EXPECT_DOUBLE_EQ(700000.3, t.ToJsTime()); +} + + +#if V8_OS_POSIX +TEST(Time, Timespec) { + Time null; + EXPECT_TRUE(null.IsNull()); + EXPECT_EQ(null, Time::FromTimespec(null.ToTimespec())); + Time now = Time::Now(); + EXPECT_EQ(now, Time::FromTimespec(now.ToTimespec())); + Time now_sys = Time::NowFromSystemTime(); + EXPECT_EQ(now_sys, Time::FromTimespec(now_sys.ToTimespec())); + Time unix_epoch = Time::UnixEpoch(); + EXPECT_EQ(unix_epoch, Time::FromTimespec(unix_epoch.ToTimespec())); + Time max = Time::Max(); + EXPECT_TRUE(max.IsMax()); + EXPECT_EQ(max, Time::FromTimespec(max.ToTimespec())); +} + + +TEST(Time, Timeval) { + Time null; + EXPECT_TRUE(null.IsNull()); + EXPECT_EQ(null, Time::FromTimeval(null.ToTimeval())); + Time now = Time::Now(); + EXPECT_EQ(now, Time::FromTimeval(now.ToTimeval())); + Time now_sys = Time::NowFromSystemTime(); + EXPECT_EQ(now_sys, Time::FromTimeval(now_sys.ToTimeval())); + Time unix_epoch = Time::UnixEpoch(); + EXPECT_EQ(unix_epoch, Time::FromTimeval(unix_epoch.ToTimeval())); + Time max = Time::Max(); + EXPECT_TRUE(max.IsMax()); + EXPECT_EQ(max, Time::FromTimeval(max.ToTimeval())); +} +#endif + + +#if V8_OS_WIN +TEST(Time, Filetime) { + Time null; + EXPECT_TRUE(null.IsNull()); + EXPECT_EQ(null, Time::FromFiletime(null.ToFiletime())); + Time now = Time::Now(); + EXPECT_EQ(now, Time::FromFiletime(now.ToFiletime())); + Time now_sys = Time::NowFromSystemTime(); + EXPECT_EQ(now_sys, Time::FromFiletime(now_sys.ToFiletime())); + Time unix_epoch = Time::UnixEpoch(); + EXPECT_EQ(unix_epoch, Time::FromFiletime(unix_epoch.ToFiletime())); + Time max = Time::Max(); + EXPECT_TRUE(max.IsMax()); + EXPECT_EQ(max, Time::FromFiletime(max.ToFiletime())); +} +#endif + + +namespace { 
+ +template <typename T> +static void ResolutionTest(T (*Now)(), TimeDelta target_granularity) { + // We're trying to measure that intervals increment in a VERY small amount + // of time -- according to the specified target granularity. Unfortunately, + // if we happen to have a context switch in the middle of our test, the + // context switch could easily exceed our limit. So, we iterate on this + // several times. As long as we're able to detect the fine-granularity + // timers at least once, then the test has succeeded. + static const TimeDelta kExpirationTimeout = TimeDelta::FromSeconds(1); + ElapsedTimer timer; + timer.Start(); + TimeDelta delta; + do { + T start = Now(); + T now = start; + // Loop until we can detect that the clock has changed. Non-HighRes timers + // will increment in chunks, i.e. 15ms. By spinning until we see a clock + // change, we detect the minimum time between measurements. + do { + now = Now(); + delta = now - start; + } while (now <= start); + EXPECT_NE(static_cast<int64_t>(0), delta.InMicroseconds()); + } while (delta > target_granularity && !timer.HasExpired(kExpirationTimeout)); + EXPECT_LE(delta, target_granularity); +} + +} + + +TEST(Time, NowResolution) { + // We assume that Time::Now() has at least 16ms resolution. + static const TimeDelta kTargetGranularity = TimeDelta::FromMilliseconds(16); + ResolutionTest<Time>(&Time::Now, kTargetGranularity); +} + + +TEST(TimeTicks, NowResolution) { + // We assume that TimeTicks::Now() has at least 16ms resolution. + static const TimeDelta kTargetGranularity = TimeDelta::FromMilliseconds(16); + ResolutionTest<TimeTicks>(&TimeTicks::Now, kTargetGranularity); +} + + +TEST(TimeTicks, HighResolutionNowResolution) { + if (!TimeTicks::IsHighResolutionClockWorking()) return; + + // We assume that TimeTicks::HighResolutionNow() has sub-ms resolution. + static const TimeDelta kTargetGranularity = TimeDelta::FromMilliseconds(1); + ResolutionTest<TimeTicks>(&TimeTicks::HighResolutionNow, kTargetGranularity); +} + + +TEST(TimeTicks, IsMonotonic) { + TimeTicks previous_normal_ticks; + TimeTicks previous_highres_ticks; + ElapsedTimer timer; + timer.Start(); + while (!timer.HasExpired(TimeDelta::FromMilliseconds(100))) { + TimeTicks normal_ticks = TimeTicks::Now(); + TimeTicks highres_ticks = TimeTicks::HighResolutionNow(); + EXPECT_GE(normal_ticks, previous_normal_ticks); + EXPECT_GE((normal_ticks - previous_normal_ticks).InMicroseconds(), 0); + EXPECT_GE(highres_ticks, previous_highres_ticks); + EXPECT_GE((highres_ticks - previous_highres_ticks).InMicroseconds(), 0); + previous_normal_ticks = normal_ticks; + previous_highres_ticks = highres_ticks; + } +} + +} // namespace base +} // namespace v8 diff --git a/deps/v8/test/base-unittests/testcfg.py b/deps/v8/test/base-unittests/testcfg.py new file mode 100644 index 00000000000..0ed46dcdb11 --- /dev/null +++ b/deps/v8/test/base-unittests/testcfg.py @@ -0,0 +1,51 @@ +# Copyright 2014 the V8 project authors. All rights reserved. +# Use of this source code is governed by a BSD-style license that can be +# found in the LICENSE file. 
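# Editorial aside (not part of the upstream patch): ListTests() below drives
# the test binary with --gtest_list_tests, whose output interleaves suite
# names ending in '.' with indented test names, e.g.:
#
#   ConditionVariable.
#     LoopIncrement
#   Mutex.
#     LockGuardMutex
#
# Whitespace-splitting that output and re-attaching each name to the most
# recent trailing-'.' entry yields fully qualified 'Suite.Test' paths for
# --gtest_filter.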
+ +import os +import shutil + +from testrunner.local import commands +from testrunner.local import testsuite +from testrunner.local import utils +from testrunner.objects import testcase + + +class BaseUnitTestsSuite(testsuite.TestSuite): + def __init__(self, name, root): + super(BaseUnitTestsSuite, self).__init__(name, root) + + def ListTests(self, context): + shell = os.path.abspath(os.path.join(context.shell_dir, self.shell())) + if utils.IsWindows(): + shell += ".exe" + output = commands.Execute(context.command_prefix + + [shell, "--gtest_list_tests"] + + context.extra_flags) + if output.exit_code != 0: + print output.stdout + print output.stderr + return [] + tests = [] + test_case = '' + for test_desc in output.stdout.strip().split(): + if test_desc.endswith('.'): + test_case = test_desc + else: + test = testcase.TestCase(self, test_case + test_desc, dependency=None) + tests.append(test) + tests.sort() + return tests + + def GetFlagsForTestCase(self, testcase, context): + return (testcase.flags + ["--gtest_filter=" + testcase.path] + + ["--gtest_random_seed=%s" % context.random_seed] + + ["--gtest_print_time=0"] + + context.mode_flags) + + def shell(self): + return "base-unittests" + + +def GetSuite(name, root): + return BaseUnitTestsSuite(name, root) diff --git a/deps/v8/test/base-unittests/utils/random-number-generator-unittest.cc b/deps/v8/test/base-unittests/utils/random-number-generator-unittest.cc new file mode 100644 index 00000000000..7c533db4f07 --- /dev/null +++ b/deps/v8/test/base-unittests/utils/random-number-generator-unittest.cc @@ -0,0 +1,53 @@ +// Copyright 2014 the V8 project authors. All rights reserved. +// Use of this source code is governed by a BSD-style license that can be +// found in the LICENSE file. + +#include <climits> + +#include "src/base/utils/random-number-generator.h" +#include "testing/gtest/include/gtest/gtest.h" + +namespace v8 { +namespace base { + +class RandomNumberGeneratorTest : public ::testing::TestWithParam<int> {}; + + +static const int kMaxRuns = 12345; + + +TEST_P(RandomNumberGeneratorTest, NextIntWithMaxValue) { + RandomNumberGenerator rng(GetParam()); + for (int max = 1; max <= kMaxRuns; ++max) { + int n = rng.NextInt(max); + EXPECT_LE(0, n); + EXPECT_LT(n, max); + } +} + + +TEST_P(RandomNumberGeneratorTest, NextBooleanReturnsFalseOrTrue) { + RandomNumberGenerator rng(GetParam()); + for (int k = 0; k < kMaxRuns; ++k) { + bool b = rng.NextBool(); + EXPECT_TRUE(b == false || b == true); + } +} + + +TEST_P(RandomNumberGeneratorTest, NextDoubleReturnsValueBetween0And1) { + RandomNumberGenerator rng(GetParam()); + for (int k = 0; k < kMaxRuns; ++k) { + double d = rng.NextDouble(); + EXPECT_LE(0.0, d); + EXPECT_LT(d, 1.0); + } +} + + +INSTANTIATE_TEST_CASE_P(RandomSeeds, RandomNumberGeneratorTest, + ::testing::Values(INT_MIN, -1, 0, 1, 42, 100, + 1234567890, 987654321, INT_MAX)); + +} // namespace base +} // namespace v8 diff --git a/deps/v8/test/benchmarks/benchmarks.status b/deps/v8/test/benchmarks/benchmarks.status index d651b3c0f01..1afd5eca247 100644 --- a/deps/v8/test/benchmarks/benchmarks.status +++ b/deps/v8/test/benchmarks/benchmarks.status @@ -25,9 +25,11 @@ # (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE # OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. -# Too slow in Debug mode. [ [ALWAYS, { + # Too slow in Debug mode. 'octane/mandreel': [PASS, ['mode == debug', SKIP]], + # TODO(mstarzinger,ishell): Timeout with TF in predictable mode. 
+ 'octane/richards': [PASS, NO_VARIANTS], }], # ALWAYS ] diff --git a/deps/v8/test/benchmarks/testcfg.py b/deps/v8/test/benchmarks/testcfg.py index c94a35ffd9f..8c573ba30b1 100644 --- a/deps/v8/test/benchmarks/testcfg.py +++ b/deps/v8/test/benchmarks/testcfg.py @@ -31,6 +31,7 @@ import subprocess import tarfile +from testrunner.local import statusfile from testrunner.local import testsuite from testrunner.objects import testcase @@ -183,8 +184,12 @@ def DownloadData(self): os.chdir(old_cwd) def VariantFlags(self, testcase, default_flags): - # Both --nocrankshaft and --stressopt are very slow. - return [[]] + if testcase.outcomes and statusfile.OnlyStandardVariant(testcase.outcomes): + return [[]] + # Both --nocrankshaft and --stressopt are very slow. Add TF but without + # always opt to match the way the benchmarks are run for performance + # testing. + return [[], ["--turbo-filter=*"]] def GetSuite(name, root): diff --git a/deps/v8/test/cctest/DEPS b/deps/v8/test/cctest/DEPS new file mode 100644 index 00000000000..3e73aa244ff --- /dev/null +++ b/deps/v8/test/cctest/DEPS @@ -0,0 +1,3 @@ +include_rules = [ + "+src", +] diff --git a/deps/v8/test/cctest/cctest.cc b/deps/v8/test/cctest/cctest.cc index b1cf5abb4e6..2bb08b0ec94 100644 --- a/deps/v8/test/cctest/cctest.cc +++ b/deps/v8/test/cctest/cctest.cc @@ -25,13 +25,14 @@ // (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE // OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. -#include <v8.h> -#include "cctest.h" +#include "include/v8.h" +#include "test/cctest/cctest.h" -#include "print-extension.h" -#include "profiler-extension.h" -#include "trace-extension.h" -#include "debug.h" +#include "include/libplatform/libplatform.h" +#include "src/debug.h" +#include "test/cctest/print-extension.h" +#include "test/cctest/profiler-extension.h" +#include "test/cctest/trace-extension.h" enum InitializationState {kUnset, kUnintialized, kInitialized}; static InitializationState initialization_state_ = kUnset; @@ -138,7 +139,8 @@ static void SuggestTestHarness(int tests) { int main(int argc, char* argv[]) { v8::V8::InitializeICU(); - i::Isolate::SetCrashIfDefaultIsolateInitialized(); + v8::Platform* platform = v8::platform::CreateDefaultPlatform(); + v8::V8::InitializePlatform(platform); v8::internal::FlagList::SetFlagsFromCommandLine(&argc, argv, true); @@ -157,6 +159,10 @@ int main(int argc, char* argv[]) { for (int i = 1; i < argc; i++) { char* arg = argv[i]; if (strcmp(arg, "--list") == 0) { + // TODO(svenpanne) Serializer::enabled() and Serializer::code_address_map_ + // are fundamentally broken, so we can't unconditionally initialize and + // dispose V8. + v8::V8::Initialize(); PrintTestList(CcTest::last()); print_run_count = false; @@ -200,7 +206,10 @@ int main(int argc, char* argv[]) { if (print_run_count && tests_run != 1) printf("Ran %i tests.\n", tests_run); CcTest::TearDown(); - if (!disable_automatic_dispose_) v8::V8::Dispose(); + // TODO(svenpanne) See comment above. 
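The rewritten cctest main() above follows the embedder startup and shutdown order this V8 version introduces: create a platform, register it, initialize V8, then tear everything down in reverse. A minimal sketch of that lifecycle using only calls that appear in this patch (the explicit Dispose() is the one step cctest itself still skips, per the serializer TODO above):

#include "include/libplatform/libplatform.h"
#include "include/v8.h"

int main() {
  v8::V8::InitializeICU();
  v8::Platform* platform = v8::platform::CreateDefaultPlatform();
  v8::V8::InitializePlatform(platform);  // must precede V8::Initialize()
  v8::V8::Initialize();

  // ... create isolates and contexts, run scripts ...

  v8::V8::Dispose();            // cctest skips this, see the TODO above
  v8::V8::ShutdownPlatform();
  delete platform;              // the platform outlives V8 proper
  return 0;
}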
+ // if (!disable_automatic_dispose_) v8::V8::Dispose(); + v8::V8::ShutdownPlatform(); + delete platform; return 0; } diff --git a/deps/v8/test/cctest/cctest.gyp b/deps/v8/test/cctest/cctest.gyp index 745b4c51d88..42946f5bfe7 100644 --- a/deps/v8/test/cctest/cctest.gyp +++ b/deps/v8/test/cctest/cctest.gyp @@ -37,12 +37,53 @@ 'type': 'executable', 'dependencies': [ 'resources', + '../../tools/gyp/v8.gyp:v8_libplatform', ], 'include_dirs': [ - '../../src', + '../..', ], 'sources': [ ### gcmole(all) ### '<(generated_file)', + 'compiler/codegen-tester.cc', + 'compiler/codegen-tester.h', + 'compiler/function-tester.h', + 'compiler/graph-builder-tester.cc', + 'compiler/graph-builder-tester.h', + 'compiler/graph-tester.h', + 'compiler/simplified-graph-builder.cc', + 'compiler/simplified-graph-builder.h', + 'compiler/test-branch-combine.cc', + 'compiler/test-changes-lowering.cc', + 'compiler/test-codegen-deopt.cc', + 'compiler/test-gap-resolver.cc', + 'compiler/test-graph-reducer.cc', + 'compiler/test-instruction-selector.cc', + 'compiler/test-instruction.cc', + 'compiler/test-js-context-specialization.cc', + 'compiler/test-js-constant-cache.cc', + 'compiler/test-js-typed-lowering.cc', + 'compiler/test-linkage.cc', + 'compiler/test-machine-operator-reducer.cc', + 'compiler/test-node-algorithm.cc', + 'compiler/test-node-cache.cc', + 'compiler/test-node.cc', + 'compiler/test-operator.cc', + 'compiler/test-phi-reducer.cc', + 'compiler/test-pipeline.cc', + 'compiler/test-representation-change.cc', + 'compiler/test-run-deopt.cc', + 'compiler/test-run-intrinsics.cc', + 'compiler/test-run-jsbranches.cc', + 'compiler/test-run-jscalls.cc', + 'compiler/test-run-jsexceptions.cc', + 'compiler/test-run-jsops.cc', + 'compiler/test-run-machops.cc', + 'compiler/test-run-variables.cc', + 'compiler/test-schedule.cc', + 'compiler/test-scheduler.cc', + 'compiler/test-simplified-lowering.cc', + 'compiler/test-structured-ifbuilder-fuzzer.cc', + 'compiler/test-structured-machine-assembler.cc', 'cctest.cc', 'gay-fixed.cc', 'gay-precision.cc', @@ -56,12 +97,11 @@ 'test-atomicops.cc', 'test-bignum.cc', 'test-bignum-dtoa.cc', + 'test-checks.cc', 'test-circular-queue.cc', 'test-compiler.cc', - 'test-condition-variable.cc', 'test-constantpool.cc', 'test-conversions.cc', - 'test-cpu.cc', 'test-cpu-profiler.cc', 'test-dataflow.cc', 'test-date.cc', @@ -77,12 +117,15 @@ 'test-fixed-dtoa.cc', 'test-flags.cc', 'test-func-name-inference.cc', + 'test-gc-tracer.cc', 'test-global-handles.cc', 'test-global-object.cc', 'test-hashing.cc', 'test-hashmap.cc', 'test-heap.cc', 'test-heap-profiler.cc', + 'test-hydrogen-types.cc', + 'test-libplatform-default-platform.cc', 'test-libplatform-task-queue.cc', 'test-libplatform-worker-thread.cc', 'test-list.cc', @@ -92,12 +135,11 @@ 'test-microtask-delivery.cc', 'test-mark-compact.cc', 'test-mementos.cc', - 'test-mutex.cc', 'test-object-observe.cc', 'test-ordered-hash-table.cc', + 'test-ostreams.cc', 'test-parsing.cc', 'test-platform.cc', - 'test-platform-tls.cc', 'test-profile-generator.cc', 'test-random-number-generator.cc', 'test-regexp.cc', @@ -105,17 +147,16 @@ 'test-representation.cc', 'test-semaphore.cc', 'test-serialize.cc', - 'test-socket.cc', 'test-spaces.cc', 'test-strings.cc', 'test-symbols.cc', 'test-strtod.cc', 'test-thread-termination.cc', 'test-threads.cc', - 'test-time.cc', 'test-types.cc', 'test-unbound-queue.cc', 'test-unique.cc', + 'test-unscopables-hidden-prototype.cc', 'test-utils.cc', 'test-version.cc', 'test-weakmaps.cc', @@ -126,10 +167,10 @@ 'conditions': [ 
['v8_target_arch=="ia32"', { 'sources': [ ### gcmole(arch:ia32) ### + 'compiler/test-instruction-selector-ia32.cc', 'test-assembler-ia32.cc', 'test-code-stubs.cc', 'test-code-stubs-ia32.cc', - 'test-cpu-ia32.cc', 'test-disasm-ia32.cc', 'test-macro-assembler-ia32.cc', 'test-log-stack-tracer.cc' @@ -140,7 +181,6 @@ 'test-assembler-x64.cc', 'test-code-stubs.cc', 'test-code-stubs-x64.cc', - 'test-cpu-x64.cc', 'test-disasm-x64.cc', 'test-macro-assembler-x64.cc', 'test-log-stack-tracer.cc' @@ -148,6 +188,7 @@ }], ['v8_target_arch=="arm"', { 'sources': [ ### gcmole(arch:arm) ### + 'compiler/test-instruction-selector-arm.cc', 'test-assembler-arm.cc', 'test-code-stubs.cc', 'test-code-stubs-arm.cc', @@ -176,14 +217,28 @@ 'test-macro-assembler-mips.cc' ], }], - [ 'OS=="linux" or OS=="qnx"', { + ['v8_target_arch=="mips64el"', { 'sources': [ - 'test-platform-linux.cc', + 'test-assembler-mips64.cc', + 'test-code-stubs.cc', + 'test-code-stubs-mips64.cc', + 'test-disasm-mips64.cc', + 'test-macro-assembler-mips64.cc' ], }], - [ 'OS=="mac"', { + ['v8_target_arch=="x87"', { + 'sources': [ ### gcmole(arch:x87) ### + 'test-assembler-x87.cc', + 'test-code-stubs.cc', + 'test-code-stubs-x87.cc', + 'test-disasm-x87.cc', + 'test-macro-assembler-x87.cc', + 'test-log-stack-tracer.cc' + ], + }], + [ 'OS=="linux" or OS=="qnx"', { 'sources': [ - 'test-platform-macos.cc', + 'test-platform-linux.cc', ], }], [ 'OS=="win"', { @@ -206,7 +261,7 @@ }, { 'dependencies': [ - '../../tools/gyp/v8.gyp:v8_nosnapshot.<(v8_target_arch)', + '../../tools/gyp/v8.gyp:v8_nosnapshot', ], }], ], diff --git a/deps/v8/test/cctest/cctest.h b/deps/v8/test/cctest/cctest.h index 36e1b96ebfc..2ab973c52dd 100644 --- a/deps/v8/test/cctest/cctest.h +++ b/deps/v8/test/cctest/cctest.h @@ -28,7 +28,9 @@ #ifndef CCTEST_H_ #define CCTEST_H_ -#include "v8.h" +#include "src/v8.h" + +#include "src/isolate-inl.h" #ifndef TEST #define TEST(Name) \ @@ -83,7 +85,6 @@ typedef v8::internal::EnumSet<CcTestExtensionIds> CcTestExtensionFlags; // Use this to expose protected methods in i::Heap. class TestHeap : public i::Heap { public: - using i::Heap::AllocateArgumentsObject; using i::Heap::AllocateByteArray; using i::Heap::AllocateFixedArray; using i::Heap::AllocateHeapNumber; @@ -113,6 +114,11 @@ class CcTest { return isolate_; } + static i::Isolate* InitIsolateOnce() { + if (!initialize_called_) InitializeVM(); + return i_isolate(); + } + static i::Isolate* i_isolate() { return reinterpret_cast<i::Isolate*>(isolate()); } @@ -125,6 +131,10 @@ class CcTest { return reinterpret_cast<TestHeap*>(i_isolate()->heap()); } + static v8::base::RandomNumberGenerator* random_number_generator() { + return InitIsolateOnce()->random_number_generator(); + } + static v8::Local<v8::Object> global() { return isolate()->GetCurrentContext()->Global(); } @@ -177,7 +187,7 @@ class CcTest { // thread fuzzing test. In the thread fuzzing test it will // pseudorandomly select a successor thread and switch execution // to that thread, suspending the current test. 
-class ApiTestFuzzer: public v8::internal::Thread { +class ApiTestFuzzer: public v8::base::Thread { public: void CallTest(); @@ -199,11 +209,10 @@ class ApiTestFuzzer: public v8::internal::Thread { private: explicit ApiTestFuzzer(int num) - : Thread("ApiTestFuzzer"), + : Thread(Options("ApiTestFuzzer")), test_number_(num), gate_(0), - active_(true) { - } + active_(true) {} ~ApiTestFuzzer() {} static bool fuzzing_; @@ -212,11 +221,11 @@ class ApiTestFuzzer: public v8::internal::Thread { static int active_tests_; static bool NextThread(); int test_number_; - v8::internal::Semaphore gate_; + v8::base::Semaphore gate_; bool active_; void ContextSwitch(); static int GetNextTestNumber(); - static v8::internal::Semaphore all_tests_done_; + static v8::base::Semaphore all_tests_done_; }; @@ -311,6 +320,15 @@ class LocalContext { v8::Isolate* isolate_; }; + +static inline uint16_t* AsciiToTwoByteString(const char* source) { + int array_length = i::StrLength(source) + 1; + uint16_t* converted = i::NewArray<uint16_t>(array_length); + for (int i = 0; i < array_length; i++) converted[i] = source[i]; + return converted; +} + + static inline v8::Local<v8::Value> v8_num(double x) { return v8::Number::New(v8::Isolate::GetCurrent(), x); } @@ -363,14 +381,20 @@ static inline v8::Local<v8::Value> CompileRun(v8::Local<v8::String> source) { } -static inline v8::Local<v8::Value> PreCompileCompileRun(const char* source) { +static inline v8::Local<v8::Value> ParserCacheCompileRun(const char* source) { // Compile once just to get the preparse data, then compile the second time // using the data. v8::Isolate* isolate = v8::Isolate::GetCurrent(); v8::ScriptCompiler::Source script_source(v8_str(source)); v8::ScriptCompiler::Compile(isolate, &script_source, - v8::ScriptCompiler::kProduceDataToCache); - return v8::ScriptCompiler::Compile(isolate, &script_source)->Run(); + v8::ScriptCompiler::kProduceParserCache); + + // Check whether we received cached data, and if so use it. + v8::ScriptCompiler::CompileOptions options = + script_source.GetCachedData() ? v8::ScriptCompiler::kConsumeParserCache + : v8::ScriptCompiler::kNoCompileOptions; + + return v8::ScriptCompiler::Compile(isolate, &script_source, options)->Run(); } @@ -403,10 +427,49 @@ static inline v8::Local<v8::Value> CompileRunWithOrigin( } -// Pick a slightly different port to allow tests to be run in parallel. -static inline int FlagDependentPortOffset() { - return ::v8::internal::FLAG_crankshaft == false ? 100 : - ::v8::internal::FLAG_always_opt ? 
200 : 0; + +static inline void ExpectString(const char* code, const char* expected) { + v8::Local<v8::Value> result = CompileRun(code); + CHECK(result->IsString()); + v8::String::Utf8Value utf8(result); + CHECK_EQ(expected, *utf8); +} + + +static inline void ExpectInt32(const char* code, int expected) { + v8::Local<v8::Value> result = CompileRun(code); + CHECK(result->IsInt32()); + CHECK_EQ(expected, result->Int32Value()); +} + + +static inline void ExpectBoolean(const char* code, bool expected) { + v8::Local<v8::Value> result = CompileRun(code); + CHECK(result->IsBoolean()); + CHECK_EQ(expected, result->BooleanValue()); +} + + +static inline void ExpectTrue(const char* code) { + ExpectBoolean(code, true); +} + + +static inline void ExpectFalse(const char* code) { + ExpectBoolean(code, false); +} + + +static inline void ExpectObject(const char* code, + v8::Local<v8::Value> expected) { + v8::Local<v8::Value> result = CompileRun(code); + CHECK(result->SameValue(expected)); +} + + +static inline void ExpectUndefined(const char* code) { + v8::Local<v8::Value> result = CompileRun(code); + CHECK(result->IsUndefined()); } @@ -431,6 +494,26 @@ static inline void SimulateFullSpace(v8::internal::PagedSpace* space) { } +// Helper function that simulates many incremental marking steps until +// marking is completed. +static inline void SimulateIncrementalMarking(i::Heap* heap) { + i::MarkCompactCollector* collector = heap->mark_compact_collector(); + i::IncrementalMarking* marking = heap->incremental_marking(); + if (collector->sweeping_in_progress()) { + collector->EnsureSweepingCompleted(); + } + CHECK(marking->IsMarking() || marking->IsStopped()); + if (marking->IsStopped()) { + marking->Start(); + } + CHECK(marking->IsMarking()); + while (!marking->IsComplete()) { + marking->Step(i::MB, i::IncrementalMarking::NO_GC_VIA_STACK_GUARD); + } + CHECK(marking->IsComplete()); +} + + // Helper class for new allocations tracking and checking. // To use checking of JS allocations tracking in a test, // just create an instance of this class. @@ -453,4 +536,30 @@ class HeapObjectsTracker { }; +class InitializedHandleScope { + public: + InitializedHandleScope() + : main_isolate_(CcTest::InitIsolateOnce()), + handle_scope_(main_isolate_) {} + + // Prefixing the below with main_ reduces a lot of naming clashes. + i::Isolate* main_isolate() { return main_isolate_; } + + private: + i::Isolate* main_isolate_; + i::HandleScope handle_scope_; +}; + + +class HandleAndZoneScope : public InitializedHandleScope { + public: + HandleAndZoneScope() : main_zone_(main_isolate()) {} + + // Prefixing the below with main_ reduces a lot of naming clashes. + i::Zone* main_zone() { return &main_zone_; } + + private: + i::Zone main_zone_; +}; + #endif // ifndef CCTEST_H_ diff --git a/deps/v8/test/cctest/cctest.status b/deps/v8/test/cctest/cctest.status index fb73f7a6daf..60baaca081c 100644 --- a/deps/v8/test/cctest/cctest.status +++ b/deps/v8/test/cctest/cctest.status @@ -31,6 +31,7 @@ 'test-api/Bug*': [FAIL], ############################################################################## + # BUG(382): Weird test. Can't guarantee that it never times out. 'test-api/ApplyInterruption': [PASS, TIMEOUT], @@ -67,14 +68,102 @@ # This tests only the type system, so there is no point in running several # variants. + 'test-hydrogen-types/*': [PASS, NO_VARIANTS], 'test-types/*': [PASS, NO_VARIANTS], - # BUG(2999). - 'test-cpu-profiler/CollectCpuProfile': [PASS, FLAKY], - # BUG(3287). 
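The ExpectString/ExpectInt32/ExpectBoolean helpers above all share one shape: evaluate a snippet, check the result's type, then compare its value, so each test expectation reads as a single line. A sketch of that shape with a stand-in Evaluate() (an assumption for illustration, not V8's CompileRun):

#include <cassert>
#include <string>

// Stand-in for CompileRun(): pretend every snippet evaluates to its length.
static int Evaluate(const std::string& source) {
  return static_cast<int>(source.size());
}

// Mirrors the shape of ExpectInt32 above: evaluate, then assert on the value.
static void ExpectInt(const std::string& code, int expected) {
  int result = Evaluate(code);
  assert(result == expected);  // CHECK_EQ in the cctest versions
  (void)result;
}

int main() {
  ExpectInt("1+2", 3);  // the stand-in returns the length of "1+2", which is 3
  return 0;
}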
- 'test-cpu-profiler/SampleWhenFrameIsNotSetup': [PASS, FLAKY], - # BUG(3308). - 'test-cpu-profiler/JsNativeJsRuntimeJsSample': [PASS, FLAKY], + # The cpu profiler tests are notoriously flaky. + # BUG(2999). (test/cpu-profiler/CollectCpuProfile) + # BUG(3287). (test-cpu-profiler/SampleWhenFrameIsNotSetup) + 'test-cpu-profiler/*': [PASS, FLAKY], + + ############################################################################## + # TurboFan compiler failures. + + # TODO(mstarzinger): These need investigation and are not categorized yet. + 'test-cpu-profiler/*': [SKIP], + 'test-heap/NextCodeLinkIsWeak': [PASS, NO_VARIANTS], + + # TODO(mstarzinger/verwaest): This access check API is borked. + 'test-api/TurnOnAccessCheck': [PASS, NO_VARIANTS], + 'test-api/TurnOnAccessCheckAndRecompile': [PASS, NO_VARIANTS], + + # TODO(mstarzinger): Sometimes the try-catch blacklist fails. + 'test-debug/DebugEvaluateWithoutStack': [PASS, NO_VARIANTS], + 'test-debug/MessageQueues': [PASS, NO_VARIANTS], + 'test-debug/NestedBreakEventContextData': [PASS, NO_VARIANTS], + 'test-debug/SendClientDataToHandler': [PASS, NO_VARIANTS], + + # TODO(dcarney): C calls are broken all over the place. + 'test-run-machops/RunCall*': [SKIP], + 'test-run-machops/RunLoadImmIndex': [SKIP], + 'test-run-machops/RunSpillLotsOfThingsWithCall': [SKIP], + + # Some tests are just too slow to run for now. + 'test-api/Threading*': [PASS, NO_VARIANTS], + 'test-api/RequestInterruptTestWithMathAbs': [PASS, NO_VARIANTS], + 'test-heap/IncrementalMarkingStepMakesBigProgressWithLargeObjects': [PASS, NO_VARIANTS], + 'test-heap-profiler/ManyLocalsInSharedContext': [PASS, NO_VARIANTS], + 'test-debug/ThreadedDebugging': [PASS, NO_VARIANTS], + 'test-debug/DebugBreakLoop': [PASS, NO_VARIANTS], + + # Support for lazy deoptimization is missing. + 'test-deoptimization/DeoptimizeCompare': [PASS, NO_VARIANTS], + + # Support for breakpoints requires using LoadICs and StoreICs. 
+ 'test-debug/BreakPointICStore': [PASS, NO_VARIANTS], + 'test-debug/BreakPointICLoad': [PASS, NO_VARIANTS], + 'test-debug/BreakPointICCall': [PASS, NO_VARIANTS], + 'test-debug/BreakPointICCallWithGC': [PASS, NO_VARIANTS], + 'test-debug/BreakPointConstructCallWithGC': [PASS, NO_VARIANTS], + 'test-debug/BreakPointReturn': [PASS, NO_VARIANTS], + 'test-debug/BreakPointThroughJavaScript': [PASS, NO_VARIANTS], + 'test-debug/ScriptBreakPointByNameThroughJavaScript': [PASS, NO_VARIANTS], + 'test-debug/ScriptBreakPointByIdThroughJavaScript': [PASS, NO_VARIANTS], + 'test-debug/DebugStepLinear': [PASS, NO_VARIANTS], + 'test-debug/DebugStepKeyedLoadLoop': [PASS, NO_VARIANTS], + 'test-debug/DebugStepKeyedStoreLoop': [PASS, NO_VARIANTS], + 'test-debug/DebugStepNamedLoadLoop': [PASS, NO_VARIANTS], + 'test-debug/DebugStepNamedStoreLoop': [PASS, NO_VARIANTS], + 'test-debug/DebugStepLinearMixedICs': [PASS, NO_VARIANTS], + 'test-debug/DebugStepDeclarations': [PASS, NO_VARIANTS], + 'test-debug/DebugStepLocals': [PASS, NO_VARIANTS], + 'test-debug/DebugStepIf': [PASS, NO_VARIANTS], + 'test-debug/DebugStepSwitch': [PASS, NO_VARIANTS], + 'test-debug/DebugStepWhile': [PASS, NO_VARIANTS], + 'test-debug/DebugStepDoWhile': [PASS, NO_VARIANTS], + 'test-debug/DebugStepFor': [PASS, NO_VARIANTS], + 'test-debug/DebugStepForContinue': [PASS, NO_VARIANTS], + 'test-debug/DebugStepForBreak': [PASS, NO_VARIANTS], + 'test-debug/DebugStepForIn': [PASS, NO_VARIANTS], + 'test-debug/DebugStepWith': [PASS, NO_VARIANTS], + 'test-debug/DebugConditional': [PASS, NO_VARIANTS], + 'test-debug/StepInOutSimple': [PASS, NO_VARIANTS], + 'test-debug/StepInOutTree': [PASS, NO_VARIANTS], + 'test-debug/StepInOutBranch': [PASS, NO_VARIANTS], + 'test-debug/DebugBreak': [PASS, NO_VARIANTS], + 'test-debug/DebugBreakStackInspection': [PASS, NO_VARIANTS], + 'test-debug/BreakMessageWhenMessageHandlerIsReset': [PASS, NO_VARIANTS], + 'test-debug/NoDebugBreakInAfterCompileMessageHandler': [PASS, NO_VARIANTS], + 'test-debug/DisableBreak': [PASS, NO_VARIANTS], + 'test-debug/RegExpDebugBreak': [PASS, NO_VARIANTS], + 'test-debug/DebugBreakFunctionApply': [PASS, NO_VARIANTS], + 'test-debug/DeoptimizeDuringDebugBreak': [PASS, NO_VARIANTS], + + # Support for %GetFrameDetails is missing and requires checkpoints. + 'test-api/Regress385349': [PASS, NO_VARIANTS], + 'test-debug/DebuggerStatement': [PASS, NO_VARIANTS], + 'test-debug/DebuggerStatementBreakpoint': [PASS, NO_VARIANTS], + 'test-debug/DebugEvaluateWithCodeGenerationDisallowed': [PASS, NO_VARIANTS], + 'test-debug/DebugStepNatives': [PASS, NO_VARIANTS], + 'test-debug/DebugStepFunctionCall': [PASS, NO_VARIANTS], + 'test-debug/DebugStepFunctionApply': [PASS, NO_VARIANTS], + 'test-debug/ScriptNameAndData': [PASS, NO_VARIANTS], + 'test-debug/ContextData': [PASS, NO_VARIANTS], + 'test-debug/DebugBreakInMessageHandler': [PASS, NO_VARIANTS], + 'test-debug/CallFunctionInDebugger': [PASS, NO_VARIANTS], + 'test-debug/CallingContextIsNotDebugContext': [PASS, NO_VARIANTS], + 'test-debug/DebugEventContext': [PASS, NO_VARIANTS], + 'test-debug/DebugBreakInline': [PASS, NO_VARIANTS], ############################################################################ # Slow tests. @@ -90,6 +179,10 @@ 'test-api/Bug618': [PASS], + # BUG(v8:3385). + 'test-serialize/DeserializeFromSecondSerialization': [PASS, FAIL], + 'test-serialize/DeserializeFromSecondSerializationAndRunScript2': [PASS, FAIL], + # BUG(v8:2999). 'test-cpu-profiler/CollectCpuProfile': [PASS, FAIL], @@ -101,6 +194,12 @@ # BUG(v8:3247). 
'test-mark-compact/NoPromotion': [SKIP], + + # BUG(v8:3446). + 'test-mark-compact/Promotion': [PASS, FAIL], + + # BUG(v8:3434). + 'test-api/LoadICFastApi_DirectCall_GCMoveStubWithProfiler': [SKIP], }], # 'arch == arm64' ['arch == arm64 and simulator_run == True', { @@ -132,7 +231,7 @@ ############################################################################## ['no_snap == True', { # BUG(3215) - 'test-lockers/MultithreadedParallelIsolates': [PASS, FAIL], + 'test-lockers/MultithreadedParallelIsolates': [PASS, FAIL, TIMEOUT], }], # 'no_snap == True' ############################################################################## @@ -148,16 +247,19 @@ # BUG(2999). 'test-cpu-profiler/CollectCpuProfile': [PASS, FAIL], - 'test-cpu-profiler/JsNativeJsSample': [PASS, FLAKY], - - # BUG(3055). - 'test-cpu-profiler/JsNative1JsNative2JsSample': [PASS, ['mode == release', FAIL], ['mode == debug', FLAKY]], # BUG(3005). 'test-alloc/CodeRange': [PASS, FAIL], # BUG(3215). Crashes on windows. 'test-lockers/MultithreadedParallelIsolates': [SKIP], + + # BUG(3331). Fails on windows. + 'test-heap/NoWeakHashTableLeakWithIncrementalMarking': [SKIP], + + # BUG(v8:3433). Crashes on windows. + 'test-cpu-profiler/FunctionApplySample': [SKIP], + }], # 'system == windows' ############################################################################## @@ -187,6 +289,10 @@ 'test-api/Threading2': [PASS, SLOW], 'test-api/Threading3': [PASS, SLOW], 'test-api/Threading4': [PASS, SLOW], + + # Crashes due to OOM in simulator. + 'test-types/Distributivity1': [PASS, FLAKY], + 'test-types/Distributivity2': [PASS, FLAKY], }], # 'arch == arm' ############################################################################## @@ -202,6 +308,39 @@ 'test-serialize/DeserializeFromSecondSerialization': [SKIP], }], # 'arch == mipsel or arch == mips' +############################################################################## +['arch == mips64el', { + + # BUG(2657): Test sometimes times out on MIPS simulator. + 'test-thread-termination/TerminateMultipleV8ThreadsDefaultIsolate': [PASS, TIMEOUT], + + # BUG(v8:3154). + 'test-heap/ReleaseOverReservedPages': [PASS, FAIL], + + # BUG(1075): Unresolved crashes on MIPS also. + 'test-serialize/Deserialize': [SKIP], + 'test-serialize/DeserializeFromSecondSerializationAndRunScript2': [SKIP], + 'test-serialize/DeserializeAndRunScript2': [SKIP], + 'test-serialize/DeserializeFromSecondSerialization': [SKIP], +}], # 'arch == mips64el' + +############################################################################## +['arch == x87', { + + # TODO (weiliang): Enable the tests below after fixing the double register + # allocation limit in X87 port.
+ 'test-serialize/Serialize': [PASS, ['mode == debug', SKIP]], + 'test-serialize/Deserialize': [PASS, ['mode == debug', SKIP]], + 'test-serialize/SerializeTwice': [PASS, ['mode == debug', SKIP]], + 'test-serialize/ContextSerialization': [PASS, ['mode == debug', SKIP]], + 'test-serialize/ContextDeserialization': [PASS, ['mode == debug', SKIP]], + 'test-serialize/PartialDeserialization': [PASS, ['mode == debug', SKIP]], + 'test-serialize/PartialSerialization': [PASS, ['mode == debug', SKIP]], + 'test-serialize/DeserializeAndRunScript2': [PASS, ['mode == debug', SKIP]], + 'test-serialize/DeserializeFromSecondSerializationAndRunScript2': [PASS, ['mode == debug', SKIP]], + 'test-serialize/DeserializeFromSecondSerialization': [PASS, ['mode == debug', SKIP]], +}], # 'arch == x87' + ############################################################################## ['arch == android_arm or arch == android_ia32', { diff --git a/deps/v8/test/cctest/compiler/call-tester.h b/deps/v8/test/cctest/compiler/call-tester.h new file mode 100644 index 00000000000..40189ab405c --- /dev/null +++ b/deps/v8/test/cctest/compiler/call-tester.h @@ -0,0 +1,384 @@ +// Copyright 2014 the V8 project authors. All rights reserved. +// Use of this source code is governed by a BSD-style license that can be +// found in the LICENSE file. + +#ifndef V8_CCTEST_COMPILER_CALL_TESTER_H_ +#define V8_CCTEST_COMPILER_CALL_TESTER_H_ + +#include "src/v8.h" + +#include "src/simulator.h" + +#if V8_TARGET_ARCH_IA32 +#if __GNUC__ +#define V8_CDECL __attribute__((cdecl)) +#else +#define V8_CDECL __cdecl +#endif +#else +#define V8_CDECL +#endif + +namespace v8 { +namespace internal { +namespace compiler { + +template <typename R> +struct ReturnValueTraits { + static R Cast(uintptr_t r) { return reinterpret_cast<R>(r); } + static MachineType Representation() { + // TODO(dcarney): detect when R is of a subclass of Object* instead of this + // type check. 
+ while (false) { + *(static_cast<Object* volatile*>(0)) = static_cast<R>(0); + } + return kMachineTagged; + } +}; + +template <> +struct ReturnValueTraits<int32_t*> { + static int32_t* Cast(uintptr_t r) { return reinterpret_cast<int32_t*>(r); } + static MachineType Representation() { + return MachineOperatorBuilder::pointer_rep(); + } +}; + +template <> +struct ReturnValueTraits<void> { + static void Cast(uintptr_t r) {} + static MachineType Representation() { + return MachineOperatorBuilder::pointer_rep(); + } +}; + +template <> +struct ReturnValueTraits<bool> { + static bool Cast(uintptr_t r) { return static_cast<bool>(r); } + static MachineType Representation() { + return MachineOperatorBuilder::pointer_rep(); + } +}; + +template <> +struct ReturnValueTraits<int32_t> { + static int32_t Cast(uintptr_t r) { return static_cast<int32_t>(r); } + static MachineType Representation() { return kMachineWord32; } +}; + +template <> +struct ReturnValueTraits<uint32_t> { + static uint32_t Cast(uintptr_t r) { return static_cast<uint32_t>(r); } + static MachineType Representation() { return kMachineWord32; } +}; + +template <> +struct ReturnValueTraits<int64_t> { + static int64_t Cast(uintptr_t r) { return static_cast<int64_t>(r); } + static MachineType Representation() { return kMachineWord64; } +}; + +template <> +struct ReturnValueTraits<uint64_t> { + static uint64_t Cast(uintptr_t r) { return static_cast<uint64_t>(r); } + static MachineType Representation() { return kMachineWord64; } +}; + +template <> +struct ReturnValueTraits<int16_t> { + static int16_t Cast(uintptr_t r) { return static_cast<int16_t>(r); } + static MachineType Representation() { + return MachineOperatorBuilder::pointer_rep(); + } +}; + +template <> +struct ReturnValueTraits<int8_t> { + static int8_t Cast(uintptr_t r) { return static_cast<int8_t>(r); } + static MachineType Representation() { + return MachineOperatorBuilder::pointer_rep(); + } +}; + +template <> +struct ReturnValueTraits<double> { + static double Cast(uintptr_t r) { + UNREACHABLE(); + return 0.0; + } + static MachineType Representation() { return kMachineFloat64; } +}; + + +template <typename R> +struct ParameterTraits { + static uintptr_t Cast(R r) { return static_cast<uintptr_t>(r); } +}; + +template <> +struct ParameterTraits<int*> { + static uintptr_t Cast(int* r) { return reinterpret_cast<uintptr_t>(r); } +}; + +template <typename T> +struct ParameterTraits<T*> { + static uintptr_t Cast(void* r) { return reinterpret_cast<uintptr_t>(r); } +}; + +class CallHelper { + public: + explicit CallHelper(Isolate* isolate) : isolate_(isolate) { USE(isolate_); } + virtual ~CallHelper() {} + + static MachineCallDescriptorBuilder* ToCallDescriptorBuilder( + Zone* zone, MachineType return_type, MachineType p0 = kMachineLast, + MachineType p1 = kMachineLast, MachineType p2 = kMachineLast, + MachineType p3 = kMachineLast, MachineType p4 = kMachineLast) { + const int kSize = 5; + MachineType* params = zone->NewArray<MachineType>(kSize); + params[0] = p0; + params[1] = p1; + params[2] = p2; + params[3] = p3; + params[4] = p4; + int parameter_count = 0; + for (int i = 0; i < kSize; ++i) { + if (params[i] == kMachineLast) { + break; + } + parameter_count++; + } + return new (zone) + MachineCallDescriptorBuilder(return_type, parameter_count, params); + } + + protected: + virtual void VerifyParameters(int parameter_count, + MachineType* parameters) = 0; + virtual byte* Generate() = 0; + + private: +#if USE_SIMULATOR && V8_TARGET_ARCH_ARM64 + uintptr_t CallSimulator(byte* f, 
Simulator::CallArgument* args) { + Simulator* simulator = Simulator::current(isolate_); + return static_cast<uintptr_t>(simulator->CallInt64(f, args)); + } + + template <typename R, typename F> + R DoCall(F* f) { + Simulator::CallArgument args[] = {Simulator::CallArgument::End()}; + return ReturnValueTraits<R>::Cast(CallSimulator(FUNCTION_ADDR(f), args)); + } + template <typename R, typename F, typename P1> + R DoCall(F* f, P1 p1) { + Simulator::CallArgument args[] = {Simulator::CallArgument(p1), + Simulator::CallArgument::End()}; + return ReturnValueTraits<R>::Cast(CallSimulator(FUNCTION_ADDR(f), args)); + } + template <typename R, typename F, typename P1, typename P2> + R DoCall(F* f, P1 p1, P2 p2) { + Simulator::CallArgument args[] = {Simulator::CallArgument(p1), + Simulator::CallArgument(p2), + Simulator::CallArgument::End()}; + return ReturnValueTraits<R>::Cast(CallSimulator(FUNCTION_ADDR(f), args)); + } + template <typename R, typename F, typename P1, typename P2, typename P3> + R DoCall(F* f, P1 p1, P2 p2, P3 p3) { + Simulator::CallArgument args[] = { + Simulator::CallArgument(p1), Simulator::CallArgument(p2), + Simulator::CallArgument(p3), Simulator::CallArgument::End()}; + return ReturnValueTraits<R>::Cast(CallSimulator(FUNCTION_ADDR(f), args)); + } + template <typename R, typename F, typename P1, typename P2, typename P3, + typename P4> + R DoCall(F* f, P1 p1, P2 p2, P3 p3, P4 p4) { + Simulator::CallArgument args[] = { + Simulator::CallArgument(p1), Simulator::CallArgument(p2), + Simulator::CallArgument(p3), Simulator::CallArgument(p4), + Simulator::CallArgument::End()}; + return ReturnValueTraits<R>::Cast(CallSimulator(FUNCTION_ADDR(f), args)); + } +#elif USE_SIMULATOR && V8_TARGET_ARCH_ARM + uintptr_t CallSimulator(byte* f, int32_t p1 = 0, int32_t p2 = 0, + int32_t p3 = 0, int32_t p4 = 0) { + Simulator* simulator = Simulator::current(isolate_); + return static_cast<uintptr_t>(simulator->Call(f, 4, p1, p2, p3, p4)); + } + template <typename R, typename F> + R DoCall(F* f) { + return ReturnValueTraits<R>::Cast(CallSimulator(FUNCTION_ADDR(f))); + } + template <typename R, typename F, typename P1> + R DoCall(F* f, P1 p1) { + return ReturnValueTraits<R>::Cast( + CallSimulator(FUNCTION_ADDR(f), ParameterTraits<P1>::Cast(p1))); + } + template <typename R, typename F, typename P1, typename P2> + R DoCall(F* f, P1 p1, P2 p2) { + return ReturnValueTraits<R>::Cast( + CallSimulator(FUNCTION_ADDR(f), ParameterTraits<P1>::Cast(p1), + ParameterTraits<P2>::Cast(p2))); + } + template <typename R, typename F, typename P1, typename P2, typename P3> + R DoCall(F* f, P1 p1, P2 p2, P3 p3) { + return ReturnValueTraits<R>::Cast(CallSimulator( + FUNCTION_ADDR(f), ParameterTraits<P1>::Cast(p1), + ParameterTraits<P2>::Cast(p2), ParameterTraits<P3>::Cast(p3))); + } + template <typename R, typename F, typename P1, typename P2, typename P3, + typename P4> + R DoCall(F* f, P1 p1, P2 p2, P3 p3, P4 p4) { + return ReturnValueTraits<R>::Cast(CallSimulator( + FUNCTION_ADDR(f), ParameterTraits<P1>::Cast(p1), + ParameterTraits<P2>::Cast(p2), ParameterTraits<P3>::Cast(p3), + ParameterTraits<P4>::Cast(p4))); + } +#else + template <typename R, typename F> + R DoCall(F* f) { + return f(); + } + template <typename R, typename F, typename P1> + R DoCall(F* f, P1 p1) { + return f(p1); + } + template <typename R, typename F, typename P1, typename P2> + R DoCall(F* f, P1 p1, P2 p2) { + return f(p1, p2); + } + template <typename R, typename F, typename P1, typename P2, typename P3> + R DoCall(F* f, P1 p1, P2 p2, P3 p3) { + 
return f(p1, p2, p3); + } + template <typename R, typename F, typename P1, typename P2, typename P3, + typename P4> + R DoCall(F* f, P1 p1, P2 p2, P3 p3, P4 p4) { + return f(p1, p2, p3, p4); + } +#endif + +#ifndef DEBUG + void VerifyParameters0() {} + + template <typename P1> + void VerifyParameters1() {} + + template <typename P1, typename P2> + void VerifyParameters2() {} + + template <typename P1, typename P2, typename P3> + void VerifyParameters3() {} + + template <typename P1, typename P2, typename P3, typename P4> + void VerifyParameters4() {} +#else + void VerifyParameters0() { VerifyParameters(0, NULL); } + + template <typename P1> + void VerifyParameters1() { + MachineType parameters[] = {ReturnValueTraits<P1>::Representation()}; + VerifyParameters(ARRAY_SIZE(parameters), parameters); + } + + template <typename P1, typename P2> + void VerifyParameters2() { + MachineType parameters[] = {ReturnValueTraits<P1>::Representation(), + ReturnValueTraits<P2>::Representation()}; + VerifyParameters(ARRAY_SIZE(parameters), parameters); + } + + template <typename P1, typename P2, typename P3> + void VerifyParameters3() { + MachineType parameters[] = {ReturnValueTraits<P1>::Representation(), + ReturnValueTraits<P2>::Representation(), + ReturnValueTraits<P3>::Representation()}; + VerifyParameters(ARRAY_SIZE(parameters), parameters); + } + + template <typename P1, typename P2, typename P3, typename P4> + void VerifyParameters4() { + MachineType parameters[] = {ReturnValueTraits<P1>::Representation(), + ReturnValueTraits<P2>::Representation(), + ReturnValueTraits<P3>::Representation(), + ReturnValueTraits<P4>::Representation()}; + VerifyParameters(ARRAY_SIZE(parameters), parameters); + } +#endif + + // TODO(dcarney): replace Call() in CallHelper2 with these. + template <typename R> + R Call0() { + typedef R V8_CDECL FType(); + VerifyParameters0(); + return DoCall<R>(FUNCTION_CAST<FType*>(Generate())); + } + + template <typename R, typename P1> + R Call1(P1 p1) { + typedef R V8_CDECL FType(P1); + VerifyParameters1<P1>(); + return DoCall<R>(FUNCTION_CAST<FType*>(Generate()), p1); + } + + template <typename R, typename P1, typename P2> + R Call2(P1 p1, P2 p2) { + typedef R V8_CDECL FType(P1, P2); + VerifyParameters2<P1, P2>(); + return DoCall<R>(FUNCTION_CAST<FType*>(Generate()), p1, p2); + } + + template <typename R, typename P1, typename P2, typename P3> + R Call3(P1 p1, P2 p2, P3 p3) { + typedef R V8_CDECL FType(P1, P2, P3); + VerifyParameters3<P1, P2, P3>(); + return DoCall<R>(FUNCTION_CAST<FType*>(Generate()), p1, p2, p3); + } + + template <typename R, typename P1, typename P2, typename P3, typename P4> + R Call4(P1 p1, P2 p2, P3 p3, P4 p4) { + typedef R V8_CDECL FType(P1, P2, P3, P4); + VerifyParameters4<P1, P2, P3, P4>(); + return DoCall<R>(FUNCTION_CAST<FType*>(Generate()), p1, p2, p3, p4); + } + + template <typename R, typename C> + friend class CallHelper2; + Isolate* isolate_; +}; + + +// TODO(dcarney): replace CallHelper with CallHelper2 and rename. 
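CallHelper2 below uses CRTP: the concrete tester passes itself as the template parameter C, so the base class can static_cast itself down and reach the CallN<R>() overloads without any virtual dispatch. A minimal sketch of the same shape with illustrative names:

#include <cstdio>

// The base knows its concrete subclass at compile time (the CRTP parameter).
template <typename C>
class Caller {
 public:
  int Call(int x) { return static_cast<C*>(this)->DoCall(x); }
};

// Concrete tester; DoCall() plays the role of CallHelper's Generate()+DoCall.
class Doubler : public Caller<Doubler> {
 public:
  int DoCall(int x) { return 2 * x; }
};

int main() {
  Doubler d;
  std::printf("%d\n", d.Call(21));  // prints 42; resolved at compile time
  return 0;
}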
+template <typename R, typename C> +class CallHelper2 { + public: + R Call() { return helper()->template Call0<R>(); } + + template <typename P1> + R Call(P1 p1) { + return helper()->template Call1<R>(p1); + } + + template <typename P1, typename P2> + R Call(P1 p1, P2 p2) { + return helper()->template Call2<R>(p1, p2); + } + + template <typename P1, typename P2, typename P3> + R Call(P1 p1, P2 p2, P3 p3) { + return helper()->template Call3<R>(p1, p2, p3); + } + + template <typename P1, typename P2, typename P3, typename P4> + R Call(P1 p1, P2 p2, P3 p3, P4 p4) { + return helper()->template Call4<R>(p1, p2, p3, p4); + } + + private: + CallHelper* helper() { return static_cast<C*>(this); } +}; + +} // namespace compiler +} // namespace internal +} // namespace v8 + +#endif // V8_CCTEST_COMPILER_CALL_TESTER_H_ diff --git a/deps/v8/test/cctest/compiler/codegen-tester.cc b/deps/v8/test/cctest/compiler/codegen-tester.cc new file mode 100644 index 00000000000..24b2c6e9f04 --- /dev/null +++ b/deps/v8/test/cctest/compiler/codegen-tester.cc @@ -0,0 +1,578 @@ +// Copyright 2014 the V8 project authors. All rights reserved. +// Use of this source code is governed by a BSD-style license that can be +// found in the LICENSE file. + +#include "src/v8.h" + +#include "test/cctest/cctest.h" +#include "test/cctest/compiler/codegen-tester.h" +#include "test/cctest/compiler/value-helper.h" + +using namespace v8::internal; +using namespace v8::internal::compiler; + +TEST(CompareWrapper) { + // Who tests the testers? + // If CompareWrapper is broken, then test expectations will be broken. + RawMachineAssemblerTester<int32_t> m; + CompareWrapper wWord32Equal(IrOpcode::kWord32Equal); + CompareWrapper wInt32LessThan(IrOpcode::kInt32LessThan); + CompareWrapper wInt32LessThanOrEqual(IrOpcode::kInt32LessThanOrEqual); + CompareWrapper wUint32LessThan(IrOpcode::kUint32LessThan); + CompareWrapper wUint32LessThanOrEqual(IrOpcode::kUint32LessThanOrEqual); + + { + FOR_INT32_INPUTS(pl) { + FOR_INT32_INPUTS(pr) { + int32_t a = *pl; + int32_t b = *pr; + CHECK_EQ(a == b, wWord32Equal.Int32Compare(a, b)); + CHECK_EQ(a < b, wInt32LessThan.Int32Compare(a, b)); + CHECK_EQ(a <= b, wInt32LessThanOrEqual.Int32Compare(a, b)); + } + } + } + + { + FOR_UINT32_INPUTS(pl) { + FOR_UINT32_INPUTS(pr) { + uint32_t a = *pl; + uint32_t b = *pr; + CHECK_EQ(a == b, wWord32Equal.Int32Compare(a, b)); + CHECK_EQ(a < b, wUint32LessThan.Int32Compare(a, b)); + CHECK_EQ(a <= b, wUint32LessThanOrEqual.Int32Compare(a, b)); + } + } + } + + CHECK_EQ(true, wWord32Equal.Int32Compare(0, 0)); + CHECK_EQ(true, wWord32Equal.Int32Compare(257, 257)); + CHECK_EQ(true, wWord32Equal.Int32Compare(65539, 65539)); + CHECK_EQ(true, wWord32Equal.Int32Compare(-1, -1)); + CHECK_EQ(true, wWord32Equal.Int32Compare(0xffffffff, 0xffffffff)); + + CHECK_EQ(false, wWord32Equal.Int32Compare(0, 1)); + CHECK_EQ(false, wWord32Equal.Int32Compare(257, 256)); + CHECK_EQ(false, wWord32Equal.Int32Compare(65539, 65537)); + CHECK_EQ(false, wWord32Equal.Int32Compare(-1, -2)); + CHECK_EQ(false, wWord32Equal.Int32Compare(0xffffffff, 0xfffffffe)); + + CHECK_EQ(false, wInt32LessThan.Int32Compare(0, 0)); + CHECK_EQ(false, wInt32LessThan.Int32Compare(357, 357)); + CHECK_EQ(false, wInt32LessThan.Int32Compare(75539, 75539)); + CHECK_EQ(false, wInt32LessThan.Int32Compare(-1, -1)); + CHECK_EQ(false, wInt32LessThan.Int32Compare(0xffffffff, 0xffffffff)); + + CHECK_EQ(true, wInt32LessThan.Int32Compare(0, 1)); + CHECK_EQ(true, wInt32LessThan.Int32Compare(456, 457)); + CHECK_EQ(true, 
wInt32LessThan.Int32Compare(85537, 85539)); + CHECK_EQ(true, wInt32LessThan.Int32Compare(-2, -1)); + CHECK_EQ(true, wInt32LessThan.Int32Compare(0xfffffffe, 0xffffffff)); + + CHECK_EQ(false, wInt32LessThan.Int32Compare(1, 0)); + CHECK_EQ(false, wInt32LessThan.Int32Compare(457, 456)); + CHECK_EQ(false, wInt32LessThan.Int32Compare(85539, 85537)); + CHECK_EQ(false, wInt32LessThan.Int32Compare(-1, -2)); + CHECK_EQ(false, wInt32LessThan.Int32Compare(0xffffffff, 0xfffffffe)); + + CHECK_EQ(true, wInt32LessThanOrEqual.Int32Compare(0, 0)); + CHECK_EQ(true, wInt32LessThanOrEqual.Int32Compare(357, 357)); + CHECK_EQ(true, wInt32LessThanOrEqual.Int32Compare(75539, 75539)); + CHECK_EQ(true, wInt32LessThanOrEqual.Int32Compare(-1, -1)); + CHECK_EQ(true, wInt32LessThanOrEqual.Int32Compare(0xffffffff, 0xffffffff)); + + CHECK_EQ(true, wInt32LessThanOrEqual.Int32Compare(0, 1)); + CHECK_EQ(true, wInt32LessThanOrEqual.Int32Compare(456, 457)); + CHECK_EQ(true, wInt32LessThanOrEqual.Int32Compare(85537, 85539)); + CHECK_EQ(true, wInt32LessThanOrEqual.Int32Compare(-2, -1)); + CHECK_EQ(true, wInt32LessThanOrEqual.Int32Compare(0xfffffffe, 0xffffffff)); + + CHECK_EQ(false, wInt32LessThanOrEqual.Int32Compare(1, 0)); + CHECK_EQ(false, wInt32LessThanOrEqual.Int32Compare(457, 456)); + CHECK_EQ(false, wInt32LessThanOrEqual.Int32Compare(85539, 85537)); + CHECK_EQ(false, wInt32LessThanOrEqual.Int32Compare(-1, -2)); + CHECK_EQ(false, wInt32LessThanOrEqual.Int32Compare(0xffffffff, 0xfffffffe)); + + // Unsigned comparisons. + CHECK_EQ(false, wUint32LessThan.Int32Compare(0, 0)); + CHECK_EQ(false, wUint32LessThan.Int32Compare(357, 357)); + CHECK_EQ(false, wUint32LessThan.Int32Compare(75539, 75539)); + CHECK_EQ(false, wUint32LessThan.Int32Compare(-1, -1)); + CHECK_EQ(false, wUint32LessThan.Int32Compare(0xffffffff, 0xffffffff)); + CHECK_EQ(false, wUint32LessThan.Int32Compare(0xffffffff, 0)); + CHECK_EQ(false, wUint32LessThan.Int32Compare(-2999, 0)); + + CHECK_EQ(true, wUint32LessThan.Int32Compare(0, 1)); + CHECK_EQ(true, wUint32LessThan.Int32Compare(456, 457)); + CHECK_EQ(true, wUint32LessThan.Int32Compare(85537, 85539)); + CHECK_EQ(true, wUint32LessThan.Int32Compare(-11, -10)); + CHECK_EQ(true, wUint32LessThan.Int32Compare(0xfffffffe, 0xffffffff)); + CHECK_EQ(true, wUint32LessThan.Int32Compare(0, 0xffffffff)); + CHECK_EQ(true, wUint32LessThan.Int32Compare(0, -2996)); + + CHECK_EQ(false, wUint32LessThan.Int32Compare(1, 0)); + CHECK_EQ(false, wUint32LessThan.Int32Compare(457, 456)); + CHECK_EQ(false, wUint32LessThan.Int32Compare(85539, 85537)); + CHECK_EQ(false, wUint32LessThan.Int32Compare(-10, -21)); + CHECK_EQ(false, wUint32LessThan.Int32Compare(0xffffffff, 0xfffffffe)); + + CHECK_EQ(true, wUint32LessThanOrEqual.Int32Compare(0, 0)); + CHECK_EQ(true, wUint32LessThanOrEqual.Int32Compare(357, 357)); + CHECK_EQ(true, wUint32LessThanOrEqual.Int32Compare(75539, 75539)); + CHECK_EQ(true, wUint32LessThanOrEqual.Int32Compare(-1, -1)); + CHECK_EQ(true, wUint32LessThanOrEqual.Int32Compare(0xffffffff, 0xffffffff)); + + CHECK_EQ(true, wUint32LessThanOrEqual.Int32Compare(0, 1)); + CHECK_EQ(true, wUint32LessThanOrEqual.Int32Compare(456, 457)); + CHECK_EQ(true, wUint32LessThanOrEqual.Int32Compare(85537, 85539)); + CHECK_EQ(true, wUint32LessThanOrEqual.Int32Compare(-300, -299)); + CHECK_EQ(true, wUint32LessThanOrEqual.Int32Compare(-300, -300)); + CHECK_EQ(true, wUint32LessThanOrEqual.Int32Compare(0xfffffffe, 0xffffffff)); + CHECK_EQ(true, wUint32LessThanOrEqual.Int32Compare(0, -2995)); + + CHECK_EQ(false, wUint32LessThanOrEqual.Int32Compare(1, 
0)); + CHECK_EQ(false, wUint32LessThanOrEqual.Int32Compare(457, 456)); + CHECK_EQ(false, wUint32LessThanOrEqual.Int32Compare(85539, 85537)); + CHECK_EQ(false, wUint32LessThanOrEqual.Int32Compare(-130, -170)); + CHECK_EQ(false, wUint32LessThanOrEqual.Int32Compare(0xffffffff, 0xfffffffe)); + CHECK_EQ(false, wUint32LessThanOrEqual.Int32Compare(-2997, 0)); + + CompareWrapper wFloat64Equal(IrOpcode::kFloat64Equal); + CompareWrapper wFloat64LessThan(IrOpcode::kFloat64LessThan); + CompareWrapper wFloat64LessThanOrEqual(IrOpcode::kFloat64LessThanOrEqual); + + // Check NaN handling. + double nan = v8::base::OS::nan_value(); + double inf = V8_INFINITY; + CHECK_EQ(false, wFloat64Equal.Float64Compare(nan, 0.0)); + CHECK_EQ(false, wFloat64Equal.Float64Compare(nan, 1.0)); + CHECK_EQ(false, wFloat64Equal.Float64Compare(nan, inf)); + CHECK_EQ(false, wFloat64Equal.Float64Compare(nan, -inf)); + CHECK_EQ(false, wFloat64Equal.Float64Compare(nan, nan)); + + CHECK_EQ(false, wFloat64Equal.Float64Compare(0.0, nan)); + CHECK_EQ(false, wFloat64Equal.Float64Compare(1.0, nan)); + CHECK_EQ(false, wFloat64Equal.Float64Compare(inf, nan)); + CHECK_EQ(false, wFloat64Equal.Float64Compare(-inf, nan)); + CHECK_EQ(false, wFloat64Equal.Float64Compare(nan, nan)); + + CHECK_EQ(false, wFloat64LessThan.Float64Compare(nan, 0.0)); + CHECK_EQ(false, wFloat64LessThan.Float64Compare(nan, 1.0)); + CHECK_EQ(false, wFloat64LessThan.Float64Compare(nan, inf)); + CHECK_EQ(false, wFloat64LessThan.Float64Compare(nan, -inf)); + CHECK_EQ(false, wFloat64LessThan.Float64Compare(nan, nan)); + + CHECK_EQ(false, wFloat64LessThan.Float64Compare(0.0, nan)); + CHECK_EQ(false, wFloat64LessThan.Float64Compare(1.0, nan)); + CHECK_EQ(false, wFloat64LessThan.Float64Compare(inf, nan)); + CHECK_EQ(false, wFloat64LessThan.Float64Compare(-inf, nan)); + CHECK_EQ(false, wFloat64LessThan.Float64Compare(nan, nan)); + + CHECK_EQ(false, wFloat64LessThanOrEqual.Float64Compare(nan, 0.0)); + CHECK_EQ(false, wFloat64LessThanOrEqual.Float64Compare(nan, 1.0)); + CHECK_EQ(false, wFloat64LessThanOrEqual.Float64Compare(nan, inf)); + CHECK_EQ(false, wFloat64LessThanOrEqual.Float64Compare(nan, -inf)); + CHECK_EQ(false, wFloat64LessThanOrEqual.Float64Compare(nan, nan)); + + CHECK_EQ(false, wFloat64LessThanOrEqual.Float64Compare(0.0, nan)); + CHECK_EQ(false, wFloat64LessThanOrEqual.Float64Compare(1.0, nan)); + CHECK_EQ(false, wFloat64LessThanOrEqual.Float64Compare(inf, nan)); + CHECK_EQ(false, wFloat64LessThanOrEqual.Float64Compare(-inf, nan)); + CHECK_EQ(false, wFloat64LessThanOrEqual.Float64Compare(nan, nan)); + + // Check inf handling. 
+ CHECK_EQ(false, wFloat64Equal.Float64Compare(inf, 0.0)); + CHECK_EQ(false, wFloat64Equal.Float64Compare(inf, 1.0)); + CHECK_EQ(true, wFloat64Equal.Float64Compare(inf, inf)); + CHECK_EQ(false, wFloat64Equal.Float64Compare(inf, -inf)); + + CHECK_EQ(false, wFloat64Equal.Float64Compare(0.0, inf)); + CHECK_EQ(false, wFloat64Equal.Float64Compare(1.0, inf)); + CHECK_EQ(true, wFloat64Equal.Float64Compare(inf, inf)); + CHECK_EQ(false, wFloat64Equal.Float64Compare(-inf, inf)); + + CHECK_EQ(false, wFloat64LessThan.Float64Compare(inf, 0.0)); + CHECK_EQ(false, wFloat64LessThan.Float64Compare(inf, 1.0)); + CHECK_EQ(false, wFloat64LessThan.Float64Compare(inf, inf)); + CHECK_EQ(false, wFloat64LessThan.Float64Compare(inf, -inf)); + + CHECK_EQ(true, wFloat64LessThan.Float64Compare(0.0, inf)); + CHECK_EQ(true, wFloat64LessThan.Float64Compare(1.0, inf)); + CHECK_EQ(false, wFloat64LessThan.Float64Compare(inf, inf)); + CHECK_EQ(true, wFloat64LessThan.Float64Compare(-inf, inf)); + + CHECK_EQ(false, wFloat64LessThanOrEqual.Float64Compare(inf, 0.0)); + CHECK_EQ(false, wFloat64LessThanOrEqual.Float64Compare(inf, 1.0)); + CHECK_EQ(true, wFloat64LessThanOrEqual.Float64Compare(inf, inf)); + CHECK_EQ(false, wFloat64LessThanOrEqual.Float64Compare(inf, -inf)); + + CHECK_EQ(true, wFloat64LessThanOrEqual.Float64Compare(0.0, inf)); + CHECK_EQ(true, wFloat64LessThanOrEqual.Float64Compare(1.0, inf)); + CHECK_EQ(true, wFloat64LessThanOrEqual.Float64Compare(inf, inf)); + CHECK_EQ(true, wFloat64LessThanOrEqual.Float64Compare(-inf, inf)); + + // Check -inf handling. + CHECK_EQ(false, wFloat64Equal.Float64Compare(-inf, 0.0)); + CHECK_EQ(false, wFloat64Equal.Float64Compare(-inf, 1.0)); + CHECK_EQ(false, wFloat64Equal.Float64Compare(-inf, inf)); + CHECK_EQ(true, wFloat64Equal.Float64Compare(-inf, -inf)); + + CHECK_EQ(false, wFloat64Equal.Float64Compare(0.0, -inf)); + CHECK_EQ(false, wFloat64Equal.Float64Compare(1.0, -inf)); + CHECK_EQ(false, wFloat64Equal.Float64Compare(inf, -inf)); + CHECK_EQ(true, wFloat64Equal.Float64Compare(-inf, -inf)); + + CHECK_EQ(true, wFloat64LessThan.Float64Compare(-inf, 0.0)); + CHECK_EQ(true, wFloat64LessThan.Float64Compare(-inf, 1.0)); + CHECK_EQ(true, wFloat64LessThan.Float64Compare(-inf, inf)); + CHECK_EQ(false, wFloat64LessThan.Float64Compare(-inf, -inf)); + + CHECK_EQ(false, wFloat64LessThan.Float64Compare(0.0, -inf)); + CHECK_EQ(false, wFloat64LessThan.Float64Compare(1.0, -inf)); + CHECK_EQ(false, wFloat64LessThan.Float64Compare(inf, -inf)); + CHECK_EQ(false, wFloat64LessThan.Float64Compare(-inf, -inf)); + + CHECK_EQ(true, wFloat64LessThanOrEqual.Float64Compare(-inf, 0.0)); + CHECK_EQ(true, wFloat64LessThanOrEqual.Float64Compare(-inf, 1.0)); + CHECK_EQ(true, wFloat64LessThanOrEqual.Float64Compare(-inf, inf)); + CHECK_EQ(true, wFloat64LessThanOrEqual.Float64Compare(-inf, -inf)); + + CHECK_EQ(false, wFloat64LessThanOrEqual.Float64Compare(0.0, -inf)); + CHECK_EQ(false, wFloat64LessThanOrEqual.Float64Compare(1.0, -inf)); + CHECK_EQ(false, wFloat64LessThanOrEqual.Float64Compare(inf, -inf)); + CHECK_EQ(true, wFloat64LessThanOrEqual.Float64Compare(-inf, -inf)); + + // Check basic values. 
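The NaN and infinity blocks above encode IEEE-754 semantics: every ordered comparison involving NaN is false, including NaN == NaN, while infinities order like ordinary extreme values. The same facts in a few lines of plain C++ (assuming default floating-point settings, i.e. no -ffast-math):

#include <cassert>
#include <limits>

int main() {
  const double nan = std::numeric_limits<double>::quiet_NaN();
  const double inf = std::numeric_limits<double>::infinity();
  // Ordered comparisons with NaN are always false, even against itself.
  assert(!(nan == nan));
  assert(!(nan < 1.0) && !(1.0 < nan));
  assert(!(nan <= inf) && !(inf <= nan));
  // Infinities compare like ordinary extreme values.
  assert(inf == inf);
  assert(-inf < 0.0 && 0.0 < inf && -inf < inf);
  return 0;
}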
+ CHECK_EQ(true, wFloat64Equal.Float64Compare(0, 0)); + CHECK_EQ(true, wFloat64Equal.Float64Compare(257.1, 257.1)); + CHECK_EQ(true, wFloat64Equal.Float64Compare(65539.1, 65539.1)); + CHECK_EQ(true, wFloat64Equal.Float64Compare(-1.1, -1.1)); + + CHECK_EQ(false, wFloat64Equal.Float64Compare(0, 1)); + CHECK_EQ(false, wFloat64Equal.Float64Compare(257.2, 256.2)); + CHECK_EQ(false, wFloat64Equal.Float64Compare(65539.2, 65537.2)); + CHECK_EQ(false, wFloat64Equal.Float64Compare(-1.2, -2.2)); + + CHECK_EQ(false, wFloat64LessThan.Float64Compare(0, 0)); + CHECK_EQ(false, wFloat64LessThan.Float64Compare(357.3, 357.3)); + CHECK_EQ(false, wFloat64LessThan.Float64Compare(75539.3, 75539.3)); + CHECK_EQ(false, wFloat64LessThan.Float64Compare(-1.3, -1.3)); + + CHECK_EQ(true, wFloat64LessThan.Float64Compare(0, 1)); + CHECK_EQ(true, wFloat64LessThan.Float64Compare(456.4, 457.4)); + CHECK_EQ(true, wFloat64LessThan.Float64Compare(85537.4, 85539.4)); + CHECK_EQ(true, wFloat64LessThan.Float64Compare(-2.4, -1.4)); + + CHECK_EQ(false, wFloat64LessThan.Float64Compare(1, 0)); + CHECK_EQ(false, wFloat64LessThan.Float64Compare(457.5, 456.5)); + CHECK_EQ(false, wFloat64LessThan.Float64Compare(85539.5, 85537.5)); + CHECK_EQ(false, wFloat64LessThan.Float64Compare(-1.5, -2.5)); + + CHECK_EQ(true, wFloat64LessThanOrEqual.Float64Compare(0, 0)); + CHECK_EQ(true, wFloat64LessThanOrEqual.Float64Compare(357.6, 357.6)); + CHECK_EQ(true, wFloat64LessThanOrEqual.Float64Compare(75539.6, 75539.6)); + CHECK_EQ(true, wFloat64LessThanOrEqual.Float64Compare(-1.6, -1.6)); + + CHECK_EQ(true, wFloat64LessThanOrEqual.Float64Compare(0, 1)); + CHECK_EQ(true, wFloat64LessThanOrEqual.Float64Compare(456.7, 457.7)); + CHECK_EQ(true, wFloat64LessThanOrEqual.Float64Compare(85537.7, 85539.7)); + CHECK_EQ(true, wFloat64LessThanOrEqual.Float64Compare(-2.7, -1.7)); + + CHECK_EQ(false, wFloat64LessThanOrEqual.Float64Compare(1, 0)); + CHECK_EQ(false, wFloat64LessThanOrEqual.Float64Compare(457.8, 456.8)); + CHECK_EQ(false, wFloat64LessThanOrEqual.Float64Compare(85539.8, 85537.8)); + CHECK_EQ(false, wFloat64LessThanOrEqual.Float64Compare(-1.8, -2.8)); +} + + +void Int32BinopInputShapeTester::TestAllInputShapes() { + std::vector<int32_t> inputs = ValueHelper::int32_vector(); + int num_int_inputs = static_cast<int>(inputs.size()); + if (num_int_inputs > 16) num_int_inputs = 16; // limit to 16 inputs + + for (int i = -2; i < num_int_inputs; i++) { // for all left shapes + for (int j = -2; j < num_int_inputs; j++) { // for all right shapes + if (i >= 0 && j >= 0) break; // No constant/constant combos + RawMachineAssemblerTester<int32_t> m(kMachineWord32, kMachineWord32); + Node* p0 = m.Parameter(0); + Node* p1 = m.Parameter(1); + Node* n0; + Node* n1; + + // left = Parameter | Load | Constant + if (i == -2) { + n0 = p0; + } else if (i == -1) { + n0 = m.LoadFromPointer(&input_a, kMachineWord32); + } else { + n0 = m.Int32Constant(inputs[i]); + } + + // right = Parameter | Load | Constant + if (j == -2) { + n1 = p1; + } else if (j == -1) { + n1 = m.LoadFromPointer(&input_b, kMachineWord32); + } else { + n1 = m.Int32Constant(inputs[j]); + } + + gen->gen(&m, n0, n1); + + if (false) printf("Int32BinopInputShapeTester i=%d, j=%d\n", i, j); + if (i >= 0) { + input_a = inputs[i]; + RunRight(&m); + } else if (j >= 0) { + input_b = inputs[j]; + RunLeft(&m); + } else { + Run(&m); + } + } + } +} + + +void Int32BinopInputShapeTester::Run(RawMachineAssemblerTester<int32_t>* m) { + FOR_INT32_INPUTS(pl) { + FOR_INT32_INPUTS(pr) { + input_a = *pl; + input_b = *pr; + int32_t 
expect = gen->expected(input_a, input_b); + if (false) printf(" cmp(a=%d, b=%d) ?== %d\n", input_a, input_b, expect); + CHECK_EQ(expect, m->Call(input_a, input_b)); + } + } +} + + +void Int32BinopInputShapeTester::RunLeft( + RawMachineAssemblerTester<int32_t>* m) { + FOR_UINT32_INPUTS(i) { + input_a = *i; + int32_t expect = gen->expected(input_a, input_b); + if (false) printf(" cmp(a=%d, b=%d) ?== %d\n", input_a, input_b, expect); + CHECK_EQ(expect, m->Call(input_a, input_b)); + } +} + + +void Int32BinopInputShapeTester::RunRight( + RawMachineAssemblerTester<int32_t>* m) { + FOR_UINT32_INPUTS(i) { + input_b = *i; + int32_t expect = gen->expected(input_a, input_b); + if (false) printf(" cmp(a=%d, b=%d) ?== %d\n", input_a, input_b, expect); + CHECK_EQ(expect, m->Call(input_a, input_b)); + } +} + + +TEST(ParametersEqual) { + RawMachineAssemblerTester<int32_t> m(kMachineWord32, kMachineWord32); + Node* p1 = m.Parameter(1); + CHECK_NE(NULL, p1); + Node* p0 = m.Parameter(0); + CHECK_NE(NULL, p0); + CHECK_EQ(p0, m.Parameter(0)); + CHECK_EQ(p1, m.Parameter(1)); +} + + +#if V8_TURBOFAN_TARGET + +void RunSmiConstant(int32_t v) { +// TODO(dcarney): on x64 Smis are generated with the SmiConstantRegister +#if !V8_TARGET_ARCH_X64 + if (Smi::IsValid(v)) { + RawMachineAssemblerTester<Object*> m; + m.Return(m.NumberConstant(v)); + CHECK_EQ(Smi::FromInt(v), m.Call()); + } +#endif +} + + +void RunNumberConstant(double v) { + RawMachineAssemblerTester<Object*> m; +#if V8_TARGET_ARCH_X64 + // TODO(dcarney): on x64 Smis are generated with the SmiConstantRegister + Handle<Object> number = m.isolate()->factory()->NewNumber(v); + if (number->IsSmi()) return; +#endif + m.Return(m.NumberConstant(v)); + Object* result = m.Call(); + m.CheckNumber(v, result); +} + + +TEST(RunEmpty) { + RawMachineAssemblerTester<int32_t> m; + m.Return(m.Int32Constant(0)); + CHECK_EQ(0, m.Call()); +} + + +TEST(RunInt32Constants) { + FOR_INT32_INPUTS(i) { + RawMachineAssemblerTester<int32_t> m; + m.Return(m.Int32Constant(*i)); + CHECK_EQ(*i, m.Call()); + } +} + + +TEST(RunSmiConstants) { + for (int32_t i = 1; i < Smi::kMaxValue && i != 0; i = i << 1) { + RunSmiConstant(i); + RunSmiConstant(3 * i); + RunSmiConstant(5 * i); + RunSmiConstant(-i); + RunSmiConstant(i | 1); + RunSmiConstant(i | 3); + } + RunSmiConstant(Smi::kMaxValue); + RunSmiConstant(Smi::kMaxValue - 1); + RunSmiConstant(Smi::kMinValue); + RunSmiConstant(Smi::kMinValue + 1); + + FOR_INT32_INPUTS(i) { RunSmiConstant(*i); } +} + + +TEST(RunNumberConstants) { + { + FOR_FLOAT64_INPUTS(i) { RunNumberConstant(*i); } + } + { + FOR_INT32_INPUTS(i) { RunNumberConstant(*i); } + } + + for (int32_t i = 1; i < Smi::kMaxValue && i != 0; i = i << 1) { + RunNumberConstant(i); + RunNumberConstant(-i); + RunNumberConstant(i | 1); + RunNumberConstant(i | 3); + } + RunNumberConstant(Smi::kMaxValue); + RunNumberConstant(Smi::kMaxValue - 1); + RunNumberConstant(Smi::kMinValue); + RunNumberConstant(Smi::kMinValue + 1); +} + + +TEST(RunEmptyString) { + RawMachineAssemblerTester<Object*> m; + m.Return(m.StringConstant("empty")); + m.CheckString("empty", m.Call()); +} + + +TEST(RunHeapConstant) { + RawMachineAssemblerTester<Object*> m; + m.Return(m.StringConstant("empty")); + m.CheckString("empty", m.Call()); +} + + +TEST(RunHeapNumberConstant) { + RawMachineAssemblerTester<Object*> m; + Handle<Object> number = m.isolate()->factory()->NewHeapNumber(100.5); + m.Return(m.HeapConstant(number)); + Object* result = m.Call(); + CHECK_EQ(result, *number); +} + + +TEST(RunParam1) { + 
RawMachineAssemblerTester<int32_t> m(kMachineWord32); + m.Return(m.Parameter(0)); + + FOR_INT32_INPUTS(i) { + int32_t result = m.Call(*i); + CHECK_EQ(*i, result); + } +} + + +TEST(RunParam2_1) { + RawMachineAssemblerTester<int32_t> m(kMachineWord32, kMachineWord32); + Node* p0 = m.Parameter(0); + Node* p1 = m.Parameter(1); + m.Return(p0); + USE(p1); + + FOR_INT32_INPUTS(i) { + int32_t result = m.Call(*i, -9999); + CHECK_EQ(*i, result); + } +} + + +TEST(RunParam2_2) { + RawMachineAssemblerTester<int32_t> m(kMachineWord32, kMachineWord32); + Node* p0 = m.Parameter(0); + Node* p1 = m.Parameter(1); + m.Return(p1); + USE(p0); + + FOR_INT32_INPUTS(i) { + int32_t result = m.Call(-7777, *i); + CHECK_EQ(*i, result); + } +} + + +TEST(RunParam3) { + for (int i = 0; i < 3; i++) { + RawMachineAssemblerTester<int32_t> m(kMachineWord32, kMachineWord32, + kMachineWord32); + Node* nodes[] = {m.Parameter(0), m.Parameter(1), m.Parameter(2)}; + m.Return(nodes[i]); + + int p[] = {-99, -77, -88}; + FOR_INT32_INPUTS(j) { + p[i] = *j; + int32_t result = m.Call(p[0], p[1], p[2]); + CHECK_EQ(*j, result); + } + } +} + + +TEST(RunBinopTester) { + { + RawMachineAssemblerTester<int32_t> m; + Int32BinopTester bt(&m); + bt.AddReturn(bt.param0); + + FOR_INT32_INPUTS(i) { CHECK_EQ(*i, bt.call(*i, 777)); } + } + + { + RawMachineAssemblerTester<int32_t> m; + Int32BinopTester bt(&m); + bt.AddReturn(bt.param1); + + FOR_INT32_INPUTS(i) { CHECK_EQ(*i, bt.call(666, *i)); } + } + + { + RawMachineAssemblerTester<int32_t> m; + Float64BinopTester bt(&m); + bt.AddReturn(bt.param0); + + FOR_FLOAT64_INPUTS(i) { CHECK_EQ(*i, bt.call(*i, 9.0)); } + } + + { + RawMachineAssemblerTester<int32_t> m; + Float64BinopTester bt(&m); + bt.AddReturn(bt.param1); + + FOR_FLOAT64_INPUTS(i) { CHECK_EQ(*i, bt.call(-11.25, *i)); } + } +} + +#endif // V8_TURBOFAN_TARGET diff --git a/deps/v8/test/cctest/compiler/codegen-tester.h b/deps/v8/test/cctest/compiler/codegen-tester.h new file mode 100644 index 00000000000..300381b4939 --- /dev/null +++ b/deps/v8/test/cctest/compiler/codegen-tester.h @@ -0,0 +1,353 @@ +// Copyright 2014 the V8 project authors. All rights reserved. +// Use of this source code is governed by a BSD-style license that can be +// found in the LICENSE file. 
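RunBinopTester above exercises the result-buffer convention spelled out in codegen-tester.h below: double parameters are loaded from static buffers, the result is stored back to memory, and the generated code returns a sentinel (CHECK_VALUE) so the harness can confirm the call completed, sidestepping the ia32 double-return problem noted in the TODO below. A plain-C++ sketch of that convention, with an ordinary function standing in for the generated code:

#include <cassert>
#include <cstdint>

static double g_p0, g_p1, g_result;
static const std::uint32_t kSentinel = 0x99BEEDCE;  // mirrors CHECK_VALUE

static std::uint32_t GeneratedAdd() {  // stand-in for the compiled graph
  g_result = g_p0 + g_p1;              // params and result travel via memory
  return kSentinel;                    // proves the call ran to completion
}

static double CallAdd(double a, double b) {
  g_p0 = a;
  g_p1 = b;
  std::uint32_t rc = GeneratedAdd();
  assert(rc == kSentinel);
  (void)rc;                            // silence unused warning under NDEBUG
  return g_result;
}

int main() {
  assert(CallAdd(1.5, 2.25) == 3.75);  // exact in binary floating point
  return 0;
}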
+ +#ifndef V8_CCTEST_COMPILER_CODEGEN_TESTER_H_ +#define V8_CCTEST_COMPILER_CODEGEN_TESTER_H_ + +#include "src/v8.h" + +#include "src/compiler/pipeline.h" +#include "src/compiler/raw-machine-assembler.h" +#include "src/compiler/structured-machine-assembler.h" +#include "src/simulator.h" +#include "test/cctest/compiler/call-tester.h" + +namespace v8 { +namespace internal { +namespace compiler { + +template <typename MachineAssembler> +class MachineAssemblerTester : public HandleAndZoneScope, + public CallHelper, + public MachineAssembler { + public: + MachineAssemblerTester(MachineType return_type, MachineType p0, + MachineType p1, MachineType p2, MachineType p3, + MachineType p4) + : HandleAndZoneScope(), + CallHelper(main_isolate()), + MachineAssembler(new (main_zone()) Graph(main_zone()), + ToCallDescriptorBuilder(main_zone(), return_type, p0, + p1, p2, p3, p4), + MachineOperatorBuilder::pointer_rep()) {} + + Node* LoadFromPointer(void* address, MachineType rep, int32_t offset = 0) { + return this->Load(rep, this->PointerConstant(address), + this->Int32Constant(offset)); + } + + void StoreToPointer(void* address, MachineType rep, Node* node) { + this->Store(rep, this->PointerConstant(address), node); + } + + Node* StringConstant(const char* string) { + return this->HeapConstant( + this->isolate()->factory()->InternalizeUtf8String(string)); + } + + void CheckNumber(double expected, Object* number) { + CHECK(this->isolate()->factory()->NewNumber(expected)->SameValue(number)); + } + + void CheckString(const char* expected, Object* string) { + CHECK( + this->isolate()->factory()->InternalizeUtf8String(expected)->SameValue( + string)); + } + + void GenerateCode() { Generate(); } + + protected: + virtual void VerifyParameters(int parameter_count, + MachineType* parameter_types) { + CHECK_EQ(this->parameter_count(), parameter_count); + const MachineType* expected_types = this->parameter_types(); + for (int i = 0; i < parameter_count; i++) { + CHECK_EQ(expected_types[i], parameter_types[i]); + } + } + + virtual byte* Generate() { + if (code_.is_null()) { + Schedule* schedule = this->Export(); + CallDescriptor* call_descriptor = this->call_descriptor(); + Graph* graph = this->graph(); + CompilationInfo info(graph->zone()->isolate(), graph->zone()); + Linkage linkage(&info, call_descriptor); + Pipeline pipeline(&info); + code_ = pipeline.GenerateCodeForMachineGraph(&linkage, graph, schedule); + } + return this->code_.ToHandleChecked()->entry(); + } + + private: + MaybeHandle<Code> code_; +}; + + +template <typename ReturnType> +class RawMachineAssemblerTester + : public MachineAssemblerTester<RawMachineAssembler>, + public CallHelper2<ReturnType, RawMachineAssemblerTester<ReturnType> > { + public: + RawMachineAssemblerTester(MachineType p0 = kMachineLast, + MachineType p1 = kMachineLast, + MachineType p2 = kMachineLast, + MachineType p3 = kMachineLast, + MachineType p4 = kMachineLast) + : MachineAssemblerTester<RawMachineAssembler>( + ReturnValueTraits<ReturnType>::Representation(), p0, p1, p2, p3, + p4) {} + + template <typename Ci, typename Fn> + void Run(const Ci& ci, const Fn& fn) { + typename Ci::const_iterator i; + for (i = ci.begin(); i != ci.end(); ++i) { + CHECK_EQ(fn(*i), this->Call(*i)); + } + } + + template <typename Ci, typename Cj, typename Fn> + void Run(const Ci& ci, const Cj& cj, const Fn& fn) { + typename Ci::const_iterator i; + typename Cj::const_iterator j; + for (i = ci.begin(); i != ci.end(); ++i) { + for (j = cj.begin(); j != cj.end(); ++j) { + CHECK_EQ(fn(*i, *j), 
this->Call(*i, *j)); + } + } + } +}; + + +template <typename ReturnType> +class StructuredMachineAssemblerTester + : public MachineAssemblerTester<StructuredMachineAssembler>, + public CallHelper2<ReturnType, + StructuredMachineAssemblerTester<ReturnType> > { + public: + StructuredMachineAssemblerTester(MachineType p0 = kMachineLast, + MachineType p1 = kMachineLast, + MachineType p2 = kMachineLast, + MachineType p3 = kMachineLast, + MachineType p4 = kMachineLast) + : MachineAssemblerTester<StructuredMachineAssembler>( + ReturnValueTraits<ReturnType>::Representation(), p0, p1, p2, p3, + p4) {} +}; + + +static const bool USE_RESULT_BUFFER = true; +static const bool USE_RETURN_REGISTER = false; +static const int32_t CHECK_VALUE = 0x99BEEDCE; + + +// TODO(titzer): use the C-style calling convention, or any register-based +// calling convention for binop tests. +template <typename CType, MachineType rep, bool use_result_buffer> +class BinopTester { + public: + explicit BinopTester(RawMachineAssemblerTester<int32_t>* tester) + : T(tester), + param0(T->LoadFromPointer(&p0, rep)), + param1(T->LoadFromPointer(&p1, rep)), + p0(static_cast<CType>(0)), + p1(static_cast<CType>(0)), + result(static_cast<CType>(0)) {} + + RawMachineAssemblerTester<int32_t>* T; + Node* param0; + Node* param1; + + CType call(CType a0, CType a1) { + p0 = a0; + p1 = a1; + if (use_result_buffer) { + CHECK_EQ(CHECK_VALUE, T->Call()); + return result; + } else { + return T->Call(); + } + } + + void AddReturn(Node* val) { + if (use_result_buffer) { + T->Store(rep, T->PointerConstant(&result), T->Int32Constant(0), val); + T->Return(T->Int32Constant(CHECK_VALUE)); + } else { + T->Return(val); + } + } + + template <typename Ci, typename Cj, typename Fn> + void Run(const Ci& ci, const Cj& cj, const Fn& fn) { + typename Ci::const_iterator i; + typename Cj::const_iterator j; + for (i = ci.begin(); i != ci.end(); ++i) { + for (j = cj.begin(); j != cj.end(); ++j) { + CHECK_EQ(fn(*i, *j), this->call(*i, *j)); + } + } + } + + protected: + CType p0; + CType p1; + CType result; +}; + + +// A helper class for testing code sequences that take two int parameters and +// return an int value. +class Int32BinopTester + : public BinopTester<int32_t, kMachineWord32, USE_RETURN_REGISTER> { + public: + explicit Int32BinopTester(RawMachineAssemblerTester<int32_t>* tester) + : BinopTester<int32_t, kMachineWord32, USE_RETURN_REGISTER>(tester) {} + + int32_t call(uint32_t a0, uint32_t a1) { + p0 = static_cast<int32_t>(a0); + p1 = static_cast<int32_t>(a1); + return T->Call(); + } +}; + + +// A helper class for testing code sequences that take two double parameters and +// return a double value. +// TODO(titzer): figure out how to return doubles correctly on ia32. +class Float64BinopTester + : public BinopTester<double, kMachineFloat64, USE_RESULT_BUFFER> { + public: + explicit Float64BinopTester(RawMachineAssemblerTester<int32_t>* tester) + : BinopTester<double, kMachineFloat64, USE_RESULT_BUFFER>(tester) {} +}; + + +// A helper class for testing code sequences that take two pointer parameters +// and return a pointer value. +// TODO(titzer): pick word size of pointers based on V8_TARGET. 
+template <typename Type>
+class PointerBinopTester
+    : public BinopTester<Type*, kMachineWord32, USE_RETURN_REGISTER> {
+ public:
+  explicit PointerBinopTester(RawMachineAssemblerTester<int32_t>* tester)
+      : BinopTester<Type*, kMachineWord32, USE_RETURN_REGISTER>(tester) {}
+};
+
+
+// A helper class for testing code sequences that take two tagged parameters and
+// return a tagged value.
+template <typename Type>
+class TaggedBinopTester
+    : public BinopTester<Type*, kMachineTagged, USE_RETURN_REGISTER> {
+ public:
+  explicit TaggedBinopTester(RawMachineAssemblerTester<int32_t>* tester)
+      : BinopTester<Type*, kMachineTagged, USE_RETURN_REGISTER>(tester) {}
+};
+
+// A helper class for testing compares. Wraps a machine opcode and provides
+// evaluation routines and the operators.
+class CompareWrapper {
+ public:
+  explicit CompareWrapper(IrOpcode::Value op) : opcode(op) {}
+
+  Node* MakeNode(RawMachineAssemblerTester<int32_t>* m, Node* a, Node* b) {
+    return m->NewNode(op(m->machine()), a, b);
+  }
+
+  Operator* op(MachineOperatorBuilder* machine) {
+    switch (opcode) {
+      case IrOpcode::kWord32Equal:
+        return machine->Word32Equal();
+      case IrOpcode::kInt32LessThan:
+        return machine->Int32LessThan();
+      case IrOpcode::kInt32LessThanOrEqual:
+        return machine->Int32LessThanOrEqual();
+      case IrOpcode::kUint32LessThan:
+        return machine->Uint32LessThan();
+      case IrOpcode::kUint32LessThanOrEqual:
+        return machine->Uint32LessThanOrEqual();
+      case IrOpcode::kFloat64Equal:
+        return machine->Float64Equal();
+      case IrOpcode::kFloat64LessThan:
+        return machine->Float64LessThan();
+      case IrOpcode::kFloat64LessThanOrEqual:
+        return machine->Float64LessThanOrEqual();
+      default:
+        UNREACHABLE();
+    }
+    return NULL;
+  }
+
+  bool Int32Compare(int32_t a, int32_t b) {
+    switch (opcode) {
+      case IrOpcode::kWord32Equal:
+        return a == b;
+      case IrOpcode::kInt32LessThan:
+        return a < b;
+      case IrOpcode::kInt32LessThanOrEqual:
+        return a <= b;
+      case IrOpcode::kUint32LessThan:
+        return static_cast<uint32_t>(a) < static_cast<uint32_t>(b);
+      case IrOpcode::kUint32LessThanOrEqual:
+        return static_cast<uint32_t>(a) <= static_cast<uint32_t>(b);
+      default:
+        UNREACHABLE();
+    }
+    return false;
+  }
+
+  bool Float64Compare(double a, double b) {
+    switch (opcode) {
+      case IrOpcode::kFloat64Equal:
+        return a == b;
+      case IrOpcode::kFloat64LessThan:
+        return a < b;
+      case IrOpcode::kFloat64LessThanOrEqual:
+        return a <= b;
+      default:
+        UNREACHABLE();
+    }
+    return false;
+  }
+
+  IrOpcode::Value opcode;
+};
+
+
+// A small closure class to generate code for a function of two inputs that
+// produces a single output so that it can be used in many different contexts.
+// The {expected()} method should compute the expected output for a given
+// pair of inputs.
+template <typename T>
+class BinopGen {
+ public:
+  virtual void gen(RawMachineAssemblerTester<int32_t>* m, Node* a, Node* b) = 0;
+  virtual T expected(T a, T b) = 0;
+  virtual ~BinopGen() {}
+};
+
+// A helper class to generate various combinations of input shapes and run the
+// generated code to ensure it produces the correct results.
+class Int32BinopInputShapeTester { + public: + explicit Int32BinopInputShapeTester(BinopGen<int32_t>* g) : gen(g) {} + + void TestAllInputShapes(); + + private: + BinopGen<int32_t>* gen; + int32_t input_a; + int32_t input_b; + + void Run(RawMachineAssemblerTester<int32_t>* m); + void RunLeft(RawMachineAssemblerTester<int32_t>* m); + void RunRight(RawMachineAssemblerTester<int32_t>* m); +}; +} // namespace compiler +} // namespace internal +} // namespace v8 + +#endif // V8_CCTEST_COMPILER_CODEGEN_TESTER_H_ diff --git a/deps/v8/test/cctest/compiler/function-tester.h b/deps/v8/test/cctest/compiler/function-tester.h new file mode 100644 index 00000000000..2ed2fe99883 --- /dev/null +++ b/deps/v8/test/cctest/compiler/function-tester.h @@ -0,0 +1,194 @@ +// Copyright 2014 the V8 project authors. All rights reserved. +// Use of this source code is governed by a BSD-style license that can be +// found in the LICENSE file. + +#ifndef V8_CCTEST_COMPILER_FUNCTION_TESTER_H_ +#define V8_CCTEST_COMPILER_FUNCTION_TESTER_H_ + +#include "src/v8.h" +#include "test/cctest/cctest.h" + +#include "src/compiler.h" +#include "src/compiler/pipeline.h" +#include "src/execution.h" +#include "src/full-codegen.h" +#include "src/handles.h" +#include "src/objects-inl.h" +#include "src/parser.h" +#include "src/rewriter.h" +#include "src/scopes.h" + +#define USE_CRANKSHAFT 0 + +namespace v8 { +namespace internal { +namespace compiler { + +class FunctionTester : public InitializedHandleScope { + public: + explicit FunctionTester(const char* source) + : isolate(main_isolate()), + function((FLAG_allow_natives_syntax = true, NewFunction(source))) { + Compile(function); + } + + Isolate* isolate; + Handle<JSFunction> function; + + Handle<JSFunction> Compile(Handle<JSFunction> function) { +#if V8_TURBOFAN_TARGET + CompilationInfoWithZone info(function); + + CHECK(Parser::Parse(&info)); + StrictMode strict_mode = info.function()->strict_mode(); + info.SetStrictMode(strict_mode); + info.SetOptimizing(BailoutId::None(), Handle<Code>(function->code())); + CHECK(Rewriter::Rewrite(&info)); + CHECK(Scope::Analyze(&info)); + CHECK_NE(NULL, info.scope()); + + EnsureDeoptimizationSupport(&info); + + Pipeline pipeline(&info); + Handle<Code> code = pipeline.GenerateCode(); + + CHECK(!code.is_null()); + function->ReplaceCode(*code); +#elif USE_CRANKSHAFT + Handle<Code> unoptimized = Handle<Code>(function->code()); + Handle<Code> code = Compiler::GetOptimizedCode(function, unoptimized, + Compiler::NOT_CONCURRENT); + CHECK(!code.is_null()); +#if ENABLE_DISASSEMBLER + if (FLAG_print_opt_code) { + CodeTracer::Scope tracing_scope(isolate->GetCodeTracer()); + code->Disassemble("test code", tracing_scope.file()); + } +#endif + function->ReplaceCode(*code); +#endif + return function; + } + + static void EnsureDeoptimizationSupport(CompilationInfo* info) { + bool should_recompile = !info->shared_info()->has_deoptimization_support(); + if (should_recompile) { + CompilationInfoWithZone unoptimized(info->shared_info()); + // Note that we use the same AST that we will use for generating the + // optimized code. 
+ unoptimized.SetFunction(info->function()); + unoptimized.PrepareForCompilation(info->scope()); + unoptimized.SetContext(info->context()); + if (should_recompile) unoptimized.EnableDeoptimizationSupport(); + bool succeeded = FullCodeGenerator::MakeCode(&unoptimized); + CHECK(succeeded); + Handle<SharedFunctionInfo> shared = info->shared_info(); + shared->EnableDeoptimizationSupport(*unoptimized.code()); + } + } + + MaybeHandle<Object> Call(Handle<Object> a, Handle<Object> b) { + Handle<Object> args[] = {a, b}; + return Execution::Call(isolate, function, undefined(), 2, args, false); + } + + void CheckThrows(Handle<Object> a, Handle<Object> b) { + TryCatch try_catch; + MaybeHandle<Object> no_result = Call(a, b); + CHECK(isolate->has_pending_exception()); + CHECK(try_catch.HasCaught()); + CHECK(no_result.is_null()); + // TODO(mstarzinger): Temporary workaround for issue chromium:362388. + isolate->OptionalRescheduleException(true); + } + + v8::Handle<v8::Message> CheckThrowsReturnMessage(Handle<Object> a, + Handle<Object> b) { + TryCatch try_catch; + MaybeHandle<Object> no_result = Call(a, b); + CHECK(isolate->has_pending_exception()); + CHECK(try_catch.HasCaught()); + CHECK(no_result.is_null()); + // TODO(mstarzinger): Calling OptionalRescheduleException is a dirty hack, + // it's the only way to make Message() not to assert because an external + // exception has been caught by the try_catch. + isolate->OptionalRescheduleException(true); + return try_catch.Message(); + } + + void CheckCall(Handle<Object> expected, Handle<Object> a, Handle<Object> b) { + Handle<Object> result = Call(a, b).ToHandleChecked(); + CHECK(expected->SameValue(*result)); + } + + void CheckCall(Handle<Object> expected, Handle<Object> a) { + CheckCall(expected, a, undefined()); + } + + void CheckCall(Handle<Object> expected) { + CheckCall(expected, undefined(), undefined()); + } + + void CheckCall(double expected, double a, double b) { + CheckCall(Val(expected), Val(a), Val(b)); + } + + void CheckTrue(Handle<Object> a, Handle<Object> b) { + CheckCall(true_value(), a, b); + } + + void CheckTrue(Handle<Object> a) { CheckCall(true_value(), a, undefined()); } + + void CheckTrue(double a, double b) { + CheckCall(true_value(), Val(a), Val(b)); + } + + void CheckFalse(Handle<Object> a, Handle<Object> b) { + CheckCall(false_value(), a, b); + } + + void CheckFalse(Handle<Object> a) { + CheckCall(false_value(), a, undefined()); + } + + void CheckFalse(double a, double b) { + CheckCall(false_value(), Val(a), Val(b)); + } + + Handle<JSFunction> NewFunction(const char* source) { + return v8::Utils::OpenHandle( + *v8::Handle<v8::Function>::Cast(CompileRun(source))); + } + + Handle<JSObject> NewObject(const char* source) { + return v8::Utils::OpenHandle( + *v8::Handle<v8::Object>::Cast(CompileRun(source))); + } + + Handle<String> Val(const char* string) { + return isolate->factory()->InternalizeUtf8String(string); + } + + Handle<Object> Val(double value) { + return isolate->factory()->NewNumber(value); + } + + Handle<Object> infinity() { return isolate->factory()->infinity_value(); } + + Handle<Object> minus_infinity() { return Val(-V8_INFINITY); } + + Handle<Object> nan() { return isolate->factory()->nan_value(); } + + Handle<Object> undefined() { return isolate->factory()->undefined_value(); } + + Handle<Object> null() { return isolate->factory()->null_value(); } + + Handle<Object> true_value() { return isolate->factory()->true_value(); } + + Handle<Object> false_value() { return isolate->factory()->false_value(); } +}; +} +} +} 
// namespace v8::internal::compiler + +#endif // V8_CCTEST_COMPILER_FUNCTION_TESTER_H_ diff --git a/deps/v8/test/cctest/compiler/graph-builder-tester.cc b/deps/v8/test/cctest/compiler/graph-builder-tester.cc new file mode 100644 index 00000000000..fb6e4a28ce9 --- /dev/null +++ b/deps/v8/test/cctest/compiler/graph-builder-tester.cc @@ -0,0 +1,65 @@ +// Copyright 2014 the V8 project authors. All rights reserved. +// Use of this source code is governed by a BSD-style license that can be +// found in the LICENSE file. + +#include "test/cctest/compiler/graph-builder-tester.h" +#include "src/compiler/pipeline.h" + +namespace v8 { +namespace internal { +namespace compiler { + +MachineCallHelper::MachineCallHelper(Zone* zone, + MachineCallDescriptorBuilder* builder) + : CallHelper(zone->isolate()), + call_descriptor_builder_(builder), + parameters_(NULL), + graph_(NULL) {} + + +void MachineCallHelper::InitParameters(GraphBuilder* builder, + CommonOperatorBuilder* common) { + DCHECK_EQ(NULL, parameters_); + graph_ = builder->graph(); + if (parameter_count() == 0) return; + parameters_ = graph_->zone()->NewArray<Node*>(parameter_count()); + for (int i = 0; i < parameter_count(); ++i) { + parameters_[i] = builder->NewNode(common->Parameter(i), graph_->start()); + } +} + + +byte* MachineCallHelper::Generate() { + DCHECK(parameter_count() == 0 || parameters_ != NULL); + if (!Pipeline::SupportedBackend()) return NULL; + if (code_.is_null()) { + Zone* zone = graph_->zone(); + CompilationInfo info(zone->isolate(), zone); + Linkage linkage(&info, call_descriptor_builder_->BuildCallDescriptor(zone)); + Pipeline pipeline(&info); + code_ = pipeline.GenerateCodeForMachineGraph(&linkage, graph_); + } + return code_.ToHandleChecked()->entry(); +} + + +void MachineCallHelper::VerifyParameters(int parameter_count, + MachineType* parameter_types) { + CHECK_EQ(this->parameter_count(), parameter_count); + const MachineType* expected_types = + call_descriptor_builder_->parameter_types(); + for (int i = 0; i < parameter_count; i++) { + CHECK_EQ(expected_types[i], parameter_types[i]); + } +} + + +Node* MachineCallHelper::Parameter(int offset) { + DCHECK_NE(NULL, parameters_); + DCHECK(0 <= offset && offset < parameter_count()); + return parameters_[offset]; +} + +} // namespace compiler +} // namespace internal +} // namespace v8 diff --git a/deps/v8/test/cctest/compiler/graph-builder-tester.h b/deps/v8/test/cctest/compiler/graph-builder-tester.h new file mode 100644 index 00000000000..64d9b8a73d8 --- /dev/null +++ b/deps/v8/test/cctest/compiler/graph-builder-tester.h @@ -0,0 +1,114 @@ +// Copyright 2014 the V8 project authors. All rights reserved. +// Use of this source code is governed by a BSD-style license that can be +// found in the LICENSE file. + +#ifndef V8_CCTEST_COMPILER_GRAPH_BUILDER_TESTER_H_ +#define V8_CCTEST_COMPILER_GRAPH_BUILDER_TESTER_H_ + +#include "src/v8.h" +#include "test/cctest/cctest.h" + +#include "src/compiler/common-operator.h" +#include "src/compiler/graph-builder.h" +#include "src/compiler/machine-node-factory.h" +#include "src/compiler/machine-operator.h" +#include "src/compiler/simplified-node-factory.h" +#include "src/compiler/simplified-operator.h" +#include "test/cctest/compiler/call-tester.h" +#include "test/cctest/compiler/simplified-graph-builder.h" + +namespace v8 { +namespace internal { +namespace compiler { + +// A class that just passes node creation on to the Graph. 
+class DirectGraphBuilder : public GraphBuilder {
+ public:
+  explicit DirectGraphBuilder(Graph* graph) : GraphBuilder(graph) {}
+  virtual ~DirectGraphBuilder() {}
+
+ protected:
+  virtual Node* MakeNode(Operator* op, int value_input_count,
+                         Node** value_inputs) {
+    return graph()->NewNode(op, value_input_count, value_inputs);
+  }
+};
+
+
+class MachineCallHelper : public CallHelper {
+ public:
+  MachineCallHelper(Zone* zone, MachineCallDescriptorBuilder* builder);
+
+  Node* Parameter(int offset);
+
+  void GenerateCode() { Generate(); }
+
+ protected:
+  virtual byte* Generate();
+  virtual void VerifyParameters(int parameter_count, MachineType* parameters);
+  void InitParameters(GraphBuilder* builder, CommonOperatorBuilder* common);
+
+ protected:
+  int parameter_count() const {
+    return call_descriptor_builder_->parameter_count();
+  }
+
+ private:
+  MachineCallDescriptorBuilder* call_descriptor_builder_;
+  Node** parameters_;
+  // TODO(dcarney): shouldn't need graph stored.
+  Graph* graph_;
+  MaybeHandle<Code> code_;
+};
+
+
+class GraphAndBuilders {
+ public:
+  explicit GraphAndBuilders(Zone* zone)
+      : main_graph_(new (zone) Graph(zone)),
+        main_common_(zone),
+        main_machine_(zone),
+        main_simplified_(zone) {}
+
+ protected:
+  // Prefixed with main_ to avoid naming conflicts.
+  Graph* main_graph_;
+  CommonOperatorBuilder main_common_;
+  MachineOperatorBuilder main_machine_;
+  SimplifiedOperatorBuilder main_simplified_;
+};
+
+
+template <typename ReturnType>
+class GraphBuilderTester
+    : public HandleAndZoneScope,
+      private GraphAndBuilders,
+      public MachineCallHelper,
+      public SimplifiedGraphBuilder,
+      public CallHelper2<ReturnType, GraphBuilderTester<ReturnType> > {
+ public:
+  explicit GraphBuilderTester(MachineType p0 = kMachineLast,
+                              MachineType p1 = kMachineLast,
+                              MachineType p2 = kMachineLast,
+                              MachineType p3 = kMachineLast,
+                              MachineType p4 = kMachineLast)
+      : GraphAndBuilders(main_zone()),
+        MachineCallHelper(
+            main_zone(),
+            ToCallDescriptorBuilder(
+                main_zone(), ReturnValueTraits<ReturnType>::Representation(),
+                p0, p1, p2, p3, p4)),
+        SimplifiedGraphBuilder(main_graph_, &main_common_, &main_machine_,
+                               &main_simplified_) {
+    Begin(parameter_count());
+    InitParameters(this, &main_common_);
+  }
+  virtual ~GraphBuilderTester() {}
+
+  Factory* factory() const { return isolate()->factory(); }
+};
+}  // namespace compiler
+}  // namespace internal
+}  // namespace v8
+
+#endif  // V8_CCTEST_COMPILER_GRAPH_BUILDER_TESTER_H_
diff --git a/deps/v8/test/cctest/compiler/graph-tester.h b/deps/v8/test/cctest/compiler/graph-tester.h
new file mode 100644
index 00000000000..e56924540b3
--- /dev/null
+++ b/deps/v8/test/cctest/compiler/graph-tester.h
@@ -0,0 +1,42 @@
+// Copyright 2014 the V8 project authors. All rights reserved.
+// Use of this source code is governed by a BSD-style license that can be
+// found in the LICENSE file.
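The GraphBuilderTester defined above wires Begin() and InitParameters() into its constructor, so a test only emits nodes, closes the graph, and calls the generated code. A minimal sketch under those assumptions (the test name is hypothetical; the SupportedTarget() guard mirrors the one used by the lowering tests later in this patch):

// Sketch only, not part of the patch: identity function on one int32
// parameter, built through GraphBuilderTester and then called directly.
TEST(GraphBuilderIdentitySketch) {
  GraphBuilderTester<int32_t> t(kMachineWord32);
  t.Return(t.Parameter(0));
  t.End();  // close the graph before generating code
  if (Pipeline::SupportedTarget()) {
    CHECK_EQ(42, t.Call(42));
  }
}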
+ +#ifndef V8_CCTEST_COMPILER_GRAPH_TESTER_H_ +#define V8_CCTEST_COMPILER_GRAPH_TESTER_H_ + +#include "src/v8.h" +#include "test/cctest/cctest.h" + +#include "src/compiler/common-operator.h" +#include "src/compiler/graph.h" + +namespace v8 { +namespace internal { +namespace compiler { + +class GraphTester : public HandleAndZoneScope, public Graph { + public: + GraphTester() : Graph(main_zone()) {} +}; + + +class GraphWithStartNodeTester : public GraphTester { + public: + explicit GraphWithStartNodeTester(int num_parameters = 0) + : builder_(main_zone()), + start_node_(NewNode(builder_.Start(num_parameters))) { + SetStart(start_node_); + } + + Node* start_node() { return start_node_; } + + private: + CommonOperatorBuilder builder_; + Node* start_node_; +}; +} +} +} // namespace v8::internal::compiler + +#endif // V8_CCTEST_COMPILER_GRAPH_TESTER_H_ diff --git a/deps/v8/test/cctest/compiler/instruction-selector-tester.h b/deps/v8/test/cctest/compiler/instruction-selector-tester.h new file mode 100644 index 00000000000..60adaec8239 --- /dev/null +++ b/deps/v8/test/cctest/compiler/instruction-selector-tester.h @@ -0,0 +1,127 @@ +// Copyright 2014 the V8 project authors. All rights reserved. +// Use of this source code is governed by a BSD-style license that can be +// found in the LICENSE file. + +#ifndef V8_CCTEST_COMPILER_INSTRUCTION_SELECTOR_TEST_H_ +#define V8_CCTEST_COMPILER_INSTRUCTION_SELECTOR_TEST_H_ + +#include <deque> +#include <set> + +#include "src/compiler/instruction-selector.h" +#include "src/compiler/raw-machine-assembler.h" +#include "src/ostreams.h" +#include "test/cctest/cctest.h" + +namespace v8 { +namespace internal { +namespace compiler { + +typedef std::set<int> VirtualRegisterSet; + +enum InstructionSelectorTesterMode { kTargetMode, kInternalMode }; + +class InstructionSelectorTester : public HandleAndZoneScope, + public RawMachineAssembler { + public: + enum Mode { kTargetMode, kInternalMode }; + + static const int kParameterCount = 3; + static MachineType* BuildParameterArray(Zone* zone) { + MachineType* array = zone->NewArray<MachineType>(kParameterCount); + for (int i = 0; i < kParameterCount; ++i) { + array[i] = kMachineWord32; + } + return array; + } + + InstructionSelectorTester() + : RawMachineAssembler( + new (main_zone()) Graph(main_zone()), new (main_zone()) + MachineCallDescriptorBuilder(kMachineWord32, kParameterCount, + BuildParameterArray(main_zone())), + MachineOperatorBuilder::pointer_rep()) {} + + void SelectInstructions(CpuFeature feature) { + SelectInstructions(InstructionSelector::Features(feature)); + } + + void SelectInstructions(CpuFeature feature1, CpuFeature feature2) { + SelectInstructions(InstructionSelector::Features(feature1, feature2)); + } + + void SelectInstructions(Mode mode = kTargetMode) { + SelectInstructions(InstructionSelector::Features(), mode); + } + + void SelectInstructions(InstructionSelector::Features features, + Mode mode = kTargetMode) { + OFStream out(stdout); + Schedule* schedule = Export(); + CHECK_NE(0, graph()->NodeCount()); + CompilationInfo info(main_isolate(), main_zone()); + Linkage linkage(&info, call_descriptor()); + InstructionSequence sequence(&linkage, graph(), schedule); + SourcePositionTable source_positions(graph()); + InstructionSelector selector(&sequence, &source_positions, features); + selector.SelectInstructions(); + out << "--- Code sequence after instruction selection --- " << endl + << sequence; + for (InstructionSequence::const_iterator i = sequence.begin(); + i != sequence.end(); ++i) { + 
Instruction* instr = *i; + if (instr->opcode() < 0) continue; + if (mode == kTargetMode) { + switch (ArchOpcodeField::decode(instr->opcode())) { +#define CASE(Name) \ + case k##Name: \ + break; + TARGET_ARCH_OPCODE_LIST(CASE) +#undef CASE + default: + continue; + } + } + code.push_back(instr); + } + for (int vreg = 0; vreg < sequence.VirtualRegisterCount(); ++vreg) { + if (sequence.IsDouble(vreg)) { + CHECK(!sequence.IsReference(vreg)); + doubles.insert(vreg); + } + if (sequence.IsReference(vreg)) { + CHECK(!sequence.IsDouble(vreg)); + references.insert(vreg); + } + } + immediates.assign(sequence.immediates().begin(), + sequence.immediates().end()); + } + + int32_t ToInt32(const InstructionOperand* operand) const { + size_t i = operand->index(); + CHECK(i < immediates.size()); + CHECK_EQ(InstructionOperand::IMMEDIATE, operand->kind()); + return immediates[i].ToInt32(); + } + + std::deque<Instruction*> code; + VirtualRegisterSet doubles; + VirtualRegisterSet references; + std::deque<Constant> immediates; +}; + + +static inline void CheckSameVreg(InstructionOperand* exp, + InstructionOperand* val) { + CHECK_EQ(InstructionOperand::UNALLOCATED, exp->kind()); + CHECK_EQ(InstructionOperand::UNALLOCATED, val->kind()); + CHECK_EQ(UnallocatedOperand::cast(exp)->virtual_register(), + UnallocatedOperand::cast(val)->virtual_register()); +} + +} // namespace compiler +} // namespace internal +} // namespace v8 + +#endif // V8_CCTEST_COMPILER_INSTRUCTION_SELECTOR_TEST_H_ diff --git a/deps/v8/test/cctest/compiler/simplified-graph-builder.cc b/deps/v8/test/cctest/compiler/simplified-graph-builder.cc new file mode 100644 index 00000000000..c688399eaec --- /dev/null +++ b/deps/v8/test/cctest/compiler/simplified-graph-builder.cc @@ -0,0 +1,78 @@ +// Copyright 2014 the V8 project authors. All rights reserved. +// Use of this source code is governed by a BSD-style license that can be +// found in the LICENSE file. 
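InstructionSelectorTester, defined just above, is itself a RawMachineAssembler, so a test builds a small graph, runs selection, and then inspects the public members (code, doubles, references, immediates). A minimal sketch of that flow (the test name is hypothetical, not part of the change):

// Sketch only, not part of the patch: run instruction selection over a
// single compare and confirm the target-mode sequence is non-empty.
TEST(SelectWord32EqualSketch) {
  InstructionSelectorTester m;
  m.Return(m.Word32Equal(m.Parameter(0), m.Parameter(1)));
  m.SelectInstructions();
  CHECK(!m.code.empty());
}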
+ +#include "test/cctest/compiler/simplified-graph-builder.h" + +namespace v8 { +namespace internal { +namespace compiler { + +SimplifiedGraphBuilder::SimplifiedGraphBuilder( + Graph* graph, CommonOperatorBuilder* common, + MachineOperatorBuilder* machine, SimplifiedOperatorBuilder* simplified) + : StructuredGraphBuilder(graph, common), + machine_(machine), + simplified_(simplified) {} + + +void SimplifiedGraphBuilder::Begin(int num_parameters) { + DCHECK(graph()->start() == NULL); + Node* start = graph()->NewNode(common()->Start(num_parameters)); + graph()->SetStart(start); + set_environment(new (zone()) Environment(this, start)); +} + + +void SimplifiedGraphBuilder::Return(Node* value) { + Node* control = NewNode(common()->Return(), value); + UpdateControlDependencyToLeaveFunction(control); +} + + +void SimplifiedGraphBuilder::End() { + environment()->UpdateControlDependency(exit_control()); + graph()->SetEnd(NewNode(common()->End())); +} + + +SimplifiedGraphBuilder::Environment::Environment( + SimplifiedGraphBuilder* builder, Node* control_dependency) + : StructuredGraphBuilder::Environment(builder, control_dependency) {} + + +Node* SimplifiedGraphBuilder::Environment::Top() { + DCHECK(!values()->empty()); + return values()->back(); +} + + +void SimplifiedGraphBuilder::Environment::Push(Node* node) { + values()->push_back(node); +} + + +Node* SimplifiedGraphBuilder::Environment::Pop() { + DCHECK(!values()->empty()); + Node* back = values()->back(); + values()->pop_back(); + return back; +} + + +void SimplifiedGraphBuilder::Environment::Poke(size_t depth, Node* node) { + DCHECK(depth < values()->size()); + size_t index = values()->size() - depth - 1; + values()->at(index) = node; +} + + +Node* SimplifiedGraphBuilder::Environment::Peek(size_t depth) { + DCHECK(depth < values()->size()); + size_t index = values()->size() - depth - 1; + return values()->at(index); +} + +} // namespace compiler +} // namespace internal +} // namespace v8 diff --git a/deps/v8/test/cctest/compiler/simplified-graph-builder.h b/deps/v8/test/cctest/compiler/simplified-graph-builder.h new file mode 100644 index 00000000000..fa9161e1713 --- /dev/null +++ b/deps/v8/test/cctest/compiler/simplified-graph-builder.h @@ -0,0 +1,73 @@ +// Copyright 2014 the V8 project authors. All rights reserved. +// Use of this source code is governed by a BSD-style license that can be +// found in the LICENSE file. + +#ifndef V8_CCTEST_COMPILER_SIMPLIFIED_GRAPH_BUILDER_H_ +#define V8_CCTEST_COMPILER_SIMPLIFIED_GRAPH_BUILDER_H_ + +#include "src/compiler/common-operator.h" +#include "src/compiler/graph-builder.h" +#include "src/compiler/machine-node-factory.h" +#include "src/compiler/machine-operator.h" +#include "src/compiler/simplified-node-factory.h" +#include "src/compiler/simplified-operator.h" +#include "test/cctest/cctest.h" +#include "test/cctest/compiler/call-tester.h" + +namespace v8 { +namespace internal { +namespace compiler { + +class SimplifiedGraphBuilder + : public StructuredGraphBuilder, + public MachineNodeFactory<SimplifiedGraphBuilder>, + public SimplifiedNodeFactory<SimplifiedGraphBuilder> { + public: + SimplifiedGraphBuilder(Graph* graph, CommonOperatorBuilder* common, + MachineOperatorBuilder* machine, + SimplifiedOperatorBuilder* simplified); + virtual ~SimplifiedGraphBuilder() {} + + class Environment : public StructuredGraphBuilder::Environment { + public: + Environment(SimplifiedGraphBuilder* builder, Node* control_dependency); + + // TODO(dcarney): encode somehow and merge into StructuredGraphBuilder. 
+ // SSA renaming operations. + Node* Top(); + void Push(Node* node); + Node* Pop(); + void Poke(size_t depth, Node* node); + Node* Peek(size_t depth); + }; + + Isolate* isolate() const { return zone()->isolate(); } + Zone* zone() const { return StructuredGraphBuilder::zone(); } + CommonOperatorBuilder* common() const { + return StructuredGraphBuilder::common(); + } + MachineOperatorBuilder* machine() const { return machine_; } + SimplifiedOperatorBuilder* simplified() const { return simplified_; } + Environment* environment() { + return reinterpret_cast<Environment*>( + StructuredGraphBuilder::environment()); + } + + // Initialize graph and builder. + void Begin(int num_parameters); + + void Return(Node* value); + + // Close the graph. + void End(); + + private: + MachineOperatorBuilder* machine_; + SimplifiedOperatorBuilder* simplified_; +}; + +} // namespace compiler +} // namespace internal +} // namespace v8 + +#endif // V8_CCTEST_COMPILER_SIMPLIFIED_GRAPH_BUILDER_H_ diff --git a/deps/v8/test/cctest/compiler/test-branch-combine.cc b/deps/v8/test/cctest/compiler/test-branch-combine.cc new file mode 100644 index 00000000000..61dffdca875 --- /dev/null +++ b/deps/v8/test/cctest/compiler/test-branch-combine.cc @@ -0,0 +1,462 @@ +// Copyright 2014 the V8 project authors. All rights reserved. +// Use of this source code is governed by a BSD-style license that can be +// found in the LICENSE file. + +#include "src/v8.h" + +#include "test/cctest/cctest.h" +#include "test/cctest/compiler/codegen-tester.h" +#include "test/cctest/compiler/value-helper.h" + +#if V8_TURBOFAN_TARGET + +using namespace v8::internal; +using namespace v8::internal::compiler; + +typedef RawMachineAssembler::Label MLabel; + +static IrOpcode::Value int32cmp_opcodes[] = { + IrOpcode::kWord32Equal, IrOpcode::kInt32LessThan, + IrOpcode::kInt32LessThanOrEqual, IrOpcode::kUint32LessThan, + IrOpcode::kUint32LessThanOrEqual}; + + +TEST(BranchCombineWord32EqualZero_1) { + // Test combining a branch with x == 0 + RawMachineAssemblerTester<int32_t> m(kMachineWord32); + int32_t eq_constant = -1033; + int32_t ne_constant = 825118; + Node* p0 = m.Parameter(0); + + MLabel blocka, blockb; + m.Branch(m.Word32Equal(p0, m.Int32Constant(0)), &blocka, &blockb); + m.Bind(&blocka); + m.Return(m.Int32Constant(eq_constant)); + m.Bind(&blockb); + m.Return(m.Int32Constant(ne_constant)); + + FOR_INT32_INPUTS(i) { + int32_t a = *i; + int32_t expect = a == 0 ? eq_constant : ne_constant; + CHECK_EQ(expect, m.Call(a)); + } +} + + +TEST(BranchCombineWord32EqualZero_chain) { + // Test combining a branch with a chain of x == 0 == 0 == 0 ... + int32_t eq_constant = -1133; + int32_t ne_constant = 815118; + + for (int k = 0; k < 6; k++) { + RawMachineAssemblerTester<int32_t> m(kMachineWord32); + Node* p0 = m.Parameter(0); + MLabel blocka, blockb; + Node* cond = p0; + for (int j = 0; j < k; j++) { + cond = m.Word32Equal(cond, m.Int32Constant(0)); + } + m.Branch(cond, &blocka, &blockb); + m.Bind(&blocka); + m.Return(m.Int32Constant(eq_constant)); + m.Bind(&blockb); + m.Return(m.Int32Constant(ne_constant)); + + FOR_INT32_INPUTS(i) { + int32_t a = *i; + int32_t expect = (k & 1) == 1 ? (a == 0 ? eq_constant : ne_constant) + : (a == 0 ? 
ne_constant : eq_constant);
+      CHECK_EQ(expect, m.Call(a));
+    }
+  }
+}
+
+
+TEST(BranchCombineInt32LessThanZero_1) {
+  // Test combining a branch with x < 0
+  RawMachineAssemblerTester<int32_t> m(kMachineWord32);
+  int32_t eq_constant = -1433;
+  int32_t ne_constant = 845118;
+  Node* p0 = m.Parameter(0);
+
+  MLabel blocka, blockb;
+  m.Branch(m.Int32LessThan(p0, m.Int32Constant(0)), &blocka, &blockb);
+  m.Bind(&blocka);
+  m.Return(m.Int32Constant(eq_constant));
+  m.Bind(&blockb);
+  m.Return(m.Int32Constant(ne_constant));
+
+  FOR_INT32_INPUTS(i) {
+    int32_t a = *i;
+    int32_t expect = a < 0 ? eq_constant : ne_constant;
+    CHECK_EQ(expect, m.Call(a));
+  }
+}
+
+
+TEST(BranchCombineUint32LessThan100_1) {
+  // Test combining a branch with x < 100
+  RawMachineAssemblerTester<int32_t> m(kMachineWord32);
+  int32_t eq_constant = 1471;
+  int32_t ne_constant = 88845718;
+  Node* p0 = m.Parameter(0);
+
+  MLabel blocka, blockb;
+  m.Branch(m.Uint32LessThan(p0, m.Int32Constant(100)), &blocka, &blockb);
+  m.Bind(&blocka);
+  m.Return(m.Int32Constant(eq_constant));
+  m.Bind(&blockb);
+  m.Return(m.Int32Constant(ne_constant));
+
+  FOR_UINT32_INPUTS(i) {
+    uint32_t a = *i;
+    int32_t expect = a < 100 ? eq_constant : ne_constant;
+    CHECK_EQ(expect, m.Call(a));
+  }
+}
+
+
+TEST(BranchCombineUint32LessThanOrEqual100_1) {
+  // Test combining a branch with x <= 100
+  RawMachineAssemblerTester<int32_t> m(kMachineWord32);
+  int32_t eq_constant = 1479;
+  int32_t ne_constant = 77845719;
+  Node* p0 = m.Parameter(0);
+
+  MLabel blocka, blockb;
+  m.Branch(m.Uint32LessThanOrEqual(p0, m.Int32Constant(100)), &blocka, &blockb);
+  m.Bind(&blocka);
+  m.Return(m.Int32Constant(eq_constant));
+  m.Bind(&blockb);
+  m.Return(m.Int32Constant(ne_constant));
+
+  FOR_UINT32_INPUTS(i) {
+    uint32_t a = *i;
+    int32_t expect = a <= 100 ? eq_constant : ne_constant;
+    CHECK_EQ(expect, m.Call(a));
+  }
+}
+
+
+TEST(BranchCombineZeroLessThanInt32_1) {
+  // Test combining a branch with 0 < x
+  RawMachineAssemblerTester<int32_t> m(kMachineWord32);
+  int32_t eq_constant = -2033;
+  int32_t ne_constant = 225118;
+  Node* p0 = m.Parameter(0);
+
+  MLabel blocka, blockb;
+  m.Branch(m.Int32LessThan(m.Int32Constant(0), p0), &blocka, &blockb);
+  m.Bind(&blocka);
+  m.Return(m.Int32Constant(eq_constant));
+  m.Bind(&blockb);
+  m.Return(m.Int32Constant(ne_constant));
+
+  FOR_INT32_INPUTS(i) {
+    int32_t a = *i;
+    int32_t expect = 0 < a ? eq_constant : ne_constant;
+    CHECK_EQ(expect, m.Call(a));
+  }
+}
+
+
+TEST(BranchCombineInt32GreaterThanZero_1) {
+  // Test combining a branch with x > 0
+  RawMachineAssemblerTester<int32_t> m(kMachineWord32);
+  int32_t eq_constant = -1073;
+  int32_t ne_constant = 825178;
+  Node* p0 = m.Parameter(0);
+
+  MLabel blocka, blockb;
+  m.Branch(m.Int32GreaterThan(p0, m.Int32Constant(0)), &blocka, &blockb);
+  m.Bind(&blocka);
+  m.Return(m.Int32Constant(eq_constant));
+  m.Bind(&blockb);
+  m.Return(m.Int32Constant(ne_constant));
+
+  FOR_INT32_INPUTS(i) {
+    int32_t a = *i;
+    int32_t expect = a > 0 ? eq_constant : ne_constant;
+    CHECK_EQ(expect, m.Call(a));
+  }
+}
+
+
+TEST(BranchCombineWord32EqualP) {
+  // Test combining a branch with a Word32Equal.
+ RawMachineAssemblerTester<int32_t> m(kMachineWord32, kMachineWord32); + int32_t eq_constant = -1035; + int32_t ne_constant = 825018; + Node* p0 = m.Parameter(0); + Node* p1 = m.Parameter(1); + + MLabel blocka, blockb; + m.Branch(m.Word32Equal(p0, p1), &blocka, &blockb); + m.Bind(&blocka); + m.Return(m.Int32Constant(eq_constant)); + m.Bind(&blockb); + m.Return(m.Int32Constant(ne_constant)); + + FOR_INT32_INPUTS(i) { + FOR_INT32_INPUTS(j) { + int32_t a = *i; + int32_t b = *j; + int32_t expect = a == b ? eq_constant : ne_constant; + CHECK_EQ(expect, m.Call(a, b)); + } + } +} + + +TEST(BranchCombineWord32EqualI) { + int32_t eq_constant = -1135; + int32_t ne_constant = 925718; + + for (int left = 0; left < 2; left++) { + FOR_INT32_INPUTS(i) { + RawMachineAssemblerTester<int32_t> m(kMachineWord32); + int32_t a = *i; + + Node* p0 = m.Int32Constant(a); + Node* p1 = m.Parameter(0); + + MLabel blocka, blockb; + if (left == 1) m.Branch(m.Word32Equal(p0, p1), &blocka, &blockb); + if (left == 0) m.Branch(m.Word32Equal(p1, p0), &blocka, &blockb); + m.Bind(&blocka); + m.Return(m.Int32Constant(eq_constant)); + m.Bind(&blockb); + m.Return(m.Int32Constant(ne_constant)); + + FOR_INT32_INPUTS(j) { + int32_t b = *j; + int32_t expect = a == b ? eq_constant : ne_constant; + CHECK_EQ(expect, m.Call(b)); + } + } + } +} + + +TEST(BranchCombineInt32CmpP) { + int32_t eq_constant = -1235; + int32_t ne_constant = 725018; + + for (int op = 0; op < 2; op++) { + RawMachineAssemblerTester<int32_t> m(kMachineWord32, kMachineWord32); + Node* p0 = m.Parameter(0); + Node* p1 = m.Parameter(1); + + MLabel blocka, blockb; + if (op == 0) m.Branch(m.Int32LessThan(p0, p1), &blocka, &blockb); + if (op == 1) m.Branch(m.Int32LessThanOrEqual(p0, p1), &blocka, &blockb); + m.Bind(&blocka); + m.Return(m.Int32Constant(eq_constant)); + m.Bind(&blockb); + m.Return(m.Int32Constant(ne_constant)); + + FOR_INT32_INPUTS(i) { + FOR_INT32_INPUTS(j) { + int32_t a = *i; + int32_t b = *j; + int32_t expect = 0; + if (op == 0) expect = a < b ? eq_constant : ne_constant; + if (op == 1) expect = a <= b ? eq_constant : ne_constant; + CHECK_EQ(expect, m.Call(a, b)); + } + } + } +} + + +TEST(BranchCombineInt32CmpI) { + int32_t eq_constant = -1175; + int32_t ne_constant = 927711; + + for (int op = 0; op < 2; op++) { + FOR_INT32_INPUTS(i) { + RawMachineAssemblerTester<int32_t> m(kMachineWord32); + int32_t a = *i; + Node* p0 = m.Int32Constant(a); + Node* p1 = m.Parameter(0); + + MLabel blocka, blockb; + if (op == 0) m.Branch(m.Int32LessThan(p0, p1), &blocka, &blockb); + if (op == 1) m.Branch(m.Int32LessThanOrEqual(p0, p1), &blocka, &blockb); + m.Bind(&blocka); + m.Return(m.Int32Constant(eq_constant)); + m.Bind(&blockb); + m.Return(m.Int32Constant(ne_constant)); + + FOR_INT32_INPUTS(j) { + int32_t b = *j; + int32_t expect = 0; + if (op == 0) expect = a < b ? eq_constant : ne_constant; + if (op == 1) expect = a <= b ? eq_constant : ne_constant; + CHECK_EQ(expect, m.Call(b)); + } + } + } +} + + +// Now come the sophisticated tests for many input shape combinations. + +// Materializes a boolean (1 or 0) from a comparison. 
+class CmpMaterializeBoolGen : public BinopGen<int32_t> { + public: + CompareWrapper w; + bool invert; + + CmpMaterializeBoolGen(IrOpcode::Value opcode, bool i) + : w(opcode), invert(i) {} + + virtual void gen(RawMachineAssemblerTester<int32_t>* m, Node* a, Node* b) { + Node* cond = w.MakeNode(m, a, b); + if (invert) cond = m->Word32Equal(cond, m->Int32Constant(0)); + m->Return(cond); + } + virtual int32_t expected(int32_t a, int32_t b) { + if (invert) return !w.Int32Compare(a, b) ? 1 : 0; + return w.Int32Compare(a, b) ? 1 : 0; + } +}; + + +// Generates a branch and return one of two values from a comparison. +class CmpBranchGen : public BinopGen<int32_t> { + public: + CompareWrapper w; + bool invert; + bool true_first; + int32_t eq_constant; + int32_t ne_constant; + + CmpBranchGen(IrOpcode::Value opcode, bool i, bool t, int32_t eq, int32_t ne) + : w(opcode), invert(i), true_first(t), eq_constant(eq), ne_constant(ne) {} + + virtual void gen(RawMachineAssemblerTester<int32_t>* m, Node* a, Node* b) { + MLabel blocka, blockb; + Node* cond = w.MakeNode(m, a, b); + if (invert) cond = m->Word32Equal(cond, m->Int32Constant(0)); + m->Branch(cond, &blocka, &blockb); + if (true_first) { + m->Bind(&blocka); + m->Return(m->Int32Constant(eq_constant)); + m->Bind(&blockb); + m->Return(m->Int32Constant(ne_constant)); + } else { + m->Bind(&blockb); + m->Return(m->Int32Constant(ne_constant)); + m->Bind(&blocka); + m->Return(m->Int32Constant(eq_constant)); + } + } + virtual int32_t expected(int32_t a, int32_t b) { + if (invert) return !w.Int32Compare(a, b) ? eq_constant : ne_constant; + return w.Int32Compare(a, b) ? eq_constant : ne_constant; + } +}; + + +TEST(BranchCombineInt32CmpAllInputShapes_materialized) { + for (size_t i = 0; i < ARRAY_SIZE(int32cmp_opcodes); i++) { + CmpMaterializeBoolGen gen(int32cmp_opcodes[i], false); + Int32BinopInputShapeTester tester(&gen); + tester.TestAllInputShapes(); + } +} + + +TEST(BranchCombineInt32CmpAllInputShapes_inverted_materialized) { + for (size_t i = 0; i < ARRAY_SIZE(int32cmp_opcodes); i++) { + CmpMaterializeBoolGen gen(int32cmp_opcodes[i], true); + Int32BinopInputShapeTester tester(&gen); + tester.TestAllInputShapes(); + } +} + + +TEST(BranchCombineInt32CmpAllInputShapes_branch_true) { + for (int i = 0; i < static_cast<int>(ARRAY_SIZE(int32cmp_opcodes)); i++) { + CmpBranchGen gen(int32cmp_opcodes[i], false, false, 995 + i, -1011 - i); + Int32BinopInputShapeTester tester(&gen); + tester.TestAllInputShapes(); + } +} + + +TEST(BranchCombineInt32CmpAllInputShapes_branch_false) { + for (int i = 0; i < static_cast<int>(ARRAY_SIZE(int32cmp_opcodes)); i++) { + CmpBranchGen gen(int32cmp_opcodes[i], false, true, 795 + i, -2011 - i); + Int32BinopInputShapeTester tester(&gen); + tester.TestAllInputShapes(); + } +} + + +TEST(BranchCombineInt32CmpAllInputShapes_inverse_branch_true) { + for (int i = 0; i < static_cast<int>(ARRAY_SIZE(int32cmp_opcodes)); i++) { + CmpBranchGen gen(int32cmp_opcodes[i], true, false, 695 + i, -3011 - i); + Int32BinopInputShapeTester tester(&gen); + tester.TestAllInputShapes(); + } +} + + +TEST(BranchCombineInt32CmpAllInputShapes_inverse_branch_false) { + for (int i = 0; i < static_cast<int>(ARRAY_SIZE(int32cmp_opcodes)); i++) { + CmpBranchGen gen(int32cmp_opcodes[i], true, true, 595 + i, -4011 - i); + Int32BinopInputShapeTester tester(&gen); + tester.TestAllInputShapes(); + } +} + + +TEST(BranchCombineFloat64Compares) { + double inf = V8_INFINITY; + double nan = v8::base::OS::nan_value(); + double inputs[] = {0.0, 1.0, -1.0, -inf, inf, nan}; + + 
int32_t eq_constant = -1733;
+  int32_t ne_constant = 915118;
+
+  double input_a = 0.0;
+  double input_b = 0.0;
+
+  CompareWrapper cmps[] = {CompareWrapper(IrOpcode::kFloat64Equal),
+                           CompareWrapper(IrOpcode::kFloat64LessThan),
+                           CompareWrapper(IrOpcode::kFloat64LessThanOrEqual)};
+
+  for (size_t c = 0; c < ARRAY_SIZE(cmps); c++) {
+    CompareWrapper cmp = cmps[c];
+    for (int invert = 0; invert < 2; invert++) {
+      RawMachineAssemblerTester<int32_t> m;
+      Node* a = m.LoadFromPointer(&input_a, kMachineFloat64);
+      Node* b = m.LoadFromPointer(&input_b, kMachineFloat64);
+
+      MLabel blocka, blockb;
+      Node* cond = cmp.MakeNode(&m, a, b);
+      if (invert) cond = m.Word32Equal(cond, m.Int32Constant(0));
+      m.Branch(cond, &blocka, &blockb);
+      m.Bind(&blocka);
+      m.Return(m.Int32Constant(eq_constant));
+      m.Bind(&blockb);
+      m.Return(m.Int32Constant(ne_constant));
+
+      for (size_t i = 0; i < ARRAY_SIZE(inputs); i++) {
+        for (size_t j = 0; j < ARRAY_SIZE(inputs); j += 2) {
+          input_a = inputs[i];
+          input_b = inputs[j];
+          int32_t expected =
+              invert ? (cmp.Float64Compare(input_a, input_b) ? ne_constant
+                                                             : eq_constant)
+                     : (cmp.Float64Compare(input_a, input_b) ? eq_constant
+                                                             : ne_constant);
+          CHECK_EQ(expected, m.Call());
+        }
+      }
+    }
+  }
+}
+#endif  // V8_TURBOFAN_TARGET
diff --git a/deps/v8/test/cctest/compiler/test-changes-lowering.cc b/deps/v8/test/cctest/compiler/test-changes-lowering.cc
new file mode 100644
index 00000000000..148f4b34f54
--- /dev/null
+++ b/deps/v8/test/cctest/compiler/test-changes-lowering.cc
@@ -0,0 +1,397 @@
+// Copyright 2014 the V8 project authors. All rights reserved.
+// Use of this source code is governed by a BSD-style license that can be
+// found in the LICENSE file.
+
+#include <limits>
+
+#include "src/compiler/control-builders.h"
+#include "src/compiler/generic-node-inl.h"
+#include "src/compiler/node-properties-inl.h"
+#include "src/compiler/pipeline.h"
+#include "src/compiler/simplified-lowering.h"
+#include "src/compiler/simplified-node-factory.h"
+#include "src/compiler/typer.h"
+#include "src/compiler/verifier.h"
+#include "src/execution.h"
+#include "src/parser.h"
+#include "src/rewriter.h"
+#include "src/scopes.h"
+#include "test/cctest/cctest.h"
+#include "test/cctest/compiler/codegen-tester.h"
+#include "test/cctest/compiler/graph-builder-tester.h"
+#include "test/cctest/compiler/value-helper.h"
+
+using namespace v8::internal;
+using namespace v8::internal::compiler;
+
+template <typename ReturnType>
+class ChangesLoweringTester : public GraphBuilderTester<ReturnType> {
+ public:
+  explicit ChangesLoweringTester(MachineType p0 = kMachineLast)
+      : GraphBuilderTester<ReturnType>(p0),
+        typer(this->zone()),
+        source_positions(this->graph()),
+        jsgraph(this->graph(), this->common(), &typer),
+        lowering(&jsgraph, &source_positions),
+        function(Handle<JSFunction>::null()) {}
+
+  Typer typer;
+  SourcePositionTable source_positions;
+  JSGraph jsgraph;
+  SimplifiedLowering lowering;
+  Handle<JSFunction> function;
+
+  Node* start() { return this->graph()->start(); }
+
+  template <typename T>
+  T* CallWithPotentialGC() {
+    // TODO(titzer): we need to wrap the code in a JSFunction and call it via
+    // Execution::Call() so that the GC knows about the frame, can walk it,
+    // relocate the code object if necessary, etc.
+    // This is pretty ugly and at the least should be moved up to helpers.
+ if (function.is_null()) { + function = + v8::Utils::OpenHandle(*v8::Handle<v8::Function>::Cast(CompileRun( + "(function() { 'use strict'; return 2.7123; })"))); + CompilationInfoWithZone info(function); + CHECK(Parser::Parse(&info)); + StrictMode strict_mode = info.function()->strict_mode(); + info.SetStrictMode(strict_mode); + info.SetOptimizing(BailoutId::None(), Handle<Code>(function->code())); + CHECK(Rewriter::Rewrite(&info)); + CHECK(Scope::Analyze(&info)); + CHECK_NE(NULL, info.scope()); + Pipeline pipeline(&info); + Linkage linkage(&info); + Handle<Code> code = + pipeline.GenerateCodeForMachineGraph(&linkage, this->graph()); + CHECK(!code.is_null()); + function->ReplaceCode(*code); + } + Handle<Object>* args = NULL; + MaybeHandle<Object> result = + Execution::Call(this->isolate(), function, factory()->undefined_value(), + 0, args, false); + return T::cast(*result.ToHandleChecked()); + } + + void StoreFloat64(Node* node, double* ptr) { + Node* ptr_node = this->PointerConstant(ptr); + this->Store(kMachineFloat64, ptr_node, node); + } + + Node* LoadInt32(int32_t* ptr) { + Node* ptr_node = this->PointerConstant(ptr); + return this->Load(kMachineWord32, ptr_node); + } + + Node* LoadUint32(uint32_t* ptr) { + Node* ptr_node = this->PointerConstant(ptr); + return this->Load(kMachineWord32, ptr_node); + } + + Node* LoadFloat64(double* ptr) { + Node* ptr_node = this->PointerConstant(ptr); + return this->Load(kMachineFloat64, ptr_node); + } + + void CheckNumber(double expected, Object* number) { + CHECK(this->isolate()->factory()->NewNumber(expected)->SameValue(number)); + } + + void BuildAndLower(Operator* op) { + // We build a graph by hand here, because the raw machine assembler + // does not add the correct control and effect nodes. + Node* p0 = this->Parameter(0); + Node* change = this->graph()->NewNode(op, p0); + Node* ret = this->graph()->NewNode(this->common()->Return(), change, + this->start(), this->start()); + Node* end = this->graph()->NewNode(this->common()->End(), ret); + this->graph()->SetEnd(end); + this->lowering.LowerChange(change, this->start(), this->start()); + Verifier::Run(this->graph()); + } + + void BuildStoreAndLower(Operator* op, Operator* store_op, void* location) { + // We build a graph by hand here, because the raw machine assembler + // does not add the correct control and effect nodes. + Node* p0 = this->Parameter(0); + Node* change = this->graph()->NewNode(op, p0); + Node* store = this->graph()->NewNode( + store_op, this->PointerConstant(location), this->Int32Constant(0), + change, this->start(), this->start()); + Node* ret = this->graph()->NewNode( + this->common()->Return(), this->Int32Constant(0), store, this->start()); + Node* end = this->graph()->NewNode(this->common()->End(), ret); + this->graph()->SetEnd(end); + this->lowering.LowerChange(change, this->start(), this->start()); + Verifier::Run(this->graph()); + } + + void BuildLoadAndLower(Operator* op, Operator* load_op, void* location) { + // We build a graph by hand here, because the raw machine assembler + // does not add the correct control and effect nodes. 
+ Node* load = + this->graph()->NewNode(load_op, this->PointerConstant(location), + this->Int32Constant(0), this->start()); + Node* change = this->graph()->NewNode(op, load); + Node* ret = this->graph()->NewNode(this->common()->Return(), change, + this->start(), this->start()); + Node* end = this->graph()->NewNode(this->common()->End(), ret); + this->graph()->SetEnd(end); + this->lowering.LowerChange(change, this->start(), this->start()); + Verifier::Run(this->graph()); + } + + Factory* factory() { return this->isolate()->factory(); } + Heap* heap() { return this->isolate()->heap(); } +}; + + +TEST(RunChangeTaggedToInt32) { + // Build and lower a graph by hand. + ChangesLoweringTester<int32_t> t(kMachineTagged); + t.BuildAndLower(t.simplified()->ChangeTaggedToInt32()); + + if (Pipeline::SupportedTarget()) { + FOR_INT32_INPUTS(i) { + int32_t input = *i; + + if (Smi::IsValid(input)) { + int32_t result = t.Call(Smi::FromInt(input)); + CHECK_EQ(input, result); + } + + { + Handle<Object> number = t.factory()->NewNumber(input); + int32_t result = t.Call(*number); + CHECK_EQ(input, result); + } + + { + Handle<HeapNumber> number = t.factory()->NewHeapNumber(input); + int32_t result = t.Call(*number); + CHECK_EQ(input, result); + } + } + } +} + + +TEST(RunChangeTaggedToUint32) { + // Build and lower a graph by hand. + ChangesLoweringTester<uint32_t> t(kMachineTagged); + t.BuildAndLower(t.simplified()->ChangeTaggedToUint32()); + + if (Pipeline::SupportedTarget()) { + FOR_UINT32_INPUTS(i) { + uint32_t input = *i; + + if (Smi::IsValid(input)) { + uint32_t result = t.Call(Smi::FromInt(input)); + CHECK_EQ(static_cast<int32_t>(input), static_cast<int32_t>(result)); + } + + { + Handle<Object> number = t.factory()->NewNumber(input); + uint32_t result = t.Call(*number); + CHECK_EQ(static_cast<int32_t>(input), static_cast<int32_t>(result)); + } + + { + Handle<HeapNumber> number = t.factory()->NewHeapNumber(input); + uint32_t result = t.Call(*number); + CHECK_EQ(static_cast<int32_t>(input), static_cast<int32_t>(result)); + } + } + } +} + + +TEST(RunChangeTaggedToFloat64) { + ChangesLoweringTester<int32_t> t(kMachineTagged); + double result; + + t.BuildStoreAndLower(t.simplified()->ChangeTaggedToFloat64(), + t.machine()->Store(kMachineFloat64, kNoWriteBarrier), + &result); + + if (Pipeline::SupportedTarget()) { + FOR_INT32_INPUTS(i) { + int32_t input = *i; + + if (Smi::IsValid(input)) { + t.Call(Smi::FromInt(input)); + CHECK_EQ(input, static_cast<int32_t>(result)); + } + + { + Handle<Object> number = t.factory()->NewNumber(input); + t.Call(*number); + CHECK_EQ(input, static_cast<int32_t>(result)); + } + + { + Handle<HeapNumber> number = t.factory()->NewHeapNumber(input); + t.Call(*number); + CHECK_EQ(input, static_cast<int32_t>(result)); + } + } + } + + if (Pipeline::SupportedTarget()) { + FOR_FLOAT64_INPUTS(i) { + double input = *i; + { + Handle<Object> number = t.factory()->NewNumber(input); + t.Call(*number); + CHECK_EQ(input, result); + } + + { + Handle<HeapNumber> number = t.factory()->NewHeapNumber(input); + t.Call(*number); + CHECK_EQ(input, result); + } + } + } +} + + +TEST(RunChangeBoolToBit) { + ChangesLoweringTester<int32_t> t(kMachineTagged); + t.BuildAndLower(t.simplified()->ChangeBoolToBit()); + + if (Pipeline::SupportedTarget()) { + Object* true_obj = t.heap()->true_value(); + int32_t result = t.Call(true_obj); + CHECK_EQ(1, result); + } + + if (Pipeline::SupportedTarget()) { + Object* false_obj = t.heap()->false_value(); + int32_t result = t.Call(false_obj); + CHECK_EQ(0, result); + } +} + + 
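RunChangeBoolToBit above and RunChangeBitToBool below exercise mirror-image lowerings; in plain C++ terms the mapping they verify amounts to the following (helper names are ours, not part of the patch):

// Sketch only, not part of the patch: the bit<->bool mapping checked by
// the two tests, expressed against the same heap roots they use.
static inline Object* BitToBool(Heap* heap, int32_t bit) {
  return bit != 0 ? heap->true_value() : heap->false_value();
}
static inline int32_t BoolToBit(Heap* heap, Object* value) {
  return value == heap->true_value() ? 1 : 0;
}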
+TEST(RunChangeBitToBool) { + ChangesLoweringTester<Object*> t(kMachineWord32); + t.BuildAndLower(t.simplified()->ChangeBitToBool()); + + if (Pipeline::SupportedTarget()) { + Object* result = t.Call(1); + Object* true_obj = t.heap()->true_value(); + CHECK_EQ(true_obj, result); + } + + if (Pipeline::SupportedTarget()) { + Object* result = t.Call(0); + Object* false_obj = t.heap()->false_value(); + CHECK_EQ(false_obj, result); + } +} + + +bool TODO_INT32_TO_TAGGED_WILL_WORK(int32_t v) { + // TODO(titzer): enable all UI32 -> Tagged checking when inline allocation + // works. + return Smi::IsValid(v); +} + + +bool TODO_UINT32_TO_TAGGED_WILL_WORK(uint32_t v) { + // TODO(titzer): enable all UI32 -> Tagged checking when inline allocation + // works. + return v <= static_cast<uint32_t>(Smi::kMaxValue); +} + + +TEST(RunChangeInt32ToTagged) { + ChangesLoweringTester<Object*> t; + int32_t input; + t.BuildLoadAndLower(t.simplified()->ChangeInt32ToTagged(), + t.machine()->Load(kMachineWord32), &input); + + if (Pipeline::SupportedTarget()) { + FOR_INT32_INPUTS(i) { + input = *i; + Object* result = t.CallWithPotentialGC<Object>(); + if (TODO_INT32_TO_TAGGED_WILL_WORK(input)) { + t.CheckNumber(static_cast<double>(input), result); + } + } + } + + if (Pipeline::SupportedTarget()) { + FOR_INT32_INPUTS(i) { + input = *i; + SimulateFullSpace(CcTest::heap()->new_space()); + Object* result = t.CallWithPotentialGC<Object>(); + if (TODO_INT32_TO_TAGGED_WILL_WORK(input)) { + t.CheckNumber(static_cast<double>(input), result); + } + } + } +} + + +TEST(RunChangeUint32ToTagged) { + ChangesLoweringTester<Object*> t; + uint32_t input; + t.BuildLoadAndLower(t.simplified()->ChangeUint32ToTagged(), + t.machine()->Load(kMachineWord32), &input); + + if (Pipeline::SupportedTarget()) { + FOR_UINT32_INPUTS(i) { + input = *i; + Object* result = t.CallWithPotentialGC<Object>(); + double expected = static_cast<double>(input); + if (TODO_UINT32_TO_TAGGED_WILL_WORK(input)) { + t.CheckNumber(expected, result); + } + } + } + + if (Pipeline::SupportedTarget()) { + FOR_UINT32_INPUTS(i) { + input = *i; + SimulateFullSpace(CcTest::heap()->new_space()); + Object* result = t.CallWithPotentialGC<Object>(); + double expected = static_cast<double>(static_cast<uint32_t>(input)); + if (TODO_UINT32_TO_TAGGED_WILL_WORK(input)) { + t.CheckNumber(expected, result); + } + } + } +} + + +// TODO(titzer): lowering of Float64->Tagged needs inline allocation. +#define TODO_FLOAT64_TO_TAGGED false + +TEST(RunChangeFloat64ToTagged) { + ChangesLoweringTester<Object*> t; + double input; + t.BuildLoadAndLower(t.simplified()->ChangeFloat64ToTagged(), + t.machine()->Load(kMachineFloat64), &input); + + // TODO(titzer): need inline allocation to change float to tagged. + if (TODO_FLOAT64_TO_TAGGED && Pipeline::SupportedTarget()) { + FOR_FLOAT64_INPUTS(i) { + input = *i; + Object* result = t.CallWithPotentialGC<Object>(); + t.CheckNumber(input, result); + } + } + + if (TODO_FLOAT64_TO_TAGGED && Pipeline::SupportedTarget()) { + FOR_FLOAT64_INPUTS(i) { + input = *i; + SimulateFullSpace(CcTest::heap()->new_space()); + Object* result = t.CallWithPotentialGC<Object>(); + t.CheckNumber(input, result); + } + } +} diff --git a/deps/v8/test/cctest/compiler/test-codegen-deopt.cc b/deps/v8/test/cctest/compiler/test-codegen-deopt.cc new file mode 100644 index 00000000000..b953ee53cc2 --- /dev/null +++ b/deps/v8/test/cctest/compiler/test-codegen-deopt.cc @@ -0,0 +1,344 @@ +// Copyright 2014 the V8 project authors. All rights reserved. 
+// Use of this source code is governed by a BSD-style license that can be +// found in the LICENSE file. + +#include "src/v8.h" +#include "test/cctest/cctest.h" + +#include "src/compiler/code-generator.h" +#include "src/compiler/common-operator.h" +#include "src/compiler/graph.h" +#include "src/compiler/instruction-selector.h" +#include "src/compiler/machine-operator.h" +#include "src/compiler/node.h" +#include "src/compiler/operator.h" +#include "src/compiler/raw-machine-assembler.h" +#include "src/compiler/register-allocator.h" +#include "src/compiler/schedule.h" + +#include "src/full-codegen.h" +#include "src/parser.h" +#include "src/rewriter.h" + +#include "test/cctest/compiler/function-tester.h" + +using namespace v8::internal; +using namespace v8::internal::compiler; + + +#if V8_TURBOFAN_TARGET + +typedef RawMachineAssembler::Label MLabel; + +static Handle<JSFunction> NewFunction(const char* source) { + return v8::Utils::OpenHandle( + *v8::Handle<v8::Function>::Cast(CompileRun(source))); +} + + +class DeoptCodegenTester { + public: + explicit DeoptCodegenTester(HandleAndZoneScope* scope, const char* src) + : scope_(scope), + function(NewFunction(src)), + info(function, scope->main_zone()), + bailout_id(-1) { + CHECK(Parser::Parse(&info)); + StrictMode strict_mode = info.function()->strict_mode(); + info.SetStrictMode(strict_mode); + info.SetOptimizing(BailoutId::None(), Handle<Code>(function->code())); + CHECK(Rewriter::Rewrite(&info)); + CHECK(Scope::Analyze(&info)); + CHECK_NE(NULL, info.scope()); + + FunctionTester::EnsureDeoptimizationSupport(&info); + + DCHECK(info.shared_info()->has_deoptimization_support()); + + graph = new (scope_->main_zone()) Graph(scope_->main_zone()); + } + + virtual ~DeoptCodegenTester() { delete code; } + + void GenerateCodeFromSchedule(Schedule* schedule) { + OFStream os(stdout); + os << *schedule; + + // Initialize the codegen and generate code. 
+ Linkage* linkage = new (scope_->main_zone()) Linkage(&info); + code = new v8::internal::compiler::InstructionSequence(linkage, graph, + schedule); + SourcePositionTable source_positions(graph); + InstructionSelector selector(code, &source_positions); + selector.SelectInstructions(); + + os << "----- Instruction sequence before register allocation -----\n" + << *code; + + RegisterAllocator allocator(code); + CHECK(allocator.Allocate()); + + os << "----- Instruction sequence after register allocation -----\n" + << *code; + + compiler::CodeGenerator generator(code); + result_code = generator.GenerateCode(); + +#ifdef DEBUG + result_code->Print(); +#endif + } + + Zone* zone() { return scope_->main_zone(); } + + HandleAndZoneScope* scope_; + Handle<JSFunction> function; + CompilationInfo info; + BailoutId bailout_id; + Handle<Code> result_code; + v8::internal::compiler::InstructionSequence* code; + Graph* graph; +}; + + +class TrivialDeoptCodegenTester : public DeoptCodegenTester { + public: + explicit TrivialDeoptCodegenTester(HandleAndZoneScope* scope) + : DeoptCodegenTester(scope, + "function foo() { deopt(); return 42; }; foo") {} + + void GenerateCode() { + GenerateCodeFromSchedule(BuildGraphAndSchedule(graph)); + } + + Schedule* BuildGraphAndSchedule(Graph* graph) { + Isolate* isolate = info.isolate(); + CommonOperatorBuilder common(zone()); + + // Manually construct a schedule for the function below: + // function foo() { + // deopt(); + // } + + MachineType parameter_reps[] = {kMachineTagged}; + MachineCallDescriptorBuilder descriptor_builder(kMachineTagged, 1, + parameter_reps); + + RawMachineAssembler m(graph, &descriptor_builder); + + Handle<Object> undef_object = + Handle<Object>(isolate->heap()->undefined_value(), isolate); + PrintableUnique<Object> undef_constant = + PrintableUnique<Object>::CreateUninitialized(zone(), undef_object); + Node* undef_node = m.NewNode(common.HeapConstant(undef_constant)); + + Handle<JSFunction> deopt_function = + NewFunction("function deopt() { %DeoptimizeFunction(foo); }; deopt"); + PrintableUnique<Object> deopt_fun_constant = + PrintableUnique<Object>::CreateUninitialized(zone(), deopt_function); + Node* deopt_fun_node = m.NewNode(common.HeapConstant(deopt_fun_constant)); + + MLabel deopt, cont; + Node* call = m.CallJS0(deopt_fun_node, undef_node, &cont, &deopt); + + m.Bind(&cont); + m.NewNode(common.Continuation(), call); + m.Return(undef_node); + + m.Bind(&deopt); + m.NewNode(common.LazyDeoptimization(), call); + + bailout_id = GetCallBailoutId(); + Node* parameters = m.NewNode(common.StateValues(1), undef_node); + Node* locals = m.NewNode(common.StateValues(0)); + Node* stack = m.NewNode(common.StateValues(0)); + + Node* state_node = + m.NewNode(common.FrameState(bailout_id), parameters, locals, stack); + m.Deoptimize(state_node); + + // Schedule the graph: + Schedule* schedule = m.Export(); + + cont_block = cont.block(); + deopt_block = deopt.block(); + + return schedule; + } + + BailoutId GetCallBailoutId() { + ZoneList<Statement*>* body = info.function()->body(); + for (int i = 0; i < body->length(); i++) { + if (body->at(i)->IsExpressionStatement() && + body->at(i)->AsExpressionStatement()->expression()->IsCall()) { + return body->at(i)->AsExpressionStatement()->expression()->id(); + } + } + CHECK(false); + return BailoutId(-1); + } + + BasicBlock* cont_block; + BasicBlock* deopt_block; +}; + + +TEST(TurboTrivialDeoptCodegen) { + HandleAndZoneScope scope; + InitializedHandleScope handles; + + FLAG_allow_natives_syntax = true; + 
FLAG_turbo_deoptimization = true;
+
+  TrivialDeoptCodegenTester t(&scope);
+  t.GenerateCode();
+
+  DeoptimizationInputData* data =
+      DeoptimizationInputData::cast(t.result_code->deoptimization_data());
+
+  Label* cont_label = t.code->GetLabel(t.cont_block);
+  Label* deopt_label = t.code->GetLabel(t.deopt_block);
+
+  // Check the patch table. It should patch the continuation address to the
+  // deoptimization block address.
+  CHECK_EQ(1, data->ReturnAddressPatchCount());
+  CHECK_EQ(cont_label->pos(), data->ReturnAddressPc(0)->value());
+  CHECK_EQ(deopt_label->pos(), data->PatchedAddressPc(0)->value());
+
+  // Check that we deoptimize to the right AST id.
+  CHECK_EQ(1, data->DeoptCount());
+  CHECK_EQ(t.bailout_id.ToInt(), data->AstId(0).ToInt());
+}
+
+
+TEST(TurboTrivialDeoptCodegenAndRun) {
+  HandleAndZoneScope scope;
+  InitializedHandleScope handles;
+
+  FLAG_allow_natives_syntax = true;
+  FLAG_turbo_deoptimization = true;
+
+  TrivialDeoptCodegenTester t(&scope);
+  t.GenerateCode();
+
+  t.function->ReplaceCode(*t.result_code);
+  t.info.context()->native_context()->AddOptimizedCode(*t.result_code);
+
+  Isolate* isolate = scope.main_isolate();
+  Handle<Object> result;
+  bool has_pending_exception =
+      !Execution::Call(isolate, t.function,
+                       isolate->factory()->undefined_value(), 0, NULL,
+                       false).ToHandle(&result);
+  CHECK(!has_pending_exception);
+  CHECK(result->SameValue(Smi::FromInt(42)));
+}
+
+
+class TrivialRuntimeDeoptCodegenTester : public DeoptCodegenTester {
+ public:
+  explicit TrivialRuntimeDeoptCodegenTester(HandleAndZoneScope* scope)
+      : DeoptCodegenTester(
+            scope,
+            "function foo() { %DeoptimizeFunction(foo); return 42; }; foo") {}
+
+  void GenerateCode() {
+    GenerateCodeFromSchedule(BuildGraphAndSchedule(graph));
+  }
+
+  Schedule* BuildGraphAndSchedule(Graph* graph) {
+    Isolate* isolate = info.isolate();
+    CommonOperatorBuilder common(zone());
+
+    // Manually construct a schedule for the function below:
+    // function foo() {
+    //   %DeoptimizeFunction(foo);
+    // }
+
+    MachineType parameter_reps[] = {kMachineTagged};
+    MachineCallDescriptorBuilder descriptor_builder(kMachineTagged, 2,
+                                                    parameter_reps);
+
+    RawMachineAssembler m(graph, &descriptor_builder);
+
+    Handle<Object> undef_object =
+        Handle<Object>(isolate->heap()->undefined_value(), isolate);
+    PrintableUnique<Object> undef_constant =
+        PrintableUnique<Object>::CreateUninitialized(zone(), undef_object);
+    Node* undef_node = m.NewNode(common.HeapConstant(undef_constant));
+
+    PrintableUnique<Object> this_fun_constant =
+        PrintableUnique<Object>::CreateUninitialized(zone(), function);
+    Node* this_fun_node = m.NewNode(common.HeapConstant(this_fun_constant));
+
+    MLabel deopt, cont;
+    Node* call = m.CallRuntime1(Runtime::kDeoptimizeFunction, this_fun_node,
+                                &cont, &deopt);
+
+    m.Bind(&cont);
+    m.NewNode(common.Continuation(), call);
+    m.Return(undef_node);
+
+    m.Bind(&deopt);
+    m.NewNode(common.LazyDeoptimization(), call);
+
+    bailout_id = GetCallBailoutId();
+    Node* parameters = m.NewNode(common.StateValues(1), undef_node);
+    Node* locals = m.NewNode(common.StateValues(0));
+    Node* stack = m.NewNode(common.StateValues(0));
+
+    Node* state_node =
+        m.NewNode(common.FrameState(bailout_id), parameters, locals, stack);
+    m.Deoptimize(state_node);
+
+    // Schedule the graph:
+    Schedule* schedule = m.Export();
+
+    cont_block = cont.block();
+    deopt_block = deopt.block();
+
+    return schedule;
+  }
+
+  BailoutId GetCallBailoutId() {
+    ZoneList<Statement*>* body = info.function()->body();
+    for
(int i = 0; i < body->length(); i++) { + if (body->at(i)->IsExpressionStatement() && + body->at(i)->AsExpressionStatement()->expression()->IsCallRuntime()) { + return body->at(i)->AsExpressionStatement()->expression()->id(); + } + } + CHECK(false); + return BailoutId(-1); + } + + BasicBlock* cont_block; + BasicBlock* deopt_block; +}; + + +TEST(TurboTrivialRuntimeDeoptCodegenAndRun) { + HandleAndZoneScope scope; + InitializedHandleScope handles; + + FLAG_allow_natives_syntax = true; + FLAG_turbo_deoptimization = true; + + TrivialRuntimeDeoptCodegenTester t(&scope); + t.GenerateCode(); + + t.function->ReplaceCode(*t.result_code); + t.info.context()->native_context()->AddOptimizedCode(*t.result_code); + + Isolate* isolate = scope.main_isolate(); + Handle<Object> result; + bool has_pending_exception = + !Execution::Call(isolate, t.function, + isolate->factory()->undefined_value(), 0, NULL, + false).ToHandle(&result); + CHECK(!has_pending_exception); + CHECK(result->SameValue(Smi::FromInt(42))); +} + +#endif diff --git a/deps/v8/test/cctest/compiler/test-gap-resolver.cc b/deps/v8/test/cctest/compiler/test-gap-resolver.cc new file mode 100644 index 00000000000..00c220945da --- /dev/null +++ b/deps/v8/test/cctest/compiler/test-gap-resolver.cc @@ -0,0 +1,173 @@ +// Copyright 2014 the V8 project authors. All rights reserved. +// Use of this source code is governed by a BSD-style license that can be +// found in the LICENSE file. + +#include "src/compiler/gap-resolver.h" + +#include "src/base/utils/random-number-generator.h" +#include "test/cctest/cctest.h" + +using namespace v8::internal; +using namespace v8::internal::compiler; + +// The state of our move interpreter is the mapping of operands to values. Note +// that the actual values don't really matter, all we care about is equality. +class InterpreterState { + public: + typedef std::vector<MoveOperands> Moves; + + void ExecuteInParallel(Moves moves) { + InterpreterState copy(*this); + for (Moves::iterator it = moves.begin(); it != moves.end(); ++it) { + if (!it->IsRedundant()) write(it->destination(), copy.read(it->source())); + } + } + + bool operator==(const InterpreterState& other) const { + return values_ == other.values_; + } + + bool operator!=(const InterpreterState& other) const { + return values_ != other.values_; + } + + private: + // Internally, the state is a normalized permutation of (kind,index) pairs. + typedef std::pair<InstructionOperand::Kind, int> Key; + typedef Key Value; + typedef std::map<Key, Value> OperandMap; + + Value read(const InstructionOperand* op) const { + OperandMap::const_iterator it = values_.find(KeyFor(op)); + return (it == values_.end()) ? 
ValueFor(op) : it->second; + } + + void write(const InstructionOperand* op, Value v) { + if (v == ValueFor(op)) { + values_.erase(KeyFor(op)); + } else { + values_[KeyFor(op)] = v; + } + } + + static Key KeyFor(const InstructionOperand* op) { + return Key(op->kind(), op->index()); + } + + static Value ValueFor(const InstructionOperand* op) { + return Value(op->kind(), op->index()); + } + + friend OStream& operator<<(OStream& os, const InterpreterState& is) { + for (OperandMap::const_iterator it = is.values_.begin(); + it != is.values_.end(); ++it) { + if (it != is.values_.begin()) os << " "; + InstructionOperand source(it->first.first, it->first.second); + InstructionOperand destination(it->second.first, it->second.second); + os << MoveOperands(&source, &destination); + } + return os; + } + + OperandMap values_; +}; + + +// An abstract interpreter for moves, swaps and parallel moves. +class MoveInterpreter : public GapResolver::Assembler { + public: + virtual void AssembleMove(InstructionOperand* source, + InstructionOperand* destination) V8_OVERRIDE { + InterpreterState::Moves moves; + moves.push_back(MoveOperands(source, destination)); + state_.ExecuteInParallel(moves); + } + + virtual void AssembleSwap(InstructionOperand* source, + InstructionOperand* destination) V8_OVERRIDE { + InterpreterState::Moves moves; + moves.push_back(MoveOperands(source, destination)); + moves.push_back(MoveOperands(destination, source)); + state_.ExecuteInParallel(moves); + } + + void AssembleParallelMove(const ParallelMove* pm) { + InterpreterState::Moves moves(pm->move_operands()->begin(), + pm->move_operands()->end()); + state_.ExecuteInParallel(moves); + } + + InterpreterState state() const { return state_; } + + private: + InterpreterState state_; +}; + + +class ParallelMoveCreator : public HandleAndZoneScope { + public: + ParallelMoveCreator() : rng_(CcTest::random_number_generator()) {} + + ParallelMove* Create(int size) { + ParallelMove* parallel_move = new (main_zone()) ParallelMove(main_zone()); + std::set<InstructionOperand*, InstructionOperandComparator> seen; + for (int i = 0; i < size; ++i) { + MoveOperands mo(CreateRandomOperand(), CreateRandomOperand()); + if (!mo.IsRedundant() && seen.find(mo.destination()) == seen.end()) { + parallel_move->AddMove(mo.source(), mo.destination(), main_zone()); + seen.insert(mo.destination()); + } + } + return parallel_move; + } + + private: + struct InstructionOperandComparator { + bool operator()(const InstructionOperand* x, + const InstructionOperand* y) const { + return (x->kind() < y->kind()) || + (x->kind() == y->kind() && x->index() < y->index()); + } + }; + + InstructionOperand* CreateRandomOperand() { + int index = rng_->NextInt(6); + switch (rng_->NextInt(5)) { + case 0: + return ConstantOperand::Create(index, main_zone()); + case 1: + return StackSlotOperand::Create(index, main_zone()); + case 2: + return DoubleStackSlotOperand::Create(index, main_zone()); + case 3: + return RegisterOperand::Create(index, main_zone()); + case 4: + return DoubleRegisterOperand::Create(index, main_zone()); + } + UNREACHABLE(); + return NULL; + } + + private: + v8::base::RandomNumberGenerator* rng_; +}; + + +TEST(FuzzResolver) { + ParallelMoveCreator pmc; + for (int size = 0; size < 20; ++size) { + for (int repeat = 0; repeat < 50; ++repeat) { + ParallelMove* pm = pmc.Create(size); + + // Note: The gap resolver modifies the ParallelMove, so interpret first. 
+      MoveInterpreter mi1;
+      mi1.AssembleParallelMove(pm);
+
+      MoveInterpreter mi2;
+      GapResolver resolver(&mi2);
+      resolver.Resolve(pm);
+
+      CHECK(mi1.state() == mi2.state());
+    }
+  }
+}
diff --git a/deps/v8/test/cctest/compiler/test-graph-reducer.cc b/deps/v8/test/cctest/compiler/test-graph-reducer.cc
new file mode 100644
index 00000000000..189b3db18e7
--- /dev/null
+++ b/deps/v8/test/cctest/compiler/test-graph-reducer.cc
@@ -0,0 +1,661 @@
+// Copyright 2014 the V8 project authors. All rights reserved.
+// Use of this source code is governed by a BSD-style license that can be
+// found in the LICENSE file.
+
+#include "src/v8.h"
+
+#include "graph-tester.h"
+#include "src/compiler/generic-node-inl.h"
+#include "src/compiler/graph-reducer.h"
+
+using namespace v8::internal;
+using namespace v8::internal::compiler;
+
+const uint8_t OPCODE_A0 = 10;
+const uint8_t OPCODE_A1 = 11;
+const uint8_t OPCODE_A2 = 12;
+const uint8_t OPCODE_B0 = 20;
+const uint8_t OPCODE_B1 = 21;
+const uint8_t OPCODE_B2 = 22;
+const uint8_t OPCODE_C0 = 30;
+const uint8_t OPCODE_C1 = 31;
+const uint8_t OPCODE_C2 = 32;
+
+static SimpleOperator OPA0(OPCODE_A0, Operator::kNoWrite, 0, 0, "opa0");
+static SimpleOperator OPA1(OPCODE_A1, Operator::kNoWrite, 1, 0, "opa1");
+static SimpleOperator OPA2(OPCODE_A2, Operator::kNoWrite, 2, 0, "opa2");
+static SimpleOperator OPB0(OPCODE_B0, Operator::kNoWrite, 0, 0, "opb0");
+static SimpleOperator OPB1(OPCODE_B1, Operator::kNoWrite, 1, 0, "opb1");
+static SimpleOperator OPB2(OPCODE_B2, Operator::kNoWrite, 2, 0, "opb2");
+static SimpleOperator OPC0(OPCODE_C0, Operator::kNoWrite, 0, 0, "opc0");
+static SimpleOperator OPC1(OPCODE_C1, Operator::kNoWrite, 1, 0, "opc1");
+static SimpleOperator OPC2(OPCODE_C2, Operator::kNoWrite, 2, 0, "opc2");
+
+
+// Replaces all "A" operators with "B" operators without creating new nodes.
+class InPlaceABReducer : public Reducer {
+ public:
+  virtual Reduction Reduce(Node* node) {
+    switch (node->op()->opcode()) {
+      case OPCODE_A0:
+        CHECK_EQ(0, node->InputCount());
+        node->set_op(&OPB0);
+        return Replace(node);
+      case OPCODE_A1:
+        CHECK_EQ(1, node->InputCount());
+        node->set_op(&OPB1);
+        return Replace(node);
+      case OPCODE_A2:
+        CHECK_EQ(2, node->InputCount());
+        node->set_op(&OPB2);
+        return Replace(node);
+    }
+    return NoChange();
+  }
+};
+
+
+// Replaces all "A" operators with "B" operators by allocating new nodes.
+class NewABReducer : public Reducer {
+ public:
+  explicit NewABReducer(Graph* graph) : graph_(graph) {}
+  virtual Reduction Reduce(Node* node) {
+    switch (node->op()->opcode()) {
+      case OPCODE_A0:
+        CHECK_EQ(0, node->InputCount());
+        return Replace(graph_->NewNode(&OPB0));
+      case OPCODE_A1:
+        CHECK_EQ(1, node->InputCount());
+        return Replace(graph_->NewNode(&OPB1, node->InputAt(0)));
+      case OPCODE_A2:
+        CHECK_EQ(2, node->InputCount());
+        return Replace(
+            graph_->NewNode(&OPB2, node->InputAt(0), node->InputAt(1)));
+    }
+    return NoChange();
+  }
+  Graph* graph_;
+};
+
+
+// Replaces all "B" operators with "C" operators without creating new nodes.
+class InPlaceBCReducer : public Reducer {
+ public:
+  virtual Reduction Reduce(Node* node) {
+    switch (node->op()->opcode()) {
+      case OPCODE_B0:
+        CHECK_EQ(0, node->InputCount());
+        node->set_op(&OPC0);
+        return Replace(node);
+      case OPCODE_B1:
+        CHECK_EQ(1, node->InputCount());
+        node->set_op(&OPC1);
+        return Replace(node);
+      case OPCODE_B2:
+        CHECK_EQ(2, node->InputCount());
+        node->set_op(&OPC2);
+        return Replace(node);
+    }
+    return NoChange();
+  }
+};
+
+
+// Wraps all "OPA0" nodes in "OPB1" operators by allocating new nodes.
+class A0Wrapper V8_FINAL : public Reducer {
+ public:
+  explicit A0Wrapper(Graph* graph) : graph_(graph) {}
+  virtual Reduction Reduce(Node* node) V8_OVERRIDE {
+    switch (node->op()->opcode()) {
+      case OPCODE_A0:
+        CHECK_EQ(0, node->InputCount());
+        return Replace(graph_->NewNode(&OPB1, node));
+    }
+    return NoChange();
+  }
+  Graph* graph_;
+};
+
+
+// Wraps all "OPB0" nodes in two "OPC1" operators by allocating new nodes.
+class B0Wrapper V8_FINAL : public Reducer {
+ public:
+  explicit B0Wrapper(Graph* graph) : graph_(graph) {}
+  virtual Reduction Reduce(Node* node) V8_OVERRIDE {
+    switch (node->op()->opcode()) {
+      case OPCODE_B0:
+        CHECK_EQ(0, node->InputCount());
+        return Replace(graph_->NewNode(&OPC1, graph_->NewNode(&OPC1, node)));
+    }
+    return NoChange();
+  }
+  Graph* graph_;
+};
+
+
+// Replaces all "OPA1" nodes with the first input.
+class A1Forwarder : public Reducer {
+  virtual Reduction Reduce(Node* node) {
+    switch (node->op()->opcode()) {
+      case OPCODE_A1:
+        CHECK_EQ(1, node->InputCount());
+        return Replace(node->InputAt(0));
+    }
+    return NoChange();
+  }
+};
+
+
+// Replaces all "OPB1" nodes with the first input.
+class B1Forwarder : public Reducer {
+  virtual Reduction Reduce(Node* node) {
+    switch (node->op()->opcode()) {
+      case OPCODE_B1:
+        CHECK_EQ(1, node->InputCount());
+        return Replace(node->InputAt(0));
+    }
+    return NoChange();
+  }
+};
+
+
+// Swaps the inputs to "OPA2" and "OPB2" nodes based on ids.
+class AB2Sorter : public Reducer {
+  virtual Reduction Reduce(Node* node) {
+    switch (node->op()->opcode()) {
+      case OPCODE_A2:
+      case OPCODE_B2:
+        CHECK_EQ(2, node->InputCount());
+        Node* x = node->InputAt(0);
+        Node* y = node->InputAt(1);
+        if (x->id() > y->id()) {
+          node->ReplaceInput(0, y);
+          node->ReplaceInput(1, x);
+          return Replace(node);
+        }
+    }
+    return NoChange();
+  }
+};
+
+
+// Simply records the nodes visited.
+class ReducerRecorder : public Reducer { + public: + explicit ReducerRecorder(Zone* zone) + : set(NodeSet::key_compare(), NodeSet::allocator_type(zone)) {} + virtual Reduction Reduce(Node* node) { + set.insert(node); + return NoChange(); + } + void CheckContains(Node* node) { + CHECK_EQ(1, static_cast<int>(set.count(node))); + } + NodeSet set; +}; + + +TEST(ReduceGraphFromEnd1) { + GraphTester graph; + + Node* n1 = graph.NewNode(&OPA0); + Node* end = graph.NewNode(&OPA1, n1); + graph.SetEnd(end); + + GraphReducer reducer(&graph); + ReducerRecorder recorder(graph.zone()); + reducer.AddReducer(&recorder); + reducer.ReduceGraph(); + recorder.CheckContains(n1); + recorder.CheckContains(end); +} + + +TEST(ReduceGraphFromEnd2) { + GraphTester graph; + + Node* n1 = graph.NewNode(&OPA0); + Node* n2 = graph.NewNode(&OPA1, n1); + Node* n3 = graph.NewNode(&OPA1, n1); + Node* end = graph.NewNode(&OPA2, n2, n3); + graph.SetEnd(end); + + GraphReducer reducer(&graph); + ReducerRecorder recorder(graph.zone()); + reducer.AddReducer(&recorder); + reducer.ReduceGraph(); + recorder.CheckContains(n1); + recorder.CheckContains(n2); + recorder.CheckContains(n3); + recorder.CheckContains(end); +} + + +TEST(ReduceInPlace1) { + GraphTester graph; + + Node* n1 = graph.NewNode(&OPA0); + Node* end = graph.NewNode(&OPA1, n1); + graph.SetEnd(end); + + GraphReducer reducer(&graph); + InPlaceABReducer r; + reducer.AddReducer(&r); + + // Tests A* => B* with in-place updates. + for (int i = 0; i < 3; i++) { + int before = graph.NodeCount(); + reducer.ReduceGraph(); + CHECK_EQ(before, graph.NodeCount()); + CHECK_EQ(&OPB0, n1->op()); + CHECK_EQ(&OPB1, end->op()); + CHECK_EQ(n1, end->InputAt(0)); + } +} + + +TEST(ReduceInPlace2) { + GraphTester graph; + + Node* n1 = graph.NewNode(&OPA0); + Node* n2 = graph.NewNode(&OPA1, n1); + Node* n3 = graph.NewNode(&OPA1, n1); + Node* end = graph.NewNode(&OPA2, n2, n3); + graph.SetEnd(end); + + GraphReducer reducer(&graph); + InPlaceABReducer r; + reducer.AddReducer(&r); + + // Tests A* => B* with in-place updates. + for (int i = 0; i < 3; i++) { + int before = graph.NodeCount(); + reducer.ReduceGraph(); + CHECK_EQ(before, graph.NodeCount()); + CHECK_EQ(&OPB0, n1->op()); + CHECK_EQ(&OPB1, n2->op()); + CHECK_EQ(n1, n2->InputAt(0)); + CHECK_EQ(&OPB1, n3->op()); + CHECK_EQ(n1, n3->InputAt(0)); + CHECK_EQ(&OPB2, end->op()); + CHECK_EQ(n2, end->InputAt(0)); + CHECK_EQ(n3, end->InputAt(1)); + } +} + + +TEST(ReduceNew1) { + GraphTester graph; + + Node* n1 = graph.NewNode(&OPA0); + Node* n2 = graph.NewNode(&OPA1, n1); + Node* n3 = graph.NewNode(&OPA1, n1); + Node* end = graph.NewNode(&OPA2, n2, n3); + graph.SetEnd(end); + + GraphReducer reducer(&graph); + NewABReducer r(&graph); + reducer.AddReducer(&r); + + // Tests A* => B* while creating new nodes. + for (int i = 0; i < 3; i++) { + int before = graph.NodeCount(); + reducer.ReduceGraph(); + if (i == 0) { + CHECK_NE(before, graph.NodeCount()); + } else { + CHECK_EQ(before, graph.NodeCount()); + } + Node* nend = graph.end(); + CHECK_NE(end, nend); // end() should be updated too. 
+ + Node* nn2 = nend->InputAt(0); + Node* nn3 = nend->InputAt(1); + Node* nn1 = nn2->InputAt(0); + + CHECK_EQ(nn1, nn3->InputAt(0)); + + CHECK_EQ(&OPB0, nn1->op()); + CHECK_EQ(&OPB1, nn2->op()); + CHECK_EQ(&OPB1, nn3->op()); + CHECK_EQ(&OPB2, nend->op()); + } +} + + +TEST(Wrapping1) { + GraphTester graph; + + Node* end = graph.NewNode(&OPA0); + graph.SetEnd(end); + CHECK_EQ(1, graph.NodeCount()); + + GraphReducer reducer(&graph); + A0Wrapper r(&graph); + reducer.AddReducer(&r); + + reducer.ReduceGraph(); + CHECK_EQ(2, graph.NodeCount()); + + Node* nend = graph.end(); + CHECK_NE(end, nend); + CHECK_EQ(&OPB1, nend->op()); + CHECK_EQ(1, nend->InputCount()); + CHECK_EQ(end, nend->InputAt(0)); +} + + +TEST(Wrapping2) { + GraphTester graph; + + Node* end = graph.NewNode(&OPB0); + graph.SetEnd(end); + CHECK_EQ(1, graph.NodeCount()); + + GraphReducer reducer(&graph); + B0Wrapper r(&graph); + reducer.AddReducer(&r); + + reducer.ReduceGraph(); + CHECK_EQ(3, graph.NodeCount()); + + Node* nend = graph.end(); + CHECK_NE(end, nend); + CHECK_EQ(&OPC1, nend->op()); + CHECK_EQ(1, nend->InputCount()); + + Node* n1 = nend->InputAt(0); + CHECK_NE(end, n1); + CHECK_EQ(&OPC1, n1->op()); + CHECK_EQ(1, n1->InputCount()); + CHECK_EQ(end, n1->InputAt(0)); +} + + +TEST(Forwarding1) { + GraphTester graph; + + Node* n1 = graph.NewNode(&OPA0); + Node* end = graph.NewNode(&OPA1, n1); + graph.SetEnd(end); + + GraphReducer reducer(&graph); + A1Forwarder r; + reducer.AddReducer(&r); + + // Tests A1(x) => x + for (int i = 0; i < 3; i++) { + int before = graph.NodeCount(); + reducer.ReduceGraph(); + CHECK_EQ(before, graph.NodeCount()); + CHECK_EQ(&OPA0, n1->op()); + CHECK_EQ(n1, graph.end()); + } +} + + +TEST(Forwarding2) { + GraphTester graph; + + Node* n1 = graph.NewNode(&OPA0); + Node* n2 = graph.NewNode(&OPA1, n1); + Node* n3 = graph.NewNode(&OPA1, n1); + Node* end = graph.NewNode(&OPA2, n2, n3); + graph.SetEnd(end); + + GraphReducer reducer(&graph); + A1Forwarder r; + reducer.AddReducer(&r); + + // Tests reducing A2(A1(x), A1(y)) => A2(x, y). + for (int i = 0; i < 3; i++) { + int before = graph.NodeCount(); + reducer.ReduceGraph(); + CHECK_EQ(before, graph.NodeCount()); + CHECK_EQ(&OPA0, n1->op()); + CHECK_EQ(n1, end->InputAt(0)); + CHECK_EQ(n1, end->InputAt(1)); + CHECK_EQ(&OPA2, end->op()); + CHECK_EQ(0, n2->UseCount()); + CHECK_EQ(0, n3->UseCount()); + } +} + + +TEST(Forwarding3) { + // Tests reducing a chain of A1(A1(A1(A1(x)))) => x. + for (int i = 0; i < 8; i++) { + GraphTester graph; + + Node* n1 = graph.NewNode(&OPA0); + Node* end = n1; + for (int j = 0; j < i; j++) { + end = graph.NewNode(&OPA1, end); + } + graph.SetEnd(end); + + GraphReducer reducer(&graph); + A1Forwarder r; + reducer.AddReducer(&r); + + for (int i = 0; i < 3; i++) { + int before = graph.NodeCount(); + reducer.ReduceGraph(); + CHECK_EQ(before, graph.NodeCount()); + CHECK_EQ(&OPA0, n1->op()); + CHECK_EQ(n1, graph.end()); + } + } +} + + +TEST(ReduceForward1) { + GraphTester graph; + + Node* n1 = graph.NewNode(&OPA0); + Node* n2 = graph.NewNode(&OPA1, n1); + Node* n3 = graph.NewNode(&OPA1, n1); + Node* end = graph.NewNode(&OPA2, n2, n3); + graph.SetEnd(end); + + GraphReducer reducer(&graph); + InPlaceABReducer r; + B1Forwarder f; + reducer.AddReducer(&r); + reducer.AddReducer(&f); + + // Tests first reducing A => B, then B1(x) => x. 
+  for (int i = 0; i < 3; i++) {
+    int before = graph.NodeCount();
+    reducer.ReduceGraph();
+    CHECK_EQ(before, graph.NodeCount());
+    CHECK_EQ(&OPB0, n1->op());
+    CHECK_EQ(&OPB1, n2->op());
+    CHECK_EQ(n1, end->InputAt(0));
+    CHECK_EQ(&OPB1, n3->op());
+    CHECK_EQ(n1, end->InputAt(1));
+    CHECK_EQ(&OPB2, end->op());
+    CHECK_EQ(0, n2->UseCount());
+    CHECK_EQ(0, n3->UseCount());
+  }
+}
+
+
+TEST(Sorter1) {
+  HandleAndZoneScope scope;
+  AB2Sorter r;
+  for (int i = 0; i < 6; i++) {
+    GraphTester graph;
+
+    Node* n1 = graph.NewNode(&OPA0);
+    Node* n2 = graph.NewNode(&OPA1, n1);
+    Node* n3 = graph.NewNode(&OPA1, n1);
+    Node* end = NULL;  // Initialize to please the compiler.
+
+    if (i == 0) end = graph.NewNode(&OPA2, n2, n3);
+    if (i == 1) end = graph.NewNode(&OPA2, n3, n2);
+    if (i == 2) end = graph.NewNode(&OPA2, n2, n1);
+    if (i == 3) end = graph.NewNode(&OPA2, n1, n2);
+    if (i == 4) end = graph.NewNode(&OPA2, n3, n1);
+    if (i == 5) end = graph.NewNode(&OPA2, n1, n3);
+
+    graph.SetEnd(end);
+
+    GraphReducer reducer(&graph);
+    reducer.AddReducer(&r);
+
+    int before = graph.NodeCount();
+    reducer.ReduceGraph();
+    CHECK_EQ(before, graph.NodeCount());
+    CHECK_EQ(&OPA0, n1->op());
+    CHECK_EQ(&OPA1, n2->op());
+    CHECK_EQ(&OPA1, n3->op());
+    CHECK_EQ(&OPA2, end->op());
+    CHECK_EQ(end, graph.end());
+    CHECK(end->InputAt(0)->id() <= end->InputAt(1)->id());
+  }
+}
+
+
+// Generate a node graph with the given permutations.
+void GenDAG(Graph* graph, int* p3, int* p2, int* p1) {
+  Node* level4 = graph->NewNode(&OPA0);
+  Node* level3[] = {graph->NewNode(&OPA1, level4),
+                    graph->NewNode(&OPA1, level4)};
+
+  Node* level2[] = {graph->NewNode(&OPA1, level3[p3[0]]),
+                    graph->NewNode(&OPA1, level3[p3[1]]),
+                    graph->NewNode(&OPA1, level3[p3[0]]),
+                    graph->NewNode(&OPA1, level3[p3[1]])};
+
+  Node* level1[] = {graph->NewNode(&OPA2, level2[p2[0]], level2[p2[1]]),
+                    graph->NewNode(&OPA2, level2[p2[2]], level2[p2[3]])};
+
+  Node* end = graph->NewNode(&OPA2, level1[p1[0]], level1[p1[1]]);
+  graph->SetEnd(end);
+}
+
+
+TEST(SortForwardReduce) {
+  GraphTester graph;
+
+  // Tests combined reductions on a series of DAGs.
+  for (int j = 0; j < 2; j++) {
+    int p3[] = {j, 1 - j};
+    for (int m = 0; m < 2; m++) {
+      int p1[] = {m, 1 - m};
+      for (int k = 0; k < 24; k++) {  // All permutations of 0, 1, 2, 3
+        int p2[] = {-1, -1, -1, -1};
+        int n = k;
+        for (int d = 4; d >= 1; d--) {  // Construct permutation.
+          int p = n % d;
+          for (int z = 0; z < 4; z++) {
+            if (p2[z] == -1) {
+              if (p == 0) p2[z] = d - 1;
+              p--;
+            }
+          }
+          n = n / d;
+        }
+
+        GenDAG(&graph, p3, p2, p1);
+
+        GraphReducer reducer(&graph);
+        AB2Sorter r1;
+        A1Forwarder r2;
+        InPlaceABReducer r3;
+        reducer.AddReducer(&r1);
+        reducer.AddReducer(&r2);
+        reducer.AddReducer(&r3);
+
+        reducer.ReduceGraph();
+
+        Node* end = graph.end();
+        CHECK_EQ(&OPB2, end->op());
+        Node* n1 = end->InputAt(0);
+        Node* n2 = end->InputAt(1);
+        CHECK_NE(n1, n2);
+        CHECK(n1->id() < n2->id());
+        CHECK_EQ(&OPB2, n1->op());
+        CHECK_EQ(&OPB2, n2->op());
+        Node* n4 = n1->InputAt(0);
+        CHECK_EQ(&OPB0, n4->op());
+        CHECK_EQ(n4, n1->InputAt(1));
+        CHECK_EQ(n4, n2->InputAt(0));
+        CHECK_EQ(n4, n2->InputAt(1));
+      }
+    }
+  }
+}
+
+
+TEST(Order) {
+  // Test that the order of reducers doesn't matter, as they should be
+  // rerun for changed nodes.
+ for (int i = 0; i < 2; i++) { + GraphTester graph; + + Node* n1 = graph.NewNode(&OPA0); + Node* end = graph.NewNode(&OPA1, n1); + graph.SetEnd(end); + + GraphReducer reducer(&graph); + InPlaceABReducer abr; + InPlaceBCReducer bcr; + if (i == 0) { + reducer.AddReducer(&abr); + reducer.AddReducer(&bcr); + } else { + reducer.AddReducer(&bcr); + reducer.AddReducer(&abr); + } + + // Tests A* => C* with in-place updates. + for (int i = 0; i < 3; i++) { + int before = graph.NodeCount(); + reducer.ReduceGraph(); + CHECK_EQ(before, graph.NodeCount()); + CHECK_EQ(&OPC0, n1->op()); + CHECK_EQ(&OPC1, end->op()); + CHECK_EQ(n1, end->InputAt(0)); + } + } +} + + +// Tests that a reducer is only applied once. +class OneTimeReducer : public Reducer { + public: + OneTimeReducer(Reducer* reducer, Zone* zone) + : reducer_(reducer), + nodes_(NodeSet::key_compare(), NodeSet::allocator_type(zone)) {} + virtual Reduction Reduce(Node* node) { + CHECK_EQ(0, static_cast<int>(nodes_.count(node))); + nodes_.insert(node); + return reducer_->Reduce(node); + } + Reducer* reducer_; + NodeSet nodes_; +}; + + +TEST(OneTimeReduce1) { + GraphTester graph; + + Node* n1 = graph.NewNode(&OPA0); + Node* end = graph.NewNode(&OPA1, n1); + graph.SetEnd(end); + + GraphReducer reducer(&graph); + InPlaceABReducer r; + OneTimeReducer once(&r, graph.zone()); + reducer.AddReducer(&once); + + // Tests A* => B* with in-place updates. Should only be applied once. + int before = graph.NodeCount(); + reducer.ReduceGraph(); + CHECK_EQ(before, graph.NodeCount()); + CHECK_EQ(&OPB0, n1->op()); + CHECK_EQ(&OPB1, end->op()); + CHECK_EQ(n1, end->InputAt(0)); +} diff --git a/deps/v8/test/cctest/compiler/test-instruction-selector-arm.cc b/deps/v8/test/cctest/compiler/test-instruction-selector-arm.cc new file mode 100644 index 00000000000..f62e09f978e --- /dev/null +++ b/deps/v8/test/cctest/compiler/test-instruction-selector-arm.cc @@ -0,0 +1,1863 @@ +// Copyright 2014 the V8 project authors. All rights reserved. +// Use of this source code is governed by a BSD-style license that can be +// found in the LICENSE file. + +#include <list> + +#include "test/cctest/compiler/instruction-selector-tester.h" +#include "test/cctest/compiler/value-helper.h" + +using namespace v8::internal; +using namespace v8::internal::compiler; + +namespace { + +typedef RawMachineAssembler::Label MLabel; + +struct DPI { + Operator* op; + ArchOpcode arch_opcode; + ArchOpcode reverse_arch_opcode; + ArchOpcode test_arch_opcode; +}; + + +// ARM data processing instructions. +class DPIs V8_FINAL : public std::list<DPI>, private HandleAndZoneScope { + public: + DPIs() { + MachineOperatorBuilder machine(main_zone()); + DPI and_ = {machine.Word32And(), kArmAnd, kArmAnd, kArmTst}; + push_back(and_); + DPI or_ = {machine.Word32Or(), kArmOrr, kArmOrr, kArmOrr}; + push_back(or_); + DPI xor_ = {machine.Word32Xor(), kArmEor, kArmEor, kArmTeq}; + push_back(xor_); + DPI add = {machine.Int32Add(), kArmAdd, kArmAdd, kArmCmn}; + push_back(add); + DPI sub = {machine.Int32Sub(), kArmSub, kArmRsb, kArmCmp}; + push_back(sub); + } +}; + + +struct ODPI { + Operator* op; + ArchOpcode arch_opcode; + ArchOpcode reverse_arch_opcode; +}; + + +// ARM data processing instructions with overflow. 
+class ODPIs V8_FINAL : public std::list<ODPI>, private HandleAndZoneScope {
+ public:
+  ODPIs() {
+    MachineOperatorBuilder machine(main_zone());
+    ODPI add = {machine.Int32AddWithOverflow(), kArmAdd, kArmAdd};
+    push_back(add);
+    ODPI sub = {machine.Int32SubWithOverflow(), kArmSub, kArmRsb};
+    push_back(sub);
+  }
+};
+
+
+// ARM immediates.
+class Immediates V8_FINAL : public std::list<int32_t> {
+ public:
+  Immediates() {
+    for (uint32_t imm8 = 0; imm8 < 256; ++imm8) {
+      for (uint32_t rot4 = 0; rot4 < 32; rot4 += 2) {
+        // Rotate imm8 right by rot4; the rot4 == 0 case is handled
+        // separately to avoid an undefined shift by 32.
+        int32_t imm = static_cast<int32_t>(
+            rot4 == 0 ? imm8 : ((imm8 >> rot4) | (imm8 << (32 - rot4))));
+        CHECK(Assembler::ImmediateFitsAddrMode1Instruction(imm));
+        push_back(imm);
+      }
+    }
+  }
+};
+
+
+struct Shift {
+  Operator* op;
+  int32_t i_low;          // lowest possible immediate
+  int32_t i_high;         // highest possible immediate
+  AddressingMode i_mode;  // Operand2_R_<shift>_I
+  AddressingMode r_mode;  // Operand2_R_<shift>_R
+};
+
+
+// ARM shifts.
+class Shifts V8_FINAL : public std::list<Shift>, private HandleAndZoneScope {
+ public:
+  Shifts() {
+    MachineOperatorBuilder machine(main_zone());
+    Shift sar = {machine.Word32Sar(), 1, 32, kMode_Operand2_R_ASR_I,
+                 kMode_Operand2_R_ASR_R};
+    Shift shl = {machine.Word32Shl(), 0, 31, kMode_Operand2_R_LSL_I,
+                 kMode_Operand2_R_LSL_R};
+    Shift shr = {machine.Word32Shr(), 1, 32, kMode_Operand2_R_LSR_I,
+                 kMode_Operand2_R_LSR_R};
+    push_back(sar);
+    push_back(shl);
+    push_back(shr);
+  }
+};
+
+}  // namespace
+
+
+TEST(InstructionSelectorDPIP) {
+  DPIs dpis;
+  for (DPIs::const_iterator i = dpis.begin(); i != dpis.end(); ++i) {
+    DPI dpi = *i;
+    InstructionSelectorTester m;
+    m.Return(m.NewNode(dpi.op, m.Parameter(0), m.Parameter(1)));
+    m.SelectInstructions();
+    CHECK_EQ(1, m.code.size());
+    CHECK_EQ(dpi.arch_opcode, m.code[0]->arch_opcode());
+    CHECK_EQ(kMode_Operand2_R, m.code[0]->addressing_mode());
+  }
+}
+
+
+TEST(InstructionSelectorDPIImm) {
+  DPIs dpis;
+  Immediates immediates;
+  for (DPIs::const_iterator i = dpis.begin(); i != dpis.end(); ++i) {
+    DPI dpi = *i;
+    for (Immediates::const_iterator j = immediates.begin();
+         j != immediates.end(); ++j) {
+      int32_t imm = *j;
+      {
+        InstructionSelectorTester m;
+        m.Return(m.NewNode(dpi.op, m.Parameter(0), m.Int32Constant(imm)));
+        m.SelectInstructions();
+        CHECK_EQ(1, m.code.size());
+        CHECK_EQ(dpi.arch_opcode, m.code[0]->arch_opcode());
+        CHECK_EQ(kMode_Operand2_I, m.code[0]->addressing_mode());
+      }
+      {
+        InstructionSelectorTester m;
+        m.Return(m.NewNode(dpi.op, m.Int32Constant(imm), m.Parameter(0)));
+        m.SelectInstructions();
+        CHECK_EQ(1, m.code.size());
+        CHECK_EQ(dpi.reverse_arch_opcode, m.code[0]->arch_opcode());
+        CHECK_EQ(kMode_Operand2_I, m.code[0]->addressing_mode());
+      }
+    }
+  }
+}
+
+
+TEST(InstructionSelectorDPIAndShiftP) {
+  DPIs dpis;
+  Shifts shifts;
+  for (DPIs::const_iterator i = dpis.begin(); i != dpis.end(); ++i) {
+    DPI dpi = *i;
+    for (Shifts::const_iterator j = shifts.begin(); j != shifts.end(); ++j) {
+      Shift shift = *j;
+      {
+        InstructionSelectorTester m;
+        m.Return(
+            m.NewNode(dpi.op, m.Parameter(0),
+                      m.NewNode(shift.op, m.Parameter(1), m.Parameter(2))));
+        m.SelectInstructions();
+        CHECK_EQ(1, m.code.size());
+        CHECK_EQ(dpi.arch_opcode, m.code[0]->arch_opcode());
+        CHECK_EQ(shift.r_mode, m.code[0]->addressing_mode());
+      }
+      {
+        InstructionSelectorTester m;
+        m.Return(m.NewNode(dpi.op,
+                           m.NewNode(shift.op, m.Parameter(0), m.Parameter(1)),
+                           m.Parameter(2)));
+        m.SelectInstructions();
+        CHECK_EQ(1, m.code.size());
+        CHECK_EQ(dpi.reverse_arch_opcode, m.code[0]->arch_opcode());
+        CHECK_EQ(shift.r_mode,
m.code[0]->addressing_mode()); + } + } + } +} + + +TEST(InstructionSelectorDPIAndRotateRightP) { + DPIs dpis; + for (DPIs::const_iterator i = dpis.begin(); i != dpis.end(); ++i) { + DPI dpi = *i; + { + InstructionSelectorTester m; + Node* value = m.Parameter(1); + Node* shift = m.Parameter(2); + Node* ror = m.Word32Or( + m.Word32Shr(value, shift), + m.Word32Shl(value, m.Int32Sub(m.Int32Constant(32), shift))); + m.Return(m.NewNode(dpi.op, m.Parameter(0), ror)); + m.SelectInstructions(); + CHECK_EQ(1, m.code.size()); + CHECK_EQ(dpi.arch_opcode, m.code[0]->arch_opcode()); + CHECK_EQ(kMode_Operand2_R_ROR_R, m.code[0]->addressing_mode()); + } + { + InstructionSelectorTester m; + Node* value = m.Parameter(1); + Node* shift = m.Parameter(2); + Node* ror = + m.Word32Or(m.Word32Shl(value, m.Int32Sub(m.Int32Constant(32), shift)), + m.Word32Shr(value, shift)); + m.Return(m.NewNode(dpi.op, m.Parameter(0), ror)); + m.SelectInstructions(); + CHECK_EQ(1, m.code.size()); + CHECK_EQ(dpi.arch_opcode, m.code[0]->arch_opcode()); + CHECK_EQ(kMode_Operand2_R_ROR_R, m.code[0]->addressing_mode()); + } + { + InstructionSelectorTester m; + Node* value = m.Parameter(1); + Node* shift = m.Parameter(2); + Node* ror = m.Word32Or( + m.Word32Shr(value, shift), + m.Word32Shl(value, m.Int32Sub(m.Int32Constant(32), shift))); + m.Return(m.NewNode(dpi.op, ror, m.Parameter(0))); + m.SelectInstructions(); + CHECK_EQ(1, m.code.size()); + CHECK_EQ(dpi.reverse_arch_opcode, m.code[0]->arch_opcode()); + CHECK_EQ(kMode_Operand2_R_ROR_R, m.code[0]->addressing_mode()); + } + { + InstructionSelectorTester m; + Node* value = m.Parameter(1); + Node* shift = m.Parameter(2); + Node* ror = + m.Word32Or(m.Word32Shl(value, m.Int32Sub(m.Int32Constant(32), shift)), + m.Word32Shr(value, shift)); + m.Return(m.NewNode(dpi.op, ror, m.Parameter(0))); + m.SelectInstructions(); + CHECK_EQ(1, m.code.size()); + CHECK_EQ(dpi.reverse_arch_opcode, m.code[0]->arch_opcode()); + CHECK_EQ(kMode_Operand2_R_ROR_R, m.code[0]->addressing_mode()); + } + } +} + + +TEST(InstructionSelectorDPIAndShiftImm) { + DPIs dpis; + Shifts shifts; + for (DPIs::const_iterator i = dpis.begin(); i != dpis.end(); ++i) { + DPI dpi = *i; + for (Shifts::const_iterator j = shifts.begin(); j != shifts.end(); ++j) { + Shift shift = *j; + for (int32_t imm = shift.i_low; imm <= shift.i_high; ++imm) { + { + InstructionSelectorTester m; + m.Return(m.NewNode( + dpi.op, m.Parameter(0), + m.NewNode(shift.op, m.Parameter(1), m.Int32Constant(imm)))); + m.SelectInstructions(); + CHECK_EQ(1, m.code.size()); + CHECK_EQ(dpi.arch_opcode, m.code[0]->arch_opcode()); + CHECK_EQ(shift.i_mode, m.code[0]->addressing_mode()); + } + { + InstructionSelectorTester m; + m.Return(m.NewNode( + dpi.op, m.NewNode(shift.op, m.Parameter(0), m.Int32Constant(imm)), + m.Parameter(1))); + m.SelectInstructions(); + CHECK_EQ(1, m.code.size()); + CHECK_EQ(dpi.reverse_arch_opcode, m.code[0]->arch_opcode()); + CHECK_EQ(shift.i_mode, m.code[0]->addressing_mode()); + } + } + } + } +} + + +TEST(InstructionSelectorODPIP) { + ODPIs odpis; + for (ODPIs::const_iterator i = odpis.begin(); i != odpis.end(); ++i) { + ODPI odpi = *i; + { + InstructionSelectorTester m; + m.Return( + m.Projection(1, m.NewNode(odpi.op, m.Parameter(0), m.Parameter(1)))); + m.SelectInstructions(); + CHECK_EQ(1, m.code.size()); + CHECK_EQ(odpi.arch_opcode, m.code[0]->arch_opcode()); + CHECK_EQ(kMode_Operand2_R, m.code[0]->addressing_mode()); + CHECK_EQ(kFlags_set, m.code[0]->flags_mode()); + CHECK_EQ(kOverflow, m.code[0]->flags_condition()); + CHECK_EQ(2, 
m.code[0]->InputCount()); + CHECK_LE(1, m.code[0]->OutputCount()); + } + { + InstructionSelectorTester m; + m.Return( + m.Projection(0, m.NewNode(odpi.op, m.Parameter(0), m.Parameter(1)))); + m.SelectInstructions(); + CHECK_EQ(1, m.code.size()); + CHECK_EQ(odpi.arch_opcode, m.code[0]->arch_opcode()); + CHECK_EQ(kMode_Operand2_R, m.code[0]->addressing_mode()); + CHECK_EQ(kFlags_none, m.code[0]->flags_mode()); + CHECK_EQ(2, m.code[0]->InputCount()); + CHECK_LE(1, m.code[0]->OutputCount()); + } + { + InstructionSelectorTester m; + Node* node = m.NewNode(odpi.op, m.Parameter(0), m.Parameter(1)); + m.Return(m.Word32Equal(m.Projection(0, node), m.Projection(1, node))); + m.SelectInstructions(); + CHECK_LE(1, m.code.size()); + CHECK_EQ(odpi.arch_opcode, m.code[0]->arch_opcode()); + CHECK_EQ(kMode_Operand2_R, m.code[0]->addressing_mode()); + CHECK_EQ(kFlags_set, m.code[0]->flags_mode()); + CHECK_EQ(kOverflow, m.code[0]->flags_condition()); + CHECK_EQ(2, m.code[0]->InputCount()); + CHECK_EQ(2, m.code[0]->OutputCount()); + } + } +} + + +TEST(InstructionSelectorODPIImm) { + ODPIs odpis; + Immediates immediates; + for (ODPIs::const_iterator i = odpis.begin(); i != odpis.end(); ++i) { + ODPI odpi = *i; + for (Immediates::const_iterator j = immediates.begin(); + j != immediates.end(); ++j) { + int32_t imm = *j; + { + InstructionSelectorTester m; + m.Return(m.Projection( + 1, m.NewNode(odpi.op, m.Parameter(0), m.Int32Constant(imm)))); + m.SelectInstructions(); + CHECK_EQ(1, m.code.size()); + CHECK_EQ(odpi.arch_opcode, m.code[0]->arch_opcode()); + CHECK_EQ(kMode_Operand2_I, m.code[0]->addressing_mode()); + CHECK_EQ(kFlags_set, m.code[0]->flags_mode()); + CHECK_EQ(kOverflow, m.code[0]->flags_condition()); + CHECK_EQ(2, m.code[0]->InputCount()); + CHECK_EQ(imm, m.ToInt32(m.code[0]->InputAt(1))); + CHECK_LE(1, m.code[0]->OutputCount()); + } + { + InstructionSelectorTester m; + m.Return(m.Projection( + 1, m.NewNode(odpi.op, m.Int32Constant(imm), m.Parameter(0)))); + m.SelectInstructions(); + CHECK_EQ(1, m.code.size()); + CHECK_EQ(odpi.reverse_arch_opcode, m.code[0]->arch_opcode()); + CHECK_EQ(kMode_Operand2_I, m.code[0]->addressing_mode()); + CHECK_EQ(kFlags_set, m.code[0]->flags_mode()); + CHECK_EQ(kOverflow, m.code[0]->flags_condition()); + CHECK_EQ(2, m.code[0]->InputCount()); + CHECK_EQ(imm, m.ToInt32(m.code[0]->InputAt(1))); + CHECK_LE(1, m.code[0]->OutputCount()); + } + { + InstructionSelectorTester m; + m.Return(m.Projection( + 0, m.NewNode(odpi.op, m.Parameter(0), m.Int32Constant(imm)))); + m.SelectInstructions(); + CHECK_EQ(1, m.code.size()); + CHECK_EQ(odpi.arch_opcode, m.code[0]->arch_opcode()); + CHECK_EQ(kMode_Operand2_I, m.code[0]->addressing_mode()); + CHECK_EQ(kFlags_none, m.code[0]->flags_mode()); + CHECK_EQ(2, m.code[0]->InputCount()); + CHECK_EQ(imm, m.ToInt32(m.code[0]->InputAt(1))); + CHECK_LE(1, m.code[0]->OutputCount()); + } + { + InstructionSelectorTester m; + m.Return(m.Projection( + 0, m.NewNode(odpi.op, m.Int32Constant(imm), m.Parameter(0)))); + m.SelectInstructions(); + CHECK_EQ(1, m.code.size()); + CHECK_EQ(odpi.reverse_arch_opcode, m.code[0]->arch_opcode()); + CHECK_EQ(kMode_Operand2_I, m.code[0]->addressing_mode()); + CHECK_EQ(kFlags_none, m.code[0]->flags_mode()); + CHECK_EQ(2, m.code[0]->InputCount()); + CHECK_EQ(imm, m.ToInt32(m.code[0]->InputAt(1))); + CHECK_LE(1, m.code[0]->OutputCount()); + } + { + InstructionSelectorTester m; + Node* node = m.NewNode(odpi.op, m.Parameter(0), m.Int32Constant(imm)); + m.Return(m.Word32Equal(m.Projection(0, node), m.Projection(1, node))); + 
m.SelectInstructions(); + CHECK_LE(1, m.code.size()); + CHECK_EQ(odpi.arch_opcode, m.code[0]->arch_opcode()); + CHECK_EQ(kMode_Operand2_I, m.code[0]->addressing_mode()); + CHECK_EQ(kFlags_set, m.code[0]->flags_mode()); + CHECK_EQ(kOverflow, m.code[0]->flags_condition()); + CHECK_EQ(2, m.code[0]->InputCount()); + CHECK_EQ(imm, m.ToInt32(m.code[0]->InputAt(1))); + CHECK_EQ(2, m.code[0]->OutputCount()); + } + { + InstructionSelectorTester m; + Node* node = m.NewNode(odpi.op, m.Int32Constant(imm), m.Parameter(0)); + m.Return(m.Word32Equal(m.Projection(0, node), m.Projection(1, node))); + m.SelectInstructions(); + CHECK_LE(1, m.code.size()); + CHECK_EQ(odpi.reverse_arch_opcode, m.code[0]->arch_opcode()); + CHECK_EQ(kMode_Operand2_I, m.code[0]->addressing_mode()); + CHECK_EQ(kFlags_set, m.code[0]->flags_mode()); + CHECK_EQ(kOverflow, m.code[0]->flags_condition()); + CHECK_EQ(2, m.code[0]->InputCount()); + CHECK_EQ(imm, m.ToInt32(m.code[0]->InputAt(1))); + CHECK_EQ(2, m.code[0]->OutputCount()); + } + } + } +} + + +TEST(InstructionSelectorODPIAndShiftP) { + ODPIs odpis; + Shifts shifts; + for (ODPIs::const_iterator i = odpis.begin(); i != odpis.end(); ++i) { + ODPI odpi = *i; + for (Shifts::const_iterator j = shifts.begin(); j != shifts.end(); ++j) { + Shift shift = *j; + { + InstructionSelectorTester m; + m.Return(m.Projection( + 1, m.NewNode(odpi.op, m.Parameter(0), + m.NewNode(shift.op, m.Parameter(1), m.Parameter(2))))); + m.SelectInstructions(); + CHECK_EQ(1, m.code.size()); + CHECK_EQ(odpi.arch_opcode, m.code[0]->arch_opcode()); + CHECK_EQ(shift.r_mode, m.code[0]->addressing_mode()); + CHECK_EQ(kFlags_set, m.code[0]->flags_mode()); + CHECK_EQ(kOverflow, m.code[0]->flags_condition()); + CHECK_EQ(3, m.code[0]->InputCount()); + CHECK_LE(1, m.code[0]->OutputCount()); + } + { + InstructionSelectorTester m; + m.Return(m.Projection( + 1, m.NewNode(odpi.op, + m.NewNode(shift.op, m.Parameter(0), m.Parameter(1)), + m.Parameter(2)))); + m.SelectInstructions(); + CHECK_EQ(1, m.code.size()); + CHECK_EQ(odpi.reverse_arch_opcode, m.code[0]->arch_opcode()); + CHECK_EQ(shift.r_mode, m.code[0]->addressing_mode()); + CHECK_EQ(kFlags_set, m.code[0]->flags_mode()); + CHECK_EQ(kOverflow, m.code[0]->flags_condition()); + CHECK_EQ(3, m.code[0]->InputCount()); + CHECK_LE(1, m.code[0]->OutputCount()); + } + { + InstructionSelectorTester m; + m.Return(m.Projection( + 0, m.NewNode(odpi.op, m.Parameter(0), + m.NewNode(shift.op, m.Parameter(1), m.Parameter(2))))); + m.SelectInstructions(); + CHECK_EQ(1, m.code.size()); + CHECK_EQ(odpi.arch_opcode, m.code[0]->arch_opcode()); + CHECK_EQ(shift.r_mode, m.code[0]->addressing_mode()); + CHECK_EQ(kFlags_none, m.code[0]->flags_mode()); + CHECK_EQ(3, m.code[0]->InputCount()); + CHECK_LE(1, m.code[0]->OutputCount()); + } + { + InstructionSelectorTester m; + m.Return(m.Projection( + 0, m.NewNode(odpi.op, + m.NewNode(shift.op, m.Parameter(0), m.Parameter(1)), + m.Parameter(2)))); + m.SelectInstructions(); + CHECK_EQ(1, m.code.size()); + CHECK_EQ(odpi.reverse_arch_opcode, m.code[0]->arch_opcode()); + CHECK_EQ(shift.r_mode, m.code[0]->addressing_mode()); + CHECK_EQ(kFlags_none, m.code[0]->flags_mode()); + CHECK_EQ(3, m.code[0]->InputCount()); + CHECK_LE(1, m.code[0]->OutputCount()); + } + { + InstructionSelectorTester m; + Node* node = + m.NewNode(odpi.op, m.Parameter(0), + m.NewNode(shift.op, m.Parameter(1), m.Parameter(2))); + m.Return(m.Word32Equal(m.Projection(0, node), m.Projection(1, node))); + m.SelectInstructions(); + CHECK_LE(1, m.code.size()); + CHECK_EQ(odpi.arch_opcode, 
m.code[0]->arch_opcode()); + CHECK_EQ(shift.r_mode, m.code[0]->addressing_mode()); + CHECK_EQ(kFlags_set, m.code[0]->flags_mode()); + CHECK_EQ(kOverflow, m.code[0]->flags_condition()); + CHECK_EQ(3, m.code[0]->InputCount()); + CHECK_EQ(2, m.code[0]->OutputCount()); + } + { + InstructionSelectorTester m; + Node* node = m.NewNode( + odpi.op, m.NewNode(shift.op, m.Parameter(0), m.Parameter(1)), + m.Parameter(2)); + m.Return(m.Word32Equal(m.Projection(0, node), m.Projection(1, node))); + m.SelectInstructions(); + CHECK_LE(1, m.code.size()); + CHECK_EQ(odpi.reverse_arch_opcode, m.code[0]->arch_opcode()); + CHECK_EQ(shift.r_mode, m.code[0]->addressing_mode()); + CHECK_EQ(kFlags_set, m.code[0]->flags_mode()); + CHECK_EQ(kOverflow, m.code[0]->flags_condition()); + CHECK_EQ(3, m.code[0]->InputCount()); + CHECK_EQ(2, m.code[0]->OutputCount()); + } + } + } +} + + +TEST(InstructionSelectorODPIAndShiftImm) { + ODPIs odpis; + Shifts shifts; + for (ODPIs::const_iterator i = odpis.begin(); i != odpis.end(); ++i) { + ODPI odpi = *i; + for (Shifts::const_iterator j = shifts.begin(); j != shifts.end(); ++j) { + Shift shift = *j; + for (int32_t imm = shift.i_low; imm <= shift.i_high; ++imm) { + { + InstructionSelectorTester m; + m.Return(m.Projection(1, m.NewNode(odpi.op, m.Parameter(0), + m.NewNode(shift.op, m.Parameter(1), + m.Int32Constant(imm))))); + m.SelectInstructions(); + CHECK_EQ(1, m.code.size()); + CHECK_EQ(odpi.arch_opcode, m.code[0]->arch_opcode()); + CHECK_EQ(shift.i_mode, m.code[0]->addressing_mode()); + CHECK_EQ(kFlags_set, m.code[0]->flags_mode()); + CHECK_EQ(kOverflow, m.code[0]->flags_condition()); + CHECK_EQ(3, m.code[0]->InputCount()); + CHECK_EQ(imm, m.ToInt32(m.code[0]->InputAt(2))); + CHECK_LE(1, m.code[0]->OutputCount()); + } + { + InstructionSelectorTester m; + m.Return(m.Projection( + 1, m.NewNode(odpi.op, m.NewNode(shift.op, m.Parameter(0), + m.Int32Constant(imm)), + m.Parameter(1)))); + m.SelectInstructions(); + CHECK_EQ(1, m.code.size()); + CHECK_EQ(odpi.reverse_arch_opcode, m.code[0]->arch_opcode()); + CHECK_EQ(shift.i_mode, m.code[0]->addressing_mode()); + CHECK_EQ(kFlags_set, m.code[0]->flags_mode()); + CHECK_EQ(kOverflow, m.code[0]->flags_condition()); + CHECK_EQ(3, m.code[0]->InputCount()); + CHECK_EQ(imm, m.ToInt32(m.code[0]->InputAt(2))); + CHECK_LE(1, m.code[0]->OutputCount()); + } + { + InstructionSelectorTester m; + m.Return(m.Projection(0, m.NewNode(odpi.op, m.Parameter(0), + m.NewNode(shift.op, m.Parameter(1), + m.Int32Constant(imm))))); + m.SelectInstructions(); + CHECK_EQ(1, m.code.size()); + CHECK_EQ(odpi.arch_opcode, m.code[0]->arch_opcode()); + CHECK_EQ(shift.i_mode, m.code[0]->addressing_mode()); + CHECK_EQ(kFlags_none, m.code[0]->flags_mode()); + CHECK_EQ(3, m.code[0]->InputCount()); + CHECK_EQ(imm, m.ToInt32(m.code[0]->InputAt(2))); + CHECK_LE(1, m.code[0]->OutputCount()); + } + { + InstructionSelectorTester m; + m.Return(m.Projection( + 0, m.NewNode(odpi.op, m.NewNode(shift.op, m.Parameter(0), + m.Int32Constant(imm)), + m.Parameter(1)))); + m.SelectInstructions(); + CHECK_EQ(1, m.code.size()); + CHECK_EQ(odpi.reverse_arch_opcode, m.code[0]->arch_opcode()); + CHECK_EQ(shift.i_mode, m.code[0]->addressing_mode()); + CHECK_EQ(kFlags_none, m.code[0]->flags_mode()); + CHECK_EQ(3, m.code[0]->InputCount()); + CHECK_EQ(imm, m.ToInt32(m.code[0]->InputAt(2))); + CHECK_LE(1, m.code[0]->OutputCount()); + } + { + InstructionSelectorTester m; + Node* node = m.NewNode( + odpi.op, m.Parameter(0), + m.NewNode(shift.op, m.Parameter(1), m.Int32Constant(imm))); + 
m.Return(m.Word32Equal(m.Projection(0, node), m.Projection(1, node))); + m.SelectInstructions(); + CHECK_LE(1, m.code.size()); + CHECK_EQ(odpi.arch_opcode, m.code[0]->arch_opcode()); + CHECK_EQ(shift.i_mode, m.code[0]->addressing_mode()); + CHECK_EQ(kFlags_set, m.code[0]->flags_mode()); + CHECK_EQ(kOverflow, m.code[0]->flags_condition()); + CHECK_EQ(3, m.code[0]->InputCount()); + CHECK_EQ(imm, m.ToInt32(m.code[0]->InputAt(2))); + CHECK_EQ(2, m.code[0]->OutputCount()); + } + { + InstructionSelectorTester m; + Node* node = m.NewNode(odpi.op, m.NewNode(shift.op, m.Parameter(0), + m.Int32Constant(imm)), + m.Parameter(1)); + m.Return(m.Word32Equal(m.Projection(0, node), m.Projection(1, node))); + m.SelectInstructions(); + CHECK_LE(1, m.code.size()); + CHECK_EQ(odpi.reverse_arch_opcode, m.code[0]->arch_opcode()); + CHECK_EQ(shift.i_mode, m.code[0]->addressing_mode()); + CHECK_EQ(kFlags_set, m.code[0]->flags_mode()); + CHECK_EQ(kOverflow, m.code[0]->flags_condition()); + CHECK_EQ(3, m.code[0]->InputCount()); + CHECK_EQ(imm, m.ToInt32(m.code[0]->InputAt(2))); + CHECK_EQ(2, m.code[0]->OutputCount()); + } + } + } + } +} + + +TEST(InstructionSelectorWord32AndAndWord32XorWithMinus1P) { + { + InstructionSelectorTester m; + m.Return(m.Word32And(m.Parameter(0), + m.Word32Xor(m.Int32Constant(-1), m.Parameter(1)))); + m.SelectInstructions(); + CHECK_EQ(1, m.code.size()); + CHECK_EQ(kArmBic, m.code[0]->arch_opcode()); + CHECK_EQ(kMode_Operand2_R, m.code[0]->addressing_mode()); + } + { + InstructionSelectorTester m; + m.Return(m.Word32And(m.Parameter(0), + m.Word32Xor(m.Parameter(1), m.Int32Constant(-1)))); + m.SelectInstructions(); + CHECK_EQ(1, m.code.size()); + CHECK_EQ(kArmBic, m.code[0]->arch_opcode()); + CHECK_EQ(kMode_Operand2_R, m.code[0]->addressing_mode()); + } + { + InstructionSelectorTester m; + m.Return(m.Word32And(m.Word32Xor(m.Int32Constant(-1), m.Parameter(0)), + m.Parameter(1))); + m.SelectInstructions(); + CHECK_EQ(1, m.code.size()); + CHECK_EQ(kArmBic, m.code[0]->arch_opcode()); + CHECK_EQ(kMode_Operand2_R, m.code[0]->addressing_mode()); + } + { + InstructionSelectorTester m; + m.Return(m.Word32And(m.Word32Xor(m.Parameter(0), m.Int32Constant(-1)), + m.Parameter(1))); + m.SelectInstructions(); + CHECK_EQ(1, m.code.size()); + CHECK_EQ(kArmBic, m.code[0]->arch_opcode()); + CHECK_EQ(kMode_Operand2_R, m.code[0]->addressing_mode()); + } +} + + +TEST(InstructionSelectorWord32AndAndWord32XorWithMinus1AndShiftP) { + Shifts shifts; + for (Shifts::const_iterator i = shifts.begin(); i != shifts.end(); ++i) { + Shift shift = *i; + { + InstructionSelectorTester m; + m.Return(m.Word32And( + m.Parameter(0), + m.Word32Xor(m.Int32Constant(-1), + m.NewNode(shift.op, m.Parameter(1), m.Parameter(2))))); + m.SelectInstructions(); + CHECK_EQ(1, m.code.size()); + CHECK_EQ(kArmBic, m.code[0]->arch_opcode()); + CHECK_EQ(shift.r_mode, m.code[0]->addressing_mode()); + } + { + InstructionSelectorTester m; + m.Return(m.Word32And( + m.Parameter(0), + m.Word32Xor(m.NewNode(shift.op, m.Parameter(1), m.Parameter(2)), + m.Int32Constant(-1)))); + m.SelectInstructions(); + CHECK_EQ(1, m.code.size()); + CHECK_EQ(kArmBic, m.code[0]->arch_opcode()); + CHECK_EQ(shift.r_mode, m.code[0]->addressing_mode()); + } + { + InstructionSelectorTester m; + m.Return(m.Word32And( + m.Word32Xor(m.Int32Constant(-1), + m.NewNode(shift.op, m.Parameter(0), m.Parameter(1))), + m.Parameter(2))); + m.SelectInstructions(); + CHECK_EQ(1, m.code.size()); + CHECK_EQ(kArmBic, m.code[0]->arch_opcode()); + CHECK_EQ(shift.r_mode, 
m.code[0]->addressing_mode()); + } + { + InstructionSelectorTester m; + m.Return(m.Word32And( + m.Word32Xor(m.NewNode(shift.op, m.Parameter(0), m.Parameter(1)), + m.Int32Constant(-1)), + m.Parameter(2))); + m.SelectInstructions(); + CHECK_EQ(1, m.code.size()); + CHECK_EQ(kArmBic, m.code[0]->arch_opcode()); + CHECK_EQ(shift.r_mode, m.code[0]->addressing_mode()); + } + } +} + + +TEST(InstructionSelectorWord32XorWithMinus1P) { + { + InstructionSelectorTester m; + m.Return(m.Word32Xor(m.Int32Constant(-1), m.Parameter(0))); + m.SelectInstructions(); + CHECK_EQ(1, m.code.size()); + CHECK_EQ(kArmMvn, m.code[0]->arch_opcode()); + CHECK_EQ(kMode_Operand2_R, m.code[0]->addressing_mode()); + } + { + InstructionSelectorTester m; + m.Return(m.Word32Xor(m.Parameter(0), m.Int32Constant(-1))); + m.SelectInstructions(); + CHECK_EQ(1, m.code.size()); + CHECK_EQ(kArmMvn, m.code[0]->arch_opcode()); + CHECK_EQ(kMode_Operand2_R, m.code[0]->addressing_mode()); + } +} + + +TEST(InstructionSelectorWord32XorWithMinus1AndShiftP) { + Shifts shifts; + for (Shifts::const_iterator i = shifts.begin(); i != shifts.end(); ++i) { + Shift shift = *i; + { + InstructionSelectorTester m; + m.Return( + m.Word32Xor(m.Int32Constant(-1), + m.NewNode(shift.op, m.Parameter(0), m.Parameter(1)))); + m.SelectInstructions(); + CHECK_EQ(1, m.code.size()); + CHECK_EQ(kArmMvn, m.code[0]->arch_opcode()); + CHECK_EQ(shift.r_mode, m.code[0]->addressing_mode()); + } + { + InstructionSelectorTester m; + m.Return(m.Word32Xor(m.NewNode(shift.op, m.Parameter(0), m.Parameter(1)), + m.Int32Constant(-1))); + m.SelectInstructions(); + CHECK_EQ(1, m.code.size()); + CHECK_EQ(kArmMvn, m.code[0]->arch_opcode()); + CHECK_EQ(shift.r_mode, m.code[0]->addressing_mode()); + } + } +} + + +TEST(InstructionSelectorShiftP) { + Shifts shifts; + for (Shifts::const_iterator i = shifts.begin(); i != shifts.end(); ++i) { + Shift shift = *i; + InstructionSelectorTester m; + m.Return(m.NewNode(shift.op, m.Parameter(0), m.Parameter(1))); + m.SelectInstructions(); + CHECK_EQ(1, m.code.size()); + CHECK_EQ(kArmMov, m.code[0]->arch_opcode()); + CHECK_EQ(shift.r_mode, m.code[0]->addressing_mode()); + CHECK_EQ(2, m.code[0]->InputCount()); + } +} + + +TEST(InstructionSelectorShiftImm) { + Shifts shifts; + for (Shifts::const_iterator i = shifts.begin(); i != shifts.end(); ++i) { + Shift shift = *i; + for (int32_t imm = shift.i_low; imm <= shift.i_high; ++imm) { + InstructionSelectorTester m; + m.Return(m.NewNode(shift.op, m.Parameter(0), m.Int32Constant(imm))); + m.SelectInstructions(); + CHECK_EQ(1, m.code.size()); + CHECK_EQ(kArmMov, m.code[0]->arch_opcode()); + CHECK_EQ(shift.i_mode, m.code[0]->addressing_mode()); + CHECK_EQ(2, m.code[0]->InputCount()); + CHECK_EQ(imm, m.ToInt32(m.code[0]->InputAt(1))); + } + } +} + + +TEST(InstructionSelectorRotateRightP) { + { + InstructionSelectorTester m; + Node* value = m.Parameter(0); + Node* shift = m.Parameter(1); + m.Return( + m.Word32Or(m.Word32Shr(value, shift), + m.Word32Shl(value, m.Int32Sub(m.Int32Constant(32), shift)))); + m.SelectInstructions(); + CHECK_EQ(1, m.code.size()); + CHECK_EQ(kArmMov, m.code[0]->arch_opcode()); + CHECK_EQ(kMode_Operand2_R_ROR_R, m.code[0]->addressing_mode()); + CHECK_EQ(2, m.code[0]->InputCount()); + } + { + InstructionSelectorTester m; + Node* value = m.Parameter(0); + Node* shift = m.Parameter(1); + m.Return( + m.Word32Or(m.Word32Shl(value, m.Int32Sub(m.Int32Constant(32), shift)), + m.Word32Shr(value, shift))); + m.SelectInstructions(); + CHECK_EQ(1, m.code.size()); + CHECK_EQ(kArmMov, 
m.code[0]->arch_opcode()); + CHECK_EQ(kMode_Operand2_R_ROR_R, m.code[0]->addressing_mode()); + CHECK_EQ(2, m.code[0]->InputCount()); + } +} + + +TEST(InstructionSelectorRotateRightImm) { + FOR_INPUTS(uint32_t, ror, i) { + uint32_t shift = *i; + { + InstructionSelectorTester m; + Node* value = m.Parameter(0); + m.Return(m.Word32Or(m.Word32Shr(value, m.Int32Constant(shift)), + m.Word32Shl(value, m.Int32Constant(32 - shift)))); + m.SelectInstructions(); + CHECK_EQ(1, m.code.size()); + CHECK_EQ(kArmMov, m.code[0]->arch_opcode()); + CHECK_EQ(kMode_Operand2_R_ROR_I, m.code[0]->addressing_mode()); + CHECK_EQ(2, m.code[0]->InputCount()); + CHECK_EQ(shift, m.ToInt32(m.code[0]->InputAt(1))); + } + { + InstructionSelectorTester m; + Node* value = m.Parameter(0); + m.Return(m.Word32Or(m.Word32Shl(value, m.Int32Constant(32 - shift)), + m.Word32Shr(value, m.Int32Constant(shift)))); + m.SelectInstructions(); + CHECK_EQ(1, m.code.size()); + CHECK_EQ(kArmMov, m.code[0]->arch_opcode()); + CHECK_EQ(kMode_Operand2_R_ROR_I, m.code[0]->addressing_mode()); + CHECK_EQ(2, m.code[0]->InputCount()); + CHECK_EQ(shift, m.ToInt32(m.code[0]->InputAt(1))); + } + } +} + + +TEST(InstructionSelectorInt32MulP) { + InstructionSelectorTester m; + m.Return(m.Int32Mul(m.Parameter(0), m.Parameter(1))); + m.SelectInstructions(); + CHECK_EQ(1, m.code.size()); + CHECK_EQ(kArmMul, m.code[0]->arch_opcode()); +} + + +TEST(InstructionSelectorInt32MulImm) { + // x * (2^k + 1) -> (x << k) + x + for (int k = 1; k < 31; ++k) { + InstructionSelectorTester m; + m.Return(m.Int32Mul(m.Parameter(0), m.Int32Constant((1 << k) + 1))); + m.SelectInstructions(); + CHECK_EQ(1, m.code.size()); + CHECK_EQ(kArmAdd, m.code[0]->arch_opcode()); + CHECK_EQ(kMode_Operand2_R_LSL_I, m.code[0]->addressing_mode()); + } + // (2^k + 1) * x -> (x << k) + x + for (int k = 1; k < 31; ++k) { + InstructionSelectorTester m; + m.Return(m.Int32Mul(m.Int32Constant((1 << k) + 1), m.Parameter(0))); + m.SelectInstructions(); + CHECK_EQ(1, m.code.size()); + CHECK_EQ(kArmAdd, m.code[0]->arch_opcode()); + CHECK_EQ(kMode_Operand2_R_LSL_I, m.code[0]->addressing_mode()); + } + // x * (2^k - 1) -> (x << k) - x + for (int k = 3; k < 31; ++k) { + InstructionSelectorTester m; + m.Return(m.Int32Mul(m.Parameter(0), m.Int32Constant((1 << k) - 1))); + m.SelectInstructions(); + CHECK_EQ(1, m.code.size()); + CHECK_EQ(kArmRsb, m.code[0]->arch_opcode()); + CHECK_EQ(kMode_Operand2_R_LSL_I, m.code[0]->addressing_mode()); + } + // (2^k - 1) * x -> (x << k) - x + for (int k = 3; k < 31; ++k) { + InstructionSelectorTester m; + m.Return(m.Int32Mul(m.Int32Constant((1 << k) - 1), m.Parameter(0))); + m.SelectInstructions(); + CHECK_EQ(1, m.code.size()); + CHECK_EQ(kArmRsb, m.code[0]->arch_opcode()); + CHECK_EQ(kMode_Operand2_R_LSL_I, m.code[0]->addressing_mode()); + } +} + + +TEST(InstructionSelectorWord32AndImm_ARMv7) { + for (uint32_t width = 1; width <= 32; ++width) { + InstructionSelectorTester m; + m.Return(m.Word32And(m.Parameter(0), + m.Int32Constant(0xffffffffu >> (32 - width)))); + m.SelectInstructions(ARMv7); + CHECK_EQ(1, m.code.size()); + CHECK_EQ(kArmUbfx, m.code[0]->arch_opcode()); + CHECK_EQ(3, m.code[0]->InputCount()); + CHECK_EQ(0, m.ToInt32(m.code[0]->InputAt(1))); + CHECK_EQ(width, m.ToInt32(m.code[0]->InputAt(2))); + } + for (uint32_t lsb = 0; lsb <= 31; ++lsb) { + for (uint32_t width = 1; width < 32 - lsb; ++width) { + uint32_t msk = ~((0xffffffffu >> (32 - width)) << lsb); + InstructionSelectorTester m; + m.Return(m.Word32And(m.Parameter(0), m.Int32Constant(msk))); +
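// Editorial aside, not part of the original patch: the Int32MulImm cases
// above rely on the strength-reduction identities x * (2^k + 1) ==
// (x << k) + x (one ADD with an LSL-shifted operand) and x * (2^k - 1) ==
// (x << k) - x (one RSB). A minimal in-place sanity check of the
// arithmetic, in unsigned math so that wraparound is well defined:
{
  uint32_t x = 42;
  for (int k = 1; k < 31; ++k) {
    CHECK_EQ(x * ((1u << k) + 1), (x << k) + x);  // the kArmAdd pattern
  }
  for (int k = 3; k < 31; ++k) {
    CHECK_EQ(x * ((1u << k) - 1), (x << k) - x);  // the kArmRsb pattern
  }
}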
m.SelectInstructions(ARMv7); + CHECK_EQ(1, m.code.size()); + CHECK_EQ(kArmBfc, m.code[0]->arch_opcode()); + CHECK_EQ(1, m.code[0]->OutputCount()); + CHECK(UnallocatedOperand::cast(m.code[0]->Output()) + ->HasSameAsInputPolicy()); + CHECK_EQ(3, m.code[0]->InputCount()); + CHECK_EQ(lsb, m.ToInt32(m.code[0]->InputAt(1))); + CHECK_EQ(width, m.ToInt32(m.code[0]->InputAt(2))); + } + } +} + + +TEST(InstructionSelectorWord32AndAndWord32ShrImm_ARMv7) { + for (uint32_t lsb = 0; lsb <= 31; ++lsb) { + for (uint32_t width = 1; width <= 32 - lsb; ++width) { + { + InstructionSelectorTester m; + m.Return(m.Word32And(m.Word32Shr(m.Parameter(0), m.Int32Constant(lsb)), + m.Int32Constant(0xffffffffu >> (32 - width)))); + m.SelectInstructions(ARMv7); + CHECK_EQ(1, m.code.size()); + CHECK_EQ(kArmUbfx, m.code[0]->arch_opcode()); + CHECK_EQ(3, m.code[0]->InputCount()); + CHECK_EQ(lsb, m.ToInt32(m.code[0]->InputAt(1))); + CHECK_EQ(width, m.ToInt32(m.code[0]->InputAt(2))); + } + { + InstructionSelectorTester m; + m.Return( + m.Word32And(m.Int32Constant(0xffffffffu >> (32 - width)), + m.Word32Shr(m.Parameter(0), m.Int32Constant(lsb)))); + m.SelectInstructions(ARMv7); + CHECK_EQ(1, m.code.size()); + CHECK_EQ(kArmUbfx, m.code[0]->arch_opcode()); + CHECK_EQ(3, m.code[0]->InputCount()); + CHECK_EQ(lsb, m.ToInt32(m.code[0]->InputAt(1))); + CHECK_EQ(width, m.ToInt32(m.code[0]->InputAt(2))); + } + } + } +} + + +TEST(InstructionSelectorWord32ShrAndWord32AndImm_ARMv7) { + for (uint32_t lsb = 0; lsb <= 31; ++lsb) { + for (uint32_t width = 1; width <= 32 - lsb; ++width) { + uint32_t max = 1 << lsb; + if (max > static_cast<uint32_t>(kMaxInt)) max -= 1; + uint32_t jnk = CcTest::random_number_generator()->NextInt(max); + uint32_t msk = ((0xffffffffu >> (32 - width)) << lsb) | jnk; + { + InstructionSelectorTester m; + m.Return(m.Word32Shr(m.Word32And(m.Parameter(0), m.Int32Constant(msk)), + m.Int32Constant(lsb))); + m.SelectInstructions(ARMv7); + CHECK_EQ(1, m.code.size()); + CHECK_EQ(kArmUbfx, m.code[0]->arch_opcode()); + CHECK_EQ(3, m.code[0]->InputCount()); + CHECK_EQ(lsb, m.ToInt32(m.code[0]->InputAt(1))); + CHECK_EQ(width, m.ToInt32(m.code[0]->InputAt(2))); + } + { + InstructionSelectorTester m; + m.Return(m.Word32Shr(m.Word32And(m.Int32Constant(msk), m.Parameter(0)), + m.Int32Constant(lsb))); + m.SelectInstructions(ARMv7); + CHECK_EQ(1, m.code.size()); + CHECK_EQ(kArmUbfx, m.code[0]->arch_opcode()); + CHECK_EQ(3, m.code[0]->InputCount()); + CHECK_EQ(lsb, m.ToInt32(m.code[0]->InputAt(1))); + CHECK_EQ(width, m.ToInt32(m.code[0]->InputAt(2))); + } + } + } +} + + +TEST(InstructionSelectorInt32SubAndInt32MulP) { + InstructionSelectorTester m; + m.Return( + m.Int32Sub(m.Parameter(0), m.Int32Mul(m.Parameter(1), m.Parameter(2)))); + m.SelectInstructions(); + CHECK_EQ(2, m.code.size()); + CHECK_EQ(kArmMul, m.code[0]->arch_opcode()); + CHECK_EQ(1, m.code[0]->OutputCount()); + CHECK_EQ(kArmSub, m.code[1]->arch_opcode()); + CHECK_EQ(2, m.code[1]->InputCount()); + CheckSameVreg(m.code[0]->Output(), m.code[1]->InputAt(1)); +} + + +TEST(InstructionSelectorInt32SubAndInt32MulP_MLS) { + InstructionSelectorTester m; + m.Return( + m.Int32Sub(m.Parameter(0), m.Int32Mul(m.Parameter(1), m.Parameter(2)))); + m.SelectInstructions(MLS); + CHECK_EQ(1, m.code.size()); + CHECK_EQ(kArmMls, m.code[0]->arch_opcode()); +} + + +TEST(InstructionSelectorInt32DivP) { + InstructionSelectorTester m; + m.Return(m.Int32Div(m.Parameter(0), m.Parameter(1))); + m.SelectInstructions(); + CHECK_EQ(4, m.code.size()); + CHECK_EQ(kArmVcvtF64S32, m.code[0]->arch_opcode()); 
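// Editorial aside, not part of the original patch: the Word32And/Word32Shr
// cases tagged _ARMv7 above encode UBFX as (lsb, width) operands; the
// pattern being matched is "extract width bits starting at lsb", i.e.
// (x >> lsb) & (0xffffffff >> (32 - width)). A bit-by-bit sanity check of
// that mask arithmetic:
{
  uint32_t x = 0xdeadbeefu;
  for (uint32_t lsb = 0; lsb <= 31; ++lsb) {
    for (uint32_t width = 1; width <= 32 - lsb; ++width) {
      uint32_t expected = 0;
      for (uint32_t b = 0; b < width; ++b) {
        expected |= ((x >> (lsb + b)) & 1u) << b;  // reference extraction
      }
      CHECK_EQ(expected, (x >> lsb) & (0xffffffffu >> (32 - width)));
    }
  }
}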
+ CHECK_EQ(1, m.code[0]->OutputCount()); + CHECK_EQ(kArmVcvtF64S32, m.code[1]->arch_opcode()); + CHECK_EQ(1, m.code[1]->OutputCount()); + CHECK_EQ(kArmVdivF64, m.code[2]->arch_opcode()); + CHECK_EQ(2, m.code[2]->InputCount()); + CHECK_EQ(1, m.code[2]->OutputCount()); + CheckSameVreg(m.code[0]->Output(), m.code[2]->InputAt(0)); + CheckSameVreg(m.code[1]->Output(), m.code[2]->InputAt(1)); + CHECK_EQ(kArmVcvtS32F64, m.code[3]->arch_opcode()); + CHECK_EQ(1, m.code[3]->InputCount()); + CheckSameVreg(m.code[2]->Output(), m.code[3]->InputAt(0)); +} + + +TEST(InstructionSelectorInt32DivP_SUDIV) { + InstructionSelectorTester m; + m.Return(m.Int32Div(m.Parameter(0), m.Parameter(1))); + m.SelectInstructions(SUDIV); + CHECK_EQ(1, m.code.size()); + CHECK_EQ(kArmSdiv, m.code[0]->arch_opcode()); +} + + +TEST(InstructionSelectorInt32UDivP) { + InstructionSelectorTester m; + m.Return(m.Int32UDiv(m.Parameter(0), m.Parameter(1))); + m.SelectInstructions(); + CHECK_EQ(4, m.code.size()); + CHECK_EQ(kArmVcvtF64U32, m.code[0]->arch_opcode()); + CHECK_EQ(1, m.code[0]->OutputCount()); + CHECK_EQ(kArmVcvtF64U32, m.code[1]->arch_opcode()); + CHECK_EQ(1, m.code[1]->OutputCount()); + CHECK_EQ(kArmVdivF64, m.code[2]->arch_opcode()); + CHECK_EQ(2, m.code[2]->InputCount()); + CHECK_EQ(1, m.code[2]->OutputCount()); + CheckSameVreg(m.code[0]->Output(), m.code[2]->InputAt(0)); + CheckSameVreg(m.code[1]->Output(), m.code[2]->InputAt(1)); + CHECK_EQ(kArmVcvtU32F64, m.code[3]->arch_opcode()); + CHECK_EQ(1, m.code[3]->InputCount()); + CheckSameVreg(m.code[2]->Output(), m.code[3]->InputAt(0)); +} + + +TEST(InstructionSelectorInt32UDivP_SUDIV) { + InstructionSelectorTester m; + m.Return(m.Int32UDiv(m.Parameter(0), m.Parameter(1))); + m.SelectInstructions(SUDIV); + CHECK_EQ(1, m.code.size()); + CHECK_EQ(kArmUdiv, m.code[0]->arch_opcode()); +} + + +TEST(InstructionSelectorInt32ModP) { + InstructionSelectorTester m; + m.Return(m.Int32Mod(m.Parameter(0), m.Parameter(1))); + m.SelectInstructions(); + CHECK_EQ(6, m.code.size()); + CHECK_EQ(kArmVcvtF64S32, m.code[0]->arch_opcode()); + CHECK_EQ(1, m.code[0]->OutputCount()); + CHECK_EQ(kArmVcvtF64S32, m.code[1]->arch_opcode()); + CHECK_EQ(1, m.code[1]->OutputCount()); + CHECK_EQ(kArmVdivF64, m.code[2]->arch_opcode()); + CHECK_EQ(2, m.code[2]->InputCount()); + CHECK_EQ(1, m.code[2]->OutputCount()); + CheckSameVreg(m.code[0]->Output(), m.code[2]->InputAt(0)); + CheckSameVreg(m.code[1]->Output(), m.code[2]->InputAt(1)); + CHECK_EQ(kArmVcvtS32F64, m.code[3]->arch_opcode()); + CHECK_EQ(1, m.code[3]->InputCount()); + CheckSameVreg(m.code[2]->Output(), m.code[3]->InputAt(0)); + CHECK_EQ(kArmMul, m.code[4]->arch_opcode()); + CHECK_EQ(1, m.code[4]->OutputCount()); + CHECK_EQ(2, m.code[4]->InputCount()); + CheckSameVreg(m.code[3]->Output(), m.code[4]->InputAt(0)); + CheckSameVreg(m.code[1]->InputAt(0), m.code[4]->InputAt(1)); + CHECK_EQ(kArmSub, m.code[5]->arch_opcode()); + CHECK_EQ(1, m.code[5]->OutputCount()); + CHECK_EQ(2, m.code[5]->InputCount()); + CheckSameVreg(m.code[0]->InputAt(0), m.code[5]->InputAt(0)); + CheckSameVreg(m.code[4]->Output(), m.code[5]->InputAt(1)); +} + + +TEST(InstructionSelectorInt32ModP_SUDIV) { + InstructionSelectorTester m; + m.Return(m.Int32Mod(m.Parameter(0), m.Parameter(1))); + m.SelectInstructions(SUDIV); + CHECK_EQ(3, m.code.size()); + CHECK_EQ(kArmSdiv, m.code[0]->arch_opcode()); + CHECK_EQ(1, m.code[0]->OutputCount()); + CHECK_EQ(2, m.code[0]->InputCount()); + CHECK_EQ(kArmMul, m.code[1]->arch_opcode()); + CHECK_EQ(1, m.code[1]->OutputCount()); + CHECK_EQ(2, 
m.code[1]->InputCount()); + CheckSameVreg(m.code[0]->Output(), m.code[1]->InputAt(0)); + CheckSameVreg(m.code[0]->InputAt(1), m.code[1]->InputAt(1)); + CHECK_EQ(kArmSub, m.code[2]->arch_opcode()); + CHECK_EQ(1, m.code[2]->OutputCount()); + CHECK_EQ(2, m.code[2]->InputCount()); + CheckSameVreg(m.code[0]->InputAt(0), m.code[2]->InputAt(0)); + CheckSameVreg(m.code[1]->Output(), m.code[2]->InputAt(1)); +} + + +TEST(InstructionSelectorInt32ModP_MLS_SUDIV) { + InstructionSelectorTester m; + m.Return(m.Int32Mod(m.Parameter(0), m.Parameter(1))); + m.SelectInstructions(MLS, SUDIV); + CHECK_EQ(2, m.code.size()); + CHECK_EQ(kArmSdiv, m.code[0]->arch_opcode()); + CHECK_EQ(1, m.code[0]->OutputCount()); + CHECK_EQ(2, m.code[0]->InputCount()); + CHECK_EQ(kArmMls, m.code[1]->arch_opcode()); + CHECK_EQ(1, m.code[1]->OutputCount()); + CHECK_EQ(3, m.code[1]->InputCount()); + CheckSameVreg(m.code[0]->Output(), m.code[1]->InputAt(0)); + CheckSameVreg(m.code[0]->InputAt(1), m.code[1]->InputAt(1)); + CheckSameVreg(m.code[0]->InputAt(0), m.code[1]->InputAt(2)); +} + + +TEST(InstructionSelectorInt32UModP) { + InstructionSelectorTester m; + m.Return(m.Int32UMod(m.Parameter(0), m.Parameter(1))); + m.SelectInstructions(); + CHECK_EQ(6, m.code.size()); + CHECK_EQ(kArmVcvtF64U32, m.code[0]->arch_opcode()); + CHECK_EQ(1, m.code[0]->OutputCount()); + CHECK_EQ(kArmVcvtF64U32, m.code[1]->arch_opcode()); + CHECK_EQ(1, m.code[1]->OutputCount()); + CHECK_EQ(kArmVdivF64, m.code[2]->arch_opcode()); + CHECK_EQ(2, m.code[2]->InputCount()); + CHECK_EQ(1, m.code[2]->OutputCount()); + CheckSameVreg(m.code[0]->Output(), m.code[2]->InputAt(0)); + CheckSameVreg(m.code[1]->Output(), m.code[2]->InputAt(1)); + CHECK_EQ(kArmVcvtU32F64, m.code[3]->arch_opcode()); + CHECK_EQ(1, m.code[3]->InputCount()); + CheckSameVreg(m.code[2]->Output(), m.code[3]->InputAt(0)); + CHECK_EQ(kArmMul, m.code[4]->arch_opcode()); + CHECK_EQ(1, m.code[4]->OutputCount()); + CHECK_EQ(2, m.code[4]->InputCount()); + CheckSameVreg(m.code[3]->Output(), m.code[4]->InputAt(0)); + CheckSameVreg(m.code[1]->InputAt(0), m.code[4]->InputAt(1)); + CHECK_EQ(kArmSub, m.code[5]->arch_opcode()); + CHECK_EQ(1, m.code[5]->OutputCount()); + CHECK_EQ(2, m.code[5]->InputCount()); + CheckSameVreg(m.code[0]->InputAt(0), m.code[5]->InputAt(0)); + CheckSameVreg(m.code[4]->Output(), m.code[5]->InputAt(1)); +} + + +TEST(InstructionSelectorInt32UModP_SUDIV) { + InstructionSelectorTester m; + m.Return(m.Int32UMod(m.Parameter(0), m.Parameter(1))); + m.SelectInstructions(SUDIV); + CHECK_EQ(3, m.code.size()); + CHECK_EQ(kArmUdiv, m.code[0]->arch_opcode()); + CHECK_EQ(1, m.code[0]->OutputCount()); + CHECK_EQ(2, m.code[0]->InputCount()); + CHECK_EQ(kArmMul, m.code[1]->arch_opcode()); + CHECK_EQ(1, m.code[1]->OutputCount()); + CHECK_EQ(2, m.code[1]->InputCount()); + CheckSameVreg(m.code[0]->Output(), m.code[1]->InputAt(0)); + CheckSameVreg(m.code[0]->InputAt(1), m.code[1]->InputAt(1)); + CHECK_EQ(kArmSub, m.code[2]->arch_opcode()); + CHECK_EQ(1, m.code[2]->OutputCount()); + CHECK_EQ(2, m.code[2]->InputCount()); + CheckSameVreg(m.code[0]->InputAt(0), m.code[2]->InputAt(0)); + CheckSameVreg(m.code[1]->Output(), m.code[2]->InputAt(1)); +} + + +TEST(InstructionSelectorInt32UModP_MLS_SUDIV) { + InstructionSelectorTester m; + m.Return(m.Int32UMod(m.Parameter(0), m.Parameter(1))); + m.SelectInstructions(MLS, SUDIV); + CHECK_EQ(2, m.code.size()); + CHECK_EQ(kArmUdiv, m.code[0]->arch_opcode()); + CHECK_EQ(1, m.code[0]->OutputCount()); + CHECK_EQ(2, m.code[0]->InputCount()); + CHECK_EQ(kArmMls, 
m.code[1]->arch_opcode()); + CHECK_EQ(1, m.code[1]->OutputCount()); + CHECK_EQ(3, m.code[1]->InputCount()); + CheckSameVreg(m.code[0]->Output(), m.code[1]->InputAt(0)); + CheckSameVreg(m.code[0]->InputAt(1), m.code[1]->InputAt(1)); + CheckSameVreg(m.code[0]->InputAt(0), m.code[1]->InputAt(2)); +} + + +TEST(InstructionSelectorWord32EqualP) { + InstructionSelectorTester m; + m.Return(m.Word32Equal(m.Parameter(0), m.Parameter(1))); + m.SelectInstructions(); + CHECK_EQ(1, m.code.size()); + CHECK_EQ(kArmCmp, m.code[0]->arch_opcode()); + CHECK_EQ(kMode_Operand2_R, m.code[0]->addressing_mode()); + CHECK_EQ(kFlags_set, m.code[0]->flags_mode()); + CHECK_EQ(kEqual, m.code[0]->flags_condition()); +} + + +TEST(InstructionSelectorWord32EqualImm) { + Immediates immediates; + for (Immediates::const_iterator i = immediates.begin(); i != immediates.end(); + ++i) { + int32_t imm = *i; + { + InstructionSelectorTester m; + m.Return(m.Word32Equal(m.Parameter(0), m.Int32Constant(imm))); + m.SelectInstructions(); + CHECK_EQ(1, m.code.size()); + if (imm == 0) { + CHECK_EQ(kArmTst, m.code[0]->arch_opcode()); + CHECK_EQ(kMode_Operand2_R, m.code[0]->addressing_mode()); + CHECK_EQ(2, m.code[0]->InputCount()); + CheckSameVreg(m.code[0]->InputAt(0), m.code[0]->InputAt(1)); + } else { + CHECK_EQ(kArmCmp, m.code[0]->arch_opcode()); + CHECK_EQ(kMode_Operand2_I, m.code[0]->addressing_mode()); + } + CHECK_EQ(kFlags_set, m.code[0]->flags_mode()); + CHECK_EQ(kEqual, m.code[0]->flags_condition()); + } + { + InstructionSelectorTester m; + m.Return(m.Word32Equal(m.Int32Constant(imm), m.Parameter(0))); + m.SelectInstructions(); + CHECK_EQ(1, m.code.size()); + if (imm == 0) { + CHECK_EQ(kArmTst, m.code[0]->arch_opcode()); + CHECK_EQ(kMode_Operand2_R, m.code[0]->addressing_mode()); + CHECK_EQ(2, m.code[0]->InputCount()); + CheckSameVreg(m.code[0]->InputAt(0), m.code[0]->InputAt(1)); + } else { + CHECK_EQ(kArmCmp, m.code[0]->arch_opcode()); + CHECK_EQ(kMode_Operand2_I, m.code[0]->addressing_mode()); + } + CHECK_EQ(kFlags_set, m.code[0]->flags_mode()); + CHECK_EQ(kEqual, m.code[0]->flags_condition()); + } + } +} + + +TEST(InstructionSelectorWord32EqualAndDPIP) { + DPIs dpis; + for (DPIs::const_iterator i = dpis.begin(); i != dpis.end(); ++i) { + DPI dpi = *i; + { + InstructionSelectorTester m; + m.Return(m.Word32Equal(m.NewNode(dpi.op, m.Parameter(0), m.Parameter(1)), + m.Int32Constant(0))); + m.SelectInstructions(); + CHECK_EQ(1, m.code.size()); + CHECK_EQ(dpi.test_arch_opcode, m.code[0]->arch_opcode()); + CHECK_EQ(kMode_Operand2_R, m.code[0]->addressing_mode()); + CHECK_EQ(kFlags_set, m.code[0]->flags_mode()); + CHECK_EQ(kEqual, m.code[0]->flags_condition()); + } + { + InstructionSelectorTester m; + m.Return( + m.Word32Equal(m.Int32Constant(0), + m.NewNode(dpi.op, m.Parameter(0), m.Parameter(1)))); + m.SelectInstructions(); + CHECK_EQ(1, m.code.size()); + CHECK_EQ(dpi.test_arch_opcode, m.code[0]->arch_opcode()); + CHECK_EQ(kMode_Operand2_R, m.code[0]->addressing_mode()); + CHECK_EQ(kFlags_set, m.code[0]->flags_mode()); + CHECK_EQ(kEqual, m.code[0]->flags_condition()); + } + } +} + + +TEST(InstructionSelectorWord32EqualAndDPIImm) { + DPIs dpis; + Immediates immediates; + for (DPIs::const_iterator i = dpis.begin(); i != dpis.end(); ++i) { + DPI dpi = *i; + for (Immediates::const_iterator j = immediates.begin(); + j != immediates.end(); ++j) { + int32_t imm = *j; + { + InstructionSelectorTester m; + m.Return(m.Word32Equal( + m.NewNode(dpi.op, m.Parameter(0), m.Int32Constant(imm)), + m.Int32Constant(0))); + m.SelectInstructions(); + 
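// Editorial aside, not part of the original patch: the Word32EqualImm cases
// above expect a comparison against zero to come out as TST x, x rather
// than CMP x, #0. Both set the Z flag exactly when x == 0, because
// x & x == x for every x; a quick check of that equivalence:
{
  uint32_t xs[] = {0u, 1u, 0x80000000u, 0xffffffffu};
  for (int n = 0; n < 4; ++n) {
    CHECK_EQ(xs[n] == 0u, (xs[n] & xs[n]) == 0u);
  }
}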
CHECK_EQ(1, m.code.size()); + CHECK_EQ(dpi.test_arch_opcode, m.code[0]->arch_opcode()); + CHECK_EQ(kMode_Operand2_I, m.code[0]->addressing_mode()); + CHECK_EQ(kFlags_set, m.code[0]->flags_mode()); + CHECK_EQ(kEqual, m.code[0]->flags_condition()); + } + { + InstructionSelectorTester m; + m.Return(m.Word32Equal( + m.NewNode(dpi.op, m.Int32Constant(imm), m.Parameter(0)), + m.Int32Constant(0))); + m.SelectInstructions(); + CHECK_EQ(1, m.code.size()); + CHECK_EQ(dpi.test_arch_opcode, m.code[0]->arch_opcode()); + CHECK_EQ(kMode_Operand2_I, m.code[0]->addressing_mode()); + CHECK_EQ(kFlags_set, m.code[0]->flags_mode()); + CHECK_EQ(kEqual, m.code[0]->flags_condition()); + } + { + InstructionSelectorTester m; + m.Return(m.Word32Equal( + m.Int32Constant(0), + m.NewNode(dpi.op, m.Parameter(0), m.Int32Constant(imm)))); + m.SelectInstructions(); + CHECK_EQ(1, m.code.size()); + CHECK_EQ(dpi.test_arch_opcode, m.code[0]->arch_opcode()); + CHECK_EQ(kMode_Operand2_I, m.code[0]->addressing_mode()); + CHECK_EQ(kFlags_set, m.code[0]->flags_mode()); + CHECK_EQ(kEqual, m.code[0]->flags_condition()); + } + { + InstructionSelectorTester m; + m.Return(m.Word32Equal( + m.Int32Constant(0), + m.NewNode(dpi.op, m.Int32Constant(imm), m.Parameter(0)))); + m.SelectInstructions(); + CHECK_EQ(1, m.code.size()); + CHECK_EQ(dpi.test_arch_opcode, m.code[0]->arch_opcode()); + CHECK_EQ(kMode_Operand2_I, m.code[0]->addressing_mode()); + CHECK_EQ(kFlags_set, m.code[0]->flags_mode()); + CHECK_EQ(kEqual, m.code[0]->flags_condition()); + } + } + } +} + + +TEST(InstructionSelectorWord32EqualAndShiftP) { + Shifts shifts; + for (Shifts::const_iterator i = shifts.begin(); i != shifts.end(); ++i) { + Shift shift = *i; + { + InstructionSelectorTester m; + m.Return(m.Word32Equal( + m.Parameter(0), m.NewNode(shift.op, m.Parameter(1), m.Parameter(2)))); + m.SelectInstructions(); + CHECK_EQ(1, m.code.size()); + CHECK_EQ(kArmCmp, m.code[0]->arch_opcode()); + CHECK_EQ(shift.r_mode, m.code[0]->addressing_mode()); + CHECK_EQ(kFlags_set, m.code[0]->flags_mode()); + CHECK_EQ(kEqual, m.code[0]->flags_condition()); + } + { + InstructionSelectorTester m; + m.Return(m.Word32Equal( + m.NewNode(shift.op, m.Parameter(0), m.Parameter(1)), m.Parameter(2))); + m.SelectInstructions(); + CHECK_EQ(1, m.code.size()); + CHECK_EQ(kArmCmp, m.code[0]->arch_opcode()); + CHECK_EQ(shift.r_mode, m.code[0]->addressing_mode()); + CHECK_EQ(kFlags_set, m.code[0]->flags_mode()); + CHECK_EQ(kEqual, m.code[0]->flags_condition()); + } + } +} + + +TEST(InstructionSelectorBranchWithWord32EqualAndShiftP) { + Shifts shifts; + for (Shifts::const_iterator i = shifts.begin(); i != shifts.end(); ++i) { + Shift shift = *i; + { + InstructionSelectorTester m; + MLabel blocka, blockb; + m.Branch(m.Word32Equal(m.Parameter(0), m.NewNode(shift.op, m.Parameter(1), + m.Parameter(2))), + &blocka, &blockb); + m.Bind(&blocka); + m.Return(m.Int32Constant(1)); + m.Bind(&blockb); + m.Return(m.Int32Constant(0)); + m.SelectInstructions(); + CHECK_EQ(1, m.code.size()); + CHECK_EQ(kArmCmp, m.code[0]->arch_opcode()); + CHECK_EQ(shift.r_mode, m.code[0]->addressing_mode()); + CHECK_EQ(kFlags_branch, m.code[0]->flags_mode()); + CHECK_EQ(kEqual, m.code[0]->flags_condition()); + } + { + InstructionSelectorTester m; + MLabel blocka, blockb; + m.Branch( + m.Word32Equal(m.NewNode(shift.op, m.Parameter(1), m.Parameter(2)), + m.Parameter(0)), + &blocka, &blockb); + m.Bind(&blocka); + m.Return(m.Int32Constant(1)); + m.Bind(&blockb); + m.Return(m.Int32Constant(0)); + m.SelectInstructions(); + CHECK_EQ(1, 
m.code.size()); + CHECK_EQ(kArmCmp, m.code[0]->arch_opcode()); + CHECK_EQ(shift.r_mode, m.code[0]->addressing_mode()); + CHECK_EQ(kFlags_branch, m.code[0]->flags_mode()); + CHECK_EQ(kEqual, m.code[0]->flags_condition()); + } + } +} + + +TEST(InstructionSelectorBranchWithWord32EqualAndShiftImm) { + Shifts shifts; + for (Shifts::const_iterator i = shifts.begin(); i != shifts.end(); ++i) { + Shift shift = *i; + for (int32_t imm = shift.i_low; imm <= shift.i_high; ++imm) { + { + InstructionSelectorTester m; + MLabel blocka, blockb; + m.Branch( + m.Word32Equal(m.Parameter(0), m.NewNode(shift.op, m.Parameter(1), + m.Int32Constant(imm))), + &blocka, &blockb); + m.Bind(&blocka); + m.Return(m.Int32Constant(1)); + m.Bind(&blockb); + m.Return(m.Int32Constant(0)); + m.SelectInstructions(); + CHECK_EQ(1, m.code.size()); + CHECK_EQ(kArmCmp, m.code[0]->arch_opcode()); + CHECK_EQ(shift.i_mode, m.code[0]->addressing_mode()); + CHECK_EQ(kFlags_branch, m.code[0]->flags_mode()); + CHECK_EQ(kEqual, m.code[0]->flags_condition()); + } + { + InstructionSelectorTester m; + MLabel blocka, blockb; + m.Branch(m.Word32Equal( + m.NewNode(shift.op, m.Parameter(1), m.Int32Constant(imm)), + m.Parameter(0)), + &blocka, &blockb); + m.Bind(&blocka); + m.Return(m.Int32Constant(1)); + m.Bind(&blockb); + m.Return(m.Int32Constant(0)); + m.SelectInstructions(); + CHECK_EQ(1, m.code.size()); + CHECK_EQ(kArmCmp, m.code[0]->arch_opcode()); + CHECK_EQ(shift.i_mode, m.code[0]->addressing_mode()); + CHECK_EQ(kFlags_branch, m.code[0]->flags_mode()); + CHECK_EQ(kEqual, m.code[0]->flags_condition()); + } + } + } +} + + +TEST(InstructionSelectorBranchWithWord32EqualAndRotateRightP) { + { + InstructionSelectorTester m; + MLabel blocka, blockb; + Node* input = m.Parameter(0); + Node* value = m.Parameter(1); + Node* shift = m.Parameter(2); + Node* ror = + m.Word32Or(m.Word32Shr(value, shift), + m.Word32Shl(value, m.Int32Sub(m.Int32Constant(32), shift))); + m.Branch(m.Word32Equal(input, ror), &blocka, &blockb); + m.Bind(&blocka); + m.Return(m.Int32Constant(1)); + m.Bind(&blockb); + m.Return(m.Int32Constant(0)); + m.SelectInstructions(); + CHECK_EQ(1, m.code.size()); + CHECK_EQ(kArmCmp, m.code[0]->arch_opcode()); + CHECK_EQ(kMode_Operand2_R_ROR_R, m.code[0]->addressing_mode()); + CHECK_EQ(kFlags_branch, m.code[0]->flags_mode()); + CHECK_EQ(kEqual, m.code[0]->flags_condition()); + } + { + InstructionSelectorTester m; + MLabel blocka, blockb; + Node* input = m.Parameter(0); + Node* value = m.Parameter(1); + Node* shift = m.Parameter(2); + Node* ror = + m.Word32Or(m.Word32Shl(value, m.Int32Sub(m.Int32Constant(32), shift)), + m.Word32Shr(value, shift)); + m.Branch(m.Word32Equal(input, ror), &blocka, &blockb); + m.Bind(&blocka); + m.Return(m.Int32Constant(1)); + m.Bind(&blockb); + m.Return(m.Int32Constant(0)); + m.SelectInstructions(); + CHECK_EQ(1, m.code.size()); + CHECK_EQ(kArmCmp, m.code[0]->arch_opcode()); + CHECK_EQ(kMode_Operand2_R_ROR_R, m.code[0]->addressing_mode()); + CHECK_EQ(kFlags_branch, m.code[0]->flags_mode()); + CHECK_EQ(kEqual, m.code[0]->flags_condition()); + } + { + InstructionSelectorTester m; + MLabel blocka, blockb; + Node* input = m.Parameter(0); + Node* value = m.Parameter(1); + Node* shift = m.Parameter(2); + Node* ror = + m.Word32Or(m.Word32Shr(value, shift), + m.Word32Shl(value, m.Int32Sub(m.Int32Constant(32), shift))); + m.Branch(m.Word32Equal(ror, input), &blocka, &blockb); + m.Bind(&blocka); + m.Return(m.Int32Constant(1)); + m.Bind(&blockb); + m.Return(m.Int32Constant(0)); + m.SelectInstructions(); + CHECK_EQ(1, 
m.code.size()); + CHECK_EQ(kArmCmp, m.code[0]->arch_opcode()); + CHECK_EQ(kMode_Operand2_R_ROR_R, m.code[0]->addressing_mode()); + CHECK_EQ(kFlags_branch, m.code[0]->flags_mode()); + CHECK_EQ(kEqual, m.code[0]->flags_condition()); + } + { + InstructionSelectorTester m; + MLabel blocka, blockb; + Node* input = m.Parameter(0); + Node* value = m.Parameter(1); + Node* shift = m.Parameter(2); + Node* ror = + m.Word32Or(m.Word32Shl(value, m.Int32Sub(m.Int32Constant(32), shift)), + m.Word32Shr(value, shift)); + m.Branch(m.Word32Equal(ror, input), &blocka, &blockb); + m.Bind(&blocka); + m.Return(m.Int32Constant(1)); + m.Bind(&blockb); + m.Return(m.Int32Constant(0)); + m.SelectInstructions(); + CHECK_EQ(1, m.code.size()); + CHECK_EQ(kArmCmp, m.code[0]->arch_opcode()); + CHECK_EQ(kMode_Operand2_R_ROR_R, m.code[0]->addressing_mode()); + CHECK_EQ(kFlags_branch, m.code[0]->flags_mode()); + CHECK_EQ(kEqual, m.code[0]->flags_condition()); + } +} + + +TEST(InstructionSelectorBranchWithWord32EqualAndRotateRightImm) { + FOR_INPUTS(uint32_t, ror, i) { + uint32_t shift = *i; + { + InstructionSelectorTester m; + MLabel blocka, blockb; + Node* input = m.Parameter(0); + Node* value = m.Parameter(1); + Node* ror = m.Word32Or(m.Word32Shr(value, m.Int32Constant(shift)), + m.Word32Shl(value, m.Int32Constant(32 - shift))); + m.Branch(m.Word32Equal(input, ror), &blocka, &blockb); + m.Bind(&blocka); + m.Return(m.Int32Constant(1)); + m.Bind(&blockb); + m.Return(m.Int32Constant(0)); + m.SelectInstructions(); + CHECK_EQ(1, m.code.size()); + CHECK_EQ(kArmCmp, m.code[0]->arch_opcode()); + CHECK_EQ(kMode_Operand2_R_ROR_I, m.code[0]->addressing_mode()); + CHECK_EQ(kFlags_branch, m.code[0]->flags_mode()); + CHECK_EQ(kEqual, m.code[0]->flags_condition()); + CHECK_LE(3, m.code[0]->InputCount()); + CHECK_EQ(shift, m.ToInt32(m.code[0]->InputAt(2))); + } + { + InstructionSelectorTester m; + MLabel blocka, blockb; + Node* input = m.Parameter(0); + Node* value = m.Parameter(1); + Node* ror = m.Word32Or(m.Word32Shl(value, m.Int32Constant(32 - shift)), + m.Word32Shr(value, m.Int32Constant(shift))); + m.Branch(m.Word32Equal(input, ror), &blocka, &blockb); + m.Bind(&blocka); + m.Return(m.Int32Constant(1)); + m.Bind(&blockb); + m.Return(m.Int32Constant(0)); + m.SelectInstructions(); + CHECK_EQ(1, m.code.size()); + CHECK_EQ(kArmCmp, m.code[0]->arch_opcode()); + CHECK_EQ(kMode_Operand2_R_ROR_I, m.code[0]->addressing_mode()); + CHECK_EQ(kFlags_branch, m.code[0]->flags_mode()); + CHECK_EQ(kEqual, m.code[0]->flags_condition()); + CHECK_LE(3, m.code[0]->InputCount()); + CHECK_EQ(shift, m.ToInt32(m.code[0]->InputAt(2))); + } + { + InstructionSelectorTester m; + MLabel blocka, blockb; + Node* input = m.Parameter(0); + Node* value = m.Parameter(1); + Node* ror = m.Word32Or(m.Word32Shr(value, m.Int32Constant(shift)), + m.Word32Shl(value, m.Int32Constant(32 - shift))); + m.Branch(m.Word32Equal(ror, input), &blocka, &blockb); + m.Bind(&blocka); + m.Return(m.Int32Constant(1)); + m.Bind(&blockb); + m.Return(m.Int32Constant(0)); + m.SelectInstructions(); + CHECK_EQ(1, m.code.size()); + CHECK_EQ(kArmCmp, m.code[0]->arch_opcode()); + CHECK_EQ(kMode_Operand2_R_ROR_I, m.code[0]->addressing_mode()); + CHECK_EQ(kFlags_branch, m.code[0]->flags_mode()); + CHECK_EQ(kEqual, m.code[0]->flags_condition()); + CHECK_LE(3, m.code[0]->InputCount()); + CHECK_EQ(shift, m.ToInt32(m.code[0]->InputAt(2))); + } + { + InstructionSelectorTester m; + MLabel blocka, blockb; + Node* input = m.Parameter(0); + Node* value = m.Parameter(1); + Node* ror = 
m.Word32Or(m.Word32Shl(value, m.Int32Constant(32 - shift)), + m.Word32Shr(value, m.Int32Constant(shift))); + m.Branch(m.Word32Equal(ror, input), &blocka, &blockb); + m.Bind(&blocka); + m.Return(m.Int32Constant(1)); + m.Bind(&blockb); + m.Return(m.Int32Constant(0)); + m.SelectInstructions(); + CHECK_EQ(1, m.code.size()); + CHECK_EQ(kArmCmp, m.code[0]->arch_opcode()); + CHECK_EQ(kMode_Operand2_R_ROR_I, m.code[0]->addressing_mode()); + CHECK_EQ(kFlags_branch, m.code[0]->flags_mode()); + CHECK_EQ(kEqual, m.code[0]->flags_condition()); + CHECK_LE(3, m.code[0]->InputCount()); + CHECK_EQ(shift, m.ToInt32(m.code[0]->InputAt(2))); + } + } +} + + +TEST(InstructionSelectorBranchWithDPIP) { + DPIs dpis; + for (DPIs::const_iterator i = dpis.begin(); i != dpis.end(); ++i) { + DPI dpi = *i; + { + InstructionSelectorTester m; + MLabel blocka, blockb; + m.Branch(m.NewNode(dpi.op, m.Parameter(0), m.Parameter(1)), &blocka, + &blockb); + m.Bind(&blocka); + m.Return(m.Int32Constant(1)); + m.Bind(&blockb); + m.Return(m.Int32Constant(0)); + m.SelectInstructions(); + CHECK_EQ(1, m.code.size()); + CHECK_EQ(dpi.test_arch_opcode, m.code[0]->arch_opcode()); + CHECK_EQ(kMode_Operand2_R, m.code[0]->addressing_mode()); + CHECK_EQ(kFlags_branch, m.code[0]->flags_mode()); + CHECK_EQ(kNotEqual, m.code[0]->flags_condition()); + } + { + InstructionSelectorTester m; + MLabel blocka, blockb; + m.Branch(m.Word32Equal(m.Int32Constant(0), + m.NewNode(dpi.op, m.Parameter(0), m.Parameter(1))), + &blocka, &blockb); + m.Bind(&blocka); + m.Return(m.Int32Constant(1)); + m.Bind(&blockb); + m.Return(m.Int32Constant(0)); + m.SelectInstructions(); + CHECK_EQ(1, m.code.size()); + CHECK_EQ(dpi.test_arch_opcode, m.code[0]->arch_opcode()); + CHECK_EQ(kMode_Operand2_R, m.code[0]->addressing_mode()); + CHECK_EQ(kFlags_branch, m.code[0]->flags_mode()); + CHECK_EQ(kEqual, m.code[0]->flags_condition()); + } + { + InstructionSelectorTester m; + MLabel blocka, blockb; + m.Branch(m.Word32Equal(m.NewNode(dpi.op, m.Parameter(0), m.Parameter(1)), + m.Int32Constant(0)), + &blocka, &blockb); + m.Bind(&blocka); + m.Return(m.Int32Constant(1)); + m.Bind(&blockb); + m.Return(m.Int32Constant(0)); + m.SelectInstructions(); + CHECK_EQ(1, m.code.size()); + CHECK_EQ(dpi.test_arch_opcode, m.code[0]->arch_opcode()); + CHECK_EQ(kMode_Operand2_R, m.code[0]->addressing_mode()); + CHECK_EQ(kFlags_branch, m.code[0]->flags_mode()); + CHECK_EQ(kEqual, m.code[0]->flags_condition()); + } + } +} + + +TEST(InstructionSelectorBranchWithODPIP) { + ODPIs odpis; + for (ODPIs::const_iterator i = odpis.begin(); i != odpis.end(); ++i) { + ODPI odpi = *i; + { + InstructionSelectorTester m; + MLabel blocka, blockb; + Node* node = m.NewNode(odpi.op, m.Parameter(0), m.Parameter(1)); + m.Branch(m.Projection(1, node), &blocka, &blockb); + m.Bind(&blocka); + m.Return(m.Int32Constant(0)); + m.Bind(&blockb); + m.Return(m.Projection(0, node)); + m.SelectInstructions(); + CHECK_EQ(1, m.code.size()); + CHECK_EQ(odpi.arch_opcode, m.code[0]->arch_opcode()); + CHECK_EQ(kMode_Operand2_R, m.code[0]->addressing_mode()); + CHECK_EQ(kFlags_branch, m.code[0]->flags_mode()); + CHECK_EQ(kOverflow, m.code[0]->flags_condition()); + } + { + InstructionSelectorTester m; + MLabel blocka, blockb; + Node* node = m.NewNode(odpi.op, m.Parameter(0), m.Parameter(1)); + m.Branch(m.Word32Equal(m.Projection(1, node), m.Int32Constant(0)), + &blocka, &blockb); + m.Bind(&blocka); + m.Return(m.Int32Constant(0)); + m.Bind(&blockb); + m.Return(m.Projection(0, node)); + m.SelectInstructions(); + CHECK_EQ(1, m.code.size()); + 
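// Editorial aside, not part of the original patch: the rotate-right cases
// above match (x >> s) | (x << (32 - s)) as a single ROR. A reference
// check of the identity, bit by bit (stated for 1 <= s <= 31, since a
// shift by 32 is undefined in C++):
{
  uint32_t x = 0x12345678u;
  for (uint32_t s = 1; s <= 31; ++s) {
    uint32_t ror = (x >> s) | (x << (32 - s));
    for (uint32_t b = 0; b < 32; ++b) {
      CHECK_EQ((x >> ((b + s) & 31)) & 1u, (ror >> b) & 1u);
    }
  }
}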
CHECK_EQ(odpi.arch_opcode, m.code[0]->arch_opcode()); + CHECK_EQ(kMode_Operand2_R, m.code[0]->addressing_mode()); + CHECK_EQ(kFlags_branch, m.code[0]->flags_mode()); + CHECK_EQ(kNotOverflow, m.code[0]->flags_condition()); + } + { + InstructionSelectorTester m; + MLabel blocka, blockb; + Node* node = m.NewNode(odpi.op, m.Parameter(0), m.Parameter(1)); + m.Branch(m.Word32Equal(m.Int32Constant(0), m.Projection(1, node)), + &blocka, &blockb); + m.Bind(&blocka); + m.Return(m.Int32Constant(0)); + m.Bind(&blockb); + m.Return(m.Projection(0, node)); + m.SelectInstructions(); + CHECK_EQ(1, m.code.size()); + CHECK_EQ(odpi.arch_opcode, m.code[0]->arch_opcode()); + CHECK_EQ(kMode_Operand2_R, m.code[0]->addressing_mode()); + CHECK_EQ(kFlags_branch, m.code[0]->flags_mode()); + CHECK_EQ(kNotOverflow, m.code[0]->flags_condition()); + } + } +} + + +TEST(InstructionSelectorBranchWithODPIImm) { + ODPIs odpis; + Immediates immediates; + for (ODPIs::const_iterator i = odpis.begin(); i != odpis.end(); ++i) { + ODPI odpi = *i; + for (Immediates::const_iterator j = immediates.begin(); + j != immediates.end(); ++j) { + int32_t imm = *j; + { + InstructionSelectorTester m; + MLabel blocka, blockb; + Node* node = m.NewNode(odpi.op, m.Parameter(0), m.Int32Constant(imm)); + m.Branch(m.Projection(1, node), &blocka, &blockb); + m.Bind(&blocka); + m.Return(m.Int32Constant(0)); + m.Bind(&blockb); + m.Return(m.Projection(0, node)); + m.SelectInstructions(); + CHECK_EQ(1, m.code.size()); + CHECK_EQ(odpi.arch_opcode, m.code[0]->arch_opcode()); + CHECK_EQ(kMode_Operand2_I, m.code[0]->addressing_mode()); + CHECK_EQ(kFlags_branch, m.code[0]->flags_mode()); + CHECK_EQ(kOverflow, m.code[0]->flags_condition()); + CHECK_LE(2, m.code[0]->InputCount()); + CHECK_EQ(imm, m.ToInt32(m.code[0]->InputAt(1))); + } + { + InstructionSelectorTester m; + MLabel blocka, blockb; + Node* node = m.NewNode(odpi.op, m.Int32Constant(imm), m.Parameter(0)); + m.Branch(m.Projection(1, node), &blocka, &blockb); + m.Bind(&blocka); + m.Return(m.Int32Constant(0)); + m.Bind(&blockb); + m.Return(m.Projection(0, node)); + m.SelectInstructions(); + CHECK_EQ(1, m.code.size()); + CHECK_EQ(odpi.reverse_arch_opcode, m.code[0]->arch_opcode()); + CHECK_EQ(kMode_Operand2_I, m.code[0]->addressing_mode()); + CHECK_EQ(kFlags_branch, m.code[0]->flags_mode()); + CHECK_EQ(kOverflow, m.code[0]->flags_condition()); + CHECK_LE(2, m.code[0]->InputCount()); + CHECK_EQ(imm, m.ToInt32(m.code[0]->InputAt(1))); + } + { + InstructionSelectorTester m; + MLabel blocka, blockb; + Node* node = m.NewNode(odpi.op, m.Parameter(0), m.Int32Constant(imm)); + m.Branch(m.Word32Equal(m.Projection(1, node), m.Int32Constant(0)), + &blocka, &blockb); + m.Bind(&blocka); + m.Return(m.Int32Constant(0)); + m.Bind(&blockb); + m.Return(m.Projection(0, node)); + m.SelectInstructions(); + CHECK_EQ(1, m.code.size()); + CHECK_EQ(odpi.arch_opcode, m.code[0]->arch_opcode()); + CHECK_EQ(kMode_Operand2_I, m.code[0]->addressing_mode()); + CHECK_EQ(kFlags_branch, m.code[0]->flags_mode()); + CHECK_EQ(kNotOverflow, m.code[0]->flags_condition()); + CHECK_LE(2, m.code[0]->InputCount()); + CHECK_EQ(imm, m.ToInt32(m.code[0]->InputAt(1))); + } + { + InstructionSelectorTester m; + MLabel blocka, blockb; + Node* node = m.NewNode(odpi.op, m.Int32Constant(imm), m.Parameter(0)); + m.Branch(m.Word32Equal(m.Projection(1, node), m.Int32Constant(0)), + &blocka, &blockb); + m.Bind(&blocka); + m.Return(m.Int32Constant(0)); + m.Bind(&blockb); + m.Return(m.Projection(0, node)); + m.SelectInstructions(); + CHECK_EQ(1, m.code.size()); + 
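// (Editorial note, not part of the original patch: an overflow-checked op
// yields two projections, the arithmetic result at index 0 and the
// overflow bit at index 1. Branching on projection 1 directly fuses into
// the op with condition kOverflow, while first comparing it against zero
// flips the fused condition to kNotOverflow, exactly as the cases in this
// test expect.)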
CHECK_EQ(odpi.reverse_arch_opcode, m.code[0]->arch_opcode()); + CHECK_EQ(kMode_Operand2_I, m.code[0]->addressing_mode()); + CHECK_EQ(kFlags_branch, m.code[0]->flags_mode()); + CHECK_EQ(kNotOverflow, m.code[0]->flags_condition()); + CHECK_LE(2, m.code[0]->InputCount()); + CHECK_EQ(imm, m.ToInt32(m.code[0]->InputAt(1))); + } + } + } +} diff --git a/deps/v8/test/cctest/compiler/test-instruction-selector-ia32.cc b/deps/v8/test/cctest/compiler/test-instruction-selector-ia32.cc new file mode 100644 index 00000000000..b6509584e06 --- /dev/null +++ b/deps/v8/test/cctest/compiler/test-instruction-selector-ia32.cc @@ -0,0 +1,66 @@ +// Copyright 2014 the V8 project authors. All rights reserved. +// Use of this source code is governed by a BSD-style license that can be +// found in the LICENSE file. + +#include "test/cctest/compiler/instruction-selector-tester.h" +#include "test/cctest/compiler/value-helper.h" + +using namespace v8::internal; +using namespace v8::internal::compiler; + +TEST(InstructionSelectorInt32AddP) { + InstructionSelectorTester m; + m.Return(m.Int32Add(m.Parameter(0), m.Parameter(1))); + m.SelectInstructions(); + CHECK_EQ(1, m.code.size()); + CHECK_EQ(kIA32Add, m.code[0]->arch_opcode()); +} + + +TEST(InstructionSelectorInt32AddImm) { + FOR_INT32_INPUTS(i) { + int32_t imm = *i; + { + InstructionSelectorTester m; + m.Return(m.Int32Add(m.Parameter(0), m.Int32Constant(imm))); + m.SelectInstructions(); + CHECK_EQ(1, m.code.size()); + CHECK_EQ(kIA32Add, m.code[0]->arch_opcode()); + CHECK_EQ(2, m.code[0]->InputCount()); + CHECK_EQ(imm, m.ToInt32(m.code[0]->InputAt(1))); + } + { + InstructionSelectorTester m; + m.Return(m.Int32Add(m.Int32Constant(imm), m.Parameter(0))); + m.SelectInstructions(); + CHECK_EQ(1, m.code.size()); + CHECK_EQ(kIA32Add, m.code[0]->arch_opcode()); + CHECK_EQ(2, m.code[0]->InputCount()); + CHECK_EQ(imm, m.ToInt32(m.code[0]->InputAt(1))); + } + } +} + + +TEST(InstructionSelectorInt32SubP) { + InstructionSelectorTester m; + m.Return(m.Int32Sub(m.Parameter(0), m.Parameter(1))); + m.SelectInstructions(); + CHECK_EQ(1, m.code.size()); + CHECK_EQ(kIA32Sub, m.code[0]->arch_opcode()); + CHECK_EQ(1, m.code[0]->OutputCount()); +} + + +TEST(InstructionSelectorInt32SubImm) { + FOR_INT32_INPUTS(i) { + int32_t imm = *i; + InstructionSelectorTester m; + m.Return(m.Int32Sub(m.Parameter(0), m.Int32Constant(imm))); + m.SelectInstructions(); + CHECK_EQ(1, m.code.size()); + CHECK_EQ(kIA32Sub, m.code[0]->arch_opcode()); + CHECK_EQ(2, m.code[0]->InputCount()); + CHECK_EQ(imm, m.ToInt32(m.code[0]->InputAt(1))); + } +} diff --git a/deps/v8/test/cctest/compiler/test-instruction-selector.cc b/deps/v8/test/cctest/compiler/test-instruction-selector.cc new file mode 100644 index 00000000000..e59406426e8 --- /dev/null +++ b/deps/v8/test/cctest/compiler/test-instruction-selector.cc @@ -0,0 +1,22 @@ +// Copyright 2014 the V8 project authors. All rights reserved. +// Use of this source code is governed by a BSD-style license that can be +// found in the LICENSE file. 
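// Editorial sketch, not part of the original patch: the ia32 Int32AddImm
// cases in the previous file accept the constant on either side of the add
// but always expect it to surface as InputAt(1). A selector can establish
// that invariant for any commutative op by normalizing operand order before
// emitting; the sketch below is illustrative only (the names are not V8's).
#include <cassert>
#include <cstdint>

namespace editorial_sketch {

struct Operand {
  bool is_constant;
  int32_t value;  // Only meaningful when is_constant is true.
};

// Swap a commutative pair so a lone constant lands in the second slot,
// where the immediate addressing mode can pick it up.
inline void CanonicalizeCommutative(Operand* left, Operand* right) {
  if (left->is_constant && !right->is_constant) {
    Operand tmp = *left;
    *left = *right;
    *right = tmp;
  }
}

inline void CanonicalizeCommutativeSelfTest() {
  Operand a = {true, 42};
  Operand b = {false, 0};
  CanonicalizeCommutative(&a, &b);
  assert(!a.is_constant);
  assert(b.is_constant && b.value == 42);
}

}  // namespace editorial_sketch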
+ +#include "test/cctest/compiler/instruction-selector-tester.h" + +using namespace v8::internal; +using namespace v8::internal::compiler; + +#if V8_TURBOFAN_TARGET + +TEST(InstructionSelectionReturnZero) { + InstructionSelectorTester m; + m.Return(m.Int32Constant(0)); + m.SelectInstructions(InstructionSelectorTester::kInternalMode); + CHECK_EQ(2, static_cast<int>(m.code.size())); + CHECK_EQ(kArchNop, m.code[0]->opcode()); + CHECK_EQ(kArchRet, m.code[1]->opcode()); + CHECK_EQ(1, static_cast<int>(m.code[1]->InputCount())); +} + +#endif // !V8_TURBOFAN_TARGET diff --git a/deps/v8/test/cctest/compiler/test-instruction.cc b/deps/v8/test/cctest/compiler/test-instruction.cc new file mode 100644 index 00000000000..bc9f4c7723e --- /dev/null +++ b/deps/v8/test/cctest/compiler/test-instruction.cc @@ -0,0 +1,350 @@ +// Copyright 2014 the V8 project authors. All rights reserved. +// Use of this source code is governed by a BSD-style license that can be +// found in the LICENSE file. + +#include "src/v8.h" +#include "test/cctest/cctest.h" + +#include "src/compiler/code-generator.h" +#include "src/compiler/common-operator.h" +#include "src/compiler/graph.h" +#include "src/compiler/instruction.h" +#include "src/compiler/machine-operator.h" +#include "src/compiler/node.h" +#include "src/compiler/operator.h" +#include "src/compiler/schedule.h" +#include "src/compiler/scheduler.h" +#include "src/lithium.h" + +using namespace v8::internal; +using namespace v8::internal::compiler; + +typedef v8::internal::compiler::Instruction TestInstr; +typedef v8::internal::compiler::InstructionSequence TestInstrSeq; + +// A testing helper for the register code abstraction. +class InstructionTester : public HandleAndZoneScope { + public: // We're all friends here. + InstructionTester() + : isolate(main_isolate()), + graph(zone()), + schedule(zone()), + info(static_cast<HydrogenCodeStub*>(NULL), main_isolate()), + linkage(&info), + common(zone()), + machine(zone(), kMachineWord32), + code(NULL) {} + + ~InstructionTester() { delete code; } + + Isolate* isolate; + Graph graph; + Schedule schedule; + CompilationInfoWithZone info; + Linkage linkage; + CommonOperatorBuilder common; + MachineOperatorBuilder machine; + TestInstrSeq* code; + + Zone* zone() { return main_zone(); } + + void allocCode() { + if (schedule.rpo_order()->size() == 0) { + // Compute the RPO order. 
+ Scheduler::ComputeSpecialRPO(&schedule); + DCHECK(schedule.rpo_order()->size() > 0); + } + code = new TestInstrSeq(&linkage, &graph, &schedule); + } + + Node* Int32Constant(int32_t val) { + Node* node = graph.NewNode(common.Int32Constant(val)); + schedule.AddNode(schedule.entry(), node); + return node; + } + + Node* Float64Constant(double val) { + Node* node = graph.NewNode(common.Float64Constant(val)); + schedule.AddNode(schedule.entry(), node); + return node; + } + + Node* Parameter(int32_t which) { + Node* node = graph.NewNode(common.Parameter(which)); + schedule.AddNode(schedule.entry(), node); + return node; + } + + Node* NewNode(BasicBlock* block) { + Node* node = graph.NewNode(common.Int32Constant(111)); + schedule.AddNode(block, node); + return node; + } + + int NewInstr(BasicBlock* block) { + InstructionCode opcode = static_cast<InstructionCode>(110); + TestInstr* instr = TestInstr::New(zone(), opcode); + return code->AddInstruction(instr, block); + } + + UnallocatedOperand* NewUnallocated(int vreg) { + UnallocatedOperand* unallocated = + new (zone()) UnallocatedOperand(UnallocatedOperand::ANY); + unallocated->set_virtual_register(vreg); + return unallocated; + } +}; + + +TEST(InstructionBasic) { + InstructionTester R; + + for (int i = 0; i < 10; i++) { + R.Int32Constant(i); // Add some nodes to the graph. + } + + BasicBlock* last = R.schedule.entry(); + for (int i = 0; i < 5; i++) { + BasicBlock* block = R.schedule.NewBasicBlock(); + R.schedule.AddGoto(last, block); + last = block; + } + + R.allocCode(); + + CHECK_EQ(R.graph.NodeCount(), R.code->ValueCount()); + + BasicBlockVector* blocks = R.schedule.rpo_order(); + CHECK_EQ(static_cast<int>(blocks->size()), R.code->BasicBlockCount()); + + int index = 0; + for (BasicBlockVectorIter i = blocks->begin(); i != blocks->end(); + i++, index++) { + BasicBlock* block = *i; + CHECK_EQ(block, R.code->BlockAt(index)); + CHECK_EQ(-1, R.code->GetLoopEnd(block)); + } +} + + +TEST(InstructionGetBasicBlock) { + InstructionTester R; + + BasicBlock* b0 = R.schedule.entry(); + BasicBlock* b1 = R.schedule.NewBasicBlock(); + BasicBlock* b2 = R.schedule.NewBasicBlock(); + BasicBlock* b3 = R.schedule.exit(); + + R.schedule.AddGoto(b0, b1); + R.schedule.AddGoto(b1, b2); + R.schedule.AddGoto(b2, b3); + + R.allocCode(); + + R.code->StartBlock(b0); + int i0 = R.NewInstr(b0); + int i1 = R.NewInstr(b0); + R.code->EndBlock(b0); + R.code->StartBlock(b1); + int i2 = R.NewInstr(b1); + int i3 = R.NewInstr(b1); + int i4 = R.NewInstr(b1); + int i5 = R.NewInstr(b1); + R.code->EndBlock(b1); + R.code->StartBlock(b2); + int i6 = R.NewInstr(b2); + int i7 = R.NewInstr(b2); + int i8 = R.NewInstr(b2); + R.code->EndBlock(b2); + R.code->StartBlock(b3); + R.code->EndBlock(b3); + + CHECK_EQ(b0, R.code->GetBasicBlock(i0)); + CHECK_EQ(b0, R.code->GetBasicBlock(i1)); + + CHECK_EQ(b1, R.code->GetBasicBlock(i2)); + CHECK_EQ(b1, R.code->GetBasicBlock(i3)); + CHECK_EQ(b1, R.code->GetBasicBlock(i4)); + CHECK_EQ(b1, R.code->GetBasicBlock(i5)); + + CHECK_EQ(b2, R.code->GetBasicBlock(i6)); + CHECK_EQ(b2, R.code->GetBasicBlock(i7)); + CHECK_EQ(b2, R.code->GetBasicBlock(i8)); + + CHECK_EQ(b0, R.code->GetBasicBlock(b0->first_instruction_index())); + CHECK_EQ(b0, R.code->GetBasicBlock(b0->last_instruction_index())); + + CHECK_EQ(b1, R.code->GetBasicBlock(b1->first_instruction_index())); + CHECK_EQ(b1, R.code->GetBasicBlock(b1->last_instruction_index())); + + CHECK_EQ(b2, R.code->GetBasicBlock(b2->first_instruction_index())); + CHECK_EQ(b2, 
R.code->GetBasicBlock(b2->last_instruction_index())); + + CHECK_EQ(b3, R.code->GetBasicBlock(b3->first_instruction_index())); + CHECK_EQ(b3, R.code->GetBasicBlock(b3->last_instruction_index())); +} + + +TEST(InstructionIsGapAt) { + InstructionTester R; + + BasicBlock* b0 = R.schedule.entry(); + R.schedule.AddReturn(b0, R.Int32Constant(1)); + + R.allocCode(); + TestInstr* i0 = TestInstr::New(R.zone(), 100); + TestInstr* g = TestInstr::New(R.zone(), 103)->MarkAsControl(); + R.code->StartBlock(b0); + R.code->AddInstruction(i0, b0); + R.code->AddInstruction(g, b0); + R.code->EndBlock(b0); + + CHECK_EQ(true, R.code->InstructionAt(0)->IsBlockStart()); + + CHECK_EQ(true, R.code->IsGapAt(0)); // Label + CHECK_EQ(true, R.code->IsGapAt(1)); // Gap + CHECK_EQ(false, R.code->IsGapAt(2)); // i0 + CHECK_EQ(true, R.code->IsGapAt(3)); // Gap + CHECK_EQ(true, R.code->IsGapAt(4)); // Gap + CHECK_EQ(false, R.code->IsGapAt(5)); // g +} + + +TEST(InstructionIsGapAt2) { + InstructionTester R; + + BasicBlock* b0 = R.schedule.entry(); + BasicBlock* b1 = R.schedule.exit(); + R.schedule.AddGoto(b0, b1); + R.schedule.AddReturn(b1, R.Int32Constant(1)); + + R.allocCode(); + TestInstr* i0 = TestInstr::New(R.zone(), 100); + TestInstr* g = TestInstr::New(R.zone(), 103)->MarkAsControl(); + R.code->StartBlock(b0); + R.code->AddInstruction(i0, b0); + R.code->AddInstruction(g, b0); + R.code->EndBlock(b0); + + TestInstr* i1 = TestInstr::New(R.zone(), 102); + TestInstr* g1 = TestInstr::New(R.zone(), 104)->MarkAsControl(); + R.code->StartBlock(b1); + R.code->AddInstruction(i1, b1); + R.code->AddInstruction(g1, b1); + R.code->EndBlock(b1); + + CHECK_EQ(true, R.code->InstructionAt(0)->IsBlockStart()); + + CHECK_EQ(true, R.code->IsGapAt(0)); // Label + CHECK_EQ(true, R.code->IsGapAt(1)); // Gap + CHECK_EQ(false, R.code->IsGapAt(2)); // i0 + CHECK_EQ(true, R.code->IsGapAt(3)); // Gap + CHECK_EQ(true, R.code->IsGapAt(4)); // Gap + CHECK_EQ(false, R.code->IsGapAt(5)); // g + + CHECK_EQ(true, R.code->InstructionAt(6)->IsBlockStart()); + + CHECK_EQ(true, R.code->IsGapAt(6)); // Label + CHECK_EQ(true, R.code->IsGapAt(7)); // Gap + CHECK_EQ(false, R.code->IsGapAt(8)); // i1 + CHECK_EQ(true, R.code->IsGapAt(9)); // Gap + CHECK_EQ(true, R.code->IsGapAt(10)); // Gap + CHECK_EQ(false, R.code->IsGapAt(11)); // g1 +} + + +TEST(InstructionAddGapMove) { + InstructionTester R; + + BasicBlock* b0 = R.schedule.entry(); + R.schedule.AddReturn(b0, R.Int32Constant(1)); + + R.allocCode(); + TestInstr* i0 = TestInstr::New(R.zone(), 100); + TestInstr* g = TestInstr::New(R.zone(), 103)->MarkAsControl(); + R.code->StartBlock(b0); + R.code->AddInstruction(i0, b0); + R.code->AddInstruction(g, b0); + R.code->EndBlock(b0); + + CHECK_EQ(true, R.code->InstructionAt(0)->IsBlockStart()); + + CHECK_EQ(true, R.code->IsGapAt(0)); // Label + CHECK_EQ(true, R.code->IsGapAt(1)); // Gap + CHECK_EQ(false, R.code->IsGapAt(2)); // i0 + CHECK_EQ(true, R.code->IsGapAt(3)); // Gap + CHECK_EQ(true, R.code->IsGapAt(4)); // Gap + CHECK_EQ(false, R.code->IsGapAt(5)); // g + + int indexes[] = {0, 1, 3, 4, -1}; + for (int i = 0; indexes[i] >= 0; i++) { + int index = indexes[i]; + + UnallocatedOperand* op1 = R.NewUnallocated(index + 6); + UnallocatedOperand* op2 = R.NewUnallocated(index + 12); + + R.code->AddGapMove(index, op1, op2); + GapInstruction* gap = R.code->GapAt(index); + ParallelMove* move = gap->GetParallelMove(GapInstruction::START); + CHECK_NE(NULL, move); + const ZoneList<MoveOperands>* move_operands = move->move_operands(); + CHECK_EQ(1, move_operands->length()); + 
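// (Editorial note, not part of the original patch: per the IsGapAt checks
// above, each block expands to the slot pattern label, gap, instruction,
// gap, gap, control at indices 0 through 5, and AddGapMove targets only
// the gap slots, which is why the index list above skips 2 and 5.)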
MoveOperands* cur = &move_operands->at(0); + CHECK_EQ(op1, cur->source()); + CHECK_EQ(op2, cur->destination()); + } +} + + +TEST(InstructionOperands) { + Zone zone(CcTest::InitIsolateOnce()); + + { + TestInstr* i = TestInstr::New(&zone, 101); + CHECK_EQ(0, static_cast<int>(i->OutputCount())); + CHECK_EQ(0, static_cast<int>(i->InputCount())); + CHECK_EQ(0, static_cast<int>(i->TempCount())); + } + + InstructionOperand* outputs[] = { + new (&zone) UnallocatedOperand(UnallocatedOperand::MUST_HAVE_REGISTER), + new (&zone) UnallocatedOperand(UnallocatedOperand::MUST_HAVE_REGISTER), + new (&zone) UnallocatedOperand(UnallocatedOperand::MUST_HAVE_REGISTER), + new (&zone) UnallocatedOperand(UnallocatedOperand::MUST_HAVE_REGISTER)}; + + InstructionOperand* inputs[] = { + new (&zone) UnallocatedOperand(UnallocatedOperand::MUST_HAVE_REGISTER), + new (&zone) UnallocatedOperand(UnallocatedOperand::MUST_HAVE_REGISTER), + new (&zone) UnallocatedOperand(UnallocatedOperand::MUST_HAVE_REGISTER), + new (&zone) UnallocatedOperand(UnallocatedOperand::MUST_HAVE_REGISTER)}; + + InstructionOperand* temps[] = { + new (&zone) UnallocatedOperand(UnallocatedOperand::MUST_HAVE_REGISTER), + new (&zone) UnallocatedOperand(UnallocatedOperand::MUST_HAVE_REGISTER), + new (&zone) UnallocatedOperand(UnallocatedOperand::MUST_HAVE_REGISTER), + new (&zone) UnallocatedOperand(UnallocatedOperand::MUST_HAVE_REGISTER)}; + + for (size_t i = 0; i < ARRAY_SIZE(outputs); i++) { + for (size_t j = 0; j < ARRAY_SIZE(inputs); j++) { + for (size_t k = 0; k < ARRAY_SIZE(temps); k++) { + TestInstr* m = + TestInstr::New(&zone, 101, i, outputs, j, inputs, k, temps); + CHECK(i == m->OutputCount()); + CHECK(j == m->InputCount()); + CHECK(k == m->TempCount()); + + for (size_t z = 0; z < i; z++) { + CHECK_EQ(outputs[z], m->OutputAt(z)); + } + + for (size_t z = 0; z < j; z++) { + CHECK_EQ(inputs[z], m->InputAt(z)); + } + + for (size_t z = 0; z < k; z++) { + CHECK_EQ(temps[z], m->TempAt(z)); + } + } + } + } +} diff --git a/deps/v8/test/cctest/compiler/test-js-constant-cache.cc b/deps/v8/test/cctest/compiler/test-js-constant-cache.cc new file mode 100644 index 00000000000..42a606d23c5 --- /dev/null +++ b/deps/v8/test/cctest/compiler/test-js-constant-cache.cc @@ -0,0 +1,284 @@ +// Copyright 2014 the V8 project authors. All rights reserved. +// Use of this source code is governed by a BSD-style license that can be +// found in the LICENSE file. 
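// Editorial sketch, not part of the original patch: the MinusZeroConstant
// test below depends on the cache telling 0.0 and -0.0 apart even though
// IEEE-754 compares them equal with ==. A standalone illustration of why
// number constants must be keyed by bit pattern rather than by value
// (helper names here are illustrative, not V8's):
#include <cassert>
#include <cmath>
#include <cstdint>
#include <cstring>

namespace editorial_sketch {

// Compare doubles the way a constant cache must: by representation.
inline bool SameNumberBits(double a, double b) {
  uint64_t bits_a, bits_b;
  std::memcpy(&bits_a, &a, sizeof(bits_a));
  std::memcpy(&bits_b, &b, sizeof(bits_b));
  return bits_a == bits_b;
}

inline void SameNumberBitsSelfTest() {
  assert(0.0 == -0.0);                  // == cannot distinguish them,
  assert(!SameNumberBits(0.0, -0.0));   // but the bit patterns differ,
  assert(std::signbit(-0.0));           // as signbit shows,
  assert(1.0 / -0.0 < 0.0);             // and so does division.
}

}  // namespace editorial_sketch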
+ +#include "src/v8.h" + +#include "src/compiler/js-graph.h" +#include "src/compiler/node-properties-inl.h" +#include "src/compiler/typer.h" +#include "src/types.h" +#include "test/cctest/cctest.h" +#include "test/cctest/compiler/value-helper.h" + +using namespace v8::internal; +using namespace v8::internal::compiler; + +class JSCacheTesterHelper { + protected: + explicit JSCacheTesterHelper(Zone* zone) + : main_graph_(zone), main_common_(zone), main_typer_(zone) {} + Graph main_graph_; + CommonOperatorBuilder main_common_; + Typer main_typer_; +}; + + +class JSConstantCacheTester : public HandleAndZoneScope, + public JSCacheTesterHelper, + public JSGraph { + public: + JSConstantCacheTester() + : JSCacheTesterHelper(main_zone()), + JSGraph(&main_graph_, &main_common_, &main_typer_) {} + + Type* upper(Node* node) { return NodeProperties::GetBounds(node).upper; } + + Handle<Object> handle(Node* node) { + CHECK_EQ(IrOpcode::kHeapConstant, node->opcode()); + return ValueOf<Handle<Object> >(node->op()); + } + + Factory* factory() { return main_isolate()->factory(); } +}; + + +TEST(ZeroConstant1) { + JSConstantCacheTester T; + + Node* zero = T.ZeroConstant(); + + CHECK_EQ(IrOpcode::kNumberConstant, zero->opcode()); + CHECK_EQ(zero, T.Constant(0)); + CHECK_NE(zero, T.Constant(-0.0)); + CHECK_NE(zero, T.Constant(1.0)); + CHECK_NE(zero, T.Constant(v8::base::OS::nan_value())); + CHECK_NE(zero, T.Float64Constant(0)); + CHECK_NE(zero, T.Int32Constant(0)); + + Type* t = T.upper(zero); + + CHECK(t->Is(Type::Number())); + CHECK(t->Is(Type::Integral32())); + CHECK(t->Is(Type::Signed32())); + CHECK(t->Is(Type::Unsigned32())); + CHECK(t->Is(Type::SignedSmall())); + CHECK(t->Is(Type::UnsignedSmall())); +} + + +TEST(MinusZeroConstant) { + JSConstantCacheTester T; + + Node* minus_zero = T.Constant(-0.0); + Node* zero = T.ZeroConstant(); + + CHECK_EQ(IrOpcode::kNumberConstant, minus_zero->opcode()); + CHECK_EQ(minus_zero, T.Constant(-0.0)); + CHECK_NE(zero, minus_zero); + + Type* t = T.upper(minus_zero); + + CHECK(t->Is(Type::Number())); + CHECK(t->Is(Type::MinusZero())); + CHECK(!t->Is(Type::Integral32())); + CHECK(!t->Is(Type::Signed32())); + CHECK(!t->Is(Type::Unsigned32())); + CHECK(!t->Is(Type::SignedSmall())); + CHECK(!t->Is(Type::UnsignedSmall())); + + double zero_value = ValueOf<double>(zero->op()); + double minus_zero_value = ValueOf<double>(minus_zero->op()); + + CHECK_EQ(0.0, zero_value); + CHECK_NE(-0.0, zero_value); + CHECK_EQ(-0.0, minus_zero_value); + CHECK_NE(0.0, minus_zero_value); +} + + +TEST(ZeroConstant2) { + JSConstantCacheTester T; + + Node* zero = T.Constant(0); + + CHECK_EQ(IrOpcode::kNumberConstant, zero->opcode()); + CHECK_EQ(zero, T.ZeroConstant()); + CHECK_NE(zero, T.Constant(-0.0)); + CHECK_NE(zero, T.Constant(1.0)); + CHECK_NE(zero, T.Constant(v8::base::OS::nan_value())); + CHECK_NE(zero, T.Float64Constant(0)); + CHECK_NE(zero, T.Int32Constant(0)); + + Type* t = T.upper(zero); + + CHECK(t->Is(Type::Number())); + CHECK(t->Is(Type::Integral32())); + CHECK(t->Is(Type::Signed32())); + CHECK(t->Is(Type::Unsigned32())); + CHECK(t->Is(Type::SignedSmall())); + CHECK(t->Is(Type::UnsignedSmall())); +} + + +TEST(OneConstant1) { + JSConstantCacheTester T; + + Node* one = T.OneConstant(); + + CHECK_EQ(IrOpcode::kNumberConstant, one->opcode()); + CHECK_EQ(one, T.Constant(1)); + CHECK_EQ(one, T.Constant(1.0)); + CHECK_NE(one, T.Constant(1.01)); + CHECK_NE(one, T.Constant(-1.01)); + CHECK_NE(one, T.Constant(v8::base::OS::nan_value())); + CHECK_NE(one, T.Float64Constant(1.0)); + CHECK_NE(one, 
T.Int32Constant(1)); + + Type* t = T.upper(one); + + CHECK(t->Is(Type::Number())); + CHECK(t->Is(Type::Integral32())); + CHECK(t->Is(Type::Signed32())); + CHECK(t->Is(Type::Unsigned32())); + CHECK(t->Is(Type::SignedSmall())); + CHECK(t->Is(Type::UnsignedSmall())); +} + + +TEST(OneConstant2) { + JSConstantCacheTester T; + + Node* one = T.Constant(1); + + CHECK_EQ(IrOpcode::kNumberConstant, one->opcode()); + CHECK_EQ(one, T.OneConstant()); + CHECK_EQ(one, T.Constant(1.0)); + CHECK_NE(one, T.Constant(1.01)); + CHECK_NE(one, T.Constant(-1.01)); + CHECK_NE(one, T.Constant(v8::base::OS::nan_value())); + CHECK_NE(one, T.Float64Constant(1.0)); + CHECK_NE(one, T.Int32Constant(1)); + + Type* t = T.upper(one); + + CHECK(t->Is(Type::Number())); + CHECK(t->Is(Type::Integral32())); + CHECK(t->Is(Type::Signed32())); + CHECK(t->Is(Type::Unsigned32())); + CHECK(t->Is(Type::SignedSmall())); + CHECK(t->Is(Type::UnsignedSmall())); +} + + +TEST(Canonicalizations) { + JSConstantCacheTester T; + + CHECK_EQ(T.ZeroConstant(), T.ZeroConstant()); + CHECK_EQ(T.UndefinedConstant(), T.UndefinedConstant()); + CHECK_EQ(T.TheHoleConstant(), T.TheHoleConstant()); + CHECK_EQ(T.TrueConstant(), T.TrueConstant()); + CHECK_EQ(T.FalseConstant(), T.FalseConstant()); + CHECK_EQ(T.NullConstant(), T.NullConstant()); + CHECK_EQ(T.ZeroConstant(), T.ZeroConstant()); + CHECK_EQ(T.OneConstant(), T.OneConstant()); + CHECK_EQ(T.NaNConstant(), T.NaNConstant()); +} + + +TEST(NoAliasing) { + JSConstantCacheTester T; + + Node* nodes[] = {T.UndefinedConstant(), T.TheHoleConstant(), T.TrueConstant(), + T.FalseConstant(), T.NullConstant(), T.ZeroConstant(), + T.OneConstant(), T.NaNConstant(), T.Constant(21), + T.Constant(22.2)}; + + for (size_t i = 0; i < ARRAY_SIZE(nodes); i++) { + for (size_t j = 0; j < ARRAY_SIZE(nodes); j++) { + if (i != j) CHECK_NE(nodes[i], nodes[j]); + } + } +} + + +TEST(CanonicalizingNumbers) { + JSConstantCacheTester T; + + FOR_FLOAT64_INPUTS(i) { + Node* node = T.Constant(*i); + for (int j = 0; j < 5; j++) { + CHECK_EQ(node, T.Constant(*i)); + } + } +} + + +TEST(NumberTypes) { + JSConstantCacheTester T; + + FOR_FLOAT64_INPUTS(i) { + double value = *i; + Node* node = T.Constant(value); + CHECK(T.upper(node)->Equals(Type::Of(value, T.main_zone()))); + } +} + + +TEST(HeapNumbers) { + JSConstantCacheTester T; + + FOR_FLOAT64_INPUTS(i) { + double value = *i; + Handle<Object> num = T.factory()->NewNumber(value); + Handle<HeapNumber> heap = T.factory()->NewHeapNumber(value); + Node* node1 = T.Constant(value); + Node* node2 = T.Constant(num); + Node* node3 = T.Constant(heap); + CHECK_EQ(node1, node2); + CHECK_EQ(node1, node3); + } +} + + +TEST(OddballHandle) { + JSConstantCacheTester T; + + CHECK_EQ(T.UndefinedConstant(), T.Constant(T.factory()->undefined_value())); + CHECK_EQ(T.TheHoleConstant(), T.Constant(T.factory()->the_hole_value())); + CHECK_EQ(T.TrueConstant(), T.Constant(T.factory()->true_value())); + CHECK_EQ(T.FalseConstant(), T.Constant(T.factory()->false_value())); + CHECK_EQ(T.NullConstant(), T.Constant(T.factory()->null_value())); + CHECK_EQ(T.NaNConstant(), T.Constant(T.factory()->nan_value())); +} + + +TEST(OddballValues) { + JSConstantCacheTester T; + + CHECK_EQ(*T.factory()->undefined_value(), *T.handle(T.UndefinedConstant())); + CHECK_EQ(*T.factory()->the_hole_value(), *T.handle(T.TheHoleConstant())); + CHECK_EQ(*T.factory()->true_value(), *T.handle(T.TrueConstant())); + CHECK_EQ(*T.factory()->false_value(), *T.handle(T.FalseConstant())); + CHECK_EQ(*T.factory()->null_value(), *T.handle(T.NullConstant())); 
+} + + +TEST(OddballTypes) { + JSConstantCacheTester T; + + CHECK(T.upper(T.UndefinedConstant())->Is(Type::Undefined())); + // TODO(dcarney): figure this out. + // CHECK(T.upper(T.TheHoleConstant())->Is(Type::Internal())); + CHECK(T.upper(T.TrueConstant())->Is(Type::Boolean())); + CHECK(T.upper(T.FalseConstant())->Is(Type::Boolean())); + CHECK(T.upper(T.NullConstant())->Is(Type::Null())); + CHECK(T.upper(T.ZeroConstant())->Is(Type::Number())); + CHECK(T.upper(T.OneConstant())->Is(Type::Number())); + CHECK(T.upper(T.NaNConstant())->Is(Type::NaN())); +} + + +TEST(ExternalReferences) { + // TODO(titzer): test canonicalization of external references. +} diff --git a/deps/v8/test/cctest/compiler/test-js-context-specialization.cc b/deps/v8/test/cctest/compiler/test-js-context-specialization.cc new file mode 100644 index 00000000000..740d9f3d497 --- /dev/null +++ b/deps/v8/test/cctest/compiler/test-js-context-specialization.cc @@ -0,0 +1,309 @@ +// Copyright 2014 the V8 project authors. All rights reserved. +// Use of this source code is governed by a BSD-style license that can be +// found in the LICENSE file. + +#include "src/compiler/js-context-specialization.h" +#include "src/compiler/js-operator.h" +#include "src/compiler/node-matchers.h" +#include "src/compiler/node-properties-inl.h" +#include "src/compiler/simplified-node-factory.h" +#include "src/compiler/source-position.h" +#include "src/compiler/typer.h" +#include "test/cctest/cctest.h" +#include "test/cctest/compiler/function-tester.h" +#include "test/cctest/compiler/graph-builder-tester.h" + +using namespace v8::internal; +using namespace v8::internal::compiler; + +class ContextSpecializationTester + : public HandleAndZoneScope, + public DirectGraphBuilder, + public SimplifiedNodeFactory<ContextSpecializationTester> { + public: + ContextSpecializationTester() + : DirectGraphBuilder(new (main_zone()) Graph(main_zone())), + common_(main_zone()), + javascript_(main_zone()), + simplified_(main_zone()), + typer_(main_zone()), + jsgraph_(graph(), common(), &typer_), + info_(main_isolate(), main_zone()) {} + + Factory* factory() { return main_isolate()->factory(); } + CommonOperatorBuilder* common() { return &common_; } + JSOperatorBuilder* javascript() { return &javascript_; } + SimplifiedOperatorBuilder* simplified() { return &simplified_; } + JSGraph* jsgraph() { return &jsgraph_; } + CompilationInfo* info() { return &info_; } + + private: + CommonOperatorBuilder common_; + JSOperatorBuilder javascript_; + SimplifiedOperatorBuilder simplified_; + Typer typer_; + JSGraph jsgraph_; + CompilationInfo info_; +}; + + +TEST(ReduceJSLoadContext) { + ContextSpecializationTester t; + + Node* start = t.NewNode(t.common()->Start(0)); + t.graph()->SetStart(start); + + // Make a context and initialize it a bit for this test. 
+ Handle<Context> native = t.factory()->NewNativeContext(); + Handle<Context> subcontext1 = t.factory()->NewNativeContext(); + Handle<Context> subcontext2 = t.factory()->NewNativeContext(); + subcontext2->set_previous(*subcontext1); + subcontext1->set_previous(*native); + Handle<Object> expected = t.factory()->InternalizeUtf8String("gboy!"); + const int slot = Context::GLOBAL_OBJECT_INDEX; + native->set(slot, *expected); + + Node* const_context = t.jsgraph()->Constant(native); + Node* deep_const_context = t.jsgraph()->Constant(subcontext2); + Node* param_context = t.NewNode(t.common()->Parameter(0), start); + JSContextSpecializer spec(t.info(), t.jsgraph(), const_context); + + { + // Mutable slot, constant context, depth = 0 => do nothing. + Node* load = t.NewNode(t.javascript()->LoadContext(0, 0, false), + const_context, const_context, start); + Reduction r = spec.ReduceJSLoadContext(load); + CHECK(!r.Changed()); + } + + { + // Mutable slot, non-constant context, depth = 0 => do nothing. + Node* load = t.NewNode(t.javascript()->LoadContext(0, 0, false), + param_context, param_context, start); + Reduction r = spec.ReduceJSLoadContext(load); + CHECK(!r.Changed()); + } + + { + // Mutable slot, constant context, depth > 0 => fold-in parent context. + Node* load = t.NewNode( + t.javascript()->LoadContext(2, Context::GLOBAL_EVAL_FUN_INDEX, false), + deep_const_context, deep_const_context, start); + Reduction r = spec.ReduceJSLoadContext(load); + CHECK(r.Changed()); + Node* new_context_input = NodeProperties::GetValueInput(r.replacement(), 0); + CHECK_EQ(IrOpcode::kHeapConstant, new_context_input->opcode()); + ValueMatcher<Handle<Context> > match(new_context_input); + CHECK_EQ(*native, *match.Value()); + ContextAccess access = static_cast<Operator1<ContextAccess>*>( + r.replacement()->op())->parameter(); + CHECK_EQ(Context::GLOBAL_EVAL_FUN_INDEX, access.index()); + CHECK_EQ(0, access.depth()); + CHECK_EQ(false, access.immutable()); + } + + { + // Immutable slot, constant context, depth = 0 => specialize. + Node* load = t.NewNode(t.javascript()->LoadContext(0, slot, true), + const_context, const_context, start); + Reduction r = spec.ReduceJSLoadContext(load); + CHECK(r.Changed()); + CHECK(r.replacement() != load); + + ValueMatcher<Handle<Object> > match(r.replacement()); + CHECK(match.HasValue()); + CHECK_EQ(*expected, *match.Value()); + } + + // TODO(titzer): test with other kinds of contexts, e.g. a function context. + // TODO(sigurds): test that loads below create context are not optimized +} + + +TEST(ReduceJSStoreContext) { + ContextSpecializationTester t; + + Node* start = t.NewNode(t.common()->Start(0)); + t.graph()->SetStart(start); + + // Make a context and initialize it a bit for this test. + Handle<Context> native = t.factory()->NewNativeContext(); + Handle<Context> subcontext1 = t.factory()->NewNativeContext(); + Handle<Context> subcontext2 = t.factory()->NewNativeContext(); + subcontext2->set_previous(*subcontext1); + subcontext1->set_previous(*native); + Handle<Object> expected = t.factory()->InternalizeUtf8String("gboy!"); + const int slot = Context::GLOBAL_OBJECT_INDEX; + native->set(slot, *expected); + + Node* const_context = t.jsgraph()->Constant(native); + Node* deep_const_context = t.jsgraph()->Constant(subcontext2); + Node* param_context = t.NewNode(t.common()->Parameter(0), start); + JSContextSpecializer spec(t.info(), t.jsgraph(), const_context); + + { + // Mutable slot, constant context, depth = 0 => do nothing. 
+ Node* load = t.NewNode(t.javascript()->StoreContext(0, 0), const_context, + const_context, start); + Reduction r = spec.ReduceJSStoreContext(load); + CHECK(!r.Changed()); + } + + { + // Mutable slot, non-constant context, depth = 0 => do nothing. + Node* load = t.NewNode(t.javascript()->StoreContext(0, 0), param_context, + param_context, start); + Reduction r = spec.ReduceJSStoreContext(load); + CHECK(!r.Changed()); + } + + { + // Immutable slot, constant context, depth = 0 => do nothing. + Node* load = t.NewNode(t.javascript()->StoreContext(0, slot), const_context, + const_context, start); + Reduction r = spec.ReduceJSStoreContext(load); + CHECK(!r.Changed()); + } + + { + // Mutable slot, constant context, depth > 0 => fold-in parent context. + Node* load = t.NewNode( + t.javascript()->StoreContext(2, Context::GLOBAL_EVAL_FUN_INDEX), + deep_const_context, deep_const_context, start); + Reduction r = spec.ReduceJSStoreContext(load); + CHECK(r.Changed()); + Node* new_context_input = NodeProperties::GetValueInput(r.replacement(), 0); + CHECK_EQ(IrOpcode::kHeapConstant, new_context_input->opcode()); + ValueMatcher<Handle<Context> > match(new_context_input); + CHECK_EQ(*native, *match.Value()); + ContextAccess access = static_cast<Operator1<ContextAccess>*>( + r.replacement()->op())->parameter(); + CHECK_EQ(Context::GLOBAL_EVAL_FUN_INDEX, access.index()); + CHECK_EQ(0, access.depth()); + CHECK_EQ(false, access.immutable()); + } +} + + +// TODO(titzer): factor out common code with effects checking in typed lowering. +static void CheckEffectInput(Node* effect, Node* use) { + CHECK_EQ(effect, NodeProperties::GetEffectInput(use)); +} + + +TEST(SpecializeToContext) { + ContextSpecializationTester t; + + Node* start = t.NewNode(t.common()->Start(0)); + t.graph()->SetStart(start); + + // Make a context and initialize it a bit for this test. + Handle<Context> native = t.factory()->NewNativeContext(); + Handle<Object> expected = t.factory()->InternalizeUtf8String("gboy!"); + const int slot = Context::GLOBAL_OBJECT_INDEX; + native->set(slot, *expected); + t.info()->SetContext(native); + + Node* const_context = t.jsgraph()->Constant(native); + Node* param_context = t.NewNode(t.common()->Parameter(0), start); + JSContextSpecializer spec(t.info(), t.jsgraph(), const_context); + + { + // Check that SpecializeToContext() replaces values and forwards effects + // correctly, and folds values from constant and non-constant contexts + Node* effect_in = start; + Node* load = t.NewNode(t.javascript()->LoadContext(0, slot, true), + const_context, const_context, effect_in); + + + Node* value_use = t.ChangeTaggedToInt32(load); + Node* other_load = t.NewNode(t.javascript()->LoadContext(0, slot, true), + param_context, param_context, load); + Node* effect_use = other_load; + Node* other_use = t.ChangeTaggedToInt32(other_load); + + Node* add = t.NewNode(t.javascript()->Add(), value_use, other_use, + param_context, other_load, start); + + Node* ret = t.NewNode(t.common()->Return(), add, effect_use, start); + Node* end = t.NewNode(t.common()->End(), ret); + USE(end); + t.graph()->SetEnd(end); + + // Double check the above graph is what we expect, or the test is broken. + CheckEffectInput(effect_in, load); + CheckEffectInput(load, effect_use); + + // Perform the substitution on the entire graph. + spec.SpecializeToContext(); + + // Effects should have been forwarded (not replaced with a value). + CheckEffectInput(effect_in, effect_use); + + // Use of {other_load} should not have been replaced. 
+ CHECK_EQ(other_load, other_use->InputAt(0)); + + Node* replacement = value_use->InputAt(0); + ValueMatcher<Handle<Object> > match(replacement); + CHECK(match.HasValue()); + CHECK_EQ(*expected, *match.Value()); + } + // TODO(titzer): clean up above test and test more complicated effects. +} + + +TEST(SpecializeJSFunction_ToConstant1) { + FunctionTester T( + "(function() { var x = 1; function inc(a)" + " { return a + x; } return inc; })()"); + + T.CheckCall(1.0, 0.0, 0.0); + T.CheckCall(2.0, 1.0, 0.0); + T.CheckCall(2.1, 1.1, 0.0); +} + + +TEST(SpecializeJSFunction_ToConstant2) { + FunctionTester T( + "(function() { var x = 1.5; var y = 2.25; var z = 3.75;" + " function f(a) { return a - x + y - z; } return f; })()"); + + T.CheckCall(-3.0, 0.0, 0.0); + T.CheckCall(-2.0, 1.0, 0.0); + T.CheckCall(-1.9, 1.1, 0.0); +} + + +TEST(SpecializeJSFunction_ToConstant3) { + FunctionTester T( + "(function() { var x = -11.5; function inc()" + " { return (function(a) { return a + x; }); }" + " return inc(); })()"); + + T.CheckCall(-11.5, 0.0, 0.0); + T.CheckCall(-10.5, 1.0, 0.0); + T.CheckCall(-10.4, 1.1, 0.0); +} + + +TEST(SpecializeJSFunction_ToConstant_uninit) { + { + FunctionTester T( + "(function() { if (false) { var x = 1; } function inc(a)" + " { return x; } return inc; })()"); // x is undefined! + + CHECK(T.Call(T.Val(0.0), T.Val(0.0)).ToHandleChecked()->IsUndefined()); + CHECK(T.Call(T.Val(2.0), T.Val(0.0)).ToHandleChecked()->IsUndefined()); + CHECK(T.Call(T.Val(-2.1), T.Val(0.0)).ToHandleChecked()->IsUndefined()); + } + + { + FunctionTester T( + "(function() { if (false) { var x = 1; } function inc(a)" + " { return a + x; } return inc; })()"); // x is undefined! + + CHECK(T.Call(T.Val(0.0), T.Val(0.0)).ToHandleChecked()->IsNaN()); + CHECK(T.Call(T.Val(2.0), T.Val(0.0)).ToHandleChecked()->IsNaN()); + CHECK(T.Call(T.Val(-2.1), T.Val(0.0)).ToHandleChecked()->IsNaN()); + } +} diff --git a/deps/v8/test/cctest/compiler/test-js-typed-lowering.cc b/deps/v8/test/cctest/compiler/test-js-typed-lowering.cc new file mode 100644 index 00000000000..b6aa6d9582b --- /dev/null +++ b/deps/v8/test/cctest/compiler/test-js-typed-lowering.cc @@ -0,0 +1,1342 @@ +// Copyright 2014 the V8 project authors. All rights reserved. +// Use of this source code is governed by a BSD-style license that can be +// found in the LICENSE file. 
+ +#include "src/v8.h" +#include "test/cctest/cctest.h" + +#include "src/compiler/graph-inl.h" +#include "src/compiler/js-typed-lowering.h" +#include "src/compiler/node-properties-inl.h" +#include "src/compiler/opcodes.h" +#include "src/compiler/typer.h" + +using namespace v8::internal; +using namespace v8::internal::compiler; + +class JSTypedLoweringTester : public HandleAndZoneScope { + public: + explicit JSTypedLoweringTester(int num_parameters = 0) + : isolate(main_isolate()), + binop(NULL), + unop(NULL), + javascript(main_zone()), + machine(main_zone()), + simplified(main_zone()), + common(main_zone()), + graph(main_zone()), + typer(main_zone()), + source_positions(&graph), + context_node(NULL) { + typer.DecorateGraph(&graph); + Node* s = graph.NewNode(common.Start(num_parameters)); + graph.SetStart(s); + } + + Isolate* isolate; + Operator* binop; + Operator* unop; + JSOperatorBuilder javascript; + MachineOperatorBuilder machine; + SimplifiedOperatorBuilder simplified; + CommonOperatorBuilder common; + Graph graph; + Typer typer; + SourcePositionTable source_positions; + Node* context_node; + + Node* Parameter(Type* t, int32_t index = 0) { + Node* n = graph.NewNode(common.Parameter(index), graph.start()); + NodeProperties::SetBounds(n, Bounds(Type::None(), t)); + return n; + } + + Node* reduce(Node* node) { + JSGraph jsgraph(&graph, &common, &typer); + JSTypedLowering reducer(&jsgraph, &source_positions); + Reduction reduction = reducer.Reduce(node); + if (reduction.Changed()) return reduction.replacement(); + return node; + } + + Node* start() { return graph.start(); } + + Node* context() { + if (context_node == NULL) { + context_node = graph.NewNode(common.Parameter(-1), graph.start()); + } + return context_node; + } + + Node* control() { return start(); } + + void CheckPureBinop(IrOpcode::Value expected, Node* node) { + CHECK_EQ(expected, node->opcode()); + CHECK_EQ(2, node->InputCount()); // should not have context, effect, etc. + } + + void CheckPureBinop(Operator* expected, Node* node) { + CHECK_EQ(expected->opcode(), node->op()->opcode()); + CHECK_EQ(2, node->InputCount()); // should not have context, effect, etc. 
+ } + + Node* ReduceUnop(Operator* op, Type* input_type) { + return reduce(Unop(op, Parameter(input_type))); + } + + Node* ReduceBinop(Operator* op, Type* left_type, Type* right_type) { + return reduce(Binop(op, Parameter(left_type, 0), Parameter(right_type, 1))); + } + + Node* Binop(Operator* op, Node* left, Node* right) { + // JS binops also require context, effect, and control + return graph.NewNode(op, left, right, context(), start(), control()); + } + + Node* Unop(Operator* op, Node* input) { + // JS unops also require context, effect, and control + return graph.NewNode(op, input, context(), start(), control()); + } + + Node* UseForEffect(Node* node) { + // TODO(titzer): use EffectPhi after fixing EffectCount + return graph.NewNode(javascript.ToNumber(), node, context(), node, + control()); + } + + void CheckEffectInput(Node* effect, Node* use) { + CHECK_EQ(effect, NodeProperties::GetEffectInput(use)); + } + + void CheckInt32Constant(int32_t expected, Node* result) { + CHECK_EQ(IrOpcode::kInt32Constant, result->opcode()); + CHECK_EQ(expected, ValueOf<int32_t>(result->op())); + } + + void CheckNumberConstant(double expected, Node* result) { + CHECK_EQ(IrOpcode::kNumberConstant, result->opcode()); + CHECK_EQ(expected, ValueOf<double>(result->op())); + } + + void CheckNaN(Node* result) { + CHECK_EQ(IrOpcode::kNumberConstant, result->opcode()); + double value = ValueOf<double>(result->op()); + CHECK(std::isnan(value)); + } + + void CheckTrue(Node* result) { + CheckHandle(isolate->factory()->true_value(), result); + } + + void CheckFalse(Node* result) { + CheckHandle(isolate->factory()->false_value(), result); + } + + void CheckHandle(Handle<Object> expected, Node* result) { + CHECK_EQ(IrOpcode::kHeapConstant, result->opcode()); + Handle<Object> value = ValueOf<Handle<Object> >(result->op()); + CHECK_EQ(*expected, *value); + } +}; + +static Type* kStringTypes[] = {Type::InternalizedString(), Type::OtherString(), + Type::String()}; + + +static Type* kInt32Types[] = { + Type::UnsignedSmall(), Type::OtherSignedSmall(), Type::OtherUnsigned31(), + Type::OtherUnsigned32(), Type::OtherSigned32(), Type::SignedSmall(), + Type::Signed32(), Type::Unsigned32(), Type::Integral32()}; + + +static Type* kNumberTypes[] = { + Type::UnsignedSmall(), Type::OtherSignedSmall(), Type::OtherUnsigned31(), + Type::OtherUnsigned32(), Type::OtherSigned32(), Type::SignedSmall(), + Type::Signed32(), Type::Unsigned32(), Type::Integral32(), + Type::MinusZero(), Type::NaN(), Type::OtherNumber(), + Type::Number()}; + + +static Type* kJSTypes[] = {Type::Undefined(), Type::Null(), Type::Boolean(), + Type::Number(), Type::String(), Type::Object()}; + + +static Type* I32Type(bool is_signed) { + return is_signed ? Type::Signed32() : Type::Unsigned32(); +} + + +static IrOpcode::Value NumberToI32(bool is_signed) { + return is_signed ? 
IrOpcode::kNumberToInt32 : IrOpcode::kNumberToUint32; +} + + +TEST(StringBinops) { + JSTypedLoweringTester R; + + for (size_t i = 0; i < ARRAY_SIZE(kStringTypes); ++i) { + Node* p0 = R.Parameter(kStringTypes[i], 0); + + for (size_t j = 0; j < ARRAY_SIZE(kStringTypes); ++j) { + Node* p1 = R.Parameter(kStringTypes[j], 1); + + Node* add = R.Binop(R.javascript.Add(), p0, p1); + Node* r = R.reduce(add); + + R.CheckPureBinop(IrOpcode::kStringAdd, r); + CHECK_EQ(p0, r->InputAt(0)); + CHECK_EQ(p1, r->InputAt(1)); + } + } +} + + +TEST(AddNumber1) { + JSTypedLoweringTester R; + for (size_t i = 0; i < ARRAY_SIZE(kNumberTypes); ++i) { + Node* p0 = R.Parameter(kNumberTypes[i], 0); + Node* p1 = R.Parameter(kNumberTypes[i], 1); + Node* add = R.Binop(R.javascript.Add(), p0, p1); + Node* r = R.reduce(add); + + R.CheckPureBinop(IrOpcode::kNumberAdd, r); + CHECK_EQ(p0, r->InputAt(0)); + CHECK_EQ(p1, r->InputAt(1)); + } +} + + +TEST(NumberBinops) { + JSTypedLoweringTester R; + Operator* ops[] = { + R.javascript.Add(), R.simplified.NumberAdd(), + R.javascript.Subtract(), R.simplified.NumberSubtract(), + R.javascript.Multiply(), R.simplified.NumberMultiply(), + R.javascript.Divide(), R.simplified.NumberDivide(), + R.javascript.Modulus(), R.simplified.NumberModulus(), + }; + + for (size_t i = 0; i < ARRAY_SIZE(kNumberTypes); ++i) { + Node* p0 = R.Parameter(kNumberTypes[i], 0); + + for (size_t j = 0; j < ARRAY_SIZE(kNumberTypes); ++j) { + Node* p1 = R.Parameter(kNumberTypes[j], 1); + + for (size_t k = 0; k < ARRAY_SIZE(ops); k += 2) { + Node* add = R.Binop(ops[k], p0, p1); + Node* r = R.reduce(add); + + R.CheckPureBinop(ops[k + 1], r); + CHECK_EQ(p0, r->InputAt(0)); + CHECK_EQ(p1, r->InputAt(1)); + } + } + } +} + + +static void CheckToI32(Node* old_input, Node* new_input, bool is_signed) { + Type* old_type = NodeProperties::GetBounds(old_input).upper; + Type* expected_type = I32Type(is_signed); + if (old_type->Is(expected_type)) { + CHECK_EQ(old_input, new_input); + } else if (new_input->opcode() == IrOpcode::kNumberConstant) { + CHECK(NodeProperties::GetBounds(new_input).upper->Is(expected_type)); + double v = ValueOf<double>(new_input->op()); + double e = static_cast<double>(is_signed ? FastD2I(v) : FastD2UI(v)); + CHECK_EQ(e, v); + } else { + CHECK_EQ(NumberToI32(is_signed), new_input->opcode()); + } +} + + +// A helper class for testing lowering of bitwise shift operators. 
+class JSBitwiseShiftTypedLoweringTester : public JSTypedLoweringTester { + public: + static const int kNumberOps = 6; + Operator* ops[kNumberOps]; + bool signedness[kNumberOps]; + + JSBitwiseShiftTypedLoweringTester() { + int i = 0; + set(i++, javascript.ShiftLeft(), true); + set(i++, machine.Word32Shl(), false); + set(i++, javascript.ShiftRight(), true); + set(i++, machine.Word32Sar(), false); + set(i++, javascript.ShiftRightLogical(), false); + set(i++, machine.Word32Shr(), false); + } + + private: + void set(int idx, Operator* op, bool s) { + ops[idx] = op; + signedness[idx] = s; + } +}; + + +TEST(Int32BitwiseShifts) { + JSBitwiseShiftTypedLoweringTester R; + + Type* types[] = { + Type::SignedSmall(), Type::UnsignedSmall(), Type::OtherSigned32(), + Type::Unsigned32(), Type::Signed32(), Type::MinusZero(), + Type::NaN(), Type::OtherNumber(), Type::Undefined(), + Type::Null(), Type::Boolean(), Type::Number(), + Type::String(), Type::Object()}; + + for (size_t i = 0; i < ARRAY_SIZE(types); ++i) { + Node* p0 = R.Parameter(types[i], 0); + + for (size_t j = 0; j < ARRAY_SIZE(types); ++j) { + Node* p1 = R.Parameter(types[j], 1); + + for (int k = 0; k < R.kNumberOps; k += 2) { + Node* add = R.Binop(R.ops[k], p0, p1); + Node* r = R.reduce(add); + + R.CheckPureBinop(R.ops[k + 1], r); + Node* r0 = r->InputAt(0); + Node* r1 = r->InputAt(1); + + CheckToI32(p0, r0, R.signedness[k]); + + R.CheckPureBinop(IrOpcode::kWord32And, r1); + CheckToI32(p1, r1->InputAt(0), R.signedness[k + 1]); + R.CheckInt32Constant(0x1F, r1->InputAt(1)); + } + } + } +} + + +// A helper class for testing lowering of bitwise operators. +class JSBitwiseTypedLoweringTester : public JSTypedLoweringTester { + public: + static const int kNumberOps = 6; + Operator* ops[kNumberOps]; + bool signedness[kNumberOps]; + + JSBitwiseTypedLoweringTester() { + int i = 0; + set(i++, javascript.BitwiseOr(), true); + set(i++, machine.Word32Or(), true); + set(i++, javascript.BitwiseXor(), true); + set(i++, machine.Word32Xor(), true); + set(i++, javascript.BitwiseAnd(), true); + set(i++, machine.Word32And(), true); + } + + private: + void set(int idx, Operator* op, bool s) { + ops[idx] = op; + signedness[idx] = s; + } +}; + + +TEST(Int32BitwiseBinops) { + JSBitwiseTypedLoweringTester R; + + Type* types[] = { + Type::SignedSmall(), Type::UnsignedSmall(), Type::OtherSigned32(), + Type::Unsigned32(), Type::Signed32(), Type::MinusZero(), + Type::NaN(), Type::OtherNumber(), Type::Undefined(), + Type::Null(), Type::Boolean(), Type::Number(), + Type::String(), Type::Object()}; + + for (size_t i = 0; i < ARRAY_SIZE(types); ++i) { + Node* p0 = R.Parameter(types[i], 0); + + for (size_t j = 0; j < ARRAY_SIZE(types); ++j) { + Node* p1 = R.Parameter(types[j], 1); + + for (int k = 0; k < R.kNumberOps; k += 2) { + Node* add = R.Binop(R.ops[k], p0, p1); + Node* r = R.reduce(add); + + R.CheckPureBinop(R.ops[k + 1], r); + + CheckToI32(p0, r->InputAt(0), R.signedness[k]); + CheckToI32(p1, r->InputAt(1), R.signedness[k + 1]); + } + } + } +} + + +TEST(JSToNumber1) { + JSTypedLoweringTester R; + Operator* ton = R.javascript.ToNumber(); + + for (size_t i = 0; i < ARRAY_SIZE(kNumberTypes); i++) { // ToNumber(number) + Node* r = R.ReduceUnop(ton, kNumberTypes[i]); + CHECK_EQ(IrOpcode::kParameter, r->opcode()); + } + + { // ToNumber(undefined) + Node* r = R.ReduceUnop(ton, Type::Undefined()); + R.CheckNaN(r); + } + + { // ToNumber(null) + Node* r = R.ReduceUnop(ton, Type::Null()); + R.CheckNumberConstant(0.0, r); + } +} + + +TEST(JSToNumber_replacement) { + 
JSTypedLoweringTester R; + + Type* types[] = {Type::Null(), Type::Undefined(), Type::Number()}; + + for (size_t i = 0; i < ARRAY_SIZE(types); i++) { + Node* n = R.Parameter(types[i]); + Node* c = R.graph.NewNode(R.javascript.ToNumber(), n, R.context(), + R.start(), R.start()); + Node* effect_use = R.UseForEffect(c); + Node* add = R.graph.NewNode(R.simplified.ReferenceEqual(Type::Any()), n, c); + + R.CheckEffectInput(c, effect_use); + Node* r = R.reduce(c); + + if (types[i]->Is(Type::Number())) { + CHECK_EQ(n, r); + } else { + CHECK_EQ(IrOpcode::kNumberConstant, r->opcode()); + } + + CHECK_EQ(n, add->InputAt(0)); + CHECK_EQ(r, add->InputAt(1)); + R.CheckEffectInput(R.start(), effect_use); + } +} + + +TEST(JSToNumberOfConstant) { + JSTypedLoweringTester R; + + Operator* ops[] = {R.common.NumberConstant(0), R.common.NumberConstant(-1), + R.common.NumberConstant(0.1), R.common.Int32Constant(1177), + R.common.Float64Constant(0.99)}; + + for (size_t i = 0; i < ARRAY_SIZE(ops); i++) { + Node* n = R.graph.NewNode(ops[i]); + Node* convert = R.Unop(R.javascript.ToNumber(), n); + Node* r = R.reduce(convert); + // Note that either outcome below is correct. It only depends on whether + // the types of constants are eagerly computed or only computed by the + // typing pass. + if (NodeProperties::GetBounds(n).upper->Is(Type::Number())) { + // If number constants are eagerly typed, then reduction should + // remove the ToNumber. + CHECK_EQ(n, r); + } else { + // Otherwise, type-based lowering should only look at the type, and + // *not* try to constant fold. + CHECK_EQ(convert, r); + } + } +} + + +TEST(JSToNumberOfNumberOrOtherPrimitive) { + JSTypedLoweringTester R; + Type* others[] = {Type::Undefined(), Type::Null(), Type::Boolean(), + Type::String()}; + + for (size_t i = 0; i < ARRAY_SIZE(others); i++) { + Type* t = Type::Union(Type::Number(), others[i], R.main_zone()); + Node* r = R.ReduceUnop(R.javascript.ToNumber(), t); + CHECK_EQ(IrOpcode::kJSToNumber, r->opcode()); + } +} + + +TEST(JSToBoolean) { + JSTypedLoweringTester R; + Operator* op = R.javascript.ToBoolean(); + + { // ToBoolean(undefined) + Node* r = R.ReduceUnop(op, Type::Undefined()); + R.CheckFalse(r); + } + + { // ToBoolean(null) + Node* r = R.ReduceUnop(op, Type::Null()); + R.CheckFalse(r); + } + + { // ToBoolean(boolean) + Node* r = R.ReduceUnop(op, Type::Boolean()); + CHECK_EQ(IrOpcode::kParameter, r->opcode()); + } + + { // ToBoolean(number) + Node* r = R.ReduceUnop(op, Type::Number()); + CHECK_EQ(IrOpcode::kBooleanNot, r->opcode()); + Node* i = r->InputAt(0); + CHECK_EQ(IrOpcode::kNumberEqual, i->opcode()); + // ToBoolean(number) => BooleanNot(NumberEqual(x, #0)) + } + + { // ToBoolean(string) + Node* r = R.ReduceUnop(op, Type::String()); + // TODO(titzer): test will break with better js-typed-lowering + CHECK_EQ(IrOpcode::kJSToBoolean, r->opcode()); + } + + { // ToBoolean(object) + Node* r = R.ReduceUnop(op, Type::DetectableObject()); + R.CheckTrue(r); + } + + { // ToBoolean(undetectable) + Node* r = R.ReduceUnop(op, Type::Undetectable()); + R.CheckFalse(r); + } + + { // ToBoolean(object) + Node* r = R.ReduceUnop(op, Type::Object()); + CHECK_EQ(IrOpcode::kJSToBoolean, r->opcode()); + } +} + + +TEST(JSToBoolean_replacement) { + JSTypedLoweringTester R; + + Type* types[] = {Type::Null(), Type::Undefined(), Type::Boolean(), + Type::DetectableObject(), Type::Undetectable()}; + + for (size_t i = 0; i < ARRAY_SIZE(types); i++) { + Node* n = R.Parameter(types[i]); + Node* c = R.graph.NewNode(R.javascript.ToBoolean(), n, R.context(), + 
R.start(), R.start()); + Node* effect_use = R.UseForEffect(c); + Node* add = R.graph.NewNode(R.simplified.ReferenceEqual(Type::Any()), n, c); + + R.CheckEffectInput(c, effect_use); + Node* r = R.reduce(c); + + if (types[i]->Is(Type::Boolean())) { + CHECK_EQ(n, r); + } else { + CHECK_EQ(IrOpcode::kHeapConstant, r->opcode()); + } + + CHECK_EQ(n, add->InputAt(0)); + CHECK_EQ(r, add->InputAt(1)); + R.CheckEffectInput(R.start(), effect_use); + } +} + + +TEST(JSToString1) { + JSTypedLoweringTester R; + + for (size_t i = 0; i < ARRAY_SIZE(kStringTypes); i++) { + Node* r = R.ReduceUnop(R.javascript.ToString(), kStringTypes[i]); + CHECK_EQ(IrOpcode::kParameter, r->opcode()); + } + + Operator* op = R.javascript.ToString(); + + { // ToString(undefined) => "undefined" + Node* r = R.ReduceUnop(op, Type::Undefined()); + R.CheckHandle(R.isolate->factory()->undefined_string(), r); + } + + { // ToString(null) => "null" + Node* r = R.ReduceUnop(op, Type::Null()); + R.CheckHandle(R.isolate->factory()->null_string(), r); + } + + { // ToString(boolean) + Node* r = R.ReduceUnop(op, Type::Boolean()); + // TODO(titzer): could be a branch + CHECK_EQ(IrOpcode::kJSToString, r->opcode()); + } + + { // ToString(number) + Node* r = R.ReduceUnop(op, Type::Number()); + // TODO(titzer): could remove effects + CHECK_EQ(IrOpcode::kJSToString, r->opcode()); + } + + { // ToString(string) + Node* r = R.ReduceUnop(op, Type::String()); + CHECK_EQ(IrOpcode::kParameter, r->opcode()); // No-op + } + + { // ToString(object) + Node* r = R.ReduceUnop(op, Type::Object()); + CHECK_EQ(IrOpcode::kJSToString, r->opcode()); // No reduction. + } +} + + +TEST(JSToString_replacement) { + JSTypedLoweringTester R; + + Type* types[] = {Type::Null(), Type::Undefined(), Type::String()}; + + for (size_t i = 0; i < ARRAY_SIZE(types); i++) { + Node* n = R.Parameter(types[i]); + Node* c = R.graph.NewNode(R.javascript.ToString(), n, R.context(), + R.start(), R.start()); + Node* effect_use = R.UseForEffect(c); + Node* add = R.graph.NewNode(R.simplified.ReferenceEqual(Type::Any()), n, c); + + R.CheckEffectInput(c, effect_use); + Node* r = R.reduce(c); + + if (types[i]->Is(Type::String())) { + CHECK_EQ(n, r); + } else { + CHECK_EQ(IrOpcode::kHeapConstant, r->opcode()); + } + + CHECK_EQ(n, add->InputAt(0)); + CHECK_EQ(r, add->InputAt(1)); + R.CheckEffectInput(R.start(), effect_use); + } +} + + +TEST(StringComparison) { + JSTypedLoweringTester R; + + Operator* ops[] = { + R.javascript.LessThan(), R.simplified.StringLessThan(), + R.javascript.LessThanOrEqual(), R.simplified.StringLessThanOrEqual(), + R.javascript.GreaterThan(), R.simplified.StringLessThan(), + R.javascript.GreaterThanOrEqual(), R.simplified.StringLessThanOrEqual()}; + + for (size_t i = 0; i < ARRAY_SIZE(kStringTypes); i++) { + Node* p0 = R.Parameter(kStringTypes[i], 0); + for (size_t j = 0; j < ARRAY_SIZE(kStringTypes); j++) { + Node* p1 = R.Parameter(kStringTypes[j], 1); + + for (size_t k = 0; k < ARRAY_SIZE(ops); k += 2) { + Node* cmp = R.Binop(ops[k], p0, p1); + Node* r = R.reduce(cmp); + + R.CheckPureBinop(ops[k + 1], r); + if (k >= 4) { + // GreaterThan and GreaterThanOrEqual commute the inputs + // and use the LessThan and LessThanOrEqual operators. 
+ CHECK_EQ(p1, r->InputAt(0)); + CHECK_EQ(p0, r->InputAt(1)); + } else { + CHECK_EQ(p0, r->InputAt(0)); + CHECK_EQ(p1, r->InputAt(1)); + } + } + } + } +} + + +static void CheckIsConvertedToNumber(Node* val, Node* converted) { + if (NodeProperties::GetBounds(val).upper->Is(Type::Number())) { + CHECK_EQ(val, converted); + } else { + if (converted->opcode() == IrOpcode::kNumberConstant) return; + CHECK_EQ(IrOpcode::kJSToNumber, converted->opcode()); + CHECK_EQ(val, converted->InputAt(0)); + } +} + + +TEST(NumberComparison) { + JSTypedLoweringTester R; + + Operator* ops[] = { + R.javascript.LessThan(), R.simplified.NumberLessThan(), + R.javascript.LessThanOrEqual(), R.simplified.NumberLessThanOrEqual(), + R.javascript.GreaterThan(), R.simplified.NumberLessThan(), + R.javascript.GreaterThanOrEqual(), R.simplified.NumberLessThanOrEqual()}; + + for (size_t i = 0; i < ARRAY_SIZE(kJSTypes); i++) { + Type* t0 = kJSTypes[i]; + if (t0->Is(Type::String())) continue; // skip Type::String + Node* p0 = R.Parameter(t0, 0); + + for (size_t j = 0; j < ARRAY_SIZE(kJSTypes); j++) { + Type* t1 = kJSTypes[j]; + if (t1->Is(Type::String())) continue; // skip Type::String + Node* p1 = R.Parameter(t1, 1); + + for (size_t k = 0; k < ARRAY_SIZE(ops); k += 2) { + Node* cmp = R.Binop(ops[k], p0, p1); + Node* r = R.reduce(cmp); + + R.CheckPureBinop(ops[k + 1], r); + if (k >= 4) { + // GreaterThan and GreaterThanOrEqual commute the inputs + // and use the LessThan and LessThanOrEqual operators. + CheckIsConvertedToNumber(p1, r->InputAt(0)); + CheckIsConvertedToNumber(p0, r->InputAt(1)); + } else { + CheckIsConvertedToNumber(p0, r->InputAt(0)); + CheckIsConvertedToNumber(p1, r->InputAt(1)); + } + } + } + } +} + + +TEST(MixedComparison1) { + JSTypedLoweringTester R; + + Type* types[] = {Type::Number(), Type::String(), + Type::Union(Type::Number(), Type::String(), R.main_zone())}; + + for (size_t i = 0; i < ARRAY_SIZE(types); i++) { + Node* p0 = R.Parameter(types[i], 0); + + for (size_t j = 0; j < ARRAY_SIZE(types); j++) { + Node* p1 = R.Parameter(types[j], 1); + { + Node* cmp = R.Binop(R.javascript.LessThan(), p0, p1); + Node* r = R.reduce(cmp); + + if (!types[i]->Maybe(Type::String()) || + !types[j]->Maybe(Type::String())) { + if (types[i]->Is(Type::String()) && types[j]->Is(Type::String())) { + R.CheckPureBinop(R.simplified.StringLessThan(), r); + } else { + R.CheckPureBinop(R.simplified.NumberLessThan(), r); + } + } else { + CHECK_EQ(cmp, r); // No reduction of mixed types. + } + } + } + } +} + + +TEST(ObjectComparison) { + JSTypedLoweringTester R; + + Node* p0 = R.Parameter(Type::Object(), 0); + Node* p1 = R.Parameter(Type::Object(), 1); + + Node* cmp = R.Binop(R.javascript.LessThan(), p0, p1); + Node* effect_use = R.UseForEffect(cmp); + + R.CheckEffectInput(R.start(), cmp); + R.CheckEffectInput(cmp, effect_use); + + Node* r = R.reduce(cmp); + + R.CheckPureBinop(R.simplified.NumberLessThan(), r); + + Node* i0 = r->InputAt(0); + Node* i1 = r->InputAt(1); + + CHECK_NE(p0, i0); + CHECK_NE(p1, i1); + CHECK_EQ(IrOpcode::kJSToNumber, i0->opcode()); + CHECK_EQ(IrOpcode::kJSToNumber, i1->opcode()); + + // Check effect chain is correct. + R.CheckEffectInput(R.start(), i0); + R.CheckEffectInput(i0, i1); + R.CheckEffectInput(i1, effect_use); +} + + +TEST(UnaryNot) { + JSTypedLoweringTester R; + Operator* opnot = R.javascript.UnaryNot(); + + for (size_t i = 0; i < ARRAY_SIZE(kJSTypes); i++) { + Node* r = R.ReduceUnop(opnot, kJSTypes[i]); + // TODO(titzer): test will break if/when js-typed-lowering constant folds. 
+ CHECK_EQ(IrOpcode::kBooleanNot, r->opcode()); + } +} + + +TEST(RemoveToNumberEffects) { + JSTypedLoweringTester R; + + Node* effect_use = NULL; + for (int i = 0; i < 10; i++) { + Node* p0 = R.Parameter(Type::Number()); + Node* ton = R.Unop(R.javascript.ToNumber(), p0); + effect_use = NULL; + + switch (i) { + case 0: + effect_use = R.graph.NewNode(R.javascript.ToNumber(), p0, R.context(), + ton, R.start()); + break; + case 1: + effect_use = R.graph.NewNode(R.javascript.ToNumber(), ton, R.context(), + ton, R.start()); + break; + case 2: + effect_use = R.graph.NewNode(R.common.EffectPhi(1), ton, R.start()); + case 3: + effect_use = R.graph.NewNode(R.javascript.Add(), ton, ton, R.context(), + ton, R.start()); + break; + case 4: + effect_use = R.graph.NewNode(R.javascript.Add(), p0, p0, R.context(), + ton, R.start()); + break; + case 5: + effect_use = R.graph.NewNode(R.common.Return(), p0, ton, R.start()); + break; + case 6: + effect_use = R.graph.NewNode(R.common.Return(), ton, ton, R.start()); + } + + R.CheckEffectInput(R.start(), ton); + if (effect_use != NULL) R.CheckEffectInput(ton, effect_use); + + Node* r = R.reduce(ton); + CHECK_EQ(p0, r); + CHECK_NE(R.start(), r); + + if (effect_use != NULL) { + R.CheckEffectInput(R.start(), effect_use); + // Check that value uses of ToNumber() do not go to start(). + for (int i = 0; i < effect_use->op()->InputCount(); i++) { + CHECK_NE(R.start(), effect_use->InputAt(i)); + } + } + } + + CHECK_EQ(NULL, effect_use); // should have done all cases above. +} + + +// Helper class for testing the reduction of a single binop. +class BinopEffectsTester { + public: + explicit BinopEffectsTester(Operator* op, Type* t0, Type* t1) + : R(), + p0(R.Parameter(t0, 0)), + p1(R.Parameter(t1, 1)), + binop(R.Binop(op, p0, p1)), + effect_use(R.graph.NewNode(R.common.EffectPhi(1), binop, R.start())) { + // Effects should be ordered start -> binop -> effect_use + R.CheckEffectInput(R.start(), binop); + R.CheckEffectInput(binop, effect_use); + result = R.reduce(binop); + } + + JSTypedLoweringTester R; + Node* p0; + Node* p1; + Node* binop; + Node* effect_use; + Node* result; + + void CheckEffectsRemoved() { R.CheckEffectInput(R.start(), effect_use); } + + void CheckEffectOrdering(Node* n0) { + R.CheckEffectInput(R.start(), n0); + R.CheckEffectInput(n0, effect_use); + } + + void CheckEffectOrdering(Node* n0, Node* n1) { + R.CheckEffectInput(R.start(), n0); + R.CheckEffectInput(n0, n1); + R.CheckEffectInput(n1, effect_use); + } + + Node* CheckConvertedInput(IrOpcode::Value opcode, int which, bool effects) { + return CheckConverted(opcode, result->InputAt(which), effects); + } + + Node* CheckConverted(IrOpcode::Value opcode, Node* node, bool effects) { + CHECK_EQ(opcode, node->opcode()); + if (effects) { + CHECK_LT(0, OperatorProperties::GetEffectInputCount(node->op())); + } else { + CHECK_EQ(0, OperatorProperties::GetEffectInputCount(node->op())); + } + return node; + } + + Node* CheckNoOp(int which) { + CHECK_EQ(which == 0 ? p0 : p1, result->InputAt(which)); + return result->InputAt(which); + } +}; + + +// Helper function for strict and non-strict equality reductions. +void CheckEqualityReduction(JSTypedLoweringTester* R, bool strict, Node* l, + Node* r, IrOpcode::Value expected) { + for (int j = 0; j < 2; j++) { + Node* p0 = j == 0 ? l : r; + Node* p1 = j == 1 ? l : r; + + { + Node* eq = strict ? 
R->graph.NewNode(R->javascript.StrictEqual(), p0, p1) + : R->Binop(R->javascript.Equal(), p0, p1); + Node* r = R->reduce(eq); + R->CheckPureBinop(expected, r); + } + + { + Node* ne = strict + ? R->graph.NewNode(R->javascript.StrictNotEqual(), p0, p1) + : R->Binop(R->javascript.NotEqual(), p0, p1); + Node* n = R->reduce(ne); + CHECK_EQ(IrOpcode::kBooleanNot, n->opcode()); + Node* r = n->InputAt(0); + R->CheckPureBinop(expected, r); + } + } +} + + +TEST(EqualityForNumbers) { + JSTypedLoweringTester R; + + Type* simple_number_types[] = {Type::UnsignedSmall(), Type::SignedSmall(), + Type::Signed32(), Type::Unsigned32(), + Type::Number()}; + + + for (size_t i = 0; i < ARRAY_SIZE(simple_number_types); ++i) { + Node* p0 = R.Parameter(simple_number_types[i], 0); + + for (size_t j = 0; j < ARRAY_SIZE(simple_number_types); ++j) { + Node* p1 = R.Parameter(simple_number_types[j], 1); + + CheckEqualityReduction(&R, true, p0, p1, IrOpcode::kNumberEqual); + CheckEqualityReduction(&R, false, p0, p1, IrOpcode::kNumberEqual); + } + } +} + + +TEST(StrictEqualityForRefEqualTypes) { + JSTypedLoweringTester R; + + Type* types[] = {Type::Undefined(), Type::Null(), Type::Boolean(), + Type::Object(), Type::Receiver()}; + + Node* p0 = R.Parameter(Type::Any()); + for (size_t i = 0; i < ARRAY_SIZE(types); i++) { + Node* p1 = R.Parameter(types[i]); + CheckEqualityReduction(&R, true, p0, p1, IrOpcode::kReferenceEqual); + } + // TODO(titzer): Equal(RefEqualTypes) +} + + +TEST(StringEquality) { + JSTypedLoweringTester R; + Node* p0 = R.Parameter(Type::String()); + Node* p1 = R.Parameter(Type::String()); + + CheckEqualityReduction(&R, true, p0, p1, IrOpcode::kStringEqual); + CheckEqualityReduction(&R, false, p0, p1, IrOpcode::kStringEqual); +} + + +TEST(RemovePureNumberBinopEffects) { + JSTypedLoweringTester R; + + Operator* ops[] = { + R.javascript.Equal(), R.simplified.NumberEqual(), + R.javascript.Add(), R.simplified.NumberAdd(), + R.javascript.Subtract(), R.simplified.NumberSubtract(), + R.javascript.Multiply(), R.simplified.NumberMultiply(), + R.javascript.Divide(), R.simplified.NumberDivide(), + R.javascript.Modulus(), R.simplified.NumberModulus(), + R.javascript.LessThan(), R.simplified.NumberLessThan(), + R.javascript.LessThanOrEqual(), R.simplified.NumberLessThanOrEqual(), + }; + + for (size_t j = 0; j < ARRAY_SIZE(ops); j += 2) { + BinopEffectsTester B(ops[j], Type::Number(), Type::Number()); + CHECK_EQ(ops[j + 1]->opcode(), B.result->op()->opcode()); + + B.R.CheckPureBinop(B.result->opcode(), B.result); + + B.CheckNoOp(0); + B.CheckNoOp(1); + + B.CheckEffectsRemoved(); + } +} + + +TEST(OrderNumberBinopEffects1) { + JSTypedLoweringTester R; + + Operator* ops[] = { + R.javascript.Subtract(), R.simplified.NumberSubtract(), + R.javascript.Multiply(), R.simplified.NumberMultiply(), + R.javascript.Divide(), R.simplified.NumberDivide(), + R.javascript.Modulus(), R.simplified.NumberModulus(), + }; + + for (size_t j = 0; j < ARRAY_SIZE(ops); j += 2) { + BinopEffectsTester B(ops[j], Type::Object(), Type::String()); + CHECK_EQ(ops[j + 1]->opcode(), B.result->op()->opcode()); + + Node* i0 = B.CheckConvertedInput(IrOpcode::kJSToNumber, 0, true); + Node* i1 = B.CheckConvertedInput(IrOpcode::kJSToNumber, 1, true); + + CHECK_EQ(B.p0, i0->InputAt(0)); + CHECK_EQ(B.p1, i1->InputAt(0)); + + // Effects should be ordered start -> i0 -> i1 -> effect_use + B.CheckEffectOrdering(i0, i1); + } +} + + +TEST(OrderNumberBinopEffects2) { + JSTypedLoweringTester R; + + Operator* ops[] = { + R.javascript.Add(), R.simplified.NumberAdd(), + 
R.javascript.Subtract(), R.simplified.NumberSubtract(),
+      R.javascript.Multiply(), R.simplified.NumberMultiply(),
+      R.javascript.Divide(), R.simplified.NumberDivide(),
+      R.javascript.Modulus(), R.simplified.NumberModulus(),
+  };
+
+  for (size_t j = 0; j < ARRAY_SIZE(ops); j += 2) {
+    BinopEffectsTester B(ops[j], Type::Number(), Type::Object());
+
+    Node* i0 = B.CheckNoOp(0);
+    Node* i1 = B.CheckConvertedInput(IrOpcode::kJSToNumber, 1, true);
+
+    CHECK_EQ(B.p0, i0);
+    CHECK_EQ(B.p1, i1->InputAt(0));
+
+    // Effects should be ordered start -> i1 -> effect_use
+    B.CheckEffectOrdering(i1);
+  }
+
+  for (size_t j = 0; j < ARRAY_SIZE(ops); j += 2) {
+    BinopEffectsTester B(ops[j], Type::Object(), Type::Number());
+
+    Node* i0 = B.CheckConvertedInput(IrOpcode::kJSToNumber, 0, true);
+    Node* i1 = B.CheckNoOp(1);
+
+    CHECK_EQ(B.p0, i0->InputAt(0));
+    CHECK_EQ(B.p1, i1);
+
+    // Effects should be ordered start -> i0 -> effect_use
+    B.CheckEffectOrdering(i0);
+  }
+}
+
+
+TEST(OrderCompareEffects) {
+  JSTypedLoweringTester R;
+
+  Operator* ops[] = {
+      R.javascript.GreaterThan(), R.simplified.NumberLessThan(),
+      R.javascript.GreaterThanOrEqual(), R.simplified.NumberLessThanOrEqual(),
+  };
+
+  for (size_t j = 0; j < ARRAY_SIZE(ops); j += 2) {
+    BinopEffectsTester B(ops[j], Type::Object(), Type::String());
+    CHECK_EQ(ops[j + 1]->opcode(), B.result->op()->opcode());
+
+    Node* i0 = B.CheckConvertedInput(IrOpcode::kJSToNumber, 0, true);
+    Node* i1 = B.CheckConvertedInput(IrOpcode::kJSToNumber, 1, true);
+
+    // Inputs should be commuted.
+    CHECK_EQ(B.p1, i0->InputAt(0));
+    CHECK_EQ(B.p0, i1->InputAt(0));
+
+    // But effects should be ordered start -> i1 -> i0 -> effect_use
+    B.CheckEffectOrdering(i1, i0);
+  }
+
+  for (size_t j = 0; j < ARRAY_SIZE(ops); j += 2) {
+    BinopEffectsTester B(ops[j], Type::Number(), Type::Object());
+
+    Node* i0 = B.CheckConvertedInput(IrOpcode::kJSToNumber, 0, true);
+    Node* i1 = B.result->InputAt(1);
+
+    CHECK_EQ(B.p1, i0->InputAt(0));  // Should be commuted.
+    CHECK_EQ(B.p0, i1);
+
+    // Effects should be ordered start -> i0 -> effect_use
+    B.CheckEffectOrdering(i0);
+  }
+
+  for (size_t j = 0; j < ARRAY_SIZE(ops); j += 2) {
+    BinopEffectsTester B(ops[j], Type::Object(), Type::Number());
+
+    Node* i0 = B.result->InputAt(0);
+    Node* i1 = B.CheckConvertedInput(IrOpcode::kJSToNumber, 1, true);
+
+    CHECK_EQ(B.p1, i0);  // Should be commuted.
+    CHECK_EQ(B.p0, i1->InputAt(0));
+
+    // Effects should be ordered start -> i1 -> effect_use
+    B.CheckEffectOrdering(i1);
+  }
+}
+
+
+TEST(Int32BinopEffects) {
+  JSBitwiseTypedLoweringTester R;
+
+  for (int j = 0; j < R.kNumberOps; j += 2) {
+    bool signed_left = R.signedness[j], signed_right = R.signedness[j + 1];
+    BinopEffectsTester B(R.ops[j], I32Type(signed_left), I32Type(signed_right));
+    CHECK_EQ(R.ops[j + 1]->opcode(), B.result->op()->opcode());
+
+    B.R.CheckPureBinop(B.result->opcode(), B.result);
+
+    B.CheckNoOp(0);
+    B.CheckNoOp(1);
+
+    B.CheckEffectsRemoved();
+  }
+
+  for (int j = 0; j < R.kNumberOps; j += 2) {
+    bool signed_left = R.signedness[j], signed_right = R.signedness[j + 1];
+    BinopEffectsTester B(R.ops[j], Type::Number(), Type::Number());
+    CHECK_EQ(R.ops[j + 1]->opcode(), B.result->op()->opcode());
+
+    B.R.CheckPureBinop(B.result->opcode(), B.result);
+
+    B.CheckConvertedInput(NumberToI32(signed_left), 0, false);
+    B.CheckConvertedInput(NumberToI32(signed_right), 1, false);
+
+    B.CheckEffectsRemoved();
+  }
+
+  for (int j = 0; j < R.kNumberOps; j += 2) {
+    bool signed_left = R.signedness[j], signed_right = R.signedness[j + 1];
+    BinopEffectsTester B(R.ops[j], Type::Number(), Type::Object());
+
+    B.R.CheckPureBinop(B.result->opcode(), B.result);
+
+    Node* i0 = B.CheckConvertedInput(NumberToI32(signed_left), 0, false);
+    Node* i1 = B.CheckConvertedInput(NumberToI32(signed_right), 1, false);
+
+    CHECK_EQ(B.p0, i0->InputAt(0));
+    Node* ii1 = B.CheckConverted(IrOpcode::kJSToNumber, i1->InputAt(0), true);
+
+    CHECK_EQ(B.p1, ii1->InputAt(0));
+
+    B.CheckEffectOrdering(ii1);
+  }
+
+  for (int j = 0; j < R.kNumberOps; j += 2) {
+    bool signed_left = R.signedness[j], signed_right = R.signedness[j + 1];
+    BinopEffectsTester B(R.ops[j], Type::Object(), Type::Number());
+
+    B.R.CheckPureBinop(B.result->opcode(), B.result);
+
+    Node* i0 = B.CheckConvertedInput(NumberToI32(signed_left), 0, false);
+    Node* i1 = B.CheckConvertedInput(NumberToI32(signed_right), 1, false);
+
+    Node* ii0 = B.CheckConverted(IrOpcode::kJSToNumber, i0->InputAt(0), true);
+    CHECK_EQ(B.p1, i1->InputAt(0));
+
+    CHECK_EQ(B.p0, ii0->InputAt(0));
+
+    B.CheckEffectOrdering(ii0);
+  }
+
+  for (int j = 0; j < R.kNumberOps; j += 2) {
+    bool signed_left = R.signedness[j], signed_right = R.signedness[j + 1];
+    BinopEffectsTester B(R.ops[j], Type::Object(), Type::Object());
+
+    B.R.CheckPureBinop(B.result->opcode(), B.result);
+
+    Node* i0 = B.CheckConvertedInput(NumberToI32(signed_left), 0, false);
+    Node* i1 = B.CheckConvertedInput(NumberToI32(signed_right), 1, false);
+
+    Node* ii0 = B.CheckConverted(IrOpcode::kJSToNumber, i0->InputAt(0), true);
+    Node* ii1 = B.CheckConverted(IrOpcode::kJSToNumber, i1->InputAt(0), true);
+
+    CHECK_EQ(B.p0, ii0->InputAt(0));
+    CHECK_EQ(B.p1, ii1->InputAt(0));
+
+    B.CheckEffectOrdering(ii0, ii1);
+  }
+}
+
+
+TEST(UnaryNotEffects) {
+  JSTypedLoweringTester R;
+  Operator* opnot = R.javascript.UnaryNot();
+
+  for (size_t i = 0; i < ARRAY_SIZE(kJSTypes); i++) {
+    Node* p0 = R.Parameter(kJSTypes[i], 0);
+    Node* orig = R.Unop(opnot, p0);
+    Node* effect_use = R.UseForEffect(orig);
+    Node* value_use = R.graph.NewNode(R.common.Return(), orig);
+    Node* r = R.reduce(orig);
+    // TODO(titzer): test will break if/when js-typed-lowering constant folds.
+    CHECK_EQ(IrOpcode::kBooleanNot, r->opcode());
+
+    CHECK_EQ(r, value_use->InputAt(0));
+
+    if (r->InputAt(0) == orig && orig->opcode() == IrOpcode::kJSToBoolean) {
+      // The original node was turned into a ToBoolean, which has an effect.
+ R.CheckEffectInput(R.start(), orig); + R.CheckEffectInput(orig, effect_use); + } else { + // effect should have been removed from this node. + R.CheckEffectInput(R.start(), effect_use); + } + } +} + + +TEST(Int32AddNarrowing) { + { + JSBitwiseTypedLoweringTester R; + + for (int o = 0; o < R.kNumberOps; o += 2) { + for (size_t i = 0; i < ARRAY_SIZE(kInt32Types); i++) { + Node* n0 = R.Parameter(kInt32Types[i]); + for (size_t j = 0; j < ARRAY_SIZE(kInt32Types); j++) { + Node* n1 = R.Parameter(kInt32Types[j]); + Node* one = R.graph.NewNode(R.common.NumberConstant(1)); + + for (int l = 0; l < 2; l++) { + Node* add_node = R.Binop(R.simplified.NumberAdd(), n0, n1); + Node* or_node = + R.Binop(R.ops[o], l ? add_node : one, l ? one : add_node); + Node* r = R.reduce(or_node); + + CHECK_EQ(R.ops[o + 1]->opcode(), r->op()->opcode()); + CHECK_EQ(IrOpcode::kInt32Add, add_node->opcode()); + bool is_signed = l ? R.signedness[o] : R.signedness[o + 1]; + + Type* add_type = NodeProperties::GetBounds(add_node).upper; + CHECK(add_type->Is(I32Type(is_signed))); + } + } + } + } + } + { + JSBitwiseShiftTypedLoweringTester R; + + for (int o = 0; o < R.kNumberOps; o += 2) { + for (size_t i = 0; i < ARRAY_SIZE(kInt32Types); i++) { + Node* n0 = R.Parameter(kInt32Types[i]); + for (size_t j = 0; j < ARRAY_SIZE(kInt32Types); j++) { + Node* n1 = R.Parameter(kInt32Types[j]); + Node* one = R.graph.NewNode(R.common.NumberConstant(1)); + + for (int l = 0; l < 2; l++) { + Node* add_node = R.Binop(R.simplified.NumberAdd(), n0, n1); + Node* or_node = + R.Binop(R.ops[o], l ? add_node : one, l ? one : add_node); + Node* r = R.reduce(or_node); + + CHECK_EQ(R.ops[o + 1]->opcode(), r->op()->opcode()); + CHECK_EQ(IrOpcode::kInt32Add, add_node->opcode()); + bool is_signed = l ? R.signedness[o] : R.signedness[o + 1]; + + Type* add_type = NodeProperties::GetBounds(add_node).upper; + CHECK(add_type->Is(I32Type(is_signed))); + } + } + } + } + } +} + + +TEST(Int32AddNarrowingNotOwned) { + JSBitwiseTypedLoweringTester R; + + for (int o = 0; o < R.kNumberOps; o += 2) { + Node* n0 = R.Parameter(I32Type(R.signedness[o])); + Node* n1 = R.Parameter(I32Type(R.signedness[o + 1])); + Node* one = R.graph.NewNode(R.common.NumberConstant(1)); + + Node* add_node = R.Binop(R.simplified.NumberAdd(), n0, n1); + Node* or_node = R.Binop(R.ops[o], add_node, one); + Node* other_use = R.Binop(R.simplified.NumberAdd(), add_node, one); + Node* r = R.reduce(or_node); + CHECK_EQ(R.ops[o + 1]->opcode(), r->op()->opcode()); + // Should not be reduced to Int32Add because of the other number add. + CHECK_EQ(IrOpcode::kNumberAdd, add_node->opcode()); + // Conversion to int32 should be done. + CheckToI32(add_node, r->InputAt(0), R.signedness[o]); + CheckToI32(one, r->InputAt(1), R.signedness[o + 1]); + // The other use should also not be touched. 
+ CHECK_EQ(add_node, other_use->InputAt(0)); + CHECK_EQ(one, other_use->InputAt(1)); + } +} + + +TEST(Int32Comparisons) { + JSTypedLoweringTester R; + + struct Entry { + Operator* js_op; + Operator* uint_op; + Operator* int_op; + Operator* num_op; + bool commute; + }; + + Entry ops[] = { + {R.javascript.LessThan(), R.machine.Uint32LessThan(), + R.machine.Int32LessThan(), R.simplified.NumberLessThan(), false}, + {R.javascript.LessThanOrEqual(), R.machine.Uint32LessThanOrEqual(), + R.machine.Int32LessThanOrEqual(), R.simplified.NumberLessThanOrEqual(), + false}, + {R.javascript.GreaterThan(), R.machine.Uint32LessThan(), + R.machine.Int32LessThan(), R.simplified.NumberLessThan(), true}, + {R.javascript.GreaterThanOrEqual(), R.machine.Uint32LessThanOrEqual(), + R.machine.Int32LessThanOrEqual(), R.simplified.NumberLessThanOrEqual(), + true}}; + + for (size_t o = 0; o < ARRAY_SIZE(ops); o++) { + for (size_t i = 0; i < ARRAY_SIZE(kNumberTypes); i++) { + Type* t0 = kNumberTypes[i]; + Node* p0 = R.Parameter(t0, 0); + + for (size_t j = 0; j < ARRAY_SIZE(kNumberTypes); j++) { + Type* t1 = kNumberTypes[j]; + Node* p1 = R.Parameter(t1, 1); + + Node* cmp = R.Binop(ops[o].js_op, p0, p1); + Node* r = R.reduce(cmp); + + Operator* expected; + if (t0->Is(Type::Unsigned32()) && t1->Is(Type::Unsigned32())) { + expected = ops[o].uint_op; + } else if (t0->Is(Type::Signed32()) && t1->Is(Type::Signed32())) { + expected = ops[o].int_op; + } else { + expected = ops[o].num_op; + } + R.CheckPureBinop(expected, r); + if (ops[o].commute) { + CHECK_EQ(p1, r->InputAt(0)); + CHECK_EQ(p0, r->InputAt(1)); + } else { + CHECK_EQ(p0, r->InputAt(0)); + CHECK_EQ(p1, r->InputAt(1)); + } + } + } + } +} diff --git a/deps/v8/test/cctest/compiler/test-linkage.cc b/deps/v8/test/cctest/compiler/test-linkage.cc new file mode 100644 index 00000000000..6d9453f7c46 --- /dev/null +++ b/deps/v8/test/cctest/compiler/test-linkage.cc @@ -0,0 +1,113 @@ +// Copyright 2014 the V8 project authors. All rights reserved. +// Use of this source code is governed by a BSD-style license that can be +// found in the LICENSE file. + +#include "src/v8.h" + +#include "src/compiler.h" +#include "src/zone.h" + +#include "src/compiler/common-operator.h" +#include "src/compiler/generic-node-inl.h" +#include "src/compiler/graph.h" +#include "src/compiler/linkage.h" +#include "src/compiler/machine-operator.h" +#include "src/compiler/node.h" +#include "src/compiler/operator.h" +#include "src/compiler/pipeline.h" +#include "src/compiler/schedule.h" +#include "test/cctest/cctest.h" + +#if V8_TURBOFAN_TARGET + +using namespace v8::internal; +using namespace v8::internal::compiler; + +static SimpleOperator dummy_operator(IrOpcode::kParameter, Operator::kNoWrite, + 0, 0, "dummy"); + +// So we can get a real JS function. 
+static Handle<JSFunction> Compile(const char* source) { + Isolate* isolate = CcTest::i_isolate(); + Handle<String> source_code = isolate->factory() + ->NewStringFromUtf8(CStrVector(source)) + .ToHandleChecked(); + Handle<SharedFunctionInfo> shared_function = Compiler::CompileScript( + source_code, Handle<String>(), 0, 0, false, + Handle<Context>(isolate->native_context()), NULL, NULL, + v8::ScriptCompiler::kNoCompileOptions, NOT_NATIVES_CODE); + return isolate->factory()->NewFunctionFromSharedFunctionInfo( + shared_function, isolate->native_context()); +} + + +TEST(TestLinkageCreate) { + InitializedHandleScope handles; + Handle<JSFunction> function = Compile("a + b"); + CompilationInfoWithZone info(function); + Linkage linkage(&info); +} + + +TEST(TestLinkageJSFunctionIncoming) { + InitializedHandleScope handles; + + const char* sources[] = {"(function() { })", "(function(a) { })", + "(function(a,b) { })", "(function(a,b,c) { })"}; + + for (int i = 0; i < 3; i++) { + i::HandleScope handles(CcTest::i_isolate()); + Handle<JSFunction> function = v8::Utils::OpenHandle( + *v8::Handle<v8::Function>::Cast(CompileRun(sources[i]))); + CompilationInfoWithZone info(function); + Linkage linkage(&info); + + CallDescriptor* descriptor = linkage.GetIncomingDescriptor(); + CHECK_NE(NULL, descriptor); + + CHECK_EQ(1 + i, descriptor->ParameterCount()); + CHECK_EQ(1, descriptor->ReturnCount()); + CHECK_EQ(Operator::kNoProperties, descriptor->properties()); + CHECK_EQ(true, descriptor->IsJSFunctionCall()); + } +} + + +TEST(TestLinkageCodeStubIncoming) { + Isolate* isolate = CcTest::InitIsolateOnce(); + CompilationInfoWithZone info(static_cast<HydrogenCodeStub*>(NULL), isolate); + Linkage linkage(&info); + // TODO(titzer): test linkage creation with a bonafide code stub. + // this just checks current behavior. + CHECK_EQ(NULL, linkage.GetIncomingDescriptor()); +} + + +TEST(TestLinkageJSCall) { + HandleAndZoneScope handles; + Handle<JSFunction> function = Compile("a + c"); + CompilationInfoWithZone info(function); + Linkage linkage(&info); + + for (int i = 0; i < 32; i++) { + CallDescriptor* descriptor = linkage.GetJSCallDescriptor(i); + CHECK_NE(NULL, descriptor); + CHECK_EQ(i, descriptor->ParameterCount()); + CHECK_EQ(1, descriptor->ReturnCount()); + CHECK_EQ(Operator::kNoProperties, descriptor->properties()); + CHECK_EQ(true, descriptor->IsJSFunctionCall()); + } +} + + +TEST(TestLinkageRuntimeCall) { + // TODO(titzer): test linkage creation for outgoing runtime calls. +} + + +TEST(TestLinkageStubCall) { + // TODO(titzer): test linkage creation for outgoing stub calls. +} + + +#endif // V8_TURBOFAN_TARGET diff --git a/deps/v8/test/cctest/compiler/test-machine-operator-reducer.cc b/deps/v8/test/cctest/compiler/test-machine-operator-reducer.cc new file mode 100644 index 00000000000..c79a96a0941 --- /dev/null +++ b/deps/v8/test/cctest/compiler/test-machine-operator-reducer.cc @@ -0,0 +1,779 @@ +// Copyright 2014 the V8 project authors. All rights reserved. +// Use of this source code is governed by a BSD-style license that can be +// found in the LICENSE file. 
+ +#include "test/cctest/cctest.h" + +#include "src/base/utils/random-number-generator.h" +#include "src/compiler/graph-inl.h" +#include "src/compiler/machine-operator-reducer.h" +#include "test/cctest/compiler/value-helper.h" + +using namespace v8::internal; +using namespace v8::internal::compiler; + +template <typename T> +Operator* NewConstantOperator(CommonOperatorBuilder* common, volatile T value); + +template <> +Operator* NewConstantOperator<int32_t>(CommonOperatorBuilder* common, + volatile int32_t value) { + return common->Int32Constant(value); +} + +template <> +Operator* NewConstantOperator<double>(CommonOperatorBuilder* common, + volatile double value) { + return common->Float64Constant(value); +} + + +class ReducerTester : public HandleAndZoneScope { + public: + explicit ReducerTester(int num_parameters = 0) + : isolate(main_isolate()), + binop(NULL), + unop(NULL), + machine(main_zone()), + common(main_zone()), + graph(main_zone()), + maxuint32(Constant<int32_t>(kMaxUInt32)) { + Node* s = graph.NewNode(common.Start(num_parameters)); + graph.SetStart(s); + } + + Isolate* isolate; + Operator* binop; + Operator* unop; + MachineOperatorBuilder machine; + CommonOperatorBuilder common; + Graph graph; + Node* maxuint32; + + template <typename T> + Node* Constant(volatile T value) { + return graph.NewNode(NewConstantOperator<T>(&common, value)); + } + + // Check that the reduction of this binop applied to constants {a} and {b} + // yields the {expect} value. + template <typename T> + void CheckFoldBinop(volatile T expect, volatile T a, volatile T b) { + CheckFoldBinop<T>(expect, Constant<T>(a), Constant<T>(b)); + } + + // Check that the reduction of this binop applied to {a} and {b} yields + // the {expect} value. + template <typename T> + void CheckFoldBinop(volatile T expect, Node* a, Node* b) { + CHECK_NE(NULL, binop); + Node* n = graph.NewNode(binop, a, b); + MachineOperatorReducer reducer(&graph); + Reduction reduction = reducer.Reduce(n); + CHECK(reduction.Changed()); + CHECK_NE(n, reduction.replacement()); + CHECK_EQ(expect, ValueOf<T>(reduction.replacement()->op())); + } + + // Check that the reduction of this binop applied to {a} and {b} yields + // the {expect} node. + void CheckBinop(Node* expect, Node* a, Node* b) { + CHECK_NE(NULL, binop); + Node* n = graph.NewNode(binop, a, b); + MachineOperatorReducer reducer(&graph); + Reduction reduction = reducer.Reduce(n); + CHECK(reduction.Changed()); + CHECK_EQ(expect, reduction.replacement()); + } + + // Check that the reduction of this binop applied to {left} and {right} yields + // this binop applied to {left_expect} and {right_expect}. + void CheckFoldBinop(Node* left_expect, Node* right_expect, Node* left, + Node* right) { + CHECK_NE(NULL, binop); + Node* n = graph.NewNode(binop, left, right); + MachineOperatorReducer reducer(&graph); + Reduction reduction = reducer.Reduce(n); + CHECK(reduction.Changed()); + CHECK_EQ(binop, reduction.replacement()->op()); + CHECK_EQ(left_expect, reduction.replacement()->InputAt(0)); + CHECK_EQ(right_expect, reduction.replacement()->InputAt(1)); + } + + // Check that the reduction of this binop applied to {left} and {right} yields + // the {op_expect} applied to {left_expect} and {right_expect}. 
+ template <typename T> + void CheckFoldBinop(volatile T left_expect, Operator* op_expect, + Node* right_expect, Node* left, Node* right) { + CHECK_NE(NULL, binop); + Node* n = graph.NewNode(binop, left, right); + MachineOperatorReducer reducer(&graph); + Reduction r = reducer.Reduce(n); + CHECK(r.Changed()); + CHECK_EQ(op_expect->opcode(), r.replacement()->op()->opcode()); + CHECK_EQ(left_expect, ValueOf<T>(r.replacement()->InputAt(0)->op())); + CHECK_EQ(right_expect, r.replacement()->InputAt(1)); + } + + // Check that the reduction of this binop applied to {left} and {right} yields + // the {op_expect} applied to {left_expect} and {right_expect}. + template <typename T> + void CheckFoldBinop(Node* left_expect, Operator* op_expect, + volatile T right_expect, Node* left, Node* right) { + CHECK_NE(NULL, binop); + Node* n = graph.NewNode(binop, left, right); + MachineOperatorReducer reducer(&graph); + Reduction r = reducer.Reduce(n); + CHECK(r.Changed()); + CHECK_EQ(op_expect->opcode(), r.replacement()->op()->opcode()); + CHECK_EQ(left_expect, r.replacement()->InputAt(0)); + CHECK_EQ(right_expect, ValueOf<T>(r.replacement()->InputAt(1)->op())); + } + + // Check that if the given constant appears on the left, the reducer will + // swap it to be on the right. + template <typename T> + void CheckPutConstantOnRight(volatile T constant) { + // TODO(titzer): CHECK(binop->HasProperty(Operator::kCommutative)); + Node* p = Parameter(); + Node* k = Constant<T>(constant); + { + Node* n = graph.NewNode(binop, k, p); + MachineOperatorReducer reducer(&graph); + Reduction reduction = reducer.Reduce(n); + CHECK(!reduction.Changed() || reduction.replacement() == n); + CHECK_EQ(p, n->InputAt(0)); + CHECK_EQ(k, n->InputAt(1)); + } + { + Node* n = graph.NewNode(binop, p, k); + MachineOperatorReducer reducer(&graph); + Reduction reduction = reducer.Reduce(n); + CHECK(!reduction.Changed()); + CHECK_EQ(p, n->InputAt(0)); + CHECK_EQ(k, n->InputAt(1)); + } + } + + // Check that if the given constant appears on the left, the reducer will + // *NOT* swap it to be on the right. 
+ template <typename T> + void CheckDontPutConstantOnRight(volatile T constant) { + CHECK(!binop->HasProperty(Operator::kCommutative)); + Node* p = Parameter(); + Node* k = Constant<T>(constant); + Node* n = graph.NewNode(binop, k, p); + MachineOperatorReducer reducer(&graph); + Reduction reduction = reducer.Reduce(n); + CHECK(!reduction.Changed()); + CHECK_EQ(k, n->InputAt(0)); + CHECK_EQ(p, n->InputAt(1)); + } + + Node* Parameter(int32_t index = 0) { + return graph.NewNode(common.Parameter(index), graph.start()); + } +}; + + +TEST(ReduceWord32And) { + ReducerTester R; + R.binop = R.machine.Word32And(); + + FOR_INT32_INPUTS(pl) { + FOR_INT32_INPUTS(pr) { + int32_t x = *pl, y = *pr; + R.CheckFoldBinop<int32_t>(x & y, x, y); + } + } + + R.CheckPutConstantOnRight(33); + R.CheckPutConstantOnRight(44000); + + Node* x = R.Parameter(); + Node* zero = R.Constant<int32_t>(0); + Node* minus_1 = R.Constant<int32_t>(-1); + + R.CheckBinop(zero, x, zero); // x & 0 => 0 + R.CheckBinop(zero, zero, x); // 0 & x => 0 + R.CheckBinop(x, x, minus_1); // x & -1 => x + R.CheckBinop(x, minus_1, x); // -1 & x => x + R.CheckBinop(x, x, x); // x & x => x +} + + +TEST(ReduceWord32Or) { + ReducerTester R; + R.binop = R.machine.Word32Or(); + + FOR_INT32_INPUTS(pl) { + FOR_INT32_INPUTS(pr) { + int32_t x = *pl, y = *pr; + R.CheckFoldBinop<int32_t>(x | y, x, y); + } + } + + R.CheckPutConstantOnRight(36); + R.CheckPutConstantOnRight(44001); + + Node* x = R.Parameter(); + Node* zero = R.Constant<int32_t>(0); + Node* minus_1 = R.Constant<int32_t>(-1); + + R.CheckBinop(x, x, zero); // x | 0 => x + R.CheckBinop(x, zero, x); // 0 | x => x + R.CheckBinop(minus_1, x, minus_1); // x | -1 => -1 + R.CheckBinop(minus_1, minus_1, x); // -1 | x => -1 + R.CheckBinop(x, x, x); // x | x => x +} + + +TEST(ReduceWord32Xor) { + ReducerTester R; + R.binop = R.machine.Word32Xor(); + + FOR_INT32_INPUTS(pl) { + FOR_INT32_INPUTS(pr) { + int32_t x = *pl, y = *pr; + R.CheckFoldBinop<int32_t>(x ^ y, x, y); + } + } + + R.CheckPutConstantOnRight(39); + R.CheckPutConstantOnRight(4403); + + Node* x = R.Parameter(); + Node* zero = R.Constant<int32_t>(0); + + R.CheckBinop(x, x, zero); // x ^ 0 => x + R.CheckBinop(x, zero, x); // 0 ^ x => x + R.CheckFoldBinop<int32_t>(0, x, x); // x ^ x => 0 +} + + +TEST(ReduceWord32Shl) { + ReducerTester R; + R.binop = R.machine.Word32Shl(); + + // TODO(titzer): out of range shifts + FOR_INT32_INPUTS(i) { + for (int y = 0; y < 32; y++) { + int32_t x = *i; + R.CheckFoldBinop<int32_t>(x << y, x, y); + } + } + + R.CheckDontPutConstantOnRight(44); + + Node* x = R.Parameter(); + Node* zero = R.Constant<int32_t>(0); + + R.CheckBinop(x, x, zero); // x << 0 => x +} + + +TEST(ReduceWord32Shr) { + ReducerTester R; + R.binop = R.machine.Word32Shr(); + + // TODO(titzer): test out of range shifts + FOR_UINT32_INPUTS(i) { + for (uint32_t y = 0; y < 32; y++) { + uint32_t x = *i; + R.CheckFoldBinop<int32_t>(x >> y, x, y); + } + } + + R.CheckDontPutConstantOnRight(44); + + Node* x = R.Parameter(); + Node* zero = R.Constant<int32_t>(0); + + R.CheckBinop(x, x, zero); // x >>> 0 => x +} + + +TEST(ReduceWord32Sar) { + ReducerTester R; + R.binop = R.machine.Word32Sar(); + + // TODO(titzer): test out of range shifts + FOR_INT32_INPUTS(i) { + for (int32_t y = 0; y < 32; y++) { + int32_t x = *i; + R.CheckFoldBinop<int32_t>(x >> y, x, y); + } + } + + R.CheckDontPutConstantOnRight(44); + + Node* x = R.Parameter(); + Node* zero = R.Constant<int32_t>(0); + + R.CheckBinop(x, x, zero); // x >> 0 => x +} + + +TEST(ReduceWord32Equal) { + ReducerTester
R; + R.binop = R.machine.Word32Equal(); + + FOR_INT32_INPUTS(pl) { + FOR_INT32_INPUTS(pr) { + int32_t x = *pl, y = *pr; + R.CheckFoldBinop<int32_t>(x == y ? 1 : 0, x, y); + } + } + + R.CheckPutConstantOnRight(48); + R.CheckPutConstantOnRight(-48); + + Node* x = R.Parameter(0); + Node* y = R.Parameter(1); + Node* zero = R.Constant<int32_t>(0); + Node* sub = R.graph.NewNode(R.machine.Int32Sub(), x, y); + + R.CheckFoldBinop<int32_t>(1, x, x); // x == x => 1 + R.CheckFoldBinop(x, y, sub, zero); // x - y == 0 => x == y + R.CheckFoldBinop(x, y, zero, sub); // 0 == x - y => x == y +} + + +TEST(ReduceInt32Add) { + ReducerTester R; + R.binop = R.machine.Int32Add(); + + FOR_INT32_INPUTS(pl) { + FOR_INT32_INPUTS(pr) { + int32_t x = *pl, y = *pr; + R.CheckFoldBinop<int32_t>(x + y, x, y); // TODO(titzer): signed overflow + } + } + + R.CheckPutConstantOnRight(41); + R.CheckPutConstantOnRight(4407); + + Node* x = R.Parameter(); + Node* zero = R.Constant<int32_t>(0); + + R.CheckBinop(x, x, zero); // x + 0 => x + R.CheckBinop(x, zero, x); // 0 + x => x +} + + +TEST(ReduceInt32Sub) { + ReducerTester R; + R.binop = R.machine.Int32Sub(); + + FOR_INT32_INPUTS(pl) { + FOR_INT32_INPUTS(pr) { + int32_t x = *pl, y = *pr; + R.CheckFoldBinop<int32_t>(x - y, x, y); + } + } + + R.CheckDontPutConstantOnRight(412); + + Node* x = R.Parameter(); + Node* zero = R.Constant<int32_t>(0); + + R.CheckBinop(x, x, zero); // x - 0 => x +} + + +TEST(ReduceInt32Mul) { + ReducerTester R; + R.binop = R.machine.Int32Mul(); + + FOR_INT32_INPUTS(pl) { + FOR_INT32_INPUTS(pr) { + int32_t x = *pl, y = *pr; + R.CheckFoldBinop<int32_t>(x * y, x, y); // TODO(titzer): signed overflow + } + } + + R.CheckPutConstantOnRight(4111); + R.CheckPutConstantOnRight(-4407); + + Node* x = R.Parameter(); + Node* zero = R.Constant<int32_t>(0); + Node* one = R.Constant<int32_t>(1); + Node* minus_one = R.Constant<int32_t>(-1); + + R.CheckBinop(zero, x, zero); // x * 0 => 0 + R.CheckBinop(zero, zero, x); // 0 * x => 0 + R.CheckBinop(x, x, one); // x * 1 => x + R.CheckBinop(x, one, x); // 1 * x => x + R.CheckFoldBinop<int32_t>(0, R.machine.Int32Sub(), x, minus_one, + x); // -1 * x => 0 - x + R.CheckFoldBinop<int32_t>(0, R.machine.Int32Sub(), x, x, + minus_one); // x * -1 => 0 - x + + for (int32_t n = 1; n < 31; ++n) { + Node* multiplier = R.Constant<int32_t>(1 << n); + R.CheckFoldBinop<int32_t>(x, R.machine.Word32Shl(), n, x, + multiplier); // x * 2^n => x << n + R.CheckFoldBinop<int32_t>(x, R.machine.Word32Shl(), n, multiplier, + x); // 2^n * x => x << n + } +} + + +TEST(ReduceInt32Div) { + ReducerTester R; + R.binop = R.machine.Int32Div(); + + FOR_INT32_INPUTS(pl) { + FOR_INT32_INPUTS(pr) { + int32_t x = *pl, y = *pr; + if (y == 0) continue; // TODO(titzer): test / 0 + int32_t r = y == -1 ? 
-x : x / y; // INT_MIN / -1 may explode in C + R.CheckFoldBinop<int32_t>(r, x, y); + } + } + + R.CheckDontPutConstantOnRight(41111); + R.CheckDontPutConstantOnRight(-44071); + + Node* x = R.Parameter(); + Node* one = R.Constant<int32_t>(1); + Node* minus_one = R.Constant<int32_t>(-1); + + R.CheckBinop(x, x, one); // x / 1 => x + // TODO(titzer): // 0 / x => 0 if x != 0 + // TODO(titzer): // x / 2^n => x >> n and round + R.CheckFoldBinop<int32_t>(0, R.machine.Int32Sub(), x, x, + minus_one); // x / -1 => 0 - x +} + + +TEST(ReduceInt32UDiv) { + ReducerTester R; + R.binop = R.machine.Int32UDiv(); + + FOR_UINT32_INPUTS(pl) { + FOR_UINT32_INPUTS(pr) { + uint32_t x = *pl, y = *pr; + if (y == 0) continue; // TODO(titzer): test / 0 + R.CheckFoldBinop<int32_t>(x / y, x, y); + } + } + + R.CheckDontPutConstantOnRight(41311); + R.CheckDontPutConstantOnRight(-44371); + + Node* x = R.Parameter(); + Node* one = R.Constant<int32_t>(1); + + R.CheckBinop(x, x, one); // x / 1 => x + // TODO(titzer): // 0 / x => 0 if x != 0 + + for (uint32_t n = 1; n < 32; ++n) { + Node* divisor = R.Constant<int32_t>(1u << n); + R.CheckFoldBinop<int32_t>(x, R.machine.Word32Shr(), n, x, + divisor); // x / 2^n => x >> n + } +} + + +TEST(ReduceInt32Mod) { + ReducerTester R; + R.binop = R.machine.Int32Mod(); + + FOR_INT32_INPUTS(pl) { + FOR_INT32_INPUTS(pr) { + int32_t x = *pl, y = *pr; + if (y == 0) continue; // TODO(titzer): test % 0 + int32_t r = y == -1 ? 0 : x % y; // INT_MIN % -1 may explode in C + R.CheckFoldBinop<int32_t>(r, x, y); + } + } + + R.CheckDontPutConstantOnRight(413); + R.CheckDontPutConstantOnRight(-4401); + + Node* x = R.Parameter(); + Node* one = R.Constant<int32_t>(1); + + R.CheckFoldBinop<int32_t>(0, x, one); // x % 1 => 0 + // TODO(titzer): // x % 2^n => x & 2^n-1 and round +} + + +TEST(ReduceInt32UMod) { + ReducerTester R; + R.binop = R.machine.Int32UMod(); + + FOR_INT32_INPUTS(pl) { + FOR_INT32_INPUTS(pr) { + uint32_t x = *pl, y = *pr; + if (y == 0) continue; // TODO(titzer): test x % 0 + R.CheckFoldBinop<int32_t>(x % y, x, y); + } + } + + R.CheckDontPutConstantOnRight(417); + R.CheckDontPutConstantOnRight(-4371); + + Node* x = R.Parameter(); + Node* one = R.Constant<int32_t>(1); + + R.CheckFoldBinop<int32_t>(0, x, one); // x % 1 => 0 + + for (uint32_t n = 1; n < 32; ++n) { + Node* divisor = R.Constant<int32_t>(1u << n); + R.CheckFoldBinop<int32_t>(x, R.machine.Word32And(), (1u << n) - 1, x, + divisor); // x % 2^n => x & 2^n-1 + } +} + + +TEST(ReduceInt32LessThan) { + ReducerTester R; + R.binop = R.machine.Int32LessThan(); + + FOR_INT32_INPUTS(pl) { + FOR_INT32_INPUTS(pr) { + int32_t x = *pl, y = *pr; + R.CheckFoldBinop<int32_t>(x < y ? 1 : 0, x, y); + } + } + + R.CheckDontPutConstantOnRight(41399); + R.CheckDontPutConstantOnRight(-440197); + + Node* x = R.Parameter(0); + Node* y = R.Parameter(1); + Node* zero = R.Constant<int32_t>(0); + Node* sub = R.graph.NewNode(R.machine.Int32Sub(), x, y); + + R.CheckFoldBinop<int32_t>(0, x, x); // x < x => 0 + R.CheckFoldBinop(x, y, sub, zero); // x - y < 0 => x < y + R.CheckFoldBinop(y, x, zero, sub); // 0 < x - y => y < x +} + + +TEST(ReduceInt32LessThanOrEqual) { + ReducerTester R; + R.binop = R.machine.Int32LessThanOrEqual(); + + FOR_INT32_INPUTS(pl) { + FOR_INT32_INPUTS(pr) { + int32_t x = *pl, y = *pr; + R.CheckFoldBinop<int32_t>(x <= y ? 
1 : 0, x, y); + } + } + + FOR_INT32_INPUTS(i) { R.CheckDontPutConstantOnRight<int32_t>(*i); } + + Node* x = R.Parameter(0); + Node* y = R.Parameter(1); + Node* zero = R.Constant<int32_t>(0); + Node* sub = R.graph.NewNode(R.machine.Int32Sub(), x, y); + + R.CheckFoldBinop<int32_t>(1, x, x); // x <= x => 1 + R.CheckFoldBinop(x, y, sub, zero); // x - y <= 0 => x <= y + R.CheckFoldBinop(y, x, zero, sub); // 0 <= x - y => y <= x +} + + +TEST(ReduceUint32LessThan) { + ReducerTester R; + R.binop = R.machine.Uint32LessThan(); + + FOR_UINT32_INPUTS(pl) { + FOR_UINT32_INPUTS(pr) { + uint32_t x = *pl, y = *pr; + R.CheckFoldBinop<int32_t>(x < y ? 1 : 0, x, y); + } + } + + R.CheckDontPutConstantOnRight(41399); + R.CheckDontPutConstantOnRight(-440197); + + Node* x = R.Parameter(); + Node* max = R.maxuint32; + Node* zero = R.Constant<int32_t>(0); + + R.CheckFoldBinop<int32_t>(0, max, x); // M < x => 0 + R.CheckFoldBinop<int32_t>(0, x, zero); // x < 0 => 0 + R.CheckFoldBinop<int32_t>(0, x, x); // x < x => 0 +} + + +TEST(ReduceUint32LessThanOrEqual) { + ReducerTester R; + R.binop = R.machine.Uint32LessThanOrEqual(); + + FOR_UINT32_INPUTS(pl) { + FOR_UINT32_INPUTS(pr) { + uint32_t x = *pl, y = *pr; + R.CheckFoldBinop<int32_t>(x <= y ? 1 : 0, x, y); + } + } + + R.CheckDontPutConstantOnRight(41399); + R.CheckDontPutConstantOnRight(-440197); + + Node* x = R.Parameter(); + Node* max = R.maxuint32; + Node* zero = R.Constant<int32_t>(0); + + R.CheckFoldBinop<int32_t>(1, x, max); // x <= M => 1 + R.CheckFoldBinop<int32_t>(1, zero, x); // 0 <= x => 1 + R.CheckFoldBinop<int32_t>(1, x, x); // x <= x => 1 +} + + +TEST(ReduceLoadStore) { + ReducerTester R; + + Node* base = R.Constant<int32_t>(11); + Node* index = R.Constant<int32_t>(4); + Node* load = R.graph.NewNode(R.machine.Load(kMachineWord32), base, index); + + { + MachineOperatorReducer reducer(&R.graph); + Reduction reduction = reducer.Reduce(load); + CHECK(!reduction.Changed()); // loads should not be reduced. + } + + { + Node* store = R.graph.NewNode( + R.machine.Store(kMachineWord32, kNoWriteBarrier), base, index, load); + MachineOperatorReducer reducer(&R.graph); + Reduction reduction = reducer.Reduce(store); + CHECK(!reduction.Changed()); // stores should not be reduced. 
+ } +} + + +static void CheckNans(ReducerTester* R) { + Node* x = R->Parameter(); + std::vector<double> nans = ValueHelper::nan_vector(); + for (std::vector<double>::const_iterator pl = nans.begin(); pl != nans.end(); + ++pl) { + for (std::vector<double>::const_iterator pr = nans.begin(); + pr != nans.end(); ++pr) { + Node* nan1 = R->Constant<double>(*pl); + Node* nan2 = R->Constant<double>(*pr); + R->CheckBinop(nan1, x, nan1); // x % NaN => NaN + R->CheckBinop(nan1, nan1, x); // NaN % x => NaN + R->CheckBinop(nan1, nan2, nan1); // NaN % NaN => NaN + } + } +} + + +TEST(ReduceFloat64Add) { + ReducerTester R; + R.binop = R.machine.Float64Add(); + + FOR_FLOAT64_INPUTS(pl) { + FOR_FLOAT64_INPUTS(pr) { + double x = *pl, y = *pr; + R.CheckFoldBinop<double>(x + y, x, y); + } + } + + FOR_FLOAT64_INPUTS(i) { R.CheckPutConstantOnRight(*i); } + // TODO(titzer): CheckNans(&R); +} + + +TEST(ReduceFloat64Sub) { + ReducerTester R; + R.binop = R.machine.Float64Sub(); + + FOR_FLOAT64_INPUTS(pl) { + FOR_FLOAT64_INPUTS(pr) { + double x = *pl, y = *pr; + R.CheckFoldBinop<double>(x - y, x, y); + } + } + // TODO(titzer): CheckNans(&R); +} + + +TEST(ReduceFloat64Mul) { + ReducerTester R; + R.binop = R.machine.Float64Mul(); + + FOR_FLOAT64_INPUTS(pl) { + FOR_FLOAT64_INPUTS(pr) { + double x = *pl, y = *pr; + R.CheckFoldBinop<double>(x * y, x, y); + } + } + + double inf = V8_INFINITY; + R.CheckPutConstantOnRight(-inf); + R.CheckPutConstantOnRight(-0.1); + R.CheckPutConstantOnRight(0.1); + R.CheckPutConstantOnRight(inf); + + Node* x = R.Parameter(); + Node* one = R.Constant<double>(1.0); + + R.CheckBinop(x, x, one); // x * 1.0 => x + R.CheckBinop(x, one, x); // 1.0 * x => x + + CheckNans(&R); +} + + +TEST(ReduceFloat64Div) { + ReducerTester R; + R.binop = R.machine.Float64Div(); + + FOR_FLOAT64_INPUTS(pl) { + FOR_FLOAT64_INPUTS(pr) { + double x = *pl, y = *pr; + R.CheckFoldBinop<double>(x / y, x, y); + } + } + + Node* x = R.Parameter(); + Node* one = R.Constant<double>(1.0); + + R.CheckBinop(x, x, one); // x / 1.0 => x + + CheckNans(&R); +} + + +TEST(ReduceFloat64Mod) { + ReducerTester R; + R.binop = R.machine.Float64Mod(); + + FOR_FLOAT64_INPUTS(pl) { + FOR_FLOAT64_INPUTS(pr) { + double x = *pl, y = *pr; + R.CheckFoldBinop<double>(modulo(x, y), x, y); + } + } + + CheckNans(&R); +} + + +// TODO(titzer): test MachineOperatorReducer for Word64And +// TODO(titzer): test MachineOperatorReducer for Word64Or +// TODO(titzer): test MachineOperatorReducer for Word64Xor +// TODO(titzer): test MachineOperatorReducer for Word64Shl +// TODO(titzer): test MachineOperatorReducer for Word64Shr +// TODO(titzer): test MachineOperatorReducer for Word64Sar +// TODO(titzer): test MachineOperatorReducer for Word64Equal +// TODO(titzer): test MachineOperatorReducer for Word64Not +// TODO(titzer): test MachineOperatorReducer for Int64Add +// TODO(titzer): test MachineOperatorReducer for Int64Sub +// TODO(titzer): test MachineOperatorReducer for Int64Mul +// TODO(titzer): test MachineOperatorReducer for Int64UMul +// TODO(titzer): test MachineOperatorReducer for Int64Div +// TODO(titzer): test MachineOperatorReducer for Int64UDiv +// TODO(titzer): test MachineOperatorReducer for Int64Mod +// TODO(titzer): test MachineOperatorReducer for Int64UMod +// TODO(titzer): test MachineOperatorReducer for Int64Neg +// TODO(titzer): test MachineOperatorReducer for ChangeInt32ToFloat64 +// TODO(titzer): test MachineOperatorReducer for ChangeFloat64ToInt32 +// TODO(titzer): test MachineOperatorReducer for Float64Compare diff --git 
a/deps/v8/test/cctest/compiler/test-node-algorithm.cc b/deps/v8/test/cctest/compiler/test-node-algorithm.cc new file mode 100644 index 00000000000..10f98a66a7c --- /dev/null +++ b/deps/v8/test/cctest/compiler/test-node-algorithm.cc @@ -0,0 +1,330 @@ +// Copyright 2013 the V8 project authors. All rights reserved. +// Use of this source code is governed by a BSD-style license that can be +// found in the LICENSE file. + +#include <vector> + +#include "src/v8.h" + +#include "graph-tester.h" +#include "src/compiler/common-operator.h" +#include "src/compiler/generic-node.h" +#include "src/compiler/generic-node-inl.h" +#include "src/compiler/graph.h" +#include "src/compiler/graph-inl.h" +#include "src/compiler/graph-visualizer.h" +#include "src/compiler/operator.h" + +using namespace v8::internal; +using namespace v8::internal::compiler; + +static SimpleOperator dummy_operator(IrOpcode::kParameter, Operator::kNoWrite, + 0, 0, "dummy"); + +class PreNodeVisitor : public NullNodeVisitor { + public: + GenericGraphVisit::Control Pre(Node* node) { + printf("NODE ID: %d\n", node->id()); + nodes_.push_back(node); + return GenericGraphVisit::CONTINUE; + } + std::vector<Node*> nodes_; +}; + + +class PostNodeVisitor : public NullNodeVisitor { + public: + GenericGraphVisit::Control Post(Node* node) { + printf("NODE ID: %d\n", node->id()); + nodes_.push_back(node); + return GenericGraphVisit::CONTINUE; + } + std::vector<Node*> nodes_; +}; + + +TEST(TestUseNodeVisitEmpty) { + GraphWithStartNodeTester graph; + + PreNodeVisitor node_visitor; + graph.VisitNodeUsesFromStart(&node_visitor); + + CHECK_EQ(1, static_cast<int>(node_visitor.nodes_.size())); +} + + +TEST(TestUseNodePreOrderVisitSimple) { + GraphWithStartNodeTester graph; + Node* n2 = graph.NewNode(&dummy_operator, graph.start()); + Node* n3 = graph.NewNode(&dummy_operator, n2); + Node* n4 = graph.NewNode(&dummy_operator, n2, n3); + Node* n5 = graph.NewNode(&dummy_operator, n4, n2); + graph.SetEnd(n5); + + PreNodeVisitor node_visitor; + graph.VisitNodeUsesFromStart(&node_visitor); + + CHECK_EQ(5, static_cast<int>(node_visitor.nodes_.size())); + CHECK(graph.start()->id() == node_visitor.nodes_[0]->id()); + CHECK(n2->id() == node_visitor.nodes_[1]->id()); + CHECK(n3->id() == node_visitor.nodes_[2]->id()); + CHECK(n4->id() == node_visitor.nodes_[3]->id()); + CHECK(n5->id() == node_visitor.nodes_[4]->id()); +} + + +TEST(TestInputNodePreOrderVisitSimple) { + GraphWithStartNodeTester graph; + Node* n2 = graph.NewNode(&dummy_operator, graph.start()); + Node* n3 = graph.NewNode(&dummy_operator, n2); + Node* n4 = graph.NewNode(&dummy_operator, n2, n3); + Node* n5 = graph.NewNode(&dummy_operator, n4, n2); + graph.SetEnd(n5); + + PreNodeVisitor node_visitor; + graph.VisitNodeInputsFromEnd(&node_visitor); + CHECK_EQ(5, static_cast<int>(node_visitor.nodes_.size())); + CHECK(n5->id() == node_visitor.nodes_[0]->id()); + CHECK(n4->id() == node_visitor.nodes_[1]->id()); + CHECK(n2->id() == node_visitor.nodes_[2]->id()); + CHECK(graph.start()->id() == node_visitor.nodes_[3]->id()); + CHECK(n3->id() == node_visitor.nodes_[4]->id()); +} + + +TEST(TestUseNodePostOrderVisitSimple) { + GraphWithStartNodeTester graph; + Node* n2 = graph.NewNode(&dummy_operator, graph.start()); + Node* n3 = graph.NewNode(&dummy_operator, graph.start()); + Node* n4 = graph.NewNode(&dummy_operator, n2); + Node* n5 = graph.NewNode(&dummy_operator, n2); + Node* n6 = graph.NewNode(&dummy_operator, n2); + Node* n7 = graph.NewNode(&dummy_operator, n3); + Node* end_dependencies[4] = {n4, n5, n6, n7}; + 
Node* n8 = graph.NewNode(&dummy_operator, 4, end_dependencies); + graph.SetEnd(n8); + + PostNodeVisitor node_visitor; + graph.VisitNodeUsesFromStart(&node_visitor); + + CHECK_EQ(8, static_cast<int>(node_visitor.nodes_.size())); + CHECK(graph.end()->id() == node_visitor.nodes_[0]->id()); + CHECK(n4->id() == node_visitor.nodes_[1]->id()); + CHECK(n5->id() == node_visitor.nodes_[2]->id()); + CHECK(n6->id() == node_visitor.nodes_[3]->id()); + CHECK(n2->id() == node_visitor.nodes_[4]->id()); + CHECK(n7->id() == node_visitor.nodes_[5]->id()); + CHECK(n3->id() == node_visitor.nodes_[6]->id()); + CHECK(graph.start()->id() == node_visitor.nodes_[7]->id()); +} + + +TEST(TestUseNodePostOrderVisitLong) { + GraphWithStartNodeTester graph; + Node* n2 = graph.NewNode(&dummy_operator, graph.start()); + Node* n3 = graph.NewNode(&dummy_operator, graph.start()); + Node* n4 = graph.NewNode(&dummy_operator, n2); + Node* n5 = graph.NewNode(&dummy_operator, n2); + Node* n6 = graph.NewNode(&dummy_operator, n3); + Node* n7 = graph.NewNode(&dummy_operator, n3); + Node* n8 = graph.NewNode(&dummy_operator, n5); + Node* n9 = graph.NewNode(&dummy_operator, n5); + Node* n10 = graph.NewNode(&dummy_operator, n9); + Node* n11 = graph.NewNode(&dummy_operator, n9); + Node* end_dependencies[6] = {n4, n8, n10, n11, n6, n7}; + Node* n12 = graph.NewNode(&dummy_operator, 6, end_dependencies); + graph.SetEnd(n12); + + PostNodeVisitor node_visitor; + graph.VisitNodeUsesFromStart(&node_visitor); + + CHECK_EQ(12, static_cast<int>(node_visitor.nodes_.size())); + CHECK(graph.end()->id() == node_visitor.nodes_[0]->id()); + CHECK(n4->id() == node_visitor.nodes_[1]->id()); + CHECK(n8->id() == node_visitor.nodes_[2]->id()); + CHECK(n10->id() == node_visitor.nodes_[3]->id()); + CHECK(n11->id() == node_visitor.nodes_[4]->id()); + CHECK(n9->id() == node_visitor.nodes_[5]->id()); + CHECK(n5->id() == node_visitor.nodes_[6]->id()); + CHECK(n2->id() == node_visitor.nodes_[7]->id()); + CHECK(n6->id() == node_visitor.nodes_[8]->id()); + CHECK(n7->id() == node_visitor.nodes_[9]->id()); + CHECK(n3->id() == node_visitor.nodes_[10]->id()); + CHECK(graph.start()->id() == node_visitor.nodes_[11]->id()); +} + + +TEST(TestUseNodePreOrderVisitCycle) { + GraphWithStartNodeTester graph; + Node* n0 = graph.start_node(); + Node* n1 = graph.NewNode(&dummy_operator, n0); + Node* n2 = graph.NewNode(&dummy_operator, n1); + n0->AppendInput(graph.main_zone(), n2); + graph.SetStart(n0); + graph.SetEnd(n2); + + PreNodeVisitor node_visitor; + graph.VisitNodeUsesFromStart(&node_visitor); + + CHECK_EQ(3, static_cast<int>(node_visitor.nodes_.size())); + CHECK(n0->id() == node_visitor.nodes_[0]->id()); + CHECK(n1->id() == node_visitor.nodes_[1]->id()); + CHECK(n2->id() == node_visitor.nodes_[2]->id()); +} + + +struct ReenterNodeVisitor : NullNodeVisitor { + GenericGraphVisit::Control Pre(Node* node) { + printf("[%d] PRE NODE: %d\n", static_cast<int>(nodes_.size()), node->id()); + nodes_.push_back(node->id()); + int size = static_cast<int>(nodes_.size()); + switch (node->id()) { + case 0: + return size < 6 ? GenericGraphVisit::REENTER : GenericGraphVisit::SKIP; + case 1: + return size < 4 ? GenericGraphVisit::DEFER + : GenericGraphVisit::CONTINUE; + default: + return GenericGraphVisit::REENTER; + } + } + + GenericGraphVisit::Control Post(Node* node) { + printf("[%d] POST NODE: %d\n", static_cast<int>(nodes_.size()), node->id()); + nodes_.push_back(-node->id()); + return node->id() == 4 ? 
GenericGraphVisit::REENTER + : GenericGraphVisit::CONTINUE; + } + + void PreEdge(Node* from, int index, Node* to) { + printf("[%d] PRE EDGE: %d-%d\n", static_cast<int>(edges_.size()), + from->id(), to->id()); + edges_.push_back(std::make_pair(from->id(), to->id())); + } + + void PostEdge(Node* from, int index, Node* to) { + printf("[%d] POST EDGE: %d-%d\n", static_cast<int>(edges_.size()), + from->id(), to->id()); + edges_.push_back(std::make_pair(-from->id(), -to->id())); + } + + std::vector<int> nodes_; + std::vector<std::pair<int, int> > edges_; +}; + + +TEST(TestUseNodeReenterVisit) { + GraphWithStartNodeTester graph; + Node* n0 = graph.start_node(); + Node* n1 = graph.NewNode(&dummy_operator, n0); + Node* n2 = graph.NewNode(&dummy_operator, n0); + Node* n3 = graph.NewNode(&dummy_operator, n2); + Node* n4 = graph.NewNode(&dummy_operator, n0); + Node* n5 = graph.NewNode(&dummy_operator, n4); + n0->AppendInput(graph.main_zone(), n3); + graph.SetStart(n0); + graph.SetEnd(n5); + + ReenterNodeVisitor visitor; + graph.VisitNodeUsesFromStart(&visitor); + + CHECK_EQ(22, static_cast<int>(visitor.nodes_.size())); + CHECK_EQ(24, static_cast<int>(visitor.edges_.size())); + + CHECK(n0->id() == visitor.nodes_[0]); + CHECK(n0->id() == visitor.edges_[0].first); + CHECK(n1->id() == visitor.edges_[0].second); + CHECK(n1->id() == visitor.nodes_[1]); + // N1 is deferred. + CHECK(-n1->id() == visitor.edges_[1].second); + CHECK(-n0->id() == visitor.edges_[1].first); + CHECK(n0->id() == visitor.edges_[2].first); + CHECK(n2->id() == visitor.edges_[2].second); + CHECK(n2->id() == visitor.nodes_[2]); + CHECK(n2->id() == visitor.edges_[3].first); + CHECK(n3->id() == visitor.edges_[3].second); + CHECK(n3->id() == visitor.nodes_[3]); + // Circle back to N0, which we may reenter for now. + CHECK(n3->id() == visitor.edges_[4].first); + CHECK(n0->id() == visitor.edges_[4].second); + CHECK(n0->id() == visitor.nodes_[4]); + CHECK(n0->id() == visitor.edges_[5].first); + CHECK(n1->id() == visitor.edges_[5].second); + CHECK(n1->id() == visitor.nodes_[5]); + // This time N1 is no longer deferred. + CHECK(-n1->id() == visitor.nodes_[6]); + CHECK(-n1->id() == visitor.edges_[6].second); + CHECK(-n0->id() == visitor.edges_[6].first); + CHECK(n0->id() == visitor.edges_[7].first); + CHECK(n2->id() == visitor.edges_[7].second); + CHECK(n2->id() == visitor.nodes_[7]); + CHECK(n2->id() == visitor.edges_[8].first); + CHECK(n3->id() == visitor.edges_[8].second); + CHECK(n3->id() == visitor.nodes_[8]); + CHECK(n3->id() == visitor.edges_[9].first); + CHECK(n0->id() == visitor.edges_[9].second); + CHECK(n0->id() == visitor.nodes_[9]); + // This time we break at N0 and skip it. 
+ CHECK(-n0->id() == visitor.edges_[10].second); + CHECK(-n3->id() == visitor.edges_[10].first); + CHECK(-n3->id() == visitor.nodes_[10]); + CHECK(-n3->id() == visitor.edges_[11].second); + CHECK(-n2->id() == visitor.edges_[11].first); + CHECK(-n2->id() == visitor.nodes_[11]); + CHECK(-n2->id() == visitor.edges_[12].second); + CHECK(-n0->id() == visitor.edges_[12].first); + CHECK(n0->id() == visitor.edges_[13].first); + CHECK(n4->id() == visitor.edges_[13].second); + CHECK(n4->id() == visitor.nodes_[12]); + CHECK(n4->id() == visitor.edges_[14].first); + CHECK(n5->id() == visitor.edges_[14].second); + CHECK(n5->id() == visitor.nodes_[13]); + CHECK(-n5->id() == visitor.nodes_[14]); + CHECK(-n5->id() == visitor.edges_[15].second); + CHECK(-n4->id() == visitor.edges_[15].first); + CHECK(-n4->id() == visitor.nodes_[15]); + CHECK(-n4->id() == visitor.edges_[16].second); + CHECK(-n0->id() == visitor.edges_[16].first); + CHECK(-n0->id() == visitor.nodes_[16]); + CHECK(-n0->id() == visitor.edges_[17].second); + CHECK(-n3->id() == visitor.edges_[17].first); + CHECK(-n3->id() == visitor.nodes_[17]); + CHECK(-n3->id() == visitor.edges_[18].second); + CHECK(-n2->id() == visitor.edges_[18].first); + CHECK(-n2->id() == visitor.nodes_[18]); + CHECK(-n2->id() == visitor.edges_[19].second); + CHECK(-n0->id() == visitor.edges_[19].first); + // N4 may be reentered. + CHECK(n0->id() == visitor.edges_[20].first); + CHECK(n4->id() == visitor.edges_[20].second); + CHECK(n4->id() == visitor.nodes_[19]); + CHECK(n4->id() == visitor.edges_[21].first); + CHECK(n5->id() == visitor.edges_[21].second); + CHECK(-n5->id() == visitor.edges_[22].second); + CHECK(-n4->id() == visitor.edges_[22].first); + CHECK(-n4->id() == visitor.nodes_[20]); + CHECK(-n4->id() == visitor.edges_[23].second); + CHECK(-n0->id() == visitor.edges_[23].first); + CHECK(-n0->id() == visitor.nodes_[21]); +} + + +TEST(TestPrintNodeGraphToNodeGraphviz) { + GraphWithStartNodeTester graph; + Node* n2 = graph.NewNode(&dummy_operator, graph.start()); + Node* n3 = graph.NewNode(&dummy_operator, graph.start()); + Node* n4 = graph.NewNode(&dummy_operator, n2); + Node* n5 = graph.NewNode(&dummy_operator, n2); + Node* n6 = graph.NewNode(&dummy_operator, n3); + Node* n7 = graph.NewNode(&dummy_operator, n3); + Node* n8 = graph.NewNode(&dummy_operator, n5); + Node* n9 = graph.NewNode(&dummy_operator, n5); + Node* n10 = graph.NewNode(&dummy_operator, n9); + Node* n11 = graph.NewNode(&dummy_operator, n9); + Node* end_dependencies[6] = {n4, n8, n10, n11, n6, n7}; + Node* n12 = graph.NewNode(&dummy_operator, 6, end_dependencies); + graph.SetEnd(n12); + + OFStream os(stdout); + os << AsDOT(graph); +} diff --git a/deps/v8/test/cctest/compiler/test-node-cache.cc b/deps/v8/test/cctest/compiler/test-node-cache.cc new file mode 100644 index 00000000000..23909a5f5ae --- /dev/null +++ b/deps/v8/test/cctest/compiler/test-node-cache.cc @@ -0,0 +1,160 @@ +// Copyright 2014 the V8 project authors. All rights reserved. +// Use of this source code is governed by a BSD-style license that can be +// found in the LICENSE file. 
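The node-cache tests that follow lean on one contract: Find(zone, key) returns a non-null pointer to the cache slot for that key, the slot reads as NULL until a caller stores a node into it, and a repeated lookup of the same key yields the same slot. A small sketch of that assumed contract (Int32SlotCache is a stand-in written for illustration, not V8's NodeCache, which may also miss under capacity pressure, hence the hit-counting tests):

    #include <cassert>
    #include <cstdint>
    #include <unordered_map>

    struct Node;  // opaque stand-in for compiler::Node

    // Assumed Find() contract, modeled after how the tests use it.
    class Int32SlotCache {
     public:
      // Returns the slot address for {key}; a fresh slot holds nullptr.
      // (std::unordered_map keeps element addresses stable across rehash.)
      Node** Find(int32_t key) { return &map_[key]; }

     private:
      std::unordered_map<int32_t, Node*> map_;
    };

    int main() {
      Int32SlotCache cache;
      Node** pos = cache.Find(42);
      assert(pos != nullptr && *pos == nullptr);  // empty until filled
      assert(cache.Find(42) == pos);              // repeat lookup: same slot
      return 0;
    }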
+ +#include "src/v8.h" + +#include "graph-tester.h" +#include "src/compiler/common-operator.h" +#include "src/compiler/node-cache.h" + +using namespace v8::internal; +using namespace v8::internal::compiler; + +TEST(Int32Constant_back_to_back) { + GraphTester graph; + Int32NodeCache cache; + + for (int i = -2000000000; i < 2000000000; i += 3315177) { + Node** pos = cache.Find(graph.zone(), i); + CHECK_NE(NULL, pos); + for (int j = 0; j < 3; j++) { + Node** npos = cache.Find(graph.zone(), i); + CHECK_EQ(pos, npos); + } + } +} + + +TEST(Int32Constant_five) { + GraphTester graph; + Int32NodeCache cache; + CommonOperatorBuilder common(graph.zone()); + + int32_t constants[] = {static_cast<int32_t>(0x80000000), -77, 0, 1, -1}; + + Node* nodes[ARRAY_SIZE(constants)]; + + for (size_t i = 0; i < ARRAY_SIZE(constants); i++) { + int32_t k = constants[i]; + Node* node = graph.NewNode(common.Int32Constant(k)); + *cache.Find(graph.zone(), k) = nodes[i] = node; + } + + for (size_t i = 0; i < ARRAY_SIZE(constants); i++) { + int32_t k = constants[i]; + CHECK_EQ(nodes[i], *cache.Find(graph.zone(), k)); + } +} + + +TEST(Int32Constant_hits) { + GraphTester graph; + Int32NodeCache cache; + const int32_t kSize = 1500; + Node** nodes = graph.zone()->NewArray<Node*>(kSize); + CommonOperatorBuilder common(graph.zone()); + + for (int i = 0; i < kSize; i++) { + int32_t v = i * -55; + nodes[i] = graph.NewNode(common.Int32Constant(v)); + *cache.Find(graph.zone(), v) = nodes[i]; + } + + int hits = 0; + for (int i = 0; i < kSize; i++) { + int32_t v = i * -55; + Node** pos = cache.Find(graph.zone(), v); + if (*pos != NULL) { + CHECK_EQ(nodes[i], *pos); + hits++; + } + } + CHECK_LT(4, hits); +} + + +TEST(Int64Constant_back_to_back) { + GraphTester graph; + Int64NodeCache cache; + + for (int64_t i = -2000000000; i < 2000000000; i += 3315177) { + Node** pos = cache.Find(graph.zone(), i); + CHECK_NE(NULL, pos); + for (int j = 0; j < 3; j++) { + Node** npos = cache.Find(graph.zone(), i); + CHECK_EQ(pos, npos); + } + } +} + + +TEST(Int64Constant_hits) { + GraphTester graph; + Int64NodeCache cache; + const int32_t kSize = 1500; + Node** nodes = graph.zone()->NewArray<Node*>(kSize); + CommonOperatorBuilder common(graph.zone()); + + for (int i = 0; i < kSize; i++) { + int64_t v = static_cast<int64_t>(i) * static_cast<int64_t>(5003001); + nodes[i] = graph.NewNode(common.Int32Constant(i)); + *cache.Find(graph.zone(), v) = nodes[i]; + } + + int hits = 0; + for (int i = 0; i < kSize; i++) { + int64_t v = static_cast<int64_t>(i) * static_cast<int64_t>(5003001); + Node** pos = cache.Find(graph.zone(), v); + if (*pos != NULL) { + CHECK_EQ(nodes[i], *pos); + hits++; + } + } + CHECK_LT(4, hits); +} + + +TEST(PtrConstant_back_to_back) { + GraphTester graph; + PtrNodeCache cache; + int32_t buffer[50]; + + for (int32_t* p = buffer; + (p - buffer) < static_cast<ptrdiff_t>(ARRAY_SIZE(buffer)); p++) { + Node** pos = cache.Find(graph.zone(), p); + CHECK_NE(NULL, pos); + for (int j = 0; j < 3; j++) { + Node** npos = cache.Find(graph.zone(), p); + CHECK_EQ(pos, npos); + } + } +} + + +TEST(PtrConstant_hits) { + GraphTester graph; + PtrNodeCache cache; + const int32_t kSize = 50; + int32_t buffer[kSize]; + Node* nodes[kSize]; + CommonOperatorBuilder common(graph.zone()); + + for (size_t i = 0; i < ARRAY_SIZE(buffer); i++) { + int k = static_cast<int>(i); + int32_t* p = &buffer[i]; + nodes[i] = graph.NewNode(common.Int32Constant(k)); + *cache.Find(graph.zone(), p) = nodes[i]; + } + + int hits = 0; + for (size_t i = 0; i < ARRAY_SIZE(buffer); i++) { + 
int32_t* p = &buffer[i]; + Node** pos = cache.Find(graph.zone(), p); + if (*pos != NULL) { + CHECK_EQ(nodes[i], *pos); + hits++; + } + } + CHECK_LT(4, hits); +} diff --git a/deps/v8/test/cctest/compiler/test-node.cc b/deps/v8/test/cctest/compiler/test-node.cc new file mode 100644 index 00000000000..6fe8573a2f7 --- /dev/null +++ b/deps/v8/test/cctest/compiler/test-node.cc @@ -0,0 +1,815 @@ +// Copyright 2013 the V8 project authors. All rights reserved. +// Use of this source code is governed by a BSD-style license that can be +// found in the LICENSE file. + +#include <functional> + +#include "src/v8.h" + +#include "graph-tester.h" +#include "src/compiler/generic-node-inl.h" +#include "src/compiler/node.h" +#include "src/compiler/operator.h" + +using namespace v8::internal; +using namespace v8::internal::compiler; + +static SimpleOperator dummy_operator(IrOpcode::kParameter, Operator::kNoWrite, + 0, 0, "dummy"); + +TEST(NodeAllocation) { + GraphTester graph; + Node* n1 = graph.NewNode(&dummy_operator); + Node* n2 = graph.NewNode(&dummy_operator); + CHECK(n2->id() != n1->id()); +} + + +TEST(NodeWithOpcode) { + GraphTester graph; + Node* n1 = graph.NewNode(&dummy_operator); + Node* n2 = graph.NewNode(&dummy_operator); + CHECK(n1->op() == &dummy_operator); + CHECK(n2->op() == &dummy_operator); +} + + +TEST(NodeInputs1) { + GraphTester graph; + Node* n0 = graph.NewNode(&dummy_operator); + Node* n2 = graph.NewNode(&dummy_operator, n0); + CHECK_EQ(1, n2->InputCount()); + CHECK(n0 == n2->InputAt(0)); +} + + +TEST(NodeInputs2) { + GraphTester graph; + Node* n0 = graph.NewNode(&dummy_operator); + Node* n1 = graph.NewNode(&dummy_operator); + Node* n2 = graph.NewNode(&dummy_operator, n0, n1); + CHECK_EQ(2, n2->InputCount()); + CHECK(n0 == n2->InputAt(0)); + CHECK(n1 == n2->InputAt(1)); +} + + +TEST(NodeInputs3) { + GraphTester graph; + Node* n0 = graph.NewNode(&dummy_operator); + Node* n1 = graph.NewNode(&dummy_operator); + Node* n2 = graph.NewNode(&dummy_operator, n0, n1, n1); + CHECK_EQ(3, n2->InputCount()); + CHECK(n0 == n2->InputAt(0)); + CHECK(n1 == n2->InputAt(1)); + CHECK(n1 == n2->InputAt(2)); +} + + +TEST(NodeInputIteratorEmpty) { + GraphTester graph; + Node* n1 = graph.NewNode(&dummy_operator); + Node::Inputs::iterator i(n1->inputs().begin()); + int input_count = 0; + for (; i != n1->inputs().end(); ++i) { + input_count++; + } + CHECK_EQ(0, input_count); +} + + +TEST(NodeInputIteratorOne) { + GraphTester graph; + Node* n0 = graph.NewNode(&dummy_operator); + Node* n1 = graph.NewNode(&dummy_operator, n0); + Node::Inputs::iterator i(n1->inputs().begin()); + CHECK_EQ(1, n1->InputCount()); + CHECK_EQ(n0, *i); + ++i; + CHECK(n1->inputs().end() == i); +} + + +TEST(NodeUseIteratorEmpty) { + GraphTester graph; + Node* n1 = graph.NewNode(&dummy_operator); + Node::Uses::iterator i(n1->uses().begin()); + int use_count = 0; + for (; i != n1->uses().end(); ++i) { + Node::Edge edge(i.edge()); + USE(edge); + use_count++; + } + CHECK_EQ(0, use_count); +} + + +TEST(NodeUseIteratorOne) { + GraphTester graph; + Node* n0 = graph.NewNode(&dummy_operator); + Node* n1 = graph.NewNode(&dummy_operator, n0); + Node::Uses::iterator i(n0->uses().begin()); + CHECK_EQ(n1, *i); + ++i; + CHECK(n0->uses().end() == i); +} + + +TEST(NodeUseIteratorReplaceNoUses) { + GraphTester graph; + Node* n0 = graph.NewNode(&dummy_operator); + Node* n3 = graph.NewNode(&dummy_operator); + n0->ReplaceUses(n3); + CHECK(n0->uses().begin() == n0->uses().end()); +} + + +TEST(NodeUseIteratorReplaceUses) { + GraphTester graph; + Node* n0 = 
graph.NewNode(&dummy_operator); + Node* n1 = graph.NewNode(&dummy_operator, n0); + Node* n2 = graph.NewNode(&dummy_operator, n0); + Node* n3 = graph.NewNode(&dummy_operator); + Node::Uses::iterator i1(n0->uses().begin()); + CHECK_EQ(n1, *i1); + ++i1; + CHECK_EQ(n2, *i1); + n0->ReplaceUses(n3); + Node::Uses::iterator i2(n3->uses().begin()); + CHECK_EQ(n1, *i2); + ++i2; + CHECK_EQ(n2, *i2); + Node::Inputs::iterator i3(n1->inputs().begin()); + CHECK_EQ(n3, *i3); + ++i3; + CHECK(n1->inputs().end() == i3); + Node::Inputs::iterator i4(n2->inputs().begin()); + CHECK_EQ(n3, *i4); + ++i4; + CHECK(n2->inputs().end() == i4); +} + + +TEST(NodeUseIteratorReplaceUsesSelf) { + GraphTester graph; + Node* n0 = graph.NewNode(&dummy_operator); + Node* n1 = graph.NewNode(&dummy_operator, n0); + Node* n3 = graph.NewNode(&dummy_operator); + + n1->ReplaceInput(0, n1); // Create self-reference. + + Node::Uses::iterator i1(n1->uses().begin()); + CHECK_EQ(n1, *i1); + + n1->ReplaceUses(n3); + + CHECK(n1->uses().begin() == n1->uses().end()); + + Node::Uses::iterator i2(n3->uses().begin()); + CHECK_EQ(n1, *i2); + ++i2; + CHECK(n1->uses().end() == i2); +} + + +TEST(ReplaceInput) { + GraphTester graph; + Node* n0 = graph.NewNode(&dummy_operator); + Node* n1 = graph.NewNode(&dummy_operator); + Node* n2 = graph.NewNode(&dummy_operator); + Node* n3 = graph.NewNode(&dummy_operator, n0, n1, n2); + Node::Inputs::iterator i1(n3->inputs().begin()); + CHECK(n0 == *i1); + CHECK_EQ(n0, n3->InputAt(0)); + ++i1; + CHECK_EQ(n1, *i1); + CHECK_EQ(n1, n3->InputAt(1)); + ++i1; + CHECK_EQ(n2, *i1); + CHECK_EQ(n2, n3->InputAt(2)); + ++i1; + CHECK(i1 == n3->inputs().end()); + + Node::Uses::iterator i2(n1->uses().begin()); + CHECK_EQ(n3, *i2); + ++i2; + CHECK(i2 == n1->uses().end()); + + Node* n4 = graph.NewNode(&dummy_operator); + Node::Uses::iterator i3(n4->uses().begin()); + CHECK(i3 == n4->uses().end()); + + n3->ReplaceInput(1, n4); + + Node::Uses::iterator i4(n1->uses().begin()); + CHECK(i4 == n1->uses().end()); + + Node::Uses::iterator i5(n4->uses().begin()); + CHECK_EQ(n3, *i5); + ++i5; + CHECK(i5 == n4->uses().end()); + + Node::Inputs::iterator i6(n3->inputs().begin()); + CHECK(n0 == *i6); + CHECK_EQ(n0, n3->InputAt(0)); + ++i6; + CHECK_EQ(n4, *i6); + CHECK_EQ(n4, n3->InputAt(1)); + ++i6; + CHECK_EQ(n2, *i6); + CHECK_EQ(n2, n3->InputAt(2)); + ++i6; + CHECK(i6 == n3->inputs().end()); +} + + +TEST(OwnedBy) { + GraphTester graph; + + { + Node* n0 = graph.NewNode(&dummy_operator); + Node* n1 = graph.NewNode(&dummy_operator); + + CHECK(!n0->OwnedBy(n1)); + CHECK(!n1->OwnedBy(n0)); + + Node* n2 = graph.NewNode(&dummy_operator, n0); + CHECK(n0->OwnedBy(n2)); + CHECK(!n2->OwnedBy(n0)); + + Node* n3 = graph.NewNode(&dummy_operator, n0); + CHECK(!n0->OwnedBy(n2)); + CHECK(!n0->OwnedBy(n3)); + CHECK(!n2->OwnedBy(n0)); + CHECK(!n3->OwnedBy(n0)); + } + + { + Node* n0 = graph.NewNode(&dummy_operator); + Node* n1 = graph.NewNode(&dummy_operator, n0); + CHECK(n0->OwnedBy(n1)); + CHECK(!n1->OwnedBy(n0)); + Node* n2 = graph.NewNode(&dummy_operator, n0); + CHECK(!n0->OwnedBy(n1)); + CHECK(!n0->OwnedBy(n2)); + CHECK(!n1->OwnedBy(n0)); + CHECK(!n1->OwnedBy(n2)); + CHECK(!n2->OwnedBy(n0)); + CHECK(!n2->OwnedBy(n1)); + + Node* n3 = graph.NewNode(&dummy_operator); + n2->ReplaceInput(0, n3); + + CHECK(n0->OwnedBy(n1)); + CHECK(!n1->OwnedBy(n0)); + CHECK(!n1->OwnedBy(n0)); + CHECK(!n1->OwnedBy(n2)); + CHECK(!n2->OwnedBy(n0)); + CHECK(!n2->OwnedBy(n1)); + CHECK(n3->OwnedBy(n2)); + CHECK(!n2->OwnedBy(n3)); + } +} + + +TEST(Uses) { + GraphTester graph; + + Node* 
n0 = graph.NewNode(&dummy_operator); + Node* n1 = graph.NewNode(&dummy_operator, n0); + CHECK_EQ(1, n0->UseCount()); + printf("A: %d vs %d\n", n0->UseAt(0)->id(), n1->id()); + CHECK(n0->UseAt(0) == n1); + Node* n2 = graph.NewNode(&dummy_operator, n0); + CHECK_EQ(2, n0->UseCount()); + printf("B: %d vs %d\n", n0->UseAt(1)->id(), n2->id()); + CHECK(n0->UseAt(1) == n2); + Node* n3 = graph.NewNode(&dummy_operator, n0); + CHECK_EQ(3, n0->UseCount()); + CHECK(n0->UseAt(2) == n3); +} + + +TEST(Inputs) { + GraphTester graph; + + Node* n0 = graph.NewNode(&dummy_operator); + Node* n1 = graph.NewNode(&dummy_operator, n0); + Node* n2 = graph.NewNode(&dummy_operator, n0); + Node* n3 = graph.NewNode(&dummy_operator, n0, n1, n2); + CHECK_EQ(3, n3->InputCount()); + CHECK(n3->InputAt(0) == n0); + CHECK(n3->InputAt(1) == n1); + CHECK(n3->InputAt(2) == n2); + Node* n4 = graph.NewNode(&dummy_operator, n0, n1, n2); + n3->AppendInput(graph.zone(), n4); + CHECK_EQ(4, n3->InputCount()); + CHECK(n3->InputAt(0) == n0); + CHECK(n3->InputAt(1) == n1); + CHECK(n3->InputAt(2) == n2); + CHECK(n3->InputAt(3) == n4); + Node* n5 = graph.NewNode(&dummy_operator, n4); + n3->AppendInput(graph.zone(), n4); + CHECK_EQ(5, n3->InputCount()); + CHECK(n3->InputAt(0) == n0); + CHECK(n3->InputAt(1) == n1); + CHECK(n3->InputAt(2) == n2); + CHECK(n3->InputAt(3) == n4); + CHECK(n3->InputAt(4) == n4); + + // Make sure uses have been hooked up correctly. + Node::Uses uses(n4->uses()); + Node::Uses::iterator current = uses.begin(); + CHECK(current != uses.end()); + CHECK(*current == n3); + ++current; + CHECK(current != uses.end()); + CHECK(*current == n5); + ++current; + CHECK(current != uses.end()); + CHECK(*current == n3); + ++current; + CHECK(current == uses.end()); +} + + +TEST(AppendInputsAndIterator) { + GraphTester graph; + + Node* n0 = graph.NewNode(&dummy_operator); + Node* n1 = graph.NewNode(&dummy_operator, n0); + Node* n2 = graph.NewNode(&dummy_operator, n0, n1); + + Node::Inputs inputs(n2->inputs()); + Node::Inputs::iterator current = inputs.begin(); + CHECK(current != inputs.end()); + CHECK(*current == n0); + ++current; + CHECK(current != inputs.end()); + CHECK(*current == n1); + ++current; + CHECK(current == inputs.end()); + + Node* n3 = graph.NewNode(&dummy_operator); + n2->AppendInput(graph.zone(), n3); + inputs = n2->inputs(); + current = inputs.begin(); + CHECK(current != inputs.end()); + CHECK(*current == n0); + CHECK_EQ(0, current.index()); + ++current; + CHECK(current != inputs.end()); + CHECK(*current == n1); + CHECK_EQ(1, current.index()); + ++current; + CHECK(current != inputs.end()); + CHECK(*current == n3); + CHECK_EQ(2, current.index()); + ++current; + CHECK(current == inputs.end()); +} + + +TEST(NullInputsSimple) { + GraphTester graph; + + Node* n0 = graph.NewNode(&dummy_operator); + Node* n1 = graph.NewNode(&dummy_operator, n0); + Node* n2 = graph.NewNode(&dummy_operator, n0, n1); + CHECK_EQ(2, n2->InputCount()); + + CHECK(n0 == n2->InputAt(0)); + CHECK(n1 == n2->InputAt(1)); + CHECK_EQ(2, n0->UseCount()); + n2->ReplaceInput(0, NULL); + CHECK(NULL == n2->InputAt(0)); + CHECK(n1 == n2->InputAt(1)); + CHECK_EQ(1, n0->UseCount()); +} + + +TEST(NullInputsAppended) { + GraphTester graph; + + Node* n0 = graph.NewNode(&dummy_operator); + Node* n1 = graph.NewNode(&dummy_operator, n0); + Node* n2 = graph.NewNode(&dummy_operator, n0); + Node* n3 = graph.NewNode(&dummy_operator, n0); + n3->AppendInput(graph.zone(), n1); + n3->AppendInput(graph.zone(), n2); + CHECK_EQ(3, n3->InputCount()); + + CHECK(n0 == n3->InputAt(0));
+ CHECK(n1 == n3->InputAt(1)); + CHECK(n2 == n3->InputAt(2)); + CHECK_EQ(1, n1->UseCount()); + n3->ReplaceInput(1, NULL); + CHECK(n0 == n3->InputAt(0)); + CHECK(NULL == n3->InputAt(1)); + CHECK(n2 == n3->InputAt(2)); + CHECK_EQ(0, n1->UseCount()); +} + + +TEST(ReplaceUsesFromAppendedInputs) { + GraphTester graph; + + Node* n0 = graph.NewNode(&dummy_operator); + Node* n1 = graph.NewNode(&dummy_operator, n0); + Node* n2 = graph.NewNode(&dummy_operator, n0); + Node* n3 = graph.NewNode(&dummy_operator); + n2->AppendInput(graph.zone(), n1); + n2->AppendInput(graph.zone(), n0); + CHECK_EQ(0, n3->UseCount()); + CHECK_EQ(3, n0->UseCount()); + n0->ReplaceUses(n3); + CHECK_EQ(0, n0->UseCount()); + CHECK_EQ(3, n3->UseCount()); + + Node::Uses uses(n3->uses()); + Node::Uses::iterator current = uses.begin(); + CHECK(current != uses.end()); + CHECK(*current == n1); + ++current; + CHECK(current != uses.end()); + CHECK(*current == n2); + ++current; + CHECK(current != uses.end()); + CHECK(*current == n2); + ++current; + CHECK(current == uses.end()); +} + + +template <bool result> +struct FixedPredicate { + bool operator()(const Node* node) const { return result; } +}; + + +TEST(ReplaceUsesIfWithFixedPredicate) { + GraphTester graph; + + Node* n0 = graph.NewNode(&dummy_operator); + Node* n1 = graph.NewNode(&dummy_operator, n0); + Node* n2 = graph.NewNode(&dummy_operator, n0); + Node* n3 = graph.NewNode(&dummy_operator); + + CHECK_EQ(0, n2->UseCount()); + n2->ReplaceUsesIf(FixedPredicate<true>(), n1); + CHECK_EQ(0, n2->UseCount()); + n2->ReplaceUsesIf(FixedPredicate<false>(), n1); + CHECK_EQ(0, n2->UseCount()); + + CHECK_EQ(0, n3->UseCount()); + n3->ReplaceUsesIf(FixedPredicate<true>(), n1); + CHECK_EQ(0, n3->UseCount()); + n3->ReplaceUsesIf(FixedPredicate<false>(), n1); + CHECK_EQ(0, n3->UseCount()); + + CHECK_EQ(2, n0->UseCount()); + CHECK_EQ(0, n1->UseCount()); + n0->ReplaceUsesIf(FixedPredicate<false>(), n1); + CHECK_EQ(2, n0->UseCount()); + CHECK_EQ(0, n1->UseCount()); + n0->ReplaceUsesIf(FixedPredicate<true>(), n1); + CHECK_EQ(0, n0->UseCount()); + CHECK_EQ(2, n1->UseCount()); + + n1->AppendInput(graph.zone(), n1); + CHECK_EQ(3, n1->UseCount()); + n1->AppendInput(graph.zone(), n3); + CHECK_EQ(1, n3->UseCount()); + n3->ReplaceUsesIf(FixedPredicate<true>(), n1); + CHECK_EQ(4, n1->UseCount()); + CHECK_EQ(0, n3->UseCount()); + n1->ReplaceUsesIf(FixedPredicate<false>(), n3); + CHECK_EQ(4, n1->UseCount()); + CHECK_EQ(0, n3->UseCount()); +} + + +TEST(ReplaceUsesIfWithEqualTo) { + GraphTester graph; + + Node* n0 = graph.NewNode(&dummy_operator); + Node* n1 = graph.NewNode(&dummy_operator, n0); + Node* n2 = graph.NewNode(&dummy_operator, n0, n1); + + CHECK_EQ(0, n2->UseCount()); + n2->ReplaceUsesIf(std::bind1st(std::equal_to<Node*>(), n1), n0); + CHECK_EQ(0, n2->UseCount()); + + CHECK_EQ(2, n0->UseCount()); + CHECK_EQ(1, n1->UseCount()); + n1->ReplaceUsesIf(std::bind1st(std::equal_to<Node*>(), n0), n0); + CHECK_EQ(2, n0->UseCount()); + CHECK_EQ(1, n1->UseCount()); + n0->ReplaceUsesIf(std::bind2nd(std::equal_to<Node*>(), n2), n1); + CHECK_EQ(1, n0->UseCount()); + CHECK_EQ(2, n1->UseCount()); +} + + +TEST(ReplaceInputMultipleUses) { + GraphTester graph; + + Node* n0 = graph.NewNode(&dummy_operator); + Node* n1 = graph.NewNode(&dummy_operator); + Node* n2 = graph.NewNode(&dummy_operator, n0); + n2->ReplaceInput(0, n1); + CHECK_EQ(0, n0->UseCount()); + CHECK_EQ(1, n1->UseCount()); + + Node* n3 = graph.NewNode(&dummy_operator, n0); + n3->ReplaceInput(0, n1); + CHECK_EQ(0, n0->UseCount()); + CHECK_EQ(2, 
n1->UseCount()); +} + + +TEST(TrimInputCountInline) { + GraphTester graph; + + { + Node* n0 = graph.NewNode(&dummy_operator); + Node* n1 = graph.NewNode(&dummy_operator, n0); + n1->TrimInputCount(1); + CHECK_EQ(1, n1->InputCount()); + CHECK_EQ(n0, n1->InputAt(0)); + CHECK_EQ(1, n0->UseCount()); + } + + { + Node* n0 = graph.NewNode(&dummy_operator); + Node* n1 = graph.NewNode(&dummy_operator, n0); + n1->TrimInputCount(0); + CHECK_EQ(0, n1->InputCount()); + CHECK_EQ(0, n0->UseCount()); + } + + { + Node* n0 = graph.NewNode(&dummy_operator); + Node* n1 = graph.NewNode(&dummy_operator); + Node* n2 = graph.NewNode(&dummy_operator, n0, n1); + n2->TrimInputCount(2); + CHECK_EQ(2, n2->InputCount()); + CHECK_EQ(1, n0->UseCount()); + CHECK_EQ(1, n1->UseCount()); + CHECK_EQ(0, n2->UseCount()); + } + + { + Node* n0 = graph.NewNode(&dummy_operator); + Node* n1 = graph.NewNode(&dummy_operator); + Node* n2 = graph.NewNode(&dummy_operator, n0, n1); + n2->TrimInputCount(1); + CHECK_EQ(1, n2->InputCount()); + CHECK_EQ(1, n0->UseCount()); + CHECK_EQ(0, n1->UseCount()); + CHECK_EQ(0, n2->UseCount()); + } + + { + Node* n0 = graph.NewNode(&dummy_operator); + Node* n1 = graph.NewNode(&dummy_operator); + Node* n2 = graph.NewNode(&dummy_operator, n0, n1); + n2->TrimInputCount(0); + CHECK_EQ(0, n2->InputCount()); + CHECK_EQ(0, n0->UseCount()); + CHECK_EQ(0, n1->UseCount()); + CHECK_EQ(0, n2->UseCount()); + } + + { + Node* n0 = graph.NewNode(&dummy_operator); + Node* n2 = graph.NewNode(&dummy_operator, n0, n0); + n2->TrimInputCount(1); + CHECK_EQ(1, n2->InputCount()); + CHECK_EQ(1, n0->UseCount()); + CHECK_EQ(0, n2->UseCount()); + } + + { + Node* n0 = graph.NewNode(&dummy_operator); + Node* n2 = graph.NewNode(&dummy_operator, n0, n0); + n2->TrimInputCount(0); + CHECK_EQ(0, n2->InputCount()); + CHECK_EQ(0, n0->UseCount()); + CHECK_EQ(0, n2->UseCount()); + } +} + + +TEST(TrimInputCountOutOfLine1) { + GraphTester graph; + + { + Node* n0 = graph.NewNode(&dummy_operator); + Node* n1 = graph.NewNode(&dummy_operator); + n1->AppendInput(graph.zone(), n0); + n1->TrimInputCount(1); + CHECK_EQ(1, n1->InputCount()); + CHECK_EQ(n0, n1->InputAt(0)); + CHECK_EQ(1, n0->UseCount()); + } + + { + Node* n0 = graph.NewNode(&dummy_operator); + Node* n1 = graph.NewNode(&dummy_operator); + n1->AppendInput(graph.zone(), n0); + CHECK_EQ(1, n1->InputCount()); + n1->TrimInputCount(0); + CHECK_EQ(0, n1->InputCount()); + CHECK_EQ(0, n0->UseCount()); + } + + { + Node* n0 = graph.NewNode(&dummy_operator); + Node* n1 = graph.NewNode(&dummy_operator); + Node* n2 = graph.NewNode(&dummy_operator); + n2->AppendInput(graph.zone(), n0); + n2->AppendInput(graph.zone(), n1); + CHECK_EQ(2, n2->InputCount()); + n2->TrimInputCount(2); + CHECK_EQ(2, n2->InputCount()); + CHECK_EQ(n0, n2->InputAt(0)); + CHECK_EQ(n1, n2->InputAt(1)); + CHECK_EQ(1, n0->UseCount()); + CHECK_EQ(1, n1->UseCount()); + CHECK_EQ(0, n2->UseCount()); + } + + { + Node* n0 = graph.NewNode(&dummy_operator); + Node* n1 = graph.NewNode(&dummy_operator); + Node* n2 = graph.NewNode(&dummy_operator); + n2->AppendInput(graph.zone(), n0); + n2->AppendInput(graph.zone(), n1); + CHECK_EQ(2, n2->InputCount()); + n2->TrimInputCount(1); + CHECK_EQ(1, n2->InputCount()); + CHECK_EQ(n0, n2->InputAt(0)); + CHECK_EQ(1, n0->UseCount()); + CHECK_EQ(0, n1->UseCount()); + CHECK_EQ(0, n2->UseCount()); + } + + { + Node* n0 = graph.NewNode(&dummy_operator); + Node* n1 = graph.NewNode(&dummy_operator); + Node* n2 = graph.NewNode(&dummy_operator); + n2->AppendInput(graph.zone(), n0); + n2->AppendInput(graph.zone(), 
n1); + CHECK_EQ(2, n2->InputCount()); + n2->TrimInputCount(0); + CHECK_EQ(0, n2->InputCount()); + CHECK_EQ(0, n0->UseCount()); + CHECK_EQ(0, n1->UseCount()); + CHECK_EQ(0, n2->UseCount()); + } + + { + Node* n0 = graph.NewNode(&dummy_operator); + Node* n2 = graph.NewNode(&dummy_operator); + n2->AppendInput(graph.zone(), n0); + n2->AppendInput(graph.zone(), n0); + CHECK_EQ(2, n2->InputCount()); + CHECK_EQ(2, n0->UseCount()); + n2->TrimInputCount(1); + CHECK_EQ(1, n2->InputCount()); + CHECK_EQ(1, n0->UseCount()); + CHECK_EQ(0, n2->UseCount()); + } + + { + Node* n0 = graph.NewNode(&dummy_operator); + Node* n2 = graph.NewNode(&dummy_operator); + n2->AppendInput(graph.zone(), n0); + n2->AppendInput(graph.zone(), n0); + CHECK_EQ(2, n2->InputCount()); + CHECK_EQ(2, n0->UseCount()); + n2->TrimInputCount(0); + CHECK_EQ(0, n2->InputCount()); + CHECK_EQ(0, n0->UseCount()); + CHECK_EQ(0, n2->UseCount()); + } +} + + +TEST(TrimInputCountOutOfLine2) { + GraphTester graph; + + { + Node* n0 = graph.NewNode(&dummy_operator); + Node* n1 = graph.NewNode(&dummy_operator); + Node* n2 = graph.NewNode(&dummy_operator, n0); + n2->AppendInput(graph.zone(), n1); + CHECK_EQ(2, n2->InputCount()); + n2->TrimInputCount(2); + CHECK_EQ(2, n2->InputCount()); + CHECK_EQ(n0, n2->InputAt(0)); + CHECK_EQ(n1, n2->InputAt(1)); + CHECK_EQ(1, n0->UseCount()); + CHECK_EQ(1, n1->UseCount()); + CHECK_EQ(0, n2->UseCount()); + } + + { + Node* n0 = graph.NewNode(&dummy_operator); + Node* n1 = graph.NewNode(&dummy_operator); + Node* n2 = graph.NewNode(&dummy_operator, n0); + n2->AppendInput(graph.zone(), n1); + CHECK_EQ(2, n2->InputCount()); + n2->TrimInputCount(1); + CHECK_EQ(1, n2->InputCount()); + CHECK_EQ(n0, n2->InputAt(0)); + CHECK_EQ(1, n0->UseCount()); + CHECK_EQ(0, n1->UseCount()); + CHECK_EQ(0, n2->UseCount()); + } + + { + Node* n0 = graph.NewNode(&dummy_operator); + Node* n1 = graph.NewNode(&dummy_operator); + Node* n2 = graph.NewNode(&dummy_operator, n0); + n2->AppendInput(graph.zone(), n1); + CHECK_EQ(2, n2->InputCount()); + n2->TrimInputCount(0); + CHECK_EQ(0, n2->InputCount()); + CHECK_EQ(0, n0->UseCount()); + CHECK_EQ(0, n1->UseCount()); + CHECK_EQ(0, n2->UseCount()); + } + + { + Node* n0 = graph.NewNode(&dummy_operator); + Node* n2 = graph.NewNode(&dummy_operator, n0); + n2->AppendInput(graph.zone(), n0); + CHECK_EQ(2, n2->InputCount()); + CHECK_EQ(2, n0->UseCount()); + n2->TrimInputCount(1); + CHECK_EQ(1, n2->InputCount()); + CHECK_EQ(1, n0->UseCount()); + CHECK_EQ(0, n2->UseCount()); + } + + { + Node* n0 = graph.NewNode(&dummy_operator); + Node* n2 = graph.NewNode(&dummy_operator, n0); + n2->AppendInput(graph.zone(), n0); + CHECK_EQ(2, n2->InputCount()); + CHECK_EQ(2, n0->UseCount()); + n2->TrimInputCount(0); + CHECK_EQ(0, n2->InputCount()); + CHECK_EQ(0, n0->UseCount()); + CHECK_EQ(0, n2->UseCount()); + } +} + + +TEST(RemoveAllInputs) { + GraphTester graph; + + for (int i = 0; i < 2; i++) { + Node* n0 = graph.NewNode(&dummy_operator); + Node* n1 = graph.NewNode(&dummy_operator, n0); + Node* n2; + if (i == 0) { + n2 = graph.NewNode(&dummy_operator, n0, n1); + } else { + n2 = graph.NewNode(&dummy_operator, n0); + n2->AppendInput(graph.zone(), n1); // with out-of-line input. 
+ } + + n0->RemoveAllInputs(); + CHECK_EQ(0, n0->InputCount()); + + CHECK_EQ(2, n0->UseCount()); + n1->RemoveAllInputs(); + CHECK_EQ(1, n1->InputCount()); + CHECK_EQ(1, n0->UseCount()); + CHECK_EQ(NULL, n1->InputAt(0)); + + CHECK_EQ(1, n1->UseCount()); + n2->RemoveAllInputs(); + CHECK_EQ(2, n2->InputCount()); + CHECK_EQ(0, n0->UseCount()); + CHECK_EQ(0, n1->UseCount()); + CHECK_EQ(NULL, n2->InputAt(0)); + CHECK_EQ(NULL, n2->InputAt(1)); + } + + { + Node* n0 = graph.NewNode(&dummy_operator); + Node* n1 = graph.NewNode(&dummy_operator, n0); + n1->ReplaceInput(0, n1); // self-reference. + + CHECK_EQ(0, n0->UseCount()); + CHECK_EQ(1, n1->UseCount()); + n1->RemoveAllInputs(); + CHECK_EQ(1, n1->InputCount()); + CHECK_EQ(0, n1->UseCount()); + CHECK_EQ(NULL, n1->InputAt(0)); + } +} diff --git a/deps/v8/test/cctest/compiler/test-operator.cc b/deps/v8/test/cctest/compiler/test-operator.cc new file mode 100644 index 00000000000..0bf8cb755bc --- /dev/null +++ b/deps/v8/test/cctest/compiler/test-operator.cc @@ -0,0 +1,244 @@ +// Copyright 2013 the V8 project authors. All rights reserved. +// Use of this source code is governed by a BSD-style license that can be +// found in the LICENSE file. + +#include "src/v8.h" + +#include "src/compiler/operator.h" +#include "test/cctest/cctest.h" + +using namespace v8::internal; +using namespace v8::internal::compiler; + +#define NaN (v8::base::OS::nan_value()) +#define Infinity (std::numeric_limits<double>::infinity()) + +TEST(TestOperatorMnemonic) { + SimpleOperator op1(10, 0, 0, 0, "ThisOne"); + CHECK_EQ(0, strcmp(op1.mnemonic(), "ThisOne")); + + SimpleOperator op2(11, 0, 0, 0, "ThatOne"); + CHECK_EQ(0, strcmp(op2.mnemonic(), "ThatOne")); + + Operator1<int> op3(12, 0, 0, 1, "Mnemonic1", 12333); + CHECK_EQ(0, strcmp(op3.mnemonic(), "Mnemonic1")); + + Operator1<double> op4(13, 0, 0, 1, "TheOther", 99.9); + CHECK_EQ(0, strcmp(op4.mnemonic(), "TheOther")); +} + + +TEST(TestSimpleOperatorHash) { + SimpleOperator op1(17, 0, 0, 0, "Another"); + CHECK_EQ(17, op1.HashCode()); + + SimpleOperator op2(18, 0, 0, 0, "Falsch"); + CHECK_EQ(18, op2.HashCode()); +} + + +TEST(TestSimpleOperatorEquals) { + SimpleOperator op1a(19, 0, 0, 0, "Another1"); + SimpleOperator op1b(19, 2, 2, 2, "Another2"); + + CHECK(op1a.Equals(&op1a)); + CHECK(op1a.Equals(&op1b)); + CHECK(op1b.Equals(&op1a)); + CHECK(op1b.Equals(&op1b)); + + SimpleOperator op2a(20, 0, 0, 0, "Falsch1"); + SimpleOperator op2b(20, 1, 1, 1, "Falsch2"); + + CHECK(op2a.Equals(&op2a)); + CHECK(op2a.Equals(&op2b)); + CHECK(op2b.Equals(&op2a)); + CHECK(op2b.Equals(&op2b)); + + CHECK(!op1a.Equals(&op2a)); + CHECK(!op1a.Equals(&op2b)); + CHECK(!op1b.Equals(&op2a)); + CHECK(!op1b.Equals(&op2b)); + + CHECK(!op2a.Equals(&op1a)); + CHECK(!op2a.Equals(&op1b)); + CHECK(!op2b.Equals(&op1a)); + CHECK(!op2b.Equals(&op1b)); +} + + +static SmartArrayPointer<const char> OperatorToString(Operator* op) { + OStringStream os; + os << *op; + return SmartArrayPointer<const char>(StrDup(os.c_str())); +} + + +TEST(TestSimpleOperatorPrint) { + SimpleOperator op1a(19, 0, 0, 0, "Another1"); + SimpleOperator op1b(19, 2, 2, 2, "Another2"); + + CHECK_EQ("Another1", OperatorToString(&op1a).get()); + CHECK_EQ("Another2", OperatorToString(&op1b).get()); + + SimpleOperator op2a(20, 0, 0, 0, "Flog1"); + SimpleOperator op2b(20, 1, 1, 1, "Flog2"); + + CHECK_EQ("Flog1", OperatorToString(&op2a).get()); + CHECK_EQ("Flog2", OperatorToString(&op2b).get()); +} + + +TEST(TestOperator1intHash) { + Operator1<int> op1a(23, 0, 0, 0, "Wolfie", 11); + Operator1<int> op1b(23, 2, 
2, 2, "Doggie", 11); + + CHECK_EQ(op1a.HashCode(), op1b.HashCode()); + + Operator1<int> op2a(24, 0, 0, 0, "Arfie", 3); + Operator1<int> op2b(24, 0, 0, 0, "Arfie", 4); + + CHECK_NE(op1a.HashCode(), op2a.HashCode()); + CHECK_NE(op2a.HashCode(), op2b.HashCode()); +} + + +TEST(TestOperator1intEquals) { + Operator1<int> op1a(23, 0, 0, 0, "Scratchy", 11); + Operator1<int> op1b(23, 2, 2, 2, "Scratchy", 11); + + CHECK(op1a.Equals(&op1a)); + CHECK(op1a.Equals(&op1b)); + CHECK(op1b.Equals(&op1a)); + CHECK(op1b.Equals(&op1b)); + + Operator1<int> op2a(24, 0, 0, 0, "Im", 3); + Operator1<int> op2b(24, 0, 0, 0, "Im", 4); + + CHECK(op2a.Equals(&op2a)); + CHECK(!op2a.Equals(&op2b)); + CHECK(!op2b.Equals(&op2a)); + CHECK(op2b.Equals(&op2b)); + + CHECK(!op1a.Equals(&op2a)); + CHECK(!op1a.Equals(&op2b)); + CHECK(!op1b.Equals(&op2a)); + CHECK(!op1b.Equals(&op2b)); + + CHECK(!op2a.Equals(&op1a)); + CHECK(!op2a.Equals(&op1b)); + CHECK(!op2b.Equals(&op1a)); + CHECK(!op2b.Equals(&op1b)); + + SimpleOperator op3(25, 0, 0, 0, "Weepy"); + + CHECK(!op1a.Equals(&op3)); + CHECK(!op1b.Equals(&op3)); + CHECK(!op2a.Equals(&op3)); + CHECK(!op2b.Equals(&op3)); + + CHECK(!op3.Equals(&op1a)); + CHECK(!op3.Equals(&op1b)); + CHECK(!op3.Equals(&op2a)); + CHECK(!op3.Equals(&op2b)); +} + + +TEST(TestOperator1intPrint) { + Operator1<int> op1(12, 0, 0, 1, "Op1Test", 0); + CHECK_EQ("Op1Test[0]", OperatorToString(&op1).get()); + + Operator1<int> op2(12, 0, 0, 1, "Op1Test", 66666666); + CHECK_EQ("Op1Test[66666666]", OperatorToString(&op2).get()); + + Operator1<int> op3(12, 0, 0, 1, "FooBar", 2347); + CHECK_EQ("FooBar[2347]", OperatorToString(&op3).get()); + + Operator1<int> op4(12, 0, 0, 1, "BarFoo", -879); + CHECK_EQ("BarFoo[-879]", OperatorToString(&op4).get()); +} + + +TEST(TestOperator1doubleHash) { + Operator1<double> op1a(23, 0, 0, 0, "Wolfie", 11.77); + Operator1<double> op1b(23, 2, 2, 2, "Doggie", 11.77); + + CHECK_EQ(op1a.HashCode(), op1b.HashCode()); + + Operator1<double> op2a(24, 0, 0, 0, "Arfie", -6.7); + Operator1<double> op2b(24, 0, 0, 0, "Arfie", -6.8); + + CHECK_NE(op1a.HashCode(), op2a.HashCode()); + CHECK_NE(op2a.HashCode(), op2b.HashCode()); +} + + +TEST(TestOperator1doubleEquals) { + Operator1<double> op1a(23, 0, 0, 0, "Scratchy", 11.77); + Operator1<double> op1b(23, 2, 2, 2, "Scratchy", 11.77); + + CHECK(op1a.Equals(&op1a)); + CHECK(op1a.Equals(&op1b)); + CHECK(op1b.Equals(&op1a)); + CHECK(op1b.Equals(&op1b)); + + Operator1<double> op2a(24, 0, 0, 0, "Im", 3.1); + Operator1<double> op2b(24, 0, 0, 0, "Im", 3.2); + + CHECK(op2a.Equals(&op2a)); + CHECK(!op2a.Equals(&op2b)); + CHECK(!op2b.Equals(&op2a)); + CHECK(op2b.Equals(&op2b)); + + CHECK(!op1a.Equals(&op2a)); + CHECK(!op1a.Equals(&op2b)); + CHECK(!op1b.Equals(&op2a)); + CHECK(!op1b.Equals(&op2b)); + + CHECK(!op2a.Equals(&op1a)); + CHECK(!op2a.Equals(&op1b)); + CHECK(!op2b.Equals(&op1a)); + CHECK(!op2b.Equals(&op1b)); + + SimpleOperator op3(25, 0, 0, 0, "Weepy"); + + CHECK(!op1a.Equals(&op3)); + CHECK(!op1b.Equals(&op3)); + CHECK(!op2a.Equals(&op3)); + CHECK(!op2b.Equals(&op3)); + + CHECK(!op3.Equals(&op1a)); + CHECK(!op3.Equals(&op1b)); + CHECK(!op3.Equals(&op2a)); + CHECK(!op3.Equals(&op2b)); + + Operator1<double> op4a(24, 0, 0, 0, "Bashful", NaN); + Operator1<double> op4b(24, 0, 0, 0, "Bashful", NaN); + + CHECK(op4a.Equals(&op4a)); + CHECK(op4a.Equals(&op4b)); + CHECK(op4b.Equals(&op4a)); + CHECK(op4b.Equals(&op4b)); + + CHECK(!op3.Equals(&op4a)); + CHECK(!op3.Equals(&op4b)); + CHECK(!op3.Equals(&op4a)); + CHECK(!op3.Equals(&op4b)); +} + + 
+TEST(TestOperator1doublePrint) { + Operator1<double> op1(12, 0, 0, 1, "Op1Test", 0); + CHECK_EQ("Op1Test[0]", OperatorToString(&op1).get()); + + Operator1<double> op2(12, 0, 0, 1, "Op1Test", 7.3); + CHECK_EQ("Op1Test[7.3]", OperatorToString(&op2).get()); + + Operator1<double> op3(12, 0, 0, 1, "FooBar", 2e+123); + CHECK_EQ("FooBar[2e+123]", OperatorToString(&op3).get()); + + Operator1<double> op4(12, 0, 0, 1, "BarFoo", Infinity); + CHECK_EQ("BarFoo[inf]", OperatorToString(&op4).get()); + + Operator1<double> op5(12, 0, 0, 1, "BarFoo", NaN); + CHECK_EQ("BarFoo[nan]", OperatorToString(&op5).get()); +} diff --git a/deps/v8/test/cctest/compiler/test-phi-reducer.cc b/deps/v8/test/cctest/compiler/test-phi-reducer.cc new file mode 100644 index 00000000000..00e250d8a2a --- /dev/null +++ b/deps/v8/test/cctest/compiler/test-phi-reducer.cc @@ -0,0 +1,225 @@ +// Copyright 2014 the V8 project authors. All rights reserved. +// Use of this source code is governed by a BSD-style license that can be +// found in the LICENSE file. + +#include "src/v8.h" +#include "test/cctest/cctest.h" + +#include "src/compiler/common-operator.h" +#include "src/compiler/graph-inl.h" +#include "src/compiler/phi-reducer.h" + +using namespace v8::internal; +using namespace v8::internal::compiler; + +class PhiReducerTester : HandleAndZoneScope { + public: + explicit PhiReducerTester(int num_parameters = 0) + : isolate(main_isolate()), + common(main_zone()), + graph(main_zone()), + self(graph.NewNode(common.Start(num_parameters))), + dead(graph.NewNode(common.Dead())) { + graph.SetStart(self); + } + + Isolate* isolate; + CommonOperatorBuilder common; + Graph graph; + Node* self; + Node* dead; + + void CheckReduce(Node* expect, Node* phi) { + PhiReducer reducer; + Reduction reduction = reducer.Reduce(phi); + if (expect == phi) { + CHECK(!reduction.Changed()); + } else { + CHECK(reduction.Changed()); + CHECK_EQ(expect, reduction.replacement()); + } + } + + Node* Int32Constant(int32_t val) { + return graph.NewNode(common.Int32Constant(val)); + } + + Node* Float64Constant(double val) { + return graph.NewNode(common.Float64Constant(val)); + } + + Node* Parameter(int32_t index = 0) { + return graph.NewNode(common.Parameter(index), graph.start()); + } + + Node* Phi(Node* a) { + return SetSelfReferences(graph.NewNode(common.Phi(1), a)); + } + + Node* Phi(Node* a, Node* b) { + return SetSelfReferences(graph.NewNode(common.Phi(2), a, b)); + } + + Node* Phi(Node* a, Node* b, Node* c) { + return SetSelfReferences(graph.NewNode(common.Phi(3), a, b, c)); + } + + Node* Phi(Node* a, Node* b, Node* c, Node* d) { + return SetSelfReferences(graph.NewNode(common.Phi(4), a, b, c, d)); + } + + Node* PhiWithControl(Node* a, Node* control) { + return SetSelfReferences(graph.NewNode(common.Phi(1), a, control)); + } + + Node* PhiWithControl(Node* a, Node* b, Node* control) { + return SetSelfReferences(graph.NewNode(common.Phi(2), a, b, control)); + } + + Node* SetSelfReferences(Node* node) { + Node::Inputs inputs = node->inputs(); + for (Node::Inputs::iterator iter(inputs.begin()); iter != inputs.end(); + ++iter) { + Node* input = *iter; + if (input == self) node->ReplaceInput(iter.index(), node); + } + return node; + } +}; + + +TEST(PhiReduce1) { + PhiReducerTester R; + Node* zero = R.Int32Constant(0); + Node* one = R.Int32Constant(1); + Node* oneish = R.Float64Constant(1.1); + Node* param = R.Parameter(); + + Node* singles[] = {zero, one, oneish, param}; + for (size_t i = 0; i < ARRAY_SIZE(singles); i++) { + R.CheckReduce(singles[i], R.Phi(singles[i])); 
+ } +} + + +TEST(PhiReduce2) { + PhiReducerTester R; + Node* zero = R.Int32Constant(0); + Node* one = R.Int32Constant(1); + Node* oneish = R.Float64Constant(1.1); + Node* param = R.Parameter(); + + Node* singles[] = {zero, one, oneish, param}; + for (size_t i = 0; i < ARRAY_SIZE(singles); i++) { + Node* a = singles[i]; + R.CheckReduce(a, R.Phi(a, a)); + } + + for (size_t i = 0; i < ARRAY_SIZE(singles); i++) { + Node* a = singles[i]; + R.CheckReduce(a, R.Phi(R.self, a)); + R.CheckReduce(a, R.Phi(a, R.self)); + } + + for (size_t i = 1; i < ARRAY_SIZE(singles); i++) { + Node* a = singles[i], *b = singles[0]; + Node* phi1 = R.Phi(b, a); + R.CheckReduce(phi1, phi1); + + Node* phi2 = R.Phi(a, b); + R.CheckReduce(phi2, phi2); + } +} + + +TEST(PhiReduce3) { + PhiReducerTester R; + Node* zero = R.Int32Constant(0); + Node* one = R.Int32Constant(1); + Node* oneish = R.Float64Constant(1.1); + Node* param = R.Parameter(); + + Node* singles[] = {zero, one, oneish, param}; + for (size_t i = 0; i < ARRAY_SIZE(singles); i++) { + Node* a = singles[i]; + R.CheckReduce(a, R.Phi(a, a, a)); + } + + for (size_t i = 0; i < ARRAY_SIZE(singles); i++) { + Node* a = singles[i]; + R.CheckReduce(a, R.Phi(R.self, a, a)); + R.CheckReduce(a, R.Phi(a, R.self, a)); + R.CheckReduce(a, R.Phi(a, a, R.self)); + } + + for (size_t i = 1; i < ARRAY_SIZE(singles); i++) { + Node* a = singles[i], *b = singles[0]; + Node* phi1 = R.Phi(b, a, a); + R.CheckReduce(phi1, phi1); + + Node* phi2 = R.Phi(a, b, a); + R.CheckReduce(phi2, phi2); + + Node* phi3 = R.Phi(a, a, b); + R.CheckReduce(phi3, phi3); + } +} + + +TEST(PhiReduce4) { + PhiReducerTester R; + Node* zero = R.Int32Constant(0); + Node* one = R.Int32Constant(1); + Node* oneish = R.Float64Constant(1.1); + Node* param = R.Parameter(); + + Node* singles[] = {zero, one, oneish, param}; + for (size_t i = 0; i < ARRAY_SIZE(singles); i++) { + Node* a = singles[i]; + R.CheckReduce(a, R.Phi(a, a, a, a)); + } + + for (size_t i = 0; i < ARRAY_SIZE(singles); i++) { + Node* a = singles[i]; + R.CheckReduce(a, R.Phi(R.self, a, a, a)); + R.CheckReduce(a, R.Phi(a, R.self, a, a)); + R.CheckReduce(a, R.Phi(a, a, R.self, a)); + R.CheckReduce(a, R.Phi(a, a, a, R.self)); + + R.CheckReduce(a, R.Phi(R.self, R.self, a, a)); + R.CheckReduce(a, R.Phi(a, R.self, R.self, a)); + R.CheckReduce(a, R.Phi(a, a, R.self, R.self)); + R.CheckReduce(a, R.Phi(R.self, a, a, R.self)); + } + + for (size_t i = 1; i < ARRAY_SIZE(singles); i++) { + Node* a = singles[i], *b = singles[0]; + Node* phi1 = R.Phi(b, a, a, a); + R.CheckReduce(phi1, phi1); + + Node* phi2 = R.Phi(a, b, a, a); + R.CheckReduce(phi2, phi2); + + Node* phi3 = R.Phi(a, a, b, a); + R.CheckReduce(phi3, phi3); + + Node* phi4 = R.Phi(a, a, a, b); + R.CheckReduce(phi4, phi4); + } +} + + +TEST(PhiReduceShouldIgnoreControlNodes) { + PhiReducerTester R; + Node* zero = R.Int32Constant(0); + Node* one = R.Int32Constant(1); + Node* oneish = R.Float64Constant(1.1); + Node* param = R.Parameter(); + + Node* singles[] = {zero, one, oneish, param}; + for (size_t i = 0; i < ARRAY_SIZE(singles); ++i) { + R.CheckReduce(singles[i], R.PhiWithControl(singles[i], R.dead)); + R.CheckReduce(singles[i], R.PhiWithControl(R.self, singles[i], R.dead)); + R.CheckReduce(singles[i], R.PhiWithControl(singles[i], R.self, R.dead)); + } +} diff --git a/deps/v8/test/cctest/compiler/test-pipeline.cc b/deps/v8/test/cctest/compiler/test-pipeline.cc new file mode 100644 index 00000000000..7efedeeea2a --- /dev/null +++ b/deps/v8/test/cctest/compiler/test-pipeline.cc @@ -0,0 +1,40 @@ +// Copyright 
2013 the V8 project authors. All rights reserved.
+// Use of this source code is governed by a BSD-style license that can be
+// found in the LICENSE file.
+
+#include "src/v8.h"
+#include "test/cctest/cctest.h"
+
+#include "src/compiler.h"
+#include "src/compiler/pipeline.h"
+#include "src/handles.h"
+#include "src/parser.h"
+#include "src/rewriter.h"
+#include "src/scopes.h"
+
+using namespace v8::internal;
+using namespace v8::internal::compiler;
+
+TEST(PipelineAdd) {
+  InitializedHandleScope handles;
+  const char* source = "(function(a,b) { return a + b; })";
+  Handle<JSFunction> function = v8::Utils::OpenHandle(
+      *v8::Handle<v8::Function>::Cast(CompileRun(source)));
+  CompilationInfoWithZone info(function);
+
+  CHECK(Parser::Parse(&info));
+  StrictMode strict_mode = info.function()->strict_mode();
+  info.SetStrictMode(strict_mode);
+  CHECK(Rewriter::Rewrite(&info));
+  CHECK(Scope::Analyze(&info));
+  CHECK_NE(NULL, info.scope());
+
+  Pipeline pipeline(&info);
+#if V8_TURBOFAN_TARGET
+  Handle<Code> code = pipeline.GenerateCode();
+  CHECK(Pipeline::SupportedTarget());
+  CHECK(!code.is_null());
+#else
+  USE(pipeline);
+#endif
+}
diff --git a/deps/v8/test/cctest/compiler/test-representation-change.cc b/deps/v8/test/cctest/compiler/test-representation-change.cc
new file mode 100644
index 00000000000..092a5f7d90c
--- /dev/null
+++ b/deps/v8/test/cctest/compiler/test-representation-change.cc
@@ -0,0 +1,276 @@
+// Copyright 2014 the V8 project authors. All rights reserved.
+// Use of this source code is governed by a BSD-style license that can be
+// found in the LICENSE file.
+
+#include <limits>
+
+#include "src/v8.h"
+#include "test/cctest/cctest.h"
+#include "test/cctest/compiler/graph-builder-tester.h"
+
+#include "src/compiler/node-matchers.h"
+#include "src/compiler/representation-change.h"
+#include "src/compiler/typer.h"
+
+using namespace v8::internal;
+using namespace v8::internal::compiler;
+
+namespace v8 {  // for friendliness.
+namespace internal { +namespace compiler { + +class RepresentationChangerTester : public HandleAndZoneScope, + public GraphAndBuilders { + public: + explicit RepresentationChangerTester(int num_parameters = 0) + : GraphAndBuilders(main_zone()), + typer_(main_zone()), + jsgraph_(main_graph_, &main_common_, &typer_), + changer_(&jsgraph_, &main_simplified_, &main_machine_, main_isolate()) { + Node* s = graph()->NewNode(common()->Start(num_parameters)); + graph()->SetStart(s); + } + + Typer typer_; + JSGraph jsgraph_; + RepresentationChanger changer_; + + Isolate* isolate() { return main_isolate(); } + Graph* graph() { return main_graph_; } + CommonOperatorBuilder* common() { return &main_common_; } + JSGraph* jsgraph() { return &jsgraph_; } + RepresentationChanger* changer() { return &changer_; } + + // TODO(titzer): use ValueChecker / ValueUtil + void CheckInt32Constant(Node* n, int32_t expected) { + ValueMatcher<int32_t> m(n); + CHECK(m.HasValue()); + CHECK_EQ(expected, m.Value()); + } + + void CheckHeapConstant(Node* n, Object* expected) { + ValueMatcher<Handle<Object> > m(n); + CHECK(m.HasValue()); + CHECK_EQ(expected, *m.Value()); + } + + void CheckNumberConstant(Node* n, double expected) { + ValueMatcher<double> m(n); + CHECK_EQ(IrOpcode::kNumberConstant, n->opcode()); + CHECK(m.HasValue()); + CHECK_EQ(expected, m.Value()); + } + + Node* Parameter(int index = 0) { + return graph()->NewNode(common()->Parameter(index), graph()->start()); + } + + void CheckTypeError(RepTypeUnion from, RepTypeUnion to) { + changer()->testing_type_errors_ = true; + changer()->type_error_ = false; + Node* n = Parameter(0); + Node* c = changer()->GetRepresentationFor(n, from, to); + CHECK_EQ(n, c); + CHECK(changer()->type_error_); + } + + void CheckNop(RepTypeUnion from, RepTypeUnion to) { + Node* n = Parameter(0); + Node* c = changer()->GetRepresentationFor(n, from, to); + CHECK_EQ(n, c); + } +}; +} +} +} // namespace v8::internal::compiler + + +static const RepType all_reps[] = {rBit, rWord32, rWord64, rFloat64, rTagged}; + + +// TODO(titzer): lift this to ValueHelper +static const double double_inputs[] = { + 0.0, -0.0, 1.0, -1.0, 0.1, 1.4, -1.7, + 2, 5, 6, 982983, 888, -999.8, 3.1e7, + -2e66, 2.3e124, -12e73, V8_INFINITY, -V8_INFINITY}; + + +static const int32_t int32_inputs[] = { + 0, 1, -1, + 2, 5, 6, + 982983, 888, -999, + 65535, static_cast<int32_t>(0xFFFFFFFF), static_cast<int32_t>(0x80000000)}; + + +static const uint32_t uint32_inputs[] = { + 0, 1, static_cast<uint32_t>(-1), 2, 5, 6, + 982983, 888, static_cast<uint32_t>(-999), 65535, 0xFFFFFFFF, 0x80000000}; + + +TEST(BoolToBit_constant) { + RepresentationChangerTester r; + + Node* true_node = r.jsgraph()->TrueConstant(); + Node* true_bit = r.changer()->GetRepresentationFor(true_node, rTagged, rBit); + r.CheckInt32Constant(true_bit, 1); + + Node* false_node = r.jsgraph()->FalseConstant(); + Node* false_bit = + r.changer()->GetRepresentationFor(false_node, rTagged, rBit); + r.CheckInt32Constant(false_bit, 0); +} + + +TEST(BitToBool_constant) { + RepresentationChangerTester r; + + for (int i = -5; i < 5; i++) { + Node* node = r.jsgraph()->Int32Constant(i); + Node* val = r.changer()->GetRepresentationFor(node, rBit, rTagged); + r.CheckHeapConstant(val, i == 0 ? 
r.isolate()->heap()->false_value() + : r.isolate()->heap()->true_value()); + } +} + + +TEST(ToTagged_constant) { + RepresentationChangerTester r; + + for (size_t i = 0; i < ARRAY_SIZE(double_inputs); i++) { + Node* n = r.jsgraph()->Float64Constant(double_inputs[i]); + Node* c = r.changer()->GetRepresentationFor(n, rFloat64, rTagged); + r.CheckNumberConstant(c, double_inputs[i]); + } + + for (size_t i = 0; i < ARRAY_SIZE(int32_inputs); i++) { + Node* n = r.jsgraph()->Int32Constant(int32_inputs[i]); + Node* c = r.changer()->GetRepresentationFor(n, rWord32 | tInt32, rTagged); + r.CheckNumberConstant(c, static_cast<double>(int32_inputs[i])); + } + + for (size_t i = 0; i < ARRAY_SIZE(uint32_inputs); i++) { + Node* n = r.jsgraph()->Int32Constant(uint32_inputs[i]); + Node* c = r.changer()->GetRepresentationFor(n, rWord32 | tUint32, rTagged); + r.CheckNumberConstant(c, static_cast<double>(uint32_inputs[i])); + } +} + + +static void CheckChange(IrOpcode::Value expected, RepTypeUnion from, + RepTypeUnion to) { + RepresentationChangerTester r; + + Node* n = r.Parameter(); + Node* c = r.changer()->GetRepresentationFor(n, from, to); + + CHECK_NE(c, n); + CHECK_EQ(expected, c->opcode()); + CHECK_EQ(n, c->InputAt(0)); +} + + +TEST(SingleChanges) { + CheckChange(IrOpcode::kChangeBoolToBit, rTagged, rBit); + CheckChange(IrOpcode::kChangeBitToBool, rBit, rTagged); + + CheckChange(IrOpcode::kChangeInt32ToTagged, rWord32 | tInt32, rTagged); + CheckChange(IrOpcode::kChangeUint32ToTagged, rWord32 | tUint32, rTagged); + CheckChange(IrOpcode::kChangeFloat64ToTagged, rFloat64, rTagged); + + CheckChange(IrOpcode::kChangeTaggedToInt32, rTagged | tInt32, rWord32); + CheckChange(IrOpcode::kChangeTaggedToUint32, rTagged | tUint32, rWord32); + CheckChange(IrOpcode::kChangeTaggedToFloat64, rTagged, rFloat64); + + // Int32,Uint32 <-> Float64 are actually machine conversions. + CheckChange(IrOpcode::kChangeInt32ToFloat64, rWord32 | tInt32, rFloat64); + CheckChange(IrOpcode::kChangeUint32ToFloat64, rWord32 | tUint32, rFloat64); + CheckChange(IrOpcode::kChangeFloat64ToInt32, rFloat64 | tInt32, rWord32); + CheckChange(IrOpcode::kChangeFloat64ToUint32, rFloat64 | tUint32, rWord32); +} + + +TEST(SignednessInWord32) { + RepresentationChangerTester r; + + // TODO(titzer): assume that uses of a word32 without a sign mean tInt32. + CheckChange(IrOpcode::kChangeTaggedToInt32, rTagged, rWord32 | tInt32); + CheckChange(IrOpcode::kChangeTaggedToUint32, rTagged, rWord32 | tUint32); + CheckChange(IrOpcode::kChangeInt32ToFloat64, rWord32, rFloat64); + CheckChange(IrOpcode::kChangeFloat64ToInt32, rFloat64, rWord32); +} + + +TEST(Nops) { + RepresentationChangerTester r; + + // X -> X is always a nop for any single representation X. + for (size_t i = 0; i < ARRAY_SIZE(all_reps); i++) { + r.CheckNop(all_reps[i], all_reps[i]); + } + + // 32-bit or 64-bit words can be used as branch conditions (rBit). + r.CheckNop(rWord32, rBit); + r.CheckNop(rWord32, rBit | tBool); + r.CheckNop(rWord64, rBit); + r.CheckNop(rWord64, rBit | tBool); + + // rBit (result of comparison) is implicitly a wordish thing. + r.CheckNop(rBit, rWord32); + r.CheckNop(rBit | tBool, rWord32); + r.CheckNop(rBit, rWord64); + r.CheckNop(rBit | tBool, rWord64); +} + + +TEST(TypeErrors) { + RepresentationChangerTester r; + + // Floats cannot be implicitly converted to/from comparison conditions. 
+ r.CheckTypeError(rFloat64, rBit); + r.CheckTypeError(rFloat64, rBit | tBool); + r.CheckTypeError(rBit, rFloat64); + r.CheckTypeError(rBit | tBool, rFloat64); + + // Word64 is internal and shouldn't be implicitly converted. + r.CheckTypeError(rWord64, rTagged | tBool); + r.CheckTypeError(rWord64, rTagged); + r.CheckTypeError(rWord64, rTagged | tBool); + r.CheckTypeError(rTagged, rWord64); + r.CheckTypeError(rTagged | tBool, rWord64); + + // Word64 / Word32 shouldn't be implicitly converted. + r.CheckTypeError(rWord64, rWord32); + r.CheckTypeError(rWord32, rWord64); + r.CheckTypeError(rWord64, rWord32 | tInt32); + r.CheckTypeError(rWord32 | tInt32, rWord64); + r.CheckTypeError(rWord64, rWord32 | tUint32); + r.CheckTypeError(rWord32 | tUint32, rWord64); + + for (size_t i = 0; i < ARRAY_SIZE(all_reps); i++) { + for (size_t j = 0; j < ARRAY_SIZE(all_reps); j++) { + if (i == j) continue; + // Only a single from representation is allowed. + r.CheckTypeError(all_reps[i] | all_reps[j], rTagged); + } + } +} + + +TEST(CompleteMatrix) { + // TODO(titzer): test all variants in the matrix. + // rB + // tBrB + // tBrT + // rW32 + // tIrW32 + // tUrW32 + // rW64 + // tIrW64 + // tUrW64 + // rF64 + // tIrF64 + // tUrF64 + // tArF64 + // rT + // tArT +} diff --git a/deps/v8/test/cctest/compiler/test-run-deopt.cc b/deps/v8/test/cctest/compiler/test-run-deopt.cc new file mode 100644 index 00000000000..af173d6be6e --- /dev/null +++ b/deps/v8/test/cctest/compiler/test-run-deopt.cc @@ -0,0 +1,58 @@ +// Copyright 2014 the V8 project authors. All rights reserved. +// Use of this source code is governed by a BSD-style license that can be +// found in the LICENSE file. + +#include "src/v8.h" + +#include "test/cctest/compiler/function-tester.h" + +using namespace v8::internal; +using namespace v8::internal::compiler; + +#if V8_TURBOFAN_TARGET + +TEST(TurboSimpleDeopt) { + FLAG_allow_natives_syntax = true; + FLAG_turbo_deoptimization = true; + + FunctionTester T( + "(function f(a) {" + "var b = 1;" + "if (!%IsOptimized()) return 0;" + "%DeoptimizeFunction(f);" + "if (%IsOptimized()) return 0;" + "return a + b; })"); + + T.CheckCall(T.Val(2), T.Val(1)); +} + + +TEST(TurboSimpleDeoptInExpr) { + FLAG_allow_natives_syntax = true; + FLAG_turbo_deoptimization = true; + + FunctionTester T( + "(function f(a) {" + "var b = 1;" + "var c = 2;" + "if (!%IsOptimized()) return 0;" + "var d = b + (%DeoptimizeFunction(f), c);" + "if (%IsOptimized()) return 0;" + "return d + a; })"); + + T.CheckCall(T.Val(6), T.Val(3)); +} + +#endif + +TEST(TurboTrivialDeopt) { + FLAG_allow_natives_syntax = true; + FLAG_turbo_deoptimization = true; + + FunctionTester T( + "(function foo() {" + "%DeoptimizeFunction(foo);" + "return 1; })"); + + T.CheckCall(T.Val(1)); +} diff --git a/deps/v8/test/cctest/compiler/test-run-intrinsics.cc b/deps/v8/test/cctest/compiler/test-run-intrinsics.cc new file mode 100644 index 00000000000..a1b5676186a --- /dev/null +++ b/deps/v8/test/cctest/compiler/test-run-intrinsics.cc @@ -0,0 +1,211 @@ +// Copyright 2014 the V8 project authors. All rights reserved. +// Use of this source code is governed by a BSD-style license that can be +// found in the LICENSE file. 
+ +#include "src/v8.h" + +#include "test/cctest/compiler/function-tester.h" + +using namespace v8::internal; +using namespace v8::internal::compiler; + + +TEST(IsSmi) { + FunctionTester T("(function(a) { return %_IsSmi(a); })"); + + T.CheckTrue(T.Val(1)); + T.CheckFalse(T.Val(1.1)); + T.CheckFalse(T.Val(-0.0)); + T.CheckTrue(T.Val(-2)); + T.CheckFalse(T.Val(-2.3)); + T.CheckFalse(T.undefined()); +} + + +TEST(IsNonNegativeSmi) { + FunctionTester T("(function(a) { return %_IsNonNegativeSmi(a); })"); + + T.CheckTrue(T.Val(1)); + T.CheckFalse(T.Val(1.1)); + T.CheckFalse(T.Val(-0.0)); + T.CheckFalse(T.Val(-2)); + T.CheckFalse(T.Val(-2.3)); + T.CheckFalse(T.undefined()); +} + + +TEST(IsMinusZero) { + FunctionTester T("(function(a) { return %_IsMinusZero(a); })"); + + T.CheckFalse(T.Val(1)); + T.CheckFalse(T.Val(1.1)); + T.CheckTrue(T.Val(-0.0)); + T.CheckFalse(T.Val(-2)); + T.CheckFalse(T.Val(-2.3)); + T.CheckFalse(T.undefined()); +} + + +TEST(IsArray) { + FunctionTester T("(function(a) { return %_IsArray(a); })"); + + T.CheckFalse(T.NewObject("(function() {})")); + T.CheckTrue(T.NewObject("([1])")); + T.CheckFalse(T.NewObject("({})")); + T.CheckFalse(T.NewObject("(/x/)")); + T.CheckFalse(T.undefined()); + T.CheckFalse(T.null()); + T.CheckFalse(T.Val("x")); + T.CheckFalse(T.Val(1)); +} + + +TEST(IsObject) { + FunctionTester T("(function(a) { return %_IsObject(a); })"); + + T.CheckFalse(T.NewObject("(function() {})")); + T.CheckTrue(T.NewObject("([1])")); + T.CheckTrue(T.NewObject("({})")); + T.CheckTrue(T.NewObject("(/x/)")); + T.CheckFalse(T.undefined()); + T.CheckTrue(T.null()); + T.CheckFalse(T.Val("x")); + T.CheckFalse(T.Val(1)); +} + + +TEST(IsFunction) { + FunctionTester T("(function(a) { return %_IsFunction(a); })"); + + T.CheckTrue(T.NewObject("(function() {})")); + T.CheckFalse(T.NewObject("([1])")); + T.CheckFalse(T.NewObject("({})")); + T.CheckFalse(T.NewObject("(/x/)")); + T.CheckFalse(T.undefined()); + T.CheckFalse(T.null()); + T.CheckFalse(T.Val("x")); + T.CheckFalse(T.Val(1)); +} + + +TEST(IsRegExp) { + FunctionTester T("(function(a) { return %_IsRegExp(a); })"); + + T.CheckFalse(T.NewObject("(function() {})")); + T.CheckFalse(T.NewObject("([1])")); + T.CheckFalse(T.NewObject("({})")); + T.CheckTrue(T.NewObject("(/x/)")); + T.CheckFalse(T.undefined()); + T.CheckFalse(T.null()); + T.CheckFalse(T.Val("x")); + T.CheckFalse(T.Val(1)); +} + + +TEST(ClassOf) { + FunctionTester T("(function(a) { return %_ClassOf(a); })"); + + T.CheckCall(T.Val("Function"), T.NewObject("(function() {})")); + T.CheckCall(T.Val("Array"), T.NewObject("([1])")); + T.CheckCall(T.Val("Object"), T.NewObject("({})")); + T.CheckCall(T.Val("RegExp"), T.NewObject("(/x/)")); + T.CheckCall(T.null(), T.undefined()); + T.CheckCall(T.null(), T.null()); + T.CheckCall(T.null(), T.Val("x")); + T.CheckCall(T.null(), T.Val(1)); +} + + +TEST(ObjectEquals) { + FunctionTester T("(function(a,b) { return %_ObjectEquals(a,b); })"); + CompileRun("var o = {}"); + + T.CheckTrue(T.NewObject("(o)"), T.NewObject("(o)")); + T.CheckTrue(T.Val("internal"), T.Val("internal")); + T.CheckTrue(T.true_value(), T.true_value()); + T.CheckFalse(T.true_value(), T.false_value()); + T.CheckFalse(T.NewObject("({})"), T.NewObject("({})")); + T.CheckFalse(T.Val("a"), T.Val("b")); +} + + +TEST(ValueOf) { + FunctionTester T("(function(a) { return %_ValueOf(a); })"); + + T.CheckCall(T.Val("a"), T.Val("a")); + T.CheckCall(T.Val("b"), T.NewObject("(new String('b'))")); + T.CheckCall(T.Val(123), T.Val(123)); + T.CheckCall(T.Val(456), T.NewObject("(new 
Number(456))")); +} + + +TEST(SetValueOf) { + FunctionTester T("(function(a,b) { return %_SetValueOf(a,b); })"); + + T.CheckCall(T.Val("a"), T.NewObject("(new String)"), T.Val("a")); + T.CheckCall(T.Val(123), T.NewObject("(new Number)"), T.Val(123)); + T.CheckCall(T.Val("x"), T.undefined(), T.Val("x")); +} + + +TEST(StringCharFromCode) { + FunctionTester T("(function(a) { return %_StringCharFromCode(a); })"); + + T.CheckCall(T.Val("a"), T.Val(97)); + T.CheckCall(T.Val("\xE2\x9D\x8A"), T.Val(0x274A)); + T.CheckCall(T.Val(""), T.undefined()); +} + + +TEST(StringCharAt) { + FunctionTester T("(function(a,b) { return %_StringCharAt(a,b); })"); + + T.CheckCall(T.Val("e"), T.Val("huge fan!"), T.Val(3)); + T.CheckCall(T.Val("f"), T.Val("\xE2\x9D\x8A fan!"), T.Val(2)); + T.CheckCall(T.Val(""), T.Val("not a fan!"), T.Val(23)); +} + + +TEST(StringCharCodeAt) { + FunctionTester T("(function(a,b) { return %_StringCharCodeAt(a,b); })"); + + T.CheckCall(T.Val('e'), T.Val("huge fan!"), T.Val(3)); + T.CheckCall(T.Val('f'), T.Val("\xE2\x9D\x8A fan!"), T.Val(2)); + T.CheckCall(T.nan(), T.Val("not a fan!"), T.Val(23)); +} + + +TEST(StringAdd) { + FunctionTester T("(function(a,b) { return %_StringAdd(a,b); })"); + + T.CheckCall(T.Val("aaabbb"), T.Val("aaa"), T.Val("bbb")); + T.CheckCall(T.Val("aaa"), T.Val("aaa"), T.Val("")); + T.CheckCall(T.Val("bbb"), T.Val(""), T.Val("bbb")); +} + + +TEST(StringSubString) { + FunctionTester T("(function(a,b) { return %_SubString(a,b,b+3); })"); + + T.CheckCall(T.Val("aaa"), T.Val("aaabbb"), T.Val(0.0)); + T.CheckCall(T.Val("abb"), T.Val("aaabbb"), T.Val(2)); + T.CheckCall(T.Val("aaa"), T.Val("aaa"), T.Val(0.0)); +} + + +TEST(StringCompare) { + FunctionTester T("(function(a,b) { return %_StringCompare(a,b); })"); + + T.CheckCall(T.Val(-1), T.Val("aaa"), T.Val("bbb")); + T.CheckCall(T.Val(0.0), T.Val("bbb"), T.Val("bbb")); + T.CheckCall(T.Val(+1), T.Val("ccc"), T.Val("bbb")); +} + + +TEST(CallFunction) { + FunctionTester T("(function(a,b) { return %_CallFunction(a, 1, 2, 3, b); })"); + CompileRun("function f(a,b,c) { return a + b + c + this.d; }"); + + T.CheckCall(T.Val(129), T.NewObject("({d:123})"), T.NewObject("f")); + T.CheckCall(T.Val("6x"), T.NewObject("({d:'x'})"), T.NewObject("f")); +} diff --git a/deps/v8/test/cctest/compiler/test-run-jsbranches.cc b/deps/v8/test/cctest/compiler/test-run-jsbranches.cc new file mode 100644 index 00000000000..2eb4fa6d0f7 --- /dev/null +++ b/deps/v8/test/cctest/compiler/test-run-jsbranches.cc @@ -0,0 +1,262 @@ +// Copyright 2014 the V8 project authors. All rights reserved. +// Use of this source code is governed by a BSD-style license that can be +// found in the LICENSE file. + +#include "src/v8.h" + +#include "test/cctest/compiler/function-tester.h" + +using namespace v8::internal; +using namespace v8::internal::compiler; + +TEST(Conditional) { + FunctionTester T("(function(a) { return a ? 
23 : 42; })"); + + T.CheckCall(T.Val(23), T.true_value(), T.undefined()); + T.CheckCall(T.Val(42), T.false_value(), T.undefined()); + T.CheckCall(T.Val(42), T.undefined(), T.undefined()); + T.CheckCall(T.Val(42), T.Val(0.0), T.undefined()); + T.CheckCall(T.Val(23), T.Val(999), T.undefined()); + T.CheckCall(T.Val(23), T.Val("x"), T.undefined()); +} + + +TEST(LogicalAnd) { + FunctionTester T("(function(a,b) { return a && b; })"); + + T.CheckCall(T.true_value(), T.true_value(), T.true_value()); + T.CheckCall(T.false_value(), T.false_value(), T.true_value()); + T.CheckCall(T.false_value(), T.true_value(), T.false_value()); + T.CheckCall(T.false_value(), T.false_value(), T.false_value()); + + T.CheckCall(T.Val(999), T.Val(777), T.Val(999)); + T.CheckCall(T.Val(0.0), T.Val(0.0), T.Val(999)); + T.CheckCall(T.Val("b"), T.Val("a"), T.Val("b")); +} + + +TEST(LogicalOr) { + FunctionTester T("(function(a,b) { return a || b; })"); + + T.CheckCall(T.true_value(), T.true_value(), T.true_value()); + T.CheckCall(T.true_value(), T.false_value(), T.true_value()); + T.CheckCall(T.true_value(), T.true_value(), T.false_value()); + T.CheckCall(T.false_value(), T.false_value(), T.false_value()); + + T.CheckCall(T.Val(777), T.Val(777), T.Val(999)); + T.CheckCall(T.Val(999), T.Val(0.0), T.Val(999)); + T.CheckCall(T.Val("a"), T.Val("a"), T.Val("b")); +} + + +TEST(LogicalEffect) { + FunctionTester T("(function(a,b) { a && (b = a); return b; })"); + + T.CheckCall(T.true_value(), T.true_value(), T.true_value()); + T.CheckCall(T.true_value(), T.false_value(), T.true_value()); + T.CheckCall(T.true_value(), T.true_value(), T.false_value()); + T.CheckCall(T.false_value(), T.false_value(), T.false_value()); + + T.CheckCall(T.Val(777), T.Val(777), T.Val(999)); + T.CheckCall(T.Val(999), T.Val(0.0), T.Val(999)); + T.CheckCall(T.Val("a"), T.Val("a"), T.Val("b")); +} + + +TEST(IfStatement) { + FunctionTester T("(function(a) { if (a) { return 1; } else { return 2; } })"); + + T.CheckCall(T.Val(1), T.true_value(), T.undefined()); + T.CheckCall(T.Val(2), T.false_value(), T.undefined()); + T.CheckCall(T.Val(2), T.undefined(), T.undefined()); + T.CheckCall(T.Val(2), T.Val(0.0), T.undefined()); + T.CheckCall(T.Val(1), T.Val(999), T.undefined()); + T.CheckCall(T.Val(1), T.Val("x"), T.undefined()); +} + + +TEST(DoWhileStatement) { + FunctionTester T("(function(a,b) { do { a+=23; } while(a < b) return a; })"); + + T.CheckCall(T.Val(24), T.Val(1), T.Val(1)); + T.CheckCall(T.Val(24), T.Val(1), T.Val(23)); + T.CheckCall(T.Val(47), T.Val(1), T.Val(25)); + T.CheckCall(T.Val("str23"), T.Val("str"), T.Val("str")); +} + + +TEST(WhileStatement) { + FunctionTester T("(function(a,b) { while(a < b) { a+=23; } return a; })"); + + T.CheckCall(T.Val(1), T.Val(1), T.Val(1)); + T.CheckCall(T.Val(24), T.Val(1), T.Val(23)); + T.CheckCall(T.Val(47), T.Val(1), T.Val(25)); + T.CheckCall(T.Val("str"), T.Val("str"), T.Val("str")); +} + + +TEST(ForStatement) { + FunctionTester T("(function(a,b) { for (; a < b; a+=23) {} return a; })"); + + T.CheckCall(T.Val(1), T.Val(1), T.Val(1)); + T.CheckCall(T.Val(24), T.Val(1), T.Val(23)); + T.CheckCall(T.Val(47), T.Val(1), T.Val(25)); + T.CheckCall(T.Val("str"), T.Val("str"), T.Val("str")); +} + + +static void TestForIn(const char* code) { + FunctionTester T(code); + T.CheckCall(T.undefined(), T.undefined()); + T.CheckCall(T.undefined(), T.null()); + T.CheckCall(T.undefined(), T.NewObject("({})")); + T.CheckCall(T.undefined(), T.Val(1)); + T.CheckCall(T.Val("2"), T.Val("str")); + T.CheckCall(T.Val("a"), T.NewObject("({'a' 
: 1})")); + T.CheckCall(T.Val("2"), T.NewObject("([1, 2, 3])")); + T.CheckCall(T.Val("a"), T.NewObject("({'a' : 1, 'b' : 1})"), T.Val("b")); + T.CheckCall(T.Val("1"), T.NewObject("([1, 2, 3])"), T.Val("2")); +} + + +TEST(ForInStatement) { + // Variable assignment. + TestForIn( + "(function(a, b) {" + "var last;" + "for (var x in a) {" + " if (b) { delete a[b]; b = undefined; }" + " last = x;" + "}" + "return last;})"); + // Indexed assignment. + TestForIn( + "(function(a, b) {" + "var array = [0, 1, undefined];" + "for (array[2] in a) {" + " if (b) { delete a[b]; b = undefined; }" + "}" + "return array[2];})"); + // Named assignment. + TestForIn( + "(function(a, b) {" + "var obj = {'a' : undefined};" + "for (obj.a in a) {" + " if (b) { delete a[b]; b = undefined; }" + "}" + "return obj.a;})"); +} + + +TEST(SwitchStatement) { + const char* src = + "(function(a,b) {" + " var r = '-';" + " switch (a) {" + " case 'x' : r += 'X-';" + " case b + 'b': r += 'B-';" + " default : r += 'D-';" + " case 'y' : r += 'Y-';" + " }" + " return r;" + "})"; + FunctionTester T(src); + + T.CheckCall(T.Val("-X-B-D-Y-"), T.Val("x"), T.Val("B")); + T.CheckCall(T.Val("-B-D-Y-"), T.Val("Bb"), T.Val("B")); + T.CheckCall(T.Val("-D-Y-"), T.Val("z"), T.Val("B")); + T.CheckCall(T.Val("-Y-"), T.Val("y"), T.Val("B")); + + CompileRun("var c = 0; var o = { toString:function(){return c++} };"); + T.CheckCall(T.Val("-D-Y-"), T.Val("1b"), T.NewObject("o")); + T.CheckCall(T.Val("-B-D-Y-"), T.Val("1b"), T.NewObject("o")); + T.CheckCall(T.Val("-D-Y-"), T.Val("1b"), T.NewObject("o")); +} + + +TEST(BlockBreakStatement) { + FunctionTester T("(function(a,b) { L:{ if (a) break L; b=1; } return b; })"); + + T.CheckCall(T.Val(7), T.true_value(), T.Val(7)); + T.CheckCall(T.Val(1), T.false_value(), T.Val(7)); +} + + +TEST(BlockReturnStatement) { + FunctionTester T("(function(a,b) { L:{ if (a) b=1; return b; } })"); + + T.CheckCall(T.Val(1), T.true_value(), T.Val(7)); + T.CheckCall(T.Val(7), T.false_value(), T.Val(7)); +} + + +TEST(NestedIfConditional) { + FunctionTester T("(function(a,b) { if (a) { b = (b?b:7) + 1; } return b; })"); + + T.CheckCall(T.Val(4), T.false_value(), T.Val(4)); + T.CheckCall(T.Val(6), T.true_value(), T.Val(5)); + T.CheckCall(T.Val(8), T.true_value(), T.undefined()); +} + + +TEST(NestedIfLogical) { + const char* src = + "(function(a,b) {" + " if (a || b) { return 1; } else { return 2; }" + "})"; + FunctionTester T(src); + + T.CheckCall(T.Val(1), T.true_value(), T.true_value()); + T.CheckCall(T.Val(1), T.false_value(), T.true_value()); + T.CheckCall(T.Val(1), T.true_value(), T.false_value()); + T.CheckCall(T.Val(2), T.false_value(), T.false_value()); + T.CheckCall(T.Val(1), T.Val(1.0), T.Val(1.0)); + T.CheckCall(T.Val(1), T.Val(0.0), T.Val(1.0)); + T.CheckCall(T.Val(1), T.Val(1.0), T.Val(0.0)); + T.CheckCall(T.Val(2), T.Val(0.0), T.Val(0.0)); +} + + +TEST(NestedIfElseFor) { + const char* src = + "(function(a,b) {" + " if (!a) { return b - 3; } else { for (; a < b; a++); }" + " return a;" + "})"; + FunctionTester T(src); + + T.CheckCall(T.Val(1), T.false_value(), T.Val(4)); + T.CheckCall(T.Val(2), T.true_value(), T.Val(2)); + T.CheckCall(T.Val(3), T.Val(3), T.Val(1)); +} + + +TEST(NestedWhileWhile) { + const char* src = + "(function(a) {" + " var i = a; while (false) while(false) return i;" + " return i;" + "})"; + FunctionTester T(src); + + T.CheckCall(T.Val(2.0), T.Val(2.0), T.Val(-1.0)); + T.CheckCall(T.Val(65.0), T.Val(65.0), T.Val(-1.0)); +} + + +TEST(NestedForIf) { + FunctionTester T("(function(a,b) { for (; a > 
1; a--) if (b) return 1; })"); + + T.CheckCall(T.Val(1), T.Val(3), T.true_value()); + T.CheckCall(T.undefined(), T.Val(2), T.false_value()); + T.CheckCall(T.undefined(), T.Val(1), T.null()); +} + + +TEST(NestedForConditional) { + FunctionTester T("(function(a,b) { for (; a > 1; a--) return b ? 1 : 2; })"); + + T.CheckCall(T.Val(1), T.Val(3), T.true_value()); + T.CheckCall(T.Val(2), T.Val(2), T.false_value()); + T.CheckCall(T.undefined(), T.Val(1), T.null()); +} diff --git a/deps/v8/test/cctest/compiler/test-run-jscalls.cc b/deps/v8/test/cctest/compiler/test-run-jscalls.cc new file mode 100644 index 00000000000..2ad7e50467e --- /dev/null +++ b/deps/v8/test/cctest/compiler/test-run-jscalls.cc @@ -0,0 +1,235 @@ +// Copyright 2014 the V8 project authors. All rights reserved. +// Use of this source code is governed by a BSD-style license that can be +// found in the LICENSE file. + +#include "src/v8.h" + +#include "test/cctest/compiler/function-tester.h" + +using namespace v8::internal; +using namespace v8::internal::compiler; + +TEST(SimpleCall) { + FunctionTester T("(function(foo,a) { return foo(a); })"); + Handle<JSFunction> foo = T.NewFunction("(function(a) { return a; })"); + + T.CheckCall(T.Val(3), foo, T.Val(3)); + T.CheckCall(T.Val(3.1), foo, T.Val(3.1)); + T.CheckCall(foo, foo, foo); + T.CheckCall(T.Val("Abba"), foo, T.Val("Abba")); +} + + +TEST(SimpleCall2) { + FunctionTester T("(function(foo,a) { return foo(a); })"); + Handle<JSFunction> foo = T.NewFunction("(function(a) { return a; })"); + T.Compile(foo); + + T.CheckCall(T.Val(3), foo, T.Val(3)); + T.CheckCall(T.Val(3.1), foo, T.Val(3.1)); + T.CheckCall(foo, foo, foo); + T.CheckCall(T.Val("Abba"), foo, T.Val("Abba")); +} + + +TEST(ConstCall) { + FunctionTester T("(function(foo,a) { return foo(a,3); })"); + Handle<JSFunction> foo = T.NewFunction("(function(a,b) { return a + b; })"); + T.Compile(foo); + + T.CheckCall(T.Val(6), foo, T.Val(3)); + T.CheckCall(T.Val(6.1), foo, T.Val(3.1)); + T.CheckCall(T.Val("function (a,b) { return a + b; }3"), foo, foo); + T.CheckCall(T.Val("Abba3"), foo, T.Val("Abba")); +} + + +TEST(ConstCall2) { + FunctionTester T("(function(foo,a) { return foo(a,\"3\"); })"); + Handle<JSFunction> foo = T.NewFunction("(function(a,b) { return a + b; })"); + T.Compile(foo); + + T.CheckCall(T.Val("33"), foo, T.Val(3)); + T.CheckCall(T.Val("3.13"), foo, T.Val(3.1)); + T.CheckCall(T.Val("function (a,b) { return a + b; }3"), foo, foo); + T.CheckCall(T.Val("Abba3"), foo, T.Val("Abba")); +} + + +TEST(PropertyNamedCall) { + FunctionTester T("(function(a,b) { return a.foo(b,23); })"); + CompileRun("function foo(y,z) { return this.x + y + z; }"); + + T.CheckCall(T.Val(32), T.NewObject("({ foo:foo, x:4 })"), T.Val(5)); + T.CheckCall(T.Val("xy23"), T.NewObject("({ foo:foo, x:'x' })"), T.Val("y")); + T.CheckCall(T.nan(), T.NewObject("({ foo:foo, y:0 })"), T.Val(3)); +} + + +TEST(PropertyKeyedCall) { + FunctionTester T("(function(a,b) { var f = 'foo'; return a[f](b,23); })"); + CompileRun("function foo(y,z) { return this.x + y + z; }"); + + T.CheckCall(T.Val(32), T.NewObject("({ foo:foo, x:4 })"), T.Val(5)); + T.CheckCall(T.Val("xy23"), T.NewObject("({ foo:foo, x:'x' })"), T.Val("y")); + T.CheckCall(T.nan(), T.NewObject("({ foo:foo, y:0 })"), T.Val(3)); +} + + +TEST(GlobalCall) { + FunctionTester T("(function(a,b) { return foo(a,b); })"); + CompileRun("function foo(a,b) { return a + b + this.c; }"); + CompileRun("var c = 23;"); + + T.CheckCall(T.Val(32), T.Val(4), T.Val(5)); + T.CheckCall(T.Val("xy23"), T.Val("x"), T.Val("y")); + 
T.CheckCall(T.nan(), T.undefined(), T.Val(3)); +} + + +TEST(LookupCall) { + FunctionTester T("(function(a,b) { with (a) { return foo(a,b); } })"); + + CompileRun("function f1(a,b) { return a.val + b; }"); + T.CheckCall(T.Val(5), T.NewObject("({ foo:f1, val:2 })"), T.Val(3)); + T.CheckCall(T.Val("xy"), T.NewObject("({ foo:f1, val:'x' })"), T.Val("y")); + + CompileRun("function f2(a,b) { return this.val + b; }"); + T.CheckCall(T.Val(9), T.NewObject("({ foo:f2, val:4 })"), T.Val(5)); + T.CheckCall(T.Val("xy"), T.NewObject("({ foo:f2, val:'x' })"), T.Val("y")); +} + + +TEST(MismatchCallTooFew) { + FunctionTester T("(function(a,b) { return foo(a,b); })"); + CompileRun("function foo(a,b,c) { return a + b + c; }"); + + T.CheckCall(T.nan(), T.Val(23), T.Val(42)); + T.CheckCall(T.nan(), T.Val(4.2), T.Val(2.3)); + T.CheckCall(T.Val("abundefined"), T.Val("a"), T.Val("b")); +} + + +TEST(MismatchCallTooMany) { + FunctionTester T("(function(a,b) { return foo(a,b); })"); + CompileRun("function foo(a) { return a; }"); + + T.CheckCall(T.Val(23), T.Val(23), T.Val(42)); + T.CheckCall(T.Val(4.2), T.Val(4.2), T.Val(2.3)); + T.CheckCall(T.Val("a"), T.Val("a"), T.Val("b")); +} + + +TEST(ConstructorCall) { + FunctionTester T("(function(a,b) { return new foo(a,b).value; })"); + CompileRun("function foo(a,b) { return { value: a + b + this.c }; }"); + CompileRun("foo.prototype.c = 23;"); + + T.CheckCall(T.Val(32), T.Val(4), T.Val(5)); + T.CheckCall(T.Val("xy23"), T.Val("x"), T.Val("y")); + T.CheckCall(T.nan(), T.undefined(), T.Val(3)); +} + + +// TODO(titzer): factor these out into test-runtime-calls.cc +TEST(RuntimeCallCPP1) { + FLAG_allow_natives_syntax = true; + FunctionTester T("(function(a) { return %ToBool(a); })"); + + T.CheckCall(T.true_value(), T.Val(23), T.undefined()); + T.CheckCall(T.true_value(), T.Val(4.2), T.undefined()); + T.CheckCall(T.true_value(), T.Val("str"), T.undefined()); + T.CheckCall(T.true_value(), T.true_value(), T.undefined()); + T.CheckCall(T.false_value(), T.false_value(), T.undefined()); + T.CheckCall(T.false_value(), T.undefined(), T.undefined()); + T.CheckCall(T.false_value(), T.Val(0.0), T.undefined()); +} + + +TEST(RuntimeCallCPP2) { + FLAG_allow_natives_syntax = true; + FunctionTester T("(function(a,b) { return %NumberAdd(a, b); })"); + + T.CheckCall(T.Val(65), T.Val(42), T.Val(23)); + T.CheckCall(T.Val(19), T.Val(42), T.Val(-23)); + T.CheckCall(T.Val(6.5), T.Val(4.2), T.Val(2.3)); +} + + +TEST(RuntimeCallJS) { + FLAG_allow_natives_syntax = true; + FunctionTester T("(function(a) { return %ToString(a); })"); + + T.CheckCall(T.Val("23"), T.Val(23), T.undefined()); + T.CheckCall(T.Val("4.2"), T.Val(4.2), T.undefined()); + T.CheckCall(T.Val("str"), T.Val("str"), T.undefined()); + T.CheckCall(T.Val("true"), T.true_value(), T.undefined()); + T.CheckCall(T.Val("false"), T.false_value(), T.undefined()); + T.CheckCall(T.Val("undefined"), T.undefined(), T.undefined()); +} + + +TEST(RuntimeCallInline) { + FLAG_allow_natives_syntax = true; + FunctionTester T("(function(a) { return %_IsObject(a); })"); + + T.CheckCall(T.false_value(), T.Val(23), T.undefined()); + T.CheckCall(T.false_value(), T.Val(4.2), T.undefined()); + T.CheckCall(T.false_value(), T.Val("str"), T.undefined()); + T.CheckCall(T.false_value(), T.true_value(), T.undefined()); + T.CheckCall(T.false_value(), T.false_value(), T.undefined()); + T.CheckCall(T.false_value(), T.undefined(), T.undefined()); + T.CheckCall(T.true_value(), T.NewObject("({})"), T.undefined()); + T.CheckCall(T.true_value(), T.NewObject("([])"), 
T.undefined());
+}
+
+
+TEST(RuntimeCallBooleanize) {
+  // TODO(turbofan): %Booleanize will disappear; don't hesitate to remove this
+  // test case. The two-argument case is already covered by the test above.
+  FLAG_allow_natives_syntax = true;
+  FunctionTester T("(function(a,b) { return %Booleanize(a, b); })");
+
+  T.CheckCall(T.true_value(), T.Val(-1), T.Val(Token::LT));
+  T.CheckCall(T.false_value(), T.Val(-1), T.Val(Token::EQ));
+  T.CheckCall(T.false_value(), T.Val(-1), T.Val(Token::GT));
+
+  T.CheckCall(T.false_value(), T.Val(0.0), T.Val(Token::LT));
+  T.CheckCall(T.true_value(), T.Val(0.0), T.Val(Token::EQ));
+  T.CheckCall(T.false_value(), T.Val(0.0), T.Val(Token::GT));
+
+  T.CheckCall(T.false_value(), T.Val(1), T.Val(Token::LT));
+  T.CheckCall(T.false_value(), T.Val(1), T.Val(Token::EQ));
+  T.CheckCall(T.true_value(), T.Val(1), T.Val(Token::GT));
+}
+
+
+TEST(EvalCall) {
+  FunctionTester T("(function(a,b) { return eval(a); })");
+  Handle<JSObject> g(T.function->context()->global_object()->global_proxy());
+
+  T.CheckCall(T.Val(23), T.Val("17 + 6"), T.undefined());
+  T.CheckCall(T.Val("'Y'; a"), T.Val("'Y'; a"), T.Val("b-val"));
+  T.CheckCall(T.Val("b-val"), T.Val("'Y'; b"), T.Val("b-val"));
+  T.CheckCall(g, T.Val("this"), T.undefined());
+  T.CheckCall(g, T.Val("'use strict'; this"), T.undefined());
+
+  CompileRun("eval = function(x) { return x; }");
+  T.CheckCall(T.Val("17 + 6"), T.Val("17 + 6"), T.undefined());
+
+  CompileRun("eval = function(x) { return this; }");
+  T.CheckCall(g, T.Val("17 + 6"), T.undefined());
+
+  CompileRun("eval = function(x) { 'use strict'; return this; }");
+  T.CheckCall(T.undefined(), T.Val("17 + 6"), T.undefined());
+}
+
+
+TEST(ReceiverPatching) {
+  // TODO(turbofan): Note that this test only checks that the function prologue
+  // patches an undefined receiver to the global receiver. If this starts to
+  // fail once we fix the calling protocol, just remove this test.
+  FunctionTester T("(function(a) { return this; })");
+  Handle<JSObject> g(T.function->context()->global_object()->global_proxy());
+  T.CheckCall(g, T.undefined());
+}
diff --git a/deps/v8/test/cctest/compiler/test-run-jsexceptions.cc b/deps/v8/test/cctest/compiler/test-run-jsexceptions.cc
new file mode 100644
index 00000000000..0712ab62057
--- /dev/null
+++ b/deps/v8/test/cctest/compiler/test-run-jsexceptions.cc
@@ -0,0 +1,45 @@
+// Copyright 2014 the V8 project authors. All rights reserved.
+// Use of this source code is governed by a BSD-style license that can be
+// found in the LICENSE file.
+ +#include "src/v8.h" + +#include "test/cctest/compiler/function-tester.h" + +using namespace v8::internal; +using namespace v8::internal::compiler; + +TEST(Throw) { + FunctionTester T("(function(a,b) { if (a) { throw b; } else { return b; }})"); + + T.CheckThrows(T.true_value(), T.NewObject("new Error")); + T.CheckCall(T.Val(23), T.false_value(), T.Val(23)); +} + + +TEST(ThrowSourcePosition) { + static const char* src = + "(function(a, b) { \n" + " if (a == 1) throw 1; \n" + " if (a == 2) {throw 2} \n" + " if (a == 3) {0;throw 3}\n" + " throw 4; \n" + "}) "; + FunctionTester T(src); + v8::Handle<v8::Message> message; + + message = T.CheckThrowsReturnMessage(T.Val(1), T.undefined()); + CHECK(!message.IsEmpty()); + CHECK_EQ(2, message->GetLineNumber()); + CHECK_EQ(40, message->GetStartPosition()); + + message = T.CheckThrowsReturnMessage(T.Val(2), T.undefined()); + CHECK(!message.IsEmpty()); + CHECK_EQ(3, message->GetLineNumber()); + CHECK_EQ(67, message->GetStartPosition()); + + message = T.CheckThrowsReturnMessage(T.Val(3), T.undefined()); + CHECK(!message.IsEmpty()); + CHECK_EQ(4, message->GetLineNumber()); + CHECK_EQ(95, message->GetStartPosition()); +} diff --git a/deps/v8/test/cctest/compiler/test-run-jsops.cc b/deps/v8/test/cctest/compiler/test-run-jsops.cc new file mode 100644 index 00000000000..eb39760ff7e --- /dev/null +++ b/deps/v8/test/cctest/compiler/test-run-jsops.cc @@ -0,0 +1,524 @@ +// Copyright 2014 the V8 project authors. All rights reserved. +// Use of this source code is governed by a BSD-style license that can be +// found in the LICENSE file. + +#include "src/v8.h" + +#include "test/cctest/compiler/function-tester.h" + +using namespace v8::internal; +using namespace v8::internal::compiler; + +TEST(BinopAdd) { + FunctionTester T("(function(a,b) { return a + b; })"); + + T.CheckCall(3, 1, 2); + T.CheckCall(-11, -2, -9); + T.CheckCall(-11, -1.5, -9.5); + T.CheckCall(T.Val("AB"), T.Val("A"), T.Val("B")); + T.CheckCall(T.Val("A11"), T.Val("A"), T.Val(11)); + T.CheckCall(T.Val("12B"), T.Val(12), T.Val("B")); + T.CheckCall(T.Val("38"), T.Val("3"), T.Val("8")); + T.CheckCall(T.Val("31"), T.Val("3"), T.NewObject("([1])")); + T.CheckCall(T.Val("3[object Object]"), T.Val("3"), T.NewObject("({})")); +} + + +TEST(BinopSubtract) { + FunctionTester T("(function(a,b) { return a - b; })"); + + T.CheckCall(3, 4, 1); + T.CheckCall(3.0, 4.5, 1.5); + T.CheckCall(T.Val(-9), T.Val("0"), T.Val(9)); + T.CheckCall(T.Val(-9), T.Val(0.0), T.Val("9")); + T.CheckCall(T.Val(1), T.Val("3"), T.Val("2")); + T.CheckCall(T.nan(), T.Val("3"), T.Val("B")); + T.CheckCall(T.Val(2), T.Val("3"), T.NewObject("([1])")); + T.CheckCall(T.nan(), T.Val("3"), T.NewObject("({})")); +} + + +TEST(BinopMultiply) { + FunctionTester T("(function(a,b) { return a * b; })"); + + T.CheckCall(6, 3, 2); + T.CheckCall(4.5, 2.0, 2.25); + T.CheckCall(T.Val(6), T.Val("3"), T.Val(2)); + T.CheckCall(T.Val(4.5), T.Val(2.0), T.Val("2.25")); + T.CheckCall(T.Val(6), T.Val("3"), T.Val("2")); + T.CheckCall(T.nan(), T.Val("3"), T.Val("B")); + T.CheckCall(T.Val(3), T.Val("3"), T.NewObject("([1])")); + T.CheckCall(T.nan(), T.Val("3"), T.NewObject("({})")); +} + + +TEST(BinopDivide) { + FunctionTester T("(function(a,b) { return a / b; })"); + + T.CheckCall(2, 8, 4); + T.CheckCall(2.1, 8.4, 4); + T.CheckCall(V8_INFINITY, 8, 0); + T.CheckCall(-V8_INFINITY, -8, 0); + T.CheckCall(T.infinity(), T.Val(8), T.Val("0")); + T.CheckCall(T.minus_infinity(), T.Val("-8"), T.Val(0.0)); + T.CheckCall(T.Val(1.5), T.Val("3"), T.Val("2")); + 
T.CheckCall(T.nan(), T.Val("3"), T.Val("B")); + T.CheckCall(T.Val(1.5), T.Val("3"), T.NewObject("([2])")); + T.CheckCall(T.nan(), T.Val("3"), T.NewObject("({})")); +} + + +TEST(BinopModulus) { + FunctionTester T("(function(a,b) { return a % b; })"); + + T.CheckCall(3, 8, 5); + T.CheckCall(T.Val(3), T.Val("8"), T.Val(5)); + T.CheckCall(T.Val(3), T.Val(8), T.Val("5")); + T.CheckCall(T.Val(1), T.Val("3"), T.Val("2")); + T.CheckCall(T.nan(), T.Val("3"), T.Val("B")); + T.CheckCall(T.Val(1), T.Val("3"), T.NewObject("([2])")); + T.CheckCall(T.nan(), T.Val("3"), T.NewObject("({})")); +} + + +TEST(BinopShiftLeft) { + FunctionTester T("(function(a,b) { return a << b; })"); + + T.CheckCall(4, 2, 1); + T.CheckCall(T.Val(4), T.Val("2"), T.Val(1)); + T.CheckCall(T.Val(4), T.Val(2), T.Val("1")); +} + + +TEST(BinopShiftRight) { + FunctionTester T("(function(a,b) { return a >> b; })"); + + T.CheckCall(4, 8, 1); + T.CheckCall(-4, -8, 1); + T.CheckCall(T.Val(4), T.Val("8"), T.Val(1)); + T.CheckCall(T.Val(4), T.Val(8), T.Val("1")); +} + + +TEST(BinopShiftRightLogical) { + FunctionTester T("(function(a,b) { return a >>> b; })"); + + T.CheckCall(4, 8, 1); + T.CheckCall(0x7ffffffc, -8, 1); + T.CheckCall(T.Val(4), T.Val("8"), T.Val(1)); + T.CheckCall(T.Val(4), T.Val(8), T.Val("1")); +} + + +TEST(BinopAnd) { + FunctionTester T("(function(a,b) { return a & b; })"); + + T.CheckCall(7, 7, 15); + T.CheckCall(7, 15, 7); + T.CheckCall(T.Val(7), T.Val("15"), T.Val(7)); + T.CheckCall(T.Val(7), T.Val(15), T.Val("7")); +} + + +TEST(BinopOr) { + FunctionTester T("(function(a,b) { return a | b; })"); + + T.CheckCall(6, 4, 2); + T.CheckCall(6, 2, 4); + T.CheckCall(T.Val(6), T.Val("2"), T.Val(4)); + T.CheckCall(T.Val(6), T.Val(2), T.Val("4")); +} + + +TEST(BinopXor) { + FunctionTester T("(function(a,b) { return a ^ b; })"); + + T.CheckCall(7, 15, 8); + T.CheckCall(7, 8, 15); + T.CheckCall(T.Val(7), T.Val("8"), T.Val(15)); + T.CheckCall(T.Val(7), T.Val(8), T.Val("15")); +} + + +TEST(BinopStrictEqual) { + FunctionTester T("(function(a,b) { return a === b; })"); + + T.CheckTrue(7, 7); + T.CheckFalse(7, 8); + T.CheckTrue(7.1, 7.1); + T.CheckFalse(7.1, 8.1); + + T.CheckTrue(T.Val("7.1"), T.Val("7.1")); + T.CheckFalse(T.Val(7.1), T.Val("7.1")); + T.CheckFalse(T.Val(7), T.undefined()); + T.CheckFalse(T.undefined(), T.Val(7)); + + CompileRun("var o = { desc : 'I am a singleton' }"); + T.CheckFalse(T.NewObject("([1])"), T.NewObject("([1])")); + T.CheckFalse(T.NewObject("({})"), T.NewObject("({})")); + T.CheckTrue(T.NewObject("(o)"), T.NewObject("(o)")); +} + + +TEST(BinopEqual) { + FunctionTester T("(function(a,b) { return a == b; })"); + + T.CheckTrue(7, 7); + T.CheckFalse(7, 8); + T.CheckTrue(7.1, 7.1); + T.CheckFalse(7.1, 8.1); + + T.CheckTrue(T.Val("7.1"), T.Val("7.1")); + T.CheckTrue(T.Val(7.1), T.Val("7.1")); + + CompileRun("var o = { desc : 'I am a singleton' }"); + T.CheckFalse(T.NewObject("([1])"), T.NewObject("([1])")); + T.CheckFalse(T.NewObject("({})"), T.NewObject("({})")); + T.CheckTrue(T.NewObject("(o)"), T.NewObject("(o)")); +} + + +TEST(BinopNotEqual) { + FunctionTester T("(function(a,b) { return a != b; })"); + + T.CheckFalse(7, 7); + T.CheckTrue(7, 8); + T.CheckFalse(7.1, 7.1); + T.CheckTrue(7.1, 8.1); + + T.CheckFalse(T.Val("7.1"), T.Val("7.1")); + T.CheckFalse(T.Val(7.1), T.Val("7.1")); + + CompileRun("var o = { desc : 'I am a singleton' }"); + T.CheckTrue(T.NewObject("([1])"), T.NewObject("([1])")); + T.CheckTrue(T.NewObject("({})"), T.NewObject("({})")); + T.CheckFalse(T.NewObject("(o)"), T.NewObject("(o)")); +} + 
+ +TEST(BinopLessThan) { + FunctionTester T("(function(a,b) { return a < b; })"); + + T.CheckTrue(7, 8); + T.CheckFalse(8, 7); + T.CheckTrue(-8.1, -8); + T.CheckFalse(-8, -8.1); + T.CheckFalse(0.111, 0.111); + + T.CheckFalse(T.Val("7.1"), T.Val("7.1")); + T.CheckFalse(T.Val(7.1), T.Val("6.1")); + T.CheckFalse(T.Val(7.1), T.Val("7.1")); + T.CheckTrue(T.Val(7.1), T.Val("8.1")); +} + + +TEST(BinopLessThanEqual) { + FunctionTester T("(function(a,b) { return a <= b; })"); + + T.CheckTrue(7, 8); + T.CheckFalse(8, 7); + T.CheckTrue(-8.1, -8); + T.CheckFalse(-8, -8.1); + T.CheckTrue(0.111, 0.111); + + T.CheckTrue(T.Val("7.1"), T.Val("7.1")); + T.CheckFalse(T.Val(7.1), T.Val("6.1")); + T.CheckTrue(T.Val(7.1), T.Val("7.1")); + T.CheckTrue(T.Val(7.1), T.Val("8.1")); +} + + +TEST(BinopGreaterThan) { + FunctionTester T("(function(a,b) { return a > b; })"); + + T.CheckFalse(7, 8); + T.CheckTrue(8, 7); + T.CheckFalse(-8.1, -8); + T.CheckTrue(-8, -8.1); + T.CheckFalse(0.111, 0.111); + + T.CheckFalse(T.Val("7.1"), T.Val("7.1")); + T.CheckTrue(T.Val(7.1), T.Val("6.1")); + T.CheckFalse(T.Val(7.1), T.Val("7.1")); + T.CheckFalse(T.Val(7.1), T.Val("8.1")); +} + + +TEST(BinopGreaterThanOrEqual) { + FunctionTester T("(function(a,b) { return a >= b; })"); + + T.CheckFalse(7, 8); + T.CheckTrue(8, 7); + T.CheckFalse(-8.1, -8); + T.CheckTrue(-8, -8.1); + T.CheckTrue(0.111, 0.111); + + T.CheckTrue(T.Val("7.1"), T.Val("7.1")); + T.CheckTrue(T.Val(7.1), T.Val("6.1")); + T.CheckTrue(T.Val(7.1), T.Val("7.1")); + T.CheckFalse(T.Val(7.1), T.Val("8.1")); +} + + +TEST(BinopIn) { + FunctionTester T("(function(a,b) { return a in b; })"); + + T.CheckTrue(T.Val("x"), T.NewObject("({x:23})")); + T.CheckFalse(T.Val("y"), T.NewObject("({x:42})")); + T.CheckFalse(T.Val(123), T.NewObject("({x:65})")); + T.CheckTrue(T.Val(1), T.NewObject("([1,2,3])")); +} + + +TEST(BinopInstanceOf) { + FunctionTester T("(function(a,b) { return a instanceof b; })"); + + T.CheckTrue(T.NewObject("(new Number(23))"), T.NewObject("Number")); + T.CheckFalse(T.NewObject("(new Number(23))"), T.NewObject("String")); + T.CheckFalse(T.NewObject("(new String('a'))"), T.NewObject("Number")); + T.CheckTrue(T.NewObject("(new String('b'))"), T.NewObject("String")); + T.CheckFalse(T.Val(1), T.NewObject("Number")); + T.CheckFalse(T.Val("abc"), T.NewObject("String")); + + CompileRun("var bound = (function() {}).bind(undefined)"); + T.CheckTrue(T.NewObject("(new bound())"), T.NewObject("bound")); + T.CheckTrue(T.NewObject("(new bound())"), T.NewObject("Object")); + T.CheckFalse(T.NewObject("(new bound())"), T.NewObject("Number")); +} + + +TEST(UnopNot) { + FunctionTester T("(function(a) { return !a; })"); + + T.CheckCall(T.true_value(), T.false_value(), T.undefined()); + T.CheckCall(T.false_value(), T.true_value(), T.undefined()); + T.CheckCall(T.true_value(), T.Val(0.0), T.undefined()); + T.CheckCall(T.false_value(), T.Val(123), T.undefined()); + T.CheckCall(T.false_value(), T.Val("x"), T.undefined()); + T.CheckCall(T.true_value(), T.undefined(), T.undefined()); + T.CheckCall(T.true_value(), T.nan(), T.undefined()); +} + + +TEST(UnopCountPost) { + FunctionTester T("(function(a) { return a++; })"); + + T.CheckCall(T.Val(0.0), T.Val(0.0), T.undefined()); + T.CheckCall(T.Val(2.3), T.Val(2.3), T.undefined()); + T.CheckCall(T.Val(123), T.Val(123), T.undefined()); + T.CheckCall(T.Val(7), T.Val("7"), T.undefined()); + T.CheckCall(T.nan(), T.Val("x"), T.undefined()); + T.CheckCall(T.nan(), T.undefined(), T.undefined()); + T.CheckCall(T.Val(1.0), T.true_value(), T.undefined()); + 
T.CheckCall(T.Val(0.0), T.false_value(), T.undefined()); + T.CheckCall(T.nan(), T.nan(), T.undefined()); +} + + +TEST(UnopCountPre) { + FunctionTester T("(function(a) { return ++a; })"); + + T.CheckCall(T.Val(1.0), T.Val(0.0), T.undefined()); + T.CheckCall(T.Val(3.3), T.Val(2.3), T.undefined()); + T.CheckCall(T.Val(124), T.Val(123), T.undefined()); + T.CheckCall(T.Val(8), T.Val("7"), T.undefined()); + T.CheckCall(T.nan(), T.Val("x"), T.undefined()); + T.CheckCall(T.nan(), T.undefined(), T.undefined()); + T.CheckCall(T.Val(2.0), T.true_value(), T.undefined()); + T.CheckCall(T.Val(1.0), T.false_value(), T.undefined()); + T.CheckCall(T.nan(), T.nan(), T.undefined()); +} + + +TEST(PropertyNamedLoad) { + FunctionTester T("(function(a,b) { return a.x; })"); + + T.CheckCall(T.Val(23), T.NewObject("({x:23})"), T.undefined()); + T.CheckCall(T.undefined(), T.NewObject("({y:23})"), T.undefined()); +} + + +TEST(PropertyKeyedLoad) { + FunctionTester T("(function(a,b) { return a[b]; })"); + + T.CheckCall(T.Val(23), T.NewObject("({x:23})"), T.Val("x")); + T.CheckCall(T.Val(42), T.NewObject("([23,42,65])"), T.Val(1)); + T.CheckCall(T.undefined(), T.NewObject("({x:23})"), T.Val("y")); + T.CheckCall(T.undefined(), T.NewObject("([23,42,65])"), T.Val(4)); +} + + +TEST(PropertyNamedStore) { + FunctionTester T("(function(a) { a.x = 7; return a.x; })"); + + T.CheckCall(T.Val(7), T.NewObject("({})"), T.undefined()); + T.CheckCall(T.Val(7), T.NewObject("({x:23})"), T.undefined()); +} + + +TEST(PropertyKeyedStore) { + FunctionTester T("(function(a,b) { a[b] = 7; return a.x; })"); + + T.CheckCall(T.Val(7), T.NewObject("({})"), T.Val("x")); + T.CheckCall(T.Val(7), T.NewObject("({x:23})"), T.Val("x")); + T.CheckCall(T.Val(9), T.NewObject("({x:9})"), T.Val("y")); +} + + +TEST(PropertyNamedDelete) { + FunctionTester T("(function(a) { return delete a.x; })"); + + CompileRun("var o = Object.create({}, { x: { value:23 } });"); + T.CheckTrue(T.NewObject("({x:42})"), T.undefined()); + T.CheckTrue(T.NewObject("({})"), T.undefined()); + T.CheckFalse(T.NewObject("(o)"), T.undefined()); +} + + +TEST(PropertyKeyedDelete) { + FunctionTester T("(function(a, b) { return delete a[b]; })"); + + CompileRun("function getX() { return 'x'; }"); + CompileRun("var o = Object.create({}, { x: { value:23 } });"); + T.CheckTrue(T.NewObject("({x:42})"), T.Val("x")); + T.CheckFalse(T.NewObject("(o)"), T.Val("x")); + T.CheckFalse(T.NewObject("(o)"), T.NewObject("({toString:getX})")); +} + + +TEST(GlobalLoad) { + FunctionTester T("(function() { return g; })"); + + T.CheckThrows(T.undefined(), T.undefined()); + CompileRun("var g = 23;"); + T.CheckCall(T.Val(23)); +} + + +TEST(GlobalStoreSloppy) { + FunctionTester T("(function(a,b) { g = a + b; return g; })"); + + T.CheckCall(T.Val(33), T.Val(22), T.Val(11)); + CompileRun("delete g"); + CompileRun("const g = 23"); + T.CheckCall(T.Val(23), T.Val(55), T.Val(44)); +} + + +TEST(GlobalStoreStrict) { + FunctionTester T("(function(a,b) { 'use strict'; g = a + b; return g; })"); + + T.CheckThrows(T.Val(22), T.Val(11)); + CompileRun("var g = 'a global variable';"); + T.CheckCall(T.Val(33), T.Val(22), T.Val(11)); +} + + +TEST(ContextLoad) { + FunctionTester T("(function(a,b) { (function(){a}); return a + b; })"); + + T.CheckCall(T.Val(65), T.Val(23), T.Val(42)); + T.CheckCall(T.Val("ab"), T.Val("a"), T.Val("b")); +} + + +TEST(ContextStore) { + FunctionTester T("(function(a,b) { (function(){x}); var x = a; return x; })"); + + T.CheckCall(T.Val(23), T.Val(23), T.undefined()); + T.CheckCall(T.Val("a"), 
T.Val("a"), T.undefined()); +} + + +TEST(LookupLoad) { + FunctionTester T("(function(a,b) { with(a) { return x + b; } })"); + + T.CheckCall(T.Val(24), T.NewObject("({x:23})"), T.Val(1)); + T.CheckCall(T.Val(32), T.NewObject("({x:23, b:9})"), T.Val(2)); + T.CheckCall(T.Val(45), T.NewObject("({__proto__:{x:42}})"), T.Val(3)); + T.CheckCall(T.Val(69), T.NewObject("({get x() { return 65; }})"), T.Val(4)); +} + + +TEST(LookupStore) { + FunctionTester T("(function(a,b) { var x; with(a) { x = b; } return x; })"); + + T.CheckCall(T.undefined(), T.NewObject("({x:23})"), T.Val(1)); + T.CheckCall(T.Val(2), T.NewObject("({y:23})"), T.Val(2)); + T.CheckCall(T.Val(23), T.NewObject("({b:23})"), T.Val(3)); + T.CheckCall(T.undefined(), T.NewObject("({__proto__:{x:42}})"), T.Val(4)); +} + + +TEST(BlockLoadStore) { + FLAG_harmony_scoping = true; + FunctionTester T("(function(a) { 'use strict'; { let x = a+a; return x; }})"); + + T.CheckCall(T.Val(46), T.Val(23)); + T.CheckCall(T.Val("aa"), T.Val("a")); +} + + +TEST(BlockLoadStoreNested) { + FLAG_harmony_scoping = true; + const char* src = + "(function(a,b) {" + "'use strict';" + "{ let x = a, y = a;" + " { let y = b;" + " return x + y;" + " }" + "}})"; + FunctionTester T(src); + + T.CheckCall(T.Val(65), T.Val(23), T.Val(42)); + T.CheckCall(T.Val("ab"), T.Val("a"), T.Val("b")); +} + + +TEST(ObjectLiteralComputed) { + FunctionTester T("(function(a,b) { o = { x:a+b }; return o.x; })"); + + T.CheckCall(T.Val(65), T.Val(23), T.Val(42)); + T.CheckCall(T.Val("ab"), T.Val("a"), T.Val("b")); +} + + +TEST(ObjectLiteralNonString) { + FunctionTester T("(function(a,b) { o = { 7:a+b }; return o[7]; })"); + + T.CheckCall(T.Val(65), T.Val(23), T.Val(42)); + T.CheckCall(T.Val("ab"), T.Val("a"), T.Val("b")); +} + + +TEST(ObjectLiteralPrototype) { + FunctionTester T("(function(a) { o = { __proto__:a }; return o.x; })"); + + T.CheckCall(T.Val(23), T.NewObject("({x:23})"), T.undefined()); + T.CheckCall(T.undefined(), T.NewObject("({y:42})"), T.undefined()); +} + + +TEST(ObjectLiteralGetter) { + FunctionTester T("(function(a) { o = { get x() {return a} }; return o.x; })"); + + T.CheckCall(T.Val(23), T.Val(23), T.undefined()); + T.CheckCall(T.Val("x"), T.Val("x"), T.undefined()); +} + + +TEST(ArrayLiteral) { + FunctionTester T("(function(a,b) { o = [1, a + b, 3]; return o[1]; })"); + + T.CheckCall(T.Val(65), T.Val(23), T.Val(42)); + T.CheckCall(T.Val("ab"), T.Val("a"), T.Val("b")); +} + + +TEST(RegExpLiteral) { + FunctionTester T("(function(a) { o = /b/; return o.test(a); })"); + + T.CheckTrue(T.Val("abc")); + T.CheckFalse(T.Val("xyz")); +} diff --git a/deps/v8/test/cctest/compiler/test-run-machops.cc b/deps/v8/test/cctest/compiler/test-run-machops.cc new file mode 100644 index 00000000000..6786f387414 --- /dev/null +++ b/deps/v8/test/cctest/compiler/test-run-machops.cc @@ -0,0 +1,4077 @@ +// Copyright 2014 the V8 project authors. All rights reserved. +// Use of this source code is governed by a BSD-style license that can be +// found in the LICENSE file. 
+ +#include <functional> +#include <limits> + +#include "test/cctest/cctest.h" +#include "test/cctest/compiler/codegen-tester.h" +#include "test/cctest/compiler/value-helper.h" + +#if V8_TURBOFAN_TARGET + +using namespace v8::internal; +using namespace v8::internal::compiler; + +typedef RawMachineAssembler::Label MLabel; + +TEST(RunInt32Add) { + RawMachineAssemblerTester<int32_t> m; + Node* add = m.Int32Add(m.Int32Constant(0), m.Int32Constant(1)); + m.Return(add); + CHECK_EQ(1, m.Call()); +} + + +static Node* Int32Input(RawMachineAssemblerTester<int32_t>* m, int index) { + switch (index) { + case 0: + return m->Parameter(0); + case 1: + return m->Parameter(1); + case 2: + return m->Int32Constant(0); + case 3: + return m->Int32Constant(1); + case 4: + return m->Int32Constant(-1); + case 5: + return m->Int32Constant(0xff); + case 6: + return m->Int32Constant(0x01234567); + case 7: + return m->Load(kMachineWord32, m->PointerConstant(NULL)); + default: + return NULL; + } +} + + +TEST(CodeGenInt32Binop) { + RawMachineAssemblerTester<void> m; + + Operator* ops[] = { + m.machine()->Word32And(), m.machine()->Word32Or(), + m.machine()->Word32Xor(), m.machine()->Word32Shl(), + m.machine()->Word32Shr(), m.machine()->Word32Sar(), + m.machine()->Word32Equal(), m.machine()->Int32Add(), + m.machine()->Int32Sub(), m.machine()->Int32Mul(), + m.machine()->Int32Div(), m.machine()->Int32UDiv(), + m.machine()->Int32Mod(), m.machine()->Int32UMod(), + m.machine()->Int32LessThan(), m.machine()->Int32LessThanOrEqual(), + m.machine()->Uint32LessThan(), m.machine()->Uint32LessThanOrEqual(), + NULL}; + + for (int i = 0; ops[i] != NULL; i++) { + for (int j = 0; j < 8; j++) { + for (int k = 0; k < 8; k++) { + RawMachineAssemblerTester<int32_t> m(kMachineWord32, kMachineWord32); + Node* a = Int32Input(&m, j); + Node* b = Int32Input(&m, k); + m.Return(m.NewNode(ops[i], a, b)); + m.GenerateCode(); + } + } + } +} + + +TEST(RunGoto) { + RawMachineAssemblerTester<int32_t> m; + int constant = 99999; + + MLabel next; + m.Goto(&next); + m.Bind(&next); + m.Return(m.Int32Constant(constant)); + + CHECK_EQ(constant, m.Call()); +} + + +TEST(RunGotoMultiple) { + RawMachineAssemblerTester<int32_t> m; + int constant = 9999977; + + MLabel labels[10]; + for (size_t i = 0; i < ARRAY_SIZE(labels); i++) { + m.Goto(&labels[i]); + m.Bind(&labels[i]); + } + m.Return(m.Int32Constant(constant)); + + CHECK_EQ(constant, m.Call()); +} + + +TEST(RunBranch) { + RawMachineAssemblerTester<int32_t> m; + int constant = 999777; + + MLabel blocka, blockb; + m.Branch(m.Int32Constant(0), &blocka, &blockb); + m.Bind(&blocka); + m.Return(m.Int32Constant(0 - constant)); + m.Bind(&blockb); + m.Return(m.Int32Constant(constant)); + + CHECK_EQ(constant, m.Call()); +} + + +TEST(RunRedundantBranch1) { + RawMachineAssemblerTester<int32_t> m; + int constant = 944777; + + MLabel blocka; + m.Branch(m.Int32Constant(0), &blocka, &blocka); + m.Bind(&blocka); + m.Return(m.Int32Constant(constant)); + + CHECK_EQ(constant, m.Call()); +} + + +TEST(RunRedundantBranch2) { + RawMachineAssemblerTester<int32_t> m; + int constant = 955777; + + MLabel blocka, blockb; + m.Branch(m.Int32Constant(0), &blocka, &blocka); + m.Bind(&blockb); + m.Goto(&blocka); + m.Bind(&blocka); + m.Return(m.Int32Constant(constant)); + + CHECK_EQ(constant, m.Call()); +} + + +TEST(RunRedundantBranch3) { + RawMachineAssemblerTester<int32_t> m; + int constant = 966777; + + MLabel blocka, blockb, blockc; + m.Branch(m.Int32Constant(0), &blocka, &blockc); + m.Bind(&blocka); + m.Branch(m.Int32Constant(0), &blockb, 
&blockb); + m.Bind(&blockc); + m.Goto(&blockb); + m.Bind(&blockb); + m.Return(m.Int32Constant(constant)); + + CHECK_EQ(constant, m.Call()); +} + + +TEST(RunDiamond2) { + RawMachineAssemblerTester<int32_t> m; + + int constant = 995666; + + MLabel blocka, blockb, end; + m.Branch(m.Int32Constant(0), &blocka, &blockb); + m.Bind(&blocka); + m.Goto(&end); + m.Bind(&blockb); + m.Goto(&end); + m.Bind(&end); + m.Return(m.Int32Constant(constant)); + + CHECK_EQ(constant, m.Call()); +} + + +TEST(RunLoop) { + RawMachineAssemblerTester<int32_t> m; + int constant = 999555; + + MLabel header, body, exit; + m.Goto(&header); + m.Bind(&header); + m.Branch(m.Int32Constant(0), &body, &exit); + m.Bind(&body); + m.Goto(&header); + m.Bind(&exit); + m.Return(m.Int32Constant(constant)); + + CHECK_EQ(constant, m.Call()); +} + + +template <typename R> +static void BuildDiamondPhi(RawMachineAssemblerTester<R>* m, Node* cond_node, + Node* true_node, Node* false_node) { + MLabel blocka, blockb; + MLabel* end = m->Exit(); + m->Branch(cond_node, &blocka, &blockb); + m->Bind(&blocka); + m->Goto(end); + m->Bind(&blockb); + m->Goto(end); + + m->Bind(end); + Node* phi = m->Phi(true_node, false_node); + m->Return(phi); +} + + +TEST(RunDiamondPhiConst) { + RawMachineAssemblerTester<int32_t> m(kMachineWord32); + int false_val = 0xFF666; + int true_val = 0x00DDD; + Node* true_node = m.Int32Constant(true_val); + Node* false_node = m.Int32Constant(false_val); + BuildDiamondPhi(&m, m.Parameter(0), true_node, false_node); + CHECK_EQ(false_val, m.Call(0)); + CHECK_EQ(true_val, m.Call(1)); +} + + +TEST(RunDiamondPhiNumber) { + RawMachineAssemblerTester<Object*> m(kMachineWord32); + double false_val = -11.1; + double true_val = 200.1; + Node* true_node = m.NumberConstant(true_val); + Node* false_node = m.NumberConstant(false_val); + BuildDiamondPhi(&m, m.Parameter(0), true_node, false_node); + m.CheckNumber(false_val, m.Call(0)); + m.CheckNumber(true_val, m.Call(1)); +} + + +TEST(RunDiamondPhiString) { + RawMachineAssemblerTester<Object*> m(kMachineWord32); + const char* false_val = "false"; + const char* true_val = "true"; + Node* true_node = m.StringConstant(true_val); + Node* false_node = m.StringConstant(false_val); + BuildDiamondPhi(&m, m.Parameter(0), true_node, false_node); + m.CheckString(false_val, m.Call(0)); + m.CheckString(true_val, m.Call(1)); +} + + +TEST(RunDiamondPhiParam) { + RawMachineAssemblerTester<int32_t> m(kMachineWord32, kMachineWord32, + kMachineWord32); + BuildDiamondPhi(&m, m.Parameter(0), m.Parameter(1), m.Parameter(2)); + int32_t c1 = 0x260cb75a; + int32_t c2 = 0xcd3e9c8b; + int result = m.Call(0, c1, c2); + CHECK_EQ(c2, result); + result = m.Call(1, c1, c2); + CHECK_EQ(c1, result); +} + + +TEST(RunLoopPhiConst) { + RawMachineAssemblerTester<int32_t> m; + int true_val = 0x44000; + int false_val = 0x00888; + + Node* cond_node = m.Int32Constant(0); + Node* true_node = m.Int32Constant(true_val); + Node* false_node = m.Int32Constant(false_val); + + // x = false_val; while(false) { x = true_val; } return x; + MLabel body, header; + MLabel* end = m.Exit(); + + m.Goto(&header); + m.Bind(&header); + Node* phi = m.Phi(false_node, true_node); + m.Branch(cond_node, &body, end); + m.Bind(&body); + m.Goto(&header); + m.Bind(end); + m.Return(phi); + + CHECK_EQ(false_val, m.Call()); +} + + +TEST(RunLoopPhiParam) { + RawMachineAssemblerTester<int32_t> m(kMachineWord32, kMachineWord32, + kMachineWord32); + + MLabel blocka, blockb; + MLabel* end = m.Exit(); + + m.Goto(&blocka); + + m.Bind(&blocka); + Node* phi = 
m.Phi(m.Parameter(1), m.Parameter(2)); + Node* cond = m.Phi(m.Parameter(0), m.Int32Constant(0)); + m.Branch(cond, &blockb, end); + + m.Bind(&blockb); + m.Goto(&blocka); + + m.Bind(end); + m.Return(phi); + + int32_t c1 = 0xa81903b4; + int32_t c2 = 0x5a1207da; + int result = m.Call(0, c1, c2); + CHECK_EQ(c1, result); + result = m.Call(1, c1, c2); + CHECK_EQ(c2, result); +} + + +TEST(RunLoopPhiInduction) { + RawMachineAssemblerTester<int32_t> m; + + int false_val = 0x10777; + + // x = false_val; while(false) { x++; } return x; + MLabel header, body; + MLabel* end = m.Exit(); + Node* false_node = m.Int32Constant(false_val); + + m.Goto(&header); + + m.Bind(&header); + Node* phi = m.Phi(false_node, false_node); + m.Branch(m.Int32Constant(0), &body, end); + + m.Bind(&body); + Node* add = m.Int32Add(phi, m.Int32Constant(1)); + phi->ReplaceInput(1, add); + m.Goto(&header); + + m.Bind(end); + m.Return(phi); + + CHECK_EQ(false_val, m.Call()); +} + + +TEST(RunLoopIncrement) { + RawMachineAssemblerTester<int32_t> m; + Int32BinopTester bt(&m); + + // x = 0; while(x ^ param) { x++; } return x; + MLabel header, body; + MLabel* end = m.Exit(); + Node* zero = m.Int32Constant(0); + + m.Goto(&header); + + m.Bind(&header); + Node* phi = m.Phi(zero, zero); + m.Branch(m.WordXor(phi, bt.param0), &body, end); + + m.Bind(&body); + phi->ReplaceInput(1, m.Int32Add(phi, m.Int32Constant(1))); + m.Goto(&header); + + m.Bind(end); + bt.AddReturn(phi); + + CHECK_EQ(11, bt.call(11, 0)); + CHECK_EQ(110, bt.call(110, 0)); + CHECK_EQ(176, bt.call(176, 0)); +} + + +TEST(RunLoopIncrement2) { + RawMachineAssemblerTester<int32_t> m; + Int32BinopTester bt(&m); + + // x = 0; while(x < param) { x++; } return x; + MLabel header, body; + MLabel* end = m.Exit(); + Node* zero = m.Int32Constant(0); + + m.Goto(&header); + + m.Bind(&header); + Node* phi = m.Phi(zero, zero); + m.Branch(m.Int32LessThan(phi, bt.param0), &body, end); + + m.Bind(&body); + phi->ReplaceInput(1, m.Int32Add(phi, m.Int32Constant(1))); + m.Goto(&header); + + m.Bind(end); + bt.AddReturn(phi); + + CHECK_EQ(11, bt.call(11, 0)); + CHECK_EQ(110, bt.call(110, 0)); + CHECK_EQ(176, bt.call(176, 0)); + CHECK_EQ(0, bt.call(-200, 0)); +} + + +TEST(RunLoopIncrement3) { + RawMachineAssemblerTester<int32_t> m; + Int32BinopTester bt(&m); + + // x = 0; while(x < param) { x++; } return x; + MLabel header, body; + MLabel* end = m.Exit(); + Node* zero = m.Int32Constant(0); + + m.Goto(&header); + + m.Bind(&header); + Node* phi = m.Phi(zero, zero); + m.Branch(m.Uint32LessThan(phi, bt.param0), &body, end); + + m.Bind(&body); + phi->ReplaceInput(1, m.Int32Add(phi, m.Int32Constant(1))); + m.Goto(&header); + + m.Bind(end); + bt.AddReturn(phi); + + CHECK_EQ(11, bt.call(11, 0)); + CHECK_EQ(110, bt.call(110, 0)); + CHECK_EQ(176, bt.call(176, 0)); + CHECK_EQ(200, bt.call(200, 0)); +} + + +TEST(RunLoopDecrement) { + RawMachineAssemblerTester<int32_t> m; + Int32BinopTester bt(&m); + + // x = param; while(x) { x--; } return x; + MLabel header, body; + MLabel* end = m.Exit(); + + m.Goto(&header); + + m.Bind(&header); + Node* phi = m.Phi(bt.param0, m.Int32Constant(0)); + m.Branch(phi, &body, end); + + m.Bind(&body); + phi->ReplaceInput(1, m.Int32Sub(phi, m.Int32Constant(1))); + m.Goto(&header); + + m.Bind(end); + bt.AddReturn(phi); + + CHECK_EQ(0, bt.call(11, 0)); + CHECK_EQ(0, bt.call(110, 0)); + CHECK_EQ(0, bt.call(197, 0)); +} + + +TEST(RunLoopIncrementFloat64) { + RawMachineAssemblerTester<int32_t> m; + + // x = -3.0; while(x < 10) { x = x + 0.5; } return (int) x; + MLabel header, body; + MLabel* 
end = m.Exit(); + Node* minus_3 = m.Float64Constant(-3.0); + Node* ten = m.Float64Constant(10.0); + + m.Goto(&header); + + m.Bind(&header); + Node* phi = m.Phi(minus_3, ten); + m.Branch(m.Float64LessThan(phi, ten), &body, end); + + m.Bind(&body); + phi->ReplaceInput(1, m.Float64Add(phi, m.Float64Constant(0.5))); + m.Goto(&header); + + m.Bind(end); + m.Return(m.ChangeFloat64ToInt32(phi)); + + CHECK_EQ(10, m.Call()); +} + + +TEST(RunLoadInt32) { + RawMachineAssemblerTester<int32_t> m; + + int32_t p1 = 0; // loads directly from this location. + m.Return(m.LoadFromPointer(&p1, kMachineWord32)); + + FOR_INT32_INPUTS(i) { + p1 = *i; + CHECK_EQ(p1, m.Call()); + } +} + + +TEST(RunLoadInt32Offset) { + int32_t p1 = 0; // loads directly from this location. + + int32_t offsets[] = {-2000000, -100, -101, 1, 3, + 7, 120, 2000, 2000000000, 0xff}; + + for (size_t i = 0; i < ARRAY_SIZE(offsets); i++) { + RawMachineAssemblerTester<int32_t> m; + int32_t offset = offsets[i]; + byte* pointer = reinterpret_cast<byte*>(&p1) - offset; + // generate load [#base + #index] + m.Return(m.LoadFromPointer(pointer, kMachineWord32, offset)); + + FOR_INT32_INPUTS(j) { + p1 = *j; + CHECK_EQ(p1, m.Call()); + } + } +} + + +TEST(RunLoadStoreFloat64Offset) { + double p1 = 0; // loads directly from this location. + double p2 = 0; // and stores directly into this location. + + FOR_INT32_INPUTS(i) { + int32_t magic = 0x2342aabb + *i * 3; + RawMachineAssemblerTester<int32_t> m; + int32_t offset = *i; + byte* from = reinterpret_cast<byte*>(&p1) - offset; + byte* to = reinterpret_cast<byte*>(&p2) - offset; + // generate load [#base + #index] + Node* load = m.Load(kMachineFloat64, m.PointerConstant(from), + m.Int32Constant(offset)); + m.Store(kMachineFloat64, m.PointerConstant(to), m.Int32Constant(offset), + load); + m.Return(m.Int32Constant(magic)); + + FOR_FLOAT64_INPUTS(j) { + p1 = *j; + p2 = *j - 5; + CHECK_EQ(magic, m.Call()); + CHECK_EQ(p1, p2); + } + } +} + + +TEST(RunInt32AddP) { + RawMachineAssemblerTester<int32_t> m; + Int32BinopTester bt(&m); + + bt.AddReturn(m.Int32Add(bt.param0, bt.param1)); + + FOR_INT32_INPUTS(i) { + FOR_INT32_INPUTS(j) { + // Use uint32_t because signed overflow is UB in C. + int expected = static_cast<int32_t>(*i + *j); + CHECK_EQ(expected, bt.call(*i, *j)); + } + } +} + + +TEST(RunInt32AddAndWord32SarP) { + { + RawMachineAssemblerTester<int32_t> m(kMachineWord32, kMachineWord32, + kMachineWord32); + m.Return(m.Int32Add(m.Parameter(0), + m.Word32Sar(m.Parameter(1), m.Parameter(2)))); + FOR_UINT32_INPUTS(i) { + FOR_INT32_INPUTS(j) { + FOR_UINT32_INPUTS(k) { + uint32_t shift = *k & 0x1F; + // Use uint32_t because signed overflow is UB in C. + int32_t expected = *i + (*j >> shift); + CHECK_EQ(expected, m.Call(*i, *j, shift)); + } + } + } + } + { + RawMachineAssemblerTester<int32_t> m(kMachineWord32, kMachineWord32, + kMachineWord32); + m.Return(m.Int32Add(m.Word32Sar(m.Parameter(0), m.Parameter(1)), + m.Parameter(2))); + FOR_INT32_INPUTS(i) { + FOR_UINT32_INPUTS(j) { + FOR_UINT32_INPUTS(k) { + uint32_t shift = *j & 0x1F; + // Use uint32_t because signed overflow is UB in C. 
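+        // (The uint32_t operand *k converts the addition to unsigned
+        // arithmetic, so the sum wraps modulo 2^32 instead of hitting
+        // signed-overflow UB; the int32_t result is its two's-complement
+        // reinterpretation on the supported targets.)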
+ int32_t expected = (*i >> shift) + *k; + CHECK_EQ(expected, m.Call(*i, shift, *k)); + } + } + } + } +} + + +TEST(RunInt32AddAndWord32ShlP) { + { + RawMachineAssemblerTester<int32_t> m(kMachineWord32, kMachineWord32, + kMachineWord32); + m.Return(m.Int32Add(m.Parameter(0), + m.Word32Shl(m.Parameter(1), m.Parameter(2)))); + FOR_UINT32_INPUTS(i) { + FOR_INT32_INPUTS(j) { + FOR_UINT32_INPUTS(k) { + uint32_t shift = *k & 0x1F; + // Use uint32_t because signed overflow is UB in C. + int32_t expected = *i + (*j << shift); + CHECK_EQ(expected, m.Call(*i, *j, shift)); + } + } + } + } + { + RawMachineAssemblerTester<int32_t> m(kMachineWord32, kMachineWord32, + kMachineWord32); + m.Return(m.Int32Add(m.Word32Shl(m.Parameter(0), m.Parameter(1)), + m.Parameter(2))); + FOR_INT32_INPUTS(i) { + FOR_UINT32_INPUTS(j) { + FOR_UINT32_INPUTS(k) { + uint32_t shift = *j & 0x1F; + // Use uint32_t because signed overflow is UB in C. + int32_t expected = (*i << shift) + *k; + CHECK_EQ(expected, m.Call(*i, shift, *k)); + } + } + } + } +} + + +TEST(RunInt32AddAndWord32ShrP) { + { + RawMachineAssemblerTester<int32_t> m(kMachineWord32, kMachineWord32, + kMachineWord32); + m.Return(m.Int32Add(m.Parameter(0), + m.Word32Shr(m.Parameter(1), m.Parameter(2)))); + FOR_UINT32_INPUTS(i) { + FOR_UINT32_INPUTS(j) { + FOR_UINT32_INPUTS(k) { + uint32_t shift = *k & 0x1F; + // Use uint32_t because signed overflow is UB in C. + int32_t expected = *i + (*j >> shift); + CHECK_EQ(expected, m.Call(*i, *j, shift)); + } + } + } + } + { + RawMachineAssemblerTester<int32_t> m(kMachineWord32, kMachineWord32, + kMachineWord32); + m.Return(m.Int32Add(m.Word32Shr(m.Parameter(0), m.Parameter(1)), + m.Parameter(2))); + FOR_UINT32_INPUTS(i) { + FOR_UINT32_INPUTS(j) { + FOR_UINT32_INPUTS(k) { + uint32_t shift = *j & 0x1F; + // Use uint32_t because signed overflow is UB in C. + int32_t expected = (*i >> shift) + *k; + CHECK_EQ(expected, m.Call(*i, shift, *k)); + } + } + } + } +} + + +TEST(RunInt32AddInBranch) { + static const int32_t constant = 987654321; + { + RawMachineAssemblerTester<int32_t> m; + Int32BinopTester bt(&m); + MLabel blocka, blockb; + m.Branch( + m.Word32Equal(m.Int32Add(bt.param0, bt.param1), m.Int32Constant(0)), + &blocka, &blockb); + m.Bind(&blocka); + bt.AddReturn(m.Int32Constant(constant)); + m.Bind(&blockb); + bt.AddReturn(m.Int32Constant(0 - constant)); + FOR_UINT32_INPUTS(i) { + FOR_UINT32_INPUTS(j) { + int32_t expected = (*i + *j) == 0 ? constant : 0 - constant; + CHECK_EQ(expected, bt.call(*i, *j)); + } + } + } + { + RawMachineAssemblerTester<int32_t> m; + Int32BinopTester bt(&m); + MLabel blocka, blockb; + m.Branch( + m.Word32NotEqual(m.Int32Add(bt.param0, bt.param1), m.Int32Constant(0)), + &blocka, &blockb); + m.Bind(&blocka); + bt.AddReturn(m.Int32Constant(constant)); + m.Bind(&blockb); + bt.AddReturn(m.Int32Constant(0 - constant)); + FOR_UINT32_INPUTS(i) { + FOR_UINT32_INPUTS(j) { + int32_t expected = (*i + *j) != 0 ? constant : 0 - constant; + CHECK_EQ(expected, bt.call(*i, *j)); + } + } + } + { + FOR_UINT32_INPUTS(i) { + RawMachineAssemblerTester<int32_t> m(kMachineWord32); + MLabel blocka, blockb; + m.Branch(m.Word32Equal(m.Int32Add(m.Int32Constant(*i), m.Parameter(0)), + m.Int32Constant(0)), + &blocka, &blockb); + m.Bind(&blocka); + m.Return(m.Int32Constant(constant)); + m.Bind(&blockb); + m.Return(m.Int32Constant(0 - constant)); + FOR_UINT32_INPUTS(j) { + int32_t expected = (*i + *j) == 0 ? 
constant : 0 - constant; + CHECK_EQ(expected, m.Call(*j)); + } + } + } + { + FOR_UINT32_INPUTS(i) { + RawMachineAssemblerTester<int32_t> m(kMachineWord32); + MLabel blocka, blockb; + m.Branch(m.Word32NotEqual(m.Int32Add(m.Int32Constant(*i), m.Parameter(0)), + m.Int32Constant(0)), + &blocka, &blockb); + m.Bind(&blocka); + m.Return(m.Int32Constant(constant)); + m.Bind(&blockb); + m.Return(m.Int32Constant(0 - constant)); + FOR_UINT32_INPUTS(j) { + int32_t expected = (*i + *j) != 0 ? constant : 0 - constant; + CHECK_EQ(expected, m.Call(*j)); + } + } + } + { + RawMachineAssemblerTester<void> m; + Operator* shops[] = {m.machine()->Word32Sar(), m.machine()->Word32Shl(), + m.machine()->Word32Shr()}; + for (size_t n = 0; n < ARRAY_SIZE(shops); n++) { + RawMachineAssemblerTester<int32_t> m(kMachineWord32, kMachineWord32, + kMachineWord32); + MLabel blocka, blockb; + m.Branch(m.Word32Equal(m.Int32Add(m.Parameter(0), + m.NewNode(shops[n], m.Parameter(1), + m.Parameter(2))), + m.Int32Constant(0)), + &blocka, &blockb); + m.Bind(&blocka); + m.Return(m.Int32Constant(constant)); + m.Bind(&blockb); + m.Return(m.Int32Constant(0 - constant)); + FOR_UINT32_INPUTS(i) { + FOR_INT32_INPUTS(j) { + FOR_UINT32_INPUTS(k) { + uint32_t shift = *k & 0x1F; + int32_t right; + switch (shops[n]->opcode()) { + default: + UNREACHABLE(); + case IrOpcode::kWord32Sar: + right = *j >> shift; + break; + case IrOpcode::kWord32Shl: + right = *j << shift; + break; + case IrOpcode::kWord32Shr: + right = static_cast<uint32_t>(*j) >> shift; + break; + } + int32_t expected = ((*i + right) == 0) ? constant : 0 - constant; + CHECK_EQ(expected, m.Call(*i, *j, shift)); + } + } + } + } + } +} + + +TEST(RunInt32AddInComparison) { + { + RawMachineAssemblerTester<int32_t> m; + Int32BinopTester bt(&m); + bt.AddReturn( + m.Word32Equal(m.Int32Add(bt.param0, bt.param1), m.Int32Constant(0))); + FOR_UINT32_INPUTS(i) { + FOR_UINT32_INPUTS(j) { + int32_t expected = (*i + *j) == 0; + CHECK_EQ(expected, bt.call(*i, *j)); + } + } + } + { + RawMachineAssemblerTester<int32_t> m; + Int32BinopTester bt(&m); + bt.AddReturn( + m.Word32Equal(m.Int32Constant(0), m.Int32Add(bt.param0, bt.param1))); + FOR_UINT32_INPUTS(i) { + FOR_UINT32_INPUTS(j) { + int32_t expected = (*i + *j) == 0; + CHECK_EQ(expected, bt.call(*i, *j)); + } + } + } + { + FOR_UINT32_INPUTS(i) { + RawMachineAssemblerTester<int32_t> m(kMachineWord32); + m.Return(m.Word32Equal(m.Int32Add(m.Int32Constant(*i), m.Parameter(0)), + m.Int32Constant(0))); + FOR_UINT32_INPUTS(j) { + int32_t expected = (*i + *j) == 0; + CHECK_EQ(expected, m.Call(*j)); + } + } + } + { + FOR_UINT32_INPUTS(i) { + RawMachineAssemblerTester<int32_t> m(kMachineWord32); + m.Return(m.Word32Equal(m.Int32Add(m.Parameter(0), m.Int32Constant(*i)), + m.Int32Constant(0))); + FOR_UINT32_INPUTS(j) { + int32_t expected = (*j + *i) == 0; + CHECK_EQ(expected, m.Call(*j)); + } + } + } + { + RawMachineAssemblerTester<void> m; + Operator* shops[] = {m.machine()->Word32Sar(), m.machine()->Word32Shl(), + m.machine()->Word32Shr()}; + for (size_t n = 0; n < ARRAY_SIZE(shops); n++) { + RawMachineAssemblerTester<int32_t> m(kMachineWord32, kMachineWord32, + kMachineWord32); + m.Return(m.Word32Equal( + m.Int32Add(m.Parameter(0), + m.NewNode(shops[n], m.Parameter(1), m.Parameter(2))), + m.Int32Constant(0))); + FOR_UINT32_INPUTS(i) { + FOR_INT32_INPUTS(j) { + FOR_UINT32_INPUTS(k) { + uint32_t shift = *k & 0x1F; + int32_t right; + switch (shops[n]->opcode()) { + default: + UNREACHABLE(); + case IrOpcode::kWord32Sar: + right = *j >> shift; + break; + case 
IrOpcode::kWord32Shl: + right = *j << shift; + break; + case IrOpcode::kWord32Shr: + right = static_cast<uint32_t>(*j) >> shift; + break; + } + int32_t expected = (*i + right) == 0; + CHECK_EQ(expected, m.Call(*i, *j, shift)); + } + } + } + } + } +} + + +TEST(RunInt32SubP) { + RawMachineAssemblerTester<int32_t> m; + Int32BinopTester bt(&m); + + m.Return(m.Int32Sub(bt.param0, bt.param1)); + + FOR_UINT32_INPUTS(i) { + FOR_UINT32_INPUTS(j) { + // Use uint32_t because signed overflow is UB in C. + int expected = static_cast<int32_t>(*i - *j); + CHECK_EQ(expected, bt.call(*i, *j)); + } + } +} + + +TEST(RunInt32SubImm) { + { + FOR_UINT32_INPUTS(i) { + RawMachineAssemblerTester<int32_t> m(kMachineWord32); + m.Return(m.Int32Sub(m.Int32Constant(*i), m.Parameter(0))); + FOR_UINT32_INPUTS(j) { + // Use uint32_t because signed overflow is UB in C. + int32_t expected = static_cast<int32_t>(*i - *j); + CHECK_EQ(expected, m.Call(*j)); + } + } + } + { + FOR_UINT32_INPUTS(i) { + RawMachineAssemblerTester<int32_t> m(kMachineWord32); + m.Return(m.Int32Sub(m.Parameter(0), m.Int32Constant(*i))); + FOR_UINT32_INPUTS(j) { + // Use uint32_t because signed overflow is UB in C. + int32_t expected = static_cast<int32_t>(*j - *i); + CHECK_EQ(expected, m.Call(*j)); + } + } + } +} + + +TEST(RunInt32SubAndWord32SarP) { + { + RawMachineAssemblerTester<int32_t> m(kMachineWord32, kMachineWord32, + kMachineWord32); + m.Return(m.Int32Sub(m.Parameter(0), + m.Word32Sar(m.Parameter(1), m.Parameter(2)))); + FOR_UINT32_INPUTS(i) { + FOR_INT32_INPUTS(j) { + FOR_UINT32_INPUTS(k) { + uint32_t shift = *k & 0x1F; + // Use uint32_t because signed overflow is UB in C. + int32_t expected = *i - (*j >> shift); + CHECK_EQ(expected, m.Call(*i, *j, shift)); + } + } + } + } + { + RawMachineAssemblerTester<int32_t> m(kMachineWord32, kMachineWord32, + kMachineWord32); + m.Return(m.Int32Sub(m.Word32Sar(m.Parameter(0), m.Parameter(1)), + m.Parameter(2))); + FOR_INT32_INPUTS(i) { + FOR_UINT32_INPUTS(j) { + FOR_UINT32_INPUTS(k) { + uint32_t shift = *j & 0x1F; + // Use uint32_t because signed overflow is UB in C. + int32_t expected = (*i >> shift) - *k; + CHECK_EQ(expected, m.Call(*i, shift, *k)); + } + } + } + } +} + + +TEST(RunInt32SubAndWord32ShlP) { + { + RawMachineAssemblerTester<int32_t> m(kMachineWord32, kMachineWord32, + kMachineWord32); + m.Return(m.Int32Sub(m.Parameter(0), + m.Word32Shl(m.Parameter(1), m.Parameter(2)))); + FOR_UINT32_INPUTS(i) { + FOR_INT32_INPUTS(j) { + FOR_UINT32_INPUTS(k) { + uint32_t shift = *k & 0x1F; + // Use uint32_t because signed overflow is UB in C. + int32_t expected = *i - (*j << shift); + CHECK_EQ(expected, m.Call(*i, *j, shift)); + } + } + } + } + { + RawMachineAssemblerTester<int32_t> m(kMachineWord32, kMachineWord32, + kMachineWord32); + m.Return(m.Int32Sub(m.Word32Shl(m.Parameter(0), m.Parameter(1)), + m.Parameter(2))); + FOR_INT32_INPUTS(i) { + FOR_UINT32_INPUTS(j) { + FOR_UINT32_INPUTS(k) { + uint32_t shift = *j & 0x1F; + // Use uint32_t because signed overflow is UB in C. + int32_t expected = (*i << shift) - *k; + CHECK_EQ(expected, m.Call(*i, shift, *k)); + } + } + } + } +} + + +TEST(RunInt32SubAndWord32ShrP) { + { + RawMachineAssemblerTester<int32_t> m(kMachineWord32, kMachineWord32, + kMachineWord32); + m.Return(m.Int32Sub(m.Parameter(0), + m.Word32Shr(m.Parameter(1), m.Parameter(2)))); + FOR_UINT32_INPUTS(i) { + FOR_UINT32_INPUTS(j) { + FOR_UINT32_INPUTS(k) { + uint32_t shift = *k & 0x1F; + // Use uint32_t because signed overflow is UB in C. 
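+        // (Both operands are uint32_t here, so the subtraction wraps
+        // modulo 2^32; that matches the two's-complement Int32Sub the
+        // generated code performs.)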
+ int32_t expected = *i - (*j >> shift); + CHECK_EQ(expected, m.Call(*i, *j, shift)); + } + } + } + } + { + RawMachineAssemblerTester<int32_t> m(kMachineWord32, kMachineWord32, + kMachineWord32); + m.Return(m.Int32Sub(m.Word32Shr(m.Parameter(0), m.Parameter(1)), + m.Parameter(2))); + FOR_UINT32_INPUTS(i) { + FOR_UINT32_INPUTS(j) { + FOR_UINT32_INPUTS(k) { + uint32_t shift = *j & 0x1F; + // Use uint32_t because signed overflow is UB in C. + int32_t expected = (*i >> shift) - *k; + CHECK_EQ(expected, m.Call(*i, shift, *k)); + } + } + } + } +} + + +TEST(RunInt32SubInBranch) { + static const int constant = 987654321; + { + RawMachineAssemblerTester<int32_t> m; + Int32BinopTester bt(&m); + MLabel blocka, blockb; + m.Branch( + m.Word32Equal(m.Int32Sub(bt.param0, bt.param1), m.Int32Constant(0)), + &blocka, &blockb); + m.Bind(&blocka); + bt.AddReturn(m.Int32Constant(constant)); + m.Bind(&blockb); + bt.AddReturn(m.Int32Constant(0 - constant)); + FOR_UINT32_INPUTS(i) { + FOR_UINT32_INPUTS(j) { + int32_t expected = (*i - *j) == 0 ? constant : 0 - constant; + CHECK_EQ(expected, bt.call(*i, *j)); + } + } + } + { + RawMachineAssemblerTester<int32_t> m; + Int32BinopTester bt(&m); + MLabel blocka, blockb; + m.Branch( + m.Word32NotEqual(m.Int32Sub(bt.param0, bt.param1), m.Int32Constant(0)), + &blocka, &blockb); + m.Bind(&blocka); + bt.AddReturn(m.Int32Constant(constant)); + m.Bind(&blockb); + bt.AddReturn(m.Int32Constant(0 - constant)); + FOR_UINT32_INPUTS(i) { + FOR_UINT32_INPUTS(j) { + int32_t expected = (*i - *j) != 0 ? constant : 0 - constant; + CHECK_EQ(expected, bt.call(*i, *j)); + } + } + } + { + FOR_UINT32_INPUTS(i) { + RawMachineAssemblerTester<int32_t> m(kMachineWord32); + MLabel blocka, blockb; + m.Branch(m.Word32Equal(m.Int32Sub(m.Int32Constant(*i), m.Parameter(0)), + m.Int32Constant(0)), + &blocka, &blockb); + m.Bind(&blocka); + m.Return(m.Int32Constant(constant)); + m.Bind(&blockb); + m.Return(m.Int32Constant(0 - constant)); + FOR_UINT32_INPUTS(j) { + int32_t expected = (*i - *j) == 0 ? constant : 0 - constant; + CHECK_EQ(expected, m.Call(*j)); + } + } + } + { + FOR_UINT32_INPUTS(i) { + RawMachineAssemblerTester<int32_t> m(kMachineWord32); + MLabel blocka, blockb; + m.Branch(m.Word32NotEqual(m.Int32Sub(m.Int32Constant(*i), m.Parameter(0)), + m.Int32Constant(0)), + &blocka, &blockb); + m.Bind(&blocka); + m.Return(m.Int32Constant(constant)); + m.Bind(&blockb); + m.Return(m.Int32Constant(0 - constant)); + FOR_UINT32_INPUTS(j) { + int32_t expected = (*i - *j) != 0 ? 
constant : 0 - constant; + CHECK_EQ(expected, m.Call(*j)); + } + } + } + { + RawMachineAssemblerTester<void> m; + Operator* shops[] = {m.machine()->Word32Sar(), m.machine()->Word32Shl(), + m.machine()->Word32Shr()}; + for (size_t n = 0; n < ARRAY_SIZE(shops); n++) { + RawMachineAssemblerTester<int32_t> m(kMachineWord32, kMachineWord32, + kMachineWord32); + MLabel blocka, blockb; + m.Branch(m.Word32Equal(m.Int32Sub(m.Parameter(0), + m.NewNode(shops[n], m.Parameter(1), + m.Parameter(2))), + m.Int32Constant(0)), + &blocka, &blockb); + m.Bind(&blocka); + m.Return(m.Int32Constant(constant)); + m.Bind(&blockb); + m.Return(m.Int32Constant(0 - constant)); + FOR_UINT32_INPUTS(i) { + FOR_INT32_INPUTS(j) { + FOR_UINT32_INPUTS(k) { + uint32_t shift = *k & 0x1F; + int32_t right; + switch (shops[n]->opcode()) { + default: + UNREACHABLE(); + case IrOpcode::kWord32Sar: + right = *j >> shift; + break; + case IrOpcode::kWord32Shl: + right = *j << shift; + break; + case IrOpcode::kWord32Shr: + right = static_cast<uint32_t>(*j) >> shift; + break; + } + int32_t expected = ((*i - right) == 0) ? constant : 0 - constant; + CHECK_EQ(expected, m.Call(*i, *j, shift)); + } + } + } + } + } +} + + +TEST(RunInt32SubInComparison) { + { + RawMachineAssemblerTester<int32_t> m; + Int32BinopTester bt(&m); + bt.AddReturn( + m.Word32Equal(m.Int32Sub(bt.param0, bt.param1), m.Int32Constant(0))); + FOR_UINT32_INPUTS(i) { + FOR_UINT32_INPUTS(j) { + int32_t expected = (*i - *j) == 0; + CHECK_EQ(expected, bt.call(*i, *j)); + } + } + } + { + RawMachineAssemblerTester<int32_t> m; + Int32BinopTester bt(&m); + bt.AddReturn( + m.Word32Equal(m.Int32Constant(0), m.Int32Sub(bt.param0, bt.param1))); + FOR_UINT32_INPUTS(i) { + FOR_UINT32_INPUTS(j) { + int32_t expected = (*i - *j) == 0; + CHECK_EQ(expected, bt.call(*i, *j)); + } + } + } + { + FOR_UINT32_INPUTS(i) { + RawMachineAssemblerTester<int32_t> m(kMachineWord32); + m.Return(m.Word32Equal(m.Int32Sub(m.Int32Constant(*i), m.Parameter(0)), + m.Int32Constant(0))); + FOR_UINT32_INPUTS(j) { + int32_t expected = (*i - *j) == 0; + CHECK_EQ(expected, m.Call(*j)); + } + } + } + { + FOR_UINT32_INPUTS(i) { + RawMachineAssemblerTester<int32_t> m(kMachineWord32); + m.Return(m.Word32Equal(m.Int32Sub(m.Parameter(0), m.Int32Constant(*i)), + m.Int32Constant(0))); + FOR_UINT32_INPUTS(j) { + int32_t expected = (*j - *i) == 0; + CHECK_EQ(expected, m.Call(*j)); + } + } + } + { + RawMachineAssemblerTester<void> m; + Operator* shops[] = {m.machine()->Word32Sar(), m.machine()->Word32Shl(), + m.machine()->Word32Shr()}; + for (size_t n = 0; n < ARRAY_SIZE(shops); n++) { + RawMachineAssemblerTester<int32_t> m(kMachineWord32, kMachineWord32, + kMachineWord32); + m.Return(m.Word32Equal( + m.Int32Sub(m.Parameter(0), + m.NewNode(shops[n], m.Parameter(1), m.Parameter(2))), + m.Int32Constant(0))); + FOR_UINT32_INPUTS(i) { + FOR_INT32_INPUTS(j) { + FOR_UINT32_INPUTS(k) { + uint32_t shift = *k & 0x1F; + int32_t right; + switch (shops[n]->opcode()) { + default: + UNREACHABLE(); + case IrOpcode::kWord32Sar: + right = *j >> shift; + break; + case IrOpcode::kWord32Shl: + right = *j << shift; + break; + case IrOpcode::kWord32Shr: + right = static_cast<uint32_t>(*j) >> shift; + break; + } + int32_t expected = (*i - right) == 0; + CHECK_EQ(expected, m.Call(*i, *j, shift)); + } + } + } + } + } +} + + +TEST(RunInt32MulP) { + { + RawMachineAssemblerTester<int32_t> m; + Int32BinopTester bt(&m); + bt.AddReturn(m.Int32Mul(bt.param0, bt.param1)); + FOR_INT32_INPUTS(i) { + FOR_INT32_INPUTS(j) { + int expected = static_cast<int32_t>(*i * 
*j); + CHECK_EQ(expected, bt.call(*i, *j)); + } + } + } + { + RawMachineAssemblerTester<int32_t> m; + Int32BinopTester bt(&m); + bt.AddReturn(m.Int32Mul(bt.param0, bt.param1)); + FOR_UINT32_INPUTS(i) { + FOR_UINT32_INPUTS(j) { + int expected = static_cast<int32_t>(*i * *j); + CHECK_EQ(expected, bt.call(*i, *j)); + } + } + } +} + + +TEST(RunInt32MulImm) { + { + FOR_UINT32_INPUTS(i) { + RawMachineAssemblerTester<int32_t> m(kMachineWord32); + m.Return(m.Int32Mul(m.Int32Constant(*i), m.Parameter(0))); + FOR_UINT32_INPUTS(j) { + int32_t expected = static_cast<int32_t>(*i * *j); + CHECK_EQ(expected, m.Call(*j)); + } + } + } + { + FOR_UINT32_INPUTS(i) { + RawMachineAssemblerTester<int32_t> m(kMachineWord32); + m.Return(m.Int32Mul(m.Parameter(0), m.Int32Constant(*i))); + FOR_UINT32_INPUTS(j) { + int32_t expected = static_cast<int32_t>(*j * *i); + CHECK_EQ(expected, m.Call(*j)); + } + } + } +} + + +TEST(RunInt32MulAndInt32AddP) { + { + RawMachineAssemblerTester<int32_t> m(kMachineWord32, kMachineWord32, + kMachineWord32); + m.Return( + m.Int32Add(m.Parameter(0), m.Int32Mul(m.Parameter(1), m.Parameter(2)))); + FOR_INT32_INPUTS(i) { + FOR_INT32_INPUTS(j) { + FOR_INT32_INPUTS(k) { + int32_t p0 = *i; + int32_t p1 = *j; + int32_t p2 = *k; + int expected = p0 + static_cast<int32_t>(p1 * p2); + CHECK_EQ(expected, m.Call(p0, p1, p2)); + } + } + } + } + { + RawMachineAssemblerTester<int32_t> m(kMachineWord32, kMachineWord32, + kMachineWord32); + m.Return( + m.Int32Add(m.Int32Mul(m.Parameter(0), m.Parameter(1)), m.Parameter(2))); + FOR_INT32_INPUTS(i) { + FOR_INT32_INPUTS(j) { + FOR_INT32_INPUTS(k) { + int32_t p0 = *i; + int32_t p1 = *j; + int32_t p2 = *k; + int expected = static_cast<int32_t>(p0 * p1) + p2; + CHECK_EQ(expected, m.Call(p0, p1, p2)); + } + } + } + } + { + FOR_INT32_INPUTS(i) { + RawMachineAssemblerTester<int32_t> m; + Int32BinopTester bt(&m); + bt.AddReturn( + m.Int32Add(m.Int32Constant(*i), m.Int32Mul(bt.param0, bt.param1))); + FOR_INT32_INPUTS(j) { + FOR_INT32_INPUTS(k) { + int32_t p0 = *j; + int32_t p1 = *k; + int expected = *i + static_cast<int32_t>(p0 * p1); + CHECK_EQ(expected, bt.call(p0, p1)); + } + } + } + } +} + + +TEST(RunInt32MulAndInt32SubP) { + { + RawMachineAssemblerTester<int32_t> m(kMachineWord32, kMachineWord32, + kMachineWord32); + m.Return( + m.Int32Sub(m.Parameter(0), m.Int32Mul(m.Parameter(1), m.Parameter(2)))); + FOR_UINT32_INPUTS(i) { + FOR_INT32_INPUTS(j) { + FOR_INT32_INPUTS(k) { + uint32_t p0 = *i; + int32_t p1 = *j; + int32_t p2 = *k; + // Use uint32_t because signed overflow is UB in C. + int expected = p0 - static_cast<uint32_t>(p1 * p2); + CHECK_EQ(expected, m.Call(p0, p1, p2)); + } + } + } + } + { + FOR_UINT32_INPUTS(i) { + RawMachineAssemblerTester<int32_t> m; + Int32BinopTester bt(&m); + bt.AddReturn( + m.Int32Sub(m.Int32Constant(*i), m.Int32Mul(bt.param0, bt.param1))); + FOR_INT32_INPUTS(j) { + FOR_INT32_INPUTS(k) { + int32_t p0 = *j; + int32_t p1 = *k; + // Use uint32_t because signed overflow is UB in C. 
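+        // (Note the cast only makes the outer subtraction unsigned; the
+        // signed multiply p0 * p1 can still overflow, which is technically
+        // UB even though it wraps on the targets these tests run on.)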
+ int expected = *i - static_cast<uint32_t>(p0 * p1); + CHECK_EQ(expected, bt.call(p0, p1)); + } + } + } + } +} + + +TEST(RunInt32DivP) { + { + RawMachineAssemblerTester<int32_t> m; + Int32BinopTester bt(&m); + bt.AddReturn(m.Int32Div(bt.param0, bt.param1)); + FOR_INT32_INPUTS(i) { + FOR_INT32_INPUTS(j) { + int p0 = *i; + int p1 = *j; + if (p1 != 0 && (static_cast<uint32_t>(p0) != 0x80000000 || p1 != -1)) { + int expected = static_cast<int32_t>(p0 / p1); + CHECK_EQ(expected, bt.call(p0, p1)); + } + } + } + } + { + RawMachineAssemblerTester<int32_t> m; + Int32BinopTester bt(&m); + bt.AddReturn(m.Int32Add(bt.param0, m.Int32Div(bt.param0, bt.param1))); + FOR_INT32_INPUTS(i) { + FOR_INT32_INPUTS(j) { + int p0 = *i; + int p1 = *j; + if (p1 != 0 && (static_cast<uint32_t>(p0) != 0x80000000 || p1 != -1)) { + int expected = static_cast<int32_t>(p0 + (p0 / p1)); + CHECK_EQ(expected, bt.call(p0, p1)); + } + } + } + } +} + + +TEST(RunInt32UDivP) { + { + RawMachineAssemblerTester<int32_t> m; + Int32BinopTester bt(&m); + bt.AddReturn(m.Int32UDiv(bt.param0, bt.param1)); + FOR_UINT32_INPUTS(i) { + FOR_UINT32_INPUTS(j) { + uint32_t p0 = *i; + uint32_t p1 = *j; + if (p1 != 0) { + uint32_t expected = static_cast<uint32_t>(p0 / p1); + CHECK_EQ(expected, bt.call(p0, p1)); + } + } + } + } + { + RawMachineAssemblerTester<int32_t> m; + Int32BinopTester bt(&m); + bt.AddReturn(m.Int32Add(bt.param0, m.Int32UDiv(bt.param0, bt.param1))); + FOR_UINT32_INPUTS(i) { + FOR_UINT32_INPUTS(j) { + uint32_t p0 = *i; + uint32_t p1 = *j; + if (p1 != 0) { + uint32_t expected = static_cast<uint32_t>(p0 + (p0 / p1)); + CHECK_EQ(expected, bt.call(p0, p1)); + } + } + } + } +} + + +TEST(RunInt32ModP) { + { + RawMachineAssemblerTester<int32_t> m; + Int32BinopTester bt(&m); + bt.AddReturn(m.Int32Mod(bt.param0, bt.param1)); + FOR_INT32_INPUTS(i) { + FOR_INT32_INPUTS(j) { + int p0 = *i; + int p1 = *j; + if (p1 != 0 && (static_cast<uint32_t>(p0) != 0x80000000 || p1 != -1)) { + int expected = static_cast<int32_t>(p0 % p1); + CHECK_EQ(expected, bt.call(p0, p1)); + } + } + } + } + { + RawMachineAssemblerTester<int32_t> m; + Int32BinopTester bt(&m); + bt.AddReturn(m.Int32Add(bt.param0, m.Int32Mod(bt.param0, bt.param1))); + FOR_INT32_INPUTS(i) { + FOR_INT32_INPUTS(j) { + int p0 = *i; + int p1 = *j; + if (p1 != 0 && (static_cast<uint32_t>(p0) != 0x80000000 || p1 != -1)) { + int expected = static_cast<int32_t>(p0 + (p0 % p1)); + CHECK_EQ(expected, bt.call(p0, p1)); + } + } + } + } +} + + +TEST(RunInt32UModP) { + { + RawMachineAssemblerTester<int32_t> m; + Int32BinopTester bt(&m); + bt.AddReturn(m.Int32UMod(bt.param0, bt.param1)); + FOR_UINT32_INPUTS(i) { + FOR_UINT32_INPUTS(j) { + uint32_t p0 = *i; + uint32_t p1 = *j; + if (p1 != 0) { + uint32_t expected = static_cast<uint32_t>(p0 % p1); + CHECK_EQ(expected, bt.call(p0, p1)); + } + } + } + } + { + RawMachineAssemblerTester<int32_t> m; + Int32BinopTester bt(&m); + bt.AddReturn(m.Int32Add(bt.param0, m.Int32UMod(bt.param0, bt.param1))); + FOR_UINT32_INPUTS(i) { + FOR_UINT32_INPUTS(j) { + uint32_t p0 = *i; + uint32_t p1 = *j; + if (p1 != 0) { + uint32_t expected = static_cast<uint32_t>(p0 + (p0 % p1)); + CHECK_EQ(expected, bt.call(p0, p1)); + } + } + } + } +} + + +TEST(RunWord32AndP) { + { + RawMachineAssemblerTester<int32_t> m; + Int32BinopTester bt(&m); + bt.AddReturn(m.Word32And(bt.param0, bt.param1)); + FOR_UINT32_INPUTS(i) { + FOR_UINT32_INPUTS(j) { + uint32_t expected = *i & *j; + CHECK_EQ(expected, bt.call(*i, *j)); + } + } + } + { + RawMachineAssemblerTester<int32_t> m; + Int32BinopTester 
bt(&m); + bt.AddReturn(m.Word32And(bt.param0, m.Word32Not(bt.param1))); + FOR_UINT32_INPUTS(i) { + FOR_UINT32_INPUTS(j) { + uint32_t expected = *i & ~(*j); + CHECK_EQ(expected, bt.call(*i, *j)); + } + } + } + { + RawMachineAssemblerTester<int32_t> m; + Int32BinopTester bt(&m); + bt.AddReturn(m.Word32And(m.Word32Not(bt.param0), bt.param1)); + FOR_UINT32_INPUTS(i) { + FOR_UINT32_INPUTS(j) { + uint32_t expected = ~(*i) & *j; + CHECK_EQ(expected, bt.call(*i, *j)); + } + } + } +} + + +TEST(RunWord32AndAndWord32ShlP) { + { + RawMachineAssemblerTester<int32_t> m; + Int32BinopTester bt(&m); + bt.AddReturn( + m.Word32Shl(bt.param0, m.Word32And(bt.param1, m.Int32Constant(0x1f)))); + FOR_UINT32_INPUTS(i) { + FOR_UINT32_INPUTS(j) { + uint32_t expected = *i << (*j & 0x1f); + CHECK_EQ(expected, bt.call(*i, *j)); + } + } + } + { + RawMachineAssemblerTester<int32_t> m; + Int32BinopTester bt(&m); + bt.AddReturn( + m.Word32Shl(bt.param0, m.Word32And(m.Int32Constant(0x1f), bt.param1))); + FOR_UINT32_INPUTS(i) { + FOR_UINT32_INPUTS(j) { + uint32_t expected = *i << (0x1f & *j); + CHECK_EQ(expected, bt.call(*i, *j)); + } + } + } +} + + +TEST(RunWord32AndAndWord32ShrP) { + { + RawMachineAssemblerTester<int32_t> m; + Int32BinopTester bt(&m); + bt.AddReturn( + m.Word32Shr(bt.param0, m.Word32And(bt.param1, m.Int32Constant(0x1f)))); + FOR_UINT32_INPUTS(i) { + FOR_UINT32_INPUTS(j) { + uint32_t expected = *i >> (*j & 0x1f); + CHECK_EQ(expected, bt.call(*i, *j)); + } + } + } + { + RawMachineAssemblerTester<int32_t> m; + Int32BinopTester bt(&m); + bt.AddReturn( + m.Word32Shr(bt.param0, m.Word32And(m.Int32Constant(0x1f), bt.param1))); + FOR_UINT32_INPUTS(i) { + FOR_UINT32_INPUTS(j) { + uint32_t expected = *i >> (0x1f & *j); + CHECK_EQ(expected, bt.call(*i, *j)); + } + } + } +} + + +TEST(RunWord32AndAndWord32SarP) { + { + RawMachineAssemblerTester<int32_t> m; + Int32BinopTester bt(&m); + bt.AddReturn( + m.Word32Sar(bt.param0, m.Word32And(bt.param1, m.Int32Constant(0x1f)))); + FOR_INT32_INPUTS(i) { + FOR_UINT32_INPUTS(j) { + uint32_t expected = *i >> (*j & 0x1f); + CHECK_EQ(expected, bt.call(*i, *j)); + } + } + } + { + RawMachineAssemblerTester<int32_t> m; + Int32BinopTester bt(&m); + bt.AddReturn( + m.Word32Sar(bt.param0, m.Word32And(m.Int32Constant(0x1f), bt.param1))); + FOR_INT32_INPUTS(i) { + FOR_UINT32_INPUTS(j) { + uint32_t expected = *i >> (0x1f & *j); + CHECK_EQ(expected, bt.call(*i, *j)); + } + } + } +} + + +TEST(RunWord32AndImm) { + { + FOR_UINT32_INPUTS(i) { + RawMachineAssemblerTester<int32_t> m(kMachineWord32); + m.Return(m.Word32And(m.Int32Constant(*i), m.Parameter(0))); + FOR_UINT32_INPUTS(j) { + uint32_t expected = *i & *j; + CHECK_EQ(expected, m.Call(*j)); + } + } + } + { + FOR_UINT32_INPUTS(i) { + RawMachineAssemblerTester<int32_t> m(kMachineWord32); + m.Return(m.Word32And(m.Int32Constant(*i), m.Word32Not(m.Parameter(0)))); + FOR_UINT32_INPUTS(j) { + uint32_t expected = *i & ~(*j); + CHECK_EQ(expected, m.Call(*j)); + } + } + } +} + + +TEST(RunWord32AndInBranch) { + static const int constant = 987654321; + { + RawMachineAssemblerTester<int32_t> m; + Int32BinopTester bt(&m); + MLabel blocka, blockb; + m.Branch( + m.Word32Equal(m.Word32And(bt.param0, bt.param1), m.Int32Constant(0)), + &blocka, &blockb); + m.Bind(&blocka); + bt.AddReturn(m.Int32Constant(constant)); + m.Bind(&blockb); + bt.AddReturn(m.Int32Constant(0 - constant)); + FOR_UINT32_INPUTS(i) { + FOR_UINT32_INPUTS(j) { + int32_t expected = (*i & *j) == 0 ? 
constant : 0 - constant; + CHECK_EQ(expected, bt.call(*i, *j)); + } + } + } + { + RawMachineAssemblerTester<int32_t> m; + Int32BinopTester bt(&m); + MLabel blocka, blockb; + m.Branch( + m.Word32NotEqual(m.Word32And(bt.param0, bt.param1), m.Int32Constant(0)), + &blocka, &blockb); + m.Bind(&blocka); + bt.AddReturn(m.Int32Constant(constant)); + m.Bind(&blockb); + bt.AddReturn(m.Int32Constant(0 - constant)); + FOR_UINT32_INPUTS(i) { + FOR_UINT32_INPUTS(j) { + int32_t expected = (*i & *j) != 0 ? constant : 0 - constant; + CHECK_EQ(expected, bt.call(*i, *j)); + } + } + } + { + FOR_UINT32_INPUTS(i) { + RawMachineAssemblerTester<int32_t> m(kMachineWord32); + MLabel blocka, blockb; + m.Branch(m.Word32Equal(m.Word32And(m.Int32Constant(*i), m.Parameter(0)), + m.Int32Constant(0)), + &blocka, &blockb); + m.Bind(&blocka); + m.Return(m.Int32Constant(constant)); + m.Bind(&blockb); + m.Return(m.Int32Constant(0 - constant)); + FOR_UINT32_INPUTS(j) { + int32_t expected = (*i & *j) == 0 ? constant : 0 - constant; + CHECK_EQ(expected, m.Call(*j)); + } + } + } + { + FOR_UINT32_INPUTS(i) { + RawMachineAssemblerTester<int32_t> m(kMachineWord32); + MLabel blocka, blockb; + m.Branch( + m.Word32NotEqual(m.Word32And(m.Int32Constant(*i), m.Parameter(0)), + m.Int32Constant(0)), + &blocka, &blockb); + m.Bind(&blocka); + m.Return(m.Int32Constant(constant)); + m.Bind(&blockb); + m.Return(m.Int32Constant(0 - constant)); + FOR_UINT32_INPUTS(j) { + int32_t expected = (*i & *j) != 0 ? constant : 0 - constant; + CHECK_EQ(expected, m.Call(*j)); + } + } + } + { + RawMachineAssemblerTester<void> m; + Operator* shops[] = {m.machine()->Word32Sar(), m.machine()->Word32Shl(), + m.machine()->Word32Shr()}; + for (size_t n = 0; n < ARRAY_SIZE(shops); n++) { + RawMachineAssemblerTester<int32_t> m(kMachineWord32, kMachineWord32, + kMachineWord32); + MLabel blocka, blockb; + m.Branch(m.Word32Equal(m.Word32And(m.Parameter(0), + m.NewNode(shops[n], m.Parameter(1), + m.Parameter(2))), + m.Int32Constant(0)), + &blocka, &blockb); + m.Bind(&blocka); + m.Return(m.Int32Constant(constant)); + m.Bind(&blockb); + m.Return(m.Int32Constant(0 - constant)); + FOR_UINT32_INPUTS(i) { + FOR_INT32_INPUTS(j) { + FOR_UINT32_INPUTS(k) { + uint32_t shift = *k & 0x1F; + int32_t right; + switch (shops[n]->opcode()) { + default: + UNREACHABLE(); + case IrOpcode::kWord32Sar: + right = *j >> shift; + break; + case IrOpcode::kWord32Shl: + right = *j << shift; + break; + case IrOpcode::kWord32Shr: + right = static_cast<uint32_t>(*j) >> shift; + break; + } + int32_t expected = ((*i & right) == 0) ? 
constant : 0 - constant; + CHECK_EQ(expected, m.Call(*i, *j, shift)); + } + } + } + } + } +} + + +TEST(RunWord32AndInComparison) { + { + RawMachineAssemblerTester<int32_t> m; + Int32BinopTester bt(&m); + bt.AddReturn( + m.Word32Equal(m.Word32And(bt.param0, bt.param1), m.Int32Constant(0))); + FOR_UINT32_INPUTS(i) { + FOR_UINT32_INPUTS(j) { + int32_t expected = (*i & *j) == 0; + CHECK_EQ(expected, bt.call(*i, *j)); + } + } + } + { + RawMachineAssemblerTester<int32_t> m; + Int32BinopTester bt(&m); + bt.AddReturn( + m.Word32Equal(m.Int32Constant(0), m.Word32And(bt.param0, bt.param1))); + FOR_UINT32_INPUTS(i) { + FOR_UINT32_INPUTS(j) { + int32_t expected = (*i & *j) == 0; + CHECK_EQ(expected, bt.call(*i, *j)); + } + } + } + { + FOR_UINT32_INPUTS(i) { + RawMachineAssemblerTester<int32_t> m(kMachineWord32); + m.Return(m.Word32Equal(m.Word32And(m.Int32Constant(*i), m.Parameter(0)), + m.Int32Constant(0))); + FOR_UINT32_INPUTS(j) { + int32_t expected = (*i & *j) == 0; + CHECK_EQ(expected, m.Call(*j)); + } + } + } + { + FOR_UINT32_INPUTS(i) { + RawMachineAssemblerTester<int32_t> m(kMachineWord32); + m.Return(m.Word32Equal(m.Word32And(m.Parameter(0), m.Int32Constant(*i)), + m.Int32Constant(0))); + FOR_UINT32_INPUTS(j) { + int32_t expected = (*j & *i) == 0; + CHECK_EQ(expected, m.Call(*j)); + } + } + } +} + + +TEST(RunWord32OrP) { + { + RawMachineAssemblerTester<int32_t> m; + Int32BinopTester bt(&m); + bt.AddReturn(m.Word32Or(bt.param0, bt.param1)); + FOR_UINT32_INPUTS(i) { + FOR_UINT32_INPUTS(j) { + uint32_t expected = *i | *j; + CHECK_EQ(expected, bt.call(*i, *j)); + } + } + } + { + RawMachineAssemblerTester<int32_t> m; + Int32BinopTester bt(&m); + bt.AddReturn(m.Word32Or(bt.param0, m.Word32Not(bt.param1))); + FOR_UINT32_INPUTS(i) { + FOR_UINT32_INPUTS(j) { + uint32_t expected = *i | ~(*j); + CHECK_EQ(expected, bt.call(*i, *j)); + } + } + } + { + RawMachineAssemblerTester<int32_t> m; + Int32BinopTester bt(&m); + bt.AddReturn(m.Word32Or(m.Word32Not(bt.param0), bt.param1)); + FOR_UINT32_INPUTS(i) { + FOR_UINT32_INPUTS(j) { + uint32_t expected = ~(*i) | *j; + CHECK_EQ(expected, bt.call(*i, *j)); + } + } + } +} + + +TEST(RunWord32OrImm) { + { + FOR_UINT32_INPUTS(i) { + RawMachineAssemblerTester<int32_t> m(kMachineWord32); + m.Return(m.Word32Or(m.Int32Constant(*i), m.Parameter(0))); + FOR_UINT32_INPUTS(j) { + uint32_t expected = *i | *j; + CHECK_EQ(expected, m.Call(*j)); + } + } + } + { + FOR_UINT32_INPUTS(i) { + RawMachineAssemblerTester<int32_t> m(kMachineWord32); + m.Return(m.Word32Or(m.Int32Constant(*i), m.Word32Not(m.Parameter(0)))); + FOR_UINT32_INPUTS(j) { + uint32_t expected = *i | ~(*j); + CHECK_EQ(expected, m.Call(*j)); + } + } + } +} + + +TEST(RunWord32OrInBranch) { + static const int constant = 987654321; + { + RawMachineAssemblerTester<int32_t> m; + Int32BinopTester bt(&m); + MLabel blocka, blockb; + m.Branch( + m.Word32Equal(m.Word32Or(bt.param0, bt.param1), m.Int32Constant(0)), + &blocka, &blockb); + m.Bind(&blocka); + bt.AddReturn(m.Int32Constant(constant)); + m.Bind(&blockb); + bt.AddReturn(m.Int32Constant(0 - constant)); + FOR_UINT32_INPUTS(i) { + FOR_UINT32_INPUTS(j) { + int32_t expected = (*i | *j) == 0 ? 
constant : 0 - constant; + CHECK_EQ(expected, bt.call(*i, *j)); + } + } + } + { + RawMachineAssemblerTester<int32_t> m; + Int32BinopTester bt(&m); + MLabel blocka, blockb; + m.Branch( + m.Word32NotEqual(m.Word32Or(bt.param0, bt.param1), m.Int32Constant(0)), + &blocka, &blockb); + m.Bind(&blocka); + bt.AddReturn(m.Int32Constant(constant)); + m.Bind(&blockb); + bt.AddReturn(m.Int32Constant(0 - constant)); + FOR_UINT32_INPUTS(i) { + FOR_UINT32_INPUTS(j) { + int32_t expected = (*i | *j) != 0 ? constant : 0 - constant; + CHECK_EQ(expected, bt.call(*i, *j)); + } + } + } + { + FOR_UINT32_INPUTS(i) { + RawMachineAssemblerTester<int32_t> m(kMachineWord32); + MLabel blocka, blockb; + m.Branch(m.Word32Equal(m.Word32Or(m.Int32Constant(*i), m.Parameter(0)), + m.Int32Constant(0)), + &blocka, &blockb); + m.Bind(&blocka); + m.Return(m.Int32Constant(constant)); + m.Bind(&blockb); + m.Return(m.Int32Constant(0 - constant)); + FOR_UINT32_INPUTS(j) { + int32_t expected = (*i | *j) == 0 ? constant : 0 - constant; + CHECK_EQ(expected, m.Call(*j)); + } + } + } + { + FOR_UINT32_INPUTS(i) { + RawMachineAssemblerTester<int32_t> m(kMachineWord32); + MLabel blocka, blockb; + m.Branch(m.Word32NotEqual(m.Word32Or(m.Int32Constant(*i), m.Parameter(0)), + m.Int32Constant(0)), + &blocka, &blockb); + m.Bind(&blocka); + m.Return(m.Int32Constant(constant)); + m.Bind(&blockb); + m.Return(m.Int32Constant(0 - constant)); + FOR_UINT32_INPUTS(j) { + int32_t expected = (*i | *j) != 0 ? constant : 0 - constant; + CHECK_EQ(expected, m.Call(*j)); + } + } + } + { + RawMachineAssemblerTester<void> m; + Operator* shops[] = {m.machine()->Word32Sar(), m.machine()->Word32Shl(), + m.machine()->Word32Shr()}; + for (size_t n = 0; n < ARRAY_SIZE(shops); n++) { + RawMachineAssemblerTester<int32_t> m(kMachineWord32, kMachineWord32, + kMachineWord32); + MLabel blocka, blockb; + m.Branch(m.Word32Equal(m.Word32Or(m.Parameter(0), + m.NewNode(shops[n], m.Parameter(1), + m.Parameter(2))), + m.Int32Constant(0)), + &blocka, &blockb); + m.Bind(&blocka); + m.Return(m.Int32Constant(constant)); + m.Bind(&blockb); + m.Return(m.Int32Constant(0 - constant)); + FOR_UINT32_INPUTS(i) { + FOR_INT32_INPUTS(j) { + FOR_UINT32_INPUTS(k) { + uint32_t shift = *k & 0x1F; + int32_t right; + switch (shops[n]->opcode()) { + default: + UNREACHABLE(); + case IrOpcode::kWord32Sar: + right = *j >> shift; + break; + case IrOpcode::kWord32Shl: + right = *j << shift; + break; + case IrOpcode::kWord32Shr: + right = static_cast<uint32_t>(*j) >> shift; + break; + } + int32_t expected = ((*i | right) == 0) ? 
constant : 0 - constant; + CHECK_EQ(expected, m.Call(*i, *j, shift)); + } + } + } + } + } +} + + +TEST(RunWord32OrInComparison) { + { + RawMachineAssemblerTester<int32_t> m; + Int32BinopTester bt(&m); + bt.AddReturn( + m.Word32Equal(m.Word32Or(bt.param0, bt.param1), m.Int32Constant(0))); + FOR_UINT32_INPUTS(i) { + FOR_UINT32_INPUTS(j) { + int32_t expected = (*i | *j) == 0; + CHECK_EQ(expected, bt.call(*i, *j)); + } + } + } + { + RawMachineAssemblerTester<int32_t> m; + Int32BinopTester bt(&m); + bt.AddReturn( + m.Word32Equal(m.Int32Constant(0), m.Word32Or(bt.param0, bt.param1))); + FOR_UINT32_INPUTS(i) { + FOR_UINT32_INPUTS(j) { + int32_t expected = (*i | *j) == 0; + CHECK_EQ(expected, bt.call(*i, *j)); + } + } + } + { + FOR_UINT32_INPUTS(i) { + RawMachineAssemblerTester<int32_t> m(kMachineWord32); + m.Return(m.Word32Equal(m.Word32Or(m.Int32Constant(*i), m.Parameter(0)), + m.Int32Constant(0))); + FOR_UINT32_INPUTS(j) { + int32_t expected = (*i | *j) == 0; + CHECK_EQ(expected, m.Call(*j)); + } + } + } + { + FOR_UINT32_INPUTS(i) { + RawMachineAssemblerTester<int32_t> m(kMachineWord32); + m.Return(m.Word32Equal(m.Word32Or(m.Parameter(0), m.Int32Constant(*i)), + m.Int32Constant(0))); + FOR_UINT32_INPUTS(j) { + int32_t expected = (*j | *i) == 0; + CHECK_EQ(expected, m.Call(*j)); + } + } + } +} + + +TEST(RunWord32XorP) { + { + FOR_UINT32_INPUTS(i) { + RawMachineAssemblerTester<int32_t> m(kMachineWord32); + m.Return(m.Word32Xor(m.Int32Constant(*i), m.Parameter(0))); + FOR_UINT32_INPUTS(j) { + uint32_t expected = *i ^ *j; + CHECK_EQ(expected, m.Call(*j)); + } + } + } + { + RawMachineAssemblerTester<int32_t> m; + Int32BinopTester bt(&m); + bt.AddReturn(m.Word32Xor(bt.param0, bt.param1)); + FOR_UINT32_INPUTS(i) { + FOR_UINT32_INPUTS(j) { + uint32_t expected = *i ^ *j; + CHECK_EQ(expected, bt.call(*i, *j)); + } + } + } + { + RawMachineAssemblerTester<int32_t> m; + Int32BinopTester bt(&m); + bt.AddReturn(m.Word32Xor(bt.param0, m.Word32Not(bt.param1))); + FOR_UINT32_INPUTS(i) { + FOR_UINT32_INPUTS(j) { + uint32_t expected = *i ^ ~(*j); + CHECK_EQ(expected, bt.call(*i, *j)); + } + } + } + { + RawMachineAssemblerTester<int32_t> m; + Int32BinopTester bt(&m); + bt.AddReturn(m.Word32Xor(m.Word32Not(bt.param0), bt.param1)); + FOR_UINT32_INPUTS(i) { + FOR_UINT32_INPUTS(j) { + uint32_t expected = ~(*i) ^ *j; + CHECK_EQ(expected, bt.call(*i, *j)); + } + } + } + { + FOR_UINT32_INPUTS(i) { + RawMachineAssemblerTester<int32_t> m(kMachineWord32); + m.Return(m.Word32Xor(m.Int32Constant(*i), m.Word32Not(m.Parameter(0)))); + FOR_UINT32_INPUTS(j) { + uint32_t expected = *i ^ ~(*j); + CHECK_EQ(expected, m.Call(*j)); + } + } + } +} + + +TEST(RunWord32XorInBranch) { + static const int constant = 987654321; + { + RawMachineAssemblerTester<int32_t> m; + Int32BinopTester bt(&m); + MLabel blocka, blockb; + m.Branch( + m.Word32Equal(m.Word32Xor(bt.param0, bt.param1), m.Int32Constant(0)), + &blocka, &blockb); + m.Bind(&blocka); + bt.AddReturn(m.Int32Constant(constant)); + m.Bind(&blockb); + bt.AddReturn(m.Int32Constant(0 - constant)); + FOR_UINT32_INPUTS(i) { + FOR_UINT32_INPUTS(j) { + int32_t expected = (*i ^ *j) == 0 ? 
constant : 0 - constant; + CHECK_EQ(expected, bt.call(*i, *j)); + } + } + } + { + RawMachineAssemblerTester<int32_t> m; + Int32BinopTester bt(&m); + MLabel blocka, blockb; + m.Branch( + m.Word32NotEqual(m.Word32Xor(bt.param0, bt.param1), m.Int32Constant(0)), + &blocka, &blockb); + m.Bind(&blocka); + bt.AddReturn(m.Int32Constant(constant)); + m.Bind(&blockb); + bt.AddReturn(m.Int32Constant(0 - constant)); + FOR_UINT32_INPUTS(i) { + FOR_UINT32_INPUTS(j) { + int32_t expected = (*i ^ *j) != 0 ? constant : 0 - constant; + CHECK_EQ(expected, bt.call(*i, *j)); + } + } + } + { + FOR_UINT32_INPUTS(i) { + RawMachineAssemblerTester<int32_t> m(kMachineWord32); + MLabel blocka, blockb; + m.Branch(m.Word32Equal(m.Word32Xor(m.Int32Constant(*i), m.Parameter(0)), + m.Int32Constant(0)), + &blocka, &blockb); + m.Bind(&blocka); + m.Return(m.Int32Constant(constant)); + m.Bind(&blockb); + m.Return(m.Int32Constant(0 - constant)); + FOR_UINT32_INPUTS(j) { + int32_t expected = (*i ^ *j) == 0 ? constant : 0 - constant; + CHECK_EQ(expected, m.Call(*j)); + } + } + } + { + FOR_UINT32_INPUTS(i) { + RawMachineAssemblerTester<int32_t> m(kMachineWord32); + MLabel blocka, blockb; + m.Branch( + m.Word32NotEqual(m.Word32Xor(m.Int32Constant(*i), m.Parameter(0)), + m.Int32Constant(0)), + &blocka, &blockb); + m.Bind(&blocka); + m.Return(m.Int32Constant(constant)); + m.Bind(&blockb); + m.Return(m.Int32Constant(0 - constant)); + FOR_UINT32_INPUTS(j) { + int32_t expected = (*i ^ *j) != 0 ? constant : 0 - constant; + CHECK_EQ(expected, m.Call(*j)); + } + } + } + { + RawMachineAssemblerTester<void> m; + Operator* shops[] = {m.machine()->Word32Sar(), m.machine()->Word32Shl(), + m.machine()->Word32Shr()}; + for (size_t n = 0; n < ARRAY_SIZE(shops); n++) { + RawMachineAssemblerTester<int32_t> m(kMachineWord32, kMachineWord32, + kMachineWord32); + MLabel blocka, blockb; + m.Branch(m.Word32Equal(m.Word32Xor(m.Parameter(0), + m.NewNode(shops[n], m.Parameter(1), + m.Parameter(2))), + m.Int32Constant(0)), + &blocka, &blockb); + m.Bind(&blocka); + m.Return(m.Int32Constant(constant)); + m.Bind(&blockb); + m.Return(m.Int32Constant(0 - constant)); + FOR_UINT32_INPUTS(i) { + FOR_INT32_INPUTS(j) { + FOR_UINT32_INPUTS(k) { + uint32_t shift = *k & 0x1F; + int32_t right; + switch (shops[n]->opcode()) { + default: + UNREACHABLE(); + case IrOpcode::kWord32Sar: + right = *j >> shift; + break; + case IrOpcode::kWord32Shl: + right = *j << shift; + break; + case IrOpcode::kWord32Shr: + right = static_cast<uint32_t>(*j) >> shift; + break; + } + int32_t expected = ((*i ^ right) == 0) ? 
constant : 0 - constant; + CHECK_EQ(expected, m.Call(*i, *j, shift)); + } + } + } + } + } +} + + +TEST(RunWord32ShlP) { + { + FOR_UINT32_INPUTS(i) { + uint32_t shift = *i & 0x1F; + RawMachineAssemblerTester<int32_t> m(kMachineWord32); + m.Return(m.Word32Shl(m.Parameter(0), m.Int32Constant(shift))); + FOR_UINT32_INPUTS(j) { + uint32_t expected = *j << shift; + CHECK_EQ(expected, m.Call(*j)); + } + } + } + { + RawMachineAssemblerTester<int32_t> m; + Int32BinopTester bt(&m); + bt.AddReturn(m.Word32Shl(bt.param0, bt.param1)); + FOR_UINT32_INPUTS(i) { + FOR_UINT32_INPUTS(j) { + uint32_t shift = *j & 0x1F; + uint32_t expected = *i << shift; + CHECK_EQ(expected, bt.call(*i, shift)); + } + } + } +} + + +TEST(RunWord32ShrP) { + { + FOR_UINT32_INPUTS(i) { + uint32_t shift = *i & 0x1F; + RawMachineAssemblerTester<int32_t> m(kMachineWord32); + m.Return(m.Word32Shr(m.Parameter(0), m.Int32Constant(shift))); + FOR_UINT32_INPUTS(j) { + uint32_t expected = *j >> shift; + CHECK_EQ(expected, m.Call(*j)); + } + } + } + { + RawMachineAssemblerTester<int32_t> m; + Int32BinopTester bt(&m); + bt.AddReturn(m.Word32Shr(bt.param0, bt.param1)); + FOR_UINT32_INPUTS(i) { + FOR_UINT32_INPUTS(j) { + uint32_t shift = *j & 0x1F; + uint32_t expected = *i >> shift; + CHECK_EQ(expected, bt.call(*i, shift)); + } + } + CHECK_EQ(0x00010000, bt.call(0x80000000, 15)); + } +} + + +TEST(RunWord32SarP) { + { + FOR_INT32_INPUTS(i) { + int32_t shift = *i & 0x1F; + RawMachineAssemblerTester<int32_t> m(kMachineWord32); + m.Return(m.Word32Sar(m.Parameter(0), m.Int32Constant(shift))); + FOR_INT32_INPUTS(j) { + int32_t expected = *j >> shift; + CHECK_EQ(expected, m.Call(*j)); + } + } + } + { + RawMachineAssemblerTester<int32_t> m; + Int32BinopTester bt(&m); + bt.AddReturn(m.Word32Sar(bt.param0, bt.param1)); + FOR_INT32_INPUTS(i) { + FOR_INT32_INPUTS(j) { + int32_t shift = *j & 0x1F; + int32_t expected = *i >> shift; + CHECK_EQ(expected, bt.call(*i, shift)); + } + } + CHECK_EQ(0xFFFF0000, bt.call(0x80000000, 15)); + } +} + + +TEST(RunWord32NotP) { + RawMachineAssemblerTester<int32_t> m(kMachineWord32); + m.Return(m.Word32Not(m.Parameter(0))); + FOR_UINT32_INPUTS(i) { + int expected = ~(*i); + CHECK_EQ(expected, m.Call(*i)); + } +} + + +TEST(RunInt32NegP) { + RawMachineAssemblerTester<int32_t> m(kMachineWord32); + m.Return(m.Int32Neg(m.Parameter(0))); + FOR_INT32_INPUTS(i) { + int expected = -*i; + CHECK_EQ(expected, m.Call(*i)); + } +} + + +TEST(RunWord32EqualAndWord32SarP) { + { + RawMachineAssemblerTester<int32_t> m(kMachineWord32, kMachineWord32, + kMachineWord32); + m.Return(m.Word32Equal(m.Parameter(0), + m.Word32Sar(m.Parameter(1), m.Parameter(2)))); + FOR_INT32_INPUTS(i) { + FOR_INT32_INPUTS(j) { + FOR_UINT32_INPUTS(k) { + uint32_t shift = *k & 0x1F; + int32_t expected = (*i == (*j >> shift)); + CHECK_EQ(expected, m.Call(*i, *j, shift)); + } + } + } + } + { + RawMachineAssemblerTester<int32_t> m(kMachineWord32, kMachineWord32, + kMachineWord32); + m.Return(m.Word32Equal(m.Word32Sar(m.Parameter(0), m.Parameter(1)), + m.Parameter(2))); + FOR_INT32_INPUTS(i) { + FOR_UINT32_INPUTS(j) { + FOR_INT32_INPUTS(k) { + uint32_t shift = *j & 0x1F; + int32_t expected = ((*i >> shift) == *k); + CHECK_EQ(expected, m.Call(*i, shift, *k)); + } + } + } + } +} + + +TEST(RunWord32EqualAndWord32ShlP) { + { + RawMachineAssemblerTester<int32_t> m(kMachineWord32, kMachineWord32, + kMachineWord32); + m.Return(m.Word32Equal(m.Parameter(0), + m.Word32Shl(m.Parameter(1), m.Parameter(2)))); + FOR_UINT32_INPUTS(i) { + FOR_UINT32_INPUTS(j) { + FOR_UINT32_INPUTS(k) { 
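+          // Mask the shift count to [0, 31]: shifting a 32-bit value by 32 or more is undefined behavior in C++, so the reference result below is only computed for in-range shift amounts.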
+ uint32_t shift = *k & 0x1F; + int32_t expected = (*i == (*j << shift)); + CHECK_EQ(expected, m.Call(*i, *j, shift)); + } + } + } + } + { + RawMachineAssemblerTester<int32_t> m(kMachineWord32, kMachineWord32, + kMachineWord32); + m.Return(m.Word32Equal(m.Word32Shl(m.Parameter(0), m.Parameter(1)), + m.Parameter(2))); + FOR_UINT32_INPUTS(i) { + FOR_UINT32_INPUTS(j) { + FOR_UINT32_INPUTS(k) { + uint32_t shift = *j & 0x1F; + int32_t expected = ((*i << shift) == *k); + CHECK_EQ(expected, m.Call(*i, shift, *k)); + } + } + } + } +} + + +TEST(RunWord32EqualAndWord32ShrP) { + { + RawMachineAssemblerTester<int32_t> m(kMachineWord32, kMachineWord32, + kMachineWord32); + m.Return(m.Word32Equal(m.Parameter(0), + m.Word32Shr(m.Parameter(1), m.Parameter(2)))); + FOR_UINT32_INPUTS(i) { + FOR_UINT32_INPUTS(j) { + FOR_UINT32_INPUTS(k) { + uint32_t shift = *k & 0x1F; + int32_t expected = (*i == (*j >> shift)); + CHECK_EQ(expected, m.Call(*i, *j, shift)); + } + } + } + } + { + RawMachineAssemblerTester<int32_t> m(kMachineWord32, kMachineWord32, + kMachineWord32); + m.Return(m.Word32Equal(m.Word32Shr(m.Parameter(0), m.Parameter(1)), + m.Parameter(2))); + FOR_UINT32_INPUTS(i) { + FOR_UINT32_INPUTS(j) { + FOR_UINT32_INPUTS(k) { + uint32_t shift = *j & 0x1F; + int32_t expected = ((*i >> shift) == *k); + CHECK_EQ(expected, m.Call(*i, shift, *k)); + } + } + } + } +} + + +TEST(RunDeadNodes) { + for (int i = 0; true; i++) { + RawMachineAssemblerTester<int32_t> m(i == 5 ? kMachineWord32 + : kMachineLast); + int constant = 0x55 + i; + switch (i) { + case 0: + m.Int32Constant(44); + break; + case 1: + m.StringConstant("unused"); + break; + case 2: + m.NumberConstant(11.1); + break; + case 3: + m.PointerConstant(&constant); + break; + case 4: + m.LoadFromPointer(&constant, kMachineWord32); + break; + case 5: + m.Parameter(0); + break; + default: + return; + } + m.Return(m.Int32Constant(constant)); + if (i != 5) { + CHECK_EQ(constant, m.Call()); + } else { + CHECK_EQ(constant, m.Call(0)); + } + } +} + + +TEST(RunDeadInt32Binops) { + RawMachineAssemblerTester<int32_t> m; + + Operator* ops[] = { + m.machine()->Word32And(), m.machine()->Word32Or(), + m.machine()->Word32Xor(), m.machine()->Word32Shl(), + m.machine()->Word32Shr(), m.machine()->Word32Sar(), + m.machine()->Word32Equal(), m.machine()->Int32Add(), + m.machine()->Int32Sub(), m.machine()->Int32Mul(), + m.machine()->Int32Div(), m.machine()->Int32UDiv(), + m.machine()->Int32Mod(), m.machine()->Int32UMod(), + m.machine()->Int32LessThan(), m.machine()->Int32LessThanOrEqual(), + m.machine()->Uint32LessThan(), m.machine()->Uint32LessThanOrEqual(), + NULL}; + + for (int i = 0; ops[i] != NULL; i++) { + RawMachineAssemblerTester<int32_t> m(kMachineWord32, kMachineWord32); + int constant = 0x55555 + i; + m.NewNode(ops[i], m.Parameter(0), m.Parameter(1)); + m.Return(m.Int32Constant(constant)); + + CHECK_EQ(constant, m.Call(1, 1)); + } +} + + +template <typename Type, typename CType> +static void RunLoadImmIndex(MachineType rep) { + const int kNumElems = 3; + CType buffer[kNumElems]; + + // initialize the buffer with raw data. + byte* raw = reinterpret_cast<byte*>(buffer); + for (size_t i = 0; i < sizeof(buffer); i++) { + raw[i] = static_cast<byte>((i + sizeof(buffer)) ^ 0xAA); + } + + // Test with various large and small offsets. 
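+  // The offset sequence -1, 5, -25, 125, ... alternates sign while growing by 5x, so both negative and positive displacements are exercised; base is pre-adjusted by -offset, so each load still reads buffer[i].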
+ for (int offset = -1; offset <= 200000; offset *= -5) { + for (int i = 0; i < kNumElems; i++) { + RawMachineAssemblerTester<Type> m; + Node* base = m.PointerConstant(buffer - offset); + Node* index = m.Int32Constant((offset + i) * sizeof(buffer[0])); + m.Return(m.Load(rep, base, index)); + + Type expected = buffer[i]; + Type actual = static_cast<CType>(m.Call()); + CHECK_EQ(expected, actual); + } + } +} + + +TEST(RunLoadImmIndex) { + RunLoadImmIndex<int8_t, uint8_t>(kMachineWord8); + RunLoadImmIndex<int16_t, uint16_t>(kMachineWord16); + RunLoadImmIndex<int32_t, uint32_t>(kMachineWord32); + RunLoadImmIndex<int32_t*, int32_t*>(kMachineTagged); + + // TODO(titzer): test kMachineFloat64 loads + // TODO(titzer): test various indexing modes. +} + + +template <typename CType> +static void RunLoadStore(MachineType rep) { + const int kNumElems = 4; + CType buffer[kNumElems]; + + for (int32_t x = 0; x < kNumElems; x++) { + int32_t y = kNumElems - x - 1; + // initialize the buffer with raw data. + byte* raw = reinterpret_cast<byte*>(buffer); + for (size_t i = 0; i < sizeof(buffer); i++) { + raw[i] = static_cast<byte>((i + sizeof(buffer)) ^ 0xAA); + } + + RawMachineAssemblerTester<int32_t> m; + int32_t OK = 0x29000 + x; + Node* base = m.PointerConstant(buffer); + Node* index0 = m.Int32Constant(x * sizeof(buffer[0])); + Node* load = m.Load(rep, base, index0); + Node* index1 = m.Int32Constant(y * sizeof(buffer[0])); + m.Store(rep, base, index1, load); + m.Return(m.Int32Constant(OK)); + + CHECK_NE(buffer[x], buffer[y]); + CHECK_EQ(OK, m.Call()); + CHECK_EQ(buffer[x], buffer[y]); + } +} + + +TEST(RunLoadStore) { + RunLoadStore<int8_t>(kMachineWord8); + RunLoadStore<int16_t>(kMachineWord16); + RunLoadStore<int32_t>(kMachineWord32); + RunLoadStore<void*>(kMachineTagged); + RunLoadStore<double>(kMachineFloat64); +} + + +TEST(RunFloat64Binop) { + RawMachineAssemblerTester<int32_t> m; + double result; + + Operator* ops[] = {m.machine()->Float64Add(), m.machine()->Float64Sub(), + m.machine()->Float64Mul(), m.machine()->Float64Div(), + m.machine()->Float64Mod(), NULL}; + + double inf = V8_INFINITY; + Operator* inputs[] = { + m.common()->Float64Constant(0), m.common()->Float64Constant(1), + m.common()->Float64Constant(1), m.common()->Float64Constant(0), + m.common()->Float64Constant(0), m.common()->Float64Constant(-1), + m.common()->Float64Constant(-1), m.common()->Float64Constant(0), + m.common()->Float64Constant(0.22), m.common()->Float64Constant(-1.22), + m.common()->Float64Constant(-1.22), m.common()->Float64Constant(0.22), + m.common()->Float64Constant(inf), m.common()->Float64Constant(0.22), + m.common()->Float64Constant(inf), m.common()->Float64Constant(-inf), + NULL}; + + for (int i = 0; ops[i] != NULL; i++) { + for (int j = 0; inputs[j] != NULL; j += 2) { + RawMachineAssemblerTester<int32_t> m; + Node* a = m.NewNode(inputs[j]); + Node* b = m.NewNode(inputs[j + 1]); + Node* binop = m.NewNode(ops[i], a, b); + Node* base = m.PointerConstant(&result); + Node* zero = m.Int32Constant(0); + m.Store(kMachineFloat64, base, zero, binop); + m.Return(m.Int32Constant(i + j)); + CHECK_EQ(i + j, m.Call()); + } + } +} + + +TEST(RunDeadFloat64Binops) { + RawMachineAssemblerTester<int32_t> m; + + Operator* ops[] = {m.machine()->Float64Add(), m.machine()->Float64Sub(), + m.machine()->Float64Mul(), m.machine()->Float64Div(), + m.machine()->Float64Mod(), NULL}; + + for (int i = 0; ops[i] != NULL; i++) { + RawMachineAssemblerTester<int32_t> m; + int constant = 0x53355 + i; + m.NewNode(ops[i],
m.Float64Constant(0.1), m.Float64Constant(1.11)); + m.Return(m.Int32Constant(constant)); + CHECK_EQ(constant, m.Call()); + } +} + + +TEST(RunFloat64AddP) { + RawMachineAssemblerTester<int32_t> m; + Float64BinopTester bt(&m); + + bt.AddReturn(m.Float64Add(bt.param0, bt.param1)); + + FOR_FLOAT64_INPUTS(pl) { + FOR_FLOAT64_INPUTS(pr) { + double expected = *pl + *pr; + CHECK_EQ(expected, bt.call(*pl, *pr)); + } + } +} + + +TEST(RunFloat64SubP) { + RawMachineAssemblerTester<int32_t> m; + Float64BinopTester bt(&m); + + bt.AddReturn(m.Float64Sub(bt.param0, bt.param1)); + + FOR_FLOAT64_INPUTS(pl) { + FOR_FLOAT64_INPUTS(pr) { + double expected = *pl - *pr; + CHECK_EQ(expected, bt.call(*pl, *pr)); + } + } +} + + +TEST(RunFloat64SubImm1) { + double input = 0.0; + double output = 0.0; + + FOR_FLOAT64_INPUTS(i) { + RawMachineAssemblerTester<int32_t> m; + Node* t0 = m.LoadFromPointer(&input, kMachineFloat64); + Node* t1 = m.Float64Sub(m.Float64Constant(*i), t0); + m.StoreToPointer(&output, kMachineFloat64, t1); + m.Return(m.Int32Constant(0)); + FOR_FLOAT64_INPUTS(j) { + input = *j; + double expected = *i - input; + CHECK_EQ(0, m.Call()); + CHECK_EQ(expected, output); + } + } +} + + +TEST(RunFloat64SubImm2) { + double input = 0.0; + double output = 0.0; + + FOR_FLOAT64_INPUTS(i) { + RawMachineAssemblerTester<int32_t> m; + Node* t0 = m.LoadFromPointer(&input, kMachineFloat64); + Node* t1 = m.Float64Sub(t0, m.Float64Constant(*i)); + m.StoreToPointer(&output, kMachineFloat64, t1); + m.Return(m.Int32Constant(0)); + FOR_FLOAT64_INPUTS(j) { + input = *j; + double expected = input - *i; + CHECK_EQ(0, m.Call()); + CHECK_EQ(expected, output); + } + } +} + + +TEST(RunFloat64MulP) { + RawMachineAssemblerTester<int32_t> m; + Float64BinopTester bt(&m); + + bt.AddReturn(m.Float64Mul(bt.param0, bt.param1)); + + FOR_FLOAT64_INPUTS(pl) { + FOR_FLOAT64_INPUTS(pr) { + double expected = *pl * *pr; + CHECK_EQ(expected, bt.call(*pl, *pr)); + } + } +} + + +TEST(RunFloat64MulAndFloat64AddP) { + double input_a = 0.0; + double input_b = 0.0; + double input_c = 0.0; + double output = 0.0; + + { + RawMachineAssemblerTester<int32_t> m; + Node* a = m.LoadFromPointer(&input_a, kMachineFloat64); + Node* b = m.LoadFromPointer(&input_b, kMachineFloat64); + Node* c = m.LoadFromPointer(&input_c, kMachineFloat64); + m.StoreToPointer(&output, kMachineFloat64, + m.Float64Add(m.Float64Mul(a, b), c)); + m.Return(m.Int32Constant(0)); + FOR_FLOAT64_INPUTS(i) { + FOR_FLOAT64_INPUTS(j) { + FOR_FLOAT64_INPUTS(k) { + input_a = *i; + input_b = *j; + input_c = *k; + volatile double temp = input_a * input_b; + volatile double expected = temp + input_c; + CHECK_EQ(0, m.Call()); + CHECK_EQ(expected, output); + } + } + } + } + { + RawMachineAssemblerTester<int32_t> m; + Node* a = m.LoadFromPointer(&input_a, kMachineFloat64); + Node* b = m.LoadFromPointer(&input_b, kMachineFloat64); + Node* c = m.LoadFromPointer(&input_c, kMachineFloat64); + m.StoreToPointer(&output, kMachineFloat64, + m.Float64Add(a, m.Float64Mul(b, c))); + m.Return(m.Int32Constant(0)); + FOR_FLOAT64_INPUTS(i) { + FOR_FLOAT64_INPUTS(j) { + FOR_FLOAT64_INPUTS(k) { + input_a = *i; + input_b = *j; + input_c = *k; + volatile double temp = input_b * input_c; + volatile double expected = input_a + temp; + CHECK_EQ(0, m.Call()); + CHECK_EQ(expected, output); + } + } + } + } +} + + +TEST(RunFloat64MulAndFloat64SubP) { + double input_a = 0.0; + double input_b = 0.0; + double input_c = 0.0; + double output = 0.0; + + RawMachineAssemblerTester<int32_t> m; + Node* a = m.LoadFromPointer(&input_a, 
kMachineFloat64); + Node* b = m.LoadFromPointer(&input_b, kMachineFloat64); + Node* c = m.LoadFromPointer(&input_c, kMachineFloat64); + m.StoreToPointer(&output, kMachineFloat64, + m.Float64Sub(a, m.Float64Mul(b, c))); + m.Return(m.Int32Constant(0)); + + FOR_FLOAT64_INPUTS(i) { + FOR_FLOAT64_INPUTS(j) { + FOR_FLOAT64_INPUTS(k) { + input_a = *i; + input_b = *j; + input_c = *k; + volatile double temp = input_b * input_c; + volatile double expected = input_a - temp; + CHECK_EQ(0, m.Call()); + CHECK_EQ(expected, output); + } + } + } +} + + +TEST(RunFloat64MulImm) { + double input = 0.0; + double output = 0.0; + + { + FOR_FLOAT64_INPUTS(i) { + RawMachineAssemblerTester<int32_t> m; + Node* t0 = m.LoadFromPointer(&input, kMachineFloat64); + Node* t1 = m.Float64Mul(m.Float64Constant(*i), t0); + m.StoreToPointer(&output, kMachineFloat64, t1); + m.Return(m.Int32Constant(0)); + FOR_FLOAT64_INPUTS(j) { + input = *j; + double expected = *i * input; + CHECK_EQ(0, m.Call()); + CHECK_EQ(expected, output); + } + } + } + { + FOR_FLOAT64_INPUTS(i) { + RawMachineAssemblerTester<int32_t> m; + Node* t0 = m.LoadFromPointer(&input, kMachineFloat64); + Node* t1 = m.Float64Mul(t0, m.Float64Constant(*i)); + m.StoreToPointer(&output, kMachineFloat64, t1); + m.Return(m.Int32Constant(0)); + FOR_FLOAT64_INPUTS(j) { + input = *j; + double expected = input * *i; + CHECK_EQ(0, m.Call()); + CHECK_EQ(expected, output); + } + } + } +} + + +TEST(RunFloat64DivP) { + RawMachineAssemblerTester<int32_t> m; + Float64BinopTester bt(&m); + + bt.AddReturn(m.Float64Div(bt.param0, bt.param1)); + + FOR_FLOAT64_INPUTS(pl) { + FOR_FLOAT64_INPUTS(pr) { + double expected = *pl / *pr; + CHECK_EQ(expected, bt.call(*pl, *pr)); + } + } +} + + +TEST(RunFloat64ModP) { + RawMachineAssemblerTester<int32_t> m; + Float64BinopTester bt(&m); + + bt.AddReturn(m.Float64Mod(bt.param0, bt.param1)); + + FOR_FLOAT64_INPUTS(i) { + FOR_FLOAT64_INPUTS(j) { + double expected = modulo(*i, *j); + double found = bt.call(*i, *j); + CHECK_EQ(expected, found); + } + } +} + + +TEST(RunChangeInt32ToFloat64_A) { + RawMachineAssemblerTester<int32_t> m; + int32_t magic = 0x986234; + double result = 0; + + Node* convert = m.ChangeInt32ToFloat64(m.Int32Constant(magic)); + m.Store(kMachineFloat64, m.PointerConstant(&result), m.Int32Constant(0), + convert); + m.Return(m.Int32Constant(magic)); + + CHECK_EQ(magic, m.Call()); + CHECK_EQ(static_cast<double>(magic), result); +} + + +TEST(RunChangeInt32ToFloat64_B) { + RawMachineAssemblerTester<int32_t> m(kMachineWord32); + double output = 0; + + Node* convert = m.ChangeInt32ToFloat64(m.Parameter(0)); + m.Store(kMachineFloat64, m.PointerConstant(&output), m.Int32Constant(0), + convert); + m.Return(m.Parameter(0)); + + FOR_INT32_INPUTS(i) { + int32_t expect = *i; + CHECK_EQ(expect, m.Call(expect)); + CHECK_EQ(static_cast<double>(expect), output); + } +} + + +TEST(RunChangeUint32ToFloat64_B) { + RawMachineAssemblerTester<int32_t> m(kMachineWord32); + double output = 0; + + Node* convert = m.ChangeUint32ToFloat64(m.Parameter(0)); + m.Store(kMachineFloat64, m.PointerConstant(&output), m.Int32Constant(0), + convert); + m.Return(m.Parameter(0)); + + FOR_UINT32_INPUTS(i) { + uint32_t expect = *i; + CHECK_EQ(expect, m.Call(expect)); + CHECK_EQ(static_cast<double>(expect), output); + } +} + + +TEST(RunChangeFloat64ToInt32_A) { + RawMachineAssemblerTester<int32_t> m; + int32_t magic = 0x786234; + double input = 11.1; + int32_t result = 0; + + m.Store(kMachineWord32, m.PointerConstant(&result), m.Int32Constant(0), + 
m.ChangeFloat64ToInt32(m.Float64Constant(input))); + m.Return(m.Int32Constant(magic)); + + CHECK_EQ(magic, m.Call()); + CHECK_EQ(static_cast<int32_t>(input), result); +} + + +TEST(RunChangeFloat64ToInt32_B) { + RawMachineAssemblerTester<int32_t> m; + double input = 0; + int32_t output = 0; + + Node* load = + m.Load(kMachineFloat64, m.PointerConstant(&input), m.Int32Constant(0)); + Node* convert = m.ChangeFloat64ToInt32(load); + m.Store(kMachineWord32, m.PointerConstant(&output), m.Int32Constant(0), + convert); + m.Return(convert); + + { + FOR_INT32_INPUTS(i) { + input = *i; + int32_t expect = *i; + CHECK_EQ(expect, m.Call()); + CHECK_EQ(expect, output); + } + } + + // Check various powers of 2. + for (int32_t n = 1; n < 31; ++n) { + { + input = 1 << n; + int32_t expect = static_cast<int32_t>(input); + CHECK_EQ(expect, m.Call()); + CHECK_EQ(expect, output); + } + + { + input = 3 << n; + int32_t expect = static_cast<int32_t>(input); + CHECK_EQ(expect, m.Call()); + CHECK_EQ(expect, output); + } + } + // Note we don't check fractional inputs, because these Convert operators + // really should be Change operators. +} + + +TEST(RunChangeFloat64ToUint32_B) { + RawMachineAssemblerTester<int32_t> m; + double input = 0; + int32_t output = 0; + + Node* load = + m.Load(kMachineFloat64, m.PointerConstant(&input), m.Int32Constant(0)); + Node* convert = m.ChangeFloat64ToUint32(load); + m.Store(kMachineWord32, m.PointerConstant(&output), m.Int32Constant(0), + convert); + m.Return(convert); + + { + FOR_UINT32_INPUTS(i) { + input = *i; + // TODO(titzer): add a CheckEqualsHelper overload for uint32_t. + int32_t expect = static_cast<int32_t>(*i); + CHECK_EQ(expect, m.Call()); + CHECK_EQ(expect, output); + } + } + + // Check various powers of 2. + for (int32_t n = 1; n < 31; ++n) { + { + input = 1u << n; + int32_t expect = static_cast<int32_t>(static_cast<uint32_t>(input)); + CHECK_EQ(expect, m.Call()); + CHECK_EQ(expect, output); + } + + { + input = 3u << n; + int32_t expect = static_cast<int32_t>(static_cast<uint32_t>(input)); + CHECK_EQ(expect, m.Call()); + CHECK_EQ(expect, output); + } + } + // Note we don't check fractional inputs, because these Convert operators + // really should be Change operators. 
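+  // The 3u << n cases reach values above 2^31 - 1, covering the range where +  // the unsigned conversion differs from the signed conversion tested above.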
+} + + +TEST(RunChangeFloat64ToInt32_spilled) { + RawMachineAssemblerTester<int32_t> m; + const int kNumInputs = 32; + int32_t magic = 0x786234; + double input[kNumInputs]; + int32_t result[kNumInputs]; + Node* input_node[kNumInputs]; + + for (int i = 0; i < kNumInputs; i++) { + input_node[i] = m.Load(kMachineFloat64, m.PointerConstant(&input), + m.Int32Constant(i * 8)); + } + + for (int i = 0; i < kNumInputs; i++) { + m.Store(kMachineWord32, m.PointerConstant(&result), m.Int32Constant(i * 4), + m.ChangeFloat64ToInt32(input_node[i])); + } + + m.Return(m.Int32Constant(magic)); + + for (int i = 0; i < kNumInputs; i++) { + input[i] = 100.9 + i; + } + + CHECK_EQ(magic, m.Call()); + + for (int i = 0; i < kNumInputs; i++) { + CHECK_EQ(result[i], 100 + i); + } +} + + +TEST(RunDeadChangeFloat64ToInt32) { + RawMachineAssemblerTester<int32_t> m; + const int magic = 0x88abcda4; + m.ChangeFloat64ToInt32(m.Float64Constant(999.78)); + m.Return(m.Int32Constant(magic)); + CHECK_EQ(magic, m.Call()); +} + + +TEST(RunDeadChangeInt32ToFloat64) { + RawMachineAssemblerTester<int32_t> m; + const int magic = 0x8834abcd; + m.ChangeInt32ToFloat64(m.Int32Constant(magic - 6888)); + m.Return(m.Int32Constant(magic)); + CHECK_EQ(magic, m.Call()); +} + + +TEST(RunLoopPhiInduction2) { + RawMachineAssemblerTester<int32_t> m; + + int false_val = 0x10777; + + // x = false_val; while(false) { x++; } return x; + MLabel header, body, end; + Node* false_node = m.Int32Constant(false_val); + m.Goto(&header); + m.Bind(&header); + Node* phi = m.Phi(false_node, false_node); + m.Branch(m.Int32Constant(0), &body, &end); + m.Bind(&body); + Node* add = m.Int32Add(phi, m.Int32Constant(1)); + phi->ReplaceInput(1, add); + m.Goto(&header); + m.Bind(&end); + m.Return(phi); + + CHECK_EQ(false_val, m.Call()); +} + + +TEST(RunDoubleDiamond) { + RawMachineAssemblerTester<int32_t> m; + + const int magic = 99645; + double buffer = 0.1; + double constant = 99.99; + + MLabel blocka, blockb, end; + Node* k1 = m.Float64Constant(constant); + Node* k2 = m.Float64Constant(0 - constant); + m.Branch(m.Int32Constant(0), &blocka, &blockb); + m.Bind(&blocka); + m.Goto(&end); + m.Bind(&blockb); + m.Goto(&end); + m.Bind(&end); + Node* phi = m.Phi(k2, k1); + m.Store(kMachineFloat64, m.PointerConstant(&buffer), m.Int32Constant(0), phi); + m.Return(m.Int32Constant(magic)); + + CHECK_EQ(magic, m.Call()); + CHECK_EQ(constant, buffer); +} + + +TEST(RunRefDiamond) { + RawMachineAssemblerTester<int32_t> m; + + const int magic = 99644; + Handle<String> rexpected = + CcTest::i_isolate()->factory()->InternalizeUtf8String("A"); + String* buffer; + + MLabel blocka, blockb, end; + Node* k1 = m.StringConstant("A"); + Node* k2 = m.StringConstant("B"); + m.Branch(m.Int32Constant(0), &blocka, &blockb); + m.Bind(&blocka); + m.Goto(&end); + m.Bind(&blockb); + m.Goto(&end); + m.Bind(&end); + Node* phi = m.Phi(k2, k1); + m.Store(kMachineTagged, m.PointerConstant(&buffer), m.Int32Constant(0), phi); + m.Return(m.Int32Constant(magic)); + + CHECK_EQ(magic, m.Call()); + CHECK(rexpected->SameValue(buffer)); +} + + +TEST(RunDoubleRefDiamond) { + RawMachineAssemblerTester<int32_t> m; + + const int magic = 99648; + double dbuffer = 0.1; + double dconstant = 99.99; + Handle<String> rexpected = + CcTest::i_isolate()->factory()->InternalizeUtf8String("AX"); + String* rbuffer; + + MLabel blocka, blockb, end; + Node* d1 = m.Float64Constant(dconstant); + Node* d2 = m.Float64Constant(0 - dconstant); + Node* r1 = m.StringConstant("AX"); + Node* r2 = m.StringConstant("BX"); + 
m.Branch(m.Int32Constant(0), &blocka, &blockb); + m.Bind(&blocka); + m.Goto(&end); + m.Bind(&blockb); + m.Goto(&end); + m.Bind(&end); + Node* dphi = m.Phi(d2, d1); + Node* rphi = m.Phi(r2, r1); + m.Store(kMachineFloat64, m.PointerConstant(&dbuffer), m.Int32Constant(0), + dphi); + m.Store(kMachineTagged, m.PointerConstant(&rbuffer), m.Int32Constant(0), + rphi); + m.Return(m.Int32Constant(magic)); + + CHECK_EQ(magic, m.Call()); + CHECK_EQ(dconstant, dbuffer); + CHECK(rexpected->SameValue(rbuffer)); +} + + +TEST(RunDoubleRefDoubleDiamond) { + RawMachineAssemblerTester<int32_t> m; + + const int magic = 99649; + double dbuffer = 0.1; + double dconstant = 99.997; + Handle<String> rexpected = + CcTest::i_isolate()->factory()->InternalizeUtf8String("AD"); + String* rbuffer; + + MLabel blocka, blockb, mid, blockd, blocke, end; + Node* d1 = m.Float64Constant(dconstant); + Node* d2 = m.Float64Constant(0 - dconstant); + Node* r1 = m.StringConstant("AD"); + Node* r2 = m.StringConstant("BD"); + m.Branch(m.Int32Constant(0), &blocka, &blockb); + m.Bind(&blocka); + m.Goto(&mid); + m.Bind(&blockb); + m.Goto(&mid); + m.Bind(&mid); + Node* dphi1 = m.Phi(d2, d1); + Node* rphi1 = m.Phi(r2, r1); + m.Branch(m.Int32Constant(0), &blockd, &blocke); + + m.Bind(&blockd); + m.Goto(&end); + m.Bind(&blocke); + m.Goto(&end); + m.Bind(&end); + Node* dphi2 = m.Phi(d1, dphi1); + Node* rphi2 = m.Phi(r1, rphi1); + + m.Store(kMachineFloat64, m.PointerConstant(&dbuffer), m.Int32Constant(0), + dphi2); + m.Store(kMachineTagged, m.PointerConstant(&rbuffer), m.Int32Constant(0), + rphi2); + m.Return(m.Int32Constant(magic)); + + CHECK_EQ(magic, m.Call()); + CHECK_EQ(dconstant, dbuffer); + CHECK(rexpected->SameValue(rbuffer)); +} + + +TEST(RunDoubleLoopPhi) { + RawMachineAssemblerTester<int32_t> m; + MLabel header, body, end; + + int magic = 99773; + double buffer = 0.99; + double dconstant = 777.1; + + Node* zero = m.Int32Constant(0); + Node* dk = m.Float64Constant(dconstant); + + m.Goto(&header); + m.Bind(&header); + Node* phi = m.Phi(dk, dk); + phi->ReplaceInput(1, phi); + m.Branch(zero, &body, &end); + m.Bind(&body); + m.Goto(&header); + m.Bind(&end); + m.Store(kMachineFloat64, m.PointerConstant(&buffer), m.Int32Constant(0), phi); + m.Return(m.Int32Constant(magic)); + + CHECK_EQ(magic, m.Call()); +} + + +TEST(RunCountToTenAccRaw) { + RawMachineAssemblerTester<int32_t> m; + + Node* zero = m.Int32Constant(0); + Node* ten = m.Int32Constant(10); + Node* one = m.Int32Constant(1); + + MLabel header, body, body_cont, end; + + m.Goto(&header); + + m.Bind(&header); + Node* i = m.Phi(zero, zero); + Node* j = m.Phi(zero, zero); + m.Goto(&body); + + m.Bind(&body); + Node* next_i = m.Int32Add(i, one); + Node* next_j = m.Int32Add(j, one); + m.Branch(m.Word32Equal(next_i, ten), &end, &body_cont); + + m.Bind(&body_cont); + i->ReplaceInput(1, next_i); + j->ReplaceInput(1, next_j); + m.Goto(&header); + + m.Bind(&end); + m.Return(ten); + + CHECK_EQ(10, m.Call()); +} + + +TEST(RunCountToTenAccRaw2) { + RawMachineAssemblerTester<int32_t> m; + + Node* zero = m.Int32Constant(0); + Node* ten = m.Int32Constant(10); + Node* one = m.Int32Constant(1); + + MLabel header, body, body_cont, end; + + m.Goto(&header); + + m.Bind(&header); + Node* i = m.Phi(zero, zero); + Node* j = m.Phi(zero, zero); + Node* k = m.Phi(zero, zero); + m.Goto(&body); + + m.Bind(&body); + Node* next_i = m.Int32Add(i, one); + Node* next_j = m.Int32Add(j, one); + Node* next_k = m.Int32Add(j, one); + m.Branch(m.Word32Equal(next_i, ten), &end, &body_cont); + + m.Bind(&body_cont); + 
i->ReplaceInput(1, next_i); + j->ReplaceInput(1, next_j); + k->ReplaceInput(1, next_k); + m.Goto(&header); + + m.Bind(&end); + m.Return(ten); + + CHECK_EQ(10, m.Call()); +} + + +TEST(RunAddTree) { + RawMachineAssemblerTester<int32_t> m; + int32_t inputs[] = {11, 12, 13, 14, 15, 16, 17, 18}; + + Node* base = m.PointerConstant(inputs); + Node* n0 = m.Load(kMachineWord32, base, m.Int32Constant(0 * sizeof(int32_t))); + Node* n1 = m.Load(kMachineWord32, base, m.Int32Constant(1 * sizeof(int32_t))); + Node* n2 = m.Load(kMachineWord32, base, m.Int32Constant(2 * sizeof(int32_t))); + Node* n3 = m.Load(kMachineWord32, base, m.Int32Constant(3 * sizeof(int32_t))); + Node* n4 = m.Load(kMachineWord32, base, m.Int32Constant(4 * sizeof(int32_t))); + Node* n5 = m.Load(kMachineWord32, base, m.Int32Constant(5 * sizeof(int32_t))); + Node* n6 = m.Load(kMachineWord32, base, m.Int32Constant(6 * sizeof(int32_t))); + Node* n7 = m.Load(kMachineWord32, base, m.Int32Constant(7 * sizeof(int32_t))); + + Node* i1 = m.Int32Add(n0, n1); + Node* i2 = m.Int32Add(n2, n3); + Node* i3 = m.Int32Add(n4, n5); + Node* i4 = m.Int32Add(n6, n7); + + Node* i5 = m.Int32Add(i1, i2); + Node* i6 = m.Int32Add(i3, i4); + + Node* i7 = m.Int32Add(i5, i6); + + m.Return(i7); + + CHECK_EQ(116, m.Call()); +} + + +#if MACHINE_ASSEMBLER_SUPPORTS_CALL_C + +static int Seven() { return 7; } +static int UnaryMinus(int a) { return -a; } +static int APlusTwoB(int a, int b) { return a + 2 * b; } + + +TEST(RunCallSeven) { + for (int i = 0; i < 2; i++) { + bool call_direct = i == 0; + void* function_address = + reinterpret_cast<void*>(reinterpret_cast<intptr_t>(&Seven)); + + RawMachineAssemblerTester<int32_t> m; + Node** args = NULL; + MachineType* arg_types = NULL; + Node* function = + call_direct ? m.PointerConstant(function_address) + : m.LoadFromPointer(&function_address, + MachineOperatorBuilder::pointer_rep()); + m.Return(m.CallC(function, kMachineWord32, arg_types, args, 0)); + + CHECK_EQ(7, m.Call()); + } +} + + +TEST(RunCallUnaryMinus) { + for (int i = 0; i < 2; i++) { + bool call_direct = i == 0; + void* function_address = + reinterpret_cast<void*>(reinterpret_cast<intptr_t>(&UnaryMinus)); + + RawMachineAssemblerTester<int32_t> m(kMachineWord32); + Node* args[] = {m.Parameter(0)}; + MachineType arg_types[] = {kMachineWord32}; + Node* function = + call_direct ? m.PointerConstant(function_address) + : m.LoadFromPointer(&function_address, + MachineOperatorBuilder::pointer_rep()); + m.Return(m.CallC(function, kMachineWord32, arg_types, args, 1)); + + FOR_INT32_INPUTS(i) { + int a = *i; + CHECK_EQ(-a, m.Call(a)); + } + } +} + + +TEST(RunCallAPlusTwoB) { + for (int i = 0; i < 2; i++) { + bool call_direct = i == 0; + void* function_address = + reinterpret_cast<void*>(reinterpret_cast<intptr_t>(&APlusTwoB)); + + RawMachineAssemblerTester<int32_t> m(kMachineWord32, kMachineWord32); + Node* args[] = {m.Parameter(0), m.Parameter(1)}; + MachineType arg_types[] = {kMachineWord32, kMachineWord32}; + Node* function = + call_direct ? 
m.PointerConstant(function_address) + : m.LoadFromPointer(&function_address, + MachineOperatorBuilder::pointer_rep()); + m.Return(m.CallC(function, kMachineWord32, arg_types, args, 2)); + + FOR_INT32_INPUTS(i) { + FOR_INT32_INPUTS(j) { + int a = *i; + int b = *j; + int result = m.Call(a, b); + CHECK_EQ(a + 2 * b, result); + } + } + } +} + +#endif // MACHINE_ASSEMBLER_SUPPORTS_CALL_C + + +static const int kFloat64CompareHelperTestCases = 15; +static const int kFloat64CompareHelperNodeType = 4; + +static int Float64CompareHelper(RawMachineAssemblerTester<int32_t>* m, + int test_case, int node_type, double x, + double y) { + static double buffer[2]; + buffer[0] = x; + buffer[1] = y; + CHECK(0 <= test_case && test_case < kFloat64CompareHelperTestCases); + CHECK(0 <= node_type && node_type < kFloat64CompareHelperNodeType); + CHECK(x < y); + bool load_a = node_type / 2 == 1; + bool load_b = node_type % 2 == 1; + Node* a = load_a ? m->Load(kMachineFloat64, m->PointerConstant(&buffer[0])) + : m->Float64Constant(x); + Node* b = load_b ? m->Load(kMachineFloat64, m->PointerConstant(&buffer[1])) + : m->Float64Constant(y); + Node* cmp = NULL; + bool expected = false; + switch (test_case) { + // Equal tests. + case 0: + cmp = m->Float64Equal(a, b); + expected = false; + break; + case 1: + cmp = m->Float64Equal(a, a); + expected = true; + break; + // LessThan tests. + case 2: + cmp = m->Float64LessThan(a, b); + expected = true; + break; + case 3: + cmp = m->Float64LessThan(b, a); + expected = false; + break; + case 4: + cmp = m->Float64LessThan(a, a); + expected = false; + break; + // LessThanOrEqual tests. + case 5: + cmp = m->Float64LessThanOrEqual(a, b); + expected = true; + break; + case 6: + cmp = m->Float64LessThanOrEqual(b, a); + expected = false; + break; + case 7: + cmp = m->Float64LessThanOrEqual(a, a); + expected = true; + break; + // NotEqual tests. + case 8: + cmp = m->Float64NotEqual(a, b); + expected = true; + break; + case 9: + cmp = m->Float64NotEqual(b, a); + expected = true; + break; + case 10: + cmp = m->Float64NotEqual(a, a); + expected = false; + break; + // GreaterThan tests. + case 11: + cmp = m->Float64GreaterThan(a, a); + expected = false; + break; + case 12: + cmp = m->Float64GreaterThan(a, b); + expected = false; + break; + // GreaterThanOrEqual tests. + case 13: + cmp = m->Float64GreaterThanOrEqual(a, a); + expected = true; + break; + case 14: + cmp = m->Float64GreaterThanOrEqual(b, a); + expected = true; + break; + default: + UNREACHABLE(); + } + m->Return(cmp); + return expected; +} + + +TEST(RunFloat64Compare) { + double inf = V8_INFINITY; + // All pairs (a1, a2) are of the form a1 < a2. 
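+  // Float64CompareHelper consumes these two at a time as (x, y) and CHECKs +  // that x < y, so every expected result is derived from a known strict order.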
+ double inputs[] = {0.0, 1.0, -1.0, 0.22, -1.22, 0.22, + -inf, 0.22, 0.22, inf, -inf, inf}; + + for (int test = 0; test < kFloat64CompareHelperTestCases; test++) { + for (int node_type = 0; node_type < kFloat64CompareHelperNodeType; + node_type++) { + for (size_t input = 0; input < ARRAY_SIZE(inputs); input += 2) { + RawMachineAssemblerTester<int32_t> m; + int expected = Float64CompareHelper(&m, test, node_type, inputs[input], + inputs[input + 1]); + CHECK_EQ(expected, m.Call()); + } + } + } +} + + +TEST(RunFloat64UnorderedCompare) { + RawMachineAssemblerTester<int32_t> m; + + Operator* operators[] = {m.machine()->Float64Equal(), + m.machine()->Float64LessThan(), + m.machine()->Float64LessThanOrEqual()}; + + double nan = v8::base::OS::nan_value(); + + FOR_FLOAT64_INPUTS(i) { + for (size_t o = 0; o < ARRAY_SIZE(operators); ++o) { + for (int j = 0; j < 2; j++) { + RawMachineAssemblerTester<int32_t> m; + Node* a = m.Float64Constant(*i); + Node* b = m.Float64Constant(nan); + if (j == 1) std::swap(a, b); + m.Return(m.NewNode(operators[o], a, b)); + CHECK_EQ(0, m.Call()); + } + } + } +} + + +TEST(RunFloat64Equal) { + double input_a = 0.0; + double input_b = 0.0; + + RawMachineAssemblerTester<int32_t> m; + Node* a = m.LoadFromPointer(&input_a, kMachineFloat64); + Node* b = m.LoadFromPointer(&input_b, kMachineFloat64); + m.Return(m.Float64Equal(a, b)); + + CompareWrapper cmp(IrOpcode::kFloat64Equal); + FOR_FLOAT64_INPUTS(pl) { + FOR_FLOAT64_INPUTS(pr) { + input_a = *pl; + input_b = *pr; + int32_t expected = cmp.Float64Compare(input_a, input_b) ? 1 : 0; + CHECK_EQ(expected, m.Call()); + } + } +} + + +TEST(RunFloat64LessThan) { + double input_a = 0.0; + double input_b = 0.0; + + RawMachineAssemblerTester<int32_t> m; + Node* a = m.LoadFromPointer(&input_a, kMachineFloat64); + Node* b = m.LoadFromPointer(&input_b, kMachineFloat64); + m.Return(m.Float64LessThan(a, b)); + + CompareWrapper cmp(IrOpcode::kFloat64LessThan); + FOR_FLOAT64_INPUTS(pl) { + FOR_FLOAT64_INPUTS(pr) { + input_a = *pl; + input_b = *pr; + int32_t expected = cmp.Float64Compare(input_a, input_b) ? 1 : 0; + CHECK_EQ(expected, m.Call()); + } + } +} + + +template <typename IntType, MachineType kRepresentation> +static void LoadStoreTruncation() { + IntType input; + + RawMachineAssemblerTester<int32_t> m; + Node* a = m.LoadFromPointer(&input, kRepresentation); + Node* ap1 = m.Int32Add(a, m.Int32Constant(1)); + m.StoreToPointer(&input, kRepresentation, ap1); + m.Return(ap1); + + const IntType max = std::numeric_limits<IntType>::max(); + const IntType min = std::numeric_limits<IntType>::min(); + + // Test upper bound. + input = max; + CHECK_EQ(max + 1, m.Call()); + CHECK_EQ(min, input); + + // Test lower bound. + input = min; + CHECK_EQ(max + 2, m.Call()); + CHECK_EQ(min + 1, input); + + // Test all one byte values that are not one byte bounds. + for (int i = -127; i < 127; i++) { + input = i; + int expected = i >= 0 ? 
i + 1 : max + (i - min) + 2; + CHECK_EQ(expected, m.Call()); + CHECK_EQ(i + 1, input); + } +} + + +TEST(RunLoadStoreTruncation) { + LoadStoreTruncation<int8_t, kMachineWord8>(); + LoadStoreTruncation<int16_t, kMachineWord16>(); +} + + +static void IntPtrCompare(intptr_t left, intptr_t right) { + for (int test = 0; test < 7; test++) { + RawMachineAssemblerTester<bool> m(MachineOperatorBuilder::pointer_rep(), + MachineOperatorBuilder::pointer_rep()); + Node* p0 = m.Parameter(0); + Node* p1 = m.Parameter(1); + Node* res = NULL; + bool expected = false; + switch (test) { + case 0: + res = m.IntPtrLessThan(p0, p1); + expected = true; + break; + case 1: + res = m.IntPtrLessThanOrEqual(p0, p1); + expected = true; + break; + case 2: + res = m.IntPtrEqual(p0, p1); + expected = false; + break; + case 3: + res = m.IntPtrGreaterThanOrEqual(p0, p1); + expected = false; + break; + case 4: + res = m.IntPtrGreaterThan(p0, p1); + expected = false; + break; + case 5: + res = m.IntPtrEqual(p0, p0); + expected = true; + break; + case 6: + res = m.IntPtrNotEqual(p0, p1); + expected = true; + break; + default: + UNREACHABLE(); + break; + } + m.Return(res); + CHECK_EQ(expected, m.Call(reinterpret_cast<int32_t*>(left), + reinterpret_cast<int32_t*>(right))); + } +} + + +TEST(RunIntPtrCompare) { + intptr_t min = std::numeric_limits<intptr_t>::min(); + intptr_t max = std::numeric_limits<intptr_t>::max(); + // An ascending chain of intptr_t + intptr_t inputs[] = {min, min / 2, -1, 0, 1, max / 2, max}; + for (size_t i = 0; i < ARRAY_SIZE(inputs) - 1; i++) { + IntPtrCompare(inputs[i], inputs[i + 1]); + } +} + + +TEST(RunTestIntPtrArithmetic) { + static const int kInputSize = 10; + int32_t inputs[kInputSize]; + int32_t outputs[kInputSize]; + for (int i = 0; i < kInputSize; i++) { + inputs[i] = i; + outputs[i] = -1; + } + RawMachineAssemblerTester<int32_t*> m; + Node* input = m.PointerConstant(&inputs[0]); + Node* output = m.PointerConstant(&outputs[kInputSize - 1]); + Node* elem_size = m.ConvertInt32ToIntPtr(m.Int32Constant(sizeof(inputs[0]))); + for (int i = 0; i < kInputSize; i++) { + m.Store(kMachineWord32, output, m.Load(kMachineWord32, input)); + input = m.IntPtrAdd(input, elem_size); + output = m.IntPtrSub(output, elem_size); + } + m.Return(input); + CHECK_EQ(&inputs[kInputSize], m.Call()); + for (int i = 0; i < kInputSize; i++) { + CHECK_EQ(i, inputs[i]); + CHECK_EQ(kInputSize - i - 1, outputs[i]); + } +} + + +static inline uint32_t rotr32(uint32_t i, uint32_t j) { + return (i >> j) | (i << (32 - j)); +} + + +TEST(RunTestInt32RotateRightP) { + { + RawMachineAssemblerTester<int32_t> m; + Int32BinopTester bt(&m); + bt.AddReturn(m.Word32Or( + m.Word32Shr(bt.param0, bt.param1), + m.Word32Shl(bt.param0, m.Int32Sub(m.Int32Constant(32), bt.param1)))); + bt.Run(ValueHelper::uint32_vector(), ValueHelper::ror_vector(), rotr32); + } + { + RawMachineAssemblerTester<int32_t> m; + Int32BinopTester bt(&m); + bt.AddReturn(m.Word32Or( + m.Word32Shl(bt.param0, m.Int32Sub(m.Int32Constant(32), bt.param1)), + m.Word32Shr(bt.param0, bt.param1))); + bt.Run(ValueHelper::uint32_vector(), ValueHelper::ror_vector(), rotr32); + } +} + + +TEST(RunTestInt32RotateRightImm) { + FOR_INPUTS(uint32_t, ror, i) { + { + RawMachineAssemblerTester<int32_t> m(kMachineWord32); + Node* value = m.Parameter(0); + m.Return(m.Word32Or(m.Word32Shr(value, m.Int32Constant(*i)), + m.Word32Shl(value, m.Int32Constant(32 - *i)))); + m.Run(ValueHelper::uint32_vector(), + std::bind2nd(std::ptr_fun(&rotr32), *i)); + } + { + RawMachineAssemblerTester<int32_t> 
m(kMachineWord32); + Node* value = m.Parameter(0); + m.Return(m.Word32Or(m.Word32Shl(value, m.Int32Constant(32 - *i)), + m.Word32Shr(value, m.Int32Constant(*i)))); + m.Run(ValueHelper::uint32_vector(), + std::bind2nd(std::ptr_fun(&rotr32), *i)); + } + } +} + + +TEST(RunSpillLotsOfThings) { + static const int kInputSize = 1000; + RawMachineAssemblerTester<void> m; + Node* accs[kInputSize]; + int32_t outputs[kInputSize]; + Node* one = m.Int32Constant(1); + Node* acc = one; + for (int i = 0; i < kInputSize; i++) { + acc = m.Int32Add(acc, one); + accs[i] = acc; + } + for (int i = 0; i < kInputSize; i++) { + m.StoreToPointer(&outputs[i], kMachineWord32, accs[i]); + } + m.Return(one); + m.Call(); + for (int i = 0; i < kInputSize; i++) { + CHECK_EQ(outputs[i], i + 2); + } +} + + +TEST(RunSpillConstantsAndParameters) { + static const int kInputSize = 1000; + static const int32_t kBase = 987; + RawMachineAssemblerTester<int32_t> m(kMachineWord32, kMachineWord32); + int32_t outputs[kInputSize]; + Node* csts[kInputSize]; + Node* accs[kInputSize]; + Node* acc = m.Int32Constant(0); + for (int i = 0; i < kInputSize; i++) { + csts[i] = m.Int32Constant(static_cast<int32_t>(kBase + i)); + } + for (int i = 0; i < kInputSize; i++) { + acc = m.Int32Add(acc, csts[i]); + accs[i] = acc; + } + for (int i = 0; i < kInputSize; i++) { + m.StoreToPointer(&outputs[i], kMachineWord32, accs[i]); + } + m.Return(m.Int32Add(acc, m.Int32Add(m.Parameter(0), m.Parameter(1)))); + FOR_INT32_INPUTS(i) { + FOR_INT32_INPUTS(j) { + int32_t expected = *i + *j; + for (int k = 0; k < kInputSize; k++) { + expected += kBase + k; + } + CHECK_EQ(expected, m.Call(*i, *j)); + expected = 0; + for (int k = 0; k < kInputSize; k++) { + expected += kBase + k; + CHECK_EQ(expected, outputs[k]); + } + } + } +} + + +TEST(RunNewSpaceConstantsInPhi) { + RawMachineAssemblerTester<Object*> m(kMachineWord32); + + Isolate* isolate = CcTest::i_isolate(); + Handle<HeapNumber> true_val = isolate->factory()->NewHeapNumber(11.2); + Handle<HeapNumber> false_val = isolate->factory()->NewHeapNumber(11.3); + Node* true_node = m.HeapConstant(true_val); + Node* false_node = m.HeapConstant(false_val); + + MLabel blocka, blockb, end; + m.Branch(m.Parameter(0), &blocka, &blockb); + m.Bind(&blocka); + m.Goto(&end); + m.Bind(&blockb); + m.Goto(&end); + + m.Bind(&end); + Node* phi = m.Phi(true_node, false_node); + m.Return(phi); + + CHECK_EQ(*false_val, m.Call(0)); + CHECK_EQ(*true_val, m.Call(1)); +} + + +#if MACHINE_ASSEMBLER_SUPPORTS_CALL_C + +TEST(RunSpillLotsOfThingsWithCall) { + static const int kInputSize = 1000; + RawMachineAssemblerTester<void> m; + Node* accs[kInputSize]; + int32_t outputs[kInputSize]; + Node* one = m.Int32Constant(1); + Node* acc = one; + for (int i = 0; i < kInputSize; i++) { + acc = m.Int32Add(acc, one); + accs[i] = acc; + } + // If the spill slot computation is wrong, it might load from the c frame + { + void* func = reinterpret_cast<void*>(reinterpret_cast<intptr_t>(&Seven)); + Node** args = NULL; + MachineType* arg_types = NULL; + m.CallC(m.PointerConstant(func), kMachineWord32, arg_types, args, 0); + } + for (int i = 0; i < kInputSize; i++) { + m.StoreToPointer(&outputs[i], kMachineWord32, accs[i]); + } + m.Return(one); + m.Call(); + for (int i = 0; i < kInputSize; i++) { + CHECK_EQ(outputs[i], i + 2); + } +} + +#endif // MACHINE_ASSEMBLER_SUPPORTS_CALL_C + + +static bool sadd_overflow(int32_t x, int32_t y, int32_t* val) { + int32_t v = + static_cast<int32_t>(static_cast<uint32_t>(x) + static_cast<uint32_t>(y)); + *val = v; + return (((v 
^ x) & (v ^ y)) >> 31) & 1; +} + + +static bool ssub_overflow(int32_t x, int32_t y, int32_t* val) { + int32_t v = + static_cast<int32_t>(static_cast<uint32_t>(x) - static_cast<uint32_t>(y)); + *val = v; + return (((v ^ x) & (v ^ ~y)) >> 31) & 1; +} + + +TEST(RunInt32AddWithOverflowP) { + int32_t actual_val = -1; + RawMachineAssemblerTester<int32_t> m; + Int32BinopTester bt(&m); + Node* add = m.Int32AddWithOverflow(bt.param0, bt.param1); + Node* val = m.Projection(0, add); + Node* ovf = m.Projection(1, add); + m.StoreToPointer(&actual_val, kMachineWord32, val); + bt.AddReturn(ovf); + FOR_INT32_INPUTS(i) { + FOR_INT32_INPUTS(j) { + int32_t expected_val; + int expected_ovf = sadd_overflow(*i, *j, &expected_val); + CHECK_EQ(expected_ovf, bt.call(*i, *j)); + CHECK_EQ(expected_val, actual_val); + } + } +} + + +TEST(RunInt32AddWithOverflowImm) { + int32_t actual_val = -1, expected_val = 0; + FOR_INT32_INPUTS(i) { + { + RawMachineAssemblerTester<int32_t> m(kMachineWord32); + Node* add = m.Int32AddWithOverflow(m.Int32Constant(*i), m.Parameter(0)); + Node* val = m.Projection(0, add); + Node* ovf = m.Projection(1, add); + m.StoreToPointer(&actual_val, kMachineWord32, val); + m.Return(ovf); + FOR_INT32_INPUTS(j) { + int expected_ovf = sadd_overflow(*i, *j, &expected_val); + CHECK_EQ(expected_ovf, m.Call(*j)); + CHECK_EQ(expected_val, actual_val); + } + } + { + RawMachineAssemblerTester<int32_t> m(kMachineWord32); + Node* add = m.Int32AddWithOverflow(m.Parameter(0), m.Int32Constant(*i)); + Node* val = m.Projection(0, add); + Node* ovf = m.Projection(1, add); + m.StoreToPointer(&actual_val, kMachineWord32, val); + m.Return(ovf); + FOR_INT32_INPUTS(j) { + int expected_ovf = sadd_overflow(*i, *j, &expected_val); + CHECK_EQ(expected_ovf, m.Call(*j)); + CHECK_EQ(expected_val, actual_val); + } + } + FOR_INT32_INPUTS(j) { + RawMachineAssemblerTester<int32_t> m; + Node* add = + m.Int32AddWithOverflow(m.Int32Constant(*i), m.Int32Constant(*j)); + Node* val = m.Projection(0, add); + Node* ovf = m.Projection(1, add); + m.StoreToPointer(&actual_val, kMachineWord32, val); + m.Return(ovf); + int expected_ovf = sadd_overflow(*i, *j, &expected_val); + CHECK_EQ(expected_ovf, m.Call()); + CHECK_EQ(expected_val, actual_val); + } + } +} + + +TEST(RunInt32AddWithOverflowInBranchP) { + int constant = 911777; + MLabel blocka, blockb; + RawMachineAssemblerTester<int32_t> m; + Int32BinopTester bt(&m); + Node* add = m.Int32AddWithOverflow(bt.param0, bt.param1); + Node* ovf = m.Projection(1, add); + m.Branch(ovf, &blocka, &blockb); + m.Bind(&blocka); + bt.AddReturn(m.Int32Constant(constant)); + m.Bind(&blockb); + Node* val = m.Projection(0, add); + bt.AddReturn(val); + FOR_UINT32_INPUTS(i) { + FOR_UINT32_INPUTS(j) { + int32_t expected; + if (sadd_overflow(*i, *j, &expected)) expected = constant; + CHECK_EQ(expected, bt.call(*i, *j)); + } + } +} + + +TEST(RunInt32SubWithOverflowP) { + int32_t actual_val = -1; + RawMachineAssemblerTester<int32_t> m; + Int32BinopTester bt(&m); + Node* add = m.Int32SubWithOverflow(bt.param0, bt.param1); + Node* val = m.Projection(0, add); + Node* ovf = m.Projection(1, add); + m.StoreToPointer(&actual_val, kMachineWord32, val); + bt.AddReturn(ovf); + FOR_INT32_INPUTS(i) { + FOR_INT32_INPUTS(j) { + int32_t expected_val; + int expected_ovf = ssub_overflow(*i, *j, &expected_val); + CHECK_EQ(expected_ovf, bt.call(*i, *j)); + CHECK_EQ(expected_val, actual_val); + } + } +} + + +TEST(RunInt32SubWithOverflowImm) { + int32_t actual_val = -1, expected_val = 0; + FOR_INT32_INPUTS(i) { + { + 
RawMachineAssemblerTester<int32_t> m(kMachineWord32); + Node* add = m.Int32SubWithOverflow(m.Int32Constant(*i), m.Parameter(0)); + Node* val = m.Projection(0, add); + Node* ovf = m.Projection(1, add); + m.StoreToPointer(&actual_val, kMachineWord32, val); + m.Return(ovf); + FOR_INT32_INPUTS(j) { + int expected_ovf = ssub_overflow(*i, *j, &expected_val); + CHECK_EQ(expected_ovf, m.Call(*j)); + CHECK_EQ(expected_val, actual_val); + } + } + { + RawMachineAssemblerTester<int32_t> m(kMachineWord32); + Node* add = m.Int32SubWithOverflow(m.Parameter(0), m.Int32Constant(*i)); + Node* val = m.Projection(0, add); + Node* ovf = m.Projection(1, add); + m.StoreToPointer(&actual_val, kMachineWord32, val); + m.Return(ovf); + FOR_INT32_INPUTS(j) { + int expected_ovf = ssub_overflow(*j, *i, &expected_val); + CHECK_EQ(expected_ovf, m.Call(*j)); + CHECK_EQ(expected_val, actual_val); + } + } + FOR_INT32_INPUTS(j) { + RawMachineAssemblerTester<int32_t> m; + Node* add = + m.Int32SubWithOverflow(m.Int32Constant(*i), m.Int32Constant(*j)); + Node* val = m.Projection(0, add); + Node* ovf = m.Projection(1, add); + m.StoreToPointer(&actual_val, kMachineWord32, val); + m.Return(ovf); + int expected_ovf = ssub_overflow(*i, *j, &expected_val); + CHECK_EQ(expected_ovf, m.Call()); + CHECK_EQ(expected_val, actual_val); + } + } +} + + +TEST(RunInt32SubWithOverflowInBranchP) { + int constant = 911999; + MLabel blocka, blockb; + RawMachineAssemblerTester<int32_t> m; + Int32BinopTester bt(&m); + Node* sub = m.Int32SubWithOverflow(bt.param0, bt.param1); + Node* ovf = m.Projection(1, sub); + m.Branch(ovf, &blocka, &blockb); + m.Bind(&blocka); + bt.AddReturn(m.Int32Constant(constant)); + m.Bind(&blockb); + Node* val = m.Projection(0, sub); + bt.AddReturn(val); + FOR_UINT32_INPUTS(i) { + FOR_UINT32_INPUTS(j) { + int32_t expected; + if (ssub_overflow(*i, *j, &expected)) expected = constant; + CHECK_EQ(expected, bt.call(*i, *j)); + } + } +} + +#endif // V8_TURBOFAN_TARGET diff --git a/deps/v8/test/cctest/compiler/test-run-variables.cc b/deps/v8/test/cctest/compiler/test-run-variables.cc new file mode 100644 index 00000000000..bf86e0d42c7 --- /dev/null +++ b/deps/v8/test/cctest/compiler/test-run-variables.cc @@ -0,0 +1,121 @@ +// Copyright 2014 the V8 project authors. All rights reserved. +// Use of this source code is governed by a BSD-style license that can be +// found in the LICENSE file. 
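+// Table-driven tests for variable loads, stores and initialization: each row +// of the tables below is {source fragment, expected result for a truthy +// argument, expected result for a falsy argument}, with the 'throws' sentinel +// (NULL) marking cases that are expected to raise an exception.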
+ +#include "src/v8.h" + +#include "test/cctest/compiler/function-tester.h" + +using namespace v8::internal; +using namespace v8::internal::compiler; + +static const char* throws = NULL; + +static const char* load_tests[] = { + "var x = a; r = x", "123", "0", + "var x = (r = x)", "undefined", "undefined", + "var x = (a?1:2); r = x", "1", "2", + "const x = a; r = x", "123", "0", + "const x = (r = x)", "undefined", "undefined", + "const x = (a?3:4); r = x", "3", "4", + "'use strict'; const x = a; r = x", "123", "0", + "'use strict'; const x = (r = x)", throws, throws, + "'use strict'; const x = (a?5:6); r = x", "5", "6", + "'use strict'; let x = a; r = x", "123", "0", + "'use strict'; let x = (r = x)", throws, throws, + "'use strict'; let x = (a?7:8); r = x", "7", "8", + NULL}; + +static const char* store_tests[] = { + "var x = 1; x = a; r = x", "123", "0", + "var x = (a?(x=4,2):3); r = x", "2", "3", + "var x = (a?4:5); x = a; r = x", "123", "0", + "const x = 1; x = a; r = x", "1", "1", + "const x = (a?(x=4,2):3); r = x", "2", "3", + "const x = (a?4:5); x = a; r = x", "4", "5", + // Assignments to 'const' are SyntaxErrors, handled by the parser, + // hence we cannot test them here because they are early errors. + "'use strict'; let x = 1; x = a; r = x", "123", "0", + "'use strict'; let x = (a?(x=4,2):3); r = x", throws, "3", + "'use strict'; let x = (a?4:5); x = a; r = x", "123", "0", + NULL}; + +static const char* bind_tests[] = { + "if (a) { const x = a }; r = x;", "123", "undefined", + "for (; a > 0; a--) { const x = a }; r = x", "123", "undefined", + // Re-initialization of variables other than legacy 'const' is not + // possible due to sane variable scoping, hence no tests here. + NULL}; + + +static void RunVariableTests(const char* source, const char* tests[]) { + FLAG_harmony_scoping = true; + EmbeddedVector<char, 512> buffer; + + for (int i = 0; tests[i] != NULL; i += 3) { + SNPrintF(buffer, source, tests[i]); + PrintF("#%d: %s\n", i / 3, buffer.start()); + FunctionTester T(buffer.start()); + + // Check function with non-falsey parameter. + if (tests[i + 1] != throws) { + Handle<Object> r = v8::Utils::OpenHandle(*CompileRun(tests[i + 1])); + T.CheckCall(r, T.Val(123), T.Val("result")); + } else { + T.CheckThrows(T.Val(123), T.Val("result")); + } + + // Check function with falsey parameter. 
+ if (tests[i + 2] != throws) { + Handle<Object> r = v8::Utils::OpenHandle(*CompileRun(tests[i + 2])); + T.CheckCall(r, T.Val(0.0), T.Val("result")); + } else { + T.CheckThrows(T.Val(0.0), T.Val("result")); + } + } +} + + +TEST(StackLoadVariables) { + const char* source = "(function(a,r) { %s; return r; })"; + RunVariableTests(source, load_tests); +} + + +TEST(ContextLoadVariables) { + const char* source = "(function(a,r) { %s; function f() {x} return r; })"; + RunVariableTests(source, load_tests); +} + + +TEST(StackStoreVariables) { + const char* source = "(function(a,r) { %s; return r; })"; + RunVariableTests(source, store_tests); +} + + +TEST(ContextStoreVariables) { + const char* source = "(function(a,r) { %s; function f() {x} return r; })"; + RunVariableTests(source, store_tests); +} + + +TEST(StackInitializeVariables) { + const char* source = "(function(a,r) { %s; return r; })"; + RunVariableTests(source, bind_tests); +} + + +TEST(ContextInitializeVariables) { + const char* source = "(function(a,r) { %s; function f() {x} return r; })"; + RunVariableTests(source, bind_tests); +} + + +TEST(SelfReferenceVariable) { + FunctionTester T("(function self() { return self; })"); + + T.CheckCall(T.function); + CompileRun("var self = 'not a function'"); + T.CheckCall(T.function); +} diff --git a/deps/v8/test/cctest/compiler/test-schedule.cc b/deps/v8/test/cctest/compiler/test-schedule.cc new file mode 100644 index 00000000000..bfa47d872a3 --- /dev/null +++ b/deps/v8/test/cctest/compiler/test-schedule.cc @@ -0,0 +1,159 @@ +// Copyright 2013 the V8 project authors. All rights reserved. +// Use of this source code is governed by a BSD-style license that can be +// found in the LICENSE file. + +#include "src/v8.h" + +#include "src/compiler/common-operator.h" +#include "src/compiler/generic-node-inl.h" +#include "src/compiler/graph.h" +#include "src/compiler/machine-operator.h" +#include "src/compiler/node.h" +#include "src/compiler/operator.h" +#include "src/compiler/schedule.h" +#include "test/cctest/cctest.h" + +using namespace v8::internal; +using namespace v8::internal::compiler; + +static SimpleOperator dummy_operator(IrOpcode::kParameter, Operator::kNoWrite, + 0, 0, "dummy"); + +TEST(TestScheduleAllocation) { + HandleAndZoneScope scope; + Schedule schedule(scope.main_zone()); + + CHECK_NE(NULL, schedule.entry()); + CHECK_EQ(schedule.entry(), *(schedule.all_blocks().begin())); +} + + +TEST(TestScheduleAddNode) { + HandleAndZoneScope scope; + Graph graph(scope.main_zone()); + Node* n0 = graph.NewNode(&dummy_operator); + Node* n1 = graph.NewNode(&dummy_operator); + + Schedule schedule(scope.main_zone()); + + BasicBlock* entry = schedule.entry(); + schedule.AddNode(entry, n0); + schedule.AddNode(entry, n1); + + CHECK_EQ(entry, schedule.block(n0)); + CHECK_EQ(entry, schedule.block(n1)); + CHECK(schedule.SameBasicBlock(n0, n1)); + + Node* n2 = graph.NewNode(&dummy_operator); + CHECK_EQ(NULL, schedule.block(n2)); +} + + +TEST(TestScheduleAddGoto) { + HandleAndZoneScope scope; + + Schedule schedule(scope.main_zone()); + BasicBlock* entry = schedule.entry(); + BasicBlock* next = schedule.NewBasicBlock(); + + schedule.AddGoto(entry, next); + + CHECK_EQ(0, entry->PredecessorCount()); + CHECK_EQ(1, entry->SuccessorCount()); + CHECK_EQ(next, entry->SuccessorAt(0)); + + CHECK_EQ(1, next->PredecessorCount()); + CHECK_EQ(entry, next->PredecessorAt(0)); + CHECK_EQ(0, next->SuccessorCount()); +} + + +TEST(TestScheduleAddBranch) { + HandleAndZoneScope scope; + Schedule schedule(scope.main_zone()); + + 
BasicBlock* entry = schedule.entry(); + BasicBlock* tblock = schedule.NewBasicBlock(); + BasicBlock* fblock = schedule.NewBasicBlock(); + + Graph graph(scope.main_zone()); + CommonOperatorBuilder common(scope.main_zone()); + Node* n0 = graph.NewNode(&dummy_operator); + Node* b = graph.NewNode(common.Branch(), n0); + + schedule.AddBranch(entry, b, tblock, fblock); + + CHECK_EQ(0, entry->PredecessorCount()); + CHECK_EQ(2, entry->SuccessorCount()); + CHECK_EQ(tblock, entry->SuccessorAt(0)); + CHECK_EQ(fblock, entry->SuccessorAt(1)); + + CHECK_EQ(1, tblock->PredecessorCount()); + CHECK_EQ(entry, tblock->PredecessorAt(0)); + CHECK_EQ(0, tblock->SuccessorCount()); + + CHECK_EQ(1, fblock->PredecessorCount()); + CHECK_EQ(entry, fblock->PredecessorAt(0)); + CHECK_EQ(0, fblock->SuccessorCount()); +} + + +TEST(TestScheduleAddReturn) { + HandleAndZoneScope scope; + Schedule schedule(scope.main_zone()); + Graph graph(scope.main_zone()); + Node* n0 = graph.NewNode(&dummy_operator); + BasicBlock* entry = schedule.entry(); + schedule.AddReturn(entry, n0); + + CHECK_EQ(0, entry->PredecessorCount()); + CHECK_EQ(1, entry->SuccessorCount()); + CHECK_EQ(schedule.exit(), entry->SuccessorAt(0)); +} + + +TEST(TestScheduleAddThrow) { + HandleAndZoneScope scope; + Schedule schedule(scope.main_zone()); + Graph graph(scope.main_zone()); + Node* n0 = graph.NewNode(&dummy_operator); + BasicBlock* entry = schedule.entry(); + schedule.AddThrow(entry, n0); + + CHECK_EQ(0, entry->PredecessorCount()); + CHECK_EQ(1, entry->SuccessorCount()); + CHECK_EQ(schedule.exit(), entry->SuccessorAt(0)); +} + + +TEST(TestScheduleAddDeopt) { + HandleAndZoneScope scope; + Schedule schedule(scope.main_zone()); + Graph graph(scope.main_zone()); + Node* n0 = graph.NewNode(&dummy_operator); + BasicBlock* entry = schedule.entry(); + schedule.AddDeoptimize(entry, n0); + + CHECK_EQ(0, entry->PredecessorCount()); + CHECK_EQ(1, entry->SuccessorCount()); + CHECK_EQ(schedule.exit(), entry->SuccessorAt(0)); +} + + +TEST(BuildMulNodeGraph) { + HandleAndZoneScope scope; + Schedule schedule(scope.main_zone()); + Graph graph(scope.main_zone()); + CommonOperatorBuilder common(scope.main_zone()); + MachineOperatorBuilder machine(scope.main_zone(), kMachineWord32); + + Node* start = graph.NewNode(common.Start(0)); + graph.SetStart(start); + Node* param0 = graph.NewNode(common.Parameter(0), graph.start()); + Node* param1 = graph.NewNode(common.Parameter(1), graph.start()); + + Node* mul = graph.NewNode(machine.Int32Mul(), param0, param1); + Node* ret = graph.NewNode(common.Return(), mul, start); + + USE(ret); +} diff --git a/deps/v8/test/cctest/compiler/test-scheduler.cc b/deps/v8/test/cctest/compiler/test-scheduler.cc new file mode 100644 index 00000000000..ec4e77e1115 --- /dev/null +++ b/deps/v8/test/cctest/compiler/test-scheduler.cc @@ -0,0 +1,1809 @@ +// Copyright 2014 the V8 project authors. All rights reserved. +// Use of this source code is governed by a BSD-style license that can be +// found in the LICENSE file. 
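+// Tests for Scheduler::ComputeSpecialRPO: blocks must come out numbered in +// reverse postorder, and each loop header's [rpo_number_, loop_end_) range +// must span exactly its loop body (see CheckRPONumbers and CheckLoopContains +// below).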
diff --git a/deps/v8/test/cctest/compiler/test-scheduler.cc b/deps/v8/test/cctest/compiler/test-scheduler.cc
new file mode 100644
index 00000000000..ec4e77e1115
--- /dev/null
+++ b/deps/v8/test/cctest/compiler/test-scheduler.cc
@@ -0,0 +1,1809 @@
+// Copyright 2014 the V8 project authors. All rights reserved.
+// Use of this source code is governed by a BSD-style license that can be
+// found in the LICENSE file.
+
+#include "src/v8.h"
+#include "test/cctest/cctest.h"
+
+#include "src/compiler/common-operator.h"
+#include "src/compiler/generic-node-inl.h"
+#include "src/compiler/generic-node.h"
+#include "src/compiler/graph.h"
+#include "src/compiler/graph-visualizer.h"
+#include "src/compiler/js-operator.h"
+#include "src/compiler/machine-operator.h"
+#include "src/compiler/node.h"
+#include "src/compiler/operator.h"
+#include "src/compiler/schedule.h"
+#include "src/compiler/scheduler.h"
+
+using namespace v8::internal;
+using namespace v8::internal::compiler;
+
+struct TestLoop {
+  int count;
+  BasicBlock** nodes;
+  BasicBlock* header() { return nodes[0]; }
+  BasicBlock* last() { return nodes[count - 1]; }
+  ~TestLoop() { delete[] nodes; }
+};
+
+
+static TestLoop* CreateLoop(Schedule* schedule, int count) {
+  TestLoop* loop = new TestLoop();
+  loop->count = count;
+  loop->nodes = new BasicBlock* [count];
+  for (int i = 0; i < count; i++) {
+    loop->nodes[i] = schedule->NewBasicBlock();
+    if (i > 0) schedule->AddSuccessor(loop->nodes[i - 1], loop->nodes[i]);
+  }
+  schedule->AddSuccessor(loop->nodes[count - 1], loop->nodes[0]);
+  return loop;
+}
+
+
+static void CheckRPONumbers(BasicBlockVector* order, int expected,
+                            bool loops_allowed) {
+  CHECK_EQ(expected, static_cast<int>(order->size()));
+  for (int i = 0; i < static_cast<int>(order->size()); i++) {
+    CHECK(order->at(i)->rpo_number_ == i);
+    if (!loops_allowed) CHECK_LT(order->at(i)->loop_end_, 0);
+  }
+}
+
+
+static void CheckLoopContains(BasicBlock** blocks, int body_size) {
+  BasicBlock* header = blocks[0];
+  CHECK_GT(header->loop_end_, 0);
+  CHECK_EQ(body_size, (header->loop_end_ - header->rpo_number_));
+  for (int i = 0; i < body_size; i++) {
+    int num = blocks[i]->rpo_number_;
+    CHECK(num >= header->rpo_number_ && num < header->loop_end_);
+    CHECK(header->LoopContains(blocks[i]));
+    CHECK(header->IsLoopHeader() || blocks[i]->loop_header_ == header);
+  }
+}
+
+
+TEST(RPODegenerate1) {
+  HandleAndZoneScope scope;
+  Schedule schedule(scope.main_zone());
+
+  BasicBlockVector* order = Scheduler::ComputeSpecialRPO(&schedule);
+  CheckRPONumbers(order, 1, false);
+  CHECK_EQ(schedule.entry(), order->at(0));
+}
+
+
+TEST(RPODegenerate2) {
+  HandleAndZoneScope scope;
+  Schedule schedule(scope.main_zone());
+
+  schedule.AddGoto(schedule.entry(), schedule.exit());
+  BasicBlockVector* order = Scheduler::ComputeSpecialRPO(&schedule);
+  CheckRPONumbers(order, 2, false);
+  CHECK_EQ(schedule.entry(), order->at(0));
+  CHECK_EQ(schedule.exit(), order->at(1));
+}
+
+
+TEST(RPOLine) {
+  HandleAndZoneScope scope;
+
+  for (int i = 0; i < 10; i++) {
+    Schedule schedule(scope.main_zone());
+
+    BasicBlock* last = schedule.entry();
+    for (int j = 0; j < i; j++) {
+      BasicBlock* block = schedule.NewBasicBlock();
+      schedule.AddGoto(last, block);
+      last = block;
+    }
+    BasicBlockVector* order = Scheduler::ComputeSpecialRPO(&schedule);
+    CheckRPONumbers(order, 1 + i, false);
+
+    Schedule::BasicBlocks blocks(schedule.all_blocks());
+    for (Schedule::BasicBlocks::iterator iter = blocks.begin();
+         iter != blocks.end(); ++iter) {
+      BasicBlock* block = *iter;
+      if (block->rpo_number_ >= 0 && block->SuccessorCount() == 1) {
+        CHECK(block->rpo_number_ + 1 == block->SuccessorAt(0)->rpo_number_);
+      }
+    }
+  }
+}
+
+
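CheckRPONumbers and CheckLoopContains above encode the two invariants of the "special" RPO these tests exercise: blocks are numbered 0..n-1 in order, and every loop occupies the contiguous interval [header->rpo_number_, header->loop_end_). A minimal sketch of that interval test, using only the fields the helpers already touch:

// Membership in a loop is an interval test on the special RPO numbering.
static bool InLoopInterval(BasicBlock* header, BasicBlock* block) {
  return block->rpo_number_ >= header->rpo_number_ &&
         block->rpo_number_ < header->loop_end_;
}
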
+TEST(RPOSelfLoop) {
+  HandleAndZoneScope scope;
+  Schedule schedule(scope.main_zone());
+  schedule.AddSuccessor(schedule.entry(), schedule.entry());
+  BasicBlockVector* order = Scheduler::ComputeSpecialRPO(&schedule);
+  CheckRPONumbers(order, 1, true);
+  BasicBlock* loop[] = {schedule.entry()};
+  CheckLoopContains(loop, 1);
+}
+
+
+TEST(RPOEntryLoop) {
+  HandleAndZoneScope scope;
+  Schedule schedule(scope.main_zone());
+  schedule.AddSuccessor(schedule.entry(), schedule.exit());
+  schedule.AddSuccessor(schedule.exit(), schedule.entry());
+  BasicBlockVector* order = Scheduler::ComputeSpecialRPO(&schedule);
+  CheckRPONumbers(order, 2, true);
+  BasicBlock* loop[] = {schedule.entry(), schedule.exit()};
+  CheckLoopContains(loop, 2);
+}
+
+
+TEST(RPOEndLoop) {
+  HandleAndZoneScope scope;
+  Schedule schedule(scope.main_zone());
+  SmartPointer<TestLoop> loop1(CreateLoop(&schedule, 2));
+  schedule.AddSuccessor(schedule.entry(), loop1->header());
+  BasicBlockVector* order = Scheduler::ComputeSpecialRPO(&schedule);
+  CheckRPONumbers(order, 3, true);
+  CheckLoopContains(loop1->nodes, loop1->count);
+}
+
+
+TEST(RPOEndLoopNested) {
+  HandleAndZoneScope scope;
+  Schedule schedule(scope.main_zone());
+  SmartPointer<TestLoop> loop1(CreateLoop(&schedule, 2));
+  schedule.AddSuccessor(schedule.entry(), loop1->header());
+  schedule.AddSuccessor(loop1->last(), schedule.entry());
+  BasicBlockVector* order = Scheduler::ComputeSpecialRPO(&schedule);
+  CheckRPONumbers(order, 3, true);
+  CheckLoopContains(loop1->nodes, loop1->count);
+}
+
+
+TEST(RPODiamond) {
+  HandleAndZoneScope scope;
+  Schedule schedule(scope.main_zone());
+
+  BasicBlock* A = schedule.entry();
+  BasicBlock* B = schedule.NewBasicBlock();
+  BasicBlock* C = schedule.NewBasicBlock();
+  BasicBlock* D = schedule.exit();
+
+  schedule.AddSuccessor(A, B);
+  schedule.AddSuccessor(A, C);
+  schedule.AddSuccessor(B, D);
+  schedule.AddSuccessor(C, D);
+
+  BasicBlockVector* order = Scheduler::ComputeSpecialRPO(&schedule);
+  CheckRPONumbers(order, 4, false);
+
+  CHECK_EQ(0, A->rpo_number_);
+  CHECK((B->rpo_number_ == 1 && C->rpo_number_ == 2) ||
+        (B->rpo_number_ == 2 && C->rpo_number_ == 1));
+  CHECK_EQ(3, D->rpo_number_);
+}
+
+
+TEST(RPOLoop1) {
+  HandleAndZoneScope scope;
+  Schedule schedule(scope.main_zone());
+
+  BasicBlock* A = schedule.entry();
+  BasicBlock* B = schedule.NewBasicBlock();
+  BasicBlock* C = schedule.NewBasicBlock();
+  BasicBlock* D = schedule.exit();
+
+  schedule.AddSuccessor(A, B);
+  schedule.AddSuccessor(B, C);
+  schedule.AddSuccessor(C, B);
+  schedule.AddSuccessor(C, D);
+
+  BasicBlockVector* order = Scheduler::ComputeSpecialRPO(&schedule);
+  CheckRPONumbers(order, 4, true);
+  BasicBlock* loop[] = {B, C};
+  CheckLoopContains(loop, 2);
+}
+
+
+TEST(RPOLoop2) {
+  HandleAndZoneScope scope;
+  Schedule schedule(scope.main_zone());
+
+  BasicBlock* A = schedule.entry();
+  BasicBlock* B = schedule.NewBasicBlock();
+  BasicBlock* C = schedule.NewBasicBlock();
+  BasicBlock* D = schedule.exit();
+
+  schedule.AddSuccessor(A, B);
+  schedule.AddSuccessor(B, C);
+  schedule.AddSuccessor(C, B);
+  schedule.AddSuccessor(B, D);
+
+  BasicBlockVector* order = Scheduler::ComputeSpecialRPO(&schedule);
+  CheckRPONumbers(order, 4, true);
+  BasicBlock* loop[] = {B, C};
+  CheckLoopContains(loop, 2);
+}
+
+
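When one of the combinatorial RPO tests that follow fails, the first debugging step is usually to dump the computed order. A throwaway aid one could drop into this file (a sketch; it prints only the two fields these tests already rely on):

// Throwaway debugging aid: dump a computed order with its loop bounds.
static void DumpRPO(BasicBlockVector* order) {
  for (size_t i = 0; i < order->size(); i++) {
    BasicBlock* b = order->at(i);
    printf("#%d: rpo=%d loop_end=%d\n", static_cast<int>(i), b->rpo_number_,
           b->loop_end_);
  }
}
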
+TEST(RPOLoopN) {
+  HandleAndZoneScope scope;
+
+  for (int i = 0; i < 11; i++) {
+    Schedule schedule(scope.main_zone());
+    BasicBlock* A = schedule.entry();
+    BasicBlock* B = schedule.NewBasicBlock();
+    BasicBlock* C = schedule.NewBasicBlock();
+    BasicBlock* D = schedule.NewBasicBlock();
+    BasicBlock* E = schedule.NewBasicBlock();
+    BasicBlock* F = schedule.NewBasicBlock();
+    BasicBlock* G = schedule.exit();
+
+    schedule.AddSuccessor(A, B);
+    schedule.AddSuccessor(B, C);
+    schedule.AddSuccessor(C, D);
+    schedule.AddSuccessor(D, E);
+    schedule.AddSuccessor(E, F);
+    schedule.AddSuccessor(F, B);
+    schedule.AddSuccessor(B, G);
+
+    // Throw in extra backedges from time to time.
+    if (i == 1) schedule.AddSuccessor(B, B);
+    if (i == 2) schedule.AddSuccessor(C, B);
+    if (i == 3) schedule.AddSuccessor(D, B);
+    if (i == 4) schedule.AddSuccessor(E, B);
+    if (i == 5) schedule.AddSuccessor(F, B);
+
+    // Throw in extra loop exits from time to time.
+    if (i == 6) schedule.AddSuccessor(B, G);
+    if (i == 7) schedule.AddSuccessor(C, G);
+    if (i == 8) schedule.AddSuccessor(D, G);
+    if (i == 9) schedule.AddSuccessor(E, G);
+    if (i == 10) schedule.AddSuccessor(F, G);
+
+    BasicBlockVector* order = Scheduler::ComputeSpecialRPO(&schedule);
+    CheckRPONumbers(order, 7, true);
+    BasicBlock* loop[] = {B, C, D, E, F};
+    CheckLoopContains(loop, 5);
+  }
+}
+
+
+TEST(RPOLoopNest1) {
+  HandleAndZoneScope scope;
+  Schedule schedule(scope.main_zone());
+
+  BasicBlock* A = schedule.entry();
+  BasicBlock* B = schedule.NewBasicBlock();
+  BasicBlock* C = schedule.NewBasicBlock();
+  BasicBlock* D = schedule.NewBasicBlock();
+  BasicBlock* E = schedule.NewBasicBlock();
+  BasicBlock* F = schedule.exit();
+
+  schedule.AddSuccessor(A, B);
+  schedule.AddSuccessor(B, C);
+  schedule.AddSuccessor(C, D);
+  schedule.AddSuccessor(D, C);
+  schedule.AddSuccessor(D, E);
+  schedule.AddSuccessor(E, B);
+  schedule.AddSuccessor(E, F);
+
+  BasicBlockVector* order = Scheduler::ComputeSpecialRPO(&schedule);
+  CheckRPONumbers(order, 6, true);
+  BasicBlock* loop1[] = {B, C, D, E};
+  CheckLoopContains(loop1, 4);
+
+  BasicBlock* loop2[] = {C, D};
+  CheckLoopContains(loop2, 2);
+}
+
+
+TEST(RPOLoopNest2) {
+  HandleAndZoneScope scope;
+  Schedule schedule(scope.main_zone());
+
+  BasicBlock* A = schedule.entry();
+  BasicBlock* B = schedule.NewBasicBlock();
+  BasicBlock* C = schedule.NewBasicBlock();
+  BasicBlock* D = schedule.NewBasicBlock();
+  BasicBlock* E = schedule.NewBasicBlock();
+  BasicBlock* F = schedule.NewBasicBlock();
+  BasicBlock* G = schedule.NewBasicBlock();
+  BasicBlock* H = schedule.exit();
+
+  schedule.AddSuccessor(A, B);
+  schedule.AddSuccessor(B, C);
+  schedule.AddSuccessor(C, D);
+  schedule.AddSuccessor(D, E);
+  schedule.AddSuccessor(E, F);
+  schedule.AddSuccessor(F, G);
+  schedule.AddSuccessor(G, H);
+
+  schedule.AddSuccessor(E, D);
+  schedule.AddSuccessor(F, C);
+  schedule.AddSuccessor(G, B);
+
+  BasicBlockVector* order = Scheduler::ComputeSpecialRPO(&schedule);
+  CheckRPONumbers(order, 8, true);
+  BasicBlock* loop1[] = {B, C, D, E, F, G};
+  CheckLoopContains(loop1, 6);
+
+  BasicBlock* loop2[] = {C, D, E, F};
+  CheckLoopContains(loop2, 4);
+
+  BasicBlock* loop3[] = {D, E};
+  CheckLoopContains(loop3, 2);
+}
+
+
+TEST(RPOLoopFollow1) {
+  HandleAndZoneScope scope;
+  Schedule schedule(scope.main_zone());
+
+  SmartPointer<TestLoop> loop1(CreateLoop(&schedule, 1));
+  SmartPointer<TestLoop> loop2(CreateLoop(&schedule, 1));
+
+  BasicBlock* A = schedule.entry();
+  BasicBlock* E = schedule.exit();
+
+  schedule.AddSuccessor(A, loop1->header());
+  schedule.AddSuccessor(loop1->header(), loop2->header());
+  schedule.AddSuccessor(loop2->last(), E);
+
+  BasicBlockVector* order = Scheduler::ComputeSpecialRPO(&schedule);
+
+  CheckLoopContains(loop1->nodes, loop1->count);
+
+  CHECK_EQ(schedule.BasicBlockCount(), static_cast<int>(order->size()));
+  CheckLoopContains(loop1->nodes, loop1->count);
+  CheckLoopContains(loop2->nodes, loop2->count);
+}
+
+
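The "follow" tests check that two sequential loops each stay contiguous; a consequence one could additionally assert (a sketch, assuming loop_end_ is the usual one-past-the-end bound) is that the first loop's interval closes before the second begins:

// Hypothetical extra check for RPOLoopFollow1: the first loop's interval
// must end no later than the start of the loop that follows it.
CHECK(loop1->header()->loop_end_ <= loop2->header()->rpo_number_);
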
+TEST(RPOLoopFollow2) {
+  HandleAndZoneScope scope;
+  Schedule schedule(scope.main_zone());
+
+  SmartPointer<TestLoop> loop1(CreateLoop(&schedule, 1));
+  SmartPointer<TestLoop> loop2(CreateLoop(&schedule, 1));
+
+  BasicBlock* A = schedule.entry();
+  BasicBlock* S = schedule.NewBasicBlock();
+  BasicBlock* E = schedule.exit();
+
+  schedule.AddSuccessor(A, loop1->header());
+  schedule.AddSuccessor(loop1->header(), S);
+  schedule.AddSuccessor(S, loop2->header());
+  schedule.AddSuccessor(loop2->last(), E);
+
+  BasicBlockVector* order = Scheduler::ComputeSpecialRPO(&schedule);
+
+  CheckLoopContains(loop1->nodes, loop1->count);
+
+  CHECK_EQ(schedule.BasicBlockCount(), static_cast<int>(order->size()));
+  CheckLoopContains(loop1->nodes, loop1->count);
+  CheckLoopContains(loop2->nodes, loop2->count);
+}
+
+
+TEST(RPOLoopFollowN) {
+  HandleAndZoneScope scope;
+
+  for (int size = 1; size < 5; size++) {
+    for (int exit = 0; exit < size; exit++) {
+      Schedule schedule(scope.main_zone());
+      SmartPointer<TestLoop> loop1(CreateLoop(&schedule, size));
+      SmartPointer<TestLoop> loop2(CreateLoop(&schedule, size));
+      BasicBlock* A = schedule.entry();
+      BasicBlock* E = schedule.exit();
+
+      schedule.AddSuccessor(A, loop1->header());
+      schedule.AddSuccessor(loop1->nodes[exit], loop2->header());
+      schedule.AddSuccessor(loop2->nodes[exit], E);
+      BasicBlockVector* order = Scheduler::ComputeSpecialRPO(&schedule);
+      CheckLoopContains(loop1->nodes, loop1->count);
+
+      CHECK_EQ(schedule.BasicBlockCount(), static_cast<int>(order->size()));
+      CheckLoopContains(loop1->nodes, loop1->count);
+      CheckLoopContains(loop2->nodes, loop2->count);
+    }
+  }
+}
+
+
+TEST(RPONestedLoopFollow1) {
+  HandleAndZoneScope scope;
+  Schedule schedule(scope.main_zone());
+
+  SmartPointer<TestLoop> loop1(CreateLoop(&schedule, 1));
+  SmartPointer<TestLoop> loop2(CreateLoop(&schedule, 1));
+
+  BasicBlock* A = schedule.entry();
+  BasicBlock* B = schedule.NewBasicBlock();
+  BasicBlock* C = schedule.NewBasicBlock();
+  BasicBlock* E = schedule.exit();
+
+  schedule.AddSuccessor(A, B);
+  schedule.AddSuccessor(B, loop1->header());
+  schedule.AddSuccessor(loop1->header(), loop2->header());
+  schedule.AddSuccessor(loop2->last(), C);
+  schedule.AddSuccessor(C, E);
+  schedule.AddSuccessor(C, B);
+
+  BasicBlockVector* order = Scheduler::ComputeSpecialRPO(&schedule);
+
+  CheckLoopContains(loop1->nodes, loop1->count);
+
+  CHECK_EQ(schedule.BasicBlockCount(), static_cast<int>(order->size()));
+  CheckLoopContains(loop1->nodes, loop1->count);
+  CheckLoopContains(loop2->nodes, loop2->count);
+
+  BasicBlock* loop3[] = {B, loop1->nodes[0], loop2->nodes[0], C};
+  CheckLoopContains(loop3, 4);
+}
+
+
+TEST(RPOLoopBackedges1) {
+  HandleAndZoneScope scope;
+
+  int size = 8;
+  for (int i = 0; i < size; i++) {
+    for (int j = 0; j < size; j++) {
+      Schedule schedule(scope.main_zone());
+      BasicBlock* A = schedule.entry();
+      BasicBlock* E = schedule.exit();
+
+      SmartPointer<TestLoop> loop1(CreateLoop(&schedule, size));
+      schedule.AddSuccessor(A, loop1->header());
+      schedule.AddSuccessor(loop1->last(), E);
+
+      schedule.AddSuccessor(loop1->nodes[i], loop1->header());
+      schedule.AddSuccessor(loop1->nodes[j], E);
+
+      BasicBlockVector* order = Scheduler::ComputeSpecialRPO(&schedule);
+      CheckRPONumbers(order, schedule.BasicBlockCount(), true);
+      CheckLoopContains(loop1->nodes, loop1->count);
+    }
+  }
+}
+
+
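RPONestedLoopFollow1 above is the first test where a loop (loop3) encloses two smaller loops, so nesting depth becomes observable. A sketch of how depth could be recovered, assuming loop_header_ links each block to its innermost enclosing loop header (the field the checks above already consult):

// Sketch: recover nesting depth by chasing loop_header_ links upward.
static int LoopDepth(BasicBlock* block) {
  int depth = 0;
  for (BasicBlock* h = block->loop_header_; h != NULL; h = h->loop_header_) {
    depth++;
  }
  return depth;
}
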
+TEST(RPOLoopOutedges1) {
+  HandleAndZoneScope scope;
+
+  int size = 8;
+  for (int i = 0; i < size; i++) {
+    for (int j = 0; j < size; j++) {
+      Schedule schedule(scope.main_zone());
+      BasicBlock* A = schedule.entry();
+      BasicBlock* D = schedule.NewBasicBlock();
+      BasicBlock* E = schedule.exit();
+
+      SmartPointer<TestLoop> loop1(CreateLoop(&schedule, size));
+      schedule.AddSuccessor(A, loop1->header());
+      schedule.AddSuccessor(loop1->last(), E);
+
+      schedule.AddSuccessor(loop1->nodes[i], loop1->header());
+      schedule.AddSuccessor(loop1->nodes[j], D);
+      schedule.AddSuccessor(D, E);
+
+      BasicBlockVector* order = Scheduler::ComputeSpecialRPO(&schedule);
+      CheckRPONumbers(order, schedule.BasicBlockCount(), true);
+      CheckLoopContains(loop1->nodes, loop1->count);
+    }
+  }
+}
+
+
+TEST(RPOLoopOutedges2) {
+  HandleAndZoneScope scope;
+
+  int size = 8;
+  for (int i = 0; i < size; i++) {
+    Schedule schedule(scope.main_zone());
+    BasicBlock* A = schedule.entry();
+    BasicBlock* E = schedule.exit();
+
+    SmartPointer<TestLoop> loop1(CreateLoop(&schedule, size));
+    schedule.AddSuccessor(A, loop1->header());
+    schedule.AddSuccessor(loop1->last(), E);
+
+    for (int j = 0; j < size; j++) {
+      BasicBlock* O = schedule.NewBasicBlock();
+      schedule.AddSuccessor(loop1->nodes[j], O);
+      schedule.AddSuccessor(O, E);
+    }
+
+    BasicBlockVector* order = Scheduler::ComputeSpecialRPO(&schedule);
+    CheckRPONumbers(order, schedule.BasicBlockCount(), true);
+    CheckLoopContains(loop1->nodes, loop1->count);
+  }
+}
+
+
+TEST(RPOLoopOutloops1) {
+  HandleAndZoneScope scope;
+
+  int size = 8;
+  for (int i = 0; i < size; i++) {
+    Schedule schedule(scope.main_zone());
+    BasicBlock* A = schedule.entry();
+    BasicBlock* E = schedule.exit();
+    SmartPointer<TestLoop> loop1(CreateLoop(&schedule, size));
+    schedule.AddSuccessor(A, loop1->header());
+    schedule.AddSuccessor(loop1->last(), E);
+
+    TestLoop** loopN = new TestLoop* [size];
+    for (int j = 0; j < size; j++) {
+      loopN[j] = CreateLoop(&schedule, 2);
+      schedule.AddSuccessor(loop1->nodes[j], loopN[j]->header());
+      schedule.AddSuccessor(loopN[j]->last(), E);
+    }
+
+    BasicBlockVector* order = Scheduler::ComputeSpecialRPO(&schedule);
+    CheckRPONumbers(order, schedule.BasicBlockCount(), true);
+    CheckLoopContains(loop1->nodes, loop1->count);
+
+    for (int j = 0; j < size; j++) {
+      CheckLoopContains(loopN[j]->nodes, loopN[j]->count);
+      delete loopN[j];
+    }
+    delete[] loopN;
+  }
+}
+
+
+TEST(RPOLoopMultibackedge) {
+  HandleAndZoneScope scope;
+  Schedule schedule(scope.main_zone());
+
+  BasicBlock* A = schedule.entry();
+  BasicBlock* B = schedule.NewBasicBlock();
+  BasicBlock* C = schedule.NewBasicBlock();
+  BasicBlock* D = schedule.exit();
+  BasicBlock* E = schedule.NewBasicBlock();
+
+  schedule.AddSuccessor(A, B);
+  schedule.AddSuccessor(B, C);
+  schedule.AddSuccessor(B, D);
+  schedule.AddSuccessor(B, E);
+  schedule.AddSuccessor(C, B);
+  schedule.AddSuccessor(D, B);
+  schedule.AddSuccessor(E, B);
+
+  BasicBlockVector* order = Scheduler::ComputeSpecialRPO(&schedule);
+  CheckRPONumbers(order, 5, true);
+
+  BasicBlock* loop1[] = {B, C, D, E};
+  CheckLoopContains(loop1, 4);
+}
+
+
+TEST(BuildScheduleEmpty) {
+  HandleAndZoneScope scope;
+  Graph graph(scope.main_zone());
+  CommonOperatorBuilder builder(scope.main_zone());
+  graph.SetStart(graph.NewNode(builder.Start(0)));
+  graph.SetEnd(graph.NewNode(builder.End(), graph.start()));
+
+  USE(Scheduler::ComputeSchedule(&graph));
+}
+
+
+TEST(BuildScheduleOneParameter) {
+  HandleAndZoneScope scope;
+  Graph graph(scope.main_zone());
+  CommonOperatorBuilder builder(scope.main_zone());
+  graph.SetStart(graph.NewNode(builder.Start(0)));
+
+  Node* p1 = graph.NewNode(builder.Parameter(0), graph.start());
+  Node* ret =
+      graph.NewNode(builder.Return(), p1, graph.start(), graph.start());
+
+  graph.SetEnd(graph.NewNode(builder.End(), ret));
+
+  USE(Scheduler::ComputeSchedule(&graph));
+}
+
+
+static int GetScheduledNodeCount(Schedule* schedule) {
+  int node_count = 0;
+  for (BasicBlockVectorIter i = schedule->rpo_order()->begin();
+       i != schedule->rpo_order()->end(); ++i) {
+    BasicBlock* block = *i;
+    for (BasicBlock::const_iterator j = block->begin(); j != block->end();
+         ++j) {
+      ++node_count;
+    }
+    BasicBlock::Control control = block->control_;
+    if (control != BasicBlock::kNone) {
+      ++node_count;
+    }
+  }
+  return node_count;
+}
+
+
+static void PrintGraph(Graph* graph) {
+  OFStream os(stdout);
+  os << AsDOT(*graph);
+}
+
+
+static void PrintSchedule(Schedule* schedule) {
+  OFStream os(stdout);
+  os << *schedule << endl;
+}
+
+
+TEST(BuildScheduleIfSplit) {
+  HandleAndZoneScope scope;
+  Graph graph(scope.main_zone());
+  CommonOperatorBuilder builder(scope.main_zone());
+  JSOperatorBuilder js_builder(scope.main_zone());
+  graph.SetStart(graph.NewNode(builder.Start(3)));
+
+  Node* p1 = graph.NewNode(builder.Parameter(0), graph.start());
+  Node* p2 = graph.NewNode(builder.Parameter(1), graph.start());
+  Node* p3 = graph.NewNode(builder.Parameter(2), graph.start());
+  Node* p4 = graph.NewNode(builder.Parameter(3), graph.start());
+  Node* p5 = graph.NewNode(builder.Parameter(4), graph.start());
+  Node* cmp = graph.NewNode(js_builder.LessThanOrEqual(), p1, p2, p3,
+                            graph.start(), graph.start());
+  Node* branch = graph.NewNode(builder.Branch(), cmp, graph.start());
+  Node* true_branch = graph.NewNode(builder.IfTrue(), branch);
+  Node* false_branch = graph.NewNode(builder.IfFalse(), branch);
+
+  Node* ret1 = graph.NewNode(builder.Return(), p4, graph.start(), true_branch);
+  Node* ret2 =
+      graph.NewNode(builder.Return(), p5, graph.start(), false_branch);
+  Node* merge = graph.NewNode(builder.Merge(2), ret1, ret2);
+  graph.SetEnd(graph.NewNode(builder.End(), merge));
+
+  PrintGraph(&graph);
+
+  Schedule* schedule = Scheduler::ComputeSchedule(&graph);
+
+  PrintSchedule(schedule);
+
+  CHECK_EQ(13, GetScheduledNodeCount(schedule));
+}
+
+
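The tests from here on build graphs by hand: every node is first allocated with Dead placeholder inputs and later patched with ReplaceInput once its real producers exist. That two-phase idiom is what lets a single linear pass express cyclic dependencies such as loop phis. A minimal sketch of the pattern (the names one, context, effect and control are hypothetical stand-ins, not nodes from the tests below):

// Allocate with a Dead placeholder, then patch inputs once the producer
// exists; this is how a cycle (a loop phi feeding an add that feeds the
// phi) is expressed.
Node* nil = graph.NewNode(common_builder.Dead());
Node* phi = graph.NewNode(common_builder.Phi(2), nil, nil, nil);
Node* add =
    graph.NewNode(js_builder.Add(), phi, one, context, effect, control);
phi->ReplaceInput(1, add);  // close the cycle: the phi reads the add
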
+TEST(BuildScheduleIfSplitWithEffects) {
+  HandleAndZoneScope scope;
+  Isolate* isolate = scope.main_isolate();
+  Graph graph(scope.main_zone());
+  CommonOperatorBuilder common_builder(scope.main_zone());
+  JSOperatorBuilder js_builder(scope.main_zone());
+  Operator* op;
+
+  Handle<Object> object =
+      Handle<Object>(isolate->heap()->undefined_value(), isolate);
+  PrintableUnique<Object> unique_constant =
+      PrintableUnique<Object>::CreateUninitialized(scope.main_zone(), object);
+
+  // Manually transcribed code for:
+  // function turbo_fan_test(a, b, c, y) {
+  //   if (a < b) {
+  //     return a + b - c * c - a + y;
+  //   } else {
+  //     return c * c - a;
+  //   }
+  // }
+  op = common_builder.Start(0);
+  Node* n0 = graph.NewNode(op);
+  USE(n0);
+  Node* nil = graph.NewNode(common_builder.Dead());
+  op = common_builder.End();
+  Node* n23 = graph.NewNode(op, nil);
+  USE(n23);
+  op = common_builder.Merge(2);
+  Node* n22 = graph.NewNode(op, nil, nil);
+  USE(n22);
+  op = common_builder.Return();
+  Node* n16 = graph.NewNode(op, nil, nil, nil);
+  USE(n16);
+  op = js_builder.Add();
+  Node* n15 = graph.NewNode(op, nil, nil, nil, nil, nil);
+  USE(n15);
+  op = js_builder.Subtract();
+  Node* n14 = graph.NewNode(op, nil, nil, nil, nil, nil);
+  USE(n14);
+  op = js_builder.Subtract();
+  Node* n13 = graph.NewNode(op, nil, nil, nil, nil, nil);
+  USE(n13);
+  op = js_builder.Add();
+  Node* n11 = graph.NewNode(op, nil, nil, nil, nil, nil);
+  USE(n11);
+  op = common_builder.Parameter(0);
+  Node* n2 = graph.NewNode(op, n0);
+  USE(n2);
+  n11->ReplaceInput(0, n2);
+  op = common_builder.Parameter(0);
+  Node* n3 = graph.NewNode(op, n0);
+  USE(n3);
+  n11->ReplaceInput(1, n3);
+  op = common_builder.HeapConstant(unique_constant);
+  Node* n7 = graph.NewNode(op);
+  USE(n7);
+  n11->ReplaceInput(2, n7);
+  op = js_builder.LessThan();
+  Node* n8 = graph.NewNode(op, nil, nil, nil, nil, nil);
+  USE(n8);
+  n8->ReplaceInput(0, n2);
+  n8->ReplaceInput(1, n3);
+  n8->ReplaceInput(2, n7);
+  n8->ReplaceInput(3, n0);
+  n8->ReplaceInput(4, n0);
+  n11->ReplaceInput(3, n8);
+  op = common_builder.IfTrue();
+  Node* n10 = graph.NewNode(op, nil);
+  USE(n10);
+  op = common_builder.Branch();
+  Node* n9 = graph.NewNode(op, nil, nil);
+  USE(n9);
+  n9->ReplaceInput(0, n8);
+  n9->ReplaceInput(1, n0);
+  n10->ReplaceInput(0, n9);
+  n11->ReplaceInput(4, n10);
+  n13->ReplaceInput(0, n11);
+  op = js_builder.Multiply();
+  Node* n12 = graph.NewNode(op, nil, nil, nil, nil, nil);
+  USE(n12);
+  op = common_builder.Parameter(0);
+  Node* n4 = graph.NewNode(op, n0);
+  USE(n4);
+  n12->ReplaceInput(0, n4);
+  n12->ReplaceInput(1, n4);
+  n12->ReplaceInput(2, n7);
+  n12->ReplaceInput(3, n11);
+  n12->ReplaceInput(4, n10);
+  n13->ReplaceInput(1, n12);
+  n13->ReplaceInput(2, n7);
+  n13->ReplaceInput(3, n12);
+  n13->ReplaceInput(4, n10);
+  n14->ReplaceInput(0, n13);
+  n14->ReplaceInput(1, n2);
+  n14->ReplaceInput(2, n7);
+  n14->ReplaceInput(3, n13);
+  n14->ReplaceInput(4, n10);
+  n15->ReplaceInput(0, n14);
+  op = common_builder.Parameter(0);
+  Node* n5 = graph.NewNode(op, n0);
+  USE(n5);
+  n15->ReplaceInput(1, n5);
+  n15->ReplaceInput(2, n7);
+  n15->ReplaceInput(3, n14);
+  n15->ReplaceInput(4, n10);
+  n16->ReplaceInput(0, n15);
+  n16->ReplaceInput(1, n15);
+  n16->ReplaceInput(2, n10);
+  n22->ReplaceInput(0, n16);
+  op = common_builder.Return();
+  Node* n21 = graph.NewNode(op, nil, nil, nil);
+  USE(n21);
+  op = js_builder.Subtract();
+  Node* n20 = graph.NewNode(op, nil, nil, nil, nil, nil);
+  USE(n20);
+  op = js_builder.Multiply();
+  Node* n19 = graph.NewNode(op, nil, nil, nil, nil, nil);
+  USE(n19);
+  n19->ReplaceInput(0, n4);
+  n19->ReplaceInput(1, n4);
+  n19->ReplaceInput(2, n7);
+  n19->ReplaceInput(3, n8);
+  op = common_builder.IfFalse();
+  Node* n18 = graph.NewNode(op, nil);
+  USE(n18);
+  n18->ReplaceInput(0, n9);
+  n19->ReplaceInput(4, n18);
+  n20->ReplaceInput(0, n19);
+  n20->ReplaceInput(1, n2);
+  n20->ReplaceInput(2, n7);
+  n20->ReplaceInput(3, n19);
+  n20->ReplaceInput(4, n18);
+  n21->ReplaceInput(0, n20);
+  n21->ReplaceInput(1, n20);
+  n21->ReplaceInput(2, n18);
+  n22->ReplaceInput(1, n21);
+  n23->ReplaceInput(0, n22);
+
+  graph.SetStart(n0);
+  graph.SetEnd(n23);
+
+  PrintGraph(&graph);
+
+  Schedule* schedule = Scheduler::ComputeSchedule(&graph);
+
+  PrintSchedule(schedule);
+
+  CHECK_EQ(20, GetScheduledNodeCount(schedule));
+}
+
+
+TEST(BuildScheduleSimpleLoop) {
+  HandleAndZoneScope scope;
+  Isolate* isolate = scope.main_isolate();
+  Graph graph(scope.main_zone());
+  CommonOperatorBuilder common_builder(scope.main_zone());
+  JSOperatorBuilder js_builder(scope.main_zone());
+  Operator* op;
+
+  Handle<Object> object =
+      Handle<Object>(isolate->heap()->undefined_value(), isolate);
+  PrintableUnique<Object> unique_constant =
+      PrintableUnique<Object>::CreateUninitialized(scope.main_zone(), object);
+
+  // Manually transcribed code for:
+  // function turbo_fan_test(a, b) {
+  //   while (a < b) {
+  //     a++;
+  //   }
+  //   return a;
+  // }
+  op = common_builder.Start(0);
+  Node* n0 = graph.NewNode(op);
+  USE(n0);
+  Node* nil = graph.NewNode(common_builder.Dead());
+  op = common_builder.End();
+  Node* n20 = graph.NewNode(op, nil);
+  USE(n20);
+  op = common_builder.Return();
+  Node* n19 = graph.NewNode(op, nil, nil, nil);
+  USE(n19);
+  op = common_builder.Phi(2);
+  Node* n8 = graph.NewNode(op, nil, nil, nil);
+  USE(n8);
+  op = common_builder.Parameter(0);
+  Node* n2 = graph.NewNode(op, n0);
+  USE(n2);
+  n8->ReplaceInput(0, n2);
+  op = js_builder.Add();
+  Node* n18 = graph.NewNode(op, nil, nil, nil, nil, nil);
+  USE(n18);
+  op = js_builder.ToNumber();
+  Node* n16 = graph.NewNode(op, nil, nil, nil, nil);
+  USE(n16);
+  n16->ReplaceInput(0, n8);
+  op = common_builder.HeapConstant(unique_constant);
+  Node* n5 = graph.NewNode(op);
+  USE(n5);
+  n16->ReplaceInput(1, n5);
+  op = js_builder.LessThan();
+  Node* n12 = graph.NewNode(op, nil, nil, nil, nil, nil);
+  USE(n12);
+  n12->ReplaceInput(0, n8);
+  op = common_builder.Phi(2);
+  Node* n9 = graph.NewNode(op, nil, nil, nil);
+  USE(n9);
+  op = common_builder.Parameter(0);
+  Node* n3 = graph.NewNode(op, n0);
+  USE(n3);
+  n9->ReplaceInput(0, n3);
+  n9->ReplaceInput(1, n9);
+  op = common_builder.Loop(2);
+  Node* n6 = graph.NewNode(op, nil, nil);
+  USE(n6);
+  n6->ReplaceInput(0, n0);
+  op = common_builder.IfTrue();
+  Node* n14 = graph.NewNode(op, nil);
+  USE(n14);
+  op = common_builder.Branch();
+  Node* n13 = graph.NewNode(op, nil, nil);
+  USE(n13);
+  n13->ReplaceInput(0, n12);
+  n13->ReplaceInput(1, n6);
+  n14->ReplaceInput(0, n13);
+  n6->ReplaceInput(1, n14);
+  n9->ReplaceInput(2, n6);
+  n12->ReplaceInput(1, n9);
+  n12->ReplaceInput(2, n5);
+  op = common_builder.Phi(2);
+  Node* n10 = graph.NewNode(op, nil, nil, nil);
+  USE(n10);
+  n10->ReplaceInput(0, n0);
+  n10->ReplaceInput(1, n18);
+  n10->ReplaceInput(2, n6);
+  n12->ReplaceInput(3, n10);
+  n12->ReplaceInput(4, n6);
+  n16->ReplaceInput(2, n12);
+  n16->ReplaceInput(3, n14);
+  n18->ReplaceInput(0, n16);
+  op = common_builder.NumberConstant(0);
+  Node* n17 = graph.NewNode(op);
+  USE(n17);
+  n18->ReplaceInput(1, n17);
+  n18->ReplaceInput(2, n5);
+  n18->ReplaceInput(3, n16);
+  n18->ReplaceInput(4, n14);
+  n8->ReplaceInput(1, n18);
+  n8->ReplaceInput(2, n6);
+  n19->ReplaceInput(0, n8);
+  n19->ReplaceInput(1, n12);
+  op = common_builder.IfFalse();
+  Node* n15 = graph.NewNode(op, nil);
+  USE(n15);
+  n15->ReplaceInput(0, n13);
+  n19->ReplaceInput(2, n15);
+  n20->ReplaceInput(0, n19);
+
+  graph.SetStart(n0);
+  graph.SetEnd(n20);
+
+  PrintGraph(&graph);
+
+  Schedule* schedule = Scheduler::ComputeSchedule(&graph);
+
+  PrintSchedule(schedule);
+
+  CHECK_EQ(19, GetScheduledNodeCount(schedule));
+}
+
+
+TEST(BuildScheduleComplexLoops) {
+  HandleAndZoneScope scope;
+  Isolate* isolate = scope.main_isolate();
+  Graph graph(scope.main_zone());
+  CommonOperatorBuilder common_builder(scope.main_zone());
+  JSOperatorBuilder js_builder(scope.main_zone());
+  Operator* op;
+
+  Handle<Object> object =
+      Handle<Object>(isolate->heap()->undefined_value(), isolate);
+  PrintableUnique<Object> unique_constant =
+      PrintableUnique<Object>::CreateUninitialized(scope.main_zone(), object);
+
+  // Manually transcribed code for:
+  // function turbo_fan_test(a, b, c) {
+  //   while (a < b) {
+  //     a++;
+  //     while (c < b) {
+  //       c++;
+  //     }
+  //   }
+  //   while (a < b) {
+  //     a += 2;
+  //   }
+  //   return a;
+  // }
+  op = common_builder.Start(0);
+  Node* n0 = graph.NewNode(op);
+  USE(n0);
+  Node* nil = graph.NewNode(common_builder.Dead());
+  op = common_builder.End();
+  Node* n46 = graph.NewNode(op, nil);
+  USE(n46);
+  op = common_builder.Return();
+  Node* n45 = graph.NewNode(op, nil, nil, nil);
+  USE(n45);
+  op = common_builder.Phi(2);
+  Node* n35 = graph.NewNode(op, nil, nil, nil);
+  USE(n35);
+  op = common_builder.Phi(2);
+  Node* n9 = graph.NewNode(op, nil, nil, nil);
+  USE(n9);
+  op = common_builder.Parameter(0);
+  Node* n2 = graph.NewNode(op, n0);
+  USE(n2);
+  n9->ReplaceInput(0, n2);
+  op = common_builder.Phi(2);
+  Node* n23 = graph.NewNode(op, nil, nil, nil);
+  USE(n23);
+  op = js_builder.Add();
+  Node* n20 = graph.NewNode(op, nil, nil, nil, nil, nil);
+  USE(n20);
+  op = js_builder.ToNumber();
+  Node* n18 = graph.NewNode(op, nil, nil, nil, nil);
+  USE(n18);
+  n18->ReplaceInput(0, n9);
+  op = common_builder.HeapConstant(unique_constant);
+  Node* n6 = graph.NewNode(op);
+  USE(n6);
+  n18->ReplaceInput(1, n6);
+  op = js_builder.LessThan();
+  Node* n14 = graph.NewNode(op, nil, nil, nil, nil, nil);
+  USE(n14);
+  n14->ReplaceInput(0, n9);
+  op = common_builder.Phi(2);
+  Node* n10 = graph.NewNode(op, nil, nil, nil);
+  USE(n10);
+  op = common_builder.Parameter(0);
+  Node* n3 = graph.NewNode(op, n0);
+  USE(n3);
+  n10->ReplaceInput(0, n3);
+  op = common_builder.Phi(2);
+  Node* n24 = graph.NewNode(op, nil, nil, nil);
+  USE(n24);
+  n24->ReplaceInput(0, n10);
+  n24->ReplaceInput(1, n24);
+  op = common_builder.Loop(2);
+  Node* n21 = graph.NewNode(op, nil, nil);
+  USE(n21);
+  op = common_builder.IfTrue();
+  Node* n16 = graph.NewNode(op, nil);
+  USE(n16);
+  op = common_builder.Branch();
+  Node* n15 = graph.NewNode(op, nil, nil);
+  USE(n15);
+  n15->ReplaceInput(0, n14);
+  op = common_builder.Loop(2);
+  Node* n7 = graph.NewNode(op, nil, nil);
+  USE(n7);
+  n7->ReplaceInput(0, n0);
+  op = common_builder.IfFalse();
+  Node* n30 = graph.NewNode(op, nil);
+  USE(n30);
+  op = common_builder.Branch();
+  Node* n28 = graph.NewNode(op, nil, nil);
+  USE(n28);
+  op = js_builder.LessThan();
+  Node* n27 = graph.NewNode(op, nil, nil, nil, nil, nil);
+  USE(n27);
+  op = common_builder.Phi(2);
+  Node* n25 = graph.NewNode(op, nil, nil, nil);
+  USE(n25);
+  op = common_builder.Phi(2);
+  Node* n11 = graph.NewNode(op, nil, nil, nil);
+  USE(n11);
+  op = common_builder.Parameter(0);
+  Node* n4 = graph.NewNode(op, n0);
+  USE(n4);
+  n11->ReplaceInput(0, n4);
+  n11->ReplaceInput(1, n25);
+  n11->ReplaceInput(2, n7);
+  n25->ReplaceInput(0, n11);
+  op = js_builder.Add();
+  Node* n32 = graph.NewNode(op, nil, nil, nil, nil, nil);
+  USE(n32);
+  op = js_builder.ToNumber();
+  Node* n31 = graph.NewNode(op, nil, nil, nil, nil);
+  USE(n31);
+  n31->ReplaceInput(0, n25);
+  n31->ReplaceInput(1, n6);
+  n31->ReplaceInput(2, n27);
+  op = common_builder.IfTrue();
+  Node* n29 = graph.NewNode(op, nil);
+  USE(n29);
+  n29->ReplaceInput(0, n28);
+  n31->ReplaceInput(3, n29);
+  n32->ReplaceInput(0, n31);
+  op = common_builder.NumberConstant(0);
+  Node* n19 = graph.NewNode(op);
+  USE(n19);
+  n32->ReplaceInput(1, n19);
+  n32->ReplaceInput(2, n6);
+  n32->ReplaceInput(3, n31);
+  n32->ReplaceInput(4, n29);
+  n25->ReplaceInput(1, n32);
+  n25->ReplaceInput(2, n21);
+  n27->ReplaceInput(0, n25);
+  n27->ReplaceInput(1, n24);
+  n27->ReplaceInput(2, n6);
+  op = common_builder.Phi(2);
+  Node* n26 = graph.NewNode(op, nil, nil, nil);
+  USE(n26);
+  n26->ReplaceInput(0, n20);
+  n26->ReplaceInput(1, n32);
+  n26->ReplaceInput(2, n21);
+  n27->ReplaceInput(3, n26);
+  n27->ReplaceInput(4, n21);
+  n28->ReplaceInput(0, n27);
+  n28->ReplaceInput(1, n21);
+  n30->ReplaceInput(0, n28);
+  n7->ReplaceInput(1, n30);
+  n15->ReplaceInput(1, n7);
+  n16->ReplaceInput(0, n15);
+  n21->ReplaceInput(0, n16);
+  n21->ReplaceInput(1, n29);
+  n24->ReplaceInput(2, n21);
+  n10->ReplaceInput(1, n24);
+  n10->ReplaceInput(2, n7);
+  n14->ReplaceInput(1, n10);
+  n14->ReplaceInput(2, n6);
+  op = common_builder.Phi(2);
+  Node* n12 = graph.NewNode(op, nil, nil, nil);
+  USE(n12);
+  n12->ReplaceInput(0, n0);
+  n12->ReplaceInput(1, n27);
+  n12->ReplaceInput(2, n7);
+  n14->ReplaceInput(3, n12);
+  n14->ReplaceInput(4, n7);
+  n18->ReplaceInput(2, n14);
+  n18->ReplaceInput(3, n16);
+  n20->ReplaceInput(0, n18);
+  n20->ReplaceInput(1, n19);
+  n20->ReplaceInput(2, n6);
+  n20->ReplaceInput(3, n18);
+  n20->ReplaceInput(4, n16);
+  n23->ReplaceInput(0, n20);
+  n23->ReplaceInput(1, n23);
+  n23->ReplaceInput(2, n21);
+  n9->ReplaceInput(1, n23);
+  n9->ReplaceInput(2, n7);
+  n35->ReplaceInput(0, n9);
+  op = js_builder.Add();
+  Node* n44 = graph.NewNode(op, nil, nil, nil, nil, nil);
+  USE(n44);
+  n44->ReplaceInput(0, n35);
+  op = common_builder.NumberConstant(0);
+  Node* n43 = graph.NewNode(op);
+  USE(n43);
+  n44->ReplaceInput(1, n43);
+  n44->ReplaceInput(2, n6);
+  op = js_builder.LessThan();
+  Node* n39 = graph.NewNode(op, nil, nil, nil, nil, nil);
+  USE(n39);
+  n39->ReplaceInput(0, n35);
+  op = common_builder.Phi(2);
+  Node* n36 = graph.NewNode(op, nil, nil, nil);
+  USE(n36);
+  n36->ReplaceInput(0, n10);
+  n36->ReplaceInput(1, n36);
+  op = common_builder.Loop(2);
+  Node* n33 = graph.NewNode(op, nil, nil);
+  USE(n33);
+  op = common_builder.IfFalse();
+  Node* n17 = graph.NewNode(op, nil);
+  USE(n17);
+  n17->ReplaceInput(0, n15);
+  n33->ReplaceInput(0, n17);
+  op = common_builder.IfTrue();
+  Node* n41 = graph.NewNode(op, nil);
+  USE(n41);
+  op = common_builder.Branch();
+  Node* n40 = graph.NewNode(op, nil, nil);
+  USE(n40);
+  n40->ReplaceInput(0, n39);
+  n40->ReplaceInput(1, n33);
+  n41->ReplaceInput(0, n40);
+  n33->ReplaceInput(1, n41);
+  n36->ReplaceInput(2, n33);
+  n39->ReplaceInput(1, n36);
+  n39->ReplaceInput(2, n6);
+  op = common_builder.Phi(2);
+  Node* n38 = graph.NewNode(op, nil, nil, nil);
+  USE(n38);
+  n38->ReplaceInput(0, n14);
+  n38->ReplaceInput(1, n44);
+  n38->ReplaceInput(2, n33);
+  n39->ReplaceInput(3, n38);
+  n39->ReplaceInput(4, n33);
+  n44->ReplaceInput(3, n39);
+  n44->ReplaceInput(4, n41);
+  n35->ReplaceInput(1, n44);
+  n35->ReplaceInput(2, n33);
+  n45->ReplaceInput(0, n35);
+  n45->ReplaceInput(1, n39);
+  op = common_builder.IfFalse();
+  Node* n42 = graph.NewNode(op, nil);
+  USE(n42);
+  n42->ReplaceInput(0, n40);
+  n45->ReplaceInput(2, n42);
+  n46->ReplaceInput(0, n45);
+
+  graph.SetStart(n0);
+  graph.SetEnd(n46);
+
+  PrintGraph(&graph);
+
+  Schedule* schedule = Scheduler::ComputeSchedule(&graph);
+
+  PrintSchedule(schedule);
+
+  CHECK_EQ(46, GetScheduledNodeCount(schedule));
+}
+
+
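The next test adds break and continue, which surface in the graph as explicit Merge control nodes joining the abnormal exit edges back into the surrounding loop structure (n40 and n53 below). In isolation the shape looks like this (a sketch; control_a/control_b and value_a/value_b are hypothetical placeholders):

// A two-predecessor merge: both control edges join here, and any value
// that differs along the two paths needs a Phi anchored on the merge.
Node* merge = graph.NewNode(common_builder.Merge(2), control_a, control_b);
Node* value = graph.NewNode(common_builder.Phi(2), value_a, value_b, merge);
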
+TEST(BuildScheduleBreakAndContinue) {
+  HandleAndZoneScope scope;
+  Isolate* isolate = scope.main_isolate();
+  Graph graph(scope.main_zone());
+  CommonOperatorBuilder common_builder(scope.main_zone());
+  JSOperatorBuilder js_builder(scope.main_zone());
+  Operator* op;
+
+  Handle<Object> object =
+      Handle<Object>(isolate->heap()->undefined_value(), isolate);
+  PrintableUnique<Object> unique_constant =
+      PrintableUnique<Object>::CreateUninitialized(scope.main_zone(), object);
+
+  // Manually transcribed code for:
+  // function turbo_fan_test(a, b, c) {
+  //   var d = 0;
+  //   while (a < b) {
+  //     a++;
+  //     while (c < b) {
+  //       c++;
+  //       if (d == 0) break;
+  //       a++;
+  //     }
+  //     if (a == 1) continue;
+  //     d++;
+  //   }
+  //   return a + d;
+  // }
+  op = common_builder.Start(0);
+  Node* n0 = graph.NewNode(op);
+  USE(n0);
+  Node* nil = graph.NewNode(common_builder.Dead());
+  op = common_builder.End();
+  Node* n58 = graph.NewNode(op, nil);
+  USE(n58);
+  op = common_builder.Return();
+  Node* n57 = graph.NewNode(op, nil, nil, nil);
+  USE(n57);
+  op = js_builder.Add();
+  Node* n56 = graph.NewNode(op, nil, nil, nil, nil, nil);
+  USE(n56);
+  op = common_builder.Phi(2);
+  Node* n10 = graph.NewNode(op, nil, nil, nil);
+  USE(n10);
+  op = common_builder.Parameter(0);
+  Node* n2 = graph.NewNode(op, n0);
+  USE(n2);
+  n10->ReplaceInput(0, n2);
+  op = common_builder.Phi(2);
+  Node* n25 = graph.NewNode(op, nil, nil, nil);
+  USE(n25);
+  op = js_builder.Add();
+  Node* n22 = graph.NewNode(op, nil, nil, nil, nil, nil);
+  USE(n22);
+  op = js_builder.ToNumber();
+  Node* n20 = graph.NewNode(op, nil, nil, nil, nil);
+  USE(n20);
+  n20->ReplaceInput(0, n10);
+  op = common_builder.HeapConstant(unique_constant);
+  Node* n6 = graph.NewNode(op);
+  USE(n6);
+  n20->ReplaceInput(1, n6);
+  op = js_builder.LessThan();
+  Node* n16 = graph.NewNode(op, nil, nil, nil, nil, nil);
+  USE(n16);
+  n16->ReplaceInput(0, n10);
+  op = common_builder.Phi(2);
+  Node* n11 = graph.NewNode(op, nil, nil, nil);
+  USE(n11);
+  op = common_builder.Parameter(0);
+  Node* n3 = graph.NewNode(op, n0);
+  USE(n3);
+  n11->ReplaceInput(0, n3);
+  op = common_builder.Phi(2);
+  Node* n26 = graph.NewNode(op, nil, nil, nil);
+  USE(n26);
+  n26->ReplaceInput(0, n11);
+  n26->ReplaceInput(1, n26);
+  op = common_builder.Loop(2);
+  Node* n23 = graph.NewNode(op, nil, nil);
+  USE(n23);
+  op = common_builder.IfTrue();
+  Node* n18 = graph.NewNode(op, nil);
+  USE(n18);
+  op = common_builder.Branch();
+  Node* n17 = graph.NewNode(op, nil, nil);
+  USE(n17);
+  n17->ReplaceInput(0, n16);
+  op = common_builder.Loop(2);
+  Node* n8 = graph.NewNode(op, nil, nil);
+  USE(n8);
+  n8->ReplaceInput(0, n0);
+  op = common_builder.Merge(2);
+  Node* n53 = graph.NewNode(op, nil, nil);
+  USE(n53);
+  op = common_builder.IfTrue();
+  Node* n49 = graph.NewNode(op, nil);
+  USE(n49);
+  op = common_builder.Branch();
+  Node* n48 = graph.NewNode(op, nil, nil);
+  USE(n48);
+  op = js_builder.Equal();
+  Node* n47 = graph.NewNode(op, nil, nil, nil, nil, nil);
+  USE(n47);
+  n47->ReplaceInput(0, n25);
+  op = common_builder.NumberConstant(0);
+  Node* n46 = graph.NewNode(op);
+  USE(n46);
+  n47->ReplaceInput(1, n46);
+  n47->ReplaceInput(2, n6);
+  op = common_builder.Phi(2);
+  Node* n42 = graph.NewNode(op, nil, nil, nil);
+  USE(n42);
+  op = js_builder.LessThan();
+  Node* n30 = graph.NewNode(op, nil, nil, nil, nil, nil);
+  USE(n30);
+  op = common_builder.Phi(2);
+  Node* n27 = graph.NewNode(op, nil, nil, nil);
+  USE(n27);
+  op = common_builder.Phi(2);
+  Node* n12 = graph.NewNode(op, nil, nil, nil);
+  USE(n12);
+  op = common_builder.Parameter(0);
+  Node* n4 = graph.NewNode(op, n0);
+  USE(n4);
+  n12->ReplaceInput(0, n4);
+  op = common_builder.Phi(2);
+  Node* n41 = graph.NewNode(op, nil, nil, nil);
+  USE(n41);
+  n41->ReplaceInput(0, n27);
+  op = js_builder.Add();
+  Node* n35 = graph.NewNode(op, nil, nil, nil, nil, nil);
+  USE(n35);
+  op = js_builder.ToNumber();
+  Node* n34 = graph.NewNode(op, nil, nil, nil, nil);
+  USE(n34);
+  n34->ReplaceInput(0, n27);
+  n34->ReplaceInput(1, n6);
+  n34->ReplaceInput(2, n30);
+  op = common_builder.IfTrue();
+  Node* n32 = graph.NewNode(op, nil);
+  USE(n32);
+  op = common_builder.Branch();
+  Node* n31 = graph.NewNode(op, nil, nil);
+  USE(n31);
+  n31->ReplaceInput(0, n30);
+  n31->ReplaceInput(1, n23);
+  n32->ReplaceInput(0, n31);
+  n34->ReplaceInput(3, n32);
+  n35->ReplaceInput(0, n34);
+  op = common_builder.NumberConstant(0);
+  Node* n21 = graph.NewNode(op);
+  USE(n21);
+  n35->ReplaceInput(1, n21);
+  n35->ReplaceInput(2, n6);
+  n35->ReplaceInput(3, n34);
+  n35->ReplaceInput(4, n32);
+  n41->ReplaceInput(1, n35);
+  op = common_builder.Merge(2);
+  Node* n40 = graph.NewNode(op, nil, nil);
+  USE(n40);
+  op = common_builder.IfFalse();
+  Node* n33 = graph.NewNode(op, nil);
+  USE(n33);
+  n33->ReplaceInput(0, n31);
+  n40->ReplaceInput(0, n33);
+  op = common_builder.IfTrue();
+  Node* n39 = graph.NewNode(op, nil);
+  USE(n39);
+  op = common_builder.Branch();
+  Node* n38 = graph.NewNode(op, nil, nil);
+  USE(n38);
+  op = js_builder.Equal();
+  Node* n37 = graph.NewNode(op, nil, nil, nil, nil, nil);
+  USE(n37);
+  op = common_builder.Phi(2);
+  Node* n28 = graph.NewNode(op, nil, nil, nil);
+  USE(n28);
+  op = common_builder.Phi(2);
+  Node* n13 = graph.NewNode(op, nil, nil, nil);
+  USE(n13);
+  op = common_builder.NumberConstant(0);
+  Node* n7 = graph.NewNode(op);
+  USE(n7);
+  n13->ReplaceInput(0, n7);
+  op = common_builder.Phi(2);
+  Node* n54 = graph.NewNode(op, nil, nil, nil);
+  USE(n54);
+  n54->ReplaceInput(0, n28);
+  op = js_builder.Add();
+  Node* n52 = graph.NewNode(op, nil, nil, nil, nil, nil);
+  USE(n52);
+  op = js_builder.ToNumber();
+  Node* n51 = graph.NewNode(op, nil, nil, nil, nil);
+  USE(n51);
+  n51->ReplaceInput(0, n28);
+  n51->ReplaceInput(1, n6);
+  n51->ReplaceInput(2, n47);
+  op = common_builder.IfFalse();
+  Node* n50 = graph.NewNode(op, nil);
+  USE(n50);
+  n50->ReplaceInput(0, n48);
+  n51->ReplaceInput(3, n50);
+  n52->ReplaceInput(0, n51);
+  n52->ReplaceInput(1, n21);
+  n52->ReplaceInput(2, n6);
+  n52->ReplaceInput(3, n51);
+  n52->ReplaceInput(4, n50);
+  n54->ReplaceInput(1, n52);
+  n54->ReplaceInput(2, n53);
+  n13->ReplaceInput(1, n54);
+  n13->ReplaceInput(2, n8);
+  n28->ReplaceInput(0, n13);
+  n28->ReplaceInput(1, n28);
+  n28->ReplaceInput(2, n23);
+  n37->ReplaceInput(0, n28);
+  op = common_builder.NumberConstant(0);
+  Node* n36 = graph.NewNode(op);
+  USE(n36);
+  n37->ReplaceInput(1, n36);
+  n37->ReplaceInput(2, n6);
+  n37->ReplaceInput(3, n35);
+  n37->ReplaceInput(4, n32);
+  n38->ReplaceInput(0, n37);
+  n38->ReplaceInput(1, n32);
+  n39->ReplaceInput(0, n38);
+  n40->ReplaceInput(1, n39);
+  n41->ReplaceInput(2, n40);
+  n12->ReplaceInput(1, n41);
+  n12->ReplaceInput(2, n8);
+  n27->ReplaceInput(0, n12);
+  n27->ReplaceInput(1, n35);
+  n27->ReplaceInput(2, n23);
+  n30->ReplaceInput(0, n27);
+  n30->ReplaceInput(1, n26);
+  n30->ReplaceInput(2, n6);
+  op = common_builder.Phi(2);
+  Node* n29 = graph.NewNode(op, nil, nil, nil);
+  USE(n29);
+  n29->ReplaceInput(0, n22);
+  op = js_builder.Add();
+  Node* n45 = graph.NewNode(op, nil, nil, nil, nil, nil);
+  USE(n45);
+  op = js_builder.ToNumber();
+  Node* n44 = graph.NewNode(op, nil, nil, nil, nil);
+  USE(n44);
+  n44->ReplaceInput(0, n25);
+  n44->ReplaceInput(1, n6);
+  n44->ReplaceInput(2, n37);
+  op = common_builder.IfFalse();
+  Node* n43 = graph.NewNode(op, nil);
+  USE(n43);
+  n43->ReplaceInput(0, n38);
+  n44->ReplaceInput(3, n43);
+  n45->ReplaceInput(0, n44);
+  n45->ReplaceInput(1, n21);
+  n45->ReplaceInput(2, n6);
+  n45->ReplaceInput(3, n44);
+  n45->ReplaceInput(4, n43);
+  n29->ReplaceInput(1, n45);
+  n29->ReplaceInput(2, n23);
+  n30->ReplaceInput(3, n29);
+  n30->ReplaceInput(4, n23);
+  n42->ReplaceInput(0, n30);
+  n42->ReplaceInput(1, n37);
+  n42->ReplaceInput(2, n40);
+  n47->ReplaceInput(3, n42);
+  n47->ReplaceInput(4, n40);
+  n48->ReplaceInput(0, n47);
+  n48->ReplaceInput(1, n40);
+  n49->ReplaceInput(0, n48);
+  n53->ReplaceInput(0, n49);
+  n53->ReplaceInput(1, n50);
+  n8->ReplaceInput(1, n53);
+  n17->ReplaceInput(1, n8);
+  n18->ReplaceInput(0, n17);
+  n23->ReplaceInput(0, n18);
+  n23->ReplaceInput(1, n43);
+  n26->ReplaceInput(2, n23);
+  n11->ReplaceInput(1, n26);
+  n11->ReplaceInput(2, n8);
+  n16->ReplaceInput(1, n11);
+  n16->ReplaceInput(2, n6);
+  op = common_builder.Phi(2);
+  Node* n14 = graph.NewNode(op, nil, nil, nil);
+  USE(n14);
+  n14->ReplaceInput(0, n0);
+  op = common_builder.Phi(2);
+  Node* n55 = graph.NewNode(op, nil, nil, nil);
+  USE(n55);
+  n55->ReplaceInput(0, n47);
+  n55->ReplaceInput(1, n52);
+  n55->ReplaceInput(2, n53);
+  n14->ReplaceInput(1, n55);
+  n14->ReplaceInput(2, n8);
+  n16->ReplaceInput(3, n14);
+  n16->ReplaceInput(4, n8);
+  n20->ReplaceInput(2, n16);
+  n20->ReplaceInput(3, n18);
+  n22->ReplaceInput(0, n20);
+  n22->ReplaceInput(1, n21);
+  n22->ReplaceInput(2, n6);
+  n22->ReplaceInput(3, n20);
+  n22->ReplaceInput(4, n18);
+  n25->ReplaceInput(0, n22);
+  n25->ReplaceInput(1, n45);
+  n25->ReplaceInput(2, n23);
+  n10->ReplaceInput(1, n25);
+  n10->ReplaceInput(2, n8);
+  n56->ReplaceInput(0, n10);
+  n56->ReplaceInput(1, n13);
+  n56->ReplaceInput(2, n6);
+  n56->ReplaceInput(3, n16);
+  op = common_builder.IfFalse();
+  Node* n19 = graph.NewNode(op, nil);
+  USE(n19);
+  n19->ReplaceInput(0, n17);
+  n56->ReplaceInput(4, n19);
+  n57->ReplaceInput(0, n56);
+  n57->ReplaceInput(1, n56);
+  n57->ReplaceInput(2, n19);
+  n58->ReplaceInput(0, n57);
+
+  graph.SetStart(n0);
+  graph.SetEnd(n58);
+
+  PrintGraph(&graph);
+
+  Schedule* schedule = Scheduler::ComputeSchedule(&graph);
+
+  PrintSchedule(schedule);
+
+  CHECK_EQ(62, GetScheduledNodeCount(schedule));
+}
+
+
+TEST(BuildScheduleSimpleLoopWithCodeMotion) {
+  HandleAndZoneScope scope;
+  Isolate* isolate = scope.main_isolate();
+  Graph graph(scope.main_zone());
+  CommonOperatorBuilder common_builder(scope.main_zone());
+  JSOperatorBuilder js_builder(scope.main_zone());
+  MachineOperatorBuilder machine_builder(scope.main_zone(), kMachineWord32);
+  Operator* op;
+
+  Handle<Object> object =
+      Handle<Object>(isolate->heap()->undefined_value(), isolate);
+  PrintableUnique<Object> unique_constant =
+      PrintableUnique<Object>::CreateUninitialized(scope.main_zone(), object);
+
+  // Manually transcribed code for:
+  // function turbo_fan_test(a, b, c) {
+  //   while (a < b) {
+  //     a += b + c;
+  //   }
+  //   return a;
+  // }
+  op = common_builder.Start(0);
+  Node* n0 = graph.NewNode(op);
+  USE(n0);
+  Node* nil = graph.NewNode(common_builder.Dead());
+  op = common_builder.End();
+  Node* n22 = graph.NewNode(op, nil);
+  USE(n22);
+  op = common_builder.Return();
+  Node* n21 = graph.NewNode(op, nil, nil, nil);
+  USE(n21);
+  op = common_builder.Phi(2);
+  Node* n9 = graph.NewNode(op, nil, nil, nil);
+  USE(n9);
+  op = common_builder.Parameter(0);
+  Node* n2 = graph.NewNode(op, n0);
+  USE(n2);
+  n9->ReplaceInput(0, n2);
+  op = js_builder.Add();
+  Node* n20 = graph.NewNode(op, nil, nil, nil, nil, nil);
+  USE(n20);
+  n20->ReplaceInput(0, n9);
+  op = machine_builder.Int32Add();
+  Node* n19 = graph.NewNode(op, nil, nil);
+  USE(n19);
+  op = common_builder.Phi(2);
+  Node* n10 = graph.NewNode(op, nil, nil, nil);
+  USE(n10);
+  op = common_builder.Parameter(0);
+  Node* n3 = graph.NewNode(op, n0);
+  USE(n3);
+  n10->ReplaceInput(0, n3);
+  n10->ReplaceInput(1, n10);
+  op = common_builder.Loop(2);
+  Node* n7 = graph.NewNode(op, nil, nil);
+  USE(n7);
+  n7->ReplaceInput(0, n0);
+  op = common_builder.IfTrue();
+  Node* n17 = graph.NewNode(op, nil);
+  USE(n17);
+  op = common_builder.Branch();
+  Node* n16 = graph.NewNode(op, nil, nil);
+  USE(n16);
+  op = js_builder.ToBoolean();
+  Node* n15 = graph.NewNode(op, nil, nil, nil, nil);
+  USE(n15);
+  op = js_builder.LessThan();
+  Node* n14 = graph.NewNode(op, nil, nil, nil, nil, nil);
+  USE(n14);
+  n14->ReplaceInput(0, n9);
+  n14->ReplaceInput(1, n10);
+  op = common_builder.HeapConstant(unique_constant);
+  Node* n6 = graph.NewNode(op);
+  USE(n6);
+  n14->ReplaceInput(2, n6);
+  op = common_builder.Phi(2);
+  Node* n12 = graph.NewNode(op, nil, nil, nil);
+  USE(n12);
+  n12->ReplaceInput(0, n0);
+  n12->ReplaceInput(1, n20);
+  n12->ReplaceInput(2, n7);
+  n14->ReplaceInput(3, n12);
+  n14->ReplaceInput(4, n7);
+  n15->ReplaceInput(0, n14);
+  n15->ReplaceInput(1, n6);
+  n15->ReplaceInput(2, n14);
+  n15->ReplaceInput(3, n7);
+  n16->ReplaceInput(0, n15);
+  n16->ReplaceInput(1, n7);
+  n17->ReplaceInput(0, n16);
+  n7->ReplaceInput(1, n17);
+  n10->ReplaceInput(2, n7);
+  n19->ReplaceInput(0, n2);
+  op = common_builder.Phi(2);
+  Node* n11 = graph.NewNode(op, nil, nil, nil);
+  USE(n11);
+  op = common_builder.Parameter(0);
+  Node* n4 = graph.NewNode(op, n0);
+  USE(n4);
+  n11->ReplaceInput(0, n4);
+  n11->ReplaceInput(1, n11);
+  n11->ReplaceInput(2, n7);
+  n19->ReplaceInput(1, n3);
+  n20->ReplaceInput(1, n19);
+  n20->ReplaceInput(2, n6);
+  n20->ReplaceInput(3, n19);
+  n20->ReplaceInput(4, n17);
+  n9->ReplaceInput(1, n20);
+  n9->ReplaceInput(2, n7);
+  n21->ReplaceInput(0, n9);
+  n21->ReplaceInput(1, n15);
+  op = common_builder.IfFalse();
+  Node* n18 = graph.NewNode(op, nil);
+  USE(n18);
+  n18->ReplaceInput(0, n16);
+  n21->ReplaceInput(2, n18);
+  n22->ReplaceInput(0, n21);
+
+  graph.SetStart(n0);
+  graph.SetEnd(n22);
+
+  PrintGraph(&graph);
+
+  Schedule* schedule = Scheduler::ComputeSchedule(&graph);
+
+  PrintSchedule(schedule);
+
+  CHECK_EQ(19, GetScheduledNodeCount(schedule));
+
+  // Make sure the integer-only add gets hoisted to a different block than the
+  // JSAdd.
+  CHECK(schedule->block(n19) != schedule->block(n20));
+}
+
+
+#if V8_TURBOFAN_TARGET
+
+// So we can get a real JS function.
+static Handle<JSFunction> Compile(const char* source) {
+  Isolate* isolate = CcTest::i_isolate();
+  Handle<String> source_code = isolate->factory()
+                                   ->NewStringFromUtf8(CStrVector(source))
+                                   .ToHandleChecked();
+  Handle<SharedFunctionInfo> shared_function = Compiler::CompileScript(
+      source_code, Handle<String>(), 0, 0, false,
+      Handle<Context>(isolate->native_context()), NULL, NULL,
+      v8::ScriptCompiler::kNoCompileOptions, NOT_NATIVES_CODE);
+  return isolate->factory()->NewFunctionFromSharedFunctionInfo(
+      shared_function, isolate->native_context());
+}
+
+
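Compile() exists because the lazy-deopt test needs a real JSFunction: a CallDescriptor is derived from the function through Linkage, and the Call node cannot be built without one. Condensed, the setup the test below performs is:

// A real function yields a CallDescriptor through Linkage, which the
// Call node requires (shape taken from the test that follows).
Handle<JSFunction> function = Compile("m()");
CompilationInfoWithZone info(function);
Linkage linkage(&info);
CallDescriptor* descriptor = linkage.GetJSCallDescriptor(0);
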
+TEST(BuildScheduleTrivialLazyDeoptCall) {
+  FLAG_turbo_deoptimization = true;
+
+  HandleAndZoneScope scope;
+  Isolate* isolate = scope.main_isolate();
+  Graph graph(scope.main_zone());
+  CommonOperatorBuilder common(scope.main_zone());
+  JSOperatorBuilder js_builder(scope.main_zone());
+
+  InitializedHandleScope handles;
+  Handle<JSFunction> function = Compile("m()");
+  CompilationInfoWithZone info(function);
+  Linkage linkage(&info);
+
+  // Manually transcribed code for:
+  // function turbo_fan_test() {
+  //   m();
+  // }
+  // where m can lazy deopt (so it has a deopt block associated with it).
+
+
+  //                   Start                                   //
+  //                     ^                                     //
+  //                     | (EC)                                //
+  //                     |                                     //
+  //           /------> Call <--------------\                  //
+  //          /          ^  ^                \                 //
+  //         /           |  |                 \      undef     //
+  //        /           /    \                 \       ^       //
+  //   (E) | (C)       /      \ (C)             \ (E)  |       //
+  //       |  Continuation  LazyDeoptimization  |      |       //
+  //        \___  ^              ^             /       |       //
+  //            \ |              | ______/       Framestate    //
+  //      undef  \|         (VC) | (C)  /             ^        //
+  //              \\             |  |  /             /         //
+  //             Return    Deoptimization ----------/          //
+  //                ^            ^                             //
+  //                 \          /                              //
+  //             (C)  \        / (C)                           //
+  //                   \      /                                //
+  //                    Merge                                  //
+  //                      ^                                    //
+  //                      |                                    //
+  //                     End                                   //
+
+  Handle<Object> undef_object =
+      Handle<Object>(isolate->heap()->undefined_value(), isolate);
+  PrintableUnique<Object> undef_constant =
+      PrintableUnique<Object>::CreateUninitialized(scope.main_zone(),
+                                                   undef_object);
+
+  Node* undef_node = graph.NewNode(common.HeapConstant(undef_constant));
+
+  Node* start_node = graph.NewNode(common.Start(0));
+
+  CallDescriptor* descriptor = linkage.GetJSCallDescriptor(0);
+  Node* call_node = graph.NewNode(common.Call(descriptor),
+                                  undef_node,   // function
+                                  undef_node,   // context
+                                  start_node,   // effect
+                                  start_node);  // control
+
+  Node* cont_node = graph.NewNode(common.Continuation(), call_node);
+  Node* lazy_deopt_node = graph.NewNode(common.LazyDeoptimization(), call_node);
+
+  Node* parameters = graph.NewNode(common.StateValues(1), undef_node);
+  Node* locals = graph.NewNode(common.StateValues(0));
+  Node* stack = graph.NewNode(common.StateValues(0));
+
+  Node* state_node = graph.NewNode(common.FrameState(BailoutId(1234)),
+                                   parameters, locals, stack);
+
+  Node* return_node = graph.NewNode(common.Return(),
+                                    undef_node,  // return value
+                                    call_node,   // effect
+                                    cont_node);  // control
+  Node* deoptimization_node = graph.NewNode(common.Deoptimize(),
+                                            state_node,        // deopt environment
+                                            call_node,         // effect
+                                            lazy_deopt_node);  // control
+
+  Node* merge_node =
+      graph.NewNode(common.Merge(2), return_node, deoptimization_node);
+
+  Node* end_node = graph.NewNode(common.End(), merge_node);
+
+  graph.SetStart(start_node);
+  graph.SetEnd(end_node);
+
+  PrintGraph(&graph);
+
+  Schedule* schedule = Scheduler::ComputeSchedule(&graph);
+
+  PrintSchedule(schedule);
+
+  // Tests:
+  // Continuation and deopt have basic blocks.
+  BasicBlock* cont_block = schedule->block(cont_node);
+  BasicBlock* deopt_block = schedule->block(lazy_deopt_node);
+  BasicBlock* call_block = schedule->block(call_node);
+  CHECK_NE(NULL, cont_block);
+  CHECK_NE(NULL, deopt_block);
+  CHECK_NE(NULL, call_block);
+  // The basic blocks are different.
+  CHECK_NE(cont_block, deopt_block);
+  CHECK_NE(cont_block, call_block);
+  CHECK_NE(deopt_block, call_block);
+  // The call node finishes its own basic block.
+  CHECK_EQ(BasicBlock::kCall, call_block->control_);
+  CHECK_EQ(call_node, call_block->control_input_);
+  // The lazy deopt block is deferred.
+  CHECK(deopt_block->deferred_);
+  CHECK(!call_block->deferred_);
+  CHECK(!cont_block->deferred_);
+  // The lazy deopt block contains framestate + bailout (and nothing else).
+  CHECK_EQ(deoptimization_node, deopt_block->control_input_);
+  CHECK_EQ(5, static_cast<int>(deopt_block->nodes_.size()));
+  CHECK_EQ(lazy_deopt_node, deopt_block->nodes_[0]);
+  CHECK_EQ(IrOpcode::kStateValues, deopt_block->nodes_[1]->op()->opcode());
+  CHECK_EQ(IrOpcode::kStateValues, deopt_block->nodes_[2]->op()->opcode());
+  CHECK_EQ(IrOpcode::kStateValues, deopt_block->nodes_[3]->op()->opcode());
+  CHECK_EQ(state_node, deopt_block->nodes_[4]);
+}
+
+#endif
diff --git a/deps/v8/test/cctest/compiler/test-simplified-lowering.cc b/deps/v8/test/cctest/compiler/test-simplified-lowering.cc
new file mode 100644
index 00000000000..18f4136b904
--- /dev/null
+++ b/deps/v8/test/cctest/compiler/test-simplified-lowering.cc
@@ -0,0 +1,1372 @@
+// Copyright 2014 the V8 project authors. All rights reserved.
+// Use of this source code is governed by a BSD-style license that can be
+// found in the LICENSE file.
+
+#include <limits>
+
+#include "src/compiler/control-builders.h"
+#include "src/compiler/generic-node-inl.h"
+#include "src/compiler/graph-visualizer.h"
+#include "src/compiler/node-properties-inl.h"
+#include "src/compiler/pipeline.h"
+#include "src/compiler/representation-change.h"
+#include "src/compiler/simplified-lowering.h"
+#include "src/compiler/simplified-node-factory.h"
+#include "src/compiler/typer.h"
+#include "src/compiler/verifier.h"
+#include "src/execution.h"
+#include "src/parser.h"
+#include "src/rewriter.h"
+#include "src/scopes.h"
+#include "test/cctest/cctest.h"
+#include "test/cctest/compiler/codegen-tester.h"
+#include "test/cctest/compiler/graph-builder-tester.h"
+#include "test/cctest/compiler/value-helper.h"
+
+using namespace v8::internal;
+using namespace v8::internal::compiler;
+
+template <typename ReturnType>
+class SimplifiedLoweringTester : public GraphBuilderTester<ReturnType> {
+ public:
+  SimplifiedLoweringTester(MachineType p0 = kMachineLast,
+                           MachineType p1 = kMachineLast,
+                           MachineType p2 = kMachineLast,
+                           MachineType p3 = kMachineLast,
+                           MachineType p4 = kMachineLast)
+      : GraphBuilderTester<ReturnType>(p0, p1, p2, p3, p4),
+        typer(this->zone()),
+        source_positions(this->graph()),
+        jsgraph(this->graph(), this->common(), &typer),
+        lowering(&jsgraph, &source_positions) {}
+
+  Typer typer;
+  SourcePositionTable source_positions;
+  JSGraph jsgraph;
+  SimplifiedLowering lowering;
+
+  void LowerAllNodes() {
+    this->End();
+    lowering.LowerAllNodes();
+  }
+
+  Factory* factory() { return this->isolate()->factory(); }
+  Heap* heap() { return this->isolate()->heap(); }
+};
+
+
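The helpers that follow construct FieldAccess/ElementAccess descriptors; each one bundles the base's taggedness, a byte offset, an optional name, a type, and a machine representation. A hypothetical companion helper in the same style (illustrative only; it assumes JSObject::kElementsOffset, a real V8 constant, rather than any offset used below):

// Hypothetical: a descriptor for the elements backing-store pointer of a
// JSObject, built the same way as the ForJSObject* helpers below.
FieldAccess ForJSObjectElements() {
  FieldAccess access = {kTaggedBase, JSObject::kElementsOffset,
                        Handle<Name>(), Type::Any(), kMachineTagged};
  return access;
}
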
+// TODO(dcarney): find a home for these functions.
+namespace {
+
+FieldAccess ForJSObjectMap() {
+  FieldAccess access = {kTaggedBase, JSObject::kMapOffset, Handle<Name>(),
+                        Type::Any(), kMachineTagged};
+  return access;
+}
+
+
+FieldAccess ForJSObjectProperties() {
+  FieldAccess access = {kTaggedBase, JSObject::kPropertiesOffset,
+                        Handle<Name>(), Type::Any(), kMachineTagged};
+  return access;
+}
+
+
+FieldAccess ForArrayBufferBackingStore() {
+  FieldAccess access = {
+      kTaggedBase,    JSArrayBuffer::kBackingStoreOffset,
+      Handle<Name>(), Type::UntaggedPtr(),
+      MachineOperatorBuilder::pointer_rep(),
+  };
+  return access;
+}
+
+
+ElementAccess ForFixedArrayElement() {
+  ElementAccess access = {kTaggedBase, FixedArray::kHeaderSize, Type::Any(),
+                          kMachineTagged};
+  return access;
+}
+
+
+ElementAccess ForBackingStoreElement(MachineType rep) {
+  ElementAccess access = {kUntaggedBase,
+                          kNonHeapObjectHeaderSize - kHeapObjectTag,
+                          Type::Any(), rep};
+  return access;
+}
+}
+
+
+// Create a simple JSObject with a unique map.
+static Handle<JSObject> TestObject() {
+  static int index = 0;
+  char buffer[50];
+  v8::base::OS::SNPrintF(buffer, 50, "({'a_%d':1})", index++);
+  return Handle<JSObject>::cast(v8::Utils::OpenHandle(*CompileRun(buffer)));
+}
+
+
+TEST(RunLoadMap) {
+  SimplifiedLoweringTester<Object*> t(kMachineTagged);
+  FieldAccess access = ForJSObjectMap();
+  Node* load = t.LoadField(access, t.Parameter(0));
+  t.Return(load);
+
+  t.LowerAllNodes();
+  t.GenerateCode();
+
+  if (Pipeline::SupportedTarget()) {
+    Handle<JSObject> src = TestObject();
+    Handle<Map> src_map(src->map());
+    Object* result = t.Call(*src);  // TODO(titzer): raw pointers in call
+    CHECK_EQ(*src_map, result);
+  }
+}
+
+
+TEST(RunStoreMap) {
+  SimplifiedLoweringTester<int32_t> t(kMachineTagged, kMachineTagged);
+  FieldAccess access = ForJSObjectMap();
+  t.StoreField(access, t.Parameter(1), t.Parameter(0));
+  t.Return(t.jsgraph.TrueConstant());
+
+  t.LowerAllNodes();
+  t.GenerateCode();
+
+  if (Pipeline::SupportedTarget()) {
+    Handle<JSObject> src = TestObject();
+    Handle<Map> src_map(src->map());
+    Handle<JSObject> dst = TestObject();
+    CHECK(src->map() != dst->map());
+    t.Call(*src_map, *dst);  // TODO(titzer): raw pointers in call
+    CHECK(*src_map == dst->map());
+  }
+}
+
+
+TEST(RunLoadProperties) {
+  SimplifiedLoweringTester<Object*> t(kMachineTagged);
+  FieldAccess access = ForJSObjectProperties();
+  Node* load = t.LoadField(access, t.Parameter(0));
+  t.Return(load);
+
+  t.LowerAllNodes();
+  t.GenerateCode();
+
+  if (Pipeline::SupportedTarget()) {
+    Handle<JSObject> src = TestObject();
+    Handle<FixedArray> src_props(src->properties());
+    Object* result = t.Call(*src);  // TODO(titzer): raw pointers in call
+    CHECK_EQ(*src_props, result);
+  }
+}
+
+
+TEST(RunLoadStoreMap) {
+  SimplifiedLoweringTester<Object*> t(kMachineTagged, kMachineTagged);
+  FieldAccess access = ForJSObjectMap();
+  Node* load = t.LoadField(access, t.Parameter(0));
+  t.StoreField(access, t.Parameter(1), load);
+  t.Return(load);
+
+  t.LowerAllNodes();
+  t.GenerateCode();
+
+  if (Pipeline::SupportedTarget()) {
+    Handle<JSObject> src = TestObject();
+    Handle<Map> src_map(src->map());
+    Handle<JSObject> dst = TestObject();
+    CHECK(src->map() != dst->map());
+    Object* result = t.Call(*src, *dst);  // TODO(titzer): raw pointers in call
+    CHECK(result->IsMap());
+    CHECK_EQ(*src_map, result);
+    CHECK(*src_map == dst->map());
+  }
+}
+
+
+TEST(RunLoadStoreFixedArrayIndex) {
+  SimplifiedLoweringTester<Object*> t(kMachineTagged);
+  ElementAccess access = ForFixedArrayElement();
+  Node* load = t.LoadElement(access, t.Parameter(0), t.Int32Constant(0));
t.LoadElement(access, t.Parameter(0), t.Int32Constant(0)); + t.StoreElement(access, t.Parameter(0), t.Int32Constant(1), load); + t.Return(load); + + t.LowerAllNodes(); + t.GenerateCode(); + + if (Pipeline::SupportedTarget()) { + Handle<FixedArray> array = t.factory()->NewFixedArray(2); + Handle<JSObject> src = TestObject(); + Handle<JSObject> dst = TestObject(); + array->set(0, *src); + array->set(1, *dst); + Object* result = t.Call(*array); + CHECK_EQ(*src, result); + CHECK_EQ(*src, array->get(0)); + CHECK_EQ(*src, array->get(1)); + } +} + + +TEST(RunLoadStoreArrayBuffer) { + SimplifiedLoweringTester<Object*> t(kMachineTagged); + const int index = 12; + ElementAccess buffer_access = ForBackingStoreElement(kMachineWord8); + Node* backing_store = + t.LoadField(ForArrayBufferBackingStore(), t.Parameter(0)); + Node* load = + t.LoadElement(buffer_access, backing_store, t.Int32Constant(index)); + t.StoreElement(buffer_access, backing_store, t.Int32Constant(index + 1), + load); + t.Return(t.jsgraph.TrueConstant()); + + t.LowerAllNodes(); + t.GenerateCode(); + + if (Pipeline::SupportedTarget()) { + Handle<JSArrayBuffer> array = t.factory()->NewJSArrayBuffer(); + const int array_length = 2 * index; + Runtime::SetupArrayBufferAllocatingData(t.isolate(), array, array_length); + uint8_t* data = reinterpret_cast<uint8_t*>(array->backing_store()); + for (int i = 0; i < array_length; i++) { + data[i] = i; + } + + // TODO(titzer): raw pointers in call + Object* result = t.Call(*array); + CHECK_EQ(t.isolate()->heap()->true_value(), result); + for (int i = 0; i < array_length; i++) { + uint8_t expected = i; + if (i == (index + 1)) expected = index; + CHECK_EQ(data[i], expected); + } + } +} + + +TEST(RunLoadFieldFromUntaggedBase) { + Smi* smis[] = {Smi::FromInt(1), Smi::FromInt(2), Smi::FromInt(3)}; + + for (size_t i = 0; i < ARRAY_SIZE(smis); i++) { + int offset = static_cast<int>(i * sizeof(Smi*)); + FieldAccess access = {kUntaggedBase, offset, Handle<Name>(), + Type::Integral32(), kMachineTagged}; + + SimplifiedLoweringTester<Object*> t; + Node* load = t.LoadField(access, t.PointerConstant(smis)); + t.Return(load); + t.LowerAllNodes(); + + if (!Pipeline::SupportedTarget()) continue; + + for (int j = -5; j <= 5; j++) { + Smi* expected = Smi::FromInt(j); + smis[i] = expected; + CHECK_EQ(expected, t.Call()); + } + } +} + + +TEST(RunStoreFieldToUntaggedBase) { + Smi* smis[] = {Smi::FromInt(1), Smi::FromInt(2), Smi::FromInt(3)}; + + for (size_t i = 0; i < ARRAY_SIZE(smis); i++) { + int offset = static_cast<int>(i * sizeof(Smi*)); + FieldAccess access = {kUntaggedBase, offset, Handle<Name>(), + Type::Integral32(), kMachineTagged}; + + SimplifiedLoweringTester<Object*> t(kMachineTagged); + Node* p0 = t.Parameter(0); + t.StoreField(access, t.PointerConstant(smis), p0); + t.Return(p0); + t.LowerAllNodes(); + + if (!Pipeline::SupportedTarget()) continue; + + for (int j = -5; j <= 5; j++) { + Smi* expected = Smi::FromInt(j); + smis[i] = Smi::FromInt(-100); + CHECK_EQ(expected, t.Call(expected)); + CHECK_EQ(expected, smis[i]); + } + } +} + + +TEST(RunLoadElementFromUntaggedBase) { + Smi* smis[] = {Smi::FromInt(1), Smi::FromInt(2), Smi::FromInt(3), + Smi::FromInt(4), Smi::FromInt(5)}; + + for (size_t i = 0; i < ARRAY_SIZE(smis); i++) { // for header sizes + for (size_t j = 0; (i + j) < ARRAY_SIZE(smis); j++) { // for element index + int offset = static_cast<int>(i * sizeof(Smi*)); + ElementAccess access = {kUntaggedBase, offset, Type::Integral32(), + kMachineTagged}; + + SimplifiedLoweringTester<Object*> t; + Node* 
load = t.LoadElement(access, t.PointerConstant(smis), + t.Int32Constant(static_cast<int>(j))); + t.Return(load); + t.LowerAllNodes(); + + if (!Pipeline::SupportedTarget()) continue; + + for (int k = -5; k <= 5; k++) { + Smi* expected = Smi::FromInt(k); + smis[i + j] = expected; + CHECK_EQ(expected, t.Call()); + } + } + } +} + + +TEST(RunStoreElementFromUntaggedBase) { + Smi* smis[] = {Smi::FromInt(1), Smi::FromInt(2), Smi::FromInt(3), + Smi::FromInt(4), Smi::FromInt(5)}; + + for (size_t i = 0; i < ARRAY_SIZE(smis); i++) { // for header sizes + for (size_t j = 0; (i + j) < ARRAY_SIZE(smis); j++) { // for element index + int offset = static_cast<int>(i * sizeof(Smi*)); + ElementAccess access = {kUntaggedBase, offset, Type::Integral32(), + kMachineTagged}; + + SimplifiedLoweringTester<Object*> t(kMachineTagged); + Node* p0 = t.Parameter(0); + t.StoreElement(access, t.PointerConstant(smis), + t.Int32Constant(static_cast<int>(j)), p0); + t.Return(p0); + t.LowerAllNodes(); + + if (!Pipeline::SupportedTarget()) continue; + + for (int k = -5; k <= 5; k++) { + Smi* expected = Smi::FromInt(k); + smis[i + j] = Smi::FromInt(-100); + CHECK_EQ(expected, t.Call(expected)); + CHECK_EQ(expected, smis[i + j]); + } + + // TODO(titzer): assert the contents of the array. + } + } +} + + +// A helper class for accessing fields and elements of various types, on both +// tagged and untagged base pointers. Contains both tagged and untagged buffers +// for testing direct memory access from generated code. +template <typename E> +class AccessTester : public HandleAndZoneScope { + public: + bool tagged; + MachineType rep; + E* original_elements; + size_t num_elements; + E* untagged_array; + Handle<ByteArray> tagged_array; // TODO(titzer): use FixedArray for tagged. + + AccessTester(bool t, MachineType r, E* orig, size_t num) + : tagged(t), + rep(r), + original_elements(orig), + num_elements(num), + untagged_array(static_cast<E*>(malloc(ByteSize()))), + tagged_array(main_isolate()->factory()->NewByteArray( + static_cast<int>(ByteSize()))) { + Reinitialize(); + } + + ~AccessTester() { free(untagged_array); } + + size_t ByteSize() { return num_elements * sizeof(E); } + + // Nuke both {untagged_array} and {tagged_array} with {original_elements}. + void Reinitialize() { + memcpy(untagged_array, original_elements, ByteSize()); + CHECK_EQ(static_cast<int>(ByteSize()), tagged_array->length()); + E* raw = reinterpret_cast<E*>(tagged_array->GetDataStartAddress()); + memcpy(raw, original_elements, ByteSize()); + } + + // Create and run code that copies the element in either {untagged_array} + // or {tagged_array} at index {from_index} to index {to_index}. + void RunCopyElement(int from_index, int to_index) { + // TODO(titzer): test element and field accesses where the base is not + // a constant in the code. + BoundsCheck(from_index); + BoundsCheck(to_index); + ElementAccess access = GetElementAccess(); + + SimplifiedLoweringTester<Object*> t; + Node* ptr = GetBaseNode(&t); + Node* load = t.LoadElement(access, ptr, t.Int32Constant(from_index)); + t.StoreElement(access, ptr, t.Int32Constant(to_index), load); + t.Return(t.jsgraph.TrueConstant()); + t.LowerAllNodes(); + t.GenerateCode(); + + if (Pipeline::SupportedTarget()) { + Object* result = t.Call(); + CHECK_EQ(t.isolate()->heap()->true_value(), result); + } + } + + // Create and run code that copies the field in either {untagged_array} + // or {tagged_array} at index {from_index} to index {to_index}. 
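+  // Field accesses address a slot by a fixed byte offset (index * sizeof(E))
+  // baked into the FieldAccess, rather than by a scaled element index.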
+ void RunCopyField(int from_index, int to_index) { + BoundsCheck(from_index); + BoundsCheck(to_index); + FieldAccess from_access = GetFieldAccess(from_index); + FieldAccess to_access = GetFieldAccess(to_index); + + SimplifiedLoweringTester<Object*> t; + Node* ptr = GetBaseNode(&t); + Node* load = t.LoadField(from_access, ptr); + t.StoreField(to_access, ptr, load); + t.Return(t.jsgraph.TrueConstant()); + t.LowerAllNodes(); + t.GenerateCode(); + + if (Pipeline::SupportedTarget()) { + Object* result = t.Call(); + CHECK_EQ(t.isolate()->heap()->true_value(), result); + } + } + + // Create and run code that copies the elements from {this} to {that}. + void RunCopyElements(AccessTester<E>* that) { + SimplifiedLoweringTester<Object*> t; + + Node* one = t.Int32Constant(1); + Node* index = t.Int32Constant(0); + Node* limit = t.Int32Constant(static_cast<int>(num_elements)); + t.environment()->Push(index); + Node* src = this->GetBaseNode(&t); + Node* dst = that->GetBaseNode(&t); + { + LoopBuilder loop(&t); + loop.BeginLoop(); + // Loop exit condition + index = t.environment()->Top(); + Node* condition = t.Int32LessThan(index, limit); + loop.BreakUnless(condition); + // dst[index] = src[index] + index = t.environment()->Pop(); + Node* load = t.LoadElement(this->GetElementAccess(), src, index); + t.StoreElement(that->GetElementAccess(), dst, index, load); + // index++ + index = t.Int32Add(index, one); + t.environment()->Push(index); + // continue + loop.EndBody(); + loop.EndLoop(); + } + index = t.environment()->Pop(); + t.Return(t.jsgraph.TrueConstant()); + t.LowerAllNodes(); + t.GenerateCode(); + + if (Pipeline::SupportedTarget()) { + Object* result = t.Call(); + CHECK_EQ(t.isolate()->heap()->true_value(), result); + } + } + + E GetElement(int index) { + BoundsCheck(index); + if (tagged) { + E* raw = reinterpret_cast<E*>(tagged_array->GetDataStartAddress()); + return raw[index]; + } else { + return untagged_array[index]; + } + } + + private: + ElementAccess GetElementAccess() { + ElementAccess access = {tagged ? kTaggedBase : kUntaggedBase, + tagged ? FixedArrayBase::kHeaderSize : 0, + Type::Any(), rep}; + return access; + } + + FieldAccess GetFieldAccess(int field) { + int offset = field * sizeof(E); + FieldAccess access = {tagged ? kTaggedBase : kUntaggedBase, + offset + (tagged ? FixedArrayBase::kHeaderSize : 0), + Handle<Name>(), Type::Any(), rep}; + return access; + } + + template <typename T> + Node* GetBaseNode(SimplifiedLoweringTester<T>* t) { + return tagged ? t->HeapConstant(tagged_array) + : t->PointerConstant(untagged_array); + } + + void BoundsCheck(int index) { + CHECK_GE(index, 0); + CHECK_LT(index, static_cast<int>(num_elements)); + CHECK_EQ(static_cast<int>(ByteSize()), tagged_array->length()); + } +}; + + +template <typename E> +static void RunAccessTest(MachineType rep, E* original_elements, size_t num) { + int num_elements = static_cast<int>(num); + + for (int taggedness = 0; taggedness < 2; taggedness++) { + AccessTester<E> a(taggedness == 1, rep, original_elements, num); + for (int field = 0; field < 2; field++) { + for (int i = 0; i < num_elements - 1; i++) { + a.Reinitialize(); + if (field == 0) { + a.RunCopyField(i, i + 1); // Test field read/write. + } else { + a.RunCopyElement(i, i + 1); // Test element read/write. + } + if (Pipeline::SupportedTarget()) { // verify. + for (int j = 0; j < num_elements; j++) { + E expect = + j == (i + 1) ? original_elements[i] : original_elements[j]; + CHECK_EQ(expect, a.GetElement(j)); + } + } + } + } + } + // Test array copy. 
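+  // All four taggedness combinations are exercised below:
+  // untagged->untagged, untagged->tagged, tagged->untagged, tagged->tagged.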
+ for (int tf = 0; tf < 2; tf++) { + for (int tt = 0; tt < 2; tt++) { + AccessTester<E> a(tf == 1, rep, original_elements, num); + AccessTester<E> b(tt == 1, rep, original_elements, num); + a.RunCopyElements(&b); + if (Pipeline::SupportedTarget()) { // verify. + for (int i = 0; i < num_elements; i++) { + CHECK_EQ(a.GetElement(i), b.GetElement(i)); + } + } + } + } +} + + +TEST(RunAccessTests_uint8) { + uint8_t data[] = {0x07, 0x16, 0x25, 0x34, 0x43, 0x99, + 0xab, 0x78, 0x89, 0x19, 0x2b, 0x38}; + RunAccessTest<uint8_t>(kMachineWord8, data, ARRAY_SIZE(data)); +} + + +TEST(RunAccessTests_uint16) { + uint16_t data[] = {0x071a, 0x162b, 0x253c, 0x344d, 0x435e, 0x7777}; + RunAccessTest<uint16_t>(kMachineWord16, data, ARRAY_SIZE(data)); +} + + +TEST(RunAccessTests_int32) { + int32_t data[] = {-211, 211, 628347, 2000000000, -2000000000, -1, -100000034}; + RunAccessTest<int32_t>(kMachineWord32, data, ARRAY_SIZE(data)); +} + + +#define V8_2PART_INT64(a, b) (((static_cast<int64_t>(a) << 32) + 0x##b##u)) + + +TEST(RunAccessTests_int64) { + if (kPointerSize != 8) return; + int64_t data[] = {V8_2PART_INT64(0x10111213, 14151617), + V8_2PART_INT64(0x20212223, 24252627), + V8_2PART_INT64(0x30313233, 34353637), + V8_2PART_INT64(0xa0a1a2a3, a4a5a6a7), + V8_2PART_INT64(0xf0f1f2f3, f4f5f6f7)}; + RunAccessTest<int64_t>(kMachineWord64, data, ARRAY_SIZE(data)); +} + + +TEST(RunAccessTests_float64) { + double data[] = {1.25, -1.25, 2.75, 11.0, 11100.8}; + RunAccessTest<double>(kMachineFloat64, data, ARRAY_SIZE(data)); +} + + +TEST(RunAccessTests_Smi) { + Smi* data[] = {Smi::FromInt(-1), Smi::FromInt(-9), + Smi::FromInt(0), Smi::FromInt(666), + Smi::FromInt(77777), Smi::FromInt(Smi::kMaxValue)}; + RunAccessTest<Smi*>(kMachineTagged, data, ARRAY_SIZE(data)); +} + + +// Fills in most of the nodes of the graph in order to make tests shorter. +class TestingGraph : public HandleAndZoneScope, public GraphAndBuilders { + public: + Typer typer; + JSGraph jsgraph; + Node* p0; + Node* p1; + Node* start; + Node* end; + Node* ret; + + explicit TestingGraph(Type* p0_type, Type* p1_type = Type::None()) + : GraphAndBuilders(main_zone()), + typer(main_zone()), + jsgraph(graph(), common(), &typer) { + start = graph()->NewNode(common()->Start(2)); + graph()->SetStart(start); + ret = + graph()->NewNode(common()->Return(), jsgraph.Constant(0), start, start); + end = graph()->NewNode(common()->End(), ret); + graph()->SetEnd(end); + p0 = graph()->NewNode(common()->Parameter(0), start); + p1 = graph()->NewNode(common()->Parameter(1), start); + NodeProperties::SetBounds(p0, Bounds(p0_type)); + NodeProperties::SetBounds(p1, Bounds(p1_type)); + } + + void CheckLoweringBinop(IrOpcode::Value expected, Operator* op) { + Node* node = Return(graph()->NewNode(op, p0, p1)); + Lower(); + CHECK_EQ(expected, node->opcode()); + } + + void CheckLoweringTruncatedBinop(IrOpcode::Value expected, Operator* op, + Operator* trunc) { + Node* node = graph()->NewNode(op, p0, p1); + Return(graph()->NewNode(trunc, node)); + Lower(); + CHECK_EQ(expected, node->opcode()); + } + + void Lower() { + SimplifiedLowering lowering(&jsgraph, NULL); + lowering.LowerAllNodes(); + } + + // Inserts the node as the return value of the graph. + Node* Return(Node* node) { + ret->ReplaceInput(0, node); + return node; + } + + // Inserts the node as the effect input to the return of the graph. + void Effect(Node* node) { ret->ReplaceInput(1, node); } + + Node* ExampleWithOutput(RepType type) { + // TODO(titzer): use parameters with guaranteed representations. 
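+    // Judging from the uses in this file, RepType is a bit set: the r* bits
+    // (rBit, rWord32, rWord64, rFloat64, rTagged) select a machine
+    // representation and the t* bits (tInt32, tUint32, ...) a value type,
+    // so testing one bit at a time suffices here.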
+ if (type & tInt32) { + return graph()->NewNode(machine()->Int32Add(), jsgraph.Int32Constant(1), + jsgraph.Int32Constant(1)); + } else if (type & tUint32) { + return graph()->NewNode(machine()->Word32Shr(), jsgraph.Int32Constant(1), + jsgraph.Int32Constant(1)); + } else if (type & rFloat64) { + return graph()->NewNode(machine()->Float64Add(), + jsgraph.Float64Constant(1), + jsgraph.Float64Constant(1)); + } else if (type & rBit) { + return graph()->NewNode(machine()->Word32Equal(), + jsgraph.Int32Constant(1), + jsgraph.Int32Constant(1)); + } else if (type & rWord64) { + return graph()->NewNode(machine()->Int64Add(), Int64Constant(1), + Int64Constant(1)); + } else { + CHECK(type & rTagged); + return p0; + } + } + + Node* Use(Node* node, RepType type) { + if (type & tInt32) { + return graph()->NewNode(machine()->Int32LessThan(), node, + jsgraph.Int32Constant(1)); + } else if (type & tUint32) { + return graph()->NewNode(machine()->Uint32LessThan(), node, + jsgraph.Int32Constant(1)); + } else if (type & rFloat64) { + return graph()->NewNode(machine()->Float64Add(), node, + jsgraph.Float64Constant(1)); + } else if (type & rWord64) { + return graph()->NewNode(machine()->Int64LessThan(), node, + Int64Constant(1)); + } else { + return graph()->NewNode(simplified()->ReferenceEqual(Type::Any()), node, + jsgraph.TrueConstant()); + } + } + + Node* Branch(Node* cond) { + Node* br = graph()->NewNode(common()->Branch(), cond, start); + Node* tb = graph()->NewNode(common()->IfTrue(), br); + Node* fb = graph()->NewNode(common()->IfFalse(), br); + Node* m = graph()->NewNode(common()->Merge(2), tb, fb); + NodeProperties::ReplaceControlInput(ret, m); + return br; + } + + Node* Int64Constant(int64_t v) { + return graph()->NewNode(common()->Int64Constant(v)); + } + + SimplifiedOperatorBuilder* simplified() { return &main_simplified_; } + MachineOperatorBuilder* machine() { return &main_machine_; } + CommonOperatorBuilder* common() { return &main_common_; } + Graph* graph() { return main_graph_; } +}; + + +TEST(LowerBooleanNot_bit_bit) { + // BooleanNot(x: rBit) used as rBit + TestingGraph t(Type::Boolean()); + Node* b = t.ExampleWithOutput(rBit); + Node* inv = t.graph()->NewNode(t.simplified()->BooleanNot(), b); + Node* use = t.Branch(inv); + t.Lower(); + Node* cmp = use->InputAt(0); + CHECK_EQ(t.machine()->WordEqual()->opcode(), cmp->opcode()); + CHECK(b == cmp->InputAt(0) || b == cmp->InputAt(1)); + Node* f = t.jsgraph.Int32Constant(0); + CHECK(f == cmp->InputAt(0) || f == cmp->InputAt(1)); +} + + +TEST(LowerBooleanNot_bit_tagged) { + // BooleanNot(x: rBit) used as rTagged + TestingGraph t(Type::Boolean()); + Node* b = t.ExampleWithOutput(rBit); + Node* inv = t.graph()->NewNode(t.simplified()->BooleanNot(), b); + Node* use = t.Use(inv, rTagged); + t.Return(use); + t.Lower(); + CHECK_EQ(IrOpcode::kChangeBitToBool, use->InputAt(0)->opcode()); + Node* cmp = use->InputAt(0)->InputAt(0); + CHECK_EQ(t.machine()->WordEqual()->opcode(), cmp->opcode()); + CHECK(b == cmp->InputAt(0) || b == cmp->InputAt(1)); + Node* f = t.jsgraph.Int32Constant(0); + CHECK(f == cmp->InputAt(0) || f == cmp->InputAt(1)); +} + + +TEST(LowerBooleanNot_tagged_bit) { + // BooleanNot(x: rTagged) used as rBit + TestingGraph t(Type::Boolean()); + Node* b = t.p0; + Node* inv = t.graph()->NewNode(t.simplified()->BooleanNot(), b); + Node* use = t.Branch(inv); + t.Lower(); + Node* cmp = use->InputAt(0); + CHECK_EQ(t.machine()->WordEqual()->opcode(), cmp->opcode()); + CHECK(b == cmp->InputAt(0) || b == cmp->InputAt(1)); + Node* f = 
t.jsgraph.FalseConstant(); + CHECK(f == cmp->InputAt(0) || f == cmp->InputAt(1)); +} + + +TEST(LowerBooleanNot_tagged_tagged) { + // BooleanNot(x: rTagged) used as rTagged + TestingGraph t(Type::Boolean()); + Node* b = t.p0; + Node* inv = t.graph()->NewNode(t.simplified()->BooleanNot(), b); + Node* use = t.Use(inv, rTagged); + t.Return(use); + t.Lower(); + CHECK_EQ(IrOpcode::kChangeBitToBool, use->InputAt(0)->opcode()); + Node* cmp = use->InputAt(0)->InputAt(0); + CHECK_EQ(t.machine()->WordEqual()->opcode(), cmp->opcode()); + CHECK(b == cmp->InputAt(0) || b == cmp->InputAt(1)); + Node* f = t.jsgraph.FalseConstant(); + CHECK(f == cmp->InputAt(0) || f == cmp->InputAt(1)); +} + + +static Type* test_types[] = {Type::Signed32(), Type::Unsigned32(), + Type::Number(), Type::Any()}; + + +TEST(LowerNumberCmp_to_int32) { + TestingGraph t(Type::Signed32(), Type::Signed32()); + + t.CheckLoweringBinop(IrOpcode::kWord32Equal, t.simplified()->NumberEqual()); + t.CheckLoweringBinop(IrOpcode::kInt32LessThan, + t.simplified()->NumberLessThan()); + t.CheckLoweringBinop(IrOpcode::kInt32LessThanOrEqual, + t.simplified()->NumberLessThanOrEqual()); +} + + +TEST(LowerNumberCmp_to_uint32) { + TestingGraph t(Type::Unsigned32(), Type::Unsigned32()); + + t.CheckLoweringBinop(IrOpcode::kWord32Equal, t.simplified()->NumberEqual()); + t.CheckLoweringBinop(IrOpcode::kUint32LessThan, + t.simplified()->NumberLessThan()); + t.CheckLoweringBinop(IrOpcode::kUint32LessThanOrEqual, + t.simplified()->NumberLessThanOrEqual()); +} + + +TEST(LowerNumberCmp_to_float64) { + static Type* types[] = {Type::Number(), Type::Any()}; + + for (size_t i = 0; i < ARRAY_SIZE(types); i++) { + TestingGraph t(types[i], types[i]); + + t.CheckLoweringBinop(IrOpcode::kFloat64Equal, + t.simplified()->NumberEqual()); + t.CheckLoweringBinop(IrOpcode::kFloat64LessThan, + t.simplified()->NumberLessThan()); + t.CheckLoweringBinop(IrOpcode::kFloat64LessThanOrEqual, + t.simplified()->NumberLessThanOrEqual()); + } +} + + +TEST(LowerNumberAddSub_to_int32) { + TestingGraph t(Type::Signed32(), Type::Signed32()); + t.CheckLoweringTruncatedBinop(IrOpcode::kInt32Add, + t.simplified()->NumberAdd(), + t.simplified()->NumberToInt32()); + t.CheckLoweringTruncatedBinop(IrOpcode::kInt32Sub, + t.simplified()->NumberSubtract(), + t.simplified()->NumberToInt32()); +} + + +TEST(LowerNumberAddSub_to_uint32) { + TestingGraph t(Type::Unsigned32(), Type::Unsigned32()); + t.CheckLoweringTruncatedBinop(IrOpcode::kInt32Add, + t.simplified()->NumberAdd(), + t.simplified()->NumberToUint32()); + t.CheckLoweringTruncatedBinop(IrOpcode::kInt32Sub, + t.simplified()->NumberSubtract(), + t.simplified()->NumberToUint32()); +} + + +TEST(LowerNumberAddSub_to_float64) { + for (size_t i = 0; i < ARRAY_SIZE(test_types); i++) { + TestingGraph t(test_types[i], test_types[i]); + + t.CheckLoweringBinop(IrOpcode::kFloat64Add, t.simplified()->NumberAdd()); + t.CheckLoweringBinop(IrOpcode::kFloat64Sub, + t.simplified()->NumberSubtract()); + } +} + + +TEST(LowerNumberDivMod_to_float64) { + for (size_t i = 0; i < ARRAY_SIZE(test_types); i++) { + TestingGraph t(test_types[i], test_types[i]); + + t.CheckLoweringBinop(IrOpcode::kFloat64Div, t.simplified()->NumberDivide()); + t.CheckLoweringBinop(IrOpcode::kFloat64Mod, + t.simplified()->NumberModulus()); + } +} + + +static void CheckChangeOf(IrOpcode::Value change, Node* of, Node* node) { + CHECK_EQ(change, node->opcode()); + CHECK_EQ(of, node->InputAt(0)); +} + + +TEST(LowerNumberToInt32_to_nop) { + // NumberToInt32(x: rTagged | tInt32) used as rTagged + 
TestingGraph t(Type::Signed32());
+  Node* trunc = t.graph()->NewNode(t.simplified()->NumberToInt32(), t.p0);
+  Node* use = t.Use(trunc, rTagged);
+  t.Return(use);
+  t.Lower();
+  CHECK_EQ(t.p0, use->InputAt(0));
+}
+
+
+TEST(LowerNumberToInt32_to_ChangeTaggedToFloat64) {
+  // NumberToInt32(x: rTagged | tInt32) used as rFloat64
+  TestingGraph t(Type::Signed32());
+  Node* trunc = t.graph()->NewNode(t.simplified()->NumberToInt32(), t.p0);
+  Node* use = t.Use(trunc, rFloat64);
+  t.Return(use);
+  t.Lower();
+  CheckChangeOf(IrOpcode::kChangeTaggedToFloat64, t.p0, use->InputAt(0));
+}
+
+
+TEST(LowerNumberToInt32_to_ChangeTaggedToInt32) {
+  // NumberToInt32(x: rTagged | tInt32) used as rWord32
+  TestingGraph t(Type::Signed32());
+  Node* trunc = t.graph()->NewNode(t.simplified()->NumberToInt32(), t.p0);
+  Node* use = t.Use(trunc, tInt32);
+  t.Return(use);
+  t.Lower();
+  CheckChangeOf(IrOpcode::kChangeTaggedToInt32, t.p0, use->InputAt(0));
+}
+
+
+TEST(LowerNumberToInt32_to_ChangeFloat64ToTagged) {
+  // TODO(titzer): NumberToInt32(x: rFloat64 | tInt32) used as rTagged
+}
+
+
+TEST(LowerNumberToInt32_to_ChangeFloat64ToInt32) {
+  // TODO(titzer): NumberToInt32(x: rFloat64 | tInt32) used as rWord32 | tInt32
+}
+
+
+TEST(LowerNumberToInt32_to_TruncateFloat64ToInt32) {
+  // TODO(titzer): NumberToInt32(x: rFloat64) used as rWord32 | tUint32
+}
+
+
+TEST(LowerNumberToUint32_to_nop) {
+  // NumberToUint32(x: rTagged | tUint32) used as rTagged
+  TestingGraph t(Type::Unsigned32());
+  Node* trunc = t.graph()->NewNode(t.simplified()->NumberToUint32(), t.p0);
+  Node* use = t.Use(trunc, rTagged);
+  t.Return(use);
+  t.Lower();
+  CHECK_EQ(t.p0, use->InputAt(0));
+}
+
+
+TEST(LowerNumberToUint32_to_ChangeTaggedToFloat64) {
+  // NumberToUint32(x: rTagged | tUint32) used as rFloat64
+  TestingGraph t(Type::Unsigned32());
+  Node* trunc = t.graph()->NewNode(t.simplified()->NumberToUint32(), t.p0);
+  Node* use = t.Use(trunc, rFloat64);
+  t.Return(use);
+  t.Lower();
+  CheckChangeOf(IrOpcode::kChangeTaggedToFloat64, t.p0, use->InputAt(0));
+}
+
+
+TEST(LowerNumberToUint32_to_ChangeTaggedToUint32) {
+  // NumberToUint32(x: rTagged | tUint32) used as rWord32
+  TestingGraph t(Type::Unsigned32());
+  Node* trunc = t.graph()->NewNode(t.simplified()->NumberToUint32(), t.p0);
+  Node* use = t.Use(trunc, tUint32);
+  t.Return(use);
+  t.Lower();
+  CheckChangeOf(IrOpcode::kChangeTaggedToUint32, t.p0, use->InputAt(0));
+}
+
+
+TEST(LowerNumberToUint32_to_ChangeFloat64ToTagged) {
+  // TODO(titzer): NumberToUint32(x: rFloat64 | tUint32) used as rTagged
+}
+
+
+TEST(LowerNumberToUint32_to_ChangeFloat64ToUint32) {
+  // TODO(titzer): NumberToUint32(x: rFloat64 | tUint32) used as rWord32
+}
+
+
+TEST(LowerNumberToUint32_to_TruncateFloat64ToUint32) {
+  // TODO(titzer): NumberToUint32(x: rFloat64) used as rWord32
+}
+
+
+TEST(LowerReferenceEqual_to_wordeq) {
+  TestingGraph t(Type::Any(), Type::Any());
+  IrOpcode::Value opcode =
+      static_cast<IrOpcode::Value>(t.machine()->WordEqual()->opcode());
+  t.CheckLoweringBinop(opcode, t.simplified()->ReferenceEqual(Type::Any()));
+}
+
+
+TEST(LowerStringOps_to_rtcalls) {
+  if (false) {  // TODO(titzer): lower StringOps to runtime calls
+    TestingGraph t(Type::String(), Type::String());
+    t.CheckLoweringBinop(IrOpcode::kCall, t.simplified()->StringEqual());
+    t.CheckLoweringBinop(IrOpcode::kCall, t.simplified()->StringLessThan());
+    t.CheckLoweringBinop(IrOpcode::kCall,
+                         t.simplified()->StringLessThanOrEqual());
+    t.CheckLoweringBinop(IrOpcode::kCall, t.simplified()->StringAdd());
+  }
+}
+
+
+void 
CheckChangeInsertion(IrOpcode::Value expected, RepType from, RepType to) { + TestingGraph t(Type::Any()); + Node* in = t.ExampleWithOutput(from); + Node* use = t.Use(in, to); + t.Return(use); + t.Lower(); + CHECK_EQ(expected, use->InputAt(0)->opcode()); + CHECK_EQ(in, use->InputAt(0)->InputAt(0)); +} + + +TEST(InsertBasicChanges) { + if (false) { + // TODO(titzer): these changes need the output to have the right type. + CheckChangeInsertion(IrOpcode::kChangeFloat64ToInt32, rFloat64, tInt32); + CheckChangeInsertion(IrOpcode::kChangeFloat64ToUint32, rFloat64, tUint32); + CheckChangeInsertion(IrOpcode::kChangeTaggedToInt32, rTagged, tInt32); + CheckChangeInsertion(IrOpcode::kChangeTaggedToUint32, rTagged, tUint32); + } + + CheckChangeInsertion(IrOpcode::kChangeFloat64ToTagged, rFloat64, rTagged); + CheckChangeInsertion(IrOpcode::kChangeTaggedToFloat64, rTagged, rFloat64); + + CheckChangeInsertion(IrOpcode::kChangeInt32ToFloat64, tInt32, rFloat64); + CheckChangeInsertion(IrOpcode::kChangeInt32ToTagged, tInt32, rTagged); + + CheckChangeInsertion(IrOpcode::kChangeUint32ToFloat64, tUint32, rFloat64); + CheckChangeInsertion(IrOpcode::kChangeUint32ToTagged, tUint32, rTagged); +} + + +static void CheckChangesAroundBinop(TestingGraph* t, Operator* op, + IrOpcode::Value input_change, + IrOpcode::Value output_change) { + Node* binop = t->graph()->NewNode(op, t->p0, t->p1); + t->Return(binop); + t->Lower(); + CHECK_EQ(input_change, binop->InputAt(0)->opcode()); + CHECK_EQ(input_change, binop->InputAt(1)->opcode()); + CHECK_EQ(t->p0, binop->InputAt(0)->InputAt(0)); + CHECK_EQ(t->p1, binop->InputAt(1)->InputAt(0)); + CHECK_EQ(output_change, t->ret->InputAt(0)->opcode()); + CHECK_EQ(binop, t->ret->InputAt(0)->InputAt(0)); +} + + +TEST(InsertChangesAroundInt32Binops) { + TestingGraph t(Type::Signed32(), Type::Signed32()); + + Operator* ops[] = {t.machine()->Int32Add(), t.machine()->Int32Sub(), + t.machine()->Int32Mul(), t.machine()->Int32Div(), + t.machine()->Int32Mod(), t.machine()->Word32And(), + t.machine()->Word32Or(), t.machine()->Word32Xor(), + t.machine()->Word32Shl(), t.machine()->Word32Sar()}; + + for (size_t i = 0; i < ARRAY_SIZE(ops); i++) { + CheckChangesAroundBinop(&t, ops[i], IrOpcode::kChangeTaggedToInt32, + IrOpcode::kChangeInt32ToTagged); + } +} + + +TEST(InsertChangesAroundInt32Cmp) { + TestingGraph t(Type::Signed32(), Type::Signed32()); + + Operator* ops[] = {t.machine()->Int32LessThan(), + t.machine()->Int32LessThanOrEqual()}; + + for (size_t i = 0; i < ARRAY_SIZE(ops); i++) { + CheckChangesAroundBinop(&t, ops[i], IrOpcode::kChangeTaggedToInt32, + IrOpcode::kChangeBitToBool); + } +} + + +TEST(InsertChangesAroundUint32Cmp) { + TestingGraph t(Type::Unsigned32(), Type::Unsigned32()); + + Operator* ops[] = {t.machine()->Uint32LessThan(), + t.machine()->Uint32LessThanOrEqual()}; + + for (size_t i = 0; i < ARRAY_SIZE(ops); i++) { + CheckChangesAroundBinop(&t, ops[i], IrOpcode::kChangeTaggedToUint32, + IrOpcode::kChangeBitToBool); + } +} + + +TEST(InsertChangesAroundFloat64Binops) { + TestingGraph t(Type::Number(), Type::Number()); + + Operator* ops[] = { + t.machine()->Float64Add(), t.machine()->Float64Sub(), + t.machine()->Float64Mul(), t.machine()->Float64Div(), + t.machine()->Float64Mod(), + }; + + for (size_t i = 0; i < ARRAY_SIZE(ops); i++) { + CheckChangesAroundBinop(&t, ops[i], IrOpcode::kChangeTaggedToFloat64, + IrOpcode::kChangeFloat64ToTagged); + } +} + + +TEST(InsertChangesAroundFloat64Cmp) { + TestingGraph t(Type::Number(), Type::Number()); + + Operator* ops[] = 
{t.machine()->Float64Equal(), + t.machine()->Float64LessThan(), + t.machine()->Float64LessThanOrEqual()}; + + for (size_t i = 0; i < ARRAY_SIZE(ops); i++) { + CheckChangesAroundBinop(&t, ops[i], IrOpcode::kChangeTaggedToFloat64, + IrOpcode::kChangeBitToBool); + } +} + + +void CheckFieldAccessArithmetic(FieldAccess access, Node* load_or_store) { + Int32Matcher index = Int32Matcher(load_or_store->InputAt(1)); + CHECK(index.Is(access.offset - access.tag())); +} + + +Node* CheckElementAccessArithmetic(ElementAccess access, Node* load_or_store) { + Int32BinopMatcher index(load_or_store->InputAt(1)); + CHECK_EQ(IrOpcode::kInt32Add, index.node()->opcode()); + CHECK(index.right().Is(access.header_size - access.tag())); + + int element_size = 0; + switch (access.representation) { + case kMachineTagged: + element_size = kPointerSize; + break; + case kMachineWord8: + element_size = 1; + break; + case kMachineWord16: + element_size = 2; + break; + case kMachineWord32: + element_size = 4; + break; + case kMachineWord64: + case kMachineFloat64: + element_size = 8; + break; + case kMachineLast: + UNREACHABLE(); + break; + } + + if (element_size != 1) { + Int32BinopMatcher mul(index.left().node()); + CHECK_EQ(IrOpcode::kInt32Mul, mul.node()->opcode()); + CHECK(mul.right().Is(element_size)); + return mul.left().node(); + } else { + return index.left().node(); + } +} + + +static const MachineType machine_reps[] = {kMachineWord8, kMachineWord16, + kMachineWord32, kMachineWord64, + kMachineFloat64, kMachineTagged}; + + +// Representation types corresponding to those above. +static const RepType rep_types[] = {static_cast<RepType>(rWord32 | tUint32), + static_cast<RepType>(rWord32 | tUint32), + static_cast<RepType>(rWord32 | tInt32), + static_cast<RepType>(rWord64), + static_cast<RepType>(rFloat64 | tNumber), + static_cast<RepType>(rTagged | tAny)}; + + +TEST(LowerLoadField_to_load) { + TestingGraph t(Type::Any(), Type::Signed32()); + + for (size_t i = 0; i < ARRAY_SIZE(machine_reps); i++) { + FieldAccess access = {kTaggedBase, FixedArrayBase::kHeaderSize, + Handle<Name>::null(), Type::Any(), machine_reps[i]}; + + Node* load = + t.graph()->NewNode(t.simplified()->LoadField(access), t.p0, t.start); + Node* use = t.Use(load, rep_types[i]); + t.Return(use); + t.Lower(); + CHECK_EQ(IrOpcode::kLoad, load->opcode()); + CHECK_EQ(t.p0, load->InputAt(0)); + CheckFieldAccessArithmetic(access, load); + + MachineType rep = OpParameter<MachineType>(load); + CHECK_EQ(machine_reps[i], rep); + } +} + + +TEST(LowerStoreField_to_store) { + TestingGraph t(Type::Any(), Type::Signed32()); + + for (size_t i = 0; i < ARRAY_SIZE(machine_reps); i++) { + FieldAccess access = {kTaggedBase, FixedArrayBase::kHeaderSize, + Handle<Name>::null(), Type::Any(), machine_reps[i]}; + + + Node* val = t.ExampleWithOutput(rep_types[i]); + Node* store = t.graph()->NewNode(t.simplified()->StoreField(access), t.p0, + val, t.start, t.start); + t.Effect(store); + t.Lower(); + CHECK_EQ(IrOpcode::kStore, store->opcode()); + CHECK_EQ(val, store->InputAt(2)); + CheckFieldAccessArithmetic(access, store); + + StoreRepresentation rep = OpParameter<StoreRepresentation>(store); + if (rep_types[i] & rTagged) { + CHECK_EQ(kFullWriteBarrier, rep.write_barrier_kind); + } + CHECK_EQ(machine_reps[i], rep.rep); + } +} + + +TEST(LowerLoadElement_to_load) { + TestingGraph t(Type::Any(), Type::Signed32()); + + for (size_t i = 0; i < ARRAY_SIZE(machine_reps); i++) { + ElementAccess access = {kTaggedBase, FixedArrayBase::kHeaderSize, + Type::Any(), machine_reps[i]}; + + 
Node* load = t.graph()->NewNode(t.simplified()->LoadElement(access), t.p0, + t.p1, t.start); + Node* use = t.Use(load, rep_types[i]); + t.Return(use); + t.Lower(); + CHECK_EQ(IrOpcode::kLoad, load->opcode()); + CHECK_EQ(t.p0, load->InputAt(0)); + CheckElementAccessArithmetic(access, load); + + MachineType rep = OpParameter<MachineType>(load); + CHECK_EQ(machine_reps[i], rep); + } +} + + +TEST(LowerStoreElement_to_store) { + TestingGraph t(Type::Any(), Type::Signed32()); + + for (size_t i = 0; i < ARRAY_SIZE(machine_reps); i++) { + ElementAccess access = {kTaggedBase, FixedArrayBase::kHeaderSize, + Type::Any(), machine_reps[i]}; + + Node* val = t.ExampleWithOutput(rep_types[i]); + Node* store = t.graph()->NewNode(t.simplified()->StoreElement(access), t.p0, + t.p1, val, t.start, t.start); + t.Effect(store); + t.Lower(); + CHECK_EQ(IrOpcode::kStore, store->opcode()); + CHECK_EQ(val, store->InputAt(2)); + CheckElementAccessArithmetic(access, store); + + StoreRepresentation rep = OpParameter<StoreRepresentation>(store); + if (rep_types[i] & rTagged) { + CHECK_EQ(kFullWriteBarrier, rep.write_barrier_kind); + } + CHECK_EQ(machine_reps[i], rep.rep); + } +} + + +TEST(InsertChangeForLoadElementIndex) { + // LoadElement(obj: Tagged, index: tInt32 | rTagged) => + // Load(obj, Int32Add(Int32Mul(ChangeTaggedToInt32(index), #k), #k)) + TestingGraph t(Type::Any(), Type::Signed32()); + ElementAccess access = {kTaggedBase, FixedArrayBase::kHeaderSize, Type::Any(), + kMachineTagged}; + + Node* load = t.graph()->NewNode(t.simplified()->LoadElement(access), t.p0, + t.p1, t.start); + t.Return(load); + t.Lower(); + CHECK_EQ(IrOpcode::kLoad, load->opcode()); + CHECK_EQ(t.p0, load->InputAt(0)); + + Node* index = CheckElementAccessArithmetic(access, load); + CheckChangeOf(IrOpcode::kChangeTaggedToInt32, t.p1, index); +} + + +TEST(InsertChangeForStoreElementIndex) { + // StoreElement(obj: Tagged, index: tInt32 | rTagged, val) => + // Store(obj, Int32Add(Int32Mul(ChangeTaggedToInt32(index), #k), #k), val) + TestingGraph t(Type::Any(), Type::Signed32()); + ElementAccess access = {kTaggedBase, FixedArrayBase::kHeaderSize, Type::Any(), + kMachineTagged}; + + Node* store = + t.graph()->NewNode(t.simplified()->StoreElement(access), t.p0, t.p1, + t.jsgraph.TrueConstant(), t.start, t.start); + t.Effect(store); + t.Lower(); + CHECK_EQ(IrOpcode::kStore, store->opcode()); + CHECK_EQ(t.p0, store->InputAt(0)); + + Node* index = CheckElementAccessArithmetic(access, store); + CheckChangeOf(IrOpcode::kChangeTaggedToInt32, t.p1, index); +} + + +TEST(InsertChangeForLoadElement) { + // TODO(titzer): test all load/store representation change insertions. + TestingGraph t(Type::Any(), Type::Signed32()); + ElementAccess access = {kTaggedBase, FixedArrayBase::kHeaderSize, Type::Any(), + kMachineFloat64}; + + Node* load = t.graph()->NewNode(t.simplified()->LoadElement(access), t.p0, + t.p1, t.start); + t.Return(load); + t.Lower(); + CHECK_EQ(IrOpcode::kLoad, load->opcode()); + CHECK_EQ(t.p0, load->InputAt(0)); + CheckChangeOf(IrOpcode::kChangeFloat64ToTagged, load, t.ret->InputAt(0)); +} + + +TEST(InsertChangeForLoadField) { + // TODO(titzer): test all load/store representation change insertions. 
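+  // Loading a kMachineFloat64 field into a tagged context must insert a
+  // ChangeFloat64ToTagged on the loaded value (checked below).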
+ TestingGraph t(Type::Any(), Type::Signed32()); + FieldAccess access = {kTaggedBase, FixedArrayBase::kHeaderSize, + Handle<Name>::null(), Type::Any(), kMachineFloat64}; + + Node* load = + t.graph()->NewNode(t.simplified()->LoadField(access), t.p0, t.start); + t.Return(load); + t.Lower(); + CHECK_EQ(IrOpcode::kLoad, load->opcode()); + CHECK_EQ(t.p0, load->InputAt(0)); + CheckChangeOf(IrOpcode::kChangeFloat64ToTagged, load, t.ret->InputAt(0)); +} + + +TEST(InsertChangeForStoreElement) { + // TODO(titzer): test all load/store representation change insertions. + TestingGraph t(Type::Any(), Type::Signed32()); + ElementAccess access = {kTaggedBase, FixedArrayBase::kHeaderSize, Type::Any(), + kMachineFloat64}; + + Node* store = + t.graph()->NewNode(t.simplified()->StoreElement(access), t.p0, + t.jsgraph.Int32Constant(0), t.p1, t.start, t.start); + t.Effect(store); + t.Lower(); + + CHECK_EQ(IrOpcode::kStore, store->opcode()); + CHECK_EQ(t.p0, store->InputAt(0)); + CheckChangeOf(IrOpcode::kChangeTaggedToFloat64, t.p1, store->InputAt(2)); +} + + +TEST(InsertChangeForStoreField) { + // TODO(titzer): test all load/store representation change insertions. + TestingGraph t(Type::Any(), Type::Signed32()); + FieldAccess access = {kTaggedBase, FixedArrayBase::kHeaderSize, + Handle<Name>::null(), Type::Any(), kMachineFloat64}; + + Node* store = t.graph()->NewNode(t.simplified()->StoreField(access), t.p0, + t.p1, t.start, t.start); + t.Effect(store); + t.Lower(); + + CHECK_EQ(IrOpcode::kStore, store->opcode()); + CHECK_EQ(t.p0, store->InputAt(0)); + CheckChangeOf(IrOpcode::kChangeTaggedToFloat64, t.p1, store->InputAt(2)); +} diff --git a/deps/v8/test/cctest/compiler/test-structured-ifbuilder-fuzzer.cc b/deps/v8/test/cctest/compiler/test-structured-ifbuilder-fuzzer.cc new file mode 100644 index 00000000000..02232264d9c --- /dev/null +++ b/deps/v8/test/cctest/compiler/test-structured-ifbuilder-fuzzer.cc @@ -0,0 +1,667 @@ +// Copyright 2014 the V8 project authors. All rights reserved. +// Use of this source code is governed by a BSD-style license that can be +// found in the LICENSE file. 
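+
+// This fuzzer exercises StructuredMachineAssembler::IfBuilder by generating
+// random control strings ('i' = if, 't' = then, 'e' = else, 'r' = return,
+// 'v' = condition variable, '&'/'|' = conjunction/disjunction, parentheses
+// for grouping), building code from them, and cross-checking the results
+// against IfBuilderModel, a straightforward interpreter of the same string.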
+ +#include <string> + +#include "src/v8.h" +#include "test/cctest/cctest.h" + +#include "src/base/utils/random-number-generator.h" +#include "test/cctest/compiler/codegen-tester.h" + +#if V8_TURBOFAN_TARGET + +using namespace v8::internal; +using namespace v8::internal::compiler; + +typedef StructuredMachineAssembler::IfBuilder IfBuilder; +typedef StructuredMachineAssembler::LoopBuilder Loop; + +static const int32_t kUninitializedVariableOffset = -1; +static const int32_t kUninitializedOutput = -1; +static const int32_t kVerifiedOutput = -2; + +static const int32_t kInitalVar = 1013; +static const int32_t kConjunctionInc = 1069; +static const int32_t kDisjunctionInc = 1151; +static const int32_t kThenInc = 1223; +static const int32_t kElseInc = 1291; +static const int32_t kIfInc = 1373; + +class IfBuilderModel { + public: + explicit IfBuilderModel(Zone* zone) + : zone_(zone), + variable_offset_(0), + root_(new (zone_) Node(NULL)), + current_node_(root_), + current_expression_(NULL) {} + + void If() { + if (current_node_->else_node != NULL) { + current_node_ = current_node_->else_node; + } else if (current_node_->then_node != NULL) { + current_node_ = current_node_->then_node; + } + DCHECK(current_expression_ == NULL); + current_expression_ = new (zone_) Expression(zone_, NULL); + current_node_->condition = current_expression_; + } + void IfNode() { LastChild()->variable_offset = variable_offset_++; } + + void OpenParen() { current_expression_ = LastChild(); } + void CloseParen() { current_expression_ = current_expression_->parent; } + + void And() { NewChild()->conjunction = true; } + void Or() { NewChild()->disjunction = true; } + + void Then() { + DCHECK(current_expression_ == NULL || current_expression_->parent == NULL); + current_expression_ = NULL; + DCHECK(current_node_->then_node == NULL); + current_node_->then_node = new (zone_) Node(current_node_); + } + void Else() { + DCHECK(current_expression_ == NULL || current_expression_->parent == NULL); + current_expression_ = NULL; + DCHECK(current_node_->else_node == NULL); + current_node_->else_node = new (zone_) Node(current_node_); + } + void Return() { + if (current_node_->else_node != NULL) { + current_node_->else_node->returns = true; + } else if (current_node_->then_node != NULL) { + current_node_->then_node->returns = true; + } else { + CHECK(false); + } + } + void End() {} + + void Print(std::vector<char>* v) { PrintRecursive(v, root_); } + + struct VerificationState { + int32_t* inputs; + int32_t* outputs; + int32_t var; + }; + + int32_t Verify(int length, int32_t* inputs, int32_t* outputs) { + CHECK_EQ(variable_offset_, length); + // Input/Output verification. + for (int i = 0; i < length; ++i) { + CHECK(inputs[i] == 0 || inputs[i] == 1); + CHECK(outputs[i] == kUninitializedOutput || outputs[i] >= 0); + } + // Do verification. + VerificationState state; + state.inputs = inputs; + state.outputs = outputs; + state.var = kInitalVar; + VerifyRecursive(root_, &state); + // Verify all outputs marked. 
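+    // Every slot the generated code wrote must have been matched against the
+    // model and overwritten with kVerifiedOutput; all other slots must still
+    // hold kUninitializedOutput.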
+ for (int i = 0; i < length; ++i) { + CHECK(outputs[i] == kUninitializedOutput || + outputs[i] == kVerifiedOutput); + } + return state.var; + } + + private: + struct Expression; + typedef std::vector<Expression*, zone_allocator<Expression*> > Expressions; + + struct Expression : public ZoneObject { + Expression(Zone* zone, Expression* p) + : variable_offset(kUninitializedVariableOffset), + disjunction(false), + conjunction(false), + parent(p), + children(Expressions::allocator_type(zone)) {} + int variable_offset; + bool disjunction; + bool conjunction; + Expression* parent; + Expressions children; + + private: + DISALLOW_COPY_AND_ASSIGN(Expression); + }; + + struct Node : public ZoneObject { + explicit Node(Node* p) + : parent(p), + condition(NULL), + then_node(NULL), + else_node(NULL), + returns(false) {} + Node* parent; + Expression* condition; + Node* then_node; + Node* else_node; + bool returns; + + private: + DISALLOW_COPY_AND_ASSIGN(Node); + }; + + Expression* LastChild() { + if (current_expression_->children.empty()) { + current_expression_->children.push_back( + new (zone_) Expression(zone_, current_expression_)); + } + return current_expression_->children.back(); + } + + Expression* NewChild() { + Expression* child = new (zone_) Expression(zone_, current_expression_); + current_expression_->children.push_back(child); + return child; + } + + static void PrintRecursive(std::vector<char>* v, Expression* expression) { + CHECK(expression != NULL); + if (expression->conjunction) { + DCHECK(!expression->disjunction); + v->push_back('&'); + } else if (expression->disjunction) { + v->push_back('|'); + } + if (expression->variable_offset != kUninitializedVariableOffset) { + v->push_back('v'); + } + Expressions& children = expression->children; + if (children.empty()) return; + v->push_back('('); + for (Expressions::iterator i = children.begin(); i != children.end(); ++i) { + PrintRecursive(v, *i); + } + v->push_back(')'); + } + + static void PrintRecursive(std::vector<char>* v, Node* node) { + // Termination condition. + if (node->condition == NULL) { + CHECK(node->then_node == NULL && node->else_node == NULL); + if (node->returns) v->push_back('r'); + return; + } + CHECK(!node->returns); + v->push_back('i'); + PrintRecursive(v, node->condition); + if (node->then_node != NULL) { + v->push_back('t'); + PrintRecursive(v, node->then_node); + } + if (node->else_node != NULL) { + v->push_back('e'); + PrintRecursive(v, node->else_node); + } + } + + static bool VerifyRecursive(Expression* expression, + VerificationState* state) { + bool result = false; + bool first_iteration = true; + Expressions& children = expression->children; + CHECK(!children.empty()); + for (Expressions::iterator i = children.begin(); i != children.end(); ++i) { + Expression* child = *i; + // Short circuit evaluation, + // but mixes of &&s and ||s have weird semantics. + if ((child->conjunction && !result) || (child->disjunction && result)) { + continue; + } + if (child->conjunction) state->var += kConjunctionInc; + if (child->disjunction) state->var += kDisjunctionInc; + bool child_result; + if (child->variable_offset != kUninitializedVariableOffset) { + // Verify output + CHECK_EQ(state->var, state->outputs[child->variable_offset]); + state->outputs[child->variable_offset] = kVerifiedOutput; // Mark seen. 
+ child_result = state->inputs[child->variable_offset]; + CHECK(child->children.empty()); + state->var += kIfInc; + } else { + child_result = VerifyRecursive(child, state); + } + if (child->conjunction) { + result &= child_result; + } else if (child->disjunction) { + result |= child_result; + } else { + CHECK(first_iteration); + result = child_result; + } + first_iteration = false; + } + return result; + } + + static void VerifyRecursive(Node* node, VerificationState* state) { + if (node->condition == NULL) return; + bool result = VerifyRecursive(node->condition, state); + if (result) { + if (node->then_node) { + state->var += kThenInc; + return VerifyRecursive(node->then_node, state); + } + } else { + if (node->else_node) { + state->var += kElseInc; + return VerifyRecursive(node->else_node, state); + } + } + } + + Zone* zone_; + int variable_offset_; + Node* root_; + Node* current_node_; + Expression* current_expression_; + DISALLOW_COPY_AND_ASSIGN(IfBuilderModel); +}; + + +class IfBuilderGenerator : public StructuredMachineAssemblerTester<int32_t> { + public: + IfBuilderGenerator() + : StructuredMachineAssemblerTester<int32_t>( + MachineOperatorBuilder::pointer_rep(), + MachineOperatorBuilder::pointer_rep()), + var_(NewVariable(Int32Constant(kInitalVar))), + c_(this), + m_(this->zone()), + one_(Int32Constant(1)), + offset_(0) {} + + static void GenerateExpression(v8::base::RandomNumberGenerator* rng, + std::vector<char>* v, int n_vars) { + int depth = 1; + v->push_back('('); + bool need_if = true; + bool populated = false; + while (n_vars != 0) { + if (need_if) { + // can nest a paren or do a variable + if (rng->NextBool()) { + v->push_back('v'); + n_vars--; + need_if = false; + populated = true; + } else { + v->push_back('('); + depth++; + populated = false; + } + } else { + // can pop, do && or do || + int options = 3; + if (depth == 1 || !populated) { + options--; + } + switch (rng->NextInt(options)) { + case 0: + v->push_back('&'); + need_if = true; + break; + case 1: + v->push_back('|'); + need_if = true; + break; + case 2: + v->push_back(')'); + depth--; + break; + } + } + } + CHECK(!need_if); + while (depth != 0) { + v->push_back(')'); + depth--; + } + } + + static void GenerateIfThenElse(v8::base::RandomNumberGenerator* rng, + std::vector<char>* v, int n_ifs, + int max_exp_length) { + CHECK_GT(n_ifs, 0); + CHECK_GT(max_exp_length, 0); + bool have_env = true; + bool then_done = false; + bool else_done = false; + bool first_iteration = true; + while (n_ifs != 0) { + if (have_env) { + int options = 3; + if (else_done || first_iteration) { // Don't do else or return + options -= 2; + first_iteration = false; + } + switch (rng->NextInt(options)) { + case 0: + v->push_back('i'); + n_ifs--; + have_env = false; + GenerateExpression(rng, v, rng->NextInt(max_exp_length) + 1); + break; + case 1: + v->push_back('r'); + have_env = false; + break; + case 2: + v->push_back('e'); + else_done = true; + then_done = false; + break; + default: + CHECK(false); + } + } else { // Can only do then or else + int options = 2; + if (then_done) options--; + switch (rng->NextInt(options)) { + case 0: + v->push_back('e'); + else_done = true; + then_done = false; + break; + case 1: + v->push_back('t'); + then_done = true; + else_done = false; + break; + default: + CHECK(false); + } + have_env = true; + } + } + // Last instruction must have been an if, can complete it in several ways. + int options = 2; + if (then_done && !else_done) options++; + switch (rng->NextInt(3)) { + case 0: + // Do nothing. 
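+        // (leave the trailing 'if' with neither a then nor an else branch)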
+ break; + case 1: + v->push_back('t'); + switch (rng->NextInt(3)) { + case 0: + v->push_back('r'); + break; + case 1: + v->push_back('e'); + break; + case 2: + v->push_back('e'); + v->push_back('r'); + break; + default: + CHECK(false); + } + break; + case 2: + v->push_back('e'); + if (rng->NextBool()) v->push_back('r'); + break; + default: + CHECK(false); + } + } + + std::string::const_iterator ParseExpression(std::string::const_iterator it, + std::string::const_iterator end) { + // Prepare for expression. + m_.If(); + c_.If(); + int depth = 0; + for (; it != end; ++it) { + switch (*it) { + case 'v': + m_.IfNode(); + { + Node* offset = Int32Constant(offset_ * 4); + Store(kMachineWord32, Parameter(1), offset, var_.Get()); + var_.Set(Int32Add(var_.Get(), Int32Constant(kIfInc))); + c_.If(Load(kMachineWord32, Parameter(0), offset)); + offset_++; + } + break; + case '&': + m_.And(); + c_.And(); + var_.Set(Int32Add(var_.Get(), Int32Constant(kConjunctionInc))); + break; + case '|': + m_.Or(); + c_.Or(); + var_.Set(Int32Add(var_.Get(), Int32Constant(kDisjunctionInc))); + break; + case '(': + if (depth != 0) { + m_.OpenParen(); + c_.OpenParen(); + } + depth++; + break; + case ')': + depth--; + if (depth == 0) return it; + m_.CloseParen(); + c_.CloseParen(); + break; + default: + CHECK(false); + } + } + CHECK(false); + return it; + } + + void ParseIfThenElse(const std::string& str) { + int n_vars = 0; + for (std::string::const_iterator it = str.begin(); it != str.end(); ++it) { + if (*it == 'v') n_vars++; + } + InitializeConstants(n_vars); + for (std::string::const_iterator it = str.begin(); it != str.end(); ++it) { + switch (*it) { + case 'i': { + it++; + CHECK(it != str.end()); + CHECK_EQ('(', *it); + it = ParseExpression(it, str.end()); + CHECK_EQ(')', *it); + break; + } + case 't': + m_.Then(); + c_.Then(); + var_.Set(Int32Add(var_.Get(), Int32Constant(kThenInc))); + break; + case 'e': + m_.Else(); + c_.Else(); + var_.Set(Int32Add(var_.Get(), Int32Constant(kElseInc))); + break; + case 'r': + m_.Return(); + Return(var_.Get()); + break; + default: + CHECK(false); + } + } + m_.End(); + c_.End(); + Return(var_.Get()); + // Compare generated model to parsed version. + { + std::vector<char> v; + m_.Print(&v); + std::string m_str(v.begin(), v.end()); + CHECK(m_str == str); + } + } + + void ParseExpression(const std::string& str) { + CHECK(inputs_.is_empty()); + std::string wrapped = "i(" + str + ")te"; + ParseIfThenElse(wrapped); + } + + void ParseRandomIfThenElse(v8::base::RandomNumberGenerator* rng, int n_ifs, + int n_vars) { + std::vector<char> v; + GenerateIfThenElse(rng, &v, n_ifs, n_vars); + std::string str(v.begin(), v.end()); + ParseIfThenElse(str); + } + + void RunRandom(v8::base::RandomNumberGenerator* rng) { + // TODO(dcarney): permute inputs via model. + // TODO(dcarney): compute test_cases from n_ifs and n_vars. + int test_cases = 100; + for (int test = 0; test < test_cases; test++) { + Initialize(); + for (int i = 0; i < offset_; i++) { + inputs_[i] = rng->NextBool(); + } + DoCall(); + } + } + + void Run(const std::string& str, int32_t expected) { + Initialize(); + int offset = 0; + for (std::string::const_iterator it = str.begin(); it != str.end(); ++it) { + switch (*it) { + case 't': + inputs_[offset++] = 1; + break; + case 'f': + inputs_[offset++] = 0; + break; + default: + CHECK(false); + } + } + CHECK_EQ(offset_, offset); + // Call. 
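+    // DoCall() runs the generated code and the model side by side and
+    // CHECKs that their results agree.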
+ int32_t result = DoCall(); + CHECK_EQ(result, expected); + } + + private: + typedef std::vector<int32_t, zone_allocator<int32_t> > IOVector; + + void InitializeConstants(int n_vars) { + CHECK(inputs_.is_empty()); + inputs_.Reset(new int32_t[n_vars]); + outputs_.Reset(new int32_t[n_vars]); + } + + void Initialize() { + for (int i = 0; i < offset_; i++) { + inputs_[i] = 0; + outputs_[i] = kUninitializedOutput; + } + } + + int32_t DoCall() { + int32_t result = Call(inputs_.get(), outputs_.get()); + int32_t expected = m_.Verify(offset_, inputs_.get(), outputs_.get()); + CHECK_EQ(result, expected); + return result; + } + + const v8::internal::compiler::Variable var_; + IfBuilder c_; + IfBuilderModel m_; + Node* one_; + int32_t offset_; + SmartArrayPointer<int32_t> inputs_; + SmartArrayPointer<int32_t> outputs_; +}; + + +TEST(RunExpressionString) { + IfBuilderGenerator m; + m.ParseExpression("((v|v)|v)"); + m.Run("ttt", kInitalVar + 1 * kIfInc + kThenInc); + m.Run("ftt", kInitalVar + 2 * kIfInc + kDisjunctionInc + kThenInc); + m.Run("fft", kInitalVar + 3 * kIfInc + 2 * kDisjunctionInc + kThenInc); + m.Run("fff", kInitalVar + 3 * kIfInc + 2 * kDisjunctionInc + kElseInc); +} + + +TEST(RunExpressionStrings) { + const char* strings[] = { + "v", "(v)", "((v))", "v|v", + "(v|v)", "((v|v))", "v&v", "(v&v)", + "((v&v))", "v&(v)", "v&(v|v)", "v&(v|v)&v", + "v|(v)", "v|(v&v)", "v|(v&v)|v", "v|(((v)|(v&v)|(v)|v)&(v))|v", + }; + v8::base::RandomNumberGenerator rng; + for (size_t i = 0; i < ARRAY_SIZE(strings); i++) { + IfBuilderGenerator m; + m.ParseExpression(strings[i]); + m.RunRandom(&rng); + } +} + + +TEST(RunSimpleIfElseTester) { + const char* tests[] = { + "i(v)", "i(v)t", "i(v)te", + "i(v)er", "i(v)ter", "i(v)ti(v)trei(v)ei(v)ei(v)ei(v)ei(v)ei(v)ei(v)e"}; + v8::base::RandomNumberGenerator rng; + for (size_t i = 0; i < ARRAY_SIZE(tests); ++i) { + IfBuilderGenerator m; + m.ParseIfThenElse(tests[i]); + m.RunRandom(&rng); + } +} + + +TEST(RunRandomExpressions) { + v8::base::RandomNumberGenerator rng; + for (int n_vars = 1; n_vars < 12; n_vars++) { + for (int i = 0; i < n_vars * n_vars + 10; i++) { + IfBuilderGenerator m; + m.ParseRandomIfThenElse(&rng, 1, n_vars); + m.RunRandom(&rng); + } + } +} + + +TEST(RunRandomIfElse) { + v8::base::RandomNumberGenerator rng; + for (int n_ifs = 1; n_ifs < 12; n_ifs++) { + for (int i = 0; i < n_ifs * n_ifs + 10; i++) { + IfBuilderGenerator m; + m.ParseRandomIfThenElse(&rng, n_ifs, 1); + m.RunRandom(&rng); + } + } +} + + +TEST(RunRandomIfElseExpressions) { + v8::base::RandomNumberGenerator rng; + for (int n_vars = 2; n_vars < 6; n_vars++) { + for (int n_ifs = 2; n_ifs < 7; n_ifs++) { + for (int i = 0; i < n_ifs * n_vars + 10; i++) { + IfBuilderGenerator m; + m.ParseRandomIfThenElse(&rng, n_ifs, n_vars); + m.RunRandom(&rng); + } + } + } +} + +#endif diff --git a/deps/v8/test/cctest/compiler/test-structured-machine-assembler.cc b/deps/v8/test/cctest/compiler/test-structured-machine-assembler.cc new file mode 100644 index 00000000000..6d8020baf4f --- /dev/null +++ b/deps/v8/test/cctest/compiler/test-structured-machine-assembler.cc @@ -0,0 +1,1055 @@ +// Copyright 2014 the V8 project authors. All rights reserved. +// Use of this source code is governed by a BSD-style license that can be +// found in the LICENSE file. 
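+
+// Unit tests for StructuredMachineAssembler's Variable and IfBuilder
+// primitives: each test builds a small graph, generates code for it and
+// calls the result directly with CHECK'd expectations.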
+ +#include "src/v8.h" +#include "test/cctest/cctest.h" + +#include "src/base/utils/random-number-generator.h" +#include "src/compiler/structured-machine-assembler.h" +#include "test/cctest/compiler/codegen-tester.h" +#include "test/cctest/compiler/value-helper.h" + +#if V8_TURBOFAN_TARGET + +using namespace v8::internal::compiler; + +typedef StructuredMachineAssembler::IfBuilder IfBuilder; +typedef StructuredMachineAssembler::LoopBuilder Loop; + +namespace v8 { +namespace internal { +namespace compiler { + +class StructuredMachineAssemblerFriend { + public: + static bool VariableAlive(StructuredMachineAssembler* m, + const Variable& var) { + CHECK(m->current_environment_ != NULL); + int offset = var.offset_; + return offset < static_cast<int>(m->CurrentVars()->size()) && + m->CurrentVars()->at(offset) != NULL; + } +}; +} +} +} // namespace v8::internal::compiler + + +TEST(RunVariable) { + StructuredMachineAssemblerTester<int32_t> m; + + int32_t constant = 0x86c2bb16; + + Variable v1 = m.NewVariable(m.Int32Constant(constant)); + Variable v2 = m.NewVariable(v1.Get()); + m.Return(v2.Get()); + + CHECK_EQ(constant, m.Call()); +} + + +TEST(RunSimpleIf) { + StructuredMachineAssemblerTester<int32_t> m(kMachineWord32); + + int32_t constant = 0xc4a3e3a6; + { + IfBuilder cond(&m); + cond.If(m.Parameter(0)).Then(); + m.Return(m.Int32Constant(constant)); + } + m.Return(m.Word32Not(m.Int32Constant(constant))); + + CHECK_EQ(~constant, m.Call(0)); + CHECK_EQ(constant, m.Call(1)); +} + + +TEST(RunSimpleIfVariable) { + StructuredMachineAssemblerTester<int32_t> m(kMachineWord32); + + int32_t constant = 0xdb6f20c2; + Variable var = m.NewVariable(m.Int32Constant(constant)); + { + IfBuilder cond(&m); + cond.If(m.Parameter(0)).Then(); + var.Set(m.Word32Not(var.Get())); + } + m.Return(var.Get()); + + CHECK_EQ(constant, m.Call(0)); + CHECK_EQ(~constant, m.Call(1)); +} + + +TEST(RunSimpleElse) { + StructuredMachineAssemblerTester<int32_t> m(kMachineWord32); + + int32_t constant = 0xfc5eadf4; + { + IfBuilder cond(&m); + cond.If(m.Parameter(0)).Else(); + m.Return(m.Int32Constant(constant)); + } + m.Return(m.Word32Not(m.Int32Constant(constant))); + + CHECK_EQ(constant, m.Call(0)); + CHECK_EQ(~constant, m.Call(1)); +} + + +TEST(RunSimpleIfElse) { + StructuredMachineAssemblerTester<int32_t> m(kMachineWord32); + + int32_t constant = 0xaa9c8cd3; + { + IfBuilder cond(&m); + cond.If(m.Parameter(0)).Then(); + m.Return(m.Int32Constant(constant)); + cond.Else(); + m.Return(m.Word32Not(m.Int32Constant(constant))); + } + + CHECK_EQ(~constant, m.Call(0)); + CHECK_EQ(constant, m.Call(1)); +} + + +TEST(RunSimpleIfElseVariable) { + StructuredMachineAssemblerTester<int32_t> m(kMachineWord32); + + int32_t constant = 0x67b6f39c; + Variable var = m.NewVariable(m.Int32Constant(constant)); + { + IfBuilder cond(&m); + cond.If(m.Parameter(0)).Then(); + var.Set(m.Word32Not(m.Word32Not(var.Get()))); + cond.Else(); + var.Set(m.Word32Not(var.Get())); + } + m.Return(var.Get()); + + CHECK_EQ(~constant, m.Call(0)); + CHECK_EQ(constant, m.Call(1)); +} + + +TEST(RunSimpleIfNoThenElse) { + StructuredMachineAssemblerTester<int32_t> m(kMachineWord32); + + int32_t constant = 0xd5e550ed; + { + IfBuilder cond(&m); + cond.If(m.Parameter(0)); + } + m.Return(m.Int32Constant(constant)); + + CHECK_EQ(constant, m.Call(0)); + CHECK_EQ(constant, m.Call(1)); +} + + +TEST(RunSimpleConjunctionVariable) { + StructuredMachineAssemblerTester<int32_t> m(kMachineWord32); + + int32_t constant = 0xf8fb9ec6; + Variable var = m.NewVariable(m.Int32Constant(constant)); + { + 
IfBuilder cond(&m); + cond.If(m.Int32Constant(1)).And(); + var.Set(m.Word32Not(var.Get())); + cond.If(m.Parameter(0)).Then(); + var.Set(m.Word32Not(m.Word32Not(var.Get()))); + cond.Else(); + var.Set(m.Word32Not(var.Get())); + } + m.Return(var.Get()); + + CHECK_EQ(constant, m.Call(0)); + CHECK_EQ(~constant, m.Call(1)); +} + + +TEST(RunSimpleDisjunctionVariable) { + StructuredMachineAssemblerTester<int32_t> m(kMachineWord32); + + int32_t constant = 0x118f6ffc; + Variable var = m.NewVariable(m.Int32Constant(constant)); + { + IfBuilder cond(&m); + cond.If(m.Int32Constant(0)).Or(); + var.Set(m.Word32Not(var.Get())); + cond.If(m.Parameter(0)).Then(); + var.Set(m.Word32Not(m.Word32Not(var.Get()))); + cond.Else(); + var.Set(m.Word32Not(var.Get())); + } + m.Return(var.Get()); + + CHECK_EQ(constant, m.Call(0)); + CHECK_EQ(~constant, m.Call(1)); +} + + +TEST(RunIfElse) { + StructuredMachineAssemblerTester<int32_t> m(kMachineWord32); + + { + IfBuilder cond(&m); + bool first = true; + FOR_INT32_INPUTS(i) { + Node* c = m.Int32Constant(*i); + if (first) { + cond.If(m.Word32Equal(m.Parameter(0), c)).Then(); + m.Return(c); + first = false; + } else { + cond.Else(); + cond.If(m.Word32Equal(m.Parameter(0), c)).Then(); + m.Return(c); + } + } + } + m.Return(m.Int32Constant(333)); + + FOR_INT32_INPUTS(i) { CHECK_EQ(*i, m.Call(*i)); } +} + + +enum IfBuilderBranchType { kSkipBranch, kBranchFallsThrough, kBranchReturns }; + + +static IfBuilderBranchType all_branch_types[] = { + kSkipBranch, kBranchFallsThrough, kBranchReturns}; + + +static void RunIfBuilderDisjunction(size_t max, IfBuilderBranchType then_type, + IfBuilderBranchType else_type) { + StructuredMachineAssemblerTester<int32_t> m(kMachineWord32); + + std::vector<int32_t> inputs = ValueHelper::int32_vector(); + std::vector<int32_t>::const_iterator i = inputs.begin(); + int32_t hit = 0x8c723c9a; + int32_t miss = 0x88a6b9f3; + { + Node* p0 = m.Parameter(0); + IfBuilder cond(&m); + for (size_t j = 0; j < max; j++, ++i) { + CHECK(i != inputs.end()); // Thank you STL. + if (j > 0) cond.Or(); + cond.If(m.Word32Equal(p0, m.Int32Constant(*i))); + } + switch (then_type) { + case kSkipBranch: + break; + case kBranchFallsThrough: + cond.Then(); + break; + case kBranchReturns: + cond.Then(); + m.Return(m.Int32Constant(hit)); + break; + } + switch (else_type) { + case kSkipBranch: + break; + case kBranchFallsThrough: + cond.Else(); + break; + case kBranchReturns: + cond.Else(); + m.Return(m.Int32Constant(miss)); + break; + } + } + if (then_type != kBranchReturns || else_type != kBranchReturns) { + m.Return(m.Int32Constant(miss)); + } + + if (then_type != kBranchReturns) hit = miss; + + i = inputs.begin(); + for (size_t j = 0; i != inputs.end(); j++, ++i) { + int32_t result = m.Call(*i); + CHECK_EQ(j < max ? hit : miss, result); + } +} + + +TEST(RunIfBuilderDisjunction) { + size_t len = ValueHelper::int32_vector().size() - 1; + size_t max = len > 10 ? 
10 : len - 1; + for (size_t i = 0; i < ARRAY_SIZE(all_branch_types); i++) { + for (size_t j = 0; j < ARRAY_SIZE(all_branch_types); j++) { + for (size_t size = 1; size < max; size++) { + RunIfBuilderDisjunction(size, all_branch_types[i], all_branch_types[j]); + } + RunIfBuilderDisjunction(len, all_branch_types[i], all_branch_types[j]); + } + } +} + + +static void RunIfBuilderConjunction(size_t max, IfBuilderBranchType then_type, + IfBuilderBranchType else_type) { + StructuredMachineAssemblerTester<int32_t> m(kMachineWord32); + + std::vector<int32_t> inputs = ValueHelper::int32_vector(); + std::vector<int32_t>::const_iterator i = inputs.begin(); + int32_t hit = 0xa0ceb9ca; + int32_t miss = 0x226cafaa; + { + IfBuilder cond(&m); + Node* p0 = m.Parameter(0); + for (size_t j = 0; j < max; j++, ++i) { + if (j > 0) cond.And(); + cond.If(m.Word32NotEqual(p0, m.Int32Constant(*i))); + } + switch (then_type) { + case kSkipBranch: + break; + case kBranchFallsThrough: + cond.Then(); + break; + case kBranchReturns: + cond.Then(); + m.Return(m.Int32Constant(hit)); + break; + } + switch (else_type) { + case kSkipBranch: + break; + case kBranchFallsThrough: + cond.Else(); + break; + case kBranchReturns: + cond.Else(); + m.Return(m.Int32Constant(miss)); + break; + } + } + if (then_type != kBranchReturns || else_type != kBranchReturns) { + m.Return(m.Int32Constant(miss)); + } + + if (then_type != kBranchReturns) hit = miss; + + i = inputs.begin(); + for (size_t j = 0; i != inputs.end(); j++, ++i) { + int32_t result = m.Call(*i); + CHECK_EQ(j >= max ? hit : miss, result); + } +} + + +TEST(RunIfBuilderConjunction) { + size_t len = ValueHelper::int32_vector().size() - 1; + size_t max = len > 10 ? 10 : len - 1; + for (size_t i = 0; i < ARRAY_SIZE(all_branch_types); i++) { + for (size_t j = 0; j < ARRAY_SIZE(all_branch_types); j++) { + for (size_t size = 1; size < max; size++) { + RunIfBuilderConjunction(size, all_branch_types[i], all_branch_types[j]); + } + RunIfBuilderConjunction(len, all_branch_types[i], all_branch_types[j]); + } + } +} + + +static void RunDisjunctionVariables(int disjunctions, bool explicit_then, + bool explicit_else) { + StructuredMachineAssemblerTester<int32_t> m(kMachineWord32); + + int32_t constant = 0x65a09535; + + Node* cmp_val = m.Int32Constant(constant); + Node* one = m.Int32Constant(1); + Variable var = m.NewVariable(m.Parameter(0)); + { + IfBuilder cond(&m); + cond.If(m.Word32Equal(var.Get(), cmp_val)); + for (int i = 0; i < disjunctions; i++) { + cond.Or(); + var.Set(m.Int32Add(var.Get(), one)); + cond.If(m.Word32Equal(var.Get(), cmp_val)); + } + if (explicit_then) { + cond.Then(); + } + if (explicit_else) { + cond.Else(); + var.Set(m.Int32Add(var.Get(), one)); + } + } + m.Return(var.Get()); + + int adds = disjunctions + (explicit_else ? 
1 : 0); + int32_t input = constant - 2 * adds; + for (int i = 0; i < adds; i++) { + CHECK_EQ(input + adds, m.Call(input)); + input++; + } + for (int i = 0; i < adds + 1; i++) { + CHECK_EQ(constant, m.Call(input)); + input++; + } + for (int i = 0; i < adds; i++) { + CHECK_EQ(input + adds, m.Call(input)); + input++; + } +} + + +TEST(RunDisjunctionVariables) { + for (int disjunctions = 0; disjunctions < 10; disjunctions++) { + RunDisjunctionVariables(disjunctions, false, false); + RunDisjunctionVariables(disjunctions, false, true); + RunDisjunctionVariables(disjunctions, true, false); + RunDisjunctionVariables(disjunctions, true, true); + } +} + + +static void RunConjunctionVariables(int conjunctions, bool explicit_then, + bool explicit_else) { + StructuredMachineAssemblerTester<int32_t> m(kMachineWord32); + + int32_t constant = 0x2c7f4b45; + Node* cmp_val = m.Int32Constant(constant); + Node* one = m.Int32Constant(1); + Variable var = m.NewVariable(m.Parameter(0)); + { + IfBuilder cond(&m); + cond.If(m.Word32NotEqual(var.Get(), cmp_val)); + for (int i = 0; i < conjunctions; i++) { + cond.And(); + var.Set(m.Int32Add(var.Get(), one)); + cond.If(m.Word32NotEqual(var.Get(), cmp_val)); + } + if (explicit_then) { + cond.Then(); + var.Set(m.Int32Add(var.Get(), one)); + } + if (explicit_else) { + cond.Else(); + } + } + m.Return(var.Get()); + + int adds = conjunctions + (explicit_then ? 1 : 0); + int32_t input = constant - 2 * adds; + for (int i = 0; i < adds; i++) { + CHECK_EQ(input + adds, m.Call(input)); + input++; + } + for (int i = 0; i < adds + 1; i++) { + CHECK_EQ(constant, m.Call(input)); + input++; + } + for (int i = 0; i < adds; i++) { + CHECK_EQ(input + adds, m.Call(input)); + input++; + } +} + + +TEST(RunConjunctionVariables) { + for (int conjunctions = 0; conjunctions < 10; conjunctions++) { + RunConjunctionVariables(conjunctions, false, false); + RunConjunctionVariables(conjunctions, false, true); + RunConjunctionVariables(conjunctions, true, false); + RunConjunctionVariables(conjunctions, true, true); + } +} + + +TEST(RunSimpleNestedIf) { + StructuredMachineAssemblerTester<int32_t> m(kMachineWord32, kMachineWord32); + const size_t NUM_VALUES = 7; + std::vector<int32_t> inputs = ValueHelper::int32_vector(); + CHECK(inputs.size() >= NUM_VALUES); + Node* values[NUM_VALUES]; + for (size_t j = 0; j < NUM_VALUES; j++) { + values[j] = m.Int32Constant(inputs[j]); + } + { + IfBuilder if_0(&m); + if_0.If(m.Word32Equal(m.Parameter(0), values[0])).Then(); + { + IfBuilder if_1(&m); + if_1.If(m.Word32Equal(m.Parameter(1), values[1])).Then(); + { m.Return(values[3]); } + if_1.Else(); + { m.Return(values[4]); } + } + if_0.Else(); + { + IfBuilder if_1(&m); + if_1.If(m.Word32Equal(m.Parameter(1), values[2])).Then(); + { m.Return(values[5]); } + if_1.Else(); + { m.Return(values[6]); } + } + } + + int32_t result = m.Call(inputs[0], inputs[1]); + CHECK_EQ(inputs[3], result); + + result = m.Call(inputs[0], inputs[1] + 1); + CHECK_EQ(inputs[4], result); + + result = m.Call(inputs[0] + 1, inputs[2]); + CHECK_EQ(inputs[5], result); + + result = m.Call(inputs[0] + 1, inputs[2] + 1); + CHECK_EQ(inputs[6], result); +} + + +TEST(RunUnreachableBlockAfterIf) { + StructuredMachineAssemblerTester<int32_t> m; + { + IfBuilder cond(&m); + cond.If(m.Int32Constant(0)).Then(); + m.Return(m.Int32Constant(1)); + cond.Else(); + m.Return(m.Int32Constant(2)); + } + // This is unreachable. 
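+  // (Both arms of the IfBuilder above return, so control can never reach
+  //  the Return below; the test only checks that the assembler tolerates
+  //  trailing dead code.)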
+ m.Return(m.Int32Constant(3)); + CHECK_EQ(2, m.Call()); +} + + +TEST(RunUnreachableBlockAfterLoop) { + StructuredMachineAssemblerTester<int32_t> m; + { + Loop loop(&m); + m.Return(m.Int32Constant(1)); + } + // This is unreachable. + m.Return(m.Int32Constant(3)); + CHECK_EQ(1, m.Call()); +} + + +TEST(RunSimpleLoop) { + StructuredMachineAssemblerTester<int32_t> m; + int32_t constant = 0x120c1f85; + { + Loop loop(&m); + m.Return(m.Int32Constant(constant)); + } + CHECK_EQ(constant, m.Call()); +} + + +TEST(RunSimpleLoopBreak) { + StructuredMachineAssemblerTester<int32_t> m; + int32_t constant = 0x10ddb0a6; + { + Loop loop(&m); + loop.Break(); + } + m.Return(m.Int32Constant(constant)); + CHECK_EQ(constant, m.Call()); +} + + +TEST(RunCountToTen) { + StructuredMachineAssemblerTester<int32_t> m; + Variable i = m.NewVariable(m.Int32Constant(0)); + Node* ten = m.Int32Constant(10); + Node* one = m.Int32Constant(1); + { + Loop loop(&m); + { + IfBuilder cond(&m); + cond.If(m.Word32Equal(i.Get(), ten)).Then(); + loop.Break(); + } + i.Set(m.Int32Add(i.Get(), one)); + } + m.Return(i.Get()); + CHECK_EQ(10, m.Call()); +} + + +TEST(RunCountToTenAcc) { + StructuredMachineAssemblerTester<int32_t> m; + int32_t constant = 0xf27aed64; + Variable i = m.NewVariable(m.Int32Constant(0)); + Variable var = m.NewVariable(m.Int32Constant(constant)); + Node* ten = m.Int32Constant(10); + Node* one = m.Int32Constant(1); + { + Loop loop(&m); + { + IfBuilder cond(&m); + cond.If(m.Word32Equal(i.Get(), ten)).Then(); + loop.Break(); + } + i.Set(m.Int32Add(i.Get(), one)); + var.Set(m.Int32Add(var.Get(), i.Get())); + } + m.Return(var.Get()); + + CHECK_EQ(constant + 10 + 9 * 5, m.Call()); +} + + +TEST(RunSimpleNestedLoop) { + StructuredMachineAssemblerTester<int32_t> m(kMachineWord32); + + Node* zero = m.Int32Constant(0); + Node* one = m.Int32Constant(1); + Node* two = m.Int32Constant(2); + Node* three = m.Int32Constant(3); + { + Loop l1(&m); + { + Loop l2(&m); + { + IfBuilder cond(&m); + cond.If(m.Word32Equal(m.Parameter(0), one)).Then(); + l1.Break(); + } + { + Loop l3(&m); + { + IfBuilder cond(&m); + cond.If(m.Word32Equal(m.Parameter(0), two)).Then(); + l2.Break(); + cond.Else(); + cond.If(m.Word32Equal(m.Parameter(0), three)).Then(); + l3.Break(); + } + m.Return(three); + } + m.Return(two); + } + m.Return(one); + } + m.Return(zero); + + CHECK_EQ(0, m.Call(1)); + CHECK_EQ(1, m.Call(2)); + CHECK_EQ(2, m.Call(3)); + CHECK_EQ(3, m.Call(4)); +} + + +TEST(RunFib) { + StructuredMachineAssemblerTester<int32_t> m(kMachineWord32); + + // Constants. + Node* zero = m.Int32Constant(0); + Node* one = m.Int32Constant(1); + Node* two = m.Int32Constant(2); + // Variables. 
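+  // (The graph below computes Fibonacci iteratively: prv_0 and prv_1 track
+  //  the two most recent values while cnt counts the remaining steps down.)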
+  // cnt = input
+  Variable cnt = m.NewVariable(m.Parameter(0));
+  // if (cnt < 2) return cnt
+  {
+    IfBuilder lt2(&m);
+    lt2.If(m.Int32LessThan(cnt.Get(), two)).Then();
+    m.Return(cnt.Get());
+  }
+  // cnt -= 2
+  cnt.Set(m.Int32Sub(cnt.Get(), two));
+  // res = 1
+  Variable res = m.NewVariable(one);
+  {
+    // prv_0 = 1
+    // prv_1 = 1
+    Variable prv_0 = m.NewVariable(one);
+    Variable prv_1 = m.NewVariable(one);
+    // while (cnt != 0) {
+    Loop main(&m);
+    {
+      IfBuilder nz(&m);
+      nz.If(m.Word32Equal(cnt.Get(), zero)).Then();
+      main.Break();
+    }
+    // res = prv_0 + prv_1
+    // prv_0 = prv_1
+    // prv_1 = res
+    res.Set(m.Int32Add(prv_0.Get(), prv_1.Get()));
+    prv_0.Set(prv_1.Get());
+    prv_1.Set(res.Get());
+    // cnt--
+    cnt.Set(m.Int32Sub(cnt.Get(), one));
+  }
+  m.Return(res.Get());
+
+  int32_t values[] = {0, 1, 1, 2, 3, 5, 8, 13, 21, 34, 55, 89, 144};
+  for (size_t i = 0; i < ARRAY_SIZE(values); i++) {
+    CHECK_EQ(values[i], m.Call(static_cast<int32_t>(i)));
+  }
+}
+
+
+static int VariableIntroduction() {
+  while (true) {
+    int ret = 0;
+    for (int i = 0; i < 10; i++) {
+      for (int j = i; j < 10; j++) {
+        for (int k = j; k < 10; k++) {
+          ret++;
+        }
+        ret++;
+      }
+      ret++;
+    }
+    return ret;
+  }
+}
+
+
+TEST(RunVariableIntroduction) {
+  StructuredMachineAssemblerTester<int32_t> m;
+  Node* zero = m.Int32Constant(0);
+  Node* one = m.Int32Constant(1);
+  // Use an IfBuilder to get out of start block.
+  {
+    IfBuilder i0(&m);
+    i0.If(zero).Then();
+    m.Return(one);
+  }
+  Node* ten = m.Int32Constant(10);
+  Variable v0 =
+      m.NewVariable(zero);  // Introduce variable outside of start block.
+  {
+    Loop l0(&m);
+    Variable ret = m.NewVariable(zero);  // Introduce loop variable.
+    {
+      Loop l1(&m);
+      {
+        IfBuilder i1(&m);
+        i1.If(m.Word32Equal(v0.Get(), ten)).Then();
+        l1.Break();
+      }
+      Variable v1 = m.NewVariable(v0.Get());  // Introduce loop variable.
+      {
+        Loop l2(&m);
+        {
+          IfBuilder i2(&m);
+          i2.If(m.Word32Equal(v1.Get(), ten)).Then();
+          l2.Break();
+        }
+        Variable v2 = m.NewVariable(v1.Get());  // Introduce loop variable.
+        {
+          Loop l3(&m);
+          {
+            IfBuilder i3(&m);
+            i3.If(m.Word32Equal(v2.Get(), ten)).Then();
+            l3.Break();
+          }
+          ret.Set(m.Int32Add(ret.Get(), one));
+          v2.Set(m.Int32Add(v2.Get(), one));
+        }
+        ret.Set(m.Int32Add(ret.Get(), one));
+        v1.Set(m.Int32Add(v1.Get(), one));
+      }
+      ret.Set(m.Int32Add(ret.Get(), one));
+      v0.Set(m.Int32Add(v0.Get(), one));
+    }
+    m.Return(ret.Get());  // Return loop variable.
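+    // (The nested Loop/Variable structure above mirrors the plain C++
+    //  reference VariableIntroduction(); the CHECK below compares the two.)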
+ } + CHECK_EQ(VariableIntroduction(), m.Call()); +} + + +TEST(RunIfBuilderVariableLiveness) { + StructuredMachineAssemblerTester<int32_t> m; + typedef i::compiler::StructuredMachineAssemblerFriend F; + Node* zero = m.Int32Constant(0); + Variable v_outer = m.NewVariable(zero); + IfBuilder cond(&m); + cond.If(zero).Then(); + Variable v_then = m.NewVariable(zero); + CHECK(F::VariableAlive(&m, v_outer)); + CHECK(F::VariableAlive(&m, v_then)); + cond.Else(); + Variable v_else = m.NewVariable(zero); + CHECK(F::VariableAlive(&m, v_outer)); + CHECK(F::VariableAlive(&m, v_else)); + CHECK(!F::VariableAlive(&m, v_then)); + cond.End(); + CHECK(F::VariableAlive(&m, v_outer)); + CHECK(!F::VariableAlive(&m, v_then)); + CHECK(!F::VariableAlive(&m, v_else)); +} + + +TEST(RunSimpleExpression1) { + StructuredMachineAssemblerTester<int32_t> m; + + int32_t constant = 0x0c2974ef; + Node* zero = m.Int32Constant(0); + Node* one = m.Int32Constant(1); + { + // if (((1 && 1) && 1) && 1) return constant; return 0; + IfBuilder cond(&m); + cond.OpenParen(); + cond.OpenParen().If(one).And(); + cond.If(one).CloseParen().And(); + cond.If(one).CloseParen().And(); + cond.If(one).Then(); + m.Return(m.Int32Constant(constant)); + } + m.Return(zero); + + CHECK_EQ(constant, m.Call()); +} + + +TEST(RunSimpleExpression2) { + StructuredMachineAssemblerTester<int32_t> m; + + int32_t constant = 0x2eddc11b; + Node* zero = m.Int32Constant(0); + Node* one = m.Int32Constant(1); + { + // if (((0 || 1) && 1) && 1) return constant; return 0; + IfBuilder cond(&m); + cond.OpenParen(); + cond.OpenParen().If(zero).Or(); + cond.If(one).CloseParen().And(); + cond.If(one).CloseParen().And(); + cond.If(one).Then(); + m.Return(m.Int32Constant(constant)); + } + m.Return(zero); + + CHECK_EQ(constant, m.Call()); +} + + +TEST(RunSimpleExpression3) { + StructuredMachineAssemblerTester<int32_t> m; + + int32_t constant = 0x9ed5e9ef; + Node* zero = m.Int32Constant(0); + Node* one = m.Int32Constant(1); + { + // if (1 && ((0 || 1) && 1) && 1) return constant; return 0; + IfBuilder cond(&m); + cond.If(one).And(); + cond.OpenParen(); + cond.OpenParen().If(zero).Or(); + cond.If(one).CloseParen().And(); + cond.If(one).CloseParen().And(); + cond.If(one).Then(); + m.Return(m.Int32Constant(constant)); + } + m.Return(zero); + + CHECK_EQ(constant, m.Call()); +} + + +TEST(RunSimpleExpressionVariable1) { + StructuredMachineAssemblerTester<int32_t> m; + + int32_t constant = 0x4b40a986; + Node* one = m.Int32Constant(1); + Variable var = m.NewVariable(m.Int32Constant(constant)); + { + // if (var.Get() && ((!var || var) && var) && var) {} return var; + // incrementing var in each environment. 
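+    // (Every Set() below sits on the short-circuit path that is actually
+    //  evaluated for a truthy var, so var is incremented exactly four
+    //  times; hence the expected result of constant + 4.)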
+ IfBuilder cond(&m); + cond.If(var.Get()).And(); + var.Set(m.Int32Add(var.Get(), one)); + cond.OpenParen().OpenParen().If(m.Word32BinaryNot(var.Get())).Or(); + var.Set(m.Int32Add(var.Get(), one)); + cond.If(var.Get()).CloseParen().And(); + var.Set(m.Int32Add(var.Get(), one)); + cond.If(var.Get()).CloseParen().And(); + var.Set(m.Int32Add(var.Get(), one)); + cond.If(var.Get()); + } + m.Return(var.Get()); + + CHECK_EQ(constant + 4, m.Call()); +} + + +class QuicksortHelper : public StructuredMachineAssemblerTester<int32_t> { + public: + QuicksortHelper() + : StructuredMachineAssemblerTester<int32_t>( + MachineOperatorBuilder::pointer_rep(), kMachineWord32, + MachineOperatorBuilder::pointer_rep(), kMachineWord32), + input_(NULL), + stack_limit_(NULL), + one_(Int32Constant(1)), + stack_frame_size_(Int32Constant(kFrameVariables * 4)), + left_offset_(Int32Constant(0 * 4)), + right_offset_(Int32Constant(1 * 4)) { + Build(); + } + + int32_t DoCall(int32_t* input, int32_t input_length) { + int32_t stack_space[20]; + // Do call. + int32_t return_val = Call(input, input_length, stack_space, + static_cast<int32_t>(ARRAY_SIZE(stack_space))); + // Ran out of stack space. + if (return_val != 0) return return_val; + // Check sorted. + int32_t last = input[0]; + for (int32_t i = 0; i < input_length; i++) { + CHECK(last <= input[i]); + last = input[i]; + } + return return_val; + } + + private: + void Inc32(const Variable& var) { var.Set(Int32Add(var.Get(), one_)); } + Node* Index(Node* index) { return Word32Shl(index, Int32Constant(2)); } + Node* ArrayLoad(Node* index) { + return Load(kMachineWord32, input_, Index(index)); + } + void Swap(Node* a_index, Node* b_index) { + Node* a = ArrayLoad(a_index); + Node* b = ArrayLoad(b_index); + Store(kMachineWord32, input_, Index(a_index), b); + Store(kMachineWord32, input_, Index(b_index), a); + } + void AddToCallStack(const Variable& fp, Node* left, Node* right) { + { + // Stack limit check. + IfBuilder cond(this); + cond.If(IntPtrLessThanOrEqual(fp.Get(), stack_limit_)).Then(); + Return(Int32Constant(-1)); + } + Store(kMachineWord32, fp.Get(), left_offset_, left); + Store(kMachineWord32, fp.Get(), right_offset_, right); + fp.Set(IntPtrAdd(fp.Get(), ConvertInt32ToIntPtr(stack_frame_size_))); + } + void Build() { + Variable left = NewVariable(Int32Constant(0)); + Variable right = + NewVariable(Int32Sub(Parameter(kInputLengthParameter), one_)); + input_ = Parameter(kInputParameter); + Node* top_of_stack = Parameter(kStackParameter); + stack_limit_ = IntPtrSub( + top_of_stack, ConvertInt32ToIntPtr(Parameter(kStackLengthParameter))); + Variable fp = NewVariable(top_of_stack); + { + Loop outermost(this); + // Edge case - 2 element array. + { + IfBuilder cond(this); + cond.If(Word32Equal(left.Get(), Int32Sub(right.Get(), one_))).And(); + cond.If(Int32LessThanOrEqual(ArrayLoad(right.Get()), + ArrayLoad(left.Get()))).Then(); + Swap(left.Get(), right.Get()); + } + { + IfBuilder cond(this); + // Algorithm complete condition. + cond.If(WordEqual(top_of_stack, fp.Get())).And(); + cond.If(Int32LessThanOrEqual(Int32Sub(right.Get(), one_), left.Get())) + .Then(); + outermost.Break(); + // 'Recursion' exit condition. Pop frame and continue. + cond.Else(); + cond.If(Int32LessThanOrEqual(Int32Sub(right.Get(), one_), left.Get())) + .Then(); + fp.Set(IntPtrSub(fp.Get(), ConvertInt32ToIntPtr(stack_frame_size_))); + left.Set(Load(kMachineWord32, fp.Get(), left_offset_)); + right.Set(Load(kMachineWord32, fp.Get(), right_offset_)); + outermost.Continue(); + } + // Partition. 
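+      // (Lomuto-style scheme: the middle element is parked at the right end
+      //  as the pivot, values <= pivot are compacted at store_index, and a
+      //  final swap drops the pivot into its sorted position.)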
+      Variable store_index = NewVariable(left.Get());
+      {
+        Node* pivot_index =
+            Int32Div(Int32Add(left.Get(), right.Get()), Int32Constant(2));
+        Node* pivot = ArrayLoad(pivot_index);
+        Swap(pivot_index, right.Get());
+        Variable i = NewVariable(left.Get());
+        {
+          Loop partition(this);
+          {
+            IfBuilder cond(this);
+            // Partition complete.
+            cond.If(Word32Equal(i.Get(), right.Get())).Then();
+            partition.Break();
+            // Need swap.
+            cond.Else();
+            cond.If(Int32LessThanOrEqual(ArrayLoad(i.Get()), pivot)).Then();
+            Swap(i.Get(), store_index.Get());
+            Inc32(store_index);
+          }
+          Inc32(i);
+        }  // End partition loop.
+        Swap(store_index.Get(), right.Get());
+      }
+      // 'Recurse' left and right halves of partition.
+      // Tail recurse second one.
+      AddToCallStack(fp, left.Get(), Int32Sub(store_index.Get(), one_));
+      left.Set(Int32Add(store_index.Get(), one_));
+    }  // End outermost loop.
+    Return(Int32Constant(0));
+  }
+
+  static const int kFrameVariables = 2;  // left, right
+  // Parameter offsets.
+  static const int kInputParameter = 0;
+  static const int kInputLengthParameter = 1;
+  static const int kStackParameter = 2;
+  static const int kStackLengthParameter = 3;
+  // Function inputs.
+  Node* input_;
+  Node* stack_limit_;
+  // Constants.
+  Node* const one_;
+  // Frame constants.
+  Node* const stack_frame_size_;
+  Node* const left_offset_;
+  Node* const right_offset_;
+};
+
+
+TEST(RunSimpleQuicksort) {
+  QuicksortHelper m;
+  int32_t inputs[] = {9, 7, 1, 8, 11};
+  CHECK_EQ(0, m.DoCall(inputs, ARRAY_SIZE(inputs)));
+}
+
+
+TEST(RunRandomQuicksort) {
+  QuicksortHelper m;
+
+  v8::base::RandomNumberGenerator rng;
+  static const int kMaxLength = 40;
+  int32_t inputs[kMaxLength];
+
+  for (int length = 1; length < kMaxLength; length++) {
+    for (int i = 0; i < 70; i++) {
+      // Randomize inputs.
+      for (int j = 0; j < length; j++) {
+        inputs[j] = rng.NextInt(10) - 5;
+      }
+      CHECK_EQ(0, m.DoCall(inputs, length));
+    }
+  }
+}
+
+
+TEST(MultipleScopes) {
+  StructuredMachineAssemblerTester<int32_t> m;
+  for (int i = 0; i < 10; i++) {
+    IfBuilder b(&m);
+    b.If(m.Int32Constant(0)).Then();
+    m.NewVariable(m.Int32Constant(0));
+  }
+  m.Return(m.Int32Constant(0));
+  CHECK_EQ(0, m.Call());
+}
+
+#endif
diff --git a/deps/v8/test/cctest/compiler/value-helper.h b/deps/v8/test/cctest/compiler/value-helper.h
new file mode 100644
index 00000000000..5bfd7884d0e
--- /dev/null
+++ b/deps/v8/test/cctest/compiler/value-helper.h
@@ -0,0 +1,131 @@
+// Copyright 2014 the V8 project authors. All rights reserved.
+// Use of this source code is governed by a BSD-style license that can be
+// found in the LICENSE file.
+
+#ifndef V8_CCTEST_COMPILER_VALUE_HELPER_H_
+#define V8_CCTEST_COMPILER_VALUE_HELPER_H_
+
+#include "src/v8.h"
+
+#include "src/compiler/common-operator.h"
+#include "src/compiler/node.h"
+#include "src/compiler/node-matchers.h"
+#include "src/isolate.h"
+#include "src/objects.h"
+#include "test/cctest/cctest.h"
+
+namespace v8 {
+namespace internal {
+namespace compiler {
+
+// A collection of utilities related to numerical and heap values, including
+// example input values of various types, including int32_t, uint32_t, double,
+// etc.
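+//
+// A minimal usage sketch (hypothetical; `node` stands for a constant node
+// built elsewhere, and DoSomething for caller code):
+//
+//   ValueHelper h;
+//   h.CheckInt32Constant(42, node);
+//   FOR_INT32_INPUTS(i) { DoSomething(*i); }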
+class ValueHelper { + public: + Isolate* isolate_; + + ValueHelper() : isolate_(CcTest::InitIsolateOnce()) {} + + template <typename T> + void CheckConstant(T expected, Node* node) { + CHECK_EQ(expected, ValueOf<T>(node->op())); + } + + void CheckFloat64Constant(double expected, Node* node) { + CHECK_EQ(IrOpcode::kFloat64Constant, node->opcode()); + CHECK_EQ(expected, ValueOf<double>(node->op())); + } + + void CheckNumberConstant(double expected, Node* node) { + CHECK_EQ(IrOpcode::kNumberConstant, node->opcode()); + CHECK_EQ(expected, ValueOf<double>(node->op())); + } + + void CheckInt32Constant(int32_t expected, Node* node) { + CHECK_EQ(IrOpcode::kInt32Constant, node->opcode()); + CHECK_EQ(expected, ValueOf<int32_t>(node->op())); + } + + void CheckUint32Constant(int32_t expected, Node* node) { + CHECK_EQ(IrOpcode::kInt32Constant, node->opcode()); + CHECK_EQ(expected, ValueOf<uint32_t>(node->op())); + } + + void CheckHeapConstant(Object* expected, Node* node) { + CHECK_EQ(IrOpcode::kHeapConstant, node->opcode()); + CHECK_EQ(expected, *ValueOf<Handle<Object> >(node->op())); + } + + void CheckTrue(Node* node) { + CheckHeapConstant(isolate_->heap()->true_value(), node); + } + + void CheckFalse(Node* node) { + CheckHeapConstant(isolate_->heap()->false_value(), node); + } + + static std::vector<double> float64_vector() { + static const double nan = v8::base::OS::nan_value(); + static const double values[] = { + 0.125, 0.25, 0.375, 0.5, + 1.25, -1.75, 2, 5.125, + 6.25, 0.0, -0.0, 982983.25, + 888, 2147483647.0, -999.75, 3.1e7, + -2e66, 3e-88, -2147483648.0, V8_INFINITY, + -V8_INFINITY, nan, 2147483647.375, 2147483647.75, + 2147483648.0, 2147483648.25, 2147483649.25, -2147483647.0, + -2147483647.125, -2147483647.875, -2147483648.25, -2147483649.5}; + return std::vector<double>(&values[0], &values[ARRAY_SIZE(values)]); + } + + static const std::vector<int32_t> int32_vector() { + std::vector<uint32_t> values = uint32_vector(); + return std::vector<int32_t>(values.begin(), values.end()); + } + + static const std::vector<uint32_t> uint32_vector() { + static const uint32_t kValues[] = { + 0x00000000, 0x00000001, 0xffffffff, 0x1b09788b, 0x04c5fce8, 0xcc0de5bf, + 0x273a798e, 0x187937a3, 0xece3af83, 0x5495a16b, 0x0b668ecc, 0x11223344, + 0x0000009e, 0x00000043, 0x0000af73, 0x0000116b, 0x00658ecc, 0x002b3b4c, + 0x88776655, 0x70000000, 0x07200000, 0x7fffffff, 0x56123761, 0x7fffff00, + 0x761c4761, 0x80000000, 0x88888888, 0xa0000000, 0xdddddddd, 0xe0000000, + 0xeeeeeeee, 0xfffffffd, 0xf0000000, 0x007fffff, 0x003fffff, 0x001fffff, + 0x000fffff, 0x0007ffff, 0x0003ffff, 0x0001ffff, 0x0000ffff, 0x00007fff, + 0x00003fff, 0x00001fff, 0x00000fff, 0x000007ff, 0x000003ff, 0x000001ff}; + return std::vector<uint32_t>(&kValues[0], &kValues[ARRAY_SIZE(kValues)]); + } + + static const std::vector<double> nan_vector(size_t limit = 0) { + static const double nan = v8::base::OS::nan_value(); + static const double values[] = {-nan, -V8_INFINITY * -0.0, + -V8_INFINITY * 0.0, V8_INFINITY * -0.0, + V8_INFINITY * 0.0, nan}; + return std::vector<double>(&values[0], &values[ARRAY_SIZE(values)]); + } + + static const std::vector<uint32_t> ror_vector() { + static const uint32_t kValues[31] = { + 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, + 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31}; + return std::vector<uint32_t>(&kValues[0], &kValues[ARRAY_SIZE(kValues)]); + } +}; + +// Helper macros that can be used in FOR_INT32_INPUTS(i) { ... *i ... 
}
+// Watch out, these macros aren't hygienic; they pollute your scope. Thanks
+// STL.
+#define FOR_INPUTS(ctype, itype, var)                              \
+  std::vector<ctype> var##_vec = ValueHelper::itype##_vector();    \
+  for (std::vector<ctype>::iterator var = var##_vec.begin();       \
+       var != var##_vec.end(); ++var)
+
+#define FOR_INT32_INPUTS(var) FOR_INPUTS(int32_t, int32, var)
+#define FOR_UINT32_INPUTS(var) FOR_INPUTS(uint32_t, uint32, var)
+#define FOR_FLOAT64_INPUTS(var) FOR_INPUTS(double, float64, var)
+
+}  // namespace compiler
+}  // namespace internal
+}  // namespace v8
+
+#endif  // V8_CCTEST_COMPILER_VALUE_HELPER_H_
diff --git a/deps/v8/test/cctest/gay-fixed.cc b/deps/v8/test/cctest/gay-fixed.cc
index 071ea4f8c17..81463ac1fa4 100644
--- a/deps/v8/test/cctest/gay-fixed.cc
+++ b/deps/v8/test/cctest/gay-fixed.cc
@@ -29,9 +29,9 @@
 // have been generated using Gay's dtoa to produce the fixed representation:
 // dtoa(v, 3, number_digits, &decimal_point, &sign, NULL);
 
-#include "v8.h"
+#include "src/v8.h"
 
-#include "gay-fixed.h"
+#include "test/cctest/gay-fixed.h"
 
 namespace v8 {
 namespace internal {
diff --git a/deps/v8/test/cctest/gay-precision.cc b/deps/v8/test/cctest/gay-precision.cc
index c0e993509f3..6ab2715fea0 100644
--- a/deps/v8/test/cctest/gay-precision.cc
+++ b/deps/v8/test/cctest/gay-precision.cc
@@ -29,9 +29,9 @@
 // have been generated using Gay's dtoa to produce the precision representation:
 // dtoa(v, 2, number_digits, &decimal_point, &sign, NULL);
 
-#include "v8.h"
+#include "src/v8.h"
 
-#include "gay-precision.h"
+#include "test/cctest/gay-precision.h"
 
 namespace v8 {
 namespace internal {
diff --git a/deps/v8/test/cctest/gay-shortest.cc b/deps/v8/test/cctest/gay-shortest.cc
index d065e97782b..896ea4c5142 100644
--- a/deps/v8/test/cctest/gay-shortest.cc
+++ b/deps/v8/test/cctest/gay-shortest.cc
@@ -29,9 +29,9 @@
 // have been generated using Gay's dtoa to produce the shortest representation:
 // decimal_rep = dtoa(v, 0, 0, &decimal_point, &sign, NULL);
 
-#include "v8.h"
+#include "src/v8.h"
 
-#include "gay-shortest.h"
+#include "test/cctest/gay-shortest.h"
 
 namespace v8 {
 namespace internal {
diff --git a/deps/v8/test/cctest/print-extension.cc b/deps/v8/test/cctest/print-extension.cc
index 9f629195bd7..d1af3596e88 100644
--- a/deps/v8/test/cctest/print-extension.cc
+++ b/deps/v8/test/cctest/print-extension.cc
@@ -25,7 +25,7 @@
 // (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
 // OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
 
-#include "print-extension.h"
+#include "test/cctest/print-extension.h"
 
 namespace v8 {
 namespace internal {
diff --git a/deps/v8/test/cctest/print-extension.h b/deps/v8/test/cctest/print-extension.h
index 7fe9226f7b4..b0d2b1ca197 100644
--- a/deps/v8/test/cctest/print-extension.h
+++ b/deps/v8/test/cctest/print-extension.h
@@ -28,7 +28,7 @@
 #ifndef V8_TEST_CCTEST_PRINT_EXTENSION_H_
 #define V8_TEST_CCTEST_PRINT_EXTENSION_H_
 
-#include "v8.h"
+#include "src/v8.h"
 
 namespace v8 {
 namespace internal {
diff --git a/deps/v8/test/cctest/profiler-extension.cc b/deps/v8/test/cctest/profiler-extension.cc
index 1fdd1ba2478..263fc4f38d5 100644
--- a/deps/v8/test/cctest/profiler-extension.cc
+++ b/deps/v8/test/cctest/profiler-extension.cc
@@ -27,8 +27,8 @@
 //
 // Tests of profiles generator and utilities.
 
-#include "profiler-extension.h" -#include "checks.h" +#include "src/base/logging.h" +#include "test/cctest/profiler-extension.h" namespace v8 { namespace internal { diff --git a/deps/v8/test/cctest/profiler-extension.h b/deps/v8/test/cctest/profiler-extension.h index c26a29c39aa..6f816b33fba 100644 --- a/deps/v8/test/cctest/profiler-extension.h +++ b/deps/v8/test/cctest/profiler-extension.h @@ -30,7 +30,7 @@ #ifndef V8_TEST_CCTEST_PROFILER_EXTENSION_H_ #define V8_TEST_CCTEST_PROFILER_EXTENSION_H_ -#include "../include/v8-profiler.h" +#include "include/v8-profiler.h" namespace v8 { namespace internal { diff --git a/deps/v8/test/cctest/test-accessors.cc b/deps/v8/test/cctest/test-accessors.cc index daafb244e3d..5bf61c8fcac 100644 --- a/deps/v8/test/cctest/test-accessors.cc +++ b/deps/v8/test/cctest/test-accessors.cc @@ -27,12 +27,12 @@ #include <stdlib.h> -#include "v8.h" +#include "src/v8.h" -#include "api.h" -#include "cctest.h" -#include "frames-inl.h" -#include "string-stream.h" +#include "src/api.h" +#include "src/frames-inl.h" +#include "src/string-stream.h" +#include "test/cctest/cctest.h" using ::v8::ObjectTemplate; using ::v8::Value; @@ -229,54 +229,6 @@ THREADED_TEST(AccessorIC) { } -static void AccessorProhibitsOverwritingGetter( - Local<String> name, - const v8::PropertyCallbackInfo<v8::Value>& info) { - ApiTestFuzzer::Fuzz(); - info.GetReturnValue().Set(true); -} - - -THREADED_TEST(AccessorProhibitsOverwriting) { - LocalContext context; - v8::Isolate* isolate = context->GetIsolate(); - v8::HandleScope scope(isolate); - Local<ObjectTemplate> templ = ObjectTemplate::New(isolate); - templ->SetAccessor(v8_str("x"), - AccessorProhibitsOverwritingGetter, - 0, - v8::Handle<Value>(), - v8::PROHIBITS_OVERWRITING, - v8::ReadOnly); - Local<v8::Object> instance = templ->NewInstance(); - context->Global()->Set(v8_str("obj"), instance); - Local<Value> value = CompileRun( - "obj.__defineGetter__('x', function() { return false; });" - "obj.x"); - CHECK(value->BooleanValue()); - value = CompileRun( - "var setter_called = false;" - "obj.__defineSetter__('x', function() { setter_called = true; });" - "obj.x = 42;" - "setter_called"); - CHECK(!value->BooleanValue()); - value = CompileRun( - "obj2 = {};" - "obj2.__proto__ = obj;" - "obj2.__defineGetter__('x', function() { return false; });" - "obj2.x"); - CHECK(value->BooleanValue()); - value = CompileRun( - "var setter_called = false;" - "obj2 = {};" - "obj2.__proto__ = obj;" - "obj2.__defineSetter__('x', function() { setter_called = true; });" - "obj2.x = 42;" - "setter_called"); - CHECK(!value->BooleanValue()); -} - - template <int C> static void HandleAllocatingGetter( Local<String> name, diff --git a/deps/v8/test/cctest/test-alloc.cc b/deps/v8/test/cctest/test-alloc.cc index 7a213ae4b52..314c3c14790 100644 --- a/deps/v8/test/cctest/test-alloc.cc +++ b/deps/v8/test/cctest/test-alloc.cc @@ -25,11 +25,11 @@ // (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE // OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. -#include "v8.h" -#include "accessors.h" -#include "api.h" +#include "src/v8.h" +#include "test/cctest/cctest.h" -#include "cctest.h" +#include "src/accessors.h" +#include "src/api.h" using namespace v8::internal; @@ -50,7 +50,6 @@ static AllocationResult AllocateAfterFailures() { // for specific kinds. 
heap->AllocateFixedArray(100).ToObjectChecked(); heap->AllocateHeapNumber(0.42).ToObjectChecked(); - heap->AllocateArgumentsObject(Smi::FromInt(87), 10).ToObjectChecked(); Object* object = heap->AllocateJSObject( *CcTest::i_isolate()->object_function()).ToObjectChecked(); heap->CopyJSObject(JSObject::cast(object)).ToObjectChecked(); @@ -67,7 +66,7 @@ static AllocationResult AllocateAfterFailures() { static const int kLargeObjectSpaceFillerLength = 300000; static const int kLargeObjectSpaceFillerSize = FixedArray::SizeFor( kLargeObjectSpaceFillerLength); - ASSERT(kLargeObjectSpaceFillerSize > heap->old_pointer_space()->AreaSize()); + DCHECK(kLargeObjectSpaceFillerSize > heap->old_pointer_space()->AreaSize()); while (heap->OldGenerationSpaceAvailable() > kLargeObjectSpaceFillerSize) { heap->AllocateFixedArray( kLargeObjectSpaceFillerLength, TENURED).ToObjectChecked(); @@ -137,8 +136,8 @@ TEST(StressJS) { v8::HandleScope scope(CcTest::isolate()); v8::Handle<v8::Context> env = v8::Context::New(CcTest::isolate()); env->Enter(); - Handle<JSFunction> function = factory->NewFunctionWithPrototype( - factory->function_string(), factory->null_value()); + Handle<JSFunction> function = factory->NewFunction( + factory->function_string()); // Force the creation of an initial map and set the code to // something empty. factory->NewJSObject(function); @@ -147,7 +146,7 @@ TEST(StressJS) { // Patch the map to have an accessor for "get". Handle<Map> map(function->initial_map()); Handle<DescriptorArray> instance_descriptors(map->instance_descriptors()); - ASSERT(instance_descriptors->IsEmpty()); + DCHECK(instance_descriptors->IsEmpty()); PropertyAttributes attrs = static_cast<PropertyAttributes>(0); Handle<AccessorInfo> foreign = TestAccessorInfo(isolate, attrs); @@ -196,13 +195,13 @@ class Block { TEST(CodeRange) { - const int code_range_size = 32*MB; + const size_t code_range_size = 32*MB; CcTest::InitializeVM(); CodeRange code_range(reinterpret_cast<Isolate*>(CcTest::isolate())); code_range.SetUp(code_range_size); - int current_allocated = 0; - int total_allocated = 0; - List<Block> blocks(1000); + size_t current_allocated = 0; + size_t total_allocated = 0; + List< ::Block> blocks(1000); while (total_allocated < 5 * code_range_size) { if (current_allocated < code_range_size / 10) { @@ -219,7 +218,7 @@ TEST(CodeRange) { requested, &allocated); CHECK(base != NULL); - blocks.Add(Block(base, static_cast<int>(allocated))); + blocks.Add(::Block(base, static_cast<int>(allocated))); current_allocated += static_cast<int>(allocated); total_allocated += static_cast<int>(allocated); } else { diff --git a/deps/v8/test/cctest/test-api.cc b/deps/v8/test/cctest/test-api.cc index 14df05a8e81..9ddc9db71df 100644 --- a/deps/v8/test/cctest/test-api.cc +++ b/deps/v8/test/cctest/test-api.cc @@ -27,30 +27,30 @@ #include <climits> #include <csignal> -#include <string> #include <map> +#include <string> -#include "v8.h" +#include "src/v8.h" #if V8_OS_POSIX #include <unistd.h> // NOLINT #endif -#include "api.h" -#include "arguments.h" -#include "cctest.h" -#include "compilation-cache.h" -#include "cpu-profiler.h" -#include "execution.h" -#include "isolate.h" -#include "objects.h" -#include "parser.h" -#include "platform.h" -#include "snapshot.h" -#include "unicode-inl.h" -#include "utils.h" -#include "vm-state.h" -#include "../include/v8-util.h" +#include "include/v8-util.h" +#include "src/api.h" +#include "src/arguments.h" +#include "src/base/platform/platform.h" +#include "src/compilation-cache.h" +#include "src/cpu-profiler.h" 
+#include "src/execution.h" +#include "src/isolate.h" +#include "src/objects.h" +#include "src/parser.h" +#include "src/snapshot.h" +#include "src/unicode-inl.h" +#include "src/utils.h" +#include "src/vm-state.h" +#include "test/cctest/cctest.h" static const bool kLogThreading = false; @@ -99,50 +99,6 @@ void RunWithProfiler(void (*test)()) { } -static void ExpectString(const char* code, const char* expected) { - Local<Value> result = CompileRun(code); - CHECK(result->IsString()); - String::Utf8Value utf8(result); - CHECK_EQ(expected, *utf8); -} - - -static void ExpectInt32(const char* code, int expected) { - Local<Value> result = CompileRun(code); - CHECK(result->IsInt32()); - CHECK_EQ(expected, result->Int32Value()); -} - - -static void ExpectBoolean(const char* code, bool expected) { - Local<Value> result = CompileRun(code); - CHECK(result->IsBoolean()); - CHECK_EQ(expected, result->BooleanValue()); -} - - -static void ExpectTrue(const char* code) { - ExpectBoolean(code, true); -} - - -static void ExpectFalse(const char* code) { - ExpectBoolean(code, false); -} - - -static void ExpectObject(const char* code, Local<Value> expected) { - Local<Value> result = CompileRun(code); - CHECK(result->Equals(expected)); -} - - -static void ExpectUndefined(const char* code) { - Local<Value> result = CompileRun(code); - CHECK(result->IsUndefined()); -} - - static int signature_callback_count; static Local<Value> signature_expected_receiver; static void IncrementingSignatureCallback( @@ -230,11 +186,11 @@ THREADED_TEST(IsolateOfContext) { static void TestSignature(const char* loop_js, Local<Value> receiver) { i::ScopedVector<char> source(200); - i::OS::SNPrintF(source, - "for (var i = 0; i < 10; i++) {" - " %s" - "}", - loop_js); + i::SNPrintF(source, + "for (var i = 0; i < 10; i++) {" + " %s" + "}", + loop_js); signature_callback_count = 0; signature_expected_receiver = receiver; bool expected_to_throw = receiver.IsEmpty(); @@ -307,7 +263,7 @@ THREADED_TEST(ReceiverSignature) { unsigned bad_signature_start_offset = 2; for (unsigned i = 0; i < ARRAY_SIZE(test_objects); i++) { i::ScopedVector<char> source(200); - i::OS::SNPrintF( + i::SNPrintF( source, "var test_object = %s; test_object", test_objects[i]); Local<Value> test_object = CompileRun(source.start()); TestSignature("test_object.prop();", test_object); @@ -451,17 +407,10 @@ THREADED_TEST(Script) { } -static uint16_t* AsciiToTwoByteString(const char* source) { - int array_length = i::StrLength(source) + 1; - uint16_t* converted = i::NewArray<uint16_t>(array_length); - for (int i = 0; i < array_length; i++) converted[i] = source[i]; - return converted; -} - - class TestResource: public String::ExternalStringResource { public: - TestResource(uint16_t* data, int* counter = NULL, bool owning_data = true) + explicit TestResource(uint16_t* data, int* counter = NULL, + bool owning_data = true) : data_(data), length_(0), counter_(counter), owning_data_(owning_data) { while (data[length_]) ++length_; } @@ -489,11 +438,12 @@ class TestResource: public String::ExternalStringResource { class TestAsciiResource: public String::ExternalAsciiStringResource { public: - TestAsciiResource(const char* data, int* counter = NULL, size_t offset = 0) + explicit TestAsciiResource(const char* data, int* counter = NULL, + size_t offset = 0) : orig_data_(data), data_(data + offset), length_(strlen(data) - offset), - counter_(counter) { } + counter_(counter) {} ~TestAsciiResource() { i::DeleteArray(orig_data_); @@ -2004,6 +1954,27 @@ 
THREADED_TEST(EmptyInterceptorDoesNotShadowAccessors) { } +THREADED_TEST(ExecutableAccessorIsPreservedOnAttributeChange) { + v8::Isolate* isolate = CcTest::isolate(); + v8::HandleScope scope(isolate); + LocalContext env; + v8::Local<v8::Value> res = CompileRun("var a = []; a;"); + i::Handle<i::JSObject> a(v8::Utils::OpenHandle(v8::Object::Cast(*res))); + CHECK(a->map()->instance_descriptors()->IsFixedArray()); + CHECK_GT(i::FixedArray::cast(a->map()->instance_descriptors())->length(), 0); + CompileRun("Object.defineProperty(a, 'length', { writable: false });"); + CHECK_EQ(i::FixedArray::cast(a->map()->instance_descriptors())->length(), 0); + // But we should still have an ExecutableAccessorInfo. + i::Isolate* i_isolate = reinterpret_cast<i::Isolate*>(isolate); + i::LookupResult lookup(i_isolate); + i::Handle<i::String> name(v8::Utils::OpenHandle(*v8_str("length"))); + a->LookupOwnRealNamedProperty(name, &lookup); + CHECK(lookup.IsPropertyCallbacks()); + i::Handle<i::Object> callback(lookup.GetCallbackObject(), i_isolate); + CHECK(callback->IsExecutableAccessorInfo()); +} + + THREADED_TEST(EmptyInterceptorBreakTransitions) { v8::HandleScope scope(CcTest::isolate()); Handle<FunctionTemplate> templ = FunctionTemplate::New(CcTest::isolate()); @@ -2768,8 +2739,6 @@ THREADED_TEST(GlobalProxyIdentityHash) { THREADED_TEST(SymbolProperties) { - i::FLAG_harmony_symbols = true; - LocalContext env; v8::Isolate* isolate = env->GetIsolate(); v8::HandleScope scope(isolate); @@ -2918,8 +2887,6 @@ THREADED_TEST(PrivateProperties) { THREADED_TEST(GlobalSymbols) { - i::FLAG_harmony_symbols = true; - LocalContext env; v8::Isolate* isolate = env->GetIsolate(); v8::HandleScope scope(isolate); @@ -3003,7 +2970,7 @@ THREADED_TEST(ArrayBuffer_ApiInternalToExternal) { CHECK_EQ(1024, static_cast<int>(ab_contents.ByteLength())); uint8_t* data = static_cast<uint8_t*>(ab_contents.Data()); - ASSERT(data != NULL); + DCHECK(data != NULL); env->Global()->Set(v8_str("ab"), ab); v8::Handle<v8::Value> result = CompileRun("ab.byteLength"); @@ -3112,7 +3079,7 @@ static void CheckIsNeutered(v8::Handle<v8::TypedArray> ta) { static void CheckIsTypedArrayVarNeutered(const char* name) { i::ScopedVector<char> source(1024); - i::OS::SNPrintF(source, + i::SNPrintF(source, "%s.byteLength == 0 && %s.byteOffset == 0 && %s.length == 0", name, name, name); CHECK(CompileRun(source.start())->IsTrue()); @@ -4184,7 +4151,7 @@ bool message_received; static void check_message_0(v8::Handle<v8::Message> message, v8::Handle<Value> data) { CHECK_EQ(5.76, data->NumberValue()); - CHECK_EQ(6.75, message->GetScriptResourceName()->NumberValue()); + CHECK_EQ(6.75, message->GetScriptOrigin().ResourceName()->NumberValue()); CHECK(!message->IsSharedCrossOrigin()); message_received = true; } @@ -4258,7 +4225,7 @@ TEST(MessageHandler2) { static void check_message_3(v8::Handle<v8::Message> message, v8::Handle<Value> data) { CHECK(message->IsSharedCrossOrigin()); - CHECK_EQ(6.75, message->GetScriptResourceName()->NumberValue()); + CHECK_EQ(6.75, message->GetScriptOrigin().ResourceName()->NumberValue()); message_received = true; } @@ -4287,7 +4254,7 @@ TEST(MessageHandler3) { static void check_message_4(v8::Handle<v8::Message> message, v8::Handle<Value> data) { CHECK(!message->IsSharedCrossOrigin()); - CHECK_EQ(6.75, message->GetScriptResourceName()->NumberValue()); + CHECK_EQ(6.75, message->GetScriptOrigin().ResourceName()->NumberValue()); message_received = true; } @@ -4316,7 +4283,7 @@ TEST(MessageHandler4) { static void check_message_5a(v8::Handle<v8::Message> 
message, v8::Handle<Value> data) { CHECK(message->IsSharedCrossOrigin()); - CHECK_EQ(6.75, message->GetScriptResourceName()->NumberValue()); + CHECK_EQ(6.75, message->GetScriptOrigin().ResourceName()->NumberValue()); message_received = true; } @@ -4324,7 +4291,7 @@ static void check_message_5a(v8::Handle<v8::Message> message, static void check_message_5b(v8::Handle<v8::Message> message, v8::Handle<Value> data) { CHECK(!message->IsSharedCrossOrigin()); - CHECK_EQ(6.75, message->GetScriptResourceName()->NumberValue()); + CHECK_EQ(6.75, message->GetScriptOrigin().ResourceName()->NumberValue()); message_received = true; } @@ -4404,7 +4371,7 @@ THREADED_TEST(PropertyAttributes) { CHECK_EQ(v8::None, context->Global()->GetPropertyAttributes(prop)); // read-only prop = v8_str("read_only"); - context->Global()->Set(prop, v8_num(7), v8::ReadOnly); + context->Global()->ForceSet(prop, v8_num(7), v8::ReadOnly); CHECK_EQ(7, context->Global()->Get(prop)->Int32Value()); CHECK_EQ(v8::ReadOnly, context->Global()->GetPropertyAttributes(prop)); CompileRun("read_only = 9"); @@ -4413,14 +4380,14 @@ THREADED_TEST(PropertyAttributes) { CHECK_EQ(7, context->Global()->Get(prop)->Int32Value()); // dont-delete prop = v8_str("dont_delete"); - context->Global()->Set(prop, v8_num(13), v8::DontDelete); + context->Global()->ForceSet(prop, v8_num(13), v8::DontDelete); CHECK_EQ(13, context->Global()->Get(prop)->Int32Value()); CompileRun("delete dont_delete"); CHECK_EQ(13, context->Global()->Get(prop)->Int32Value()); CHECK_EQ(v8::DontDelete, context->Global()->GetPropertyAttributes(prop)); // dont-enum prop = v8_str("dont_enum"); - context->Global()->Set(prop, v8_num(28), v8::DontEnum); + context->Global()->ForceSet(prop, v8_num(28), v8::DontEnum); CHECK_EQ(v8::DontEnum, context->Global()->GetPropertyAttributes(prop)); // absent prop = v8_str("absent"); @@ -5343,15 +5310,28 @@ THREADED_TEST(TryCatchAndFinally) { } -static void TryCatchNestedHelper(int depth) { +static void TryCatchNested1Helper(int depth) { + if (depth > 0) { + v8::TryCatch try_catch; + try_catch.SetVerbose(true); + TryCatchNested1Helper(depth - 1); + CHECK(try_catch.HasCaught()); + try_catch.ReThrow(); + } else { + CcTest::isolate()->ThrowException(v8_str("E1")); + } +} + + +static void TryCatchNested2Helper(int depth) { if (depth > 0) { v8::TryCatch try_catch; try_catch.SetVerbose(true); - TryCatchNestedHelper(depth - 1); + TryCatchNested2Helper(depth - 1); CHECK(try_catch.HasCaught()); try_catch.ReThrow(); } else { - CcTest::isolate()->ThrowException(v8_str("back")); + CompileRun("throw 'E2';"); } } @@ -5360,17 +5340,29 @@ TEST(TryCatchNested) { v8::V8::Initialize(); LocalContext context; v8::HandleScope scope(context->GetIsolate()); - v8::TryCatch try_catch; - TryCatchNestedHelper(5); - CHECK(try_catch.HasCaught()); - CHECK_EQ(0, strcmp(*v8::String::Utf8Value(try_catch.Exception()), "back")); + + { + // Test nested try-catch with a native throw in the end. + v8::TryCatch try_catch; + TryCatchNested1Helper(5); + CHECK(try_catch.HasCaught()); + CHECK_EQ(0, strcmp(*v8::String::Utf8Value(try_catch.Exception()), "E1")); + } + + { + // Test nested try-catch with a JavaScript throw in the end. 
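+    // (Unlike the first case, the innermost frame here throws from
+    //  JavaScript via CompileRun; every nested TryCatch must still catch
+    //  the exception and ReThrow it outward.)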
+ v8::TryCatch try_catch; + TryCatchNested2Helper(5); + CHECK(try_catch.HasCaught()); + CHECK_EQ(0, strcmp(*v8::String::Utf8Value(try_catch.Exception()), "E2")); + } } void TryCatchMixedNestingCheck(v8::TryCatch* try_catch) { CHECK(try_catch->HasCaught()); Handle<Message> message = try_catch->Message(); - Handle<Value> resource = message->GetScriptResourceName(); + Handle<Value> resource = message->GetScriptOrigin().ResourceName(); CHECK_EQ(0, strcmp(*v8::String::Utf8Value(resource), "inner")); CHECK_EQ(0, strcmp(*v8::String::Utf8Value(message->Get()), "Uncaught Error: a")); @@ -5409,6 +5401,53 @@ TEST(TryCatchMixedNesting) { } +void TryCatchNativeHelper(const v8::FunctionCallbackInfo<v8::Value>& args) { + ApiTestFuzzer::Fuzz(); + v8::TryCatch try_catch; + args.GetIsolate()->ThrowException(v8_str("boom")); + CHECK(try_catch.HasCaught()); +} + + +TEST(TryCatchNative) { + v8::Isolate* isolate = CcTest::isolate(); + v8::HandleScope scope(isolate); + v8::V8::Initialize(); + v8::TryCatch try_catch; + Local<ObjectTemplate> templ = ObjectTemplate::New(isolate); + templ->Set(v8_str("TryCatchNativeHelper"), + v8::FunctionTemplate::New(isolate, TryCatchNativeHelper)); + LocalContext context(0, templ); + CompileRun("TryCatchNativeHelper();"); + CHECK(!try_catch.HasCaught()); +} + + +void TryCatchNativeResetHelper( + const v8::FunctionCallbackInfo<v8::Value>& args) { + ApiTestFuzzer::Fuzz(); + v8::TryCatch try_catch; + args.GetIsolate()->ThrowException(v8_str("boom")); + CHECK(try_catch.HasCaught()); + try_catch.Reset(); + CHECK(!try_catch.HasCaught()); +} + + +TEST(TryCatchNativeReset) { + v8::Isolate* isolate = CcTest::isolate(); + v8::HandleScope scope(isolate); + v8::V8::Initialize(); + v8::TryCatch try_catch; + Local<ObjectTemplate> templ = ObjectTemplate::New(isolate); + templ->Set(v8_str("TryCatchNativeResetHelper"), + v8::FunctionTemplate::New(isolate, TryCatchNativeResetHelper)); + LocalContext context(0, templ); + CompileRun("TryCatchNativeResetHelper();"); + CHECK(!try_catch.HasCaught()); +} + + THREADED_TEST(Equality) { LocalContext context; v8::Isolate* isolate = context->GetIsolate(); @@ -5430,7 +5469,7 @@ THREADED_TEST(Equality) { CHECK(v8_num(1)->StrictEquals(v8_num(1))); CHECK(!v8_num(1)->StrictEquals(v8_num(2))); CHECK(v8_num(0.0)->StrictEquals(v8_num(-0.0))); - Local<Value> not_a_number = v8_num(i::OS::nan_value()); + Local<Value> not_a_number = v8_num(v8::base::OS::nan_value()); CHECK(!not_a_number->StrictEquals(not_a_number)); CHECK(v8::False(isolate)->StrictEquals(v8::False(isolate))); CHECK(!v8::False(isolate)->StrictEquals(v8::Undefined(isolate))); @@ -6162,15 +6201,17 @@ THREADED_TEST(IndexedInterceptorWithAccessorCheck) { context->Global()->Set(v8_str("obj"), obj); const char* code = - "try {" - " for (var i = 0; i < 100; i++) {" + "var result = 'PASSED';" + "for (var i = 0; i < 100; i++) {" + " try {" " var v = obj[0];" - " if (v != undefined) throw 'Wrong value ' + v + ' at iteration ' + i;" + " result = 'Wrong value ' + v + ' at iteration ' + i;" + " break;" + " } catch (e) {" + " /* pass */" " }" - " 'PASSED'" - "} catch(e) {" - " e" - "}"; + "}" + "result"; ExpectString(code, "PASSED"); } @@ -6187,21 +6228,29 @@ THREADED_TEST(IndexedInterceptorWithAccessorCheckSwitchedOn) { context->Global()->Set(v8_str("obj"), obj); const char* code = - "try {" - " for (var i = 0; i < 100; i++) {" - " var expected = i;" + "var result = 'PASSED';" + "for (var i = 0; i < 100; i++) {" + " var expected = i;" + " if (i == 5) {" + " %EnableAccessChecks(obj);" + " }" + " try {" + " var v = 
obj[i];" " if (i == 5) {" - " %EnableAccessChecks(obj);" - " expected = undefined;" + " result = 'Should not have reached this!';" + " break;" + " } else if (v != expected) {" + " result = 'Wrong value ' + v + ' at iteration ' + i;" + " break;" + " }" + " } catch (e) {" + " if (i != 5) {" + " result = e;" " }" - " var v = obj[i];" - " if (v != expected) throw 'Wrong value ' + v + ' at iteration ' + i;" - " if (i == 5) %DisableAccessChecks(obj);" " }" - " 'PASSED'" - "} catch(e) {" - " e" - "}"; + " if (i == 5) %DisableAccessChecks(obj);" + "}" + "result"; ExpectString(code, "PASSED"); } @@ -6670,9 +6719,6 @@ TEST(UndetectableOptimized) { } -template <typename T> static void USE(T) { } - - // The point of this test is type checking. We run it only so compilers // don't complain about an unused function. TEST(PersistentHandles) { @@ -6728,6 +6774,33 @@ TEST(SimpleExtensions) { } +static const char* kStackTraceFromExtensionSource = + "function foo() {" + " throw new Error();" + "}" + "function bar() {" + " foo();" + "}"; + + +TEST(StackTraceInExtension) { + v8::HandleScope handle_scope(CcTest::isolate()); + v8::RegisterExtension(new Extension("stacktracetest", + kStackTraceFromExtensionSource)); + const char* extension_names[] = { "stacktracetest" }; + v8::ExtensionConfiguration extensions(1, extension_names); + v8::Handle<Context> context = + Context::New(CcTest::isolate(), &extensions); + Context::Scope lock(context); + CompileRun("function user() { bar(); }" + "var error;" + "try{ user(); } catch (e) { error = e; }"); + CHECK_EQ(-1, CompileRun("error.stack.indexOf('foo')")->Int32Value()); + CHECK_EQ(-1, CompileRun("error.stack.indexOf('bar')")->Int32Value()); + CHECK_NE(-1, CompileRun("error.stack.indexOf('user')")->Int32Value()); +} + + TEST(NullExtensions) { v8::HandleScope handle_scope(CcTest::isolate()); v8::RegisterExtension(new Extension("nulltest", NULL)); @@ -6764,7 +6837,7 @@ TEST(ExtensionWithSourceLength) { source_len <= kEmbeddedExtensionSourceValidLen + 1; ++source_len) { v8::HandleScope handle_scope(CcTest::isolate()); i::ScopedVector<char> extension_name(32); - i::OS::SNPrintF(extension_name, "ext #%d", source_len); + i::SNPrintF(extension_name, "ext #%d", source_len); v8::RegisterExtension(new Extension(extension_name.start(), kEmbeddedExtensionSource, 0, 0, source_len)); @@ -7144,8 +7217,9 @@ TEST(ErrorReporting) { static void MissingScriptInfoMessageListener(v8::Handle<v8::Message> message, v8::Handle<Value> data) { - CHECK(message->GetScriptResourceName()->IsUndefined()); - CHECK_EQ(v8::Undefined(CcTest::isolate()), message->GetScriptResourceName()); + CHECK(message->GetScriptOrigin().ResourceName()->IsUndefined()); + CHECK_EQ(v8::Undefined(CcTest::isolate()), + message->GetScriptOrigin().ResourceName()); message->GetLineNumber(); message->GetSourceLine(); } @@ -7924,7 +7998,7 @@ THREADED_TEST(StringWrite) { static void Utf16Helper( - LocalContext& context, + LocalContext& context, // NOLINT const char* name, const char* lengths_name, int len) { @@ -7951,7 +8025,7 @@ static uint16_t StringGet(Handle<String> str, int index) { static void WriteUtf8Helper( - LocalContext& context, + LocalContext& context, // NOLINT const char* name, const char* lengths_name, int len) { @@ -8330,9 +8404,9 @@ TEST(ApiUncaughtException) { static const char* script_resource_name = "ExceptionInNativeScript.js"; static void ExceptionInNativeScriptTestListener(v8::Handle<v8::Message> message, v8::Handle<Value>) { - v8::Handle<v8::Value> name_val = message->GetScriptResourceName(); + 
v8::Handle<v8::Value> name_val = message->GetScriptOrigin().ResourceName(); CHECK(!name_val.IsEmpty() && name_val->IsString()); - v8::String::Utf8Value name(message->GetScriptResourceName()); + v8::String::Utf8Value name(message->GetScriptOrigin().ResourceName()); CHECK_EQ(script_resource_name, *name); CHECK_EQ(3, message->GetLineNumber()); v8::String::Utf8Value source_line(message->GetSourceLine()); @@ -8396,6 +8470,41 @@ TEST(TryCatchFinallyUsingTryCatchHandler) { } +void CEvaluate(const v8::FunctionCallbackInfo<v8::Value>& args) { + v8::HandleScope scope(args.GetIsolate()); + CompileRun(args[0]->ToString()); +} + + +TEST(TryCatchFinallyStoresMessageUsingTryCatchHandler) { + v8::Isolate* isolate = CcTest::isolate(); + v8::HandleScope scope(isolate); + Local<ObjectTemplate> templ = ObjectTemplate::New(isolate); + templ->Set(v8_str("CEvaluate"), + v8::FunctionTemplate::New(isolate, CEvaluate)); + LocalContext context(0, templ); + v8::TryCatch try_catch; + CompileRun("try {" + " CEvaluate('throw 1;');" + "} finally {" + "}"); + CHECK(try_catch.HasCaught()); + CHECK(!try_catch.Message().IsEmpty()); + String::Utf8Value exception_value(try_catch.Exception()); + CHECK_EQ(*exception_value, "1"); + try_catch.Reset(); + CompileRun("try {" + " CEvaluate('throw 1;');" + "} finally {" + " throw 2;" + "}"); + CHECK(try_catch.HasCaught()); + CHECK(!try_catch.Message().IsEmpty()); + String::Utf8Value finally_exception_value(try_catch.Exception()); + CHECK_EQ(*finally_exception_value, "2"); +} + + // For use within the TestSecurityHandler() test. static bool g_security_callback_result = false; static bool NamedSecurityTestCallback(Local<v8::Object> global, @@ -8552,10 +8661,8 @@ THREADED_TEST(SecurityChecksForPrototypeChain) { v8::Local<Script> access_other0 = v8_compile("other.Object"); v8::Local<Script> access_other1 = v8_compile("other[42]"); for (int i = 0; i < 5; i++) { - CHECK(!access_other0->Run()->Equals(other_object)); - CHECK(access_other0->Run()->IsUndefined()); - CHECK(!access_other1->Run()->Equals(v8_num(87))); - CHECK(access_other1->Run()->IsUndefined()); + CHECK(access_other0->Run().IsEmpty()); + CHECK(access_other1->Run().IsEmpty()); } // Create an object that has 'other' in its prototype chain and make @@ -8567,10 +8674,8 @@ THREADED_TEST(SecurityChecksForPrototypeChain) { v8::Local<Script> access_f0 = v8_compile("f.Object"); v8::Local<Script> access_f1 = v8_compile("f[42]"); for (int j = 0; j < 5; j++) { - CHECK(!access_f0->Run()->Equals(other_object)); - CHECK(access_f0->Run()->IsUndefined()); - CHECK(!access_f1->Run()->Equals(v8_num(87))); - CHECK(access_f1->Run()->IsUndefined()); + CHECK(access_f0->Run().IsEmpty()); + CHECK(access_f1->Run().IsEmpty()); } // Now it gets hairy: Set the prototype for the other global object @@ -8589,10 +8694,8 @@ THREADED_TEST(SecurityChecksForPrototypeChain) { Local<Script> access_f2 = v8_compile("f.foo"); Local<Script> access_f3 = v8_compile("f[99]"); for (int k = 0; k < 5; k++) { - CHECK(!access_f2->Run()->Equals(v8_num(100))); - CHECK(access_f2->Run()->IsUndefined()); - CHECK(!access_f3->Run()->Equals(v8_num(101))); - CHECK(access_f3->Run()->IsUndefined()); + CHECK(access_f2->Run().IsEmpty()); + CHECK(access_f3->Run().IsEmpty()); } } @@ -8673,7 +8776,7 @@ THREADED_TEST(CrossDomainDelete) { Context::Scope scope_env2(env2); Local<Value> result = CompileRun("delete env1.prop"); - CHECK(result->IsFalse()); + CHECK(result.IsEmpty()); } // Check that env1.prop still exists. 
@@ -8711,7 +8814,7 @@ THREADED_TEST(CrossDomainIsPropertyEnumerable) { { Context::Scope scope_env2(env2); Local<Value> result = CompileRun(test); - CHECK(result->IsFalse()); + CHECK(result.IsEmpty()); } } @@ -8738,11 +8841,18 @@ THREADED_TEST(CrossDomainForIn) { env2->SetSecurityToken(bar); { Context::Scope scope_env2(env2); - Local<Value> result = - CompileRun("(function(){var obj = {'__proto__':env1};" - "for (var p in obj)" - " if (p == 'prop') return false;" - "return true;})()"); + Local<Value> result = CompileRun( + "(function() {" + " var obj = { '__proto__': env1 };" + " try {" + " for (var p in obj) {" + " if (p == 'prop') return false;" + " }" + " return false;" + " } catch (e) {" + " return true;" + " }" + "})()"); CHECK(result->IsTrue()); } } @@ -8804,7 +8914,7 @@ TEST(ContextDetachGlobal) { // Check that env3 is not accessible from env1 { Local<Value> r = global3->Get(v8_str("prop2")); - CHECK(r->IsUndefined()); + CHECK(r.IsEmpty()); } } @@ -8843,7 +8953,7 @@ TEST(DetachGlobal) { // Check that the global has been detached. No other.p property can // be found. result = CompileRun("other.p"); - CHECK(result->IsUndefined()); + CHECK(result.IsEmpty()); // Reuse global2 for env3. v8::Handle<Context> env3 = Context::New(env1->GetIsolate(), @@ -8873,7 +8983,7 @@ TEST(DetachGlobal) { // the global object for env3 which has a different security token, // so access should be blocked. result = CompileRun("other.p"); - CHECK(result->IsUndefined()); + CHECK(result.IsEmpty()); } @@ -8926,9 +9036,9 @@ TEST(DetachedAccesses) { result = CompileRun("bound_x()"); CHECK_EQ(v8_str("env2_x"), result); result = CompileRun("get_x()"); - CHECK(result->IsUndefined()); + CHECK(result.IsEmpty()); result = CompileRun("get_x_w()"); - CHECK(result->IsUndefined()); + CHECK(result.IsEmpty()); result = CompileRun("this_x()"); CHECK_EQ(v8_str("env2_x"), result); @@ -9018,19 +9128,13 @@ static bool IndexedAccessBlocker(Local<v8::Object> global, } -static int g_echo_value_1 = -1; -static int g_echo_value_2 = -1; +static int g_echo_value = -1; static void EchoGetter( Local<String> name, const v8::PropertyCallbackInfo<v8::Value>& info) { - info.GetReturnValue().Set(v8_num(g_echo_value_1)); -} - - -static void EchoGetter(const v8::FunctionCallbackInfo<v8::Value>& info) { - info.GetReturnValue().Set(v8_num(g_echo_value_2)); + info.GetReturnValue().Set(v8_num(g_echo_value)); } @@ -9038,14 +9142,7 @@ static void EchoSetter(Local<String> name, Local<Value> value, const v8::PropertyCallbackInfo<void>&) { if (value->IsNumber()) - g_echo_value_1 = value->Int32Value(); -} - - -static void EchoSetter(const v8::FunctionCallbackInfo<v8::Value>& info) { - v8::Handle<v8::Value> value = info[0]; - if (value->IsNumber()) - g_echo_value_2 = value->Int32Value(); + g_echo_value = value->Int32Value(); } @@ -9086,13 +9183,6 @@ TEST(AccessControl) { v8::AccessControl(v8::ALL_CAN_READ | v8::ALL_CAN_WRITE)); - global_template->SetAccessorProperty( - v8_str("accessible_js_prop"), - v8::FunctionTemplate::New(isolate, EchoGetter), - v8::FunctionTemplate::New(isolate, EchoSetter), - v8::None, - v8::AccessControl(v8::ALL_CAN_READ | v8::ALL_CAN_WRITE)); - // Add an accessor that is not accessible by cross-domain JS code. global_template->SetAccessor(v8_str("blocked_prop"), UnreachableGetter, UnreachableSetter, @@ -9144,54 +9234,35 @@ TEST(AccessControl) { // Access blocked property. 
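// (Every blocked probe below now yields an empty handle rather than a
// value, which is why the old ExpectUndefined/ExpectFalse assertions
// become CompileRun(...).IsEmpty() checks.)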
CompileRun("other.blocked_prop = 1"); - ExpectUndefined("other.blocked_prop"); - ExpectUndefined( - "Object.getOwnPropertyDescriptor(other, 'blocked_prop')"); - ExpectFalse("propertyIsEnumerable.call(other, 'blocked_prop')"); - - // Enable ACCESS_HAS - allowed_access_type[v8::ACCESS_HAS] = true; - ExpectUndefined("other.blocked_prop"); - // ... and now we can get the descriptor... - ExpectUndefined( - "Object.getOwnPropertyDescriptor(other, 'blocked_prop').value"); - // ... and enumerate the property. - ExpectTrue("propertyIsEnumerable.call(other, 'blocked_prop')"); - allowed_access_type[v8::ACCESS_HAS] = false; + CHECK(CompileRun("other.blocked_prop").IsEmpty()); + CHECK(CompileRun("Object.getOwnPropertyDescriptor(other, 'blocked_prop')") + .IsEmpty()); + CHECK( + CompileRun("propertyIsEnumerable.call(other, 'blocked_prop')").IsEmpty()); // Access blocked element. - CompileRun("other[239] = 1"); + CHECK(CompileRun("other[239] = 1").IsEmpty()); - ExpectUndefined("other[239]"); - ExpectUndefined("Object.getOwnPropertyDescriptor(other, '239')"); - ExpectFalse("propertyIsEnumerable.call(other, '239')"); + CHECK(CompileRun("other[239]").IsEmpty()); + CHECK(CompileRun("Object.getOwnPropertyDescriptor(other, '239')").IsEmpty()); + CHECK(CompileRun("propertyIsEnumerable.call(other, '239')").IsEmpty()); // Enable ACCESS_HAS allowed_access_type[v8::ACCESS_HAS] = true; - ExpectUndefined("other[239]"); + CHECK(CompileRun("other[239]").IsEmpty()); // ... and now we can get the descriptor... - ExpectUndefined("Object.getOwnPropertyDescriptor(other, '239').value"); + CHECK(CompileRun("Object.getOwnPropertyDescriptor(other, '239').value") + .IsEmpty()); // ... and enumerate the property. ExpectTrue("propertyIsEnumerable.call(other, '239')"); allowed_access_type[v8::ACCESS_HAS] = false; // Access a property with JS accessor. - CompileRun("other.js_accessor_p = 2"); - - ExpectUndefined("other.js_accessor_p"); - ExpectUndefined( - "Object.getOwnPropertyDescriptor(other, 'js_accessor_p')"); + CHECK(CompileRun("other.js_accessor_p = 2").IsEmpty()); - // Enable ACCESS_HAS. - allowed_access_type[v8::ACCESS_HAS] = true; - ExpectUndefined("other.js_accessor_p"); - ExpectUndefined( - "Object.getOwnPropertyDescriptor(other, 'js_accessor_p').get"); - ExpectUndefined( - "Object.getOwnPropertyDescriptor(other, 'js_accessor_p').set"); - ExpectUndefined( - "Object.getOwnPropertyDescriptor(other, 'js_accessor_p').value"); - allowed_access_type[v8::ACCESS_HAS] = false; + CHECK(CompileRun("other.js_accessor_p").IsEmpty()); + CHECK(CompileRun("Object.getOwnPropertyDescriptor(other, 'js_accessor_p')") + .IsEmpty()); // Enable both ACCESS_HAS and ACCESS_GET. allowed_access_type[v8::ACCESS_HAS] = true; @@ -9200,59 +9271,19 @@ TEST(AccessControl) { ExpectString("other.js_accessor_p", "getter"); ExpectObject( "Object.getOwnPropertyDescriptor(other, 'js_accessor_p').get", getter); - ExpectUndefined( - "Object.getOwnPropertyDescriptor(other, 'js_accessor_p').set"); - ExpectUndefined( - "Object.getOwnPropertyDescriptor(other, 'js_accessor_p').value"); - - allowed_access_type[v8::ACCESS_GET] = false; - allowed_access_type[v8::ACCESS_HAS] = false; - - // Enable both ACCESS_HAS and ACCESS_SET. 
- allowed_access_type[v8::ACCESS_HAS] = true; - allowed_access_type[v8::ACCESS_SET] = true; - - ExpectUndefined("other.js_accessor_p"); - ExpectUndefined( - "Object.getOwnPropertyDescriptor(other, 'js_accessor_p').get"); ExpectObject( "Object.getOwnPropertyDescriptor(other, 'js_accessor_p').set", setter); ExpectUndefined( "Object.getOwnPropertyDescriptor(other, 'js_accessor_p').value"); - allowed_access_type[v8::ACCESS_SET] = false; allowed_access_type[v8::ACCESS_HAS] = false; - - // Enable both ACCESS_HAS, ACCESS_GET and ACCESS_SET. - allowed_access_type[v8::ACCESS_HAS] = true; - allowed_access_type[v8::ACCESS_GET] = true; - allowed_access_type[v8::ACCESS_SET] = true; - - ExpectString("other.js_accessor_p", "getter"); - ExpectObject( - "Object.getOwnPropertyDescriptor(other, 'js_accessor_p').get", getter); - ExpectObject( - "Object.getOwnPropertyDescriptor(other, 'js_accessor_p').set", setter); - ExpectUndefined( - "Object.getOwnPropertyDescriptor(other, 'js_accessor_p').value"); - - allowed_access_type[v8::ACCESS_SET] = false; allowed_access_type[v8::ACCESS_GET] = false; - allowed_access_type[v8::ACCESS_HAS] = false; // Access an element with JS accessor. - CompileRun("other[42] = 2"); + CHECK(CompileRun("other[42] = 2").IsEmpty()); - ExpectUndefined("other[42]"); - ExpectUndefined("Object.getOwnPropertyDescriptor(other, '42')"); - - // Enable ACCESS_HAS. - allowed_access_type[v8::ACCESS_HAS] = true; - ExpectUndefined("other[42]"); - ExpectUndefined("Object.getOwnPropertyDescriptor(other, '42').get"); - ExpectUndefined("Object.getOwnPropertyDescriptor(other, '42').set"); - ExpectUndefined("Object.getOwnPropertyDescriptor(other, '42').value"); - allowed_access_type[v8::ACCESS_HAS] = false; + CHECK(CompileRun("other[42]").IsEmpty()); + CHECK(CompileRun("Object.getOwnPropertyDescriptor(other, '42')").IsEmpty()); // Enable both ACCESS_HAS and ACCESS_GET. allowed_access_type[v8::ACCESS_HAS] = true; @@ -9260,37 +9291,11 @@ TEST(AccessControl) { ExpectString("other[42]", "el_getter"); ExpectObject("Object.getOwnPropertyDescriptor(other, '42').get", el_getter); - ExpectUndefined("Object.getOwnPropertyDescriptor(other, '42').set"); - ExpectUndefined("Object.getOwnPropertyDescriptor(other, '42').value"); - - allowed_access_type[v8::ACCESS_GET] = false; - allowed_access_type[v8::ACCESS_HAS] = false; - - // Enable both ACCESS_HAS and ACCESS_SET. - allowed_access_type[v8::ACCESS_HAS] = true; - allowed_access_type[v8::ACCESS_SET] = true; - - ExpectUndefined("other[42]"); - ExpectUndefined("Object.getOwnPropertyDescriptor(other, '42').get"); ExpectObject("Object.getOwnPropertyDescriptor(other, '42').set", el_setter); ExpectUndefined("Object.getOwnPropertyDescriptor(other, '42').value"); - allowed_access_type[v8::ACCESS_SET] = false; allowed_access_type[v8::ACCESS_HAS] = false; - - // Enable both ACCESS_HAS, ACCESS_GET and ACCESS_SET. 
- allowed_access_type[v8::ACCESS_HAS] = true; - allowed_access_type[v8::ACCESS_GET] = true; - allowed_access_type[v8::ACCESS_SET] = true; - - ExpectString("other[42]", "el_getter"); - ExpectObject("Object.getOwnPropertyDescriptor(other, '42').get", el_getter); - ExpectObject("Object.getOwnPropertyDescriptor(other, '42').set", el_setter); - ExpectUndefined("Object.getOwnPropertyDescriptor(other, '42').value"); - - allowed_access_type[v8::ACCESS_SET] = false; allowed_access_type[v8::ACCESS_GET] = false; - allowed_access_type[v8::ACCESS_HAS] = false; v8::Handle<Value> value; @@ -9298,50 +9303,38 @@ TEST(AccessControl) { value = CompileRun("other.accessible_prop = 3"); CHECK(value->IsNumber()); CHECK_EQ(3, value->Int32Value()); - CHECK_EQ(3, g_echo_value_1); - - // Access accessible js property - value = CompileRun("other.accessible_js_prop = 3"); - CHECK(value->IsNumber()); - CHECK_EQ(3, value->Int32Value()); - CHECK_EQ(3, g_echo_value_2); + CHECK_EQ(3, g_echo_value); value = CompileRun("other.accessible_prop"); CHECK(value->IsNumber()); CHECK_EQ(3, value->Int32Value()); - value = CompileRun("other.accessible_js_prop"); - CHECK(value->IsNumber()); - CHECK_EQ(3, value->Int32Value()); - value = CompileRun( "Object.getOwnPropertyDescriptor(other, 'accessible_prop').value"); CHECK(value->IsNumber()); CHECK_EQ(3, value->Int32Value()); - value = CompileRun( - "Object.getOwnPropertyDescriptor(other, 'accessible_js_prop').get()"); - CHECK(value->IsNumber()); - CHECK_EQ(3, value->Int32Value()); - value = CompileRun("propertyIsEnumerable.call(other, 'accessible_prop')"); CHECK(value->IsTrue()); - value = CompileRun("propertyIsEnumerable.call(other, 'accessible_js_prop')"); - CHECK(value->IsTrue()); - // Enumeration doesn't enumerate accessors from inaccessible objects in // the prototype chain even if the accessors are in themselves accessible. - value = - CompileRun("(function(){var obj = {'__proto__':other};" - "for (var p in obj)" - " if (p == 'accessible_prop' ||" - " p == 'accessible_js_prop' ||" - " p == 'blocked_js_prop' ||" - " p == 'blocked_js_prop') {" - " return false;" - " }" - "return true;})()"); + value = CompileRun( + "(function() {" + " var obj = { '__proto__': other };" + " try {" + " for (var p in obj) {" + " if (p == 'accessible_prop' ||" + " p == 'blocked_js_prop' ||" + " p == 'blocked_js_prop') {" + " return false;" + " }" + " }" + " return false;" + " } catch (e) {" + " return true;" + " }" + "})()"); CHECK(value->IsTrue()); context1->Exit(); @@ -9384,16 +9377,15 @@ TEST(AccessControlES5) { global1->Set(v8_str("other"), global0); // Regression test for issue 1154. - ExpectTrue("Object.keys(other).indexOf('blocked_prop') == -1"); - - ExpectUndefined("other.blocked_prop"); + CHECK(CompileRun("Object.keys(other)").IsEmpty()); + CHECK(CompileRun("other.blocked_prop").IsEmpty()); // Regression test for issue 1027. CompileRun("Object.defineProperty(\n" " other, 'blocked_prop', {configurable: false})"); - ExpectUndefined("other.blocked_prop"); - ExpectUndefined( - "Object.getOwnPropertyDescriptor(other, 'blocked_prop')"); + CHECK(CompileRun("other.blocked_prop").IsEmpty()); + CHECK(CompileRun("Object.getOwnPropertyDescriptor(other, 'blocked_prop')") + .IsEmpty()); // Regression test for issue 1171. ExpectTrue("Object.isExtensible(other)"); @@ -9411,7 +9403,7 @@ TEST(AccessControlES5) { // Make sure that we can set the accessible accessors value using normal // assignment. 
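// (The duplicate FunctionTemplate-based accessors and accessible_js_prop
// were dropped earlier in this test, so a single g_echo_value now records
// the value EchoSetter last saw.)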
CompileRun("other.accessible_prop = 42"); - CHECK_EQ(42, g_echo_value_1); + CHECK_EQ(42, g_echo_value); v8::Handle<Value> value; CompileRun("Object.defineProperty(other, 'accessible_prop', {value: -1})"); @@ -9469,10 +9461,10 @@ THREADED_TEST(AccessControlGetOwnPropertyNames) { // proxy object. Accessing the object that requires access checks // is blocked by the access checks on the object itself. value = CompileRun("Object.getOwnPropertyNames(other).length == 0"); - CHECK(value->IsTrue()); + CHECK(value.IsEmpty()); value = CompileRun("Object.getOwnPropertyNames(object).length == 0"); - CHECK(value->IsTrue()); + CHECK(value.IsEmpty()); context1->Exit(); context0->Exit(); @@ -9580,7 +9572,7 @@ THREADED_TEST(CrossDomainAccessors) { CHECK_EQ(10, value->Int32Value()); value = v8_compile("other.unreachable")->Run(); - CHECK(value->IsUndefined()); + CHECK(value.IsEmpty()); context1->Exit(); context0->Exit(); @@ -10219,7 +10211,7 @@ THREADED_TEST(HiddenPrototypeIdentityHash) { int hash = o->GetIdentityHash(); USE(hash); o->Set(v8_str("foo"), v8_num(42)); - ASSERT_EQ(hash, o->GetIdentityHash()); + DCHECK_EQ(hash, o->GetIdentityHash()); } @@ -10282,7 +10274,7 @@ THREADED_TEST(SetPrototype) { // Getting property names of an object with a prototype chain that -// triggers dictionary elements in GetLocalPropertyNames() shouldn't +// triggers dictionary elements in GetOwnPropertyNames() shouldn't // crash the runtime. THREADED_TEST(Regress91517) { i::FLAG_allow_natives_syntax = true; @@ -10307,7 +10299,7 @@ THREADED_TEST(Regress91517) { // Force dictionary-based properties. i::ScopedVector<char> name_buf(1024); for (int i = 1; i <= 1000; i++) { - i::OS::SNPrintF(name_buf, "sdf%d", i); + i::SNPrintF(name_buf, "sdf%d", i); t2->InstanceTemplate()->Set(v8_str(name_buf.start()), v8_num(2)); } @@ -10321,11 +10313,11 @@ THREADED_TEST(Regress91517) { CHECK(o3->SetPrototype(o2)); CHECK(o2->SetPrototype(o1)); - // Call the runtime version of GetLocalPropertyNames() on the natively + // Call the runtime version of GetOwnPropertyNames() on the natively // created object through JavaScript. context->Global()->Set(v8_str("obj"), o4); // PROPERTY_ATTRIBUTES_NONE = 0 - CompileRun("var names = %GetLocalPropertyNames(obj, 0);"); + CompileRun("var names = %GetOwnPropertyNames(obj, 0);"); ExpectInt32("names.length", 1006); ExpectTrue("names.indexOf(\"baz\") >= 0"); @@ -10376,12 +10368,12 @@ THREADED_TEST(Regress269562) { o1->SetHiddenValue( v8_str("h1"), v8::Integer::New(context->GetIsolate(), 2013)); - // Call the runtime version of GetLocalPropertyNames() on + // Call the runtime version of GetOwnPropertyNames() on // the natively created object through JavaScript. 
context->Global()->Set(v8_str("obj"), o2); context->Global()->Set(v8_str("sym"), sym); // PROPERTY_ATTRIBUTES_NONE = 0 - CompileRun("var names = %GetLocalPropertyNames(obj, 0);"); + CompileRun("var names = %GetOwnPropertyNames(obj, 0);"); ExpectInt32("names.length", 7); ExpectTrue("names.indexOf(\"foo\") >= 0"); @@ -10442,7 +10434,7 @@ THREADED_TEST(SetPrototypeThrows) { v8::TryCatch try_catch; CHECK(!o1->SetPrototype(o0)); CHECK(!try_catch.HasCaught()); - ASSERT(!CcTest::i_isolate()->has_pending_exception()); + DCHECK(!CcTest::i_isolate()->has_pending_exception()); CHECK_EQ(42, CompileRun("function f() { return 42; }; f()")->Int32Value()); } @@ -13027,7 +13019,7 @@ static void WebKitLike(Handle<Message> message, Handle<Value> data) { Handle<String> errorMessageString = message->Get(); CHECK(!errorMessageString.IsEmpty()); message->GetStackTrace(); - message->GetScriptResourceName(); + message->GetScriptOrigin().ResourceName(); } @@ -13220,7 +13212,7 @@ THREADED_TEST(ObjectGetConstructorName) { bool ApiTestFuzzer::fuzzing_ = false; -i::Semaphore ApiTestFuzzer::all_tests_done_(0); +v8::base::Semaphore ApiTestFuzzer::all_tests_done_(0); int ApiTestFuzzer::active_tests_; int ApiTestFuzzer::tests_being_run_; int ApiTestFuzzer::current_; @@ -13514,7 +13506,6 @@ THREADED_TEST(LockUnlockLock) { static int GetGlobalObjectsCount() { - CcTest::heap()->EnsureHeapIsIterable(); int count = 0; i::HeapIterator it(CcTest::heap()); for (i::HeapObject* object = it.next(); object != NULL; object = it.next()) @@ -13549,21 +13540,21 @@ TEST(DontLeakGlobalObjects) { { v8::HandleScope scope(CcTest::isolate()); LocalContext context; } - v8::V8::ContextDisposedNotification(); + CcTest::isolate()->ContextDisposedNotification(); CheckSurvivingGlobalObjectsCount(0); { v8::HandleScope scope(CcTest::isolate()); LocalContext context; v8_compile("Date")->Run(); } - v8::V8::ContextDisposedNotification(); + CcTest::isolate()->ContextDisposedNotification(); CheckSurvivingGlobalObjectsCount(0); { v8::HandleScope scope(CcTest::isolate()); LocalContext context; v8_compile("/aaa/")->Run(); } - v8::V8::ContextDisposedNotification(); + CcTest::isolate()->ContextDisposedNotification(); CheckSurvivingGlobalObjectsCount(0); { v8::HandleScope scope(CcTest::isolate()); @@ -13572,7 +13563,7 @@ TEST(DontLeakGlobalObjects) { LocalContext context(&extensions); v8_compile("gc();")->Run(); } - v8::V8::ContextDisposedNotification(); + CcTest::isolate()->ContextDisposedNotification(); CheckSurvivingGlobalObjectsCount(0); } } @@ -14026,12 +14017,12 @@ void SetFunctionEntryHookTest::RunLoopInNewEnv(v8::Isolate* isolate) { CompileRun(script); bar_func_ = i::Handle<i::JSFunction>::cast( v8::Utils::OpenHandle(*env->Global()->Get(v8_str("bar")))); - ASSERT(!bar_func_.is_null()); + DCHECK(!bar_func_.is_null()); foo_func_ = i::Handle<i::JSFunction>::cast( v8::Utils::OpenHandle(*env->Global()->Get(v8_str("foo")))); - ASSERT(!foo_func_.is_null()); + DCHECK(!foo_func_.is_null()); v8::Handle<v8::Value> value = CompileRun("bar();"); CHECK(value->IsNumber()); @@ -14424,7 +14415,7 @@ static void CheckTryCatchSourceInfo(v8::Handle<v8::Script> script, CHECK_EQ(3, message->GetEndColumn()); v8::String::Utf8Value line(message->GetSourceLine()); CHECK_EQ(" throw 'nirk';", *line); - v8::String::Utf8Value name(message->GetScriptResourceName()); + v8::String::Utf8Value name(message->GetScriptOrigin().ResourceName()); CHECK_EQ(resource_name, *name); } @@ -14693,13 +14684,13 @@ THREADED_TEST(AccessChecksReenabledCorrectly) { context->Global()->Set(v8_str("obj_1"), 
instance_1); Local<Value> value_1 = CompileRun("obj_1.a"); - CHECK(value_1->IsUndefined()); + CHECK(value_1.IsEmpty()); Local<v8::Object> instance_2 = templ->NewInstance(); context->Global()->Set(v8_str("obj_2"), instance_2); Local<Value> value_2 = CompileRun("obj_2.a"); - CHECK(value_2->IsUndefined()); + CHECK(value_2.IsEmpty()); } @@ -14780,11 +14771,9 @@ THREADED_TEST(TurnOnAccessCheck) { context->DetachGlobal(); hidden_global->TurnOnAccessCheck(); - // Failing access check to property get results in undefined. - CHECK(f1->Call(global, 0, NULL)->IsUndefined()); - CHECK(f2->Call(global, 0, NULL)->IsUndefined()); - - // Failing access check to function call results in exception. + // Failing access check results in exception. + CHECK(f1->Call(global, 0, NULL).IsEmpty()); + CHECK(f2->Call(global, 0, NULL).IsEmpty()); CHECK(g1->Call(global, 0, NULL).IsEmpty()); CHECK(g2->Call(global, 0, NULL).IsEmpty()); @@ -14868,11 +14857,9 @@ THREADED_TEST(TurnOnAccessCheckAndRecompile) { context->DetachGlobal(); hidden_global->TurnOnAccessCheck(); - // Failing access check to property get results in undefined. - CHECK(f1->Call(global, 0, NULL)->IsUndefined()); - CHECK(f2->Call(global, 0, NULL)->IsUndefined()); - - // Failing access check to function call results in exception. + // Failing access check results in exception. + CHECK(f1->Call(global, 0, NULL).IsEmpty()); + CHECK(f2->Call(global, 0, NULL).IsEmpty()); CHECK(g1->Call(global, 0, NULL).IsEmpty()); CHECK(g2->Call(global, 0, NULL).IsEmpty()); @@ -14886,13 +14873,13 @@ THREADED_TEST(TurnOnAccessCheckAndRecompile) { f2 = Local<Function>::Cast(hidden_global->Get(v8_str("f2"))); g1 = Local<Function>::Cast(hidden_global->Get(v8_str("g1"))); g2 = Local<Function>::Cast(hidden_global->Get(v8_str("g2"))); - CHECK(hidden_global->Get(v8_str("h"))->IsUndefined()); - - // Failing access check to property get results in undefined. - CHECK(f1->Call(global, 0, NULL)->IsUndefined()); - CHECK(f2->Call(global, 0, NULL)->IsUndefined()); + CHECK(hidden_global->Get(v8_str("h")).IsEmpty()); - // Failing access check to function call results in exception. + // Failing access check results in exception. + v8::Local<v8::Value> result = f1->Call(global, 0, NULL); + CHECK(result.IsEmpty()); + CHECK(f1->Call(global, 0, NULL).IsEmpty()); + CHECK(f2->Call(global, 0, NULL).IsEmpty()); CHECK(g1->Call(global, 0, NULL).IsEmpty()); CHECK(g2->Call(global, 0, NULL).IsEmpty()); } @@ -14909,137 +14896,24 @@ TEST(PreCompileSerialization) { const char* script = "function foo(a) { return a+1; }"; v8::ScriptCompiler::Source source(v8_str(script)); v8::ScriptCompiler::Compile(isolate, &source, - v8::ScriptCompiler::kProduceDataToCache); + v8::ScriptCompiler::kProduceParserCache); // Serialize. const v8::ScriptCompiler::CachedData* cd = source.GetCachedData(); - char* serialized_data = i::NewArray<char>(cd->length); - i::OS::MemCopy(serialized_data, cd->data, cd->length); + i::byte* serialized_data = i::NewArray<i::byte>(cd->length); + i::MemCopy(serialized_data, cd->data, cd->length); // Deserialize. - i::ScriptData* deserialized = i::ScriptData::New(serialized_data, cd->length); + i::ScriptData* deserialized = new i::ScriptData(serialized_data, cd->length); // Verify that the original is the same as the deserialized. 
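// (i::ScriptData is now constructed directly and exposes lower-case
// length()/data() accessors; the ScriptData::New factory and the
// Length()/Data() spellings are gone. The invalid-cached-data tests that
// depended on the old interface are removed below.)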
- CHECK_EQ(cd->length, deserialized->Length()); - CHECK_EQ(0, memcmp(cd->data, deserialized->Data(), cd->length)); + CHECK_EQ(cd->length, deserialized->length()); + CHECK_EQ(0, memcmp(cd->data, deserialized->data(), cd->length)); delete deserialized; i::DeleteArray(serialized_data); } -// Attempts to deserialize bad data. -TEST(PreCompileDeserializationError) { - v8::V8::Initialize(); - const char* data = "DONT CARE"; - int invalid_size = 3; - i::ScriptData* sd = i::ScriptData::New(data, invalid_size); - CHECK_EQ(NULL, sd); -} - - -TEST(CompileWithInvalidCachedData) { - v8::V8::Initialize(); - v8::Isolate* isolate = CcTest::isolate(); - LocalContext context; - v8::HandleScope scope(context->GetIsolate()); - i::FLAG_min_preparse_length = 0; - - const char* script = "function foo(){ return 5;}\n" - "function bar(){ return 6 + 7;} foo();"; - v8::ScriptCompiler::Source source(v8_str(script)); - v8::ScriptCompiler::Compile(isolate, &source, - v8::ScriptCompiler::kProduceDataToCache); - // source owns its cached data. Create a ScriptData based on it. The user - // never needs to create ScriptDatas any more; we only need it here because we - // want to modify the data before passing it back. - const v8::ScriptCompiler::CachedData* cd = source.GetCachedData(); - // ScriptData does not take ownership of the buffers passed to it. - i::ScriptData* sd = - i::ScriptData::New(reinterpret_cast<const char*>(cd->data), cd->length); - CHECK(!sd->HasError()); - // ScriptData private implementation details - const int kHeaderSize = i::PreparseDataConstants::kHeaderSize; - const int kFunctionEntrySize = i::FunctionEntry::kSize; - const int kFunctionEntryStartOffset = 0; - const int kFunctionEntryEndOffset = 1; - unsigned* sd_data = - reinterpret_cast<unsigned*>(const_cast<char*>(sd->Data())); - - // Overwrite function bar's end position with 0. - sd_data[kHeaderSize + 1 * kFunctionEntrySize + kFunctionEntryEndOffset] = 0; - v8::TryCatch try_catch; - - // Make the script slightly different so that we don't hit the compilation - // cache. Don't change the lenghts of tokens. - const char* script2 = "function foo(){ return 6;}\n" - "function bar(){ return 6 + 7;} foo();"; - v8::ScriptCompiler::Source source2( - v8_str(script2), - // CachedData doesn't take ownership of the buffers, Source takes - // ownership of CachedData. - new v8::ScriptCompiler::CachedData( - reinterpret_cast<const uint8_t*>(sd->Data()), sd->Length())); - Local<v8::UnboundScript> compiled_script = - v8::ScriptCompiler::CompileUnbound(isolate, &source2); - - CHECK(try_catch.HasCaught()); - { - String::Utf8Value exception_value(try_catch.Message()->Get()); - CHECK_EQ("Uncaught SyntaxError: Invalid cached data for function bar", - *exception_value); - } - - try_catch.Reset(); - delete sd; - - // Overwrite function bar's start position with 200. The function entry will - // not be found when searching for it by position, and the compilation fails. - - // ScriptData does not take ownership of the buffers passed to it. 
- sd = i::ScriptData::New(reinterpret_cast<const char*>(cd->data), cd->length); - sd_data = reinterpret_cast<unsigned*>(const_cast<char*>(sd->Data())); - sd_data[kHeaderSize + 1 * kFunctionEntrySize + kFunctionEntryStartOffset] = - 200; - const char* script3 = "function foo(){ return 7;}\n" - "function bar(){ return 6 + 7;} foo();"; - v8::ScriptCompiler::Source source3( - v8_str(script3), - new v8::ScriptCompiler::CachedData( - reinterpret_cast<const uint8_t*>(sd->Data()), sd->Length())); - compiled_script = - v8::ScriptCompiler::CompileUnbound(isolate, &source3); - CHECK(try_catch.HasCaught()); - { - String::Utf8Value exception_value(try_catch.Message()->Get()); - CHECK_EQ("Uncaught SyntaxError: Invalid cached data for function bar", - *exception_value); - } - CHECK(compiled_script.IsEmpty()); - try_catch.Reset(); - delete sd; - - // Try passing in cached data which is obviously invalid (wrong length). - sd = i::ScriptData::New(reinterpret_cast<const char*>(cd->data), cd->length); - const char* script4 = - "function foo(){ return 8;}\n" - "function bar(){ return 6 + 7;} foo();"; - v8::ScriptCompiler::Source source4( - v8_str(script4), - new v8::ScriptCompiler::CachedData( - reinterpret_cast<const uint8_t*>(sd->Data()), sd->Length() - 1)); - compiled_script = - v8::ScriptCompiler::CompileUnbound(isolate, &source4); - CHECK(try_catch.HasCaught()); - { - String::Utf8Value exception_value(try_catch.Message()->Get()); - CHECK_EQ("Uncaught SyntaxError: Invalid cached data", - *exception_value); - } - CHECK(compiled_script.IsEmpty()); - delete sd; -} - - // This tests that we do not allow dictionary load/call inline caches // to use functions that have not yet been compiled. The potential // problem of loading a function that has not yet been compiled can @@ -15283,19 +15157,19 @@ struct RegExpInterruptionData { } regexp_interruption_data; -class RegExpInterruptionThread : public i::Thread { +class RegExpInterruptionThread : public v8::base::Thread { public: explicit RegExpInterruptionThread(v8::Isolate* isolate) - : Thread("TimeoutThread"), isolate_(isolate) {} + : Thread(Options("TimeoutThread")), isolate_(isolate) {} virtual void Run() { for (regexp_interruption_data.loop_count = 0; regexp_interruption_data.loop_count < 7; regexp_interruption_data.loop_count++) { - i::OS::Sleep(50); // Wait a bit before requesting GC. + v8::base::OS::Sleep(50); // Wait a bit before requesting GC. reinterpret_cast<i::Isolate*>(isolate_)->stack_guard()->RequestGC(); } - i::OS::Sleep(50); // Wait a bit before terminating. + v8::base::OS::Sleep(50); // Wait a bit before terminating. v8::V8::TerminateExecution(isolate_); } @@ -15358,8 +15232,10 @@ TEST(ReadOnlyPropertyInGlobalProto) { v8::Handle<v8::Object> global = context->Global(); v8::Handle<v8::Object> global_proto = v8::Handle<v8::Object>::Cast(global->Get(v8_str("__proto__"))); - global_proto->Set(v8_str("x"), v8::Integer::New(isolate, 0), v8::ReadOnly); - global_proto->Set(v8_str("y"), v8::Integer::New(isolate, 0), v8::ReadOnly); + global_proto->ForceSet(v8_str("x"), v8::Integer::New(isolate, 0), + v8::ReadOnly); + global_proto->ForceSet(v8_str("y"), v8::Integer::New(isolate, 0), + v8::ReadOnly); // Check without 'eval' or 'with'. 
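// (x and y are now pinned through ForceSet(), which applies the ReadOnly
// attribute directly; the sloppy-mode store in f() below should therefore
// be ignored silently rather than throw.)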
v8::Handle<v8::Value> res = CompileRun("function f() { x = 42; return x; }; f()"); @@ -15417,7 +15293,7 @@ TEST(ForceSet) { // Ordinary properties v8::Handle<v8::String> simple_property = v8::String::NewFromUtf8(isolate, "p"); - global->Set(simple_property, v8::Int32::New(isolate, 4), v8::ReadOnly); + global->ForceSet(simple_property, v8::Int32::New(isolate, 4), v8::ReadOnly); CHECK_EQ(4, global->Get(simple_property)->Int32Value()); // This should fail because the property is read-only global->Set(simple_property, v8::Int32::New(isolate, 5)); @@ -15505,7 +15381,7 @@ THREADED_TEST(ForceDelete) { // Ordinary properties v8::Handle<v8::String> simple_property = v8::String::NewFromUtf8(isolate, "p"); - global->Set(simple_property, v8::Int32::New(isolate, 4), v8::DontDelete); + global->ForceSet(simple_property, v8::Int32::New(isolate, 4), v8::DontDelete); CHECK_EQ(4, global->Get(simple_property)->Int32Value()); // This should fail because the property is dont-delete. CHECK(!global->Delete(simple_property)); @@ -15542,7 +15418,8 @@ THREADED_TEST(ForceDeleteWithInterceptor) { v8::Handle<v8::String> some_property = v8::String::NewFromUtf8(isolate, "a"); - global->Set(some_property, v8::Integer::New(isolate, 42), v8::DontDelete); + global->ForceSet(some_property, v8::Integer::New(isolate, 42), + v8::DontDelete); // Deleting a property should get intercepted and nothing should // happen. @@ -15831,20 +15708,20 @@ THREADED_TEST(PixelArray) { i::Handle<i::Object> no_failure; no_failure = i::JSObject::SetElement( jsobj, 1, value, NONE, i::SLOPPY).ToHandleChecked(); - ASSERT(!no_failure.is_null()); - i::USE(no_failure); + DCHECK(!no_failure.is_null()); + USE(no_failure); CheckElementValue(isolate, 2, jsobj, 1); *value.location() = i::Smi::FromInt(256); no_failure = i::JSObject::SetElement( jsobj, 1, value, NONE, i::SLOPPY).ToHandleChecked(); - ASSERT(!no_failure.is_null()); - i::USE(no_failure); + DCHECK(!no_failure.is_null()); + USE(no_failure); CheckElementValue(isolate, 255, jsobj, 1); *value.location() = i::Smi::FromInt(-1); no_failure = i::JSObject::SetElement( jsobj, 1, value, NONE, i::SLOPPY).ToHandleChecked(); - ASSERT(!no_failure.is_null()); - i::USE(no_failure); + DCHECK(!no_failure.is_null()); + USE(no_failure); CheckElementValue(isolate, 0, jsobj, 1); result = CompileRun("for (var i = 0; i < 8; i++) {" @@ -16316,15 +16193,15 @@ static void ObjectWithExternalArrayTestHelper( " }" "}" "res;"; - i::OS::SNPrintF(test_buf, - boundary_program, - low); + i::SNPrintF(test_buf, + boundary_program, + low); result = CompileRun(test_buf.start()); CHECK_EQ(low, result->IntegerValue()); - i::OS::SNPrintF(test_buf, - boundary_program, - high); + i::SNPrintF(test_buf, + boundary_program, + high); result = CompileRun(test_buf.start()); CHECK_EQ(high, result->IntegerValue()); @@ -16344,28 +16221,28 @@ static void ObjectWithExternalArrayTestHelper( CHECK_EQ(28, result->Int32Value()); // Make sure out-of-range loads do not throw. - i::OS::SNPrintF(test_buf, - "var caught_exception = false;" - "try {" - " ext_array[%d];" - "} catch (e) {" - " caught_exception = true;" - "}" - "caught_exception;", - element_count); + i::SNPrintF(test_buf, + "var caught_exception = false;" + "try {" + " ext_array[%d];" + "} catch (e) {" + " caught_exception = true;" + "}" + "caught_exception;", + element_count); result = CompileRun(test_buf.start()); CHECK_EQ(false, result->BooleanValue()); // Make sure out-of-range stores do not throw. 
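// (Utility moves in this upgrade, visible below: i::OS::SNPrintF becomes
// i::SNPrintF, i::OS::MemCopy becomes i::MemCopy, and OS-level helpers
// such as Print, Sleep and nan_value now live under v8::base::OS.)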
- i::OS::SNPrintF(test_buf, - "var caught_exception = false;" - "try {" - " ext_array[%d] = 1;" - "} catch (e) {" - " caught_exception = true;" - "}" - "caught_exception;", - element_count); + i::SNPrintF(test_buf, + "var caught_exception = false;" + "try {" + " ext_array[%d] = 1;" + "} catch (e) {" + " caught_exception = true;" + "}" + "caught_exception;", + element_count); result = CompileRun(test_buf.start()); CHECK_EQ(false, result->BooleanValue()); @@ -16377,7 +16254,7 @@ static void ObjectWithExternalArrayTestHelper( CHECK_EQ(0, result->Int32Value()); if (array_type == v8::kExternalFloat64Array || array_type == v8::kExternalFloat32Array) { - CHECK_EQ(static_cast<int>(i::OS::nan_value()), + CHECK_EQ(static_cast<int>(v8::base::OS::nan_value()), static_cast<int>( i::Object::GetElement( isolate, jsobj, 7).ToHandleChecked()->Number())); @@ -16447,20 +16324,20 @@ static void ObjectWithExternalArrayTestHelper( array_type == v8::kExternalUint32Array); bool is_pixel_data = array_type == v8::kExternalUint8ClampedArray; - i::OS::SNPrintF(test_buf, - "%s" - "var all_passed = true;" - "for (var i = 0; i < source_data.length; i++) {" - " for (var j = 0; j < 8; j++) {" - " ext_array[j] = source_data[i];" - " }" - " all_passed = all_passed &&" - " (ext_array[5] == expected_results[i]);" - "}" - "all_passed;", - (is_unsigned ? - unsigned_data : - (is_pixel_data ? pixel_data : signed_data))); + i::SNPrintF(test_buf, + "%s" + "var all_passed = true;" + "for (var i = 0; i < source_data.length; i++) {" + " for (var j = 0; j < 8; j++) {" + " ext_array[j] = source_data[i];" + " }" + " all_passed = all_passed &&" + " (ext_array[5] == expected_results[i]);" + "}" + "all_passed;", + (is_unsigned ? + unsigned_data : + (is_pixel_data ? pixel_data : signed_data))); result = CompileRun(test_buf.start()); CHECK_EQ(true, result->BooleanValue()); } @@ -17215,7 +17092,7 @@ void AnalyzeStackInNativeCode(const v8::FunctionCallbackInfo<v8::Value>& args) { const int kOverviewTest = 1; const int kDetailedTest = 2; - ASSERT(args.Length() == 1); + DCHECK(args.Length() == 1); int testGroup = args[0]->Int32Value(); if (testGroup == kOverviewTest) { @@ -17541,9 +17418,9 @@ TEST(SourceURLInStackTrace) { "eval('(' + outer +')()%s');"; i::ScopedVector<char> code(1024); - i::OS::SNPrintF(code, source, "//# sourceURL=eval_url"); + i::SNPrintF(code, source, "//# sourceURL=eval_url"); CHECK(CompileRun(code.start())->IsUndefined()); - i::OS::SNPrintF(code, source, "//@ sourceURL=eval_url"); + i::SNPrintF(code, source, "//@ sourceURL=eval_url"); CHECK(CompileRun(code.start())->IsUndefined()); } @@ -17580,7 +17457,7 @@ TEST(ScriptIdInStackTrace) { script->Run(); for (int i = 0; i < 2; i++) { CHECK(scriptIdInStack[i] != v8::Message::kNoScriptIdInfo); - CHECK_EQ(scriptIdInStack[i], script->GetId()); + CHECK_EQ(scriptIdInStack[i], script->GetUnboundScript()->GetId()); } } @@ -17624,9 +17501,9 @@ TEST(InlineScriptWithSourceURLInStackTrace) { "outer()\n%s"; i::ScopedVector<char> code(1024); - i::OS::SNPrintF(code, source, "//# sourceURL=source_url"); + i::SNPrintF(code, source, "//# sourceURL=source_url"); CHECK(CompileRunWithOrigin(code.start(), "url", 0, 1)->IsUndefined()); - i::OS::SNPrintF(code, source, "//@ sourceURL=source_url"); + i::SNPrintF(code, source, "//@ sourceURL=source_url"); CHECK(CompileRunWithOrigin(code.start(), "url", 0, 1)->IsUndefined()); } @@ -17670,9 +17547,9 @@ TEST(DynamicWithSourceURLInStackTrace) { "outer()\n%s"; i::ScopedVector<char> code(1024); - i::OS::SNPrintF(code, source, "//# sourceURL=source_url"); + 
i::SNPrintF(code, source, "//# sourceURL=source_url"); CHECK(CompileRunWithOrigin(code.start(), "url", 0, 0)->IsUndefined()); - i::OS::SNPrintF(code, source, "//@ sourceURL=source_url"); + i::SNPrintF(code, source, "//@ sourceURL=source_url"); CHECK(CompileRunWithOrigin(code.start(), "url", 0, 0)->IsUndefined()); } @@ -17691,7 +17568,7 @@ TEST(DynamicWithSourceURLInStackTraceString) { "outer()\n%s"; i::ScopedVector<char> code(1024); - i::OS::SNPrintF(code, source, "//# sourceURL=source_url"); + i::SNPrintF(code, source, "//# sourceURL=source_url"); v8::TryCatch try_catch; CompileRunWithOrigin(code.start(), "", 0, 0); CHECK(try_catch.HasCaught()); @@ -17700,6 +17577,54 @@ TEST(DynamicWithSourceURLInStackTraceString) { } +TEST(EvalWithSourceURLInMessageScriptResourceNameOrSourceURL) { + LocalContext context; + v8::HandleScope scope(context->GetIsolate()); + + const char *source = + "function outer() {\n" + " var scriptContents = \"function foo() { FAIL.FAIL; }\\\n" + " //# sourceURL=source_url\";\n" + " eval(scriptContents);\n" + " foo(); }\n" + "outer();\n" + "//# sourceURL=outer_url"; + + v8::TryCatch try_catch; + CompileRun(source); + CHECK(try_catch.HasCaught()); + + Local<v8::Message> message = try_catch.Message(); + Handle<Value> sourceURL = + message->GetScriptOrigin().ResourceName(); + CHECK_EQ(*v8::String::Utf8Value(sourceURL), "source_url"); +} + + +TEST(RecursionWithSourceURLInMessageScriptResourceNameOrSourceURL) { + LocalContext context; + v8::HandleScope scope(context->GetIsolate()); + + const char *source = + "function outer() {\n" + " var scriptContents = \"function boo(){ boo(); }\\\n" + " //# sourceURL=source_url\";\n" + " eval(scriptContents);\n" + " boo(); }\n" + "outer();\n" + "//# sourceURL=outer_url"; + + v8::TryCatch try_catch; + CompileRun(source); + CHECK(try_catch.HasCaught()); + + Local<v8::Message> message = try_catch.Message(); + Handle<Value> sourceURL = + message->GetScriptOrigin().ResourceName(); + CHECK_EQ(*v8::String::Utf8Value(sourceURL), "source_url"); +} + + static void CreateGarbageInOldSpace() { i::Factory* factory = CcTest::i_isolate()->factory(); v8::HandleScope scope(CcTest::isolate()); @@ -17713,6 +17638,7 @@ static void CreateGarbageInOldSpace() { // Test that idle notification can be handled and eventually returns true. 
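// (GC hints are per-isolate now: env->GetIsolate()->IdleNotification(ms)
// with an explicit millisecond pause hint and
// isolate->ContextDisposedNotification() replace the static v8::V8 entry
// points used before.)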
TEST(IdleNotification) { const intptr_t MB = 1024 * 1024; + const int IdlePauseInMs = 1000; LocalContext env; v8::HandleScope scope(env->GetIsolate()); intptr_t initial_size = CcTest::heap()->SizeOfObjects(); @@ -17721,7 +17647,7 @@ TEST(IdleNotification) { CHECK_GT(size_with_garbage, initial_size + MB); bool finished = false; for (int i = 0; i < 200 && !finished; i++) { - finished = v8::V8::IdleNotification(); + finished = env->GetIsolate()->IdleNotification(IdlePauseInMs); } intptr_t final_size = CcTest::heap()->SizeOfObjects(); CHECK(finished); @@ -17741,7 +17667,7 @@ TEST(IdleNotificationWithSmallHint) { CHECK_GT(size_with_garbage, initial_size + MB); bool finished = false; for (int i = 0; i < 200 && !finished; i++) { - finished = v8::V8::IdleNotification(IdlePauseInMs); + finished = env->GetIsolate()->IdleNotification(IdlePauseInMs); } intptr_t final_size = CcTest::heap()->SizeOfObjects(); CHECK(finished); @@ -17761,7 +17687,7 @@ TEST(IdleNotificationWithLargeHint) { CHECK_GT(size_with_garbage, initial_size + MB); bool finished = false; for (int i = 0; i < 200 && !finished; i++) { - finished = v8::V8::IdleNotification(IdlePauseInMs); + finished = env->GetIsolate()->IdleNotification(IdlePauseInMs); } intptr_t final_size = CcTest::heap()->SizeOfObjects(); CHECK(finished); @@ -17778,7 +17704,7 @@ TEST(Regress2107) { v8::HandleScope scope(env->GetIsolate()); intptr_t initial_size = CcTest::heap()->SizeOfObjects(); // Send idle notification to start a round of incremental GCs. - v8::V8::IdleNotification(kShortIdlePauseInMs); + env->GetIsolate()->IdleNotification(kShortIdlePauseInMs); // Emulate 7 page reloads. for (int i = 0; i < 7; i++) { { @@ -17788,8 +17714,8 @@ TEST(Regress2107) { CreateGarbageInOldSpace(); ctx->Exit(); } - v8::V8::ContextDisposedNotification(); - v8::V8::IdleNotification(kLongIdlePauseInMs); + env->GetIsolate()->ContextDisposedNotification(); + env->GetIsolate()->IdleNotification(kLongIdlePauseInMs); } // Create garbage and check that idle notification still collects it. CreateGarbageInOldSpace(); @@ -17797,7 +17723,7 @@ TEST(Regress2107) { CHECK_GT(size_with_garbage, initial_size + MB); bool finished = false; for (int i = 0; i < 200 && !finished; i++) { - finished = v8::V8::IdleNotification(kShortIdlePauseInMs); + finished = env->GetIsolate()->IdleNotification(kShortIdlePauseInMs); } intptr_t final_size = CcTest::heap()->SizeOfObjects(); CHECK_LT(final_size, initial_size + 1); @@ -18093,14 +18019,14 @@ TEST(ExternalInternalizedStringCollectedAtGC) { static double DoubleFromBits(uint64_t value) { double target; - i::OS::MemCopy(&target, &value, sizeof(target)); + i::MemCopy(&target, &value, sizeof(target)); return target; } static uint64_t DoubleToBits(double value) { uint64_t target; - i::OS::MemCopy(&target, &value, sizeof(target)); + i::MemCopy(&target, &value, sizeof(target)); return target; } @@ -18108,7 +18034,7 @@ static uint64_t DoubleToBits(double value) { static double DoubleToDateTime(double input) { double date_limit = 864e13; if (std::isnan(input) || input < -date_limit || input > date_limit) { - return i::OS::nan_value(); + return v8::base::OS::nan_value(); } return (input < 0) ? -(std::floor(-input)) : std::floor(input); } @@ -18175,7 +18101,8 @@ THREADED_TEST(QuietSignalingNaNs) { } else { uint64_t stored_bits = DoubleToBits(stored_number); // Check if quiet nan (bits 51..62 all set). 
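// (Layout reminder: bits 62..52 hold the exponent and bit 51 is the
// quiet bit, so (stored_bits >> 51) & 0xfff reads 0xfff for an ordinary
// quiet NaN and 0xffe on classic MIPS, where the quiet bit has the
// opposite sense.)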
-#if defined(V8_TARGET_ARCH_MIPS) && !defined(USE_SIMULATOR) +#if (defined(V8_TARGET_ARCH_MIPS) || defined(V8_TARGET_ARCH_MIPS64)) && \ + !defined(_MIPS_ARCH_MIPS64R6) && !defined(USE_SIMULATOR) // Most significant fraction bit for quiet nan is set to 0 // on MIPS architecture. Allowed by IEEE-754. CHECK_EQ(0xffe, static_cast<int>((stored_bits >> 51) & 0xfff)); @@ -18195,7 +18122,8 @@ THREADED_TEST(QuietSignalingNaNs) { } else { uint64_t stored_bits = DoubleToBits(stored_date); // Check if quiet nan (bits 51..62 all set). -#if defined(V8_TARGET_ARCH_MIPS) && !defined(USE_SIMULATOR) +#if (defined(V8_TARGET_ARCH_MIPS) || defined(V8_TARGET_ARCH_MIPS64)) && \ + !defined(_MIPS_ARCH_MIPS64R6) && !defined(USE_SIMULATOR) // Most significant fraction bit for quiet nan is set to 0 // on MIPS architecture. Allowed by IEEE-754. CHECK_EQ(0xffe, static_cast<int>((stored_bits >> 51) & 0xfff)); @@ -18271,7 +18199,7 @@ TEST(Regress528) { CompileRun(source_simple); context->Exit(); } - v8::V8::ContextDisposedNotification(); + isolate->ContextDisposedNotification(); for (gc_count = 1; gc_count < 10; gc_count++) { other_context->Enter(); CompileRun(source_simple); @@ -18293,7 +18221,7 @@ TEST(Regress528) { CompileRun(source_eval); context->Exit(); } - v8::V8::ContextDisposedNotification(); + isolate->ContextDisposedNotification(); for (gc_count = 1; gc_count < 10; gc_count++) { other_context->Enter(); CompileRun(source_eval); @@ -18320,7 +18248,7 @@ TEST(Regress528) { CHECK_EQ(1, message->GetLineNumber()); context->Exit(); } - v8::V8::ContextDisposedNotification(); + isolate->ContextDisposedNotification(); for (gc_count = 1; gc_count < 10; gc_count++) { other_context->Enter(); CompileRun(source_exception); @@ -18331,7 +18259,7 @@ TEST(Regress528) { CHECK_GE(2, gc_count); CHECK_EQ(1, GetGlobalObjectsCount()); - v8::V8::ContextDisposedNotification(); + isolate->ContextDisposedNotification(); } @@ -18511,8 +18439,8 @@ THREADED_TEST(FunctionGetScriptId) { env->Global()->Get(v8::String::NewFromUtf8(isolate, "foo"))); v8::Local<v8::Function> bar = v8::Local<v8::Function>::Cast( env->Global()->Get(v8::String::NewFromUtf8(isolate, "bar"))); - CHECK_EQ(script->GetId(), foo->ScriptId()); - CHECK_EQ(script->GetId(), bar->ScriptId()); + CHECK_EQ(script->GetUnboundScript()->GetId(), foo->ScriptId()); + CHECK_EQ(script->GetUnboundScript()->GetId(), bar->ScriptId()); } @@ -18586,8 +18514,7 @@ TEST(SetterOnConstructorPrototype) { v8::Isolate* isolate = CcTest::isolate(); v8::HandleScope scope(isolate); Local<ObjectTemplate> templ = ObjectTemplate::New(isolate); - templ->SetAccessor(v8_str("x"), - GetterWhichReturns42, + templ->SetAccessor(v8_str("x"), GetterWhichReturns42, SetterWhichSetsYOnThisTo23); LocalContext context; context->Global()->Set(v8_str("P"), templ->NewInstance()); @@ -18700,8 +18627,7 @@ TEST(Regress618) { // Use an API object with accessors as prototype. Local<ObjectTemplate> templ = ObjectTemplate::New(isolate); - templ->SetAccessor(v8_str("x"), - GetterWhichReturns42, + templ->SetAccessor(v8_str("x"), GetterWhichReturns42, SetterWhichSetsYOnThisTo23); context->Global()->Set(v8_str("P"), templ->NewInstance()); @@ -19252,7 +19178,7 @@ TEST(GCInFailedAccessCheckCallback) { ExpectUndefined("Object.prototype.__lookupGetter__.call(" " other, \'x\')"); - // HasLocalElement. + // HasOwnElement. 
ExpectFalse("Object.prototype.hasOwnProperty.call(other, \'0\')"); CHECK_EQ(false, global0->HasRealIndexedProperty(0)); @@ -19269,7 +19195,6 @@ TEST(IsolateNewDispose) { v8::Isolate* current_isolate = CcTest::isolate(); v8::Isolate* isolate = v8::Isolate::New(); CHECK(isolate != NULL); - CHECK(!reinterpret_cast<i::Isolate*>(isolate)->IsDefaultIsolate()); CHECK(current_isolate != isolate); CHECK(current_isolate == CcTest::isolate()); @@ -19430,23 +19355,23 @@ static int CalcFibonacci(v8::Isolate* isolate, int limit) { v8::HandleScope scope(isolate); LocalContext context(isolate); i::ScopedVector<char> code(1024); - i::OS::SNPrintF(code, "function fib(n) {" - " if (n <= 2) return 1;" - " return fib(n-1) + fib(n-2);" - "}" - "fib(%d)", limit); + i::SNPrintF(code, "function fib(n) {" + " if (n <= 2) return 1;" + " return fib(n-1) + fib(n-2);" + "}" + "fib(%d)", limit); Local<Value> value = CompileRun(code.start()); CHECK(value->IsNumber()); return static_cast<int>(value->NumberValue()); } -class IsolateThread : public v8::internal::Thread { +class IsolateThread : public v8::base::Thread { public: IsolateThread(v8::Isolate* isolate, int fib_limit) - : Thread("IsolateThread"), + : Thread(Options("IsolateThread")), isolate_(isolate), fib_limit_(fib_limit), - result_(0) { } + result_(0) {} void Run() { result_ = CalcFibonacci(isolate_, fib_limit_); @@ -19514,7 +19439,7 @@ TEST(IsolateDifferentContexts) { isolate->Dispose(); } -class InitDefaultIsolateThread : public v8::internal::Thread { +class InitDefaultIsolateThread : public v8::base::Thread { public: enum TestCase { SetResourceConstraints, @@ -19525,19 +19450,18 @@ class InitDefaultIsolateThread : public v8::internal::Thread { }; explicit InitDefaultIsolateThread(TestCase testCase) - : Thread("InitDefaultIsolateThread"), + : Thread(Options("InitDefaultIsolateThread")), testCase_(testCase), - result_(false) { } + result_(false) {} void Run() { v8::Isolate* isolate = v8::Isolate::New(); isolate->Enter(); switch (testCase_) { case SetResourceConstraints: { - static const int K = 1024; v8::ResourceConstraints constraints; - constraints.set_max_new_space_size(2 * K * K); - constraints.set_max_old_space_size(4 * K * K); + constraints.set_max_semi_space_size(1); + constraints.set_max_old_space_size(4); v8::SetResourceConstraints(CcTest::isolate(), &constraints); break; } @@ -19547,15 +19471,15 @@ class InitDefaultIsolateThread : public v8::internal::Thread { break; case SetCounterFunction: - v8::V8::SetCounterFunction(NULL); + CcTest::isolate()->SetCounterFunction(NULL); break; case SetCreateHistogramFunction: - v8::V8::SetCreateHistogramFunction(NULL); + CcTest::isolate()->SetCreateHistogramFunction(NULL); break; case SetAddHistogramSampleFunction: - v8::V8::SetAddHistogramSampleFunction(NULL); + CcTest::isolate()->SetAddHistogramSampleFunction(NULL); break; } isolate->Exit(); @@ -19749,7 +19673,7 @@ TEST(DontDeleteCellLoadICAPI) { // cell created using the API. 
LocalContext context; v8::HandleScope scope(context->GetIsolate()); - context->Global()->Set(v8_str("cell"), v8_str("value"), v8::DontDelete); + context->Global()->ForceSet(v8_str("cell"), v8_str("value"), v8::DontDelete); ExpectBoolean("delete cell", false); CompileRun(function_code); ExpectString("readCell()", "value"); @@ -19831,6 +19755,7 @@ TEST(PersistentHandleInNewSpaceVisitor) { object1.SetWrapperClassId(42); CHECK_EQ(42, object1.WrapperClassId()); + CcTest::heap()->CollectAllGarbage(i::Heap::kNoGCFlags); CcTest::heap()->CollectAllGarbage(i::Heap::kNoGCFlags); v8::Persistent<v8::Object> object2(isolate, v8::Object::New(isolate)); @@ -20345,15 +20270,15 @@ THREADED_TEST(ReadOnlyIndexedProperties) { LocalContext context; Local<v8::Object> obj = templ->NewInstance(); context->Global()->Set(v8_str("obj"), obj); - obj->Set(v8_str("1"), v8_str("DONT_CHANGE"), v8::ReadOnly); + obj->ForceSet(v8_str("1"), v8_str("DONT_CHANGE"), v8::ReadOnly); obj->Set(v8_str("1"), v8_str("foobar")); CHECK_EQ(v8_str("DONT_CHANGE"), obj->Get(v8_str("1"))); - obj->Set(v8_num(2), v8_str("DONT_CHANGE"), v8::ReadOnly); + obj->ForceSet(v8_num(2), v8_str("DONT_CHANGE"), v8::ReadOnly); obj->Set(v8_num(2), v8_str("foobar")); CHECK_EQ(v8_str("DONT_CHANGE"), obj->Get(v8_num(2))); // Test non-smi case. - obj->Set(v8_str("2000000000"), v8_str("DONT_CHANGE"), v8::ReadOnly); + obj->ForceSet(v8_str("2000000000"), v8_str("DONT_CHANGE"), v8::ReadOnly); obj->Set(v8_str("2000000000"), v8_str("foobar")); CHECK_EQ(v8_str("DONT_CHANGE"), obj->Get(v8_str("2000000000"))); } @@ -20477,20 +20402,20 @@ THREADED_TEST(Regress93759) { CHECK(result1->Equals(simple_object->GetPrototype())); Local<Value> result2 = CompileRun("Object.getPrototypeOf(protected)"); - CHECK(result2->Equals(Undefined(isolate))); + CHECK(result2.IsEmpty()); Local<Value> result3 = CompileRun("Object.getPrototypeOf(global)"); CHECK(result3->Equals(global_object->GetPrototype())); Local<Value> result4 = CompileRun("Object.getPrototypeOf(proxy)"); - CHECK(result4->Equals(Undefined(isolate))); + CHECK(result4.IsEmpty()); Local<Value> result5 = CompileRun("Object.getPrototypeOf(hidden)"); CHECK(result5->Equals( object_with_hidden->GetPrototype()->ToObject()->GetPrototype())); Local<Value> result6 = CompileRun("Object.getPrototypeOf(phidden)"); - CHECK(result6->Equals(Undefined(isolate))); + CHECK(result6.IsEmpty()); } @@ -20624,13 +20549,13 @@ uint8_t callback_fired = 0; void CallCompletedCallback1() { - i::OS::Print("Firing callback 1.\n"); + v8::base::OS::Print("Firing callback 1.\n"); callback_fired ^= 1; // Toggle first bit. } void CallCompletedCallback2() { - i::OS::Print("Firing callback 2.\n"); + v8::base::OS::Print("Firing callback 2.\n"); callback_fired ^= 2; // Toggle second bit. 
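// (Each callback toggles its own bit, so the CHECK_EQ(3, callback_fired)
// after the first script run below implies both callbacks fired exactly
// once, even though callback 1 was registered twice.)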
} @@ -20639,15 +20564,15 @@ void RecursiveCall(const v8::FunctionCallbackInfo<v8::Value>& args) { int32_t level = args[0]->Int32Value(); if (level < 3) { level++; - i::OS::Print("Entering recursion level %d.\n", level); + v8::base::OS::Print("Entering recursion level %d.\n", level); char script[64]; i::Vector<char> script_vector(script, sizeof(script)); - i::OS::SNPrintF(script_vector, "recursion(%d)", level); + i::SNPrintF(script_vector, "recursion(%d)", level); CompileRun(script_vector.start()); - i::OS::Print("Leaving recursion level %d.\n", level); + v8::base::OS::Print("Leaving recursion level %d.\n", level); CHECK_EQ(0, callback_fired); } else { - i::OS::Print("Recursion ends.\n"); + v8::base::OS::Print("Recursion ends.\n"); CHECK_EQ(0, callback_fired); } } @@ -20664,19 +20589,19 @@ TEST(CallCompletedCallback) { env->GetIsolate()->AddCallCompletedCallback(CallCompletedCallback1); env->GetIsolate()->AddCallCompletedCallback(CallCompletedCallback1); env->GetIsolate()->AddCallCompletedCallback(CallCompletedCallback2); - i::OS::Print("--- Script (1) ---\n"); + v8::base::OS::Print("--- Script (1) ---\n"); Local<Script> script = v8::Script::Compile( v8::String::NewFromUtf8(env->GetIsolate(), "recursion(0)")); script->Run(); CHECK_EQ(3, callback_fired); - i::OS::Print("\n--- Script (2) ---\n"); + v8::base::OS::Print("\n--- Script (2) ---\n"); callback_fired = 0; env->GetIsolate()->RemoveCallCompletedCallback(CallCompletedCallback1); script->Run(); CHECK_EQ(2, callback_fired); - i::OS::Print("\n--- Function ---\n"); + v8::base::OS::Print("\n--- Function ---\n"); callback_fired = 0; Local<Function> recursive_function = Local<Function>::Cast(env->Global()->Get(v8_str("recursion"))); @@ -20726,6 +20651,14 @@ static void MicrotaskTwo(const v8::FunctionCallbackInfo<Value>& info) { } +void* g_passed_to_three = NULL; + + +static void MicrotaskThree(void* data) { + g_passed_to_three = data; +} + + TEST(EnqueueMicrotask) { LocalContext env; v8::HandleScope scope(env->GetIsolate()); @@ -20759,6 +20692,62 @@ TEST(EnqueueMicrotask) { CompileRun("1+1;"); CHECK_EQ(2, CompileRun("ext1Calls")->Int32Value()); CHECK_EQ(2, CompileRun("ext2Calls")->Int32Value()); + + g_passed_to_three = NULL; + env->GetIsolate()->EnqueueMicrotask(MicrotaskThree); + CompileRun("1+1;"); + CHECK_EQ(NULL, g_passed_to_three); + CHECK_EQ(2, CompileRun("ext1Calls")->Int32Value()); + CHECK_EQ(2, CompileRun("ext2Calls")->Int32Value()); + + int dummy; + env->GetIsolate()->EnqueueMicrotask( + Function::New(env->GetIsolate(), MicrotaskOne)); + env->GetIsolate()->EnqueueMicrotask(MicrotaskThree, &dummy); + env->GetIsolate()->EnqueueMicrotask( + Function::New(env->GetIsolate(), MicrotaskTwo)); + CompileRun("1+1;"); + CHECK_EQ(&dummy, g_passed_to_three); + CHECK_EQ(3, CompileRun("ext1Calls")->Int32Value()); + CHECK_EQ(3, CompileRun("ext2Calls")->Int32Value()); + g_passed_to_three = NULL; +} + + +static void MicrotaskExceptionOne( + const v8::FunctionCallbackInfo<Value>& info) { + v8::HandleScope scope(info.GetIsolate()); + CompileRun("exception1Calls++;"); + info.GetIsolate()->ThrowException( + v8::Exception::Error(v8_str("first"))); +} + + +static void MicrotaskExceptionTwo( + const v8::FunctionCallbackInfo<Value>& info) { + v8::HandleScope scope(info.GetIsolate()); + CompileRun("exception2Calls++;"); + info.GetIsolate()->ThrowException( + v8::Exception::Error(v8_str("second"))); +} + + +TEST(RunMicrotasksIgnoresThrownExceptions) { + LocalContext env; + v8::Isolate* isolate = env->GetIsolate(); + v8::HandleScope scope(isolate); + CompileRun( 
+ "var exception1Calls = 0;" + "var exception2Calls = 0;"); + isolate->EnqueueMicrotask( + Function::New(isolate, MicrotaskExceptionOne)); + isolate->EnqueueMicrotask( + Function::New(isolate, MicrotaskExceptionTwo)); + TryCatch try_catch; + CompileRun("1+1;"); + CHECK(!try_catch.HasCaught()); + CHECK_EQ(1, CompileRun("exception1Calls")->Int32Value()); + CHECK_EQ(1, CompileRun("exception2Calls")->Int32Value()); } @@ -20823,6 +20812,57 @@ TEST(SetAutorunMicrotasks) { } +TEST(RunMicrotasksWithoutEnteringContext) { + v8::Isolate* isolate = CcTest::isolate(); + HandleScope handle_scope(isolate); + isolate->SetAutorunMicrotasks(false); + Handle<Context> context = Context::New(isolate); + { + Context::Scope context_scope(context); + CompileRun("var ext1Calls = 0;"); + isolate->EnqueueMicrotask(Function::New(isolate, MicrotaskOne)); + } + isolate->RunMicrotasks(); + { + Context::Scope context_scope(context); + CHECK_EQ(1, CompileRun("ext1Calls")->Int32Value()); + } + isolate->SetAutorunMicrotasks(true); +} + + +static void DebugEventInObserver(const v8::Debug::EventDetails& event_details) { + v8::DebugEvent event = event_details.GetEvent(); + if (event != v8::Break) return; + Handle<Object> exec_state = event_details.GetExecutionState(); + Handle<Value> break_id = exec_state->Get(v8_str("break_id")); + CompileRun("function f(id) { new FrameDetails(id, 0); }"); + Handle<Function> fun = Handle<Function>::Cast( + CcTest::global()->Get(v8_str("f"))->ToObject()); + fun->Call(CcTest::global(), 1, &break_id); +} + + +TEST(Regress385349) { + i::FLAG_allow_natives_syntax = true; + v8::Isolate* isolate = CcTest::isolate(); + HandleScope handle_scope(isolate); + isolate->SetAutorunMicrotasks(false); + Handle<Context> context = Context::New(isolate); + v8::Debug::SetDebugEventListener(DebugEventInObserver); + { + Context::Scope context_scope(context); + CompileRun("var obj = {};" + "Object.observe(obj, function(changes) { debugger; });" + "obj.a = 0;"); + } + isolate->RunMicrotasks(); + isolate->SetAutorunMicrotasks(true); + v8::Debug::SetDebugEventListener(NULL); +} + + +#ifdef DEBUG static int probes_counter = 0; static int misses_counter = 0; static int updates_counter = 0; @@ -20852,11 +20892,10 @@ static const char* kMegamorphicTestProgram = " fooify(a);" " fooify(b);" "}"; +#endif static void StubCacheHelper(bool primary) { - V8::SetCounterFunction(LookupCounter); - USE(kMegamorphicTestProgram); #ifdef DEBUG i::FLAG_native_code_counters = true; if (primary) { @@ -20866,6 +20905,7 @@ static void StubCacheHelper(bool primary) { } i::FLAG_crankshaft = false; LocalContext env; + env->GetIsolate()->SetCounterFunction(LookupCounter); v8::HandleScope scope(env->GetIsolate()); int initial_probes = probes_counter; int initial_misses = misses_counter; @@ -20895,6 +20935,7 @@ TEST(PrimaryStubCache) { } +#ifdef DEBUG static int cow_arrays_created_runtime = 0; @@ -20904,13 +20945,14 @@ static int* LookupCounterCOWArrays(const char* name) { } return NULL; } +#endif TEST(CheckCOWArraysCreatedRuntimeCounter) { - V8::SetCounterFunction(LookupCounterCOWArrays); #ifdef DEBUG i::FLAG_native_code_counters = true; LocalContext env; + env->GetIsolate()->SetCounterFunction(LookupCounterCOWArrays); v8::HandleScope scope(env->GetIsolate()); int initial_cow_arrays = cow_arrays_created_runtime; CompileRun("var o = [1, 2, 3];"); @@ -21001,8 +21043,7 @@ static void InstanceCheckedSetter(Local<String> name, } -static void CheckInstanceCheckedResult(int getters, - int setters, +static void CheckInstanceCheckedResult(int getters, int 
setters, bool expects_callbacks, TryCatch* try_catch) { if (expects_callbacks) { @@ -21125,10 +21166,8 @@ THREADED_TEST(InstanceCheckOnPrototypeAccessor) { Local<FunctionTemplate> templ = FunctionTemplate::New(context->GetIsolate()); Local<ObjectTemplate> proto = templ->PrototypeTemplate(); - proto->SetAccessor(v8_str("foo"), - InstanceCheckedGetter, InstanceCheckedSetter, - Handle<Value>(), - v8::DEFAULT, + proto->SetAccessor(v8_str("foo"), InstanceCheckedGetter, + InstanceCheckedSetter, Handle<Value>(), v8::DEFAULT, v8::None, v8::AccessorSignature::New(context->GetIsolate(), templ)); context->Global()->Set(v8_str("f"), templ->GetFunction()); @@ -21394,7 +21433,6 @@ THREADED_TEST(Regress157124) { THREADED_TEST(Regress2535) { - i::FLAG_harmony_collections = true; LocalContext context; v8::HandleScope scope(context->GetIsolate()); Local<Value> set_value = CompileRun("new Set();"); @@ -21469,16 +21507,16 @@ class ThreadInterruptTest { private: static const int kExpectedValue = 1; - class InterruptThread : public i::Thread { + class InterruptThread : public v8::base::Thread { public: explicit InterruptThread(ThreadInterruptTest* test) - : Thread("InterruptThread"), test_(test) {} + : Thread(Options("InterruptThread")), test_(test) {} virtual void Run() { struct sigaction action; // Ensure that we'll enter waiting condition - i::OS::Sleep(100); + v8::base::OS::Sleep(100); // Setup signal handler memset(&action, 0, sizeof(action)); @@ -21489,7 +21527,7 @@ class ThreadInterruptTest { kill(getpid(), SIGCHLD); // Ensure that if wait has returned because of error - i::OS::Sleep(100); + v8::base::OS::Sleep(100); // Set value and signal semaphore test_->sem_value_ = 1; @@ -21503,7 +21541,7 @@ class ThreadInterruptTest { ThreadInterruptTest* test_; }; - i::Semaphore sem_; + v8::base::Semaphore sem_; volatile int sem_value_; }; @@ -21570,11 +21608,9 @@ TEST(JSONStringifyAccessCheck) { LocalContext context1(NULL, global_template); context1->Global()->Set(v8_str("other"), global0); - ExpectString("JSON.stringify(other)", "{}"); - ExpectString("JSON.stringify({ 'a' : other, 'b' : ['c'] })", - "{\"a\":{},\"b\":[\"c\"]}"); - ExpectString("JSON.stringify([other, 'b', 'c'])", - "[{},\"b\",\"c\"]"); + CHECK(CompileRun("JSON.stringify(other)").IsEmpty()); + CHECK(CompileRun("JSON.stringify({ 'a' : other, 'b' : ['c'] })").IsEmpty()); + CHECK(CompileRun("JSON.stringify([other, 'b', 'c'])").IsEmpty()); v8::Handle<v8::Array> array = v8::Array::New(isolate, 2); array->Set(0, v8_str("a")); @@ -21582,9 +21618,9 @@ TEST(JSONStringifyAccessCheck) { context1->Global()->Set(v8_str("array"), array); ExpectString("JSON.stringify(array)", "[\"a\",\"b\"]"); array->TurnOnAccessCheck(); - ExpectString("JSON.stringify(array)", "[]"); - ExpectString("JSON.stringify([array])", "[[]]"); - ExpectString("JSON.stringify({'a' : array})", "{\"a\":[]}"); + CHECK(CompileRun("JSON.stringify(array)").IsEmpty()); + CHECK(CompileRun("JSON.stringify([array])").IsEmpty()); + CHECK(CompileRun("JSON.stringify({'a' : array})").IsEmpty()); } } @@ -21624,7 +21660,7 @@ void CheckCorrectThrow(const char* script) { access_check_fail_thrown = false; catch_callback_called = false; i::ScopedVector<char> source(1024); - i::OS::SNPrintF(source, "try { %s; } catch (e) { catcher(e); }", script); + i::SNPrintF(source, "try { %s; } catch (e) { catcher(e); }", script); CompileRun(source.start()); CHECK(access_check_fail_thrown); CHECK(catch_callback_called); @@ -21682,18 +21718,18 @@ TEST(AccessCheckThrows) { CheckCorrectThrow("JSON.stringify(other)"); 
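  // Aside (illustrative, not part of the original patch): CheckCorrectThrow,
  // defined just above, expands each snippet via i::SNPrintF into
  //   try { <snippet>; } catch (e) { catcher(e); }
  // and asserts both that the failed-access-check callback fired and that the
  // thrown value reached the JS catch clause. The contrast with the pre-patch
  // behavior (JSON.stringify(other) silently yielding "{}") could be checked
  // directly, e.g.:
  //
  //   v8::TryCatch try_catch;
  //   CompileRun("JSON.stringify(other)");
  //   CHECK(try_catch.HasCaught());  // the access-check failure now throws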
CheckCorrectThrow("has_own_property(other, 'x')"); CheckCorrectThrow("%GetProperty(other, 'x')"); - CheckCorrectThrow("%SetProperty(other, 'x', 'foo', 1, 0)"); - CheckCorrectThrow("%IgnoreAttributesAndSetProperty(other, 'x', 'foo')"); + CheckCorrectThrow("%SetProperty(other, 'x', 'foo', 0)"); + CheckCorrectThrow("%AddNamedProperty(other, 'x', 'foo', 1)"); CheckCorrectThrow("%DeleteProperty(other, 'x', 0)"); CheckCorrectThrow("%DeleteProperty(other, '1', 0)"); - CheckCorrectThrow("%HasLocalProperty(other, 'x')"); + CheckCorrectThrow("%HasOwnProperty(other, 'x')"); CheckCorrectThrow("%HasProperty(other, 'x')"); CheckCorrectThrow("%HasElement(other, 1)"); CheckCorrectThrow("%IsPropertyEnumerable(other, 'x')"); CheckCorrectThrow("%GetPropertyNames(other)"); // PROPERTY_ATTRIBUTES_NONE = 0 - CheckCorrectThrow("%GetLocalPropertyNames(other, 0)"); - CheckCorrectThrow("%DefineOrRedefineAccessorProperty(" + CheckCorrectThrow("%GetOwnPropertyNames(other, 0)"); + CheckCorrectThrow("%DefineAccessorPropertyUnchecked(" "other, 'x', null, null, 1)"); // Reset the failed access check callback so it does not influence @@ -21819,11 +21855,12 @@ class RequestInterruptTestBase { virtual ~RequestInterruptTestBase() { } + virtual void StartInterruptThread() = 0; + virtual void TestBody() = 0; void RunTest() { - InterruptThread i_thread(this); - i_thread.Start(); + StartInterruptThread(); v8::HandleScope handle_scope(isolate_); @@ -21852,7 +21889,6 @@ class RequestInterruptTestBase { return should_continue_; } - protected: static void ShouldContinueCallback( const v8::FunctionCallbackInfo<Value>& info) { RequestInterruptTestBase* test = @@ -21861,10 +21897,28 @@ class RequestInterruptTestBase { info.GetReturnValue().Set(test->ShouldContinue()); } - class InterruptThread : public i::Thread { + LocalContext env_; + v8::Isolate* isolate_; + v8::base::Semaphore sem_; + int warmup_; + bool should_continue_; +}; + + +class RequestInterruptTestBaseWithSimpleInterrupt + : public RequestInterruptTestBase { + public: + RequestInterruptTestBaseWithSimpleInterrupt() : i_thread(this) { } + + virtual void StartInterruptThread() { + i_thread.Start(); + } + + private: + class InterruptThread : public v8::base::Thread { public: explicit InterruptThread(RequestInterruptTestBase* test) - : Thread("RequestInterruptTest"), test_(test) {} + : Thread(Options("RequestInterruptTest")), test_(test) {} virtual void Run() { test_->sem_.Wait(); @@ -21880,15 +21934,12 @@ class RequestInterruptTestBase { RequestInterruptTestBase* test_; }; - LocalContext env_; - v8::Isolate* isolate_; - i::Semaphore sem_; - int warmup_; - bool should_continue_; + InterruptThread i_thread; }; -class RequestInterruptTestWithFunctionCall : public RequestInterruptTestBase { +class RequestInterruptTestWithFunctionCall + : public RequestInterruptTestBaseWithSimpleInterrupt { public: virtual void TestBody() { Local<Function> func = Function::New( @@ -21900,7 +21951,8 @@ class RequestInterruptTestWithFunctionCall : public RequestInterruptTestBase { }; -class RequestInterruptTestWithMethodCall : public RequestInterruptTestBase { +class RequestInterruptTestWithMethodCall + : public RequestInterruptTestBaseWithSimpleInterrupt { public: virtual void TestBody() { v8::Local<v8::FunctionTemplate> t = v8::FunctionTemplate::New(isolate_); @@ -21914,7 +21966,8 @@ class RequestInterruptTestWithMethodCall : public RequestInterruptTestBase { }; -class RequestInterruptTestWithAccessor : public RequestInterruptTestBase { +class RequestInterruptTestWithAccessor + : public 
RequestInterruptTestBaseWithSimpleInterrupt { public: virtual void TestBody() { v8::Local<v8::FunctionTemplate> t = v8::FunctionTemplate::New(isolate_); @@ -21928,7 +21981,8 @@ class RequestInterruptTestWithAccessor : public RequestInterruptTestBase { }; -class RequestInterruptTestWithNativeAccessor : public RequestInterruptTestBase { +class RequestInterruptTestWithNativeAccessor + : public RequestInterruptTestBaseWithSimpleInterrupt { public: virtual void TestBody() { v8::Local<v8::FunctionTemplate> t = v8::FunctionTemplate::New(isolate_); @@ -21955,7 +22009,7 @@ class RequestInterruptTestWithNativeAccessor : public RequestInterruptTestBase { class RequestInterruptTestWithMethodCallAndInterceptor - : public RequestInterruptTestBase { + : public RequestInterruptTestBaseWithSimpleInterrupt { public: virtual void TestBody() { v8::Local<v8::FunctionTemplate> t = v8::FunctionTemplate::New(isolate_); @@ -21978,7 +22032,8 @@ class RequestInterruptTestWithMethodCallAndInterceptor }; -class RequestInterruptTestWithMathAbs : public RequestInterruptTestBase { +class RequestInterruptTestWithMathAbs + : public RequestInterruptTestBaseWithSimpleInterrupt { public: virtual void TestBody() { env_->Global()->Set(v8_str("WakeUpInterruptor"), Function::New( @@ -22062,6 +22117,61 @@ TEST(RequestInterruptTestWithMathAbs) { } +class ClearInterruptFromAnotherThread + : public RequestInterruptTestBase { + public: + ClearInterruptFromAnotherThread() : i_thread(this), sem2_(0) { } + + virtual void StartInterruptThread() { + i_thread.Start(); + } + + virtual void TestBody() { + Local<Function> func = Function::New( + isolate_, ShouldContinueCallback, v8::External::New(isolate_, this)); + env_->Global()->Set(v8_str("ShouldContinue"), func); + + CompileRun("while (ShouldContinue()) { }"); + } + + private: + class InterruptThread : public v8::base::Thread { + public: + explicit InterruptThread(ClearInterruptFromAnotherThread* test) + : Thread(Options("RequestInterruptTest")), test_(test) {} + + virtual void Run() { + test_->sem_.Wait(); + test_->isolate_->RequestInterrupt(&OnInterrupt, test_); + test_->sem_.Wait(); + test_->isolate_->ClearInterrupt(); + test_->sem2_.Signal(); + } + + static void OnInterrupt(v8::Isolate* isolate, void* data) { + ClearInterruptFromAnotherThread* test = + reinterpret_cast<ClearInterruptFromAnotherThread*>(data); + test->sem_.Signal(); + bool success = test->sem2_.WaitFor(v8::base::TimeDelta::FromSeconds(2)); + // Crash instead of timeout to make this failure more prominent. + CHECK(success); + test->should_continue_ = false; + } + + private: + ClearInterruptFromAnotherThread* test_; + }; + + InterruptThread i_thread; + v8::base::Semaphore sem2_; +}; + + +TEST(ClearInterruptFromAnotherThread) { + ClearInterruptFromAnotherThread().RunTest(); +} + + static Local<Value> function_new_expected_env; static void FunctionNewCallback(const v8::FunctionCallbackInfo<Value>& info) { CHECK_EQ(function_new_expected_env, info.Data()); @@ -22167,152 +22277,152 @@ class ApiCallOptimizationChecker { info.GetReturnValue().Set(v8_str("returned")); } - public: - enum SignatureType { - kNoSignature, - kSignatureOnReceiver, - kSignatureOnPrototype - }; - - void RunAll() { - SignatureType signature_types[] = - {kNoSignature, kSignatureOnReceiver, kSignatureOnPrototype}; - for (unsigned i = 0; i < ARRAY_SIZE(signature_types); i++) { - SignatureType signature_type = signature_types[i]; - for (int j = 0; j < 2; j++) { - bool global = j == 0; - int key = signature_type + - ARRAY_SIZE(signature_types) * (global ? 
1 : 0); - Run(signature_type, global, key); - } + public: + enum SignatureType { + kNoSignature, + kSignatureOnReceiver, + kSignatureOnPrototype + }; + + void RunAll() { + SignatureType signature_types[] = + {kNoSignature, kSignatureOnReceiver, kSignatureOnPrototype}; + for (unsigned i = 0; i < ARRAY_SIZE(signature_types); i++) { + SignatureType signature_type = signature_types[i]; + for (int j = 0; j < 2; j++) { + bool global = j == 0; + int key = signature_type + + ARRAY_SIZE(signature_types) * (global ? 1 : 0); + Run(signature_type, global, key); } } + } - void Run(SignatureType signature_type, bool global, int key) { - v8::Isolate* isolate = CcTest::isolate(); - v8::HandleScope scope(isolate); - // Build a template for signature checks. - Local<v8::ObjectTemplate> signature_template; - Local<v8::Signature> signature; - { - Local<v8::FunctionTemplate> parent_template = - FunctionTemplate::New(isolate); - parent_template->SetHiddenPrototype(true); - Local<v8::FunctionTemplate> function_template - = FunctionTemplate::New(isolate); - function_template->Inherit(parent_template); - switch (signature_type) { - case kNoSignature: - break; - case kSignatureOnReceiver: - signature = v8::Signature::New(isolate, function_template); - break; - case kSignatureOnPrototype: - signature = v8::Signature::New(isolate, parent_template); - break; - } - signature_template = function_template->InstanceTemplate(); - } - // Global object must pass checks. - Local<v8::Context> context = - v8::Context::New(isolate, NULL, signature_template); - v8::Context::Scope context_scope(context); - // Install regular object that can pass signature checks. - Local<Object> function_receiver = signature_template->NewInstance(); - context->Global()->Set(v8_str("function_receiver"), function_receiver); - // Get the holder objects. - Local<Object> inner_global = - Local<Object>::Cast(context->Global()->GetPrototype()); - // Install functions on hidden prototype object if there is one. - data = Object::New(isolate); - Local<FunctionTemplate> function_template = FunctionTemplate::New( - isolate, OptimizationCallback, data, signature); - Local<Function> function = function_template->GetFunction(); - Local<Object> global_holder = inner_global; - Local<Object> function_holder = function_receiver; - if (signature_type == kSignatureOnPrototype) { - function_holder = Local<Object>::Cast(function_holder->GetPrototype()); - global_holder = Local<Object>::Cast(global_holder->GetPrototype()); + void Run(SignatureType signature_type, bool global, int key) { + v8::Isolate* isolate = CcTest::isolate(); + v8::HandleScope scope(isolate); + // Build a template for signature checks. + Local<v8::ObjectTemplate> signature_template; + Local<v8::Signature> signature; + { + Local<v8::FunctionTemplate> parent_template = + FunctionTemplate::New(isolate); + parent_template->SetHiddenPrototype(true); + Local<v8::FunctionTemplate> function_template + = FunctionTemplate::New(isolate); + function_template->Inherit(parent_template); + switch (signature_type) { + case kNoSignature: + break; + case kSignatureOnReceiver: + signature = v8::Signature::New(isolate, function_template); + break; + case kSignatureOnPrototype: + signature = v8::Signature::New(isolate, parent_template); + break; } - global_holder->Set(v8_str("g_f"), function); - global_holder->SetAccessorProperty(v8_str("g_acc"), function, function); - function_holder->Set(v8_str("f"), function); - function_holder->SetAccessorProperty(v8_str("acc"), function, function); - // Initialize expected values. 
- callee = function; - count = 0; - if (global) { - receiver = context->Global(); - holder = inner_global; - } else { - holder = function_receiver; - // If not using a signature, add something else to the prototype chain - // to test the case that holder != receiver - if (signature_type == kNoSignature) { - receiver = Local<Object>::Cast(CompileRun( - "var receiver_subclass = {};\n" - "receiver_subclass.__proto__ = function_receiver;\n" - "receiver_subclass")); - } else { - receiver = Local<Object>::Cast(CompileRun( - "var receiver_subclass = function_receiver;\n" + signature_template = function_template->InstanceTemplate(); + } + // Global object must pass checks. + Local<v8::Context> context = + v8::Context::New(isolate, NULL, signature_template); + v8::Context::Scope context_scope(context); + // Install regular object that can pass signature checks. + Local<Object> function_receiver = signature_template->NewInstance(); + context->Global()->Set(v8_str("function_receiver"), function_receiver); + // Get the holder objects. + Local<Object> inner_global = + Local<Object>::Cast(context->Global()->GetPrototype()); + // Install functions on hidden prototype object if there is one. + data = Object::New(isolate); + Local<FunctionTemplate> function_template = FunctionTemplate::New( + isolate, OptimizationCallback, data, signature); + Local<Function> function = function_template->GetFunction(); + Local<Object> global_holder = inner_global; + Local<Object> function_holder = function_receiver; + if (signature_type == kSignatureOnPrototype) { + function_holder = Local<Object>::Cast(function_holder->GetPrototype()); + global_holder = Local<Object>::Cast(global_holder->GetPrototype()); + } + global_holder->Set(v8_str("g_f"), function); + global_holder->SetAccessorProperty(v8_str("g_acc"), function, function); + function_holder->Set(v8_str("f"), function); + function_holder->SetAccessorProperty(v8_str("acc"), function, function); + // Initialize expected values. + callee = function; + count = 0; + if (global) { + receiver = context->Global(); + holder = inner_global; + } else { + holder = function_receiver; + // If not using a signature, add something else to the prototype chain + // to test the case that holder != receiver + if (signature_type == kNoSignature) { + receiver = Local<Object>::Cast(CompileRun( + "var receiver_subclass = {};\n" + "receiver_subclass.__proto__ = function_receiver;\n" "receiver_subclass")); - } - } - // With no signature, the holder is not set. 
- if (signature_type == kNoSignature) holder = receiver; - // build wrap_function - i::ScopedVector<char> wrap_function(200); - if (global) { - i::OS::SNPrintF( - wrap_function, - "function wrap_f_%d() { var f = g_f; return f(); }\n" - "function wrap_get_%d() { return this.g_acc; }\n" - "function wrap_set_%d() { return this.g_acc = 1; }\n", - key, key, key); } else { - i::OS::SNPrintF( - wrap_function, - "function wrap_f_%d() { return receiver_subclass.f(); }\n" - "function wrap_get_%d() { return receiver_subclass.acc; }\n" - "function wrap_set_%d() { return receiver_subclass.acc = 1; }\n", - key, key, key); + receiver = Local<Object>::Cast(CompileRun( + "var receiver_subclass = function_receiver;\n" + "receiver_subclass")); } - // build source string - i::ScopedVector<char> source(1000); - i::OS::SNPrintF( - source, - "%s\n" // wrap functions - "function wrap_f() { return wrap_f_%d(); }\n" - "function wrap_get() { return wrap_get_%d(); }\n" - "function wrap_set() { return wrap_set_%d(); }\n" - "check = function(returned) {\n" - " if (returned !== 'returned') { throw returned; }\n" - "}\n" - "\n" - "check(wrap_f());\n" - "check(wrap_f());\n" - "%%OptimizeFunctionOnNextCall(wrap_f_%d);\n" - "check(wrap_f());\n" - "\n" - "check(wrap_get());\n" - "check(wrap_get());\n" - "%%OptimizeFunctionOnNextCall(wrap_get_%d);\n" - "check(wrap_get());\n" - "\n" - "check = function(returned) {\n" - " if (returned !== 1) { throw returned; }\n" - "}\n" - "check(wrap_set());\n" - "check(wrap_set());\n" - "%%OptimizeFunctionOnNextCall(wrap_set_%d);\n" - "check(wrap_set());\n", - wrap_function.start(), key, key, key, key, key, key); - v8::TryCatch try_catch; - CompileRun(source.start()); - ASSERT(!try_catch.HasCaught()); - CHECK_EQ(9, count); } + // With no signature, the holder is not set. 
+ if (signature_type == kNoSignature) holder = receiver; + // build wrap_function + i::ScopedVector<char> wrap_function(200); + if (global) { + i::SNPrintF( + wrap_function, + "function wrap_f_%d() { var f = g_f; return f(); }\n" + "function wrap_get_%d() { return this.g_acc; }\n" + "function wrap_set_%d() { return this.g_acc = 1; }\n", + key, key, key); + } else { + i::SNPrintF( + wrap_function, + "function wrap_f_%d() { return receiver_subclass.f(); }\n" + "function wrap_get_%d() { return receiver_subclass.acc; }\n" + "function wrap_set_%d() { return receiver_subclass.acc = 1; }\n", + key, key, key); + } + // build source string + i::ScopedVector<char> source(1000); + i::SNPrintF( + source, + "%s\n" // wrap functions + "function wrap_f() { return wrap_f_%d(); }\n" + "function wrap_get() { return wrap_get_%d(); }\n" + "function wrap_set() { return wrap_set_%d(); }\n" + "check = function(returned) {\n" + " if (returned !== 'returned') { throw returned; }\n" + "}\n" + "\n" + "check(wrap_f());\n" + "check(wrap_f());\n" + "%%OptimizeFunctionOnNextCall(wrap_f_%d);\n" + "check(wrap_f());\n" + "\n" + "check(wrap_get());\n" + "check(wrap_get());\n" + "%%OptimizeFunctionOnNextCall(wrap_get_%d);\n" + "check(wrap_get());\n" + "\n" + "check = function(returned) {\n" + " if (returned !== 1) { throw returned; }\n" + "}\n" + "check(wrap_set());\n" + "check(wrap_set());\n" + "%%OptimizeFunctionOnNextCall(wrap_set_%d);\n" + "check(wrap_set());\n", + wrap_function.start(), key, key, key, key, key, key); + v8::TryCatch try_catch; + CompileRun(source.start()); + DCHECK(!try_catch.HasCaught()); + CHECK_EQ(9, count); + } }; @@ -22341,14 +22451,13 @@ void StoringEventLoggerCallback(const char* message, int status) { TEST(EventLogging) { v8::Isolate* isolate = CcTest::isolate(); isolate->SetEventLogger(StoringEventLoggerCallback); - v8::internal::HistogramTimer* histogramTimer = - new v8::internal::HistogramTimer( - "V8.Test", 0, 10000, 50, - reinterpret_cast<v8::internal::Isolate*>(isolate)); - histogramTimer->Start(); + v8::internal::HistogramTimer histogramTimer( + "V8.Test", 0, 10000, 50, + reinterpret_cast<v8::internal::Isolate*>(isolate)); + histogramTimer.Start(); CHECK_EQ("V8.Test", last_event_message); CHECK_EQ(0, last_event_status); - histogramTimer->Stop(); + histogramTimer.Stop(); CHECK_EQ("V8.Test", last_event_message); CHECK_EQ(1, last_event_status); } @@ -22448,6 +22557,72 @@ TEST(Promises) { } +TEST(PromiseThen) { + LocalContext context; + v8::Isolate* isolate = context->GetIsolate(); + v8::HandleScope scope(isolate); + Handle<Object> global = context->Global(); + + // Creation. + Handle<v8::Promise::Resolver> pr = v8::Promise::Resolver::New(isolate); + Handle<v8::Promise::Resolver> qr = v8::Promise::Resolver::New(isolate); + Handle<v8::Promise> p = pr->GetPromise(); + Handle<v8::Promise> q = qr->GetPromise(); + + CHECK(p->IsPromise()); + CHECK(q->IsPromise()); + + pr->Resolve(v8::Integer::New(isolate, 1)); + qr->Resolve(p); + + // Chaining non-pending promises. 
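  // Aside (illustrative sketch, not part of the original patch): the
  // embedder-side pattern this test exercises, using only calls that appear
  // in the test itself; the helper name is hypothetical.
  //
  //   void ResolveAndObserve(v8::Isolate* isolate,
  //                          Handle<Function> on_value) {
  //     v8::HandleScope scope(isolate);
  //     Handle<v8::Promise::Resolver> r = v8::Promise::Resolver::New(isolate);
  //     Handle<v8::Promise> promise = r->GetPromise();
  //     promise->Then(on_value);               // queues a reaction microtask
  //     r->Resolve(v8::Integer::New(isolate, 42));
  //     isolate->RunMicrotasks();              // reactions only run here when
  //   }                                        // the queue is not auto-run
  //
  // Then() hands the callback the resolved value; Chain(), used first below,
  // hands it the inner promise object itself (x1 ends up equal to p).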
+ CompileRun( + "var x1 = 0;\n" + "var x2 = 0;\n" + "function f1(x) { x1 = x; return x+1 };\n" + "function f2(x) { x2 = x; return x+1 };\n"); + Handle<Function> f1 = Handle<Function>::Cast(global->Get(v8_str("f1"))); + Handle<Function> f2 = Handle<Function>::Cast(global->Get(v8_str("f2"))); + + // Chain + q->Chain(f1); + CHECK(global->Get(v8_str("x1"))->IsNumber()); + CHECK_EQ(0, global->Get(v8_str("x1"))->Int32Value()); + isolate->RunMicrotasks(); + CHECK(!global->Get(v8_str("x1"))->IsNumber()); + CHECK_EQ(p, global->Get(v8_str("x1"))); + + // Then + CompileRun("x1 = x2 = 0;"); + q->Then(f1); + CHECK_EQ(0, global->Get(v8_str("x1"))->Int32Value()); + isolate->RunMicrotasks(); + CHECK_EQ(1, global->Get(v8_str("x1"))->Int32Value()); + + // Then + CompileRun("x1 = x2 = 0;"); + pr = v8::Promise::Resolver::New(isolate); + qr = v8::Promise::Resolver::New(isolate); + + qr->Resolve(pr); + qr->GetPromise()->Then(f1)->Then(f2); + + CHECK_EQ(0, global->Get(v8_str("x1"))->Int32Value()); + CHECK_EQ(0, global->Get(v8_str("x2"))->Int32Value()); + isolate->RunMicrotasks(); + CHECK_EQ(0, global->Get(v8_str("x1"))->Int32Value()); + CHECK_EQ(0, global->Get(v8_str("x2"))->Int32Value()); + + pr->Resolve(v8::Integer::New(isolate, 3)); + + CHECK_EQ(0, global->Get(v8_str("x1"))->Int32Value()); + CHECK_EQ(0, global->Get(v8_str("x2"))->Int32Value()); + isolate->RunMicrotasks(); + CHECK_EQ(3, global->Get(v8_str("x1"))->Int32Value()); + CHECK_EQ(4, global->Get(v8_str("x2"))->Int32Value()); +} + + TEST(DisallowJavascriptExecutionScope) { LocalContext context; v8::Isolate* isolate = context->GetIsolate(); @@ -22540,3 +22715,158 @@ TEST(CaptureStackTraceForStackOverflow) { CompileRun("(function f(x) { f(x+1); })(0)"); CHECK(try_catch.HasCaught()); } + + +TEST(ScriptNameAndLineNumber) { + LocalContext env; + v8::Isolate* isolate = env->GetIsolate(); + v8::HandleScope scope(isolate); + const char* url = "http://www.foo.com/foo.js"; + v8::ScriptOrigin origin(v8_str(url), v8::Integer::New(isolate, 13)); + v8::ScriptCompiler::Source script_source(v8_str("var foo;"), origin); + Local<Script> script = v8::ScriptCompiler::Compile( + isolate, &script_source); + Local<Value> script_name = script->GetUnboundScript()->GetScriptName(); + CHECK(!script_name.IsEmpty()); + CHECK(script_name->IsString()); + String::Utf8Value utf8_name(script_name); + CHECK_EQ(url, *utf8_name); + int line_number = script->GetUnboundScript()->GetLineNumber(0); + CHECK_EQ(13, line_number); +} + + +Local<v8::Context> call_eval_context; +Local<v8::Function> call_eval_bound_function; +static void CallEval(const v8::FunctionCallbackInfo<v8::Value>& args) { + v8::Context::Scope scope(call_eval_context); + args.GetReturnValue().Set( + call_eval_bound_function->Call(call_eval_context->Global(), 0, NULL)); +} + + +TEST(CrossActivationEval) { + LocalContext env; + v8::Isolate* isolate = env->GetIsolate(); + v8::HandleScope scope(isolate); + { + call_eval_context = v8::Context::New(isolate); + v8::Context::Scope scope(call_eval_context); + call_eval_bound_function = + Local<Function>::Cast(CompileRun("eval.bind(this, '1')")); + } + env->Global()->Set(v8_str("CallEval"), + v8::FunctionTemplate::New(isolate, CallEval)->GetFunction()); + Local<Value> result = CompileRun("CallEval();"); + CHECK_EQ(result, v8::Integer::New(isolate, 1)); +} + + +void SourceURLHelper(const char* source, const char* expected_source_url, + const char* expected_source_mapping_url) { + Local<Script> script = v8_compile(source); + if (expected_source_url != NULL) { + v8::String::Utf8Value 
url(script->GetUnboundScript()->GetSourceURL()); + CHECK_EQ(expected_source_url, *url); + } else { + CHECK(script->GetUnboundScript()->GetSourceURL()->IsUndefined()); + } + if (expected_source_mapping_url != NULL) { + v8::String::Utf8Value url( + script->GetUnboundScript()->GetSourceMappingURL()); + CHECK_EQ(expected_source_mapping_url, *url); + } else { + CHECK(script->GetUnboundScript()->GetSourceMappingURL()->IsUndefined()); + } +} + + +TEST(ScriptSourceURLAndSourceMappingURL) { + LocalContext env; + v8::Isolate* isolate = env->GetIsolate(); + v8::HandleScope scope(isolate); + SourceURLHelper("function foo() {}\n" + "//# sourceURL=bar1.js\n", "bar1.js", NULL); + SourceURLHelper("function foo() {}\n" + "//# sourceMappingURL=bar2.js\n", NULL, "bar2.js"); + + // Both sourceURL and sourceMappingURL. + SourceURLHelper("function foo() {}\n" + "//# sourceURL=bar3.js\n" + "//# sourceMappingURL=bar4.js\n", "bar3.js", "bar4.js"); + + // Two source URLs; the first one is ignored. + SourceURLHelper("function foo() {}\n" + "//# sourceURL=ignoreme.js\n" + "//# sourceURL=bar5.js\n", "bar5.js", NULL); + SourceURLHelper("function foo() {}\n" + "//# sourceMappingURL=ignoreme.js\n" + "//# sourceMappingURL=bar6.js\n", NULL, "bar6.js"); + + // SourceURL or sourceMappingURL in the middle of the script. + SourceURLHelper("function foo() {}\n" + "//# sourceURL=bar7.js\n" + "function baz() {}\n", "bar7.js", NULL); + SourceURLHelper("function foo() {}\n" + "//# sourceMappingURL=bar8.js\n" + "function baz() {}\n", NULL, "bar8.js"); + + // Too much whitespace. + SourceURLHelper("function foo() {}\n" + "//# sourceURL=bar9.js\n" + "//# sourceMappingURL=bar10.js\n", NULL, NULL); + SourceURLHelper("function foo() {}\n" + "//# sourceURL =bar11.js\n" + "//# sourceMappingURL =bar12.js\n", NULL, NULL); + + // Disallowed characters in value. + SourceURLHelper("function foo() {}\n" + "//# sourceURL=bar13 .js \n" + "//# sourceMappingURL=bar14 .js \n", + NULL, NULL); + SourceURLHelper("function foo() {}\n" + "//# sourceURL=bar15\t.js \n" + "//# sourceMappingURL=bar16\t.js \n", + NULL, NULL); + SourceURLHelper("function foo() {}\n" + "//# sourceURL=bar17'.js \n" + "//# sourceMappingURL=bar18'.js \n", + NULL, NULL); + SourceURLHelper("function foo() {}\n" + "//# sourceURL=bar19\".js \n" + "//# sourceMappingURL=bar20\".js \n", + NULL, NULL); + + // Not too much whitespace. 
+ SourceURLHelper("function foo() {}\n" + "//# sourceURL= bar21.js \n" + "//# sourceMappingURL= bar22.js \n", "bar21.js", "bar22.js"); +} + + +TEST(GetOwnPropertyDescriptor) { + LocalContext env; + v8::Isolate* isolate = env->GetIsolate(); + v8::HandleScope scope(isolate); + CompileRun( + "var x = { value : 13};" + "Object.defineProperty(x, 'p0', {value : 12});" + "Object.defineProperty(x, 'p1', {" + " set : function(value) { this.value = value; }," + " get : function() { return this.value; }," + "});"); + Local<Object> x = Local<Object>::Cast(env->Global()->Get(v8_str("x"))); + Local<Value> desc = x->GetOwnPropertyDescriptor(v8_str("no_prop")); + CHECK(desc->IsUndefined()); + desc = x->GetOwnPropertyDescriptor(v8_str("p0")); + CHECK_EQ(v8_num(12), Local<Object>::Cast(desc)->Get(v8_str("value"))); + desc = x->GetOwnPropertyDescriptor(v8_str("p1")); + Local<Function> set = + Local<Function>::Cast(Local<Object>::Cast(desc)->Get(v8_str("set"))); + Local<Function> get = + Local<Function>::Cast(Local<Object>::Cast(desc)->Get(v8_str("get"))); + CHECK_EQ(v8_num(13), get->Call(x, 0, NULL)); + Handle<Value> args[] = { v8_num(14) }; + set->Call(x, 1, args); + CHECK_EQ(v8_num(14), get->Call(x, 0, NULL)); +} diff --git a/deps/v8/test/cctest/test-assembler-arm.cc b/deps/v8/test/cctest/test-assembler-arm.cc index 470cd61638e..4c339a32b45 100644 --- a/deps/v8/test/cctest/test-assembler-arm.cc +++ b/deps/v8/test/cctest/test-assembler-arm.cc @@ -25,13 +25,14 @@ // (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE // OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. -#include "v8.h" +#include "src/v8.h" +#include "test/cctest/cctest.h" -#include "disassembler.h" -#include "factory.h" -#include "arm/simulator-arm.h" -#include "arm/assembler-arm-inl.h" -#include "cctest.h" +#include "src/arm/assembler-arm-inl.h" +#include "src/arm/simulator-arm.h" +#include "src/disassembler.h" +#include "src/factory.h" +#include "src/ostreams.h" using namespace v8::internal; @@ -60,7 +61,8 @@ TEST(0) { Handle<Code> code = isolate->factory()->NewCode( desc, Code::ComputeFlags(Code::STUB), Handle<Code>()); #ifdef DEBUG - code->Print(); + OFStream os(stdout); + code->Print(os); #endif F2 f = FUNCTION_CAST<F2>(code->entry()); int res = reinterpret_cast<int>(CALL_GENERATED_CODE(f, 3, 4, 0, 0, 0)); @@ -95,7 +97,8 @@ TEST(1) { Handle<Code> code = isolate->factory()->NewCode( desc, Code::ComputeFlags(Code::STUB), Handle<Code>()); #ifdef DEBUG - code->Print(); + OFStream os(stdout); + code->Print(os); #endif F1 f = FUNCTION_CAST<F1>(code->entry()); int res = reinterpret_cast<int>(CALL_GENERATED_CODE(f, 100, 0, 0, 0, 0)); @@ -139,7 +142,8 @@ TEST(2) { Handle<Code> code = isolate->factory()->NewCode( desc, Code::ComputeFlags(Code::STUB), Handle<Code>()); #ifdef DEBUG - code->Print(); + OFStream os(stdout); + code->Print(os); #endif F1 f = FUNCTION_CAST<F1>(code->entry()); int res = reinterpret_cast<int>(CALL_GENERATED_CODE(f, 10, 0, 0, 0, 0)); @@ -185,7 +189,8 @@ TEST(3) { Handle<Code> code = isolate->factory()->NewCode( desc, Code::ComputeFlags(Code::STUB), Handle<Code>()); #ifdef DEBUG - code->Print(); + OFStream os(stdout); + code->Print(os); #endif F3 f = FUNCTION_CAST<F3>(code->entry()); t.i = 100000; @@ -308,7 +313,8 @@ TEST(4) { Handle<Code> code = isolate->factory()->NewCode( desc, Code::ComputeFlags(Code::STUB), Handle<Code>()); #ifdef DEBUG - code->Print(); + OFStream os(stdout); + code->Print(os); #endif F3 f = FUNCTION_CAST<F3>(code->entry()); t.a = 1.5; @@ -368,7 +374,8 @@ TEST(5) { 
Handle<Code> code = isolate->factory()->NewCode( desc, Code::ComputeFlags(Code::STUB), Handle<Code>()); #ifdef DEBUG - code->Print(); + OFStream os(stdout); + code->Print(os); #endif F1 f = FUNCTION_CAST<F1>(code->entry()); int res = reinterpret_cast<int>( @@ -401,7 +408,8 @@ TEST(6) { Handle<Code> code = isolate->factory()->NewCode( desc, Code::ComputeFlags(Code::STUB), Handle<Code>()); #ifdef DEBUG - code->Print(); + OFStream os(stdout); + code->Print(os); #endif F1 f = FUNCTION_CAST<F1>(code->entry()); int res = reinterpret_cast<int>( @@ -474,7 +482,8 @@ static void TestRoundingMode(VCVTTypes types, Handle<Code> code = isolate->factory()->NewCode( desc, Code::ComputeFlags(Code::STUB), Handle<Code>()); #ifdef DEBUG - code->Print(); + OFStream os(stdout); + code->Print(os); #endif F1 f = FUNCTION_CAST<F1>(code->entry()); int res = reinterpret_cast<int>( @@ -657,7 +666,8 @@ TEST(8) { Handle<Code> code = isolate->factory()->NewCode( desc, Code::ComputeFlags(Code::STUB), Handle<Code>()); #ifdef DEBUG - code->Print(); + OFStream os(stdout); + code->Print(os); #endif F4 fn = FUNCTION_CAST<F4>(code->entry()); d.a = 1.1; @@ -766,7 +776,8 @@ TEST(9) { Handle<Code> code = isolate->factory()->NewCode( desc, Code::ComputeFlags(Code::STUB), Handle<Code>()); #ifdef DEBUG - code->Print(); + OFStream os(stdout); + code->Print(os); #endif F4 fn = FUNCTION_CAST<F4>(code->entry()); d.a = 1.1; @@ -871,7 +882,8 @@ TEST(10) { Handle<Code> code = isolate->factory()->NewCode( desc, Code::ComputeFlags(Code::STUB), Handle<Code>()); #ifdef DEBUG - code->Print(); + OFStream os(stdout); + code->Print(os); #endif F4 fn = FUNCTION_CAST<F4>(code->entry()); d.a = 1.1; @@ -965,7 +977,8 @@ TEST(11) { Handle<Code> code = isolate->factory()->NewCode( desc, Code::ComputeFlags(Code::STUB), Handle<Code>()); #ifdef DEBUG - code->Print(); + OFStream os(stdout); + code->Print(os); #endif F3 f = FUNCTION_CAST<F3>(code->entry()); Object* dummy = CALL_GENERATED_CODE(f, &i, 0, 0, 0, 0); @@ -1092,7 +1105,8 @@ TEST(13) { Handle<Code> code = isolate->factory()->NewCode( desc, Code::ComputeFlags(Code::STUB), Handle<Code>()); #ifdef DEBUG - code->Print(); + OFStream os(stdout); + code->Print(os); #endif F3 f = FUNCTION_CAST<F3>(code->entry()); t.a = 1.5; @@ -1164,7 +1178,8 @@ TEST(14) { Handle<Code> code = isolate->factory()->NewCode( desc, Code::ComputeFlags(Code::STUB), Handle<Code>()); #ifdef DEBUG - code->Print(); + OFStream os(stdout); + code->Print(os); #endif F3 f = FUNCTION_CAST<F3>(code->entry()); t.left = BitCast<double>(kHoleNanInt64); @@ -1180,7 +1195,7 @@ TEST(14) { #ifdef DEBUG const uint64_t kArmNanInt64 = (static_cast<uint64_t>(kArmNanUpper32) << 32) | kArmNanLower32; - ASSERT(kArmNanInt64 != kHoleNanInt64); + DCHECK(kArmNanInt64 != kHoleNanInt64); #endif // With VFP2 the sign of the canonicalized Nan is undefined. So // we remove the sign bit for the upper tests. 
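// Aside (illustrative, not part of the original patch): the edit repeated
// throughout this file replaces the argument-less Code::Print() with the
// stream-based overload. OFStream, from the newly included src/ostreams.h,
// adapts a FILE* to a std::ostream, so each debug disassembly block becomes:
//
//   #ifdef DEBUG
//     OFStream os(stdout);   // std::ostream wrapper over stdout
//     code->Print(os);       // Code::Print(std::ostream&) replaces Print()
//   #endif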
@@ -1267,7 +1282,8 @@ TEST(15) { Handle<Code> code = isolate->factory()->NewCode( desc, Code::ComputeFlags(Code::STUB), Handle<Code>()); #ifdef DEBUG - code->Print(); + OFStream os(stdout); + code->Print(os); #endif F3 f = FUNCTION_CAST<F3>(code->entry()); t.src0 = 0x01020304; @@ -1369,7 +1385,8 @@ TEST(16) { Handle<Code> code = isolate->factory()->NewCode( desc, Code::ComputeFlags(Code::STUB), Handle<Code>()); #ifdef DEBUG - code->Print(); + OFStream os(stdout); + code->Print(os); #endif F3 f = FUNCTION_CAST<F3>(code->entry()); t.src0 = 0x01020304; @@ -1451,7 +1468,8 @@ TEST(18) { Handle<Code> code = isolate->factory()->NewCode( desc, Code::ComputeFlags(Code::STUB), Handle<Code>()); #ifdef DEBUG - code->Print(); + OFStream os(stdout); + code->Print(os); #endif F3 f = FUNCTION_CAST<F3>(code->entry()); Object* dummy; diff --git a/deps/v8/test/cctest/test-assembler-arm64.cc b/deps/v8/test/cctest/test-assembler-arm64.cc index 25f3adb502d..3d05487f393 100644 --- a/deps/v8/test/cctest/test-assembler-arm64.cc +++ b/deps/v8/test/cctest/test-assembler-arm64.cc @@ -31,15 +31,15 @@ #include <cmath> #include <limits> -#include "v8.h" +#include "src/v8.h" -#include "macro-assembler.h" -#include "arm64/simulator-arm64.h" -#include "arm64/decoder-arm64-inl.h" -#include "arm64/disasm-arm64.h" -#include "arm64/utils-arm64.h" -#include "cctest.h" -#include "test-utils-arm64.h" +#include "src/arm64/decoder-arm64-inl.h" +#include "src/arm64/disasm-arm64.h" +#include "src/arm64/simulator-arm64.h" +#include "src/arm64/utils-arm64.h" +#include "src/macro-assembler.h" +#include "test/cctest/cctest.h" +#include "test/cctest/test-utils-arm64.h" using namespace v8::internal; @@ -58,7 +58,7 @@ using namespace v8::internal; // // RUN(); // -// ASSERT_EQUAL_64(1, x0); +// CHECK_EQUAL_64(1, x0); // // TEARDOWN(); // } @@ -74,22 +74,22 @@ using namespace v8::internal; // // We provide some helper assert to handle common cases: // -// ASSERT_EQUAL_32(int32_t, int_32t) -// ASSERT_EQUAL_FP32(float, float) -// ASSERT_EQUAL_32(int32_t, W register) -// ASSERT_EQUAL_FP32(float, S register) -// ASSERT_EQUAL_64(int64_t, int_64t) -// ASSERT_EQUAL_FP64(double, double) -// ASSERT_EQUAL_64(int64_t, X register) -// ASSERT_EQUAL_64(X register, X register) -// ASSERT_EQUAL_FP64(double, D register) +// CHECK_EQUAL_32(int32_t, int_32t) +// CHECK_EQUAL_FP32(float, float) +// CHECK_EQUAL_32(int32_t, W register) +// CHECK_EQUAL_FP32(float, S register) +// CHECK_EQUAL_64(int64_t, int_64t) +// CHECK_EQUAL_FP64(double, double) +// CHECK_EQUAL_64(int64_t, X register) +// CHECK_EQUAL_64(X register, X register) +// CHECK_EQUAL_FP64(double, D register) // -// e.g. ASSERT_EQUAL_64(0.5, d30); +// e.g. CHECK_EQUAL_64(0.5, d30); // // If more advance computation is required before the assert then access the // RegisterDump named core directly: // -// ASSERT_EQUAL_64(0x1234, core.xreg(0) & 0xffff); +// CHECK_EQUAL_64(0x1234, core.xreg(0) & 0xffff); #if 0 // TODO(all): enable. 
@@ -116,7 +116,7 @@ static void InitializeVM() { #define SETUP_SIZE(buf_size) \ Isolate* isolate = Isolate::Current(); \ HandleScope scope(isolate); \ - ASSERT(isolate != NULL); \ + DCHECK(isolate != NULL); \ byte* buf = new byte[buf_size]; \ MacroAssembler masm(isolate, buf, buf_size); \ Decoder<DispatchingDecoderVisitor>* decoder = \ @@ -170,11 +170,10 @@ static void InitializeVM() { #define SETUP_SIZE(buf_size) \ Isolate* isolate = Isolate::Current(); \ HandleScope scope(isolate); \ - ASSERT(isolate != NULL); \ + DCHECK(isolate != NULL); \ byte* buf = new byte[buf_size]; \ MacroAssembler masm(isolate, buf, buf_size); \ - RegisterDump core; \ - CpuFeatures::Probe(false); + RegisterDump core; #define RESET() \ __ Reset(); \ @@ -191,12 +190,12 @@ static void InitializeVM() { RESET(); \ START_AFTER_RESET(); -#define RUN() \ - CPU::FlushICache(buf, masm.SizeOfGeneratedCode()); \ - { \ - void (*test_function)(void); \ - memcpy(&test_function, &buf, sizeof(buf)); \ - test_function(); \ +#define RUN() \ + CpuFeatures::FlushICache(buf, masm.SizeOfGeneratedCode()); \ + { \ + void (*test_function)(void); \ + memcpy(&test_function, &buf, sizeof(buf)); \ + test_function(); \ } #define END() \ @@ -210,29 +209,29 @@ static void InitializeVM() { #endif // ifdef USE_SIMULATOR. -#define ASSERT_EQUAL_NZCV(expected) \ +#define CHECK_EQUAL_NZCV(expected) \ CHECK(EqualNzcv(expected, core.flags_nzcv())) -#define ASSERT_EQUAL_REGISTERS(expected) \ +#define CHECK_EQUAL_REGISTERS(expected) \ CHECK(EqualRegisters(&expected, &core)) -#define ASSERT_EQUAL_32(expected, result) \ +#define CHECK_EQUAL_32(expected, result) \ CHECK(Equal32(static_cast<uint32_t>(expected), &core, result)) -#define ASSERT_EQUAL_FP32(expected, result) \ +#define CHECK_EQUAL_FP32(expected, result) \ CHECK(EqualFP32(expected, &core, result)) -#define ASSERT_EQUAL_64(expected, result) \ +#define CHECK_EQUAL_64(expected, result) \ CHECK(Equal64(expected, &core, result)) -#define ASSERT_EQUAL_FP64(expected, result) \ +#define CHECK_EQUAL_FP64(expected, result) \ CHECK(EqualFP64(expected, &core, result)) #ifdef DEBUG -#define ASSERT_LITERAL_POOL_SIZE(expected) \ +#define DCHECK_LITERAL_POOL_SIZE(expected) \ CHECK((expected) == (__ LiteralPoolSize())) #else -#define ASSERT_LITERAL_POOL_SIZE(expected) \ +#define DCHECK_LITERAL_POOL_SIZE(expected) \ ((void) 0) #endif @@ -277,12 +276,12 @@ TEST(stack_ops) { RUN(); - ASSERT_EQUAL_64(0x1000, x0); - ASSERT_EQUAL_64(0x1050, x1); - ASSERT_EQUAL_64(0x104f, x2); - ASSERT_EQUAL_64(0x1fff, x3); - ASSERT_EQUAL_64(0xfffffff8, x4); - ASSERT_EQUAL_64(0xfffffff8, x5); + CHECK_EQUAL_64(0x1000, x0); + CHECK_EQUAL_64(0x1050, x1); + CHECK_EQUAL_64(0x104f, x2); + CHECK_EQUAL_64(0x1fff, x3); + CHECK_EQUAL_64(0xfffffff8, x4); + CHECK_EQUAL_64(0xfffffff8, x5); TEARDOWN(); } @@ -313,22 +312,22 @@ TEST(mvn) { RUN(); - ASSERT_EQUAL_64(0xfffff000, x0); - ASSERT_EQUAL_64(0xfffffffffffff000UL, x1); - ASSERT_EQUAL_64(0x00001fff, x2); - ASSERT_EQUAL_64(0x0000000000003fffUL, x3); - ASSERT_EQUAL_64(0xe00001ff, x4); - ASSERT_EQUAL_64(0xf0000000000000ffUL, x5); - ASSERT_EQUAL_64(0x00000001, x6); - ASSERT_EQUAL_64(0x0, x7); - ASSERT_EQUAL_64(0x7ff80000, x8); - ASSERT_EQUAL_64(0x3ffc000000000000UL, x9); - ASSERT_EQUAL_64(0xffffff00, x10); - ASSERT_EQUAL_64(0x0000000000000001UL, x11); - ASSERT_EQUAL_64(0xffff8003, x12); - ASSERT_EQUAL_64(0xffffffffffff0007UL, x13); - ASSERT_EQUAL_64(0xfffffffffffe000fUL, x14); - ASSERT_EQUAL_64(0xfffffffffffe000fUL, x15); + CHECK_EQUAL_64(0xfffff000, x0); + CHECK_EQUAL_64(0xfffffffffffff000UL, x1); + 
CHECK_EQUAL_64(0x00001fff, x2); + CHECK_EQUAL_64(0x0000000000003fffUL, x3); + CHECK_EQUAL_64(0xe00001ff, x4); + CHECK_EQUAL_64(0xf0000000000000ffUL, x5); + CHECK_EQUAL_64(0x00000001, x6); + CHECK_EQUAL_64(0x0, x7); + CHECK_EQUAL_64(0x7ff80000, x8); + CHECK_EQUAL_64(0x3ffc000000000000UL, x9); + CHECK_EQUAL_64(0xffffff00, x10); + CHECK_EQUAL_64(0x0000000000000001UL, x11); + CHECK_EQUAL_64(0xffff8003, x12); + CHECK_EQUAL_64(0xffffffffffff0007UL, x13); + CHECK_EQUAL_64(0xfffffffffffe000fUL, x14); + CHECK_EQUAL_64(0xfffffffffffe000fUL, x15); TEARDOWN(); } @@ -385,31 +384,31 @@ TEST(mov) { RUN(); - ASSERT_EQUAL_64(0x0123456789abcdefL, x0); - ASSERT_EQUAL_64(0x00000000abcd0000L, x1); - ASSERT_EQUAL_64(0xffffabcdffffffffL, x2); - ASSERT_EQUAL_64(0x5432ffffffffffffL, x3); - ASSERT_EQUAL_64(x4, x5); - ASSERT_EQUAL_32(-1, w6); - ASSERT_EQUAL_64(0x0123456789abcdefL, x7); - ASSERT_EQUAL_32(0x89abcdefL, w8); - ASSERT_EQUAL_64(0x0123456789abcdefL, x9); - ASSERT_EQUAL_32(0x89abcdefL, w10); - ASSERT_EQUAL_64(0x00000fff, x11); - ASSERT_EQUAL_64(0x0000000000000fffUL, x12); - ASSERT_EQUAL_64(0x00001ffe, x13); - ASSERT_EQUAL_64(0x0000000000003ffcUL, x14); - ASSERT_EQUAL_64(0x000001ff, x15); - ASSERT_EQUAL_64(0x00000000000000ffUL, x18); - ASSERT_EQUAL_64(0x00000001, x19); - ASSERT_EQUAL_64(0x0, x20); - ASSERT_EQUAL_64(0x7ff80000, x21); - ASSERT_EQUAL_64(0x3ffc000000000000UL, x22); - ASSERT_EQUAL_64(0x000000fe, x23); - ASSERT_EQUAL_64(0xfffffffffffffffcUL, x24); - ASSERT_EQUAL_64(0x00007ff8, x25); - ASSERT_EQUAL_64(0x000000000000fff0UL, x26); - ASSERT_EQUAL_64(0x000000000001ffe0UL, x27); + CHECK_EQUAL_64(0x0123456789abcdefL, x0); + CHECK_EQUAL_64(0x00000000abcd0000L, x1); + CHECK_EQUAL_64(0xffffabcdffffffffL, x2); + CHECK_EQUAL_64(0x5432ffffffffffffL, x3); + CHECK_EQUAL_64(x4, x5); + CHECK_EQUAL_32(-1, w6); + CHECK_EQUAL_64(0x0123456789abcdefL, x7); + CHECK_EQUAL_32(0x89abcdefL, w8); + CHECK_EQUAL_64(0x0123456789abcdefL, x9); + CHECK_EQUAL_32(0x89abcdefL, w10); + CHECK_EQUAL_64(0x00000fff, x11); + CHECK_EQUAL_64(0x0000000000000fffUL, x12); + CHECK_EQUAL_64(0x00001ffe, x13); + CHECK_EQUAL_64(0x0000000000003ffcUL, x14); + CHECK_EQUAL_64(0x000001ff, x15); + CHECK_EQUAL_64(0x00000000000000ffUL, x18); + CHECK_EQUAL_64(0x00000001, x19); + CHECK_EQUAL_64(0x0, x20); + CHECK_EQUAL_64(0x7ff80000, x21); + CHECK_EQUAL_64(0x3ffc000000000000UL, x22); + CHECK_EQUAL_64(0x000000fe, x23); + CHECK_EQUAL_64(0xfffffffffffffffcUL, x24); + CHECK_EQUAL_64(0x00007ff8, x25); + CHECK_EQUAL_64(0x000000000000fff0UL, x26); + CHECK_EQUAL_64(0x000000000001ffe0UL, x27); TEARDOWN(); } @@ -427,17 +426,23 @@ TEST(mov_imm_w) { __ Mov(w4, 0x00001234L); __ Mov(w5, 0x12340000L); __ Mov(w6, 0x12345678L); + __ Mov(w7, (int32_t)0x80000000); + __ Mov(w8, (int32_t)0xffff0000); + __ Mov(w9, kWMinInt); END(); RUN(); - ASSERT_EQUAL_64(0xffffffffL, x0); - ASSERT_EQUAL_64(0xffff1234L, x1); - ASSERT_EQUAL_64(0x1234ffffL, x2); - ASSERT_EQUAL_64(0x00000000L, x3); - ASSERT_EQUAL_64(0x00001234L, x4); - ASSERT_EQUAL_64(0x12340000L, x5); - ASSERT_EQUAL_64(0x12345678L, x6); + CHECK_EQUAL_64(0xffffffffL, x0); + CHECK_EQUAL_64(0xffff1234L, x1); + CHECK_EQUAL_64(0x1234ffffL, x2); + CHECK_EQUAL_64(0x00000000L, x3); + CHECK_EQUAL_64(0x00001234L, x4); + CHECK_EQUAL_64(0x12340000L, x5); + CHECK_EQUAL_64(0x12345678L, x6); + CHECK_EQUAL_64(0x80000000L, x7); + CHECK_EQUAL_64(0xffff0000L, x8); + CHECK_EQUAL_32(kWMinInt, w9); TEARDOWN(); } @@ -479,32 +484,32 @@ TEST(mov_imm_x) { RUN(); - ASSERT_EQUAL_64(0xffffffffffff1234L, x1); - ASSERT_EQUAL_64(0xffffffff12345678L, x2); - 
ASSERT_EQUAL_64(0xffff1234ffff5678L, x3); - ASSERT_EQUAL_64(0x1234ffffffff5678L, x4); - ASSERT_EQUAL_64(0x1234ffff5678ffffL, x5); - ASSERT_EQUAL_64(0x12345678ffffffffL, x6); - ASSERT_EQUAL_64(0x1234ffffffffffffL, x7); - ASSERT_EQUAL_64(0x123456789abcffffL, x8); - ASSERT_EQUAL_64(0x12345678ffff9abcL, x9); - ASSERT_EQUAL_64(0x1234ffff56789abcL, x10); - ASSERT_EQUAL_64(0xffff123456789abcL, x11); - ASSERT_EQUAL_64(0x0000000000000000L, x12); - ASSERT_EQUAL_64(0x0000000000001234L, x13); - ASSERT_EQUAL_64(0x0000000012345678L, x14); - ASSERT_EQUAL_64(0x0000123400005678L, x15); - ASSERT_EQUAL_64(0x1234000000005678L, x18); - ASSERT_EQUAL_64(0x1234000056780000L, x19); - ASSERT_EQUAL_64(0x1234567800000000L, x20); - ASSERT_EQUAL_64(0x1234000000000000L, x21); - ASSERT_EQUAL_64(0x123456789abc0000L, x22); - ASSERT_EQUAL_64(0x1234567800009abcL, x23); - ASSERT_EQUAL_64(0x1234000056789abcL, x24); - ASSERT_EQUAL_64(0x0000123456789abcL, x25); - ASSERT_EQUAL_64(0x123456789abcdef0L, x26); - ASSERT_EQUAL_64(0xffff000000000001L, x27); - ASSERT_EQUAL_64(0x8000ffff00000000L, x28); + CHECK_EQUAL_64(0xffffffffffff1234L, x1); + CHECK_EQUAL_64(0xffffffff12345678L, x2); + CHECK_EQUAL_64(0xffff1234ffff5678L, x3); + CHECK_EQUAL_64(0x1234ffffffff5678L, x4); + CHECK_EQUAL_64(0x1234ffff5678ffffL, x5); + CHECK_EQUAL_64(0x12345678ffffffffL, x6); + CHECK_EQUAL_64(0x1234ffffffffffffL, x7); + CHECK_EQUAL_64(0x123456789abcffffL, x8); + CHECK_EQUAL_64(0x12345678ffff9abcL, x9); + CHECK_EQUAL_64(0x1234ffff56789abcL, x10); + CHECK_EQUAL_64(0xffff123456789abcL, x11); + CHECK_EQUAL_64(0x0000000000000000L, x12); + CHECK_EQUAL_64(0x0000000000001234L, x13); + CHECK_EQUAL_64(0x0000000012345678L, x14); + CHECK_EQUAL_64(0x0000123400005678L, x15); + CHECK_EQUAL_64(0x1234000000005678L, x18); + CHECK_EQUAL_64(0x1234000056780000L, x19); + CHECK_EQUAL_64(0x1234567800000000L, x20); + CHECK_EQUAL_64(0x1234000000000000L, x21); + CHECK_EQUAL_64(0x123456789abc0000L, x22); + CHECK_EQUAL_64(0x1234567800009abcL, x23); + CHECK_EQUAL_64(0x1234000056789abcL, x24); + CHECK_EQUAL_64(0x0000123456789abcL, x25); + CHECK_EQUAL_64(0x123456789abcdef0L, x26); + CHECK_EQUAL_64(0xffff000000000001L, x27); + CHECK_EQUAL_64(0x8000ffff00000000L, x28); TEARDOWN(); } @@ -532,16 +537,16 @@ TEST(orr) { RUN(); - ASSERT_EQUAL_64(0xf000f0ff, x2); - ASSERT_EQUAL_64(0xf000f0f0, x3); - ASSERT_EQUAL_64(0xf00000ff0000f0f0L, x4); - ASSERT_EQUAL_64(0x0f00f0ff, x5); - ASSERT_EQUAL_64(0xff00f0ff, x6); - ASSERT_EQUAL_64(0x0f00f0ff, x7); - ASSERT_EQUAL_64(0x0ffff0f0, x8); - ASSERT_EQUAL_64(0x0ff00000000ff0f0L, x9); - ASSERT_EQUAL_64(0xf0ff, x10); - ASSERT_EQUAL_64(0xf0000000f000f0f0L, x11); + CHECK_EQUAL_64(0xf000f0ff, x2); + CHECK_EQUAL_64(0xf000f0f0, x3); + CHECK_EQUAL_64(0xf00000ff0000f0f0L, x4); + CHECK_EQUAL_64(0x0f00f0ff, x5); + CHECK_EQUAL_64(0xff00f0ff, x6); + CHECK_EQUAL_64(0x0f00f0ff, x7); + CHECK_EQUAL_64(0x0ffff0f0, x8); + CHECK_EQUAL_64(0x0ff00000000ff0f0L, x9); + CHECK_EQUAL_64(0xf0ff, x10); + CHECK_EQUAL_64(0xf0000000f000f0f0L, x11); TEARDOWN(); } @@ -566,14 +571,14 @@ TEST(orr_extend) { RUN(); - ASSERT_EQUAL_64(0x00000081, x6); - ASSERT_EQUAL_64(0x00010101, x7); - ASSERT_EQUAL_64(0x00020201, x8); - ASSERT_EQUAL_64(0x0000000400040401UL, x9); - ASSERT_EQUAL_64(0x00000000ffffff81UL, x10); - ASSERT_EQUAL_64(0xffffffffffff0101UL, x11); - ASSERT_EQUAL_64(0xfffffffe00020201UL, x12); - ASSERT_EQUAL_64(0x0000000400040401UL, x13); + CHECK_EQUAL_64(0x00000081, x6); + CHECK_EQUAL_64(0x00010101, x7); + CHECK_EQUAL_64(0x00020201, x8); + CHECK_EQUAL_64(0x0000000400040401UL, x9); + 
CHECK_EQUAL_64(0x00000000ffffff81UL, x10); + CHECK_EQUAL_64(0xffffffffffff0101UL, x11); + CHECK_EQUAL_64(0xfffffffe00020201UL, x12); + CHECK_EQUAL_64(0x0000000400040401UL, x13); TEARDOWN(); } @@ -589,14 +594,19 @@ TEST(bitwise_wide_imm) { __ Orr(x10, x0, Operand(0x1234567890abcdefUL)); __ Orr(w11, w1, Operand(0x90abcdef)); + + __ Orr(w12, w0, kWMinInt); + __ Eor(w13, w0, kWMinInt); END(); RUN(); - ASSERT_EQUAL_64(0, x0); - ASSERT_EQUAL_64(0xf0f0f0f0f0f0f0f0UL, x1); - ASSERT_EQUAL_64(0x1234567890abcdefUL, x10); - ASSERT_EQUAL_64(0xf0fbfdffUL, x11); + CHECK_EQUAL_64(0, x0); + CHECK_EQUAL_64(0xf0f0f0f0f0f0f0f0UL, x1); + CHECK_EQUAL_64(0x1234567890abcdefUL, x10); + CHECK_EQUAL_64(0xf0fbfdffUL, x11); + CHECK_EQUAL_32(kWMinInt, w12); + CHECK_EQUAL_32(kWMinInt, w13); TEARDOWN(); } @@ -624,16 +634,16 @@ TEST(orn) { RUN(); - ASSERT_EQUAL_64(0xffffffff0ffffff0L, x2); - ASSERT_EQUAL_64(0xfffff0ff, x3); - ASSERT_EQUAL_64(0xfffffff0fffff0ffL, x4); - ASSERT_EQUAL_64(0xffffffff87fffff0L, x5); - ASSERT_EQUAL_64(0x07fffff0, x6); - ASSERT_EQUAL_64(0xffffffff87fffff0L, x7); - ASSERT_EQUAL_64(0xff00ffff, x8); - ASSERT_EQUAL_64(0xff00ffffffffffffL, x9); - ASSERT_EQUAL_64(0xfffff0f0, x10); - ASSERT_EQUAL_64(0xffff0000fffff0f0L, x11); + CHECK_EQUAL_64(0xffffffff0ffffff0L, x2); + CHECK_EQUAL_64(0xfffff0ff, x3); + CHECK_EQUAL_64(0xfffffff0fffff0ffL, x4); + CHECK_EQUAL_64(0xffffffff87fffff0L, x5); + CHECK_EQUAL_64(0x07fffff0, x6); + CHECK_EQUAL_64(0xffffffff87fffff0L, x7); + CHECK_EQUAL_64(0xff00ffff, x8); + CHECK_EQUAL_64(0xff00ffffffffffffL, x9); + CHECK_EQUAL_64(0xfffff0f0, x10); + CHECK_EQUAL_64(0xffff0000fffff0f0L, x11); TEARDOWN(); } @@ -658,14 +668,14 @@ TEST(orn_extend) { RUN(); - ASSERT_EQUAL_64(0xffffff7f, x6); - ASSERT_EQUAL_64(0xfffffffffffefefdUL, x7); - ASSERT_EQUAL_64(0xfffdfdfb, x8); - ASSERT_EQUAL_64(0xfffffffbfffbfbf7UL, x9); - ASSERT_EQUAL_64(0x0000007f, x10); - ASSERT_EQUAL_64(0x0000fefd, x11); - ASSERT_EQUAL_64(0x00000001fffdfdfbUL, x12); - ASSERT_EQUAL_64(0xfffffffbfffbfbf7UL, x13); + CHECK_EQUAL_64(0xffffff7f, x6); + CHECK_EQUAL_64(0xfffffffffffefefdUL, x7); + CHECK_EQUAL_64(0xfffdfdfb, x8); + CHECK_EQUAL_64(0xfffffffbfffbfbf7UL, x9); + CHECK_EQUAL_64(0x0000007f, x10); + CHECK_EQUAL_64(0x0000fefd, x11); + CHECK_EQUAL_64(0x00000001fffdfdfbUL, x12); + CHECK_EQUAL_64(0xfffffffbfffbfbf7UL, x13); TEARDOWN(); } @@ -693,16 +703,16 @@ TEST(and_) { RUN(); - ASSERT_EQUAL_64(0x000000f0, x2); - ASSERT_EQUAL_64(0x00000ff0, x3); - ASSERT_EQUAL_64(0x00000ff0, x4); - ASSERT_EQUAL_64(0x00000070, x5); - ASSERT_EQUAL_64(0x0000ff00, x6); - ASSERT_EQUAL_64(0x00000f00, x7); - ASSERT_EQUAL_64(0x00000ff0, x8); - ASSERT_EQUAL_64(0x00000000, x9); - ASSERT_EQUAL_64(0x0000ff00, x10); - ASSERT_EQUAL_64(0x000000f0, x11); + CHECK_EQUAL_64(0x000000f0, x2); + CHECK_EQUAL_64(0x00000ff0, x3); + CHECK_EQUAL_64(0x00000ff0, x4); + CHECK_EQUAL_64(0x00000070, x5); + CHECK_EQUAL_64(0x0000ff00, x6); + CHECK_EQUAL_64(0x00000f00, x7); + CHECK_EQUAL_64(0x00000ff0, x8); + CHECK_EQUAL_64(0x00000000, x9); + CHECK_EQUAL_64(0x0000ff00, x10); + CHECK_EQUAL_64(0x000000f0, x11); TEARDOWN(); } @@ -727,14 +737,14 @@ TEST(and_extend) { RUN(); - ASSERT_EQUAL_64(0x00000081, x6); - ASSERT_EQUAL_64(0x00010102, x7); - ASSERT_EQUAL_64(0x00020204, x8); - ASSERT_EQUAL_64(0x0000000400040408UL, x9); - ASSERT_EQUAL_64(0xffffff81, x10); - ASSERT_EQUAL_64(0xffffffffffff0102UL, x11); - ASSERT_EQUAL_64(0xfffffffe00020204UL, x12); - ASSERT_EQUAL_64(0x0000000400040408UL, x13); + CHECK_EQUAL_64(0x00000081, x6); + CHECK_EQUAL_64(0x00010102, x7); + 
CHECK_EQUAL_64(0x00020204, x8); + CHECK_EQUAL_64(0x0000000400040408UL, x9); + CHECK_EQUAL_64(0xffffff81, x10); + CHECK_EQUAL_64(0xffffffffffff0102UL, x11); + CHECK_EQUAL_64(0xfffffffe00020204UL, x12); + CHECK_EQUAL_64(0x0000000400040408UL, x13); TEARDOWN(); } @@ -751,8 +761,8 @@ TEST(ands) { RUN(); - ASSERT_EQUAL_NZCV(NFlag); - ASSERT_EQUAL_64(0xf00000ff, x0); + CHECK_EQUAL_NZCV(NFlag); + CHECK_EQUAL_64(0xf00000ff, x0); START(); __ Mov(x0, 0xfff0); @@ -762,8 +772,8 @@ TEST(ands) { RUN(); - ASSERT_EQUAL_NZCV(ZFlag); - ASSERT_EQUAL_64(0x00000000, x0); + CHECK_EQUAL_NZCV(ZFlag); + CHECK_EQUAL_64(0x00000000, x0); START(); __ Mov(x0, 0x8000000000000000L); @@ -773,8 +783,8 @@ TEST(ands) { RUN(); - ASSERT_EQUAL_NZCV(NFlag); - ASSERT_EQUAL_64(0x8000000000000000L, x0); + CHECK_EQUAL_NZCV(NFlag); + CHECK_EQUAL_64(0x8000000000000000L, x0); START(); __ Mov(x0, 0xfff0); @@ -783,8 +793,8 @@ TEST(ands) { RUN(); - ASSERT_EQUAL_NZCV(ZFlag); - ASSERT_EQUAL_64(0x00000000, x0); + CHECK_EQUAL_NZCV(ZFlag); + CHECK_EQUAL_64(0x00000000, x0); START(); __ Mov(x0, 0xff000000); @@ -793,8 +803,8 @@ TEST(ands) { RUN(); - ASSERT_EQUAL_NZCV(NFlag); - ASSERT_EQUAL_64(0x80000000, x0); + CHECK_EQUAL_NZCV(NFlag); + CHECK_EQUAL_64(0x80000000, x0); TEARDOWN(); } @@ -832,18 +842,18 @@ TEST(bic) { RUN(); - ASSERT_EQUAL_64(0x0000ff00, x2); - ASSERT_EQUAL_64(0x0000f000, x3); - ASSERT_EQUAL_64(0x0000f000, x4); - ASSERT_EQUAL_64(0x0000ff80, x5); - ASSERT_EQUAL_64(0x000000f0, x6); - ASSERT_EQUAL_64(0x0000f0f0, x7); - ASSERT_EQUAL_64(0x0000f000, x8); - ASSERT_EQUAL_64(0x0000ff00, x9); - ASSERT_EQUAL_64(0x0000ffe0, x10); - ASSERT_EQUAL_64(0x0000fef0, x11); + CHECK_EQUAL_64(0x0000ff00, x2); + CHECK_EQUAL_64(0x0000f000, x3); + CHECK_EQUAL_64(0x0000f000, x4); + CHECK_EQUAL_64(0x0000ff80, x5); + CHECK_EQUAL_64(0x000000f0, x6); + CHECK_EQUAL_64(0x0000f0f0, x7); + CHECK_EQUAL_64(0x0000f000, x8); + CHECK_EQUAL_64(0x0000ff00, x9); + CHECK_EQUAL_64(0x0000ffe0, x10); + CHECK_EQUAL_64(0x0000fef0, x11); - ASSERT_EQUAL_64(0x543210, x21); + CHECK_EQUAL_64(0x543210, x21); TEARDOWN(); } @@ -868,14 +878,14 @@ TEST(bic_extend) { RUN(); - ASSERT_EQUAL_64(0xffffff7e, x6); - ASSERT_EQUAL_64(0xfffffffffffefefdUL, x7); - ASSERT_EQUAL_64(0xfffdfdfb, x8); - ASSERT_EQUAL_64(0xfffffffbfffbfbf7UL, x9); - ASSERT_EQUAL_64(0x0000007e, x10); - ASSERT_EQUAL_64(0x0000fefd, x11); - ASSERT_EQUAL_64(0x00000001fffdfdfbUL, x12); - ASSERT_EQUAL_64(0xfffffffbfffbfbf7UL, x13); + CHECK_EQUAL_64(0xffffff7e, x6); + CHECK_EQUAL_64(0xfffffffffffefefdUL, x7); + CHECK_EQUAL_64(0xfffdfdfb, x8); + CHECK_EQUAL_64(0xfffffffbfffbfbf7UL, x9); + CHECK_EQUAL_64(0x0000007e, x10); + CHECK_EQUAL_64(0x0000fefd, x11); + CHECK_EQUAL_64(0x00000001fffdfdfbUL, x12); + CHECK_EQUAL_64(0xfffffffbfffbfbf7UL, x13); TEARDOWN(); } @@ -892,8 +902,8 @@ TEST(bics) { RUN(); - ASSERT_EQUAL_NZCV(ZFlag); - ASSERT_EQUAL_64(0x00000000, x0); + CHECK_EQUAL_NZCV(ZFlag); + CHECK_EQUAL_64(0x00000000, x0); START(); __ Mov(x0, 0xffffffff); @@ -902,8 +912,8 @@ TEST(bics) { RUN(); - ASSERT_EQUAL_NZCV(NFlag); - ASSERT_EQUAL_64(0x80000000, x0); + CHECK_EQUAL_NZCV(NFlag); + CHECK_EQUAL_64(0x80000000, x0); START(); __ Mov(x0, 0x8000000000000000L); @@ -913,8 +923,8 @@ TEST(bics) { RUN(); - ASSERT_EQUAL_NZCV(ZFlag); - ASSERT_EQUAL_64(0x00000000, x0); + CHECK_EQUAL_NZCV(ZFlag); + CHECK_EQUAL_64(0x00000000, x0); START(); __ Mov(x0, 0xffffffffffffffffL); @@ -923,8 +933,8 @@ TEST(bics) { RUN(); - ASSERT_EQUAL_NZCV(NFlag); - ASSERT_EQUAL_64(0x8000000000000000L, x0); + CHECK_EQUAL_NZCV(NFlag); + CHECK_EQUAL_64(0x8000000000000000L, x0); 
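  // Aside (illustrative, not part of the original patch): Bics is "bit clear,
  // set flags": rd = rn & ~operand, with N taken from the result's sign bit
  // and Z set iff the result is zero. The case just above starts from
  // 0xffffffffffffffff and clears everything below the top bit, e.g.
  //
  //   0xffffffffffffffff & ~0x7fffffffffffffff == 0x8000000000000000
  //
  // so bit 63 is set (NFlag) and the result is non-zero (Z clear), matching
  // CHECK_EQUAL_NZCV(NFlag) and CHECK_EQUAL_64(0x8000000000000000L, x0).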
START(); __ Mov(w0, 0xffff0000); @@ -933,8 +943,8 @@ TEST(bics) { RUN(); - ASSERT_EQUAL_NZCV(ZFlag); - ASSERT_EQUAL_64(0x00000000, x0); + CHECK_EQUAL_NZCV(ZFlag); + CHECK_EQUAL_64(0x00000000, x0); TEARDOWN(); } @@ -962,16 +972,16 @@ TEST(eor) { RUN(); - ASSERT_EQUAL_64(0xf000ff0f, x2); - ASSERT_EQUAL_64(0x0000f000, x3); - ASSERT_EQUAL_64(0x0000000f0000f000L, x4); - ASSERT_EQUAL_64(0x7800ff8f, x5); - ASSERT_EQUAL_64(0xffff00f0, x6); - ASSERT_EQUAL_64(0x0000f0f0, x7); - ASSERT_EQUAL_64(0x0000f00f, x8); - ASSERT_EQUAL_64(0x00000ff00000ffffL, x9); - ASSERT_EQUAL_64(0xff0000f0, x10); - ASSERT_EQUAL_64(0xff00ff00ff0000f0L, x11); + CHECK_EQUAL_64(0xf000ff0f, x2); + CHECK_EQUAL_64(0x0000f000, x3); + CHECK_EQUAL_64(0x0000000f0000f000L, x4); + CHECK_EQUAL_64(0x7800ff8f, x5); + CHECK_EQUAL_64(0xffff00f0, x6); + CHECK_EQUAL_64(0x0000f0f0, x7); + CHECK_EQUAL_64(0x0000f00f, x8); + CHECK_EQUAL_64(0x00000ff00000ffffL, x9); + CHECK_EQUAL_64(0xff0000f0, x10); + CHECK_EQUAL_64(0xff00ff00ff0000f0L, x11); TEARDOWN(); } @@ -996,14 +1006,14 @@ TEST(eor_extend) { RUN(); - ASSERT_EQUAL_64(0x11111190, x6); - ASSERT_EQUAL_64(0x1111111111101013UL, x7); - ASSERT_EQUAL_64(0x11131315, x8); - ASSERT_EQUAL_64(0x1111111511151519UL, x9); - ASSERT_EQUAL_64(0xeeeeee90, x10); - ASSERT_EQUAL_64(0xeeeeeeeeeeee1013UL, x11); - ASSERT_EQUAL_64(0xeeeeeeef11131315UL, x12); - ASSERT_EQUAL_64(0x1111111511151519UL, x13); + CHECK_EQUAL_64(0x11111190, x6); + CHECK_EQUAL_64(0x1111111111101013UL, x7); + CHECK_EQUAL_64(0x11131315, x8); + CHECK_EQUAL_64(0x1111111511151519UL, x9); + CHECK_EQUAL_64(0xeeeeee90, x10); + CHECK_EQUAL_64(0xeeeeeeeeeeee1013UL, x11); + CHECK_EQUAL_64(0xeeeeeeef11131315UL, x12); + CHECK_EQUAL_64(0x1111111511151519UL, x13); TEARDOWN(); } @@ -1031,16 +1041,16 @@ TEST(eon) { RUN(); - ASSERT_EQUAL_64(0xffffffff0fff00f0L, x2); - ASSERT_EQUAL_64(0xffff0fff, x3); - ASSERT_EQUAL_64(0xfffffff0ffff0fffL, x4); - ASSERT_EQUAL_64(0xffffffff87ff0070L, x5); - ASSERT_EQUAL_64(0x0000ff0f, x6); - ASSERT_EQUAL_64(0xffffffffffff0f0fL, x7); - ASSERT_EQUAL_64(0xffff0ff0, x8); - ASSERT_EQUAL_64(0xfffff00fffff0000L, x9); - ASSERT_EQUAL_64(0xfc3f03cf, x10); - ASSERT_EQUAL_64(0xffffefffffff100fL, x11); + CHECK_EQUAL_64(0xffffffff0fff00f0L, x2); + CHECK_EQUAL_64(0xffff0fff, x3); + CHECK_EQUAL_64(0xfffffff0ffff0fffL, x4); + CHECK_EQUAL_64(0xffffffff87ff0070L, x5); + CHECK_EQUAL_64(0x0000ff0f, x6); + CHECK_EQUAL_64(0xffffffffffff0f0fL, x7); + CHECK_EQUAL_64(0xffff0ff0, x8); + CHECK_EQUAL_64(0xfffff00fffff0000L, x9); + CHECK_EQUAL_64(0xfc3f03cf, x10); + CHECK_EQUAL_64(0xffffefffffff100fL, x11); TEARDOWN(); } @@ -1065,14 +1075,14 @@ TEST(eon_extend) { RUN(); - ASSERT_EQUAL_64(0xeeeeee6f, x6); - ASSERT_EQUAL_64(0xeeeeeeeeeeefefecUL, x7); - ASSERT_EQUAL_64(0xeeececea, x8); - ASSERT_EQUAL_64(0xeeeeeeeaeeeaeae6UL, x9); - ASSERT_EQUAL_64(0x1111116f, x10); - ASSERT_EQUAL_64(0x111111111111efecUL, x11); - ASSERT_EQUAL_64(0x11111110eeececeaUL, x12); - ASSERT_EQUAL_64(0xeeeeeeeaeeeaeae6UL, x13); + CHECK_EQUAL_64(0xeeeeee6f, x6); + CHECK_EQUAL_64(0xeeeeeeeeeeefefecUL, x7); + CHECK_EQUAL_64(0xeeececea, x8); + CHECK_EQUAL_64(0xeeeeeeeaeeeaeae6UL, x9); + CHECK_EQUAL_64(0x1111116f, x10); + CHECK_EQUAL_64(0x111111111111efecUL, x11); + CHECK_EQUAL_64(0x11111110eeececeaUL, x12); + CHECK_EQUAL_64(0xeeeeeeeaeeeaeae6UL, x13); TEARDOWN(); } @@ -1111,25 +1121,25 @@ TEST(mul) { RUN(); - ASSERT_EQUAL_64(0, x0); - ASSERT_EQUAL_64(0, x1); - ASSERT_EQUAL_64(0xffffffff, x2); - ASSERT_EQUAL_64(1, x3); - ASSERT_EQUAL_64(0, x4); - ASSERT_EQUAL_64(0xffffffff, x5); - 
ASSERT_EQUAL_64(0xffffffff00000001UL, x6); - ASSERT_EQUAL_64(1, x7); - ASSERT_EQUAL_64(0xffffffffffffffffUL, x8); - ASSERT_EQUAL_64(1, x9); - ASSERT_EQUAL_64(1, x10); - ASSERT_EQUAL_64(0, x11); - ASSERT_EQUAL_64(0, x12); - ASSERT_EQUAL_64(1, x13); - ASSERT_EQUAL_64(0xffffffff, x14); - ASSERT_EQUAL_64(0, x20); - ASSERT_EQUAL_64(0xffffffff00000001UL, x21); - ASSERT_EQUAL_64(0xffffffff, x22); - ASSERT_EQUAL_64(0xffffffffffffffffUL, x23); + CHECK_EQUAL_64(0, x0); + CHECK_EQUAL_64(0, x1); + CHECK_EQUAL_64(0xffffffff, x2); + CHECK_EQUAL_64(1, x3); + CHECK_EQUAL_64(0, x4); + CHECK_EQUAL_64(0xffffffff, x5); + CHECK_EQUAL_64(0xffffffff00000001UL, x6); + CHECK_EQUAL_64(1, x7); + CHECK_EQUAL_64(0xffffffffffffffffUL, x8); + CHECK_EQUAL_64(1, x9); + CHECK_EQUAL_64(1, x10); + CHECK_EQUAL_64(0, x11); + CHECK_EQUAL_64(0, x12); + CHECK_EQUAL_64(1, x13); + CHECK_EQUAL_64(0xffffffff, x14); + CHECK_EQUAL_64(0, x20); + CHECK_EQUAL_64(0xffffffff00000001UL, x21); + CHECK_EQUAL_64(0xffffffff, x22); + CHECK_EQUAL_64(0xffffffffffffffffUL, x23); TEARDOWN(); } @@ -1143,7 +1153,7 @@ static void SmullHelper(int64_t expected, int64_t a, int64_t b) { __ Smull(x2, w0, w1); END(); RUN(); - ASSERT_EQUAL_64(expected, x2); + CHECK_EQUAL_64(expected, x2); TEARDOWN(); } @@ -1199,31 +1209,31 @@ TEST(madd) { RUN(); - ASSERT_EQUAL_64(0, x0); - ASSERT_EQUAL_64(1, x1); - ASSERT_EQUAL_64(0xffffffff, x2); - ASSERT_EQUAL_64(0xffffffff, x3); - ASSERT_EQUAL_64(1, x4); - ASSERT_EQUAL_64(0, x5); - ASSERT_EQUAL_64(0, x6); - ASSERT_EQUAL_64(0xffffffff, x7); - ASSERT_EQUAL_64(0xfffffffe, x8); - ASSERT_EQUAL_64(2, x9); - ASSERT_EQUAL_64(0, x10); - ASSERT_EQUAL_64(0, x11); + CHECK_EQUAL_64(0, x0); + CHECK_EQUAL_64(1, x1); + CHECK_EQUAL_64(0xffffffff, x2); + CHECK_EQUAL_64(0xffffffff, x3); + CHECK_EQUAL_64(1, x4); + CHECK_EQUAL_64(0, x5); + CHECK_EQUAL_64(0, x6); + CHECK_EQUAL_64(0xffffffff, x7); + CHECK_EQUAL_64(0xfffffffe, x8); + CHECK_EQUAL_64(2, x9); + CHECK_EQUAL_64(0, x10); + CHECK_EQUAL_64(0, x11); - ASSERT_EQUAL_64(0, x12); - ASSERT_EQUAL_64(1, x13); - ASSERT_EQUAL_64(0xffffffff, x14); - ASSERT_EQUAL_64(0xffffffffffffffff, x15); - ASSERT_EQUAL_64(1, x20); - ASSERT_EQUAL_64(0x100000000UL, x21); - ASSERT_EQUAL_64(0, x22); - ASSERT_EQUAL_64(0xffffffff, x23); - ASSERT_EQUAL_64(0x1fffffffe, x24); - ASSERT_EQUAL_64(0xfffffffe00000002UL, x25); - ASSERT_EQUAL_64(0, x26); - ASSERT_EQUAL_64(0, x27); + CHECK_EQUAL_64(0, x12); + CHECK_EQUAL_64(1, x13); + CHECK_EQUAL_64(0xffffffff, x14); + CHECK_EQUAL_64(0xffffffffffffffff, x15); + CHECK_EQUAL_64(1, x20); + CHECK_EQUAL_64(0x100000000UL, x21); + CHECK_EQUAL_64(0, x22); + CHECK_EQUAL_64(0xffffffff, x23); + CHECK_EQUAL_64(0x1fffffffe, x24); + CHECK_EQUAL_64(0xfffffffe00000002UL, x25); + CHECK_EQUAL_64(0, x26); + CHECK_EQUAL_64(0, x27); TEARDOWN(); } @@ -1269,31 +1279,31 @@ TEST(msub) { RUN(); - ASSERT_EQUAL_64(0, x0); - ASSERT_EQUAL_64(1, x1); - ASSERT_EQUAL_64(0xffffffff, x2); - ASSERT_EQUAL_64(0xffffffff, x3); - ASSERT_EQUAL_64(1, x4); - ASSERT_EQUAL_64(0xfffffffe, x5); - ASSERT_EQUAL_64(0xfffffffe, x6); - ASSERT_EQUAL_64(1, x7); - ASSERT_EQUAL_64(0, x8); - ASSERT_EQUAL_64(0, x9); - ASSERT_EQUAL_64(0xfffffffe, x10); - ASSERT_EQUAL_64(0xfffffffe, x11); + CHECK_EQUAL_64(0, x0); + CHECK_EQUAL_64(1, x1); + CHECK_EQUAL_64(0xffffffff, x2); + CHECK_EQUAL_64(0xffffffff, x3); + CHECK_EQUAL_64(1, x4); + CHECK_EQUAL_64(0xfffffffe, x5); + CHECK_EQUAL_64(0xfffffffe, x6); + CHECK_EQUAL_64(1, x7); + CHECK_EQUAL_64(0, x8); + CHECK_EQUAL_64(0, x9); + CHECK_EQUAL_64(0xfffffffe, x10); + CHECK_EQUAL_64(0xfffffffe, x11); 
- ASSERT_EQUAL_64(0, x12); - ASSERT_EQUAL_64(1, x13); - ASSERT_EQUAL_64(0xffffffff, x14); - ASSERT_EQUAL_64(0xffffffffffffffffUL, x15); - ASSERT_EQUAL_64(1, x20); - ASSERT_EQUAL_64(0xfffffffeUL, x21); - ASSERT_EQUAL_64(0xfffffffffffffffeUL, x22); - ASSERT_EQUAL_64(0xffffffff00000001UL, x23); - ASSERT_EQUAL_64(0, x24); - ASSERT_EQUAL_64(0x200000000UL, x25); - ASSERT_EQUAL_64(0x1fffffffeUL, x26); - ASSERT_EQUAL_64(0xfffffffffffffffeUL, x27); + CHECK_EQUAL_64(0, x12); + CHECK_EQUAL_64(1, x13); + CHECK_EQUAL_64(0xffffffff, x14); + CHECK_EQUAL_64(0xffffffffffffffffUL, x15); + CHECK_EQUAL_64(1, x20); + CHECK_EQUAL_64(0xfffffffeUL, x21); + CHECK_EQUAL_64(0xfffffffffffffffeUL, x22); + CHECK_EQUAL_64(0xffffffff00000001UL, x23); + CHECK_EQUAL_64(0, x24); + CHECK_EQUAL_64(0x200000000UL, x25); + CHECK_EQUAL_64(0x1fffffffeUL, x26); + CHECK_EQUAL_64(0xfffffffffffffffeUL, x27); TEARDOWN(); } @@ -1331,18 +1341,18 @@ TEST(smulh) { RUN(); - ASSERT_EQUAL_64(0, x0); - ASSERT_EQUAL_64(0, x1); - ASSERT_EQUAL_64(0, x2); - ASSERT_EQUAL_64(0x01234567, x3); - ASSERT_EQUAL_64(0x02468acf, x4); - ASSERT_EQUAL_64(0xffffffffffffffffUL, x5); - ASSERT_EQUAL_64(0x4000000000000000UL, x6); - ASSERT_EQUAL_64(0, x7); - ASSERT_EQUAL_64(0, x8); - ASSERT_EQUAL_64(0x1c71c71c71c71c71UL, x9); - ASSERT_EQUAL_64(0xe38e38e38e38e38eUL, x10); - ASSERT_EQUAL_64(0x1c71c71c71c71c72UL, x11); + CHECK_EQUAL_64(0, x0); + CHECK_EQUAL_64(0, x1); + CHECK_EQUAL_64(0, x2); + CHECK_EQUAL_64(0x01234567, x3); + CHECK_EQUAL_64(0x02468acf, x4); + CHECK_EQUAL_64(0xffffffffffffffffUL, x5); + CHECK_EQUAL_64(0x4000000000000000UL, x6); + CHECK_EQUAL_64(0, x7); + CHECK_EQUAL_64(0, x8); + CHECK_EQUAL_64(0x1c71c71c71c71c71UL, x9); + CHECK_EQUAL_64(0xe38e38e38e38e38eUL, x10); + CHECK_EQUAL_64(0x1c71c71c71c71c72UL, x11); TEARDOWN(); } @@ -1371,14 +1381,14 @@ TEST(smaddl_umaddl) { RUN(); - ASSERT_EQUAL_64(3, x9); - ASSERT_EQUAL_64(5, x10); - ASSERT_EQUAL_64(5, x11); - ASSERT_EQUAL_64(0x200000001UL, x12); - ASSERT_EQUAL_64(0x100000003UL, x13); - ASSERT_EQUAL_64(0xfffffffe00000005UL, x14); - ASSERT_EQUAL_64(0xfffffffe00000005UL, x15); - ASSERT_EQUAL_64(0x1, x22); + CHECK_EQUAL_64(3, x9); + CHECK_EQUAL_64(5, x10); + CHECK_EQUAL_64(5, x11); + CHECK_EQUAL_64(0x200000001UL, x12); + CHECK_EQUAL_64(0x100000003UL, x13); + CHECK_EQUAL_64(0xfffffffe00000005UL, x14); + CHECK_EQUAL_64(0xfffffffe00000005UL, x15); + CHECK_EQUAL_64(0x1, x22); TEARDOWN(); } @@ -1407,14 +1417,14 @@ TEST(smsubl_umsubl) { RUN(); - ASSERT_EQUAL_64(5, x9); - ASSERT_EQUAL_64(3, x10); - ASSERT_EQUAL_64(3, x11); - ASSERT_EQUAL_64(0x1ffffffffUL, x12); - ASSERT_EQUAL_64(0xffffffff00000005UL, x13); - ASSERT_EQUAL_64(0x200000003UL, x14); - ASSERT_EQUAL_64(0x200000003UL, x15); - ASSERT_EQUAL_64(0x3ffffffffUL, x22); + CHECK_EQUAL_64(5, x9); + CHECK_EQUAL_64(3, x10); + CHECK_EQUAL_64(3, x11); + CHECK_EQUAL_64(0x1ffffffffUL, x12); + CHECK_EQUAL_64(0xffffffff00000005UL, x13); + CHECK_EQUAL_64(0x200000003UL, x14); + CHECK_EQUAL_64(0x200000003UL, x15); + CHECK_EQUAL_64(0x3ffffffffUL, x22); TEARDOWN(); } @@ -1470,34 +1480,34 @@ TEST(div) { RUN(); - ASSERT_EQUAL_64(1, x0); - ASSERT_EQUAL_64(0xffffffff, x1); - ASSERT_EQUAL_64(1, x2); - ASSERT_EQUAL_64(0xffffffff, x3); - ASSERT_EQUAL_64(1, x4); - ASSERT_EQUAL_64(1, x5); - ASSERT_EQUAL_64(0, x6); - ASSERT_EQUAL_64(1, x7); - ASSERT_EQUAL_64(0, x8); - ASSERT_EQUAL_64(0xffffffff00000001UL, x9); - ASSERT_EQUAL_64(0x40000000, x10); - ASSERT_EQUAL_64(0xC0000000, x11); - ASSERT_EQUAL_64(0x40000000, x12); - ASSERT_EQUAL_64(0x40000000, x13); - ASSERT_EQUAL_64(0x4000000000000000UL, 
x14); - ASSERT_EQUAL_64(0xC000000000000000UL, x15); - ASSERT_EQUAL_64(0, x22); - ASSERT_EQUAL_64(0x80000000, x23); - ASSERT_EQUAL_64(0, x24); - ASSERT_EQUAL_64(0x8000000000000000UL, x25); - ASSERT_EQUAL_64(0, x26); - ASSERT_EQUAL_64(0, x27); - ASSERT_EQUAL_64(0x7fffffffffffffffUL, x28); - ASSERT_EQUAL_64(0, x29); - ASSERT_EQUAL_64(0, x18); - ASSERT_EQUAL_64(0, x19); - ASSERT_EQUAL_64(0, x20); - ASSERT_EQUAL_64(0, x21); + CHECK_EQUAL_64(1, x0); + CHECK_EQUAL_64(0xffffffff, x1); + CHECK_EQUAL_64(1, x2); + CHECK_EQUAL_64(0xffffffff, x3); + CHECK_EQUAL_64(1, x4); + CHECK_EQUAL_64(1, x5); + CHECK_EQUAL_64(0, x6); + CHECK_EQUAL_64(1, x7); + CHECK_EQUAL_64(0, x8); + CHECK_EQUAL_64(0xffffffff00000001UL, x9); + CHECK_EQUAL_64(0x40000000, x10); + CHECK_EQUAL_64(0xC0000000, x11); + CHECK_EQUAL_64(0x40000000, x12); + CHECK_EQUAL_64(0x40000000, x13); + CHECK_EQUAL_64(0x4000000000000000UL, x14); + CHECK_EQUAL_64(0xC000000000000000UL, x15); + CHECK_EQUAL_64(0, x22); + CHECK_EQUAL_64(0x80000000, x23); + CHECK_EQUAL_64(0, x24); + CHECK_EQUAL_64(0x8000000000000000UL, x25); + CHECK_EQUAL_64(0, x26); + CHECK_EQUAL_64(0, x27); + CHECK_EQUAL_64(0x7fffffffffffffffUL, x28); + CHECK_EQUAL_64(0, x29); + CHECK_EQUAL_64(0, x18); + CHECK_EQUAL_64(0, x19); + CHECK_EQUAL_64(0, x20); + CHECK_EQUAL_64(0, x21); TEARDOWN(); } @@ -1520,13 +1530,13 @@ TEST(rbit_rev) { RUN(); - ASSERT_EQUAL_64(0x084c2a6e, x0); - ASSERT_EQUAL_64(0x084c2a6e195d3b7fUL, x1); - ASSERT_EQUAL_64(0x54761032, x2); - ASSERT_EQUAL_64(0xdcfe98ba54761032UL, x3); - ASSERT_EQUAL_64(0x10325476, x4); - ASSERT_EQUAL_64(0x98badcfe10325476UL, x5); - ASSERT_EQUAL_64(0x1032547698badcfeUL, x6); + CHECK_EQUAL_64(0x084c2a6e, x0); + CHECK_EQUAL_64(0x084c2a6e195d3b7fUL, x1); + CHECK_EQUAL_64(0x54761032, x2); + CHECK_EQUAL_64(0xdcfe98ba54761032UL, x3); + CHECK_EQUAL_64(0x10325476, x4); + CHECK_EQUAL_64(0x98badcfe10325476UL, x5); + CHECK_EQUAL_64(0x1032547698badcfeUL, x6); TEARDOWN(); } @@ -1556,18 +1566,18 @@ TEST(clz_cls) { RUN(); - ASSERT_EQUAL_64(8, x0); - ASSERT_EQUAL_64(12, x1); - ASSERT_EQUAL_64(0, x2); - ASSERT_EQUAL_64(0, x3); - ASSERT_EQUAL_64(32, x4); - ASSERT_EQUAL_64(64, x5); - ASSERT_EQUAL_64(7, x6); - ASSERT_EQUAL_64(11, x7); - ASSERT_EQUAL_64(12, x8); - ASSERT_EQUAL_64(8, x9); - ASSERT_EQUAL_64(31, x10); - ASSERT_EQUAL_64(63, x11); + CHECK_EQUAL_64(8, x0); + CHECK_EQUAL_64(12, x1); + CHECK_EQUAL_64(0, x2); + CHECK_EQUAL_64(0, x3); + CHECK_EQUAL_64(32, x4); + CHECK_EQUAL_64(64, x5); + CHECK_EQUAL_64(7, x6); + CHECK_EQUAL_64(11, x7); + CHECK_EQUAL_64(12, x8); + CHECK_EQUAL_64(8, x9); + CHECK_EQUAL_64(31, x10); + CHECK_EQUAL_64(63, x11); TEARDOWN(); } @@ -1605,8 +1615,8 @@ TEST(label) { RUN(); - ASSERT_EQUAL_64(0x1, x0); - ASSERT_EQUAL_64(0x1, x1); + CHECK_EQUAL_64(0x1, x0); + CHECK_EQUAL_64(0x1, x1); TEARDOWN(); } @@ -1639,7 +1649,7 @@ TEST(branch_at_start) { RUN(); - ASSERT_EQUAL_64(0x1, x0); + CHECK_EQUAL_64(0x1, x0); TEARDOWN(); } @@ -1683,8 +1693,8 @@ TEST(adr) { RUN(); - ASSERT_EQUAL_64(0x0, x0); - ASSERT_EQUAL_64(0x0, x1); + CHECK_EQUAL_64(0x0, x0); + CHECK_EQUAL_64(0x0, x1); TEARDOWN(); } @@ -1749,7 +1759,7 @@ TEST(adr_far) { RUN(); - ASSERT_EQUAL_64(0xf, x0); + CHECK_EQUAL_64(0xf, x0); TEARDOWN(); } @@ -1839,7 +1849,7 @@ TEST(branch_cond) { RUN(); - ASSERT_EQUAL_64(0x1, x0); + CHECK_EQUAL_64(0x1, x0); TEARDOWN(); } @@ -1886,9 +1896,9 @@ TEST(branch_to_reg) { RUN(); - ASSERT_EQUAL_64(core.xreg(3) + kInstructionSize, x0); - ASSERT_EQUAL_64(42, x1); - ASSERT_EQUAL_64(84, x2); + CHECK_EQUAL_64(core.xreg(3) + kInstructionSize, x0); + CHECK_EQUAL_64(42, 
x1); + CHECK_EQUAL_64(84, x2); TEARDOWN(); } @@ -1956,12 +1966,12 @@ TEST(compare_branch) { RUN(); - ASSERT_EQUAL_64(1, x0); - ASSERT_EQUAL_64(0, x1); - ASSERT_EQUAL_64(1, x2); - ASSERT_EQUAL_64(0, x3); - ASSERT_EQUAL_64(1, x4); - ASSERT_EQUAL_64(0, x5); + CHECK_EQUAL_64(1, x0); + CHECK_EQUAL_64(0, x1); + CHECK_EQUAL_64(1, x2); + CHECK_EQUAL_64(0, x3); + CHECK_EQUAL_64(1, x4); + CHECK_EQUAL_64(0, x5); TEARDOWN(); } @@ -2009,10 +2019,10 @@ TEST(test_branch) { RUN(); - ASSERT_EQUAL_64(1, x0); - ASSERT_EQUAL_64(0, x1); - ASSERT_EQUAL_64(1, x2); - ASSERT_EQUAL_64(0, x3); + CHECK_EQUAL_64(1, x0); + CHECK_EQUAL_64(0, x1); + CHECK_EQUAL_64(1, x2); + CHECK_EQUAL_64(0, x3); TEARDOWN(); } @@ -2085,8 +2095,8 @@ TEST(far_branch_backward) { RUN(); - ASSERT_EQUAL_64(0x7, x0); - ASSERT_EQUAL_64(0x1, x1); + CHECK_EQUAL_64(0x7, x0); + CHECK_EQUAL_64(0x1, x1); TEARDOWN(); } @@ -2155,8 +2165,8 @@ TEST(far_branch_simple_veneer) { RUN(); - ASSERT_EQUAL_64(0x7, x0); - ASSERT_EQUAL_64(0x1, x1); + CHECK_EQUAL_64(0x7, x0); + CHECK_EQUAL_64(0x1, x1); TEARDOWN(); } @@ -2250,8 +2260,8 @@ TEST(far_branch_veneer_link_chain) { RUN(); - ASSERT_EQUAL_64(0x7, x0); - ASSERT_EQUAL_64(0x1, x1); + CHECK_EQUAL_64(0x7, x0); + CHECK_EQUAL_64(0x1, x1); TEARDOWN(); } @@ -2340,8 +2350,8 @@ TEST(far_branch_veneer_broken_link_chain) { RUN(); - ASSERT_EQUAL_64(0x3, x0); - ASSERT_EQUAL_64(0x1, x1); + CHECK_EQUAL_64(0x3, x0); + CHECK_EQUAL_64(0x1, x1); TEARDOWN(); } @@ -2398,7 +2408,7 @@ TEST(branch_type) { RUN(); - ASSERT_EQUAL_64(0x0, x0); + CHECK_EQUAL_64(0x0, x0); TEARDOWN(); } @@ -2430,18 +2440,18 @@ TEST(ldr_str_offset) { RUN(); - ASSERT_EQUAL_64(0x76543210, x0); - ASSERT_EQUAL_64(0x76543210, dst[0]); - ASSERT_EQUAL_64(0xfedcba98, x1); - ASSERT_EQUAL_64(0xfedcba9800000000UL, dst[1]); - ASSERT_EQUAL_64(0x0123456789abcdefUL, x2); - ASSERT_EQUAL_64(0x0123456789abcdefUL, dst[2]); - ASSERT_EQUAL_64(0x32, x3); - ASSERT_EQUAL_64(0x3200, dst[3]); - ASSERT_EQUAL_64(0x7654, x4); - ASSERT_EQUAL_64(0x765400, dst[4]); - ASSERT_EQUAL_64(src_base, x17); - ASSERT_EQUAL_64(dst_base, x18); + CHECK_EQUAL_64(0x76543210, x0); + CHECK_EQUAL_64(0x76543210, dst[0]); + CHECK_EQUAL_64(0xfedcba98, x1); + CHECK_EQUAL_64(0xfedcba9800000000UL, dst[1]); + CHECK_EQUAL_64(0x0123456789abcdefUL, x2); + CHECK_EQUAL_64(0x0123456789abcdefUL, dst[2]); + CHECK_EQUAL_64(0x32, x3); + CHECK_EQUAL_64(0x3200, dst[3]); + CHECK_EQUAL_64(0x7654, x4); + CHECK_EQUAL_64(0x765400, dst[4]); + CHECK_EQUAL_64(src_base, x17); + CHECK_EQUAL_64(dst_base, x18); TEARDOWN(); } @@ -2479,18 +2489,18 @@ TEST(ldr_str_wide) { RUN(); - ASSERT_EQUAL_32(8191, w0); - ASSERT_EQUAL_32(8191, dst[8191]); - ASSERT_EQUAL_64(src_base, x22); - ASSERT_EQUAL_64(dst_base, x23); - ASSERT_EQUAL_32(0, w1); - ASSERT_EQUAL_32(0, dst[0]); - ASSERT_EQUAL_64(src_base + 4096 * sizeof(src[0]), x24); - ASSERT_EQUAL_64(dst_base + 4096 * sizeof(dst[0]), x25); - ASSERT_EQUAL_32(6144, w2); - ASSERT_EQUAL_32(6144, dst[6144]); - ASSERT_EQUAL_64(src_base + 6144 * sizeof(src[0]), x26); - ASSERT_EQUAL_64(dst_base + 6144 * sizeof(dst[0]), x27); + CHECK_EQUAL_32(8191, w0); + CHECK_EQUAL_32(8191, dst[8191]); + CHECK_EQUAL_64(src_base, x22); + CHECK_EQUAL_64(dst_base, x23); + CHECK_EQUAL_32(0, w1); + CHECK_EQUAL_32(0, dst[0]); + CHECK_EQUAL_64(src_base + 4096 * sizeof(src[0]), x24); + CHECK_EQUAL_64(dst_base + 4096 * sizeof(dst[0]), x25); + CHECK_EQUAL_32(6144, w2); + CHECK_EQUAL_32(6144, dst[6144]); + CHECK_EQUAL_64(src_base + 6144 * sizeof(src[0]), x26); + CHECK_EQUAL_64(dst_base + 6144 * sizeof(dst[0]), x27); TEARDOWN(); } @@ 
-2530,26 +2540,26 @@ TEST(ldr_str_preindex) { RUN(); - ASSERT_EQUAL_64(0xfedcba98, x0); - ASSERT_EQUAL_64(0xfedcba9800000000UL, dst[1]); - ASSERT_EQUAL_64(0x0123456789abcdefUL, x1); - ASSERT_EQUAL_64(0x0123456789abcdefUL, dst[2]); - ASSERT_EQUAL_64(0x01234567, x2); - ASSERT_EQUAL_64(0x0123456700000000UL, dst[4]); - ASSERT_EQUAL_64(0x32, x3); - ASSERT_EQUAL_64(0x3200, dst[3]); - ASSERT_EQUAL_64(0x9876, x4); - ASSERT_EQUAL_64(0x987600, dst[5]); - ASSERT_EQUAL_64(src_base + 4, x17); - ASSERT_EQUAL_64(dst_base + 12, x18); - ASSERT_EQUAL_64(src_base + 8, x19); - ASSERT_EQUAL_64(dst_base + 16, x20); - ASSERT_EQUAL_64(src_base + 12, x21); - ASSERT_EQUAL_64(dst_base + 36, x22); - ASSERT_EQUAL_64(src_base + 1, x23); - ASSERT_EQUAL_64(dst_base + 25, x24); - ASSERT_EQUAL_64(src_base + 3, x25); - ASSERT_EQUAL_64(dst_base + 41, x26); + CHECK_EQUAL_64(0xfedcba98, x0); + CHECK_EQUAL_64(0xfedcba9800000000UL, dst[1]); + CHECK_EQUAL_64(0x0123456789abcdefUL, x1); + CHECK_EQUAL_64(0x0123456789abcdefUL, dst[2]); + CHECK_EQUAL_64(0x01234567, x2); + CHECK_EQUAL_64(0x0123456700000000UL, dst[4]); + CHECK_EQUAL_64(0x32, x3); + CHECK_EQUAL_64(0x3200, dst[3]); + CHECK_EQUAL_64(0x9876, x4); + CHECK_EQUAL_64(0x987600, dst[5]); + CHECK_EQUAL_64(src_base + 4, x17); + CHECK_EQUAL_64(dst_base + 12, x18); + CHECK_EQUAL_64(src_base + 8, x19); + CHECK_EQUAL_64(dst_base + 16, x20); + CHECK_EQUAL_64(src_base + 12, x21); + CHECK_EQUAL_64(dst_base + 36, x22); + CHECK_EQUAL_64(src_base + 1, x23); + CHECK_EQUAL_64(dst_base + 25, x24); + CHECK_EQUAL_64(src_base + 3, x25); + CHECK_EQUAL_64(dst_base + 41, x26); TEARDOWN(); } @@ -2589,26 +2599,26 @@ TEST(ldr_str_postindex) { RUN(); - ASSERT_EQUAL_64(0xfedcba98, x0); - ASSERT_EQUAL_64(0xfedcba9800000000UL, dst[1]); - ASSERT_EQUAL_64(0x0123456789abcdefUL, x1); - ASSERT_EQUAL_64(0x0123456789abcdefUL, dst[2]); - ASSERT_EQUAL_64(0x0123456789abcdefUL, x2); - ASSERT_EQUAL_64(0x0123456789abcdefUL, dst[4]); - ASSERT_EQUAL_64(0x32, x3); - ASSERT_EQUAL_64(0x3200, dst[3]); - ASSERT_EQUAL_64(0x9876, x4); - ASSERT_EQUAL_64(0x987600, dst[5]); - ASSERT_EQUAL_64(src_base + 8, x17); - ASSERT_EQUAL_64(dst_base + 24, x18); - ASSERT_EQUAL_64(src_base + 16, x19); - ASSERT_EQUAL_64(dst_base + 32, x20); - ASSERT_EQUAL_64(src_base, x21); - ASSERT_EQUAL_64(dst_base, x22); - ASSERT_EQUAL_64(src_base + 2, x23); - ASSERT_EQUAL_64(dst_base + 30, x24); - ASSERT_EQUAL_64(src_base, x25); - ASSERT_EQUAL_64(dst_base, x26); + CHECK_EQUAL_64(0xfedcba98, x0); + CHECK_EQUAL_64(0xfedcba9800000000UL, dst[1]); + CHECK_EQUAL_64(0x0123456789abcdefUL, x1); + CHECK_EQUAL_64(0x0123456789abcdefUL, dst[2]); + CHECK_EQUAL_64(0x0123456789abcdefUL, x2); + CHECK_EQUAL_64(0x0123456789abcdefUL, dst[4]); + CHECK_EQUAL_64(0x32, x3); + CHECK_EQUAL_64(0x3200, dst[3]); + CHECK_EQUAL_64(0x9876, x4); + CHECK_EQUAL_64(0x987600, dst[5]); + CHECK_EQUAL_64(src_base + 8, x17); + CHECK_EQUAL_64(dst_base + 24, x18); + CHECK_EQUAL_64(src_base + 16, x19); + CHECK_EQUAL_64(dst_base + 32, x20); + CHECK_EQUAL_64(src_base, x21); + CHECK_EQUAL_64(dst_base, x22); + CHECK_EQUAL_64(src_base + 2, x23); + CHECK_EQUAL_64(dst_base + 30, x24); + CHECK_EQUAL_64(src_base, x25); + CHECK_EQUAL_64(dst_base, x26); TEARDOWN(); } @@ -2637,16 +2647,16 @@ TEST(load_signed) { RUN(); - ASSERT_EQUAL_64(0xffffff80, x0); - ASSERT_EQUAL_64(0x0000007f, x1); - ASSERT_EQUAL_64(0xffff8080, x2); - ASSERT_EQUAL_64(0x00007f7f, x3); - ASSERT_EQUAL_64(0xffffffffffffff80UL, x4); - ASSERT_EQUAL_64(0x000000000000007fUL, x5); - ASSERT_EQUAL_64(0xffffffffffff8080UL, x6); - 
ASSERT_EQUAL_64(0x0000000000007f7fUL, x7); - ASSERT_EQUAL_64(0xffffffff80008080UL, x8); - ASSERT_EQUAL_64(0x000000007fff7f7fUL, x9); + CHECK_EQUAL_64(0xffffff80, x0); + CHECK_EQUAL_64(0x0000007f, x1); + CHECK_EQUAL_64(0xffff8080, x2); + CHECK_EQUAL_64(0x00007f7f, x3); + CHECK_EQUAL_64(0xffffffffffffff80UL, x4); + CHECK_EQUAL_64(0x000000000000007fUL, x5); + CHECK_EQUAL_64(0xffffffffffff8080UL, x6); + CHECK_EQUAL_64(0x0000000000007f7fUL, x7); + CHECK_EQUAL_64(0xffffffff80008080UL, x8); + CHECK_EQUAL_64(0x000000007fff7f7fUL, x9); TEARDOWN(); } @@ -2686,15 +2696,15 @@ TEST(load_store_regoffset) { RUN(); - ASSERT_EQUAL_64(1, x0); - ASSERT_EQUAL_64(0x0000000300000002UL, x1); - ASSERT_EQUAL_64(3, x2); - ASSERT_EQUAL_64(3, x3); - ASSERT_EQUAL_64(2, x4); - ASSERT_EQUAL_32(1, dst[0]); - ASSERT_EQUAL_32(2, dst[1]); - ASSERT_EQUAL_32(3, dst[2]); - ASSERT_EQUAL_32(3, dst[3]); + CHECK_EQUAL_64(1, x0); + CHECK_EQUAL_64(0x0000000300000002UL, x1); + CHECK_EQUAL_64(3, x2); + CHECK_EQUAL_64(3, x3); + CHECK_EQUAL_64(2, x4); + CHECK_EQUAL_32(1, dst[0]); + CHECK_EQUAL_32(2, dst[1]); + CHECK_EQUAL_32(3, dst[2]); + CHECK_EQUAL_32(3, dst[3]); TEARDOWN(); } @@ -2726,18 +2736,18 @@ TEST(load_store_float) { RUN(); - ASSERT_EQUAL_FP32(2.0, s0); - ASSERT_EQUAL_FP32(2.0, dst[0]); - ASSERT_EQUAL_FP32(1.0, s1); - ASSERT_EQUAL_FP32(1.0, dst[2]); - ASSERT_EQUAL_FP32(3.0, s2); - ASSERT_EQUAL_FP32(3.0, dst[1]); - ASSERT_EQUAL_64(src_base, x17); - ASSERT_EQUAL_64(dst_base + sizeof(dst[0]), x18); - ASSERT_EQUAL_64(src_base + sizeof(src[0]), x19); - ASSERT_EQUAL_64(dst_base + 2 * sizeof(dst[0]), x20); - ASSERT_EQUAL_64(src_base + 2 * sizeof(src[0]), x21); - ASSERT_EQUAL_64(dst_base, x22); + CHECK_EQUAL_FP32(2.0, s0); + CHECK_EQUAL_FP32(2.0, dst[0]); + CHECK_EQUAL_FP32(1.0, s1); + CHECK_EQUAL_FP32(1.0, dst[2]); + CHECK_EQUAL_FP32(3.0, s2); + CHECK_EQUAL_FP32(3.0, dst[1]); + CHECK_EQUAL_64(src_base, x17); + CHECK_EQUAL_64(dst_base + sizeof(dst[0]), x18); + CHECK_EQUAL_64(src_base + sizeof(src[0]), x19); + CHECK_EQUAL_64(dst_base + 2 * sizeof(dst[0]), x20); + CHECK_EQUAL_64(src_base + 2 * sizeof(src[0]), x21); + CHECK_EQUAL_64(dst_base, x22); TEARDOWN(); } @@ -2769,18 +2779,18 @@ TEST(load_store_double) { RUN(); - ASSERT_EQUAL_FP64(2.0, d0); - ASSERT_EQUAL_FP64(2.0, dst[0]); - ASSERT_EQUAL_FP64(1.0, d1); - ASSERT_EQUAL_FP64(1.0, dst[2]); - ASSERT_EQUAL_FP64(3.0, d2); - ASSERT_EQUAL_FP64(3.0, dst[1]); - ASSERT_EQUAL_64(src_base, x17); - ASSERT_EQUAL_64(dst_base + sizeof(dst[0]), x18); - ASSERT_EQUAL_64(src_base + sizeof(src[0]), x19); - ASSERT_EQUAL_64(dst_base + 2 * sizeof(dst[0]), x20); - ASSERT_EQUAL_64(src_base + 2 * sizeof(src[0]), x21); - ASSERT_EQUAL_64(dst_base, x22); + CHECK_EQUAL_FP64(2.0, d0); + CHECK_EQUAL_FP64(2.0, dst[0]); + CHECK_EQUAL_FP64(1.0, d1); + CHECK_EQUAL_FP64(1.0, dst[2]); + CHECK_EQUAL_FP64(3.0, d2); + CHECK_EQUAL_FP64(3.0, dst[1]); + CHECK_EQUAL_64(src_base, x17); + CHECK_EQUAL_64(dst_base + sizeof(dst[0]), x18); + CHECK_EQUAL_64(src_base + sizeof(src[0]), x19); + CHECK_EQUAL_64(dst_base + 2 * sizeof(dst[0]), x20); + CHECK_EQUAL_64(src_base + 2 * sizeof(src[0]), x21); + CHECK_EQUAL_64(dst_base, x22); TEARDOWN(); } @@ -2804,13 +2814,13 @@ TEST(ldp_stp_float) { RUN(); - ASSERT_EQUAL_FP32(1.0, s31); - ASSERT_EQUAL_FP32(2.0, s0); - ASSERT_EQUAL_FP32(0.0, dst[0]); - ASSERT_EQUAL_FP32(2.0, dst[1]); - ASSERT_EQUAL_FP32(1.0, dst[2]); - ASSERT_EQUAL_64(src_base + 2 * sizeof(src[0]), x16); - ASSERT_EQUAL_64(dst_base + sizeof(dst[1]), x17); + CHECK_EQUAL_FP32(1.0, s31); + CHECK_EQUAL_FP32(2.0, s0); + 
CHECK_EQUAL_FP32(0.0, dst[0]); + CHECK_EQUAL_FP32(2.0, dst[1]); + CHECK_EQUAL_FP32(1.0, dst[2]); + CHECK_EQUAL_64(src_base + 2 * sizeof(src[0]), x16); + CHECK_EQUAL_64(dst_base + sizeof(dst[1]), x17); TEARDOWN(); } @@ -2834,13 +2844,13 @@ TEST(ldp_stp_double) { RUN(); - ASSERT_EQUAL_FP64(1.0, d31); - ASSERT_EQUAL_FP64(2.0, d0); - ASSERT_EQUAL_FP64(0.0, dst[0]); - ASSERT_EQUAL_FP64(2.0, dst[1]); - ASSERT_EQUAL_FP64(1.0, dst[2]); - ASSERT_EQUAL_64(src_base + 2 * sizeof(src[0]), x16); - ASSERT_EQUAL_64(dst_base + sizeof(dst[1]), x17); + CHECK_EQUAL_FP64(1.0, d31); + CHECK_EQUAL_FP64(2.0, d0); + CHECK_EQUAL_FP64(0.0, dst[0]); + CHECK_EQUAL_FP64(2.0, dst[1]); + CHECK_EQUAL_FP64(1.0, dst[2]); + CHECK_EQUAL_64(src_base + 2 * sizeof(src[0]), x16); + CHECK_EQUAL_64(dst_base + sizeof(dst[1]), x17); TEARDOWN(); } @@ -2875,27 +2885,85 @@ TEST(ldp_stp_offset) { RUN(); - ASSERT_EQUAL_64(0x44556677, x0); - ASSERT_EQUAL_64(0x00112233, x1); - ASSERT_EQUAL_64(0x0011223344556677UL, dst[0]); - ASSERT_EQUAL_64(0x00112233, x2); - ASSERT_EQUAL_64(0xccddeeff, x3); - ASSERT_EQUAL_64(0xccddeeff00112233UL, dst[1]); - ASSERT_EQUAL_64(0x8899aabbccddeeffUL, x4); - ASSERT_EQUAL_64(0x8899aabbccddeeffUL, dst[2]); - ASSERT_EQUAL_64(0xffeeddccbbaa9988UL, x5); - ASSERT_EQUAL_64(0xffeeddccbbaa9988UL, dst[3]); - ASSERT_EQUAL_64(0x8899aabb, x6); - ASSERT_EQUAL_64(0xbbaa9988, x7); - ASSERT_EQUAL_64(0xbbaa99888899aabbUL, dst[4]); - ASSERT_EQUAL_64(0x8899aabbccddeeffUL, x8); - ASSERT_EQUAL_64(0x8899aabbccddeeffUL, dst[5]); - ASSERT_EQUAL_64(0xffeeddccbbaa9988UL, x9); - ASSERT_EQUAL_64(0xffeeddccbbaa9988UL, dst[6]); - ASSERT_EQUAL_64(src_base, x16); - ASSERT_EQUAL_64(dst_base, x17); - ASSERT_EQUAL_64(src_base + 24, x18); - ASSERT_EQUAL_64(dst_base + 56, x19); + CHECK_EQUAL_64(0x44556677, x0); + CHECK_EQUAL_64(0x00112233, x1); + CHECK_EQUAL_64(0x0011223344556677UL, dst[0]); + CHECK_EQUAL_64(0x00112233, x2); + CHECK_EQUAL_64(0xccddeeff, x3); + CHECK_EQUAL_64(0xccddeeff00112233UL, dst[1]); + CHECK_EQUAL_64(0x8899aabbccddeeffUL, x4); + CHECK_EQUAL_64(0x8899aabbccddeeffUL, dst[2]); + CHECK_EQUAL_64(0xffeeddccbbaa9988UL, x5); + CHECK_EQUAL_64(0xffeeddccbbaa9988UL, dst[3]); + CHECK_EQUAL_64(0x8899aabb, x6); + CHECK_EQUAL_64(0xbbaa9988, x7); + CHECK_EQUAL_64(0xbbaa99888899aabbUL, dst[4]); + CHECK_EQUAL_64(0x8899aabbccddeeffUL, x8); + CHECK_EQUAL_64(0x8899aabbccddeeffUL, dst[5]); + CHECK_EQUAL_64(0xffeeddccbbaa9988UL, x9); + CHECK_EQUAL_64(0xffeeddccbbaa9988UL, dst[6]); + CHECK_EQUAL_64(src_base, x16); + CHECK_EQUAL_64(dst_base, x17); + CHECK_EQUAL_64(src_base + 24, x18); + CHECK_EQUAL_64(dst_base + 56, x19); + + TEARDOWN(); +} + + +TEST(ldp_stp_offset_wide) { + INIT_V8(); + SETUP(); + + uint64_t src[3] = {0x0011223344556677, 0x8899aabbccddeeff, + 0xffeeddccbbaa9988}; + uint64_t dst[7] = {0, 0, 0, 0, 0, 0, 0}; + uintptr_t src_base = reinterpret_cast<uintptr_t>(src); + uintptr_t dst_base = reinterpret_cast<uintptr_t>(dst); + // Move base too far from the array to force multiple instructions + // to be emitted. 
+ const int64_t base_offset = 1024; + + START(); + __ Mov(x20, src_base - base_offset); + __ Mov(x21, dst_base - base_offset); + __ Mov(x18, src_base + base_offset + 24); + __ Mov(x19, dst_base + base_offset + 56); + __ Ldp(w0, w1, MemOperand(x20, base_offset)); + __ Ldp(w2, w3, MemOperand(x20, base_offset + 4)); + __ Ldp(x4, x5, MemOperand(x20, base_offset + 8)); + __ Ldp(w6, w7, MemOperand(x18, -12 - base_offset)); + __ Ldp(x8, x9, MemOperand(x18, -16 - base_offset)); + __ Stp(w0, w1, MemOperand(x21, base_offset)); + __ Stp(w2, w3, MemOperand(x21, base_offset + 8)); + __ Stp(x4, x5, MemOperand(x21, base_offset + 16)); + __ Stp(w6, w7, MemOperand(x19, -24 - base_offset)); + __ Stp(x8, x9, MemOperand(x19, -16 - base_offset)); + END(); + + RUN(); + + CHECK_EQUAL_64(0x44556677, x0); + CHECK_EQUAL_64(0x00112233, x1); + CHECK_EQUAL_64(0x0011223344556677UL, dst[0]); + CHECK_EQUAL_64(0x00112233, x2); + CHECK_EQUAL_64(0xccddeeff, x3); + CHECK_EQUAL_64(0xccddeeff00112233UL, dst[1]); + CHECK_EQUAL_64(0x8899aabbccddeeffUL, x4); + CHECK_EQUAL_64(0x8899aabbccddeeffUL, dst[2]); + CHECK_EQUAL_64(0xffeeddccbbaa9988UL, x5); + CHECK_EQUAL_64(0xffeeddccbbaa9988UL, dst[3]); + CHECK_EQUAL_64(0x8899aabb, x6); + CHECK_EQUAL_64(0xbbaa9988, x7); + CHECK_EQUAL_64(0xbbaa99888899aabbUL, dst[4]); + CHECK_EQUAL_64(0x8899aabbccddeeffUL, x8); + CHECK_EQUAL_64(0x8899aabbccddeeffUL, dst[5]); + CHECK_EQUAL_64(0xffeeddccbbaa9988UL, x9); + CHECK_EQUAL_64(0xffeeddccbbaa9988UL, dst[6]); + CHECK_EQUAL_64(src_base - base_offset, x20); + CHECK_EQUAL_64(dst_base - base_offset, x21); + CHECK_EQUAL_64(src_base + base_offset + 24, x18); + CHECK_EQUAL_64(dst_base + base_offset + 56, x19); TEARDOWN(); } @@ -2930,27 +2998,27 @@ TEST(ldnp_stnp_offset) { RUN(); - ASSERT_EQUAL_64(0x44556677, x0); - ASSERT_EQUAL_64(0x00112233, x1); - ASSERT_EQUAL_64(0x0011223344556677UL, dst[0]); - ASSERT_EQUAL_64(0x00112233, x2); - ASSERT_EQUAL_64(0xccddeeff, x3); - ASSERT_EQUAL_64(0xccddeeff00112233UL, dst[1]); - ASSERT_EQUAL_64(0x8899aabbccddeeffUL, x4); - ASSERT_EQUAL_64(0x8899aabbccddeeffUL, dst[2]); - ASSERT_EQUAL_64(0xffeeddccbbaa9988UL, x5); - ASSERT_EQUAL_64(0xffeeddccbbaa9988UL, dst[3]); - ASSERT_EQUAL_64(0x8899aabb, x6); - ASSERT_EQUAL_64(0xbbaa9988, x7); - ASSERT_EQUAL_64(0xbbaa99888899aabbUL, dst[4]); - ASSERT_EQUAL_64(0x8899aabbccddeeffUL, x8); - ASSERT_EQUAL_64(0x8899aabbccddeeffUL, dst[5]); - ASSERT_EQUAL_64(0xffeeddccbbaa9988UL, x9); - ASSERT_EQUAL_64(0xffeeddccbbaa9988UL, dst[6]); - ASSERT_EQUAL_64(src_base, x16); - ASSERT_EQUAL_64(dst_base, x17); - ASSERT_EQUAL_64(src_base + 24, x18); - ASSERT_EQUAL_64(dst_base + 56, x19); + CHECK_EQUAL_64(0x44556677, x0); + CHECK_EQUAL_64(0x00112233, x1); + CHECK_EQUAL_64(0x0011223344556677UL, dst[0]); + CHECK_EQUAL_64(0x00112233, x2); + CHECK_EQUAL_64(0xccddeeff, x3); + CHECK_EQUAL_64(0xccddeeff00112233UL, dst[1]); + CHECK_EQUAL_64(0x8899aabbccddeeffUL, x4); + CHECK_EQUAL_64(0x8899aabbccddeeffUL, dst[2]); + CHECK_EQUAL_64(0xffeeddccbbaa9988UL, x5); + CHECK_EQUAL_64(0xffeeddccbbaa9988UL, dst[3]); + CHECK_EQUAL_64(0x8899aabb, x6); + CHECK_EQUAL_64(0xbbaa9988, x7); + CHECK_EQUAL_64(0xbbaa99888899aabbUL, dst[4]); + CHECK_EQUAL_64(0x8899aabbccddeeffUL, x8); + CHECK_EQUAL_64(0x8899aabbccddeeffUL, dst[5]); + CHECK_EQUAL_64(0xffeeddccbbaa9988UL, x9); + CHECK_EQUAL_64(0xffeeddccbbaa9988UL, dst[6]); + CHECK_EQUAL_64(src_base, x16); + CHECK_EQUAL_64(dst_base, x17); + CHECK_EQUAL_64(src_base + 24, x18); + CHECK_EQUAL_64(dst_base + 56, x19); TEARDOWN(); } @@ -2986,26 +3054,89 @@ TEST(ldp_stp_preindex) { RUN(); - 
ASSERT_EQUAL_64(0x00112233, x0); - ASSERT_EQUAL_64(0xccddeeff, x1); - ASSERT_EQUAL_64(0x44556677, x2); - ASSERT_EQUAL_64(0x00112233, x3); - ASSERT_EQUAL_64(0xccddeeff00112233UL, dst[0]); - ASSERT_EQUAL_64(0x0000000000112233UL, dst[1]); - ASSERT_EQUAL_64(0x8899aabbccddeeffUL, x4); - ASSERT_EQUAL_64(0xffeeddccbbaa9988UL, x5); - ASSERT_EQUAL_64(0x0011223344556677UL, x6); - ASSERT_EQUAL_64(0x8899aabbccddeeffUL, x7); - ASSERT_EQUAL_64(0xffeeddccbbaa9988UL, dst[2]); - ASSERT_EQUAL_64(0x8899aabbccddeeffUL, dst[3]); - ASSERT_EQUAL_64(0x0011223344556677UL, dst[4]); - ASSERT_EQUAL_64(src_base, x16); - ASSERT_EQUAL_64(dst_base, x17); - ASSERT_EQUAL_64(dst_base + 16, x18); - ASSERT_EQUAL_64(src_base + 4, x19); - ASSERT_EQUAL_64(dst_base + 4, x20); - ASSERT_EQUAL_64(src_base + 8, x21); - ASSERT_EQUAL_64(dst_base + 24, x22); + CHECK_EQUAL_64(0x00112233, x0); + CHECK_EQUAL_64(0xccddeeff, x1); + CHECK_EQUAL_64(0x44556677, x2); + CHECK_EQUAL_64(0x00112233, x3); + CHECK_EQUAL_64(0xccddeeff00112233UL, dst[0]); + CHECK_EQUAL_64(0x0000000000112233UL, dst[1]); + CHECK_EQUAL_64(0x8899aabbccddeeffUL, x4); + CHECK_EQUAL_64(0xffeeddccbbaa9988UL, x5); + CHECK_EQUAL_64(0x0011223344556677UL, x6); + CHECK_EQUAL_64(0x8899aabbccddeeffUL, x7); + CHECK_EQUAL_64(0xffeeddccbbaa9988UL, dst[2]); + CHECK_EQUAL_64(0x8899aabbccddeeffUL, dst[3]); + CHECK_EQUAL_64(0x0011223344556677UL, dst[4]); + CHECK_EQUAL_64(src_base, x16); + CHECK_EQUAL_64(dst_base, x17); + CHECK_EQUAL_64(dst_base + 16, x18); + CHECK_EQUAL_64(src_base + 4, x19); + CHECK_EQUAL_64(dst_base + 4, x20); + CHECK_EQUAL_64(src_base + 8, x21); + CHECK_EQUAL_64(dst_base + 24, x22); + + TEARDOWN(); +} + + +TEST(ldp_stp_preindex_wide) { + INIT_V8(); + SETUP(); + + uint64_t src[3] = {0x0011223344556677, 0x8899aabbccddeeff, + 0xffeeddccbbaa9988}; + uint64_t dst[5] = {0, 0, 0, 0, 0}; + uintptr_t src_base = reinterpret_cast<uintptr_t>(src); + uintptr_t dst_base = reinterpret_cast<uintptr_t>(dst); + // Move base too far from the array to force multiple instructions + // to be emitted. 
+ const int64_t base_offset = 1024; + + START(); + __ Mov(x24, src_base - base_offset); + __ Mov(x25, dst_base + base_offset); + __ Mov(x18, dst_base + base_offset + 16); + __ Ldp(w0, w1, MemOperand(x24, base_offset + 4, PreIndex)); + __ Mov(x19, x24); + __ Mov(x24, src_base - base_offset + 4); + __ Ldp(w2, w3, MemOperand(x24, base_offset - 4, PreIndex)); + __ Stp(w2, w3, MemOperand(x25, 4 - base_offset, PreIndex)); + __ Mov(x20, x25); + __ Mov(x25, dst_base + base_offset + 4); + __ Mov(x24, src_base - base_offset); + __ Stp(w0, w1, MemOperand(x25, -4 - base_offset, PreIndex)); + __ Ldp(x4, x5, MemOperand(x24, base_offset + 8, PreIndex)); + __ Mov(x21, x24); + __ Mov(x24, src_base - base_offset + 8); + __ Ldp(x6, x7, MemOperand(x24, base_offset - 8, PreIndex)); + __ Stp(x7, x6, MemOperand(x18, 8 - base_offset, PreIndex)); + __ Mov(x22, x18); + __ Mov(x18, dst_base + base_offset + 16 + 8); + __ Stp(x5, x4, MemOperand(x18, -8 - base_offset, PreIndex)); + END(); + + RUN(); + + CHECK_EQUAL_64(0x00112233, x0); + CHECK_EQUAL_64(0xccddeeff, x1); + CHECK_EQUAL_64(0x44556677, x2); + CHECK_EQUAL_64(0x00112233, x3); + CHECK_EQUAL_64(0xccddeeff00112233UL, dst[0]); + CHECK_EQUAL_64(0x0000000000112233UL, dst[1]); + CHECK_EQUAL_64(0x8899aabbccddeeffUL, x4); + CHECK_EQUAL_64(0xffeeddccbbaa9988UL, x5); + CHECK_EQUAL_64(0x0011223344556677UL, x6); + CHECK_EQUAL_64(0x8899aabbccddeeffUL, x7); + CHECK_EQUAL_64(0xffeeddccbbaa9988UL, dst[2]); + CHECK_EQUAL_64(0x8899aabbccddeeffUL, dst[3]); + CHECK_EQUAL_64(0x0011223344556677UL, dst[4]); + CHECK_EQUAL_64(src_base, x24); + CHECK_EQUAL_64(dst_base, x25); + CHECK_EQUAL_64(dst_base + 16, x18); + CHECK_EQUAL_64(src_base + 4, x19); + CHECK_EQUAL_64(dst_base + 4, x20); + CHECK_EQUAL_64(src_base + 8, x21); + CHECK_EQUAL_64(dst_base + 24, x22); TEARDOWN(); } @@ -3041,26 +3172,89 @@ TEST(ldp_stp_postindex) { RUN(); - ASSERT_EQUAL_64(0x44556677, x0); - ASSERT_EQUAL_64(0x00112233, x1); - ASSERT_EQUAL_64(0x00112233, x2); - ASSERT_EQUAL_64(0xccddeeff, x3); - ASSERT_EQUAL_64(0x4455667700112233UL, dst[0]); - ASSERT_EQUAL_64(0x0000000000112233UL, dst[1]); - ASSERT_EQUAL_64(0x0011223344556677UL, x4); - ASSERT_EQUAL_64(0x8899aabbccddeeffUL, x5); - ASSERT_EQUAL_64(0x8899aabbccddeeffUL, x6); - ASSERT_EQUAL_64(0xffeeddccbbaa9988UL, x7); - ASSERT_EQUAL_64(0xffeeddccbbaa9988UL, dst[2]); - ASSERT_EQUAL_64(0x8899aabbccddeeffUL, dst[3]); - ASSERT_EQUAL_64(0x0011223344556677UL, dst[4]); - ASSERT_EQUAL_64(src_base, x16); - ASSERT_EQUAL_64(dst_base, x17); - ASSERT_EQUAL_64(dst_base + 16, x18); - ASSERT_EQUAL_64(src_base + 4, x19); - ASSERT_EQUAL_64(dst_base + 4, x20); - ASSERT_EQUAL_64(src_base + 8, x21); - ASSERT_EQUAL_64(dst_base + 24, x22); + CHECK_EQUAL_64(0x44556677, x0); + CHECK_EQUAL_64(0x00112233, x1); + CHECK_EQUAL_64(0x00112233, x2); + CHECK_EQUAL_64(0xccddeeff, x3); + CHECK_EQUAL_64(0x4455667700112233UL, dst[0]); + CHECK_EQUAL_64(0x0000000000112233UL, dst[1]); + CHECK_EQUAL_64(0x0011223344556677UL, x4); + CHECK_EQUAL_64(0x8899aabbccddeeffUL, x5); + CHECK_EQUAL_64(0x8899aabbccddeeffUL, x6); + CHECK_EQUAL_64(0xffeeddccbbaa9988UL, x7); + CHECK_EQUAL_64(0xffeeddccbbaa9988UL, dst[2]); + CHECK_EQUAL_64(0x8899aabbccddeeffUL, dst[3]); + CHECK_EQUAL_64(0x0011223344556677UL, dst[4]); + CHECK_EQUAL_64(src_base, x16); + CHECK_EQUAL_64(dst_base, x17); + CHECK_EQUAL_64(dst_base + 16, x18); + CHECK_EQUAL_64(src_base + 4, x19); + CHECK_EQUAL_64(dst_base + 4, x20); + CHECK_EQUAL_64(src_base + 8, x21); + CHECK_EQUAL_64(dst_base + 24, x22); + + TEARDOWN(); +} + + +TEST(ldp_stp_postindex_wide) { + 
INIT_V8(); + SETUP(); + + uint64_t src[4] = {0x0011223344556677, 0x8899aabbccddeeff, 0xffeeddccbbaa9988, + 0x7766554433221100}; + uint64_t dst[5] = {0, 0, 0, 0, 0}; + uintptr_t src_base = reinterpret_cast<uintptr_t>(src); + uintptr_t dst_base = reinterpret_cast<uintptr_t>(dst); + // Move base too far from the array to force multiple instructions + // to be emitted. + const int64_t base_offset = 1024; + + START(); + __ Mov(x24, src_base); + __ Mov(x25, dst_base); + __ Mov(x18, dst_base + 16); + __ Ldp(w0, w1, MemOperand(x24, base_offset + 4, PostIndex)); + __ Mov(x19, x24); + __ Sub(x24, x24, base_offset); + __ Ldp(w2, w3, MemOperand(x24, base_offset - 4, PostIndex)); + __ Stp(w2, w3, MemOperand(x25, 4 - base_offset, PostIndex)); + __ Mov(x20, x25); + __ Sub(x24, x24, base_offset); + __ Add(x25, x25, base_offset); + __ Stp(w0, w1, MemOperand(x25, -4 - base_offset, PostIndex)); + __ Ldp(x4, x5, MemOperand(x24, base_offset + 8, PostIndex)); + __ Mov(x21, x24); + __ Sub(x24, x24, base_offset); + __ Ldp(x6, x7, MemOperand(x24, base_offset - 8, PostIndex)); + __ Stp(x7, x6, MemOperand(x18, 8 - base_offset, PostIndex)); + __ Mov(x22, x18); + __ Add(x18, x18, base_offset); + __ Stp(x5, x4, MemOperand(x18, -8 - base_offset, PostIndex)); + END(); + + RUN(); + + CHECK_EQUAL_64(0x44556677, x0); + CHECK_EQUAL_64(0x00112233, x1); + CHECK_EQUAL_64(0x00112233, x2); + CHECK_EQUAL_64(0xccddeeff, x3); + CHECK_EQUAL_64(0x4455667700112233UL, dst[0]); + CHECK_EQUAL_64(0x0000000000112233UL, dst[1]); + CHECK_EQUAL_64(0x0011223344556677UL, x4); + CHECK_EQUAL_64(0x8899aabbccddeeffUL, x5); + CHECK_EQUAL_64(0x8899aabbccddeeffUL, x6); + CHECK_EQUAL_64(0xffeeddccbbaa9988UL, x7); + CHECK_EQUAL_64(0xffeeddccbbaa9988UL, dst[2]); + CHECK_EQUAL_64(0x8899aabbccddeeffUL, dst[3]); + CHECK_EQUAL_64(0x0011223344556677UL, dst[4]); + CHECK_EQUAL_64(src_base + base_offset, x24); + CHECK_EQUAL_64(dst_base - base_offset, x25); + CHECK_EQUAL_64(dst_base - base_offset + 16, x18); + CHECK_EQUAL_64(src_base + base_offset + 4, x19); + CHECK_EQUAL_64(dst_base - base_offset + 4, x20); + CHECK_EQUAL_64(src_base + base_offset + 8, x21); + CHECK_EQUAL_64(dst_base - base_offset + 24, x22); TEARDOWN(); } @@ -3080,8 +3274,8 @@ TEST(ldp_sign_extend) { RUN(); - ASSERT_EQUAL_64(0xffffffff80000000UL, x0); - ASSERT_EQUAL_64(0x000000007fffffffUL, x1); + CHECK_EQUAL_64(0xffffffff80000000UL, x0); + CHECK_EQUAL_64(0x000000007fffffffUL, x1); TEARDOWN(); } @@ -3114,19 +3308,19 @@ TEST(ldur_stur) { RUN(); - ASSERT_EQUAL_64(0x6789abcd, x0); - ASSERT_EQUAL_64(0x6789abcd0000L, dst[0]); - ASSERT_EQUAL_64(0xabcdef0123456789L, x1); - ASSERT_EQUAL_64(0xcdef012345678900L, dst[1]); - ASSERT_EQUAL_64(0x000000ab, dst[2]); - ASSERT_EQUAL_64(0xabcdef01, x2); - ASSERT_EQUAL_64(0x00abcdef01000000L, dst[3]); - ASSERT_EQUAL_64(0x00000001, x3); - ASSERT_EQUAL_64(0x0100000000000000L, dst[4]); - ASSERT_EQUAL_64(src_base, x17); - ASSERT_EQUAL_64(dst_base, x18); - ASSERT_EQUAL_64(src_base + 16, x19); - ASSERT_EQUAL_64(dst_base + 32, x20); + CHECK_EQUAL_64(0x6789abcd, x0); + CHECK_EQUAL_64(0x6789abcd0000L, dst[0]); + CHECK_EQUAL_64(0xabcdef0123456789L, x1); + CHECK_EQUAL_64(0xcdef012345678900L, dst[1]); + CHECK_EQUAL_64(0x000000ab, dst[2]); + CHECK_EQUAL_64(0xabcdef01, x2); + CHECK_EQUAL_64(0x00abcdef01000000L, dst[3]); + CHECK_EQUAL_64(0x00000001, x3); + CHECK_EQUAL_64(0x0100000000000000L, dst[4]); + CHECK_EQUAL_64(src_base, x17); + CHECK_EQUAL_64(dst_base, x18); + CHECK_EQUAL_64(src_base + 16, x19); + CHECK_EQUAL_64(dst_base + 32, x20); TEARDOWN(); } @@ -3147,10 +3341,10 @@ 
TEST(ldr_literal) { RUN(); - ASSERT_EQUAL_64(0x1234567890abcdefUL, x2); - ASSERT_EQUAL_64(0xfedcba09, x3); - ASSERT_EQUAL_FP64(1.234, d13); - ASSERT_EQUAL_FP32(2.5, s25); + CHECK_EQUAL_64(0x1234567890abcdefUL, x2); + CHECK_EQUAL_64(0xfedcba09, x3); + CHECK_EQUAL_FP64(1.234, d13); + CHECK_EQUAL_FP32(2.5, s25); TEARDOWN(); } @@ -3159,7 +3353,7 @@ TEST(ldr_literal) { static void LdrLiteralRangeHelper(ptrdiff_t range_, LiteralPoolEmitOption option, bool expect_dump) { - ASSERT(range_ > 0); + DCHECK(range_ > 0); SETUP_SIZE(range_ + 1024); Label label_1, label_2; @@ -3178,19 +3372,19 @@ static void LdrLiteralRangeHelper(ptrdiff_t range_, START(); // Force a pool dump so the pool starts off empty. __ EmitLiteralPool(JumpRequired); - ASSERT_LITERAL_POOL_SIZE(0); + DCHECK_LITERAL_POOL_SIZE(0); __ Ldr(x0, 0x1234567890abcdefUL); __ Ldr(w1, 0xfedcba09); __ Ldr(d0, 1.234); __ Ldr(s1, 2.5); - ASSERT_LITERAL_POOL_SIZE(4); + DCHECK_LITERAL_POOL_SIZE(4); code_size += 4 * sizeof(Instr); // Check that the requested range (allowing space for a branch over the pool) // can be handled by this test. - ASSERT((code_size + pool_guard_size) <= range); + DCHECK((code_size + pool_guard_size) <= range); // Emit NOPs up to 'range', leaving space for the pool guard. while ((code_size + pool_guard_size) < range) { @@ -3204,41 +3398,41 @@ static void LdrLiteralRangeHelper(ptrdiff_t range_, code_size += sizeof(Instr); } - ASSERT(code_size == range); - ASSERT_LITERAL_POOL_SIZE(4); + DCHECK(code_size == range); + DCHECK_LITERAL_POOL_SIZE(4); // Possibly generate a literal pool. __ CheckLiteralPool(option); __ Bind(&label_1); if (expect_dump) { - ASSERT_LITERAL_POOL_SIZE(0); + DCHECK_LITERAL_POOL_SIZE(0); } else { - ASSERT_LITERAL_POOL_SIZE(4); + DCHECK_LITERAL_POOL_SIZE(4); } // Force a pool flush to check that a second pool functions correctly. __ EmitLiteralPool(JumpRequired); - ASSERT_LITERAL_POOL_SIZE(0); + DCHECK_LITERAL_POOL_SIZE(0); // These loads should be after the pool (and will require a new one). __ Ldr(x4, 0x34567890abcdef12UL); __ Ldr(w5, 0xdcba09fe); __ Ldr(d4, 123.4); __ Ldr(s5, 250.0); - ASSERT_LITERAL_POOL_SIZE(4); + DCHECK_LITERAL_POOL_SIZE(4); END(); RUN(); // Check that the literals loaded correctly. 
- ASSERT_EQUAL_64(0x1234567890abcdefUL, x0); - ASSERT_EQUAL_64(0xfedcba09, x1); - ASSERT_EQUAL_FP64(1.234, d0); - ASSERT_EQUAL_FP32(2.5, s1); - ASSERT_EQUAL_64(0x34567890abcdef12UL, x4); - ASSERT_EQUAL_64(0xdcba09fe, x5); - ASSERT_EQUAL_FP64(123.4, d4); - ASSERT_EQUAL_FP32(250.0, s5); + CHECK_EQUAL_64(0x1234567890abcdefUL, x0); + CHECK_EQUAL_64(0xfedcba09, x1); + CHECK_EQUAL_FP64(1.234, d0); + CHECK_EQUAL_FP32(2.5, s1); + CHECK_EQUAL_64(0x34567890abcdef12UL, x4); + CHECK_EQUAL_64(0xdcba09fe, x5); + CHECK_EQUAL_FP64(123.4, d4); + CHECK_EQUAL_FP32(250.0, s5); TEARDOWN(); } @@ -3325,25 +3519,25 @@ TEST(add_sub_imm) { RUN(); - ASSERT_EQUAL_64(0x123, x10); - ASSERT_EQUAL_64(0x123111, x11); - ASSERT_EQUAL_64(0xabc000, x12); - ASSERT_EQUAL_64(0x0, x13); + CHECK_EQUAL_64(0x123, x10); + CHECK_EQUAL_64(0x123111, x11); + CHECK_EQUAL_64(0xabc000, x12); + CHECK_EQUAL_64(0x0, x13); - ASSERT_EQUAL_32(0x123, w14); - ASSERT_EQUAL_32(0x123111, w15); - ASSERT_EQUAL_32(0xabc000, w16); - ASSERT_EQUAL_32(0x0, w17); + CHECK_EQUAL_32(0x123, w14); + CHECK_EQUAL_32(0x123111, w15); + CHECK_EQUAL_32(0xabc000, w16); + CHECK_EQUAL_32(0x0, w17); - ASSERT_EQUAL_64(0xffffffffffffffffL, x20); - ASSERT_EQUAL_64(0x1000, x21); - ASSERT_EQUAL_64(0x111, x22); - ASSERT_EQUAL_64(0x7fffffffffffffffL, x23); + CHECK_EQUAL_64(0xffffffffffffffffL, x20); + CHECK_EQUAL_64(0x1000, x21); + CHECK_EQUAL_64(0x111, x22); + CHECK_EQUAL_64(0x7fffffffffffffffL, x23); - ASSERT_EQUAL_32(0xffffffff, w24); - ASSERT_EQUAL_32(0x1000, w25); - ASSERT_EQUAL_32(0x111, w26); - ASSERT_EQUAL_32(0xffffffff, w27); + CHECK_EQUAL_32(0xffffffff, w24); + CHECK_EQUAL_32(0x1000, w25); + CHECK_EQUAL_32(0x111, w26); + CHECK_EQUAL_32(0xffffffff, w27); TEARDOWN(); } @@ -3363,22 +3557,26 @@ TEST(add_sub_wide_imm) { __ Add(w12, w0, Operand(0x12345678)); __ Add(w13, w1, Operand(0xffffffff)); - __ Sub(x20, x0, Operand(0x1234567890abcdefUL)); + __ Add(w18, w0, Operand(kWMinInt)); + __ Sub(w19, w0, Operand(kWMinInt)); + __ Sub(x20, x0, Operand(0x1234567890abcdefUL)); __ Sub(w21, w0, Operand(0x12345678)); END(); RUN(); - ASSERT_EQUAL_64(0x1234567890abcdefUL, x10); - ASSERT_EQUAL_64(0x100000000UL, x11); + CHECK_EQUAL_64(0x1234567890abcdefUL, x10); + CHECK_EQUAL_64(0x100000000UL, x11); - ASSERT_EQUAL_32(0x12345678, w12); - ASSERT_EQUAL_64(0x0, x13); + CHECK_EQUAL_32(0x12345678, w12); + CHECK_EQUAL_64(0x0, x13); - ASSERT_EQUAL_64(-0x1234567890abcdefUL, x20); + CHECK_EQUAL_32(kWMinInt, w18); + CHECK_EQUAL_32(kWMinInt, w19); - ASSERT_EQUAL_32(-0x12345678, w21); + CHECK_EQUAL_64(-0x1234567890abcdefUL, x20); + CHECK_EQUAL_32(-0x12345678, w21); TEARDOWN(); } @@ -3415,23 +3613,23 @@ TEST(add_sub_shifted) { RUN(); - ASSERT_EQUAL_64(0xffffffffffffffffL, x10); - ASSERT_EQUAL_64(0x23456789abcdef00L, x11); - ASSERT_EQUAL_64(0x000123456789abcdL, x12); - ASSERT_EQUAL_64(0x000123456789abcdL, x13); - ASSERT_EQUAL_64(0xfffedcba98765432L, x14); - ASSERT_EQUAL_64(0xff89abcd, x15); - ASSERT_EQUAL_64(0xef89abcc, x18); - ASSERT_EQUAL_64(0xef0123456789abccL, x19); + CHECK_EQUAL_64(0xffffffffffffffffL, x10); + CHECK_EQUAL_64(0x23456789abcdef00L, x11); + CHECK_EQUAL_64(0x000123456789abcdL, x12); + CHECK_EQUAL_64(0x000123456789abcdL, x13); + CHECK_EQUAL_64(0xfffedcba98765432L, x14); + CHECK_EQUAL_64(0xff89abcd, x15); + CHECK_EQUAL_64(0xef89abcc, x18); + CHECK_EQUAL_64(0xef0123456789abccL, x19); - ASSERT_EQUAL_64(0x0123456789abcdefL, x20); - ASSERT_EQUAL_64(0xdcba9876543210ffL, x21); - ASSERT_EQUAL_64(0xfffedcba98765432L, x22); - ASSERT_EQUAL_64(0xfffedcba98765432L, x23); - 
ASSERT_EQUAL_64(0x000123456789abcdL, x24); - ASSERT_EQUAL_64(0x00765432, x25); - ASSERT_EQUAL_64(0x10765432, x26); - ASSERT_EQUAL_64(0x10fedcba98765432L, x27); + CHECK_EQUAL_64(0x0123456789abcdefL, x20); + CHECK_EQUAL_64(0xdcba9876543210ffL, x21); + CHECK_EQUAL_64(0xfffedcba98765432L, x22); + CHECK_EQUAL_64(0xfffedcba98765432L, x23); + CHECK_EQUAL_64(0x000123456789abcdL, x24); + CHECK_EQUAL_64(0x00765432, x25); + CHECK_EQUAL_64(0x10765432, x26); + CHECK_EQUAL_64(0x10fedcba98765432L, x27); TEARDOWN(); } @@ -3477,32 +3675,32 @@ TEST(add_sub_extended) { RUN(); - ASSERT_EQUAL_64(0xefL, x10); - ASSERT_EQUAL_64(0x1deL, x11); - ASSERT_EQUAL_64(0x337bcL, x12); - ASSERT_EQUAL_64(0x89abcdef0L, x13); + CHECK_EQUAL_64(0xefL, x10); + CHECK_EQUAL_64(0x1deL, x11); + CHECK_EQUAL_64(0x337bcL, x12); + CHECK_EQUAL_64(0x89abcdef0L, x13); - ASSERT_EQUAL_64(0xffffffffffffffefL, x14); - ASSERT_EQUAL_64(0xffffffffffffffdeL, x15); - ASSERT_EQUAL_64(0xffffffffffff37bcL, x16); - ASSERT_EQUAL_64(0xfffffffc4d5e6f78L, x17); - ASSERT_EQUAL_64(0x10L, x18); - ASSERT_EQUAL_64(0x20L, x19); - ASSERT_EQUAL_64(0xc840L, x20); - ASSERT_EQUAL_64(0x3b2a19080L, x21); + CHECK_EQUAL_64(0xffffffffffffffefL, x14); + CHECK_EQUAL_64(0xffffffffffffffdeL, x15); + CHECK_EQUAL_64(0xffffffffffff37bcL, x16); + CHECK_EQUAL_64(0xfffffffc4d5e6f78L, x17); + CHECK_EQUAL_64(0x10L, x18); + CHECK_EQUAL_64(0x20L, x19); + CHECK_EQUAL_64(0xc840L, x20); + CHECK_EQUAL_64(0x3b2a19080L, x21); - ASSERT_EQUAL_64(0x0123456789abce0fL, x22); - ASSERT_EQUAL_64(0x0123456789abcdcfL, x23); + CHECK_EQUAL_64(0x0123456789abce0fL, x22); + CHECK_EQUAL_64(0x0123456789abcdcfL, x23); - ASSERT_EQUAL_32(0x89abce2f, w24); - ASSERT_EQUAL_32(0xffffffef, w25); - ASSERT_EQUAL_32(0xffffffde, w26); - ASSERT_EQUAL_32(0xc3b2a188, w27); + CHECK_EQUAL_32(0x89abce2f, w24); + CHECK_EQUAL_32(0xffffffef, w25); + CHECK_EQUAL_32(0xffffffde, w26); + CHECK_EQUAL_32(0xc3b2a188, w27); - ASSERT_EQUAL_32(0x4d5e6f78, w28); - ASSERT_EQUAL_64(0xfffffffc4d5e6f78L, x29); + CHECK_EQUAL_32(0x4d5e6f78, w28); + CHECK_EQUAL_64(0xfffffffc4d5e6f78L, x29); - ASSERT_EQUAL_64(256, x30); + CHECK_EQUAL_64(256, x30); TEARDOWN(); } @@ -3536,19 +3734,19 @@ TEST(add_sub_negative) { RUN(); - ASSERT_EQUAL_64(-42, x10); - ASSERT_EQUAL_64(4000, x11); - ASSERT_EQUAL_64(0x1122334455667700, x12); + CHECK_EQUAL_64(-42, x10); + CHECK_EQUAL_64(4000, x11); + CHECK_EQUAL_64(0x1122334455667700, x12); - ASSERT_EQUAL_64(600, x13); - ASSERT_EQUAL_64(5000, x14); - ASSERT_EQUAL_64(0x1122334455667cdd, x15); + CHECK_EQUAL_64(600, x13); + CHECK_EQUAL_64(5000, x14); + CHECK_EQUAL_64(0x1122334455667cdd, x15); - ASSERT_EQUAL_32(0x11223000, w19); - ASSERT_EQUAL_32(398000, w20); + CHECK_EQUAL_32(0x11223000, w19); + CHECK_EQUAL_32(398000, w20); - ASSERT_EQUAL_32(0x11223400, w21); - ASSERT_EQUAL_32(402000, w22); + CHECK_EQUAL_32(0x11223400, w21); + CHECK_EQUAL_32(402000, w22); TEARDOWN(); } @@ -3584,9 +3782,9 @@ TEST(add_sub_zero) { RUN(); - ASSERT_EQUAL_64(0, x0); - ASSERT_EQUAL_64(0, x1); - ASSERT_EQUAL_64(0, x2); + CHECK_EQUAL_64(0, x0); + CHECK_EQUAL_64(0, x1); + CHECK_EQUAL_64(0, x2); TEARDOWN(); } @@ -3652,20 +3850,20 @@ TEST(neg) { RUN(); - ASSERT_EQUAL_64(0xfffffffffffffeddUL, x1); - ASSERT_EQUAL_64(0xfffffedd, x2); - ASSERT_EQUAL_64(0x1db97530eca86422UL, x3); - ASSERT_EQUAL_64(0xd950c844, x4); - ASSERT_EQUAL_64(0xe1db97530eca8643UL, x5); - ASSERT_EQUAL_64(0xf7654322, x6); - ASSERT_EQUAL_64(0x0076e5d4c3b2a191UL, x7); - ASSERT_EQUAL_64(0x01d950c9, x8); - ASSERT_EQUAL_64(0xffffff11, x9); - ASSERT_EQUAL_64(0x0000000000000022UL, x10); - 
ASSERT_EQUAL_64(0xfffcc844, x11); - ASSERT_EQUAL_64(0x0000000000019088UL, x12); - ASSERT_EQUAL_64(0x65432110, x13); - ASSERT_EQUAL_64(0x0000000765432110UL, x14); + CHECK_EQUAL_64(0xfffffffffffffeddUL, x1); + CHECK_EQUAL_64(0xfffffedd, x2); + CHECK_EQUAL_64(0x1db97530eca86422UL, x3); + CHECK_EQUAL_64(0xd950c844, x4); + CHECK_EQUAL_64(0xe1db97530eca8643UL, x5); + CHECK_EQUAL_64(0xf7654322, x6); + CHECK_EQUAL_64(0x0076e5d4c3b2a191UL, x7); + CHECK_EQUAL_64(0x01d950c9, x8); + CHECK_EQUAL_64(0xffffff11, x9); + CHECK_EQUAL_64(0x0000000000000022UL, x10); + CHECK_EQUAL_64(0xfffcc844, x11); + CHECK_EQUAL_64(0x0000000000019088UL, x12); + CHECK_EQUAL_64(0x65432110, x13); + CHECK_EQUAL_64(0x0000000765432110UL, x14); TEARDOWN(); } @@ -3715,29 +3913,29 @@ TEST(adc_sbc_shift) { RUN(); - ASSERT_EQUAL_64(0xffffffffffffffffL, x5); - ASSERT_EQUAL_64(1L << 60, x6); - ASSERT_EQUAL_64(0xf0123456789abcddL, x7); - ASSERT_EQUAL_64(0x0111111111111110L, x8); - ASSERT_EQUAL_64(0x1222222222222221L, x9); + CHECK_EQUAL_64(0xffffffffffffffffL, x5); + CHECK_EQUAL_64(1L << 60, x6); + CHECK_EQUAL_64(0xf0123456789abcddL, x7); + CHECK_EQUAL_64(0x0111111111111110L, x8); + CHECK_EQUAL_64(0x1222222222222221L, x9); - ASSERT_EQUAL_32(0xffffffff, w10); - ASSERT_EQUAL_32(1 << 30, w11); - ASSERT_EQUAL_32(0xf89abcdd, w12); - ASSERT_EQUAL_32(0x91111110, w13); - ASSERT_EQUAL_32(0x9a222221, w14); + CHECK_EQUAL_32(0xffffffff, w10); + CHECK_EQUAL_32(1 << 30, w11); + CHECK_EQUAL_32(0xf89abcdd, w12); + CHECK_EQUAL_32(0x91111110, w13); + CHECK_EQUAL_32(0x9a222221, w14); - ASSERT_EQUAL_64(0xffffffffffffffffL + 1, x18); - ASSERT_EQUAL_64((1L << 60) + 1, x19); - ASSERT_EQUAL_64(0xf0123456789abcddL + 1, x20); - ASSERT_EQUAL_64(0x0111111111111110L + 1, x21); - ASSERT_EQUAL_64(0x1222222222222221L + 1, x22); + CHECK_EQUAL_64(0xffffffffffffffffL + 1, x18); + CHECK_EQUAL_64((1L << 60) + 1, x19); + CHECK_EQUAL_64(0xf0123456789abcddL + 1, x20); + CHECK_EQUAL_64(0x0111111111111110L + 1, x21); + CHECK_EQUAL_64(0x1222222222222221L + 1, x22); - ASSERT_EQUAL_32(0xffffffff + 1, w23); - ASSERT_EQUAL_32((1 << 30) + 1, w24); - ASSERT_EQUAL_32(0xf89abcdd + 1, w25); - ASSERT_EQUAL_32(0x91111110 + 1, w26); - ASSERT_EQUAL_32(0x9a222221 + 1, w27); + CHECK_EQUAL_32(0xffffffff + 1, w23); + CHECK_EQUAL_32((1 << 30) + 1, w24); + CHECK_EQUAL_32(0xf89abcdd + 1, w25); + CHECK_EQUAL_32(0x91111110 + 1, w26); + CHECK_EQUAL_32(0x9a222221 + 1, w27); // Check that adc correctly sets the condition flags. START(); @@ -3750,8 +3948,8 @@ TEST(adc_sbc_shift) { RUN(); - ASSERT_EQUAL_NZCV(ZCFlag); - ASSERT_EQUAL_64(0, x10); + CHECK_EQUAL_NZCV(ZCFlag); + CHECK_EQUAL_64(0, x10); START(); __ Mov(x0, 1); @@ -3763,8 +3961,8 @@ TEST(adc_sbc_shift) { RUN(); - ASSERT_EQUAL_NZCV(ZCFlag); - ASSERT_EQUAL_64(0, x10); + CHECK_EQUAL_NZCV(ZCFlag); + CHECK_EQUAL_64(0, x10); START(); __ Mov(x0, 0x10); @@ -3776,8 +3974,8 @@ TEST(adc_sbc_shift) { RUN(); - ASSERT_EQUAL_NZCV(NVFlag); - ASSERT_EQUAL_64(0x8000000000000000L, x10); + CHECK_EQUAL_NZCV(NVFlag); + CHECK_EQUAL_64(0x8000000000000000L, x10); // Check that sbc correctly sets the condition flags. 
START(); @@ -3790,8 +3988,8 @@ TEST(adc_sbc_shift) { RUN(); - ASSERT_EQUAL_NZCV(ZFlag); - ASSERT_EQUAL_64(0, x10); + CHECK_EQUAL_NZCV(ZFlag); + CHECK_EQUAL_64(0, x10); START(); __ Mov(x0, 1); @@ -3803,8 +4001,8 @@ TEST(adc_sbc_shift) { RUN(); - ASSERT_EQUAL_NZCV(NFlag); - ASSERT_EQUAL_64(0x8000000000000001L, x10); + CHECK_EQUAL_NZCV(NFlag); + CHECK_EQUAL_64(0x8000000000000001L, x10); START(); __ Mov(x0, 0); @@ -3815,8 +4013,8 @@ TEST(adc_sbc_shift) { RUN(); - ASSERT_EQUAL_NZCV(ZFlag); - ASSERT_EQUAL_64(0, x10); + CHECK_EQUAL_NZCV(ZFlag); + CHECK_EQUAL_64(0, x10); START() __ Mov(w0, 0x7fffffff); @@ -3827,8 +4025,8 @@ TEST(adc_sbc_shift) { RUN(); - ASSERT_EQUAL_NZCV(NFlag); - ASSERT_EQUAL_64(0x80000000, x10); + CHECK_EQUAL_NZCV(NFlag); + CHECK_EQUAL_64(0x80000000, x10); START(); // Clear the C flag. @@ -3838,8 +4036,8 @@ TEST(adc_sbc_shift) { RUN(); - ASSERT_EQUAL_NZCV(NFlag); - ASSERT_EQUAL_64(0x8000000000000000L, x10); + CHECK_EQUAL_NZCV(NFlag); + CHECK_EQUAL_64(0x8000000000000000L, x10); START() __ Mov(x0, 0); @@ -3850,8 +4048,8 @@ TEST(adc_sbc_shift) { RUN(); - ASSERT_EQUAL_NZCV(NFlag); - ASSERT_EQUAL_64(0xffffffffffffffffL, x10); + CHECK_EQUAL_NZCV(NFlag); + CHECK_EQUAL_64(0xffffffffffffffffL, x10); START() __ Mov(x0, 0); @@ -3862,8 +4060,8 @@ TEST(adc_sbc_shift) { RUN(); - ASSERT_EQUAL_NZCV(NFlag); - ASSERT_EQUAL_64(0x8000000000000001L, x10); + CHECK_EQUAL_NZCV(NFlag); + CHECK_EQUAL_64(0x8000000000000001L, x10); TEARDOWN(); } @@ -3905,23 +4103,23 @@ TEST(adc_sbc_extend) { RUN(); - ASSERT_EQUAL_64(0x1df, x10); - ASSERT_EQUAL_64(0xffffffffffff37bdL, x11); - ASSERT_EQUAL_64(0xfffffff765432110L, x12); - ASSERT_EQUAL_64(0x123456789abcdef1L, x13); + CHECK_EQUAL_64(0x1df, x10); + CHECK_EQUAL_64(0xffffffffffff37bdL, x11); + CHECK_EQUAL_64(0xfffffff765432110L, x12); + CHECK_EQUAL_64(0x123456789abcdef1L, x13); - ASSERT_EQUAL_32(0x1df, w14); - ASSERT_EQUAL_32(0xffff37bd, w15); - ASSERT_EQUAL_32(0x9abcdef1, w9); + CHECK_EQUAL_32(0x1df, w14); + CHECK_EQUAL_32(0xffff37bd, w15); + CHECK_EQUAL_32(0x9abcdef1, w9); - ASSERT_EQUAL_64(0x1df + 1, x20); - ASSERT_EQUAL_64(0xffffffffffff37bdL + 1, x21); - ASSERT_EQUAL_64(0xfffffff765432110L + 1, x22); - ASSERT_EQUAL_64(0x123456789abcdef1L + 1, x23); + CHECK_EQUAL_64(0x1df + 1, x20); + CHECK_EQUAL_64(0xffffffffffff37bdL + 1, x21); + CHECK_EQUAL_64(0xfffffff765432110L + 1, x22); + CHECK_EQUAL_64(0x123456789abcdef1L + 1, x23); - ASSERT_EQUAL_32(0x1df + 1, w24); - ASSERT_EQUAL_32(0xffff37bd + 1, w25); - ASSERT_EQUAL_32(0x9abcdef1 + 1, w26); + CHECK_EQUAL_32(0x1df + 1, w24); + CHECK_EQUAL_32(0xffff37bd + 1, w25); + CHECK_EQUAL_32(0x9abcdef1 + 1, w26); // Check that adc correctly sets the condition flags. 
START(); @@ -3934,7 +4132,7 @@ TEST(adc_sbc_extend) { RUN(); - ASSERT_EQUAL_NZCV(CFlag); + CHECK_EQUAL_NZCV(CFlag); START(); __ Mov(x0, 0x7fffffffffffffffL); @@ -3946,7 +4144,7 @@ TEST(adc_sbc_extend) { RUN(); - ASSERT_EQUAL_NZCV(NVFlag); + CHECK_EQUAL_NZCV(NVFlag); START(); __ Mov(x0, 0x7fffffffffffffffL); @@ -3957,7 +4155,7 @@ TEST(adc_sbc_extend) { RUN(); - ASSERT_EQUAL_NZCV(NVFlag); + CHECK_EQUAL_NZCV(NVFlag); TEARDOWN(); } @@ -3993,19 +4191,19 @@ TEST(adc_sbc_wide_imm) { RUN(); - ASSERT_EQUAL_64(0x1234567890abcdefUL, x7); - ASSERT_EQUAL_64(0xffffffff, x8); - ASSERT_EQUAL_64(0xedcba9876f543210UL, x9); - ASSERT_EQUAL_64(0, x10); - ASSERT_EQUAL_64(0xffffffff, x11); - ASSERT_EQUAL_64(0xffff, x12); + CHECK_EQUAL_64(0x1234567890abcdefUL, x7); + CHECK_EQUAL_64(0xffffffff, x8); + CHECK_EQUAL_64(0xedcba9876f543210UL, x9); + CHECK_EQUAL_64(0, x10); + CHECK_EQUAL_64(0xffffffff, x11); + CHECK_EQUAL_64(0xffff, x12); - ASSERT_EQUAL_64(0x1234567890abcdefUL + 1, x18); - ASSERT_EQUAL_64(0, x19); - ASSERT_EQUAL_64(0xedcba9876f543211UL, x20); - ASSERT_EQUAL_64(1, x21); - ASSERT_EQUAL_64(0x100000000UL, x22); - ASSERT_EQUAL_64(0x10000, x23); + CHECK_EQUAL_64(0x1234567890abcdefUL + 1, x18); + CHECK_EQUAL_64(0, x19); + CHECK_EQUAL_64(0xedcba9876f543211UL, x20); + CHECK_EQUAL_64(1, x21); + CHECK_EQUAL_64(0x100000000UL, x22); + CHECK_EQUAL_64(0x10000, x23); TEARDOWN(); } @@ -4031,11 +4229,11 @@ TEST(flags) { RUN(); - ASSERT_EQUAL_64(0, x10); - ASSERT_EQUAL_64(-0x1111111111111111L, x11); - ASSERT_EQUAL_32(-0x11111111, w12); - ASSERT_EQUAL_64(-1L, x13); - ASSERT_EQUAL_32(0, w14); + CHECK_EQUAL_64(0, x10); + CHECK_EQUAL_64(-0x1111111111111111L, x11); + CHECK_EQUAL_32(-0x11111111, w12); + CHECK_EQUAL_64(-1L, x13); + CHECK_EQUAL_32(0, w14); START(); __ Mov(x0, 0); @@ -4044,7 +4242,7 @@ TEST(flags) { RUN(); - ASSERT_EQUAL_NZCV(ZCFlag); + CHECK_EQUAL_NZCV(ZCFlag); START(); __ Mov(w0, 0); @@ -4053,7 +4251,7 @@ TEST(flags) { RUN(); - ASSERT_EQUAL_NZCV(ZCFlag); + CHECK_EQUAL_NZCV(ZCFlag); START(); __ Mov(x0, 0); @@ -4063,7 +4261,7 @@ TEST(flags) { RUN(); - ASSERT_EQUAL_NZCV(NFlag); + CHECK_EQUAL_NZCV(NFlag); START(); __ Mov(w0, 0); @@ -4073,7 +4271,7 @@ TEST(flags) { RUN(); - ASSERT_EQUAL_NZCV(NFlag); + CHECK_EQUAL_NZCV(NFlag); START(); __ Mov(x1, 0x1111111111111111L); @@ -4082,7 +4280,7 @@ TEST(flags) { RUN(); - ASSERT_EQUAL_NZCV(CFlag); + CHECK_EQUAL_NZCV(CFlag); START(); __ Mov(w1, 0x11111111); @@ -4091,7 +4289,7 @@ TEST(flags) { RUN(); - ASSERT_EQUAL_NZCV(CFlag); + CHECK_EQUAL_NZCV(CFlag); START(); __ Mov(x0, 1); @@ -4101,7 +4299,7 @@ TEST(flags) { RUN(); - ASSERT_EQUAL_NZCV(NVFlag); + CHECK_EQUAL_NZCV(NVFlag); START(); __ Mov(w0, 1); @@ -4111,7 +4309,7 @@ TEST(flags) { RUN(); - ASSERT_EQUAL_NZCV(NVFlag); + CHECK_EQUAL_NZCV(NVFlag); START(); __ Mov(x0, 1); @@ -4121,7 +4319,7 @@ TEST(flags) { RUN(); - ASSERT_EQUAL_NZCV(ZCFlag); + CHECK_EQUAL_NZCV(ZCFlag); START(); __ Mov(w0, 1); @@ -4131,7 +4329,7 @@ TEST(flags) { RUN(); - ASSERT_EQUAL_NZCV(ZCFlag); + CHECK_EQUAL_NZCV(ZCFlag); START(); __ Mov(w0, 0); @@ -4143,7 +4341,7 @@ TEST(flags) { RUN(); - ASSERT_EQUAL_NZCV(NFlag); + CHECK_EQUAL_NZCV(NFlag); START(); __ Mov(w0, 0); @@ -4155,7 +4353,7 @@ TEST(flags) { RUN(); - ASSERT_EQUAL_NZCV(ZCFlag); + CHECK_EQUAL_NZCV(ZCFlag); TEARDOWN(); } @@ -4204,14 +4402,14 @@ TEST(cmp_shift) { RUN(); - ASSERT_EQUAL_32(ZCFlag, w0); - ASSERT_EQUAL_32(ZCFlag, w1); - ASSERT_EQUAL_32(ZCFlag, w2); - ASSERT_EQUAL_32(ZCFlag, w3); - ASSERT_EQUAL_32(ZCFlag, w4); - ASSERT_EQUAL_32(ZCFlag, w5); - ASSERT_EQUAL_32(ZCFlag, w6); - 
ASSERT_EQUAL_32(ZCFlag, w7); + CHECK_EQUAL_32(ZCFlag, w0); + CHECK_EQUAL_32(ZCFlag, w1); + CHECK_EQUAL_32(ZCFlag, w2); + CHECK_EQUAL_32(ZCFlag, w3); + CHECK_EQUAL_32(ZCFlag, w4); + CHECK_EQUAL_32(ZCFlag, w5); + CHECK_EQUAL_32(ZCFlag, w6); + CHECK_EQUAL_32(ZCFlag, w7); TEARDOWN(); } @@ -4257,14 +4455,14 @@ TEST(cmp_extend) { RUN(); - ASSERT_EQUAL_32(ZCFlag, w0); - ASSERT_EQUAL_32(ZCFlag, w1); - ASSERT_EQUAL_32(ZCFlag, w2); - ASSERT_EQUAL_32(NCFlag, w3); - ASSERT_EQUAL_32(NCFlag, w4); - ASSERT_EQUAL_32(ZCFlag, w5); - ASSERT_EQUAL_32(NCFlag, w6); - ASSERT_EQUAL_32(ZCFlag, w7); + CHECK_EQUAL_32(ZCFlag, w0); + CHECK_EQUAL_32(ZCFlag, w1); + CHECK_EQUAL_32(ZCFlag, w2); + CHECK_EQUAL_32(NCFlag, w3); + CHECK_EQUAL_32(NCFlag, w4); + CHECK_EQUAL_32(ZCFlag, w5); + CHECK_EQUAL_32(NCFlag, w6); + CHECK_EQUAL_32(ZCFlag, w7); TEARDOWN(); } @@ -4303,12 +4501,12 @@ TEST(ccmp) { RUN(); - ASSERT_EQUAL_32(NFlag, w0); - ASSERT_EQUAL_32(NCFlag, w1); - ASSERT_EQUAL_32(NoFlag, w2); - ASSERT_EQUAL_32(NZCVFlag, w3); - ASSERT_EQUAL_32(ZCFlag, w4); - ASSERT_EQUAL_32(ZCFlag, w5); + CHECK_EQUAL_32(NFlag, w0); + CHECK_EQUAL_32(NCFlag, w1); + CHECK_EQUAL_32(NoFlag, w2); + CHECK_EQUAL_32(NZCVFlag, w3); + CHECK_EQUAL_32(ZCFlag, w4); + CHECK_EQUAL_32(ZCFlag, w5); TEARDOWN(); } @@ -4332,8 +4530,8 @@ TEST(ccmp_wide_imm) { RUN(); - ASSERT_EQUAL_32(NFlag, w0); - ASSERT_EQUAL_32(NoFlag, w1); + CHECK_EQUAL_32(NFlag, w0); + CHECK_EQUAL_32(NoFlag, w1); TEARDOWN(); } @@ -4373,11 +4571,11 @@ TEST(ccmp_shift_extend) { RUN(); - ASSERT_EQUAL_32(ZCFlag, w0); - ASSERT_EQUAL_32(ZCFlag, w1); - ASSERT_EQUAL_32(ZCFlag, w2); - ASSERT_EQUAL_32(NCFlag, w3); - ASSERT_EQUAL_32(NZCVFlag, w4); + CHECK_EQUAL_32(ZCFlag, w0); + CHECK_EQUAL_32(ZCFlag, w1); + CHECK_EQUAL_32(ZCFlag, w2); + CHECK_EQUAL_32(NCFlag, w3); + CHECK_EQUAL_32(NZCVFlag, w4); TEARDOWN(); } @@ -4427,27 +4625,27 @@ TEST(csel) { RUN(); - ASSERT_EQUAL_64(0x0000000f, x0); - ASSERT_EQUAL_64(0x0000001f, x1); - ASSERT_EQUAL_64(0x00000020, x2); - ASSERT_EQUAL_64(0x0000000f, x3); - ASSERT_EQUAL_64(0xffffffe0ffffffe0UL, x4); - ASSERT_EQUAL_64(0x0000000f0000000fUL, x5); - ASSERT_EQUAL_64(0xffffffe0ffffffe1UL, x6); - ASSERT_EQUAL_64(0x0000000f0000000fUL, x7); - ASSERT_EQUAL_64(0x00000001, x8); - ASSERT_EQUAL_64(0xffffffff, x9); - ASSERT_EQUAL_64(0x0000001f00000020UL, x10); - ASSERT_EQUAL_64(0xfffffff0fffffff0UL, x11); - ASSERT_EQUAL_64(0xfffffff0fffffff1UL, x12); - ASSERT_EQUAL_64(0x0000000f, x13); - ASSERT_EQUAL_64(0x0000000f0000000fUL, x14); - ASSERT_EQUAL_64(0x0000000f, x15); - ASSERT_EQUAL_64(0x0000000f0000000fUL, x18); - ASSERT_EQUAL_64(0, x24); - ASSERT_EQUAL_64(0x0000001f0000001fUL, x25); - ASSERT_EQUAL_64(0x0000001f0000001fUL, x26); - ASSERT_EQUAL_64(0, x27); + CHECK_EQUAL_64(0x0000000f, x0); + CHECK_EQUAL_64(0x0000001f, x1); + CHECK_EQUAL_64(0x00000020, x2); + CHECK_EQUAL_64(0x0000000f, x3); + CHECK_EQUAL_64(0xffffffe0ffffffe0UL, x4); + CHECK_EQUAL_64(0x0000000f0000000fUL, x5); + CHECK_EQUAL_64(0xffffffe0ffffffe1UL, x6); + CHECK_EQUAL_64(0x0000000f0000000fUL, x7); + CHECK_EQUAL_64(0x00000001, x8); + CHECK_EQUAL_64(0xffffffff, x9); + CHECK_EQUAL_64(0x0000001f00000020UL, x10); + CHECK_EQUAL_64(0xfffffff0fffffff0UL, x11); + CHECK_EQUAL_64(0xfffffff0fffffff1UL, x12); + CHECK_EQUAL_64(0x0000000f, x13); + CHECK_EQUAL_64(0x0000000f0000000fUL, x14); + CHECK_EQUAL_64(0x0000000f, x15); + CHECK_EQUAL_64(0x0000000f0000000fUL, x18); + CHECK_EQUAL_64(0, x24); + CHECK_EQUAL_64(0x0000001f0000001fUL, x25); + CHECK_EQUAL_64(0x0000001f0000001fUL, x26); + CHECK_EQUAL_64(0, x27); TEARDOWN(); } @@ -4485,23 
+4683,23 @@ TEST(csel_imm) { RUN(); - ASSERT_EQUAL_32(-2, w0); - ASSERT_EQUAL_32(-1, w1); - ASSERT_EQUAL_32(0, w2); - ASSERT_EQUAL_32(1, w3); - ASSERT_EQUAL_32(2, w4); - ASSERT_EQUAL_32(-1, w5); - ASSERT_EQUAL_32(0x40000000, w6); - ASSERT_EQUAL_32(0x80000000, w7); + CHECK_EQUAL_32(-2, w0); + CHECK_EQUAL_32(-1, w1); + CHECK_EQUAL_32(0, w2); + CHECK_EQUAL_32(1, w3); + CHECK_EQUAL_32(2, w4); + CHECK_EQUAL_32(-1, w5); + CHECK_EQUAL_32(0x40000000, w6); + CHECK_EQUAL_32(0x80000000, w7); - ASSERT_EQUAL_64(-2, x8); - ASSERT_EQUAL_64(-1, x9); - ASSERT_EQUAL_64(0, x10); - ASSERT_EQUAL_64(1, x11); - ASSERT_EQUAL_64(2, x12); - ASSERT_EQUAL_64(-1, x13); - ASSERT_EQUAL_64(0x4000000000000000UL, x14); - ASSERT_EQUAL_64(0x8000000000000000UL, x15); + CHECK_EQUAL_64(-2, x8); + CHECK_EQUAL_64(-1, x9); + CHECK_EQUAL_64(0, x10); + CHECK_EQUAL_64(1, x11); + CHECK_EQUAL_64(2, x12); + CHECK_EQUAL_64(-1, x13); + CHECK_EQUAL_64(0x4000000000000000UL, x14); + CHECK_EQUAL_64(0x8000000000000000UL, x15); TEARDOWN(); } @@ -4542,19 +4740,19 @@ TEST(lslv) { RUN(); - ASSERT_EQUAL_64(value, x0); - ASSERT_EQUAL_64(value << (shift[0] & 63), x16); - ASSERT_EQUAL_64(value << (shift[1] & 63), x17); - ASSERT_EQUAL_64(value << (shift[2] & 63), x18); - ASSERT_EQUAL_64(value << (shift[3] & 63), x19); - ASSERT_EQUAL_64(value << (shift[4] & 63), x20); - ASSERT_EQUAL_64(value << (shift[5] & 63), x21); - ASSERT_EQUAL_32(value << (shift[0] & 31), w22); - ASSERT_EQUAL_32(value << (shift[1] & 31), w23); - ASSERT_EQUAL_32(value << (shift[2] & 31), w24); - ASSERT_EQUAL_32(value << (shift[3] & 31), w25); - ASSERT_EQUAL_32(value << (shift[4] & 31), w26); - ASSERT_EQUAL_32(value << (shift[5] & 31), w27); + CHECK_EQUAL_64(value, x0); + CHECK_EQUAL_64(value << (shift[0] & 63), x16); + CHECK_EQUAL_64(value << (shift[1] & 63), x17); + CHECK_EQUAL_64(value << (shift[2] & 63), x18); + CHECK_EQUAL_64(value << (shift[3] & 63), x19); + CHECK_EQUAL_64(value << (shift[4] & 63), x20); + CHECK_EQUAL_64(value << (shift[5] & 63), x21); + CHECK_EQUAL_32(value << (shift[0] & 31), w22); + CHECK_EQUAL_32(value << (shift[1] & 31), w23); + CHECK_EQUAL_32(value << (shift[2] & 31), w24); + CHECK_EQUAL_32(value << (shift[3] & 31), w25); + CHECK_EQUAL_32(value << (shift[4] & 31), w26); + CHECK_EQUAL_32(value << (shift[5] & 31), w27); TEARDOWN(); } @@ -4595,21 +4793,21 @@ TEST(lsrv) { RUN(); - ASSERT_EQUAL_64(value, x0); - ASSERT_EQUAL_64(value >> (shift[0] & 63), x16); - ASSERT_EQUAL_64(value >> (shift[1] & 63), x17); - ASSERT_EQUAL_64(value >> (shift[2] & 63), x18); - ASSERT_EQUAL_64(value >> (shift[3] & 63), x19); - ASSERT_EQUAL_64(value >> (shift[4] & 63), x20); - ASSERT_EQUAL_64(value >> (shift[5] & 63), x21); + CHECK_EQUAL_64(value, x0); + CHECK_EQUAL_64(value >> (shift[0] & 63), x16); + CHECK_EQUAL_64(value >> (shift[1] & 63), x17); + CHECK_EQUAL_64(value >> (shift[2] & 63), x18); + CHECK_EQUAL_64(value >> (shift[3] & 63), x19); + CHECK_EQUAL_64(value >> (shift[4] & 63), x20); + CHECK_EQUAL_64(value >> (shift[5] & 63), x21); value &= 0xffffffffUL; - ASSERT_EQUAL_32(value >> (shift[0] & 31), w22); - ASSERT_EQUAL_32(value >> (shift[1] & 31), w23); - ASSERT_EQUAL_32(value >> (shift[2] & 31), w24); - ASSERT_EQUAL_32(value >> (shift[3] & 31), w25); - ASSERT_EQUAL_32(value >> (shift[4] & 31), w26); - ASSERT_EQUAL_32(value >> (shift[5] & 31), w27); + CHECK_EQUAL_32(value >> (shift[0] & 31), w22); + CHECK_EQUAL_32(value >> (shift[1] & 31), w23); + CHECK_EQUAL_32(value >> (shift[2] & 31), w24); + CHECK_EQUAL_32(value >> (shift[3] & 31), w25); + CHECK_EQUAL_32(value >> 
(shift[4] & 31), w26); + CHECK_EQUAL_32(value >> (shift[5] & 31), w27); TEARDOWN(); } @@ -4650,21 +4848,21 @@ TEST(asrv) { RUN(); - ASSERT_EQUAL_64(value, x0); - ASSERT_EQUAL_64(value >> (shift[0] & 63), x16); - ASSERT_EQUAL_64(value >> (shift[1] & 63), x17); - ASSERT_EQUAL_64(value >> (shift[2] & 63), x18); - ASSERT_EQUAL_64(value >> (shift[3] & 63), x19); - ASSERT_EQUAL_64(value >> (shift[4] & 63), x20); - ASSERT_EQUAL_64(value >> (shift[5] & 63), x21); + CHECK_EQUAL_64(value, x0); + CHECK_EQUAL_64(value >> (shift[0] & 63), x16); + CHECK_EQUAL_64(value >> (shift[1] & 63), x17); + CHECK_EQUAL_64(value >> (shift[2] & 63), x18); + CHECK_EQUAL_64(value >> (shift[3] & 63), x19); + CHECK_EQUAL_64(value >> (shift[4] & 63), x20); + CHECK_EQUAL_64(value >> (shift[5] & 63), x21); int32_t value32 = static_cast<int32_t>(value & 0xffffffffUL); - ASSERT_EQUAL_32(value32 >> (shift[0] & 31), w22); - ASSERT_EQUAL_32(value32 >> (shift[1] & 31), w23); - ASSERT_EQUAL_32(value32 >> (shift[2] & 31), w24); - ASSERT_EQUAL_32(value32 >> (shift[3] & 31), w25); - ASSERT_EQUAL_32(value32 >> (shift[4] & 31), w26); - ASSERT_EQUAL_32(value32 >> (shift[5] & 31), w27); + CHECK_EQUAL_32(value32 >> (shift[0] & 31), w22); + CHECK_EQUAL_32(value32 >> (shift[1] & 31), w23); + CHECK_EQUAL_32(value32 >> (shift[2] & 31), w24); + CHECK_EQUAL_32(value32 >> (shift[3] & 31), w25); + CHECK_EQUAL_32(value32 >> (shift[4] & 31), w26); + CHECK_EQUAL_32(value32 >> (shift[5] & 31), w27); TEARDOWN(); } @@ -4705,19 +4903,19 @@ TEST(rorv) { RUN(); - ASSERT_EQUAL_64(value, x0); - ASSERT_EQUAL_64(0xf0123456789abcdeUL, x16); - ASSERT_EQUAL_64(0xef0123456789abcdUL, x17); - ASSERT_EQUAL_64(0xdef0123456789abcUL, x18); - ASSERT_EQUAL_64(0xcdef0123456789abUL, x19); - ASSERT_EQUAL_64(0xabcdef0123456789UL, x20); - ASSERT_EQUAL_64(0x789abcdef0123456UL, x21); - ASSERT_EQUAL_32(0xf89abcde, w22); - ASSERT_EQUAL_32(0xef89abcd, w23); - ASSERT_EQUAL_32(0xdef89abc, w24); - ASSERT_EQUAL_32(0xcdef89ab, w25); - ASSERT_EQUAL_32(0xabcdef89, w26); - ASSERT_EQUAL_32(0xf89abcde, w27); + CHECK_EQUAL_64(value, x0); + CHECK_EQUAL_64(0xf0123456789abcdeUL, x16); + CHECK_EQUAL_64(0xef0123456789abcdUL, x17); + CHECK_EQUAL_64(0xdef0123456789abcUL, x18); + CHECK_EQUAL_64(0xcdef0123456789abUL, x19); + CHECK_EQUAL_64(0xabcdef0123456789UL, x20); + CHECK_EQUAL_64(0x789abcdef0123456UL, x21); + CHECK_EQUAL_32(0xf89abcde, w22); + CHECK_EQUAL_32(0xef89abcd, w23); + CHECK_EQUAL_32(0xdef89abc, w24); + CHECK_EQUAL_32(0xcdef89ab, w25); + CHECK_EQUAL_32(0xabcdef89, w26); + CHECK_EQUAL_32(0xf89abcde, w27); TEARDOWN(); } @@ -4751,14 +4949,14 @@ TEST(bfm) { RUN(); - ASSERT_EQUAL_64(0x88888888888889abL, x10); - ASSERT_EQUAL_64(0x8888cdef88888888L, x11); + CHECK_EQUAL_64(0x88888888888889abL, x10); + CHECK_EQUAL_64(0x8888cdef88888888L, x11); - ASSERT_EQUAL_32(0x888888ab, w20); - ASSERT_EQUAL_32(0x88cdef88, w21); + CHECK_EQUAL_32(0x888888ab, w20); + CHECK_EQUAL_32(0x88cdef88, w21); - ASSERT_EQUAL_64(0x8888888888ef8888L, x12); - ASSERT_EQUAL_64(0x88888888888888abL, x13); + CHECK_EQUAL_64(0x8888888888ef8888L, x12); + CHECK_EQUAL_64(0x88888888888888abL, x13); TEARDOWN(); } @@ -4800,28 +4998,28 @@ TEST(sbfm) { RUN(); - ASSERT_EQUAL_64(0xffffffffffff89abL, x10); - ASSERT_EQUAL_64(0xffffcdef00000000L, x11); - ASSERT_EQUAL_64(0x4567L, x12); - ASSERT_EQUAL_64(0x789abcdef0000L, x13); + CHECK_EQUAL_64(0xffffffffffff89abL, x10); + CHECK_EQUAL_64(0xffffcdef00000000L, x11); + CHECK_EQUAL_64(0x4567L, x12); + CHECK_EQUAL_64(0x789abcdef0000L, x13); - ASSERT_EQUAL_32(0xffffffab, w14); - 
ASSERT_EQUAL_32(0xffcdef00, w15); - ASSERT_EQUAL_32(0x54, w16); - ASSERT_EQUAL_32(0x00321000, w17); + CHECK_EQUAL_32(0xffffffab, w14); + CHECK_EQUAL_32(0xffcdef00, w15); + CHECK_EQUAL_32(0x54, w16); + CHECK_EQUAL_32(0x00321000, w17); - ASSERT_EQUAL_64(0x01234567L, x18); - ASSERT_EQUAL_64(0xfffffffffedcba98L, x19); - ASSERT_EQUAL_64(0xffffffffffcdef00L, x20); - ASSERT_EQUAL_64(0x321000L, x21); - ASSERT_EQUAL_64(0xffffffffffffabcdL, x22); - ASSERT_EQUAL_64(0x5432L, x23); - ASSERT_EQUAL_64(0xffffffffffffffefL, x24); - ASSERT_EQUAL_64(0x10, x25); - ASSERT_EQUAL_64(0xffffffffffffcdefL, x26); - ASSERT_EQUAL_64(0x3210, x27); - ASSERT_EQUAL_64(0xffffffff89abcdefL, x28); - ASSERT_EQUAL_64(0x76543210, x29); + CHECK_EQUAL_64(0x01234567L, x18); + CHECK_EQUAL_64(0xfffffffffedcba98L, x19); + CHECK_EQUAL_64(0xffffffffffcdef00L, x20); + CHECK_EQUAL_64(0x321000L, x21); + CHECK_EQUAL_64(0xffffffffffffabcdL, x22); + CHECK_EQUAL_64(0x5432L, x23); + CHECK_EQUAL_64(0xffffffffffffffefL, x24); + CHECK_EQUAL_64(0x10, x25); + CHECK_EQUAL_64(0xffffffffffffcdefL, x26); + CHECK_EQUAL_64(0x3210, x27); + CHECK_EQUAL_64(0xffffffff89abcdefL, x28); + CHECK_EQUAL_64(0x76543210, x29); TEARDOWN(); } @@ -4861,24 +5059,24 @@ TEST(ubfm) { RUN(); - ASSERT_EQUAL_64(0x00000000000089abL, x10); - ASSERT_EQUAL_64(0x0000cdef00000000L, x11); - ASSERT_EQUAL_64(0x4567L, x12); - ASSERT_EQUAL_64(0x789abcdef0000L, x13); + CHECK_EQUAL_64(0x00000000000089abL, x10); + CHECK_EQUAL_64(0x0000cdef00000000L, x11); + CHECK_EQUAL_64(0x4567L, x12); + CHECK_EQUAL_64(0x789abcdef0000L, x13); - ASSERT_EQUAL_32(0x000000ab, w25); - ASSERT_EQUAL_32(0x00cdef00, w26); - ASSERT_EQUAL_32(0x54, w27); - ASSERT_EQUAL_32(0x00321000, w28); + CHECK_EQUAL_32(0x000000ab, w25); + CHECK_EQUAL_32(0x00cdef00, w26); + CHECK_EQUAL_32(0x54, w27); + CHECK_EQUAL_32(0x00321000, w28); - ASSERT_EQUAL_64(0x8000000000000000L, x15); - ASSERT_EQUAL_64(0x0123456789abcdefL, x16); - ASSERT_EQUAL_64(0x01234567L, x17); - ASSERT_EQUAL_64(0xcdef00L, x18); - ASSERT_EQUAL_64(0xabcdL, x19); - ASSERT_EQUAL_64(0xefL, x20); - ASSERT_EQUAL_64(0xcdefL, x21); - ASSERT_EQUAL_64(0x89abcdefL, x22); + CHECK_EQUAL_64(0x8000000000000000L, x15); + CHECK_EQUAL_64(0x0123456789abcdefL, x16); + CHECK_EQUAL_64(0x01234567L, x17); + CHECK_EQUAL_64(0xcdef00L, x18); + CHECK_EQUAL_64(0xabcdL, x19); + CHECK_EQUAL_64(0xefL, x20); + CHECK_EQUAL_64(0xcdefL, x21); + CHECK_EQUAL_64(0x89abcdefL, x22); TEARDOWN(); } @@ -4893,26 +5091,30 @@ TEST(extr) { __ Mov(x2, 0xfedcba9876543210L); __ Extr(w10, w1, w2, 0); - __ Extr(w11, w1, w2, 1); - __ Extr(x12, x2, x1, 2); + __ Extr(x11, x1, x2, 0); + __ Extr(w12, w1, w2, 1); + __ Extr(x13, x2, x1, 2); - __ Ror(w13, w1, 0); - __ Ror(w14, w2, 17); - __ Ror(w15, w1, 31); - __ Ror(x18, x2, 1); - __ Ror(x19, x1, 63); + __ Ror(w20, w1, 0); + __ Ror(x21, x1, 0); + __ Ror(w22, w2, 17); + __ Ror(w23, w1, 31); + __ Ror(x24, x2, 1); + __ Ror(x25, x1, 63); END(); RUN(); - ASSERT_EQUAL_64(0x76543210, x10); - ASSERT_EQUAL_64(0xbb2a1908, x11); - ASSERT_EQUAL_64(0x0048d159e26af37bUL, x12); - ASSERT_EQUAL_64(0x89abcdef, x13); - ASSERT_EQUAL_64(0x19083b2a, x14); - ASSERT_EQUAL_64(0x13579bdf, x15); - ASSERT_EQUAL_64(0x7f6e5d4c3b2a1908UL, x18); - ASSERT_EQUAL_64(0x02468acf13579bdeUL, x19); + CHECK_EQUAL_64(0x76543210, x10); + CHECK_EQUAL_64(0xfedcba9876543210L, x11); + CHECK_EQUAL_64(0xbb2a1908, x12); + CHECK_EQUAL_64(0x0048d159e26af37bUL, x13); + CHECK_EQUAL_64(0x89abcdef, x20); + CHECK_EQUAL_64(0x0123456789abcdefL, x21); + CHECK_EQUAL_64(0x19083b2a, x22); + CHECK_EQUAL_64(0x13579bdf, x23); + 
CHECK_EQUAL_64(0x7f6e5d4c3b2a1908UL, x24); + CHECK_EQUAL_64(0x02468acf13579bdeUL, x25); TEARDOWN(); } @@ -4935,14 +5137,14 @@ TEST(fmov_imm) { RUN(); - ASSERT_EQUAL_FP32(1.0, s11); - ASSERT_EQUAL_FP64(-13.0, d22); - ASSERT_EQUAL_FP32(255.0, s1); - ASSERT_EQUAL_FP64(12.34567, d2); - ASSERT_EQUAL_FP32(0.0, s3); - ASSERT_EQUAL_FP64(0.0, d4); - ASSERT_EQUAL_FP32(kFP32PositiveInfinity, s5); - ASSERT_EQUAL_FP64(kFP64NegativeInfinity, d6); + CHECK_EQUAL_FP32(1.0, s11); + CHECK_EQUAL_FP64(-13.0, d22); + CHECK_EQUAL_FP32(255.0, s1); + CHECK_EQUAL_FP64(12.34567, d2); + CHECK_EQUAL_FP32(0.0, s3); + CHECK_EQUAL_FP64(0.0, d4); + CHECK_EQUAL_FP32(kFP32PositiveInfinity, s5); + CHECK_EQUAL_FP64(kFP64NegativeInfinity, d6); TEARDOWN(); } @@ -4967,13 +5169,13 @@ TEST(fmov_reg) { RUN(); - ASSERT_EQUAL_32(float_to_rawbits(1.0), w10); - ASSERT_EQUAL_FP32(1.0, s30); - ASSERT_EQUAL_FP32(1.0, s5); - ASSERT_EQUAL_64(double_to_rawbits(-13.0), x1); - ASSERT_EQUAL_FP64(-13.0, d2); - ASSERT_EQUAL_FP64(-13.0, d4); - ASSERT_EQUAL_FP32(rawbits_to_float(0x89abcdef), s6); + CHECK_EQUAL_32(float_to_rawbits(1.0), w10); + CHECK_EQUAL_FP32(1.0, s30); + CHECK_EQUAL_FP32(1.0, s5); + CHECK_EQUAL_64(double_to_rawbits(-13.0), x1); + CHECK_EQUAL_FP64(-13.0, d2); + CHECK_EQUAL_FP64(-13.0, d4); + CHECK_EQUAL_FP32(rawbits_to_float(0x89abcdef), s6); TEARDOWN(); } @@ -5017,20 +5219,20 @@ TEST(fadd) { RUN(); - ASSERT_EQUAL_FP32(4.25, s0); - ASSERT_EQUAL_FP32(1.0, s1); - ASSERT_EQUAL_FP32(1.0, s2); - ASSERT_EQUAL_FP32(kFP32PositiveInfinity, s3); - ASSERT_EQUAL_FP32(kFP32NegativeInfinity, s4); - ASSERT_EQUAL_FP32(kFP32DefaultNaN, s5); - ASSERT_EQUAL_FP32(kFP32DefaultNaN, s6); - ASSERT_EQUAL_FP64(0.25, d7); - ASSERT_EQUAL_FP64(2.25, d8); - ASSERT_EQUAL_FP64(2.25, d9); - ASSERT_EQUAL_FP64(kFP64PositiveInfinity, d10); - ASSERT_EQUAL_FP64(kFP64NegativeInfinity, d11); - ASSERT_EQUAL_FP64(kFP64DefaultNaN, d12); - ASSERT_EQUAL_FP64(kFP64DefaultNaN, d13); + CHECK_EQUAL_FP32(4.25, s0); + CHECK_EQUAL_FP32(1.0, s1); + CHECK_EQUAL_FP32(1.0, s2); + CHECK_EQUAL_FP32(kFP32PositiveInfinity, s3); + CHECK_EQUAL_FP32(kFP32NegativeInfinity, s4); + CHECK_EQUAL_FP32(kFP32DefaultNaN, s5); + CHECK_EQUAL_FP32(kFP32DefaultNaN, s6); + CHECK_EQUAL_FP64(0.25, d7); + CHECK_EQUAL_FP64(2.25, d8); + CHECK_EQUAL_FP64(2.25, d9); + CHECK_EQUAL_FP64(kFP64PositiveInfinity, d10); + CHECK_EQUAL_FP64(kFP64NegativeInfinity, d11); + CHECK_EQUAL_FP64(kFP64DefaultNaN, d12); + CHECK_EQUAL_FP64(kFP64DefaultNaN, d13); TEARDOWN(); } @@ -5074,20 +5276,20 @@ TEST(fsub) { RUN(); - ASSERT_EQUAL_FP32(2.25, s0); - ASSERT_EQUAL_FP32(1.0, s1); - ASSERT_EQUAL_FP32(-1.0, s2); - ASSERT_EQUAL_FP32(kFP32NegativeInfinity, s3); - ASSERT_EQUAL_FP32(kFP32PositiveInfinity, s4); - ASSERT_EQUAL_FP32(kFP32DefaultNaN, s5); - ASSERT_EQUAL_FP32(kFP32DefaultNaN, s6); - ASSERT_EQUAL_FP64(-4.25, d7); - ASSERT_EQUAL_FP64(-2.25, d8); - ASSERT_EQUAL_FP64(-2.25, d9); - ASSERT_EQUAL_FP64(kFP64NegativeInfinity, d10); - ASSERT_EQUAL_FP64(kFP64PositiveInfinity, d11); - ASSERT_EQUAL_FP64(kFP64DefaultNaN, d12); - ASSERT_EQUAL_FP64(kFP64DefaultNaN, d13); + CHECK_EQUAL_FP32(2.25, s0); + CHECK_EQUAL_FP32(1.0, s1); + CHECK_EQUAL_FP32(-1.0, s2); + CHECK_EQUAL_FP32(kFP32NegativeInfinity, s3); + CHECK_EQUAL_FP32(kFP32PositiveInfinity, s4); + CHECK_EQUAL_FP32(kFP32DefaultNaN, s5); + CHECK_EQUAL_FP32(kFP32DefaultNaN, s6); + CHECK_EQUAL_FP64(-4.25, d7); + CHECK_EQUAL_FP64(-2.25, d8); + CHECK_EQUAL_FP64(-2.25, d9); + CHECK_EQUAL_FP64(kFP64NegativeInfinity, d10); + CHECK_EQUAL_FP64(kFP64PositiveInfinity, d11); + 
CHECK_EQUAL_FP64(kFP64DefaultNaN, d12); + CHECK_EQUAL_FP64(kFP64DefaultNaN, d13); TEARDOWN(); } @@ -5132,20 +5334,20 @@ TEST(fmul) { RUN(); - ASSERT_EQUAL_FP32(6.5, s0); - ASSERT_EQUAL_FP32(0.0, s1); - ASSERT_EQUAL_FP32(0.0, s2); - ASSERT_EQUAL_FP32(kFP32NegativeInfinity, s3); - ASSERT_EQUAL_FP32(kFP32PositiveInfinity, s4); - ASSERT_EQUAL_FP32(kFP32DefaultNaN, s5); - ASSERT_EQUAL_FP32(kFP32DefaultNaN, s6); - ASSERT_EQUAL_FP64(-4.5, d7); - ASSERT_EQUAL_FP64(0.0, d8); - ASSERT_EQUAL_FP64(0.0, d9); - ASSERT_EQUAL_FP64(kFP64NegativeInfinity, d10); - ASSERT_EQUAL_FP64(kFP64PositiveInfinity, d11); - ASSERT_EQUAL_FP64(kFP64DefaultNaN, d12); - ASSERT_EQUAL_FP64(kFP64DefaultNaN, d13); + CHECK_EQUAL_FP32(6.5, s0); + CHECK_EQUAL_FP32(0.0, s1); + CHECK_EQUAL_FP32(0.0, s2); + CHECK_EQUAL_FP32(kFP32NegativeInfinity, s3); + CHECK_EQUAL_FP32(kFP32PositiveInfinity, s4); + CHECK_EQUAL_FP32(kFP32DefaultNaN, s5); + CHECK_EQUAL_FP32(kFP32DefaultNaN, s6); + CHECK_EQUAL_FP64(-4.5, d7); + CHECK_EQUAL_FP64(0.0, d8); + CHECK_EQUAL_FP64(0.0, d9); + CHECK_EQUAL_FP64(kFP64NegativeInfinity, d10); + CHECK_EQUAL_FP64(kFP64PositiveInfinity, d11); + CHECK_EQUAL_FP64(kFP64DefaultNaN, d12); + CHECK_EQUAL_FP64(kFP64DefaultNaN, d13); TEARDOWN(); } @@ -5168,10 +5370,10 @@ static void FmaddFmsubHelper(double n, double m, double a, END(); RUN(); - ASSERT_EQUAL_FP64(fmadd, d28); - ASSERT_EQUAL_FP64(fmsub, d29); - ASSERT_EQUAL_FP64(fnmadd, d30); - ASSERT_EQUAL_FP64(fnmsub, d31); + CHECK_EQUAL_FP64(fmadd, d28); + CHECK_EQUAL_FP64(fmsub, d29); + CHECK_EQUAL_FP64(fnmadd, d30); + CHECK_EQUAL_FP64(fnmsub, d31); TEARDOWN(); } @@ -5236,10 +5438,10 @@ static void FmaddFmsubHelper(float n, float m, float a, END(); RUN(); - ASSERT_EQUAL_FP32(fmadd, s28); - ASSERT_EQUAL_FP32(fmsub, s29); - ASSERT_EQUAL_FP32(fnmadd, s30); - ASSERT_EQUAL_FP32(fnmsub, s31); + CHECK_EQUAL_FP32(fmadd, s28); + CHECK_EQUAL_FP32(fmsub, s29); + CHECK_EQUAL_FP32(fnmadd, s30); + CHECK_EQUAL_FP32(fnmsub, s31); TEARDOWN(); } @@ -5295,12 +5497,12 @@ TEST(fmadd_fmsub_double_nans) { double q1 = rawbits_to_double(0x7ffaaaaa11111111); double q2 = rawbits_to_double(0x7ffaaaaa22222222); double qa = rawbits_to_double(0x7ffaaaaaaaaaaaaa); - ASSERT(IsSignallingNaN(s1)); - ASSERT(IsSignallingNaN(s2)); - ASSERT(IsSignallingNaN(sa)); - ASSERT(IsQuietNaN(q1)); - ASSERT(IsQuietNaN(q2)); - ASSERT(IsQuietNaN(qa)); + DCHECK(IsSignallingNaN(s1)); + DCHECK(IsSignallingNaN(s2)); + DCHECK(IsSignallingNaN(sa)); + DCHECK(IsQuietNaN(q1)); + DCHECK(IsQuietNaN(q2)); + DCHECK(IsQuietNaN(qa)); // The input NaNs after passing through ProcessNaN. double s1_proc = rawbits_to_double(0x7ffd555511111111); @@ -5309,22 +5511,22 @@ TEST(fmadd_fmsub_double_nans) { double q1_proc = q1; double q2_proc = q2; double qa_proc = qa; - ASSERT(IsQuietNaN(s1_proc)); - ASSERT(IsQuietNaN(s2_proc)); - ASSERT(IsQuietNaN(sa_proc)); - ASSERT(IsQuietNaN(q1_proc)); - ASSERT(IsQuietNaN(q2_proc)); - ASSERT(IsQuietNaN(qa_proc)); + DCHECK(IsQuietNaN(s1_proc)); + DCHECK(IsQuietNaN(s2_proc)); + DCHECK(IsQuietNaN(sa_proc)); + DCHECK(IsQuietNaN(q1_proc)); + DCHECK(IsQuietNaN(q2_proc)); + DCHECK(IsQuietNaN(qa_proc)); // Negated NaNs as it would be done on ARMv8 hardware. 
double s1_proc_neg = rawbits_to_double(0xfffd555511111111); double sa_proc_neg = rawbits_to_double(0xfffd5555aaaaaaaa); double q1_proc_neg = rawbits_to_double(0xfffaaaaa11111111); double qa_proc_neg = rawbits_to_double(0xfffaaaaaaaaaaaaa); - ASSERT(IsQuietNaN(s1_proc_neg)); - ASSERT(IsQuietNaN(sa_proc_neg)); - ASSERT(IsQuietNaN(q1_proc_neg)); - ASSERT(IsQuietNaN(qa_proc_neg)); + DCHECK(IsQuietNaN(s1_proc_neg)); + DCHECK(IsQuietNaN(sa_proc_neg)); + DCHECK(IsQuietNaN(q1_proc_neg)); + DCHECK(IsQuietNaN(qa_proc_neg)); // Quiet NaNs are propagated. FmaddFmsubHelper(q1, 0, 0, q1_proc, q1_proc_neg, q1_proc_neg, q1_proc); @@ -5378,12 +5580,12 @@ TEST(fmadd_fmsub_float_nans) { float q1 = rawbits_to_float(0x7fea1111); float q2 = rawbits_to_float(0x7fea2222); float qa = rawbits_to_float(0x7feaaaaa); - ASSERT(IsSignallingNaN(s1)); - ASSERT(IsSignallingNaN(s2)); - ASSERT(IsSignallingNaN(sa)); - ASSERT(IsQuietNaN(q1)); - ASSERT(IsQuietNaN(q2)); - ASSERT(IsQuietNaN(qa)); + DCHECK(IsSignallingNaN(s1)); + DCHECK(IsSignallingNaN(s2)); + DCHECK(IsSignallingNaN(sa)); + DCHECK(IsQuietNaN(q1)); + DCHECK(IsQuietNaN(q2)); + DCHECK(IsQuietNaN(qa)); // The input NaNs after passing through ProcessNaN. float s1_proc = rawbits_to_float(0x7fd51111); @@ -5392,22 +5594,22 @@ TEST(fmadd_fmsub_float_nans) { float q1_proc = q1; float q2_proc = q2; float qa_proc = qa; - ASSERT(IsQuietNaN(s1_proc)); - ASSERT(IsQuietNaN(s2_proc)); - ASSERT(IsQuietNaN(sa_proc)); - ASSERT(IsQuietNaN(q1_proc)); - ASSERT(IsQuietNaN(q2_proc)); - ASSERT(IsQuietNaN(qa_proc)); + DCHECK(IsQuietNaN(s1_proc)); + DCHECK(IsQuietNaN(s2_proc)); + DCHECK(IsQuietNaN(sa_proc)); + DCHECK(IsQuietNaN(q1_proc)); + DCHECK(IsQuietNaN(q2_proc)); + DCHECK(IsQuietNaN(qa_proc)); // Negated NaNs as it would be done on ARMv8 hardware. float s1_proc_neg = rawbits_to_float(0xffd51111); float sa_proc_neg = rawbits_to_float(0xffd5aaaa); float q1_proc_neg = rawbits_to_float(0xffea1111); float qa_proc_neg = rawbits_to_float(0xffeaaaaa); - ASSERT(IsQuietNaN(s1_proc_neg)); - ASSERT(IsQuietNaN(sa_proc_neg)); - ASSERT(IsQuietNaN(q1_proc_neg)); - ASSERT(IsQuietNaN(qa_proc_neg)); + DCHECK(IsQuietNaN(s1_proc_neg)); + DCHECK(IsQuietNaN(sa_proc_neg)); + DCHECK(IsQuietNaN(q1_proc_neg)); + DCHECK(IsQuietNaN(qa_proc_neg)); // Quiet NaNs are propagated. 
FmaddFmsubHelper(q1, 0, 0, q1_proc, q1_proc_neg, q1_proc_neg, q1_proc); @@ -5491,20 +5693,20 @@ TEST(fdiv) { RUN(); - ASSERT_EQUAL_FP32(1.625f, s0); - ASSERT_EQUAL_FP32(1.0f, s1); - ASSERT_EQUAL_FP32(-0.0f, s2); - ASSERT_EQUAL_FP32(0.0f, s3); - ASSERT_EQUAL_FP32(-0.0f, s4); - ASSERT_EQUAL_FP32(kFP32DefaultNaN, s5); - ASSERT_EQUAL_FP32(kFP32DefaultNaN, s6); - ASSERT_EQUAL_FP64(-1.125, d7); - ASSERT_EQUAL_FP64(0.0, d8); - ASSERT_EQUAL_FP64(-0.0, d9); - ASSERT_EQUAL_FP64(0.0, d10); - ASSERT_EQUAL_FP64(-0.0, d11); - ASSERT_EQUAL_FP64(kFP64DefaultNaN, d12); - ASSERT_EQUAL_FP64(kFP64DefaultNaN, d13); + CHECK_EQUAL_FP32(1.625f, s0); + CHECK_EQUAL_FP32(1.0f, s1); + CHECK_EQUAL_FP32(-0.0f, s2); + CHECK_EQUAL_FP32(0.0f, s3); + CHECK_EQUAL_FP32(-0.0f, s4); + CHECK_EQUAL_FP32(kFP32DefaultNaN, s5); + CHECK_EQUAL_FP32(kFP32DefaultNaN, s6); + CHECK_EQUAL_FP64(-1.125, d7); + CHECK_EQUAL_FP64(0.0, d8); + CHECK_EQUAL_FP64(-0.0, d9); + CHECK_EQUAL_FP64(0.0, d10); + CHECK_EQUAL_FP64(-0.0, d11); + CHECK_EQUAL_FP64(kFP64DefaultNaN, d12); + CHECK_EQUAL_FP64(kFP64DefaultNaN, d13); TEARDOWN(); } @@ -5607,10 +5809,10 @@ static void FminFmaxDoubleHelper(double n, double m, double min, double max, RUN(); - ASSERT_EQUAL_FP64(min, d28); - ASSERT_EQUAL_FP64(max, d29); - ASSERT_EQUAL_FP64(minnm, d30); - ASSERT_EQUAL_FP64(maxnm, d31); + CHECK_EQUAL_FP64(min, d28); + CHECK_EQUAL_FP64(max, d29); + CHECK_EQUAL_FP64(minnm, d30); + CHECK_EQUAL_FP64(maxnm, d31); TEARDOWN(); } @@ -5625,10 +5827,10 @@ TEST(fmax_fmin_d) { double snan_processed = rawbits_to_double(0x7ffd555512345678); double qnan_processed = qnan; - ASSERT(IsSignallingNaN(snan)); - ASSERT(IsQuietNaN(qnan)); - ASSERT(IsQuietNaN(snan_processed)); - ASSERT(IsQuietNaN(qnan_processed)); + DCHECK(IsSignallingNaN(snan)); + DCHECK(IsQuietNaN(qnan)); + DCHECK(IsQuietNaN(snan_processed)); + DCHECK(IsQuietNaN(qnan_processed)); // Bootstrap tests. FminFmaxDoubleHelper(0, 0, 0, 0, 0, 0); @@ -5692,10 +5894,10 @@ static void FminFmaxFloatHelper(float n, float m, float min, float max, RUN(); - ASSERT_EQUAL_FP32(min, s28); - ASSERT_EQUAL_FP32(max, s29); - ASSERT_EQUAL_FP32(minnm, s30); - ASSERT_EQUAL_FP32(maxnm, s31); + CHECK_EQUAL_FP32(min, s28); + CHECK_EQUAL_FP32(max, s29); + CHECK_EQUAL_FP32(minnm, s30); + CHECK_EQUAL_FP32(maxnm, s31); TEARDOWN(); } @@ -5710,10 +5912,10 @@ TEST(fmax_fmin_s) { float snan_processed = rawbits_to_float(0x7fd51234); float qnan_processed = qnan; - ASSERT(IsSignallingNaN(snan)); - ASSERT(IsQuietNaN(qnan)); - ASSERT(IsQuietNaN(snan_processed)); - ASSERT(IsQuietNaN(qnan_processed)); + DCHECK(IsSignallingNaN(snan)); + DCHECK(IsQuietNaN(qnan)); + DCHECK(IsQuietNaN(snan_processed)); + DCHECK(IsQuietNaN(qnan_processed)); // Bootstrap tests. 
FminFmaxFloatHelper(0, 0, 0, 0, 0, 0); @@ -5815,16 +6017,16 @@ TEST(fccmp) { RUN(); - ASSERT_EQUAL_32(ZCFlag, w0); - ASSERT_EQUAL_32(VFlag, w1); - ASSERT_EQUAL_32(NFlag, w2); - ASSERT_EQUAL_32(CVFlag, w3); - ASSERT_EQUAL_32(ZCFlag, w4); - ASSERT_EQUAL_32(ZVFlag, w5); - ASSERT_EQUAL_32(CFlag, w6); - ASSERT_EQUAL_32(NFlag, w7); - ASSERT_EQUAL_32(ZCFlag, w8); - ASSERT_EQUAL_32(ZCFlag, w9); + CHECK_EQUAL_32(ZCFlag, w0); + CHECK_EQUAL_32(VFlag, w1); + CHECK_EQUAL_32(NFlag, w2); + CHECK_EQUAL_32(CVFlag, w3); + CHECK_EQUAL_32(ZCFlag, w4); + CHECK_EQUAL_32(ZVFlag, w5); + CHECK_EQUAL_32(CFlag, w6); + CHECK_EQUAL_32(NFlag, w7); + CHECK_EQUAL_32(ZCFlag, w8); + CHECK_EQUAL_32(ZCFlag, w9); TEARDOWN(); } @@ -5894,20 +6096,20 @@ TEST(fcmp) { RUN(); - ASSERT_EQUAL_32(ZCFlag, w0); - ASSERT_EQUAL_32(NFlag, w1); - ASSERT_EQUAL_32(CFlag, w2); - ASSERT_EQUAL_32(CVFlag, w3); - ASSERT_EQUAL_32(CVFlag, w4); - ASSERT_EQUAL_32(ZCFlag, w5); - ASSERT_EQUAL_32(NFlag, w6); - ASSERT_EQUAL_32(ZCFlag, w10); - ASSERT_EQUAL_32(NFlag, w11); - ASSERT_EQUAL_32(CFlag, w12); - ASSERT_EQUAL_32(CVFlag, w13); - ASSERT_EQUAL_32(CVFlag, w14); - ASSERT_EQUAL_32(ZCFlag, w15); - ASSERT_EQUAL_32(NFlag, w16); + CHECK_EQUAL_32(ZCFlag, w0); + CHECK_EQUAL_32(NFlag, w1); + CHECK_EQUAL_32(CFlag, w2); + CHECK_EQUAL_32(CVFlag, w3); + CHECK_EQUAL_32(CVFlag, w4); + CHECK_EQUAL_32(ZCFlag, w5); + CHECK_EQUAL_32(NFlag, w6); + CHECK_EQUAL_32(ZCFlag, w10); + CHECK_EQUAL_32(NFlag, w11); + CHECK_EQUAL_32(CFlag, w12); + CHECK_EQUAL_32(CVFlag, w13); + CHECK_EQUAL_32(CVFlag, w14); + CHECK_EQUAL_32(ZCFlag, w15); + CHECK_EQUAL_32(NFlag, w16); TEARDOWN(); } @@ -5935,12 +6137,12 @@ TEST(fcsel) { RUN(); - ASSERT_EQUAL_FP32(1.0, s0); - ASSERT_EQUAL_FP32(2.0, s1); - ASSERT_EQUAL_FP64(3.0, d2); - ASSERT_EQUAL_FP64(4.0, d3); - ASSERT_EQUAL_FP32(1.0, s4); - ASSERT_EQUAL_FP64(3.0, d5); + CHECK_EQUAL_FP32(1.0, s0); + CHECK_EQUAL_FP32(2.0, s1); + CHECK_EQUAL_FP64(3.0, d2); + CHECK_EQUAL_FP64(4.0, d3); + CHECK_EQUAL_FP32(1.0, s4); + CHECK_EQUAL_FP64(3.0, d5); TEARDOWN(); } @@ -5974,18 +6176,18 @@ TEST(fneg) { RUN(); - ASSERT_EQUAL_FP32(-1.0, s0); - ASSERT_EQUAL_FP32(1.0, s1); - ASSERT_EQUAL_FP32(-0.0, s2); - ASSERT_EQUAL_FP32(0.0, s3); - ASSERT_EQUAL_FP32(kFP32NegativeInfinity, s4); - ASSERT_EQUAL_FP32(kFP32PositiveInfinity, s5); - ASSERT_EQUAL_FP64(-1.0, d6); - ASSERT_EQUAL_FP64(1.0, d7); - ASSERT_EQUAL_FP64(-0.0, d8); - ASSERT_EQUAL_FP64(0.0, d9); - ASSERT_EQUAL_FP64(kFP64NegativeInfinity, d10); - ASSERT_EQUAL_FP64(kFP64PositiveInfinity, d11); + CHECK_EQUAL_FP32(-1.0, s0); + CHECK_EQUAL_FP32(1.0, s1); + CHECK_EQUAL_FP32(-0.0, s2); + CHECK_EQUAL_FP32(0.0, s3); + CHECK_EQUAL_FP32(kFP32NegativeInfinity, s4); + CHECK_EQUAL_FP32(kFP32PositiveInfinity, s5); + CHECK_EQUAL_FP64(-1.0, d6); + CHECK_EQUAL_FP64(1.0, d7); + CHECK_EQUAL_FP64(-0.0, d8); + CHECK_EQUAL_FP64(0.0, d9); + CHECK_EQUAL_FP64(kFP64NegativeInfinity, d10); + CHECK_EQUAL_FP64(kFP64PositiveInfinity, d11); TEARDOWN(); } @@ -6015,14 +6217,14 @@ TEST(fabs) { RUN(); - ASSERT_EQUAL_FP32(1.0, s0); - ASSERT_EQUAL_FP32(1.0, s1); - ASSERT_EQUAL_FP32(0.0, s2); - ASSERT_EQUAL_FP32(kFP32PositiveInfinity, s3); - ASSERT_EQUAL_FP64(1.0, d4); - ASSERT_EQUAL_FP64(1.0, d5); - ASSERT_EQUAL_FP64(0.0, d6); - ASSERT_EQUAL_FP64(kFP64PositiveInfinity, d7); + CHECK_EQUAL_FP32(1.0, s0); + CHECK_EQUAL_FP32(1.0, s1); + CHECK_EQUAL_FP32(0.0, s2); + CHECK_EQUAL_FP32(kFP32PositiveInfinity, s3); + CHECK_EQUAL_FP64(1.0, d4); + CHECK_EQUAL_FP64(1.0, d5); + CHECK_EQUAL_FP64(0.0, d6); + CHECK_EQUAL_FP64(kFP64PositiveInfinity, d7); TEARDOWN(); } @@ 
-6066,20 +6268,20 @@ TEST(fsqrt) { RUN(); - ASSERT_EQUAL_FP32(0.0, s0); - ASSERT_EQUAL_FP32(1.0, s1); - ASSERT_EQUAL_FP32(0.5, s2); - ASSERT_EQUAL_FP32(256.0, s3); - ASSERT_EQUAL_FP32(-0.0, s4); - ASSERT_EQUAL_FP32(kFP32PositiveInfinity, s5); - ASSERT_EQUAL_FP32(kFP32DefaultNaN, s6); - ASSERT_EQUAL_FP64(0.0, d7); - ASSERT_EQUAL_FP64(1.0, d8); - ASSERT_EQUAL_FP64(0.5, d9); - ASSERT_EQUAL_FP64(65536.0, d10); - ASSERT_EQUAL_FP64(-0.0, d11); - ASSERT_EQUAL_FP64(kFP32PositiveInfinity, d12); - ASSERT_EQUAL_FP64(kFP64DefaultNaN, d13); + CHECK_EQUAL_FP32(0.0, s0); + CHECK_EQUAL_FP32(1.0, s1); + CHECK_EQUAL_FP32(0.5, s2); + CHECK_EQUAL_FP32(256.0, s3); + CHECK_EQUAL_FP32(-0.0, s4); + CHECK_EQUAL_FP32(kFP32PositiveInfinity, s5); + CHECK_EQUAL_FP32(kFP32DefaultNaN, s6); + CHECK_EQUAL_FP64(0.0, d7); + CHECK_EQUAL_FP64(1.0, d8); + CHECK_EQUAL_FP64(0.5, d9); + CHECK_EQUAL_FP64(65536.0, d10); + CHECK_EQUAL_FP64(-0.0, d11); + CHECK_EQUAL_FP64(kFP32PositiveInfinity, d12); + CHECK_EQUAL_FP64(kFP64DefaultNaN, d13); TEARDOWN(); } @@ -6145,30 +6347,30 @@ TEST(frinta) { RUN(); - ASSERT_EQUAL_FP32(1.0, s0); - ASSERT_EQUAL_FP32(1.0, s1); - ASSERT_EQUAL_FP32(2.0, s2); - ASSERT_EQUAL_FP32(2.0, s3); - ASSERT_EQUAL_FP32(3.0, s4); - ASSERT_EQUAL_FP32(-2.0, s5); - ASSERT_EQUAL_FP32(-3.0, s6); - ASSERT_EQUAL_FP32(kFP32PositiveInfinity, s7); - ASSERT_EQUAL_FP32(kFP32NegativeInfinity, s8); - ASSERT_EQUAL_FP32(0.0, s9); - ASSERT_EQUAL_FP32(-0.0, s10); - ASSERT_EQUAL_FP32(-0.0, s11); - ASSERT_EQUAL_FP64(1.0, d12); - ASSERT_EQUAL_FP64(1.0, d13); - ASSERT_EQUAL_FP64(2.0, d14); - ASSERT_EQUAL_FP64(2.0, d15); - ASSERT_EQUAL_FP64(3.0, d16); - ASSERT_EQUAL_FP64(-2.0, d17); - ASSERT_EQUAL_FP64(-3.0, d18); - ASSERT_EQUAL_FP64(kFP64PositiveInfinity, d19); - ASSERT_EQUAL_FP64(kFP64NegativeInfinity, d20); - ASSERT_EQUAL_FP64(0.0, d21); - ASSERT_EQUAL_FP64(-0.0, d22); - ASSERT_EQUAL_FP64(-0.0, d23); + CHECK_EQUAL_FP32(1.0, s0); + CHECK_EQUAL_FP32(1.0, s1); + CHECK_EQUAL_FP32(2.0, s2); + CHECK_EQUAL_FP32(2.0, s3); + CHECK_EQUAL_FP32(3.0, s4); + CHECK_EQUAL_FP32(-2.0, s5); + CHECK_EQUAL_FP32(-3.0, s6); + CHECK_EQUAL_FP32(kFP32PositiveInfinity, s7); + CHECK_EQUAL_FP32(kFP32NegativeInfinity, s8); + CHECK_EQUAL_FP32(0.0, s9); + CHECK_EQUAL_FP32(-0.0, s10); + CHECK_EQUAL_FP32(-0.0, s11); + CHECK_EQUAL_FP64(1.0, d12); + CHECK_EQUAL_FP64(1.0, d13); + CHECK_EQUAL_FP64(2.0, d14); + CHECK_EQUAL_FP64(2.0, d15); + CHECK_EQUAL_FP64(3.0, d16); + CHECK_EQUAL_FP64(-2.0, d17); + CHECK_EQUAL_FP64(-3.0, d18); + CHECK_EQUAL_FP64(kFP64PositiveInfinity, d19); + CHECK_EQUAL_FP64(kFP64NegativeInfinity, d20); + CHECK_EQUAL_FP64(0.0, d21); + CHECK_EQUAL_FP64(-0.0, d22); + CHECK_EQUAL_FP64(-0.0, d23); TEARDOWN(); } @@ -6234,30 +6436,30 @@ TEST(frintm) { RUN(); - ASSERT_EQUAL_FP32(1.0, s0); - ASSERT_EQUAL_FP32(1.0, s1); - ASSERT_EQUAL_FP32(1.0, s2); - ASSERT_EQUAL_FP32(1.0, s3); - ASSERT_EQUAL_FP32(2.0, s4); - ASSERT_EQUAL_FP32(-2.0, s5); - ASSERT_EQUAL_FP32(-3.0, s6); - ASSERT_EQUAL_FP32(kFP32PositiveInfinity, s7); - ASSERT_EQUAL_FP32(kFP32NegativeInfinity, s8); - ASSERT_EQUAL_FP32(0.0, s9); - ASSERT_EQUAL_FP32(-0.0, s10); - ASSERT_EQUAL_FP32(-1.0, s11); - ASSERT_EQUAL_FP64(1.0, d12); - ASSERT_EQUAL_FP64(1.0, d13); - ASSERT_EQUAL_FP64(1.0, d14); - ASSERT_EQUAL_FP64(1.0, d15); - ASSERT_EQUAL_FP64(2.0, d16); - ASSERT_EQUAL_FP64(-2.0, d17); - ASSERT_EQUAL_FP64(-3.0, d18); - ASSERT_EQUAL_FP64(kFP64PositiveInfinity, d19); - ASSERT_EQUAL_FP64(kFP64NegativeInfinity, d20); - ASSERT_EQUAL_FP64(0.0, d21); - ASSERT_EQUAL_FP64(-0.0, d22); - ASSERT_EQUAL_FP64(-1.0, d23); + 
CHECK_EQUAL_FP32(1.0, s0); + CHECK_EQUAL_FP32(1.0, s1); + CHECK_EQUAL_FP32(1.0, s2); + CHECK_EQUAL_FP32(1.0, s3); + CHECK_EQUAL_FP32(2.0, s4); + CHECK_EQUAL_FP32(-2.0, s5); + CHECK_EQUAL_FP32(-3.0, s6); + CHECK_EQUAL_FP32(kFP32PositiveInfinity, s7); + CHECK_EQUAL_FP32(kFP32NegativeInfinity, s8); + CHECK_EQUAL_FP32(0.0, s9); + CHECK_EQUAL_FP32(-0.0, s10); + CHECK_EQUAL_FP32(-1.0, s11); + CHECK_EQUAL_FP64(1.0, d12); + CHECK_EQUAL_FP64(1.0, d13); + CHECK_EQUAL_FP64(1.0, d14); + CHECK_EQUAL_FP64(1.0, d15); + CHECK_EQUAL_FP64(2.0, d16); + CHECK_EQUAL_FP64(-2.0, d17); + CHECK_EQUAL_FP64(-3.0, d18); + CHECK_EQUAL_FP64(kFP64PositiveInfinity, d19); + CHECK_EQUAL_FP64(kFP64NegativeInfinity, d20); + CHECK_EQUAL_FP64(0.0, d21); + CHECK_EQUAL_FP64(-0.0, d22); + CHECK_EQUAL_FP64(-1.0, d23); TEARDOWN(); } @@ -6323,30 +6525,30 @@ TEST(frintn) { RUN(); - ASSERT_EQUAL_FP32(1.0, s0); - ASSERT_EQUAL_FP32(1.0, s1); - ASSERT_EQUAL_FP32(2.0, s2); - ASSERT_EQUAL_FP32(2.0, s3); - ASSERT_EQUAL_FP32(2.0, s4); - ASSERT_EQUAL_FP32(-2.0, s5); - ASSERT_EQUAL_FP32(-2.0, s6); - ASSERT_EQUAL_FP32(kFP32PositiveInfinity, s7); - ASSERT_EQUAL_FP32(kFP32NegativeInfinity, s8); - ASSERT_EQUAL_FP32(0.0, s9); - ASSERT_EQUAL_FP32(-0.0, s10); - ASSERT_EQUAL_FP32(-0.0, s11); - ASSERT_EQUAL_FP64(1.0, d12); - ASSERT_EQUAL_FP64(1.0, d13); - ASSERT_EQUAL_FP64(2.0, d14); - ASSERT_EQUAL_FP64(2.0, d15); - ASSERT_EQUAL_FP64(2.0, d16); - ASSERT_EQUAL_FP64(-2.0, d17); - ASSERT_EQUAL_FP64(-2.0, d18); - ASSERT_EQUAL_FP64(kFP64PositiveInfinity, d19); - ASSERT_EQUAL_FP64(kFP64NegativeInfinity, d20); - ASSERT_EQUAL_FP64(0.0, d21); - ASSERT_EQUAL_FP64(-0.0, d22); - ASSERT_EQUAL_FP64(-0.0, d23); + CHECK_EQUAL_FP32(1.0, s0); + CHECK_EQUAL_FP32(1.0, s1); + CHECK_EQUAL_FP32(2.0, s2); + CHECK_EQUAL_FP32(2.0, s3); + CHECK_EQUAL_FP32(2.0, s4); + CHECK_EQUAL_FP32(-2.0, s5); + CHECK_EQUAL_FP32(-2.0, s6); + CHECK_EQUAL_FP32(kFP32PositiveInfinity, s7); + CHECK_EQUAL_FP32(kFP32NegativeInfinity, s8); + CHECK_EQUAL_FP32(0.0, s9); + CHECK_EQUAL_FP32(-0.0, s10); + CHECK_EQUAL_FP32(-0.0, s11); + CHECK_EQUAL_FP64(1.0, d12); + CHECK_EQUAL_FP64(1.0, d13); + CHECK_EQUAL_FP64(2.0, d14); + CHECK_EQUAL_FP64(2.0, d15); + CHECK_EQUAL_FP64(2.0, d16); + CHECK_EQUAL_FP64(-2.0, d17); + CHECK_EQUAL_FP64(-2.0, d18); + CHECK_EQUAL_FP64(kFP64PositiveInfinity, d19); + CHECK_EQUAL_FP64(kFP64NegativeInfinity, d20); + CHECK_EQUAL_FP64(0.0, d21); + CHECK_EQUAL_FP64(-0.0, d22); + CHECK_EQUAL_FP64(-0.0, d23); TEARDOWN(); } @@ -6408,28 +6610,28 @@ TEST(frintz) { RUN(); - ASSERT_EQUAL_FP32(1.0, s0); - ASSERT_EQUAL_FP32(1.0, s1); - ASSERT_EQUAL_FP32(1.0, s2); - ASSERT_EQUAL_FP32(1.0, s3); - ASSERT_EQUAL_FP32(2.0, s4); - ASSERT_EQUAL_FP32(-1.0, s5); - ASSERT_EQUAL_FP32(-2.0, s6); - ASSERT_EQUAL_FP32(kFP32PositiveInfinity, s7); - ASSERT_EQUAL_FP32(kFP32NegativeInfinity, s8); - ASSERT_EQUAL_FP32(0.0, s9); - ASSERT_EQUAL_FP32(-0.0, s10); - ASSERT_EQUAL_FP64(1.0, d11); - ASSERT_EQUAL_FP64(1.0, d12); - ASSERT_EQUAL_FP64(1.0, d13); - ASSERT_EQUAL_FP64(1.0, d14); - ASSERT_EQUAL_FP64(2.0, d15); - ASSERT_EQUAL_FP64(-1.0, d16); - ASSERT_EQUAL_FP64(-2.0, d17); - ASSERT_EQUAL_FP64(kFP64PositiveInfinity, d18); - ASSERT_EQUAL_FP64(kFP64NegativeInfinity, d19); - ASSERT_EQUAL_FP64(0.0, d20); - ASSERT_EQUAL_FP64(-0.0, d21); + CHECK_EQUAL_FP32(1.0, s0); + CHECK_EQUAL_FP32(1.0, s1); + CHECK_EQUAL_FP32(1.0, s2); + CHECK_EQUAL_FP32(1.0, s3); + CHECK_EQUAL_FP32(2.0, s4); + CHECK_EQUAL_FP32(-1.0, s5); + CHECK_EQUAL_FP32(-2.0, s6); + CHECK_EQUAL_FP32(kFP32PositiveInfinity, s7); + 
CHECK_EQUAL_FP32(kFP32NegativeInfinity, s8);
+  CHECK_EQUAL_FP32(0.0, s9);
+  CHECK_EQUAL_FP32(-0.0, s10);
+  CHECK_EQUAL_FP64(1.0, d11);
+  CHECK_EQUAL_FP64(1.0, d12);
+  CHECK_EQUAL_FP64(1.0, d13);
+  CHECK_EQUAL_FP64(1.0, d14);
+  CHECK_EQUAL_FP64(2.0, d15);
+  CHECK_EQUAL_FP64(-1.0, d16);
+  CHECK_EQUAL_FP64(-2.0, d17);
+  CHECK_EQUAL_FP64(kFP64PositiveInfinity, d18);
+  CHECK_EQUAL_FP64(kFP64NegativeInfinity, d19);
+  CHECK_EQUAL_FP64(0.0, d20);
+  CHECK_EQUAL_FP64(-0.0, d21);

   TEARDOWN();
 }
@@ -6475,19 +6677,19 @@ TEST(fcvt_ds) {

   RUN();

-  ASSERT_EQUAL_FP64(1.0f, d0);
-  ASSERT_EQUAL_FP64(1.1f, d1);
-  ASSERT_EQUAL_FP64(1.5f, d2);
-  ASSERT_EQUAL_FP64(1.9f, d3);
-  ASSERT_EQUAL_FP64(2.5f, d4);
-  ASSERT_EQUAL_FP64(-1.5f, d5);
-  ASSERT_EQUAL_FP64(-2.5f, d6);
-  ASSERT_EQUAL_FP64(kFP64PositiveInfinity, d7);
-  ASSERT_EQUAL_FP64(kFP64NegativeInfinity, d8);
-  ASSERT_EQUAL_FP64(0.0f, d9);
-  ASSERT_EQUAL_FP64(-0.0f, d10);
-  ASSERT_EQUAL_FP64(FLT_MAX, d11);
-  ASSERT_EQUAL_FP64(FLT_MIN, d12);
+  CHECK_EQUAL_FP64(1.0f, d0);
+  CHECK_EQUAL_FP64(1.1f, d1);
+  CHECK_EQUAL_FP64(1.5f, d2);
+  CHECK_EQUAL_FP64(1.9f, d3);
+  CHECK_EQUAL_FP64(2.5f, d4);
+  CHECK_EQUAL_FP64(-1.5f, d5);
+  CHECK_EQUAL_FP64(-2.5f, d6);
+  CHECK_EQUAL_FP64(kFP64PositiveInfinity, d7);
+  CHECK_EQUAL_FP64(kFP64NegativeInfinity, d8);
+  CHECK_EQUAL_FP64(0.0f, d9);
+  CHECK_EQUAL_FP64(-0.0f, d10);
+  CHECK_EQUAL_FP64(FLT_MAX, d11);
+  CHECK_EQUAL_FP64(FLT_MIN, d12);

   // Check that the NaN payload is preserved according to ARM64 conversion
   // rules:
@@ -6495,8 +6697,8 @@ TEST(fcvt_ds) {
   //  - The top bit of the mantissa is forced to 1 (making it a quiet NaN).
   //  - The remaining mantissa bits are copied until they run out.
   //  - The low-order bits that haven't already been assigned are set to 0.
-  ASSERT_EQUAL_FP64(rawbits_to_double(0x7ff82468a0000000), d13);
-  ASSERT_EQUAL_FP64(rawbits_to_double(0x7ff82468a0000000), d14);
+  CHECK_EQUAL_FP64(rawbits_to_double(0x7ff82468a0000000), d13);
+  CHECK_EQUAL_FP64(rawbits_to_double(0x7ff82468a0000000), d14);

   TEARDOWN();
 }
@@ -6596,8 +6798,8 @@ TEST(fcvt_sd) {
     float expected = test[i].expected;

     // We only expect positive input.
-    ASSERT(std::signbit(in) == 0);
-    ASSERT(std::signbit(expected) == 0);
+    DCHECK(std::signbit(in) == 0);
+    DCHECK(std::signbit(expected) == 0);

     SETUP();
     START();
@@ -6610,8 +6812,8 @@ TEST(fcvt_sd) {
     END();
     RUN();

-    ASSERT_EQUAL_FP32(expected, s20);
-    ASSERT_EQUAL_FP32(-expected, s21);
+    CHECK_EQUAL_FP32(expected, s20);
+    CHECK_EQUAL_FP32(-expected, s21);
     TEARDOWN();
   }
 }
@@ -6687,36 +6889,36 @@ TEST(fcvtas) {

   RUN();

-  ASSERT_EQUAL_64(1, x0);
-  ASSERT_EQUAL_64(1, x1);
-  ASSERT_EQUAL_64(3, x2);
-  ASSERT_EQUAL_64(0xfffffffd, x3);
-  ASSERT_EQUAL_64(0x7fffffff, x4);
-  ASSERT_EQUAL_64(0x80000000, x5);
-  ASSERT_EQUAL_64(0x7fffff80, x6);
-  ASSERT_EQUAL_64(0x80000080, x7);
-  ASSERT_EQUAL_64(1, x8);
-  ASSERT_EQUAL_64(1, x9);
-  ASSERT_EQUAL_64(3, x10);
-  ASSERT_EQUAL_64(0xfffffffd, x11);
-  ASSERT_EQUAL_64(0x7fffffff, x12);
-  ASSERT_EQUAL_64(0x80000000, x13);
-  ASSERT_EQUAL_64(0x7ffffffe, x14);
-  ASSERT_EQUAL_64(0x80000001, x15);
-  ASSERT_EQUAL_64(1, x17);
-  ASSERT_EQUAL_64(3, x18);
-  ASSERT_EQUAL_64(0xfffffffffffffffdUL, x19);
-  ASSERT_EQUAL_64(0x7fffffffffffffffUL, x20);
-  ASSERT_EQUAL_64(0x8000000000000000UL, x21);
-  ASSERT_EQUAL_64(0x7fffff8000000000UL, x22);
-  ASSERT_EQUAL_64(0x8000008000000000UL, x23);
-  ASSERT_EQUAL_64(1, x24);
-  ASSERT_EQUAL_64(3, x25);
-  ASSERT_EQUAL_64(0xfffffffffffffffdUL, x26);
-  ASSERT_EQUAL_64(0x7fffffffffffffffUL, x27);
-  ASSERT_EQUAL_64(0x8000000000000000UL, x28);
-  ASSERT_EQUAL_64(0x7ffffffffffffc00UL, x29);
-  ASSERT_EQUAL_64(0x8000000000000400UL, x30);
+  CHECK_EQUAL_64(1, x0);
+  CHECK_EQUAL_64(1, x1);
+  CHECK_EQUAL_64(3, x2);
+  CHECK_EQUAL_64(0xfffffffd, x3);
+  CHECK_EQUAL_64(0x7fffffff, x4);
+  CHECK_EQUAL_64(0x80000000, x5);
+  CHECK_EQUAL_64(0x7fffff80, x6);
+  CHECK_EQUAL_64(0x80000080, x7);
+  CHECK_EQUAL_64(1, x8);
+  CHECK_EQUAL_64(1, x9);
+  CHECK_EQUAL_64(3, x10);
+  CHECK_EQUAL_64(0xfffffffd, x11);
+  CHECK_EQUAL_64(0x7fffffff, x12);
+  CHECK_EQUAL_64(0x80000000, x13);
+  CHECK_EQUAL_64(0x7ffffffe, x14);
+  CHECK_EQUAL_64(0x80000001, x15);
+  CHECK_EQUAL_64(1, x17);
+  CHECK_EQUAL_64(3, x18);
+  CHECK_EQUAL_64(0xfffffffffffffffdUL, x19);
+  CHECK_EQUAL_64(0x7fffffffffffffffUL, x20);
+  CHECK_EQUAL_64(0x8000000000000000UL, x21);
+  CHECK_EQUAL_64(0x7fffff8000000000UL, x22);
+  CHECK_EQUAL_64(0x8000008000000000UL, x23);
+  CHECK_EQUAL_64(1, x24);
+  CHECK_EQUAL_64(3, x25);
+  CHECK_EQUAL_64(0xfffffffffffffffdUL, x26);
+  CHECK_EQUAL_64(0x7fffffffffffffffUL, x27);
+  CHECK_EQUAL_64(0x8000000000000000UL, x28);
+  CHECK_EQUAL_64(0x7ffffffffffffc00UL, x29);
+  CHECK_EQUAL_64(0x8000000000000400UL, x30);

   TEARDOWN();
 }
@@ -6789,34 +6991,34 @@ TEST(fcvtau) {

   RUN();

-  ASSERT_EQUAL_64(1, x0);
-  ASSERT_EQUAL_64(1, x1);
-  ASSERT_EQUAL_64(3, x2);
-  ASSERT_EQUAL_64(0, x3);
-  ASSERT_EQUAL_64(0xffffffff, x4);
-  ASSERT_EQUAL_64(0, x5);
-  ASSERT_EQUAL_64(0xffffff00, x6);
-  ASSERT_EQUAL_64(1, x8);
-  ASSERT_EQUAL_64(1, x9);
-  ASSERT_EQUAL_64(3, x10);
-  ASSERT_EQUAL_64(0, x11);
-  ASSERT_EQUAL_64(0xffffffff, x12);
-  ASSERT_EQUAL_64(0, x13);
-  ASSERT_EQUAL_64(0xfffffffe, x14);
-  ASSERT_EQUAL_64(1, x16);
-  ASSERT_EQUAL_64(1, x17);
-  ASSERT_EQUAL_64(3, x18);
-  ASSERT_EQUAL_64(0, x19);
-  ASSERT_EQUAL_64(0xffffffffffffffffUL, x20);
-  ASSERT_EQUAL_64(0, x21);
-  ASSERT_EQUAL_64(0xffffff0000000000UL, x22);
-  ASSERT_EQUAL_64(1, x24);
-  ASSERT_EQUAL_64(3, x25);
-  ASSERT_EQUAL_64(0, x26);
-  ASSERT_EQUAL_64(0xffffffffffffffffUL, x27);
-  ASSERT_EQUAL_64(0, x28);
-  ASSERT_EQUAL_64(0xfffffffffffff800UL, x29);
-  ASSERT_EQUAL_64(0xffffffff, x30);
+  CHECK_EQUAL_64(1, x0);
+  CHECK_EQUAL_64(1, x1);
+  CHECK_EQUAL_64(3, x2);
+
CHECK_EQUAL_64(0, x3); + CHECK_EQUAL_64(0xffffffff, x4); + CHECK_EQUAL_64(0, x5); + CHECK_EQUAL_64(0xffffff00, x6); + CHECK_EQUAL_64(1, x8); + CHECK_EQUAL_64(1, x9); + CHECK_EQUAL_64(3, x10); + CHECK_EQUAL_64(0, x11); + CHECK_EQUAL_64(0xffffffff, x12); + CHECK_EQUAL_64(0, x13); + CHECK_EQUAL_64(0xfffffffe, x14); + CHECK_EQUAL_64(1, x16); + CHECK_EQUAL_64(1, x17); + CHECK_EQUAL_64(3, x18); + CHECK_EQUAL_64(0, x19); + CHECK_EQUAL_64(0xffffffffffffffffUL, x20); + CHECK_EQUAL_64(0, x21); + CHECK_EQUAL_64(0xffffff0000000000UL, x22); + CHECK_EQUAL_64(1, x24); + CHECK_EQUAL_64(3, x25); + CHECK_EQUAL_64(0, x26); + CHECK_EQUAL_64(0xffffffffffffffffUL, x27); + CHECK_EQUAL_64(0, x28); + CHECK_EQUAL_64(0xfffffffffffff800UL, x29); + CHECK_EQUAL_64(0xffffffff, x30); TEARDOWN(); } @@ -6892,36 +7094,36 @@ TEST(fcvtms) { RUN(); - ASSERT_EQUAL_64(1, x0); - ASSERT_EQUAL_64(1, x1); - ASSERT_EQUAL_64(1, x2); - ASSERT_EQUAL_64(0xfffffffe, x3); - ASSERT_EQUAL_64(0x7fffffff, x4); - ASSERT_EQUAL_64(0x80000000, x5); - ASSERT_EQUAL_64(0x7fffff80, x6); - ASSERT_EQUAL_64(0x80000080, x7); - ASSERT_EQUAL_64(1, x8); - ASSERT_EQUAL_64(1, x9); - ASSERT_EQUAL_64(1, x10); - ASSERT_EQUAL_64(0xfffffffe, x11); - ASSERT_EQUAL_64(0x7fffffff, x12); - ASSERT_EQUAL_64(0x80000000, x13); - ASSERT_EQUAL_64(0x7ffffffe, x14); - ASSERT_EQUAL_64(0x80000001, x15); - ASSERT_EQUAL_64(1, x17); - ASSERT_EQUAL_64(1, x18); - ASSERT_EQUAL_64(0xfffffffffffffffeUL, x19); - ASSERT_EQUAL_64(0x7fffffffffffffffUL, x20); - ASSERT_EQUAL_64(0x8000000000000000UL, x21); - ASSERT_EQUAL_64(0x7fffff8000000000UL, x22); - ASSERT_EQUAL_64(0x8000008000000000UL, x23); - ASSERT_EQUAL_64(1, x24); - ASSERT_EQUAL_64(1, x25); - ASSERT_EQUAL_64(0xfffffffffffffffeUL, x26); - ASSERT_EQUAL_64(0x7fffffffffffffffUL, x27); - ASSERT_EQUAL_64(0x8000000000000000UL, x28); - ASSERT_EQUAL_64(0x7ffffffffffffc00UL, x29); - ASSERT_EQUAL_64(0x8000000000000400UL, x30); + CHECK_EQUAL_64(1, x0); + CHECK_EQUAL_64(1, x1); + CHECK_EQUAL_64(1, x2); + CHECK_EQUAL_64(0xfffffffe, x3); + CHECK_EQUAL_64(0x7fffffff, x4); + CHECK_EQUAL_64(0x80000000, x5); + CHECK_EQUAL_64(0x7fffff80, x6); + CHECK_EQUAL_64(0x80000080, x7); + CHECK_EQUAL_64(1, x8); + CHECK_EQUAL_64(1, x9); + CHECK_EQUAL_64(1, x10); + CHECK_EQUAL_64(0xfffffffe, x11); + CHECK_EQUAL_64(0x7fffffff, x12); + CHECK_EQUAL_64(0x80000000, x13); + CHECK_EQUAL_64(0x7ffffffe, x14); + CHECK_EQUAL_64(0x80000001, x15); + CHECK_EQUAL_64(1, x17); + CHECK_EQUAL_64(1, x18); + CHECK_EQUAL_64(0xfffffffffffffffeUL, x19); + CHECK_EQUAL_64(0x7fffffffffffffffUL, x20); + CHECK_EQUAL_64(0x8000000000000000UL, x21); + CHECK_EQUAL_64(0x7fffff8000000000UL, x22); + CHECK_EQUAL_64(0x8000008000000000UL, x23); + CHECK_EQUAL_64(1, x24); + CHECK_EQUAL_64(1, x25); + CHECK_EQUAL_64(0xfffffffffffffffeUL, x26); + CHECK_EQUAL_64(0x7fffffffffffffffUL, x27); + CHECK_EQUAL_64(0x8000000000000000UL, x28); + CHECK_EQUAL_64(0x7ffffffffffffc00UL, x29); + CHECK_EQUAL_64(0x8000000000000400UL, x30); TEARDOWN(); } @@ -6996,35 +7198,35 @@ TEST(fcvtmu) { RUN(); - ASSERT_EQUAL_64(1, x0); - ASSERT_EQUAL_64(1, x1); - ASSERT_EQUAL_64(1, x2); - ASSERT_EQUAL_64(0, x3); - ASSERT_EQUAL_64(0xffffffff, x4); - ASSERT_EQUAL_64(0, x5); - ASSERT_EQUAL_64(0x7fffff80, x6); - ASSERT_EQUAL_64(0, x7); - ASSERT_EQUAL_64(1, x8); - ASSERT_EQUAL_64(1, x9); - ASSERT_EQUAL_64(1, x10); - ASSERT_EQUAL_64(0, x11); - ASSERT_EQUAL_64(0xffffffff, x12); - ASSERT_EQUAL_64(0, x13); - ASSERT_EQUAL_64(0x7ffffffe, x14); - ASSERT_EQUAL_64(1, x17); - ASSERT_EQUAL_64(1, x18); - ASSERT_EQUAL_64(0x0UL, x19); - 
ASSERT_EQUAL_64(0xffffffffffffffffUL, x20); - ASSERT_EQUAL_64(0x0UL, x21); - ASSERT_EQUAL_64(0x7fffff8000000000UL, x22); - ASSERT_EQUAL_64(0x0UL, x23); - ASSERT_EQUAL_64(1, x24); - ASSERT_EQUAL_64(1, x25); - ASSERT_EQUAL_64(0x0UL, x26); - ASSERT_EQUAL_64(0xffffffffffffffffUL, x27); - ASSERT_EQUAL_64(0x0UL, x28); - ASSERT_EQUAL_64(0x7ffffffffffffc00UL, x29); - ASSERT_EQUAL_64(0x0UL, x30); + CHECK_EQUAL_64(1, x0); + CHECK_EQUAL_64(1, x1); + CHECK_EQUAL_64(1, x2); + CHECK_EQUAL_64(0, x3); + CHECK_EQUAL_64(0xffffffff, x4); + CHECK_EQUAL_64(0, x5); + CHECK_EQUAL_64(0x7fffff80, x6); + CHECK_EQUAL_64(0, x7); + CHECK_EQUAL_64(1, x8); + CHECK_EQUAL_64(1, x9); + CHECK_EQUAL_64(1, x10); + CHECK_EQUAL_64(0, x11); + CHECK_EQUAL_64(0xffffffff, x12); + CHECK_EQUAL_64(0, x13); + CHECK_EQUAL_64(0x7ffffffe, x14); + CHECK_EQUAL_64(1, x17); + CHECK_EQUAL_64(1, x18); + CHECK_EQUAL_64(0x0UL, x19); + CHECK_EQUAL_64(0xffffffffffffffffUL, x20); + CHECK_EQUAL_64(0x0UL, x21); + CHECK_EQUAL_64(0x7fffff8000000000UL, x22); + CHECK_EQUAL_64(0x0UL, x23); + CHECK_EQUAL_64(1, x24); + CHECK_EQUAL_64(1, x25); + CHECK_EQUAL_64(0x0UL, x26); + CHECK_EQUAL_64(0xffffffffffffffffUL, x27); + CHECK_EQUAL_64(0x0UL, x28); + CHECK_EQUAL_64(0x7ffffffffffffc00UL, x29); + CHECK_EQUAL_64(0x0UL, x30); TEARDOWN(); } @@ -7100,36 +7302,36 @@ TEST(fcvtns) { RUN(); - ASSERT_EQUAL_64(1, x0); - ASSERT_EQUAL_64(1, x1); - ASSERT_EQUAL_64(2, x2); - ASSERT_EQUAL_64(0xfffffffe, x3); - ASSERT_EQUAL_64(0x7fffffff, x4); - ASSERT_EQUAL_64(0x80000000, x5); - ASSERT_EQUAL_64(0x7fffff80, x6); - ASSERT_EQUAL_64(0x80000080, x7); - ASSERT_EQUAL_64(1, x8); - ASSERT_EQUAL_64(1, x9); - ASSERT_EQUAL_64(2, x10); - ASSERT_EQUAL_64(0xfffffffe, x11); - ASSERT_EQUAL_64(0x7fffffff, x12); - ASSERT_EQUAL_64(0x80000000, x13); - ASSERT_EQUAL_64(0x7ffffffe, x14); - ASSERT_EQUAL_64(0x80000001, x15); - ASSERT_EQUAL_64(1, x17); - ASSERT_EQUAL_64(2, x18); - ASSERT_EQUAL_64(0xfffffffffffffffeUL, x19); - ASSERT_EQUAL_64(0x7fffffffffffffffUL, x20); - ASSERT_EQUAL_64(0x8000000000000000UL, x21); - ASSERT_EQUAL_64(0x7fffff8000000000UL, x22); - ASSERT_EQUAL_64(0x8000008000000000UL, x23); - ASSERT_EQUAL_64(1, x24); - ASSERT_EQUAL_64(2, x25); - ASSERT_EQUAL_64(0xfffffffffffffffeUL, x26); - ASSERT_EQUAL_64(0x7fffffffffffffffUL, x27); -// ASSERT_EQUAL_64(0x8000000000000000UL, x28); - ASSERT_EQUAL_64(0x7ffffffffffffc00UL, x29); - ASSERT_EQUAL_64(0x8000000000000400UL, x30); + CHECK_EQUAL_64(1, x0); + CHECK_EQUAL_64(1, x1); + CHECK_EQUAL_64(2, x2); + CHECK_EQUAL_64(0xfffffffe, x3); + CHECK_EQUAL_64(0x7fffffff, x4); + CHECK_EQUAL_64(0x80000000, x5); + CHECK_EQUAL_64(0x7fffff80, x6); + CHECK_EQUAL_64(0x80000080, x7); + CHECK_EQUAL_64(1, x8); + CHECK_EQUAL_64(1, x9); + CHECK_EQUAL_64(2, x10); + CHECK_EQUAL_64(0xfffffffe, x11); + CHECK_EQUAL_64(0x7fffffff, x12); + CHECK_EQUAL_64(0x80000000, x13); + CHECK_EQUAL_64(0x7ffffffe, x14); + CHECK_EQUAL_64(0x80000001, x15); + CHECK_EQUAL_64(1, x17); + CHECK_EQUAL_64(2, x18); + CHECK_EQUAL_64(0xfffffffffffffffeUL, x19); + CHECK_EQUAL_64(0x7fffffffffffffffUL, x20); + CHECK_EQUAL_64(0x8000000000000000UL, x21); + CHECK_EQUAL_64(0x7fffff8000000000UL, x22); + CHECK_EQUAL_64(0x8000008000000000UL, x23); + CHECK_EQUAL_64(1, x24); + CHECK_EQUAL_64(2, x25); + CHECK_EQUAL_64(0xfffffffffffffffeUL, x26); + CHECK_EQUAL_64(0x7fffffffffffffffUL, x27); +// CHECK_EQUAL_64(0x8000000000000000UL, x28); + CHECK_EQUAL_64(0x7ffffffffffffc00UL, x29); + CHECK_EQUAL_64(0x8000000000000400UL, x30); TEARDOWN(); } @@ -7202,34 +7404,34 @@ TEST(fcvtnu) { RUN(); - ASSERT_EQUAL_64(1, x0); - 
ASSERT_EQUAL_64(1, x1); - ASSERT_EQUAL_64(2, x2); - ASSERT_EQUAL_64(0, x3); - ASSERT_EQUAL_64(0xffffffff, x4); - ASSERT_EQUAL_64(0, x5); - ASSERT_EQUAL_64(0xffffff00, x6); - ASSERT_EQUAL_64(1, x8); - ASSERT_EQUAL_64(1, x9); - ASSERT_EQUAL_64(2, x10); - ASSERT_EQUAL_64(0, x11); - ASSERT_EQUAL_64(0xffffffff, x12); - ASSERT_EQUAL_64(0, x13); - ASSERT_EQUAL_64(0xfffffffe, x14); - ASSERT_EQUAL_64(1, x16); - ASSERT_EQUAL_64(1, x17); - ASSERT_EQUAL_64(2, x18); - ASSERT_EQUAL_64(0, x19); - ASSERT_EQUAL_64(0xffffffffffffffffUL, x20); - ASSERT_EQUAL_64(0, x21); - ASSERT_EQUAL_64(0xffffff0000000000UL, x22); - ASSERT_EQUAL_64(1, x24); - ASSERT_EQUAL_64(2, x25); - ASSERT_EQUAL_64(0, x26); - ASSERT_EQUAL_64(0xffffffffffffffffUL, x27); -// ASSERT_EQUAL_64(0, x28); - ASSERT_EQUAL_64(0xfffffffffffff800UL, x29); - ASSERT_EQUAL_64(0xffffffff, x30); + CHECK_EQUAL_64(1, x0); + CHECK_EQUAL_64(1, x1); + CHECK_EQUAL_64(2, x2); + CHECK_EQUAL_64(0, x3); + CHECK_EQUAL_64(0xffffffff, x4); + CHECK_EQUAL_64(0, x5); + CHECK_EQUAL_64(0xffffff00, x6); + CHECK_EQUAL_64(1, x8); + CHECK_EQUAL_64(1, x9); + CHECK_EQUAL_64(2, x10); + CHECK_EQUAL_64(0, x11); + CHECK_EQUAL_64(0xffffffff, x12); + CHECK_EQUAL_64(0, x13); + CHECK_EQUAL_64(0xfffffffe, x14); + CHECK_EQUAL_64(1, x16); + CHECK_EQUAL_64(1, x17); + CHECK_EQUAL_64(2, x18); + CHECK_EQUAL_64(0, x19); + CHECK_EQUAL_64(0xffffffffffffffffUL, x20); + CHECK_EQUAL_64(0, x21); + CHECK_EQUAL_64(0xffffff0000000000UL, x22); + CHECK_EQUAL_64(1, x24); + CHECK_EQUAL_64(2, x25); + CHECK_EQUAL_64(0, x26); + CHECK_EQUAL_64(0xffffffffffffffffUL, x27); +// CHECK_EQUAL_64(0, x28); + CHECK_EQUAL_64(0xfffffffffffff800UL, x29); + CHECK_EQUAL_64(0xffffffff, x30); TEARDOWN(); } @@ -7305,36 +7507,36 @@ TEST(fcvtzs) { RUN(); - ASSERT_EQUAL_64(1, x0); - ASSERT_EQUAL_64(1, x1); - ASSERT_EQUAL_64(1, x2); - ASSERT_EQUAL_64(0xffffffff, x3); - ASSERT_EQUAL_64(0x7fffffff, x4); - ASSERT_EQUAL_64(0x80000000, x5); - ASSERT_EQUAL_64(0x7fffff80, x6); - ASSERT_EQUAL_64(0x80000080, x7); - ASSERT_EQUAL_64(1, x8); - ASSERT_EQUAL_64(1, x9); - ASSERT_EQUAL_64(1, x10); - ASSERT_EQUAL_64(0xffffffff, x11); - ASSERT_EQUAL_64(0x7fffffff, x12); - ASSERT_EQUAL_64(0x80000000, x13); - ASSERT_EQUAL_64(0x7ffffffe, x14); - ASSERT_EQUAL_64(0x80000001, x15); - ASSERT_EQUAL_64(1, x17); - ASSERT_EQUAL_64(1, x18); - ASSERT_EQUAL_64(0xffffffffffffffffUL, x19); - ASSERT_EQUAL_64(0x7fffffffffffffffUL, x20); - ASSERT_EQUAL_64(0x8000000000000000UL, x21); - ASSERT_EQUAL_64(0x7fffff8000000000UL, x22); - ASSERT_EQUAL_64(0x8000008000000000UL, x23); - ASSERT_EQUAL_64(1, x24); - ASSERT_EQUAL_64(1, x25); - ASSERT_EQUAL_64(0xffffffffffffffffUL, x26); - ASSERT_EQUAL_64(0x7fffffffffffffffUL, x27); - ASSERT_EQUAL_64(0x8000000000000000UL, x28); - ASSERT_EQUAL_64(0x7ffffffffffffc00UL, x29); - ASSERT_EQUAL_64(0x8000000000000400UL, x30); + CHECK_EQUAL_64(1, x0); + CHECK_EQUAL_64(1, x1); + CHECK_EQUAL_64(1, x2); + CHECK_EQUAL_64(0xffffffff, x3); + CHECK_EQUAL_64(0x7fffffff, x4); + CHECK_EQUAL_64(0x80000000, x5); + CHECK_EQUAL_64(0x7fffff80, x6); + CHECK_EQUAL_64(0x80000080, x7); + CHECK_EQUAL_64(1, x8); + CHECK_EQUAL_64(1, x9); + CHECK_EQUAL_64(1, x10); + CHECK_EQUAL_64(0xffffffff, x11); + CHECK_EQUAL_64(0x7fffffff, x12); + CHECK_EQUAL_64(0x80000000, x13); + CHECK_EQUAL_64(0x7ffffffe, x14); + CHECK_EQUAL_64(0x80000001, x15); + CHECK_EQUAL_64(1, x17); + CHECK_EQUAL_64(1, x18); + CHECK_EQUAL_64(0xffffffffffffffffUL, x19); + CHECK_EQUAL_64(0x7fffffffffffffffUL, x20); + CHECK_EQUAL_64(0x8000000000000000UL, x21); + CHECK_EQUAL_64(0x7fffff8000000000UL, x22); + 
CHECK_EQUAL_64(0x8000008000000000UL, x23); + CHECK_EQUAL_64(1, x24); + CHECK_EQUAL_64(1, x25); + CHECK_EQUAL_64(0xffffffffffffffffUL, x26); + CHECK_EQUAL_64(0x7fffffffffffffffUL, x27); + CHECK_EQUAL_64(0x8000000000000000UL, x28); + CHECK_EQUAL_64(0x7ffffffffffffc00UL, x29); + CHECK_EQUAL_64(0x8000000000000400UL, x30); TEARDOWN(); } @@ -7409,35 +7611,35 @@ TEST(fcvtzu) { RUN(); - ASSERT_EQUAL_64(1, x0); - ASSERT_EQUAL_64(1, x1); - ASSERT_EQUAL_64(1, x2); - ASSERT_EQUAL_64(0, x3); - ASSERT_EQUAL_64(0xffffffff, x4); - ASSERT_EQUAL_64(0, x5); - ASSERT_EQUAL_64(0x7fffff80, x6); - ASSERT_EQUAL_64(0, x7); - ASSERT_EQUAL_64(1, x8); - ASSERT_EQUAL_64(1, x9); - ASSERT_EQUAL_64(1, x10); - ASSERT_EQUAL_64(0, x11); - ASSERT_EQUAL_64(0xffffffff, x12); - ASSERT_EQUAL_64(0, x13); - ASSERT_EQUAL_64(0x7ffffffe, x14); - ASSERT_EQUAL_64(1, x17); - ASSERT_EQUAL_64(1, x18); - ASSERT_EQUAL_64(0x0UL, x19); - ASSERT_EQUAL_64(0xffffffffffffffffUL, x20); - ASSERT_EQUAL_64(0x0UL, x21); - ASSERT_EQUAL_64(0x7fffff8000000000UL, x22); - ASSERT_EQUAL_64(0x0UL, x23); - ASSERT_EQUAL_64(1, x24); - ASSERT_EQUAL_64(1, x25); - ASSERT_EQUAL_64(0x0UL, x26); - ASSERT_EQUAL_64(0xffffffffffffffffUL, x27); - ASSERT_EQUAL_64(0x0UL, x28); - ASSERT_EQUAL_64(0x7ffffffffffffc00UL, x29); - ASSERT_EQUAL_64(0x0UL, x30); + CHECK_EQUAL_64(1, x0); + CHECK_EQUAL_64(1, x1); + CHECK_EQUAL_64(1, x2); + CHECK_EQUAL_64(0, x3); + CHECK_EQUAL_64(0xffffffff, x4); + CHECK_EQUAL_64(0, x5); + CHECK_EQUAL_64(0x7fffff80, x6); + CHECK_EQUAL_64(0, x7); + CHECK_EQUAL_64(1, x8); + CHECK_EQUAL_64(1, x9); + CHECK_EQUAL_64(1, x10); + CHECK_EQUAL_64(0, x11); + CHECK_EQUAL_64(0xffffffff, x12); + CHECK_EQUAL_64(0, x13); + CHECK_EQUAL_64(0x7ffffffe, x14); + CHECK_EQUAL_64(1, x17); + CHECK_EQUAL_64(1, x18); + CHECK_EQUAL_64(0x0UL, x19); + CHECK_EQUAL_64(0xffffffffffffffffUL, x20); + CHECK_EQUAL_64(0x0UL, x21); + CHECK_EQUAL_64(0x7fffff8000000000UL, x22); + CHECK_EQUAL_64(0x0UL, x23); + CHECK_EQUAL_64(1, x24); + CHECK_EQUAL_64(1, x25); + CHECK_EQUAL_64(0x0UL, x26); + CHECK_EQUAL_64(0xffffffffffffffffUL, x27); + CHECK_EQUAL_64(0x0UL, x28); + CHECK_EQUAL_64(0x7ffffffffffffc00UL, x29); + CHECK_EQUAL_64(0x0UL, x30); TEARDOWN(); } @@ -7525,16 +7727,16 @@ static void TestUScvtfHelper(uint64_t in, for (int fbits = 0; fbits <= 32; fbits++) { double expected_scvtf = expected_scvtf_base / pow(2.0, fbits); double expected_ucvtf = expected_ucvtf_base / pow(2.0, fbits); - ASSERT_EQUAL_FP64(expected_scvtf, results_scvtf_x[fbits]); - ASSERT_EQUAL_FP64(expected_ucvtf, results_ucvtf_x[fbits]); - if (cvtf_s32) ASSERT_EQUAL_FP64(expected_scvtf, results_scvtf_w[fbits]); - if (cvtf_u32) ASSERT_EQUAL_FP64(expected_ucvtf, results_ucvtf_w[fbits]); + CHECK_EQUAL_FP64(expected_scvtf, results_scvtf_x[fbits]); + CHECK_EQUAL_FP64(expected_ucvtf, results_ucvtf_x[fbits]); + if (cvtf_s32) CHECK_EQUAL_FP64(expected_scvtf, results_scvtf_w[fbits]); + if (cvtf_u32) CHECK_EQUAL_FP64(expected_ucvtf, results_ucvtf_w[fbits]); } for (int fbits = 33; fbits <= 64; fbits++) { double expected_scvtf = expected_scvtf_base / pow(2.0, fbits); double expected_ucvtf = expected_ucvtf_base / pow(2.0, fbits); - ASSERT_EQUAL_FP64(expected_scvtf, results_scvtf_x[fbits]); - ASSERT_EQUAL_FP64(expected_ucvtf, results_ucvtf_x[fbits]); + CHECK_EQUAL_FP64(expected_scvtf, results_scvtf_x[fbits]); + CHECK_EQUAL_FP64(expected_ucvtf, results_ucvtf_x[fbits]); } TEARDOWN(); @@ -7680,18 +7882,18 @@ static void TestUScvtf32Helper(uint64_t in, for (int fbits = 0; fbits <= 32; fbits++) { float expected_scvtf = expected_scvtf_base / powf(2, 
fbits); float expected_ucvtf = expected_ucvtf_base / powf(2, fbits); - ASSERT_EQUAL_FP32(expected_scvtf, results_scvtf_x[fbits]); - ASSERT_EQUAL_FP32(expected_ucvtf, results_ucvtf_x[fbits]); - if (cvtf_s32) ASSERT_EQUAL_FP32(expected_scvtf, results_scvtf_w[fbits]); - if (cvtf_u32) ASSERT_EQUAL_FP32(expected_ucvtf, results_ucvtf_w[fbits]); + CHECK_EQUAL_FP32(expected_scvtf, results_scvtf_x[fbits]); + CHECK_EQUAL_FP32(expected_ucvtf, results_ucvtf_x[fbits]); + if (cvtf_s32) CHECK_EQUAL_FP32(expected_scvtf, results_scvtf_w[fbits]); + if (cvtf_u32) CHECK_EQUAL_FP32(expected_ucvtf, results_ucvtf_w[fbits]); break; } for (int fbits = 33; fbits <= 64; fbits++) { break; float expected_scvtf = expected_scvtf_base / powf(2, fbits); float expected_ucvtf = expected_ucvtf_base / powf(2, fbits); - ASSERT_EQUAL_FP32(expected_scvtf, results_scvtf_x[fbits]); - ASSERT_EQUAL_FP32(expected_ucvtf, results_ucvtf_x[fbits]); + CHECK_EQUAL_FP32(expected_scvtf, results_scvtf_x[fbits]); + CHECK_EQUAL_FP32(expected_ucvtf, results_ucvtf_x[fbits]); } TEARDOWN(); @@ -7795,13 +7997,13 @@ TEST(system_mrs) { RUN(); // NZCV - ASSERT_EQUAL_32(ZCFlag, w3); - ASSERT_EQUAL_32(NFlag, w4); - ASSERT_EQUAL_32(ZCVFlag, w5); + CHECK_EQUAL_32(ZCFlag, w3); + CHECK_EQUAL_32(NFlag, w4); + CHECK_EQUAL_32(ZCVFlag, w5); // FPCR // The default FPCR on Linux-based platforms is 0. - ASSERT_EQUAL_32(0, w6); + CHECK_EQUAL_32(0, w6); TEARDOWN(); } @@ -7869,11 +8071,11 @@ TEST(system_msr) { RUN(); // We should have incremented x7 (from 0) exactly 8 times. - ASSERT_EQUAL_64(8, x7); + CHECK_EQUAL_64(8, x7); - ASSERT_EQUAL_64(fpcr_core, x8); - ASSERT_EQUAL_64(fpcr_core, x9); - ASSERT_EQUAL_64(0, x10); + CHECK_EQUAL_64(fpcr_core, x8); + CHECK_EQUAL_64(fpcr_core, x9); + CHECK_EQUAL_64(0, x10); TEARDOWN(); } @@ -7891,8 +8093,8 @@ TEST(system_nop) { RUN(); - ASSERT_EQUAL_REGISTERS(before); - ASSERT_EQUAL_NZCV(before.flags_nzcv()); + CHECK_EQUAL_REGISTERS(before); + CHECK_EQUAL_NZCV(before.flags_nzcv()); TEARDOWN(); } @@ -7958,8 +8160,8 @@ TEST(zero_dest) { RUN(); - ASSERT_EQUAL_REGISTERS(before); - ASSERT_EQUAL_NZCV(before.flags_nzcv()); + CHECK_EQUAL_REGISTERS(before); + CHECK_EQUAL_NZCV(before.flags_nzcv()); TEARDOWN(); } @@ -8023,7 +8225,7 @@ TEST(zero_dest_setflags) { RUN(); - ASSERT_EQUAL_REGISTERS(before); + CHECK_EQUAL_REGISTERS(before); TEARDOWN(); } @@ -8136,15 +8338,15 @@ TEST(peek_poke_simple) { END(); RUN(); - ASSERT_EQUAL_64(literal_base * 1, x0); - ASSERT_EQUAL_64(literal_base * 2, x1); - ASSERT_EQUAL_64(literal_base * 3, x2); - ASSERT_EQUAL_64(literal_base * 4, x3); + CHECK_EQUAL_64(literal_base * 1, x0); + CHECK_EQUAL_64(literal_base * 2, x1); + CHECK_EQUAL_64(literal_base * 3, x2); + CHECK_EQUAL_64(literal_base * 4, x3); - ASSERT_EQUAL_64((literal_base * 1) & 0xffffffff, x10); - ASSERT_EQUAL_64((literal_base * 2) & 0xffffffff, x11); - ASSERT_EQUAL_64((literal_base * 3) & 0xffffffff, x12); - ASSERT_EQUAL_64((literal_base * 4) & 0xffffffff, x13); + CHECK_EQUAL_64((literal_base * 1) & 0xffffffff, x10); + CHECK_EQUAL_64((literal_base * 2) & 0xffffffff, x11); + CHECK_EQUAL_64((literal_base * 3) & 0xffffffff, x12); + CHECK_EQUAL_64((literal_base * 4) & 0xffffffff, x13); TEARDOWN(); } @@ -8214,17 +8416,17 @@ TEST(peek_poke_unaligned) { END(); RUN(); - ASSERT_EQUAL_64(literal_base * 1, x0); - ASSERT_EQUAL_64(literal_base * 2, x1); - ASSERT_EQUAL_64(literal_base * 3, x2); - ASSERT_EQUAL_64(literal_base * 4, x3); - ASSERT_EQUAL_64(literal_base * 5, x4); - ASSERT_EQUAL_64(literal_base * 6, x5); - ASSERT_EQUAL_64(literal_base * 7, x6); + 
CHECK_EQUAL_64(literal_base * 1, x0); + CHECK_EQUAL_64(literal_base * 2, x1); + CHECK_EQUAL_64(literal_base * 3, x2); + CHECK_EQUAL_64(literal_base * 4, x3); + CHECK_EQUAL_64(literal_base * 5, x4); + CHECK_EQUAL_64(literal_base * 6, x5); + CHECK_EQUAL_64(literal_base * 7, x6); - ASSERT_EQUAL_64((literal_base * 1) & 0xffffffff, x10); - ASSERT_EQUAL_64((literal_base * 2) & 0xffffffff, x11); - ASSERT_EQUAL_64((literal_base * 3) & 0xffffffff, x12); + CHECK_EQUAL_64((literal_base * 1) & 0xffffffff, x10); + CHECK_EQUAL_64((literal_base * 2) & 0xffffffff, x11); + CHECK_EQUAL_64((literal_base * 3) & 0xffffffff, x12); TEARDOWN(); } @@ -8271,10 +8473,10 @@ TEST(peek_poke_endianness) { uint64_t x5_expected = ((x1_expected << 16) & 0xffff0000) | ((x1_expected >> 16) & 0x0000ffff); - ASSERT_EQUAL_64(x0_expected, x0); - ASSERT_EQUAL_64(x1_expected, x1); - ASSERT_EQUAL_64(x4_expected, x4); - ASSERT_EQUAL_64(x5_expected, x5); + CHECK_EQUAL_64(x0_expected, x0); + CHECK_EQUAL_64(x1_expected, x1); + CHECK_EQUAL_64(x4_expected, x4); + CHECK_EQUAL_64(x5_expected, x5); TEARDOWN(); } @@ -8308,7 +8510,7 @@ TEST(peek_poke_mixed) { __ Poke(x1, 8); __ Poke(x0, 0); { - ASSERT(__ StackPointer().Is(csp)); + DCHECK(__ StackPointer().Is(csp)); __ Mov(x4, __ StackPointer()); __ SetStackPointer(x4); @@ -8340,12 +8542,12 @@ TEST(peek_poke_mixed) { uint64_t x7_expected = ((x1_expected << 16) & 0xffff0000) | ((x0_expected >> 48) & 0x0000ffff); - ASSERT_EQUAL_64(x0_expected, x0); - ASSERT_EQUAL_64(x1_expected, x1); - ASSERT_EQUAL_64(x2_expected, x2); - ASSERT_EQUAL_64(x3_expected, x3); - ASSERT_EQUAL_64(x6_expected, x6); - ASSERT_EQUAL_64(x7_expected, x7); + CHECK_EQUAL_64(x0_expected, x0); + CHECK_EQUAL_64(x1_expected, x1); + CHECK_EQUAL_64(x2_expected, x2); + CHECK_EQUAL_64(x3_expected, x3); + CHECK_EQUAL_64(x6_expected, x6); + CHECK_EQUAL_64(x7_expected, x7); TEARDOWN(); } @@ -8384,10 +8586,10 @@ static void PushPopJsspSimpleHelper(int reg_count, START(); - // Registers x8 and x9 are used by the macro assembler for debug code (for - // example in 'Pop'), so we can't use them here. We can't use jssp because it - // will be the stack pointer for this test. - static RegList const allowed = ~(x8.Bit() | x9.Bit() | jssp.Bit()); + // Registers in the TmpList can be used by the macro assembler for debug code + // (for example in 'Pop'), so we can't use them here. We can't use jssp + // because it will be the stack pointer for this test. 
+ static RegList const allowed = ~(masm.TmpList()->list() | jssp.Bit()); if (reg_count == kPushPopJsspMaxRegCount) { reg_count = CountSetBits(allowed, kNumberOfRegisters); } @@ -8405,7 +8607,7 @@ static void PushPopJsspSimpleHelper(int reg_count, uint64_t literal_base = 0x0100001000100101UL; { - ASSERT(__ StackPointer().Is(csp)); + DCHECK(__ StackPointer().Is(csp)); __ Mov(jssp, __ StackPointer()); __ SetStackPointer(jssp); @@ -8434,7 +8636,7 @@ static void PushPopJsspSimpleHelper(int reg_count, case 3: __ Push(r[2], r[1], r[0]); break; case 2: __ Push(r[1], r[0]); break; case 1: __ Push(r[0]); break; - default: ASSERT(i == 0); break; + default: DCHECK(i == 0); break; } break; case PushPopRegList: @@ -8456,7 +8658,7 @@ static void PushPopJsspSimpleHelper(int reg_count, case 3: __ Pop(r[i], r[i+1], r[i+2]); break; case 2: __ Pop(r[i], r[i+1]); break; case 1: __ Pop(r[i]); break; - default: ASSERT(i == reg_count); break; + default: DCHECK(i == reg_count); break; } break; case PushPopRegList: @@ -8476,14 +8678,14 @@ static void PushPopJsspSimpleHelper(int reg_count, RUN(); // Check that the register contents were preserved. - // Always use ASSERT_EQUAL_64, even when testing W registers, so we can test + // Always use CHECK_EQUAL_64, even when testing W registers, so we can test // that the upper word was properly cleared by Pop. literal_base &= (0xffffffffffffffffUL >> (64-reg_size)); for (int i = 0; i < reg_count; i++) { if (x[i].IsZero()) { - ASSERT_EQUAL_64(0, x[i]); + CHECK_EQUAL_64(0, x[i]); } else { - ASSERT_EQUAL_64(literal_base * i, x[i]); + CHECK_EQUAL_64(literal_base * i, x[i]); } } @@ -8587,7 +8789,7 @@ static void PushPopFPJsspSimpleHelper(int reg_count, uint64_t literal_base = 0x0100001000100101UL; { - ASSERT(__ StackPointer().Is(csp)); + DCHECK(__ StackPointer().Is(csp)); __ Mov(jssp, __ StackPointer()); __ SetStackPointer(jssp); @@ -8618,7 +8820,7 @@ static void PushPopFPJsspSimpleHelper(int reg_count, case 3: __ Push(v[2], v[1], v[0]); break; case 2: __ Push(v[1], v[0]); break; case 1: __ Push(v[0]); break; - default: ASSERT(i == 0); break; + default: DCHECK(i == 0); break; } break; case PushPopRegList: @@ -8640,7 +8842,7 @@ static void PushPopFPJsspSimpleHelper(int reg_count, case 3: __ Pop(v[i], v[i+1], v[i+2]); break; case 2: __ Pop(v[i], v[i+1]); break; case 1: __ Pop(v[i]); break; - default: ASSERT(i == reg_count); break; + default: DCHECK(i == reg_count); break; } break; case PushPopRegList: @@ -8660,14 +8862,14 @@ static void PushPopFPJsspSimpleHelper(int reg_count, RUN(); // Check that the register contents were preserved. - // Always use ASSERT_EQUAL_FP64, even when testing S registers, so we can + // Always use CHECK_EQUAL_FP64, even when testing S registers, so we can // test that the upper word was properly cleared by Pop. 
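// Editor's note (clarification, not part of the upstream patch): for the
// 32-bit (S/W) variants reg_size is 32, so the mask below reduces
// literal_base to its low word and every expected value literal_base * i
// fits in 32 bits. Checking the full 64-bit D (or X) register against that
// value is then also a check that Pop cleared the upper 32 bits.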
literal_base &= (0xffffffffffffffffUL >> (64-reg_size)); for (int i = 0; i < reg_count; i++) { uint64_t literal = literal_base * i; double expected; memcpy(&expected, &literal, sizeof(expected)); - ASSERT_EQUAL_FP64(expected, d[i]); + CHECK_EQUAL_FP64(expected, d[i]); } TEARDOWN(); @@ -8764,7 +8966,7 @@ static void PushPopJsspMixedMethodsHelper(int claim, int reg_size) { START(); { - ASSERT(__ StackPointer().Is(csp)); + DCHECK(__ StackPointer().Is(csp)); __ Mov(jssp, __ StackPointer()); __ SetStackPointer(jssp); @@ -8800,16 +9002,16 @@ static void PushPopJsspMixedMethodsHelper(int claim, int reg_size) { RUN(); - // Always use ASSERT_EQUAL_64, even when testing W registers, so we can test + // Always use CHECK_EQUAL_64, even when testing W registers, so we can test // that the upper word was properly cleared by Pop. literal_base &= (0xffffffffffffffffUL >> (64-reg_size)); - ASSERT_EQUAL_64(literal_base * 3, x[9]); - ASSERT_EQUAL_64(literal_base * 2, x[8]); - ASSERT_EQUAL_64(literal_base * 0, x[7]); - ASSERT_EQUAL_64(literal_base * 3, x[6]); - ASSERT_EQUAL_64(literal_base * 1, x[5]); - ASSERT_EQUAL_64(literal_base * 2, x[4]); + CHECK_EQUAL_64(literal_base * 3, x[9]); + CHECK_EQUAL_64(literal_base * 2, x[8]); + CHECK_EQUAL_64(literal_base * 0, x[7]); + CHECK_EQUAL_64(literal_base * 3, x[6]); + CHECK_EQUAL_64(literal_base * 1, x[5]); + CHECK_EQUAL_64(literal_base * 2, x[4]); TEARDOWN(); } @@ -8869,7 +9071,7 @@ static void PushPopJsspWXOverlapHelper(int reg_count, int claim) { START(); { - ASSERT(__ StackPointer().Is(csp)); + DCHECK(__ StackPointer().Is(csp)); __ Mov(jssp, __ StackPointer()); __ SetStackPointer(jssp); @@ -8917,7 +9119,7 @@ static void PushPopJsspWXOverlapHelper(int reg_count, int claim) { int active_w_slots = 0; for (int i = 0; active_w_slots < requested_w_slots; i++) { - ASSERT(i < reg_count); + DCHECK(i < reg_count); // In order to test various arguments to PushMultipleTimes, and to try to // exercise different alignment and overlap effects, we push each // register a different number of times. @@ -8990,7 +9192,7 @@ static void PushPopJsspWXOverlapHelper(int reg_count, int claim) { } next_is_64 = !next_is_64; } - ASSERT(active_w_slots == 0); + DCHECK(active_w_slots == 0); // Drop memory to restore jssp. __ Drop(claim, kByteSizeInBytes); @@ -9018,15 +9220,15 @@ static void PushPopJsspWXOverlapHelper(int reg_count, int claim) { expected = stack[slot++]; } - // Always use ASSERT_EQUAL_64, even when testing W registers, so we can + // Always use CHECK_EQUAL_64, even when testing W registers, so we can // test that the upper word was properly cleared by Pop. 
if (x[i].IsZero()) { - ASSERT_EQUAL_64(0, x[i]); + CHECK_EQUAL_64(0, x[i]); } else { - ASSERT_EQUAL_64(expected, x[i]); + CHECK_EQUAL_64(expected, x[i]); } } - ASSERT(slot == requested_w_slots); + DCHECK(slot == requested_w_slots); TEARDOWN(); } @@ -9056,7 +9258,7 @@ TEST(push_pop_csp) { START(); - ASSERT(csp.Is(__ StackPointer())); + DCHECK(csp.Is(__ StackPointer())); __ Mov(x3, 0x3333333333333333UL); __ Mov(x2, 0x2222222222222222UL); @@ -9101,40 +9303,40 @@ TEST(push_pop_csp) { RUN(); - ASSERT_EQUAL_64(0x1111111111111111UL, x3); - ASSERT_EQUAL_64(0x0000000000000000UL, x2); - ASSERT_EQUAL_64(0x3333333333333333UL, x1); - ASSERT_EQUAL_64(0x2222222222222222UL, x0); - ASSERT_EQUAL_64(0x3333333333333333UL, x9); - ASSERT_EQUAL_64(0x2222222222222222UL, x8); - ASSERT_EQUAL_64(0x0000000000000000UL, x7); - ASSERT_EQUAL_64(0x3333333333333333UL, x6); - ASSERT_EQUAL_64(0x1111111111111111UL, x5); - ASSERT_EQUAL_64(0x2222222222222222UL, x4); + CHECK_EQUAL_64(0x1111111111111111UL, x3); + CHECK_EQUAL_64(0x0000000000000000UL, x2); + CHECK_EQUAL_64(0x3333333333333333UL, x1); + CHECK_EQUAL_64(0x2222222222222222UL, x0); + CHECK_EQUAL_64(0x3333333333333333UL, x9); + CHECK_EQUAL_64(0x2222222222222222UL, x8); + CHECK_EQUAL_64(0x0000000000000000UL, x7); + CHECK_EQUAL_64(0x3333333333333333UL, x6); + CHECK_EQUAL_64(0x1111111111111111UL, x5); + CHECK_EQUAL_64(0x2222222222222222UL, x4); - ASSERT_EQUAL_32(0x11111111U, w13); - ASSERT_EQUAL_32(0x33333333U, w12); - ASSERT_EQUAL_32(0x00000000U, w11); - ASSERT_EQUAL_32(0x22222222U, w10); - ASSERT_EQUAL_32(0x11111111U, w17); - ASSERT_EQUAL_32(0x00000000U, w16); - ASSERT_EQUAL_32(0x33333333U, w15); - ASSERT_EQUAL_32(0x22222222U, w14); + CHECK_EQUAL_32(0x11111111U, w13); + CHECK_EQUAL_32(0x33333333U, w12); + CHECK_EQUAL_32(0x00000000U, w11); + CHECK_EQUAL_32(0x22222222U, w10); + CHECK_EQUAL_32(0x11111111U, w17); + CHECK_EQUAL_32(0x00000000U, w16); + CHECK_EQUAL_32(0x33333333U, w15); + CHECK_EQUAL_32(0x22222222U, w14); - ASSERT_EQUAL_32(0x11111111U, w18); - ASSERT_EQUAL_32(0x11111111U, w19); - ASSERT_EQUAL_32(0x11111111U, w20); - ASSERT_EQUAL_32(0x11111111U, w21); - ASSERT_EQUAL_64(0x3333333333333333UL, x22); - ASSERT_EQUAL_64(0x0000000000000000UL, x23); + CHECK_EQUAL_32(0x11111111U, w18); + CHECK_EQUAL_32(0x11111111U, w19); + CHECK_EQUAL_32(0x11111111U, w20); + CHECK_EQUAL_32(0x11111111U, w21); + CHECK_EQUAL_64(0x3333333333333333UL, x22); + CHECK_EQUAL_64(0x0000000000000000UL, x23); - ASSERT_EQUAL_64(0x3333333333333333UL, x24); - ASSERT_EQUAL_64(0x3333333333333333UL, x26); + CHECK_EQUAL_64(0x3333333333333333UL, x24); + CHECK_EQUAL_64(0x3333333333333333UL, x26); - ASSERT_EQUAL_32(0x33333333U, w25); - ASSERT_EQUAL_32(0x00000000U, w27); - ASSERT_EQUAL_32(0x22222222U, w28); - ASSERT_EQUAL_32(0x33333333U, w29); + CHECK_EQUAL_32(0x33333333U, w25); + CHECK_EQUAL_32(0x00000000U, w27); + CHECK_EQUAL_32(0x22222222U, w28); + CHECK_EQUAL_32(0x33333333U, w29); TEARDOWN(); } @@ -9145,7 +9347,7 @@ TEST(push_queued) { START(); - ASSERT(__ StackPointer().Is(csp)); + DCHECK(__ StackPointer().Is(csp)); __ Mov(jssp, __ StackPointer()); __ SetStackPointer(jssp); @@ -9196,19 +9398,19 @@ TEST(push_queued) { RUN(); - ASSERT_EQUAL_64(0x1234000000000000, x0); - ASSERT_EQUAL_64(0x1234000100010001, x1); - ASSERT_EQUAL_64(0x1234000200020002, x2); - ASSERT_EQUAL_64(0x1234000300030003, x3); + CHECK_EQUAL_64(0x1234000000000000, x0); + CHECK_EQUAL_64(0x1234000100010001, x1); + CHECK_EQUAL_64(0x1234000200020002, x2); + CHECK_EQUAL_64(0x1234000300030003, x3); - ASSERT_EQUAL_32(0x12340004, w4); - 
ASSERT_EQUAL_32(0x12340005, w5); - ASSERT_EQUAL_32(0x12340006, w6); + CHECK_EQUAL_32(0x12340004, w4); + CHECK_EQUAL_32(0x12340005, w5); + CHECK_EQUAL_32(0x12340006, w6); - ASSERT_EQUAL_FP64(123400.0, d0); - ASSERT_EQUAL_FP64(123401.0, d1); + CHECK_EQUAL_FP64(123400.0, d0); + CHECK_EQUAL_FP64(123401.0, d1); - ASSERT_EQUAL_FP32(123402.0, s2); + CHECK_EQUAL_FP32(123402.0, s2); TEARDOWN(); } @@ -9220,7 +9422,7 @@ TEST(pop_queued) { START(); - ASSERT(__ StackPointer().Is(csp)); + DCHECK(__ StackPointer().Is(csp)); __ Mov(jssp, __ StackPointer()); __ SetStackPointer(jssp); @@ -9271,19 +9473,19 @@ TEST(pop_queued) { RUN(); - ASSERT_EQUAL_64(0x1234000000000000, x0); - ASSERT_EQUAL_64(0x1234000100010001, x1); - ASSERT_EQUAL_64(0x1234000200020002, x2); - ASSERT_EQUAL_64(0x1234000300030003, x3); + CHECK_EQUAL_64(0x1234000000000000, x0); + CHECK_EQUAL_64(0x1234000100010001, x1); + CHECK_EQUAL_64(0x1234000200020002, x2); + CHECK_EQUAL_64(0x1234000300030003, x3); - ASSERT_EQUAL_64(0x0000000012340004, x4); - ASSERT_EQUAL_64(0x0000000012340005, x5); - ASSERT_EQUAL_64(0x0000000012340006, x6); + CHECK_EQUAL_64(0x0000000012340004, x4); + CHECK_EQUAL_64(0x0000000012340005, x5); + CHECK_EQUAL_64(0x0000000012340006, x6); - ASSERT_EQUAL_FP64(123400.0, d0); - ASSERT_EQUAL_FP64(123401.0, d1); + CHECK_EQUAL_FP64(123400.0, d0); + CHECK_EQUAL_FP64(123401.0, d1); - ASSERT_EQUAL_FP32(123402.0, s2); + CHECK_EQUAL_FP32(123402.0, s2); TEARDOWN(); } @@ -9349,14 +9551,14 @@ TEST(jump_both_smi) { RUN(); - ASSERT_EQUAL_64(0x5555555500000001UL, x0); - ASSERT_EQUAL_64(0xaaaaaaaa00000001UL, x1); - ASSERT_EQUAL_64(0x1234567800000000UL, x2); - ASSERT_EQUAL_64(0x8765432100000000UL, x3); - ASSERT_EQUAL_64(0, x4); - ASSERT_EQUAL_64(0, x5); - ASSERT_EQUAL_64(0, x6); - ASSERT_EQUAL_64(1, x7); + CHECK_EQUAL_64(0x5555555500000001UL, x0); + CHECK_EQUAL_64(0xaaaaaaaa00000001UL, x1); + CHECK_EQUAL_64(0x1234567800000000UL, x2); + CHECK_EQUAL_64(0x8765432100000000UL, x3); + CHECK_EQUAL_64(0, x4); + CHECK_EQUAL_64(0, x5); + CHECK_EQUAL_64(0, x6); + CHECK_EQUAL_64(1, x7); TEARDOWN(); } @@ -9422,14 +9624,14 @@ TEST(jump_either_smi) { RUN(); - ASSERT_EQUAL_64(0x5555555500000001UL, x0); - ASSERT_EQUAL_64(0xaaaaaaaa00000001UL, x1); - ASSERT_EQUAL_64(0x1234567800000000UL, x2); - ASSERT_EQUAL_64(0x8765432100000000UL, x3); - ASSERT_EQUAL_64(0, x4); - ASSERT_EQUAL_64(1, x5); - ASSERT_EQUAL_64(1, x6); - ASSERT_EQUAL_64(1, x7); + CHECK_EQUAL_64(0x5555555500000001UL, x0); + CHECK_EQUAL_64(0xaaaaaaaa00000001UL, x1); + CHECK_EQUAL_64(0x1234567800000000UL, x2); + CHECK_EQUAL_64(0x8765432100000000UL, x3); + CHECK_EQUAL_64(0, x4); + CHECK_EQUAL_64(1, x5); + CHECK_EQUAL_64(1, x6); + CHECK_EQUAL_64(1, x7); TEARDOWN(); } @@ -9780,7 +9982,7 @@ TEST(cpureglist_utils_empty) { TEST(printf) { INIT_V8(); - SETUP(); + SETUP_SIZE(BUF_SIZE * 2); START(); char const * test_plain_string = "Printf with no arguments.\n"; @@ -9821,41 +10023,49 @@ TEST(printf) { __ Mov(x11, 40); __ Mov(x12, 500); - // x8 and x9 are used by debug code in part of the macro assembler. However, - // Printf guarantees to preserve them (so we can use Printf in debug code), - // and we need to test that they are properly preserved. The above code - // shouldn't need to use them, but we initialize x8 and x9 last to be on the - // safe side. This test still assumes that none of the code from - // before->Dump() to the end of the test can clobber x8 or x9, so where - // possible we use the Assembler directly to be safe. 
- __ orr(x8, xzr, 0x8888888888888888); - __ orr(x9, xzr, 0x9999999999999999); - - // Check that we don't clobber any registers, except those that we explicitly - // write results into. + // A single character. + __ Mov(w13, 'x'); + + // Check that we don't clobber any registers. before.Dump(&masm); __ Printf(test_plain_string); // NOLINT(runtime/printf) - __ Printf("x0: %" PRId64", x1: 0x%08" PRIx64 "\n", x0, x1); + __ Printf("x0: %" PRId64 ", x1: 0x%08" PRIx64 "\n", x0, x1); + __ Printf("w5: %" PRId32 ", x5: %" PRId64"\n", w5, x5); __ Printf("d0: %f\n", d0); __ Printf("Test %%s: %s\n", x2); __ Printf("w3(uint32): %" PRIu32 "\nw4(int32): %" PRId32 "\n" "x5(uint64): %" PRIu64 "\nx6(int64): %" PRId64 "\n", w3, w4, x5, x6); __ Printf("%%f: %f\n%%g: %g\n%%e: %e\n%%E: %E\n", s1, s2, d3, d4); - __ Printf("0x%08" PRIx32 ", 0x%016" PRIx64 "\n", x28, x28); + __ Printf("0x%" PRIx32 ", 0x%" PRIx64 "\n", w28, x28); __ Printf("%g\n", d10); + __ Printf("%%%%%s%%%c%%\n", x2, w13); + + // Print the stack pointer (csp). + DCHECK(csp.Is(__ StackPointer())); + __ Printf("StackPointer(csp): 0x%016" PRIx64 ", 0x%08" PRIx32 "\n", + __ StackPointer(), __ StackPointer().W()); // Test with a different stack pointer. const Register old_stack_pointer = __ StackPointer(); - __ mov(x29, old_stack_pointer); + __ Mov(x29, old_stack_pointer); __ SetStackPointer(x29); - __ Printf("old_stack_pointer: 0x%016" PRIx64 "\n", old_stack_pointer); - __ mov(old_stack_pointer, __ StackPointer()); + // Print the stack pointer (not csp). + __ Printf("StackPointer(not csp): 0x%016" PRIx64 ", 0x%08" PRIx32 "\n", + __ StackPointer(), __ StackPointer().W()); + __ Mov(old_stack_pointer, __ StackPointer()); __ SetStackPointer(old_stack_pointer); + // Test with three arguments. __ Printf("3=%u, 4=%u, 5=%u\n", x10, x11, x12); + // Mixed argument types. + __ Printf("w3: %" PRIu32 ", s1: %f, x5: %" PRIu64 ", d3: %f\n", + w3, s1, x5, d3); + __ Printf("s1: %f, d3: %f, w3: %" PRId32 ", x5: %" PRId64 "\n", + s1, d3, w3, x5); + END(); RUN(); @@ -9863,7 +10073,7 @@ TEST(printf) { // Printf preserves all registers by default, we can't look at the number of // bytes that were printed. However, the printf_no_preserve test should check // that, and here we just test that we didn't clobber any registers. - ASSERT_EQUAL_REGISTERS(before); + CHECK_EQUAL_REGISTERS(before); TEARDOWN(); } @@ -9877,7 +10087,7 @@ TEST(printf_no_preserve) { char const * test_plain_string = "Printf with no arguments.\n"; char const * test_substring = "'This is a substring.'"; - __ PrintfNoPreserve(test_plain_string); // NOLINT(runtime/printf) + __ PrintfNoPreserve(test_plain_string); __ Mov(x19, x0); // Test simple integer arguments. @@ -9915,7 +10125,7 @@ TEST(printf_no_preserve) { // Test printing callee-saved registers. __ Mov(x28, 0x123456789abcdef); - __ PrintfNoPreserve("0x%08" PRIx32 ", 0x%016" PRIx64 "\n", x28, x28); + __ PrintfNoPreserve("0x%" PRIx32 ", 0x%" PRIx64 "\n", w28, x28); __ Mov(x25, x0); __ Fmov(d10, 42.0); @@ -9926,11 +10136,11 @@ TEST(printf_no_preserve) { const Register old_stack_pointer = __ StackPointer(); __ Mov(x29, old_stack_pointer); __ SetStackPointer(x29); - - __ PrintfNoPreserve("old_stack_pointer: 0x%016" PRIx64 "\n", - old_stack_pointer); + // Print the stack pointer (not csp). 
+ __ PrintfNoPreserve( + "StackPointer(not csp): 0x%016" PRIx64 ", 0x%08" PRIx32 "\n", + __ StackPointer(), __ StackPointer().W()); __ Mov(x27, x0); - __ Mov(old_stack_pointer, __ StackPointer()); __ SetStackPointer(old_stack_pointer); @@ -9941,6 +10151,15 @@ TEST(printf_no_preserve) { __ PrintfNoPreserve("3=%u, 4=%u, 5=%u\n", x3, x4, x5); __ Mov(x28, x0); + // Mixed argument types. + __ Mov(w3, 0xffffffff); + __ Fmov(s1, 1.234); + __ Mov(x5, 0xffffffffffffffff); + __ Fmov(d3, 3.456); + __ PrintfNoPreserve("w3: %" PRIu32 ", s1: %f, x5: %" PRIu64 ", d3: %f\n", + w3, s1, x5, d3); + __ Mov(x29, x0); + END(); RUN(); @@ -9948,33 +10167,35 @@ TEST(printf_no_preserve) { // use the return code to check that the string length was correct. // Printf with no arguments. - ASSERT_EQUAL_64(strlen(test_plain_string), x19); + CHECK_EQUAL_64(strlen(test_plain_string), x19); // x0: 1234, x1: 0x00001234 - ASSERT_EQUAL_64(25, x20); + CHECK_EQUAL_64(25, x20); // d0: 1.234000 - ASSERT_EQUAL_64(13, x21); + CHECK_EQUAL_64(13, x21); // Test %s: 'This is a substring.' - ASSERT_EQUAL_64(32, x22); + CHECK_EQUAL_64(32, x22); // w3(uint32): 4294967295 // w4(int32): -1 // x5(uint64): 18446744073709551615 // x6(int64): -1 - ASSERT_EQUAL_64(23 + 14 + 33 + 14, x23); + CHECK_EQUAL_64(23 + 14 + 33 + 14, x23); // %f: 1.234000 // %g: 2.345 // %e: 3.456000e+00 // %E: 4.567000E+00 - ASSERT_EQUAL_64(13 + 10 + 17 + 17, x24); - // 0x89abcdef, 0x0123456789abcdef - ASSERT_EQUAL_64(31, x25); + CHECK_EQUAL_64(13 + 10 + 17 + 17, x24); + // 0x89abcdef, 0x123456789abcdef + CHECK_EQUAL_64(30, x25); // 42 - ASSERT_EQUAL_64(3, x26); - // old_stack_pointer: 0x00007fb037ae2370 + CHECK_EQUAL_64(3, x26); + // StackPointer(not csp): 0x00007fb037ae2370, 0x37ae2370 // Note: This is an example value, but the field width is fixed here so the // string length is still predictable. - ASSERT_EQUAL_64(38, x27); + CHECK_EQUAL_64(54, x27); // 3=3, 4=40, 5=500 - ASSERT_EQUAL_64(17, x28); + CHECK_EQUAL_64(17, x28); + // w3: 4294967295, s1: 1.234000, x5: 18446744073709551615, d3: 3.456000 + CHECK_EQUAL_64(69, x29); TEARDOWN(); } @@ -10071,14 +10292,14 @@ static void DoSmiAbsTest(int32_t value, bool must_fail = false) { if (must_fail) { // We tested an invalid conversion. The code must have jump on slow. - ASSERT_EQUAL_64(0xbad, x2); + CHECK_EQUAL_64(0xbad, x2); } else { // The conversion is valid, check the result. int32_t result = (value >= 0) ? value : -value; - ASSERT_EQUAL_64(result, x1); + CHECK_EQUAL_64(result, x1); // Check that we didn't jump on slow. - ASSERT_EQUAL_64(0xc001c0de, x2); + CHECK_EQUAL_64(0xc001c0de, x2); } TEARDOWN(); @@ -10125,7 +10346,7 @@ TEST(blr_lr) { RUN(); - ASSERT_EQUAL_64(0xc001c0de, x0); + CHECK_EQUAL_64(0xc001c0de, x0); TEARDOWN(); } @@ -10196,14 +10417,14 @@ TEST(process_nan_double) { // Make sure that NaN propagation works correctly. double sn = rawbits_to_double(0x7ff5555511111111); double qn = rawbits_to_double(0x7ffaaaaa11111111); - ASSERT(IsSignallingNaN(sn)); - ASSERT(IsQuietNaN(qn)); + DCHECK(IsSignallingNaN(sn)); + DCHECK(IsQuietNaN(qn)); // The input NaNs after passing through ProcessNaN. 
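// Editor's note (illustrative sketch, not part of the upstream patch):
// "quieting" a signalling double NaN amounts to setting bit 51, the top
// fraction bit, while preserving the payload. A minimal host-side
// equivalent of that bit manipulation:
//
//   #include <stdint.h>
//   // Set the quiet bit (bit 51) of a double's raw encoding.
//   static inline uint64_t QuietDoubleNaNBits(uint64_t raw) {
//     return raw | (UINT64_C(1) << 51);
//   }
//
// QuietDoubleNaNBits(0x7ff5555511111111) == 0x7ffd555511111111, which is
// exactly the sn -> sn_proc transformation below (0x7ff5 | 0x0008 == 0x7ffd).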
double sn_proc = rawbits_to_double(0x7ffd555511111111); double qn_proc = qn; - ASSERT(IsQuietNaN(sn_proc)); - ASSERT(IsQuietNaN(qn_proc)); + DCHECK(IsQuietNaN(sn_proc)); + DCHECK(IsQuietNaN(qn_proc)); SETUP(); START(); @@ -10244,24 +10465,24 @@ TEST(process_nan_double) { uint64_t sn_raw = double_to_rawbits(sn); // - Signalling NaN - ASSERT_EQUAL_FP64(sn, d1); - ASSERT_EQUAL_FP64(rawbits_to_double(sn_raw & ~kDSignMask), d2); - ASSERT_EQUAL_FP64(rawbits_to_double(sn_raw ^ kDSignMask), d3); + CHECK_EQUAL_FP64(sn, d1); + CHECK_EQUAL_FP64(rawbits_to_double(sn_raw & ~kDSignMask), d2); + CHECK_EQUAL_FP64(rawbits_to_double(sn_raw ^ kDSignMask), d3); // - Quiet NaN - ASSERT_EQUAL_FP64(qn, d11); - ASSERT_EQUAL_FP64(rawbits_to_double(qn_raw & ~kDSignMask), d12); - ASSERT_EQUAL_FP64(rawbits_to_double(qn_raw ^ kDSignMask), d13); + CHECK_EQUAL_FP64(qn, d11); + CHECK_EQUAL_FP64(rawbits_to_double(qn_raw & ~kDSignMask), d12); + CHECK_EQUAL_FP64(rawbits_to_double(qn_raw ^ kDSignMask), d13); // - Signalling NaN - ASSERT_EQUAL_FP64(sn_proc, d4); - ASSERT_EQUAL_FP64(sn_proc, d5); - ASSERT_EQUAL_FP64(sn_proc, d6); - ASSERT_EQUAL_FP64(sn_proc, d7); + CHECK_EQUAL_FP64(sn_proc, d4); + CHECK_EQUAL_FP64(sn_proc, d5); + CHECK_EQUAL_FP64(sn_proc, d6); + CHECK_EQUAL_FP64(sn_proc, d7); // - Quiet NaN - ASSERT_EQUAL_FP64(qn_proc, d14); - ASSERT_EQUAL_FP64(qn_proc, d15); - ASSERT_EQUAL_FP64(qn_proc, d16); - ASSERT_EQUAL_FP64(qn_proc, d17); + CHECK_EQUAL_FP64(qn_proc, d14); + CHECK_EQUAL_FP64(qn_proc, d15); + CHECK_EQUAL_FP64(qn_proc, d16); + CHECK_EQUAL_FP64(qn_proc, d17); TEARDOWN(); } @@ -10272,14 +10493,14 @@ TEST(process_nan_float) { // Make sure that NaN propagation works correctly. float sn = rawbits_to_float(0x7f951111); float qn = rawbits_to_float(0x7fea1111); - ASSERT(IsSignallingNaN(sn)); - ASSERT(IsQuietNaN(qn)); + DCHECK(IsSignallingNaN(sn)); + DCHECK(IsQuietNaN(qn)); // The input NaNs after passing through ProcessNaN. 
float sn_proc = rawbits_to_float(0x7fd51111); float qn_proc = qn; - ASSERT(IsQuietNaN(sn_proc)); - ASSERT(IsQuietNaN(qn_proc)); + DCHECK(IsQuietNaN(sn_proc)); + DCHECK(IsQuietNaN(qn_proc)); SETUP(); START(); @@ -10320,32 +10541,32 @@ TEST(process_nan_float) { uint32_t sn_raw = float_to_rawbits(sn); // - Signalling NaN - ASSERT_EQUAL_FP32(sn, s1); - ASSERT_EQUAL_FP32(rawbits_to_float(sn_raw & ~kSSignMask), s2); - ASSERT_EQUAL_FP32(rawbits_to_float(sn_raw ^ kSSignMask), s3); + CHECK_EQUAL_FP32(sn, s1); + CHECK_EQUAL_FP32(rawbits_to_float(sn_raw & ~kSSignMask), s2); + CHECK_EQUAL_FP32(rawbits_to_float(sn_raw ^ kSSignMask), s3); // - Quiet NaN - ASSERT_EQUAL_FP32(qn, s11); - ASSERT_EQUAL_FP32(rawbits_to_float(qn_raw & ~kSSignMask), s12); - ASSERT_EQUAL_FP32(rawbits_to_float(qn_raw ^ kSSignMask), s13); + CHECK_EQUAL_FP32(qn, s11); + CHECK_EQUAL_FP32(rawbits_to_float(qn_raw & ~kSSignMask), s12); + CHECK_EQUAL_FP32(rawbits_to_float(qn_raw ^ kSSignMask), s13); // - Signalling NaN - ASSERT_EQUAL_FP32(sn_proc, s4); - ASSERT_EQUAL_FP32(sn_proc, s5); - ASSERT_EQUAL_FP32(sn_proc, s6); - ASSERT_EQUAL_FP32(sn_proc, s7); + CHECK_EQUAL_FP32(sn_proc, s4); + CHECK_EQUAL_FP32(sn_proc, s5); + CHECK_EQUAL_FP32(sn_proc, s6); + CHECK_EQUAL_FP32(sn_proc, s7); // - Quiet NaN - ASSERT_EQUAL_FP32(qn_proc, s14); - ASSERT_EQUAL_FP32(qn_proc, s15); - ASSERT_EQUAL_FP32(qn_proc, s16); - ASSERT_EQUAL_FP32(qn_proc, s17); + CHECK_EQUAL_FP32(qn_proc, s14); + CHECK_EQUAL_FP32(qn_proc, s15); + CHECK_EQUAL_FP32(qn_proc, s16); + CHECK_EQUAL_FP32(qn_proc, s17); TEARDOWN(); } static void ProcessNaNsHelper(double n, double m, double expected) { - ASSERT(std::isnan(n) || std::isnan(m)); - ASSERT(isnan(expected)); + DCHECK(std::isnan(n) || std::isnan(m)); + DCHECK(std::isnan(expected)); SETUP(); START(); @@ -10365,12 +10586,12 @@ static void ProcessNaNsHelper(double n, double m, double expected) { END(); RUN(); - ASSERT_EQUAL_FP64(expected, d2); - ASSERT_EQUAL_FP64(expected, d3); - ASSERT_EQUAL_FP64(expected, d4); - ASSERT_EQUAL_FP64(expected, d5); - ASSERT_EQUAL_FP64(expected, d6); - ASSERT_EQUAL_FP64(expected, d7); + CHECK_EQUAL_FP64(expected, d2); + CHECK_EQUAL_FP64(expected, d3); + CHECK_EQUAL_FP64(expected, d4); + CHECK_EQUAL_FP64(expected, d5); + CHECK_EQUAL_FP64(expected, d6); + CHECK_EQUAL_FP64(expected, d7); TEARDOWN(); } @@ -10383,20 +10604,20 @@ TEST(process_nans_double) { double sm = rawbits_to_double(0x7ff5555522222222); double qn = rawbits_to_double(0x7ffaaaaa11111111); double qm = rawbits_to_double(0x7ffaaaaa22222222); - ASSERT(IsSignallingNaN(sn)); - ASSERT(IsSignallingNaN(sm)); - ASSERT(IsQuietNaN(qn)); - ASSERT(IsQuietNaN(qm)); + DCHECK(IsSignallingNaN(sn)); + DCHECK(IsSignallingNaN(sm)); + DCHECK(IsQuietNaN(qn)); + DCHECK(IsQuietNaN(qm)); // The input NaNs after passing through ProcessNaN. double sn_proc = rawbits_to_double(0x7ffd555511111111); double sm_proc = rawbits_to_double(0x7ffd555522222222); double qn_proc = qn; double qm_proc = qm; - ASSERT(IsQuietNaN(sn_proc)); - ASSERT(IsQuietNaN(sm_proc)); - ASSERT(IsQuietNaN(qn_proc)); - ASSERT(IsQuietNaN(qm_proc)); + DCHECK(IsQuietNaN(sn_proc)); + DCHECK(IsQuietNaN(sm_proc)); + DCHECK(IsQuietNaN(qn_proc)); + DCHECK(IsQuietNaN(qm_proc)); // Quiet NaNs are propagated. 
ProcessNaNsHelper(qn, 0, qn_proc); @@ -10416,8 +10637,8 @@ TEST(process_nans_double) { static void ProcessNaNsHelper(float n, float m, float expected) { - ASSERT(std::isnan(n) || std::isnan(m)); - ASSERT(isnan(expected)); + DCHECK(std::isnan(n) || std::isnan(m)); + DCHECK(std::isnan(expected)); SETUP(); START(); @@ -10437,12 +10658,12 @@ static void ProcessNaNsHelper(float n, float m, float expected) { END(); RUN(); - ASSERT_EQUAL_FP32(expected, s2); - ASSERT_EQUAL_FP32(expected, s3); - ASSERT_EQUAL_FP32(expected, s4); - ASSERT_EQUAL_FP32(expected, s5); - ASSERT_EQUAL_FP32(expected, s6); - ASSERT_EQUAL_FP32(expected, s7); + CHECK_EQUAL_FP32(expected, s2); + CHECK_EQUAL_FP32(expected, s3); + CHECK_EQUAL_FP32(expected, s4); + CHECK_EQUAL_FP32(expected, s5); + CHECK_EQUAL_FP32(expected, s6); + CHECK_EQUAL_FP32(expected, s7); TEARDOWN(); } @@ -10455,20 +10676,20 @@ TEST(process_nans_float) { float sm = rawbits_to_float(0x7f952222); float qn = rawbits_to_float(0x7fea1111); float qm = rawbits_to_float(0x7fea2222); - ASSERT(IsSignallingNaN(sn)); - ASSERT(IsSignallingNaN(sm)); - ASSERT(IsQuietNaN(qn)); - ASSERT(IsQuietNaN(qm)); + DCHECK(IsSignallingNaN(sn)); + DCHECK(IsSignallingNaN(sm)); + DCHECK(IsQuietNaN(qn)); + DCHECK(IsQuietNaN(qm)); // The input NaNs after passing through ProcessNaN. float sn_proc = rawbits_to_float(0x7fd51111); float sm_proc = rawbits_to_float(0x7fd52222); float qn_proc = qn; float qm_proc = qm; - ASSERT(IsQuietNaN(sn_proc)); - ASSERT(IsQuietNaN(sm_proc)); - ASSERT(IsQuietNaN(qn_proc)); - ASSERT(IsQuietNaN(qm_proc)); + DCHECK(IsQuietNaN(sn_proc)); + DCHECK(IsQuietNaN(sm_proc)); + DCHECK(IsQuietNaN(qn_proc)); + DCHECK(IsQuietNaN(qm_proc)); // Quiet NaNs are propagated. ProcessNaNsHelper(qn, 0, qn_proc); @@ -10488,7 +10709,7 @@ TEST(process_nans_float) { static void DefaultNaNHelper(float n, float m, float a) { - ASSERT(std::isnan(n) || std::isnan(m) || isnan(a)); + DCHECK(std::isnan(n) || std::isnan(m) || std::isnan(a)); bool test_1op = std::isnan(n); bool test_2op = std::isnan(n) || std::isnan(m); @@ -10545,29 +10766,29 @@ static void DefaultNaNHelper(float n, float m, float a) { if (test_1op) { uint32_t n_raw = float_to_rawbits(n); - ASSERT_EQUAL_FP32(n, s10); - ASSERT_EQUAL_FP32(rawbits_to_float(n_raw & ~kSSignMask), s11); - ASSERT_EQUAL_FP32(rawbits_to_float(n_raw ^ kSSignMask), s12); - ASSERT_EQUAL_FP32(kFP32DefaultNaN, s13); - ASSERT_EQUAL_FP32(kFP32DefaultNaN, s14); - ASSERT_EQUAL_FP32(kFP32DefaultNaN, s15); - ASSERT_EQUAL_FP32(kFP32DefaultNaN, s16); - ASSERT_EQUAL_FP64(kFP64DefaultNaN, d17); + CHECK_EQUAL_FP32(n, s10); + CHECK_EQUAL_FP32(rawbits_to_float(n_raw & ~kSSignMask), s11); + CHECK_EQUAL_FP32(rawbits_to_float(n_raw ^ kSSignMask), s12); + CHECK_EQUAL_FP32(kFP32DefaultNaN, s13); + CHECK_EQUAL_FP32(kFP32DefaultNaN, s14); + CHECK_EQUAL_FP32(kFP32DefaultNaN, s15); + CHECK_EQUAL_FP32(kFP32DefaultNaN, s16); + CHECK_EQUAL_FP64(kFP64DefaultNaN, d17); } if (test_2op) { - ASSERT_EQUAL_FP32(kFP32DefaultNaN, s18); - ASSERT_EQUAL_FP32(kFP32DefaultNaN, s19); - ASSERT_EQUAL_FP32(kFP32DefaultNaN, s20); - ASSERT_EQUAL_FP32(kFP32DefaultNaN, s21); - ASSERT_EQUAL_FP32(kFP32DefaultNaN, s22); - ASSERT_EQUAL_FP32(kFP32DefaultNaN, s23); + CHECK_EQUAL_FP32(kFP32DefaultNaN, s18); + CHECK_EQUAL_FP32(kFP32DefaultNaN, s19); + CHECK_EQUAL_FP32(kFP32DefaultNaN, s20); + CHECK_EQUAL_FP32(kFP32DefaultNaN, s21); + CHECK_EQUAL_FP32(kFP32DefaultNaN, s22); + CHECK_EQUAL_FP32(kFP32DefaultNaN, s23); } - ASSERT_EQUAL_FP32(kFP32DefaultNaN, s24); - ASSERT_EQUAL_FP32(kFP32DefaultNaN, s25); - 
ASSERT_EQUAL_FP32(kFP32DefaultNaN, s26); - ASSERT_EQUAL_FP32(kFP32DefaultNaN, s27); + CHECK_EQUAL_FP32(kFP32DefaultNaN, s24); + CHECK_EQUAL_FP32(kFP32DefaultNaN, s25); + CHECK_EQUAL_FP32(kFP32DefaultNaN, s26); + CHECK_EQUAL_FP32(kFP32DefaultNaN, s27); TEARDOWN(); } @@ -10581,12 +10802,12 @@ TEST(default_nan_float) { float qn = rawbits_to_float(0x7fea1111); float qm = rawbits_to_float(0x7fea2222); float qa = rawbits_to_float(0x7feaaaaa); - ASSERT(IsSignallingNaN(sn)); - ASSERT(IsSignallingNaN(sm)); - ASSERT(IsSignallingNaN(sa)); - ASSERT(IsQuietNaN(qn)); - ASSERT(IsQuietNaN(qm)); - ASSERT(IsQuietNaN(qa)); + DCHECK(IsSignallingNaN(sn)); + DCHECK(IsSignallingNaN(sm)); + DCHECK(IsSignallingNaN(sa)); + DCHECK(IsQuietNaN(qn)); + DCHECK(IsQuietNaN(qm)); + DCHECK(IsQuietNaN(qa)); // - Signalling NaNs DefaultNaNHelper(sn, 0.0f, 0.0f); @@ -10616,7 +10837,7 @@ TEST(default_nan_float) { static void DefaultNaNHelper(double n, double m, double a) { - ASSERT(std::isnan(n) || std::isnan(m) || isnan(a)); + DCHECK(std::isnan(n) || std::isnan(m) || std::isnan(a)); bool test_1op = std::isnan(n); bool test_2op = std::isnan(n) || std::isnan(m); @@ -10673,29 +10894,29 @@ static void DefaultNaNHelper(double n, double m, double a) { if (test_1op) { uint64_t n_raw = double_to_rawbits(n); - ASSERT_EQUAL_FP64(n, d10); - ASSERT_EQUAL_FP64(rawbits_to_double(n_raw & ~kDSignMask), d11); - ASSERT_EQUAL_FP64(rawbits_to_double(n_raw ^ kDSignMask), d12); - ASSERT_EQUAL_FP64(kFP64DefaultNaN, d13); - ASSERT_EQUAL_FP64(kFP64DefaultNaN, d14); - ASSERT_EQUAL_FP64(kFP64DefaultNaN, d15); - ASSERT_EQUAL_FP64(kFP64DefaultNaN, d16); - ASSERT_EQUAL_FP32(kFP32DefaultNaN, s17); + CHECK_EQUAL_FP64(n, d10); + CHECK_EQUAL_FP64(rawbits_to_double(n_raw & ~kDSignMask), d11); + CHECK_EQUAL_FP64(rawbits_to_double(n_raw ^ kDSignMask), d12); + CHECK_EQUAL_FP64(kFP64DefaultNaN, d13); + CHECK_EQUAL_FP64(kFP64DefaultNaN, d14); + CHECK_EQUAL_FP64(kFP64DefaultNaN, d15); + CHECK_EQUAL_FP64(kFP64DefaultNaN, d16); + CHECK_EQUAL_FP32(kFP32DefaultNaN, s17); } if (test_2op) { - ASSERT_EQUAL_FP64(kFP64DefaultNaN, d18); - ASSERT_EQUAL_FP64(kFP64DefaultNaN, d19); - ASSERT_EQUAL_FP64(kFP64DefaultNaN, d20); - ASSERT_EQUAL_FP64(kFP64DefaultNaN, d21); - ASSERT_EQUAL_FP64(kFP64DefaultNaN, d22); - ASSERT_EQUAL_FP64(kFP64DefaultNaN, d23); + CHECK_EQUAL_FP64(kFP64DefaultNaN, d18); + CHECK_EQUAL_FP64(kFP64DefaultNaN, d19); + CHECK_EQUAL_FP64(kFP64DefaultNaN, d20); + CHECK_EQUAL_FP64(kFP64DefaultNaN, d21); + CHECK_EQUAL_FP64(kFP64DefaultNaN, d22); + CHECK_EQUAL_FP64(kFP64DefaultNaN, d23); } - ASSERT_EQUAL_FP64(kFP64DefaultNaN, d24); - ASSERT_EQUAL_FP64(kFP64DefaultNaN, d25); - ASSERT_EQUAL_FP64(kFP64DefaultNaN, d26); - ASSERT_EQUAL_FP64(kFP64DefaultNaN, d27); + CHECK_EQUAL_FP64(kFP64DefaultNaN, d24); + CHECK_EQUAL_FP64(kFP64DefaultNaN, d25); + CHECK_EQUAL_FP64(kFP64DefaultNaN, d26); + CHECK_EQUAL_FP64(kFP64DefaultNaN, d27); TEARDOWN(); } @@ -10709,12 +10930,12 @@ TEST(default_nan_double) { double qn = rawbits_to_double(0x7ffaaaaa11111111); double qm = rawbits_to_double(0x7ffaaaaa22222222); double qa = rawbits_to_double(0x7ffaaaaaaaaaaaaa); - ASSERT(IsSignallingNaN(sn)); - ASSERT(IsSignallingNaN(sm)); - ASSERT(IsSignallingNaN(sa)); - ASSERT(IsQuietNaN(qn)); - ASSERT(IsQuietNaN(qm)); - ASSERT(IsQuietNaN(qa)); + DCHECK(IsSignallingNaN(sn)); + DCHECK(IsSignallingNaN(sm)); + DCHECK(IsSignallingNaN(sa)); + DCHECK(IsQuietNaN(qn)); + DCHECK(IsQuietNaN(qm)); + DCHECK(IsQuietNaN(qa)); // - Signalling NaNs DefaultNaNHelper(sn, 0.0, 0.0); @@ -10775,7 +10996,7 @@ 
TEST(call_no_relocation) { RUN(); - ASSERT_EQUAL_64(1, x0); + CHECK_EQUAL_64(1, x0); // The return_address_from_call_start function doesn't currently encounter any // non-relocatable sequences, so we check it here to make sure it works. @@ -10832,12 +11053,12 @@ static void AbsHelperX(int64_t value) { END(); RUN(); - ASSERT_EQUAL_64(0, x0); - ASSERT_EQUAL_64(value, x1); - ASSERT_EQUAL_64(expected, x10); - ASSERT_EQUAL_64(expected, x11); - ASSERT_EQUAL_64(expected, x12); - ASSERT_EQUAL_64(expected, x13); + CHECK_EQUAL_64(0, x0); + CHECK_EQUAL_64(value, x1); + CHECK_EQUAL_64(expected, x10); + CHECK_EQUAL_64(expected, x11); + CHECK_EQUAL_64(expected, x12); + CHECK_EQUAL_64(expected, x13); TEARDOWN(); } @@ -10889,12 +11110,12 @@ static void AbsHelperW(int32_t value) { END(); RUN(); - ASSERT_EQUAL_32(0, w0); - ASSERT_EQUAL_32(value, w1); - ASSERT_EQUAL_32(expected, w10); - ASSERT_EQUAL_32(expected, w11); - ASSERT_EQUAL_32(expected, w12); - ASSERT_EQUAL_32(expected, w13); + CHECK_EQUAL_32(0, w0); + CHECK_EQUAL_32(value, w1); + CHECK_EQUAL_32(expected, w10); + CHECK_EQUAL_32(expected, w11); + CHECK_EQUAL_32(expected, w12); + CHECK_EQUAL_32(expected, w13); TEARDOWN(); } @@ -10952,16 +11173,16 @@ TEST(pool_size) { for (RelocIterator it(*code, pool_mask); !it.done(); it.next()) { RelocInfo* info = it.rinfo(); if (RelocInfo::IsConstPool(info->rmode())) { - ASSERT(info->data() == constant_pool_size); + DCHECK(info->data() == constant_pool_size); ++pool_count; } if (RelocInfo::IsVeneerPool(info->rmode())) { - ASSERT(info->data() == veneer_pool_size); + DCHECK(info->data() == veneer_pool_size); ++pool_count; } } - ASSERT(pool_count == 2); + DCHECK(pool_count == 2); TEARDOWN(); } diff --git a/deps/v8/test/cctest/test-assembler-ia32.cc b/deps/v8/test/cctest/test-assembler-ia32.cc index ba83b3d7eae..e8c7f951feb 100644 --- a/deps/v8/test/cctest/test-assembler-ia32.cc +++ b/deps/v8/test/cctest/test-assembler-ia32.cc @@ -27,14 +27,15 @@ #include <stdlib.h> -#include "v8.h" +#include "src/v8.h" -#include "disassembler.h" -#include "factory.h" -#include "macro-assembler.h" -#include "platform.h" -#include "serialize.h" -#include "cctest.h" +#include "src/base/platform/platform.h" +#include "src/disassembler.h" +#include "src/factory.h" +#include "src/macro-assembler.h" +#include "src/ostreams.h" +#include "src/serialize.h" +#include "test/cctest/cctest.h" using namespace v8::internal; @@ -63,7 +64,8 @@ TEST(AssemblerIa320) { Handle<Code> code = isolate->factory()->NewCode( desc, Code::ComputeFlags(Code::STUB), Handle<Code>()); #ifdef OBJECT_PRINT - code->Print(); + OFStream os(stdout); + code->Print(os); #endif F2 f = FUNCTION_CAST<F2>(code->entry()); int res = f(3, 4); @@ -99,7 +101,8 @@ TEST(AssemblerIa321) { Handle<Code> code = isolate->factory()->NewCode( desc, Code::ComputeFlags(Code::STUB), Handle<Code>()); #ifdef OBJECT_PRINT - code->Print(); + OFStream os(stdout); + code->Print(os); #endif F1 f = FUNCTION_CAST<F1>(code->entry()); int res = f(100); @@ -139,7 +142,8 @@ TEST(AssemblerIa322) { Handle<Code> code = isolate->factory()->NewCode( desc, Code::ComputeFlags(Code::STUB), Handle<Code>()); #ifdef OBJECT_PRINT - code->Print(); + OFStream os(stdout); + code->Print(os); #endif F1 f = FUNCTION_CAST<F1>(code->entry()); int res = f(10); @@ -152,7 +156,6 @@ typedef int (*F3)(float x); TEST(AssemblerIa323) { CcTest::InitializeVM(); - if (!CpuFeatures::IsSupported(SSE2)) return; Isolate* isolate = reinterpret_cast<Isolate*>(CcTest::isolate()); HandleScope scope(isolate); @@ -160,11 +163,8 @@ 
TEST(AssemblerIa323) { v8::internal::byte buffer[256]; Assembler assm(isolate, buffer, sizeof buffer); - CHECK(CpuFeatures::IsSupported(SSE2)); - { CpuFeatureScope fscope(&assm, SSE2); - __ cvttss2si(eax, Operand(esp, 4)); - __ ret(0); - } + __ cvttss2si(eax, Operand(esp, 4)); + __ ret(0); CodeDesc desc; assm.GetCode(&desc); @@ -186,7 +186,6 @@ typedef int (*F4)(double x); TEST(AssemblerIa324) { CcTest::InitializeVM(); - if (!CpuFeatures::IsSupported(SSE2)) return; Isolate* isolate = reinterpret_cast<Isolate*>(CcTest::isolate()); HandleScope scope(isolate); @@ -194,8 +193,6 @@ TEST(AssemblerIa324) { v8::internal::byte buffer[256]; Assembler assm(isolate, buffer, sizeof buffer); - CHECK(CpuFeatures::IsSupported(SSE2)); - CpuFeatureScope fscope(&assm, SSE2); __ cvttsd2si(eax, Operand(esp, 4)); __ ret(0); @@ -241,14 +238,12 @@ typedef double (*F5)(double x, double y); TEST(AssemblerIa326) { CcTest::InitializeVM(); - if (!CpuFeatures::IsSupported(SSE2)) return; Isolate* isolate = reinterpret_cast<Isolate*>(CcTest::isolate()); HandleScope scope(isolate); v8::internal::byte buffer[256]; Assembler assm(isolate, buffer, sizeof buffer); - CpuFeatureScope fscope(&assm, SSE2); __ movsd(xmm0, Operand(esp, 1 * kPointerSize)); __ movsd(xmm1, Operand(esp, 3 * kPointerSize)); __ addsd(xmm0, xmm1); @@ -285,13 +280,11 @@ typedef double (*F6)(int x); TEST(AssemblerIa328) { CcTest::InitializeVM(); - if (!CpuFeatures::IsSupported(SSE2)) return; Isolate* isolate = reinterpret_cast<Isolate*>(CcTest::isolate()); HandleScope scope(isolate); v8::internal::byte buffer[256]; Assembler assm(isolate, buffer, sizeof buffer); - CpuFeatureScope fscope(&assm, SSE2); __ mov(eax, Operand(esp, 4)); __ cvtsi2sd(xmm0, eax); // Copy xmm0 to st(0) using eight bytes of stack. @@ -305,7 +298,8 @@ TEST(AssemblerIa328) { Handle<Code> code = isolate->factory()->NewCode( desc, Code::ComputeFlags(Code::STUB), Handle<Code>()); #ifdef OBJECT_PRINT - code->Print(); + OFStream os(stdout); + code->Print(os); #endif F6 f = FUNCTION_CAST<F6>(code->entry()); double res = f(12); @@ -358,14 +352,15 @@ TEST(AssemblerIa329) { Handle<Code> code = isolate->factory()->NewCode( desc, Code::ComputeFlags(Code::STUB), Handle<Code>()); #ifdef OBJECT_PRINT - code->Print(); + OFStream os(stdout); + code->Print(os); #endif F7 f = FUNCTION_CAST<F7>(code->entry()); CHECK_EQ(kLess, f(1.1, 2.2)); CHECK_EQ(kEqual, f(2.2, 2.2)); CHECK_EQ(kGreater, f(3.3, 2.2)); - CHECK_EQ(kNaN, f(OS::nan_value(), 1.1)); + CHECK_EQ(kNaN, f(v8::base::OS::nan_value(), 1.1)); } @@ -462,9 +457,6 @@ void DoSSE2(const v8::FunctionCallbackInfo<v8::Value>& args) { v8::internal::byte buffer[256]; Assembler assm(isolate, buffer, sizeof buffer); - ASSERT(CpuFeatures::IsSupported(SSE2)); - CpuFeatureScope fscope(&assm, SSE2); - // Remove return address from the stack for fix stack frame alignment. 
__ pop(ecx); @@ -500,9 +492,7 @@ void DoSSE2(const v8::FunctionCallbackInfo<v8::Value>& args) { TEST(StackAlignmentForSSE2) { CcTest::InitializeVM(); - if (!CpuFeatures::IsSupported(SSE2)) return; - - CHECK_EQ(0, OS::ActivationFrameAlignment() % 16); + CHECK_EQ(0, v8::base::OS::ActivationFrameAlignment() % 16); v8::Isolate* isolate = CcTest::isolate(); v8::HandleScope handle_scope(isolate); @@ -540,15 +530,13 @@ TEST(StackAlignmentForSSE2) { TEST(AssemblerIa32Extractps) { CcTest::InitializeVM(); - if (!CpuFeatures::IsSupported(SSE2) || - !CpuFeatures::IsSupported(SSE4_1)) return; + if (!CpuFeatures::IsSupported(SSE4_1)) return; Isolate* isolate = reinterpret_cast<Isolate*>(CcTest::isolate()); HandleScope scope(isolate); v8::internal::byte buffer[256]; MacroAssembler assm(isolate, buffer, sizeof buffer); - { CpuFeatureScope fscope2(&assm, SSE2); - CpuFeatureScope fscope41(&assm, SSE4_1); + { CpuFeatureScope fscope41(&assm, SSE4_1); __ movsd(xmm1, Operand(esp, 4)); __ extractps(eax, xmm1, 0x1); __ ret(0); @@ -559,7 +547,8 @@ TEST(AssemblerIa32Extractps) { Handle<Code> code = isolate->factory()->NewCode( desc, Code::ComputeFlags(Code::STUB), Handle<Code>()); #ifdef OBJECT_PRINT - code->Print(); + OFStream os(stdout); + code->Print(os); #endif F4 f = FUNCTION_CAST<F4>(code->entry()); @@ -573,14 +562,12 @@ TEST(AssemblerIa32Extractps) { typedef int (*F8)(float x, float y); TEST(AssemblerIa32SSE) { CcTest::InitializeVM(); - if (!CpuFeatures::IsSupported(SSE2)) return; Isolate* isolate = reinterpret_cast<Isolate*>(CcTest::isolate()); HandleScope scope(isolate); v8::internal::byte buffer[256]; MacroAssembler assm(isolate, buffer, sizeof buffer); { - CpuFeatureScope fscope(&assm, SSE2); __ movss(xmm0, Operand(esp, kPointerSize)); __ movss(xmm1, Operand(esp, 2 * kPointerSize)); __ shufps(xmm0, xmm0, 0x0); @@ -599,7 +586,8 @@ TEST(AssemblerIa32SSE) { Handle<Code> code = isolate->factory()->NewCode( desc, Code::ComputeFlags(Code::STUB), Handle<Code>()); #ifdef OBJECT_PRINT - code->Print(); + OFStream os(stdout); + code->Print(os); #endif F8 f = FUNCTION_CAST<F8>(code->entry()); diff --git a/deps/v8/test/cctest/test-assembler-mips.cc b/deps/v8/test/cctest/test-assembler-mips.cc index e93c1ca45d2..cd1d5d6cc7d 100644 --- a/deps/v8/test/cctest/test-assembler-mips.cc +++ b/deps/v8/test/cctest/test-assembler-mips.cc @@ -25,15 +25,15 @@ // (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE // OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. -#include "v8.h" +#include "src/v8.h" -#include "disassembler.h" -#include "factory.h" -#include "macro-assembler.h" -#include "mips/macro-assembler-mips.h" -#include "mips/simulator-mips.h" +#include "src/disassembler.h" +#include "src/factory.h" +#include "src/macro-assembler.h" +#include "src/mips/macro-assembler-mips.h" +#include "src/mips/simulator-mips.h" -#include "cctest.h" +#include "test/cctest/cctest.h" using namespace v8::internal; diff --git a/deps/v8/test/cctest/test-assembler-mips64.cc b/deps/v8/test/cctest/test-assembler-mips64.cc new file mode 100644 index 00000000000..4e9238930a8 --- /dev/null +++ b/deps/v8/test/cctest/test-assembler-mips64.cc @@ -0,0 +1,1375 @@ +// Copyright 2012 the V8 project authors. All rights reserved. +// Redistribution and use in source and binary forms, with or without +// modification, are permitted provided that the following conditions are +// met: +// +// * Redistributions of source code must retain the above copyright +// notice, this list of conditions and the following disclaimer. 
+// * Redistributions in binary form must reproduce the above +// copyright notice, this list of conditions and the following +// disclaimer in the documentation and/or other materials provided +// with the distribution. +// * Neither the name of Google Inc. nor the names of its +// contributors may be used to endorse or promote products derived +// from this software without specific prior written permission. +// +// THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS +// "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT +// LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR +// A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT +// OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, +// SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT +// LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, +// DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY +// THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT +// (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE +// OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. + +#include "src/v8.h" + +#include "src/disassembler.h" +#include "src/factory.h" +#include "src/macro-assembler.h" +#include "src/mips64/macro-assembler-mips64.h" +#include "src/mips64/simulator-mips64.h" + +#include "test/cctest/cctest.h" + +using namespace v8::internal; + + +// Define these function prototypes to match JSEntryFunction in execution.cc. +typedef Object* (*F1)(int x, int p1, int p2, int p3, int p4); +typedef Object* (*F2)(int x, int y, int p2, int p3, int p4); +typedef Object* (*F3)(void* p, int p1, int p2, int p3, int p4); + + +#define __ assm. + + +TEST(MIPS0) { + CcTest::InitializeVM(); + Isolate* isolate = CcTest::i_isolate(); + HandleScope scope(isolate); + + MacroAssembler assm(isolate, NULL, 0); + + // Addition. + __ addu(v0, a0, a1); + __ jr(ra); + __ nop(); + + CodeDesc desc; + assm.GetCode(&desc); + Handle<Code> code = isolate->factory()->NewCode( + desc, Code::ComputeFlags(Code::STUB), Handle<Code>()); + F2 f = FUNCTION_CAST<F2>(code->entry()); + int64_t res = + reinterpret_cast<int64_t>(CALL_GENERATED_CODE(f, 0xab0, 0xc, 0, 0, 0)); + ::printf("f() = %ld\n", res); + CHECK_EQ(0xabcL, res); +} + + +TEST(MIPS1) { + CcTest::InitializeVM(); + Isolate* isolate = CcTest::i_isolate(); + HandleScope scope(isolate); + + MacroAssembler assm(isolate, NULL, 0); + Label L, C; + + __ mov(a1, a0); + __ li(v0, 0); + __ b(&C); + __ nop(); + + __ bind(&L); + __ addu(v0, v0, a1); + __ addiu(a1, a1, -1); + + __ bind(&C); + __ xori(v1, a1, 0); + __ Branch(&L, ne, v1, Operand((int64_t)0)); + __ nop(); + + __ jr(ra); + __ nop(); + + CodeDesc desc; + assm.GetCode(&desc); + Handle<Code> code = isolate->factory()->NewCode( + desc, Code::ComputeFlags(Code::STUB), Handle<Code>()); + F1 f = FUNCTION_CAST<F1>(code->entry()); + int64_t res = + reinterpret_cast<int64_t>(CALL_GENERATED_CODE(f, 50, 0, 0, 0, 0)); + ::printf("f() = %ld\n", res); + CHECK_EQ(1275L, res); +} + + +TEST(MIPS2) { + CcTest::InitializeVM(); + Isolate* isolate = CcTest::i_isolate(); + HandleScope scope(isolate); + + MacroAssembler assm(isolate, NULL, 0); + + Label exit, error; + + // ----- Test all instructions. + + // Test lui, ori, and addiu, used in the li pseudo-instruction. + // This way we can then safely load registers with chosen values. 
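// Editor's note (worked values, not part of the upstream patch): after the
// lui/ori/addiu warm-up below, a4 = 0x1234ffff (lui writes 0x1234 into the
// upper halfword, and ori-ing in 0x0f0f then 0xf0f0 fills the lower
// halfword with 0xffff), a5 = a4 + 1 = 0x12350000, and
// a6 = a5 - 0x10 = 0x1234fff0.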
+
+ __ ori(a4, zero_reg, 0);
+ __ lui(a4, 0x1234);
+ __ ori(a4, a4, 0);
+ __ ori(a4, a4, 0x0f0f);
+ __ ori(a4, a4, 0xf0f0);
+ __ addiu(a5, a4, 1);
+ __ addiu(a6, a5, -0x10);
+
+ // Load values in temporary registers.
+ __ li(a4, 0x00000004);
+ __ li(a5, 0x00001234);
+ __ li(a6, 0x12345678);
+ __ li(a7, 0x7fffffff);
+ __ li(t0, 0xfffffffc);
+ __ li(t1, 0xffffedcc);
+ __ li(t2, 0xedcba988);
+ __ li(t3, 0x80000000);
+
+ // SPECIAL class.
+ __ srl(v0, a6, 8); // 0x00123456
+ __ sll(v0, v0, 11); // 0x91a2b000
+ __ sra(v0, v0, 3); // 0xf2345600
+ __ srav(v0, v0, a4); // 0xff234560
+ __ sllv(v0, v0, a4); // 0xf2345600
+ __ srlv(v0, v0, a4); // 0x0f234560
+ __ Branch(&error, ne, v0, Operand(0x0f234560));
+ __ nop();
+
+ __ addu(v0, a4, a5); // 0x00001238
+ __ subu(v0, v0, a4); // 0x00001234
+ __ Branch(&error, ne, v0, Operand(0x00001234));
+ __ nop();
+ __ addu(v1, a7, a4); // 32bit addu result is sign-extended into 64bit reg.
+ __ Branch(&error, ne, v1, Operand(0xffffffff80000003));
+ __ nop();
+ __ subu(v1, t3, a4); // 0x7ffffffc
+ __ Branch(&error, ne, v1, Operand(0x7ffffffc));
+ __ nop();
+
+ __ and_(v0, a5, a6); // 0x0000000000001230
+ __ or_(v0, v0, a5); // 0x0000000000001234
+ __ xor_(v0, v0, a6); // 0x000000001234444c
+ __ nor(v0, v0, a6); // 0xffffffffedcba983
+ __ Branch(&error, ne, v0, Operand(0xffffffffedcba983));
+ __ nop();
+
+ // Shift both 32bit numbers left, to preserve the meaning of the next comparison.
+ __ dsll32(a7, a7, 0);
+ __ dsll32(t3, t3, 0);
+
+ __ slt(v0, t3, a7);
+ __ Branch(&error, ne, v0, Operand(0x1));
+ __ nop();
+ __ sltu(v0, t3, a7);
+ __ Branch(&error, ne, v0, Operand(zero_reg));
+ __ nop();
+
+ // Restore original values in registers.
+ __ dsrl32(a7, a7, 0);
+ __ dsrl32(t3, t3, 0);
+ // End of SPECIAL class.
+
+ __ addiu(v0, zero_reg, 0x7421); // 0x00007421
+ __ addiu(v0, v0, -0x1); // 0x00007420
+ __ addiu(v0, v0, -0x20); // 0x00007400
+ __ Branch(&error, ne, v0, Operand(0x00007400));
+ __ nop();
+ __ addiu(v1, a7, 0x1); // 0x80000000 - result is sign-extended.
+ __ Branch(&error, ne, v1, Operand(0xffffffff80000000));
+ __ nop();
+
+ __ slti(v0, a5, 0x00002000); // 0x1
+ __ slti(v0, v0, 0xffff8000); // 0x0
+ __ Branch(&error, ne, v0, Operand(zero_reg));
+ __ nop();
+ __ sltiu(v0, a5, 0x00002000); // 0x1
+ __ sltiu(v0, v0, 0x00008000); // 0x1
+ __ Branch(&error, ne, v0, Operand(0x1));
+ __ nop();
+
+ __ andi(v0, a5, 0xf0f0); // 0x00001030
+ __ ori(v0, v0, 0x8a00); // 0x00009a30
+ __ xori(v0, v0, 0x83cc); // 0x000019fc
+ __ Branch(&error, ne, v0, Operand(0x000019fc));
+ __ nop();
+ __ lui(v1, 0x8123); // Result is sign-extended into 64bit register.
+ __ Branch(&error, ne, v1, Operand(0xffffffff81230000));
+ __ nop();
+
+ // Bit twiddling instructions & conditional moves.
+ // Uses a4-t3 as set above.
+ __ Clz(v0, a4); // 29
+ __ Clz(v1, a5); // 19
+ __ addu(v0, v0, v1); // 48
+ __ Clz(v1, a6); // 3
+ __ addu(v0, v0, v1); // 51
+ __ Clz(v1, t3); // 0
+ __ addu(v0, v0, v1); // 51
+ __ Branch(&error, ne, v0, Operand(51));
+ __ Movn(a0, a7, a4); // Move a0<-a7 (a4 is NOT 0).
+ __ Ins(a0, a5, 12, 8); // 0x7ff34fff
+ __ Branch(&error, ne, a0, Operand(0x7ff34fff));
+ __ Movz(a0, t2, t3); // a0 not updated (t3 is NOT 0).
+ __ Ext(a1, a0, 8, 12); // 0x34f
+ __ Branch(&error, ne, a1, Operand(0x34f));
+ __ Movz(a0, t2, v1); // a0<-t2, v1 is 0, from 8 instr back.
+ __ Branch(&error, ne, a0, Operand(t2));
+
+ // Everything was correctly executed. Load the expected result.
+ __ li(v0, 0x31415926);
+ __ b(&exit);
+ __ nop();
+
+ __ bind(&error);
+ // Got an error. 
Return a wrong result. + __ li(v0, 666); + + __ bind(&exit); + __ jr(ra); + __ nop(); + + CodeDesc desc; + assm.GetCode(&desc); + Handle<Code> code = isolate->factory()->NewCode( + desc, Code::ComputeFlags(Code::STUB), Handle<Code>()); + F2 f = FUNCTION_CAST<F2>(code->entry()); + int64_t res = + reinterpret_cast<int64_t>(CALL_GENERATED_CODE(f, 0xab0, 0xc, 0, 0, 0)); + ::printf("f() = %ld\n", res); + + CHECK_EQ(0x31415926L, res); +} + + +TEST(MIPS3) { + // Test floating point instructions. + CcTest::InitializeVM(); + Isolate* isolate = CcTest::i_isolate(); + HandleScope scope(isolate); + + typedef struct { + double a; + double b; + double c; + double d; + double e; + double f; + double g; + double h; + double i; + } T; + T t; + + // Create a function that accepts &t, and loads, manipulates, and stores + // the doubles t.a ... t.f. + MacroAssembler assm(isolate, NULL, 0); + Label L, C; + + __ ldc1(f4, MemOperand(a0, OFFSET_OF(T, a)) ); + __ ldc1(f6, MemOperand(a0, OFFSET_OF(T, b)) ); + __ add_d(f8, f4, f6); + __ sdc1(f8, MemOperand(a0, OFFSET_OF(T, c)) ); // c = a + b. + + __ mov_d(f10, f8); // c + __ neg_d(f12, f6); // -b + __ sub_d(f10, f10, f12); + __ sdc1(f10, MemOperand(a0, OFFSET_OF(T, d)) ); // d = c - (-b). + + __ sdc1(f4, MemOperand(a0, OFFSET_OF(T, b)) ); // b = a. + + __ li(a4, 120); + __ mtc1(a4, f14); + __ cvt_d_w(f14, f14); // f14 = 120.0. + __ mul_d(f10, f10, f14); + __ sdc1(f10, MemOperand(a0, OFFSET_OF(T, e)) ); // e = d * 120 = 1.8066e16. + + __ div_d(f12, f10, f4); + __ sdc1(f12, MemOperand(a0, OFFSET_OF(T, f)) ); // f = e / a = 120.44. + + __ sqrt_d(f14, f12); + __ sdc1(f14, MemOperand(a0, OFFSET_OF(T, g)) ); + // g = sqrt(f) = 10.97451593465515908537 + + if (kArchVariant == kMips64r2) { + __ ldc1(f4, MemOperand(a0, OFFSET_OF(T, h)) ); + __ ldc1(f6, MemOperand(a0, OFFSET_OF(T, i)) ); + __ madd_d(f14, f6, f4, f6); + __ sdc1(f14, MemOperand(a0, OFFSET_OF(T, h)) ); + } + + __ jr(ra); + __ nop(); + + CodeDesc desc; + assm.GetCode(&desc); + Handle<Code> code = isolate->factory()->NewCode( + desc, Code::ComputeFlags(Code::STUB), Handle<Code>()); + F3 f = FUNCTION_CAST<F3>(code->entry()); + t.a = 1.5e14; + t.b = 2.75e11; + t.c = 0.0; + t.d = 0.0; + t.e = 0.0; + t.f = 0.0; + t.h = 1.5; + t.i = 2.75; + Object* dummy = CALL_GENERATED_CODE(f, &t, 0, 0, 0, 0); + USE(dummy); + CHECK_EQ(1.5e14, t.a); + CHECK_EQ(1.5e14, t.b); + CHECK_EQ(1.50275e14, t.c); + CHECK_EQ(1.50550e14, t.d); + CHECK_EQ(1.8066e16, t.e); + CHECK_EQ(120.44, t.f); + CHECK_EQ(10.97451593465515908537, t.g); + if (kArchVariant == kMips64r2) { + CHECK_EQ(6.875, t.h); + } +} + + +TEST(MIPS4) { + // Test moves between floating point and integer registers. + CcTest::InitializeVM(); + Isolate* isolate = CcTest::i_isolate(); + HandleScope scope(isolate); + + typedef struct { + double a; + double b; + double c; + } T; + T t; + + Assembler assm(isolate, NULL, 0); + Label L, C; + + __ ldc1(f4, MemOperand(a0, OFFSET_OF(T, a)) ); + __ ldc1(f5, MemOperand(a0, OFFSET_OF(T, b)) ); + + // Swap f4 and f5, by using 3 integer registers, a4-a6, + // both two 32-bit chunks, and one 64-bit chunk. + // mXhc1 is mips32/64-r2 only, not r1, + // but we will not support r1 in practice. + __ mfc1(a4, f4); + __ mfhc1(a5, f4); + __ dmfc1(a6, f5); + + __ mtc1(a4, f5); + __ mthc1(a5, f5); + __ dmtc1(a6, f4); + + // Store the swapped f4 and f5 back to memory. 
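// Editor's note (not part of the upstream patch): the swap above works
// because of the access widths involved: mfc1 reads the low 32 bits of an
// FPU register, mfhc1 reads the high 32 bits, and dmfc1 moves all 64 bits
// at once (mtc1/mthc1/dmtc1 are the corresponding writes). f4 travels in
// two 32-bit halves through a4/a5 while f5 travels whole through a6.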
+ __ sdc1(f4, MemOperand(a0, OFFSET_OF(T, a)) ); + __ sdc1(f5, MemOperand(a0, OFFSET_OF(T, c)) ); + + __ jr(ra); + __ nop(); + + CodeDesc desc; + assm.GetCode(&desc); + Handle<Code> code = isolate->factory()->NewCode( + desc, Code::ComputeFlags(Code::STUB), Handle<Code>()); + F3 f = FUNCTION_CAST<F3>(code->entry()); + t.a = 1.5e22; + t.b = 2.75e11; + t.c = 17.17; + Object* dummy = CALL_GENERATED_CODE(f, &t, 0, 0, 0, 0); + USE(dummy); + + CHECK_EQ(2.75e11, t.a); + CHECK_EQ(2.75e11, t.b); + CHECK_EQ(1.5e22, t.c); +} + + +TEST(MIPS5) { + // Test conversions between doubles and integers. + CcTest::InitializeVM(); + Isolate* isolate = CcTest::i_isolate(); + HandleScope scope(isolate); + + typedef struct { + double a; + double b; + int i; + int j; + } T; + T t; + + Assembler assm(isolate, NULL, 0); + Label L, C; + + // Load all structure elements to registers. + __ ldc1(f4, MemOperand(a0, OFFSET_OF(T, a)) ); + __ ldc1(f6, MemOperand(a0, OFFSET_OF(T, b)) ); + __ lw(a4, MemOperand(a0, OFFSET_OF(T, i)) ); + __ lw(a5, MemOperand(a0, OFFSET_OF(T, j)) ); + + // Convert double in f4 to int in element i. + __ cvt_w_d(f8, f4); + __ mfc1(a6, f8); + __ sw(a6, MemOperand(a0, OFFSET_OF(T, i)) ); + + // Convert double in f6 to int in element j. + __ cvt_w_d(f10, f6); + __ mfc1(a7, f10); + __ sw(a7, MemOperand(a0, OFFSET_OF(T, j)) ); + + // Convert int in original i (a4) to double in a. + __ mtc1(a4, f12); + __ cvt_d_w(f0, f12); + __ sdc1(f0, MemOperand(a0, OFFSET_OF(T, a)) ); + + // Convert int in original j (a5) to double in b. + __ mtc1(a5, f14); + __ cvt_d_w(f2, f14); + __ sdc1(f2, MemOperand(a0, OFFSET_OF(T, b)) ); + + __ jr(ra); + __ nop(); + + CodeDesc desc; + assm.GetCode(&desc); + Handle<Code> code = isolate->factory()->NewCode( + desc, Code::ComputeFlags(Code::STUB), Handle<Code>()); + F3 f = FUNCTION_CAST<F3>(code->entry()); + t.a = 1.5e4; + t.b = 2.75e8; + t.i = 12345678; + t.j = -100000; + Object* dummy = CALL_GENERATED_CODE(f, &t, 0, 0, 0, 0); + USE(dummy); + + CHECK_EQ(12345678.0, t.a); + CHECK_EQ(-100000.0, t.b); + CHECK_EQ(15000, t.i); + CHECK_EQ(275000000, t.j); +} + + +TEST(MIPS6) { + // Test simple memory loads and stores. + CcTest::InitializeVM(); + Isolate* isolate = CcTest::i_isolate(); + HandleScope scope(isolate); + + typedef struct { + uint32_t ui; + int32_t si; + int32_t r1; + int32_t r2; + int32_t r3; + int32_t r4; + int32_t r5; + int32_t r6; + } T; + T t; + + Assembler assm(isolate, NULL, 0); + Label L, C; + + // Basic word load/store. + __ lw(a4, MemOperand(a0, OFFSET_OF(T, ui)) ); + __ sw(a4, MemOperand(a0, OFFSET_OF(T, r1)) ); + + // lh with positive data. + __ lh(a5, MemOperand(a0, OFFSET_OF(T, ui)) ); + __ sw(a5, MemOperand(a0, OFFSET_OF(T, r2)) ); + + // lh with negative data. + __ lh(a6, MemOperand(a0, OFFSET_OF(T, si)) ); + __ sw(a6, MemOperand(a0, OFFSET_OF(T, r3)) ); + + // lhu with negative data. + __ lhu(a7, MemOperand(a0, OFFSET_OF(T, si)) ); + __ sw(a7, MemOperand(a0, OFFSET_OF(T, r4)) ); + + // lb with negative data. + __ lb(t0, MemOperand(a0, OFFSET_OF(T, si)) ); + __ sw(t0, MemOperand(a0, OFFSET_OF(T, r5)) ); + + // sh writes only 1/2 of word. 
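// Editor's note (worked values under the little-endian layout this test
// assumes, not part of the upstream patch): si = 0x99aabbcc places 0xbbcc
// in its low halfword, so the lh above sign-extends it to 0xffffbbcc, the
// lhu zero-extends it to 0x0000bbcc, and the lb sign-extends the low byte
// to 0xffffffcc. The sh below overwrites only the low halfword of r6,
// turning 0x33333333 into 0x3333bbcc -- the values in the CHECK_EQs.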
+ __ lui(t1, 0x3333); + __ ori(t1, t1, 0x3333); + __ sw(t1, MemOperand(a0, OFFSET_OF(T, r6)) ); + __ lhu(t1, MemOperand(a0, OFFSET_OF(T, si)) ); + __ sh(t1, MemOperand(a0, OFFSET_OF(T, r6)) ); + + __ jr(ra); + __ nop(); + + CodeDesc desc; + assm.GetCode(&desc); + Handle<Code> code = isolate->factory()->NewCode( + desc, Code::ComputeFlags(Code::STUB), Handle<Code>()); + F3 f = FUNCTION_CAST<F3>(code->entry()); + t.ui = 0x11223344; + t.si = 0x99aabbcc; + Object* dummy = CALL_GENERATED_CODE(f, &t, 0, 0, 0, 0); + USE(dummy); + + CHECK_EQ(0x11223344, t.r1); + CHECK_EQ(0x3344, t.r2); + CHECK_EQ(0xffffbbcc, t.r3); + CHECK_EQ(0x0000bbcc, t.r4); + CHECK_EQ(0xffffffcc, t.r5); + CHECK_EQ(0x3333bbcc, t.r6); +} + + +TEST(MIPS7) { + // Test floating point compare and branch instructions. + CcTest::InitializeVM(); + Isolate* isolate = CcTest::i_isolate(); + HandleScope scope(isolate); + + typedef struct { + double a; + double b; + double c; + double d; + double e; + double f; + int32_t result; + } T; + T t; + + // Create a function that accepts &t, and loads, manipulates, and stores + // the doubles t.a ... t.f. + MacroAssembler assm(isolate, NULL, 0); + Label neither_is_nan, less_than, outa_here; + + __ ldc1(f4, MemOperand(a0, OFFSET_OF(T, a)) ); + __ ldc1(f6, MemOperand(a0, OFFSET_OF(T, b)) ); + if (kArchVariant != kMips64r6) { + __ c(UN, D, f4, f6); + __ bc1f(&neither_is_nan); + } else { + __ cmp(UN, L, f2, f4, f6); + __ bc1eqz(&neither_is_nan, f2); + } + __ nop(); + __ sw(zero_reg, MemOperand(a0, OFFSET_OF(T, result)) ); + __ Branch(&outa_here); + + __ bind(&neither_is_nan); + + if (kArchVariant == kMips64r6) { + __ cmp(OLT, L, f2, f6, f4); + __ bc1nez(&less_than, f2); + } else { + __ c(OLT, D, f6, f4, 2); + __ bc1t(&less_than, 2); + } + + __ nop(); + __ sw(zero_reg, MemOperand(a0, OFFSET_OF(T, result)) ); + __ Branch(&outa_here); + + __ bind(&less_than); + __ Addu(a4, zero_reg, Operand(1)); + __ sw(a4, MemOperand(a0, OFFSET_OF(T, result)) ); // Set true. + + + // This test-case should have additional tests. + + __ bind(&outa_here); + + __ jr(ra); + __ nop(); + + CodeDesc desc; + assm.GetCode(&desc); + Handle<Code> code = isolate->factory()->NewCode( + desc, Code::ComputeFlags(Code::STUB), Handle<Code>()); + F3 f = FUNCTION_CAST<F3>(code->entry()); + t.a = 1.5e14; + t.b = 2.75e11; + t.c = 2.0; + t.d = -4.0; + t.e = 0.0; + t.f = 0.0; + t.result = 0; + Object* dummy = CALL_GENERATED_CODE(f, &t, 0, 0, 0, 0); + USE(dummy); + CHECK_EQ(1.5e14, t.a); + CHECK_EQ(2.75e11, t.b); + CHECK_EQ(1, t.result); +} + + +TEST(MIPS8) { + // Test ROTR and ROTRV instructions. + CcTest::InitializeVM(); + Isolate* isolate = CcTest::i_isolate(); + HandleScope scope(isolate); + + typedef struct { + int32_t input; + int32_t result_rotr_4; + int32_t result_rotr_8; + int32_t result_rotr_12; + int32_t result_rotr_16; + int32_t result_rotr_20; + int32_t result_rotr_24; + int32_t result_rotr_28; + int32_t result_rotrv_4; + int32_t result_rotrv_8; + int32_t result_rotrv_12; + int32_t result_rotrv_16; + int32_t result_rotrv_20; + int32_t result_rotrv_24; + int32_t result_rotrv_28; + } T; + T t; + + MacroAssembler assm(isolate, NULL, 0); + + // Basic word load. + __ lw(a4, MemOperand(a0, OFFSET_OF(T, input)) ); + + // ROTR instruction (called through the Ror macro). + __ Ror(a5, a4, 0x0004); + __ Ror(a6, a4, 0x0008); + __ Ror(a7, a4, 0x000c); + __ Ror(t0, a4, 0x0010); + __ Ror(t1, a4, 0x0014); + __ Ror(t2, a4, 0x0018); + __ Ror(t3, a4, 0x001c); + + // Basic word store. 
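// Editor's note (worked value, not part of the upstream patch): a rotate
// right reinserts at the top the bits shifted out of the bottom, so with
// input 0x12345678, Ror by 4 gives 0x81234567, by 8 gives 0x78123456, and
// so on -- the values the CHECK_EQs at the end of this test expect for
// both the ROTR and ROTRV paths.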
+ __ sw(a5, MemOperand(a0, OFFSET_OF(T, result_rotr_4)) );
+ __ sw(a6, MemOperand(a0, OFFSET_OF(T, result_rotr_8)) );
+ __ sw(a7, MemOperand(a0, OFFSET_OF(T, result_rotr_12)) );
+ __ sw(t0, MemOperand(a0, OFFSET_OF(T, result_rotr_16)) );
+ __ sw(t1, MemOperand(a0, OFFSET_OF(T, result_rotr_20)) );
+ __ sw(t2, MemOperand(a0, OFFSET_OF(T, result_rotr_24)) );
+ __ sw(t3, MemOperand(a0, OFFSET_OF(T, result_rotr_28)) );
+
+ // ROTRV instruction (called through the Ror macro).
+ __ li(t3, 0x0004);
+ __ Ror(a5, a4, t3);
+ __ li(t3, 0x0008);
+ __ Ror(a6, a4, t3);
+ __ li(t3, 0x000C);
+ __ Ror(a7, a4, t3);
+ __ li(t3, 0x0010);
+ __ Ror(t0, a4, t3);
+ __ li(t3, 0x0014);
+ __ Ror(t1, a4, t3);
+ __ li(t3, 0x0018);
+ __ Ror(t2, a4, t3);
+ __ li(t3, 0x001C);
+ __ Ror(t3, a4, t3);
+
+ // Basic word store.
+ __ sw(a5, MemOperand(a0, OFFSET_OF(T, result_rotrv_4)) );
+ __ sw(a6, MemOperand(a0, OFFSET_OF(T, result_rotrv_8)) );
+ __ sw(a7, MemOperand(a0, OFFSET_OF(T, result_rotrv_12)) );
+ __ sw(t0, MemOperand(a0, OFFSET_OF(T, result_rotrv_16)) );
+ __ sw(t1, MemOperand(a0, OFFSET_OF(T, result_rotrv_20)) );
+ __ sw(t2, MemOperand(a0, OFFSET_OF(T, result_rotrv_24)) );
+ __ sw(t3, MemOperand(a0, OFFSET_OF(T, result_rotrv_28)) );
+
+ __ jr(ra);
+ __ nop();
+
+ CodeDesc desc;
+ assm.GetCode(&desc);
+ Handle<Code> code = isolate->factory()->NewCode(
+ desc, Code::ComputeFlags(Code::STUB), Handle<Code>());
+ F3 f = FUNCTION_CAST<F3>(code->entry());
+ t.input = 0x12345678;
+ Object* dummy = CALL_GENERATED_CODE(f, &t, 0x0, 0, 0, 0);
+ USE(dummy);
+ CHECK_EQ(0x81234567, t.result_rotr_4);
+ CHECK_EQ(0x78123456, t.result_rotr_8);
+ CHECK_EQ(0x67812345, t.result_rotr_12);
+ CHECK_EQ(0x56781234, t.result_rotr_16);
+ CHECK_EQ(0x45678123, t.result_rotr_20);
+ CHECK_EQ(0x34567812, t.result_rotr_24);
+ CHECK_EQ(0x23456781, t.result_rotr_28);
+
+ CHECK_EQ(0x81234567, t.result_rotrv_4);
+ CHECK_EQ(0x78123456, t.result_rotrv_8);
+ CHECK_EQ(0x67812345, t.result_rotrv_12);
+ CHECK_EQ(0x56781234, t.result_rotrv_16);
+ CHECK_EQ(0x45678123, t.result_rotrv_20);
+ CHECK_EQ(0x34567812, t.result_rotrv_24);
+ CHECK_EQ(0x23456781, t.result_rotrv_28);
+}
+
+
+TEST(MIPS9) {
+ // Test BRANCH improvements.
+ CcTest::InitializeVM();
+ Isolate* isolate = CcTest::i_isolate();
+ HandleScope scope(isolate);
+
+ MacroAssembler assm(isolate, NULL, 0);
+ Label exit, exit2, exit3;
+
+ __ Branch(&exit, ge, a0, Operand(zero_reg));
+ __ Branch(&exit2, ge, a0, Operand(0x00001FFF));
+ __ Branch(&exit3, ge, a0, Operand(0x0001FFFF));
+
+ __ bind(&exit);
+ __ bind(&exit2);
+ __ bind(&exit3);
+ __ jr(ra);
+ __ nop();
+
+ CodeDesc desc;
+ assm.GetCode(&desc);
+ isolate->factory()->NewCode(
+ desc, Code::ComputeFlags(Code::STUB), Handle<Code>());
+}
+
+
+TEST(MIPS10) {
+ // Test conversions between doubles and long integers.
+ // Test how the long ints map to FP reg pairs.
+ CcTest::InitializeVM();
+ Isolate* isolate = CcTest::i_isolate();
+ HandleScope scope(isolate);
+
+ typedef struct {
+ double a;
+ double a_converted;
+ double b;
+ int32_t dbl_mant;
+ int32_t dbl_exp;
+ int32_t long_hi;
+ int32_t long_lo;
+ int64_t long_as_int64;
+ int32_t b_long_hi;
+ int32_t b_long_lo;
+ int64_t b_long_as_int64;
+ } T;
+ T t;
+
+ Assembler assm(isolate, NULL, 0);
+ Label L, C;
+
+ if (kArchVariant == kMips64r2) {
+ // Rewritten for FR=1 FPU mode:
+ // - 32 FP regs of 64-bits each, no odd/even pairs.
+ // - Note that cvt_l_d/cvt_d_l ARE legal in FR=1 mode.
+ // Load all structure elements to registers.
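+ // (For context: in FR=0 mode a 64-bit double spans an even/odd register
+ // pair, while FR=1 makes each of f0..f31 a full 64-bit register, which
+ // is what lets the single-register cvt_l_d/cvt_d_l sequences below work.)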
+ __ ldc1(f0, MemOperand(a0, OFFSET_OF(T, a))); + + // Save the raw bits of the double. + __ mfc1(a4, f0); + __ mfhc1(a5, f0); + __ sw(a4, MemOperand(a0, OFFSET_OF(T, dbl_mant))); + __ sw(a5, MemOperand(a0, OFFSET_OF(T, dbl_exp))); + + // Convert double in f0 to long, save hi/lo parts. + __ cvt_l_d(f0, f0); + __ mfc1(a4, f0); // f0 LS 32 bits of long. + __ mfhc1(a5, f0); // f0 MS 32 bits of long. + __ sw(a4, MemOperand(a0, OFFSET_OF(T, long_lo))); + __ sw(a5, MemOperand(a0, OFFSET_OF(T, long_hi))); + + // Combine the high/low ints, convert back to double. + __ dsll32(a6, a5, 0); // Move a5 to high bits of a6. + __ or_(a6, a6, a4); + __ dmtc1(a6, f1); + __ cvt_d_l(f1, f1); + __ sdc1(f1, MemOperand(a0, OFFSET_OF(T, a_converted))); + + + // Convert the b long integers to double b. + __ lw(a4, MemOperand(a0, OFFSET_OF(T, b_long_lo))); + __ lw(a5, MemOperand(a0, OFFSET_OF(T, b_long_hi))); + __ mtc1(a4, f8); // f8 LS 32-bits. + __ mthc1(a5, f8); // f8 MS 32-bits. + __ cvt_d_l(f10, f8); + __ sdc1(f10, MemOperand(a0, OFFSET_OF(T, b))); + + // Convert double b back to long-int. + __ ldc1(f31, MemOperand(a0, OFFSET_OF(T, b))); + __ cvt_l_d(f31, f31); + __ dmfc1(a7, f31); + __ sd(a7, MemOperand(a0, OFFSET_OF(T, b_long_as_int64))); + + + __ jr(ra); + __ nop(); + + CodeDesc desc; + assm.GetCode(&desc); + Handle<Code> code = isolate->factory()->NewCode( + desc, Code::ComputeFlags(Code::STUB), Handle<Code>()); + F3 f = FUNCTION_CAST<F3>(code->entry()); + t.a = 2.147483647e9; // 0x7fffffff -> 0x41DFFFFFFFC00000 as double. + t.b_long_hi = 0x000000ff; // 0xFF00FF00FF -> 0x426FE01FE01FE000 as double. + t.b_long_lo = 0x00ff00ff; + Object* dummy = CALL_GENERATED_CODE(f, &t, 0, 0, 0, 0); + USE(dummy); + + CHECK_EQ(0x41DFFFFF, t.dbl_exp); + CHECK_EQ(0xFFC00000, t.dbl_mant); + CHECK_EQ(0, t.long_hi); + CHECK_EQ(0x7fffffff, t.long_lo); + CHECK_EQ(2.147483647e9, t.a_converted); + + // 0xFF00FF00FF -> 1.095233372415e12. + CHECK_EQ(1.095233372415e12, t.b); + CHECK_EQ(0xFF00FF00FF, t.b_long_as_int64); + } +} + + +TEST(MIPS11) { + // Do not run test on MIPS64r6, as these instructions are removed. + if (kArchVariant != kMips64r6) { + // Test LWL, LWR, SWL and SWR instructions. + CcTest::InitializeVM(); + Isolate* isolate = CcTest::i_isolate(); + HandleScope scope(isolate); + + typedef struct { + int32_t reg_init; + int32_t mem_init; + int32_t lwl_0; + int32_t lwl_1; + int32_t lwl_2; + int32_t lwl_3; + int32_t lwr_0; + int32_t lwr_1; + int32_t lwr_2; + int32_t lwr_3; + int32_t swl_0; + int32_t swl_1; + int32_t swl_2; + int32_t swl_3; + int32_t swr_0; + int32_t swr_1; + int32_t swr_2; + int32_t swr_3; + } T; + T t; + + Assembler assm(isolate, NULL, 0); + + // Test all combinations of LWL and vAddr. + __ lw(a4, MemOperand(a0, OFFSET_OF(T, reg_init)) ); + __ lwl(a4, MemOperand(a0, OFFSET_OF(T, mem_init)) ); + __ sw(a4, MemOperand(a0, OFFSET_OF(T, lwl_0)) ); + + __ lw(a5, MemOperand(a0, OFFSET_OF(T, reg_init)) ); + __ lwl(a5, MemOperand(a0, OFFSET_OF(T, mem_init) + 1) ); + __ sw(a5, MemOperand(a0, OFFSET_OF(T, lwl_1)) ); + + __ lw(a6, MemOperand(a0, OFFSET_OF(T, reg_init)) ); + __ lwl(a6, MemOperand(a0, OFFSET_OF(T, mem_init) + 2) ); + __ sw(a6, MemOperand(a0, OFFSET_OF(T, lwl_2)) ); + + __ lw(a7, MemOperand(a0, OFFSET_OF(T, reg_init)) ); + __ lwl(a7, MemOperand(a0, OFFSET_OF(T, mem_init) + 3) ); + __ sw(a7, MemOperand(a0, OFFSET_OF(T, lwl_3)) ); + + // Test all combinations of LWR and vAddr. 
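+ // (lwl/lwr each merge part of a word at an unaligned vAddr into the
+ // destination register; used as a pair they load a word from any byte
+ // offset without an alignment trap. The expected little-endian merge
+ // results are encoded in the CHECKs at the bottom of this test.)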
+ __ lw(a4, MemOperand(a0, OFFSET_OF(T, reg_init)) ); + __ lwr(a4, MemOperand(a0, OFFSET_OF(T, mem_init)) ); + __ sw(a4, MemOperand(a0, OFFSET_OF(T, lwr_0)) ); + + __ lw(a5, MemOperand(a0, OFFSET_OF(T, reg_init)) ); + __ lwr(a5, MemOperand(a0, OFFSET_OF(T, mem_init) + 1) ); + __ sw(a5, MemOperand(a0, OFFSET_OF(T, lwr_1)) ); + + __ lw(a6, MemOperand(a0, OFFSET_OF(T, reg_init)) ); + __ lwr(a6, MemOperand(a0, OFFSET_OF(T, mem_init) + 2) ); + __ sw(a6, MemOperand(a0, OFFSET_OF(T, lwr_2)) ); + + __ lw(a7, MemOperand(a0, OFFSET_OF(T, reg_init)) ); + __ lwr(a7, MemOperand(a0, OFFSET_OF(T, mem_init) + 3) ); + __ sw(a7, MemOperand(a0, OFFSET_OF(T, lwr_3)) ); + + // Test all combinations of SWL and vAddr. + __ lw(a4, MemOperand(a0, OFFSET_OF(T, mem_init)) ); + __ sw(a4, MemOperand(a0, OFFSET_OF(T, swl_0)) ); + __ lw(a4, MemOperand(a0, OFFSET_OF(T, reg_init)) ); + __ swl(a4, MemOperand(a0, OFFSET_OF(T, swl_0)) ); + + __ lw(a5, MemOperand(a0, OFFSET_OF(T, mem_init)) ); + __ sw(a5, MemOperand(a0, OFFSET_OF(T, swl_1)) ); + __ lw(a5, MemOperand(a0, OFFSET_OF(T, reg_init)) ); + __ swl(a5, MemOperand(a0, OFFSET_OF(T, swl_1) + 1) ); + + __ lw(a6, MemOperand(a0, OFFSET_OF(T, mem_init)) ); + __ sw(a6, MemOperand(a0, OFFSET_OF(T, swl_2)) ); + __ lw(a6, MemOperand(a0, OFFSET_OF(T, reg_init)) ); + __ swl(a6, MemOperand(a0, OFFSET_OF(T, swl_2) + 2) ); + + __ lw(a7, MemOperand(a0, OFFSET_OF(T, mem_init)) ); + __ sw(a7, MemOperand(a0, OFFSET_OF(T, swl_3)) ); + __ lw(a7, MemOperand(a0, OFFSET_OF(T, reg_init)) ); + __ swl(a7, MemOperand(a0, OFFSET_OF(T, swl_3) + 3) ); + + // Test all combinations of SWR and vAddr. + __ lw(a4, MemOperand(a0, OFFSET_OF(T, mem_init)) ); + __ sw(a4, MemOperand(a0, OFFSET_OF(T, swr_0)) ); + __ lw(a4, MemOperand(a0, OFFSET_OF(T, reg_init)) ); + __ swr(a4, MemOperand(a0, OFFSET_OF(T, swr_0)) ); + + __ lw(a5, MemOperand(a0, OFFSET_OF(T, mem_init)) ); + __ sw(a5, MemOperand(a0, OFFSET_OF(T, swr_1)) ); + __ lw(a5, MemOperand(a0, OFFSET_OF(T, reg_init)) ); + __ swr(a5, MemOperand(a0, OFFSET_OF(T, swr_1) + 1) ); + + __ lw(a6, MemOperand(a0, OFFSET_OF(T, mem_init)) ); + __ sw(a6, MemOperand(a0, OFFSET_OF(T, swr_2)) ); + __ lw(a6, MemOperand(a0, OFFSET_OF(T, reg_init)) ); + __ swr(a6, MemOperand(a0, OFFSET_OF(T, swr_2) + 2) ); + + __ lw(a7, MemOperand(a0, OFFSET_OF(T, mem_init)) ); + __ sw(a7, MemOperand(a0, OFFSET_OF(T, swr_3)) ); + __ lw(a7, MemOperand(a0, OFFSET_OF(T, reg_init)) ); + __ swr(a7, MemOperand(a0, OFFSET_OF(T, swr_3) + 3) ); + + __ jr(ra); + __ nop(); + + CodeDesc desc; + assm.GetCode(&desc); + Handle<Code> code = isolate->factory()->NewCode( + desc, Code::ComputeFlags(Code::STUB), Handle<Code>()); + F3 f = FUNCTION_CAST<F3>(code->entry()); + t.reg_init = 0xaabbccdd; + t.mem_init = 0x11223344; + + Object* dummy = CALL_GENERATED_CODE(f, &t, 0, 0, 0, 0); + USE(dummy); + + CHECK_EQ(0x44bbccdd, t.lwl_0); + CHECK_EQ(0x3344ccdd, t.lwl_1); + CHECK_EQ(0x223344dd, t.lwl_2); + CHECK_EQ(0x11223344, t.lwl_3); + + CHECK_EQ(0x11223344, t.lwr_0); + CHECK_EQ(0xaa112233, t.lwr_1); + CHECK_EQ(0xaabb1122, t.lwr_2); + CHECK_EQ(0xaabbcc11, t.lwr_3); + + CHECK_EQ(0x112233aa, t.swl_0); + CHECK_EQ(0x1122aabb, t.swl_1); + CHECK_EQ(0x11aabbcc, t.swl_2); + CHECK_EQ(0xaabbccdd, t.swl_3); + + CHECK_EQ(0xaabbccdd, t.swr_0); + CHECK_EQ(0xbbccdd44, t.swr_1); + CHECK_EQ(0xccdd3344, t.swr_2); + CHECK_EQ(0xdd223344, t.swr_3); + } +} + + +TEST(MIPS12) { + CcTest::InitializeVM(); + Isolate* isolate = CcTest::i_isolate(); + HandleScope scope(isolate); + + typedef struct { + int32_t x; + int32_t y; + int32_t y1; + 
int32_t y2; + int32_t y3; + int32_t y4; + } T; + T t; + + MacroAssembler assm(isolate, NULL, 0); + + __ mov(t2, fp); // Save frame pointer. + __ mov(fp, a0); // Access struct T by fp. + __ lw(a4, MemOperand(a0, OFFSET_OF(T, y)) ); + __ lw(a7, MemOperand(a0, OFFSET_OF(T, y4)) ); + + __ addu(a5, a4, a7); + __ subu(t0, a4, a7); + __ nop(); + __ push(a4); // These instructions disappear after opt. + __ Pop(); + __ addu(a4, a4, a4); + __ nop(); + __ Pop(); // These instructions disappear after opt. + __ push(a7); + __ nop(); + __ push(a7); // These instructions disappear after opt. + __ pop(a7); + __ nop(); + __ push(a7); + __ pop(t0); + __ nop(); + __ sw(a4, MemOperand(fp, OFFSET_OF(T, y)) ); + __ lw(a4, MemOperand(fp, OFFSET_OF(T, y)) ); + __ nop(); + __ sw(a4, MemOperand(fp, OFFSET_OF(T, y)) ); + __ lw(a5, MemOperand(fp, OFFSET_OF(T, y)) ); + __ nop(); + __ push(a5); + __ lw(a5, MemOperand(fp, OFFSET_OF(T, y)) ); + __ pop(a5); + __ nop(); + __ push(a5); + __ lw(a6, MemOperand(fp, OFFSET_OF(T, y)) ); + __ pop(a5); + __ nop(); + __ push(a5); + __ lw(a6, MemOperand(fp, OFFSET_OF(T, y)) ); + __ pop(a6); + __ nop(); + __ push(a6); + __ lw(a6, MemOperand(fp, OFFSET_OF(T, y)) ); + __ pop(a5); + __ nop(); + __ push(a5); + __ lw(a6, MemOperand(fp, OFFSET_OF(T, y)) ); + __ pop(a7); + __ nop(); + + __ mov(fp, t2); + __ jr(ra); + __ nop(); + + CodeDesc desc; + assm.GetCode(&desc); + Handle<Code> code = isolate->factory()->NewCode( + desc, Code::ComputeFlags(Code::STUB), Handle<Code>()); + F3 f = FUNCTION_CAST<F3>(code->entry()); + t.x = 1; + t.y = 2; + t.y1 = 3; + t.y2 = 4; + t.y3 = 0XBABA; + t.y4 = 0xDEDA; + + Object* dummy = CALL_GENERATED_CODE(f, &t, 0, 0, 0, 0); + USE(dummy); + + CHECK_EQ(3, t.y1); +} + + +TEST(MIPS13) { + // Test Cvt_d_uw and Trunc_uw_d macros. + CcTest::InitializeVM(); + Isolate* isolate = CcTest::i_isolate(); + HandleScope scope(isolate); + + typedef struct { + double cvt_big_out; + double cvt_small_out; + uint32_t trunc_big_out; + uint32_t trunc_small_out; + uint32_t cvt_big_in; + uint32_t cvt_small_in; + } T; + T t; + + MacroAssembler assm(isolate, NULL, 0); + + __ sw(a4, MemOperand(a0, OFFSET_OF(T, cvt_small_in))); + __ Cvt_d_uw(f10, a4, f22); + __ sdc1(f10, MemOperand(a0, OFFSET_OF(T, cvt_small_out))); + + __ Trunc_uw_d(f10, f10, f22); + __ swc1(f10, MemOperand(a0, OFFSET_OF(T, trunc_small_out))); + + __ sw(a4, MemOperand(a0, OFFSET_OF(T, cvt_big_in))); + __ Cvt_d_uw(f8, a4, f22); + __ sdc1(f8, MemOperand(a0, OFFSET_OF(T, cvt_big_out))); + + __ Trunc_uw_d(f8, f8, f22); + __ swc1(f8, MemOperand(a0, OFFSET_OF(T, trunc_big_out))); + + __ jr(ra); + __ nop(); + + CodeDesc desc; + assm.GetCode(&desc); + Handle<Code> code = isolate->factory()->NewCode( + desc, Code::ComputeFlags(Code::STUB), Handle<Code>()); + F3 f = FUNCTION_CAST<F3>(code->entry()); + + t.cvt_big_in = 0xFFFFFFFF; + t.cvt_small_in = 333; + + Object* dummy = CALL_GENERATED_CODE(f, &t, 0, 0, 0, 0); + USE(dummy); + + CHECK_EQ(t.cvt_big_out, static_cast<double>(t.cvt_big_in)); + CHECK_EQ(t.cvt_small_out, static_cast<double>(t.cvt_small_in)); + + CHECK_EQ(static_cast<int>(t.trunc_big_out), static_cast<int>(t.cvt_big_in)); + CHECK_EQ(static_cast<int>(t.trunc_small_out), + static_cast<int>(t.cvt_small_in)); +} + + +TEST(MIPS14) { + // Test round, floor, ceil, trunc, cvt. 
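+ // (Each rounding mode is driven through the RUN_ROUND_TEST macro below,
+ // which also captures the FCSR flag state after converting the
+ // err1_in..err4_in inputs.)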
+ CcTest::InitializeVM(); + Isolate* isolate = CcTest::i_isolate(); + HandleScope scope(isolate); + +#define ROUND_STRUCT_ELEMENT(x) \ + int32_t x##_up_out; \ + int32_t x##_down_out; \ + int32_t neg_##x##_up_out; \ + int32_t neg_##x##_down_out; \ + uint32_t x##_err1_out; \ + uint32_t x##_err2_out; \ + uint32_t x##_err3_out; \ + uint32_t x##_err4_out; \ + int32_t x##_invalid_result; + + typedef struct { + double round_up_in; + double round_down_in; + double neg_round_up_in; + double neg_round_down_in; + double err1_in; + double err2_in; + double err3_in; + double err4_in; + + ROUND_STRUCT_ELEMENT(round) + ROUND_STRUCT_ELEMENT(floor) + ROUND_STRUCT_ELEMENT(ceil) + ROUND_STRUCT_ELEMENT(trunc) + ROUND_STRUCT_ELEMENT(cvt) + } T; + T t; + +#undef ROUND_STRUCT_ELEMENT + + MacroAssembler assm(isolate, NULL, 0); + + // Save FCSR. + __ cfc1(a1, FCSR); + // Disable FPU exceptions. + __ ctc1(zero_reg, FCSR); +#define RUN_ROUND_TEST(x) \ + __ ldc1(f0, MemOperand(a0, OFFSET_OF(T, round_up_in))); \ + __ x##_w_d(f0, f0); \ + __ swc1(f0, MemOperand(a0, OFFSET_OF(T, x##_up_out))); \ + \ + __ ldc1(f0, MemOperand(a0, OFFSET_OF(T, round_down_in))); \ + __ x##_w_d(f0, f0); \ + __ swc1(f0, MemOperand(a0, OFFSET_OF(T, x##_down_out))); \ + \ + __ ldc1(f0, MemOperand(a0, OFFSET_OF(T, neg_round_up_in))); \ + __ x##_w_d(f0, f0); \ + __ swc1(f0, MemOperand(a0, OFFSET_OF(T, neg_##x##_up_out))); \ + \ + __ ldc1(f0, MemOperand(a0, OFFSET_OF(T, neg_round_down_in))); \ + __ x##_w_d(f0, f0); \ + __ swc1(f0, MemOperand(a0, OFFSET_OF(T, neg_##x##_down_out))); \ + \ + __ ldc1(f0, MemOperand(a0, OFFSET_OF(T, err1_in))); \ + __ ctc1(zero_reg, FCSR); \ + __ x##_w_d(f0, f0); \ + __ cfc1(a2, FCSR); \ + __ sw(a2, MemOperand(a0, OFFSET_OF(T, x##_err1_out))); \ + \ + __ ldc1(f0, MemOperand(a0, OFFSET_OF(T, err2_in))); \ + __ ctc1(zero_reg, FCSR); \ + __ x##_w_d(f0, f0); \ + __ cfc1(a2, FCSR); \ + __ sw(a2, MemOperand(a0, OFFSET_OF(T, x##_err2_out))); \ + \ + __ ldc1(f0, MemOperand(a0, OFFSET_OF(T, err3_in))); \ + __ ctc1(zero_reg, FCSR); \ + __ x##_w_d(f0, f0); \ + __ cfc1(a2, FCSR); \ + __ sw(a2, MemOperand(a0, OFFSET_OF(T, x##_err3_out))); \ + \ + __ ldc1(f0, MemOperand(a0, OFFSET_OF(T, err4_in))); \ + __ ctc1(zero_reg, FCSR); \ + __ x##_w_d(f0, f0); \ + __ cfc1(a2, FCSR); \ + __ sw(a2, MemOperand(a0, OFFSET_OF(T, x##_err4_out))); \ + __ swc1(f0, MemOperand(a0, OFFSET_OF(T, x##_invalid_result))); + + RUN_ROUND_TEST(round) + RUN_ROUND_TEST(floor) + RUN_ROUND_TEST(ceil) + RUN_ROUND_TEST(trunc) + RUN_ROUND_TEST(cvt) + + // Restore FCSR. 
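+ // (a1 still holds the control word saved by the cfc1 at function entry,
+ // from before FPU exceptions were disabled.)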
+ __ ctc1(a1, FCSR);
+
+ __ jr(ra);
+ __ nop();
+
+ CodeDesc desc;
+ assm.GetCode(&desc);
+ Handle<Code> code = isolate->factory()->NewCode(
+ desc, Code::ComputeFlags(Code::STUB), Handle<Code>());
+ F3 f = FUNCTION_CAST<F3>(code->entry());
+
+ t.round_up_in = 123.51;
+ t.round_down_in = 123.49;
+ t.neg_round_up_in = -123.5;
+ t.neg_round_down_in = -123.49;
+ t.err1_in = 123.51;
+ t.err2_in = 1;
+ t.err3_in = static_cast<double>(1) + 0xFFFFFFFF;
+ t.err4_in = NAN;
+
+ Object* dummy = CALL_GENERATED_CODE(f, &t, 0, 0, 0, 0);
+ USE(dummy);
+
+#define GET_FPU_ERR(x) (static_cast<int>(x & kFCSRFlagMask))
+#define CHECK_ROUND_RESULT(type) \
+ CHECK(GET_FPU_ERR(t.type##_err1_out) & kFCSRInexactFlagMask); \
+ CHECK_EQ(0, GET_FPU_ERR(t.type##_err2_out)); \
+ CHECK(GET_FPU_ERR(t.type##_err3_out) & kFCSRInvalidOpFlagMask); \
+ CHECK(GET_FPU_ERR(t.type##_err4_out) & kFCSRInvalidOpFlagMask); \
+ CHECK_EQ(static_cast<int32_t>(kFPUInvalidResult), t.type##_invalid_result);
+
+ CHECK_ROUND_RESULT(round);
+ CHECK_ROUND_RESULT(floor);
+ CHECK_ROUND_RESULT(ceil);
+ CHECK_ROUND_RESULT(cvt);
+}
+
+
+TEST(MIPS15) {
+ // Test chaining of label usages within instructions (issue 1644).
+ CcTest::InitializeVM();
+ Isolate* isolate = CcTest::i_isolate();
+ HandleScope scope(isolate);
+ Assembler assm(isolate, NULL, 0);
+
+ Label target;
+ __ beq(v0, v1, &target);
+ __ nop();
+ __ bne(v0, v1, &target);
+ __ nop();
+ __ bind(&target);
+ __ nop();
+}
+
+
+// ----- mips64 tests -----------------------------------------------
+
+TEST(MIPS16) {
+ // Test 64-bit memory loads and stores.
+ CcTest::InitializeVM();
+ Isolate* isolate = CcTest::i_isolate();
+ HandleScope scope(isolate);
+
+ typedef struct {
+ int64_t r1;
+ int64_t r2;
+ int64_t r3;
+ int64_t r4;
+ int64_t r5;
+ int64_t r6;
+ uint32_t ui;
+ int32_t si;
+ } T;
+ T t;
+
+ Assembler assm(isolate, NULL, 0);
+ Label L, C;
+
+ // Basic 32-bit word load/store, with unsigned data.
+ __ lw(a4, MemOperand(a0, OFFSET_OF(T, ui)) );
+ __ sw(a4, MemOperand(a0, OFFSET_OF(T, r1)) );
+
+ // Check that the data got zero-extended into 64-bit a4.
+ __ sd(a4, MemOperand(a0, OFFSET_OF(T, r2)) );
+
+ // Basic 32-bit word load/store, with SIGNED data.
+ __ lw(a5, MemOperand(a0, OFFSET_OF(T, si)) );
+ __ sw(a5, MemOperand(a0, OFFSET_OF(T, r3)) );
+
+ // Check that the data got sign-extended into 64-bit a5.
+ __ sd(a5, MemOperand(a0, OFFSET_OF(T, r4)) );
+
+ // 32-bit UNSIGNED word load/store, with SIGNED data.
+ __ lwu(a6, MemOperand(a0, OFFSET_OF(T, si)) );
+ __ sw(a6, MemOperand(a0, OFFSET_OF(T, r5)) );
+
+ // Check that the data got zero-extended into 64-bit a6.
+ __ sd(a6, MemOperand(a0, OFFSET_OF(T, r6)) );
+
+ // lh with positive data.
+ __ lh(a5, MemOperand(a0, OFFSET_OF(T, ui)) );
+ __ sw(a5, MemOperand(a0, OFFSET_OF(T, r2)) );
+
+ // lh with negative data.
+ __ lh(a6, MemOperand(a0, OFFSET_OF(T, si)) );
+ __ sw(a6, MemOperand(a0, OFFSET_OF(T, r3)) );
+
+ // lhu with negative data.
+ __ lhu(a7, MemOperand(a0, OFFSET_OF(T, si)) );
+ __ sw(a7, MemOperand(a0, OFFSET_OF(T, r4)) );
+
+ // lb with negative data.
+ __ lb(t0, MemOperand(a0, OFFSET_OF(T, si)) );
+ __ sw(t0, MemOperand(a0, OFFSET_OF(T, r5)) );
+
+ // sh writes only 1/2 of word.
+ __ lui(t1, 0x3333); + __ ori(t1, t1, 0x3333); + __ sw(t1, MemOperand(a0, OFFSET_OF(T, r6)) ); + __ lhu(t1, MemOperand(a0, OFFSET_OF(T, si)) ); + __ sh(t1, MemOperand(a0, OFFSET_OF(T, r6)) ); + + __ jr(ra); + __ nop(); + + CodeDesc desc; + assm.GetCode(&desc); + Handle<Code> code = isolate->factory()->NewCode( + desc, Code::ComputeFlags(Code::STUB), Handle<Code>()); + F3 f = FUNCTION_CAST<F3>(code->entry()); + t.ui = 0x44332211; + t.si = 0x99aabbcc; + t.r1 = 0x1111111111111111; + t.r2 = 0x2222222222222222; + t.r3 = 0x3333333333333333; + t.r4 = 0x4444444444444444; + t.r5 = 0x5555555555555555; + t.r6 = 0x6666666666666666; + Object* dummy = CALL_GENERATED_CODE(f, &t, 0, 0, 0, 0); + USE(dummy); + + // Unsigned data, 32 & 64. + CHECK_EQ(0x1111111144332211L, t.r1); + CHECK_EQ(0x0000000000002211L, t.r2); + + // Signed data, 32 & 64. + CHECK_EQ(0x33333333ffffbbccL, t.r3); + CHECK_EQ(0xffffffff0000bbccL, t.r4); + + // Signed data, 32 & 64. + CHECK_EQ(0x55555555ffffffccL, t.r5); + CHECK_EQ(0x000000003333bbccL, t.r6); +} + +#undef __ diff --git a/deps/v8/test/cctest/test-assembler-x64.cc b/deps/v8/test/cctest/test-assembler-x64.cc index eb9fee85458..3d305b650e8 100644 --- a/deps/v8/test/cctest/test-assembler-x64.cc +++ b/deps/v8/test/cctest/test-assembler-x64.cc @@ -27,13 +27,14 @@ #include <stdlib.h> -#include "v8.h" +#include "src/v8.h" -#include "macro-assembler.h" -#include "factory.h" -#include "platform.h" -#include "serialize.h" -#include "cctest.h" +#include "src/base/platform/platform.h" +#include "src/factory.h" +#include "src/macro-assembler.h" +#include "src/ostreams.h" +#include "src/serialize.h" +#include "test/cctest/cctest.h" using namespace v8::internal; @@ -66,11 +67,11 @@ static const Register arg2 = rsi; TEST(AssemblerX64ReturnOperation) { + CcTest::InitializeVM(); // Allocate an executable page of memory. size_t actual_size; - byte* buffer = static_cast<byte*>(OS::Allocate(Assembler::kMinimalBufferSize, - &actual_size, - true)); + byte* buffer = static_cast<byte*>(v8::base::OS::Allocate( + Assembler::kMinimalBufferSize, &actual_size, true)); CHECK(buffer); Assembler assm(CcTest::i_isolate(), buffer, static_cast<int>(actual_size)); @@ -88,11 +89,11 @@ TEST(AssemblerX64ReturnOperation) { TEST(AssemblerX64StackOperations) { + CcTest::InitializeVM(); // Allocate an executable page of memory. size_t actual_size; - byte* buffer = static_cast<byte*>(OS::Allocate(Assembler::kMinimalBufferSize, - &actual_size, - true)); + byte* buffer = static_cast<byte*>(v8::base::OS::Allocate( + Assembler::kMinimalBufferSize, &actual_size, true)); CHECK(buffer); Assembler assm(CcTest::i_isolate(), buffer, static_cast<int>(actual_size)); @@ -120,11 +121,11 @@ TEST(AssemblerX64StackOperations) { TEST(AssemblerX64ArithmeticOperations) { + CcTest::InitializeVM(); // Allocate an executable page of memory. size_t actual_size; - byte* buffer = static_cast<byte*>(OS::Allocate(Assembler::kMinimalBufferSize, - &actual_size, - true)); + byte* buffer = static_cast<byte*>(v8::base::OS::Allocate( + Assembler::kMinimalBufferSize, &actual_size, true)); CHECK(buffer); Assembler assm(CcTest::i_isolate(), buffer, static_cast<int>(actual_size)); @@ -142,11 +143,11 @@ TEST(AssemblerX64ArithmeticOperations) { TEST(AssemblerX64CmpbOperation) { + CcTest::InitializeVM(); // Allocate an executable page of memory. 
size_t actual_size; - byte* buffer = static_cast<byte*>(OS::Allocate(Assembler::kMinimalBufferSize, - &actual_size, - true)); + byte* buffer = static_cast<byte*>(v8::base::OS::Allocate( + Assembler::kMinimalBufferSize, &actual_size, true)); CHECK(buffer); Assembler assm(CcTest::i_isolate(), buffer, static_cast<int>(actual_size)); @@ -173,11 +174,11 @@ TEST(AssemblerX64CmpbOperation) { TEST(AssemblerX64ImulOperation) { + CcTest::InitializeVM(); // Allocate an executable page of memory. size_t actual_size; - byte* buffer = static_cast<byte*>(OS::Allocate(Assembler::kMinimalBufferSize, - &actual_size, - true)); + byte* buffer = static_cast<byte*>(v8::base::OS::Allocate( + Assembler::kMinimalBufferSize, &actual_size, true)); CHECK(buffer); Assembler assm(CcTest::i_isolate(), buffer, static_cast<int>(actual_size)); @@ -201,11 +202,11 @@ TEST(AssemblerX64ImulOperation) { TEST(AssemblerX64XchglOperations) { + CcTest::InitializeVM(); // Allocate an executable page of memory. size_t actual_size; - byte* buffer = static_cast<byte*>(OS::Allocate(Assembler::kMinimalBufferSize, - &actual_size, - true)); + byte* buffer = static_cast<byte*>(v8::base::OS::Allocate( + Assembler::kMinimalBufferSize, &actual_size, true)); CHECK(buffer); Assembler assm(CcTest::i_isolate(), buffer, static_cast<int>(actual_size)); @@ -229,11 +230,11 @@ TEST(AssemblerX64XchglOperations) { TEST(AssemblerX64OrlOperations) { + CcTest::InitializeVM(); // Allocate an executable page of memory. size_t actual_size; - byte* buffer = static_cast<byte*>(OS::Allocate(Assembler::kMinimalBufferSize, - &actual_size, - true)); + byte* buffer = static_cast<byte*>(v8::base::OS::Allocate( + Assembler::kMinimalBufferSize, &actual_size, true)); CHECK(buffer); Assembler assm(CcTest::i_isolate(), buffer, static_cast<int>(actual_size)); @@ -253,11 +254,11 @@ TEST(AssemblerX64OrlOperations) { TEST(AssemblerX64RollOperations) { + CcTest::InitializeVM(); // Allocate an executable page of memory. size_t actual_size; - byte* buffer = static_cast<byte*>(OS::Allocate(Assembler::kMinimalBufferSize, - &actual_size, - true)); + byte* buffer = static_cast<byte*>(v8::base::OS::Allocate( + Assembler::kMinimalBufferSize, &actual_size, true)); CHECK(buffer); Assembler assm(CcTest::i_isolate(), buffer, static_cast<int>(actual_size)); @@ -275,11 +276,11 @@ TEST(AssemblerX64RollOperations) { TEST(AssemblerX64SublOperations) { + CcTest::InitializeVM(); // Allocate an executable page of memory. size_t actual_size; - byte* buffer = static_cast<byte*>(OS::Allocate(Assembler::kMinimalBufferSize, - &actual_size, - true)); + byte* buffer = static_cast<byte*>(v8::base::OS::Allocate( + Assembler::kMinimalBufferSize, &actual_size, true)); CHECK(buffer); Assembler assm(CcTest::i_isolate(), buffer, static_cast<int>(actual_size)); @@ -299,11 +300,11 @@ TEST(AssemblerX64SublOperations) { TEST(AssemblerX64TestlOperations) { + CcTest::InitializeVM(); // Allocate an executable page of memory. size_t actual_size; - byte* buffer = static_cast<byte*>(OS::Allocate(Assembler::kMinimalBufferSize, - &actual_size, - true)); + byte* buffer = static_cast<byte*>(v8::base::OS::Allocate( + Assembler::kMinimalBufferSize, &actual_size, true)); CHECK(buffer); Assembler assm(CcTest::i_isolate(), buffer, static_cast<int>(actual_size)); @@ -328,11 +329,11 @@ TEST(AssemblerX64TestlOperations) { TEST(AssemblerX64XorlOperations) { + CcTest::InitializeVM(); // Allocate an executable page of memory. 
size_t actual_size; - byte* buffer = static_cast<byte*>(OS::Allocate(Assembler::kMinimalBufferSize, - &actual_size, - true)); + byte* buffer = static_cast<byte*>(v8::base::OS::Allocate( + Assembler::kMinimalBufferSize, &actual_size, true)); CHECK(buffer); Assembler assm(CcTest::i_isolate(), buffer, static_cast<int>(actual_size)); @@ -352,11 +353,11 @@ TEST(AssemblerX64XorlOperations) { TEST(AssemblerX64MemoryOperands) { + CcTest::InitializeVM(); // Allocate an executable page of memory. size_t actual_size; - byte* buffer = static_cast<byte*>(OS::Allocate(Assembler::kMinimalBufferSize, - &actual_size, - true)); + byte* buffer = static_cast<byte*>(v8::base::OS::Allocate( + Assembler::kMinimalBufferSize, &actual_size, true)); CHECK(buffer); Assembler assm(CcTest::i_isolate(), buffer, static_cast<int>(actual_size)); @@ -386,11 +387,11 @@ TEST(AssemblerX64MemoryOperands) { TEST(AssemblerX64ControlFlow) { + CcTest::InitializeVM(); // Allocate an executable page of memory. size_t actual_size; - byte* buffer = static_cast<byte*>(OS::Allocate(Assembler::kMinimalBufferSize, - &actual_size, - true)); + byte* buffer = static_cast<byte*>(v8::base::OS::Allocate( + Assembler::kMinimalBufferSize, &actual_size, true)); CHECK(buffer); Assembler assm(CcTest::i_isolate(), buffer, static_cast<int>(actual_size)); @@ -415,11 +416,11 @@ TEST(AssemblerX64ControlFlow) { TEST(AssemblerX64LoopImmediates) { + CcTest::InitializeVM(); // Allocate an executable page of memory. size_t actual_size; - byte* buffer = static_cast<byte*>(OS::Allocate(Assembler::kMinimalBufferSize, - &actual_size, - true)); + byte* buffer = static_cast<byte*>(v8::base::OS::Allocate( + Assembler::kMinimalBufferSize, &actual_size, true)); CHECK(buffer); Assembler assm(CcTest::i_isolate(), buffer, static_cast<int>(actual_size)); // Assemble two loops using rax as counter, and verify the ending counts. @@ -635,7 +636,7 @@ void DoSSE2(const v8::FunctionCallbackInfo<v8::Value>& args) { TEST(StackAlignmentForSSE2) { CcTest::InitializeVM(); - CHECK_EQ(0, OS::ActivationFrameAlignment() % 16); + CHECK_EQ(0, v8::base::OS::ActivationFrameAlignment() % 16); v8::Isolate* isolate = CcTest::isolate(); v8::HandleScope handle_scope(isolate); @@ -689,7 +690,8 @@ TEST(AssemblerX64Extractps) { Handle<Code> code = isolate->factory()->NewCode( desc, Code::ComputeFlags(Code::STUB), Handle<Code>()); #ifdef OBJECT_PRINT - code->Print(); + OFStream os(stdout); + code->Print(os); #endif F3 f = FUNCTION_CAST<F3>(code->entry()); @@ -727,7 +729,8 @@ TEST(AssemblerX64SSE) { Code::ComputeFlags(Code::STUB), Handle<Code>()); #ifdef OBJECT_PRINT - code->Print(); + OFStream os(stdout); + code->Print(os); #endif F6 f = FUNCTION_CAST<F6>(code->entry()); diff --git a/deps/v8/test/cctest/test-assembler-x87.cc b/deps/v8/test/cctest/test-assembler-x87.cc new file mode 100644 index 00000000000..8341f9b49ee --- /dev/null +++ b/deps/v8/test/cctest/test-assembler-x87.cc @@ -0,0 +1,315 @@ +// Copyright 2011 the V8 project authors. All rights reserved. +// Redistribution and use in source and binary forms, with or without +// modification, are permitted provided that the following conditions are +// met: +// +// * Redistributions of source code must retain the above copyright +// notice, this list of conditions and the following disclaimer. +// * Redistributions in binary form must reproduce the above +// copyright notice, this list of conditions and the following +// disclaimer in the documentation and/or other materials provided +// with the distribution. 
+// * Neither the name of Google Inc. nor the names of its +// contributors may be used to endorse or promote products derived +// from this software without specific prior written permission. +// +// THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS +// "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT +// LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR +// A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT +// OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, +// SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT +// LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, +// DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY +// THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT +// (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE +// OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. + +#include <stdlib.h> + +#include "src/v8.h" + +#include "src/base/platform/platform.h" +#include "src/disassembler.h" +#include "src/factory.h" +#include "src/macro-assembler.h" +#include "src/ostreams.h" +#include "src/serialize.h" +#include "test/cctest/cctest.h" + +using namespace v8::internal; + + +typedef int (*F0)(); +typedef int (*F1)(int x); +typedef int (*F2)(int x, int y); + + +#define __ assm. + +TEST(AssemblerIa320) { + CcTest::InitializeVM(); + Isolate* isolate = reinterpret_cast<Isolate*>(CcTest::isolate()); + HandleScope scope(isolate); + + v8::internal::byte buffer[256]; + Assembler assm(isolate, buffer, sizeof buffer); + + __ mov(eax, Operand(esp, 4)); + __ add(eax, Operand(esp, 8)); + __ ret(0); + + CodeDesc desc; + assm.GetCode(&desc); + Handle<Code> code = isolate->factory()->NewCode( + desc, Code::ComputeFlags(Code::STUB), Handle<Code>()); +#ifdef OBJECT_PRINT + OFStream os(stdout); + code->Print(os); +#endif + F2 f = FUNCTION_CAST<F2>(code->entry()); + int res = f(3, 4); + ::printf("f() = %d\n", res); + CHECK_EQ(7, res); +} + + +TEST(AssemblerIa321) { + CcTest::InitializeVM(); + Isolate* isolate = reinterpret_cast<Isolate*>(CcTest::isolate()); + HandleScope scope(isolate); + + v8::internal::byte buffer[256]; + Assembler assm(isolate, buffer, sizeof buffer); + Label L, C; + + __ mov(edx, Operand(esp, 4)); + __ xor_(eax, eax); // clear eax + __ jmp(&C); + + __ bind(&L); + __ add(eax, edx); + __ sub(edx, Immediate(1)); + + __ bind(&C); + __ test(edx, edx); + __ j(not_zero, &L); + __ ret(0); + + CodeDesc desc; + assm.GetCode(&desc); + Handle<Code> code = isolate->factory()->NewCode( + desc, Code::ComputeFlags(Code::STUB), Handle<Code>()); +#ifdef OBJECT_PRINT + OFStream os(stdout); + code->Print(os); +#endif + F1 f = FUNCTION_CAST<F1>(code->entry()); + int res = f(100); + ::printf("f() = %d\n", res); + CHECK_EQ(5050, res); +} + + +TEST(AssemblerIa322) { + CcTest::InitializeVM(); + Isolate* isolate = reinterpret_cast<Isolate*>(CcTest::isolate()); + HandleScope scope(isolate); + + v8::internal::byte buffer[256]; + Assembler assm(isolate, buffer, sizeof buffer); + Label L, C; + + __ mov(edx, Operand(esp, 4)); + __ mov(eax, 1); + __ jmp(&C); + + __ bind(&L); + __ imul(eax, edx); + __ sub(edx, Immediate(1)); + + __ bind(&C); + __ test(edx, edx); + __ j(not_zero, &L); + __ ret(0); + + // some relocated stuff here, not executed + __ mov(eax, isolate->factory()->true_value()); + __ jmp(NULL, RelocInfo::RUNTIME_ENTRY); + + CodeDesc desc; + assm.GetCode(&desc); + Handle<Code> code = 
isolate->factory()->NewCode( + desc, Code::ComputeFlags(Code::STUB), Handle<Code>()); +#ifdef OBJECT_PRINT + OFStream os(stdout); + code->Print(os); +#endif + F1 f = FUNCTION_CAST<F1>(code->entry()); + int res = f(10); + ::printf("f() = %d\n", res); + CHECK_EQ(3628800, res); +} + + +typedef int (*F3)(float x); + +typedef int (*F4)(double x); + +static int baz = 42; +TEST(AssemblerIa325) { + CcTest::InitializeVM(); + Isolate* isolate = reinterpret_cast<Isolate*>(CcTest::isolate()); + HandleScope scope(isolate); + + v8::internal::byte buffer[256]; + Assembler assm(isolate, buffer, sizeof buffer); + + __ mov(eax, Operand(reinterpret_cast<intptr_t>(&baz), RelocInfo::NONE32)); + __ ret(0); + + CodeDesc desc; + assm.GetCode(&desc); + Handle<Code> code = isolate->factory()->NewCode( + desc, Code::ComputeFlags(Code::STUB), Handle<Code>()); + F0 f = FUNCTION_CAST<F0>(code->entry()); + int res = f(); + CHECK_EQ(42, res); +} + + +typedef int (*F7)(double x, double y); + +TEST(AssemblerIa329) { + CcTest::InitializeVM(); + Isolate* isolate = reinterpret_cast<Isolate*>(CcTest::isolate()); + HandleScope scope(isolate); + v8::internal::byte buffer[256]; + MacroAssembler assm(isolate, buffer, sizeof buffer); + enum { kEqual = 0, kGreater = 1, kLess = 2, kNaN = 3, kUndefined = 4 }; + Label equal_l, less_l, greater_l, nan_l; + __ fld_d(Operand(esp, 3 * kPointerSize)); + __ fld_d(Operand(esp, 1 * kPointerSize)); + __ FCmp(); + __ j(parity_even, &nan_l); + __ j(equal, &equal_l); + __ j(below, &less_l); + __ j(above, &greater_l); + + __ mov(eax, kUndefined); + __ ret(0); + + __ bind(&equal_l); + __ mov(eax, kEqual); + __ ret(0); + + __ bind(&greater_l); + __ mov(eax, kGreater); + __ ret(0); + + __ bind(&less_l); + __ mov(eax, kLess); + __ ret(0); + + __ bind(&nan_l); + __ mov(eax, kNaN); + __ ret(0); + + + CodeDesc desc; + assm.GetCode(&desc); + Handle<Code> code = isolate->factory()->NewCode( + desc, Code::ComputeFlags(Code::STUB), Handle<Code>()); +#ifdef OBJECT_PRINT + OFStream os(stdout); + code->Print(os); +#endif + + F7 f = FUNCTION_CAST<F7>(code->entry()); + CHECK_EQ(kLess, f(1.1, 2.2)); + CHECK_EQ(kEqual, f(2.2, 2.2)); + CHECK_EQ(kGreater, f(3.3, 2.2)); + CHECK_EQ(kNaN, f(v8::base::OS::nan_value(), 1.1)); +} + + +TEST(AssemblerIa3210) { + // Test chaining of label usages within instructions (issue 1644). 
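+ // (Two forward jumps to one still-unbound label force the assembler to
+ // thread both uses into a chain that bind() must walk and patch.)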
+ CcTest::InitializeVM(); + Isolate* isolate = reinterpret_cast<Isolate*>(CcTest::isolate()); + HandleScope scope(isolate); + Assembler assm(isolate, NULL, 0); + + Label target; + __ j(equal, &target); + __ j(not_equal, &target); + __ bind(&target); + __ nop(); +} + + +TEST(AssemblerMultiByteNop) { + CcTest::InitializeVM(); + Isolate* isolate = reinterpret_cast<Isolate*>(CcTest::isolate()); + HandleScope scope(isolate); + v8::internal::byte buffer[1024]; + Assembler assm(isolate, buffer, sizeof(buffer)); + __ push(ebx); + __ push(ecx); + __ push(edx); + __ push(edi); + __ push(esi); + __ mov(eax, 1); + __ mov(ebx, 2); + __ mov(ecx, 3); + __ mov(edx, 4); + __ mov(edi, 5); + __ mov(esi, 6); + for (int i = 0; i < 16; i++) { + int before = assm.pc_offset(); + __ Nop(i); + CHECK_EQ(assm.pc_offset() - before, i); + } + + Label fail; + __ cmp(eax, 1); + __ j(not_equal, &fail); + __ cmp(ebx, 2); + __ j(not_equal, &fail); + __ cmp(ecx, 3); + __ j(not_equal, &fail); + __ cmp(edx, 4); + __ j(not_equal, &fail); + __ cmp(edi, 5); + __ j(not_equal, &fail); + __ cmp(esi, 6); + __ j(not_equal, &fail); + __ mov(eax, 42); + __ pop(esi); + __ pop(edi); + __ pop(edx); + __ pop(ecx); + __ pop(ebx); + __ ret(0); + __ bind(&fail); + __ mov(eax, 13); + __ pop(esi); + __ pop(edi); + __ pop(edx); + __ pop(ecx); + __ pop(ebx); + __ ret(0); + + CodeDesc desc; + assm.GetCode(&desc); + Handle<Code> code = isolate->factory()->NewCode( + desc, Code::ComputeFlags(Code::STUB), Handle<Code>()); + CHECK(code->IsCode()); + + F0 f = FUNCTION_CAST<F0>(code->entry()); + int res = f(); + CHECK_EQ(42, res); +} + + +#undef __ diff --git a/deps/v8/test/cctest/test-ast.cc b/deps/v8/test/cctest/test-ast.cc index d6431371aa1..a25ae69b617 100644 --- a/deps/v8/test/cctest/test-ast.cc +++ b/deps/v8/test/cctest/test-ast.cc @@ -27,10 +27,10 @@ #include <stdlib.h> -#include "v8.h" +#include "src/v8.h" -#include "ast.h" -#include "cctest.h" +#include "src/ast.h" +#include "test/cctest/cctest.h" using namespace v8::internal; @@ -41,7 +41,7 @@ TEST(List) { Isolate* isolate = CcTest::i_isolate(); Zone zone(isolate); - AstNodeFactory<AstNullVisitor> factory(&zone); + AstNodeFactory<AstNullVisitor> factory(&zone, NULL); AstNode* node = factory.NewEmptyStatement(RelocInfo::kNoPosition); list->Add(node); CHECK_EQ(1, list->length()); diff --git a/deps/v8/test/cctest/test-atomicops.cc b/deps/v8/test/cctest/test-atomicops.cc index 53df22963dd..8b47208ba0a 100644 --- a/deps/v8/test/cctest/test-atomicops.cc +++ b/deps/v8/test/cctest/test-atomicops.cc @@ -25,11 +25,12 @@ // (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE // OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. 
-#include "v8.h" +#include "src/v8.h" -#include "cctest.h" -#include "atomicops.h" +#include "src/base/atomicops.h" +#include "test/cctest/cctest.h" +using namespace v8::base; using namespace v8::internal; diff --git a/deps/v8/test/cctest/test-bignum-dtoa.cc b/deps/v8/test/cctest/test-bignum-dtoa.cc index a696ed8e3fc..9262e018c89 100644 --- a/deps/v8/test/cctest/test-bignum-dtoa.cc +++ b/deps/v8/test/cctest/test-bignum-dtoa.cc @@ -27,16 +27,16 @@ #include <stdlib.h> -#include "v8.h" +#include "src/v8.h" -#include "bignum-dtoa.h" +#include "src/bignum-dtoa.h" -#include "cctest.h" -#include "double.h" -#include "gay-fixed.h" -#include "gay-precision.h" -#include "gay-shortest.h" -#include "platform.h" +#include "src/base/platform/platform.h" +#include "src/double.h" +#include "test/cctest/cctest.h" +#include "test/cctest/gay-fixed.h" +#include "test/cctest/gay-precision.h" +#include "test/cctest/gay-shortest.h" using namespace v8::internal; diff --git a/deps/v8/test/cctest/test-bignum.cc b/deps/v8/test/cctest/test-bignum.cc index 9aa5ef30d0a..47ce2a48a9d 100644 --- a/deps/v8/test/cctest/test-bignum.cc +++ b/deps/v8/test/cctest/test-bignum.cc @@ -27,11 +27,11 @@ #include <stdlib.h> -#include "v8.h" +#include "src/v8.h" -#include "platform.h" -#include "cctest.h" -#include "bignum.h" +#include "src/base/platform/platform.h" +#include "src/bignum.h" +#include "test/cctest/cctest.h" using namespace v8::internal; diff --git a/deps/v8/test/cctest/test-checks.cc b/deps/v8/test/cctest/test-checks.cc new file mode 100644 index 00000000000..a49a7dbe2a6 --- /dev/null +++ b/deps/v8/test/cctest/test-checks.cc @@ -0,0 +1,26 @@ +// Copyright 2014 the V8 project authors. All rights reserved. +// Use of this source code is governed by a BSD-style license that can be +// found in the LICENSE file. + +#include "src/checks.h" + +#include "test/cctest/cctest.h" + + +TEST(CheckEqualsZeroAndMinusZero) { + CHECK_EQ(0.0, 0.0); + CHECK_NE(0.0, -0.0); + CHECK_NE(-0.0, 0.0); + CHECK_EQ(-0.0, -0.0); +} + + +TEST(CheckEqualsReflexivity) { + double inf = V8_INFINITY; + double nan = v8::base::OS::nan_value(); + double constants[] = {-nan, -inf, -3.1415, -1.0, -0.1, -0.0, + 0.0, 0.1, 1.0, 3.1415, inf, nan}; + for (size_t i = 0; i < ARRAY_SIZE(constants); ++i) { + CHECK_EQ(constants[i], constants[i]); + } +} diff --git a/deps/v8/test/cctest/test-circular-queue.cc b/deps/v8/test/cctest/test-circular-queue.cc index c900be1a646..736a9b7c885 100644 --- a/deps/v8/test/cctest/test-circular-queue.cc +++ b/deps/v8/test/cctest/test-circular-queue.cc @@ -27,15 +27,16 @@ // // Tests of the circular queue. 
-#include "v8.h" -#include "circular-queue-inl.h" -#include "cctest.h" +#include "src/v8.h" + +#include "src/circular-queue-inl.h" +#include "test/cctest/cctest.h" using i::SamplingCircularQueue; TEST(SamplingCircularQueue) { - typedef i::AtomicWord Record; + typedef v8::base::AtomicWord Record; const int kMaxRecordsInQueue = 4; SamplingCircularQueue<Record, kMaxRecordsInQueue> scq; @@ -99,20 +100,18 @@ TEST(SamplingCircularQueue) { namespace { -typedef i::AtomicWord Record; +typedef v8::base::AtomicWord Record; typedef SamplingCircularQueue<Record, 12> TestSampleQueue; -class ProducerThread: public i::Thread { +class ProducerThread: public v8::base::Thread { public: - ProducerThread(TestSampleQueue* scq, - int records_per_chunk, - Record value, - i::Semaphore* finished) - : Thread("producer"), + ProducerThread(TestSampleQueue* scq, int records_per_chunk, Record value, + v8::base::Semaphore* finished) + : Thread(Options("producer")), scq_(scq), records_per_chunk_(records_per_chunk), value_(value), - finished_(finished) { } + finished_(finished) {} virtual void Run() { for (Record i = value_; i < value_ + records_per_chunk_; ++i) { @@ -129,7 +128,7 @@ class ProducerThread: public i::Thread { TestSampleQueue* scq_; const int records_per_chunk_; Record value_; - i::Semaphore* finished_; + v8::base::Semaphore* finished_; }; } // namespace @@ -142,7 +141,7 @@ TEST(SamplingCircularQueueMultithreading) { const int kRecordsPerChunk = 4; TestSampleQueue scq; - i::Semaphore semaphore(0); + v8::base::Semaphore semaphore(0); ProducerThread producer1(&scq, kRecordsPerChunk, 1, &semaphore); ProducerThread producer2(&scq, kRecordsPerChunk, 10, &semaphore); diff --git a/deps/v8/test/cctest/test-code-stubs-arm.cc b/deps/v8/test/cctest/test-code-stubs-arm.cc index 43233472bba..80403440da5 100644 --- a/deps/v8/test/cctest/test-code-stubs-arm.cc +++ b/deps/v8/test/cctest/test-code-stubs-arm.cc @@ -27,15 +27,15 @@ #include <stdlib.h> -#include "v8.h" +#include "src/v8.h" -#include "cctest.h" -#include "code-stubs.h" -#include "test-code-stubs.h" -#include "factory.h" -#include "macro-assembler.h" -#include "platform.h" -#include "simulator.h" +#include "src/base/platform/platform.h" +#include "src/code-stubs.h" +#include "src/factory.h" +#include "src/macro-assembler.h" +#include "src/simulator.h" +#include "test/cctest/cctest.h" +#include "test/cctest/test-code-stubs.h" using namespace v8::internal; @@ -47,9 +47,8 @@ ConvertDToIFunc MakeConvertDToIFuncTrampoline(Isolate* isolate, bool inline_fastpath) { // Allocate an executable page of memory. 
size_t actual_size; - byte* buffer = static_cast<byte*>(OS::Allocate(Assembler::kMinimalBufferSize, - &actual_size, - true)); + byte* buffer = static_cast<byte*>(v8::base::OS::Allocate( + Assembler::kMinimalBufferSize, &actual_size, true)); CHECK(buffer); HandleScope handles(isolate); MacroAssembler masm(isolate, buffer, static_cast<int>(actual_size)); @@ -127,7 +126,7 @@ ConvertDToIFunc MakeConvertDToIFuncTrampoline(Isolate* isolate, CodeDesc desc; masm.GetCode(&desc); - CPU::FlushICache(buffer, actual_size); + CpuFeatures::FlushICache(buffer, actual_size); return (reinterpret_cast<ConvertDToIFunc>( reinterpret_cast<intptr_t>(buffer))); } diff --git a/deps/v8/test/cctest/test-code-stubs-arm64.cc b/deps/v8/test/cctest/test-code-stubs-arm64.cc index 3ad07bf8091..6d5b0f49be7 100644 --- a/deps/v8/test/cctest/test-code-stubs-arm64.cc +++ b/deps/v8/test/cctest/test-code-stubs-arm64.cc @@ -27,15 +27,15 @@ #include <stdlib.h> -#include "v8.h" +#include "src/v8.h" -#include "cctest.h" -#include "code-stubs.h" -#include "test-code-stubs.h" -#include "factory.h" -#include "macro-assembler.h" -#include "platform.h" -#include "simulator.h" +#include "src/base/platform/platform.h" +#include "src/code-stubs.h" +#include "src/factory.h" +#include "src/macro-assembler.h" +#include "src/simulator.h" +#include "test/cctest/cctest.h" +#include "test/cctest/test-code-stubs.h" using namespace v8::internal; @@ -46,10 +46,9 @@ ConvertDToIFunc MakeConvertDToIFuncTrampoline(Isolate* isolate, Register destination_reg, bool inline_fastpath) { // Allocate an executable page of memory. - size_t actual_size = 2 * Assembler::kMinimalBufferSize; - byte* buffer = static_cast<byte*>(OS::Allocate(actual_size, - &actual_size, - true)); + size_t actual_size = 4 * Assembler::kMinimalBufferSize; + byte* buffer = static_cast<byte*>( + v8::base::OS::Allocate(actual_size, &actual_size, true)); CHECK(buffer); HandleScope handles(isolate); MacroAssembler masm(isolate, buffer, static_cast<int>(actual_size)); @@ -123,7 +122,7 @@ ConvertDToIFunc MakeConvertDToIFuncTrampoline(Isolate* isolate, CodeDesc desc; masm.GetCode(&desc); - CPU::FlushICache(buffer, actual_size); + CpuFeatures::FlushICache(buffer, actual_size); return (reinterpret_cast<ConvertDToIFunc>( reinterpret_cast<intptr_t>(buffer))); } diff --git a/deps/v8/test/cctest/test-code-stubs-ia32.cc b/deps/v8/test/cctest/test-code-stubs-ia32.cc index 96639577b46..0b4a8d417bc 100644 --- a/deps/v8/test/cctest/test-code-stubs-ia32.cc +++ b/deps/v8/test/cctest/test-code-stubs-ia32.cc @@ -29,14 +29,14 @@ #include <limits> -#include "v8.h" +#include "src/v8.h" -#include "cctest.h" -#include "code-stubs.h" -#include "test-code-stubs.h" -#include "factory.h" -#include "macro-assembler.h" -#include "platform.h" +#include "src/base/platform/platform.h" +#include "src/code-stubs.h" +#include "src/factory.h" +#include "src/macro-assembler.h" +#include "test/cctest/cctest.h" +#include "test/cctest/test-code-stubs.h" using namespace v8::internal; @@ -47,9 +47,8 @@ ConvertDToIFunc MakeConvertDToIFuncTrampoline(Isolate* isolate, Register destination_reg) { // Allocate an executable page of memory. 
size_t actual_size;
- byte* buffer = static_cast<byte*>(OS::Allocate(Assembler::kMinimalBufferSize,
- &actual_size,
- true));
+ byte* buffer = static_cast<byte*>(v8::base::OS::Allocate(
+ Assembler::kMinimalBufferSize, &actual_size, true));
CHECK(buffer);
HandleScope handles(isolate);
MacroAssembler assm(isolate, buffer, static_cast<int>(actual_size));
diff --git a/deps/v8/test/cctest/test-code-stubs-mips.cc b/deps/v8/test/cctest/test-code-stubs-mips.cc
index f8897967833..796aa1d6107 100644
--- a/deps/v8/test/cctest/test-code-stubs-mips.cc
+++ b/deps/v8/test/cctest/test-code-stubs-mips.cc
@@ -27,16 +27,16 @@
 #include <stdlib.h>
-#include "v8.h"
+#include "src/v8.h"
-#include "cctest.h"
-#include "code-stubs.h"
-#include "test-code-stubs.h"
-#include "mips/constants-mips.h"
-#include "factory.h"
-#include "macro-assembler.h"
-#include "platform.h"
-#include "simulator.h"
+#include "src/base/platform/platform.h"
+#include "src/code-stubs.h"
+#include "src/factory.h"
+#include "src/macro-assembler.h"
+#include "src/mips/constants-mips.h"
+#include "src/simulator.h"
+#include "test/cctest/cctest.h"
+#include "test/cctest/test-code-stubs.h"
using namespace v8::internal;
@@ -48,9 +48,8 @@ ConvertDToIFunc MakeConvertDToIFuncTrampoline(Isolate* isolate,
bool inline_fastpath) {
// Allocate an executable page of memory.
size_t actual_size;
- byte* buffer = static_cast<byte*>(OS::Allocate(Assembler::kMinimalBufferSize,
- &actual_size,
- true));
+ byte* buffer = static_cast<byte*>(v8::base::OS::Allocate(
+ Assembler::kMinimalBufferSize, &actual_size, true));
CHECK(buffer);
HandleScope handles(isolate);
MacroAssembler masm(isolate, buffer, static_cast<int>(actual_size));
@@ -128,7 +127,7 @@ ConvertDToIFunc MakeConvertDToIFuncTrampoline(Isolate* isolate,
CodeDesc desc;
masm.GetCode(&desc);
- CPU::FlushICache(buffer, actual_size);
+ CpuFeatures::FlushICache(buffer, actual_size);
return (reinterpret_cast<ConvertDToIFunc>(
reinterpret_cast<intptr_t>(buffer)));
}
diff --git a/deps/v8/test/cctest/test-code-stubs-mips64.cc b/deps/v8/test/cctest/test-code-stubs-mips64.cc
new file mode 100644
index 00000000000..025a8ba04b5
--- /dev/null
+++ b/deps/v8/test/cctest/test-code-stubs-mips64.cc
@@ -0,0 +1,188 @@
+// Copyright 2013 the V8 project authors. All rights reserved.
+// Redistribution and use in source and binary forms, with or without
+// modification, are permitted provided that the following conditions are
+// met:
+//
+// * Redistributions of source code must retain the above copyright
+// notice, this list of conditions and the following disclaimer.
+// * Redistributions in binary form must reproduce the above
+// copyright notice, this list of conditions and the following
+// disclaimer in the documentation and/or other materials provided
+// with the distribution.
+// * Neither the name of Google Inc. nor the names of its
+// contributors may be used to endorse or promote products derived
+// from this software without specific prior written permission.
+//
+// THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+// "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+// LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+// A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+// OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+// SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+// LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+// DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+// THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+// (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+// OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+
+#include <stdlib.h>
+
+#include "src/v8.h"
+
+#include "src/base/platform/platform.h"
+#include "src/code-stubs.h"
+#include "src/factory.h"
+#include "src/macro-assembler.h"
+#include "src/mips64/constants-mips64.h"
+#include "src/simulator.h"
+#include "test/cctest/cctest.h"
+#include "test/cctest/test-code-stubs.h"
+
+using namespace v8::internal;
+
+#define __ masm.
+
+ConvertDToIFunc MakeConvertDToIFuncTrampoline(Isolate* isolate,
+ Register source_reg,
+ Register destination_reg,
+ bool inline_fastpath) {
+ // Allocate an executable page of memory.
+ size_t actual_size;
+ byte* buffer = static_cast<byte*>(v8::base::OS::Allocate(
+ Assembler::kMinimalBufferSize, &actual_size, true));
+ CHECK(buffer);
+ HandleScope handles(isolate);
+ MacroAssembler masm(isolate, buffer, static_cast<int>(actual_size));
+ DoubleToIStub stub(isolate, source_reg, destination_reg, 0, true,
+ inline_fastpath);
+
+ byte* start = stub.GetCode()->instruction_start();
+ Label done;
+
+ // Save callee save registers.
+ __ MultiPush(kCalleeSaved | ra.bit());
+
+ // For softfp, move the input value into f12.
+ if (IsMipsSoftFloatABI) {
+ __ Move(f12, a0, a1);
+ }
+ // Push the double argument.
+ __ Dsubu(sp, sp, Operand(kDoubleSize));
+ __ sdc1(f12, MemOperand(sp));
+ __ Move(source_reg, sp);
+
+ // Save registers to make sure they don't get clobbered.
+ int source_reg_offset = kDoubleSize;
+ int reg_num = 2;
+ for (;reg_num < Register::NumAllocatableRegisters(); ++reg_num) {
+ Register reg = Register::from_code(reg_num);
+ if (!reg.is(destination_reg)) {
+ __ push(reg);
+ source_reg_offset += kPointerSize;
+ }
+ }
+
+ // Re-push the double argument.
+ __ Dsubu(sp, sp, Operand(kDoubleSize));
+ __ sdc1(f12, MemOperand(sp));
+
+ // Call through to the actual stub.
+ if (inline_fastpath) {
+ __ ldc1(f12, MemOperand(source_reg));
+ __ TryInlineTruncateDoubleToI(destination_reg, f12, &done);
+ if (destination_reg.is(source_reg) && !source_reg.is(sp)) {
+ // Restore clobbered source_reg.
+ __ Daddu(source_reg, sp, Operand(source_reg_offset));
+ }
+ }
+ __ Call(start, RelocInfo::EXTERNAL_REFERENCE);
+ __ bind(&done);
+
+ __ Daddu(sp, sp, Operand(kDoubleSize));
+
+ // Make sure no registers have been unexpectedly clobbered.
+ for (--reg_num; reg_num >= 2; --reg_num) {
+ Register reg = Register::from_code(reg_num);
+ if (!reg.is(destination_reg)) {
+ __ lw(at, MemOperand(sp, 0));
+ __ Assert(eq, kRegisterWasClobbered, reg, Operand(at));
+ __ Daddu(sp, sp, Operand(kPointerSize));
+ }
+ }
+
+ __ Daddu(sp, sp, Operand(kDoubleSize));
+
+ __ Move(v0, destination_reg);
+ Label ok;
+ __ Branch(&ok, eq, v0, Operand(zero_reg));
+ __ bind(&ok);
+
+ // Restore callee save registers.
+ __ MultiPop(kCalleeSaved | ra.bit()); + + Label ok1; + __ Branch(&ok1, eq, v0, Operand(zero_reg)); + __ bind(&ok1); + __ Ret(); + + CodeDesc desc; + masm.GetCode(&desc); + CpuFeatures::FlushICache(buffer, actual_size); + return (reinterpret_cast<ConvertDToIFunc>( + reinterpret_cast<intptr_t>(buffer))); +} + +#undef __ + + +static Isolate* GetIsolateFrom(LocalContext* context) { + return reinterpret_cast<Isolate*>((*context)->GetIsolate()); +} + + +int32_t RunGeneratedCodeCallWrapper(ConvertDToIFunc func, + double from) { +#ifdef USE_SIMULATOR + Simulator::current(Isolate::Current())->CallFP(FUNCTION_ADDR(func), from, 0.); + return Simulator::current(Isolate::Current())->get_register(v0.code()); +#else + return (*func)(from); +#endif +} + + +TEST(ConvertDToI) { + CcTest::InitializeVM(); + LocalContext context; + Isolate* isolate = GetIsolateFrom(&context); + HandleScope scope(isolate); + +#if DEBUG + // Verify that the tests actually work with the C version. In the release + // code, the compiler optimizes it away because it's all constant, but does it + // wrong, triggering an assert on gcc. + RunAllTruncationTests(&ConvertDToICVersion); +#endif + + Register source_registers[] = { + sp, v0, v1, a0, a1, a2, a3, a4, a5, a6, a7, t0, t1}; + Register dest_registers[] = { + v0, v1, a0, a1, a2, a3, a4, a5, a6, a7, t0, t1}; + + for (size_t s = 0; s < sizeof(source_registers) / sizeof(Register); s++) { + for (size_t d = 0; d < sizeof(dest_registers) / sizeof(Register); d++) { + RunAllTruncationTests( + RunGeneratedCodeCallWrapper, + MakeConvertDToIFuncTrampoline(isolate, + source_registers[s], + dest_registers[d], + false)); + RunAllTruncationTests( + RunGeneratedCodeCallWrapper, + MakeConvertDToIFuncTrampoline(isolate, + source_registers[s], + dest_registers[d], + true)); + } + } +} diff --git a/deps/v8/test/cctest/test-code-stubs-x64.cc b/deps/v8/test/cctest/test-code-stubs-x64.cc index 3ffd292c5f0..b58b073f3b7 100644 --- a/deps/v8/test/cctest/test-code-stubs-x64.cc +++ b/deps/v8/test/cctest/test-code-stubs-x64.cc @@ -27,14 +27,14 @@ #include <stdlib.h> -#include "v8.h" +#include "src/v8.h" -#include "cctest.h" -#include "code-stubs.h" -#include "test-code-stubs.h" -#include "factory.h" -#include "macro-assembler.h" -#include "platform.h" +#include "src/base/platform/platform.h" +#include "src/code-stubs.h" +#include "src/factory.h" +#include "src/macro-assembler.h" +#include "test/cctest/cctest.h" +#include "test/cctest/test-code-stubs.h" using namespace v8::internal; @@ -46,9 +46,8 @@ ConvertDToIFunc MakeConvertDToIFuncTrampoline(Isolate* isolate, Register destination_reg) { // Allocate an executable page of memory. size_t actual_size; - byte* buffer = static_cast<byte*>(OS::Allocate(Assembler::kMinimalBufferSize, - &actual_size, - true)); + byte* buffer = static_cast<byte*>(v8::base::OS::Allocate( + Assembler::kMinimalBufferSize, &actual_size, true)); CHECK(buffer); HandleScope handles(isolate); MacroAssembler assm(isolate, buffer, static_cast<int>(actual_size)); diff --git a/deps/v8/test/cctest/test-code-stubs-x87.cc b/deps/v8/test/cctest/test-code-stubs-x87.cc new file mode 100644 index 00000000000..0b4a8d417bc --- /dev/null +++ b/deps/v8/test/cctest/test-code-stubs-x87.cc @@ -0,0 +1,148 @@ +// Copyright 2013 the V8 project authors. All rights reserved. 
+// Redistribution and use in source and binary forms, with or without
+// modification, are permitted provided that the following conditions are
+// met:
+//
+// * Redistributions of source code must retain the above copyright
+// notice, this list of conditions and the following disclaimer.
+// * Redistributions in binary form must reproduce the above
+// copyright notice, this list of conditions and the following
+// disclaimer in the documentation and/or other materials provided
+// with the distribution.
+// * Neither the name of Google Inc. nor the names of its
+// contributors may be used to endorse or promote products derived
+// from this software without specific prior written permission.
+//
+// THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+// "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+// LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+// A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+// OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+// SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+// LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+// DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+// THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+// (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+// OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+
+#include <stdlib.h>
+
+#include <limits>
+
+#include "src/v8.h"
+
+#include "src/base/platform/platform.h"
+#include "src/code-stubs.h"
+#include "src/factory.h"
+#include "src/macro-assembler.h"
+#include "test/cctest/cctest.h"
+#include "test/cctest/test-code-stubs.h"
+
+using namespace v8::internal;
+
+#define __ assm.
+
+ConvertDToIFunc MakeConvertDToIFuncTrampoline(Isolate* isolate,
+ Register source_reg,
+ Register destination_reg) {
+ // Allocate an executable page of memory.
+ size_t actual_size;
+ byte* buffer = static_cast<byte*>(v8::base::OS::Allocate(
+ Assembler::kMinimalBufferSize, &actual_size, true));
+ CHECK(buffer);
+ HandleScope handles(isolate);
+ MacroAssembler assm(isolate, buffer, static_cast<int>(actual_size));
+ int offset =
+ source_reg.is(esp) ? 0 : (HeapNumber::kValueOffset - kSmiTagSize);
+ DoubleToIStub stub(isolate, source_reg, destination_reg, offset, true);
+ byte* start = stub.GetCode()->instruction_start();
+
+ __ push(ebx);
+ __ push(ecx);
+ __ push(edx);
+ __ push(esi);
+ __ push(edi);
+
+ if (!source_reg.is(esp)) {
+ __ lea(source_reg, MemOperand(esp, 6 * kPointerSize - offset));
+ }
+
+ int param_offset = 7 * kPointerSize;
+ // Save registers to make sure they don't get clobbered.
+ int reg_num = 0; + for (;reg_num < Register::NumAllocatableRegisters(); ++reg_num) { + Register reg = Register::FromAllocationIndex(reg_num); + if (!reg.is(esp) && !reg.is(ebp) && !reg.is(destination_reg)) { + __ push(reg); + param_offset += kPointerSize; + } + } + + // Re-push the double argument + __ push(MemOperand(esp, param_offset)); + __ push(MemOperand(esp, param_offset)); + + // Call through to the actual stub + __ call(start, RelocInfo::EXTERNAL_REFERENCE); + + __ add(esp, Immediate(kDoubleSize)); + + // Make sure no registers have been unexpectedly clobbered + for (--reg_num; reg_num >= 0; --reg_num) { + Register reg = Register::FromAllocationIndex(reg_num); + if (!reg.is(esp) && !reg.is(ebp) && !reg.is(destination_reg)) { + __ cmp(reg, MemOperand(esp, 0)); + __ Assert(equal, kRegisterWasClobbered); + __ add(esp, Immediate(kPointerSize)); + } + } + + __ mov(eax, destination_reg); + + __ pop(edi); + __ pop(esi); + __ pop(edx); + __ pop(ecx); + __ pop(ebx); + + __ ret(kDoubleSize); + + CodeDesc desc; + assm.GetCode(&desc); + return reinterpret_cast<ConvertDToIFunc>( + reinterpret_cast<intptr_t>(buffer)); +} + +#undef __ + + +static Isolate* GetIsolateFrom(LocalContext* context) { + return reinterpret_cast<Isolate*>((*context)->GetIsolate()); +} + + +TEST(ConvertDToI) { + CcTest::InitializeVM(); + LocalContext context; + Isolate* isolate = GetIsolateFrom(&context); + HandleScope scope(isolate); + +#if DEBUG + // Verify that the tests actually work with the C version. In the release + // code, the compiler optimizes it away because it's all constant, but does it + // wrong, triggering an assert on gcc. + RunAllTruncationTests(&ConvertDToICVersion); +#endif + + Register source_registers[] = {esp, eax, ebx, ecx, edx, edi, esi}; + Register dest_registers[] = {eax, ebx, ecx, edx, edi, esi}; + + for (size_t s = 0; s < sizeof(source_registers) / sizeof(Register); s++) { + for (size_t d = 0; d < sizeof(dest_registers) / sizeof(Register); d++) { + RunAllTruncationTests( + MakeConvertDToIFuncTrampoline(isolate, + source_registers[s], + dest_registers[d])); + } + } +} diff --git a/deps/v8/test/cctest/test-code-stubs.cc b/deps/v8/test/cctest/test-code-stubs.cc index 999febf777e..0784aac78e9 100644 --- a/deps/v8/test/cctest/test-code-stubs.cc +++ b/deps/v8/test/cctest/test-code-stubs.cc @@ -29,14 +29,14 @@ #include <limits> -#include "v8.h" +#include "src/v8.h" -#include "cctest.h" -#include "code-stubs.h" -#include "test-code-stubs.h" -#include "factory.h" -#include "macro-assembler.h" -#include "platform.h" +#include "src/base/platform/platform.h" +#include "src/code-stubs.h" +#include "src/factory.h" +#include "src/macro-assembler.h" +#include "test/cctest/cctest.h" +#include "test/cctest/test-code-stubs.h" using namespace v8::internal; @@ -92,7 +92,7 @@ int32_t DefaultCallWrapper(ConvertDToIFunc func, // #define NaN and Infinity so that it's possible to cut-and-paste these tests // directly to a .js file and run them. 
-#define NaN (OS::nan_value()) +#define NaN (v8::base::OS::nan_value()) #define Infinity (std::numeric_limits<double>::infinity()) #define RunOneTruncationTest(p1, p2) \ RunOneTruncationTestWithTest(callWrapper, func, p1, p2) diff --git a/deps/v8/test/cctest/test-code-stubs.h b/deps/v8/test/cctest/test-code-stubs.h index 910e0d17014..0cfa0ec7c88 100644 --- a/deps/v8/test/cctest/test-code-stubs.h +++ b/deps/v8/test/cctest/test-code-stubs.h @@ -28,7 +28,7 @@ #ifndef V8_TEST_CODE_STUBS_H_ #define V8_TEST_CODE_STUBS_H_ -#if V8_TARGET_ARCH_IA32 +#if V8_TARGET_ARCH_IA32 || V8_TARGET_ARCH_X87 #if __GNUC__ #define STDCALL __attribute__((stdcall)) #else diff --git a/deps/v8/test/cctest/test-compiler.cc b/deps/v8/test/cctest/test-compiler.cc index 9974ff57020..2d913715e1e 100644 --- a/deps/v8/test/cctest/test-compiler.cc +++ b/deps/v8/test/cctest/test-compiler.cc @@ -28,11 +28,12 @@ #include <stdlib.h> #include <wchar.h> -#include "v8.h" +#include "src/v8.h" -#include "compiler.h" -#include "disasm.h" -#include "cctest.h" +#include "src/compiler.h" +#include "src/disasm.h" +#include "src/parser.h" +#include "test/cctest/cctest.h" using namespace v8::internal; @@ -49,7 +50,7 @@ static void SetGlobalProperty(const char* name, Object* value) { Handle<String> internalized_name = isolate->factory()->InternalizeUtf8String(name); Handle<JSObject> global(isolate->context()->global_object()); - Runtime::SetObjectProperty(isolate, global, internalized_name, object, NONE, + Runtime::SetObjectProperty(isolate, global, internalized_name, object, SLOPPY).Check(); } @@ -58,15 +59,10 @@ static Handle<JSFunction> Compile(const char* source) { Isolate* isolate = CcTest::i_isolate(); Handle<String> source_code = isolate->factory()->NewStringFromUtf8( CStrVector(source)).ToHandleChecked(); - Handle<SharedFunctionInfo> shared_function = - Compiler::CompileScript(source_code, - Handle<String>(), - 0, - 0, - false, - Handle<Context>(isolate->native_context()), - NULL, NULL, NO_CACHED_DATA, - NOT_NATIVES_CODE); + Handle<SharedFunctionInfo> shared_function = Compiler::CompileScript( + source_code, Handle<String>(), 0, 0, false, + Handle<Context>(isolate->native_context()), NULL, NULL, + v8::ScriptCompiler::kNoCompileOptions, NOT_NATIVES_CODE); return isolate->factory()->NewFunctionFromSharedFunctionInfo( shared_function, isolate->native_context()); } @@ -75,7 +71,7 @@ static Handle<JSFunction> Compile(const char* source) { static double Inc(Isolate* isolate, int x) { const char* source = "result = %d + 1;"; EmbeddedVector<char, 512> buffer; - OS::SNPrintF(buffer, source, x); + SNPrintF(buffer, source, x); Handle<JSFunction> fun = Compile(buffer.start()); if (fun.is_null()) return -1; @@ -278,7 +274,7 @@ TEST(GetScriptLineNumber) { for (int i = 0; i < max_rows; ++i) { if (i > 0) buffer[i - 1] = '\n'; - OS::MemCopy(&buffer[i], function_f, sizeof(function_f) - 1); + MemCopy(&buffer[i], function_f, sizeof(function_f) - 1); v8::Handle<v8::String> script_body = v8::String::NewFromUtf8(CcTest::isolate(), buffer.start()); v8::Script::Compile(script_body, &origin)->Run(); @@ -313,8 +309,9 @@ TEST(FeedbackVectorPreservedAcrossRecompiles) { Handle<FixedArray> feedback_vector(f->shared()->feedback_vector()); // Verify that we gathered feedback. - CHECK_EQ(1, feedback_vector->length()); - CHECK(feedback_vector->get(0)->IsJSFunction()); + int expected_count = FLAG_vector_ics ? 
2 : 1; + CHECK_EQ(expected_count, feedback_vector->length()); + CHECK(feedback_vector->get(expected_count - 1)->IsJSFunction()); CompileRun("%OptimizeFunctionOnNextCall(f); f(fun1);"); @@ -322,7 +319,8 @@ TEST(FeedbackVectorPreservedAcrossRecompiles) { // of the full code. CHECK(f->IsOptimized()); CHECK(f->shared()->has_deoptimization_support()); - CHECK(f->shared()->feedback_vector()->get(0)->IsJSFunction()); + CHECK(f->shared()->feedback_vector()-> + get(expected_count - 1)->IsJSFunction()); } @@ -348,16 +346,15 @@ TEST(FeedbackVectorUnaffectedByScopeChanges) { *v8::Handle<v8::Function>::Cast( CcTest::global()->Get(v8_str("morphing_call")))); - // morphing_call should have one feedback vector slot for the call to - // call_target(). - CHECK_EQ(1, f->shared()->feedback_vector()->length()); + int expected_count = FLAG_vector_ics ? 2 : 1; + CHECK_EQ(expected_count, f->shared()->feedback_vector()->length()); // And yet it's not compiled. CHECK(!f->shared()->is_compiled()); CompileRun("morphing_call();"); // The vector should have the same size despite the new scoping. - CHECK_EQ(1, f->shared()->feedback_vector()->length()); + CHECK_EQ(expected_count, f->shared()->feedback_vector()->length()); CHECK(f->shared()->is_compiled()); } @@ -422,7 +419,7 @@ static void CheckCodeForUnsafeLiteral(Handle<JSFunction> f) { v8::internal::EmbeddedVector<char, 128> decode_buffer; v8::internal::EmbeddedVector<char, 128> smi_hex_buffer; Smi* smi = Smi::FromInt(12345678); - OS::SNPrintF(smi_hex_buffer, "0x%lx", reinterpret_cast<intptr_t>(smi)); + SNPrintF(smi_hex_buffer, "0x%" V8PRIxPTR, reinterpret_cast<intptr_t>(smi)); while (pc < end) { int num_const = d.ConstantPoolSizeAt(pc); if (num_const >= 0) { diff --git a/deps/v8/test/cctest/test-constantpool.cc b/deps/v8/test/cctest/test-constantpool.cc index e16e45a57d1..453657609eb 100644 --- a/deps/v8/test/cctest/test-constantpool.cc +++ b/deps/v8/test/cctest/test-constantpool.cc @@ -2,14 +2,22 @@ // Test constant pool array code. -#include "v8.h" +#include "src/v8.h" -#include "factory.h" -#include "objects.h" -#include "cctest.h" +#include "src/factory.h" +#include "src/objects.h" +#include "test/cctest/cctest.h" using namespace v8::internal; +static ConstantPoolArray::Type kTypes[] = { ConstantPoolArray::INT64, + ConstantPoolArray::CODE_PTR, + ConstantPoolArray::HEAP_PTR, + ConstantPoolArray::INT32 }; +static ConstantPoolArray::LayoutSection kSmall = + ConstantPoolArray::SMALL_SECTION; +static ConstantPoolArray::LayoutSection kExtended = + ConstantPoolArray::EXTENDED_SECTION; Code* DummyCode(LocalContext* context) { CompileRun("function foo() {};"); @@ -20,28 +28,29 @@ Code* DummyCode(LocalContext* context) { } -TEST(ConstantPool) { +TEST(ConstantPoolSmall) { LocalContext context; Isolate* isolate = CcTest::i_isolate(); - Heap* heap = isolate->heap(); Factory* factory = isolate->factory(); v8::HandleScope scope(context->GetIsolate()); // Check construction. 
- Handle<ConstantPoolArray> array = factory->NewConstantPoolArray(3, 1, 2, 1); - CHECK_EQ(array->count_of_int64_entries(), 3); - CHECK_EQ(array->count_of_code_ptr_entries(), 1); - CHECK_EQ(array->count_of_heap_ptr_entries(), 2); - CHECK_EQ(array->count_of_int32_entries(), 1); - CHECK_EQ(array->length(), 7); - CHECK_EQ(array->first_int64_index(), 0); - CHECK_EQ(array->first_code_ptr_index(), 3); - CHECK_EQ(array->first_heap_ptr_index(), 4); - CHECK_EQ(array->first_int32_index(), 6); + ConstantPoolArray::NumberOfEntries small(3, 1, 2, 1); + Handle<ConstantPoolArray> array = factory->NewConstantPoolArray(small); + + int expected_counts[] = { 3, 1, 2, 1 }; + int expected_first_idx[] = { 0, 3, 4, 6 }; + int expected_last_idx[] = { 2, 3, 5, 6 }; + for (int i = 0; i < 4; i++) { + CHECK_EQ(expected_counts[i], array->number_of_entries(kTypes[i], kSmall)); + CHECK_EQ(expected_first_idx[i], array->first_index(kTypes[i], kSmall)); + CHECK_EQ(expected_last_idx[i], array->last_index(kTypes[i], kSmall)); + } + CHECK(!array->is_extended_layout()); // Check getters and setters. int64_t big_number = V8_2PART_UINT64_C(0x12345678, 9ABCDEF0); - Handle<Object> object = factory->NewHeapNumber(4.0); + Handle<Object> object = factory->NewHeapNumber(4.0, IMMUTABLE, TENURED); Code* code = DummyCode(&context); array->set(0, big_number); array->set(1, 0.5); @@ -50,19 +59,253 @@ TEST(ConstantPool) { array->set(4, code); array->set(5, *object); array->set(6, 50); - CHECK_EQ(array->get_int64_entry(0), big_number); - CHECK_EQ(array->get_int64_entry_as_double(1), 0.5); - CHECK_EQ(array->get_int64_entry_as_double(2), 3e-24); - CHECK_EQ(array->get_code_ptr_entry(3), code->entry()); - CHECK_EQ(array->get_heap_ptr_entry(4), code); - CHECK_EQ(array->get_heap_ptr_entry(5), *object); - CHECK_EQ(array->get_int32_entry(6), 50); - - // Check pointers are updated on GC. - Object* old_ptr = array->get_heap_ptr_entry(5); - CHECK_EQ(*object, old_ptr); + CHECK_EQ(big_number, array->get_int64_entry(0)); + CHECK_EQ(0.5, array->get_int64_entry_as_double(1)); + CHECK_EQ(3e-24, array->get_int64_entry_as_double(2)); + CHECK_EQ(code->entry(), array->get_code_ptr_entry(3)); + CHECK_EQ(code, array->get_heap_ptr_entry(4)); + CHECK_EQ(*object, array->get_heap_ptr_entry(5)); + CHECK_EQ(50, array->get_int32_entry(6)); +} + + +TEST(ConstantPoolExtended) { + LocalContext context; + Isolate* isolate = CcTest::i_isolate(); + Factory* factory = isolate->factory(); + v8::HandleScope scope(context->GetIsolate()); + + // Check construction. + ConstantPoolArray::NumberOfEntries small(1, 2, 3, 4); + ConstantPoolArray::NumberOfEntries extended(5, 6, 7, 8); + Handle<ConstantPoolArray> array = + factory->NewExtendedConstantPoolArray(small, extended); + + // Check small section. + int small_counts[] = { 1, 2, 3, 4 }; + int small_first_idx[] = { 0, 1, 3, 6 }; + int small_last_idx[] = { 0, 2, 5, 9 }; + for (int i = 0; i < 4; i++) { + CHECK_EQ(small_counts[i], array->number_of_entries(kTypes[i], kSmall)); + CHECK_EQ(small_first_idx[i], array->first_index(kTypes[i], kSmall)); + CHECK_EQ(small_last_idx[i], array->last_index(kTypes[i], kSmall)); + } + + // Check extended layout. 
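+  // Indices continue past the small section: small(1, 2, 3, 4) occupies
+  // slots 0-9, so the first extended-section entry is expected at index 10.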
+  CHECK(array->is_extended_layout());
+  int extended_counts[] = { 5, 6, 7, 8 };
+  int extended_first_idx[] = { 10, 15, 21, 28 };
+  int extended_last_idx[] = { 14, 20, 27, 35 };
+  for (int i = 0; i < 4; i++) {
+    CHECK_EQ(extended_counts[i],
+             array->number_of_entries(kTypes[i], kExtended));
+    CHECK_EQ(extended_first_idx[i], array->first_index(kTypes[i], kExtended));
+    CHECK_EQ(extended_last_idx[i], array->last_index(kTypes[i], kExtended));
+  }
+
+  // Check the small and extended sections don't overlap.
+  int64_t small_section_int64 = V8_2PART_UINT64_C(0x56781234, DEF09ABC);
+  Code* small_section_code_ptr = DummyCode(&context);
+  Handle<Object> small_section_heap_ptr =
+      factory->NewHeapNumber(4.0, IMMUTABLE, TENURED);
+  int32_t small_section_int32 = 0xab12cd45;
+
+  int64_t extended_section_int64 = V8_2PART_UINT64_C(0x12345678, 9ABCDEF0);
+  Code* extended_section_code_ptr = DummyCode(&context);
+  Handle<Object> extended_section_heap_ptr =
+      factory->NewHeapNumber(5.0, IMMUTABLE, TENURED);
+  int32_t extended_section_int32 = 0xef67ab89;
+
+  for (int i = array->first_index(ConstantPoolArray::INT64, kSmall);
+       i <= array->last_index(ConstantPoolArray::INT32, kSmall); i++) {
+    if (i <= array->last_index(ConstantPoolArray::INT64, kSmall)) {
+      array->set(i, small_section_int64);
+    } else if (i <= array->last_index(ConstantPoolArray::CODE_PTR, kSmall)) {
+      array->set(i, small_section_code_ptr->entry());
+    } else if (i <= array->last_index(ConstantPoolArray::HEAP_PTR, kSmall)) {
+      array->set(i, *small_section_heap_ptr);
+    } else {
+      CHECK(i <= array->last_index(ConstantPoolArray::INT32, kSmall));
+      array->set(i, small_section_int32);
+    }
+  }
+  for (int i = array->first_index(ConstantPoolArray::INT64, kExtended);
+       i <= array->last_index(ConstantPoolArray::INT32, kExtended); i++) {
+    if (i <= array->last_index(ConstantPoolArray::INT64, kExtended)) {
+      array->set(i, extended_section_int64);
+    } else if (i <= array->last_index(ConstantPoolArray::CODE_PTR, kExtended)) {
+      array->set(i, extended_section_code_ptr->entry());
+    } else if (i <= array->last_index(ConstantPoolArray::HEAP_PTR, kExtended)) {
+      array->set(i, *extended_section_heap_ptr);
+    } else {
+      CHECK(i <= array->last_index(ConstantPoolArray::INT32, kExtended));
+      array->set(i, extended_section_int32);
+    }
+  }
+
+  for (int i = array->first_index(ConstantPoolArray::INT64, kSmall);
+       i <= array->last_index(ConstantPoolArray::INT32, kSmall); i++) {
+    if (i <= array->last_index(ConstantPoolArray::INT64, kSmall)) {
+      CHECK_EQ(small_section_int64, array->get_int64_entry(i));
+    } else if (i <= array->last_index(ConstantPoolArray::CODE_PTR, kSmall)) {
+      CHECK_EQ(small_section_code_ptr->entry(), array->get_code_ptr_entry(i));
+    } else if (i <= array->last_index(ConstantPoolArray::HEAP_PTR, kSmall)) {
+      CHECK_EQ(*small_section_heap_ptr, array->get_heap_ptr_entry(i));
+    } else {
+      CHECK(i <= array->last_index(ConstantPoolArray::INT32, kSmall));
+      CHECK_EQ(small_section_int32, array->get_int32_entry(i));
+    }
+  }
+  for (int i = array->first_index(ConstantPoolArray::INT64, kExtended);
+       i <= array->last_index(ConstantPoolArray::INT32, kExtended); i++) {
+    if (i <= array->last_index(ConstantPoolArray::INT64, kExtended)) {
+      CHECK_EQ(extended_section_int64, array->get_int64_entry(i));
+    } else if (i <= array->last_index(ConstantPoolArray::CODE_PTR, kExtended)) {
+      CHECK_EQ(extended_section_code_ptr->entry(),
+               array->get_code_ptr_entry(i));
+    } else if (i <= array->last_index(ConstantPoolArray::HEAP_PTR, kExtended)) {
+      CHECK_EQ(*extended_section_heap_ptr,
array->get_heap_ptr_entry(i)); + } else { + CHECK(i <= array->last_index(ConstantPoolArray::INT32, kExtended)); + CHECK_EQ(extended_section_int32, array->get_int32_entry(i)); + } + } +} + + +static void CheckIterator(Handle<ConstantPoolArray> array, + ConstantPoolArray::Type type, + int expected_indexes[], + int count) { + int i = 0; + ConstantPoolArray::Iterator iter(*array, type); + while (!iter.is_finished()) { + CHECK_EQ(expected_indexes[i++], iter.next_index()); + } + CHECK_EQ(count, i); +} + + +TEST(ConstantPoolIteratorSmall) { + LocalContext context; + Isolate* isolate = CcTest::i_isolate(); + Factory* factory = isolate->factory(); + v8::HandleScope scope(context->GetIsolate()); + + ConstantPoolArray::NumberOfEntries small(1, 5, 2, 0); + Handle<ConstantPoolArray> array = factory->NewConstantPoolArray(small); + + int expected_int64_indexs[] = { 0 }; + CheckIterator(array, ConstantPoolArray::INT64, expected_int64_indexs, 1); + int expected_code_indexs[] = { 1, 2, 3, 4, 5 }; + CheckIterator(array, ConstantPoolArray::CODE_PTR, expected_code_indexs, 5); + int expected_heap_indexs[] = { 6, 7 }; + CheckIterator(array, ConstantPoolArray::HEAP_PTR, expected_heap_indexs, 2); + int expected_int32_indexs[1]; + CheckIterator(array, ConstantPoolArray::INT32, expected_int32_indexs, 0); +} + + +TEST(ConstantPoolIteratorExtended) { + LocalContext context; + Isolate* isolate = CcTest::i_isolate(); + Factory* factory = isolate->factory(); + v8::HandleScope scope(context->GetIsolate()); + + ConstantPoolArray::NumberOfEntries small(1, 0, 0, 4); + ConstantPoolArray::NumberOfEntries extended(5, 0, 3, 0); + Handle<ConstantPoolArray> array = + factory->NewExtendedConstantPoolArray(small, extended); + + int expected_int64_indexs[] = { 0, 5, 6, 7, 8, 9 }; + CheckIterator(array, ConstantPoolArray::INT64, expected_int64_indexs, 6); + int expected_code_indexs[1]; + CheckIterator(array, ConstantPoolArray::CODE_PTR, expected_code_indexs, 0); + int expected_heap_indexs[] = { 10, 11, 12 }; + CheckIterator(array, ConstantPoolArray::HEAP_PTR, expected_heap_indexs, 3); + int expected_int32_indexs[] = { 1, 2, 3, 4 }; + CheckIterator(array, ConstantPoolArray::INT32, expected_int32_indexs, 4); +} + + +TEST(ConstantPoolPreciseGC) { + LocalContext context; + Isolate* isolate = CcTest::i_isolate(); + Heap* heap = isolate->heap(); + Factory* factory = isolate->factory(); + v8::HandleScope scope(context->GetIsolate()); + + ConstantPoolArray::NumberOfEntries small(1, 0, 0, 1); + Handle<ConstantPoolArray> array = factory->NewConstantPoolArray(small); + + // Check that the store buffer knows which entries are pointers and which are + // not. To do this, make non-pointer entries which look like new space + // pointers but are actually invalid and ensure the GC doesn't try to move + // them. + Handle<HeapObject> object = factory->NewHeapNumber(4.0); + Object* raw_ptr = *object; + // If interpreted as a pointer, this should be right inside the heap number + // which will cause a crash when trying to lookup the 'map' pointer. + intptr_t invalid_ptr = reinterpret_cast<intptr_t>(raw_ptr) + kInt32Size; + int32_t invalid_ptr_int32 = static_cast<int32_t>(invalid_ptr); + int64_t invalid_ptr_int64 = static_cast<int64_t>(invalid_ptr); + array->set(0, invalid_ptr_int64); + array->set(1, invalid_ptr_int32); + + // Ensure we perform a scan on scavenge for the constant pool's page. 
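+  // With scan-on-scavenge the GC walks this page's entries using the
+  // array's own layout info, so a type-confused integer entry would be
+  // treated as a pointer and dereferenced, crashing the test.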
+  MemoryChunk::FromAddress(array->address())->set_scan_on_scavenge(true);
   heap->CollectGarbage(NEW_SPACE);
-  Object* new_ptr = array->get_heap_ptr_entry(5);
-  CHECK_NE(*object, old_ptr);
-  CHECK_EQ(*object, new_ptr);
+
+  // Check the object was moved by GC.
+  CHECK_NE(*object, raw_ptr);
+
+  // Check the non-pointer entries weren't changed.
+  CHECK_EQ(invalid_ptr_int64, array->get_int64_entry(0));
+  CHECK_EQ(invalid_ptr_int32, array->get_int32_entry(1));
+}
+
+
+TEST(ConstantPoolCompacting) {
+  if (i::FLAG_never_compact) return;
+  i::FLAG_always_compact = true;
+  LocalContext context;
+  Isolate* isolate = CcTest::i_isolate();
+  Heap* heap = isolate->heap();
+  Factory* factory = isolate->factory();
+  v8::HandleScope scope(context->GetIsolate());
+
+  ConstantPoolArray::NumberOfEntries small(0, 0, 1, 0);
+  ConstantPoolArray::NumberOfEntries extended(0, 0, 1, 0);
+  Handle<ConstantPoolArray> array =
+      factory->NewExtendedConstantPoolArray(small, extended);
+
+  // Start a second old-space page so that the heap pointer added to the
+  // constant pool array ends up on an evacuation candidate page.
+  Page* first_page = heap->old_data_space()->anchor()->next_page();
+  {
+    HandleScope scope(isolate);
+    Handle<HeapObject> temp =
+        factory->NewFixedDoubleArray(900 * KB / kDoubleSize, TENURED);
+    CHECK(heap->InOldDataSpace(temp->address()));
+    Handle<HeapObject> heap_ptr =
+        factory->NewHeapNumber(5.0, IMMUTABLE, TENURED);
+    CHECK(heap->InOldDataSpace(heap_ptr->address()));
+    CHECK(!first_page->Contains(heap_ptr->address()));
+    array->set(0, *heap_ptr);
+    array->set(1, *heap_ptr);
+  }
+
+  // Check heap pointers are correctly updated on GC.
+  Object* old_ptr = array->get_heap_ptr_entry(0);
+  Handle<Object> object(old_ptr, isolate);
+  CHECK_EQ(old_ptr, *object);
+  CHECK_EQ(old_ptr, array->get_heap_ptr_entry(1));
+
+  // Force compacting garbage collection.
+  CHECK(FLAG_always_compact);
+  heap->CollectAllGarbage(Heap::kNoGCFlags);
+
+  CHECK_NE(old_ptr, *object);
+  CHECK_EQ(*object, array->get_heap_ptr_entry(0));
+  CHECK_EQ(*object, array->get_heap_ptr_entry(1));
 }
diff --git a/deps/v8/test/cctest/test-conversions.cc b/deps/v8/test/cctest/test-conversions.cc
index 9e194eafffb..93bed7f4de1 100644
--- a/deps/v8/test/cctest/test-conversions.cc
+++ b/deps/v8/test/cctest/test-conversions.cc
@@ -27,10 +27,10 @@
 
 #include <stdlib.h>
 
-#include "v8.h"
+#include "src/v8.h"
 
-#include "platform.h"
-#include "cctest.h"
+#include "src/base/platform/platform.h"
+#include "test/cctest/cctest.h"
 
 using namespace v8::internal;
 
@@ -172,9 +172,12 @@ TEST(TrailingJunk) {
 
 TEST(NonStrDecimalLiteral) {
   UnicodeCache uc;
-  CHECK(std::isnan(StringToDouble(&uc, " ", NO_FLAGS, OS::nan_value())));
-  CHECK(std::isnan(StringToDouble(&uc, "", NO_FLAGS, OS::nan_value())));
-  CHECK(std::isnan(StringToDouble(&uc, " ", NO_FLAGS, OS::nan_value())));
+  CHECK(std::isnan(
+      StringToDouble(&uc, " ", NO_FLAGS, v8::base::OS::nan_value())));
+  CHECK(
+      std::isnan(StringToDouble(&uc, "", NO_FLAGS, v8::base::OS::nan_value())));
+  CHECK(std::isnan(
+      StringToDouble(&uc, " ", NO_FLAGS, v8::base::OS::nan_value())));
   CHECK_EQ(0.0, StringToDouble(&uc, "", NO_FLAGS));
   CHECK_EQ(0.0, StringToDouble(&uc, " ", NO_FLAGS));
 }
diff --git a/deps/v8/test/cctest/test-cpu-profiler.cc b/deps/v8/test/cctest/test-cpu-profiler.cc
index 6cff7424b7b..6051c3fd7f6 100644
--- a/deps/v8/test/cctest/test-cpu-profiler.cc
+++ b/deps/v8/test/cctest/test-cpu-profiler.cc
@@ -27,14 +27,15 @@
 //
 // Tests of profiles generator and utilities.
-#include "v8.h" -#include "cpu-profiler-inl.h" -#include "cctest.h" -#include "platform.h" -#include "profiler-extension.h" -#include "smart-pointers.h" -#include "utils.h" -#include "../include/v8-profiler.h" +#include "src/v8.h" + +#include "include/v8-profiler.h" +#include "src/base/platform/platform.h" +#include "src/cpu-profiler-inl.h" +#include "src/smart-pointers.h" +#include "src/utils.h" +#include "test/cctest/cctest.h" +#include "test/cctest/profiler-extension.h" using i::CodeEntry; using i::CpuProfile; using i::CpuProfiler; @@ -45,7 +46,6 @@ using i::ProfileNode; using i::ProfilerEventsProcessor; using i::ScopedVector; using i::SmartPointer; -using i::TimeDelta; using i::Vector; @@ -54,7 +54,7 @@ TEST(StartStop) { CpuProfilesCollection profiles(isolate->heap()); ProfileGenerator generator(&profiles); SmartPointer<ProfilerEventsProcessor> processor(new ProfilerEventsProcessor( - &generator, NULL, TimeDelta::FromMicroseconds(100))); + &generator, NULL, v8::base::TimeDelta::FromMicroseconds(100))); processor->Start(); processor->StopSynchronously(); } @@ -104,9 +104,9 @@ i::Code* CreateCode(LocalContext* env) { i::EmbeddedVector<char, 256> script; i::EmbeddedVector<char, 32> name; - i::OS::SNPrintF(name, "function_%d", ++counter); + i::SNPrintF(name, "function_%d", ++counter); const char* name_start = name.start(); - i::OS::SNPrintF(script, + i::SNPrintF(script, "function %s() {\n" "var counter = 0;\n" "for (var i = 0; i < %d; ++i) counter += i;\n" @@ -142,7 +142,7 @@ TEST(CodeEvents) { profiles->StartProfiling("", false); ProfileGenerator generator(profiles); SmartPointer<ProfilerEventsProcessor> processor(new ProfilerEventsProcessor( - &generator, NULL, TimeDelta::FromMicroseconds(100))); + &generator, NULL, v8::base::TimeDelta::FromMicroseconds(100))); processor->Start(); CpuProfiler profiler(isolate, profiles, &generator, processor.get()); @@ -203,7 +203,7 @@ TEST(TickEvents) { profiles->StartProfiling("", false); ProfileGenerator generator(profiles); SmartPointer<ProfilerEventsProcessor> processor(new ProfilerEventsProcessor( - &generator, NULL, TimeDelta::FromMicroseconds(100))); + &generator, NULL, v8::base::TimeDelta::FromMicroseconds(100))); processor->Start(); CpuProfiler profiler(isolate, profiles, &generator, processor.get()); @@ -272,7 +272,7 @@ TEST(Issue1398) { profiles->StartProfiling("", false); ProfileGenerator generator(profiles); SmartPointer<ProfilerEventsProcessor> processor(new ProfilerEventsProcessor( - &generator, NULL, TimeDelta::FromMicroseconds(100))); + &generator, NULL, v8::base::TimeDelta::FromMicroseconds(100))); processor->Start(); CpuProfiler profiler(isolate, profiles, &generator, processor.get()); @@ -282,7 +282,7 @@ TEST(Issue1398) { sample->pc = code->address(); sample->tos = 0; sample->frames_count = i::TickSample::kMaxFramesCount; - for (int i = 0; i < sample->frames_count; ++i) { + for (unsigned i = 0; i < sample->frames_count; ++i) { sample->stack[i] = code->address(); } processor->FinishTickSample(); @@ -469,8 +469,8 @@ static const v8::CpuProfileNode* GetChild(v8::Isolate* isolate, const v8::CpuProfileNode* result = FindChild(isolate, node, name); if (!result) { char buffer[100]; - i::OS::SNPrintF(Vector<char>(buffer, ARRAY_SIZE(buffer)), - "Failed to GetChild: %s", name); + i::SNPrintF(Vector<char>(buffer, ARRAY_SIZE(buffer)), + "Failed to GetChild: %s", name); FATAL(buffer); } return result; @@ -587,6 +587,72 @@ TEST(CollectCpuProfile) { } +static const char* hot_deopt_no_frame_entry_test_source = +"function foo(a, b) {\n" +" try {\n" 
+" return a + b;\n" +" } catch (e) { }\n" +"}\n" +"function start(timeout) {\n" +" var start = Date.now();\n" +" do {\n" +" for (var i = 1; i < 1000; ++i) foo(1, i);\n" +" var duration = Date.now() - start;\n" +" } while (duration < timeout);\n" +" return duration;\n" +"}\n"; + +// Check that the profile tree for the script above will look like the +// following: +// +// [Top down]: +// 1062 0 (root) [-1] +// 1054 0 start [-1] +// 1054 1 foo [-1] +// 2 2 (program) [-1] +// 6 6 (garbage collector) [-1] +// +// The test checks no FP ranges are present in a deoptimized funcion. +// If 'foo' has no ranges the samples falling into the prologue will miss the +// 'start' function on the stack, so 'foo' will be attached to the (root). +TEST(HotDeoptNoFrameEntry) { + LocalContext env; + v8::HandleScope scope(env->GetIsolate()); + + v8::Script::Compile(v8::String::NewFromUtf8( + env->GetIsolate(), + hot_deopt_no_frame_entry_test_source))->Run(); + v8::Local<v8::Function> function = v8::Local<v8::Function>::Cast( + env->Global()->Get(v8::String::NewFromUtf8(env->GetIsolate(), "start"))); + + int32_t profiling_interval_ms = 200; + v8::Handle<v8::Value> args[] = { + v8::Integer::New(env->GetIsolate(), profiling_interval_ms) + }; + v8::CpuProfile* profile = + RunProfiler(env.local(), function, args, ARRAY_SIZE(args), 200); + function->Call(env->Global(), ARRAY_SIZE(args), args); + + const v8::CpuProfileNode* root = profile->GetTopDownRoot(); + + ScopedVector<v8::Handle<v8::String> > names(3); + names[0] = v8::String::NewFromUtf8( + env->GetIsolate(), ProfileGenerator::kGarbageCollectorEntryName); + names[1] = v8::String::NewFromUtf8(env->GetIsolate(), + ProfileGenerator::kProgramEntryName); + names[2] = v8::String::NewFromUtf8(env->GetIsolate(), "start"); + CheckChildrenNames(root, names); + + const v8::CpuProfileNode* startNode = + GetChild(env->GetIsolate(), root, "start"); + CHECK_EQ(1, startNode->GetChildrenCount()); + + GetChild(env->GetIsolate(), startNode, "foo"); + + profile->Delete(); +} + + TEST(CollectCpuProfileSamples) { LocalContext env; v8::HandleScope scope(env->GetIsolate()); @@ -724,11 +790,11 @@ class TestApiCallbacks { private: void Wait() { if (is_warming_up_) return; - double start = i::OS::TimeCurrentMillis(); + double start = v8::base::OS::TimeCurrentMillis(); double duration = 0; while (duration < min_duration_ms_) { - i::OS::Sleep(1); - duration = i::OS::TimeCurrentMillis() - start; + v8::base::OS::Sleep(1); + duration = v8::base::OS::TimeCurrentMillis() - start; } } @@ -957,23 +1023,20 @@ TEST(NativeMethodMonomorphicIC) { } -static const char* bound_function_test_source = "function foo(iterations) {\n" -" var r = 0;\n" -" for (var i = 0; i < iterations; i++) { r += i; }\n" -" return r;\n" -"}\n" -"function start(duration) {\n" -" var callback = foo.bind(this);\n" -" var start = Date.now();\n" -" while (Date.now() - start < duration) {\n" -" callback(10 * 1000);\n" -" }\n" -"}"; +static const char* bound_function_test_source = + "function foo() {\n" + " startProfiling('my_profile');\n" + "}\n" + "function start() {\n" + " var callback = foo.bind(this);\n" + " callback();\n" + "}"; TEST(BoundFunctionCall) { - LocalContext env; - v8::HandleScope scope(env->GetIsolate()); + v8::HandleScope scope(CcTest::isolate()); + v8::Local<v8::Context> env = CcTest::NewContext(PROFILER_EXTENSION); + v8::Context::Scope context_scope(env); v8::Script::Compile( v8::String::NewFromUtf8(env->GetIsolate(), bound_function_test_source)) @@ -981,12 +1044,7 @@ TEST(BoundFunctionCall) { 
   v8::Local<v8::Function> function = v8::Local<v8::Function>::Cast(
       env->Global()->Get(v8::String::NewFromUtf8(env->GetIsolate(), "start")));
 
-  int32_t duration_ms = 100;
-  v8::Handle<v8::Value> args[] = {
-    v8::Integer::New(env->GetIsolate(), duration_ms)
-  };
-  v8::CpuProfile* profile =
-      RunProfiler(env.local(), function, args, ARRAY_SIZE(args), 100);
+  v8::CpuProfile* profile = RunProfiler(env, function, NULL, 0, 0);
 
   const v8::CpuProfileNode* root = profile->GetTopDownRoot();
   ScopedVector<v8::Handle<v8::String> > names(3);
@@ -1158,9 +1216,12 @@ TEST(FunctionApplySample) {
   const v8::CpuProfileNode* testNode =
       FindChild(env->GetIsolate(), startNode, "test");
   if (testNode) {
-    ScopedVector<v8::Handle<v8::String> > names(2);
+    ScopedVector<v8::Handle<v8::String> > names(3);
     names[0] = v8::String::NewFromUtf8(env->GetIsolate(), "bar");
     names[1] = v8::String::NewFromUtf8(env->GetIsolate(), "apply");
+    // apply calls "get length" before invoking the function itself,
+    // and a sample may land in it.
+    names[2] = v8::String::NewFromUtf8(env->GetIsolate(), "get length");
     CheckChildrenNames(testNode, names);
   }
 
@@ -1178,28 +1239,84 @@ }
 
-static const char* js_native_js_test_source =
-"var is_profiling = false;\n"
-"function foo(iterations) {\n"
-"  if (!is_profiling) {\n"
-"    is_profiling = true;\n"
+static const char* cpu_profiler_deep_stack_test_source =
+"function foo(n) {\n"
+"  if (n)\n"
+"    foo(n - 1);\n"
+"  else\n"
 "    startProfiling('my_profile');\n"
-"  }\n"
-"  var r = 0;\n"
-"  for (var i = 0; i < iterations; i++) { r += i; }\n"
-"  return r;\n"
 "}\n"
-"function bar(iterations) {\n"
-"  try { foo(iterations); } catch(e) {}\n"
-"}\n"
-"function start(duration) {\n"
-"  var start = Date.now();\n"
-"  while (Date.now() - start < duration) {\n"
-"    try {\n"
-"      CallJsFunction(bar, 10 * 1000);\n"
-"    } catch(e) {}\n"
-"  }\n"
-"}";
+"function start() {\n"
+"  foo(250);\n"
+"}\n";
+
+
+// Check a deep stack
+//
+// [Top down]:
+//    0  (root) 0 #1
+//    2    (program) 0 #2
+//    0    start 21 #3 no reason
+//    0      foo 21 #4 no reason
+//    0        foo 21 #5 no reason
+//                ....
+//    0          foo 21 #253 no reason
+//    1            startProfiling 0 #254
+TEST(CpuProfileDeepStack) {
+  v8::HandleScope scope(CcTest::isolate());
+  v8::Local<v8::Context> env = CcTest::NewContext(PROFILER_EXTENSION);
+  v8::Context::Scope context_scope(env);
+
+  v8::Script::Compile(v8::String::NewFromUtf8(
+      env->GetIsolate(), cpu_profiler_deep_stack_test_source))->Run();
+  v8::Local<v8::Function> function = v8::Local<v8::Function>::Cast(
+      env->Global()->Get(v8::String::NewFromUtf8(env->GetIsolate(), "start")));
+
+  v8::CpuProfiler* cpu_profiler = env->GetIsolate()->GetCpuProfiler();
+  v8::Local<v8::String> profile_name =
+      v8::String::NewFromUtf8(env->GetIsolate(), "my_profile");
+  function->Call(env->Global(), 0, NULL);
+  v8::CpuProfile* profile = cpu_profiler->StopProfiling(profile_name);
+  CHECK_NE(NULL, profile);
+  // Dump collected profile to have a better diagnostic in case of failure.
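+  // (The cast is needed because Print() is only available on the internal
+  // i::CpuProfile class, not on the public v8::CpuProfile API.)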
+  reinterpret_cast<i::CpuProfile*>(profile)->Print();
+
+  const v8::CpuProfileNode* root = profile->GetTopDownRoot();
+  {
+    ScopedVector<v8::Handle<v8::String> > names(3);
+    names[0] = v8::String::NewFromUtf8(
+        env->GetIsolate(), ProfileGenerator::kGarbageCollectorEntryName);
+    names[1] = v8::String::NewFromUtf8(env->GetIsolate(),
+                                       ProfileGenerator::kProgramEntryName);
+    names[2] = v8::String::NewFromUtf8(env->GetIsolate(), "start");
+    CheckChildrenNames(root, names);
+  }
+
+  const v8::CpuProfileNode* node =
+      GetChild(env->GetIsolate(), root, "start");
+  for (int i = 0; i < 250; ++i) {
+    node = GetChild(env->GetIsolate(), node, "foo");
+  }
+  // TODO(alph):
+  // In theory there should be one more 'foo' node and a 'startProfiling' node,
+  // but due to unstable top frame extraction these might be missing.
+
+  profile->Delete();
+}
+
+
+static const char* js_native_js_test_source =
+    "function foo() {\n"
+    "  startProfiling('my_profile');\n"
+    "}\n"
+    "function bar() {\n"
+    "  try { foo(); } catch(e) {}\n"
+    "}\n"
+    "function start() {\n"
+    "  try {\n"
+    "    CallJsFunction(bar);\n"
+    "  } catch(e) {}\n"
+    "}";
 
 static void CallJsFunction(const v8::FunctionCallbackInfo<v8::Value>& info) {
   v8::Handle<v8::Function> function = info[0].As<v8::Function>();
@@ -1232,12 +1349,7 @@ TEST(JsNativeJsSample) {
   v8::Local<v8::Function> function = v8::Local<v8::Function>::Cast(
       env->Global()->Get(v8::String::NewFromUtf8(env->GetIsolate(), "start")));
 
-  int32_t duration_ms = 20;
-  v8::Handle<v8::Value> args[] = {
-    v8::Integer::New(env->GetIsolate(), duration_ms)
-  };
-  v8::CpuProfile* profile =
-      RunProfiler(env, function, args, ARRAY_SIZE(args), 10);
+  v8::CpuProfile* profile = RunProfiler(env, function, NULL, 0, 0);
 
   const v8::CpuProfileNode* root = profile->GetTopDownRoot();
   {
@@ -1268,28 +1380,18 @@ TEST(JsNativeJsSample) {
 
 
 static const char* js_native_js_runtime_js_test_source =
-"var is_profiling = false;\n"
-"function foo(iterations) {\n"
-"  if (!is_profiling) {\n"
-"    is_profiling = true;\n"
-"    startProfiling('my_profile');\n"
-"  }\n"
-"  var r = 0;\n"
-"  for (var i = 0; i < iterations; i++) { r += i; }\n"
-"  return r;\n"
-"}\n"
-"var bound = foo.bind(this);\n"
-"function bar(iterations) {\n"
-"  try { bound(iterations); } catch(e) {}\n"
-"}\n"
-"function start(duration) {\n"
-"  var start = Date.now();\n"
-"  while (Date.now() - start < duration) {\n"
-"    try {\n"
-"      CallJsFunction(bar, 10 * 1000);\n"
-"    } catch(e) {}\n"
-"  }\n"
-"}";
+    "function foo() {\n"
+    "  startProfiling('my_profile');\n"
+    "}\n"
+    "var bound = foo.bind(this);\n"
+    "function bar() {\n"
+    "  try { bound(); } catch(e) {}\n"
+    "}\n"
+    "function start() {\n"
+    "  try {\n"
+    "    CallJsFunction(bar);\n"
+    "  } catch(e) {}\n"
+    "}";
 
 
 // [Top down]:
@@ -1317,12 +1419,7 @@ TEST(JsNativeJsRuntimeJsSample) {
   v8::Local<v8::Function> function = v8::Local<v8::Function>::Cast(
       env->Global()->Get(v8::String::NewFromUtf8(env->GetIsolate(), "start")));
 
-  int32_t duration_ms = 20;
-  v8::Handle<v8::Value> args[] = {
-    v8::Integer::New(env->GetIsolate(), duration_ms)
-  };
-  v8::CpuProfile* profile =
-      RunProfiler(env, function, args, ARRAY_SIZE(args), 10);
+  v8::CpuProfile* profile = RunProfiler(env, function, NULL, 0, 0);
 
   const v8::CpuProfileNode* root = profile->GetTopDownRoot();
   ScopedVector<v8::Handle<v8::String> > names(3);
@@ -1343,7 +1440,11 @@ TEST(JsNativeJsRuntimeJsSample) {
   const v8::CpuProfileNode* barNode =
       GetChild(env->GetIsolate(), nativeFunctionNode, "bar");
-  CHECK_EQ(1, barNode->GetChildrenCount());
+  // The child is in fact a bound foo.
+ // A bound function has a wrapper that may make calls to + // other functions e.g. "get length". + CHECK_LE(1, barNode->GetChildrenCount()); + CHECK_GE(2, barNode->GetChildrenCount()); GetChild(env->GetIsolate(), barNode, "foo"); profile->Delete(); @@ -1351,32 +1452,25 @@ TEST(JsNativeJsRuntimeJsSample) { static void CallJsFunction2(const v8::FunctionCallbackInfo<v8::Value>& info) { + v8::base::OS::Print("In CallJsFunction2\n"); CallJsFunction(info); } static const char* js_native1_js_native2_js_test_source = -"var is_profiling = false;\n" -"function foo(iterations) {\n" -" if (!is_profiling) {\n" -" is_profiling = true;\n" -" startProfiling('my_profile');\n" -" }\n" -" var r = 0;\n" -" for (var i = 0; i < iterations; i++) { r += i; }\n" -" return r;\n" -"}\n" -"function bar(iterations) {\n" -" CallJsFunction2(foo, iterations);\n" -"}\n" -"function start(duration) {\n" -" var start = Date.now();\n" -" while (Date.now() - start < duration) {\n" -" try {\n" -" CallJsFunction1(bar, 10 * 1000);\n" -" } catch(e) {}\n" -" }\n" -"}"; + "function foo() {\n" + " try {\n" + " startProfiling('my_profile');\n" + " } catch(e) {}\n" + "}\n" + "function bar() {\n" + " CallJsFunction2(foo);\n" + "}\n" + "function start() {\n" + " try {\n" + " CallJsFunction1(bar);\n" + " } catch(e) {}\n" + "}"; // [Top down]: @@ -1411,12 +1505,7 @@ TEST(JsNative1JsNative2JsSample) { v8::Local<v8::Function> function = v8::Local<v8::Function>::Cast( env->Global()->Get(v8::String::NewFromUtf8(env->GetIsolate(), "start"))); - int32_t duration_ms = 20; - v8::Handle<v8::Value> args[] = { - v8::Integer::New(env->GetIsolate(), duration_ms) - }; - v8::CpuProfile* profile = - RunProfiler(env, function, args, ARRAY_SIZE(args), 10); + v8::CpuProfile* profile = RunProfiler(env, function, NULL, 0, 0); const v8::CpuProfileNode* root = profile->GetTopDownRoot(); ScopedVector<v8::Handle<v8::String> > names(3); @@ -1540,25 +1629,23 @@ TEST(FunctionDetails) { const_cast<v8::CpuProfileNode*>(current))->Print(0); // The tree should look like this: // 0 (root) 0 #1 - // 0 (anonymous function) 19 #2 no reason script_b:1 + // 0 "" 19 #2 no reason script_b:1 // 0 baz 19 #3 TryCatchStatement script_b:3 // 0 foo 18 #4 TryCatchStatement script_a:2 // 1 bar 18 #5 no reason script_a:3 const v8::CpuProfileNode* root = profile->GetTopDownRoot(); - const v8::CpuProfileNode* script = GetChild(env->GetIsolate(), root, - ProfileGenerator::kAnonymousFunctionName); - CheckFunctionDetails(env->GetIsolate(), script, - ProfileGenerator::kAnonymousFunctionName, "script_b", - script_b->GetId(), 1, 1); + const v8::CpuProfileNode* script = GetChild(env->GetIsolate(), root, ""); + CheckFunctionDetails(env->GetIsolate(), script, "", "script_b", + script_b->GetUnboundScript()->GetId(), 1, 1); const v8::CpuProfileNode* baz = GetChild(env->GetIsolate(), script, "baz"); CheckFunctionDetails(env->GetIsolate(), baz, "baz", "script_b", - script_b->GetId(), 3, 16); + script_b->GetUnboundScript()->GetId(), 3, 16); const v8::CpuProfileNode* foo = GetChild(env->GetIsolate(), baz, "foo"); CheckFunctionDetails(env->GetIsolate(), foo, "foo", "script_a", - script_a->GetId(), 2, 1); + script_a->GetUnboundScript()->GetId(), 2, 1); const v8::CpuProfileNode* bar = GetChild(env->GetIsolate(), foo, "bar"); CheckFunctionDetails(env->GetIsolate(), bar, "bar", "script_a", - script_a->GetId(), 3, 14); + script_a->GetUnboundScript()->GetId(), 3, 14); } diff --git a/deps/v8/test/cctest/test-dataflow.cc b/deps/v8/test/cctest/test-dataflow.cc index 532c9207b69..fc1a7fa13fc 100644 --- 
a/deps/v8/test/cctest/test-dataflow.cc +++ b/deps/v8/test/cctest/test-dataflow.cc @@ -27,10 +27,10 @@ #include <stdlib.h> -#include "v8.h" +#include "src/v8.h" -#include "data-flow.h" -#include "cctest.h" +#include "src/data-flow.h" +#include "test/cctest/cctest.h" using namespace v8::internal; diff --git a/deps/v8/test/cctest/test-date.cc b/deps/v8/test/cctest/test-date.cc index 5190729fad1..f2187955add 100644 --- a/deps/v8/test/cctest/test-date.cc +++ b/deps/v8/test/cctest/test-date.cc @@ -25,11 +25,11 @@ // (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE // OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. -#include "v8.h" +#include "src/v8.h" -#include "global-handles.h" -#include "snapshot.h" -#include "cctest.h" +#include "src/global-handles.h" +#include "src/snapshot.h" +#include "test/cctest/cctest.h" using namespace v8::internal; diff --git a/deps/v8/test/cctest/test-debug.cc b/deps/v8/test/cctest/test-debug.cc index 85e4512ea5a..5c0b0f392d5 100644 --- a/deps/v8/test/cctest/test-debug.cc +++ b/deps/v8/test/cctest/test-debug.cc @@ -27,28 +27,27 @@ #include <stdlib.h> -#include "v8.h" - -#include "api.h" -#include "cctest.h" -#include "compilation-cache.h" -#include "debug.h" -#include "deoptimizer.h" -#include "frames.h" -#include "platform.h" -#include "platform/condition-variable.h" -#include "platform/socket.h" -#include "stub-cache.h" -#include "utils.h" - - -using ::v8::internal::Mutex; -using ::v8::internal::LockGuard; -using ::v8::internal::ConditionVariable; -using ::v8::internal::Semaphore; +#include "src/v8.h" + +#include "src/api.h" +#include "src/base/platform/condition-variable.h" +#include "src/base/platform/platform.h" +#include "src/compilation-cache.h" +#include "src/debug.h" +#include "src/deoptimizer.h" +#include "src/frames.h" +#include "src/stub-cache.h" +#include "src/utils.h" +#include "test/cctest/cctest.h" + + +using ::v8::base::Mutex; +using ::v8::base::LockGuard; +using ::v8::base::ConditionVariable; +using ::v8::base::OS; +using ::v8::base::Semaphore; using ::v8::internal::EmbeddedVector; using ::v8::internal::Object; -using ::v8::internal::OS; using ::v8::internal::Handle; using ::v8::internal::Heap; using ::v8::internal::JSGlobalProxy; @@ -99,20 +98,19 @@ class DebugLocalContext { v8::internal::Isolate* isolate = reinterpret_cast<v8::internal::Isolate*>(context_->GetIsolate()); v8::internal::Factory* factory = isolate->factory(); - v8::internal::Debug* debug = isolate->debug(); // Expose the debug context global object in the global object for testing. 
- debug->Load(); - debug->debug_context()->set_security_token( + CHECK(isolate->debug()->Load()); + Handle<v8::internal::Context> debug_context = + isolate->debug()->debug_context(); + debug_context->set_security_token( v8::Utils::OpenHandle(*context_)->security_token()); Handle<JSGlobalProxy> global(Handle<JSGlobalProxy>::cast( v8::Utils::OpenHandle(*context_->Global()))); Handle<v8::internal::String> debug_string = factory->InternalizeOneByteString(STATIC_ASCII_VECTOR("debug")); - v8::internal::Runtime::SetObjectProperty(isolate, global, debug_string, - Handle<Object>(debug->debug_context()->global_proxy(), isolate), - DONT_ENUM, - ::v8::internal::SLOPPY).Check(); + v8::internal::Runtime::DefineObjectProperty(global, debug_string, + handle(debug_context->global_proxy(), isolate), DONT_ENUM).Check(); } private: @@ -182,9 +180,9 @@ static int SetBreakPointFromJS(v8::Isolate* isolate, const char* function_name, int line, int position) { EmbeddedVector<char, SMALL_STRING_BUFFER_SIZE> buffer; - OS::SNPrintF(buffer, - "debug.Debug.setBreakPoint(%s,%d,%d)", - function_name, line, position); + SNPrintF(buffer, + "debug.Debug.setBreakPoint(%s,%d,%d)", + function_name, line, position); buffer[SMALL_STRING_BUFFER_SIZE - 1] = '\0'; v8::Handle<v8::String> str = v8::String::NewFromUtf8(isolate, buffer.start()); return v8::Script::Compile(str)->Run()->Int32Value(); @@ -197,14 +195,14 @@ static int SetScriptBreakPointByIdFromJS(v8::Isolate* isolate, int script_id, EmbeddedVector<char, SMALL_STRING_BUFFER_SIZE> buffer; if (column >= 0) { // Column specified set script break point on precise location. - OS::SNPrintF(buffer, - "debug.Debug.setScriptBreakPointById(%d,%d,%d)", - script_id, line, column); + SNPrintF(buffer, + "debug.Debug.setScriptBreakPointById(%d,%d,%d)", + script_id, line, column); } else { // Column not specified set script break point on line. - OS::SNPrintF(buffer, - "debug.Debug.setScriptBreakPointById(%d,%d)", - script_id, line); + SNPrintF(buffer, + "debug.Debug.setScriptBreakPointById(%d,%d)", + script_id, line); } buffer[SMALL_STRING_BUFFER_SIZE - 1] = '\0'; { @@ -226,14 +224,14 @@ static int SetScriptBreakPointByNameFromJS(v8::Isolate* isolate, EmbeddedVector<char, SMALL_STRING_BUFFER_SIZE> buffer; if (column >= 0) { // Column specified set script break point on precise location. - OS::SNPrintF(buffer, - "debug.Debug.setScriptBreakPointByName(\"%s\",%d,%d)", - script_name, line, column); + SNPrintF(buffer, + "debug.Debug.setScriptBreakPointByName(\"%s\",%d,%d)", + script_name, line, column); } else { // Column not specified set script break point on line. 
- OS::SNPrintF(buffer, - "debug.Debug.setScriptBreakPointByName(\"%s\",%d)", - script_name, line); + SNPrintF(buffer, + "debug.Debug.setScriptBreakPointByName(\"%s\",%d)", + script_name, line); } buffer[SMALL_STRING_BUFFER_SIZE - 1] = '\0'; { @@ -260,9 +258,9 @@ static void ClearBreakPoint(int break_point) { static void ClearBreakPointFromJS(v8::Isolate* isolate, int break_point_number) { EmbeddedVector<char, SMALL_STRING_BUFFER_SIZE> buffer; - OS::SNPrintF(buffer, - "debug.Debug.clearBreakPoint(%d)", - break_point_number); + SNPrintF(buffer, + "debug.Debug.clearBreakPoint(%d)", + break_point_number); buffer[SMALL_STRING_BUFFER_SIZE - 1] = '\0'; v8::Script::Compile(v8::String::NewFromUtf8(isolate, buffer.start()))->Run(); } @@ -271,9 +269,9 @@ static void ClearBreakPointFromJS(v8::Isolate* isolate, static void EnableScriptBreakPointFromJS(v8::Isolate* isolate, int break_point_number) { EmbeddedVector<char, SMALL_STRING_BUFFER_SIZE> buffer; - OS::SNPrintF(buffer, - "debug.Debug.enableScriptBreakPoint(%d)", - break_point_number); + SNPrintF(buffer, + "debug.Debug.enableScriptBreakPoint(%d)", + break_point_number); buffer[SMALL_STRING_BUFFER_SIZE - 1] = '\0'; v8::Script::Compile(v8::String::NewFromUtf8(isolate, buffer.start()))->Run(); } @@ -282,9 +280,9 @@ static void EnableScriptBreakPointFromJS(v8::Isolate* isolate, static void DisableScriptBreakPointFromJS(v8::Isolate* isolate, int break_point_number) { EmbeddedVector<char, SMALL_STRING_BUFFER_SIZE> buffer; - OS::SNPrintF(buffer, - "debug.Debug.disableScriptBreakPoint(%d)", - break_point_number); + SNPrintF(buffer, + "debug.Debug.disableScriptBreakPoint(%d)", + break_point_number); buffer[SMALL_STRING_BUFFER_SIZE - 1] = '\0'; v8::Script::Compile(v8::String::NewFromUtf8(isolate, buffer.start()))->Run(); } @@ -294,9 +292,9 @@ static void ChangeScriptBreakPointConditionFromJS(v8::Isolate* isolate, int break_point_number, const char* condition) { EmbeddedVector<char, SMALL_STRING_BUFFER_SIZE> buffer; - OS::SNPrintF(buffer, - "debug.Debug.changeScriptBreakPointCondition(%d, \"%s\")", - break_point_number, condition); + SNPrintF(buffer, + "debug.Debug.changeScriptBreakPointCondition(%d, \"%s\")", + break_point_number, condition); buffer[SMALL_STRING_BUFFER_SIZE - 1] = '\0'; v8::Script::Compile(v8::String::NewFromUtf8(isolate, buffer.start()))->Run(); } @@ -306,9 +304,9 @@ static void ChangeScriptBreakPointIgnoreCountFromJS(v8::Isolate* isolate, int break_point_number, int ignoreCount) { EmbeddedVector<char, SMALL_STRING_BUFFER_SIZE> buffer; - OS::SNPrintF(buffer, - "debug.Debug.changeScriptBreakPointIgnoreCount(%d, %d)", - break_point_number, ignoreCount); + SNPrintF(buffer, + "debug.Debug.changeScriptBreakPointIgnoreCount(%d, %d)", + break_point_number, ignoreCount); buffer[SMALL_STRING_BUFFER_SIZE - 1] = '\0'; v8::Script::Compile(v8::String::NewFromUtf8(isolate, buffer.start()))->Run(); } @@ -422,12 +420,6 @@ void CheckDebuggerUnloaded(bool check_functions) { } -void ForceUnloadDebugger() { - CcTest::i_isolate()->debugger()->never_unload_debugger_ = false; - CcTest::i_isolate()->debugger()->UnloadDebugger(); -} - - } } // namespace v8::internal @@ -1094,7 +1086,7 @@ TEST(BreakPointICStore) { DebugLocalContext env; v8::HandleScope scope(env->GetIsolate()); - v8::Debug::SetDebugEventListener2(DebugEventBreakPointHitCount); + v8::Debug::SetDebugEventListener(DebugEventBreakPointHitCount); v8::Script::Compile(v8::String::NewFromUtf8(env->GetIsolate(), "function foo(){bar=0;}"))->Run(); v8::Local<v8::Function> foo = v8::Local<v8::Function>::Cast( 
@@ -1116,7 +1108,7 @@ TEST(BreakPointICStore) { foo->Call(env->Global(), 0, NULL); CHECK_EQ(2, break_point_hit_count); - v8::Debug::SetDebugEventListener2(NULL); + v8::Debug::SetDebugEventListener(NULL); CheckDebuggerUnloaded(); } @@ -1126,7 +1118,7 @@ TEST(BreakPointICLoad) { break_point_hit_count = 0; DebugLocalContext env; v8::HandleScope scope(env->GetIsolate()); - v8::Debug::SetDebugEventListener2(DebugEventBreakPointHitCount); + v8::Debug::SetDebugEventListener(DebugEventBreakPointHitCount); v8::Script::Compile(v8::String::NewFromUtf8(env->GetIsolate(), "bar=1")) ->Run(); v8::Script::Compile( @@ -1151,7 +1143,7 @@ TEST(BreakPointICLoad) { foo->Call(env->Global(), 0, NULL); CHECK_EQ(2, break_point_hit_count); - v8::Debug::SetDebugEventListener2(NULL); + v8::Debug::SetDebugEventListener(NULL); CheckDebuggerUnloaded(); } @@ -1161,7 +1153,7 @@ TEST(BreakPointICCall) { break_point_hit_count = 0; DebugLocalContext env; v8::HandleScope scope(env->GetIsolate()); - v8::Debug::SetDebugEventListener2(DebugEventBreakPointHitCount); + v8::Debug::SetDebugEventListener(DebugEventBreakPointHitCount); v8::Script::Compile( v8::String::NewFromUtf8(env->GetIsolate(), "function bar(){}"))->Run(); v8::Script::Compile(v8::String::NewFromUtf8(env->GetIsolate(), @@ -1185,7 +1177,7 @@ TEST(BreakPointICCall) { foo->Call(env->Global(), 0, NULL); CHECK_EQ(2, break_point_hit_count); - v8::Debug::SetDebugEventListener2(NULL); + v8::Debug::SetDebugEventListener(NULL); CheckDebuggerUnloaded(); } @@ -1195,7 +1187,7 @@ TEST(BreakPointICCallWithGC) { break_point_hit_count = 0; DebugLocalContext env; v8::HandleScope scope(env->GetIsolate()); - v8::Debug::SetDebugEventListener2(DebugEventBreakPointCollectGarbage); + v8::Debug::SetDebugEventListener(DebugEventBreakPointCollectGarbage); v8::Script::Compile( v8::String::NewFromUtf8(env->GetIsolate(), "function bar(){return 1;}")) ->Run(); @@ -1221,7 +1213,7 @@ TEST(BreakPointICCallWithGC) { foo->Call(env->Global(), 0, NULL); CHECK_EQ(2, break_point_hit_count); - v8::Debug::SetDebugEventListener2(NULL); + v8::Debug::SetDebugEventListener(NULL); CheckDebuggerUnloaded(); } @@ -1231,7 +1223,7 @@ TEST(BreakPointConstructCallWithGC) { break_point_hit_count = 0; DebugLocalContext env; v8::HandleScope scope(env->GetIsolate()); - v8::Debug::SetDebugEventListener2(DebugEventBreakPointCollectGarbage); + v8::Debug::SetDebugEventListener(DebugEventBreakPointCollectGarbage); v8::Script::Compile(v8::String::NewFromUtf8(env->GetIsolate(), "function bar(){ this.x = 1;}")) ->Run(); @@ -1257,7 +1249,7 @@ TEST(BreakPointConstructCallWithGC) { foo->Call(env->Global(), 0, NULL); CHECK_EQ(2, break_point_hit_count); - v8::Debug::SetDebugEventListener2(NULL); + v8::Debug::SetDebugEventListener(NULL); CheckDebuggerUnloaded(); } @@ -1278,7 +1270,7 @@ TEST(BreakPointReturn) { "frame_source_column"); - v8::Debug::SetDebugEventListener2(DebugEventBreakPointHitCount); + v8::Debug::SetDebugEventListener(DebugEventBreakPointHitCount); v8::Script::Compile( v8::String::NewFromUtf8(env->GetIsolate(), "function foo(){}"))->Run(); v8::Local<v8::Function> foo = v8::Local<v8::Function>::Cast( @@ -1304,7 +1296,7 @@ TEST(BreakPointReturn) { foo->Call(env->Global(), 0, NULL); CHECK_EQ(2, break_point_hit_count); - v8::Debug::SetDebugEventListener2(NULL); + v8::Debug::SetDebugEventListener(NULL); CheckDebuggerUnloaded(); } @@ -1327,7 +1319,7 @@ TEST(GCDuringBreakPointProcessing) { DebugLocalContext env; v8::HandleScope scope(env->GetIsolate()); - v8::Debug::SetDebugEventListener2(DebugEventBreakPointCollectGarbage); + 
v8::Debug::SetDebugEventListener(DebugEventBreakPointCollectGarbage); v8::Local<v8::Function> foo; // Test IC store break point with garbage collection. @@ -1355,7 +1347,7 @@ TEST(GCDuringBreakPointProcessing) { SetBreakPoint(foo, 0); CallWithBreakPoints(env->Global(), foo, 1, 25); - v8::Debug::SetDebugEventListener2(NULL); + v8::Debug::SetDebugEventListener(NULL); CheckDebuggerUnloaded(); } @@ -1390,7 +1382,7 @@ TEST(BreakPointSurviveGC) { DebugLocalContext env; v8::HandleScope scope(env->GetIsolate()); - v8::Debug::SetDebugEventListener2(DebugEventBreakPointHitCount); + v8::Debug::SetDebugEventListener(DebugEventBreakPointHitCount); v8::Local<v8::Function> foo; // Test IC store break point with garbage collection. @@ -1436,7 +1428,7 @@ TEST(BreakPointSurviveGC) { CallAndGC(env->Global(), foo); - v8::Debug::SetDebugEventListener2(NULL); + v8::Debug::SetDebugEventListener(NULL); CheckDebuggerUnloaded(); } @@ -1448,7 +1440,7 @@ TEST(BreakPointThroughJavaScript) { v8::HandleScope scope(env->GetIsolate()); env.ExposeDebug(); - v8::Debug::SetDebugEventListener2(DebugEventBreakPointHitCount); + v8::Debug::SetDebugEventListener(DebugEventBreakPointHitCount); v8::Script::Compile( v8::String::NewFromUtf8(env->GetIsolate(), "function bar(){}"))->Run(); v8::Script::Compile(v8::String::NewFromUtf8(env->GetIsolate(), @@ -1490,7 +1482,7 @@ TEST(BreakPointThroughJavaScript) { foo->Run(); CHECK_EQ(8, break_point_hit_count); - v8::Debug::SetDebugEventListener2(NULL); + v8::Debug::SetDebugEventListener(NULL); CheckDebuggerUnloaded(); // Make sure that the break point numbers are consecutive. @@ -1507,7 +1499,7 @@ TEST(ScriptBreakPointByNameThroughJavaScript) { v8::HandleScope scope(env->GetIsolate()); env.ExposeDebug(); - v8::Debug::SetDebugEventListener2(DebugEventBreakPointHitCount); + v8::Debug::SetDebugEventListener(DebugEventBreakPointHitCount); v8::Local<v8::String> script = v8::String::NewFromUtf8( env->GetIsolate(), @@ -1592,7 +1584,7 @@ TEST(ScriptBreakPointByNameThroughJavaScript) { g->Call(env->Global(), 0, NULL); CHECK_EQ(0, break_point_hit_count); - v8::Debug::SetDebugEventListener2(NULL); + v8::Debug::SetDebugEventListener(NULL); CheckDebuggerUnloaded(); // Make sure that the break point numbers are consecutive. @@ -1611,7 +1603,7 @@ TEST(ScriptBreakPointByIdThroughJavaScript) { v8::HandleScope scope(env->GetIsolate()); env.ExposeDebug(); - v8::Debug::SetDebugEventListener2(DebugEventBreakPointHitCount); + v8::Debug::SetDebugEventListener(DebugEventBreakPointHitCount); v8::Local<v8::String> source = v8::String::NewFromUtf8( env->GetIsolate(), @@ -1644,7 +1636,7 @@ TEST(ScriptBreakPointByIdThroughJavaScript) { env->Global()->Get(v8::String::NewFromUtf8(env->GetIsolate(), "g"))); // Get the script id knowing that internally it is a 32 integer. - int script_id = script->GetId(); + int script_id = script->GetUnboundScript()->GetId(); // Call f and g without break points. break_point_hit_count = 0; @@ -1700,7 +1692,7 @@ TEST(ScriptBreakPointByIdThroughJavaScript) { g->Call(env->Global(), 0, NULL); CHECK_EQ(0, break_point_hit_count); - v8::Debug::SetDebugEventListener2(NULL); + v8::Debug::SetDebugEventListener(NULL); CheckDebuggerUnloaded(); // Make sure that the break point numbers are consecutive. 
@@ -1720,7 +1712,7 @@ TEST(EnableDisableScriptBreakPoint) { v8::HandleScope scope(env->GetIsolate()); env.ExposeDebug(); - v8::Debug::SetDebugEventListener2(DebugEventBreakPointHitCount); + v8::Debug::SetDebugEventListener(DebugEventBreakPointHitCount); v8::Local<v8::String> script = v8::String::NewFromUtf8( env->GetIsolate(), @@ -1766,7 +1758,7 @@ TEST(EnableDisableScriptBreakPoint) { f->Call(env->Global(), 0, NULL); CHECK_EQ(3, break_point_hit_count); - v8::Debug::SetDebugEventListener2(NULL); + v8::Debug::SetDebugEventListener(NULL); CheckDebuggerUnloaded(); } @@ -1778,7 +1770,7 @@ TEST(ConditionalScriptBreakPoint) { v8::HandleScope scope(env->GetIsolate()); env.ExposeDebug(); - v8::Debug::SetDebugEventListener2(DebugEventBreakPointHitCount); + v8::Debug::SetDebugEventListener(DebugEventBreakPointHitCount); v8::Local<v8::String> script = v8::String::NewFromUtf8( env->GetIsolate(), @@ -1829,7 +1821,7 @@ TEST(ConditionalScriptBreakPoint) { } CHECK_EQ(5, break_point_hit_count); - v8::Debug::SetDebugEventListener2(NULL); + v8::Debug::SetDebugEventListener(NULL); CheckDebuggerUnloaded(); } @@ -1841,7 +1833,7 @@ TEST(ScriptBreakPointIgnoreCount) { v8::HandleScope scope(env->GetIsolate()); env.ExposeDebug(); - v8::Debug::SetDebugEventListener2(DebugEventBreakPointHitCount); + v8::Debug::SetDebugEventListener(DebugEventBreakPointHitCount); v8::Local<v8::String> script = v8::String::NewFromUtf8( env->GetIsolate(), @@ -1885,7 +1877,7 @@ TEST(ScriptBreakPointIgnoreCount) { } CHECK_EQ(5, break_point_hit_count); - v8::Debug::SetDebugEventListener2(NULL); + v8::Debug::SetDebugEventListener(NULL); CheckDebuggerUnloaded(); } @@ -1897,7 +1889,7 @@ TEST(ScriptBreakPointReload) { v8::HandleScope scope(env->GetIsolate()); env.ExposeDebug(); - v8::Debug::SetDebugEventListener2(DebugEventBreakPointHitCount); + v8::Debug::SetDebugEventListener(DebugEventBreakPointHitCount); v8::Local<v8::Function> f; v8::Local<v8::String> script = v8::String::NewFromUtf8( @@ -1949,7 +1941,7 @@ TEST(ScriptBreakPointReload) { f->Call(env->Global(), 0, NULL); CHECK_EQ(1, break_point_hit_count); - v8::Debug::SetDebugEventListener2(NULL); + v8::Debug::SetDebugEventListener(NULL); CheckDebuggerUnloaded(); } @@ -1961,7 +1953,7 @@ TEST(ScriptBreakPointMultiple) { v8::HandleScope scope(env->GetIsolate()); env.ExposeDebug(); - v8::Debug::SetDebugEventListener2(DebugEventBreakPointHitCount); + v8::Debug::SetDebugEventListener(DebugEventBreakPointHitCount); v8::Local<v8::Function> f; v8::Local<v8::String> script_f = @@ -2018,7 +2010,7 @@ TEST(ScriptBreakPointMultiple) { g->Call(env->Global(), 0, NULL); CHECK_EQ(2, break_point_hit_count); - v8::Debug::SetDebugEventListener2(NULL); + v8::Debug::SetDebugEventListener(NULL); CheckDebuggerUnloaded(); } @@ -2030,7 +2022,7 @@ TEST(ScriptBreakPointLineOffset) { v8::HandleScope scope(env->GetIsolate()); env.ExposeDebug(); - v8::Debug::SetDebugEventListener2(DebugEventBreakPointHitCount); + v8::Debug::SetDebugEventListener(DebugEventBreakPointHitCount); v8::Local<v8::Function> f; v8::Local<v8::String> script = v8::String::NewFromUtf8( @@ -2078,7 +2070,7 @@ TEST(ScriptBreakPointLineOffset) { f->Call(env->Global(), 0, NULL); CHECK_EQ(1, break_point_hit_count); - v8::Debug::SetDebugEventListener2(NULL); + v8::Debug::SetDebugEventListener(NULL); CheckDebuggerUnloaded(); } @@ -2094,7 +2086,7 @@ TEST(ScriptBreakPointLine) { frame_function_name_source, "frame_function_name"); - v8::Debug::SetDebugEventListener2(DebugEventBreakPointHitCount); + 
v8::Debug::SetDebugEventListener(DebugEventBreakPointHitCount); v8::Local<v8::Function> f; v8::Local<v8::Function> g; @@ -2194,7 +2186,7 @@ TEST(ScriptBreakPointLine) { v8::Script::Compile(script, &origin)->Run(); CHECK_EQ(0, break_point_hit_count); - v8::Debug::SetDebugEventListener2(NULL); + v8::Debug::SetDebugEventListener(NULL); CheckDebuggerUnloaded(); } @@ -2205,7 +2197,7 @@ TEST(ScriptBreakPointLineTopLevel) { v8::HandleScope scope(env->GetIsolate()); env.ExposeDebug(); - v8::Debug::SetDebugEventListener2(DebugEventBreakPointHitCount); + v8::Debug::SetDebugEventListener(DebugEventBreakPointHitCount); v8::Local<v8::String> script = v8::String::NewFromUtf8(env->GetIsolate(), @@ -2241,7 +2233,7 @@ TEST(ScriptBreakPointLineTopLevel) { env->Global()->Get(v8::String::NewFromUtf8(env->GetIsolate(), "f"))); CHECK_EQ(0, break_point_hit_count); - v8::Debug::SetDebugEventListener2(NULL); + v8::Debug::SetDebugEventListener(NULL); CheckDebuggerUnloaded(); } @@ -2253,7 +2245,7 @@ TEST(ScriptBreakPointTopLevelCrash) { v8::HandleScope scope(env->GetIsolate()); env.ExposeDebug(); - v8::Debug::SetDebugEventListener2(DebugEventBreakPointHitCount); + v8::Debug::SetDebugEventListener(DebugEventBreakPointHitCount); v8::Local<v8::String> script_source = v8::String::NewFromUtf8(env->GetIsolate(), @@ -2276,7 +2268,7 @@ TEST(ScriptBreakPointTopLevelCrash) { ClearBreakPointFromJS(env->GetIsolate(), sbp1); ClearBreakPointFromJS(env->GetIsolate(), sbp2); - v8::Debug::SetDebugEventListener2(NULL); + v8::Debug::SetDebugEventListener(NULL); CheckDebuggerUnloaded(); } @@ -2292,7 +2284,7 @@ TEST(RemoveBreakPointInBreak) { debug_event_remove_break_point = SetBreakPoint(foo, 0); // Register the debug event listener pasing the function - v8::Debug::SetDebugEventListener2(DebugEventRemoveBreakPoint, foo); + v8::Debug::SetDebugEventListener(DebugEventRemoveBreakPoint, foo); break_point_hit_count = 0; foo->Call(env->Global(), 0, NULL); @@ -2302,7 +2294,7 @@ TEST(RemoveBreakPointInBreak) { foo->Call(env->Global(), 0, NULL); CHECK_EQ(0, break_point_hit_count); - v8::Debug::SetDebugEventListener2(NULL); + v8::Debug::SetDebugEventListener(NULL); CheckDebuggerUnloaded(); } @@ -2312,7 +2304,7 @@ TEST(DebuggerStatement) { break_point_hit_count = 0; DebugLocalContext env; v8::HandleScope scope(env->GetIsolate()); - v8::Debug::SetDebugEventListener2(DebugEventBreakPointHitCount); + v8::Debug::SetDebugEventListener(DebugEventBreakPointHitCount); v8::Script::Compile( v8::String::NewFromUtf8(env->GetIsolate(), "function bar(){debugger}")) ->Run(); @@ -2332,7 +2324,7 @@ TEST(DebuggerStatement) { foo->Call(env->Global(), 0, NULL); CHECK_EQ(3, break_point_hit_count); - v8::Debug::SetDebugEventListener2(NULL); + v8::Debug::SetDebugEventListener(NULL); CheckDebuggerUnloaded(); } @@ -2342,7 +2334,7 @@ TEST(DebuggerStatementBreakpoint) { break_point_hit_count = 0; DebugLocalContext env; v8::HandleScope scope(env->GetIsolate()); - v8::Debug::SetDebugEventListener2(DebugEventBreakPointHitCount); + v8::Debug::SetDebugEventListener(DebugEventBreakPointHitCount); v8::Script::Compile( v8::String::NewFromUtf8(env->GetIsolate(), "function foo(){debugger;}")) ->Run(); @@ -2360,7 +2352,7 @@ TEST(DebuggerStatementBreakpoint) { CHECK_EQ(2, break_point_hit_count); ClearBreakPoint(bp); - v8::Debug::SetDebugEventListener2(NULL); + v8::Debug::SetDebugEventListener(NULL); CheckDebuggerUnloaded(); } @@ -2378,7 +2370,7 @@ TEST(DebugEvaluate) { evaluate_check_source, "evaluate_check"); // Register the debug event listener - 
v8::Debug::SetDebugEventListener2(DebugEventEvaluate); + v8::Debug::SetDebugEventListener(DebugEventEvaluate); // Different expected vaules of x and a when in a break point (u = undefined, // d = Hello, world!). @@ -2479,7 +2471,7 @@ TEST(DebugEvaluate) { }; bar->Call(env->Global(), 2, argv_bar_3); - v8::Debug::SetDebugEventListener2(NULL); + v8::Debug::SetDebugEventListener(NULL); CheckDebuggerUnloaded(); } @@ -2497,7 +2489,7 @@ TEST(ConditionalBreakpointWithCodeGenerationDisallowed) { v8::HandleScope scope(env->GetIsolate()); env.ExposeDebug(); - v8::Debug::SetDebugEventListener2(CheckDebugEvent); + v8::Debug::SetDebugEventListener(CheckDebugEvent); v8::Local<v8::Function> foo = CompileFunction(&env, "function foo(x) {\n" @@ -2514,7 +2506,7 @@ TEST(ConditionalBreakpointWithCodeGenerationDisallowed) { foo->Call(env->Global(), 0, NULL); CHECK_EQ(1, debugEventCount); - v8::Debug::SetDebugEventListener2(NULL); + v8::Debug::SetDebugEventListener(NULL); CheckDebuggerUnloaded(); } @@ -2544,7 +2536,7 @@ TEST(DebugEvaluateWithCodeGenerationDisallowed) { v8::HandleScope scope(env->GetIsolate()); env.ExposeDebug(); - v8::Debug::SetDebugEventListener2(CheckDebugEval); + v8::Debug::SetDebugEventListener(CheckDebugEval); v8::Local<v8::Function> foo = CompileFunction(&env, "var global = 'Global';\n" @@ -2572,7 +2564,7 @@ TEST(DebugEvaluateWithCodeGenerationDisallowed) { checkGlobalEvalFunction.Clear(); checkFrameEvalFunction.Clear(); - v8::Debug::SetDebugEventListener2(NULL); + v8::Debug::SetDebugEventListener(NULL); CheckDebuggerUnloaded(); } @@ -2630,7 +2622,7 @@ bool GetEvaluateStringResult(char *message, char* buffer, int buffer_size) { if (len > buffer_size - 1) { len = buffer_size - 1; } - OS::StrNCpy(buf, pos1, len); + StrNCpy(buf, pos1, len); buffer[buffer_size - 1] = '\0'; return true; } @@ -2677,7 +2669,7 @@ static void DebugProcessDebugMessagesHandler( // Test that the evaluation of expressions works even from ProcessDebugMessages // i.e. with empty stack. TEST(DebugEvaluateWithoutStack) { - v8::Debug::SetMessageHandler2(DebugProcessDebugMessagesHandler); + v8::Debug::SetMessageHandler(DebugProcessDebugMessagesHandler); DebugLocalContext env; v8::HandleScope scope(env->GetIsolate()); @@ -2733,8 +2725,8 @@ TEST(DebugEvaluateWithoutStack) { 0); CHECK_EQ(strcmp("805", process_debug_messages_data.results[2].buffer), 0); - v8::Debug::SetMessageHandler2(NULL); - v8::Debug::SetDebugEventListener2(NULL); + v8::Debug::SetMessageHandler(NULL); + v8::Debug::SetDebugEventListener(NULL); CheckDebuggerUnloaded(); } @@ -2755,7 +2747,7 @@ TEST(DebugStepLinear) { SetBreakPoint(foo, 3); // Register a debug event listener which steps and counts. - v8::Debug::SetDebugEventListener2(DebugEventStep); + v8::Debug::SetDebugEventListener(DebugEventStep); step_action = StepIn; break_point_hit_count = 0; @@ -2764,11 +2756,11 @@ TEST(DebugStepLinear) { // With stepping all break locations are hit. CHECK_EQ(4, break_point_hit_count); - v8::Debug::SetDebugEventListener2(NULL); + v8::Debug::SetDebugEventListener(NULL); CheckDebuggerUnloaded(); // Register a debug event listener which just counts. - v8::Debug::SetDebugEventListener2(DebugEventBreakPointHitCount); + v8::Debug::SetDebugEventListener(DebugEventBreakPointHitCount); SetBreakPoint(foo, 3); break_point_hit_count = 0; @@ -2777,7 +2769,7 @@ TEST(DebugStepLinear) { // Without stepping only active break points are hit. 
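
DebugEvaluateWithoutStack above exercises the other renamed channel: the JSON request/response path registered through SetMessageHandler (formerly SetMessageHandler2). Commands are only queued by SendCommand and, when no JavaScript is on the stack, must be drained explicitly with ProcessDebugMessages. A compressed sketch of that round trip, reusing the AsciiToUtf16 helper these tests pass as the length argument (RecordResponse and RoundTrip are illustrative names):

  static void RecordResponse(const v8::Debug::Message& message) {
    v8::String::Value json(message.GetJSON());
    // inspect or log the UTF-16 JSON payload here
  }

  void RoundTrip(v8::Isolate* isolate) {
    v8::Debug::SetMessageHandler(RecordResponse);   // was: SetMessageHandler2(...)
    const char* cmd =
        "{\"seq\":0,\"type\":\"request\",\"command\":\"scripts\"}";
    uint16_t buffer[128];
    v8::Debug::SendCommand(isolate, buffer, AsciiToUtf16(cmd, buffer));
    v8::Debug::ProcessDebugMessages();  // drain the queue while the stack is empty
    v8::Debug::SetMessageHandler(NULL);
  }
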
CHECK_EQ(1, break_point_hit_count); - v8::Debug::SetDebugEventListener2(NULL); + v8::Debug::SetDebugEventListener(NULL); CheckDebuggerUnloaded(); } @@ -2788,7 +2780,7 @@ TEST(DebugStepKeyedLoadLoop) { v8::HandleScope scope(env->GetIsolate()); // Register a debug event listener which steps and counts. - v8::Debug::SetDebugEventListener2(DebugEventStep); + v8::Debug::SetDebugEventListener(DebugEventStep); // Create a function for testing stepping of keyed load. The statement 'y=1' // is there to have more than one breakable statement in the loop, TODO(315). @@ -2826,7 +2818,7 @@ TEST(DebugStepKeyedLoadLoop) { // With stepping all break locations are hit. CHECK_EQ(35, break_point_hit_count); - v8::Debug::SetDebugEventListener2(NULL); + v8::Debug::SetDebugEventListener(NULL); CheckDebuggerUnloaded(); } @@ -2837,7 +2829,7 @@ TEST(DebugStepKeyedStoreLoop) { v8::HandleScope scope(env->GetIsolate()); // Register a debug event listener which steps and counts. - v8::Debug::SetDebugEventListener2(DebugEventStep); + v8::Debug::SetDebugEventListener(DebugEventStep); // Create a function for testing stepping of keyed store. The statement 'y=1' // is there to have more than one breakable statement in the loop, TODO(315). @@ -2874,7 +2866,7 @@ TEST(DebugStepKeyedStoreLoop) { // With stepping all break locations are hit. CHECK_EQ(34, break_point_hit_count); - v8::Debug::SetDebugEventListener2(NULL); + v8::Debug::SetDebugEventListener(NULL); CheckDebuggerUnloaded(); } @@ -2885,7 +2877,7 @@ TEST(DebugStepNamedLoadLoop) { v8::HandleScope scope(env->GetIsolate()); // Register a debug event listener which steps and counts. - v8::Debug::SetDebugEventListener2(DebugEventStep); + v8::Debug::SetDebugEventListener(DebugEventStep); // Create a function for testing stepping of named load. v8::Local<v8::Function> foo = CompileFunction( @@ -2918,7 +2910,7 @@ TEST(DebugStepNamedLoadLoop) { // With stepping all break locations are hit. CHECK_EQ(55, break_point_hit_count); - v8::Debug::SetDebugEventListener2(NULL); + v8::Debug::SetDebugEventListener(NULL); CheckDebuggerUnloaded(); } @@ -2928,7 +2920,7 @@ static void DoDebugStepNamedStoreLoop(int expected) { v8::HandleScope scope(env->GetIsolate()); // Register a debug event listener which steps and counts. - v8::Debug::SetDebugEventListener2(DebugEventStep); + v8::Debug::SetDebugEventListener(DebugEventStep); // Create a function for testing stepping of named store. v8::Local<v8::Function> foo = CompileFunction( @@ -2953,7 +2945,7 @@ static void DoDebugStepNamedStoreLoop(int expected) { // With stepping all expected break locations are hit. CHECK_EQ(expected, break_point_hit_count); - v8::Debug::SetDebugEventListener2(NULL); + v8::Debug::SetDebugEventListener(NULL); CheckDebuggerUnloaded(); } @@ -2970,7 +2962,7 @@ TEST(DebugStepLinearMixedICs) { v8::HandleScope scope(env->GetIsolate()); // Register a debug event listener which steps and counts. - v8::Debug::SetDebugEventListener2(DebugEventStep); + v8::Debug::SetDebugEventListener(DebugEventStep); // Create a function for testing stepping. v8::Local<v8::Function> foo = CompileFunction(&env, @@ -2993,11 +2985,11 @@ TEST(DebugStepLinearMixedICs) { // With stepping all break locations are hit. CHECK_EQ(11, break_point_hit_count); - v8::Debug::SetDebugEventListener2(NULL); + v8::Debug::SetDebugEventListener(NULL); CheckDebuggerUnloaded(); // Register a debug event listener which just counts. 
- v8::Debug::SetDebugEventListener2(DebugEventBreakPointHitCount); + v8::Debug::SetDebugEventListener(DebugEventBreakPointHitCount); SetBreakPoint(foo, 0); break_point_hit_count = 0; @@ -3006,7 +2998,7 @@ TEST(DebugStepLinearMixedICs) { // Without stepping only active break points are hit. CHECK_EQ(1, break_point_hit_count); - v8::Debug::SetDebugEventListener2(NULL); + v8::Debug::SetDebugEventListener(NULL); CheckDebuggerUnloaded(); } @@ -3016,7 +3008,7 @@ TEST(DebugStepDeclarations) { v8::HandleScope scope(env->GetIsolate()); // Register a debug event listener which steps and counts. - v8::Debug::SetDebugEventListener2(DebugEventStep); + v8::Debug::SetDebugEventListener(DebugEventStep); // Create a function for testing stepping. Run it to allow it to get // optimized. @@ -3039,7 +3031,7 @@ TEST(DebugStepDeclarations) { CHECK_EQ(6, break_point_hit_count); // Get rid of the debug event listener. - v8::Debug::SetDebugEventListener2(NULL); + v8::Debug::SetDebugEventListener(NULL); CheckDebuggerUnloaded(); } @@ -3049,7 +3041,7 @@ TEST(DebugStepLocals) { v8::HandleScope scope(env->GetIsolate()); // Register a debug event listener which steps and counts. - v8::Debug::SetDebugEventListener2(DebugEventStep); + v8::Debug::SetDebugEventListener(DebugEventStep); // Create a function for testing stepping. Run it to allow it to get // optimized. @@ -3072,7 +3064,7 @@ TEST(DebugStepLocals) { CHECK_EQ(6, break_point_hit_count); // Get rid of the debug event listener. - v8::Debug::SetDebugEventListener2(NULL); + v8::Debug::SetDebugEventListener(NULL); CheckDebuggerUnloaded(); } @@ -3083,7 +3075,7 @@ TEST(DebugStepIf) { v8::HandleScope scope(isolate); // Register a debug event listener which steps and counts. - v8::Debug::SetDebugEventListener2(DebugEventStep); + v8::Debug::SetDebugEventListener(DebugEventStep); // Create a function for testing stepping. Run it to allow it to get // optimized. @@ -3116,7 +3108,7 @@ TEST(DebugStepIf) { CHECK_EQ(5, break_point_hit_count); // Get rid of the debug event listener. - v8::Debug::SetDebugEventListener2(NULL); + v8::Debug::SetDebugEventListener(NULL); CheckDebuggerUnloaded(); } @@ -3127,7 +3119,7 @@ TEST(DebugStepSwitch) { v8::HandleScope scope(isolate); // Register a debug event listener which steps and counts. - v8::Debug::SetDebugEventListener2(DebugEventStep); + v8::Debug::SetDebugEventListener(DebugEventStep); // Create a function for testing stepping. Run it to allow it to get // optimized. @@ -3173,7 +3165,7 @@ TEST(DebugStepSwitch) { CHECK_EQ(7, break_point_hit_count); // Get rid of the debug event listener. - v8::Debug::SetDebugEventListener2(NULL); + v8::Debug::SetDebugEventListener(NULL); CheckDebuggerUnloaded(); } @@ -3184,7 +3176,7 @@ TEST(DebugStepWhile) { v8::HandleScope scope(isolate); // Register a debug event listener which steps and counts. - v8::Debug::SetDebugEventListener2(DebugEventStep); + v8::Debug::SetDebugEventListener(DebugEventStep); // Create a function for testing stepping. Run it to allow it to get // optimized. @@ -3199,22 +3191,29 @@ TEST(DebugStepWhile) { v8::Local<v8::Function> foo = CompileFunction(&env, src, "foo"); SetBreakPoint(foo, 8); // "var a = 0;" + // Looping 0 times. We still should break at the while-condition once. + step_action = StepIn; + break_point_hit_count = 0; + v8::Handle<v8::Value> argv_0[argc] = { v8::Number::New(isolate, 0) }; + foo->Call(env->Global(), argc, argv_0); + CHECK_EQ(3, break_point_hit_count); + // Looping 10 times. 
step_action = StepIn; break_point_hit_count = 0; v8::Handle<v8::Value> argv_10[argc] = { v8::Number::New(isolate, 10) }; foo->Call(env->Global(), argc, argv_10); - CHECK_EQ(22, break_point_hit_count); + CHECK_EQ(23, break_point_hit_count); // Looping 100 times. step_action = StepIn; break_point_hit_count = 0; v8::Handle<v8::Value> argv_100[argc] = { v8::Number::New(isolate, 100) }; foo->Call(env->Global(), argc, argv_100); - CHECK_EQ(202, break_point_hit_count); + CHECK_EQ(203, break_point_hit_count); // Get rid of the debug event listener. - v8::Debug::SetDebugEventListener2(NULL); + v8::Debug::SetDebugEventListener(NULL); CheckDebuggerUnloaded(); } @@ -3225,7 +3224,7 @@ TEST(DebugStepDoWhile) { v8::HandleScope scope(isolate); // Register a debug event listener which steps and counts. - v8::Debug::SetDebugEventListener2(DebugEventStep); + v8::Debug::SetDebugEventListener(DebugEventStep); // Create a function for testing stepping. Run it to allow it to get // optimized. @@ -3255,7 +3254,7 @@ TEST(DebugStepDoWhile) { CHECK_EQ(202, break_point_hit_count); // Get rid of the debug event listener. - v8::Debug::SetDebugEventListener2(NULL); + v8::Debug::SetDebugEventListener(NULL); CheckDebuggerUnloaded(); } @@ -3266,7 +3265,7 @@ TEST(DebugStepFor) { v8::HandleScope scope(isolate); // Register a debug event listener which steps and counts. - v8::Debug::SetDebugEventListener2(DebugEventStep); + v8::Debug::SetDebugEventListener(DebugEventStep); // Create a function for testing stepping. Run it to allow it to get // optimized. @@ -3297,7 +3296,7 @@ TEST(DebugStepFor) { CHECK_EQ(203, break_point_hit_count); // Get rid of the debug event listener. - v8::Debug::SetDebugEventListener2(NULL); + v8::Debug::SetDebugEventListener(NULL); CheckDebuggerUnloaded(); } @@ -3308,7 +3307,7 @@ TEST(DebugStepForContinue) { v8::HandleScope scope(isolate); // Register a debug event listener which steps and counts. - v8::Debug::SetDebugEventListener2(DebugEventStep); + v8::Debug::SetDebugEventListener(DebugEventStep); // Create a function for testing stepping. Run it to allow it to get // optimized. @@ -3349,7 +3348,7 @@ TEST(DebugStepForContinue) { CHECK_EQ(457, break_point_hit_count); // Get rid of the debug event listener. - v8::Debug::SetDebugEventListener2(NULL); + v8::Debug::SetDebugEventListener(NULL); CheckDebuggerUnloaded(); } @@ -3360,7 +3359,7 @@ TEST(DebugStepForBreak) { v8::HandleScope scope(isolate); // Register a debug event listener which steps and counts. - v8::Debug::SetDebugEventListener2(DebugEventStep); + v8::Debug::SetDebugEventListener(DebugEventStep); // Create a function for testing stepping. Run it to allow it to get // optimized. @@ -3402,7 +3401,7 @@ TEST(DebugStepForBreak) { CHECK_EQ(505, break_point_hit_count); // Get rid of the debug event listener. - v8::Debug::SetDebugEventListener2(NULL); + v8::Debug::SetDebugEventListener(NULL); CheckDebuggerUnloaded(); } @@ -3412,7 +3411,7 @@ TEST(DebugStepForIn) { v8::HandleScope scope(env->GetIsolate()); // Register a debug event listener which steps and counts. - v8::Debug::SetDebugEventListener2(DebugEventStep); + v8::Debug::SetDebugEventListener(DebugEventStep); // Create a function for testing stepping. Run it to allow it to get // optimized. @@ -3450,7 +3449,7 @@ TEST(DebugStepForIn) { CHECK_EQ(8, break_point_hit_count); // Get rid of the debug event listener. 
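
The DebugStepWhile adjustment just above deserves a quick gloss: stepping now stops at the while condition even when that condition is about to fail, so every call picks up exactly one extra hit, including the new zero-iteration case. The expectations shown are consistent with a linear count:

  N (iterations) :   0    10    100
  old expectation:   -    22    202   (matches 2N + 2)
  new expectation:   3    23    203   (matches 2N + 3)

The added unit in each column is the final, failing evaluation of the loop condition.
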
- v8::Debug::SetDebugEventListener2(NULL); + v8::Debug::SetDebugEventListener(NULL); CheckDebuggerUnloaded(); } @@ -3460,7 +3459,7 @@ TEST(DebugStepWith) { v8::HandleScope scope(env->GetIsolate()); // Register a debug event listener which steps and counts. - v8::Debug::SetDebugEventListener2(DebugEventStep); + v8::Debug::SetDebugEventListener(DebugEventStep); // Create a function for testing stepping. Run it to allow it to get // optimized. @@ -3482,7 +3481,7 @@ TEST(DebugStepWith) { CHECK_EQ(4, break_point_hit_count); // Get rid of the debug event listener. - v8::Debug::SetDebugEventListener2(NULL); + v8::Debug::SetDebugEventListener(NULL); CheckDebuggerUnloaded(); } @@ -3493,7 +3492,7 @@ TEST(DebugConditional) { v8::HandleScope scope(isolate); // Register a debug event listener which steps and counts. - v8::Debug::SetDebugEventListener2(DebugEventStep); + v8::Debug::SetDebugEventListener(DebugEventStep); // Create a function for testing stepping. Run it to allow it to get // optimized. @@ -3519,7 +3518,7 @@ TEST(DebugConditional) { CHECK_EQ(5, break_point_hit_count); // Get rid of the debug event listener. - v8::Debug::SetDebugEventListener2(NULL); + v8::Debug::SetDebugEventListener(NULL); CheckDebuggerUnloaded(); } @@ -3534,7 +3533,7 @@ TEST(StepInOutSimple) { "frame_function_name"); // Register a debug event listener which steps and counts. - v8::Debug::SetDebugEventListener2(DebugEventStepSequence); + v8::Debug::SetDebugEventListener(DebugEventStepSequence); // Create a function for testing stepping. Run it to allow it to get // optimized. @@ -3570,7 +3569,7 @@ TEST(StepInOutSimple) { break_point_hit_count); // Get rid of the debug event listener. - v8::Debug::SetDebugEventListener2(NULL); + v8::Debug::SetDebugEventListener(NULL); CheckDebuggerUnloaded(); } @@ -3585,7 +3584,7 @@ TEST(StepInOutTree) { "frame_function_name"); // Register a debug event listener which steps and counts. - v8::Debug::SetDebugEventListener2(DebugEventStepSequence); + v8::Debug::SetDebugEventListener(DebugEventStepSequence); // Create a function for testing stepping. Run it to allow it to get // optimized. @@ -3622,7 +3621,7 @@ TEST(StepInOutTree) { break_point_hit_count); // Get rid of the debug event listener. - v8::Debug::SetDebugEventListener2(NULL); + v8::Debug::SetDebugEventListener(NULL); CheckDebuggerUnloaded(true); } @@ -3637,7 +3636,7 @@ TEST(StepInOutBranch) { "frame_function_name"); // Register a debug event listener which steps and counts. - v8::Debug::SetDebugEventListener2(DebugEventStepSequence); + v8::Debug::SetDebugEventListener(DebugEventStepSequence); // Create a function for testing stepping. Run it to allow it to get // optimized. @@ -3657,7 +3656,7 @@ TEST(StepInOutBranch) { break_point_hit_count); // Get rid of the debug event listener. - v8::Debug::SetDebugEventListener2(NULL); + v8::Debug::SetDebugEventListener(NULL); CheckDebuggerUnloaded(); } @@ -3674,7 +3673,7 @@ TEST(DebugStepNatives) { "foo"); // Register a debug event listener which steps and counts. - v8::Debug::SetDebugEventListener2(DebugEventStep); + v8::Debug::SetDebugEventListener(DebugEventStep); step_action = StepIn; break_point_hit_count = 0; @@ -3683,11 +3682,11 @@ TEST(DebugStepNatives) { // With stepping all break locations are hit. CHECK_EQ(3, break_point_hit_count); - v8::Debug::SetDebugEventListener2(NULL); + v8::Debug::SetDebugEventListener(NULL); CheckDebuggerUnloaded(); // Register a debug event listener which just counts. 
- v8::Debug::SetDebugEventListener2(DebugEventBreakPointHitCount); + v8::Debug::SetDebugEventListener(DebugEventBreakPointHitCount); break_point_hit_count = 0; foo->Call(env->Global(), 0, NULL); @@ -3695,7 +3694,7 @@ TEST(DebugStepNatives) { // Without stepping only active break points are hit. CHECK_EQ(1, break_point_hit_count); - v8::Debug::SetDebugEventListener2(NULL); + v8::Debug::SetDebugEventListener(NULL); CheckDebuggerUnloaded(); } @@ -3713,7 +3712,7 @@ TEST(DebugStepFunctionApply) { "foo"); // Register a debug event listener which steps and counts. - v8::Debug::SetDebugEventListener2(DebugEventStep); + v8::Debug::SetDebugEventListener(DebugEventStep); step_action = StepIn; break_point_hit_count = 0; @@ -3722,11 +3721,11 @@ TEST(DebugStepFunctionApply) { // With stepping all break locations are hit. CHECK_EQ(7, break_point_hit_count); - v8::Debug::SetDebugEventListener2(NULL); + v8::Debug::SetDebugEventListener(NULL); CheckDebuggerUnloaded(); // Register a debug event listener which just counts. - v8::Debug::SetDebugEventListener2(DebugEventBreakPointHitCount); + v8::Debug::SetDebugEventListener(DebugEventBreakPointHitCount); break_point_hit_count = 0; foo->Call(env->Global(), 0, NULL); @@ -3734,7 +3733,7 @@ TEST(DebugStepFunctionApply) { // Without stepping only the debugger statement is hit. CHECK_EQ(1, break_point_hit_count); - v8::Debug::SetDebugEventListener2(NULL); + v8::Debug::SetDebugEventListener(NULL); CheckDebuggerUnloaded(); } @@ -3759,7 +3758,7 @@ TEST(DebugStepFunctionCall) { "foo"); // Register a debug event listener which steps and counts. - v8::Debug::SetDebugEventListener2(DebugEventStep); + v8::Debug::SetDebugEventListener(DebugEventStep); step_action = StepIn; // Check stepping where the if condition in bar is false. @@ -3774,11 +3773,11 @@ TEST(DebugStepFunctionCall) { foo->Call(env->Global(), argc, argv); CHECK_EQ(8, break_point_hit_count); - v8::Debug::SetDebugEventListener2(NULL); + v8::Debug::SetDebugEventListener(NULL); CheckDebuggerUnloaded(); // Register a debug event listener which just counts. - v8::Debug::SetDebugEventListener2(DebugEventBreakPointHitCount); + v8::Debug::SetDebugEventListener(DebugEventBreakPointHitCount); break_point_hit_count = 0; foo->Call(env->Global(), 0, NULL); @@ -3786,7 +3785,7 @@ TEST(DebugStepFunctionCall) { // Without stepping only the debugger statement is hit. CHECK_EQ(1, break_point_hit_count); - v8::Debug::SetDebugEventListener2(NULL); + v8::Debug::SetDebugEventListener(NULL); CheckDebuggerUnloaded(); } @@ -3798,7 +3797,7 @@ TEST(PauseInScript) { env.ExposeDebug(); // Register a debug event listener which counts. - v8::Debug::SetDebugEventListener2(DebugEventCounter); + v8::Debug::SetDebugEventListener(DebugEventCounter); // Create a script that returns a function. const char* src = "(function (evt) {})"; @@ -3819,7 +3818,7 @@ TEST(PauseInScript) { CHECK_EQ(1, break_point_hit_count); // Get rid of the debug event listener. - v8::Debug::SetDebugEventListener2(NULL); + v8::Debug::SetDebugEventListener(NULL); CheckDebuggerUnloaded(); } @@ -3845,7 +3844,7 @@ TEST(BreakOnException) { CompileFunction(&env, "function notCaught(){throws();}", "notCaught"); v8::V8::AddMessageListener(MessageCallbackCount); - v8::Debug::SetDebugEventListener2(DebugEventCounter); + v8::Debug::SetDebugEventListener(DebugEventCounter); // Initial state should be no break on exceptions. 
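
BreakOnException, whose body continues below, wires up two independent observers: a V8 message listener that fires for exceptions reaching the top level, and a debug event listener that counts debug-level exception events; comparing the two counters is what lets the test distinguish caught from uncaught throws under the various break-on-exception settings. The registration pattern in isolation, sketched with hypothetical callbacks (OnMessage and OnDebugEvent are illustrative names; the message listener uses the classic two-argument callback shape):

  static void OnMessage(v8::Handle<v8::Message> message,
                        v8::Handle<v8::Value> data) {
    // fires when an exception propagates out to the top level
  }

  static void OnDebugEvent(const v8::Debug::EventDetails& details) {
    if (details.GetEvent() == v8::Exception) {
      // fires for exception debug events, subject to the
      // break-on-exception configuration
    }
  }

  void Wire() {
    v8::V8::AddMessageListener(OnMessage);
    v8::Debug::SetDebugEventListener(OnDebugEvent);
    // ... run code that throws, caught and uncaught ...
    v8::Debug::SetDebugEventListener(NULL);
    v8::V8::RemoveMessageListeners(OnMessage);
  }
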
DebugEventCounterClear(); @@ -3963,7 +3962,7 @@ TEST(BreakOnException) { CHECK_EQ(1, uncaught_exception_hit_count); CHECK_EQ(1, message_callback_count); - v8::Debug::SetDebugEventListener2(NULL); + v8::Debug::SetDebugEventListener(NULL); CheckDebuggerUnloaded(); v8::V8::RemoveMessageListeners(MessageCallbackCount); } @@ -3983,7 +3982,7 @@ TEST(BreakOnCompileException) { frame_count = CompileFunction(&env, frame_count_source, "frame_count"); v8::V8::AddMessageListener(MessageCallbackCount); - v8::Debug::SetDebugEventListener2(DebugEventCounter); + v8::Debug::SetDebugEventListener(DebugEventCounter); DebugEventCounterClear(); MessageCallbackCountClear(); @@ -4039,7 +4038,7 @@ TEST(StepWithException) { "frame_function_name"); // Register a debug event listener which steps and counts. - v8::Debug::SetDebugEventListener2(DebugEventStepSequence); + v8::Debug::SetDebugEventListener(DebugEventStepSequence); // Create functions for testing stepping. const char* src = "function a() { n(); }; " @@ -4111,7 +4110,7 @@ TEST(StepWithException) { break_point_hit_count); // Get rid of the debug event listener. - v8::Debug::SetDebugEventListener2(NULL); + v8::Debug::SetDebugEventListener(NULL); CheckDebuggerUnloaded(); } @@ -4126,7 +4125,7 @@ TEST(DebugBreak) { v8::HandleScope scope(isolate); // Register a debug event listener which sets the break flag and counts. - v8::Debug::SetDebugEventListener2(DebugEventBreak); + v8::Debug::SetDebugEventListener(DebugEventBreak); // Create a function for testing stepping. const char* src = "function f0() {}" @@ -4166,7 +4165,7 @@ TEST(DebugBreak) { CHECK_EQ(4 * ARRAY_SIZE(argv), break_point_hit_count); // Get rid of the debug event listener. - v8::Debug::SetDebugEventListener2(NULL); + v8::Debug::SetDebugEventListener(NULL); CheckDebuggerUnloaded(); } @@ -4178,7 +4177,7 @@ TEST(DisableBreak) { v8::HandleScope scope(env->GetIsolate()); // Register a debug event listener which sets the break flag and counts. - v8::Debug::SetDebugEventListener2(DebugEventCounter); + v8::Debug::SetDebugEventListener(DebugEventCounter); // Create a function for testing stepping. const char* src = "function f() {g()};function g(){i=0; while(i<10){i++}}"; @@ -4195,7 +4194,7 @@ TEST(DisableBreak) { { v8::Debug::DebugBreak(env->GetIsolate()); i::Isolate* isolate = reinterpret_cast<i::Isolate*>(env->GetIsolate()); - v8::internal::DisableBreak disable_break(isolate, true); + v8::internal::DisableBreak disable_break(isolate->debug(), true); f->Call(env->Global(), 0, NULL); CHECK_EQ(1, break_point_hit_count); } @@ -4204,7 +4203,7 @@ TEST(DisableBreak) { CHECK_EQ(2, break_point_hit_count); // Get rid of the debug event listener. - v8::Debug::SetDebugEventListener2(NULL); + v8::Debug::SetDebugEventListener(NULL); CheckDebuggerUnloaded(); } @@ -4220,7 +4219,7 @@ TEST(NoBreakWhenBootstrapping) { v8::HandleScope scope(isolate); // Register a debug event listener which sets the break flag and counts. - v8::Debug::SetDebugEventListener2(DebugEventCounter); + v8::Debug::SetDebugEventListener(DebugEventCounter); // Set the debug break flag. v8::Debug::DebugBreak(isolate); @@ -4239,7 +4238,7 @@ TEST(NoBreakWhenBootstrapping) { CHECK_EQ(0, break_point_hit_count); // Get rid of the debug event listener. - v8::Debug::SetDebugEventListener2(NULL); + v8::Debug::SetDebugEventListener(NULL); CheckDebuggerUnloaded(); } @@ -4374,15 +4373,15 @@ TEST(InterceptorPropertyMirror) { // Check that the properties are interceptor properties. 
for (int i = 0; i < 3; i++) { EmbeddedVector<char, SMALL_STRING_BUFFER_SIZE> buffer; - OS::SNPrintF(buffer, - "named_values[%d] instanceof debug.PropertyMirror", i); + SNPrintF(buffer, + "named_values[%d] instanceof debug.PropertyMirror", i); CHECK(CompileRun(buffer.start())->BooleanValue()); - OS::SNPrintF(buffer, "named_values[%d].propertyType()", i); + SNPrintF(buffer, "named_values[%d].propertyType()", i); CHECK_EQ(v8::internal::INTERCEPTOR, CompileRun(buffer.start())->Int32Value()); - OS::SNPrintF(buffer, "named_values[%d].isNative()", i); + SNPrintF(buffer, "named_values[%d].isNative()", i); CHECK(CompileRun(buffer.start())->BooleanValue()); } @@ -4393,8 +4392,8 @@ TEST(InterceptorPropertyMirror) { // Check that the properties are interceptor properties. for (int i = 0; i < 2; i++) { EmbeddedVector<char, SMALL_STRING_BUFFER_SIZE> buffer; - OS::SNPrintF(buffer, - "indexed_values[%d] instanceof debug.PropertyMirror", i); + SNPrintF(buffer, + "indexed_values[%d] instanceof debug.PropertyMirror", i); CHECK(CompileRun(buffer.start())->BooleanValue()); } @@ -4405,7 +4404,7 @@ TEST(InterceptorPropertyMirror) { // Check that the properties are interceptor properties. for (int i = 0; i < 5; i++) { EmbeddedVector<char, SMALL_STRING_BUFFER_SIZE> buffer; - OS::SNPrintF(buffer, "both_values[%d] instanceof debug.PropertyMirror", i); + SNPrintF(buffer, "both_values[%d] instanceof debug.PropertyMirror", i); CHECK(CompileRun(buffer.start())->BooleanValue()); } @@ -4730,7 +4729,7 @@ class ThreadBarrier V8_FINAL { Mutex mutex_; int num_blocked_; - STATIC_CHECK(N > 0); + STATIC_ASSERT(N > 0); DISALLOW_COPY_AND_ASSIGN(ThreadBarrier); }; @@ -4745,8 +4744,8 @@ class Barriers { ThreadBarrier<2> barrier_3; ThreadBarrier<2> barrier_4; ThreadBarrier<2> barrier_5; - v8::internal::Semaphore semaphore_1; - v8::internal::Semaphore semaphore_2; + v8::base::Semaphore semaphore_1; + v8::base::Semaphore semaphore_2; }; @@ -4846,10 +4845,10 @@ Barriers message_queue_barriers; // This is the debugger thread, that executes no v8 calls except // placing JSON debugger commands in the queue. 
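
The thread scaffolding rewritten below is the platform-layer half of this patch: test threads move from v8::internal::Thread to v8::base::Thread, the constructor takes an Options wrapper instead of a bare name string, and the paired semaphores become v8::base::Semaphore. The updated subclass shape, with a hypothetical WorkerThread standing in for the various *Thread classes below:

  class WorkerThread : public v8::base::Thread {
   public:
    WorkerThread() : Thread(Options("WorkerThread")) {}  // name is now wrapped in Options
    virtual void Run() {
      // thread body; Start()/Join() usage is unchanged by the move
    }
  };
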
-class MessageQueueDebuggerThread : public v8::internal::Thread { +class MessageQueueDebuggerThread : public v8::base::Thread { public: MessageQueueDebuggerThread() - : Thread("MessageQueueDebuggerThread") { } + : Thread(Options("MessageQueueDebuggerThread")) {} void Run(); }; @@ -4961,7 +4960,7 @@ TEST(MessageQueues) { // Create a V8 environment DebugLocalContext env; v8::HandleScope scope(env->GetIsolate()); - v8::Debug::SetMessageHandler2(MessageHandler); + v8::Debug::SetMessageHandler(MessageHandler); message_queue_debugger_thread.Start(); const char* source_1 = "a = 3; b = 4; c = new Object(); c.d = 5;"; @@ -5055,7 +5054,7 @@ TEST(SendClientDataToHandler) { v8::HandleScope scope(isolate); TestClientData::ResetCounters(); handled_client_data_instances_count = 0; - v8::Debug::SetMessageHandler2(MessageHandlerCountingClientData); + v8::Debug::SetMessageHandler(MessageHandlerCountingClientData); const char* source_1 = "a = 3; b = 4; c = new Object(); c.d = 5;"; const int kBufferSize = 1000; uint16_t buffer[kBufferSize]; @@ -5103,15 +5102,15 @@ TEST(SendClientDataToHandler) { Barriers threaded_debugging_barriers; -class V8Thread : public v8::internal::Thread { +class V8Thread : public v8::base::Thread { public: - V8Thread() : Thread("V8Thread") { } + V8Thread() : Thread(Options("V8Thread")) {} void Run(); }; -class DebuggerThread : public v8::internal::Thread { +class DebuggerThread : public v8::base::Thread { public: - DebuggerThread() : Thread("DebuggerThread") { } + DebuggerThread() : Thread(Options("DebuggerThread")) {} void Run(); }; @@ -5129,10 +5128,7 @@ static void ThreadedMessageHandler(const v8::Debug::Message& message) { if (IsBreakEventMessage(print_buffer)) { // Check that we are inside the while loop. int source_line = GetSourceLineFromBreakEventMessage(print_buffer); - // TODO(2047): This should really be 8 <= source_line <= 13; but we - // currently have an off-by-one error when calculating the source - // position corresponding to the program counter at the debug break. - CHECK(7 <= source_line && source_line <= 13); + CHECK(8 <= source_line && source_line <= 13); threaded_debugging_barriers.barrier_2.Wait(); } } @@ -5162,7 +5158,7 @@ void V8Thread::Run() { v8::Isolate::Scope isolate_scope(isolate); DebugLocalContext env; v8::HandleScope scope(env->GetIsolate()); - v8::Debug::SetMessageHandler2(&ThreadedMessageHandler); + v8::Debug::SetMessageHandler(&ThreadedMessageHandler); v8::Handle<v8::ObjectTemplate> global_template = v8::ObjectTemplate::New(env->GetIsolate()); global_template->Set( @@ -5218,16 +5214,16 @@ TEST(ThreadedDebugging) { * breakpoint is hit when enabled, and missed when disabled. 
*/ -class BreakpointsV8Thread : public v8::internal::Thread { +class BreakpointsV8Thread : public v8::base::Thread { public: - BreakpointsV8Thread() : Thread("BreakpointsV8Thread") { } + BreakpointsV8Thread() : Thread(Options("BreakpointsV8Thread")) {} void Run(); }; -class BreakpointsDebuggerThread : public v8::internal::Thread { +class BreakpointsDebuggerThread : public v8::base::Thread { public: explicit BreakpointsDebuggerThread(bool global_evaluate) - : Thread("BreakpointsDebuggerThread"), + : Thread(Options("BreakpointsDebuggerThread")), global_evaluate_(global_evaluate) {} void Run(); @@ -5281,7 +5277,7 @@ void BreakpointsV8Thread::Run() { v8::Isolate::Scope isolate_scope(isolate); DebugLocalContext env; v8::HandleScope scope(isolate); - v8::Debug::SetMessageHandler2(&BreakpointsMessageHandler); + v8::Debug::SetMessageHandler(&BreakpointsMessageHandler); CompileRun(source_1); breakpoints_barriers->barrier_1.Wait(); @@ -5438,7 +5434,7 @@ static void DummyDebugEventListener( TEST(SetDebugEventListenerOnUninitializedVM) { - v8::Debug::SetDebugEventListener2(DummyDebugEventListener); + v8::Debug::SetDebugEventListener(DummyDebugEventListener); } @@ -5447,7 +5443,7 @@ static void DummyMessageHandler(const v8::Debug::Message& message) { TEST(SetMessageHandlerOnUninitializedVM) { - v8::Debug::SetMessageHandler2(DummyMessageHandler); + v8::Debug::SetMessageHandler(DummyMessageHandler); } @@ -5501,13 +5497,12 @@ static void CheckDataParameter( v8::String::NewFromUtf8(args.GetIsolate(), "Test"); CHECK(v8::Debug::Call(debugger_call_with_data, data)->IsString()); - CHECK(v8::Debug::Call(debugger_call_with_data).IsEmpty()); - CHECK(v8::Debug::Call(debugger_call_with_data).IsEmpty()); - - v8::TryCatch catcher; - v8::Debug::Call(debugger_call_with_data); - CHECK(catcher.HasCaught()); - CHECK(catcher.Exception()->IsString()); + for (int i = 0; i < 3; i++) { + v8::TryCatch catcher; + CHECK(v8::Debug::Call(debugger_call_with_data).IsEmpty()); + CHECK(catcher.HasCaught()); + CHECK(catcher.Exception()->IsString()); + } } @@ -5639,7 +5634,7 @@ TEST(DebuggerUnload) { // Set a debug event listener. break_point_hit_count = 0; - v8::Debug::SetDebugEventListener2(DebugEventBreakPointHitCount); + v8::Debug::SetDebugEventListener(DebugEventBreakPointHitCount); { v8::HandleScope scope(env->GetIsolate()); // Create a couple of functions for the test. @@ -5664,12 +5659,12 @@ TEST(DebuggerUnload) { // Remove the debug event listener without clearing breakpoints. Do this // outside a handle scope. - v8::Debug::SetDebugEventListener2(NULL); + v8::Debug::SetDebugEventListener(NULL); CheckDebuggerUnloaded(true); // Now set a debug message handler. break_point_hit_count = 0; - v8::Debug::SetMessageHandler2(MessageHandlerBreakPointHitCount); + v8::Debug::SetMessageHandler(MessageHandlerBreakPointHitCount); { v8::HandleScope scope(env->GetIsolate()); @@ -5689,7 +5684,7 @@ TEST(DebuggerUnload) { // Remove the debug message handler without clearing breakpoints. Do this // outside a handle scope. - v8::Debug::SetMessageHandler2(NULL); + v8::Debug::SetMessageHandler(NULL); CheckDebuggerUnloaded(true); } @@ -5732,7 +5727,7 @@ TEST(DebuggerClearMessageHandler) { CheckDebuggerUnloaded(); // Set a debug message handler. - v8::Debug::SetMessageHandler2(MessageHandlerHitCount); + v8::Debug::SetMessageHandler(MessageHandlerHitCount); // Run code to throw a unhandled exception. This should end up in the message // handler. @@ -5743,7 +5738,7 @@ TEST(DebuggerClearMessageHandler) { // Clear debug message handler. 
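
The CheckDataParameter rewrite a few hunks up tightens semantics as well as style: previously the test asserted two bare empty results and only wrapped the third data-less v8::Debug::Call in a TryCatch, whereas now every such call is expected to throw, and each loop iteration gets its own stack-allocated handler, since a v8::TryCatch only observes exceptions raised while it is the innermost handler on the stack. Reduced to its skeleton (CHECK is the cctest macro used throughout this file):

  for (int i = 0; i < 3; i++) {
    v8::TryCatch catcher;  // fresh handler per call; state does not leak across iterations
    CHECK(v8::Debug::Call(debugger_call_with_data).IsEmpty());  // the call fails...
    CHECK(catcher.HasCaught());                                 // ...by throwing...
    CHECK(catcher.Exception()->IsString());                     // ...a string-valued error
  }
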
message_handler_hit_count = 0; - v8::Debug::SetMessageHandler2(NULL); + v8::Debug::SetMessageHandler(NULL); // Run code to throw a unhandled exception. This should end up in the message // handler. @@ -5762,7 +5757,7 @@ static void MessageHandlerClearingMessageHandler( message_handler_hit_count++; // Clear debug message handler. - v8::Debug::SetMessageHandler2(NULL); + v8::Debug::SetMessageHandler(NULL); } @@ -5775,7 +5770,7 @@ TEST(DebuggerClearMessageHandlerWhileActive) { CheckDebuggerUnloaded(); // Set a debug message handler. - v8::Debug::SetMessageHandler2(MessageHandlerClearingMessageHandler); + v8::Debug::SetMessageHandler(MessageHandlerClearingMessageHandler); // Run code to throw a unhandled exception. This should end up in the message // handler. @@ -5788,344 +5783,6 @@ TEST(DebuggerClearMessageHandlerWhileActive) { } -/* Test DebuggerHostDispatch */ -/* In this test, the debugger waits for a command on a breakpoint - * and is dispatching host commands while in the infinite loop. - */ - -class HostDispatchV8Thread : public v8::internal::Thread { - public: - HostDispatchV8Thread() : Thread("HostDispatchV8Thread") { } - void Run(); -}; - -class HostDispatchDebuggerThread : public v8::internal::Thread { - public: - HostDispatchDebuggerThread() : Thread("HostDispatchDebuggerThread") { } - void Run(); -}; - -Barriers* host_dispatch_barriers; - -static void HostDispatchMessageHandler(const v8::Debug::Message& message) { - static char print_buffer[1000]; - v8::String::Value json(message.GetJSON()); - Utf16ToAscii(*json, json.length(), print_buffer); -} - - -static void HostDispatchDispatchHandler() { - host_dispatch_barriers->semaphore_1.Signal(); -} - - -void HostDispatchV8Thread::Run() { - const char* source_1 = "var y_global = 3;\n" - "function cat( new_value ) {\n" - " var x = new_value;\n" - " y_global = 4;\n" - " x = 3 * x + 1;\n" - " y_global = 5;\n" - " return x;\n" - "}\n" - "\n"; - const char* source_2 = "cat(17);\n"; - - v8::Isolate::Scope isolate_scope(CcTest::isolate()); - DebugLocalContext env; - v8::HandleScope scope(env->GetIsolate()); - - // Set up message and host dispatch handlers. - v8::Debug::SetMessageHandler2(HostDispatchMessageHandler); - v8::Debug::SetHostDispatchHandler(HostDispatchDispatchHandler, 10 /* ms */); - - CompileRun(source_1); - host_dispatch_barriers->barrier_1.Wait(); - host_dispatch_barriers->barrier_2.Wait(); - CompileRun(source_2); -} - - -void HostDispatchDebuggerThread::Run() { - const int kBufSize = 1000; - uint16_t buffer[kBufSize]; - - const char* command_1 = "{\"seq\":101," - "\"type\":\"request\"," - "\"command\":\"setbreakpoint\"," - "\"arguments\":{\"type\":\"function\",\"target\":\"cat\",\"line\":3}}"; - const char* command_2 = "{\"seq\":102," - "\"type\":\"request\"," - "\"command\":\"continue\"}"; - - v8::Isolate* isolate = CcTest::isolate(); - // v8 thread initializes, runs source_1 - host_dispatch_barriers->barrier_1.Wait(); - // 1: Set breakpoint in cat(). - v8::Debug::SendCommand(isolate, buffer, AsciiToUtf16(command_1, buffer)); - - host_dispatch_barriers->barrier_2.Wait(); - // v8 thread starts compiling source_2. - // Break happens, to run queued commands and host dispatches. - // Wait for host dispatch to be processed. 
- host_dispatch_barriers->semaphore_1.Wait(); - // 2: Continue evaluation - v8::Debug::SendCommand(isolate, buffer, AsciiToUtf16(command_2, buffer)); -} - - -TEST(DebuggerHostDispatch) { - HostDispatchDebuggerThread host_dispatch_debugger_thread; - HostDispatchV8Thread host_dispatch_v8_thread; - - // Create a V8 environment - Barriers stack_allocated_host_dispatch_barriers; - host_dispatch_barriers = &stack_allocated_host_dispatch_barriers; - - host_dispatch_v8_thread.Start(); - host_dispatch_debugger_thread.Start(); - - host_dispatch_v8_thread.Join(); - host_dispatch_debugger_thread.Join(); -} - - -/* Test DebugMessageDispatch */ -/* In this test, the V8 thread waits for a message from the debug thread. - * The DebugMessageDispatchHandler is executed from the debugger thread - * which signals the V8 thread to wake up. - */ - -class DebugMessageDispatchV8Thread : public v8::internal::Thread { - public: - DebugMessageDispatchV8Thread() : Thread("DebugMessageDispatchV8Thread") { } - void Run(); -}; - -class DebugMessageDispatchDebuggerThread : public v8::internal::Thread { - public: - DebugMessageDispatchDebuggerThread() - : Thread("DebugMessageDispatchDebuggerThread") { } - void Run(); -}; - -Barriers* debug_message_dispatch_barriers; - - -static void DebugMessageHandler() { - debug_message_dispatch_barriers->semaphore_1.Signal(); -} - - -void DebugMessageDispatchV8Thread::Run() { - v8::Isolate::Scope isolate_scope(CcTest::isolate()); - DebugLocalContext env; - v8::HandleScope scope(env->GetIsolate()); - - // Set up debug message dispatch handler. - v8::Debug::SetDebugMessageDispatchHandler(DebugMessageHandler); - - CompileRun("var y = 1 + 2;\n"); - debug_message_dispatch_barriers->barrier_1.Wait(); - debug_message_dispatch_barriers->semaphore_1.Wait(); - debug_message_dispatch_barriers->barrier_2.Wait(); -} - - -void DebugMessageDispatchDebuggerThread::Run() { - debug_message_dispatch_barriers->barrier_1.Wait(); - SendContinueCommand(); - debug_message_dispatch_barriers->barrier_2.Wait(); -} - - -TEST(DebuggerDebugMessageDispatch) { - DebugMessageDispatchDebuggerThread debug_message_dispatch_debugger_thread; - DebugMessageDispatchV8Thread debug_message_dispatch_v8_thread; - - // Create a V8 environment - Barriers stack_allocated_debug_message_dispatch_barriers; - debug_message_dispatch_barriers = - &stack_allocated_debug_message_dispatch_barriers; - - debug_message_dispatch_v8_thread.Start(); - debug_message_dispatch_debugger_thread.Start(); - - debug_message_dispatch_v8_thread.Join(); - debug_message_dispatch_debugger_thread.Join(); -} - - -TEST(DebuggerAgent) { - v8::V8::Initialize(); - i::Debugger* debugger = CcTest::i_isolate()->debugger(); - // Make sure these ports is not used by other tests to allow tests to run in - // parallel. - const int kPort1 = 5858 + FlagDependentPortOffset(); - const int kPort2 = 5857 + FlagDependentPortOffset(); - const int kPort3 = 5856 + FlagDependentPortOffset(); - - // Make a string with the port2 number. - const int kPortBufferLen = 6; - char port2_str[kPortBufferLen]; - OS::SNPrintF(i::Vector<char>(port2_str, kPortBufferLen), "%d", kPort2); - - bool ok; - - // Test starting and stopping the agent without any client connection. - debugger->StartAgent("test", kPort1); - debugger->StopAgent(); - // Test starting the agent, connecting a client and shutting down the agent - // with the client connected. 
- ok = debugger->StartAgent("test", kPort2); - CHECK(ok); - debugger->WaitForAgent(); - i::Socket* client = new i::Socket; - ok = client->Connect("localhost", port2_str); - CHECK(ok); - // It is important to wait for a message from the agent. Otherwise, - // we can close the server socket during "accept" syscall, making it failing - // (at least on Linux), and the test will work incorrectly. - char buf; - ok = client->Receive(&buf, 1) == 1; - CHECK(ok); - debugger->StopAgent(); - delete client; - - // Test starting and stopping the agent with the required port already - // occoupied. - i::Socket* server = new i::Socket; - ok = server->Bind(kPort3); - CHECK(ok); - - debugger->StartAgent("test", kPort3); - debugger->StopAgent(); - - delete server; -} - - -class DebuggerAgentProtocolServerThread : public i::Thread { - public: - explicit DebuggerAgentProtocolServerThread(int port) - : Thread("DebuggerAgentProtocolServerThread"), - port_(port), - server_(NULL), - client_(NULL), - listening_(0) { - } - ~DebuggerAgentProtocolServerThread() { - // Close both sockets. - delete client_; - delete server_; - } - - void Run(); - void WaitForListening() { listening_.Wait(); } - char* body() { return body_.get(); } - - private: - int port_; - i::SmartArrayPointer<char> body_; - i::Socket* server_; // Server socket used for bind/accept. - i::Socket* client_; // Single client connection used by the test. - i::Semaphore listening_; // Signalled when the server is in listen mode. -}; - - -void DebuggerAgentProtocolServerThread::Run() { - bool ok; - - // Create the server socket and bind it to the requested port. - server_ = new i::Socket; - CHECK(server_ != NULL); - ok = server_->Bind(port_); - CHECK(ok); - - // Listen for new connections. - ok = server_->Listen(1); - CHECK(ok); - listening_.Signal(); - - // Accept a connection. - client_ = server_->Accept(); - CHECK(client_ != NULL); - - // Receive a debugger agent protocol message. - i::DebuggerAgentUtil::ReceiveMessage(client_); -} - - -TEST(DebuggerAgentProtocolOverflowHeader) { - // Make sure this port is not used by other tests to allow tests to run in - // parallel. - const int kPort = 5860 + FlagDependentPortOffset(); - static const char* kLocalhost = "localhost"; - - // Make a string with the port number. - const int kPortBufferLen = 6; - char port_str[kPortBufferLen]; - OS::SNPrintF(i::Vector<char>(port_str, kPortBufferLen), "%d", kPort); - - // Create a socket server to receive a debugger agent message. - DebuggerAgentProtocolServerThread* server = - new DebuggerAgentProtocolServerThread(kPort); - server->Start(); - server->WaitForListening(); - - // Connect. - i::Socket* client = new i::Socket; - CHECK(client != NULL); - bool ok = client->Connect(kLocalhost, port_str); - CHECK(ok); - - // Send headers which overflow the receive buffer. - static const int kBufferSize = 1000; - char buffer[kBufferSize]; - - // Long key and short value: XXXX....XXXX:0\r\n. - for (int i = 0; i < kBufferSize - 4; i++) { - buffer[i] = 'X'; - } - buffer[kBufferSize - 4] = ':'; - buffer[kBufferSize - 3] = '0'; - buffer[kBufferSize - 2] = '\r'; - buffer[kBufferSize - 1] = '\n'; - int result = client->Send(buffer, kBufferSize); - CHECK_EQ(kBufferSize, result); - - // Short key and long value: X:XXXX....XXXX\r\n. 
- buffer[0] = 'X'; - buffer[1] = ':'; - for (int i = 2; i < kBufferSize - 2; i++) { - buffer[i] = 'X'; - } - buffer[kBufferSize - 2] = '\r'; - buffer[kBufferSize - 1] = '\n'; - result = client->Send(buffer, kBufferSize); - CHECK_EQ(kBufferSize, result); - - // Add empty body to request. - const char* content_length_zero_header = "Content-Length:0\r\n"; - int length = StrLength(content_length_zero_header); - result = client->Send(content_length_zero_header, length); - CHECK_EQ(length, result); - result = client->Send("\r\n", 2); - CHECK_EQ(2, result); - - // Wait until data is received. - server->Join(); - - // Check for empty body. - CHECK(server->body() == NULL); - - // Close the client before the server to avoid TIME_WAIT issues. - client->Shutdown(); - delete client; - delete server; -} - - // Test for issue http://code.google.com/p/v8/issues/detail?id=289. // Make sure that DebugGetLoadedScripts doesn't return scripts // with disposed external source. @@ -6189,7 +5846,7 @@ TEST(ScriptNameAndData) { frame_script_name_source, "frame_script_name"); - v8::Debug::SetDebugEventListener2(DebugEventBreakPointHitCount); + v8::Debug::SetDebugEventListener(DebugEventBreakPointHitCount); // Test function source. v8::Local<v8::String> script = v8::String::NewFromUtf8(env->GetIsolate(), @@ -6282,7 +5939,7 @@ TEST(ContextData) { context_1 = v8::Context::New(isolate, NULL, global_template, global_object); context_2 = v8::Context::New(isolate, NULL, global_template, global_object); - v8::Debug::SetMessageHandler2(ContextCheckMessageHandler); + v8::Debug::SetMessageHandler(ContextCheckMessageHandler); // Default data value is undefined. CHECK(context_1->GetEmbedderData(0)->IsUndefined()); @@ -6321,7 +5978,7 @@ TEST(ContextData) { // Two times compile event and two times break event. CHECK_GT(message_handler_hit_count, 4); - v8::Debug::SetMessageHandler2(NULL); + v8::Debug::SetMessageHandler(NULL); CheckDebuggerUnloaded(); } @@ -6350,7 +6007,7 @@ TEST(DebugBreakInMessageHandler) { DebugLocalContext env; v8::HandleScope scope(env->GetIsolate()); - v8::Debug::SetMessageHandler2(DebugBreakMessageHandler); + v8::Debug::SetMessageHandler(DebugBreakMessageHandler); // Test functions. const char* script = "function f() { debugger; g(); } function g() { }"; @@ -6430,7 +6087,7 @@ TEST(RegExpDebugBreak) { v8::Local<v8::Value> result = f->Call(env->Global(), argc, argv); CHECK_EQ(12, result->Int32Value()); - v8::Debug::SetDebugEventListener2(DebugEventDebugBreak); + v8::Debug::SetDebugEventListener(DebugEventDebugBreak); v8::Debug::DebugBreak(env->GetIsolate()); result = f->Call(env->Global(), argc, argv); @@ -6444,7 +6101,7 @@ TEST(RegExpDebugBreak) { // Common part of EvalContextData and NestedBreakEventContextData tests. static void ExecuteScriptForContextCheck( - v8::Debug::MessageHandler2 message_handler) { + v8::Debug::MessageHandler message_handler) { // Create a context. v8::Handle<v8::Context> context_1; v8::Handle<v8::ObjectTemplate> global_template = @@ -6452,7 +6109,7 @@ static void ExecuteScriptForContextCheck( context_1 = v8::Context::New(CcTest::isolate(), NULL, global_template); - v8::Debug::SetMessageHandler2(message_handler); + v8::Debug::SetMessageHandler(message_handler); // Default data value is undefined. 
CHECK(context_1->GetEmbedderData(0)->IsUndefined()); @@ -6475,7 +6132,7 @@ static void ExecuteScriptForContextCheck( f->Call(context_1->Global(), 0, NULL); } - v8::Debug::SetMessageHandler2(NULL); + v8::Debug::SetMessageHandler(NULL); } @@ -6562,120 +6219,6 @@ TEST(NestedBreakEventContextData) { } -// Debug event listener which counts the script collected events. -int script_collected_count = 0; -static void DebugEventScriptCollectedEvent( - const v8::Debug::EventDetails& event_details) { - v8::DebugEvent event = event_details.GetEvent(); - // Count the number of breaks. - if (event == v8::ScriptCollected) { - script_collected_count++; - } -} - - -// Test that scripts collected are reported through the debug event listener. -TEST(ScriptCollectedEvent) { - v8::internal::Debug* debug = CcTest::i_isolate()->debug(); - break_point_hit_count = 0; - script_collected_count = 0; - DebugLocalContext env; - v8::HandleScope scope(env->GetIsolate()); - - // Request the loaded scripts to initialize the debugger script cache. - debug->GetLoadedScripts(); - - // Do garbage collection to ensure that only the script in this test will be - // collected afterwards. - CcTest::heap()->CollectAllGarbage(Heap::kAbortIncrementalMarkingMask); - - script_collected_count = 0; - v8::Debug::SetDebugEventListener2(DebugEventScriptCollectedEvent); - { - v8::Script::Compile( - v8::String::NewFromUtf8(env->GetIsolate(), "eval('a=1')"))->Run(); - v8::Script::Compile( - v8::String::NewFromUtf8(env->GetIsolate(), "eval('a=2')"))->Run(); - } - - // Do garbage collection to collect the script above which is no longer - // referenced. - CcTest::heap()->CollectAllGarbage(Heap::kAbortIncrementalMarkingMask); - - CHECK_EQ(2, script_collected_count); - - v8::Debug::SetDebugEventListener2(NULL); - CheckDebuggerUnloaded(); -} - - -// Debug event listener which counts the script collected events. -int script_collected_message_count = 0; -static void ScriptCollectedMessageHandler(const v8::Debug::Message& message) { - // Count the number of scripts collected. - if (message.IsEvent() && message.GetEvent() == v8::ScriptCollected) { - script_collected_message_count++; - v8::Handle<v8::Context> context = message.GetEventContext(); - CHECK(context.IsEmpty()); - } -} - - -// Test that GetEventContext doesn't fail and return empty handle for -// ScriptCollected events. -TEST(ScriptCollectedEventContext) { - i::FLAG_stress_compaction = false; - v8::Isolate* isolate = CcTest::isolate(); - v8::internal::Debug* debug = - reinterpret_cast<v8::internal::Isolate*>(isolate)->debug(); - script_collected_message_count = 0; - v8::HandleScope scope(isolate); - - v8::Persistent<v8::Context> context; - { - v8::HandleScope scope(isolate); - context.Reset(isolate, v8::Context::New(isolate)); - } - - // Enter context. We can't have a handle to the context in the outer - // scope, so we have to do it the hard way. - { - v8::HandleScope scope(isolate); - v8::Local<v8::Context> local_context = - v8::Local<v8::Context>::New(isolate, context); - local_context->Enter(); - } - - // Request the loaded scripts to initialize the debugger script cache. - debug->GetLoadedScripts(); - - // Do garbage collection to ensure that only the script in this test will be - // collected afterwards. 
- CcTest::heap()->CollectAllGarbage(Heap::kAbortIncrementalMarkingMask); - - v8::Debug::SetMessageHandler2(ScriptCollectedMessageHandler); - v8::Script::Compile(v8::String::NewFromUtf8(isolate, "eval('a=1')"))->Run(); - v8::Script::Compile(v8::String::NewFromUtf8(isolate, "eval('a=2')"))->Run(); - - // Leave context - { - v8::HandleScope scope(isolate); - v8::Local<v8::Context> local_context = - v8::Local<v8::Context>::New(isolate, context); - local_context->Exit(); - } - context.Reset(); - - // Do garbage collection to collect the script above which is no longer - // referenced. - CcTest::heap()->CollectAllGarbage(Heap::kAbortIncrementalMarkingMask); - - CHECK_EQ(2, script_collected_message_count); - - v8::Debug::SetMessageHandler2(NULL); -} - - // Debug event listener which counts the after compile events. int after_compile_message_count = 0; static void AfterCompileMessageHandler(const v8::Debug::Message& message) { @@ -6698,18 +6241,18 @@ TEST(AfterCompileMessageWhenMessageHandlerIsReset) { after_compile_message_count = 0; const char* script = "var a=1"; - v8::Debug::SetMessageHandler2(AfterCompileMessageHandler); + v8::Debug::SetMessageHandler(AfterCompileMessageHandler); v8::Script::Compile(v8::String::NewFromUtf8(env->GetIsolate(), script)) ->Run(); - v8::Debug::SetMessageHandler2(NULL); + v8::Debug::SetMessageHandler(NULL); - v8::Debug::SetMessageHandler2(AfterCompileMessageHandler); + v8::Debug::SetMessageHandler(AfterCompileMessageHandler); v8::Debug::DebugBreak(env->GetIsolate()); v8::Script::Compile(v8::String::NewFromUtf8(env->GetIsolate(), script)) ->Run(); // Setting listener to NULL should cause debugger unload. - v8::Debug::SetMessageHandler2(NULL); + v8::Debug::SetMessageHandler(NULL); CheckDebuggerUnloaded(); // Compilation cache should be disabled when debugger is active. @@ -6717,6 +6260,60 @@ TEST(AfterCompileMessageWhenMessageHandlerIsReset) { } +// Syntax error event handler which counts a number of events. +int compile_error_event_count = 0; + +static void CompileErrorEventCounterClear() { + compile_error_event_count = 0; +} + +static void CompileErrorEventCounter( + const v8::Debug::EventDetails& event_details) { + v8::DebugEvent event = event_details.GetEvent(); + + if (event == v8::CompileError) { + compile_error_event_count++; + } +} + + +// Tests that syntax error event is sent as many times as there are scripts +// with syntax error compiled. +TEST(SyntaxErrorMessageOnSyntaxException) { + DebugLocalContext env; + v8::HandleScope scope(env->GetIsolate()); + + // For this test, we want to break on uncaught exceptions: + ChangeBreakOnException(false, true); + + v8::Debug::SetDebugEventListener(CompileErrorEventCounter); + + CompileErrorEventCounterClear(); + + // Check initial state. 
+ CHECK_EQ(0, compile_error_event_count); + + // Throws SyntaxError: Unexpected end of input + v8::Script::Compile(v8::String::NewFromUtf8(env->GetIsolate(), "+++")); + CHECK_EQ(1, compile_error_event_count); + + v8::Script::Compile( + v8::String::NewFromUtf8(env->GetIsolate(), "/sel\\/: \\")); + CHECK_EQ(2, compile_error_event_count); + + v8::Script::Compile( + v8::String::NewFromUtf8(env->GetIsolate(), "JSON.parse('1234:')")); + CHECK_EQ(2, compile_error_event_count); + + v8::Script::Compile( + v8::String::NewFromUtf8(env->GetIsolate(), "new RegExp('/\\/\\\\');")); + CHECK_EQ(2, compile_error_event_count); + + v8::Script::Compile(v8::String::NewFromUtf8(env->GetIsolate(), "throw 1;")); + CHECK_EQ(2, compile_error_event_count); +} + + // Tests that break event is sent when message handler is reset. TEST(BreakMessageWhenMessageHandlerIsReset) { DebugLocalContext env; @@ -6724,19 +6321,19 @@ TEST(BreakMessageWhenMessageHandlerIsReset) { after_compile_message_count = 0; const char* script = "function f() {};"; - v8::Debug::SetMessageHandler2(AfterCompileMessageHandler); + v8::Debug::SetMessageHandler(AfterCompileMessageHandler); v8::Script::Compile(v8::String::NewFromUtf8(env->GetIsolate(), script)) ->Run(); - v8::Debug::SetMessageHandler2(NULL); + v8::Debug::SetMessageHandler(NULL); - v8::Debug::SetMessageHandler2(AfterCompileMessageHandler); + v8::Debug::SetMessageHandler(AfterCompileMessageHandler); v8::Debug::DebugBreak(env->GetIsolate()); v8::Local<v8::Function> f = v8::Local<v8::Function>::Cast( env->Global()->Get(v8::String::NewFromUtf8(env->GetIsolate(), "f"))); f->Call(env->Global(), 0, NULL); // Setting message handler to NULL should cause debugger unload. - v8::Debug::SetMessageHandler2(NULL); + v8::Debug::SetMessageHandler(NULL); CheckDebuggerUnloaded(); // Compilation cache should be disabled when debugger is active. @@ -6764,18 +6361,18 @@ TEST(ExceptionMessageWhenMessageHandlerIsReset) { exception_event_count = 0; const char* script = "function f() {throw new Error()};"; - v8::Debug::SetMessageHandler2(AfterCompileMessageHandler); + v8::Debug::SetMessageHandler(AfterCompileMessageHandler); v8::Script::Compile(v8::String::NewFromUtf8(env->GetIsolate(), script)) ->Run(); - v8::Debug::SetMessageHandler2(NULL); + v8::Debug::SetMessageHandler(NULL); - v8::Debug::SetMessageHandler2(ExceptionMessageHandler); + v8::Debug::SetMessageHandler(ExceptionMessageHandler); v8::Local<v8::Function> f = v8::Local<v8::Function>::Cast( env->Global()->Get(v8::String::NewFromUtf8(env->GetIsolate(), "f"))); f->Call(env->Global(), 0, NULL); // Setting message handler to NULL should cause debugger unload. 
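
The new SyntaxErrorMessageOnSyntaxException test above pins down when the v8::CompileError debug event fires: only when the source handed to v8::Script::Compile fails to parse. The counter stops at 2 for the JSON.parse, RegExp and throw snippets because those sources parse successfully; any SyntaxError they might produce would be a runtime exception, and here they are merely compiled, never run. A stripped-down version of the same wiring (Demo and compile_errors are illustrative names):

  static int compile_errors = 0;

  static void OnCompileError(const v8::Debug::EventDetails& details) {
    if (details.GetEvent() == v8::CompileError) compile_errors++;
  }

  void Demo(v8::Isolate* isolate) {
    v8::Debug::SetDebugEventListener(OnCompileError);
    v8::Script::Compile(
        v8::String::NewFromUtf8(isolate, "+++"));          // parse error: count becomes 1
    v8::Script::Compile(
        v8::String::NewFromUtf8(isolate, "var ok = 1;"));  // parses fine: count unchanged
    v8::Debug::SetDebugEventListener(NULL);
  }
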
- v8::Debug::SetMessageHandler2(NULL); + v8::Debug::SetMessageHandler(NULL); CheckDebuggerUnloaded(); CHECK_EQ(1, exception_event_count); @@ -6799,7 +6396,7 @@ TEST(ProvisionalBreakpointOnLineOutOfRange) { SetScriptBreakPointByNameFromJS(env->GetIsolate(), resource_name, 5, 5); after_compile_message_count = 0; - v8::Debug::SetMessageHandler2(AfterCompileMessageHandler); + v8::Debug::SetMessageHandler(AfterCompileMessageHandler); v8::ScriptOrigin origin( v8::String::NewFromUtf8(env->GetIsolate(), resource_name), @@ -6817,7 +6414,7 @@ TEST(ProvisionalBreakpointOnLineOutOfRange) { ClearBreakPointFromJS(env->GetIsolate(), sbp1); ClearBreakPointFromJS(env->GetIsolate(), sbp2); - v8::Debug::SetMessageHandler2(NULL); + v8::Debug::SetMessageHandler(NULL); } @@ -6834,19 +6431,12 @@ static void BreakMessageHandler(const v8::Debug::Message& message) { } else if (message.IsEvent() && message.GetEvent() == v8::AfterCompile) { i::HandleScope scope(isolate); - bool is_debug_break = isolate->stack_guard()->IsDebugBreak(); - // Force DebugBreak flag while serializer is working. - isolate->stack_guard()->DebugBreak(); + int current_count = break_point_hit_count; // Force serialization to trigger some internal JS execution. message.GetJSON(); - // Restore previous state. - if (is_debug_break) { - isolate->stack_guard()->DebugBreak(); - } else { - isolate->stack_guard()->Continue(i::DEBUGBREAK); - } + CHECK_EQ(current_count, break_point_hit_count); } } @@ -6858,7 +6448,7 @@ TEST(NoDebugBreakInAfterCompileMessageHandler) { v8::HandleScope scope(env->GetIsolate()); // Register a debug event listener which sets the break flag and counts. - v8::Debug::SetMessageHandler2(BreakMessageHandler); + v8::Debug::SetMessageHandler(BreakMessageHandler); // Set the debug break flag. v8::Debug::DebugBreak(env->GetIsolate()); @@ -6877,7 +6467,7 @@ TEST(NoDebugBreakInAfterCompileMessageHandler) { CHECK_EQ(2, break_point_hit_count); // Get rid of the debug message handler. - v8::Debug::SetMessageHandler2(NULL); + v8::Debug::SetMessageHandler(NULL); CheckDebuggerUnloaded(); } @@ -6885,7 +6475,7 @@ TEST(NoDebugBreakInAfterCompileMessageHandler) { static int counting_message_handler_counter; static void CountingMessageHandler(const v8::Debug::Message& message) { - counting_message_handler_counter++; + if (message.IsResponse()) counting_message_handler_counter++; } @@ -6897,7 +6487,7 @@ TEST(ProcessDebugMessages) { counting_message_handler_counter = 0; - v8::Debug::SetMessageHandler2(CountingMessageHandler); + v8::Debug::SetMessageHandler(CountingMessageHandler); const int kBufferSize = 1000; uint16_t buffer[kBufferSize]; @@ -6927,7 +6517,84 @@ TEST(ProcessDebugMessages) { CHECK_GE(counting_message_handler_counter, 2); // Get rid of the debug message handler. - v8::Debug::SetMessageHandler2(NULL); + v8::Debug::SetMessageHandler(NULL); + CheckDebuggerUnloaded(); +} + + +class SendCommandThread : public v8::base::Thread { + public: + explicit SendCommandThread(v8::Isolate* isolate) + : Thread(Options("SendCommandThread")), + semaphore_(0), + isolate_(isolate) {} + + static void ProcessDebugMessages(v8::Isolate* isolate, void* data) { + v8::Debug::ProcessDebugMessages(); + reinterpret_cast<v8::base::Semaphore*>(data)->Signal(); + } + + virtual void Run() { + semaphore_.Wait(); + const int kBufferSize = 1000; + uint16_t buffer[kBufferSize]; + const char* scripts_command = + "{\"seq\":0," + "\"type\":\"request\"," + "\"command\":\"scripts\"}"; + int length = AsciiToUtf16(scripts_command, buffer); + // Send scripts command. 
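+    // Each iteration queues exactly one command, then interrupts the main
+    // thread to process it and waits on the semaphore before continuing,
+    // so the handler counter advances in lock step with the loop index.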
+ + for (int i = 0; i < 100; i++) { + CHECK_EQ(i, counting_message_handler_counter); + // Queue debug message. + v8::Debug::SendCommand(isolate_, buffer, length); + // Synchronize with the main thread to force message processing. + isolate_->RequestInterrupt(ProcessDebugMessages, &semaphore_); + semaphore_.Wait(); + } + + v8::V8::TerminateExecution(isolate_); + } + + void StartSending() { + semaphore_.Signal(); + } + + private: + v8::base::Semaphore semaphore_; + v8::Isolate* isolate_; +}; + + +static SendCommandThread* send_command_thread_ = NULL; + +static void StartSendingCommands( + const v8::FunctionCallbackInfo<v8::Value>& info) { + send_command_thread_->StartSending(); +} + + +TEST(ProcessDebugMessagesThreaded) { + DebugLocalContext env; + v8::Isolate* isolate = env->GetIsolate(); + v8::HandleScope scope(isolate); + + counting_message_handler_counter = 0; + + v8::Debug::SetMessageHandler(CountingMessageHandler); + send_command_thread_ = new SendCommandThread(isolate); + send_command_thread_->Start(); + + v8::Handle<v8::FunctionTemplate> start = + v8::FunctionTemplate::New(isolate, StartSendingCommands); + env->Global()->Set(v8_str("start"), start->GetFunction()); + + CompileRun("start(); while (true) { }"); + + CHECK_EQ(100, counting_message_handler_counter); + + v8::Debug::SetMessageHandler(NULL); CheckDebuggerUnloaded(); } @@ -6955,7 +6622,7 @@ TEST(Backtrace) { v8::Isolate* isolate = env->GetIsolate(); v8::HandleScope scope(isolate); - v8::Debug::SetMessageHandler2(BacktraceData::MessageHandler); + v8::Debug::SetMessageHandler(BacktraceData::MessageHandler); const int kBufferSize = 1000; uint16_t buffer[kBufferSize]; @@ -6989,7 +6656,7 @@ TEST(Backtrace) { CHECK_EQ(BacktraceData::frame_counter, 1); // Get rid of the debug message handler. - v8::Debug::SetMessageHandler2(NULL); + v8::Debug::SetMessageHandler(NULL); CheckDebuggerUnloaded(); } @@ -7029,7 +6696,7 @@ TEST(DebugBreakFunctionApply) { "foo"); // Register a debug event listener which steps and counts. - v8::Debug::SetDebugEventListener2(DebugEventBreakMax); + v8::Debug::SetDebugEventListener(DebugEventBreakMax); // Set the debug break flag before calling the code using function.apply. v8::Debug::DebugBreak(env->GetIsolate()); @@ -7043,7 +6710,7 @@ TEST(DebugBreakFunctionApply) { // When keeping the debug break several break will happen. CHECK_GT(break_point_hit_count, 1); - v8::Debug::SetDebugEventListener2(NULL); + v8::Debug::SetDebugEventListener(NULL); CheckDebuggerUnloaded(); } @@ -7112,7 +6779,7 @@ TEST(CallingContextIsNotDebugContext) { named->NewInstance()); // Register the debug event listener - v8::Debug::SetDebugEventListener2(DebugEventGetAtgumentPropertyValue); + v8::Debug::SetDebugEventListener(DebugEventGetAtgumentPropertyValue); // Create a function that invokes debugger. 
v8::Local<v8::Function> foo = CompileFunction( @@ -7125,7 +6792,7 @@ TEST(CallingContextIsNotDebugContext) { foo->Call(env->Global(), 0, NULL); CHECK_EQ(1, break_point_hit_count); - v8::Debug::SetDebugEventListener2(NULL); + v8::Debug::SetDebugEventListener(NULL); debugee_context = v8::Handle<v8::Context>(); debugger_context = v8::Handle<v8::Context>(); CheckDebuggerUnloaded(); @@ -7134,9 +6801,12 @@ TEST(CallingContextIsNotDebugContext) { TEST(DebugContextIsPreservedBetweenAccesses) { v8::HandleScope scope(CcTest::isolate()); + v8::Debug::SetDebugEventListener(DebugEventBreakPointHitCount); v8::Local<v8::Context> context1 = v8::Debug::GetDebugContext(); v8::Local<v8::Context> context2 = v8::Debug::GetDebugContext(); - CHECK_EQ(*context1, *context2); + CHECK(v8::Utils::OpenHandle(*context1).is_identical_to( + v8::Utils::OpenHandle(*context2))); + v8::Debug::SetDebugEventListener(NULL); } @@ -7153,13 +6823,13 @@ TEST(DebugEventContext) { v8::HandleScope scope(isolate); expected_context = v8::Context::New(isolate); expected_callback_data = v8::Int32::New(isolate, 2010); - v8::Debug::SetDebugEventListener2(DebugEventContextChecker, + v8::Debug::SetDebugEventListener(DebugEventContextChecker, expected_callback_data); v8::Context::Scope context_scope(expected_context); v8::Script::Compile( v8::String::NewFromUtf8(isolate, "(function(){debugger;})();"))->Run(); expected_context.Clear(); - v8::Debug::SetDebugEventListener2(NULL); + v8::Debug::SetDebugEventListener(NULL); expected_context_data = v8::Handle<v8::Value>(); CheckDebuggerUnloaded(); } @@ -7183,7 +6853,7 @@ TEST(DebugEventBreakData) { DebugLocalContext env; v8::Isolate* isolate = env->GetIsolate(); v8::HandleScope scope(isolate); - v8::Debug::SetDebugEventListener2(DebugEventBreakDataChecker); + v8::Debug::SetDebugEventListener(DebugEventBreakDataChecker); TestClientData::constructor_call_counter = 0; TestClientData::destructor_call_counter = 0; @@ -7191,7 +6861,7 @@ TEST(DebugEventBreakData) { expected_break_data = NULL; was_debug_event_called = false; was_debug_break_called = false; - v8::Debug::DebugBreakForCommand(NULL, isolate); + v8::Debug::DebugBreakForCommand(isolate, NULL); v8::Script::Compile(v8::String::NewFromUtf8(env->GetIsolate(), "(function(x){return x;})(1);")) ->Run(); @@ -7202,7 +6872,7 @@ TEST(DebugEventBreakData) { expected_break_data = data1; was_debug_event_called = false; was_debug_break_called = false; - v8::Debug::DebugBreakForCommand(data1, isolate); + v8::Debug::DebugBreakForCommand(isolate, data1); v8::Script::Compile(v8::String::NewFromUtf8(env->GetIsolate(), "(function(x){return x+1;})(1);")) ->Run(); @@ -7224,7 +6894,7 @@ TEST(DebugEventBreakData) { was_debug_event_called = false; was_debug_break_called = false; v8::Debug::DebugBreak(isolate); - v8::Debug::DebugBreakForCommand(data2, isolate); + v8::Debug::DebugBreakForCommand(isolate, data2); v8::Script::Compile(v8::String::NewFromUtf8(env->GetIsolate(), "(function(x){return x+3;})(1);")) ->Run(); @@ -7235,7 +6905,7 @@ TEST(DebugEventBreakData) { CHECK_EQ(TestClientData::constructor_call_counter, TestClientData::destructor_call_counter); - v8::Debug::SetDebugEventListener2(NULL); + v8::Debug::SetDebugEventListener(NULL); CheckDebuggerUnloaded(); } @@ -7289,7 +6959,7 @@ TEST(DeoptimizeDuringDebugBreak) { // This tests lazy deoptimization bailout for the stack check, as the first // time in function bar when using debug break and no break points will be at // the initial stack check. 
- v8::Debug::SetDebugEventListener2(DebugEventBreakDeoptimize); + v8::Debug::SetDebugEventListener(DebugEventBreakDeoptimize); // Compile and run function bar which will optimize it for some flag settings. v8::Script::Compile(v8::String::NewFromUtf8( @@ -7302,7 +6972,7 @@ TEST(DeoptimizeDuringDebugBreak) { CHECK(debug_event_break_deoptimize_done); - v8::Debug::SetDebugEventListener2(NULL); + v8::Debug::SetDebugEventListener(NULL); } @@ -7357,7 +7027,7 @@ static void DebugEventBreakWithOptimizedStack( static void ScheduleBreak(const v8::FunctionCallbackInfo<v8::Value>& args) { - v8::Debug::SetDebugEventListener2(DebugEventBreakWithOptimizedStack); + v8::Debug::SetDebugEventListener(DebugEventBreakWithOptimizedStack); v8::Debug::DebugBreak(args.GetIsolate()); } @@ -7415,9 +7085,9 @@ static void TestDebugBreakInLoop(const char* loop_head, terminate_after_max_break_point_hit = true; EmbeddedVector<char, 1024> buffer; - OS::SNPrintF(buffer, - "function f() {%s%s%s}", - loop_head, loop_bodies[i], loop_tail); + SNPrintF(buffer, + "function f() {%s%s%s}", + loop_head, loop_bodies[i], loop_tail); // Function with infinite loop. CompileRun(buffer.start()); @@ -7440,7 +7110,7 @@ TEST(DebugBreakLoop) { v8::HandleScope scope(env->GetIsolate()); // Register a debug event listener which sets the break flag and counts. - v8::Debug::SetDebugEventListener2(DebugEventBreakMax); + v8::Debug::SetDebugEventListener(DebugEventBreakMax); // Create a function for getting the frame count when hitting the break. frame_count = CompileFunction(&env, frame_count_source, "frame_count"); @@ -7474,7 +7144,7 @@ TEST(DebugBreakLoop) { TestDebugBreakInLoop("for (;a == 1;) {", loop_bodies, "}"); // Get rid of the debug event listener. - v8::Debug::SetDebugEventListener2(NULL); + v8::Debug::SetDebugEventListener(NULL); CheckDebuggerUnloaded(); } @@ -7496,7 +7166,7 @@ static void DebugBreakInlineListener( int break_id = CcTest::i_isolate()->debug()->break_id(); char script[128]; i::Vector<char> script_vector(script, sizeof(script)); - OS::SNPrintF(script_vector, "%%GetFrameCount(%d)", break_id); + SNPrintF(script_vector, "%%GetFrameCount(%d)", break_id); v8::Local<v8::Value> result = CompileRun(script); int frame_count = result->Int32Value(); @@ -7505,12 +7175,12 @@ static void DebugBreakInlineListener( for (int i = 0; i < frame_count; i++) { // The 5. element in the returned array of GetFrameDetails contains the // source position of that frame. - OS::SNPrintF(script_vector, "%%GetFrameDetails(%d, %d)[5]", break_id, i); + SNPrintF(script_vector, "%%GetFrameDetails(%d, %d)[5]", break_id, i); v8::Local<v8::Value> result = CompileRun(script); CHECK_EQ(expected_line_number[i], i::Script::GetLineNumber(source_script, result->Int32Value())); } - v8::Debug::SetDebugEventListener2(NULL); + v8::Debug::SetDebugEventListener(NULL); v8::V8::TerminateExecution(CcTest::isolate()); } @@ -7533,7 +7203,7 @@ TEST(DebugBreakInline) { "g(false); \n" "%OptimizeFunctionOnNextCall(g); \n" "g(true);"; - v8::Debug::SetDebugEventListener2(DebugBreakInlineListener); + v8::Debug::SetDebugEventListener(DebugBreakInlineListener); inline_script = v8::Script::Compile(v8::String::NewFromUtf8(env->GetIsolate(), source)); inline_script->Run(); @@ -7567,7 +7237,7 @@ TEST(Regress131642) { // on the stack. DebugLocalContext env; v8::HandleScope scope(env->GetIsolate()); - v8::Debug::SetDebugEventListener2(DebugEventStepNext); + v8::Debug::SetDebugEventListener(DebugEventStepNext); // We step through the first script. It exits through an exception. 
We run // this inside a new frame to record a different FP than the second script @@ -7579,7 +7249,7 @@ TEST(Regress131642) { const char* script_2 = "[0].forEach(function() { });"; CompileRun(script_2); - v8::Debug::SetDebugEventListener2(NULL); + v8::Debug::SetDebugEventListener(NULL); } @@ -7596,15 +7266,15 @@ TEST(DebuggerCreatesContextIffActive) { v8::HandleScope scope(env->GetIsolate()); CHECK_EQ(1, CountNativeContexts()); - v8::Debug::SetDebugEventListener2(NULL); + v8::Debug::SetDebugEventListener(NULL); CompileRun("debugger;"); CHECK_EQ(1, CountNativeContexts()); - v8::Debug::SetDebugEventListener2(NopListener); + v8::Debug::SetDebugEventListener(NopListener); CompileRun("debugger;"); CHECK_EQ(2, CountNativeContexts()); - v8::Debug::SetDebugEventListener2(NULL); + v8::Debug::SetDebugEventListener(NULL); } @@ -7612,7 +7282,7 @@ TEST(LiveEditEnabled) { v8::internal::FLAG_allow_natives_syntax = true; LocalContext env; v8::HandleScope scope(env->GetIsolate()); - v8::Debug::SetLiveEditEnabled(true, env->GetIsolate()); + v8::Debug::SetLiveEditEnabled(env->GetIsolate(), true); CompileRun("%LiveEditCompareStrings('', '')"); } @@ -7621,7 +7291,7 @@ TEST(LiveEditDisabled) { v8::internal::FLAG_allow_natives_syntax = true; LocalContext env; v8::HandleScope scope(env->GetIsolate()); - v8::Debug::SetLiveEditEnabled(false, env->GetIsolate()); + v8::Debug::SetLiveEditEnabled(env->GetIsolate(), false); CompileRun("%LiveEditCompareStrings('', '')"); } @@ -7634,7 +7304,7 @@ TEST(PrecompiledFunction) { DebugLocalContext env; v8::HandleScope scope(env->GetIsolate()); env.ExposeDebug(); - v8::Debug::SetDebugEventListener2(DebugBreakInlineListener); + v8::Debug::SetDebugEventListener(DebugBreakInlineListener); v8::Local<v8::Function> break_here = CompileFunction(&env, "function break_here(){}", "break_here"); @@ -7651,12 +7321,12 @@ TEST(PrecompiledFunction) { "}; \n" "a = b = c = 2; \n" "bar(); \n"; - v8::Local<v8::Value> result = PreCompileCompileRun(source); + v8::Local<v8::Value> result = ParserCacheCompileRun(source); CHECK(result->IsString()); v8::String::Utf8Value utf8(result); CHECK_EQ("bar", *utf8); - v8::Debug::SetDebugEventListener2(NULL); + v8::Debug::SetDebugEventListener(NULL); CheckDebuggerUnloaded(); } @@ -7675,7 +7345,7 @@ static void AddDebugBreak(const v8::FunctionCallbackInfo<v8::Value>& args) { TEST(DebugBreakStackTrace) { DebugLocalContext env; v8::HandleScope scope(env->GetIsolate()); - v8::Debug::SetDebugEventListener2(DebugBreakStackTraceListener); + v8::Debug::SetDebugEventListener(DebugBreakStackTraceListener); v8::Handle<v8::FunctionTemplate> add_debug_break_template = v8::FunctionTemplate::New(env->GetIsolate(), AddDebugBreak); v8::Handle<v8::Function> add_debug_break = @@ -7690,3 +7360,48 @@ TEST(DebugBreakStackTrace) { " }" "})()"); } + + +v8::base::Semaphore terminate_requested_semaphore(0); +v8::base::Semaphore terminate_fired_semaphore(0); +bool terminate_already_fired = false; + + +static void DebugBreakTriggerTerminate( + const v8::Debug::EventDetails& event_details) { + if (event_details.GetEvent() != v8::Break || terminate_already_fired) return; + terminate_requested_semaphore.Signal(); + // Wait for at most 2 seconds for the terminate request. 
+ CHECK(terminate_fired_semaphore.WaitFor(v8::base::TimeDelta::FromSeconds(2))); + terminate_already_fired = true; +} + + +class TerminationThread : public v8::base::Thread { + public: + explicit TerminationThread(v8::Isolate* isolate) + : Thread(Options("terminator")), isolate_(isolate) {} + + virtual void Run() { + terminate_requested_semaphore.Wait(); + v8::V8::TerminateExecution(isolate_); + terminate_fired_semaphore.Signal(); + } + + private: + v8::Isolate* isolate_; +}; + + +TEST(DebugBreakOffThreadTerminate) { + DebugLocalContext env; + v8::Isolate* isolate = env->GetIsolate(); + v8::HandleScope scope(isolate); + v8::Debug::SetDebugEventListener(DebugBreakTriggerTerminate); + TerminationThread terminator(isolate); + terminator.Start(); + v8::TryCatch try_catch; + v8::Debug::DebugBreak(isolate); + CompileRun("while (true);"); + CHECK(try_catch.HasTerminated()); +} diff --git a/deps/v8/test/cctest/test-declarative-accessors.cc b/deps/v8/test/cctest/test-declarative-accessors.cc index f2169a9fb81..8d93245eb62 100644 --- a/deps/v8/test/cctest/test-declarative-accessors.cc +++ b/deps/v8/test/cctest/test-declarative-accessors.cc @@ -27,9 +27,9 @@ #include <stdlib.h> -#include "v8.h" +#include "src/v8.h" -#include "cctest.h" +#include "test/cctest/cctest.h" using namespace v8::internal; @@ -37,7 +37,7 @@ using namespace v8::internal; class HandleArray : public Malloced { public: static const unsigned kArraySize = 200; - explicit HandleArray() {} + HandleArray() {} ~HandleArray() { Reset(); } void Reset() { for (unsigned i = 0; i < kArraySize; i++) { diff --git a/deps/v8/test/cctest/test-decls.cc b/deps/v8/test/cctest/test-decls.cc index d6738a31aec..34f0b69643e 100644 --- a/deps/v8/test/cctest/test-decls.cc +++ b/deps/v8/test/cctest/test-decls.cc @@ -27,10 +27,10 @@ #include <stdlib.h> -#include "v8.h" +#include "src/v8.h" -#include "heap.h" -#include "cctest.h" +#include "src/heap/heap.h" +#include "test/cctest/cctest.h" using namespace v8; @@ -236,17 +236,14 @@ TEST(Unknown) { { DeclarationContext context; context.Check("var x; x", 1, // access - 1, // declaration - 2, // declaration + initialization - EXPECT_RESULT, Undefined(CcTest::isolate())); + 0, 0, EXPECT_RESULT, Undefined(CcTest::isolate())); } { DeclarationContext context; context.Check("var x = 0; x", 1, // access - 2, // declaration + initialization - 2, // declaration + initialization - EXPECT_RESULT, Number::New(CcTest::isolate(), 0)); + 1, // initialization + 0, EXPECT_RESULT, Number::New(CcTest::isolate(), 0)); } { DeclarationContext context; @@ -260,78 +257,19 @@ TEST(Unknown) { { DeclarationContext context; context.Check("const x; x", 1, // access - 2, // declaration + initialization - 1, // declaration - EXPECT_RESULT, Undefined(CcTest::isolate())); + 0, 0, EXPECT_RESULT, Undefined(CcTest::isolate())); } { DeclarationContext context; - // SB 0 - BUG 1213579 context.Check("const x = 0; x", - 1, // access - 2, // declaration + initialization - 1, // declaration - EXPECT_RESULT, Undefined(CcTest::isolate())); - } -} - - - -class PresentPropertyContext: public DeclarationContext { - protected: - virtual v8::Handle<Integer> Query(Local<String> key) { - return Integer::New(isolate(), v8::None); - } -}; - - - -TEST(Present) { - HandleScope scope(CcTest::isolate()); - - { PresentPropertyContext context; - context.Check("var x; x", 1, // access 0, - 2, // declaration + initialization - EXPECT_EXCEPTION); // x is not defined! 
- } - - { PresentPropertyContext context; - context.Check("var x = 0; x", - 1, // access - 1, // initialization - 2, // declaration + initialization - EXPECT_RESULT, Number::New(CcTest::isolate(), 0)); - } - - { PresentPropertyContext context; - context.Check("function x() { }; x", - 1, // access 0, - 0, - EXPECT_RESULT); - } - - { PresentPropertyContext context; - context.Check("const x; x", - 1, // access - 1, // initialization - 1, // (re-)declaration - EXPECT_RESULT, Undefined(CcTest::isolate())); - } - - { PresentPropertyContext context; - context.Check("const x = 0; x", - 1, // access - 1, // initialization - 1, // (re-)declaration EXPECT_RESULT, Number::New(CcTest::isolate(), 0)); } } - class AbsentPropertyContext: public DeclarationContext { protected: virtual v8::Handle<Integer> Query(Local<String> key) { @@ -348,17 +286,14 @@ TEST(Absent) { { AbsentPropertyContext context; context.Check("var x; x", 1, // access - 1, // declaration - 2, // declaration + initialization - EXPECT_RESULT, Undefined(isolate)); + 0, 0, EXPECT_RESULT, Undefined(isolate)); } { AbsentPropertyContext context; context.Check("var x = 0; x", 1, // access - 2, // declaration + initialization - 2, // declaration + initialization - EXPECT_RESULT, Number::New(isolate, 0)); + 1, // initialization + 0, EXPECT_RESULT, Number::New(isolate, 0)); } { AbsentPropertyContext context; @@ -372,25 +307,19 @@ TEST(Absent) { { AbsentPropertyContext context; context.Check("const x; x", 1, // access - 2, // declaration + initialization - 1, // declaration - EXPECT_RESULT, Undefined(isolate)); + 0, 0, EXPECT_RESULT, Undefined(isolate)); } { AbsentPropertyContext context; context.Check("const x = 0; x", 1, // access - 2, // declaration + initialization - 1, // declaration - EXPECT_RESULT, Undefined(isolate)); // SB 0 - BUG 1213579 + 0, 0, EXPECT_RESULT, Number::New(isolate, 0)); } { AbsentPropertyContext context; context.Check("if (false) { var x = 0 }; x", 1, // access - 1, // declaration - 1, // declaration + initialization - EXPECT_RESULT, Undefined(isolate)); + 0, 0, EXPECT_RESULT, Undefined(isolate)); } } @@ -439,17 +368,14 @@ TEST(Appearing) { { AppearingPropertyContext context; context.Check("var x; x", 1, // access - 1, // declaration - 2, // declaration + initialization - EXPECT_RESULT, Undefined(CcTest::isolate())); + 0, 0, EXPECT_RESULT, Undefined(CcTest::isolate())); } { AppearingPropertyContext context; context.Check("var x = 0; x", 1, // access - 2, // declaration + initialization - 2, // declaration + initialization - EXPECT_RESULT, Number::New(CcTest::isolate(), 0)); + 1, // initialization + 0, EXPECT_RESULT, Number::New(CcTest::isolate(), 0)); } { AppearingPropertyContext context; @@ -463,78 +389,13 @@ TEST(Appearing) { { AppearingPropertyContext context; context.Check("const x; x", 1, // access - 2, // declaration + initialization - 1, // declaration - EXPECT_RESULT, Undefined(CcTest::isolate())); + 0, 0, EXPECT_RESULT, Undefined(CcTest::isolate())); } { AppearingPropertyContext context; context.Check("const x = 0; x", 1, // access - 2, // declaration + initialization - 1, // declaration - EXPECT_RESULT, Undefined(CcTest::isolate())); - // Result is undefined because declaration succeeded but - // initialization to 0 failed (due to context behavior). 
- } -} - - - -class ReappearingPropertyContext: public DeclarationContext { - public: - enum State { - DECLARE, - DONT_DECLARE, - INITIALIZE, - UNKNOWN - }; - - ReappearingPropertyContext() : state_(DECLARE) { } - - protected: - virtual v8::Handle<Integer> Query(Local<String> key) { - switch (state_) { - case DECLARE: - // Force the first declaration by returning that - // the property is absent. - state_ = DONT_DECLARE; - return Handle<Integer>(); - case DONT_DECLARE: - // Ignore the second declaration by returning - // that the property is already there. - state_ = INITIALIZE; - return Integer::New(isolate(), v8::None); - case INITIALIZE: - // Force an initialization by returning that - // the property is absent. This will make sure - // that the setter is called and it will not - // lead to redeclaration conflicts (yet). - state_ = UNKNOWN; - return Handle<Integer>(); - default: - CHECK(state_ == UNKNOWN); - break; - } - // Do the lookup in the object. - return Handle<Integer>(); - } - - private: - State state_; -}; - - -TEST(Reappearing) { - v8::V8::Initialize(); - HandleScope scope(CcTest::isolate()); - - { ReappearingPropertyContext context; - context.Check("const x; var x = 0", - 0, - 3, // const declaration+initialization, var initialization - 3, // 2 x declaration + var initialization - EXPECT_RESULT, Undefined(CcTest::isolate())); + 0, 0, EXPECT_RESULT, Number::New(CcTest::isolate(), 0)); } } @@ -562,11 +423,8 @@ TEST(ExistsInPrototype) { // Sanity check to make sure that the holder of the interceptor // really is the prototype object. { ExistsInPrototypeContext context; - context.Check("this.x = 87; this.x", - 0, - 0, - 0, - EXPECT_RESULT, Number::New(CcTest::isolate(), 87)); + context.Check("this.x = 87; this.x", 0, 0, 1, EXPECT_RESULT, + Number::New(CcTest::isolate(), 87)); } { ExistsInPrototypeContext context; @@ -669,19 +527,13 @@ TEST(ExistsInHiddenPrototype) { HandleScope scope(CcTest::isolate()); { ExistsInHiddenPrototypeContext context; - context.Check("var x; x", - 1, // access - 0, - 2, // declaration + initialization - EXPECT_EXCEPTION); // x is not defined! + context.Check("var x; x", 0, 0, 0, EXPECT_RESULT, + Undefined(CcTest::isolate())); } { ExistsInHiddenPrototypeContext context; - context.Check("var x = 0; x", - 1, // access - 1, // initialization - 2, // declaration + initialization - EXPECT_RESULT, Number::New(CcTest::isolate(), 0)); + context.Check("var x = 0; x", 0, 0, 0, EXPECT_RESULT, + Number::New(CcTest::isolate(), 0)); } { ExistsInHiddenPrototypeContext context; @@ -694,20 +546,14 @@ TEST(ExistsInHiddenPrototype) { // TODO(mstarzinger): The semantics of global const is vague. { ExistsInHiddenPrototypeContext context; - context.Check("const x; x", - 0, - 0, - 1, // (re-)declaration - EXPECT_RESULT, Undefined(CcTest::isolate())); + context.Check("const x; x", 0, 0, 0, EXPECT_RESULT, + Undefined(CcTest::isolate())); } // TODO(mstarzinger): The semantics of global const is vague. 
{ ExistsInHiddenPrototypeContext context; - context.Check("const x = 0; x", - 0, - 0, - 1, // (re-)declaration - EXPECT_RESULT, Number::New(CcTest::isolate(), 0)); + context.Check("const x = 0; x", 0, 0, 0, EXPECT_RESULT, + Number::New(CcTest::isolate(), 0)); } } @@ -768,10 +614,8 @@ TEST(CrossScriptReferences) { EXPECT_RESULT, Number::New(isolate, 1)); context.Check("var x = 2; x", EXPECT_RESULT, Number::New(isolate, 2)); - context.Check("const x = 3; x", - EXPECT_RESULT, Number::New(isolate, 3)); - context.Check("const x = 4; x", - EXPECT_RESULT, Number::New(isolate, 4)); + context.Check("const x = 3; x", EXPECT_EXCEPTION); + context.Check("const x = 4; x", EXPECT_EXCEPTION); context.Check("x = 5; x", EXPECT_RESULT, Number::New(isolate, 5)); context.Check("var x = 6; x", @@ -787,8 +631,7 @@ TEST(CrossScriptReferences) { EXPECT_RESULT, Number::New(isolate, 1)); context.Check("var x = 2; x", // assignment ignored EXPECT_RESULT, Number::New(isolate, 1)); - context.Check("const x = 3; x", - EXPECT_RESULT, Number::New(isolate, 1)); + context.Check("const x = 3; x", EXPECT_EXCEPTION); context.Check("x = 4; x", // assignment ignored EXPECT_RESULT, Number::New(isolate, 1)); context.Check("var x = 5; x", // assignment ignored diff --git a/deps/v8/test/cctest/test-deoptimization.cc b/deps/v8/test/cctest/test-deoptimization.cc index dbbb3edb097..3127acc6a65 100644 --- a/deps/v8/test/cctest/test-deoptimization.cc +++ b/deps/v8/test/cctest/test-deoptimization.cc @@ -27,23 +27,23 @@ #include <stdlib.h> -#include "v8.h" - -#include "api.h" -#include "cctest.h" -#include "compilation-cache.h" -#include "debug.h" -#include "deoptimizer.h" -#include "isolate.h" -#include "platform.h" -#include "stub-cache.h" - +#include "src/v8.h" + +#include "src/api.h" +#include "src/base/platform/platform.h" +#include "src/compilation-cache.h" +#include "src/debug.h" +#include "src/deoptimizer.h" +#include "src/isolate.h" +#include "src/stub-cache.h" +#include "test/cctest/cctest.h" + +using ::v8::base::OS; using ::v8::internal::Deoptimizer; using ::v8::internal::EmbeddedVector; using ::v8::internal::Handle; using ::v8::internal::Isolate; using ::v8::internal::JSFunction; -using ::v8::internal::OS; using ::v8::internal::Object; // Size of temp buffer for formatting small strings. 
@@ -113,6 +113,8 @@ static Handle<JSFunction> GetJSFunction(v8::Handle<v8::Object> obj, TEST(DeoptimizeSimple) { + i::FLAG_turbo_deoptimization = true; + LocalContext env; v8::HandleScope scope(env->GetIsolate()); @@ -151,6 +153,8 @@ TEST(DeoptimizeSimple) { TEST(DeoptimizeSimpleWithArguments) { + i::FLAG_turbo_deoptimization = true; + LocalContext env; v8::HandleScope scope(env->GetIsolate()); @@ -190,6 +194,8 @@ TEST(DeoptimizeSimpleWithArguments) { TEST(DeoptimizeSimpleNested) { + i::FLAG_turbo_deoptimization = true; + LocalContext env; v8::HandleScope scope(env->GetIsolate()); @@ -215,6 +221,7 @@ TEST(DeoptimizeSimpleNested) { TEST(DeoptimizeRecursive) { + i::FLAG_turbo_deoptimization = true; LocalContext env; v8::HandleScope scope(env->GetIsolate()); @@ -242,6 +249,7 @@ TEST(DeoptimizeRecursive) { TEST(DeoptimizeMultiple) { + i::FLAG_turbo_deoptimization = true; LocalContext env; v8::HandleScope scope(env->GetIsolate()); @@ -270,6 +278,7 @@ TEST(DeoptimizeMultiple) { TEST(DeoptimizeConstructor) { + i::FLAG_turbo_deoptimization = true; LocalContext env; v8::HandleScope scope(env->GetIsolate()); @@ -308,6 +317,7 @@ TEST(DeoptimizeConstructor) { TEST(DeoptimizeConstructorMultiple) { + i::FLAG_turbo_deoptimization = true; LocalContext env; v8::HandleScope scope(env->GetIsolate()); @@ -337,6 +347,7 @@ TEST(DeoptimizeConstructorMultiple) { TEST(DeoptimizeBinaryOperationADDString) { + i::FLAG_turbo_deoptimization = true; i::FLAG_concurrent_recompilation = false; AllowNativesSyntaxNoInlining options; LocalContext env; @@ -397,9 +408,9 @@ static void CompileConstructorWithDeoptimizingValueOf() { static void TestDeoptimizeBinaryOpHelper(LocalContext* env, const char* binary_op) { EmbeddedVector<char, SMALL_STRING_BUFFER_SIZE> f_source_buffer; - OS::SNPrintF(f_source_buffer, - "function f(x, y) { return x %s y; };", - binary_op); + SNPrintF(f_source_buffer, + "function f(x, y) { return x %s y; };", + binary_op); char* f_source = f_source_buffer.start(); AllowNativesSyntaxNoInlining options; @@ -428,6 +439,7 @@ static void TestDeoptimizeBinaryOpHelper(LocalContext* env, TEST(DeoptimizeBinaryOperationADD) { + i::FLAG_turbo_deoptimization = true; i::FLAG_concurrent_recompilation = false; LocalContext env; v8::HandleScope scope(env->GetIsolate()); @@ -441,6 +453,7 @@ TEST(DeoptimizeBinaryOperationADD) { TEST(DeoptimizeBinaryOperationSUB) { + i::FLAG_turbo_deoptimization = true; i::FLAG_concurrent_recompilation = false; LocalContext env; v8::HandleScope scope(env->GetIsolate()); @@ -454,6 +467,7 @@ TEST(DeoptimizeBinaryOperationSUB) { TEST(DeoptimizeBinaryOperationMUL) { + i::FLAG_turbo_deoptimization = true; i::FLAG_concurrent_recompilation = false; LocalContext env; v8::HandleScope scope(env->GetIsolate()); @@ -467,6 +481,7 @@ TEST(DeoptimizeBinaryOperationMUL) { TEST(DeoptimizeBinaryOperationDIV) { + i::FLAG_turbo_deoptimization = true; i::FLAG_concurrent_recompilation = false; LocalContext env; v8::HandleScope scope(env->GetIsolate()); @@ -480,6 +495,7 @@ TEST(DeoptimizeBinaryOperationDIV) { TEST(DeoptimizeBinaryOperationMOD) { + i::FLAG_turbo_deoptimization = true; i::FLAG_concurrent_recompilation = false; LocalContext env; v8::HandleScope scope(env->GetIsolate()); @@ -493,6 +509,7 @@ TEST(DeoptimizeBinaryOperationMOD) { TEST(DeoptimizeCompare) { + i::FLAG_turbo_deoptimization = true; i::FLAG_concurrent_recompilation = false; LocalContext env; v8::HandleScope scope(env->GetIsolate()); @@ -537,6 +554,7 @@ TEST(DeoptimizeCompare) { TEST(DeoptimizeLoadICStoreIC) { + i::FLAG_turbo_deoptimization = 
true; i::FLAG_concurrent_recompilation = false; LocalContext env; v8::HandleScope scope(env->GetIsolate()); @@ -617,6 +635,7 @@ TEST(DeoptimizeLoadICStoreIC) { TEST(DeoptimizeLoadICStoreICNested) { + i::FLAG_turbo_deoptimization = true; i::FLAG_concurrent_recompilation = false; LocalContext env; v8::HandleScope scope(env->GetIsolate()); diff --git a/deps/v8/test/cctest/test-dictionary.cc b/deps/v8/test/cctest/test-dictionary.cc index aa1bc862318..9a1914237fe 100644 --- a/deps/v8/test/cctest/test-dictionary.cc +++ b/deps/v8/test/cctest/test-dictionary.cc @@ -25,16 +25,16 @@ // (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE // OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. -#include "v8.h" +#include "src/v8.h" +#include "test/cctest/cctest.h" -#include "api.h" -#include "debug.h" -#include "execution.h" -#include "factory.h" -#include "macro-assembler.h" -#include "objects.h" -#include "global-handles.h" -#include "cctest.h" +#include "src/api.h" +#include "src/debug.h" +#include "src/execution.h" +#include "src/factory.h" +#include "src/global-handles.h" +#include "src/macro-assembler.h" +#include "src/objects.h" using namespace v8::internal; @@ -51,6 +51,7 @@ static void TestHashMap(Handle<HashMap> table) { table = HashMap::Put(table, a, b); CHECK_EQ(table->NumberOfElements(), 1); CHECK_EQ(table->Lookup(a), *b); + // When the key does not exist in the map, Lookup returns the hole. CHECK_EQ(table->Lookup(b), CcTest::heap()->the_hole_value()); // Keys still have to be valid after objects were moved. @@ -64,8 +65,10 @@ static void TestHashMap(Handle<HashMap> table) { CHECK_EQ(table->NumberOfElements(), 1); CHECK_NE(table->Lookup(a), *b); - // Keys mapped to the hole should be removed permanently. - table = HashMap::Put(table, a, factory->the_hole_value()); + // Keys that have been removed are mapped to the hole. + bool was_present = false; + table = HashMap::Remove(table, a, &was_present); + CHECK(was_present); CHECK_EQ(table->NumberOfElements(), 0); CHECK_EQ(table->Lookup(a), CcTest::heap()->the_hole_value()); @@ -187,7 +190,9 @@ static void TestHashSetCausesGC(Handle<HashSet> table) { CHECK(gc_count == isolate->heap()->gc_count()); // Calling Remove() will not cause GC in this case. - table = HashSet::Remove(table, key); + bool was_present = false; + table = HashSet::Remove(table, key, &was_present); + CHECK(!was_present); CHECK(gc_count == isolate->heap()->gc_count()); // Calling Add() should cause GC. 
diff --git a/deps/v8/test/cctest/test-disasm-arm.cc b/deps/v8/test/cctest/test-disasm-arm.cc index 24453bc88eb..c1f6ce26907 100644 --- a/deps/v8/test/cctest/test-disasm-arm.cc +++ b/deps/v8/test/cctest/test-disasm-arm.cc @@ -28,14 +28,14 @@ #include <stdlib.h> -#include "v8.h" - -#include "debug.h" -#include "disasm.h" -#include "disassembler.h" -#include "macro-assembler.h" -#include "serialize.h" -#include "cctest.h" +#include "src/v8.h" + +#include "src/debug.h" +#include "src/disasm.h" +#include "src/disassembler.h" +#include "src/macro-assembler.h" +#include "src/serialize.h" +#include "test/cctest/cctest.h" using namespace v8::internal; diff --git a/deps/v8/test/cctest/test-disasm-arm64.cc b/deps/v8/test/cctest/test-disasm-arm64.cc index 23f7b6daf26..fb01347c6ac 100644 --- a/deps/v8/test/cctest/test-disasm-arm64.cc +++ b/deps/v8/test/cctest/test-disasm-arm64.cc @@ -27,16 +27,17 @@ #include <stdio.h> #include <cstring> -#include "cctest.h" -#include "v8.h" +#include "src/v8.h" +#include "test/cctest/cctest.h" -#include "macro-assembler.h" -#include "arm64/assembler-arm64.h" -#include "arm64/macro-assembler-arm64.h" -#include "arm64/decoder-arm64-inl.h" -#include "arm64/disasm-arm64.h" -#include "arm64/utils-arm64.h" +#include "src/macro-assembler.h" + +#include "src/arm64/assembler-arm64.h" +#include "src/arm64/decoder-arm64-inl.h" +#include "src/arm64/disasm-arm64.h" +#include "src/arm64/macro-assembler-arm64.h" +#include "src/arm64/utils-arm64.h" using namespace v8::internal; @@ -1601,7 +1602,7 @@ TEST_(system_nop) { TEST_(debug) { SET_UP(); - ASSERT(kImmExceptionIsDebug == 0xdeb0); + DCHECK(kImmExceptionIsDebug == 0xdeb0); // All debug codes should produce the same instruction, and the debug code // can be any uint32_t. diff --git a/deps/v8/test/cctest/test-disasm-ia32.cc b/deps/v8/test/cctest/test-disasm-ia32.cc index 6972aeabadc..8436df7c5ab 100644 --- a/deps/v8/test/cctest/test-disasm-ia32.cc +++ b/deps/v8/test/cctest/test-disasm-ia32.cc @@ -27,15 +27,15 @@ #include <stdlib.h> -#include "v8.h" +#include "src/v8.h" -#include "debug.h" -#include "disasm.h" -#include "disassembler.h" -#include "macro-assembler.h" -#include "serialize.h" -#include "stub-cache.h" -#include "cctest.h" +#include "src/debug.h" +#include "src/disasm.h" +#include "src/disassembler.h" +#include "src/macro-assembler.h" +#include "src/serialize.h" +#include "src/stub-cache.h" +#include "test/cctest/cctest.h" using namespace v8::internal; @@ -168,6 +168,11 @@ TEST(DisasmIa320) { __ nop(); __ idiv(edx); + __ idiv(Operand(edx, ecx, times_1, 1)); + __ idiv(Operand(esp, 12)); + __ div(edx); + __ div(Operand(edx, ecx, times_1, 1)); + __ div(Operand(esp, 12)); __ mul(edx); __ neg(edx); __ not_(edx); @@ -175,7 +180,9 @@ TEST(DisasmIa320) { __ imul(edx, Operand(ebx, ecx, times_4, 10000)); __ imul(edx, ecx, 12); + __ imul(edx, Operand(edx, eax, times_2, 42), 8); __ imul(edx, ecx, 1000); + __ imul(edx, Operand(ebx, ecx, times_4, 1), 9000); __ inc(edx); __ inc(Operand(ebx, ecx, times_4, 10000)); @@ -197,15 +204,24 @@ TEST(DisasmIa320) { __ sar(edx, 1); __ sar(edx, 6); __ sar_cl(edx); + __ sar(Operand(ebx, ecx, times_4, 10000), 1); + __ sar(Operand(ebx, ecx, times_4, 10000), 6); + __ sar_cl(Operand(ebx, ecx, times_4, 10000)); __ sbb(edx, Operand(ebx, ecx, times_4, 10000)); __ shld(edx, Operand(ebx, ecx, times_4, 10000)); __ shl(edx, 1); __ shl(edx, 6); __ shl_cl(edx); + __ shl(Operand(ebx, ecx, times_4, 10000), 1); + __ shl(Operand(ebx, ecx, times_4, 10000), 6); + __ shl_cl(Operand(ebx, ecx, times_4, 10000)); __ 
shrd(edx, Operand(ebx, ecx, times_4, 10000)); __ shr(edx, 1); __ shr(edx, 7); __ shr_cl(edx); + __ shr(Operand(ebx, ecx, times_4, 10000), 1); + __ shr(Operand(ebx, ecx, times_4, 10000), 6); + __ shr_cl(Operand(ebx, ecx, times_4, 10000)); // Immediates @@ -275,7 +291,7 @@ TEST(DisasmIa320) { __ jmp(&L1); __ jmp(Operand(ebx, ecx, times_4, 10000)); ExternalReference after_break_target = - ExternalReference(Debug_Address::AfterBreakTarget(), isolate); + ExternalReference::debug_after_break_target_address(isolate); __ jmp(Operand::StaticVariable(after_break_target)); __ jmp(ic, RelocInfo::CODE_TARGET); __ nop(); @@ -364,86 +380,76 @@ TEST(DisasmIa320) { // SSE instruction { - if (CpuFeatures::IsSupported(SSE2)) { - CpuFeatureScope fscope(&assm, SSE2); - // Move operation - __ movaps(xmm0, xmm1); - __ shufps(xmm0, xmm0, 0x0); - - // logic operation - __ andps(xmm0, xmm1); - __ andps(xmm0, Operand(ebx, ecx, times_4, 10000)); - __ orps(xmm0, xmm1); - __ orps(xmm0, Operand(ebx, ecx, times_4, 10000)); - __ xorps(xmm0, xmm1); - __ xorps(xmm0, Operand(ebx, ecx, times_4, 10000)); - - // Arithmetic operation - __ addps(xmm1, xmm0); - __ addps(xmm1, Operand(ebx, ecx, times_4, 10000)); - __ subps(xmm1, xmm0); - __ subps(xmm1, Operand(ebx, ecx, times_4, 10000)); - __ mulps(xmm1, xmm0); - __ mulps(xmm1, Operand(ebx, ecx, times_4, 10000)); - __ divps(xmm1, xmm0); - __ divps(xmm1, Operand(ebx, ecx, times_4, 10000)); - } + // Move operation + __ movaps(xmm0, xmm1); + __ shufps(xmm0, xmm0, 0x0); + + // logic operation + __ andps(xmm0, xmm1); + __ andps(xmm0, Operand(ebx, ecx, times_4, 10000)); + __ orps(xmm0, xmm1); + __ orps(xmm0, Operand(ebx, ecx, times_4, 10000)); + __ xorps(xmm0, xmm1); + __ xorps(xmm0, Operand(ebx, ecx, times_4, 10000)); + + // Arithmetic operation + __ addps(xmm1, xmm0); + __ addps(xmm1, Operand(ebx, ecx, times_4, 10000)); + __ subps(xmm1, xmm0); + __ subps(xmm1, Operand(ebx, ecx, times_4, 10000)); + __ mulps(xmm1, xmm0); + __ mulps(xmm1, Operand(ebx, ecx, times_4, 10000)); + __ divps(xmm1, xmm0); + __ divps(xmm1, Operand(ebx, ecx, times_4, 10000)); } { - if (CpuFeatures::IsSupported(SSE2)) { - CpuFeatureScope fscope(&assm, SSE2); - __ cvttss2si(edx, Operand(ebx, ecx, times_4, 10000)); - __ cvtsi2sd(xmm1, Operand(ebx, ecx, times_4, 10000)); - __ movsd(xmm1, Operand(ebx, ecx, times_4, 10000)); - __ movsd(Operand(ebx, ecx, times_4, 10000), xmm1); - // 128 bit move instructions. - __ movdqa(xmm0, Operand(ebx, ecx, times_4, 10000)); - __ movdqa(Operand(ebx, ecx, times_4, 10000), xmm0); - __ movdqu(xmm0, Operand(ebx, ecx, times_4, 10000)); - __ movdqu(Operand(ebx, ecx, times_4, 10000), xmm0); - - __ addsd(xmm1, xmm0); - __ mulsd(xmm1, xmm0); - __ subsd(xmm1, xmm0); - __ divsd(xmm1, xmm0); - __ ucomisd(xmm0, xmm1); - __ cmpltsd(xmm0, xmm1); - - __ andpd(xmm0, xmm1); - __ psllq(xmm0, 17); - __ psllq(xmm0, xmm1); - __ psrlq(xmm0, 17); - __ psrlq(xmm0, xmm1); - __ por(xmm0, xmm1); - } + __ cvttss2si(edx, Operand(ebx, ecx, times_4, 10000)); + __ cvtsi2sd(xmm1, Operand(ebx, ecx, times_4, 10000)); + __ movsd(xmm1, Operand(ebx, ecx, times_4, 10000)); + __ movsd(Operand(ebx, ecx, times_4, 10000), xmm1); + // 128 bit move instructions. 
+ __ movdqa(xmm0, Operand(ebx, ecx, times_4, 10000)); + __ movdqa(Operand(ebx, ecx, times_4, 10000), xmm0); + __ movdqu(xmm0, Operand(ebx, ecx, times_4, 10000)); + __ movdqu(Operand(ebx, ecx, times_4, 10000), xmm0); + + __ addsd(xmm1, xmm0); + __ mulsd(xmm1, xmm0); + __ subsd(xmm1, xmm0); + __ divsd(xmm1, xmm0); + __ ucomisd(xmm0, xmm1); + __ cmpltsd(xmm0, xmm1); + + __ andpd(xmm0, xmm1); + __ psllq(xmm0, 17); + __ psllq(xmm0, xmm1); + __ psrlq(xmm0, 17); + __ psrlq(xmm0, xmm1); + __ por(xmm0, xmm1); } // cmov. { - if (CpuFeatures::IsSupported(CMOV)) { - CpuFeatureScope use_cmov(&assm, CMOV); - __ cmov(overflow, eax, Operand(eax, 0)); - __ cmov(no_overflow, eax, Operand(eax, 1)); - __ cmov(below, eax, Operand(eax, 2)); - __ cmov(above_equal, eax, Operand(eax, 3)); - __ cmov(equal, eax, Operand(ebx, 0)); - __ cmov(not_equal, eax, Operand(ebx, 1)); - __ cmov(below_equal, eax, Operand(ebx, 2)); - __ cmov(above, eax, Operand(ebx, 3)); - __ cmov(sign, eax, Operand(ecx, 0)); - __ cmov(not_sign, eax, Operand(ecx, 1)); - __ cmov(parity_even, eax, Operand(ecx, 2)); - __ cmov(parity_odd, eax, Operand(ecx, 3)); - __ cmov(less, eax, Operand(edx, 0)); - __ cmov(greater_equal, eax, Operand(edx, 1)); - __ cmov(less_equal, eax, Operand(edx, 2)); - __ cmov(greater, eax, Operand(edx, 3)); - } + __ cmov(overflow, eax, Operand(eax, 0)); + __ cmov(no_overflow, eax, Operand(eax, 1)); + __ cmov(below, eax, Operand(eax, 2)); + __ cmov(above_equal, eax, Operand(eax, 3)); + __ cmov(equal, eax, Operand(ebx, 0)); + __ cmov(not_equal, eax, Operand(ebx, 1)); + __ cmov(below_equal, eax, Operand(ebx, 2)); + __ cmov(above, eax, Operand(ebx, 3)); + __ cmov(sign, eax, Operand(ecx, 0)); + __ cmov(not_sign, eax, Operand(ecx, 1)); + __ cmov(parity_even, eax, Operand(ecx, 2)); + __ cmov(parity_odd, eax, Operand(ecx, 3)); + __ cmov(less, eax, Operand(edx, 0)); + __ cmov(greater_equal, eax, Operand(edx, 1)); + __ cmov(less_equal, eax, Operand(edx, 2)); + __ cmov(greater, eax, Operand(edx, 3)); } { - if (CpuFeatures::IsSupported(SSE2) && - CpuFeatures::IsSupported(SSE4_1)) { + if (CpuFeatures::IsSupported(SSE4_1)) { CpuFeatureScope scope(&assm, SSE4_1); __ pextrd(eax, xmm0, 1); __ pinsrd(xmm1, eax, 0); @@ -451,6 +457,14 @@ TEST(DisasmIa320) { } } + // xchg. 
+ { + __ xchg(eax, eax); + __ xchg(eax, ebx); + __ xchg(ebx, ebx); + __ xchg(ebx, Operand(esp, 12)); + } + // Nop instructions for (int i = 0; i < 16; i++) { __ Nop(i); @@ -464,7 +478,8 @@ TEST(DisasmIa320) { desc, Code::ComputeFlags(Code::STUB), Handle<Code>()); USE(code); #ifdef OBJECT_PRINT - code->Print(); + OFStream os(stdout); + code->Print(os); byte* begin = code->instruction_start(); byte* end = begin + code->instruction_size(); disasm::Disassembler::Disassemble(stdout, begin, end); diff --git a/deps/v8/test/cctest/test-disasm-mips.cc b/deps/v8/test/cctest/test-disasm-mips.cc index 725b3a56746..cfd861e241d 100644 --- a/deps/v8/test/cctest/test-disasm-mips.cc +++ b/deps/v8/test/cctest/test-disasm-mips.cc @@ -28,14 +28,14 @@ #include <stdlib.h> -#include "v8.h" - -#include "debug.h" -#include "disasm.h" -#include "disassembler.h" -#include "macro-assembler.h" -#include "serialize.h" -#include "cctest.h" +#include "src/v8.h" + +#include "src/debug.h" +#include "src/disasm.h" +#include "src/disassembler.h" +#include "src/macro-assembler.h" +#include "src/serialize.h" +#include "test/cctest/cctest.h" using namespace v8::internal; diff --git a/deps/v8/test/cctest/test-disasm-mips64.cc b/deps/v8/test/cctest/test-disasm-mips64.cc new file mode 100644 index 00000000000..d682d33480d --- /dev/null +++ b/deps/v8/test/cctest/test-disasm-mips64.cc @@ -0,0 +1,674 @@ +// Copyright 2012 the V8 project authors. All rights reserved. +// Redistribution and use in source and binary forms, with or without +// modification, are permitted provided that the following conditions are +// met: +// +// * Redistributions of source code must retain the above copyright +// notice, this list of conditions and the following disclaimer. +// * Redistributions in binary form must reproduce the above +// copyright notice, this list of conditions and the following +// disclaimer in the documentation and/or other materials provided +// with the distribution. +// * Neither the name of Google Inc. nor the names of its +// contributors may be used to endorse or promote products derived +// from this software without specific prior written permission. +// +// THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS +// "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT +// LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR +// A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT +// OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, +// SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT +// LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, +// DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY +// THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT +// (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE +// OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. 
+// + +#include <stdlib.h> + +#include "src/v8.h" + +#include "src/debug.h" +#include "src/disasm.h" +#include "src/disassembler.h" +#include "src/macro-assembler.h" +#include "src/serialize.h" +#include "test/cctest/cctest.h" + +using namespace v8::internal; + + +bool DisassembleAndCompare(byte* pc, const char* compare_string) { + disasm::NameConverter converter; + disasm::Disassembler disasm(converter); + EmbeddedVector<char, 128> disasm_buffer; + + disasm.InstructionDecode(disasm_buffer, pc); + + if (strcmp(compare_string, disasm_buffer.start()) != 0) { + fprintf(stderr, + "expected: \n" + "%s\n" + "disassembled: \n" + "%s\n\n", + compare_string, disasm_buffer.start()); + return false; + } + return true; +} + + +// Set up V8 to a state where we can at least run the assembler and +// disassembler. Declare the variables and allocate the data structures used +// in the rest of the macros. +#define SET_UP() \ + CcTest::InitializeVM(); \ + Isolate* isolate = CcTest::i_isolate(); \ + HandleScope scope(isolate); \ + byte *buffer = reinterpret_cast<byte*>(malloc(4*1024)); \ + Assembler assm(isolate, buffer, 4*1024); \ + bool failure = false; + + +// This macro assembles one instruction using the preallocated assembler and +// disassembles the generated instruction, comparing the output to the expected +// value. If the comparison fails an error message is printed, but the test +// continues to run until the end. +#define COMPARE(asm_, compare_string) \ + { \ + int pc_offset = assm.pc_offset(); \ + byte *progcounter = &buffer[pc_offset]; \ + assm.asm_; \ + if (!DisassembleAndCompare(progcounter, compare_string)) failure = true; \ + } + + +// Verify that all invocations of the COMPARE macro passed successfully. +// Exit with a failure if at least one of the tests failed. 
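+// COMPARE only records failures, so a single run reports every mismatching
+// encoding before the test finally aborts here.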
+#define VERIFY_RUN() \ +if (failure) { \ + V8_Fatal(__FILE__, __LINE__, "MIPS Disassembler tests failed.\n"); \ + } + + +TEST(Type0) { + SET_UP(); + + COMPARE(addu(a0, a1, a2), + "00a62021 addu a0, a1, a2"); + COMPARE(daddu(a0, a1, a2), + "00a6202d daddu a0, a1, a2"); + COMPARE(addu(a6, a7, t0), + "016c5021 addu a6, a7, t0"); + COMPARE(daddu(a6, a7, t0), + "016c502d daddu a6, a7, t0"); + COMPARE(addu(v0, v1, s0), + "00701021 addu v0, v1, s0"); + COMPARE(daddu(v0, v1, s0), + "0070102d daddu v0, v1, s0"); + + COMPARE(subu(a0, a1, a2), + "00a62023 subu a0, a1, a2"); + COMPARE(dsubu(a0, a1, a2), + "00a6202f dsubu a0, a1, a2"); + COMPARE(subu(a6, a7, t0), + "016c5023 subu a6, a7, t0"); + COMPARE(dsubu(a6, a7, t0), + "016c502f dsubu a6, a7, t0"); + COMPARE(subu(v0, v1, s0), + "00701023 subu v0, v1, s0"); + COMPARE(dsubu(v0, v1, s0), + "0070102f dsubu v0, v1, s0"); + + if (kArchVariant != kMips64r6) { + COMPARE(mult(a0, a1), + "00850018 mult a0, a1"); + COMPARE(dmult(a0, a1), + "0085001c dmult a0, a1"); + COMPARE(mult(a6, a7), + "014b0018 mult a6, a7"); + COMPARE(dmult(a6, a7), + "014b001c dmult a6, a7"); + COMPARE(mult(v0, v1), + "00430018 mult v0, v1"); + COMPARE(dmult(v0, v1), + "0043001c dmult v0, v1"); + + COMPARE(multu(a0, a1), + "00850019 multu a0, a1"); + COMPARE(dmultu(a0, a1), + "0085001d dmultu a0, a1"); + COMPARE(multu(a6, a7), + "014b0019 multu a6, a7"); + COMPARE(dmultu(a6, a7), + "014b001d dmultu a6, a7"); + COMPARE(multu(v0, v1), + "00430019 multu v0, v1"); + COMPARE(dmultu(v0, v1), + "0043001d dmultu v0, v1"); + + COMPARE(div(a0, a1), + "0085001a div a0, a1"); + COMPARE(div(a6, a7), + "014b001a div a6, a7"); + COMPARE(div(v0, v1), + "0043001a div v0, v1"); + COMPARE(ddiv(a0, a1), + "0085001e ddiv a0, a1"); + COMPARE(ddiv(a6, a7), + "014b001e ddiv a6, a7"); + COMPARE(ddiv(v0, v1), + "0043001e ddiv v0, v1"); + + COMPARE(divu(a0, a1), + "0085001b divu a0, a1"); + COMPARE(divu(a6, a7), + "014b001b divu a6, a7"); + COMPARE(divu(v0, v1), + "0043001b divu v0, v1"); + COMPARE(ddivu(a0, a1), + "0085001f ddivu a0, a1"); + COMPARE(ddivu(a6, a7), + "014b001f ddivu a6, a7"); + COMPARE(ddivu(v0, v1), + "0043001f ddivu v0, v1"); + COMPARE(mul(a0, a1, a2), + "70a62002 mul a0, a1, a2"); + COMPARE(mul(a6, a7, t0), + "716c5002 mul a6, a7, t0"); + COMPARE(mul(v0, v1, s0), + "70701002 mul v0, v1, s0"); + } else { // MIPS64r6. 
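+    // r6 replaced the HI/LO-based mult/div encodings with three-operand
+    // mul/muh/div/mod forms, so the expected encodings differ from the
+    // pre-r6 branch above.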
+ COMPARE(mul(a0, a1, a2), + "00a62098 mul a0, a1, a2"); + COMPARE(muh(a0, a1, a2), + "00a620d8 muh a0, a1, a2"); + COMPARE(dmul(a0, a1, a2), + "00a6209c dmul a0, a1, a2"); + COMPARE(dmuh(a0, a1, a2), + "00a620dc dmuh a0, a1, a2"); + COMPARE(mul(a5, a6, a7), + "014b4898 mul a5, a6, a7"); + COMPARE(muh(a5, a6, a7), + "014b48d8 muh a5, a6, a7"); + COMPARE(dmul(a5, a6, a7), + "014b489c dmul a5, a6, a7"); + COMPARE(dmuh(a5, a6, a7), + "014b48dc dmuh a5, a6, a7"); + COMPARE(mul(v0, v1, a0), + "00641098 mul v0, v1, a0"); + COMPARE(muh(v0, v1, a0), + "006410d8 muh v0, v1, a0"); + COMPARE(dmul(v0, v1, a0), + "0064109c dmul v0, v1, a0"); + COMPARE(dmuh(v0, v1, a0), + "006410dc dmuh v0, v1, a0"); + + COMPARE(mulu(a0, a1, a2), + "00a62099 mulu a0, a1, a2"); + COMPARE(muhu(a0, a1, a2), + "00a620d9 muhu a0, a1, a2"); + COMPARE(dmulu(a0, a1, a2), + "00a6209d dmulu a0, a1, a2"); + COMPARE(dmuhu(a0, a1, a2), + "00a620dd dmuhu a0, a1, a2"); + COMPARE(mulu(a5, a6, a7), + "014b4899 mulu a5, a6, a7"); + COMPARE(muhu(a5, a6, a7), + "014b48d9 muhu a5, a6, a7"); + COMPARE(dmulu(a5, a6, a7), + "014b489d dmulu a5, a6, a7"); + COMPARE(dmuhu(a5, a6, a7), + "014b48dd dmuhu a5, a6, a7"); + COMPARE(mulu(v0, v1, a0), + "00641099 mulu v0, v1, a0"); + COMPARE(muhu(v0, v1, a0), + "006410d9 muhu v0, v1, a0"); + COMPARE(dmulu(v0, v1, a0), + "0064109d dmulu v0, v1, a0"); + COMPARE(dmuhu(v0, v1, a0), + "006410dd dmuhu v0, v1, a0"); + + COMPARE(div(a0, a1, a2), + "00a6209a div a0, a1, a2"); + COMPARE(mod(a0, a1, a2), + "00a620da mod a0, a1, a2"); + COMPARE(ddiv(a0, a1, a2), + "00a6209e ddiv a0, a1, a2"); + COMPARE(dmod(a0, a1, a2), + "00a620de dmod a0, a1, a2"); + COMPARE(div(a5, a6, a7), + "014b489a div a5, a6, a7"); + COMPARE(mod(a5, a6, a7), + "014b48da mod a5, a6, a7"); + COMPARE(ddiv(a5, a6, a7), + "014b489e ddiv a5, a6, a7"); + COMPARE(dmod(a5, a6, a7), + "014b48de dmod a5, a6, a7"); + COMPARE(div(v0, v1, a0), + "0064109a div v0, v1, a0"); + COMPARE(mod(v0, v1, a0), + "006410da mod v0, v1, a0"); + COMPARE(ddiv(v0, v1, a0), + "0064109e ddiv v0, v1, a0"); + COMPARE(dmod(v0, v1, a0), + "006410de dmod v0, v1, a0"); + + COMPARE(divu(a0, a1, a2), + "00a6209b divu a0, a1, a2"); + COMPARE(modu(a0, a1, a2), + "00a620db modu a0, a1, a2"); + COMPARE(ddivu(a0, a1, a2), + "00a6209f ddivu a0, a1, a2"); + COMPARE(dmodu(a0, a1, a2), + "00a620df dmodu a0, a1, a2"); + COMPARE(divu(a5, a6, a7), + "014b489b divu a5, a6, a7"); + COMPARE(modu(a5, a6, a7), + "014b48db modu a5, a6, a7"); + COMPARE(ddivu(a5, a6, a7), + "014b489f ddivu a5, a6, a7"); + COMPARE(dmodu(a5, a6, a7), + "014b48df dmodu a5, a6, a7"); + COMPARE(divu(v0, v1, a0), + "0064109b divu v0, v1, a0"); + COMPARE(modu(v0, v1, a0), + "006410db modu v0, v1, a0"); + COMPARE(ddivu(v0, v1, a0), + "0064109f ddivu v0, v1, a0"); + COMPARE(dmodu(v0, v1, a0), + "006410df dmodu v0, v1, a0"); + + COMPARE(bovc(a0, a0, static_cast<int16_t>(0)), + "20840000 bovc a0, a0, 0"); + COMPARE(bovc(a1, a0, static_cast<int16_t>(0)), + "20a40000 bovc a1, a0, 0"); + COMPARE(bovc(a1, a0, 32767), + "20a47fff bovc a1, a0, 32767"); + COMPARE(bovc(a1, a0, -32768), + "20a48000 bovc a1, a0, -32768"); + + COMPARE(bnvc(a0, a0, static_cast<int16_t>(0)), + "60840000 bnvc a0, a0, 0"); + COMPARE(bnvc(a1, a0, static_cast<int16_t>(0)), + "60a40000 bnvc a1, a0, 0"); + COMPARE(bnvc(a1, a0, 32767), + "60a47fff bnvc a1, a0, 32767"); + COMPARE(bnvc(a1, a0, -32768), + "60a48000 bnvc a1, a0, -32768"); + + COMPARE(beqzc(a0, 0), + "d8800000 beqzc a0, 0x0"); + COMPARE(beqzc(a0, 0xfffff), // 0x0fffff == 1048575. 
+ "d88fffff beqzc a0, 0xfffff"); + COMPARE(beqzc(a0, 0x100000), // 0x100000 == -1048576. + "d8900000 beqzc a0, 0x100000"); + + COMPARE(bnezc(a0, 0), + "f8800000 bnezc a0, 0x0"); + COMPARE(bnezc(a0, 0xfffff), // 0x0fffff == 1048575. + "f88fffff bnezc a0, 0xfffff"); + COMPARE(bnezc(a0, 0x100000), // 0x100000 == -1048576. + "f8900000 bnezc a0, 0x100000"); + } + + COMPARE(addiu(a0, a1, 0x0), + "24a40000 addiu a0, a1, 0"); + COMPARE(addiu(s0, s1, 32767), + "26307fff addiu s0, s1, 32767"); + COMPARE(addiu(a6, a7, -32768), + "256a8000 addiu a6, a7, -32768"); + COMPARE(addiu(v0, v1, -1), + "2462ffff addiu v0, v1, -1"); + COMPARE(daddiu(a0, a1, 0x0), + "64a40000 daddiu a0, a1, 0"); + COMPARE(daddiu(s0, s1, 32767), + "66307fff daddiu s0, s1, 32767"); + COMPARE(daddiu(a6, a7, -32768), + "656a8000 daddiu a6, a7, -32768"); + COMPARE(daddiu(v0, v1, -1), + "6462ffff daddiu v0, v1, -1"); + + COMPARE(and_(a0, a1, a2), + "00a62024 and a0, a1, a2"); + COMPARE(and_(s0, s1, s2), + "02328024 and s0, s1, s2"); + COMPARE(and_(a6, a7, t0), + "016c5024 and a6, a7, t0"); + COMPARE(and_(v0, v1, a2), + "00661024 and v0, v1, a2"); + + COMPARE(or_(a0, a1, a2), + "00a62025 or a0, a1, a2"); + COMPARE(or_(s0, s1, s2), + "02328025 or s0, s1, s2"); + COMPARE(or_(a6, a7, t0), + "016c5025 or a6, a7, t0"); + COMPARE(or_(v0, v1, a2), + "00661025 or v0, v1, a2"); + + COMPARE(xor_(a0, a1, a2), + "00a62026 xor a0, a1, a2"); + COMPARE(xor_(s0, s1, s2), + "02328026 xor s0, s1, s2"); + COMPARE(xor_(a6, a7, t0), + "016c5026 xor a6, a7, t0"); + COMPARE(xor_(v0, v1, a2), + "00661026 xor v0, v1, a2"); + + COMPARE(nor(a0, a1, a2), + "00a62027 nor a0, a1, a2"); + COMPARE(nor(s0, s1, s2), + "02328027 nor s0, s1, s2"); + COMPARE(nor(a6, a7, t0), + "016c5027 nor a6, a7, t0"); + COMPARE(nor(v0, v1, a2), + "00661027 nor v0, v1, a2"); + + COMPARE(andi(a0, a1, 0x1), + "30a40001 andi a0, a1, 0x1"); + COMPARE(andi(v0, v1, 0xffff), + "3062ffff andi v0, v1, 0xffff"); + + COMPARE(ori(a0, a1, 0x1), + "34a40001 ori a0, a1, 0x1"); + COMPARE(ori(v0, v1, 0xffff), + "3462ffff ori v0, v1, 0xffff"); + + COMPARE(xori(a0, a1, 0x1), + "38a40001 xori a0, a1, 0x1"); + COMPARE(xori(v0, v1, 0xffff), + "3862ffff xori v0, v1, 0xffff"); + + COMPARE(lui(a0, 0x1), + "3c040001 lui a0, 0x1"); + COMPARE(lui(v0, 0xffff), + "3c02ffff lui v0, 0xffff"); + + COMPARE(sll(a0, a1, 0), + "00052000 sll a0, a1, 0"); + COMPARE(sll(s0, s1, 8), + "00118200 sll s0, s1, 8"); + COMPARE(sll(a6, a7, 24), + "000b5600 sll a6, a7, 24"); + COMPARE(sll(v0, v1, 31), + "000317c0 sll v0, v1, 31"); + COMPARE(dsll(a0, a1, 0), + "00052038 dsll a0, a1, 0"); + COMPARE(dsll(s0, s1, 8), + "00118238 dsll s0, s1, 8"); + COMPARE(dsll(a6, a7, 24), + "000b5638 dsll a6, a7, 24"); + COMPARE(dsll(v0, v1, 31), + "000317f8 dsll v0, v1, 31"); + + COMPARE(sllv(a0, a1, a2), + "00c52004 sllv a0, a1, a2"); + COMPARE(sllv(s0, s1, s2), + "02518004 sllv s0, s1, s2"); + COMPARE(sllv(a6, a7, t0), + "018b5004 sllv a6, a7, t0"); + COMPARE(sllv(v0, v1, fp), + "03c31004 sllv v0, v1, fp"); + COMPARE(dsllv(a0, a1, a2), + "00c52014 dsllv a0, a1, a2"); + COMPARE(dsllv(s0, s1, s2), + "02518014 dsllv s0, s1, s2"); + COMPARE(dsllv(a6, a7, t0), + "018b5014 dsllv a6, a7, t0"); + COMPARE(dsllv(v0, v1, fp), + "03c31014 dsllv v0, v1, fp"); + + COMPARE(srl(a0, a1, 0), + "00052002 srl a0, a1, 0"); + COMPARE(srl(s0, s1, 8), + "00118202 srl s0, s1, 8"); + COMPARE(srl(a6, a7, 24), + "000b5602 srl a6, a7, 24"); + COMPARE(srl(v0, v1, 31), + "000317c2 srl v0, v1, 31"); + COMPARE(dsrl(a0, a1, 0), + "0005203a dsrl a0, a1, 0"); + COMPARE(dsrl(s0, s1, 8), 
+ "0011823a dsrl s0, s1, 8"); + COMPARE(dsrl(a6, a7, 24), + "000b563a dsrl a6, a7, 24"); + COMPARE(dsrl(v0, v1, 31), + "000317fa dsrl v0, v1, 31"); + + COMPARE(srlv(a0, a1, a2), + "00c52006 srlv a0, a1, a2"); + COMPARE(srlv(s0, s1, s2), + "02518006 srlv s0, s1, s2"); + COMPARE(srlv(a6, a7, t0), + "018b5006 srlv a6, a7, t0"); + COMPARE(srlv(v0, v1, fp), + "03c31006 srlv v0, v1, fp"); + COMPARE(dsrlv(a0, a1, a2), + "00c52016 dsrlv a0, a1, a2"); + COMPARE(dsrlv(s0, s1, s2), + "02518016 dsrlv s0, s1, s2"); + COMPARE(dsrlv(a6, a7, t0), + "018b5016 dsrlv a6, a7, t0"); + COMPARE(dsrlv(v0, v1, fp), + "03c31016 dsrlv v0, v1, fp"); + + COMPARE(sra(a0, a1, 0), + "00052003 sra a0, a1, 0"); + COMPARE(sra(s0, s1, 8), + "00118203 sra s0, s1, 8"); + COMPARE(sra(a6, a7, 24), + "000b5603 sra a6, a7, 24"); + COMPARE(sra(v0, v1, 31), + "000317c3 sra v0, v1, 31"); + COMPARE(dsra(a0, a1, 0), + "0005203b dsra a0, a1, 0"); + COMPARE(dsra(s0, s1, 8), + "0011823b dsra s0, s1, 8"); + COMPARE(dsra(a6, a7, 24), + "000b563b dsra a6, a7, 24"); + COMPARE(dsra(v0, v1, 31), + "000317fb dsra v0, v1, 31"); + + COMPARE(srav(a0, a1, a2), + "00c52007 srav a0, a1, a2"); + COMPARE(srav(s0, s1, s2), + "02518007 srav s0, s1, s2"); + COMPARE(srav(a6, a7, t0), + "018b5007 srav a6, a7, t0"); + COMPARE(srav(v0, v1, fp), + "03c31007 srav v0, v1, fp"); + COMPARE(dsrav(a0, a1, a2), + "00c52017 dsrav a0, a1, a2"); + COMPARE(dsrav(s0, s1, s2), + "02518017 dsrav s0, s1, s2"); + COMPARE(dsrav(a6, a7, t0), + "018b5017 dsrav a6, a7, t0"); + COMPARE(dsrav(v0, v1, fp), + "03c31017 dsrav v0, v1, fp"); + + if (kArchVariant == kMips64r2) { + COMPARE(rotr(a0, a1, 0), + "00252002 rotr a0, a1, 0"); + COMPARE(rotr(s0, s1, 8), + "00318202 rotr s0, s1, 8"); + COMPARE(rotr(a6, a7, 24), + "002b5602 rotr a6, a7, 24"); + COMPARE(rotr(v0, v1, 31), + "002317c2 rotr v0, v1, 31"); + COMPARE(drotr(a0, a1, 0), + "0025203a drotr a0, a1, 0"); + COMPARE(drotr(s0, s1, 8), + "0031823a drotr s0, s1, 8"); + COMPARE(drotr(a6, a7, 24), + "002b563a drotr a6, a7, 24"); + COMPARE(drotr(v0, v1, 31), + "002317fa drotr v0, v1, 31"); + + COMPARE(rotrv(a0, a1, a2), + "00c52046 rotrv a0, a1, a2"); + COMPARE(rotrv(s0, s1, s2), + "02518046 rotrv s0, s1, s2"); + COMPARE(rotrv(a6, a7, t0), + "018b5046 rotrv a6, a7, t0"); + COMPARE(rotrv(v0, v1, fp), + "03c31046 rotrv v0, v1, fp"); + COMPARE(drotrv(a0, a1, a2), + "00c52056 drotrv a0, a1, a2"); + COMPARE(drotrv(s0, s1, s2), + "02518056 drotrv s0, s1, s2"); + COMPARE(drotrv(a6, a7, t0), + "018b5056 drotrv a6, a7, t0"); + COMPARE(drotrv(v0, v1, fp), + "03c31056 drotrv v0, v1, fp"); + } + + COMPARE(break_(0), + "0000000d break, code: 0x00000 (0)"); + COMPARE(break_(261120), + "00ff000d break, code: 0x3fc00 (261120)"); + COMPARE(break_(1047552), + "03ff000d break, code: 0xffc00 (1047552)"); + + COMPARE(tge(a0, a1, 0), + "00850030 tge a0, a1, code: 0x000"); + COMPARE(tge(s0, s1, 1023), + "0211fff0 tge s0, s1, code: 0x3ff"); + COMPARE(tgeu(a0, a1, 0), + "00850031 tgeu a0, a1, code: 0x000"); + COMPARE(tgeu(s0, s1, 1023), + "0211fff1 tgeu s0, s1, code: 0x3ff"); + COMPARE(tlt(a0, a1, 0), + "00850032 tlt a0, a1, code: 0x000"); + COMPARE(tlt(s0, s1, 1023), + "0211fff2 tlt s0, s1, code: 0x3ff"); + COMPARE(tltu(a0, a1, 0), + "00850033 tltu a0, a1, code: 0x000"); + COMPARE(tltu(s0, s1, 1023), + "0211fff3 tltu s0, s1, code: 0x3ff"); + COMPARE(teq(a0, a1, 0), + "00850034 teq a0, a1, code: 0x000"); + COMPARE(teq(s0, s1, 1023), + "0211fff4 teq s0, s1, code: 0x3ff"); + COMPARE(tne(a0, a1, 0), + "00850036 tne a0, a1, code: 0x000"); + COMPARE(tne(s0, s1, 
1023), + "0211fff6 tne s0, s1, code: 0x3ff"); + + COMPARE(mfhi(a0), + "00002010 mfhi a0"); + COMPARE(mfhi(s2), + "00009010 mfhi s2"); + COMPARE(mfhi(t0), + "00006010 mfhi t0"); + COMPARE(mfhi(v1), + "00001810 mfhi v1"); + COMPARE(mflo(a0), + "00002012 mflo a0"); + COMPARE(mflo(s2), + "00009012 mflo s2"); + COMPARE(mflo(t0), + "00006012 mflo t0"); + COMPARE(mflo(v1), + "00001812 mflo v1"); + + COMPARE(slt(a0, a1, a2), + "00a6202a slt a0, a1, a2"); + COMPARE(slt(s0, s1, s2), + "0232802a slt s0, s1, s2"); + COMPARE(slt(a6, a7, t0), + "016c502a slt a6, a7, t0"); + COMPARE(slt(v0, v1, a2), + "0066102a slt v0, v1, a2"); + COMPARE(sltu(a0, a1, a2), + "00a6202b sltu a0, a1, a2"); + COMPARE(sltu(s0, s1, s2), + "0232802b sltu s0, s1, s2"); + COMPARE(sltu(a6, a7, t0), + "016c502b sltu a6, a7, t0"); + COMPARE(sltu(v0, v1, a2), + "0066102b sltu v0, v1, a2"); + + COMPARE(slti(a0, a1, 0), + "28a40000 slti a0, a1, 0"); + COMPARE(slti(s0, s1, 32767), + "2a307fff slti s0, s1, 32767"); + COMPARE(slti(a6, a7, -32768), + "296a8000 slti a6, a7, -32768"); + COMPARE(slti(v0, v1, -1), + "2862ffff slti v0, v1, -1"); + COMPARE(sltiu(a0, a1, 0), + "2ca40000 sltiu a0, a1, 0"); + COMPARE(sltiu(s0, s1, 32767), + "2e307fff sltiu s0, s1, 32767"); + COMPARE(sltiu(a6, a7, -32768), + "2d6a8000 sltiu a6, a7, -32768"); + COMPARE(sltiu(v0, v1, -1), + "2c62ffff sltiu v0, v1, -1"); + COMPARE(movz(a0, a1, a2), + "00a6200a movz a0, a1, a2"); + COMPARE(movz(s0, s1, s2), + "0232800a movz s0, s1, s2"); + COMPARE(movz(a6, a7, t0), + "016c500a movz a6, a7, t0"); + COMPARE(movz(v0, v1, a2), + "0066100a movz v0, v1, a2"); + COMPARE(movn(a0, a1, a2), + "00a6200b movn a0, a1, a2"); + COMPARE(movn(s0, s1, s2), + "0232800b movn s0, s1, s2"); + COMPARE(movn(a6, a7, t0), + "016c500b movn a6, a7, t0"); + COMPARE(movn(v0, v1, a2), + "0066100b movn v0, v1, a2"); + + COMPARE(movt(a0, a1, 1), + "00a52001 movt a0, a1, 1"); + COMPARE(movt(s0, s1, 2), + "02298001 movt s0, s1, 2"); + COMPARE(movt(a6, a7, 3), + "016d5001 movt a6, a7, 3"); + COMPARE(movt(v0, v1, 7), + "007d1001 movt v0, v1, 7"); + COMPARE(movf(a0, a1, 0), + "00a02001 movf a0, a1, 0"); + COMPARE(movf(s0, s1, 4), + "02308001 movf s0, s1, 4"); + COMPARE(movf(a6, a7, 5), + "01745001 movf a6, a7, 5"); + COMPARE(movf(v0, v1, 6), + "00781001 movf v0, v1, 6"); + + if (kArchVariant == kMips64r6) { + COMPARE(clz(a0, a1), + "00a02050 clz a0, a1"); + COMPARE(clz(s6, s7), + "02e0b050 clz s6, s7"); + COMPARE(clz(v0, v1), + "00601050 clz v0, v1"); + } else { + COMPARE(clz(a0, a1), + "70a42020 clz a0, a1"); + COMPARE(clz(s6, s7), + "72f6b020 clz s6, s7"); + COMPARE(clz(v0, v1), + "70621020 clz v0, v1"); + } + + COMPARE(ins_(a0, a1, 31, 1), + "7ca4ffc4 ins a0, a1, 31, 1"); + COMPARE(ins_(s6, s7, 30, 2), + "7ef6ff84 ins s6, s7, 30, 2"); + COMPARE(ins_(v0, v1, 0, 32), + "7c62f804 ins v0, v1, 0, 32"); + COMPARE(ext_(a0, a1, 31, 1), + "7ca407c0 ext a0, a1, 31, 1"); + COMPARE(ext_(s6, s7, 30, 2), + "7ef60f80 ext s6, s7, 30, 2"); + COMPARE(ext_(v0, v1, 0, 32), + "7c62f800 ext v0, v1, 0, 32"); + + VERIFY_RUN(); +} diff --git a/deps/v8/test/cctest/test-disasm-x64.cc b/deps/v8/test/cctest/test-disasm-x64.cc index 3b1f8af8265..4778b04bb71 100644 --- a/deps/v8/test/cctest/test-disasm-x64.cc +++ b/deps/v8/test/cctest/test-disasm-x64.cc @@ -27,15 +27,15 @@ #include <stdlib.h> -#include "v8.h" +#include "src/v8.h" -#include "debug.h" -#include "disasm.h" -#include "disassembler.h" -#include "macro-assembler.h" -#include "serialize.h" -#include "stub-cache.h" -#include "cctest.h" +#include "src/debug.h" +#include 
"src/disasm.h" +#include "src/disassembler.h" +#include "src/macro-assembler.h" +#include "src/serialize.h" +#include "src/stub-cache.h" +#include "test/cctest/cctest.h" using namespace v8::internal; @@ -261,7 +261,7 @@ TEST(DisasmX64) { // TODO(mstarzinger): The following is protected. // __ jmp(Operand(rbx, rcx, times_4, 10000)); ExternalReference after_break_target = - ExternalReference(Debug_Address::AfterBreakTarget(), isolate); + ExternalReference::debug_after_break_target_address(isolate); USE(after_break_target); __ jmp(ic, RelocInfo::CODE_TARGET); __ nop(); @@ -420,6 +420,14 @@ TEST(DisasmX64) { } } + // xchg. + { + __ xchgq(rax, rax); + __ xchgq(rax, rbx); + __ xchgq(rbx, rbx); + __ xchgq(rbx, Operand(rsp, 12)); + } + // Nop instructions for (int i = 0; i < 16; i++) { __ Nop(i); @@ -433,7 +441,8 @@ TEST(DisasmX64) { desc, Code::ComputeFlags(Code::STUB), Handle<Code>()); USE(code); #ifdef OBJECT_PRINT - code->Print(); + OFStream os(stdout); + code->Print(os); byte* begin = code->instruction_start(); byte* end = begin + code->instruction_size(); disasm::Disassembler::Disassemble(stdout, begin, end); diff --git a/deps/v8/test/cctest/test-disasm-x87.cc b/deps/v8/test/cctest/test-disasm-x87.cc new file mode 100644 index 00000000000..1515cc793b0 --- /dev/null +++ b/deps/v8/test/cctest/test-disasm-x87.cc @@ -0,0 +1,410 @@ +// Copyright 2011 the V8 project authors. All rights reserved. +// Redistribution and use in source and binary forms, with or without +// modification, are permitted provided that the following conditions are +// met: +// +// * Redistributions of source code must retain the above copyright +// notice, this list of conditions and the following disclaimer. +// * Redistributions in binary form must reproduce the above +// copyright notice, this list of conditions and the following +// disclaimer in the documentation and/or other materials provided +// with the distribution. +// * Neither the name of Google Inc. nor the names of its +// contributors may be used to endorse or promote products derived +// from this software without specific prior written permission. +// +// THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS +// "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT +// LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR +// A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT +// OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, +// SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT +// LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, +// DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY +// THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT +// (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE +// OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. + +#include <stdlib.h> + +#include "src/v8.h" + +#include "src/debug.h" +#include "src/disasm.h" +#include "src/disassembler.h" +#include "src/macro-assembler.h" +#include "src/serialize.h" +#include "src/stub-cache.h" +#include "test/cctest/cctest.h" + +using namespace v8::internal; + + +#define __ assm. 
+ + +static void DummyStaticFunction(Object* result) { +} + + +TEST(DisasmIa320) { + CcTest::InitializeVM(); + Isolate* isolate = CcTest::i_isolate(); + HandleScope scope(isolate); + v8::internal::byte buffer[2048]; + Assembler assm(isolate, buffer, sizeof buffer); + DummyStaticFunction(NULL); // just bloody use it (DELETE; debugging) + + // Short immediate instructions + __ adc(eax, 12345678); + __ add(eax, Immediate(12345678)); + __ or_(eax, 12345678); + __ sub(eax, Immediate(12345678)); + __ xor_(eax, 12345678); + __ and_(eax, 12345678); + Handle<FixedArray> foo = isolate->factory()->NewFixedArray(10, TENURED); + __ cmp(eax, foo); + + // ---- This one caused crash + __ mov(ebx, Operand(esp, ecx, times_2, 0)); // [esp+ecx*4] + + // ---- All instructions that I can think of + __ add(edx, ebx); + __ add(edx, Operand(12, RelocInfo::NONE32)); + __ add(edx, Operand(ebx, 0)); + __ add(edx, Operand(ebx, 16)); + __ add(edx, Operand(ebx, 1999)); + __ add(edx, Operand(ebx, -4)); + __ add(edx, Operand(ebx, -1999)); + __ add(edx, Operand(esp, 0)); + __ add(edx, Operand(esp, 16)); + __ add(edx, Operand(esp, 1999)); + __ add(edx, Operand(esp, -4)); + __ add(edx, Operand(esp, -1999)); + __ nop(); + __ add(esi, Operand(ecx, times_4, 0)); + __ add(esi, Operand(ecx, times_4, 24)); + __ add(esi, Operand(ecx, times_4, -4)); + __ add(esi, Operand(ecx, times_4, -1999)); + __ nop(); + __ add(edi, Operand(ebp, ecx, times_4, 0)); + __ add(edi, Operand(ebp, ecx, times_4, 12)); + __ add(edi, Operand(ebp, ecx, times_4, -8)); + __ add(edi, Operand(ebp, ecx, times_4, -3999)); + __ add(Operand(ebp, ecx, times_4, 12), Immediate(12)); + + __ nop(); + __ add(ebx, Immediate(12)); + __ nop(); + __ adc(ecx, 12); + __ adc(ecx, 1000); + __ nop(); + __ and_(edx, 3); + __ and_(edx, Operand(esp, 4)); + __ cmp(edx, 3); + __ cmp(edx, Operand(esp, 4)); + __ cmp(Operand(ebp, ecx, times_4, 0), Immediate(1000)); + Handle<FixedArray> foo2 = isolate->factory()->NewFixedArray(10, TENURED); + __ cmp(ebx, foo2); + __ cmpb(ebx, Operand(ebp, ecx, times_2, 0)); + __ cmpb(Operand(ebp, ecx, times_2, 0), ebx); + __ or_(edx, 3); + __ xor_(edx, 3); + __ nop(); + __ cpuid(); + __ movsx_b(edx, ecx); + __ movsx_w(edx, ecx); + __ movzx_b(edx, ecx); + __ movzx_w(edx, ecx); + + __ nop(); + __ imul(edx, ecx); + __ shld(edx, ecx); + __ shrd(edx, ecx); + __ bts(edx, ecx); + __ bts(Operand(ebx, ecx, times_4, 0), ecx); + __ nop(); + __ pushad(); + __ popad(); + __ pushfd(); + __ popfd(); + __ push(Immediate(12)); + __ push(Immediate(23456)); + __ push(ecx); + __ push(esi); + __ push(Operand(ebp, JavaScriptFrameConstants::kFunctionOffset)); + __ push(Operand(ebx, ecx, times_4, 0)); + __ push(Operand(ebx, ecx, times_4, 0)); + __ push(Operand(ebx, ecx, times_4, 10000)); + __ pop(edx); + __ pop(eax); + __ pop(Operand(ebx, ecx, times_4, 0)); + __ nop(); + + __ add(edx, Operand(esp, 16)); + __ add(edx, ecx); + __ mov_b(edx, ecx); + __ mov_b(ecx, 6); + __ mov_b(Operand(ebx, ecx, times_4, 10000), 6); + __ mov_b(Operand(esp, 16), edx); + __ mov_w(edx, Operand(esp, 16)); + __ mov_w(Operand(esp, 16), edx); + __ nop(); + __ movsx_w(edx, Operand(esp, 12)); + __ movsx_b(edx, Operand(esp, 12)); + __ movzx_w(edx, Operand(esp, 12)); + __ movzx_b(edx, Operand(esp, 12)); + __ nop(); + __ mov(edx, 1234567); + __ mov(edx, Operand(esp, 12)); + __ mov(Operand(ebx, ecx, times_4, 10000), Immediate(12345)); + __ mov(Operand(ebx, ecx, times_4, 10000), edx); + __ nop(); + __ dec_b(edx); + __ dec_b(Operand(eax, 10)); + __ dec_b(Operand(ebx, ecx, times_4, 10000)); + __ dec(edx); + __ 
cdq(); + + __ nop(); + __ idiv(edx); + __ idiv(Operand(edx, ecx, times_1, 1)); + __ idiv(Operand(esp, 12)); + __ div(edx); + __ div(Operand(edx, ecx, times_1, 1)); + __ div(Operand(esp, 12)); + __ mul(edx); + __ neg(edx); + __ not_(edx); + __ test(Operand(ebx, ecx, times_4, 10000), Immediate(123456)); + + __ imul(edx, Operand(ebx, ecx, times_4, 10000)); + __ imul(edx, ecx, 12); + __ imul(edx, Operand(edx, eax, times_2, 42), 8); + __ imul(edx, ecx, 1000); + __ imul(edx, Operand(ebx, ecx, times_4, 1), 9000); + + __ inc(edx); + __ inc(Operand(ebx, ecx, times_4, 10000)); + __ push(Operand(ebx, ecx, times_4, 10000)); + __ pop(Operand(ebx, ecx, times_4, 10000)); + __ call(Operand(ebx, ecx, times_4, 10000)); + __ jmp(Operand(ebx, ecx, times_4, 10000)); + + __ lea(edx, Operand(ebx, ecx, times_4, 10000)); + __ or_(edx, 12345); + __ or_(edx, Operand(ebx, ecx, times_4, 10000)); + + __ nop(); + + __ rcl(edx, 1); + __ rcl(edx, 7); + __ rcr(edx, 1); + __ rcr(edx, 7); + __ sar(edx, 1); + __ sar(edx, 6); + __ sar_cl(edx); + __ sar(Operand(ebx, ecx, times_4, 10000), 1); + __ sar(Operand(ebx, ecx, times_4, 10000), 6); + __ sar_cl(Operand(ebx, ecx, times_4, 10000)); + __ sbb(edx, Operand(ebx, ecx, times_4, 10000)); + __ shld(edx, Operand(ebx, ecx, times_4, 10000)); + __ shl(edx, 1); + __ shl(edx, 6); + __ shl_cl(edx); + __ shl(Operand(ebx, ecx, times_4, 10000), 1); + __ shl(Operand(ebx, ecx, times_4, 10000), 6); + __ shl_cl(Operand(ebx, ecx, times_4, 10000)); + __ shrd(edx, Operand(ebx, ecx, times_4, 10000)); + __ shr(edx, 1); + __ shr(edx, 7); + __ shr_cl(edx); + __ shr(Operand(ebx, ecx, times_4, 10000), 1); + __ shr(Operand(ebx, ecx, times_4, 10000), 6); + __ shr_cl(Operand(ebx, ecx, times_4, 10000)); + + + // Immediates + + __ adc(edx, 12345); + + __ add(ebx, Immediate(12)); + __ add(Operand(edx, ecx, times_4, 10000), Immediate(12)); + + __ and_(ebx, 12345); + + __ cmp(ebx, 12345); + __ cmp(ebx, Immediate(12)); + __ cmp(Operand(edx, ecx, times_4, 10000), Immediate(12)); + __ cmpb(eax, 100); + + __ or_(ebx, 12345); + + __ sub(ebx, Immediate(12)); + __ sub(Operand(edx, ecx, times_4, 10000), Immediate(12)); + + __ xor_(ebx, 12345); + + __ imul(edx, ecx, 12); + __ imul(edx, ecx, 1000); + + __ cld(); + __ rep_movs(); + __ rep_stos(); + __ stos(); + + __ sub(edx, Operand(ebx, ecx, times_4, 10000)); + __ sub(edx, ebx); + + __ test(edx, Immediate(12345)); + __ test(edx, Operand(ebx, ecx, times_8, 10000)); + __ test(Operand(esi, edi, times_1, -20000000), Immediate(300000000)); + __ test_b(edx, Operand(ecx, ebx, times_2, 1000)); + __ test_b(Operand(eax, -20), 0x9A); + __ nop(); + + __ xor_(edx, 12345); + __ xor_(edx, Operand(ebx, ecx, times_8, 10000)); + __ bts(Operand(ebx, ecx, times_8, 10000), edx); + __ hlt(); + __ int3(); + __ ret(0); + __ ret(8); + + // Calls + + Label L1, L2; + __ bind(&L1); + __ nop(); + __ call(&L1); + __ call(&L2); + __ nop(); + __ bind(&L2); + __ call(Operand(ebx, ecx, times_4, 10000)); + __ nop(); + Handle<Code> ic(LoadIC::initialize_stub(isolate, NOT_CONTEXTUAL)); + __ call(ic, RelocInfo::CODE_TARGET); + __ nop(); + __ call(FUNCTION_ADDR(DummyStaticFunction), RelocInfo::RUNTIME_ENTRY); + __ nop(); + + __ jmp(&L1); + __ jmp(Operand(ebx, ecx, times_4, 10000)); + ExternalReference after_break_target = + ExternalReference::debug_after_break_target_address(isolate); + __ jmp(Operand::StaticVariable(after_break_target)); + __ jmp(ic, RelocInfo::CODE_TARGET); + __ nop(); + + + Label Ljcc; + __ nop(); + // long jumps + __ j(overflow, &Ljcc); + __ j(no_overflow, &Ljcc); + __ j(below, &Ljcc); + 
__ j(above_equal, &Ljcc); + __ j(equal, &Ljcc); + __ j(not_equal, &Ljcc); + __ j(below_equal, &Ljcc); + __ j(above, &Ljcc); + __ j(sign, &Ljcc); + __ j(not_sign, &Ljcc); + __ j(parity_even, &Ljcc); + __ j(parity_odd, &Ljcc); + __ j(less, &Ljcc); + __ j(greater_equal, &Ljcc); + __ j(less_equal, &Ljcc); + __ j(greater, &Ljcc); + __ nop(); + __ bind(&Ljcc); + // short jumps + __ j(overflow, &Ljcc); + __ j(no_overflow, &Ljcc); + __ j(below, &Ljcc); + __ j(above_equal, &Ljcc); + __ j(equal, &Ljcc); + __ j(not_equal, &Ljcc); + __ j(below_equal, &Ljcc); + __ j(above, &Ljcc); + __ j(sign, &Ljcc); + __ j(not_sign, &Ljcc); + __ j(parity_even, &Ljcc); + __ j(parity_odd, &Ljcc); + __ j(less, &Ljcc); + __ j(greater_equal, &Ljcc); + __ j(less_equal, &Ljcc); + __ j(greater, &Ljcc); + + // 0xD9 instructions + __ nop(); + + __ fld(1); + __ fld1(); + __ fldz(); + __ fldpi(); + __ fabs(); + __ fchs(); + __ fprem(); + __ fprem1(); + __ fincstp(); + __ ftst(); + __ fxch(3); + __ fld_s(Operand(ebx, ecx, times_4, 10000)); + __ fstp_s(Operand(ebx, ecx, times_4, 10000)); + __ ffree(3); + __ fld_d(Operand(ebx, ecx, times_4, 10000)); + __ fstp_d(Operand(ebx, ecx, times_4, 10000)); + __ nop(); + + __ fild_s(Operand(ebx, ecx, times_4, 10000)); + __ fistp_s(Operand(ebx, ecx, times_4, 10000)); + __ fild_d(Operand(ebx, ecx, times_4, 10000)); + __ fistp_d(Operand(ebx, ecx, times_4, 10000)); + __ fnstsw_ax(); + __ nop(); + __ fadd(3); + __ fsub(3); + __ fmul(3); + __ fdiv(3); + + __ faddp(3); + __ fsubp(3); + __ fmulp(3); + __ fdivp(3); + __ fcompp(); + __ fwait(); + __ frndint(); + __ fninit(); + __ nop(); + + // xchg. + { + __ xchg(eax, eax); + __ xchg(eax, ebx); + __ xchg(ebx, ebx); + __ xchg(ebx, Operand(esp, 12)); + } + + // Nop instructions + for (int i = 0; i < 16; i++) { + __ Nop(i); + } + + __ ret(0); + + CodeDesc desc; + assm.GetCode(&desc); + Handle<Code> code = isolate->factory()->NewCode( + desc, Code::ComputeFlags(Code::STUB), Handle<Code>()); + USE(code); +#ifdef OBJECT_PRINT + OFStream os(stdout); + code->Print(os); + byte* begin = code->instruction_start(); + byte* end = begin + code->instruction_size(); + disasm::Disassembler::Disassemble(stdout, begin, end); +#endif +} + +#undef __ diff --git a/deps/v8/test/cctest/test-diy-fp.cc b/deps/v8/test/cctest/test-diy-fp.cc index 145e317ff8a..255118e9673 100644 --- a/deps/v8/test/cctest/test-diy-fp.cc +++ b/deps/v8/test/cctest/test-diy-fp.cc @@ -27,11 +27,11 @@ #include <stdlib.h> -#include "v8.h" +#include "src/v8.h" -#include "platform.h" -#include "cctest.h" -#include "diy-fp.h" +#include "src/base/platform/platform.h" +#include "src/diy-fp.h" +#include "test/cctest/cctest.h" using namespace v8::internal; diff --git a/deps/v8/test/cctest/test-double.cc b/deps/v8/test/cctest/test-double.cc index 2c9f0c21bb6..16dcb37101e 100644 --- a/deps/v8/test/cctest/test-double.cc +++ b/deps/v8/test/cctest/test-double.cc @@ -27,12 +27,12 @@ #include <stdlib.h> -#include "v8.h" +#include "src/v8.h" -#include "platform.h" -#include "cctest.h" -#include "diy-fp.h" -#include "double.h" +#include "src/base/platform/platform.h" +#include "src/diy-fp.h" +#include "src/double.h" +#include "test/cctest/cctest.h" using namespace v8::internal; @@ -105,7 +105,7 @@ TEST(IsDenormal) { TEST(IsSpecial) { CHECK(Double(V8_INFINITY).IsSpecial()); CHECK(Double(-V8_INFINITY).IsSpecial()); - CHECK(Double(OS::nan_value()).IsSpecial()); + CHECK(Double(v8::base::OS::nan_value()).IsSpecial()); uint64_t bits = V8_2PART_UINT64_C(0xFFF12345, 00000000); CHECK(Double(bits).IsSpecial()); // Denormals are 
not special: @@ -128,7 +128,7 @@ TEST(IsSpecial) { TEST(IsInfinite) { CHECK(Double(V8_INFINITY).IsInfinite()); CHECK(Double(-V8_INFINITY).IsInfinite()); - CHECK(!Double(OS::nan_value()).IsInfinite()); + CHECK(!Double(v8::base::OS::nan_value()).IsInfinite()); CHECK(!Double(0.0).IsInfinite()); CHECK(!Double(-0.0).IsInfinite()); CHECK(!Double(1.0).IsInfinite()); diff --git a/deps/v8/test/cctest/test-dtoa.cc b/deps/v8/test/cctest/test-dtoa.cc index 66c2aafc664..3f396a5d1b6 100644 --- a/deps/v8/test/cctest/test-dtoa.cc +++ b/deps/v8/test/cctest/test-dtoa.cc @@ -27,16 +27,16 @@ #include <stdlib.h> -#include "v8.h" +#include "src/v8.h" -#include "dtoa.h" +#include "src/dtoa.h" -#include "cctest.h" -#include "double.h" -#include "gay-fixed.h" -#include "gay-precision.h" -#include "gay-shortest.h" -#include "platform.h" +#include "src/base/platform/platform.h" +#include "src/double.h" +#include "test/cctest/cctest.h" +#include "test/cctest/gay-fixed.h" +#include "test/cctest/gay-precision.h" +#include "test/cctest/gay-shortest.h" using namespace v8::internal; diff --git a/deps/v8/test/cctest/test-fast-dtoa.cc b/deps/v8/test/cctest/test-fast-dtoa.cc index 46f975799fc..52198a45f24 100644 --- a/deps/v8/test/cctest/test-fast-dtoa.cc +++ b/deps/v8/test/cctest/test-fast-dtoa.cc @@ -27,15 +27,15 @@ #include <stdlib.h> -#include "v8.h" - -#include "platform.h" -#include "cctest.h" -#include "diy-fp.h" -#include "double.h" -#include "fast-dtoa.h" -#include "gay-precision.h" -#include "gay-shortest.h" +#include "src/v8.h" + +#include "src/base/platform/platform.h" +#include "src/diy-fp.h" +#include "src/double.h" +#include "src/fast-dtoa.h" +#include "test/cctest/cctest.h" +#include "test/cctest/gay-precision.h" +#include "test/cctest/gay-shortest.h" using namespace v8::internal; diff --git a/deps/v8/test/cctest/test-fixed-dtoa.cc b/deps/v8/test/cctest/test-fixed-dtoa.cc index 21926f19705..de40d09f1bf 100644 --- a/deps/v8/test/cctest/test-fixed-dtoa.cc +++ b/deps/v8/test/cctest/test-fixed-dtoa.cc @@ -27,13 +27,13 @@ #include <stdlib.h> -#include "v8.h" +#include "src/v8.h" -#include "platform.h" -#include "cctest.h" -#include "double.h" -#include "fixed-dtoa.h" -#include "gay-fixed.h" +#include "src/base/platform/platform.h" +#include "src/double.h" +#include "src/fixed-dtoa.h" +#include "test/cctest/cctest.h" +#include "test/cctest/gay-fixed.h" using namespace v8::internal; diff --git a/deps/v8/test/cctest/test-flags.cc b/deps/v8/test/cctest/test-flags.cc index a1d2405ad58..862b73adba0 100644 --- a/deps/v8/test/cctest/test-flags.cc +++ b/deps/v8/test/cctest/test-flags.cc @@ -27,8 +27,8 @@ #include <stdlib.h> -#include "v8.h" -#include "cctest.h" +#include "src/v8.h" +#include "test/cctest/cctest.h" using namespace v8::internal; diff --git a/deps/v8/test/cctest/test-func-name-inference.cc b/deps/v8/test/cctest/test-func-name-inference.cc index f452b3ed3b6..bc503b58c60 100644 --- a/deps/v8/test/cctest/test-func-name-inference.cc +++ b/deps/v8/test/cctest/test-func-name-inference.cc @@ -26,12 +26,12 @@ // OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. 
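// Note (annotation, not part of the applied patch): the #include rewrites
// in this and the surrounding files all follow one pattern -- headers are
// now named relative to the repository root instead of the including
// directory. Representative before/after pairs taken from these hunks:
//
//   #include "v8.h"        ->  #include "src/v8.h"
//   #include "cctest.h"    ->  #include "test/cctest/cctest.h"
//   #include "platform.h"  ->  #include "src/base/platform/platform.h"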
-#include "v8.h" +#include "src/v8.h" -#include "api.h" -#include "debug.h" -#include "runtime.h" -#include "cctest.h" +#include "src/api.h" +#include "src/debug.h" +#include "src/runtime.h" +#include "test/cctest/cctest.h" using ::v8::internal::CStrVector; diff --git a/deps/v8/test/cctest/test-fuzz-arm64.cc b/deps/v8/test/cctest/test-fuzz-arm64.cc index 0ceb60f7b38..ada609fe78a 100644 --- a/deps/v8/test/cctest/test-fuzz-arm64.cc +++ b/deps/v8/test/cctest/test-fuzz-arm64.cc @@ -23,11 +23,11 @@ // OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. #include <stdlib.h> -#include "cctest.h" +#include "test/cctest/cctest.h" -#include "arm64/decoder-arm64.h" -#include "arm64/decoder-arm64-inl.h" -#include "arm64/disasm-arm64.h" +#include "src/arm64/decoder-arm64.h" +#include "src/arm64/decoder-arm64-inl.h" +#include "src/arm64/disasm-arm64.h" using namespace v8::internal; diff --git a/deps/v8/test/cctest/test-gc-tracer.cc b/deps/v8/test/cctest/test-gc-tracer.cc new file mode 100644 index 00000000000..190644dec1e --- /dev/null +++ b/deps/v8/test/cctest/test-gc-tracer.cc @@ -0,0 +1,125 @@ +// Copyright 2014 the V8 project authors. All rights reserved. +// Redistribution and use in source and binary forms, with or without +// modification, are permitted provided that the following conditions are +// met: +// +// * Redistributions of source code must retain the above copyright +// notice, this list of conditions and the following disclaimer. +// * Redistributions in binary form must reproduce the above +// copyright notice, this list of conditions and the following +// disclaimer in the documentation and/or other materials provided +// with the distribution. +// * Neither the name of Google Inc. nor the names of its +// contributors may be used to endorse or promote products derived +// from this software without specific prior written permission. +// +// THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS +// "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT +// LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR +// A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT +// OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, +// SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT +// LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, +// DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY +// THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT +// (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE +// OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. 
+
+#include <stdlib.h>
+#include <utility>
+
+#include "src/v8.h"
+
+#include "test/cctest/cctest.h"
+
+using namespace v8::internal;
+
+TEST(RingBufferPartialFill) {
+  const int max_size = 6;
+  typedef RingBuffer<int, max_size>::const_iterator Iter;
+  RingBuffer<int, max_size> ring_buffer;
+  CHECK(ring_buffer.empty());
+  CHECK_EQ(static_cast<int>(ring_buffer.size()), 0);
+  CHECK(ring_buffer.begin() == ring_buffer.end());
+
+  // Fill ring_buffer partially: [0, 1, 2]
+  for (int i = 0; i < max_size / 2; i++) ring_buffer.push_back(i);
+
+  CHECK(!ring_buffer.empty());
+  CHECK(static_cast<int>(ring_buffer.size()) == max_size / 2);
+  CHECK(ring_buffer.begin() != ring_buffer.end());
+
+  // Test forward iteration
+  int i = 0;
+  for (Iter iter = ring_buffer.begin(); iter != ring_buffer.end(); ++iter) {
+    CHECK(*iter == i);
+    ++i;
+  }
+  CHECK_EQ(i, 3);  // one past last element.
+
+  // Test backward iteration
+  i = 2;
+  Iter iter = ring_buffer.back();
+  while (true) {
+    CHECK(*iter == i);
+    if (iter == ring_buffer.begin()) break;
+    --iter;
+    --i;
+  }
+  CHECK_EQ(i, 0);
+}
+
+
+TEST(RingBufferWrapAround) {
+  const int max_size = 6;
+  typedef RingBuffer<int, max_size>::const_iterator Iter;
+  RingBuffer<int, max_size> ring_buffer;
+
+  // Fill ring_buffer (wrap around): [9, 10, 11, 12, 13, 14]
+  for (int i = 0; i < 2 * max_size + 3; i++) ring_buffer.push_back(i);
+
+  CHECK(!ring_buffer.empty());
+  CHECK(static_cast<int>(ring_buffer.size()) == max_size);
+  CHECK(ring_buffer.begin() != ring_buffer.end());
+
+  // Test forward iteration
+  int i = 9;
+  for (Iter iter = ring_buffer.begin(); iter != ring_buffer.end(); ++iter) {
+    CHECK(*iter == i);
+    ++i;
+  }
+  CHECK_EQ(i, 15);  // one past last element.
+
+  // Test backward iteration
+  i = 14;
+  Iter iter = ring_buffer.back();
+  while (true) {
+    CHECK(*iter == i);
+    if (iter == ring_buffer.begin()) break;
+    --iter;
+    --i;
+  }
+  CHECK_EQ(i, 9);
+}
+
+
+TEST(RingBufferPushFront) {
+  const int max_size = 6;
+  typedef RingBuffer<int, max_size>::const_iterator Iter;
+  RingBuffer<int, max_size> ring_buffer;
+
+  // Fill ring_buffer (wrap around): [14, 13, 12, 11, 10, 9]
+  for (int i = 0; i < 2 * max_size + 3; i++) ring_buffer.push_front(i);
+
+  CHECK(!ring_buffer.empty());
+  CHECK(static_cast<int>(ring_buffer.size()) == max_size);
+  CHECK(ring_buffer.begin() != ring_buffer.end());
+
+  // Test forward iteration
+  int i = 14;
+  for (Iter iter = ring_buffer.begin(); iter != ring_buffer.end(); ++iter) {
+    CHECK(*iter == i);
+    --i;
+  }
+  CHECK_EQ(i, 8);  // one past last element.
+}
diff --git a/deps/v8/test/cctest/test-global-handles.cc b/deps/v8/test/cctest/test-global-handles.cc
index 1ab90ec5ec1..ee295d6991c 100644
--- a/deps/v8/test/cctest/test-global-handles.cc
+++ b/deps/v8/test/cctest/test-global-handles.cc
@@ -25,9 +25,9 @@
 // (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
 // OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
 
-#include "global-handles.h"
+#include "src/global-handles.h"
 
-#include "cctest.h"
+#include "test/cctest/cctest.h"
 
 using namespace v8::internal;
 using v8::UniqueId;
@@ -56,7 +56,7 @@ class TestRetainedObjectInfo : public v8::RetainedObjectInfo {
   bool has_been_disposed() { return has_been_disposed_; }
 
   virtual void Dispose() {
-    ASSERT(!has_been_disposed_);
+    DCHECK(!has_been_disposed_);
     has_been_disposed_ = true;
   }
 
@@ -121,16 +121,16 @@ TEST(IterateObjectGroupsOldApi) {
     global_handles->IterateObjectGroups(&visitor, &CanSkipCallback);
 
     // CanSkipCallback was called for all objects.
- ASSERT(can_skip_called_objects.length() == 4); - ASSERT(can_skip_called_objects.Contains(*g1s1.location())); - ASSERT(can_skip_called_objects.Contains(*g1s2.location())); - ASSERT(can_skip_called_objects.Contains(*g2s1.location())); - ASSERT(can_skip_called_objects.Contains(*g2s2.location())); + DCHECK(can_skip_called_objects.length() == 4); + DCHECK(can_skip_called_objects.Contains(*g1s1.location())); + DCHECK(can_skip_called_objects.Contains(*g1s2.location())); + DCHECK(can_skip_called_objects.Contains(*g2s1.location())); + DCHECK(can_skip_called_objects.Contains(*g2s2.location())); // Nothing was visited. - ASSERT(visitor.visited.length() == 0); - ASSERT(!info1.has_been_disposed()); - ASSERT(!info2.has_been_disposed()); + DCHECK(visitor.visited.length() == 0); + DCHECK(!info1.has_been_disposed()); + DCHECK(!info2.has_been_disposed()); } // Iterate again, now only skip the second object group. @@ -145,18 +145,18 @@ TEST(IterateObjectGroupsOldApi) { global_handles->IterateObjectGroups(&visitor, &CanSkipCallback); // CanSkipCallback was called for all objects. - ASSERT(can_skip_called_objects.length() == 3 || + DCHECK(can_skip_called_objects.length() == 3 || can_skip_called_objects.length() == 4); - ASSERT(can_skip_called_objects.Contains(*g1s2.location())); - ASSERT(can_skip_called_objects.Contains(*g2s1.location())); - ASSERT(can_skip_called_objects.Contains(*g2s2.location())); + DCHECK(can_skip_called_objects.Contains(*g1s2.location())); + DCHECK(can_skip_called_objects.Contains(*g2s1.location())); + DCHECK(can_skip_called_objects.Contains(*g2s2.location())); // The first group was visited. - ASSERT(visitor.visited.length() == 2); - ASSERT(visitor.visited.Contains(*g1s1.location())); - ASSERT(visitor.visited.Contains(*g1s2.location())); - ASSERT(info1.has_been_disposed()); - ASSERT(!info2.has_been_disposed()); + DCHECK(visitor.visited.length() == 2); + DCHECK(visitor.visited.Contains(*g1s1.location())); + DCHECK(visitor.visited.Contains(*g1s2.location())); + DCHECK(info1.has_been_disposed()); + DCHECK(!info2.has_been_disposed()); } // Iterate again, don't skip anything. @@ -166,15 +166,15 @@ TEST(IterateObjectGroupsOldApi) { global_handles->IterateObjectGroups(&visitor, &CanSkipCallback); // CanSkipCallback was called for all objects. - ASSERT(can_skip_called_objects.length() == 1); - ASSERT(can_skip_called_objects.Contains(*g2s1.location()) || + DCHECK(can_skip_called_objects.length() == 1); + DCHECK(can_skip_called_objects.Contains(*g2s1.location()) || can_skip_called_objects.Contains(*g2s2.location())); // The second group was visited. - ASSERT(visitor.visited.length() == 2); - ASSERT(visitor.visited.Contains(*g2s1.location())); - ASSERT(visitor.visited.Contains(*g2s2.location())); - ASSERT(info2.has_been_disposed()); + DCHECK(visitor.visited.length() == 2); + DCHECK(visitor.visited.Contains(*g2s1.location())); + DCHECK(visitor.visited.Contains(*g2s2.location())); + DCHECK(info2.has_been_disposed()); } } @@ -216,16 +216,16 @@ TEST(IterateObjectGroups) { global_handles->IterateObjectGroups(&visitor, &CanSkipCallback); // CanSkipCallback was called for all objects. 
- ASSERT(can_skip_called_objects.length() == 4); - ASSERT(can_skip_called_objects.Contains(*g1s1.location())); - ASSERT(can_skip_called_objects.Contains(*g1s2.location())); - ASSERT(can_skip_called_objects.Contains(*g2s1.location())); - ASSERT(can_skip_called_objects.Contains(*g2s2.location())); + DCHECK(can_skip_called_objects.length() == 4); + DCHECK(can_skip_called_objects.Contains(*g1s1.location())); + DCHECK(can_skip_called_objects.Contains(*g1s2.location())); + DCHECK(can_skip_called_objects.Contains(*g2s1.location())); + DCHECK(can_skip_called_objects.Contains(*g2s2.location())); // Nothing was visited. - ASSERT(visitor.visited.length() == 0); - ASSERT(!info1.has_been_disposed()); - ASSERT(!info2.has_been_disposed()); + DCHECK(visitor.visited.length() == 0); + DCHECK(!info1.has_been_disposed()); + DCHECK(!info2.has_been_disposed()); } // Iterate again, now only skip the second object group. @@ -240,18 +240,18 @@ TEST(IterateObjectGroups) { global_handles->IterateObjectGroups(&visitor, &CanSkipCallback); // CanSkipCallback was called for all objects. - ASSERT(can_skip_called_objects.length() == 3 || + DCHECK(can_skip_called_objects.length() == 3 || can_skip_called_objects.length() == 4); - ASSERT(can_skip_called_objects.Contains(*g1s2.location())); - ASSERT(can_skip_called_objects.Contains(*g2s1.location())); - ASSERT(can_skip_called_objects.Contains(*g2s2.location())); + DCHECK(can_skip_called_objects.Contains(*g1s2.location())); + DCHECK(can_skip_called_objects.Contains(*g2s1.location())); + DCHECK(can_skip_called_objects.Contains(*g2s2.location())); // The first group was visited. - ASSERT(visitor.visited.length() == 2); - ASSERT(visitor.visited.Contains(*g1s1.location())); - ASSERT(visitor.visited.Contains(*g1s2.location())); - ASSERT(info1.has_been_disposed()); - ASSERT(!info2.has_been_disposed()); + DCHECK(visitor.visited.length() == 2); + DCHECK(visitor.visited.Contains(*g1s1.location())); + DCHECK(visitor.visited.Contains(*g1s2.location())); + DCHECK(info1.has_been_disposed()); + DCHECK(!info2.has_been_disposed()); } // Iterate again, don't skip anything. @@ -261,15 +261,15 @@ TEST(IterateObjectGroups) { global_handles->IterateObjectGroups(&visitor, &CanSkipCallback); // CanSkipCallback was called for all objects. - ASSERT(can_skip_called_objects.length() == 1); - ASSERT(can_skip_called_objects.Contains(*g2s1.location()) || + DCHECK(can_skip_called_objects.length() == 1); + DCHECK(can_skip_called_objects.Contains(*g2s1.location()) || can_skip_called_objects.Contains(*g2s2.location())); // The second group was visited. 
- ASSERT(visitor.visited.length() == 2); - ASSERT(visitor.visited.Contains(*g2s1.location())); - ASSERT(visitor.visited.Contains(*g2s2.location())); - ASSERT(info2.has_been_disposed()); + DCHECK(visitor.visited.length() == 2); + DCHECK(visitor.visited.Contains(*g2s1.location())); + DCHECK(visitor.visited.Contains(*g2s2.location())); + DCHECK(info2.has_been_disposed()); } } @@ -306,16 +306,16 @@ TEST(ImplicitReferences) { List<ImplicitRefGroup*>* implicit_refs = global_handles->implicit_ref_groups(); USE(implicit_refs); - ASSERT(implicit_refs->length() == 2); - ASSERT(implicit_refs->at(0)->parent == + DCHECK(implicit_refs->length() == 2); + DCHECK(implicit_refs->at(0)->parent == reinterpret_cast<HeapObject**>(g1s1.location())); - ASSERT(implicit_refs->at(0)->length == 2); - ASSERT(implicit_refs->at(0)->children[0] == g1c1.location()); - ASSERT(implicit_refs->at(0)->children[1] == g1c2.location()); - ASSERT(implicit_refs->at(1)->parent == + DCHECK(implicit_refs->at(0)->length == 2); + DCHECK(implicit_refs->at(0)->children[0] == g1c1.location()); + DCHECK(implicit_refs->at(0)->children[1] == g1c2.location()); + DCHECK(implicit_refs->at(1)->parent == reinterpret_cast<HeapObject**>(g2s1.location())); - ASSERT(implicit_refs->at(1)->length == 1); - ASSERT(implicit_refs->at(1)->children[0] == g2c1.location()); + DCHECK(implicit_refs->at(1)->length == 1); + DCHECK(implicit_refs->at(1)->children[0] == g2c1.location()); global_handles->RemoveObjectGroups(); global_handles->RemoveImplicitRefGroups(); } diff --git a/deps/v8/test/cctest/test-global-object.cc b/deps/v8/test/cctest/test-global-object.cc index bbec9df7750..0e2c9408c62 100644 --- a/deps/v8/test/cctest/test-global-object.cc +++ b/deps/v8/test/cctest/test-global-object.cc @@ -25,9 +25,9 @@ // (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE // OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. -#include "v8.h" +#include "src/v8.h" -#include "cctest.h" +#include "test/cctest/cctest.h" using namespace v8; diff --git a/deps/v8/test/cctest/test-hashing.cc b/deps/v8/test/cctest/test-hashing.cc index 9a7d61ddd88..9857f9d88a2 100644 --- a/deps/v8/test/cctest/test-hashing.cc +++ b/deps/v8/test/cctest/test-hashing.cc @@ -27,16 +27,16 @@ #include <stdlib.h> -#include "v8.h" +#include "src/v8.h" -#include "factory.h" -#include "macro-assembler.h" -#include "cctest.h" -#include "code-stubs.h" -#include "objects.h" +#include "src/code-stubs.h" +#include "src/factory.h" +#include "src/macro-assembler.h" +#include "src/objects.h" +#include "test/cctest/cctest.h" #ifdef USE_SIMULATOR -#include "simulator.h" +#include "src/simulator.h" #endif using namespace v8::internal; @@ -50,8 +50,8 @@ typedef uint32_t (*HASH_FUNCTION)(); void generate(MacroAssembler* masm, i::Vector<const uint8_t> string) { // GenerateHashInit takes the first character as an argument so it can't // handle the zero length string. 
- ASSERT(string.length() > 0); -#if V8_TARGET_ARCH_IA32 + DCHECK(string.length() > 0); +#if V8_TARGET_ARCH_IA32 || V8_TARGET_ARCH_X87 __ push(ebx); __ push(ecx); __ mov(eax, Immediate(0)); @@ -114,11 +114,11 @@ void generate(MacroAssembler* masm, i::Vector<const uint8_t> string) { __ Pop(xzr, root); __ Ret(); __ SetStackPointer(old_stack_pointer); -#elif V8_TARGET_ARCH_MIPS +#elif V8_TARGET_ARCH_MIPS || V8_TARGET_ARCH_MIPS64 __ push(kRootRegister); __ InitializeRootRegister(); - __ li(v0, Operand(0)); + __ mov(v0, zero_reg); __ li(t1, Operand(string.at(0))); StringHelper::GenerateHashInit(masm, v0, t1); for (int i = 1; i < string.length(); i++) { @@ -136,7 +136,7 @@ void generate(MacroAssembler* masm, i::Vector<const uint8_t> string) { void generate(MacroAssembler* masm, uint32_t key) { -#if V8_TARGET_ARCH_IA32 +#if V8_TARGET_ARCH_IA32 || V8_TARGET_ARCH_X87 __ push(ebx); __ mov(eax, Immediate(key)); __ GetNumberHash(eax, ebx); @@ -170,7 +170,7 @@ void generate(MacroAssembler* masm, uint32_t key) { __ Pop(xzr, root); __ Ret(); __ SetStackPointer(old_stack_pointer); -#elif V8_TARGET_ARCH_MIPS +#elif V8_TARGET_ARCH_MIPS || V8_TARGET_ARCH_MIPS64 __ push(kRootRegister); __ InitializeRootRegister(); __ li(v0, Operand(key)); diff --git a/deps/v8/test/cctest/test-hashmap.cc b/deps/v8/test/cctest/test-hashmap.cc index 70213c9aa84..1e94bed5938 100644 --- a/deps/v8/test/cctest/test-hashmap.cc +++ b/deps/v8/test/cctest/test-hashmap.cc @@ -27,9 +27,10 @@ #include <stdlib.h> -#include "v8.h" -#include "hashmap.h" -#include "cctest.h" +#include "src/v8.h" +#include "test/cctest/cctest.h" + +#include "src/hashmap.h" using namespace v8::internal; diff --git a/deps/v8/test/cctest/test-heap-profiler.cc b/deps/v8/test/cctest/test-heap-profiler.cc index eeafc7093cc..e456323baed 100644 --- a/deps/v8/test/cctest/test-heap-profiler.cc +++ b/deps/v8/test/cctest/test-heap-profiler.cc @@ -29,16 +29,16 @@ #include <ctype.h> -#include "v8.h" +#include "src/v8.h" -#include "allocation-tracker.h" -#include "cctest.h" -#include "hashmap.h" -#include "heap-profiler.h" -#include "snapshot.h" -#include "debug.h" -#include "utils-inl.h" -#include "../include/v8-profiler.h" +#include "include/v8-profiler.h" +#include "src/allocation-tracker.h" +#include "src/debug.h" +#include "src/hashmap.h" +#include "src/heap-profiler.h" +#include "src/snapshot.h" +#include "src/utils-inl.h" +#include "test/cctest/cctest.h" using i::AllocationTraceNode; using i::AllocationTraceTree; @@ -471,6 +471,174 @@ TEST(HeapSnapshotConsString) { } +TEST(HeapSnapshotSymbol) { + LocalContext env; + v8::HandleScope scope(env->GetIsolate()); + v8::HeapProfiler* heap_profiler = env->GetIsolate()->GetHeapProfiler(); + + CompileRun("a = Symbol('mySymbol');\n"); + const v8::HeapSnapshot* snapshot = + heap_profiler->TakeHeapSnapshot(v8_str("Symbol")); + CHECK(ValidateSnapshot(snapshot)); + const v8::HeapGraphNode* global = GetGlobalObject(snapshot); + const v8::HeapGraphNode* a = + GetProperty(global, v8::HeapGraphEdge::kProperty, "a"); + CHECK_NE(NULL, a); + CHECK_EQ(a->GetType(), v8::HeapGraphNode::kSymbol); + CHECK_EQ(v8_str("symbol"), a->GetName()); + const v8::HeapGraphNode* name = + GetProperty(a, v8::HeapGraphEdge::kInternal, "name"); + CHECK_NE(NULL, name); + CHECK_EQ(v8_str("mySymbol"), name->GetName()); +} + + +TEST(HeapSnapshotWeakCollection) { + LocalContext env; + v8::HandleScope scope(env->GetIsolate()); + v8::HeapProfiler* heap_profiler = env->GetIsolate()->GetHeapProfiler(); + + CompileRun( + "k = {}; v = {}; s = 'str';\n" + "ws = new 
WeakSet(); ws.add(k); ws.add(v); ws[s] = s;\n" + "wm = new WeakMap(); wm.set(k, v); wm[s] = s;\n"); + const v8::HeapSnapshot* snapshot = + heap_profiler->TakeHeapSnapshot(v8_str("WeakCollections")); + CHECK(ValidateSnapshot(snapshot)); + const v8::HeapGraphNode* global = GetGlobalObject(snapshot); + const v8::HeapGraphNode* k = + GetProperty(global, v8::HeapGraphEdge::kProperty, "k"); + CHECK_NE(NULL, k); + const v8::HeapGraphNode* v = + GetProperty(global, v8::HeapGraphEdge::kProperty, "v"); + CHECK_NE(NULL, v); + const v8::HeapGraphNode* s = + GetProperty(global, v8::HeapGraphEdge::kProperty, "s"); + CHECK_NE(NULL, s); + + const v8::HeapGraphNode* ws = + GetProperty(global, v8::HeapGraphEdge::kProperty, "ws"); + CHECK_NE(NULL, ws); + CHECK_EQ(v8::HeapGraphNode::kObject, ws->GetType()); + CHECK_EQ(v8_str("WeakSet"), ws->GetName()); + + const v8::HeapGraphNode* ws_table = + GetProperty(ws, v8::HeapGraphEdge::kInternal, "table"); + CHECK_EQ(v8::HeapGraphNode::kArray, ws_table->GetType()); + CHECK_GT(ws_table->GetChildrenCount(), 0); + int weak_entries = 0; + for (int i = 0, count = ws_table->GetChildrenCount(); i < count; ++i) { + const v8::HeapGraphEdge* prop = ws_table->GetChild(i); + if (prop->GetType() != v8::HeapGraphEdge::kWeak) continue; + if (k->GetId() == prop->GetToNode()->GetId()) { + ++weak_entries; + } + } + CHECK_EQ(1, weak_entries); + const v8::HeapGraphNode* ws_s = + GetProperty(ws, v8::HeapGraphEdge::kProperty, "str"); + CHECK_NE(NULL, ws_s); + CHECK_EQ(static_cast<int>(s->GetId()), static_cast<int>(ws_s->GetId())); + + const v8::HeapGraphNode* wm = + GetProperty(global, v8::HeapGraphEdge::kProperty, "wm"); + CHECK_NE(NULL, wm); + CHECK_EQ(v8::HeapGraphNode::kObject, wm->GetType()); + CHECK_EQ(v8_str("WeakMap"), wm->GetName()); + + const v8::HeapGraphNode* wm_table = + GetProperty(wm, v8::HeapGraphEdge::kInternal, "table"); + CHECK_EQ(v8::HeapGraphNode::kArray, wm_table->GetType()); + CHECK_GT(wm_table->GetChildrenCount(), 0); + weak_entries = 0; + for (int i = 0, count = wm_table->GetChildrenCount(); i < count; ++i) { + const v8::HeapGraphEdge* prop = wm_table->GetChild(i); + if (prop->GetType() != v8::HeapGraphEdge::kWeak) continue; + const v8::SnapshotObjectId to_node_id = prop->GetToNode()->GetId(); + if (to_node_id == k->GetId() || to_node_id == v->GetId()) { + ++weak_entries; + } + } + CHECK_EQ(2, weak_entries); + const v8::HeapGraphNode* wm_s = + GetProperty(wm, v8::HeapGraphEdge::kProperty, "str"); + CHECK_NE(NULL, wm_s); + CHECK_EQ(static_cast<int>(s->GetId()), static_cast<int>(wm_s->GetId())); +} + + +TEST(HeapSnapshotCollection) { + LocalContext env; + v8::HandleScope scope(env->GetIsolate()); + v8::HeapProfiler* heap_profiler = env->GetIsolate()->GetHeapProfiler(); + + CompileRun( + "k = {}; v = {}; s = 'str';\n" + "set = new Set(); set.add(k); set.add(v); set[s] = s;\n" + "map = new Map(); map.set(k, v); map[s] = s;\n"); + const v8::HeapSnapshot* snapshot = + heap_profiler->TakeHeapSnapshot(v8_str("Collections")); + CHECK(ValidateSnapshot(snapshot)); + const v8::HeapGraphNode* global = GetGlobalObject(snapshot); + const v8::HeapGraphNode* k = + GetProperty(global, v8::HeapGraphEdge::kProperty, "k"); + CHECK_NE(NULL, k); + const v8::HeapGraphNode* v = + GetProperty(global, v8::HeapGraphEdge::kProperty, "v"); + CHECK_NE(NULL, v); + const v8::HeapGraphNode* s = + GetProperty(global, v8::HeapGraphEdge::kProperty, "s"); + CHECK_NE(NULL, s); + + const v8::HeapGraphNode* set = + GetProperty(global, v8::HeapGraphEdge::kProperty, "set"); + CHECK_NE(NULL, set); + 
CHECK_EQ(v8::HeapGraphNode::kObject, set->GetType()); + CHECK_EQ(v8_str("Set"), set->GetName()); + + const v8::HeapGraphNode* set_table = + GetProperty(set, v8::HeapGraphEdge::kInternal, "table"); + CHECK_EQ(v8::HeapGraphNode::kArray, set_table->GetType()); + CHECK_GT(set_table->GetChildrenCount(), 0); + int entries = 0; + for (int i = 0, count = set_table->GetChildrenCount(); i < count; ++i) { + const v8::HeapGraphEdge* prop = set_table->GetChild(i); + const v8::SnapshotObjectId to_node_id = prop->GetToNode()->GetId(); + if (to_node_id == k->GetId() || to_node_id == v->GetId()) { + ++entries; + } + } + CHECK_EQ(2, entries); + const v8::HeapGraphNode* set_s = + GetProperty(set, v8::HeapGraphEdge::kProperty, "str"); + CHECK_NE(NULL, set_s); + CHECK_EQ(static_cast<int>(s->GetId()), static_cast<int>(set_s->GetId())); + + const v8::HeapGraphNode* map = + GetProperty(global, v8::HeapGraphEdge::kProperty, "map"); + CHECK_NE(NULL, map); + CHECK_EQ(v8::HeapGraphNode::kObject, map->GetType()); + CHECK_EQ(v8_str("Map"), map->GetName()); + + const v8::HeapGraphNode* map_table = + GetProperty(map, v8::HeapGraphEdge::kInternal, "table"); + CHECK_EQ(v8::HeapGraphNode::kArray, map_table->GetType()); + CHECK_GT(map_table->GetChildrenCount(), 0); + entries = 0; + for (int i = 0, count = map_table->GetChildrenCount(); i < count; ++i) { + const v8::HeapGraphEdge* prop = map_table->GetChild(i); + const v8::SnapshotObjectId to_node_id = prop->GetToNode()->GetId(); + if (to_node_id == k->GetId() || to_node_id == v->GetId()) { + ++entries; + } + } + CHECK_EQ(2, entries); + const v8::HeapGraphNode* map_s = + GetProperty(map, v8::HeapGraphEdge::kProperty, "str"); + CHECK_NE(NULL, map_s); + CHECK_EQ(static_cast<int>(s->GetId()), static_cast<int>(map_s->GetId())); +} + TEST(HeapSnapshotInternalReferences) { v8::Isolate* isolate = CcTest::isolate(); @@ -690,11 +858,11 @@ class TestJSONStream : public v8::OutputStream { if (abort_countdown_ == 0) return kAbort; CHECK_GT(chars_written, 0); i::Vector<char> chunk = buffer_.AddBlock(chars_written, '\0'); - i::OS::MemCopy(chunk.start(), buffer, chars_written); + i::MemCopy(chunk.start(), buffer, chars_written); return kContinue; } virtual WriteResult WriteUint32Chunk(uint32_t* buffer, int chars_written) { - ASSERT(false); + DCHECK(false); return kAbort; } void WriteTo(i::Vector<char> dest) { buffer_.WriteTo(dest); } @@ -863,13 +1031,13 @@ class TestStatsStream : public v8::OutputStream { virtual ~TestStatsStream() {} virtual void EndOfStream() { ++eos_signaled_; } virtual WriteResult WriteAsciiChunk(char* buffer, int chars_written) { - ASSERT(false); + DCHECK(false); return kAbort; } virtual WriteResult WriteHeapStatsChunk(v8::HeapStatsUpdate* buffer, int updates_written) { ++intervals_count_; - ASSERT(updates_written); + DCHECK(updates_written); updates_written_ += updates_written; entries_count_ = 0; if (first_interval_index_ == -1 && updates_written != 0) @@ -1554,9 +1722,9 @@ TEST(GlobalObjectFields) { const v8::HeapGraphNode* global_context = GetProperty(global, v8::HeapGraphEdge::kInternal, "global_context"); CHECK_NE(NULL, global_context); - const v8::HeapGraphNode* global_receiver = - GetProperty(global, v8::HeapGraphEdge::kInternal, "global_receiver"); - CHECK_NE(NULL, global_receiver); + const v8::HeapGraphNode* global_proxy = + GetProperty(global, v8::HeapGraphEdge::kInternal, "global_proxy"); + CHECK_NE(NULL, global_proxy); } @@ -1601,7 +1769,7 @@ TEST(GetHeapValueForNode) { v8::HandleScope scope(env->GetIsolate()); v8::HeapProfiler* heap_profiler = 
env->GetIsolate()->GetHeapProfiler(); - CompileRun("a = { s_prop: \'value\', n_prop: 0.1 };"); + CompileRun("a = { s_prop: \'value\', n_prop: \'value2\' };"); const v8::HeapSnapshot* snapshot = heap_profiler->TakeHeapSnapshot(v8_str("value")); CHECK(ValidateSnapshot(snapshot)); @@ -1622,10 +1790,9 @@ TEST(GetHeapValueForNode) { CHECK(js_s_prop == heap_profiler->FindObjectById(s_prop->GetId())); const v8::HeapGraphNode* n_prop = GetProperty(obj, v8::HeapGraphEdge::kProperty, "n_prop"); - v8::Local<v8::Number> js_n_prop = - js_obj->Get(v8_str("n_prop")).As<v8::Number>(); - CHECK(js_n_prop->NumberValue() == - heap_profiler->FindObjectById(n_prop->GetId())->NumberValue()); + v8::Local<v8::String> js_n_prop = + js_obj->Get(v8_str("n_prop")).As<v8::String>(); + CHECK(js_n_prop == heap_profiler->FindObjectById(n_prop->GetId())); } @@ -1888,7 +2055,7 @@ TEST(NoDebugObjectInSnapshot) { v8::HandleScope scope(env->GetIsolate()); v8::HeapProfiler* heap_profiler = env->GetIsolate()->GetHeapProfiler(); - CcTest::i_isolate()->debug()->Load(); + CHECK(CcTest::i_isolate()->debug()->Load()); CompileRun("foo = {};"); const v8::HeapSnapshot* snapshot = heap_profiler->TakeHeapSnapshot(v8_str("snapshot")); @@ -2017,7 +2184,7 @@ TEST(ManyLocalsInSharedContext) { // ... well check just every 15th because otherwise it's too slow in debug. for (int i = 0; i < num_objects - 1; i += 15) { i::EmbeddedVector<char, 100> var_name; - i::OS::SNPrintF(var_name, "f_%d", i); + i::SNPrintF(var_name, "f_%d", i); const v8::HeapGraphNode* f_object = GetProperty( context_object, v8::HeapGraphEdge::kContextVariable, var_name.start()); CHECK_NE(NULL, f_object); @@ -2112,7 +2279,7 @@ static const v8::HeapGraphNode* GetNodeByPath(const v8::HeapSnapshot* snapshot, v8::String::Utf8Value edge_name(edge->GetName()); v8::String::Utf8Value node_name(to_node->GetName()); i::EmbeddedVector<char, 100> name; - i::OS::SNPrintF(name, "%s::%s", *edge_name, *node_name); + i::SNPrintF(name, "%s::%s", *edge_name, *node_name); if (strstr(name.start(), path[current_depth])) { node = to_node; break; @@ -2239,7 +2406,7 @@ TEST(ArrayGrowLeftTrim) { "for (var i = 0; i < 3; ++i)\n" " a.shift();\n"); - const char* names[] = { "(anonymous function)" }; + const char* names[] = {""}; AllocationTracker* tracker = reinterpret_cast<i::HeapProfiler*>(heap_profiler)->allocation_tracker(); CHECK_NE(NULL, tracker); @@ -2274,8 +2441,7 @@ TEST(TrackHeapAllocations) { // Print for better diagnostics in case of failure. tracker->trace_tree()->Print(tracker); - const char* names[] = - { "(anonymous function)", "start", "f_0_0", "f_0_1", "f_0_2" }; + const char* names[] = {"", "start", "f_0_0", "f_0_1", "f_0_2"}; AllocationTraceNode* node = FindNode(tracker, Vector<const char*>(names, ARRAY_SIZE(names))); CHECK_NE(NULL, node); @@ -2310,7 +2476,7 @@ TEST(TrackBumpPointerAllocations) { LocalContext env; v8::HeapProfiler* heap_profiler = env->GetIsolate()->GetHeapProfiler(); - const char* names[] = { "(anonymous function)", "start", "f_0", "f_1" }; + const char* names[] = {"", "start", "f_0", "f_1"}; // First check that normally all allocations are recorded. 
{ heap_profiler->StartTrackingHeapObjects(true); @@ -2444,7 +2610,7 @@ TEST(ArrayBufferSharedBackingStore) { CHECK_EQ(1024, static_cast<int>(ab_contents.ByteLength())); void* data = ab_contents.Data(); - ASSERT(data != NULL); + DCHECK(data != NULL); v8::Local<v8::ArrayBuffer> ab2 = v8::ArrayBuffer::New(isolate, data, ab_contents.ByteLength()); CHECK(ab2->IsExternal()); diff --git a/deps/v8/test/cctest/test-heap.cc b/deps/v8/test/cctest/test-heap.cc index 3cc61ed5fdf..ab000dc6a6e 100644 --- a/deps/v8/test/cctest/test-heap.cc +++ b/deps/v8/test/cctest/test-heap.cc @@ -28,37 +28,18 @@ #include <stdlib.h> #include <utility> -#include "v8.h" +#include "src/v8.h" -#include "compilation-cache.h" -#include "execution.h" -#include "factory.h" -#include "macro-assembler.h" -#include "global-handles.h" -#include "stub-cache.h" -#include "cctest.h" +#include "src/compilation-cache.h" +#include "src/execution.h" +#include "src/factory.h" +#include "src/global-handles.h" +#include "src/macro-assembler.h" +#include "src/stub-cache.h" +#include "test/cctest/cctest.h" using namespace v8::internal; -// Go through all incremental marking steps in one swoop. -static void SimulateIncrementalMarking() { - MarkCompactCollector* collector = CcTest::heap()->mark_compact_collector(); - IncrementalMarking* marking = CcTest::heap()->incremental_marking(); - if (collector->IsConcurrentSweepingInProgress()) { - collector->WaitUntilSweepingCompleted(); - } - CHECK(marking->IsMarking() || marking->IsStopped()); - if (marking->IsStopped()) { - marking->Start(); - } - CHECK(marking->IsMarking()); - while (!marking->IsComplete()) { - marking->Step(MB, IncrementalMarking::NO_GC_VIA_STACK_GUARD); - } - CHECK(marking->IsComplete()); -} - - static void CheckMap(Map* map, int type, int instance_size) { CHECK(map->IsHeapObject()); #ifdef DEBUG @@ -179,7 +160,8 @@ TEST(HeapObjects) { CHECK(value->IsNumber()); CHECK_EQ(Smi::kMaxValue, Handle<Smi>::cast(value)->value()); -#if !defined(V8_TARGET_ARCH_X64) && !defined(V8_TARGET_ARCH_ARM64) +#if !defined(V8_TARGET_ARCH_X64) && !defined(V8_TARGET_ARCH_ARM64) && \ + !defined(V8_TARGET_ARCH_MIPS64) // TODO(lrn): We need a NumberFromIntptr function in order to test this. value = factory->NewNumberFromInt(Smi::kMinValue - 1); CHECK(value->IsHeapNumber()); @@ -209,7 +191,9 @@ TEST(HeapObjects) { Handle<String> object_string = Handle<String>::cast(factory->Object_string()); Handle<GlobalObject> global(CcTest::i_isolate()->context()->global_object()); - CHECK(JSReceiver::HasLocalProperty(global, object_string)); + v8::Maybe<bool> maybe = JSReceiver::HasOwnProperty(global, object_string); + CHECK(maybe.has_value); + CHECK(maybe.value); // Check ToString for oddballs CheckOddball(isolate, heap->true_value(), "true"); @@ -260,18 +244,12 @@ TEST(GarbageCollection) { { HandleScope inner_scope(isolate); // Allocate a function and keep it in global object's property. - Handle<JSFunction> function = factory->NewFunctionWithPrototype( - name, factory->undefined_value()); - Handle<Map> initial_map = - factory->NewMap(JS_OBJECT_TYPE, JSObject::kHeaderSize); - function->set_initial_map(*initial_map); - JSReceiver::SetProperty(global, name, function, NONE, SLOPPY).Check(); + Handle<JSFunction> function = factory->NewFunction(name); + JSReceiver::SetProperty(global, name, function, SLOPPY).Check(); // Allocate an object. Unrooted after leaving the scope. 
Handle<JSObject> obj = factory->NewJSObject(function); - JSReceiver::SetProperty( - obj, prop_name, twenty_three, NONE, SLOPPY).Check(); - JSReceiver::SetProperty( - obj, prop_namex, twenty_four, NONE, SLOPPY).Check(); + JSReceiver::SetProperty(obj, prop_name, twenty_three, SLOPPY).Check(); + JSReceiver::SetProperty(obj, prop_namex, twenty_four, SLOPPY).Check(); CHECK_EQ(Smi::FromInt(23), *Object::GetProperty(obj, prop_name).ToHandleChecked()); @@ -282,7 +260,9 @@ TEST(GarbageCollection) { heap->CollectGarbage(NEW_SPACE); // Function should be alive. - CHECK(JSReceiver::HasLocalProperty(global, name)); + v8::Maybe<bool> maybe = JSReceiver::HasOwnProperty(global, name); + CHECK(maybe.has_value); + CHECK(maybe.value); // Check function is retained. Handle<Object> func_value = Object::GetProperty(global, name).ToHandleChecked(); @@ -293,15 +273,16 @@ TEST(GarbageCollection) { HandleScope inner_scope(isolate); // Allocate another object, make it reachable from global. Handle<JSObject> obj = factory->NewJSObject(function); - JSReceiver::SetProperty(global, obj_name, obj, NONE, SLOPPY).Check(); - JSReceiver::SetProperty( - obj, prop_name, twenty_three, NONE, SLOPPY).Check(); + JSReceiver::SetProperty(global, obj_name, obj, SLOPPY).Check(); + JSReceiver::SetProperty(obj, prop_name, twenty_three, SLOPPY).Check(); } // After gc, it should survive. heap->CollectGarbage(NEW_SPACE); - CHECK(JSReceiver::HasLocalProperty(global, obj_name)); + maybe = JSReceiver::HasOwnProperty(global, obj_name); + CHECK(maybe.has_value); + CHECK(maybe.value); Handle<Object> obj = Object::GetProperty(global, obj_name).ToHandleChecked(); CHECK(obj->IsJSObject()); @@ -623,23 +604,18 @@ TEST(FunctionAllocation) { v8::HandleScope sc(CcTest::isolate()); Handle<String> name = factory->InternalizeUtf8String("theFunction"); - Handle<JSFunction> function = factory->NewFunctionWithPrototype( - name, factory->undefined_value()); - Handle<Map> initial_map = - factory->NewMap(JS_OBJECT_TYPE, JSObject::kHeaderSize); - function->set_initial_map(*initial_map); + Handle<JSFunction> function = factory->NewFunction(name); Handle<Smi> twenty_three(Smi::FromInt(23), isolate); Handle<Smi> twenty_four(Smi::FromInt(24), isolate); Handle<String> prop_name = factory->InternalizeUtf8String("theSlot"); Handle<JSObject> obj = factory->NewJSObject(function); - JSReceiver::SetProperty(obj, prop_name, twenty_three, NONE, SLOPPY).Check(); + JSReceiver::SetProperty(obj, prop_name, twenty_three, SLOPPY).Check(); CHECK_EQ(Smi::FromInt(23), *Object::GetProperty(obj, prop_name).ToHandleChecked()); // Check that we can add properties to function objects. 
- JSReceiver::SetProperty( - function, prop_name, twenty_four, NONE, SLOPPY).Check(); + JSReceiver::SetProperty(function, prop_name, twenty_four, SLOPPY).Check(); CHECK_EQ(Smi::FromInt(24), *Object::GetProperty(function, prop_name).ToHandleChecked()); } @@ -663,55 +639,85 @@ TEST(ObjectProperties) { Handle<Smi> two(Smi::FromInt(2), isolate); // check for empty - CHECK(!JSReceiver::HasLocalProperty(obj, first)); + v8::Maybe<bool> maybe = JSReceiver::HasOwnProperty(obj, first); + CHECK(maybe.has_value); + CHECK(!maybe.value); // add first - JSReceiver::SetProperty(obj, first, one, NONE, SLOPPY).Check(); - CHECK(JSReceiver::HasLocalProperty(obj, first)); + JSReceiver::SetProperty(obj, first, one, SLOPPY).Check(); + maybe = JSReceiver::HasOwnProperty(obj, first); + CHECK(maybe.has_value); + CHECK(maybe.value); // delete first JSReceiver::DeleteProperty(obj, first, JSReceiver::NORMAL_DELETION).Check(); - CHECK(!JSReceiver::HasLocalProperty(obj, first)); + maybe = JSReceiver::HasOwnProperty(obj, first); + CHECK(maybe.has_value); + CHECK(!maybe.value); // add first and then second - JSReceiver::SetProperty(obj, first, one, NONE, SLOPPY).Check(); - JSReceiver::SetProperty(obj, second, two, NONE, SLOPPY).Check(); - CHECK(JSReceiver::HasLocalProperty(obj, first)); - CHECK(JSReceiver::HasLocalProperty(obj, second)); + JSReceiver::SetProperty(obj, first, one, SLOPPY).Check(); + JSReceiver::SetProperty(obj, second, two, SLOPPY).Check(); + maybe = JSReceiver::HasOwnProperty(obj, first); + CHECK(maybe.has_value); + CHECK(maybe.value); + maybe = JSReceiver::HasOwnProperty(obj, second); + CHECK(maybe.has_value); + CHECK(maybe.value); // delete first and then second JSReceiver::DeleteProperty(obj, first, JSReceiver::NORMAL_DELETION).Check(); - CHECK(JSReceiver::HasLocalProperty(obj, second)); + maybe = JSReceiver::HasOwnProperty(obj, second); + CHECK(maybe.has_value); + CHECK(maybe.value); JSReceiver::DeleteProperty(obj, second, JSReceiver::NORMAL_DELETION).Check(); - CHECK(!JSReceiver::HasLocalProperty(obj, first)); - CHECK(!JSReceiver::HasLocalProperty(obj, second)); + maybe = JSReceiver::HasOwnProperty(obj, first); + CHECK(maybe.has_value); + CHECK(!maybe.value); + maybe = JSReceiver::HasOwnProperty(obj, second); + CHECK(maybe.has_value); + CHECK(!maybe.value); // add first and then second - JSReceiver::SetProperty(obj, first, one, NONE, SLOPPY).Check(); - JSReceiver::SetProperty(obj, second, two, NONE, SLOPPY).Check(); - CHECK(JSReceiver::HasLocalProperty(obj, first)); - CHECK(JSReceiver::HasLocalProperty(obj, second)); + JSReceiver::SetProperty(obj, first, one, SLOPPY).Check(); + JSReceiver::SetProperty(obj, second, two, SLOPPY).Check(); + maybe = JSReceiver::HasOwnProperty(obj, first); + CHECK(maybe.has_value); + CHECK(maybe.value); + maybe = JSReceiver::HasOwnProperty(obj, second); + CHECK(maybe.has_value); + CHECK(maybe.value); // delete second and then first JSReceiver::DeleteProperty(obj, second, JSReceiver::NORMAL_DELETION).Check(); - CHECK(JSReceiver::HasLocalProperty(obj, first)); + maybe = JSReceiver::HasOwnProperty(obj, first); + CHECK(maybe.has_value); + CHECK(maybe.value); JSReceiver::DeleteProperty(obj, first, JSReceiver::NORMAL_DELETION).Check(); - CHECK(!JSReceiver::HasLocalProperty(obj, first)); - CHECK(!JSReceiver::HasLocalProperty(obj, second)); + maybe = JSReceiver::HasOwnProperty(obj, first); + CHECK(maybe.has_value); + CHECK(!maybe.value); + maybe = JSReceiver::HasOwnProperty(obj, second); + CHECK(maybe.has_value); + CHECK(!maybe.value); // check string and internalized string 
match
 const char* string1 = "fisk";
 Handle<String> s1 = factory->NewStringFromAsciiChecked(string1);
- JSReceiver::SetProperty(obj, s1, one, NONE, SLOPPY).Check();
+ JSReceiver::SetProperty(obj, s1, one, SLOPPY).Check();
 Handle<String> s1_string = factory->InternalizeUtf8String(string1);
- CHECK(JSReceiver::HasLocalProperty(obj, s1_string));
+ maybe = JSReceiver::HasOwnProperty(obj, s1_string);
+ CHECK(maybe.has_value);
+ CHECK(maybe.value);
 // check internalized string and string match
 const char* string2 = "fugl";
 Handle<String> s2_string = factory->InternalizeUtf8String(string2);
- JSReceiver::SetProperty(obj, s2_string, one, NONE, SLOPPY).Check();
+ JSReceiver::SetProperty(obj, s2_string, one, SLOPPY).Check();
 Handle<String> s2 = factory->NewStringFromAsciiChecked(string2);
- CHECK(JSReceiver::HasLocalProperty(obj, s2));
+ maybe = JSReceiver::HasOwnProperty(obj, s2);
+ CHECK(maybe.has_value);
+ CHECK(maybe.value);
 }
@@ -722,18 +728,15 @@ TEST(JSObjectMaps) {
 v8::HandleScope sc(CcTest::isolate());
 Handle<String> name = factory->InternalizeUtf8String("theFunction");
- Handle<JSFunction> function = factory->NewFunctionWithPrototype(
- name, factory->undefined_value());
- Handle<Map> initial_map =
- factory->NewMap(JS_OBJECT_TYPE, JSObject::kHeaderSize);
- function->set_initial_map(*initial_map);
+ Handle<JSFunction> function = factory->NewFunction(name);
 Handle<String> prop_name = factory->InternalizeUtf8String("theSlot");
 Handle<JSObject> obj = factory->NewJSObject(function);
+ Handle<Map> initial_map(function->initial_map());
 // Set a property
 Handle<Smi> twenty_three(Smi::FromInt(23), isolate);
- JSReceiver::SetProperty(obj, prop_name, twenty_three, NONE, SLOPPY).Check();
+ JSReceiver::SetProperty(obj, prop_name, twenty_three, SLOPPY).Check();
 CHECK_EQ(Smi::FromInt(23),
 *Object::GetProperty(obj, prop_name).ToHandleChecked());
@@ -811,8 +814,8 @@ TEST(JSObjectCopy) {
 Handle<Smi> one(Smi::FromInt(1), isolate);
 Handle<Smi> two(Smi::FromInt(2), isolate);
- JSReceiver::SetProperty(obj, first, one, NONE, SLOPPY).Check();
- JSReceiver::SetProperty(obj, second, two, NONE, SLOPPY).Check();
+ JSReceiver::SetProperty(obj, first, one, SLOPPY).Check();
+ JSReceiver::SetProperty(obj, second, two, SLOPPY).Check();
 JSReceiver::SetElement(obj, 0, first, NONE, SLOPPY).Check();
 JSReceiver::SetElement(obj, 1, second, NONE, SLOPPY).Check();
@@ -837,8 +840,8 @@ TEST(JSObjectCopy) {
 CHECK_EQ(*value1, *value2);
 // Flip the values.
- JSReceiver::SetProperty(clone, first, two, NONE, SLOPPY).Check();
- JSReceiver::SetProperty(clone, second, one, NONE, SLOPPY).Check();
+ JSReceiver::SetProperty(clone, first, two, SLOPPY).Check();
+ JSReceiver::SetProperty(clone, second, one, SLOPPY).Check();
 JSReceiver::SetElement(clone, 0, second, NONE, SLOPPY).Check();
 JSReceiver::SetElement(clone, 1, first, NONE, SLOPPY).Check();
@@ -901,7 +904,6 @@ TEST(StringAllocation) {
 static int ObjectsFoundInHeap(Heap* heap, Handle<Object> objs[], int size) {
 // Count the number of objects found in the heap.
 int found_count = 0;
- heap->EnsureHeapIsIterable();
 HeapIterator iterator(heap);
 for (HeapObject* obj = iterator.next(); obj != NULL; obj = iterator.next()) {
 for (int i = 0; i < size; i++) {
@@ -1030,7 +1032,9 @@ TEST(Regression39128) {
 CHECK_EQ(0, FixedArray::cast(jsobject->elements())->length());
 CHECK_EQ(0, jsobject->properties()->length());
 // Create a reference to object in new space in jsobject.
- jsobject->FastPropertyAtPut(-1, array); + FieldIndex index = FieldIndex::ForInObjectOffset( + JSObject::kHeaderSize - kPointerSize); + jsobject->FastPropertyAtPut(index, array); CHECK_EQ(0, static_cast<int>(*limit_addr - *top_addr)); @@ -1200,7 +1204,7 @@ TEST(TestCodeFlushingIncremental) { // Simulate several GCs that use incremental marking. const int kAgingThreshold = 6; for (int i = 0; i < kAgingThreshold; i++) { - SimulateIncrementalMarking(); + SimulateIncrementalMarking(CcTest::heap()); CcTest::heap()->CollectAllGarbage(Heap::kNoGCFlags); } CHECK(!function->shared()->is_compiled() || function->IsOptimized()); @@ -1214,7 +1218,7 @@ TEST(TestCodeFlushingIncremental) { // Simulate several GCs that use incremental marking but make sure // the loop breaks once the function is enqueued as a candidate. for (int i = 0; i < kAgingThreshold; i++) { - SimulateIncrementalMarking(); + SimulateIncrementalMarking(CcTest::heap()); if (!function->next_function_link()->IsUndefined()) break; CcTest::heap()->CollectAllGarbage(Heap::kNoGCFlags); } @@ -1290,7 +1294,7 @@ TEST(TestCodeFlushingIncrementalScavenge) { // Simulate incremental marking so that the functions are enqueued as // code flushing candidates. Then kill one of the functions. Finally // perform a scavenge while incremental marking is still running. - SimulateIncrementalMarking(); + SimulateIncrementalMarking(CcTest::heap()); *function2.location() = NULL; CcTest::heap()->CollectGarbage(NEW_SPACE, "test scavenge while marking"); @@ -1344,7 +1348,7 @@ TEST(TestCodeFlushingIncrementalAbort) { // Simulate incremental marking so that the function is enqueued as // code flushing candidate. - SimulateIncrementalMarking(); + SimulateIncrementalMarking(heap); // Enable the debugger and add a breakpoint while incremental marking // is running so that incremental marking aborts and code flushing is @@ -1604,8 +1608,8 @@ TEST(TestSizeOfObjects) { CcTest::heap()->CollectAllGarbage(Heap::kNoGCFlags); CcTest::heap()->CollectAllGarbage(Heap::kNoGCFlags); MarkCompactCollector* collector = CcTest::heap()->mark_compact_collector(); - if (collector->IsConcurrentSweepingInProgress()) { - collector->WaitUntilSweepingCompleted(); + if (collector->sweeping_in_progress()) { + collector->EnsureSweepingCompleted(); } int initial_size = static_cast<int>(CcTest::heap()->SizeOfObjects()); @@ -1631,8 +1635,8 @@ TEST(TestSizeOfObjects) { CHECK_EQ(initial_size, static_cast<int>(CcTest::heap()->SizeOfObjects())); // Waiting for sweeper threads should not change heap size. 
- if (collector->IsConcurrentSweepingInProgress()) { - collector->WaitUntilSweepingCompleted(); + if (collector->sweeping_in_progress()) { + collector->EnsureSweepingCompleted(); } CHECK_EQ(initial_size, static_cast<int>(CcTest::heap()->SizeOfObjects())); } @@ -1640,9 +1644,8 @@ TEST(TestSizeOfObjects) { TEST(TestSizeOfObjectsVsHeapIteratorPrecision) { CcTest::InitializeVM(); - CcTest::heap()->EnsureHeapIsIterable(); - intptr_t size_of_objects_1 = CcTest::heap()->SizeOfObjects(); HeapIterator iterator(CcTest::heap()); + intptr_t size_of_objects_1 = CcTest::heap()->SizeOfObjects(); intptr_t size_of_objects_2 = 0; for (HeapObject* obj = iterator.next(); obj != NULL; @@ -1811,7 +1814,7 @@ TEST(LeakNativeContextViaMap) { ctx2->Exit(); v8::Local<v8::Context>::New(isolate, ctx1)->Exit(); ctx1p.Reset(); - v8::V8::ContextDisposedNotification(); + isolate->ContextDisposedNotification(); } CcTest::heap()->CollectAllAvailableGarbage(); CHECK_EQ(2, NumberOfGlobalObjects()); @@ -1857,7 +1860,7 @@ TEST(LeakNativeContextViaFunction) { ctx2->Exit(); ctx1->Exit(); ctx1p.Reset(); - v8::V8::ContextDisposedNotification(); + isolate->ContextDisposedNotification(); } CcTest::heap()->CollectAllAvailableGarbage(); CHECK_EQ(2, NumberOfGlobalObjects()); @@ -1901,7 +1904,7 @@ TEST(LeakNativeContextViaMapKeyed) { ctx2->Exit(); ctx1->Exit(); ctx1p.Reset(); - v8::V8::ContextDisposedNotification(); + isolate->ContextDisposedNotification(); } CcTest::heap()->CollectAllAvailableGarbage(); CHECK_EQ(2, NumberOfGlobalObjects()); @@ -1949,7 +1952,7 @@ TEST(LeakNativeContextViaMapProto) { ctx2->Exit(); ctx1->Exit(); ctx1p.Reset(); - v8::V8::ContextDisposedNotification(); + isolate->ContextDisposedNotification(); } CcTest::heap()->CollectAllAvailableGarbage(); CHECK_EQ(2, NumberOfGlobalObjects()); @@ -2110,8 +2113,8 @@ TEST(ResetSharedFunctionInfoCountersDuringIncrementalMarking) { // The following two calls will increment CcTest::heap()->global_ic_age(). const int kLongIdlePauseInMs = 1000; - v8::V8::ContextDisposedNotification(); - v8::V8::IdleNotification(kLongIdlePauseInMs); + CcTest::isolate()->ContextDisposedNotification(); + CcTest::isolate()->IdleNotification(kLongIdlePauseInMs); while (!marking->IsStopped() && !marking->IsComplete()) { marking->Step(1 * MB, IncrementalMarking::NO_GC_VIA_STACK_GUARD); @@ -2166,8 +2169,8 @@ TEST(ResetSharedFunctionInfoCountersDuringMarkSweep) { // The following two calls will increment CcTest::heap()->global_ic_age(). // Since incremental marking is off, IdleNotification will do full GC. 
 const int kLongIdlePauseInMs = 1000;
- v8::V8::ContextDisposedNotification();
- v8::V8::IdleNotification(kLongIdlePauseInMs);
+ CcTest::isolate()->ContextDisposedNotification();
+ CcTest::isolate()->IdleNotification(kLongIdlePauseInMs);
 CHECK_EQ(CcTest::heap()->global_ic_age(), f->shared()->ic_age());
 CHECK_EQ(0, f->shared()->opt_count());
@@ -2207,100 +2210,70 @@ TEST(OptimizedAllocationAlwaysInNewSpace) {
 TEST(OptimizedPretenuringAllocationFolding) {
 i::FLAG_allow_natives_syntax = true;
- i::FLAG_max_new_space_size = 2;
- i::FLAG_allocation_site_pretenuring = false;
+ i::FLAG_expose_gc = true;
 CcTest::InitializeVM();
 if (!CcTest::i_isolate()->use_crankshaft() || i::FLAG_always_opt) return;
 if (i::FLAG_gc_global || i::FLAG_stress_compaction) return;
 v8::HandleScope scope(CcTest::isolate());
- CcTest::heap()->SetNewSpaceHighPromotionModeActive(true);
- v8::Local<v8::Value> res = CompileRun(
- "function DataObject() {"
- " this.a = 1.1;"
- " this.b = [{}];"
- " this.c = 1.2;"
- " this.d = [{}];"
- " this.e = 1.3;"
- " this.f = [{}];"
- "}"
- "var number_elements = 20000;"
+ // Grow new space until maximum capacity reached.
+ while (!CcTest::heap()->new_space()->IsAtMaximumCapacity()) {
+ CcTest::heap()->new_space()->Grow();
+ }
+
+ i::ScopedVector<char> source(1024);
+ i::SNPrintF(
+ source,
+ "var number_elements = %d;"
 "var elements = new Array();"
 "function f() {"
 " for (var i = 0; i < number_elements; i++) {"
- " elements[i] = new DataObject();"
+ " elements[i] = [[{}], [1.1]];"
 " }"
 " return elements[number_elements-1]"
 "};"
- "f(); f(); f();"
- "%OptimizeFunctionOnNextCall(f);"
- "f();");
-
- Handle<JSObject> o =
- v8::Utils::OpenHandle(*v8::Handle<v8::Object>::Cast(res));
-
- CHECK(CcTest::heap()->InOldDataSpace(o->RawFastPropertyAt(0)));
- CHECK(CcTest::heap()->InOldPointerSpace(o->RawFastPropertyAt(1)));
- CHECK(CcTest::heap()->InOldDataSpace(o->RawFastPropertyAt(2)));
- CHECK(CcTest::heap()->InOldPointerSpace(o->RawFastPropertyAt(3)));
- CHECK(CcTest::heap()->InOldDataSpace(o->RawFastPropertyAt(4)));
- CHECK(CcTest::heap()->InOldPointerSpace(o->RawFastPropertyAt(5)));
-}
+ "f(); gc();"
+ "f(); f();"
+ "%%OptimizeFunctionOnNextCall(f);"
+ "f();",
+ AllocationSite::kPretenureMinimumCreated);
+ v8::Local<v8::Value> res = CompileRun(source.start());
-TEST(OptimizedPretenuringAllocationFoldingBlocks) {
- i::FLAG_allow_natives_syntax = true;
- i::FLAG_max_new_space_size = 2;
- i::FLAG_allocation_site_pretenuring = false;
- CcTest::InitializeVM();
- if (!CcTest::i_isolate()->use_crankshaft() || i::FLAG_always_opt) return;
- if (i::FLAG_gc_global || i::FLAG_stress_compaction) return;
- v8::HandleScope scope(CcTest::isolate());
- CcTest::heap()->SetNewSpaceHighPromotionModeActive(true);
-
- v8::Local<v8::Value> res = CompileRun(
- "var number_elements = 30000;"
- "var elements = new Array(number_elements);"
- "function DataObject() {"
- " this.a = [{}];"
- " this.b = [{}];"
- " this.c = 1.1;"
- " this.d = 1.2;"
- " this.e = [{}];"
- " this.f = 1.3;"
- "}"
- "function f() {"
- " for (var i = 0; i < number_elements; i++) {"
- " elements[i] = new DataObject();"
- " }"
- " return elements[number_elements - 1];"
- "};"
- "f(); f(); f();"
- "%OptimizeFunctionOnNextCall(f);"
- "f();");
+ v8::Local<v8::Value> int_array = v8::Object::Cast(*res)->Get(v8_str("0"));
+ Handle<JSObject> int_array_handle =
+ v8::Utils::OpenHandle(*v8::Handle<v8::Object>::Cast(int_array));
+ v8::Local<v8::Value> double_array = v8::Object::Cast(*res)->Get(v8_str("1"));
+ Handle<JSObject> double_array_handle =
+
 v8::Utils::OpenHandle(*v8::Handle<v8::Object>::Cast(double_array));
 Handle<JSObject> o =
 v8::Utils::OpenHandle(*v8::Handle<v8::Object>::Cast(res));
-
- CHECK(CcTest::heap()->InOldPointerSpace(o->RawFastPropertyAt(0)));
- CHECK(CcTest::heap()->InOldPointerSpace(o->RawFastPropertyAt(1)));
- CHECK(CcTest::heap()->InOldDataSpace(o->RawFastPropertyAt(2)));
- CHECK(CcTest::heap()->InOldDataSpace(o->RawFastPropertyAt(3)));
- CHECK(CcTest::heap()->InOldPointerSpace(o->RawFastPropertyAt(4)));
- CHECK(CcTest::heap()->InOldDataSpace(o->RawFastPropertyAt(5)));
+ CHECK(CcTest::heap()->InOldPointerSpace(*o));
+ CHECK(CcTest::heap()->InOldPointerSpace(*int_array_handle));
+ CHECK(CcTest::heap()->InOldPointerSpace(int_array_handle->elements()));
+ CHECK(CcTest::heap()->InOldPointerSpace(*double_array_handle));
+ CHECK(CcTest::heap()->InOldDataSpace(double_array_handle->elements()));
 }
 TEST(OptimizedPretenuringObjectArrayLiterals) {
 i::FLAG_allow_natives_syntax = true;
- i::FLAG_max_new_space_size = 2;
+ i::FLAG_expose_gc = true;
 CcTest::InitializeVM();
 if (!CcTest::i_isolate()->use_crankshaft() || i::FLAG_always_opt) return;
 if (i::FLAG_gc_global || i::FLAG_stress_compaction) return;
 v8::HandleScope scope(CcTest::isolate());
- v8::Local<v8::Value> res = CompileRun(
- "var number_elements = 20000;"
+ // Grow new space until maximum capacity reached.
+ while (!CcTest::heap()->new_space()->IsAtMaximumCapacity()) {
+ CcTest::heap()->new_space()->Grow();
+ }
+
+ i::ScopedVector<char> source(1024);
+ i::SNPrintF(
+ source,
+ "var number_elements = %d;"
 "var elements = new Array(number_elements);"
 "function f() {"
 " for (var i = 0; i < number_elements; i++) {"
@@ -2308,9 +2281,13 @@ TEST(OptimizedPretenuringObjectArrayLiterals) {
 " }"
 " return elements[number_elements - 1];"
 "};"
- "f(); f(); f();"
- "%OptimizeFunctionOnNextCall(f);"
- "f();");
+ "f(); gc();"
+ "f(); f();"
+ "%%OptimizeFunctionOnNextCall(f);"
+ "f();",
+ AllocationSite::kPretenureMinimumCreated);
+
+ v8::Local<v8::Value> res = CompileRun(source.start());
 Handle<JSObject> o =
 v8::Utils::OpenHandle(*v8::Handle<v8::Object>::Cast(res));
@@ -2322,14 +2299,22 @@ TEST(OptimizedPretenuringObjectArrayLiterals) {
 TEST(OptimizedPretenuringMixedInObjectProperties) {
 i::FLAG_allow_natives_syntax = true;
- i::FLAG_max_new_space_size = 2;
+ i::FLAG_expose_gc = true;
 CcTest::InitializeVM();
 if (!CcTest::i_isolate()->use_crankshaft() || i::FLAG_always_opt) return;
 if (i::FLAG_gc_global || i::FLAG_stress_compaction) return;
 v8::HandleScope scope(CcTest::isolate());
- v8::Local<v8::Value> res = CompileRun(
- "var number_elements = 20000;"
+ // Grow new space until maximum capacity reached.
+ while (!CcTest::heap()->new_space()->IsAtMaximumCapacity()) {
+ CcTest::heap()->new_space()->Grow();
+ }
+
+
+ i::ScopedVector<char> source(1024);
+ i::SNPrintF(
+ source,
+ "var number_elements = %d;"
 "var elements = new Array(number_elements);"
 "function f() {"
 " for (var i = 0; i < number_elements; i++) {"
@@ -2337,34 +2322,49 @@ TEST(OptimizedPretenuringMixedInObjectProperties) {
 " }"
 " return elements[number_elements - 1];"
 "};"
- "f(); f(); f();"
- "%OptimizeFunctionOnNextCall(f);"
- "f();");
+ "f(); gc();"
+ "f(); f();"
+ "%%OptimizeFunctionOnNextCall(f);"
+ "f();",
+ AllocationSite::kPretenureMinimumCreated);
+
+ v8::Local<v8::Value> res = CompileRun(source.start());
 Handle<JSObject> o =
 v8::Utils::OpenHandle(*v8::Handle<v8::Object>::Cast(res));
 CHECK(CcTest::heap()->InOldPointerSpace(*o));
- CHECK(CcTest::heap()->InOldPointerSpace(o->RawFastPropertyAt(0)));
- CHECK(CcTest::heap()->InOldDataSpace(o->RawFastPropertyAt(1)));
+ FieldIndex idx1 = FieldIndex::ForPropertyIndex(o->map(), 0);
+ FieldIndex idx2 = FieldIndex::ForPropertyIndex(o->map(), 1);
+ CHECK(CcTest::heap()->InOldPointerSpace(o->RawFastPropertyAt(idx1)));
+ CHECK(CcTest::heap()->InOldDataSpace(o->RawFastPropertyAt(idx2)));
- JSObject* inner_object = reinterpret_cast<JSObject*>(o->RawFastPropertyAt(0));
+ JSObject* inner_object =
+ reinterpret_cast<JSObject*>(o->RawFastPropertyAt(idx1));
 CHECK(CcTest::heap()->InOldPointerSpace(inner_object));
- CHECK(CcTest::heap()->InOldDataSpace(inner_object->RawFastPropertyAt(0)));
- CHECK(CcTest::heap()->InOldPointerSpace(inner_object->RawFastPropertyAt(1)));
+ CHECK(CcTest::heap()->InOldDataSpace(inner_object->RawFastPropertyAt(idx1)));
+ CHECK(CcTest::heap()->InOldPointerSpace(
+ inner_object->RawFastPropertyAt(idx2)));
 }
 TEST(OptimizedPretenuringDoubleArrayProperties) {
 i::FLAG_allow_natives_syntax = true;
- i::FLAG_max_new_space_size = 2;
+ i::FLAG_expose_gc = true;
 CcTest::InitializeVM();
 if (!CcTest::i_isolate()->use_crankshaft() || i::FLAG_always_opt) return;
 if (i::FLAG_gc_global || i::FLAG_stress_compaction) return;
 v8::HandleScope scope(CcTest::isolate());
- v8::Local<v8::Value> res = CompileRun(
- "var number_elements = 30000;"
+ // Grow new space until maximum capacity reached.
+ while (!CcTest::heap()->new_space()->IsAtMaximumCapacity()) {
+ CcTest::heap()->new_space()->Grow();
+ }
+
+ i::ScopedVector<char> source(1024);
+ i::SNPrintF(
+ source,
+ "var number_elements = %d;"
 "var elements = new Array(number_elements);"
 "function f() {"
 " for (var i = 0; i < number_elements; i++) {"
@@ -2372,9 +2372,13 @@ TEST(OptimizedPretenuringDoubleArrayProperties) {
 " }"
 " return elements[i - 1];"
 "};"
- "f(); f(); f();"
- "%OptimizeFunctionOnNextCall(f);"
- "f();");
+ "f(); gc();"
+ "f(); f();"
+ "%%OptimizeFunctionOnNextCall(f);"
+ "f();",
+ AllocationSite::kPretenureMinimumCreated);
+
+ v8::Local<v8::Value> res = CompileRun(source.start());
 Handle<JSObject> o =
 v8::Utils::OpenHandle(*v8::Handle<v8::Object>::Cast(res));
@@ -2386,14 +2390,21 @@ TEST(OptimizedPretenuringDoubleArrayProperties) {
 TEST(OptimizedPretenuringdoubleArrayLiterals) {
 i::FLAG_allow_natives_syntax = true;
- i::FLAG_max_new_space_size = 2;
+ i::FLAG_expose_gc = true;
 CcTest::InitializeVM();
 if (!CcTest::i_isolate()->use_crankshaft() || i::FLAG_always_opt) return;
 if (i::FLAG_gc_global || i::FLAG_stress_compaction) return;
 v8::HandleScope scope(CcTest::isolate());
- v8::Local<v8::Value> res = CompileRun(
- "var number_elements = 30000;"
+ // Grow new space until maximum capacity reached.
+ while (!CcTest::heap()->new_space()->IsAtMaximumCapacity()) {
+ CcTest::heap()->new_space()->Grow();
+ }
+
+ i::ScopedVector<char> source(1024);
+ i::SNPrintF(
+ source,
+ "var number_elements = %d;"
 "var elements = new Array(number_elements);"
 "function f() {"
 " for (var i = 0; i < number_elements; i++) {"
@@ -2401,9 +2412,13 @@ TEST(OptimizedPretenuringdoubleArrayLiterals) {
 " }"
 " return elements[number_elements - 1];"
 "};"
- "f(); f(); f();"
- "%OptimizeFunctionOnNextCall(f);"
- "f();");
+ "f(); gc();"
+ "f(); f();"
+ "%%OptimizeFunctionOnNextCall(f);"
+ "f();",
+ AllocationSite::kPretenureMinimumCreated);
+
+ v8::Local<v8::Value> res = CompileRun(source.start());
 Handle<JSObject> o =
 v8::Utils::OpenHandle(*v8::Handle<v8::Object>::Cast(res));
@@ -2415,14 +2430,21 @@ TEST(OptimizedPretenuringdoubleArrayLiterals) {
 TEST(OptimizedPretenuringNestedMixedArrayLiterals) {
 i::FLAG_allow_natives_syntax = true;
- i::FLAG_max_new_space_size = 2;
+ i::FLAG_expose_gc = true;
 CcTest::InitializeVM();
 if (!CcTest::i_isolate()->use_crankshaft() || i::FLAG_always_opt) return;
 if (i::FLAG_gc_global || i::FLAG_stress_compaction) return;
 v8::HandleScope scope(CcTest::isolate());
- v8::Local<v8::Value> res = CompileRun(
- "var number_elements = 20000;"
+ // Grow new space until maximum capacity reached.
+ while (!CcTest::heap()->new_space()->IsAtMaximumCapacity()) {
+ CcTest::heap()->new_space()->Grow();
+ }
+
+ i::ScopedVector<char> source(1024);
+ i::SNPrintF(
+ source,
+ "var number_elements = 100;"
 "var elements = new Array(number_elements);"
 "function f() {"
 " for (var i = 0; i < number_elements; i++) {"
@@ -2430,10 +2452,13 @@ TEST(OptimizedPretenuringNestedMixedArrayLiterals) {
 " }"
 " return elements[number_elements - 1];"
 "};"
- "f(); f(); f();"
- "%OptimizeFunctionOnNextCall(f);"
+ "f(); gc();"
+ "f(); f();"
+ "%%OptimizeFunctionOnNextCall(f);"
 "f();");
+ v8::Local<v8::Value> res = CompileRun(source.start());
+
 v8::Local<v8::Value> int_array = v8::Object::Cast(*res)->Get(v8_str("0"));
 Handle<JSObject> int_array_handle =
 v8::Utils::OpenHandle(*v8::Handle<v8::Object>::Cast(int_array));
@@ -2453,14 +2478,21 @@ TEST(OptimizedPretenuringNestedMixedArrayLiterals) {
 TEST(OptimizedPretenuringNestedObjectLiterals) {
 i::FLAG_allow_natives_syntax = true;
- i::FLAG_max_new_space_size = 2;
+ i::FLAG_expose_gc = true;
 CcTest::InitializeVM();
 if (!CcTest::i_isolate()->use_crankshaft() || i::FLAG_always_opt) return;
 if (i::FLAG_gc_global || i::FLAG_stress_compaction) return;
 v8::HandleScope scope(CcTest::isolate());
- v8::Local<v8::Value> res = CompileRun(
- "var number_elements = 20000;"
+ // Grow new space until maximum capacity reached.
+ while (!CcTest::heap()->new_space()->IsAtMaximumCapacity()) {
+ CcTest::heap()->new_space()->Grow();
+ }
+
+ i::ScopedVector<char> source(1024);
+ i::SNPrintF(
+ source,
+ "var number_elements = %d;"
 "var elements = new Array(number_elements);"
 "function f() {"
 " for (var i = 0; i < number_elements; i++) {"
@@ -2468,9 +2500,13 @@ TEST(OptimizedPretenuringNestedObjectLiterals) {
 " }"
 " return elements[number_elements - 1];"
 "};"
- "f(); f(); f();"
- "%OptimizeFunctionOnNextCall(f);"
- "f();");
+ "f(); gc();"
+ "f(); f();"
+ "%%OptimizeFunctionOnNextCall(f);"
+ "f();",
+ AllocationSite::kPretenureMinimumCreated);
+
+ v8::Local<v8::Value> res = CompileRun(source.start());
 v8::Local<v8::Value> int_array_1 = v8::Object::Cast(*res)->Get(v8_str("0"));
 Handle<JSObject> int_array_handle_1 =
@@ -2491,14 +2527,21 @@ TEST(OptimizedPretenuringNestedObjectLiterals) {
 TEST(OptimizedPretenuringNestedDoubleLiterals) {
 i::FLAG_allow_natives_syntax = true;
- i::FLAG_max_new_space_size = 2;
+ i::FLAG_expose_gc = true;
 CcTest::InitializeVM();
 if (!CcTest::i_isolate()->use_crankshaft() || i::FLAG_always_opt) return;
 if (i::FLAG_gc_global || i::FLAG_stress_compaction) return;
 v8::HandleScope scope(CcTest::isolate());
- v8::Local<v8::Value> res = CompileRun(
- "var number_elements = 20000;"
+ // Grow new space until maximum capacity reached.
+ while (!CcTest::heap()->new_space()->IsAtMaximumCapacity()) {
+ CcTest::heap()->new_space()->Grow();
+ }
+
+ i::ScopedVector<char> source(1024);
+ i::SNPrintF(
+ source,
+ "var number_elements = %d;"
 "var elements = new Array(number_elements);"
 "function f() {"
 " for (var i = 0; i < number_elements; i++) {"
@@ -2506,9 +2549,13 @@ TEST(OptimizedPretenuringNestedDoubleLiterals) {
 " }"
 " return elements[number_elements - 1];"
 "};"
- "f(); f(); f();"
- "%OptimizeFunctionOnNextCall(f);"
- "f();");
+ "f(); gc();"
+ "f(); f();"
+ "%%OptimizeFunctionOnNextCall(f);"
+ "f();",
+ AllocationSite::kPretenureMinimumCreated);
+
+ v8::Local<v8::Value> res = CompileRun(source.start());
 v8::Local<v8::Value> double_array_1 =
 v8::Object::Cast(*res)->Get(v8_str("0"));
@@ -2532,19 +2579,29 @@ TEST(OptimizedPretenuringNestedDoubleLiterals) {
 // Make sure pretenuring feedback is gathered for constructed objects as well
 // as for literals.
 TEST(OptimizedPretenuringConstructorCalls) {
- if (!FLAG_allocation_site_pretenuring || !i::FLAG_pretenuring_call_new) {
+ if (!i::FLAG_pretenuring_call_new) {
 // FLAG_pretenuring_call_new needs to be synced with the snapshot.
 return;
 }
 i::FLAG_allow_natives_syntax = true;
- i::FLAG_max_new_space_size = 2;
+ i::FLAG_expose_gc = true;
 CcTest::InitializeVM();
 if (!CcTest::i_isolate()->use_crankshaft() || i::FLAG_always_opt) return;
 if (i::FLAG_gc_global || i::FLAG_stress_compaction) return;
 v8::HandleScope scope(CcTest::isolate());
- v8::Local<v8::Value> res = CompileRun(
- "var number_elements = 20000;"
+ // Grow new space until maximum capacity reached.
+ while (!CcTest::heap()->new_space()->IsAtMaximumCapacity()) {
+ CcTest::heap()->new_space()->Grow();
+ }
+
+ i::ScopedVector<char> source(1024);
+ // Call new is doing slack tracking for the first
+ // JSFunction::kGenerousAllocationCount allocations, and we can't find
+ // mementos during that time.
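+ // For that reason number_elements below is kPretenureMinimumCreated plus
+ // kGenerousAllocationCount, so enough allocations still happen after slack
+ // tracking has finished for pretenuring feedback to be collected.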
+ i::SNPrintF(
+ source,
+ "var number_elements = %d;"
 "var elements = new Array(number_elements);"
 "function foo() {"
 " this.a = 3;"
@@ -2556,9 +2613,14 @@ TEST(OptimizedPretenuringConstructorCalls) {
 " }"
 " return elements[number_elements - 1];"
 "};"
- "f(); f(); f();"
- "%OptimizeFunctionOnNextCall(f);"
- "f();");
+ "f(); gc();"
+ "f(); f();"
+ "%%OptimizeFunctionOnNextCall(f);"
+ "f();",
+ AllocationSite::kPretenureMinimumCreated +
+ JSFunction::kGenerousAllocationCount);
+
+ v8::Local<v8::Value> res = CompileRun(source.start());
 Handle<JSObject> o =
 v8::Utils::OpenHandle(*v8::Handle<v8::Object>::Cast(res));
@@ -2567,57 +2629,77 @@ TEST(OptimizedPretenuringConstructorCalls) {
 }
-// Test regular array literals allocation.
-TEST(OptimizedAllocationArrayLiterals) {
+TEST(OptimizedPretenuringCallNew) {
+ if (!i::FLAG_pretenuring_call_new) {
+ // FLAG_pretenuring_call_new needs to be synced with the snapshot.
+ return;
+ }
 i::FLAG_allow_natives_syntax = true;
+ i::FLAG_expose_gc = true;
 CcTest::InitializeVM();
 if (!CcTest::i_isolate()->use_crankshaft() || i::FLAG_always_opt) return;
 if (i::FLAG_gc_global || i::FLAG_stress_compaction) return;
 v8::HandleScope scope(CcTest::isolate());
- v8::Local<v8::Value> res = CompileRun(
+ // Grow new space until maximum capacity reached.
+ while (!CcTest::heap()->new_space()->IsAtMaximumCapacity()) {
+ CcTest::heap()->new_space()->Grow();
+ }
+
+ i::ScopedVector<char> source(1024);
+ // Call new is doing slack tracking for the first
+ // JSFunction::kGenerousAllocationCount allocations, and we can't find
+ // mementos during that time.
+ i::SNPrintF(
+ source,
+ "var number_elements = %d;"
+ "var elements = new Array(number_elements);"
+ "function g() { this.a = 0; }"
 "function f() {"
- " var numbers = new Array(1, 2, 3);"
- " numbers[0] = 3.14;"
- " return numbers;"
+ " for (var i = 0; i < number_elements; i++) {"
+ " elements[i] = new g();"
+ " }"
+ " return elements[number_elements - 1];"
 "};"
- "f(); f(); f();"
- "%OptimizeFunctionOnNextCall(f);"
- "f();");
- CHECK_EQ(static_cast<int>(3.14),
- v8::Object::Cast(*res)->Get(v8_str("0"))->Int32Value());
+ "f(); gc();"
+ "f(); f();"
+ "%%OptimizeFunctionOnNextCall(f);"
+ "f();",
+ AllocationSite::kPretenureMinimumCreated +
+ JSFunction::kGenerousAllocationCount);
+
+ v8::Local<v8::Value> res = CompileRun(source.start());
 Handle<JSObject> o =
 v8::Utils::OpenHandle(*v8::Handle<v8::Object>::Cast(res));
-
- CHECK(CcTest::heap()->InNewSpace(o->elements()));
+ CHECK(CcTest::heap()->InOldPointerSpace(*o));
 }
-// Test global pretenuring call new.
+// Test regular array literals allocation.
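+// With no pretenuring feedback involved here, the array's elements are
+// expected to stay in new space (see the InNewSpace check at the end).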
+TEST(OptimizedAllocationArrayLiterals) { i::FLAG_allow_natives_syntax = true; - i::FLAG_allocation_site_pretenuring = false; - i::FLAG_pretenuring_call_new = true; CcTest::InitializeVM(); if (!CcTest::i_isolate()->use_crankshaft() || i::FLAG_always_opt) return; if (i::FLAG_gc_global || i::FLAG_stress_compaction) return; v8::HandleScope scope(CcTest::isolate()); - CcTest::heap()->SetNewSpaceHighPromotionModeActive(true); - AlwaysAllocateScope always_allocate(CcTest::i_isolate()); v8::Local<v8::Value> res = CompileRun( - "function g() { this.a = 0; }" "function f() {" - " return new g();" + " var numbers = new Array(1, 2, 3);" + " numbers[0] = 3.14;" + " return numbers;" "};" "f(); f(); f();" "%OptimizeFunctionOnNextCall(f);" "f();"); + CHECK_EQ(static_cast<int>(3.14), + v8::Object::Cast(*res)->Get(v8_str("0"))->Int32Value()); Handle<JSObject> o = v8::Utils::OpenHandle(*v8::Handle<v8::Object>::Cast(res)); - CHECK(CcTest::heap()->InOldPointerSpace(*o)); + + CHECK(CcTest::heap()->InNewSpace(o->elements())); } @@ -2641,7 +2723,7 @@ TEST(Regress1465) { AlwaysAllocateScope always_allocate(CcTest::i_isolate()); for (int i = 0; i < transitions_count; i++) { EmbeddedVector<char, 64> buffer; - OS::SNPrintF(buffer, "var o = new F; o.prop%d = %d;", i, i); + SNPrintF(buffer, "var o = new F; o.prop%d = %d;", i, i); CompileRun(buffer.start()); } CompileRun("var root = new F;"); @@ -2657,7 +2739,7 @@ TEST(Regress1465) { CompileRun("%DebugPrint(root);"); CHECK_EQ(transitions_count, transitions_before); - SimulateIncrementalMarking(); + SimulateIncrementalMarking(CcTest::heap()); CcTest::heap()->CollectAllGarbage(Heap::kNoGCFlags); // Count number of live transitions after marking. Note that one transition @@ -2673,7 +2755,7 @@ static void AddTransitions(int transitions_count) { AlwaysAllocateScope always_allocate(CcTest::i_isolate()); for (int i = 0; i < transitions_count; i++) { EmbeddedVector<char, 64> buffer; - OS::SNPrintF(buffer, "var o = new F; o.prop%d = %d;", i, i); + SNPrintF(buffer, "var o = new F; o.prop%d = %d;", i, i); CompileRun(buffer.start()); } } @@ -2695,8 +2777,7 @@ static void AddPropertyTo( i::FLAG_gc_interval = gc_count; i::FLAG_gc_global = true; CcTest::heap()->set_allocation_timeout(gc_count); - JSReceiver::SetProperty( - object, prop_name, twenty_three, NONE, SLOPPY).Check(); + JSReceiver::SetProperty(object, prop_name, twenty_three, SLOPPY).Check(); } @@ -2799,7 +2880,7 @@ TEST(TransitionArraySimpleToFull) { CompileRun("o = new F;" "root = new F"); root = GetByName("root"); - ASSERT(root->map()->transitions()->IsSimpleTransition()); + DCHECK(root->map()->transitions()->IsSimpleTransition()); AddPropertyTo(2, root, "happy"); // Count number of live transitions after marking. Note that one transition @@ -2823,7 +2904,7 @@ TEST(Regress2143a) { "root.foo = 0;" "root = new Object;"); - SimulateIncrementalMarking(); + SimulateIncrementalMarking(CcTest::heap()); // Compile a StoreIC that performs the prepared map transition. This // will restart incremental marking and should make sure the root is @@ -2864,7 +2945,7 @@ TEST(Regress2143b) { "root.foo = 0;" "root = new Object;"); - SimulateIncrementalMarking(); + SimulateIncrementalMarking(CcTest::heap()); // Compile an optimized LStoreNamedField that performs the prepared // map transition. This will restart incremental marking and should @@ -2920,7 +3001,8 @@ TEST(ReleaseOverReservedPages) { // Triggering one GC will cause a lot of garbage to be discovered but // even spread across all allocated pages. 
- heap->CollectAllGarbage(Heap::kNoGCFlags, "triggered for preparation"); + heap->CollectAllGarbage(Heap::kAbortIncrementalMarkingMask, + "triggered for preparation"); CHECK_GE(number_of_test_pages + 1, old_pointer_space->CountTotalPages()); // Triggering subsequent GCs should cause at least half of the pages @@ -2986,8 +3068,9 @@ TEST(PrintSharedFunctionInfo) { *v8::Handle<v8::Function>::Cast( CcTest::global()->Get(v8_str("g")))); - DisallowHeapAllocation no_allocation; - g->shared()->PrintLn(); + OFStream os(stdout); + g->shared()->Print(os); + os << endl; } #endif // OBJECT_PRINT @@ -3018,9 +3101,9 @@ TEST(Regress2211) { CHECK(value->Equals(obj->GetHiddenValue(v8_str("key string")))); // Check size. - DescriptorArray* descriptors = internal_obj->map()->instance_descriptors(); + FieldIndex index = FieldIndex::ForDescriptor(internal_obj->map(), 0); ObjectHashTable* hashtable = ObjectHashTable::cast( - internal_obj->RawFastPropertyAt(descriptors->GetFieldIndex(0))); + internal_obj->RawFastPropertyAt(index)); // HashTable header (5) and 4 initial entries (8). CHECK_LE(hashtable->SizeFor(hashtable->length()), 13 * kPointerSize); } @@ -3058,18 +3141,22 @@ TEST(IncrementalMarkingClearsTypeFeedbackInfo) { Handle<FixedArray> feedback_vector(f->shared()->feedback_vector()); - CHECK_EQ(2, feedback_vector->length()); - CHECK(feedback_vector->get(0)->IsJSFunction()); - CHECK(feedback_vector->get(1)->IsJSFunction()); + int expected_length = FLAG_vector_ics ? 4 : 2; + CHECK_EQ(expected_length, feedback_vector->length()); + for (int i = 0; i < expected_length; i++) { + if ((i % 2) == 1) { + CHECK(feedback_vector->get(i)->IsJSFunction()); + } + } - SimulateIncrementalMarking(); + SimulateIncrementalMarking(CcTest::heap()); CcTest::heap()->CollectAllGarbage(Heap::kNoGCFlags); - CHECK_EQ(2, feedback_vector->length()); - CHECK_EQ(feedback_vector->get(0), - *TypeFeedbackInfo::UninitializedSentinel(CcTest::i_isolate())); - CHECK_EQ(feedback_vector->get(1), - *TypeFeedbackInfo::UninitializedSentinel(CcTest::i_isolate())); + CHECK_EQ(expected_length, feedback_vector->length()); + for (int i = 0; i < expected_length; i++) { + CHECK_EQ(feedback_vector->get(i), + *TypeFeedbackInfo::UninitializedSentinel(CcTest::i_isolate())); + } } @@ -3105,7 +3192,7 @@ TEST(IncrementalMarkingPreservesMonomorphicIC) { Code* ic_before = FindFirstIC(f->shared()->code(), Code::LOAD_IC); CHECK(ic_before->ic_state() == MONOMORPHIC); - SimulateIncrementalMarking(); + SimulateIncrementalMarking(CcTest::heap()); CcTest::heap()->CollectAllGarbage(Heap::kNoGCFlags); Code* ic_after = FindFirstIC(f->shared()->code(), Code::LOAD_IC); @@ -3138,8 +3225,8 @@ TEST(IncrementalMarkingClearsMonomorphicIC) { CHECK(ic_before->ic_state() == MONOMORPHIC); // Fire context dispose notification. - v8::V8::ContextDisposedNotification(); - SimulateIncrementalMarking(); + CcTest::isolate()->ContextDisposedNotification(); + SimulateIncrementalMarking(CcTest::heap()); CcTest::heap()->CollectAllGarbage(Heap::kNoGCFlags); Code* ic_after = FindFirstIC(f->shared()->code(), Code::LOAD_IC); @@ -3179,8 +3266,8 @@ TEST(IncrementalMarkingClearsPolymorphicIC) { CHECK(ic_before->ic_state() == POLYMORPHIC); // Fire context dispose notification. 
- v8::V8::ContextDisposedNotification(); - SimulateIncrementalMarking(); + CcTest::isolate()->ContextDisposedNotification(); + SimulateIncrementalMarking(CcTest::heap()); CcTest::heap()->CollectAllGarbage(Heap::kNoGCFlags); Code* ic_after = FindFirstIC(f->shared()->code(), Code::LOAD_IC); @@ -3341,7 +3428,7 @@ TEST(Regress159140) { // Simulate incremental marking so that the functions are enqueued as // code flushing candidates. Then optimize one function. Finally // finish the GC to complete code flushing. - SimulateIncrementalMarking(); + SimulateIncrementalMarking(heap); CompileRun("%OptimizeFunctionOnNextCall(g); g(3);"); heap->CollectAllGarbage(Heap::kNoGCFlags); @@ -3388,7 +3475,7 @@ TEST(Regress165495) { // Simulate incremental marking so that unoptimized code is flushed // even though it still is cached in the optimized code map. - SimulateIncrementalMarking(); + SimulateIncrementalMarking(heap); heap->CollectAllGarbage(Heap::kNoGCFlags); // Make a new closure that will get code installed from the code map. @@ -3456,7 +3543,7 @@ TEST(Regress169209) { } // Simulate incremental marking and collect code flushing candidates. - SimulateIncrementalMarking(); + SimulateIncrementalMarking(heap); CHECK(shared1->code()->gc_metadata() != NULL); // Optimize function and make sure the unoptimized code is replaced. @@ -3602,7 +3689,7 @@ TEST(Regress168801) { // Simulate incremental marking so that unoptimized function is enqueued as a // candidate for code flushing. The shared function info however will not be // explicitly enqueued. - SimulateIncrementalMarking(); + SimulateIncrementalMarking(heap); // Now optimize the function so that it is taken off the candidate list. { @@ -3659,7 +3746,7 @@ TEST(Regress173458) { // Simulate incremental marking so that unoptimized function is enqueued as a // candidate for code flushing. The shared function info however will not be // explicitly enqueued. - SimulateIncrementalMarking(); + SimulateIncrementalMarking(heap); // Now enable the debugger which in turn will disable code flushing. CHECK(isolate->debug()->Load()); @@ -3688,7 +3775,7 @@ TEST(DeferredHandles) { } // An entire block of handles has been filled. // Next handle would require a new block. - ASSERT(data->next == data->limit); + DCHECK(data->next == data->limit); DeferredHandleScope deferred(isolate); DummyVisitor visitor; @@ -3709,7 +3796,7 @@ TEST(IncrementalMarkingStepMakesBigProgressWithLargeObjects) { if (marking->IsStopped()) marking->Start(); // This big step should be sufficient to mark the whole array. marking->Step(100 * MB, IncrementalMarking::NO_GC_VIA_STACK_GUARD); - ASSERT(marking->IsComplete()); + DCHECK(marking->IsComplete()); } @@ -3736,10 +3823,6 @@ TEST(DisableInlineAllocation) { CcTest::heap()->DisableInlineAllocation(); CompileRun("run()"); - // Run test with inline allocation disabled and pretenuring. - CcTest::heap()->SetNewSpaceHighPromotionModeActive(true); - CompileRun("run()"); - // Run test with inline allocation re-enabled. CcTest::heap()->EnableInlineAllocation(); CompileRun("run()"); @@ -3803,7 +3886,7 @@ TEST(EnsureAllocationSiteDependentCodesProcessed) { // Now make sure that a gc should get rid of the function, even though we // still have the allocation site alive. 
for (int i = 0; i < 4; i++) { - heap->CollectAllGarbage(false); + heap->CollectAllGarbage(Heap::kNoGCFlags); } // The site still exists because of our global handle, but the code is no @@ -3853,7 +3936,7 @@ TEST(CellsInOptimizedCodeAreWeak) { heap->CollectAllGarbage(Heap::kAbortIncrementalMarkingMask); } - ASSERT(code->marked_for_deoptimization()); + DCHECK(code->marked_for_deoptimization()); } @@ -3894,7 +3977,7 @@ TEST(ObjectsInOptimizedCodeAreWeak) { heap->CollectAllGarbage(Heap::kAbortIncrementalMarkingMask); } - ASSERT(code->marked_for_deoptimization()); + DCHECK(code->marked_for_deoptimization()); } @@ -3911,37 +3994,41 @@ TEST(NoWeakHashTableLeakWithIncrementalMarking) { if (!isolate->use_crankshaft()) return; HandleScope outer_scope(heap->isolate()); for (int i = 0; i < 3; i++) { - SimulateIncrementalMarking(); + SimulateIncrementalMarking(heap); { LocalContext context; HandleScope scope(heap->isolate()); EmbeddedVector<char, 256> source; - OS::SNPrintF(source, - "function bar%d() {" - " return foo%d(1);" - "};" - "function foo%d(x) { with (x) { return 1 + x; } };" - "bar%d();" - "bar%d();" - "bar%d();" - "%OptimizeFunctionOnNextCall(bar%d);" - "bar%d();", i, i, i, i, i, i, i, i); + SNPrintF(source, + "function bar%d() {" + " return foo%d(1);" + "};" + "function foo%d(x) { with (x) { return 1 + x; } };" + "bar%d();" + "bar%d();" + "bar%d();" + "%%OptimizeFunctionOnNextCall(bar%d);" + "bar%d();", i, i, i, i, i, i, i, i); CompileRun(source.start()); } heap->CollectAllGarbage(i::Heap::kNoGCFlags); } - WeakHashTable* table = WeakHashTable::cast(heap->weak_object_to_code_table()); - CHECK_EQ(0, table->NumberOfElements()); + int elements = 0; + if (heap->weak_object_to_code_table()->IsHashTable()) { + WeakHashTable* t = WeakHashTable::cast(heap->weak_object_to_code_table()); + elements = t->NumberOfElements(); + } + CHECK_EQ(0, elements); } static Handle<JSFunction> OptimizeDummyFunction(const char* name) { EmbeddedVector<char, 256> source; - OS::SNPrintF(source, - "function %s() { return 0; }" - "%s(); %s();" - "%%OptimizeFunctionOnNextCall(%s);" - "%s();", name, name, name, name, name); + SNPrintF(source, + "function %s() { return 0; }" + "%s(); %s();" + "%%OptimizeFunctionOnNextCall(%s);" + "%s();", name, name, name, name, name); CompileRun(source.start()); Handle<JSFunction> fun = v8::Utils::OpenHandle( @@ -3993,7 +4080,8 @@ static Handle<Code> DummyOptimizedCode(Isolate* isolate) { i::byte buffer[i::Assembler::kMinimalBufferSize]; MacroAssembler masm(isolate, buffer, sizeof(buffer)); CodeDesc desc; - masm.Prologue(BUILD_FUNCTION_FRAME); + masm.Push(isolate->factory()->undefined_value()); + masm.Drop(1); masm.GetCode(&desc); Handle<Object> undefined(isolate->heap()->undefined_value(), isolate); Handle<Code> code = isolate->factory()->NewCode( @@ -4221,7 +4309,7 @@ TEST(Regress357137) { global->Set(v8::String::NewFromUtf8(isolate, "interrupt"), v8::FunctionTemplate::New(isolate, RequestInterrupt)); v8::Local<v8::Context> context = v8::Context::New(isolate, NULL, global); - ASSERT(!context.IsEmpty()); + DCHECK(!context.IsEmpty()); v8::Context::Scope cscope(context); v8::Local<v8::Value> result = CompileRun( @@ -4246,6 +4334,7 @@ TEST(ArrayShiftSweeping) { "var tmp = new Array(100000);" "array[0] = 10;" "gc();" + "gc();" "array.shift();" "array;"); @@ -4254,6 +4343,145 @@ TEST(ArrayShiftSweeping) { CHECK(heap->InOldPointerSpace(o->elements())); CHECK(heap->InOldPointerSpace(*o)); Page* page = Page::FromAddress(o->elements()->address()); - CHECK(page->WasSwept() || + 
 CHECK(page->parallel_sweeping() <= MemoryChunk::SWEEPING_FINALIZE ||
 Marking::IsBlack(Marking::MarkBitFrom(o->elements())));
 }
+
+
+TEST(PromotionQueue) {
+ i::FLAG_expose_gc = true;
+ i::FLAG_max_semi_space_size = 2;
+ CcTest::InitializeVM();
+ v8::HandleScope scope(CcTest::isolate());
+ Isolate* isolate = CcTest::i_isolate();
+ Heap* heap = isolate->heap();
+ NewSpace* new_space = heap->new_space();
+
+ // In this test we will try to overwrite the promotion queue which is at the
+ // end of to-space. To actually make that possible, we need at least two
+ // semi-space pages and take advantage of fragmentation.
+ // (1) Grow semi-space to two pages.
+ // (2) Create a few small long-living objects and call the scavenger to
+ // move them to the other semi-space.
+ // (3) Create a huge object, i.e., the remainder of the first semi-space
+ // page, and create another huge object of the maximum allocatable size of
+ // the second semi-space page.
+ // (4) Call the scavenger again.
+ // What will happen is: during the next scavenge, the scavenger will promote
+ // the objects created in (2) to the old generation, creating promotion
+ // queue entries at the end of the second semi-space page. The first
+ // allocation of (3) will fill up the first semi-space page. The second
+ // allocation in (3) will not fit into the first semi-space page, but it
+ // will overwrite the promotion queue, which is in the second semi-space
+ // page. If the right guards are in place, the promotion queue will be
+ // evacuated in that case.
+
+ // Grow the semi-space to two pages to make semi-space copy overwrite the
+ // promotion queue, which will be at the end of the second page.
+ intptr_t old_capacity = new_space->Capacity();
+ new_space->Grow();
+ CHECK(new_space->IsAtMaximumCapacity());
+ CHECK(2 * old_capacity == new_space->Capacity());
+
+ // Call the scavenger two times to get an empty new space.
+ heap->CollectGarbage(NEW_SPACE);
+ heap->CollectGarbage(NEW_SPACE);
+
+ // First create a few objects which will survive a scavenge, and will get
+ // promoted to the old generation later on. These objects will create
+ // promotion queue entries at the end of the second semi-space page.
+ const int number_handles = 12;
+ Handle<FixedArray> handles[number_handles];
+ for (int i = 0; i < number_handles; i++) {
+ handles[i] = isolate->factory()->NewFixedArray(1, NOT_TENURED);
+ }
+ heap->CollectGarbage(NEW_SPACE);
+
+ // Create the first huge object which will exactly fit the first semi-space
+ // page.
+ int new_linear_size = static_cast<int>(
+ *heap->new_space()->allocation_limit_address() -
+ *heap->new_space()->allocation_top_address());
+ int length = new_linear_size / kPointerSize - FixedArray::kHeaderSize;
+ Handle<FixedArray> first =
+ isolate->factory()->NewFixedArray(length, NOT_TENURED);
+ CHECK(heap->InNewSpace(*first));
+
+ // Create the second huge object of maximum allocatable second semi-space
+ // page size.
+ new_linear_size = static_cast<int>(
+ *heap->new_space()->allocation_limit_address() -
+ *heap->new_space()->allocation_top_address());
+ length = Page::kMaxRegularHeapObjectSize / kPointerSize -
+ FixedArray::kHeaderSize;
+ Handle<FixedArray> second =
+ isolate->factory()->NewFixedArray(length, NOT_TENURED);
+ CHECK(heap->InNewSpace(*second));
+
+ // This scavenge will corrupt memory if the promotion queue is not evacuated.
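+ // With the guards in place the queue should have been relocated before the
+ // semi-space copy reaches it, so the test passes by simply not crashing.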
+ heap->CollectGarbage(NEW_SPACE);
+}
+
+
+TEST(Regress388880) {
+ i::FLAG_expose_gc = true;
+ CcTest::InitializeVM();
+ v8::HandleScope scope(CcTest::isolate());
+ Isolate* isolate = CcTest::i_isolate();
+ Factory* factory = isolate->factory();
+ Heap* heap = isolate->heap();
+
+ Handle<Map> map1 = Map::Create(isolate->object_function(), 1);
+ Handle<Map> map2 =
+ Map::CopyWithField(map1, factory->NewStringFromStaticAscii("foo"),
+ HeapType::Any(isolate), NONE, Representation::Tagged(),
+ OMIT_TRANSITION).ToHandleChecked();
+
+ int desired_offset = Page::kPageSize - map1->instance_size();
+
+ // Allocate a fixed array in old pointer space so that the object allocated
+ // afterwards ends at the end of the page.
+ {
+ SimulateFullSpace(heap->old_pointer_space());
+ int padding_size = desired_offset - Page::kObjectStartOffset;
+ int padding_array_length =
+ (padding_size - FixedArray::kHeaderSize) / kPointerSize;
+
+ Handle<FixedArray> temp2 =
+ factory->NewFixedArray(padding_array_length, TENURED);
+ Page* page = Page::FromAddress(temp2->address());
+ CHECK_EQ(Page::kObjectStartOffset, page->Offset(temp2->address()));
+ }
+
+ Handle<JSObject> o = factory->NewJSObjectFromMap(map1, TENURED, false);
+ o->set_properties(*factory->empty_fixed_array());
+
+ // Ensure that the object was allocated where we need it.
+ Page* page = Page::FromAddress(o->address());
+ CHECK_EQ(desired_offset, page->Offset(o->address()));
+
+ // Now we have an object right at the end of the page.
+
+ // Enable incremental marking to trigger actions in Heap::AdjustLiveBytes()
+ // that would cause a crash.
+ IncrementalMarking* marking = CcTest::heap()->incremental_marking();
+ marking->Abort();
+ marking->Start();
+ CHECK(marking->IsMarking());
+
+ // Now everything is set up for crashing in JSObject::MigrateFastToFast()
+ // when it calls heap->AdjustLiveBytes(...).
+ JSObject::MigrateToMap(o, map2);
+}
+
+
+#ifdef DEBUG
+TEST(PathTracer) {
+ CcTest::InitializeVM();
+ v8::HandleScope scope(CcTest::isolate());
+
+ v8::Local<v8::Value> result = CompileRun("'abc'");
+ Handle<Object> o = v8::Utils::OpenHandle(*result);
+ CcTest::i_isolate()->heap()->TracePathToObject(*o);
+}
+#endif // DEBUG
diff --git a/deps/v8/test/cctest/test-hydrogen-types.cc b/deps/v8/test/cctest/test-hydrogen-types.cc
new file mode 100644
index 00000000000..0ac53bde097
--- /dev/null
+++ b/deps/v8/test/cctest/test-hydrogen-types.cc
@@ -0,0 +1,168 @@
+// Copyright 2014 the V8 project authors. All rights reserved.
+// Use of this source code is governed by a BSD-style license that can be
+// found in the LICENSE file.
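+
+// Exhaustive sanity checks for the HType lattice: every pair (and triple) of
+// types is run through the Equals/IsSubtypeOf/Combine axioms asserted below.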
+ +#include "src/hydrogen-types.h" + +#include "test/cctest/cctest.h" + +using namespace v8::internal; + + +static const HType kTypes[] = { + #define DECLARE_TYPE(Name, mask) HType::Name(), + HTYPE_LIST(DECLARE_TYPE) + #undef DECLARE_TYPE +}; + +static const int kNumberOfTypes = sizeof(kTypes) / sizeof(kTypes[0]); + + +TEST(HTypeDistinct) { + for (int i = 0; i < kNumberOfTypes; ++i) { + for (int j = 0; j < kNumberOfTypes; ++j) { + CHECK(i == j || !kTypes[i].Equals(kTypes[j])); + } + } +} + + +TEST(HTypeReflexivity) { + // Reflexivity of = + for (int i = 0; i < kNumberOfTypes; ++i) { + CHECK(kTypes[i].Equals(kTypes[i])); + } + + // Reflexivity of < + for (int i = 0; i < kNumberOfTypes; ++i) { + CHECK(kTypes[i].IsSubtypeOf(kTypes[i])); + } +} + + +TEST(HTypeTransitivity) { + // Transitivity of = + for (int i = 0; i < kNumberOfTypes; ++i) { + for (int j = 0; j < kNumberOfTypes; ++j) { + for (int k = 0; k < kNumberOfTypes; ++k) { + HType ti = kTypes[i]; + HType tj = kTypes[j]; + HType tk = kTypes[k]; + CHECK(!ti.Equals(tj) || !tj.Equals(tk) || ti.Equals(tk)); + } + } + } + + // Transitivity of < + for (int i = 0; i < kNumberOfTypes; ++i) { + for (int j = 0; j < kNumberOfTypes; ++j) { + for (int k = 0; k < kNumberOfTypes; ++k) { + HType ti = kTypes[i]; + HType tj = kTypes[j]; + HType tk = kTypes[k]; + CHECK(!ti.IsSubtypeOf(tj) || !tj.IsSubtypeOf(tk) || ti.IsSubtypeOf(tk)); + } + } + } +} + + +TEST(HTypeCombine) { + // T < T /\ T' and T' < T /\ T' for all T,T' + for (int i = 0; i < kNumberOfTypes; ++i) { + for (int j = 0; j < kNumberOfTypes; ++j) { + HType ti = kTypes[i]; + HType tj = kTypes[j]; + CHECK(ti.IsSubtypeOf(ti.Combine(tj))); + CHECK(tj.IsSubtypeOf(ti.Combine(tj))); + } + } +} + + +TEST(HTypeAny) { + // T < Any for all T + for (int i = 0; i < kNumberOfTypes; ++i) { + HType ti = kTypes[i]; + CHECK(ti.IsAny()); + } + + // Any < T implies T = Any for all T + for (int i = 0; i < kNumberOfTypes; ++i) { + HType ti = kTypes[i]; + CHECK(!HType::Any().IsSubtypeOf(ti) || HType::Any().Equals(ti)); + } +} + + +TEST(HTypeTagged) { + // T < Tagged for all T \ {Any} + for (int i = 0; i < kNumberOfTypes; ++i) { + HType ti = kTypes[i]; + CHECK(ti.IsTagged() || HType::Any().Equals(ti)); + } + + // Tagged < T implies T = Tagged or T = Any + for (int i = 0; i < kNumberOfTypes; ++i) { + HType ti = kTypes[i]; + CHECK(!HType::Tagged().IsSubtypeOf(ti) || + HType::Tagged().Equals(ti) || + HType::Any().Equals(ti)); + } +} + + +TEST(HTypeSmi) { + // T < Smi implies T = None or T = Smi for all T + for (int i = 0; i < kNumberOfTypes; ++i) { + HType ti = kTypes[i]; + CHECK(!ti.IsSmi() || + ti.Equals(HType::Smi()) || + ti.Equals(HType::None())); + } +} + + +TEST(HTypeHeapObject) { + CHECK(!HType::TaggedPrimitive().IsHeapObject()); + CHECK(!HType::TaggedNumber().IsHeapObject()); + CHECK(!HType::Smi().IsHeapObject()); + CHECK(HType::HeapObject().IsHeapObject()); + CHECK(HType::HeapPrimitive().IsHeapObject()); + CHECK(HType::Null().IsHeapObject()); + CHECK(HType::HeapNumber().IsHeapObject()); + CHECK(HType::String().IsHeapObject()); + CHECK(HType::Boolean().IsHeapObject()); + CHECK(HType::Undefined().IsHeapObject()); + CHECK(HType::JSObject().IsHeapObject()); + CHECK(HType::JSArray().IsHeapObject()); +} + + +TEST(HTypePrimitive) { + CHECK(HType::TaggedNumber().IsTaggedPrimitive()); + CHECK(HType::Smi().IsTaggedPrimitive()); + CHECK(!HType::HeapObject().IsTaggedPrimitive()); + CHECK(HType::HeapPrimitive().IsTaggedPrimitive()); + CHECK(HType::Null().IsHeapPrimitive()); + CHECK(HType::HeapNumber().IsHeapPrimitive()); + 
CHECK(HType::String().IsHeapPrimitive()); + CHECK(HType::Boolean().IsHeapPrimitive()); + CHECK(HType::Undefined().IsHeapPrimitive()); + CHECK(!HType::JSObject().IsTaggedPrimitive()); + CHECK(!HType::JSArray().IsTaggedPrimitive()); +} + + +TEST(HTypeJSObject) { + CHECK(HType::JSArray().IsJSObject()); +} + + +TEST(HTypeNone) { + // None < T for all T + for (int i = 0; i < kNumberOfTypes; ++i) { + HType ti = kTypes[i]; + CHECK(HType::None().IsSubtypeOf(ti)); + } +} diff --git a/deps/v8/test/cctest/test-javascript-arm64.cc b/deps/v8/test/cctest/test-javascript-arm64.cc index bd7a2b2851a..5e4503478d6 100644 --- a/deps/v8/test/cctest/test-javascript-arm64.cc +++ b/deps/v8/test/cctest/test-javascript-arm64.cc @@ -27,18 +27,18 @@ #include <limits.h> -#include "v8.h" - -#include "api.h" -#include "isolate.h" -#include "compilation-cache.h" -#include "execution.h" -#include "snapshot.h" -#include "platform.h" -#include "utils.h" -#include "cctest.h" -#include "parser.h" -#include "unicode-inl.h" +#include "src/v8.h" + +#include "src/api.h" +#include "src/base/platform/platform.h" +#include "src/compilation-cache.h" +#include "src/execution.h" +#include "src/isolate.h" +#include "src/parser.h" +#include "src/snapshot.h" +#include "src/unicode-inl.h" +#include "src/utils.h" +#include "test/cctest/cctest.h" using ::v8::Context; using ::v8::Extension; diff --git a/deps/v8/test/cctest/test-js-arm64-variables.cc b/deps/v8/test/cctest/test-js-arm64-variables.cc index df3f4a8295b..7f2771094c1 100644 --- a/deps/v8/test/cctest/test-js-arm64-variables.cc +++ b/deps/v8/test/cctest/test-js-arm64-variables.cc @@ -29,18 +29,18 @@ #include <limits.h> -#include "v8.h" - -#include "api.h" -#include "isolate.h" -#include "compilation-cache.h" -#include "execution.h" -#include "snapshot.h" -#include "platform.h" -#include "utils.h" -#include "cctest.h" -#include "parser.h" -#include "unicode-inl.h" +#include "src/v8.h" + +#include "src/api.h" +#include "src/base/platform/platform.h" +#include "src/compilation-cache.h" +#include "src/execution.h" +#include "src/isolate.h" +#include "src/parser.h" +#include "src/snapshot.h" +#include "src/unicode-inl.h" +#include "src/utils.h" +#include "test/cctest/cctest.h" using ::v8::Context; using ::v8::Extension; diff --git a/deps/v8/test/cctest/test-libplatform-default-platform.cc b/deps/v8/test/cctest/test-libplatform-default-platform.cc new file mode 100644 index 00000000000..dac6db2a006 --- /dev/null +++ b/deps/v8/test/cctest/test-libplatform-default-platform.cc @@ -0,0 +1,30 @@ +// Copyright 2014 the V8 project authors. All rights reserved. +// Use of this source code is governed by a BSD-style license that can be +// found in the LICENSE file. 
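+
+// Verifies that DefaultPlatform::PumpMessageLoop() runs exactly the tasks
+// posted via CallOnForegroundThread() and returns false on an empty queue.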
+ +#include "src/v8.h" + +#include "src/libplatform/default-platform.h" +#include "test/cctest/cctest.h" +#include "test/cctest/test-libplatform.h" + +using namespace v8::internal; +using namespace v8::platform; + + +TEST(DefaultPlatformMessagePump) { + TaskCounter task_counter; + + DefaultPlatform platform; + + TestTask* task = new TestTask(&task_counter, true); + + CHECK(!platform.PumpMessageLoop(CcTest::isolate())); + + platform.CallOnForegroundThread(CcTest::isolate(), task); + + CHECK_EQ(1, task_counter.GetCount()); + CHECK(platform.PumpMessageLoop(CcTest::isolate())); + CHECK_EQ(0, task_counter.GetCount()); + CHECK(!platform.PumpMessageLoop(CcTest::isolate())); +} diff --git a/deps/v8/test/cctest/test-libplatform-task-queue.cc b/deps/v8/test/cctest/test-libplatform-task-queue.cc index 47655157630..630686b4595 100644 --- a/deps/v8/test/cctest/test-libplatform-task-queue.cc +++ b/deps/v8/test/cctest/test-libplatform-task-queue.cc @@ -25,13 +25,14 @@ // (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE // OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. -#include "v8.h" +#include "src/v8.h" -#include "cctest.h" -#include "libplatform/task-queue.h" -#include "test-libplatform.h" +#include "src/libplatform/task-queue.h" +#include "test/cctest/cctest.h" +#include "test/cctest/test-libplatform.h" using namespace v8::internal; +using namespace v8::platform; TEST(TaskQueueBasic) { diff --git a/deps/v8/test/cctest/test-libplatform-worker-thread.cc b/deps/v8/test/cctest/test-libplatform-worker-thread.cc index 090d6e1a180..ba6b51fd022 100644 --- a/deps/v8/test/cctest/test-libplatform-worker-thread.cc +++ b/deps/v8/test/cctest/test-libplatform-worker-thread.cc @@ -25,14 +25,15 @@ // (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE // OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. -#include "v8.h" +#include "src/v8.h" -#include "cctest.h" -#include "libplatform/task-queue.h" -#include "libplatform/worker-thread.h" -#include "test-libplatform.h" +#include "src/libplatform/task-queue.h" +#include "src/libplatform/worker-thread.h" +#include "test/cctest/cctest.h" +#include "test/cctest/test-libplatform.h" using namespace v8::internal; +using namespace v8::platform; TEST(WorkerThread) { @@ -54,7 +55,7 @@ TEST(WorkerThread) { queue.Append(task3); queue.Append(task4); - // TaskQueue ASSERTs that it is empty in its destructor. + // TaskQueue DCHECKs that it is empty in its destructor. 
queue.Terminate(); delete thread1; diff --git a/deps/v8/test/cctest/test-libplatform.h b/deps/v8/test/cctest/test-libplatform.h index e32770eeda3..67147f33e66 100644 --- a/deps/v8/test/cctest/test-libplatform.h +++ b/deps/v8/test/cctest/test-libplatform.h @@ -28,11 +28,12 @@ #ifndef TEST_LIBPLATFORM_H_ #define TEST_LIBPLATFORM_H_ -#include "v8.h" +#include "src/v8.h" -#include "cctest.h" +#include "test/cctest/cctest.h" using namespace v8::internal; +using namespace v8::platform; class TaskCounter { public: @@ -40,22 +41,22 @@ class TaskCounter { ~TaskCounter() { CHECK_EQ(0, counter_); } int GetCount() const { - LockGuard<Mutex> guard(&lock_); + v8::base::LockGuard<v8::base::Mutex> guard(&lock_); return counter_; } void Inc() { - LockGuard<Mutex> guard(&lock_); + v8::base::LockGuard<v8::base::Mutex> guard(&lock_); ++counter_; } void Dec() { - LockGuard<Mutex> guard(&lock_); + v8::base::LockGuard<v8::base::Mutex> guard(&lock_); --counter_; } private: - mutable Mutex lock_; + mutable v8::base::Mutex lock_; int counter_; DISALLOW_COPY_AND_ASSIGN(TaskCounter); @@ -93,10 +94,12 @@ class TestTask : public v8::Task { }; -class TestWorkerThread : public Thread { +class TestWorkerThread : public v8::base::Thread { public: explicit TestWorkerThread(v8::Task* task) - : Thread("libplatform TestWorkerThread"), semaphore_(0), task_(task) {} + : Thread(Options("libplatform TestWorkerThread")), + semaphore_(0), + task_(task) {} virtual ~TestWorkerThread() {} void Signal() { semaphore_.Signal(); } @@ -111,7 +114,7 @@ class TestWorkerThread : public Thread { } private: - Semaphore semaphore_; + v8::base::Semaphore semaphore_; v8::Task* task_; DISALLOW_COPY_AND_ASSIGN(TestWorkerThread); diff --git a/deps/v8/test/cctest/test-list.cc b/deps/v8/test/cctest/test-list.cc index a29972b583b..20c13f6e65b 100644 --- a/deps/v8/test/cctest/test-list.cc +++ b/deps/v8/test/cctest/test-list.cc @@ -27,8 +27,8 @@ #include <stdlib.h> #include <string.h> -#include "v8.h" -#include "cctest.h" +#include "src/v8.h" +#include "test/cctest/cctest.h" using namespace v8::internal; diff --git a/deps/v8/test/cctest/test-liveedit.cc b/deps/v8/test/cctest/test-liveedit.cc index 1c64108c8d3..f5c22743bde 100644 --- a/deps/v8/test/cctest/test-liveedit.cc +++ b/deps/v8/test/cctest/test-liveedit.cc @@ -27,10 +27,10 @@ #include <stdlib.h> -#include "v8.h" +#include "src/v8.h" -#include "liveedit.h" -#include "cctest.h" +#include "src/liveedit.h" +#include "test/cctest/cctest.h" using namespace v8::internal; @@ -117,12 +117,12 @@ void CompareStringsOneWay(const char* s1, const char* s2, int similar_part_length = diff_pos1 - pos1; int diff_pos2 = pos2 + similar_part_length; - ASSERT_EQ(diff_pos2, chunk->pos2); + DCHECK_EQ(diff_pos2, chunk->pos2); for (int j = 0; j < similar_part_length; j++) { - ASSERT(pos1 + j < len1); - ASSERT(pos2 + j < len2); - ASSERT_EQ(s1[pos1 + j], s2[pos2 + j]); + DCHECK(pos1 + j < len1); + DCHECK(pos2 + j < len2); + DCHECK_EQ(s1[pos1 + j], s2[pos2 + j]); } diff_parameter += chunk->len1 + chunk->len2; pos1 = diff_pos1 + chunk->len1; @@ -131,17 +131,17 @@ void CompareStringsOneWay(const char* s1, const char* s2, { // After last chunk. 
int similar_part_length = len1 - pos1; - ASSERT_EQ(similar_part_length, len2 - pos2); + DCHECK_EQ(similar_part_length, len2 - pos2); USE(len2); for (int j = 0; j < similar_part_length; j++) { - ASSERT(pos1 + j < len1); - ASSERT(pos2 + j < len2); - ASSERT_EQ(s1[pos1 + j], s2[pos2 + j]); + DCHECK(pos1 + j < len1); + DCHECK(pos2 + j < len2); + DCHECK_EQ(s1[pos1 + j], s2[pos2 + j]); } } if (expected_diff_parameter != -1) { - ASSERT_EQ(expected_diff_parameter, diff_parameter); + DCHECK_EQ(expected_diff_parameter, diff_parameter); } } diff --git a/deps/v8/test/cctest/test-lockers.cc b/deps/v8/test/cctest/test-lockers.cc index 12b35a5c4da..ed315cea3dc 100644 --- a/deps/v8/test/cctest/test-lockers.cc +++ b/deps/v8/test/cctest/test-lockers.cc @@ -27,19 +27,19 @@ #include <limits.h> -#include "v8.h" - -#include "api.h" -#include "isolate.h" -#include "compilation-cache.h" -#include "execution.h" -#include "smart-pointers.h" -#include "snapshot.h" -#include "platform.h" -#include "utils.h" -#include "cctest.h" -#include "parser.h" -#include "unicode-inl.h" +#include "src/v8.h" + +#include "src/api.h" +#include "src/base/platform/platform.h" +#include "src/compilation-cache.h" +#include "src/execution.h" +#include "src/isolate.h" +#include "src/parser.h" +#include "src/smart-pointers.h" +#include "src/snapshot.h" +#include "src/unicode-inl.h" +#include "src/utils.h" +#include "test/cctest/cctest.h" using ::v8::Context; using ::v8::Extension; @@ -56,10 +56,10 @@ using ::v8::V8; // Migrating an isolate -class KangarooThread : public v8::internal::Thread { +class KangarooThread : public v8::base::Thread { public: KangarooThread(v8::Isolate* isolate, v8::Handle<v8::Context> context) - : Thread("KangarooThread"), + : Thread(Options("KangarooThread")), isolate_(isolate), context_(isolate, context) {} @@ -146,12 +146,11 @@ class JoinableThread { virtual void Run() = 0; private: - class ThreadWithSemaphore : public i::Thread { + class ThreadWithSemaphore : public v8::base::Thread { public: explicit ThreadWithSemaphore(JoinableThread* joinable_thread) - : Thread(joinable_thread->name_), - joinable_thread_(joinable_thread) { - } + : Thread(Options(joinable_thread->name_)), + joinable_thread_(joinable_thread) {} virtual void Run() { joinable_thread_->Run(); @@ -163,7 +162,7 @@ class JoinableThread { }; const char* name_; - i::Semaphore semaphore_; + v8::base::Semaphore semaphore_; ThreadWithSemaphore thread_; friend class ThreadWithSemaphore; @@ -223,9 +222,7 @@ TEST(IsolateLockingStress) { class IsolateNonlockingThread : public JoinableThread { public: - explicit IsolateNonlockingThread() - : JoinableThread("IsolateNonlockingThread") { - } + IsolateNonlockingThread() : JoinableThread("IsolateNonlockingThread") {} virtual void Run() { v8::Isolate* isolate = v8::Isolate::New(); @@ -247,6 +244,8 @@ class IsolateNonlockingThread : public JoinableThread { TEST(MultithreadedParallelIsolates) { #if V8_TARGET_ARCH_ARM || V8_TARGET_ARCH_MIPS const int kNThreads = 10; +#elif V8_TARGET_ARCH_X64 && V8_TARGET_ARCH_32_BIT + const int kNThreads = 4; #else const int kNThreads = 50; #endif @@ -713,6 +712,8 @@ class IsolateGenesisThread : public JoinableThread { TEST(ExtensionsRegistration) { #if V8_TARGET_ARCH_ARM || V8_TARGET_ARCH_MIPS const int kNThreads = 10; +#elif V8_TARGET_ARCH_X64 && V8_TARGET_ARCH_32_BIT + const int kNThreads = 4; #else const int kNThreads = 40; #endif diff --git a/deps/v8/test/cctest/test-log-stack-tracer.cc b/deps/v8/test/cctest/test-log-stack-tracer.cc index 5b6858e5536..334a2010532 100644 --- 
a/deps/v8/test/cctest/test-log-stack-tracer.cc +++ b/deps/v8/test/cctest/test-log-stack-tracer.cc @@ -29,17 +29,17 @@ #include <stdlib.h> -#include "v8.h" - -#include "api.h" -#include "cctest.h" -#include "codegen.h" -#include "disassembler.h" -#include "isolate.h" -#include "log.h" -#include "sampler.h" -#include "trace-extension.h" -#include "vm-state-inl.h" +#include "src/v8.h" + +#include "src/api.h" +#include "src/codegen.h" +#include "src/disassembler.h" +#include "src/isolate.h" +#include "src/log.h" +#include "src/sampler.h" +#include "src/vm-state-inl.h" +#include "test/cctest/cctest.h" +#include "test/cctest/trace-extension.h" using v8::Function; using v8::Local; @@ -119,12 +119,12 @@ static void CreateTraceCallerFunction(v8::Local<v8::Context> context, const char* func_name, const char* trace_func_name) { i::EmbeddedVector<char, 256> trace_call_buf; - i::OS::SNPrintF(trace_call_buf, - "function %s() {" - " fp = new FPGrabber();" - " %s(fp.low_bits, fp.high_bits);" - "}", - func_name, trace_func_name); + i::SNPrintF(trace_call_buf, + "function %s() {" + " fp = new FPGrabber();" + " %s(fp.low_bits, fp.high_bits);" + "}", + func_name, trace_func_name); // Create the FPGrabber function, which grabs the caller's frame pointer // when called as a constructor. @@ -172,7 +172,7 @@ TEST(CFromJSStackTrace) { CHECK_EQ(FUNCTION_ADDR(i::TraceExtension::Trace), sample.external_callback); // Stack tracing will start from the first JS function, i.e. "JSFuncDoTrace" - int base = 0; + unsigned base = 0; CHECK_GT(sample.frames_count, base + 1); CHECK(IsAddressWithinFuncCode( @@ -225,7 +225,7 @@ TEST(PureJSStackTrace) { CHECK_EQ(FUNCTION_ADDR(i::TraceExtension::JSTrace), sample.external_callback); // Stack sampling will start from the caller of JSFuncDoTrace, i.e. "JSTrace" - int base = 0; + unsigned base = 0; CHECK_GT(sample.frames_count, base + 1); CHECK(IsAddressWithinFuncCode(context, "JSTrace", sample.stack[base + 0])); CHECK(IsAddressWithinFuncCode( diff --git a/deps/v8/test/cctest/test-log.cc b/deps/v8/test/cctest/test-log.cc index e6ed75e64da..d72e6f0e1e0 100644 --- a/deps/v8/test/cctest/test-log.cc +++ b/deps/v8/test/cctest/test-log.cc @@ -34,15 +34,16 @@ #include <cmath> #endif // __linux__ -#include "v8.h" -#include "log.h" -#include "log-utils.h" -#include "cpu-profiler.h" -#include "natives.h" -#include "utils.h" -#include "v8threads.h" -#include "cctest.h" -#include "vm-state-inl.h" +#include "src/v8.h" + +#include "src/cpu-profiler.h" +#include "src/log.h" +#include "src/log-utils.h" +#include "src/natives.h" +#include "src/utils.h" +#include "src/v8threads.h" +#include "src/vm-state-inl.h" +#include "test/cctest/cctest.h" using v8::internal::Address; using v8::internal::EmbeddedVector; @@ -112,7 +113,7 @@ class ScopedLoggerInitializer { static const char* StrNStr(const char* s1, const char* s2, int n) { if (s1[n] == '\0') return strstr(s1, s2); i::ScopedVector<char> str(n + 1); - i::OS::StrNCpy(str, s1, static_cast<size_t>(n)); + i::StrNCpy(str, s1, static_cast<size_t>(n)); str[n] = '\0'; char* found = strstr(str.start(), s2); return found != NULL ? 
s1 + (found - str.start()) : NULL; @@ -358,9 +359,9 @@ TEST(LogCallbacks) { CHECK(exists); i::EmbeddedVector<char, 100> ref_data; - i::OS::SNPrintF(ref_data, - "code-creation,Callback,-2,0x%" V8PRIxPTR ",1,\"method1\"\0", - ObjMethod1); + i::SNPrintF(ref_data, + "code-creation,Callback,-2,0x%" V8PRIxPTR ",1,\"method1\"", + reinterpret_cast<intptr_t>(ObjMethod1)); CHECK_NE(NULL, StrNStr(log.start(), ref_data.start(), log.length())); log.Dispose(); @@ -402,23 +403,23 @@ TEST(LogAccessorCallbacks) { CHECK(exists); EmbeddedVector<char, 100> prop1_getter_record; - i::OS::SNPrintF(prop1_getter_record, - "code-creation,Callback,-2,0x%" V8PRIxPTR ",1,\"get prop1\"", - Prop1Getter); + i::SNPrintF(prop1_getter_record, + "code-creation,Callback,-2,0x%" V8PRIxPTR ",1,\"get prop1\"", + reinterpret_cast<intptr_t>(Prop1Getter)); CHECK_NE(NULL, StrNStr(log.start(), prop1_getter_record.start(), log.length())); EmbeddedVector<char, 100> prop1_setter_record; - i::OS::SNPrintF(prop1_setter_record, - "code-creation,Callback,-2,0x%" V8PRIxPTR ",1,\"set prop1\"", - Prop1Setter); + i::SNPrintF(prop1_setter_record, + "code-creation,Callback,-2,0x%" V8PRIxPTR ",1,\"set prop1\"", + reinterpret_cast<intptr_t>(Prop1Setter)); CHECK_NE(NULL, StrNStr(log.start(), prop1_setter_record.start(), log.length())); EmbeddedVector<char, 100> prop2_getter_record; - i::OS::SNPrintF(prop2_getter_record, - "code-creation,Callback,-2,0x%" V8PRIxPTR ",1,\"get prop2\"", - Prop2Getter); + i::SNPrintF(prop2_getter_record, + "code-creation,Callback,-2,0x%" V8PRIxPTR ",1,\"get prop2\"", + reinterpret_cast<intptr_t>(Prop2Getter)); CHECK_NE(NULL, StrNStr(log.start(), prop2_getter_record.start(), log.length())); log.Dispose(); diff --git a/deps/v8/test/cctest/test-macro-assembler-arm.cc b/deps/v8/test/cctest/test-macro-assembler-arm.cc index 8aed4c27b5c..2cfad0df835 100644 --- a/deps/v8/test/cctest/test-macro-assembler-arm.cc +++ b/deps/v8/test/cctest/test-macro-assembler-arm.cc @@ -27,11 +27,13 @@ #include <stdlib.h> -#include "v8.h" -#include "macro-assembler.h" -#include "arm/macro-assembler-arm.h" -#include "arm/simulator-arm.h" -#include "cctest.h" +#include "src/v8.h" +#include "test/cctest/cctest.h" + +#include "src/macro-assembler.h" + +#include "src/arm/macro-assembler-arm.h" +#include "src/arm/simulator-arm.h" using namespace v8::internal; @@ -66,10 +68,12 @@ TEST(CopyBytes) { size_t act_size; // Allocate two blocks to copy data between. - byte* src_buffer = static_cast<byte*>(OS::Allocate(data_size, &act_size, 0)); + byte* src_buffer = + static_cast<byte*>(v8::base::OS::Allocate(data_size, &act_size, 0)); CHECK(src_buffer); CHECK(act_size >= static_cast<size_t>(data_size)); - byte* dest_buffer = static_cast<byte*>(OS::Allocate(data_size, &act_size, 0)); + byte* dest_buffer = + static_cast<byte*>(v8::base::OS::Allocate(data_size, &act_size, 0)); CHECK(dest_buffer); CHECK(act_size >= static_cast<size_t>(data_size)); @@ -137,9 +141,8 @@ TEST(LoadAndStoreWithRepresentation) { // Allocate an executable page of memory. 
size_t actual_size; - byte* buffer = static_cast<byte*>(OS::Allocate(Assembler::kMinimalBufferSize, - &actual_size, - true)); + byte* buffer = static_cast<byte*>(v8::base::OS::Allocate( + Assembler::kMinimalBufferSize, &actual_size, true)); CHECK(buffer); Isolate* isolate = CcTest::i_isolate(); HandleScope handles(isolate); diff --git a/deps/v8/test/cctest/test-macro-assembler-ia32.cc b/deps/v8/test/cctest/test-macro-assembler-ia32.cc index 3ad52712c49..4d37579918a 100644 --- a/deps/v8/test/cctest/test-macro-assembler-ia32.cc +++ b/deps/v8/test/cctest/test-macro-assembler-ia32.cc @@ -27,13 +27,13 @@ #include <stdlib.h> -#include "v8.h" +#include "src/v8.h" +#include "test/cctest/cctest.h" -#include "macro-assembler.h" -#include "factory.h" -#include "platform.h" -#include "serialize.h" -#include "cctest.h" +#include "src/base/platform/platform.h" +#include "src/factory.h" +#include "src/macro-assembler.h" +#include "src/serialize.h" using namespace v8::internal; @@ -54,9 +54,8 @@ TEST(LoadAndStoreWithRepresentation) { // Allocate an executable page of memory. size_t actual_size; - byte* buffer = static_cast<byte*>(OS::Allocate(Assembler::kMinimalBufferSize, - &actual_size, - true)); + byte* buffer = static_cast<byte*>(v8::base::OS::Allocate( + Assembler::kMinimalBufferSize, &actual_size, true)); CHECK(buffer); Isolate* isolate = CcTest::i_isolate(); HandleScope handles(isolate); @@ -123,20 +122,17 @@ TEST(LoadAndStoreWithRepresentation) { __ j(not_equal, &exit); // Test 5. - if (CpuFeatures::IsSupported(SSE2)) { - CpuFeatureScope scope(masm, SSE2); - __ mov(eax, Immediate(5)); // Test XMM move immediate. - __ Move(xmm0, 0.0); - __ Move(xmm1, 0.0); - __ ucomisd(xmm0, xmm1); - __ j(not_equal, &exit); - __ Move(xmm2, 991.01); - __ ucomisd(xmm0, xmm2); - __ j(equal, &exit); - __ Move(xmm0, 991.01); - __ ucomisd(xmm0, xmm2); - __ j(not_equal, &exit); - } + __ mov(eax, Immediate(5)); // Test XMM move immediate. + __ Move(xmm0, 0.0); + __ Move(xmm1, 0.0); + __ ucomisd(xmm0, xmm1); + __ j(not_equal, &exit); + __ Move(xmm2, 991.01); + __ ucomisd(xmm0, xmm2); + __ j(equal, &exit); + __ Move(xmm0, 991.01); + __ ucomisd(xmm0, xmm2); + __ j(not_equal, &exit); // Test 6. __ mov(eax, Immediate(6)); diff --git a/deps/v8/test/cctest/test-macro-assembler-mips.cc b/deps/v8/test/cctest/test-macro-assembler-mips.cc index a5045a8f01b..33a4611540f 100644 --- a/deps/v8/test/cctest/test-macro-assembler-mips.cc +++ b/deps/v8/test/cctest/test-macro-assembler-mips.cc @@ -27,11 +27,12 @@ #include <stdlib.h> -#include "v8.h" -#include "macro-assembler.h" -#include "mips/macro-assembler-mips.h" -#include "mips/simulator-mips.h" -#include "cctest.h" +#include "src/v8.h" +#include "test/cctest/cctest.h" + +#include "src/macro-assembler.h" +#include "src/mips/macro-assembler-mips.h" +#include "src/mips/simulator-mips.h" using namespace v8::internal; @@ -66,10 +67,12 @@ TEST(CopyBytes) { size_t act_size; // Allocate two blocks to copy data between. 
- byte* src_buffer = static_cast<byte*>(OS::Allocate(data_size, &act_size, 0)); + byte* src_buffer = + static_cast<byte*>(v8::base::OS::Allocate(data_size, &act_size, 0)); CHECK(src_buffer); CHECK(act_size >= static_cast<size_t>(data_size)); - byte* dest_buffer = static_cast<byte*>(OS::Allocate(data_size, &act_size, 0)); + byte* dest_buffer = + static_cast<byte*>(v8::base::OS::Allocate(data_size, &act_size, 0)); CHECK(dest_buffer); CHECK(act_size >= static_cast<size_t>(data_size)); diff --git a/deps/v8/test/cctest/test-macro-assembler-mips64.cc b/deps/v8/test/cctest/test-macro-assembler-mips64.cc new file mode 100644 index 00000000000..eef658de67f --- /dev/null +++ b/deps/v8/test/cctest/test-macro-assembler-mips64.cc @@ -0,0 +1,217 @@ +// Copyright 2013 the V8 project authors. All rights reserved. +// Redistribution and use in source and binary forms, with or without +// modification, are permitted provided that the following conditions are +// met: +// +// * Redistributions of source code must retain the above copyright +// notice, this list of conditions and the following disclaimer. +// * Redistributions in binary form must reproduce the above +// copyright notice, this list of conditions and the following +// disclaimer in the documentation and/or other materials provided +// with the distribution. +// * Neither the name of Google Inc. nor the names of its +// contributors may be used to endorse or promote products derived +// from this software without specific prior written permission. +// +// THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS +// "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT +// LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR +// A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT +// OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, +// SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT +// LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, +// DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY +// THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT +// (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE +// OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. + +#include <stdlib.h> + +#include "src/v8.h" +#include "test/cctest/cctest.h" + +#include "src/macro-assembler.h" +#include "src/mips64/macro-assembler-mips64.h" +#include "src/mips64/simulator-mips64.h" + + +using namespace v8::internal; + +typedef void* (*F)(int64_t x, int64_t y, int p2, int p3, int p4); + +#define __ masm-> + + +static byte to_non_zero(int n) { + return static_cast<unsigned>(n) % 255 + 1; +} + + +static bool all_zeroes(const byte* beg, const byte* end) { + CHECK(beg); + CHECK(beg <= end); + while (beg < end) { + if (*beg++ != 0) + return false; + } + return true; +} + + +TEST(CopyBytes) { + CcTest::InitializeVM(); + Isolate* isolate = Isolate::Current(); + HandleScope handles(isolate); + + const int data_size = 1 * KB; + size_t act_size; + + // Allocate two blocks to copy data between. + byte* src_buffer = + static_cast<byte*>(v8::base::OS::Allocate(data_size, &act_size, 0)); + CHECK(src_buffer); + CHECK(act_size >= static_cast<size_t>(data_size)); + byte* dest_buffer = + static_cast<byte*>(v8::base::OS::Allocate(data_size, &act_size, 0)); + CHECK(dest_buffer); + CHECK(act_size >= static_cast<size_t>(data_size)); + + // Storage for a0 and a1. 
+ byte* a0_; + byte* a1_; + + MacroAssembler assembler(isolate, NULL, 0); + MacroAssembler* masm = &assembler; + + // Code to be generated: The stuff in CopyBytes followed by a store of a0 and + // a1, respectively. + __ CopyBytes(a0, a1, a2, a3); + __ li(a2, Operand(reinterpret_cast<int64_t>(&a0_))); + __ li(a3, Operand(reinterpret_cast<int64_t>(&a1_))); + __ sd(a0, MemOperand(a2)); + __ jr(ra); + __ sd(a1, MemOperand(a3)); + + CodeDesc desc; + masm->GetCode(&desc); + Handle<Code> code = isolate->factory()->NewCode( + desc, Code::ComputeFlags(Code::STUB), Handle<Code>()); + + ::F f = FUNCTION_CAST< ::F>(code->entry()); + + // Initialise source data with non-zero bytes. + for (int i = 0; i < data_size; i++) { + src_buffer[i] = to_non_zero(i); + } + + const int fuzz = 11; + + for (int size = 0; size < 600; size++) { + for (const byte* src = src_buffer; src < src_buffer + fuzz; src++) { + for (byte* dest = dest_buffer; dest < dest_buffer + fuzz; dest++) { + memset(dest_buffer, 0, data_size); + CHECK(dest + size < dest_buffer + data_size); + (void) CALL_GENERATED_CODE(f, reinterpret_cast<int64_t>(src), + reinterpret_cast<int64_t>(dest), + size, 0, 0); + // a0 and a1 should point at the first byte after the copied data. + CHECK_EQ(src + size, a0_); + CHECK_EQ(dest + size, a1_); + // Check that we haven't written outside the target area. + CHECK(all_zeroes(dest_buffer, dest)); + CHECK(all_zeroes(dest + size, dest_buffer + data_size)); + // Check the target area. + CHECK_EQ(0, memcmp(src, dest, size)); + } + } + } + + // Check that the source data hasn't been clobbered. + for (int i = 0; i < data_size; i++) { + CHECK(src_buffer[i] == to_non_zero(i)); + } +} + + +TEST(LoadConstants) { + CcTest::InitializeVM(); + Isolate* isolate = Isolate::Current(); + HandleScope handles(isolate); + + int64_t refConstants[64]; + int64_t result[64]; + + int64_t mask = 1; + for (int i = 0; i < 64; i++) { + refConstants[i] = ~(mask << i); + } + + MacroAssembler assembler(isolate, NULL, 0); + MacroAssembler* masm = &assembler; + + __ mov(a4, a0); + for (int i = 0; i < 64; i++) { + // Load constant. + __ li(a5, Operand(refConstants[i])); + __ sd(a5, MemOperand(a4)); + __ Daddu(a4, a4, Operand(kPointerSize)); + } + + __ jr(ra); + __ nop(); + + CodeDesc desc; + masm->GetCode(&desc); + Handle<Code> code = isolate->factory()->NewCode( + desc, Code::ComputeFlags(Code::STUB), Handle<Code>()); + + ::F f = FUNCTION_CAST< ::F>(code->entry()); + (void) CALL_GENERATED_CODE(f, reinterpret_cast<int64_t>(result), + 0, 0, 0, 0); + // Check results. + for (int i = 0; i < 64; i++) { + CHECK(refConstants[i] == result[i]); + } +} + + +TEST(LoadAddress) { + CcTest::InitializeVM(); + Isolate* isolate = Isolate::Current(); + HandleScope handles(isolate); + + MacroAssembler assembler(isolate, NULL, 0); + MacroAssembler* masm = &assembler; + Label to_jump, skip; + __ mov(a4, a0); + + __ Branch(&skip); + __ bind(&to_jump); + __ nop(); + __ nop(); + __ jr(ra); + __ nop(); + __ bind(&skip); + __ li(a4, Operand(masm->jump_address(&to_jump)), ADDRESS_LOAD); + int check_size = masm->InstructionsGeneratedSince(&skip); + CHECK_EQ(check_size, 4); + __ jr(a4); + __ nop(); + __ stop("invalid"); + __ stop("invalid"); + __ stop("invalid"); + __ stop("invalid"); + __ stop("invalid"); + + + CodeDesc desc; + masm->GetCode(&desc); + Handle<Code> code = isolate->factory()->NewCode( + desc, Code::ComputeFlags(Code::STUB), Handle<Code>()); + + ::F f = FUNCTION_CAST< ::F>(code->entry()); + (void) CALL_GENERATED_CODE(f, 0, 0, 0, 0, 0); + // Check results. 
+} + +#undef __ diff --git a/deps/v8/test/cctest/test-macro-assembler-x64.cc b/deps/v8/test/cctest/test-macro-assembler-x64.cc index 609bc69956c..2c0e9180576 100644 --- a/deps/v8/test/cctest/test-macro-assembler-x64.cc +++ b/deps/v8/test/cctest/test-macro-assembler-x64.cc @@ -27,13 +27,13 @@ #include <stdlib.h> -#include "v8.h" +#include "src/v8.h" -#include "macro-assembler.h" -#include "factory.h" -#include "platform.h" -#include "serialize.h" -#include "cctest.h" +#include "src/base/platform/platform.h" +#include "src/factory.h" +#include "src/macro-assembler.h" +#include "src/serialize.h" +#include "test/cctest/cctest.h" namespace i = v8::internal; using i::Address; @@ -46,7 +46,6 @@ using i::Immediate; using i::Isolate; using i::Label; using i::MacroAssembler; -using i::OS; using i::Operand; using i::RelocInfo; using i::Representation; @@ -157,9 +156,8 @@ TEST(SmiMove) { i::V8::Initialize(NULL); // Allocate an executable page of memory. size_t actual_size; - byte* buffer = static_cast<byte*>(OS::Allocate(Assembler::kMinimalBufferSize, - &actual_size, - true)); + byte* buffer = static_cast<byte*>(v8::base::OS::Allocate( + Assembler::kMinimalBufferSize, &actual_size, true)); CHECK(buffer); Isolate* isolate = CcTest::i_isolate(); HandleScope handles(isolate); @@ -207,7 +205,7 @@ void TestSmiCompare(MacroAssembler* masm, Label* exit, int id, int x, int y) { __ movl(rax, Immediate(id + 2)); __ j(less_equal, exit); } else { - ASSERT_EQ(x, y); + DCHECK_EQ(x, y); __ movl(rax, Immediate(id + 3)); __ j(not_equal, exit); } @@ -224,7 +222,7 @@ void TestSmiCompare(MacroAssembler* masm, Label* exit, int id, int x, int y) { __ movl(rax, Immediate(id + 9)); __ j(greater_equal, exit); } else { - ASSERT(y > x); + DCHECK(y > x); __ movl(rax, Immediate(id + 10)); __ j(less_equal, exit); } @@ -244,10 +242,8 @@ TEST(SmiCompare) { i::V8::Initialize(NULL); // Allocate an executable page of memory. size_t actual_size; - byte* buffer = - static_cast<byte*>(OS::Allocate(Assembler::kMinimalBufferSize * 2, - &actual_size, - true)); + byte* buffer = static_cast<byte*>(v8::base::OS::Allocate( + Assembler::kMinimalBufferSize * 2, &actual_size, true)); CHECK(buffer); Isolate* isolate = CcTest::i_isolate(); HandleScope handles(isolate); @@ -295,9 +291,8 @@ TEST(Integer32ToSmi) { i::V8::Initialize(NULL); // Allocate an executable page of memory. size_t actual_size; - byte* buffer = static_cast<byte*>(OS::Allocate(Assembler::kMinimalBufferSize, - &actual_size, - true)); + byte* buffer = static_cast<byte*>(v8::base::OS::Allocate( + Assembler::kMinimalBufferSize, &actual_size, true)); CHECK(buffer); Isolate* isolate = CcTest::i_isolate(); HandleScope handles(isolate); @@ -399,7 +394,7 @@ void TestI64PlusConstantToSmi(MacroAssembler* masm, int64_t x, int y) { int64_t result = x + y; - ASSERT(Smi::IsValid(result)); + DCHECK(Smi::IsValid(result)); __ movl(rax, Immediate(id)); __ Move(r8, Smi::FromInt(static_cast<int>(result))); __ movq(rcx, x); @@ -423,9 +418,8 @@ TEST(Integer64PlusConstantToSmi) { i::V8::Initialize(NULL); // Allocate an executable page of memory. size_t actual_size; - byte* buffer = static_cast<byte*>(OS::Allocate(Assembler::kMinimalBufferSize, - &actual_size, - true)); + byte* buffer = static_cast<byte*>(v8::base::OS::Allocate( + Assembler::kMinimalBufferSize, &actual_size, true)); CHECK(buffer); Isolate* isolate = CcTest::i_isolate(); HandleScope handles(isolate); @@ -467,9 +461,8 @@ TEST(SmiCheck) { i::V8::Initialize(NULL); // Allocate an executable page of memory. 
size_t actual_size; - byte* buffer = static_cast<byte*>(OS::Allocate(Assembler::kMinimalBufferSize, - &actual_size, - true)); + byte* buffer = static_cast<byte*>(v8::base::OS::Allocate( + Assembler::kMinimalBufferSize, &actual_size, true)); CHECK(buffer); Isolate* isolate = CcTest::i_isolate(); HandleScope handles(isolate); @@ -714,10 +707,8 @@ TEST(SmiNeg) { i::V8::Initialize(NULL); // Allocate an executable page of memory. size_t actual_size; - byte* buffer = - static_cast<byte*>(OS::Allocate(Assembler::kMinimalBufferSize, - &actual_size, - true)); + byte* buffer = static_cast<byte*>(v8::base::OS::Allocate( + Assembler::kMinimalBufferSize, &actual_size, true)); CHECK(buffer); Isolate* isolate = CcTest::i_isolate(); HandleScope handles(isolate); @@ -820,7 +811,7 @@ static void SmiAddOverflowTest(MacroAssembler* masm, int id, int x) { // Adds a Smi to x so that the addition overflows. - ASSERT(x != 0); // Can't overflow by adding a Smi. + DCHECK(x != 0); // Can't overflow by adding a Smi. int y_max = (x > 0) ? (Smi::kMaxValue + 0) : (Smi::kMinValue - x - 1); int y_min = (x > 0) ? (Smi::kMaxValue - x + 1) : (Smi::kMinValue + 0); @@ -930,10 +921,8 @@ TEST(SmiAdd) { i::V8::Initialize(NULL); // Allocate an executable page of memory. size_t actual_size; - byte* buffer = - static_cast<byte*>(OS::Allocate(Assembler::kMinimalBufferSize * 3, - &actual_size, - true)); + byte* buffer = static_cast<byte*>(v8::base::OS::Allocate( + Assembler::kMinimalBufferSize * 3, &actual_size, true)); CHECK(buffer); Isolate* isolate = CcTest::i_isolate(); HandleScope handles(isolate); @@ -1039,7 +1028,7 @@ static void SmiSubOverflowTest(MacroAssembler* masm, int id, int x) { // Subtracts a Smi from x so that the subtraction overflows. - ASSERT(x != -1); // Can't overflow by subtracting a Smi. + DCHECK(x != -1); // Can't overflow by subtracting a Smi. int y_max = (x < 0) ? (Smi::kMaxValue + 0) : (Smi::kMinValue + 0); int y_min = (x < 0) ? (Smi::kMaxValue + x + 2) : (Smi::kMinValue + x); @@ -1151,10 +1140,8 @@ TEST(SmiSub) { i::V8::Initialize(NULL); // Allocate an executable page of memory. size_t actual_size; - byte* buffer = - static_cast<byte*>(OS::Allocate(Assembler::kMinimalBufferSize * 4, - &actual_size, - true)); + byte* buffer = static_cast<byte*>(v8::base::OS::Allocate( + Assembler::kMinimalBufferSize * 4, &actual_size, true)); CHECK(buffer); Isolate* isolate = CcTest::i_isolate(); HandleScope handles(isolate); @@ -1242,9 +1229,8 @@ TEST(SmiMul) { i::V8::Initialize(NULL); // Allocate an executable page of memory. size_t actual_size; - byte* buffer = static_cast<byte*>(OS::Allocate(Assembler::kMinimalBufferSize, - &actual_size, - true)); + byte* buffer = static_cast<byte*>(v8::base::OS::Allocate( + Assembler::kMinimalBufferSize, &actual_size, true)); CHECK(buffer); Isolate* isolate = CcTest::i_isolate(); HandleScope handles(isolate); @@ -1347,10 +1333,8 @@ TEST(SmiDiv) { i::V8::Initialize(NULL); // Allocate an executable page of memory. size_t actual_size; - byte* buffer = - static_cast<byte*>(OS::Allocate(Assembler::kMinimalBufferSize * 2, - &actual_size, - true)); + byte* buffer = static_cast<byte*>(v8::base::OS::Allocate( + Assembler::kMinimalBufferSize * 2, &actual_size, true)); CHECK(buffer); Isolate* isolate = CcTest::i_isolate(); HandleScope handles(isolate); @@ -1457,10 +1441,8 @@ TEST(SmiMod) { i::V8::Initialize(NULL); // Allocate an executable page of memory. 
size_t actual_size; - byte* buffer = - static_cast<byte*>(OS::Allocate(Assembler::kMinimalBufferSize * 2, - &actual_size, - true)); + byte* buffer = static_cast<byte*>(v8::base::OS::Allocate( + Assembler::kMinimalBufferSize * 2, &actual_size, true)); CHECK(buffer); Isolate* isolate = CcTest::i_isolate(); HandleScope handles(isolate); @@ -1515,7 +1497,7 @@ void TestSmiIndex(MacroAssembler* masm, Label* exit, int id, int x) { for (int i = 0; i < 8; i++) { __ Move(rcx, Smi::FromInt(x)); SmiIndex index = masm->SmiToIndex(rdx, rcx, i); - ASSERT(index.reg.is(rcx) || index.reg.is(rdx)); + DCHECK(index.reg.is(rcx) || index.reg.is(rdx)); __ shlq(index.reg, Immediate(index.scale)); __ Set(r8, static_cast<intptr_t>(x) << i); __ cmpq(index.reg, r8); @@ -1523,7 +1505,7 @@ void TestSmiIndex(MacroAssembler* masm, Label* exit, int id, int x) { __ incq(rax); __ Move(rcx, Smi::FromInt(x)); index = masm->SmiToIndex(rcx, rcx, i); - ASSERT(index.reg.is(rcx)); + DCHECK(index.reg.is(rcx)); __ shlq(rcx, Immediate(index.scale)); __ Set(r8, static_cast<intptr_t>(x) << i); __ cmpq(rcx, r8); @@ -1532,7 +1514,7 @@ void TestSmiIndex(MacroAssembler* masm, Label* exit, int id, int x) { __ Move(rcx, Smi::FromInt(x)); index = masm->SmiToNegativeIndex(rdx, rcx, i); - ASSERT(index.reg.is(rcx) || index.reg.is(rdx)); + DCHECK(index.reg.is(rcx) || index.reg.is(rdx)); __ shlq(index.reg, Immediate(index.scale)); __ Set(r8, static_cast<intptr_t>(-x) << i); __ cmpq(index.reg, r8); @@ -1540,7 +1522,7 @@ void TestSmiIndex(MacroAssembler* masm, Label* exit, int id, int x) { __ incq(rax); __ Move(rcx, Smi::FromInt(x)); index = masm->SmiToNegativeIndex(rcx, rcx, i); - ASSERT(index.reg.is(rcx)); + DCHECK(index.reg.is(rcx)); __ shlq(rcx, Immediate(index.scale)); __ Set(r8, static_cast<intptr_t>(-x) << i); __ cmpq(rcx, r8); @@ -1554,10 +1536,8 @@ TEST(SmiIndex) { i::V8::Initialize(NULL); // Allocate an executable page of memory. size_t actual_size; - byte* buffer = - static_cast<byte*>(OS::Allocate(Assembler::kMinimalBufferSize * 5, - &actual_size, - true)); + byte* buffer = static_cast<byte*>(v8::base::OS::Allocate( + Assembler::kMinimalBufferSize * 5, &actual_size, true)); CHECK(buffer); Isolate* isolate = CcTest::i_isolate(); HandleScope handles(isolate); @@ -1623,10 +1603,8 @@ TEST(SmiSelectNonSmi) { i::V8::Initialize(NULL); // Allocate an executable page of memory. size_t actual_size; - byte* buffer = - static_cast<byte*>(OS::Allocate(Assembler::kMinimalBufferSize * 2, - &actual_size, - true)); + byte* buffer = static_cast<byte*>(v8::base::OS::Allocate( + Assembler::kMinimalBufferSize * 2, &actual_size, true)); CHECK(buffer); Isolate* isolate = CcTest::i_isolate(); HandleScope handles(isolate); @@ -1702,10 +1680,8 @@ TEST(SmiAnd) { i::V8::Initialize(NULL); // Allocate an executable page of memory. size_t actual_size; - byte* buffer = - static_cast<byte*>(OS::Allocate(Assembler::kMinimalBufferSize * 2, - &actual_size, - true)); + byte* buffer = static_cast<byte*>(v8::base::OS::Allocate( + Assembler::kMinimalBufferSize * 2, &actual_size, true)); CHECK(buffer); Isolate* isolate = CcTest::i_isolate(); HandleScope handles(isolate); @@ -1783,10 +1759,8 @@ TEST(SmiOr) { i::V8::Initialize(NULL); // Allocate an executable page of memory. 
size_t actual_size; - byte* buffer = - static_cast<byte*>(OS::Allocate(Assembler::kMinimalBufferSize * 2, - &actual_size, - true)); + byte* buffer = static_cast<byte*>(v8::base::OS::Allocate( + Assembler::kMinimalBufferSize * 2, &actual_size, true)); CHECK(buffer); Isolate* isolate = CcTest::i_isolate(); HandleScope handles(isolate); @@ -1866,10 +1840,8 @@ TEST(SmiXor) { i::V8::Initialize(NULL); // Allocate an executable page of memory. size_t actual_size; - byte* buffer = - static_cast<byte*>(OS::Allocate(Assembler::kMinimalBufferSize * 2, - &actual_size, - true)); + byte* buffer = static_cast<byte*>(v8::base::OS::Allocate( + Assembler::kMinimalBufferSize * 2, &actual_size, true)); CHECK(buffer); Isolate* isolate = CcTest::i_isolate(); HandleScope handles(isolate); @@ -1933,10 +1905,8 @@ TEST(SmiNot) { i::V8::Initialize(NULL); // Allocate an executable page of memory. size_t actual_size; - byte* buffer = - static_cast<byte*>(OS::Allocate(Assembler::kMinimalBufferSize, - &actual_size, - true)); + byte* buffer = static_cast<byte*>(v8::base::OS::Allocate( + Assembler::kMinimalBufferSize, &actual_size, true)); CHECK(buffer); Isolate* isolate = CcTest::i_isolate(); HandleScope handles(isolate); @@ -2029,10 +1999,8 @@ TEST(SmiShiftLeft) { i::V8::Initialize(NULL); // Allocate an executable page of memory. size_t actual_size; - byte* buffer = - static_cast<byte*>(OS::Allocate(Assembler::kMinimalBufferSize * 7, - &actual_size, - true)); + byte* buffer = static_cast<byte*>(v8::base::OS::Allocate( + Assembler::kMinimalBufferSize * 7, &actual_size, true)); CHECK(buffer); Isolate* isolate = CcTest::i_isolate(); HandleScope handles(isolate); @@ -2135,10 +2103,8 @@ TEST(SmiShiftLogicalRight) { i::V8::Initialize(NULL); // Allocate an executable page of memory. size_t actual_size; - byte* buffer = - static_cast<byte*>(OS::Allocate(Assembler::kMinimalBufferSize * 5, - &actual_size, - true)); + byte* buffer = static_cast<byte*>(v8::base::OS::Allocate( + Assembler::kMinimalBufferSize * 5, &actual_size, true)); CHECK(buffer); Isolate* isolate = CcTest::i_isolate(); HandleScope handles(isolate); @@ -2204,10 +2170,8 @@ TEST(SmiShiftArithmeticRight) { i::V8::Initialize(NULL); // Allocate an executable page of memory. size_t actual_size; - byte* buffer = - static_cast<byte*>(OS::Allocate(Assembler::kMinimalBufferSize * 3, - &actual_size, - true)); + byte* buffer = static_cast<byte*>(v8::base::OS::Allocate( + Assembler::kMinimalBufferSize * 3, &actual_size, true)); CHECK(buffer); Isolate* isolate = CcTest::i_isolate(); HandleScope handles(isolate); @@ -2239,7 +2203,7 @@ TEST(SmiShiftArithmeticRight) { void TestPositiveSmiPowerUp(MacroAssembler* masm, Label* exit, int id, int x) { - ASSERT(x >= 0); + DCHECK(x >= 0); int powers[] = { 0, 1, 2, 3, 8, 16, 24, 31 }; int power_count = 8; __ movl(rax, Immediate(id)); @@ -2268,10 +2232,8 @@ TEST(PositiveSmiTimesPowerOfTwoToInteger64) { i::V8::Initialize(NULL); // Allocate an executable page of memory. size_t actual_size; - byte* buffer = - static_cast<byte*>(OS::Allocate(Assembler::kMinimalBufferSize * 4, - &actual_size, - true)); + byte* buffer = static_cast<byte*>(v8::base::OS::Allocate( + Assembler::kMinimalBufferSize * 4, &actual_size, true)); CHECK(buffer); Isolate* isolate = CcTest::i_isolate(); HandleScope handles(isolate); @@ -2311,10 +2273,8 @@ TEST(OperandOffset) { // Allocate an executable page of memory. 
size_t actual_size; - byte* buffer = - static_cast<byte*>(OS::Allocate(Assembler::kMinimalBufferSize * 2, - &actual_size, - true)); + byte* buffer = static_cast<byte*>(v8::base::OS::Allocate( + Assembler::kMinimalBufferSize * 2, &actual_size, true)); CHECK(buffer); Isolate* isolate = CcTest::i_isolate(); HandleScope handles(isolate); @@ -2665,9 +2625,8 @@ TEST(LoadAndStoreWithRepresentation) { // Allocate an executable page of memory. size_t actual_size; - byte* buffer = static_cast<byte*>(OS::Allocate(Assembler::kMinimalBufferSize, - &actual_size, - true)); + byte* buffer = static_cast<byte*>(v8::base::OS::Allocate( + Assembler::kMinimalBufferSize, &actual_size, true)); CHECK(buffer); Isolate* isolate = CcTest::i_isolate(); HandleScope handles(isolate); diff --git a/deps/v8/test/cctest/test-macro-assembler-x87.cc b/deps/v8/test/cctest/test-macro-assembler-x87.cc new file mode 100644 index 00000000000..9aa40c0b104 --- /dev/null +++ b/deps/v8/test/cctest/test-macro-assembler-x87.cc @@ -0,0 +1,150 @@ +// Copyright 2013 the V8 project authors. All rights reserved. +// Redistribution and use in source and binary forms, with or without +// modification, are permitted provided that the following conditions are +// met: +// +// * Redistributions of source code must retain the above copyright +// notice, this list of conditions and the following disclaimer. +// * Redistributions in binary form must reproduce the above +// copyright notice, this list of conditions and the following +// disclaimer in the documentation and/or other materials provided +// with the distribution. +// * Neither the name of Google Inc. nor the names of its +// contributors may be used to endorse or promote products derived +// from this software without specific prior written permission. +// +// THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS +// "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT +// LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR +// A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT +// OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, +// SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT +// LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, +// DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY +// THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT +// (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE +// OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. + +#include <stdlib.h> + +#include "src/v8.h" +#include "test/cctest/cctest.h" + +#include "src/base/platform/platform.h" +#include "src/factory.h" +#include "src/macro-assembler.h" +#include "src/serialize.h" + +using namespace v8::internal; + +#if __GNUC__ +#define STDCALL __attribute__((stdcall)) +#else +#define STDCALL __stdcall +#endif + +typedef int STDCALL F0Type(); +typedef F0Type* F0; + +#define __ masm-> + + +TEST(LoadAndStoreWithRepresentation) { + v8::internal::V8::Initialize(NULL); + + // Allocate an executable page of memory. + size_t actual_size; + byte* buffer = static_cast<byte*>(v8::base::OS::Allocate( + Assembler::kMinimalBufferSize, &actual_size, true)); + CHECK(buffer); + Isolate* isolate = CcTest::i_isolate(); + HandleScope handles(isolate); + MacroAssembler assembler(isolate, buffer, static_cast<int>(actual_size)); + MacroAssembler* masm = &assembler; // Create a pointer for the __ macro. 
+ __ push(ebx); + __ push(edx); + __ sub(esp, Immediate(1 * kPointerSize)); + Label exit; + + // Test 1. + __ mov(eax, Immediate(1)); // Test number. + __ mov(Operand(esp, 0 * kPointerSize), Immediate(0)); + __ mov(ebx, Immediate(-1)); + __ Store(ebx, Operand(esp, 0 * kPointerSize), Representation::UInteger8()); + __ mov(ebx, Operand(esp, 0 * kPointerSize)); + __ mov(edx, Immediate(255)); + __ cmp(ebx, edx); + __ j(not_equal, &exit); + __ Load(ebx, Operand(esp, 0 * kPointerSize), Representation::UInteger8()); + __ cmp(ebx, edx); + __ j(not_equal, &exit); + + + // Test 2. + __ mov(eax, Immediate(2)); // Test number. + __ mov(Operand(esp, 0 * kPointerSize), Immediate(0)); + __ mov(ebx, Immediate(-1)); + __ Store(ebx, Operand(esp, 0 * kPointerSize), Representation::Integer8()); + __ mov(ebx, Operand(esp, 0 * kPointerSize)); + __ mov(edx, Immediate(255)); + __ cmp(ebx, edx); + __ j(not_equal, &exit); + __ Load(ebx, Operand(esp, 0 * kPointerSize), Representation::Integer8()); + __ mov(edx, Immediate(-1)); + __ cmp(ebx, edx); + __ j(not_equal, &exit); + + // Test 3. + __ mov(eax, Immediate(3)); // Test number. + __ mov(Operand(esp, 0 * kPointerSize), Immediate(0)); + __ mov(ebx, Immediate(-1)); + __ Store(ebx, Operand(esp, 0 * kPointerSize), Representation::Integer16()); + __ mov(ebx, Operand(esp, 0 * kPointerSize)); + __ mov(edx, Immediate(65535)); + __ cmp(ebx, edx); + __ j(not_equal, &exit); + __ Load(edx, Operand(esp, 0 * kPointerSize), Representation::Integer16()); + __ mov(ebx, Immediate(-1)); + __ cmp(ebx, edx); + __ j(not_equal, &exit); + + // Test 4. + __ mov(eax, Immediate(4)); // Test number. + __ mov(Operand(esp, 0 * kPointerSize), Immediate(0)); + __ mov(ebx, Immediate(-1)); + __ Store(ebx, Operand(esp, 0 * kPointerSize), Representation::UInteger16()); + __ mov(ebx, Operand(esp, 0 * kPointerSize)); + __ mov(edx, Immediate(65535)); + __ cmp(ebx, edx); + __ j(not_equal, &exit); + __ Load(edx, Operand(esp, 0 * kPointerSize), Representation::UInteger16()); + __ cmp(ebx, edx); + __ j(not_equal, &exit); + + // Test 5. + __ mov(eax, Immediate(5)); + __ Move(edx, Immediate(0)); // Test Move() + __ cmp(edx, Immediate(0)); + __ j(not_equal, &exit); + __ Move(ecx, Immediate(-1)); + __ cmp(ecx, Immediate(-1)); + __ j(not_equal, &exit); + __ Move(ebx, Immediate(0x77)); + __ cmp(ebx, Immediate(0x77)); + __ j(not_equal, &exit); + + __ xor_(eax, eax); // Success. + __ bind(&exit); + __ add(esp, Immediate(1 * kPointerSize)); + __ pop(edx); + __ pop(ebx); + __ ret(0); + + CodeDesc desc; + masm->GetCode(&desc); + // Call the function from C++. 
+ int result = FUNCTION_CAST<F0>(buffer)(); + CHECK_EQ(0, result); +} + +#undef __ diff --git a/deps/v8/test/cctest/test-mark-compact.cc b/deps/v8/test/cctest/test-mark-compact.cc index 5f13bd25ab4..1d4b0d8e7d7 100644 --- a/deps/v8/test/cctest/test-mark-compact.cc +++ b/deps/v8/test/cctest/test-mark-compact.cc @@ -28,21 +28,21 @@ #include <stdlib.h> #ifdef __linux__ -#include <sys/types.h> -#include <sys/stat.h> +#include <errno.h> #include <fcntl.h> +#include <sys/stat.h> +#include <sys/types.h> #include <unistd.h> -#include <errno.h> #endif #include <utility> -#include "v8.h" +#include "src/v8.h" -#include "full-codegen.h" -#include "global-handles.h" -#include "snapshot.h" -#include "cctest.h" +#include "src/full-codegen.h" +#include "src/global-handles.h" +#include "src/snapshot.h" +#include "test/cctest/cctest.h" using namespace v8::internal; @@ -77,7 +77,7 @@ TEST(MarkingDeque) { TEST(Promotion) { CcTest::InitializeVM(); TestHeap* heap = CcTest::test_heap(); - heap->ConfigureHeap(2*256*KB, 1*MB, 1*MB, 0); + heap->ConfigureHeap(1, 1, 1, 0); v8::HandleScope sc(CcTest::isolate()); @@ -92,7 +92,8 @@ TEST(Promotion) { CHECK(heap->InSpace(*array, NEW_SPACE)); // Call mark compact GC, so array becomes an old object. - heap->CollectGarbage(OLD_POINTER_SPACE); + heap->CollectAllGarbage(Heap::kAbortIncrementalMarkingMask); + heap->CollectAllGarbage(Heap::kAbortIncrementalMarkingMask); // Array now sits in the old space CHECK(heap->InSpace(*array, OLD_POINTER_SPACE)); @@ -102,7 +103,7 @@ TEST(Promotion) { TEST(NoPromotion) { CcTest::InitializeVM(); TestHeap* heap = CcTest::test_heap(); - heap->ConfigureHeap(2*256*KB, 1*MB, 1*MB, 0); + heap->ConfigureHeap(1, 1, 1, 0); v8::HandleScope sc(CcTest::isolate()); @@ -156,12 +157,8 @@ TEST(MarkCompactCollector) { { HandleScope scope(isolate); // allocate a garbage Handle<String> func_name = factory->InternalizeUtf8String("theFunction"); - Handle<JSFunction> function = factory->NewFunctionWithPrototype( - func_name, factory->undefined_value()); - Handle<Map> initial_map = factory->NewMap( - JS_OBJECT_TYPE, JSObject::kHeaderSize); - function->set_initial_map(*initial_map); - JSReceiver::SetProperty(global, func_name, function, NONE, SLOPPY).Check(); + Handle<JSFunction> function = factory->NewFunction(func_name); + JSReceiver::SetProperty(global, func_name, function, SLOPPY).Check(); factory->NewJSObject(function); } @@ -170,7 +167,9 @@ TEST(MarkCompactCollector) { { HandleScope scope(isolate); Handle<String> func_name = factory->InternalizeUtf8String("theFunction"); - CHECK(JSReceiver::HasLocalProperty(global, func_name)); + v8::Maybe<bool> maybe = JSReceiver::HasOwnProperty(global, func_name); + CHECK(maybe.has_value); + CHECK(maybe.value); Handle<Object> func_value = Object::GetProperty(global, func_name).ToHandleChecked(); CHECK(func_value->IsJSFunction()); @@ -178,17 +177,19 @@ TEST(MarkCompactCollector) { Handle<JSObject> obj = factory->NewJSObject(function); Handle<String> obj_name = factory->InternalizeUtf8String("theObject"); - JSReceiver::SetProperty(global, obj_name, obj, NONE, SLOPPY).Check(); + JSReceiver::SetProperty(global, obj_name, obj, SLOPPY).Check(); Handle<String> prop_name = factory->InternalizeUtf8String("theSlot"); Handle<Smi> twenty_three(Smi::FromInt(23), isolate); - JSReceiver::SetProperty(obj, prop_name, twenty_three, NONE, SLOPPY).Check(); + JSReceiver::SetProperty(obj, prop_name, twenty_three, SLOPPY).Check(); } heap->CollectGarbage(OLD_POINTER_SPACE, "trigger 5"); { HandleScope scope(isolate); Handle<String> obj_name = 
factory->InternalizeUtf8String("theObject"); - CHECK(JSReceiver::HasLocalProperty(global, obj_name)); + v8::Maybe<bool> maybe = JSReceiver::HasOwnProperty(global, obj_name); + CHECK(maybe.has_value); + CHECK(maybe.value); Handle<Object> object = Object::GetProperty(global, obj_name).ToHandleChecked(); CHECK(object->IsJSObject()); @@ -240,7 +241,7 @@ static void WeakPointerCallback( std::pair<v8::Persistent<v8::Value>*, int>* p = reinterpret_cast<std::pair<v8::Persistent<v8::Value>*, int>*>( data.GetParameter()); - ASSERT_EQ(1234, p->second); + DCHECK_EQ(1234, p->second); NumberOfWeakCalls++; p->first->Reset(); } @@ -365,7 +366,7 @@ class TestRetainedObjectInfo : public v8::RetainedObjectInfo { bool has_been_disposed() { return has_been_disposed_; } virtual void Dispose() { - ASSERT(!has_been_disposed_); + DCHECK(!has_been_disposed_); has_been_disposed_ = true; } @@ -393,7 +394,7 @@ TEST(EmptyObjectGroups) { TestRetainedObjectInfo info; global_handles->AddObjectGroup(NULL, 0, &info); - ASSERT(info.has_been_disposed()); + DCHECK(info.has_been_disposed()); global_handles->AddImplicitReferences( Handle<HeapObject>::cast(object).location(), NULL, 0); diff --git a/deps/v8/test/cctest/test-mementos.cc b/deps/v8/test/cctest/test-mementos.cc index a377b4a4c6f..4c85151b88c 100644 --- a/deps/v8/test/cctest/test-mementos.cc +++ b/deps/v8/test/cctest/test-mementos.cc @@ -25,7 +25,7 @@ // (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE // OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. -#include "cctest.h" +#include "test/cctest/cctest.h" using namespace v8::internal; @@ -89,10 +89,7 @@ TEST(PretenuringCallNew) { Isolate* isolate = CcTest::i_isolate(); Heap* heap = isolate->heap(); - // We need to create several instances to get past the slack-tracking - // phase, where mementos aren't emitted. int call_count = 10; - CHECK_GE(call_count, SharedFunctionInfo::kGenerousAllocationCount); i::ScopedVector<char> test_buf(1024); const char* program = "function f() {" @@ -105,7 +102,7 @@ TEST(PretenuringCallNew) { " a = new f();" "}" "a;"; - i::OS::SNPrintF(test_buf, program, call_count); + i::SNPrintF(test_buf, program, call_count); v8::Local<v8::Value> res = CompileRun(test_buf.start()); Handle<JSObject> o = v8::Utils::OpenHandle(*v8::Handle<v8::Object>::Cast(res)); @@ -117,8 +114,8 @@ TEST(PretenuringCallNew) { CHECK_EQ(memento->map(), heap->allocation_memento_map()); // Furthermore, how many mementos did we create? The count should match - // call_count - SharedFunctionInfo::kGenerousAllocationCount. + // call_count. Note, that mementos are allocated during the inobject slack + // tracking phase. AllocationSite* site = memento->GetAllocationSite(); - CHECK_EQ(call_count - SharedFunctionInfo::kGenerousAllocationCount, - site->pretenure_create_count()->value()); + CHECK_EQ(call_count, site->pretenure_create_count()->value()); } diff --git a/deps/v8/test/cctest/test-microtask-delivery.cc b/deps/v8/test/cctest/test-microtask-delivery.cc index e6f38b79bce..082bc1a3ed9 100644 --- a/deps/v8/test/cctest/test-microtask-delivery.cc +++ b/deps/v8/test/cctest/test-microtask-delivery.cc @@ -25,9 +25,9 @@ // (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE // OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. 
-#include "v8.h" +#include "src/v8.h" -#include "cctest.h" +#include "test/cctest/cctest.h" using namespace v8; namespace i = v8::internal; diff --git a/deps/v8/test/cctest/test-mutex.cc b/deps/v8/test/cctest/test-mutex.cc deleted file mode 100644 index cdc829f1562..00000000000 --- a/deps/v8/test/cctest/test-mutex.cc +++ /dev/null @@ -1,118 +0,0 @@ -// Copyright 2013 the V8 project authors. All rights reserved. -// Redistribution and use in source and binary forms, with or without -// modification, are permitted provided that the following conditions are -// met: -// -// * Redistributions of source code must retain the above copyright -// notice, this list of conditions and the following disclaimer. -// * Redistributions in binary form must reproduce the above -// copyright notice, this list of conditions and the following -// disclaimer in the documentation and/or other materials provided -// with the distribution. -// * Neither the name of Google Inc. nor the names of its -// contributors may be used to endorse or promote products derived -// from this software without specific prior written permission. -// -// THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS -// "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT -// LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR -// A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT -// OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, -// SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT -// LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, -// DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY -// THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT -// (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE -// OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. 
- -#include <cstdlib> - -#include "v8.h" - -#include "cctest.h" -#include "platform/mutex.h" - -using namespace ::v8::internal; - - -TEST(LockGuardMutex) { - Mutex mutex; - { LockGuard<Mutex> lock_guard(&mutex); - } - { LockGuard<Mutex> lock_guard(&mutex); - } -} - - -TEST(LockGuardRecursiveMutex) { - RecursiveMutex recursive_mutex; - { LockGuard<RecursiveMutex> lock_guard(&recursive_mutex); - } - { LockGuard<RecursiveMutex> lock_guard1(&recursive_mutex); - LockGuard<RecursiveMutex> lock_guard2(&recursive_mutex); - } -} - - -TEST(LockGuardLazyMutex) { - LazyMutex lazy_mutex = LAZY_MUTEX_INITIALIZER; - { LockGuard<Mutex> lock_guard(lazy_mutex.Pointer()); - } - { LockGuard<Mutex> lock_guard(lazy_mutex.Pointer()); - } -} - - -TEST(LockGuardLazyRecursiveMutex) { - LazyRecursiveMutex lazy_recursive_mutex = LAZY_RECURSIVE_MUTEX_INITIALIZER; - { LockGuard<RecursiveMutex> lock_guard(lazy_recursive_mutex.Pointer()); - } - { LockGuard<RecursiveMutex> lock_guard1(lazy_recursive_mutex.Pointer()); - LockGuard<RecursiveMutex> lock_guard2(lazy_recursive_mutex.Pointer()); - } -} - - -TEST(MultipleMutexes) { - Mutex mutex1; - Mutex mutex2; - Mutex mutex3; - // Order 1 - mutex1.Lock(); - mutex2.Lock(); - mutex3.Lock(); - mutex1.Unlock(); - mutex2.Unlock(); - mutex3.Unlock(); - // Order 2 - mutex1.Lock(); - mutex2.Lock(); - mutex3.Lock(); - mutex3.Unlock(); - mutex2.Unlock(); - mutex1.Unlock(); -} - - -TEST(MultipleRecursiveMutexes) { - RecursiveMutex recursive_mutex1; - RecursiveMutex recursive_mutex2; - // Order 1 - recursive_mutex1.Lock(); - recursive_mutex2.Lock(); - CHECK(recursive_mutex1.TryLock()); - CHECK(recursive_mutex2.TryLock()); - recursive_mutex1.Unlock(); - recursive_mutex1.Unlock(); - recursive_mutex2.Unlock(); - recursive_mutex2.Unlock(); - // Order 2 - recursive_mutex1.Lock(); - CHECK(recursive_mutex1.TryLock()); - recursive_mutex2.Lock(); - CHECK(recursive_mutex2.TryLock()); - recursive_mutex2.Unlock(); - recursive_mutex1.Unlock(); - recursive_mutex2.Unlock(); - recursive_mutex1.Unlock(); -} diff --git a/deps/v8/test/cctest/test-object-observe.cc b/deps/v8/test/cctest/test-object-observe.cc index a7b346fc6fe..679569e27c1 100644 --- a/deps/v8/test/cctest/test-object-observe.cc +++ b/deps/v8/test/cctest/test-object-observe.cc @@ -25,9 +25,9 @@ // (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE // OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. 
-#include "v8.h" +#include "src/v8.h" -#include "cctest.h" +#include "test/cctest/cctest.h" using namespace v8; namespace i = v8::internal; @@ -275,8 +275,8 @@ TEST(APITestBasicMutation) { // Setting an indexed element via the property setting method obj->Set(Number::New(v8_isolate, 1), Number::New(v8_isolate, 5)); // Setting with a non-String, non-uint32 key - obj->Set(Number::New(v8_isolate, 1.1), - Number::New(v8_isolate, 6), DontDelete); + obj->ForceSet(Number::New(v8_isolate, 1.1), Number::New(v8_isolate, 6), + DontDelete); obj->Delete(String::NewFromUtf8(v8_isolate, "foo")); obj->Delete(1); obj->ForceDelete(Number::New(v8_isolate, 1.1)); @@ -616,7 +616,6 @@ TEST(GetNotifierFromSameOrigin) { static int GetGlobalObjectsCount() { - CcTest::heap()->EnsureHeapIsIterable(); int count = 0; i::HeapIterator it(CcTest::heap()); for (i::HeapObject* object = it.next(); object != NULL; object = it.next()) @@ -659,7 +658,7 @@ TEST(DontLeakContextOnObserve) { "Object.unobserve(obj, observer);"); } - v8::V8::ContextDisposedNotification(); + CcTest::isolate()->ContextDisposedNotification(); CheckSurvivingGlobalObjectsCount(1); } @@ -680,7 +679,7 @@ TEST(DontLeakContextOnGetNotifier) { CompileRun("Object.getNotifier(obj);"); } - v8::V8::ContextDisposedNotification(); + CcTest::isolate()->ContextDisposedNotification(); CheckSurvivingGlobalObjectsCount(1); } @@ -707,6 +706,6 @@ TEST(DontLeakContextOnNotifierPerformChange) { "notifier, 'foo', function(){})"); } - v8::V8::ContextDisposedNotification(); + CcTest::isolate()->ContextDisposedNotification(); CheckSurvivingGlobalObjectsCount(1); } diff --git a/deps/v8/test/cctest/test-ordered-hash-table.cc b/deps/v8/test/cctest/test-ordered-hash-table.cc index 48a457f5eec..bb1e0145b53 100644 --- a/deps/v8/test/cctest/test-ordered-hash-table.cc +++ b/deps/v8/test/cctest/test-ordered-hash-table.cc @@ -27,34 +27,17 @@ #include <stdlib.h> -#include "v8.h" +#include "src/v8.h" -#include "cctest.h" -#include "factory.h" +#include "src/factory.h" +#include "test/cctest/cctest.h" namespace { using namespace v8::internal; -void CheckIterResultObject(Isolate* isolate, - Handle<JSObject> result, - Handle<Object> value, - bool done) { - Handle<Object> value_object = - Object::GetProperty(isolate, result, "value").ToHandleChecked(); - Handle<Object> done_object = - Object::GetProperty(isolate, result, "done").ToHandleChecked(); - - CHECK_EQ(*value_object, *value); - CHECK(done_object->IsBoolean()); - CHECK_EQ(done_object->BooleanValue(), done); -} - - TEST(Set) { - i::FLAG_harmony_collections = true; - LocalContext context; Isolate* isolate = CcTest::i_isolate(); Factory* factory = isolate->factory(); @@ -64,21 +47,22 @@ TEST(Set) { CHECK_EQ(0, ordered_set->NumberOfElements()); CHECK_EQ(0, ordered_set->NumberOfDeletedElements()); - Handle<JSSetIterator> value_iterator = - JSSetIterator::Create(ordered_set, JSSetIterator::kKindValues); - Handle<JSSetIterator> value_iterator_2 = - JSSetIterator::Create(ordered_set, JSSetIterator::kKindValues); - Handle<Map> map = factory->NewMap(JS_OBJECT_TYPE, JSObject::kHeaderSize); Handle<JSObject> obj = factory->NewJSObjectFromMap(map); CHECK(!ordered_set->Contains(obj)); ordered_set = OrderedHashSet::Add(ordered_set, obj); CHECK_EQ(1, ordered_set->NumberOfElements()); CHECK(ordered_set->Contains(obj)); - ordered_set = OrderedHashSet::Remove(ordered_set, obj); + bool was_present = false; + ordered_set = OrderedHashSet::Remove(ordered_set, obj, &was_present); + CHECK(was_present); CHECK_EQ(0, ordered_set->NumberOfElements()); 
CHECK(!ordered_set->Contains(obj)); + // Removing a not-present object should set was_present to false. + ordered_set = OrderedHashSet::Remove(ordered_set, obj, &was_present); + CHECK(!was_present); + // Test for collisions/chaining Handle<JSObject> obj1 = factory->NewJSObjectFromMap(map); ordered_set = OrderedHashSet::Add(ordered_set, obj1); @@ -91,18 +75,6 @@ TEST(Set) { CHECK(ordered_set->Contains(obj2)); CHECK(ordered_set->Contains(obj3)); - // Test iteration - CheckIterResultObject( - isolate, JSSetIterator::Next(value_iterator), obj1, false); - CheckIterResultObject( - isolate, JSSetIterator::Next(value_iterator), obj2, false); - CheckIterResultObject( - isolate, JSSetIterator::Next(value_iterator), obj3, false); - CheckIterResultObject(isolate, - JSSetIterator::Next(value_iterator), - factory->undefined_value(), - true); - // Test growth ordered_set = OrderedHashSet::Add(ordered_set, obj); Handle<JSObject> obj4 = factory->NewJSObjectFromMap(map); @@ -116,35 +88,21 @@ TEST(Set) { CHECK_EQ(0, ordered_set->NumberOfDeletedElements()); CHECK_EQ(4, ordered_set->NumberOfBuckets()); - // Test iteration after growth - CheckIterResultObject( - isolate, JSSetIterator::Next(value_iterator_2), obj1, false); - CheckIterResultObject( - isolate, JSSetIterator::Next(value_iterator_2), obj2, false); - CheckIterResultObject( - isolate, JSSetIterator::Next(value_iterator_2), obj3, false); - CheckIterResultObject( - isolate, JSSetIterator::Next(value_iterator_2), obj, false); - CheckIterResultObject( - isolate, JSSetIterator::Next(value_iterator_2), obj4, false); - CheckIterResultObject(isolate, - JSSetIterator::Next(value_iterator_2), - factory->undefined_value(), - true); - // Test shrinking - ordered_set = OrderedHashSet::Remove(ordered_set, obj); - ordered_set = OrderedHashSet::Remove(ordered_set, obj1); - ordered_set = OrderedHashSet::Remove(ordered_set, obj2); - ordered_set = OrderedHashSet::Remove(ordered_set, obj3); + ordered_set = OrderedHashSet::Remove(ordered_set, obj, &was_present); + CHECK(was_present); + ordered_set = OrderedHashSet::Remove(ordered_set, obj1, &was_present); + CHECK(was_present); + ordered_set = OrderedHashSet::Remove(ordered_set, obj2, &was_present); + CHECK(was_present); + ordered_set = OrderedHashSet::Remove(ordered_set, obj3, &was_present); + CHECK(was_present); CHECK_EQ(1, ordered_set->NumberOfElements()); CHECK_EQ(2, ordered_set->NumberOfBuckets()); } TEST(Map) { - i::FLAG_harmony_collections = true; - LocalContext context; Isolate* isolate = CcTest::i_isolate(); Factory* factory = isolate->factory(); @@ -154,11 +112,6 @@ TEST(Map) { CHECK_EQ(0, ordered_map->NumberOfElements()); CHECK_EQ(0, ordered_map->NumberOfDeletedElements()); - Handle<JSMapIterator> value_iterator = - JSMapIterator::Create(ordered_map, JSMapIterator::kKindValues); - Handle<JSMapIterator> key_iterator = - JSMapIterator::Create(ordered_map, JSMapIterator::kKindKeys); - Handle<Map> map = factory->NewMap(JS_OBJECT_TYPE, JSObject::kHeaderSize); Handle<JSObject> obj = factory->NewJSObjectFromMap(map); Handle<JSObject> val = factory->NewJSObjectFromMap(map); @@ -166,8 +119,9 @@ TEST(Map) { ordered_map = OrderedHashMap::Put(ordered_map, obj, val); CHECK_EQ(1, ordered_map->NumberOfElements()); CHECK(ordered_map->Lookup(obj)->SameValue(*val)); - ordered_map = OrderedHashMap::Put( - ordered_map, obj, factory->the_hole_value()); + bool was_present = false; + ordered_map = OrderedHashMap::Remove(ordered_map, obj, &was_present); + CHECK(was_present); CHECK_EQ(0, ordered_map->NumberOfElements()); 
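// ---------------------------------------------------------------------------
// Editorial aside (not part of this commit): the bucket-count assertions in
// these tests (4 buckets after growth, 2 after shrinking) follow the usual
// load-factor scheme for open hash tables. A toy sketch of such a policy --
// the thresholds are assumptions chosen to match the shape of the checks,
// not V8's actual tuning constants:
static int NextBucketCount(int buckets, int elements) {
  if (elements > buckets * 2) return buckets * 2;  // grow when overcrowded
  if (buckets > 2 && elements < buckets / 2) return buckets / 2;  // shrink
  return buckets;  // otherwise keep the current capacity
}
// ---------------------------------------------------------------------------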
CHECK(ordered_map->Lookup(obj)->IsTheHole()); @@ -186,18 +140,6 @@ TEST(Map) { CHECK(ordered_map->Lookup(obj2)->SameValue(*val2)); CHECK(ordered_map->Lookup(obj3)->SameValue(*val3)); - // Test iteration - CheckIterResultObject( - isolate, JSMapIterator::Next(value_iterator), val1, false); - CheckIterResultObject( - isolate, JSMapIterator::Next(value_iterator), val2, false); - CheckIterResultObject( - isolate, JSMapIterator::Next(value_iterator), val3, false); - CheckIterResultObject(isolate, - JSMapIterator::Next(value_iterator), - factory->undefined_value(), - true); - // Test growth ordered_map = OrderedHashMap::Put(ordered_map, obj, val); Handle<JSObject> obj4 = factory->NewJSObjectFromMap(map); @@ -211,31 +153,15 @@ TEST(Map) { CHECK_EQ(5, ordered_map->NumberOfElements()); CHECK_EQ(4, ordered_map->NumberOfBuckets()); - // Test iteration after growth - CheckIterResultObject( - isolate, JSMapIterator::Next(key_iterator), obj1, false); - CheckIterResultObject( - isolate, JSMapIterator::Next(key_iterator), obj2, false); - CheckIterResultObject( - isolate, JSMapIterator::Next(key_iterator), obj3, false); - CheckIterResultObject( - isolate, JSMapIterator::Next(key_iterator), obj, false); - CheckIterResultObject( - isolate, JSMapIterator::Next(key_iterator), obj4, false); - CheckIterResultObject(isolate, - JSMapIterator::Next(key_iterator), - factory->undefined_value(), - true); - // Test shrinking - ordered_map = OrderedHashMap::Put( - ordered_map, obj, factory->the_hole_value()); - ordered_map = OrderedHashMap::Put( - ordered_map, obj1, factory->the_hole_value()); - ordered_map = OrderedHashMap::Put( - ordered_map, obj2, factory->the_hole_value()); - ordered_map = OrderedHashMap::Put( - ordered_map, obj3, factory->the_hole_value()); + ordered_map = OrderedHashMap::Remove(ordered_map, obj, &was_present); + CHECK(was_present); + ordered_map = OrderedHashMap::Remove(ordered_map, obj1, &was_present); + CHECK(was_present); + ordered_map = OrderedHashMap::Remove(ordered_map, obj2, &was_present); + CHECK(was_present); + ordered_map = OrderedHashMap::Remove(ordered_map, obj3, &was_present); + CHECK(was_present); CHECK_EQ(1, ordered_map->NumberOfElements()); CHECK_EQ(2, ordered_map->NumberOfBuckets()); } diff --git a/deps/v8/test/cctest/test-ostreams.cc b/deps/v8/test/cctest/test-ostreams.cc new file mode 100644 index 00000000000..c83f96d4665 --- /dev/null +++ b/deps/v8/test/cctest/test-ostreams.cc @@ -0,0 +1,148 @@ +// Copyright 2014 the V8 project authors. All rights reserved. +// Use of this source code is governed by a BSD-style license that can be +// found in the LICENSE file. + +#include <string.h> +#include <limits> + +#include "include/v8stdint.h" +#include "src/ostreams.h" +#include "test/cctest/cctest.h" + +using namespace v8::internal; + + +TEST(OStringStreamConstructor) { + OStringStream oss; + const size_t expected_size = 0; + CHECK(expected_size == oss.size()); + CHECK_GT(oss.capacity(), 0); + CHECK_NE(NULL, oss.data()); + CHECK_EQ("", oss.c_str()); +} + + +#define TEST_STRING \ + "Ash nazg durbatuluk, " \ + "ash nazg gimbatul, " \ + "ash nazg thrakatuluk, " \ + "agh burzum-ishi krimpatul." 
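+// ---------------------------------------------------------------------------
+// Editorial aside (not part of this commit): OStringStreamGrow below keeps
+// writing TEST_STRING until the internal buffer must reallocate. The same
+// growth check expressed against the standard library's std::ostringstream,
+// as a self-contained reference point (std:: names only, no V8 API):
+// #include <cassert>
+// #include <sstream>
+// #include <string>
+//
+// static void GrowWithStdOstringstream() {
+//   const std::string chunk =
+//       "Ash nazg durbatuluk, ash nazg gimbatul, "
+//       "ash nazg thrakatuluk, agh burzum-ishi krimpatul.";
+//   std::ostringstream oss;
+//   for (int i = 0; i < 30; ++i) oss << chunk;  // force repeated growth
+//   assert(oss.str().size() == chunk.size() * 30);
+// }
+// ---------------------------------------------------------------------------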
+ +TEST(OStringStreamGrow) { + OStringStream oss; + const int repeat = 30; + size_t len = strlen(TEST_STRING); + for (int i = 0; i < repeat; ++i) { + oss.write(TEST_STRING, len); + } + const char* expected = + TEST_STRING TEST_STRING TEST_STRING TEST_STRING TEST_STRING + TEST_STRING TEST_STRING TEST_STRING TEST_STRING TEST_STRING + TEST_STRING TEST_STRING TEST_STRING TEST_STRING TEST_STRING + TEST_STRING TEST_STRING TEST_STRING TEST_STRING TEST_STRING + TEST_STRING TEST_STRING TEST_STRING TEST_STRING TEST_STRING + TEST_STRING TEST_STRING TEST_STRING TEST_STRING TEST_STRING; + const size_t expected_len = len * repeat; + CHECK(expected_len == oss.size()); + CHECK_GT(oss.capacity(), 0); + CHECK_EQ(0, strncmp(expected, oss.data(), expected_len)); + CHECK_EQ(expected, oss.c_str()); +} + + +template <class T> +static void check(const char* expected, T value) { + OStringStream oss; + oss << value << " " << hex << value; + CHECK_EQ(expected, oss.c_str()); +} + + +TEST(NumericFormatting) { + check<bool>("0 0", false); + check<bool>("1 1", true); + + check<int16_t>("-12345 cfc7", -12345); + check<int16_t>("-32768 8000", std::numeric_limits<int16_t>::min()); + check<int16_t>("32767 7fff", std::numeric_limits<int16_t>::max()); + + check<uint16_t>("34567 8707", 34567); + check<uint16_t>("0 0", std::numeric_limits<uint16_t>::min()); + check<uint16_t>("65535 ffff", std::numeric_limits<uint16_t>::max()); + + check<int32_t>("-1234567 ffed2979", -1234567); + check<int32_t>("-2147483648 80000000", std::numeric_limits<int32_t>::min()); + check<int32_t>("2147483647 7fffffff", std::numeric_limits<int32_t>::max()); + + check<uint32_t>("3456789 34bf15", 3456789); + check<uint32_t>("0 0", std::numeric_limits<uint32_t>::min()); + check<uint32_t>("4294967295 ffffffff", std::numeric_limits<uint32_t>::max()); + + check<int64_t>("-1234567 ffffffffffed2979", -1234567); + check<int64_t>("-9223372036854775808 8000000000000000", + std::numeric_limits<int64_t>::min()); + check<int64_t>("9223372036854775807 7fffffffffffffff", + std::numeric_limits<int64_t>::max()); + + check<uint64_t>("3456789 34bf15", 3456789); + check<uint64_t>("0 0", std::numeric_limits<uint64_t>::min()); + check<uint64_t>("18446744073709551615 ffffffffffffffff", + std::numeric_limits<uint64_t>::max()); + + check<float>("0 0", 0.0f); + check<float>("123 123", 123.0f); + check<float>("-0.5 -0.5", -0.5f); + check<float>("1.25 1.25", 1.25f); + check<float>("0.0625 0.0625", 6.25e-2f); + + check<double>("0 0", 0.0); + check<double>("123 123", 123.0); + check<double>("-0.5 -0.5", -0.5); + check<double>("1.25 1.25", 1.25); + check<double>("0.0625 0.0625", 6.25e-2); +} + + +TEST(CharacterOutput) { + check<char>("a a", 'a'); + check<signed char>("B B", 'B'); + check<unsigned char>("9 9", '9'); + check<const char*>("bye bye", "bye"); + + OStringStream os; + os.put('H').write("ello", 4); + CHECK_EQ("Hello", os.c_str()); +} + + +TEST(Manipulators) { + OStringStream os; + os << 123 << hex << 123 << endl << 123 << dec << 123 << 123; + CHECK_EQ("1237b\n7b123123", os.c_str()); +} + + +class MiscStuff { + public: + MiscStuff(int i, double d, const char* s) : i_(i), d_(d), s_(s) { } + + private: + friend OStream& operator<<(OStream& os, const MiscStuff& m); + + int i_; + double d_; + const char* s_; +}; + + +OStream& operator<<(OStream& os, const MiscStuff& m) { + return os << "{i:" << m.i_ << ", d:" << m.d_ << ", s:'" << m.s_ << "'}"; +} + + +TEST(CustomOutput) { + OStringStream os; + MiscStuff m(123, 4.5, "Hurz!"); + os << m; + CHECK_EQ("{i:123, d:4.5, s:'Hurz!'}", 
os.c_str()); +} diff --git a/deps/v8/test/cctest/test-parsing.cc b/deps/v8/test/cctest/test-parsing.cc index 58734d054e2..9cb5d69e6c5 100644 --- a/deps/v8/test/cctest/test-parsing.cc +++ b/deps/v8/test/cctest/test-parsing.cc @@ -25,22 +25,25 @@ // (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE // OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. -#include <stdlib.h> #include <stdio.h> +#include <stdlib.h> #include <string.h> -#include "v8.h" +#include "src/v8.h" + +#include "src/ast-value-factory.h" +#include "src/compiler.h" +#include "src/execution.h" +#include "src/isolate.h" +#include "src/objects.h" +#include "src/parser.h" +#include "src/preparser.h" +#include "src/rewriter.h" +#include "src/scanner-character-streams.h" +#include "src/token.h" +#include "src/utils.h" -#include "cctest.h" -#include "compiler.h" -#include "execution.h" -#include "isolate.h" -#include "objects.h" -#include "parser.h" -#include "preparser.h" -#include "scanner-character-streams.h" -#include "token.h" -#include "utils.h" +#include "test/cctest/cctest.h" TEST(ScanKeywords) { struct KeywordToken { @@ -84,7 +87,7 @@ TEST(ScanKeywords) { // Adding characters will make keyword matching fail. static const char chars_to_append[] = { 'z', '0', '_' }; for (int j = 0; j < static_cast<int>(ARRAY_SIZE(chars_to_append)); ++j) { - i::OS::MemMove(buffer, keyword, length); + i::MemMove(buffer, keyword, length); buffer[length] = chars_to_append[j]; i::Utf8ToUtf16CharacterStream stream(buffer, length + 1); i::Scanner scanner(&unicode_cache); @@ -94,7 +97,7 @@ TEST(ScanKeywords) { } // Replacing characters will make keyword matching fail. { - i::OS::MemMove(buffer, keyword, length); + i::MemMove(buffer, keyword, length); buffer[length - 1] = '_'; i::Utf8ToUtf16CharacterStream stream(buffer, length); i::Scanner scanner(&unicode_cache); @@ -141,9 +144,8 @@ TEST(ScanHTMLEndComments) { }; // Parser/Scanner needs a stack limit. - int marker; - CcTest::i_isolate()->stack_guard()->SetStackLimit( - reinterpret_cast<uintptr_t>(&marker) - 128 * 1024); + CcTest::i_isolate()->stack_guard()->SetStackLimit(GetCurrentStackPosition() - + 128 * 1024); uintptr_t stack_limit = CcTest::i_isolate()->stack_guard()->real_climit(); for (int i = 0; tests[i]; i++) { const i::byte* source = @@ -156,8 +158,7 @@ TEST(ScanHTMLEndComments) { preparser.set_allow_lazy(true); i::PreParser::PreParseResult result = preparser.PreParseProgram(); CHECK_EQ(i::PreParser::kPreParseSuccess, result); - i::ScriptData data(log.ExtractData()); - CHECK(!data.has_error()); + CHECK(!log.HasError()); } for (int i = 0; fail_tests[i]; i++) { @@ -172,8 +173,7 @@ TEST(ScanHTMLEndComments) { i::PreParser::PreParseResult result = preparser.PreParseProgram(); // Even in the case of a syntax error, kPreParseSuccess is returned. CHECK_EQ(i::PreParser::kPreParseSuccess, result); - i::ScriptData data(log.ExtractData()); - CHECK(data.has_error()); + CHECK(log.HasError()); } } @@ -197,9 +197,8 @@ TEST(UsingCachedData) { v8::HandleScope handles(isolate); v8::Local<v8::Context> context = v8::Context::New(isolate); v8::Context::Scope context_scope(context); - int marker; - CcTest::i_isolate()->stack_guard()->SetStackLimit( - reinterpret_cast<uintptr_t>(&marker) - 128 * 1024); + CcTest::i_isolate()->stack_guard()->SetStackLimit(GetCurrentStackPosition() - + 128 * 1024); // Source containing functions that might be lazily compiled and all types // of symbols (string, propertyName, regexp). 
@@ -213,23 +212,28 @@ TEST(UsingCachedData) { "var v = /RegExp Literal/;" "var w = /RegExp Literal\\u0020With Escape/gin;" "var y = { get getter() { return 42; }, " - " set setter(v) { this.value = v; }};"; + " set setter(v) { this.value = v; }};" + "var f = a => function (b) { return a + b; };" + "var g = a => b => a + b;"; int source_length = i::StrLength(source); // ScriptResource will be deleted when the corresponding String is GCd. v8::ScriptCompiler::Source script_source(v8::String::NewExternal( isolate, new ScriptResource(source, source_length))); + i::FLAG_harmony_arrow_functions = true; i::FLAG_min_preparse_length = 0; v8::ScriptCompiler::Compile(isolate, &script_source, - v8::ScriptCompiler::kProduceDataToCache); + v8::ScriptCompiler::kProduceParserCache); CHECK(script_source.GetCachedData()); // Compile the script again, using the cached data. bool lazy_flag = i::FLAG_lazy; i::FLAG_lazy = true; - v8::ScriptCompiler::Compile(isolate, &script_source); + v8::ScriptCompiler::Compile(isolate, &script_source, + v8::ScriptCompiler::kConsumeParserCache); i::FLAG_lazy = false; - v8::ScriptCompiler::CompileUnbound(isolate, &script_source); + v8::ScriptCompiler::CompileUnbound(isolate, &script_source, + v8::ScriptCompiler::kConsumeParserCache); i::FLAG_lazy = lazy_flag; } @@ -240,49 +244,55 @@ TEST(PreparseFunctionDataIsUsed) { // Make preparsing work for short scripts. i::FLAG_min_preparse_length = 0; + i::FLAG_harmony_arrow_functions = true; v8::Isolate* isolate = CcTest::isolate(); v8::HandleScope handles(isolate); v8::Local<v8::Context> context = v8::Context::New(isolate); v8::Context::Scope context_scope(context); - int marker; - CcTest::i_isolate()->stack_guard()->SetStackLimit( - reinterpret_cast<uintptr_t>(&marker) - 128 * 1024); + CcTest::i_isolate()->stack_guard()->SetStackLimit(GetCurrentStackPosition() - + 128 * 1024); - const char* good_code = - "function this_is_lazy() { var a; } function foo() { return 25; } foo();"; + const char* good_code[] = { + "function this_is_lazy() { var a; } function foo() { return 25; } foo();", + "var this_is_lazy = () => { var a; }; var foo = () => 25; foo();", + }; // Insert a syntax error inside the lazy function. - const char* bad_code = - "function this_is_lazy() { if ( } function foo() { return 25; } foo();"; - - v8::ScriptCompiler::Source good_source(v8_str(good_code)); - v8::ScriptCompiler::Compile(isolate, &good_source, - v8::ScriptCompiler::kProduceDataToCache); - - const v8::ScriptCompiler::CachedData* cached_data = - good_source.GetCachedData(); - CHECK(cached_data->data != NULL); - CHECK_GT(cached_data->length, 0); - - // Now compile the erroneous code with the good preparse data. If the preparse - // data is used, the lazy function is skipped and it should compile fine. 
- v8::ScriptCompiler::Source bad_source( - v8_str(bad_code), new v8::ScriptCompiler::CachedData( - cached_data->data, cached_data->length)); - v8::Local<v8::Value> result = - v8::ScriptCompiler::Compile(isolate, &bad_source)->Run(); - CHECK(result->IsInt32()); - CHECK_EQ(25, result->Int32Value()); + const char* bad_code[] = { + "function this_is_lazy() { if ( } function foo() { return 25; } foo();", + "var this_is_lazy = () => { if ( }; var foo = () => 25; foo();", + }; + + for (unsigned i = 0; i < ARRAY_SIZE(good_code); i++) { + v8::ScriptCompiler::Source good_source(v8_str(good_code[i])); + v8::ScriptCompiler::Compile(isolate, &good_source, + v8::ScriptCompiler::kProduceDataToCache); + + const v8::ScriptCompiler::CachedData* cached_data = + good_source.GetCachedData(); + CHECK(cached_data->data != NULL); + CHECK_GT(cached_data->length, 0); + + // Now compile the erroneous code with the good preparse data. If the + // preparse data is used, the lazy function is skipped and it should + // compile fine. + v8::ScriptCompiler::Source bad_source( + v8_str(bad_code[i]), new v8::ScriptCompiler::CachedData( + cached_data->data, cached_data->length)); + v8::Local<v8::Value> result = + v8::ScriptCompiler::Compile(isolate, &bad_source)->Run(); + CHECK(result->IsInt32()); + CHECK_EQ(25, result->Int32Value()); + } } TEST(StandAlonePreParser) { v8::V8::Initialize(); - int marker; - CcTest::i_isolate()->stack_guard()->SetStackLimit( - reinterpret_cast<uintptr_t>(&marker) - 128 * 1024); + CcTest::i_isolate()->stack_guard()->SetStackLimit(GetCurrentStackPosition() - + 128 * 1024); const char* programs[] = { "{label: 42}", @@ -290,6 +300,7 @@ TEST(StandAlonePreParser) { "function foo(x, y) { return x + y; }", "%ArgleBargle(glop);", "var x = new new Function('this.x = 42');", + "var f = (x, y) => x + y;", NULL }; @@ -306,10 +317,10 @@ TEST(StandAlonePreParser) { i::PreParser preparser(&scanner, &log, stack_limit); preparser.set_allow_lazy(true); preparser.set_allow_natives_syntax(true); + preparser.set_allow_arrow_functions(true); i::PreParser::PreParseResult result = preparser.PreParseProgram(); CHECK_EQ(i::PreParser::kPreParseSuccess, result); - i::ScriptData data(log.ExtractData()); - CHECK(!data.has_error()); + CHECK(!log.HasError()); } } @@ -317,9 +328,8 @@ TEST(StandAlonePreParser) { TEST(StandAlonePreParserNoNatives) { v8::V8::Initialize(); - int marker; - CcTest::i_isolate()->stack_guard()->SetStackLimit( - reinterpret_cast<uintptr_t>(&marker) - 128 * 1024); + CcTest::i_isolate()->stack_guard()->SetStackLimit(GetCurrentStackPosition() - + 128 * 1024); const char* programs[] = { "%ArgleBargle(glop);", @@ -342,9 +352,7 @@ TEST(StandAlonePreParserNoNatives) { preparser.set_allow_lazy(true); i::PreParser::PreParseResult result = preparser.PreParseProgram(); CHECK_EQ(i::PreParser::kPreParseSuccess, result); - i::ScriptData data(log.ExtractData()); - // Data contains syntax error. 
- CHECK(data.has_error()); + CHECK(log.HasError()); } } @@ -356,13 +364,12 @@ TEST(PreparsingObjectLiterals) { v8::HandleScope handles(isolate); v8::Local<v8::Context> context = v8::Context::New(isolate); v8::Context::Scope context_scope(context); - int marker; - CcTest::i_isolate()->stack_guard()->SetStackLimit( - reinterpret_cast<uintptr_t>(&marker) - 128 * 1024); + CcTest::i_isolate()->stack_guard()->SetStackLimit(GetCurrentStackPosition() - + 128 * 1024); { const char* source = "var myo = {if: \"foo\"}; myo.if;"; - v8::Local<v8::Value> result = PreCompileCompileRun(source); + v8::Local<v8::Value> result = ParserCacheCompileRun(source); CHECK(result->IsString()); v8::String::Utf8Value utf8(result); CHECK_EQ("foo", *utf8); @@ -370,7 +377,7 @@ TEST(PreparsingObjectLiterals) { { const char* source = "var myo = {\"bar\": \"foo\"}; myo[\"bar\"];"; - v8::Local<v8::Value> result = PreCompileCompileRun(source); + v8::Local<v8::Value> result = ParserCacheCompileRun(source); CHECK(result->IsString()); v8::String::Utf8Value utf8(result); CHECK_EQ("foo", *utf8); @@ -378,7 +385,7 @@ TEST(PreparsingObjectLiterals) { { const char* source = "var myo = {1: \"foo\"}; myo[1];"; - v8::Local<v8::Value> result = PreCompileCompileRun(source); + v8::Local<v8::Value> result = ParserCacheCompileRun(source); CHECK(result->IsString()); v8::String::Utf8Value utf8(result); CHECK_EQ("foo", *utf8); @@ -390,9 +397,7 @@ TEST(RegressChromium62639) { v8::V8::Initialize(); i::Isolate* isolate = CcTest::i_isolate(); - int marker; - isolate->stack_guard()->SetStackLimit( - reinterpret_cast<uintptr_t>(&marker) - 128 * 1024); + isolate->stack_guard()->SetStackLimit(GetCurrentStackPosition() - 128 * 1024); const char* program = "var x = 'something';\n" "escape: function() {}"; @@ -413,8 +418,7 @@ TEST(RegressChromium62639) { i::PreParser::PreParseResult result = preparser.PreParseProgram(); // Even in the case of a syntax error, kPreParseSuccess is returned. CHECK_EQ(i::PreParser::kPreParseSuccess, result); - i::ScriptData data(log.ExtractData()); - CHECK(data.has_error()); + CHECK(log.HasError()); } @@ -427,9 +431,7 @@ TEST(Regress928) { // as with-content, which made it assume that a function inside // the block could be lazily compiled, and an extra, unexpected, // entry was added to the data. 
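// ---------------------------------------------------------------------------
// Editorial aside (not part of this commit): every test in this file that
// used the "int marker" trick is rewritten to call GetCurrentStackPosition()
// instead; both derive an approximate stack limit from the address of a
// local on the current frame. A hypothetical standalone equivalent of the
// old pattern, with the 128 KB headroom used throughout these tests:
#include <cstdint>

static uintptr_t ApproximateStackPosition() {
  int marker = 0;  // any local variable lives on the current stack frame
  return reinterpret_cast<uintptr_t>(&marker);
}

// stack_guard->SetStackLimit(ApproximateStackPosition() - 128 * 1024) would
// then mirror what the tests do (assuming a downward-growing stack).
// ---------------------------------------------------------------------------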
- int marker; - isolate->stack_guard()->SetStackLimit( - reinterpret_cast<uintptr_t>(&marker) - 128 * 1024); + isolate->stack_guard()->SetStackLimit(GetCurrentStackPosition() - 128 * 1024); const char* program = "try { } catch (e) { var foo = function () { /* first */ } }" @@ -446,15 +448,15 @@ TEST(Regress928) { preparser.set_allow_lazy(true); i::PreParser::PreParseResult result = preparser.PreParseProgram(); CHECK_EQ(i::PreParser::kPreParseSuccess, result); - i::ScriptData data(log.ExtractData()); - CHECK(!data.has_error()); - data.Initialize(); + i::ScriptData* sd = log.GetScriptData(); + i::ParseData pd(sd); + pd.Initialize(); int first_function = static_cast<int>(strstr(program, "function") - program); int first_lbrace = first_function + i::StrLength("function () "); CHECK_EQ('{', program[first_lbrace]); - i::FunctionEntry entry1 = data.GetFunctionEntry(first_lbrace); + i::FunctionEntry entry1 = pd.GetFunctionEntry(first_lbrace); CHECK(!entry1.is_valid()); int second_function = @@ -462,18 +464,18 @@ TEST(Regress928) { int second_lbrace = second_function + i::StrLength("function () "); CHECK_EQ('{', program[second_lbrace]); - i::FunctionEntry entry2 = data.GetFunctionEntry(second_lbrace); + i::FunctionEntry entry2 = pd.GetFunctionEntry(second_lbrace); CHECK(entry2.is_valid()); CHECK_EQ('}', program[entry2.end_pos() - 1]); + delete sd; } TEST(PreParseOverflow) { v8::V8::Initialize(); - int marker; - CcTest::i_isolate()->stack_guard()->SetStackLimit( - reinterpret_cast<uintptr_t>(&marker) - 128 * 1024); + CcTest::i_isolate()->stack_guard()->SetStackLimit(GetCurrentStackPosition() - + 128 * 1024); size_t kProgramSize = 1024 * 1024; i::SmartArrayPointer<char> program(i::NewArray<char>(kProgramSize + 1)); @@ -491,6 +493,7 @@ TEST(PreParseOverflow) { i::PreParser preparser(&scanner, &log, stack_limit); preparser.set_allow_lazy(true); + preparser.set_allow_arrow_functions(true); i::PreParser::PreParseResult result = preparser.PreParseProgram(); CHECK_EQ(i::PreParser::kPreParseStackOverflow, result); } @@ -669,7 +672,7 @@ TEST(Utf8CharacterStream) { i, unibrow::Utf16::kNoPreviousCharacter); } - ASSERT(cursor == kAllUtf8CharsSizeU); + DCHECK(cursor == kAllUtf8CharsSizeU); i::Utf8ToUtf16CharacterStream stream(reinterpret_cast<const i::byte*>(buffer), kAllUtf8CharsSizeU); @@ -758,8 +761,8 @@ TEST(StreamScanner) { i::Token::EOS, i::Token::ILLEGAL }; - ASSERT_EQ('{', str2[19]); - ASSERT_EQ('}', str2[37]); + DCHECK_EQ('{', str2[19]); + DCHECK_EQ('}', str2[37]); TestStreamScanner(&stream2, expectations2, 20, 37); const char* str3 = "{}}}}"; @@ -796,8 +799,12 @@ void TestScanRegExp(const char* re_source, const char* expected) { CHECK(start == i::Token::DIV || start == i::Token::ASSIGN_DIV); CHECK(scanner.ScanRegExpPattern(start == i::Token::ASSIGN_DIV)); scanner.Next(); // Current token is now the regexp literal. 
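// ---------------------------------------------------------------------------
// Editorial aside (not part of this commit): PreParseOverflow above allocates
// a ~1 MB source buffer and expects kPreParseStackOverflow. A sketch of how
// such a pathologically deep program can be built with the standard library
// alone (the choice of '(' as the repeated token is an assumption for
// illustration; any unbounded nesting works):
#include <string>

static std::string MakeDeeplyNestedSource(size_t size) {
  // size unmatched '(' tokens make a recursive-descent parser recurse once
  // per character, which is exactly what trips the stack guard.
  return std::string(size, '(');
}
// ---------------------------------------------------------------------------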
+ i::Zone zone(CcTest::i_isolate()); + i::AstValueFactory ast_value_factory(&zone, + CcTest::i_isolate()->heap()->HashSeed()); + ast_value_factory.Internalize(CcTest::i_isolate()); i::Handle<i::String> val = - scanner.AllocateInternalizedString(CcTest::i_isolate()); + scanner.CurrentSymbol(&ast_value_factory)->string(); i::DisallowHeapAllocation no_alloc; i::String::FlatContent content = val->GetFlatContent(); CHECK(content.IsAscii()); @@ -964,7 +971,13 @@ TEST(ScopePositions) { " infunction;\n" " }", "\n" " more;", i::FUNCTION_SCOPE, i::SLOPPY }, - { " (function fun", "(a,b) { infunction; }", ")();", + // TODO(aperez): Change to use i::ARROW_SCOPE when implemented + { " start;\n", "(a,b) => a + b", "; more;", + i::FUNCTION_SCOPE, i::SLOPPY }, + { " start;\n", "(a,b) => { return a+b; }", "\nmore;", + i::FUNCTION_SCOPE, i::SLOPPY }, + { " start;\n" + " (function fun", "(a,b) { infunction; }", ")();", i::FUNCTION_SCOPE, i::SLOPPY }, { " for ", "(let x = 1 ; x < 10; ++ x) { block; }", " more;", i::BLOCK_SCOPE, i::STRICT }, @@ -1091,9 +1104,7 @@ TEST(ScopePositions) { v8::Handle<v8::Context> context = v8::Context::New(CcTest::isolate()); v8::Context::Scope context_scope(context); - int marker; - isolate->stack_guard()->SetStackLimit( - reinterpret_cast<uintptr_t>(&marker) - 128 * 1024); + isolate->stack_guard()->SetStackLimit(GetCurrentStackPosition() - 128 * 1024); for (int i = 0; source_data[i].outer_prefix; i++) { int kPrefixLen = Utf8LengthHelper(source_data[i].outer_prefix); @@ -1105,10 +1116,10 @@ TEST(ScopePositions) { int kProgramSize = kPrefixLen + kInnerLen + kSuffixLen; int kProgramByteSize = kPrefixByteLen + kInnerByteLen + kSuffixByteLen; i::ScopedVector<char> program(kProgramByteSize + 1); - i::OS::SNPrintF(program, "%s%s%s", - source_data[i].outer_prefix, - source_data[i].inner_source, - source_data[i].outer_suffix); + i::SNPrintF(program, "%s%s%s", + source_data[i].outer_prefix, + source_data[i].inner_source, + source_data[i].outer_suffix); // Parse program source. 
i::Handle<i::String> source = factory->NewStringFromUtf8( @@ -1119,6 +1130,7 @@ TEST(ScopePositions) { i::Parser parser(&info); parser.set_allow_lazy(true); parser.set_allow_harmony_scoping(true); + parser.set_allow_arrow_functions(true); info.MarkAsGlobal(); info.SetStrictMode(source_data[i].strict_mode); parser.Parse(); @@ -1141,20 +1153,41 @@ TEST(ScopePositions) { } -i::Handle<i::String> FormatMessage(i::ScriptData* data) { +const char* ReadString(unsigned* start) { + int length = start[0]; + char* result = i::NewArray<char>(length + 1); + for (int i = 0; i < length; i++) { + result[i] = start[i + 1]; + } + result[length] = '\0'; + return result; +} + + +i::Handle<i::String> FormatMessage(i::Vector<unsigned> data) { i::Isolate* isolate = CcTest::i_isolate(); i::Factory* factory = isolate->factory(); - const char* message = data->BuildMessage(); + const char* message = + ReadString(&data[i::PreparseDataConstants::kMessageTextPos]); i::Handle<i::String> format = v8::Utils::OpenHandle( *v8::String::NewFromUtf8(CcTest::isolate(), message)); - i::Vector<const char*> args = data->BuildArgs(); - i::Handle<i::JSArray> args_array = factory->NewJSArray(args.length()); - for (int i = 0; i < args.length(); i++) { - i::JSArray::SetElement( - args_array, i, v8::Utils::OpenHandle(*v8::String::NewFromUtf8( - CcTest::isolate(), args[i])), - NONE, i::SLOPPY).Check(); + int arg_count = data[i::PreparseDataConstants::kMessageArgCountPos]; + const char* arg = NULL; + i::Handle<i::JSArray> args_array; + if (arg_count == 1) { + // Position after text found by skipping past length field and + // length field content words. + int pos = i::PreparseDataConstants::kMessageTextPos + 1 + + data[i::PreparseDataConstants::kMessageTextPos]; + arg = ReadString(&data[pos]); + args_array = factory->NewJSArray(1); + i::JSArray::SetElement(args_array, 0, v8::Utils::OpenHandle(*v8_str(arg)), + NONE, i::SLOPPY).Check(); + } else { + CHECK_EQ(0, arg_count); + args_array = factory->NewJSArray(0); } + i::Handle<i::JSObject> builtins(isolate->js_builtins_object()); i::Handle<i::Object> format_fun = i::Object::GetProperty( isolate, builtins, "FormatMessage").ToHandleChecked(); @@ -1162,11 +1195,9 @@ i::Handle<i::String> FormatMessage(i::ScriptData* data) { i::Handle<i::Object> result = i::Execution::Call( isolate, format_fun, builtins, 2, arg_handles).ToHandleChecked(); CHECK(result->IsString()); - for (int i = 0; i < args.length(); i++) { - i::DeleteArray(args[i]); - } - i::DeleteArray(args.start()); i::DeleteArray(message); + i::DeleteArray(arg); + data.Dispose(); return i::Handle<i::String>::cast(result); } @@ -1177,8 +1208,8 @@ enum ParserFlag { kAllowHarmonyScoping, kAllowModules, kAllowGenerators, - kAllowForOf, - kAllowHarmonyNumericLiterals + kAllowHarmonyNumericLiterals, + kAllowArrowFunctions }; @@ -1196,9 +1227,9 @@ void SetParserFlags(i::ParserBase<Traits>* parser, parser->set_allow_harmony_scoping(flags.Contains(kAllowHarmonyScoping)); parser->set_allow_modules(flags.Contains(kAllowModules)); parser->set_allow_generators(flags.Contains(kAllowGenerators)); - parser->set_allow_for_of(flags.Contains(kAllowForOf)); parser->set_allow_harmony_numeric_literals( flags.Contains(kAllowHarmonyNumericLiterals)); + parser->set_allow_arrow_functions(flags.Contains(kAllowArrowFunctions)); } @@ -1221,7 +1252,8 @@ void TestParserSyncWithFlags(i::Handle<i::String> source, i::PreParser::PreParseResult result = preparser.PreParseProgram(); CHECK_EQ(i::PreParser::kPreParseSuccess, result); } - i::ScriptData data(log.ExtractData()); + + 
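// ---------------------------------------------------------------------------
// Editorial aside (not part of this commit): ReadString() above decodes a
// length-prefixed string from the preparser's unsigned[] message data: word 0
// holds the length and each following word holds one character. A tiny
// self-contained round trip in the same layout (the buffer contents are made
// up for illustration):
#include <cassert>
#include <cstring>

static void LengthPrefixedRoundTrip() {
  unsigned data[] = {2, 'h', 'i'};  // encodes "hi": length, then characters
  char decoded[8] = {0};
  for (unsigned i = 0; i < data[0]; i++) {
    decoded[i] = static_cast<char>(data[i + 1]);
  }
  assert(std::strcmp(decoded, "hi") == 0);
}
// ---------------------------------------------------------------------------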
bool preparse_error = log.HasError(); // Parse the data i::FunctionLiteral* function; @@ -1246,7 +1278,7 @@ void TestParserSyncWithFlags(i::Handle<i::String> source, isolate, exception_handle, "message").ToHandleChecked()); if (result == kSuccess) { - i::OS::Print( + v8::base::OS::Print( "Parser failed on:\n" "\t%s\n" "with error:\n" @@ -1256,8 +1288,8 @@ void TestParserSyncWithFlags(i::Handle<i::String> source, CHECK(false); } - if (!data.has_error()) { - i::OS::Print( + if (!preparse_error) { + v8::base::OS::Print( "Parser failed on:\n" "\t%s\n" "with error:\n" @@ -1267,9 +1299,10 @@ void TestParserSyncWithFlags(i::Handle<i::String> source, CHECK(false); } // Check that preparser and parser produce the same error. - i::Handle<i::String> preparser_message = FormatMessage(&data); + i::Handle<i::String> preparser_message = + FormatMessage(log.ErrorMessageData()); if (!i::String::Equals(message_string, preparser_message)) { - i::OS::Print( + v8::base::OS::Print( "Expected parser and preparser to produce the same error on:\n" "\t%s\n" "However, found the following error messages\n" @@ -1280,17 +1313,18 @@ void TestParserSyncWithFlags(i::Handle<i::String> source, preparser_message->ToCString().get()); CHECK(false); } - } else if (data.has_error()) { - i::OS::Print( + } else if (preparse_error) { + v8::base::OS::Print( "Preparser failed on:\n" "\t%s\n" "with error:\n" "\t%s\n" "However, the parser succeeded", - source->ToCString().get(), FormatMessage(&data)->ToCString().get()); + source->ToCString().get(), + FormatMessage(log.ErrorMessageData())->ToCString().get()); CHECK(false); } else if (result == kError) { - i::OS::Print( + v8::base::OS::Print( "Expected error on:\n" "\t%s\n" "However, parser and preparser succeeded", @@ -1301,15 +1335,22 @@ void TestParserSyncWithFlags(i::Handle<i::String> source, void TestParserSync(const char* source, - const ParserFlag* flag_list, - size_t flag_list_length, - ParserSyncTestResult result = kSuccessOrError) { + const ParserFlag* varying_flags, + size_t varying_flags_length, + ParserSyncTestResult result = kSuccessOrError, + const ParserFlag* always_true_flags = NULL, + size_t always_true_flags_length = 0) { i::Handle<i::String> str = CcTest::i_isolate()->factory()->NewStringFromAsciiChecked(source); - for (int bits = 0; bits < (1 << flag_list_length); bits++) { + for (int bits = 0; bits < (1 << varying_flags_length); bits++) { i::EnumSet<ParserFlag> flags; - for (size_t flag_index = 0; flag_index < flag_list_length; flag_index++) { - if ((bits & (1 << flag_index)) != 0) flags.Add(flag_list[flag_index]); + for (size_t flag_index = 0; flag_index < varying_flags_length; + ++flag_index) { + if ((bits & (1 << flag_index)) != 0) flags.Add(varying_flags[flag_index]); + } + for (size_t flag_index = 0; flag_index < always_true_flags_length; + ++flag_index) { + flags.Add(always_true_flags[flag_index]); } TestParserSyncWithFlags(str, flags, result); } @@ -1390,14 +1431,12 @@ TEST(ParserSync) { v8::Handle<v8::Context> context = v8::Context::New(CcTest::isolate()); v8::Context::Scope context_scope(context); - int marker; - CcTest::i_isolate()->stack_guard()->SetStackLimit( - reinterpret_cast<uintptr_t>(&marker) - 128 * 1024); + CcTest::i_isolate()->stack_guard()->SetStackLimit(GetCurrentStackPosition() - + 128 * 1024); - static const ParserFlag flags1[] = { - kAllowLazy, kAllowHarmonyScoping, kAllowModules, kAllowGenerators, - kAllowForOf - }; + static const ParserFlag flags1[] = {kAllowLazy, kAllowHarmonyScoping, + kAllowModules, kAllowGenerators, + 
kAllowArrowFunctions}; for (int i = 0; context_data[i][0] != NULL; ++i) { for (int j = 0; statement_data[j] != NULL; ++j) { for (int k = 0; termination_data[k] != NULL; ++k) { @@ -1410,7 +1449,7 @@ TEST(ParserSync) { // Plug the source code pieces together. i::ScopedVector<char> program(kProgramSize + 1); - int length = i::OS::SNPrintF(program, + int length = i::SNPrintF(program, "label: for (;;) { %s%s%s%s }", context_data[i][0], statement_data[j], @@ -1461,22 +1500,42 @@ void RunParserSyncTest(const char* context_data[][2], const char* statement_data[], ParserSyncTestResult result, const ParserFlag* flags = NULL, - int flags_len = 0) { + int flags_len = 0, + const ParserFlag* always_true_flags = NULL, + int always_true_flags_len = 0) { v8::HandleScope handles(CcTest::isolate()); v8::Handle<v8::Context> context = v8::Context::New(CcTest::isolate()); v8::Context::Scope context_scope(context); - int marker; - CcTest::i_isolate()->stack_guard()->SetStackLimit( - reinterpret_cast<uintptr_t>(&marker) - 128 * 1024); + CcTest::i_isolate()->stack_guard()->SetStackLimit(GetCurrentStackPosition() - + 128 * 1024); static const ParserFlag default_flags[] = { - kAllowLazy, kAllowHarmonyScoping, kAllowModules, kAllowGenerators, - kAllowForOf, kAllowNativesSyntax - }; - if (!flags) { + kAllowLazy, kAllowHarmonyScoping, kAllowModules, + kAllowGenerators, kAllowNativesSyntax, kAllowArrowFunctions}; + ParserFlag* generated_flags = NULL; + if (flags == NULL) { flags = default_flags; flags_len = ARRAY_SIZE(default_flags); + if (always_true_flags != NULL) { + // Remove always_true_flags from default_flags. + CHECK(always_true_flags_len < flags_len); + generated_flags = new ParserFlag[flags_len - always_true_flags_len]; + int flag_index = 0; + for (int i = 0; i < flags_len; ++i) { + bool use_flag = true; + for (int j = 0; j < always_true_flags_len; ++j) { + if (flags[i] == always_true_flags[j]) { + use_flag = false; + break; + } + } + if (use_flag) generated_flags[flag_index++] = flags[i]; + } + CHECK(flag_index == flags_len - always_true_flags_len); + flags_len = flag_index; + flags = generated_flags; + } } for (int i = 0; context_data[i][0] != NULL; ++i) { for (int j = 0; statement_data[j] != NULL; ++j) { @@ -1487,18 +1546,21 @@ void RunParserSyncTest(const char* context_data[][2], // Plug the source code pieces together. 
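// ---------------------------------------------------------------------------
// Editorial aside (not part of this commit): TestParserSync above enumerates
// every subset of the varying flags by counting a bitmask from 0 to 2^n - 1
// and adding flag i whenever bit i is set. The enumeration pattern in
// isolation, over plain ints (names illustrative):
#include <cstdio>

static void EnumerateSubsets(const int* flags, size_t n) {
  for (unsigned bits = 0; bits < (1u << n); bits++) {
    for (size_t i = 0; i < n; i++) {
      if (bits & (1u << i)) std::printf("%d ", flags[i]);  // flag i is "on"
    }
    std::printf("\n");  // one line per combination, 2^n lines in total
  }
}
// ---------------------------------------------------------------------------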
i::ScopedVector<char> program(kProgramSize + 1); - int length = i::OS::SNPrintF(program, - "%s%s%s", - context_data[i][0], - statement_data[j], - context_data[i][1]); + int length = i::SNPrintF(program, + "%s%s%s", + context_data[i][0], + statement_data[j], + context_data[i][1]); CHECK(length == kProgramSize); TestParserSync(program.start(), flags, flags_len, - result); + result, + always_true_flags, + always_true_flags_len); } } + delete[] generated_flags; } @@ -1526,6 +1588,10 @@ TEST(ErrorsEvalAndArguments) { "function foo(arguments) { }", "function foo(bar, eval) { }", "function foo(bar, arguments) { }", + "(eval) => { }", + "(arguments) => { }", + "(foo, eval) => { }", + "(foo, arguments) => { }", "eval = 1;", "arguments = 1;", "var foo = eval = 1;", @@ -1582,6 +1648,7 @@ TEST(NoErrorsEvalAndArgumentsStrict) { const char* context_data[][2] = { { "\"use strict\";", "" }, { "function test_func() { \"use strict\";", "}" }, + { "() => { \"use strict\"; ", "}" }, { NULL, NULL } }; @@ -1597,7 +1664,9 @@ TEST(NoErrorsEvalAndArgumentsStrict) { NULL }; - RunParserSyncTest(context_data, statement_data, kSuccess); + static const ParserFlag always_flags[] = {kAllowArrowFunctions}; + RunParserSyncTest(context_data, statement_data, kSuccess, NULL, 0, + always_flags, ARRAY_SIZE(always_flags)); } @@ -1609,6 +1678,7 @@ TEST(ErrorsFutureStrictReservedWords) { const char* context_data[][2] = { { "\"use strict\";", "" }, { "function test_func() {\"use strict\"; ", "}"}, + { "() => { \"use strict\"; ", "}" }, { NULL, NULL } }; @@ -1623,10 +1693,13 @@ TEST(ErrorsFutureStrictReservedWords) { "var foo = interface = 1;", "++interface;", "interface++;", + "var yield = 13;", NULL }; - RunParserSyncTest(context_data, statement_data, kError); + static const ParserFlag always_flags[] = {kAllowArrowFunctions}; + RunParserSyncTest(context_data, statement_data, kError, NULL, 0, always_flags, + ARRAY_SIZE(always_flags)); } @@ -1634,6 +1707,7 @@ TEST(NoErrorsFutureStrictReservedWords) { const char* context_data[][2] = { { "", "" }, { "function test_func() {", "}"}, + { "() => {", "}" }, { NULL, NULL } }; @@ -1648,10 +1722,13 @@ TEST(NoErrorsFutureStrictReservedWords) { "var foo = interface = 1;", "++interface;", "interface++;", + "var yield = 13;", NULL }; - RunParserSyncTest(context_data, statement_data, kSuccess); + static const ParserFlag always_flags[] = {kAllowArrowFunctions}; + RunParserSyncTest(context_data, statement_data, kSuccess, NULL, 0, + always_flags, ARRAY_SIZE(always_flags)); } @@ -1664,6 +1741,8 @@ TEST(ErrorsReservedWords) { { "\"use strict\";", "" }, { "var eval; function test_func() {", "}"}, { "var eval; function test_func() {\"use strict\"; ", "}"}, + { "var eval; () => {", "}"}, + { "var eval; () => {\"use strict\"; ", "}"}, { NULL, NULL } }; @@ -1674,6 +1753,8 @@ TEST(ErrorsReservedWords) { "function super() { }", "function foo(super) { }", "function foo(bar, super) { }", + "(super) => { }", + "(bar, super) => { }", "super = 1;", "var foo = super = 1;", "++super;", @@ -1686,12 +1767,47 @@ TEST(ErrorsReservedWords) { } -TEST(NoErrorsYieldSloppy) { +TEST(NoErrorsLetSloppyAllModes) { + // In sloppy mode, it's okay to use "let" as identifier. 
+ const char* context_data[][2] = { + { "", "" }, + { "function f() {", "}" }, + { "(function f() {", "})" }, + { NULL, NULL } + }; + + const char* statement_data[] = { + "var let;", + "var foo, let;", + "try { } catch (let) { }", + "function let() { }", + "(function let() { })", + "function foo(let) { }", + "function foo(bar, let) { }", + "let = 1;", + "var foo = let = 1;", + "let * 2;", + "++let;", + "let++;", + "let: 34", + "function let(let) { let: let(let + let(0)); }", + "({ let: 1 })", + "({ get let() { 1 } })", + "let(100)", + NULL + }; + + RunParserSyncTest(context_data, statement_data, kSuccess); +} + + +TEST(NoErrorsYieldSloppyAllModes) { // In sloppy mode, it's okay to use "yield" as identifier, *except* inside a - // generator (see next test). + // generator (see other test). const char* context_data[][2] = { { "", "" }, - { "function is_not_gen() {", "}" }, + { "function not_gen() {", "}" }, + { "(function not_gen() {", "})" }, { NULL, NULL } }; @@ -1700,12 +1816,20 @@ TEST(NoErrorsYieldSloppy) { "var foo, yield;", "try { } catch (yield) { }", "function yield() { }", + "(function yield() { })", "function foo(yield) { }", "function foo(bar, yield) { }", "yield = 1;", "var foo = yield = 1;", + "yield * 2;", "++yield;", "yield++;", + "yield: 34", + "function yield(yield) { yield: yield (yield + yield(0)); }", + "({ yield: 1 })", + "({ get yield() { 1 } })", + "yield(100)", + "yield[100]", NULL }; @@ -1713,9 +1837,15 @@ TEST(NoErrorsYieldSloppy) { } -TEST(ErrorsYieldSloppyGenerator) { +TEST(NoErrorsYieldSloppyGeneratorsEnabled) { + // In sloppy mode, it's okay to use "yield" as identifier, *except* inside a + // generator (see next test). const char* context_data[][2] = { - { "function * is_gen() {", "}" }, + { "", "" }, + { "function not_gen() {", "}" }, + { "function * gen() { function not_gen() {", "} }" }, + { "(function not_gen() {", "})" }, + { "(function * gen() { (function not_gen() {", "}) })" }, { NULL, NULL } }; @@ -1724,28 +1854,41 @@ TEST(ErrorsYieldSloppyGenerator) { "var foo, yield;", "try { } catch (yield) { }", "function yield() { }", - // BUG: These should not be allowed, but they are (if kAllowGenerators is - // set) - // "function foo(yield) { }", - // "function foo(bar, yield) { }", + "(function yield() { })", + "function foo(yield) { }", + "function foo(bar, yield) { }", + "function * yield() { }", + "(function * yield() { })", "yield = 1;", "var foo = yield = 1;", + "yield * 2;", "++yield;", "yield++;", + "yield: 34", + "function yield(yield) { yield: yield (yield + yield(0)); }", + "({ yield: 1 })", + "({ get yield() { 1 } })", + "yield(100)", + "yield[100]", NULL }; - // If generators are not allowed, the error will be produced at the '*' token, - // so this test works both with and without the kAllowGenerators flag. - RunParserSyncTest(context_data, statement_data, kError); + // This test requires kAllowGenerators to succeed. 
+ static const ParserFlag always_true_flags[] = { kAllowGenerators }; + RunParserSyncTest(context_data, statement_data, kSuccess, NULL, 0, + always_true_flags, 1); } TEST(ErrorsYieldStrict) { const char* context_data[][2] = { { "\"use strict\";", "" }, - { "\"use strict\"; function is_not_gen() {", "}" }, + { "\"use strict\"; function not_gen() {", "}" }, { "function test_func() {\"use strict\"; ", "}"}, + { "\"use strict\"; function * gen() { function not_gen() {", "} }" }, + { "\"use strict\"; (function not_gen() {", "})" }, + { "\"use strict\"; (function * gen() { (function not_gen() {", "}) })" }, + { "() => {\"use strict\"; ", "}" }, { NULL, NULL } }; @@ -1754,12 +1897,16 @@ TEST(ErrorsYieldStrict) { "var foo, yield;", "try { } catch (yield) { }", "function yield() { }", + "(function yield() { })", "function foo(yield) { }", "function foo(bar, yield) { }", + "function * yield() { }", + "(function * yield() { })", "yield = 1;", "var foo = yield = 1;", "++yield;", "yield++;", + "yield: 34;", NULL }; @@ -1767,22 +1914,116 @@ TEST(ErrorsYieldStrict) { } -TEST(ErrorsYield) { +TEST(NoErrorsGenerator) { const char* context_data[][2] = { - { "function * is_gen() {", "}" }, + { "function * gen() {", "}" }, + { "(function * gen() {", "})" }, + { "(function * () {", "})" }, { NULL, NULL } }; const char* statement_data[] = { - "yield 2;", // this is legal inside generator - "yield * 2;", // this is legal inside generator + // A generator without a body is valid. + "" + // Valid yield expressions inside generators. + "yield 2;", + "yield * 2;", + "yield * \n 2;", + "yield yield 1;", + "yield * yield * 1;", + "yield 3 + (yield 4);", + "yield * 3 + (yield * 4);", + "(yield * 3) + (yield * 4);", + "yield 3; yield 4;", + "yield * 3; yield * 4;", + "(function (yield) { })", + "yield { yield: 12 }", + "yield /* comment */ { yield: 12 }", + "yield * \n { yield: 12 }", + "yield /* comment */ * \n { yield: 12 }", + // You can return in a generator. + "yield 1; return", + "yield * 1; return", + "yield 1; return 37", + "yield * 1; return 37", + "yield 1; return 37; yield 'dead';", + "yield * 1; return 37; yield * 'dead';", + // Yield is still a valid key in object literals. + "({ yield: 1 })", + "({ get yield() { } })", + // Yield without RHS. + "yield;", + "yield", + "yield\n", + "yield /* comment */" + "yield // comment\n" + "(yield)", + "[yield]", + "{yield}", + "yield, yield", + "yield; yield", + "(yield) ? yield : yield", + "(yield) \n ? yield : yield", + // If there is a newline before the next token, we don't look for RHS. + "yield\nfor (;;) {}", + NULL + }; + + // This test requires kAllowGenerators to succeed. + static const ParserFlag always_true_flags[] = { + kAllowGenerators + }; + RunParserSyncTest(context_data, statement_data, kSuccess, NULL, 0, + always_true_flags, 1); +} + + +TEST(ErrorsYieldGenerator) { + const char* context_data[][2] = { + { "function * gen() {", "}" }, + { "\"use strict\"; function * gen() {", "}" }, + { NULL, NULL } + }; + + const char* statement_data[] = { + // Invalid yield expressions inside generators. + "var yield;", + "var foo, yield;", + "try { } catch (yield) { }", + "function yield() { }", + // The name of the NFE is let-bound in the generator, which does not permit + // yield to be an identifier. + "(function yield() { })", + "(function * yield() { })", + // Yield isn't valid as a formal parameter for generators. 
+ "function * foo(yield) { }", + "(function * foo(yield) { })", + "yield = 1;", + "var foo = yield = 1;", + "++yield;", + "yield++;", + "yield *", + "(yield *)", + // Yield binds very loosely, so this parses as "yield (3 + yield 4)", which + // is invalid. + "yield 3 + yield 4;", + "yield: 34", + "yield ? 1 : 2", + // Parses as yield (/ yield): invalid. + "yield / yield", + "+ yield", + "+ yield 3", + // Invalid (no newline allowed between yield and *). + "yield\n*3", + // Invalid (we see a newline, so we parse {yield:42} as a statement, not an + // object literal, and yield is not a valid label). + "yield\n{yield: 42}", + "yield /* comment */\n {yield: 42}", + "yield //comment\n {yield: 42}", NULL }; - // Here we cannot assert that there is no error, since there will be without - // the kAllowGenerators flag. However, we test that Parser and PreParser - // produce the same errors. - RunParserSyncTest(context_data, statement_data, kSuccessOrError); + RunParserSyncTest(context_data, statement_data, kError); } @@ -1790,16 +2031,18 @@ TEST(ErrorsNameOfStrictFunction) { // Tests that illegal tokens as names of a strict function produce the correct // errors. const char* context_data[][2] = { - { "", ""}, - { "\"use strict\";", ""}, + { "function ", ""}, + { "\"use strict\"; function", ""}, + { "function * ", ""}, + { "\"use strict\"; function * ", ""}, { NULL, NULL } }; const char* statement_data[] = { - "function eval() {\"use strict\";}", - "function arguments() {\"use strict\";}", - "function interface() {\"use strict\";}", - "function yield() {\"use strict\";}", + "eval() {\"use strict\";}", + "arguments() {\"use strict\";}", + "interface() {\"use strict\";}", + "yield() {\"use strict\";}", // Future reserved words are always illegal "function super() { }", "function super() {\"use strict\";}", @@ -1812,15 +2055,15 @@ TEST(ErrorsNameOfStrictFunction) { TEST(NoErrorsNameOfStrictFunction) { const char* context_data[][2] = { - { "", ""}, + { "function ", ""}, { NULL, NULL } }; const char* statement_data[] = { - "function eval() { }", - "function arguments() { }", - "function interface() { }", - "function yield() { }", + "eval() { }", + "arguments() { }", + "interface() { }", + "yield() { }", NULL }; @@ -1828,12 +2071,35 @@ TEST(NoErrorsNameOfStrictFunction) { } +TEST(NoErrorsNameOfStrictGenerator) { + const char* context_data[][2] = { + { "function * ", ""}, + { NULL, NULL } + }; + + const char* statement_data[] = { + "eval() { }", + "arguments() { }", + "interface() { }", + "yield() { }", + NULL + }; + + // This test requires kAllowGenerators to succeed. + static const ParserFlag always_true_flags[] = { + kAllowGenerators + }; + RunParserSyncTest(context_data, statement_data, kSuccess, NULL, 0, + always_true_flags, 1); +} + TEST(ErrorsIllegalWordsAsLabelsSloppy) { // Using future reserved words as labels is always an error. 
const char* context_data[][2] = { { "", ""}, { "function test_func() {", "}" }, + { "() => {", "}" }, { NULL, NULL } }; @@ -1851,6 +2117,7 @@ TEST(ErrorsIllegalWordsAsLabelsStrict) { const char* context_data[][2] = { { "\"use strict\";", "" }, { "function test_func() {\"use strict\"; ", "}"}, + { "() => {\"use strict\"; ", "}" }, { NULL, NULL } }; @@ -1870,8 +2137,10 @@ TEST(NoErrorsIllegalWordsAsLabels) { const char* context_data[][2] = { { "", ""}, { "function test_func() {", "}" }, + { "() => {", "}" }, { "\"use strict\";", "" }, { "\"use strict\"; function test_func() {", "}" }, + { "\"use strict\"; () => {", "}" }, { NULL, NULL } }; @@ -1882,7 +2151,9 @@ TEST(NoErrorsIllegalWordsAsLabels) { NULL }; - RunParserSyncTest(context_data, statement_data, kSuccess); + static const ParserFlag always_flags[] = {kAllowArrowFunctions}; + RunParserSyncTest(context_data, statement_data, kSuccess, NULL, 0, + always_flags, ARRAY_SIZE(always_flags)); } @@ -1891,6 +2162,7 @@ TEST(ErrorsParenthesizedLabels) { const char* context_data[][2] = { { "", ""}, { "function test_func() {", "}" }, + { "() => {", "}" }, { NULL, NULL } }; @@ -1969,9 +2241,8 @@ TEST(DontRegressPreParserDataSizes) { v8::Isolate* isolate = CcTest::isolate(); v8::HandleScope handles(isolate); - int marker; - CcTest::i_isolate()->stack_guard()->SetStackLimit( - reinterpret_cast<uintptr_t>(&marker) - 128 * 1024); + CcTest::i_isolate()->stack_guard()->SetStackLimit(GetCurrentStackPosition() - + 128 * 1024); struct TestCase { const char* program; @@ -1997,21 +2268,20 @@ TEST(DontRegressPreParserDataSizes) { factory->NewStringFromUtf8(i::CStrVector(program)).ToHandleChecked(); i::Handle<i::Script> script = factory->NewScript(source); i::CompilationInfoWithZone info(script); - i::ScriptData* data = NULL; - info.SetCachedData(&data, i::PRODUCE_CACHED_DATA); + i::ScriptData* sd = NULL; + info.SetCachedData(&sd, v8::ScriptCompiler::kProduceParserCache); i::Parser::Parse(&info, true); - CHECK(data); - CHECK(!data->HasError()); + i::ParseData pd(sd); - if (data->function_count() != test_cases[i].functions) { - i::OS::Print( + if (pd.FunctionCount() != test_cases[i].functions) { + v8::base::OS::Print( "Expected preparse data for program:\n" "\t%s\n" "to contain %d functions, however, received %d functions.\n", - program, test_cases[i].functions, - data->function_count()); + program, test_cases[i].functions, pd.FunctionCount()); CHECK(false); } + delete sd; } } @@ -2132,9 +2402,13 @@ TEST(Intrinsics) { NULL }; - // Parsing will fail or succeed depending on whether we allow natives syntax - // or not. - RunParserSyncTest(context_data, statement_data, kSuccessOrError); + // This test requires kAllowNativesSyntax to succeed. 
+ static const ParserFlag always_true_flags[] = { + kAllowNativesSyntax + }; + + RunParserSyncTest(context_data, statement_data, kSuccess, NULL, 0, + always_true_flags, 1); } @@ -2199,10 +2473,12 @@ TEST(ErrorsNewExpression) { TEST(StrictObjectLiteralChecking) { const char* strict_context_data[][2] = { {"\"use strict\"; var myobject = {", "};"}, + {"\"use strict\"; var myobject = {", ",};"}, { NULL, NULL } }; const char* non_strict_context_data[][2] = { {"var myobject = {", "};"}, + {"var myobject = {", ",};"}, { NULL, NULL } }; @@ -2231,23 +2507,29 @@ TEST(ErrorsObjectLiteralChecking) { }; const char* statement_data[] = { + ",", "foo: 1, get foo() {}", - "foo: 1, set foo() {}", + "foo: 1, set foo(v) {}", "\"foo\": 1, get \"foo\"() {}", - "\"foo\": 1, set \"foo\"() {}", + "\"foo\": 1, set \"foo\"(v) {}", "1: 1, get 1() {}", "1: 1, set 1() {}", // It's counter-intuitive, but these collide too (even in classic // mode). Note that we can have "foo" and foo as properties in classic mode, // but we cannot have "foo" and get foo, or foo and get "foo". "foo: 1, get \"foo\"() {}", - "foo: 1, set \"foo\"() {}", + "foo: 1, set \"foo\"(v) {}", "\"foo\": 1, get foo() {}", - "\"foo\": 1, set foo() {}", + "\"foo\": 1, set foo(v) {}", "1: 1, get \"1\"() {}", "1: 1, set \"1\"() {}", "\"1\": 1, get 1() {}" - "\"1\": 1, set 1() {}" + "\"1\": 1, set 1(v) {}" + // Wrong number of parameters + "get bar(x) {}", + "get bar(x, y) {}", + "set bar() {}", + "set bar(x, y) {}", // Parsing FunctionLiteral for getter or setter fails "get foo( +", "get foo() \"error\"", @@ -2261,7 +2543,9 @@ TEST(ErrorsObjectLiteralChecking) { TEST(NoErrorsObjectLiteralChecking) { const char* context_data[][2] = { {"var myobject = {", "};"}, + {"var myobject = {", ",};"}, {"\"use strict\"; var myobject = {", "};"}, + {"\"use strict\"; var myobject = {", ",};"}, { NULL, NULL } }; @@ -2271,25 +2555,22 @@ TEST(NoErrorsObjectLiteralChecking) { "1: 1, 2: 2", // Syntax: IdentifierName ':' AssignmentExpression "foo: bar = 5 + baz", - // Syntax: 'get' (IdentifierName | String | Number) FunctionLiteral + // Syntax: 'get' PropertyName '(' ')' '{' FunctionBody '}' "get foo() {}", "get \"foo\"() {}", "get 1() {}", - // Syntax: 'set' (IdentifierName | String | Number) FunctionLiteral - "set foo() {}", - "set \"foo\"() {}", - "set 1() {}", + // Syntax: 'set' PropertyName '(' PropertySetParameterList ')' + // '{' FunctionBody '}' + "set foo(v) {}", + "set \"foo\"(v) {}", + "set 1(v) {}", // Non-colliding getters and setters -> no errors "foo: 1, get bar() {}", - "foo: 1, set bar(b) {}", + "foo: 1, set bar(v) {}", "\"foo\": 1, get \"bar\"() {}", - "\"foo\": 1, set \"bar\"() {}", + "\"foo\": 1, set \"bar\"(v) {}", "1: 1, get 2() {}", - "1: 1, set 2() {}", - // Weird number of parameters -> no errors - "get bar() {}, set bar() {}", - "get bar(x) {}, set bar(x) {}", - "get bar(x, y) {}, set bar(x, y) {}", + "1: 1, set 2(v) {}", // Keywords, future reserved and strict future reserved are also allowed as // property names. "if: 4", @@ -2459,3 +2740,606 @@ TEST(InvalidLeftHandSide) { RunParserSyncTest(postfix_context_data, good_statement_data, kSuccess); RunParserSyncTest(postfix_context_data, bad_statement_data_common, kError); } + + +TEST(FuncNameInferrerBasic) { + // Tests that function names are inferred properly. 
+ i::FLAG_allow_natives_syntax = true; + v8::Isolate* isolate = CcTest::isolate(); + v8::HandleScope scope(isolate); + LocalContext env; + CompileRun("var foo1 = function() {}; " + "var foo2 = function foo3() {}; " + "function not_ctor() { " + " var foo4 = function() {}; " + " return %FunctionGetInferredName(foo4); " + "} " + "function Ctor() { " + " var foo5 = function() {}; " + " return %FunctionGetInferredName(foo5); " + "} " + "var obj1 = { foo6: function() {} }; " + "var obj2 = { 'foo7': function() {} }; " + "var obj3 = {}; " + "obj3[1] = function() {}; " + "var obj4 = {}; " + "obj4[1] = function foo8() {}; " + "var obj5 = {}; " + "obj5['foo9'] = function() {}; " + "var obj6 = { obj7 : { foo10: function() {} } };"); + ExpectString("%FunctionGetInferredName(foo1)", "foo1"); + // foo2 is not unnamed -> its name is not inferred. + ExpectString("%FunctionGetInferredName(foo2)", ""); + ExpectString("not_ctor()", "foo4"); + ExpectString("Ctor()", "Ctor.foo5"); + ExpectString("%FunctionGetInferredName(obj1.foo6)", "obj1.foo6"); + ExpectString("%FunctionGetInferredName(obj2.foo7)", "obj2.foo7"); + ExpectString("%FunctionGetInferredName(obj3[1])", + "obj3.(anonymous function)"); + ExpectString("%FunctionGetInferredName(obj4[1])", ""); + ExpectString("%FunctionGetInferredName(obj5['foo9'])", "obj5.foo9"); + ExpectString("%FunctionGetInferredName(obj6.obj7.foo10)", "obj6.obj7.foo10"); +} + + +TEST(FuncNameInferrerTwoByte) { + // Tests function name inferring in cases where some parts of the inferred + // function name are two-byte strings. + i::FLAG_allow_natives_syntax = true; + v8::Isolate* isolate = CcTest::isolate(); + v8::HandleScope scope(isolate); + LocalContext env; + uint16_t* two_byte_source = AsciiToTwoByteString( + "var obj1 = { oXj2 : { foo1: function() {} } }; " + "%FunctionGetInferredName(obj1.oXj2.foo1)"); + uint16_t* two_byte_name = AsciiToTwoByteString("obj1.oXj2.foo1"); + // Make it really non-ASCII (replace the Xs with a non-ASCII character). + two_byte_source[14] = two_byte_source[78] = two_byte_name[6] = 0x010d; + v8::Local<v8::String> source = + v8::String::NewFromTwoByte(isolate, two_byte_source); + v8::Local<v8::Value> result = CompileRun(source); + CHECK(result->IsString()); + v8::Local<v8::String> expected_name = + v8::String::NewFromTwoByte(isolate, two_byte_name); + CHECK(result->Equals(expected_name)); + i::DeleteArray(two_byte_source); + i::DeleteArray(two_byte_name); +} + + +TEST(FuncNameInferrerEscaped) { + // The same as FuncNameInferrerTwoByte, except that we express the two-byte + // character as a unicode escape. + i::FLAG_allow_natives_syntax = true; + v8::Isolate* isolate = CcTest::isolate(); + v8::HandleScope scope(isolate); + LocalContext env; + uint16_t* two_byte_source = AsciiToTwoByteString( + "var obj1 = { o\\u010dj2 : { foo1: function() {} } }; " + "%FunctionGetInferredName(obj1.o\\u010dj2.foo1)"); + uint16_t* two_byte_name = AsciiToTwoByteString("obj1.oXj2.foo1"); + // Fix to correspond to the non-ASCII name in two_byte_source. 
+ two_byte_name[6] = 0x010d; + v8::Local<v8::String> source = + v8::String::NewFromTwoByte(isolate, two_byte_source); + v8::Local<v8::Value> result = CompileRun(source); + CHECK(result->IsString()); + v8::Local<v8::String> expected_name = + v8::String::NewFromTwoByte(isolate, two_byte_name); + CHECK(result->Equals(expected_name)); + i::DeleteArray(two_byte_source); + i::DeleteArray(two_byte_name); +} + + +TEST(RegressionLazyFunctionWithErrorWithArg) { + // The bug occurred when a lazy function had an error which requires a + // parameter (such as "unknown label" here). The error message was processed + // before the AstValueFactory containing the error message string was + // internalized. + v8::Isolate* isolate = CcTest::isolate(); + v8::HandleScope scope(isolate); + LocalContext env; + i::FLAG_lazy = true; + i::FLAG_min_preparse_length = 0; + CompileRun("function this_is_lazy() {\n" + " break p;\n" + "}\n" + "this_is_lazy();\n"); +} + + +TEST(SerializationOfMaybeAssignmentFlag) { + i::Isolate* isolate = CcTest::i_isolate(); + i::Factory* factory = isolate->factory(); + i::HandleScope scope(isolate); + LocalContext env; + + const char* src = + "function h() {" + " var result = [];" + " function f() {" + " result.push(2);" + " }" + " function assertResult(r) {" + " f();" + " result = [];" + " }" + " assertResult([2]);" + " assertResult([2]);" + " return f;" + "};" + "h();"; + + i::ScopedVector<char> program(Utf8LengthHelper(src) + 1); + i::SNPrintF(program, "%s", src); + i::Handle<i::String> source = factory->InternalizeUtf8String(program.start()); + source->PrintOn(stdout); + printf("\n"); + i::Zone zone(isolate); + v8::Local<v8::Value> v = CompileRun(src); + i::Handle<i::Object> o = v8::Utils::OpenHandle(*v); + i::Handle<i::JSFunction> f = i::Handle<i::JSFunction>::cast(o); + i::Context* context = f->context(); + i::AstValueFactory avf(&zone, isolate->heap()->HashSeed()); + avf.Internalize(isolate); + const i::AstRawString* name = avf.GetOneByteString("result"); + i::Handle<i::String> str = name->string(); + CHECK(str->IsInternalizedString()); + i::Scope* global_scope = + new (&zone) i::Scope(NULL, i::GLOBAL_SCOPE, &avf, &zone); + global_scope->Initialize(); + i::Scope* s = i::Scope::DeserializeScopeChain(context, global_scope, &zone); + DCHECK(s != global_scope); + DCHECK(name != NULL); + + // Get result from h's function context (that is f's context) + i::Variable* var = s->Lookup(name); + + CHECK(var != NULL); + // Maybe assigned should survive deserialization + CHECK(var->maybe_assigned() == i::kMaybeAssigned); + // TODO(sigurds) Figure out if is_used should survive context serialization. 
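What the assertion above is protecting: the kMaybeAssigned bit computed during parsing must still be visible after the scope chain is rebuilt from a serialized context, with Lookup walking outward through enclosing scopes. A toy model of that contract (names and types here are illustrative, not V8's):

#include <cassert>
#include <cstddef>
#include <map>
#include <string>

enum MaybeAssignedFlag { kNotAssigned, kMaybeAssigned };

struct Variable {
  MaybeAssignedFlag maybe_assigned;
  Variable() : maybe_assigned(kNotAssigned) {}
};

// Scopes form a chain; Lookup walks outward, as Scope::Lookup does.
struct Scope {
  Scope* outer;
  std::map<std::string, Variable> vars;
  explicit Scope(Scope* outer_scope) : outer(outer_scope) {}
  Variable* Lookup(const std::string& name) {
    for (Scope* s = this; s != NULL; s = s->outer) {
      std::map<std::string, Variable>::iterator it = s->vars.find(name);
      if (it != s->vars.end()) return &it->second;
    }
    return NULL;
  }
};

int main() {
  Scope global(NULL);
  Scope h(&global);  // plays the role of h()'s deserialized context above
  h.vars["result"].maybe_assigned = kMaybeAssigned;  // assigned inside f()
  Variable* var = h.Lookup("result");
  assert(var != NULL);
  assert(var->maybe_assigned == kMaybeAssigned);  // must survive the rebuild
  return 0;
}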
+} + + +TEST(IfArgumentsArrayAccessedThenParametersMaybeAssigned) { + i::Isolate* isolate = CcTest::i_isolate(); + i::Factory* factory = isolate->factory(); + i::HandleScope scope(isolate); + LocalContext env; + + + const char* src = + "function f(x) {" + " var a = arguments;" + " function g(i) {" + " ++a[0];" + " };" + " return g;" + " }" + "f(0);"; + + i::ScopedVector<char> program(Utf8LengthHelper(src) + 1); + i::SNPrintF(program, "%s", src); + i::Handle<i::String> source = factory->InternalizeUtf8String(program.start()); + source->PrintOn(stdout); + printf("\n"); + i::Zone zone(isolate); + v8::Local<v8::Value> v = CompileRun(src); + i::Handle<i::Object> o = v8::Utils::OpenHandle(*v); + i::Handle<i::JSFunction> f = i::Handle<i::JSFunction>::cast(o); + i::Context* context = f->context(); + i::AstValueFactory avf(&zone, isolate->heap()->HashSeed()); + avf.Internalize(isolate); + + i::Scope* global_scope = + new (&zone) i::Scope(NULL, i::GLOBAL_SCOPE, &avf, &zone); + global_scope->Initialize(); + i::Scope* s = i::Scope::DeserializeScopeChain(context, global_scope, &zone); + DCHECK(s != global_scope); + const i::AstRawString* name_x = avf.GetOneByteString("x"); + + // Get result from f's function context (that is g's outer context) + i::Variable* var_x = s->Lookup(name_x); + CHECK(var_x != NULL); + CHECK(var_x->maybe_assigned() == i::kMaybeAssigned); +} + + +TEST(ExportsMaybeAssigned) { + i::FLAG_use_strict = true; + i::FLAG_harmony_scoping = true; + i::FLAG_harmony_modules = true; + + i::Isolate* isolate = CcTest::i_isolate(); + i::Factory* factory = isolate->factory(); + i::HandleScope scope(isolate); + LocalContext env; + + const char* src = + "module A {" + " export var x = 1;" + " export function f() { return x };" + " export const y = 2;" + " module B {}" + " export module C {}" + "};" + "A.f"; + + i::ScopedVector<char> program(Utf8LengthHelper(src) + 1); + i::SNPrintF(program, "%s", src); + i::Handle<i::String> source = factory->InternalizeUtf8String(program.start()); + source->PrintOn(stdout); + printf("\n"); + i::Zone zone(isolate); + v8::Local<v8::Value> v = CompileRun(src); + i::Handle<i::Object> o = v8::Utils::OpenHandle(*v); + i::Handle<i::JSFunction> f = i::Handle<i::JSFunction>::cast(o); + i::Context* context = f->context(); + i::AstValueFactory avf(&zone, isolate->heap()->HashSeed()); + avf.Internalize(isolate); + + i::Scope* global_scope = + new (&zone) i::Scope(NULL, i::GLOBAL_SCOPE, &avf, &zone); + global_scope->Initialize(); + i::Scope* s = i::Scope::DeserializeScopeChain(context, global_scope, &zone); + DCHECK(s != global_scope); + const i::AstRawString* name_x = avf.GetOneByteString("x"); + const i::AstRawString* name_f = avf.GetOneByteString("f"); + const i::AstRawString* name_y = avf.GetOneByteString("y"); + const i::AstRawString* name_B = avf.GetOneByteString("B"); + const i::AstRawString* name_C = avf.GetOneByteString("C"); + + // Get result from h's function context (that is f's context) + i::Variable* var_x = s->Lookup(name_x); + CHECK(var_x != NULL); + CHECK(var_x->maybe_assigned() == i::kMaybeAssigned); + i::Variable* var_f = s->Lookup(name_f); + CHECK(var_f != NULL); + CHECK(var_f->maybe_assigned() == i::kMaybeAssigned); + i::Variable* var_y = s->Lookup(name_y); + CHECK(var_y != NULL); + CHECK(var_y->maybe_assigned() == i::kNotAssigned); + i::Variable* var_B = s->Lookup(name_B); + CHECK(var_B != NULL); + CHECK(var_B->maybe_assigned() == i::kNotAssigned); + i::Variable* var_C = s->Lookup(name_C); + CHECK(var_C != NULL); + CHECK(var_C->maybe_assigned() == 
i::kNotAssigned); +} + + +TEST(InnerAssignment) { + i::Isolate* isolate = CcTest::i_isolate(); + i::Factory* factory = isolate->factory(); + i::HandleScope scope(isolate); + LocalContext env; + + const char* prefix = "function f() {"; + const char* midfix = " function g() {"; + const char* suffix = "}}"; + struct { const char* source; bool assigned; bool strict; } outers[] = { + // Actual assignments. + { "var x; var x = 5;", true, false }, + { "var x; { var x = 5; }", true, false }, + { "'use strict'; let x; x = 6;", true, true }, + { "var x = 5; function x() {}", true, false }, + // Actual non-assignments. + { "var x;", false, false }, + { "var x = 5;", false, false }, + { "'use strict'; let x;", false, true }, + { "'use strict'; let x = 6;", false, true }, + { "'use strict'; var x = 0; { let x = 6; }", false, true }, + { "'use strict'; var x = 0; { let x; x = 6; }", false, true }, + { "'use strict'; let x = 0; { let x = 6; }", false, true }, + { "'use strict'; let x = 0; { let x; x = 6; }", false, true }, + { "var x; try {} catch (x) { x = 5; }", false, false }, + { "function x() {}", false, false }, + // Eval approximation. + { "var x; eval('');", true, false }, + { "eval(''); var x;", true, false }, + { "'use strict'; let x; eval('');", true, true }, + { "'use strict'; eval(''); let x;", true, true }, + // Non-assignments not recognized, because the analysis is approximative. + { "var x; var x;", true, false }, + { "var x = 5; var x;", true, false }, + { "var x; { var x; }", true, false }, + { "var x; function x() {}", true, false }, + { "function x() {}; var x;", true, false }, + { "var x; try {} catch (x) { var x = 5; }", true, false }, + }; + struct { const char* source; bool assigned; bool with; } inners[] = { + // Actual assignments. + { "x = 1;", true, false }, + { "x++;", true, false }, + { "++x;", true, false }, + { "x--;", true, false }, + { "--x;", true, false }, + { "{ x = 1; }", true, false }, + { "'use strict'; { let x; }; x = 0;", true, false }, + { "'use strict'; { const x = 1; }; x = 0;", true, false }, + { "'use strict'; { function x() {} }; x = 0;", true, false }, + { "with ({}) { x = 1; }", true, true }, + { "eval('');", true, false }, + { "'use strict'; { let y; eval('') }", true, false }, + { "function h() { x = 0; }", true, false }, + { "(function() { x = 0; })", true, false }, + { "(function() { x = 0; })", true, false }, + { "with ({}) (function() { x = 0; })", true, true }, + // Actual non-assignments. + { "", false, false }, + { "x;", false, false }, + { "var x;", false, false }, + { "var x = 8;", false, false }, + { "var x; x = 8;", false, false }, + { "'use strict'; let x;", false, false }, + { "'use strict'; let x = 8;", false, false }, + { "'use strict'; let x; x = 8;", false, false }, + { "'use strict'; const x = 8;", false, false }, + { "function x() {}", false, false }, + { "function x() { x = 0; }", false, false }, + { "function h(x) { x = 0; }", false, false }, + { "'use strict'; { let x; x = 0; }", false, false }, + { "{ var x; }; x = 0;", false, false }, + { "with ({}) {}", false, true }, + { "var x; { with ({}) { x = 1; } }", false, true }, + { "try {} catch(x) { x = 0; }", false, false }, + { "try {} catch(x) { with ({}) { x = 1; } }", false, true }, + // Eval approximation. + { "eval('');", true, false }, + { "function h() { eval(''); }", true, false }, + { "(function() { eval(''); })", true, false }, + // Shadowing not recognized because of eval approximation. 
+ { "var x; eval('');", true, false }, + { "'use strict'; let x; eval('');", true, false }, + { "try {} catch(x) { eval(''); }", true, false }, + { "function x() { eval(''); }", true, false }, + { "(function(x) { eval(''); })", true, false }, + }; + + // Used to trigger lazy compilation of function + int comment_len = 2048; + i::ScopedVector<char> comment(comment_len + 1); + i::SNPrintF(comment, "/*%0*d*/", comment_len - 4, 0); + int prefix_len = Utf8LengthHelper(prefix); + int midfix_len = Utf8LengthHelper(midfix); + int suffix_len = Utf8LengthHelper(suffix); + for (unsigned i = 0; i < ARRAY_SIZE(outers); ++i) { + const char* outer = outers[i].source; + int outer_len = Utf8LengthHelper(outer); + for (unsigned j = 0; j < ARRAY_SIZE(inners); ++j) { + for (unsigned outer_lazy = 0; outer_lazy < 2; ++outer_lazy) { + for (unsigned inner_lazy = 0; inner_lazy < 2; ++inner_lazy) { + if (outers[i].strict && inners[j].with) continue; + const char* inner = inners[j].source; + int inner_len = Utf8LengthHelper(inner); + + int outer_comment_len = outer_lazy ? comment_len : 0; + int inner_comment_len = inner_lazy ? comment_len : 0; + const char* outer_comment = outer_lazy ? comment.start() : ""; + const char* inner_comment = inner_lazy ? comment.start() : ""; + int len = prefix_len + outer_comment_len + outer_len + midfix_len + + inner_comment_len + inner_len + suffix_len; + i::ScopedVector<char> program(len + 1); + + i::SNPrintF(program, "%s%s%s%s%s%s%s", prefix, outer_comment, outer, + midfix, inner_comment, inner, suffix); + i::Handle<i::String> source = + factory->InternalizeUtf8String(program.start()); + source->PrintOn(stdout); + printf("\n"); + + i::Handle<i::Script> script = factory->NewScript(source); + i::CompilationInfoWithZone info(script); + i::Parser parser(&info); + parser.set_allow_harmony_scoping(true); + CHECK(parser.Parse()); + CHECK(i::Rewriter::Rewrite(&info)); + CHECK(i::Scope::Analyze(&info)); + CHECK(info.function() != NULL); + + i::Scope* scope = info.function()->scope(); + CHECK_EQ(scope->inner_scopes()->length(), 1); + i::Scope* inner_scope = scope->inner_scopes()->at(0); + const i::AstRawString* var_name = + info.ast_value_factory()->GetOneByteString("x"); + i::Variable* var = inner_scope->Lookup(var_name); + bool expected = outers[i].assigned || inners[j].assigned; + CHECK(var != NULL); + CHECK(var->is_used() || !expected); + CHECK((var->maybe_assigned() == i::kMaybeAssigned) == expected); + } + } + } + } +} + +namespace { + +int* global_use_counts = NULL; + +void MockUseCounterCallback(v8::Isolate* isolate, + v8::Isolate::UseCounterFeature feature) { + ++global_use_counts[feature]; +} + +} + + +TEST(UseAsmUseCount) { + i::Isolate* isolate = CcTest::i_isolate(); + i::HandleScope scope(isolate); + LocalContext env; + int use_counts[v8::Isolate::kUseCounterFeatureCount] = {}; + global_use_counts = use_counts; + CcTest::isolate()->SetUseCounterCallback(MockUseCounterCallback); + CompileRun("\"use asm\";\n" + "var foo = 1;\n" + "\"use asm\";\n" // Only the first one counts. + "function bar() { \"use asm\"; var baz = 1; }"); + CHECK_EQ(2, use_counts[v8::Isolate::kUseAsm]); +} + + +TEST(ErrorsArrowFunctions) { + // Tests that parser and preparser generate the same kind of errors + // on invalid arrow function syntax. + const char* context_data[][2] = { + {"", ";"}, + {"v = ", ";"}, + {"bar ? (", ") : baz;"}, + {"bar ? 
baz : (", ");"}, + {"bar[", "];"}, + {"bar, ", ";"}, + {"", ", bar;"}, + {NULL, NULL} + }; + + const char* statement_data[] = { + "=> 0", + "=>", + "() =>", + "=> {}", + ") => {}", + ", => {}", + "(,) => {}", + "return => {}", + "() => {'value': 42}", + + // Check that the early return introduced in ParsePrimaryExpression + // does not accept stray closing parentheses. + ")", + ") => 0", + "foo[()]", + "()", + + // Parameter lists with extra parens should be recognized as errors. + "(()) => 0", + "((x)) => 0", + "((x, y)) => 0", + "(x, (y)) => 0", + "((x, y, z)) => 0", + "(x, (y, z)) => 0", + "((x, y), z) => 0", + + // Parameter lists are always validated as strict, so those are errors. + "eval => {}", + "arguments => {}", + "yield => {}", + "interface => {}", + "(eval) => {}", + "(arguments) => {}", + "(yield) => {}", + "(interface) => {}", + "(eval, bar) => {}", + "(bar, eval) => {}", + "(bar, arguments) => {}", + "(bar, yield) => {}", + "(bar, interface) => {}", + // TODO(aperez): Detecting duplicates does not work in PreParser. + // "(bar, bar) => {}", + + // The parameter list is parsed as an expression, but only + // a comma-separated list of identifier is valid. + "32 => {}", + "(32) => {}", + "(a, 32) => {}", + "if => {}", + "(if) => {}", + "(a, if) => {}", + "a + b => {}", + "(a + b) => {}", + "(a + b, c) => {}", + "(a, b - c) => {}", + "\"a\" => {}", + "(\"a\") => {}", + "(\"a\", b) => {}", + "(a, \"b\") => {}", + "-a => {}", + "(-a) => {}", + "(-a, b) => {}", + "(a, -b) => {}", + "{} => {}", + "({}) => {}", + "(a, {}) => {}", + "({}, a) => {}", + "a++ => {}", + "(a++) => {}", + "(a++, b) => {}", + "(a, b++) => {}", + "[] => {}", + "([]) => {}", + "(a, []) => {}", + "([], a) => {}", + "(a = b) => {}", + "(a = b, c) => {}", + "(a, b = c) => {}", + "(foo ? bar : baz) => {}", + "(a, foo ? bar : baz) => {}", + "(foo ? bar : baz, a) => {}", + NULL + }; + + // The test is quite slow, so run it with a reduced set of flags. + static const ParserFlag flags[] = { + kAllowLazy, kAllowHarmonyScoping, kAllowGenerators + }; + static const ParserFlag always_flags[] = { kAllowArrowFunctions }; + RunParserSyncTest(context_data, statement_data, kError, flags, + ARRAY_SIZE(flags), always_flags, ARRAY_SIZE(always_flags)); +} + + +TEST(NoErrorsArrowFunctions) { + // Tests that parser and preparser accept valid arrow functions syntax. + const char* context_data[][2] = { + {"", ";"}, + {"bar ? (", ") : baz;"}, + {"bar ? baz : (", ");"}, + {"bar, ", ";"}, + {"", ", bar;"}, + {NULL, NULL} + }; + + const char* statement_data[] = { + "() => {}", + "() => { return 42 }", + "x => { return x; }", + "(x) => { return x; }", + "(x, y) => { return x + y; }", + "(x, y, z) => { return x + y + z; }", + "(x, y) => { x.a = y; }", + "() => 42", + "x => x", + "x => x * x", + "(x) => x", + "(x) => x * x", + "(x, y) => x + y", + "(x, y, z) => x, y, z", + "(x, y) => x.a = y", + "() => ({'value': 42})", + "x => y => x + y", + "(x, y) => (u, v) => x*u + y*v", + "(x, y) => z => z * (x + y)", + "x => (y, z) => z * (x + y)", + + // Those are comma-separated expressions, with arrow functions as items. + // They stress the code for validating arrow function parameter lists. + "a, b => 0", + "a, b, (c, d) => 0", + "(a, b, (c, d) => 0)", + "(a, b) => 0, (c, d) => 1", + "(a, b => {}, a => a + 1)", + "((a, b) => {}, (a => a + 1))", + "(a, (a, (b, c) => 0))", + + // Arrow has more precedence, this is the same as: foo ? bar : (baz = {}) + "foo ? 
bar : baz => {}", + NULL + }; + + static const ParserFlag always_flags[] = {kAllowArrowFunctions}; + RunParserSyncTest(context_data, statement_data, kSuccess, NULL, 0, + always_flags, ARRAY_SIZE(always_flags)); +} diff --git a/deps/v8/test/cctest/test-platform-linux.cc b/deps/v8/test/cctest/test-platform-linux.cc index f289e948284..613638e78a3 100644 --- a/deps/v8/test/cctest/test-platform-linux.cc +++ b/deps/v8/test/cctest/test-platform-linux.cc @@ -31,16 +31,16 @@ #include <stdlib.h> #include <unistd.h> // for usleep() -#include "v8.h" +#include "src/v8.h" -#include "platform.h" -#include "cctest.h" +#include "src/base/platform/platform.h" +#include "test/cctest/cctest.h" using namespace ::v8::internal; TEST(VirtualMemory) { - VirtualMemory* vm = new VirtualMemory(1 * MB); + v8::base::VirtualMemory* vm = new v8::base::VirtualMemory(1 * MB); CHECK(vm->IsReserved()); void* block_addr = vm->address(); size_t block_size = 4 * KB; @@ -51,8 +51,3 @@ TEST(VirtualMemory) { CHECK(vm->Uncommit(block_addr, block_size)); delete vm; } - - -TEST(GetCurrentProcessId) { - CHECK_EQ(static_cast<int>(getpid()), OS::GetCurrentProcessId()); -} diff --git a/deps/v8/test/cctest/test-platform-tls.cc b/deps/v8/test/cctest/test-platform-tls.cc deleted file mode 100644 index 31501d9ef7c..00000000000 --- a/deps/v8/test/cctest/test-platform-tls.cc +++ /dev/null @@ -1,93 +0,0 @@ -// Copyright 2011 the V8 project authors. All rights reserved. -// Redistribution and use in source and binary forms, with or without -// modification, are permitted provided that the following conditions are -// met: -// -// * Redistributions of source code must retain the above copyright -// notice, this list of conditions and the following disclaimer. -// * Redistributions in binary form must reproduce the above -// copyright notice, this list of conditions and the following -// disclaimer in the documentation and/or other materials provided -// with the distribution. -// * Neither the name of Google Inc. nor the names of its -// contributors may be used to endorse or promote products derived -// from this software without specific prior written permission. -// -// THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS -// "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT -// LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR -// A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT -// OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, -// SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT -// LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, -// DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY -// THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT -// (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE -// OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. -// -// Tests of fast TLS support. 
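For reference, the per-thread isolation that the removed FastTLS test exercised through hand-rolled Thread::CreateThreadLocalKey slots is what C++11 thread_local provides directly; a minimal equivalent check:

#include <cassert>
#include <thread>

// Each thread sees its own copy; a fresh thread starts from zero.
thread_local int tls_slot = 0;

int main() {
  tls_slot = 1;
  std::thread t([] {
    assert(tls_slot == 0);  // new thread gets its own zero-initialized slot
    tls_slot = 2;           // does not affect the main thread's copy
  });
  t.join();
  assert(tls_slot == 1);
  return 0;
}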
- -#include "v8.h" - -#include "cctest.h" -#include "checks.h" -#include "platform.h" - -using v8::internal::Thread; - -static const int kValueCount = 128; - -static Thread::LocalStorageKey keys[kValueCount]; - -static void* GetValue(int num) { - return reinterpret_cast<void*>(static_cast<intptr_t>(num + 1)); -} - - -static void DoTest() { - for (int i = 0; i < kValueCount; i++) { - CHECK(!Thread::HasThreadLocal(keys[i])); - } - for (int i = 0; i < kValueCount; i++) { - Thread::SetThreadLocal(keys[i], GetValue(i)); - } - for (int i = 0; i < kValueCount; i++) { - CHECK(Thread::HasThreadLocal(keys[i])); - } - for (int i = 0; i < kValueCount; i++) { - CHECK_EQ(GetValue(i), Thread::GetThreadLocal(keys[i])); - CHECK_EQ(GetValue(i), Thread::GetExistingThreadLocal(keys[i])); - } - for (int i = 0; i < kValueCount; i++) { - Thread::SetThreadLocal(keys[i], GetValue(kValueCount - i - 1)); - } - for (int i = 0; i < kValueCount; i++) { - CHECK(Thread::HasThreadLocal(keys[i])); - } - for (int i = 0; i < kValueCount; i++) { - CHECK_EQ(GetValue(kValueCount - i - 1), - Thread::GetThreadLocal(keys[i])); - CHECK_EQ(GetValue(kValueCount - i - 1), - Thread::GetExistingThreadLocal(keys[i])); - } -} - -class TestThread : public Thread { - public: - TestThread() : Thread("TestThread") {} - - virtual void Run() { - DoTest(); - } -}; - - -TEST(FastTLS) { - for (int i = 0; i < kValueCount; i++) { - keys[i] = Thread::CreateThreadLocalKey(); - } - DoTest(); - TestThread thread; - thread.Start(); - thread.Join(); -} diff --git a/deps/v8/test/cctest/test-platform-win32.cc b/deps/v8/test/cctest/test-platform-win32.cc index d7fdab11edc..cecde741201 100644 --- a/deps/v8/test/cctest/test-platform-win32.cc +++ b/deps/v8/test/cctest/test-platform-win32.cc @@ -29,17 +29,17 @@ #include <stdlib.h> -#include "v8.h" +#include "src/v8.h" -#include "platform.h" -#include "cctest.h" -#include "win32-headers.h" +#include "src/base/platform/platform.h" +#include "src/base/win32-headers.h" +#include "test/cctest/cctest.h" using namespace ::v8::internal; TEST(VirtualMemory) { - VirtualMemory* vm = new VirtualMemory(1 * MB); + v8::base::VirtualMemory* vm = new v8::base::VirtualMemory(1 * MB); CHECK(vm->IsReserved()); void* block_addr = vm->address(); size_t block_size = 4 * KB; @@ -50,9 +50,3 @@ TEST(VirtualMemory) { CHECK(vm->Uncommit(block_addr, block_size)); delete vm; } - - -TEST(GetCurrentProcessId) { - CHECK_EQ(static_cast<int>(::GetCurrentProcessId()), - OS::GetCurrentProcessId()); -} diff --git a/deps/v8/test/cctest/test-platform.cc b/deps/v8/test/cctest/test-platform.cc index 6b28b189530..3beaccea8e0 100644 --- a/deps/v8/test/cctest/test-platform.cc +++ b/deps/v8/test/cctest/test-platform.cc @@ -27,8 +27,8 @@ #include <stdlib.h> -#include "cctest.h" -#include "platform.h" +#include "src/base/platform/platform.h" +#include "test/cctest/cctest.h" using namespace ::v8::internal; @@ -94,7 +94,7 @@ TEST(StackAlignment) { v8::Local<v8::Function>::Cast(global_object->Get(v8_str("foo"))); v8::Local<v8::Value> result = foo->Call(global_object, 0, NULL); - CHECK_EQ(0, result->Int32Value() % OS::ActivationFrameAlignment()); + CHECK_EQ(0, result->Int32Value() % v8::base::OS::ActivationFrameAlignment()); } #undef GET_STACK_POINTERS diff --git a/deps/v8/test/cctest/test-profile-generator.cc b/deps/v8/test/cctest/test-profile-generator.cc index c3198b1512f..7578b35fbdc 100644 --- a/deps/v8/test/cctest/test-profile-generator.cc +++ b/deps/v8/test/cctest/test-profile-generator.cc @@ -27,12 +27,13 @@ // // Tests of profiles generator and 
utilities. -#include "v8.h" -#include "profile-generator-inl.h" -#include "profiler-extension.h" -#include "cctest.h" -#include "cpu-profiler.h" -#include "../include/v8-profiler.h" +#include "src/v8.h" + +#include "include/v8-profiler.h" +#include "src/cpu-profiler.h" +#include "src/profile-generator-inl.h" +#include "test/cctest/cctest.h" +#include "test/cctest/profiler-extension.h" using i::CodeEntry; using i::CodeMap; @@ -571,14 +572,14 @@ TEST(RecordStackTraceAtStartProfiling) { const_cast<ProfileNode*>(current)->Print(0); // The tree should look like this: // (root) - // (anonymous function) + // "" // a // b // c // There can also be: // startProfiling // if the sampler managed to get a tick. - current = PickChild(current, "(anonymous function)"); + current = PickChild(current, ""); CHECK_NE(NULL, const_cast<ProfileNode*>(current)); current = PickChild(current, "a"); CHECK_NE(NULL, const_cast<ProfileNode*>(current)); @@ -601,7 +602,7 @@ TEST(Issue51919) { CpuProfilesCollection::kMaxSimultaneousProfiles> titles; for (int i = 0; i < CpuProfilesCollection::kMaxSimultaneousProfiles; ++i) { i::Vector<char> title = i::Vector<char>::New(16); - i::OS::SNPrintF(title, "%d", i); + i::SNPrintF(title, "%d", i); CHECK(collection.StartProfiling(title.start(), false)); titles[i] = title.start(); } @@ -650,22 +651,22 @@ TEST(ProfileNodeScriptId) { const_cast<v8::CpuProfileNode*>(current))->Print(0); // The tree should look like this: // (root) - // (anonymous function) + // "" // b // a // There can also be: // startProfiling // if the sampler managed to get a tick. - current = PickChild(current, i::ProfileGenerator::kAnonymousFunctionName); + current = PickChild(current, ""); CHECK_NE(NULL, const_cast<v8::CpuProfileNode*>(current)); current = PickChild(current, "b"); CHECK_NE(NULL, const_cast<v8::CpuProfileNode*>(current)); - CHECK_EQ(script_b->GetId(), current->GetScriptId()); + CHECK_EQ(script_b->GetUnboundScript()->GetId(), current->GetScriptId()); current = PickChild(current, "a"); CHECK_NE(NULL, const_cast<v8::CpuProfileNode*>(current)); - CHECK_EQ(script_a->GetId(), current->GetScriptId()); + CHECK_EQ(script_a->GetUnboundScript()->GetId(), current->GetScriptId()); } @@ -759,10 +760,10 @@ TEST(BailoutReason) { const_cast<v8::CpuProfileNode*>(current))->Print(0); // The tree should look like this: // (root) - // (anonymous function) + // "" // kTryFinally // kTryCatch - current = PickChild(current, i::ProfileGenerator::kAnonymousFunctionName); + current = PickChild(current, ""); CHECK_NE(NULL, const_cast<v8::CpuProfileNode*>(current)); current = PickChild(current, "TryFinally"); diff --git a/deps/v8/test/cctest/test-random-number-generator.cc b/deps/v8/test/cctest/test-random-number-generator.cc index 93f3257003e..a53205c9c8d 100644 --- a/deps/v8/test/cctest/test-random-number-generator.cc +++ b/deps/v8/test/cctest/test-random-number-generator.cc @@ -25,10 +25,11 @@ // (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE // OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. 
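The rewritten RandomSeedFlagIsUsed below asserts the usual seeded-PRNG contract: two generators constructed from the same seed emit identical streams. The same property in portable form (the seeds here are arbitrary examples, not V8's kRandomSeeds values):

#include <cassert>
#include <random>

int main() {
  // Arbitrary example seeds; the real test iterates V8's kRandomSeeds.
  const unsigned seeds[] = { 1813608497u, 1595445249u, 176979105u };
  for (unsigned s = 0; s < sizeof(seeds) / sizeof(seeds[0]); ++s) {
    std::mt19937 rng1(seeds[s]);
    std::mt19937 rng2(seeds[s]);
    for (int k = 0; k < 1024; ++k) {
      assert(rng1() == rng2());  // identical seed => identical stream
    }
  }
  return 0;
}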
-#include "v8.h" +#include "src/v8.h" +#include "test/cctest/cctest.h" -#include "cctest.h" -#include "utils/random-number-generator.h" +#include "src/base/utils/random-number-generator.h" +#include "src/isolate-inl.h" using namespace v8::internal; @@ -39,46 +40,13 @@ static const int kRandomSeeds[] = { }; -TEST(NextIntWithMaxValue) { - for (unsigned n = 0; n < ARRAY_SIZE(kRandomSeeds); ++n) { - RandomNumberGenerator rng(kRandomSeeds[n]); - for (int max = 1; max <= kMaxRuns; ++max) { - int n = rng.NextInt(max); - CHECK_LE(0, n); - CHECK_LT(n, max); - } - } -} - - -TEST(NextBoolReturnsBooleanValue) { - for (unsigned n = 0; n < ARRAY_SIZE(kRandomSeeds); ++n) { - RandomNumberGenerator rng(kRandomSeeds[n]); - for (int k = 0; k < kMaxRuns; ++k) { - bool b = rng.NextBool(); - CHECK(b == false || b == true); - } - } -} - - -TEST(NextDoubleRange) { - for (unsigned n = 0; n < ARRAY_SIZE(kRandomSeeds); ++n) { - RandomNumberGenerator rng(kRandomSeeds[n]); - for (int k = 0; k < kMaxRuns; ++k) { - double d = rng.NextDouble(); - CHECK_LE(0.0, d); - CHECK_LT(d, 1.0); - } - } -} - - TEST(RandomSeedFlagIsUsed) { for (unsigned n = 0; n < ARRAY_SIZE(kRandomSeeds); ++n) { FLAG_random_seed = kRandomSeeds[n]; - RandomNumberGenerator rng1; - RandomNumberGenerator rng2(kRandomSeeds[n]); + v8::Isolate* i = v8::Isolate::New(); + v8::base::RandomNumberGenerator& rng1 = + *reinterpret_cast<Isolate*>(i)->random_number_generator(); + v8::base::RandomNumberGenerator rng2(kRandomSeeds[n]); for (int k = 1; k <= kMaxRuns; ++k) { int64_t i1, i2; rng1.NextBytes(&i1, sizeof(i1)); @@ -88,5 +56,6 @@ TEST(RandomSeedFlagIsUsed) { CHECK_EQ(rng2.NextInt(k), rng1.NextInt(k)); CHECK_EQ(rng2.NextDouble(), rng1.NextDouble()); } + i->Dispose(); } } diff --git a/deps/v8/test/cctest/test-regexp.cc b/deps/v8/test/cctest/test-regexp.cc index 10b227c8e97..5c1764eacfa 100644 --- a/deps/v8/test/cctest/test-regexp.cc +++ b/deps/v8/test/cctest/test-regexp.cc @@ -28,48 +28,58 @@ #include <stdlib.h> -#include "v8.h" - -#include "ast.h" -#include "char-predicates-inl.h" -#include "cctest.h" -#include "jsregexp.h" -#include "parser.h" -#include "regexp-macro-assembler.h" -#include "regexp-macro-assembler-irregexp.h" -#include "string-stream.h" -#include "zone-inl.h" +#include "src/v8.h" + +#include "src/ast.h" +#include "src/char-predicates-inl.h" +#include "src/jsregexp.h" +#include "src/ostreams.h" +#include "src/parser.h" +#include "src/regexp-macro-assembler.h" +#include "src/regexp-macro-assembler-irregexp.h" +#include "src/string-stream.h" +#include "src/zone-inl.h" #ifdef V8_INTERPRETED_REGEXP -#include "interpreter-irregexp.h" +#include "src/interpreter-irregexp.h" #else // V8_INTERPRETED_REGEXP -#include "macro-assembler.h" -#include "code.h" +#include "src/macro-assembler.h" #if V8_TARGET_ARCH_ARM -#include "arm/assembler-arm.h" -#include "arm/macro-assembler-arm.h" -#include "arm/regexp-macro-assembler-arm.h" +#include "src/arm/assembler-arm.h" // NOLINT +#include "src/arm/macro-assembler-arm.h" +#include "src/arm/regexp-macro-assembler-arm.h" #endif #if V8_TARGET_ARCH_ARM64 -#include "arm64/assembler-arm64.h" -#include "arm64/macro-assembler-arm64.h" -#include "arm64/regexp-macro-assembler-arm64.h" +#include "src/arm64/assembler-arm64.h" +#include "src/arm64/macro-assembler-arm64.h" +#include "src/arm64/regexp-macro-assembler-arm64.h" #endif #if V8_TARGET_ARCH_MIPS -#include "mips/assembler-mips.h" -#include "mips/macro-assembler-mips.h" -#include "mips/regexp-macro-assembler-mips.h" +#include "src/mips/assembler-mips.h" +#include 
"src/mips/macro-assembler-mips.h" +#include "src/mips/regexp-macro-assembler-mips.h" +#endif +#if V8_TARGET_ARCH_MIPS64 +#include "src/mips64/assembler-mips64.h" +#include "src/mips64/macro-assembler-mips64.h" +#include "src/mips64/regexp-macro-assembler-mips64.h" #endif #if V8_TARGET_ARCH_X64 -#include "x64/assembler-x64.h" -#include "x64/macro-assembler-x64.h" -#include "x64/regexp-macro-assembler-x64.h" +#include "src/x64/assembler-x64.h" +#include "src/x64/macro-assembler-x64.h" +#include "src/x64/regexp-macro-assembler-x64.h" #endif #if V8_TARGET_ARCH_IA32 -#include "ia32/assembler-ia32.h" -#include "ia32/macro-assembler-ia32.h" -#include "ia32/regexp-macro-assembler-ia32.h" +#include "src/ia32/assembler-ia32.h" +#include "src/ia32/macro-assembler-ia32.h" +#include "src/ia32/regexp-macro-assembler-ia32.h" +#endif +#if V8_TARGET_ARCH_X87 +#include "src/x87/assembler-x87.h" +#include "src/x87/macro-assembler-x87.h" +#include "src/x87/regexp-macro-assembler-x87.h" #endif #endif // V8_INTERPRETED_REGEXP +#include "test/cctest/cctest.h" using namespace v8::internal; @@ -85,7 +95,7 @@ static bool CheckParse(const char* input) { } -static SmartArrayPointer<const char> Parse(const char* input) { +static void CheckParseEq(const char* input, const char* expected) { V8::Initialize(NULL); v8::HandleScope scope(CcTest::isolate()); Zone zone(CcTest::i_isolate()); @@ -95,8 +105,9 @@ static SmartArrayPointer<const char> Parse(const char* input) { &reader, false, &result, &zone)); CHECK(result.tree != NULL); CHECK(result.error.is_null()); - SmartArrayPointer<const char> output = result.tree->ToString(&zone); - return output; + OStringStream os; + result.tree->Print(os, &zone); + CHECK_EQ(expected, os.c_str()); } @@ -137,7 +148,6 @@ static MinMaxPair CheckMinMaxMatch(const char* input) { #define CHECK_PARSE_ERROR(input) CHECK(!CheckParse(input)) -#define CHECK_PARSE_EQ(input, expected) CHECK_EQ(expected, Parse(input).get()) #define CHECK_SIMPLE(input, simple) CHECK_EQ(simple, CheckSimple(input)); #define CHECK_MIN_MAX(input, min, max) \ { MinMaxPair min_max = CheckMinMaxMatch(input); \ @@ -150,126 +160,129 @@ TEST(Parser) { CHECK_PARSE_ERROR("?"); - CHECK_PARSE_EQ("abc", "'abc'"); - CHECK_PARSE_EQ("", "%"); - CHECK_PARSE_EQ("abc|def", "(| 'abc' 'def')"); - CHECK_PARSE_EQ("abc|def|ghi", "(| 'abc' 'def' 'ghi')"); - CHECK_PARSE_EQ("^xxx$", "(: @^i 'xxx' @$i)"); - CHECK_PARSE_EQ("ab\\b\\d\\bcd", "(: 'ab' @b [0-9] @b 'cd')"); - CHECK_PARSE_EQ("\\w|\\d", "(| [0-9 A-Z _ a-z] [0-9])"); - CHECK_PARSE_EQ("a*", "(# 0 - g 'a')"); - CHECK_PARSE_EQ("a*?", "(# 0 - n 'a')"); - CHECK_PARSE_EQ("abc+", "(: 'ab' (# 1 - g 'c'))"); - CHECK_PARSE_EQ("abc+?", "(: 'ab' (# 1 - n 'c'))"); - CHECK_PARSE_EQ("xyz?", "(: 'xy' (# 0 1 g 'z'))"); - CHECK_PARSE_EQ("xyz??", "(: 'xy' (# 0 1 n 'z'))"); - CHECK_PARSE_EQ("xyz{0,1}", "(: 'xy' (# 0 1 g 'z'))"); - CHECK_PARSE_EQ("xyz{0,1}?", "(: 'xy' (# 0 1 n 'z'))"); - CHECK_PARSE_EQ("xyz{93}", "(: 'xy' (# 93 93 g 'z'))"); - CHECK_PARSE_EQ("xyz{93}?", "(: 'xy' (# 93 93 n 'z'))"); - CHECK_PARSE_EQ("xyz{1,32}", "(: 'xy' (# 1 32 g 'z'))"); - CHECK_PARSE_EQ("xyz{1,32}?", "(: 'xy' (# 1 32 n 'z'))"); - CHECK_PARSE_EQ("xyz{1,}", "(: 'xy' (# 1 - g 'z'))"); - CHECK_PARSE_EQ("xyz{1,}?", "(: 'xy' (# 1 - n 'z'))"); - CHECK_PARSE_EQ("a\\fb\\nc\\rd\\te\\vf", "'a\\x0cb\\x0ac\\x0dd\\x09e\\x0bf'"); - CHECK_PARSE_EQ("a\\nb\\bc", "(: 'a\\x0ab' @b 'c')"); - CHECK_PARSE_EQ("(?:foo)", "'foo'"); - CHECK_PARSE_EQ("(?: foo )", "' foo '"); - CHECK_PARSE_EQ("(foo|bar|baz)", "(^ (| 'foo' 'bar' 'baz'))"); - 
CHECK_PARSE_EQ("foo|(bar|baz)|quux", "(| 'foo' (^ (| 'bar' 'baz')) 'quux')"); - CHECK_PARSE_EQ("foo(?=bar)baz", "(: 'foo' (-> + 'bar') 'baz')"); - CHECK_PARSE_EQ("foo(?!bar)baz", "(: 'foo' (-> - 'bar') 'baz')"); - CHECK_PARSE_EQ("()", "(^ %)"); - CHECK_PARSE_EQ("(?=)", "(-> + %)"); - CHECK_PARSE_EQ("[]", "^[\\x00-\\uffff]"); // Doesn't compile on windows - CHECK_PARSE_EQ("[^]", "[\\x00-\\uffff]"); // \uffff isn't in codepage 1252 - CHECK_PARSE_EQ("[x]", "[x]"); - CHECK_PARSE_EQ("[xyz]", "[x y z]"); - CHECK_PARSE_EQ("[a-zA-Z0-9]", "[a-z A-Z 0-9]"); - CHECK_PARSE_EQ("[-123]", "[- 1 2 3]"); - CHECK_PARSE_EQ("[^123]", "^[1 2 3]"); - CHECK_PARSE_EQ("]", "']'"); - CHECK_PARSE_EQ("}", "'}'"); - CHECK_PARSE_EQ("[a-b-c]", "[a-b - c]"); - CHECK_PARSE_EQ("[\\d]", "[0-9]"); - CHECK_PARSE_EQ("[x\\dz]", "[x 0-9 z]"); - CHECK_PARSE_EQ("[\\d-z]", "[0-9 - z]"); - CHECK_PARSE_EQ("[\\d-\\d]", "[0-9 - 0-9]"); - CHECK_PARSE_EQ("[z-\\d]", "[z - 0-9]"); + CheckParseEq("abc", "'abc'"); + CheckParseEq("", "%"); + CheckParseEq("abc|def", "(| 'abc' 'def')"); + CheckParseEq("abc|def|ghi", "(| 'abc' 'def' 'ghi')"); + CheckParseEq("^xxx$", "(: @^i 'xxx' @$i)"); + CheckParseEq("ab\\b\\d\\bcd", "(: 'ab' @b [0-9] @b 'cd')"); + CheckParseEq("\\w|\\d", "(| [0-9 A-Z _ a-z] [0-9])"); + CheckParseEq("a*", "(# 0 - g 'a')"); + CheckParseEq("a*?", "(# 0 - n 'a')"); + CheckParseEq("abc+", "(: 'ab' (# 1 - g 'c'))"); + CheckParseEq("abc+?", "(: 'ab' (# 1 - n 'c'))"); + CheckParseEq("xyz?", "(: 'xy' (# 0 1 g 'z'))"); + CheckParseEq("xyz??", "(: 'xy' (# 0 1 n 'z'))"); + CheckParseEq("xyz{0,1}", "(: 'xy' (# 0 1 g 'z'))"); + CheckParseEq("xyz{0,1}?", "(: 'xy' (# 0 1 n 'z'))"); + CheckParseEq("xyz{93}", "(: 'xy' (# 93 93 g 'z'))"); + CheckParseEq("xyz{93}?", "(: 'xy' (# 93 93 n 'z'))"); + CheckParseEq("xyz{1,32}", "(: 'xy' (# 1 32 g 'z'))"); + CheckParseEq("xyz{1,32}?", "(: 'xy' (# 1 32 n 'z'))"); + CheckParseEq("xyz{1,}", "(: 'xy' (# 1 - g 'z'))"); + CheckParseEq("xyz{1,}?", "(: 'xy' (# 1 - n 'z'))"); + CheckParseEq("a\\fb\\nc\\rd\\te\\vf", "'a\\x0cb\\x0ac\\x0dd\\x09e\\x0bf'"); + CheckParseEq("a\\nb\\bc", "(: 'a\\x0ab' @b 'c')"); + CheckParseEq("(?:foo)", "'foo'"); + CheckParseEq("(?: foo )", "' foo '"); + CheckParseEq("(foo|bar|baz)", "(^ (| 'foo' 'bar' 'baz'))"); + CheckParseEq("foo|(bar|baz)|quux", "(| 'foo' (^ (| 'bar' 'baz')) 'quux')"); + CheckParseEq("foo(?=bar)baz", "(: 'foo' (-> + 'bar') 'baz')"); + CheckParseEq("foo(?!bar)baz", "(: 'foo' (-> - 'bar') 'baz')"); + CheckParseEq("()", "(^ %)"); + CheckParseEq("(?=)", "(-> + %)"); + CheckParseEq("[]", "^[\\x00-\\uffff]"); // Doesn't compile on windows + CheckParseEq("[^]", "[\\x00-\\uffff]"); // \uffff isn't in codepage 1252 + CheckParseEq("[x]", "[x]"); + CheckParseEq("[xyz]", "[x y z]"); + CheckParseEq("[a-zA-Z0-9]", "[a-z A-Z 0-9]"); + CheckParseEq("[-123]", "[- 1 2 3]"); + CheckParseEq("[^123]", "^[1 2 3]"); + CheckParseEq("]", "']'"); + CheckParseEq("}", "'}'"); + CheckParseEq("[a-b-c]", "[a-b - c]"); + CheckParseEq("[\\d]", "[0-9]"); + CheckParseEq("[x\\dz]", "[x 0-9 z]"); + CheckParseEq("[\\d-z]", "[0-9 - z]"); + CheckParseEq("[\\d-\\d]", "[0-9 - 0-9]"); + CheckParseEq("[z-\\d]", "[z - 0-9]"); // Control character outside character class. 
- CHECK_PARSE_EQ("\\cj\\cJ\\ci\\cI\\ck\\cK", - "'\\x0a\\x0a\\x09\\x09\\x0b\\x0b'"); - CHECK_PARSE_EQ("\\c!", "'\\c!'"); - CHECK_PARSE_EQ("\\c_", "'\\c_'"); - CHECK_PARSE_EQ("\\c~", "'\\c~'"); - CHECK_PARSE_EQ("\\c1", "'\\c1'"); + CheckParseEq("\\cj\\cJ\\ci\\cI\\ck\\cK", "'\\x0a\\x0a\\x09\\x09\\x0b\\x0b'"); + CheckParseEq("\\c!", "'\\c!'"); + CheckParseEq("\\c_", "'\\c_'"); + CheckParseEq("\\c~", "'\\c~'"); + CheckParseEq("\\c1", "'\\c1'"); // Control character inside character class. - CHECK_PARSE_EQ("[\\c!]", "[\\ c !]"); - CHECK_PARSE_EQ("[\\c_]", "[\\x1f]"); - CHECK_PARSE_EQ("[\\c~]", "[\\ c ~]"); - CHECK_PARSE_EQ("[\\ca]", "[\\x01]"); - CHECK_PARSE_EQ("[\\cz]", "[\\x1a]"); - CHECK_PARSE_EQ("[\\cA]", "[\\x01]"); - CHECK_PARSE_EQ("[\\cZ]", "[\\x1a]"); - CHECK_PARSE_EQ("[\\c1]", "[\\x11]"); - - CHECK_PARSE_EQ("[a\\]c]", "[a ] c]"); - CHECK_PARSE_EQ("\\[\\]\\{\\}\\(\\)\\%\\^\\#\\ ", "'[]{}()%^# '"); - CHECK_PARSE_EQ("[\\[\\]\\{\\}\\(\\)\\%\\^\\#\\ ]", "[[ ] { } ( ) % ^ # ]"); - CHECK_PARSE_EQ("\\0", "'\\x00'"); - CHECK_PARSE_EQ("\\8", "'8'"); - CHECK_PARSE_EQ("\\9", "'9'"); - CHECK_PARSE_EQ("\\11", "'\\x09'"); - CHECK_PARSE_EQ("\\11a", "'\\x09a'"); - CHECK_PARSE_EQ("\\011", "'\\x09'"); - CHECK_PARSE_EQ("\\00011", "'\\x0011'"); - CHECK_PARSE_EQ("\\118", "'\\x098'"); - CHECK_PARSE_EQ("\\111", "'I'"); - CHECK_PARSE_EQ("\\1111", "'I1'"); - CHECK_PARSE_EQ("(x)(x)(x)\\1", "(: (^ 'x') (^ 'x') (^ 'x') (<- 1))"); - CHECK_PARSE_EQ("(x)(x)(x)\\2", "(: (^ 'x') (^ 'x') (^ 'x') (<- 2))"); - CHECK_PARSE_EQ("(x)(x)(x)\\3", "(: (^ 'x') (^ 'x') (^ 'x') (<- 3))"); - CHECK_PARSE_EQ("(x)(x)(x)\\4", "(: (^ 'x') (^ 'x') (^ 'x') '\\x04')"); - CHECK_PARSE_EQ("(x)(x)(x)\\1*", "(: (^ 'x') (^ 'x') (^ 'x')" - " (# 0 - g (<- 1)))"); - CHECK_PARSE_EQ("(x)(x)(x)\\2*", "(: (^ 'x') (^ 'x') (^ 'x')" - " (# 0 - g (<- 2)))"); - CHECK_PARSE_EQ("(x)(x)(x)\\3*", "(: (^ 'x') (^ 'x') (^ 'x')" - " (# 0 - g (<- 3)))"); - CHECK_PARSE_EQ("(x)(x)(x)\\4*", "(: (^ 'x') (^ 'x') (^ 'x')" - " (# 0 - g '\\x04'))"); - CHECK_PARSE_EQ("(x)(x)(x)(x)(x)(x)(x)(x)(x)(x)\\10", - "(: (^ 'x') (^ 'x') (^ 'x') (^ 'x') (^ 'x') (^ 'x')" - " (^ 'x') (^ 'x') (^ 'x') (^ 'x') (<- 10))"); - CHECK_PARSE_EQ("(x)(x)(x)(x)(x)(x)(x)(x)(x)(x)\\11", - "(: (^ 'x') (^ 'x') (^ 'x') (^ 'x') (^ 'x') (^ 'x')" - " (^ 'x') (^ 'x') (^ 'x') (^ 'x') '\\x09')"); - CHECK_PARSE_EQ("(a)\\1", "(: (^ 'a') (<- 1))"); - CHECK_PARSE_EQ("(a\\1)", "(^ 'a')"); - CHECK_PARSE_EQ("(\\1a)", "(^ 'a')"); - CHECK_PARSE_EQ("(?=a)?a", "'a'"); - CHECK_PARSE_EQ("(?=a){0,10}a", "'a'"); - CHECK_PARSE_EQ("(?=a){1,10}a", "(: (-> + 'a') 'a')"); - CHECK_PARSE_EQ("(?=a){9,10}a", "(: (-> + 'a') 'a')"); - CHECK_PARSE_EQ("(?!a)?a", "'a'"); - CHECK_PARSE_EQ("\\1(a)", "(^ 'a')"); - CHECK_PARSE_EQ("(?!(a))\\1", "(: (-> - (^ 'a')) (<- 1))"); - CHECK_PARSE_EQ("(?!\\1(a\\1)\\1)\\1", "(: (-> - (: (^ 'a') (<- 1))) (<- 1))"); - CHECK_PARSE_EQ("[\\0]", "[\\x00]"); - CHECK_PARSE_EQ("[\\11]", "[\\x09]"); - CHECK_PARSE_EQ("[\\11a]", "[\\x09 a]"); - CHECK_PARSE_EQ("[\\011]", "[\\x09]"); - CHECK_PARSE_EQ("[\\00011]", "[\\x00 1 1]"); - CHECK_PARSE_EQ("[\\118]", "[\\x09 8]"); - CHECK_PARSE_EQ("[\\111]", "[I]"); - CHECK_PARSE_EQ("[\\1111]", "[I 1]"); - CHECK_PARSE_EQ("\\x34", "'\x34'"); - CHECK_PARSE_EQ("\\x60", "'\x60'"); - CHECK_PARSE_EQ("\\x3z", "'x3z'"); - CHECK_PARSE_EQ("\\c", "'\\c'"); - CHECK_PARSE_EQ("\\u0034", "'\x34'"); - CHECK_PARSE_EQ("\\u003z", "'u003z'"); - CHECK_PARSE_EQ("foo[z]*", "(: 'foo' (# 0 - g [z]))"); + CheckParseEq("[\\c!]", "[\\ c !]"); + CheckParseEq("[\\c_]", "[\\x1f]"); + CheckParseEq("[\\c~]", "[\\ 
c ~]"); + CheckParseEq("[\\ca]", "[\\x01]"); + CheckParseEq("[\\cz]", "[\\x1a]"); + CheckParseEq("[\\cA]", "[\\x01]"); + CheckParseEq("[\\cZ]", "[\\x1a]"); + CheckParseEq("[\\c1]", "[\\x11]"); + + CheckParseEq("[a\\]c]", "[a ] c]"); + CheckParseEq("\\[\\]\\{\\}\\(\\)\\%\\^\\#\\ ", "'[]{}()%^# '"); + CheckParseEq("[\\[\\]\\{\\}\\(\\)\\%\\^\\#\\ ]", "[[ ] { } ( ) % ^ # ]"); + CheckParseEq("\\0", "'\\x00'"); + CheckParseEq("\\8", "'8'"); + CheckParseEq("\\9", "'9'"); + CheckParseEq("\\11", "'\\x09'"); + CheckParseEq("\\11a", "'\\x09a'"); + CheckParseEq("\\011", "'\\x09'"); + CheckParseEq("\\00011", "'\\x0011'"); + CheckParseEq("\\118", "'\\x098'"); + CheckParseEq("\\111", "'I'"); + CheckParseEq("\\1111", "'I1'"); + CheckParseEq("(x)(x)(x)\\1", "(: (^ 'x') (^ 'x') (^ 'x') (<- 1))"); + CheckParseEq("(x)(x)(x)\\2", "(: (^ 'x') (^ 'x') (^ 'x') (<- 2))"); + CheckParseEq("(x)(x)(x)\\3", "(: (^ 'x') (^ 'x') (^ 'x') (<- 3))"); + CheckParseEq("(x)(x)(x)\\4", "(: (^ 'x') (^ 'x') (^ 'x') '\\x04')"); + CheckParseEq("(x)(x)(x)\\1*", + "(: (^ 'x') (^ 'x') (^ 'x')" + " (# 0 - g (<- 1)))"); + CheckParseEq("(x)(x)(x)\\2*", + "(: (^ 'x') (^ 'x') (^ 'x')" + " (# 0 - g (<- 2)))"); + CheckParseEq("(x)(x)(x)\\3*", + "(: (^ 'x') (^ 'x') (^ 'x')" + " (# 0 - g (<- 3)))"); + CheckParseEq("(x)(x)(x)\\4*", + "(: (^ 'x') (^ 'x') (^ 'x')" + " (# 0 - g '\\x04'))"); + CheckParseEq("(x)(x)(x)(x)(x)(x)(x)(x)(x)(x)\\10", + "(: (^ 'x') (^ 'x') (^ 'x') (^ 'x') (^ 'x') (^ 'x')" + " (^ 'x') (^ 'x') (^ 'x') (^ 'x') (<- 10))"); + CheckParseEq("(x)(x)(x)(x)(x)(x)(x)(x)(x)(x)\\11", + "(: (^ 'x') (^ 'x') (^ 'x') (^ 'x') (^ 'x') (^ 'x')" + " (^ 'x') (^ 'x') (^ 'x') (^ 'x') '\\x09')"); + CheckParseEq("(a)\\1", "(: (^ 'a') (<- 1))"); + CheckParseEq("(a\\1)", "(^ 'a')"); + CheckParseEq("(\\1a)", "(^ 'a')"); + CheckParseEq("(?=a)?a", "'a'"); + CheckParseEq("(?=a){0,10}a", "'a'"); + CheckParseEq("(?=a){1,10}a", "(: (-> + 'a') 'a')"); + CheckParseEq("(?=a){9,10}a", "(: (-> + 'a') 'a')"); + CheckParseEq("(?!a)?a", "'a'"); + CheckParseEq("\\1(a)", "(^ 'a')"); + CheckParseEq("(?!(a))\\1", "(: (-> - (^ 'a')) (<- 1))"); + CheckParseEq("(?!\\1(a\\1)\\1)\\1", "(: (-> - (: (^ 'a') (<- 1))) (<- 1))"); + CheckParseEq("[\\0]", "[\\x00]"); + CheckParseEq("[\\11]", "[\\x09]"); + CheckParseEq("[\\11a]", "[\\x09 a]"); + CheckParseEq("[\\011]", "[\\x09]"); + CheckParseEq("[\\00011]", "[\\x00 1 1]"); + CheckParseEq("[\\118]", "[\\x09 8]"); + CheckParseEq("[\\111]", "[I]"); + CheckParseEq("[\\1111]", "[I 1]"); + CheckParseEq("\\x34", "'\x34'"); + CheckParseEq("\\x60", "'\x60'"); + CheckParseEq("\\x3z", "'x3z'"); + CheckParseEq("\\c", "'\\c'"); + CheckParseEq("\\u0034", "'\x34'"); + CheckParseEq("\\u003z", "'u003z'"); + CheckParseEq("foo[z]*", "(: 'foo' (# 0 - g [z]))"); CHECK_SIMPLE("", false); CHECK_SIMPLE("a", true); @@ -317,22 +330,22 @@ TEST(Parser) { CHECK_SIMPLE("(?!a)?a\\1", false); CHECK_SIMPLE("(?:(?=a))a\\1", false); - CHECK_PARSE_EQ("a{}", "'a{}'"); - CHECK_PARSE_EQ("a{,}", "'a{,}'"); - CHECK_PARSE_EQ("a{", "'a{'"); - CHECK_PARSE_EQ("a{z}", "'a{z}'"); - CHECK_PARSE_EQ("a{1z}", "'a{1z}'"); - CHECK_PARSE_EQ("a{12z}", "'a{12z}'"); - CHECK_PARSE_EQ("a{12,", "'a{12,'"); - CHECK_PARSE_EQ("a{12,3b", "'a{12,3b'"); - CHECK_PARSE_EQ("{}", "'{}'"); - CHECK_PARSE_EQ("{,}", "'{,}'"); - CHECK_PARSE_EQ("{", "'{'"); - CHECK_PARSE_EQ("{z}", "'{z}'"); - CHECK_PARSE_EQ("{1z}", "'{1z}'"); - CHECK_PARSE_EQ("{12z}", "'{12z}'"); - CHECK_PARSE_EQ("{12,", "'{12,'"); - CHECK_PARSE_EQ("{12,3b", "'{12,3b'"); + CheckParseEq("a{}", "'a{}'"); + CheckParseEq("a{,}", "'a{,}'"); + 
CheckParseEq("a{", "'a{'"); + CheckParseEq("a{z}", "'a{z}'"); + CheckParseEq("a{1z}", "'a{1z}'"); + CheckParseEq("a{12z}", "'a{12z}'"); + CheckParseEq("a{12,", "'a{12,'"); + CheckParseEq("a{12,3b", "'a{12,3b'"); + CheckParseEq("{}", "'{}'"); + CheckParseEq("{,}", "'{,}'"); + CheckParseEq("{", "'{'"); + CheckParseEq("{z}", "'{z}'"); + CheckParseEq("{1z}", "'{1z}'"); + CheckParseEq("{12z}", "'{12z}'"); + CheckParseEq("{12,", "'{12,'"); + CheckParseEq("{12,3b", "'{12,3b'"); CHECK_MIN_MAX("a", 1, 1); CHECK_MIN_MAX("abc", 3, 3); @@ -386,10 +399,10 @@ TEST(Parser) { TEST(ParserRegression) { - CHECK_PARSE_EQ("[A-Z$-][x]", "(! [A-Z $ -] [x])"); - CHECK_PARSE_EQ("a{3,4*}", "(: 'a{3,' (# 0 - g '4') '}')"); - CHECK_PARSE_EQ("{", "'{'"); - CHECK_PARSE_EQ("a|", "(| 'a' %)"); + CheckParseEq("[A-Z$-][x]", "(! [A-Z $ -] [x])"); + CheckParseEq("a{3,4*}", "(: 'a{3,' (# 0 - g '4') '}')"); + CheckParseEq("{", "'{'"); + CheckParseEq("a|", "(| 'a' %)"); } static void ExpectError(const char* input, @@ -429,13 +442,11 @@ TEST(Errors) { // Check that we don't allow more than kMaxCapture captures const int kMaxCaptures = 1 << 16; // Must match RegExpParser::kMaxCaptures. const char* kTooManyCaptures = "Too many captures"; - HeapStringAllocator allocator; - StringStream accumulator(&allocator); + OStringStream os; for (int i = 0; i <= kMaxCaptures; i++) { - accumulator.Add("()"); + os << "()"; } - SmartArrayPointer<const char> many_captures(accumulator.ToCString()); - ExpectError(many_captures.get(), kTooManyCaptures); + ExpectError(os.c_str(), kTooManyCaptures); } @@ -663,11 +674,11 @@ TEST(ParsePossessiveRepetition) { // Enable possessive quantifier syntax. FLAG_regexp_possessive_quantifier = true; - CHECK_PARSE_EQ("a*+", "(# 0 - p 'a')"); - CHECK_PARSE_EQ("a++", "(# 1 - p 'a')"); - CHECK_PARSE_EQ("a?+", "(# 0 1 p 'a')"); - CHECK_PARSE_EQ("a{10,20}+", "(# 10 20 p 'a')"); - CHECK_PARSE_EQ("za{10,20}+b", "(: 'z' (# 10 20 p 'a') 'b')"); + CheckParseEq("a*+", "(# 0 - p 'a')"); + CheckParseEq("a++", "(# 1 - p 'a')"); + CheckParseEq("a?+", "(# 0 1 p 'a')"); + CheckParseEq("a{10,20}+", "(# 10 20 p 'a')"); + CheckParseEq("za{10,20}+b", "(: 'z' (# 10 20 p 'a') 'b')"); // Disable possessive quantifier syntax. 
FLAG_regexp_possessive_quantifier = false; @@ -698,6 +709,10 @@ typedef RegExpMacroAssemblerARM ArchRegExpMacroAssembler; typedef RegExpMacroAssemblerARM64 ArchRegExpMacroAssembler; #elif V8_TARGET_ARCH_MIPS typedef RegExpMacroAssemblerMIPS ArchRegExpMacroAssembler; +#elif V8_TARGET_ARCH_MIPS64 +typedef RegExpMacroAssemblerMIPS ArchRegExpMacroAssembler; +#elif V8_TARGET_ARCH_X87 +typedef RegExpMacroAssemblerX87 ArchRegExpMacroAssembler; #endif class ContextInitializer { @@ -1668,26 +1683,26 @@ TEST(CanonicalizeCharacterSets) { list->Add(CharacterRange(30, 40), &zone); list->Add(CharacterRange(50, 60), &zone); set.Canonicalize(); - ASSERT_EQ(3, list->length()); - ASSERT_EQ(10, list->at(0).from()); - ASSERT_EQ(20, list->at(0).to()); - ASSERT_EQ(30, list->at(1).from()); - ASSERT_EQ(40, list->at(1).to()); - ASSERT_EQ(50, list->at(2).from()); - ASSERT_EQ(60, list->at(2).to()); + DCHECK_EQ(3, list->length()); + DCHECK_EQ(10, list->at(0).from()); + DCHECK_EQ(20, list->at(0).to()); + DCHECK_EQ(30, list->at(1).from()); + DCHECK_EQ(40, list->at(1).to()); + DCHECK_EQ(50, list->at(2).from()); + DCHECK_EQ(60, list->at(2).to()); list->Rewind(0); list->Add(CharacterRange(10, 20), &zone); list->Add(CharacterRange(50, 60), &zone); list->Add(CharacterRange(30, 40), &zone); set.Canonicalize(); - ASSERT_EQ(3, list->length()); - ASSERT_EQ(10, list->at(0).from()); - ASSERT_EQ(20, list->at(0).to()); - ASSERT_EQ(30, list->at(1).from()); - ASSERT_EQ(40, list->at(1).to()); - ASSERT_EQ(50, list->at(2).from()); - ASSERT_EQ(60, list->at(2).to()); + DCHECK_EQ(3, list->length()); + DCHECK_EQ(10, list->at(0).from()); + DCHECK_EQ(20, list->at(0).to()); + DCHECK_EQ(30, list->at(1).from()); + DCHECK_EQ(40, list->at(1).to()); + DCHECK_EQ(50, list->at(2).from()); + DCHECK_EQ(60, list->at(2).to()); list->Rewind(0); list->Add(CharacterRange(30, 40), &zone); @@ -1696,26 +1711,26 @@ TEST(CanonicalizeCharacterSets) { list->Add(CharacterRange(100, 100), &zone); list->Add(CharacterRange(1, 1), &zone); set.Canonicalize(); - ASSERT_EQ(5, list->length()); - ASSERT_EQ(1, list->at(0).from()); - ASSERT_EQ(1, list->at(0).to()); - ASSERT_EQ(10, list->at(1).from()); - ASSERT_EQ(20, list->at(1).to()); - ASSERT_EQ(25, list->at(2).from()); - ASSERT_EQ(25, list->at(2).to()); - ASSERT_EQ(30, list->at(3).from()); - ASSERT_EQ(40, list->at(3).to()); - ASSERT_EQ(100, list->at(4).from()); - ASSERT_EQ(100, list->at(4).to()); + DCHECK_EQ(5, list->length()); + DCHECK_EQ(1, list->at(0).from()); + DCHECK_EQ(1, list->at(0).to()); + DCHECK_EQ(10, list->at(1).from()); + DCHECK_EQ(20, list->at(1).to()); + DCHECK_EQ(25, list->at(2).from()); + DCHECK_EQ(25, list->at(2).to()); + DCHECK_EQ(30, list->at(3).from()); + DCHECK_EQ(40, list->at(3).to()); + DCHECK_EQ(100, list->at(4).from()); + DCHECK_EQ(100, list->at(4).to()); list->Rewind(0); list->Add(CharacterRange(10, 19), &zone); list->Add(CharacterRange(21, 30), &zone); list->Add(CharacterRange(20, 20), &zone); set.Canonicalize(); - ASSERT_EQ(1, list->length()); - ASSERT_EQ(10, list->at(0).from()); - ASSERT_EQ(30, list->at(0).to()); + DCHECK_EQ(1, list->length()); + DCHECK_EQ(10, list->at(0).from()); + DCHECK_EQ(30, list->at(0).to()); } @@ -1798,8 +1813,8 @@ TEST(CharacterRangeMerge) { offset += 9; } - ASSERT(CharacterRange::IsCanonical(&l1)); - ASSERT(CharacterRange::IsCanonical(&l2)); + DCHECK(CharacterRange::IsCanonical(&l1)); + DCHECK(CharacterRange::IsCanonical(&l2)); ZoneList<CharacterRange> first_only(4, &zone); ZoneList<CharacterRange> second_only(4, &zone); diff --git 
a/deps/v8/test/cctest/test-reloc-info.cc b/deps/v8/test/cctest/test-reloc-info.cc index 5ab9e803c25..94ed287c440 100644 --- a/deps/v8/test/cctest/test-reloc-info.cc +++ b/deps/v8/test/cctest/test-reloc-info.cc @@ -26,8 +26,8 @@ // OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. -#include "cctest.h" -#include "assembler.h" +#include "src/assembler.h" +#include "test/cctest/cctest.h" namespace v8 { namespace internal { diff --git a/deps/v8/test/cctest/test-representation.cc b/deps/v8/test/cctest/test-representation.cc index 95a65cbbf7b..fc1f531331e 100644 --- a/deps/v8/test/cctest/test-representation.cc +++ b/deps/v8/test/cctest/test-representation.cc @@ -25,9 +25,10 @@ // (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE // OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. -#include "cctest.h" -#include "types.h" -#include "property-details.h" +#include "test/cctest/cctest.h" + +#include "src/property-details.h" +#include "src/types.h" using namespace v8::internal; diff --git a/deps/v8/test/cctest/test-semaphore.cc b/deps/v8/test/cctest/test-semaphore.cc index 895303f4f87..c7fca519dcf 100644 --- a/deps/v8/test/cctest/test-semaphore.cc +++ b/deps/v8/test/cctest/test-semaphore.cc @@ -27,38 +27,39 @@ #include <stdlib.h> -#include "v8.h" +#include "src/v8.h" -#include "platform.h" -#include "cctest.h" +#include "src/base/platform/platform.h" +#include "test/cctest/cctest.h" using namespace ::v8::internal; -class WaitAndSignalThread V8_FINAL : public Thread { - public: - explicit WaitAndSignalThread(Semaphore* semaphore) - : Thread("WaitAndSignalThread"), semaphore_(semaphore) {} +class WaitAndSignalThread V8_FINAL : public v8::base::Thread { + public: + explicit WaitAndSignalThread(v8::base::Semaphore* semaphore) + : Thread(Options("WaitAndSignalThread")), semaphore_(semaphore) {} virtual ~WaitAndSignalThread() {} virtual void Run() V8_OVERRIDE { for (int n = 0; n < 1000; ++n) { semaphore_->Wait(); - bool result = semaphore_->WaitFor(TimeDelta::FromMicroseconds(1)); - ASSERT(!result); + bool result = + semaphore_->WaitFor(v8::base::TimeDelta::FromMicroseconds(1)); + DCHECK(!result); USE(result); semaphore_->Signal(); } } - private: - Semaphore* semaphore_; + private: + v8::base::Semaphore* semaphore_; }; TEST(WaitAndSignal) { - Semaphore semaphore(0); + v8::base::Semaphore semaphore(0); WaitAndSignalThread t1(&semaphore); WaitAndSignalThread t2(&semaphore); @@ -73,33 +74,33 @@ TEST(WaitAndSignal) { semaphore.Wait(); - bool result = semaphore.WaitFor(TimeDelta::FromMicroseconds(1)); - ASSERT(!result); + bool result = semaphore.WaitFor(v8::base::TimeDelta::FromMicroseconds(1)); + DCHECK(!result); USE(result); } TEST(WaitFor) { bool ok; - Semaphore semaphore(0); + v8::base::Semaphore semaphore(0); // Semaphore not signalled - timeout. - ok = semaphore.WaitFor(TimeDelta::FromMicroseconds(0)); + ok = semaphore.WaitFor(v8::base::TimeDelta::FromMicroseconds(0)); CHECK(!ok); - ok = semaphore.WaitFor(TimeDelta::FromMicroseconds(100)); + ok = semaphore.WaitFor(v8::base::TimeDelta::FromMicroseconds(100)); CHECK(!ok); - ok = semaphore.WaitFor(TimeDelta::FromMicroseconds(1000)); + ok = semaphore.WaitFor(v8::base::TimeDelta::FromMicroseconds(1000)); CHECK(!ok); // Semaphore signalled - no timeout. 
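The WaitFor checks in this file pin down timed-wait semantics: an unsignalled semaphore must time out, and a signalled one must be acquired even with a zero timeout. The C++20 standard-library analogue, for comparison (not the V8 type):

#include <cassert>
#include <chrono>
#include <semaphore>

int main() {
  std::counting_semaphore<1> sem(0);  // starts unsignalled, like Semaphore(0)
  // Unsignalled: the timed wait must report failure.
  assert(!sem.try_acquire_for(std::chrono::microseconds(100)));
  sem.release();  // Signal()
  // Signalled: even a zero timeout succeeds immediately.
  assert(sem.try_acquire_for(std::chrono::microseconds(0)));
  return 0;
}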
semaphore.Signal(); - ok = semaphore.WaitFor(TimeDelta::FromMicroseconds(0)); + ok = semaphore.WaitFor(v8::base::TimeDelta::FromMicroseconds(0)); CHECK(ok); semaphore.Signal(); - ok = semaphore.WaitFor(TimeDelta::FromMicroseconds(100)); + ok = semaphore.WaitFor(v8::base::TimeDelta::FromMicroseconds(100)); CHECK(ok); semaphore.Signal(); - ok = semaphore.WaitFor(TimeDelta::FromMicroseconds(1000)); + ok = semaphore.WaitFor(v8::base::TimeDelta::FromMicroseconds(1000)); CHECK(ok); } @@ -110,13 +111,13 @@ static const int kBufferSize = 4096; // GCD(buffer size, alphabet size) = 1 static char buffer[kBufferSize]; static const int kDataSize = kBufferSize * kAlphabetSize * 10; -static Semaphore free_space(kBufferSize); -static Semaphore used_space(0); +static v8::base::Semaphore free_space(kBufferSize); +static v8::base::Semaphore used_space(0); -class ProducerThread V8_FINAL : public Thread { - public: - ProducerThread() : Thread("ProducerThread") {} +class ProducerThread V8_FINAL : public v8::base::Thread { + public: + ProducerThread() : Thread(Options("ProducerThread")) {} virtual ~ProducerThread() {} virtual void Run() V8_OVERRIDE { @@ -129,15 +130,15 @@ class ProducerThread V8_FINAL : public Thread { }; -class ConsumerThread V8_FINAL : public Thread { - public: - ConsumerThread() : Thread("ConsumerThread") {} +class ConsumerThread V8_FINAL : public v8::base::Thread { + public: + ConsumerThread() : Thread(Options("ConsumerThread")) {} virtual ~ConsumerThread() {} virtual void Run() V8_OVERRIDE { for (int n = 0; n < kDataSize; ++n) { used_space.Wait(); - ASSERT_EQ(static_cast<int>(alphabet[n % kAlphabetSize]), + DCHECK_EQ(static_cast<int>(alphabet[n % kAlphabetSize]), static_cast<int>(buffer[n % kBufferSize])); free_space.Signal(); } diff --git a/deps/v8/test/cctest/test-serialize.cc b/deps/v8/test/cctest/test-serialize.cc index 10c35c1c4d6..9ae90c47764 100644 --- a/deps/v8/test/cctest/test-serialize.cc +++ b/deps/v8/test/cctest/test-serialize.cc @@ -27,21 +27,22 @@ #include <signal.h> -#include "sys/stat.h" - -#include "v8.h" - -#include "debug.h" -#include "ic-inl.h" -#include "runtime.h" -#include "serialize.h" -#include "scopeinfo.h" -#include "snapshot.h" -#include "cctest.h" -#include "spaces.h" -#include "objects.h" -#include "natives.h" -#include "bootstrapper.h" +#include <sys/stat.h> + +#include "src/v8.h" + +#include "src/bootstrapper.h" +#include "src/compilation-cache.h" +#include "src/debug.h" +#include "src/heap/spaces.h" +#include "src/ic-inl.h" +#include "src/natives.h" +#include "src/objects.h" +#include "src/runtime.h" +#include "src/scopeinfo.h" +#include "src/serialize.h" +#include "src/snapshot.h" +#include "test/cctest/cctest.h" using namespace v8::internal; @@ -77,7 +78,7 @@ static int* counter_function(const char* name) { return &local_counters[hash]; } hash = (hash + 1) % kCounters; - ASSERT(hash != original_hash); // Hash table has been filled up. + DCHECK(hash != original_hash); // Hash table has been filled up. 
} } @@ -115,21 +116,21 @@ TEST(ExternalReferenceEncoder) { encoder.Encode(total_compile_size.address())); ExternalReference stack_limit_address = ExternalReference::address_of_stack_limit(isolate); - CHECK_EQ(make_code(UNCLASSIFIED, 4), + CHECK_EQ(make_code(UNCLASSIFIED, 2), encoder.Encode(stack_limit_address.address())); ExternalReference real_stack_limit_address = ExternalReference::address_of_real_stack_limit(isolate); - CHECK_EQ(make_code(UNCLASSIFIED, 5), + CHECK_EQ(make_code(UNCLASSIFIED, 3), encoder.Encode(real_stack_limit_address.address())); - CHECK_EQ(make_code(UNCLASSIFIED, 16), + CHECK_EQ(make_code(UNCLASSIFIED, 8), encoder.Encode(ExternalReference::debug_break(isolate).address())); - CHECK_EQ(make_code(UNCLASSIFIED, 10), - encoder.Encode( - ExternalReference::new_space_start(isolate).address())); - CHECK_EQ(make_code(UNCLASSIFIED, 3), - encoder.Encode( - ExternalReference::roots_array_start(isolate).address())); - CHECK_EQ(make_code(UNCLASSIFIED, 52), + CHECK_EQ( + make_code(UNCLASSIFIED, 4), + encoder.Encode(ExternalReference::new_space_start(isolate).address())); + CHECK_EQ( + make_code(UNCLASSIFIED, 1), + encoder.Encode(ExternalReference::roots_array_start(isolate).address())); + CHECK_EQ(make_code(UNCLASSIFIED, 34), encoder.Encode(ExternalReference::cpu_features().address())); } @@ -152,20 +153,20 @@ TEST(ExternalReferenceDecoder) { make_code(STATS_COUNTER, Counters::k_total_compile_size))); CHECK_EQ(ExternalReference::address_of_stack_limit(isolate).address(), - decoder.Decode(make_code(UNCLASSIFIED, 4))); + decoder.Decode(make_code(UNCLASSIFIED, 2))); CHECK_EQ(ExternalReference::address_of_real_stack_limit(isolate).address(), - decoder.Decode(make_code(UNCLASSIFIED, 5))); + decoder.Decode(make_code(UNCLASSIFIED, 3))); CHECK_EQ(ExternalReference::debug_break(isolate).address(), - decoder.Decode(make_code(UNCLASSIFIED, 16))); + decoder.Decode(make_code(UNCLASSIFIED, 8))); CHECK_EQ(ExternalReference::new_space_start(isolate).address(), - decoder.Decode(make_code(UNCLASSIFIED, 10))); + decoder.Decode(make_code(UNCLASSIFIED, 4))); } class FileByteSink : public SnapshotByteSink { public: explicit FileByteSink(const char* snapshot_file) { - fp_ = OS::FOpen(snapshot_file, "wb"); + fp_ = v8::base::OS::FOpen(snapshot_file, "wb"); file_name_ = snapshot_file; if (fp_ == NULL) { PrintF("Unable to write to snapshot file \"%s\"\n", snapshot_file); @@ -177,9 +178,9 @@ class FileByteSink : public SnapshotByteSink { fclose(fp_); } } - virtual void Put(int byte, const char* description) { + virtual void Put(byte b, const char* description) { if (fp_ != NULL) { - fputc(byte, fp_); + fputc(b, fp_); } } virtual int Position() { @@ -210,8 +211,8 @@ void FileByteSink::WriteSpaceUsed( int property_cell_space_used) { int file_name_length = StrLength(file_name_) + 10; Vector<char> name = Vector<char>::New(file_name_length + 1); - OS::SNPrintF(name, "%s.size", file_name_); - FILE* fp = OS::FOpen(name.start(), "w"); + SNPrintF(name, "%s.size", file_name_); + FILE* fp = v8::base::OS::FOpen(name.start(), "w"); name.Dispose(); fprintf(fp, "new %d\n", new_space_used); fprintf(fp, "pointer %d\n", pointer_space_used); @@ -262,7 +263,7 @@ static void Serialize() { // Test that the whole heap can be serialized. TEST(Serialize) { if (!Snapshot::HaveASnapshotToStartFrom()) { - Serializer::RequestEnable(CcTest::i_isolate()); + CcTest::i_isolate()->enable_serializer(); v8::V8::Initialize(); Serialize(); } @@ -272,7 +273,7 @@ TEST(Serialize) { // Test that heap serialization is non-destructive. 
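FileByteSink above illustrates the Put(int) -> Put(byte) interface change: implementations now receive the raw byte plus a description string used only for debugging. A reduced sketch of that sink shape (FileSink and the byte typedef are illustrative, not the V8 classes):

#include <cstdio>

typedef unsigned char byte;  // illustrative; V8 defines its own byte type

// The sink interface after the change: one raw byte per call.
class ByteSink {
 public:
  virtual ~ByteSink() {}
  virtual void Put(byte b, const char* description) = 0;
};

class FileSink : public ByteSink {
 public:
  explicit FileSink(const char* path) : fp_(std::fopen(path, "wb")) {}
  virtual ~FileSink() {
    if (fp_ != NULL) std::fclose(fp_);
  }
  virtual void Put(byte b, const char* /* description */) {
    if (fp_ != NULL) std::fputc(b, fp_);
  }

 private:
  std::FILE* fp_;
};

int main() {
  FileSink sink("snapshot.bin");
  sink.Put(0x2a, "example byte");
  return 0;
}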
TEST(SerializeTwice) { if (!Snapshot::HaveASnapshotToStartFrom()) { - Serializer::RequestEnable(CcTest::i_isolate()); + CcTest::i_isolate()->enable_serializer(); v8::V8::Initialize(); Serialize(); Serialize(); @@ -283,8 +284,60 @@ TEST(SerializeTwice) { //---------------------------------------------------------------------------- // Tests that the heap can be deserialized. + +static void ReserveSpaceForSnapshot(Deserializer* deserializer, + const char* file_name) { + int file_name_length = StrLength(file_name) + 10; + Vector<char> name = Vector<char>::New(file_name_length + 1); + SNPrintF(name, "%s.size", file_name); + FILE* fp = v8::base::OS::FOpen(name.start(), "r"); + name.Dispose(); + int new_size, pointer_size, data_size, code_size, map_size, cell_size, + property_cell_size; +#ifdef _MSC_VER + // Avoid warning about unsafe fscanf from MSVC. + // Please note that this is only fine if %c and %s are not being used. +#define fscanf fscanf_s +#endif + CHECK_EQ(1, fscanf(fp, "new %d\n", &new_size)); + CHECK_EQ(1, fscanf(fp, "pointer %d\n", &pointer_size)); + CHECK_EQ(1, fscanf(fp, "data %d\n", &data_size)); + CHECK_EQ(1, fscanf(fp, "code %d\n", &code_size)); + CHECK_EQ(1, fscanf(fp, "map %d\n", &map_size)); + CHECK_EQ(1, fscanf(fp, "cell %d\n", &cell_size)); + CHECK_EQ(1, fscanf(fp, "property cell %d\n", &property_cell_size)); +#ifdef _MSC_VER +#undef fscanf +#endif + fclose(fp); + deserializer->set_reservation(NEW_SPACE, new_size); + deserializer->set_reservation(OLD_POINTER_SPACE, pointer_size); + deserializer->set_reservation(OLD_DATA_SPACE, data_size); + deserializer->set_reservation(CODE_SPACE, code_size); + deserializer->set_reservation(MAP_SPACE, map_size); + deserializer->set_reservation(CELL_SPACE, cell_size); + deserializer->set_reservation(PROPERTY_CELL_SPACE, property_cell_size); +} + + +bool InitializeFromFile(const char* snapshot_file) { + int len; + byte* str = ReadBytes(snapshot_file, &len); + if (!str) return false; + bool success; + { + SnapshotByteSource source(str, len); + Deserializer deserializer(&source); + ReserveSpaceForSnapshot(&deserializer, snapshot_file); + success = V8::Initialize(&deserializer); + } + DeleteArray(str); + return success; +} + + static void Deserialize() { - CHECK(Snapshot::Initialize(FLAG_testing_serialization_file)); + CHECK(InitializeFromFile(FLAG_testing_serialization_file)); } @@ -370,7 +423,7 @@ DEPENDENT_TEST(DeserializeFromSecondSerializationAndRunScript2, TEST(PartialSerialization) { if (!Snapshot::HaveASnapshotToStartFrom()) { Isolate* isolate = CcTest::i_isolate(); - Serializer::RequestEnable(isolate); + CcTest::i_isolate()->enable_serializer(); v8::V8::Initialize(); v8::Isolate* v8_isolate = reinterpret_cast<v8::Isolate*>(isolate); Heap* heap = isolate->heap(); @@ -380,7 +433,7 @@ TEST(PartialSerialization) { HandleScope scope(isolate); env.Reset(v8_isolate, v8::Context::New(v8_isolate)); } - ASSERT(!env.IsEmpty()); + DCHECK(!env.IsEmpty()); { v8::HandleScope handle_scope(v8_isolate); v8::Local<v8::Context>::New(v8_isolate, env)->Enter(); @@ -398,13 +451,13 @@ TEST(PartialSerialization) { { v8::HandleScope handle_scope(v8_isolate); v8::Local<v8::String> foo = v8::String::NewFromUtf8(v8_isolate, "foo"); - ASSERT(!foo.IsEmpty()); + DCHECK(!foo.IsEmpty()); raw_foo = *(v8::Utils::OpenHandle(*foo)); } int file_name_length = StrLength(FLAG_testing_serialization_file) + 10; Vector<char> startup_name = Vector<char>::New(file_name_length + 1); - OS::SNPrintF(startup_name, "%s.startup", FLAG_testing_serialization_file); + 
SNPrintF(startup_name, "%s.startup", FLAG_testing_serialization_file); { v8::HandleScope handle_scope(v8_isolate); @@ -443,48 +496,13 @@ TEST(PartialSerialization) { } -static void ReserveSpaceForSnapshot(Deserializer* deserializer, - const char* file_name) { - int file_name_length = StrLength(file_name) + 10; - Vector<char> name = Vector<char>::New(file_name_length + 1); - OS::SNPrintF(name, "%s.size", file_name); - FILE* fp = OS::FOpen(name.start(), "r"); - name.Dispose(); - int new_size, pointer_size, data_size, code_size, map_size, cell_size, - property_cell_size; -#ifdef _MSC_VER - // Avoid warning about unsafe fscanf from MSVC. - // Please note that this is only fine if %c and %s are not being used. -#define fscanf fscanf_s -#endif - CHECK_EQ(1, fscanf(fp, "new %d\n", &new_size)); - CHECK_EQ(1, fscanf(fp, "pointer %d\n", &pointer_size)); - CHECK_EQ(1, fscanf(fp, "data %d\n", &data_size)); - CHECK_EQ(1, fscanf(fp, "code %d\n", &code_size)); - CHECK_EQ(1, fscanf(fp, "map %d\n", &map_size)); - CHECK_EQ(1, fscanf(fp, "cell %d\n", &cell_size)); - CHECK_EQ(1, fscanf(fp, "property cell %d\n", &property_cell_size)); -#ifdef _MSC_VER -#undef fscanf -#endif - fclose(fp); - deserializer->set_reservation(NEW_SPACE, new_size); - deserializer->set_reservation(OLD_POINTER_SPACE, pointer_size); - deserializer->set_reservation(OLD_DATA_SPACE, data_size); - deserializer->set_reservation(CODE_SPACE, code_size); - deserializer->set_reservation(MAP_SPACE, map_size); - deserializer->set_reservation(CELL_SPACE, cell_size); - deserializer->set_reservation(PROPERTY_CELL_SPACE, property_cell_size); -} - - DEPENDENT_TEST(PartialDeserialization, PartialSerialization) { - if (!Snapshot::IsEnabled()) { + if (!Snapshot::HaveASnapshotToStartFrom()) { int file_name_length = StrLength(FLAG_testing_serialization_file) + 10; Vector<char> startup_name = Vector<char>::New(file_name_length + 1); - OS::SNPrintF(startup_name, "%s.startup", FLAG_testing_serialization_file); + SNPrintF(startup_name, "%s.startup", FLAG_testing_serialization_file); - CHECK(Snapshot::Initialize(startup_name.start())); + CHECK(InitializeFromFile(startup_name.start())); startup_name.Dispose(); const char* file_name = FLAG_testing_serialization_file; @@ -521,7 +539,7 @@ DEPENDENT_TEST(PartialDeserialization, PartialSerialization) { TEST(ContextSerialization) { if (!Snapshot::HaveASnapshotToStartFrom()) { Isolate* isolate = CcTest::i_isolate(); - Serializer::RequestEnable(isolate); + CcTest::i_isolate()->enable_serializer(); v8::V8::Initialize(); v8::Isolate* v8_isolate = reinterpret_cast<v8::Isolate*>(isolate); Heap* heap = isolate->heap(); @@ -531,7 +549,7 @@ TEST(ContextSerialization) { HandleScope scope(isolate); env.Reset(v8_isolate, v8::Context::New(v8_isolate)); } - ASSERT(!env.IsEmpty()); + DCHECK(!env.IsEmpty()); { v8::HandleScope handle_scope(v8_isolate); v8::Local<v8::Context>::New(v8_isolate, env)->Enter(); @@ -548,7 +566,7 @@ TEST(ContextSerialization) { int file_name_length = StrLength(FLAG_testing_serialization_file) + 10; Vector<char> startup_name = Vector<char>::New(file_name_length + 1); - OS::SNPrintF(startup_name, "%s.startup", FLAG_testing_serialization_file); + SNPrintF(startup_name, "%s.startup", FLAG_testing_serialization_file); { v8::HandleScope handle_scope(v8_isolate); @@ -594,9 +612,9 @@ DEPENDENT_TEST(ContextDeserialization, ContextSerialization) { if (!Snapshot::HaveASnapshotToStartFrom()) { int file_name_length = StrLength(FLAG_testing_serialization_file) + 10; Vector<char> startup_name = 
Vector<char>::New(file_name_length + 1); - OS::SNPrintF(startup_name, "%s.startup", FLAG_testing_serialization_file); + SNPrintF(startup_name, "%s.startup", FLAG_testing_serialization_file); - CHECK(Snapshot::Initialize(startup_name.start())); + CHECK(InitializeFromFile(startup_name.start())); startup_name.Dispose(); const char* file_name = FLAG_testing_serialization_file; @@ -644,3 +662,185 @@ DEPENDENT_TEST(DependentTestThatAlwaysFails, TestThatAlwaysSucceeds) { bool ArtificialFailure2 = false; CHECK(ArtificialFailure2); } + + +int CountBuiltins() { + // Check that we have not deserialized any additional builtin. + HeapIterator iterator(CcTest::heap()); + DisallowHeapAllocation no_allocation; + int counter = 0; + for (HeapObject* obj = iterator.next(); obj != NULL; obj = iterator.next()) { + if (obj->IsCode() && Code::cast(obj)->kind() == Code::BUILTIN) counter++; + } + return counter; +} + + +TEST(SerializeToplevelOnePlusOne) { + FLAG_serialize_toplevel = true; + LocalContext context; + Isolate* isolate = CcTest::i_isolate(); + isolate->compilation_cache()->Disable(); // Disable same-isolate code cache. + + v8::HandleScope scope(CcTest::isolate()); + + const char* source = "1 + 1"; + + Handle<String> orig_source = isolate->factory() + ->NewStringFromUtf8(CStrVector(source)) + .ToHandleChecked(); + Handle<String> copy_source = isolate->factory() + ->NewStringFromUtf8(CStrVector(source)) + .ToHandleChecked(); + CHECK(!orig_source.is_identical_to(copy_source)); + CHECK(orig_source->Equals(*copy_source)); + + ScriptData* cache = NULL; + + Handle<SharedFunctionInfo> orig = Compiler::CompileScript( + orig_source, Handle<String>(), 0, 0, false, + Handle<Context>(isolate->native_context()), NULL, &cache, + v8::ScriptCompiler::kProduceCodeCache, NOT_NATIVES_CODE); + + int builtins_count = CountBuiltins(); + + Handle<SharedFunctionInfo> copy; + { + DisallowCompilation no_compile_expected(isolate); + copy = Compiler::CompileScript( + copy_source, Handle<String>(), 0, 0, false, + Handle<Context>(isolate->native_context()), NULL, &cache, + v8::ScriptCompiler::kConsumeCodeCache, NOT_NATIVES_CODE); + } + + CHECK_NE(*orig, *copy); + CHECK(Script::cast(copy->script())->source() == *copy_source); + + Handle<JSFunction> copy_fun = + isolate->factory()->NewFunctionFromSharedFunctionInfo( + copy, isolate->native_context()); + Handle<JSObject> global(isolate->context()->global_object()); + Handle<Object> copy_result = + Execution::Call(isolate, copy_fun, global, 0, NULL).ToHandleChecked(); + CHECK_EQ(2, Handle<Smi>::cast(copy_result)->value()); + + CHECK_EQ(builtins_count, CountBuiltins()); + + delete cache; +} + + +TEST(SerializeToplevelInternalizedString) { + FLAG_serialize_toplevel = true; + LocalContext context; + Isolate* isolate = CcTest::i_isolate(); + isolate->compilation_cache()->Disable(); // Disable same-isolate code cache. 
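The serialize-toplevel tests around here drive the internal Compiler::CompileScript with kProduceCodeCache and then kConsumeCodeCache; the public-API equivalent, which the SerializeToplevelIsolates test further down uses, looks roughly like the sketch below. It assumes an entered isolate and context plus the cctest v8_str() helper, and omits error checking:

    // Assumes an entered isolate and context, plus the cctest v8_str() helper.
    void RoundTripCodeCache(v8::Isolate* isolate) {
      v8::ScriptOrigin origin(v8_str("test"));

      // First compile: ask V8 to emit serialized code alongside the result.
      v8::ScriptCompiler::Source first(v8_str("1 + 1"), origin);
      v8::ScriptCompiler::CompileUnbound(isolate, &first,
                                         v8::ScriptCompiler::kProduceCodeCache);
      const v8::ScriptCompiler::CachedData* data = first.GetCachedData();

      // Second compile: feed the bytes back to skip parsing and compilation.
      v8::ScriptCompiler::Source second(
          v8_str("1 + 1"), origin,
          new v8::ScriptCompiler::CachedData(data->data, data->length));
      v8::ScriptCompiler::CompileUnbound(isolate, &second,
                                         v8::ScriptCompiler::kConsumeCodeCache);
    }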
+ + v8::HandleScope scope(CcTest::isolate()); + + const char* source = "'string1'"; + + Handle<String> orig_source = isolate->factory() + ->NewStringFromUtf8(CStrVector(source)) + .ToHandleChecked(); + Handle<String> copy_source = isolate->factory() + ->NewStringFromUtf8(CStrVector(source)) + .ToHandleChecked(); + CHECK(!orig_source.is_identical_to(copy_source)); + CHECK(orig_source->Equals(*copy_source)); + + Handle<JSObject> global(isolate->context()->global_object()); + ScriptData* cache = NULL; + + Handle<SharedFunctionInfo> orig = Compiler::CompileScript( + orig_source, Handle<String>(), 0, 0, false, + Handle<Context>(isolate->native_context()), NULL, &cache, + v8::ScriptCompiler::kProduceCodeCache, NOT_NATIVES_CODE); + Handle<JSFunction> orig_fun = + isolate->factory()->NewFunctionFromSharedFunctionInfo( + orig, isolate->native_context()); + Handle<Object> orig_result = + Execution::Call(isolate, orig_fun, global, 0, NULL).ToHandleChecked(); + CHECK(orig_result->IsInternalizedString()); + + int builtins_count = CountBuiltins(); + + Handle<SharedFunctionInfo> copy; + { + DisallowCompilation no_compile_expected(isolate); + copy = Compiler::CompileScript( + copy_source, Handle<String>(), 0, 0, false, + Handle<Context>(isolate->native_context()), NULL, &cache, + v8::ScriptCompiler::kConsumeCodeCache, NOT_NATIVES_CODE); + } + CHECK_NE(*orig, *copy); + CHECK(Script::cast(copy->script())->source() == *copy_source); + + Handle<JSFunction> copy_fun = + isolate->factory()->NewFunctionFromSharedFunctionInfo( + copy, isolate->native_context()); + CHECK_NE(*orig_fun, *copy_fun); + Handle<Object> copy_result = + Execution::Call(isolate, copy_fun, global, 0, NULL).ToHandleChecked(); + CHECK(orig_result.is_identical_to(copy_result)); + Handle<String> expected = + isolate->factory()->NewStringFromAsciiChecked("string1"); + + CHECK(Handle<String>::cast(copy_result)->Equals(*expected)); + CHECK_EQ(builtins_count, CountBuiltins()); + + delete cache; +} + + +TEST(SerializeToplevelIsolates) { + FLAG_serialize_toplevel = true; + + const char* source = "function f() { return 'abc'; }; f() + 'def'"; + v8::ScriptCompiler::CachedData* cache; + + v8::Isolate* isolate1 = v8::Isolate::New(); + v8::Isolate* isolate2 = v8::Isolate::New(); + { + v8::Isolate::Scope iscope(isolate1); + v8::HandleScope scope(isolate1); + v8::Local<v8::Context> context = v8::Context::New(isolate1); + v8::Context::Scope context_scope(context); + + v8::Local<v8::String> source_str = v8_str(source); + v8::ScriptOrigin origin(v8_str("test")); + v8::ScriptCompiler::Source source(source_str, origin); + v8::Local<v8::UnboundScript> script = v8::ScriptCompiler::CompileUnbound( + isolate1, &source, v8::ScriptCompiler::kProduceCodeCache); + const v8::ScriptCompiler::CachedData* data = source.GetCachedData(); + // Persist cached data. 
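The lines that follow deep-copy the cached data before isolate1 is disposed: GetCachedData() returns a buffer owned by the Source, so the bytes must be copied out and handed to a CachedData built with BufferOwned, which then delete[]s the buffer itself. The same move as a small helper (a sketch, not a V8 utility):

    #include <cstring>

    // Deep copy so the bytes outlive the Source (and isolate) that made them;
    // BufferOwned makes the new CachedData delete[] the buffer itself.
    v8::ScriptCompiler::CachedData* CloneCachedData(
        const v8::ScriptCompiler::CachedData* data) {
      uint8_t* buffer = new uint8_t[data->length];
      memcpy(buffer, data->data, data->length);
      return new v8::ScriptCompiler::CachedData(
          buffer, data->length, v8::ScriptCompiler::CachedData::BufferOwned);
    }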
+ uint8_t* buffer = NewArray<uint8_t>(data->length); + MemCopy(buffer, data->data, data->length); + cache = new v8::ScriptCompiler::CachedData( + buffer, data->length, v8::ScriptCompiler::CachedData::BufferOwned); + + v8::Local<v8::Value> result = script->BindToCurrentContext()->Run(); + CHECK(result->ToString()->Equals(v8_str("abcdef"))); + } + isolate1->Dispose(); + + { + v8::Isolate::Scope iscope(isolate2); + v8::HandleScope scope(isolate2); + v8::Local<v8::Context> context = v8::Context::New(isolate2); + v8::Context::Scope context_scope(context); + + v8::Local<v8::String> source_str = v8_str(source); + v8::ScriptOrigin origin(v8_str("test")); + v8::ScriptCompiler::Source source(source_str, origin, cache); + v8::Local<v8::UnboundScript> script; + { + DisallowCompilation no_compile(reinterpret_cast<Isolate*>(isolate2)); + script = v8::ScriptCompiler::CompileUnbound( + isolate2, &source, v8::ScriptCompiler::kConsumeCodeCache); + } + v8::Local<v8::Value> result = script->BindToCurrentContext()->Run(); + CHECK(result->ToString()->Equals(v8_str("abcdef"))); + } + isolate2->Dispose(); +} diff --git a/deps/v8/test/cctest/test-socket.cc b/deps/v8/test/cctest/test-socket.cc deleted file mode 100644 index 47d8b178312..00000000000 --- a/deps/v8/test/cctest/test-socket.cc +++ /dev/null @@ -1,176 +0,0 @@ -// Copyright 2009 the V8 project authors. All rights reserved. -// Redistribution and use in source and binary forms, with or without -// modification, are permitted provided that the following conditions are -// met: -// -// * Redistributions of source code must retain the above copyright -// notice, this list of conditions and the following disclaimer. -// * Redistributions in binary form must reproduce the above -// copyright notice, this list of conditions and the following -// disclaimer in the documentation and/or other materials provided -// with the distribution. -// * Neither the name of Google Inc. nor the names of its -// contributors may be used to endorse or promote products derived -// from this software without specific prior written permission. -// -// THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS -// "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT -// LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR -// A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT -// OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, -// SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT -// LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, -// DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY -// THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT -// (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE -// OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. - -#include "v8.h" -#include "platform.h" -#include "platform/socket.h" -#include "cctest.h" - - -using namespace ::v8::internal; - - -class SocketListenerThread : public Thread { - public: - SocketListenerThread(int port, int data_size) - : Thread("SocketListenerThread"), - port_(port), - data_size_(data_size), - server_(NULL), - client_(NULL), - listening_(0) { - data_ = new char[data_size_]; - } - ~SocketListenerThread() { - // Close both sockets. 
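The removed SocketListenerThread only signals its listening_ semaphore after listen() succeeds, so the client thread cannot race ahead and connect before the server is ready. The same ready-handshake with standard C++ primitives in place of the deleted v8::internal::Semaphore (a sketch; the actual socket calls are elided):

    #include <condition_variable>
    #include <mutex>
    #include <thread>

    std::mutex m;
    std::condition_variable cv;
    bool listening = false;

    void Listener() {
      // ... bind() and listen() would happen here ...
      {
        std::lock_guard<std::mutex> lock(m);
        listening = true;
      }
      cv.notify_one();  // Counterpart of listening_.Signal().
      // ... accept() and the receive loop would follow ...
    }

    int main() {
      std::thread t(Listener);
      {
        std::unique_lock<std::mutex> lock(m);
        cv.wait(lock, [] { return listening; });  // WaitForListening().
      }
      // Only now is it safe for the client to connect.
      t.join();
    }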
- delete client_; - delete server_; - delete[] data_; - } - - void Run(); - void WaitForListening() { listening_.Wait(); } - char* data() { return data_; } - - private: - int port_; - char* data_; - int data_size_; - Socket* server_; // Server socket used for bind/accept. - Socket* client_; // Single client connection used by the test. - Semaphore listening_; // Signalled when the server socket is in listen mode. -}; - - -void SocketListenerThread::Run() { - bool ok; - - // Create the server socket and bind it to the requested port. - server_ = new Socket; - server_->SetReuseAddress(true); - CHECK(server_ != NULL); - ok = server_->Bind(port_); - CHECK(ok); - - // Listen for new connections. - ok = server_->Listen(1); - CHECK(ok); - listening_.Signal(); - - // Accept a connection. - client_ = server_->Accept(); - CHECK(client_ != NULL); - - // Read the expected niumber of bytes of data. - int bytes_read = 0; - while (bytes_read < data_size_) { - bytes_read += client_->Receive(data_ + bytes_read, data_size_ - bytes_read); - } -} - - -static bool SendAll(Socket* socket, const char* data, int len) { - int sent_len = 0; - while (sent_len < len) { - int status = socket->Send(data, len); - if (status <= 0) { - return false; - } - sent_len += status; - } - return true; -} - - -static void SendAndReceive(int port, char *data, int len) { - static const char* kLocalhost = "localhost"; - - bool ok; - - // Make a string with the port number. - const int kPortBuferLen = 6; - char port_str[kPortBuferLen]; - OS::SNPrintF(Vector<char>(port_str, kPortBuferLen), "%d", port); - - // Create a socket listener. - SocketListenerThread* listener = new SocketListenerThread(port, len); - listener->Start(); - listener->WaitForListening(); - - // Connect and write some data. - Socket* client = new Socket; - CHECK(client != NULL); - ok = client->Connect(kLocalhost, port_str); - CHECK(ok); - - // Send all the data. - ok = SendAll(client, data, len); - CHECK(ok); - - // Wait until data is received. - listener->Join(); - - // Check that data received is the same as data send. - for (int i = 0; i < len; i++) { - CHECK(data[i] == listener->data()[i]); - } - - // Close the client before the listener to avoid TIME_WAIT issues. - client->Shutdown(); - delete client; - delete listener; -} - - -TEST(Socket) { - // Make sure this port is not used by other tests to allow tests to run in - // parallel. - static const int kPort = 5859 + FlagDependentPortOffset(); - - // Send and receive some data. - static const int kBufferSizeSmall = 20; - char small_data[kBufferSizeSmall + 1] = "1234567890abcdefghij"; - SendAndReceive(kPort, small_data, kBufferSizeSmall); - - // Send and receive some more data. - static const int kBufferSizeMedium = 10000; - char* medium_data = new char[kBufferSizeMedium]; - for (int i = 0; i < kBufferSizeMedium; i++) { - medium_data[i] = i % 256; - } - SendAndReceive(kPort, medium_data, kBufferSizeMedium); - delete[] medium_data; - - // Send and receive even more data. 
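The deleted SendAll helper captures the usual partial-write loop: a stream send may accept fewer bytes than requested, so the caller must loop until everything is written. Sketched below against POSIX send() rather than the removed Socket wrapper; note the data + sent_len offset that advances the buffer on each retry:

    #include <sys/socket.h>

    // Writes all of |data|; a stream send() may accept fewer bytes than asked.
    // The buffer pointer advances by sent_len so each retry continues where
    // the previous call stopped.
    bool SendAll(int fd, const char* data, int len) {
      int sent_len = 0;
      while (sent_len < len) {
        int status = static_cast<int>(
            send(fd, data + sent_len, static_cast<size_t>(len - sent_len), 0));
        if (status <= 0) return false;  // Error or peer closed.
        sent_len += status;
      }
      return true;
    }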
- static const int kBufferSizeLarge = 1000000; - char* large_data = new char[kBufferSizeLarge]; - for (int i = 0; i < kBufferSizeLarge; i++) { - large_data[i] = i % 256; - } - SendAndReceive(kPort, large_data, kBufferSizeLarge); - delete[] large_data; -} diff --git a/deps/v8/test/cctest/test-spaces.cc b/deps/v8/test/cctest/test-spaces.cc index 47e2536fc9a..00620944002 100644 --- a/deps/v8/test/cctest/test-spaces.cc +++ b/deps/v8/test/cctest/test-spaces.cc @@ -27,8 +27,10 @@ #include <stdlib.h> -#include "v8.h" -#include "cctest.h" +#include "src/snapshot.h" +#include "src/v8.h" +#include "test/cctest/cctest.h" + using namespace v8::internal; @@ -169,12 +171,14 @@ static void VerifyMemoryChunk(Isolate* isolate, commit_area_size, executable, NULL); - size_t alignment = code_range->exists() ? - MemoryChunk::kAlignment : OS::CommitPageSize(); - size_t reserved_size = ((executable == EXECUTABLE)) - ? RoundUp(header_size + guard_size + reserve_area_size + guard_size, - alignment) - : RoundUp(header_size + reserve_area_size, OS::CommitPageSize()); + size_t alignment = code_range != NULL && code_range->valid() ? + MemoryChunk::kAlignment : v8::base::OS::CommitPageSize(); + size_t reserved_size = + ((executable == EXECUTABLE)) + ? RoundUp(header_size + guard_size + reserve_area_size + guard_size, + alignment) + : RoundUp(header_size + reserve_area_size, + v8::base::OS::CommitPageSize()); CHECK(memory_chunk->size() == reserved_size); CHECK(memory_chunk->area_start() < memory_chunk->address() + memory_chunk->size()); @@ -221,7 +225,7 @@ TEST(MemoryChunk) { // With CodeRange. CodeRange* code_range = new CodeRange(isolate); - const int code_range_size = 32 * MB; + const size_t code_range_size = 32 * MB; if (!code_range->SetUp(code_range_size)) return; VerifyMemoryChunk(isolate, @@ -393,7 +397,7 @@ TEST(LargeObjectSpace) { if (allocation.IsRetry()) break; } CHECK(lo->Available() < available); - }; + } CHECK(!lo->IsEmpty()); @@ -403,11 +407,15 @@ TEST(LargeObjectSpace) { TEST(SizeOfFirstPageIsLargeEnough) { if (i::FLAG_always_opt) return; + // Bootstrapping without a snapshot causes more allocations. + if (!i::Snapshot::HaveASnapshotToStartFrom()) return; CcTest::InitializeVM(); Isolate* isolate = CcTest::i_isolate(); // Freshly initialized VM gets by with one page per space. for (int i = FIRST_PAGED_SPACE; i <= LAST_PAGED_SPACE; i++) { + // Debug code can be very large, so skip CODE_SPACE if we are generating it. + if (i == CODE_SPACE && i::FLAG_debug_code) continue; CHECK_EQ(1, isolate->heap()->paged_space(i)->CountTotalPages()); } @@ -415,6 +423,8 @@ TEST(SizeOfFirstPageIsLargeEnough) { HandleScope scope(isolate); CompileRun("/*empty*/"); for (int i = FIRST_PAGED_SPACE; i <= LAST_PAGED_SPACE; i++) { + // Debug code can be very large, so skip CODE_SPACE if we are generating it. 
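VerifyMemoryChunk in the test-spaces.cc hunk sizes its expected reservation with RoundUp against either MemoryChunk::kAlignment or the OS commit page size. For a power-of-two alignment that is the standard bit trick, sketched here with illustrative page numbers:

    #include <cassert>
    #include <cstddef>

    // Rounds |x| up to the next multiple of |alignment|, which must be a
    // power of two (true of both page sizes and MemoryChunk::kAlignment).
    static size_t RoundUp(size_t x, size_t alignment) {
      assert((alignment & (alignment - 1)) == 0);
      return (x + alignment - 1) & ~(alignment - 1);
    }

    int main() {
      assert(RoundUp(5000, 4096) == 8192);  // Spills into a second page.
      assert(RoundUp(4096, 4096) == 4096);  // Already aligned: unchanged.
    }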
+ if (i == CODE_SPACE && i::FLAG_debug_code) continue; CHECK_EQ(1, isolate->heap()->paged_space(i)->CountTotalPages()); } diff --git a/deps/v8/test/cctest/test-strings.cc b/deps/v8/test/cctest/test-strings.cc index 706836c1c9e..b55780182bf 100644 --- a/deps/v8/test/cctest/test-strings.cc +++ b/deps/v8/test/cctest/test-strings.cc @@ -32,12 +32,12 @@ #include <stdlib.h> -#include "v8.h" +#include "src/v8.h" -#include "api.h" -#include "factory.h" -#include "objects.h" -#include "cctest.h" +#include "src/api.h" +#include "src/factory.h" +#include "src/objects.h" +#include "test/cctest/cctest.h" // Adapted from http://en.wikipedia.org/wiki/Multiply-with-carry class MyRandomNumberGenerator { @@ -77,7 +77,7 @@ class MyRandomNumberGenerator { } bool next(double threshold) { - ASSERT(threshold >= 0.0 && threshold <= 1.0); + DCHECK(threshold >= 0.0 && threshold <= 1.0); if (threshold == 1.0) return true; if (threshold == 0.0) return false; uint32_t value = next() % 100000; @@ -1201,10 +1201,9 @@ TEST(SliceFromSlice) { TEST(AsciiArrayJoin) { // Set heap limits. - static const int K = 1024; v8::ResourceConstraints constraints; - constraints.set_max_new_space_size(2 * K * K); - constraints.set_max_old_space_size(4 * K * K); + constraints.set_max_semi_space_size(1); + constraints.set_max_old_space_size(4); v8::SetResourceConstraints(CcTest::isolate(), &constraints); // String s is made of 2^17 = 131072 'c' characters and a is an array diff --git a/deps/v8/test/cctest/test-strtod.cc b/deps/v8/test/cctest/test-strtod.cc index bebf4d14b06..7c1118603e8 100644 --- a/deps/v8/test/cctest/test-strtod.cc +++ b/deps/v8/test/cctest/test-strtod.cc @@ -27,14 +27,14 @@ #include <stdlib.h> -#include "v8.h" +#include "src/v8.h" -#include "bignum.h" -#include "cctest.h" -#include "diy-fp.h" -#include "double.h" -#include "strtod.h" -#include "utils/random-number-generator.h" +#include "src/base/utils/random-number-generator.h" +#include "src/bignum.h" +#include "src/diy-fp.h" +#include "src/double.h" +#include "src/strtod.h" +#include "test/cctest/cctest.h" using namespace v8::internal; @@ -449,7 +449,7 @@ static const int kShortStrtodRandomCount = 2; static const int kLargeStrtodRandomCount = 2; TEST(RandomStrtod) { - RandomNumberGenerator rng; + v8::base::RandomNumberGenerator rng; char buffer[kBufferSize]; for (int length = 1; length < 15; length++) { for (int i = 0; i < kShortStrtodRandomCount; ++i) { diff --git a/deps/v8/test/cctest/test-symbols.cc b/deps/v8/test/cctest/test-symbols.cc index f0d0ed1606c..066c9970376 100644 --- a/deps/v8/test/cctest/test-symbols.cc +++ b/deps/v8/test/cctest/test-symbols.cc @@ -5,10 +5,11 @@ // of ConsStrings. These operations may not be very fast, but they // should be possible without getting errors due to too deep recursion. 
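The AsciiArrayJoin hunk above tracks an API change: the byte-denominated max_new_space_size gives way to max_semi_space_size and max_old_space_size expressed in MB, which is why the 2 * K * K arithmetic disappears while the intended 2 MB/4 MB limits stay the same. Usage in this vintage of the API, as a sketch (constraints must be applied before the isolate allocates, as the test does):

    // Hypothetical helper; the MB units are inferred from the test's intent
    // to preserve the old byte-valued limits.
    void ClampHeapForTest(v8::Isolate* isolate) {
      v8::ResourceConstraints constraints;
      constraints.set_max_semi_space_size(1);  // 1 MB per new-space semi-space.
      constraints.set_max_old_space_size(4);   // 4 MB of old space.
      v8::SetResourceConstraints(isolate, &constraints);
    }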
-#include "v8.h" +#include "src/v8.h" -#include "cctest.h" -#include "objects.h" +#include "src/objects.h" +#include "src/ostreams.h" +#include "test/cctest/cctest.h" using namespace v8::internal; @@ -21,16 +22,16 @@ TEST(Create) { const int kNumSymbols = 30; Handle<Symbol> symbols[kNumSymbols]; + OFStream os(stdout); for (int i = 0; i < kNumSymbols; ++i) { symbols[i] = isolate->factory()->NewSymbol(); CHECK(symbols[i]->IsName()); CHECK(symbols[i]->IsSymbol()); CHECK(symbols[i]->HasHashCode()); CHECK_GT(symbols[i]->Hash(), 0); - symbols[i]->ShortPrint(); - PrintF("\n"); + os << Brief(*symbols[i]) << "\n"; #if OBJECT_PRINT - symbols[i]->Print(); + symbols[i]->Print(os); #endif #if VERIFY_HEAP symbols[i]->ObjectVerify(); diff --git a/deps/v8/test/cctest/test-thread-termination.cc b/deps/v8/test/cctest/test-thread-termination.cc index 569ee95c644..a5ed7ab9bfe 100644 --- a/deps/v8/test/cctest/test-thread-termination.cc +++ b/deps/v8/test/cctest/test-thread-termination.cc @@ -25,12 +25,13 @@ // (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE // OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. -#include "v8.h" -#include "platform.h" -#include "cctest.h" +#include "src/v8.h" +#include "test/cctest/cctest.h" +#include "src/base/platform/platform.h" -v8::internal::Semaphore* semaphore = NULL; + +v8::base::Semaphore* semaphore = NULL; void Signal(const v8::FunctionCallbackInfo<v8::Value>& args) { @@ -158,11 +159,11 @@ TEST(TerminateOnlyV8ThreadFromThreadItselfNoLoop) { } -class TerminatorThread : public v8::internal::Thread { +class TerminatorThread : public v8::base::Thread { public: explicit TerminatorThread(i::Isolate* isolate) - : Thread("TerminatorThread"), - isolate_(reinterpret_cast<v8::Isolate*>(isolate)) { } + : Thread(Options("TerminatorThread")), + isolate_(reinterpret_cast<v8::Isolate*>(isolate)) {} void Run() { semaphore->Wait(); CHECK(!v8::V8::IsExecutionTerminating(isolate_)); @@ -177,7 +178,7 @@ class TerminatorThread : public v8::internal::Thread { // Test that a single thread of JavaScript execution can be terminated // from the side by another thread. TEST(TerminateOnlyV8ThreadFromOtherThread) { - semaphore = new v8::internal::Semaphore(0); + semaphore = new v8::base::Semaphore(0); TerminatorThread thread(CcTest::i_isolate()); thread.Start(); @@ -358,3 +359,103 @@ TEST(TerminateCancelTerminateFromThreadItself) { // Check that execution completed with correct return value. CHECK(v8::Script::Compile(source)->Run()->Equals(v8_str("completed"))); } + + +void MicrotaskShouldNotRun(const v8::FunctionCallbackInfo<v8::Value>& info) { + CHECK(false); +} + + +void MicrotaskLoopForever(const v8::FunctionCallbackInfo<v8::Value>& info) { + v8::Isolate* isolate = info.GetIsolate(); + v8::HandleScope scope(isolate); + // Enqueue another should-not-run task to ensure we clean out the queue + // when we terminate. 
+ isolate->EnqueueMicrotask(v8::Function::New(isolate, MicrotaskShouldNotRun)); + CompileRun("terminate(); while (true) { }"); + CHECK(v8::V8::IsExecutionTerminating()); +} + + +TEST(TerminateFromOtherThreadWhileMicrotaskRunning) { + semaphore = new v8::base::Semaphore(0); + TerminatorThread thread(CcTest::i_isolate()); + thread.Start(); + + v8::Isolate* isolate = CcTest::isolate(); + isolate->SetAutorunMicrotasks(false); + v8::HandleScope scope(isolate); + v8::Handle<v8::ObjectTemplate> global = + CreateGlobalTemplate(CcTest::isolate(), Signal, DoLoop); + v8::Handle<v8::Context> context = + v8::Context::New(CcTest::isolate(), NULL, global); + v8::Context::Scope context_scope(context); + isolate->EnqueueMicrotask(v8::Function::New(isolate, MicrotaskLoopForever)); + // The second task should never be run because we bail out if we're + // terminating. + isolate->EnqueueMicrotask(v8::Function::New(isolate, MicrotaskShouldNotRun)); + isolate->RunMicrotasks(); + + v8::V8::CancelTerminateExecution(isolate); + isolate->RunMicrotasks(); // should not run MicrotaskShouldNotRun + + thread.Join(); + delete semaphore; + semaphore = NULL; +} + + +static int callback_counter = 0; + + +static void CounterCallback(v8::Isolate* isolate, void* data) { + callback_counter++; +} + + +TEST(PostponeTerminateException) { + v8::Isolate* isolate = CcTest::isolate(); + v8::HandleScope scope(isolate); + v8::Handle<v8::ObjectTemplate> global = + CreateGlobalTemplate(CcTest::isolate(), TerminateCurrentThread, DoLoop); + v8::Handle<v8::Context> context = + v8::Context::New(CcTest::isolate(), NULL, global); + v8::Context::Scope context_scope(context); + + v8::TryCatch try_catch; + static const char* terminate_and_loop = + "terminate(); for (var i = 0; i < 10000; i++);"; + + { // Postpone terminate execution interrupts. + i::PostponeInterruptsScope p1(CcTest::i_isolate(), + i::StackGuard::TERMINATE_EXECUTION) ; + + // API interrupts should still be triggered. + CcTest::isolate()->RequestInterrupt(&CounterCallback, NULL); + CHECK_EQ(0, callback_counter); + CompileRun(terminate_and_loop); + CHECK(!try_catch.HasTerminated()); + CHECK_EQ(1, callback_counter); + + { // Postpone API interrupts as well. + i::PostponeInterruptsScope p2(CcTest::i_isolate(), + i::StackGuard::API_INTERRUPT); + + // None of the two interrupts should trigger. + CcTest::isolate()->RequestInterrupt(&CounterCallback, NULL); + CompileRun(terminate_and_loop); + CHECK(!try_catch.HasTerminated()); + CHECK_EQ(1, callback_counter); + } + + // Now the previously requested API interrupt should trigger. + CompileRun(terminate_and_loop); + CHECK(!try_catch.HasTerminated()); + CHECK_EQ(2, callback_counter); + } + + // Now the previously requested terminate execution interrupt should trigger. + CompileRun("for (var i = 0; i < 10000; i++);"); + CHECK(try_catch.HasTerminated()); + CHECK_EQ(2, callback_counter); +} diff --git a/deps/v8/test/cctest/test-threads.cc b/deps/v8/test/cctest/test-threads.cc index 24fb1d1d75d..12042261f40 100644 --- a/deps/v8/test/cctest/test-threads.cc +++ b/deps/v8/test/cctest/test-threads.cc @@ -25,12 +25,11 @@ // (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE // OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. 
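PostponeTerminateException above leans on v8::Isolate::RequestInterrupt, which schedules a callback to run on the JavaScript thread the next time execution crosses an interrupt check; a PostponeInterruptsScope delays that delivery. The basic shape, with a hypothetical counter as the user-data pointer:

    // |data| is an opaque user pointer handed back to the callback.
    static void OnInterrupt(v8::Isolate* isolate, void* data) {
      ++*static_cast<int*>(data);  // Runs on the JS thread, not the requester's.
    }

    static int interrupt_count = 0;

    void ArmInterrupt(v8::Isolate* isolate) {
      isolate->RequestInterrupt(&OnInterrupt, &interrupt_count);
      // Delivery happens once running JS reaches an interrupt check, e.g. the
      // back edge of the for-loop in the script the test compiles.
    }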
-#include "v8.h" +#include "src/v8.h" +#include "test/cctest/cctest.h" -#include "platform.h" -#include "isolate.h" - -#include "cctest.h" +#include "src/base/platform/platform.h" +#include "src/isolate.h" enum Turn { @@ -43,9 +42,9 @@ enum Turn { static Turn turn = FILL_CACHE; -class ThreadA : public v8::internal::Thread { +class ThreadA : public v8::base::Thread { public: - ThreadA() : Thread("ThreadA") { } + ThreadA() : Thread(Options("ThreadA")) {} void Run() { v8::Isolate* isolate = CcTest::isolate(); v8::Locker locker(isolate); @@ -83,9 +82,9 @@ class ThreadA : public v8::internal::Thread { }; -class ThreadB : public v8::internal::Thread { +class ThreadB : public v8::base::Thread { public: - ThreadB() : Thread("ThreadB") { } + ThreadB() : Thread(Options("ThreadB")) {} void Run() { do { { @@ -123,16 +122,16 @@ TEST(JSFunctionResultCachesInTwoThreads) { CHECK_EQ(DONE, turn); } -class ThreadIdValidationThread : public v8::internal::Thread { +class ThreadIdValidationThread : public v8::base::Thread { public: - ThreadIdValidationThread(i::Thread* thread_to_start, - i::List<i::ThreadId>* refs, - unsigned int thread_no, - i::Semaphore* semaphore) - : Thread("ThreadRefValidationThread"), - refs_(refs), thread_no_(thread_no), thread_to_start_(thread_to_start), - semaphore_(semaphore) { - } + ThreadIdValidationThread(v8::base::Thread* thread_to_start, + i::List<i::ThreadId>* refs, unsigned int thread_no, + v8::base::Semaphore* semaphore) + : Thread(Options("ThreadRefValidationThread")), + refs_(refs), + thread_no_(thread_no), + thread_to_start_(thread_to_start), + semaphore_(semaphore) {} void Run() { i::ThreadId thread_id = i::ThreadId::Current(); @@ -150,8 +149,8 @@ class ThreadIdValidationThread : public v8::internal::Thread { private: i::List<i::ThreadId>* refs_; int thread_no_; - i::Thread* thread_to_start_; - i::Semaphore* semaphore_; + v8::base::Thread* thread_to_start_; + v8::base::Semaphore* semaphore_; }; @@ -159,7 +158,7 @@ TEST(ThreadIdValidation) { const int kNThreads = 100; i::List<ThreadIdValidationThread*> threads(kNThreads); i::List<i::ThreadId> refs(kNThreads); - i::Semaphore semaphore(0); + v8::base::Semaphore semaphore(0); ThreadIdValidationThread* prev = NULL; for (int i = kNThreads - 1; i >= 0; i--) { ThreadIdValidationThread* newThread = @@ -176,19 +175,3 @@ TEST(ThreadIdValidation) { delete threads[i]; } } - - -class ThreadC : public v8::internal::Thread { - public: - ThreadC() : Thread("ThreadC") { } - void Run() { - Join(); - } -}; - - -TEST(ThreadJoinSelf) { - ThreadC thread; - thread.Start(); - thread.Join(); -} diff --git a/deps/v8/test/cctest/test-time.cc b/deps/v8/test/cctest/test-time.cc deleted file mode 100644 index 1ef9e08f65e..00000000000 --- a/deps/v8/test/cctest/test-time.cc +++ /dev/null @@ -1,197 +0,0 @@ -// Copyright 2013 the V8 project authors. All rights reserved. -// Redistribution and use in source and binary forms, with or without -// modification, are permitted provided that the following conditions are -// met: -// -// * Redistributions of source code must retain the above copyright -// notice, this list of conditions and the following disclaimer. -// * Redistributions in binary form must reproduce the above -// copyright notice, this list of conditions and the following -// disclaimer in the documentation and/or other materials provided -// with the distribution. -// * Neither the name of Google Inc. 
nor the names of its -// contributors may be used to endorse or promote products derived -// from this software without specific prior written permission. -// -// THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS -// "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT -// LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR -// A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT -// OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, -// SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT -// LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, -// DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY -// THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT -// (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE -// OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. - -#include "v8.h" - -#if V8_OS_POSIX -#include <sys/time.h> // NOLINT -#endif - -#include "cctest.h" -#if V8_OS_WIN -#include "win32-headers.h" -#endif - -using namespace v8::internal; - - -TEST(TimeDeltaFromAndIn) { - CHECK(TimeDelta::FromDays(2) == TimeDelta::FromHours(48)); - CHECK(TimeDelta::FromHours(3) == TimeDelta::FromMinutes(180)); - CHECK(TimeDelta::FromMinutes(2) == TimeDelta::FromSeconds(120)); - CHECK(TimeDelta::FromSeconds(2) == TimeDelta::FromMilliseconds(2000)); - CHECK(TimeDelta::FromMilliseconds(2) == TimeDelta::FromMicroseconds(2000)); - CHECK_EQ(static_cast<int>(13), TimeDelta::FromDays(13).InDays()); - CHECK_EQ(static_cast<int>(13), TimeDelta::FromHours(13).InHours()); - CHECK_EQ(static_cast<int>(13), TimeDelta::FromMinutes(13).InMinutes()); - CHECK_EQ(static_cast<int64_t>(13), TimeDelta::FromSeconds(13).InSeconds()); - CHECK_EQ(13.0, TimeDelta::FromSeconds(13).InSecondsF()); - CHECK_EQ(static_cast<int64_t>(13), - TimeDelta::FromMilliseconds(13).InMilliseconds()); - CHECK_EQ(13.0, TimeDelta::FromMilliseconds(13).InMillisecondsF()); - CHECK_EQ(static_cast<int64_t>(13), - TimeDelta::FromMicroseconds(13).InMicroseconds()); -} - - -#if V8_OS_MACOSX -TEST(TimeDeltaFromMachTimespec) { - TimeDelta null = TimeDelta(); - CHECK(null == TimeDelta::FromMachTimespec(null.ToMachTimespec())); - TimeDelta delta1 = TimeDelta::FromMilliseconds(42); - CHECK(delta1 == TimeDelta::FromMachTimespec(delta1.ToMachTimespec())); - TimeDelta delta2 = TimeDelta::FromDays(42); - CHECK(delta2 == TimeDelta::FromMachTimespec(delta2.ToMachTimespec())); -} -#endif - - -TEST(TimeJsTime) { - Time t = Time::FromJsTime(700000.3); - CHECK_EQ(700000.3, t.ToJsTime()); -} - - -#if V8_OS_POSIX -TEST(TimeFromTimespec) { - Time null; - CHECK(null.IsNull()); - CHECK(null == Time::FromTimespec(null.ToTimespec())); - Time now = Time::Now(); - CHECK(now == Time::FromTimespec(now.ToTimespec())); - Time now_sys = Time::NowFromSystemTime(); - CHECK(now_sys == Time::FromTimespec(now_sys.ToTimespec())); - Time unix_epoch = Time::UnixEpoch(); - CHECK(unix_epoch == Time::FromTimespec(unix_epoch.ToTimespec())); - Time max = Time::Max(); - CHECK(max.IsMax()); - CHECK(max == Time::FromTimespec(max.ToTimespec())); -} - - -TEST(TimeFromTimeval) { - Time null; - CHECK(null.IsNull()); - CHECK(null == Time::FromTimeval(null.ToTimeval())); - Time now = Time::Now(); - CHECK(now == Time::FromTimeval(now.ToTimeval())); - Time now_sys = Time::NowFromSystemTime(); - CHECK(now_sys == Time::FromTimeval(now_sys.ToTimeval())); - Time unix_epoch = Time::UnixEpoch(); - CHECK(unix_epoch == 
Time::FromTimeval(unix_epoch.ToTimeval())); - Time max = Time::Max(); - CHECK(max.IsMax()); - CHECK(max == Time::FromTimeval(max.ToTimeval())); -} -#endif - - -#if V8_OS_WIN -TEST(TimeFromFiletime) { - Time null; - CHECK(null.IsNull()); - CHECK(null == Time::FromFiletime(null.ToFiletime())); - Time now = Time::Now(); - CHECK(now == Time::FromFiletime(now.ToFiletime())); - Time now_sys = Time::NowFromSystemTime(); - CHECK(now_sys == Time::FromFiletime(now_sys.ToFiletime())); - Time unix_epoch = Time::UnixEpoch(); - CHECK(unix_epoch == Time::FromFiletime(unix_epoch.ToFiletime())); - Time max = Time::Max(); - CHECK(max.IsMax()); - CHECK(max == Time::FromFiletime(max.ToFiletime())); -} -#endif - - -TEST(TimeTicksIsMonotonic) { - TimeTicks previous_normal_ticks; - TimeTicks previous_highres_ticks; - ElapsedTimer timer; - timer.Start(); - while (!timer.HasExpired(TimeDelta::FromMilliseconds(100))) { - TimeTicks normal_ticks = TimeTicks::Now(); - TimeTicks highres_ticks = TimeTicks::HighResolutionNow(); - CHECK_GE(normal_ticks, previous_normal_ticks); - CHECK_GE((normal_ticks - previous_normal_ticks).InMicroseconds(), 0); - CHECK_GE(highres_ticks, previous_highres_ticks); - CHECK_GE((highres_ticks - previous_highres_ticks).InMicroseconds(), 0); - previous_normal_ticks = normal_ticks; - previous_highres_ticks = highres_ticks; - } -} - - -template <typename T> -static void ResolutionTest(T (*Now)(), TimeDelta target_granularity) { - // We're trying to measure that intervals increment in a VERY small amount - // of time -- according to the specified target granularity. Unfortunately, - // if we happen to have a context switch in the middle of our test, the - // context switch could easily exceed our limit. So, we iterate on this - // several times. As long as we're able to detect the fine-granularity - // timers at least once, then the test has succeeded. - static const TimeDelta kExpirationTimeout = TimeDelta::FromSeconds(1); - ElapsedTimer timer; - timer.Start(); - TimeDelta delta; - do { - T start = Now(); - T now = start; - // Loop until we can detect that the clock has changed. Non-HighRes timers - // will increment in chunks, i.e. 15ms. By spinning until we see a clock - // change, we detect the minimum time between measurements. - do { - now = Now(); - delta = now - start; - } while (now <= start); - CHECK_NE(static_cast<int64_t>(0), delta.InMicroseconds()); - } while (delta > target_granularity && !timer.HasExpired(kExpirationTimeout)); - CHECK_LE(delta, target_granularity); -} - - -TEST(TimeNowResolution) { - // We assume that Time::Now() has at least 16ms resolution. - static const TimeDelta kTargetGranularity = TimeDelta::FromMilliseconds(16); - ResolutionTest<Time>(&Time::Now, kTargetGranularity); -} - - -TEST(TimeTicksNowResolution) { - // We assume that TimeTicks::Now() has at least 16ms resolution. - static const TimeDelta kTargetGranularity = TimeDelta::FromMilliseconds(16); - ResolutionTest<TimeTicks>(&TimeTicks::Now, kTargetGranularity); -} - - -TEST(TimeTicksHighResolutionNowResolution) { - if (!TimeTicks::IsHighResolutionClockWorking()) return; - - // We assume that TimeTicks::HighResolutionNow() has sub-ms resolution. 
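The deleted ResolutionTest spins until the clock reading changes, so the measured delta bounds the timer's granularity rather than any elapsed work, and its outer retry loop guards against a context switch inflating a single sample. The same inner measurement with std::chrono:

    #include <chrono>
    #include <cstdio>

    int main() {
      using Clock = std::chrono::steady_clock;
      const Clock::time_point start = Clock::now();
      Clock::time_point now = start;
      // Spin until the reading visibly advances; the delta is then at most
      // one tick of the underlying timer, i.e. its granularity.
      do {
        now = Clock::now();
      } while (now <= start);
      const long long ns =
          std::chrono::duration_cast<std::chrono::nanoseconds>(now - start)
              .count();
      std::printf("clock granularity <= %lld ns\n", ns);
    }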
- static const TimeDelta kTargetGranularity = TimeDelta::FromMilliseconds(1); - ResolutionTest<TimeTicks>(&TimeTicks::HighResolutionNow, kTargetGranularity); -} diff --git a/deps/v8/test/cctest/test-types.cc b/deps/v8/test/cctest/test-types.cc index 47868f6484b..8c5e41ca10e 100644 --- a/deps/v8/test/cctest/test-types.cc +++ b/deps/v8/test/cctest/test-types.cc @@ -1,35 +1,13 @@ // Copyright 2013 the V8 project authors. All rights reserved. -// Redistribution and use in source and binary forms, with or without -// modification, are permitted provided that the following conditions are -// met: -// -// * Redistributions of source code must retain the above copyright -// notice, this list of conditions and the following disclaimer. -// * Redistributions in binary form must reproduce the above -// copyright notice, this list of conditions and the following -// disclaimer in the documentation and/or other materials provided -// with the distribution. -// * Neither the name of Google Inc. nor the names of its -// contributors may be used to endorse or promote products derived -// from this software without specific prior written permission. -// -// THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS -// "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT -// LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR -// A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT -// OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, -// SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT -// LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, -// DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY -// THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT -// (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE -// OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. +// Use of this source code is governed by a BSD-style license that can be +// found in the LICENSE file. 
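The ZoneRep helpers in the next hunk decode V8's zone-allocated types: a bitset type is a small integer smuggled into a Type* with the low bit set, which can never collide with a genuine (at least 2-aligned) struct pointer, and AsBitset simply shifts the tag back out. The trick in isolation, with generic names:

    #include <cassert>
    #include <cstdint>

    // Struct pointers are at least 2-aligned, so bit 0 is free to mark
    // "this word is a small integer, not a pointer".
    static void* TagBitset(int bits) {
      return reinterpret_cast<void*>((static_cast<intptr_t>(bits) << 1) | 1);
    }
    static bool IsBitset(void* t) { return reinterpret_cast<intptr_t>(t) & 1; }
    static int AsBitset(void* t) {
      return static_cast<int>(reinterpret_cast<intptr_t>(t) >> 1);
    }

    int main() {
      void* t = TagBitset(42);
      assert(IsBitset(t));
      assert(AsBitset(t) == 42);
    }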
#include <vector> -#include "cctest.h" -#include "types.h" -#include "utils/random-number-generator.h" +#include "src/hydrogen-types.h" +#include "src/isolate-inl.h" +#include "src/types.h" +#include "test/cctest/cctest.h" using namespace v8::internal; @@ -41,11 +19,7 @@ struct ZoneRep { return !IsBitset(t) && reinterpret_cast<intptr_t>(AsStruct(t)[0]) == tag; } static bool IsBitset(Type* t) { return reinterpret_cast<intptr_t>(t) & 1; } - static bool IsClass(Type* t) { return IsStruct(t, 0); } - static bool IsConstant(Type* t) { return IsStruct(t, 1); } - static bool IsArray(Type* t) { return IsStruct(t, 2); } - static bool IsFunction(Type* t) { return IsStruct(t, 3); } - static bool IsUnion(Type* t) { return IsStruct(t, 4); } + static bool IsUnion(Type* t) { return IsStruct(t, 6); } static Struct* AsStruct(Type* t) { return reinterpret_cast<Struct*>(t); @@ -53,12 +27,6 @@ struct ZoneRep { static int AsBitset(Type* t) { return static_cast<int>(reinterpret_cast<intptr_t>(t) >> 1); } - static Map* AsClass(Type* t) { - return *static_cast<Map**>(AsStruct(t)[3]); - } - static Object* AsConstant(Type* t) { - return *static_cast<Object**>(AsStruct(t)[3]); - } static Struct* AsUnion(Type* t) { return AsStruct(t); } @@ -67,6 +35,13 @@ struct ZoneRep { } static Zone* ToRegion(Zone* zone, Isolate* isolate) { return zone; } + + struct BitsetType : Type::BitsetType { + using Type::BitsetType::New; + using Type::BitsetType::Glb; + using Type::BitsetType::Lub; + using Type::BitsetType::InherentLub; + }; }; @@ -77,29 +52,32 @@ struct HeapRep { return t->IsFixedArray() && Smi::cast(AsStruct(t)->get(0))->value() == tag; } static bool IsBitset(Handle<HeapType> t) { return t->IsSmi(); } - static bool IsClass(Handle<HeapType> t) { return t->IsMap(); } - static bool IsConstant(Handle<HeapType> t) { return t->IsBox(); } - static bool IsArray(Handle<HeapType> t) { return IsStruct(t, 2); } - static bool IsFunction(Handle<HeapType> t) { return IsStruct(t, 3); } - static bool IsUnion(Handle<HeapType> t) { return IsStruct(t, 4); } + static bool IsUnion(Handle<HeapType> t) { return IsStruct(t, 6); } static Struct* AsStruct(Handle<HeapType> t) { return FixedArray::cast(*t); } static int AsBitset(Handle<HeapType> t) { return Smi::cast(*t)->value(); } - static Map* AsClass(Handle<HeapType> t) { return Map::cast(*t); } - static Object* AsConstant(Handle<HeapType> t) { - return Box::cast(*t)->value(); - } static Struct* AsUnion(Handle<HeapType> t) { return AsStruct(t); } static int Length(Struct* structured) { return structured->length() - 1; } static Isolate* ToRegion(Zone* zone, Isolate* isolate) { return isolate; } + + struct BitsetType : HeapType::BitsetType { + using HeapType::BitsetType::New; + using HeapType::BitsetType::Glb; + using HeapType::BitsetType::Lub; + using HeapType::BitsetType::InherentLub; + static int Glb(Handle<HeapType> type) { return Glb(*type); } + static int Lub(Handle<HeapType> type) { return Lub(*type); } + static int InherentLub(Handle<HeapType> type) { return InherentLub(*type); } + }; }; template<class Type, class TypeHandle, class Region> class Types { public: - Types(Region* region, Isolate* isolate) : region_(region) { + Types(Region* region, Isolate* isolate) + : region_(region), rng_(isolate->random_number_generator()) { #define DECLARE_TYPE(name, value) \ name = Type::name(region); \ types.push_back(name); @@ -143,7 +121,16 @@ class Types { types.push_back(Type::Constant(*it, region)); } - FloatArray = Type::Array(Float, region); + doubles.push_back(-0.0); + doubles.push_back(+0.0); + 
doubles.push_back(-std::numeric_limits<double>::infinity()); + doubles.push_back(+std::numeric_limits<double>::infinity()); + for (int i = 0; i < 10; ++i) { + doubles.push_back(rng_->NextInt()); + doubles.push_back(rng_->NextDouble() * rng_->NextInt()); + } + + NumberArray = Type::Array(Number, region); StringArray = Type::Array(String, region); AnyArray = Type::Array(Any, region); @@ -152,7 +139,7 @@ class Types { NumberFunction2 = Type::Function(Number, Number, Number, region); MethodFunction = Type::Function(String, Object, 0, region); - for (int i = 0; i < 50; ++i) { + for (int i = 0; i < 30; ++i) { types.push_back(Fuzz()); } } @@ -183,7 +170,7 @@ class Types { TypeHandle ArrayConstant; TypeHandle UninitializedConstant; - TypeHandle FloatArray; + TypeHandle NumberArray; TypeHandle StringArray; TypeHandle AnyArray; @@ -195,9 +182,27 @@ class Types { typedef std::vector<TypeHandle> TypeVector; typedef std::vector<Handle<i::Map> > MapVector; typedef std::vector<Handle<i::Object> > ValueVector; + typedef std::vector<double> DoubleVector; + TypeVector types; MapVector maps; ValueVector values; + DoubleVector doubles; // Some floating-point values, excluding NaN. + + // Range type helper functions, partially copied from types.cc. + // Note: dle(dmin(x,y), dmax(x,y)) holds iff neither x nor y is NaN. + bool dle(double x, double y) { + return x <= y && (x != 0 || IsMinusZero(x) || !IsMinusZero(y)); + } + bool deq(double x, double y) { + return dle(x, y) && dle(y, x); + } + double dmin(double x, double y) { + return dle(x, y) ? x : y; + } + double dmax(double x, double y) { + return dle(x, y) ? y : x; + } TypeHandle Of(Handle<i::Object> value) { return Type::Of(value, region_); @@ -211,6 +216,10 @@ class Types { return Type::Constant(value, region_); } + TypeHandle Range(double min, double max) { + return Type::Range(min, max, region_); + } + TypeHandle Class(Handle<i::Map> map) { return Type::Class(map, region_); } @@ -246,18 +255,18 @@ class Types { } TypeHandle Random() { - return types[rng_.NextInt(static_cast<int>(types.size()))]; + return types[rng_->NextInt(static_cast<int>(types.size()))]; } TypeHandle Fuzz(int depth = 5) { - switch (rng_.NextInt(depth == 0 ? 3 : 20)) { + switch (rng_->NextInt(depth == 0 ? 
3 : 20)) { case 0: { // bitset int n = 0 #define COUNT_BITSET_TYPES(type, value) + 1 BITSET_TYPE_LIST(COUNT_BITSET_TYPES) #undef COUNT_BITSET_TYPES ; - int i = rng_.NextInt(n); + int i = rng_->NextInt(n); #define PICK_BITSET_TYPE(type, value) \ if (i-- == 0) return Type::type(region_); BITSET_TYPE_LIST(PICK_BITSET_TYPE) @@ -265,31 +274,37 @@ class Types { UNREACHABLE(); } case 1: { // class - int i = rng_.NextInt(static_cast<int>(maps.size())); + int i = rng_->NextInt(static_cast<int>(maps.size())); return Type::Class(maps[i], region_); } case 2: { // constant - int i = rng_.NextInt(static_cast<int>(values.size())); + int i = rng_->NextInt(static_cast<int>(values.size())); return Type::Constant(values[i], region_); } - case 3: { // array + case 3: { // context + int depth = rng_->NextInt(3); + TypeHandle type = Type::Internal(region_); + for (int i = 0; i < depth; ++i) type = Type::Context(type, region_); + return type; + } + case 4: { // array TypeHandle element = Fuzz(depth / 2); return Type::Array(element, region_); } - case 4: case 5: case 6: { // function TypeHandle result = Fuzz(depth / 2); TypeHandle receiver = Fuzz(depth / 2); - int arity = rng_.NextInt(3); + int arity = rng_->NextInt(3); TypeHandle type = Type::Function(result, receiver, arity, region_); for (int i = 0; i < type->AsFunction()->Arity(); ++i) { - TypeHandle parameter = Fuzz(depth - 1); + TypeHandle parameter = Fuzz(depth / 2); type->AsFunction()->InitParameter(i, parameter); } + return type; } default: { // union - int n = rng_.NextInt(10); + int n = rng_->NextInt(10); TypeHandle type = None; for (int i = 0; i < n; ++i) { TypeHandle operand = Fuzz(depth - 1); @@ -301,9 +316,11 @@ class Types { UNREACHABLE(); } + Region* region() { return region_; } + private: Region* region_; - RandomNumberGenerator rng_; + v8::base::RandomNumberGenerator* rng_; }; @@ -313,6 +330,7 @@ struct Tests : Rep { typedef typename TypesInstance::TypeVector::iterator TypeIterator; typedef typename TypesInstance::MapVector::iterator MapIterator; typedef typename TypesInstance::ValueVector::iterator ValueIterator; + typedef typename TypesInstance::DoubleVector::iterator DoubleIterator; Isolate* isolate; HandleScope scope; @@ -328,19 +346,13 @@ struct Tests : Rep { bool Equal(TypeHandle type1, TypeHandle type2) { return - type1->Is(type2) && type2->Is(type1) && + type1->Equals(type2) && Rep::IsBitset(type1) == Rep::IsBitset(type2) && - Rep::IsClass(type1) == Rep::IsClass(type2) && - Rep::IsConstant(type1) == Rep::IsConstant(type2) && Rep::IsUnion(type1) == Rep::IsUnion(type2) && type1->NumClasses() == type2->NumClasses() && type1->NumConstants() == type2->NumConstants() && (!Rep::IsBitset(type1) || Rep::AsBitset(type1) == Rep::AsBitset(type2)) && - (!Rep::IsClass(type1) || - Rep::AsClass(type1) == Rep::AsClass(type2)) && - (!Rep::IsConstant(type1) || - Rep::AsConstant(type1) == Rep::AsConstant(type2)) && (!Rep::IsUnion(type1) || Rep::Length(Rep::AsUnion(type1)) == Rep::Length(Rep::AsUnion(type2))); } @@ -460,7 +472,7 @@ struct Tests : Rep { for (MapIterator mt = T.maps.begin(); mt != T.maps.end(); ++mt) { Handle<i::Map> map = *mt; TypeHandle type = T.Class(map); - CHECK(this->IsClass(type)); + CHECK(type->IsClass()); } // Map attribute @@ -487,7 +499,7 @@ struct Tests : Rep { for (ValueIterator vt = T.values.begin(); vt != T.values.end(); ++vt) { Handle<i::Object> value = *vt; TypeHandle type = T.Constant(value); - CHECK(this->IsConstant(type)); + CHECK(type->IsConstant()); } // Value attribute @@ -507,6 +519,93 @@ struct Tests : Rep { 
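The Fuzz() generator above builds random types recursively, shrinking a depth budget for composite cases (halving it for arrays and function parameters, decrementing for unions) so generation always bottoms out at leaves. Its skeleton, detached from the type system and using hypothetical expression leaves:

    #include <cstdlib>
    #include <string>

    // Leaves only once the budget hits zero; composites spend budget, so the
    // recursion always terminates.
    std::string FuzzExpr(int depth = 5) {
      switch (std::rand() % (depth == 0 ? 2 : 4)) {
        case 0: return "0";                              // leaf
        case 1: return "x";                              // leaf
        case 2:                                          // binary: halve budget
          return "(" + FuzzExpr(depth / 2) + " + " + FuzzExpr(depth / 2) + ")";
        default:                                         // unary: decrement
          return "-" + FuzzExpr(depth - 1);
      }
    }

    int main() {
      for (int i = 0; i < 5; ++i) FuzzExpr();
      return 0;
    }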
CHECK(Equal(type1, type2) == (*value1 == *value2)); } } + + // Typing of numbers + Factory* fac = isolate->factory(); + CHECK(T.Constant(fac->NewNumber(0))->Is(T.UnsignedSmall)); + CHECK(T.Constant(fac->NewNumber(1))->Is(T.UnsignedSmall)); + CHECK(T.Constant(fac->NewNumber(0x3fffffff))->Is(T.UnsignedSmall)); + CHECK(T.Constant(fac->NewNumber(-1))->Is(T.OtherSignedSmall)); + CHECK(T.Constant(fac->NewNumber(-0x3fffffff))->Is(T.OtherSignedSmall)); + CHECK(T.Constant(fac->NewNumber(-0x40000000))->Is(T.OtherSignedSmall)); + if (SmiValuesAre31Bits()) { + CHECK(T.Constant(fac->NewNumber(0x40000000))->Is(T.OtherUnsigned31)); + CHECK(T.Constant(fac->NewNumber(0x7fffffff))->Is(T.OtherUnsigned31)); + CHECK(T.Constant(fac->NewNumber(-0x40000001))->Is(T.OtherSigned32)); + CHECK(T.Constant(fac->NewNumber(-0x7fffffff))->Is(T.OtherSigned32)); + CHECK(T.Constant(fac->NewNumber(-0x7fffffff-1))->Is(T.OtherSigned32)); + } else { + CHECK(SmiValuesAre32Bits()); + CHECK(T.Constant(fac->NewNumber(0x40000000))->Is(T.UnsignedSmall)); + CHECK(T.Constant(fac->NewNumber(0x7fffffff))->Is(T.UnsignedSmall)); + CHECK(!T.Constant(fac->NewNumber(0x40000000))->Is(T.OtherUnsigned31)); + CHECK(!T.Constant(fac->NewNumber(0x7fffffff))->Is(T.OtherUnsigned31)); + CHECK(T.Constant(fac->NewNumber(-0x40000001))->Is(T.OtherSignedSmall)); + CHECK(T.Constant(fac->NewNumber(-0x7fffffff))->Is(T.OtherSignedSmall)); + CHECK(T.Constant(fac->NewNumber(-0x7fffffff-1))->Is(T.OtherSignedSmall)); + CHECK(!T.Constant(fac->NewNumber(-0x40000001))->Is(T.OtherSigned32)); + CHECK(!T.Constant(fac->NewNumber(-0x7fffffff))->Is(T.OtherSigned32)); + CHECK(!T.Constant(fac->NewNumber(-0x7fffffff-1))->Is(T.OtherSigned32)); + } + CHECK(T.Constant(fac->NewNumber(0x80000000u))->Is(T.OtherUnsigned32)); + CHECK(T.Constant(fac->NewNumber(0xffffffffu))->Is(T.OtherUnsigned32)); + CHECK(T.Constant(fac->NewNumber(0xffffffffu+1.0))->Is(T.OtherNumber)); + CHECK(T.Constant(fac->NewNumber(-0x7fffffff-2.0))->Is(T.OtherNumber)); + CHECK(T.Constant(fac->NewNumber(0.1))->Is(T.OtherNumber)); + CHECK(T.Constant(fac->NewNumber(-10.1))->Is(T.OtherNumber)); + CHECK(T.Constant(fac->NewNumber(10e60))->Is(T.OtherNumber)); + CHECK(T.Constant(fac->NewNumber(-1.0*0.0))->Is(T.MinusZero)); + CHECK(T.Constant(fac->NewNumber(v8::base::OS::nan_value()))->Is(T.NaN)); + CHECK(T.Constant(fac->NewNumber(V8_INFINITY))->Is(T.OtherNumber)); + CHECK(T.Constant(fac->NewNumber(-V8_INFINITY))->Is(T.OtherNumber)); + } + + void Range() { + // Constructor + for (DoubleIterator i = T.doubles.begin(); i != T.doubles.end(); ++i) { + for (DoubleIterator j = T.doubles.begin(); j != T.doubles.end(); ++j) { + double min = T.dmin(*i, *j); + double max = T.dmax(*i, *j); + TypeHandle type = T.Range(min, max); + CHECK(type->IsRange()); + } + } + + // Range attributes + for (DoubleIterator i = T.doubles.begin(); i != T.doubles.end(); ++i) { + for (DoubleIterator j = T.doubles.begin(); j != T.doubles.end(); ++j) { + double min = T.dmin(*i, *j); + double max = T.dmax(*i, *j); + printf("RangeType: min, max = %f, %f\n", min, max); + TypeHandle type = T.Range(min, max); + printf("RangeType: Min, Max = %f, %f\n", + type->AsRange()->Min(), type->AsRange()->Max()); + CHECK(min == type->AsRange()->Min()); + CHECK(max == type->AsRange()->Max()); + } + } + +// TODO(neis): enable once subtyping is updated. 
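These Range tests rest on the dle/dmin/dmax helpers declared earlier in the hunk, which refine plain <= so that -0 orders strictly below +0 even though IEEE comparison calls them equal, and which stay total only when NaN is excluded. A self-contained check of that ordering:

    #include <cassert>
    #include <cmath>

    static bool IsMinusZero(double x) { return x == 0 && std::signbit(x); }

    // "double less-or-equal" with -0 strictly below +0; matches the helper
    // in the test fixture. Only meaningful when neither argument is NaN.
    static bool dle(double x, double y) {
      return x <= y && (x != 0 || IsMinusZero(x) || !IsMinusZero(y));
    }

    int main() {
      assert(dle(-0.0, 0.0));   // -0 <= +0 holds...
      assert(!dle(0.0, -0.0));  // ...but +0 <= -0 does not.
      assert(dle(-0.0, -0.0));  // Still reflexive on -0.
    }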
+// // Functionality & Injectivity: Range(min1, max1) = Range(min2, max2) <=> +// // min1 = min2 /\ max1 = max2 +// for (DoubleIterator i1 = T.doubles.begin(); i1 != T.doubles.end(); ++i1) { +// for (DoubleIterator j1 = T.doubles.begin(); j1 != T.doubles.end(); ++j1) { +// for (DoubleIterator i2 = T.doubles.begin(); +// i2 != T.doubles.end(); ++i2) { +// for (DoubleIterator j2 = T.doubles.begin(); +// j2 != T.doubles.end(); ++j2) { +// double min1 = T.dmin(*i1, *j1); +// double max1 = T.dmax(*i1, *j1); +// double min2 = T.dmin(*i2, *j2); +// double max2 = T.dmax(*i2, *j2); +// TypeHandle type1 = T.Range(min1, max1); +// TypeHandle type2 = T.Range(min2, max2); +// CHECK(Equal(type1, type2) == +// (T.deq(min1, min2) && T.deq(max1, max2))); +// } +// } +// } +// } } void Array() { @@ -514,7 +613,7 @@ struct Tests : Rep { for (int i = 0; i < 20; ++i) { TypeHandle type = T.Random(); TypeHandle array = T.Array1(type); - CHECK(this->IsArray(array)); + CHECK(array->IsArray()); } // Attributes @@ -669,6 +768,44 @@ struct Tests : Rep { } } + void Bounds() { + // Ordering: (T->BitsetGlb())->Is(T->BitsetLub()) + for (TypeIterator it = T.types.begin(); it != T.types.end(); ++it) { + TypeHandle type = *it; + TypeHandle glb = + Rep::BitsetType::New(Rep::BitsetType::Glb(type), T.region()); + TypeHandle lub = + Rep::BitsetType::New(Rep::BitsetType::Lub(type), T.region()); + CHECK(glb->Is(lub)); + } + + // Lower bound: (T->BitsetGlb())->Is(T) + for (TypeIterator it = T.types.begin(); it != T.types.end(); ++it) { + TypeHandle type = *it; + TypeHandle glb = + Rep::BitsetType::New(Rep::BitsetType::Glb(type), T.region()); + CHECK(glb->Is(type)); + } + + // Upper bound: T->Is(T->BitsetLub()) + for (TypeIterator it = T.types.begin(); it != T.types.end(); ++it) { + TypeHandle type = *it; + TypeHandle lub = + Rep::BitsetType::New(Rep::BitsetType::Lub(type), T.region()); + CHECK(type->Is(lub)); + } + + // Inherent bound: (T->BitsetLub())->Is(T->InherentBitsetLub()) + for (TypeIterator it = T.types.begin(); it != T.types.end(); ++it) { + TypeHandle type = *it; + TypeHandle lub = + Rep::BitsetType::New(Rep::BitsetType::Lub(type), T.region()); + TypeHandle inherent = + Rep::BitsetType::New(Rep::BitsetType::InherentLub(type), T.region()); + CHECK(lub->Is(inherent)); + } + } + void Is() { // Least Element (Bottom): None->Is(T) for (TypeIterator it = T.types.begin(); it != T.types.end(); ++it) { @@ -772,10 +909,9 @@ struct Tests : Rep { CheckSub(T.SignedSmall, T.Number); CheckSub(T.Signed32, T.Number); - CheckSub(T.Float, T.Number); CheckSub(T.SignedSmall, T.Signed32); - CheckUnordered(T.SignedSmall, T.Float); - CheckUnordered(T.Signed32, T.Float); + CheckUnordered(T.SignedSmall, T.MinusZero); + CheckUnordered(T.Signed32, T.Unsigned32); CheckSub(T.UniqueName, T.Name); CheckSub(T.String, T.Name); @@ -823,8 +959,8 @@ struct Tests : Rep { CheckUnordered(T.ObjectConstant2, T.ArrayClass); CheckUnordered(T.ArrayConstant, T.ObjectClass); - CheckSub(T.FloatArray, T.Array); - CheckSub(T.FloatArray, T.Object); + CheckSub(T.NumberArray, T.Array); + CheckSub(T.NumberArray, T.Object); CheckUnordered(T.StringArray, T.AnyArray); CheckSub(T.MethodFunction, T.Function); @@ -1044,13 +1180,13 @@ struct Tests : Rep { } } - // T1->Maybe(T2) iff Intersect(T1, T2) inhabited + // T1->Maybe(T2) implies Intersect(T1, T2) inhabited for (TypeIterator it1 = T.types.begin(); it1 != T.types.end(); ++it1) { for (TypeIterator it2 = T.types.begin(); it2 != T.types.end(); ++it2) { TypeHandle type1 = *it1; TypeHandle type2 = *it2; TypeHandle intersect12 = 
T.Intersect(type1, type2); - CHECK(type1->Maybe(type2) == intersect12->IsInhabited()); + CHECK(!type1->Maybe(type2) || intersect12->IsInhabited()); } } @@ -1114,8 +1250,8 @@ struct Tests : Rep { CheckDisjoint(T.Boolean, T.Undefined, T.Semantic); CheckOverlap(T.SignedSmall, T.Number, T.Semantic); - CheckOverlap(T.Float, T.Number, T.Semantic); - CheckDisjoint(T.Signed32, T.Float, T.Semantic); + CheckOverlap(T.NaN, T.Number, T.Semantic); + CheckDisjoint(T.Signed32, T.NaN, T.Semantic); CheckOverlap(T.UniqueName, T.Name, T.Semantic); CheckOverlap(T.String, T.Name, T.Semantic); @@ -1145,7 +1281,6 @@ struct Tests : Rep { CheckOverlap(T.SmiConstant, T.SignedSmall, T.Semantic); CheckOverlap(T.SmiConstant, T.Signed32, T.Semantic); CheckOverlap(T.SmiConstant, T.Number, T.Semantic); - CheckDisjoint(T.SmiConstant, T.Float, T.Semantic); CheckOverlap(T.ObjectConstant1, T.Object, T.Semantic); CheckOverlap(T.ObjectConstant2, T.Object, T.Semantic); CheckOverlap(T.ArrayConstant, T.Object, T.Semantic); @@ -1160,9 +1295,9 @@ struct Tests : Rep { CheckDisjoint(T.ObjectConstant2, T.ArrayClass, T.Semantic); CheckDisjoint(T.ArrayConstant, T.ObjectClass, T.Semantic); - CheckOverlap(T.FloatArray, T.Array, T.Semantic); - CheckDisjoint(T.FloatArray, T.AnyArray, T.Semantic); - CheckDisjoint(T.FloatArray, T.StringArray, T.Semantic); + CheckOverlap(T.NumberArray, T.Array, T.Semantic); + CheckDisjoint(T.NumberArray, T.AnyArray, T.Semantic); + CheckDisjoint(T.NumberArray, T.StringArray, T.Semantic); CheckOverlap(T.MethodFunction, T.Function, T.Semantic); CheckDisjoint(T.SignedFunction1, T.NumberFunction1, T.Semantic); @@ -1303,22 +1438,18 @@ struct Tests : Rep { // Bitset-array CHECK(this->IsBitset(T.Union(T.AnyArray, T.Array))); - CHECK(this->IsUnion(T.Union(T.FloatArray, T.Number))); + CHECK(this->IsUnion(T.Union(T.NumberArray, T.Number))); CheckEqual(T.Union(T.AnyArray, T.Array), T.Array); - CheckSub(T.None, T.Union(T.FloatArray, T.Number)); - CheckSub(T.Union(T.FloatArray, T.Number), T.Any); CheckUnordered(T.Union(T.AnyArray, T.String), T.Array); - CheckOverlap(T.Union(T.FloatArray, T.String), T.Object, T.Semantic); - CheckDisjoint(T.Union(T.FloatArray, T.String), T.Number, T.Semantic); + CheckOverlap(T.Union(T.NumberArray, T.String), T.Object, T.Semantic); + CheckDisjoint(T.Union(T.NumberArray, T.String), T.Number, T.Semantic); // Bitset-function CHECK(this->IsBitset(T.Union(T.MethodFunction, T.Function))); CHECK(this->IsUnion(T.Union(T.NumberFunction1, T.Number))); CheckEqual(T.Union(T.MethodFunction, T.Function), T.Function); - CheckSub(T.None, T.Union(T.MethodFunction, T.Number)); - CheckSub(T.Union(T.MethodFunction, T.Number), T.Any); CheckUnordered(T.Union(T.NumberFunction1, T.String), T.Function); CheckOverlap(T.Union(T.NumberFunction2, T.String), T.Object, T.Semantic); CheckDisjoint(T.Union(T.NumberFunction1, T.String), T.Number, T.Semantic); @@ -1353,10 +1484,10 @@ struct Tests : Rep { // Bitset-union CheckSub( - T.Float, + T.NaN, T.Union(T.Union(T.ArrayClass, T.ObjectConstant1), T.Number)); CheckSub( - T.Union(T.Union(T.ArrayClass, T.ObjectConstant1), T.Float), + T.Union(T.Union(T.ArrayClass, T.ObjectConstant1), T.Signed32), T.Union(T.ObjectConstant1, T.Union(T.Number, T.ArrayClass))); // Class-union @@ -1380,9 +1511,9 @@ struct Tests : Rep { // Array-union CheckEqual( - T.Union(T.AnyArray, T.Union(T.FloatArray, T.AnyArray)), - T.Union(T.AnyArray, T.FloatArray)); - CheckSub(T.Union(T.AnyArray, T.FloatArray), T.Array); + T.Union(T.AnyArray, T.Union(T.NumberArray, T.AnyArray)), + T.Union(T.AnyArray, 
T.NumberArray)); + CheckSub(T.Union(T.AnyArray, T.NumberArray), T.Array); // Function-union CheckEqual( @@ -1524,7 +1655,7 @@ struct Tests : Rep { CheckSub(T.Intersect(T.ObjectClass, T.Number), T.Representation); // Bitset-array - CheckEqual(T.Intersect(T.FloatArray, T.Object), T.FloatArray); + CheckEqual(T.Intersect(T.NumberArray, T.Object), T.NumberArray); CheckSub(T.Intersect(T.AnyArray, T.Function), T.Representation); // Bitset-function @@ -1535,24 +1666,24 @@ struct Tests : Rep { CheckEqual( T.Intersect(T.Object, T.Union(T.ObjectConstant1, T.ObjectClass)), T.Union(T.ObjectConstant1, T.ObjectClass)); - CheckEqual( - T.Intersect(T.Union(T.ArrayClass, T.ObjectConstant1), T.Number), - T.None); + CHECK( + !T.Intersect(T.Union(T.ArrayClass, T.ObjectConstant1), T.Number) + ->IsInhabited()); // Class-constant - CheckEqual(T.Intersect(T.ObjectConstant1, T.ObjectClass), T.None); - CheckEqual(T.Intersect(T.ArrayClass, T.ObjectConstant2), T.None); + CHECK(!T.Intersect(T.ObjectConstant1, T.ObjectClass)->IsInhabited()); + CHECK(!T.Intersect(T.ArrayClass, T.ObjectConstant2)->IsInhabited()); // Array-union CheckEqual( - T.Intersect(T.FloatArray, T.Union(T.FloatArray, T.ArrayClass)), - T.FloatArray); + T.Intersect(T.NumberArray, T.Union(T.NumberArray, T.ArrayClass)), + T.NumberArray); CheckEqual( T.Intersect(T.AnyArray, T.Union(T.Object, T.SmiConstant)), T.AnyArray); - CheckEqual( - T.Intersect(T.Union(T.AnyArray, T.ArrayConstant), T.FloatArray), - T.None); + CHECK( + !T.Intersect(T.Union(T.AnyArray, T.ArrayConstant), T.NumberArray) + ->IsInhabited()); // Function-union CheckEqual( @@ -1561,9 +1692,9 @@ struct Tests : Rep { CheckEqual( T.Intersect(T.NumberFunction1, T.Union(T.Object, T.SmiConstant)), T.NumberFunction1); - CheckEqual( - T.Intersect(T.Union(T.MethodFunction, T.Name), T.NumberFunction2), - T.None); + CHECK( + !T.Intersect(T.Union(T.MethodFunction, T.Name), T.NumberFunction2) + ->IsInhabited()); // Class-union CheckEqual( @@ -1572,9 +1703,9 @@ struct Tests : Rep { CheckEqual( T.Intersect(T.ArrayClass, T.Union(T.Object, T.SmiConstant)), T.ArrayClass); - CheckEqual( - T.Intersect(T.Union(T.ObjectClass, T.ArrayConstant), T.ArrayClass), - T.None); + CHECK( + !T.Intersect(T.Union(T.ObjectClass, T.ArrayConstant), T.ArrayClass) + ->IsInhabited()); // Constant-union CheckEqual( @@ -1584,10 +1715,10 @@ struct Tests : Rep { CheckEqual( T.Intersect(T.SmiConstant, T.Union(T.Number, T.ObjectConstant2)), T.SmiConstant); - CheckEqual( - T.Intersect( - T.Union(T.ArrayConstant, T.ObjectClass), T.ObjectConstant1), - T.None); + CHECK( + !T.Intersect( + T.Union(T.ArrayConstant, T.ObjectClass), T.ObjectConstant1) + ->IsInhabited()); // Union-union CheckEqual( @@ -1615,6 +1746,46 @@ struct Tests : Rep { T.Union(T.ObjectConstant2, T.ObjectConstant1)); } + void Distributivity1() { + // Distributivity: + // Union(T1, Intersect(T2, T3)) = Intersect(Union(T1, T2), Union(T1, T3)) + for (TypeIterator it1 = T.types.begin(); it1 != T.types.end(); ++it1) { + for (TypeIterator it2 = T.types.begin(); it2 != T.types.end(); ++it2) { + for (TypeIterator it3 = T.types.begin(); it3 != T.types.end(); ++it3) { + TypeHandle type1 = *it1; + TypeHandle type2 = *it2; + TypeHandle type3 = *it3; + TypeHandle union12 = T.Union(type1, type2); + TypeHandle union13 = T.Union(type1, type3); + TypeHandle intersect23 = T.Intersect(type2, type3); + TypeHandle union1_23 = T.Union(type1, intersect23); + TypeHandle intersect12_13 = T.Intersect(union12, union13); + CHECK(Equal(union1_23, intersect12_13)); + } + } + } + } + + void 
Distributivity2() { + // Distributivity: + // Intersect(T1, Union(T2, T3)) = Union(Intersect(T1, T2), Intersect(T1,T3)) + for (TypeIterator it1 = T.types.begin(); it1 != T.types.end(); ++it1) { + for (TypeIterator it2 = T.types.begin(); it2 != T.types.end(); ++it2) { + for (TypeIterator it3 = T.types.begin(); it3 != T.types.end(); ++it3) { + TypeHandle type1 = *it1; + TypeHandle type2 = *it2; + TypeHandle type3 = *it3; + TypeHandle intersect12 = T.Intersect(type1, type2); + TypeHandle intersect13 = T.Intersect(type1, type3); + TypeHandle union23 = T.Union(type2, type3); + TypeHandle intersect1_23 = T.Intersect(type1, union23); + TypeHandle union12_13 = T.Union(intersect12, intersect13); + CHECK(Equal(intersect1_23, union12_13)); + } + } + } + } + template<class Type2, class TypeHandle2, class Region2, class Rep2> void Convert() { Types<Type2, TypeHandle2, Region2> T2( @@ -1626,6 +1797,18 @@ struct Tests : Rep { CheckEqual(type1, type3); } } + + void HTypeFromType() { + for (TypeIterator it1 = T.types.begin(); it1 != T.types.end(); ++it1) { + for (TypeIterator it2 = T.types.begin(); it2 != T.types.end(); ++it2) { + TypeHandle type1 = *it1; + TypeHandle type2 = *it2; + HType htype1 = HType::FromType<Type>(type1); + HType htype2 = HType::FromType<Type>(type2); + CHECK(!type1->Is(type2) || htype1.IsSubtypeOf(htype2)); + } + } + } }; typedef Tests<Type, Type*, Zone, ZoneRep> ZoneTests; @@ -1653,6 +1836,13 @@ TEST(ConstantType) { } +TEST(RangeType) { + CcTest::InitializeVM(); + ZoneTests().Range(); + HeapTests().Range(); +} + + TEST(ArrayType) { CcTest::InitializeVM(); ZoneTests().Array(); @@ -1681,6 +1871,13 @@ TEST(NowOf) { } +TEST(Bounds) { + CcTest::InitializeVM(); + ZoneTests().Bounds(); + HeapTests().Bounds(); +} + + TEST(Is) { CcTest::InitializeVM(); ZoneTests().Is(); @@ -1744,8 +1941,29 @@ TEST(Intersect2) { } +TEST(Distributivity1) { + CcTest::InitializeVM(); + ZoneTests().Distributivity1(); + HeapTests().Distributivity1(); +} + + +TEST(Distributivity2) { + CcTest::InitializeVM(); + ZoneTests().Distributivity2(); + HeapTests().Distributivity2(); +} + + TEST(Convert) { CcTest::InitializeVM(); ZoneTests().Convert<HeapType, Handle<HeapType>, Isolate, HeapRep>(); HeapTests().Convert<Type, Type*, Zone, ZoneRep>(); } + + +TEST(HTypeFromType) { + CcTest::InitializeVM(); + ZoneTests().HTypeFromType(); + HeapTests().HTypeFromType(); +} diff --git a/deps/v8/test/cctest/test-unbound-queue.cc b/deps/v8/test/cctest/test-unbound-queue.cc index dd9b9c142b7..6da91e69438 100644 --- a/deps/v8/test/cctest/test-unbound-queue.cc +++ b/deps/v8/test/cctest/test-unbound-queue.cc @@ -27,9 +27,10 @@ // // Tests of the unbound queue. 
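
That closes out the type-system test changes; note that the new Distributivity1/Distributivity2 tests assert genuine lattice laws over arbitrary pairs of types. On pure bitset types the laws hold for free, because Union and Intersect degenerate to bitwise or and and; a standalone sketch of that base case (not V8 code, just the Boolean-algebra identity, checked exhaustively over 4-bit sets):

    #include <cassert>

    int main() {
      for (unsigned t1 = 0; t1 < 16; ++t1) {
        for (unsigned t2 = 0; t2 < 16; ++t2) {
          for (unsigned t3 = 0; t3 < 16; ++t3) {
            // Union(T1, Intersect(T2, T3)) ==
            //     Intersect(Union(T1, T2), Union(T1, T3))
            assert((t1 | (t2 & t3)) == ((t1 | t2) & (t1 | t3)));
            // Intersect(T1, Union(T2, T3)) ==
            //     Union(Intersect(T1, T2), Intersect(T1, T3))
            assert((t1 & (t2 | t3)) == ((t1 & t2) | (t1 & t3)));
          }
        }
      }
      return 0;
    }

The interesting cases in the real tests are of course the non-bitset kinds (classes, constants, ranges, unions), where the laws constrain how Union and Intersect normalize their results.
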
-#include "v8.h" -#include "unbound-queue-inl.h" -#include "cctest.h" +#include "src/v8.h" +#include "test/cctest/cctest.h" + +#include "src/unbound-queue-inl.h" using i::UnboundQueue; diff --git a/deps/v8/test/cctest/test-unique.cc b/deps/v8/test/cctest/test-unique.cc index ad14ff13344..302539a96d0 100644 --- a/deps/v8/test/cctest/test-unique.cc +++ b/deps/v8/test/cctest/test-unique.cc @@ -27,12 +27,12 @@ #include <stdlib.h> -#include "v8.h" +#include "src/v8.h" -#include "factory.h" -#include "global-handles.h" -#include "unique.h" -#include "cctest.h" +#include "src/factory.h" +#include "src/global-handles.h" +#include "src/unique.h" +#include "test/cctest/cctest.h" using namespace v8::internal; diff --git a/deps/v8/test/cctest/test-unscopables-hidden-prototype.cc b/deps/v8/test/cctest/test-unscopables-hidden-prototype.cc new file mode 100644 index 00000000000..aef2ccf288c --- /dev/null +++ b/deps/v8/test/cctest/test-unscopables-hidden-prototype.cc @@ -0,0 +1,103 @@ +// Copyright 2014 the V8 project authors. All rights reserved. +// Use of this source code is governed by a BSD-style license that can be +// found in the LICENSE file. + +#include <stdlib.h> + +#include "src/v8.h" +#include "test/cctest/cctest.h" + +namespace { + + +static void Cleanup() { + CompileRun( + "delete object.x;" + "delete hidden_prototype.x;" + "delete object[Symbol.unscopables];" + "delete hidden_prototype[Symbol.unscopables];"); +} + + +TEST(Unscopables) { + LocalContext context; + v8::Isolate* isolate = context->GetIsolate(); + v8::HandleScope handle_scope(isolate); + + v8::Local<v8::FunctionTemplate> t0 = v8::FunctionTemplate::New(isolate); + v8::Local<v8::FunctionTemplate> t1 = v8::FunctionTemplate::New(isolate); + + t1->SetHiddenPrototype(true); + + v8::Local<v8::Object> object = t0->GetFunction()->NewInstance(); + v8::Local<v8::Object> hidden_prototype = t1->GetFunction()->NewInstance(); + + object->SetPrototype(hidden_prototype); + + context->Global()->Set(v8_str("object"), object); + context->Global()->Set(v8_str("hidden_prototype"), hidden_prototype); + + CHECK_EQ(1, CompileRun( + "var result;" + "var x = 0;" + "object.x = 1;" + "with (object) {" + " result = x;" + "}" + "result")->Int32Value()); + + Cleanup(); + CHECK_EQ(2, CompileRun( + "var result;" + "var x = 0;" + "hidden_prototype.x = 2;" + "with (object) {" + " result = x;" + "}" + "result")->Int32Value()); + + Cleanup(); + CHECK_EQ(0, CompileRun( + "var result;" + "var x = 0;" + "object.x = 3;" + "object[Symbol.unscopables] = {x: true};" + "with (object) {" + " result = x;" + "}" + "result")->Int32Value()); + + Cleanup(); + CHECK_EQ(0, CompileRun( + "var result;" + "var x = 0;" + "hidden_prototype.x = 4;" + "hidden_prototype[Symbol.unscopables] = {x: true};" + "with (object) {" + " result = x;" + "}" + "result")->Int32Value()); + + Cleanup(); + CHECK_EQ(0, CompileRun( + "var result;" + "var x = 0;" + "object.x = 5;" + "hidden_prototype[Symbol.unscopables] = {x: true};" + "with (object) {" + " result = x;" + "}" + "result;")->Int32Value()); + + Cleanup(); + CHECK_EQ(0, CompileRun( + "var result;" + "var x = 0;" + "hidden_prototype.x = 6;" + "object[Symbol.unscopables] = {x: true};" + "with (object) {" + " result = x;" + "}" + "result")->Int32Value()); +} +} diff --git a/deps/v8/test/cctest/test-utils-arm64.cc b/deps/v8/test/cctest/test-utils-arm64.cc index 9eb32b002ee..b0b77bc97d6 100644 --- a/deps/v8/test/cctest/test-utils-arm64.cc +++ b/deps/v8/test/cctest/test-utils-arm64.cc @@ -25,12 +25,12 @@ // (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING 
IN ANY WAY OUT OF THE USE // OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. -#include "v8.h" +#include "src/v8.h" -#include "macro-assembler.h" -#include "arm64/utils-arm64.h" -#include "cctest.h" -#include "test-utils-arm64.h" +#include "src/arm64/utils-arm64.h" +#include "src/macro-assembler.h" +#include "test/cctest/cctest.h" +#include "test/cctest/test-utils-arm64.h" using namespace v8::internal; @@ -95,7 +95,7 @@ bool EqualFP64(double expected, const RegisterDump*, double result) { bool Equal32(uint32_t expected, const RegisterDump* core, const Register& reg) { - ASSERT(reg.Is32Bits()); + DCHECK(reg.Is32Bits()); // Retrieve the corresponding X register so we can check that the upper part // was properly cleared. int64_t result_x = core->xreg(reg.code()); @@ -112,7 +112,7 @@ bool Equal32(uint32_t expected, const RegisterDump* core, const Register& reg) { bool Equal64(uint64_t expected, const RegisterDump* core, const Register& reg) { - ASSERT(reg.Is64Bits()); + DCHECK(reg.Is64Bits()); uint64_t result = core->xreg(reg.code()); return Equal64(expected, core, result); } @@ -121,7 +121,7 @@ bool Equal64(uint64_t expected, bool EqualFP32(float expected, const RegisterDump* core, const FPRegister& fpreg) { - ASSERT(fpreg.Is32Bits()); + DCHECK(fpreg.Is32Bits()); // Retrieve the corresponding D register so we can check that the upper part // was properly cleared. uint64_t result_64 = core->dreg_bits(fpreg.code()); @@ -138,7 +138,7 @@ bool EqualFP32(float expected, bool EqualFP64(double expected, const RegisterDump* core, const FPRegister& fpreg) { - ASSERT(fpreg.Is64Bits()); + DCHECK(fpreg.Is64Bits()); return EqualFP64(expected, core, core->dreg(fpreg.code())); } @@ -146,7 +146,7 @@ bool EqualFP64(double expected, bool Equal64(const Register& reg0, const RegisterDump* core, const Register& reg1) { - ASSERT(reg0.Is64Bits() && reg1.Is64Bits()); + DCHECK(reg0.Is64Bits() && reg1.Is64Bits()); int64_t expected = core->xreg(reg0.code()); int64_t result = core->xreg(reg1.code()); return Equal64(expected, core, result); @@ -174,8 +174,8 @@ static char FlagV(uint32_t flags) { bool EqualNzcv(uint32_t expected, uint32_t result) { - ASSERT((expected & ~NZCVFlag) == 0); - ASSERT((result & ~NZCVFlag) == 0); + DCHECK((expected & ~NZCVFlag) == 0); + DCHECK((result & ~NZCVFlag) == 0); if (result != expected) { printf("Expected: %c%c%c%c\t Found: %c%c%c%c\n", FlagN(expected), FlagZ(expected), FlagC(expected), FlagV(expected), @@ -231,7 +231,7 @@ RegList PopulateRegisterArray(Register* w, Register* x, Register* r, } } // Check that we got enough registers. - ASSERT(CountSetBits(list, kNumberOfRegisters) == reg_count); + DCHECK(CountSetBits(list, kNumberOfRegisters) == reg_count); return list; } @@ -258,7 +258,7 @@ RegList PopulateFPRegisterArray(FPRegister* s, FPRegister* d, FPRegister* v, } } // Check that we got enough registers. - ASSERT(CountSetBits(list, kNumberOfFPRegisters) == reg_count); + DCHECK(CountSetBits(list, kNumberOfFPRegisters) == reg_count); return list; } @@ -270,7 +270,7 @@ void Clobber(MacroAssembler* masm, RegList reg_list, uint64_t const value) { if (reg_list & (1UL << i)) { Register xn = Register::Create(i, kXRegSizeInBits); // We should never write into csp here. - ASSERT(!xn.Is(csp)); + DCHECK(!xn.Is(csp)); if (!xn.IsZero()) { if (!first.IsValid()) { // This is the first register we've hit, so construct the literal. 
@@ -320,7 +320,7 @@ void Clobber(MacroAssembler* masm, CPURegList reg_list) { void RegisterDump::Dump(MacroAssembler* masm) { - ASSERT(__ StackPointer().Is(csp)); + DCHECK(__ StackPointer().Is(csp)); // Ensure that we don't unintentionally clobber any registers. RegList old_tmp_list = masm->TmpList()->list(); @@ -396,7 +396,7 @@ void RegisterDump::Dump(MacroAssembler* masm) { // easily restore them. Register dump2_base = x10; Register dump2 = x11; - ASSERT(!AreAliased(dump_base, dump, tmp, dump2_base, dump2)); + DCHECK(!AreAliased(dump_base, dump, tmp, dump2_base, dump2)); // Don't lose the dump_ address. __ Mov(dump2_base, dump_base); diff --git a/deps/v8/test/cctest/test-utils-arm64.h b/deps/v8/test/cctest/test-utils-arm64.h index 2ff26e49cc2..d00ad5e78cd 100644 --- a/deps/v8/test/cctest/test-utils-arm64.h +++ b/deps/v8/test/cctest/test-utils-arm64.h @@ -28,12 +28,12 @@ #ifndef V8_ARM64_TEST_UTILS_ARM64_H_ #define V8_ARM64_TEST_UTILS_ARM64_H_ -#include "v8.h" +#include "src/v8.h" +#include "test/cctest/cctest.h" -#include "macro-assembler.h" -#include "arm64/macro-assembler-arm64.h" -#include "arm64/utils-arm64.h" -#include "cctest.h" +#include "src/arm64/macro-assembler-arm64.h" +#include "src/arm64/utils-arm64.h" +#include "src/macro-assembler.h" using namespace v8::internal; @@ -59,7 +59,7 @@ class RegisterDump { if (code == kSPRegInternalCode) { return wspreg(); } - ASSERT(RegAliasesMatch(code)); + DCHECK(RegAliasesMatch(code)); return dump_.w_[code]; } @@ -67,13 +67,13 @@ class RegisterDump { if (code == kSPRegInternalCode) { return spreg(); } - ASSERT(RegAliasesMatch(code)); + DCHECK(RegAliasesMatch(code)); return dump_.x_[code]; } // FPRegister accessors. inline uint32_t sreg_bits(unsigned code) const { - ASSERT(FPRegAliasesMatch(code)); + DCHECK(FPRegAliasesMatch(code)); return dump_.s_[code]; } @@ -82,7 +82,7 @@ class RegisterDump { } inline uint64_t dreg_bits(unsigned code) const { - ASSERT(FPRegAliasesMatch(code)); + DCHECK(FPRegAliasesMatch(code)); return dump_.d_[code]; } @@ -92,19 +92,19 @@ class RegisterDump { // Stack pointer accessors. inline int64_t spreg() const { - ASSERT(SPRegAliasesMatch()); + DCHECK(SPRegAliasesMatch()); return dump_.sp_; } inline int64_t wspreg() const { - ASSERT(SPRegAliasesMatch()); + DCHECK(SPRegAliasesMatch()); return dump_.wsp_; } // Flags accessors. inline uint64_t flags_nzcv() const { - ASSERT(IsComplete()); - ASSERT((dump_.flags_ & ~Flags_mask) == 0); + DCHECK(IsComplete()); + DCHECK((dump_.flags_ & ~Flags_mask) == 0); return dump_.flags_ & Flags_mask; } @@ -120,21 +120,21 @@ class RegisterDump { // w<code>. A failure of this test most likely represents a failure in the // ::Dump method, or a failure in the simulator. bool RegAliasesMatch(unsigned code) const { - ASSERT(IsComplete()); - ASSERT(code < kNumberOfRegisters); + DCHECK(IsComplete()); + DCHECK(code < kNumberOfRegisters); return ((dump_.x_[code] & kWRegMask) == dump_.w_[code]); } // As RegAliasesMatch, but for the stack pointer. bool SPRegAliasesMatch() const { - ASSERT(IsComplete()); + DCHECK(IsComplete()); return ((dump_.sp_ & kWRegMask) == dump_.wsp_); } // As RegAliasesMatch, but for floating-point registers. 
bool FPRegAliasesMatch(unsigned code) const { - ASSERT(IsComplete()); - ASSERT(code < kNumberOfFPRegisters); + DCHECK(IsComplete()); + DCHECK(code < kNumberOfFPRegisters); return (dump_.d_[code] & kSRegMask) == dump_.s_[code]; } diff --git a/deps/v8/test/cctest/test-utils.cc b/deps/v8/test/cctest/test-utils.cc index 86d52fa82b1..cf539305b30 100644 --- a/deps/v8/test/cctest/test-utils.cc +++ b/deps/v8/test/cctest/test-utils.cc @@ -27,11 +27,13 @@ #include <stdlib.h> -#include "v8.h" +#include <vector> -#include "cctest.h" -#include "platform.h" -#include "utils-inl.h" +#include "src/v8.h" + +#include "src/base/platform/platform.h" +#include "src/utils-inl.h" +#include "test/cctest/cctest.h" using namespace v8::internal; @@ -70,7 +72,7 @@ TEST(Utils1) { CHECK_EQ(INT_MAX, FastD2IChecked(1.0e100)); CHECK_EQ(INT_MIN, FastD2IChecked(-1.0e100)); - CHECK_EQ(INT_MIN, FastD2IChecked(OS::nan_value())); + CHECK_EQ(INT_MIN, FastD2IChecked(v8::base::OS::nan_value())); } @@ -83,7 +85,7 @@ TEST(SNPrintF) { static const char kMarker = static_cast<char>(42); Vector<char> buffer = Vector<char>::New(i + 1); buffer[i] = kMarker; - int n = OS::SNPrintF(Vector<char>(buffer.start(), i), "%s", s); + int n = SNPrintF(Vector<char>(buffer.start(), i), "%s", s); CHECK(n <= i); CHECK(n == length || n == -1); CHECK_EQ(0, strncmp(buffer.start(), s, i - 1)); @@ -110,15 +112,15 @@ void TestMemMove(byte* area1, area1[i] = i & 0xFF; area2[i] = i & 0xFF; } - OS::MemMove(area1 + dest_offset, area1 + src_offset, length); + MemMove(area1 + dest_offset, area1 + src_offset, length); memmove(area2 + dest_offset, area2 + src_offset, length); if (memcmp(area1, area2, kAreaSize) != 0) { - printf("OS::MemMove(): src_offset: %d, dest_offset: %d, length: %d\n", + printf("MemMove(): src_offset: %d, dest_offset: %d, length: %d\n", src_offset, dest_offset, length); for (int i = 0; i < kAreaSize; i++) { if (area1[i] == area2[i]) continue; - printf("diff at offset %d (%p): is %d, should be %d\n", - i, reinterpret_cast<void*>(area1 + i), area1[i], area2[i]); + printf("diff at offset %d (%p): is %d, should be %d\n", i, + reinterpret_cast<void*>(area1 + i), area1[i], area2[i]); } CHECK(false); } @@ -218,3 +220,40 @@ TEST(SequenceCollectorRegression) { CHECK_EQ(0, strncmp("0123456789012345678901234567890123", seq.start(), seq.length())); } + + +// TODO(svenpanne) Unconditionally test this when our infrastructure is fixed. +#if !V8_CC_MSVC && !V8_OS_NACL +TEST(CPlusPlus11Features) { + struct S { + bool x; + struct T { + double y; + int z[3]; + } t; + }; + S s{true, {3.1415, {1, 2, 3}}}; + CHECK_EQ(2, s.t.z[1]); + +// TODO(svenpanne) Remove the old-skool code when we ship the new C++ headers. +#if 0 + std::vector<int> vec{11, 22, 33, 44}; +#else + std::vector<int> vec; + vec.push_back(11); + vec.push_back(22); + vec.push_back(33); + vec.push_back(44); +#endif + vec.push_back(55); + vec.push_back(66); + for (auto& i : vec) { + ++i; + } + int j = 12; + for (auto i : vec) { + CHECK_EQ(j, i); + j += 11; + } +} +#endif diff --git a/deps/v8/test/cctest/test-version.cc b/deps/v8/test/cctest/test-version.cc index 6bec4b75eef..231451d11ae 100644 --- a/deps/v8/test/cctest/test-version.cc +++ b/deps/v8/test/cctest/test-version.cc @@ -25,10 +25,10 @@ // (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE // OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. 
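
Aside: the CPlusPlus11Features test added to test-utils.cc above probes exactly the C++11 subset the tree can rely on across toolchains, namely aggregate brace initialization and range-based for, while the std::vector initializer-list constructor stays behind #if 0 until the new C++ headers ship. A compressed restatement that any -std=c++11 compiler accepts, including that still-guarded constructor:

    #include <cassert>
    #include <vector>

    int main() {
      // Nested aggregate initialization, as exercised by struct S in the test.
      struct T { double y; int z[3]; };
      struct S { bool x; T t; };
      S s{true, {3.1415, {1, 2, 3}}};
      assert(s.t.z[1] == 2);

      // The initializer-list constructor the test still #if 0's out.
      std::vector<int> vec{11, 22, 33, 44, 55, 66};

      // Range-based for with a mutable reference, as in the test.
      for (auto& i : vec) {
        ++i;
      }
      int expected = 12;
      for (auto i : vec) {
        assert(i == expected);
        expected += 11;
      }
      return 0;
    }
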
-#include "v8.h" +#include "src/v8.h" -#include "version.h" -#include "cctest.h" +#include "src/version.h" +#include "test/cctest/cctest.h" using namespace v8::internal; diff --git a/deps/v8/test/cctest/test-weakmaps.cc b/deps/v8/test/cctest/test-weakmaps.cc index 14a5e020a14..bb412a82aad 100644 --- a/deps/v8/test/cctest/test-weakmaps.cc +++ b/deps/v8/test/cctest/test-weakmaps.cc @@ -27,11 +27,11 @@ #include <utility> -#include "v8.h" +#include "src/v8.h" -#include "global-handles.h" -#include "snapshot.h" -#include "cctest.h" +#include "src/global-handles.h" +#include "src/snapshot.h" +#include "test/cctest/cctest.h" using namespace v8::internal; @@ -46,10 +46,12 @@ static Handle<JSWeakMap> AllocateJSWeakMap(Isolate* isolate) { Handle<Map> map = factory->NewMap(JS_WEAK_MAP_TYPE, JSWeakMap::kSize); Handle<JSObject> weakmap_obj = factory->NewJSObjectFromMap(map); Handle<JSWeakMap> weakmap(JSWeakMap::cast(*weakmap_obj)); - // Do not use handles for the hash table, it would make entries strong. - Handle<ObjectHashTable> table = ObjectHashTable::New(isolate, 1); - weakmap->set_table(*table); - weakmap->set_next(Smi::FromInt(0)); + // Do not leak handles for the hash table, it would make entries strong. + { + HandleScope scope(isolate); + Handle<ObjectHashTable> table = ObjectHashTable::New(isolate, 1); + weakmap->set_table(*table); + } return weakmap; } @@ -69,7 +71,7 @@ static void WeakPointerCallback( std::pair<v8::Persistent<v8::Value>*, int>* p = reinterpret_cast<std::pair<v8::Persistent<v8::Value>*, int>*>( data.GetParameter()); - ASSERT_EQ(1234, p->second); + DCHECK_EQ(1234, p->second); NumberOfWeakCalls++; p->first->Reset(); } @@ -185,8 +187,8 @@ TEST(Regress2060a) { Factory* factory = isolate->factory(); Heap* heap = isolate->heap(); HandleScope scope(isolate); - Handle<JSFunction> function = factory->NewFunctionWithPrototype( - factory->function_string(), factory->null_value()); + Handle<JSFunction> function = factory->NewFunction( + factory->function_string()); Handle<JSObject> key = factory->NewJSObject(function); Handle<JSWeakMap> weakmap = AllocateJSWeakMap(isolate); @@ -225,8 +227,8 @@ TEST(Regress2060b) { Factory* factory = isolate->factory(); Heap* heap = isolate->heap(); HandleScope scope(isolate); - Handle<JSFunction> function = factory->NewFunctionWithPrototype( - factory->function_string(), factory->null_value()); + Handle<JSFunction> function = factory->NewFunction( + factory->function_string()); // Start second old-space page so that keys land on evacuation candidate. Page* first_page = heap->old_pointer_space()->anchor()->next_page(); @@ -253,3 +255,20 @@ TEST(Regress2060b) { heap->CollectAllGarbage(Heap::kNoGCFlags); heap->CollectAllGarbage(Heap::kNoGCFlags); } + + +TEST(Regress399527) { + CcTest::InitializeVM(); + v8::HandleScope scope(CcTest::isolate()); + Isolate* isolate = CcTest::i_isolate(); + Heap* heap = isolate->heap(); + { + HandleScope scope(isolate); + AllocateJSWeakMap(isolate); + SimulateIncrementalMarking(heap); + } + // The weak map is marked black here but leaving the handle scope will make + // the object unreachable. Aborting incremental marking will clear all the + // marking bits which makes the weak map garbage. 
+ heap->CollectAllGarbage(Heap::kAbortIncrementalMarkingMask); +} diff --git a/deps/v8/test/cctest/test-weaksets.cc b/deps/v8/test/cctest/test-weaksets.cc index a3a94789736..299cc92e9b2 100644 --- a/deps/v8/test/cctest/test-weaksets.cc +++ b/deps/v8/test/cctest/test-weaksets.cc @@ -27,11 +27,11 @@ #include <utility> -#include "v8.h" +#include "src/v8.h" -#include "global-handles.h" -#include "snapshot.h" -#include "cctest.h" +#include "src/global-handles.h" +#include "src/snapshot.h" +#include "test/cctest/cctest.h" using namespace v8::internal; @@ -46,10 +46,12 @@ static Handle<JSWeakSet> AllocateJSWeakSet(Isolate* isolate) { Handle<Map> map = factory->NewMap(JS_WEAK_SET_TYPE, JSWeakSet::kSize); Handle<JSObject> weakset_obj = factory->NewJSObjectFromMap(map); Handle<JSWeakSet> weakset(JSWeakSet::cast(*weakset_obj)); - // Do not use handles for the hash table, it would make entries strong. - Handle<ObjectHashTable> table = ObjectHashTable::New(isolate, 1); - weakset->set_table(*table); - weakset->set_next(Smi::FromInt(0)); + // Do not leak handles for the hash table, it would make entries strong. + { + HandleScope scope(isolate); + Handle<ObjectHashTable> table = ObjectHashTable::New(isolate, 1); + weakset->set_table(*table); + } return weakset; } @@ -69,7 +71,7 @@ static void WeakPointerCallback( std::pair<v8::Persistent<v8::Value>*, int>* p = reinterpret_cast<std::pair<v8::Persistent<v8::Value>*, int>*>( data.GetParameter()); - ASSERT_EQ(1234, p->second); + DCHECK_EQ(1234, p->second); NumberOfWeakCalls++; p->first->Reset(); } @@ -185,8 +187,8 @@ TEST(WeakSet_Regress2060a) { Factory* factory = isolate->factory(); Heap* heap = isolate->heap(); HandleScope scope(isolate); - Handle<JSFunction> function = factory->NewFunctionWithPrototype( - factory->function_string(), factory->null_value()); + Handle<JSFunction> function = factory->NewFunction( + factory->function_string()); Handle<JSObject> key = factory->NewJSObject(function); Handle<JSWeakSet> weakset = AllocateJSWeakSet(isolate); @@ -225,8 +227,8 @@ TEST(WeakSet_Regress2060b) { Factory* factory = isolate->factory(); Heap* heap = isolate->heap(); HandleScope scope(isolate); - Handle<JSFunction> function = factory->NewFunctionWithPrototype( - factory->function_string(), factory->null_value()); + Handle<JSFunction> function = factory->NewFunction( + factory->function_string()); // Start second old-space page so that keys land on evacuation candidate. 
Page* first_page = heap->old_pointer_space()->anchor()->next_page(); diff --git a/deps/v8/test/cctest/test-weaktypedarrays.cc b/deps/v8/test/cctest/test-weaktypedarrays.cc index daf07eed02e..d40b7e95a91 100644 --- a/deps/v8/test/cctest/test-weaktypedarrays.cc +++ b/deps/v8/test/cctest/test-weaktypedarrays.cc @@ -27,12 +27,12 @@ #include <stdlib.h> -#include "v8.h" -#include "api.h" -#include "heap.h" -#include "objects.h" +#include "src/v8.h" +#include "test/cctest/cctest.h" -#include "cctest.h" +#include "src/api.h" +#include "src/heap/heap.h" +#include "src/objects.h" using namespace v8::internal; @@ -155,7 +155,7 @@ TEST(WeakArrayBuffersFromScript) { } i::ScopedVector<char> source(1024); - i::OS::SNPrintF(source, "ab%d = null;", i); + i::SNPrintF(source, "ab%d = null;", i); CompileRun(source.start()); isolate->heap()->CollectAllGarbage(Heap::kAbortIncrementalMarkingMask); @@ -165,7 +165,7 @@ TEST(WeakArrayBuffersFromScript) { v8::HandleScope s2(context->GetIsolate()); for (int j = 1; j <= 3; j++) { if (j == i) continue; - i::OS::SNPrintF(source, "ab%d", j); + i::SNPrintF(source, "ab%d", j); v8::Handle<v8::ArrayBuffer> ab = v8::Handle<v8::ArrayBuffer>::Cast(CompileRun(source.start())); CHECK(HasArrayBufferInWeakList(isolate->heap(), @@ -282,11 +282,11 @@ static void TestTypedArrayFromScript(const char* constructor) { { v8::HandleScope s1(context->GetIsolate()); - i::OS::SNPrintF(source, - "var ta1 = new %s(ab);" - "var ta2 = new %s(ab);" - "var ta3 = new %s(ab)", - constructor, constructor, constructor); + i::SNPrintF(source, + "var ta1 = new %s(ab);" + "var ta2 = new %s(ab);" + "var ta3 = new %s(ab)", + constructor, constructor, constructor); CompileRun(source.start()); v8::Handle<v8::ArrayBuffer> ab = @@ -305,7 +305,7 @@ static void TestTypedArrayFromScript(const char* constructor) { CHECK(HasViewInWeakList(*iab, *v8::Utils::OpenHandle(*ta3))); } - i::OS::SNPrintF(source, "ta%d = null;", i); + i::SNPrintF(source, "ta%d = null;", i); CompileRun(source.start()); isolate->heap()->CollectAllGarbage(Heap::kAbortIncrementalMarkingMask); @@ -319,7 +319,7 @@ static void TestTypedArrayFromScript(const char* constructor) { CHECK_EQ(2, CountViews(*iab)); for (int j = 1; j <= 3; j++) { if (j == i) continue; - i::OS::SNPrintF(source, "ta%d", j); + i::SNPrintF(source, "ta%d", j); v8::Handle<TypedArray> ta = v8::Handle<TypedArray>::Cast(CompileRun(source.start())); CHECK(HasViewInWeakList(*iab, *v8::Utils::OpenHandle(*ta))); diff --git a/deps/v8/test/cctest/trace-extension.cc b/deps/v8/test/cctest/trace-extension.cc index 2da68131663..8f390e4bc55 100644 --- a/deps/v8/test/cctest/trace-extension.cc +++ b/deps/v8/test/cctest/trace-extension.cc @@ -25,10 +25,10 @@ // (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE // OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. 
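
The test-weaktypedarrays.cc hunks above (like test-utils.cc earlier) swap i::OS::SNPrintF for a free i::SNPrintF over a Vector<char>, preserving the same pattern: format a snippet of script into a caller-owned, bounded buffer, then hand it to CompileRun. A portable approximation of that pattern with the C library, outside V8's Vector types (assuming plain snprintf semantics are all the tests rely on):

    #include <cassert>
    #include <cstdio>
    #include <cstring>

    int main() {
      // Caller-owned bounded buffer, standing in for
      // i::ScopedVector<char> source(1024).
      char source[1024];

      // Like i::SNPrintF(source, "ab%d = null;", i): never writes past the
      // buffer, NUL-terminates, returns the number of characters written.
      int i = 2;
      int n = std::snprintf(source, sizeof(source), "ab%d = null;", i);
      assert(n > 0 && n < static_cast<int>(sizeof(source)));
      assert(std::strcmp(source, "ab2 = null;") == 0);

      // The test would now pass `source` to CompileRun(...).
      return 0;
    }
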
-#include "trace-extension.h" +#include "test/cctest/trace-extension.h" -#include "cctest.h" -#include "sampler.h" +#include "src/sampler.h" +#include "test/cctest/cctest.h" namespace v8 { namespace internal { diff --git a/deps/v8/test/cctest/trace-extension.h b/deps/v8/test/cctest/trace-extension.h index b80b3d45dc8..919eda5bb50 100644 --- a/deps/v8/test/cctest/trace-extension.h +++ b/deps/v8/test/cctest/trace-extension.h @@ -28,7 +28,7 @@ #ifndef V8_TEST_CCTEST_TRACE_EXTENSION_H_ #define V8_TEST_CCTEST_TRACE_EXTENSION_H_ -#include "v8.h" +#include "src/v8.h" namespace v8 { namespace internal { diff --git a/deps/v8/test/compiler-unittests/DEPS b/deps/v8/test/compiler-unittests/DEPS new file mode 100644 index 00000000000..8aa02395f55 --- /dev/null +++ b/deps/v8/test/compiler-unittests/DEPS @@ -0,0 +1,6 @@ +include_rules = [ + "+src", + "+testing/gtest", + "+testing/gtest-type-names.h", + "+testing/gmock", +] diff --git a/deps/v8/test/compiler-unittests/arm/instruction-selector-arm-unittest.cc b/deps/v8/test/compiler-unittests/arm/instruction-selector-arm-unittest.cc new file mode 100644 index 00000000000..b781ac8f9f5 --- /dev/null +++ b/deps/v8/test/compiler-unittests/arm/instruction-selector-arm-unittest.cc @@ -0,0 +1,27 @@ +// Copyright 2014 the V8 project authors. All rights reserved. +// Use of this source code is governed by a BSD-style license that can be +// found in the LICENSE file. + +#include "test/compiler-unittests/instruction-selector-unittest.h" + +namespace v8 { +namespace internal { +namespace compiler { + +class InstructionSelectorARMTest : public InstructionSelectorTest {}; + + +TARGET_TEST_F(InstructionSelectorARMTest, Int32AddP) { + StreamBuilder m(this, kMachineWord32, kMachineWord32, kMachineWord32); + m.Return(m.Int32Add(m.Parameter(0), m.Parameter(1))); + Stream s = m.Build(); + ASSERT_EQ(1U, s.size()); + EXPECT_EQ(kArmAdd, s[0]->arch_opcode()); + EXPECT_EQ(kMode_Operand2_R, s[0]->addressing_mode()); + EXPECT_EQ(2U, s[0]->InputCount()); + EXPECT_EQ(1U, s[0]->OutputCount()); +} + +} // namespace compiler +} // namespace internal +} // namespace v8 diff --git a/deps/v8/test/compiler-unittests/change-lowering-unittest.cc b/deps/v8/test/compiler-unittests/change-lowering-unittest.cc new file mode 100644 index 00000000000..68de48013c0 --- /dev/null +++ b/deps/v8/test/compiler-unittests/change-lowering-unittest.cc @@ -0,0 +1,257 @@ +// Copyright 2014 the V8 project authors. All rights reserved. +// Use of this source code is governed by a BSD-style license that can be +// found in the LICENSE file. 
+ +#include "src/compiler/change-lowering.h" +#include "src/compiler/common-operator.h" +#include "src/compiler/graph.h" +#include "src/compiler/node-properties-inl.h" +#include "src/compiler/simplified-operator.h" +#include "src/factory.h" +#include "test/compiler-unittests/compiler-unittests.h" +#include "test/compiler-unittests/node-matchers.h" +#include "testing/gtest-type-names.h" + +using testing::_; + +namespace v8 { +namespace internal { +namespace compiler { + +template <typename T> +class ChangeLoweringTest : public CompilerTest { + public: + static const size_t kPointerSize = sizeof(T); + + explicit ChangeLoweringTest(int num_parameters = 1) + : graph_(zone()), common_(zone()), simplified_(zone()) { + graph()->SetStart(graph()->NewNode(common()->Start(num_parameters))); + } + virtual ~ChangeLoweringTest() {} + + protected: + Node* Parameter(int32_t index = 0) { + return graph()->NewNode(common()->Parameter(index), graph()->start()); + } + + Reduction Reduce(Node* node) { + CompilationInfo info(isolate(), zone()); + Linkage linkage(&info); + ChangeLowering<kPointerSize> reducer(graph(), &linkage); + return reducer.Reduce(node); + } + + Graph* graph() { return &graph_; } + Factory* factory() const { return isolate()->factory(); } + CommonOperatorBuilder* common() { return &common_; } + SimplifiedOperatorBuilder* simplified() { return &simplified_; } + + PrintableUnique<HeapObject> true_unique() { + return PrintableUnique<HeapObject>::CreateImmovable( + zone(), factory()->true_value()); + } + PrintableUnique<HeapObject> false_unique() { + return PrintableUnique<HeapObject>::CreateImmovable( + zone(), factory()->false_value()); + } + + private: + Graph graph_; + CommonOperatorBuilder common_; + SimplifiedOperatorBuilder simplified_; +}; + + +typedef ::testing::Types<int32_t, int64_t> ChangeLoweringTypes; +TYPED_TEST_CASE(ChangeLoweringTest, ChangeLoweringTypes); + + +TARGET_TYPED_TEST(ChangeLoweringTest, ChangeBitToBool) { + Node* val = this->Parameter(0); + Node* node = + this->graph()->NewNode(this->simplified()->ChangeBitToBool(), val); + Reduction reduction = this->Reduce(node); + ASSERT_TRUE(reduction.Changed()); + + Node* phi = reduction.replacement(); + EXPECT_THAT(phi, IsPhi(IsHeapConstant(this->true_unique()), + IsHeapConstant(this->false_unique()), _)); + + Node* merge = NodeProperties::GetControlInput(phi); + ASSERT_EQ(IrOpcode::kMerge, merge->opcode()); + + Node* if_true = NodeProperties::GetControlInput(merge, 0); + ASSERT_EQ(IrOpcode::kIfTrue, if_true->opcode()); + + Node* if_false = NodeProperties::GetControlInput(merge, 1); + ASSERT_EQ(IrOpcode::kIfFalse, if_false->opcode()); + + Node* branch = NodeProperties::GetControlInput(if_true); + EXPECT_EQ(branch, NodeProperties::GetControlInput(if_false)); + EXPECT_THAT(branch, IsBranch(val, this->graph()->start())); +} + + +TARGET_TYPED_TEST(ChangeLoweringTest, StringAdd) { + Node* node = this->graph()->NewNode(this->simplified()->StringAdd(), + this->Parameter(0), this->Parameter(1)); + Reduction reduction = this->Reduce(node); + EXPECT_FALSE(reduction.Changed()); +} + + +class ChangeLowering32Test : public ChangeLoweringTest<int32_t> { + public: + virtual ~ChangeLowering32Test() {} +}; + + +TARGET_TEST_F(ChangeLowering32Test, ChangeBoolToBit) { + Node* val = Parameter(0); + Node* node = graph()->NewNode(simplified()->ChangeBoolToBit(), val); + Reduction reduction = Reduce(node); + ASSERT_TRUE(reduction.Changed()); + + EXPECT_THAT(reduction.replacement(), + IsWord32Equal(val, IsHeapConstant(true_unique()))); +} + + 
+TARGET_TEST_F(ChangeLowering32Test, ChangeInt32ToTagged) { + Node* val = Parameter(0); + Node* node = graph()->NewNode(simplified()->ChangeInt32ToTagged(), val); + Reduction reduction = Reduce(node); + ASSERT_TRUE(reduction.Changed()); + + Node* phi = reduction.replacement(); + ASSERT_EQ(IrOpcode::kPhi, phi->opcode()); + + Node* smi = NodeProperties::GetValueInput(phi, 1); + ASSERT_THAT(smi, IsProjection(0, IsInt32AddWithOverflow(val, val))); + + Node* heap_number = NodeProperties::GetValueInput(phi, 0); + ASSERT_EQ(IrOpcode::kCall, heap_number->opcode()); + + Node* merge = NodeProperties::GetControlInput(phi); + ASSERT_EQ(IrOpcode::kMerge, merge->opcode()); + + const int32_t kValueOffset = HeapNumber::kValueOffset - kHeapObjectTag; + EXPECT_THAT(NodeProperties::GetControlInput(merge, 0), + IsStore(kMachineFloat64, kNoWriteBarrier, heap_number, + IsInt32Constant(kValueOffset), + IsChangeInt32ToFloat64(val), _, heap_number)); + + Node* if_true = NodeProperties::GetControlInput(heap_number); + ASSERT_EQ(IrOpcode::kIfTrue, if_true->opcode()); + + Node* if_false = NodeProperties::GetControlInput(merge, 1); + ASSERT_EQ(IrOpcode::kIfFalse, if_false->opcode()); + + Node* branch = NodeProperties::GetControlInput(if_true); + EXPECT_EQ(branch, NodeProperties::GetControlInput(if_false)); + EXPECT_THAT(branch, + IsBranch(IsProjection(1, IsInt32AddWithOverflow(val, val)), + graph()->start())); +} + + +TARGET_TEST_F(ChangeLowering32Test, ChangeTaggedToFloat64) { + Node* val = Parameter(0); + Node* node = graph()->NewNode(simplified()->ChangeTaggedToFloat64(), val); + Reduction reduction = Reduce(node); + ASSERT_TRUE(reduction.Changed()); + + const int32_t kShiftAmount = + kSmiTagSize + SmiTagging<kPointerSize>::kSmiShiftSize; + const int32_t kValueOffset = HeapNumber::kValueOffset - kHeapObjectTag; + Node* phi = reduction.replacement(); + ASSERT_THAT( + phi, IsPhi(IsLoad(kMachineFloat64, val, IsInt32Constant(kValueOffset), _), + IsChangeInt32ToFloat64( + IsWord32Sar(val, IsInt32Constant(kShiftAmount))), + _)); + + Node* merge = NodeProperties::GetControlInput(phi); + ASSERT_EQ(IrOpcode::kMerge, merge->opcode()); + + Node* if_true = NodeProperties::GetControlInput(merge, 0); + ASSERT_EQ(IrOpcode::kIfTrue, if_true->opcode()); + + Node* if_false = NodeProperties::GetControlInput(merge, 1); + ASSERT_EQ(IrOpcode::kIfFalse, if_false->opcode()); + + Node* branch = NodeProperties::GetControlInput(if_true); + EXPECT_EQ(branch, NodeProperties::GetControlInput(if_false)); + STATIC_ASSERT(kSmiTag == 0); + STATIC_ASSERT(kSmiTagSize == 1); + EXPECT_THAT(branch, IsBranch(IsWord32And(val, IsInt32Constant(kSmiTagMask)), + graph()->start())); +} + + +class ChangeLowering64Test : public ChangeLoweringTest<int64_t> { + public: + virtual ~ChangeLowering64Test() {} +}; + + +TARGET_TEST_F(ChangeLowering64Test, ChangeBoolToBit) { + Node* val = Parameter(0); + Node* node = graph()->NewNode(simplified()->ChangeBoolToBit(), val); + Reduction reduction = Reduce(node); + ASSERT_TRUE(reduction.Changed()); + + EXPECT_THAT(reduction.replacement(), + IsWord64Equal(val, IsHeapConstant(true_unique()))); +} + + +TARGET_TEST_F(ChangeLowering64Test, ChangeInt32ToTagged) { + Node* val = Parameter(0); + Node* node = graph()->NewNode(simplified()->ChangeInt32ToTagged(), val); + Reduction reduction = Reduce(node); + ASSERT_TRUE(reduction.Changed()); + + const int32_t kShiftAmount = + kSmiTagSize + SmiTagging<kPointerSize>::kSmiShiftSize; + EXPECT_THAT(reduction.replacement(), + IsWord64Shl(val, IsInt32Constant(kShiftAmount))); +} + + 
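
Worth spelling out why the 64-bit lowering just above is a single Word64Shl with no overflow check: every int32 fits in a 64-bit Smi, whose payload occupies the upper word. A plain-arithmetic sketch of that tagging scheme (the shift width 32 = kSmiTagSize + kSmiShiftSize is taken from the test; two's-complement arithmetic is assumed, as it is on all V8 targets):

    #include <cassert>
    #include <cstdint>

    // 64-bit Smi layout assumed by the test: tag bit 0 is clear and the
    // 32-bit payload lives in the upper half of the word.
    const int kSmiShift = 32;  // kSmiTagSize (1) + kSmiShiftSize (31)

    std::int64_t TagSmi(std::int32_t value) {
      // ChangeInt32ToTagged on 64-bit: one left shift, no overflow check.
      // Shift in the unsigned domain to stay defined for negative values.
      return static_cast<std::int64_t>(
          static_cast<std::uint64_t>(value) << kSmiShift);
    }

    std::int32_t UntagSmi(std::int64_t tagged) {
      // The inverse: an arithmetic shift right by the same amount.
      return static_cast<std::int32_t>(tagged >> kSmiShift);
    }

    int main() {
      const std::int32_t samples[] = {0, 1, -1, 0x7fffffff, -0x7fffffff - 1};
      for (std::int32_t v : samples) {
        std::int64_t t = TagSmi(v);
        assert((t & 1) == 0);      // the Smi tag bit stays clear
        assert(UntagSmi(t) == v);  // tagging round-trips every int32
      }
      return 0;
    }

On 32-bit targets, by contrast, only 31 payload bits are available, which is why the ChangeLowering32Test variant earlier checks for Int32AddWithOverflow plus a heap-number slow path.
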
+TARGET_TEST_F(ChangeLowering64Test, ChangeTaggedToFloat64) { + Node* val = Parameter(0); + Node* node = graph()->NewNode(simplified()->ChangeTaggedToFloat64(), val); + Reduction reduction = Reduce(node); + ASSERT_TRUE(reduction.Changed()); + + const int32_t kShiftAmount = + kSmiTagSize + SmiTagging<kPointerSize>::kSmiShiftSize; + const int32_t kValueOffset = HeapNumber::kValueOffset - kHeapObjectTag; + Node* phi = reduction.replacement(); + ASSERT_THAT( + phi, IsPhi(IsLoad(kMachineFloat64, val, IsInt32Constant(kValueOffset), _), + IsChangeInt32ToFloat64(IsConvertInt64ToInt32( + IsWord64Sar(val, IsInt32Constant(kShiftAmount)))), + _)); + + Node* merge = NodeProperties::GetControlInput(phi); + ASSERT_EQ(IrOpcode::kMerge, merge->opcode()); + + Node* if_true = NodeProperties::GetControlInput(merge, 0); + ASSERT_EQ(IrOpcode::kIfTrue, if_true->opcode()); + + Node* if_false = NodeProperties::GetControlInput(merge, 1); + ASSERT_EQ(IrOpcode::kIfFalse, if_false->opcode()); + + Node* branch = NodeProperties::GetControlInput(if_true); + EXPECT_EQ(branch, NodeProperties::GetControlInput(if_false)); + STATIC_ASSERT(kSmiTag == 0); + STATIC_ASSERT(kSmiTagSize == 1); + EXPECT_THAT(branch, IsBranch(IsWord64And(val, IsInt32Constant(kSmiTagMask)), + graph()->start())); +} + +} // namespace compiler +} // namespace internal +} // namespace v8 diff --git a/deps/v8/test/compiler-unittests/compiler-unittests.cc b/deps/v8/test/compiler-unittests/compiler-unittests.cc new file mode 100644 index 00000000000..2ce4c93ee22 --- /dev/null +++ b/deps/v8/test/compiler-unittests/compiler-unittests.cc @@ -0,0 +1,86 @@ +// Copyright 2014 the V8 project authors. All rights reserved. +// Use of this source code is governed by a BSD-style license that can be +// found in the LICENSE file. 
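
The branch structure verified above reduces to one runtime question: is the tagged word a Smi (low bit clear) or a pointer to a HeapNumber? A control-flow skeleton of that lowering in plain C++, using a fake heap-number layout (the struct, the masking, and the casts are illustrative only; V8's real object layout differs):

    #include <cassert>
    #include <cstdint>

    const std::int64_t kSmiTagMask = 1;  // low bit: 0 => Smi, 1 => heap object
    const int kSmiShift = 32;            // 64-bit Smi payload in the upper word

    struct FakeHeapNumber {
      double value;  // stands in for the payload at HeapNumber::kValueOffset
    };

    // The Phi in the lowered graph merges the results of these two paths.
    double ChangeTaggedToFloat64(std::int64_t tagged) {
      if ((tagged & kSmiTagMask) == 0) {
        // Smi path: Word64Sar, then ChangeInt32ToFloat64.
        return static_cast<double>(
            static_cast<std::int32_t>(tagged >> kSmiShift));
      }
      // HeapNumber path: Load(kMachineFloat64, ...) from the object, here
      // simulated by clearing the tag bit to recover the address.
      const FakeHeapNumber* hn =
          reinterpret_cast<const FakeHeapNumber*>(tagged & ~kSmiTagMask);
      return hn->value;
    }

    int main() {
      assert(ChangeTaggedToFloat64(std::int64_t(7) << kSmiShift) == 7.0);
      FakeHeapNumber hn = {0.5};
      std::int64_t ptr = reinterpret_cast<std::int64_t>(&hn) | 1;
      assert(ChangeTaggedToFloat64(ptr) == 0.5);
      return 0;
    }
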
+ +#include "include/libplatform/libplatform.h" +#include "test/compiler-unittests/compiler-unittests.h" +#include "testing/gmock/include/gmock/gmock.h" + +using testing::IsNull; +using testing::NotNull; + +namespace v8 { +namespace internal { +namespace compiler { + +// static +v8::Isolate* CompilerTest::isolate_ = NULL; + + +CompilerTest::CompilerTest() + : isolate_scope_(isolate_), + handle_scope_(isolate_), + context_scope_(v8::Context::New(isolate_)), + zone_(isolate()) {} + + +CompilerTest::~CompilerTest() {} + + +// static +void CompilerTest::SetUpTestCase() { + Test::SetUpTestCase(); + EXPECT_THAT(isolate_, IsNull()); + isolate_ = v8::Isolate::New(); + ASSERT_THAT(isolate_, NotNull()); +} + + +// static +void CompilerTest::TearDownTestCase() { + ASSERT_THAT(isolate_, NotNull()); + isolate_->Dispose(); + isolate_ = NULL; + Test::TearDownTestCase(); +} + +} // namespace compiler +} // namespace internal +} // namespace v8 + + +namespace { + +class CompilerTestEnvironment V8_FINAL : public ::testing::Environment { + public: + CompilerTestEnvironment() : platform_(NULL) {} + ~CompilerTestEnvironment() {} + + virtual void SetUp() V8_OVERRIDE { + EXPECT_THAT(platform_, IsNull()); + platform_ = v8::platform::CreateDefaultPlatform(); + v8::V8::InitializePlatform(platform_); + ASSERT_TRUE(v8::V8::Initialize()); + } + + virtual void TearDown() V8_OVERRIDE { + ASSERT_THAT(platform_, NotNull()); + v8::V8::Dispose(); + v8::V8::ShutdownPlatform(); + delete platform_; + platform_ = NULL; + } + + private: + v8::Platform* platform_; +}; + +} + + +int main(int argc, char** argv) { + testing::InitGoogleMock(&argc, argv); + testing::AddGlobalTestEnvironment(new CompilerTestEnvironment); + v8::V8::SetFlagsFromCommandLine(&argc, argv, true); + return RUN_ALL_TESTS(); +} diff --git a/deps/v8/test/compiler-unittests/compiler-unittests.gyp b/deps/v8/test/compiler-unittests/compiler-unittests.gyp new file mode 100644 index 00000000000..c1de0c42354 --- /dev/null +++ b/deps/v8/test/compiler-unittests/compiler-unittests.gyp @@ -0,0 +1,61 @@ +# Copyright 2014 the V8 project authors. All rights reserved. +# Use of this source code is governed by a BSD-style license that can be +# found in the LICENSE file. + +{ + 'variables': { + 'v8_code': 1, + }, + 'includes': ['../../build/toolchain.gypi', '../../build/features.gypi'], + 'targets': [ + { + 'target_name': 'compiler-unittests', + 'type': 'executable', + 'dependencies': [ + '../../testing/gmock.gyp:gmock', + '../../testing/gtest.gyp:gtest', + '../../tools/gyp/v8.gyp:v8_libplatform', + ], + 'include_dirs': [ + '../..', + ], + 'sources': [ ### gcmole(all) ### + 'change-lowering-unittest.cc', + 'compiler-unittests.cc', + 'instruction-selector-unittest.cc', + 'node-matchers.cc', + 'node-matchers.h', + ], + 'conditions': [ + ['v8_target_arch=="arm"', { + 'sources': [ ### gcmole(arch:arm) ### + 'arm/instruction-selector-arm-unittest.cc', + ], + }], + ['component=="shared_library"', { + # compiler-unittests can't be built against a shared library, so we + # need to depend on the underlying static target in that case. + 'conditions': [ + ['v8_use_snapshot=="true"', { + 'dependencies': ['../../tools/gyp/v8.gyp:v8_snapshot'], + }, + { + 'dependencies': [ + '../../tools/gyp/v8.gyp:v8_nosnapshot', + ], + }], + ], + }, { + 'dependencies': ['../../tools/gyp/v8.gyp:v8'], + }], + ['os_posix == 1', { + # TODO(svenpanne): This is a temporary work-around to fix the warnings + # that show up because we use -std=gnu++0x instead of -std=c++11. 
+ 'cflags!': [ + '-pedantic', + ], + }], + ], + }, + ], +} diff --git a/deps/v8/test/compiler-unittests/compiler-unittests.h b/deps/v8/test/compiler-unittests/compiler-unittests.h new file mode 100644 index 00000000000..091b137066d --- /dev/null +++ b/deps/v8/test/compiler-unittests/compiler-unittests.h @@ -0,0 +1,69 @@ +// Copyright 2014 the V8 project authors. All rights reserved. +// Use of this source code is governed by a BSD-style license that can be +// found in the LICENSE file. + +#ifndef V8_COMPILER_UNITTESTS_COMPILER_UNITTESTS_H_ +#define V8_COMPILER_UNITTESTS_COMPILER_UNITTESTS_H_ + +#include "include/v8.h" +#include "src/zone.h" +#include "testing/gtest/include/gtest/gtest.h" + +namespace v8 { +namespace internal { +namespace compiler { + +// The TARGET_TEST(Case, Name) macro works just like +// TEST(Case, Name), except that the test is disabled +// if the platform is not a supported TurboFan target. +#if V8_TURBOFAN_TARGET +#define TARGET_TEST(Case, Name) TEST(Case, Name) +#else +#define TARGET_TEST(Case, Name) TEST(Case, DISABLED_##Name) +#endif + + +// The TARGET_TEST_F(Case, Name) macro works just like +// TEST_F(Case, Name), except that the test is disabled +// if the platform is not a supported TurboFan target. +#if V8_TURBOFAN_TARGET +#define TARGET_TEST_F(Case, Name) TEST_F(Case, Name) +#else +#define TARGET_TEST_F(Case, Name) TEST_F(Case, DISABLED_##Name) +#endif + + +// The TARGET_TYPED_TEST(Case, Name) macro works just like +// TYPED_TEST(Case, Name), except that the test is disabled +// if the platform is not a supported TurboFan target. +#if V8_TURBOFAN_TARGET +#define TARGET_TYPED_TEST(Case, Name) TYPED_TEST(Case, Name) +#else +#define TARGET_TYPED_TEST(Case, Name) TYPED_TEST(Case, DISABLED_##Name) +#endif + + +class CompilerTest : public ::testing::Test { + public: + CompilerTest(); + virtual ~CompilerTest(); + + Isolate* isolate() const { return reinterpret_cast<Isolate*>(isolate_); } + Zone* zone() { return &zone_; } + + static void SetUpTestCase(); + static void TearDownTestCase(); + + private: + static v8::Isolate* isolate_; + v8::Isolate::Scope isolate_scope_; + v8::HandleScope handle_scope_; + v8::Context::Scope context_scope_; + Zone zone_; +}; + +} // namespace compiler +} // namespace internal +} // namespace v8 + +#endif // V8_COMPILER_UNITTESTS_COMPILER_UNITTESTS_H_ diff --git a/deps/v8/test/compiler-unittests/compiler-unittests.status b/deps/v8/test/compiler-unittests/compiler-unittests.status new file mode 100644 index 00000000000..d439913ccf6 --- /dev/null +++ b/deps/v8/test/compiler-unittests/compiler-unittests.status @@ -0,0 +1,6 @@ +# Copyright 2014 the V8 project authors. All rights reserved. +# Use of this source code is governed by a BSD-style license that can be +# found in the LICENSE file. + +[ +] diff --git a/deps/v8/test/compiler-unittests/instruction-selector-unittest.cc b/deps/v8/test/compiler-unittests/instruction-selector-unittest.cc new file mode 100644 index 00000000000..70186529afb --- /dev/null +++ b/deps/v8/test/compiler-unittests/instruction-selector-unittest.cc @@ -0,0 +1,92 @@ +// Copyright 2014 the V8 project authors. All rights reserved. +// Use of this source code is governed by a BSD-style license that can be +// found in the LICENSE file. 
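
The TARGET_* macros in compiler-unittests.h above don't use any special gtest API for conditional tests; they rely on the naming convention that a test called DISABLED_Something is compiled but skipped at runtime (and can be forced with --gtest_also_run_disabled_tests). A stripped-down version with a stand-in feature flag:

    #include "gtest/gtest.h"

    // Compile-time switch standing in for V8_TURBOFAN_TARGET.
    #define MY_TARGET_SUPPORTED 0

    #if MY_TARGET_SUPPORTED
    #define TARGET_TEST(Case, Name) TEST(Case, Name)
    #else
    // The DISABLED_ prefix makes gtest compile but skip the test.
    #define TARGET_TEST(Case, Name) TEST(Case, DISABLED_##Name)
    #endif

    TARGET_TEST(Example, OnlyOnSupportedTargets) {
      EXPECT_EQ(4, 2 + 2);  // skipped while MY_TARGET_SUPPORTED == 0
    }

    int main(int argc, char** argv) {
      ::testing::InitGoogleTest(&argc, argv);
      return RUN_ALL_TESTS();
    }

Compiling the disabled tests (rather than #ifdef-ing them out entirely) keeps them from bit-rotting on platforms where TurboFan is not yet a supported target.
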
+ +#include "test/compiler-unittests/instruction-selector-unittest.h" + +namespace v8 { +namespace internal { +namespace compiler { + +InstructionSelectorTest::Stream InstructionSelectorTest::StreamBuilder::Build( + InstructionSelector::Features features, + InstructionSelectorTest::StreamBuilderMode mode) { + Schedule* schedule = Export(); + EXPECT_NE(0, graph()->NodeCount()); + CompilationInfo info(test_->isolate(), test_->zone()); + Linkage linkage(&info, call_descriptor()); + InstructionSequence sequence(&linkage, graph(), schedule); + SourcePositionTable source_position_table(graph()); + InstructionSelector selector(&sequence, &source_position_table, features); + selector.SelectInstructions(); + if (FLAG_trace_turbo) { + OFStream out(stdout); + out << "--- Code sequence after instruction selection ---" << endl + << sequence; + } + Stream s; + for (InstructionSequence::const_iterator i = sequence.begin(); + i != sequence.end(); ++i) { + Instruction* instr = *i; + if (instr->opcode() < 0) continue; + if (mode == kTargetInstructions) { + switch (instr->arch_opcode()) { +#define CASE(Name) \ + case k##Name: \ + break; + TARGET_ARCH_OPCODE_LIST(CASE) +#undef CASE + default: + continue; + } + } + for (size_t i = 0; i < instr->OutputCount(); ++i) { + InstructionOperand* output = instr->OutputAt(i); + EXPECT_NE(InstructionOperand::IMMEDIATE, output->kind()); + if (output->IsConstant()) { + s.constants_.insert(std::make_pair( + output->index(), sequence.GetConstant(output->index()))); + } + } + for (size_t i = 0; i < instr->InputCount(); ++i) { + InstructionOperand* input = instr->InputAt(i); + EXPECT_NE(InstructionOperand::CONSTANT, input->kind()); + if (input->IsImmediate()) { + s.immediates_.insert(std::make_pair( + input->index(), sequence.GetImmediate(input->index()))); + } + } + s.instructions_.push_back(instr); + } + return s; +} + + +TARGET_TEST_F(InstructionSelectorTest, ReturnP) { + StreamBuilder m(this, kMachineWord32, kMachineWord32); + m.Return(m.Parameter(0)); + Stream s = m.Build(kAllInstructions); + ASSERT_EQ(2U, s.size()); + EXPECT_EQ(kArchNop, s[0]->arch_opcode()); + ASSERT_EQ(1U, s[0]->OutputCount()); + EXPECT_EQ(kArchRet, s[1]->arch_opcode()); + EXPECT_EQ(1U, s[1]->InputCount()); +} + + +TARGET_TEST_F(InstructionSelectorTest, ReturnImm) { + StreamBuilder m(this, kMachineWord32); + m.Return(m.Int32Constant(0)); + Stream s = m.Build(kAllInstructions); + ASSERT_EQ(2U, s.size()); + EXPECT_EQ(kArchNop, s[0]->arch_opcode()); + ASSERT_EQ(1U, s[0]->OutputCount()); + EXPECT_EQ(InstructionOperand::CONSTANT, s[0]->OutputAt(0)->kind()); + EXPECT_EQ(0, s.ToInt32(s[0]->OutputAt(0))); + EXPECT_EQ(kArchRet, s[1]->arch_opcode()); + EXPECT_EQ(1U, s[1]->InputCount()); +} + +} // namespace compiler +} // namespace internal +} // namespace v8 diff --git a/deps/v8/test/compiler-unittests/instruction-selector-unittest.h b/deps/v8/test/compiler-unittests/instruction-selector-unittest.h new file mode 100644 index 00000000000..3a7b5907573 --- /dev/null +++ b/deps/v8/test/compiler-unittests/instruction-selector-unittest.h @@ -0,0 +1,129 @@ +// Copyright 2014 the V8 project authors. All rights reserved. +// Use of this source code is governed by a BSD-style license that can be +// found in the LICENSE file. 
+ +#ifndef V8_COMPILER_UNITTESTS_INSTRUCTION_SELECTOR_UNITTEST_H_ +#define V8_COMPILER_UNITTESTS_INSTRUCTION_SELECTOR_UNITTEST_H_ + +#include <deque> + +#include "src/compiler/instruction-selector.h" +#include "src/compiler/raw-machine-assembler.h" +#include "test/compiler-unittests/compiler-unittests.h" + +namespace v8 { +namespace internal { +namespace compiler { + +class InstructionSelectorTest : public CompilerTest { + public: + InstructionSelectorTest() {} + virtual ~InstructionSelectorTest() {} + + protected: + class Stream; + + enum StreamBuilderMode { kAllInstructions, kTargetInstructions }; + + class StreamBuilder V8_FINAL : public RawMachineAssembler { + public: + StreamBuilder(InstructionSelectorTest* test, MachineType return_type) + : RawMachineAssembler(new (test->zone()) Graph(test->zone()), + CallDescriptorBuilder(test->zone(), return_type)), + test_(test) {} + StreamBuilder(InstructionSelectorTest* test, MachineType return_type, + MachineType parameter0_type) + : RawMachineAssembler(new (test->zone()) Graph(test->zone()), + CallDescriptorBuilder(test->zone(), return_type, + parameter0_type)), + test_(test) {} + StreamBuilder(InstructionSelectorTest* test, MachineType return_type, + MachineType parameter0_type, MachineType parameter1_type) + : RawMachineAssembler( + new (test->zone()) Graph(test->zone()), + CallDescriptorBuilder(test->zone(), return_type, parameter0_type, + parameter1_type)), + test_(test) {} + + Stream Build(CpuFeature feature) { + return Build(InstructionSelector::Features(feature)); + } + Stream Build(CpuFeature feature1, CpuFeature feature2) { + return Build(InstructionSelector::Features(feature1, feature2)); + } + Stream Build(StreamBuilderMode mode = kTargetInstructions) { + return Build(InstructionSelector::Features(), mode); + } + Stream Build(InstructionSelector::Features features, + StreamBuilderMode mode = kTargetInstructions); + + private: + MachineCallDescriptorBuilder* CallDescriptorBuilder( + Zone* zone, MachineType return_type) { + return new (zone) MachineCallDescriptorBuilder(return_type, 0, NULL); + } + + MachineCallDescriptorBuilder* CallDescriptorBuilder( + Zone* zone, MachineType return_type, MachineType parameter0_type) { + MachineType* parameter_types = zone->NewArray<MachineType>(1); + parameter_types[0] = parameter0_type; + return new (zone) + MachineCallDescriptorBuilder(return_type, 1, parameter_types); + } + + MachineCallDescriptorBuilder* CallDescriptorBuilder( + Zone* zone, MachineType return_type, MachineType parameter0_type, + MachineType parameter1_type) { + MachineType* parameter_types = zone->NewArray<MachineType>(2); + parameter_types[0] = parameter0_type; + parameter_types[1] = parameter1_type; + return new (zone) + MachineCallDescriptorBuilder(return_type, 2, parameter_types); + } + + private: + InstructionSelectorTest* test_; + }; + + class Stream V8_FINAL { + public: + size_t size() const { return instructions_.size(); } + const Instruction* operator[](size_t index) const { + EXPECT_LT(index, size()); + return instructions_[index]; + } + + int32_t ToInt32(const InstructionOperand* operand) const { + return ToConstant(operand).ToInt32(); + } + + private: + Constant ToConstant(const InstructionOperand* operand) const { + ConstantMap::const_iterator i; + if (operand->IsConstant()) { + i = constants_.find(operand->index()); + EXPECT_NE(constants_.end(), i); + } else { + EXPECT_EQ(InstructionOperand::IMMEDIATE, operand->kind()); + i = immediates_.find(operand->index()); + EXPECT_NE(immediates_.end(), i); + } + 
EXPECT_EQ(operand->index(), i->first); + return i->second; + } + + friend class StreamBuilder; + + typedef std::map<int, Constant> ConstantMap; + + ConstantMap constants_; + ConstantMap immediates_; + std::deque<Instruction*> instructions_; + }; +}; + +} // namespace compiler +} // namespace internal +} // namespace v8 + +#endif // V8_COMPILER_UNITTESTS_INSTRUCTION_SELECTOR_UNITTEST_H_ diff --git a/deps/v8/test/compiler-unittests/node-matchers.cc b/deps/v8/test/compiler-unittests/node-matchers.cc new file mode 100644 index 00000000000..d580834113d --- /dev/null +++ b/deps/v8/test/compiler-unittests/node-matchers.cc @@ -0,0 +1,454 @@ +// Copyright 2014 the V8 project authors. All rights reserved. +// Use of this source code is governed by a BSD-style license that can be +// found in the LICENSE file. + +#include "test/compiler-unittests/node-matchers.h" + +#include <ostream> // NOLINT(readability/streams) + +#include "src/compiler/node-properties-inl.h" + +using testing::MakeMatcher; +using testing::MatcherInterface; +using testing::MatchResultListener; +using testing::StringMatchResultListener; + +namespace v8 { +namespace internal { + +// TODO(bmeurer): Find a new home for these functions. +template <typename T> +inline std::ostream& operator<<(std::ostream& os, + const PrintableUnique<T>& value) { + return os << value.string(); +} + +namespace compiler { + +namespace { + +template <typename T> +bool PrintMatchAndExplain(const T& value, const char* value_name, + const Matcher<T>& value_matcher, + MatchResultListener* listener) { + StringMatchResultListener value_listener; + if (!value_matcher.MatchAndExplain(value, &value_listener)) { + *listener << "whose " << value_name << " " << value << " doesn't match"; + if (value_listener.str() != "") { + *listener << ", " << value_listener.str(); + } + return false; + } + return true; +} + + +class NodeMatcher : public MatcherInterface<Node*> { + public: + explicit NodeMatcher(IrOpcode::Value opcode) : opcode_(opcode) {} + + virtual void DescribeTo(std::ostream* os) const V8_OVERRIDE { + *os << "is a " << IrOpcode::Mnemonic(opcode_) << " node"; + } + + virtual bool MatchAndExplain(Node* node, MatchResultListener* listener) const + V8_OVERRIDE { + if (node == NULL) { + *listener << "which is NULL"; + return false; + } + if (node->opcode() != opcode_) { + *listener << "whose opcode is " << IrOpcode::Mnemonic(node->opcode()); + return false; + } + return true; + } + + private: + const IrOpcode::Value opcode_; +}; + + +class IsBranchMatcher V8_FINAL : public NodeMatcher { + public: + IsBranchMatcher(const Matcher<Node*>& value_matcher, + const Matcher<Node*>& control_matcher) + : NodeMatcher(IrOpcode::kBranch), + value_matcher_(value_matcher), + control_matcher_(control_matcher) {} + + virtual void DescribeTo(std::ostream* os) const V8_OVERRIDE { + NodeMatcher::DescribeTo(os); + *os << " whose value ("; + value_matcher_.DescribeTo(os); + *os << ") and control ("; + control_matcher_.DescribeTo(os); + *os << ")"; + } + + virtual bool MatchAndExplain(Node* node, MatchResultListener* listener) const + V8_OVERRIDE { + return (NodeMatcher::MatchAndExplain(node, listener) && + PrintMatchAndExplain(NodeProperties::GetValueInput(node, 0), + "value", value_matcher_, listener) && + PrintMatchAndExplain(NodeProperties::GetControlInput(node), + "control", control_matcher_, listener)); + } + + private: + const Matcher<Node*> value_matcher_; + const Matcher<Node*> control_matcher_; +}; + + +template <typename T> +class IsConstantMatcher V8_FINAL : public NodeMatcher 
{ + public: + IsConstantMatcher(IrOpcode::Value opcode, const Matcher<T>& value_matcher) + : NodeMatcher(opcode), value_matcher_(value_matcher) {} + + virtual void DescribeTo(std::ostream* os) const V8_OVERRIDE { + NodeMatcher::DescribeTo(os); + *os << " whose value ("; + value_matcher_.DescribeTo(os); + *os << ")"; + } + + virtual bool MatchAndExplain(Node* node, MatchResultListener* listener) const + V8_OVERRIDE { + return (NodeMatcher::MatchAndExplain(node, listener) && + PrintMatchAndExplain(OpParameter<T>(node), "value", value_matcher_, + listener)); + } + + private: + const Matcher<T> value_matcher_; +}; + + +class IsPhiMatcher V8_FINAL : public NodeMatcher { + public: + IsPhiMatcher(const Matcher<Node*>& value0_matcher, + const Matcher<Node*>& value1_matcher, + const Matcher<Node*>& control_matcher) + : NodeMatcher(IrOpcode::kPhi), + value0_matcher_(value0_matcher), + value1_matcher_(value1_matcher), + control_matcher_(control_matcher) {} + + virtual void DescribeTo(std::ostream* os) const V8_OVERRIDE { + NodeMatcher::DescribeTo(os); + *os << " whose value0 ("; + value0_matcher_.DescribeTo(os); + *os << "), value1 ("; + value1_matcher_.DescribeTo(os); + *os << ") and control ("; + control_matcher_.DescribeTo(os); + *os << ")"; + } + + virtual bool MatchAndExplain(Node* node, MatchResultListener* listener) const + V8_OVERRIDE { + return (NodeMatcher::MatchAndExplain(node, listener) && + PrintMatchAndExplain(NodeProperties::GetValueInput(node, 0), + "value0", value0_matcher_, listener) && + PrintMatchAndExplain(NodeProperties::GetValueInput(node, 1), + "value1", value1_matcher_, listener) && + PrintMatchAndExplain(NodeProperties::GetControlInput(node), + "control", control_matcher_, listener)); + } + + private: + const Matcher<Node*> value0_matcher_; + const Matcher<Node*> value1_matcher_; + const Matcher<Node*> control_matcher_; +}; + + +class IsProjectionMatcher V8_FINAL : public NodeMatcher { + public: + IsProjectionMatcher(const Matcher<int32_t>& index_matcher, + const Matcher<Node*>& base_matcher) + : NodeMatcher(IrOpcode::kProjection), + index_matcher_(index_matcher), + base_matcher_(base_matcher) {} + + virtual void DescribeTo(std::ostream* os) const V8_OVERRIDE { + NodeMatcher::DescribeTo(os); + *os << " whose index ("; + index_matcher_.DescribeTo(os); + *os << ") and base ("; + base_matcher_.DescribeTo(os); + *os << ")"; + } + + virtual bool MatchAndExplain(Node* node, MatchResultListener* listener) const + V8_OVERRIDE { + return (NodeMatcher::MatchAndExplain(node, listener) && + PrintMatchAndExplain(OpParameter<int32_t>(node), "index", + index_matcher_, listener) && + PrintMatchAndExplain(NodeProperties::GetValueInput(node, 0), "base", + base_matcher_, listener)); + } + + private: + const Matcher<int32_t> index_matcher_; + const Matcher<Node*> base_matcher_; +}; + + +class IsLoadMatcher V8_FINAL : public NodeMatcher { + public: + IsLoadMatcher(const Matcher<MachineType>& type_matcher, + const Matcher<Node*>& base_matcher, + const Matcher<Node*>& index_matcher, + const Matcher<Node*>& effect_matcher) + : NodeMatcher(IrOpcode::kLoad), + type_matcher_(type_matcher), + base_matcher_(base_matcher), + index_matcher_(index_matcher), + effect_matcher_(effect_matcher) {} + + virtual void DescribeTo(std::ostream* os) const V8_OVERRIDE { + NodeMatcher::DescribeTo(os); + *os << " whose type ("; + type_matcher_.DescribeTo(os); + *os << "), base ("; + base_matcher_.DescribeTo(os); + *os << "), index ("; + index_matcher_.DescribeTo(os); + *os << ") and effect ("; + 
effect_matcher_.DescribeTo(os); + *os << ")"; + } + + virtual bool MatchAndExplain(Node* node, MatchResultListener* listener) const + V8_OVERRIDE { + return (NodeMatcher::MatchAndExplain(node, listener) && + PrintMatchAndExplain(OpParameter<MachineType>(node), "type", + type_matcher_, listener) && + PrintMatchAndExplain(NodeProperties::GetValueInput(node, 0), "base", + base_matcher_, listener) && + PrintMatchAndExplain(NodeProperties::GetValueInput(node, 1), + "index", index_matcher_, listener) && + PrintMatchAndExplain(NodeProperties::GetEffectInput(node), "effect", + effect_matcher_, listener)); + } + + private: + const Matcher<MachineType> type_matcher_; + const Matcher<Node*> base_matcher_; + const Matcher<Node*> index_matcher_; + const Matcher<Node*> effect_matcher_; +}; + + +class IsStoreMatcher V8_FINAL : public NodeMatcher { + public: + IsStoreMatcher(const Matcher<MachineType>& type_matcher, + const Matcher<WriteBarrierKind> write_barrier_matcher, + const Matcher<Node*>& base_matcher, + const Matcher<Node*>& index_matcher, + const Matcher<Node*>& value_matcher, + const Matcher<Node*>& effect_matcher, + const Matcher<Node*>& control_matcher) + : NodeMatcher(IrOpcode::kStore), + type_matcher_(type_matcher), + write_barrier_matcher_(write_barrier_matcher), + base_matcher_(base_matcher), + index_matcher_(index_matcher), + value_matcher_(value_matcher), + effect_matcher_(effect_matcher), + control_matcher_(control_matcher) {} + + virtual void DescribeTo(std::ostream* os) const V8_OVERRIDE { + NodeMatcher::DescribeTo(os); + *os << " whose type ("; + type_matcher_.DescribeTo(os); + *os << "), write barrier ("; + write_barrier_matcher_.DescribeTo(os); + *os << "), base ("; + base_matcher_.DescribeTo(os); + *os << "), index ("; + index_matcher_.DescribeTo(os); + *os << "), value ("; + value_matcher_.DescribeTo(os); + *os << "), effect ("; + effect_matcher_.DescribeTo(os); + *os << ") and control ("; + control_matcher_.DescribeTo(os); + *os << ")"; + } + + virtual bool MatchAndExplain(Node* node, MatchResultListener* listener) const + V8_OVERRIDE { + return (NodeMatcher::MatchAndExplain(node, listener) && + PrintMatchAndExplain(OpParameter<StoreRepresentation>(node).rep, + "type", type_matcher_, listener) && + PrintMatchAndExplain( + OpParameter<StoreRepresentation>(node).write_barrier_kind, + "write barrier", write_barrier_matcher_, listener) && + PrintMatchAndExplain(NodeProperties::GetValueInput(node, 0), "base", + base_matcher_, listener) && + PrintMatchAndExplain(NodeProperties::GetValueInput(node, 1), + "index", index_matcher_, listener) && + PrintMatchAndExplain(NodeProperties::GetValueInput(node, 2), + "value", value_matcher_, listener) && + PrintMatchAndExplain(NodeProperties::GetEffectInput(node), "effect", + effect_matcher_, listener) && + PrintMatchAndExplain(NodeProperties::GetControlInput(node), + "control", control_matcher_, listener)); + } + + private: + const Matcher<MachineType> type_matcher_; + const Matcher<WriteBarrierKind> write_barrier_matcher_; + const Matcher<Node*> base_matcher_; + const Matcher<Node*> index_matcher_; + const Matcher<Node*> value_matcher_; + const Matcher<Node*> effect_matcher_; + const Matcher<Node*> control_matcher_; +}; + + +class IsBinopMatcher V8_FINAL : public NodeMatcher { + public: + IsBinopMatcher(IrOpcode::Value opcode, const Matcher<Node*>& lhs_matcher, + const Matcher<Node*>& rhs_matcher) + : NodeMatcher(opcode), + lhs_matcher_(lhs_matcher), + rhs_matcher_(rhs_matcher) {} + + virtual void DescribeTo(std::ostream* os) const V8_OVERRIDE { 
+ NodeMatcher::DescribeTo(os); + *os << " whose lhs ("; + lhs_matcher_.DescribeTo(os); + *os << ") and rhs ("; + rhs_matcher_.DescribeTo(os); + *os << ")"; + } + + virtual bool MatchAndExplain(Node* node, MatchResultListener* listener) const + V8_OVERRIDE { + return (NodeMatcher::MatchAndExplain(node, listener) && + PrintMatchAndExplain(NodeProperties::GetValueInput(node, 0), "lhs", + lhs_matcher_, listener) && + PrintMatchAndExplain(NodeProperties::GetValueInput(node, 1), "rhs", + rhs_matcher_, listener)); + } + + private: + const Matcher<Node*> lhs_matcher_; + const Matcher<Node*> rhs_matcher_; +}; + + +class IsUnopMatcher V8_FINAL : public NodeMatcher { + public: + IsUnopMatcher(IrOpcode::Value opcode, const Matcher<Node*>& input_matcher) + : NodeMatcher(opcode), input_matcher_(input_matcher) {} + + virtual void DescribeTo(std::ostream* os) const V8_OVERRIDE { + NodeMatcher::DescribeTo(os); + *os << " whose input ("; + input_matcher_.DescribeTo(os); + *os << ")"; + } + + virtual bool MatchAndExplain(Node* node, MatchResultListener* listener) const + V8_OVERRIDE { + return (NodeMatcher::MatchAndExplain(node, listener) && + PrintMatchAndExplain(NodeProperties::GetValueInput(node, 0), + "input", input_matcher_, listener)); + } + + private: + const Matcher<Node*> input_matcher_; +}; + +} + + +Matcher<Node*> IsBranch(const Matcher<Node*>& value_matcher, + const Matcher<Node*>& control_matcher) { + return MakeMatcher(new IsBranchMatcher(value_matcher, control_matcher)); +} + + +Matcher<Node*> IsInt32Constant(const Matcher<int32_t>& value_matcher) { + return MakeMatcher( + new IsConstantMatcher<int32_t>(IrOpcode::kInt32Constant, value_matcher)); +} + + +Matcher<Node*> IsHeapConstant( + const Matcher<PrintableUnique<HeapObject> >& value_matcher) { + return MakeMatcher(new IsConstantMatcher<PrintableUnique<HeapObject> >( + IrOpcode::kHeapConstant, value_matcher)); +} + + +Matcher<Node*> IsPhi(const Matcher<Node*>& value0_matcher, + const Matcher<Node*>& value1_matcher, + const Matcher<Node*>& merge_matcher) { + return MakeMatcher( + new IsPhiMatcher(value0_matcher, value1_matcher, merge_matcher)); +} + + +Matcher<Node*> IsProjection(const Matcher<int32_t>& index_matcher, + const Matcher<Node*>& base_matcher) { + return MakeMatcher(new IsProjectionMatcher(index_matcher, base_matcher)); +} + + +Matcher<Node*> IsLoad(const Matcher<MachineType>& type_matcher, + const Matcher<Node*>& base_matcher, + const Matcher<Node*>& index_matcher, + const Matcher<Node*>& effect_matcher) { + return MakeMatcher(new IsLoadMatcher(type_matcher, base_matcher, + index_matcher, effect_matcher)); +} + + +Matcher<Node*> IsStore(const Matcher<MachineType>& type_matcher, + const Matcher<WriteBarrierKind>& write_barrier_matcher, + const Matcher<Node*>& base_matcher, + const Matcher<Node*>& index_matcher, + const Matcher<Node*>& value_matcher, + const Matcher<Node*>& effect_matcher, + const Matcher<Node*>& control_matcher) { + return MakeMatcher(new IsStoreMatcher( + type_matcher, write_barrier_matcher, base_matcher, index_matcher, + value_matcher, effect_matcher, control_matcher)); +} + + +#define IS_BINOP_MATCHER(Name) \ + Matcher<Node*> Is##Name(const Matcher<Node*>& lhs_matcher, \ + const Matcher<Node*>& rhs_matcher) { \ + return MakeMatcher( \ + new IsBinopMatcher(IrOpcode::k##Name, lhs_matcher, rhs_matcher)); \ + } +IS_BINOP_MATCHER(Word32And) +IS_BINOP_MATCHER(Word32Sar) +IS_BINOP_MATCHER(Word32Equal) +IS_BINOP_MATCHER(Word64And) +IS_BINOP_MATCHER(Word64Sar) +IS_BINOP_MATCHER(Word64Shl) +IS_BINOP_MATCHER(Word64Equal) 
+IS_BINOP_MATCHER(Int32AddWithOverflow) +#undef IS_BINOP_MATCHER + + +#define IS_UNOP_MATCHER(Name) \ + Matcher<Node*> Is##Name(const Matcher<Node*>& input_matcher) { \ + return MakeMatcher(new IsUnopMatcher(IrOpcode::k##Name, input_matcher)); \ + } +IS_UNOP_MATCHER(ConvertInt64ToInt32) +IS_UNOP_MATCHER(ChangeInt32ToFloat64) +#undef IS_UNOP_MATCHER + +} // namespace compiler +} // namespace internal +} // namespace v8 diff --git a/deps/v8/test/compiler-unittests/node-matchers.h b/deps/v8/test/compiler-unittests/node-matchers.h new file mode 100644 index 00000000000..09da07a7f58 --- /dev/null +++ b/deps/v8/test/compiler-unittests/node-matchers.h @@ -0,0 +1,71 @@ +// Copyright 2014 the V8 project authors. All rights reserved. +// Use of this source code is governed by a BSD-style license that can be +// found in the LICENSE file. + +#ifndef V8_COMPILER_UNITTESTS_NODE_MATCHERS_H_ +#define V8_COMPILER_UNITTESTS_NODE_MATCHERS_H_ + +#include "src/compiler/machine-operator.h" +#include "testing/gmock/include/gmock/gmock.h" + +namespace v8 { +namespace internal { + +// Forward declarations. +class HeapObject; +template <class T> +class PrintableUnique; + +namespace compiler { + +// Forward declarations. +class Node; + +using testing::Matcher; + +Matcher<Node*> IsBranch(const Matcher<Node*>& value_matcher, + const Matcher<Node*>& control_matcher); +Matcher<Node*> IsHeapConstant( + const Matcher<PrintableUnique<HeapObject> >& value_matcher); +Matcher<Node*> IsInt32Constant(const Matcher<int32_t>& value_matcher); +Matcher<Node*> IsPhi(const Matcher<Node*>& value0_matcher, + const Matcher<Node*>& value1_matcher, + const Matcher<Node*>& merge_matcher); +Matcher<Node*> IsProjection(const Matcher<int32_t>& index_matcher, + const Matcher<Node*>& base_matcher); + +Matcher<Node*> IsLoad(const Matcher<MachineType>& type_matcher, + const Matcher<Node*>& base_matcher, + const Matcher<Node*>& index_matcher, + const Matcher<Node*>& effect_matcher); +Matcher<Node*> IsStore(const Matcher<MachineType>& type_matcher, + const Matcher<WriteBarrierKind>& write_barrier_matcher, + const Matcher<Node*>& base_matcher, + const Matcher<Node*>& index_matcher, + const Matcher<Node*>& value_matcher, + const Matcher<Node*>& effect_matcher, + const Matcher<Node*>& control_matcher); +Matcher<Node*> IsWord32And(const Matcher<Node*>& lhs_matcher, + const Matcher<Node*>& rhs_matcher); +Matcher<Node*> IsWord32Sar(const Matcher<Node*>& lhs_matcher, + const Matcher<Node*>& rhs_matcher); +Matcher<Node*> IsWord32Equal(const Matcher<Node*>& lhs_matcher, + const Matcher<Node*>& rhs_matcher); +Matcher<Node*> IsWord64And(const Matcher<Node*>& lhs_matcher, + const Matcher<Node*>& rhs_matcher); +Matcher<Node*> IsWord64Shl(const Matcher<Node*>& lhs_matcher, + const Matcher<Node*>& rhs_matcher); +Matcher<Node*> IsWord64Sar(const Matcher<Node*>& lhs_matcher, + const Matcher<Node*>& rhs_matcher); +Matcher<Node*> IsWord64Equal(const Matcher<Node*>& lhs_matcher, + const Matcher<Node*>& rhs_matcher); +Matcher<Node*> IsInt32AddWithOverflow(const Matcher<Node*>& lhs_matcher, + const Matcher<Node*>& rhs_matcher); +Matcher<Node*> IsConvertInt64ToInt32(const Matcher<Node*>& input_matcher); +Matcher<Node*> IsChangeInt32ToFloat64(const Matcher<Node*>& input_matcher); + +} // namespace compiler +} // namespace internal +} // namespace v8 + +#endif // V8_COMPILER_UNITTESTS_NODE_MATCHERS_H_ diff --git a/deps/v8/test/compiler-unittests/testcfg.py b/deps/v8/test/compiler-unittests/testcfg.py new file mode 100644 index 00000000000..4eec956f7e3 --- /dev/null +++ 
b/deps/v8/test/compiler-unittests/testcfg.py @@ -0,0 +1,51 @@ +# Copyright 2014 the V8 project authors. All rights reserved. +# Use of this source code is governed by a BSD-style license that can be +# found in the LICENSE file. + +import os +import shutil + +from testrunner.local import commands +from testrunner.local import testsuite +from testrunner.local import utils +from testrunner.objects import testcase + + +class CompilerUnitTestsSuite(testsuite.TestSuite): + def __init__(self, name, root): + super(CompilerUnitTestsSuite, self).__init__(name, root) + + def ListTests(self, context): + shell = os.path.abspath(os.path.join(context.shell_dir, self.shell())) + if utils.IsWindows(): + shell += ".exe" + output = commands.Execute(context.command_prefix + + [shell, "--gtest_list_tests"] + + context.extra_flags) + if output.exit_code != 0: + print output.stdout + print output.stderr + return [] + tests = [] + test_case = '' + for test_desc in output.stdout.strip().split(): + if test_desc.endswith('.'): + test_case = test_desc + else: + test = testcase.TestCase(self, test_case + test_desc, dependency=None) + tests.append(test) + tests.sort() + return tests + + def GetFlagsForTestCase(self, testcase, context): + return (testcase.flags + ["--gtest_filter=" + testcase.path] + + ["--gtest_random_seed=%s" % context.random_seed] + + ["--gtest_print_time=0"] + + context.mode_flags) + + def shell(self): + return "compiler-unittests" + + +def GetSuite(name, root): + return CompilerUnitTestsSuite(name, root) diff --git a/deps/v8/test/fuzz-natives/base.js b/deps/v8/test/fuzz-natives/base.js index b9f70043fb5..d1f721d0c0b 100644 --- a/deps/v8/test/fuzz-natives/base.js +++ b/deps/v8/test/fuzz-natives/base.js @@ -2,6 +2,8 @@ // Use of this source code is governed by a BSD-style license that can be // found in the LICENSE file. +// Flags: --allow-natives-syntax + // TODO(jkummerow): There are many ways to improve these tests, e.g.: // - more variance in randomized inputs // - better time complexity management @@ -15,7 +17,9 @@ function makeArguments() { result.push(17); result.push(-31); result.push(new Array(100)); - result.push(new Array(100003)); + var a = %NormalizeElements([]); + a.length = 100003; + result.push(a); result.push(Number.MIN_VALUE); result.push("whoops"); result.push("x"); diff --git a/deps/v8/test/fuzz-natives/fuzz-natives.status b/deps/v8/test/fuzz-natives/fuzz-natives.status index fb3cae902a4..7165c3845a2 100644 --- a/deps/v8/test/fuzz-natives/fuzz-natives.status +++ b/deps/v8/test/fuzz-natives/fuzz-natives.status @@ -31,6 +31,27 @@ "CreateDateTimeFormat": [SKIP], "CreateNumberFormat": [SKIP], + # TODO(danno): Fix these internal functions that are only callable from stubs + # and un-blacklist them! + "CompileUnoptimized": [SKIP], + "NotifyDeoptimized": [SKIP], + "NotifyStubFailure": [SKIP], + "NewSloppyArguments": [SKIP], + "NewStrictArguments": [SKIP], + "ArrayConstructor": [SKIP], + "InternalArrayConstructor": [SKIP], + "FinalizeInstanceSize": [SKIP], + "PromoteScheduledException": [SKIP], + "NewFunctionContext": [SKIP], + "PushWithContext": [SKIP], + "PushCatchContext": [SKIP], + "PushModuleContext": [SKIP], + "LoadLookupSlot": [SKIP], + "LoadLookupSlotNoReferenceError": [SKIP], + "ResolvePossiblyDirectEval": [SKIP], + "ForInInit": [SKIP], + "ForInNext": [SKIP], + # TODO(jkummerow): Figure out what to do about inlined functions.
"_GeneratorNext": [SKIP], "_GeneratorThrow": [SKIP], diff --git a/deps/v8/test/fuzz-natives/testcfg.py b/deps/v8/test/fuzz-natives/testcfg.py index d8e3f056c6c..5e00b404bc9 100644 --- a/deps/v8/test/fuzz-natives/testcfg.py +++ b/deps/v8/test/fuzz-natives/testcfg.py @@ -28,14 +28,19 @@ def ListTests(self, context): if output.exit_code != 0: print output.stdout print output.stderr - assert false, "Failed to get natives list." + assert False, "Failed to get natives list." tests = [] for line in output.stdout.strip().split(): - (name, argc) = line.split(",") - flags = ["--allow-natives-syntax", - "-e", "var NAME = '%s', ARGC = %s;" % (name, argc)] - test = testcase.TestCase(self, name, flags) - tests.append(test) + try: + (name, argc) = line.split(",") + flags = ["--allow-natives-syntax", + "-e", "var NAME = '%s', ARGC = %s;" % (name, argc)] + test = testcase.TestCase(self, name, flags) + tests.append(test) + except: + # Work-around: If parsing didn't work, it might have been due to output + # caused by other d8 flags. + pass return tests def GetFlagsForTestCase(self, testcase, context): diff --git a/deps/v8/test/intl/intl.status b/deps/v8/test/intl/intl.status index 4ecbf325ada..007943a323c 100644 --- a/deps/v8/test/intl/intl.status +++ b/deps/v8/test/intl/intl.status @@ -38,5 +38,11 @@ # BUG(2899): default locale for search fails on mac and on android. 'collator/default-locale': [['system == macos or arch == android_arm or arch == android_ia32', FAIL]], + + # BUG(v8:3454). + 'date-format/parse-MMMdy': [FAIL], + 'date-format/parse-mdyhms': [FAIL], + 'number-format/parse-decimal': [FAIL], + 'number-format/parse-percent': [FAIL], }], # ALWAYS ] diff --git a/deps/v8/test/mjsunit/allocation-site-info.js b/deps/v8/test/mjsunit/allocation-site-info.js index 35b60ee266f..9984f5bd2cb 100644 --- a/deps/v8/test/mjsunit/allocation-site-info.js +++ b/deps/v8/test/mjsunit/allocation-site-info.js @@ -25,25 +25,9 @@ // (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE // OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. -// Flags: --allow-natives-syntax --smi-only-arrays --expose-gc +// Flags: --allow-natives-syntax --expose-gc // Flags: --noalways-opt -// Test element kind of objects. -// Since --smi-only-arrays affects builtins, its default setting at compile -// time sticks if built with snapshot. If --smi-only-arrays is deactivated -// by default, only a no-snapshot build actually has smi-only arrays enabled -// in this test case. Depending on whether smi-only arrays are actually -// enabled, this test takes the appropriate code path to check smi-only arrays. 
- -// support_smi_only_arrays = %HasFastSmiElements(new Array(1,2,3,4,5,6,7,8)); -support_smi_only_arrays = true; - -if (support_smi_only_arrays) { - print("Tests include smi-only arrays."); -} else { - print("Tests do NOT include smi-only arrays."); -} - var elements_kind = { fast_smi_only : 'fast smi only elements', fast : 'fast elements', @@ -73,10 +57,6 @@ function isHoley(obj) { } function assertKind(expected, obj, name_opt) { - if (!support_smi_only_arrays && - expected == elements_kind.fast_smi_only) { - expected = elements_kind.fast; - } assertEquals(expected, getKind(obj), name_opt); } @@ -88,403 +68,402 @@ function assertNotHoley(obj, name_opt) { assertEquals(false, isHoley(obj), name_opt); } -if (support_smi_only_arrays) { - obj = []; - assertNotHoley(obj); - assertKind(elements_kind.fast_smi_only, obj); +obj = []; +assertNotHoley(obj); +assertKind(elements_kind.fast_smi_only, obj); - obj = [1, 2, 3]; - assertNotHoley(obj); - assertKind(elements_kind.fast_smi_only, obj); +obj = [1, 2, 3]; +assertNotHoley(obj); +assertKind(elements_kind.fast_smi_only, obj); - obj = new Array(); - assertNotHoley(obj); - assertKind(elements_kind.fast_smi_only, obj); +obj = new Array(); +assertNotHoley(obj); +assertKind(elements_kind.fast_smi_only, obj); - obj = new Array(0); - assertNotHoley(obj); - assertKind(elements_kind.fast_smi_only, obj); +obj = new Array(0); +assertNotHoley(obj); +assertKind(elements_kind.fast_smi_only, obj); - obj = new Array(2); - assertHoley(obj); - assertKind(elements_kind.fast_smi_only, obj); +obj = new Array(2); +assertHoley(obj); +assertKind(elements_kind.fast_smi_only, obj); - obj = new Array(1,2,3); - assertNotHoley(obj); - assertKind(elements_kind.fast_smi_only, obj); +obj = new Array(1,2,3); +assertNotHoley(obj); +assertKind(elements_kind.fast_smi_only, obj); - obj = new Array(1, "hi", 2, undefined); - assertNotHoley(obj); - assertKind(elements_kind.fast, obj); +obj = new Array(1, "hi", 2, undefined); +assertNotHoley(obj); +assertKind(elements_kind.fast, obj); - function fastliteralcase(literal, value) { - literal[0] = value; - return literal; - } +function fastliteralcase(literal, value) { + literal[0] = value; + return literal; +} - function get_standard_literal() { - var literal = [1, 2, 3]; - return literal; - } +function get_standard_literal() { + var literal = [1, 2, 3]; + return literal; +} - // Case: [1,2,3] as allocation site - obj = fastliteralcase(get_standard_literal(), 1); - assertKind(elements_kind.fast_smi_only, obj); - obj = fastliteralcase(get_standard_literal(), 1.5); +// Case: [1,2,3] as allocation site +obj = fastliteralcase(get_standard_literal(), 1); +assertKind(elements_kind.fast_smi_only, obj); +obj = fastliteralcase(get_standard_literal(), 1.5); +assertKind(elements_kind.fast_double, obj); +obj = fastliteralcase(get_standard_literal(), 2); +assertKind(elements_kind.fast_double, obj); + +// The test below is in a loop because arrays that live +// at global scope without the chance of being recreated +// don't have allocation site information attached. +for (i = 0; i < 2; i++) { + obj = fastliteralcase([5, 3, 2], 1.5); assertKind(elements_kind.fast_double, obj); - obj = fastliteralcase(get_standard_literal(), 2); + obj = fastliteralcase([3, 6, 2], 1.5); assertKind(elements_kind.fast_double, obj); - // The test below is in a loop because arrays that live - // at global scope without the chance of being recreated - // don't have allocation site information attached. 
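The comment above deserves a concrete illustration: allocation-site feedback is keyed to the source position of a literal, so only literals that are re-created (for example, returned from a function) can learn from transitions; a literal created once at global scope never gets the chance. A sketch under the same flags; makeLiteral is an illustrative name, not from the patch:

function makeLiteral() {
  return [1, 2, 3];                     // one allocation site, many arrays
}
var x = makeLiteral();
assertTrue(%HasFastSmiElements(x));
x[0] = 4.5;                             // transition recorded on the site
var y = makeLiteral();
assertTrue(%HasFastDoubleElements(y));  // later literals start out as double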
- for (i = 0; i < 2; i++) { - obj = fastliteralcase([5, 3, 2], 1.5); - assertKind(elements_kind.fast_double, obj); - obj = fastliteralcase([3, 6, 2], 1.5); - assertKind(elements_kind.fast_double, obj); - - // Note: thanks to pessimistic transition store stubs, we'll attempt - // to transition to the most general elements kind seen at a particular - // store site. So, the elements kind will be double. - obj = fastliteralcase([2, 6, 3], 2); - assertKind(elements_kind.fast_double, obj); - } + // Note: thanks to pessimistic transition store stubs, we'll attempt + // to transition to the most general elements kind seen at a particular + // store site. So, the elements kind will be double. + obj = fastliteralcase([2, 6, 3], 2); + assertKind(elements_kind.fast_double, obj); +} - // Verify that we will not pretransition the double->fast path. - obj = fastliteralcase(get_standard_literal(), "elliot"); - assertKind(elements_kind.fast, obj); - obj = fastliteralcase(get_standard_literal(), 3); - assertKind(elements_kind.fast, obj); +// Verify that we will not pretransition the double->fast path. +obj = fastliteralcase(get_standard_literal(), "elliot"); +assertKind(elements_kind.fast, obj); +obj = fastliteralcase(get_standard_literal(), 3); +assertKind(elements_kind.fast, obj); - // Make sure this works in crankshafted code too. +// Make sure this works in crankshafted code too. %OptimizeFunctionOnNextCall(get_standard_literal); - get_standard_literal(); - obj = get_standard_literal(); - assertKind(elements_kind.fast, obj); +get_standard_literal(); +obj = get_standard_literal(); +assertKind(elements_kind.fast, obj); + +function fastliteralcase_smifast(value) { + var literal = [1, 2, 3, 4]; + literal[0] = value; + return literal; +} - function fastliteralcase_smifast(value) { - var literal = [1, 2, 3, 4]; - literal[0] = value; - return literal; - } +obj = fastliteralcase_smifast(1); +assertKind(elements_kind.fast_smi_only, obj); +obj = fastliteralcase_smifast("carter"); +assertKind(elements_kind.fast, obj); +obj = fastliteralcase_smifast(2); +assertKind(elements_kind.fast, obj); + +// Case: make sure transitions from packed to holey are tracked +function fastliteralcase_smiholey(index, value) { + var literal = [1, 2, 3, 4]; + literal[index] = value; + return literal; +} - obj = fastliteralcase_smifast(1); - assertKind(elements_kind.fast_smi_only, obj); - obj = fastliteralcase_smifast("carter"); - assertKind(elements_kind.fast, obj); - obj = fastliteralcase_smifast(2); - assertKind(elements_kind.fast, obj); +obj = fastliteralcase_smiholey(5, 1); +assertKind(elements_kind.fast_smi_only, obj); +assertHoley(obj); +obj = fastliteralcase_smiholey(0, 1); +assertKind(elements_kind.fast_smi_only, obj); +assertHoley(obj); + +function newarraycase_smidouble(value) { + var a = new Array(); + a[0] = value; + return a; +} - // Case: make sure transitions from packed to holey are tracked - function fastliteralcase_smiholey(index, value) { - var literal = [1, 2, 3, 4]; - literal[index] = value; - return literal; - } +// Case: new Array() as allocation site, smi->double +obj = newarraycase_smidouble(1); +assertKind(elements_kind.fast_smi_only, obj); +obj = newarraycase_smidouble(1.5); +assertKind(elements_kind.fast_double, obj); +obj = newarraycase_smidouble(2); +assertKind(elements_kind.fast_double, obj); + +function newarraycase_smiobj(value) { + var a = new Array(); + a[0] = value; + return a; +} - obj = fastliteralcase_smiholey(5, 1); - assertKind(elements_kind.fast_smi_only, obj); - assertHoley(obj); - obj = 
fastliteralcase_smiholey(0, 1); - assertKind(elements_kind.fast_smi_only, obj); - assertHoley(obj); +// Case: new Array() as allocation site, smi->fast +obj = newarraycase_smiobj(1); +assertKind(elements_kind.fast_smi_only, obj); +obj = newarraycase_smiobj("gloria"); +assertKind(elements_kind.fast, obj); +obj = newarraycase_smiobj(2); +assertKind(elements_kind.fast, obj); + +function newarraycase_length_smidouble(value) { + var a = new Array(3); + a[0] = value; + return a; +} - function newarraycase_smidouble(value) { - var a = new Array(); - a[0] = value; - return a; - } +// Case: new Array(length) as allocation site +obj = newarraycase_length_smidouble(1); +assertKind(elements_kind.fast_smi_only, obj); +obj = newarraycase_length_smidouble(1.5); +assertKind(elements_kind.fast_double, obj); +obj = newarraycase_length_smidouble(2); +assertKind(elements_kind.fast_double, obj); + +// Try to continue the transition to fast object. +// TODO(mvstanton): re-enable commented out code when +// FLAG_pretenuring_call_new is turned on in the build. +obj = newarraycase_length_smidouble("coates"); +assertKind(elements_kind.fast, obj); +obj = newarraycase_length_smidouble(2); +// assertKind(elements_kind.fast, obj); + +function newarraycase_length_smiobj(value) { + var a = new Array(3); + a[0] = value; + return a; +} - // Case: new Array() as allocation site, smi->double - obj = newarraycase_smidouble(1); - assertKind(elements_kind.fast_smi_only, obj); - obj = newarraycase_smidouble(1.5); - assertKind(elements_kind.fast_double, obj); - obj = newarraycase_smidouble(2); - assertKind(elements_kind.fast_double, obj); +// Case: new Array(<length>) as allocation site, smi->fast +obj = newarraycase_length_smiobj(1); +assertKind(elements_kind.fast_smi_only, obj); +obj = newarraycase_length_smiobj("gloria"); +assertKind(elements_kind.fast, obj); +obj = newarraycase_length_smiobj(2); +assertKind(elements_kind.fast, obj); + +function newarraycase_list_smidouble(value) { + var a = new Array(1, 2, 3); + a[0] = value; + return a; +} + +obj = newarraycase_list_smidouble(1); +assertKind(elements_kind.fast_smi_only, obj); +obj = newarraycase_list_smidouble(1.5); +assertKind(elements_kind.fast_double, obj); +obj = newarraycase_list_smidouble(2); +assertKind(elements_kind.fast_double, obj); + +function newarraycase_list_smiobj(value) { + var a = new Array(4, 5, 6); + a[0] = value; + return a; +} - function newarraycase_smiobj(value) { - var a = new Array(); - a[0] = value; +obj = newarraycase_list_smiobj(1); +assertKind(elements_kind.fast_smi_only, obj); +obj = newarraycase_list_smiobj("coates"); +assertKind(elements_kind.fast, obj); +obj = newarraycase_list_smiobj(2); +assertKind(elements_kind.fast, obj); + +// Case: array constructor calls with out of date feedback. +// The boilerplate should incorporate all feedback, but the input array +// should be minimally transitioned based on immediate need. +(function() { + function foo(i) { + // We have two cases, one for literals one for constructed arrays. + var a = (i == 0) + ? 
[1, 2, 3] + : new Array(1, 2, 3); return a; } - // Case: new Array() as allocation site, smi->fast - obj = newarraycase_smiobj(1); - assertKind(elements_kind.fast_smi_only, obj); - obj = newarraycase_smiobj("gloria"); - assertKind(elements_kind.fast, obj); - obj = newarraycase_smiobj(2); - assertKind(elements_kind.fast, obj); - - function newarraycase_length_smidouble(value) { - var a = new Array(3); - a[0] = value; - return a; + for (i = 0; i < 2; i++) { + a = foo(i); + b = foo(i); + b[5] = 1; // boilerplate goes holey + assertHoley(foo(i)); + a[0] = 3.5; // boilerplate goes holey double + assertKind(elements_kind.fast_double, a); + assertNotHoley(a); + c = foo(i); + assertKind(elements_kind.fast_double, c); + assertHoley(c); } +})(); - // Case: new Array(length) as allocation site - obj = newarraycase_length_smidouble(1); - assertKind(elements_kind.fast_smi_only, obj); - obj = newarraycase_length_smidouble(1.5); - assertKind(elements_kind.fast_double, obj); - obj = newarraycase_length_smidouble(2); - assertKind(elements_kind.fast_double, obj); +function newarraycase_onearg(len, value) { + var a = new Array(len); + a[0] = value; + return a; +} - // Try to continue the transition to fast object. - // TODO(mvstanton): re-enable commented out code when - // FLAG_pretenuring_call_new is turned on in the build. - obj = newarraycase_length_smidouble("coates"); - assertKind(elements_kind.fast, obj); - obj = newarraycase_length_smidouble(2); - // assertKind(elements_kind.fast, obj); +obj = newarraycase_onearg(5, 3.5); +assertKind(elements_kind.fast_double, obj); +obj = newarraycase_onearg(10, 5); +assertKind(elements_kind.fast_double, obj); +obj = newarraycase_onearg(0, 5); +assertKind(elements_kind.fast_double, obj); + +// Verify that cross context calls work +var realmA = Realm.current(); +var realmB = Realm.create(); +assertEquals(0, realmA); +assertEquals(1, realmB); + +function instanceof_check(type) { + assertTrue(new type() instanceof type); + assertTrue(new type(5) instanceof type); + assertTrue(new type(1,2,3) instanceof type); +} - function newarraycase_length_smiobj(value) { - var a = new Array(3); - a[0] = value; - return a; - } +function instanceof_check2(type) { + assertTrue(new type() instanceof type); + assertTrue(new type(5) instanceof type); + assertTrue(new type(1,2,3) instanceof type); +} - // Case: new Array(<length>) as allocation site, smi->fast - obj = newarraycase_length_smiobj(1); - assertKind(elements_kind.fast_smi_only, obj); - obj = newarraycase_length_smiobj("gloria"); - assertKind(elements_kind.fast, obj); - obj = newarraycase_length_smiobj(2); - assertKind(elements_kind.fast, obj); +var realmBArray = Realm.eval(realmB, "Array"); +instanceof_check(Array); +instanceof_check(realmBArray); - function newarraycase_list_smidouble(value) { - var a = new Array(1, 2, 3); - a[0] = value; - return a; - } +// instanceof_check2 is here because the call site goes through a state. +// Since instanceof_check(Array) was first called with the current context +// Array function, it went from (uninit->Array) then (Array->megamorphic). +// We'll get a different state traversal if we start with realmBArray. +// It'll go (uninit->realmBArray) then (realmBArray->megamorphic). Recognize +// that state "Array" implies an AllocationSite is present, and code is +// configured to use it. 
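To see why the call site goes megamorphic at all: every realm carries its own Array function, so realmBArray is a second, distinct constructor identity even though it behaves identically. A sketch using the Realm API already exercised in this file:

var realm = Realm.create();
var OtherArray = Realm.eval(realm, "Array");
assertFalse(OtherArray === Array);      // distinct per-realm identities
// Once a site has seen two different targets it can no longer stay
// monomorphic, so AllocationSite-based kind feedback is dropped there.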
+instanceof_check2(realmBArray); +instanceof_check2(Array); - obj = newarraycase_list_smidouble(1); - assertKind(elements_kind.fast_smi_only, obj); - obj = newarraycase_list_smidouble(1.5); - assertKind(elements_kind.fast_double, obj); - obj = newarraycase_list_smidouble(2); - assertKind(elements_kind.fast_double, obj); + %OptimizeFunctionOnNextCall(instanceof_check); - function newarraycase_list_smiobj(value) { - var a = new Array(4, 5, 6); - a[0] = value; - return a; +// No de-opt will occur because HCallNewArray wasn't selected, on account of +// the call site not being monomorphic to Array. +instanceof_check(Array); +assertOptimized(instanceof_check); +instanceof_check(realmBArray); +assertOptimized(instanceof_check); + +// Try to optimize again, but first clear all type feedback, and allow it +// to be monomorphic on first call. Only after crankshafting do we introduce +// realmBArray. This should deopt the method. + %DeoptimizeFunction(instanceof_check); + %ClearFunctionTypeFeedback(instanceof_check); +instanceof_check(Array); +instanceof_check(Array); + %OptimizeFunctionOnNextCall(instanceof_check); +instanceof_check(Array); +assertOptimized(instanceof_check); + +instanceof_check(realmBArray); +assertUnoptimized(instanceof_check); + +// Case: make sure nested arrays benefit from allocation site feedback as +// well. +(function() { + // Make sure we handle nested arrays + function get_nested_literal() { + var literal = [[1,2,3,4], [2], [3]]; + return literal; } - obj = newarraycase_list_smiobj(1); - assertKind(elements_kind.fast_smi_only, obj); - obj = newarraycase_list_smiobj("coates"); - assertKind(elements_kind.fast, obj); - obj = newarraycase_list_smiobj(2); + obj = get_nested_literal(); assertKind(elements_kind.fast, obj); - - // Case: array constructor calls with out of date feedback. - // The boilerplate should incorporate all feedback, but the input array - // should be minimally transitioned based on immediate need. - (function() { - function foo(i) { - // We have two cases, one for literals one for constructed arrays. - var a = (i == 0) - ? [1, 2, 3] - : new Array(1, 2, 3); - return a; - } - - for (i = 0; i < 2; i++) { - a = foo(i); - b = foo(i); - b[5] = 1; // boilerplate goes holey - assertHoley(foo(i)); - a[0] = 3.5; // boilerplate goes holey double - assertKind(elements_kind.fast_double, a); - assertNotHoley(a); - c = foo(i); - assertKind(elements_kind.fast_double, c); - assertHoley(c); - } - })(); - - function newarraycase_onearg(len, value) { - var a = new Array(len); - a[0] = value; - return a; + obj[0][0] = 3.5; + obj[2][0] = "hello"; + obj = get_nested_literal(); + assertKind(elements_kind.fast_double, obj[0]); + assertKind(elements_kind.fast_smi_only, obj[1]); + assertKind(elements_kind.fast, obj[2]); + + // A more complex nested literal case. + function get_deep_nested_literal() { + var literal = [[1], [[2], "hello"], 3, [4]]; + return literal; } - obj = newarraycase_onearg(5, 3.5); - assertKind(elements_kind.fast_double, obj); - obj = newarraycase_onearg(10, 5); - assertKind(elements_kind.fast_double, obj); - obj = newarraycase_onearg(0, 5); - assertKind(elements_kind.fast_double, obj); - // Now pass a length that forces the dictionary path. 
- obj = newarraycase_onearg(100000, 5); - assertKind(elements_kind.dictionary, obj); - assertTrue(obj.length == 100000); - - // Verify that cross context calls work - var realmA = Realm.current(); - var realmB = Realm.create(); - assertEquals(0, realmA); - assertEquals(1, realmB); - - function instanceof_check(type) { - assertTrue(new type() instanceof type); - assertTrue(new type(5) instanceof type); - assertTrue(new type(1,2,3) instanceof type); + obj = get_deep_nested_literal(); + assertKind(elements_kind.fast_smi_only, obj[1][0]); + obj[0][0] = 3.5; + obj[1][0][0] = "goodbye"; + assertKind(elements_kind.fast_double, obj[0]); + assertKind(elements_kind.fast, obj[1][0]); + + obj = get_deep_nested_literal(); + assertKind(elements_kind.fast_double, obj[0]); + assertKind(elements_kind.fast, obj[1][0]); +})(); + +// Perform a gc because without it the test below can experience an +// allocation failure at an inconvenient point. Allocation mementos get +// cleared on gc, and they can't deliver elements kind feedback when that +// happens. +gc(); + +// Make sure object literals with array fields benefit from the type feedback +// that allocation mementos provide. +(function() { + // A literal in an object + function get_object_literal() { + var literal = { + array: [1,2,3], + data: 3.5 + }; + return literal; } - function instanceof_check2(type) { - assertTrue(new type() instanceof type); - assertTrue(new type(5) instanceof type); - assertTrue(new type(1,2,3) instanceof type); + obj = get_object_literal(); + assertKind(elements_kind.fast_smi_only, obj.array); + obj.array[1] = 3.5; + assertKind(elements_kind.fast_double, obj.array); + obj = get_object_literal(); + assertKind(elements_kind.fast_double, obj.array); + + function get_nested_object_literal() { + var literal = { + array: [[1],[2],[3]], + data: 3.5 + }; + return literal; } - var realmBArray = Realm.eval(realmB, "Array"); - instanceof_check(Array); - instanceof_check(realmBArray); - - // instanceof_check2 is here because the call site goes through a state. - // Since instanceof_check(Array) was first called with the current context - // Array function, it went from (uninit->Array) then (Array->megamorphic). - // We'll get a different state traversal if we start with realmBArray. - // It'll go (uninit->realmBArray) then (realmBArray->megamorphic). Recognize - // that state "Array" implies an AllocationSite is present, and code is - // configured to use it. - instanceof_check2(realmBArray); - instanceof_check2(Array); + obj = get_nested_object_literal(); + assertKind(elements_kind.fast, obj.array); + assertKind(elements_kind.fast_smi_only, obj.array[1]); + obj.array[1][0] = 3.5; + assertKind(elements_kind.fast_double, obj.array[1]); + obj = get_nested_object_literal(); + assertKind(elements_kind.fast_double, obj.array[1]); - %OptimizeFunctionOnNextCall(instanceof_check); + %OptimizeFunctionOnNextCall(get_nested_object_literal); + get_nested_object_literal(); + obj = get_nested_object_literal(); + assertKind(elements_kind.fast_double, obj.array[1]); - // No de-opt will occur because HCallNewArray wasn't selected, on account of - // the call site not being monomorphic to Array. 
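A note on the gc() call introduced in this hunk: elements-kind feedback travels from a freshly allocated array back to its AllocationSite via an allocation memento placed behind the object in new space, and any GC clears those mementos. Running gc() up front makes it less likely that a scavenge lands mid-test and silently discards feedback. The pattern, sketched (getLiteral is an illustrative name):

gc();                                   // start from a clean new space
function getLiteral() { return [1, 2, 3]; }
var o = getLiteral();
o[1] = 3.5;                             // memento behind o records the change
var p = getLiteral();
assertKind(elements_kind.fast_double, p);  // site delivered the feedback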
- instanceof_check(Array); - assertOptimized(instanceof_check); - instanceof_check(realmBArray); - assertOptimized(instanceof_check); + // Make sure we handle nested arrays + function get_nested_literal() { + var literal = [[1,2,3,4], [2], [3]]; + return literal; + } - // Try to optimize again, but first clear all type feedback, and allow it - // to be monomorphic on first call. Only after crankshafting do we introduce - // realmBArray. This should deopt the method. - %DeoptimizeFunction(instanceof_check); - %ClearFunctionTypeFeedback(instanceof_check); - instanceof_check(Array); - instanceof_check(Array); - %OptimizeFunctionOnNextCall(instanceof_check); - instanceof_check(Array); - assertOptimized(instanceof_check); - - instanceof_check(realmBArray); - assertUnoptimized(instanceof_check); - - // Case: make sure nested arrays benefit from allocation site feedback as - // well. - (function() { - // Make sure we handle nested arrays - function get_nested_literal() { - var literal = [[1,2,3,4], [2], [3]]; - return literal; - } - - obj = get_nested_literal(); - assertKind(elements_kind.fast, obj); - obj[0][0] = 3.5; - obj[2][0] = "hello"; - obj = get_nested_literal(); - assertKind(elements_kind.fast_double, obj[0]); - assertKind(elements_kind.fast_smi_only, obj[1]); - assertKind(elements_kind.fast, obj[2]); - - // A more complex nested literal case. - function get_deep_nested_literal() { - var literal = [[1], [[2], "hello"], 3, [4]]; - return literal; - } - - obj = get_deep_nested_literal(); - assertKind(elements_kind.fast_smi_only, obj[1][0]); - obj[0][0] = 3.5; - obj[1][0][0] = "goodbye"; - assertKind(elements_kind.fast_double, obj[0]); - assertKind(elements_kind.fast, obj[1][0]); - - obj = get_deep_nested_literal(); - assertKind(elements_kind.fast_double, obj[0]); - assertKind(elements_kind.fast, obj[1][0]); - })(); - - - // Make sure object literals with array fields benefit from the type feedback - // that allocation mementos provide. - (function() { - // A literal in an object - function get_object_literal() { - var literal = { - array: [1,2,3], - data: 3.5 - }; - return literal; - } - - obj = get_object_literal(); - assertKind(elements_kind.fast_smi_only, obj.array); - obj.array[1] = 3.5; - assertKind(elements_kind.fast_double, obj.array); - obj = get_object_literal(); - assertKind(elements_kind.fast_double, obj.array); - - function get_nested_object_literal() { - var literal = { - array: [[1],[2],[3]], - data: 3.5 - }; - return literal; - } - - obj = get_nested_object_literal(); - assertKind(elements_kind.fast, obj.array); - assertKind(elements_kind.fast_smi_only, obj.array[1]); - obj.array[1][0] = 3.5; - assertKind(elements_kind.fast_double, obj.array[1]); - obj = get_nested_object_literal(); - assertKind(elements_kind.fast_double, obj.array[1]); + obj = get_nested_literal(); + assertKind(elements_kind.fast, obj); + obj[0][0] = 3.5; + obj[2][0] = "hello"; + obj = get_nested_literal(); + assertKind(elements_kind.fast_double, obj[0]); + assertKind(elements_kind.fast_smi_only, obj[1]); + assertKind(elements_kind.fast, obj[2]); + + // A more complex nested literal case. 
+ function get_deep_nested_literal() { + var literal = [[1], [[2], "hello"], 3, [4]]; + return literal; + } - %OptimizeFunctionOnNextCall(get_nested_object_literal); - get_nested_object_literal(); - obj = get_nested_object_literal(); - assertKind(elements_kind.fast_double, obj.array[1]); - - // Make sure we handle nested arrays - function get_nested_literal() { - var literal = [[1,2,3,4], [2], [3]]; - return literal; - } - - obj = get_nested_literal(); - assertKind(elements_kind.fast, obj); - obj[0][0] = 3.5; - obj[2][0] = "hello"; - obj = get_nested_literal(); - assertKind(elements_kind.fast_double, obj[0]); - assertKind(elements_kind.fast_smi_only, obj[1]); - assertKind(elements_kind.fast, obj[2]); - - // A more complex nested literal case. - function get_deep_nested_literal() { - var literal = [[1], [[2], "hello"], 3, [4]]; - return literal; - } - - obj = get_deep_nested_literal(); - assertKind(elements_kind.fast_smi_only, obj[1][0]); - obj[0][0] = 3.5; - obj[1][0][0] = "goodbye"; - assertKind(elements_kind.fast_double, obj[0]); - assertKind(elements_kind.fast, obj[1][0]); - - obj = get_deep_nested_literal(); - assertKind(elements_kind.fast_double, obj[0]); - assertKind(elements_kind.fast, obj[1][0]); - })(); -} + obj = get_deep_nested_literal(); + assertKind(elements_kind.fast_smi_only, obj[1][0]); + obj[0][0] = 3.5; + obj[1][0][0] = "goodbye"; + assertKind(elements_kind.fast_double, obj[0]); + assertKind(elements_kind.fast, obj[1][0]); + + obj = get_deep_nested_literal(); + assertKind(elements_kind.fast_double, obj[0]); + assertKind(elements_kind.fast, obj[1][0]); +})(); diff --git a/deps/v8/test/mjsunit/apply.js b/deps/v8/test/mjsunit/apply.js index 413ee937c66..abbc9a11b4a 100644 --- a/deps/v8/test/mjsunit/apply.js +++ b/deps/v8/test/mjsunit/apply.js @@ -25,6 +25,8 @@ // (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE // OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. +// Flags: --allow-natives-syntax + function f0() { return this; } @@ -114,7 +116,8 @@ function al() { for (var j = 1; j < 0x40000000; j <<= 1) { try { - var a = new Array(j); + var a = %NormalizeElements([]); + a.length = j; a[j - 1] = 42; assertEquals(42 + j, al.apply(345, a)); } catch (e) { @@ -122,7 +125,8 @@ for (var j = 1; j < 0x40000000; j <<= 1) { for (; j < 0x40000000; j <<= 1) { var caught = false; try { - a = new Array(j); + a = %NormalizeElements([]); + a.length = j; a[j - 1] = 42; al.apply(345, a); assertUnreachable("Apply of array with length " + a.length + diff --git a/deps/v8/test/mjsunit/array-construct-transition.js b/deps/v8/test/mjsunit/array-construct-transition.js index f8d7c830e5f..3847f9405a0 100644 --- a/deps/v8/test/mjsunit/array-construct-transition.js +++ b/deps/v8/test/mjsunit/array-construct-transition.js @@ -25,15 +25,11 @@ // (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE // OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. 
-// Flags: --allow-natives-syntax --smi-only-arrays +// Flags: --allow-natives-syntax -support_smi_only_arrays = %HasFastSmiElements(new Array(1,2,3,4,5,6)); - -if (support_smi_only_arrays) { - var a = new Array(0, 1, 2); - assertTrue(%HasFastSmiElements(a)); - var b = new Array(0.5, 1.2, 2.3); - assertTrue(%HasFastDoubleElements(b)); - var c = new Array(0.5, 1.2, new Object()); - assertTrue(%HasFastObjectElements(c)); -} +var a = new Array(0, 1, 2); +assertTrue(%HasFastSmiElements(a)); +var b = new Array(0.5, 1.2, 2.3); +assertTrue(%HasFastDoubleElements(b)); +var c = new Array(0.5, 1.2, new Object()); +assertTrue(%HasFastObjectElements(c)); diff --git a/deps/v8/test/mjsunit/array-constructor-feedback.js b/deps/v8/test/mjsunit/array-constructor-feedback.js index 45d5c58c772..c2c1a1842f7 100644 --- a/deps/v8/test/mjsunit/array-constructor-feedback.js +++ b/deps/v8/test/mjsunit/array-constructor-feedback.js @@ -25,24 +25,10 @@ // (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE // OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. -// Flags: --allow-natives-syntax --smi-only-arrays --expose-gc +// Flags: --allow-natives-syntax --expose-gc // Flags: --noalways-opt // Test element kind of objects. -// Since --smi-only-arrays affects builtins, its default setting at compile -// time sticks if built with snapshot. If --smi-only-arrays is deactivated -// by default, only a no-snapshot build actually has smi-only arrays enabled -// in this test case. Depending on whether smi-only arrays are actually -// enabled, this test takes the appropriate code path to check smi-only arrays. - -// support_smi_only_arrays = %HasFastSmiElements(new Array(1,2,3,4,5,6,7,8)); -support_smi_only_arrays = true; - -if (support_smi_only_arrays) { - print("Tests include smi-only arrays."); -} else { - print("Tests do NOT include smi-only arrays."); -} var elements_kind = { fast_smi_only : 'fast smi only elements', @@ -73,183 +59,161 @@ function isHoley(obj) { } function assertKind(expected, obj, name_opt) { - if (!support_smi_only_arrays && - expected == elements_kind.fast_smi_only) { - expected = elements_kind.fast; - } assertEquals(expected, getKind(obj), name_opt); } -if (support_smi_only_arrays) { - - // Test: If a call site goes megamorphic, it retains the ability to - // use allocation site feedback (if FLAG_allocation_site_pretenuring - // is on). - (function() { - function bar(t, len) { - return new t(len); - } - - a = bar(Array, 10); - a[0] = 3.5; - b = bar(Array, 1); - assertKind(elements_kind.fast_double, b); - c = bar(Object, 3); - b = bar(Array, 10); - // TODO(mvstanton): re-enable when FLAG_allocation_site_pretenuring - // is on in the build. - // assertKind(elements_kind.fast_double, b); - })(); - - - // Test: ensure that crankshafted array constructor sites are deopted - // if another function is used. - (function() { - function bar0(t) { - return new t(); - } - a = bar0(Array); - a[0] = 3.5; - b = bar0(Array); - assertKind(elements_kind.fast_double, b); +// Test: If a call site goes megamorphic, it retains the ability to +// use allocation site feedback (if FLAG_allocation_site_pretenuring +// is on). +(function() { + function bar(t, len) { + return new t(len); + } + + a = bar(Array, 10); + a[0] = 3.5; + b = bar(Array, 1); + assertKind(elements_kind.fast_double, b); + c = bar(Object, 3); + b = bar(Array, 10); + // TODO(mvstanton): re-enable when FLAG_allocation_site_pretenuring + // is on in the build. 
+ // assertKind(elements_kind.fast_double, b); +})(); + + +// Test: ensure that crankshafted array constructor sites are deopted +// if another function is used. +(function() { + function bar0(t) { + return new t(); + } + a = bar0(Array); + a[0] = 3.5; + b = bar0(Array); + assertKind(elements_kind.fast_double, b); %OptimizeFunctionOnNextCall(bar0); - b = bar0(Array); - assertKind(elements_kind.fast_double, b); - assertOptimized(bar0); - // bar0 should deopt - b = bar0(Object); - assertUnoptimized(bar0) - // When it's re-optimized, we should call through the full stub - bar0(Array); + b = bar0(Array); + assertKind(elements_kind.fast_double, b); + assertOptimized(bar0); + // bar0 should deopt + b = bar0(Object); + assertUnoptimized(bar0) + // When it's re-optimized, we should call through the full stub + bar0(Array); %OptimizeFunctionOnNextCall(bar0); - b = bar0(Array); - // This only makes sense to test if we allow crankshafting - if (4 != %GetOptimizationStatus(bar0)) { - // We also lost our ability to record kind feedback, as the site - // is megamorphic now. - assertKind(elements_kind.fast_smi_only, b); - assertOptimized(bar0); - b[0] = 3.5; - c = bar0(Array); - assertKind(elements_kind.fast_smi_only, c); - } - })(); - - - // Test: Ensure that inlined array calls in crankshaft learn from deopts - // based on the move to a dictionary for the array. - (function() { - function bar(len) { - return new Array(len); - } - a = bar(10); - a[0] = "a string"; - a = bar(10); - assertKind(elements_kind.fast, a); - %OptimizeFunctionOnNextCall(bar); - a = bar(10); - assertKind(elements_kind.fast, a); - assertOptimized(bar); - // bar should deopt because the length is too large. - a = bar(100000); - assertUnoptimized(bar); - assertKind(elements_kind.dictionary, a); - // The allocation site now has feedback that means the array constructor - // will not be inlined. - %OptimizeFunctionOnNextCall(bar); - a = bar(100000); - assertKind(elements_kind.dictionary, a); - assertOptimized(bar); + b = bar0(Array); + // This only makes sense to test if we allow crankshafting + if (4 != %GetOptimizationStatus(bar0)) { + // We also lost our ability to record kind feedback, as the site + // is megamorphic now. + assertKind(elements_kind.fast_smi_only, b); + assertOptimized(bar0); + b[0] = 3.5; + c = bar0(Array); + assertKind(elements_kind.fast_smi_only, c); + } +})(); - // If the argument isn't a smi, it bails out as well - a = bar("oops"); - assertOptimized(bar); - assertKind(elements_kind.fast, a); - function barn(one, two, three) { - return new Array(one, two, three); - } +// Test: Ensure that inlined array calls in crankshaft learn from deopts +// based on the move to a dictionary for the array. +(function() { + function bar(len) { + return new Array(len); + } + a = bar(10); + a[0] = "a string"; + a = bar(10); + assertKind(elements_kind.fast, a); + %OptimizeFunctionOnNextCall(bar); + a = bar(10); + assertKind(elements_kind.fast, a); + assertOptimized(bar); + bar(100000); + assertOptimized(bar); + + // If the argument isn't a smi, things should still work. + a = bar("oops"); + assertOptimized(bar); + assertKind(elements_kind.fast, a); + + function barn(one, two, three) { + return new Array(one, two, three); + } - barn(1, 2, 3); - barn(1, 2, 3); - %OptimizeFunctionOnNextCall(barn); - barn(1, 2, 3); - assertOptimized(barn); - a = barn(1, "oops", 3); - // The method should deopt, but learn from the failure to avoid inlining - // the array. 
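The `4 != %GetOptimizationStatus(...)` guards in these tests read on mjsunit's status codes of this era: 1 means currently optimized, 2 not optimized, 3 always optimized, and 4 never optimized (for example under --nocrankshaft), so 4 marks configurations where optimization assertions are meaningless. A hedged sketch, since the codes are internal and changed in later V8 versions:

function f() { return new Array(); }   // illustrative function
f(); f();                               // warm up type feedback
%OptimizeFunctionOnNextCall(f);
f();
if (4 != %GetOptimizationStatus(f)) {
  assertOptimized(f);                   // only meaningful if crankshaft runs
}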
- assertKind(elements_kind.fast, a); - assertUnoptimized(barn); + barn(1, 2, 3); + barn(1, 2, 3); %OptimizeFunctionOnNextCall(barn); - a = barn(1, "oops", 3); - assertOptimized(barn); - })(); - - - // Test: When a method with array constructor is crankshafted, the type - // feedback for elements kind is baked in. Verify that transitions don't - // change it anymore - (function() { - function bar() { - return new Array(); - } - a = bar(); - bar(); + barn(1, 2, 3); + assertOptimized(barn); + a = barn(1, "oops", 3); + assertOptimized(barn); +})(); + + +// Test: When a method with array constructor is crankshafted, the type +// feedback for elements kind is baked in. Verify that transitions don't +// change it anymore +(function() { + function bar() { + return new Array(); + } + a = bar(); + bar(); %OptimizeFunctionOnNextCall(bar); - b = bar(); - // This only makes sense to test if we allow crankshafting - if (4 != %GetOptimizationStatus(bar)) { - assertOptimized(bar); + b = bar(); + // This only makes sense to test if we allow crankshafting + if (4 != %GetOptimizationStatus(bar)) { + assertOptimized(bar); %DebugPrint(3); - b[0] = 3.5; - c = bar(); - assertKind(elements_kind.fast_smi_only, c); - assertOptimized(bar); - } - })(); - - - // Test: create arrays in two contexts, verifying that the correct - // map for Array in that context will be used. - (function() { - function bar() { return new Array(); } - bar(); - bar(); + b[0] = 3.5; + c = bar(); + assertKind(elements_kind.fast_smi_only, c); + assertOptimized(bar); + } +})(); + + +// Test: create arrays in two contexts, verifying that the correct +// map for Array in that context will be used. +(function() { + function bar() { return new Array(); } + bar(); + bar(); %OptimizeFunctionOnNextCall(bar); - a = bar(); - assertTrue(a instanceof Array); - - var contextB = Realm.create(); - Realm.eval(contextB, "function bar2() { return new Array(); };"); - Realm.eval(contextB, "bar2(); bar2();"); - Realm.eval(contextB, "%OptimizeFunctionOnNextCall(bar2);"); - Realm.eval(contextB, "bar2();"); - assertFalse(Realm.eval(contextB, "bar2();") instanceof Array); - assertTrue(Realm.eval(contextB, "bar2() instanceof Array")); - })(); - - // Test: create array with packed feedback, then optimize/inline - // function. Verify that if we ask for a holey array then we deopt. - // Reoptimization will proceed with the correct feedback and we - // won't deopt anymore. - (function() { - function bar(len) { return new Array(len); } - bar(0); - bar(0); + a = bar(); + assertTrue(a instanceof Array); + + var contextB = Realm.create(); + Realm.eval(contextB, "function bar2() { return new Array(); };"); + Realm.eval(contextB, "bar2(); bar2();"); + Realm.eval(contextB, "%OptimizeFunctionOnNextCall(bar2);"); + Realm.eval(contextB, "bar2();"); + assertFalse(Realm.eval(contextB, "bar2();") instanceof Array); + assertTrue(Realm.eval(contextB, "bar2() instanceof Array")); +})(); + +// Test: create array with packed feedback, then optimize function, which +// should deal with arguments that create holey arrays. +(function() { + function bar(len) { return new Array(len); } + bar(0); + bar(0); %OptimizeFunctionOnNextCall(bar); - a = bar(0); - assertOptimized(bar); + a = bar(0); + assertOptimized(bar); + assertFalse(isHoley(a)); + a = bar(1); // ouch! 
+ assertOptimized(bar); + assertTrue(isHoley(a)); + a = bar(100); + assertTrue(isHoley(a)); + a = bar(0); + assertOptimized(bar); + // Crankshafted functions don't use mementos, so feedback still + // indicates a packed array is desired. (unless --nocrankshaft is in use). + if (4 != %GetOptimizationStatus(bar)) { assertFalse(isHoley(a)); - a = bar(1); // ouch! - assertUnoptimized(bar); - assertTrue(isHoley(a)); - // Try again - %OptimizeFunctionOnNextCall(bar); - a = bar(100); - assertOptimized(bar); - assertTrue(isHoley(a)); - a = bar(0); - assertOptimized(bar); - assertTrue(isHoley(a)); - })(); -} + } +})(); diff --git a/deps/v8/test/mjsunit/array-feedback.js b/deps/v8/test/mjsunit/array-feedback.js index 4129be1f880..ffbb49b19af 100644 --- a/deps/v8/test/mjsunit/array-feedback.js +++ b/deps/v8/test/mjsunit/array-feedback.js @@ -25,25 +25,9 @@ // (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE // OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. -// Flags: --allow-natives-syntax --smi-only-arrays --expose-gc +// Flags: --allow-natives-syntax --expose-gc // Flags: --noalways-opt -// Test element kind of objects. -// Since --smi-only-arrays affects builtins, its default setting at compile -// time sticks if built with snapshot. If --smi-only-arrays is deactivated -// by default, only a no-snapshot build actually has smi-only arrays enabled -// in this test case. Depending on whether smi-only arrays are actually -// enabled, this test takes the appropriate code path to check smi-only arrays. - -// support_smi_only_arrays = %HasFastSmiElements(new Array(1,2,3,4,5,6,7,8)); -support_smi_only_arrays = true; - -if (support_smi_only_arrays) { - print("Tests include smi-only arrays."); -} else { - print("Tests do NOT include smi-only arrays."); -} - var elements_kind = { fast_smi_only : 'fast smi only elements', fast : 'fast elements', @@ -73,144 +57,153 @@ function isHoley(obj) { } function assertKind(expected, obj, name_opt) { - if (!support_smi_only_arrays && - expected == elements_kind.fast_smi_only) { - expected = elements_kind.fast; - } assertEquals(expected, getKind(obj), name_opt); } -if (support_smi_only_arrays) { - - // Verify that basic elements kind feedback works for non-constructor - // array calls (as long as the call is made through an IC, and not - // a CallStub). - // (function (){ - // function create0() { - // return Array(); - // } - - // // Calls through ICs need warm up through uninitialized, then - // // premonomorphic first. 
- // create0(); - // create0(); - // a = create0(); - // assertKind(elements_kind.fast_smi_only, a); - // a[0] = 3.5; - // b = create0(); - // assertKind(elements_kind.fast_double, b); - - // function create1(arg) { - // return Array(arg); - // } - - // create1(0); - // create1(0); - // a = create1(0); - // assertFalse(isHoley(a)); - // assertKind(elements_kind.fast_smi_only, a); - // a[0] = "hello"; - // b = create1(10); - // assertTrue(isHoley(b)); - // assertKind(elements_kind.fast, b); - - // a = create1(100000); - // assertKind(elements_kind.dictionary, a); - - // function create3(arg1, arg2, arg3) { - // return Array(arg1, arg2, arg3); - // } - - // create3(); - // create3(); - // a = create3(1,2,3); - // a[0] = 3.5; - // b = create3(1,2,3); - // assertKind(elements_kind.fast_double, b); - // assertFalse(isHoley(b)); - // })(); - - - // Verify that keyed calls work - // (function (){ - // function create0(name) { - // return this[name](); - // } - - // name = "Array"; - // create0(name); - // create0(name); - // a = create0(name); - // a[0] = 3.5; - // b = create0(name); - // assertKind(elements_kind.fast_double, b); - // })(); - - - // Verify that the IC can't be spoofed by patching - (function (){ - function create0() { - return Array(); - } - - create0(); - create0(); - a = create0(); - assertKind(elements_kind.fast_smi_only, a); - var oldArray = this.Array; - this.Array = function() { return ["hi"]; }; - b = create0(); - assertEquals(["hi"], b); - this.Array = oldArray; - })(); - - // Verify that calls are still made through an IC after crankshaft, - // though the type information is reset. - // TODO(mvstanton): instead, consume the type feedback gathered up - // until crankshaft time. - // (function (){ - // function create0() { - // return Array(); - // } - - // create0(); - // create0(); - // a = create0(); - // a[0] = 3.5; - // %OptimizeFunctionOnNextCall(create0); - // create0(); - // // This test only makes sense if crankshaft is allowed - // if (4 != %GetOptimizationStatus(create0)) { - // create0(); - // b = create0(); - // assertKind(elements_kind.fast_smi_only, b); - // b[0] = 3.5; - // c = create0(); - // assertKind(elements_kind.fast_double, c); - // assertOptimized(create0); - // } - // })(); - - - // Verify that cross context calls work - (function (){ - var realmA = Realm.current(); - var realmB = Realm.create(); - assertEquals(0, realmA); - assertEquals(1, realmB); - - function instanceof_check(type) { - assertTrue(type() instanceof type); - assertTrue(type(5) instanceof type); - assertTrue(type(1,2,3) instanceof type); - } - - var realmBArray = Realm.eval(realmB, "Array"); - instanceof_check(Array); - instanceof_check(Array); - instanceof_check(Array); - instanceof_check(realmBArray); - instanceof_check(realmBArray); - instanceof_check(realmBArray); - })(); -} +// Verify that basic elements kind feedback works for non-constructor +// array calls (as long as the call is made through an IC, and not +// a CallStub). +(function (){ + function create0() { + return Array(); + } + + // Calls through ICs need warm up through uninitialized, then + // premonomorphic first. 
+ create0(); + a = create0(); + assertKind(elements_kind.fast_smi_only, a); + a[0] = 3.5; + b = create0(); + assertKind(elements_kind.fast_double, b); + + function create1(arg) { + return Array(arg); + } + + create1(0); + create1(0); + a = create1(0); + assertFalse(isHoley(a)); + assertKind(elements_kind.fast_smi_only, a); + a[0] = "hello"; + b = create1(10); + assertTrue(isHoley(b)); + assertKind(elements_kind.fast, b); + + a = create1(100000); + assertKind(elements_kind.fast_smi_only, a); + + function create3(arg1, arg2, arg3) { + return Array(arg1, arg2, arg3); + } + + create3(1,2,3); + create3(1,2,3); + a = create3(1,2,3); + a[0] = 3.035; + assertKind(elements_kind.fast_double, a); + b = create3(1,2,3); + assertKind(elements_kind.fast_double, b); + assertFalse(isHoley(b)); +})(); + + +// Verify that keyed calls work +(function (){ + function create0(name) { + return this[name](); + } + + name = "Array"; + create0(name); + create0(name); + a = create0(name); + a[0] = 3.5; + b = create0(name); + assertKind(elements_kind.fast_double, b); +})(); + + +// Verify that feedback is turned off if the call site goes megamorphic. +(function (){ + function foo(arg) { return arg(); } + foo(Array); + foo(function() {}); + foo(Array); + + gc(); + + a = foo(Array); + a[0] = 3.5; + b = foo(Array); + // b doesn't benefit from elements kind feedback at a megamorphic site. + assertKind(elements_kind.fast_smi_only, b); +})(); + + +// Verify that crankshaft consumes type feedback. +(function (){ + function create0() { + return Array(); + } + + create0(); + create0(); + a = create0(); + a[0] = 3.5; + %OptimizeFunctionOnNextCall(create0); + create0(); + create0(); + b = create0(); + assertKind(elements_kind.fast_double, b); + assertOptimized(create0); + + function create1(arg) { + return Array(arg); + } + + create1(8); + create1(8); + a = create1(8); + a[0] = 3.5; + %OptimizeFunctionOnNextCall(create1); + b = create1(8); + assertKind(elements_kind.fast_double, b); + assertOptimized(create1); + + function createN(arg1, arg2, arg3) { + return Array(arg1, arg2, arg3); + } + + createN(1, 2, 3); + createN(1, 2, 3); + a = createN(1, 2, 3); + a[0] = 3.5; + %OptimizeFunctionOnNextCall(createN); + b = createN(1, 2, 3); + assertKind(elements_kind.fast_double, b); + assertOptimized(createN); +})(); + +// Verify that cross context calls work +(function (){ + var realmA = Realm.current(); + var realmB = Realm.create(); + assertEquals(0, realmA); + assertEquals(1, realmB); + + function instanceof_check(type) { + assertTrue(type() instanceof type); + assertTrue(type(5) instanceof type); + assertTrue(type(1,2,3) instanceof type); + } + + var realmBArray = Realm.eval(realmB, "Array"); + instanceof_check(Array); + instanceof_check(Array); + instanceof_check(Array); + instanceof_check(realmBArray); + instanceof_check(realmBArray); + instanceof_check(realmBArray); +})(); diff --git a/deps/v8/test/mjsunit/array-literal-feedback.js b/deps/v8/test/mjsunit/array-literal-feedback.js index cfda0f6d5f6..ed9c4e879e4 100644 --- a/deps/v8/test/mjsunit/array-literal-feedback.js +++ b/deps/v8/test/mjsunit/array-literal-feedback.js @@ -25,25 +25,9 @@ // (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE // OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. -// Flags: --allow-natives-syntax --smi-only-arrays --expose-gc +// Flags: --allow-natives-syntax --expose-gc // Flags: --noalways-opt -// Test element kind of objects. 
-// Since --smi-only-arrays affects builtins, its default setting at compile -// time sticks if built with snapshot. If --smi-only-arrays is deactivated -// by default, only a no-snapshot build actually has smi-only arrays enabled -// in this test case. Depending on whether smi-only arrays are actually -// enabled, this test takes the appropriate code path to check smi-only arrays. - -// support_smi_only_arrays = %HasFastSmiElements(new Array(1,2,3,4,5,6,7,8)); -support_smi_only_arrays = true; - -if (support_smi_only_arrays) { - print("Tests include smi-only arrays."); -} else { - print("Tests do NOT include smi-only arrays."); -} - var elements_kind = { fast_smi_only : 'fast smi only elements', fast : 'fast elements', @@ -73,58 +57,51 @@ function isHoley(obj) { } function assertKind(expected, obj, name_opt) { - if (!support_smi_only_arrays && - expected == elements_kind.fast_smi_only) { - expected = elements_kind.fast; - } assertEquals(expected, getKind(obj), name_opt); } -if (support_smi_only_arrays) { - - function get_literal(x) { - var literal = [1, 2, x]; - return literal; - } +function get_literal(x) { + var literal = [1, 2, x]; + return literal; +} - get_literal(3); - // It's important to store a from before we crankshaft get_literal, because - // mementos won't be created from crankshafted code at all. - a = get_literal(3); +get_literal(3); +// It's important to store a from before we crankshaft get_literal, because +// mementos won't be created from crankshafted code at all. +a = get_literal(3); %OptimizeFunctionOnNextCall(get_literal); - get_literal(3); - assertOptimized(get_literal); - assertTrue(%HasFastSmiElements(a)); - // a has a memento so the transition caused by the store will affect the - // boilerplate. - a[0] = 3.5; - - // We should have transitioned the boilerplate array to double, and - // crankshafted code should de-opt on the unexpected elements kind - b = get_literal(3); - assertTrue(%HasFastDoubleElements(b)); - assertEquals([1, 2, 3], b); - assertUnoptimized(get_literal); - - // Optimize again - get_literal(3); +get_literal(3); +assertOptimized(get_literal); +assertTrue(%HasFastSmiElements(a)); +// a has a memento so the transition caused by the store will affect the +// boilerplate. 
+a[0] = 3.5; + +// We should have transitioned the boilerplate array to double, and +// crankshafted code should de-opt on the unexpected elements kind +b = get_literal(3); +assertTrue(%HasFastDoubleElements(b)); +assertEquals([1, 2, 3], b); +assertUnoptimized(get_literal); + +// Optimize again +get_literal(3); %OptimizeFunctionOnNextCall(get_literal); - b = get_literal(3); - assertTrue(%HasFastDoubleElements(b)); - assertOptimized(get_literal); +b = get_literal(3); +assertTrue(%HasFastDoubleElements(b)); +assertOptimized(get_literal); - // Test: make sure allocation site information is updated through a - // transition from SMI->DOUBLE->FAST - (function() { - function bar(a, b, c) { - return [a, b, c]; - } +// Test: make sure allocation site information is updated through a +// transition from SMI->DOUBLE->FAST +(function() { + function bar(a, b, c) { + return [a, b, c]; + } - a = bar(1, 2, 3); - a[0] = 3.5; - a[1] = 'hi'; - b = bar(1, 2, 3); - assertKind(elements_kind.fast, b); - })(); -} + a = bar(1, 2, 3); + a[0] = 3.5; + a[1] = 'hi'; + b = bar(1, 2, 3); + assertKind(elements_kind.fast, b); +})(); diff --git a/deps/v8/test/mjsunit/array-literal-transitions.js b/deps/v8/test/mjsunit/array-literal-transitions.js index ca6033b2177..e1624553f4a 100644 --- a/deps/v8/test/mjsunit/array-literal-transitions.js +++ b/deps/v8/test/mjsunit/array-literal-transitions.js @@ -25,22 +25,7 @@ // (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE // OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. -// Flags: --allow-natives-syntax --smi-only-arrays --expose-gc - -// Test element kind of objects. -// Since --smi-only-arrays affects builtins, its default setting at compile -// time sticks if built with snapshot. If --smi-only-arrays is deactivated -// by default, only a no-snapshot build actually has smi-only arrays enabled -// in this test case. Depending on whether smi-only arrays are actually -// enabled, this test takes the appropriate code path to check smi-only arrays. - -support_smi_only_arrays = %HasFastSmiElements([1,2,3,4,5,6,7,8,9,10]); - -if (support_smi_only_arrays) { - print("Tests include smi-only arrays."); -} else { - print("Tests do NOT include smi-only arrays."); -} +// Flags: --allow-natives-syntax --expose-gc // IC and Crankshaft support for smi-only elements in dynamic array literals. function get(foo) { return foo; } // Used to generate dynamic values. 
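
The memento mechanism relied on above can be reproduced in isolation. A
minimal sketch, assuming the d8 shell with --allow-natives-syntax (the
%Has* runtime predicates are the same ones these tests call; the function
name is illustrative, not part of the patch):

    function get_literal(x) {
      return [1, 2, x];               // literal backed by a shared boilerplate
    }
    var a = get_literal(3);           // unoptimized code attaches a memento
    print(%HasFastSmiElements(a));    // true: the literal starts smi-only
    a[0] = 3.5;                       // smi -> double; the memento feeds the
                                      // transition back into the boilerplate
    var b = get_literal(3);
    print(%HasFastDoubleElements(b)); // true: later literals start as double
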
@@ -94,114 +79,112 @@ function array_literal_test() { assertEquals(1, f0[0]); } -if (support_smi_only_arrays) { - for (var i = 0; i < 3; i++) { - array_literal_test(); - } - %OptimizeFunctionOnNextCall(array_literal_test); +for (var i = 0; i < 3; i++) { array_literal_test(); +} + %OptimizeFunctionOnNextCall(array_literal_test); +array_literal_test(); + +function test_large_literal() { - function test_large_literal() { - - function d() { - gc(); - return 2.5; - } - - function o() { - gc(); - return new Object(); - } - - large = - [ 0, 1, 2, 3, 4, 5, d(), d(), d(), d(), d(), d(), o(), o(), o(), o() ]; - assertFalse(%HasDictionaryElements(large)); - assertFalse(%HasFastSmiElements(large)); - assertFalse(%HasFastDoubleElements(large)); - assertTrue(%HasFastObjectElements(large)); - assertEquals(large, - [0, 1, 2, 3, 4, 5, 2.5, 2.5, 2.5, 2.5, 2.5, 2.5, - new Object(), new Object(), new Object(), new Object()]); + function d() { + gc(); + return 2.5; } - for (var i = 0; i < 3; i++) { - test_large_literal(); + function o() { + gc(); + return new Object(); } - %OptimizeFunctionOnNextCall(test_large_literal); + + large = + [ 0, 1, 2, 3, 4, 5, d(), d(), d(), d(), d(), d(), o(), o(), o(), o() ]; + assertFalse(%HasDictionaryElements(large)); + assertFalse(%HasFastSmiElements(large)); + assertFalse(%HasFastDoubleElements(large)); + assertTrue(%HasFastObjectElements(large)); + assertEquals(large, + [0, 1, 2, 3, 4, 5, 2.5, 2.5, 2.5, 2.5, 2.5, 2.5, + new Object(), new Object(), new Object(), new Object()]); +} + +for (var i = 0; i < 3; i++) { test_large_literal(); +} + %OptimizeFunctionOnNextCall(test_large_literal); +test_large_literal(); - function deopt_array(use_literal) { - if (use_literal) { - return [.5, 3, 4]; - } else { - return new Array(); - } +function deopt_array(use_literal) { + if (use_literal) { + return [.5, 3, 4]; + } else { + return new Array(); } +} - deopt_array(false); - deopt_array(false); - deopt_array(false); +deopt_array(false); +deopt_array(false); +deopt_array(false); %OptimizeFunctionOnNextCall(deopt_array); - var array = deopt_array(false); - assertOptimized(deopt_array); - deopt_array(true); - assertOptimized(deopt_array); - array = deopt_array(false); - assertOptimized(deopt_array); - - // Check that unexpected changes in the objects stored into the boilerplate - // also force a deopt. - function deopt_array_literal_all_smis(a) { - return [0, 1, a]; - } +var array = deopt_array(false); +assertOptimized(deopt_array); +deopt_array(true); +assertOptimized(deopt_array); +array = deopt_array(false); +assertOptimized(deopt_array); + +// Check that unexpected changes in the objects stored into the boilerplate +// also force a deopt. 
+function deopt_array_literal_all_smis(a) { + return [0, 1, a]; +} - deopt_array_literal_all_smis(2); - deopt_array_literal_all_smis(3); - deopt_array_literal_all_smis(4); - array = deopt_array_literal_all_smis(4); - assertEquals(0, array[0]); - assertEquals(1, array[1]); - assertEquals(4, array[2]); +deopt_array_literal_all_smis(2); +deopt_array_literal_all_smis(3); +deopt_array_literal_all_smis(4); +array = deopt_array_literal_all_smis(4); +assertEquals(0, array[0]); +assertEquals(1, array[1]); +assertEquals(4, array[2]); %OptimizeFunctionOnNextCall(deopt_array_literal_all_smis); - array = deopt_array_literal_all_smis(5); - array = deopt_array_literal_all_smis(6); - assertOptimized(deopt_array_literal_all_smis); - assertEquals(0, array[0]); - assertEquals(1, array[1]); - assertEquals(6, array[2]); - - array = deopt_array_literal_all_smis(.5); - assertUnoptimized(deopt_array_literal_all_smis); - assertEquals(0, array[0]); - assertEquals(1, array[1]); - assertEquals(.5, array[2]); - - function deopt_array_literal_all_doubles(a) { - return [0.5, 1, a]; - } +array = deopt_array_literal_all_smis(5); +array = deopt_array_literal_all_smis(6); +assertOptimized(deopt_array_literal_all_smis); +assertEquals(0, array[0]); +assertEquals(1, array[1]); +assertEquals(6, array[2]); + +array = deopt_array_literal_all_smis(.5); +assertUnoptimized(deopt_array_literal_all_smis); +assertEquals(0, array[0]); +assertEquals(1, array[1]); +assertEquals(.5, array[2]); + +function deopt_array_literal_all_doubles(a) { + return [0.5, 1, a]; +} - deopt_array_literal_all_doubles(.5); - deopt_array_literal_all_doubles(.5); - deopt_array_literal_all_doubles(.5); - array = deopt_array_literal_all_doubles(0.5); - assertEquals(0.5, array[0]); - assertEquals(1, array[1]); - assertEquals(0.5, array[2]); +deopt_array_literal_all_doubles(.5); +deopt_array_literal_all_doubles(.5); +deopt_array_literal_all_doubles(.5); +array = deopt_array_literal_all_doubles(0.5); +assertEquals(0.5, array[0]); +assertEquals(1, array[1]); +assertEquals(0.5, array[2]); %OptimizeFunctionOnNextCall(deopt_array_literal_all_doubles); - array = deopt_array_literal_all_doubles(5); - array = deopt_array_literal_all_doubles(6); - assertOptimized(deopt_array_literal_all_doubles); - assertEquals(0.5, array[0]); - assertEquals(1, array[1]); - assertEquals(6, array[2]); - - var foo = new Object(); - array = deopt_array_literal_all_doubles(foo); - assertUnoptimized(deopt_array_literal_all_doubles); - assertEquals(0.5, array[0]); - assertEquals(1, array[1]); - assertEquals(foo, array[2]); -} +array = deopt_array_literal_all_doubles(5); +array = deopt_array_literal_all_doubles(6); +assertOptimized(deopt_array_literal_all_doubles); +assertEquals(0.5, array[0]); +assertEquals(1, array[1]); +assertEquals(6, array[2]); + +var foo = new Object(); +array = deopt_array_literal_all_doubles(foo); +assertUnoptimized(deopt_array_literal_all_doubles); +assertEquals(0.5, array[0]); +assertEquals(1, array[1]); +assertEquals(foo, array[2]); (function literals_after_osr() { var color = [0]; diff --git a/deps/v8/test/mjsunit/array-natives-elements.js b/deps/v8/test/mjsunit/array-natives-elements.js index cf848bb4b90..d63346d0a46 100644 --- a/deps/v8/test/mjsunit/array-natives-elements.js +++ b/deps/v8/test/mjsunit/array-natives-elements.js @@ -25,22 +25,7 @@ // (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE // OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. 
-// Flags: --allow-natives-syntax --smi-only-arrays - -// Test element kind of objects. -// Since --smi-only-arrays affects builtins, its default setting at compile time -// sticks if built with snapshot. If --smi-only-arrays is deactivated by -// default, only a no-snapshot build actually has smi-only arrays enabled in -// this test case. Depending on whether smi-only arrays are actually enabled, -// this test takes the appropriate code path to check smi-only arrays. - -support_smi_only_arrays = %HasFastSmiElements([1,2,3,4,5,6,7,8,9,10]); - -if (support_smi_only_arrays) { - print("Tests include smi-only arrays."); -} else { - print("Tests do NOT include smi-only arrays."); -} +// Flags: --allow-natives-syntax // IC and Crankshaft support for smi-only elements in dynamic array literals. function get(foo) { return foo; } // Used to generate dynamic values. @@ -54,29 +39,30 @@ function array_natives_test() { assertTrue(%HasFastDoubleElements([1.1])); assertTrue(%HasFastDoubleElements([1.1,2])); - // Push - var a0 = [1, 2, 3]; - if (%HasFastSmiElements(a0)) { - assertTrue(%HasFastSmiElements(a0)); - a0.push(4); - assertTrue(%HasFastSmiElements(a0)); - a0.push(1.3); - assertTrue(%HasFastDoubleElements(a0)); - a0.push(1.5); - assertTrue(%HasFastDoubleElements(a0)); - a0.push({}); - assertTrue(%HasFastObjectElements(a0)); - a0.push({}); - assertTrue(%HasFastObjectElements(a0)); - } else { - assertTrue(%HasFastObjectElements(a0)); - a0.push(4); - a0.push(1.3); - a0.push(1.5); - a0.push({}); - a0.push({}); - assertTrue(%HasFastObjectElements(a0)); + // This code exists to eliminate the learning influence of AllocationSites + // on the following tests. + var __sequence = 0; + function make_array_string(literal) { + this.__sequence = this.__sequence + 1; + return "/* " + this.__sequence + " */ " + literal; } + function make_array(literal) { + return eval(make_array_string(literal)); + } + + // Push + var a0 = make_array("[1, 2, 3]"); + assertTrue(%HasFastSmiElements(a0)); + a0.push(4); + assertTrue(%HasFastSmiElements(a0)); + a0.push(1.3); + assertTrue(%HasFastDoubleElements(a0)); + a0.push(1.5); + assertTrue(%HasFastDoubleElements(a0)); + a0.push({}); + assertTrue(%HasFastObjectElements(a0)); + a0.push({}); + assertTrue(%HasFastObjectElements(a0)); assertEquals([1,2,3,4,1.3,1.5,{},{}], a0); // Concat @@ -307,10 +293,8 @@ function array_natives_test() { assertEquals([1.1,{},2,3], a4); } -if (support_smi_only_arrays) { - for (var i = 0; i < 3; i++) { - array_natives_test(); - } - %OptimizeFunctionOnNextCall(array_natives_test); +for (var i = 0; i < 3; i++) { array_natives_test(); } +%OptimizeFunctionOnNextCall(array_natives_test); +array_natives_test(); diff --git a/deps/v8/test/mjsunit/array-push-unshift-read-only-length.js b/deps/v8/test/mjsunit/array-push-unshift-read-only-length.js new file mode 100644 index 00000000000..67aa39787aa --- /dev/null +++ b/deps/v8/test/mjsunit/array-push-unshift-read-only-length.js @@ -0,0 +1,107 @@ +// Copyright 2014 the V8 project authors. All rights reserved. +// Use of this source code is governed by a BSD-style license that can be +// found in the LICENSE file. 
+
+// Flags: --allow-natives-syntax
+
+function test(mode) {
+  var a = [];
+  Object.defineProperty(a, "length", { writable : false});
+
+  function check(f) {
+    try {
+      f(a);
+    } catch(e) { }
+    assertFalse(0 in a);
+    assertEquals(0, a.length);
+  }
+
+  function push(a) {
+    a.push(3);
+  }
+
+  if (mode == "fast properties") %ToFastProperties(a);
+
+  check(push);
+  check(push);
+  check(push);
+  %OptimizeFunctionOnNextCall(push);
+  check(push);
+
+  function unshift(a) {
+    a.unshift(3);
+  }
+
+  check(unshift);
+  check(unshift);
+  check(unshift);
+  %OptimizeFunctionOnNextCall(unshift);
+  check(unshift);
+}
+
+test("fast properties");
+
+test("normalized");
+
+var b = [];
+Object.defineProperty(b.__proto__, "0", {
+  set : function(v) {
+    b.x = v;
+    Object.defineProperty(b, "length", { writable : false });
+  },
+  get: function() {
+    return b.x;
+  }
+});
+
+b = [];
+try {
+  b.push(3, 4, 5);
+} catch(e) { }
+assertFalse(1 in b);
+assertFalse(2 in b);
+assertEquals(0, b.length);
+
+b = [];
+try {
+  b.unshift(3, 4, 5);
+} catch(e) { }
+assertFalse(1 in b);
+assertFalse(2 in b);
+assertEquals(0, b.length);
+
+b = [1, 2];
+try {
+  b.unshift(3, 4, 5);
+} catch(e) { }
+assertEquals(3, b[0]);
+assertEquals(4, b[1]);
+assertEquals(5, b[2]);
+assertEquals(1, b[3]);
+assertEquals(2, b[4]);
+assertEquals(5, b.length);
+
+b = [1, 2];
+
+Object.defineProperty(b.__proto__, "4", {
+  set : function(v) {
+    b.z = v;
+    Object.defineProperty(b, "length", { writable : false });
+  },
+  get: function() {
+    return b.z;
+  }
+});
+
+try {
+  b.unshift(3, 4, 5);
+} catch(e) { }
+
+// TODO(ulan): According to ECMA-262, unshift should throw an exception
+// when moving b[0] to b[3] (see 15.4.4.13 step 6.d.ii). This is difficult
+// to do with our current implementation of SmartMove() in src/array.js and
+// it will regress performance. Uncomment the following lines once an
+// acceptable solution is found:
+// assertFalse(2 in b);
+// assertFalse(3 in b);
+// assertEquals(2, b.length);
diff --git a/deps/v8/test/mjsunit/array-shift2.js b/deps/v8/test/mjsunit/array-shift2.js
new file mode 100644
index 00000000000..73d8cd4ff17
--- /dev/null
+++ b/deps/v8/test/mjsunit/array-shift2.js
@@ -0,0 +1,18 @@
+// Copyright 2014 the V8 project authors. All rights reserved.
+// Use of this source code is governed by a BSD-style license that can be
+// found in the LICENSE file.
+
+// Flags: --allow-natives-syntax
+
+Object.defineProperty(Array.prototype, "1", {
+  get: function() { return "element 1"; },
+  set: function(value) { }
+});
+function test(array) {
+  array.shift();
+  return array;
+}
+assertEquals(["element 1",2], test(["0",,2]));
+assertEquals(["element 1",{}], test([{},,{}]));
+%OptimizeFunctionOnNextCall(test);
+assertEquals(["element 1",0], test([{},,0]));
diff --git a/deps/v8/test/mjsunit/array-shift3.js b/deps/v8/test/mjsunit/array-shift3.js
new file mode 100644
index 00000000000..3a0afc596b1
--- /dev/null
+++ b/deps/v8/test/mjsunit/array-shift3.js
@@ -0,0 +1,15 @@
+// Copyright 2014 the V8 project authors. All rights reserved.
+// Use of this source code is governed by a BSD-style license that can be
+// found in the LICENSE file.
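
The read-only-length behaviour pinned down above follows from ES5.1: push
and unshift write both the elements and "length" with the Throw flag set,
and an element store past a non-writable length is itself rejected, so the
array must stay untouched. A minimal standalone sketch (plain JavaScript,
no natives; print is d8's output function):

    var a = [];
    Object.defineProperty(a, "length", { writable: false });
    try {
      a.push(3);                      // throws: cannot extend past length 0
    } catch (e) {
      print(e instanceof TypeError);  // true
    }
    print(0 in a);                    // false: the element was never stored
    print(a.length);                  // 0
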
+ +// Flags: --allow-natives-syntax + +Array.prototype[1] = "element 1"; +function test(a) { + a.shift(); + return a; +} +assertEquals(["element 1",{}], test([0,,{}])); +assertEquals(["element 1",10], test([9,,10])); +%OptimizeFunctionOnNextCall(test); +assertEquals(["element 1",10], test([9,,10])); diff --git a/deps/v8/test/mjsunit/assert-opt-and-deopt.js b/deps/v8/test/mjsunit/assert-opt-and-deopt.js index d0caafa27cd..e9aba1d3c92 100644 --- a/deps/v8/test/mjsunit/assert-opt-and-deopt.js +++ b/deps/v8/test/mjsunit/assert-opt-and-deopt.js @@ -137,7 +137,7 @@ OptTracker.prototype.DisableAsserts_ = function(func) { case OptTracker.OptimizationState.NEVER: return true; } - return false; + return true; } // (End of class OptTracker.) diff --git a/deps/v8/test/mjsunit/binary-op-newspace.js b/deps/v8/test/mjsunit/binary-op-newspace.js index dac7d24dba2..52903f051a3 100644 --- a/deps/v8/test/mjsunit/binary-op-newspace.js +++ b/deps/v8/test/mjsunit/binary-op-newspace.js @@ -25,7 +25,7 @@ // (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE // OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. -// Flags: --max-new-space-size=2 --noopt +// Flags: --max-semi-space-size=1 --noopt // Check that a mod where the stub code hits a failure in heap number // allocation still works. diff --git a/deps/v8/test/mjsunit/bounds-checks-elimination.js b/deps/v8/test/mjsunit/bounds-checks-elimination.js new file mode 100644 index 00000000000..4ea7f17e52e --- /dev/null +++ b/deps/v8/test/mjsunit/bounds-checks-elimination.js @@ -0,0 +1,123 @@ +// Copyright 2014 the V8 project authors. All rights reserved. +// Use of this source code is governed by a BSD-style license that can be +// found in the LICENSE file. + +// Flags: --allow-natives-syntax --array-bounds-checks-elimination + +var a = [] +for (var i = 0; i < 9; i++) a[i] = i + 1; + +function test(f, arg1, arg2, expected) { + assertEquals(expected, f(arg1)); + f(arg2); + %OptimizeFunctionOnNextCall(f); + assertEquals(expected, f(arg1)); +} + +test(function f0() { + return a[7] * a[6] * a[5] * a[4] * a[3] * a[2] * a[1] * a[0]; +}, 0, 1, 40320); + +test(function f1() { + return a[7] * a[6] * a[5] * a[4] * a[10] * a[2] * a[1] * a[0]; +}, 0, 1, NaN); + +test(function f2() { + return a[0] * a[1] * a[2] * a[3] * a[4] * a[5] * a[6] * a[7]; +}, 0, 1, 40320); + +test(function f3() { + return a[3] * a[0] * a[6] * a[7] * a[5] * a[1] * a[4] * a[2]; +}, 0, 1, 40320); + +test(function f4(b) { + return a[b+3] * a[0] * a[b+6] * a[7] * a[b+5] * a[1] * a[b+4] * a[2]; +}, 0, 1, 40320); + +test(function f5(b) { + return a[b+1] * a[0] * a[b+4] * a[7] * a[b+3] * a[1] * a[b+2] * a[2]; +}, 2, 3, 40320); + +test(function f6(b) { + var c; + if (b) c = a[3] * a[0] * a[6] * a[7]; + return c * a[5] * a[1] * a[4] * a[2]; +}, true, false, 40320); + +test(function f7(b) { + var c = a[7]; + if (b) c *= a[3] * a[0] * a[6]; + return c * a[5] * a[1] * a[4] * a[2]; +}, true, false, 40320); + +test(function f8(b) { + var c = a[7]; + if (b) c *= a[3] * a[0] * a[6]; + return c * a[5] * a[10] * a[4] * a[2]; +}, true, false, NaN); + +test(function f9(b) { + var c = a[1]; + if (b) { + c *= a[3] * a[0] * a[6]; + } else { + c = a[6] * a[5] * a[4]; + } + return c * a[5] * a[7] * a[4] * a[2]; +}, true, false, 40320); + +test(function fa(b) { + var c = a[1]; + if (b) { + c = a[6] * a[b+5] * a[4]; + } else { + c *= a[b+3] * a[0] * a[b+6]; + } + return c * a[5] * a[b+7] * a[4] * a[2]; +}, 0, 1, 40320); + +test(function fb(b) { + var c = a[b-3]; + if (b != 4) { + c = a[6] * 
a[b+1] * a[4]; + } else { + c *= a[b-1] * a[0] * a[b+2]; + } + return c * a[5] * a[b+3] * a[4] * a[b-2]; +}, 4, 3, 40320); + +test(function fc(b) { + var c = a[b-3]; + if (b != 4) { + c = a[6] * a[b+1] * a[4]; + } else { + c *= a[b-1] * a[0] * a[b+2]; + } + return c * a[5] * a[b+3] * a[4] * a[b-2]; +}, 6, 3, NaN); + +test(function fd(b) { + var c = a[b-3]; + if (b != 4) { + c = a[6] * a[b+1] * a[4]; + } else { + c *= a[b-1] * a[0] * a[b+2]; + } + return c * a[5] * a[b+3] * a[4] * a[b-2]; +}, 1, 4, NaN); + +test(function fe(b) { + var c = 1; + for (var i = 1; i < b-1; i++) { + c *= a[i-1] * a[i] * a[i+1]; + } + return c; +}, 8, 4, (40320 / 8 / 7) * (40320 / 8) * (40320 / 2)); + +test(function ff(b) { + var c = 0; + for (var i = 0; i < b; i++) { + c += a[3] * a[0] * a[6] * a[7] * a[5] * a[1] * a[4] * a[2]; + } + return c; +}, 100, 4, 40320 * 100); diff --git a/deps/v8/test/mjsunit/builtins.js b/deps/v8/test/mjsunit/builtins.js index ce2c6802f07..fe7d35d8ea1 100644 --- a/deps/v8/test/mjsunit/builtins.js +++ b/deps/v8/test/mjsunit/builtins.js @@ -38,6 +38,14 @@ function isFunction(obj) { return typeof obj == "function"; } +function isV8Native(name) { + return name == "GeneratorFunctionPrototype" || + name == "SetIterator" || + name == "MapIterator" || + name == "ArrayIterator" || + name == "StringIterator"; +} + function checkConstructor(func, name) { // A constructor is a function with a prototype and properties on the // prototype object besides "constructor"; @@ -54,12 +62,13 @@ function checkConstructor(func, name) { assertFalse(proto_desc.writable, name); assertFalse(proto_desc.configurable, name); var prototype = proto_desc.value; - assertEquals(name == "GeneratorFunctionPrototype" ? Object.prototype : null, + assertEquals(isV8Native(name) ? Object.prototype : null, Object.getPrototypeOf(prototype), name); for (var i = 0; i < propNames.length; i++) { var propName = propNames[i]; if (propName == "constructor") continue; + if (isV8Native(name)) continue; var testName = name + "-" + propName; var propDesc = Object.getOwnPropertyDescriptor(prototype, propName); assertTrue(propDesc.hasOwnProperty("value"), testName); diff --git a/deps/v8/test/mjsunit/compiler/inline-arguments.js b/deps/v8/test/mjsunit/compiler/inline-arguments.js index 1337ab237a4..d52f31b5e91 100644 --- a/deps/v8/test/mjsunit/compiler/inline-arguments.js +++ b/deps/v8/test/mjsunit/compiler/inline-arguments.js @@ -309,3 +309,29 @@ test_toarr(toarr2); delete forceDeopt.deopt; outer(); })(); + + +// Test inlining of functions with %_Arguments and %_ArgumentsLength intrinsic. +(function () { + function inner(len,a,b,c) { + assertSame(len, %_ArgumentsLength()); + for (var i = 1; i < len; ++i) { + var c = String.fromCharCode(96 + i); + assertSame(c, %_Arguments(i)); + } + } + + function outer() { + inner(1); + inner(2, 'a'); + inner(3, 'a', 'b'); + inner(4, 'a', 'b', 'c'); + inner(5, 'a', 'b', 'c', 'd'); + inner(6, 'a', 'b', 'c', 'd', 'e'); + } + + outer(); + outer(); + %OptimizeFunctionOnNextCall(outer); + outer(); +})(); diff --git a/deps/v8/test/mjsunit/compiler/math-floor-global.js b/deps/v8/test/mjsunit/compiler/math-floor-global.js index 4a3bcb72200..9ee649cb2d8 100644 --- a/deps/v8/test/mjsunit/compiler/math-floor-global.js +++ b/deps/v8/test/mjsunit/compiler/math-floor-global.js @@ -25,7 +25,7 @@ // (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE // OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. 
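
The bounds-checks-elimination cases added above all share one shape: the
indices used inside a block differ only by known constants, so the optimizer
can keep a single representative check. A minimal sketch of that shape,
assuming d8 with --allow-natives-syntax --array-bounds-checks-elimination
(the function name is illustrative):

    var a = [1, 2, 3, 4, 5, 6, 7, 8, 9];
    function windowProduct(n) {
      var c = 1;
      for (var i = 1; i < n - 1; i++) {
        // three loads per iteration, but the checks for i-1, i and i+1
        // are related by constant offsets and can be folded together
        c *= a[i - 1] * a[i] * a[i + 1];
      }
      return c;
    }
    windowProduct(9);                 // warm up with type feedback
    windowProduct(9);
    %OptimizeFunctionOnNextCall(windowProduct);
    print(windowProduct(9));          // same result from optimized code
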
-// Flags: --max-new-space-size=2 --allow-natives-syntax +// Flags: --max-semi-space-size=1 --allow-natives-syntax // Test inlining of Math.floor when assigned to a global. var flo = Math.floor; diff --git a/deps/v8/test/mjsunit/compiler/math-floor-local.js b/deps/v8/test/mjsunit/compiler/math-floor-local.js index 8424ac96d3e..5ebe90b705f 100644 --- a/deps/v8/test/mjsunit/compiler/math-floor-local.js +++ b/deps/v8/test/mjsunit/compiler/math-floor-local.js @@ -25,7 +25,7 @@ // (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE // OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. -// Flags: --max-new-space-size=2 --allow-natives-syntax +// Flags: --max-semi-space-size=1 --allow-natives-syntax // Test inlining of Math.floor when assigned to a local. var test_id = 0; diff --git a/deps/v8/test/mjsunit/const-eval-init.js b/deps/v8/test/mjsunit/const-eval-init.js index d3503845d0c..50e3a8d0be2 100644 --- a/deps/v8/test/mjsunit/const-eval-init.js +++ b/deps/v8/test/mjsunit/const-eval-init.js @@ -36,29 +36,29 @@ function testIntroduceGlobal() { var source = // Deleting 'x' removes the local const property. "delete x;" + - // Initialization turns into assignment to global 'x'. + // Initialization redefines global 'x'. "const x = 3; assertEquals(3, x);" + - // No constness of the global 'x'. - "x = 4; assertEquals(4, x);"; + // Test constness of the global 'x'. + "x = 4; assertEquals(3, x);"; eval(source); } testIntroduceGlobal(); -assertEquals(4, x); +assertEquals("undefined", typeof x); function testAssignExistingGlobal() { var source = // Delete 'x' to remove the local const property. "delete x;" + - // Initialization turns into assignment to global 'x'. + // Initialization redefines global 'x'. "const x = 5; assertEquals(5, x);" + - // No constness of the global 'x'. - "x = 6; assertEquals(6, x);"; + // Test constness of the global 'x'. + "x = 6; assertEquals(5, x);"; eval(source); } testAssignExistingGlobal(); -assertEquals(6, x); +assertEquals("undefined", typeof x); function testAssignmentArgument(x) { function local() { @@ -66,7 +66,7 @@ function testAssignmentArgument(x) { eval(source); } local(); - assertEquals(7, x); + assertEquals("undefined", typeof x); } for (var i = 0; i < 5; i++) { @@ -74,17 +74,18 @@ for (var i = 0; i < 5; i++) { } %OptimizeFunctionOnNextCall(testAssignmentArgument); testAssignmentArgument(); -assertEquals(6, x); +assertEquals("undefined", typeof x); __defineSetter__('x', function() { throw 42; }); -function testAssignGlobalThrows() { - // Initialization turns into assignment to global 'x' which - // throws an exception. - var source = "delete x; const x = 8"; +var finished = false; +function testRedefineGlobal() { + // Initialization redefines global 'x'. 
+ var source = "delete x; const x = 8; finished = true;"; eval(source); } -assertThrows("testAssignGlobalThrows()"); +testRedefineGlobal(); +assertTrue(finished); function testInitFastCaseExtension() { var source = "const x = 9; assertEquals(9, x); x = 10; assertEquals(9, x)"; @@ -111,7 +112,7 @@ function testAssignSurroundingContextSlot() { eval(source); } local(); - assertEquals(13, x); + assertEquals(12, x); } testAssignSurroundingContextSlot(); diff --git a/deps/v8/test/mjsunit/const-redecl.js b/deps/v8/test/mjsunit/const-redecl.js index c0b97e6ced1..f311f0de66d 100644 --- a/deps/v8/test/mjsunit/const-redecl.js +++ b/deps/v8/test/mjsunit/const-redecl.js @@ -49,37 +49,6 @@ function TestLocal(s,e) { } -// NOTE: TestGlobal usually only tests the given string in the context -// of a global object in dictionary mode. This is because we use -// delete to get rid of any added properties. -function TestGlobal(s,e) { - // Collect the global properties before the call. - var properties = []; - for (var key in this) properties.push(key); - // Compute the result. - var result; - try { - var code = s + (e ? "; $$$result=" + e : ""); - if (this.execScript) { - execScript(code); - } else { - this.eval(code); - } - // Avoid issues if $$$result is not defined by - // reading it through this. - result = this.$$$result; - } catch (x) { - result = CheckException(x); - } - // Get rid of any introduced global properties before - // returning the result. - for (var key in this) { - if (properties.indexOf(key) == -1) delete this[key]; - } - return result; -} - - function TestContext(s,e) { try { // Use a with-statement to force the system to do dynamic @@ -98,8 +67,6 @@ function TestAll(expected,s,opt_e) { var msg = s; if (opt_e) { e = opt_e; msg += "; " + opt_e; } assertEquals(expected, TestLocal(s,e), "local:'" + msg + "'"); - // Redeclarations of global consts do not throw, they are silently ignored. - assertEquals(42, TestGlobal(s, 42), "global:'" + msg + "'"); assertEquals(expected, TestContext(s,e), "context:'" + msg + "'"); } @@ -112,7 +79,7 @@ function TestConflict(def0, def1) { // Eval first definition. TestAll("TypeError", 'eval("' + def0 +'"); ' + def1); // Eval second definition. - TestAll("TypeError", def0 + '; eval("' + def1 + '")'); + TestAll("TypeError", def0 + '; eval("' + def1 +'")'); // Eval both definitions separately. TestAll("TypeError", 'eval("' + def0 +'"); eval("' + def1 + '")'); } @@ -234,47 +201,26 @@ var undefined = 1; // Should be silently ignored. 
assertEquals(original_undef, undefined, "undefined got overwritten"); undefined = original_undef; -var a; const a; const a = 1; -assertEquals(1, a, "a has wrong value"); -a = 2; -assertEquals(2, a, "a should be writable"); - -var b = 1; const b = 2; -assertEquals(2, b, "b has wrong value"); - -var c = 1; const c = 2; const c = 3; -assertEquals(3, c, "c has wrong value"); - -const d = 1; const d = 2; -assertEquals(1, d, "d has wrong value"); - -const e = 1; var e = 2; +const e = 1; eval('var e = 2'); assertEquals(1, e, "e has wrong value"); -const f = 1; const f; -assertEquals(1, f, "f has wrong value"); - -var g; const g = 1; -assertEquals(1, g, "g has wrong value"); -g = 2; -assertEquals(2, g, "g should be writable"); - -const h; var h = 1; -assertEquals(undefined,h, "h has wrong value"); +const h; eval('var h = 1'); +assertEquals(undefined, h, "h has wrong value"); eval("Object.defineProperty(this, 'i', { writable: true });" + "const i = 7;" + "assertEquals(7, i, \"i has wrong value\");"); var global = this; -assertThrows(function() { - Object.defineProperty(global, 'j', { writable: true }) -}, TypeError); -const j = 2; // This is what makes the function above throw, because the -// const declaration gets hoisted and makes the property non-configurable. +Object.defineProperty(global, 'j', { value: 100, writable: true }); +assertEquals(100, j); +// The const declaration stays configurable, so the declaration above goes +// through even though the const declaration is hoisted above. +const j = 2; assertEquals(2, j, "j has wrong value"); -var k = 1; const k; -// You could argue about the expected result here. For now, the winning -// argument is that "const k;" is equivalent to "const k = undefined;". -assertEquals(undefined, k, "k has wrong value"); +var k = 1; +try { eval('const k'); } catch(e) { } +assertEquals(1, k, "k has wrong value"); +try { eval('const k = 10'); } catch(e) { } +assertEquals(1, k, "k has wrong value"); diff --git a/deps/v8/test/mjsunit/constant-folding-2.js b/deps/v8/test/mjsunit/constant-folding-2.js index f429c6ca10c..73cf040f5a8 100644 --- a/deps/v8/test/mjsunit/constant-folding-2.js +++ b/deps/v8/test/mjsunit/constant-folding-2.js @@ -181,6 +181,17 @@ test(function mathRound() { assertEquals(Math.pow(2, 52) + 1, Math.round(Math.pow(2, 52) + 1)); }); +test(function mathFround() { + assertTrue(isNaN(Math.fround(NaN))); + assertEquals("Infinity", String(1/Math.fround(0))); + assertEquals("-Infinity", String(1/Math.fround(-0))); + assertEquals("Infinity", String(Math.fround(Infinity))); + assertEquals("-Infinity", String(Math.fround(-Infinity))); + assertEquals("Infinity", String(Math.fround(1E200))); + assertEquals("-Infinity", String(Math.fround(-1E200))); + assertEquals(3.1415927410125732, Math.fround(Math.PI)); +}); + test(function mathFloor() { assertEquals(1, Math.floor(1.5)); assertEquals(-2, Math.floor(-1.5)); diff --git a/deps/v8/test/mjsunit/cross-realm-filtering.js b/deps/v8/test/mjsunit/cross-realm-filtering.js new file mode 100644 index 00000000000..902cceb58ff --- /dev/null +++ b/deps/v8/test/mjsunit/cross-realm-filtering.js @@ -0,0 +1,141 @@ +// Copyright 2014 the V8 project authors. All rights reserved. +// Use of this source code is governed by a BSD-style license that can be +// found in the LICENSE file. + +var realms = [Realm.current(), Realm.create()]; + +// Check stack trace filtering across security contexts. 
+var thrower_script =
+    "(function () { Realm.eval(Realm.current(), 'throw Error()') })";
+Realm.shared = {
+  thrower_0: Realm.eval(realms[0], thrower_script),
+  thrower_1: Realm.eval(realms[1], thrower_script),
+};
+
+var script = "                                              \
+  Error.prepareStackTrace = function(a, b) { return b; };   \
+  try {                                                     \
+    Realm.shared.thrower_0();                               \
+  } catch (e) {                                             \
+    Realm.shared.error_0 = e.stack;                         \
+  }                                                         \
+  try {                                                     \
+    Realm.shared.thrower_1();                               \
+  } catch (e) {                                             \
+    Realm.shared.error_1 = e.stack;                         \
+  }                                                         \
+";
+
+function assertNotIn(thrower, error) {
+  for (var i = 0; i < error.length; i++) {
+    assertFalse(thrower === error[i].getFunction());
+  }
+}
+
+Realm.eval(realms[1], script);
+assertSame(3, Realm.shared.error_0.length);
+assertSame(4, Realm.shared.error_1.length);
+
+assertTrue(Realm.shared.thrower_1 === Realm.shared.error_1[2].getFunction());
+assertNotIn(Realm.shared.thrower_0, Realm.shared.error_0);
+assertNotIn(Realm.shared.thrower_0, Realm.shared.error_1);
+
+Realm.eval(realms[0], script);
+assertSame(5, Realm.shared.error_0.length);
+assertSame(4, Realm.shared.error_1.length);
+
+assertTrue(Realm.shared.thrower_0 === Realm.shared.error_0[2].getFunction());
+assertNotIn(Realm.shared.thrower_1, Realm.shared.error_0);
+assertNotIn(Realm.shared.thrower_1, Realm.shared.error_1);
+
+
+// Check .caller filtering across security contexts.
+var caller_script = "(function (f) { f(); })";
+Realm.shared = {
+  caller_0 : Realm.eval(realms[0], caller_script),
+  caller_1 : Realm.eval(realms[1], caller_script),
+}
+
+script = "                                                              \
+  function f_0() { Realm.shared.result_0 = arguments.callee.caller; }; \
+  function f_1() { Realm.shared.result_1 = arguments.callee.caller; }; \
+  Realm.shared.caller_0(f_0);                                          \
+  Realm.shared.caller_1(f_1);                                          \
+";
+
+Realm.eval(realms[1], script);
+assertSame(null, Realm.shared.result_0);
+assertSame(Realm.shared.caller_1, Realm.shared.result_1);
+
+Realm.eval(realms[0], script);
+assertSame(Realm.shared.caller_0, Realm.shared.result_0);
+assertSame(null, Realm.shared.result_1);
+
+
+// Check function constructor.
+var ctor_script = "Function.constructor";
+var ctor_a_script =
+    "(function() { return Function.constructor.apply(this, ['return 1;']); })";
+var ctor_b_script = "Function.constructor.bind(this, 'return 1;')";
+var ctor_c_script =
+    "(function() { return Function.constructor.call(this, 'return 1;'); })";
+Realm.shared = {
+  ctor_0 : Realm.eval(realms[0], ctor_script),
+  ctor_1 : Realm.eval(realms[1], ctor_script),
+  ctor_a_0 : Realm.eval(realms[0], ctor_a_script),
+  ctor_a_1 : Realm.eval(realms[1], ctor_a_script),
+  ctor_b_0 : Realm.eval(realms[0], ctor_b_script),
+  ctor_b_1 : Realm.eval(realms[1], ctor_b_script),
+  ctor_c_0 : Realm.eval(realms[0], ctor_c_script),
+  ctor_c_1 : Realm.eval(realms[1], ctor_c_script),
+}
+
+var script_0 = "                                                            \
+  var ctor_0 = Realm.shared.ctor_0;                                        \
+  Realm.shared.direct_0 = ctor_0('return 1');                              \
+  Realm.shared.indirect_0 = (function() { return ctor_0('return 1;'); })();\
+  Realm.shared.apply_0 = ctor_0.apply(this, ['return 1']);                 \
+  Realm.shared.bind_0 = ctor_0.bind(this, 'return 1')();                   \
+  Realm.shared.call_0 = ctor_0.call(this, 'return 1');                     \
+  Realm.shared.a_0 = Realm.shared.ctor_a_0();                              \
+  Realm.shared.b_0 = Realm.shared.ctor_b_0();                              \
+  Realm.shared.c_0 = Realm.shared.ctor_c_0();                              \
+";
+
+script = script_0 + script_0.replace(/_0/g, "_1");
+
+Realm.eval(realms[0], script);
+assertSame(1, Realm.shared.direct_0());
+assertSame(1, Realm.shared.indirect_0());
+assertSame(1, Realm.shared.apply_0());
+assertSame(1, Realm.shared.bind_0());
+assertSame(1, Realm.shared.call_0());
+assertSame(1, Realm.shared.a_0());
+assertSame(1, Realm.shared.b_0());
+assertSame(1, Realm.shared.c_0());
+assertSame(undefined, Realm.shared.direct_1);
+assertSame(undefined, Realm.shared.indirect_1);
+assertSame(undefined, Realm.shared.apply_1);
+assertSame(undefined, Realm.shared.bind_1);
+assertSame(undefined, Realm.shared.call_1);
+assertSame(1, Realm.shared.a_1());
+assertSame(undefined, Realm.shared.b_1);
+assertSame(1, Realm.shared.c_1());
+
+Realm.eval(realms[1], script);
+assertSame(undefined, Realm.shared.direct_0);
+assertSame(undefined, Realm.shared.indirect_0);
+assertSame(undefined, Realm.shared.apply_0);
+assertSame(undefined, Realm.shared.bind_0);
+assertSame(undefined, Realm.shared.call_0);
+assertSame(1, Realm.shared.a_0());
+assertSame(undefined, Realm.shared.b_0);
+assertSame(1, Realm.shared.c_0());
+assertSame(1, Realm.shared.direct_1());
+assertSame(1, Realm.shared.indirect_1());
+assertSame(1, Realm.shared.apply_1());
+assertSame(1, Realm.shared.bind_1());
+assertSame(1, Realm.shared.call_1());
+assertSame(1, Realm.shared.a_1());
+assertSame(1, Realm.shared.b_1());
+assertSame(1, Realm.shared.c_1());
diff --git a/deps/v8/test/mjsunit/debug-break-native.js b/deps/v8/test/mjsunit/debug-break-native.js
new file mode 100644
index 00000000000..11d7274929c
--- /dev/null
+++ b/deps/v8/test/mjsunit/debug-break-native.js
@@ -0,0 +1,42 @@
+// Copyright 2014 the V8 project authors. All rights reserved.
+// Use of this source code is governed by a BSD-style license that can be
+// found in the LICENSE file.
+
+// Flags: --expose-debug-as debug
+
+Debug = debug.Debug
+var exception = null;
+
+function breakListener(event, exec_state, event_data, data) {
+  if (event != Debug.DebugEvent.Break) return;
+  try {
+    exec_state.prepareStep(Debug.StepAction.StepIn, 1);
+    // Assert that the break happens at an intended location.
+    assertTrue(exec_state.frame(0).sourceLineText().indexOf("// break") > 0);
+  } catch (e) {
+    exception = e;
+  }
+}
+
+Debug.setListener(breakListener);
+
+debugger;  // break
+
+function f(x) {
+  return x;  // break
+}  // break
+
+Debug.setBreakPoint(f, 0, 0);  // break
+Debug.scripts();  // break
+debug.MakeMirror(f);  // break
+
+new Error("123").stack;  // break
+Math.sin(0);  // break
+
+f("this should break");  // break
+
+Debug.setListener(null);  // break
+
+f("this should not break");
+
+assertNull(exception);
diff --git a/deps/v8/test/mjsunit/debug-compile-event.js b/deps/v8/test/mjsunit/debug-compile-event.js
index 89a71ddb598..c38cd8477a9 100644
--- a/deps/v8/test/mjsunit/debug-compile-event.js
+++ b/deps/v8/test/mjsunit/debug-compile-event.js
@@ -32,6 +32,7 @@ Debug = debug.Debug
 var exception = false;  // Exception in debug event listener.
 var before_compile_count = 0;
 var after_compile_count = 0;
+var compile_error_count = 0;
 var current_source = '';  // Current source being compiled.
 var source_count = 0;  // Total number of sources compiled.
 var host_compilations = 0;  // Number of sources compiled through the API.
@@ -48,11 +49,12 @@ function compileSource(source) {
 function listener(event, exec_state, event_data, data) {
   try {
     if (event == Debug.DebugEvent.BeforeCompile ||
-        event == Debug.DebugEvent.AfterCompile) {
+        event == Debug.DebugEvent.AfterCompile ||
+        event == Debug.DebugEvent.CompileError) {
       // Count the events.
       if (event == Debug.DebugEvent.BeforeCompile) {
         before_compile_count++;
-      } else {
+      } else if (event == Debug.DebugEvent.AfterCompile) {
         after_compile_count++;
         switch (event_data.script().compilationType()) {
           case Debug.ScriptCompilationType.Host:
@@ -62,6 +64,8 @@ function listener(event, exec_state, event_data, data) {
             eval_compilations++;
             break;
         }
+      } else {
+        compile_error_count++;
       }
 
       // If the compiled source contains 'eval' there will be additional compile
@@ -81,9 +85,11 @@ function listener(event, exec_state, event_data, data) {
       assertTrue('context' in msg.body.script);
 
       // Check that we pick script name from //# sourceURL, iff present
-      assertEquals(current_source.indexOf('sourceURL') >= 0 ?
-                       'myscript.js' : undefined,
-                   event_data.script().name());
+      if (event == Debug.DebugEvent.AfterCompile) {
+        assertEquals(current_source.indexOf('sourceURL') >= 0 ?
+                         'myscript.js' : undefined,
+                     event_data.script().name());
+      }
     }
   } catch (e) {
     exception = e
@@ -105,11 +111,17 @@ compileSource('JSON.parse(\'{"a":1,"b":2}\')');
 // Using JSON.parse does not cause additional compilation events.
 compileSource('x=1; //# sourceURL=myscript.js');
 
+try {
+  compileSource('}');
+} catch(e) {
+}
+
 // Make sure that the debug event listener was invoked.
 assertFalse(exception, "exception in listener")
 
-// Number of before and after compile events should be the same.
-assertEquals(before_compile_count, after_compile_count);
+// Number of before and after + error events should be the same.
+assertEquals(before_compile_count, after_compile_count + compile_error_count);
+assertEquals(compile_error_count, 1);
 
 // Check the actual number of events (no compilation through the API as all
 // source compiled through eval).
diff --git a/deps/v8/test/mjsunit/debug-compile-optimized.js b/deps/v8/test/mjsunit/debug-compile-optimized.js
new file mode 100644
index 00000000000..468605abaab
--- /dev/null
+++ b/deps/v8/test/mjsunit/debug-compile-optimized.js
@@ -0,0 +1,18 @@
+// Copyright 2014 the V8 project authors. All rights reserved.
+// Use of this source code is governed by a BSD-style license that can be +// found in the LICENSE file. + +// Flags: --expose-debug-as debug --allow-natives-syntax --crankshaft + +Debug = debug.Debug; + +Debug.setListener(function() {}); + +function f() {} +f(); +f(); +%OptimizeFunctionOnNextCall(f); +f(); +assertOptimized(f); + +Debug.setListener(null); diff --git a/deps/v8/test/mjsunit/debug-is-active.js b/deps/v8/test/mjsunit/debug-is-active.js new file mode 100644 index 00000000000..19968f0c100 --- /dev/null +++ b/deps/v8/test/mjsunit/debug-is-active.js @@ -0,0 +1,28 @@ +// Copyright 2014 the V8 project authors. All rights reserved. +// Use of this source code is governed by a BSD-style license that can be +// found in the LICENSE file. + +// Flags: --expose-debug-as debug --allow-natives-syntax + +Debug = debug.Debug; + +function f() { return %_DebugIsActive() != 0; } + +assertFalse(f()); +assertFalse(f()); +Debug.setListener(function() {}); +assertTrue(f()); +Debug.setListener(null); +assertFalse(f()); + +%OptimizeFunctionOnNextCall(f); +assertFalse(f()); +assertOptimized(f); + +Debug.setListener(function() {}); +assertTrue(f()); +assertOptimized(f); + +Debug.setListener(null); +assertFalse(f()); +assertOptimized(f); diff --git a/deps/v8/test/mjsunit/debug-mirror-cache.js b/deps/v8/test/mjsunit/debug-mirror-cache.js index 07aaf880dcc..c690aa01332 100644 --- a/deps/v8/test/mjsunit/debug-mirror-cache.js +++ b/deps/v8/test/mjsunit/debug-mirror-cache.js @@ -62,6 +62,9 @@ function listener(event, exec_state, event_data, data) { json = '{"seq":0,"type":"request","command":"backtrace"}' dcp.processDebugJSONRequest(json); + // Make sure looking up loaded scripts does not clear the cache. + Debug.scripts(); + // Some mirrors where cached. assertFalse(debug.next_handle_ == 0, "Mirror cache not used"); assertFalse(debug.mirror_cache_.length == 0, "Mirror cache not used"); diff --git a/deps/v8/test/mjsunit/debug-script.js b/deps/v8/test/mjsunit/debug-script.js index 80d423e10bf..5b5e75962fe 100644 --- a/deps/v8/test/mjsunit/debug-script.js +++ b/deps/v8/test/mjsunit/debug-script.js @@ -59,7 +59,7 @@ for (i = 0; i < scripts.length; i++) { } // This has to be updated if the number of native scripts change. -assertTrue(named_native_count == 19 || named_native_count == 20); +assertTrue(named_native_count == 25 || named_native_count == 26); // Only the 'gc' extension is loaded. assertEquals(1, extension_count); // This script and mjsunit.js has been loaded. 
If using d8, d8 loads diff --git a/deps/v8/test/mjsunit/debug-scripts-request.js b/deps/v8/test/mjsunit/debug-scripts-request.js index e027563b9b8..f9fdde6348e 100644 --- a/deps/v8/test/mjsunit/debug-scripts-request.js +++ b/deps/v8/test/mjsunit/debug-scripts-request.js @@ -108,3 +108,5 @@ debugger; assertTrue(listenerComplete, "listener did not run to completion, exception: " + exception); assertFalse(exception, "exception in listener") + +Debug.setListener(null); diff --git a/deps/v8/test/mjsunit/debug-stepin-positions.js b/deps/v8/test/mjsunit/debug-stepin-positions.js index 722df53666a..ff532e3dd74 100644 --- a/deps/v8/test/mjsunit/debug-stepin-positions.js +++ b/deps/v8/test/mjsunit/debug-stepin-positions.js @@ -37,12 +37,13 @@ function TestCase(fun, frame_number) { var exception = false; var codeSnippet = undefined; var resultPositions = undefined; + var step = 0; function listener(event, exec_state, event_data, data) { try { if (event == Debug.DebugEvent.Break || event == Debug.DebugEvent.Exception) { - Debug.setListener(null); + if (step++ > 0) return; assertHasLineMark(/pause/, exec_state.frame(0)); assertHasLineMark(/positions/, exec_state.frame(frame_number)); var frame = exec_state.frame(frame_number); diff --git a/deps/v8/test/cctest/test-cpu-x64.cc b/deps/v8/test/mjsunit/debug-toggle-mirror-cache.js similarity index 75% rename from deps/v8/test/cctest/test-cpu-x64.cc rename to deps/v8/test/mjsunit/debug-toggle-mirror-cache.js index a2c45cf8621..a44c11551e0 100644 --- a/deps/v8/test/cctest/test-cpu-x64.cc +++ b/deps/v8/test/mjsunit/debug-toggle-mirror-cache.js @@ -1,4 +1,4 @@ -// Copyright 2013 the V8 project authors. All rights reserved. +// Copyright 2014 the V8 project authors. All rights reserved. // Redistribution and use in source and binary forms, with or without // modification, are permitted provided that the following conditions are // met: @@ -25,20 +25,16 @@ // (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE // OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. -#include "v8.h" +// Flags: --expose-debug-as debug -#include "cctest.h" -#include "cpu.h" +var handle1 = debug.MakeMirror(123).handle(); +assertEquals("number", debug.LookupMirror(handle1).type()); -using namespace v8::internal; +debug.ToggleMirrorCache(false); +var handle2 = debug.MakeMirror(123).handle(); +assertEquals(undefined, handle2); +assertThrows(function() { debug.LookupMirror(handle2) }); - -TEST(RequiredFeaturesX64) { - // Test for the features required by every x64 CPU. - CPU cpu; - CHECK(cpu.has_fpu()); - CHECK(cpu.has_cmov()); - CHECK(cpu.has_mmx()); - CHECK(cpu.has_sse()); - CHECK(cpu.has_sse2()); -} +debug.ToggleMirrorCache(true); +var handle3 = debug.MakeMirror(123).handle(); +assertEquals("number", debug.LookupMirror(handle3).type()); diff --git a/deps/v8/test/mjsunit/define-property-gc.js b/deps/v8/test/mjsunit/define-property-gc.js index 573a7edbdcb..b130b164b1b 100644 --- a/deps/v8/test/mjsunit/define-property-gc.js +++ b/deps/v8/test/mjsunit/define-property-gc.js @@ -26,7 +26,7 @@ // OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. // Tests the handling of GC issues in the defineProperty method. 
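
Most of the debug-* tests touched by this patch share one skeleton: install
a listener, trip it, and always detach again with Debug.setListener(null) so
later code runs undisturbed. A minimal sketch, assuming d8 with
--expose-debug-as debug (variable names are illustrative):

    Debug = debug.Debug;
    var hits = 0;
    function listener(event, exec_state, event_data, data) {
      if (event == Debug.DebugEvent.Break) {
        hits++;  // exec_state.frame(0) is inspectable here
      }
    }
    Debug.setListener(listener);
    debugger;                 // raises a Break event synchronously
    Debug.setListener(null);  // detach, as the tests above are careful to do
    print(hits);              // 1
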
-// Flags: --max-new-space-size=2 +// Flags: --max-semi-space-size=1 function Regular() { this[0] = 0; diff --git a/deps/v8/test/mjsunit/deserialize-reference.js b/deps/v8/test/mjsunit/deserialize-reference.js new file mode 100644 index 00000000000..b0320131593 --- /dev/null +++ b/deps/v8/test/mjsunit/deserialize-reference.js @@ -0,0 +1,8 @@ +// Copyright 2014 the V8 project authors. All rights reserved. +// Use of this source code is governed by a BSD-style license that can be +// found in the LICENSE file. + +// Flags: --cache=code --serialize-toplevel + +var a = "123"; +assertEquals(a, "123"); diff --git a/deps/v8/test/mjsunit/dictionary-properties.js b/deps/v8/test/mjsunit/dictionary-properties.js new file mode 100644 index 00000000000..0659268bac2 --- /dev/null +++ b/deps/v8/test/mjsunit/dictionary-properties.js @@ -0,0 +1,48 @@ +// Copyright 2014 the V8 project authors. All rights reserved. +// Use of this source code is governed by a BSD-style license that can be +// found in the LICENSE file. + +// Flags: --allow-natives-syntax + +// Test loading existent and nonexistent properties from dictionary +// mode objects. + +function SlowObject() { + this.foo = 1; + this.bar = 2; + this.qux = 3; + delete this.qux; + assertFalse(%HasFastProperties(this)); +} +function SlowObjectWithBaz() { + var o = new SlowObject(); + o.baz = 4; + return o; +} + +function Load(o) { + return o.baz; +} + +for (var i = 0; i < 10; i++) { + var o1 = new SlowObject(); + var o2 = SlowObjectWithBaz(); + assertEquals(undefined, Load(o1)); + assertEquals(4, Load(o2)); +} + +// Test objects getting optimized as fast prototypes. + +function SlowPrototype() { + this.foo = 1; +} +SlowPrototype.prototype.bar = 2; +SlowPrototype.prototype.baz = 3; +delete SlowPrototype.prototype.baz; +new SlowPrototype; + +// Prototypes stay fast even after deleting properties. +assertTrue(%HasFastProperties(SlowPrototype.prototype)); +var fast_proto = new SlowPrototype(); +assertTrue(%HasFastProperties(SlowPrototype.prototype)); +assertTrue(%HasFastProperties(fast_proto.__proto__)); diff --git a/deps/v8/test/mjsunit/elements-kind-depends.js b/deps/v8/test/mjsunit/elements-kind-depends.js index 82f188b71e3..539fbd0e423 100644 --- a/deps/v8/test/mjsunit/elements-kind-depends.js +++ b/deps/v8/test/mjsunit/elements-kind-depends.js @@ -25,7 +25,7 @@ // (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE // OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. -// Flags: --allow-natives-syntax --smi-only-arrays +// Flags: --allow-natives-syntax function burn() { var a = new Array(3); diff --git a/deps/v8/test/mjsunit/elements-kind.js b/deps/v8/test/mjsunit/elements-kind.js index 3aa513a378e..64b4a094ff4 100644 --- a/deps/v8/test/mjsunit/elements-kind.js +++ b/deps/v8/test/mjsunit/elements-kind.js @@ -25,22 +25,7 @@ // (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE // OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. -// Flags: --allow-natives-syntax --smi-only-arrays --expose-gc --nostress-opt --typed-array-max_size_in-heap=2048 - -// Test element kind of objects. -// Since --smi-only-arrays affects builtins, its default setting at compile -// time sticks if built with snapshot. If --smi-only-arrays is deactivated -// by default, only a no-snapshot build actually has smi-only arrays enabled -// in this test case. Depending on whether smi-only arrays are actually -// enabled, this test takes the appropriate code path to check smi-only arrays. 
- -support_smi_only_arrays = %HasFastSmiElements(new Array(1,2,3,4,5,6,7,8)); - -if (support_smi_only_arrays) { - print("Tests include smi-only arrays."); -} else { - print("Tests do NOT include smi-only arrays."); -} +// Flags: --allow-natives-syntax --expose-gc --nostress-opt --typed-array-max_size_in-heap=2048 var elements_kind = { fast_smi_only : 'fast smi only elements', @@ -131,10 +116,6 @@ function getKind(obj) { } function assertKind(expected, obj, name_opt) { - if (!support_smi_only_arrays && - expected == elements_kind.fast_smi_only) { - expected = elements_kind.fast; - } assertEquals(expected, getKind(obj), name_opt); } @@ -144,13 +125,11 @@ me.dance = 0xD15C0; me.drink = 0xC0C0A; assertKind(elements_kind.fast, me); -if (support_smi_only_arrays) { - var too = [1,2,3]; - assertKind(elements_kind.fast_smi_only, too); - too.dance = 0xD15C0; - too.drink = 0xC0C0A; - assertKind(elements_kind.fast_smi_only, too); -} +var too = [1,2,3]; +assertKind(elements_kind.fast_smi_only, too); +too.dance = 0xD15C0; +too.drink = 0xC0C0A; +assertKind(elements_kind.fast_smi_only, too); // Make sure the element kind transitions from smi when a non-smi is stored. function test_wrapper() { @@ -166,7 +145,9 @@ function test_wrapper() { } assertKind(elements_kind.fast, you); - assertKind(elements_kind.dictionary, new Array(0xDECAF)); + var temp = []; + temp[0xDECAF] = 0; + assertKind(elements_kind.dictionary, temp); var fast_double_array = new Array(0xDECAF); for (var i = 0; i < 0xDECAF; i++) fast_double_array[i] = i / 2; @@ -217,111 +198,106 @@ function test_wrapper() { test_wrapper(); %ClearFunctionTypeFeedback(test_wrapper); -if (support_smi_only_arrays) { - %NeverOptimizeFunction(construct_smis); +%NeverOptimizeFunction(construct_smis); - // This code exists to eliminate the learning influence of AllocationSites - // on the following tests. - var __sequence = 0; - function make_array_string() { - this.__sequence = this.__sequence + 1; - return "/* " + this.__sequence + " */ [0, 0, 0];" - } - function make_array() { - return eval(make_array_string()); - } +// This code exists to eliminate the learning influence of AllocationSites +// on the following tests. +var __sequence = 0; +function make_array_string() { + this.__sequence = this.__sequence + 1; + return "/* " + this.__sequence + " */ [0, 0, 0];" +} +function make_array() { + return eval(make_array_string()); +} - function construct_smis() { - var a = make_array(); - a[0] = 0; // Send the COW array map to the steak house. - assertKind(elements_kind.fast_smi_only, a); - return a; - } +function construct_smis() { + var a = make_array(); + a[0] = 0; // Send the COW array map to the steak house. + assertKind(elements_kind.fast_smi_only, a); + return a; +} %NeverOptimizeFunction(construct_doubles); - function construct_doubles() { - var a = construct_smis(); - a[0] = 1.5; - assertKind(elements_kind.fast_double, a); - return a; - } +function construct_doubles() { + var a = construct_smis(); + a[0] = 1.5; + assertKind(elements_kind.fast_double, a); + return a; +} %NeverOptimizeFunction(construct_objects); - function construct_objects() { - var a = construct_smis(); - a[0] = "one"; - assertKind(elements_kind.fast, a); - return a; - } +function construct_objects() { + var a = construct_smis(); + a[0] = "one"; + assertKind(elements_kind.fast, a); + return a; +} - // Test crankshafted transition SMI->DOUBLE. +// Test crankshafted transition SMI->DOUBLE. 
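// [Editor's sketch -- not part of the patch. The make_array() helper above
// evals a textually unique literal on every call, so each array gets a fresh
// AllocationSite and feedback learned by one test cannot steer the elements
// kind chosen for the next. The trick in isolation:]
var seq = 0;
function freshArray() {
  // Unique source text per call => unique literal site => fresh feedback.
  return eval("/* " + (++seq) + " */ [0, 0, 0];");
}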
%NeverOptimizeFunction(convert_to_double); - function convert_to_double(array) { - array[1] = 2.5; - assertKind(elements_kind.fast_double, array); - assertEquals(2.5, array[1]); - } - var smis = construct_smis(); - for (var i = 0; i < 3; i++) convert_to_double(smis); +function convert_to_double(array) { + array[1] = 2.5; + assertKind(elements_kind.fast_double, array); + assertEquals(2.5, array[1]); +} +var smis = construct_smis(); +for (var i = 0; i < 3; i++) convert_to_double(smis); %OptimizeFunctionOnNextCall(convert_to_double); - smis = construct_smis(); - convert_to_double(smis); - // Test crankshafted transitions SMI->FAST and DOUBLE->FAST. +smis = construct_smis(); +convert_to_double(smis); +// Test crankshafted transitions SMI->FAST and DOUBLE->FAST. %NeverOptimizeFunction(convert_to_fast); - function convert_to_fast(array) { - array[1] = "two"; - assertKind(elements_kind.fast, array); - assertEquals("two", array[1]); - } - smis = construct_smis(); - for (var i = 0; i < 3; i++) convert_to_fast(smis); - var doubles = construct_doubles(); - for (var i = 0; i < 3; i++) convert_to_fast(doubles); - smis = construct_smis(); - doubles = construct_doubles(); +function convert_to_fast(array) { + array[1] = "two"; + assertKind(elements_kind.fast, array); + assertEquals("two", array[1]); +} +smis = construct_smis(); +for (var i = 0; i < 3; i++) convert_to_fast(smis); +var doubles = construct_doubles(); +for (var i = 0; i < 3; i++) convert_to_fast(doubles); +smis = construct_smis(); +doubles = construct_doubles(); %OptimizeFunctionOnNextCall(convert_to_fast); - convert_to_fast(smis); - convert_to_fast(doubles); - // Test transition chain SMI->DOUBLE->FAST (crankshafted function will - // transition to FAST directly). +convert_to_fast(smis); +convert_to_fast(doubles); +// Test transition chain SMI->DOUBLE->FAST (crankshafted function will +// transition to FAST directly). %NeverOptimizeFunction(convert_mixed); - function convert_mixed(array, value, kind) { - array[1] = value; - assertKind(kind, array); - assertEquals(value, array[1]); - } - smis = construct_smis(); - for (var i = 0; i < 3; i++) { - convert_mixed(smis, 1.5, elements_kind.fast_double); - } - doubles = construct_doubles(); - for (var i = 0; i < 3; i++) { - convert_mixed(doubles, "three", elements_kind.fast); - } - convert_mixed(construct_smis(), "three", elements_kind.fast); - convert_mixed(construct_doubles(), "three", elements_kind.fast); - %OptimizeFunctionOnNextCall(convert_mixed); - smis = construct_smis(); - doubles = construct_doubles(); - convert_mixed(smis, 1, elements_kind.fast); - convert_mixed(doubles, 1, elements_kind.fast); - assertTrue(%HaveSameMap(smis, doubles)); +function convert_mixed(array, value, kind) { + array[1] = value; + assertKind(kind, array); + assertEquals(value, array[1]); +} +smis = construct_smis(); +for (var i = 0; i < 3; i++) { + convert_mixed(smis, 1.5, elements_kind.fast_double); } +doubles = construct_doubles(); +for (var i = 0; i < 3; i++) { + convert_mixed(doubles, "three", elements_kind.fast); +} +convert_mixed(construct_smis(), "three", elements_kind.fast); +convert_mixed(construct_doubles(), "three", elements_kind.fast); + %OptimizeFunctionOnNextCall(convert_mixed); +smis = construct_smis(); +doubles = construct_doubles(); +convert_mixed(smis, 1, elements_kind.fast); +convert_mixed(doubles, 1, elements_kind.fast); +assertTrue(%HaveSameMap(smis, doubles)); // Crankshaft support for smi-only elements in dynamic array literals. 
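// [Editor's sketch -- not part of the patch. The convert_* tests above share
// one shape: warm a helper up unoptimized to collect type feedback, then
// force optimization and check the elements transition still happens in
// Crankshafted code. Assumes d8 with --allow-natives-syntax.]
function storeDouble(a) { a[1] = 2.5; }
for (var i = 0; i < 3; i++) storeDouble([1, 2, 3]);  // gather feedback
%OptimizeFunctionOnNextCall(storeDouble);
storeDouble([1, 2, 3]);  // optimized store must still transition SMI -> DOUBLE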
function get(foo) { return foo; } // Used to generate dynamic values. function crankshaft_test() { - if (support_smi_only_arrays) { - var a1 = [get(1), get(2), get(3)]; - assertKind(elements_kind.fast_smi_only, a1); - } + var a1 = [get(1), get(2), get(3)]; + assertKind(elements_kind.fast_smi_only, a1); + var a2 = new Array(get(1), get(2), get(3)); assertKind(elements_kind.fast_smi_only, a2); var b = [get(1), get(2), get("three")]; assertKind(elements_kind.fast, b); var c = [get(1), get(2), get(3.5)]; - if (support_smi_only_arrays) { - assertKind(elements_kind.fast_double, c); - } + assertKind(elements_kind.fast_double, c); } for (var i = 0; i < 3; i++) { crankshaft_test(); @@ -335,85 +311,76 @@ crankshaft_test(); // DOUBLE->OBJECT, and SMI->OBJECT. No matter in which order these three are // created, they must always end up with the same FAST map. -// This test is meaningless without FAST_SMI_ONLY_ELEMENTS. -if (support_smi_only_arrays) { - // Preparation: create one pair of identical objects for each case. - var a = [1, 2, 3]; - var b = [1, 2, 3]; - assertTrue(%HaveSameMap(a, b)); - assertKind(elements_kind.fast_smi_only, a); - var c = [1, 2, 3]; - c["case2"] = true; - var d = [1, 2, 3]; - d["case2"] = true; - assertTrue(%HaveSameMap(c, d)); - assertFalse(%HaveSameMap(a, c)); - assertKind(elements_kind.fast_smi_only, c); - var e = [1, 2, 3]; - e["case3"] = true; - var f = [1, 2, 3]; - f["case3"] = true; - assertTrue(%HaveSameMap(e, f)); - assertFalse(%HaveSameMap(a, e)); - assertFalse(%HaveSameMap(c, e)); - assertKind(elements_kind.fast_smi_only, e); - // Case 1: SMI->DOUBLE, DOUBLE->OBJECT, SMI->OBJECT. - a[0] = 1.5; - assertKind(elements_kind.fast_double, a); - a[0] = "foo"; - assertKind(elements_kind.fast, a); - b[0] = "bar"; - assertTrue(%HaveSameMap(a, b)); - // Case 2: SMI->DOUBLE, SMI->OBJECT, DOUBLE->OBJECT. - c[0] = 1.5; - assertKind(elements_kind.fast_double, c); - assertFalse(%HaveSameMap(c, d)); - d[0] = "foo"; - assertKind(elements_kind.fast, d); - assertFalse(%HaveSameMap(c, d)); - c[0] = "bar"; - assertTrue(%HaveSameMap(c, d)); - // Case 3: SMI->OBJECT, SMI->DOUBLE, DOUBLE->OBJECT. - e[0] = "foo"; - assertKind(elements_kind.fast, e); - assertFalse(%HaveSameMap(e, f)); - f[0] = 1.5; - assertKind(elements_kind.fast_double, f); - assertFalse(%HaveSameMap(e, f)); - f[0] = "bar"; - assertKind(elements_kind.fast, f); - assertTrue(%HaveSameMap(e, f)); -} +// Preparation: create one pair of identical objects for each case. +var a = [1, 2, 3]; +var b = [1, 2, 3]; +assertTrue(%HaveSameMap(a, b)); +assertKind(elements_kind.fast_smi_only, a); +var c = [1, 2, 3]; +c["case2"] = true; +var d = [1, 2, 3]; +d["case2"] = true; +assertTrue(%HaveSameMap(c, d)); +assertFalse(%HaveSameMap(a, c)); +assertKind(elements_kind.fast_smi_only, c); +var e = [1, 2, 3]; +e["case3"] = true; +var f = [1, 2, 3]; +f["case3"] = true; +assertTrue(%HaveSameMap(e, f)); +assertFalse(%HaveSameMap(a, e)); +assertFalse(%HaveSameMap(c, e)); +assertKind(elements_kind.fast_smi_only, e); +// Case 1: SMI->DOUBLE, DOUBLE->OBJECT, SMI->OBJECT. +a[0] = 1.5; +assertKind(elements_kind.fast_double, a); +a[0] = "foo"; +assertKind(elements_kind.fast, a); +b[0] = "bar"; +assertTrue(%HaveSameMap(a, b)); +// Case 2: SMI->DOUBLE, SMI->OBJECT, DOUBLE->OBJECT. +c[0] = 1.5; +assertKind(elements_kind.fast_double, c); +assertFalse(%HaveSameMap(c, d)); +d[0] = "foo"; +assertKind(elements_kind.fast, d); +assertFalse(%HaveSameMap(c, d)); +c[0] = "bar"; +assertTrue(%HaveSameMap(c, d)); +// Case 3: SMI->OBJECT, SMI->DOUBLE, DOUBLE->OBJECT. 
+e[0] = "foo"; +assertKind(elements_kind.fast, e); +assertFalse(%HaveSameMap(e, f)); +f[0] = 1.5; +assertKind(elements_kind.fast_double, f); +assertFalse(%HaveSameMap(e, f)); +f[0] = "bar"; +assertKind(elements_kind.fast, f); +assertTrue(%HaveSameMap(e, f)); // Test if Array.concat() works correctly with DOUBLE elements. -if (support_smi_only_arrays) { - var a = [1, 2]; - assertKind(elements_kind.fast_smi_only, a); - var b = [4.5, 5.5]; - assertKind(elements_kind.fast_double, b); - var c = a.concat(b); - assertEquals([1, 2, 4.5, 5.5], c); - assertKind(elements_kind.fast_double, c); -} +var a = [1, 2]; +assertKind(elements_kind.fast_smi_only, a); +var b = [4.5, 5.5]; +assertKind(elements_kind.fast_double, b); +var c = a.concat(b); +assertEquals([1, 2, 4.5, 5.5], c); +assertKind(elements_kind.fast_double, c); // Test that Array.push() correctly handles SMI elements. -if (support_smi_only_arrays) { - var a = [1, 2]; - assertKind(elements_kind.fast_smi_only, a); - a.push(3, 4, 5); - assertKind(elements_kind.fast_smi_only, a); - assertEquals([1, 2, 3, 4, 5], a); -} +var a = [1, 2]; +assertKind(elements_kind.fast_smi_only, a); +a.push(3, 4, 5); +assertKind(elements_kind.fast_smi_only, a); +assertEquals([1, 2, 3, 4, 5], a); // Test that Array.splice() and Array.slice() return correct ElementsKinds. -if (support_smi_only_arrays) { - var a = ["foo", "bar"]; - assertKind(elements_kind.fast, a); - var b = a.splice(0, 1); - assertKind(elements_kind.fast, b); - var c = a.slice(0, 1); - assertKind(elements_kind.fast, c); -} +var a = ["foo", "bar"]; +assertKind(elements_kind.fast, a); +var b = a.splice(0, 1); +assertKind(elements_kind.fast, b); +var c = a.slice(0, 1); +assertKind(elements_kind.fast, c); // Throw away type information in the ICs for next stress run. gc(); diff --git a/deps/v8/test/mjsunit/elements-transition-hoisting.js b/deps/v8/test/mjsunit/elements-transition-hoisting.js index 76027b9ed18..9f229d2e17b 100644 --- a/deps/v8/test/mjsunit/elements-transition-hoisting.js +++ b/deps/v8/test/mjsunit/elements-transition-hoisting.js @@ -25,21 +25,13 @@ // (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE // OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. -// Flags: --allow-natives-syntax --smi-only-arrays +// Flags: --allow-natives-syntax // Flags: --nostress-opt // Ensure that ElementsKind transitions in various situations are hoisted (or // not hoisted) correctly, don't change the semantics programs and don't trigger // deopt through hoisting in important situations. -support_smi_only_arrays = %HasFastSmiElements(new Array(1,2,3,4,5,6)); - -if (support_smi_only_arrays) { - print("Tests include smi-only arrays."); -} else { - print("Tests do NOT include smi-only arrays."); -} - function test_wrapper() { // Make sure that a simple elements array transitions inside a loop before // stores to an array gets hoisted in a way that doesn't generate a deopt in @@ -238,9 +230,7 @@ function test_wrapper() { %ClearFunctionTypeFeedback(testStraightLineDupeElinination); } -if (support_smi_only_arrays) { - // The test is called in a test wrapper that has type feedback cleared to - // prevent the influence of allocation-sites, which learn from transitions. - test_wrapper(); - %ClearFunctionTypeFeedback(test_wrapper); -} +// The test is called in a test wrapper that has type feedback cleared to +// prevent the influence of allocation-sites, which learn from transitions. 
+test_wrapper(); +%ClearFunctionTypeFeedback(test_wrapper); diff --git a/deps/v8/test/mjsunit/elements-transition.js b/deps/v8/test/mjsunit/elements-transition.js index 7298e68a12c..f6a8188e2f5 100644 --- a/deps/v8/test/mjsunit/elements-transition.js +++ b/deps/v8/test/mjsunit/elements-transition.js @@ -25,107 +25,95 @@ // (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE // OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. -// Flags: --allow-natives-syntax --smi-only-arrays +// Flags: --allow-natives-syntax // Flags: --nostress-opt -support_smi_only_arrays = %HasFastSmiElements(new Array(1,2,3,4,5,6,7,8)); - -if (support_smi_only_arrays) { - print("Tests include smi-only arrays."); -} else { - print("Tests do NOT include smi-only arrays."); +// This code exists to eliminate the learning influence of AllocationSites +// on the following tests. +var __sequence = 0; +function make_array_string(length) { + this.__sequence = this.__sequence + 1; + return "/* " + this.__sequence + " */ new Array(" + length + ");"; +} +function make_array(length) { + return eval(make_array_string(length)); } -if (support_smi_only_arrays) { - // This code exists to eliminate the learning influence of AllocationSites - // on the following tests. - var __sequence = 0; - function make_array_string(length) { - this.__sequence = this.__sequence + 1; - return "/* " + this.__sequence + " */ new Array(" + length + ");"; - } - function make_array(length) { - return eval(make_array_string(length)); - } - - function test(test_double, test_object, set, length) { - // We apply the same operations to two identical arrays. The first array - // triggers an IC miss, upon which the conversion stub is generated, but the - // actual conversion is done in runtime. The second array, arriving at - // the previously patched IC, is then converted using the conversion stub. - var array_1 = make_array(length); - var array_2 = make_array(length); +function test(test_double, test_object, set, length) { + // We apply the same operations to two identical arrays. The first array + // triggers an IC miss, upon which the conversion stub is generated, but the + // actual conversion is done in runtime. The second array, arriving at + // the previously patched IC, is then converted using the conversion stub. + var array_1 = make_array(length); + var array_2 = make_array(length); - // false, true, nice setter function, 20 - assertTrue(%HasFastSmiElements(array_1)); - assertTrue(%HasFastSmiElements(array_2)); - for (var i = 0; i < length; i++) { - if (i == length - 5 && test_double) { - // Trigger conversion to fast double elements at length-5. - set(array_1, i, 0.5); - set(array_2, i, 0.5); - assertTrue(%HasFastDoubleElements(array_1)); - assertTrue(%HasFastDoubleElements(array_2)); - } else if (i == length - 3 && test_object) { - // Trigger conversion to fast object elements at length-3. - set(array_1, i, 'object'); - set(array_2, i, 'object'); - assertTrue(%HasFastObjectElements(array_1)); - assertTrue(%HasFastObjectElements(array_2)); - } else if (i != length - 7) { - // Set the element to an integer but leave a hole at length-7. - set(array_1, i, 2*i+1); - set(array_2, i, 2*i+1); - } + // false, true, nice setter function, 20 + assertTrue(%HasFastSmiElements(array_1)); + assertTrue(%HasFastSmiElements(array_2)); + for (var i = 0; i < length; i++) { + if (i == length - 5 && test_double) { + // Trigger conversion to fast double elements at length-5. 
+ set(array_1, i, 0.5); + set(array_2, i, 0.5); + assertTrue(%HasFastDoubleElements(array_1)); + assertTrue(%HasFastDoubleElements(array_2)); + } else if (i == length - 3 && test_object) { + // Trigger conversion to fast object elements at length-3. + set(array_1, i, 'object'); + set(array_2, i, 'object'); + assertTrue(%HasFastObjectElements(array_1)); + assertTrue(%HasFastObjectElements(array_2)); + } else if (i != length - 7) { + // Set the element to an integer but leave a hole at length-7. + set(array_1, i, 2*i+1); + set(array_2, i, 2*i+1); } + } - for (var i = 0; i < length; i++) { - if (i == length - 5 && test_double) { - assertEquals(0.5, array_1[i]); - assertEquals(0.5, array_2[i]); - } else if (i == length - 3 && test_object) { - assertEquals('object', array_1[i]); - assertEquals('object', array_2[i]); - } else if (i != length - 7) { - assertEquals(2*i+1, array_1[i]); - assertEquals(2*i+1, array_2[i]); - } else { - assertEquals(undefined, array_1[i]); - assertEquals(undefined, array_2[i]); - } + for (var i = 0; i < length; i++) { + if (i == length - 5 && test_double) { + assertEquals(0.5, array_1[i]); + assertEquals(0.5, array_2[i]); + } else if (i == length - 3 && test_object) { + assertEquals('object', array_1[i]); + assertEquals('object', array_2[i]); + } else if (i != length - 7) { + assertEquals(2*i+1, array_1[i]); + assertEquals(2*i+1, array_2[i]); + } else { + assertEquals(undefined, array_1[i]); + assertEquals(undefined, array_2[i]); } - - assertEquals(length, array_1.length); - assertEquals(length, array_2.length); } - function run_test(test_double, test_object, set, length) { - test(test_double, test_object, set, length); + assertEquals(length, array_1.length); + assertEquals(length, array_2.length); +} + +function run_test(test_double, test_object, set, length) { + test(test_double, test_object, set, length); %ClearFunctionTypeFeedback(test); - } +} - run_test(false, false, function(a,i,v){ a[i] = v; }, 20); - run_test(true, false, function(a,i,v){ a[i] = v; }, 20); - run_test(false, true, function(a,i,v){ a[i] = v; }, 20); - run_test(true, true, function(a,i,v){ a[i] = v; }, 20); +run_test(false, false, function(a,i,v){ a[i] = v; }, 20); +run_test(true, false, function(a,i,v){ a[i] = v; }, 20); +run_test(false, true, function(a,i,v){ a[i] = v; }, 20); +run_test(true, true, function(a,i,v){ a[i] = v; }, 20); - run_test(false, false, function(a,i,v){ a[i] = v; }, 10000); - run_test(true, false, function(a,i,v){ a[i] = v; }, 10000); - run_test(false, true, function(a,i,v){ a[i] = v; }, 10000); - run_test(true, true, function(a,i,v){ a[i] = v; }, 10000); +run_test(false, false, function(a,i,v){ a[i] = v; }, 10000); +run_test(true, false, function(a,i,v){ a[i] = v; }, 10000); +run_test(false, true, function(a,i,v){ a[i] = v; }, 10000); +run_test(true, true, function(a,i,v){ a[i] = v; }, 10000); - // Check COW arrays - function get_cow() { return [1, 2, 3]; } +// Check COW arrays +function get_cow() { return [1, 2, 3]; } - function transition(x) { x[0] = 1.5; } +function transition(x) { x[0] = 1.5; } - var ignore = get_cow(); - transition(ignore); // Handled by runtime. - var a = get_cow(); - var b = get_cow(); - transition(a); // Handled by IC. - assertEquals(1.5, a[0]); - assertEquals(1, b[0]); -} else { - print("Test skipped because smi only arrays are not supported."); -} +var ignore = get_cow(); +transition(ignore); // Handled by runtime. +var a = get_cow(); +var b = get_cow(); +transition(a); // Handled by IC. 
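// [Editor's sketch -- not part of the patch. The assertions that follow rely
// on copy-on-write element stores: arrays produced by the same literal site
// share one COW backing store, so a transitioning write must copy first and
// the sibling array keeps its original smi value.]
function lit() { return [1, 2, 3]; }  // same literal site on every call
var x = lit();
var y = lit();
x[0] = 1.5;              // copies the shared COW store, then transitions x
assertEquals(1.5, x[0]);
assertEquals(1, y[0]);   // y is untouched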
+assertEquals(1.5, a[0]); +assertEquals(1, b[0]); diff --git a/deps/v8/test/mjsunit/error-tostring-omit.js b/deps/v8/test/mjsunit/error-tostring-omit.js index 111adfc2121..9ff43fa9b2e 100644 --- a/deps/v8/test/mjsunit/error-tostring-omit.js +++ b/deps/v8/test/mjsunit/error-tostring-omit.js @@ -37,23 +37,15 @@ function veryLongString() { "Nam accumsan dignissim turpis a turpis duis."; } +assertTrue(veryLongString().length > 256); -var re = /omitted/; +var re = /...<omitted>.../; try { - veryLongString.nonexistentMethod(); + Number.prototype.toFixed.call(veryLongString); } catch (e) { - assertTrue(e.message.length < 350); - // TODO(verwaest): Proper error message. - // assertTrue(re.test(e.message)); -} - -try { - veryLongString().nonexistentMethod(); -} catch (e) { - assertTrue(e.message.length < 350); - // TODO(verwaest): Proper error message. - // assertTrue(re.test(e.message)); + assertTrue(e.message.length < 256); + assertTrue(re.test(e.message)); } try { diff --git a/deps/v8/test/mjsunit/harmony/array-iterator.js b/deps/v8/test/mjsunit/es6/array-iterator.js similarity index 73% rename from deps/v8/test/mjsunit/harmony/array-iterator.js rename to deps/v8/test/mjsunit/es6/array-iterator.js index 6a402e73939..63a7415b966 100644 --- a/deps/v8/test/mjsunit/harmony/array-iterator.js +++ b/deps/v8/test/mjsunit/es6/array-iterator.js @@ -25,23 +25,40 @@ // (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE // OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. -// Flags: --harmony-iteration --allow-natives-syntax +// Flags: --allow-natives-syntax + + +var NONE = 0; +var READ_ONLY = 1; +var DONT_ENUM = 2; +var DONT_DELETE = 4; + + +function assertHasOwnProperty(object, name, attrs) { + assertTrue(object.hasOwnProperty(name)); + var desc = Object.getOwnPropertyDescriptor(object, name); + assertEquals(desc.writable, !(attrs & READ_ONLY)); + assertEquals(desc.enumerable, !(attrs & DONT_ENUM)); + assertEquals(desc.configurable, !(attrs & DONT_DELETE)); +} + function TestArrayPrototype() { - assertTrue(Array.prototype.hasOwnProperty('entries')); - assertTrue(Array.prototype.hasOwnProperty('values')); - assertTrue(Array.prototype.hasOwnProperty('keys')); + assertHasOwnProperty(Array.prototype, 'entries', DONT_ENUM); + assertHasOwnProperty(Array.prototype, 'values', DONT_ENUM); + assertHasOwnProperty(Array.prototype, 'keys', DONT_ENUM); + assertHasOwnProperty(Array.prototype, Symbol.iterator, DONT_ENUM); - assertFalse(Array.prototype.propertyIsEnumerable('entries')); - assertFalse(Array.prototype.propertyIsEnumerable('values')); - assertFalse(Array.prototype.propertyIsEnumerable('keys')); + assertEquals(Array.prototype.values, Array.prototype[Symbol.iterator]); } TestArrayPrototype(); + function assertIteratorResult(value, done, result) { assertEquals({value: value, done: done}, result); } + function TestValues() { var array = ['a', 'b', 'c']; var iterator = array.values(); @@ -55,6 +72,7 @@ function TestValues() { } TestValues(); + function TestValuesMutate() { var array = ['a', 'b', 'c']; var iterator = array.values(); @@ -67,6 +85,7 @@ function TestValuesMutate() { } TestValuesMutate(); + function TestKeys() { var array = ['a', 'b', 'c']; var iterator = array.keys(); @@ -80,6 +99,7 @@ function TestKeys() { } TestKeys(); + function TestKeysMutate() { var array = ['a', 'b', 'c']; var iterator = array.keys(); @@ -92,6 +112,7 @@ function TestKeysMutate() { } TestKeysMutate(); + function TestEntries() { var array = ['a', 'b', 'c']; var iterator = array.entries(); @@ 
-105,6 +126,7 @@ function TestEntries() {
 }
 TestEntries();
+
 function TestEntriesMutate() {
   var array = ['a', 'b', 'c'];
   var iterator = array.entries();
@@ -117,29 +139,32 @@ function TestEntriesMutate() {
 }
 TestEntriesMutate();
+
 function TestArrayIteratorPrototype() {
   var array = [];
   var iterator = array.values();
-  var ArrayIterator = iterator.constructor;
-  assertEquals(ArrayIterator.prototype, array.values().__proto__);
-  assertEquals(ArrayIterator.prototype, array.keys().__proto__);
-  assertEquals(ArrayIterator.prototype, array.entries().__proto__);
+  var ArrayIteratorPrototype = iterator.__proto__;
+
+  assertEquals(ArrayIteratorPrototype, array.values().__proto__);
+  assertEquals(ArrayIteratorPrototype, array.keys().__proto__);
+  assertEquals(ArrayIteratorPrototype, array.entries().__proto__);
-  assertEquals(Object.prototype, ArrayIterator.prototype.__proto__);
+  assertEquals(Object.prototype, ArrayIteratorPrototype.__proto__);
   assertEquals('Array Iterator', %_ClassOf(array.values()));
   assertEquals('Array Iterator', %_ClassOf(array.keys()));
   assertEquals('Array Iterator', %_ClassOf(array.entries()));
-  var prototypeDescriptor =
-      Object.getOwnPropertyDescriptor(ArrayIterator, 'prototype');
-  assertFalse(prototypeDescriptor.configurable);
-  assertFalse(prototypeDescriptor.enumerable);
-  assertFalse(prototypeDescriptor.writable);
+  assertFalse(ArrayIteratorPrototype.hasOwnProperty('constructor'));
+  assertArrayEquals(['next'],
+      Object.getOwnPropertyNames(ArrayIteratorPrototype));
+  assertHasOwnProperty(ArrayIteratorPrototype, 'next', DONT_ENUM);
+  assertHasOwnProperty(ArrayIteratorPrototype, Symbol.iterator, DONT_ENUM);
 }
 TestArrayIteratorPrototype();
+
 function TestForArrayValues() {
   var buffer = [];
   var array = [0, 'a', true, false, null, /* hole */, undefined, NaN];
@@ -151,12 +176,13 @@ function TestForArrayValues() {
   assertEquals(8, buffer.length);
   for (var i = 0; i < buffer.length - 1; i++) {
-    assertEquals(array[i], buffer[i]);
+    assertSame(array[i], buffer[i]);
   }
   assertTrue(isNaN(buffer[buffer.length - 1]));
 }
 TestForArrayValues();
+
 function TestForArrayKeys() {
   var buffer = [];
   var array = [0, 'a', true, false, null, /* hole */, undefined, NaN];
@@ -173,6 +199,7 @@ function TestForArrayKeys() {
 }
 TestForArrayKeys();
+
 function TestForArrayEntries() {
   var buffer = [];
   var array = [0, 'a', true, false, null, /* hole */, undefined, NaN];
@@ -184,7 +211,7 @@ function TestForArrayEntries() {
   assertEquals(8, buffer.length);
   for (var i = 0; i < buffer.length - 1; i++) {
-    assertEquals(array[i], buffer[i][1]);
+    assertSame(array[i], buffer[i][1]);
   }
   assertTrue(isNaN(buffer[buffer.length - 1][1]));
@@ -193,3 +220,33 @@ function TestForArrayEntries() {
   }
 }
 TestForArrayEntries();
+
+
+function TestForArray() {
+  var buffer = [];
+  var array = [0, 'a', true, false, null, /* hole */, undefined, NaN];
+  var i = 0;
+  for (var value of array) {
+    buffer[i++] = value;
+  }
+
+  assertEquals(8, buffer.length);
+
+  for (var i = 0; i < buffer.length - 1; i++) {
+    assertSame(array[i], buffer[i]);
+  }
+  assertTrue(isNaN(buffer[buffer.length - 1]));
+}
+TestForArray();
+
+
+function TestNonOwnSlots() {
+  var array = [0];
+  var iterator = array.values();
+  var object = {__proto__: iterator};
+
+  assertThrows(function() {
+    object.next();
+  }, TypeError);
+}
+TestNonOwnSlots();
diff --git a/deps/v8/test/mjsunit/es6/collection-iterator.js b/deps/v8/test/mjsunit/es6/collection-iterator.js
new file mode 100644
index 00000000000..5503fe58c0d
--- /dev/null
+++
b/deps/v8/test/mjsunit/es6/collection-iterator.js @@ -0,0 +1,200 @@ +// Copyright 2014 the V8 project authors. All rights reserved. +// Use of this source code is governed by a BSD-style license that can be +// found in the LICENSE file. + +// Flags: --allow-natives-syntax + + +(function TestSetIterator() { + var s = new Set; + var iter = s.values(); + assertEquals('Set Iterator', %_ClassOf(iter)); + + var SetIteratorPrototype = iter.__proto__; + assertFalse(SetIteratorPrototype.hasOwnProperty('constructor')); + assertEquals(SetIteratorPrototype.__proto__, Object.prototype); + + var propertyNames = Object.getOwnPropertyNames(SetIteratorPrototype); + assertArrayEquals(['next'], propertyNames); + + assertEquals(new Set().values().__proto__, SetIteratorPrototype); + assertEquals(new Set().entries().__proto__, SetIteratorPrototype); +})(); + + +(function TestSetIteratorValues() { + var s = new Set; + s.add(1); + s.add(2); + s.add(3); + var iter = s.values(); + + assertEquals({value: 1, done: false}, iter.next()); + assertEquals({value: 2, done: false}, iter.next()); + assertEquals({value: 3, done: false}, iter.next()); + assertEquals({value: undefined, done: true}, iter.next()); + assertEquals({value: undefined, done: true}, iter.next()); +})(); + + +(function TestSetIteratorKeys() { + assertEquals(Set.prototype.keys, Set.prototype.values); +})(); + + +(function TestSetIteratorEntries() { + var s = new Set; + s.add(1); + s.add(2); + s.add(3); + var iter = s.entries(); + + assertEquals({value: [1, 1], done: false}, iter.next()); + assertEquals({value: [2, 2], done: false}, iter.next()); + assertEquals({value: [3, 3], done: false}, iter.next()); + assertEquals({value: undefined, done: true}, iter.next()); + assertEquals({value: undefined, done: true}, iter.next()); +})(); + + +(function TestSetIteratorMutations() { + var s = new Set; + s.add(1); + var iter = s.values(); + assertEquals({value: 1, done: false}, iter.next()); + s.add(2); + s.add(3); + s.add(4); + s.add(5); + assertEquals({value: 2, done: false}, iter.next()); + s.delete(3); + assertEquals({value: 4, done: false}, iter.next()); + s.delete(5); + assertEquals({value: undefined, done: true}, iter.next()); + s.add(4); + assertEquals({value: undefined, done: true}, iter.next()); +})(); + + +(function TestSetInvalidReceiver() { + assertThrows(function() { + Set.prototype.values.call({}); + }, TypeError); + assertThrows(function() { + Set.prototype.entries.call({}); + }, TypeError); +})(); + + +(function TestSetIteratorInvalidReceiver() { + var iter = new Set().values(); + assertThrows(function() { + iter.next.call({}); + }); +})(); + + +(function TestSetIteratorSymbol() { + assertEquals(Set.prototype[Symbol.iterator], Set.prototype.values); + assertTrue(Set.prototype.hasOwnProperty(Symbol.iterator)); + assertFalse(Set.prototype.propertyIsEnumerable(Symbol.iterator)); + + var iter = new Set().values(); + assertEquals(iter, iter[Symbol.iterator]()); + assertEquals(iter[Symbol.iterator].name, '[Symbol.iterator]'); +})(); + + +(function TestMapIterator() { + var m = new Map; + var iter = m.values(); + assertEquals('Map Iterator', %_ClassOf(iter)); + + var MapIteratorPrototype = iter.__proto__; + assertFalse(MapIteratorPrototype.hasOwnProperty('constructor')); + assertEquals(MapIteratorPrototype.__proto__, Object.prototype); + + var propertyNames = Object.getOwnPropertyNames(MapIteratorPrototype); + assertArrayEquals(['next'], propertyNames); + + assertEquals(new Map().values().__proto__, MapIteratorPrototype); + assertEquals(new 
Map().keys().__proto__, MapIteratorPrototype); + assertEquals(new Map().entries().__proto__, MapIteratorPrototype); +})(); + + +(function TestMapIteratorValues() { + var m = new Map; + m.set(1, 11); + m.set(2, 22); + m.set(3, 33); + var iter = m.values(); + + assertEquals({value: 11, done: false}, iter.next()); + assertEquals({value: 22, done: false}, iter.next()); + assertEquals({value: 33, done: false}, iter.next()); + assertEquals({value: undefined, done: true}, iter.next()); + assertEquals({value: undefined, done: true}, iter.next()); +})(); + + +(function TestMapIteratorKeys() { + var m = new Map; + m.set(1, 11); + m.set(2, 22); + m.set(3, 33); + var iter = m.keys(); + + assertEquals({value: 1, done: false}, iter.next()); + assertEquals({value: 2, done: false}, iter.next()); + assertEquals({value: 3, done: false}, iter.next()); + assertEquals({value: undefined, done: true}, iter.next()); + assertEquals({value: undefined, done: true}, iter.next()); +})(); + + +(function TestMapIteratorEntries() { + var m = new Map; + m.set(1, 11); + m.set(2, 22); + m.set(3, 33); + var iter = m.entries(); + + assertEquals({value: [1, 11], done: false}, iter.next()); + assertEquals({value: [2, 22], done: false}, iter.next()); + assertEquals({value: [3, 33], done: false}, iter.next()); + assertEquals({value: undefined, done: true}, iter.next()); + assertEquals({value: undefined, done: true}, iter.next()); +})(); + + +(function TestMapInvalidReceiver() { + assertThrows(function() { + Map.prototype.values.call({}); + }, TypeError); + assertThrows(function() { + Map.prototype.keys.call({}); + }, TypeError); + assertThrows(function() { + Map.prototype.entries.call({}); + }, TypeError); +})(); + + +(function TestMapIteratorInvalidReceiver() { + var iter = new Map().values(); + assertThrows(function() { + iter.next.call({}); + }, TypeError); +})(); + + +(function TestMapIteratorSymbol() { + assertEquals(Map.prototype[Symbol.iterator], Map.prototype.entries); + assertTrue(Map.prototype.hasOwnProperty(Symbol.iterator)); + assertFalse(Map.prototype.propertyIsEnumerable(Symbol.iterator)); + + var iter = new Map().values(); + assertEquals(iter, iter[Symbol.iterator]()); + assertEquals(iter[Symbol.iterator].name, '[Symbol.iterator]'); +})(); diff --git a/deps/v8/test/mjsunit/harmony/collections.js b/deps/v8/test/mjsunit/es6/collections.js similarity index 68% rename from deps/v8/test/mjsunit/harmony/collections.js rename to deps/v8/test/mjsunit/es6/collections.js index 7bf7bf70639..1e2f232ee81 100644 --- a/deps/v8/test/mjsunit/harmony/collections.js +++ b/deps/v8/test/mjsunit/es6/collections.js @@ -25,10 +25,16 @@ // (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE // OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. 
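// [Editor's sketch -- not part of the patch. The headline behavioral change
// tested in this file: per ES6, Set.prototype.add and Map.prototype.set now
// return the receiver instead of undefined, so construction chains.]
var s = new Set().add(1).add(2).add(3);
assertEquals(3, s.size);
var m = new Map().set("a", 1).set("b", 2);
assertEquals(2, m.size);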
-// Flags: --harmony-collections // Flags: --expose-gc --allow-natives-syntax +function assertSize(expected, collection) { + if (collection instanceof Map || collection instanceof Set) { + assertEquals(expected, collection.size); + } +} + + // Test valid getter and setter calls on Sets and WeakSets function TestValidSetCalls(m) { assertDoesNotThrow(function () { m.add(new Object) }); @@ -67,7 +73,7 @@ TestInvalidCalls(new WeakMap); // Test expected behavior for Sets and WeakSets function TestSet(set, key) { assertFalse(set.has(key)); - assertSame(undefined, set.add(key)); + assertSame(set, set.add(key)); assertTrue(set.has(key)); assertTrue(set.delete(key)); assertFalse(set.has(key)); @@ -92,7 +98,7 @@ TestSet(new WeakSet, new Object); // Test expected mapping behavior for Maps and WeakMaps function TestMapping(map, key, value) { - assertSame(undefined, map.set(key, value)); + assertSame(map, map.set(key, value)); assertSame(value, map.get(key)); } function TestMapBehavior1(m) { @@ -290,20 +296,19 @@ assertEquals("WeakSet", WeakSet.name); // Test prototype property of Set, Map, WeakMap and WeakSet. -// TODO(2793): Should all be non-writable, and the extra flag removed. -function TestPrototype(C, writable) { +function TestPrototype(C) { assertTrue(C.prototype instanceof Object); assertEquals({ value: {}, - writable: writable, + writable: false, enumerable: false, configurable: false }, Object.getOwnPropertyDescriptor(C, "prototype")); } -TestPrototype(Set, true); -TestPrototype(Map, true); -TestPrototype(WeakMap, false); -TestPrototype(WeakSet, false); +TestPrototype(Set); +TestPrototype(Map); +TestPrototype(WeakMap); +TestPrototype(WeakSet); // Test constructor property of the Set, Map, WeakMap and WeakSet prototype. @@ -311,6 +316,7 @@ function TestConstructor(C) { assertFalse(C === Object.prototype.constructor); assertSame(C, C.prototype.constructor); assertSame(C, (new C).__proto__.constructor); + assertEquals(1, C.length); } TestConstructor(Set); TestConstructor(Map); @@ -852,3 +858,431 @@ for (var i = 9; i >= 0; i--) { }); assertEquals(4950, accumulated); })(); + + +(function TestMapForEachAllRemovedTransition() { + var map = new Map; + map.set(0, 0); + + var buffer = []; + map.forEach(function(v) { + buffer.push(v); + if (v === 0) { + for (var i = 1; i < 4; i++) { + map.set(i, i); + } + } + + if (v === 3) { + for (var i = 0; i < 4; i++) { + map.delete(i); + } + for (var i = 4; i < 8; i++) { + map.set(i, i); + } + } + }); + + assertArrayEquals([0, 1, 2, 3, 4, 5, 6, 7], buffer); +})(); + + +(function TestMapForEachClearTransition() { + var map = new Map; + map.set(0, 0); + + var i = 0; + var buffer = []; + map.forEach(function(v) { + buffer.push(v); + if (++i < 5) { + for (var j = 0; j < 5; j++) { + map.clear(); + map.set(i, i); + } + } + }); + + assertArrayEquals([0, 1, 2, 3, 4], buffer); +})(); + + +(function TestMapForEachNestedNonTrivialTransition() { + var map = new Map; + map.set(0, 0); + map.set(1, 1); + map.set(2, 2); + map.set(3, 3); + map.delete(0); + + var i = 0; + var buffer = []; + map.forEach(function(v) { + if (++i > 10) return; + + buffer.push(v); + + if (v == 3) { + map.delete(1); + for (var j = 4; j < 10; j++) { + map.set(j, j); + } + for (var j = 4; j < 10; j += 2) { + map.delete(j); + } + map.delete(2); + + for (var j = 10; j < 20; j++) { + map.set(j, j); + } + for (var j = 10; j < 20; j += 2) { + map.delete(j); + } + + map.delete(3); + } + }); + + assertArrayEquals([1, 2, 3, 5, 7, 9, 11, 13, 15, 17], buffer); +})(); + + +(function 
TestMapForEachAllRemovedTransitionNoClear() { + var map = new Map; + map.set(0, 0); + + var buffer = []; + map.forEach(function(v) { + buffer.push(v); + if (v === 0) { + for (var i = 1; i < 8; i++) { + map.set(i, i); + } + } + + if (v === 4) { + for (var i = 0; i < 8; i++) { + map.delete(i); + } + } + }); + + assertArrayEquals([0, 1, 2, 3, 4], buffer); +})(); + + +(function TestMapForEachNoMoreElementsAfterTransition() { + var map = new Map; + map.set(0, 0); + + var buffer = []; + map.forEach(function(v) { + buffer.push(v); + if (v === 0) { + for (var i = 1; i < 16; i++) { + map.set(i, i); + } + } + + if (v === 4) { + for (var i = 5; i < 16; i++) { + map.delete(i); + } + } + }); + + assertArrayEquals([0, 1, 2, 3, 4], buffer); +})(); + + +// Allows testing iterator-based constructors easily. +var oneAndTwo = new Map(); +var k0 = {key: 0}; +var k1 = {key: 1}; +var k2 = {key: 2}; +oneAndTwo.set(k1, 1); +oneAndTwo.set(k2, 2); + + +function TestSetConstructor(ctor) { + var s = new ctor(null); + assertSize(0, s); + + s = new ctor(undefined); + assertSize(0, s); + + // No @@iterator + assertThrows(function() { + new ctor({}); + }, TypeError); + + // @@iterator not callable + assertThrows(function() { + var object = {}; + object[Symbol.iterator] = 42; + new ctor(object); + }, TypeError); + + // @@iterator result not object + assertThrows(function() { + var object = {}; + object[Symbol.iterator] = function() { + return 42; + }; + new ctor(object); + }, TypeError); + + var s2 = new Set(); + s2.add(k0); + s2.add(k1); + s2.add(k2); + s = new ctor(s2.values()); + assertSize(3, s); + assertTrue(s.has(k0)); + assertTrue(s.has(k1)); + assertTrue(s.has(k2)); +} +TestSetConstructor(Set); +TestSetConstructor(WeakSet); + + +function TestSetConstructorAddNotCallable(ctor) { + var originalPrototypeAdd = ctor.prototype.add; + assertThrows(function() { + ctor.prototype.add = 42; + new ctor(oneAndTwo.values()); + }, TypeError); + ctor.prototype.add = originalPrototypeAdd; +} +TestSetConstructorAddNotCallable(Set); +TestSetConstructorAddNotCallable(WeakSet); + + +function TestSetConstructorGetAddOnce(ctor) { + var originalPrototypeAdd = ctor.prototype.add; + var getAddCount = 0; + Object.defineProperty(ctor.prototype, 'add', { + get: function() { + getAddCount++; + return function() {}; + } + }); + var s = new ctor(oneAndTwo.values()); + assertEquals(1, getAddCount); + assertSize(0, s); + Object.defineProperty(ctor.prototype, 'add', { + value: originalPrototypeAdd, + writable: true + }); +} +TestSetConstructorGetAddOnce(Set); +TestSetConstructorGetAddOnce(WeakSet); + + +function TestSetConstructorAddReplaced(ctor) { + var originalPrototypeAdd = ctor.prototype.add; + var addCount = 0; + ctor.prototype.add = function(value) { + addCount++; + originalPrototypeAdd.call(this, value); + ctor.prototype.add = null; + }; + var s = new ctor(oneAndTwo.keys()); + assertEquals(2, addCount); + assertSize(2, s); + ctor.prototype.add = originalPrototypeAdd; +} +TestSetConstructorAddReplaced(Set); +TestSetConstructorAddReplaced(WeakSet); + + +function TestSetConstructorOrderOfDoneValue(ctor) { + var valueCount = 0, doneCount = 0; + var iterator = { + next: function() { + return { + get value() { + valueCount++; + }, + get done() { + doneCount++; + throw new Error(); + } + }; + } + }; + iterator[Symbol.iterator] = function() { + return this; + }; + assertThrows(function() { + new ctor(iterator); + }); + assertEquals(1, doneCount); + assertEquals(0, valueCount); +} +TestSetConstructorOrderOfDoneValue(Set); 
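// [Editor's sketch -- not part of the patch; the WeakSet run of the test
// above follows. The ordering probe generalizes to any ES6 iterable
// consumer: `done` must be read before `value` on each iterator result, so
// a result reporting done never has its `value` getter touched.]
var reads = [];
var probeIter = {
  next: function() {
    return {
      get done()  { reads.push("done"); return true; },
      get value() { reads.push("value"); }
    };
  }
};
probeIter[Symbol.iterator] = function() { return this; };
new Set(probeIter);
assertEquals(["done"], reads);  // stopped on done; value never read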
+TestSetConstructorOrderOfDoneValue(WeakSet); + + +function TestSetConstructorNextNotAnObject(ctor) { + var iterator = { + next: function() { + return 'abc'; + } + }; + iterator[Symbol.iterator] = function() { + return this; + }; + assertThrows(function() { + new ctor(iterator); + }, TypeError); +} +TestSetConstructorNextNotAnObject(Set); +TestSetConstructorNextNotAnObject(WeakSet); + + +function TestMapConstructor(ctor) { + var m = new ctor(null); + assertSize(0, m); + + m = new ctor(undefined); + assertSize(0, m); + + // No @@iterator + assertThrows(function() { + new ctor({}); + }, TypeError); + + // @@iterator not callable + assertThrows(function() { + var object = {}; + object[Symbol.iterator] = 42; + new ctor(object); + }, TypeError); + + // @@iterator result not object + assertThrows(function() { + var object = {}; + object[Symbol.iterator] = function() { + return 42; + }; + new ctor(object); + }, TypeError); + + var m2 = new Map(); + m2.set(k0, 'a'); + m2.set(k1, 'b'); + m2.set(k2, 'c'); + m = new ctor(m2.entries()); + assertSize(3, m); + assertEquals('a', m.get(k0)); + assertEquals('b', m.get(k1)); + assertEquals('c', m.get(k2)); +} +TestMapConstructor(Map); +TestMapConstructor(WeakMap); + + +function TestMapConstructorSetNotCallable(ctor) { + var originalPrototypeSet = ctor.prototype.set; + assertThrows(function() { + ctor.prototype.set = 42; + new ctor(oneAndTwo.entries()); + }, TypeError); + ctor.prototype.set = originalPrototypeSet; +} +TestMapConstructorSetNotCallable(Map); +TestMapConstructorSetNotCallable(WeakMap); + + +function TestMapConstructorGetAddOnce(ctor) { + var originalPrototypeSet = ctor.prototype.set; + var getSetCount = 0; + Object.defineProperty(ctor.prototype, 'set', { + get: function() { + getSetCount++; + return function() {}; + } + }); + var m = new ctor(oneAndTwo.entries()); + assertEquals(1, getSetCount); + assertSize(0, m); + Object.defineProperty(ctor.prototype, 'set', { + value: originalPrototypeSet, + writable: true + }); +} +TestMapConstructorGetAddOnce(Map); +TestMapConstructorGetAddOnce(WeakMap); + + +function TestMapConstructorSetReplaced(ctor) { + var originalPrototypeSet = ctor.prototype.set; + var setCount = 0; + ctor.prototype.set = function(key, value) { + setCount++; + originalPrototypeSet.call(this, key, value); + ctor.prototype.set = null; + }; + var m = new ctor(oneAndTwo.entries()); + assertEquals(2, setCount); + assertSize(2, m); + ctor.prototype.set = originalPrototypeSet; +} +TestMapConstructorSetReplaced(Map); +TestMapConstructorSetReplaced(WeakMap); + + +function TestMapConstructorOrderOfDoneValue(ctor) { + var valueCount = 0, doneCount = 0; + function FakeError() {} + var iterator = { + next: function() { + return { + get value() { + valueCount++; + }, + get done() { + doneCount++; + throw new FakeError(); + } + }; + } + }; + iterator[Symbol.iterator] = function() { + return this; + }; + assertThrows(function() { + new ctor(iterator); + }, FakeError); + assertEquals(1, doneCount); + assertEquals(0, valueCount); +} +TestMapConstructorOrderOfDoneValue(Map); +TestMapConstructorOrderOfDoneValue(WeakMap); + + +function TestMapConstructorNextNotAnObject(ctor) { + var iterator = { + next: function() { + return 'abc'; + } + }; + iterator[Symbol.iterator] = function() { + return this; + }; + assertThrows(function() { + new ctor(iterator); + }, TypeError); +} +TestMapConstructorNextNotAnObject(Map); +TestMapConstructorNextNotAnObject(WeakMap); + + +function TestMapConstructorIteratorNotObjectValues(ctor) { + assertThrows(function() { + new 
ctor(oneAndTwo.values()); + }, TypeError); +} +TestMapConstructorIteratorNotObjectValues(Map); +TestMapConstructorIteratorNotObjectValues(WeakMap); diff --git a/deps/v8/test/mjsunit/es6/debug-promises-throw-in-reject.js b/deps/v8/test/mjsunit/es6/debug-promises-throw-in-reject.js deleted file mode 100644 index cdf759606c6..00000000000 --- a/deps/v8/test/mjsunit/es6/debug-promises-throw-in-reject.js +++ /dev/null @@ -1,61 +0,0 @@ -// Copyright 2014 the V8 project authors. All rights reserved. -// Use of this source code is governed by a BSD-style license that can be -// found in the LICENSE file. - -// Flags: --harmony-promises --expose-debug-as debug - -// Test debug events when an exception is thrown inside a Promise, which is -// caught by a custom promise, which throws a new exception in its reject -// handler. We expect an Exception debug event with a promise to be triggered. - -Debug = debug.Debug; - -var log = []; -var step = 0; - -var p = new Promise(function(resolve, reject) { - log.push("resolve"); - resolve(); -}); - -function MyPromise(resolver) { - var reject = function() { - log.push("throw reject"); - throw new Error("reject"); // event - }; - var resolve = function() { }; - log.push("construct"); - resolver(resolve, reject); -}; - -MyPromise.prototype = p; -p.constructor = MyPromise; - -var q = p.chain( - function() { - log.push("throw caught"); - throw new Error("caught"); - }); - -function listener(event, exec_state, event_data, data) { - try { - if (event == Debug.DebugEvent.Exception) { - assertEquals(["resolve", "construct", "end main", - "throw caught", "throw reject"], log); - assertEquals("reject", event_data.exception().message); - assertEquals(q, event_data.promise()); - assertTrue(exec_state.frame(0).sourceLineText().indexOf('// event') > 0); - } - } catch (e) { - // Signal a failure with exit code 1. This is necessary since the - // debugger swallows exceptions and we expect the chained function - // and this listener to be executed after the main script is finished. - print("Unexpected exception: " + e + "\n" + e.stack); - quit(1); - } -} - -Debug.setBreakOnUncaughtException(); -Debug.setListener(listener); - -log.push("end main"); diff --git a/deps/v8/test/mjsunit/es6/debug-promises-undefined-reject.js b/deps/v8/test/mjsunit/es6/debug-promises-undefined-reject.js deleted file mode 100644 index 5bad5bd3705..00000000000 --- a/deps/v8/test/mjsunit/es6/debug-promises-undefined-reject.js +++ /dev/null @@ -1,57 +0,0 @@ -// Copyright 2014 the V8 project authors. All rights reserved. -// Use of this source code is governed by a BSD-style license that can be -// found in the LICENSE file. - -// Flags: --harmony-promises --expose-debug-as debug - -// Test debug events when an exception is thrown inside a Promise, which is -// caught by a custom promise, which has no reject handler. -// We expect an Exception event with a promise to be triggered. 
- -Debug = debug.Debug; - -var log = []; -var step = 0; - -var p = new Promise(function(resolve, reject) { - log.push("resolve"); - resolve(); -}); - -function MyPromise(resolver) { - var reject = undefined; - var resolve = function() { }; - log.push("construct"); - resolver(resolve, reject); -}; - -MyPromise.prototype = p; -p.constructor = MyPromise; - -var q = p.chain( - function() { - log.push("throw caught"); - throw new Error("caught"); // event - }); - -function listener(event, exec_state, event_data, data) { - try { - if (event == Debug.DebugEvent.Exception) { - assertEquals(["resolve", "construct", "end main", "throw caught"], log); - assertEquals("undefined is not a function", - event_data.exception().message); - assertEquals(q, event_data.promise()); - } - } catch (e) { - // Signal a failure with exit code 1. This is necessary since the - // debugger swallows exceptions and we expect the chained function - // and this listener to be executed after the main script is finished. - print("Unexpected exception: " + e + "\n" + e.stack); - quit(1); - } -} - -Debug.setBreakOnUncaughtException(); -Debug.setListener(listener); - -log.push("end main"); diff --git a/deps/v8/test/mjsunit/es6/debug-promises/async-task-event.js b/deps/v8/test/mjsunit/es6/debug-promises/async-task-event.js new file mode 100644 index 00000000000..88030a2e73c --- /dev/null +++ b/deps/v8/test/mjsunit/es6/debug-promises/async-task-event.js @@ -0,0 +1,61 @@ +// Copyright 2014 the V8 project authors. All rights reserved. +// Use of this source code is governed by a BSD-style license that can be +// found in the LICENSE file. + +// Flags: --expose-debug-as debug + +Debug = debug.Debug; + +var base_id = -1; +var exception = null; +var expected = [ + "enqueue #1", + "willHandle #1", + "then #1", + "enqueue #2", + "didHandle #1", + "willHandle #2", + "then #2", + "enqueue #3", + "didHandle #2", + "willHandle #3", + "didHandle #3" +]; + +function assertLog(msg) { + print(msg); + assertTrue(expected.length > 0); + assertEquals(expected.shift(), msg); + if (!expected.length) { + Debug.setListener(null); + } +} + +function listener(event, exec_state, event_data, data) { + if (event != Debug.DebugEvent.AsyncTaskEvent) return; + try { + if (base_id < 0) + base_id = event_data.id(); + var id = event_data.id() - base_id + 1; + assertEquals("Promise.resolve", event_data.name()); + assertLog(event_data.type() + " #" + id); + } catch (e) { + print(e + e.stack) + exception = e; + } +} + +Debug.setListener(listener); + +var resolver; +var p = new Promise(function(resolve, reject) { + resolver = resolve; +}); +p.then(function() { + assertLog("then #1"); +}).then(function() { + assertLog("then #2"); +}); +resolver(); + +assertNull(exception); diff --git a/deps/v8/test/mjsunit/es6/debug-promises/events.js b/deps/v8/test/mjsunit/es6/debug-promises/events.js new file mode 100644 index 00000000000..a9f94543f40 --- /dev/null +++ b/deps/v8/test/mjsunit/es6/debug-promises/events.js @@ -0,0 +1,124 @@ +// Copyright 2014 the V8 project authors. All rights reserved. +// Use of this source code is governed by a BSD-style license that can be +// found in the LICENSE file. 
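// [Editor's sketch -- not part of the patch. async-task-event.js above
// observes the new AsyncTaskEvent stream: each promise reaction is reported
// as enqueue -> willHandle -> didHandle, with id() correlating the phases
// and name() identifying the task source. Assumes d8 with
// --expose-debug-as debug.]
Debug = debug.Debug;
Debug.setListener(function(event, exec_state, event_data, data) {
  if (event != Debug.DebugEvent.AsyncTaskEvent) return;
  print(event_data.type() + " " + event_data.name() + " #" + event_data.id());
});
Promise.resolve().then(function() {});  // logs enqueue/willHandle/didHandle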
+ +// Flags: --allow-natives-syntax --expose-debug-as debug + +Debug = debug.Debug; + +var eventsExpected = 16; +var exception = null; +var result = []; + +function updatePromise(promise, parentPromise, status, value) { + var i; + for (i = 0; i < result.length; ++i) { + if (result[i].promise === promise) { + result[i].parentPromise = parentPromise || result[i].parentPromise; + result[i].status = status || result[i].status; + result[i].value = value || result[i].value; + break; + } + } + assertTrue(i < result.length); +} + +function listener(event, exec_state, event_data, data) { + if (event != Debug.DebugEvent.PromiseEvent) return; + try { + eventsExpected--; + assertTrue(event_data.promise().isPromise()); + if (event_data.status() === 0) { + // New promise. + assertEquals("pending", event_data.promise().status()); + result.push({ promise: event_data.promise().value(), status: 0 }); + assertTrue(exec_state.frame(0).sourceLineText().indexOf("// event") > 0); + } else if (event_data.status() !== undefined) { + // Resolve/reject promise. + updatePromise(event_data.promise().value(), + undefined, + event_data.status(), + event_data.value().value()); + } else { + // Chain promises. + assertTrue(event_data.parentPromise().isPromise()); + updatePromise(event_data.promise().value(), + event_data.parentPromise().value()); + assertTrue(exec_state.frame(0).sourceLineText().indexOf("// event") > 0); + } + } catch (e) { + print(e + e.stack) + exception = e; + } +} + +Debug.setListener(listener); + +function resolver(resolve, reject) { resolve(); } + +var p1 = new Promise(resolver); // event +var p2 = p1.then().then(); // event +var p3 = new Promise(function(resolve, reject) { // event + reject("rejected"); +}); +var p4 = p3.then(); // event +var p5 = p1.then(); // event + +function assertAsync(b, s) { + if (b) { + print(s, "succeeded"); + } else { + %AbortJS(s + " FAILED!"); + } +} + +function testDone(iteration) { + function checkResult() { + if (eventsExpected === 0) { + assertAsync(result.length === 6, "result.length"); + + assertAsync(result[0].promise === p1, "result[0].promise"); + assertAsync(result[0].parentPromise === undefined, + "result[0].parentPromise"); + assertAsync(result[0].status === 1, "result[0].status"); + assertAsync(result[0].value === undefined, "result[0].value"); + + assertAsync(result[1].parentPromise === p1, + "result[1].parentPromise"); + assertAsync(result[1].status === 1, "result[1].status"); + + assertAsync(result[2].promise === p2, "result[2].promise"); + + assertAsync(result[3].promise === p3, "result[3].promise"); + assertAsync(result[3].parentPromise === undefined, + "result[3].parentPromise"); + assertAsync(result[3].status === -1, "result[3].status"); + assertAsync(result[3].value === "rejected", "result[3].value"); + + assertAsync(result[4].promise === p4, "result[4].promise"); + assertAsync(result[4].parentPromise === p3, + "result[4].parentPromise"); + assertAsync(result[4].status === -1, "result[4].status"); + assertAsync(result[4].value === "rejected", "result[4].value"); + + assertAsync(result[5].promise === p5, "result[5].promise"); + assertAsync(result[5].parentPromise === p1, + "result[5].parentPromise"); + assertAsync(result[5].status === 1, "result[5].status"); + + assertAsync(exception === null, "exception === null"); + Debug.setListener(null); + } else if (iteration > 10) { + %AbortJS("Not all events were received!"); + } else { + testDone(iteration + 1); + } + } + + var iteration = iteration || 0; + var dummy = {}; + Object.observe(dummy, 
checkResult); + dummy.dummy = dummy; +} + +testDone(); diff --git a/deps/v8/test/mjsunit/es6/debug-promises-reentry.js b/deps/v8/test/mjsunit/es6/debug-promises/reentry.js similarity index 90% rename from deps/v8/test/mjsunit/es6/debug-promises-reentry.js rename to deps/v8/test/mjsunit/es6/debug-promises/reentry.js index 03c7fc2c86f..fbe54242dd8 100644 --- a/deps/v8/test/mjsunit/es6/debug-promises-reentry.js +++ b/deps/v8/test/mjsunit/es6/debug-promises/reentry.js @@ -2,7 +2,7 @@ // Use of this source code is governed by a BSD-style license that can be // found in the LICENSE file. -// Flags: --harmony-promises --expose-debug-as debug +// Flags: --expose-debug-as debug // Test reentry of special try catch for Promises. diff --git a/deps/v8/test/mjsunit/es6/debug-promises/reject-after-resolve.js b/deps/v8/test/mjsunit/es6/debug-promises/reject-after-resolve.js new file mode 100644 index 00000000000..a0036cfd0f3 --- /dev/null +++ b/deps/v8/test/mjsunit/es6/debug-promises/reject-after-resolve.js @@ -0,0 +1,37 @@ +// Copyright 2014 the V8 project authors. All rights reserved. +// Use of this source code is governed by a BSD-style license that can be +// found in the LICENSE file. + +// Flags: --expose-debug-as debug --allow-natives-syntax + +// Test debug events when we listen to uncaught exceptions and +// the Promise is rejected in a chained closure after it has been resolved. +// We expect no Exception debug event to be triggered. + +Debug = debug.Debug; + +var log = []; + +var p = new Promise(function(resolve, reject) { + log.push("resolve"); + resolve(reject); +}); + +var q = p.chain( + function(value) { + assertEquals(["resolve", "end main"], log); + value(new Error("reject")); + }); + +function listener(event, exec_state, event_data, data) { + try { + assertTrue(event != Debug.DebugEvent.Exception); + } catch (e) { + %AbortJS(e + "\n" + e.stack); + } +} + +Debug.setBreakOnException(); +Debug.setListener(listener); + +log.push("end main"); diff --git a/deps/v8/test/mjsunit/es6/debug-promises/reject-caught-all.js b/deps/v8/test/mjsunit/es6/debug-promises/reject-caught-all.js new file mode 100644 index 00000000000..0fca57730a1 --- /dev/null +++ b/deps/v8/test/mjsunit/es6/debug-promises/reject-caught-all.js @@ -0,0 +1,72 @@ +// Copyright 2014 the V8 project authors. All rights reserved. +// Use of this source code is governed by a BSD-style license that can be +// found in the LICENSE file. + +// Flags: --expose-debug-as debug --allow-natives-syntax + +// Test debug events when we listen to all exceptions and +// there is a catch handler for the to-be-rejected Promise. +// We expect a normal Exception debug event to be triggered. 
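// [Editor's sketch -- not part of the patch. These debug-promises tests pick
// one of two break modes: setBreakOnException() raises an Exception event
// for every throw or rejection, with event_data.uncaught() telling handled
// from unhandled, while setBreakOnUncaughtException() fires only when
// nothing -- not even a Promise catch handler -- will take the rejection.]
Debug = debug.Debug;
Debug.setListener(function(event, exec_state, event_data, data) {
  if (event != Debug.DebugEvent.Exception) return;
  print(event_data.exception().message + " uncaught=" + event_data.uncaught());
});
Debug.setBreakOnException();  // or: Debug.setBreakOnUncaughtException();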
+ +Debug = debug.Debug; + +var log = []; +var expected_events = 1; + +var p = new Promise(function(resolve, reject) { + log.push("resolve"); + resolve(); +}); + +var q = p.chain( + function(value) { + log.push("reject"); + return Promise.reject(new Error("reject")); + }); + +q.catch( + function(e) { + assertEquals("reject", e.message); + }); + + +function listener(event, exec_state, event_data, data) { + try { + if (event == Debug.DebugEvent.Exception) { + expected_events--; + assertTrue(expected_events >= 0); + assertEquals("reject", event_data.exception().message); + assertSame(q, event_data.promise()); + assertFalse(event_data.uncaught()); + } + } catch (e) { + %AbortJS(e + "\n" + e.stack); + } +} + +Debug.setBreakOnException(); +Debug.setListener(listener); + +log.push("end main"); + +function testDone(iteration) { + function checkResult() { + try { + assertTrue(iteration < 10); + if (expected_events === 0) { + assertEquals(["resolve", "end main", "reject"], log); + } else { + testDone(iteration + 1); + } + } catch (e) { + %AbortJS(e + "\n" + e.stack); + } + } + + // Run testDone through the Object.observe processing loop. + var dummy = {}; + Object.observe(dummy, checkResult); + dummy.dummy = dummy; +} + +testDone(0); diff --git a/deps/v8/test/mjsunit/es6/debug-promises/reject-caught-late.js b/deps/v8/test/mjsunit/es6/debug-promises/reject-caught-late.js new file mode 100644 index 00000000000..2ff13d5605c --- /dev/null +++ b/deps/v8/test/mjsunit/es6/debug-promises/reject-caught-late.js @@ -0,0 +1,34 @@ +// Copyright 2014 the V8 project authors. All rights reserved. +// Use of this source code is governed by a BSD-style license that can be +// found in the LICENSE file. + +// Flags: --expose-debug-as debug --allow-natives-syntax + +// Test debug events when we only listen to uncaught exceptions, the Promise +// is rejected, and a catch handler is installed right before the rejection. +// We expect no debug event to be triggered. + +Debug = debug.Debug; + +var p = new Promise(function(resolve, reject) { + resolve(); +}); + +var q = p.chain( + function() { + q.catch(function(e) { + assertEquals("caught", e.message); + }); + return Promise.reject(Error("caught")); + }); + +function listener(event, exec_state, event_data, data) { + try { + assertTrue(event != Debug.DebugEvent.Exception); + } catch (e) { + %AbortJS(e + "\n" + e.stack); + } +} + +Debug.setBreakOnUncaughtException(); +Debug.setListener(listener); diff --git a/deps/v8/test/mjsunit/es6/debug-promises/reject-caught-uncaught.js b/deps/v8/test/mjsunit/es6/debug-promises/reject-caught-uncaught.js new file mode 100644 index 00000000000..d3fd9f3ae7a --- /dev/null +++ b/deps/v8/test/mjsunit/es6/debug-promises/reject-caught-uncaught.js @@ -0,0 +1,36 @@ +// Copyright 2014 the V8 project authors. All rights reserved. +// Use of this source code is governed by a BSD-style license that can be +// found in the LICENSE file. + +// Flags: --expose-debug-as debug --allow-natives-syntax + +// Test debug events when we only listen to uncaught exceptions and +// there is a catch handler for the to-be-rejected Promise. +// We expect no debug event to be triggered. 
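
// [Editor's aside -- illustration only, not part of the patch.] Across these
// files, "caught" simply means some reject handler is registered on the
// promise by the time the rejection is processed; event_data.uncaught() then
// reports false, and setBreakOnUncaughtException() suppresses the event
// entirely. A minimal sketch, assuming a shell run with --expose-debug-as
// debug (r0 is a hypothetical name):
Debug = debug.Debug;
Debug.setListener(function(event, exec_state, event_data, data) {
  if (event == Debug.DebugEvent.Exception) {
    print("rejection, uncaught=" + event_data.uncaught());  // false here
  }
});
Debug.setBreakOnException();
var r0 = Promise.reject(new Error("boom"));
r0.catch(function(e) {});  // registering a handler makes the rejection "caught"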
+
+Debug = debug.Debug;
+
+var p = new Promise(function(resolve, reject) {
+  resolve();
+});
+
+var q = p.chain(
+  function() {
+    return Promise.reject(Error("caught reject"));
+  });
+
+q.catch(
+  function(e) {
+    assertEquals("caught reject", e.message);
+  });
+
+function listener(event, exec_state, event_data, data) {
+  try {
+    assertTrue(event != Debug.DebugEvent.Exception);
+  } catch (e) {
+    %AbortJS(e + "\n" + e.stack);
+  }
+}
+
+Debug.setBreakOnUncaughtException();
+Debug.setListener(listener);
diff --git a/deps/v8/test/mjsunit/es6/debug-promises/reject-in-constructor.js b/deps/v8/test/mjsunit/es6/debug-promises/reject-in-constructor.js
new file mode 100644
index 00000000000..a05b3ac5d65
--- /dev/null
+++ b/deps/v8/test/mjsunit/es6/debug-promises/reject-in-constructor.js
@@ -0,0 +1,39 @@
+// Copyright 2014 the V8 project authors. All rights reserved.
+// Use of this source code is governed by a BSD-style license that can be
+// found in the LICENSE file.
+
+// Flags: --expose-debug-as debug
+
+// Test debug events when we only listen to uncaught exceptions and
+// the Promise is rejected in the Promise constructor.
+// We expect an Exception debug event with a promise to be triggered.
+
+Debug = debug.Debug;
+
+var steps = 0;
+var exception = null;
+
+function listener(event, exec_state, event_data, data) {
+  try {
+    if (event == Debug.DebugEvent.Exception) {
+      steps++;
+      assertEquals("uncaught", event_data.exception().message);
+      assertTrue(event_data.promise() instanceof Promise);
+      assertTrue(event_data.uncaught());
+      // Assert that the debug event is triggered at the throw site.
+      assertTrue(exec_state.frame(0).sourceLineText().indexOf("// event") > 0);
+    }
+  } catch (e) {
+    exception = e;
+  }
+}
+
+Debug.setBreakOnUncaughtException();
+Debug.setListener(listener);
+
+var p = new Promise(function(resolve, reject) {
+  reject(new Error("uncaught")); // event
+});
+
+assertEquals(1, steps);
+assertNull(exception);
diff --git a/deps/v8/test/mjsunit/es6/debug-promises/reject-uncaught-all.js b/deps/v8/test/mjsunit/es6/debug-promises/reject-uncaught-all.js
new file mode 100644
index 00000000000..beaf1878fef
--- /dev/null
+++ b/deps/v8/test/mjsunit/es6/debug-promises/reject-uncaught-all.js
@@ -0,0 +1,69 @@
+// Copyright 2014 the V8 project authors. All rights reserved.
+// Use of this source code is governed by a BSD-style license that can be
+// found in the LICENSE file.
+
+// Flags: --expose-debug-as debug --allow-natives-syntax
+
+// Test debug events when we listen to all exceptions and
+// there is no catch handler for the to-be-rejected Promise.
+// We expect an Exception debug event with a promise to be triggered.
+
+Debug = debug.Debug;
+
+var expected_events = 1;
+var log = [];
+
+var p = new Promise(function(resolve, reject) {
+  log.push("resolve");
+  resolve();
+});
+
+var q = p.chain(
+  function() {
+    log.push("reject");
+    return Promise.reject(new Error("uncaught reject"));
+  });
+
+function listener(event, exec_state, event_data, data) {
+  try {
+    if (event == Debug.DebugEvent.Exception) {
+      expected_events--;
+      assertTrue(expected_events >= 0);
+      assertEquals("uncaught reject", event_data.exception().message);
+      assertTrue(event_data.promise() instanceof Promise);
+      assertSame(q, event_data.promise());
+      assertTrue(event_data.uncaught());
+      // All of the frames on the stack are from native Javascript.
+      assertEquals(0, exec_state.frameCount());
+    }
+  } catch (e) {
+    %AbortJS(e + "\n" + e.stack);
+  }
+}
+
+Debug.setBreakOnException();
+Debug.setListener(listener);
+
+log.push("end main");
+
+function testDone(iteration) {
+  function checkResult() {
+    try {
+      assertTrue(iteration < 10);
+      if (expected_events === 0) {
+        assertEquals(["resolve", "end main", "reject"], log);
+      } else {
+        testDone(iteration + 1);
+      }
+    } catch (e) {
+      %AbortJS(e + "\n" + e.stack);
+    }
+  }
+
+  // Run testDone through the Object.observe processing loop.
+  var dummy = {};
+  Object.observe(dummy, checkResult);
+  dummy.dummy = dummy;
+}
+
+testDone(0);
diff --git a/deps/v8/test/mjsunit/es6/debug-promises/reject-uncaught-late.js b/deps/v8/test/mjsunit/es6/debug-promises/reject-uncaught-late.js
new file mode 100644
index 00000000000..4a883da13a7
--- /dev/null
+++ b/deps/v8/test/mjsunit/es6/debug-promises/reject-uncaught-late.js
@@ -0,0 +1,76 @@
+// Copyright 2014 the V8 project authors. All rights reserved.
+// Use of this source code is governed by a BSD-style license that can be
+// found in the LICENSE file.
+
+// Flags: --expose-debug-as debug --allow-natives-syntax
+
+// Test debug events when we only listen to uncaught exceptions and
+// there is no catch handler for the Promise, which is rejected late.
+// We expect an Exception debug event with a promise to be triggered.
+
+Debug = debug.Debug;
+
+var expected_events = 1;
+var log = [];
+
+var reject_closure;
+
+var p = new Promise(function(resolve, reject) {
+  log.push("postpone p");
+  reject_closure = reject;
+});
+
+var q = new Promise(function(resolve, reject) {
+  log.push("resolve q");
+  resolve();
+});
+
+q.then(function() {
+  log.push("reject p");
+  reject_closure(new Error("uncaught reject p")); // event
+})
+
+
+function listener(event, exec_state, event_data, data) {
+  try {
+    if (event == Debug.DebugEvent.Exception) {
+      expected_events--;
+      assertTrue(expected_events >= 0);
+      assertEquals("uncaught reject p", event_data.exception().message);
+      assertTrue(event_data.promise() instanceof Promise);
+      assertSame(p, event_data.promise());
+      assertTrue(event_data.uncaught());
+      // Assert that the debug event is triggered at the throw site.
+      assertTrue(exec_state.frame(0).sourceLineText().indexOf("// event") > 0);
+    }
+  } catch (e) {
+    %AbortJS(e + "\n" + e.stack);
+  }
+}
+
+Debug.setBreakOnUncaughtException();
+Debug.setListener(listener);
+
+log.push("end main");
+
+function testDone(iteration) {
+  function checkResult() {
+    try {
+      assertTrue(iteration < 10);
+      if (expected_events === 0) {
+        assertEquals(["postpone p", "resolve q", "end main", "reject p"], log);
+      } else {
+        testDone(iteration + 1);
+      }
+    } catch (e) {
+      %AbortJS(e + "\n" + e.stack);
+    }
+  }
+
+  // Run testDone through the Object.observe processing loop.
+  var dummy = {};
+  Object.observe(dummy, checkResult);
+  dummy.dummy = dummy;
+}
+
+testDone(0);
diff --git a/deps/v8/test/mjsunit/es6/debug-promises/reject-uncaught-uncaught.js b/deps/v8/test/mjsunit/es6/debug-promises/reject-uncaught-uncaught.js
new file mode 100644
index 00000000000..86e2a815e72
--- /dev/null
+++ b/deps/v8/test/mjsunit/es6/debug-promises/reject-uncaught-uncaught.js
@@ -0,0 +1,69 @@
+// Copyright 2014 the V8 project authors. All rights reserved.
+// Use of this source code is governed by a BSD-style license that can be
+// found in the LICENSE file.
+
+// Flags: --expose-debug-as debug --allow-natives-syntax
+
+// Test debug events when we only listen to uncaught exceptions and
+// there is no catch handler for the to-be-rejected Promise.
+// We expect an Exception debug event with a promise to be triggered.
+
+Debug = debug.Debug;
+
+var expected_events = 1;
+var log = [];
+
+var p = new Promise(function(resolve, reject) {
+  log.push("resolve");
+  resolve();
+});
+
+var q = p.chain(
+  function() {
+    log.push("reject");
+    return Promise.reject(Error("uncaught reject")); // event
+  });
+
+function listener(event, exec_state, event_data, data) {
+  try {
+    if (event == Debug.DebugEvent.Exception) {
+      expected_events--;
+      assertTrue(expected_events >= 0);
+      assertEquals("uncaught reject", event_data.exception().message);
+      assertTrue(event_data.promise() instanceof Promise);
+      assertSame(q, event_data.promise());
+      assertTrue(event_data.uncaught());
+      // All of the frames on the stack are from native Javascript.
+      assertEquals(0, exec_state.frameCount());
+    }
+  } catch (e) {
+    %AbortJS(e + "\n" + e.stack);
+  }
+}
+
+Debug.setBreakOnUncaughtException();
+Debug.setListener(listener);
+
+log.push("end main");
+
+function testDone(iteration) {
+  function checkResult() {
+    try {
+      assertTrue(iteration < 10);
+      if (expected_events === 0) {
+        assertEquals(["resolve", "end main", "reject"], log);
+      } else {
+        testDone(iteration + 1);
+      }
+    } catch (e) {
+      %AbortJS(e + "\n" + e.stack);
+    }
+  }
+
+  // Run testDone through the Object.observe processing loop.
+  var dummy = {};
+  Object.observe(dummy, checkResult);
+  dummy.dummy = dummy;
+}
+
+testDone(0);
diff --git a/deps/v8/test/mjsunit/es6/debug-promises/reject-with-invalid-reject.js b/deps/v8/test/mjsunit/es6/debug-promises/reject-with-invalid-reject.js
new file mode 100644
index 00000000000..fc6233da8d1
--- /dev/null
+++ b/deps/v8/test/mjsunit/es6/debug-promises/reject-with-invalid-reject.js
@@ -0,0 +1,77 @@
+// Copyright 2014 the V8 project authors. All rights reserved.
+// Use of this source code is governed by a BSD-style license that can be
+// found in the LICENSE file.
+
+// Flags: --expose-debug-as debug --allow-natives-syntax
+
+// Test debug events when a Promise is rejected, which is caught by a custom
+// promise, which has a number for its reject closure. We expect an Exception
+// debug event when trying to call the invalid reject closure.
+
+Debug = debug.Debug;
+
+var expected_events = 1;
+var log = [];
+
+var p = new Promise(function(resolve, reject) {
+  log.push("resolve");
+  resolve();
+});
+
+function MyPromise(resolver) {
+  var reject = 1;
+  var resolve = function() { };
+  log.push("construct");
+  resolver(resolve, reject);
+};
+
+MyPromise.prototype = new Promise(function() {});
+p.constructor = MyPromise;
+
+var q = p.chain(
+  function() {
+    log.push("reject caught");
+    return Promise.reject(new Error("caught"));
+  });
+
+function listener(event, exec_state, event_data, data) {
+  try {
+    if (event == Debug.DebugEvent.Exception) {
+      expected_events--;
+      assertTrue(expected_events >= 0);
+      assertEquals("number is not a function", event_data.exception().message);
+      // All of the frames on the stack are from native Javascript.
+      assertEquals(0, exec_state.frameCount());
+    }
+  } catch (e) {
+    %AbortJS(e + "\n" + e.stack);
+  }
+}
+
+Debug.setBreakOnUncaughtException();
+Debug.setListener(listener);
+
+function testDone(iteration) {
+  function checkResult() {
+    try {
+      assertTrue(iteration < 10);
+      if (expected_events === 0) {
+        assertEquals(["resolve", "construct", "end main", "reject caught"],
+                     log);
+      } else {
+        testDone(iteration + 1);
+      }
+    } catch (e) {
+      %AbortJS(e + "\n" + e.stack);
+    }
+  }
+
+  // Run testDone through the Object.observe processing loop.
+  var dummy = {};
+  Object.observe(dummy, checkResult);
+  dummy.dummy = dummy;
+}
+
+testDone(0);
+
+log.push("end main");
diff --git a/deps/v8/test/mjsunit/es6/debug-promises/reject-with-throw-in-reject.js b/deps/v8/test/mjsunit/es6/debug-promises/reject-with-throw-in-reject.js
new file mode 100644
index 00000000000..15e464ec601
--- /dev/null
+++ b/deps/v8/test/mjsunit/es6/debug-promises/reject-with-throw-in-reject.js
@@ -0,0 +1,87 @@
+// Copyright 2014 the V8 project authors. All rights reserved.
+// Use of this source code is governed by a BSD-style license that can be
+// found in the LICENSE file.
+
+// Flags: --expose-debug-as debug --allow-natives-syntax
+
+// Test debug events when a Promise is rejected, which is caught by a
+// custom promise, which throws a new exception in its reject handler.
+// We expect one Exception debug event, triggered when the custom
+// reject closure in MyPromise throws an exception while handling
+// the rejection produced in the chained function.
+
+Debug = debug.Debug;
+
+var expected_events = 1;
+var log = [];
+
+var p = new Promise(function(resolve, reject) {
+  log.push("resolve");
+  resolve();
+});
+
+function MyPromise(resolver) {
+  var reject = function() {
+    log.push("throw in reject");
+    throw new Error("reject"); // event
+  };
+  var resolve = function() { };
+  log.push("construct");
+  resolver(resolve, reject);
+};
+
+MyPromise.prototype = new Promise(function() {});
+p.constructor = MyPromise;
+
+var q = p.chain(
+  function() {
+    log.push("reject caught");
+    return Promise.reject(new Error("caught"));
+  });
+
+function listener(event, exec_state, event_data, data) {
+  try {
+    if (event == Debug.DebugEvent.Exception) {
+      expected_events--;
+      assertTrue(expected_events >= 0);
+      assertEquals("reject", event_data.exception().message);
+      // Assert that the debug event is triggered at the throw site.
+      assertTrue(
+          exec_state.frame(0).sourceLineText().indexOf("// event") > 0);
+    }
+  } catch (e) {
+    // Signal a failure with exit code 1. This is necessary since the
+    // debugger swallows exceptions and we expect the chained function
+    // and this listener to be executed after the main script is finished.
+    print("Unexpected exception: " + e + "\n" + e.stack);
+    quit(1);
+  }
+}
+
+Debug.setBreakOnUncaughtException();
+Debug.setListener(listener);
+
+log.push("end main");
+
+function testDone(iteration) {
+  function checkResult() {
+    try {
+      assertTrue(iteration < 10);
+      if (expected_events === 0) {
+        assertEquals(["resolve", "construct", "end main",
+                      "reject caught", "throw in reject"], log);
+      } else {
+        testDone(iteration + 1);
+      }
+    } catch (e) {
+      %AbortJS(e + "\n" + e.stack);
+    }
+  }
+
+  // Run testDone through the Object.observe processing loop.
+  var dummy = {};
+  Object.observe(dummy, checkResult);
+  dummy.dummy = dummy;
+}
+
+testDone(0);
diff --git a/deps/v8/test/mjsunit/es6/debug-promises/reject-with-undefined-reject.js b/deps/v8/test/mjsunit/es6/debug-promises/reject-with-undefined-reject.js
new file mode 100644
index 00000000000..d11c01ff73b
--- /dev/null
+++ b/deps/v8/test/mjsunit/es6/debug-promises/reject-with-undefined-reject.js
@@ -0,0 +1,77 @@
+// Copyright 2014 the V8 project authors. All rights reserved.
+// Use of this source code is governed by a BSD-style license that can be
+// found in the LICENSE file.
+
+// Flags: --expose-debug-as debug --allow-natives-syntax
+
+// Test debug events when a Promise is rejected, which is caught by a custom
+// promise, which has undefined for its reject closure. We expect an Exception
+// debug event when calling the (undefined) custom reject closure.
+
+Debug = debug.Debug;
+
+var expected_events = 1;
+var log = [];
+
+var p = new Promise(function(resolve, reject) {
+  log.push("resolve");
+  resolve();
+});
+
+function MyPromise(resolver) {
+  var reject = undefined;
+  var resolve = function() { };
+  log.push("construct");
+  resolver(resolve, reject);
+};
+
+MyPromise.prototype = new Promise(function() {});
+p.constructor = MyPromise;
+
+var q = p.chain(
+  function() {
+    log.push("reject caught");
+    return Promise.reject(new Error("caught"));
+  });
+
+function listener(event, exec_state, event_data, data) {
+  try {
+    if (event == Debug.DebugEvent.Exception) {
+      expected_events--;
+      assertTrue(expected_events >= 0);
+      assertEquals("caught", event_data.exception().message);
+      // All of the frames on the stack are from native Javascript.
+      assertEquals(0, exec_state.frameCount());
+    }
+  } catch (e) {
+    %AbortJS(e + "\n" + e.stack);
+  }
+}
+
+Debug.setBreakOnUncaughtException();
+Debug.setListener(listener);
+
+function testDone(iteration) {
+  function checkResult() {
+    try {
+      assertTrue(iteration < 10);
+      if (expected_events === 0) {
+        assertEquals(["resolve", "construct", "end main", "reject caught"],
+                     log);
+      } else {
+        testDone(iteration + 1);
+      }
+    } catch (e) {
+      %AbortJS(e + "\n" + e.stack);
+    }
+  }
+
+  // Run testDone through the Object.observe processing loop.
+  var dummy = {};
+  Object.observe(dummy, checkResult);
+  dummy.dummy = dummy;
+}
+
+testDone(0);
+
+log.push("end main");
diff --git a/deps/v8/test/mjsunit/es6/debug-promises-caught-all.js b/deps/v8/test/mjsunit/es6/debug-promises/throw-caught-all.js
similarity index 58%
rename from deps/v8/test/mjsunit/es6/debug-promises-caught-all.js
rename to deps/v8/test/mjsunit/es6/debug-promises/throw-caught-all.js
index 5189373e184..2fbf05141d5 100644
--- a/deps/v8/test/mjsunit/es6/debug-promises-caught-all.js
+++ b/deps/v8/test/mjsunit/es6/debug-promises/throw-caught-all.js
@@ -2,7 +2,7 @@
 // Use of this source code is governed by a BSD-style license that can be
 // found in the LICENSE file.
 
-// Flags: --harmony-promises --expose-debug-as debug
+// Flags: --expose-debug-as debug --allow-natives-syntax
 
 // Test debug events when we listen to all exceptions and
 // there is a catch handler for the exception thrown in a Promise.
@@ -10,8 +10,8 @@ Debug = debug.Debug;
 
+var expected_events = 1;
 var log = [];
-var step = 0;
 
 var p = new Promise(function(resolve, reject) {
   log.push("resolve");
@@ -31,21 +31,15 @@ q.catch(
 function listener(event, exec_state, event_data, data) {
   try {
-    // Ignore exceptions during startup in stress runs.
- if (step >= 1) return; - assertEquals(["resolve", "end main", "throw"], log); if (event == Debug.DebugEvent.Exception) { + expected_events--; + assertTrue(expected_events >= 0); assertEquals("caught", event_data.exception().message); - assertEquals(undefined, event_data.promise()); + assertSame(q, event_data.promise()); assertFalse(event_data.uncaught()); - step++; } } catch (e) { - // Signal a failure with exit code 1. This is necessary since the - // debugger swallows exceptions and we expect the chained function - // and this listener to be executed after the main script is finished. - print("Unexpected exception: " + e + "\n" + e.stack); - quit(1); + %AbortJS(e + "\n" + e.stack); } } @@ -53,3 +47,25 @@ Debug.setBreakOnException(); Debug.setListener(listener); log.push("end main"); + +function testDone(iteration) { + function checkResult() { + try { + assertTrue(iteration < 10); + if (expected_events === 0) { + assertEquals(["resolve", "end main", "throw"], log); + } else { + testDone(iteration + 1); + } + } catch (e) { + %AbortJS(e + "\n" + e.stack); + } + } + + // Run testDone through the Object.observe processing loop. + var dummy = {}; + Object.observe(dummy, checkResult); + dummy.dummy = dummy; +} + +testDone(0); diff --git a/deps/v8/test/mjsunit/es6/debug-promises-caught-late.js b/deps/v8/test/mjsunit/es6/debug-promises/throw-caught-late.js similarity index 70% rename from deps/v8/test/mjsunit/es6/debug-promises-caught-late.js rename to deps/v8/test/mjsunit/es6/debug-promises/throw-caught-late.js index 66e073d4a3c..ac79aba769d 100644 --- a/deps/v8/test/mjsunit/es6/debug-promises-caught-late.js +++ b/deps/v8/test/mjsunit/es6/debug-promises/throw-caught-late.js @@ -2,7 +2,7 @@ // Use of this source code is governed by a BSD-style license that can be // found in the LICENSE file. -// Flags: --harmony-promises --expose-debug-as debug +// Flags: --expose-debug-as debug --allow-natives-syntax // Test debug events when we only listen to uncaught exceptions, the Promise // throws, and a catch handler is installed right before throwing. @@ -26,11 +26,7 @@ function listener(event, exec_state, event_data, data) { try { assertTrue(event != Debug.DebugEvent.Exception); } catch (e) { - // Signal a failure with exit code 1. This is necessary since the - // debugger swallows exceptions and we expect the chained function - // and this listener to be executed after the main script is finished. - print("Unexpected exception: " + e + "\n" + e.stack); - quit(1); + %AbortJS(e + "\n" + e.stack); } } diff --git a/deps/v8/test/mjsunit/es6/debug-promises-caught-uncaught.js b/deps/v8/test/mjsunit/es6/debug-promises/throw-caught-uncaught.js similarity index 63% rename from deps/v8/test/mjsunit/es6/debug-promises-caught-uncaught.js rename to deps/v8/test/mjsunit/es6/debug-promises/throw-caught-uncaught.js index 9620d31bddd..0ad9ce48a22 100644 --- a/deps/v8/test/mjsunit/es6/debug-promises-caught-uncaught.js +++ b/deps/v8/test/mjsunit/es6/debug-promises/throw-caught-uncaught.js @@ -2,7 +2,7 @@ // Use of this source code is governed by a BSD-style license that can be // found in the LICENSE file. -// Flags: --harmony-promises --expose-debug-as debug +// Flags: --expose-debug-as debug --allow-natives-syntax // Test debug events when we only listen to uncaught exceptions and // there is a catch handler for the exception thrown in a Promise. 
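
// [Editor's aside -- illustration only, not part of the patch.] The recurring
// change in these hunks: listener failures used to print and quit(1), since
// the debugger swallows exceptions thrown from listeners; the tests now call
// %AbortJS instead. %-functions are V8 internals and require
// --allow-natives-syntax, which is why that flag is added to each header.
// The idiom in isolation (assertOrAbort is a hypothetical helper):
function assertOrAbort(condition, message) {
  if (!condition) %AbortJS(message + " FAILED!");  // terminates the shell at once
}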
@@ -16,23 +16,19 @@ var p = new Promise(function(resolve, reject) { var q = p.chain( function() { - throw new Error("caught"); + throw new Error("caught throw"); }); q.catch( function(e) { - assertEquals("caught", e.message); + assertEquals("caught throw", e.message); }); function listener(event, exec_state, event_data, data) { try { assertTrue(event != Debug.DebugEvent.Exception); } catch (e) { - // Signal a failure with exit code 1. This is necessary since the - // debugger swallows exceptions and we expect the chained function - // and this listener to be executed after the main script is finished. - print("Unexpected exception: " + e + "\n" + e.stack); - quit(1); + %AbortJS(e + "\n" + e.stack); } } diff --git a/deps/v8/test/mjsunit/es6/debug-promises-throw-in-constructor.js b/deps/v8/test/mjsunit/es6/debug-promises/throw-in-constructor.js similarity index 69% rename from deps/v8/test/mjsunit/es6/debug-promises-throw-in-constructor.js rename to deps/v8/test/mjsunit/es6/debug-promises/throw-in-constructor.js index d0267cefb52..fd6b4dd348b 100644 --- a/deps/v8/test/mjsunit/es6/debug-promises-throw-in-constructor.js +++ b/deps/v8/test/mjsunit/es6/debug-promises/throw-in-constructor.js @@ -2,10 +2,10 @@ // Use of this source code is governed by a BSD-style license that can be // found in the LICENSE file. -// Flags: --harmony-promises --expose-debug-as debug +// Flags: --expose-debug-as debug // Test debug events when we only listen to uncaught exceptions and -// an exception is thrown in the the Promise constructor. +// an exception is thrown in the Promise constructor. // We expect an Exception debug event with a promise to be triggered. Debug = debug.Debug; @@ -15,8 +15,6 @@ var exception = null; function listener(event, exec_state, event_data, data) { try { - // Ignore exceptions during startup in stress runs. - if (step >= 1) return; if (event == Debug.DebugEvent.Exception) { assertEquals(0, step); assertEquals("uncaught", event_data.exception().message); @@ -27,10 +25,6 @@ function listener(event, exec_state, event_data, data) { step++; } } catch (e) { - // Signal a failure with exit code 1. This is necessary since the - // debugger swallows exceptions and we expect the chained function - // and this listener to be executed after the main script is finished. - print("Unexpected exception: " + e + "\n" + e.stack); exception = e; } } diff --git a/deps/v8/test/mjsunit/es6/debug-promises-uncaught-all.js b/deps/v8/test/mjsunit/es6/debug-promises/throw-uncaught-all.js similarity index 59% rename from deps/v8/test/mjsunit/es6/debug-promises-uncaught-all.js rename to deps/v8/test/mjsunit/es6/debug-promises/throw-uncaught-all.js index 714e7da9c58..72f800bf5b3 100644 --- a/deps/v8/test/mjsunit/es6/debug-promises-uncaught-all.js +++ b/deps/v8/test/mjsunit/es6/debug-promises/throw-uncaught-all.js @@ -2,17 +2,16 @@ // Use of this source code is governed by a BSD-style license that can be // found in the LICENSE file. -// Flags: --harmony-promises --expose-debug-as debug +// Flags: --expose-debug-as debug --allow-natives-syntax // Test debug events when we listen to all exceptions and -// there is a catch handler for the exception thrown in a Promise. +// there is no catch handler for the exception thrown in a Promise. // We expect an Exception debug event with a promise to be triggered. 
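
// [Editor's aside -- illustration only, not part of the patch.] The testDone()
// loop added to this file and its siblings leans on Object.observe(), since
// removed from V8: mutating an observed object schedules the observer callback
// on the microtask queue, behind pending promise reactions, so each round
// gives the debug listener a chance to fire before the final asserts run.
// The pattern in isolation (waitUntil is a hypothetical name):
function waitUntil(check, iteration) {
  iteration = iteration || 0;
  var dummy = {};
  Object.observe(dummy, function() {
    if (!check() && iteration < 10) waitUntil(check, iteration + 1);
  });
  dummy.dummy = dummy;  // any mutation schedules the observer callback
}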
Debug = debug.Debug; +var expected_events = 1; var log = []; -var step = 0; -var exception = undefined; var p = new Promise(function(resolve, reject) { log.push("resolve"); @@ -28,24 +27,18 @@ var q = p.chain( function listener(event, exec_state, event_data, data) { try { // Ignore exceptions during startup in stress runs. - if (step >= 1) return; - assertEquals(["resolve", "end main", "throw"], log); if (event == Debug.DebugEvent.Exception) { - assertEquals(0, step); + expected_events--; + assertTrue(expected_events >= 0); assertEquals("uncaught", event_data.exception().message); assertTrue(event_data.promise() instanceof Promise); - assertEquals(q, event_data.promise()); + assertSame(q, event_data.promise()); assertTrue(event_data.uncaught()); // Assert that the debug event is triggered at the throw site. assertTrue(exec_state.frame(0).sourceLineText().indexOf("// event") > 0); - step++; } } catch (e) { - // Signal a failure with exit code 1. This is necessary since the - // debugger swallows exceptions and we expect the chained function - // and this listener to be executed after the main script is finished. - print("Unexpected exception: " + e + "\n" + e.stack); - quit(1); + %AbortJS(e + "\n" + e.stack); } } @@ -53,3 +46,25 @@ Debug.setBreakOnException(); Debug.setListener(listener); log.push("end main"); + +function testDone(iteration) { + function checkResult() { + try { + assertTrue(iteration < 10); + if (expected_events === 0) { + assertEquals(["resolve", "end main", "throw"], log); + } else { + testDone(iteration + 1); + } + } catch (e) { + %AbortJS(e + "\n" + e.stack); + } + } + + // Rerun testDone through the Object.observe processing loop. + var dummy = {}; + Object.observe(dummy, checkResult); + dummy.dummy = dummy; +} + +testDone(0); diff --git a/deps/v8/test/mjsunit/es6/debug-promises-uncaught-uncaught.js b/deps/v8/test/mjsunit/es6/debug-promises/throw-uncaught-uncaught.js similarity index 60% rename from deps/v8/test/mjsunit/es6/debug-promises-uncaught-uncaught.js rename to deps/v8/test/mjsunit/es6/debug-promises/throw-uncaught-uncaught.js index fa97ac0d859..69aa8ebbd24 100644 --- a/deps/v8/test/mjsunit/es6/debug-promises-uncaught-uncaught.js +++ b/deps/v8/test/mjsunit/es6/debug-promises/throw-uncaught-uncaught.js @@ -2,7 +2,7 @@ // Use of this source code is governed by a BSD-style license that can be // found in the LICENSE file. -// Flags: --harmony-promises --expose-debug-as debug +// Flags: --expose-debug-as debug --allow-natives-syntax // Test debug events when we only listen to uncaught exceptions and // there is a catch handler for the exception thrown in a Promise. @@ -10,8 +10,8 @@ Debug = debug.Debug; +var expected_events = 1; var log = []; -var step = 0; var p = new Promise(function(resolve, reject) { log.push("resolve"); @@ -25,26 +25,20 @@ var q = p.chain( }); function listener(event, exec_state, event_data, data) { + if (event == Debug.DebugEvent.AsyncTaskEvent) return; try { - // Ignore exceptions during startup in stress runs. - if (step >= 1) return; - assertEquals(["resolve", "end main", "throw"], log); if (event == Debug.DebugEvent.Exception) { - assertEquals(0, step); + expected_events--; + assertTrue(expected_events >= 0); assertEquals("uncaught", event_data.exception().message); assertTrue(event_data.promise() instanceof Promise); - assertEquals(q, event_data.promise()); + assertSame(q, event_data.promise()); assertTrue(event_data.uncaught()); // Assert that the debug event is triggered at the throw site. 
assertTrue(exec_state.frame(0).sourceLineText().indexOf("// event") > 0); - step++; } } catch (e) { - // Signal a failure with exit code 1. This is necessary since the - // debugger swallows exceptions and we expect the chained function - // and this listener to be executed after the main script is finished. - print("Unexpected exception: " + e + "\n" + e.stack); - quit(1); + %AbortJS(e + "\n" + e.stack); } } @@ -52,3 +46,25 @@ Debug.setBreakOnUncaughtException(); Debug.setListener(listener); log.push("end main"); + +function testDone(iteration) { + function checkResult() { + try { + assertTrue(iteration < 10); + if (expected_events === 0) { + assertEquals(["resolve", "end main", "throw"], log); + } else { + testDone(iteration + 1); + } + } catch (e) { + %AbortJS(e + "\n" + e.stack); + } + } + + // Run testDone through the Object.observe processing loop. + var dummy = {}; + Object.observe(dummy, checkResult); + dummy.dummy = dummy; +} + +testDone(0); diff --git a/deps/v8/test/mjsunit/es6/debug-promises/throw-with-throw-in-reject.js b/deps/v8/test/mjsunit/es6/debug-promises/throw-with-throw-in-reject.js new file mode 100644 index 00000000000..1ea1c7f9ff3 --- /dev/null +++ b/deps/v8/test/mjsunit/es6/debug-promises/throw-with-throw-in-reject.js @@ -0,0 +1,90 @@ +// Copyright 2014 the V8 project authors. All rights reserved. +// Use of this source code is governed by a BSD-style license that can be +// found in the LICENSE file. + +// Flags: --expose-debug-as debug --allow-natives-syntax + +// Test debug events when an exception is thrown inside a Promise, which is +// caught by a custom promise, which throws a new exception in its reject +// handler. We expect two Exception debug events: +// 1) when the exception is thrown in the promise q. +// 2) when the custom reject closure in MyPromise throws an exception. + +Debug = debug.Debug; + +var expected_events = 2; +var log = []; + +var p = new Promise(function(resolve, reject) { + log.push("resolve"); + resolve(); +}); + +function MyPromise(resolver) { + var reject = function() { + log.push("throw in reject"); + throw new Error("reject"); // event + }; + var resolve = function() { }; + log.push("construct"); + resolver(resolve, reject); +}; + +MyPromise.prototype = new Promise(function() {}); +p.constructor = MyPromise; + +var q = p.chain( + function() { + log.push("throw caught"); + throw new Error("caught"); // event + }); + +function listener(event, exec_state, event_data, data) { + try { + if (event == Debug.DebugEvent.Exception) { + expected_events--; + assertTrue(expected_events >= 0); + if (expected_events == 1) { + assertEquals(["resolve", "construct", "end main", + "throw caught"], log); + assertEquals("caught", event_data.exception().message); + } else if (expected_events == 0) { + assertEquals("reject", event_data.exception().message); + } else { + assertUnreachable(); + } + assertSame(q, event_data.promise()); + assertTrue(exec_state.frame(0).sourceLineText().indexOf('// event') > 0); + } + } catch (e) { + %AbortJS(e + "\n" + e.stack); + } +} + +Debug.setBreakOnUncaughtException(); +Debug.setListener(listener); + +log.push("end main"); + +function testDone(iteration) { + function checkResult() { + try { + assertTrue(iteration < 10); + if (expected_events === 0) { + assertEquals(["resolve", "construct", "end main", + "throw caught", "throw in reject"], log); + } else { + testDone(iteration + 1); + } + } catch (e) { + %AbortJS(e + "\n" + e.stack); + } + } + + // Run testDone through the Object.observe processing loop. 
+ var dummy = {}; + Object.observe(dummy, checkResult); + dummy.dummy = dummy; +} + +testDone(0); diff --git a/deps/v8/test/mjsunit/es6/debug-promises/throw-with-undefined-reject.js b/deps/v8/test/mjsunit/es6/debug-promises/throw-with-undefined-reject.js new file mode 100644 index 00000000000..94dcdffa225 --- /dev/null +++ b/deps/v8/test/mjsunit/es6/debug-promises/throw-with-undefined-reject.js @@ -0,0 +1,88 @@ +// Copyright 2014 the V8 project authors. All rights reserved. +// Use of this source code is governed by a BSD-style license that can be +// found in the LICENSE file. + +// Flags: --expose-debug-as debug --allow-natives-syntax + +// Test debug events when an exception is thrown inside a Promise, which is +// caught by a custom promise, which has no reject handler. +// We expect two Exception debug events: +// 1) when the exception is thrown in the promise q. +// 2) when calling the undefined custom reject closure in MyPromise throws. + +Debug = debug.Debug; + +var expected_events = 2; +var log = []; + +var p = new Promise(function(resolve, reject) { + log.push("resolve"); + resolve(); +}); + +function MyPromise(resolver) { + var reject = undefined; + var resolve = function() { }; + log.push("construct"); + resolver(resolve, reject); +}; + +MyPromise.prototype = new Promise(function() {}); +p.constructor = MyPromise; + +var q = p.chain( + function() { + log.push("throw caught"); + throw new Error("caught"); // event + }); + +function listener(event, exec_state, event_data, data) { + try { + if (event == Debug.DebugEvent.Exception) { + expected_events--; + assertTrue(expected_events >= 0); + if (expected_events == 1) { + assertTrue( + exec_state.frame(0).sourceLineText().indexOf('// event') > 0); + assertEquals("caught", event_data.exception().message); + } else if (expected_events == 0) { + // All of the frames on the stack are from native Javascript. + assertEquals(0, exec_state.frameCount()); + assertEquals("undefined is not a function", + event_data.exception().message); + } else { + assertUnreachable(); + } + assertSame(q, event_data.promise()); + } + } catch (e) { + %AbortJS(e + "\n" + e.stack); + } +} + +Debug.setBreakOnUncaughtException(); +Debug.setListener(listener); + +log.push("end main"); + +function testDone(iteration) { + function checkResult() { + try { + assertTrue(iteration < 10); + if (expected_events === 0) { + assertEquals(["resolve", "construct", "end main", "throw caught"], log); + } else { + testDone(iteration + 1); + } + } catch (e) { + %AbortJS(e + "\n" + e.stack); + } + } + + // Run testDone through the Object.observe processing loop. + var dummy = {}; + Object.observe(dummy, checkResult); + dummy.dummy = dummy; +} + +testDone(0); diff --git a/deps/v8/test/mjsunit/es6/debug-promises/try-reject-in-constructor.js b/deps/v8/test/mjsunit/es6/debug-promises/try-reject-in-constructor.js new file mode 100644 index 00000000000..00981a67d09 --- /dev/null +++ b/deps/v8/test/mjsunit/es6/debug-promises/try-reject-in-constructor.js @@ -0,0 +1,42 @@ +// Copyright 2014 the V8 project authors. All rights reserved. +// Use of this source code is governed by a BSD-style license that can be +// found in the LICENSE file. + +// Flags: --expose-debug-as debug + +// Test debug events when we only listen to uncaught exceptions and +// the Promise is rejected within a try-catch in the Promise constructor. +// We expect an Exception debug event with a promise to be triggered. 
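
// [Editor's aside -- illustration only, not part of the patch.] The point of
// the two try-*-in-constructor tests here: reject() is an ordinary function
// call, not a throw, so a surrounding try-catch cannot intercept it and the
// debugger still reports the rejection as uncaught. Minimal shape (r1 is a
// hypothetical name):
var r1 = new Promise(function(resolve, reject) {
  try {
    reject(new Error("uncaught"));  // no exception propagates to the catch
  } catch (e) { /* never reached */ }
});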
+ +Debug = debug.Debug; + +var step = 0; +var exception = null; + +function listener(event, exec_state, event_data, data) { + try { + if (event == Debug.DebugEvent.Exception) { + assertEquals(0, step); + assertEquals("uncaught", event_data.exception().message); + assertTrue(event_data.promise() instanceof Promise); + assertTrue(event_data.uncaught()); + // Assert that the debug event is triggered at the throw site. + assertTrue(exec_state.frame(0).sourceLineText().indexOf("// event") > 0); + step++; + } + } catch (e) { + exception = e; + } +} + +Debug.setBreakOnUncaughtException(); +Debug.setListener(listener); + +var p = new Promise(function(resolve, reject) { + try { // This try-catch must not prevent this uncaught reject event. + reject(new Error("uncaught")); // event + } catch (e) { } +}); + +assertEquals(1, step); +assertNull(exception); diff --git a/deps/v8/test/mjsunit/es6/debug-promises/try-throw-reject-in-constructor.js b/deps/v8/test/mjsunit/es6/debug-promises/try-throw-reject-in-constructor.js new file mode 100644 index 00000000000..feff81da900 --- /dev/null +++ b/deps/v8/test/mjsunit/es6/debug-promises/try-throw-reject-in-constructor.js @@ -0,0 +1,44 @@ +// Copyright 2014 the V8 project authors. All rights reserved. +// Use of this source code is governed by a BSD-style license that can be +// found in the LICENSE file. + +// Flags: --expose-debug-as debug + +// Test debug events when we only listen to uncaught exceptions and +// an exception is thrown in the Promise constructor, but caught in an +// inner try-catch. The Promise is rejected afterwards. +// We expect an Exception debug event with a promise to be triggered. + +Debug = debug.Debug; + +var step = 0; +var exception = null; + +function listener(event, exec_state, event_data, data) { + try { + if (event == Debug.DebugEvent.Exception) { + assertEquals(0, step); + assertEquals("uncaught", event_data.exception().message); + assertTrue(event_data.promise() instanceof Promise); + assertTrue(event_data.uncaught()); + // Assert that the debug event is triggered at the throw site. + assertTrue(exec_state.frame(0).sourceLineText().indexOf("// event") > 0); + step++; + } + } catch (e) { + exception = e; + } +} + +Debug.setBreakOnUncaughtException(); +Debug.setListener(listener); + +var p = new Promise(function(resolve, reject) { + try { // This try-catch must not prevent this uncaught reject event. + throw new Error("caught"); + } catch (e) { } + reject(new Error("uncaught")); // event +}); + +assertEquals(1, step); +assertNull(exception); diff --git a/deps/v8/test/mjsunit/es6/debug-stepin-collections-foreach.js b/deps/v8/test/mjsunit/es6/debug-stepin-collections-foreach.js new file mode 100644 index 00000000000..08938f7751e --- /dev/null +++ b/deps/v8/test/mjsunit/es6/debug-stepin-collections-foreach.js @@ -0,0 +1,118 @@ +// Copyright 2014 the V8 project authors. All rights reserved. +// Use of this source code is governed by a BSD-style license that can be +// found in the LICENSE file. +// +// Flags: --expose-debug-as debug + +Debug = debug.Debug + +var exception = false; + +function listener(event, exec_state, event_data, data) { + try { + if (event == Debug.DebugEvent.Break) { + if (breaks == 0) { + exec_state.prepareStep(Debug.StepAction.StepIn, 2); + breaks = 1; + } else if (breaks <= 3) { + breaks++; + // Check whether we break at the expected line. 
+        print(event_data.sourceLineText());
+        assertTrue(event_data.sourceLineText().indexOf("Expected to step") > 0);
+        exec_state.prepareStep(Debug.StepAction.StepIn, 3);
+      }
+    }
+  } catch (e) {
+    exception = true;
+  }
+}
+
+function cb_set(num) {
+  print("element " + num); // Expected to step to this point.
+  return true;
+}
+
+function cb_map(key, val) {
+  print("key " + key + ", value " + val); // Expected to step to this point.
+  return true;
+}
+
+var s = new Set();
+s.add(1);
+s.add(2);
+s.add(3);
+s.add(4);
+
+var m = new Map();
+m.set('foo', 1);
+m.set('bar', 2);
+m.set('baz', 3);
+m.set('bat', 4);
+
+Debug.setListener(listener);
+
+var breaks = 0;
+debugger;
+s.forEach(cb_set);
+assertFalse(exception);
+assertEquals(4, breaks);
+
+breaks = 0;
+debugger;
+m.forEach(cb_map);
+assertFalse(exception);
+assertEquals(4, breaks);
+
+Debug.setListener(null);
+
+
+// Test two levels of builtin callbacks:
+// Array.forEach calls a callback function, which by itself uses
+// Array.forEach with another callback function.
+
+function second_level_listener(event, exec_state, event_data, data) {
+  try {
+    if (event == Debug.DebugEvent.Break) {
+      if (breaks == 0) {
+        exec_state.prepareStep(Debug.StepAction.StepIn, 3);
+        breaks = 1;
+      } else if (breaks <= 16) {
+        breaks++;
+        // Check whether we break at the expected line.
+        assertTrue(event_data.sourceLineText().indexOf("Expected to step") > 0);
+        // Step two steps further every four breaks to skip the
+        // forEach call in the first level of recursion.
+        var step = (breaks % 4 == 1) ? 6 : 3;
+        exec_state.prepareStep(Debug.StepAction.StepIn, step);
+      }
+    }
+  } catch (e) {
+    exception = true;
+  }
+}
+
+function cb_set_foreach(num) {
+  s.forEach(cb_set);
+  print("back to the first level of recursion.");
+}
+
+function cb_map_foreach(key, val) {
+  m.forEach(cb_set);
+  print("back to the first level of recursion.");
+}
+
+Debug.setListener(second_level_listener);
+
+breaks = 0;
+debugger;
+s.forEach(cb_set_foreach);
+assertFalse(exception);
+assertEquals(17, breaks);
+
+breaks = 0;
+debugger;
+m.forEach(cb_map_foreach);
+assertFalse(exception);
+assertEquals(17, breaks);
+
+Debug.setListener(null);
diff --git a/deps/v8/test/mjsunit/harmony/iteration-semantics.js b/deps/v8/test/mjsunit/es6/iteration-semantics.js
similarity index 82%
rename from deps/v8/test/mjsunit/harmony/iteration-semantics.js
rename to deps/v8/test/mjsunit/es6/iteration-semantics.js
index 2449115dd42..7849b29abe0 100644
--- a/deps/v8/test/mjsunit/harmony/iteration-semantics.js
+++ b/deps/v8/test/mjsunit/es6/iteration-semantics.js
@@ -25,7 +25,6 @@
 // (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
 // OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
 
-// Flags: --harmony-iteration
 // Flags: --harmony-generators --harmony-scoping --harmony-proxies
 
 // Test for-of semantics.
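
// [Editor's aside -- illustration only, not part of the patch.] The mechanical
// change running through this test: for-of now consumes iterables (objects
// with a Symbol.iterator method) rather than bare iterators, hence the
// wrap_iterator() adapter introduced just below. The same idea in isolation
// (makeIterable and counter are hypothetical names):
function makeIterable(iterator) {
  var iterable = {};
  iterable[Symbol.iterator] = function() { return iterator; };
  return iterable;
}
var counter = { n: 0, next: function() {
  return { value: this.n, done: this.n++ >= 3 };
} };
for (var x of makeIterable(counter)) print(x);  // prints 0, 1, 2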
@@ -41,13 +40,19 @@ function* values() { } } +function wrap_iterator(iterator) { + var iterable = {}; + iterable[Symbol.iterator] = function() { return iterator; }; + return iterable; +} + function integers_until(max) { function next() { var ret = { value: this.n, done: this.n == max }; this.n++; return ret; } - return { next: next, n: 0 } + return wrap_iterator({ next: next, n: 0 }); } function results(results) { @@ -55,7 +60,7 @@ function results(results) { function next() { return results[i++]; } - return { next: next } + return wrap_iterator({ next: next }); } function* integers_from(n) { @@ -72,44 +77,44 @@ function sum(x, tail) { return x + tail; } -function fold(cons, seed, iter) { - for (var x of iter) { +function fold(cons, seed, iterable) { + for (var x of iterable) { seed = cons(x, seed); } return seed; } -function* take(iter, n) { +function* take(iterable, n) { if (n == 0) return; - for (let x of iter) { + for (let x of iterable) { yield x; if (--n == 0) break; } } -function nth(iter, n) { - for (let x of iter) { +function nth(iterable, n) { + for (let x of iterable) { if (n-- == 0) return x; } throw "unreachable"; } -function* skip_every(iter, n) { +function* skip_every(iterable, n) { var i = 0; - for (let x of iter) { + for (let x of iterable) { if (++i % n == 0) continue; yield x; } } -function* iter_map(iter, f) { - for (var x of iter) { +function* iter_map(iterable, f) { + for (var x of iterable) { yield f(x); } } -function nested_fold(cons, seed, iter) { +function nested_fold(cons, seed, iterable) { var visited = [] - for (let x of iter) { + for (let x of iterable) { for (let y of x) { seed = cons(y, seed); } @@ -117,8 +122,8 @@ function nested_fold(cons, seed, iter) { return seed; } -function* unreachable(iter) { - for (let x of iter) { +function* unreachable(iterable) { + for (let x of iterable) { throw "not reached"; } } @@ -141,17 +146,19 @@ function never_getter(o, prop) { return o; } -function remove_next_after(iter, n) { +function remove_next_after(iterable, n) { + var iterator = iterable[Symbol.iterator](); function next() { if (n-- == 0) delete this.next; - return iter.next(); + return iterator.next(); } - return { next: next } + return wrap_iterator({ next: next }); } -function poison_next_after(iter, n) { +function poison_next_after(iterable, n) { + var iterator = iterable[Symbol.iterator](); function next() { - return iter.next(); + return iterator.next(); } function next_getter() { if (n-- < 0) @@ -160,7 +167,7 @@ function poison_next_after(iter, n) { } var o = {}; Object.defineProperty(o, 'next', { get: next_getter }); - return o; + return wrap_iterator(o); } // Now, the tests. @@ -204,13 +211,13 @@ assertEquals([1, 2], { value: 37, done: true }, never_getter(never_getter({}, 'done'), 'value')]))); -// Null and undefined do not cause an error. -assertEquals(0, fold(sum, 0, unreachable(null))); -assertEquals(0, fold(sum, 0, unreachable(undefined))); +// Unlike the case with for-in, null and undefined cause an error. +assertThrows('fold(sum, 0, unreachable(null))', TypeError); +assertThrows('fold(sum, 0, unreachable(undefined))', TypeError); // Other non-iterators do cause an error. assertThrows('fold(sum, 0, unreachable({}))', TypeError); -assertThrows('fold(sum, 0, unreachable("foo"))', TypeError); +assertThrows('fold(sum, 0, unreachable(false))', TypeError); assertThrows('fold(sum, 0, unreachable(37))', TypeError); // "next" is looked up each time. 
@@ -223,33 +230,33 @@ assertEquals(45, assertEquals(45, fold(sum, 0, poison_next_after(integers_until(10), 10))); -function labelled_continue(iter) { +function labelled_continue(iterable) { var n = 0; outer: while (true) { n++; - for (var x of iter) continue outer; + for (var x of iterable) continue outer; break; } return n; } assertEquals(11, labelled_continue(integers_until(10))); -function labelled_break(iter) { +function labelled_break(iterable) { var n = 0; outer: while (true) { n++; - for (var x of iter) break outer; + for (var x of iterable) break outer; } return n; } assertEquals(1, labelled_break(integers_until(10))); // Test continue/break in catch. -function catch_control(iter, k) { +function catch_control(iterable, k) { var n = 0; - for (var x of iter) { + for (var x of iterable) { try { return k(x); } catch (e) { @@ -274,9 +281,9 @@ assertEquals(5, })); // Test continue/break in try. -function try_control(iter, k) { +function try_control(iterable, k) { var n = 0; - for (var x of iter) { + for (var x of iterable) { try { var e = k(x); if (e == "continue") continue; @@ -313,16 +320,17 @@ assertEquals([1, 2], .map(transparent_proxy)))); // Proxy iterators. -function poison_proxy_after(x, n) { - return Proxy.create({ +function poison_proxy_after(iterable, n) { + var iterator = iterable[Symbol.iterator](); + return wrap_iterator(Proxy.create({ get: function(receiver, name) { if (name == 'next' && n-- < 0) throw "unreachable"; - return x[name]; + return iterator[name]; }, // Needed for integers_until(10)'s this.n++. set: function(receiver, name, val) { - return x[name] = val; + return iterator[name] = val; } - }); + })); } assertEquals(45, fold(sum, 0, poison_proxy_after(integers_until(10), 10))); diff --git a/deps/v8/test/mjsunit/harmony/iteration-syntax.js b/deps/v8/test/mjsunit/es6/iteration-syntax.js similarity index 98% rename from deps/v8/test/mjsunit/harmony/iteration-syntax.js rename to deps/v8/test/mjsunit/es6/iteration-syntax.js index 3bda78ed4e1..356a97898a3 100644 --- a/deps/v8/test/mjsunit/harmony/iteration-syntax.js +++ b/deps/v8/test/mjsunit/es6/iteration-syntax.js @@ -25,7 +25,7 @@ // (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE // OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. -// Flags: --harmony-iteration --harmony-scoping +// Flags: --harmony-scoping --use-strict // Test for-of syntax. diff --git a/deps/v8/test/mjsunit/es6/math-cbrt.js b/deps/v8/test/mjsunit/es6/math-cbrt.js index 83d9eb5d759..713c020e42e 100644 --- a/deps/v8/test/mjsunit/es6/math-cbrt.js +++ b/deps/v8/test/mjsunit/es6/math-cbrt.js @@ -2,8 +2,6 @@ // Use of this source code is governed by a BSD-style license that can be // found in the LICENSE file. -// Flags: --harmony-maths - assertTrue(isNaN(Math.cbrt(NaN))); assertTrue(isNaN(Math.cbrt(function() {}))); assertTrue(isNaN(Math.cbrt({ toString: function() { return NaN; } }))); diff --git a/deps/v8/test/mjsunit/es6/math-clz32.js b/deps/v8/test/mjsunit/es6/math-clz32.js index 816f6a936e6..3cbd4c3fccd 100644 --- a/deps/v8/test/mjsunit/es6/math-clz32.js +++ b/deps/v8/test/mjsunit/es6/math-clz32.js @@ -2,7 +2,7 @@ // Use of this source code is governed by a BSD-style license that can be // found in the LICENSE file. 
-// Flags: --harmony-maths --allow-natives-syntax +// Flags: --allow-natives-syntax [NaN, Infinity, -Infinity, 0, -0, "abc", "Infinity", "-Infinity", {}].forEach( function(x) { diff --git a/deps/v8/test/mjsunit/es6/math-expm1.js b/deps/v8/test/mjsunit/es6/math-expm1.js index de915c0969e..b4e31a959b1 100644 --- a/deps/v8/test/mjsunit/es6/math-expm1.js +++ b/deps/v8/test/mjsunit/es6/math-expm1.js @@ -2,7 +2,7 @@ // Use of this source code is governed by a BSD-style license that can be // found in the LICENSE file. -// Flags: --harmony-maths --no-fast-math +// Flags: --no-fast-math assertTrue(isNaN(Math.expm1(NaN))); assertTrue(isNaN(Math.expm1(function() {}))); diff --git a/deps/v8/test/mjsunit/es6/math-fround.js b/deps/v8/test/mjsunit/es6/math-fround.js index ea432ea2de4..c53396a38a1 100644 --- a/deps/v8/test/mjsunit/es6/math-fround.js +++ b/deps/v8/test/mjsunit/es6/math-fround.js @@ -2,7 +2,7 @@ // Use of this source code is governed by a BSD-style license that can be // found in the LICENSE file. -// Flags: --harmony-maths +// Flags: --allow-natives-syntax // Monkey-patch Float32Array. Float32Array = function(x) { this[0] = 0; }; @@ -11,15 +11,33 @@ assertTrue(isNaN(Math.fround(NaN))); assertTrue(isNaN(Math.fround(function() {}))); assertTrue(isNaN(Math.fround({ toString: function() { return NaN; } }))); assertTrue(isNaN(Math.fround({ valueOf: function() { return "abc"; } }))); -assertEquals("Infinity", String(1/Math.fround(0))); -assertEquals("-Infinity", String(1/Math.fround(-0))); -assertEquals("Infinity", String(Math.fround(Infinity))); -assertEquals("-Infinity", String(Math.fround(-Infinity))); +assertTrue(isNaN(Math.fround(NaN))); +assertTrue(isNaN(Math.fround(function() {}))); +assertTrue(isNaN(Math.fround({ toString: function() { return NaN; } }))); +assertTrue(isNaN(Math.fround({ valueOf: function() { return "abc"; } }))); + +function unopt(x) { return Math.fround(x); } +function opt(y) { return Math.fround(y); } -assertEquals("Infinity", String(Math.fround(1E200))); -assertEquals("-Infinity", String(Math.fround(-1E200))); -assertEquals("Infinity", String(1/Math.fround(1E-300))); -assertEquals("-Infinity", String(1/Math.fround(-1E-300))); +opt(0.1); +opt(0.1); +unopt(0.1); +%NeverOptimizeFunction(unopt); +%OptimizeFunctionOnNextCall(opt); + +function test(f) { + assertEquals("Infinity", String(1/f(0))); + assertEquals("-Infinity", String(1/f(-0))); + assertEquals("Infinity", String(f(Infinity))); + assertEquals("-Infinity", String(f(-Infinity))); + assertEquals("Infinity", String(f(1E200))); + assertEquals("-Infinity", String(f(-1E200))); + assertEquals("Infinity", String(1/f(1E-300))); + assertEquals("-Infinity", String(1/f(-1E-300))); +} + +test(opt); +test(unopt); mantissa_23_shift = Math.pow(2, -23); mantissa_29_shift = Math.pow(2, -23-29); @@ -81,13 +99,16 @@ ieee754float.prototype.toSingleSubnormal = function(sign, exponent) { var pi = new ieee754float(0, 0x400, 0x490fda, 0x14442d18); -assertEquals(pi.toSingle(), Math.fround(pi.toDouble())); +assertEquals(pi.toSingle(), opt(pi.toDouble())); +assertEquals(pi.toSingle(), unopt(pi.toDouble())); + function fuzz_mantissa(sign, exp, m1inc, m2inc) { for (var m1 = 0; m1 < (1 << 23); m1 += m1inc) { for (var m2 = 0; m2 < (1 << 29); m2 += m2inc) { var float = new ieee754float(sign, exp, m1, m2); - assertEquals(float.toSingle(), Math.fround(float.toDouble())); + assertEquals(float.toSingle(), unopt(float.toDouble())); + assertEquals(float.toSingle(), opt(float.toDouble())); } } } diff --git a/deps/v8/test/mjsunit/es6/math-hyperbolic.js 
b/deps/v8/test/mjsunit/es6/math-hyperbolic.js index c45a19c526d..1ceb95182bd 100644 --- a/deps/v8/test/mjsunit/es6/math-hyperbolic.js +++ b/deps/v8/test/mjsunit/es6/math-hyperbolic.js @@ -25,8 +25,6 @@ // (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE // OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. -// Flags: --harmony-maths - [Math.sinh, Math.cosh, Math.tanh, Math.asinh, Math.acosh, Math.atanh]. forEach(function(fun) { assertTrue(isNaN(fun(NaN))); diff --git a/deps/v8/test/mjsunit/es6/math-hypot.js b/deps/v8/test/mjsunit/es6/math-hypot.js index 10526272137..d2392df3a40 100644 --- a/deps/v8/test/mjsunit/es6/math-hypot.js +++ b/deps/v8/test/mjsunit/es6/math-hypot.js @@ -25,8 +25,6 @@ // (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE // OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. -// Flags: --harmony-maths - assertTrue(isNaN(Math.hypot({}))); assertTrue(isNaN(Math.hypot(undefined, 1))); assertTrue(isNaN(Math.hypot(1, undefined))); diff --git a/deps/v8/test/mjsunit/es6/math-log1p.js b/deps/v8/test/mjsunit/es6/math-log1p.js index eefea6ee380..5468444fdac 100644 --- a/deps/v8/test/mjsunit/es6/math-log1p.js +++ b/deps/v8/test/mjsunit/es6/math-log1p.js @@ -2,22 +2,20 @@ // Use of this source code is governed by a BSD-style license that can be // found in the LICENSE file. -// Flags: --harmony-maths - assertTrue(isNaN(Math.log1p(NaN))); assertTrue(isNaN(Math.log1p(function() {}))); assertTrue(isNaN(Math.log1p({ toString: function() { return NaN; } }))); assertTrue(isNaN(Math.log1p({ valueOf: function() { return "abc"; } }))); -assertEquals("Infinity", String(1/Math.log1p(0))); -assertEquals("-Infinity", String(1/Math.log1p(-0))); -assertEquals("Infinity", String(Math.log1p(Infinity))); -assertEquals("-Infinity", String(Math.log1p(-1))); +assertEquals(Infinity, 1/Math.log1p(0)); +assertEquals(-Infinity, 1/Math.log1p(-0)); +assertEquals(Infinity, Math.log1p(Infinity)); +assertEquals(-Infinity, Math.log1p(-1)); assertTrue(isNaN(Math.log1p(-2))); assertTrue(isNaN(Math.log1p(-Infinity))); -for (var x = 1E300; x > 1E-1; x *= 0.8) { +for (var x = 1E300; x > 1E16; x *= 0.8) { var expected = Math.log(x + 1); - assertEqualsDelta(expected, Math.log1p(x), expected * 1E-14); + assertEqualsDelta(expected, Math.log1p(x), expected * 1E-16); } // Values close to 0: @@ -37,5 +35,36 @@ function log1p(x) { for (var x = 1E-1; x > 1E-300; x *= 0.8) { var expected = log1p(x); - assertEqualsDelta(expected, Math.log1p(x), expected * 1E-14); + assertEqualsDelta(expected, Math.log1p(x), expected * 1E-16); } + +// Issue 3481. +assertEquals(6.9756137364252422e-03, + Math.log1p(8070450532247929/Math.pow(2,60))); + +// Tests related to the fdlibm implementation. +// Test largest double value. +assertEquals(709.782712893384, Math.log1p(1.7976931348623157e308)); +// Test small values. +assertEquals(Math.pow(2, -55), Math.log1p(Math.pow(2, -55))); +assertEquals(9.313225741817976e-10, Math.log1p(Math.pow(2, -30))); +// Cover various code paths. +// -.2929 < x < .41422, k = 0 +assertEquals(-0.2876820724517809, Math.log1p(-0.25)); +assertEquals(0.22314355131420976, Math.log1p(0.25)); +// 0.41422 < x < 9.007e15 +assertEquals(2.3978952727983707, Math.log1p(10)); +// x > 9.007e15 +assertEquals(36.841361487904734, Math.log1p(10e15)); +// Normalize u. +assertEquals(37.08337388996168, Math.log1p(12738099905822720)); +// Normalize u/2. 
+assertEquals(37.08336444902049, Math.log1p(12737979646738432)); +// |f| = 0, k != 0 +assertEquals(1.3862943611198906, Math.log1p(3)); +// |f| != 0, k != 0 +assertEquals(1.3862945995384413, Math.log1p(3 + Math.pow(2,-20))); +// final if-clause: k = 0 +assertEquals(0.5596157879354227, Math.log1p(0.75)); +// final if-clause: k != 0 +assertEquals(0.8109302162163288, Math.log1p(1.25)); diff --git a/deps/v8/test/mjsunit/es6/math-log2-log10.js b/deps/v8/test/mjsunit/es6/math-log2-log10.js index 2ab496012cb..4479894d7d8 100644 --- a/deps/v8/test/mjsunit/es6/math-log2-log10.js +++ b/deps/v8/test/mjsunit/es6/math-log2-log10.js @@ -25,8 +25,6 @@ // (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE // OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. -// Flags: --harmony-maths - [Math.log10, Math.log2].forEach( function(fun) { assertTrue(isNaN(fun(NaN))); assertTrue(isNaN(fun(fun))); diff --git a/deps/v8/test/mjsunit/es6/math-sign.js b/deps/v8/test/mjsunit/es6/math-sign.js index 8a89d62828b..65f1609d634 100644 --- a/deps/v8/test/mjsunit/es6/math-sign.js +++ b/deps/v8/test/mjsunit/es6/math-sign.js @@ -25,8 +25,6 @@ // (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE // OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. -// Flags: --harmony-maths - assertEquals("Infinity", String(1/Math.sign(0))); assertEquals("-Infinity", String(1/Math.sign(-0))); assertEquals(1, Math.sign(100)); diff --git a/deps/v8/test/mjsunit/es6/math-trunc.js b/deps/v8/test/mjsunit/es6/math-trunc.js index ed91ed1380f..9231576ddaa 100644 --- a/deps/v8/test/mjsunit/es6/math-trunc.js +++ b/deps/v8/test/mjsunit/es6/math-trunc.js @@ -25,8 +25,6 @@ // (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE // OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. -// Flags: --harmony-maths - assertEquals("Infinity", String(1/Math.trunc(0))); assertEquals("-Infinity", String(1/Math.trunc(-0))); assertEquals("Infinity", String(1/Math.trunc(Math.PI/4))); diff --git a/deps/v8/test/mjsunit/es6/mirror-collections.js b/deps/v8/test/mjsunit/es6/mirror-collections.js new file mode 100644 index 00000000000..e10f5c1a981 --- /dev/null +++ b/deps/v8/test/mjsunit/es6/mirror-collections.js @@ -0,0 +1,144 @@ +// Copyright 2014 the V8 project authors. All rights reserved. +// Use of this source code is governed by a BSD-style license that can be +// found in the LICENSE file. + +// Flags: --expose-debug-as debug --expose-gc + +function testMapMirror(mirror) { + // Create JSON representation. + var serializer = debug.MakeMirrorSerializer(); + var json = JSON.stringify(serializer.serializeValue(mirror)); + + // Check the mirror hierachy. + assertTrue(mirror instanceof debug.Mirror); + assertTrue(mirror instanceof debug.ValueMirror); + assertTrue(mirror instanceof debug.ObjectMirror); + assertTrue(mirror instanceof debug.MapMirror); + + assertTrue(mirror.isMap()); + + // Parse JSON representation and check. + var fromJSON = eval('(' + json + ')'); + assertEquals('map', fromJSON.type); +} + +function testSetMirror(mirror) { + // Create JSON representation. + var serializer = debug.MakeMirrorSerializer(); + var json = JSON.stringify(serializer.serializeValue(mirror)); + + // Check the mirror hierachy. 
+ assertTrue(mirror instanceof debug.Mirror); + assertTrue(mirror instanceof debug.ValueMirror); + assertTrue(mirror instanceof debug.ObjectMirror); + assertTrue(mirror instanceof debug.SetMirror); + + assertTrue(mirror.isSet()); + + // Parse JSON representation and check. + var fromJSON = eval('(' + json + ')'); + assertEquals('set', fromJSON.type); +} + +var o1 = new Object(); +var o2 = new Object(); +var o3 = new Object(); + +// Test the mirror object for Maps +var map = new Map(); +map.set(o1, 11); +map.set(o2, 22); +map.delete(o1); +var mapMirror = debug.MakeMirror(map); +testMapMirror(mapMirror); +var entries = mapMirror.entries(); +assertEquals(1, entries.length); +assertSame(o2, entries[0].key); +assertEquals(22, entries[0].value); +map.set(o1, 33); +map.set(o3, o2); +map.delete(o2); +map.set(undefined, 44); +entries = mapMirror.entries(); +assertEquals(3, entries.length); +assertSame(o1, entries[0].key); +assertEquals(33, entries[0].value); +assertSame(o3, entries[1].key); +assertSame(o2, entries[1].value); +assertEquals(undefined, entries[2].key); +assertEquals(44, entries[2].value); + +// Test the mirror object for Sets +var set = new Set(); +set.add(o1); +set.add(o2); +set.delete(o1); +set.add(undefined); +var setMirror = debug.MakeMirror(set); +testSetMirror(setMirror); +var values = setMirror.values(); +assertEquals(2, values.length); +assertSame(o2, values[0]); +assertEquals(undefined, values[1]); + +// Test the mirror object for WeakMaps +var weakMap = new WeakMap(); +weakMap.set(o1, 11); +weakMap.set(new Object(), 22); +weakMap.set(o3, 33); +weakMap.set(new Object(), 44); +var weakMapMirror = debug.MakeMirror(weakMap); +testMapMirror(weakMapMirror); +weakMap.set(new Object(), 55); +assertTrue(weakMapMirror.entries().length <= 5); +gc(); + +function testWeakMapEntries(weakMapMirror) { + var entries = weakMapMirror.entries(); + assertEquals(2, entries.length); + var found = 0; + for (var i = 0; i < entries.length; i++) { + if (Object.is(entries[i].key, o1)) { + assertEquals(11, entries[i].value); + found++; + } + if (Object.is(entries[i].key, o3)) { + assertEquals(33, entries[i].value); + found++; + } + } + assertEquals(2, found); +} + +testWeakMapEntries(weakMapMirror); + +// Test the mirror object for WeakSets +var weakSet = new WeakSet(); +weakSet.add(o1); +weakSet.add(new Object()); +weakSet.add(o2); +weakSet.add(new Object()); +weakSet.add(new Object()); +weakSet.add(o3); +weakSet.delete(o2); +var weakSetMirror = debug.MakeMirror(weakSet); +testSetMirror(weakSetMirror); +assertTrue(weakSetMirror.values().length <= 5); +gc(); + +function testWeakSetValues(weakSetMirror) { + var values = weakSetMirror.values(); + assertEquals(2, values.length); + var found = 0; + for (var i = 0; i < values.length; i++) { + if (Object.is(values[i], o1)) { + found++; + } + if (Object.is(values[i], o3)) { + found++; + } + } + assertEquals(2, found); +} + +testWeakSetValues(weakSetMirror); diff --git a/deps/v8/test/mjsunit/es6/mirror-promises.js b/deps/v8/test/mjsunit/es6/mirror-promises.js index 5a21a6b9e62..deeba8f549f 100644 --- a/deps/v8/test/mjsunit/es6/mirror-promises.js +++ b/deps/v8/test/mjsunit/es6/mirror-promises.js @@ -2,7 +2,7 @@ // Use of this source code is governed by a BSD-style license that can be // found in the LICENSE file. -// Flags: --expose-debug-as debug --harmony-promises +// Flags: --expose-debug-as debug // Test the mirror object for promises. 
function MirrorRefCache(json_refs) { @@ -39,7 +39,8 @@ function testPromiseMirror(promise, status, value) { assertEquals("Object", mirror.className()); assertEquals("#<Promise>", mirror.toText()); assertSame(promise, mirror.value()); - assertEquals(value, mirror.promiseValue()); + assertTrue(mirror.promiseValue() instanceof debug.Mirror); + assertEquals(value, mirror.promiseValue().value()); // Parse JSON representation and check. var fromJSON = eval('(' + json + ')'); @@ -48,7 +49,7 @@ function testPromiseMirror(promise, status, value) { assertEquals('function', refs.lookup(fromJSON.constructorFunction.ref).type); assertEquals('Promise', refs.lookup(fromJSON.constructorFunction.ref).name); assertEquals(status, fromJSON.status); - assertEquals(value, fromJSON.promiseValue); + assertEquals(value, refs.lookup(fromJSON.promiseValue.ref).value); } // Test a number of different promises. @@ -67,3 +68,23 @@ var thrownv = new Promise(function(resolve, reject) { throw 'throw' }); testPromiseMirror(resolvedv, "resolved", 'resolve'); testPromiseMirror(rejectedv, "rejected", 'reject'); testPromiseMirror(thrownv, "rejected", 'throw'); + +// Test internal properties of different promises. +var m1 = debug.MakeMirror(new Promise( + function(resolve, reject) { resolve(1) })); +var ip = m1.internalProperties(); +assertEquals(2, ip.length); +assertEquals("[[PromiseStatus]]", ip[0].name()); +assertEquals("[[PromiseValue]]", ip[1].name()); +assertEquals("resolved", ip[0].value().value()); +assertEquals(1, ip[1].value().value()); + +var m2 = debug.MakeMirror(new Promise(function(resolve, reject) { reject(2) })); +ip = m2.internalProperties(); +assertEquals("rejected", ip[0].value().value()); +assertEquals(2, ip[1].value().value()); + +var m3 = debug.MakeMirror(new Promise(function(resolve, reject) { })); +ip = m3.internalProperties(); +assertEquals("pending", ip[0].value().value()); +assertEquals("undefined", typeof(ip[1].value().value())); diff --git a/deps/v8/test/mjsunit/es6/mirror-symbols.js b/deps/v8/test/mjsunit/es6/mirror-symbols.js new file mode 100644 index 00000000000..f218332abf1 --- /dev/null +++ b/deps/v8/test/mjsunit/es6/mirror-symbols.js @@ -0,0 +1,38 @@ +// Copyright 2014 the V8 project authors. All rights reserved. +// Use of this source code is governed by a BSD-style license that can be +// found in the LICENSE file. + +// Flags: --expose-debug-as debug +// Test the mirror object for symbols. + +function testSymbolMirror(symbol, description) { + // Create mirror and JSON representation. + var mirror = debug.MakeMirror(symbol); + var serializer = debug.MakeMirrorSerializer(); + var json = JSON.stringify(serializer.serializeValue(mirror)); + + // Check the mirror hierachy. + assertTrue(mirror instanceof debug.Mirror); + assertTrue(mirror instanceof debug.ValueMirror); + assertTrue(mirror instanceof debug.SymbolMirror); + + // Check the mirror properties. + assertTrue(mirror.isSymbol()); + assertEquals(description, mirror.description()); + assertEquals('symbol', mirror.type()); + assertTrue(mirror.isPrimitive()); + var description_text = description === undefined ? "" : description; + assertEquals('Symbol(' + description_text + ')', mirror.toText()); + assertSame(symbol, mirror.value()); + + // Parse JSON representation and check. + var fromJSON = eval('(' + json + ')'); + assertEquals('symbol', fromJSON.type); + assertEquals(description, fromJSON.description); +} + +// Test a number of different symbols. 
+testSymbolMirror(Symbol("a"), "a"); +testSymbolMirror(Symbol(12), "12"); +testSymbolMirror(Symbol.for("b"), "b"); +testSymbolMirror(Symbol(), undefined); diff --git a/deps/v8/test/mjsunit/es6/promises.js b/deps/v8/test/mjsunit/es6/promises.js index 6dfe9261a85..faf154ee0a5 100644 --- a/deps/v8/test/mjsunit/es6/promises.js +++ b/deps/v8/test/mjsunit/es6/promises.js @@ -27,6 +27,42 @@ // Flags: --allow-natives-syntax +// Make sure we don't rely on functions patchable by monkeys. +var call = Function.prototype.call.call.bind(Function.prototype.call) +var observe = Object.observe; +var getOwnPropertyNames = Object.getOwnPropertyNames +var defineProperty = Object.defineProperty + +function clear(o) { + if (o === null || (typeof o !== 'object' && typeof o !== 'function')) return + clear(o.__proto__) + var properties = getOwnPropertyNames(o) + for (var i in properties) { + clearProp(o, properties[i]) + } +} + +function clearProp(o, name) { + var poisoned = {caller: 0, callee: 0, arguments: 0} + try { + var x = o[name] + o[name] = undefined + clear(x) + } catch(e) {} // assertTrue(name in poisoned) } +} + +// Find intrinsics and null them out. +var globals = Object.getOwnPropertyNames(this) +var whitelist = {Promise: true, TypeError: true} +for (var i in globals) { + var name = globals[i] + if (name in whitelist || name[0] === name[0].toLowerCase()) delete globals[i] +} +for (var i in globals) { + if (globals[i]) clearProp(this, globals[i]) +} + + var asyncAssertsExpected = 0; function assertAsyncRan() { ++asyncAssertsExpected } @@ -43,7 +79,7 @@ function assertAsync(b, s) { function assertAsyncDone(iteration) { var iteration = iteration || 0 var dummy = {} - Object.observe(dummy, + observe(dummy, function() { if (asyncAssertsExpected === 0) assertAsync(true, "all") @@ -777,13 +813,13 @@ function assertAsyncDone(iteration) { MyPromise.__proto__ = Promise MyPromise.defer = function() { log += "d" - return this.__proto__.defer.call(this) + return call(this.__proto__.defer, this) } MyPromise.prototype.__proto__ = Promise.prototype MyPromise.prototype.chain = function(resolve, reject) { log += "c" - return this.__proto__.__proto__.chain.call(this, resolve, reject) + return call(this.__proto__.__proto__.chain, this, resolve, reject) } log = "" diff --git a/deps/v8/test/mjsunit/harmony/regress/regress-2186.js b/deps/v8/test/mjsunit/es6/regress/regress-2186.js similarity index 98% rename from deps/v8/test/mjsunit/harmony/regress/regress-2186.js rename to deps/v8/test/mjsunit/es6/regress/regress-2186.js index 0921dceadb2..c82242a10ec 100644 --- a/deps/v8/test/mjsunit/harmony/regress/regress-2186.js +++ b/deps/v8/test/mjsunit/es6/regress/regress-2186.js @@ -25,8 +25,6 @@ // (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE // OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. -// Flags: --harmony-collections - function heapify(i) { return 2.0 * (i / 2); } diff --git a/deps/v8/test/cctest/test-cpu-ia32.cc b/deps/v8/test/mjsunit/es6/regress/regress-cr372788.js similarity index 75% rename from deps/v8/test/cctest/test-cpu-ia32.cc rename to deps/v8/test/mjsunit/es6/regress/regress-cr372788.js index 245450bf92b..9b66a7e08b3 100644 --- a/deps/v8/test/cctest/test-cpu-ia32.cc +++ b/deps/v8/test/mjsunit/es6/regress/regress-cr372788.js @@ -1,4 +1,4 @@ -// Copyright 2013 the V8 project authors. All rights reserved. +// Copyright 2014 the V8 project authors. All rights reserved. 
// Redistribution and use in source and binary forms, with or without // modification, are permitted provided that the following conditions are // met: @@ -25,16 +25,21 @@ // (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE // OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. -#include "v8.h" +// Flags: --allow-natives-syntax -#include "cctest.h" -#include "cpu.h" +var x = 0; +var y = 0; -using namespace v8::internal; +var thenable = { then: function(f) { x++; f(); } }; - -TEST(RequiredFeaturesX64) { - // Test for the features required by every x86 CPU in compat/legacy mode. - CPU cpu; - CHECK(cpu.has_sahf()); +for (var i = 0; i < 3; ++i) { + Promise.resolve(thenable).then(function() { x++; y++; }); } +assertEquals(0, x); + +(function check() { + Promise.resolve().chain(function() { + // Delay check until all handlers have run. + if (y < 3) check(); else assertEquals(6, x); + }).catch(function(e) { %AbortJS("FAILURE: " + e) }); +})(); diff --git a/deps/v8/test/mjsunit/harmony/regress/regress-crbug-248025.js b/deps/v8/test/mjsunit/es6/regress/regress-crbug-248025.js similarity index 98% rename from deps/v8/test/mjsunit/harmony/regress/regress-crbug-248025.js rename to deps/v8/test/mjsunit/es6/regress/regress-crbug-248025.js index c5988595663..b7982cda744 100644 --- a/deps/v8/test/mjsunit/harmony/regress/regress-crbug-248025.js +++ b/deps/v8/test/mjsunit/es6/regress/regress-crbug-248025.js @@ -25,8 +25,6 @@ // (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE // OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. -// Flags: --harmony-iteration - // Filler long enough to trigger lazy parsing. var filler = "//" + new Array(1024).join('x'); diff --git a/deps/v8/test/mjsunit/harmony/regress/regress-crbug-346141.js b/deps/v8/test/mjsunit/es6/regress/regress-crbug-346141.js similarity index 89% rename from deps/v8/test/mjsunit/harmony/regress/regress-crbug-346141.js rename to deps/v8/test/mjsunit/es6/regress/regress-crbug-346141.js index 798b7704ec2..2b9655e1744 100644 --- a/deps/v8/test/mjsunit/harmony/regress/regress-crbug-346141.js +++ b/deps/v8/test/mjsunit/es6/regress/regress-crbug-346141.js @@ -2,8 +2,6 @@ // Use of this source code is governed by a BSD-style license that can be // found in the LICENSE file. -// Flags: --harmony-symbols - var s = Symbol() var o = {} o[s] = 2 diff --git a/deps/v8/test/mjsunit/es6/string-html.js b/deps/v8/test/mjsunit/es6/string-html.js new file mode 100644 index 00000000000..4f3feb56dd1 --- /dev/null +++ b/deps/v8/test/mjsunit/es6/string-html.js @@ -0,0 +1,159 @@ +// Copyright 2014 the V8 project authors. All rights reserved. +// Use of this source code is governed by a BSD-style license that can be +// found in the LICENSE file. 
+ +// Tests taken from: +// http://mathias.html5.org/tests/javascript/string/ + +assertEquals('_'.anchor('b'), '<a name="b">_</a>'); +assertEquals('<'.anchor('<'), '<a name="<"><</a>'); +assertEquals('_'.anchor(0x2A), '<a name="42">_</a>'); +assertEquals('_'.anchor('\x22'), '<a name=""">_</a>'); +assertEquals(String.prototype.anchor.call(0x2A, 0x2A), '<a name="42">42</a>'); +assertThrows(function() { + String.prototype.anchor.call(undefined); +}, TypeError); +assertThrows(function() { + String.prototype.anchor.call(null); +}, TypeError); +assertEquals(String.prototype.anchor.length, 1); + +assertEquals('_'.big(), '<big>_</big>'); +assertEquals('<'.big(), '<big><</big>'); +assertEquals(String.prototype.big.call(0x2A), '<big>42</big>'); +assertThrows(function() { + String.prototype.big.call(undefined); +}, TypeError); +assertThrows(function() { + String.prototype.big.call(null); +}, TypeError); +assertEquals(String.prototype.big.length, 0); + +assertEquals('_'.blink(), '<blink>_</blink>'); +assertEquals('<'.blink(), '<blink><</blink>'); +assertEquals(String.prototype.blink.call(0x2A), '<blink>42</blink>'); +assertThrows(function() { + String.prototype.blink.call(undefined); +}, TypeError); +assertThrows(function() { + String.prototype.blink.call(null); +}, TypeError); +assertEquals(String.prototype.blink.length, 0); + +assertEquals('_'.bold(), '<b>_</b>'); +assertEquals('<'.bold(), '<b><</b>'); +assertEquals(String.prototype.bold.call(0x2A), '<b>42</b>'); +assertThrows(function() { + String.prototype.bold.call(undefined); +}, TypeError); +assertThrows(function() { + String.prototype.bold.call(null); +}, TypeError); +assertEquals(String.prototype.bold.length, 0); + +assertEquals('_'.fixed(), '<tt>_</tt>'); +assertEquals('<'.fixed(), '<tt><</tt>'); +assertEquals(String.prototype.fixed.call(0x2A), '<tt>42</tt>'); +assertThrows(function() { + String.prototype.fixed.call(undefined); +}, TypeError); +assertThrows(function() { + String.prototype.fixed.call(null); +}, TypeError); +assertEquals(String.prototype.fixed.length, 0); + +assertEquals('_'.fontcolor('b'), '<font color="b">_</font>'); +assertEquals('<'.fontcolor('<'), '<font color="<"><</font>'); +assertEquals('_'.fontcolor(0x2A), '<font color="42">_</font>'); +assertEquals('_'.fontcolor('\x22'), '<font color=""">_</font>'); +assertEquals(String.prototype.fontcolor.call(0x2A, 0x2A), + '<font color="42">42</font>'); +assertThrows(function() { + String.prototype.fontcolor.call(undefined); +}, TypeError); +assertThrows(function() { + String.prototype.fontcolor.call(null); +}, TypeError); +assertEquals(String.prototype.fontcolor.length, 1); + +assertEquals('_'.fontsize('b'), '<font size="b">_</font>'); +assertEquals('<'.fontsize('<'), '<font size="<"><</font>'); +assertEquals('_'.fontsize(0x2A), '<font size="42">_</font>'); +assertEquals('_'.fontsize('\x22'), '<font size=""">_</font>'); +assertEquals(String.prototype.fontsize.call(0x2A, 0x2A), + '<font size="42">42</font>'); +assertThrows(function() { + String.prototype.fontsize.call(undefined); +}, TypeError); +assertThrows(function() { + String.prototype.fontsize.call(null); +}, TypeError); +assertEquals(String.prototype.fontsize.length, 1); + +assertEquals('_'.italics(), '<i>_</i>'); +assertEquals('<'.italics(), '<i><</i>'); +assertEquals(String.prototype.italics.call(0x2A), '<i>42</i>'); +assertThrows(function() { + String.prototype.italics.call(undefined); +}, TypeError); +assertThrows(function() { + String.prototype.italics.call(null); +}, TypeError); 
+assertEquals(String.prototype.italics.length, 0); + +assertEquals('_'.link('b'), '<a href="b">_</a>'); +assertEquals('<'.link('<'), '<a href="<"><</a>'); +assertEquals('_'.link(0x2A), '<a href="42">_</a>'); +assertEquals('_'.link('\x22'), '<a href=""">_</a>'); +assertEquals(String.prototype.link.call(0x2A, 0x2A), '<a href="42">42</a>'); +assertThrows(function() { + String.prototype.link.call(undefined); +}, TypeError); +assertThrows(function() { + String.prototype.link.call(null); +}, TypeError); +assertEquals(String.prototype.link.length, 1); + +assertEquals('_'.small(), '<small>_</small>'); +assertEquals('<'.small(), '<small><</small>'); +assertEquals(String.prototype.small.call(0x2A), '<small>42</small>'); +assertThrows(function() { + String.prototype.small.call(undefined); +}, TypeError); +assertThrows(function() { + String.prototype.small.call(null); +}, TypeError); +assertEquals(String.prototype.small.length, 0); + +assertEquals('_'.strike(), '<strike>_</strike>'); +assertEquals('<'.strike(), '<strike><</strike>'); +assertEquals(String.prototype.strike.call(0x2A), '<strike>42</strike>'); +assertThrows(function() { + String.prototype.strike.call(undefined); +}, TypeError); +assertThrows(function() { + String.prototype.strike.call(null); +}, TypeError); +assertEquals(String.prototype.strike.length, 0); + +assertEquals('_'.sub(), '<sub>_</sub>'); +assertEquals('<'.sub(), '<sub><</sub>'); +assertEquals(String.prototype.sub.call(0x2A), '<sub>42</sub>'); +assertThrows(function() { + String.prototype.sub.call(undefined); +}, TypeError); +assertThrows(function() { + String.prototype.sub.call(null); +}, TypeError); +assertEquals(String.prototype.sub.length, 0); + +assertEquals('_'.sup(), '<sup>_</sup>'); +assertEquals('<'.sup(), '<sup><</sup>'); +assertEquals(String.prototype.sup.call(0x2A), '<sup>42</sup>'); +assertThrows(function() { + String.prototype.sup.call(undefined); +}, TypeError); +assertThrows(function() { + String.prototype.sup.call(null); +}, TypeError); +assertEquals(String.prototype.sup.length, 0); diff --git a/deps/v8/test/mjsunit/es6/string-iterator.js b/deps/v8/test/mjsunit/es6/string-iterator.js new file mode 100644 index 00000000000..e6bea6dfe76 --- /dev/null +++ b/deps/v8/test/mjsunit/es6/string-iterator.js @@ -0,0 +1,89 @@ +// Copyright 2014 the V8 project authors. All rights reserved. +// Use of this source code is governed by a BSD-style license that can be +// found in the LICENSE file. 
+ + +function TestStringPrototypeIterator() { + assertTrue(String.prototype.hasOwnProperty(Symbol.iterator)); + assertFalse("".hasOwnProperty(Symbol.iterator)); + assertFalse("".propertyIsEnumerable(Symbol.iterator)); +} +TestStringPrototypeIterator(); + + +function assertIteratorResult(value, done, result) { + assertEquals({value: value, done: done}, result); +} + + +function TestManualIteration() { + var string = "abc"; + var iterator = string[Symbol.iterator](); + assertIteratorResult('a', false, iterator.next()); + assertIteratorResult('b', false, iterator.next()); + assertIteratorResult('c', false, iterator.next()); + assertIteratorResult(void 0, true, iterator.next()); + assertIteratorResult(void 0, true, iterator.next()); +} +TestManualIteration(); + + +function TestSurrogatePairs() { + var lo = "\uD834"; + var hi = "\uDF06"; + var pair = lo + hi; + var string = "abc" + pair + "def" + lo + pair + hi + lo; + var iterator = string[Symbol.iterator](); + assertIteratorResult('a', false, iterator.next()); + assertIteratorResult('b', false, iterator.next()); + assertIteratorResult('c', false, iterator.next()); + assertIteratorResult(pair, false, iterator.next()); + assertIteratorResult('d', false, iterator.next()); + assertIteratorResult('e', false, iterator.next()); + assertIteratorResult('f', false, iterator.next()); + assertIteratorResult(lo, false, iterator.next()); + assertIteratorResult(pair, false, iterator.next()); + assertIteratorResult(hi, false, iterator.next()); + assertIteratorResult(lo, false, iterator.next()); + assertIteratorResult(void 0, true, iterator.next()); + assertIteratorResult(void 0, true, iterator.next()); +} +TestSurrogatePairs(); + + +function TestStringIteratorPrototype() { + var iterator = ""[Symbol.iterator](); + var StringIteratorPrototype = iterator.__proto__; + assertFalse(StringIteratorPrototype.hasOwnProperty('constructor')); + assertEquals(StringIteratorPrototype.__proto__, Object.prototype); + assertArrayEquals(['next'], + Object.getOwnPropertyNames(StringIteratorPrototype)); + assertEquals('[object String Iterator]', "" + iterator); +} +TestStringIteratorPrototype(); + + +function TestForOf() { + var lo = "\uD834"; + var hi = "\uDF06"; + var pair = lo + hi; + var string = "abc" + pair + "def" + lo + pair + hi + lo; + var expected = ['a', 'b', 'c', pair, 'd', 'e', 'f', lo, pair, hi, lo]; + + var i = 0; + for (var char of string) { + assertEquals(expected[i++], char); + } + + assertEquals(expected.length, i); +} +TestForOf(); + + +function TestNonOwnSlots() { + var iterator = ""[Symbol.iterator](); + var object = {__proto__: iterator}; + + assertThrows(function() { object.next(); }, TypeError); +} +TestNonOwnSlots(); diff --git a/deps/v8/test/mjsunit/harmony/symbols.js b/deps/v8/test/mjsunit/es6/symbols.js similarity index 90% rename from deps/v8/test/mjsunit/harmony/symbols.js rename to deps/v8/test/mjsunit/es6/symbols.js index 220439291cb..0b070027002 100644 --- a/deps/v8/test/mjsunit/harmony/symbols.js +++ b/deps/v8/test/mjsunit/es6/symbols.js @@ -25,7 +25,6 @@ // (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE // OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. 
-// Flags: --harmony-symbols --harmony-collections // Flags: --expose-gc --allow-natives-syntax var symbols = [] @@ -102,7 +101,9 @@ TestConstructor() function TestValueOf() { for (var i in symbols) { + assertTrue(symbols[i] === Object(symbols[i]).valueOf()) assertTrue(symbols[i] === symbols[i].valueOf()) + assertTrue(Symbol.prototype.valueOf.call(Object(symbols[i])) === symbols[i]) assertTrue(Symbol.prototype.valueOf.call(symbols[i]) === symbols[i]) } } @@ -113,7 +114,7 @@ function TestToString() { for (var i in symbols) { assertThrows(function() { String(symbols[i]) }, TypeError) assertThrows(function() { symbols[i] + "" }, TypeError) - assertTrue(isValidSymbolString(String(Object(symbols[i])))) + assertThrows(function() { String(Object(symbols[i])) }, TypeError) assertTrue(isValidSymbolString(symbols[i].toString())) assertTrue(isValidSymbolString(Object(symbols[i]).toString())) assertTrue( @@ -127,6 +128,8 @@ TestToString() function TestToBoolean() { for (var i in symbols) { + assertTrue(Boolean(Object(symbols[i]))) + assertFalse(!Object(symbols[i])) assertTrue(Boolean(symbols[i]).valueOf()) assertFalse(!symbols[i]) assertTrue(!!symbols[i]) @@ -144,8 +147,10 @@ TestToBoolean() function TestToNumber() { for (var i in symbols) { - assertSame(NaN, Number(symbols[i]).valueOf()) - assertSame(NaN, symbols[i] + 0) + assertThrows(function() { Number(Object(symbols[i])) }, TypeError) + assertThrows(function() { +Object(symbols[i]) }, TypeError) + assertThrows(function() { Number(symbols[i]) }, TypeError) + assertThrows(function() { symbols[i] + 0 }, TypeError) } } TestToNumber() @@ -367,6 +372,34 @@ for (var i in objs) { } +function TestDefineProperties() { + var properties = {} + for (var i in symbols) { + Object.defineProperty( + properties, symbols[i], {value: {value: i}, enumerable: i % 2 === 0}) + } + var o = Object.defineProperties({}, properties) + for (var i in symbols) { + assertEquals(i % 2 === 0, symbols[i] in o) + } +} +TestDefineProperties() + + +function TestCreate() { + var properties = {} + for (var i in symbols) { + Object.defineProperty( + properties, symbols[i], {value: {value: i}, enumerable: i % 2 === 0}) + } + var o = Object.create(Object.prototype, properties) + for (var i in symbols) { + assertEquals(i % 2 === 0, symbols[i] in o) + } +} +TestCreate() + + function TestCachedKeyAfterScavenge() { gc(); // Keyed property lookup are cached. Hereby we assume that the keys are @@ -412,8 +445,9 @@ TestGetOwnPropertySymbolsWithProto() function TestWellKnown() { var symbols = [ - "create", "hasInstance", "isConcatSpreadable", "isRegExp", - "iterator", "toStringTag", "unscopables" + // TODO(rossberg): reactivate once implemented. + // "hasInstance", "isConcatSpreadable", "isRegExp", + "iterator", /* "toStringTag", */ "unscopables" ] for (var i in symbols) { diff --git a/deps/v8/test/mjsunit/es6/typed-array-iterator.js b/deps/v8/test/mjsunit/es6/typed-array-iterator.js new file mode 100644 index 00000000000..a2e4906c191 --- /dev/null +++ b/deps/v8/test/mjsunit/es6/typed-array-iterator.js @@ -0,0 +1,39 @@ +// Copyright 2014 the V8 project authors. All rights reserved. +// Use of this source code is governed by a BSD-style license that can be +// found in the LICENSE file. 
+ + +var constructors = [Uint8Array, Int8Array, + Uint16Array, Int16Array, + Uint32Array, Int32Array, + Float32Array, Float64Array, + Uint8ClampedArray]; + +function TestTypedArrayPrototype(constructor) { + assertTrue(constructor.prototype.hasOwnProperty('entries')); + assertTrue(constructor.prototype.hasOwnProperty('values')); + assertTrue(constructor.prototype.hasOwnProperty('keys')); + assertTrue(constructor.prototype.hasOwnProperty(Symbol.iterator)); + + assertFalse(constructor.prototype.propertyIsEnumerable('entries')); + assertFalse(constructor.prototype.propertyIsEnumerable('values')); + assertFalse(constructor.prototype.propertyIsEnumerable('keys')); + assertFalse(constructor.prototype.propertyIsEnumerable(Symbol.iterator)); + + assertEquals(Array.prototype.entries, constructor.prototype.entries); + assertEquals(Array.prototype.values, constructor.prototype.values); + assertEquals(Array.prototype.keys, constructor.prototype.keys); + assertEquals(Array.prototype.values, constructor.prototype[Symbol.iterator]); +} +constructors.forEach(TestTypedArrayPrototype); + + +function TestTypedArrayValues(constructor) { + var array = [1, 2, 3]; + var i = 0; + for (var value of new constructor(array)) { + assertEquals(array[i++], value); + } + assertEquals(i, array.length); +} +constructors.forEach(TestTypedArrayValues); diff --git a/deps/v8/test/mjsunit/es6/unscopables.js b/deps/v8/test/mjsunit/es6/unscopables.js new file mode 100644 index 00000000000..678536dba42 --- /dev/null +++ b/deps/v8/test/mjsunit/es6/unscopables.js @@ -0,0 +1,664 @@ +// Copyright 2014 the V8 project authors. All rights reserved. +// Use of this source code is governed by a BSD-style license that can be +// found in the LICENSE file. + +// Flags: --harmony-unscopables +// Flags: --harmony-collections + +var global = this; +var globalProto = Object.getPrototypeOf(global); + +// Number of objects being tested. There is an assert ensuring this is correct. +var objectCount = 21; + + +function runTest(f) { + function restore(object, oldProto) { + delete object[Symbol.unscopables]; + delete object.x; + delete object.x_; + delete object.y; + delete object.z; + Object.setPrototypeOf(object, oldProto); + } + + function getObject(i) { + var objects = [ + {}, + [], + function() {}, + function() { + return arguments; + }(), + function() { + 'use strict'; + return arguments; + }(), + Object(1), + Object(true), + Object('bla'), + new Date, + new RegExp, + new Set, + new Map, + new WeakMap, + new WeakSet, + new ArrayBuffer(10), + new Int32Array(5), + Object, + Function, + Date, + RegExp, + global + ]; + + assertEquals(objectCount, objects.length); + return objects[i]; + } + + // Tests depends on this not being there to start with. + delete Array.prototype[Symbol.unscopables]; + + if (f.length === 1) { + for (var i = 0; i < objectCount; i++) { + var object = getObject(i); + var oldObjectProto = Object.getPrototypeOf(object); + f(object); + restore(object, oldObjectProto); + } + } else { + for (var i = 0; i < objectCount; i++) { + for (var j = 0; j < objectCount; j++) { + var object = getObject(i); + var proto = getObject(j); + if (object === proto) { + continue; + } + var oldObjectProto = Object.getPrototypeOf(object); + var oldProtoProto = Object.getPrototypeOf(proto); + f(object, proto); + restore(object, oldObjectProto); + restore(proto, oldProtoProto); + } + } + } +} + +// Test array first, since other tests are changing +// Array.prototype[Symbol.unscopables]. 
+function TestArrayPrototypeUnscopables() { + var descr = Object.getOwnPropertyDescriptor(Array.prototype, + Symbol.unscopables); + assertFalse(descr.enumerable); + assertFalse(descr.writable); + assertTrue(descr.configurable); + assertEquals(null, Object.getPrototypeOf(descr.value)); + + var copyWithin = 'local copyWithin'; + var entries = 'local entries'; + var fill = 'local fill'; + var find = 'local find'; + var findIndex = 'local findIndex'; + var keys = 'local keys'; + var values = 'local values'; + + var array = []; + array.toString = 42; + + with (array) { + assertEquals('local copyWithin', copyWithin); + assertEquals('local entries', entries); + assertEquals('local fill', fill); + assertEquals('local find', find); + assertEquals('local findIndex', findIndex); + assertEquals('local keys', keys); + assertEquals('local values', values); + assertEquals(42, toString); + } +} +TestArrayPrototypeUnscopables(); + + + +function TestBasics(object) { + var x = 1; + var y = 2; + var z = 3; + object.x = 4; + object.y = 5; + + with (object) { + assertEquals(4, x); + assertEquals(5, y); + assertEquals(3, z); + } + + object[Symbol.unscopables] = {x: true}; + with (object) { + assertEquals(1, x); + assertEquals(5, y); + assertEquals(3, z); + } + + object[Symbol.unscopables] = {x: 0, y: true}; + with (object) { + assertEquals(1, x); + assertEquals(2, y); + assertEquals(3, z); + } +} +runTest(TestBasics); + + +function TestUnscopableChain(object) { + var x = 1; + object.x = 2; + + with (object) { + assertEquals(2, x); + } + + object[Symbol.unscopables] = { + __proto__: {x: true} + }; + with (object) { + assertEquals(1, x); + } +} +runTest(TestUnscopableChain); + + +function TestBasicsSet(object) { + var x = 1; + object.x = 2; + + with (object) { + assertEquals(2, x); + } + + object[Symbol.unscopables] = {x: true}; + with (object) { + assertEquals(1, x); + x = 3; + assertEquals(3, x); + } + + assertEquals(3, x); + assertEquals(2, object.x); +} +runTest(TestBasicsSet); + + +function TestOnProto(object, proto) { + var x = 1; + var y = 2; + var z = 3; + proto.x = 4; + + Object.setPrototypeOf(object, proto); + object.y = 5; + + with (object) { + assertEquals(4, x); + assertEquals(5, y); + assertEquals(3, z); + } + + proto[Symbol.unscopables] = {x: true}; + with (object) { + assertEquals(1, x); + assertEquals(5, y); + assertEquals(3, z); + } + + object[Symbol.unscopables] = {y: true}; + with (object) { + assertEquals(4, x); + assertEquals(2, y); + assertEquals(3, z); + } + + proto[Symbol.unscopables] = {y: true}; + object[Symbol.unscopables] = {x: true}; + with (object) { + assertEquals(1, x); + assertEquals(5, y); + assertEquals(3, z); + } +} +runTest(TestOnProto); + + +function TestSetBlockedOnProto(object, proto) { + var x = 1; + object.x = 2; + + with (object) { + assertEquals(2, x); + } + + Object.setPrototypeOf(object, proto); + proto[Symbol.unscopables] = {x: true}; + with (object) { + assertEquals(1, x); + x = 3; + assertEquals(3, x); + } + + assertEquals(3, x); + assertEquals(2, object.x); +} +runTest(TestSetBlockedOnProto); + + +function TestNonObject(object) { + var x = 1; + var y = 2; + object.x = 3; + object.y = 4; + + object[Symbol.unscopables] = 'xy'; + with (object) { + assertEquals(3, x); + assertEquals(4, y); + } + + object[Symbol.unscopables] = null; + with (object) { + assertEquals(3, x); + assertEquals(4, y); + } +} +runTest(TestNonObject); + + +function TestChangeDuringWith(object) { + var x = 1; + var y = 2; + object.x = 3; + object.y = 4; + + with (object) { + assertEquals(3, x); + 
assertEquals(4, y); + object[Symbol.unscopables] = {x: true}; + assertEquals(1, x); + assertEquals(4, y); + } +} +runTest(TestChangeDuringWith); + + +function TestChangeDuringWithWithPossibleOptimization(object) { + var x = 1; + object.x = 2; + with (object) { + for (var i = 0; i < 1000; i++) { + if (i === 500) object[Symbol.unscopables] = {x: true}; + assertEquals(i < 500 ? 2: 1, x); + } + } +} +TestChangeDuringWithWithPossibleOptimization({}); + + +function TestChangeDuringWithWithPossibleOptimization2(object) { + var x = 1; + object.x = 2; + object[Symbol.unscopables] = {x: true}; + with (object) { + for (var i = 0; i < 1000; i++) { + if (i === 500) delete object[Symbol.unscopables]; + assertEquals(i < 500 ? 1 : 2, x); + } + } +} +TestChangeDuringWithWithPossibleOptimization2({}); + + +function TestChangeDuringWithWithPossibleOptimization3(object) { + var x = 1; + object.x = 2; + object[Symbol.unscopables] = {}; + with (object) { + for (var i = 0; i < 1000; i++) { + if (i === 500) object[Symbol.unscopables].x = true; + assertEquals(i < 500 ? 2 : 1, x); + } + } +} +TestChangeDuringWithWithPossibleOptimization3({}); + + +function TestChangeDuringWithWithPossibleOptimization4(object) { + var x = 1; + object.x = 2; + object[Symbol.unscopables] = {x: true}; + with (object) { + for (var i = 0; i < 1000; i++) { + if (i === 500) delete object[Symbol.unscopables].x; + assertEquals(i < 500 ? 1 : 2, x); + } + } +} +TestChangeDuringWithWithPossibleOptimization4({}); + + +function TestAccessorReceiver(object, proto) { + var x = 'local'; + + Object.defineProperty(proto, 'x', { + get: function() { + assertEquals(object, this); + return this.x_; + }, + configurable: true + }); + proto.x_ = 'proto'; + + Object.setPrototypeOf(object, proto); + proto.x_ = 'object'; + + with (object) { + assertEquals('object', x); + } +} +runTest(TestAccessorReceiver); + + +function TestUnscopablesGetter(object) { + // This test gets really messy when object is the global since the assert + // functions are properties on the global object and the call count gets + // completely different. + if (object === global) return; + + var x = 'local'; + object.x = 'object'; + + var callCount = 0; + Object.defineProperty(object, Symbol.unscopables, { + get: function() { + callCount++; + return {}; + }, + configurable: true + }); + with (object) { + assertEquals('object', x); + } + // Once for HasBinding + assertEquals(1, callCount); + + callCount = 0; + Object.defineProperty(object, Symbol.unscopables, { + get: function() { + callCount++; + return {x: true}; + }, + configurable: true + }); + with (object) { + assertEquals('local', x); + } + // Once for HasBinding + assertEquals(1, callCount); + + callCount = 0; + Object.defineProperty(object, Symbol.unscopables, { + get: function() { + callCount++; + return callCount == 1 ? {} : {x: true}; + }, + configurable: true + }); + with (object) { + x = 1; + } + // Once for HasBinding + assertEquals(1, callCount); + assertEquals(1, object.x); + assertEquals('local', x); + with (object) { + x = 2; + } + // One more HasBinding. 
+ assertEquals(2, callCount); + assertEquals(1, object.x); + assertEquals(2, x); +} +runTest(TestUnscopablesGetter); + + +var global = this; +function TestUnscopablesGetter2() { + var x = 'local'; + + var globalProto = Object.getPrototypeOf(global); + var protos = [{}, [], function() {}, global]; + var objects = [{}, [], function() {}]; + + protos.forEach(function(proto) { + objects.forEach(function(object) { + Object.defineProperty(proto, 'x', { + get: function() { + assertEquals(object, this); + return 'proto'; + }, + configurable: true + }); + + object.__proto__ = proto; + Object.defineProperty(object, 'x', { + get: function() { + assertEquals(object, this); + return 'object'; + }, + configurable: true + }); + + with (object) { + assertEquals('object', x); + } + + object[Symbol.unscopables] = {x: true}; + with (object) { + assertEquals('local', x); + } + + delete proto[Symbol.unscopables]; + delete object[Symbol.unscopables]; + }); + }); + + delete global.x; + Object.setPrototypeOf(global, globalProto); +} +TestUnscopablesGetter2(); + + +function TestSetterOnBlacklisted(object, proto) { + var x = 'local'; + Object.defineProperty(proto, 'x', { + set: function(x) { + assertUnreachable(); + }, + get: function() { + return 'proto'; + }, + configurable: true + }); + Object.setPrototypeOf(object, proto); + Object.defineProperty(object, 'x', { + get: function() { + return this.x_; + }, + set: function(x) { + this.x_ = x; + }, + configurable: true + }); + object.x_ = 1; + + with (object) { + x = 2; + assertEquals(2, x); + } + + assertEquals(2, object.x); + + object[Symbol.unscopables] = {x: true}; + + with (object) { + x = 3; + assertEquals(3, x); + } + + assertEquals(2, object.x); +} +runTest(TestSetterOnBlacklisted); + + +function TestObjectsAsUnscopables(object, unscopables) { + var x = 1; + object.x = 2; + + with (object) { + assertEquals(2, x); + object[Symbol.unscopables] = unscopables; + assertEquals(2, x); + } +} +runTest(TestObjectsAsUnscopables); + + +function TestAccessorOnUnscopables(object) { + var x = 1; + object.x = 2; + + var unscopables = { + get x() { + assertUnreachable(); + } + }; + + with (object) { + assertEquals(2, x); + object[Symbol.unscopables] = unscopables; + assertEquals(1, x); + } +} +runTest(TestAccessorOnUnscopables); + + +function TestLengthUnscopables(object, proto) { + var length = 2; + with (object) { + assertEquals(1, length); + object[Symbol.unscopables] = {length: true}; + assertEquals(2, length); + delete object[Symbol.unscopables]; + assertEquals(1, length); + } +} +TestLengthUnscopables([1], Array.prototype); +TestLengthUnscopables(function(x) {}, Function.prototype); +TestLengthUnscopables(new String('x'), String.prototype); + + +function TestFunctionNameUnscopables(object) { + var name = 'local'; + with (object) { + assertEquals('f', name); + object[Symbol.unscopables] = {name: true}; + assertEquals('local', name); + delete object[Symbol.unscopables]; + assertEquals('f', name); + } +} +TestFunctionNameUnscopables(function f() {}); + + +function TestFunctionPrototypeUnscopables() { + var prototype = 'local'; + var f = function() {}; + var g = function() {}; + Object.setPrototypeOf(f, g); + var fp = f.prototype; + var gp = g.prototype; + with (f) { + assertEquals(fp, prototype); + f[Symbol.unscopables] = {prototype: true}; + assertEquals('local', prototype); + delete f[Symbol.unscopables]; + assertEquals(fp, prototype); + } +} +TestFunctionPrototypeUnscopables(function() {}); + + +function TestFunctionArgumentsUnscopables() { + var func = function() { + 
var arguments = 'local'; + var args = func.arguments; + with (func) { + assertEquals(args, arguments); + func[Symbol.unscopables] = {arguments: true}; + assertEquals('local', arguments); + delete func[Symbol.unscopables]; + assertEquals(args, arguments); + } + } + func(1); +} +TestFunctionArgumentsUnscopables(); + + +function TestArgumentsLengthUnscopables() { + var func = function() { + var length = 'local'; + with (arguments) { + assertEquals(1, length); + arguments[Symbol.unscopables] = {length: true}; + assertEquals('local', length); + } + } + func(1); +} +TestArgumentsLengthUnscopables(); + + +function TestFunctionCallerUnscopables() { + var func = function() { + var caller = 'local'; + with (func) { + assertEquals(TestFunctionCallerUnscopables, caller); + func[Symbol.unscopables] = {caller: true}; + assertEquals('local', caller); + delete func[Symbol.unscopables]; + assertEquals(TestFunctionCallerUnscopables, caller); + } + } + func(1); +} +TestFunctionCallerUnscopables(); + + +function TestGetUnscopablesGetterThrows() { + var object = { + get x() { + assertUnreachable(); + } + }; + function CustomError() {} + Object.defineProperty(object, Symbol.unscopables, { + get: function() { + throw new CustomError(); + } + }); + assertThrows(function() { + with (object) { + x; + } + }, CustomError); +} +TestGetUnscopablesGetterThrows(); diff --git a/deps/v8/test/mjsunit/es6/weak_collections.js b/deps/v8/test/mjsunit/es6/weak_collections.js deleted file mode 100644 index 74235e7d2c7..00000000000 --- a/deps/v8/test/mjsunit/es6/weak_collections.js +++ /dev/null @@ -1,333 +0,0 @@ -// Copyright 2012 the V8 project authors. All rights reserved. -// Redistribution and use in source and binary forms, with or without -// modification, are permitted provided that the following conditions are -// met: -// -// * Redistributions of source code must retain the above copyright -// notice, this list of conditions and the following disclaimer. -// * Redistributions in binary form must reproduce the above -// copyright notice, this list of conditions and the following -// disclaimer in the documentation and/or other materials provided -// with the distribution. -// * Neither the name of Google Inc. nor the names of its -// contributors may be used to endorse or promote products derived -// from this software without specific prior written permission. -// -// THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS -// "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT -// LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR -// A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT -// OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, -// SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT -// LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, -// DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY -// THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT -// (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE -// OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. - -// Flags: --expose-gc --allow-natives-syntax - - -// Note: this test is superseded by harmony/collections.js. -// IF YOU CHANGE THIS FILE, apply the same changes to harmony/collections.js! -// TODO(rossberg): Remove once non-weak collections have caught up. - -// Test valid getter and setter calls on WeakSets. 
-function TestValidSetCalls(m) { - assertDoesNotThrow(function () { m.add(new Object) }); - assertDoesNotThrow(function () { m.has(new Object) }); - assertDoesNotThrow(function () { m.delete(new Object) }); -} -TestValidSetCalls(new WeakSet); - - -// Test valid getter and setter calls on WeakMaps -function TestValidMapCalls(m) { - assertDoesNotThrow(function () { m.get(new Object) }); - assertDoesNotThrow(function () { m.set(new Object) }); - assertDoesNotThrow(function () { m.has(new Object) }); - assertDoesNotThrow(function () { m.delete(new Object) }); -} -TestValidMapCalls(new WeakMap); - - -// Test invalid getter and setter calls for WeakMap -function TestInvalidCalls(m) { - assertThrows(function () { m.get(undefined) }, TypeError); - assertThrows(function () { m.set(undefined, 0) }, TypeError); - assertThrows(function () { m.get(null) }, TypeError); - assertThrows(function () { m.set(null, 0) }, TypeError); - assertThrows(function () { m.get(0) }, TypeError); - assertThrows(function () { m.set(0, 0) }, TypeError); - assertThrows(function () { m.get('a-key') }, TypeError); - assertThrows(function () { m.set('a-key', 0) }, TypeError); -} -TestInvalidCalls(new WeakMap); - - -// Test expected behavior for WeakSets -function TestSet(set, key) { - assertFalse(set.has(key)); - assertSame(undefined, set.add(key)); - assertTrue(set.has(key)); - assertTrue(set.delete(key)); - assertFalse(set.has(key)); - assertFalse(set.delete(key)); - assertFalse(set.has(key)); -} -function TestSetBehavior(set) { - for (var i = 0; i < 20; i++) { - TestSet(set, new Object); - TestSet(set, i); - TestSet(set, i / 100); - TestSet(set, 'key-' + i); - } - var keys = [ +0, -0, +Infinity, -Infinity, true, false, null, undefined ]; - for (var i = 0; i < keys.length; i++) { - TestSet(set, keys[i]); - } -} -TestSet(new WeakSet, new Object); - - -// Test expected mapping behavior for WeakMaps -function TestMapping(map, key, value) { - assertSame(undefined, map.set(key, value)); - assertSame(value, map.get(key)); -} -function TestMapBehavior1(m) { - TestMapping(m, new Object, 23); - TestMapping(m, new Object, 'the-value'); - TestMapping(m, new Object, new Object); -} -TestMapBehavior1(new WeakMap); - - -// Test expected querying behavior of WeakMaps -function TestQuery(m) { - var key = new Object; - var values = [ 'x', 0, +Infinity, -Infinity, true, false, null, undefined ]; - for (var i = 0; i < values.length; i++) { - TestMapping(m, key, values[i]); - assertTrue(m.has(key)); - assertFalse(m.has(new Object)); - } -} -TestQuery(new WeakMap); - - -// Test expected deletion behavior of WeakMaps -function TestDelete(m) { - var key = new Object; - TestMapping(m, key, 'to-be-deleted'); - assertTrue(m.delete(key)); - assertFalse(m.delete(key)); - assertFalse(m.delete(new Object)); - assertSame(m.get(key), undefined); -} -TestDelete(new WeakMap); - - -// Test GC of WeakMaps with entry -function TestGC1(m) { - var key = new Object; - m.set(key, 'not-collected'); - gc(); - assertSame('not-collected', m.get(key)); -} -TestGC1(new WeakMap); - - -// Test GC of WeakMaps with chained entries -function TestGC2(m) { - var head = new Object; - for (key = head, i = 0; i < 10; i++, key = m.get(key)) { - m.set(key, new Object); - } - gc(); - var count = 0; - for (key = head; key != undefined; key = m.get(key)) { - count++; - } - assertEquals(11, count); -} -TestGC2(new WeakMap); - - -// Test property attribute [[Enumerable]] -function TestEnumerable(func) { - function props(x) { - var array = []; - for (var p in x) array.push(p); - return 
array.sort(); - } - assertArrayEquals([], props(func)); - assertArrayEquals([], props(func.prototype)); - assertArrayEquals([], props(new func())); -} -TestEnumerable(WeakMap); -TestEnumerable(WeakSet); - - -// Test arbitrary properties on WeakMaps -function TestArbitrary(m) { - function TestProperty(map, property, value) { - map[property] = value; - assertEquals(value, map[property]); - } - for (var i = 0; i < 20; i++) { - TestProperty(m, i, 'val' + i); - TestProperty(m, 'foo' + i, 'bar' + i); - } - TestMapping(m, new Object, 'foobar'); -} -TestArbitrary(new WeakMap); - - -// Test direct constructor call -assertThrows(function() { WeakMap(); }, TypeError); -assertThrows(function() { WeakSet(); }, TypeError); - - -// Test some common JavaScript idioms for WeakMaps -var m = new WeakMap; -assertTrue(m instanceof WeakMap); -assertTrue(WeakMap.prototype.set instanceof Function) -assertTrue(WeakMap.prototype.get instanceof Function) -assertTrue(WeakMap.prototype.has instanceof Function) -assertTrue(WeakMap.prototype.delete instanceof Function) -assertTrue(WeakMap.prototype.clear instanceof Function) - - -// Test some common JavaScript idioms for WeakSets -var s = new WeakSet; -assertTrue(s instanceof WeakSet); -assertTrue(WeakSet.prototype.add instanceof Function) -assertTrue(WeakSet.prototype.has instanceof Function) -assertTrue(WeakSet.prototype.delete instanceof Function) -assertTrue(WeakSet.prototype.clear instanceof Function) - - -// Test class of instance and prototype. -assertEquals("WeakMap", %_ClassOf(new WeakMap)) -assertEquals("Object", %_ClassOf(WeakMap.prototype)) -assertEquals("WeakSet", %_ClassOf(new WeakSet)) -assertEquals("Object", %_ClassOf(WeakMap.prototype)) - - -// Test name of constructor. -assertEquals("WeakMap", WeakMap.name); -assertEquals("WeakSet", WeakSet.name); - - -// Test prototype property of WeakMap and WeakSet. -function TestPrototype(C) { - assertTrue(C.prototype instanceof Object); - assertEquals({ - value: {}, - writable: false, - enumerable: false, - configurable: false - }, Object.getOwnPropertyDescriptor(C, "prototype")); -} -TestPrototype(WeakMap); -TestPrototype(WeakSet); - - -// Test constructor property of the WeakMap and WeakSet prototype. -function TestConstructor(C) { - assertFalse(C === Object.prototype.constructor); - assertSame(C, C.prototype.constructor); - assertSame(C, (new C).__proto__.constructor); -} -TestConstructor(WeakMap); -TestConstructor(WeakSet); - - -// Test the WeakMap and WeakSet global properties themselves. -function TestDescriptor(global, C) { - assertEquals({ - value: C, - writable: true, - enumerable: false, - configurable: true - }, Object.getOwnPropertyDescriptor(global, C.name)); -} -TestDescriptor(this, WeakMap); -TestDescriptor(this, WeakSet); - - -// Regression test for WeakMap prototype. -assertTrue(WeakMap.prototype.constructor === WeakMap) -assertTrue(Object.getPrototypeOf(WeakMap.prototype) === Object.prototype) - - -// Regression test for issue 1617: The prototype of the WeakMap constructor -// needs to be unique (i.e. different from the one of the Object constructor). 
-assertFalse(WeakMap.prototype === Object.prototype); -var o = Object.create({}); -assertFalse("get" in o); -assertFalse("set" in o); -assertEquals(undefined, o.get); -assertEquals(undefined, o.set); -var o = Object.create({}, { myValue: { - value: 10, - enumerable: false, - configurable: true, - writable: true -}}); -assertEquals(10, o.myValue); - - -// Regression test for issue 1884: Invoking any of the methods for Harmony -// maps, sets, or weak maps, with a wrong type of receiver should be throwing -// a proper TypeError. -var alwaysBogus = [ undefined, null, true, "x", 23, {} ]; -var bogusReceiversTestSet = [ - { proto: WeakMap.prototype, - funcs: [ 'get', 'set', 'has', 'delete' ], - receivers: alwaysBogus.concat([ new WeakSet ]), - }, - { proto: WeakSet.prototype, - funcs: [ 'add', 'has', 'delete' ], - receivers: alwaysBogus.concat([ new WeakMap ]), - }, -]; -function TestBogusReceivers(testSet) { - for (var i = 0; i < testSet.length; i++) { - var proto = testSet[i].proto; - var funcs = testSet[i].funcs; - var receivers = testSet[i].receivers; - for (var j = 0; j < funcs.length; j++) { - var func = proto[funcs[j]]; - for (var k = 0; k < receivers.length; k++) { - assertThrows(function () { func.call(receivers[k], {}) }, TypeError); - } - } - } -} -TestBogusReceivers(bogusReceiversTestSet); - - -// Test WeakMap clear -(function() { - var k = new Object(); - var w = new WeakMap(); - w.set(k, 23); - assertTrue(w.has(k)); - assertEquals(23, w.get(k)); - w.clear(); - assertFalse(w.has(k)); - assertEquals(undefined, w.get(k)); -})(); - - -// Test WeakSet clear -(function() { - var k = new Object(); - var w = new WeakSet(); - w.add(k); - assertTrue(w.has(k)); - w.clear(); - assertFalse(w.has(k)); -})(); diff --git a/deps/v8/test/mjsunit/es7/object-observe-debug-event.js b/deps/v8/test/mjsunit/es7/object-observe-debug-event.js new file mode 100644 index 00000000000..ed627642cc0 --- /dev/null +++ b/deps/v8/test/mjsunit/es7/object-observe-debug-event.js @@ -0,0 +1,51 @@ +// Copyright 2014 the V8 project authors. All rights reserved. +// Use of this source code is governed by a BSD-style license that can be +// found in the LICENSE file. + +// Flags: --expose-debug-as debug + +Debug = debug.Debug; + +var base_id = -1; +var exception = null; +var expected = [ + "enqueue #1", + "willHandle #1", + "didHandle #1", +]; + +function assertLog(msg) { + print(msg); + assertTrue(expected.length > 0); + assertEquals(expected.shift(), msg); + if (!expected.length) { + Debug.setListener(null); + } +} + +function listener(event, exec_state, event_data, data) { + if (event != Debug.DebugEvent.AsyncTaskEvent) return; + try { + if (base_id < 0) + base_id = event_data.id(); + var id = event_data.id() - base_id + 1; + assertEquals("Object.observe", event_data.name()); + assertLog(event_data.type() + " #" + id); + } catch (e) { + print(e + e.stack) + exception = e; + } +} + +Debug.setListener(listener); + +var obj = {}; +Object.observe(obj, function(changes) { + print(change.type + " " + change.name + " " + change.oldValue); +}); + +obj.foo = 1; +obj.zoo = 2; +obj.foo = 3; + +assertNull(exception); diff --git a/deps/v8/test/mjsunit/es7/object-observe-runtime.js b/deps/v8/test/mjsunit/es7/object-observe-runtime.js new file mode 100644 index 00000000000..769cd1b2969 --- /dev/null +++ b/deps/v8/test/mjsunit/es7/object-observe-runtime.js @@ -0,0 +1,18 @@ +// Copyright 2014 the V8 project authors. All rights reserved. 
+// Use of this source code is governed by a BSD-style license that can be +// found in the LICENSE file. + +// Flags: --allow-natives-syntax + +// These tests are meant to ensure that that the Object.observe runtime +// functions are hardened. + +var obj = {}; +%SetIsObserved(obj); +assertThrows(function() { + %SetIsObserved(obj); +}); + +assertThrows(function() { + %SetIsObserved(this); +}); diff --git a/deps/v8/test/mjsunit/es7/object-observe.js b/deps/v8/test/mjsunit/es7/object-observe.js index 7bb579f0c14..5af205eadf8 100644 --- a/deps/v8/test/mjsunit/es7/object-observe.js +++ b/deps/v8/test/mjsunit/es7/object-observe.js @@ -25,8 +25,8 @@ // (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE // OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. -// Flags: --harmony-proxies --harmony-collections -// Flags: --harmony-symbols --allow-natives-syntax +// Flags: --harmony-proxies +// Flags: --allow-natives-syntax var allObservers = []; function reset() { @@ -1234,8 +1234,9 @@ observer2.assertCallbackRecords([ // Updating length on large (slow) array reset(); -var slow_arr = new Array(1000000000); +var slow_arr = %NormalizeElements([]); slow_arr[500000000] = 'hello'; +slow_arr.length = 1000000000; Object.observe(slow_arr, observer.callback); var spliceRecords; function slowSpliceCallback(records) { @@ -1685,8 +1686,10 @@ var obj = { __proto__: fun }; Object.observe(obj, observer.callback); obj.prototype = 7; Object.deliverChangeRecords(observer.callback); -observer.assertNotCalled(); - +observer.assertRecordCount(1); +observer.assertCallbackRecords([ + { object: obj, name: 'prototype', type: 'add' }, +]); // Check that changes in observation status are detected in all IC states and // in optimized code, especially in cases usually using fast elements. diff --git a/deps/v8/test/mjsunit/fast-non-keyed.js b/deps/v8/test/mjsunit/fast-non-keyed.js index c2f7fc7f968..6a300ab1e36 100644 --- a/deps/v8/test/mjsunit/fast-non-keyed.js +++ b/deps/v8/test/mjsunit/fast-non-keyed.js @@ -108,6 +108,6 @@ var obj3 = {}; AddProps3(obj3); assertTrue(%HasFastProperties(obj3)); -var bad_name = {}; -bad_name[".foo"] = 0; -assertFalse(%HasFastProperties(bad_name)); +var funny_name = {}; +funny_name[".foo"] = 0; +assertTrue(%HasFastProperties(funny_name)); diff --git a/deps/v8/test/mjsunit/fast-prototype.js b/deps/v8/test/mjsunit/fast-prototype.js index cdcc1a9ed68..98647612f6d 100644 --- a/deps/v8/test/mjsunit/fast-prototype.js +++ b/deps/v8/test/mjsunit/fast-prototype.js @@ -50,6 +50,8 @@ function DoProtoMagic(proto, set__proto__) { (new Sub()).__proto__ = proto; } else { Sub.prototype = proto; + // Need to instantiate Sub to mark .prototype as prototype. + new Sub(); } } @@ -72,10 +74,15 @@ function test(use_new, add_first, set__proto__, same_map_as) { // Still fast assertTrue(%HasFastProperties(proto)); AddProps(proto); - // After we add all those properties it went slow mode again :-( - assertFalse(%HasFastProperties(proto)); + if (set__proto__) { + // After we add all those properties it went slow mode again :-( + assertFalse(%HasFastProperties(proto)); + } else { + // .prototype keeps it fast. 
+ assertTrue(%HasFastProperties(proto)); + } } - if (same_map_as && !add_first) { + if (same_map_as && !add_first && set__proto__) { assertTrue(%HaveSameMap(same_map_as, proto)); } return proto; diff --git a/deps/v8/test/mjsunit/global-const-var-conflicts.js b/deps/v8/test/mjsunit/global-const-var-conflicts.js index 2fca96f9f85..3b87e3d7be3 100644 --- a/deps/v8/test/mjsunit/global-const-var-conflicts.js +++ b/deps/v8/test/mjsunit/global-const-var-conflicts.js @@ -41,17 +41,20 @@ try { eval("var b"); } catch (e) { caught++; assertTrue(e instanceof TypeError); assertEquals(0, b); try { eval("var b = 1"); } catch (e) { caught++; assertTrue(e instanceof TypeError); } assertEquals(0, b); +assertEquals(0, caught); eval("var c"); try { eval("const c"); } catch (e) { caught++; assertTrue(e instanceof TypeError); } assertTrue(typeof c == 'undefined'); +assertEquals(1, caught); try { eval("const c = 1"); } catch (e) { caught++; assertTrue(e instanceof TypeError); } -assertEquals(1, c); +assertEquals(undefined, c); +assertEquals(2, caught); eval("var d = 0"); try { eval("const d"); } catch (e) { caught++; assertTrue(e instanceof TypeError); } -assertEquals(undefined, d); +assertEquals(0, d); +assertEquals(3, caught); try { eval("const d = 1"); } catch (e) { caught++; assertTrue(e instanceof TypeError); } -assertEquals(1, d); - -assertEquals(0, caught); +assertEquals(0, d); +assertEquals(4, caught); diff --git a/deps/v8/test/mjsunit/harmony/array-fill.js b/deps/v8/test/mjsunit/harmony/array-fill.js index 571233f6fa1..eae18d113bf 100644 --- a/deps/v8/test/mjsunit/harmony/array-fill.js +++ b/deps/v8/test/mjsunit/harmony/array-fill.js @@ -4,7 +4,7 @@ // Flags: --harmony-arrays -assertEquals(1, Array.prototype.find.length); +assertEquals(1, Array.prototype.fill.length); assertArrayEquals([].fill(8), []); assertArrayEquals([0, 0, 0, 0, 0].fill(), [undefined, undefined, undefined, undefined, undefined]); @@ -22,7 +22,7 @@ assertArrayEquals([0, 0, 0, 0, 0].fill(8, -1, -3), [0, 0, 0, 0, 0]); assertArrayEquals([0, 0, 0, 0, 0].fill(8, undefined, 4), [8, 8, 8, 8, 0]); assertArrayEquals([ , , , , 0].fill(8, 1, 3), [, 8, 8, , 0]); -// If the range if empty, the array is not actually modified and +// If the range is empty, the array is not actually modified and // should not throw, even when applied to a frozen object. assertArrayEquals(Object.freeze([1, 2, 3]).fill(0, 0, 0), [1, 2, 3]); diff --git a/deps/v8/test/mjsunit/harmony/arrow-functions.js b/deps/v8/test/mjsunit/harmony/arrow-functions.js new file mode 100644 index 00000000000..22b1c94f7ff --- /dev/null +++ b/deps/v8/test/mjsunit/harmony/arrow-functions.js @@ -0,0 +1,48 @@ +// Copyright 2014 the V8 project authors. All rights reserved. +// Use of this source code is governed by a BSD-style license that can be +// found in the LICENSE file. + +// Flags: --harmony-arrow-functions + +// Arrow functions are like functions, except they throw when using the +// "new" operator on them. 
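+// (An extra illustrative case, not part of the wiki test set below; `inc`
+// is a hypothetical name. A named arrow behaves the same way: calling
+// works, constructing throws, as the assertions that follow also check.)
+var inc = x => x + 1;
+assertEquals(2, inc(1));
+assertThrows(function() { new inc(); }, TypeError);
+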
+assertEquals("function", typeof (() => {})); +assertEquals(Function.prototype, Object.getPrototypeOf(() => {})); +assertThrows("new (() => {})", TypeError); + +// Check the different syntax variations +assertEquals(1, (() => 1)()); +assertEquals(2, (a => a + 1)(1)); +assertEquals(3, (() => { return 3; })()); +assertEquals(4, (a => { return a + 3; })(1)); +assertEquals(5, ((a, b) => a + b)(1, 4)); +assertEquals(6, ((a, b) => { return a + b; })(1, 5)); + +// The following are tests from: +// http://wiki.ecmascript.org/doku.php?id=harmony:arrow_function_syntax + +// Empty arrow function returns undefined +var empty = () => {}; +assertEquals(undefined, empty()); + +// Single parameter case needs no parentheses around parameter list +var identity = x => x; +assertEquals(empty, identity(empty)); + +// No need for parentheses even for lower-precedence expression body +var square = x => x * x; +assertEquals(9, square(3)); + +// Parenthesize the body to return an object literal expression +var key_maker = val => ({key: val}); +assertEquals(empty, key_maker(empty).key); + +// Statement body needs braces, must use 'return' explicitly if not void +var evens = [0, 2, 4, 6, 8]; +assertEquals([1, 3, 5, 7, 9], evens.map(v => v + 1)); + +var fives = []; +[1, 2, 3, 4, 5, 6, 7, 8, 9, 10].forEach(v => { + if (v % 5 === 0) fives.push(v); +}); +assertEquals([5, 10], fives); diff --git a/deps/v8/test/mjsunit/harmony/block-conflicts.js b/deps/v8/test/mjsunit/harmony/block-conflicts.js index 3aa9d222230..1eedb682aa5 100644 --- a/deps/v8/test/mjsunit/harmony/block-conflicts.js +++ b/deps/v8/test/mjsunit/harmony/block-conflicts.js @@ -29,7 +29,6 @@ // Test for conflicting variable bindings. -// TODO(ES6): properly activate extended mode "use strict"; function CheckException(e) { @@ -40,9 +39,18 @@ function CheckException(e) { } +function TestGlobal(s,e) { + try { + return eval(s + e); + } catch (x) { + return CheckException(x); + } +} + + function TestFunction(s,e) { try { - return eval("(function(){" + s + ";return " + e + "})")(); + return eval("(function(){" + s + " return " + e + "})")(); } catch (x) { return CheckException(x); } @@ -51,7 +59,7 @@ function TestFunction(s,e) { function TestBlock(s,e) { try { - return eval("(function(){ if (true) { " + s + "; }; return " + e + "})")(); + return eval("(function(){ {" + s + "} return " + e + "})")(); } catch (x) { return CheckException(x); } @@ -60,76 +68,123 @@ function TestBlock(s,e) { function TestAll(expected,s,opt_e) { var e = ""; var msg = s; - if (opt_e) { e = opt_e; msg += "; " + opt_e; } - assertEquals(expected, TestFunction(s,e), "function:'" + msg + "'"); - assertEquals(expected, TestBlock(s,e), "block:'" + msg + "'"); + if (opt_e) { e = opt_e; msg += opt_e; } + assertEquals(expected === 'LocalConflict' ? 'NoConflict' : expected, + TestGlobal(s,e), "global:'" + msg + "'"); + assertEquals(expected === 'LocalConflict' ? 'NoConflict' : expected, + TestFunction(s,e), "function:'" + msg + "'"); + assertEquals(expected === 'LocalConflict' ? 
'Conflict' : expected, + TestBlock(s,e), "block:'" + msg + "'"); } function TestConflict(s) { TestAll('Conflict', s); - TestAll('Conflict', 'eval("' + s + '")'); + TestAll('Conflict', 'eval("' + s + '");'); } - function TestNoConflict(s) { TestAll('NoConflict', s, "'NoConflict'"); - TestAll('NoConflict', 'eval("' + s + '")', "'NoConflict'"); + TestAll('NoConflict', 'eval("' + s + '");', "'NoConflict'"); } -var letbinds = [ "let x", - "let x = 0", - "let x = undefined", - "function x() { }", - "let x = function() {}", - "let x, y", - "let y, x", - "const x = 0", - "const x = undefined", - "const x = function() {}", - "const x = 2, y = 3", - "const y = 4, x = 5", +function TestLocalConflict(s) { + TestAll('LocalConflict', s, "'NoConflict'"); + TestAll('NoConflict', 'eval("' + s + '");', "'NoConflict'"); +} + +var letbinds = [ "let x;", + "let x = 0;", + "let x = undefined;", + "let x = function() {};", + "let x, y;", + "let y, x;", + "const x = 0;", + "const x = undefined;", + "const x = function() {};", + "const x = 2, y = 3;", + "const y = 4, x = 5;", ]; -var varbinds = [ "var x", - "var x = 0", - "var x = undefined", - "var x = function() {}", - "var x, y", - "var y, x", +var varbinds = [ "var x;", + "var x = 0;", + "var x = undefined;", + "var x = function() {};", + "var x, y;", + "var y, x;", ]; - +var funbind = "function x() {}"; for (var l = 0; l < letbinds.length; ++l) { // Test conflicting let/var bindings. for (var v = 0; v < varbinds.length; ++v) { // Same level. - TestConflict(letbinds[l] +'; ' + varbinds[v]); - TestConflict(varbinds[v] +'; ' + letbinds[l]); + TestConflict(letbinds[l] + varbinds[v]); + TestConflict(varbinds[v] + letbinds[l]); // Different level. - TestConflict(letbinds[l] +'; {' + varbinds[v] + '; }'); - TestConflict('{ ' + varbinds[v] +'; }' + letbinds[l]); + TestConflict(letbinds[l] + '{' + varbinds[v] + '}'); + TestConflict('{' + varbinds[v] +'}' + letbinds[l]); + TestNoConflict(varbinds[v] + '{' + letbinds[l] + '}'); + TestNoConflict('{' + letbinds[l] + '}' + varbinds[v]); + // For loop. + TestConflict('for (' + letbinds[l] + '0;) {' + varbinds[v] + '}'); + TestNoConflict('for (' + varbinds[v] + '0;) {' + letbinds[l] + '}'); } // Test conflicting let/let bindings. for (var k = 0; k < letbinds.length; ++k) { // Same level. - TestConflict(letbinds[l] +'; ' + letbinds[k]); - TestConflict(letbinds[k] +'; ' + letbinds[l]); + TestConflict(letbinds[l] + letbinds[k]); + TestConflict(letbinds[k] + letbinds[l]); // Different level. - TestNoConflict(letbinds[l] +'; { ' + letbinds[k] + '; }'); - TestNoConflict('{ ' + letbinds[k] +'; } ' + letbinds[l]); + TestNoConflict(letbinds[l] + '{ ' + letbinds[k] + '}'); + TestNoConflict('{' + letbinds[k] +'} ' + letbinds[l]); + // For loop. + TestNoConflict('for (' + letbinds[l] + '0;) {' + letbinds[k] + '}'); + TestNoConflict('for (' + letbinds[k] + '0;) {' + letbinds[l] + '}'); } + // Test conflicting function/let bindings. + // Same level. + TestConflict(letbinds[l] + funbind); + TestConflict(funbind + letbinds[l]); + // Different level. + TestNoConflict(letbinds[l] + '{' + funbind + '}'); + TestNoConflict('{' + funbind + '}' + letbinds[l]); + TestNoConflict(funbind + '{' + letbinds[l] + '}'); + TestNoConflict('{' + letbinds[l] + '}' + funbind); + // For loop. + TestNoConflict('for (' + letbinds[l] + '0;) {' + funbind + '}'); + // Test conflicting parameter/let bindings. 
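+  // (A let or const in the function body may not redeclare a parameter
+  // name; contrast the parameter/var cases below, which do not conflict.)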
-  TestConflict('(function (x) { ' + letbinds[l] + '; })()');
+  TestConflict('(function(x) {' + letbinds[l] + '})();');
+}
+
+// Test conflicting function/var bindings.
+for (var v = 0; v < varbinds.length; ++v) {
+  // Same level.
+  TestLocalConflict(varbinds[v] + funbind);
+  TestLocalConflict(funbind + varbinds[v]);
+  // Different level.
+  TestLocalConflict(funbind + '{' + varbinds[v] + '}');
+  TestLocalConflict('{' + varbinds[v] +'}' + funbind);
+  TestNoConflict(varbinds[v] + '{' + funbind + '}');
+  TestNoConflict('{' + funbind + '}' + varbinds[v]);
+  // For loop.
+  TestNoConflict('for (' + varbinds[v] + '0;) {' + funbind + '}');
+}
 
 // Test conflicting catch/var bindings.
 for (var v = 0; v < varbinds.length; ++v) {
-  TestConflict('try {} catch (x) { ' + varbinds[v] + '; }');
+  TestConflict('try {} catch(x) {' + varbinds[v] + '}');
 }
 
 // Test conflicting parameter/var bindings.
 for (var v = 0; v < varbinds.length; ++v) {
-  TestNoConflict('(function (x) { ' + varbinds[v] + '; })()');
+  TestNoConflict('(function (x) {' + varbinds[v] + '})();');
 }
+
+// Test conflicting catch/function bindings.
+TestNoConflict('try {} catch(x) {' + funbind + '}');
+
+// Test conflicting parameter/function bindings.
+TestNoConflict('(function (x) {' + funbind + '})();');
diff --git a/deps/v8/test/mjsunit/harmony/block-const-assign.js b/deps/v8/test/mjsunit/harmony/block-const-assign.js
index 8297a558a40..b71729e8a20 100644
--- a/deps/v8/test/mjsunit/harmony/block-const-assign.js
+++ b/deps/v8/test/mjsunit/harmony/block-const-assign.js
@@ -30,9 +30,8 @@
 // Test that we throw early syntax errors in harmony mode
 // when using an immutable binding in an assignment or with
 // prefix/postfix decrement/increment operators.
-// TODO(ES6): properly activate extended mode
-"use strict";
+"use strict";
 
 // Function local const.
 function constDecl0(use) {
diff --git a/deps/v8/test/mjsunit/harmony/block-early-errors.js b/deps/v8/test/mjsunit/harmony/block-early-errors.js
index 791f001af0e..8ed5ea84ec7 100644
--- a/deps/v8/test/mjsunit/harmony/block-early-errors.js
+++ b/deps/v8/test/mjsunit/harmony/block-early-errors.js
@@ -30,7 +30,6 @@ function CheckException(e) {
   var string = e.toString();
   assertInstanceof(e, SyntaxError);
-  assertTrue(string.indexOf("Illegal let") >= 0);
 }
 
 function Check(str) {
@@ -49,7 +48,7 @@ function Check(str) {
 }
 
 // Check for early syntax errors when using let
-// declarations outside of extended mode.
+// declarations outside of strict mode.
 Check("let x;");
 Check("let x = 1;");
 Check("let x, y;");
diff --git a/deps/v8/test/mjsunit/harmony/block-for.js b/deps/v8/test/mjsunit/harmony/block-for.js
index e84f0d2fee8..110f1ccf45e 100644
--- a/deps/v8/test/mjsunit/harmony/block-for.js
+++ b/deps/v8/test/mjsunit/harmony/block-for.js
@@ -27,7 +27,6 @@
 
 // Flags: --harmony-scoping
 
-// TODO(ES6): properly activate extended mode
 "use strict";
 
 function props(x) {
@@ -93,7 +92,6 @@ assertEquals('ab', result);
 
 // Check that there is exactly one variable without initializer
 // in a for-in statement with let variables.
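 // (That is, `for (let x in obj)` is legal, while zero declarations,
 // multiple declarations, or any initializer is a SyntaxError, as the
 // following checks verify.)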
 assertThrows("function foo() { 'use strict'; for (let in {}) { } }", SyntaxError);
 assertThrows("function foo() { 'use strict'; for (let x = 3 in {}) { } }", SyntaxError);
 assertThrows("function foo() { 'use strict'; for (let x, y in {}) { } }", SyntaxError);
@@ -102,7 +100,7 @@ assertThrows("function foo() { 'use strict'; for (let x, y = 4 in {}) { } }", Sy
 assertThrows("function foo() { 'use strict'; for (let x = 3, y = 4 in {}) { } }", SyntaxError);
 
 
-// In a normal for statement the iteration variable is not
+// In a normal for statement the iteration variable is
 // freshly allocated for each iteration.
 function closures1() {
   let a = [];
@@ -110,7 +108,7 @@
     a.push(function () { return i; });
   }
   for (let j = 0; j < 5; ++j) {
-    assertEquals(5, a[j]());
+    assertEquals(j, a[j]());
   }
 }
 closures1();
@@ -123,13 +121,45 @@ function closures2() {
     b.push(function () { return j; });
   }
   for (let k = 0; k < 5; ++k) {
-    assertEquals(5, a[k]());
-    assertEquals(15, b[k]());
+    assertEquals(k, a[k]());
+    assertEquals(k + 10, b[k]());
   }
 }
 closures2();
 
 
+function closure_in_for_init() {
+  let a = [];
+  for (let i = 0, f = function() { return i }; i < 5; ++i) {
+    a.push(f);
+  }
+  for (let k = 0; k < 5; ++k) {
+    assertEquals(0, a[k]());
+  }
+}
+closure_in_for_init();
+
+
+function closure_in_for_cond() {
+  let a = [];
+  for (let i = 0; a.push(function () { return i; }), i < 5; ++i) { }
+  for (let k = 0; k < 5; ++k) {
+    assertEquals(k, a[k]());
+  }
+}
+closure_in_for_cond();
+
+
+function closure_in_for_next() {
+  let a = [];
+  for (let i = 0; i < 5; a.push(function () { return i; }), ++i) { }
+  for (let k = 0; k < 5; ++k) {
+    assertEquals(k + 1, a[k]());
+  }
+}
+closure_in_for_next();
+
+
 // In a for-in statement the iteration variable is fresh
 // for each iteration.
 function closures3(x) {
diff --git a/deps/v8/test/mjsunit/harmony/block-leave.js b/deps/v8/test/mjsunit/harmony/block-leave.js
index a7f6b694753..87d35b396dd 100644
--- a/deps/v8/test/mjsunit/harmony/block-leave.js
+++ b/deps/v8/test/mjsunit/harmony/block-leave.js
@@ -27,7 +27,6 @@
 
 // Flags: --harmony-scoping
 
-// TODO(ES6): properly activate extended mode
 "use strict";
 
 // We want to test the context chain shape. In each of the tests cases
diff --git a/deps/v8/test/mjsunit/harmony/block-let-crankshaft.js b/deps/v8/test/mjsunit/harmony/block-let-crankshaft.js
index 5888fd24f56..e8e00b200ea 100644
--- a/deps/v8/test/mjsunit/harmony/block-let-crankshaft.js
+++ b/deps/v8/test/mjsunit/harmony/block-let-crankshaft.js
@@ -27,12 +27,12 @@
 
 // Flags: --harmony-scoping --allow-natives-syntax
 
-// TODO(ES6): properly activate extended mode
 "use strict";
 
 // Check that the following functions are optimizable.
var functions = [ f1, f2, f3, f4, f5, f6, f7, f8, f9, f10, f11, f12, f13, f14, - f15, f16, f17, f18, f19, f20, f21, f22, f23 ]; + f15, f16, f17, f18, f19, f20, f21, f22, f23, f24, f25, f26, + f27, f28, f29, f30, f31, f32, f33]; for (var i = 0; i < functions.length; ++i) { var func = functions[i]; @@ -156,6 +156,184 @@ function f23() { (function() { x; }); } +function f24() { + let x = 1; + { + let x = 2; + { + let x = 3; + assertEquals(3, x); + } + assertEquals(2, x); + } + assertEquals(1, x); +} + +function f25() { + { + let x = 2; + L: { + let x = 3; + assertEquals(3, x); + break L; + assertTrue(false); + } + assertEquals(2, x); + } + assertTrue(true); +} + +function f26() { + { + let x = 1; + L: { + let x = 2; + { + let x = 3; + assertEquals(3, x); + break L; + assertTrue(false); + } + assertTrue(false); + } + assertEquals(1, x); + } +} + + +function f27() { + do { + let x = 4; + assertEquals(4,x); + { + let x = 5; + assertEquals(5, x); + continue; + assertTrue(false); + } + } while (false); +} + +function f28() { + label: for (var i = 0; i < 10; ++i) { + let x = 'middle' + i; + for (var j = 0; j < 10; ++j) { + let x = 'inner' + j; + continue label; + } + } +} + +function f29() { + // Verify that the context is correctly set in the stack frame after exiting + // from with. + + let x = 'outer'; + label: { + let x = 'inner'; + break label; + } + f(); // The context could be restored from the stack after the call. + assertEquals('outer', x); + + function f() { + assertEquals('outer', x); + }; +} + +function f30() { + let x = 'outer'; + for (var i = 0; i < 10; ++i) { + let x = 'inner'; + continue; + } + f(); + assertEquals('outer', x); + + function f() { + assertEquals('outer', x); + }; +} + +function f31() { + { + let x = 'outer'; + label: for (var i = 0; assertEquals('outer', x), i < 10; ++i) { + let x = 'middle' + i; + { + let x = 'inner' + j; + continue label; + } + } + assertEquals('outer', x); + } +} + +var c = true; + +function f32() { + { + let x = 'outer'; + L: { + { + let x = 'inner'; + if (c) { + break L; + } + } + foo(); + } + } + + function foo() { + return 'bar'; + } +} + +function f33() { + { + let x = 'outer'; + L: { + { + let x = 'inner'; + if (c) { + break L; + } + foo(); + } + } + } + + function foo() { + return 'bar'; + } +} + +function TestThrow() { + function f() { + let x = 'outer'; + { + let x = 'inner'; + throw x; + } + } + for (var i = 0; i < 5; i++) { + try { + f(); + } catch (e) { + assertEquals('inner', e); + } + } + %OptimizeFunctionOnNextCall(f); + try { + f(); + } catch (e) { + assertEquals('inner', e); + } + assertOptimized(f); +} + +TestThrow(); // Test that temporal dead zone semantics for function and block scoped // let bindings are handled by the optimizing compiler. 
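 // (The temporal dead zone: a let or const binding exists from the top of
 // its scope, but reading or writing it before its declaration executes
 // throws a ReferenceError; that is what the TestAll cases below exercise.)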
@@ -208,9 +386,59 @@ function TestFunctionContext(s) { } } +function TestBlockLocal(s) { + 'use strict'; + var func = eval("(function baz(){ { " + s + "; } })"); + print("Testing:"); + print(func); + for (var i = 0; i < 5; ++i) { + try { + func(); + assertUnreachable(); + } catch (e) { + assertInstanceof(e, ReferenceError); + } + } + %OptimizeFunctionOnNextCall(func); + try { + func(); + assertUnreachable(); + } catch (e) { + assertInstanceof(e, ReferenceError); + } +} + +function TestBlockContext(s) { + 'use strict'; + var func = eval("(function baz(){ { " + s + "; (function() { x; }); } })"); + print("Testing:"); + print(func); + for (var i = 0; i < 5; ++i) { + print(i); + try { + func(); + assertUnreachable(); + } catch (e) { + assertInstanceof(e, ReferenceError); + } + } + print("optimize"); + %OptimizeFunctionOnNextCall(func); + try { + print("call"); + func(); + assertUnreachable(); + } catch (e) { + print("catch"); + assertInstanceof(e, ReferenceError); + } +} + function TestAll(s) { TestFunctionLocal(s); TestFunctionContext(s); + TestBlockLocal(s); + TestBlockContext(s); } // Use before initialization in declaration statement. @@ -229,34 +457,28 @@ TestAll('x++; let x;'); TestAll('let y = x; const x = 1;'); -function f(x, b) { - let y = (b ? y : x) + 42; +function f(x) { + let y = x + 42; return y; } -function g(x, b) { +function g(x) { { - let y = (b ? y : x) + 42; + let y = x + 42; return y; } } for (var i=0; i<10; i++) { - f(i, false); - g(i, false); + f(i); + g(i); } %OptimizeFunctionOnNextCall(f); %OptimizeFunctionOnNextCall(g); -try { - f(42, true); -} catch (e) { - assertInstanceof(e, ReferenceError); -} +f(12); +g(12); -try { - g(42, true); -} catch (e) { - assertInstanceof(e, ReferenceError); -} +assertTrue(%GetOptimizationStatus(f) != 2); +assertTrue(%GetOptimizationStatus(g) != 2); diff --git a/deps/v8/test/mjsunit/harmony/block-let-declaration.js b/deps/v8/test/mjsunit/harmony/block-let-declaration.js index 4ddeefdbaab..44a0049a44e 100644 --- a/deps/v8/test/mjsunit/harmony/block-let-declaration.js +++ b/deps/v8/test/mjsunit/harmony/block-let-declaration.js @@ -28,7 +28,7 @@ // Flags: --harmony-scoping // Test let declarations in various settings. -// TODO(ES6): properly activate extended mode + "use strict"; // Global @@ -56,11 +56,11 @@ if (true) { // an exception in eval code during parsing, before even compiling or executing // the code. Thus the generated function is not called here. function TestLocalThrows(str, expect) { - assertThrows("(function(){ 'use strict'; " + str + "})", expect); + assertThrows("(function(arg){ 'use strict'; " + str + "})", expect); } function TestLocalDoesNotThrow(str) { - assertDoesNotThrow("(function(){ 'use strict'; " + str + "})()"); + assertDoesNotThrow("(function(arg){ 'use strict'; " + str + "})()"); } // Test let declarations in statement positions. @@ -108,6 +108,28 @@ TestLocalDoesNotThrow("for (;false;) var x;"); TestLocalDoesNotThrow("switch (true) { case true: var x; }"); TestLocalDoesNotThrow("switch (true) { default: var x; }"); +// Test that redeclarations of functions are only allowed in outermost scope. 
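+// (Inside a block, duplicating f with let, var, or function in any order
+// throws; at function top level, var/function duplicates remain legal, as
+// the non-throwing cases below show.)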
+TestLocalThrows("{ let f; var f; }"); +TestLocalThrows("{ var f; let f; }"); +TestLocalThrows("{ function f() {} let f; }"); +TestLocalThrows("{ let f; function f() {} }"); +TestLocalThrows("{ function f() {} var f; }"); +TestLocalThrows("{ var f; function f() {} }"); +TestLocalThrows("{ function f() {} function f() {} }"); +TestLocalThrows("function f() {} let f;"); +TestLocalThrows("let f; function f() {}"); +TestLocalDoesNotThrow("function arg() {}"); +TestLocalDoesNotThrow("function f() {} var f;"); +TestLocalDoesNotThrow("var f; function f() {}"); +TestLocalDoesNotThrow("function f() {} function f() {}"); + +function g(f) { + function f() { return 1 } + return f() +} +assertEquals(1, g(function() { return 2 })) + + // Test function declarations in source element and // sloppy statement positions. function f() { diff --git a/deps/v8/test/mjsunit/harmony/block-let-semantics.js b/deps/v8/test/mjsunit/harmony/block-let-semantics.js index d14e7cd3692..a37b795b0a5 100644 --- a/deps/v8/test/mjsunit/harmony/block-let-semantics.js +++ b/deps/v8/test/mjsunit/harmony/block-let-semantics.js @@ -27,7 +27,6 @@ // Flags: --harmony-scoping -// TODO(ES6): properly activate extended mode "use strict"; // Test temporal dead zone semantics of let bound variables in diff --git a/deps/v8/test/mjsunit/harmony/block-scoping.js b/deps/v8/test/mjsunit/harmony/block-scoping.js index 31194d99fde..001d9fbfd53 100644 --- a/deps/v8/test/mjsunit/harmony/block-scoping.js +++ b/deps/v8/test/mjsunit/harmony/block-scoping.js @@ -28,7 +28,6 @@ // Flags: --allow-natives-syntax --harmony-scoping // Test functionality of block scopes. -// TODO(ES6): properly activate extended mode "use strict"; // Hoisting of var declarations. @@ -40,8 +39,10 @@ function f1() { assertEquals(1, x) assertEquals(undefined, y) } +for (var j = 0; j < 5; ++j) f1(); +%OptimizeFunctionOnNextCall(f1); f1(); - +assertTrue(%GetOptimizationStatus(f1) != 2); // Dynamic lookup in and through block contexts. function f2(one) { @@ -59,8 +60,8 @@ function f2(one) { assertEquals(6, eval('v')); } } -f2(1); +f2(1); // Lookup in and through block contexts. function f3(one) { @@ -76,10 +77,13 @@ function f3(one) { assertEquals(4, z); assertEquals(5, u); assertEquals(6, v); - } } +for (var j = 0; j < 5; ++j) f3(1); +%OptimizeFunctionOnNextCall(f3); f3(1); +assertTrue(%GetOptimizationStatus(f3) != 2); + // Dynamic lookup from closure. diff --git a/deps/v8/test/mjsunit/harmony/debug-blockscopes.js b/deps/v8/test/mjsunit/harmony/debug-blockscopes.js index f56a306b6fa..2db49427ccd 100644 --- a/deps/v8/test/mjsunit/harmony/debug-blockscopes.js +++ b/deps/v8/test/mjsunit/harmony/debug-blockscopes.js @@ -29,7 +29,6 @@ // The functions used for testing backtraces. They are at the top to make the // testing of source line/column easier. -// TODO(ES6): properly activate extended mode "use strict"; // Get the Debug object exposed from the debug context global object. 
@@ -412,10 +411,12 @@ function for_loop_3() { listener_delegate = function(exec_state) { CheckScopeChain([debug.ScopeType.Block, + debug.ScopeType.Block, debug.ScopeType.Local, debug.ScopeType.Global], exec_state); CheckScopeContent({x:3}, 0, exec_state); - CheckScopeContent({}, 1, exec_state); + CheckScopeContent({x:3}, 1, exec_state); + CheckScopeContent({}, 2, exec_state); }; for_loop_3(); EndTest(); @@ -433,12 +434,14 @@ function for_loop_4() { listener_delegate = function(exec_state) { CheckScopeChain([debug.ScopeType.Block, + debug.ScopeType.Block, debug.ScopeType.Block, debug.ScopeType.Local, debug.ScopeType.Global], exec_state); CheckScopeContent({x:5}, 0, exec_state); CheckScopeContent({x:3}, 1, exec_state); - CheckScopeContent({}, 2, exec_state); + CheckScopeContent({x:3}, 2, exec_state); + CheckScopeContent({}, 3, exec_state); }; for_loop_4(); EndTest(); @@ -455,10 +458,12 @@ function for_loop_5() { listener_delegate = function(exec_state) { CheckScopeChain([debug.ScopeType.Block, + debug.ScopeType.Block, debug.ScopeType.Local, debug.ScopeType.Global], exec_state); CheckScopeContent({x:3,y:5}, 0, exec_state); - CheckScopeContent({}, 1, exec_state); + CheckScopeContent({x:3,y:5}, 1, exec_state); + CheckScopeContent({}, 2, exec_state); }; for_loop_5(); EndTest(); diff --git a/deps/v8/test/mjsunit/harmony/debug-evaluate-blockscopes.js b/deps/v8/test/mjsunit/harmony/debug-evaluate-blockscopes.js index d6ce8b2b6a8..16885d009e8 100644 --- a/deps/v8/test/mjsunit/harmony/debug-evaluate-blockscopes.js +++ b/deps/v8/test/mjsunit/harmony/debug-evaluate-blockscopes.js @@ -30,7 +30,6 @@ // Test debug evaluation for functions without local context, but with // nested catch contexts. -// TODO(ES6): properly activate extended mode "use strict"; var x; diff --git a/deps/v8/test/mjsunit/regress/regress-2336.js b/deps/v8/test/mjsunit/harmony/empty-for.js similarity index 70% rename from deps/v8/test/mjsunit/regress/regress-2336.js rename to deps/v8/test/mjsunit/harmony/empty-for.js index edfff60211f..02211260ff5 100644 --- a/deps/v8/test/mjsunit/regress/regress-2336.js +++ b/deps/v8/test/mjsunit/harmony/empty-for.js @@ -1,4 +1,4 @@ -// Copyright 2012 the V8 project authors. All rights reserved. +// Copyright 2014 the V8 project authors. All rights reserved. // Redistribution and use in source and binary forms, with or without // modification, are permitted provided that the following conditions are // met: @@ -25,29 +25,48 @@ // (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE // OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. -// Flags: --expose-debug-as debug --expose-gc +// Flags: --harmony-scoping -// Check that we can cope with a debug listener that runs in the -// GC epilogue and causes enough allocation to trigger a new GC during -// the epilogue. 
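+
+// Check that const, let and var declarations are accepted in for
+// statements whose condition and/or increment clauses are empty.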
+"use strict"; -var f = eval("(function f() { return 42; })"); +function for_const() { + for (const x = 1;;) { + if (x == 1) break; + } + for (const x = 1; x < 2;) { + if (x == 1) break; + } + for (const x = 1;; 0) { + if (x == 1) break; + } +} -Debug = debug.Debug; +for_const(); -var called = false; +function for_let() { + for (let x;;) { + if (!x) break; + } + for (let x; x < 2;) { + if (!x) break; + } + for (let x = 1;; x++) { + if (x == 2) break; + } +} + +for_let(); -function listener(event, exec_state, event_data, data) { - if (event == Debug.DebugEvent.ScriptCollected) { - if (!called) { - called = true; - gc(); - } +function for_var() { + for (var x;;) { + if (!x) break; + } + for (var x; x < 2;) { + if (!x) break; + } + for (var x = 1;; x++) { + if (x == 2) break; } -}; +} -Debug.scripts(); -Debug.setListener(listener); -f = void 0; -gc(); -assertTrue(called); +for_var(); diff --git a/deps/v8/test/mjsunit/harmony/generators-debug-liveedit.js b/deps/v8/test/mjsunit/harmony/generators-debug-liveedit.js new file mode 100644 index 00000000000..341ef483c53 --- /dev/null +++ b/deps/v8/test/mjsunit/harmony/generators-debug-liveedit.js @@ -0,0 +1,119 @@ +// Copyright 2014 the V8 project authors. All rights reserved. +// Use of this source code is governed by a BSD-style license that can be +// found in the LICENSE file. + +// Flags: --expose-debug-as debug --harmony-generators + +var Debug = debug.Debug; +var LiveEdit = Debug.LiveEdit; + +unique_id = 0; + +var Generator = (function*(){}).constructor; + +function assertIteratorResult(value, done, result) { + assertEquals({value: value, done: done}, result); +} + +function MakeGenerator() { + // Prevents eval script caching. + unique_id++; + return Generator('callback', + "/* " + unique_id + "*/\n" + + "yield callback();\n" + + "return 'Cat';\n"); +} + +function MakeFunction() { + // Prevents eval script caching. + unique_id++; + return Function('callback', + "/* " + unique_id + "*/\n" + + "callback();\n" + + "return 'Cat';\n"); +} + +// First, try MakeGenerator with no perturbations. +(function(){ + var generator = MakeGenerator(); + function callback() {}; + var iter = generator(callback); + assertIteratorResult(undefined, false, iter.next()); + assertIteratorResult("Cat", true, iter.next()); +})(); + +function patch(fun, from, to) { + function debug() { + var log = new Array(); + var script = Debug.findScript(fun); + var pos = script.source.indexOf(from); + try { + LiveEdit.TestApi.ApplySingleChunkPatch(script, pos, from.length, to, + log); + } finally { + print("Change log: " + JSON.stringify(log) + "\n"); + } + } + Debug.ExecuteInDebugContext(debug, false); +} + +// Try to edit a MakeGenerator while it's running, then again while it's +// stopped. +(function(){ + var generator = MakeGenerator(); + + var gen_patch_attempted = false; + function attempt_gen_patch() { + assertFalse(gen_patch_attempted); + gen_patch_attempted = true; + assertThrows(function() { patch(generator, "'Cat'", "'Capybara'") }, + LiveEdit.Failure); + }; + var iter = generator(attempt_gen_patch); + assertIteratorResult(undefined, false, iter.next()); + // Patch should not succeed because there is a live generator activation on + // the stack. + assertIteratorResult("Cat", true, iter.next()); + assertTrue(gen_patch_attempted); + + // At this point one iterator is live, but closed, so the patch will succeed. + patch(generator, "'Cat'", "'Capybara'"); + iter = generator(function(){}); + assertIteratorResult(undefined, false, iter.next()); + // Patch successful. 
+ assertIteratorResult("Capybara", true, iter.next()); + + // Patching will fail however when a live iterator is suspended. + iter = generator(function(){}); + assertIteratorResult(undefined, false, iter.next()); + assertThrows(function() { patch(generator, "'Capybara'", "'Tapir'") }, + LiveEdit.Failure); + assertIteratorResult("Capybara", true, iter.next()); + + // Try to patch functions with activations inside and outside generator + // function activations. We should succeed in the former case, but not in the + // latter. + var fun_outside = MakeFunction(); + var fun_inside = MakeFunction(); + var fun_patch_attempted = false; + var fun_patch_restarted = false; + function attempt_fun_patches() { + if (fun_patch_attempted) { + assertFalse(fun_patch_restarted); + fun_patch_restarted = true; + return; + } + fun_patch_attempted = true; + // Patching outside a generator activation must fail. + assertThrows(function() { patch(fun_outside, "'Cat'", "'Cobra'") }, + LiveEdit.Failure); + // Patching inside a generator activation may succeed. + patch(fun_inside, "'Cat'", "'Koala'"); + } + iter = generator(function() { return fun_inside(attempt_fun_patches) }); + assertEquals('Cat', + fun_outside(function () { + assertIteratorResult('Koala', false, iter.next()); + assertTrue(fun_patch_restarted); + })); +})(); diff --git a/deps/v8/test/mjsunit/harmony/generators-iteration.js b/deps/v8/test/mjsunit/harmony/generators-iteration.js index d86a20f9e7d..1a793678d93 100644 --- a/deps/v8/test/mjsunit/harmony/generators-iteration.js +++ b/deps/v8/test/mjsunit/harmony/generators-iteration.js @@ -337,6 +337,50 @@ TestGenerator( "foo", [2, "1foo3", 5, "4foo6", "foofoo"]); +// Yield with no arguments yields undefined. +TestGenerator( + function* g26() { return yield yield }, + [undefined, undefined, undefined], + "foo", + [undefined, "foo", "foo"]); + +// A newline causes the parser to stop looking for an argument to yield. +TestGenerator( + function* g27() { + yield + 3 + return + }, + [undefined, undefined], + "foo", + [undefined, undefined]); + +// TODO(wingo): We should use TestGenerator for these, except that +// currently yield* will unconditionally propagate a throw() to the +// delegate iterator, which fails for these iterators that don't have +// throw(). See http://code.google.com/p/v8/issues/detail?id=3484. +(function() { + function* g28() { + yield* [1, 2, 3]; + } + var iter = g28(); + assertIteratorResult(1, false, iter.next()); + assertIteratorResult(2, false, iter.next()); + assertIteratorResult(3, false, iter.next()); + assertIteratorResult(undefined, true, iter.next()); +})(); + +(function() { + function* g29() { + yield* "abc"; + } + var iter = g29(); + assertIteratorResult("a", false, iter.next()); + assertIteratorResult("b", false, iter.next()); + assertIteratorResult("c", false, iter.next()); + assertIteratorResult(undefined, true, iter.next()); +})(); + // Generator function instances. 
TestGenerator(GeneratorFunction(), [undefined], @@ -375,12 +419,16 @@ function TestDelegatingYield() { function next() { return results[i++]; } - return { next: next } + var iter = { next: next }; + var ret = {}; + ret[Symbol.iterator] = function() { return iter; }; + return ret; } function* yield_results(expected) { return yield* results(expected); } - function collect_results(iter) { + function collect_results(iterable) { + var iter = iterable[Symbol.iterator](); var ret = []; var result; do { diff --git a/deps/v8/test/mjsunit/harmony/generators-parsing.js b/deps/v8/test/mjsunit/harmony/generators-parsing.js index 2a4a68c37cd..21790b0e138 100644 --- a/deps/v8/test/mjsunit/harmony/generators-parsing.js +++ b/deps/v8/test/mjsunit/harmony/generators-parsing.js @@ -35,6 +35,40 @@ function* g() { yield 3; yield 4; } // Yield expressions. function* g() { (yield 3) + (yield 4); } +// Yield without a RHS. +function* g() { yield; } +function* g() { yield } +function* g() { + yield +} +function* g() { (yield) } +function* g() { [yield] } +function* g() { {yield} } +function* g() { yield, yield } +function* g() { yield; yield } +function* g() { (yield) ? yield : yield } +function* g() { + (yield) + ? yield + : yield +} + +// If yield has a RHS, it needs to start on the same line. The * in a +// yield* counts as starting the RHS. +function* g() { + yield * + foo +} +assertThrows("function* g() { yield\n* foo }", SyntaxError); +assertEquals(undefined, + (function*(){ + yield + 3 + })().next().value); + +// A YieldExpression is not a LogicalORExpression. +assertThrows("function* g() { yield ? yield : yield }", SyntaxError); + // You can have a generator in strict mode. function* g() { "use strict"; yield 3; yield 4; } @@ -50,14 +84,10 @@ function* g() { yield 1; return 2; yield "dead"; } // Named generator expression. (function* g() { yield 3; }); -// A generator without a yield is specified as causing an early error. This -// behavior is currently unimplemented. See -// https://bugs.ecmascript.org/show_bug.cgi?id=1283. +// You can have a generator without a yield. function* g() { } -// A YieldExpression in the RHS of a YieldExpression is currently specified as -// causing an early error. This behavior is currently unimplemented. See -// https://bugs.ecmascript.org/show_bug.cgi?id=1283. +// A YieldExpression is valid as the RHS of a YieldExpression. function* g() { yield yield 1; } function* g() { yield 3 + (yield 4); } @@ -86,9 +116,6 @@ assertThrows("function* g() { yield: 1 }", SyntaxError) // functions. function* g() { function f() { yield (yield + yield (0)); } } -// Yield needs a RHS. -assertThrows("function* g() { yield; }", SyntaxError); - // Yield in a generator is not an identifier. assertThrows("function* g() { yield = 10; }", SyntaxError); diff --git a/deps/v8/test/mjsunit/harmony/generators-poisoned-properties.js b/deps/v8/test/mjsunit/harmony/generators-poisoned-properties.js new file mode 100644 index 00000000000..39a583ec976 --- /dev/null +++ b/deps/v8/test/mjsunit/harmony/generators-poisoned-properties.js @@ -0,0 +1,42 @@ +// Copyright 2014 the V8 project authors. All rights reserved. +// Use of this source code is governed by a BSD-style license that can be +// found in the LICENSE file. 
+ +// Flags: --harmony-generators + +function assertIteratorResult(value, done, result) { + assertEquals({value: value, done: done}, result); +} + +function test(f) { + var cdesc = Object.getOwnPropertyDescriptor(f, "caller"); + var adesc = Object.getOwnPropertyDescriptor(f, "arguments"); + + assertFalse(cdesc.enumerable); + assertFalse(cdesc.configurable); + + assertFalse(adesc.enumerable); + assertFalse(adesc.configurable); + + assertSame(cdesc.get, cdesc.set); + assertSame(cdesc.get, adesc.get); + assertSame(cdesc.get, adesc.set); + + assertTrue(cdesc.get instanceof Function); + assertEquals(0, cdesc.get.length); + assertThrows(cdesc.get, TypeError); + + assertThrows(function() { return f.caller; }, TypeError); + assertThrows(function() { f.caller = 42; }, TypeError); + assertThrows(function() { return f.arguments; }, TypeError); + assertThrows(function() { f.arguments = 42; }, TypeError); +} + +function *sloppy() { test(sloppy); } +function *strict() { "use strict"; test(strict); } + +test(sloppy); +test(strict); + +assertIteratorResult(undefined, true, sloppy().next()); +assertIteratorResult(undefined, true, strict().next()); diff --git a/deps/v8/test/mjsunit/harmony/generators-runtime.js b/deps/v8/test/mjsunit/harmony/generators-runtime.js index aef063b6c82..9fb70754928 100644 --- a/deps/v8/test/mjsunit/harmony/generators-runtime.js +++ b/deps/v8/test/mjsunit/harmony/generators-runtime.js @@ -29,9 +29,8 @@ // Test aspects of the generator runtime. -// FIXME(wingo): Replace this reference with a more official link. // See: -// http://wiki.ecmascript.org/lib/exe/fetch.php?cache=cache&media=harmony:es6_generator_object_model_3-29-13.png +// http://people.mozilla.org/~jorendorff/es6-draft.html#sec-generatorfunction-objects function f() { } function* g() { yield 1; } @@ -55,7 +54,16 @@ function TestGeneratorFunctionInstance() { var f_desc = Object.getOwnPropertyDescriptor(f, prop); var g_desc = Object.getOwnPropertyDescriptor(g, prop); assertEquals(f_desc.configurable, g_desc.configurable, prop); - assertEquals(f_desc.writable, g_desc.writable, prop); + if (prop === 'arguments' || prop === 'caller') { + // Unlike sloppy functions, which have read-only data arguments and caller + // properties, sloppy generators have a poison pill implemented via + // accessors + assertFalse('writable' in g_desc, prop); + assertTrue(g_desc.get instanceof Function, prop); + assertEquals(g_desc.get, g_desc.set, prop); + } else { + assertEquals(f_desc.writable, g_desc.writable, prop); + } assertEquals(f_desc.enumerable, g_desc.enumerable, prop); } } @@ -92,6 +100,16 @@ function TestGeneratorObjectPrototype() { found_property_names.sort(); assertArrayEquals(expected_property_names, found_property_names); + + iterator_desc = Object.getOwnPropertyDescriptor(GeneratorObjectPrototype, + Symbol.iterator); + assertTrue(iterator_desc !== undefined); + assertFalse(iterator_desc.writable); + assertFalse(iterator_desc.enumerable); + assertFalse(iterator_desc.configurable); + + // The generator object's "iterator" function is just the identity. + assertSame(iterator_desc.value.call(42), 42); } TestGeneratorObjectPrototype(); diff --git a/deps/v8/test/mjsunit/harmony/private.js b/deps/v8/test/mjsunit/harmony/private.js index 225799831cf..4b29fd863ed 100644 --- a/deps/v8/test/mjsunit/harmony/private.js +++ b/deps/v8/test/mjsunit/harmony/private.js @@ -25,7 +25,6 @@ // (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE // OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. 
-// Flags: --harmony-symbols --harmony-collections // Flags: --expose-gc --allow-natives-syntax var symbols = [] @@ -115,8 +114,8 @@ TestToBoolean() function TestToNumber() { for (var i in symbols) { - assertSame(NaN, Number(symbols[i]).valueOf()) - assertSame(NaN, symbols[i] + 0) + assertThrows(function() { Number(symbols[i]); }, TypeError); + assertThrows(function() { symbols[i] + 0; }, TypeError); } } TestToNumber() @@ -342,3 +341,18 @@ function TestGetOwnPropertySymbols() { assertEquals(syms, [publicSymbol, publicSymbol2]) } TestGetOwnPropertySymbols() + + +function TestSealAndFreeze(freeze) { + var sym = %CreatePrivateSymbol("private") + var obj = {} + obj[sym] = 1 + freeze(obj) + obj[sym] = 2 + assertEquals(2, obj[sym]) + assertTrue(delete obj[sym]) + assertEquals(undefined, obj[sym]) +} +TestSealAndFreeze(Object.seal) +TestSealAndFreeze(Object.freeze) +TestSealAndFreeze(Object.preventExtensions) diff --git a/deps/v8/test/mjsunit/harmony/proxies-example-membrane.js b/deps/v8/test/mjsunit/harmony/proxies-example-membrane.js index a645a6603a1..7b2af722f2d 100644 --- a/deps/v8/test/mjsunit/harmony/proxies-example-membrane.js +++ b/deps/v8/test/mjsunit/harmony/proxies-example-membrane.js @@ -25,7 +25,7 @@ // (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE // OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. -// Flags: --harmony +// Flags: --harmony --harmony-proxies // A simple no-op handler. Adapted from: diff --git a/deps/v8/test/mjsunit/harmony/proxies-hash.js b/deps/v8/test/mjsunit/harmony/proxies-hash.js index 789de35f6d3..65d2d3c5646 100644 --- a/deps/v8/test/mjsunit/harmony/proxies-hash.js +++ b/deps/v8/test/mjsunit/harmony/proxies-hash.js @@ -25,7 +25,7 @@ // (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE // OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. -// Flags: --harmony-proxies --harmony-collections +// Flags: --harmony-proxies // Helper. diff --git a/deps/v8/test/mjsunit/harmony/proxies-json.js b/deps/v8/test/mjsunit/harmony/proxies-json.js index 539c5a84cb9..eba10a1453b 100644 --- a/deps/v8/test/mjsunit/harmony/proxies-json.js +++ b/deps/v8/test/mjsunit/harmony/proxies-json.js @@ -25,7 +25,7 @@ // (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE // OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. -// Flags: --harmony +// Flags: --harmony-proxies function testStringify(expected, object) { // Test fast case that bails out to slow case. diff --git a/deps/v8/test/mjsunit/harmony/proxies-symbols.js b/deps/v8/test/mjsunit/harmony/proxies-symbols.js index 8920e39968d..52353c036d4 100644 --- a/deps/v8/test/mjsunit/harmony/proxies-symbols.js +++ b/deps/v8/test/mjsunit/harmony/proxies-symbols.js @@ -25,7 +25,7 @@ // (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE // OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. -// Flags: --harmony-proxies --harmony-symbols +// Flags: --harmony-proxies // Helper. diff --git a/deps/v8/test/mjsunit/harmony/proxies-with-unscopables.js b/deps/v8/test/mjsunit/harmony/proxies-with-unscopables.js new file mode 100644 index 00000000000..b982480feb9 --- /dev/null +++ b/deps/v8/test/mjsunit/harmony/proxies-with-unscopables.js @@ -0,0 +1,153 @@ +// Copyright 2014 the V8 project authors. All rights reserved. +// Use of this source code is governed by a BSD-style license that can be +// found in the LICENSE file. 
+ +// Flags: --harmony-unscopables +// Flags: --harmony-proxies + + +// TODO(arv): Once proxies can intercept symbols, add more tests. + + +function TestBasics() { + var log = []; + + var proxy = Proxy.create({ + getPropertyDescriptor: function(key) { + log.push(key); + if (key === 'x') { + return { + value: 1, + configurable: true + }; + } + return undefined; + } + }); + + var x = 'local'; + + with (proxy) { + assertEquals(1, x); + } + + // One 'x' for HasBinding and one for GetBindingValue + assertEquals(['assertEquals', 'x', 'x'], log); +} +TestBasics(); + + +function TestInconsistent() { + var log = []; + var calls = 0; + + var proxy = Proxy.create({ + getPropertyDescriptor: function(key) { + log.push(key); + if (key === 'x' && calls < 1) { + calls++; + return { + value: 1, + configurable: true + }; + } + return undefined; + } + }); + + var x = 'local'; + + with (proxy) { + assertEquals(void 0, x); + } + + // One 'x' for HasBinding and one for GetBindingValue + assertEquals(['assertEquals', 'x', 'x'], log); +} +TestInconsistent(); + + +function TestUseProxyAsUnscopables() { + var x = 1; + var object = { + x: 2 + }; + var calls = 0; + var proxy = Proxy.create({ + has: function(key) { + calls++; + assertEquals('x', key); + return calls === 2; + }, + getPropertyDescriptor: function(key) { + assertUnreachable(); + } + }); + + object[Symbol.unscopables] = proxy; + + with (object) { + assertEquals(2, x); + assertEquals(1, x); + } + + // HasBinding, HasBinding + assertEquals(2, calls); +} +TestUseProxyAsUnscopables(); + + +function TestThrowInHasUnscopables() { + var x = 1; + var object = { + x: 2 + }; + + function CustomError() {} + + var calls = 0; + var proxy = Proxy.create({ + has: function(key) { + if (calls++ === 0) { + throw new CustomError(); + } + assertUnreachable(); + }, + getPropertyDescriptor: function(key) { + assertUnreachable(); + } + }); + + object[Symbol.unscopables] = proxy; + + assertThrows(function() { + with (object) { + x; + } + }, CustomError); +} +TestThrowInHasUnscopables(); + + +var global = this; +function TestGlobalShouldIgnoreUnscopables() { + global.x = 1; + var proxy = Proxy.create({ + getPropertyDescriptor: function() { + assertUnreachable(); + } + }); + global[Symbol.unscopables] = proxy; + + assertEquals(1, global.x); + assertEquals(1, x); + + global.x = 2; + assertEquals(2, global.x); + assertEquals(2, x); + + x = 3; + assertEquals(3, global.x); + assertEquals(3, x); +} +TestGlobalShouldIgnoreUnscopables(); diff --git a/deps/v8/test/mjsunit/harmony/proxies.js b/deps/v8/test/mjsunit/harmony/proxies.js index 00e605f8d2d..b082c066969 100644 --- a/deps/v8/test/mjsunit/harmony/proxies.js +++ b/deps/v8/test/mjsunit/harmony/proxies.js @@ -1807,7 +1807,7 @@ TestKeysThrow({ }, }) -TestKeysThrow([], { +TestKeysThrow({ get getOwnPropertyNames() { return function() { return [1, 2] } }, diff --git a/deps/v8/test/mjsunit/harmony/regress/regress-3426.js b/deps/v8/test/mjsunit/harmony/regress/regress-3426.js new file mode 100644 index 00000000000..c3b11a1792f --- /dev/null +++ b/deps/v8/test/mjsunit/harmony/regress/regress-3426.js @@ -0,0 +1,7 @@ +// Copyright 2014 the V8 project authors. All rights reserved. +// Use of this source code is governed by a BSD-style license that can be +// found in the LICENSE file. 
+ +// Flags: --harmony-scoping + +assertThrows("(function() { 'use strict'; { let f; var f; } })", SyntaxError); diff --git a/deps/v8/test/mjsunit/harmony/set-prototype-of.js b/deps/v8/test/mjsunit/harmony/set-prototype-of.js index 02bd5e2ee62..810220d1a8f 100644 --- a/deps/v8/test/mjsunit/harmony/set-prototype-of.js +++ b/deps/v8/test/mjsunit/harmony/set-prototype-of.js @@ -25,8 +25,6 @@ // (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE // OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. -// Flags: --harmony-symbols - function getObjects() { function func() {} diff --git a/deps/v8/test/mjsunit/harmony/string-codepointat.js b/deps/v8/test/mjsunit/harmony/string-codepointat.js new file mode 100644 index 00000000000..411b0f23c7e --- /dev/null +++ b/deps/v8/test/mjsunit/harmony/string-codepointat.js @@ -0,0 +1,91 @@ +// Copyright 2014 the V8 project authors. All rights reserved. +// Use of this source code is governed by a BSD-style license that can be +// found in the LICENSE file. + +// Flags: --harmony-strings + +// Tests taken from: +// https://github.com/mathiasbynens/String.prototype.codePointAt + +assertEquals(String.prototype.codePointAt.length, 1); +assertEquals(String.prototype.propertyIsEnumerable("codePointAt"), false); + +// String that starts with a BMP symbol +assertEquals("abc\uD834\uDF06def".codePointAt(""), 0x61); +assertEquals("abc\uD834\uDF06def".codePointAt("_"), 0x61); +assertEquals("abc\uD834\uDF06def".codePointAt(), 0x61); +assertEquals("abc\uD834\uDF06def".codePointAt(-Infinity), undefined); +assertEquals("abc\uD834\uDF06def".codePointAt(-1), undefined); +assertEquals("abc\uD834\uDF06def".codePointAt(-0), 0x61); +assertEquals("abc\uD834\uDF06def".codePointAt(0), 0x61); +assertEquals("abc\uD834\uDF06def".codePointAt(3), 0x1D306); +assertEquals("abc\uD834\uDF06def".codePointAt(4), 0xDF06); +assertEquals("abc\uD834\uDF06def".codePointAt(5), 0x64); +assertEquals("abc\uD834\uDF06def".codePointAt(42), undefined); +assertEquals("abc\uD834\uDF06def".codePointAt(Infinity), undefined); +assertEquals("abc\uD834\uDF06def".codePointAt(Infinity), undefined); +assertEquals("abc\uD834\uDF06def".codePointAt(NaN), 0x61); +assertEquals("abc\uD834\uDF06def".codePointAt(false), 0x61); +assertEquals("abc\uD834\uDF06def".codePointAt(null), 0x61); +assertEquals("abc\uD834\uDF06def".codePointAt(undefined), 0x61); + +// String that starts with an astral symbol +assertEquals("\uD834\uDF06def".codePointAt(""), 0x1D306); +assertEquals("\uD834\uDF06def".codePointAt("1"), 0xDF06); +assertEquals("\uD834\uDF06def".codePointAt("_"), 0x1D306); +assertEquals("\uD834\uDF06def".codePointAt(), 0x1D306); +assertEquals("\uD834\uDF06def".codePointAt(-1), undefined); +assertEquals("\uD834\uDF06def".codePointAt(-0), 0x1D306); +assertEquals("\uD834\uDF06def".codePointAt(0), 0x1D306); +assertEquals("\uD834\uDF06def".codePointAt(1), 0xDF06); +assertEquals("\uD834\uDF06def".codePointAt(42), undefined); +assertEquals("\uD834\uDF06def".codePointAt(false), 0x1D306); +assertEquals("\uD834\uDF06def".codePointAt(null), 0x1D306); +assertEquals("\uD834\uDF06def".codePointAt(undefined), 0x1D306); + +// Lone high surrogates +assertEquals("\uD834abc".codePointAt(""), 0xD834); +assertEquals("\uD834abc".codePointAt("_"), 0xD834); +assertEquals("\uD834abc".codePointAt(), 0xD834); +assertEquals("\uD834abc".codePointAt(-1), undefined); +assertEquals("\uD834abc".codePointAt(-0), 0xD834); +assertEquals("\uD834abc".codePointAt(0), 0xD834); +assertEquals("\uD834abc".codePointAt(false), 
0xD834); +assertEquals("\uD834abc".codePointAt(NaN), 0xD834); +assertEquals("\uD834abc".codePointAt(null), 0xD834); +assertEquals("\uD834abc".codePointAt(undefined), 0xD834); + +// Lone low surrogates +assertEquals("\uDF06abc".codePointAt(""), 0xDF06); +assertEquals("\uDF06abc".codePointAt("_"), 0xDF06); +assertEquals("\uDF06abc".codePointAt(), 0xDF06); +assertEquals("\uDF06abc".codePointAt(-1), undefined); +assertEquals("\uDF06abc".codePointAt(-0), 0xDF06); +assertEquals("\uDF06abc".codePointAt(0), 0xDF06); +assertEquals("\uDF06abc".codePointAt(false), 0xDF06); +assertEquals("\uDF06abc".codePointAt(NaN), 0xDF06); +assertEquals("\uDF06abc".codePointAt(null), 0xDF06); +assertEquals("\uDF06abc".codePointAt(undefined), 0xDF06); + +assertThrows(function() { + String.prototype.codePointAt.call(undefined); +}, TypeError); +assertThrows(function() { + String.prototype.codePointAt.call(undefined, 4); +}, TypeError); +assertThrows(function() { + String.prototype.codePointAt.call(null); +}, TypeError); +assertThrows(function() { + String.prototype.codePointAt.call(null, 4); +}, TypeError); +assertEquals(String.prototype.codePointAt.call(42, 0), 0x34); +assertEquals(String.prototype.codePointAt.call(42, 1), 0x32); +assertEquals(String.prototype.codePointAt.call({ + toString: function() { return "abc"; } +}, 2), 0x63); +var tmp = 0; +assertEquals(String.prototype.codePointAt.call({ + toString: function() { ++tmp; return String(tmp); } +}, 0), 0x31); +assertEquals(tmp, 1); diff --git a/deps/v8/test/mjsunit/harmony/string-fromcodepoint.js b/deps/v8/test/mjsunit/harmony/string-fromcodepoint.js new file mode 100644 index 00000000000..97ecf0eec53 --- /dev/null +++ b/deps/v8/test/mjsunit/harmony/string-fromcodepoint.js @@ -0,0 +1,62 @@ +// Copyright 2014 the V8 project authors. All rights reserved. +// Use of this source code is governed by a BSD-style license that can be +// found in the LICENSE file. 
+ +// Flags: --harmony-strings + +// Tests taken from: +// https://github.com/mathiasbynens/String.fromCodePoint + +assertEquals(String.fromCodePoint.length, 1); +assertEquals(String.propertyIsEnumerable("fromCodePoint"), false); + +assertEquals(String.fromCodePoint(""), "\0"); +assertEquals(String.fromCodePoint(), ""); +assertEquals(String.fromCodePoint(-0), "\0"); +assertEquals(String.fromCodePoint(0), "\0"); +assertEquals(String.fromCodePoint(0x1D306), "\uD834\uDF06"); +assertEquals( + String.fromCodePoint(0x1D306, 0x61, 0x1D307), + "\uD834\uDF06a\uD834\uDF07"); +assertEquals(String.fromCodePoint(0x61, 0x62, 0x1D307), "ab\uD834\uDF07"); +assertEquals(String.fromCodePoint(false), "\0"); +assertEquals(String.fromCodePoint(null), "\0"); + +assertThrows(function() { String.fromCodePoint("_"); }, RangeError); +assertThrows(function() { String.fromCodePoint("+Infinity"); }, RangeError); +assertThrows(function() { String.fromCodePoint("-Infinity"); }, RangeError); +assertThrows(function() { String.fromCodePoint(-1); }, RangeError); +assertThrows(function() { String.fromCodePoint(0x10FFFF + 1); }, RangeError); +assertThrows(function() { String.fromCodePoint(3.14); }, RangeError); +assertThrows(function() { String.fromCodePoint(3e-2); }, RangeError); +assertThrows(function() { String.fromCodePoint(-Infinity); }, RangeError); +assertThrows(function() { String.fromCodePoint(+Infinity); }, RangeError); +assertThrows(function() { String.fromCodePoint(NaN); }, RangeError); +assertThrows(function() { String.fromCodePoint(undefined); }, RangeError); +assertThrows(function() { String.fromCodePoint({}); }, RangeError); +assertThrows(function() { String.fromCodePoint(/./); }, RangeError); +assertThrows(function() { String.fromCodePoint({ + valueOf: function() { throw Error(); } }); +}, Error); +assertThrows(function() { String.fromCodePoint({ + valueOf: function() { throw Error(); } }); +}, Error); +var tmp = 0x60; +assertEquals(String.fromCodePoint({ + valueOf: function() { ++tmp; return tmp; } +}), "a"); +assertEquals(tmp, 0x61); + +var counter = Math.pow(2, 15) * 3 / 2; +var result = []; +while (--counter >= 0) { + result.push(0); // one code unit per symbol +} +String.fromCodePoint.apply(null, result); // must not throw + +var counter = Math.pow(2, 15) * 3 / 2; +var result = []; +while (--counter >= 0) { + result.push(0xFFFF + 1); // two code units per symbol +} +String.fromCodePoint.apply(null, result); // must not throw diff --git a/deps/v8/test/mjsunit/harmony/typeof.js b/deps/v8/test/mjsunit/harmony/typeof.js deleted file mode 100644 index acde97785fd..00000000000 --- a/deps/v8/test/mjsunit/harmony/typeof.js +++ /dev/null @@ -1,35 +0,0 @@ -// Copyright 2011 the V8 project authors. All rights reserved. -// Redistribution and use in source and binary forms, with or without -// modification, are permitted provided that the following conditions are -// met: -// -// * Redistributions of source code must retain the above copyright -// notice, this list of conditions and the following disclaimer. -// * Redistributions in binary form must reproduce the above -// copyright notice, this list of conditions and the following -// disclaimer in the documentation and/or other materials provided -// with the distribution. -// * Neither the name of Google Inc. nor the names of its -// contributors may be used to endorse or promote products derived -// from this software without specific prior written permission. 
-// -// THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS -// "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT -// LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR -// A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT -// OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, -// SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT -// LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, -// DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY -// THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT -// (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE -// OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. - -// Flags: --harmony-typeof - -assertFalse(typeof null == 'object') -assertFalse(typeof null === 'object') -assertTrue(typeof null == 'null') -assertTrue(typeof null === 'null') -assertEquals("null", typeof null) -assertSame("null", typeof null) diff --git a/deps/v8/test/mjsunit/json-stringify-recursive.js b/deps/v8/test/mjsunit/json-stringify-recursive.js index 31aa0027c56..d2788a5f031 100644 --- a/deps/v8/test/mjsunit/json-stringify-recursive.js +++ b/deps/v8/test/mjsunit/json-stringify-recursive.js @@ -25,6 +25,8 @@ // (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE // OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. +// Flags: --stack-size=100 + var a = {}; for (i = 0; i < 10000; i++) { var current = {}; diff --git a/deps/v8/test/mjsunit/keyed-load-dictionary-stub.js b/deps/v8/test/mjsunit/keyed-load-dictionary-stub.js new file mode 100644 index 00000000000..733fd6d9c94 --- /dev/null +++ b/deps/v8/test/mjsunit/keyed-load-dictionary-stub.js @@ -0,0 +1,20 @@ +// Copyright 2014 the V8 project authors. All rights reserved. +// Use of this source code is governed by a BSD-style license that can be +// found in the LICENSE file. + +// Flags: --allow-natives-syntax + +function generate_dictionary_array() { + var result = [0, 1, 2, 3, 4]; + result[256 * 1024] = 5; + return result; +} + +function get_accessor(a, i) { + return a[i]; +} + +var array1 = generate_dictionary_array(); +get_accessor(array1, 1); +get_accessor(array1, 2); +get_accessor(12345, 2); diff --git a/deps/v8/test/mjsunit/math-abs.js b/deps/v8/test/mjsunit/math-abs.js index 09b9c88f750..b90ae0917c4 100644 --- a/deps/v8/test/mjsunit/math-abs.js +++ b/deps/v8/test/mjsunit/math-abs.js @@ -25,7 +25,7 @@ // (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE // OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. -// Flags: --max-new-space-size=2 --allow-natives-syntax +// Flags: --max-semi-space-size=1 --allow-natives-syntax function zero() { var x = 0.5; diff --git a/deps/v8/test/mjsunit/math-floor-part1.js b/deps/v8/test/mjsunit/math-floor-part1.js index bae47dc3cf6..65ae3c68e46 100644 --- a/deps/v8/test/mjsunit/math-floor-part1.js +++ b/deps/v8/test/mjsunit/math-floor-part1.js @@ -25,7 +25,7 @@ // (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE // OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. 
-// Flags: --max-new-space-size=2 --allow-natives-syntax
+// Flags: --max-semi-space-size=1 --allow-natives-syntax

 var test_id = 0;
diff --git a/deps/v8/test/mjsunit/math-floor-part2.js b/deps/v8/test/mjsunit/math-floor-part2.js
index ad60fba4544..60045705ced 100644
--- a/deps/v8/test/mjsunit/math-floor-part2.js
+++ b/deps/v8/test/mjsunit/math-floor-part2.js
@@ -25,7 +25,7 @@
 // (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
 // OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.

-// Flags: --max-new-space-size=2 --allow-natives-syntax
+// Flags: --max-semi-space-size=1 --allow-natives-syntax

 var test_id = 0;
diff --git a/deps/v8/test/mjsunit/math-floor-part3.js b/deps/v8/test/mjsunit/math-floor-part3.js
index a6d1c5e856c..9225c388ba9 100644
--- a/deps/v8/test/mjsunit/math-floor-part3.js
+++ b/deps/v8/test/mjsunit/math-floor-part3.js
@@ -25,7 +25,7 @@
 // (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
 // OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.

-// Flags: --max-new-space-size=2 --allow-natives-syntax
+// Flags: --max-semi-space-size=1 --allow-natives-syntax

 var test_id = 0;
diff --git a/deps/v8/test/mjsunit/math-floor-part4.js b/deps/v8/test/mjsunit/math-floor-part4.js
index 58212b4c56a..ade36a9c300 100644
--- a/deps/v8/test/mjsunit/math-floor-part4.js
+++ b/deps/v8/test/mjsunit/math-floor-part4.js
@@ -25,7 +25,7 @@
 // (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
 // OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.

-// Flags: --max-new-space-size=2 --allow-natives-syntax
+// Flags: --max-semi-space-size=1 --allow-natives-syntax

 var test_id = 0;
diff --git a/deps/v8/test/mjsunit/migrations.js b/deps/v8/test/mjsunit/migrations.js
new file mode 100644
index 00000000000..6a2ea64a7a0
--- /dev/null
+++ b/deps/v8/test/mjsunit/migrations.js
@@ -0,0 +1,311 @@
+// Copyright 2014 the V8 project authors. All rights reserved.
+// Use of this source code is governed by a BSD-style license that can be
+// found in the LICENSE file.
+
+// Flags: --allow-natives-syntax --track-fields --expose-gc
+
+var global = Function('return this')();
+var verbose = 0;
+
+function test(ctor_desc, use_desc, migr_desc) {
+  var n = 5;
+  var objects = [];
+  var results = [];
+
+  if (verbose) {
+    print();
+    print("===========================================================");
+    print("=== " + ctor_desc.name +
+          " | " + use_desc.name + " |--> " + migr_desc.name);
+    print("===========================================================");
+  }
+
+  // Clean ICs and transitions.
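+  // %NotifyContextDisposed lets the GC drop per-context type feedback and
+  // ICs; the gc() calls that follow then reclaim the cleared code, so each
+  // ctor/use/migration combination starts with no learned state.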
+ %NotifyContextDisposed(); + gc(); gc(); gc(); + + + // create objects + if (verbose) { + print("-----------------------------"); + print("--- construct"); + print(); + } + for (var i = 0; i < n; i++) { + objects[i] = ctor_desc.ctor.apply(ctor_desc, ctor_desc.args(i)); + } + + try { + // use them + if (verbose) { + print("-----------------------------"); + print("--- use 1"); + print(); + } + var use = use_desc.use1; + for (var i = 0; i < n; i++) { + if (i == 3) %OptimizeFunctionOnNextCall(use); + results[i] = use(objects[i], i); + } + + // trigger migrations + if (verbose) { + print("-----------------------------"); + print("--- trigger migration"); + print(); + } + var migr = migr_desc.migr; + for (var i = 0; i < n; i++) { + if (i == 3) %OptimizeFunctionOnNextCall(migr); + migr(objects[i], i); + } + + // use again + if (verbose) { + print("-----------------------------"); + print("--- use 2"); + print(); + } + var use = use_desc.use2 !== undefined ? use_desc.use2 : use_desc.use1; + for (var i = 0; i < n; i++) { + if (i == 3) %OptimizeFunctionOnNextCall(use); + results[i] = use(objects[i], i); + if (verbose >= 2) print(results[i]); + } + + } catch (e) { + if (verbose) print("--- incompatible use: " + e); + } + return results; +} + + +var ctors = [ + { + name: "none-to-double", + ctor: function(v) { return {a: v}; }, + args: function(i) { return [1.5 + i]; }, + }, + { + name: "double", + ctor: function(v) { var o = {}; o.a = v; return o; }, + args: function(i) { return [1.5 + i]; }, + }, + { + name: "none-to-smi", + ctor: function(v) { return {a: v}; }, + args: function(i) { return [i]; }, + }, + { + name: "smi", + ctor: function(v) { var o = {}; o.a = v; return o; }, + args: function(i) { return [i]; }, + }, + { + name: "none-to-object", + ctor: function(v) { return {a: v}; }, + args: function(i) { return ["s"]; }, + }, + { + name: "object", + ctor: function(v) { var o = {}; o.a = v; return o; }, + args: function(i) { return ["s"]; }, + }, + { + name: "{a:, b:, c:}", + ctor: function(v1, v2, v3) { return {a: v1, b: v2, c: v3}; }, + args: function(i) { return [1.5 + i, 1.6, 1.7]; }, + }, + { + name: "{a..h:}", + ctor: function(v) { var o = {}; o.h=o.g=o.f=o.e=o.d=o.c=o.b=o.a=v; return o; }, + args: function(i) { return [1.5 + i]; }, + }, + { + name: "1", + ctor: function(v) { var o = 1; o.a = v; return o; }, + args: function(i) { return [1.5 + i]; }, + }, + { + name: "f()", + ctor: function(v) { var o = function() { return v;}; o.a = v; return o; }, + args: function(i) { return [1.5 + i]; }, + }, + { + name: "f().bind", + ctor: function(v) { var o = function(a,b,c) { return a+b+c; }; o = o.bind(o, v, v+1, v+2.2); return o; }, + args: function(i) { return [1.5 + i]; }, + }, + { + name: "dictionary elements", + ctor: function(v) { var o = []; o[1] = v; o[200000] = v; return o; }, + args: function(i) { return [1.5 + i]; }, + }, + { + name: "json", + ctor: function(v) { var json = '{"a":' + v + ',"b":' + v + '}'; return JSON.parse(json); }, + args: function(i) { return [1.5 + i]; }, + }, + { + name: "fast accessors", + accessor: { + get: function() { return this.a_; }, + set: function(value) {this.a_ = value; }, + configurable: true, + }, + ctor: function(v) { + var o = {a_:v}; + Object.defineProperty(o, "a", this.accessor); + return o; + }, + args: function(i) { return [1.5 + i]; }, + }, + { + name: "slow accessor", + accessor1: { value: this.a_, configurable: true }, + accessor2: { + get: function() { return this.a_; }, + set: function(value) {this.a_ = value; }, + configurable: true, + }, 
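+    // Defining "a" first via the plain data descriptor (accessor1) and then
+    // redefining it with the getter/setter pair presumably forces the object
+    // into dictionary properties, hence the "slow" in the name.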
+    ctor: function(v) {
+      var o = {a_:v};
+      Object.defineProperty(o, "a", this.accessor1);
+      Object.defineProperty(o, "a", this.accessor2);
+      return o;
+    },
+    args: function(i) { return [1.5 + i]; },
+  },
+  {
+    name: "slow",
+    proto: {},
+    ctor: function(v) {
+      var o = {__proto__: this.proto};
+      o.a = v;
+      for (var i = 0; %HasFastProperties(o); i++) o["f"+i] = v;
+      return o;
+    },
+    args: function(i) { return [1.5 + i]; },
+  },
+  {
+    name: "global",
+    ctor: function(v) { return global; },
+    args: function(i) { return [i]; },
+  },
+];
+
+
+var uses = [
+  {
+    name: "o.a+1.0",
+    use1: function(o, i) { return o.a + 1.0; },
+    use2: function(o, i) { return o.a + 1.1; },
+  },
+  {
+    name: "o.b+1.0",
+    use1: function(o, i) { return o.b + 1.0; },
+    use2: function(o, i) { return o.b + 1.1; },
+  },
+  {
+    name: "o[1]+1.0",
+    use1: function(o, i) { return o[1] + 1.0; },
+    use2: function(o, i) { return o[1] + 1.1; },
+  },
+  {
+    name: "o[-1]+1.0",
+    use1: function(o, i) { return o[-1] + 1.0; },
+    use2: function(o, i) { return o[-1] + 1.1; },
+  },
+  {
+    name: "()",
+    use1: function(o, i) { return o() + 1.0; },
+    use2: function(o, i) { return o() + 1.1; },
+  },
+];
+
+
+var migrations = [
+  {
+    name: "to smi",
+    migr: function(o, i) { if (i == 0) o.a = 1; },
+  },
+  {
+    name: "to double",
+    migr: function(o, i) { if (i == 0) o.a = 1.1; },
+  },
+  {
+    name: "to object",
+    migr: function(o, i) { if (i == 0) o.a = {}; },
+  },
+  {
+    name: "set prototype {}",
+    migr: function(o, i) { o.__proto__ = {}; },
+  },
+  {
+    name: "%FunctionSetPrototype",
+    migr: function(o, i) { %FunctionSetPrototype(o, null); },
+  },
+  {
+    name: "modify prototype",
+    migr: function(o, i) { if (i == 0) o.__proto__.__proto1__ = [,,,5,,,]; },
+  },
+  {
+    name: "freeze prototype",
+    migr: function(o, i) { if (i == 0) Object.freeze(o.__proto__); },
+  },
+  {
+    name: "delete and re-add property",
+    migr: function(o, i) { var v = o.a; delete o.a; o.a = v; },
+  },
+  {
+    name: "modify prototype",
+    migr: function(o, i) { if (i >= 0) o.__proto__ = {}; },
+  },
+  {
+    name: "set property callback",
+    migr: function(o, i) {
+      Object.defineProperty(o, "a", {
+        get: function() { return 1.5 + i; },
+        set: function(value) {},
+        configurable: true,
+      });
+    },
+  },
+  {
+    name: "observe",
+    migr: function(o, i) { Object.observe(o, function(){}); },
+  },
+  {
+    name: "%EnableAccessChecks",
+    migr: function(o, i) {
+      if (typeof (o) !== 'function') %EnableAccessChecks(o);
+    },
+  },
+  {
+    name: "%DisableAccessChecks",
+    migr: function(o, i) {
+      if ((typeof (o) !== 'function') && (o !== global)) %DisableAccessChecks(o);
+    },
+  },
+  {
+    name: "seal",
+    migr: function(o, i) { Object.seal(o); },
+  },
+  { // Must be the last in the sequence, because after the global object freeze
+    // the other modifications do not make sense.
+    name: "freeze",
+    migr: function(o, i) { Object.freeze(o); },
+  },
+];
+
+
+migrations.forEach(function(migr) {
+  uses.forEach(function(use) {
+    ctors.forEach(function(ctor) {
+      test(ctor, use, migr);
+    });
+  });
+});
diff --git a/deps/v8/test/mjsunit/mirror-object.js b/deps/v8/test/mjsunit/mirror-object.js
index 8bf8a2d4f83..7020338ca22 100644
--- a/deps/v8/test/mjsunit/mirror-object.js
+++ b/deps/v8/test/mjsunit/mirror-object.js
@@ -111,12 +111,14 @@ function testObjectMirror(obj, cls_name, ctor_name, hasSpecialProperties) {

 // Check that the serialization contains all properties.
assertEquals(names.length, fromJSON.properties.length, 'Some properties missing in JSON'); - for (var i = 0; i < fromJSON.properties.length; i++) { - var name = fromJSON.properties[i].name; - if (typeof name == 'undefined') name = fromJSON.properties[i].index; + for (var j = 0; j < names.length; j++) { + var name = names[j]; + // Serialization of symbol-named properties to JSON doesn't really + // work currently, as they don't get a {name: ...} entry. + if (typeof name === 'symbol') continue; var found = false; - for (var j = 0; j < names.length; j++) { - if (names[j] == name) { + for (var i = 0; i < fromJSON.properties.length; i++) { + if (fromJSON.properties[i].name == name) { // Check that serialized handle is correct. assertEquals(properties[i].value().handle(), fromJSON.properties[i].ref, 'Unexpected serialized handle'); @@ -170,6 +172,9 @@ function Point(x,y) { this.y_ = y; } +var object_with_symbol = {}; +object_with_symbol[Symbol.iterator] = 42; + // Test a number of different objects. testObjectMirror({}, 'Object', 'Object'); testObjectMirror({'a':1,'b':2}, 'Object', 'Object'); @@ -180,6 +185,7 @@ testObjectMirror(this.__proto__, 'Object', ''); testObjectMirror([], 'Array', 'Array'); testObjectMirror([1,2], 'Array', 'Array'); testObjectMirror(Object(17), 'Number', 'Number'); +testObjectMirror(object_with_symbol, 'Object', 'Object'); // Test circular references. o = {}; diff --git a/deps/v8/test/mjsunit/mirror-script.js b/deps/v8/test/mjsunit/mirror-script.js index 1d64ac26bf7..e545a616372 100644 --- a/deps/v8/test/mjsunit/mirror-script.js +++ b/deps/v8/test/mjsunit/mirror-script.js @@ -84,7 +84,7 @@ function testScriptMirror(f, file_name, file_lines, type, compilation_type, // Test the script mirror for different functions. testScriptMirror(function(){}, 'mirror-script.js', 98, 2, 0); -testScriptMirror(Math.sin, 'native math.js', -1, 0, 0); +testScriptMirror(Math.round, 'native math.js', -1, 0, 0); testScriptMirror(eval('(function(){})'), null, 1, 2, 1, '(function(){})', 87); testScriptMirror(eval('(function(){\n })'), null, 2, 2, 1, '(function(){\n })', 88); diff --git a/deps/v8/test/mjsunit/mjsunit.js b/deps/v8/test/mjsunit/mjsunit.js index 5f03774d75d..04302790885 100644 --- a/deps/v8/test/mjsunit/mjsunit.js +++ b/deps/v8/test/mjsunit/mjsunit.js @@ -231,8 +231,16 @@ var assertUnoptimized; return deepObjectEquals(a, b); } + function checkArity(args, arity, name) { + if (args.length < arity) { + fail(PrettyPrint(arity), args.length, + name + " requires " + arity + " or more arguments"); + } + } assertSame = function assertSame(expected, found, name_opt) { + checkArity(arguments, 2, "assertSame"); + // TODO(mstarzinger): We should think about using Harmony's egal operator // or the function equivalent Object.is() here. 
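  // (Object.is differs from === only for NaN and for +0 versus -0, which
  // is exactly what the fallback checks below have to special-case.)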
if (found === expected) { @@ -245,6 +253,8 @@ var assertUnoptimized; assertEquals = function assertEquals(expected, found, name_opt) { + checkArity(arguments, 2, "assertEquals"); + if (!deepEquals(found, expected)) { fail(PrettyPrint(expected), found, name_opt); } @@ -371,15 +381,18 @@ var assertUnoptimized; throw new MjsUnitAssertionError(message); }; + var OptimizationStatusImpl = undefined; - var OptimizationStatus; - try { - OptimizationStatus = - new Function("fun", "sync", "return %GetOptimizationStatus(fun, sync);"); - } catch (e) { - OptimizationStatus = function() { - throw new Error("natives syntax not allowed"); + var OptimizationStatus = function(fun, sync_opt) { + if (OptimizationStatusImpl === undefined) { + try { + OptimizationStatusImpl = new Function( + "fun", "sync", "return %GetOptimizationStatus(fun, sync);"); + } catch (e) { + throw new Error("natives syntax not allowed"); + } } + return OptimizationStatusImpl(fun, sync_opt); } assertUnoptimized = function assertUnoptimized(fun, sync_opt, name_opt) { diff --git a/deps/v8/test/mjsunit/mjsunit.status b/deps/v8/test/mjsunit/mjsunit.status index 117a0e6f71e..228d243643e 100644 --- a/deps/v8/test/mjsunit/mjsunit.status +++ b/deps/v8/test/mjsunit/mjsunit.status @@ -48,6 +48,116 @@ # This test non-deterministically runs out of memory on Windows ia32. 'regress/regress-crbug-160010': [SKIP], + # Issue 3389: deopt_every_n_garbage_collections is unsafe + 'regress/regress-2653': [SKIP], + + ############################################################################## + # TurboFan compiler failures. + + # TODO(mstarzinger): An arguments object materialized in the prologue can't + # be accessed indirectly. Either we drop that requirement or wait for support + # from the deoptimizer to do that. + 'arguments-indirect': [PASS, NO_VARIANTS], + + # TODO(mstarzinger): Sometimes the try-catch blacklist fails. + 'debug-references': [PASS, NO_VARIANTS], + 'regress/regress-263': [PASS, NO_VARIANTS], + + # Some tests are over-restrictive about object layout. + 'array-constructor-feedback': [PASS, NO_VARIANTS], + 'array-feedback': [PASS, NO_VARIANTS], + + # Some tests are just too slow to run for now. + 'big-object-literal': [PASS, NO_VARIANTS], + 'bit-not': [PASS, NO_VARIANTS], + 'json2': [PASS, NO_VARIANTS], + 'packed-elements': [PASS, NO_VARIANTS], + 'unbox-double-arrays': [PASS, NO_VARIANTS], + 'whitespaces': [PASS, NO_VARIANTS], + 'compiler/optimized-for-in': [PASS, NO_VARIANTS], + 'compiler/osr-assert': [PASS, NO_VARIANTS], + 'compiler/osr-regress-max-locals': [PASS, NO_VARIANTS], + 'es7/object-observe': [PASS, NO_VARIANTS], + 'regress/regress-2185-2': [PASS, NO_VARIANTS], + 'regress/regress-284': [PASS, NO_VARIANTS], + 'regress/string-set-char-deopt': [PASS, NO_VARIANTS], + 'tools/profviz': [PASS, NO_VARIANTS], + + # Support for breakpoints requires special relocation info for DebugBreak. + 'debug-clearbreakpointgroup': [PASS, NO_VARIANTS], + 'debug-step-2': [PASS, NO_VARIANTS], + 'regress/regress-debug-deopt-while-recompile': [PASS, NO_VARIANTS], + 'regress/regress-opt-after-debug-deopt': [PASS, NO_VARIANTS], + + # Support for %GetFrameDetails is missing and requires checkpoints. 
+ 'debug-backtrace-text': [PASS, NO_VARIANTS], + 'debug-break-inline': [PASS, NO_VARIANTS], + 'debug-evaluate-arguments': [PASS, NO_VARIANTS], + 'debug-evaluate-bool-constructor': [PASS, NO_VARIANTS], + 'debug-evaluate-closure': [PASS, NO_VARIANTS], + 'debug-evaluate-const': [PASS, NO_VARIANTS], + 'debug-evaluate-locals-optimized-double': [PASS, NO_VARIANTS], + 'debug-evaluate-locals-optimized': [PASS, NO_VARIANTS], + 'debug-evaluate-locals': [PASS, NO_VARIANTS], + 'debug-evaluate-with-context': [PASS, NO_VARIANTS], + 'debug-evaluate-with': [PASS, NO_VARIANTS], + 'debug-liveedit-double-call': [PASS, NO_VARIANTS], + 'debug-liveedit-restart-frame': [PASS, NO_VARIANTS], + 'debug-receiver': [PASS, NO_VARIANTS], + 'debug-return-value': [PASS, NO_VARIANTS], + 'debug-scopes': [PASS, NO_VARIANTS], + 'debug-set-variable-value': [PASS, NO_VARIANTS], + 'debug-step-stub-callfunction': [PASS, NO_VARIANTS], + 'debug-stepin-accessor': [PASS, NO_VARIANTS], + 'debug-stepin-builtin': [PASS, NO_VARIANTS], + 'debug-stepin-constructor': [PASS, NO_VARIANTS], + 'debug-stepin-function-call': [PASS, NO_VARIANTS], + 'debug-stepnext-do-while': [PASS, NO_VARIANTS], + 'debug-stepout-recursive-function': [PASS, NO_VARIANTS], + 'debug-stepout-scope-part1': [PASS, NO_VARIANTS], + 'debug-stepout-scope-part2': [PASS, NO_VARIANTS], + 'debug-stepout-scope-part3': [PASS, NO_VARIANTS], + 'debug-stepout-scope-part7': [PASS, NO_VARIANTS], + 'debug-stepout-to-builtin': [PASS, NO_VARIANTS], + 'es6/debug-promises/throw-in-constructor': [PASS, NO_VARIANTS], + 'es6/debug-promises/reject-in-constructor': [PASS, NO_VARIANTS], + 'es6/debug-promises/throw-with-undefined-reject': [PASS, NO_VARIANTS], + 'es6/debug-promises/throw-with-throw-in-reject': [PASS, NO_VARIANTS], + 'es6/debug-promises/reject-with-throw-in-reject': [PASS, NO_VARIANTS], + 'es6/debug-promises/throw-uncaught-all': [PASS, NO_VARIANTS], + 'es6/debug-promises/throw-uncaught-uncaught': [PASS, NO_VARIANTS], + 'es6/debug-promises/reject-uncaught-late': [PASS, NO_VARIANTS], + 'harmony/debug-blockscopes': [PASS, NO_VARIANTS], + 'harmony/generators-debug-scopes': [PASS, NO_VARIANTS], + 'regress/regress-1081309': [PASS, NO_VARIANTS], + 'regress/regress-1170187': [PASS, NO_VARIANTS], + 'regress/regress-119609': [PASS, NO_VARIANTS], + 'regress/regress-131994': [PASS, NO_VARIANTS], + 'regress/regress-269': [PASS, NO_VARIANTS], + 'regress/regress-325676': [PASS, NO_VARIANTS], + 'regress/regress-crbug-107996': [PASS, NO_VARIANTS], + 'regress/regress-crbug-171715': [PASS, NO_VARIANTS], + 'regress/regress-crbug-222893': [PASS, NO_VARIANTS], + 'regress/regress-crbug-259300': [PASS, NO_VARIANTS], + 'regress/regress-frame-details-null-receiver': [PASS, NO_VARIANTS], + + # Support for ES6 generators is missing. + 'regress-3225': [PASS, NO_VARIANTS], + 'harmony/generators-debug-liveedit': [PASS, NO_VARIANTS], + 'harmony/generators-iteration': [PASS, NO_VARIANTS], + 'harmony/generators-parsing': [PASS, NO_VARIANTS], + 'harmony/generators-poisoned-properties': [PASS, NO_VARIANTS], + 'harmony/generators-relocation': [PASS, NO_VARIANTS], + 'harmony/regress/regress-2681': [PASS, NO_VARIANTS], + 'harmony/regress/regress-2691': [PASS, NO_VARIANTS], + 'harmony/regress/regress-3280': [PASS, NO_VARIANTS], + + # Support for ES6 for-of iteration is missing. 
+  'es6/array-iterator': [PASS, NO_VARIANTS],
+  'es6/iteration-semantics': [PASS, NO_VARIANTS],
+  'es6/string-iterator': [PASS, NO_VARIANTS],
+  'es6/typed-array-iterator': [PASS, NO_VARIANTS],
+
 ##############################################################################
 # Too slow in debug mode with --stress-opt mode.
 'compiler/regress-stacktrace-methods': [PASS, ['mode == debug', SKIP]],
@@ -67,14 +177,11 @@
 # No need to waste time for this test.
 'd8-performance-now': [PASS, NO_VARIANTS],

- ##############################################################################
- 'big-object-literal': [PASS, ['arch == arm or arch == android_arm or arch == android_arm64', SKIP]],
-
 # Issue 488: this test sometimes times out.
 'array-constructor': [PASS, TIMEOUT],

 # Very slow on ARM and MIPS, contains no architecture dependent code.
- 'unicode-case-overoptimization': [PASS, NO_VARIANTS, ['arch == arm or arch == android_arm or arch == android_arm64 or arch == mipsel or arch == mips', TIMEOUT]],
+ 'unicode-case-overoptimization': [PASS, NO_VARIANTS, ['arch == arm or arch == android_arm or arch == android_arm64 or arch == mipsel or arch == mips64el or arch == mips', TIMEOUT]],

 ##############################################################################
 # This test expects to reach a certain recursion depth, which may not work
@@ -84,6 +191,8 @@
 ##############################################################################
 # Skip long running tests that time out in debug mode.
 'generated-transition-stub': [PASS, ['mode == debug', SKIP]],
+ 'migrations': [SKIP],
+ 'array-functions-prototype-misc': [PASS, ['mode == debug', SKIP]],

 ##############################################################################
 # This test sets the umask on a per-process basis and hence cannot be
@@ -94,7 +203,7 @@
 # get the same random seed and would generate the same directory name. Besides
 # that, it doesn't make sense to run several variants of d8-os anyways.
 'd8-os': [PASS, NO_VARIANTS, ['isolates or arch == android_arm or arch == android_arm64 or arch == android_ia32', SKIP]],
- 'tools/tickprocessor': [PASS, ['arch == android_arm or arch == android_arm64 or arch == android_ia32', SKIP]],
+ 'tools/tickprocessor': [PASS, NO_VARIANTS, ['arch == android_arm or arch == android_arm64 or arch == android_ia32', SKIP]],

 ##############################################################################
 # Long running test that reproduces memory leak and should be run manually.
@@ -117,7 +226,7 @@
 # BUG(v8:2989). PASS/FAIL on linux32 because crankshaft is turned off for
 # nosse2. Also for arm novfp3.
- 'regress/regress-2989': [FAIL, NO_VARIANTS, ['system == linux and arch == ia32 or arch == arm and simulator == True', PASS]],
+ 'regress/regress-2989': [FAIL, NO_VARIANTS, ['system == linux and arch == x87 or arch == arm and simulator == True', PASS]],

 # Skip endian dependent test for mips due to different typed views of the same
 # array buffer.
@@ -133,18 +242,66 @@ 'array-feedback': [SKIP], 'array-literal-feedback': [SKIP], 'd8-performance-now': [SKIP], + 'debug-stepout-scope-part8': [PASS, ['arch == arm ', FAIL]], 'elements-kind': [SKIP], + 'elements-transition-hoisting': [SKIP], 'fast-prototype': [SKIP], + 'getters-on-elements': [SKIP], + 'harmony/block-let-crankshaft': [SKIP], 'opt-elements-kind': [SKIP], 'osr-elements-kind': [SKIP], 'regress/regress-165637': [SKIP], 'regress/regress-2249': [SKIP], - 'debug-stepout-scope-part8': [PASS, ['arch == arm ', FAIL]], + # Tests taking too long + 'debug-stepout-scope-part8': [SKIP], + 'mirror-object': [SKIP], + 'packed-elements': [SKIP], + 'regress/regress-1122': [SKIP], + 'regress/regress-331444': [SKIP], + 'regress/regress-353551': [SKIP], + 'regress/regress-crbug-119926': [SKIP], + 'regress/short-circuit': [SKIP], + 'stack-traces-overflow': [SKIP], + 'unicode-test': [SKIP], + 'whitespaces': [SKIP], + + # TODO(mstarzinger): Takes too long with TF. + 'array-sort': [PASS, NO_VARIANTS], }], # 'gc_stress == True' +############################################################################## +['no_i18n', { + # Don't call runtime functions that don't exist without i18n support. + 'runtime-gen/availablelocalesof': [SKIP], + 'runtime-gen/breakiteratoradopttext': [SKIP], + 'runtime-gen/breakiteratorbreaktype': [SKIP], + 'runtime-gen/breakiteratorbreaktype': [SKIP], + 'runtime-gen/breakiteratorcurrent': [SKIP], + 'runtime-gen/breakiteratorfirst': [SKIP], + 'runtime-gen/breakiteratornext': [SKIP], + 'runtime-gen/canonicalizelanguagetag': [SKIP], + 'runtime-gen/createbreakiterator': [SKIP], + 'runtime-gen/createcollator': [SKIP], + 'runtime-gen/getdefaulticulocale': [SKIP], + 'runtime-gen/getimplfrominitializedintlobject': [SKIP], + 'runtime-gen/getlanguagetagvariants': [SKIP], + 'runtime-gen/internalcompare': [SKIP], + 'runtime-gen/internaldateformat': [SKIP], + 'runtime-gen/internaldateparse': [SKIP], + 'runtime-gen/internalnumberformat': [SKIP], + 'runtime-gen/internalnumberparse': [SKIP], + 'runtime-gen/isinitializedintlobject': [SKIP], + 'runtime-gen/isinitializedintlobjectoftype': [SKIP], + 'runtime-gen/markasinitializedintlobjectoftype': [SKIP], + 'runtime-gen/stringnormalize': [SKIP], +}], + ############################################################################## ['arch == arm64 or arch == android_arm64', { + # arm64 TF timeout. + 'regress/regress-1257': [PASS, TIMEOUT], + # Requires bigger stack size in the Genesis and if stack size is increased, # the test requires too much time to run. However, the problem test covers # should be platform-independent. @@ -153,6 +310,7 @@ # Pass but take too long to run. Skip. # Some similar tests (with fewer iterations) may be included in arm64-js # tests. + 'big-object-literal': [SKIP], 'compiler/regress-arguments': [SKIP], 'compiler/regress-gvn': [SKIP], 'compiler/regress-max-locals-for-osr': [SKIP], @@ -174,15 +332,12 @@ 'regress/regress-2185-2': [PASS, TIMEOUT], 'whitespaces': [PASS, TIMEOUT, SLOW], - # Stack manipulations in LiveEdit is not implemented for this arch. - 'debug-liveedit-check-stack': [SKIP], - 'debug-liveedit-stack-padding': [SKIP], - 'debug-liveedit-restart-frame': [SKIP], - 'debug-liveedit-double-call': [SKIP], - # BUG(v8:3147). It works on other architectures by accident. 'regress/regress-conditional-position': [FAIL], + # BUG(v8:3457). + 'deserialize-reference': [PASS, FAIL], + # Slow tests. 
'array-concat': [PASS, SLOW], 'array-constructor': [PASS, SLOW], @@ -193,7 +348,7 @@ 'bit-not': [PASS, SLOW], 'compiler/alloc-number': [PASS, SLOW], 'compiler/osr-assert': [PASS, SLOW], - 'compiler/osr-warm': [PASS, SLOW], + 'compiler/osr-warm': [PASS, TIMEOUT, SLOW], 'compiler/osr-with-args': [PASS, SLOW], 'debug-scopes': [PASS, SLOW], 'generated-transition-stub': [PASS, SLOW], @@ -259,6 +414,7 @@ # Long running tests. Skipping because having them timeout takes too long on # the buildbot. + 'big-object-literal': [SKIP], 'compiler/alloc-number': [SKIP], 'regress/regress-490': [SKIP], 'regress/regress-634': [SKIP], @@ -270,12 +426,6 @@ # should be platform-independent. 'regress/regress-1132': [SKIP], - # Stack manipulations in LiveEdit is not implemented for this arch. - 'debug-liveedit-check-stack': [SKIP], - 'debug-liveedit-stack-padding': [SKIP], - 'debug-liveedit-restart-frame': [SKIP], - 'debug-liveedit-double-call': [SKIP], - # Currently always deopt on minus zero 'math-floor-of-div-minus-zero': [SKIP], @@ -321,16 +471,77 @@ # should be platform-independent. 'regress/regress-1132': [SKIP], - # Stack manipulations in LiveEdit is not implemented for this arch. - 'debug-liveedit-check-stack': [SKIP], - 'debug-liveedit-stack-padding': [SKIP], - 'debug-liveedit-restart-frame': [SKIP], - 'debug-liveedit-double-call': [SKIP], - # Currently always deopt on minus zero 'math-floor-of-div-minus-zero': [SKIP], + + # BUG(v8:3457). + 'deserialize-reference': [SKIP], }], # 'arch == mipsel or arch == mips' +############################################################################## +['arch == mips64el', { + + # Slow tests which times out in debug mode. + 'try': [PASS, ['mode == debug', SKIP]], + 'debug-scripts-request': [PASS, ['mode == debug', SKIP]], + 'array-constructor': [PASS, ['mode == debug', SKIP]], + + # Times out often in release mode on MIPS. + 'compiler/regress-stacktrace-methods': [PASS, PASS, ['mode == release', TIMEOUT]], + 'array-splice': [PASS, TIMEOUT], + + # Long running test. + 'mirror-object': [PASS, TIMEOUT], + 'string-indexof-2': [PASS, TIMEOUT], + + # BUG(3251035): Timeouts in long looping crankshaft optimization + # tests. Skipping because having them timeout takes too long on the + # buildbot. + 'compiler/alloc-number': [PASS, SLOW], + 'compiler/array-length': [PASS, SLOW], + 'compiler/assignment-deopt': [PASS, SLOW], + 'compiler/deopt-args': [PASS, SLOW], + 'compiler/inline-compare': [PASS, SLOW], + 'compiler/inline-global-access': [PASS, SLOW], + 'compiler/optimized-function-calls': [PASS, SLOW], + 'compiler/pic': [PASS, SLOW], + 'compiler/property-calls': [PASS, SLOW], + 'compiler/recursive-deopt': [PASS, SLOW], + 'compiler/regress-4': [PASS, SLOW], + 'compiler/regress-funcaller': [PASS, SLOW], + 'compiler/regress-rep-change': [PASS, SLOW], + 'compiler/regress-arguments': [PASS, SLOW], + 'compiler/regress-funarguments': [PASS, SLOW], + 'compiler/regress-3249650': [PASS, SLOW], + 'compiler/simple-deopt': [PASS, SLOW], + 'regress/regress-490': [PASS, SLOW], + 'regress/regress-634': [PASS, SLOW], + 'regress/regress-create-exception': [PASS, SLOW], + 'regress/regress-3218915': [PASS, SLOW], + 'regress/regress-3247124': [PASS, SLOW], + + # Requires bigger stack size in the Genesis and if stack size is increased, + # the test requires too much time to run. However, the problem test covers + # should be platform-independent. + 'regress/regress-1132': [SKIP], + + # Currently always deopt on minus zero + 'math-floor-of-div-minus-zero': [SKIP], + + # BUG(v8:3457). 
+ 'deserialize-reference': [SKIP], +}], # 'arch == mips64el' + +['arch == mips64el and simulator_run == False', { + # Random failures on HW, need investigation. + 'debug-*': [SKIP], +}], +############################################################################## +['system == windows', { + # BUG(v8:3435) + 'debug-script-breakpoints': [PASS, FAIL], +}], # 'system == windows' + ############################################################################## # Native Client uses the ARM simulator so will behave similarly to arm # on mjsunit tests. @@ -346,6 +557,7 @@ 'debug-liveedit-stack-padding': [SKIP], 'debug-liveedit-restart-frame': [SKIP], 'debug-liveedit-double-call': [SKIP], + 'harmony/generators-debug-liveedit': [SKIP], # NaCl builds have problems with this test since Pepper_28. # V8 Issue 2786 diff --git a/deps/v8/test/mjsunit/object-define-property.js b/deps/v8/test/mjsunit/object-define-property.js index cbb2d211f44..4c495c6824f 100644 --- a/deps/v8/test/mjsunit/object-define-property.js +++ b/deps/v8/test/mjsunit/object-define-property.js @@ -27,7 +27,7 @@ // Tests the object.defineProperty method - ES 15.2.3.6 -// Flags: --allow-natives-syntax --es5-readonly +// Flags: --allow-natives-syntax // Check that an exception is thrown when null is passed as object. var exception = false; @@ -467,35 +467,35 @@ try { } -// Test runtime calls to DefineOrRedefineDataProperty and -// DefineOrRedefineAccessorProperty - make sure we don't +// Test runtime calls to DefineDataPropertyUnchecked and +// DefineAccessorPropertyUnchecked - make sure we don't // crash. try { - %DefineOrRedefineAccessorProperty(0, 0, 0, 0, 0); + %DefineAccessorPropertyUnchecked(0, 0, 0, 0, 0); } catch (e) { assertTrue(/illegal access/.test(e)); } try { - %DefineOrRedefineDataProperty(0, 0, 0, 0); + %DefineDataPropertyUnchecked(0, 0, 0, 0); } catch (e) { assertTrue(/illegal access/.test(e)); } try { - %DefineOrRedefineDataProperty(null, null, null, null); + %DefineDataPropertyUnchecked(null, null, null, null); } catch (e) { assertTrue(/illegal access/.test(e)); } try { - %DefineOrRedefineAccessorProperty(null, null, null, null, null); + %DefineAccessorPropertyUnchecked(null, null, null, null, null); } catch (e) { assertTrue(/illegal access/.test(e)); } try { - %DefineOrRedefineDataProperty({}, null, null, null); + %DefineDataPropertyUnchecked({}, null, null, null); } catch (e) { assertTrue(/illegal access/.test(e)); } @@ -503,13 +503,13 @@ try { // Defining properties null should fail even when we have // other allowed values try { - %DefineOrRedefineAccessorProperty(null, 'foo', func, null, 0); + %DefineAccessorPropertyUnchecked(null, 'foo', func, null, 0); } catch (e) { assertTrue(/illegal access/.test(e)); } try { - %DefineOrRedefineDataProperty(null, 'foo', 0, 0); + %DefineDataPropertyUnchecked(null, 'foo', 0, 0); } catch (e) { assertTrue(/illegal access/.test(e)); } diff --git a/deps/v8/test/mjsunit/object-toprimitive.js b/deps/v8/test/mjsunit/object-toprimitive.js index 3a67ced47e7..34803ec9348 100644 --- a/deps/v8/test/mjsunit/object-toprimitive.js +++ b/deps/v8/test/mjsunit/object-toprimitive.js @@ -102,3 +102,5 @@ trace = []; var nt = Number(ot); assertEquals(87, nt); assertEquals(["gvo", "gts", "ts"], trace); + +assertThrows('Number(Symbol())', TypeError); diff --git a/deps/v8/test/mjsunit/opt-elements-kind.js b/deps/v8/test/mjsunit/opt-elements-kind.js index f26bb420677..be7303b04bb 100644 --- a/deps/v8/test/mjsunit/opt-elements-kind.js +++ b/deps/v8/test/mjsunit/opt-elements-kind.js @@ -25,28 +25,13 @@ // 
(INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
 // OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.

-// Flags: --allow-natives-syntax --smi-only-arrays --expose-gc
+// Flags: --allow-natives-syntax --expose-gc

 // Limit the number of stress runs to reduce polymorphism; it defeats some of
 // the assumptions made about how elements transitions work, because transition
 // stubs end up going generic.
 // Flags: --stress-runs=2

-// Test element kind of objects.
-// Since --smi-only-arrays affects builtins, its default setting at compile
-// time sticks if built with snapshot. If --smi-only-arrays is deactivated
-// by default, only a no-snapshot build actually has smi-only arrays enabled
-// in this test case. Depending on whether smi-only arrays are actually
-// enabled, this test takes the appropriate code path to check smi-only arrays.
-
-support_smi_only_arrays = %HasFastSmiElements(new Array(1,2,3,4,5,6,7,8));
-
-if (support_smi_only_arrays) {
-  print("Tests include smi-only arrays.");
-} else {
-  print("Tests do NOT include smi-only arrays.");
-}
-
 var elements_kind = {
   fast_smi_only : 'fast smi only elements',
   fast : 'fast elements',
@@ -100,10 +85,6 @@ function getKind(obj) {
 }

 function assertKind(expected, obj, name_opt) {
-  if (!support_smi_only_arrays &&
-      expected == elements_kind.fast_smi_only) {
-    expected = elements_kind.fast;
-  }
   assertEquals(expected, getKind(obj), name_opt);
 }

@@ -143,8 +124,6 @@ function convert_mixed(array, value, kind) {
 }

 function test1() {
-  if (!support_smi_only_arrays) return;
-
   // Test transition chain SMI->DOUBLE->FAST (crankshafted function will
   // transition to FAST directly).
   var smis = construct_smis();
diff --git a/deps/v8/test/mjsunit/osr-elements-kind.js b/deps/v8/test/mjsunit/osr-elements-kind.js
index 2ad3c434873..518b9847430 100644
--- a/deps/v8/test/mjsunit/osr-elements-kind.js
+++ b/deps/v8/test/mjsunit/osr-elements-kind.js
@@ -25,28 +25,13 @@
 // (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
 // OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.

-// Flags: --allow-natives-syntax --smi-only-arrays --expose-gc
+// Flags: --allow-natives-syntax --expose-gc

 // Limit the number of stress runs to reduce polymorphism; it defeats some of
 // the assumptions made about how elements transitions work, because transition
 // stubs end up going generic.
 // Flags: --stress-runs=2

-// Test element kind of objects.
-// Since --smi-only-arrays affects builtins, its default setting at compile
-// time sticks if built with snapshot. If --smi-only-arrays is deactivated
-// by default, only a no-snapshot build actually has smi-only arrays enabled
-// in this test case. Depending on whether smi-only arrays are actually
-// enabled, this test takes the appropriate code path to check smi-only arrays.
- -support_smi_only_arrays = %HasFastSmiElements(new Array(1,2,3,4,5,6,7,8)); - -if (support_smi_only_arrays) { - print("Tests include smi-only arrays."); -} else { - print("Tests do NOT include smi-only arrays."); -} - var elements_kind = { fast_smi_only : 'fast smi only elements', fast : 'fast elements', @@ -100,10 +85,6 @@ function getKind(obj) { } function assertKind(expected, obj, name_opt) { - if (!support_smi_only_arrays && - expected == elements_kind.fast_smi_only) { - expected = elements_kind.fast; - } assertEquals(expected, getKind(obj), name_opt); } @@ -113,53 +94,51 @@ function assertKind(expected, obj, name_opt) { %NeverOptimizeFunction(convert_mixed); for (var i = 0; i < 1000000; i++) { } -if (support_smi_only_arrays) { - // This code exists to eliminate the learning influence of AllocationSites - // on the following tests. - var __sequence = 0; - function make_array_string() { - this.__sequence = this.__sequence + 1; - return "/* " + this.__sequence + " */ [0, 0, 0];" - } - function make_array() { - return eval(make_array_string()); - } +// This code exists to eliminate the learning influence of AllocationSites +// on the following tests. +var __sequence = 0; +function make_array_string() { + this.__sequence = this.__sequence + 1; + return "/* " + this.__sequence + " */ [0, 0, 0];" +} +function make_array() { + return eval(make_array_string()); +} - function construct_smis() { - var a = make_array(); - a[0] = 0; // Send the COW array map to the steak house. - assertKind(elements_kind.fast_smi_only, a); - return a; - } - function construct_doubles() { - var a = construct_smis(); - a[0] = 1.5; - assertKind(elements_kind.fast_double, a); - return a; - } +function construct_smis() { + var a = make_array(); + a[0] = 0; // Send the COW array map to the steak house. + assertKind(elements_kind.fast_smi_only, a); + return a; +} +function construct_doubles() { + var a = construct_smis(); + a[0] = 1.5; + assertKind(elements_kind.fast_double, a); + return a; +} - // Test transition chain SMI->DOUBLE->FAST (crankshafted function will - // transition to FAST directly). - function convert_mixed(array, value, kind) { - array[1] = value; - assertKind(kind, array); - assertEquals(value, array[1]); - } - smis = construct_smis(); - convert_mixed(smis, 1.5, elements_kind.fast_double); +// Test transition chain SMI->DOUBLE->FAST (crankshafted function will +// transition to FAST directly). +function convert_mixed(array, value, kind) { + array[1] = value; + assertKind(kind, array); + assertEquals(value, array[1]); +} +smis = construct_smis(); +convert_mixed(smis, 1.5, elements_kind.fast_double); - doubles = construct_doubles(); - convert_mixed(doubles, "three", elements_kind.fast); +doubles = construct_doubles(); +convert_mixed(doubles, "three", elements_kind.fast); - convert_mixed(construct_smis(), "three", elements_kind.fast); - convert_mixed(construct_doubles(), "three", elements_kind.fast); +convert_mixed(construct_smis(), "three", elements_kind.fast); +convert_mixed(construct_doubles(), "three", elements_kind.fast); - smis = construct_smis(); - doubles = construct_doubles(); - convert_mixed(smis, 1, elements_kind.fast); - convert_mixed(doubles, 1, elements_kind.fast); - assertTrue(%HaveSameMap(smis, doubles)); -} +smis = construct_smis(); +doubles = construct_doubles(); +convert_mixed(smis, 1, elements_kind.fast); +convert_mixed(doubles, 1, elements_kind.fast); +assertTrue(%HaveSameMap(smis, doubles)); // Throw away type information in the ICs for next stress run. 
gc(); diff --git a/deps/v8/test/mjsunit/outobject-double-for-in.js b/deps/v8/test/mjsunit/outobject-double-for-in.js new file mode 100644 index 00000000000..eb8ac940a7f --- /dev/null +++ b/deps/v8/test/mjsunit/outobject-double-for-in.js @@ -0,0 +1,66 @@ +// Copyright 2012 the V8 project authors. All rights reserved. +// Redistribution and use in source and binary forms, with or without +// modification, are permitted provided that the following conditions are +// met: +// +// * Redistributions of source code must retain the above copyright +// notice, this list of conditions and the following disclaimer. +// * Redistributions in binary form must reproduce the above +// copyright notice, this list of conditions and the following +// disclaimer in the documentation and/or other materials provided +// with the distribution. +// * Neither the name of Google Inc. nor the names of its +// contributors may be used to endorse or promote products derived +// from this software without specific prior written permission. +// +// THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS +// "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT +// LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR +// A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT +// OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, +// SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT +// LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, +// DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY +// THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT +// (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE +// OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. + +// Flags: --allow-natives-syntax + +function DoubleContainer() { + this.x0 = 0.5; + this.x1 = undefined; + this.x2 = undefined; + this.x3 = undefined; + this.x4 = undefined; + this.x5 = undefined; + this.x6 = undefined; + this.x7 = 5; + this.x8 = undefined; + this.x9 = undefined; + this.x10 = undefined; + this.x11 = undefined; + this.x12 = undefined; + this.x13 = undefined; + this.x14 = undefined; + this.x15 = undefined; + this.x16 = true; + this.y = 2.5; +} + +var z = new DoubleContainer(); + +function test_props(a) { + for (var i in a) { + assertTrue(i !== "x0" || a[i] === 0.5); + assertTrue(i !== "y" || a[i] === 2.5); + assertTrue(i !== "x12" || a[i] === undefined); + assertTrue(i !== "x16" || a[i] === true); + assertTrue(i !== "x7" || a[i] === 5); + } +} + +test_props(z); +test_props(z); +%OptimizeFunctionOnNextCall(test_props); +test_props(z); diff --git a/deps/v8/test/mjsunit/override-read-only-property.js b/deps/v8/test/mjsunit/override-read-only-property.js index 2876ae1f849..f8114a66016 100644 --- a/deps/v8/test/mjsunit/override-read-only-property.js +++ b/deps/v8/test/mjsunit/override-read-only-property.js @@ -25,8 +25,6 @@ // (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE // OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. -// Flags: --es5_readonly - // According to ECMA-262, sections 8.6.2.2 and 8.6.2.3 you're not // allowed to override read-only properties, not even if the read-only // property is in the prototype chain. 
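The ECMA-262 rule the test above exercises can be sketched in a few lines of
plain JavaScript (the names here are illustrative, not from the test suite):

    // A read-only property cannot be shadowed by assignment, even when it
    // sits on the prototype rather than on the receiver itself.
    var proto = {};
    Object.defineProperty(proto, 'p', { value: 1, writable: false });
    var obj = Object.create(proto);
    obj.p = 2;        // sloppy mode: the write is silently ignored
    print(obj.p);     // still 1, inherited from proto
    (function() {
      'use strict';
      try { obj.p = 3; } catch (e) { print(e instanceof TypeError); }  // true
    })();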
diff --git a/deps/v8/test/mjsunit/own-symbols.js b/deps/v8/test/mjsunit/own-symbols.js new file mode 100644 index 00000000000..588a032aa86 --- /dev/null +++ b/deps/v8/test/mjsunit/own-symbols.js @@ -0,0 +1,55 @@ +// Copyright 2014 the V8 project authors. All rights reserved. +// Use of this source code is governed by a BSD-style license that can be +// found in the LICENSE file. +// +// Flags: --allow-natives-syntax + +var s = %CreatePrivateOwnSymbol("s"); +var s1 = %CreatePrivateOwnSymbol("s1"); + +function TestSimple() { + var p = {} + p[s] = "moo"; + + var o = Object.create(p); + + assertEquals(undefined, o[s]); + assertEquals("moo", p[s]); + + o[s] = "bow-wow"; + assertEquals("bow-wow", o[s]); + assertEquals("moo", p[s]); +} + +TestSimple(); + + +function TestICs() { + var p = {} + p[s] = "moo"; + + + var o = Object.create(p); + o[s1] = "bow-wow"; + function checkNonOwn(o) { + assertEquals(undefined, o[s]); + assertEquals("bow-wow", o[s1]); + } + + checkNonOwn(o); + + // Test monomorphic/optimized. + for (var i = 0; i < 1000; i++) { + checkNonOwn(o); + } + + // Test non-monomorphic. + for (var i = 0; i < 1000; i++) { + var oNew = Object.create(p); + oNew["s" + i] = i; + oNew[s1] = "bow-wow"; + checkNonOwn(oNew); + } +} + +TestICs(); diff --git a/deps/v8/test/mjsunit/packed-elements.js b/deps/v8/test/mjsunit/packed-elements.js index 4a873730643..3ce92d1186b 100644 --- a/deps/v8/test/mjsunit/packed-elements.js +++ b/deps/v8/test/mjsunit/packed-elements.js @@ -25,9 +25,7 @@ // (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE // OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. -// Flags: --allow-natives-syntax --smi-only-arrays --packed-arrays - -var has_packed_elements = !%HasFastHoleyElements(Array()); +// Flags: --allow-natives-syntax function test1() { var a = Array(8); @@ -101,11 +99,9 @@ function test_with_optimization(f) { for (i = 0; i < 25000; ++i) f(); // Make sure GC happens } -if (has_packed_elements) { - test_with_optimization(test1); - test_with_optimization(test2); - test_with_optimization(test3); - test_with_optimization(test4); - test_with_optimization(test5); - test_with_optimization(test6); -} +test_with_optimization(test1); +test_with_optimization(test2); +test_with_optimization(test3); +test_with_optimization(test4); +test_with_optimization(test5); +test_with_optimization(test6); diff --git a/deps/v8/test/mjsunit/polymorph-arrays.js b/deps/v8/test/mjsunit/polymorph-arrays.js index ff0c433bd76..2bb0433214e 100644 --- a/deps/v8/test/mjsunit/polymorph-arrays.js +++ b/deps/v8/test/mjsunit/polymorph-arrays.js @@ -37,7 +37,7 @@ function init_sparse_array(a) { a[i] = i; } a[5000000] = 256; - assertTrue(%HasDictionaryElements(a)); + return %NormalizeElements(a); } function testPolymorphicLoads() { @@ -49,7 +49,7 @@ function testPolymorphicLoads() { var object_array = new Object; var sparse_object_array = new Object; var js_array = new Array(10); - var sparse_js_array = new Array(5000001); + var sparse_js_array = %NormalizeElements([]); init_array(object_array); init_array(js_array); @@ -67,7 +67,7 @@ function testPolymorphicLoads() { var object_array = new Object; var sparse_object_array = new Object; var js_array = new Array(10); - var sparse_js_array = new Array(5000001); + var sparse_js_array = %NormalizeElements([]); init_array(object_array); init_array(js_array); @@ -114,7 +114,8 @@ function testPolymorphicStores() { var object_array = new Object; var sparse_object_array = new Object; var js_array = new Array(10); - var 
sparse_js_array = new Array(5000001); + var sparse_js_array = []; + sparse_js_array.length = 5000001; init_array(object_array); init_array(js_array); @@ -132,7 +133,8 @@ function testPolymorphicStores() { var object_array = new Object; var sparse_object_array = new Object; var js_array = new Array(10); - var sparse_js_array = new Array(5000001); + var sparse_js_array = %NormalizeElements([]); + sparse_js_array.length = 5000001; init_array(object_array); init_array(js_array); diff --git a/deps/v8/test/mjsunit/proto-accessor.js b/deps/v8/test/mjsunit/proto-accessor.js index b2e7d346690..513a044023e 100644 --- a/deps/v8/test/mjsunit/proto-accessor.js +++ b/deps/v8/test/mjsunit/proto-accessor.js @@ -25,8 +25,6 @@ // (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE // OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. -// Flags: --harmony-symbols - // Fake Symbol if undefined, allowing test to run in non-Harmony mode as well. this.Symbol = typeof Symbol != 'undefined' ? Symbol : String; diff --git a/deps/v8/test/mjsunit/readonly.js b/deps/v8/test/mjsunit/readonly.js index 050e2562759..084e9ffe23d 100644 --- a/deps/v8/test/mjsunit/readonly.js +++ b/deps/v8/test/mjsunit/readonly.js @@ -25,8 +25,7 @@ // (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE // OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. -// Flags: --allow-natives-syntax --es5_readonly -// Flags: --harmony-proxies +// Flags: --allow-natives-syntax --harmony-proxies // Different ways to create an object. diff --git a/deps/v8/test/mjsunit/regress-3456.js b/deps/v8/test/mjsunit/regress-3456.js new file mode 100644 index 00000000000..498953b8073 --- /dev/null +++ b/deps/v8/test/mjsunit/regress-3456.js @@ -0,0 +1,13 @@ +// Copyright 2014 the V8 project authors. All rights reserved. +// Use of this source code is governed by a BSD-style license that can be +// found in the LICENSE file. + +// Flags: --min-preparse-length 1 + +// Arrow function parsing (commit r22366) changed the flags stored in +// PreParserExpression, and IsValidReferenceExpression() would return +// false for certain valid expressions. This case is the minimum amount +// of code needed to validate that IsValidReferenceExpression() works +// properly. If it does not, a ReferenceError is thrown during parsing. + +function f() { ++(this.foo) } diff --git a/deps/v8/test/mjsunit/regress/debug-prepare-step-in.js b/deps/v8/test/mjsunit/regress/debug-prepare-step-in.js index b8c21164000..60b47f7a5d1 100644 --- a/deps/v8/test/mjsunit/regress/debug-prepare-step-in.js +++ b/deps/v8/test/mjsunit/regress/debug-prepare-step-in.js @@ -52,3 +52,5 @@ function g() { } g(); + +Debug.setListener(null); diff --git a/deps/v8/test/mjsunit/regress/regress-1170.js b/deps/v8/test/mjsunit/regress/regress-1170.js index 8c5f6f8ab4b..5d5800ee373 100644 --- a/deps/v8/test/mjsunit/regress/regress-1170.js +++ b/deps/v8/test/mjsunit/regress/regress-1170.js @@ -25,8 +25,6 @@ // (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE // OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. 
-// Flags: --es52_globals - var setter_value = 0; this.__defineSetter__("a", function(v) { setter_value = v; }); @@ -35,8 +33,9 @@ assertEquals(1, setter_value); assertFalse("value" in Object.getOwnPropertyDescriptor(this, "a")); eval("with({}) { eval('var a = 2') }"); -assertEquals(2, setter_value); +assertTrue("get" in Object.getOwnPropertyDescriptor(this, "a")); assertFalse("value" in Object.getOwnPropertyDescriptor(this, "a")); +assertEquals(2, setter_value); // Function declarations are treated specially to match Safari. We do // not call setters for them. @@ -47,10 +46,8 @@ assertTrue("value" in Object.getOwnPropertyDescriptor(this, "a")); this.__defineSetter__("b", function(v) { setter_value = v; }); try { eval("const b = 3"); -} catch(e) { - assertUnreachable(); -} -assertEquals(3, setter_value); +} catch(e) { } +assertEquals(2, setter_value); try { eval("with({}) { eval('const b = 23') }"); diff --git a/deps/v8/test/mjsunit/regress/regress-1199637.js b/deps/v8/test/mjsunit/regress/regress-1199637.js index 8b02a6559c4..397aeb87625 100644 --- a/deps/v8/test/mjsunit/regress/regress-1199637.js +++ b/deps/v8/test/mjsunit/regress/regress-1199637.js @@ -34,43 +34,43 @@ const NONE = 0; const READ_ONLY = 1; // Use DeclareGlobal... -%SetProperty(this.__proto__, "a", 1234, NONE); +%AddNamedProperty(this.__proto__, "a", 1234, NONE); assertEquals(1234, a); eval("var a = 5678;"); assertEquals(5678, a); -%SetProperty(this.__proto__, "b", 1234, NONE); +%AddNamedProperty(this.__proto__, "b", 1234, NONE); assertEquals(1234, b); eval("const b = 5678;"); assertEquals(5678, b); -%SetProperty(this.__proto__, "c", 1234, READ_ONLY); +%AddNamedProperty(this.__proto__, "c", 1234, READ_ONLY); assertEquals(1234, c); eval("var c = 5678;"); assertEquals(5678, c); -%SetProperty(this.__proto__, "d", 1234, READ_ONLY); +%AddNamedProperty(this.__proto__, "d", 1234, READ_ONLY); assertEquals(1234, d); eval("const d = 5678;"); assertEquals(5678, d); // Use DeclareContextSlot... -%SetProperty(this.__proto__, "x", 1234, NONE); +%AddNamedProperty(this.__proto__, "x", 1234, NONE); assertEquals(1234, x); eval("with({}) { var x = 5678; }"); assertEquals(5678, x); -%SetProperty(this.__proto__, "y", 1234, NONE); +%AddNamedProperty(this.__proto__, "y", 1234, NONE); assertEquals(1234, y); eval("with({}) { const y = 5678; }"); assertEquals(5678, y); -%SetProperty(this.__proto__, "z", 1234, READ_ONLY); +%AddNamedProperty(this.__proto__, "z", 1234, READ_ONLY); assertEquals(1234, z); eval("with({}) { var z = 5678; }"); assertEquals(5678, z); -%SetProperty(this.__proto__, "w", 1234, READ_ONLY); +%AddNamedProperty(this.__proto__, "w", 1234, READ_ONLY); assertEquals(1234, w); eval("with({}) { const w = 5678; }"); assertEquals(5678, w); diff --git a/deps/v8/test/mjsunit/regress/regress-1213575.js b/deps/v8/test/mjsunit/regress/regress-1213575.js index f3a11dbaab0..8c197bcf83a 100644 --- a/deps/v8/test/mjsunit/regress/regress-1213575.js +++ b/deps/v8/test/mjsunit/regress/regress-1213575.js @@ -37,4 +37,4 @@ try { assertTrue(e instanceof TypeError); caught = true; } -assertFalse(caught); +assertTrue(caught); diff --git a/deps/v8/test/mjsunit/regress/regress-1530.js b/deps/v8/test/mjsunit/regress/regress-1530.js index db2114450e4..20d1f265c09 100644 --- a/deps/v8/test/mjsunit/regress/regress-1530.js +++ b/deps/v8/test/mjsunit/regress/regress-1530.js @@ -33,34 +33,52 @@ var f = function() {}; // Verify that normal assignment of 'prototype' property works properly // and updates the internal value. 
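// (Per ES5 15.3.5.2, a function's "prototype" is a writable, non-enumerable,
// non-configurable data property; the assertions below pin down the writable
// attribute across plain assignment and defineProperty redefinition.)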
-var x = { foo: 'bar' }; -f.prototype = x; -assertSame(f.prototype, x); +var a = { foo: 'bar' }; +f.prototype = a; +assertSame(f.prototype, a); assertSame(f.prototype.foo, 'bar'); assertSame(new f().foo, 'bar'); -assertSame(Object.getPrototypeOf(new f()), x); -assertSame(Object.getOwnPropertyDescriptor(f, 'prototype').value, x); +assertSame(Object.getPrototypeOf(new f()), a); +assertSame(Object.getOwnPropertyDescriptor(f, 'prototype').value, a); +assertTrue(Object.getOwnPropertyDescriptor(f, 'prototype').writable); // Verify that 'prototype' behaves like a data property when it comes to // redefining with Object.defineProperty() and the internal value gets // updated. -var y = { foo: 'baz' }; -Object.defineProperty(f, 'prototype', { value: y, writable: true }); -assertSame(f.prototype, y); +var b = { foo: 'baz' }; +Object.defineProperty(f, 'prototype', { value: b, writable: true }); +assertSame(f.prototype, b); assertSame(f.prototype.foo, 'baz'); assertSame(new f().foo, 'baz'); -assertSame(Object.getPrototypeOf(new f()), y); -assertSame(Object.getOwnPropertyDescriptor(f, 'prototype').value, y); +assertSame(Object.getPrototypeOf(new f()), b); +assertSame(Object.getOwnPropertyDescriptor(f, 'prototype').value, b); +assertTrue(Object.getOwnPropertyDescriptor(f, 'prototype').writable); // Verify that the previous redefinition didn't screw up callbacks and // the internal value still gets updated. -var z = { foo: 'other' }; -f.prototype = z; -assertSame(f.prototype, z); +var c = { foo: 'other' }; +f.prototype = c; +assertSame(f.prototype, c); assertSame(f.prototype.foo, 'other'); assertSame(new f().foo, 'other'); -assertSame(Object.getPrototypeOf(new f()), z); -assertSame(Object.getOwnPropertyDescriptor(f, 'prototype').value, z); +assertSame(Object.getPrototypeOf(new f()), c); +assertSame(Object.getOwnPropertyDescriptor(f, 'prototype').value, c); +assertTrue(Object.getOwnPropertyDescriptor(f, 'prototype').writable); + +// Verify that 'prototype' can be redefined to contain a different value +// and have a different writability attribute at the same time. +var d = { foo: 'final' }; +Object.defineProperty(f, 'prototype', { value: d, writable: false }); +assertSame(f.prototype, d); +assertSame(f.prototype.foo, 'final'); +assertSame(new f().foo, 'final'); +assertSame(Object.getPrototypeOf(new f()), d); +assertSame(Object.getOwnPropertyDescriptor(f, 'prototype').value, d); +assertFalse(Object.getOwnPropertyDescriptor(f, 'prototype').writable); + +// Verify that non-writability of redefined 'prototype' is respected. +assertThrows("'use strict'; f.prototype = {}"); +assertThrows("Object.defineProperty(f, 'prototype', { value: {} })"); // Verify that non-writability of other properties is respected. assertThrows("Object.defineProperty(f, 'name', { value: {} })"); diff --git a/deps/v8/test/mjsunit/regress/regress-1708.js b/deps/v8/test/mjsunit/regress/regress-1708.js index 48ee79c77cb..ed2ddb1458f 100644 --- a/deps/v8/test/mjsunit/regress/regress-1708.js +++ b/deps/v8/test/mjsunit/regress/regress-1708.js @@ -32,7 +32,7 @@ // sure that concurrent sweeping, which relies on similar assumptions // as lazy sweeping works correctly. 
-// Flags: --expose-gc --noincremental-marking --max-new-space-size=2 +// Flags: --expose-gc --noincremental-marking --max-semi-space-size=1 (function() { var head = new Array(1); diff --git a/deps/v8/test/mjsunit/regress/regress-2790.js b/deps/v8/test/mjsunit/regress/regress-2790.js index 927f2607cc1..ac79e640459 100644 --- a/deps/v8/test/mjsunit/regress/regress-2790.js +++ b/deps/v8/test/mjsunit/regress/regress-2790.js @@ -26,6 +26,6 @@ // OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. // Test that we can create arrays of any size. -for (var i = 1000; i < 1000000; i += 197) { +for (var i = 1000; i < 1000000; i += 19703) { new Array(i); } diff --git a/deps/v8/test/mjsunit/regress/regress-320532.js b/deps/v8/test/mjsunit/regress/regress-320532.js index 6ec4b97293f..0c3198f7907 100644 --- a/deps/v8/test/mjsunit/regress/regress-320532.js +++ b/deps/v8/test/mjsunit/regress/regress-320532.js @@ -25,7 +25,7 @@ // (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE // OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. // -// Flags: --allow-natives-syntax --smi-only-arrays --expose-gc +// Flags: --allow-natives-syntax --expose-gc // Flags: --noalways-opt // Flags: --stress-runs=8 --send-idle-notification --gc-global diff --git a/deps/v8/test/mjsunit/regress/regress-3281.js b/deps/v8/test/mjsunit/regress/regress-3281.js index ebb25991d85..7d42c026b6b 100644 --- a/deps/v8/test/mjsunit/regress/regress-3281.js +++ b/deps/v8/test/mjsunit/regress/regress-3281.js @@ -2,12 +2,11 @@ // Use of this source code is governed by a BSD-style license that can be // found in the LICENSE file. -// Flags: --allow-natives-syntax --harmony-collections - +// Flags: --expose-natives-as=builtins // Should not crash or raise an exception. var s = new Set(); -var setIterator = %SetCreateIterator(s, 2); +var setIterator = new builtins.SetIterator(s, 2); var m = new Map(); -var mapIterator = %MapCreateIterator(m, 2); +var mapIterator = new builtins.MapIterator(m, 2); diff --git a/deps/v8/test/mjsunit/regress/regress-3307.js b/deps/v8/test/mjsunit/regress/regress-3307.js new file mode 100644 index 00000000000..1fc770d20c8 --- /dev/null +++ b/deps/v8/test/mjsunit/regress/regress-3307.js @@ -0,0 +1,24 @@ +// Copyright 2014 the V8 project authors. All rights reserved. +// Use of this source code is governed by a BSD-style license that can be +// found in the LICENSE file. + +// Flags: --allow-natives-syntax + +function p(x) { + this.x = x; +} + +function f() { + var a = new p(1), b = new p(2); + for (var i = 0; i < 1; i++) { + a.x += b.x; + } + return a.x; +} + +new p(0.1); // make 'x' mutable box double field in p. + +assertEquals(3, f()); +assertEquals(3, f()); +%OptimizeFunctionOnNextCall(f); +assertEquals(3, f()); diff --git a/deps/v8/test/mjsunit/regress/regress-3315.js b/deps/v8/test/mjsunit/regress/regress-3315.js new file mode 100644 index 00000000000..a1105e28488 --- /dev/null +++ b/deps/v8/test/mjsunit/regress/regress-3315.js @@ -0,0 +1,26 @@ +// Copyright 2014 the V8 project authors. All rights reserved. +// Use of this source code is governed by a BSD-style license that can be +// found in the LICENSE file. 
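+// Object.observe must treat acceptList like an array-like: read "length"
+// once, then only the indices 0 .. length-1, each exactly once. The getter
+// counters below check exactly that.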
+ +var indexZeroCallCount = 0; +var indexOneCallCount = 0; +var lengthCallCount = 0; +var acceptList = { + get 0() { + indexZeroCallCount++; + return 'foo'; + }, + get 1() { + indexOneCallCount++; + return 'bar'; + }, + get length() { + lengthCallCount++; + return 1; + } +}; + +Object.observe({}, function(){}, acceptList); +assertEquals(1, lengthCallCount); +assertEquals(1, indexZeroCallCount); +assertEquals(0, indexOneCallCount); diff --git a/deps/v8/test/mjsunit/regress/regress-3334.js b/deps/v8/test/mjsunit/regress/regress-3334.js new file mode 100644 index 00000000000..301155dbde7 --- /dev/null +++ b/deps/v8/test/mjsunit/regress/regress-3334.js @@ -0,0 +1,13 @@ +// Copyright 2014 the V8 project authors. All rights reserved. +// Use of this source code is governed by a BSD-style license that can be +// found in the LICENSE file. + +function foo(){} +Object.defineProperty(foo, "prototype", { value: 2 }); +assertEquals(2, foo.prototype); + +function bar(){} +Object.defineProperty(bar, "prototype", { value: 2, writable: false }); +assertEquals(2, bar.prototype); +assertThrows(function() { "use strict"; bar.prototype = 10; }, TypeError); +assertEquals(false, Object.getOwnPropertyDescriptor(bar,"prototype").writable); diff --git a/deps/v8/test/mjsunit/regress/regress-334.js b/deps/v8/test/mjsunit/regress/regress-334.js index 37dd299cf58..c52c72aa905 100644 --- a/deps/v8/test/mjsunit/regress/regress-334.js +++ b/deps/v8/test/mjsunit/regress/regress-334.js @@ -37,10 +37,10 @@ function func1(){} function func2(){} var object = {__proto__:{}}; -%SetProperty(object, "foo", func1, DONT_ENUM | DONT_DELETE); -%SetProperty(object, "bar", func1, DONT_ENUM | READ_ONLY); -%SetProperty(object, "baz", func1, DONT_DELETE | READ_ONLY); -%SetProperty(object.__proto__, "bif", func1, DONT_ENUM | DONT_DELETE); +%AddNamedProperty(object, "foo", func1, DONT_ENUM | DONT_DELETE); +%AddNamedProperty(object, "bar", func1, DONT_ENUM | READ_ONLY); +%AddNamedProperty(object, "baz", func1, DONT_DELETE | READ_ONLY); +%AddNamedProperty(object.__proto__, "bif", func1, DONT_ENUM | DONT_DELETE); object.bif = func2; function enumerable(obj) { diff --git a/deps/v8/test/mjsunit/regress/regress-3359.js b/deps/v8/test/mjsunit/regress/regress-3359.js new file mode 100644 index 00000000000..0973797e7e7 --- /dev/null +++ b/deps/v8/test/mjsunit/regress/regress-3359.js @@ -0,0 +1,12 @@ +// Copyright 2014 the V8 project authors. All rights reserved. +// Use of this source code is governed by a BSD-style license that can be +// found in the LICENSE file. + +// Flags: --allow-natives-syntax + +function f() { + return 1 >> Boolean.constructor + 1; +} +assertEquals(1, f()); +%OptimizeFunctionOnNextCall(f); +assertEquals(1, f()); diff --git a/deps/v8/test/mjsunit/regress/regress-3380.js b/deps/v8/test/mjsunit/regress/regress-3380.js new file mode 100644 index 00000000000..2fae459b3b4 --- /dev/null +++ b/deps/v8/test/mjsunit/regress/regress-3380.js @@ -0,0 +1,16 @@ +// Copyright 2014 the V8 project authors. All rights reserved. +// Use of this source code is governed by a BSD-style license that can be +// found in the LICENSE file. 
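+// Annotation (not in the upstream patch): after optimization the shift
+// result must stay a uint32; reinterpreting 0x80000000 as a signed int32
+// would make it negative and flip the comparison.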
+ +// Flags: --allow-natives-syntax + +function foo(a) { + return (a[0] >>> 0) > 0; +} + +var a = new Uint32Array([4]); +var b = new Uint32Array([0x80000000]); +assertTrue(foo(a)); +assertTrue(foo(a)); +%OptimizeFunctionOnNextCall(foo); +assertTrue(foo(b)) diff --git a/deps/v8/test/mjsunit/regress/regress-3392.js b/deps/v8/test/mjsunit/regress/regress-3392.js new file mode 100644 index 00000000000..375f30210ce --- /dev/null +++ b/deps/v8/test/mjsunit/regress/regress-3392.js @@ -0,0 +1,18 @@ +// Copyright 2014 the V8 project authors. All rights reserved. +// Use of this source code is governed by a BSD-style license that can be +// found in the LICENSE file. + +// Flags: --allow-natives-syntax + +function foo() { + var a = {b: -1.5}; + for (var i = 0; i < 1; i++) { + a.b = 1; + } + assertTrue(0 <= a.b); +} + +foo(); +foo(); +%OptimizeFunctionOnNextCall(foo); +foo(); diff --git a/deps/v8/test/mjsunit/regress/regress-3404.js b/deps/v8/test/mjsunit/regress/regress-3404.js new file mode 100644 index 00000000000..c4d280e577d --- /dev/null +++ b/deps/v8/test/mjsunit/regress/regress-3404.js @@ -0,0 +1,27 @@ +// Copyright 2014 the V8 project authors. All rights reserved. +// Use of this source code is governed by a BSD-style license that can be +// found in the LICENSE file. + +function testError(error) { + // Reconfigure e.stack to be non-configurable + var desc1 = Object.getOwnPropertyDescriptor(error, "stack"); + Object.defineProperty(error, "stack", + {get: desc1.get, set: desc1.set, configurable: false}); + + var desc2 = Object.getOwnPropertyDescriptor(error, "stack"); + assertFalse(desc2.configurable); + assertEquals(desc1.get, desc2.get); + assertEquals(desc2.get, desc2.get); +} + +function stackOverflow() { + function f() { f(); } + try { f() } catch (e) { return e; } +} + +function referenceError() { + try { g() } catch (e) { return e; } +} + +testError(referenceError()); +testError(stackOverflow()); diff --git a/deps/v8/test/mjsunit/regress/regress-3462.js b/deps/v8/test/mjsunit/regress/regress-3462.js new file mode 100644 index 00000000000..5a3355920b5 --- /dev/null +++ b/deps/v8/test/mjsunit/regress/regress-3462.js @@ -0,0 +1,48 @@ +// Copyright 2014 the V8 project authors. All rights reserved. +// Use of this source code is governed by a BSD-style license that can be +// found in the LICENSE file. 
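+// Annotation (not in the upstream patch): the implicit 'prototype' and
+// 'length' setters must only act on the function or array that owns them,
+// not on plain objects or primitive values that merely inherit them.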
+ + +function TestFunctionPrototypeSetter() { + var f = function() {}; + var o = {__proto__: f}; + o.prototype = 42; + assertEquals(42, o.prototype); + assertTrue(o.hasOwnProperty('prototype')); +} +TestFunctionPrototypeSetter(); + + +function TestFunctionPrototypeSetterOnValue() { + var f = function() {}; + var fp = f.prototype; + Number.prototype.__proto__ = f; + var n = 42; + var o = {}; + n.prototype = o; + assertEquals(fp, n.prototype); + assertEquals(fp, f.prototype); + assertFalse(Number.prototype.hasOwnProperty('prototype')); +} +TestFunctionPrototypeSetterOnValue(); + + +function TestArrayLengthSetter() { + var a = [1]; + var o = {__proto__: a}; + o.length = 2; + assertEquals(2, o.length); + assertEquals(1, a.length); + assertTrue(o.hasOwnProperty('length')); +} +TestArrayLengthSetter(); + + +function TestArrayLengthSetterOnValue() { + Number.prototype.__proto__ = [1]; + var n = 42; + n.length = 2; + assertEquals(1, n.length); + assertFalse(Number.prototype.hasOwnProperty('length')); +} +TestArrayLengthSetterOnValue(); diff --git a/deps/v8/test/mjsunit/regress/regress-3476.js b/deps/v8/test/mjsunit/regress/regress-3476.js new file mode 100644 index 00000000000..f4333dbbfc0 --- /dev/null +++ b/deps/v8/test/mjsunit/regress/regress-3476.js @@ -0,0 +1,24 @@ +// Copyright 2014 the V8 project authors. All rights reserved. +// Use of this source code is governed by a BSD-style license that can be +// found in the LICENSE file. + +// Flags: --allow-natives-syntax + +function MyWrapper(v) { + return { valueOf: function() { return v } }; +} + +function f() { + assertEquals("truex", true + "x"); + assertEquals("truey", true + new String("y")); + assertEquals("truez", true + new MyWrapper("z")); + + assertEquals("xtrue", "x" + true); + assertEquals("ytrue", new String("y") + true); + assertEquals("ztrue", new MyWrapper("z") + true); +} + +f(); +f(); +%OptimizeFunctionOnNextCall(f); +f(); diff --git a/deps/v8/test/mjsunit/regress/regress-370827.js b/deps/v8/test/mjsunit/regress/regress-370827.js new file mode 100644 index 00000000000..5536d5196b5 --- /dev/null +++ b/deps/v8/test/mjsunit/regress/regress-370827.js @@ -0,0 +1,21 @@ +// Copyright 2014 the V8 project authors. All rights reserved. +// Use of this source code is governed by a BSD-style license that can be +// found in the LICENSE file. + +// Flags: --allow-natives-syntax --expose-gc --heap-stats + +function g(dummy, x) { + var start = ""; + if (x) { start = x + " - "; } + start = start + "array length"; +}; + +function f() { + gc(); + g([0.1]); +} + +f(); +%OptimizeFunctionOnNextCall(f); +f(); +f(); diff --git a/deps/v8/test/mjsunit/regress/regress-373283.js b/deps/v8/test/mjsunit/regress/regress-373283.js new file mode 100644 index 00000000000..20cee4d808e --- /dev/null +++ b/deps/v8/test/mjsunit/regress/regress-373283.js @@ -0,0 +1,18 @@ +// Copyright 2014 the V8 project authors. All rights reserved. +// Use of this source code is governed by a BSD-style license that can be +// found in the LICENSE file. 
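+// Annotation (not in the upstream patch): the holey element access below
+// must survive deopting on every call and repeated reoptimization.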
+ +// Flags: --allow-natives-syntax --deopt-every-n-times=1 + +function __f_0() { + var x = []; + x[21] = 1; + x[21] + 0; +} + +for (var i = 0; i < 3; i++) __f_0(); +%OptimizeFunctionOnNextCall(__f_0); +for (var i = 0; i < 10; i++) __f_0(); +%OptimizeFunctionOnNextCall(__f_0); +__f_0(); +%GetScript("foo"); diff --git a/deps/v8/test/mjsunit/regress/regress-377290.js b/deps/v8/test/mjsunit/regress/regress-377290.js new file mode 100644 index 00000000000..23e31e79d95 --- /dev/null +++ b/deps/v8/test/mjsunit/regress/regress-377290.js @@ -0,0 +1,17 @@ +// Copyright 2014 the V8 project authors. All rights reserved. +// Use of this source code is governed by a BSD-style license that can be +// found in the LICENSE file. + +// Flags: --expose-gc + +Object.prototype.__defineGetter__('constructor', function() { throw 42; }); +__v_7 = [ + function() { [].push() }, +]; +for (var __v_6 = 0; __v_6 < 5; ++__v_6) { + for (var __v_8 in __v_7) { + print(__v_8, " -> ", __v_7[__v_8]); + gc(); + try { __v_7[__v_8](); } catch (e) {}; + } +} diff --git a/deps/v8/test/mjsunit/regress/regress-379770.js b/deps/v8/test/mjsunit/regress/regress-379770.js new file mode 100644 index 00000000000..a6653c25912 --- /dev/null +++ b/deps/v8/test/mjsunit/regress/regress-379770.js @@ -0,0 +1,26 @@ +// Copyright 2014 the V8 project authors. All rights reserved. +// Use of this source code is governed by a BSD-style license that can be +// found in the LICENSE file. +// Flags: --allow-natives-syntax --nostress-opt +// Flags: --nouse-osr + +function foo(obj) { + var counter = 1; + for (var i = 0; i < obj.length; i++) { + %OptimizeFunctionOnNextCall(foo, "osr"); + } + counter += obj; + return counter; +} + +function bar() { + var a = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12]; + for (var i = 0; i < 100; i++ ) { + foo(a); + } +} + +try { + bar(); +} catch (e) { +} diff --git a/deps/v8/test/mjsunit/regress/regress-380049.js b/deps/v8/test/mjsunit/regress/regress-380049.js new file mode 100644 index 00000000000..a78626cc546 --- /dev/null +++ b/deps/v8/test/mjsunit/regress/regress-380049.js @@ -0,0 +1,9 @@ +// Copyright 2014 the V8 project authors. All rights reserved. +// Use of this source code is governed by a BSD-style license that can be +// found in the LICENSE file. + +// Flags: --allow-natives-syntax + +function foo(a,b,c) { return arguments; } +var f = foo(false, null, 40); +assertThrows(function() { %ObjectFreeze(f); }); diff --git a/deps/v8/test/mjsunit/regress/regress-380092.js b/deps/v8/test/mjsunit/regress/regress-380092.js new file mode 100644 index 00000000000..fe6b0b7619c --- /dev/null +++ b/deps/v8/test/mjsunit/regress/regress-380092.js @@ -0,0 +1,22 @@ +// Copyright 2014 the V8 project authors. All rights reserved. +// Use of this source code is governed by a BSD-style license that can be +// found in the LICENSE file. + +// Flags: --allow-natives-syntax + +function many_hoist(o, index) { + return o[index + 33554427]; +} + +var obj = {}; +many_hoist(obj, 0); +%OptimizeFunctionOnNextCall(many_hoist); +many_hoist(obj, 5); + +function constant_too_large(o, index) { + return o[index + 1033554433]; +} + +constant_too_large(obj, 0); +%OptimizeFunctionOnNextCall(constant_too_large); +constant_too_large(obj, 5); diff --git a/deps/v8/test/mjsunit/regress/regress-381313.js b/deps/v8/test/mjsunit/regress/regress-381313.js new file mode 100644 index 00000000000..d2b9d7c11d3 --- /dev/null +++ b/deps/v8/test/mjsunit/regress/regress-381313.js @@ -0,0 +1,42 @@ +// Copyright 2014 the V8 project authors. All rights reserved. 
All rights reserved.
+// Use of this source code is governed by a BSD-style license that can be +// found in the LICENSE file. + +// Flags: --allow-natives-syntax + +var g = 0; + +function f(x, deopt) { + var a0 = x; + var a1 = 2 * x; + var a2 = 3 * x; + var a3 = 4 * x; + var a4 = 5 * x; + var a5 = 6 * x; + var a6 = 7 * x; + var a7 = 8 * x; + var a8 = 9 * x; + var a9 = 10 * x; + var a10 = 11 * x; + var a11 = 12 * x; + var a12 = 13 * x; + var a13 = 14 * x; + var a14 = 15 * x; + var a15 = 16 * x; + var a16 = 17 * x; + var a17 = 18 * x; + var a18 = 19 * x; + var a19 = 20 * x; + + g = 1; + + deopt + 0; + + return a0 + a1 + a2 + a3 + a4 + a5 + a6 + a7 + a8 + a9 + + a10 + a11 + a12 + a13 + a14 + a15 + a16 + a17 + a18 + a19; +} + +f(0.5, 0); +f(0.5, 0); +%OptimizeFunctionOnNextCall(f); +print(f(0.5, "")); diff --git a/deps/v8/test/cctest/test-cpu.cc b/deps/v8/test/mjsunit/regress/regress-392114.js similarity index 63% rename from deps/v8/test/cctest/test-cpu.cc rename to deps/v8/test/mjsunit/regress/regress-392114.js index 06966c68c86..e5cf1cde372 100644 --- a/deps/v8/test/cctest/test-cpu.cc +++ b/deps/v8/test/mjsunit/regress/regress-392114.js @@ -1,4 +1,4 @@ -// Copyright 2013 the V8 project authors. All rights reserved. +// Copyright 2014 the V8 project authors. All rights reserved. // Redistribution and use in source and binary forms, with or without // modification, are permitted provided that the following conditions are // met: @@ -25,31 +25,42 @@ // (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE // OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. -#include "v8.h" +// Flags: --expose-debug-as debug --allow-natives-syntax -#include "cctest.h" -#include "cpu.h" +Debug = debug.Debug; -using namespace v8::internal; +function dummy(x) { + return x + 100; +} +function create_closure() { + var f = function(arg) { + if (arg) { %DeoptimizeFunction(f); } + var a = Array(10); + for (var i = 0; i < a.length; i++) { + a[i] = i; + } + } + return f; +} -TEST(FeatureImplications) { - // Test for features implied by other features. - CPU cpu; +var c = create_closure(); +c(); - // ia32 and x64 features - CHECK(!cpu.has_sse() || cpu.has_mmx()); - CHECK(!cpu.has_sse2() || cpu.has_sse()); - CHECK(!cpu.has_sse3() || cpu.has_sse2()); - CHECK(!cpu.has_ssse3() || cpu.has_sse3()); - CHECK(!cpu.has_sse41() || cpu.has_sse3()); - CHECK(!cpu.has_sse42() || cpu.has_sse41()); +// c CallIC state now has custom Array handler installed. - // arm features - CHECK(!cpu.has_vfp3_d32() || cpu.has_vfp3()); -} +// Turn on the debugger. +Debug.setListener(function () {}); +var d = create_closure(); +%OptimizeFunctionOnNextCall(d); +// Thanks to the debugger, we recreate the full code too. We deopt and run +// it, stomping on the unexpected AllocationSite in the type vector slot. +d(true); -TEST(NumberOfProcessorsOnline) { - CHECK_GT(CPU::NumberOfProcessorsOnline(), 0); -} +// CallIC in c misinterprets type vector slot contents as an AllocationSite, +// corrupting the heap. +c(); + +// CallIC MISS - crash due to corruption. +dummy(); diff --git a/deps/v8/test/mjsunit/regress/regress-99167.js b/deps/v8/test/mjsunit/regress/regress-99167.js index 777acf4487e..eac49d12b05 100644 --- a/deps/v8/test/mjsunit/regress/regress-99167.js +++ b/deps/v8/test/mjsunit/regress/regress-99167.js @@ -25,7 +25,7 @@ // (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE // OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. 
-// Flags: --expose-gc --max-new-space-size=2 +// Flags: --expose-gc --max-semi-space-size=1 eval("function Node() { this.a = 1; this.a = 3; }"); new Node; diff --git a/deps/v8/test/mjsunit/regress/regress-cntl-descriptors-enum.js b/deps/v8/test/mjsunit/regress/regress-cntl-descriptors-enum.js index ee72fafc8a7..fd4ac6d6c0f 100644 --- a/deps/v8/test/mjsunit/regress/regress-cntl-descriptors-enum.js +++ b/deps/v8/test/mjsunit/regress/regress-cntl-descriptors-enum.js @@ -30,10 +30,10 @@ DontEnum = 2; var o = {}; -%SetProperty(o, "a", 0, DontEnum); +%AddNamedProperty(o, "a", 0, DontEnum); var o2 = {}; -%SetProperty(o2, "a", 0, DontEnum); +%AddNamedProperty(o2, "a", 0, DontEnum); assertTrue(%HaveSameMap(o, o2)); diff --git a/deps/v8/test/mjsunit/regress/regress-crbug-245480.js b/deps/v8/test/mjsunit/regress/regress-crbug-245480.js index ec8850905bb..43fa6ba3b68 100644 --- a/deps/v8/test/mjsunit/regress/regress-crbug-245480.js +++ b/deps/v8/test/mjsunit/regress/regress-crbug-245480.js @@ -25,25 +25,9 @@ // (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE // OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. -// Flags: --allow-natives-syntax --smi-only-arrays --expose-gc +// Flags: --allow-natives-syntax --expose-gc // Flags: --noalways-opt -// Test element kind of objects. -// Since --smi-only-arrays affects builtins, its default setting at compile -// time sticks if built with snapshot. If --smi-only-arrays is deactivated -// by default, only a no-snapshot build actually has smi-only arrays enabled -// in this test case. Depending on whether smi-only arrays are actually -// enabled, this test takes the appropriate code path to check smi-only arrays. - -// support_smi_only_arrays = %HasFastSmiElements(new Array(1,2,3,4,5,6,7,8)); -support_smi_only_arrays = true; - -if (support_smi_only_arrays) { - print("Tests include smi-only arrays."); -} else { - print("Tests do NOT include smi-only arrays."); -} - function isHoley(obj) { if (%HasFastHoleyElements(obj)) return true; return false; @@ -57,19 +41,17 @@ function assertNotHoley(obj, name_opt) { assertEquals(false, isHoley(obj), name_opt); } -if (support_smi_only_arrays) { - function create_array(arg) { - return new Array(arg); - } - - obj = create_array(0); - assertNotHoley(obj); - create_array(0); - %OptimizeFunctionOnNextCall(create_array); - obj = create_array(10); - assertHoley(obj); +function create_array(arg) { + return new Array(arg); } +obj = create_array(0); +assertNotHoley(obj); +create_array(0); +%OptimizeFunctionOnNextCall(create_array); +obj = create_array(10); +assertHoley(obj); + // The code below would assert in debug or crash in release function f(length) { return new Array(length) diff --git a/deps/v8/test/mjsunit/regress/regress-crbug-350864.js b/deps/v8/test/mjsunit/regress/regress-crbug-350864.js index 8a793cb0a00..510834be3e4 100644 --- a/deps/v8/test/mjsunit/regress/regress-crbug-350864.js +++ b/deps/v8/test/mjsunit/regress/regress-crbug-350864.js @@ -25,8 +25,6 @@ // (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE // OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. -// Flags: --harmony-symbols - var v0 = new WeakMap; var v1 = {}; v0.set(v1, 1); diff --git a/deps/v8/test/mjsunit/regress/regress-crbug-374838.js b/deps/v8/test/mjsunit/regress/regress-crbug-374838.js new file mode 100644 index 00000000000..614b4d9a877 --- /dev/null +++ b/deps/v8/test/mjsunit/regress/regress-crbug-374838.js @@ -0,0 +1,20 @@ +// Copyright 2014 the V8 project authors. 
All rights reserved. +// Use of this source code is governed by a BSD-style license that can be +// found in the LICENSE file. + +// Flags: --allow-natives-syntax + +function foo() { + var a = [0]; + result = 0; + for (var i = 0; i < 4; i++) { + result += a.length; + a.shift(); + } + return result; +} + +assertEquals(1, foo()); +assertEquals(1, foo()); +%OptimizeFunctionOnNextCall(foo); +assertEquals(1, foo()); diff --git a/deps/v8/test/mjsunit/regress/regress-crbug-380512.js b/deps/v8/test/mjsunit/regress/regress-crbug-380512.js new file mode 100644 index 00000000000..af78ba7183a --- /dev/null +++ b/deps/v8/test/mjsunit/regress/regress-crbug-380512.js @@ -0,0 +1,12 @@ +// Copyright 2014 the V8 project authors. All rights reserved. +// Use of this source code is governed by a BSD-style license that can be +// found in the LICENSE file. + +// Flags: --allow-natives-syntax + +function f() { [].lastIndexOf(42); } + +f(); +f(); +%OptimizeFunctionOnNextCall(f); +f(); diff --git a/deps/v8/test/mjsunit/regress/regress-crbug-381534.js b/deps/v8/test/mjsunit/regress/regress-crbug-381534.js new file mode 100644 index 00000000000..2aa39296774 --- /dev/null +++ b/deps/v8/test/mjsunit/regress/regress-crbug-381534.js @@ -0,0 +1,40 @@ +// Copyright 2014 the V8 project authors. All rights reserved. +// Use of this source code is governed by a BSD-style license that can be +// found in the LICENSE file. + +// Flags: --allow-natives-syntax + +var obj = {}; + +function f(v) { + var v1 = -(4/3); + var v2 = 1; + var arr = new Array(+0, true, 0, -0, false, undefined, null, "0", obj, v1, -(4/3), -1.3333333333333, "str", v2, 1, false); + assertEquals(10, arr.lastIndexOf(-(4/3))); + assertEquals(9, arr.indexOf(-(4/3))); + + assertEquals(10, arr.lastIndexOf(v)); + assertEquals(9, arr.indexOf(v)); + + assertEquals(8, arr.lastIndexOf(obj)); + assertEquals(8, arr.indexOf(obj)); +} + +function g(v, x, index) { + var arr = new Array({}, x-1.1, x-2, x-3.1); + assertEquals(index, arr.indexOf(0)); + assertEquals(index, arr.lastIndexOf(0)); + + assertEquals(index, arr.indexOf(v)); + assertEquals(index, arr.lastIndexOf(v)); +} + +f(-(4/3)); +f(-(4/3)); +%OptimizeFunctionOnNextCall(f); +f(-(4/3)); + +g(0, 2, 2); +g(0, 3.1, 3); +%OptimizeFunctionOnNextCall(g); +g(0, 1.1, 1); diff --git a/deps/v8/test/mjsunit/regress/regress-crbug-382513.js b/deps/v8/test/mjsunit/regress/regress-crbug-382513.js new file mode 100644 index 00000000000..59d2dcac72b --- /dev/null +++ b/deps/v8/test/mjsunit/regress/regress-crbug-382513.js @@ -0,0 +1,11 @@ +// Copyright 2014 the V8 project authors. All rights reserved. +// Use of this source code is governed by a BSD-style license that can be +// found in the LICENSE file. + +// Flags: --allow-natives-syntax + +function foo() { return [+0,false].indexOf(-(4/3)); } +foo(); +foo(); +%OptimizeFunctionOnNextCall(foo); +foo(); diff --git a/deps/v8/test/mjsunit/regress/regress-crbug-385002.js b/deps/v8/test/mjsunit/regress/regress-crbug-385002.js new file mode 100644 index 00000000000..34713e27d40 --- /dev/null +++ b/deps/v8/test/mjsunit/regress/regress-crbug-385002.js @@ -0,0 +1,15 @@ +// Copyright 2014 the V8 project authors. All rights reserved. +// Use of this source code is governed by a BSD-style license that can be +// found in the LICENSE file. + +// Flags: --stack-size=200 --allow-natives-syntax + +%Break(); // Schedule an interrupt that does not go away. 
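+// Annotation (not in the upstream patch): with the interrupt still pending,
+// the stack overflow below must surface as a clean RangeError, not a crash.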
+ +function f() { f(); } +assertThrows(f, RangeError); + +var locals = ""; +for (var i = 0; i < 1024; i++) locals += "var v" + i + ";"; +eval("function g() {" + locals + "f();}"); +assertThrows("g()", RangeError); diff --git a/deps/v8/test/mjsunit/regress/regress-crbug-387599.js b/deps/v8/test/mjsunit/regress/regress-crbug-387599.js new file mode 100644 index 00000000000..98750aa9182 --- /dev/null +++ b/deps/v8/test/mjsunit/regress/regress-crbug-387599.js @@ -0,0 +1,19 @@ +// Copyright 2014 the V8 project authors. All rights reserved. +// Use of this source code is governed by a BSD-style license that can be +// found in the LICENSE file. + +// Flags: --allow-natives-syntax --expose-debug-as debug + +Debug = debug.Debug; +Debug.setListener(function() {}); + +function f() { + for (var i = 0; i < 100; i++) { + %OptimizeFunctionOnNextCall(f, "osr"); + } +} + +Debug.setBreakPoint(f, 0, 0); +f(); +f(); +Debug.setListener(null); diff --git a/deps/v8/test/mjsunit/regress/regress-crbug-387627.js b/deps/v8/test/mjsunit/regress/regress-crbug-387627.js new file mode 100644 index 00000000000..5c6389b5f1f --- /dev/null +++ b/deps/v8/test/mjsunit/regress/regress-crbug-387627.js @@ -0,0 +1,13 @@ +// Copyright 2014 the V8 project authors. All rights reserved. +// Use of this source code is governed by a BSD-style license that can be +// found in the LICENSE file. + +// Flags: --allow-natives-syntax + +function f() {} +%FunctionBindArguments(f, {}, undefined, 1); + +f(); +f(); +%OptimizeFunctionOnNextCall(f); +f(); diff --git a/deps/v8/test/mjsunit/regress/regress-crbug-387636.js b/deps/v8/test/mjsunit/regress/regress-crbug-387636.js new file mode 100644 index 00000000000..1e50ace45a2 --- /dev/null +++ b/deps/v8/test/mjsunit/regress/regress-crbug-387636.js @@ -0,0 +1,14 @@ +// Copyright 2014 the V8 project authors. All rights reserved. +// Use of this source code is governed by a BSD-style license that can be +// found in the LICENSE file. + +// Flags: --allow-natives-syntax + +function f() { + [].indexOf(0x40000000); +} + +f(); +f(); +%OptimizeFunctionOnNextCall(f); +f(); diff --git a/deps/v8/test/mjsunit/regress/regress-crbug-390918.js b/deps/v8/test/mjsunit/regress/regress-crbug-390918.js new file mode 100644 index 00000000000..4c202b3a9b2 --- /dev/null +++ b/deps/v8/test/mjsunit/regress/regress-crbug-390918.js @@ -0,0 +1,18 @@ +// Copyright 2014 the V8 project authors. All rights reserved. +// Use of this source code is governed by a BSD-style license that can be +// found in the LICENSE file. + +// Flags: --allow-natives-syntax + +function f(scale) { + var arr = {a: 1.1}; + + for (var i = 0; i < 2; i++) { + arr[2 * scale] = 0; + } +} + +f({}); +f({}); +%OptimizeFunctionOnNextCall(f); +f(1004); diff --git a/deps/v8/test/mjsunit/regress/regress-crbug-390925.js b/deps/v8/test/mjsunit/regress/regress-crbug-390925.js new file mode 100644 index 00000000000..24873df17b8 --- /dev/null +++ b/deps/v8/test/mjsunit/regress/regress-crbug-390925.js @@ -0,0 +1,9 @@ +// Copyright 2014 the V8 project authors. All rights reserved. +// Use of this source code is governed by a BSD-style license that can be +// found in the LICENSE file. 
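+// Annotation (not in the upstream patch): handing a frozen array to this
+// runtime function must throw instead of crashing.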
+ +// Flags: --allow-natives-syntax + +var a = new Array(); +Object.freeze(a); +assertThrows(function() { %LiveEditCheckAndDropActivations(a, true); }); diff --git a/deps/v8/test/mjsunit/regress/regress-crbug-393988.js b/deps/v8/test/mjsunit/regress/regress-crbug-393988.js new file mode 100644 index 00000000000..9543e1e4c4a --- /dev/null +++ b/deps/v8/test/mjsunit/regress/regress-crbug-393988.js @@ -0,0 +1,8 @@ +// Copyright 2014 the V8 project authors. All rights reserved. +// Use of this source code is governed by a BSD-style license that can be +// found in the LICENSE file. + +var o = {}; +Error.captureStackTrace(o); +Object.defineProperty(o, "stack", { value: 1 }); +assertEquals(1, o.stack); diff --git a/deps/v8/test/mjsunit/regress/regress-crbug-401915.js b/deps/v8/test/mjsunit/regress/regress-crbug-401915.js new file mode 100644 index 00000000000..96dce048689 --- /dev/null +++ b/deps/v8/test/mjsunit/regress/regress-crbug-401915.js @@ -0,0 +1,20 @@ +// Copyright 2014 the V8 project authors. All rights reserved. +// Use of this source code is governed by a BSD-style license that can be +// found in the LICENSE file. + +// Flags: --allow-natives-syntax --expose-debug-as debug + +Debug = debug.Debug; +Debug.setListener(function() {}); +Debug.setBreakOnException(); + +try { + try { + %DebugPushPromise(new Promise(function() {})); + } catch (e) { + } + throw new Error(); +} catch (e) { +} + +Debug.setListener(null); diff --git a/deps/v8/test/mjsunit/regress/regress-create-exception.js b/deps/v8/test/mjsunit/regress/regress-create-exception.js index e0553041ac1..440449cf5fd 100644 --- a/deps/v8/test/mjsunit/regress/regress-create-exception.js +++ b/deps/v8/test/mjsunit/regress/regress-create-exception.js @@ -25,7 +25,7 @@ // (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE // OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. -// Flags: --max-new-space-size=2 +// Flags: --max-semi-space-size=1 "use strict"; // Check for GC bug constructing exceptions. diff --git a/deps/v8/test/mjsunit/regress/regress-debug-context-load.js b/deps/v8/test/mjsunit/regress/regress-debug-context-load.js new file mode 100644 index 00000000000..0b3c275f99b --- /dev/null +++ b/deps/v8/test/mjsunit/regress/regress-debug-context-load.js @@ -0,0 +1,8 @@ +// Copyright 2014 the V8 project authors. All rights reserved. +// Use of this source code is governed by a BSD-style license that can be +// found in the LICENSE file. + +// Flags: --expose-debug-as debug + +Debug = debug.Debug; +Debug.setListener(null); diff --git a/deps/v8/test/mjsunit/regress/regress-double-property.js b/deps/v8/test/mjsunit/regress/regress-double-property.js new file mode 100644 index 00000000000..2ddb45b4b66 --- /dev/null +++ b/deps/v8/test/mjsunit/regress/regress-double-property.js @@ -0,0 +1,9 @@ +// Copyright 2014 the V8 project authors. All rights reserved. +// Use of this source code is governed by a BSD-style license that can be +// found in the LICENSE file. 
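+// Annotation (not in the upstream patch): an object literal keyed by the
+// double 0.1 must be constructible even when the property value is undefined.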
+ +function f(a) { + return {0.1: a}; +} + +f(); diff --git a/deps/v8/test/mjsunit/regress/regress-escape-preserve-smi-representation.js b/deps/v8/test/mjsunit/regress/regress-escape-preserve-smi-representation.js index 551147ed55e..fd899d64e23 100644 --- a/deps/v8/test/mjsunit/regress/regress-escape-preserve-smi-representation.js +++ b/deps/v8/test/mjsunit/regress/regress-escape-preserve-smi-representation.js @@ -12,7 +12,7 @@ function deepEquals(a, b) { if (objectClass === "RegExp") { return (a.toString() === b.toString()); } if (objectClass === "Function") return false; if (objectClass === "Array") { - var elementCount = 0; + var elementsCount = 0; if (a.length != b.length) { return false; } for (var i = 0; i < a.length; i++) { if (!deepEquals(a[i], b[i])) return false; @@ -23,12 +23,11 @@ function deepEquals(a, b) { function __f_1(){ - var __v_0 = []; - for(var i=0; i<2; i++){ - var __v_1=[]; - __v_0.push([]) - deepEquals(2, __v_0.length); - } + var __v_0 = []; + for(var i=0; i<2; i++){ + __v_0.push([]) + deepEquals(2, __v_0.length); + } } __f_1(); %OptimizeFunctionOnNextCall(__f_1); diff --git a/deps/v8/test/mjsunit/regress/regress-global-freeze-const.js b/deps/v8/test/mjsunit/regress/regress-freeze-setter.js similarity index 68% rename from deps/v8/test/mjsunit/regress/regress-global-freeze-const.js rename to deps/v8/test/mjsunit/regress/regress-freeze-setter.js index 0b9e1f3ebd4..c5ac98667fe 100644 --- a/deps/v8/test/mjsunit/regress/regress-global-freeze-const.js +++ b/deps/v8/test/mjsunit/regress/regress-freeze-setter.js @@ -2,6 +2,6 @@ // Use of this source code is governed by a BSD-style license that can be // found in the LICENSE file. -__defineSetter__('x', function() { }); +Object.defineProperty(this, 'x', {set: function() { }}); Object.freeze(this); -eval('const x = 1'); +eval('"use strict"; x = 20;'); diff --git a/deps/v8/test/mjsunit/regress/regress-function-constructor-receiver.js b/deps/v8/test/mjsunit/regress/regress-function-constructor-receiver.js new file mode 100644 index 00000000000..f345435ade0 --- /dev/null +++ b/deps/v8/test/mjsunit/regress/regress-function-constructor-receiver.js @@ -0,0 +1,17 @@ +// Copyright 2014 the V8 project authors. All rights reserved. +// Use of this source code is governed by a BSD-style license that can be +// found in the LICENSE file. + +// Return the raw CallSites array. +Error.prepareStackTrace = function (a,b) { return b; }; + +var threw = false; +try { + new Function({toString:0,valueOf:0}); +} catch (e) { + threw = true; + // Ensure that the receiver during "new Function" is the global proxy. + assertEquals(this, e.stack[0].getThis()); +} + +assertTrue(threw); diff --git a/deps/v8/test/mjsunit/regress/regress-mask-array-length.js b/deps/v8/test/mjsunit/regress/regress-mask-array-length.js new file mode 100644 index 00000000000..bd87e7c5db1 --- /dev/null +++ b/deps/v8/test/mjsunit/regress/regress-mask-array-length.js @@ -0,0 +1,10 @@ +// Copyright 2014 the V8 project authors. All rights reserved. +// Use of this source code is governed by a BSD-style license that can be +// found in the LICENSE file. 
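+// Annotation (not in the upstream patch): assigning a non-numeric 'length'
+// through a non-extensible object that inherits from an array must not crash.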
+ +var a = []; +var o = { + __proto__: a +}; +Object.preventExtensions(o); +o.length = 'abc'; diff --git a/deps/v8/test/cctest/test-platform-macos.cc b/deps/v8/test/mjsunit/regress/regress-regexp-nocase.js similarity index 88% rename from deps/v8/test/cctest/test-platform-macos.cc rename to deps/v8/test/mjsunit/regress/regress-regexp-nocase.js index 5bc5f97849a..27637da0913 100644 --- a/deps/v8/test/cctest/test-platform-macos.cc +++ b/deps/v8/test/mjsunit/regress/regress-regexp-nocase.js @@ -1,4 +1,4 @@ -// Copyright 2006-2008 the V8 project authors. All rights reserved. +// Copyright 2014 the V8 project authors. All rights reserved. // Redistribution and use in source and binary forms, with or without // modification, are permitted provided that the following conditions are // met: @@ -24,12 +24,7 @@ // THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT // (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE // OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. -// -// Tests of the TokenLock class from lock.h - -#include <stdlib.h> -#include "v8.h" -#include "cctest.h" +var s = "('')x\nx\uF670"; -using namespace ::v8::internal; +assertEquals(s.match(/\((').*\1\)/i), ["('')", "'"]); diff --git a/deps/v8/test/mjsunit/regress/regress-set-flags-stress-compact.js b/deps/v8/test/mjsunit/regress/regress-set-flags-stress-compact.js new file mode 100644 index 00000000000..5bc59a7e110 --- /dev/null +++ b/deps/v8/test/mjsunit/regress/regress-set-flags-stress-compact.js @@ -0,0 +1,10 @@ +// Copyright 2014 the V8 project authors. All rights reserved. +// Use of this source code is governed by a BSD-style license that can be +// found in the LICENSE file. + +// Flags: --allow-natives-syntax + +%SetFlags("--gc-interval=164 --stress-compaction"); + +var a = []; +for (var i = 0; i < 10000; i++) { a[i * 100] = 0; } diff --git a/deps/v8/test/mjsunit/regress/regress-update-field-type-attributes.js b/deps/v8/test/mjsunit/regress/regress-update-field-type-attributes.js new file mode 100644 index 00000000000..c23d062067e --- /dev/null +++ b/deps/v8/test/mjsunit/regress/regress-update-field-type-attributes.js @@ -0,0 +1,12 @@ +// Copyright 2014 the V8 project authors. All rights reserved. +// Use of this source code is governed by a BSD-style license that can be +// found in the LICENSE file. + +function test(){ + function InnerClass(){} + var container = {field: new InnerClass()}; + return Object.freeze(container); +}; + +assertTrue(Object.isFrozen(test())); +assertTrue(Object.isFrozen(test())); diff --git a/deps/v8/test/mjsunit/runtime-gen/apply.js b/deps/v8/test/mjsunit/runtime-gen/apply.js new file mode 100644 index 00000000000..94c4753cb9a --- /dev/null +++ b/deps/v8/test/mjsunit/runtime-gen/apply.js @@ -0,0 +1,9 @@ +// Copyright 2014 the V8 project authors. All rights reserved. +// AUTO-GENERATED BY tools/generate-runtime-tests.py, DO NOT MODIFY +// Flags: --allow-natives-syntax --harmony --harmony-proxies +var arg0 = function() {}; +var _receiver = new Object(); +var _arguments = new Object(); +var _offset = 1; +var _argc = 1; +%Apply(arg0, _receiver, _arguments, _offset, _argc); diff --git a/deps/v8/test/mjsunit/runtime-gen/arraybuffergetbytelength.js b/deps/v8/test/mjsunit/runtime-gen/arraybuffergetbytelength.js new file mode 100644 index 00000000000..8aff9ac073a --- /dev/null +++ b/deps/v8/test/mjsunit/runtime-gen/arraybuffergetbytelength.js @@ -0,0 +1,5 @@ +// Copyright 2014 the V8 project authors. All rights reserved. 
All rights reserved.
+// AUTO-GENERATED BY tools/generate-runtime-tests.py, DO NOT MODIFY +// Flags: --allow-natives-syntax --harmony --harmony-proxies +var _holder = new ArrayBuffer(8); +%ArrayBufferGetByteLength(_holder); diff --git a/deps/v8/test/mjsunit/runtime-gen/arraybufferinitialize.js b/deps/v8/test/mjsunit/runtime-gen/arraybufferinitialize.js new file mode 100644 index 00000000000..c4520c6a647 --- /dev/null +++ b/deps/v8/test/mjsunit/runtime-gen/arraybufferinitialize.js @@ -0,0 +1,6 @@ +// Copyright 2014 the V8 project authors. All rights reserved. +// AUTO-GENERATED BY tools/generate-runtime-tests.py, DO NOT MODIFY +// Flags: --allow-natives-syntax --harmony --harmony-proxies +var _holder = new ArrayBuffer(8); +var _byteLength = 1.5; +%ArrayBufferInitialize(_holder, _byteLength); diff --git a/deps/v8/test/mjsunit/runtime-gen/arraybufferisview.js b/deps/v8/test/mjsunit/runtime-gen/arraybufferisview.js new file mode 100644 index 00000000000..46cc5ba9957 --- /dev/null +++ b/deps/v8/test/mjsunit/runtime-gen/arraybufferisview.js @@ -0,0 +1,5 @@ +// Copyright 2014 the V8 project authors. All rights reserved. +// AUTO-GENERATED BY tools/generate-runtime-tests.py, DO NOT MODIFY +// Flags: --allow-natives-syntax --harmony --harmony-proxies +var _object = new Object(); +%ArrayBufferIsView(_object); diff --git a/deps/v8/test/mjsunit/runtime-gen/arraybufferneuter.js b/deps/v8/test/mjsunit/runtime-gen/arraybufferneuter.js new file mode 100644 index 00000000000..89e9ee96b7b --- /dev/null +++ b/deps/v8/test/mjsunit/runtime-gen/arraybufferneuter.js @@ -0,0 +1,5 @@ +// Copyright 2014 the V8 project authors. All rights reserved. +// AUTO-GENERATED BY tools/generate-runtime-tests.py, DO NOT MODIFY +// Flags: --allow-natives-syntax --harmony --harmony-proxies +var _array_buffer = new ArrayBuffer(8); +%ArrayBufferNeuter(_array_buffer); diff --git a/deps/v8/test/mjsunit/runtime-gen/arraybuffersliceimpl.js b/deps/v8/test/mjsunit/runtime-gen/arraybuffersliceimpl.js new file mode 100644 index 00000000000..cb02bb069c1 --- /dev/null +++ b/deps/v8/test/mjsunit/runtime-gen/arraybuffersliceimpl.js @@ -0,0 +1,7 @@ +// Copyright 2014 the V8 project authors. All rights reserved. +// AUTO-GENERATED BY tools/generate-runtime-tests.py, DO NOT MODIFY +// Flags: --allow-natives-syntax --harmony --harmony-proxies +var _source = new ArrayBuffer(8); +var _target = new ArrayBuffer(8); +var arg2 = 0; +%ArrayBufferSliceImpl(_source, _target, arg2); diff --git a/deps/v8/test/mjsunit/runtime-gen/arraybufferviewgetbytelength.js b/deps/v8/test/mjsunit/runtime-gen/arraybufferviewgetbytelength.js new file mode 100644 index 00000000000..e32ea0d4e7f --- /dev/null +++ b/deps/v8/test/mjsunit/runtime-gen/arraybufferviewgetbytelength.js @@ -0,0 +1,5 @@ +// Copyright 2014 the V8 project authors. All rights reserved. +// AUTO-GENERATED BY tools/generate-runtime-tests.py, DO NOT MODIFY +// Flags: --allow-natives-syntax --harmony --harmony-proxies +var _holder = new Int32Array(2); +%ArrayBufferViewGetByteLength(_holder); diff --git a/deps/v8/test/mjsunit/runtime-gen/arraybufferviewgetbyteoffset.js b/deps/v8/test/mjsunit/runtime-gen/arraybufferviewgetbyteoffset.js new file mode 100644 index 00000000000..4c64ff206de --- /dev/null +++ b/deps/v8/test/mjsunit/runtime-gen/arraybufferviewgetbyteoffset.js @@ -0,0 +1,5 @@ +// Copyright 2014 the V8 project authors. All rights reserved. 
All rights reserved.
+// AUTO-GENERATED BY tools/generate-runtime-tests.py, DO NOT MODIFY +// Flags: --allow-natives-syntax --harmony --harmony-proxies +var _holder = new Int32Array(2); +%ArrayBufferViewGetByteOffset(_holder); diff --git a/deps/v8/test/mjsunit/runtime-gen/arrayconcat.js b/deps/v8/test/mjsunit/runtime-gen/arrayconcat.js new file mode 100644 index 00000000000..09487a6073a --- /dev/null +++ b/deps/v8/test/mjsunit/runtime-gen/arrayconcat.js @@ -0,0 +1,5 @@ +// Copyright 2014 the V8 project authors. All rights reserved. +// AUTO-GENERATED BY tools/generate-runtime-tests.py, DO NOT MODIFY +// Flags: --allow-natives-syntax --harmony --harmony-proxies +var arg0 = [1, 'a']; +%ArrayConcat(arg0); diff --git a/deps/v8/test/mjsunit/runtime-gen/availablelocalesof.js b/deps/v8/test/mjsunit/runtime-gen/availablelocalesof.js new file mode 100644 index 00000000000..a59c9b077c3 --- /dev/null +++ b/deps/v8/test/mjsunit/runtime-gen/availablelocalesof.js @@ -0,0 +1,5 @@ +// Copyright 2014 the V8 project authors. All rights reserved. +// AUTO-GENERATED BY tools/generate-runtime-tests.py, DO NOT MODIFY +// Flags: --allow-natives-syntax --harmony --harmony-proxies +var _service = "foo"; +%AvailableLocalesOf(_service); diff --git a/deps/v8/test/mjsunit/runtime-gen/basicjsonstringify.js b/deps/v8/test/mjsunit/runtime-gen/basicjsonstringify.js new file mode 100644 index 00000000000..55d197831e4 --- /dev/null +++ b/deps/v8/test/mjsunit/runtime-gen/basicjsonstringify.js @@ -0,0 +1,5 @@ +// Copyright 2014 the V8 project authors. All rights reserved. +// AUTO-GENERATED BY tools/generate-runtime-tests.py, DO NOT MODIFY +// Flags: --allow-natives-syntax --harmony --harmony-proxies +var _object = new Object(); +%BasicJSONStringify(_object); diff --git a/deps/v8/test/mjsunit/runtime-gen/booleanize.js b/deps/v8/test/mjsunit/runtime-gen/booleanize.js new file mode 100644 index 00000000000..8685368e4fd --- /dev/null +++ b/deps/v8/test/mjsunit/runtime-gen/booleanize.js @@ -0,0 +1,6 @@ +// Copyright 2014 the V8 project authors. All rights reserved. +// AUTO-GENERATED BY tools/generate-runtime-tests.py, DO NOT MODIFY +// Flags: --allow-natives-syntax --harmony --harmony-proxies +var _value_raw = new Object(); +var _token_raw = 1; +%Booleanize(_value_raw, _token_raw); diff --git a/deps/v8/test/mjsunit/runtime-gen/boundfunctiongetbindings.js b/deps/v8/test/mjsunit/runtime-gen/boundfunctiongetbindings.js new file mode 100644 index 00000000000..9221d3dd289 --- /dev/null +++ b/deps/v8/test/mjsunit/runtime-gen/boundfunctiongetbindings.js @@ -0,0 +1,5 @@ +// Copyright 2014 the V8 project authors. All rights reserved. +// AUTO-GENERATED BY tools/generate-runtime-tests.py, DO NOT MODIFY +// Flags: --allow-natives-syntax --harmony --harmony-proxies +var _callable = new Object(); +%BoundFunctionGetBindings(_callable); diff --git a/deps/v8/test/mjsunit/runtime-gen/break.js b/deps/v8/test/mjsunit/runtime-gen/break.js new file mode 100644 index 00000000000..4b600d8e3d2 --- /dev/null +++ b/deps/v8/test/mjsunit/runtime-gen/break.js @@ -0,0 +1,4 @@ +// Copyright 2014 the V8 project authors. All rights reserved. 
All rights reserved.
+// AUTO-GENERATED BY tools/generate-runtime-tests.py, DO NOT MODIFY +// Flags: --allow-natives-syntax --harmony --harmony-proxies +%Break(); diff --git a/deps/v8/test/mjsunit/runtime-gen/breakiteratoradopttext.js b/deps/v8/test/mjsunit/runtime-gen/breakiteratoradopttext.js new file mode 100644 index 00000000000..64b6059da35 --- /dev/null +++ b/deps/v8/test/mjsunit/runtime-gen/breakiteratoradopttext.js @@ -0,0 +1,6 @@ +// Copyright 2014 the V8 project authors. All rights reserved. +// AUTO-GENERATED BY tools/generate-runtime-tests.py, DO NOT MODIFY +// Flags: --allow-natives-syntax --harmony --harmony-proxies +var arg0 = %GetImplFromInitializedIntlObject(new Intl.v8BreakIterator()); +var _text = "foo"; +%BreakIteratorAdoptText(arg0, _text); diff --git a/deps/v8/test/mjsunit/runtime-gen/breakiteratorbreaktype.js b/deps/v8/test/mjsunit/runtime-gen/breakiteratorbreaktype.js new file mode 100644 index 00000000000..08cceb87f86 --- /dev/null +++ b/deps/v8/test/mjsunit/runtime-gen/breakiteratorbreaktype.js @@ -0,0 +1,5 @@ +// Copyright 2014 the V8 project authors. All rights reserved. +// AUTO-GENERATED BY tools/generate-runtime-tests.py, DO NOT MODIFY +// Flags: --allow-natives-syntax --harmony --harmony-proxies +var arg0 = %GetImplFromInitializedIntlObject(new Intl.v8BreakIterator()); +%BreakIteratorBreakType(arg0); diff --git a/deps/v8/test/mjsunit/runtime-gen/breakiteratorcurrent.js b/deps/v8/test/mjsunit/runtime-gen/breakiteratorcurrent.js new file mode 100644 index 00000000000..42000a846ca --- /dev/null +++ b/deps/v8/test/mjsunit/runtime-gen/breakiteratorcurrent.js @@ -0,0 +1,5 @@ +// Copyright 2014 the V8 project authors. All rights reserved. +// AUTO-GENERATED BY tools/generate-runtime-tests.py, DO NOT MODIFY +// Flags: --allow-natives-syntax --harmony --harmony-proxies +var arg0 = %GetImplFromInitializedIntlObject(new Intl.v8BreakIterator()); +%BreakIteratorCurrent(arg0); diff --git a/deps/v8/test/mjsunit/runtime-gen/breakiteratorfirst.js b/deps/v8/test/mjsunit/runtime-gen/breakiteratorfirst.js new file mode 100644 index 00000000000..3fad88c9e3a --- /dev/null +++ b/deps/v8/test/mjsunit/runtime-gen/breakiteratorfirst.js @@ -0,0 +1,5 @@ +// Copyright 2014 the V8 project authors. All rights reserved. +// AUTO-GENERATED BY tools/generate-runtime-tests.py, DO NOT MODIFY +// Flags: --allow-natives-syntax --harmony --harmony-proxies +var arg0 = %GetImplFromInitializedIntlObject(new Intl.v8BreakIterator()); +%BreakIteratorFirst(arg0); diff --git a/deps/v8/test/mjsunit/runtime-gen/breakiteratornext.js b/deps/v8/test/mjsunit/runtime-gen/breakiteratornext.js new file mode 100644 index 00000000000..be72ffc1d6f --- /dev/null +++ b/deps/v8/test/mjsunit/runtime-gen/breakiteratornext.js @@ -0,0 +1,5 @@ +// Copyright 2014 the V8 project authors. All rights reserved. +// AUTO-GENERATED BY tools/generate-runtime-tests.py, DO NOT MODIFY +// Flags: --allow-natives-syntax --harmony --harmony-proxies +var arg0 = %GetImplFromInitializedIntlObject(new Intl.v8BreakIterator()); +%BreakIteratorNext(arg0); diff --git a/deps/v8/test/mjsunit/runtime-gen/canonicalizelanguagetag.js b/deps/v8/test/mjsunit/runtime-gen/canonicalizelanguagetag.js new file mode 100644 index 00000000000..45df230a40a --- /dev/null +++ b/deps/v8/test/mjsunit/runtime-gen/canonicalizelanguagetag.js @@ -0,0 +1,5 @@ +// Copyright 2014 the V8 project authors. All rights reserved. 
All rights reserved.
+// AUTO-GENERATED BY tools/generate-runtime-tests.py, DO NOT MODIFY +// Flags: --allow-natives-syntax --harmony --harmony-proxies +var _locale_id_str = "foo"; +%CanonicalizeLanguageTag(_locale_id_str); diff --git a/deps/v8/test/mjsunit/runtime-gen/changebreakonexception.js b/deps/v8/test/mjsunit/runtime-gen/changebreakonexception.js new file mode 100644 index 00000000000..4bc0d43d01b --- /dev/null +++ b/deps/v8/test/mjsunit/runtime-gen/changebreakonexception.js @@ -0,0 +1,6 @@ +// Copyright 2014 the V8 project authors. All rights reserved. +// AUTO-GENERATED BY tools/generate-runtime-tests.py, DO NOT MODIFY +// Flags: --allow-natives-syntax --harmony --harmony-proxies +var _type_arg = 32; +var _enable = true; +%ChangeBreakOnException(_type_arg, _enable); diff --git a/deps/v8/test/mjsunit/runtime-gen/charfromcode.js b/deps/v8/test/mjsunit/runtime-gen/charfromcode.js new file mode 100644 index 00000000000..20823391da1 --- /dev/null +++ b/deps/v8/test/mjsunit/runtime-gen/charfromcode.js @@ -0,0 +1,5 @@ +// Copyright 2014 the V8 project authors. All rights reserved. +// AUTO-GENERATED BY tools/generate-runtime-tests.py, DO NOT MODIFY +// Flags: --allow-natives-syntax --harmony --harmony-proxies +var _code = 32; +%CharFromCode(_code); diff --git a/deps/v8/test/mjsunit/runtime-gen/checkexecutionstate.js b/deps/v8/test/mjsunit/runtime-gen/checkexecutionstate.js new file mode 100644 index 00000000000..7e740c39f6c --- /dev/null +++ b/deps/v8/test/mjsunit/runtime-gen/checkexecutionstate.js @@ -0,0 +1,7 @@ +// Copyright 2014 the V8 project authors. All rights reserved. +// AUTO-GENERATED BY tools/generate-runtime-tests.py, DO NOT MODIFY +// Flags: --allow-natives-syntax --harmony --harmony-proxies +var _break_id = 32; +try { +%CheckExecutionState(_break_id); +} catch(e) {} diff --git a/deps/v8/test/mjsunit/runtime-gen/checkisbootstrapping.js b/deps/v8/test/mjsunit/runtime-gen/checkisbootstrapping.js new file mode 100644 index 00000000000..114b20c1c80 --- /dev/null +++ b/deps/v8/test/mjsunit/runtime-gen/checkisbootstrapping.js @@ -0,0 +1,6 @@ +// Copyright 2014 the V8 project authors. All rights reserved. +// AUTO-GENERATED BY tools/generate-runtime-tests.py, DO NOT MODIFY +// Flags: --allow-natives-syntax --harmony --harmony-proxies +try { +%CheckIsBootstrapping(); +} catch(e) {} diff --git a/deps/v8/test/mjsunit/runtime-gen/clearbreakpoint.js b/deps/v8/test/mjsunit/runtime-gen/clearbreakpoint.js new file mode 100644 index 00000000000..1c11bc8f749 --- /dev/null +++ b/deps/v8/test/mjsunit/runtime-gen/clearbreakpoint.js @@ -0,0 +1,5 @@ +// Copyright 2014 the V8 project authors. All rights reserved. +// AUTO-GENERATED BY tools/generate-runtime-tests.py, DO NOT MODIFY +// Flags: --allow-natives-syntax --harmony --harmony-proxies +var _break_point_object_arg = new Object(); +%ClearBreakPoint(_break_point_object_arg); diff --git a/deps/v8/test/mjsunit/runtime-gen/clearfunctiontypefeedback.js b/deps/v8/test/mjsunit/runtime-gen/clearfunctiontypefeedback.js new file mode 100644 index 00000000000..f42b8da2002 --- /dev/null +++ b/deps/v8/test/mjsunit/runtime-gen/clearfunctiontypefeedback.js @@ -0,0 +1,5 @@ +// Copyright 2014 the V8 project authors. All rights reserved. 
All rights reserved.
+// AUTO-GENERATED BY tools/generate-runtime-tests.py, DO NOT MODIFY +// Flags: --allow-natives-syntax --harmony --harmony-proxies +var _function = function() {}; +%ClearFunctionTypeFeedback(_function); diff --git a/deps/v8/test/mjsunit/runtime-gen/clearstepping.js b/deps/v8/test/mjsunit/runtime-gen/clearstepping.js new file mode 100644 index 00000000000..bfab2cde0b9 --- /dev/null +++ b/deps/v8/test/mjsunit/runtime-gen/clearstepping.js @@ -0,0 +1,4 @@ +// Copyright 2014 the V8 project authors. All rights reserved. +// AUTO-GENERATED BY tools/generate-runtime-tests.py, DO NOT MODIFY +// Flags: --allow-natives-syntax --harmony --harmony-proxies +%ClearStepping(); diff --git a/deps/v8/test/mjsunit/runtime-gen/collectstacktrace.js b/deps/v8/test/mjsunit/runtime-gen/collectstacktrace.js new file mode 100644 index 00000000000..bac9b6a66cd --- /dev/null +++ b/deps/v8/test/mjsunit/runtime-gen/collectstacktrace.js @@ -0,0 +1,6 @@ +// Copyright 2014 the V8 project authors. All rights reserved. +// AUTO-GENERATED BY tools/generate-runtime-tests.py, DO NOT MODIFY +// Flags: --allow-natives-syntax --harmony --harmony-proxies +var _error_object = new Object(); +var _caller = new Object(); +%CollectStackTrace(_error_object, _caller); diff --git a/deps/v8/test/mjsunit/runtime-gen/compilestring.js b/deps/v8/test/mjsunit/runtime-gen/compilestring.js new file mode 100644 index 00000000000..659afcaaefa --- /dev/null +++ b/deps/v8/test/mjsunit/runtime-gen/compilestring.js @@ -0,0 +1,6 @@ +// Copyright 2014 the V8 project authors. All rights reserved. +// AUTO-GENERATED BY tools/generate-runtime-tests.py, DO NOT MODIFY +// Flags: --allow-natives-syntax --harmony --harmony-proxies +var _source = "foo"; +var arg1 = false; +%CompileString(_source, arg1); diff --git a/deps/v8/test/mjsunit/runtime-gen/constructdouble.js b/deps/v8/test/mjsunit/runtime-gen/constructdouble.js new file mode 100644 index 00000000000..9ac3dee9c03 --- /dev/null +++ b/deps/v8/test/mjsunit/runtime-gen/constructdouble.js @@ -0,0 +1,6 @@ +// Copyright 2014 the V8 project authors. All rights reserved. +// AUTO-GENERATED BY tools/generate-runtime-tests.py, DO NOT MODIFY +// Flags: --allow-natives-syntax --harmony --harmony-proxies +var _hi = 32; +var _lo = 32; +%ConstructDouble(_hi, _lo); diff --git a/deps/v8/test/mjsunit/runtime-gen/createbreakiterator.js b/deps/v8/test/mjsunit/runtime-gen/createbreakiterator.js new file mode 100644 index 00000000000..a8750b3399c --- /dev/null +++ b/deps/v8/test/mjsunit/runtime-gen/createbreakiterator.js @@ -0,0 +1,7 @@ +// Copyright 2014 the V8 project authors. All rights reserved. +// AUTO-GENERATED BY tools/generate-runtime-tests.py, DO NOT MODIFY +// Flags: --allow-natives-syntax --harmony --harmony-proxies +var arg0 = 'en-US'; +var arg1 = {type: 'string'}; +var _resolved = new Object(); +%CreateBreakIterator(arg0, arg1, _resolved); diff --git a/deps/v8/test/mjsunit/runtime-gen/createcollator.js b/deps/v8/test/mjsunit/runtime-gen/createcollator.js new file mode 100644 index 00000000000..0d5b18d55dd --- /dev/null +++ b/deps/v8/test/mjsunit/runtime-gen/createcollator.js @@ -0,0 +1,7 @@ +// Copyright 2014 the V8 project authors. All rights reserved. 
All rights reserved.
+// AUTO-GENERATED BY tools/generate-runtime-tests.py, DO NOT MODIFY +// Flags: --allow-natives-syntax --harmony --harmony-proxies +var _locale = "foo"; +var _options = new Object(); +var _resolved = new Object(); +%CreateCollator(_locale, _options, _resolved); diff --git a/deps/v8/test/mjsunit/runtime-gen/createglobalprivatesymbol.js b/deps/v8/test/mjsunit/runtime-gen/createglobalprivatesymbol.js new file mode 100644 index 00000000000..e4968c14f3d --- /dev/null +++ b/deps/v8/test/mjsunit/runtime-gen/createglobalprivatesymbol.js @@ -0,0 +1,5 @@ +// Copyright 2014 the V8 project authors. All rights reserved. +// AUTO-GENERATED BY tools/generate-runtime-tests.py, DO NOT MODIFY +// Flags: --allow-natives-syntax --harmony --harmony-proxies +var _name = "foo"; +%CreateGlobalPrivateSymbol(_name); diff --git a/deps/v8/test/mjsunit/runtime-gen/createjsfunctionproxy.js b/deps/v8/test/mjsunit/runtime-gen/createjsfunctionproxy.js new file mode 100644 index 00000000000..b4e1c31ae8c --- /dev/null +++ b/deps/v8/test/mjsunit/runtime-gen/createjsfunctionproxy.js @@ -0,0 +1,8 @@ +// Copyright 2014 the V8 project authors. All rights reserved. +// AUTO-GENERATED BY tools/generate-runtime-tests.py, DO NOT MODIFY +// Flags: --allow-natives-syntax --harmony --harmony-proxies +var _handler = new Object(); +var arg1 = function() {}; +var _construct_trap = function() {}; +var _prototype = new Object(); +%CreateJSFunctionProxy(_handler, arg1, _construct_trap, _prototype); diff --git a/deps/v8/test/mjsunit/runtime-gen/createjsproxy.js b/deps/v8/test/mjsunit/runtime-gen/createjsproxy.js new file mode 100644 index 00000000000..ecdef60223c --- /dev/null +++ b/deps/v8/test/mjsunit/runtime-gen/createjsproxy.js @@ -0,0 +1,6 @@ +// Copyright 2014 the V8 project authors. All rights reserved. +// AUTO-GENERATED BY tools/generate-runtime-tests.py, DO NOT MODIFY +// Flags: --allow-natives-syntax --harmony --harmony-proxies +var _handler = new Object(); +var _prototype = new Object(); +%CreateJSProxy(_handler, _prototype); diff --git a/deps/v8/test/mjsunit/runtime-gen/createprivateownsymbol.js b/deps/v8/test/mjsunit/runtime-gen/createprivateownsymbol.js new file mode 100644 index 00000000000..74548287c10 --- /dev/null +++ b/deps/v8/test/mjsunit/runtime-gen/createprivateownsymbol.js @@ -0,0 +1,5 @@ +// Copyright 2014 the V8 project authors. All rights reserved. +// AUTO-GENERATED BY tools/generate-runtime-tests.py, DO NOT MODIFY +// Flags: --allow-natives-syntax --harmony --harmony-proxies +var arg0 = "foo"; +%CreatePrivateOwnSymbol(arg0); diff --git a/deps/v8/test/mjsunit/runtime-gen/createprivatesymbol.js b/deps/v8/test/mjsunit/runtime-gen/createprivatesymbol.js new file mode 100644 index 00000000000..bbd99c12b86 --- /dev/null +++ b/deps/v8/test/mjsunit/runtime-gen/createprivatesymbol.js @@ -0,0 +1,5 @@ +// Copyright 2014 the V8 project authors. All rights reserved. +// AUTO-GENERATED BY tools/generate-runtime-tests.py, DO NOT MODIFY +// Flags: --allow-natives-syntax --harmony --harmony-proxies +var arg0 = "foo"; +%CreatePrivateSymbol(arg0); diff --git a/deps/v8/test/mjsunit/runtime-gen/createsymbol.js b/deps/v8/test/mjsunit/runtime-gen/createsymbol.js new file mode 100644 index 00000000000..8452b9c90bb --- /dev/null +++ b/deps/v8/test/mjsunit/runtime-gen/createsymbol.js @@ -0,0 +1,5 @@ +// Copyright 2014 the V8 project authors. All rights reserved. 
All rights reserved.
+// AUTO-GENERATED BY tools/generate-runtime-tests.py, DO NOT MODIFY +// Flags: --allow-natives-syntax --harmony --harmony-proxies +var arg0 = "foo"; +%CreateSymbol(arg0); diff --git a/deps/v8/test/mjsunit/runtime-gen/dataviewgetbuffer.js b/deps/v8/test/mjsunit/runtime-gen/dataviewgetbuffer.js new file mode 100644 index 00000000000..84bab807f37 --- /dev/null +++ b/deps/v8/test/mjsunit/runtime-gen/dataviewgetbuffer.js @@ -0,0 +1,5 @@ +// Copyright 2014 the V8 project authors. All rights reserved. +// AUTO-GENERATED BY tools/generate-runtime-tests.py, DO NOT MODIFY +// Flags: --allow-natives-syntax --harmony --harmony-proxies +var _holder = new DataView(new ArrayBuffer(24)); +%DataViewGetBuffer(_holder); diff --git a/deps/v8/test/mjsunit/runtime-gen/dataviewgetfloat32.js b/deps/v8/test/mjsunit/runtime-gen/dataviewgetfloat32.js new file mode 100644 index 00000000000..57f3c2a5960 --- /dev/null +++ b/deps/v8/test/mjsunit/runtime-gen/dataviewgetfloat32.js @@ -0,0 +1,7 @@ +// Copyright 2014 the V8 project authors. All rights reserved. +// AUTO-GENERATED BY tools/generate-runtime-tests.py, DO NOT MODIFY +// Flags: --allow-natives-syntax --harmony --harmony-proxies +var _holder = new DataView(new ArrayBuffer(24)); +var _offset = 1.5; +var _is_little_endian = true; +%DataViewGetFloat32(_holder, _offset, _is_little_endian); diff --git a/deps/v8/test/mjsunit/runtime-gen/dataviewgetfloat64.js b/deps/v8/test/mjsunit/runtime-gen/dataviewgetfloat64.js new file mode 100644 index 00000000000..7f80c5b0a09 --- /dev/null +++ b/deps/v8/test/mjsunit/runtime-gen/dataviewgetfloat64.js @@ -0,0 +1,7 @@ +// Copyright 2014 the V8 project authors. All rights reserved. +// AUTO-GENERATED BY tools/generate-runtime-tests.py, DO NOT MODIFY +// Flags: --allow-natives-syntax --harmony --harmony-proxies +var _holder = new DataView(new ArrayBuffer(24)); +var _offset = 1.5; +var _is_little_endian = true; +%DataViewGetFloat64(_holder, _offset, _is_little_endian); diff --git a/deps/v8/test/mjsunit/runtime-gen/dataviewgetint16.js b/deps/v8/test/mjsunit/runtime-gen/dataviewgetint16.js new file mode 100644 index 00000000000..e618c1c00a1 --- /dev/null +++ b/deps/v8/test/mjsunit/runtime-gen/dataviewgetint16.js @@ -0,0 +1,7 @@ +// Copyright 2014 the V8 project authors. All rights reserved. +// AUTO-GENERATED BY tools/generate-runtime-tests.py, DO NOT MODIFY +// Flags: --allow-natives-syntax --harmony --harmony-proxies +var _holder = new DataView(new ArrayBuffer(24)); +var _offset = 1.5; +var _is_little_endian = true; +%DataViewGetInt16(_holder, _offset, _is_little_endian); diff --git a/deps/v8/test/mjsunit/runtime-gen/dataviewgetint32.js b/deps/v8/test/mjsunit/runtime-gen/dataviewgetint32.js new file mode 100644 index 00000000000..2395a6dd9cc --- /dev/null +++ b/deps/v8/test/mjsunit/runtime-gen/dataviewgetint32.js @@ -0,0 +1,7 @@ +// Copyright 2014 the V8 project authors. All rights reserved. +// AUTO-GENERATED BY tools/generate-runtime-tests.py, DO NOT MODIFY +// Flags: --allow-natives-syntax --harmony --harmony-proxies +var _holder = new DataView(new ArrayBuffer(24)); +var _offset = 1.5; +var _is_little_endian = true; +%DataViewGetInt32(_holder, _offset, _is_little_endian); diff --git a/deps/v8/test/mjsunit/runtime-gen/dataviewgetint8.js b/deps/v8/test/mjsunit/runtime-gen/dataviewgetint8.js new file mode 100644 index 00000000000..fe92ed7c359 --- /dev/null +++ b/deps/v8/test/mjsunit/runtime-gen/dataviewgetint8.js @@ -0,0 +1,7 @@ +// Copyright 2014 the V8 project authors. All rights reserved. 
All rights reserved.
+// AUTO-GENERATED BY tools/generate-runtime-tests.py, DO NOT MODIFY
+// Flags: --allow-natives-syntax --harmony --harmony-proxies
+var _holder = new DataView(new ArrayBuffer(24));
+var _offset = 1.5;
+var _is_little_endian = true;
+%DataViewGetInt8(_holder, _offset, _is_little_endian);
diff --git a/deps/v8/test/mjsunit/runtime-gen/dataviewgetuint16.js b/deps/v8/test/mjsunit/runtime-gen/dataviewgetuint16.js
new file mode 100644
index 00000000000..50be62b0091
--- /dev/null
+++ b/deps/v8/test/mjsunit/runtime-gen/dataviewgetuint16.js
@@ -0,0 +1,7 @@
+// Copyright 2014 the V8 project authors. All rights reserved.
+// AUTO-GENERATED BY tools/generate-runtime-tests.py, DO NOT MODIFY
+// Flags: --allow-natives-syntax --harmony --harmony-proxies
+var _holder = new DataView(new ArrayBuffer(24));
+var _offset = 1.5;
+var _is_little_endian = true;
+%DataViewGetUint16(_holder, _offset, _is_little_endian);
diff --git a/deps/v8/test/mjsunit/runtime-gen/dataviewgetuint32.js b/deps/v8/test/mjsunit/runtime-gen/dataviewgetuint32.js
new file mode 100644
index 00000000000..2f85aeef8a4
--- /dev/null
+++ b/deps/v8/test/mjsunit/runtime-gen/dataviewgetuint32.js
@@ -0,0 +1,7 @@
+// Copyright 2014 the V8 project authors. All rights reserved.
+// AUTO-GENERATED BY tools/generate-runtime-tests.py, DO NOT MODIFY
+// Flags: --allow-natives-syntax --harmony --harmony-proxies
+var _holder = new DataView(new ArrayBuffer(24));
+var _offset = 1.5;
+var _is_little_endian = true;
+%DataViewGetUint32(_holder, _offset, _is_little_endian);
diff --git a/deps/v8/test/mjsunit/runtime-gen/dataviewgetuint8.js b/deps/v8/test/mjsunit/runtime-gen/dataviewgetuint8.js
new file mode 100644
index 00000000000..6a682e17313
--- /dev/null
+++ b/deps/v8/test/mjsunit/runtime-gen/dataviewgetuint8.js
@@ -0,0 +1,7 @@
+// Copyright 2014 the V8 project authors. All rights reserved.
+// AUTO-GENERATED BY tools/generate-runtime-tests.py, DO NOT MODIFY
+// Flags: --allow-natives-syntax --harmony --harmony-proxies
+var _holder = new DataView(new ArrayBuffer(24));
+var _offset = 1.5;
+var _is_little_endian = true;
+%DataViewGetUint8(_holder, _offset, _is_little_endian);
diff --git a/deps/v8/test/mjsunit/runtime-gen/dataviewinitialize.js b/deps/v8/test/mjsunit/runtime-gen/dataviewinitialize.js
new file mode 100644
index 00000000000..167d531562d
--- /dev/null
+++ b/deps/v8/test/mjsunit/runtime-gen/dataviewinitialize.js
@@ -0,0 +1,8 @@
+// Copyright 2014 the V8 project authors. All rights reserved.
+// AUTO-GENERATED BY tools/generate-runtime-tests.py, DO NOT MODIFY
+// Flags: --allow-natives-syntax --harmony --harmony-proxies
+var _holder = new DataView(new ArrayBuffer(24));
+var _buffer = new ArrayBuffer(8);
+var _byte_offset = 1.5;
+var _byte_length = 1.5;
+%DataViewInitialize(_holder, _buffer, _byte_offset, _byte_length);
diff --git a/deps/v8/test/mjsunit/runtime-gen/dataviewsetfloat32.js b/deps/v8/test/mjsunit/runtime-gen/dataviewsetfloat32.js
new file mode 100644
index 00000000000..46d00afff06
--- /dev/null
+++ b/deps/v8/test/mjsunit/runtime-gen/dataviewsetfloat32.js
@@ -0,0 +1,8 @@
+// Copyright 2014 the V8 project authors. All rights reserved.
+// AUTO-GENERATED BY tools/generate-runtime-tests.py, DO NOT MODIFY
+// Flags: --allow-natives-syntax --harmony --harmony-proxies
+var _holder = new DataView(new ArrayBuffer(24));
+var _offset = 1.5;
+var _value = 1.5;
+var _is_little_endian = true;
+%DataViewSetFloat32(_holder, _offset, _value, _is_little_endian);
diff --git a/deps/v8/test/mjsunit/runtime-gen/dataviewsetfloat64.js b/deps/v8/test/mjsunit/runtime-gen/dataviewsetfloat64.js
new file mode 100644
index 00000000000..c57b514dd0c
--- /dev/null
+++ b/deps/v8/test/mjsunit/runtime-gen/dataviewsetfloat64.js
@@ -0,0 +1,8 @@
+// Copyright 2014 the V8 project authors. All rights reserved.
+// AUTO-GENERATED BY tools/generate-runtime-tests.py, DO NOT MODIFY
+// Flags: --allow-natives-syntax --harmony --harmony-proxies
+var _holder = new DataView(new ArrayBuffer(24));
+var _offset = 1.5;
+var _value = 1.5;
+var _is_little_endian = true;
+%DataViewSetFloat64(_holder, _offset, _value, _is_little_endian);
diff --git a/deps/v8/test/mjsunit/runtime-gen/dataviewsetint16.js b/deps/v8/test/mjsunit/runtime-gen/dataviewsetint16.js
new file mode 100644
index 00000000000..1f45448f694
--- /dev/null
+++ b/deps/v8/test/mjsunit/runtime-gen/dataviewsetint16.js
@@ -0,0 +1,8 @@
+// Copyright 2014 the V8 project authors. All rights reserved.
+// AUTO-GENERATED BY tools/generate-runtime-tests.py, DO NOT MODIFY
+// Flags: --allow-natives-syntax --harmony --harmony-proxies
+var _holder = new DataView(new ArrayBuffer(24));
+var _offset = 1.5;
+var _value = 1.5;
+var _is_little_endian = true;
+%DataViewSetInt16(_holder, _offset, _value, _is_little_endian);
diff --git a/deps/v8/test/mjsunit/runtime-gen/dataviewsetint32.js b/deps/v8/test/mjsunit/runtime-gen/dataviewsetint32.js
new file mode 100644
index 00000000000..837d4f26d56
--- /dev/null
+++ b/deps/v8/test/mjsunit/runtime-gen/dataviewsetint32.js
@@ -0,0 +1,8 @@
+// Copyright 2014 the V8 project authors. All rights reserved.
+// AUTO-GENERATED BY tools/generate-runtime-tests.py, DO NOT MODIFY
+// Flags: --allow-natives-syntax --harmony --harmony-proxies
+var _holder = new DataView(new ArrayBuffer(24));
+var _offset = 1.5;
+var _value = 1.5;
+var _is_little_endian = true;
+%DataViewSetInt32(_holder, _offset, _value, _is_little_endian);
diff --git a/deps/v8/test/mjsunit/runtime-gen/dataviewsetint8.js b/deps/v8/test/mjsunit/runtime-gen/dataviewsetint8.js
new file mode 100644
index 00000000000..725e658ec4f
--- /dev/null
+++ b/deps/v8/test/mjsunit/runtime-gen/dataviewsetint8.js
@@ -0,0 +1,8 @@
+// Copyright 2014 the V8 project authors. All rights reserved.
+// AUTO-GENERATED BY tools/generate-runtime-tests.py, DO NOT MODIFY
+// Flags: --allow-natives-syntax --harmony --harmony-proxies
+var _holder = new DataView(new ArrayBuffer(24));
+var _offset = 1.5;
+var _value = 1.5;
+var _is_little_endian = true;
+%DataViewSetInt8(_holder, _offset, _value, _is_little_endian);
diff --git a/deps/v8/test/mjsunit/runtime-gen/dataviewsetuint16.js b/deps/v8/test/mjsunit/runtime-gen/dataviewsetuint16.js
new file mode 100644
index 00000000000..d1b1a24bcdf
--- /dev/null
+++ b/deps/v8/test/mjsunit/runtime-gen/dataviewsetuint16.js
@@ -0,0 +1,8 @@
+// Copyright 2014 the V8 project authors. All rights reserved.
+// AUTO-GENERATED BY tools/generate-runtime-tests.py, DO NOT MODIFY
+// Flags: --allow-natives-syntax --harmony --harmony-proxies
+var _holder = new DataView(new ArrayBuffer(24));
+var _offset = 1.5;
+var _value = 1.5;
+var _is_little_endian = true;
+%DataViewSetUint16(_holder, _offset, _value, _is_little_endian);
diff --git a/deps/v8/test/mjsunit/runtime-gen/dataviewsetuint32.js b/deps/v8/test/mjsunit/runtime-gen/dataviewsetuint32.js
new file mode 100644
index 00000000000..e46c8f302a8
--- /dev/null
+++ b/deps/v8/test/mjsunit/runtime-gen/dataviewsetuint32.js
@@ -0,0 +1,8 @@
+// Copyright 2014 the V8 project authors. All rights reserved.
+// AUTO-GENERATED BY tools/generate-runtime-tests.py, DO NOT MODIFY
+// Flags: --allow-natives-syntax --harmony --harmony-proxies
+var _holder = new DataView(new ArrayBuffer(24));
+var _offset = 1.5;
+var _value = 1.5;
+var _is_little_endian = true;
+%DataViewSetUint32(_holder, _offset, _value, _is_little_endian);
diff --git a/deps/v8/test/mjsunit/runtime-gen/dataviewsetuint8.js b/deps/v8/test/mjsunit/runtime-gen/dataviewsetuint8.js
new file mode 100644
index 00000000000..6c367230821
--- /dev/null
+++ b/deps/v8/test/mjsunit/runtime-gen/dataviewsetuint8.js
@@ -0,0 +1,8 @@
+// Copyright 2014 the V8 project authors. All rights reserved.
+// AUTO-GENERATED BY tools/generate-runtime-tests.py, DO NOT MODIFY
+// Flags: --allow-natives-syntax --harmony --harmony-proxies
+var _holder = new DataView(new ArrayBuffer(24));
+var _offset = 1.5;
+var _value = 1.5;
+var _is_little_endian = true;
+%DataViewSetUint8(_holder, _offset, _value, _is_little_endian);
diff --git a/deps/v8/test/mjsunit/runtime-gen/datecacheversion.js b/deps/v8/test/mjsunit/runtime-gen/datecacheversion.js
new file mode 100644
index 00000000000..ea56c73c748
--- /dev/null
+++ b/deps/v8/test/mjsunit/runtime-gen/datecacheversion.js
@@ -0,0 +1,4 @@
+// Copyright 2014 the V8 project authors. All rights reserved.
+// AUTO-GENERATED BY tools/generate-runtime-tests.py, DO NOT MODIFY
+// Flags: --allow-natives-syntax --harmony --harmony-proxies
+%DateCacheVersion();
diff --git a/deps/v8/test/mjsunit/runtime-gen/datecurrenttime.js b/deps/v8/test/mjsunit/runtime-gen/datecurrenttime.js
new file mode 100644
index 00000000000..759ebd00382
--- /dev/null
+++ b/deps/v8/test/mjsunit/runtime-gen/datecurrenttime.js
@@ -0,0 +1,4 @@
+// Copyright 2014 the V8 project authors. All rights reserved.
+// AUTO-GENERATED BY tools/generate-runtime-tests.py, DO NOT MODIFY
+// Flags: --allow-natives-syntax --harmony --harmony-proxies
+%DateCurrentTime();
diff --git a/deps/v8/test/mjsunit/runtime-gen/datelocaltimezone.js b/deps/v8/test/mjsunit/runtime-gen/datelocaltimezone.js
new file mode 100644
index 00000000000..bfc1a81c7f6
--- /dev/null
+++ b/deps/v8/test/mjsunit/runtime-gen/datelocaltimezone.js
@@ -0,0 +1,5 @@
+// Copyright 2014 the V8 project authors. All rights reserved.
+// AUTO-GENERATED BY tools/generate-runtime-tests.py, DO NOT MODIFY
+// Flags: --allow-natives-syntax --harmony --harmony-proxies
+var _x = 1.5;
+%DateLocalTimezone(_x);
diff --git a/deps/v8/test/mjsunit/runtime-gen/datemakeday.js b/deps/v8/test/mjsunit/runtime-gen/datemakeday.js
new file mode 100644
index 00000000000..3d2334f51eb
--- /dev/null
+++ b/deps/v8/test/mjsunit/runtime-gen/datemakeday.js
@@ -0,0 +1,6 @@
+// Copyright 2014 the V8 project authors. All rights reserved.
+// AUTO-GENERATED BY tools/generate-runtime-tests.py, DO NOT MODIFY
+// Flags: --allow-natives-syntax --harmony --harmony-proxies
+var _year = 1;
+var _month = 1;
+%DateMakeDay(_year, _month);
diff --git a/deps/v8/test/mjsunit/runtime-gen/dateparsestring.js b/deps/v8/test/mjsunit/runtime-gen/dateparsestring.js
new file mode 100644
index 00000000000..fdf5faa7e96
--- /dev/null
+++ b/deps/v8/test/mjsunit/runtime-gen/dateparsestring.js
@@ -0,0 +1,6 @@
+// Copyright 2014 the V8 project authors. All rights reserved.
+// AUTO-GENERATED BY tools/generate-runtime-tests.py, DO NOT MODIFY
+// Flags: --allow-natives-syntax --harmony --harmony-proxies
+var _str = "foo";
+var arg1 = new Array(8);
+%DateParseString(_str, arg1);
diff --git a/deps/v8/test/mjsunit/runtime-gen/datesetvalue.js b/deps/v8/test/mjsunit/runtime-gen/datesetvalue.js
new file mode 100644
index 00000000000..dac1a364477
--- /dev/null
+++ b/deps/v8/test/mjsunit/runtime-gen/datesetvalue.js
@@ -0,0 +1,7 @@
+// Copyright 2014 the V8 project authors. All rights reserved.
+// AUTO-GENERATED BY tools/generate-runtime-tests.py, DO NOT MODIFY
+// Flags: --allow-natives-syntax --harmony --harmony-proxies
+var _date = new Date();
+var _time = 1.5;
+var _is_utc = 1;
+%DateSetValue(_date, _time, _is_utc);
diff --git a/deps/v8/test/mjsunit/runtime-gen/datetoutc.js b/deps/v8/test/mjsunit/runtime-gen/datetoutc.js
new file mode 100644
index 00000000000..f46644e9514
--- /dev/null
+++ b/deps/v8/test/mjsunit/runtime-gen/datetoutc.js
@@ -0,0 +1,5 @@
+// Copyright 2014 the V8 project authors. All rights reserved.
+// AUTO-GENERATED BY tools/generate-runtime-tests.py, DO NOT MODIFY
+// Flags: --allow-natives-syntax --harmony --harmony-proxies
+var _x = 1.5;
+%DateToUTC(_x);
diff --git a/deps/v8/test/mjsunit/runtime-gen/debugasynctaskevent.js b/deps/v8/test/mjsunit/runtime-gen/debugasynctaskevent.js
new file mode 100644
index 00000000000..ceeaf13774e
--- /dev/null
+++ b/deps/v8/test/mjsunit/runtime-gen/debugasynctaskevent.js
@@ -0,0 +1,5 @@
+// Copyright 2014 the V8 project authors. All rights reserved.
+// AUTO-GENERATED BY tools/generate-runtime-tests.py, DO NOT MODIFY
+// Flags: --allow-natives-syntax --harmony --harmony-proxies
+var _data = new Object();
+%DebugAsyncTaskEvent(_data);
diff --git a/deps/v8/test/mjsunit/runtime-gen/debugbreak.js b/deps/v8/test/mjsunit/runtime-gen/debugbreak.js
new file mode 100644
index 00000000000..68220dfa9b1
--- /dev/null
+++ b/deps/v8/test/mjsunit/runtime-gen/debugbreak.js
@@ -0,0 +1,4 @@
+// Copyright 2014 the V8 project authors. All rights reserved.
+// AUTO-GENERATED BY tools/generate-runtime-tests.py, DO NOT MODIFY
+// Flags: --allow-natives-syntax --harmony --harmony-proxies
+%DebugBreak();
diff --git a/deps/v8/test/mjsunit/runtime-gen/debugcallbacksupportsstepping.js b/deps/v8/test/mjsunit/runtime-gen/debugcallbacksupportsstepping.js
new file mode 100644
index 00000000000..b683be0aa41
--- /dev/null
+++ b/deps/v8/test/mjsunit/runtime-gen/debugcallbacksupportsstepping.js
@@ -0,0 +1,5 @@
+// Copyright 2014 the V8 project authors. All rights reserved.
+// AUTO-GENERATED BY tools/generate-runtime-tests.py, DO NOT MODIFY
+// Flags: --allow-natives-syntax --harmony --harmony-proxies
+var _callback = new Object();
+%DebugCallbackSupportsStepping(_callback);
diff --git a/deps/v8/test/mjsunit/runtime-gen/debugconstructedby.js b/deps/v8/test/mjsunit/runtime-gen/debugconstructedby.js
new file mode 100644
index 00000000000..885034429b5
--- /dev/null
+++ b/deps/v8/test/mjsunit/runtime-gen/debugconstructedby.js
@@ -0,0 +1,6 @@
+// Copyright 2014 the V8 project authors. All rights reserved.
+// AUTO-GENERATED BY tools/generate-runtime-tests.py, DO NOT MODIFY
+// Flags: --allow-natives-syntax --harmony --harmony-proxies
+var _constructor = function() {};
+var _max_references = 32;
+%DebugConstructedBy(_constructor, _max_references);
diff --git a/deps/v8/test/mjsunit/runtime-gen/debugdisassembleconstructor.js b/deps/v8/test/mjsunit/runtime-gen/debugdisassembleconstructor.js
new file mode 100644
index 00000000000..c2faca4f0ce
--- /dev/null
+++ b/deps/v8/test/mjsunit/runtime-gen/debugdisassembleconstructor.js
@@ -0,0 +1,5 @@
+// Copyright 2014 the V8 project authors. All rights reserved.
+// AUTO-GENERATED BY tools/generate-runtime-tests.py, DO NOT MODIFY
+// Flags: --allow-natives-syntax --harmony --harmony-proxies
+var _func = function() {};
+%DebugDisassembleConstructor(_func);
diff --git a/deps/v8/test/mjsunit/runtime-gen/debugdisassemblefunction.js b/deps/v8/test/mjsunit/runtime-gen/debugdisassemblefunction.js
new file mode 100644
index 00000000000..f65886779d5
--- /dev/null
+++ b/deps/v8/test/mjsunit/runtime-gen/debugdisassemblefunction.js
@@ -0,0 +1,5 @@
+// Copyright 2014 the V8 project authors. All rights reserved.
+// AUTO-GENERATED BY tools/generate-runtime-tests.py, DO NOT MODIFY
+// Flags: --allow-natives-syntax --harmony --harmony-proxies
+var _func = function() {};
+%DebugDisassembleFunction(_func);
diff --git a/deps/v8/test/mjsunit/runtime-gen/debugevaluate.js b/deps/v8/test/mjsunit/runtime-gen/debugevaluate.js
new file mode 100644
index 00000000000..60e1e63fd0e
--- /dev/null
+++ b/deps/v8/test/mjsunit/runtime-gen/debugevaluate.js
@@ -0,0 +1,12 @@
+// Copyright 2014 the V8 project authors. All rights reserved.
+// AUTO-GENERATED BY tools/generate-runtime-tests.py, DO NOT MODIFY
+// Flags: --allow-natives-syntax --harmony --harmony-proxies
+var _break_id = 32;
+var _wrapped_id = 1;
+var _inlined_jsframe_index = 32;
+var _source = "foo";
+var _disable_break = true;
+var _context_extension = new Object();
+try {
+%DebugEvaluate(_break_id, _wrapped_id, _inlined_jsframe_index, _source, _disable_break, _context_extension);
+} catch(e) {}
diff --git a/deps/v8/test/mjsunit/runtime-gen/debugevaluateglobal.js b/deps/v8/test/mjsunit/runtime-gen/debugevaluateglobal.js
new file mode 100644
index 00000000000..11411d19924
--- /dev/null
+++ b/deps/v8/test/mjsunit/runtime-gen/debugevaluateglobal.js
@@ -0,0 +1,10 @@
+// Copyright 2014 the V8 project authors. All rights reserved.
+// AUTO-GENERATED BY tools/generate-runtime-tests.py, DO NOT MODIFY
+// Flags: --allow-natives-syntax --harmony --harmony-proxies
+var _break_id = 32;
+var _source = "foo";
+var _disable_break = true;
+var _context_extension = new Object();
+try {
+%DebugEvaluateGlobal(_break_id, _source, _disable_break, _context_extension);
+} catch(e) {}
diff --git a/deps/v8/test/mjsunit/runtime-gen/debuggetproperty.js b/deps/v8/test/mjsunit/runtime-gen/debuggetproperty.js
new file mode 100644
index 00000000000..90109d1dc87
--- /dev/null
+++ b/deps/v8/test/mjsunit/runtime-gen/debuggetproperty.js
@@ -0,0 +1,6 @@
+// Copyright 2014 the V8 project authors. All rights reserved.
+// AUTO-GENERATED BY tools/generate-runtime-tests.py, DO NOT MODIFY
+// Flags: --allow-natives-syntax --harmony --harmony-proxies
+var _obj = new Object();
+var _name = "name";
+%DebugGetProperty(_obj, _name);
diff --git a/deps/v8/test/mjsunit/runtime-gen/debuggetpropertydetails.js b/deps/v8/test/mjsunit/runtime-gen/debuggetpropertydetails.js
new file mode 100644
index 00000000000..0fe2f3104fc
--- /dev/null
+++ b/deps/v8/test/mjsunit/runtime-gen/debuggetpropertydetails.js
@@ -0,0 +1,6 @@
+// Copyright 2014 the V8 project authors. All rights reserved.
+// AUTO-GENERATED BY tools/generate-runtime-tests.py, DO NOT MODIFY
+// Flags: --allow-natives-syntax --harmony --harmony-proxies
+var _obj = new Object();
+var _name = "name";
+%DebugGetPropertyDetails(_obj, _name);
diff --git a/deps/v8/test/mjsunit/runtime-gen/debuggetprototype.js b/deps/v8/test/mjsunit/runtime-gen/debuggetprototype.js
new file mode 100644
index 00000000000..27de855b7b8
--- /dev/null
+++ b/deps/v8/test/mjsunit/runtime-gen/debuggetprototype.js
@@ -0,0 +1,5 @@
+// Copyright 2014 the V8 project authors. All rights reserved.
+// AUTO-GENERATED BY tools/generate-runtime-tests.py, DO NOT MODIFY
+// Flags: --allow-natives-syntax --harmony --harmony-proxies
+var _obj = new Object();
+%DebugGetPrototype(_obj);
diff --git a/deps/v8/test/mjsunit/runtime-gen/debugindexedinterceptorelementvalue.js b/deps/v8/test/mjsunit/runtime-gen/debugindexedinterceptorelementvalue.js
new file mode 100644
index 00000000000..22d24eead9c
--- /dev/null
+++ b/deps/v8/test/mjsunit/runtime-gen/debugindexedinterceptorelementvalue.js
@@ -0,0 +1,8 @@
+// Copyright 2014 the V8 project authors. All rights reserved.
+// AUTO-GENERATED BY tools/generate-runtime-tests.py, DO NOT MODIFY
+// Flags: --allow-natives-syntax --harmony --harmony-proxies
+var _obj = new Object();
+var _index = 32;
+try {
+%DebugIndexedInterceptorElementValue(_obj, _index);
+} catch(e) {}
diff --git a/deps/v8/test/mjsunit/runtime-gen/debugnamedinterceptorpropertyvalue.js b/deps/v8/test/mjsunit/runtime-gen/debugnamedinterceptorpropertyvalue.js
new file mode 100644
index 00000000000..13641d2c2b9
--- /dev/null
+++ b/deps/v8/test/mjsunit/runtime-gen/debugnamedinterceptorpropertyvalue.js
@@ -0,0 +1,8 @@
+// Copyright 2014 the V8 project authors. All rights reserved.
+// AUTO-GENERATED BY tools/generate-runtime-tests.py, DO NOT MODIFY
+// Flags: --allow-natives-syntax --harmony --harmony-proxies
+var _obj = new Object();
+var _name = "name";
+try {
+%DebugNamedInterceptorPropertyValue(_obj, _name);
+} catch(e) {}
diff --git a/deps/v8/test/mjsunit/runtime-gen/debugpoppromise.js b/deps/v8/test/mjsunit/runtime-gen/debugpoppromise.js
new file mode 100644
index 00000000000..9b81b137051
--- /dev/null
+++ b/deps/v8/test/mjsunit/runtime-gen/debugpoppromise.js
@@ -0,0 +1,4 @@
+// Copyright 2014 the V8 project authors. All rights reserved.
+// AUTO-GENERATED BY tools/generate-runtime-tests.py, DO NOT MODIFY
+// Flags: --allow-natives-syntax --harmony --harmony-proxies
+%DebugPopPromise();
diff --git a/deps/v8/test/mjsunit/runtime-gen/debugpreparestepinifstepping.js b/deps/v8/test/mjsunit/runtime-gen/debugpreparestepinifstepping.js
new file mode 100644
index 00000000000..a6061e6f984
--- /dev/null
+++ b/deps/v8/test/mjsunit/runtime-gen/debugpreparestepinifstepping.js
@@ -0,0 +1,5 @@
+// Copyright 2014 the V8 project authors. All rights reserved.
+// AUTO-GENERATED BY tools/generate-runtime-tests.py, DO NOT MODIFY
+// Flags: --allow-natives-syntax --harmony --harmony-proxies
+var _callback = function() {};
+%DebugPrepareStepInIfStepping(_callback);
diff --git a/deps/v8/test/mjsunit/runtime-gen/debugprintscopes.js b/deps/v8/test/mjsunit/runtime-gen/debugprintscopes.js
new file mode 100644
index 00000000000..2f106ddb6a5
--- /dev/null
+++ b/deps/v8/test/mjsunit/runtime-gen/debugprintscopes.js
@@ -0,0 +1,4 @@
+// Copyright 2014 the V8 project authors. All rights reserved.
+// AUTO-GENERATED BY tools/generate-runtime-tests.py, DO NOT MODIFY
+// Flags: --allow-natives-syntax --harmony --harmony-proxies
+%DebugPrintScopes();
diff --git a/deps/v8/test/mjsunit/runtime-gen/debugpromiseevent.js b/deps/v8/test/mjsunit/runtime-gen/debugpromiseevent.js
new file mode 100644
index 00000000000..20ae13c67a3
--- /dev/null
+++ b/deps/v8/test/mjsunit/runtime-gen/debugpromiseevent.js
@@ -0,0 +1,5 @@
+// Copyright 2014 the V8 project authors. All rights reserved.
+// AUTO-GENERATED BY tools/generate-runtime-tests.py, DO NOT MODIFY
+// Flags: --allow-natives-syntax --harmony --harmony-proxies
+var _data = new Object();
+%DebugPromiseEvent(_data);
diff --git a/deps/v8/test/mjsunit/runtime-gen/debugpromiserejectevent.js b/deps/v8/test/mjsunit/runtime-gen/debugpromiserejectevent.js
new file mode 100644
index 00000000000..4e6e6334261
--- /dev/null
+++ b/deps/v8/test/mjsunit/runtime-gen/debugpromiserejectevent.js
@@ -0,0 +1,6 @@
+// Copyright 2014 the V8 project authors. All rights reserved.
+// AUTO-GENERATED BY tools/generate-runtime-tests.py, DO NOT MODIFY
+// Flags: --allow-natives-syntax --harmony --harmony-proxies
+var _promise = new Object();
+var _value = new Object();
+%DebugPromiseRejectEvent(_promise, _value);
diff --git a/deps/v8/test/mjsunit/runtime-gen/debugpropertyattributesfromdetails.js b/deps/v8/test/mjsunit/runtime-gen/debugpropertyattributesfromdetails.js
new file mode 100644
index 00000000000..7802a352429
--- /dev/null
+++ b/deps/v8/test/mjsunit/runtime-gen/debugpropertyattributesfromdetails.js
@@ -0,0 +1,5 @@
+// Copyright 2014 the V8 project authors. All rights reserved.
+// AUTO-GENERATED BY tools/generate-runtime-tests.py, DO NOT MODIFY
+// Flags: --allow-natives-syntax --harmony --harmony-proxies
+var _details = 513;
+%DebugPropertyAttributesFromDetails(_details);
diff --git a/deps/v8/test/mjsunit/runtime-gen/debugpropertyindexfromdetails.js b/deps/v8/test/mjsunit/runtime-gen/debugpropertyindexfromdetails.js
new file mode 100644
index 00000000000..02edeeee244
--- /dev/null
+++ b/deps/v8/test/mjsunit/runtime-gen/debugpropertyindexfromdetails.js
@@ -0,0 +1,5 @@
+// Copyright 2014 the V8 project authors. All rights reserved.
+// AUTO-GENERATED BY tools/generate-runtime-tests.py, DO NOT MODIFY
+// Flags: --allow-natives-syntax --harmony --harmony-proxies
+var _details = 513;
+%DebugPropertyIndexFromDetails(_details);
diff --git a/deps/v8/test/mjsunit/runtime-gen/debugpropertytypefromdetails.js b/deps/v8/test/mjsunit/runtime-gen/debugpropertytypefromdetails.js
new file mode 100644
index 00000000000..551ff2c6214
--- /dev/null
+++ b/deps/v8/test/mjsunit/runtime-gen/debugpropertytypefromdetails.js
@@ -0,0 +1,5 @@
+// Copyright 2014 the V8 project authors. All rights reserved.
+// AUTO-GENERATED BY tools/generate-runtime-tests.py, DO NOT MODIFY
+// Flags: --allow-natives-syntax --harmony --harmony-proxies
+var _details = 513;
+%DebugPropertyTypeFromDetails(_details);
diff --git a/deps/v8/test/mjsunit/runtime-gen/debugpushpromise.js b/deps/v8/test/mjsunit/runtime-gen/debugpushpromise.js
new file mode 100644
index 00000000000..350a61354a5
--- /dev/null
+++ b/deps/v8/test/mjsunit/runtime-gen/debugpushpromise.js
@@ -0,0 +1,5 @@
+// Copyright 2014 the V8 project authors. All rights reserved.
+// AUTO-GENERATED BY tools/generate-runtime-tests.py, DO NOT MODIFY
+// Flags: --allow-natives-syntax --harmony --harmony-proxies
+var _promise = new Object();
+%DebugPushPromise(_promise);
diff --git a/deps/v8/test/mjsunit/runtime-gen/debugreferencedby.js b/deps/v8/test/mjsunit/runtime-gen/debugreferencedby.js
new file mode 100644
index 00000000000..94e1242793c
--- /dev/null
+++ b/deps/v8/test/mjsunit/runtime-gen/debugreferencedby.js
@@ -0,0 +1,7 @@
+// Copyright 2014 the V8 project authors. All rights reserved.
+// AUTO-GENERATED BY tools/generate-runtime-tests.py, DO NOT MODIFY
+// Flags: --allow-natives-syntax --harmony --harmony-proxies
+var _target = new Object();
+var _instance_filter = new Object();
+var _max_references = 32;
+%DebugReferencedBy(_target, _instance_filter, _max_references);
diff --git a/deps/v8/test/mjsunit/runtime-gen/debugtrace.js b/deps/v8/test/mjsunit/runtime-gen/debugtrace.js
new file mode 100644
index 00000000000..2933ad114dc
--- /dev/null
+++ b/deps/v8/test/mjsunit/runtime-gen/debugtrace.js
@@ -0,0 +1,4 @@
+// Copyright 2014 the V8 project authors. All rights reserved.
+// AUTO-GENERATED BY tools/generate-runtime-tests.py, DO NOT MODIFY
+// Flags: --allow-natives-syntax --harmony --harmony-proxies
+%DebugTrace();
diff --git a/deps/v8/test/mjsunit/runtime-gen/defineaccessorpropertyunchecked.js b/deps/v8/test/mjsunit/runtime-gen/defineaccessorpropertyunchecked.js
new file mode 100644
index 00000000000..c6cbb91cc77
--- /dev/null
+++ b/deps/v8/test/mjsunit/runtime-gen/defineaccessorpropertyunchecked.js
@@ -0,0 +1,9 @@
+// Copyright 2014 the V8 project authors. All rights reserved.
+// AUTO-GENERATED BY tools/generate-runtime-tests.py, DO NOT MODIFY
+// Flags: --allow-natives-syntax --harmony --harmony-proxies
+var _obj = new Object();
+var _name = "name";
+var arg2 = function() {};
+var arg3 = function() {};
+var arg4 = 2;
+%DefineAccessorPropertyUnchecked(_obj, _name, arg2, arg3, arg4);
diff --git a/deps/v8/test/mjsunit/runtime-gen/defineapiaccessorproperty.js b/deps/v8/test/mjsunit/runtime-gen/defineapiaccessorproperty.js
new file mode 100644
index 00000000000..856a53129e6
--- /dev/null
+++ b/deps/v8/test/mjsunit/runtime-gen/defineapiaccessorproperty.js
@@ -0,0 +1,9 @@
+// Copyright 2014 the V8 project authors. All rights reserved.
+// AUTO-GENERATED BY tools/generate-runtime-tests.py, DO NOT MODIFY
+// Flags: --allow-natives-syntax --harmony --harmony-proxies
+var _object = new Object();
+var _name = "name";
+var arg2 = undefined;
+var arg3 = undefined;
+var _attribute = 1;
+%DefineApiAccessorProperty(_object, _name, arg2, arg3, _attribute);
diff --git a/deps/v8/test/mjsunit/runtime-gen/definedatapropertyunchecked.js b/deps/v8/test/mjsunit/runtime-gen/definedatapropertyunchecked.js
new file mode 100644
index 00000000000..cb0f07f6006
--- /dev/null
+++ b/deps/v8/test/mjsunit/runtime-gen/definedatapropertyunchecked.js
@@ -0,0 +1,8 @@
+// Copyright 2014 the V8 project authors. All rights reserved.
+// AUTO-GENERATED BY tools/generate-runtime-tests.py, DO NOT MODIFY
+// Flags: --allow-natives-syntax --harmony --harmony-proxies
+var _js_object = new Object();
+var _name = "name";
+var _obj_value = new Object();
+var _unchecked = 1;
+%DefineDataPropertyUnchecked(_js_object, _name, _obj_value, _unchecked);
diff --git a/deps/v8/test/mjsunit/runtime-gen/deleteproperty.js b/deps/v8/test/mjsunit/runtime-gen/deleteproperty.js
new file mode 100644
index 00000000000..66a882b1aba
--- /dev/null
+++ b/deps/v8/test/mjsunit/runtime-gen/deleteproperty.js
@@ -0,0 +1,7 @@
+// Copyright 2014 the V8 project authors. All rights reserved.
+// AUTO-GENERATED BY tools/generate-runtime-tests.py, DO NOT MODIFY
+// Flags: --allow-natives-syntax --harmony --harmony-proxies
+var _object = new Object();
+var _key = "name";
+var _strict_mode = 1;
+%DeleteProperty(_object, _key, _strict_mode);
diff --git a/deps/v8/test/mjsunit/runtime-gen/deoptimizefunction.js b/deps/v8/test/mjsunit/runtime-gen/deoptimizefunction.js
new file mode 100644
index 00000000000..ec5db2ddae2
--- /dev/null
+++ b/deps/v8/test/mjsunit/runtime-gen/deoptimizefunction.js
@@ -0,0 +1,5 @@
+// Copyright 2014 the V8 project authors. All rights reserved.
+// AUTO-GENERATED BY tools/generate-runtime-tests.py, DO NOT MODIFY
+// Flags: --allow-natives-syntax --harmony --harmony-proxies
+var _function = function() {};
+%DeoptimizeFunction(_function);
diff --git a/deps/v8/test/mjsunit/runtime-gen/doublehi.js b/deps/v8/test/mjsunit/runtime-gen/doublehi.js
new file mode 100644
index 00000000000..ac945dcd284
--- /dev/null
+++ b/deps/v8/test/mjsunit/runtime-gen/doublehi.js
@@ -0,0 +1,5 @@
+// Copyright 2014 the V8 project authors. All rights reserved.
+// AUTO-GENERATED BY tools/generate-runtime-tests.py, DO NOT MODIFY
+// Flags: --allow-natives-syntax --harmony --harmony-proxies
+var _x = 1.5;
+%DoubleHi(_x);
diff --git a/deps/v8/test/mjsunit/runtime-gen/doublelo.js b/deps/v8/test/mjsunit/runtime-gen/doublelo.js
new file mode 100644
index 00000000000..42c4c25495e
--- /dev/null
+++ b/deps/v8/test/mjsunit/runtime-gen/doublelo.js
@@ -0,0 +1,5 @@
+// Copyright 2014 the V8 project authors. All rights reserved.
+// AUTO-GENERATED BY tools/generate-runtime-tests.py, DO NOT MODIFY
+// Flags: --allow-natives-syntax --harmony --harmony-proxies
+var _x = 1.5;
+%DoubleLo(_x);
diff --git a/deps/v8/test/mjsunit/runtime-gen/enqueuemicrotask.js b/deps/v8/test/mjsunit/runtime-gen/enqueuemicrotask.js
new file mode 100644
index 00000000000..2f21667613b
--- /dev/null
+++ b/deps/v8/test/mjsunit/runtime-gen/enqueuemicrotask.js
@@ -0,0 +1,5 @@
+// Copyright 2014 the V8 project authors. All rights reserved.
+// AUTO-GENERATED BY tools/generate-runtime-tests.py, DO NOT MODIFY
+// Flags: --allow-natives-syntax --harmony --harmony-proxies
+var _microtask = function() {};
+%EnqueueMicrotask(_microtask);
diff --git a/deps/v8/test/mjsunit/runtime-gen/estimatenumberofelements.js b/deps/v8/test/mjsunit/runtime-gen/estimatenumberofelements.js
new file mode 100644
index 00000000000..cf3b9b606fc
--- /dev/null
+++ b/deps/v8/test/mjsunit/runtime-gen/estimatenumberofelements.js
@@ -0,0 +1,5 @@
+// Copyright 2014 the V8 project authors. All rights reserved.
+// AUTO-GENERATED BY tools/generate-runtime-tests.py, DO NOT MODIFY
+// Flags: --allow-natives-syntax --harmony --harmony-proxies
+var _array = new Array();
+%EstimateNumberOfElements(_array);
diff --git a/deps/v8/test/mjsunit/runtime-gen/executeindebugcontext.js b/deps/v8/test/mjsunit/runtime-gen/executeindebugcontext.js
new file mode 100644
index 00000000000..18bfac9b53e
--- /dev/null
+++ b/deps/v8/test/mjsunit/runtime-gen/executeindebugcontext.js
@@ -0,0 +1,6 @@
+// Copyright 2014 the V8 project authors. All rights reserved.
+// AUTO-GENERATED BY tools/generate-runtime-tests.py, DO NOT MODIFY
+// Flags: --allow-natives-syntax --harmony --harmony-proxies
+var _function = function() {};
+var _without_debugger = true;
+%ExecuteInDebugContext(_function, _without_debugger);
diff --git a/deps/v8/test/mjsunit/runtime-gen/finisharrayprototypesetup.js b/deps/v8/test/mjsunit/runtime-gen/finisharrayprototypesetup.js
new file mode 100644
index 00000000000..e4e8eabab48
--- /dev/null
+++ b/deps/v8/test/mjsunit/runtime-gen/finisharrayprototypesetup.js
@@ -0,0 +1,5 @@
+// Copyright 2014 the V8 project authors. All rights reserved.
+// AUTO-GENERATED BY tools/generate-runtime-tests.py, DO NOT MODIFY
+// Flags: --allow-natives-syntax --harmony --harmony-proxies
+var _prototype = new Array();
+%FinishArrayPrototypeSetup(_prototype);
diff --git a/deps/v8/test/mjsunit/runtime-gen/fix.js b/deps/v8/test/mjsunit/runtime-gen/fix.js
new file mode 100644
index 00000000000..010d2bcb70e
--- /dev/null
+++ b/deps/v8/test/mjsunit/runtime-gen/fix.js
@@ -0,0 +1,5 @@
+// Copyright 2014 the V8 project authors. All rights reserved.
+// AUTO-GENERATED BY tools/generate-runtime-tests.py, DO NOT MODIFY
+// Flags: --allow-natives-syntax --harmony --harmony-proxies
+var _proxy = Proxy.create({});
+%Fix(_proxy);
diff --git a/deps/v8/test/mjsunit/runtime-gen/flattenstring.js b/deps/v8/test/mjsunit/runtime-gen/flattenstring.js
new file mode 100644
index 00000000000..3f0b38d6c86
--- /dev/null
+++ b/deps/v8/test/mjsunit/runtime-gen/flattenstring.js
@@ -0,0 +1,5 @@
+// Copyright 2014 the V8 project authors. All rights reserved.
+// AUTO-GENERATED BY tools/generate-runtime-tests.py, DO NOT MODIFY
+// Flags: --allow-natives-syntax --harmony --harmony-proxies
+var _str = "foo";
+%FlattenString(_str);
diff --git a/deps/v8/test/mjsunit/runtime-gen/functionbindarguments.js b/deps/v8/test/mjsunit/runtime-gen/functionbindarguments.js
new file mode 100644
index 00000000000..4d367162537
--- /dev/null
+++ b/deps/v8/test/mjsunit/runtime-gen/functionbindarguments.js
@@ -0,0 +1,8 @@
+// Copyright 2014 the V8 project authors. All rights reserved.
+// AUTO-GENERATED BY tools/generate-runtime-tests.py, DO NOT MODIFY
+// Flags: --allow-natives-syntax --harmony --harmony-proxies
+var _bound_function = function() {};
+var _bindee = new Object();
+var arg2 = undefined;
+var _new_length = 1.5;
+%FunctionBindArguments(_bound_function, _bindee, arg2, _new_length);
diff --git a/deps/v8/test/mjsunit/runtime-gen/functiongetinferredname.js b/deps/v8/test/mjsunit/runtime-gen/functiongetinferredname.js
new file mode 100644
index 00000000000..8d765007cb1
--- /dev/null
+++ b/deps/v8/test/mjsunit/runtime-gen/functiongetinferredname.js
@@ -0,0 +1,5 @@
+// Copyright 2014 the V8 project authors. All rights reserved.
+// AUTO-GENERATED BY tools/generate-runtime-tests.py, DO NOT MODIFY
+// Flags: --allow-natives-syntax --harmony --harmony-proxies
+var _f = function() {};
+%FunctionGetInferredName(_f);
diff --git a/deps/v8/test/mjsunit/runtime-gen/functiongetname.js b/deps/v8/test/mjsunit/runtime-gen/functiongetname.js
new file mode 100644
index 00000000000..ad23b11a691
--- /dev/null
+++ b/deps/v8/test/mjsunit/runtime-gen/functiongetname.js
@@ -0,0 +1,5 @@
+// Copyright 2014 the V8 project authors. All rights reserved.
+// AUTO-GENERATED BY tools/generate-runtime-tests.py, DO NOT MODIFY
+// Flags: --allow-natives-syntax --harmony --harmony-proxies
+var _f = function() {};
+%FunctionGetName(_f);
diff --git a/deps/v8/test/mjsunit/runtime-gen/functiongetscript.js b/deps/v8/test/mjsunit/runtime-gen/functiongetscript.js
new file mode 100644
index 00000000000..bd4364447ec
--- /dev/null
+++ b/deps/v8/test/mjsunit/runtime-gen/functiongetscript.js
@@ -0,0 +1,5 @@
+// Copyright 2014 the V8 project authors. All rights reserved.
+// AUTO-GENERATED BY tools/generate-runtime-tests.py, DO NOT MODIFY
+// Flags: --allow-natives-syntax --harmony --harmony-proxies
+var _fun = function() {};
+%FunctionGetScript(_fun);
diff --git a/deps/v8/test/mjsunit/runtime-gen/functiongetscriptsourceposition.js b/deps/v8/test/mjsunit/runtime-gen/functiongetscriptsourceposition.js
new file mode 100644
index 00000000000..eb462f96f78
--- /dev/null
+++ b/deps/v8/test/mjsunit/runtime-gen/functiongetscriptsourceposition.js
@@ -0,0 +1,5 @@
+// Copyright 2014 the V8 project authors. All rights reserved.
+// AUTO-GENERATED BY tools/generate-runtime-tests.py, DO NOT MODIFY
+// Flags: --allow-natives-syntax --harmony --harmony-proxies
+var _fun = function() {};
+%FunctionGetScriptSourcePosition(_fun);
diff --git a/deps/v8/test/mjsunit/runtime-gen/functiongetsourcecode.js b/deps/v8/test/mjsunit/runtime-gen/functiongetsourcecode.js
new file mode 100644
index 00000000000..b9de88a15d9
--- /dev/null
+++ b/deps/v8/test/mjsunit/runtime-gen/functiongetsourcecode.js
@@ -0,0 +1,5 @@
+// Copyright 2014 the V8 project authors. All rights reserved.
+// AUTO-GENERATED BY tools/generate-runtime-tests.py, DO NOT MODIFY
+// Flags: --allow-natives-syntax --harmony --harmony-proxies
+var _f = function() {};
+%FunctionGetSourceCode(_f);
diff --git a/deps/v8/test/mjsunit/runtime-gen/functionisapifunction.js b/deps/v8/test/mjsunit/runtime-gen/functionisapifunction.js
new file mode 100644
index 00000000000..7fb8a21e0a5
--- /dev/null
+++ b/deps/v8/test/mjsunit/runtime-gen/functionisapifunction.js
@@ -0,0 +1,5 @@
+// Copyright 2014 the V8 project authors. All rights reserved.
+// AUTO-GENERATED BY tools/generate-runtime-tests.py, DO NOT MODIFY
+// Flags: --allow-natives-syntax --harmony --harmony-proxies
+var _f = function() {};
+%FunctionIsAPIFunction(_f);
diff --git a/deps/v8/test/mjsunit/runtime-gen/functionisarrow.js b/deps/v8/test/mjsunit/runtime-gen/functionisarrow.js
new file mode 100644
index 00000000000..08410b49ddf
--- /dev/null
+++ b/deps/v8/test/mjsunit/runtime-gen/functionisarrow.js
@@ -0,0 +1,5 @@
+// Copyright 2014 the V8 project authors. All rights reserved.
+// AUTO-GENERATED BY tools/generate-runtime-tests.py, DO NOT MODIFY
+// Flags: --allow-natives-syntax --harmony --harmony-proxies
+var arg0 = () => null;
+%FunctionIsArrow(arg0);
diff --git a/deps/v8/test/mjsunit/runtime-gen/functionisbuiltin.js b/deps/v8/test/mjsunit/runtime-gen/functionisbuiltin.js
new file mode 100644
index 00000000000..a8dd6c6a885
--- /dev/null
+++ b/deps/v8/test/mjsunit/runtime-gen/functionisbuiltin.js
@@ -0,0 +1,5 @@
+// Copyright 2014 the V8 project authors. All rights reserved.
+// AUTO-GENERATED BY tools/generate-runtime-tests.py, DO NOT MODIFY
+// Flags: --allow-natives-syntax --harmony --harmony-proxies
+var _f = function() {};
+%FunctionIsBuiltin(_f);
diff --git a/deps/v8/test/mjsunit/runtime-gen/functionisgenerator.js b/deps/v8/test/mjsunit/runtime-gen/functionisgenerator.js
new file mode 100644
index 00000000000..8be6aab2a7c
--- /dev/null
+++ b/deps/v8/test/mjsunit/runtime-gen/functionisgenerator.js
@@ -0,0 +1,5 @@
+// Copyright 2014 the V8 project authors. All rights reserved.
+// AUTO-GENERATED BY tools/generate-runtime-tests.py, DO NOT MODIFY
+// Flags: --allow-natives-syntax --harmony --harmony-proxies
+var _f = function() {};
+%FunctionIsGenerator(_f);
diff --git a/deps/v8/test/mjsunit/runtime-gen/functionmarknameshouldprintasanonymous.js b/deps/v8/test/mjsunit/runtime-gen/functionmarknameshouldprintasanonymous.js
new file mode 100644
index 00000000000..74f18e258c4
--- /dev/null
+++ b/deps/v8/test/mjsunit/runtime-gen/functionmarknameshouldprintasanonymous.js
@@ -0,0 +1,5 @@
+// Copyright 2014 the V8 project authors. All rights reserved.
+// AUTO-GENERATED BY tools/generate-runtime-tests.py, DO NOT MODIFY
+// Flags: --allow-natives-syntax --harmony --harmony-proxies
+var _f = function() {};
+%FunctionMarkNameShouldPrintAsAnonymous(_f);
diff --git a/deps/v8/test/mjsunit/runtime-gen/functionnameshouldprintasanonymous.js b/deps/v8/test/mjsunit/runtime-gen/functionnameshouldprintasanonymous.js
new file mode 100644
index 00000000000..aa5bcddc180
--- /dev/null
+++ b/deps/v8/test/mjsunit/runtime-gen/functionnameshouldprintasanonymous.js
@@ -0,0 +1,5 @@
+// Copyright 2014 the V8 project authors. All rights reserved.
+// AUTO-GENERATED BY tools/generate-runtime-tests.py, DO NOT MODIFY
+// Flags: --allow-natives-syntax --harmony --harmony-proxies
+var _f = function() {};
+%FunctionNameShouldPrintAsAnonymous(_f);
diff --git a/deps/v8/test/mjsunit/runtime-gen/functionremoveprototype.js b/deps/v8/test/mjsunit/runtime-gen/functionremoveprototype.js
new file mode 100644
index 00000000000..a7ec5f52a95
--- /dev/null
+++ b/deps/v8/test/mjsunit/runtime-gen/functionremoveprototype.js
@@ -0,0 +1,5 @@
+// Copyright 2014 the V8 project authors. All rights reserved.
+// AUTO-GENERATED BY tools/generate-runtime-tests.py, DO NOT MODIFY
+// Flags: --allow-natives-syntax --harmony --harmony-proxies
+var _f = function() {};
+%FunctionRemovePrototype(_f);
diff --git a/deps/v8/test/mjsunit/runtime-gen/functionsetinstanceclassname.js b/deps/v8/test/mjsunit/runtime-gen/functionsetinstanceclassname.js
new file mode 100644
index 00000000000..6986a15b1c6
--- /dev/null
+++ b/deps/v8/test/mjsunit/runtime-gen/functionsetinstanceclassname.js
@@ -0,0 +1,6 @@
+// Copyright 2014 the V8 project authors. All rights reserved.
+// AUTO-GENERATED BY tools/generate-runtime-tests.py, DO NOT MODIFY
+// Flags: --allow-natives-syntax --harmony --harmony-proxies
+var _fun = function() {};
+var _name = "foo";
+%FunctionSetInstanceClassName(_fun, _name);
diff --git a/deps/v8/test/mjsunit/runtime-gen/functionsetlength.js b/deps/v8/test/mjsunit/runtime-gen/functionsetlength.js
new file mode 100644
index 00000000000..5582e82cf26
--- /dev/null
+++ b/deps/v8/test/mjsunit/runtime-gen/functionsetlength.js
@@ -0,0 +1,6 @@
+// Copyright 2014 the V8 project authors. All rights reserved.
+// AUTO-GENERATED BY tools/generate-runtime-tests.py, DO NOT MODIFY
+// Flags: --allow-natives-syntax --harmony --harmony-proxies
+var _fun = function() {};
+var _length = 1;
+%FunctionSetLength(_fun, _length);
diff --git a/deps/v8/test/mjsunit/runtime-gen/functionsetname.js b/deps/v8/test/mjsunit/runtime-gen/functionsetname.js
new file mode 100644
index 00000000000..0d44b203172
--- /dev/null
+++ b/deps/v8/test/mjsunit/runtime-gen/functionsetname.js
@@ -0,0 +1,6 @@
+// Copyright 2014 the V8 project authors. All rights reserved.
+// AUTO-GENERATED BY tools/generate-runtime-tests.py, DO NOT MODIFY
+// Flags: --allow-natives-syntax --harmony --harmony-proxies
+var _f = function() {};
+var _name = "foo";
+%FunctionSetName(_f, _name);
diff --git a/deps/v8/test/mjsunit/runtime-gen/functionsetprototype.js b/deps/v8/test/mjsunit/runtime-gen/functionsetprototype.js
new file mode 100644
index 00000000000..eb69ea8f5b1
--- /dev/null
+++ b/deps/v8/test/mjsunit/runtime-gen/functionsetprototype.js
@@ -0,0 +1,6 @@
+// Copyright 2014 the V8 project authors. All rights reserved.
+// AUTO-GENERATED BY tools/generate-runtime-tests.py, DO NOT MODIFY
+// Flags: --allow-natives-syntax --harmony --harmony-proxies
+var _fun = function() {};
+var _value = new Object();
+%FunctionSetPrototype(_fun, _value);
diff --git a/deps/v8/test/mjsunit/runtime-gen/getallscopesdetails.js b/deps/v8/test/mjsunit/runtime-gen/getallscopesdetails.js
new file mode 100644
index 00000000000..97ad7cb5387
--- /dev/null
+++ b/deps/v8/test/mjsunit/runtime-gen/getallscopesdetails.js
@@ -0,0 +1,10 @@
+// Copyright 2014 the V8 project authors. All rights reserved.
+// AUTO-GENERATED BY tools/generate-runtime-tests.py, DO NOT MODIFY
+// Flags: --allow-natives-syntax --harmony --harmony-proxies
+var _break_id = 32;
+var _wrapped_id = 1;
+var _inlined_jsframe_index = 32;
+var _flag = true;
+try {
+%GetAllScopesDetails(_break_id, _wrapped_id, _inlined_jsframe_index, _flag);
+} catch(e) {}
diff --git a/deps/v8/test/mjsunit/runtime-gen/getargumentsproperty.js b/deps/v8/test/mjsunit/runtime-gen/getargumentsproperty.js
new file mode 100644
index 00000000000..646e56be9f0
--- /dev/null
+++ b/deps/v8/test/mjsunit/runtime-gen/getargumentsproperty.js
@@ -0,0 +1,5 @@
+// Copyright 2014 the V8 project authors. All rights reserved.
+// AUTO-GENERATED BY tools/generate-runtime-tests.py, DO NOT MODIFY
+// Flags: --allow-natives-syntax --harmony --harmony-proxies
+var _raw_key = new Object();
+%GetArgumentsProperty(_raw_key);
diff --git a/deps/v8/test/mjsunit/runtime-gen/getarraykeys.js b/deps/v8/test/mjsunit/runtime-gen/getarraykeys.js
new file mode 100644
index 00000000000..341faa69eca
--- /dev/null
+++ b/deps/v8/test/mjsunit/runtime-gen/getarraykeys.js
@@ -0,0 +1,6 @@
+// Copyright 2014 the V8 project authors. All rights reserved.
+// AUTO-GENERATED BY tools/generate-runtime-tests.py, DO NOT MODIFY
+// Flags: --allow-natives-syntax --harmony --harmony-proxies
+var _array = new Object();
+var _length = 32;
+%GetArrayKeys(_array, _length);
diff --git a/deps/v8/test/mjsunit/runtime-gen/getbreaklocations.js b/deps/v8/test/mjsunit/runtime-gen/getbreaklocations.js
new file mode 100644
index 00000000000..d31fa15c513
--- /dev/null
+++ b/deps/v8/test/mjsunit/runtime-gen/getbreaklocations.js
@@ -0,0 +1,6 @@
+// Copyright 2014 the V8 project authors. All rights reserved.
+// AUTO-GENERATED BY tools/generate-runtime-tests.py, DO NOT MODIFY
+// Flags: --allow-natives-syntax --harmony --harmony-proxies
+var _fun = function() {};
+var arg1 = 0;
+%GetBreakLocations(_fun, arg1);
diff --git a/deps/v8/test/mjsunit/runtime-gen/getcalltrap.js b/deps/v8/test/mjsunit/runtime-gen/getcalltrap.js
new file mode 100644
index 00000000000..406af9ffd9e
--- /dev/null
+++ b/deps/v8/test/mjsunit/runtime-gen/getcalltrap.js
@@ -0,0 +1,5 @@
+// Copyright 2014 the V8 project authors. All rights reserved.
+// AUTO-GENERATED BY tools/generate-runtime-tests.py, DO NOT MODIFY
+// Flags: --allow-natives-syntax --harmony --harmony-proxies
+var _proxy = Proxy.createFunction({}, function() {});
+%GetCallTrap(_proxy);
diff --git a/deps/v8/test/mjsunit/runtime-gen/getconstructordelegate.js b/deps/v8/test/mjsunit/runtime-gen/getconstructordelegate.js
new file mode 100644
index 00000000000..6d014156672
--- /dev/null
+++ b/deps/v8/test/mjsunit/runtime-gen/getconstructordelegate.js
@@ -0,0 +1,5 @@
+// Copyright 2014 the V8 project authors. All rights reserved.
+// AUTO-GENERATED BY tools/generate-runtime-tests.py, DO NOT MODIFY
+// Flags: --allow-natives-syntax --harmony --harmony-proxies
+var _object = new Object();
+%GetConstructorDelegate(_object);
diff --git a/deps/v8/test/mjsunit/runtime-gen/getconstructtrap.js b/deps/v8/test/mjsunit/runtime-gen/getconstructtrap.js
new file mode 100644
index 00000000000..116d301eb32
--- /dev/null
+++ b/deps/v8/test/mjsunit/runtime-gen/getconstructtrap.js
@@ -0,0 +1,5 @@
+// Copyright 2014 the V8 project authors. All rights reserved.
+// AUTO-GENERATED BY tools/generate-runtime-tests.py, DO NOT MODIFY
+// Flags: --allow-natives-syntax --harmony --harmony-proxies
+var _proxy = Proxy.createFunction({}, function() {});
+%GetConstructTrap(_proxy);
diff --git a/deps/v8/test/mjsunit/runtime-gen/getdataproperty.js b/deps/v8/test/mjsunit/runtime-gen/getdataproperty.js
new file mode 100644
index 00000000000..59cfba56d94
--- /dev/null
+++ b/deps/v8/test/mjsunit/runtime-gen/getdataproperty.js
@@ -0,0 +1,6 @@
+// Copyright 2014 the V8 project authors. All rights reserved.
+// AUTO-GENERATED BY tools/generate-runtime-tests.py, DO NOT MODIFY
+// Flags: --allow-natives-syntax --harmony --harmony-proxies
+var _object = new Object();
+var _key = "name";
+%GetDataProperty(_object, _key);
diff --git a/deps/v8/test/mjsunit/runtime-gen/getdefaulticulocale.js b/deps/v8/test/mjsunit/runtime-gen/getdefaulticulocale.js
new file mode 100644
index 00000000000..920f2566837
--- /dev/null
+++ b/deps/v8/test/mjsunit/runtime-gen/getdefaulticulocale.js
@@ -0,0 +1,4 @@
+// Copyright 2014 the V8 project authors. All rights reserved.
+// AUTO-GENERATED BY tools/generate-runtime-tests.py, DO NOT MODIFY
+// Flags: --allow-natives-syntax --harmony --harmony-proxies
+%GetDefaultICULocale();
diff --git a/deps/v8/test/mjsunit/runtime-gen/getdefaultreceiver.js b/deps/v8/test/mjsunit/runtime-gen/getdefaultreceiver.js
new file mode 100644
index 00000000000..1d5b1cb44c7
--- /dev/null
+++ b/deps/v8/test/mjsunit/runtime-gen/getdefaultreceiver.js
@@ -0,0 +1,5 @@
+// Copyright 2014 the V8 project authors. All rights reserved.
+// AUTO-GENERATED BY tools/generate-runtime-tests.py, DO NOT MODIFY
+// Flags: --allow-natives-syntax --harmony --harmony-proxies
+var arg0 = function() {};
+%GetDefaultReceiver(arg0);
diff --git a/deps/v8/test/mjsunit/runtime-gen/getframecount.js b/deps/v8/test/mjsunit/runtime-gen/getframecount.js
new file mode 100644
index 00000000000..a958efcd7fa
--- /dev/null
+++ b/deps/v8/test/mjsunit/runtime-gen/getframecount.js
@@ -0,0 +1,7 @@
+// Copyright 2014 the V8 project authors. All rights reserved.
+// AUTO-GENERATED BY tools/generate-runtime-tests.py, DO NOT MODIFY
+// Flags: --allow-natives-syntax --harmony --harmony-proxies
+var _break_id = 32;
+try {
+%GetFrameCount(_break_id);
+} catch(e) {}
diff --git a/deps/v8/test/mjsunit/runtime-gen/getframedetails.js b/deps/v8/test/mjsunit/runtime-gen/getframedetails.js
new file mode 100644
index 00000000000..1138424845f
--- /dev/null
+++ b/deps/v8/test/mjsunit/runtime-gen/getframedetails.js
@@ -0,0 +1,8 @@
+// Copyright 2014 the V8 project authors. All rights reserved.
+// AUTO-GENERATED BY tools/generate-runtime-tests.py, DO NOT MODIFY
+// Flags: --allow-natives-syntax --harmony --harmony-proxies
+var _break_id = 32;
+var _index = 32;
+try {
+%GetFrameDetails(_break_id, _index);
+} catch(e) {}
diff --git a/deps/v8/test/mjsunit/runtime-gen/getfunctioncodepositionfromsource.js b/deps/v8/test/mjsunit/runtime-gen/getfunctioncodepositionfromsource.js
new file mode 100644
index 00000000000..473b263241d
--- /dev/null
+++ b/deps/v8/test/mjsunit/runtime-gen/getfunctioncodepositionfromsource.js
@@ -0,0 +1,6 @@
+// Copyright 2014 the V8 project authors. All rights reserved.
+// AUTO-GENERATED BY tools/generate-runtime-tests.py, DO NOT MODIFY
+// Flags: --allow-natives-syntax --harmony --harmony-proxies
+var _function = function() {};
+var _source_position = 32;
+%GetFunctionCodePositionFromSource(_function, _source_position);
diff --git a/deps/v8/test/mjsunit/runtime-gen/getfunctiondelegate.js b/deps/v8/test/mjsunit/runtime-gen/getfunctiondelegate.js
new file mode 100644
index 00000000000..4d02ec21940
--- /dev/null
+++ b/deps/v8/test/mjsunit/runtime-gen/getfunctiondelegate.js
@@ -0,0 +1,5 @@
+// Copyright 2014 the V8 project authors. All rights reserved.
+// AUTO-GENERATED BY tools/generate-runtime-tests.py, DO NOT MODIFY
+// Flags: --allow-natives-syntax --harmony --harmony-proxies
+var _object = new Object();
+%GetFunctionDelegate(_object);
diff --git a/deps/v8/test/mjsunit/runtime-gen/getfunctionscopecount.js b/deps/v8/test/mjsunit/runtime-gen/getfunctionscopecount.js
new file mode 100644
index 00000000000..fb854cff426
--- /dev/null
+++ b/deps/v8/test/mjsunit/runtime-gen/getfunctionscopecount.js
@@ -0,0 +1,5 @@
+// Copyright 2014 the V8 project authors. All rights reserved.
+// AUTO-GENERATED BY tools/generate-runtime-tests.py, DO NOT MODIFY
+// Flags: --allow-natives-syntax --harmony --harmony-proxies
+var _fun = function() {};
+%GetFunctionScopeCount(_fun);
diff --git a/deps/v8/test/mjsunit/runtime-gen/getfunctionscopedetails.js b/deps/v8/test/mjsunit/runtime-gen/getfunctionscopedetails.js
new file mode 100644
index 00000000000..c24314003a1
--- /dev/null
+++ b/deps/v8/test/mjsunit/runtime-gen/getfunctionscopedetails.js
@@ -0,0 +1,6 @@
+// Copyright 2014 the V8 project authors. All rights reserved.
+// AUTO-GENERATED BY tools/generate-runtime-tests.py, DO NOT MODIFY
+// Flags: --allow-natives-syntax --harmony --harmony-proxies
+var _fun = function() {};
+var _index = 32;
+%GetFunctionScopeDetails(_fun, _index);
diff --git a/deps/v8/test/mjsunit/runtime-gen/gethandler.js b/deps/v8/test/mjsunit/runtime-gen/gethandler.js
new file mode 100644
index 00000000000..ea982cbb516
--- /dev/null
+++ b/deps/v8/test/mjsunit/runtime-gen/gethandler.js
@@ -0,0 +1,5 @@
+// Copyright 2014 the V8 project authors. All rights reserved.
+// AUTO-GENERATED BY tools/generate-runtime-tests.py, DO NOT MODIFY
+// Flags: --allow-natives-syntax --harmony --harmony-proxies
+var _proxy = Proxy.create({});
+%GetHandler(_proxy);
diff --git a/deps/v8/test/mjsunit/runtime-gen/getheapusage.js b/deps/v8/test/mjsunit/runtime-gen/getheapusage.js
new file mode 100644
index 00000000000..cb174b72f26
--- /dev/null
+++ b/deps/v8/test/mjsunit/runtime-gen/getheapusage.js
@@ -0,0 +1,4 @@
+// Copyright 2014 the V8 project authors. All rights reserved.
+// AUTO-GENERATED BY tools/generate-runtime-tests.py, DO NOT MODIFY
+// Flags: --allow-natives-syntax --harmony --harmony-proxies
+%GetHeapUsage();
diff --git a/deps/v8/test/mjsunit/runtime-gen/getimplfrominitializedintlobject.js b/deps/v8/test/mjsunit/runtime-gen/getimplfrominitializedintlobject.js
new file mode 100644
index 00000000000..899ba8859ed
--- /dev/null
+++ b/deps/v8/test/mjsunit/runtime-gen/getimplfrominitializedintlobject.js
@@ -0,0 +1,5 @@
+// Copyright 2014 the V8 project authors. All rights reserved.
+// AUTO-GENERATED BY tools/generate-runtime-tests.py, DO NOT MODIFY
+// Flags: --allow-natives-syntax --harmony --harmony-proxies
+var arg0 = new Intl.NumberFormat('en-US');
+%GetImplFromInitializedIntlObject(arg0);
diff --git a/deps/v8/test/mjsunit/runtime-gen/getindexedinterceptorelementnames.js b/deps/v8/test/mjsunit/runtime-gen/getindexedinterceptorelementnames.js
new file mode 100644
index 00000000000..8a83f0acd66
--- /dev/null
+++ b/deps/v8/test/mjsunit/runtime-gen/getindexedinterceptorelementnames.js
@@ -0,0 +1,5 @@
+// Copyright 2014 the V8 project authors. All rights reserved.
+// AUTO-GENERATED BY tools/generate-runtime-tests.py, DO NOT MODIFY
+// Flags: --allow-natives-syntax --harmony --harmony-proxies
+var _obj = new Object();
+%GetIndexedInterceptorElementNames(_obj);
diff --git a/deps/v8/test/mjsunit/runtime-gen/getinterceptorinfo.js b/deps/v8/test/mjsunit/runtime-gen/getinterceptorinfo.js
new file mode 100644
index 00000000000..b33ba649160
--- /dev/null
+++ b/deps/v8/test/mjsunit/runtime-gen/getinterceptorinfo.js
@@ -0,0 +1,5 @@
+// Copyright 2014 the V8 project authors. All rights reserved.
+// AUTO-GENERATED BY tools/generate-runtime-tests.py, DO NOT MODIFY
+// Flags: --allow-natives-syntax --harmony --harmony-proxies
+var _obj = new Object();
+%GetInterceptorInfo(_obj);
diff --git a/deps/v8/test/mjsunit/runtime-gen/getlanguagetagvariants.js b/deps/v8/test/mjsunit/runtime-gen/getlanguagetagvariants.js
new file mode 100644
index 00000000000..0ecfee522c0
--- /dev/null
+++ b/deps/v8/test/mjsunit/runtime-gen/getlanguagetagvariants.js
@@ -0,0 +1,5 @@
+// Copyright 2014 the V8 project authors. All rights reserved.
+// AUTO-GENERATED BY tools/generate-runtime-tests.py, DO NOT MODIFY
+// Flags: --allow-natives-syntax --harmony --harmony-proxies
+var _input = new Array();
+%GetLanguageTagVariants(_input);
diff --git a/deps/v8/test/mjsunit/runtime-gen/getnamedinterceptorpropertynames.js b/deps/v8/test/mjsunit/runtime-gen/getnamedinterceptorpropertynames.js
new file mode 100644
index 00000000000..0dee531be68
--- /dev/null
+++ b/deps/v8/test/mjsunit/runtime-gen/getnamedinterceptorpropertynames.js
@@ -0,0 +1,5 @@
+// Copyright 2014 the V8 project authors. All rights reserved.
+// AUTO-GENERATED BY tools/generate-runtime-tests.py, DO NOT MODIFY
+// Flags: --allow-natives-syntax --harmony --harmony-proxies
+var _obj = new Object();
+%GetNamedInterceptorPropertyNames(_obj);
diff --git a/deps/v8/test/mjsunit/runtime-gen/getobjectcontextnotifierperformchange.js b/deps/v8/test/mjsunit/runtime-gen/getobjectcontextnotifierperformchange.js
new file mode 100644
index 00000000000..2960acee45d
--- /dev/null
+++ b/deps/v8/test/mjsunit/runtime-gen/getobjectcontextnotifierperformchange.js
@@ -0,0 +1,5 @@
+// Copyright 2014 the V8 project authors. All rights reserved.
+// AUTO-GENERATED BY tools/generate-runtime-tests.py, DO NOT MODIFY
+// Flags: --allow-natives-syntax --harmony --harmony-proxies
+var _object_info = new Object();
+%GetObjectContextNotifierPerformChange(_object_info);
diff --git a/deps/v8/test/mjsunit/runtime-gen/getobjectcontextobjectgetnotifier.js b/deps/v8/test/mjsunit/runtime-gen/getobjectcontextobjectgetnotifier.js
new file mode 100644
index 00000000000..d6a043061ef
--- /dev/null
+++ b/deps/v8/test/mjsunit/runtime-gen/getobjectcontextobjectgetnotifier.js
@@ -0,0 +1,5 @@
+// Copyright 2014 the V8 project authors. All rights reserved.
+// AUTO-GENERATED BY tools/generate-runtime-tests.py, DO NOT MODIFY
+// Flags: --allow-natives-syntax --harmony --harmony-proxies
+var _object = new Object();
+%GetObjectContextObjectGetNotifier(_object);
diff --git a/deps/v8/test/mjsunit/runtime-gen/getobjectcontextobjectobserve.js b/deps/v8/test/mjsunit/runtime-gen/getobjectcontextobjectobserve.js
new file mode 100644
index 00000000000..f1669e7385a
--- /dev/null
+++ b/deps/v8/test/mjsunit/runtime-gen/getobjectcontextobjectobserve.js
@@ -0,0 +1,5 @@
+// Copyright 2014 the V8 project authors. All rights reserved.
+// AUTO-GENERATED BY tools/generate-runtime-tests.py, DO NOT MODIFY
+// Flags: --allow-natives-syntax --harmony --harmony-proxies
+var _object = new Object();
+%GetObjectContextObjectObserve(_object);
diff --git a/deps/v8/test/mjsunit/runtime-gen/getobservationstate.js b/deps/v8/test/mjsunit/runtime-gen/getobservationstate.js
new file mode 100644
index 00000000000..429cdcd91f2
--- /dev/null
+++ b/deps/v8/test/mjsunit/runtime-gen/getobservationstate.js
@@ -0,0 +1,4 @@
+// Copyright 2014 the V8 project authors. All rights reserved.
+// AUTO-GENERATED BY tools/generate-runtime-tests.py, DO NOT MODIFY
+// Flags: --allow-natives-syntax --harmony --harmony-proxies
+%GetObservationState();
diff --git a/deps/v8/test/mjsunit/runtime-gen/getoptimizationcount.js b/deps/v8/test/mjsunit/runtime-gen/getoptimizationcount.js
new file mode 100644
index 00000000000..da1ab9efcce
--- /dev/null
+++ b/deps/v8/test/mjsunit/runtime-gen/getoptimizationcount.js
@@ -0,0 +1,5 @@
+// Copyright 2014 the V8 project authors. All rights reserved.
+// AUTO-GENERATED BY tools/generate-runtime-tests.py, DO NOT MODIFY
+// Flags: --allow-natives-syntax --harmony --harmony-proxies
+var _function = function() {};
+%GetOptimizationCount(_function);
diff --git a/deps/v8/test/mjsunit/runtime-gen/getownelementnames.js b/deps/v8/test/mjsunit/runtime-gen/getownelementnames.js
new file mode 100644
index 00000000000..54d9a698551
--- /dev/null
+++ b/deps/v8/test/mjsunit/runtime-gen/getownelementnames.js
@@ -0,0 +1,5 @@
+// Copyright 2014 the V8 project authors. All rights reserved.
+// AUTO-GENERATED BY tools/generate-runtime-tests.py, DO NOT MODIFY
+// Flags: --allow-natives-syntax --harmony --harmony-proxies
+var _obj = new Object();
+%GetOwnElementNames(_obj);
diff --git a/deps/v8/test/mjsunit/runtime-gen/getownproperty.js b/deps/v8/test/mjsunit/runtime-gen/getownproperty.js
new file mode 100644
index 00000000000..1e5a808f71d
--- /dev/null
+++ b/deps/v8/test/mjsunit/runtime-gen/getownproperty.js
@@ -0,0 +1,6 @@
+// Copyright 2014 the V8 project authors. All rights reserved.
+// AUTO-GENERATED BY tools/generate-runtime-tests.py, DO NOT MODIFY
+// Flags: --allow-natives-syntax --harmony --harmony-proxies
+var _obj = new Object();
+var _name = "name";
+%GetOwnProperty(_obj, _name);
diff --git a/deps/v8/test/mjsunit/runtime-gen/getownpropertynames.js b/deps/v8/test/mjsunit/runtime-gen/getownpropertynames.js
new file mode 100644
index 00000000000..10f7f2c7767
--- /dev/null
+++ b/deps/v8/test/mjsunit/runtime-gen/getownpropertynames.js
@@ -0,0 +1,6 @@
+// Copyright 2014 the V8 project authors. All rights reserved.
+// AUTO-GENERATED BY tools/generate-runtime-tests.py, DO NOT MODIFY
+// Flags: --allow-natives-syntax --harmony --harmony-proxies
+var _obj = new Object();
+var _filter_value = 1;
+%GetOwnPropertyNames(_obj, _filter_value);
diff --git a/deps/v8/test/mjsunit/runtime-gen/getproperty.js b/deps/v8/test/mjsunit/runtime-gen/getproperty.js
new file mode 100644
index 00000000000..569189a3aa4
--- /dev/null
+++ b/deps/v8/test/mjsunit/runtime-gen/getproperty.js
@@ -0,0 +1,6 @@
+// Copyright 2014 the V8 project authors. All rights reserved.
+// AUTO-GENERATED BY tools/generate-runtime-tests.py, DO NOT MODIFY +// Flags: --allow-natives-syntax --harmony --harmony-proxies +var _object = new Object(); +var _key = new Object(); +%GetProperty(_object, _key); diff --git a/deps/v8/test/mjsunit/runtime-gen/getpropertynames.js b/deps/v8/test/mjsunit/runtime-gen/getpropertynames.js new file mode 100644 index 00000000000..ad94eedc9c4 --- /dev/null +++ b/deps/v8/test/mjsunit/runtime-gen/getpropertynames.js @@ -0,0 +1,5 @@ +// Copyright 2014 the V8 project authors. All rights reserved. +// AUTO-GENERATED BY tools/generate-runtime-tests.py, DO NOT MODIFY +// Flags: --allow-natives-syntax --harmony --harmony-proxies +var _object = new Object(); +%GetPropertyNames(_object); diff --git a/deps/v8/test/mjsunit/runtime-gen/getpropertynamesfast.js b/deps/v8/test/mjsunit/runtime-gen/getpropertynamesfast.js new file mode 100644 index 00000000000..c2d14cb6534 --- /dev/null +++ b/deps/v8/test/mjsunit/runtime-gen/getpropertynamesfast.js @@ -0,0 +1,5 @@ +// Copyright 2014 the V8 project authors. All rights reserved. +// AUTO-GENERATED BY tools/generate-runtime-tests.py, DO NOT MODIFY +// Flags: --allow-natives-syntax --harmony --harmony-proxies +var _raw_object = new Object(); +%GetPropertyNamesFast(_raw_object); diff --git a/deps/v8/test/mjsunit/runtime-gen/getprototype.js b/deps/v8/test/mjsunit/runtime-gen/getprototype.js new file mode 100644 index 00000000000..b9ef1f9912b --- /dev/null +++ b/deps/v8/test/mjsunit/runtime-gen/getprototype.js @@ -0,0 +1,5 @@ +// Copyright 2014 the V8 project authors. All rights reserved. +// AUTO-GENERATED BY tools/generate-runtime-tests.py, DO NOT MODIFY +// Flags: --allow-natives-syntax --harmony --harmony-proxies +var _obj = new Object(); +%GetPrototype(_obj); diff --git a/deps/v8/test/mjsunit/runtime-gen/getrootnan.js b/deps/v8/test/mjsunit/runtime-gen/getrootnan.js new file mode 100644 index 00000000000..b6df0fd5fb4 --- /dev/null +++ b/deps/v8/test/mjsunit/runtime-gen/getrootnan.js @@ -0,0 +1,6 @@ +// Copyright 2014 the V8 project authors. All rights reserved. +// AUTO-GENERATED BY tools/generate-runtime-tests.py, DO NOT MODIFY +// Flags: --allow-natives-syntax --harmony --harmony-proxies +try { +%GetRootNaN(); +} catch(e) {} diff --git a/deps/v8/test/mjsunit/runtime-gen/getscopecount.js b/deps/v8/test/mjsunit/runtime-gen/getscopecount.js new file mode 100644 index 00000000000..d53bece37c4 --- /dev/null +++ b/deps/v8/test/mjsunit/runtime-gen/getscopecount.js @@ -0,0 +1,8 @@ +// Copyright 2014 the V8 project authors. All rights reserved. +// AUTO-GENERATED BY tools/generate-runtime-tests.py, DO NOT MODIFY +// Flags: --allow-natives-syntax --harmony --harmony-proxies +var _break_id = 32; +var _wrapped_id = 1; +try { +%GetScopeCount(_break_id, _wrapped_id); +} catch(e) {} diff --git a/deps/v8/test/mjsunit/runtime-gen/getscopedetails.js b/deps/v8/test/mjsunit/runtime-gen/getscopedetails.js new file mode 100644 index 00000000000..4ea28ac73e2 --- /dev/null +++ b/deps/v8/test/mjsunit/runtime-gen/getscopedetails.js @@ -0,0 +1,10 @@ +// Copyright 2014 the V8 project authors. All rights reserved. 
All rights reserved.
+// AUTO-GENERATED BY tools/generate-runtime-tests.py, DO NOT MODIFY +// Flags: --allow-natives-syntax --harmony --harmony-proxies +var _break_id = 32; +var _wrapped_id = 1; +var _inlined_jsframe_index = 32; +var _index = 32; +try { +%GetScopeDetails(_break_id, _wrapped_id, _inlined_jsframe_index, _index); +} catch(e) {} diff --git a/deps/v8/test/mjsunit/runtime-gen/getscript.js b/deps/v8/test/mjsunit/runtime-gen/getscript.js new file mode 100644 index 00000000000..cae0087ccfa --- /dev/null +++ b/deps/v8/test/mjsunit/runtime-gen/getscript.js @@ -0,0 +1,5 @@ +// Copyright 2014 the V8 project authors. All rights reserved. +// AUTO-GENERATED BY tools/generate-runtime-tests.py, DO NOT MODIFY +// Flags: --allow-natives-syntax --harmony --harmony-proxies +var _script_name = "foo"; +%GetScript(_script_name); diff --git a/deps/v8/test/mjsunit/runtime-gen/getstepinpositions.js b/deps/v8/test/mjsunit/runtime-gen/getstepinpositions.js new file mode 100644 index 00000000000..221c586ed42 --- /dev/null +++ b/deps/v8/test/mjsunit/runtime-gen/getstepinpositions.js @@ -0,0 +1,8 @@ +// Copyright 2014 the V8 project authors. All rights reserved. +// AUTO-GENERATED BY tools/generate-runtime-tests.py, DO NOT MODIFY +// Flags: --allow-natives-syntax --harmony --harmony-proxies +var _break_id = 32; +var _wrapped_id = 1; +try { +%GetStepInPositions(_break_id, _wrapped_id); +} catch(e) {} diff --git a/deps/v8/test/mjsunit/runtime-gen/gettemplatefield.js b/deps/v8/test/mjsunit/runtime-gen/gettemplatefield.js new file mode 100644 index 00000000000..16d3824b2dc --- /dev/null +++ b/deps/v8/test/mjsunit/runtime-gen/gettemplatefield.js @@ -0,0 +1,8 @@ +// Copyright 2014 the V8 project authors. All rights reserved. +// AUTO-GENERATED BY tools/generate-runtime-tests.py, DO NOT MODIFY +// Flags: --allow-natives-syntax --harmony --harmony-proxies +var _templ = new Object(); +var _index = 1; +try { +%GetTemplateField(_templ, _index); +} catch(e) {} diff --git a/deps/v8/test/mjsunit/runtime-gen/getthreadcount.js b/deps/v8/test/mjsunit/runtime-gen/getthreadcount.js new file mode 100644 index 00000000000..5037066a7d7 --- /dev/null +++ b/deps/v8/test/mjsunit/runtime-gen/getthreadcount.js @@ -0,0 +1,7 @@ +// Copyright 2014 the V8 project authors. All rights reserved. +// AUTO-GENERATED BY tools/generate-runtime-tests.py, DO NOT MODIFY +// Flags: --allow-natives-syntax --harmony --harmony-proxies +var _break_id = 32; +try { +%GetThreadCount(_break_id); +} catch(e) {} diff --git a/deps/v8/test/mjsunit/runtime-gen/getthreaddetails.js b/deps/v8/test/mjsunit/runtime-gen/getthreaddetails.js new file mode 100644 index 00000000000..6fc0d14ce44 --- /dev/null +++ b/deps/v8/test/mjsunit/runtime-gen/getthreaddetails.js @@ -0,0 +1,8 @@ +// Copyright 2014 the V8 project authors. All rights reserved. +// AUTO-GENERATED BY tools/generate-runtime-tests.py, DO NOT MODIFY +// Flags: --allow-natives-syntax --harmony --harmony-proxies +var _break_id = 32; +var _index = 32; +try { +%GetThreadDetails(_break_id, _index); +} catch(e) {} diff --git a/deps/v8/test/mjsunit/runtime-gen/getv8version.js b/deps/v8/test/mjsunit/runtime-gen/getv8version.js new file mode 100644 index 00000000000..e311eef1396 --- /dev/null +++ b/deps/v8/test/mjsunit/runtime-gen/getv8version.js @@ -0,0 +1,4 @@ +// Copyright 2014 the V8 project authors. All rights reserved. 
All rights reserved.
+// AUTO-GENERATED BY tools/generate-runtime-tests.py, DO NOT MODIFY +// Flags: --allow-natives-syntax --harmony --harmony-proxies +%GetV8Version(); diff --git a/deps/v8/test/mjsunit/runtime-gen/getweakmapentries.js b/deps/v8/test/mjsunit/runtime-gen/getweakmapentries.js new file mode 100644 index 00000000000..ced728d3b5e --- /dev/null +++ b/deps/v8/test/mjsunit/runtime-gen/getweakmapentries.js @@ -0,0 +1,5 @@ +// Copyright 2014 the V8 project authors. All rights reserved. +// AUTO-GENERATED BY tools/generate-runtime-tests.py, DO NOT MODIFY +// Flags: --allow-natives-syntax --harmony --harmony-proxies +var _holder = new WeakMap(); +%GetWeakMapEntries(_holder); diff --git a/deps/v8/test/mjsunit/runtime-gen/getweaksetvalues.js b/deps/v8/test/mjsunit/runtime-gen/getweaksetvalues.js new file mode 100644 index 00000000000..650c947d072 --- /dev/null +++ b/deps/v8/test/mjsunit/runtime-gen/getweaksetvalues.js @@ -0,0 +1,5 @@ +// Copyright 2014 the V8 project authors. All rights reserved. +// AUTO-GENERATED BY tools/generate-runtime-tests.py, DO NOT MODIFY +// Flags: --allow-natives-syntax --harmony --harmony-proxies +var _holder = new WeakMap(); +%GetWeakSetValues(_holder); diff --git a/deps/v8/test/mjsunit/runtime-gen/globalprint.js b/deps/v8/test/mjsunit/runtime-gen/globalprint.js new file mode 100644 index 00000000000..059f08efe25 --- /dev/null +++ b/deps/v8/test/mjsunit/runtime-gen/globalprint.js @@ -0,0 +1,5 @@ +// Copyright 2014 the V8 project authors. All rights reserved. +// AUTO-GENERATED BY tools/generate-runtime-tests.py, DO NOT MODIFY +// Flags: --allow-natives-syntax --harmony --harmony-proxies +var _string = "foo"; +%GlobalPrint(_string); diff --git a/deps/v8/test/mjsunit/runtime-gen/globalproxy.js b/deps/v8/test/mjsunit/runtime-gen/globalproxy.js new file mode 100644 index 00000000000..80e500c8872 --- /dev/null +++ b/deps/v8/test/mjsunit/runtime-gen/globalproxy.js @@ -0,0 +1,5 @@ +// Copyright 2014 the V8 project authors. All rights reserved. +// AUTO-GENERATED BY tools/generate-runtime-tests.py, DO NOT MODIFY +// Flags: --allow-natives-syntax --harmony --harmony-proxies +var _global = new Object(); +%GlobalProxy(_global); diff --git a/deps/v8/test/mjsunit/runtime-gen/haselement.js b/deps/v8/test/mjsunit/runtime-gen/haselement.js new file mode 100644 index 00000000000..3d32ac5f002 --- /dev/null +++ b/deps/v8/test/mjsunit/runtime-gen/haselement.js @@ -0,0 +1,6 @@ +// Copyright 2014 the V8 project authors. All rights reserved. +// AUTO-GENERATED BY tools/generate-runtime-tests.py, DO NOT MODIFY +// Flags: --allow-natives-syntax --harmony --harmony-proxies +var _receiver = new Object(); +var _index = 1; +%HasElement(_receiver, _index); diff --git a/deps/v8/test/mjsunit/runtime-gen/hasownproperty.js b/deps/v8/test/mjsunit/runtime-gen/hasownproperty.js new file mode 100644 index 00000000000..7443bff104e --- /dev/null +++ b/deps/v8/test/mjsunit/runtime-gen/hasownproperty.js @@ -0,0 +1,6 @@ +// Copyright 2014 the V8 project authors. All rights reserved. +// AUTO-GENERATED BY tools/generate-runtime-tests.py, DO NOT MODIFY +// Flags: --allow-natives-syntax --harmony --harmony-proxies +var _object = new Object(); +var _key = "name"; +%HasOwnProperty(_object, _key); diff --git a/deps/v8/test/mjsunit/runtime-gen/hasproperty.js b/deps/v8/test/mjsunit/runtime-gen/hasproperty.js new file mode 100644 index 00000000000..df4de8eb34e --- /dev/null +++ b/deps/v8/test/mjsunit/runtime-gen/hasproperty.js @@ -0,0 +1,6 @@ +// Copyright 2014 the V8 project authors. All rights reserved. 
All rights reserved.
+// AUTO-GENERATED BY tools/generate-runtime-tests.py, DO NOT MODIFY +// Flags: --allow-natives-syntax --harmony --harmony-proxies +var _receiver = new Object(); +var _key = "name"; +%HasProperty(_receiver, _key); diff --git a/deps/v8/test/mjsunit/runtime-gen/havesamemap.js b/deps/v8/test/mjsunit/runtime-gen/havesamemap.js new file mode 100644 index 00000000000..b399d17cb73 --- /dev/null +++ b/deps/v8/test/mjsunit/runtime-gen/havesamemap.js @@ -0,0 +1,6 @@ +// Copyright 2014 the V8 project authors. All rights reserved. +// AUTO-GENERATED BY tools/generate-runtime-tests.py, DO NOT MODIFY +// Flags: --allow-natives-syntax --harmony --harmony-proxies +var _obj1 = new Object(); +var _obj2 = new Object(); +%HaveSameMap(_obj1, _obj2); diff --git a/deps/v8/test/mjsunit/runtime-gen/internalcompare.js b/deps/v8/test/mjsunit/runtime-gen/internalcompare.js new file mode 100644 index 00000000000..95cc006f317 --- /dev/null +++ b/deps/v8/test/mjsunit/runtime-gen/internalcompare.js @@ -0,0 +1,7 @@ +// Copyright 2014 the V8 project authors. All rights reserved. +// AUTO-GENERATED BY tools/generate-runtime-tests.py, DO NOT MODIFY +// Flags: --allow-natives-syntax --harmony --harmony-proxies +var arg0 = %GetImplFromInitializedIntlObject(new Intl.Collator('en-US')); +var _string1 = "foo"; +var _string2 = "foo"; +%InternalCompare(arg0, _string1, _string2); diff --git a/deps/v8/test/mjsunit/runtime-gen/internaldateformat.js b/deps/v8/test/mjsunit/runtime-gen/internaldateformat.js new file mode 100644 index 00000000000..933714e9349 --- /dev/null +++ b/deps/v8/test/mjsunit/runtime-gen/internaldateformat.js @@ -0,0 +1,6 @@ +// Copyright 2014 the V8 project authors. All rights reserved. +// AUTO-GENERATED BY tools/generate-runtime-tests.py, DO NOT MODIFY +// Flags: --allow-natives-syntax --harmony --harmony-proxies +var arg0 = %GetImplFromInitializedIntlObject(new Intl.DateTimeFormat('en-US')); +var _date = new Date(); +%InternalDateFormat(arg0, _date); diff --git a/deps/v8/test/mjsunit/runtime-gen/internaldateparse.js b/deps/v8/test/mjsunit/runtime-gen/internaldateparse.js new file mode 100644 index 00000000000..be8c49a9426 --- /dev/null +++ b/deps/v8/test/mjsunit/runtime-gen/internaldateparse.js @@ -0,0 +1,6 @@ +// Copyright 2014 the V8 project authors. All rights reserved. +// AUTO-GENERATED BY tools/generate-runtime-tests.py, DO NOT MODIFY +// Flags: --allow-natives-syntax --harmony --harmony-proxies +var arg0 = %GetImplFromInitializedIntlObject(new Intl.DateTimeFormat('en-US')); +var _date_string = "foo"; +%InternalDateParse(arg0, _date_string); diff --git a/deps/v8/test/mjsunit/runtime-gen/internalnumberformat.js b/deps/v8/test/mjsunit/runtime-gen/internalnumberformat.js new file mode 100644 index 00000000000..cd21edc2476 --- /dev/null +++ b/deps/v8/test/mjsunit/runtime-gen/internalnumberformat.js @@ -0,0 +1,6 @@ +// Copyright 2014 the V8 project authors. All rights reserved. +// AUTO-GENERATED BY tools/generate-runtime-tests.py, DO NOT MODIFY +// Flags: --allow-natives-syntax --harmony --harmony-proxies +var arg0 = %GetImplFromInitializedIntlObject(new Intl.NumberFormat('en-US')); +var _number = new Object(); +%InternalNumberFormat(arg0, _number); diff --git a/deps/v8/test/mjsunit/runtime-gen/internalnumberparse.js b/deps/v8/test/mjsunit/runtime-gen/internalnumberparse.js new file mode 100644 index 00000000000..cdbd322c4c1 --- /dev/null +++ b/deps/v8/test/mjsunit/runtime-gen/internalnumberparse.js @@ -0,0 +1,6 @@ +// Copyright 2014 the V8 project authors. All rights reserved. 
All rights reserved.
+// AUTO-GENERATED BY tools/generate-runtime-tests.py, DO NOT MODIFY +// Flags: --allow-natives-syntax --harmony --harmony-proxies +var arg0 = %GetImplFromInitializedIntlObject(new Intl.NumberFormat('en-US')); +var _number_string = "foo"; +%InternalNumberParse(arg0, _number_string); diff --git a/deps/v8/test/mjsunit/runtime-gen/internalsetprototype.js b/deps/v8/test/mjsunit/runtime-gen/internalsetprototype.js new file mode 100644 index 00000000000..1bc67d38268 --- /dev/null +++ b/deps/v8/test/mjsunit/runtime-gen/internalsetprototype.js @@ -0,0 +1,6 @@ +// Copyright 2014 the V8 project authors. All rights reserved. +// AUTO-GENERATED BY tools/generate-runtime-tests.py, DO NOT MODIFY +// Flags: --allow-natives-syntax --harmony --harmony-proxies +var _obj = new Object(); +var _prototype = new Object(); +%InternalSetPrototype(_obj, _prototype); diff --git a/deps/v8/test/mjsunit/runtime-gen/isattachedglobal.js b/deps/v8/test/mjsunit/runtime-gen/isattachedglobal.js new file mode 100644 index 00000000000..9ead91a408b --- /dev/null +++ b/deps/v8/test/mjsunit/runtime-gen/isattachedglobal.js @@ -0,0 +1,5 @@ +// Copyright 2014 the V8 project authors. All rights reserved. +// AUTO-GENERATED BY tools/generate-runtime-tests.py, DO NOT MODIFY +// Flags: --allow-natives-syntax --harmony --harmony-proxies +var _global = new Object(); +%IsAttachedGlobal(_global); diff --git a/deps/v8/test/mjsunit/runtime-gen/isbreakonexception.js b/deps/v8/test/mjsunit/runtime-gen/isbreakonexception.js new file mode 100644 index 00000000000..e55c7d030af --- /dev/null +++ b/deps/v8/test/mjsunit/runtime-gen/isbreakonexception.js @@ -0,0 +1,5 @@ +// Copyright 2014 the V8 project authors. All rights reserved. +// AUTO-GENERATED BY tools/generate-runtime-tests.py, DO NOT MODIFY +// Flags: --allow-natives-syntax --harmony --harmony-proxies +var _type_arg = 32; +%IsBreakOnException(_type_arg); diff --git a/deps/v8/test/mjsunit/runtime-gen/isconcurrentrecompilationsupported.js b/deps/v8/test/mjsunit/runtime-gen/isconcurrentrecompilationsupported.js new file mode 100644 index 00000000000..44e2917d72d --- /dev/null +++ b/deps/v8/test/mjsunit/runtime-gen/isconcurrentrecompilationsupported.js @@ -0,0 +1,4 @@ +// Copyright 2014 the V8 project authors. All rights reserved. +// AUTO-GENERATED BY tools/generate-runtime-tests.py, DO NOT MODIFY +// Flags: --allow-natives-syntax --harmony --harmony-proxies +%IsConcurrentRecompilationSupported(); diff --git a/deps/v8/test/mjsunit/runtime-gen/isextensible.js b/deps/v8/test/mjsunit/runtime-gen/isextensible.js new file mode 100644 index 00000000000..20a7c8d8a4f --- /dev/null +++ b/deps/v8/test/mjsunit/runtime-gen/isextensible.js @@ -0,0 +1,5 @@ +// Copyright 2014 the V8 project authors. All rights reserved. +// AUTO-GENERATED BY tools/generate-runtime-tests.py, DO NOT MODIFY +// Flags: --allow-natives-syntax --harmony --harmony-proxies +var _obj = new Object(); +%IsExtensible(_obj); diff --git a/deps/v8/test/mjsunit/runtime-gen/isinitializedintlobject.js b/deps/v8/test/mjsunit/runtime-gen/isinitializedintlobject.js new file mode 100644 index 00000000000..2816e5e27ab --- /dev/null +++ b/deps/v8/test/mjsunit/runtime-gen/isinitializedintlobject.js @@ -0,0 +1,5 @@ +// Copyright 2014 the V8 project authors. All rights reserved. 
All rights reserved.
+// AUTO-GENERATED BY tools/generate-runtime-tests.py, DO NOT MODIFY +// Flags: --allow-natives-syntax --harmony --harmony-proxies +var _input = new Object(); +%IsInitializedIntlObject(_input); diff --git a/deps/v8/test/mjsunit/runtime-gen/isinitializedintlobjectoftype.js b/deps/v8/test/mjsunit/runtime-gen/isinitializedintlobjectoftype.js new file mode 100644 index 00000000000..60e3850082a --- /dev/null +++ b/deps/v8/test/mjsunit/runtime-gen/isinitializedintlobjectoftype.js @@ -0,0 +1,6 @@ +// Copyright 2014 the V8 project authors. All rights reserved. +// AUTO-GENERATED BY tools/generate-runtime-tests.py, DO NOT MODIFY +// Flags: --allow-natives-syntax --harmony --harmony-proxies +var _input = new Object(); +var _expected_type = "foo"; +%IsInitializedIntlObjectOfType(_input, _expected_type); diff --git a/deps/v8/test/mjsunit/runtime-gen/isinprototypechain.js b/deps/v8/test/mjsunit/runtime-gen/isinprototypechain.js new file mode 100644 index 00000000000..37048348d19 --- /dev/null +++ b/deps/v8/test/mjsunit/runtime-gen/isinprototypechain.js @@ -0,0 +1,6 @@ +// Copyright 2014 the V8 project authors. All rights reserved. +// AUTO-GENERATED BY tools/generate-runtime-tests.py, DO NOT MODIFY +// Flags: --allow-natives-syntax --harmony --harmony-proxies +var _O = new Object(); +var _V = new Object(); +%IsInPrototypeChain(_O, _V); diff --git a/deps/v8/test/mjsunit/runtime-gen/isjsfunctionproxy.js b/deps/v8/test/mjsunit/runtime-gen/isjsfunctionproxy.js new file mode 100644 index 00000000000..ca6ea5a9162 --- /dev/null +++ b/deps/v8/test/mjsunit/runtime-gen/isjsfunctionproxy.js @@ -0,0 +1,5 @@ +// Copyright 2014 the V8 project authors. All rights reserved. +// AUTO-GENERATED BY tools/generate-runtime-tests.py, DO NOT MODIFY +// Flags: --allow-natives-syntax --harmony --harmony-proxies +var _obj = new Object(); +%IsJSFunctionProxy(_obj); diff --git a/deps/v8/test/mjsunit/runtime-gen/isjsglobalproxy.js b/deps/v8/test/mjsunit/runtime-gen/isjsglobalproxy.js new file mode 100644 index 00000000000..f0de6101552 --- /dev/null +++ b/deps/v8/test/mjsunit/runtime-gen/isjsglobalproxy.js @@ -0,0 +1,5 @@ +// Copyright 2014 the V8 project authors. All rights reserved. +// AUTO-GENERATED BY tools/generate-runtime-tests.py, DO NOT MODIFY +// Flags: --allow-natives-syntax --harmony --harmony-proxies +var _obj = new Object(); +%IsJSGlobalProxy(_obj); diff --git a/deps/v8/test/mjsunit/runtime-gen/isjsmodule.js b/deps/v8/test/mjsunit/runtime-gen/isjsmodule.js new file mode 100644 index 00000000000..8b43a729fb0 --- /dev/null +++ b/deps/v8/test/mjsunit/runtime-gen/isjsmodule.js @@ -0,0 +1,5 @@ +// Copyright 2014 the V8 project authors. All rights reserved. +// AUTO-GENERATED BY tools/generate-runtime-tests.py, DO NOT MODIFY +// Flags: --allow-natives-syntax --harmony --harmony-proxies +var _obj = new Object(); +%IsJSModule(_obj); diff --git a/deps/v8/test/mjsunit/runtime-gen/isjsproxy.js b/deps/v8/test/mjsunit/runtime-gen/isjsproxy.js new file mode 100644 index 00000000000..a4d32beb165 --- /dev/null +++ b/deps/v8/test/mjsunit/runtime-gen/isjsproxy.js @@ -0,0 +1,5 @@ +// Copyright 2014 the V8 project authors. All rights reserved. 
All rights reserved.
+// AUTO-GENERATED BY tools/generate-runtime-tests.py, DO NOT MODIFY +// Flags: --allow-natives-syntax --harmony --harmony-proxies +var _obj = new Object(); +%IsJSProxy(_obj); diff --git a/deps/v8/test/mjsunit/runtime-gen/isobserved.js b/deps/v8/test/mjsunit/runtime-gen/isobserved.js new file mode 100644 index 00000000000..f649a1b33e1 --- /dev/null +++ b/deps/v8/test/mjsunit/runtime-gen/isobserved.js @@ -0,0 +1,5 @@ +// Copyright 2014 the V8 project authors. All rights reserved. +// AUTO-GENERATED BY tools/generate-runtime-tests.py, DO NOT MODIFY +// Flags: --allow-natives-syntax --harmony --harmony-proxies +var _obj = new Object(); +%IsObserved(_obj); diff --git a/deps/v8/test/mjsunit/runtime-gen/isoptimized.js b/deps/v8/test/mjsunit/runtime-gen/isoptimized.js new file mode 100644 index 00000000000..e1daf0da88c --- /dev/null +++ b/deps/v8/test/mjsunit/runtime-gen/isoptimized.js @@ -0,0 +1,4 @@ +// Copyright 2014 the V8 project authors. All rights reserved. +// AUTO-GENERATED BY tools/generate-runtime-tests.py, DO NOT MODIFY +// Flags: --allow-natives-syntax --harmony --harmony-proxies +%IsOptimized(); diff --git a/deps/v8/test/mjsunit/runtime-gen/ispropertyenumerable.js b/deps/v8/test/mjsunit/runtime-gen/ispropertyenumerable.js new file mode 100644 index 00000000000..575ee3468ce --- /dev/null +++ b/deps/v8/test/mjsunit/runtime-gen/ispropertyenumerable.js @@ -0,0 +1,6 @@ +// Copyright 2014 the V8 project authors. All rights reserved. +// AUTO-GENERATED BY tools/generate-runtime-tests.py, DO NOT MODIFY +// Flags: --allow-natives-syntax --harmony --harmony-proxies +var _object = new Object(); +var _key = "name"; +%IsPropertyEnumerable(_object, _key); diff --git a/deps/v8/test/mjsunit/runtime-gen/issloppymodefunction.js b/deps/v8/test/mjsunit/runtime-gen/issloppymodefunction.js new file mode 100644 index 00000000000..a0c75b32df3 --- /dev/null +++ b/deps/v8/test/mjsunit/runtime-gen/issloppymodefunction.js @@ -0,0 +1,5 @@ +// Copyright 2014 the V8 project authors. All rights reserved. +// AUTO-GENERATED BY tools/generate-runtime-tests.py, DO NOT MODIFY +// Flags: --allow-natives-syntax --harmony --harmony-proxies +var arg0 = function() {}; +%IsSloppyModeFunction(arg0); diff --git a/deps/v8/test/mjsunit/runtime-gen/istemplate.js b/deps/v8/test/mjsunit/runtime-gen/istemplate.js new file mode 100644 index 00000000000..421229fe6e4 --- /dev/null +++ b/deps/v8/test/mjsunit/runtime-gen/istemplate.js @@ -0,0 +1,5 @@ +// Copyright 2014 the V8 project authors. All rights reserved. +// AUTO-GENERATED BY tools/generate-runtime-tests.py, DO NOT MODIFY +// Flags: --allow-natives-syntax --harmony --harmony-proxies +var _arg = new Object(); +%IsTemplate(_arg); diff --git a/deps/v8/test/mjsunit/runtime-gen/isvalidsmi.js b/deps/v8/test/mjsunit/runtime-gen/isvalidsmi.js new file mode 100644 index 00000000000..98cf53bb2db --- /dev/null +++ b/deps/v8/test/mjsunit/runtime-gen/isvalidsmi.js @@ -0,0 +1,5 @@ +// Copyright 2014 the V8 project authors. All rights reserved. +// AUTO-GENERATED BY tools/generate-runtime-tests.py, DO NOT MODIFY +// Flags: --allow-natives-syntax --harmony --harmony-proxies +var _number = 32; +%IsValidSmi(_number); diff --git a/deps/v8/test/mjsunit/runtime-gen/keyedgetproperty.js b/deps/v8/test/mjsunit/runtime-gen/keyedgetproperty.js new file mode 100644 index 00000000000..cd8473c99a6 --- /dev/null +++ b/deps/v8/test/mjsunit/runtime-gen/keyedgetproperty.js @@ -0,0 +1,6 @@ +// Copyright 2014 the V8 project authors. All rights reserved. 
All rights reserved.
+// AUTO-GENERATED BY tools/generate-runtime-tests.py, DO NOT MODIFY +// Flags: --allow-natives-syntax --harmony --harmony-proxies +var _receiver_obj = new Object(); +var _key_obj = new Object(); +%KeyedGetProperty(_receiver_obj, _key_obj); diff --git a/deps/v8/test/mjsunit/runtime-gen/liveeditcheckanddropactivations.js b/deps/v8/test/mjsunit/runtime-gen/liveeditcheckanddropactivations.js new file mode 100644 index 00000000000..7247acc3a7d --- /dev/null +++ b/deps/v8/test/mjsunit/runtime-gen/liveeditcheckanddropactivations.js @@ -0,0 +1,6 @@ +// Copyright 2014 the V8 project authors. All rights reserved. +// AUTO-GENERATED BY tools/generate-runtime-tests.py, DO NOT MODIFY +// Flags: --allow-natives-syntax --harmony --harmony-proxies +var _shared_array = new Array(); +var _do_drop = true; +%LiveEditCheckAndDropActivations(_shared_array, _do_drop); diff --git a/deps/v8/test/mjsunit/runtime-gen/liveeditcomparestrings.js b/deps/v8/test/mjsunit/runtime-gen/liveeditcomparestrings.js new file mode 100644 index 00000000000..611d78b03c1 --- /dev/null +++ b/deps/v8/test/mjsunit/runtime-gen/liveeditcomparestrings.js @@ -0,0 +1,6 @@ +// Copyright 2014 the V8 project authors. All rights reserved. +// AUTO-GENERATED BY tools/generate-runtime-tests.py, DO NOT MODIFY +// Flags: --allow-natives-syntax --harmony --harmony-proxies +var _s1 = "foo"; +var _s2 = "foo"; +%LiveEditCompareStrings(_s1, _s2); diff --git a/deps/v8/test/mjsunit/runtime-gen/liveeditfunctionsetscript.js b/deps/v8/test/mjsunit/runtime-gen/liveeditfunctionsetscript.js new file mode 100644 index 00000000000..51d61d3bc7c --- /dev/null +++ b/deps/v8/test/mjsunit/runtime-gen/liveeditfunctionsetscript.js @@ -0,0 +1,6 @@ +// Copyright 2014 the V8 project authors. All rights reserved. +// AUTO-GENERATED BY tools/generate-runtime-tests.py, DO NOT MODIFY +// Flags: --allow-natives-syntax --harmony --harmony-proxies +var _function_object = new Object(); +var _script_object = new Object(); +%LiveEditFunctionSetScript(_function_object, _script_object); diff --git a/deps/v8/test/mjsunit/runtime-gen/loadmutabledouble.js b/deps/v8/test/mjsunit/runtime-gen/loadmutabledouble.js new file mode 100644 index 00000000000..1a2e7e9f90d --- /dev/null +++ b/deps/v8/test/mjsunit/runtime-gen/loadmutabledouble.js @@ -0,0 +1,6 @@ +// Copyright 2014 the V8 project authors. All rights reserved. +// AUTO-GENERATED BY tools/generate-runtime-tests.py, DO NOT MODIFY +// Flags: --allow-natives-syntax --harmony --harmony-proxies +var arg0 = {foo: 1.2}; +var _index = 1; +%LoadMutableDouble(arg0, _index); diff --git a/deps/v8/test/mjsunit/runtime-gen/lookupaccessor.js b/deps/v8/test/mjsunit/runtime-gen/lookupaccessor.js new file mode 100644 index 00000000000..89f40d76c9e --- /dev/null +++ b/deps/v8/test/mjsunit/runtime-gen/lookupaccessor.js @@ -0,0 +1,7 @@ +// Copyright 2014 the V8 project authors. All rights reserved. +// AUTO-GENERATED BY tools/generate-runtime-tests.py, DO NOT MODIFY +// Flags: --allow-natives-syntax --harmony --harmony-proxies +var _receiver = new Object(); +var _name = "name"; +var _flag = 1; +%LookupAccessor(_receiver, _name, _flag); diff --git a/deps/v8/test/mjsunit/runtime-gen/mapclear.js b/deps/v8/test/mjsunit/runtime-gen/mapclear.js new file mode 100644 index 00000000000..b34e6945140 --- /dev/null +++ b/deps/v8/test/mjsunit/runtime-gen/mapclear.js @@ -0,0 +1,5 @@ +// Copyright 2014 the V8 project authors. All rights reserved. 
All rights reserved.
+// AUTO-GENERATED BY tools/generate-runtime-tests.py, DO NOT MODIFY +// Flags: --allow-natives-syntax --harmony --harmony-proxies +var _holder = new Map(); +%MapClear(_holder); diff --git a/deps/v8/test/mjsunit/runtime-gen/mapdelete.js b/deps/v8/test/mjsunit/runtime-gen/mapdelete.js new file mode 100644 index 00000000000..ab78954427b --- /dev/null +++ b/deps/v8/test/mjsunit/runtime-gen/mapdelete.js @@ -0,0 +1,6 @@ +// Copyright 2014 the V8 project authors. All rights reserved. +// AUTO-GENERATED BY tools/generate-runtime-tests.py, DO NOT MODIFY +// Flags: --allow-natives-syntax --harmony --harmony-proxies +var _holder = new Map(); +var _key = new Object(); +%MapDelete(_holder, _key); diff --git a/deps/v8/test/mjsunit/runtime-gen/mapget.js b/deps/v8/test/mjsunit/runtime-gen/mapget.js new file mode 100644 index 00000000000..0e996f52329 --- /dev/null +++ b/deps/v8/test/mjsunit/runtime-gen/mapget.js @@ -0,0 +1,6 @@ +// Copyright 2014 the V8 project authors. All rights reserved. +// AUTO-GENERATED BY tools/generate-runtime-tests.py, DO NOT MODIFY +// Flags: --allow-natives-syntax --harmony --harmony-proxies +var _holder = new Map(); +var _key = new Object(); +%MapGet(_holder, _key); diff --git a/deps/v8/test/mjsunit/runtime-gen/mapgetsize.js b/deps/v8/test/mjsunit/runtime-gen/mapgetsize.js new file mode 100644 index 00000000000..50a06044b48 --- /dev/null +++ b/deps/v8/test/mjsunit/runtime-gen/mapgetsize.js @@ -0,0 +1,5 @@ +// Copyright 2014 the V8 project authors. All rights reserved. +// AUTO-GENERATED BY tools/generate-runtime-tests.py, DO NOT MODIFY +// Flags: --allow-natives-syntax --harmony --harmony-proxies +var _holder = new Map(); +%MapGetSize(_holder); diff --git a/deps/v8/test/mjsunit/runtime-gen/maphas.js b/deps/v8/test/mjsunit/runtime-gen/maphas.js new file mode 100644 index 00000000000..2dc70c93e3d --- /dev/null +++ b/deps/v8/test/mjsunit/runtime-gen/maphas.js @@ -0,0 +1,6 @@ +// Copyright 2014 the V8 project authors. All rights reserved. +// AUTO-GENERATED BY tools/generate-runtime-tests.py, DO NOT MODIFY +// Flags: --allow-natives-syntax --harmony --harmony-proxies +var _holder = new Map(); +var _key = new Object(); +%MapHas(_holder, _key); diff --git a/deps/v8/test/mjsunit/runtime-gen/mapinitialize.js b/deps/v8/test/mjsunit/runtime-gen/mapinitialize.js new file mode 100644 index 00000000000..6240a025948 --- /dev/null +++ b/deps/v8/test/mjsunit/runtime-gen/mapinitialize.js @@ -0,0 +1,5 @@ +// Copyright 2014 the V8 project authors. All rights reserved. +// AUTO-GENERATED BY tools/generate-runtime-tests.py, DO NOT MODIFY +// Flags: --allow-natives-syntax --harmony --harmony-proxies +var _holder = new Map(); +%MapInitialize(_holder); diff --git a/deps/v8/test/mjsunit/runtime-gen/mapiteratorinitialize.js b/deps/v8/test/mjsunit/runtime-gen/mapiteratorinitialize.js new file mode 100644 index 00000000000..584fe18a4dd --- /dev/null +++ b/deps/v8/test/mjsunit/runtime-gen/mapiteratorinitialize.js @@ -0,0 +1,7 @@ +// Copyright 2014 the V8 project authors. All rights reserved. 
All rights reserved.
+// AUTO-GENERATED BY tools/generate-runtime-tests.py, DO NOT MODIFY +// Flags: --allow-natives-syntax --harmony --harmony-proxies +var _holder = new Map().entries(); +var _map = new Map(); +var _kind = 1; +%MapIteratorInitialize(_holder, _map, _kind); diff --git a/deps/v8/test/mjsunit/runtime-gen/mapiteratornext.js b/deps/v8/test/mjsunit/runtime-gen/mapiteratornext.js new file mode 100644 index 00000000000..e155227023f --- /dev/null +++ b/deps/v8/test/mjsunit/runtime-gen/mapiteratornext.js @@ -0,0 +1,6 @@ +// Copyright 2014 the V8 project authors. All rights reserved. +// AUTO-GENERATED BY tools/generate-runtime-tests.py, DO NOT MODIFY +// Flags: --allow-natives-syntax --harmony --harmony-proxies +var _holder = new Map().entries(); +var _value_array = new Array(); +%MapIteratorNext(_holder, _value_array); diff --git a/deps/v8/test/mjsunit/runtime-gen/mapset.js b/deps/v8/test/mjsunit/runtime-gen/mapset.js new file mode 100644 index 00000000000..32c2080a8d0 --- /dev/null +++ b/deps/v8/test/mjsunit/runtime-gen/mapset.js @@ -0,0 +1,7 @@ +// Copyright 2014 the V8 project authors. All rights reserved. +// AUTO-GENERATED BY tools/generate-runtime-tests.py, DO NOT MODIFY +// Flags: --allow-natives-syntax --harmony --harmony-proxies +var _holder = new Map(); +var _key = new Object(); +var _value = new Object(); +%MapSet(_holder, _key, _value); diff --git a/deps/v8/test/mjsunit/runtime-gen/markasinitializedintlobjectoftype.js b/deps/v8/test/mjsunit/runtime-gen/markasinitializedintlobjectoftype.js new file mode 100644 index 00000000000..bd0c581c89b --- /dev/null +++ b/deps/v8/test/mjsunit/runtime-gen/markasinitializedintlobjectoftype.js @@ -0,0 +1,7 @@ +// Copyright 2014 the V8 project authors. All rights reserved. +// AUTO-GENERATED BY tools/generate-runtime-tests.py, DO NOT MODIFY +// Flags: --allow-natives-syntax --harmony --harmony-proxies +var _input = new Object(); +var _type = "foo"; +var _impl = new Object(); +%MarkAsInitializedIntlObjectOfType(_input, _type, _impl); diff --git a/deps/v8/test/mjsunit/runtime-gen/mathacos.js b/deps/v8/test/mjsunit/runtime-gen/mathacos.js new file mode 100644 index 00000000000..fa44268389c --- /dev/null +++ b/deps/v8/test/mjsunit/runtime-gen/mathacos.js @@ -0,0 +1,5 @@ +// Copyright 2014 the V8 project authors. All rights reserved. +// AUTO-GENERATED BY tools/generate-runtime-tests.py, DO NOT MODIFY +// Flags: --allow-natives-syntax --harmony --harmony-proxies +var _x = 1.5; +%MathAcos(_x); diff --git a/deps/v8/test/mjsunit/runtime-gen/mathasin.js b/deps/v8/test/mjsunit/runtime-gen/mathasin.js new file mode 100644 index 00000000000..0d20b3108d2 --- /dev/null +++ b/deps/v8/test/mjsunit/runtime-gen/mathasin.js @@ -0,0 +1,5 @@ +// Copyright 2014 the V8 project authors. All rights reserved. +// AUTO-GENERATED BY tools/generate-runtime-tests.py, DO NOT MODIFY +// Flags: --allow-natives-syntax --harmony --harmony-proxies +var _x = 1.5; +%MathAsin(_x); diff --git a/deps/v8/test/mjsunit/runtime-gen/mathatan.js b/deps/v8/test/mjsunit/runtime-gen/mathatan.js new file mode 100644 index 00000000000..0e2708f1f29 --- /dev/null +++ b/deps/v8/test/mjsunit/runtime-gen/mathatan.js @@ -0,0 +1,5 @@ +// Copyright 2014 the V8 project authors. All rights reserved. 
All rights reserved.
+// AUTO-GENERATED BY tools/generate-runtime-tests.py, DO NOT MODIFY +// Flags: --allow-natives-syntax --harmony --harmony-proxies +var _x = 1.5; +%MathAtan(_x); diff --git a/deps/v8/test/mjsunit/runtime-gen/mathatan2.js b/deps/v8/test/mjsunit/runtime-gen/mathatan2.js new file mode 100644 index 00000000000..42947971158 --- /dev/null +++ b/deps/v8/test/mjsunit/runtime-gen/mathatan2.js @@ -0,0 +1,6 @@ +// Copyright 2014 the V8 project authors. All rights reserved. +// AUTO-GENERATED BY tools/generate-runtime-tests.py, DO NOT MODIFY +// Flags: --allow-natives-syntax --harmony --harmony-proxies +var _x = 1.5; +var _y = 1.5; +%MathAtan2(_x, _y); diff --git a/deps/v8/test/mjsunit/runtime-gen/mathexprt.js b/deps/v8/test/mjsunit/runtime-gen/mathexprt.js new file mode 100644 index 00000000000..e4584366dea --- /dev/null +++ b/deps/v8/test/mjsunit/runtime-gen/mathexprt.js @@ -0,0 +1,5 @@ +// Copyright 2014 the V8 project authors. All rights reserved. +// AUTO-GENERATED BY tools/generate-runtime-tests.py, DO NOT MODIFY +// Flags: --allow-natives-syntax --harmony --harmony-proxies +var _x = 1.5; +%MathExpRT(_x); diff --git a/deps/v8/test/mjsunit/runtime-gen/mathfloorrt.js b/deps/v8/test/mjsunit/runtime-gen/mathfloorrt.js new file mode 100644 index 00000000000..2ae83aab529 --- /dev/null +++ b/deps/v8/test/mjsunit/runtime-gen/mathfloorrt.js @@ -0,0 +1,5 @@ +// Copyright 2014 the V8 project authors. All rights reserved. +// AUTO-GENERATED BY tools/generate-runtime-tests.py, DO NOT MODIFY +// Flags: --allow-natives-syntax --harmony --harmony-proxies +var _x = 1.5; +%MathFloorRT(_x); diff --git a/deps/v8/test/mjsunit/runtime-gen/mathfround.js b/deps/v8/test/mjsunit/runtime-gen/mathfround.js new file mode 100644 index 00000000000..10a92986c1b --- /dev/null +++ b/deps/v8/test/mjsunit/runtime-gen/mathfround.js @@ -0,0 +1,5 @@ +// Copyright 2014 the V8 project authors. All rights reserved. +// AUTO-GENERATED BY tools/generate-runtime-tests.py, DO NOT MODIFY +// Flags: --allow-natives-syntax --harmony --harmony-proxies +var _x = 1.5; +%MathFround(_x); diff --git a/deps/v8/test/mjsunit/runtime-gen/mathlogrt.js b/deps/v8/test/mjsunit/runtime-gen/mathlogrt.js new file mode 100644 index 00000000000..5c484cbbb10 --- /dev/null +++ b/deps/v8/test/mjsunit/runtime-gen/mathlogrt.js @@ -0,0 +1,5 @@ +// Copyright 2014 the V8 project authors. All rights reserved. +// AUTO-GENERATED BY tools/generate-runtime-tests.py, DO NOT MODIFY +// Flags: --allow-natives-syntax --harmony --harmony-proxies +var _x = 1.5; +%MathLogRT(_x); diff --git a/deps/v8/test/mjsunit/runtime-gen/mathsqrtrt.js b/deps/v8/test/mjsunit/runtime-gen/mathsqrtrt.js new file mode 100644 index 00000000000..e0df8d72d57 --- /dev/null +++ b/deps/v8/test/mjsunit/runtime-gen/mathsqrtrt.js @@ -0,0 +1,5 @@ +// Copyright 2014 the V8 project authors. All rights reserved. +// AUTO-GENERATED BY tools/generate-runtime-tests.py, DO NOT MODIFY +// Flags: --allow-natives-syntax --harmony --harmony-proxies +var _x = 1.5; +%MathSqrtRT(_x); diff --git a/deps/v8/test/mjsunit/runtime-gen/maxsmi.js b/deps/v8/test/mjsunit/runtime-gen/maxsmi.js new file mode 100644 index 00000000000..717a6544ebd --- /dev/null +++ b/deps/v8/test/mjsunit/runtime-gen/maxsmi.js @@ -0,0 +1,4 @@ +// Copyright 2014 the V8 project authors. All rights reserved. 
All rights reserved.
+// AUTO-GENERATED BY tools/generate-runtime-tests.py, DO NOT MODIFY +// Flags: --allow-natives-syntax --harmony --harmony-proxies +%MaxSmi(); diff --git a/deps/v8/test/mjsunit/runtime-gen/movearraycontents.js b/deps/v8/test/mjsunit/runtime-gen/movearraycontents.js new file mode 100644 index 00000000000..41c4ee1cd31 --- /dev/null +++ b/deps/v8/test/mjsunit/runtime-gen/movearraycontents.js @@ -0,0 +1,6 @@ +// Copyright 2014 the V8 project authors. All rights reserved. +// AUTO-GENERATED BY tools/generate-runtime-tests.py, DO NOT MODIFY +// Flags: --allow-natives-syntax --harmony --harmony-proxies +var _from = new Array(); +var _to = new Array(); +%MoveArrayContents(_from, _to); diff --git a/deps/v8/test/mjsunit/runtime-gen/neveroptimizefunction.js b/deps/v8/test/mjsunit/runtime-gen/neveroptimizefunction.js new file mode 100644 index 00000000000..b03e42f1f82 --- /dev/null +++ b/deps/v8/test/mjsunit/runtime-gen/neveroptimizefunction.js @@ -0,0 +1,5 @@ +// Copyright 2014 the V8 project authors. All rights reserved. +// AUTO-GENERATED BY tools/generate-runtime-tests.py, DO NOT MODIFY +// Flags: --allow-natives-syntax --harmony --harmony-proxies +var _function = function() {}; +%NeverOptimizeFunction(_function); diff --git a/deps/v8/test/mjsunit/runtime-gen/newarguments.js b/deps/v8/test/mjsunit/runtime-gen/newarguments.js new file mode 100644 index 00000000000..908fc3af7cb --- /dev/null +++ b/deps/v8/test/mjsunit/runtime-gen/newarguments.js @@ -0,0 +1,5 @@ +// Copyright 2014 the V8 project authors. All rights reserved. +// AUTO-GENERATED BY tools/generate-runtime-tests.py, DO NOT MODIFY +// Flags: --allow-natives-syntax --harmony --harmony-proxies +var _callee = function() {}; +%NewArguments(_callee); diff --git a/deps/v8/test/mjsunit/runtime-gen/newobjectfrombound.js b/deps/v8/test/mjsunit/runtime-gen/newobjectfrombound.js new file mode 100644 index 00000000000..36f75077b6a --- /dev/null +++ b/deps/v8/test/mjsunit/runtime-gen/newobjectfrombound.js @@ -0,0 +1,5 @@ +// Copyright 2014 the V8 project authors. All rights reserved. +// AUTO-GENERATED BY tools/generate-runtime-tests.py, DO NOT MODIFY +// Flags: --allow-natives-syntax --harmony --harmony-proxies +var arg0 = (function() {}).bind({}); +%NewObjectFromBound(arg0); diff --git a/deps/v8/test/mjsunit/runtime-gen/newstring.js b/deps/v8/test/mjsunit/runtime-gen/newstring.js new file mode 100644 index 00000000000..24b01489e5e --- /dev/null +++ b/deps/v8/test/mjsunit/runtime-gen/newstring.js @@ -0,0 +1,6 @@ +// Copyright 2014 the V8 project authors. All rights reserved. +// AUTO-GENERATED BY tools/generate-runtime-tests.py, DO NOT MODIFY +// Flags: --allow-natives-syntax --harmony --harmony-proxies +var _length = 1; +var _is_one_byte = true; +%NewString(_length, _is_one_byte); diff --git a/deps/v8/test/mjsunit/runtime-gen/newstringwrapper.js b/deps/v8/test/mjsunit/runtime-gen/newstringwrapper.js new file mode 100644 index 00000000000..cf53a3af209 --- /dev/null +++ b/deps/v8/test/mjsunit/runtime-gen/newstringwrapper.js @@ -0,0 +1,5 @@ +// Copyright 2014 the V8 project authors. All rights reserved. 
All rights reserved.
+// AUTO-GENERATED BY tools/generate-runtime-tests.py, DO NOT MODIFY +// Flags: --allow-natives-syntax --harmony --harmony-proxies +var _value = "foo"; +%NewStringWrapper(_value); diff --git a/deps/v8/test/mjsunit/runtime-gen/newsymbolwrapper.js b/deps/v8/test/mjsunit/runtime-gen/newsymbolwrapper.js new file mode 100644 index 00000000000..08c0ea7e606 --- /dev/null +++ b/deps/v8/test/mjsunit/runtime-gen/newsymbolwrapper.js @@ -0,0 +1,5 @@ +// Copyright 2014 the V8 project authors. All rights reserved. +// AUTO-GENERATED BY tools/generate-runtime-tests.py, DO NOT MODIFY +// Flags: --allow-natives-syntax --harmony --harmony-proxies +var _symbol = Symbol("symbol"); +%NewSymbolWrapper(_symbol); diff --git a/deps/v8/test/mjsunit/runtime-gen/notifycontextdisposed.js b/deps/v8/test/mjsunit/runtime-gen/notifycontextdisposed.js new file mode 100644 index 00000000000..d353fc5ceaa --- /dev/null +++ b/deps/v8/test/mjsunit/runtime-gen/notifycontextdisposed.js @@ -0,0 +1,4 @@ +// Copyright 2014 the V8 project authors. All rights reserved. +// AUTO-GENERATED BY tools/generate-runtime-tests.py, DO NOT MODIFY +// Flags: --allow-natives-syntax --harmony --harmony-proxies +%NotifyContextDisposed(); diff --git a/deps/v8/test/mjsunit/runtime-gen/numberadd.js b/deps/v8/test/mjsunit/runtime-gen/numberadd.js new file mode 100644 index 00000000000..f85017d49d1 --- /dev/null +++ b/deps/v8/test/mjsunit/runtime-gen/numberadd.js @@ -0,0 +1,6 @@ +// Copyright 2014 the V8 project authors. All rights reserved. +// AUTO-GENERATED BY tools/generate-runtime-tests.py, DO NOT MODIFY +// Flags: --allow-natives-syntax --harmony --harmony-proxies +var _x = 1.5; +var _y = 1.5; +%NumberAdd(_x, _y); diff --git a/deps/v8/test/mjsunit/runtime-gen/numberand.js b/deps/v8/test/mjsunit/runtime-gen/numberand.js new file mode 100644 index 00000000000..9635e11bb66 --- /dev/null +++ b/deps/v8/test/mjsunit/runtime-gen/numberand.js @@ -0,0 +1,6 @@ +// Copyright 2014 the V8 project authors. All rights reserved. +// AUTO-GENERATED BY tools/generate-runtime-tests.py, DO NOT MODIFY +// Flags: --allow-natives-syntax --harmony --harmony-proxies +var _x = 32; +var _y = 32; +%NumberAnd(_x, _y); diff --git a/deps/v8/test/mjsunit/runtime-gen/numbercompare.js b/deps/v8/test/mjsunit/runtime-gen/numbercompare.js new file mode 100644 index 00000000000..5f7ac9363c0 --- /dev/null +++ b/deps/v8/test/mjsunit/runtime-gen/numbercompare.js @@ -0,0 +1,7 @@ +// Copyright 2014 the V8 project authors. All rights reserved. +// AUTO-GENERATED BY tools/generate-runtime-tests.py, DO NOT MODIFY +// Flags: --allow-natives-syntax --harmony --harmony-proxies +var _x = 1.5; +var _y = 1.5; +var _uncomparable_result = new Object(); +%NumberCompare(_x, _y, _uncomparable_result); diff --git a/deps/v8/test/mjsunit/runtime-gen/numberdiv.js b/deps/v8/test/mjsunit/runtime-gen/numberdiv.js new file mode 100644 index 00000000000..c62d5921c7b --- /dev/null +++ b/deps/v8/test/mjsunit/runtime-gen/numberdiv.js @@ -0,0 +1,6 @@ +// Copyright 2014 the V8 project authors. All rights reserved. +// AUTO-GENERATED BY tools/generate-runtime-tests.py, DO NOT MODIFY +// Flags: --allow-natives-syntax --harmony --harmony-proxies +var _x = 1.5; +var _y = 1.5; +%NumberDiv(_x, _y); diff --git a/deps/v8/test/mjsunit/runtime-gen/numberequals.js b/deps/v8/test/mjsunit/runtime-gen/numberequals.js new file mode 100644 index 00000000000..3b919fc02f0 --- /dev/null +++ b/deps/v8/test/mjsunit/runtime-gen/numberequals.js @@ -0,0 +1,6 @@ +// Copyright 2014 the V8 project authors. All rights reserved. 
All rights reserved.
+// AUTO-GENERATED BY tools/generate-runtime-tests.py, DO NOT MODIFY +// Flags: --allow-natives-syntax --harmony --harmony-proxies +var _x = 1.5; +var _y = 1.5; +%NumberEquals(_x, _y); diff --git a/deps/v8/test/mjsunit/runtime-gen/numberimul.js b/deps/v8/test/mjsunit/runtime-gen/numberimul.js new file mode 100644 index 00000000000..f3c98bdc285 --- /dev/null +++ b/deps/v8/test/mjsunit/runtime-gen/numberimul.js @@ -0,0 +1,6 @@ +// Copyright 2014 the V8 project authors. All rights reserved. +// AUTO-GENERATED BY tools/generate-runtime-tests.py, DO NOT MODIFY +// Flags: --allow-natives-syntax --harmony --harmony-proxies +var _x = 32; +var _y = 32; +%NumberImul(_x, _y); diff --git a/deps/v8/test/mjsunit/runtime-gen/numbermod.js b/deps/v8/test/mjsunit/runtime-gen/numbermod.js new file mode 100644 index 00000000000..6d5faeb2c5e --- /dev/null +++ b/deps/v8/test/mjsunit/runtime-gen/numbermod.js @@ -0,0 +1,6 @@ +// Copyright 2014 the V8 project authors. All rights reserved. +// AUTO-GENERATED BY tools/generate-runtime-tests.py, DO NOT MODIFY +// Flags: --allow-natives-syntax --harmony --harmony-proxies +var _x = 1.5; +var _y = 1.5; +%NumberMod(_x, _y); diff --git a/deps/v8/test/mjsunit/runtime-gen/numbermul.js b/deps/v8/test/mjsunit/runtime-gen/numbermul.js new file mode 100644 index 00000000000..0bdc7c23788 --- /dev/null +++ b/deps/v8/test/mjsunit/runtime-gen/numbermul.js @@ -0,0 +1,6 @@ +// Copyright 2014 the V8 project authors. All rights reserved. +// AUTO-GENERATED BY tools/generate-runtime-tests.py, DO NOT MODIFY +// Flags: --allow-natives-syntax --harmony --harmony-proxies +var _x = 1.5; +var _y = 1.5; +%NumberMul(_x, _y); diff --git a/deps/v8/test/mjsunit/runtime-gen/numberor.js b/deps/v8/test/mjsunit/runtime-gen/numberor.js new file mode 100644 index 00000000000..c5ac65fc8d1 --- /dev/null +++ b/deps/v8/test/mjsunit/runtime-gen/numberor.js @@ -0,0 +1,6 @@ +// Copyright 2014 the V8 project authors. All rights reserved. +// AUTO-GENERATED BY tools/generate-runtime-tests.py, DO NOT MODIFY +// Flags: --allow-natives-syntax --harmony --harmony-proxies +var _x = 32; +var _y = 32; +%NumberOr(_x, _y); diff --git a/deps/v8/test/mjsunit/runtime-gen/numbersar.js b/deps/v8/test/mjsunit/runtime-gen/numbersar.js new file mode 100644 index 00000000000..639270a08a4 --- /dev/null +++ b/deps/v8/test/mjsunit/runtime-gen/numbersar.js @@ -0,0 +1,6 @@ +// Copyright 2014 the V8 project authors. All rights reserved. +// AUTO-GENERATED BY tools/generate-runtime-tests.py, DO NOT MODIFY +// Flags: --allow-natives-syntax --harmony --harmony-proxies +var _x = 32; +var _y = 32; +%NumberSar(_x, _y); diff --git a/deps/v8/test/mjsunit/runtime-gen/numbershl.js b/deps/v8/test/mjsunit/runtime-gen/numbershl.js new file mode 100644 index 00000000000..b505ff6ed8f --- /dev/null +++ b/deps/v8/test/mjsunit/runtime-gen/numbershl.js @@ -0,0 +1,6 @@ +// Copyright 2014 the V8 project authors. All rights reserved. +// AUTO-GENERATED BY tools/generate-runtime-tests.py, DO NOT MODIFY +// Flags: --allow-natives-syntax --harmony --harmony-proxies +var _x = 32; +var _y = 32; +%NumberShl(_x, _y); diff --git a/deps/v8/test/mjsunit/runtime-gen/numbershr.js b/deps/v8/test/mjsunit/runtime-gen/numbershr.js new file mode 100644 index 00000000000..bd1a3c45414 --- /dev/null +++ b/deps/v8/test/mjsunit/runtime-gen/numbershr.js @@ -0,0 +1,6 @@ +// Copyright 2014 the V8 project authors. All rights reserved. 
All rights reserved.
+// AUTO-GENERATED BY tools/generate-runtime-tests.py, DO NOT MODIFY +// Flags: --allow-natives-syntax --harmony --harmony-proxies +var _x = 32; +var _y = 32; +%NumberShr(_x, _y); diff --git a/deps/v8/test/mjsunit/runtime-gen/numbersub.js b/deps/v8/test/mjsunit/runtime-gen/numbersub.js new file mode 100644 index 00000000000..5c99f872fac --- /dev/null +++ b/deps/v8/test/mjsunit/runtime-gen/numbersub.js @@ -0,0 +1,6 @@ +// Copyright 2014 the V8 project authors. All rights reserved. +// AUTO-GENERATED BY tools/generate-runtime-tests.py, DO NOT MODIFY +// Flags: --allow-natives-syntax --harmony --harmony-proxies +var _x = 1.5; +var _y = 1.5; +%NumberSub(_x, _y); diff --git a/deps/v8/test/mjsunit/runtime-gen/numbertoexponential.js b/deps/v8/test/mjsunit/runtime-gen/numbertoexponential.js new file mode 100644 index 00000000000..30159bb3ad3 --- /dev/null +++ b/deps/v8/test/mjsunit/runtime-gen/numbertoexponential.js @@ -0,0 +1,6 @@ +// Copyright 2014 the V8 project authors. All rights reserved. +// AUTO-GENERATED BY tools/generate-runtime-tests.py, DO NOT MODIFY +// Flags: --allow-natives-syntax --harmony --harmony-proxies +var _value = 1.5; +var _f_number = 1.5; +%NumberToExponential(_value, _f_number); diff --git a/deps/v8/test/mjsunit/runtime-gen/numbertofixed.js b/deps/v8/test/mjsunit/runtime-gen/numbertofixed.js new file mode 100644 index 00000000000..0df152541aa --- /dev/null +++ b/deps/v8/test/mjsunit/runtime-gen/numbertofixed.js @@ -0,0 +1,6 @@ +// Copyright 2014 the V8 project authors. All rights reserved. +// AUTO-GENERATED BY tools/generate-runtime-tests.py, DO NOT MODIFY +// Flags: --allow-natives-syntax --harmony --harmony-proxies +var _value = 1.5; +var _f_number = 1.5; +%NumberToFixed(_value, _f_number); diff --git a/deps/v8/test/mjsunit/runtime-gen/numbertointeger.js b/deps/v8/test/mjsunit/runtime-gen/numbertointeger.js new file mode 100644 index 00000000000..eada58f45ab --- /dev/null +++ b/deps/v8/test/mjsunit/runtime-gen/numbertointeger.js @@ -0,0 +1,5 @@ +// Copyright 2014 the V8 project authors. All rights reserved. +// AUTO-GENERATED BY tools/generate-runtime-tests.py, DO NOT MODIFY +// Flags: --allow-natives-syntax --harmony --harmony-proxies +var _number = 1.5; +%NumberToInteger(_number); diff --git a/deps/v8/test/mjsunit/runtime-gen/numbertointegermapminuszero.js b/deps/v8/test/mjsunit/runtime-gen/numbertointegermapminuszero.js new file mode 100644 index 00000000000..ce324806101 --- /dev/null +++ b/deps/v8/test/mjsunit/runtime-gen/numbertointegermapminuszero.js @@ -0,0 +1,5 @@ +// Copyright 2014 the V8 project authors. All rights reserved. +// AUTO-GENERATED BY tools/generate-runtime-tests.py, DO NOT MODIFY +// Flags: --allow-natives-syntax --harmony --harmony-proxies +var _number = 1.5; +%NumberToIntegerMapMinusZero(_number); diff --git a/deps/v8/test/mjsunit/runtime-gen/numbertojsint32.js b/deps/v8/test/mjsunit/runtime-gen/numbertojsint32.js new file mode 100644 index 00000000000..77321f9c62c --- /dev/null +++ b/deps/v8/test/mjsunit/runtime-gen/numbertojsint32.js @@ -0,0 +1,5 @@ +// Copyright 2014 the V8 project authors. All rights reserved. 
All rights reserved.
+// AUTO-GENERATED BY tools/generate-runtime-tests.py, DO NOT MODIFY +// Flags: --allow-natives-syntax --harmony --harmony-proxies +var _number = 1.5; +%NumberToJSInt32(_number); diff --git a/deps/v8/test/mjsunit/runtime-gen/numbertojsuint32.js b/deps/v8/test/mjsunit/runtime-gen/numbertojsuint32.js new file mode 100644 index 00000000000..d4f7302fe9f --- /dev/null +++ b/deps/v8/test/mjsunit/runtime-gen/numbertojsuint32.js @@ -0,0 +1,5 @@ +// Copyright 2014 the V8 project authors. All rights reserved. +// AUTO-GENERATED BY tools/generate-runtime-tests.py, DO NOT MODIFY +// Flags: --allow-natives-syntax --harmony --harmony-proxies +var _number = 32; +%NumberToJSUint32(_number); diff --git a/deps/v8/test/mjsunit/runtime-gen/numbertoprecision.js b/deps/v8/test/mjsunit/runtime-gen/numbertoprecision.js new file mode 100644 index 00000000000..6591117ec83 --- /dev/null +++ b/deps/v8/test/mjsunit/runtime-gen/numbertoprecision.js @@ -0,0 +1,6 @@ +// Copyright 2014 the V8 project authors. All rights reserved. +// AUTO-GENERATED BY tools/generate-runtime-tests.py, DO NOT MODIFY +// Flags: --allow-natives-syntax --harmony --harmony-proxies +var _value = 1.5; +var _f_number = 1.5; +%NumberToPrecision(_value, _f_number); diff --git a/deps/v8/test/mjsunit/runtime-gen/numbertoradixstring.js b/deps/v8/test/mjsunit/runtime-gen/numbertoradixstring.js new file mode 100644 index 00000000000..020aac28536 --- /dev/null +++ b/deps/v8/test/mjsunit/runtime-gen/numbertoradixstring.js @@ -0,0 +1,6 @@ +// Copyright 2014 the V8 project authors. All rights reserved. +// AUTO-GENERATED BY tools/generate-runtime-tests.py, DO NOT MODIFY +// Flags: --allow-natives-syntax --harmony --harmony-proxies +var _value = 1.5; +var arg1 = 2; +%NumberToRadixString(_value, arg1); diff --git a/deps/v8/test/mjsunit/runtime-gen/numbertostringrt.js b/deps/v8/test/mjsunit/runtime-gen/numbertostringrt.js new file mode 100644 index 00000000000..4b2b6d93b0f --- /dev/null +++ b/deps/v8/test/mjsunit/runtime-gen/numbertostringrt.js @@ -0,0 +1,5 @@ +// Copyright 2014 the V8 project authors. All rights reserved. +// AUTO-GENERATED BY tools/generate-runtime-tests.py, DO NOT MODIFY +// Flags: --allow-natives-syntax --harmony --harmony-proxies +var _number = 1.5; +%NumberToStringRT(_number); diff --git a/deps/v8/test/mjsunit/runtime-gen/numberunaryminus.js b/deps/v8/test/mjsunit/runtime-gen/numberunaryminus.js new file mode 100644 index 00000000000..54dc49eda91 --- /dev/null +++ b/deps/v8/test/mjsunit/runtime-gen/numberunaryminus.js @@ -0,0 +1,5 @@ +// Copyright 2014 the V8 project authors. All rights reserved. +// AUTO-GENERATED BY tools/generate-runtime-tests.py, DO NOT MODIFY +// Flags: --allow-natives-syntax --harmony --harmony-proxies +var _x = 1.5; +%NumberUnaryMinus(_x); diff --git a/deps/v8/test/mjsunit/runtime-gen/numberxor.js b/deps/v8/test/mjsunit/runtime-gen/numberxor.js new file mode 100644 index 00000000000..237269803ba --- /dev/null +++ b/deps/v8/test/mjsunit/runtime-gen/numberxor.js @@ -0,0 +1,6 @@ +// Copyright 2014 the V8 project authors. All rights reserved. +// AUTO-GENERATED BY tools/generate-runtime-tests.py, DO NOT MODIFY +// Flags: --allow-natives-syntax --harmony --harmony-proxies +var _x = 32; +var _y = 32; +%NumberXor(_x, _y); diff --git a/deps/v8/test/mjsunit/runtime-gen/objectfreeze.js b/deps/v8/test/mjsunit/runtime-gen/objectfreeze.js new file mode 100644 index 00000000000..cfc066c6f1a --- /dev/null +++ b/deps/v8/test/mjsunit/runtime-gen/objectfreeze.js @@ -0,0 +1,5 @@ +// Copyright 2014 the V8 project authors. 
All rights reserved. +// AUTO-GENERATED BY tools/generate-runtime-tests.py, DO NOT MODIFY +// Flags: --allow-natives-syntax --harmony --harmony-proxies +var _object = new Object(); +%ObjectFreeze(_object); diff --git a/deps/v8/test/mjsunit/runtime-gen/objectwascreatedincurrentorigin.js b/deps/v8/test/mjsunit/runtime-gen/objectwascreatedincurrentorigin.js new file mode 100644 index 00000000000..776997009ca --- /dev/null +++ b/deps/v8/test/mjsunit/runtime-gen/objectwascreatedincurrentorigin.js @@ -0,0 +1,5 @@ +// Copyright 2014 the V8 project authors. All rights reserved. +// AUTO-GENERATED BY tools/generate-runtime-tests.py, DO NOT MODIFY +// Flags: --allow-natives-syntax --harmony --harmony-proxies +var _object = new Object(); +%ObjectWasCreatedInCurrentOrigin(_object); diff --git a/deps/v8/test/mjsunit/runtime-gen/observationweakmapcreate.js b/deps/v8/test/mjsunit/runtime-gen/observationweakmapcreate.js new file mode 100644 index 00000000000..6c71eace415 --- /dev/null +++ b/deps/v8/test/mjsunit/runtime-gen/observationweakmapcreate.js @@ -0,0 +1,4 @@ +// Copyright 2014 the V8 project authors. All rights reserved. +// AUTO-GENERATED BY tools/generate-runtime-tests.py, DO NOT MODIFY +// Flags: --allow-natives-syntax --harmony --harmony-proxies +%ObservationWeakMapCreate(); diff --git a/deps/v8/test/mjsunit/runtime-gen/observerobjectandrecordhavesameorigin.js b/deps/v8/test/mjsunit/runtime-gen/observerobjectandrecordhavesameorigin.js new file mode 100644 index 00000000000..6c251ecd957 --- /dev/null +++ b/deps/v8/test/mjsunit/runtime-gen/observerobjectandrecordhavesameorigin.js @@ -0,0 +1,7 @@ +// Copyright 2014 the V8 project authors. All rights reserved. +// AUTO-GENERATED BY tools/generate-runtime-tests.py, DO NOT MODIFY +// Flags: --allow-natives-syntax --harmony --harmony-proxies +var _observer = function() {}; +var _object = new Object(); +var _record = new Object(); +%ObserverObjectAndRecordHaveSameOrigin(_observer, _object, _record); diff --git a/deps/v8/test/mjsunit/runtime-gen/optimizeobjectforaddingmultipleproperties.js b/deps/v8/test/mjsunit/runtime-gen/optimizeobjectforaddingmultipleproperties.js new file mode 100644 index 00000000000..7016e1c0624 --- /dev/null +++ b/deps/v8/test/mjsunit/runtime-gen/optimizeobjectforaddingmultipleproperties.js @@ -0,0 +1,6 @@ +// Copyright 2014 the V8 project authors. All rights reserved. +// AUTO-GENERATED BY tools/generate-runtime-tests.py, DO NOT MODIFY +// Flags: --allow-natives-syntax --harmony --harmony-proxies +var _object = new Object(); +var _properties = 1; +%OptimizeObjectForAddingMultipleProperties(_object, _properties); diff --git a/deps/v8/test/mjsunit/runtime-gen/ownkeys.js b/deps/v8/test/mjsunit/runtime-gen/ownkeys.js new file mode 100644 index 00000000000..0a392422cc7 --- /dev/null +++ b/deps/v8/test/mjsunit/runtime-gen/ownkeys.js @@ -0,0 +1,5 @@ +// Copyright 2014 the V8 project authors. All rights reserved. +// AUTO-GENERATED BY tools/generate-runtime-tests.py, DO NOT MODIFY +// Flags: --allow-natives-syntax --harmony --harmony-proxies +var _raw_object = new Object(); +%OwnKeys(_raw_object); diff --git a/deps/v8/test/mjsunit/runtime-gen/parsejson.js b/deps/v8/test/mjsunit/runtime-gen/parsejson.js new file mode 100644 index 00000000000..0a038790ea4 --- /dev/null +++ b/deps/v8/test/mjsunit/runtime-gen/parsejson.js @@ -0,0 +1,5 @@ +// Copyright 2014 the V8 project authors. All rights reserved. 
All rights reserved.
+// AUTO-GENERATED BY tools/generate-runtime-tests.py, DO NOT MODIFY +// Flags: --allow-natives-syntax --harmony --harmony-proxies +var arg0 = "{}"; +%ParseJson(arg0); diff --git a/deps/v8/test/mjsunit/runtime-gen/preventextensions.js b/deps/v8/test/mjsunit/runtime-gen/preventextensions.js new file mode 100644 index 00000000000..8e24b75e0c2 --- /dev/null +++ b/deps/v8/test/mjsunit/runtime-gen/preventextensions.js @@ -0,0 +1,5 @@ +// Copyright 2014 the V8 project authors. All rights reserved. +// AUTO-GENERATED BY tools/generate-runtime-tests.py, DO NOT MODIFY +// Flags: --allow-natives-syntax --harmony --harmony-proxies +var _obj = new Object(); +%PreventExtensions(_obj); diff --git a/deps/v8/test/mjsunit/runtime-gen/pushifabsent.js b/deps/v8/test/mjsunit/runtime-gen/pushifabsent.js new file mode 100644 index 00000000000..c998121f533 --- /dev/null +++ b/deps/v8/test/mjsunit/runtime-gen/pushifabsent.js @@ -0,0 +1,6 @@ +// Copyright 2014 the V8 project authors. All rights reserved. +// AUTO-GENERATED BY tools/generate-runtime-tests.py, DO NOT MODIFY +// Flags: --allow-natives-syntax --harmony --harmony-proxies +var _array = new Array(); +var _element = new Object(); +%PushIfAbsent(_array, _element); diff --git a/deps/v8/test/mjsunit/runtime-gen/quotejsonstring.js b/deps/v8/test/mjsunit/runtime-gen/quotejsonstring.js new file mode 100644 index 00000000000..61ade342638 --- /dev/null +++ b/deps/v8/test/mjsunit/runtime-gen/quotejsonstring.js @@ -0,0 +1,5 @@ +// Copyright 2014 the V8 project authors. All rights reserved. +// AUTO-GENERATED BY tools/generate-runtime-tests.py, DO NOT MODIFY +// Flags: --allow-natives-syntax --harmony --harmony-proxies +var _string = "foo"; +%QuoteJSONString(_string); diff --git a/deps/v8/test/mjsunit/runtime-gen/regexpcompile.js b/deps/v8/test/mjsunit/runtime-gen/regexpcompile.js new file mode 100644 index 00000000000..c0edfa6fcf9 --- /dev/null +++ b/deps/v8/test/mjsunit/runtime-gen/regexpcompile.js @@ -0,0 +1,7 @@ +// Copyright 2014 the V8 project authors. All rights reserved. +// AUTO-GENERATED BY tools/generate-runtime-tests.py, DO NOT MODIFY +// Flags: --allow-natives-syntax --harmony --harmony-proxies +var _re = /ab/g; +var _pattern = "foo"; +var _flags = "foo"; +%RegExpCompile(_re, _pattern, _flags); diff --git a/deps/v8/test/mjsunit/runtime-gen/regexpconstructresult.js b/deps/v8/test/mjsunit/runtime-gen/regexpconstructresult.js new file mode 100644 index 00000000000..50d2e0d8fe9 --- /dev/null +++ b/deps/v8/test/mjsunit/runtime-gen/regexpconstructresult.js @@ -0,0 +1,7 @@ +// Copyright 2014 the V8 project authors. All rights reserved. +// AUTO-GENERATED BY tools/generate-runtime-tests.py, DO NOT MODIFY +// Flags: --allow-natives-syntax --harmony --harmony-proxies +var _size = 1; +var _index = new Object(); +var _input = new Object(); +%_RegExpConstructResult(_size, _index, _input); diff --git a/deps/v8/test/mjsunit/runtime-gen/regexpexecmultiple.js b/deps/v8/test/mjsunit/runtime-gen/regexpexecmultiple.js new file mode 100644 index 00000000000..9db6e6d2b3c --- /dev/null +++ b/deps/v8/test/mjsunit/runtime-gen/regexpexecmultiple.js @@ -0,0 +1,8 @@ +// Copyright 2014 the V8 project authors. All rights reserved. 
All rights reserved.
+// AUTO-GENERATED BY tools/generate-runtime-tests.py, DO NOT MODIFY +// Flags: --allow-natives-syntax --harmony --harmony-proxies +var _regexp = /ab/g; +var _subject = "foo"; +var arg2 = ['a']; +var arg3 = ['a']; +%RegExpExecMultiple(_regexp, _subject, arg2, arg3); diff --git a/deps/v8/test/mjsunit/runtime-gen/regexpexecrt.js b/deps/v8/test/mjsunit/runtime-gen/regexpexecrt.js new file mode 100644 index 00000000000..3b20191f2bf --- /dev/null +++ b/deps/v8/test/mjsunit/runtime-gen/regexpexecrt.js @@ -0,0 +1,8 @@ +// Copyright 2014 the V8 project authors. All rights reserved. +// AUTO-GENERATED BY tools/generate-runtime-tests.py, DO NOT MODIFY +// Flags: --allow-natives-syntax --harmony --harmony-proxies +var _regexp = /ab/g; +var _subject = "foo"; +var _index = 1; +var _last_match_info = new Array(); +%RegExpExecRT(_regexp, _subject, _index, _last_match_info); diff --git a/deps/v8/test/mjsunit/runtime-gen/regexpinitializeobject.js b/deps/v8/test/mjsunit/runtime-gen/regexpinitializeobject.js new file mode 100644 index 00000000000..fccdeeed788 --- /dev/null +++ b/deps/v8/test/mjsunit/runtime-gen/regexpinitializeobject.js @@ -0,0 +1,9 @@ +// Copyright 2014 the V8 project authors. All rights reserved. +// AUTO-GENERATED BY tools/generate-runtime-tests.py, DO NOT MODIFY +// Flags: --allow-natives-syntax --harmony --harmony-proxies +var _regexp = /ab/g; +var _source = "foo"; +var _global = new Object(); +var _ignoreCase = new Object(); +var _multiline = new Object(); +%RegExpInitializeObject(_regexp, _source, _global, _ignoreCase, _multiline); diff --git a/deps/v8/test/mjsunit/runtime-gen/removearrayholes.js b/deps/v8/test/mjsunit/runtime-gen/removearrayholes.js new file mode 100644 index 00000000000..971e63cab54 --- /dev/null +++ b/deps/v8/test/mjsunit/runtime-gen/removearrayholes.js @@ -0,0 +1,6 @@ +// Copyright 2014 the V8 project authors. All rights reserved. +// AUTO-GENERATED BY tools/generate-runtime-tests.py, DO NOT MODIFY +// Flags: --allow-natives-syntax --harmony --harmony-proxies +var _object = new Object(); +var _limit = 32; +%RemoveArrayHoles(_object, _limit); diff --git a/deps/v8/test/mjsunit/runtime-gen/rempio2.js b/deps/v8/test/mjsunit/runtime-gen/rempio2.js new file mode 100644 index 00000000000..6d47bac4ac5 --- /dev/null +++ b/deps/v8/test/mjsunit/runtime-gen/rempio2.js @@ -0,0 +1,5 @@ +// Copyright 2014 the V8 project authors. All rights reserved. +// AUTO-GENERATED BY tools/generate-runtime-tests.py, DO NOT MODIFY +// Flags: --allow-natives-syntax --harmony --harmony-proxies +var _x = 1.5; +%RemPiO2(_x); diff --git a/deps/v8/test/mjsunit/runtime-gen/roundnumber.js b/deps/v8/test/mjsunit/runtime-gen/roundnumber.js new file mode 100644 index 00000000000..2ec1159b2bf --- /dev/null +++ b/deps/v8/test/mjsunit/runtime-gen/roundnumber.js @@ -0,0 +1,5 @@ +// Copyright 2014 the V8 project authors. All rights reserved. +// AUTO-GENERATED BY tools/generate-runtime-tests.py, DO NOT MODIFY +// Flags: --allow-natives-syntax --harmony --harmony-proxies +var _input = 1.5; +%RoundNumber(_input); diff --git a/deps/v8/test/mjsunit/runtime-gen/runmicrotasks.js b/deps/v8/test/mjsunit/runtime-gen/runmicrotasks.js new file mode 100644 index 00000000000..945260a8df5 --- /dev/null +++ b/deps/v8/test/mjsunit/runtime-gen/runmicrotasks.js @@ -0,0 +1,4 @@ +// Copyright 2014 the V8 project authors. All rights reserved. 
+// AUTO-GENERATED BY tools/generate-runtime-tests.py, DO NOT MODIFY +// Flags: --allow-natives-syntax --harmony --harmony-proxies +%RunMicrotasks(); diff --git a/deps/v8/test/mjsunit/runtime-gen/runninginsimulator.js b/deps/v8/test/mjsunit/runtime-gen/runninginsimulator.js new file mode 100644 index 00000000000..fe5678259dc --- /dev/null +++ b/deps/v8/test/mjsunit/runtime-gen/runninginsimulator.js @@ -0,0 +1,4 @@ +// Copyright 2014 the V8 project authors. All rights reserved. +// AUTO-GENERATED BY tools/generate-runtime-tests.py, DO NOT MODIFY +// Flags: --allow-natives-syntax --harmony --harmony-proxies +%RunningInSimulator(); diff --git a/deps/v8/test/mjsunit/runtime-gen/setadd.js b/deps/v8/test/mjsunit/runtime-gen/setadd.js new file mode 100644 index 00000000000..75b923fbf33 --- /dev/null +++ b/deps/v8/test/mjsunit/runtime-gen/setadd.js @@ -0,0 +1,6 @@ +// Copyright 2014 the V8 project authors. All rights reserved. +// AUTO-GENERATED BY tools/generate-runtime-tests.py, DO NOT MODIFY +// Flags: --allow-natives-syntax --harmony --harmony-proxies +var _holder = new Set(); +var _key = new Object(); +%SetAdd(_holder, _key); diff --git a/deps/v8/test/mjsunit/runtime-gen/setclear.js b/deps/v8/test/mjsunit/runtime-gen/setclear.js new file mode 100644 index 00000000000..82ef6d955bd --- /dev/null +++ b/deps/v8/test/mjsunit/runtime-gen/setclear.js @@ -0,0 +1,5 @@ +// Copyright 2014 the V8 project authors. All rights reserved. +// AUTO-GENERATED BY tools/generate-runtime-tests.py, DO NOT MODIFY +// Flags: --allow-natives-syntax --harmony --harmony-proxies +var _holder = new Set(); +%SetClear(_holder); diff --git a/deps/v8/test/mjsunit/runtime-gen/setcode.js b/deps/v8/test/mjsunit/runtime-gen/setcode.js new file mode 100644 index 00000000000..4e2206fbc86 --- /dev/null +++ b/deps/v8/test/mjsunit/runtime-gen/setcode.js @@ -0,0 +1,6 @@ +// Copyright 2014 the V8 project authors. All rights reserved. +// AUTO-GENERATED BY tools/generate-runtime-tests.py, DO NOT MODIFY +// Flags: --allow-natives-syntax --harmony --harmony-proxies +var _target = function() {}; +var _source = function() {}; +%SetCode(_target, _source); diff --git a/deps/v8/test/mjsunit/runtime-gen/setdebugeventlistener.js b/deps/v8/test/mjsunit/runtime-gen/setdebugeventlistener.js new file mode 100644 index 00000000000..d51b277b80a --- /dev/null +++ b/deps/v8/test/mjsunit/runtime-gen/setdebugeventlistener.js @@ -0,0 +1,6 @@ +// Copyright 2014 the V8 project authors. All rights reserved. +// AUTO-GENERATED BY tools/generate-runtime-tests.py, DO NOT MODIFY +// Flags: --allow-natives-syntax --harmony --harmony-proxies +var arg0 = undefined; +var _data = new Object(); +%SetDebugEventListener(arg0, _data); diff --git a/deps/v8/test/mjsunit/runtime-gen/setdelete.js b/deps/v8/test/mjsunit/runtime-gen/setdelete.js new file mode 100644 index 00000000000..80bd343d0e5 --- /dev/null +++ b/deps/v8/test/mjsunit/runtime-gen/setdelete.js @@ -0,0 +1,6 @@ +// Copyright 2014 the V8 project authors. All rights reserved. +// AUTO-GENERATED BY tools/generate-runtime-tests.py, DO NOT MODIFY +// Flags: --allow-natives-syntax --harmony --harmony-proxies +var _holder = new Set(); +var _key = new Object(); +%SetDelete(_holder, _key); diff --git a/deps/v8/test/mjsunit/runtime-gen/setdisablebreak.js b/deps/v8/test/mjsunit/runtime-gen/setdisablebreak.js new file mode 100644 index 00000000000..461942b60f0 --- /dev/null +++ b/deps/v8/test/mjsunit/runtime-gen/setdisablebreak.js @@ -0,0 +1,5 @@ +// Copyright 2014 the V8 project authors. All rights reserved. 
+// AUTO-GENERATED BY tools/generate-runtime-tests.py, DO NOT MODIFY +// Flags: --allow-natives-syntax --harmony --harmony-proxies +var _disable_break = true; +%SetDisableBreak(_disable_break); diff --git a/deps/v8/test/mjsunit/runtime-gen/setflags.js b/deps/v8/test/mjsunit/runtime-gen/setflags.js new file mode 100644 index 00000000000..70db03ee98e --- /dev/null +++ b/deps/v8/test/mjsunit/runtime-gen/setflags.js @@ -0,0 +1,5 @@ +// Copyright 2014 the V8 project authors. All rights reserved. +// AUTO-GENERATED BY tools/generate-runtime-tests.py, DO NOT MODIFY +// Flags: --allow-natives-syntax --harmony --harmony-proxies +var _arg = "foo"; +%SetFlags(_arg); diff --git a/deps/v8/test/mjsunit/runtime-gen/setfunctionbreakpoint.js b/deps/v8/test/mjsunit/runtime-gen/setfunctionbreakpoint.js new file mode 100644 index 00000000000..010330e5a49 --- /dev/null +++ b/deps/v8/test/mjsunit/runtime-gen/setfunctionbreakpoint.js @@ -0,0 +1,7 @@ +// Copyright 2014 the V8 project authors. All rights reserved. +// AUTO-GENERATED BY tools/generate-runtime-tests.py, DO NOT MODIFY +// Flags: --allow-natives-syntax --harmony --harmony-proxies +var _function = function() {}; +var arg1 = 218; +var _break_point_object_arg = new Object(); +%SetFunctionBreakPoint(_function, arg1, _break_point_object_arg); diff --git a/deps/v8/test/mjsunit/runtime-gen/setgetsize.js b/deps/v8/test/mjsunit/runtime-gen/setgetsize.js new file mode 100644 index 00000000000..842016bb2d2 --- /dev/null +++ b/deps/v8/test/mjsunit/runtime-gen/setgetsize.js @@ -0,0 +1,5 @@ +// Copyright 2014 the V8 project authors. All rights reserved. +// AUTO-GENERATED BY tools/generate-runtime-tests.py, DO NOT MODIFY +// Flags: --allow-natives-syntax --harmony --harmony-proxies +var _holder = new Set(); +%SetGetSize(_holder); diff --git a/deps/v8/test/mjsunit/runtime-gen/sethas.js b/deps/v8/test/mjsunit/runtime-gen/sethas.js new file mode 100644 index 00000000000..8cec0d8c357 --- /dev/null +++ b/deps/v8/test/mjsunit/runtime-gen/sethas.js @@ -0,0 +1,6 @@ +// Copyright 2014 the V8 project authors. All rights reserved. +// AUTO-GENERATED BY tools/generate-runtime-tests.py, DO NOT MODIFY +// Flags: --allow-natives-syntax --harmony --harmony-proxies +var _holder = new Set(); +var _key = new Object(); +%SetHas(_holder, _key); diff --git a/deps/v8/test/mjsunit/runtime-gen/setinitialize.js b/deps/v8/test/mjsunit/runtime-gen/setinitialize.js new file mode 100644 index 00000000000..b21a089692b --- /dev/null +++ b/deps/v8/test/mjsunit/runtime-gen/setinitialize.js @@ -0,0 +1,5 @@ +// Copyright 2014 the V8 project authors. All rights reserved. +// AUTO-GENERATED BY tools/generate-runtime-tests.py, DO NOT MODIFY +// Flags: --allow-natives-syntax --harmony --harmony-proxies +var _holder = new Set(); +%SetInitialize(_holder); diff --git a/deps/v8/test/mjsunit/runtime-gen/setisobserved.js b/deps/v8/test/mjsunit/runtime-gen/setisobserved.js new file mode 100644 index 00000000000..d885113ffa3 --- /dev/null +++ b/deps/v8/test/mjsunit/runtime-gen/setisobserved.js @@ -0,0 +1,5 @@ +// Copyright 2014 the V8 project authors. All rights reserved. 
+// AUTO-GENERATED BY tools/generate-runtime-tests.py, DO NOT MODIFY +// Flags: --allow-natives-syntax --harmony --harmony-proxies +var _obj = new Object(); +%SetIsObserved(_obj); diff --git a/deps/v8/test/mjsunit/runtime-gen/setiteratorinitialize.js b/deps/v8/test/mjsunit/runtime-gen/setiteratorinitialize.js new file mode 100644 index 00000000000..34769e51dc2 --- /dev/null +++ b/deps/v8/test/mjsunit/runtime-gen/setiteratorinitialize.js @@ -0,0 +1,7 @@ +// Copyright 2014 the V8 project authors. All rights reserved. +// AUTO-GENERATED BY tools/generate-runtime-tests.py, DO NOT MODIFY +// Flags: --allow-natives-syntax --harmony --harmony-proxies +var _holder = new Set().values(); +var _set = new Set(); +var arg2 = 2; +%SetIteratorInitialize(_holder, _set, arg2); diff --git a/deps/v8/test/mjsunit/runtime-gen/setiteratornext.js b/deps/v8/test/mjsunit/runtime-gen/setiteratornext.js new file mode 100644 index 00000000000..02b74d44dab --- /dev/null +++ b/deps/v8/test/mjsunit/runtime-gen/setiteratornext.js @@ -0,0 +1,6 @@ +// Copyright 2014 the V8 project authors. All rights reserved. +// AUTO-GENERATED BY tools/generate-runtime-tests.py, DO NOT MODIFY +// Flags: --allow-natives-syntax --harmony --harmony-proxies +var _holder = new Set().values(); +var _value_array = new Array(); +%SetIteratorNext(_holder, _value_array); diff --git a/deps/v8/test/mjsunit/runtime-gen/setprototype.js b/deps/v8/test/mjsunit/runtime-gen/setprototype.js new file mode 100644 index 00000000000..6353151f4e4 --- /dev/null +++ b/deps/v8/test/mjsunit/runtime-gen/setprototype.js @@ -0,0 +1,6 @@ +// Copyright 2014 the V8 project authors. All rights reserved. +// AUTO-GENERATED BY tools/generate-runtime-tests.py, DO NOT MODIFY +// Flags: --allow-natives-syntax --harmony --harmony-proxies +var _obj = new Object(); +var _prototype = new Object(); +%SetPrototype(_obj, _prototype); diff --git a/deps/v8/test/mjsunit/runtime-gen/setscopevariablevalue.js b/deps/v8/test/mjsunit/runtime-gen/setscopevariablevalue.js new file mode 100644 index 00000000000..680bab52ccc --- /dev/null +++ b/deps/v8/test/mjsunit/runtime-gen/setscopevariablevalue.js @@ -0,0 +1,10 @@ +// Copyright 2014 the V8 project authors. All rights reserved. +// AUTO-GENERATED BY tools/generate-runtime-tests.py, DO NOT MODIFY +// Flags: --allow-natives-syntax --harmony --harmony-proxies +var _fun = function() {}; +var _wrapped_id = 1; +var _inlined_jsframe_index = 32; +var _index = 32; +var _variable_name = "foo"; +var _new_value = new Object(); +%SetScopeVariableValue(_fun, _wrapped_id, _inlined_jsframe_index, _index, _variable_name, _new_value); diff --git a/deps/v8/test/mjsunit/runtime-gen/smilexicographiccompare.js b/deps/v8/test/mjsunit/runtime-gen/smilexicographiccompare.js new file mode 100644 index 00000000000..d227a9ffc11 --- /dev/null +++ b/deps/v8/test/mjsunit/runtime-gen/smilexicographiccompare.js @@ -0,0 +1,6 @@ +// Copyright 2014 the V8 project authors. All rights reserved. +// AUTO-GENERATED BY tools/generate-runtime-tests.py, DO NOT MODIFY +// Flags: --allow-natives-syntax --harmony --harmony-proxies +var _x_value = 1; +var _y_value = 1; +%SmiLexicographicCompare(_x_value, _y_value); diff --git a/deps/v8/test/mjsunit/runtime-gen/sparsejoinwithseparator.js b/deps/v8/test/mjsunit/runtime-gen/sparsejoinwithseparator.js new file mode 100644 index 00000000000..3a8e7754d45 --- /dev/null +++ b/deps/v8/test/mjsunit/runtime-gen/sparsejoinwithseparator.js @@ -0,0 +1,7 @@ +// Copyright 2014 the V8 project authors. All rights reserved. 
+// AUTO-GENERATED BY tools/generate-runtime-tests.py, DO NOT MODIFY +// Flags: --allow-natives-syntax --harmony --harmony-proxies +var _elements_array = new Array(); +var _array_length = 32; +var _separator = "foo"; +%SparseJoinWithSeparator(_elements_array, _array_length, _separator); diff --git a/deps/v8/test/mjsunit/runtime-gen/specialarrayfunctions.js b/deps/v8/test/mjsunit/runtime-gen/specialarrayfunctions.js new file mode 100644 index 00000000000..5956e8422ce --- /dev/null +++ b/deps/v8/test/mjsunit/runtime-gen/specialarrayfunctions.js @@ -0,0 +1,4 @@ +// Copyright 2014 the V8 project authors. All rights reserved. +// AUTO-GENERATED BY tools/generate-runtime-tests.py, DO NOT MODIFY +// Flags: --allow-natives-syntax --harmony --harmony-proxies +%SpecialArrayFunctions(); diff --git a/deps/v8/test/mjsunit/runtime-gen/stringbuilderconcat.js b/deps/v8/test/mjsunit/runtime-gen/stringbuilderconcat.js new file mode 100644 index 00000000000..9d7c78a3e6c --- /dev/null +++ b/deps/v8/test/mjsunit/runtime-gen/stringbuilderconcat.js @@ -0,0 +1,7 @@ +// Copyright 2014 the V8 project authors. All rights reserved. +// AUTO-GENERATED BY tools/generate-runtime-tests.py, DO NOT MODIFY +// Flags: --allow-natives-syntax --harmony --harmony-proxies +var arg0 = [1, 2, 3]; +var arg1 = 3; +var _special = "foo"; +%StringBuilderConcat(arg0, arg1, _special); diff --git a/deps/v8/test/mjsunit/runtime-gen/stringbuilderjoin.js b/deps/v8/test/mjsunit/runtime-gen/stringbuilderjoin.js new file mode 100644 index 00000000000..bf990c62d6b --- /dev/null +++ b/deps/v8/test/mjsunit/runtime-gen/stringbuilderjoin.js @@ -0,0 +1,7 @@ +// Copyright 2014 the V8 project authors. All rights reserved. +// AUTO-GENERATED BY tools/generate-runtime-tests.py, DO NOT MODIFY +// Flags: --allow-natives-syntax --harmony --harmony-proxies +var arg0 = ['a', 'b']; +var arg1 = 4; +var _separator = "foo"; +%StringBuilderJoin(arg0, arg1, _separator); diff --git a/deps/v8/test/mjsunit/runtime-gen/stringcharcodeatrt.js b/deps/v8/test/mjsunit/runtime-gen/stringcharcodeatrt.js new file mode 100644 index 00000000000..fa016ac00e0 --- /dev/null +++ b/deps/v8/test/mjsunit/runtime-gen/stringcharcodeatrt.js @@ -0,0 +1,6 @@ +// Copyright 2014 the V8 project authors. All rights reserved. +// AUTO-GENERATED BY tools/generate-runtime-tests.py, DO NOT MODIFY +// Flags: --allow-natives-syntax --harmony --harmony-proxies +var _subject = "foo"; +var _i = 32; +%StringCharCodeAtRT(_subject, _i); diff --git a/deps/v8/test/mjsunit/runtime-gen/stringequals.js b/deps/v8/test/mjsunit/runtime-gen/stringequals.js new file mode 100644 index 00000000000..14e40eb0281 --- /dev/null +++ b/deps/v8/test/mjsunit/runtime-gen/stringequals.js @@ -0,0 +1,6 @@ +// Copyright 2014 the V8 project authors. All rights reserved. +// AUTO-GENERATED BY tools/generate-runtime-tests.py, DO NOT MODIFY +// Flags: --allow-natives-syntax --harmony --harmony-proxies +var _x = "foo"; +var _y = "foo"; +%StringEquals(_x, _y); diff --git a/deps/v8/test/mjsunit/runtime-gen/stringindexof.js b/deps/v8/test/mjsunit/runtime-gen/stringindexof.js new file mode 100644 index 00000000000..3c5cab31c52 --- /dev/null +++ b/deps/v8/test/mjsunit/runtime-gen/stringindexof.js @@ -0,0 +1,7 @@ +// Copyright 2014 the V8 project authors. All rights reserved. 
+// AUTO-GENERATED BY tools/generate-runtime-tests.py, DO NOT MODIFY +// Flags: --allow-natives-syntax --harmony --harmony-proxies +var _sub = "foo"; +var _pat = "foo"; +var _index = new Object(); +%StringIndexOf(_sub, _pat, _index); diff --git a/deps/v8/test/mjsunit/runtime-gen/stringlastindexof.js b/deps/v8/test/mjsunit/runtime-gen/stringlastindexof.js new file mode 100644 index 00000000000..afbc51f5a42 --- /dev/null +++ b/deps/v8/test/mjsunit/runtime-gen/stringlastindexof.js @@ -0,0 +1,7 @@ +// Copyright 2014 the V8 project authors. All rights reserved. +// AUTO-GENERATED BY tools/generate-runtime-tests.py, DO NOT MODIFY +// Flags: --allow-natives-syntax --harmony --harmony-proxies +var _sub = "foo"; +var _pat = "foo"; +var _index = new Object(); +%StringLastIndexOf(_sub, _pat, _index); diff --git a/deps/v8/test/mjsunit/runtime-gen/stringlocalecompare.js b/deps/v8/test/mjsunit/runtime-gen/stringlocalecompare.js new file mode 100644 index 00000000000..b37e231183c --- /dev/null +++ b/deps/v8/test/mjsunit/runtime-gen/stringlocalecompare.js @@ -0,0 +1,6 @@ +// Copyright 2014 the V8 project authors. All rights reserved. +// AUTO-GENERATED BY tools/generate-runtime-tests.py, DO NOT MODIFY +// Flags: --allow-natives-syntax --harmony --harmony-proxies +var _str1 = "foo"; +var _str2 = "foo"; +%StringLocaleCompare(_str1, _str2); diff --git a/deps/v8/test/mjsunit/runtime-gen/stringmatch.js b/deps/v8/test/mjsunit/runtime-gen/stringmatch.js new file mode 100644 index 00000000000..330aeae9c04 --- /dev/null +++ b/deps/v8/test/mjsunit/runtime-gen/stringmatch.js @@ -0,0 +1,7 @@ +// Copyright 2014 the V8 project authors. All rights reserved. +// AUTO-GENERATED BY tools/generate-runtime-tests.py, DO NOT MODIFY +// Flags: --allow-natives-syntax --harmony --harmony-proxies +var _subject = "foo"; +var _regexp = /ab/g; +var arg2 = ['a', 'b']; +%StringMatch(_subject, _regexp, arg2); diff --git a/deps/v8/test/mjsunit/runtime-gen/stringnormalize.js b/deps/v8/test/mjsunit/runtime-gen/stringnormalize.js new file mode 100644 index 00000000000..fb408a41a55 --- /dev/null +++ b/deps/v8/test/mjsunit/runtime-gen/stringnormalize.js @@ -0,0 +1,6 @@ +// Copyright 2014 the V8 project authors. All rights reserved. +// AUTO-GENERATED BY tools/generate-runtime-tests.py, DO NOT MODIFY +// Flags: --allow-natives-syntax --harmony --harmony-proxies +var _stringValue = "foo"; +var arg1 = 2; +%StringNormalize(_stringValue, arg1); diff --git a/deps/v8/test/mjsunit/runtime-gen/stringparsefloat.js b/deps/v8/test/mjsunit/runtime-gen/stringparsefloat.js new file mode 100644 index 00000000000..520a24e7560 --- /dev/null +++ b/deps/v8/test/mjsunit/runtime-gen/stringparsefloat.js @@ -0,0 +1,5 @@ +// Copyright 2014 the V8 project authors. All rights reserved. +// AUTO-GENERATED BY tools/generate-runtime-tests.py, DO NOT MODIFY +// Flags: --allow-natives-syntax --harmony --harmony-proxies +var _subject = "foo"; +%StringParseFloat(_subject); diff --git a/deps/v8/test/mjsunit/runtime-gen/stringparseint.js b/deps/v8/test/mjsunit/runtime-gen/stringparseint.js new file mode 100644 index 00000000000..43116554eb5 --- /dev/null +++ b/deps/v8/test/mjsunit/runtime-gen/stringparseint.js @@ -0,0 +1,6 @@ +// Copyright 2014 the V8 project authors. All rights reserved. 
+// AUTO-GENERATED BY tools/generate-runtime-tests.py, DO NOT MODIFY +// Flags: --allow-natives-syntax --harmony --harmony-proxies +var _subject = "foo"; +var _radix = 32; +%StringParseInt(_subject, _radix); diff --git a/deps/v8/test/mjsunit/runtime-gen/stringreplaceglobalregexpwithstring.js b/deps/v8/test/mjsunit/runtime-gen/stringreplaceglobalregexpwithstring.js new file mode 100644 index 00000000000..ad2b6e67d9e --- /dev/null +++ b/deps/v8/test/mjsunit/runtime-gen/stringreplaceglobalregexpwithstring.js @@ -0,0 +1,8 @@ +// Copyright 2014 the V8 project authors. All rights reserved. +// AUTO-GENERATED BY tools/generate-runtime-tests.py, DO NOT MODIFY +// Flags: --allow-natives-syntax --harmony --harmony-proxies +var _subject = "foo"; +var _regexp = /ab/g; +var _replacement = "foo"; +var arg3 = ['a']; +%StringReplaceGlobalRegExpWithString(_subject, _regexp, _replacement, arg3); diff --git a/deps/v8/test/mjsunit/runtime-gen/stringreplaceonecharwithstring.js b/deps/v8/test/mjsunit/runtime-gen/stringreplaceonecharwithstring.js new file mode 100644 index 00000000000..5e38a79f445 --- /dev/null +++ b/deps/v8/test/mjsunit/runtime-gen/stringreplaceonecharwithstring.js @@ -0,0 +1,7 @@ +// Copyright 2014 the V8 project authors. All rights reserved. +// AUTO-GENERATED BY tools/generate-runtime-tests.py, DO NOT MODIFY +// Flags: --allow-natives-syntax --harmony --harmony-proxies +var _subject = "foo"; +var _search = "foo"; +var _replace = "foo"; +%StringReplaceOneCharWithString(_subject, _search, _replace); diff --git a/deps/v8/test/mjsunit/runtime-gen/stringsplit.js b/deps/v8/test/mjsunit/runtime-gen/stringsplit.js new file mode 100644 index 00000000000..dfe683194a8 --- /dev/null +++ b/deps/v8/test/mjsunit/runtime-gen/stringsplit.js @@ -0,0 +1,7 @@ +// Copyright 2014 the V8 project authors. All rights reserved. +// AUTO-GENERATED BY tools/generate-runtime-tests.py, DO NOT MODIFY +// Flags: --allow-natives-syntax --harmony --harmony-proxies +var _subject = "foo"; +var _pattern = "foo"; +var _limit = 32; +%StringSplit(_subject, _pattern, _limit); diff --git a/deps/v8/test/mjsunit/runtime-gen/stringtoarray.js b/deps/v8/test/mjsunit/runtime-gen/stringtoarray.js new file mode 100644 index 00000000000..6ed48a771a4 --- /dev/null +++ b/deps/v8/test/mjsunit/runtime-gen/stringtoarray.js @@ -0,0 +1,6 @@ +// Copyright 2014 the V8 project authors. All rights reserved. +// AUTO-GENERATED BY tools/generate-runtime-tests.py, DO NOT MODIFY +// Flags: --allow-natives-syntax --harmony --harmony-proxies +var _s = "foo"; +var _limit = 32; +%StringToArray(_s, _limit); diff --git a/deps/v8/test/mjsunit/runtime-gen/stringtolowercase.js b/deps/v8/test/mjsunit/runtime-gen/stringtolowercase.js new file mode 100644 index 00000000000..3a7261a0e04 --- /dev/null +++ b/deps/v8/test/mjsunit/runtime-gen/stringtolowercase.js @@ -0,0 +1,5 @@ +// Copyright 2014 the V8 project authors. All rights reserved. +// AUTO-GENERATED BY tools/generate-runtime-tests.py, DO NOT MODIFY +// Flags: --allow-natives-syntax --harmony --harmony-proxies +var _s = "foo"; +%StringToLowerCase(_s); diff --git a/deps/v8/test/mjsunit/runtime-gen/stringtonumber.js b/deps/v8/test/mjsunit/runtime-gen/stringtonumber.js new file mode 100644 index 00000000000..88e2e84a2ed --- /dev/null +++ b/deps/v8/test/mjsunit/runtime-gen/stringtonumber.js @@ -0,0 +1,5 @@ +// Copyright 2014 the V8 project authors. All rights reserved. 
+// AUTO-GENERATED BY tools/generate-runtime-tests.py, DO NOT MODIFY +// Flags: --allow-natives-syntax --harmony --harmony-proxies +var _subject = "foo"; +%StringToNumber(_subject); diff --git a/deps/v8/test/mjsunit/runtime-gen/stringtouppercase.js b/deps/v8/test/mjsunit/runtime-gen/stringtouppercase.js new file mode 100644 index 00000000000..b7d97310153 --- /dev/null +++ b/deps/v8/test/mjsunit/runtime-gen/stringtouppercase.js @@ -0,0 +1,5 @@ +// Copyright 2014 the V8 project authors. All rights reserved. +// AUTO-GENERATED BY tools/generate-runtime-tests.py, DO NOT MODIFY +// Flags: --allow-natives-syntax --harmony --harmony-proxies +var _s = "foo"; +%StringToUpperCase(_s); diff --git a/deps/v8/test/mjsunit/runtime-gen/stringtrim.js b/deps/v8/test/mjsunit/runtime-gen/stringtrim.js new file mode 100644 index 00000000000..75d197efa9d --- /dev/null +++ b/deps/v8/test/mjsunit/runtime-gen/stringtrim.js @@ -0,0 +1,7 @@ +// Copyright 2014 the V8 project authors. All rights reserved. +// AUTO-GENERATED BY tools/generate-runtime-tests.py, DO NOT MODIFY +// Flags: --allow-natives-syntax --harmony --harmony-proxies +var _string = "foo"; +var _trimLeft = true; +var _trimRight = true; +%StringTrim(_string, _trimLeft, _trimRight); diff --git a/deps/v8/test/mjsunit/runtime-gen/symboldescription.js b/deps/v8/test/mjsunit/runtime-gen/symboldescription.js new file mode 100644 index 00000000000..13360828b81 --- /dev/null +++ b/deps/v8/test/mjsunit/runtime-gen/symboldescription.js @@ -0,0 +1,5 @@ +// Copyright 2014 the V8 project authors. All rights reserved. +// AUTO-GENERATED BY tools/generate-runtime-tests.py, DO NOT MODIFY +// Flags: --allow-natives-syntax --harmony --harmony-proxies +var _symbol = Symbol("symbol"); +%SymbolDescription(_symbol); diff --git a/deps/v8/test/mjsunit/runtime-gen/symbolisprivate.js b/deps/v8/test/mjsunit/runtime-gen/symbolisprivate.js new file mode 100644 index 00000000000..8e5343e1d55 --- /dev/null +++ b/deps/v8/test/mjsunit/runtime-gen/symbolisprivate.js @@ -0,0 +1,5 @@ +// Copyright 2014 the V8 project authors. All rights reserved. +// AUTO-GENERATED BY tools/generate-runtime-tests.py, DO NOT MODIFY +// Flags: --allow-natives-syntax --harmony --harmony-proxies +var _symbol = Symbol("symbol"); +%SymbolIsPrivate(_symbol); diff --git a/deps/v8/test/mjsunit/runtime-gen/symbolregistry.js b/deps/v8/test/mjsunit/runtime-gen/symbolregistry.js new file mode 100644 index 00000000000..71964e6eae2 --- /dev/null +++ b/deps/v8/test/mjsunit/runtime-gen/symbolregistry.js @@ -0,0 +1,4 @@ +// Copyright 2014 the V8 project authors. All rights reserved. +// AUTO-GENERATED BY tools/generate-runtime-tests.py, DO NOT MODIFY +// Flags: --allow-natives-syntax --harmony --harmony-proxies +%SymbolRegistry(); diff --git a/deps/v8/test/mjsunit/runtime-gen/tobool.js b/deps/v8/test/mjsunit/runtime-gen/tobool.js new file mode 100644 index 00000000000..ca522c8a9fb --- /dev/null +++ b/deps/v8/test/mjsunit/runtime-gen/tobool.js @@ -0,0 +1,5 @@ +// Copyright 2014 the V8 project authors. All rights reserved. +// AUTO-GENERATED BY tools/generate-runtime-tests.py, DO NOT MODIFY +// Flags: --allow-natives-syntax --harmony --harmony-proxies +var _object = new Object(); +%ToBool(_object); diff --git a/deps/v8/test/mjsunit/runtime-gen/tofastproperties.js b/deps/v8/test/mjsunit/runtime-gen/tofastproperties.js new file mode 100644 index 00000000000..f9c1890b1c6 --- /dev/null +++ b/deps/v8/test/mjsunit/runtime-gen/tofastproperties.js @@ -0,0 +1,5 @@ +// Copyright 2014 the V8 project authors. All rights reserved. 
+// AUTO-GENERATED BY tools/generate-runtime-tests.py, DO NOT MODIFY +// Flags: --allow-natives-syntax --harmony --harmony-proxies +var _object = new Object(); +%ToFastProperties(_object); diff --git a/deps/v8/test/mjsunit/runtime-gen/traceenter.js b/deps/v8/test/mjsunit/runtime-gen/traceenter.js new file mode 100644 index 00000000000..768a0c24371 --- /dev/null +++ b/deps/v8/test/mjsunit/runtime-gen/traceenter.js @@ -0,0 +1,4 @@ +// Copyright 2014 the V8 project authors. All rights reserved. +// AUTO-GENERATED BY tools/generate-runtime-tests.py, DO NOT MODIFY +// Flags: --allow-natives-syntax --harmony --harmony-proxies +%TraceEnter(); diff --git a/deps/v8/test/mjsunit/runtime-gen/traceexit.js b/deps/v8/test/mjsunit/runtime-gen/traceexit.js new file mode 100644 index 00000000000..378d008c908 --- /dev/null +++ b/deps/v8/test/mjsunit/runtime-gen/traceexit.js @@ -0,0 +1,5 @@ +// Copyright 2014 the V8 project authors. All rights reserved. +// AUTO-GENERATED BY tools/generate-runtime-tests.py, DO NOT MODIFY +// Flags: --allow-natives-syntax --harmony --harmony-proxies +var _obj = new Object(); +%TraceExit(_obj); diff --git a/deps/v8/test/mjsunit/runtime-gen/truncatestring.js b/deps/v8/test/mjsunit/runtime-gen/truncatestring.js new file mode 100644 index 00000000000..64ef628e5b9 --- /dev/null +++ b/deps/v8/test/mjsunit/runtime-gen/truncatestring.js @@ -0,0 +1,6 @@ +// Copyright 2014 the V8 project authors. All rights reserved. +// AUTO-GENERATED BY tools/generate-runtime-tests.py, DO NOT MODIFY +// Flags: --allow-natives-syntax --harmony --harmony-proxies +var _string = "seqstring"; +var _new_length = 1; +%TruncateString(_string, _new_length); diff --git a/deps/v8/test/mjsunit/runtime-gen/trymigrateinstance.js b/deps/v8/test/mjsunit/runtime-gen/trymigrateinstance.js new file mode 100644 index 00000000000..b82eb741bb8 --- /dev/null +++ b/deps/v8/test/mjsunit/runtime-gen/trymigrateinstance.js @@ -0,0 +1,5 @@ +// Copyright 2014 the V8 project authors. All rights reserved. +// AUTO-GENERATED BY tools/generate-runtime-tests.py, DO NOT MODIFY +// Flags: --allow-natives-syntax --harmony --harmony-proxies +var _object = new Object(); +%TryMigrateInstance(_object); diff --git a/deps/v8/test/mjsunit/runtime-gen/typedarraygetbuffer.js b/deps/v8/test/mjsunit/runtime-gen/typedarraygetbuffer.js new file mode 100644 index 00000000000..56a805b3b2a --- /dev/null +++ b/deps/v8/test/mjsunit/runtime-gen/typedarraygetbuffer.js @@ -0,0 +1,5 @@ +// Copyright 2014 the V8 project authors. All rights reserved. +// AUTO-GENERATED BY tools/generate-runtime-tests.py, DO NOT MODIFY +// Flags: --allow-natives-syntax --harmony --harmony-proxies +var _holder = new Int32Array(2); +%TypedArrayGetBuffer(_holder); diff --git a/deps/v8/test/mjsunit/runtime-gen/typedarraygetlength.js b/deps/v8/test/mjsunit/runtime-gen/typedarraygetlength.js new file mode 100644 index 00000000000..8d1865f40fb --- /dev/null +++ b/deps/v8/test/mjsunit/runtime-gen/typedarraygetlength.js @@ -0,0 +1,5 @@ +// Copyright 2014 the V8 project authors. All rights reserved. 
+// AUTO-GENERATED BY tools/generate-runtime-tests.py, DO NOT MODIFY +// Flags: --allow-natives-syntax --harmony --harmony-proxies +var _holder = new Int32Array(2); +%TypedArrayGetLength(_holder); diff --git a/deps/v8/test/mjsunit/runtime-gen/typedarrayinitialize.js b/deps/v8/test/mjsunit/runtime-gen/typedarrayinitialize.js new file mode 100644 index 00000000000..be1e29607e9 --- /dev/null +++ b/deps/v8/test/mjsunit/runtime-gen/typedarrayinitialize.js @@ -0,0 +1,9 @@ +// Copyright 2014 the V8 project authors. All rights reserved. +// AUTO-GENERATED BY tools/generate-runtime-tests.py, DO NOT MODIFY +// Flags: --allow-natives-syntax --harmony --harmony-proxies +var _holder = new Int32Array(2); +var arg1 = 6; +var arg2 = new ArrayBuffer(8); +var _byte_offset_object = 1.5; +var arg4 = 4; +%TypedArrayInitialize(_holder, arg1, arg2, _byte_offset_object, arg4); diff --git a/deps/v8/test/mjsunit/runtime-gen/typedarrayinitializefromarraylike.js b/deps/v8/test/mjsunit/runtime-gen/typedarrayinitializefromarraylike.js new file mode 100644 index 00000000000..0ca7a0f7ced --- /dev/null +++ b/deps/v8/test/mjsunit/runtime-gen/typedarrayinitializefromarraylike.js @@ -0,0 +1,8 @@ +// Copyright 2014 the V8 project authors. All rights reserved. +// AUTO-GENERATED BY tools/generate-runtime-tests.py, DO NOT MODIFY +// Flags: --allow-natives-syntax --harmony --harmony-proxies +var _holder = new Int32Array(2); +var arg1 = 6; +var _source = new Object(); +var _length_obj = 1.5; +%TypedArrayInitializeFromArrayLike(_holder, arg1, _source, _length_obj); diff --git a/deps/v8/test/mjsunit/runtime-gen/typedarraymaxsizeinheap.js b/deps/v8/test/mjsunit/runtime-gen/typedarraymaxsizeinheap.js new file mode 100644 index 00000000000..61467bd9fa2 --- /dev/null +++ b/deps/v8/test/mjsunit/runtime-gen/typedarraymaxsizeinheap.js @@ -0,0 +1,4 @@ +// Copyright 2014 the V8 project authors. All rights reserved. +// AUTO-GENERATED BY tools/generate-runtime-tests.py, DO NOT MODIFY +// Flags: --allow-natives-syntax --harmony --harmony-proxies +%TypedArrayMaxSizeInHeap(); diff --git a/deps/v8/test/mjsunit/runtime-gen/typedarraysetfastcases.js b/deps/v8/test/mjsunit/runtime-gen/typedarraysetfastcases.js new file mode 100644 index 00000000000..495212952b2 --- /dev/null +++ b/deps/v8/test/mjsunit/runtime-gen/typedarraysetfastcases.js @@ -0,0 +1,7 @@ +// Copyright 2014 the V8 project authors. All rights reserved. +// AUTO-GENERATED BY tools/generate-runtime-tests.py, DO NOT MODIFY +// Flags: --allow-natives-syntax --harmony --harmony-proxies +var _target_obj = new Int32Array(2); +var _source_obj = new Int32Array(2); +var arg2 = 0; +%TypedArraySetFastCases(_target_obj, _source_obj, arg2); diff --git a/deps/v8/test/mjsunit/runtime-gen/typeof.js b/deps/v8/test/mjsunit/runtime-gen/typeof.js new file mode 100644 index 00000000000..78bfa6ea2cc --- /dev/null +++ b/deps/v8/test/mjsunit/runtime-gen/typeof.js @@ -0,0 +1,5 @@ +// Copyright 2014 the V8 project authors. All rights reserved. +// AUTO-GENERATED BY tools/generate-runtime-tests.py, DO NOT MODIFY +// Flags: --allow-natives-syntax --harmony --harmony-proxies +var _obj = new Object(); +%Typeof(_obj); diff --git a/deps/v8/test/mjsunit/runtime-gen/unblockconcurrentrecompilation.js b/deps/v8/test/mjsunit/runtime-gen/unblockconcurrentrecompilation.js new file mode 100644 index 00000000000..a08add7b285 --- /dev/null +++ b/deps/v8/test/mjsunit/runtime-gen/unblockconcurrentrecompilation.js @@ -0,0 +1,6 @@ +// Copyright 2014 the V8 project authors. All rights reserved. 
+// AUTO-GENERATED BY tools/generate-runtime-tests.py, DO NOT MODIFY +// Flags: --allow-natives-syntax --harmony --harmony-proxies +try { +%UnblockConcurrentRecompilation(); +} catch(e) {} diff --git a/deps/v8/test/mjsunit/runtime-gen/uriescape.js b/deps/v8/test/mjsunit/runtime-gen/uriescape.js new file mode 100644 index 00000000000..f32edc98e6e --- /dev/null +++ b/deps/v8/test/mjsunit/runtime-gen/uriescape.js @@ -0,0 +1,5 @@ +// Copyright 2014 the V8 project authors. All rights reserved. +// AUTO-GENERATED BY tools/generate-runtime-tests.py, DO NOT MODIFY +// Flags: --allow-natives-syntax --harmony --harmony-proxies +var _source = "foo"; +%URIEscape(_source); diff --git a/deps/v8/test/mjsunit/runtime-gen/uriunescape.js b/deps/v8/test/mjsunit/runtime-gen/uriunescape.js new file mode 100644 index 00000000000..2ba812c5886 --- /dev/null +++ b/deps/v8/test/mjsunit/runtime-gen/uriunescape.js @@ -0,0 +1,5 @@ +// Copyright 2014 the V8 project authors. All rights reserved. +// AUTO-GENERATED BY tools/generate-runtime-tests.py, DO NOT MODIFY +// Flags: --allow-natives-syntax --harmony --harmony-proxies +var _source = "foo"; +%URIUnescape(_source); diff --git a/deps/v8/test/mjsunit/runtime-gen/weakcollectiondelete.js b/deps/v8/test/mjsunit/runtime-gen/weakcollectiondelete.js new file mode 100644 index 00000000000..a6fff79e195 --- /dev/null +++ b/deps/v8/test/mjsunit/runtime-gen/weakcollectiondelete.js @@ -0,0 +1,6 @@ +// Copyright 2014 the V8 project authors. All rights reserved. +// AUTO-GENERATED BY tools/generate-runtime-tests.py, DO NOT MODIFY +// Flags: --allow-natives-syntax --harmony --harmony-proxies +var _weak_collection = new WeakMap(); +var _key = new Object(); +%WeakCollectionDelete(_weak_collection, _key); diff --git a/deps/v8/test/mjsunit/runtime-gen/weakcollectionget.js b/deps/v8/test/mjsunit/runtime-gen/weakcollectionget.js new file mode 100644 index 00000000000..f248ac05a5c --- /dev/null +++ b/deps/v8/test/mjsunit/runtime-gen/weakcollectionget.js @@ -0,0 +1,6 @@ +// Copyright 2014 the V8 project authors. All rights reserved. +// AUTO-GENERATED BY tools/generate-runtime-tests.py, DO NOT MODIFY +// Flags: --allow-natives-syntax --harmony --harmony-proxies +var _weak_collection = new WeakMap(); +var _key = new Object(); +%WeakCollectionGet(_weak_collection, _key); diff --git a/deps/v8/test/mjsunit/runtime-gen/weakcollectionhas.js b/deps/v8/test/mjsunit/runtime-gen/weakcollectionhas.js new file mode 100644 index 00000000000..af600c3e8d7 --- /dev/null +++ b/deps/v8/test/mjsunit/runtime-gen/weakcollectionhas.js @@ -0,0 +1,6 @@ +// Copyright 2014 the V8 project authors. All rights reserved. +// AUTO-GENERATED BY tools/generate-runtime-tests.py, DO NOT MODIFY +// Flags: --allow-natives-syntax --harmony --harmony-proxies +var _weak_collection = new WeakMap(); +var _key = new Object(); +%WeakCollectionHas(_weak_collection, _key); diff --git a/deps/v8/test/mjsunit/runtime-gen/weakcollectioninitialize.js b/deps/v8/test/mjsunit/runtime-gen/weakcollectioninitialize.js new file mode 100644 index 00000000000..97f5ce56a1c --- /dev/null +++ b/deps/v8/test/mjsunit/runtime-gen/weakcollectioninitialize.js @@ -0,0 +1,5 @@ +// Copyright 2014 the V8 project authors. All rights reserved. 
+// AUTO-GENERATED BY tools/generate-runtime-tests.py, DO NOT MODIFY +// Flags: --allow-natives-syntax --harmony --harmony-proxies +var _weak_collection = new WeakMap(); +%WeakCollectionInitialize(_weak_collection); diff --git a/deps/v8/test/mjsunit/runtime-gen/weakcollectionset.js b/deps/v8/test/mjsunit/runtime-gen/weakcollectionset.js new file mode 100644 index 00000000000..3479ba60316 --- /dev/null +++ b/deps/v8/test/mjsunit/runtime-gen/weakcollectionset.js @@ -0,0 +1,7 @@ +// Copyright 2014 the V8 project authors. All rights reserved. +// AUTO-GENERATED BY tools/generate-runtime-tests.py, DO NOT MODIFY +// Flags: --allow-natives-syntax --harmony --harmony-proxies +var _weak_collection = new WeakMap(); +var _key = new Object(); +var _value = new Object(); +%WeakCollectionSet(_weak_collection, _key, _value); diff --git a/deps/v8/test/mjsunit/sin-cos.js b/deps/v8/test/mjsunit/sin-cos.js index 02ae57ba277..71fae2056d3 100644 --- a/deps/v8/test/mjsunit/sin-cos.js +++ b/deps/v8/test/mjsunit/sin-cos.js @@ -157,8 +157,8 @@ assertEquals(0, Math.sin("0x00000")); assertEquals(1, Math.cos("0x00000")); assertTrue(isNaN(Math.sin(Infinity))); assertTrue(isNaN(Math.cos("-Infinity"))); -assertEquals("Infinity", String(Math.tan(Math.PI/2))); -assertEquals("-Infinity", String(Math.tan(-Math.PI/2))); +assertTrue(Math.tan(Math.PI/2) > 1e16); +assertTrue(Math.tan(-Math.PI/2) < -1e16); assertEquals("-Infinity", String(1/Math.sin("-0"))); // Assert that the remainder after division by pi is reasonably precise. @@ -185,3 +185,96 @@ for (var i = -1024; i < 1024; i++) { assertFalse(isNaN(Math.cos(1.57079632679489700))); assertFalse(isNaN(Math.cos(-1e-100))); assertFalse(isNaN(Math.cos(-1e-323))); + +// Tests for specific values expected from the fdlibm implementation. + +var two_32 = Math.pow(2, -32); +var two_28 = Math.pow(2, -28); + +// Tests for Math.sin for |x| < pi/4 +assertEquals(Infinity, 1/Math.sin(+0.0)); +assertEquals(-Infinity, 1/Math.sin(-0.0)); +// sin(x) = x for x < 2^-27 +assertEquals(two_32, Math.sin(two_32)); +assertEquals(-two_32, Math.sin(-two_32)); +// sin(pi/8) = sqrt(sqrt(2)-1)/2^(3/4) +assertEquals(0.3826834323650898, Math.sin(Math.PI/8)); +assertEquals(-0.3826834323650898, -Math.sin(Math.PI/8)); + +// Tests for Math.cos for |x| < pi/4 +// cos(x) = 1 for |x| < 2^-27 +assertEquals(1, Math.cos(two_32)); +assertEquals(1, Math.cos(-two_32)); +// Test KERNELCOS for |x| < 0.3. +// cos(pi/20) = sqrt(sqrt(2)*sqrt(sqrt(5)+5)+4)/2^(3/2) +assertEquals(0.9876883405951378, Math.cos(Math.PI/20)); +// Test KERNELCOS for x ~= 0.78125 +assertEquals(0.7100335477927638, Math.cos(0.7812504768371582)); +assertEquals(0.7100338835660797, Math.cos(0.78125)); +// Test KERNELCOS for |x| > 0.3. +// cos(pi/8) = sqrt(sqrt(2)+1)/2^(3/4) +assertEquals(0.9238795325112867, Math.cos(Math.PI/8)); +// Test KERNELTAN for |x| < 0.67434. +assertEquals(0.9238795325112867, Math.cos(-Math.PI/8)); + +// Tests for Math.tan for |x| < pi/4 +assertEquals(Infinity, 1/Math.tan(0.0)); +assertEquals(-Infinity, 1/Math.tan(-0.0)); +// tan(x) = x for |x| < 2^-28 +assertEquals(two_32, Math.tan(two_32)); +assertEquals(-two_32, Math.tan(-two_32)); +// Test KERNELTAN for |x| > 0.67434. +assertEquals(0.8211418015898941, Math.tan(11/16)); +assertEquals(-0.8211418015898941, Math.tan(-11/16)); +assertEquals(0.41421356237309503, Math.tan(Math.PI / 8)); + +// Tests for Math.sin. 
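+// Note: the expected values below pin down the exact doubles produced by
+// the fdlibm-derived implementation referenced above; mjsunit's
+// assertEquals compares numbers exactly, so any last-bit deviation fails.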
+assertEquals(0.479425538604203, Math.sin(0.5)); +assertEquals(-0.479425538604203, Math.sin(-0.5)); +assertEquals(1, Math.sin(Math.PI/2)); +assertEquals(-1, Math.sin(-Math.PI/2)); +// Test that Math.sin(Math.PI) != 0 since Math.PI is not exact. +assertEquals(1.2246467991473532e-16, Math.sin(Math.PI)); +assertEquals(-7.047032979958965e-14, Math.sin(2200*Math.PI)); +// Test Math.sin for various phases. +assertEquals(-0.7071067811865477, Math.sin(7/4 * Math.PI)); +assertEquals(0.7071067811865474, Math.sin(9/4 * Math.PI)); +assertEquals(0.7071067811865483, Math.sin(11/4 * Math.PI)); +assertEquals(-0.7071067811865479, Math.sin(13/4 * Math.PI)); +assertEquals(-3.2103381051568376e-11, Math.sin(1048576/4 * Math.PI)); + +// Tests for Math.cos. +assertEquals(1, Math.cos(two_28)); +// Cover different code paths in KERNELCOS. +assertEquals(0.9689124217106447, Math.cos(0.25)); +assertEquals(0.8775825618903728, Math.cos(0.5)); +assertEquals(0.7073882691671998, Math.cos(0.785)); +// Test that Math.cos(Math.PI/2) != 0 since Math.PI is not exact. +assertEquals(6.123233995736766e-17, Math.cos(Math.PI/2)); +// Test Math.cos for various phases. +assertEquals(0.7071067811865474, Math.cos(7/4 * Math.PI)); +assertEquals(0.7071067811865477, Math.cos(9/4 * Math.PI)); +assertEquals(-0.7071067811865467, Math.cos(11/4 * Math.PI)); +assertEquals(-0.7071067811865471, Math.cos(13/4 * Math.PI)); +assertEquals(0.9367521275331447, Math.cos(1000000)); +assertEquals(-3.435757038074824e-12, Math.cos(1048575/2 * Math.PI)); + +// Tests for Math.tan. +assertEquals(two_28, Math.tan(two_28)); +// Test that Math.tan(Math.PI/2) != Infinity since Math.PI is not exact. +assertEquals(1.633123935319537e16, Math.tan(Math.PI/2)); +// Cover different code paths in KERNELTAN (tangent and cotangent). +assertEquals(0.5463024898437905, Math.tan(0.5)); +assertEquals(2.0000000000000027, Math.tan(1.107148717794091)); +assertEquals(-1.0000000000000004, Math.tan(7/4*Math.PI)); +assertEquals(0.9999999999999994, Math.tan(9/4*Math.PI)); +assertEquals(-6.420676210313675e-11, Math.tan(1048576/2*Math.PI)); +assertEquals(2.910566692924059e11, Math.tan(1048575/2*Math.PI)); + +// Test Payne-Hanek reduction. +assertEquals(0.377820109360752e0, Math.sin(Math.pow(2, 120))); +assertEquals(-0.9258790228548379e0, Math.cos(Math.pow(2, 120))); +assertEquals(-0.40806638884180424e0, Math.tan(Math.pow(2, 120))); +assertEquals(-0.377820109360752e0, Math.sin(-Math.pow(2, 120))); +assertEquals(-0.9258790228548379e0, Math.cos(-Math.pow(2, 120))); +assertEquals(0.40806638884180424e0, Math.tan(-Math.pow(2, 120))); diff --git a/deps/v8/test/mjsunit/stack-traces-overflow.js b/deps/v8/test/mjsunit/stack-traces-overflow.js index 7722e93bd26..e20c6091d77 100644 --- a/deps/v8/test/mjsunit/stack-traces-overflow.js +++ b/deps/v8/test/mjsunit/stack-traces-overflow.js @@ -25,6 +25,8 @@ // (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE // OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. +// Flags: --stack-size=100 + function rec1(a) { rec1(a+1); } function rec2(a) { rec3(a+1); } function rec3(a) { rec2(a+1); } @@ -61,8 +63,8 @@ try { function testErrorPrototype(prototype) { var object = {}; object.__proto__ = prototype; - object.stack = "123"; - assertEquals("123", object.stack); + object.stack = "123"; // Overwriting stack property fails. + assertEquals(prototype.stack, object.stack); assertTrue("123" != prototype.stack); } @@ -106,11 +108,28 @@ try { assertEquals(1, e.stack.split('\n').length); } +// A limit outside the range of integers.
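+// (1e12 and Infinity do not fit in a Smi; the expectation, per the
+// assertions below, is that the trace is still collected in full rather
+// than truncated to zero frames, so with --stack-size=100 the deep
+// recursion should yield well over 100 frames.)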
+Error.stackTraceLimit = 1e12; +try { + rec1(0); +} catch (e) { + assertTrue(e.stack.split('\n').length > 100); +} + +Error.stackTraceLimit = Infinity; +try { + rec1(0); +} catch (e) { + assertTrue(e.stack.split('\n').length > 100); +} + Error.stackTraceLimit = "not a number"; try { rec1(0); } catch (e) { assertEquals(undefined, e.stack); + e.stack = "abc"; + assertEquals("abc", e.stack); } Error.stackTraceLimit = 3; diff --git a/deps/v8/test/mjsunit/stack-traces.js b/deps/v8/test/mjsunit/stack-traces.js index 46a16eb87a2..f80a627b24a 100644 --- a/deps/v8/test/mjsunit/stack-traces.js +++ b/deps/v8/test/mjsunit/stack-traces.js @@ -331,3 +331,23 @@ Error.prepareStackTrace = function() { Error.prepareStackTrace = "custom"; }; new Error().stack; assertEquals("custom", Error.prepareStackTrace); + +// Check that the formatted stack trace can be set to undefined. +error = new Error(); +error.stack = undefined; +assertEquals(undefined, error.stack); + +// Check that the stack trace accessors are not forcibly set. +var my_error = {}; +Object.freeze(my_error); +assertThrows(function() { Error.captureStackTrace(my_error); }); + +my_error = {}; +Object.preventExtensions(my_error); +assertThrows(function() { Error.captureStackTrace(my_error); }); + +var fake_error = {}; +my_error = new Error(); +var stolen_getter = Object.getOwnPropertyDescriptor(my_error, 'stack').get; +Object.defineProperty(fake_error, 'stack', { get: stolen_getter }); +assertEquals(undefined, fake_error.stack); diff --git a/deps/v8/test/mjsunit/tools/profviz-test.default b/deps/v8/test/mjsunit/tools/profviz-test.default index 04185a260cf..bff249d651c 100644 --- a/deps/v8/test/mjsunit/tools/profviz-test.default +++ b/deps/v8/test/mjsunit/tools/profviz-test.default @@ -1,5 +1,5 @@ [ - "set yrange [0:24.5]", + "set yrange [0:25.5]", "set xlabel \"execution time in ms\"", "set xrange [2.4204999999999997:141.1669999999999]", "set style fill pattern 2 bo 1", @@ -17,7 +17,7 @@ "set object 6 rect from 57.242999999999974, 7 to 57.329716562499975, 6.766323024054983 fc rgb \"#9944CC\"", "set object 7 rect from 58.751499999999965, 7 to 58.838216562499966, 6.766323024054983 fc rgb \"#9944CC\"", "set object 8 rect from 60.72499999999996, 7 to 60.81171656249996, 6.766323024054983 fc rgb \"#9944CC\"", - "set ytics out nomirror (\"execution (59.6%%)\" 12.5, \"external (0.2%%)\" 13.5, \"compile unopt (3.1%%)\" 14.5, \"recompile sync (6.7%%)\" 15.5, \"recompile async (11.6%%)\" 16.5, \"compile eval (0.0%%)\" 17.5, \"parse (10.0%%)\" 18.5, \"preparse (0.8%%)\" 19.5, \"lazy parse (2.9%%)\" 20.5, \"gc scavenge (1.7%%)\" 21.5, \"gc compaction (3.3%%)\" 22.5, \"gc context (0.0%%)\" 23.5, \"code kind color coding\" 11, \"code kind in execution\" 10, \"top 8 js stack frames\" 9, \"pause times\" 0, \"max deopt size: 9.1 kB\" 7)", + "set ytics out nomirror (\"execution (59.6%%)\" 12.5, \"external (0.2%%)\" 13.5, \"compile unopt (3.1%%)\" 14.5, \"recompile sync (6.6%%)\" 15.5, \"recompile async (11.6%%)\" 16.5, \"compile eval (0.0%%)\" 17.5, \"ic miss (0.0%%)\" 18.5, \"parse (9.9%%)\" 19.5, \"preparse (0.6%%)\" 20.5, \"lazy parse (2.9%%)\" 21.5, \"gc scavenge (1.6%%)\" 22.5, \"gc compaction (3.3%%)\" 23.5, \"gc context (0.0%%)\" 24.5, \"code kind color coding\" 11, \"code kind in execution\" 10, \"top 8 js stack frames\" 9, \"pause times\" 0, \"max deopt size: 9.1 kB\" 7)", "set object 9 rect from 42.11000000000001, 12.83 to 42.28050000000001, 12.17 fc rgb \"#000000\"", "set object 10 rect from 42.298000000000016, 12.83 to 42.30000000000002, 12.17 fc rgb 
\"#000000\"", "set object 11 rect from 42.31450000000002, 12.83 to 42.62700000000002, 12.17 fc rgb \"#000000\"", @@ -448,232 +448,232 @@ "set object 436 rect from 108.1159999999999, 16.83 to 110.07649999999991, 16.17 fc rgb \"#CC4499\"", "set object 437 rect from 131.1424999999999, 16.83 to 133.02899999999988, 16.17 fc rgb \"#CC4499\"", "set object 438 rect from 141.13349999999986, 16.83 to 141.1669999999999, 16.17 fc rgb \"#CC4499\"", - "set object 439 rect from 22.2675, 18.83 to 22.3815, 18.17 fc rgb \"#00CC00\"", - "set object 440 rect from 22.665, 18.83 to 23.1135, 18.17 fc rgb \"#00CC00\"", - "set object 441 rect from 27.951000000000004, 18.83 to 27.972500000000004, 18.17 fc rgb \"#00CC00\"", - "set object 442 rect from 27.993000000000002, 18.83 to 28.013500000000004, 18.17 fc rgb \"#00CC00\"", - "set object 443 rect from 28.043000000000003, 18.83 to 28.063500000000005, 18.17 fc rgb \"#00CC00\"", - "set object 444 rect from 28.085000000000004, 18.83 to 28.087500000000002, 18.17 fc rgb \"#00CC00\"", - "set object 445 rect from 28.115000000000002, 18.83 to 28.139500000000005, 18.17 fc rgb \"#00CC00\"", - "set object 446 rect from 28.154000000000007, 18.83 to 28.260000000000005, 18.17 fc rgb \"#00CC00\"", - "set object 447 rect from 28.309500000000003, 18.83 to 28.374000000000006, 18.17 fc rgb \"#00CC00\"", - "set object 448 rect from 28.383500000000005, 18.83 to 28.385000000000005, 18.17 fc rgb \"#00CC00\"", - "set object 449 rect from 28.396500000000003, 18.83 to 28.445000000000007, 18.17 fc rgb \"#00CC00\"", - "set object 450 rect from 28.459500000000006, 18.83 to 28.463000000000005, 18.17 fc rgb \"#00CC00\"", - "set object 451 rect from 28.489500000000007, 18.83 to 28.499000000000006, 18.17 fc rgb \"#00CC00\"", - "set object 452 rect from 28.512500000000006, 18.83 to 28.516000000000005, 18.17 fc rgb \"#00CC00\"", - "set object 453 rect from 28.529500000000006, 18.83 to 28.533000000000005, 18.17 fc rgb \"#00CC00\"", - "set object 454 rect from 28.554500000000004, 18.83 to 28.557000000000006, 18.17 fc rgb \"#00CC00\"", - "set object 455 rect from 28.573500000000006, 18.83 to 28.579000000000008, 18.17 fc rgb \"#00CC00\"", - "set object 456 rect from 28.59950000000001, 18.83 to 28.602000000000007, 18.17 fc rgb \"#00CC00\"", - "set object 457 rect from 28.623500000000007, 18.83 to 28.625000000000007, 18.17 fc rgb \"#00CC00\"", - "set object 458 rect from 28.637500000000006, 18.83 to 28.647000000000006, 18.17 fc rgb \"#00CC00\"", - "set object 459 rect from 28.657500000000006, 18.83 to 28.669000000000008, 18.17 fc rgb \"#00CC00\"", - "set object 460 rect from 28.682500000000005, 18.83 to 28.686000000000007, 18.17 fc rgb \"#00CC00\"", - "set object 461 rect from 28.695500000000006, 18.83 to 28.701000000000008, 18.17 fc rgb \"#00CC00\"", - "set object 462 rect from 28.72450000000001, 18.83 to 28.811000000000007, 18.17 fc rgb \"#00CC00\"", - "set object 463 rect from 28.83250000000001, 18.83 to 28.907500000000006, 18.17 fc rgb \"#00CC00\"", - "set object 464 rect from 28.97100000000001, 18.83 to 28.97450000000001, 18.17 fc rgb \"#00CC00\"", - "set object 465 rect from 28.99600000000001, 18.83 to 28.99850000000001, 18.17 fc rgb \"#00CC00\"", - "set object 466 rect from 29.01200000000001, 18.83 to 29.01350000000001, 18.17 fc rgb \"#00CC00\"", - "set object 467 rect from 29.02600000000001, 18.83 to 29.056500000000007, 18.17 fc rgb \"#00CC00\"", - "set object 468 rect from 29.06900000000001, 18.83 to 29.159500000000012, 18.17 fc rgb \"#00CC00\"", - "set object 469 rect from 29.17100000000001, 
[... several hundred regenerated gnuplot "set object <n> rect from <x1>, <y1> to <x2>, <y2> fc rgb ..." entries elided: the profviz expectation data was re-emitted with objects 439-664 renumbered and every timeline row shifted down one unit (rows 18.83/18.17 through 22.83/22.17 become 19.83/19.17 through 23.83/23.17); only the y coordinates change ...]
   "set object 665 rect from 18.803, 10.2 to 19.803, 9.8 fc rgb \"#000000\"",
   "set object 666 rect from 19.8815, 10.2 to 20.8815, 9.8 fc rgb \"#000000\"",
   "set object 667 rect from 20.910999999999998, 10.2 to 21.910999999999998, 9.8 fc rgb \"#000000\"",
@@ -1371,7 +1371,7 @@
   "set label \"1 ms\" at 14.3305828125,1 font \"Helvetica,7'\"",
   "set label \"0 ms\" at 18.204082812499998,1 font \"Helvetica,7'\"",
   "set label \"0 ms\" at 85.27908281249994,1 font \"Helvetica,7'\"",
-  "set y2range [0:59.54259090909095]",
+  "set y2range [0:62.076318181818216]",
   "plot '-' using 1:2 axes x1y2 with impulses ls 1",
   "41.88650000000001 13.935500000000008",
   "3.7920000000000003 1.3375000000000004",
@@ -1563,4 +1563,4 @@
   "# start: 2.4204999999999997",
   "# end: 141.1669999999999",
   "# objects: 1547"
-]
+]
\ No newline at end of file
diff --git a/deps/v8/test/mjsunit/tools/tickprocessor-test.default b/deps/v8/test/mjsunit/tools/tickprocessor-test.default
index 702f4bcae89..3e01532ac37 100644
--- a/deps/v8/test/mjsunit/tools/tickprocessor-test.default
+++ b/deps/v8/test/mjsunit/tools/tickprocessor-test.default
@@ -1,13 +1,9 @@
 Statistical profiling result from v8.log, (13 ticks, 2 unaccounted, 0 excluded).

- [Unknown]:
-   ticks  total  nonlib   name
-     2   15.4%
-
  [Shared libraries]:
    ticks  total  nonlib   name
-     3   23.1%    0.0%  /lib32/libm-2.7.so
-     1    7.7%    0.0%  ffffe000-fffff000
+     3   23.1%  /lib32/libm-2.7.so
+     1    7.7%  ffffe000-fffff000

  [JavaScript]:
    ticks  total  nonlib   name
@@ -16,13 +12,17 @@ Statistical profiling result from v8.log, (13 ticks, 2 unaccounted, 0 excluded).
  [C++]:
    ticks  total  nonlib   name
      2   15.4%   22.2%  v8::internal::Runtime_Math_exp(v8::internal::Arguments)
-     1    7.7%   11.1%  v8::internal::JSObject::LocalLookupRealNamedProperty(v8::internal::String*, v8::internal::LookupResult*)
+     1    7.7%   11.1%  v8::internal::JSObject::LookupOwnRealNamedProperty(v8::internal::String*, v8::internal::LookupResult*)
      1    7.7%   11.1%  v8::internal::HashTable<v8::internal::StringDictionaryShape, v8::internal::String*>::FindEntry(v8::internal::String*)
      1    7.7%   11.1%  exp

- [GC]:
+ [Summary]:
    ticks  total  nonlib   name
-     0    0.0%
+     1    7.7%   11.1%  JavaScript
+     5   38.5%   55.6%  C++
+     0    0.0%    0.0%  GC
+     4   30.8%          Shared libraries
+     2   15.4%          Unaccounted

  [Bottom up (heavy) profile]:
   Note: percentage shows a share of a particular caller in the total
@@ -38,7 +38,7 @@ Statistical profiling result from v8.log, (13 ticks, 2 unaccounted, 0 excluded).
      2  100.0%    LazyCompile: exp native math.js:41
      2  100.0%      Script: exp.js

-     1    7.7%  v8::internal::JSObject::LocalLookupRealNamedProperty(v8::internal::String*, v8::internal::LookupResult*)
+     1    7.7%  v8::internal::JSObject::LookupOwnRealNamedProperty(v8::internal::String*, v8::internal::LookupResult*)
      1  100.0%    Script: exp.js

      1    7.7%  v8::internal::HashTable<v8::internal::StringDictionaryShape, v8::internal::String*>::FindEntry(v8::internal::String*)
diff --git a/deps/v8/test/mjsunit/tools/tickprocessor-test.func-info b/deps/v8/test/mjsunit/tools/tickprocessor-test.func-info
index a66b90f4c53..c93b6ec701c 100644
--- a/deps/v8/test/mjsunit/tools/tickprocessor-test.func-info
+++ b/deps/v8/test/mjsunit/tools/tickprocessor-test.func-info
@@ -11,9 +11,12 @@ Statistical profiling result from v8.log, (3 ticks, 0 unaccounted, 0 excluded).
  [C++]:
    ticks  total  nonlib   name

- [GC]:
+ [Summary]:
    ticks  total  nonlib   name
-     0    0.0%
+     3  100.0%  100.0%  JavaScript
+     0    0.0%    0.0%  C++
+     0    0.0%    0.0%  GC
+     0    0.0%          Shared libraries

  [Bottom up (heavy) profile]:
   Note: percentage shows a share of a particular caller in the total
diff --git a/deps/v8/test/mjsunit/tools/tickprocessor-test.gc-state b/deps/v8/test/mjsunit/tools/tickprocessor-test.gc-state
index 40f90db4f76..6b1a6a3b306 100644
--- a/deps/v8/test/mjsunit/tools/tickprocessor-test.gc-state
+++ b/deps/v8/test/mjsunit/tools/tickprocessor-test.gc-state
@@ -9,9 +9,12 @@ Statistical profiling result from v8.log, (13 ticks, 0 unaccounted, 13 excluded)
  [C++]:
    ticks  total  nonlib   name

- [GC]:
+ [Summary]:
    ticks  total  nonlib   name
-     0    0.0%
+     0    0.0%    0.0%  JavaScript
+     0    0.0%    0.0%  C++
+     0    0.0%    0.0%  GC
+     0    0.0%          Shared libraries

  [Bottom up (heavy) profile]:
   Note: percentage shows a share of a particular caller in the total
diff --git a/deps/v8/test/mjsunit/tools/tickprocessor-test.ignore-unknown b/deps/v8/test/mjsunit/tools/tickprocessor-test.ignore-unknown
index 306d646c1a1..de70527f9dc 100644
--- a/deps/v8/test/mjsunit/tools/tickprocessor-test.ignore-unknown
+++ b/deps/v8/test/mjsunit/tools/tickprocessor-test.ignore-unknown
@@ -2,8 +2,8 @@ Statistical profiling result from v8.log, (13 ticks, 2 unaccounted, 0 excluded).
  [Shared libraries]:
    ticks  total  nonlib   name
-     3   27.3%    0.0%  /lib32/libm-2.7.so
-     1    9.1%    0.0%  ffffe000-fffff000
+     3   27.3%  /lib32/libm-2.7.so
+     1    9.1%  ffffe000-fffff000

  [JavaScript]:
    ticks  total  nonlib   name
@@ -12,13 +12,16 @@ Statistical profiling result from v8.log, (13 ticks, 2 unaccounted, 0 excluded).
  [C++]:
    ticks  total  nonlib   name
      2   18.2%   28.6%  v8::internal::Runtime_Math_exp(v8::internal::Arguments)
-     1    9.1%   14.3%  v8::internal::JSObject::LocalLookupRealNamedProperty(v8::internal::String*, v8::internal::LookupResult*)
+     1    9.1%   14.3%  v8::internal::JSObject::LookupOwnRealNamedProperty(v8::internal::String*, v8::internal::LookupResult*)
      1    9.1%   14.3%  v8::internal::HashTable<v8::internal::StringDictionaryShape, v8::internal::String*>::FindEntry(v8::internal::String*)
      1    9.1%   14.3%  exp

- [GC]:
+ [Summary]:
    ticks  total  nonlib   name
-     0    0.0%
+     1    9.1%   14.3%  JavaScript
+     5   45.5%   71.4%  C++
+     0    0.0%    0.0%  GC
+     4   36.4%          Shared libraries

  [Bottom up (heavy) profile]:
   Note: percentage shows a share of a particular caller in the total
@@ -34,7 +37,7 @@ Statistical profiling result from v8.log, (13 ticks, 2 unaccounted, 0 excluded).
      2  100.0%    LazyCompile: exp native math.js:41
      2  100.0%      Script: exp.js

-     1    9.1%  v8::internal::JSObject::LocalLookupRealNamedProperty(v8::internal::String*, v8::internal::LookupResult*)
+     1    9.1%  v8::internal::JSObject::LookupOwnRealNamedProperty(v8::internal::String*, v8::internal::LookupResult*)
      1  100.0%    Script: exp.js

      1    9.1%  v8::internal::HashTable<v8::internal::StringDictionaryShape, v8::internal::String*>::FindEntry(v8::internal::String*)
diff --git a/deps/v8/test/mjsunit/tools/tickprocessor-test.separate-ic b/deps/v8/test/mjsunit/tools/tickprocessor-test.separate-ic
index 3a2041b52f1..119ccbe7126 100644
--- a/deps/v8/test/mjsunit/tools/tickprocessor-test.separate-ic
+++ b/deps/v8/test/mjsunit/tools/tickprocessor-test.separate-ic
@@ -1,13 +1,9 @@
 Statistical profiling result from v8.log, (13 ticks, 2 unaccounted, 0 excluded).

- [Unknown]:
-   ticks  total  nonlib   name
-     2   15.4%
-
  [Shared libraries]:
    ticks  total  nonlib   name
-     3   23.1%    0.0%  /lib32/libm-2.7.so
-     1    7.7%    0.0%  ffffe000-fffff000
+     3   23.1%  /lib32/libm-2.7.so
+     1    7.7%  ffffe000-fffff000

  [JavaScript]:
    ticks  total  nonlib   name
@@ -18,13 +14,17 @@ Statistical profiling result from v8.log, (13 ticks, 2 unaccounted, 0 excluded).
  [C++]:
    ticks  total  nonlib   name
      2   15.4%   22.2%  v8::internal::Runtime_Math_exp(v8::internal::Arguments)
-     1    7.7%   11.1%  v8::internal::JSObject::LocalLookupRealNamedProperty(v8::internal::String*, v8::internal::LookupResult*)
+     1    7.7%   11.1%  v8::internal::JSObject::LookupOwnRealNamedProperty(v8::internal::String*, v8::internal::LookupResult*)
      1    7.7%   11.1%  v8::internal::HashTable<v8::internal::StringDictionaryShape, v8::internal::String*>::FindEntry(v8::internal::String*)
      1    7.7%   11.1%  exp

- [GC]:
+ [Summary]:
    ticks  total  nonlib   name
-     0    0.0%
+     3   23.1%   33.3%  JavaScript
+     5   38.5%   55.6%  C++
+     0    0.0%    0.0%  GC
+     4   30.8%          Shared libraries
+     2   15.4%          Unaccounted

  [Bottom up (heavy) profile]:
   Note: percentage shows a share of a particular caller in the total
@@ -40,7 +40,7 @@ Statistical profiling result from v8.log, (13 ticks, 2 unaccounted, 0 excluded).
      2  100.0%    LazyCompile: exp native math.js:41
      2  100.0%      Script: exp.js

-     1    7.7%  v8::internal::JSObject::LocalLookupRealNamedProperty(v8::internal::String*, v8::internal::LookupResult*)
+     1    7.7%  v8::internal::JSObject::LookupOwnRealNamedProperty(v8::internal::String*, v8::internal::LookupResult*)
      1  100.0%    Script: exp.js

      1    7.7%  v8::internal::HashTable<v8::internal::StringDictionaryShape, v8::internal::String*>::FindEntry(v8::internal::String*)
diff --git a/deps/v8/test/mjsunit/tools/tickprocessor.js b/deps/v8/test/mjsunit/tools/tickprocessor.js
index 626929de16b..f460d349bb5 100644
--- a/deps/v8/test/mjsunit/tools/tickprocessor.js
+++ b/deps/v8/test/mjsunit/tools/tickprocessor.js
@@ -25,10 +25,6 @@
 // (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
 // OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.

-// This test case is not compatible with optimization stress because the
-// generated profile will look vastly different when more is optimized.
-// Flags: --nostress-opt --noalways-opt
-
 // Load implementations from <project root>/tools.
 // Files: tools/splaytree.js tools/codemap.js tools/csvparser.js
 // Files: tools/consarray.js tools/profile.js tools/profile_view.js
@@ -311,7 +307,7 @@ CppEntriesProviderMock.prototype.parseVmSymbols = function(
     name, startAddr, endAddr, symbolAdder) {
   var symbols = {
     'shell':
-        [['v8::internal::JSObject::LocalLookupRealNamedProperty(v8::internal::String*, v8::internal::LookupResult*)', 0x080f8800, 0x080f8d90],
+        [['v8::internal::JSObject::LookupOwnRealNamedProperty(v8::internal::String*, v8::internal::LookupResult*)', 0x080f8800, 0x080f8d90],
         ['v8::internal::HashTable<v8::internal::StringDictionaryShape, v8::internal::String*>::FindEntry(v8::internal::String*)', 0x080f8210, 0x080f8800],
         ['v8::internal::Runtime_Math_exp(v8::internal::Arguments)', 0x08123b20, 0x08123b80]],
     '/lib32/libm-2.7.so':
diff --git a/deps/v8/test/mjsunit/value-wrapper-accessor.js b/deps/v8/test/mjsunit/value-wrapper-accessor.js
index f95145652cd..79db407121c 100644
--- a/deps/v8/test/mjsunit/value-wrapper-accessor.js
+++ b/deps/v8/test/mjsunit/value-wrapper-accessor.js
@@ -77,20 +77,14 @@ function test(object, prototype) {
     %OptimizeFunctionOnNextCall(nonstrict);
     result = undefined;
     nonstrict(object);
-    // TODO(1475): Support storing to primitive values.
-    // This should return "object" once storing to primitive values is
-    // supported.
-    assertEquals("undefined", typeof result);
+    assertEquals("object", typeof result);

     strict(object);
     strict(object);
     %OptimizeFunctionOnNextCall(strict);
     result = undefined;
     strict(object);
-    // TODO(1475): Support storing to primitive values.
-    // This should return "object" once storing to primitive values is
-    // supported.
-    assertEquals("undefined", typeof result);
+    assertEquals(object, result);
   })();
 }
diff --git a/deps/v8/test/mjsunit/with-readonly.js b/deps/v8/test/mjsunit/with-readonly.js
index 29982b34746..43583348e95 100644
--- a/deps/v8/test/mjsunit/with-readonly.js
+++ b/deps/v8/test/mjsunit/with-readonly.js
@@ -27,8 +27,6 @@

 // Test that readonly variables are treated correctly.

-// Flags: --es5_readonly
-
 // Create an object with a read-only length property in the prototype
 // chain by putting the string split function in the prototype chain.
 var o = {};
diff --git a/deps/v8/test/mozilla/mozilla.status b/deps/v8/test/mozilla/mozilla.status
index c04412ae920..5662ee4144c 100644
--- a/deps/v8/test/mozilla/mozilla.status
+++ b/deps/v8/test/mozilla/mozilla.status
@@ -51,6 +51,19 @@
   'ecma_3/Number/15.7.4.3-02': [PASS, FAIL],
   'ecma_3/Date/15.9.5.5-02': [PASS, FAIL],

+  ################## TURBO-FAN FAILURES ###################
+
+  # TODO(turbofan): These are all covered by mjsunit as well. Enable them once
+  # we pass 'mjsunit' and 'webkit' with TurboFan.
+  'js1_4/Functions/function-001': [PASS, NO_VARIANTS],
+  'js1_5/Regress/regress-104077': [PASS, NO_VARIANTS],
+  'js1_5/Regress/regress-396684': [PASS, NO_VARIANTS],
+  'js1_5/Regress/regress-80981': [PASS, NO_VARIANTS],
+
+  # TODO(turbofan): Large switch statements crash.
+  'js1_5/Regress/regress-366601': [PASS, NO_VARIANTS],
+  'js1_5/Regress/regress-398085-01': [PASS, NO_VARIANTS],
+
   ##################### SKIPPED TESTS #####################

   # This test checks that we behave properly in an out-of-memory
@@ -113,7 +126,7 @@
   'js1_5/Regress/regress-360969-04': [PASS, ['mode == debug', TIMEOUT, NO_VARIANTS]],
   'js1_5/Regress/regress-360969-05': [PASS, ['mode == debug', TIMEOUT, NO_VARIANTS]],
   'js1_5/Regress/regress-360969-06': [PASS, ['mode == debug', TIMEOUT, NO_VARIANTS]],
-  'js1_5/extensions/regress-365527': [PASS, ['mode == debug', TIMEOUT, NO_VARIANTS]],
+  'js1_5/extensions/regress-365527': [PASS, SLOW, ['mode == debug', TIMEOUT, NO_VARIANTS]],

   'js1_5/Regress/regress-280769-3': [PASS, ['mode == debug', FAIL]],
   'js1_5/Regress/regress-203278-1': [PASS, ['mode == debug', FAIL]],
@@ -170,7 +183,7 @@
   'js1_5/String/regress-56940-02': [PASS, FAIL],
   'js1_5/String/regress-157334-01': [PASS, FAIL],
   'js1_5/String/regress-322772': [PASS, FAIL],
-  'js1_5/Array/regress-99120-01': [PASS, FAIL],
+  'js1_5/Array/regress-99120-01': [PASS, FAIL, NO_VARIANTS],
   'js1_5/Array/regress-99120-02': [PASS, FAIL],
   'js1_5/Regress/regress-347306-01': [PASS, FAIL],
   'js1_5/Regress/regress-416628': [PASS, FAIL, ['mode == debug', TIMEOUT, NO_VARIANTS]],
@@ -642,10 +655,6 @@
   # We do not correctly handle assignments within "with"
   'ecma_3/Statements/12.10-01': [FAIL],

-  # We do not throw an exception when a const is redeclared.
-  # (We only fail section 1 of the test.)
-  'js1_5/Regress/regress-103602': [FAIL],
-
   ##################### MOZILLA EXTENSION TESTS #####################

   'ecma/extensions/15.1.2.1-1': [FAIL_OK],
@@ -718,6 +727,13 @@
   'js1_5/extensions/regress-361964': [FAIL_OK],
   'js1_5/extensions/regress-363988': [FAIL_OK],
   'js1_5/extensions/regress-365869': [FAIL_OK],
+
+  # Uses non ES5 compatible syntax for setter
+  'js1_5/extensions/regress-367501-01': [FAIL_OK],
+  'js1_5/extensions/regress-367501-02': [FAIL_OK],
+  'js1_5/extensions/regress-367501-03': [FAIL_OK],
+  'js1_5/extensions/regress-367501-04': [FAIL_OK],
+
   'js1_5/extensions/regress-367630': [FAIL_OK],
   'js1_5/extensions/regress-367923': [FAIL_OK],
   'js1_5/extensions/regress-368859': [FAIL_OK],
@@ -843,7 +859,6 @@
   'js1_5/Regress/regress-404755': [SKIP],
   'js1_5/Regress/regress-451322': [SKIP],

-  # BUG(1040): Allow this test to timeout.
   'js1_5/GC/regress-203278-2': [PASS, TIMEOUT, NO_VARIANTS],
 }],  # 'arch == arm or arch == arm64'

@@ -852,10 +867,13 @@
 ['arch == arm64', {
   # BUG(v8:3152): Runs out of stack in debug mode.
   'js1_5/extensions/regress-355497': [FAIL_OK, ['mode == debug', SKIP]],
+
+  # BUG(v8:3503): Times out in debug mode.
+  'js1_5/Regress/regress-280769-2': [PASS, FAIL, ['mode == debug', SKIP]],
 }],  # 'arch == arm64'

-['arch == mipsel', {
+['arch == mipsel or arch == mips64el', {

   # BUG(3251229): Times out when running new crankshaft test script.
   'ecma_3/RegExp/regress-311414': [SKIP],
@@ -872,7 +890,7 @@
   # BUG(1040): Allow this test to timeout.
   'js1_5/GC/regress-203278-2': [PASS, TIMEOUT, NO_VARIANTS],

-}],  # 'arch == mipsel'
+}],  # 'arch == mipsel or arch == mips64el'

 ['arch == mips', {
diff --git a/deps/v8/test/preparser/duplicate-property.pyt b/deps/v8/test/preparser/duplicate-property.pyt
index 5abf9adbcfd..594b4786cb0 100644
--- a/deps/v8/test/preparser/duplicate-property.pyt
+++ b/deps/v8/test/preparser/duplicate-property.pyt
@@ -81,7 +81,7 @@ def PropertyTest(name, propa, propb, allow_strict = True):
   """, replacement, None)

   StrictTest("$name-nested-set", """
-    var o = {set $id1(){}, o: {set $id2(){} } };
+    var o = {set $id1(v){}, o: {set $id2(v){} } };
   """, replacement, None)
diff --git a/deps/v8/test/promises-aplus/testcfg.py b/deps/v8/test/promises-aplus/testcfg.py
index a5995a361d6..bd033791871 100644
--- a/deps/v8/test/promises-aplus/testcfg.py
+++ b/deps/v8/test/promises-aplus/testcfg.py
@@ -31,9 +31,9 @@
 import shutil
 import sys
 import tarfile
-import urllib

 from testrunner.local import testsuite
+from testrunner.local import utils
 from testrunner.objects import testcase

@@ -102,7 +102,7 @@ def DownloadTestData(self):
     directory = os.path.join(self.root, TEST_NAME)
     if not os.path.exists(archive):
       print('Downloading {0} from {1} ...'.format(TEST_NAME, TEST_URL))
-      urllib.urlretrieve(TEST_URL, archive)
+      utils.URLRetrieve(TEST_URL, archive)
     if os.path.exists(directory):
       shutil.rmtree(directory)
@@ -129,7 +129,7 @@ def DownloadSinon(self):
       os.mkdir(directory)
     path = os.path.join(directory, SINON_FILENAME)
     if not os.path.exists(path):
-      urllib.urlretrieve(SINON_URL, path)
+      utils.URLRetrieve(SINON_URL, path)
       hash = hashlib.sha256()
       with open(path, 'rb') as f:
         for chunk in iter(lambda: f.read(8192), ''):
diff --git a/deps/v8/test/test262/test262.status b/deps/v8/test/test262/test262.status
index 247bd5cb6f5..dd075d96885 100644
--- a/deps/v8/test/test262/test262.status
+++ b/deps/v8/test/test262/test262.status
@@ -31,6 +31,13 @@
   '15.5.4.9_CE': [['no_i18n', SKIP]],

+  # TODO(turbofan): Timeouts on TurboFan need investigation.
+  '10.1.1_13': [PASS, NO_VARIANTS],
+
+  # BUG(v8:3455)
+  '11.2.3_b': [FAIL],
+  '12.2.3_b': [FAIL],
+
   ######################## NEEDS INVESTIGATION ###########################

   # These test failures are specific to the intl402 suite and need investigation
@@ -38,10 +45,9 @@
   # incompatibilities if the test cases turn out to be broken or ambiguous.
   '6.2.3': [FAIL],
   '9.2.1_2': [FAIL],
-  '9.2.5_11_g_ii_2': [FAIL],
   '9.2.6_2': [FAIL],
   '10.1.1_a': [FAIL],
-  '10.1.1_19_c': [PASS, FAIL],
+  '10.1.1_19_c': [PASS, FAIL, NO_VARIANTS],
   '10.1.2.1_4': [FAIL],
   '10.2.3_b': [PASS, FAIL],
   '10.3_a': [FAIL],
@@ -52,7 +58,6 @@
   '11.1.2.1_4': [FAIL],
   '11.3.2_FN_2': [PASS, FAIL],
   '11.3.2_TRF': [PASS, FAIL],
-  '11.3.2_TRP': [FAIL],
   '11.3_a': [FAIL],
   '12.1.1_a': [FAIL],
   '12.1.2.1_4': [FAIL],
@@ -63,14 +68,7 @@

   ##################### DELIBERATE INCOMPATIBILITIES #####################

-  # This tests precision of Math functions. The implementation for those
-  # trigonometric functions are platform/compiler dependent. Furthermore, the
-  # expectation values by far deviates from the actual result given by an
-  # arbitrary-precision calculator, making those tests partly bogus.
-  'S15.8.2.7_A7': [PASS, FAIL_OK],  # Math.cos
   'S15.8.2.8_A6': [PASS, FAIL_OK],  # Math.exp (less precise with --fast-math)
-  'S15.8.2.16_A7': [PASS, FAIL_OK],  # Math.sin
-  'S15.8.2.18_A7': [PASS, FAIL_OK],  # Math.tan

   # Linux for ia32 (and therefore simulators) default to extended 80 bit
   # floating point formats, so these tests checking 64-bit FP precision fail.
@@ -99,7 +97,12 @@
   'S15.1.3.2_A2.5_T1': [PASS, ['mode == debug', SKIP]],
 }],  # ALWAYS

+['system == macos', {
+  '11.3.2_TRP': [FAIL],
+  '9.2.5_11_g_ii_2': [FAIL],
+}],  # system == macos
+
-['arch == arm or arch == mipsel or arch == mips or arch == arm64', {
+['arch == arm or arch == mipsel or arch == mips or arch == arm64 or arch == mips64el', {

   # TODO(mstarzinger): Causes stack overflow on simulators due to eager
   # compilation of parenthesized function literals. Needs investigation.
diff --git a/deps/v8/test/test262/testcfg.py b/deps/v8/test/test262/testcfg.py
index 8e129d3144b..de3c9ad7b9f 100644
--- a/deps/v8/test/test262/testcfg.py
+++ b/deps/v8/test/test262/testcfg.py
@@ -31,9 +31,9 @@
 import shutil
 import sys
 import tarfile
-import urllib

 from testrunner.local import testsuite
+from testrunner.local import utils
 from testrunner.objects import testcase

@@ -97,7 +97,7 @@ def DownloadData(self):
     directory_old_name = os.path.join(self.root, "data.old")
     if not os.path.exists(archive_name):
       print "Downloading test data from %s ..." % archive_url
-      urllib.urlretrieve(archive_url, archive_name)
+      utils.URLRetrieve(archive_url, archive_name)
     if os.path.exists(directory_name):
       if os.path.exists(directory_old_name):
         shutil.rmtree(directory_old_name)
diff --git a/deps/v8/test/webkit/fast/js/Object-defineProperty-expected.txt b/deps/v8/test/webkit/fast/js/Object-defineProperty-expected.txt
index 7a303f2c5e7..118f9dddf27 100644
--- a/deps/v8/test/webkit/fast/js/Object-defineProperty-expected.txt
+++ b/deps/v8/test/webkit/fast/js/Object-defineProperty-expected.txt
@@ -142,8 +142,8 @@ PASS 'use strict'; var o = {}; o.readOnly = false; o.readOnly threw exception Ty
 PASS Object.getOwnPropertyDescriptor(Object.defineProperty(Object.defineProperty({}, 'foo', {get: function() { return false; }, configurable: true}), 'foo', {value:false}), 'foo').writable is false
 PASS Object.getOwnPropertyDescriptor(Object.defineProperty(Object.defineProperty({}, 'foo', {get: function() { return false; }, configurable: true}), 'foo', {value:false, writable: false}), 'foo').writable is false
 PASS Object.getOwnPropertyDescriptor(Object.defineProperty(Object.defineProperty({}, 'foo', {get: function() { return false; }, configurable: true}), 'foo', {value:false, writable: true}), 'foo').writable is true
-FAIL var a = Object.defineProperty([], 'length', {writable: false}); a[0] = 42; 0 in a; should be false. Was true.
-FAIL 'use strict'; var a = Object.defineProperty([], 'length', {writable: false}); a[0] = 42; 0 in a; should throw an exception. Was true.
+PASS var a = Object.defineProperty([], 'length', {writable: false}); a[0] = 42; 0 in a; is false
+PASS 'use strict'; var a = Object.defineProperty([], 'length', {writable: false}); a[0] = 42; 0 in a; threw exception TypeError: Cannot assign to read only property 'length' of [object Array].
 PASS var a = Object.defineProperty([42], '0', {writable: false}); a[0] = false; a[0]; is 42
 PASS 'use strict'; var a = Object.defineProperty([42], '0', {writable: false}); a[0] = false; a[0]; threw exception TypeError: Cannot assign to read only property '0' of [object Array].
 PASS var a = Object.defineProperty([], '0', {set: undefined}); a[0] = 42; a[0]; is undefined.
diff --git a/deps/v8/test/webkit/fast/js/Object-getOwnPropertyNames-expected.txt b/deps/v8/test/webkit/fast/js/Object-getOwnPropertyNames-expected.txt
index 52babed0285..4b8eb147753 100644
--- a/deps/v8/test/webkit/fast/js/Object-getOwnPropertyNames-expected.txt
+++ b/deps/v8/test/webkit/fast/js/Object-getOwnPropertyNames-expected.txt
@@ -53,35 +53,35 @@ PASS getSortedOwnPropertyNames(argumentsObject()) is ['callee', 'length']
 PASS getSortedOwnPropertyNames(argumentsObject(1)) is ['0', 'callee', 'length']
 PASS getSortedOwnPropertyNames(argumentsObject(1,2,3)) is ['0', '1', '2', 'callee', 'length']
 PASS getSortedOwnPropertyNames((function(){arguments.__proto__=[1,2,3];return arguments;})()) is ['callee', 'length']
-FAIL getSortedOwnPropertyNames(parseInt) should be length,name. Was arguments,caller,length,name.
-FAIL getSortedOwnPropertyNames(parseFloat) should be length,name. Was arguments,caller,length,name.
-FAIL getSortedOwnPropertyNames(isNaN) should be length,name. Was arguments,caller,length,name.
-FAIL getSortedOwnPropertyNames(isFinite) should be length,name. Was arguments,caller,length,name.
-FAIL getSortedOwnPropertyNames(escape) should be length,name. Was arguments,caller,length,name.
-FAIL getSortedOwnPropertyNames(unescape) should be length,name. Was arguments,caller,length,name.
-FAIL getSortedOwnPropertyNames(decodeURI) should be length,name. Was arguments,caller,length,name. -FAIL getSortedOwnPropertyNames(decodeURIComponent) should be length,name. Was arguments,caller,length,name. -FAIL getSortedOwnPropertyNames(encodeURI) should be length,name. Was arguments,caller,length,name. -FAIL getSortedOwnPropertyNames(encodeURIComponent) should be length,name. Was arguments,caller,length,name. -FAIL getSortedOwnPropertyNames(Object) should be create,defineProperties,defineProperty,freeze,getOwnPropertyDescriptor,getOwnPropertyNames,getPrototypeOf,isExtensible,isFrozen,isSealed,keys,length,name,preventExtensions,prototype,seal,setPrototypeOf. Was arguments,caller,create,defineProperties,defineProperty,deliverChangeRecords,freeze,getNotifier,getOwnPropertyDescriptor,getOwnPropertyNames,getPrototypeOf,is,isExtensible,isFrozen,isSealed,keys,length,name,observe,preventExtensions,prototype,seal,setPrototypeOf,unobserve. +PASS getSortedOwnPropertyNames(parseInt) is ['arguments', 'caller', 'length', 'name'] +PASS getSortedOwnPropertyNames(parseFloat) is ['arguments', 'caller', 'length', 'name'] +PASS getSortedOwnPropertyNames(isNaN) is ['arguments', 'caller', 'length', 'name'] +PASS getSortedOwnPropertyNames(isFinite) is ['arguments', 'caller', 'length', 'name'] +PASS getSortedOwnPropertyNames(escape) is ['arguments', 'caller', 'length', 'name'] +PASS getSortedOwnPropertyNames(unescape) is ['arguments', 'caller', 'length', 'name'] +PASS getSortedOwnPropertyNames(decodeURI) is ['arguments', 'caller', 'length', 'name'] +PASS getSortedOwnPropertyNames(decodeURIComponent) is ['arguments', 'caller', 'length', 'name'] +PASS getSortedOwnPropertyNames(encodeURI) is ['arguments', 'caller', 'length', 'name'] +PASS getSortedOwnPropertyNames(encodeURIComponent) is ['arguments', 'caller', 'length', 'name'] +PASS getSortedOwnPropertyNames(Object) is ['arguments', 'caller', 'create', 'defineProperties', 'defineProperty', 'deliverChangeRecords', 'freeze', 'getNotifier', 'getOwnPropertyDescriptor', 'getOwnPropertyNames', 'getOwnPropertySymbols', 'getPrototypeOf', 'is', 'isExtensible', 'isFrozen', 'isSealed', 'keys', 'length', 'name', 'observe', 'preventExtensions', 'prototype', 'seal', 'setPrototypeOf', 'unobserve'] PASS getSortedOwnPropertyNames(Object.prototype) is ['__defineGetter__', '__defineSetter__', '__lookupGetter__', '__lookupSetter__', '__proto__', 'constructor', 'hasOwnProperty', 'isPrototypeOf', 'propertyIsEnumerable', 'toLocaleString', 'toString', 'valueOf'] -FAIL getSortedOwnPropertyNames(Function) should be length,name,prototype. Was arguments,caller,length,name,prototype. -FAIL getSortedOwnPropertyNames(Function.prototype) should be apply,bind,call,constructor,length,name,toString. Was apply,arguments,bind,call,caller,constructor,length,name,toString. -FAIL getSortedOwnPropertyNames(Array) should be isArray,length,name,prototype. Was arguments,caller,isArray,length,name,observe,prototype,unobserve. -PASS getSortedOwnPropertyNames(Array.prototype) is ['concat', 'constructor', 'every', 'filter', 'forEach', 'indexOf', 'join', 'lastIndexOf', 'length', 'map', 'pop', 'push', 'reduce', 'reduceRight', 'reverse', 'shift', 'slice', 'some', 'sort', 'splice', 'toLocaleString', 'toString', 'unshift'] -FAIL getSortedOwnPropertyNames(String) should be fromCharCode,length,name,prototype. Was arguments,caller,fromCharCode,length,name,prototype. 
+PASS getSortedOwnPropertyNames(Function) is ['arguments', 'caller', 'length', 'name', 'prototype'] +PASS getSortedOwnPropertyNames(Function.prototype) is ['apply', 'arguments', 'bind', 'call', 'caller', 'constructor', 'length', 'name', 'toString'] +PASS getSortedOwnPropertyNames(Array) is ['arguments', 'caller', 'isArray', 'length', 'name', 'observe', 'prototype', 'unobserve'] +PASS getSortedOwnPropertyNames(Array.prototype) is ['concat', 'constructor', 'entries', 'every', 'filter', 'forEach', 'indexOf', 'join', 'keys', 'lastIndexOf', 'length', 'map', 'pop', 'push', 'reduce', 'reduceRight', 'reverse', 'shift', 'slice', 'some', 'sort', 'splice', 'toLocaleString', 'toString', 'unshift', 'values'] +PASS getSortedOwnPropertyNames(String) is ['arguments', 'caller', 'fromCharCode', 'length', 'name', 'prototype'] PASS getSortedOwnPropertyNames(String.prototype) is ['anchor', 'big', 'blink', 'bold', 'charAt', 'charCodeAt', 'concat', 'constructor', 'fixed', 'fontcolor', 'fontsize', 'indexOf', 'italics', 'lastIndexOf', 'length', 'link', 'localeCompare', 'match', 'normalize', 'replace', 'search', 'slice', 'small', 'split', 'strike', 'sub', 'substr', 'substring', 'sup', 'toLocaleLowerCase', 'toLocaleUpperCase', 'toLowerCase', 'toString', 'toUpperCase', 'trim', 'trimLeft', 'trimRight', 'valueOf'] -FAIL getSortedOwnPropertyNames(Boolean) should be length,name,prototype. Was arguments,caller,length,name,prototype. +PASS getSortedOwnPropertyNames(Boolean) is ['arguments', 'caller', 'length', 'name', 'prototype'] PASS getSortedOwnPropertyNames(Boolean.prototype) is ['constructor', 'toString', 'valueOf'] -FAIL getSortedOwnPropertyNames(Number) should be MAX_VALUE,MIN_VALUE,NEGATIVE_INFINITY,NaN,POSITIVE_INFINITY,length,name,prototype. Was EPSILON,MAX_SAFE_INTEGER,MAX_VALUE,MIN_SAFE_INTEGER,MIN_VALUE,NEGATIVE_INFINITY,NaN,POSITIVE_INFINITY,arguments,caller,isFinite,isInteger,isNaN,isSafeInteger,length,name,parseFloat,parseInt,prototype. +PASS getSortedOwnPropertyNames(Number) is ['EPSILON', 'MAX_SAFE_INTEGER', 'MAX_VALUE', 'MIN_SAFE_INTEGER', 'MIN_VALUE', 'NEGATIVE_INFINITY', 'NaN', 'POSITIVE_INFINITY', 'arguments', 'caller', 'isFinite', 'isInteger', 'isNaN', 'isSafeInteger', 'length', 'name', 'parseFloat', 'parseInt', 'prototype'] PASS getSortedOwnPropertyNames(Number.prototype) is ['constructor', 'toExponential', 'toFixed', 'toLocaleString', 'toPrecision', 'toString', 'valueOf'] -FAIL getSortedOwnPropertyNames(Date) should be UTC,length,name,now,parse,prototype. Was UTC,arguments,caller,length,name,now,parse,prototype. 
+PASS getSortedOwnPropertyNames(Date) is ['UTC', 'arguments', 'caller', 'length', 'name', 'now', 'parse', 'prototype'] PASS getSortedOwnPropertyNames(Date.prototype) is ['constructor', 'getDate', 'getDay', 'getFullYear', 'getHours', 'getMilliseconds', 'getMinutes', 'getMonth', 'getSeconds', 'getTime', 'getTimezoneOffset', 'getUTCDate', 'getUTCDay', 'getUTCFullYear', 'getUTCHours', 'getUTCMilliseconds', 'getUTCMinutes', 'getUTCMonth', 'getUTCSeconds', 'getYear', 'setDate', 'setFullYear', 'setHours', 'setMilliseconds', 'setMinutes', 'setMonth', 'setSeconds', 'setTime', 'setUTCDate', 'setUTCFullYear', 'setUTCHours', 'setUTCMilliseconds', 'setUTCMinutes', 'setUTCMonth', 'setUTCSeconds', 'setYear', 'toDateString', 'toGMTString', 'toISOString', 'toJSON', 'toLocaleDateString', 'toLocaleString', 'toLocaleTimeString', 'toString', 'toTimeString', 'toUTCString', 'valueOf'] -FAIL getSortedOwnPropertyNames(RegExp) should be $&,$',$*,$+,$1,$2,$3,$4,$5,$6,$7,$8,$9,$_,$`,input,lastMatch,lastParen,leftContext,length,multiline,name,prototype,rightContext. Was $&,$',$*,$+,$1,$2,$3,$4,$5,$6,$7,$8,$9,$_,$`,$input,arguments,caller,input,lastMatch,lastParen,leftContext,length,multiline,name,prototype,rightContext. +FAIL getSortedOwnPropertyNames(RegExp) should be $&,$',$*,$+,$1,$2,$3,$4,$5,$6,$7,$8,$9,$_,$`,arguments,caller,input,lastMatch,lastParen,leftContext,length,multiline,name,prototype,rightContext. Was $&,$',$*,$+,$1,$2,$3,$4,$5,$6,$7,$8,$9,$_,$`,$input,arguments,caller,input,lastMatch,lastParen,leftContext,length,multiline,name,prototype,rightContext. PASS getSortedOwnPropertyNames(RegExp.prototype) is ['compile', 'constructor', 'exec', 'global', 'ignoreCase', 'lastIndex', 'multiline', 'source', 'test', 'toString'] -FAIL getSortedOwnPropertyNames(Error) should be length,name,prototype. Was arguments,caller,captureStackTrace,length,name,prototype,stackTraceLimit. +PASS getSortedOwnPropertyNames(Error) is ['arguments', 'caller', 'captureStackTrace', 'length', 'name', 'prototype', 'stackTraceLimit'] PASS getSortedOwnPropertyNames(Error.prototype) is ['constructor', 'message', 'name', 'toString'] -FAIL getSortedOwnPropertyNames(Math) should be E,LN10,LN2,LOG10E,LOG2E,PI,SQRT1_2,SQRT2,abs,acos,asin,atan,atan2,ceil,cos,exp,floor,log,max,min,pow,random,round,sin,sqrt,tan. Was E,LN10,LN2,LOG10E,LOG2E,PI,SQRT1_2,SQRT2,abs,acos,asin,atan,atan2,ceil,cos,exp,floor,imul,log,max,min,pow,random,round,sin,sqrt,tan. 
+PASS getSortedOwnPropertyNames(Math) is ['E', 'LN10', 'LN2', 'LOG10E', 'LOG2E', 'PI', 'SQRT1_2', 'SQRT2', 'abs', 'acos', 'acosh', 'asin', 'asinh', 'atan', 'atan2', 'atanh', 'cbrt', 'ceil', 'clz32', 'cos', 'cosh', 'exp', 'expm1', 'floor', 'fround', 'hypot', 'imul', 'log', 'log10', 'log1p', 'log2', 'max', 'min', 'pow', 'random', 'round', 'sign', 'sin', 'sinh', 'sqrt', 'tan', 'tanh', 'trunc'] PASS getSortedOwnPropertyNames(JSON) is ['parse', 'stringify'] PASS globalPropertyNames.indexOf('NaN') != -1 is true PASS globalPropertyNames.indexOf('Infinity') != -1 is true diff --git a/deps/v8/test/webkit/fast/js/Object-getOwnPropertyNames.js b/deps/v8/test/webkit/fast/js/Object-getOwnPropertyNames.js index 6373cf1ae06..c168c37b0ed 100644 --- a/deps/v8/test/webkit/fast/js/Object-getOwnPropertyNames.js +++ b/deps/v8/test/webkit/fast/js/Object-getOwnPropertyNames.js @@ -60,36 +60,36 @@ var expectedPropertyNamesSet = { "argumentsObject(1,2,3)": "['0', '1', '2', 'callee', 'length']", "(function(){arguments.__proto__=[1,2,3];return arguments;})()": "['callee', 'length']", // Built-in ECMA functions - "parseInt": "['length', 'name']", - "parseFloat": "['length', 'name']", - "isNaN": "['length', 'name']", - "isFinite": "['length', 'name']", - "escape": "['length', 'name']", - "unescape": "['length', 'name']", - "decodeURI": "['length', 'name']", - "decodeURIComponent": "['length', 'name']", - "encodeURI": "['length', 'name']", - "encodeURIComponent": "['length', 'name']", + "parseInt": "['arguments', 'caller', 'length', 'name']", + "parseFloat": "['arguments', 'caller', 'length', 'name']", + "isNaN": "['arguments', 'caller', 'length', 'name']", + "isFinite": "['arguments', 'caller', 'length', 'name']", + "escape": "['arguments', 'caller', 'length', 'name']", + "unescape": "['arguments', 'caller', 'length', 'name']", + "decodeURI": "['arguments', 'caller', 'length', 'name']", + "decodeURIComponent": "['arguments', 'caller', 'length', 'name']", + "encodeURI": "['arguments', 'caller', 'length', 'name']", + "encodeURIComponent": "['arguments', 'caller', 'length', 'name']", // Built-in ECMA objects - "Object": "['create', 'defineProperties', 'defineProperty', 'freeze', 'getOwnPropertyDescriptor', 'getOwnPropertyNames', 'getPrototypeOf', 'isExtensible', 'isFrozen', 'isSealed', 'keys', 'length', 'name', 'preventExtensions', 'prototype', 'seal', 'setPrototypeOf']", + "Object": "['arguments', 'caller', 'create', 'defineProperties', 'defineProperty', 'deliverChangeRecords', 'freeze', 'getNotifier', 'getOwnPropertyDescriptor', 'getOwnPropertyNames', 'getOwnPropertySymbols', 'getPrototypeOf', 'is', 'isExtensible', 'isFrozen', 'isSealed', 'keys', 'length', 'name', 'observe', 'preventExtensions', 'prototype', 'seal', 'setPrototypeOf', 'unobserve']", "Object.prototype": "['__defineGetter__', '__defineSetter__', '__lookupGetter__', '__lookupSetter__', '__proto__', 'constructor', 'hasOwnProperty', 'isPrototypeOf', 'propertyIsEnumerable', 'toLocaleString', 'toString', 'valueOf']", - "Function": "['length', 'name', 'prototype']", - "Function.prototype": "['apply', 'bind', 'call', 'constructor', 'length', 'name', 'toString']", - "Array": "['isArray', 'length', 'name', 'prototype']", - "Array.prototype": "['concat', 'constructor', 'every', 'filter', 'forEach', 'indexOf', 'join', 'lastIndexOf', 'length', 'map', 'pop', 'push', 'reduce', 'reduceRight', 'reverse', 'shift', 'slice', 'some', 'sort', 'splice', 'toLocaleString', 'toString', 'unshift']", - "String": "['fromCharCode', 'length', 'name', 'prototype']", + "Function": 
"['arguments', 'caller', 'length', 'name', 'prototype']", + "Function.prototype": "['apply', 'arguments', 'bind', 'call', 'caller', 'constructor', 'length', 'name', 'toString']", + "Array": "['arguments', 'caller', 'isArray', 'length', 'name', 'observe', 'prototype', 'unobserve']", + "Array.prototype": "['concat', 'constructor', 'entries', 'every', 'filter', 'forEach', 'indexOf', 'join', 'keys', 'lastIndexOf', 'length', 'map', 'pop', 'push', 'reduce', 'reduceRight', 'reverse', 'shift', 'slice', 'some', 'sort', 'splice', 'toLocaleString', 'toString', 'unshift', 'values']", + "String": "['arguments', 'caller', 'fromCharCode', 'length', 'name', 'prototype']", "String.prototype": "['anchor', 'big', 'blink', 'bold', 'charAt', 'charCodeAt', 'concat', 'constructor', 'fixed', 'fontcolor', 'fontsize', 'indexOf', 'italics', 'lastIndexOf', 'length', 'link', 'localeCompare', 'match', 'normalize', 'replace', 'search', 'slice', 'small', 'split', 'strike', 'sub', 'substr', 'substring', 'sup', 'toLocaleLowerCase', 'toLocaleUpperCase', 'toLowerCase', 'toString', 'toUpperCase', 'trim', 'trimLeft', 'trimRight', 'valueOf']", - "Boolean": "['length', 'name', 'prototype']", + "Boolean": "['arguments', 'caller', 'length', 'name', 'prototype']", "Boolean.prototype": "['constructor', 'toString', 'valueOf']", - "Number": "['MAX_VALUE', 'MIN_VALUE', 'NEGATIVE_INFINITY', 'NaN', 'POSITIVE_INFINITY', 'length', 'name', 'prototype']", + "Number": "['EPSILON', 'MAX_SAFE_INTEGER', 'MAX_VALUE', 'MIN_SAFE_INTEGER', 'MIN_VALUE', 'NEGATIVE_INFINITY', 'NaN', 'POSITIVE_INFINITY', 'arguments', 'caller', 'isFinite', 'isInteger', 'isNaN', 'isSafeInteger', 'length', 'name', 'parseFloat', 'parseInt', 'prototype']", "Number.prototype": "['constructor', 'toExponential', 'toFixed', 'toLocaleString', 'toPrecision', 'toString', 'valueOf']", - "Date": "['UTC', 'length', 'name', 'now', 'parse', 'prototype']", + "Date": "['UTC', 'arguments', 'caller', 'length', 'name', 'now', 'parse', 'prototype']", "Date.prototype": "['constructor', 'getDate', 'getDay', 'getFullYear', 'getHours', 'getMilliseconds', 'getMinutes', 'getMonth', 'getSeconds', 'getTime', 'getTimezoneOffset', 'getUTCDate', 'getUTCDay', 'getUTCFullYear', 'getUTCHours', 'getUTCMilliseconds', 'getUTCMinutes', 'getUTCMonth', 'getUTCSeconds', 'getYear', 'setDate', 'setFullYear', 'setHours', 'setMilliseconds', 'setMinutes', 'setMonth', 'setSeconds', 'setTime', 'setUTCDate', 'setUTCFullYear', 'setUTCHours', 'setUTCMilliseconds', 'setUTCMinutes', 'setUTCMonth', 'setUTCSeconds', 'setYear', 'toDateString', 'toGMTString', 'toISOString', 'toJSON', 'toLocaleDateString', 'toLocaleString', 'toLocaleTimeString', 'toString', 'toTimeString', 'toUTCString', 'valueOf']", - "RegExp": "['$&', \"$'\", '$*', '$+', '$1', '$2', '$3', '$4', '$5', '$6', '$7', '$8', '$9', '$_', '$`', 'input', 'lastMatch', 'lastParen', 'leftContext', 'length', 'multiline', 'name', 'prototype', 'rightContext']", + "RegExp": "['$&', \"$'\", '$*', '$+', '$1', '$2', '$3', '$4', '$5', '$6', '$7', '$8', '$9', '$_', '$`', 'arguments', 'caller', 'input', 'lastMatch', 'lastParen', 'leftContext', 'length', 'multiline', 'name', 'prototype', 'rightContext']", "RegExp.prototype": "['compile', 'constructor', 'exec', 'global', 'ignoreCase', 'lastIndex', 'multiline', 'source', 'test', 'toString']", - "Error": "['length', 'name', 'prototype']", + "Error": "['arguments', 'caller', 'captureStackTrace', 'length', 'name', 'prototype', 'stackTraceLimit']", "Error.prototype": "['constructor', 'message', 'name', 'toString']", - "Math": "['E', 'LN10', 
'LN2', 'LOG10E', 'LOG2E', 'PI', 'SQRT1_2', 'SQRT2', 'abs', 'acos', 'asin', 'atan', 'atan2', 'ceil', 'cos', 'exp', 'floor', 'log', 'max', 'min', 'pow', 'random', 'round', 'sin', 'sqrt', 'tan']", + "Math": "['E', 'LN10', 'LN2', 'LOG10E', 'LOG2E', 'PI', 'SQRT1_2', 'SQRT2', 'abs', 'acos', 'acosh', 'asin', 'asinh', 'atan', 'atan2', 'atanh', 'cbrt', 'ceil', 'clz32', 'cos', 'cosh', 'exp', 'expm1', 'floor', 'fround', 'hypot', 'imul', 'log', 'log10', 'log1p', 'log2', 'max', 'min', 'pow', 'random', 'round', 'sign', 'sin', 'sinh', 'sqrt', 'tan', 'tanh', 'trunc']", "JSON": "['parse', 'stringify']" }; diff --git a/deps/v8/test/webkit/fast/js/primitive-property-access-edge-cases-expected.txt b/deps/v8/test/webkit/fast/js/primitive-property-access-edge-cases-expected.txt index f07d273f339..cc273dfba01 100644 --- a/deps/v8/test/webkit/fast/js/primitive-property-access-edge-cases-expected.txt +++ b/deps/v8/test/webkit/fast/js/primitive-property-access-edge-cases-expected.txt @@ -29,15 +29,15 @@ On success, you will see a series of "PASS" messages, followed by "TEST COMPLETE PASS checkGet(1, Number) is true PASS checkGet('hello', String) is true PASS checkGet(true, Boolean) is true -FAIL checkSet(1, Number) should be true. Was false. -FAIL checkSet('hello', String) should be true. Was false. -FAIL checkSet(true, Boolean) should be true. Was false. +PASS checkSet(1, Number) is true +PASS checkSet('hello', String) is true +PASS checkSet(true, Boolean) is true PASS checkGetStrict(1, Number) is true PASS checkGetStrict('hello', String) is true PASS checkGetStrict(true, Boolean) is true -FAIL checkSetStrict(1, Number) should be true. Was false. -FAIL checkSetStrict('hello', String) should be true. Was false. -FAIL checkSetStrict(true, Boolean) should be true. Was false. +PASS checkSetStrict(1, Number) is true +PASS checkSetStrict('hello', String) is true +PASS checkSetStrict(true, Boolean) is true PASS checkRead(1, Number) is true PASS checkRead('hello', String) is true PASS checkRead(true, Boolean) is true @@ -47,9 +47,9 @@ PASS checkWrite(true, Boolean) is true PASS checkReadStrict(1, Number) is true PASS checkReadStrict('hello', String) is true PASS checkReadStrict(true, Boolean) is true -FAIL checkWriteStrict(1, Number) should throw an exception. Was true. -FAIL checkWriteStrict('hello', String) should throw an exception. Was true. -FAIL checkWriteStrict(true, Boolean) should throw an exception. Was true. +PASS checkWriteStrict(1, Number) threw exception TypeError: Cannot assign to read only property 'foo' of 1. +PASS checkWriteStrict('hello', String) threw exception TypeError: Cannot assign to read only property 'foo' of hello. +PASS checkWriteStrict(true, Boolean) threw exception TypeError: Cannot assign to read only property 'foo' of true. PASS checkNumericGet(1, Number) is true PASS checkNumericGet('hello', String) is true PASS checkNumericGet(true, Boolean) is true diff --git a/deps/v8/test/webkit/fast/js/read-modify-eval-expected.txt b/deps/v8/test/webkit/fast/js/read-modify-eval-expected.txt index 4a16d0a7a25..b375b378065 100644 --- a/deps/v8/test/webkit/fast/js/read-modify-eval-expected.txt +++ b/deps/v8/test/webkit/fast/js/read-modify-eval-expected.txt @@ -42,7 +42,7 @@ PASS preDecTest(); is true PASS postIncTest(); is true PASS postDecTest(); is true PASS primitiveThisTest.call(1); is true -FAIL strictThisTest.call(1); should throw an exception. Was true. +PASS strictThisTest.call(1); threw exception TypeError: Cannot assign to read only property 'value' of 1. 
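The flipped expectations above (checkSet, checkSetStrict, checkWriteStrict, strictThisTest) track V8 picking up the ES5 semantics for property writes on primitive receivers: in sloppy mode the write lands on a temporary wrapper object and is silently lost, while in strict mode it must throw a TypeError. A condensed sketch of what the suite exercises (the helper shapes here are assumptions; the real helpers live in the webkit harness):

    // Sloppy mode: 1 is boxed into a throwaway Number wrapper, so the write is lost.
    function checkSet(value) { value.foo = 42; return value.foo === undefined; }
    checkSet(1);  // true

    // Strict mode: assigning to a property of a primitive throws.
    function checkSetStrict(value) { "use strict"; value.foo = 42; }
    try { checkSetStrict(1); } catch (e) { /* e instanceof TypeError: true */ }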
PASS successfullyParsed is true TEST COMPLETE diff --git a/deps/v8/test/webkit/fast/js/string-anchor-expected.txt b/deps/v8/test/webkit/fast/js/string-anchor-expected.txt index 3a50054f117..91a8338035a 100644 --- a/deps/v8/test/webkit/fast/js/string-anchor-expected.txt +++ b/deps/v8/test/webkit/fast/js/string-anchor-expected.txt @@ -32,8 +32,8 @@ PASS '_'.anchor(0x2A) is "<a name=\"42\">_</a>" PASS '_'.anchor('"') is "<a name=\""\">_</a>" PASS '_'.anchor('" href="http://www.evil.com') is "<a name=\"" href="http://www.evil.com\">_</a>" PASS String.prototype.anchor.call(0x2A, 0x2A) is "<a name=\"42\">42</a>" -FAIL String.prototype.anchor.call(undefined) should throw TypeError: Type error. Was <a name="undefined">undefined</a>. -FAIL String.prototype.anchor.call(null) should throw TypeError: Type error. Was <a name="undefined">null</a>. +FAIL String.prototype.anchor.call(undefined) should throw TypeError: Type error. Threw exception TypeError: String.prototype.anchor called on null or undefined. +FAIL String.prototype.anchor.call(null) should throw TypeError: Type error. Threw exception TypeError: String.prototype.anchor called on null or undefined. PASS String.prototype.anchor.length is 1 PASS successfullyParsed is true diff --git a/deps/v8/test/webkit/fast/js/string-fontcolor-expected.txt b/deps/v8/test/webkit/fast/js/string-fontcolor-expected.txt index af2c707f375..2ffda69a6f7 100644 --- a/deps/v8/test/webkit/fast/js/string-fontcolor-expected.txt +++ b/deps/v8/test/webkit/fast/js/string-fontcolor-expected.txt @@ -32,8 +32,8 @@ PASS '_'.fontcolor(0x2A) is "<font color=\"42\">_</font>" PASS '_'.fontcolor('"') is "<font color=\""\">_</font>" PASS '_'.fontcolor('" size="2px') is "<font color=\"" size="2px\">_</font>" PASS String.prototype.fontcolor.call(0x2A, 0x2A) is "<font color=\"42\">42</font>" -FAIL String.prototype.fontcolor.call(undefined) should throw TypeError: Type error. Was <font color="undefined">undefined</font>. -FAIL String.prototype.fontcolor.call(null) should throw TypeError: Type error. Was <font color="undefined">null</font>. +FAIL String.prototype.fontcolor.call(undefined) should throw TypeError: Type error. Threw exception TypeError: String.prototype.fontcolor called on null or undefined. +FAIL String.prototype.fontcolor.call(null) should throw TypeError: Type error. Threw exception TypeError: String.prototype.fontcolor called on null or undefined. PASS String.prototype.fontcolor.length is 1 PASS successfullyParsed is true diff --git a/deps/v8/test/webkit/fast/js/string-fontsize-expected.txt b/deps/v8/test/webkit/fast/js/string-fontsize-expected.txt index c114f74b15f..656f7fa7fb4 100644 --- a/deps/v8/test/webkit/fast/js/string-fontsize-expected.txt +++ b/deps/v8/test/webkit/fast/js/string-fontsize-expected.txt @@ -33,8 +33,8 @@ PASS '_'.fontsize(0x2A) is "<font size=\"42\">_</font>" PASS '_'.fontsize('"') is "<font size=\""\">_</font>" PASS '_'.fontsize('" color="b') is "<font size=\"" color="b\">_</font>" PASS String.prototype.fontsize.call(0x2A, 0x2A) is "<font size=\"42\">42</font>" -FAIL String.prototype.fontsize.call(undefined) should throw TypeError: Type error. Was <font size="undefined">undefined</font>. -FAIL String.prototype.fontsize.call(null) should throw TypeError: Type error. Was <font size="undefined">null</font>. +FAIL String.prototype.fontsize.call(undefined) should throw TypeError: Type error. Threw exception TypeError: String.prototype.fontsize called on null or undefined. +FAIL String.prototype.fontsize.call(null) should throw TypeError: Type error. 
Threw exception TypeError: String.prototype.fontsize called on null or undefined. PASS String.prototype.fontsize.length is 1 PASS successfullyParsed is true diff --git a/deps/v8/test/webkit/fast/js/string-link-expected.txt b/deps/v8/test/webkit/fast/js/string-link-expected.txt index afacbe6bb91..2443bd4bcd1 100644 --- a/deps/v8/test/webkit/fast/js/string-link-expected.txt +++ b/deps/v8/test/webkit/fast/js/string-link-expected.txt @@ -33,8 +33,8 @@ PASS '_'.link(0x2A) is "<a href=\"42\">_</a>" PASS '_'.link('"') is "<a href=\""\">_</a>" PASS '_'.link('" target="_blank') is "<a href=\"" target="_blank\">_</a>" PASS String.prototype.link.call(0x2A, 0x2A) is "<a href=\"42\">42</a>" -FAIL String.prototype.link.call(undefined) should throw TypeError: Type error. Was <a href="undefined">undefined</a>. -FAIL String.prototype.link.call(null) should throw TypeError: Type error. Was <a href="undefined">null</a>. +FAIL String.prototype.link.call(undefined) should throw TypeError: Type error. Threw exception TypeError: String.prototype.link called on null or undefined. +FAIL String.prototype.link.call(null) should throw TypeError: Type error. Threw exception TypeError: String.prototype.link called on null or undefined. PASS String.prototype.link.length is 1 PASS successfullyParsed is true diff --git a/deps/v8/test/webkit/for-in-cached-expected.txt b/deps/v8/test/webkit/for-in-cached-expected.txt index 0d0c337cf14..e5538fe3b80 100644 --- a/deps/v8/test/webkit/for-in-cached-expected.txt +++ b/deps/v8/test/webkit/for-in-cached-expected.txt @@ -34,8 +34,8 @@ PASS forIn3({ __proto__: { __proto__: { y3 : 2 } } }) is ['x', 'y3'] PASS forIn4(objectWithArrayAsProto) is [] PASS forIn4(objectWithArrayAsProto) is ['0'] PASS forIn5({get foo() { return 'called getter'} }) is ['foo', 'called getter'] -PASS forIn5({set foo() { } }) is ['foo', undefined] -PASS forIn5({get foo() { return 'called getter'}, set foo() { }}) is ['foo', 'called getter'] +PASS forIn5({set foo(v) { } }) is ['foo', undefined] +PASS forIn5({get foo() { return 'called getter'}, set foo(v) { }}) is ['foo', 'called getter'] PASS successfullyParsed is true TEST COMPLETE diff --git a/deps/v8/test/webkit/for-in-cached.js b/deps/v8/test/webkit/for-in-cached.js index 1842d612931..760a5b45890 100644 --- a/deps/v8/test/webkit/for-in-cached.js +++ b/deps/v8/test/webkit/for-in-cached.js @@ -84,8 +84,8 @@ function forIn5(o) { } shouldBe("forIn5({get foo() { return 'called getter'} })", "['foo', 'called getter']"); -shouldBe("forIn5({set foo() { } })", "['foo', undefined]"); -shouldBe("forIn5({get foo() { return 'called getter'}, set foo() { }})", "['foo', 'called getter']"); +shouldBe("forIn5({set foo(v) { } })", "['foo', undefined]"); +shouldBe("forIn5({get foo() { return 'called getter'}, set foo(v) { }})", "['foo', 'called getter']"); function cacheClearing() { for(var j=0; j < 10; j++){ diff --git a/deps/v8/test/webkit/object-literal-direct-put-expected.txt b/deps/v8/test/webkit/object-literal-direct-put-expected.txt index 46793d20da9..3a19f0a0d43 100644 --- a/deps/v8/test/webkit/object-literal-direct-put-expected.txt +++ b/deps/v8/test/webkit/object-literal-direct-put-expected.txt @@ -28,11 +28,11 @@ On success, you will see a series of "PASS" messages, followed by "TEST COMPLETE PASS ({a:true}).a is true PASS ({__proto__: {a:false}, a:true}).a is true -PASS ({__proto__: {set a() {throw 'Should not call setter'; }}, a:true}).a is true +PASS ({__proto__: {set a(v) {throw 'Should not call setter'; }}, a:true}).a is true PASS ({__proto__: {get a() {throw 
'Should not reach getter'; }}, a:true}).a is true PASS ({__proto__: {get a() {throw 'Should not reach getter'; }, b:true}, a:true}).b is true PASS ({__proto__: {__proto__: {a:false}}, a:true}).a is true -PASS ({__proto__: {__proto__: {set a() {throw 'Should not call setter'; }}}, a:true}).a is true +PASS ({__proto__: {__proto__: {set a(v) {throw 'Should not call setter'; }}}, a:true}).a is true PASS ({__proto__: {__proto__: {get a() {throw 'Should not reach getter'; }}}, a:true}).a is true PASS ({__proto__: {__proto__: {get a() {throw 'Should not reach getter'; }, b:true}}, a:true}).b is true PASS successfullyParsed is true diff --git a/deps/v8/test/webkit/object-literal-direct-put.js b/deps/v8/test/webkit/object-literal-direct-put.js index 69c085f06b9..99f0a60c005 100644 --- a/deps/v8/test/webkit/object-literal-direct-put.js +++ b/deps/v8/test/webkit/object-literal-direct-put.js @@ -25,11 +25,11 @@ description("This test ensures that properties on an object literal are put dire shouldBeTrue("({a:true}).a"); shouldBeTrue("({__proto__: {a:false}, a:true}).a"); -shouldBeTrue("({__proto__: {set a() {throw 'Should not call setter'; }}, a:true}).a"); +shouldBeTrue("({__proto__: {set a(v) {throw 'Should not call setter'; }}, a:true}).a"); shouldBeTrue("({__proto__: {get a() {throw 'Should not reach getter'; }}, a:true}).a"); shouldBeTrue("({__proto__: {get a() {throw 'Should not reach getter'; }, b:true}, a:true}).b"); shouldBeTrue("({__proto__: {__proto__: {a:false}}, a:true}).a"); -shouldBeTrue("({__proto__: {__proto__: {set a() {throw 'Should not call setter'; }}}, a:true}).a"); +shouldBeTrue("({__proto__: {__proto__: {set a(v) {throw 'Should not call setter'; }}}, a:true}).a"); shouldBeTrue("({__proto__: {__proto__: {get a() {throw 'Should not reach getter'; }}}, a:true}).a"); shouldBeTrue("({__proto__: {__proto__: {get a() {throw 'Should not reach getter'; }, b:true}}, a:true}).b"); diff --git a/deps/v8/test/webkit/object-literal-syntax-expected.txt b/deps/v8/test/webkit/object-literal-syntax-expected.txt index 13b3499cec2..f9764454c57 100644 --- a/deps/v8/test/webkit/object-literal-syntax-expected.txt +++ b/deps/v8/test/webkit/object-literal-syntax-expected.txt @@ -27,25 +27,25 @@ On success, you will see a series of "PASS" messages, followed by "TEST COMPLETE PASS ({a:1, get a(){}}) threw exception SyntaxError: Object literal may not have data and accessor property with the same name. -PASS ({a:1, set a(){}}) threw exception SyntaxError: Object literal may not have data and accessor property with the same name. +PASS ({a:1, set a(v){}}) threw exception SyntaxError: Object literal may not have data and accessor property with the same name. PASS ({get a(){}, a:1}) threw exception SyntaxError: Object literal may not have data and accessor property with the same name. -PASS ({set a(){}, a:1}) threw exception SyntaxError: Object literal may not have data and accessor property with the same name. +PASS ({set a(v){}, a:1}) threw exception SyntaxError: Object literal may not have data and accessor property with the same name. PASS ({get a(){}, get a(){}}) threw exception SyntaxError: Object literal may not have multiple get/set accessors with the same name. -PASS ({set a(){}, set a(){}}) threw exception SyntaxError: Object literal may not have multiple get/set accessors with the same name. -PASS ({set a(){}, get a(){}, set a(){}}) threw exception SyntaxError: Object literal may not have multiple get/set accessors with the same name. 
+PASS ({set a(v){}, set a(v){}}) threw exception SyntaxError: Object literal may not have multiple get/set accessors with the same name. +PASS ({set a(v){}, get a(){}, set a(v){}}) threw exception SyntaxError: Object literal may not have multiple get/set accessors with the same name. PASS (function(){({a:1, get a(){}})}) threw exception SyntaxError: Object literal may not have data and accessor property with the same name. -PASS (function(){({a:1, set a(){}})}) threw exception SyntaxError: Object literal may not have data and accessor property with the same name. +PASS (function(){({a:1, set a(v){}})}) threw exception SyntaxError: Object literal may not have data and accessor property with the same name. PASS (function(){({get a(){}, a:1})}) threw exception SyntaxError: Object literal may not have data and accessor property with the same name. -PASS (function(){({set a(){}, a:1})}) threw exception SyntaxError: Object literal may not have data and accessor property with the same name. +PASS (function(){({set a(v){}, a:1})}) threw exception SyntaxError: Object literal may not have data and accessor property with the same name. PASS (function(){({get a(){}, get a(){}})}) threw exception SyntaxError: Object literal may not have multiple get/set accessors with the same name. -PASS (function(){({set a(){}, set a(){}})}) threw exception SyntaxError: Object literal may not have multiple get/set accessors with the same name. -PASS (function(){({set a(){}, get a(){}, set a(){}})}) threw exception SyntaxError: Object literal may not have multiple get/set accessors with the same name. +PASS (function(){({set a(v){}, set a(v){}})}) threw exception SyntaxError: Object literal may not have multiple get/set accessors with the same name. +PASS (function(){({set a(v){}, get a(){}, set a(v){}})}) threw exception SyntaxError: Object literal may not have multiple get/set accessors with the same name. 
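Every `set a(){}` to `set a(v){}` rewrite in these webkit tests exists because this V8 version enforces the ES5 accessor grammar at parse time: a get accessor takes no formal parameters and a set accessor takes exactly one. A minimal illustration, assuming a V8-based shell:

    try {
      eval("({ set a() {} })");   // zero-parameter setter: rejected by the parser
    } catch (e) {
      // e instanceof SyntaxError: true
    }
    eval("({ set a(v) {} })");    // exactly one parameter: parses fine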
PASS ({a:1, a:1, a:1}), true is true -PASS ({get a(){}, set a(){}}), true is true -PASS ({set a(){}, get a(){}}), true is true +PASS ({get a(){}, set a(v){}}), true is true +PASS ({set a(v){}, get a(){}}), true is true PASS (function(){({a:1, a:1, a:1})}), true is true -PASS (function(){({get a(){}, set a(){}})}), true is true -PASS (function(){({set a(){}, get a(){}})}), true is true +PASS (function(){({get a(){}, set a(v){}})}), true is true +PASS (function(){({set a(v){}, get a(){}})}), true is true PASS successfullyParsed is true TEST COMPLETE diff --git a/deps/v8/test/webkit/object-literal-syntax.js b/deps/v8/test/webkit/object-literal-syntax.js index 6884bec4028..e9cc2dd8c51 100644 --- a/deps/v8/test/webkit/object-literal-syntax.js +++ b/deps/v8/test/webkit/object-literal-syntax.js @@ -24,22 +24,22 @@ description("Make sure that we correctly identify parse errors in object literals"); shouldThrow("({a:1, get a(){}})"); -shouldThrow("({a:1, set a(){}})"); +shouldThrow("({a:1, set a(v){}})"); shouldThrow("({get a(){}, a:1})"); -shouldThrow("({set a(){}, a:1})"); +shouldThrow("({set a(v){}, a:1})"); shouldThrow("({get a(){}, get a(){}})"); -shouldThrow("({set a(){}, set a(){}})"); -shouldThrow("({set a(){}, get a(){}, set a(){}})"); +shouldThrow("({set a(v){}, set a(v){}})"); +shouldThrow("({set a(v){}, get a(){}, set a(v){}})"); shouldThrow("(function(){({a:1, get a(){}})})"); -shouldThrow("(function(){({a:1, set a(){}})})"); +shouldThrow("(function(){({a:1, set a(v){}})})"); shouldThrow("(function(){({get a(){}, a:1})})"); -shouldThrow("(function(){({set a(){}, a:1})})"); +shouldThrow("(function(){({set a(v){}, a:1})})"); shouldThrow("(function(){({get a(){}, get a(){}})})"); -shouldThrow("(function(){({set a(){}, set a(){}})})"); -shouldThrow("(function(){({set a(){}, get a(){}, set a(){}})})"); +shouldThrow("(function(){({set a(v){}, set a(v){}})})"); +shouldThrow("(function(){({set a(v){}, get a(){}, set a(v){}})})"); shouldBeTrue("({a:1, a:1, a:1}), true"); -shouldBeTrue("({get a(){}, set a(){}}), true"); -shouldBeTrue("({set a(){}, get a(){}}), true"); +shouldBeTrue("({get a(){}, set a(v){}}), true"); +shouldBeTrue("({set a(v){}, get a(){}}), true"); shouldBeTrue("(function(){({a:1, a:1, a:1})}), true"); -shouldBeTrue("(function(){({get a(){}, set a(){}})}), true"); -shouldBeTrue("(function(){({set a(){}, get a(){}})}), true"); +shouldBeTrue("(function(){({get a(){}, set a(v){}})}), true"); +shouldBeTrue("(function(){({set a(v){}, get a(){}})}), true"); diff --git a/deps/v8/test/webkit/string-replacement-outofmemory-expected.txt b/deps/v8/test/webkit/string-replacement-outofmemory-expected.txt index 68ac2179660..946b248ed68 100644 --- a/deps/v8/test/webkit/string-replacement-outofmemory-expected.txt +++ b/deps/v8/test/webkit/string-replacement-outofmemory-expected.txt @@ -21,3 +21,12 @@ # (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS # SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. +This tests that string replacement with a large replacement string causes an out-of-memory exception. See bug 102956 for more details. + +On success, you will see a series of "PASS" messages, followed by "TEST COMPLETE". + + +PASS x.replace(/\d/g, y) threw exception RangeError: Invalid string length. 
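The new expectation reflects V8 reporting the overflow as a RangeError ("Invalid string length") when the replacement result would exceed the maximum string length, rather than failing with a generic out-of-memory Error. A sketch of the doubling helper the test relies on (the name comes from the test itself; this body is an assumption):

    function createStringWithRepeatedChar(str, len) {
      while (str.length < len) str += str;  // double until long enough
      return str.substring(0, len);
    }
    var x = createStringWithRepeatedChar("1", 1 << 12);        // 4096 digits
    var y = createStringWithRepeatedChar("2", (1 << 20) + 1);  // ~2^20 characters
    // x.replace(/\d/g, y) would need a result of roughly 2^32 characters,
    // far past V8's maximum string length, hence the RangeError.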
+PASS successfullyParsed is true + +TEST COMPLETE diff --git a/deps/v8/test/webkit/string-replacement-outofmemory.js b/deps/v8/test/webkit/string-replacement-outofmemory.js index 2b8e18a8541..becfdc6a1f6 100644 --- a/deps/v8/test/webkit/string-replacement-outofmemory.js +++ b/deps/v8/test/webkit/string-replacement-outofmemory.js @@ -37,5 +37,5 @@ var y = "2"; x = createStringWithRepeatedChar(x, 1 << 12); y = createStringWithRepeatedChar(y, (1 << 20) + 1); -shouldThrow("x.replace(/\\d/g, y)", '"Error: Out of memory"'); +shouldThrow("x.replace(/\\d/g, y)", '"RangeError: Invalid string length"'); var successfullyParsed = true; diff --git a/deps/v8/test/webkit/webkit.status b/deps/v8/test/webkit/webkit.status index a6bf845d00d..c14d5c13c46 100644 --- a/deps/v8/test/webkit/webkit.status +++ b/deps/v8/test/webkit/webkit.status @@ -27,16 +27,18 @@ [ [ALWAYS, { - # BUG(237872). TODO(bmeurer): Investigate. - 'string-replacement-outofmemory': [FAIL], - - ############################################################################## # Flaky tests. # BUG(v8:2989). 'dfg-inline-arguments-become-double': [PASS, FAIL], 'dfg-inline-arguments-become-int32': [PASS, FAIL], 'dfg-inline-arguments-reset': [PASS, FAIL], 'dfg-inline-arguments-reset-changetype': [PASS, FAIL], + # TODO(turbofan): Sometimes the try-catch blacklist fails. + 'exception-with-handler-inside-eval-with-dynamic-scope': [PASS, NO_VARIANTS], + # TODO(turbofan): We run out of stack earlier on 64-bit for now. + 'fast/js/deep-recursion-test': [PASS, NO_VARIANTS], + # TODO(bmeurer,svenpanne): Investigate test failure. + 'fast/js/toString-number': [SKIP], }], # ALWAYS ['mode == debug', { # Too slow in debug mode. @@ -51,4 +53,13 @@ ['arch == arm64 and simulator_run == True', { 'dfg-int-overflow-in-loop': [SKIP], }], # 'arch == arm64 and simulator_run == True' + + +############################################################################## +['gc_stress == True', { + # Tests taking too long + 'fast/js/excessive-comma-usage': [SKIP] +}], # 'gc_stress == True' + +############################################################################## ] diff --git a/deps/v8/testing/gmock.gyp b/deps/v8/testing/gmock.gyp new file mode 100644 index 00000000000..a36584dcf7f --- /dev/null +++ b/deps/v8/testing/gmock.gyp @@ -0,0 +1,62 @@ +# Copyright 2014 the V8 project authors. All rights reserved. +# Use of this source code is governed by a BSD-style license that can be +# found in the LICENSE file. + +{ + 'targets': [ + { + 'target_name': 'gmock', + 'type': 'static_library', + 'dependencies': [ + 'gtest.gyp:gtest', + ], + 'sources': [ + # Sources based on files in r173 of gmock. 
+ 'gmock/include/gmock/gmock-actions.h', + 'gmock/include/gmock/gmock-cardinalities.h', + 'gmock/include/gmock/gmock-generated-actions.h', + 'gmock/include/gmock/gmock-generated-function-mockers.h', + 'gmock/include/gmock/gmock-generated-matchers.h', + 'gmock/include/gmock/gmock-generated-nice-strict.h', + 'gmock/include/gmock/gmock-matchers.h', + 'gmock/include/gmock/gmock-spec-builders.h', + 'gmock/include/gmock/gmock.h', + 'gmock/include/gmock/internal/gmock-generated-internal-utils.h', + 'gmock/include/gmock/internal/gmock-internal-utils.h', + 'gmock/include/gmock/internal/gmock-port.h', + 'gmock/src/gmock-all.cc', + 'gmock/src/gmock-cardinalities.cc', + 'gmock/src/gmock-internal-utils.cc', + 'gmock/src/gmock-matchers.cc', + 'gmock/src/gmock-spec-builders.cc', + 'gmock/src/gmock.cc', + 'gmock_mutant.h', # gMock helpers + ], + 'sources!': [ + 'gmock/src/gmock-all.cc', # Not needed by our build. + ], + 'include_dirs': [ + 'gmock', + 'gmock/include', + ], + 'direct_dependent_settings': { + 'include_dirs': [ + 'gmock/include', # So that gmock headers can find themselves. + ], + }, + 'export_dependent_settings': [ + 'gtest.gyp:gtest', + ], + }, + { + 'target_name': 'gmock_main', + 'type': 'static_library', + 'dependencies': [ + 'gmock', + ], + 'sources': [ + 'gmock/src/gmock_main.cc', + ], + }, + ], +} diff --git a/deps/v8/testing/gtest-type-names.h b/deps/v8/testing/gtest-type-names.h new file mode 100644 index 00000000000..ba900ddb888 --- /dev/null +++ b/deps/v8/testing/gtest-type-names.h @@ -0,0 +1,34 @@ +// Copyright 2014 the V8 project authors. All rights reserved. +// Use of this source code is governed by a BSD-style license that can be +// found in the LICENSE file. + +#ifndef V8_TESTING_GTEST_TYPE_NAMES_H_ +#define V8_TESTING_GTEST_TYPE_NAMES_H_ + +#include "include/v8stdint.h" +#include "testing/gtest/include/gtest/gtest.h" + +namespace testing { +namespace internal { + +#define GET_TYPE_NAME(type) \ + template <> \ + std::string GetTypeName<type>() { \ + return #type; \ + } +GET_TYPE_NAME(int8_t) +GET_TYPE_NAME(uint8_t) +GET_TYPE_NAME(int16_t) +GET_TYPE_NAME(uint16_t) +GET_TYPE_NAME(int32_t) +GET_TYPE_NAME(uint32_t) +GET_TYPE_NAME(int64_t) +GET_TYPE_NAME(uint64_t) +GET_TYPE_NAME(float) +GET_TYPE_NAME(double) +#undef GET_TYPE_NAME + +} // namespace internal +} // namespace testing + +#endif // V8_TESTING_GTEST_TYPE_NAMES_H_ diff --git a/deps/v8/testing/gtest.gyp b/deps/v8/testing/gtest.gyp new file mode 100644 index 00000000000..5d068d0257e --- /dev/null +++ b/deps/v8/testing/gtest.gyp @@ -0,0 +1,159 @@ +# Copyright 2014 the V8 project authors. All rights reserved. +# Use of this source code is governed by a BSD-style license that can be +# found in the LICENSE file. 
+ +{ + 'targets': [ + { + 'target_name': 'gtest', + 'toolsets': ['host', 'target'], + 'type': 'static_library', + 'sources': [ + 'gtest/include/gtest/gtest-death-test.h', + 'gtest/include/gtest/gtest-message.h', + 'gtest/include/gtest/gtest-param-test.h', + 'gtest/include/gtest/gtest-printers.h', + 'gtest/include/gtest/gtest-spi.h', + 'gtest/include/gtest/gtest-test-part.h', + 'gtest/include/gtest/gtest-typed-test.h', + 'gtest/include/gtest/gtest.h', + 'gtest/include/gtest/gtest_pred_impl.h', + 'gtest/include/gtest/internal/gtest-death-test-internal.h', + 'gtest/include/gtest/internal/gtest-filepath.h', + 'gtest/include/gtest/internal/gtest-internal.h', + 'gtest/include/gtest/internal/gtest-linked_ptr.h', + 'gtest/include/gtest/internal/gtest-param-util-generated.h', + 'gtest/include/gtest/internal/gtest-param-util.h', + 'gtest/include/gtest/internal/gtest-port.h', + 'gtest/include/gtest/internal/gtest-string.h', + 'gtest/include/gtest/internal/gtest-tuple.h', + 'gtest/include/gtest/internal/gtest-type-util.h', + 'gtest/src/gtest-all.cc', + 'gtest/src/gtest-death-test.cc', + 'gtest/src/gtest-filepath.cc', + 'gtest/src/gtest-internal-inl.h', + 'gtest/src/gtest-port.cc', + 'gtest/src/gtest-printers.cc', + 'gtest/src/gtest-test-part.cc', + 'gtest/src/gtest-typed-test.cc', + 'gtest/src/gtest.cc', + 'gtest-type-names.h', + ], + 'sources!': [ + 'gtest/src/gtest-all.cc', # Not needed by our build. + ], + 'include_dirs': [ + 'gtest', + 'gtest/include', + ], + 'dependencies': [ + 'gtest_prod', + ], + 'defines': [ + # In order to allow regex matches in gtest to be shared between Windows + # and other systems, we tell gtest to always use it's internal engine. + 'GTEST_HAS_POSIX_RE=0', + # Chrome doesn't support / require C++11, yet. + 'GTEST_LANG_CXX11=0', + ], + 'all_dependent_settings': { + 'defines': [ + 'GTEST_HAS_POSIX_RE=0', + 'GTEST_LANG_CXX11=0', + ], + }, + 'conditions': [ + ['os_posix == 1', { + 'defines': [ + # gtest isn't able to figure out when RTTI is disabled for gcc + # versions older than 4.3.2, and assumes it's enabled. Our Mac + # and Linux builds disable RTTI, and cannot guarantee that the + # compiler will be 4.3.2. or newer. The Mac, for example, uses + # 4.2.1 as that is the latest available on that platform. gtest + # must be instructed that RTTI is disabled here, and for any + # direct dependents that might include gtest headers. + 'GTEST_HAS_RTTI=0', + ], + 'direct_dependent_settings': { + 'defines': [ + 'GTEST_HAS_RTTI=0', + ], + }, + }], + ['OS=="android"', { + 'defines': [ + 'GTEST_HAS_CLONE=0', + ], + 'direct_dependent_settings': { + 'defines': [ + 'GTEST_HAS_CLONE=0', + ], + }, + }], + ['OS=="android"', { + # We want gtest features that use tr1::tuple, but we currently + # don't support the variadic templates used by libstdc++'s + # implementation. gtest supports this scenario by providing its + # own implementation but we must opt in to it. + 'defines': [ + 'GTEST_USE_OWN_TR1_TUPLE=1', + # GTEST_USE_OWN_TR1_TUPLE only works if GTEST_HAS_TR1_TUPLE is set. + # gtest r625 made it so that GTEST_HAS_TR1_TUPLE is set to 0 + # automatically on android, so it has to be set explicitly here. + 'GTEST_HAS_TR1_TUPLE=1', + ], + 'direct_dependent_settings': { + 'defines': [ + 'GTEST_USE_OWN_TR1_TUPLE=1', + 'GTEST_HAS_TR1_TUPLE=1', + ], + }, + }], + ], + 'direct_dependent_settings': { + 'defines': [ + 'UNIT_TEST', + ], + 'include_dirs': [ + 'gtest/include', # So that gtest headers can find themselves. 
+      ],
+      'target_conditions': [
+        ['_type=="executable"', {
+          'test': 1,
+          'conditions': [
+            ['OS=="mac"', {
+              'run_as': {
+                'action????': ['${BUILT_PRODUCTS_DIR}/${PRODUCT_NAME}'],
+              },
+            }],
+            ['OS=="win"', {
+              'run_as': {
+                'action????': ['$(TargetPath)', '--gtest_print_time'],
+              },
+            }],
+          ],
+        }],
+      ],
+      'msvs_disabled_warnings': [4800],
+    },
+  },
+  {
+    'target_name': 'gtest_main',
+    'type': 'static_library',
+    'dependencies': [
+      'gtest',
+    ],
+    'sources': [
+      'gtest/src/gtest_main.cc',
+    ],
+  },
+  {
+    'target_name': 'gtest_prod',
+    'toolsets': ['host', 'target'],
+    'type': 'none',
+    'sources': [
+      'gtest/include/gtest/gtest_prod.h',
+    ],
+  },
+  ],
+}
diff --git a/deps/v8/third_party/fdlibm/LICENSE b/deps/v8/third_party/fdlibm/LICENSE
new file mode 100644
index 00000000000..b0247953f80
--- /dev/null
+++ b/deps/v8/third_party/fdlibm/LICENSE
@@ -0,0 +1,6 @@
+Copyright (C) 1993 by Sun Microsystems, Inc. All rights reserved.
+
+Developed at SunSoft, a Sun Microsystems, Inc. business.
+Permission to use, copy, modify, and distribute this
+software is freely granted, provided that this notice
+is preserved.
diff --git a/deps/v8/third_party/fdlibm/README.v8 b/deps/v8/third_party/fdlibm/README.v8
new file mode 100644
index 00000000000..ea8fdb6ce10
--- /dev/null
+++ b/deps/v8/third_party/fdlibm/README.v8
@@ -0,0 +1,18 @@
+Name: Freely Distributable LIBM
+Short Name: fdlibm
+URL: http://www.netlib.org/fdlibm/
+Version: 5.3
+License: Freely Distributable.
+License File: LICENSE.
+Security Critical: yes.
+License Android Compatible: yes.
+
+Description:
+This is used to provide an accurate implementation of trigonometric functions
+used in V8.
+
+Local Modifications:
+For use in V8, fdlibm has been reduced to include only sine, cosine and
+tangent. To make inlining into generated code possible, a large portion of
+that has been translated to JavaScript. The rest remains in C, but has been
+refactored and reformatted to interoperate with the rest of V8.
diff --git a/deps/v8/third_party/fdlibm/fdlibm.cc b/deps/v8/third_party/fdlibm/fdlibm.cc
new file mode 100644
index 00000000000..2f6eab17e8a
--- /dev/null
+++ b/deps/v8/third_party/fdlibm/fdlibm.cc
@@ -0,0 +1,273 @@
+// The following is adapted from fdlibm (http://www.netlib.org/fdlibm).
+//
+// ====================================================
+// Copyright (C) 1993 by Sun Microsystems, Inc. All rights reserved.
+//
+// Developed at SunSoft, a Sun Microsystems, Inc. business.
+// Permission to use, copy, modify, and distribute this
+// software is freely granted, provided that this notice
+// is preserved.
+// ====================================================
+//
+// The original source code covered by the above license has been
+// modified significantly by Google Inc.
+// Copyright 2014 the V8 project authors. All rights reserved.
+ +#include "src/v8.h" + +#include "src/double.h" +#include "third_party/fdlibm/fdlibm.h" + + +namespace v8 { +namespace fdlibm { + +#ifdef _MSC_VER +inline double scalbn(double x, int y) { return _scalb(x, y); } +#endif // _MSC_VER + +const double MathConstants::constants[] = { + 6.36619772367581382433e-01, // invpio2 0 + 1.57079632673412561417e+00, // pio2_1 1 + 6.07710050650619224932e-11, // pio2_1t 2 + 6.07710050630396597660e-11, // pio2_2 3 + 2.02226624879595063154e-21, // pio2_2t 4 + 2.02226624871116645580e-21, // pio2_3 5 + 8.47842766036889956997e-32, // pio2_3t 6 + -1.66666666666666324348e-01, // S1 7 + 8.33333333332248946124e-03, // 8 + -1.98412698298579493134e-04, // 9 + 2.75573137070700676789e-06, // 10 + -2.50507602534068634195e-08, // 11 + 1.58969099521155010221e-10, // S6 12 + 4.16666666666666019037e-02, // C1 13 + -1.38888888888741095749e-03, // 14 + 2.48015872894767294178e-05, // 15 + -2.75573143513906633035e-07, // 16 + 2.08757232129817482790e-09, // 17 + -1.13596475577881948265e-11, // C6 18 + 3.33333333333334091986e-01, // T0 19 + 1.33333333333201242699e-01, // 20 + 5.39682539762260521377e-02, // 21 + 2.18694882948595424599e-02, // 22 + 8.86323982359930005737e-03, // 23 + 3.59207910759131235356e-03, // 24 + 1.45620945432529025516e-03, // 25 + 5.88041240820264096874e-04, // 26 + 2.46463134818469906812e-04, // 27 + 7.81794442939557092300e-05, // 28 + 7.14072491382608190305e-05, // 29 + -1.85586374855275456654e-05, // 30 + 2.59073051863633712884e-05, // T12 31 + 7.85398163397448278999e-01, // pio4 32 + 3.06161699786838301793e-17, // pio4lo 33 + 6.93147180369123816490e-01, // ln2_hi 34 + 1.90821492927058770002e-10, // ln2_lo 35 + 1.80143985094819840000e+16, // 2^54 36 + 6.666666666666666666e-01, // 2/3 37 + 6.666666666666735130e-01, // LP1 38 + 3.999999999940941908e-01, // 39 + 2.857142874366239149e-01, // 40 + 2.222219843214978396e-01, // 41 + 1.818357216161805012e-01, // 42 + 1.531383769920937332e-01, // 43 + 1.479819860511658591e-01, // LP7 44 +}; + + +// Table of constants for 2/pi, 396 Hex digits (476 decimal) of 2/pi +static const int two_over_pi[] = { + 0xA2F983, 0x6E4E44, 0x1529FC, 0x2757D1, 0xF534DD, 0xC0DB62, 0x95993C, + 0x439041, 0xFE5163, 0xABDEBB, 0xC561B7, 0x246E3A, 0x424DD2, 0xE00649, + 0x2EEA09, 0xD1921C, 0xFE1DEB, 0x1CB129, 0xA73EE8, 0x8235F5, 0x2EBB44, + 0x84E99C, 0x7026B4, 0x5F7E41, 0x3991D6, 0x398353, 0x39F49C, 0x845F8B, + 0xBDF928, 0x3B1FF8, 0x97FFDE, 0x05980F, 0xEF2F11, 0x8B5A0A, 0x6D1F6D, + 0x367ECF, 0x27CB09, 0xB74F46, 0x3F669E, 0x5FEA2D, 0x7527BA, 0xC7EBE5, + 0xF17B3D, 0x0739F7, 0x8A5292, 0xEA6BFB, 0x5FB11F, 0x8D5D08, 0x560330, + 0x46FC7B, 0x6BABF0, 0xCFBC20, 0x9AF436, 0x1DA9E3, 0x91615E, 0xE61B08, + 0x659985, 0x5F14A0, 0x68408D, 0xFFD880, 0x4D7327, 0x310606, 0x1556CA, + 0x73A8C9, 0x60E27B, 0xC08C6B}; + +static const double zero = 0.0; +static const double two24 = 1.6777216e+07; +static const double one = 1.0; +static const double twon24 = 5.9604644775390625e-08; + +static const double PIo2[] = { + 1.57079625129699707031e+00, // 0x3FF921FB, 0x40000000 + 7.54978941586159635335e-08, // 0x3E74442D, 0x00000000 + 5.39030252995776476554e-15, // 0x3CF84698, 0x80000000 + 3.28200341580791294123e-22, // 0x3B78CC51, 0x60000000 + 1.27065575308067607349e-29, // 0x39F01B83, 0x80000000 + 1.22933308981111328932e-36, // 0x387A2520, 0x40000000 + 2.73370053816464559624e-44, // 0x36E38222, 0x80000000 + 2.16741683877804819444e-51 // 0x3569F31D, 0x00000000 +}; + + +int __kernel_rem_pio2(double* x, double* y, int e0, int nx) { + static const int32_t jk = 3; + double fw; + 
int32_t jx = nx - 1; + int32_t jv = (e0 - 3) / 24; + if (jv < 0) jv = 0; + int32_t q0 = e0 - 24 * (jv + 1); + int32_t m = jx + jk; + + double f[10]; + for (int i = 0, j = jv - jx; i <= m; i++, j++) { + f[i] = (j < 0) ? zero : static_cast<double>(two_over_pi[j]); + } + + double q[10]; + for (int i = 0; i <= jk; i++) { + fw = 0.0; + for (int j = 0; j <= jx; j++) fw += x[j] * f[jx + i - j]; + q[i] = fw; + } + + int32_t jz = jk; + +recompute: + + int32_t iq[10]; + double z = q[jz]; + for (int i = 0, j = jz; j > 0; i++, j--) { + fw = static_cast<double>(static_cast<int32_t>(twon24 * z)); + iq[i] = static_cast<int32_t>(z - two24 * fw); + z = q[j - 1] + fw; + } + + z = scalbn(z, q0); + z -= 8.0 * std::floor(z * 0.125); + int32_t n = static_cast<int32_t>(z); + z -= static_cast<double>(n); + int32_t ih = 0; + if (q0 > 0) { + int32_t i = (iq[jz - 1] >> (24 - q0)); + n += i; + iq[jz - 1] -= i << (24 - q0); + ih = iq[jz - 1] >> (23 - q0); + } else if (q0 == 0) { + ih = iq[jz - 1] >> 23; + } else if (z >= 0.5) { + ih = 2; + } + + if (ih > 0) { + n += 1; + int32_t carry = 0; + for (int i = 0; i < jz; i++) { + int32_t j = iq[i]; + if (carry == 0) { + if (j != 0) { + carry = 1; + iq[i] = 0x1000000 - j; + } + } else { + iq[i] = 0xffffff - j; + } + } + if (q0 == 1) { + iq[jz - 1] &= 0x7fffff; + } else if (q0 == 2) { + iq[jz - 1] &= 0x3fffff; + } + if (ih == 2) { + z = one - z; + if (carry != 0) z -= scalbn(one, q0); + } + } + + if (z == zero) { + int32_t j = 0; + for (int i = jz - 1; i >= jk; i--) j |= iq[i]; + if (j == 0) { + int32_t k = 1; + while (iq[jk - k] == 0) k++; + for (int i = jz + 1; i <= jz + k; i++) { + f[jx + i] = static_cast<double>(two_over_pi[jv + i]); + for (j = 0, fw = 0.0; j <= jx; j++) fw += x[j] * f[jx + i - j]; + q[i] = fw; + } + jz += k; + goto recompute; + } + } + + if (z == 0.0) { + jz -= 1; + q0 -= 24; + while (iq[jz] == 0) { + jz--; + q0 -= 24; + } + } else { + z = scalbn(z, -q0); + if (z >= two24) { + fw = static_cast<double>(static_cast<int32_t>(twon24 * z)); + iq[jz] = static_cast<int32_t>(z - two24 * fw); + jz += 1; + q0 += 24; + iq[jz] = static_cast<int32_t>(fw); + } else { + iq[jz] = static_cast<int32_t>(z); + } + } + + fw = scalbn(one, q0); + for (int i = jz; i >= 0; i--) { + q[i] = fw * static_cast<double>(iq[i]); + fw *= twon24; + } + + double fq[10]; + for (int i = jz; i >= 0; i--) { + fw = 0.0; + for (int k = 0; k <= jk && k <= jz - i; k++) fw += PIo2[k] * q[i + k]; + fq[jz - i] = fw; + } + + fw = 0.0; + for (int i = jz; i >= 0; i--) fw += fq[i]; + y[0] = (ih == 0) ? fw : -fw; + fw = fq[0] - fw; + for (int i = 1; i <= jz; i++) fw += fq[i]; + y[1] = (ih == 0) ? 
fw : -fw;
+  return n & 7;
+}
+
+
+int rempio2(double x, double* y) {
+  int32_t hx = static_cast<int32_t>(internal::double_to_uint64(x) >> 32);
+  int32_t ix = hx & 0x7fffffff;
+
+  if (ix >= 0x7ff00000) {
+    *y = base::OS::nan_value();
+    return 0;
+  }
+
+  int32_t e0 = (ix >> 20) - 1046;
+  uint64_t zi = internal::double_to_uint64(x) & 0xFFFFFFFFu;
+  zi |= static_cast<uint64_t>(ix - (e0 << 20)) << 32;
+  double z = internal::uint64_to_double(zi);
+
+  double tx[3];
+  for (int i = 0; i < 2; i++) {
+    tx[i] = static_cast<double>(static_cast<int32_t>(z));
+    z = (z - tx[i]) * two24;
+  }
+  tx[2] = z;
+
+  int nx = 3;
+  while (tx[nx - 1] == zero) nx--;
+  int n = __kernel_rem_pio2(tx, y, e0, nx);
+  if (hx < 0) {
+    y[0] = -y[0];
+    y[1] = -y[1];
+    return -n;
+  }
+  return n;
+}
+}  // namespace fdlibm
+}  // namespace v8
diff --git a/deps/v8/third_party/fdlibm/fdlibm.h b/deps/v8/third_party/fdlibm/fdlibm.h
new file mode 100644
index 00000000000..7985c3a323a
--- /dev/null
+++ b/deps/v8/third_party/fdlibm/fdlibm.h
@@ -0,0 +1,31 @@
+// The following is adapted from fdlibm (http://www.netlib.org/fdlibm).
+//
+// ====================================================
+// Copyright (C) 1993 by Sun Microsystems, Inc. All rights reserved.
+//
+// Developed at SunSoft, a Sun Microsystems, Inc. business.
+// Permission to use, copy, modify, and distribute this
+// software is freely granted, provided that this notice
+// is preserved.
+// ====================================================
+//
+// The original source code covered by the above license has been
+// modified significantly by Google Inc.
+// Copyright 2014 the V8 project authors. All rights reserved.
+
+#ifndef V8_FDLIBM_H_
+#define V8_FDLIBM_H_
+
+namespace v8 {
+namespace fdlibm {
+
+int rempio2(double x, double* y);
+
+// Constants to be exposed to builtins via Float64Array.
+struct MathConstants {
+  static const double constants[45];
+};
+}  // namespace fdlibm
+}  // namespace v8
+
+#endif  // V8_FDLIBM_H_
diff --git a/deps/v8/third_party/fdlibm/fdlibm.js b/deps/v8/third_party/fdlibm/fdlibm.js
new file mode 100644
index 00000000000..a55b7c70c8a
--- /dev/null
+++ b/deps/v8/third_party/fdlibm/fdlibm.js
@@ -0,0 +1,518 @@
+// The following is adapted from fdlibm (http://www.netlib.org/fdlibm).
+//
+// ====================================================
+// Copyright (C) 1993 by Sun Microsystems, Inc. All rights reserved.
+//
+// Developed at SunSoft, a Sun Microsystems, Inc. business.
+// Permission to use, copy, modify, and distribute this
+// software is freely granted, provided that this notice
+// is preserved.
+// ====================================================
+//
+// The original source code covered by the above license has been
+// modified significantly by Google Inc.
+// Copyright 2014 the V8 project authors. All rights reserved.
+//
+// The following is a straightforward translation of fdlibm routines
+// by Raymond Toy (rtoy@google.com).
+
+
+var kMath;  // Initialized to a Float64Array during genesis and is not writable.
+
+const INVPIO2 = kMath[0];
+const PIO2_1 = kMath[1];
+const PIO2_1T = kMath[2];
+const PIO2_2 = kMath[3];
+const PIO2_2T = kMath[4];
+const PIO2_3 = kMath[5];
+const PIO2_3T = kMath[6];
+const PIO4 = kMath[32];
+const PIO4LO = kMath[33];
+
+// Compute k and r such that x - k*pi/2 = r where |r| < pi/4. For
+// precision, r is returned as two values y0 and y1 such that r = y0 + y1
+// to more than double precision.
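For context on why pi/2 is split across several constants (PIO2_1, PIO2_1T, PIO2_2, ...): a single 53-bit approximation of pi/2 cannot reduce large arguments accurately. A quick sketch of the naive reduction the macro below avoids (plain JavaScript; the bit counts are taken from the macro's own comments):

    // Naive reduction with one 53-bit approximation of pi/2:
    var halfPi = Math.PI / 2;      // off from true pi/2 by about 6e-17
    var x = 1e8;
    var k = Math.round(x / halfPi);
    var naive = x - k * halfPi;    // k * 6e-17 ~ 4e-9: the low ~25 bits are noise
    // REMPIO2 instead accumulates k * PIO2_1 + k * PIO2_1T + ... ("33+53 bit
    // pi", then "33+33+53 bit pi"), and for |x| beyond about 2^19 * pi/2 it
    // falls through to the exact Payne-Hanek reduction in C++ (rempio2 in
    // fdlibm.cc) via %RemPiO2.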
+macro REMPIO2(X) + var n, y0, y1; + var hx = %_DoubleHi(X); + var ix = hx & 0x7fffffff; + + if (ix < 0x4002d97c) { + // |X| ~< 3*pi/4, special case with n = +/- 1 + if (hx > 0) { + var z = X - PIO2_1; + if (ix != 0x3ff921fb) { + // 33+53 bit pi is good enough + y0 = z - PIO2_1T; + y1 = (z - y0) - PIO2_1T; + } else { + // near pi/2, use 33+33+53 bit pi + z -= PIO2_2; + y0 = z - PIO2_2T; + y1 = (z - y0) - PIO2_2T; + } + n = 1; + } else { + // Negative X + var z = X + PIO2_1; + if (ix != 0x3ff921fb) { + // 33+53 bit pi is good enough + y0 = z + PIO2_1T; + y1 = (z - y0) + PIO2_1T; + } else { + // near pi/2, use 33+33+53 bit pi + z += PIO2_2; + y0 = z + PIO2_2T; + y1 = (z - y0) + PIO2_2T; + } + n = -1; + } + } else if (ix <= 0x413921fb) { + // |X| ~<= 2^19*(pi/2), medium size + var t = MathAbs(X); + n = (t * INVPIO2 + 0.5) | 0; + var r = t - n * PIO2_1; + var w = n * PIO2_1T; + // First round good to 85 bit + y0 = r - w; + if (ix - (%_DoubleHi(y0) & 0x7ff00000) > 0x1000000) { + // 2nd iteration needed, good to 118 + t = r; + w = n * PIO2_2; + r = t - w; + w = n * PIO2_2T - ((t - r) - w); + y0 = r - w; + if (ix - (%_DoubleHi(y0) & 0x7ff00000) > 0x3100000) { + // 3rd iteration needed. 151 bits accuracy + t = r; + w = n * PIO2_3; + r = t - w; + w = n * PIO2_3T - ((t - r) - w); + y0 = r - w; + } + } + y1 = (r - y0) - w; + if (hx < 0) { + n = -n; + y0 = -y0; + y1 = -y1; + } + } else { + // Need to do full Payne-Hanek reduction here. + var r = %RemPiO2(X); + n = r[0]; + y0 = r[1]; + y1 = r[2]; + } +endmacro + + +// __kernel_sin(X, Y, IY) +// kernel sin function on [-pi/4, pi/4], pi/4 ~ 0.7854 +// Input X is assumed to be bounded by ~pi/4 in magnitude. +// Input Y is the tail of X so that x = X + Y. +// +// Algorithm +// 1. Since ieee_sin(-x) = -ieee_sin(x), we need only to consider positive x. +// 2. ieee_sin(x) is approximated by a polynomial of degree 13 on +// [0,pi/4] +// 3 13 +// sin(x) ~ x + S1*x + ... + S6*x +// where +// +// |ieee_sin(x) 2 4 6 8 10 12 | -58 +// |----- - (1+S1*x +S2*x +S3*x +S4*x +S5*x +S6*x )| <= 2 +// | x | +// +// 3. ieee_sin(X+Y) = ieee_sin(X) + sin'(X')*Y +// ~ ieee_sin(X) + (1-X*X/2)*Y +// For better accuracy, let +// 3 2 2 2 2 +// r = X *(S2+X *(S3+X *(S4+X *(S5+X *S6)))) +// then 3 2 +// sin(x) = X + (S1*X + (X *(r-Y/2)+Y)) +// +macro KSIN(x) +kMath[7+x] +endmacro + +macro RETURN_KERNELSIN(X, Y, SIGN) + var z = X * X; + var v = z * X; + var r = KSIN(1) + z * (KSIN(2) + z * (KSIN(3) + + z * (KSIN(4) + z * KSIN(5)))); + return (X - ((z * (0.5 * Y - v * r) - Y) - v * KSIN(0))) SIGN; +endmacro + +// __kernel_cos(X, Y) +// kernel cos function on [-pi/4, pi/4], pi/4 ~ 0.785398164 +// Input X is assumed to be bounded by ~pi/4 in magnitude. +// Input Y is the tail of X so that x = X + Y. +// +// Algorithm +// 1. Since ieee_cos(-x) = ieee_cos(x), we need only to consider positive x. +// 2. ieee_cos(x) is approximated by a polynomial of degree 14 on +// [0,pi/4] +// 4 14 +// cos(x) ~ 1 - x*x/2 + C1*x + ... + C6*x +// where the remez error is +// +// | 2 4 6 8 10 12 14 | -58 +// |ieee_cos(x)-(1-.5*x +C1*x +C2*x +C3*x +C4*x +C5*x +C6*x )| <= 2 +// | | +// +// 4 6 8 10 12 14 +// 3. let r = C1*x +C2*x +C3*x +C4*x +C5*x +C6*x , then +// ieee_cos(x) = 1 - x*x/2 + r +// since ieee_cos(X+Y) ~ ieee_cos(X) - ieee_sin(X)*Y +// ~ ieee_cos(X) - X*Y, +// a correction term is necessary in ieee_cos(x) and hence +// cos(X+Y) = 1 - (X*X/2 - (r - X*Y)) +// For better accuracy when x > 0.3, let qx = |x|/4 with +// the last 32 bits mask off, and if x > 0.78125, let qx = 0.28125. 
+//    Then
+//      cos(X+Y) = (1-qx) - ((X*X/2-qx) - (r-X*Y)).
+//    Note that 1-qx and (X*X/2-qx) are EXACT here, and the
+//    magnitude of the latter is at least a quarter of X*X/2,
+//    thus, reducing the rounding error in the subtraction.
+//
+macro KCOS(x)
+kMath[13+x]
+endmacro
+
+macro RETURN_KERNELCOS(X, Y, SIGN)
+  var ix = %_DoubleHi(X) & 0x7fffffff;
+  var z = X * X;
+  var r = z * (KCOS(0) + z * (KCOS(1) + z * (KCOS(2)+
+          z * (KCOS(3) + z * (KCOS(4) + z * KCOS(5))))));
+  if (ix < 0x3fd33333) {  // |x| ~< 0.3
+    return (1 - (0.5 * z - (z * r - X * Y))) SIGN;
+  } else {
+    var qx;
+    if (ix > 0x3fe90000) {  // |x| > 0.78125
+      qx = 0.28125;
+    } else {
+      qx = %_ConstructDouble(%_DoubleHi(0.25 * X), 0);
+    }
+    var hz = 0.5 * z - qx;
+    return (1 - qx - (hz - (z * r - X * Y))) SIGN;
+  }
+endmacro
+
+
+// kernel tan function on [-pi/4, pi/4], pi/4 ~ 0.7854
+// Input x is assumed to be bounded by ~pi/4 in magnitude.
+// Input y is the tail of x.
+// Input k indicates whether ieee_tan (if k = 1) or -1/tan (if k = -1)
+// is returned.
+//
+// Algorithm
+// 1. Since ieee_tan(-x) = -ieee_tan(x), we need only to consider positive x.
+// 2. if x < 2^-28 (hx < 0x3e300000), return x with inexact if x != 0.
+// 3. ieee_tan(x) is approximated by an odd polynomial of degree 27 on
+//    [0,0.67434]:
+//      tan(x) ~ x + T1*x^3 + ... + T13*x^27
+//    where
+//      |ieee_tan(x)/x - (1+T1*x^2+T2*x^4+....+T13*x^26)| <= 2^-59.2
+//    Note: ieee_tan(x+y) = ieee_tan(x) + tan'(x)*y
+//                        ~ ieee_tan(x) + (1+x*x)*y
+//    Therefore, for better accuracy in computing ieee_tan(x+y), let
+//      r = x^3*(T2+x^2*(T3+x^2*(...+x^2*(T12+x^2*T13))))
+//    then
+//      tan(x+y) = x + (T1*x^3 + (x^2*(r+y)+y))
+// 4. For x in [0.67434,pi/4], let y = pi/4 - x, then
+//      tan(x) = ieee_tan(pi/4-y) = (1-ieee_tan(y))/(1+ieee_tan(y))
+//             = 1 - 2*(ieee_tan(y) - (ieee_tan(y)^2)/(1+ieee_tan(y)))
+//
+// Set returnTan to 1 for tan; -1 for cot. Anything else is illegal
+// and will cause incorrect results.
+//
+macro KTAN(x)
+kMath[19+x]
+endmacro
+
+function KernelTan(x, y, returnTan) {
+  var z;
+  var w;
+  var hx = %_DoubleHi(x);
+  var ix = hx & 0x7fffffff;
+
+  if (ix < 0x3e300000) {  // |x| < 2^-28
+    if (((ix | %_DoubleLo(x)) | (returnTan + 1)) == 0) {
+      // x == 0 && returnTan = -1
+      return 1 / MathAbs(x);
+    } else {
+      if (returnTan == 1) {
+        return x;
+      } else {
+        // Compute -1/(x + y) carefully
+        var w = x + y;
+        var z = %_ConstructDouble(%_DoubleHi(w), 0);
+        var v = y - (z - x);
+        var a = -1 / w;
+        var t = %_ConstructDouble(%_DoubleHi(a), 0);
+        var s = 1 + t * z;
+        return t + a * (s + t * v);
+      }
+    }
+  }
+  if (ix >= 0x3fe59428) {  // |x| >= .6744
+    if (x < 0) {
+      x = -x;
+      y = -y;
+    }
+    z = PIO4 - x;
+    w = PIO4LO - y;
+    x = z + w;
+    y = 0;
+  }
+  z = x * x;
+  w = z * z;
+
+  // Break x^5 * (T1 + x^2*T2 + ...) into
+  // x^5 * (T1 + x^4*T3 + ... + x^20*T11) +
+  // x^5 * (x^2 * (T2 + x^4*T4 + ... + x^22*T12))
+  var r = KTAN(1) + w * (KTAN(3) + w * (KTAN(5) +
+          w * (KTAN(7) + w * (KTAN(9) + w * KTAN(11)))));
+  var v = z * (KTAN(2) + w * (KTAN(4) + w * (KTAN(6) +
+          w * (KTAN(8) + w * (KTAN(10) + w * KTAN(12))))));
+  var s = z * x;
+  r = y + z * (s * (r + v) + y);
+  r = r + KTAN(0) * s;
+  w = x + r;
+  if (ix >= 0x3fe59428) {
+    return (1 - ((hx >> 30) & 2)) *
+        (returnTan - 2.0 * (x - (w * w / (w + returnTan) - r)));
+  }
+  if (returnTan == 1) {
+    return w;
+  } else {
+    z = %_ConstructDouble(%_DoubleHi(w), 0);
+    v = r - (z - x);
+    var a = -1 / w;
+    var t = %_ConstructDouble(%_DoubleHi(a), 0);
+    s = 1 + t * z;
+    return t + a * (s + t * v);
+  }
+}
+
+function MathSinSlow(x) {
+  REMPIO2(x);
+  var sign = 1 - (n & 2);
+  if (n & 1) {
+    RETURN_KERNELCOS(y0, y1, * sign);
+  } else {
+    RETURN_KERNELSIN(y0, y1, * sign);
+  }
+}
+
+function MathCosSlow(x) {
+  REMPIO2(x);
+  if (n & 1) {
+    var sign = (n & 2) - 1;
+    RETURN_KERNELSIN(y0, y1, * sign);
+  } else {
+    var sign = 1 - (n & 2);
+    RETURN_KERNELCOS(y0, y1, * sign);
+  }
+}
+
+// ECMA 262 - 15.8.2.16
+function MathSin(x) {
+  x = x * 1;  // Convert to number.
+  if ((%_DoubleHi(x) & 0x7fffffff) <= 0x3fe921fb) {
+    // |x| < pi/4, approximately. No reduction needed.
+    RETURN_KERNELSIN(x, 0, /* empty */);
+  }
+  return MathSinSlow(x);
+}
+
+// ECMA 262 - 15.8.2.7
+function MathCos(x) {
+  x = x * 1;  // Convert to number.
+  if ((%_DoubleHi(x) & 0x7fffffff) <= 0x3fe921fb) {
+    // |x| < pi/4, approximately. No reduction needed.
+    RETURN_KERNELCOS(x, 0, /* empty */);
+  }
+  return MathCosSlow(x);
+}
+
+// ECMA 262 - 15.8.2.18
+function MathTan(x) {
+  x = x * 1;  // Convert to number.
+  if ((%_DoubleHi(x) & 0x7fffffff) <= 0x3fe921fb) {
+    // |x| < pi/4, approximately. No reduction needed.
+    return KernelTan(x, 0, 1);
+  }
+  REMPIO2(x);
+  return KernelTan(y0, y1, (n & 1) ? -1 : 1);
+}
+
+// ES6 draft 09-27-13, section 20.2.2.20.
+// Math.log1p
+//
+// Method :
+// 1. Argument Reduction: find k and f such that
+//      1+x = 2^k * (1+f),
+//    where sqrt(2)/2 < 1+f < sqrt(2).
+//
+//    Note. If k=0, then f=x is exact. However, if k!=0, then f
+//    may not be representable exactly. In that case, a correction
+//    term is needed. Let u=1+x rounded. Let c = (1+x)-u, then
+//    log(1+x) - log(u) ~ c/u. Thus, we proceed to compute log(u),
+//    and add back the correction term c/u.
+//    (Note: when x > 2**53, one can simply return log(x))
+//
+// 2. Approximation of log1p(f).
+//    Let s = f/(2+f); based on log(1+f) = log(1+s) - log(1-s)
+//      = 2s + 2/3 s**3 + 2/5 s**5 + .....,
+//      = 2s + s*R
+//    We use a special Remez algorithm on [0,0.1716] to generate
+//    a polynomial of degree 14 to approximate R. The maximum error
+//    of this polynomial approximation is bounded by 2**-58.45. In
+//    other words,
+//      R(z) ~ Lp1*s^2 + Lp2*s^4 + Lp3*s^6 + Lp4*s^8 + Lp5*s^10 + Lp6*s^12 + Lp7*s^14
+//    (the values of Lp1 to Lp7 are listed in the program) and
+//      |Lp1*s^2 +...+ Lp7*s^14 - R(z)| <= 2^-58.45
+//    Note that 2s = f - s*f = f - hfsq + s*hfsq, where hfsq = f*f/2.
+//    In order to guarantee error in log below 1ulp, we compute log
+//    by
+//      log1p(f) = f - (hfsq - s*(hfsq+R)).
+//
+// 3. Finally, log1p(x) = k*ln2 + log1p(f).
+//             = k*ln2_hi+(f-(hfsq-(s*(hfsq+R)+k*ln2_lo)))
+//    Here ln2 is split into two floating point numbers:
+//      ln2_hi + ln2_lo,
+//    where n*ln2_hi is always exact for |n| < 2000.
+//
+// Special cases:
+//    log1p(x) is NaN with signal if x < -1 (including -INF);
+//    log1p(+INF) is +INF; log1p(-1) is -INF with signal;
+//    log1p(NaN) is that NaN with no signal.
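+//
+// Worked example (illustrative, not part of the fdlibm notes): for x = 3,
+// 1+x = 4 = 2^2 * (1+0), so k = 2, f = 0 and log1p(3) = 2*ln2 exactly.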
+//
+// Accuracy:
+//    according to an error analysis, the error is always less than
+//    1 ulp (unit in the last place).
+//
+// Constants:
+//    The hexadecimal values are the intended ones for the following
+//    constants. The decimal values may be used, provided that the
+//    compiler will convert from decimal to binary accurately enough
+//    to produce the hexadecimal values shown.
+//
+// Note: Assuming log() returns an accurate answer, the following
+//    algorithm can be used to compute log1p(x) to within a few ULP:
+//
+//      u = 1+x;
+//      if (u == 1.0) return x; else
+//      return log(u)*(x/(u-1.0));
+//
+//    See HP-15C Advanced Functions Handbook, p.193.
+//
+const LN2_HI = kMath[34];
+const LN2_LO = kMath[35];
+const TWO54 = kMath[36];
+const TWO_THIRD = kMath[37];
+macro KLOGP1(x)
+(kMath[38+x])
+endmacro
+
+function MathLog1p(x) {
+  x = x * 1;  // Convert to number.
+  var hx = %_DoubleHi(x);
+  var ax = hx & 0x7fffffff;
+  var k = 1;
+  var f = x;
+  var hu = 1;
+  var c = 0;
+  var u = x;
+
+  if (hx < 0x3fda827a) {
+    // x < 0.41422
+    if (ax >= 0x3ff00000) {  // |x| >= 1
+      if (x === -1) {
+        return -INFINITY;  // log1p(-1) = -inf
+      } else {
+        return NAN;  // log1p(x<-1) = NaN
+      }
+    } else if (ax < 0x3c900000) {
+      // For |x| < 2^-54 we can return x.
+      return x;
+    } else if (ax < 0x3e200000) {
+      // For |x| < 2^-29 we can use a simple two-term Taylor series.
+      return x - x * x * 0.5;
+    }
+
+    if ((hx > 0) || (hx <= -0x402D413D)) {  // (int) 0xbfd2bec3 = -0x402d413d
+      // -.2929 < x < 0.41422
+      k = 0;
+    }
+  }
+
+  // Handle Infinity and NAN
+  if (hx >= 0x7ff00000) return x;
+
+  if (k !== 0) {
+    if (hx < 0x43400000) {
+      // x < 2^53
+      u = 1 + x;
+      hu = %_DoubleHi(u);
+      k = (hu >> 20) - 1023;
+      c = (k > 0) ? 1 - (u - x) : x - (u - 1);
+      c = c / u;
+    } else {
+      hu = %_DoubleHi(u);
+      k = (hu >> 20) - 1023;
+    }
+    hu = hu & 0xfffff;
+    if (hu < 0x6a09e) {
+      u = %_ConstructDouble(hu | 0x3ff00000, %_DoubleLo(u));  // Normalize u.
+    } else {
+      ++k;
+      u = %_ConstructDouble(hu | 0x3fe00000, %_DoubleLo(u));  // Normalize u/2.
+      hu = (0x00100000 - hu) >> 2;
+    }
+    f = u - 1;
+  }
+
+  var hfsq = 0.5 * f * f;
+  if (hu === 0) {
+    // |f| < 2^-20;
+    if (f === 0) {
+      if (k === 0) {
+        return 0.0;
+      } else {
+        return k * LN2_HI + (c + k * LN2_LO);
+      }
+    }
+    var R = hfsq * (1 - TWO_THIRD * f);
+    if (k === 0) {
+      return f - R;
+    } else {
+      return k * LN2_HI - ((R - (k * LN2_LO + c)) - f);
+    }
+  }
+
+  var s = f / (2 + f);
+  var z = s * s;
+  var R = z * (KLOGP1(0) + z * (KLOGP1(1) + z *
+          (KLOGP1(2) + z * (KLOGP1(3) + z *
+          (KLOGP1(4) + z * (KLOGP1(5) + z * KLOGP1(6)))))));
+  if (k === 0) {
+    return f - (hfsq - s * (hfsq + R));
+  } else {
+    return k * LN2_HI - ((hfsq - (s * (hfsq + R) + (k * LN2_LO + c))) - f);
+  }
+}
diff --git a/deps/v8/tools/DEPS b/deps/v8/tools/DEPS
new file mode 100644
index 00000000000..c97eda99086
--- /dev/null
+++ b/deps/v8/tools/DEPS
@@ -0,0 +1,8 @@
+include_rules = [
+  "+src",
+]
+
+# checkdeps.py shouldn't check for includes in these directories:
+skip_child_includes = [
+  "gcmole",
+]
diff --git a/deps/v8/tools/check-static-initializers.sh b/deps/v8/tools/check-static-initializers.sh
index 11ba080d5aa..da43170f6e6 100755
--- a/deps/v8/tools/check-static-initializers.sh
+++ b/deps/v8/tools/check-static-initializers.sh
@@ -32,8 +32,7 @@
 # Allow:
 #  - _GLOBAL__I__ZN2v810LineEditor6first_E
 #  - _GLOBAL__I__ZN2v88internal32AtomicOps_Internalx86CPUFeaturesE
-#  - _GLOBAL__I__ZN2v88internal8ThreadId18highest_thread_id_E
-expected_static_init_count=3
+expected_static_init_count=2
 
 v8_root=$(readlink -f $(dirname $BASH_SOURCE)/../)
 
diff --git a/deps/v8/tools/generate-trig-table.py b/deps/v8/tools/concatenate-files.py
similarity index 53%
rename from deps/v8/tools/generate-trig-table.py
rename to deps/v8/tools/concatenate-files.py
index c03cf73e2fe..86bdf563836 100644
--- a/deps/v8/tools/generate-trig-table.py
+++ b/deps/v8/tools/concatenate-files.py
@@ -1,6 +1,6 @@
 #!/usr/bin/env python
 #
-# Copyright 2013 the V8 project authors. All rights reserved.
+# Copyright 2014 the V8 project authors. All rights reserved.
 # Redistribution and use in source and binary forms, with or without
 # modification, are permitted provided that the following conditions are
 # met:
@@ -27,57 +27,49 @@
 # (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
 # OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
 
-# This is a utility for populating the lookup table for the
-# approximation of trigonometric functions.
-
-import sys, math
+# This utility concatenates several files into one. On Unix-like systems
+# it is equivalent to:
+#   cat file1 file2 file3 ...files... > target
+#
+# The reason for writing a separate utility is that 'cat' is not available
+# on all supported build platforms, but Python is, and hence this provides
+# us with an easy and uniform way of doing this on all platforms.
 
-SAMPLES = 1800
+import optparse
 
-TEMPLATE = """\
-// Copyright 2013 Google Inc. All Rights Reserved.
-// This file was generated from a python script.
+def Concatenate(filenames):
+  """Concatenate files.
 
-#include "v8.h"
-#include "trig-table.h"
+  Args:
+    filenames: Array of file names.
+               The last name is the target; all earlier ones are sources.
 
-namespace v8 {
-namespace internal {
+  Returns:
+    True, if the operation was successful.
+  """
+  if len(filenames) < 2:
+    print "An error occurred generating %s:\nNothing to do." % filenames[-1]
+    return False
 
-  const double TrigonometricLookupTable::kSinTable[] =
-      { %(sine_table)s };
-  const double TrigonometricLookupTable::kCosXIntervalTable[] =
-      { %(cosine_table)s };
-  const int TrigonometricLookupTable::kSamples = %(samples)i;
-  const int TrigonometricLookupTable::kTableSize = %(table_size)i;
-  const double TrigonometricLookupTable::kSamplesOverPiHalf =
-      %(samples_over_pi_half)s;
+  try:
+    with open(filenames[-1], "wb") as target:
+      for filename in filenames[:-1]:
+        with open(filename, "rb") as current:
+          target.write(current.read())
+    return True
+  except IOError as e:
+    print "An error occurred when writing %s:\n%s" % (filenames[-1], e)
+    return False
 
-} }  // v8::internal
-"""
 
 def main():
-  pi_half = math.pi / 2
-  interval = pi_half / SAMPLES
-  sin = []
-  cos_times_interval = []
-  table_size = SAMPLES + 2
-
-  for i in range(0, table_size):
-    sample = i * interval
-    sin.append(repr(math.sin(sample)))
-    cos_times_interval.append(repr(math.cos(sample) * interval))
+  parser = optparse.OptionParser()
+  parser.set_usage("""Concatenate several files into one.
+      Equivalent to: cat file1 ... > target.""")
+  (options, args) = parser.parse_args()
+  exit(0 if Concatenate(args) else 1)
 
-  output_file = sys.argv[1]
-  output = open(str(output_file), "w")
-  output.write(TEMPLATE % {
-    'sine_table': ','.join(sin),
-    'cosine_table': ','.join(cos_times_interval),
-    'samples': SAMPLES,
-    'table_size': table_size,
-    'samples_over_pi_half': repr(SAMPLES / pi_half)
-  })
 
 if __name__ == "__main__":
   main()
diff --git a/deps/v8/tools/disasm.py b/deps/v8/tools/disasm.py
index 6fa81cab938..cc7ef0621a3 100644
--- a/deps/v8/tools/disasm.py
+++ b/deps/v8/tools/disasm.py
@@ -49,7 +49,8 @@
   "ia32": "-m i386",
   "x64": "-m i386 -M x86-64",
   "arm": "-m arm",  # Not supported by our objdump build.
-  "mips": "-m mips"  # Not supported by our objdump build.
+  "mips": "-m mips",  # Not supported by our objdump build.
+  "arm64": "-m aarch64"
 }
diff --git a/deps/v8/tools/external-reference-check.py b/deps/v8/tools/external-reference-check.py
new file mode 100644
index 00000000000..386d4a9ee51
--- /dev/null
+++ b/deps/v8/tools/external-reference-check.py
@@ -0,0 +1,43 @@
+#!/usr/bin/env python
+# Copyright 2014 the V8 project authors. All rights reserved.
+# Use of this source code is governed by a BSD-style license that can be
+# found in the LICENSE file.
+
+import re
+import os
+import sys
+
+DECLARE_FILE = "src/assembler.h"
+REGISTER_FILE = "src/serialize.cc"
+DECLARE_RE = re.compile("\s*static ExternalReference ([^(]+)\(")
+REGISTER_RE = re.compile("\s*Add\(ExternalReference::([^(]+)\(")
+
+WORKSPACE = os.path.abspath(os.path.join(os.path.dirname(sys.argv[0]), ".."))
+
+# Ignore those.
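+# (These are declared in assembler.h but deliberately never registered in
+# serialize.cc, so Main() below subtracts them from the reported difference.)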
+BLACKLISTED = [ + "page_flags", + "math_exp_constants", + "math_exp_log_table", + "ForDeoptEntry", +] + +def Find(filename, re): + references = [] + with open(filename, "r") as f: + for line in f: + match = re.match(line) + if match: + references.append(match.group(1)) + return references + +def Main(): + declarations = Find(DECLARE_FILE, DECLARE_RE) + registrations = Find(REGISTER_FILE, REGISTER_RE) + difference = list(set(declarations) - set(registrations) - set(BLACKLISTED)) + for reference in difference: + print("Declared but not registered: ExternalReference::%s" % reference) + return len(difference) > 0 + +if __name__ == "__main__": + sys.exit(Main()) diff --git a/deps/v8/tools/fuzz-harness.sh b/deps/v8/tools/fuzz-harness.sh index efbf8646cee..cef59868a93 100755 --- a/deps/v8/tools/fuzz-harness.sh +++ b/deps/v8/tools/fuzz-harness.sh @@ -85,7 +85,7 @@ python -u "$jsfunfuzz_dir/jsfunfuzz/multi_timed_run.py" 300 \ "$d8" $flags "$jsfunfuzz_dir/jsfunfuzz/jsfunfuzz.js" exit_code=$(cat w* | grep " looking good" -c) exit_code=$((100-exit_code)) -tar -cjf fuzz-results-$(date +%y%m%d).tar.bz2 err-* w* +tar -cjf fuzz-results-$(date +%Y%m%d%H%M%S).tar.bz2 err-* w* rm -f err-* w* echo "Total failures: $exit_code" diff --git a/deps/v8/tools/gcmole/Makefile b/deps/v8/tools/gcmole/Makefile index 764245caf61..ee43c00d206 100644 --- a/deps/v8/tools/gcmole/Makefile +++ b/deps/v8/tools/gcmole/Makefile @@ -31,13 +31,12 @@ LLVM_INCLUDE:=$(LLVM_SRC_ROOT)/include CLANG_INCLUDE:=$(LLVM_SRC_ROOT)/tools/clang/include libgcmole.so: gcmole.cc - g++ -I$(LLVM_INCLUDE) -I$(CLANG_INCLUDE) -I. -D_DEBUG -D_GNU_SOURCE \ - -D__STDC_LIMIT_MACROS -D__STDC_CONSTANT_MACROS -O3 \ - -fomit-frame-pointer -fno-exceptions -fno-rtti -fPIC \ - -Woverloaded-virtual -Wcast-qual -fno-strict-aliasing \ - -pedantic -Wno-long-long -Wall \ - -W -Wno-unused-parameter -Wwrite-strings \ - -shared -o libgcmole.so gcmole.cc + $(CXX) -I$(LLVM_INCLUDE) -I$(CLANG_INCLUDE) -I. -D_DEBUG \ + -D_GNU_SOURCE -D__STDC_CONSTANT_MACROS -D__STDC_FORMAT_MACROS \ + -D__STDC_LIMIT_MACROS -O3 -fomit-frame-pointer -fno-exceptions \ + -fno-rtti -fPIC -Woverloaded-virtual -Wcast-qual -fno-strict-aliasing \ + -pedantic -Wno-long-long -Wall -W -Wno-unused-parameter \ + -Wwrite-strings -std=c++0x -shared -o libgcmole.so gcmole.cc clean: - rm -f libgcmole.so + $(RM) libgcmole.so diff --git a/deps/v8/tools/gcmole/bootstrap.sh b/deps/v8/tools/gcmole/bootstrap.sh index baa0b1f5f54..ac6593c67ac 100755 --- a/deps/v8/tools/gcmole/bootstrap.sh +++ b/deps/v8/tools/gcmole/bootstrap.sh @@ -27,9 +27,12 @@ # (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE # OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. -# This script will build libgcmole.so. +# This script will build libgcmole.so. 
Building a recent clang needs a +# recent GCC, so if you explicitly want to use GCC 4.8, use: +# +# CC=gcc-4.8 CPP=cpp-4.8 CXX=g++-4.8 CXXFLAGS=-static-libstdc++ CXXCPP=cpp-4.8 ./bootstrap.sh -CLANG_RELEASE=2.9 +CLANG_RELEASE=3.5 THIS_DIR="$(dirname "${0}")" LLVM_DIR="${THIS_DIR}/../../third_party/llvm" @@ -110,7 +113,7 @@ if [ "${OS}" = "Darwin" ]; then # See http://crbug.com/256342 STRIP_FLAGS=-x fi -strip ${STRIP_FLAGS} Release/bin/clang +strip ${STRIP_FLAGS} Release+Asserts/bin/clang cd - # Build libgcmole.so @@ -122,5 +125,5 @@ set +x echo echo You can now run gcmole using this command: echo -echo CLANG_BIN=\"third_party/llvm/Release/bin\" lua tools/gcmole/gcmole.lua +echo CLANG_BIN=\"third_party/llvm/Release+Asserts/bin\" lua tools/gcmole/gcmole.lua echo diff --git a/deps/v8/tools/gcmole/gcmole.cc b/deps/v8/tools/gcmole/gcmole.cc index bdff18952b5..9f1f781453d 100644 --- a/deps/v8/tools/gcmole/gcmole.cc +++ b/deps/v8/tools/gcmole/gcmole.cc @@ -51,8 +51,8 @@ typedef std::set<MangledName> CalleesSet; static bool GetMangledName(clang::MangleContext* ctx, const clang::NamedDecl* decl, MangledName* result) { - if (!isa<clang::CXXConstructorDecl>(decl) && - !isa<clang::CXXDestructorDecl>(decl)) { + if (!llvm::isa<clang::CXXConstructorDecl>(decl) && + !llvm::isa<clang::CXXDestructorDecl>(decl)) { llvm::SmallVector<char, 512> output; llvm::raw_svector_ostream out(output); ctx->mangleName(decl, out); @@ -74,7 +74,7 @@ static std::string STATE_TAG("enum v8::internal::StateTag"); static bool IsExternalVMState(const clang::ValueDecl* var) { const clang::EnumConstantDecl* enum_constant = - dyn_cast<clang::EnumConstantDecl>(var); + llvm::dyn_cast<clang::EnumConstantDecl>(var); if (enum_constant != NULL && enum_constant->getNameAsString() == EXTERNAL) { clang::QualType type = enum_constant->getType(); return (type.getAsString() == STATE_TAG); @@ -109,11 +109,10 @@ struct Resolver { clang::DeclContext::lookup_result result = decl_ctx_->lookup(ResolveName(n)); - clang::DeclContext::lookup_iterator end = result.second; - for (clang::DeclContext::lookup_iterator i = result.first; - i != end; + clang::DeclContext::lookup_iterator end = result.end(); + for (clang::DeclContext::lookup_iterator i = result.begin(); i != end; i++) { - if (isa<T>(*i)) return cast<T>(*i); + if (llvm::isa<T>(*i)) return llvm::cast<T>(*i); } return NULL; @@ -208,13 +207,13 @@ class FunctionDeclarationFinder : public clang::ASTConsumer, public clang::RecursiveASTVisitor<FunctionDeclarationFinder> { public: - explicit FunctionDeclarationFinder(clang::Diagnostic& d, + explicit FunctionDeclarationFinder(clang::DiagnosticsEngine& d, clang::SourceManager& sm, const std::vector<std::string>& args) - : d_(d), sm_(sm) { } + : d_(d), sm_(sm) {} virtual void HandleTranslationUnit(clang::ASTContext &ctx) { - mangle_context_ = clang::createItaniumMangleContext(ctx, d_); + mangle_context_ = clang::ItaniumMangleContext::create(ctx, d_); callees_printer_ = new CalleesPrinter(mangle_context_); TraverseDecl(ctx.getTranslationUnitDecl()); @@ -228,7 +227,7 @@ class FunctionDeclarationFinder } private: - clang::Diagnostic& d_; + clang::DiagnosticsEngine& d_; clang::SourceManager& sm_; clang::MangleContext* mangle_context_; @@ -508,10 +507,8 @@ class FunctionAnalyzer { FunctionAnalyzer(clang::MangleContext* ctx, clang::DeclarationName handle_decl_name, clang::CXXRecordDecl* object_decl, - clang::CXXRecordDecl* smi_decl, - clang::Diagnostic& d, - clang::SourceManager& sm, - bool dead_vars_analysis) + clang::CXXRecordDecl* smi_decl, 
clang::DiagnosticsEngine& d, + clang::SourceManager& sm, bool dead_vars_analysis) : ctx_(ctx), handle_decl_name_(handle_decl_name), object_decl_(object_decl), @@ -519,8 +516,7 @@ class FunctionAnalyzer { d_(d), sm_(sm), block_(NULL), - dead_vars_analysis_(dead_vars_analysis) { - } + dead_vars_analysis_(dead_vars_analysis) {} // -------------------------------------------------------------------------- @@ -528,19 +524,18 @@ class FunctionAnalyzer { // -------------------------------------------------------------------------- ExprEffect VisitExpr(clang::Expr* expr, const Environment& env) { -#define VISIT(type) do { \ - clang::type* concrete_expr = dyn_cast_or_null<clang::type>(expr); \ - if (concrete_expr != NULL) { \ - return Visit##type (concrete_expr, env); \ - } \ - } while(0); +#define VISIT(type) \ + do { \ + clang::type* concrete_expr = llvm::dyn_cast_or_null<clang::type>(expr); \ + if (concrete_expr != NULL) { \ + return Visit##type(concrete_expr, env); \ + } \ + } while (0); VISIT(AbstractConditionalOperator); VISIT(AddrLabelExpr); VISIT(ArraySubscriptExpr); VISIT(BinaryOperator); - VISIT(BinaryTypeTraitExpr); - VISIT(BlockDeclRefExpr); VISIT(BlockExpr); VISIT(CallExpr); VISIT(CastExpr); @@ -587,8 +582,8 @@ class FunctionAnalyzer { VISIT(StmtExpr); VISIT(StringLiteral); VISIT(SubstNonTypeTemplateParmPackExpr); + VISIT(TypeTraitExpr); VISIT(UnaryOperator); - VISIT(UnaryTypeTraitExpr); VISIT(VAArgExpr); #undef VISIT @@ -604,7 +599,6 @@ class FunctionAnalyzer { } IGNORE_EXPR(AddrLabelExpr); - IGNORE_EXPR(BinaryTypeTraitExpr); IGNORE_EXPR(BlockExpr); IGNORE_EXPR(CharacterLiteral); IGNORE_EXPR(ChooseExpr); @@ -633,7 +627,7 @@ class FunctionAnalyzer { IGNORE_EXPR(StmtExpr); IGNORE_EXPR(StringLiteral); IGNORE_EXPR(SubstNonTypeTemplateParmPackExpr); - IGNORE_EXPR(UnaryTypeTraitExpr); + IGNORE_EXPR(TypeTraitExpr); IGNORE_EXPR(VAArgExpr); IGNORE_EXPR(GNUNullExpr); IGNORE_EXPR(OverloadExpr); @@ -654,12 +648,9 @@ class FunctionAnalyzer { } bool IsRawPointerVar(clang::Expr* expr, std::string* var_name) { - if (isa<clang::BlockDeclRefExpr>(expr)) { - *var_name = cast<clang::BlockDeclRefExpr>(expr)->getDecl()-> - getNameAsString(); - return true; - } else if (isa<clang::DeclRefExpr>(expr)) { - *var_name = cast<clang::DeclRefExpr>(expr)->getDecl()->getNameAsString(); + if (llvm::isa<clang::DeclRefExpr>(expr)) { + *var_name = + llvm::cast<clang::DeclRefExpr>(expr)->getDecl()->getNameAsString(); return true; } return false; @@ -707,12 +698,7 @@ class FunctionAnalyzer { return VisitExpr(expr->getArgument(), env); } - DECL_VISIT_EXPR(CXXNewExpr) { - return Par(expr, - expr->getNumConstructorArgs(), - expr->getConstructorArgs(), - env); - } + DECL_VISIT_EXPR(CXXNewExpr) { return VisitExpr(expr->getInitializer(), env); } DECL_VISIT_EXPR(ExprWithCleanups) { return VisitExpr(expr->getSubExpr(), env); @@ -766,10 +752,6 @@ class FunctionAnalyzer { return Use(expr, expr->getDecl(), env); } - DECL_VISIT_EXPR(BlockDeclRefExpr) { - return Use(expr, expr->getDecl(), env); - } - ExprEffect Par(clang::Expr* parent, int n, clang::Expr** exprs, @@ -844,7 +826,7 @@ class FunctionAnalyzer { CallProps props; clang::CXXMemberCallExpr* memcall = - dyn_cast_or_null<clang::CXXMemberCallExpr>(call); + llvm::dyn_cast_or_null<clang::CXXMemberCallExpr>(call); if (memcall != NULL) { clang::Expr* receiver = memcall->getImplicitObjectArgument(); props.SetEffect(0, VisitExpr(receiver, env)); @@ -870,14 +852,15 @@ class FunctionAnalyzer { // -------------------------------------------------------------------------- Environment 
VisitStmt(clang::Stmt* stmt, const Environment& env) { -#define VISIT(type) do { \ - clang::type* concrete_stmt = dyn_cast_or_null<clang::type>(stmt); \ - if (concrete_stmt != NULL) { \ - return Visit##type (concrete_stmt, env); \ - } \ - } while(0); - - if (clang::Expr* expr = dyn_cast_or_null<clang::Expr>(stmt)) { +#define VISIT(type) \ + do { \ + clang::type* concrete_stmt = llvm::dyn_cast_or_null<clang::type>(stmt); \ + if (concrete_stmt != NULL) { \ + return Visit##type(concrete_stmt, env); \ + } \ + } while (0); + + if (clang::Expr* expr = llvm::dyn_cast_or_null<clang::Expr>(stmt)) { return env.ApplyEffect(VisitExpr(expr, env)); } @@ -1078,11 +1061,12 @@ class FunctionAnalyzer { const clang::TagType* ToTagType(const clang::Type* t) { if (t == NULL) { return NULL; - } else if (isa<clang::TagType>(t)) { - return cast<clang::TagType>(t); - } else if (isa<clang::SubstTemplateTypeParmType>(t)) { - return ToTagType(cast<clang::SubstTemplateTypeParmType>(t)-> - getReplacementType().getTypePtr()); + } else if (llvm::isa<clang::TagType>(t)) { + return llvm::cast<clang::TagType>(t); + } else if (llvm::isa<clang::SubstTemplateTypeParmType>(t)) { + return ToTagType(llvm::cast<clang::SubstTemplateTypeParmType>(t) + ->getReplacementType() + .getTypePtr()); } else { return NULL; } @@ -1095,7 +1079,7 @@ class FunctionAnalyzer { bool IsRawPointerType(clang::QualType qtype) { const clang::PointerType* type = - dyn_cast_or_null<clang::PointerType>(qtype.getTypePtrOrNull()); + llvm::dyn_cast_or_null<clang::PointerType>(qtype.getTypePtrOrNull()); if (type == NULL) return false; const clang::TagType* pointee = @@ -1103,7 +1087,7 @@ class FunctionAnalyzer { if (pointee == NULL) return false; clang::CXXRecordDecl* record = - dyn_cast_or_null<clang::CXXRecordDecl>(pointee->getDecl()); + llvm::dyn_cast_or_null<clang::CXXRecordDecl>(pointee->getDecl()); if (record == NULL) return false; if (!InV8Namespace(record)) return false; @@ -1117,7 +1101,7 @@ class FunctionAnalyzer { } Environment VisitDecl(clang::Decl* decl, const Environment& env) { - if (clang::VarDecl* var = dyn_cast<clang::VarDecl>(decl)) { + if (clang::VarDecl* var = llvm::dyn_cast<clang::VarDecl>(decl)) { Environment out = var->hasInit() ? 
VisitStmt(var->getInit(), env) : env; if (IsRawPointerType(var->getType())) { @@ -1177,7 +1161,8 @@ class FunctionAnalyzer { private: void ReportUnsafe(const clang::Expr* expr, const std::string& msg) { d_.Report(clang::FullSourceLoc(expr->getExprLoc(), sm_), - d_.getCustomDiagID(clang::Diagnostic::Warning, msg)); + d_.getCustomDiagID(clang::DiagnosticsEngine::Warning, "%0")) + << msg; } @@ -1186,7 +1171,7 @@ class FunctionAnalyzer { clang::CXXRecordDecl* object_decl_; clang::CXXRecordDecl* smi_decl_; - clang::Diagnostic& d_; + clang::DiagnosticsEngine& d_; clang::SourceManager& sm_; Block* block_; @@ -1197,8 +1182,7 @@ class FunctionAnalyzer { class ProblemsFinder : public clang::ASTConsumer, public clang::RecursiveASTVisitor<ProblemsFinder> { public: - ProblemsFinder(clang::Diagnostic& d, - clang::SourceManager& sm, + ProblemsFinder(clang::DiagnosticsEngine& d, clang::SourceManager& sm, const std::vector<std::string>& args) : d_(d), sm_(sm), dead_vars_analysis_(false) { for (unsigned i = 0; i < args.size(); ++i) { @@ -1224,14 +1208,9 @@ class ProblemsFinder : public clang::ASTConsumer, if (smi_decl != NULL) smi_decl = smi_decl->getDefinition(); if (object_decl != NULL && smi_decl != NULL) { - function_analyzer_ = - new FunctionAnalyzer(clang::createItaniumMangleContext(ctx, d_), - r.ResolveName("Handle"), - object_decl, - smi_decl, - d_, - sm_, - dead_vars_analysis_); + function_analyzer_ = new FunctionAnalyzer( + clang::ItaniumMangleContext::create(ctx, d_), r.ResolveName("Handle"), + object_decl, smi_decl, d_, sm_, dead_vars_analysis_); TraverseDecl(ctx.getTranslationUnitDecl()); } else { if (object_decl == NULL) { @@ -1249,7 +1228,7 @@ class ProblemsFinder : public clang::ASTConsumer, } private: - clang::Diagnostic& d_; + clang::DiagnosticsEngine& d_; clang::SourceManager& sm_; bool dead_vars_analysis_; diff --git a/deps/v8/tools/gcmole/gcmole.lua b/deps/v8/tools/gcmole/gcmole.lua index f1980a45961..d287f7b9122 100644 --- a/deps/v8/tools/gcmole/gcmole.lua +++ b/deps/v8/tools/gcmole/gcmole.lua @@ -93,18 +93,20 @@ end local function MakeClangCommandLine(plugin, plugin_args, triple, arch_define) if plugin_args then for i = 1, #plugin_args do - plugin_args[i] = "-plugin-arg-" .. plugin .. " " .. plugin_args[i] + plugin_args[i] = "-Xclang -plugin-arg-" .. plugin + .. " -Xclang " .. plugin_args[i] end plugin_args = " " .. table.concat(plugin_args, " ") end - return CLANG_BIN .. "/clang -cc1 -load " .. CLANG_PLUGINS .. "/libgcmole.so" - .. " -plugin " .. plugin + return CLANG_BIN .. "/clang++ -std=c++11 -c " + .. " -Xclang -load -Xclang " .. CLANG_PLUGINS .. "/libgcmole.so" + .. " -Xclang -plugin -Xclang " .. plugin .. (plugin_args or "") - .. " -triple " .. triple + .. " -Xclang -triple -Xclang " .. triple .. " -D" .. arch_define .. " -DENABLE_DEBUGGER_SUPPORT" .. " -DV8_I18N_SUPPORT" - .. " -Isrc" + .. " -I./" .. " -Ithird_party/icu/source/common" .. " -Ithird_party/icu/source/i18n" end diff --git a/deps/v8/tools/gdbinit b/deps/v8/tools/gdbinit new file mode 100644 index 00000000000..20cdff618c8 --- /dev/null +++ b/deps/v8/tools/gdbinit @@ -0,0 +1,33 @@ +# Copyright 2014 the V8 project authors. All rights reserved. +# Use of this source code is governed by a BSD-style license that can be +# found in the LICENSE file. + +# Print HeapObjects. +define job +print ((v8::internal::HeapObject*)($arg0))->Print() +end +document job +Print a v8 JavaScript object +Usage: job tagged_ptr +end + +# Print Code objects containing given PC. 
+define jco +job (v8::internal::Isolate::Current()->FindCodeObject((v8::internal::Address)$arg0)) +end +document jco +Print a v8 Code object from an internal code address +Usage: jco pc +end + +# Print JavaScript stack trace. +define jst +print v8::internal::Isolate::Current()->PrintStack(stdout) +end +document jst +Print the current JavaScript stack trace +Usage: jst +end + +set disassembly-flavor intel +set disable-randomization off diff --git a/deps/v8/tools/gen-postmortem-metadata.py b/deps/v8/tools/gen-postmortem-metadata.py index 2ce085393e1..b617573d9c7 100644 --- a/deps/v8/tools/gen-postmortem-metadata.py +++ b/deps/v8/tools/gen-postmortem-metadata.py @@ -70,8 +70,6 @@ { 'name': 'ExternalStringTag', 'value': 'kExternalStringTag' }, { 'name': 'SlicedStringTag', 'value': 'kSlicedStringTag' }, - { 'name': 'FailureTag', 'value': 'kFailureTag' }, - { 'name': 'FailureTagMask', 'value': 'kFailureTagMask' }, { 'name': 'HeapObjectTag', 'value': 'kHeapObjectTag' }, { 'name': 'HeapObjectTagMask', 'value': 'kHeapObjectTagMask' }, { 'name': 'SmiTag', 'value': 'kSmiTag' }, @@ -94,8 +92,6 @@ 'value': 'DescriptorArray::kFirstIndex' }, { 'name': 'prop_type_field', 'value': 'FIELD' }, - { 'name': 'prop_type_first_phantom', - 'value': 'INTERCEPTOR' }, { 'name': 'prop_type_mask', 'value': 'PropertyDetails::TypeField::kMask' }, { 'name': 'prop_index_mask', @@ -112,13 +108,6 @@ { 'name': 'prop_desc_size', 'value': 'DescriptorArray::kDescriptorSize' }, - { 'name': 'bit_field2_elements_kind_mask', - 'value': 'Map::kElementsKindMask' }, - { 'name': 'bit_field2_elements_kind_shift', - 'value': 'Map::kElementsKindShift' }, - { 'name': 'bit_field3_dictionary_map_shift', - 'value': 'Map::DictionaryMap::kShift' }, - { 'name': 'elements_fast_holey_elements', 'value': 'FAST_HOLEY_ELEMENTS' }, { 'name': 'elements_fast_elements', @@ -126,6 +115,13 @@ { 'name': 'elements_dictionary_elements', 'value': 'DICTIONARY_ELEMENTS' }, + { 'name': 'bit_field2_elements_kind_mask', + 'value': 'Map::ElementsKindBits::kMask' }, + { 'name': 'bit_field2_elements_kind_shift', + 'value': 'Map::ElementsKindBits::kShift' }, + { 'name': 'bit_field3_dictionary_map_shift', + 'value': 'Map::DictionaryMap::kShift' }, + { 'name': 'off_fp_context', 'value': 'StandardFrameConstants::kContextOffset' }, { 'name': 'off_fp_constant_pool', @@ -196,9 +192,9 @@ * This file is generated by %s. Do not edit directly. */ -#include "v8.h" -#include "frames.h" -#include "frames-inl.h" /* for architecture-specific frame constants */ +#include "src/v8.h" +#include "src/frames.h" +#include "src/frames-inl.h" /* for architecture-specific frame constants */ using namespace v8::internal; diff --git a/deps/v8/tools/generate-runtime-tests.py b/deps/v8/tools/generate-runtime-tests.py new file mode 100755 index 00000000000..b5f61a84227 --- /dev/null +++ b/deps/v8/tools/generate-runtime-tests.py @@ -0,0 +1,1412 @@ +#!/usr/bin/env python +# Copyright 2014 the V8 project authors. All rights reserved. +# Use of this source code is governed by a BSD-style license that can be +# found in the LICENSE file. 
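+#
+# This script parses RUNTIME_FUNCTION definitions and their CONVERT_*_CHECKED
+# argument checks out of src/runtime.cc (FILENAME below) and generates
+# JavaScript test cases for them under test/mjsunit/runtime-gen (BASEPATH).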
+
+import itertools
+import js2c
+import multiprocessing
+import optparse
+import os
+import random
+import re
+import shutil
+import signal
+import string
+import subprocess
+import sys
+import time
+
+FILENAME = "src/runtime.cc"
+HEADERFILENAME = "src/runtime.h"
+FUNCTION = re.compile("^RUNTIME_FUNCTION\(Runtime_(\w+)")
+ARGSLENGTH = re.compile(".*DCHECK\(.*args\.length\(\) == (\d+)\);")
+FUNCTIONEND = "}\n"
+MACRO = re.compile(r"^#define ([^ ]+)\(([^)]*)\) *([^\\]*)\\?\n$")
+FIRST_WORD = re.compile("^\s*(.*?)[\s({\[]")
+
+WORKSPACE = os.path.abspath(os.path.join(os.path.dirname(sys.argv[0]), ".."))
+BASEPATH = os.path.join(WORKSPACE, "test", "mjsunit", "runtime-gen")
+THIS_SCRIPT = os.path.relpath(sys.argv[0])
+
+# Expand these macros, they define further runtime functions.
+EXPAND_MACROS = [
+  "BUFFER_VIEW_GETTER",
+  "DATA_VIEW_GETTER",
+  "DATA_VIEW_SETTER",
+  "RUNTIME_UNARY_MATH",
+]
+# TODO(jkummerow): We could also whitelist the following macros, but the
+# functions they define are so trivial that it's unclear how much benefit
+# that would provide:
+# ELEMENTS_KIND_CHECK_RUNTIME_FUNCTION
+# FIXED_TYPED_ARRAYS_CHECK_RUNTIME_FUNCTION
+# TYPED_ARRAYS_CHECK_RUNTIME_FUNCTION
+
+# Counts of functions in each detection state. These are used to assert
+# that the parser doesn't bit-rot. Change the values as needed when you add,
+# remove or change runtime functions, but make sure we don't lose our ability
+# to parse them!
+EXPECTED_FUNCTION_COUNT = 429
+EXPECTED_FUZZABLE_COUNT = 332
+EXPECTED_CCTEST_COUNT = 7
+EXPECTED_UNKNOWN_COUNT = 16
+EXPECTED_BUILTINS_COUNT = 808
+
+
+# Don't call these at all.
+BLACKLISTED = [
+  "Abort",  # Kills the process.
+  "AbortJS",  # Kills the process.
+  "CompileForOnStackReplacement",  # Riddled with DCHECK.
+  "IS_VAR",  # Not implemented in the runtime.
+  "ListNatives",  # Not available in Release mode.
+  "SetAllocationTimeout",  # Too slow for fuzzing.
+  "SystemBreak",  # Kills (int3) the process.
+
+  # These are weird. They violate some invariants when called after
+  # bootstrapping.
+  "DisableAccessChecks",
+  "EnableAccessChecks",
+
+  # The current LiveEdit implementation relies on and messes with internals
+  # in ways that makes it fundamentally unfuzzable :-(
+  "DebugGetLoadedScripts",
+  "DebugSetScriptSource",
+  "LiveEditFindSharedFunctionInfosForScript",
+  "LiveEditFunctionSourceUpdated",
+  "LiveEditGatherCompileInfo",
+  "LiveEditPatchFunctionPositions",
+  "LiveEditReplaceFunctionCode",
+  "LiveEditReplaceRefToNestedFunction",
+  "LiveEditReplaceScript",
+  "LiveEditRestartFrame",
+  "SetScriptBreakPoint",
+
+  # TODO(jkummerow): Fix these and un-blacklist them!
+  "CreateDateTimeFormat",
+  "CreateNumberFormat",
+
+  # TODO(danno): Fix these internal functions that are only callable from stubs
+  # and un-blacklist them!
+ "NumberToString", + "RxegExpConstructResult", + "RegExpExec", + "StringAdd", + "SubString", + "StringCompare", + "StringCharCodeAt", + "GetFromCache", + + # Compilation + "CompileUnoptimized", + "CompileOptimized", + "TryInstallOptimizedCode", + "NotifyDeoptimized", + "NotifyStubFailure", + + # Utilities + "AllocateInNewSpace", + "AllocateInTargetSpace", + "AllocateHeapNumber", + "NumberToSmi", + "NumberToStringSkipCache", + + "NewSloppyArguments", + "NewStrictArguments", + + # Harmony + "CreateJSGeneratorObject", + "SuspendJSGeneratorObject", + "ResumeJSGeneratorObject", + "ThrowGeneratorStateError", + + # Arrays + "ArrayConstructor", + "InternalArrayConstructor", + "NormalizeElements", + + # Literals + "MaterializeRegExpLiteral", + "CreateObjectLiteral", + "CreateArrayLiteral", + "CreateArrayLiteralStubBailout", + + # Statements + "NewClosure", + "NewClosureFromStubFailure", + "NewObject", + "NewObjectWithAllocationSite", + "FinalizeInstanceSize", + "Throw", + "ReThrow", + "ThrowReferenceError", + "ThrowNotDateError", + "StackGuard", + "Interrupt", + "PromoteScheduledException", + + # Contexts + "NewGlobalContext", + "NewFunctionContext", + "PushWithContext", + "PushCatchContext", + "PushBlockContext", + "PushModuleContext", + "DeleteLookupSlot", + "LoadLookupSlot", + "LoadLookupSlotNoReferenceError", + "StoreLookupSlot", + + # Declarations + "DeclareGlobals", + "DeclareModules", + "DeclareContextSlot", + "InitializeConstGlobal", + "InitializeConstContextSlot", + + # Eval + "ResolvePossiblyDirectEval", + + # Maths + "MathPowSlow", + "MathPowRT" +] + + +# These will always throw. +THROWS = [ + "CheckExecutionState", # Needs to hit a break point. + "CheckIsBootstrapping", # Needs to be bootstrapping. + "DebugEvaluate", # Needs to hit a break point. + "DebugEvaluateGlobal", # Needs to hit a break point. + "DebugIndexedInterceptorElementValue", # Needs an indexed interceptor. + "DebugNamedInterceptorPropertyValue", # Needs a named interceptor. + "DebugSetScriptSource", # Checks compilation state of script. + "GetAllScopesDetails", # Needs to hit a break point. + "GetFrameCount", # Needs to hit a break point. + "GetFrameDetails", # Needs to hit a break point. + "GetRootNaN", # Needs to be bootstrapping. + "GetScopeCount", # Needs to hit a break point. + "GetScopeDetails", # Needs to hit a break point. + "GetStepInPositions", # Needs to hit a break point. + "GetTemplateField", # Needs a {Function,Object}TemplateInfo. + "GetThreadCount", # Needs to hit a break point. + "GetThreadDetails", # Needs to hit a break point. + "IsAccessAllowedForObserver", # Needs access-check-required object. + "UnblockConcurrentRecompilation" # Needs --block-concurrent-recompilation. +] + + +# Definitions used in CUSTOM_KNOWN_GOOD_INPUT below. +_BREAK_ITERATOR = ( + "%GetImplFromInitializedIntlObject(new Intl.v8BreakIterator())") +_COLLATOR = "%GetImplFromInitializedIntlObject(new Intl.Collator('en-US'))" +_DATETIME_FORMAT = ( + "%GetImplFromInitializedIntlObject(new Intl.DateTimeFormat('en-US'))") +_NUMBER_FORMAT = ( + "%GetImplFromInitializedIntlObject(new Intl.NumberFormat('en-US'))") + + +# Custom definitions for function input that does not throw. +# Format: "FunctionName": ["arg0", "arg1", ..., argslength]. +# None means "fall back to autodetected value". 
+CUSTOM_KNOWN_GOOD_INPUT = { + "AddNamedProperty": [None, "\"bla\"", None, None, None], + "AddPropertyForTemplate": [None, 10, None, None, None], + "Apply": ["function() {}", None, None, None, None, None], + "ArrayBufferSliceImpl": [None, None, 0, None], + "ArrayConcat": ["[1, 'a']", None], + "BreakIteratorAdoptText": [_BREAK_ITERATOR, None, None], + "BreakIteratorBreakType": [_BREAK_ITERATOR, None], + "BreakIteratorCurrent": [_BREAK_ITERATOR, None], + "BreakIteratorFirst": [_BREAK_ITERATOR, None], + "BreakIteratorNext": [_BREAK_ITERATOR, None], + "CompileString": [None, "false", None], + "CreateBreakIterator": ["'en-US'", "{type: 'string'}", None, None], + "CreateJSFunctionProxy": [None, "function() {}", None, None, None], + "CreatePrivateSymbol": ["\"foo\"", None], + "CreatePrivateOwnSymbol": ["\"foo\"", None], + "CreateSymbol": ["\"foo\"", None], + "DateParseString": [None, "new Array(8)", None], + "DefineAccessorPropertyUnchecked": [None, None, "function() {}", + "function() {}", 2, None], + "FunctionBindArguments": [None, None, "undefined", None, None], + "GetBreakLocations": [None, 0, None], + "GetDefaultReceiver": ["function() {}", None], + "GetImplFromInitializedIntlObject": ["new Intl.NumberFormat('en-US')", None], + "InternalCompare": [_COLLATOR, None, None, None], + "InternalDateFormat": [_DATETIME_FORMAT, None, None], + "InternalDateParse": [_DATETIME_FORMAT, None, None], + "InternalNumberFormat": [_NUMBER_FORMAT, None, None], + "InternalNumberParse": [_NUMBER_FORMAT, None, None], + "IsSloppyModeFunction": ["function() {}", None], + "LoadMutableDouble": ["{foo: 1.2}", None, None], + "NewObjectFromBound": ["(function() {}).bind({})", None], + "NumberToRadixString": [None, "2", None], + "ParseJson": ["\"{}\"", 1], + "RegExpExecMultiple": [None, None, "['a']", "['a']", None], + "DefineApiAccessorProperty": [None, None, "undefined", "undefined", None, None], + "SetIteratorInitialize": [None, None, "2", None], + "SetDebugEventListener": ["undefined", None, None], + "SetFunctionBreakPoint": [None, 218, None, None], + "StringBuilderConcat": ["[1, 2, 3]", 3, None, None], + "StringBuilderJoin": ["['a', 'b']", 4, None, None], + "StringMatch": [None, None, "['a', 'b']", None], + "StringNormalize": [None, 2, None], + "StringReplaceGlobalRegExpWithString": [None, None, None, "['a']", None], + "TypedArrayInitialize": [None, 6, "new ArrayBuffer(8)", None, 4, None], + "TypedArrayInitializeFromArrayLike": [None, 6, None, None, None], + "TypedArraySetFastCases": [None, None, "0", None], + "FunctionIsArrow": ["() => null", None], +} + + +# Types of arguments that cannot be generated in a JavaScript testcase. 
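+# (All of them are VM-internal heap types with no JavaScript-level
+# constructor, so the fuzzer below cannot synthesize them.)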
+NON_JS_TYPES = [ + "Code", "Context", "FixedArray", "FunctionTemplateInfo", + "JSFunctionResultCache", "JSMessageObject", "Map", "ScopeInfo", + "SharedFunctionInfo"] + + +class Generator(object): + + def RandomVariable(self, varname, vartype, simple): + if simple: + return self._Variable(varname, self.GENERATORS[vartype][0]) + return self.GENERATORS[vartype][1](self, varname, + self.DEFAULT_RECURSION_BUDGET) + + @staticmethod + def IsTypeSupported(typename): + return typename in Generator.GENERATORS + + USUAL_SUSPECT_PROPERTIES = ["size", "length", "byteLength", "__proto__", + "prototype", "0", "1", "-1"] + DEFAULT_RECURSION_BUDGET = 2 + PROXY_TRAPS = """{ + getOwnPropertyDescriptor: function(name) { + return {value: function() {}, configurable: true, writable: true, + enumerable: true}; + }, + getPropertyDescriptor: function(name) { + return {value: function() {}, configurable: true, writable: true, + enumerable: true}; + }, + getOwnPropertyNames: function() { return []; }, + getPropertyNames: function() { return []; }, + defineProperty: function(name, descriptor) {}, + delete: function(name) { return true; }, + fix: function() {} + }""" + + def _Variable(self, name, value, fallback=None): + args = { "name": name, "value": value, "fallback": fallback } + if fallback: + wrapper = "try { %%s } catch(e) { var %(name)s = %(fallback)s; }" % args + else: + wrapper = "%s" + return [wrapper % ("var %(name)s = %(value)s;" % args)] + + def _Boolean(self, name, recursion_budget): + return self._Variable(name, random.choice(["true", "false"])) + + def _Oddball(self, name, recursion_budget): + return self._Variable(name, + random.choice(["true", "false", "undefined", "null"])) + + def _StrictMode(self, name, recursion_budget): + return self._Variable(name, random.choice([0, 1])) + + def _Int32(self, name, recursion_budget=0): + die = random.random() + if die < 0.5: + value = random.choice([-3, -1, 0, 1, 2, 10, 515, 0x3fffffff, 0x7fffffff, + 0x40000000, -0x40000000, -0x80000000]) + elif die < 0.75: + value = random.randint(-1000, 1000) + else: + value = random.randint(-0x80000000, 0x7fffffff) + return self._Variable(name, value) + + def _Uint32(self, name, recursion_budget=0): + die = random.random() + if die < 0.5: + value = random.choice([0, 1, 2, 3, 4, 8, 0x3fffffff, 0x40000000, + 0x7fffffff, 0xffffffff]) + elif die < 0.75: + value = random.randint(0, 1000) + else: + value = random.randint(0, 0xffffffff) + return self._Variable(name, value) + + def _Smi(self, name, recursion_budget): + die = random.random() + if die < 0.5: + value = random.choice([-5, -1, 0, 1, 2, 3, 0x3fffffff, -0x40000000]) + elif die < 0.75: + value = random.randint(-1000, 1000) + else: + value = random.randint(-0x40000000, 0x3fffffff) + return self._Variable(name, value) + + def _Number(self, name, recursion_budget): + die = random.random() + if die < 0.5: + return self._Smi(name, recursion_budget) + elif die < 0.6: + value = random.choice(["Infinity", "-Infinity", "NaN", "-0", + "1.7976931348623157e+308", # Max value. + "2.2250738585072014e-308", # Min value. + "4.9406564584124654e-324"]) # Min subnormal. 
+    else:
+      value = random.lognormvariate(0, 15)
+    return self._Variable(name, value)
+
+  def _RawRandomString(self, minlength=0, maxlength=100,
+                       alphabet=string.ascii_letters):
+    length = random.randint(minlength, maxlength)
+    result = ""
+    for i in xrange(length):
+      result += random.choice(alphabet)
+    return result
+
+  def _SeqString(self, name, recursion_budget):
+    s1 = self._RawRandomString(1, 5)
+    s2 = self._RawRandomString(1, 5)
+    # 'foo' + 'bar'
+    return self._Variable(name, "\"%s\" + \"%s\"" % (s1, s2))
+
+  def _SeqTwoByteString(self, name):
+    s1 = self._RawRandomString(1, 5)
+    s2 = self._RawRandomString(1, 5)
+    # 'foo' + unicode + 'bar'
+    return self._Variable(name, "\"%s\" + \"\\u2082\" + \"%s\"" % (s1, s2))
+
+  def _SlicedString(self, name):
+    s = self._RawRandomString(20, 30)
+    # 'ffoo12345678901234567890'.substr(1)
+    return self._Variable(name, "\"%s\".substr(1)" % s)
+
+  def _ConsString(self, name):
+    s1 = self._RawRandomString(8, 15)
+    s2 = self._RawRandomString(8, 15)
+    # 'foo12345' + (function() { return 'bar12345';})()
+    return self._Variable(name,
+                          "\"%s\" + (function() { return \"%s\";})()" % (s1, s2))
+
+  def _InternalizedString(self, name):
+    return self._Variable(name, "\"%s\"" % self._RawRandomString(0, 20))
+
+  def _String(self, name, recursion_budget):
+    die = random.random()
+    if die < 0.5:
+      string = random.choice(self.USUAL_SUSPECT_PROPERTIES)
+      return self._Variable(name, "\"%s\"" % string)
+    elif die < 0.6:
+      number_name = name + "_number"
+      result = self._Number(number_name, recursion_budget)
+      return result + self._Variable(name, "\"\" + %s" % number_name)
+    elif die < 0.7:
+      return self._SeqString(name, recursion_budget)
+    elif die < 0.8:
+      return self._ConsString(name)
+    elif die < 0.9:
+      return self._InternalizedString(name)
+    else:
+      return self._SlicedString(name)
+
+  def _Symbol(self, name, recursion_budget):
+    raw_string_name = name + "_1"
+    result = self._String(raw_string_name, recursion_budget)
+    return result + self._Variable(name, "Symbol(%s)" % raw_string_name)
+
+  def _Name(self, name, recursion_budget):
+    if random.random() < 0.2:
+      return self._Symbol(name, recursion_budget)
+    return self._String(name, recursion_budget)
+
+  def _JSValue(self, name, recursion_budget):
+    die = random.random()
+    raw_name = name + "_1"
+    if die < 0.33:
+      result = self._String(raw_name, recursion_budget)
+      return result + self._Variable(name, "new String(%s)" % raw_name)
+    elif die < 0.66:
+      result = self._Boolean(raw_name, recursion_budget)
+      return result + self._Variable(name, "new Boolean(%s)" % raw_name)
+    else:
+      result = self._Number(raw_name, recursion_budget)
+      return result + self._Variable(name, "new Number(%s)" % raw_name)
+
+  def _RawRandomPropertyName(self):
+    if random.random() < 0.5:
+      return random.choice(self.USUAL_SUSPECT_PROPERTIES)
+    return self._RawRandomString(0, 10)
+
+  def _AddProperties(self, name, result, recursion_budget):
+    propcount = random.randint(0, 3)
+    propname = None
+    for i in range(propcount):
+      die = random.random()
+      if die < 0.5:
+        propname = "%s_prop%d" % (name, i)
+        result += self._Name(propname, recursion_budget - 1)
+      else:
+        propname = "\"%s\"" % self._RawRandomPropertyName()
+      propvalue_name = "%s_val%d" % (name, i)
+      result += self._Object(propvalue_name, recursion_budget - 1)
+      result.append("try { %s[%s] = %s; } catch (e) {}" %
+                    (name, propname, propvalue_name))
+    if random.random() < 0.2 and propname:
+      # Force the object to slow mode.
+ result.append("delete %s[%s];" % (name, propname)) + + def _RandomElementIndex(self, element_name, result): + if random.random() < 0.5: + return random.randint(-1000, 1000) + result += self._Smi(element_name, 0) + return element_name + + def _AddElements(self, name, result, recursion_budget): + elementcount = random.randint(0, 3) + for i in range(elementcount): + element_name = "%s_idx%d" % (name, i) + index = self._RandomElementIndex(element_name, result) + value_name = "%s_elt%d" % (name, i) + result += self._Object(value_name, recursion_budget - 1) + result.append("try { %s[%s] = %s; } catch(e) {}" % + (name, index, value_name)) + + def _AddAccessors(self, name, result, recursion_budget): + accessorcount = random.randint(0, 3) + for i in range(accessorcount): + propname = self._RawRandomPropertyName() + what = random.choice(["get", "set"]) + function_name = "%s_access%d" % (name, i) + result += self._PlainFunction(function_name, recursion_budget - 1) + result.append("try { Object.defineProperty(%s, \"%s\", {%s: %s}); } " + "catch (e) {}" % (name, propname, what, function_name)) + + def _PlainArray(self, name, recursion_budget): + die = random.random() + if die < 0.5: + literal = random.choice(["[]", "[1, 2]", "[1.5, 2.5]", + "['a', 'b', 1, true]"]) + return self._Variable(name, literal) + else: + new = random.choice(["", "new "]) + length = random.randint(0, 101000) + return self._Variable(name, "%sArray(%d)" % (new, length)) + + def _PlainObject(self, name, recursion_budget): + die = random.random() + if die < 0.67: + literal_propcount = random.randint(0, 3) + properties = [] + result = [] + for i in range(literal_propcount): + propname = self._RawRandomPropertyName() + propvalue_name = "%s_lit%d" % (name, i) + result += self._Object(propvalue_name, recursion_budget - 1) + properties.append("\"%s\": %s" % (propname, propvalue_name)) + return result + self._Variable(name, "{%s}" % ", ".join(properties)) + else: + return self._Variable(name, "new Object()") + + def _JSArray(self, name, recursion_budget): + result = self._PlainArray(name, recursion_budget) + self._AddAccessors(name, result, recursion_budget) + self._AddProperties(name, result, recursion_budget) + self._AddElements(name, result, recursion_budget) + return result + + def _RawRandomBufferLength(self): + if random.random() < 0.2: + return random.choice([0, 1, 8, 0x40000000, 0x80000000]) + return random.randint(0, 1000) + + def _JSArrayBuffer(self, name, recursion_budget): + length = self._RawRandomBufferLength() + return self._Variable(name, "new ArrayBuffer(%d)" % length) + + def _JSDataView(self, name, recursion_budget): + buffer_name = name + "_buffer" + result = self._JSArrayBuffer(buffer_name, recursion_budget) + args = [buffer_name] + die = random.random() + if die < 0.67: + offset = self._RawRandomBufferLength() + args.append("%d" % offset) + if die < 0.33: + length = self._RawRandomBufferLength() + args.append("%d" % length) + result += self._Variable(name, "new DataView(%s)" % ", ".join(args), + fallback="new DataView(new ArrayBuffer(8))") + return result + + def _JSDate(self, name, recursion_budget): + die = random.random() + if die < 0.25: + return self._Variable(name, "new Date()") + elif die < 0.5: + ms_name = name + "_ms" + result = self._Number(ms_name, recursion_budget) + return result + self._Variable(name, "new Date(%s)" % ms_name) + elif die < 0.75: + str_name = name + "_str" + month = random.choice(["Jan", "Feb", "Mar", "Apr", "May", "Jun", "Jul", + "Aug", "Sep", "Oct", "Nov", "Dec"]) + day = 
random.randint(1, 28) + year = random.randint(1900, 2100) + hour = random.randint(0, 23) + minute = random.randint(0, 59) + second = random.randint(0, 59) + str_value = ("\"%s %s, %s %s:%s:%s\"" % + (month, day, year, hour, minute, second)) + result = self._Variable(str_name, str_value) + return result + self._Variable(name, "new Date(%s)" % str_name) + else: + components = tuple(map(lambda x: "%s_%s" % (name, x), + ["y", "m", "d", "h", "min", "s", "ms"])) + return ([j for i in map(self._Int32, components) for j in i] + + self._Variable(name, "new Date(%s)" % ", ".join(components))) + + def _PlainFunction(self, name, recursion_budget): + result_name = "result" + body = ["function() {"] + body += self._Object(result_name, recursion_budget - 1) + body.append("return result;\n}") + return self._Variable(name, "%s" % "\n".join(body)) + + def _JSFunction(self, name, recursion_budget): + result = self._PlainFunction(name, recursion_budget) + self._AddAccessors(name, result, recursion_budget) + self._AddProperties(name, result, recursion_budget) + self._AddElements(name, result, recursion_budget) + return result + + def _JSFunctionProxy(self, name, recursion_budget): + # TODO(jkummerow): Revisit this as the Proxy implementation evolves. + return self._Variable(name, "Proxy.createFunction(%s, function() {})" % + self.PROXY_TRAPS) + + def _JSGeneratorObject(self, name, recursion_budget): + # TODO(jkummerow): Be more creative here? + return self._Variable(name, "(function*() { yield 1; })()") + + def _JSMap(self, name, recursion_budget, weak=""): + result = self._Variable(name, "new %sMap()" % weak) + num_entries = random.randint(0, 3) + for i in range(num_entries): + key_name = "%s_k%d" % (name, i) + value_name = "%s_v%d" % (name, i) + if weak: + result += self._JSObject(key_name, recursion_budget - 1) + else: + result += self._Object(key_name, recursion_budget - 1) + result += self._Object(value_name, recursion_budget - 1) + result.append("%s.set(%s, %s)" % (name, key_name, value_name)) + return result + + def _JSMapIterator(self, name, recursion_budget): + map_name = name + "_map" + result = self._JSMap(map_name, recursion_budget) + iterator_type = random.choice(['keys', 'values', 'entries']) + return (result + self._Variable(name, "%s.%s()" % + (map_name, iterator_type))) + + def _JSProxy(self, name, recursion_budget): + # TODO(jkummerow): Revisit this as the Proxy implementation evolves. + return self._Variable(name, "Proxy.create(%s)" % self.PROXY_TRAPS) + + def _JSRegExp(self, name, recursion_budget): + flags = random.choice(["", "g", "i", "m", "gi"]) + string = "a(b|c)*a" # TODO(jkummerow): Be more creative here? 
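+    # Randomly pick the literal (/re/flags) or constructor form below.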
+ ctor = random.choice(["/%s/%s", "new RegExp(\"%s\", \"%s\")"]) + return self._Variable(name, ctor % (string, flags)) + + def _JSSet(self, name, recursion_budget, weak=""): + result = self._Variable(name, "new %sSet()" % weak) + num_entries = random.randint(0, 3) + for i in range(num_entries): + element_name = "%s_e%d" % (name, i) + if weak: + result += self._JSObject(element_name, recursion_budget - 1) + else: + result += self._Object(element_name, recursion_budget - 1) + result.append("%s.add(%s)" % (name, element_name)) + return result + + def _JSSetIterator(self, name, recursion_budget): + set_name = name + "_set" + result = self._JSSet(set_name, recursion_budget) + iterator_type = random.choice(['values', 'entries']) + return (result + self._Variable(name, "%s.%s()" % + (set_name, iterator_type))) + + def _JSTypedArray(self, name, recursion_budget): + arraytype = random.choice(["Int8", "Int16", "Int32", "Uint8", "Uint16", + "Uint32", "Float32", "Float64", "Uint8Clamped"]) + ctor_type = random.randint(0, 3) + if ctor_type == 0: + length = random.randint(0, 1000) + return self._Variable(name, "new %sArray(%d)" % (arraytype, length), + fallback="new %sArray(8)" % arraytype) + elif ctor_type == 1: + input_name = name + "_typedarray" + result = self._JSTypedArray(input_name, recursion_budget - 1) + return (result + + self._Variable(name, "new %sArray(%s)" % (arraytype, input_name), + fallback="new %sArray(8)" % arraytype)) + elif ctor_type == 2: + arraylike_name = name + "_arraylike" + result = self._JSObject(arraylike_name, recursion_budget - 1) + length = random.randint(0, 1000) + result.append("try { %s.length = %d; } catch(e) {}" % + (arraylike_name, length)) + return (result + + self._Variable(name, + "new %sArray(%s)" % (arraytype, arraylike_name), + fallback="new %sArray(8)" % arraytype)) + else: + die = random.random() + buffer_name = name + "_buffer" + args = [buffer_name] + result = self._JSArrayBuffer(buffer_name, recursion_budget) + if die < 0.67: + offset_name = name + "_offset" + args.append(offset_name) + result += self._Int32(offset_name) + if die < 0.33: + length_name = name + "_length" + args.append(length_name) + result += self._Int32(length_name) + return (result + + self._Variable(name, + "new %sArray(%s)" % (arraytype, ", ".join(args)), + fallback="new %sArray(8)" % arraytype)) + + def _JSArrayBufferView(self, name, recursion_budget): + if random.random() < 0.4: + return self._JSDataView(name, recursion_budget) + else: + return self._JSTypedArray(name, recursion_budget) + + def _JSWeakCollection(self, name, recursion_budget): + ctor = random.choice([self._JSMap, self._JSSet]) + return ctor(name, recursion_budget, weak="Weak") + + def _PropertyDetails(self, name, recursion_budget): + # TODO(jkummerow): Be more clever here? + return self._Int32(name) + + def _JSObject(self, name, recursion_budget): + die = random.random() + if die < 0.4: + function = random.choice([self._PlainObject, self._PlainArray, + self._PlainFunction]) + elif die < 0.5: + return self._Variable(name, "this") # Global object. 
+ else: + function = random.choice([self._JSArrayBuffer, self._JSDataView, + self._JSDate, self._JSFunctionProxy, + self._JSGeneratorObject, self._JSMap, + self._JSMapIterator, self._JSRegExp, + self._JSSet, self._JSSetIterator, + self._JSTypedArray, self._JSValue, + self._JSWeakCollection]) + result = function(name, recursion_budget) + self._AddAccessors(name, result, recursion_budget) + self._AddProperties(name, result, recursion_budget) + self._AddElements(name, result, recursion_budget) + return result + + def _JSReceiver(self, name, recursion_budget): + if random.random() < 0.9: return self._JSObject(name, recursion_budget) + return self._JSProxy(name, recursion_budget) + + def _HeapObject(self, name, recursion_budget): + die = random.random() + if die < 0.9: return self._JSReceiver(name, recursion_budget) + elif die < 0.95: return self._Oddball(name, recursion_budget) + else: return self._Name(name, recursion_budget) + + def _Object(self, name, recursion_budget): + if recursion_budget <= 0: + function = random.choice([self._Oddball, self._Number, self._Name, + self._JSValue, self._JSRegExp]) + return function(name, recursion_budget) + if random.random() < 0.2: + return self._Smi(name, recursion_budget) + return self._HeapObject(name, recursion_budget) + + GENERATORS = { + "Boolean": ["true", _Boolean], + "HeapObject": ["new Object()", _HeapObject], + "Int32": ["32", _Int32], + "JSArray": ["new Array()", _JSArray], + "JSArrayBuffer": ["new ArrayBuffer(8)", _JSArrayBuffer], + "JSArrayBufferView": ["new Int32Array(2)", _JSArrayBufferView], + "JSDataView": ["new DataView(new ArrayBuffer(24))", _JSDataView], + "JSDate": ["new Date()", _JSDate], + "JSFunction": ["function() {}", _JSFunction], + "JSFunctionProxy": ["Proxy.createFunction({}, function() {})", + _JSFunctionProxy], + "JSGeneratorObject": ["(function*(){ yield 1; })()", _JSGeneratorObject], + "JSMap": ["new Map()", _JSMap], + "JSMapIterator": ["new Map().entries()", _JSMapIterator], + "JSObject": ["new Object()", _JSObject], + "JSProxy": ["Proxy.create({})", _JSProxy], + "JSReceiver": ["new Object()", _JSReceiver], + "JSRegExp": ["/ab/g", _JSRegExp], + "JSSet": ["new Set()", _JSSet], + "JSSetIterator": ["new Set().values()", _JSSetIterator], + "JSTypedArray": ["new Int32Array(2)", _JSTypedArray], + "JSValue": ["new String('foo')", _JSValue], + "JSWeakCollection": ["new WeakMap()", _JSWeakCollection], + "Name": ["\"name\"", _Name], + "Number": ["1.5", _Number], + "Object": ["new Object()", _Object], + "PropertyDetails": ["513", _PropertyDetails], + "SeqOneByteString": ["\"seq 1-byte\"", _SeqString], + "SeqString": ["\"seqstring\"", _SeqString], + "SeqTwoByteString": ["\"seq \\u2082-byte\"", _SeqTwoByteString], + "Smi": ["1", _Smi], + "StrictMode": ["1", _StrictMode], + "String": ["\"foo\"", _String], + "Symbol": ["Symbol(\"symbol\")", _Symbol], + "Uint32": ["32", _Uint32], + } + + +class ArgParser(object): + def __init__(self, regex, ctor): + self.regex = regex + self.ArgCtor = ctor + + +class Arg(object): + def __init__(self, typename, varname, index): + self.type = typename + self.name = "_%s" % varname + self.index = index + + +class Function(object): + def __init__(self, match): + self.name = match.group(1) + self.argslength = -1 + self.args = {} + self.inline = "" + + handle_arg_parser = ArgParser( + re.compile("^\s*CONVERT_ARG_HANDLE_CHECKED\((\w+), (\w+), (\d+)\)"), + lambda match: Arg(match.group(1), match.group(2), int(match.group(3)))) + + plain_arg_parser = ArgParser( + re.compile("^\s*CONVERT_ARG_CHECKED\((\w+), 
(\w+), (\d+)\)"), + lambda match: Arg(match.group(1), match.group(2), int(match.group(3)))) + + number_handle_arg_parser = ArgParser( + re.compile("^\s*CONVERT_NUMBER_ARG_HANDLE_CHECKED\((\w+), (\d+)\)"), + lambda match: Arg("Number", match.group(1), int(match.group(2)))) + + smi_arg_parser = ArgParser( + re.compile("^\s*CONVERT_SMI_ARG_CHECKED\((\w+), (\d+)\)"), + lambda match: Arg("Smi", match.group(1), int(match.group(2)))) + + double_arg_parser = ArgParser( + re.compile("^\s*CONVERT_DOUBLE_ARG_CHECKED\((\w+), (\d+)\)"), + lambda match: Arg("Number", match.group(1), int(match.group(2)))) + + number_arg_parser = ArgParser( + re.compile( + "^\s*CONVERT_NUMBER_CHECKED\(\w+, (\w+), (\w+), args\[(\d+)\]\)"), + lambda match: Arg(match.group(2), match.group(1), int(match.group(3)))) + + strict_mode_arg_parser = ArgParser( + re.compile("^\s*CONVERT_STRICT_MODE_ARG_CHECKED\((\w+), (\d+)\)"), + lambda match: Arg("StrictMode", match.group(1), int(match.group(2)))) + + boolean_arg_parser = ArgParser( + re.compile("^\s*CONVERT_BOOLEAN_ARG_CHECKED\((\w+), (\d+)\)"), + lambda match: Arg("Boolean", match.group(1), int(match.group(2)))) + + property_details_parser = ArgParser( + re.compile("^\s*CONVERT_PROPERTY_DETAILS_CHECKED\((\w+), (\d+)\)"), + lambda match: Arg("PropertyDetails", match.group(1), int(match.group(2)))) + + arg_parsers = [handle_arg_parser, plain_arg_parser, number_handle_arg_parser, + smi_arg_parser, + double_arg_parser, number_arg_parser, strict_mode_arg_parser, + boolean_arg_parser, property_details_parser] + + def SetArgsLength(self, match): + self.argslength = int(match.group(1)) + + def TryParseArg(self, line): + for parser in Function.arg_parsers: + match = parser.regex.match(line) + if match: + arg = parser.ArgCtor(match) + self.args[arg.index] = arg + return True + return False + + def Filename(self): + return "%s.js" % self.name.lower() + + def __str__(self): + s = [self.name, "("] + argcount = self.argslength + if argcount < 0: + print("WARNING: unknown argslength for function %s" % self.name) + if self.args: + argcount = max([self.args[i].index + 1 for i in self.args]) + else: + argcount = 0 + for i in range(argcount): + if i > 0: s.append(", ") + s.append(self.args[i].type if i in self.args else "<unknown>") + s.append(")") + return "".join(s) + + +class Macro(object): + def __init__(self, match): + self.name = match.group(1) + self.args = [s.strip() for s in match.group(2).split(",")] + self.lines = [] + self.indentation = 0 + self.AddLine(match.group(3)) + + def AddLine(self, line): + if not line: return + if not self.lines: + # This is the first line, detect indentation. + self.indentation = len(line) - len(line.lstrip()) + line = line.rstrip("\\\n ") + if not line: return + assert len(line[:self.indentation].strip()) == 0, \ + ("expected whitespace: '%s', full line: '%s'" % + (line[:self.indentation], line)) + line = line[self.indentation:] + if not line: return + self.lines.append(line + "\n") + + def Finalize(self): + for arg in self.args: + pattern = re.compile(r"(##|\b)%s(##|\b)" % arg) + for i in range(len(self.lines)): + self.lines[i] = re.sub(pattern, "%%(%s)s" % arg, self.lines[i]) + + def FillIn(self, arg_values): + filler = {} + assert len(arg_values) == len(self.args) + for i in range(len(self.args)): + filler[self.args[i]] = arg_values[i] + result = [] + for line in self.lines: + result.append(line % filler) + return result + + +# Parses HEADERFILENAME to find out which runtime functions are "inline". 
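+# Inline runtime functions are called with a leading underscore in generated
+# tests (e.g. %_IsSmi), so the fuzzer must know which names are in that list.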
+def FindInlineRuntimeFunctions(): + inline_functions = [] + with open(HEADERFILENAME, "r") as f: + inline_list = "#define INLINE_FUNCTION_LIST(F) \\\n" + inline_function = re.compile(r"^\s*F\((\w+), \d+, \d+\)\s*\\?") + mode = "SEARCHING" + for line in f: + if mode == "ACTIVE": + match = inline_function.match(line) + if match: + inline_functions.append(match.group(1)) + if not line.endswith("\\\n"): + mode = "SEARCHING" + elif mode == "SEARCHING": + if line == inline_list: + mode = "ACTIVE" + return inline_functions + + +def ReadFileAndExpandMacros(filename): + found_macros = {} + expanded_lines = [] + with open(filename, "r") as f: + found_macro = None + for line in f: + if found_macro is not None: + found_macro.AddLine(line) + if not line.endswith("\\\n"): + found_macro.Finalize() + found_macro = None + continue + + match = MACRO.match(line) + if match: + found_macro = Macro(match) + if found_macro.name in EXPAND_MACROS: + found_macros[found_macro.name] = found_macro + else: + found_macro = None + continue + + match = FIRST_WORD.match(line) + if match: + first_word = match.group(1) + if first_word in found_macros: + MACRO_CALL = re.compile("%s\(([^)]*)\)" % first_word) + match = MACRO_CALL.match(line) + assert match + args = [s.strip() for s in match.group(1).split(",")] + expanded_lines += found_macros[first_word].FillIn(args) + continue + + expanded_lines.append(line) + return expanded_lines + + +# Detects runtime functions by parsing FILENAME. +def FindRuntimeFunctions(): + inline_functions = FindInlineRuntimeFunctions() + functions = [] + expanded_lines = ReadFileAndExpandMacros(FILENAME) + function = None + partial_line = "" + for line in expanded_lines: + # Multi-line definition support, ignoring macros. + if line.startswith("RUNTIME_FUNCTION") and not line.endswith("{\n"): + if line.endswith("\\\n"): continue + partial_line = line.rstrip() + continue + if partial_line: + partial_line += " " + line.strip() + if partial_line.endswith("{"): + line = partial_line + partial_line = "" + else: + continue + + match = FUNCTION.match(line) + if match: + function = Function(match) + if function.name in inline_functions: + function.inline = "_" + continue + if function is None: continue + + match = ARGSLENGTH.match(line) + if match: + function.SetArgsLength(match) + continue + + if function.TryParseArg(line): + continue + + if line == FUNCTIONEND: + if function is not None: + functions.append(function) + function = None + return functions + + +# Hack: This must have the same fields as class Function above, because the +# two are used polymorphically in RunFuzzer(). We could use inheritance... +class Builtin(object): + def __init__(self, match): + self.name = match.group(1) + args = match.group(2) + self.argslength = 0 if args == "" else args.count(",") + 1 + self.inline = "" + self.args = {} + if self.argslength > 0: + args = args.split(",") + for i in range(len(args)): + # a = args[i].strip() # TODO: filter out /* comments */ first. 
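+        # Until comments are filtered out, use an empty name and treat every
+        # builtin argument as a generic Object.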
+ a = "" + self.args[i] = Arg("Object", a, i) + + def __str__(self): + return "%s(%d)" % (self.name, self.argslength) + + +def FindJSBuiltins(): + PATH = "src" + fileslist = [] + for (root, dirs, files) in os.walk(PATH): + for f in files: + if f.endswith(".js"): + fileslist.append(os.path.join(root, f)) + builtins = [] + regexp = re.compile("^function (\w+)\s*\((.*?)\) {") + matches = 0 + for filename in fileslist: + with open(filename, "r") as f: + file_contents = f.read() + file_contents = js2c.ExpandInlineMacros(file_contents) + lines = file_contents.split("\n") + partial_line = "" + for line in lines: + if line.startswith("function") and not '{' in line: + partial_line += line.rstrip() + continue + if partial_line: + partial_line += " " + line.strip() + if '{' in line: + line = partial_line + partial_line = "" + else: + continue + match = regexp.match(line) + if match: + builtins.append(Builtin(match)) + return builtins + + +# Classifies runtime functions. +def ClassifyFunctions(functions): + # Can be fuzzed with a JavaScript testcase. + js_fuzzable_functions = [] + # We have enough information to fuzz these, but they need inputs that + # cannot be created or passed around in JavaScript. + cctest_fuzzable_functions = [] + # This script does not have enough information about these. + unknown_functions = [] + + types = {} + for f in functions: + if f.name in BLACKLISTED: + continue + decision = js_fuzzable_functions + custom = CUSTOM_KNOWN_GOOD_INPUT.get(f.name, None) + if f.argslength < 0: + # Unknown length -> give up unless there's a custom definition. + if custom and custom[-1] is not None: + f.argslength = custom[-1] + assert len(custom) == f.argslength + 1, \ + ("%s: last custom definition must be argslength" % f.name) + else: + decision = unknown_functions + else: + if custom: + # Any custom definitions must match the known argslength. + assert len(custom) == f.argslength + 1, \ + ("%s should have %d custom definitions but has %d" % + (f.name, f.argslength + 1, len(custom))) + for i in range(f.argslength): + if custom and custom[i] is not None: + # All good, there's a custom definition. + pass + elif not i in f.args: + # No custom definition and no parse result -> give up. + decision = unknown_functions + else: + t = f.args[i].type + if t in NON_JS_TYPES: + decision = cctest_fuzzable_functions + else: + assert Generator.IsTypeSupported(t), \ + ("type generator not found for %s, function: %s" % (t, f)) + decision.append(f) + return (js_fuzzable_functions, cctest_fuzzable_functions, unknown_functions) + + +def _GetKnownGoodArgs(function, generator): + custom_input = CUSTOM_KNOWN_GOOD_INPUT.get(function.name, None) + definitions = [] + argslist = [] + for i in range(function.argslength): + if custom_input and custom_input[i] is not None: + name = "arg%d" % i + definitions.append("var %s = %s;" % (name, custom_input[i])) + else: + arg = function.args[i] + name = arg.name + definitions += generator.RandomVariable(name, arg.type, simple=True) + argslist.append(name) + return (definitions, argslist) + + +def _GenerateTestcase(function, definitions, argslist, throws): + s = ["// Copyright 2014 the V8 project authors. 
All rights reserved.", + "// AUTO-GENERATED BY tools/generate-runtime-tests.py, DO NOT MODIFY", + "// Flags: --allow-natives-syntax --harmony --harmony-proxies" + ] + definitions + call = "%%%s%s(%s);" % (function.inline, function.name, ", ".join(argslist)) + if throws: + s.append("try {") + s.append(call); + s.append("} catch(e) {}") + else: + s.append(call) + testcase = "\n".join(s) + return testcase + + +def GenerateJSTestcaseForFunction(function): + gen = Generator() + (definitions, argslist) = _GetKnownGoodArgs(function, gen) + testcase = _GenerateTestcase(function, definitions, argslist, + function.name in THROWS) + path = os.path.join(BASEPATH, function.Filename()) + with open(path, "w") as f: + f.write("%s\n" % testcase) + + +def GenerateTestcases(functions): + shutil.rmtree(BASEPATH) # Re-generate everything. + os.makedirs(BASEPATH) + for f in functions: + GenerateJSTestcaseForFunction(f) + + +def _SaveFileName(save_path, process_id, save_file_index): + return "%s/fuzz_%d_%d.js" % (save_path, process_id, save_file_index) + + +def _GetFuzzableRuntimeFunctions(): + functions = FindRuntimeFunctions() + (js_fuzzable_functions, cctest_fuzzable_functions, unknown_functions) = \ + ClassifyFunctions(functions) + return js_fuzzable_functions + + +FUZZ_TARGET_LISTS = { + "runtime": _GetFuzzableRuntimeFunctions, + "builtins": FindJSBuiltins, +} + + +def RunFuzzer(process_id, options, stop_running): + MAX_SLEEP_TIME = 0.1 + INITIAL_SLEEP_TIME = 0.001 + SLEEP_TIME_FACTOR = 1.25 + base_file_name = "/dev/shm/runtime_fuzz_%d" % process_id + test_file_name = "%s.js" % base_file_name + stderr_file_name = "%s.out" % base_file_name + save_file_index = 0 + while os.path.exists(_SaveFileName(options.save_path, process_id, + save_file_index)): + save_file_index += 1 + + targets = FUZZ_TARGET_LISTS[options.fuzz_target]() + try: + for i in range(options.num_tests): + if stop_running.is_set(): break + function = None + while function is None or function.argslength == 0: + function = random.choice(targets) + args = [] + definitions = [] + gen = Generator() + for i in range(function.argslength): + arg = function.args[i] + argname = "arg%d%s" % (i, arg.name) + args.append(argname) + definitions += gen.RandomVariable(argname, arg.type, simple=False) + testcase = _GenerateTestcase(function, definitions, args, True) + with open(test_file_name, "w") as f: + f.write("%s\n" % testcase) + with open("/dev/null", "w") as devnull: + with open(stderr_file_name, "w") as stderr: + process = subprocess.Popen( + [options.binary, "--allow-natives-syntax", "--harmony", + "--harmony-proxies", "--enable-slow-asserts", test_file_name], + stdout=devnull, stderr=stderr) + end_time = time.time() + options.timeout + timed_out = False + exit_code = None + sleep_time = INITIAL_SLEEP_TIME + while exit_code is None: + if time.time() >= end_time: + # Kill the process and wait for it to exit. 
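+          # SIGTERM the child; timed-out runs are not saved as crashers below.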
+ os.kill(process.pid, signal.SIGTERM) + exit_code = process.wait() + timed_out = True + else: + exit_code = process.poll() + time.sleep(sleep_time) + sleep_time = sleep_time * SLEEP_TIME_FACTOR + if sleep_time > MAX_SLEEP_TIME: + sleep_time = MAX_SLEEP_TIME + if exit_code != 0 and not timed_out: + oom = False + with open(stderr_file_name, "r") as stderr: + for line in stderr: + if line.strip() == "# Allocation failed - process out of memory": + oom = True + break + if oom: continue + save_name = _SaveFileName(options.save_path, process_id, + save_file_index) + shutil.copyfile(test_file_name, save_name) + save_file_index += 1 + except KeyboardInterrupt: + stop_running.set() + finally: + if os.path.exists(test_file_name): + os.remove(test_file_name) + if os.path.exists(stderr_file_name): + os.remove(stderr_file_name) + + +def BuildOptionParser(): + usage = """Usage: %%prog [options] ACTION + +where ACTION can be: + +info Print diagnostic info. +check Check that runtime functions can be parsed as expected, and that + test cases exist. +generate Parse source code for runtime functions, and auto-generate + test cases for them. Warning: this will nuke and re-create + %(path)s. +fuzz Generate fuzz tests, run them, save those that crashed (see options). +""" % {"path": os.path.relpath(BASEPATH)} + + o = optparse.OptionParser(usage=usage) + o.add_option("--binary", default="out/x64.debug/d8", + help="d8 binary used for running fuzz tests (default: %default)") + o.add_option("--fuzz-target", default="runtime", + help="Set of functions targeted by fuzzing. Allowed values: " + "%s (default: %%default)" % ", ".join(FUZZ_TARGET_LISTS)) + o.add_option("-n", "--num-tests", default=1000, type="int", + help="Number of fuzz tests to generate per worker process" + " (default: %default)") + o.add_option("--save-path", default="~/runtime_fuzz_output", + help="Path to directory where failing tests will be stored" + " (default: %default)") + o.add_option("--timeout", default=20, type="int", + help="Timeout for each fuzz test (in seconds, default:" + "%default)") + return o + + +def ProcessOptions(options, args): + options.save_path = os.path.expanduser(options.save_path) + if options.fuzz_target not in FUZZ_TARGET_LISTS: + print("Invalid fuzz target: %s" % options.fuzz_target) + return False + if len(args) != 1 or args[0] == "help": + return False + return True + + +def Main(): + parser = BuildOptionParser() + (options, args) = parser.parse_args() + + if not ProcessOptions(options, args): + parser.print_help() + return 1 + action = args[0] + + functions = FindRuntimeFunctions() + (js_fuzzable_functions, cctest_fuzzable_functions, unknown_functions) = \ + ClassifyFunctions(functions) + builtins = FindJSBuiltins() + + if action == "test": + print("put your temporary debugging code here") + return 0 + + if action == "info": + print("%d functions total; js_fuzzable_functions: %d, " + "cctest_fuzzable_functions: %d, unknown_functions: %d" + % (len(functions), len(js_fuzzable_functions), + len(cctest_fuzzable_functions), len(unknown_functions))) + print("%d JavaScript builtins" % len(builtins)) + print("unknown functions:") + for f in unknown_functions: + print(f) + return 0 + + if action == "check": + errors = 0 + + def CheckCount(actual, expected, description): + if len(actual) != expected: + print("Expected to detect %d %s, but found %d." % ( + expected, description, len(actual))) + print("If this change is intentional, please update the expectations" + " at the top of %s." 
% THIS_SCRIPT) + return 1 + return 0 + + errors += CheckCount(functions, EXPECTED_FUNCTION_COUNT, + "functions in total") + errors += CheckCount(js_fuzzable_functions, EXPECTED_FUZZABLE_COUNT, + "JavaScript-fuzzable functions") + errors += CheckCount(cctest_fuzzable_functions, EXPECTED_CCTEST_COUNT, + "cctest-fuzzable functions") + errors += CheckCount(unknown_functions, EXPECTED_UNKNOWN_COUNT, + "functions with incomplete type information") + errors += CheckCount(builtins, EXPECTED_BUILTINS_COUNT, + "JavaScript builtins") + + def CheckTestcasesExisting(functions): + errors = 0 + for f in functions: + if not os.path.isfile(os.path.join(BASEPATH, f.Filename())): + print("Missing testcase for %s, please run '%s generate'" % + (f.name, THIS_SCRIPT)) + errors += 1 + files = filter(lambda filename: not filename.startswith("."), + os.listdir(BASEPATH)) + if (len(files) != len(functions)): + unexpected_files = set(files) - set([f.Filename() for f in functions]) + for f in unexpected_files: + print("Unexpected testcase: %s" % os.path.join(BASEPATH, f)) + errors += 1 + print("Run '%s generate' to automatically clean these up." + % THIS_SCRIPT) + return errors + + errors += CheckTestcasesExisting(js_fuzzable_functions) + + def CheckNameClashes(runtime_functions, builtins): + errors = 0 + runtime_map = {} + for f in runtime_functions: + runtime_map[f.name] = 1 + for b in builtins: + if b.name in runtime_map: + print("Builtin/Runtime_Function name clash: %s" % b.name) + errors += 1 + return errors + + errors += CheckNameClashes(functions, builtins) + + if errors > 0: + return 1 + print("Generated runtime tests: all good.") + return 0 + + if action == "generate": + GenerateTestcases(js_fuzzable_functions) + return 0 + + if action == "fuzz": + processes = [] + if not os.path.isdir(options.save_path): + os.makedirs(options.save_path) + stop_running = multiprocessing.Event() + for i in range(multiprocessing.cpu_count()): + args = (i, options, stop_running) + p = multiprocessing.Process(target=RunFuzzer, args=args) + p.start() + processes.append(p) + try: + for i in range(len(processes)): + processes[i].join() + except KeyboardInterrupt: + stop_running.set() + for i in range(len(processes)): + processes[i].join() + return 0 + +if __name__ == "__main__": + sys.exit(Main()) diff --git a/deps/v8/tools/grokdump.py b/deps/v8/tools/grokdump.py index 8178b2f0cfe..f6c45fce4ba 100755 --- a/deps/v8/tools/grokdump.py +++ b/deps/v8/tools/grokdump.py @@ -3103,15 +3103,18 @@ def AnalyzeMinidump(options, minidump_name): frame_pointer = reader.ExceptionFP() print "Annotated stack (from exception.esp to bottom):" for slot in xrange(stack_top, stack_bottom, reader.PointerSize()): + ascii_content = [c if c >= '\x20' and c < '\x7f' else '.' 
+ for c in reader.ReadBytes(slot, reader.PointerSize())] maybe_address = reader.ReadUIntPtr(slot) heap_object = heap.FindObject(maybe_address) maybe_symbol = reader.FindSymbol(maybe_address) if slot == frame_pointer: maybe_symbol = "<---- frame pointer" frame_pointer = maybe_address - print "%s: %s %s" % (reader.FormatIntPtr(slot), - reader.FormatIntPtr(maybe_address), - maybe_symbol or "") + print "%s: %s %s %s" % (reader.FormatIntPtr(slot), + reader.FormatIntPtr(maybe_address), + "".join(ascii_content), + maybe_symbol or "") if heap_object: heap_object.Print(Printer()) print diff --git a/deps/v8/tools/gyp/v8.gyp b/deps/v8/tools/gyp/v8.gyp index e6a5bd14ee1..d907725230c 100644 --- a/deps/v8/tools/gyp/v8.gyp +++ b/deps/v8/tools/gyp/v8.gyp @@ -42,20 +42,24 @@ }, { 'toolsets': ['target'], }], - ['v8_use_snapshot=="true"', { + + ['v8_use_snapshot=="true" and v8_use_external_startup_data==0', { # The dependency on v8_base should come from a transitive # dependency however the Android toolchain requires libv8_base.a # to appear before libv8_snapshot.a so it's listed explicitly. - 'dependencies': ['v8_base.<(v8_target_arch)', 'v8_snapshot'], - }, - { + 'dependencies': ['v8_base', 'v8_snapshot'], + }], + ['v8_use_snapshot!="true" and v8_use_external_startup_data==0', { # The dependency on v8_base should come from a transitive # dependency however the Android toolchain requires libv8_base.a # to appear before libv8_snapshot.a so it's listed explicitly. - 'dependencies': [ - 'v8_base.<(v8_target_arch)', - 'v8_nosnapshot.<(v8_target_arch)', - ], + 'dependencies': ['v8_base', 'v8_nosnapshot'], + }], + ['v8_use_external_startup_data==1 and want_separate_host_toolset==0', { + 'dependencies': ['v8_base', 'v8_external_snapshot'], + }], + ['v8_use_external_startup_data==1 and want_separate_host_toolset==1', { + 'dependencies': ['v8_base', 'v8_external_snapshot#host'], }], ['component=="shared_library"', { 'type': '<(component)', @@ -64,6 +68,9 @@ # has some sources to link into the component. 
'../../src/v8dll-main.cc', ], + 'include_dirs': [ + '../..', + ], 'defines': [ 'V8_SHARED', 'BUILDING_V8_SHARED', @@ -112,16 +119,14 @@ ['want_separate_host_toolset==1', { 'toolsets': ['host', 'target'], 'dependencies': [ - 'mksnapshot.<(v8_target_arch)#host', + 'mksnapshot#host', 'js2c#host', - 'generate_trig_table#host', ], }, { 'toolsets': ['target'], 'dependencies': [ - 'mksnapshot.<(v8_target_arch)', + 'mksnapshot', 'js2c', - 'generate_trig_table', ], }], ['component=="shared_library"', { @@ -138,22 +143,22 @@ }], ], 'dependencies': [ - 'v8_base.<(v8_target_arch)', + 'v8_base', ], 'include_dirs+': [ - '../../src', + '../..', ], 'sources': [ '<(SHARED_INTERMEDIATE_DIR)/libraries.cc', '<(SHARED_INTERMEDIATE_DIR)/experimental-libraries.cc', - '<(SHARED_INTERMEDIATE_DIR)/trig-table.cc', '<(INTERMEDIATE_DIR)/snapshot.cc', + '../../src/snapshot-common.cc', ], 'actions': [ { 'action_name': 'run_mksnapshot', 'inputs': [ - '<(PRODUCT_DIR)/<(EXECUTABLE_PREFIX)mksnapshot.<(v8_target_arch)<(EXECUTABLE_SUFFIX)', + '<(PRODUCT_DIR)/<(EXECUTABLE_PREFIX)mksnapshot<(EXECUTABLE_SUFFIX)', ], 'outputs': [ '<(INTERMEDIATE_DIR)/snapshot.cc', @@ -172,33 +177,33 @@ 'action': [ '<@(_inputs)', '<@(mksnapshot_flags)', - '<@(_outputs)' + '<@(INTERMEDIATE_DIR)/snapshot.cc' ], }, ], }, { - 'target_name': 'v8_nosnapshot.<(v8_target_arch)', + 'target_name': 'v8_nosnapshot', 'type': 'static_library', 'dependencies': [ - 'v8_base.<(v8_target_arch)', + 'v8_base', ], 'include_dirs+': [ - '../../src', + '../..', ], 'sources': [ '<(SHARED_INTERMEDIATE_DIR)/libraries.cc', '<(SHARED_INTERMEDIATE_DIR)/experimental-libraries.cc', - '<(SHARED_INTERMEDIATE_DIR)/trig-table.cc', + '../../src/snapshot-common.cc', '../../src/snapshot-empty.cc', ], 'conditions': [ ['want_separate_host_toolset==1', { 'toolsets': ['host', 'target'], - 'dependencies': ['js2c#host', 'generate_trig_table#host'], + 'dependencies': ['js2c#host'], }, { 'toolsets': ['target'], - 'dependencies': ['js2c', 'generate_trig_table'], + 'dependencies': ['js2c'], }], ['component=="shared_library"', { 'defines': [ @@ -208,43 +213,88 @@ }], ] }, - { 'target_name': 'generate_trig_table', - 'type': 'none', + { + 'target_name': 'v8_external_snapshot', + 'type': 'static_library', 'conditions': [ ['want_separate_host_toolset==1', { 'toolsets': ['host'], - }, { + 'dependencies': [ + 'mksnapshot#host', + 'js2c#host', + 'natives_blob#host', + ]}, { 'toolsets': ['target'], + 'dependencies': [ + 'mksnapshot', + 'js2c', + 'natives_blob', + ], }], + ['component=="shared_library"', { + 'defines': [ + 'V8_SHARED', + 'BUILDING_V8_SHARED', + ], + 'direct_dependent_settings': { + 'defines': [ + 'V8_SHARED', + 'USING_V8_SHARED', + ], + }, + }], + ], + 'dependencies': [ + 'v8_base', + ], + 'include_dirs+': [ + '../..', + ], + 'sources': [ + '../../src/natives-external.cc', + '../../src/snapshot-external.cc', ], 'actions': [ { - 'action_name': 'generate', + 'action_name': 'run_mksnapshot (external)', 'inputs': [ - '../../tools/generate-trig-table.py', + '<(PRODUCT_DIR)/<(EXECUTABLE_PREFIX)mksnapshot<(EXECUTABLE_SUFFIX)', ], 'outputs': [ - '<(SHARED_INTERMEDIATE_DIR)/trig-table.cc', + '<(INTERMEDIATE_DIR)/snapshot.cc', + '<(PRODUCT_DIR)/snapshot_blob.bin', ], + 'variables': { + 'mksnapshot_flags': [ + '--log-snapshot-positions', + '--logfile', '<(INTERMEDIATE_DIR)/snapshot.log', + ], + 'conditions': [ + ['v8_random_seed!=0', { + 'mksnapshot_flags': ['--random-seed', '<(v8_random_seed)'], + }], + ], + }, 'action': [ - 'python', - '../../tools/generate-trig-table.py', - '<@(_outputs)', + 
'<@(_inputs)', + '<@(mksnapshot_flags)', + '<@(INTERMEDIATE_DIR)/snapshot.cc', + '--startup_blob', '<(PRODUCT_DIR)/snapshot_blob.bin', ], }, - ] + ], }, { - 'target_name': 'v8_base.<(v8_target_arch)', + 'target_name': 'v8_base', 'type': 'static_library', 'dependencies': [ - 'v8_libbase.<(v8_target_arch)', + 'v8_libbase', ], 'variables': { 'optimize': 'max', }, 'include_dirs+': [ - '../../src', + '../..', ], 'sources': [ ### gcmole(all) ### '../../src/accessors.cc', @@ -263,10 +313,10 @@ '../../src/assembler.h', '../../src/assert-scope.h', '../../src/assert-scope.cc', + '../../src/ast-value-factory.cc', + '../../src/ast-value-factory.h', '../../src/ast.cc', '../../src/ast.h', - '../../src/atomicops.h', - '../../src/atomicops_internals_x86_gcc.cc', '../../src/bignum-dtoa.cc', '../../src/bignum-dtoa.h', '../../src/bignum.cc', @@ -292,6 +342,98 @@ '../../src/codegen.h', '../../src/compilation-cache.cc', '../../src/compilation-cache.h', + '../../src/compiler/ast-graph-builder.cc', + '../../src/compiler/ast-graph-builder.h', + '../../src/compiler/change-lowering.cc', + '../../src/compiler/change-lowering.h', + '../../src/compiler/code-generator-impl.h', + '../../src/compiler/code-generator.cc', + '../../src/compiler/code-generator.h', + '../../src/compiler/common-node-cache.h', + '../../src/compiler/common-operator.h', + '../../src/compiler/control-builders.cc', + '../../src/compiler/control-builders.h', + '../../src/compiler/frame.h', + '../../src/compiler/gap-resolver.cc', + '../../src/compiler/gap-resolver.h', + '../../src/compiler/generic-algorithm-inl.h', + '../../src/compiler/generic-algorithm.h', + '../../src/compiler/generic-graph.h', + '../../src/compiler/generic-node-inl.h', + '../../src/compiler/generic-node.h', + '../../src/compiler/graph-builder.cc', + '../../src/compiler/graph-builder.h', + '../../src/compiler/graph-inl.h', + '../../src/compiler/graph-reducer.cc', + '../../src/compiler/graph-reducer.h', + '../../src/compiler/graph-replay.cc', + '../../src/compiler/graph-replay.h', + '../../src/compiler/graph-visualizer.cc', + '../../src/compiler/graph-visualizer.h', + '../../src/compiler/graph.cc', + '../../src/compiler/graph.h', + '../../src/compiler/instruction-codes.h', + '../../src/compiler/instruction-selector-impl.h', + '../../src/compiler/instruction-selector.cc', + '../../src/compiler/instruction-selector.h', + '../../src/compiler/instruction.cc', + '../../src/compiler/instruction.h', + '../../src/compiler/js-context-specialization.cc', + '../../src/compiler/js-context-specialization.h', + '../../src/compiler/js-generic-lowering.cc', + '../../src/compiler/js-generic-lowering.h', + '../../src/compiler/js-graph.cc', + '../../src/compiler/js-graph.h', + '../../src/compiler/js-operator.h', + '../../src/compiler/js-typed-lowering.cc', + '../../src/compiler/js-typed-lowering.h', + '../../src/compiler/linkage-impl.h', + '../../src/compiler/linkage.cc', + '../../src/compiler/linkage.h', + '../../src/compiler/lowering-builder.cc', + '../../src/compiler/lowering-builder.h', + '../../src/compiler/machine-node-factory.h', + '../../src/compiler/machine-operator-reducer.cc', + '../../src/compiler/machine-operator-reducer.h', + '../../src/compiler/machine-operator.h', + '../../src/compiler/machine-type.h', + '../../src/compiler/node-aux-data-inl.h', + '../../src/compiler/node-aux-data.h', + '../../src/compiler/node-cache.cc', + '../../src/compiler/node-cache.h', + '../../src/compiler/node-matchers.h', + '../../src/compiler/node-properties-inl.h', + '../../src/compiler/node-properties.h', 
+ '../../src/compiler/node.cc', + '../../src/compiler/node.h', + '../../src/compiler/opcodes.h', + '../../src/compiler/operator-properties-inl.h', + '../../src/compiler/operator-properties.h', + '../../src/compiler/operator.h', + '../../src/compiler/phi-reducer.h', + '../../src/compiler/pipeline.cc', + '../../src/compiler/pipeline.h', + '../../src/compiler/raw-machine-assembler.cc', + '../../src/compiler/raw-machine-assembler.h', + '../../src/compiler/register-allocator.cc', + '../../src/compiler/register-allocator.h', + '../../src/compiler/representation-change.h', + '../../src/compiler/schedule.cc', + '../../src/compiler/schedule.h', + '../../src/compiler/scheduler.cc', + '../../src/compiler/scheduler.h', + '../../src/compiler/simplified-lowering.cc', + '../../src/compiler/simplified-lowering.h', + '../../src/compiler/simplified-node-factory.h', + '../../src/compiler/simplified-operator.h', + '../../src/compiler/source-position.cc', + '../../src/compiler/source-position.h', + '../../src/compiler/structured-machine-assembler.cc', + '../../src/compiler/structured-machine-assembler.h', + '../../src/compiler/typer.cc', + '../../src/compiler/typer.h', + '../../src/compiler/verifier.cc', + '../../src/compiler/verifier.h', '../../src/compiler.cc', '../../src/compiler.h', '../../src/contexts.cc', @@ -304,8 +446,6 @@ '../../src/cpu-profiler-inl.h', '../../src/cpu-profiler.cc', '../../src/cpu-profiler.h', - '../../src/cpu.cc', - '../../src/cpu.h', '../../src/data-flow.cc', '../../src/data-flow.h', '../../src/date.cc', @@ -313,8 +453,6 @@ '../../src/dateparser-inl.h', '../../src/dateparser.cc', '../../src/dateparser.h', - '../../src/debug-agent.cc', - '../../src/debug-agent.h', '../../src/debug.cc', '../../src/debug.h', '../../src/deoptimizer.cc', @@ -349,6 +487,9 @@ '../../src/fast-dtoa.cc', '../../src/fast-dtoa.h', '../../src/feedback-slots.h', + '../../src/field-index.cc', + '../../src/field-index.h', + '../../src/field-index-inl.h', '../../src/fixed-dtoa.cc', '../../src/fixed-dtoa.h', '../../src/flag-definitions.h', @@ -370,14 +511,33 @@ '../../src/handles.cc', '../../src/handles.h', '../../src/hashmap.h', - '../../src/heap-inl.h', '../../src/heap-profiler.cc', '../../src/heap-profiler.h', '../../src/heap-snapshot-generator-inl.h', '../../src/heap-snapshot-generator.cc', '../../src/heap-snapshot-generator.h', - '../../src/heap.cc', - '../../src/heap.h', + '../../src/heap/gc-tracer.cc', + '../../src/heap/gc-tracer.h', + '../../src/heap/heap-inl.h', + '../../src/heap/heap.cc', + '../../src/heap/heap.h', + '../../src/heap/incremental-marking-inl.h', + '../../src/heap/incremental-marking.cc', + '../../src/heap/incremental-marking.h', + '../../src/heap/mark-compact-inl.h', + '../../src/heap/mark-compact.cc', + '../../src/heap/mark-compact.h', + '../../src/heap/objects-visiting-inl.h', + '../../src/heap/objects-visiting.cc', + '../../src/heap/objects-visiting.h', + '../../src/heap/spaces-inl.h', + '../../src/heap/spaces.cc', + '../../src/heap/spaces.h', + '../../src/heap/store-buffer-inl.h', + '../../src/heap/store-buffer.cc', + '../../src/heap/store-buffer.h', + '../../src/heap/sweeper-thread.h', + '../../src/heap/sweeper-thread.cc', '../../src/hydrogen-alias-analysis.h', '../../src/hydrogen-bce.cc', '../../src/hydrogen-bce.h', @@ -426,6 +586,8 @@ '../../src/hydrogen-sce.h', '../../src/hydrogen-store-elimination.cc', '../../src/hydrogen-store-elimination.h', + '../../src/hydrogen-types.cc', + '../../src/hydrogen-types.h', '../../src/hydrogen-uint32-analysis.cc', 
'../../src/hydrogen-uint32-analysis.h', '../../src/i18n.cc', @@ -435,8 +597,6 @@ '../../src/ic-inl.h', '../../src/ic.cc', '../../src/ic.h', - '../../src/incremental-marking.cc', - '../../src/incremental-marking.h', '../../src/interface.cc', '../../src/interface.h', '../../src/interpreter-irregexp.cc', @@ -448,14 +608,6 @@ '../../src/jsregexp-inl.h', '../../src/jsregexp.cc', '../../src/jsregexp.h', - '../../src/lazy-instance.h', - # TODO(jochen): move libplatform/ files to their own target. - '../../src/libplatform/default-platform.cc', - '../../src/libplatform/default-platform.h', - '../../src/libplatform/task-queue.cc', - '../../src/libplatform/task-queue.h', - '../../src/libplatform/worker-thread.cc', - '../../src/libplatform/worker-thread.h', '../../src/list-inl.h', '../../src/list.h', '../../src/lithium-allocator-inl.h', @@ -465,6 +617,7 @@ '../../src/lithium-codegen.h', '../../src/lithium.cc', '../../src/lithium.h', + '../../src/lithium-inl.h', '../../src/liveedit.cc', '../../src/liveedit.h', '../../src/log-inl.h', @@ -472,9 +625,10 @@ '../../src/log-utils.h', '../../src/log.cc', '../../src/log.h', + '../../src/lookup-inl.h', + '../../src/lookup.cc', + '../../src/lookup.h', '../../src/macro-assembler.h', - '../../src/mark-compact.cc', - '../../src/mark-compact.h', '../../src/messages.cc', '../../src/messages.h', '../../src/msan.h', @@ -482,28 +636,16 @@ '../../src/objects-debug.cc', '../../src/objects-inl.h', '../../src/objects-printer.cc', - '../../src/objects-visiting.cc', - '../../src/objects-visiting.h', '../../src/objects.cc', '../../src/objects.h', - '../../src/once.cc', - '../../src/once.h', - '../../src/optimizing-compiler-thread.h', '../../src/optimizing-compiler-thread.cc', + '../../src/optimizing-compiler-thread.h', + '../../src/ostreams.cc', + '../../src/ostreams.h', '../../src/parser.cc', '../../src/parser.h', - '../../src/platform/elapsed-timer.h', - '../../src/platform/time.cc', - '../../src/platform/time.h', - '../../src/platform.h', - '../../src/platform/condition-variable.cc', - '../../src/platform/condition-variable.h', - '../../src/platform/mutex.cc', - '../../src/platform/mutex.h', - '../../src/platform/semaphore.cc', - '../../src/platform/semaphore.h', - '../../src/platform/socket.cc', - '../../src/platform/socket.h', + '../../src/perf-jit.cc', + '../../src/perf-jit.h', '../../src/preparse-data-format.h', '../../src/preparse-data.cc', '../../src/preparse-data.h', @@ -517,6 +659,7 @@ '../../src/property-details.h', '../../src/property.cc', '../../src/property.h', + '../../src/prototype.h', '../../src/regexp-macro-assembler-irregexp-inl.h', '../../src/regexp-macro-assembler-irregexp.cc', '../../src/regexp-macro-assembler-irregexp.h', @@ -548,14 +691,9 @@ '../../src/serialize.h', '../../src/small-pointer-list.h', '../../src/smart-pointers.h', - '../../src/snapshot-common.cc', '../../src/snapshot.h', - '../../src/spaces-inl.h', - '../../src/spaces.cc', - '../../src/spaces.h', - '../../src/store-buffer-inl.h', - '../../src/store-buffer.cc', - '../../src/store-buffer.h', + '../../src/snapshot-source-sink.cc', + '../../src/snapshot-source-sink.h', '../../src/string-search.cc', '../../src/string-search.h', '../../src/string-stream.cc', @@ -564,8 +702,6 @@ '../../src/strtod.h', '../../src/stub-cache.cc', '../../src/stub-cache.h', - '../../src/sweeper-thread.h', - '../../src/sweeper-thread.cc', '../../src/token.cc', '../../src/token.h', '../../src/transitions-inl.h', @@ -588,12 +724,8 @@ '../../src/utils-inl.h', '../../src/utils.cc', '../../src/utils.h', - 
'../../src/utils/random-number-generator.cc', - '../../src/utils/random-number-generator.h', '../../src/v8.cc', '../../src/v8.h', - '../../src/v8checks.h', - '../../src/v8globals.h', '../../src/v8memory.h', '../../src/v8threads.cc', '../../src/v8threads.h', @@ -607,6 +739,8 @@ '../../src/zone-inl.h', '../../src/zone.cc', '../../src/zone.h', + '../../third_party/fdlibm/fdlibm.cc', + '../../third_party/fdlibm/fdlibm.h', ], 'conditions': [ ['want_separate_host_toolset==1', { @@ -646,6 +780,10 @@ '../../src/arm/regexp-macro-assembler-arm.h', '../../src/arm/simulator-arm.cc', '../../src/arm/stub-cache-arm.cc', + '../../src/compiler/arm/code-generator-arm.cc', + '../../src/compiler/arm/instruction-codes-arm.h', + '../../src/compiler/arm/instruction-selector-arm.cc', + '../../src/compiler/arm/linkage-arm.cc', ], }], ['v8_target_arch=="arm64"', { @@ -660,11 +798,13 @@ '../../src/arm64/code-stubs-arm64.h', '../../src/arm64/constants-arm64.h', '../../src/arm64/cpu-arm64.cc', - '../../src/arm64/cpu-arm64.h', '../../src/arm64/debug-arm64.cc', '../../src/arm64/decoder-arm64.cc', '../../src/arm64/decoder-arm64.h', '../../src/arm64/decoder-arm64-inl.h', + '../../src/arm64/delayed-masm-arm64.cc', + '../../src/arm64/delayed-masm-arm64.h', + '../../src/arm64/delayed-masm-arm64-inl.h', '../../src/arm64/deoptimizer-arm64.cc', '../../src/arm64/disasm-arm64.cc', '../../src/arm64/disasm-arm64.h', @@ -692,6 +832,10 @@ '../../src/arm64/stub-cache-arm64.cc', '../../src/arm64/utils-arm64.cc', '../../src/arm64/utils-arm64.h', + '../../src/compiler/arm64/code-generator-arm64.cc', + '../../src/compiler/arm64/instruction-codes-arm64.h', + '../../src/compiler/arm64/instruction-selector-arm64.cc', + '../../src/compiler/arm64/linkage-arm64.cc', ], }], ['v8_target_arch=="ia32"', { @@ -723,6 +867,41 @@ '../../src/ia32/regexp-macro-assembler-ia32.cc', '../../src/ia32/regexp-macro-assembler-ia32.h', '../../src/ia32/stub-cache-ia32.cc', + '../../src/compiler/ia32/code-generator-ia32.cc', + '../../src/compiler/ia32/instruction-codes-ia32.h', + '../../src/compiler/ia32/instruction-selector-ia32.cc', + '../../src/compiler/ia32/linkage-ia32.cc', + ], + }], + ['v8_target_arch=="x87"', { + 'sources': [ ### gcmole(arch:x87) ### + '../../src/x87/assembler-x87-inl.h', + '../../src/x87/assembler-x87.cc', + '../../src/x87/assembler-x87.h', + '../../src/x87/builtins-x87.cc', + '../../src/x87/code-stubs-x87.cc', + '../../src/x87/code-stubs-x87.h', + '../../src/x87/codegen-x87.cc', + '../../src/x87/codegen-x87.h', + '../../src/x87/cpu-x87.cc', + '../../src/x87/debug-x87.cc', + '../../src/x87/deoptimizer-x87.cc', + '../../src/x87/disasm-x87.cc', + '../../src/x87/frames-x87.cc', + '../../src/x87/frames-x87.h', + '../../src/x87/full-codegen-x87.cc', + '../../src/x87/ic-x87.cc', + '../../src/x87/lithium-codegen-x87.cc', + '../../src/x87/lithium-codegen-x87.h', + '../../src/x87/lithium-gap-resolver-x87.cc', + '../../src/x87/lithium-gap-resolver-x87.h', + '../../src/x87/lithium-x87.cc', + '../../src/x87/lithium-x87.h', + '../../src/x87/macro-assembler-x87.cc', + '../../src/x87/macro-assembler-x87.h', + '../../src/x87/regexp-macro-assembler-x87.cc', + '../../src/x87/regexp-macro-assembler-x87.h', + '../../src/x87/stub-cache-x87.cc', ], }], ['v8_target_arch=="mips" or v8_target_arch=="mipsel"', { @@ -759,7 +938,41 @@ '../../src/mips/stub-cache-mips.cc', ], }], - ['v8_target_arch=="x64"', { + ['v8_target_arch=="mips64el"', { + 'sources': [ ### gcmole(arch:mips64el) ### + '../../src/mips64/assembler-mips64.cc', + '../../src/mips64/assembler-mips64.h', 
+ '../../src/mips64/assembler-mips64-inl.h', + '../../src/mips64/builtins-mips64.cc', + '../../src/mips64/codegen-mips64.cc', + '../../src/mips64/codegen-mips64.h', + '../../src/mips64/code-stubs-mips64.cc', + '../../src/mips64/code-stubs-mips64.h', + '../../src/mips64/constants-mips64.cc', + '../../src/mips64/constants-mips64.h', + '../../src/mips64/cpu-mips64.cc', + '../../src/mips64/debug-mips64.cc', + '../../src/mips64/deoptimizer-mips64.cc', + '../../src/mips64/disasm-mips64.cc', + '../../src/mips64/frames-mips64.cc', + '../../src/mips64/frames-mips64.h', + '../../src/mips64/full-codegen-mips64.cc', + '../../src/mips64/ic-mips64.cc', + '../../src/mips64/lithium-codegen-mips64.cc', + '../../src/mips64/lithium-codegen-mips64.h', + '../../src/mips64/lithium-gap-resolver-mips64.cc', + '../../src/mips64/lithium-gap-resolver-mips64.h', + '../../src/mips64/lithium-mips64.cc', + '../../src/mips64/lithium-mips64.h', + '../../src/mips64/macro-assembler-mips64.cc', + '../../src/mips64/macro-assembler-mips64.h', + '../../src/mips64/regexp-macro-assembler-mips64.cc', + '../../src/mips64/regexp-macro-assembler-mips64.h', + '../../src/mips64/simulator-mips64.cc', + '../../src/mips64/stub-cache-mips64.cc', + ], + }], + ['v8_target_arch=="x64" or v8_target_arch=="x32"', { 'sources': [ ### gcmole(arch:x64) ### '../../src/x64/assembler-x64-inl.h', '../../src/x64/assembler-x64.cc', @@ -788,6 +1001,10 @@ '../../src/x64/regexp-macro-assembler-x64.cc', '../../src/x64/regexp-macro-assembler-x64.h', '../../src/x64/stub-cache-x64.cc', + '../../src/compiler/x64/code-generator-x64.cc', + '../../src/compiler/x64/instruction-codes-x64.h', + '../../src/compiler/x64/instruction-selector-x64.cc', + '../../src/compiler/x64/linkage-x64.cc', ], }], ['OS=="linux"', { @@ -799,33 +1016,133 @@ ] }], ], + }, + } + ], + ['OS=="win"', { + 'variables': { + 'gyp_generators': '<!(echo $GYP_GENERATORS)', + }, + 'msvs_disabled_warnings': [4351, 4355, 4800], + }], + ['component=="shared_library"', { + 'defines': [ + 'BUILDING_V8_SHARED', + 'V8_SHARED', + ], + }], + ['v8_postmortem_support=="true"', { + 'sources': [ + '<(SHARED_INTERMEDIATE_DIR)/debug-support.cc', + ] + }], + ['v8_enable_i18n_support==1', { + 'dependencies': [ + '<(icu_gyp_path):icui18n', + '<(icu_gyp_path):icuuc', + ] + }, { # v8_enable_i18n_support==0 + 'sources!': [ + '../../src/i18n.cc', + '../../src/i18n.h', + ], + }], + ['OS=="win" and v8_enable_i18n_support==1', { + 'dependencies': [ + '<(icu_gyp_path):icudata', + ], + }], + ['icu_use_data_file_flag==1', { + 'defines': ['ICU_UTIL_DATA_IMPL=ICU_UTIL_DATA_FILE'], + }, { # else icu_use_data_file_flag !=1 + 'conditions': [ + ['OS=="win"', { + 'defines': ['ICU_UTIL_DATA_IMPL=ICU_UTIL_DATA_SHARED'], + }, { + 'defines': ['ICU_UTIL_DATA_IMPL=ICU_UTIL_DATA_STATIC'], + }], + ], + }], + ], + }, + { + 'target_name': 'v8_libbase', + 'type': 'static_library', + 'variables': { + 'optimize': 'max', + }, + 'include_dirs+': [ + '../..', + ], + 'sources': [ + '../../src/base/atomicops.h', + '../../src/base/atomicops_internals_arm64_gcc.h', + '../../src/base/atomicops_internals_arm_gcc.h', + '../../src/base/atomicops_internals_atomicword_compat.h', + '../../src/base/atomicops_internals_mac.h', + '../../src/base/atomicops_internals_mips_gcc.h', + '../../src/base/atomicops_internals_tsan.h', + '../../src/base/atomicops_internals_x86_gcc.cc', + '../../src/base/atomicops_internals_x86_gcc.h', + '../../src/base/atomicops_internals_x86_msvc.h', + '../../src/base/build_config.h', + '../../src/base/cpu.cc', + '../../src/base/cpu.h', + 
'../../src/base/lazy-instance.h', + '../../src/base/logging.cc', + '../../src/base/logging.h', + '../../src/base/macros.h', + '../../src/base/once.cc', + '../../src/base/once.h', + '../../src/base/platform/elapsed-timer.h', + '../../src/base/platform/time.cc', + '../../src/base/platform/time.h', + '../../src/base/platform/condition-variable.cc', + '../../src/base/platform/condition-variable.h', + '../../src/base/platform/mutex.cc', + '../../src/base/platform/mutex.h', + '../../src/base/platform/platform.h', + '../../src/base/platform/semaphore.cc', + '../../src/base/platform/semaphore.h', + '../../src/base/safe_conversions.h', + '../../src/base/safe_conversions_impl.h', + '../../src/base/safe_math.h', + '../../src/base/safe_math_impl.h', + '../../src/base/utils/random-number-generator.cc', + '../../src/base/utils/random-number-generator.h', + ], + 'conditions': [ + ['want_separate_host_toolset==1', { + 'toolsets': ['host', 'target'], + }, { + 'toolsets': ['target'], + }], + ['OS=="linux"', { + 'link_settings': { 'libraries': [ '-lrt' ] }, - 'sources': [ ### gcmole(os:linux) ### - '../../src/platform-linux.cc', - '../../src/platform-posix.cc' + 'sources': [ + '../../src/base/platform/platform-linux.cc', + '../../src/base/platform/platform-posix.cc' ], } ], ['OS=="android"', { - 'defines': [ - 'CAN_USE_VFP_INSTRUCTIONS', - ], 'sources': [ - '../../src/platform-posix.cc' + '../../src/base/platform/platform-posix.cc' ], 'conditions': [ ['host_os=="mac"', { 'target_conditions': [ ['_toolset=="host"', { 'sources': [ - '../../src/platform-macos.cc' + '../../src/base/platform/platform-macos.cc' ] }, { 'sources': [ - '../../src/platform-linux.cc' + '../../src/base/platform/platform-linux.cc' ] }], ], @@ -853,7 +1170,7 @@ }], ], 'sources': [ - '../../src/platform-linux.cc' + '../../src/base/platform/platform-linux.cc' ] }], ], @@ -869,28 +1186,29 @@ }], ['_toolset=="target"', { 'libraries': [ - '-lbacktrace', '-lsocket' + '-lbacktrace' ], }], ], }, 'sources': [ - '../../src/platform-posix.cc', + '../../src/base/platform/platform-posix.cc', + '../../src/base/qnx-math.h', ], 'target_conditions': [ ['_toolset=="host" and host_os=="linux"', { 'sources': [ - '../../src/platform-linux.cc' + '../../src/base/platform/platform-linux.cc' ], }], ['_toolset=="host" and host_os=="mac"', { 'sources': [ - '../../src/platform-macos.cc' + '../../src/base/platform/platform-macos.cc' ], }], ['_toolset=="target"', { 'sources': [ - '../../src/platform-qnx.cc' + '../../src/base/platform/platform-qnx.cc' ], }], ], @@ -902,8 +1220,8 @@ '-L/usr/local/lib -lexecinfo', ]}, 'sources': [ - '../../src/platform-freebsd.cc', - '../../src/platform-posix.cc' + '../../src/base/platform/platform-freebsd.cc', + '../../src/base/platform/platform-posix.cc' ], } ], @@ -913,8 +1231,8 @@ '-L/usr/local/lib -lexecinfo', ]}, 'sources': [ - '../../src/platform-openbsd.cc', - '../../src/platform-posix.cc' + '../../src/base/platform/platform-openbsd.cc', + '../../src/base/platform/platform-posix.cc' ], } ], @@ -924,26 +1242,26 @@ '-L/usr/pkg/lib -Wl,-R/usr/pkg/lib -lexecinfo', ]}, 'sources': [ - '../../src/platform-openbsd.cc', - '../../src/platform-posix.cc' + '../../src/base/platform/platform-openbsd.cc', + '../../src/base/platform/platform-posix.cc' ], } ], ['OS=="solaris"', { 'link_settings': { 'libraries': [ - '-lsocket -lnsl', + '-lnsl', ]}, 'sources': [ - '../../src/platform-solaris.cc', - '../../src/platform-posix.cc' + '../../src/base/platform/platform-solaris.cc', + '../../src/base/platform/platform-posix.cc' ], } ], ['OS=="mac"', { 
'sources': [ - '../../src/platform-macos.cc', - '../../src/platform-posix.cc' + '../../src/base/platform/platform-macos.cc', + '../../src/base/platform/platform-posix.cc' ]}, ], ['OS=="win"', { @@ -961,14 +1279,15 @@ 'conditions': [ ['build_env=="Cygwin"', { 'sources': [ - '../../src/platform-cygwin.cc', - '../../src/platform-posix.cc' + '../../src/base/platform/platform-cygwin.cc', + '../../src/base/platform/platform-posix.cc' ], }, { 'sources': [ - '../../src/platform-win32.cc', - '../../src/win32-math.cc', - '../../src/win32-math.h' + '../../src/base/platform/platform-win32.cc', + '../../src/base/win32-headers.h', + '../../src/base/win32-math.cc', + '../../src/base/win32-math.h' ], }], ], @@ -977,9 +1296,10 @@ }, }, { 'sources': [ - '../../src/platform-win32.cc', - '../../src/win32-math.cc', - '../../src/win32-math.h' + '../../src/base/platform/platform-win32.cc', + '../../src/base/win32-headers.h', + '../../src/base/win32-math.cc', + '../../src/base/win32-math.h' ], 'msvs_disabled_warnings': [4351, 4355, 4800], 'link_settings': { @@ -988,58 +1308,28 @@ }], ], }], - ['component=="shared_library"', { - 'defines': [ - 'BUILDING_V8_SHARED', - 'V8_SHARED', - ], - }], - ['v8_postmortem_support=="true"', { - 'sources': [ - '<(SHARED_INTERMEDIATE_DIR)/debug-support.cc', - ] - }], - ['v8_enable_i18n_support==1', { - 'dependencies': [ - '<(icu_gyp_path):icui18n', - '<(icu_gyp_path):icuuc', - ] - }, { # v8_enable_i18n_support==0 - 'sources!': [ - '../../src/i18n.cc', - '../../src/i18n.h', - ], - }], - ['OS=="win" and v8_enable_i18n_support==1', { - 'dependencies': [ - '<(icu_gyp_path):icudata', - ], - }], - ['icu_use_data_file_flag==1', { - 'defines': ['ICU_UTIL_DATA_IMPL=ICU_UTIL_DATA_FILE'], - }, { # else icu_use_data_file_flag !=1 - 'conditions': [ - ['OS=="win"', { - 'defines': ['ICU_UTIL_DATA_IMPL=ICU_UTIL_DATA_SHARED'], - }, { - 'defines': ['ICU_UTIL_DATA_IMPL=ICU_UTIL_DATA_STATIC'], - }], - ], - }], ], }, { - 'target_name': 'v8_libbase.<(v8_target_arch)', - # TODO(jochen): Should be a static library once it has sources in it. 
- 'type': 'none', + 'target_name': 'v8_libplatform', + 'type': 'static_library', 'variables': { 'optimize': 'max', }, + 'dependencies': [ + 'v8_libbase', + ], 'include_dirs+': [ - '../../src', + '../..', ], 'sources': [ - '../../src/base/macros.h', + '../../include/libplatform/libplatform.h', + '../../src/libplatform/default-platform.cc', + '../../src/libplatform/default-platform.h', + '../../src/libplatform/task-queue.cc', + '../../src/libplatform/task-queue.h', + '../../src/libplatform/worker-thread.cc', + '../../src/libplatform/worker-thread.h', ], 'conditions': [ ['want_separate_host_toolset==1', { @@ -1047,14 +1337,34 @@ }, { 'toolsets': ['target'], }], - ['component=="shared_library"', { - 'defines': [ - 'BUILDING_V8_SHARED', - 'V8_SHARED', - ], - }], ], }, + { + 'target_name': 'natives_blob', + 'type': 'none', + 'conditions': [ + [ 'v8_use_external_startup_data==1', { + 'dependencies': ['js2c'], + 'actions': [{ + 'action_name': 'concatenate_natives_blob', + 'inputs': [ + '../../tools/concatenate-files.py', + '<(SHARED_INTERMEDIATE_DIR)/libraries.bin', + '<(SHARED_INTERMEDIATE_DIR)/libraries-experimental.bin', + ], + 'outputs': [ + '<(PRODUCT_DIR)/natives_blob.bin', + ], + 'action': ['python', '<@(_inputs)', '<@(_outputs)'], + }], + }], + ['want_separate_host_toolset==1', { + 'toolsets': ['host'], + }, { + 'toolsets': ['target'], + }], + ] + }, { 'target_name': 'js2c', 'type': 'none', @@ -1080,9 +1390,11 @@ 'library_files': [ '../../src/runtime.js', '../../src/v8natives.js', + '../../src/symbol.js', '../../src/array.js', '../../src/string.js', '../../src/uri.js', + '../../third_party/fdlibm/fdlibm.js', '../../src/math.js', '../../src/messages.js', '../../src/apinatives.js', @@ -1097,19 +1409,21 @@ '../../src/weak_collection.js', '../../src/promise.js', '../../src/object-observe.js', + '../../src/collection.js', + '../../src/collection-iterator.js', '../../src/macros.py', + '../../src/array-iterator.js', + '../../src/string-iterator.js' ], 'experimental_library_files': [ '../../src/macros.py', - '../../src/symbol.js', '../../src/proxy.js', - '../../src/collection.js', '../../src/generator.js', - '../../src/array-iterator.js', '../../src/harmony-string.js', '../../src/harmony-array.js', - '../../src/harmony-math.js' ], + 'libraries_bin_file': '<(SHARED_INTERMEDIATE_DIR)/libraries.bin', + 'libraries_experimental_bin_file': '<(SHARED_INTERMEDIATE_DIR)/libraries-experimental.bin', }, 'actions': [ { @@ -1125,12 +1439,20 @@ 'action': [ 'python', '../../tools/js2c.py', - '<@(_outputs)', + '<(SHARED_INTERMEDIATE_DIR)/libraries.cc', 'CORE', '<(v8_compress_startup_data)', '<@(library_files)', '<@(i18n_library_files)', ], + 'conditions': [ + [ 'v8_use_external_startup_data==1', { + 'outputs': ['<@(libraries_bin_file)'], + 'action': [ + '--startup_blob', '<@(libraries_bin_file)', + ], + }], + ], }, { 'action_name': 'js2c_experimental', @@ -1144,11 +1466,19 @@ 'action': [ 'python', '../../tools/js2c.py', - '<@(_outputs)', + '<(SHARED_INTERMEDIATE_DIR)/experimental-libraries.cc', 'EXPERIMENTAL', '<(v8_compress_startup_data)', '<@(experimental_library_files)' ], + 'conditions': [ + [ 'v8_use_external_startup_data==1', { + 'outputs': ['<@(libraries_experimental_bin_file)'], + 'action': [ + '--startup_blob', '<@(libraries_experimental_bin_file)' + ], + }], + ], }, ], }, @@ -1181,14 +1511,11 @@ ] }, { - 'target_name': 'mksnapshot.<(v8_target_arch)', + 'target_name': 'mksnapshot', 'type': 'executable', - 'dependencies': [ - 'v8_base.<(v8_target_arch)', - 'v8_nosnapshot.<(v8_target_arch)', - ], + 
'dependencies': ['v8_base', 'v8_nosnapshot', 'v8_libplatform'], 'include_dirs+': [ - '../../src', + '../..', ], 'sources': [ '../../src/mksnapshot.cc', diff --git a/deps/v8/tools/js2c.py b/deps/v8/tools/js2c.py index 17182109207..77485f6e22f 100755 --- a/deps/v8/tools/js2c.py +++ b/deps/v8/tools/js2c.py @@ -218,6 +218,27 @@ def non_expander(s): lines = ExpandMacroDefinition(lines, pos, name_pattern, macro, non_expander) +INLINE_CONSTANT_PATTERN = re.compile(r'const\s+([a-zA-Z0-9_]+)\s*=\s*([^;\n]+)[;\n]') + +def ExpandInlineConstants(lines): + pos = 0 + while True: + const_match = INLINE_CONSTANT_PATTERN.search(lines, pos) + if const_match is None: + # no more constants + return lines + name = const_match.group(1) + replacement = const_match.group(2) + name_pattern = re.compile("\\b%s\\b" % name) + + # remove constant definition and replace + lines = (lines[:const_match.start()] + + re.sub(name_pattern, replacement, lines[const_match.end():])) + + # advance position to where the constant definition was + pos = const_match.start() + + HEADER_TEMPLATE = """\ // Copyright 2011 Google Inc. All Rights Reserved. @@ -225,9 +246,9 @@ def non_expander(s): // want to make changes to this file you should either change the // javascript source files or the GYP script. -#include "v8.h" -#include "natives.h" -#include "utils.h" +#include "src/v8.h" +#include "src/natives.h" +#include "src/utils.h" namespace v8 { namespace internal { @@ -276,7 +297,7 @@ def non_expander(s): template <> void NativesCollection<%(type)s>::SetRawScriptsSource(Vector<const char> raw_source) { - ASSERT(%(raw_total_length)i == raw_source.length()); + DCHECK(%(raw_total_length)i == raw_source.length()); raw_sources = raw_source.start(); } @@ -333,6 +354,7 @@ def BuildFilterChain(macro_filename): filter_chain.extend([ RemoveCommentsAndTrailingWhitespace, ExpandInlineMacros, + ExpandInlineConstants, Validate, jsmin.JavaScriptMinifier().JSMinify ]) @@ -397,7 +419,7 @@ def PrepareSources(source_files): return result -def BuildMetadata(sources, source_bytes, native_type, omit): +def BuildMetadata(sources, source_bytes, native_type): """Build the meta data required to generate a libraries file. Args: @@ -405,7 +427,6 @@ def BuildMetadata(sources, source_bytes, native_type, omit): source_bytes: A list of source bytes. (The concatenation of all sources; might be compressed.) native_type: The parameter for the NativesCollection template. - omit: bool, whether we should omit the sources in the output. Returns: A dictionary for use with HEADER_TEMPLATE. @@ -438,7 +459,7 @@ def BuildMetadata(sources, source_bytes, native_type, omit): assert offset == len(raw_sources) # If we have the raw sources we can declare them accordingly. - have_raw_sources = source_bytes == raw_sources and not omit + have_raw_sources = source_bytes == raw_sources raw_sources_declaration = (RAW_SOURCES_DECLARATION if have_raw_sources else RAW_SOURCES_COMPRESSION_DECLARATION) @@ -446,7 +467,6 @@ def BuildMetadata(sources, source_bytes, native_type, omit): "builtin_count": len(sources.modules), "debugger_count": sum(sources.is_debugger_id), "sources_declaration": SOURCES_DECLARATION % ToCArray(source_bytes), - "sources_data": ToCArray(source_bytes) if not omit else "", "raw_sources_declaration": raw_sources_declaration, "raw_total_length": sum(map(len, sources.modules)), "total_length": total_length, @@ -477,10 +497,51 @@ def CompressMaybe(sources, compression_type): raise Error("Unknown compression type %s."
% compression_type) -def JS2C(source, target, native_type, compression_type, raw_file, omit): +def PutInt(blob_file, value): + assert(value >= 0 and value < (1 << 20)) + size = 1 if (value < 1 << 6) else (2 if (value < 1 << 14) else 3) + value_with_length = (value << 2) | size + + byte_sequence = bytearray() + for i in xrange(size): + byte_sequence.append(value_with_length & 255) + value_with_length >>= 8; + blob_file.write(byte_sequence) + + +def PutStr(blob_file, value): + PutInt(blob_file, len(value)); + blob_file.write(value); + + +def WriteStartupBlob(sources, startup_blob): + """Write a startup blob, as expected by V8 Initialize ... + TODO(vogelheim): Add proper method name. + + Args: + sources: A Sources instance with the prepared sources. + startup_blob_file: Name of file to write the blob to. + """ + output = open(startup_blob, "wb") + + debug_sources = sum(sources.is_debugger_id); + PutInt(output, debug_sources) + for i in xrange(debug_sources): + PutStr(output, sources.names[i]); + PutStr(output, sources.modules[i]); + + PutInt(output, len(sources.names) - debug_sources) + for i in xrange(debug_sources, len(sources.names)): + PutStr(output, sources.names[i]); + PutStr(output, sources.modules[i]); + + output.close() + + +def JS2C(source, target, native_type, compression_type, raw_file, startup_blob): sources = PrepareSources(source) sources_bytes = CompressMaybe(sources, compression_type) - metadata = BuildMetadata(sources, sources_bytes, native_type, omit) + metadata = BuildMetadata(sources, sources_bytes, native_type) # Optionally emit raw file. if raw_file: @@ -488,6 +549,9 @@ def JS2C(source, target, native_type, compression_type, raw_file, omit): output.write(sources_bytes) output.close() + if startup_blob: + WriteStartupBlob(sources, startup_blob); + # Emit resulting source file. output = open(target, "w") output.write(HEADER_TEMPLATE % metadata) @@ -497,9 +561,9 @@ def JS2C(source, target, native_type, compression_type, raw_file, omit): def main(): parser = optparse.OptionParser() parser.add_option("--raw", action="store", - help="file to write the processed sources array to.") - parser.add_option("--omit", dest="omit", action="store_true", - help="Omit the raw sources from the generated code.") + help="file to write the processed sources array to.") + parser.add_option("--startup_blob", action="store", + help="file to write the startup blob to.") parser.set_usage("""js2c out.cc type compression sources.js ... out.cc: C code to be generated. type: type parameter for NativesCollection template. 
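Aside: the PutInt/PutStr helpers added above implement a small variable-length integer encoding for the startup blob: the value is shifted left by two bits, the low two bits of the first byte carry the byte count (1 byte below 2^6, 2 below 2^14, 3 otherwise), and the bytes are written little-endian. A minimal standalone sketch of that scheme; the get_int decoder is hypothetical (not part of this patch), added only to check the encoding round-trips:

    def put_int(value):
        # Mirrors PutInt above: low two bits of the first byte hold the size.
        assert 0 <= value < (1 << 20)
        size = 1 if value < (1 << 6) else (2 if value < (1 << 14) else 3)
        v = (value << 2) | size
        out = bytearray()
        for _ in range(size):       # little-endian byte order
            out.append(v & 255)
            v >>= 8
        return out

    def get_int(data):
        # Hypothetical decoder, for illustration only.
        size = data[0] & 3
        v = 0
        for i in reversed(range(size)):
            v = (v << 8) | data[i]
        return v >> 2

    for n in (0, 63, 64, 16383, 16384, (1 << 20) - 1):
        assert get_int(put_int(n)) == n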
@@ -507,7 +571,7 @@ def main(): sources.js: JS internal sources or macros.py.""") (options, args) = parser.parse_args() - JS2C(args[3:], args[0], args[1], args[2], options.raw, options.omit) + JS2C(args[3:], args[0], args[1], args[2], options.raw, options.startup_blob) if __name__ == "__main__": diff --git a/deps/v8/tools/lexer-shell.cc b/deps/v8/tools/lexer-shell.cc index 273cdd9f4d7..cbd3524cb33 100644 --- a/deps/v8/tools/lexer-shell.cc +++ b/deps/v8/tools/lexer-shell.cc @@ -31,17 +31,18 @@ #include <stdlib.h> #include <string> #include <vector> -#include "v8.h" +#include "src/v8.h" -#include "api.h" -#include "messages.h" -#include "platform.h" -#include "runtime.h" -#include "scanner-character-streams.h" -#include "scopeinfo.h" -#include "shell-utils.h" -#include "string-stream.h" -#include "scanner.h" +#include "include/libplatform/libplatform.h" +#include "src/api.h" +#include "src/base/platform/platform.h" +#include "src/messages.h" +#include "src/runtime.h" +#include "src/scanner-character-streams.h" +#include "src/scopeinfo.h" +#include "tools/shell-utils.h" +#include "src/string-stream.h" +#include "src/scanner.h" using namespace v8::internal; @@ -52,7 +53,7 @@ class BaselineScanner { BaselineScanner(const char* fname, Isolate* isolate, Encoding encoding, - ElapsedTimer* timer, + v8::base::ElapsedTimer* timer, int repeat) : stream_(NULL) { int length = 0; @@ -127,13 +128,11 @@ struct TokenWithLocation { }; -TimeDelta RunBaselineScanner(const char* fname, - Isolate* isolate, - Encoding encoding, - bool dump_tokens, - std::vector<TokenWithLocation>* tokens, - int repeat) { - ElapsedTimer timer; +v8::base::TimeDelta RunBaselineScanner(const char* fname, Isolate* isolate, + Encoding encoding, bool dump_tokens, + std::vector<TokenWithLocation>* tokens, + int repeat) { + v8::base::ElapsedTimer timer; BaselineScanner scanner(fname, isolate, encoding, &timer, repeat); Token::Value token; int beg, end; @@ -158,7 +157,7 @@ void PrintTokens(const char* name, } -TimeDelta ProcessFile( +v8::base::TimeDelta ProcessFile( const char* fname, Encoding encoding, Isolate* isolate, @@ -169,7 +168,7 @@ TimeDelta ProcessFile( } HandleScope handle_scope(isolate); std::vector<TokenWithLocation> baseline_tokens; - TimeDelta baseline_time; + v8::base::TimeDelta baseline_time; baseline_time = RunBaselineScanner( fname, isolate, encoding, print_tokens, &baseline_tokens, repeat); @@ -182,6 +181,8 @@ TimeDelta ProcessFile( int main(int argc, char* argv[]) { v8::V8::InitializeICU(); + v8::Platform* platform = v8::platform::CreateDefaultPlatform(); + v8::V8::InitializePlatform(platform); v8::V8::SetFlagsFromCommandLine(&argc, argv, true); Encoding encoding = LATIN1; bool print_tokens = false; @@ -206,19 +207,20 @@ int main(int argc, char* argv[]) { fnames.push_back(std::string(argv[i])); } } - v8::Isolate* isolate = v8::Isolate::GetCurrent(); + v8::Isolate* isolate = v8::Isolate::New(); { + v8::Isolate::Scope isolate_scope(isolate); v8::HandleScope handle_scope(isolate); v8::Handle<v8::ObjectTemplate> global = v8::ObjectTemplate::New(isolate); v8::Local<v8::Context> context = v8::Context::New(isolate, NULL, global); - ASSERT(!context.IsEmpty()); + DCHECK(!context.IsEmpty()); { v8::Context::Scope scope(context); - Isolate* isolate = Isolate::Current(); double baseline_total = 0; for (size_t i = 0; i < fnames.size(); i++) { - TimeDelta time; - time = ProcessFile(fnames[i].c_str(), encoding, isolate, print_tokens, + v8::base::TimeDelta time; + time = ProcessFile(fnames[i].c_str(), encoding, + 
reinterpret_cast<Isolate*>(isolate), print_tokens, repeat); baseline_total += time.InMillisecondsF(); } @@ -227,5 +229,7 @@ int main(int argc, char* argv[]) { } } v8::V8::Dispose(); + v8::V8::ShutdownPlatform(); + delete platform; return 0; } diff --git a/deps/v8/tools/lexer-shell.gyp b/deps/v8/tools/lexer-shell.gyp index 623a503a0a5..836ea9762af 100644 --- a/deps/v8/tools/lexer-shell.gyp +++ b/deps/v8/tools/lexer-shell.gyp @@ -37,6 +37,7 @@ 'type': 'executable', 'dependencies': [ '../tools/gyp/v8.gyp:v8', + '../tools/gyp/v8.gyp:v8_libplatform', ], 'conditions': [ ['v8_enable_i18n_support==1', { @@ -47,7 +48,7 @@ }], ], 'include_dirs+': [ - '../src', + '..', ], 'sources': [ 'lexer-shell.cc', @@ -59,6 +60,7 @@ 'type': 'executable', 'dependencies': [ '../tools/gyp/v8.gyp:v8', + '../tools/gyp/v8.gyp:v8_libplatform', ], 'conditions': [ ['v8_enable_i18n_support==1', { @@ -69,7 +71,7 @@ }], ], 'include_dirs+': [ - '../src', + '..', ], 'sources': [ 'parser-shell.cc', diff --git a/deps/v8/tools/ll_prof.py b/deps/v8/tools/ll_prof.py index 216929d1e20..409b3969177 100755 --- a/deps/v8/tools/ll_prof.py +++ b/deps/v8/tools/ll_prof.py @@ -351,7 +351,8 @@ class LogReader(object): "ia32": ctypes.c_uint32, "arm": ctypes.c_uint32, "mips": ctypes.c_uint32, - "x64": ctypes.c_uint64 + "x64": ctypes.c_uint64, + "arm64": ctypes.c_uint64 } _CODE_CREATE_TAG = "C" diff --git a/deps/v8/tools/parser-shell.cc b/deps/v8/tools/parser-shell.cc index 2d95918a33d..a774449876d 100644 --- a/deps/v8/tools/parser-shell.cc +++ b/deps/v8/tools/parser-shell.cc @@ -31,20 +31,33 @@ #include <stdlib.h> #include <string> #include <vector> -#include "v8.h" +#include "src/v8.h" -#include "api.h" -#include "compiler.h" -#include "scanner-character-streams.h" -#include "shell-utils.h" -#include "parser.h" -#include "preparse-data-format.h" -#include "preparse-data.h" -#include "preparser.h" +#include "include/libplatform/libplatform.h" +#include "src/api.h" +#include "src/compiler.h" +#include "src/scanner-character-streams.h" +#include "tools/shell-utils.h" +#include "src/parser.h" +#include "src/preparse-data-format.h" +#include "src/preparse-data.h" +#include "src/preparser.h" using namespace v8::internal; -std::pair<TimeDelta, TimeDelta> RunBaselineParser( +class StringResource8 : public v8::String::ExternalAsciiStringResource { + public: + StringResource8(const char* data, int length) + : data_(data), length_(length) { } + virtual size_t length() const { return length_; } + virtual const char* data() const { return data_; } + + private: + const char* data_; + int length_; +}; + +std::pair<v8::base::TimeDelta, v8::base::TimeDelta> RunBaselineParser( const char* fname, Encoding encoding, int repeat, v8::Isolate* isolate, v8::Handle<v8::Context> context) { int length = 0; @@ -63,11 +76,13 @@ std::pair<TimeDelta, TimeDelta> RunBaselineParser( break; } case LATIN1: { - source_handle = v8::String::NewFromOneByte(isolate, source); + StringResource8* string_resource = + new StringResource8(reinterpret_cast<const char*>(source), length); + source_handle = v8::String::NewExternal(isolate, string_resource); break; } } - TimeDelta parse_time1, parse_time2; + v8::base::TimeDelta parse_time1, parse_time2; Handle<Script> script = Isolate::Current()->factory()->NewScript( v8::Utils::OpenHandle(*source_handle)); i::ScriptData* cached_data_impl = NULL; @@ -75,30 +90,32 @@ std::pair<TimeDelta, TimeDelta> RunBaselineParser( { CompilationInfoWithZone info(script); info.MarkAsGlobal(); - info.SetCachedData(&cached_data_impl, i::PRODUCE_CACHED_DATA); - 
ElapsedTimer timer; + info.SetCachedData(&cached_data_impl, + v8::ScriptCompiler::kProduceParserCache); + v8::base::ElapsedTimer timer; timer.Start(); // Allow lazy parsing; otherwise we won't produce cached data. bool success = Parser::Parse(&info, true); parse_time1 = timer.Elapsed(); if (!success) { fprintf(stderr, "Parsing failed\n"); - return std::make_pair(TimeDelta(), TimeDelta()); + return std::make_pair(v8::base::TimeDelta(), v8::base::TimeDelta()); } } // Second round of parsing (consume cached data). { CompilationInfoWithZone info(script); info.MarkAsGlobal(); - info.SetCachedData(&cached_data_impl, i::CONSUME_CACHED_DATA); - ElapsedTimer timer; + info.SetCachedData(&cached_data_impl, + v8::ScriptCompiler::kConsumeParserCache); + v8::base::ElapsedTimer timer; timer.Start(); // Allow lazy parsing; otherwise cached data won't help. bool success = Parser::Parse(&info, true); parse_time2 = timer.Elapsed(); if (!success) { fprintf(stderr, "Parsing failed\n"); - return std::make_pair(TimeDelta(), TimeDelta()); + return std::make_pair(v8::base::TimeDelta(), v8::base::TimeDelta()); } } return std::make_pair(parse_time1, parse_time2); @@ -107,6 +124,8 @@ std::pair<TimeDelta, TimeDelta> RunBaselineParser( int main(int argc, char* argv[]) { v8::V8::InitializeICU(); + v8::Platform* platform = v8::platform::CreateDefaultPlatform(); + v8::V8::InitializePlatform(platform); v8::V8::SetFlagsFromCommandLine(&argc, argv, true); Encoding encoding = LATIN1; std::vector<std::string> fnames; @@ -128,19 +147,21 @@ int main(int argc, char* argv[]) { fnames.push_back(std::string(argv[i])); } } - v8::Isolate* isolate = v8::Isolate::GetCurrent(); + v8::Isolate* isolate = v8::Isolate::New(); { + v8::Isolate::Scope isolate_scope(isolate); v8::HandleScope handle_scope(isolate); v8::Handle<v8::ObjectTemplate> global = v8::ObjectTemplate::New(isolate); v8::Local<v8::Context> context = v8::Context::New(isolate, NULL, global); - ASSERT(!context.IsEmpty()); + DCHECK(!context.IsEmpty()); { v8::Context::Scope scope(context); double first_parse_total = 0; double second_parse_total = 0; for (size_t i = 0; i < fnames.size(); i++) { - std::pair<TimeDelta, TimeDelta> time = RunBaselineParser( - fnames[i].c_str(), encoding, repeat, isolate, context); + std::pair<v8::base::TimeDelta, v8::base::TimeDelta> time = + RunBaselineParser(fnames[i].c_str(), encoding, repeat, isolate, + context); first_parse_total += time.first.InMillisecondsF(); second_parse_total += time.second.InMillisecondsF(); } @@ -152,5 +173,7 @@ int main(int argc, char* argv[]) { } } v8::V8::Dispose(); + v8::V8::ShutdownPlatform(); + delete platform; return 0; } diff --git a/deps/v8/tools/plot-timer-events b/deps/v8/tools/plot-timer-events index 8db067d5f12..15f28ac22b5 100755 --- a/deps/v8/tools/plot-timer-events +++ b/deps/v8/tools/plot-timer-events @@ -10,29 +10,43 @@ do done tools_path=`cd $(dirname "$0");pwd` -if [ ! "$D8_PATH" ]; then +if test ! "$D8_PATH"; then d8_public=`which d8` - if [ -x "$d8_public" ]; then D8_PATH=$(dirname "$d8_public"); fi + if test -x "$d8_public"; then D8_PATH=$(dirname "$d8_public"); fi fi -[ -n "$D8_PATH" ] || D8_PATH=$tools_path/.. + +if test ! -n "$D8_PATH"; then + D8_PATH=$tools_path/.. +fi + d8_exec=$D8_PATH/d8 -if [ ! -x "$d8_exec" ]; then +if test ! -x "$d8_exec"; then D8_PATH=`pwd`/out/native d8_exec=$D8_PATH/d8 fi -if [ ! -x "$d8_exec" ]; then +if test ! -x "$d8_exec"; then d8_exec=`grep -m 1 -o '".*/d8"' $log_file | sed 's/"//g'` fi -if [ ! -x "$d8_exec" ]; then +if test ! 
-x "$d8_exec"; then echo "d8 shell not found in $D8_PATH" echo "To build, execute 'make native' from the V8 directory" exit 1 fi -if [[ "$@" != *--distortion* ]]; then + +contains=0; +for arg in "$@"; do + `echo "$arg" | grep -q "^--distortion"` + if test $? -eq 0; then + contains=1 + break + fi +done + +if test "$contains" -eq 0; then # Try to find out how much the instrumentation overhead is. calibration_log=calibration.log calibration_script="for (var i = 0; i < 1000000; i++) print();" @@ -70,7 +84,7 @@ cat $log_file | -- $@ $options 2>/dev/null > timer-events.plot success=$? -if [[ $success != 0 ]] ; then +if test $success -ne 0; then cat timer-events.plot else cat timer-events.plot | gnuplot > timer-events.png diff --git a/deps/v8/tools/presubmit.py b/deps/v8/tools/presubmit.py index 88f1459d739..4b22444f9ea 100755 --- a/deps/v8/tools/presubmit.py +++ b/deps/v8/tools/presubmit.py @@ -54,6 +54,7 @@ build/deprecated build/endif_comment build/forward_decl +build/include_alpha build/include_order build/printf_format build/storage_class @@ -61,7 +62,6 @@ readability/boost readability/braces readability/casting -readability/check readability/constructors readability/fn_size readability/function @@ -80,7 +80,6 @@ runtime/nonconf runtime/printf runtime/printf_format -runtime/references runtime/rtti runtime/sizeof runtime/string @@ -101,6 +100,7 @@ whitespace/todo """.split() +# TODO(bmeurer): Fix and re-enable readability/check LINT_OUTPUT_PATTERN = re.compile(r'^.+[:(]\d+[:)]|^Done processing') @@ -200,7 +200,8 @@ def Run(self, path): def IgnoreDir(self, name): return (name.startswith('.') or - name in ('data', 'kraken', 'octane', 'sunspider')) + name in ('buildtools', 'data', 'gmock', 'gtest', 'kraken', + 'octane', 'sunspider')) def IgnoreFile(self, name): return name.startswith('.') @@ -235,7 +236,10 @@ def IgnoreFile(self, name): or (name in CppLintProcessor.IGNORE_LINT)) def GetPathsToSearch(self): - return ['src', 'include', 'samples', join('test', 'cctest')] + return ['src', 'include', 'samples', + join('test', 'base-unittests'), + join('test', 'cctest'), + join('test', 'compiler-unittests')] def GetCpplintScript(self, prio_path): for path in [prio_path] + os.environ["PATH"].split(os.pathsep): @@ -305,7 +309,8 @@ def FindFilesIn(self, path): if self.IgnoreDir(dir_part): break else: - if self.IsRelevant(file) and not self.IgnoreFile(file): + if (self.IsRelevant(file) and os.path.exists(file) + and not self.IgnoreFile(file)): result.append(join(path, file)) if output.wait() == 0: return result @@ -415,6 +420,19 @@ def ProcessFiles(self, files, path): return success +def CheckGeneratedRuntimeTests(workspace): + code = subprocess.call( + [sys.executable, join(workspace, "tools", "generate-runtime-tests.py"), + "check"]) + return code == 0 + + +def CheckExternalReferenceRegistration(workspace): + code = subprocess.call( + [sys.executable, join(workspace, "tools", "external-reference-check.py")]) + return code == 0 + + def GetOptions(): result = optparse.OptionParser() result.add_option('--no-lint', help="Do not run cpplint", default=False, @@ -433,6 +451,8 @@ def Main(): print "Running copyright header, trailing whitespaces and " \ "two empty lines between declarations check..." 
success = SourceProcessor().Run(workspace) and success + success = CheckGeneratedRuntimeTests(workspace) and success + success = CheckExternalReferenceRegistration(workspace) and success if success: return 0 else: diff --git a/deps/v8/tools/profile_view.js b/deps/v8/tools/profile_view.js index e041909b01a..d1545acd996 100644 --- a/deps/v8/tools/profile_view.js +++ b/deps/v8/tools/profile_view.js @@ -168,24 +168,6 @@ ProfileView.Node = function( }; -/** - * Returns a share of the function's total time in application's total time. - */ -ProfileView.Node.prototype.__defineGetter__( - 'totalPercent', - function() { return this.totalTime / - (this.head ? this.head.totalTime : this.totalTime) * 100.0; }); - - -/** - * Returns a share of the function's self time in application's total time. - */ -ProfileView.Node.prototype.__defineGetter__( - 'selfPercent', - function() { return this.selfTime / - (this.head ? this.head.totalTime : this.totalTime) * 100.0; }); - - /** * Returns a share of the function's total time in its parent's total time. */ diff --git a/deps/v8/tools/profviz/composer.js b/deps/v8/tools/profviz/composer.js index 0c9437ff541..85729b6d577 100644 --- a/deps/v8/tools/profviz/composer.js +++ b/deps/v8/tools/profviz/composer.js @@ -43,6 +43,7 @@ function PlotScriptComposer(kResX, kResY, error_output) { var kY1Offset = 11; // Offset for stack frame vs. event lines. var kDeoptRow = 7; // Row displaying deopts. + var kGetTimeHeight = 0.5; // Height of marker displaying timed part. var kMaxDeoptLength = 4; // Draw size of the largest deopt. var kPauseLabelPadding = 5; // Padding for pause time labels. var kNumPauseLabels = 7; // Number of biggest pauses to label. @@ -105,6 +106,8 @@ function PlotScriptComposer(kResX, kResY, error_output) { new TimerEvent("recompile async", "#CC4499", false, 1), 'V8.CompileEval': new TimerEvent("compile eval", "#CC4400", true, 0), + 'V8.IcMiss': + new TimerEvent("ic miss", "#CC9900", false, 0), 'V8.Parse': new TimerEvent("parse", "#00CC00", true, 0), 'V8.PreParse': @@ -134,6 +137,7 @@ function PlotScriptComposer(kResX, kResY, error_output) { var code_map = new CodeMap(); var execution_pauses = []; var deopts = []; + var gettime = []; var event_stack = []; var last_time_stamp = []; for (var i = 0; i < kNumThreads; i++) { @@ -272,6 +276,10 @@ function PlotScriptComposer(kResX, kResY, error_output) { deopts.push(new Deopt(time, size)); } + var processCurrentTimeEvent = function(time) { + gettime.push(time); + } + var processSharedLibrary = function(name, start, end) { var code_entry = new CodeMap.CodeEntry(end - start, name); code_entry.kind = -3; // External code kind. 
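Aside: the hunk that follows registers the new 'current-time' log entry in composer.js's table-driven dispatch, where each event name maps to a list of field parsers plus a processor. A rough Python analogue of that dispatch shape (all names here are hypothetical, not part of the patch):

    gettime = []
    deopts = []

    dispatchers = {
        "current-time": {"parsers": [float],
                         "processor": lambda time: gettime.append(time)},
        "code-deopt":   {"parsers": [float, int],
                         "processor": lambda time, size: deopts.append((time, size))},
    }

    def process_line(line):
        name, _, rest = line.partition(",")
        entry = dispatchers.get(name)
        if entry is None:
            return  # unrecognized events are skipped
        args = [parse(field)
                for parse, field in zip(entry["parsers"], rest.split(","))]
        entry["processor"](*args)

    process_line("current-time,12.5")
    process_line("code-deopt,13.0,128")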
@@ -314,6 +322,8 @@ function PlotScriptComposer(kResX, kResY, error_output) { processor: processCodeDeleteEvent }, 'code-deopt': { parsers: [parseTimeStamp, parseInt], processor: processCodeDeoptEvent }, + 'current-time': { parsers: [parseTimeStamp], + processor: processCurrentTimeEvent }, 'tick': { parsers: [parseInt, parseTimeStamp, null, null, parseInt, 'var-args'], processor: processTickEvent } @@ -389,12 +399,15 @@ function PlotScriptComposer(kResX, kResY, error_output) { output("set xtics out nomirror"); output("unset key"); - function DrawBarBase(color, start, end, top, bottom) { + function DrawBarBase(color, start, end, top, bottom, transparency) { obj_index++; command = "set object " + obj_index + " rect"; command += " from " + start + ", " + top; command += " to " + end + ", " + bottom; command += " fc rgb \"" + color + "\""; + if (transparency) { + command += " fs transparent solid " + transparency; + } output(command); } @@ -411,7 +424,6 @@ function PlotScriptComposer(kResX, kResY, error_output) { for (var name in TimerEvents) { var event = TimerEvents[name]; var ranges = RestrictRangesTo(event.ranges, range_start, range_end); - ranges = MergeRanges(ranges); var sum = ranges.map(function(range) { return range.duration(); }) .reduce(function(a, b) { return a + b; }, 0); @@ -429,6 +441,13 @@ function PlotScriptComposer(kResX, kResY, error_output) { deopt.size / max_deopt_size * kMaxDeoptLength); } + // Plot current time polls. + if (gettime.length > 1) { + var start = gettime[0]; + var end = gettime.pop(); + DrawBarBase("#0000BB", start, end, kGetTimeHeight, 0, 0.2); + } + // Name Y-axis. var ytics = []; for (name in TimerEvents) { @@ -502,7 +521,8 @@ function PlotScriptComposer(kResX, kResY, error_output) { execution_pauses.sort( function(a, b) { return b.duration() - a.duration(); }); - var max_pause_time = execution_pauses[0].duration(); + var max_pause_time = execution_pauses.length > 0 + ? execution_pauses[0].duration() : 0; padding = kPauseLabelPadding * (range_end - range_start) / kResX; var y_scale = kY1Offset / max_pause_time / 2; for (var i = 0; i < execution_pauses.length && i < kNumPauseLabels; i++) { diff --git a/deps/v8/tools/profviz/stdio.js b/deps/v8/tools/profviz/stdio.js index db38f042a7d..5a8311dfb28 100644 --- a/deps/v8/tools/profviz/stdio.js +++ b/deps/v8/tools/profviz/stdio.js @@ -43,7 +43,7 @@ if (!isNaN(range_start)) range_start_override = range_start; if (!isNaN(range_end)) range_end_override = range_end; var kResX = 1600; -var kResY = 600; +var kResY = 700; function log_error(text) { print(text); quit(1); diff --git a/deps/v8/tools/push-to-trunk/auto_roll.py b/deps/v8/tools/push-to-trunk/auto_roll.py index 607ca0897ab..6e6c7fe2abc 100755 --- a/deps/v8/tools/push-to-trunk/auto_roll.py +++ b/deps/v8/tools/push-to-trunk/auto_roll.py @@ -12,8 +12,11 @@ from common_includes import * import chromium_roll +CLUSTERFUZZ_API_KEY_FILE = "CLUSTERFUZZ_API_KEY_FILE" + CONFIG = { PERSISTFILE_BASENAME: "/tmp/v8-auto-roll-tempfile", + CLUSTERFUZZ_API_KEY_FILE: ".cf_api_key", } CR_DEPS_URL = 'http://src.chromium.org/svn/trunk/src/DEPS' @@ -65,6 +68,24 @@ def RunStep(self): return True +class CheckClusterFuzz(Step): + MESSAGE = "Check ClusterFuzz api for new problems." + + def RunStep(self): + if not os.path.exists(self.Config(CLUSTERFUZZ_API_KEY_FILE)): + print "Skipping ClusterFuzz check. No api key file found." + return False + api_key = FileToText(self.Config(CLUSTERFUZZ_API_KEY_FILE)) + # Check for open, reproducible issues that have no associated bug. 
+ result = self._side_effect_handler.ReadClusterFuzzAPI( + api_key, job_type="linux_asan_d8_dbg", reproducible="True", + open="True", bug_information="", + revision_greater_or_equal=str(self["last_push"])) + if result: + print "Stop due to pending ClusterFuzz issues." + return True + + class RollChromium(Step): MESSAGE = "Roll V8 into Chromium." @@ -75,6 +96,7 @@ def RunStep(self): "--reviewer", self._options.reviewer, "--chromium", self._options.chromium, "--force", + "--use-commit-queue", ] if self._options.sheriff: args.extend([ @@ -108,6 +130,7 @@ def _Steps(self): CheckActiveRoll, DetectLastPush, DetectLastRoll, + CheckClusterFuzz, RollChromium, ] diff --git a/deps/v8/tools/push-to-trunk/auto_tag.py b/deps/v8/tools/push-to-trunk/auto_tag.py new file mode 100755 index 00000000000..6beaaff8a30 --- /dev/null +++ b/deps/v8/tools/push-to-trunk/auto_tag.py @@ -0,0 +1,200 @@ +#!/usr/bin/env python +# Copyright 2014 the V8 project authors. All rights reserved. +# Use of this source code is governed by a BSD-style license that can be +# found in the LICENSE file. + +import argparse +import sys + +from common_includes import * + +CONFIG = { + BRANCHNAME: "auto-tag-v8", + PERSISTFILE_BASENAME: "/tmp/v8-auto-tag-tempfile", + DOT_GIT_LOCATION: ".git", + VERSION_FILE: "src/version.cc", +} + + +class Preparation(Step): + MESSAGE = "Preparation." + + def RunStep(self): + self.CommonPrepare() + self.PrepareBranch() + self.GitCheckout("master") + self.GitSVNRebase() + + +class GetTags(Step): + MESSAGE = "Get all V8 tags." + + def RunStep(self): + self.GitCreateBranch(self._config[BRANCHNAME]) + + # Get remote tags. + tags = filter(lambda s: re.match(r"^svn/tags/[\d+\.]+$", s), + self.GitRemotes()) + + # Remove 'svn/tags/' prefix. + self["tags"] = map(lambda s: s[9:], tags) + + +class GetOldestUntaggedVersion(Step): + MESSAGE = "Check if there's a version on bleeding edge without a tag." + + def RunStep(self): + tags = set(self["tags"]) + self["candidate"] = None + self["candidate_version"] = None + self["next"] = None + self["next_version"] = None + + # Iterate backwards through all automatic version updates. + for git_hash in self.GitLog( + format="%H", grep="\\[Auto\\-roll\\] Bump up version to").splitlines(): + + # Get the version. + if not self.GitCheckoutFileSafe(self._config[VERSION_FILE], git_hash): + continue + + self.ReadAndPersistVersion() + version = self.ArrayToVersion("") + + # Strip off trailing patch level (tags don't include tag level 0). + if version.endswith(".0"): + version = version[:-2] + + # Clean up checked-out version file. + self.GitCheckoutFileSafe(self._config[VERSION_FILE], "HEAD") + + if version in tags: + if self["candidate"]: + # Revision "git_hash" is tagged already and "candidate" was the next + # newer revision without a tag. + break + else: + print("Stop as %s is the latest version and it has been tagged." % + version) + self.CommonCleanup() + return True + else: + # This is the second oldest version without a tag. + self["next"] = self["candidate"] + self["next_version"] = self["candidate_version"] + + # This is the oldest version without a tag. + self["candidate"] = git_hash + self["candidate_version"] = version + + if not self["candidate"] or not self["candidate_version"]: + print "Nothing found to tag." + self.CommonCleanup() + return True + + print("Candidate for tagging is %s with version %s" % + (self["candidate"], self["candidate_version"])) + + +class GetLKGRs(Step): + MESSAGE = "Get the last lkgrs." 
+ + def RunStep(self): + revision_url = "https://v8-status.appspot.com/revisions?format=json" + status_json = self.ReadURL(revision_url, wait_plan=[5, 20]) + self["lkgrs"] = [entry["revision"] + for entry in json.loads(status_json) if entry["status"]] + + +class CalculateTagRevision(Step): + MESSAGE = "Calculate the revision to tag." + + def LastLKGR(self, min_rev, max_rev): + """Finds the newest lkgr between min_rev (inclusive) and max_rev + (exclusive). + """ + for lkgr in self["lkgrs"]: + # LKGRs are reverse sorted. + if int(min_rev) <= int(lkgr) and int(lkgr) < int(max_rev): + return lkgr + return None + + def RunStep(self): + # Get the lkgr after the tag candidate and before the next tag candidate. + candidate_svn = self.GitSVNFindSVNRev(self["candidate"]) + if self["next"]: + next_svn = self.GitSVNFindSVNRev(self["next"]) + else: + # Don't include the version change commit itself if there is no upper + # limit yet. + candidate_svn = str(int(candidate_svn) + 1) + next_svn = sys.maxint + lkgr_svn = self.LastLKGR(candidate_svn, next_svn) + + if not lkgr_svn: + print "There is no lkgr since the candidate version yet." + self.CommonCleanup() + return True + + # Let's check if the lkgr is at least three hours old. + self["lkgr"] = self.GitSVNFindGitHash(lkgr_svn) + if not self["lkgr"]: + print "Couldn't find git hash for lkgr %s" % lkgr_svn + self.CommonCleanup() + return True + + lkgr_utc_time = int(self.GitLog(n=1, format="%at", git_hash=self["lkgr"])) + current_utc_time = self._side_effect_handler.GetUTCStamp() + + if current_utc_time < lkgr_utc_time + 10800: + print "Candidate lkgr %s is too recent for tagging." % lkgr_svn + self.CommonCleanup() + return True + + print "Tagging revision %s with %s" % (lkgr_svn, self["candidate_version"]) + + +class MakeTag(Step): + MESSAGE = "Tag the version." + + def RunStep(self): + if not self._options.dry_run: + self.GitReset(self["lkgr"]) + self.GitSVNTag(self["candidate_version"]) + + +class CleanUp(Step): + MESSAGE = "Clean up." + + def RunStep(self): + self.CommonCleanup() + + +class AutoTag(ScriptsBase): + def _PrepareOptions(self, parser): + parser.add_argument("--dry_run", help="Don't tag the new version.", + default=False, action="store_true") + + def _ProcessOptions(self, options): # pragma: no cover + if not options.dry_run and not options.author: + print "Specify your chromium.org email with -a" + return False + options.wait_for_lgtm = False + options.force_readline_defaults = True + options.force_upload = True + return True + + def _Steps(self): + return [ + Preparation, + GetTags, + GetOldestUntaggedVersion, + GetLKGRs, + CalculateTagRevision, + MakeTag, + CleanUp, + ] + + +if __name__ == "__main__": # pragma: no cover + sys.exit(AutoTag(CONFIG).Run()) diff --git a/deps/v8/tools/push-to-trunk/bump_up_version.py b/deps/v8/tools/push-to-trunk/bump_up_version.py new file mode 100755 index 00000000000..af5f73a600d --- /dev/null +++ b/deps/v8/tools/push-to-trunk/bump_up_version.py @@ -0,0 +1,241 @@ +#!/usr/bin/env python +# Copyright 2014 the V8 project authors. All rights reserved. +# Use of this source code is governed by a BSD-style license that can be +# found in the LICENSE file. + +""" +Script for auto-increasing the version on bleeding_edge. + +The script can be run regularly by a cron job. It will increase the build +level of the version on bleeding_edge if: +- the lkgr version is smaller than the version of the latest revision, +- the lkgr version is not a version change itself, +- the tree is not closed for maintenance. 
+ +The new version will be the maximum of the bleeding_edge and trunk versions +1. +E.g. latest bleeding_edge version: 3.22.11.0 and latest trunk 3.23.0.0 gives +the new version 3.23.1.0. + +This script requires a depot tools git checkout. I.e. 'fetch v8'. +""" + +import argparse +import os +import sys + +from common_includes import * + +CONFIG = { + PERSISTFILE_BASENAME: "/tmp/v8-bump-up-version-tempfile", + VERSION_FILE: "src/version.cc", +} + +VERSION_BRANCH = "auto-bump-up-version" + + +class Preparation(Step): + MESSAGE = "Preparation." + + def RunStep(self): + # Check for a clean workdir. + if not self.GitIsWorkdirClean(): # pragma: no cover + # This is in case a developer runs this script on a dirty tree. + self.GitStash() + + # TODO(machenbach): This should be called master after the git switch. + self.GitCheckout("bleeding_edge") + + self.GitPull() + + # Ensure a clean version branch. + self.DeleteBranch(VERSION_BRANCH) + + +class GetCurrentBleedingEdgeVersion(Step): + MESSAGE = "Get latest bleeding edge version." + + def RunStep(self): + # TODO(machenbach): This should be called master after the git switch. + self.GitCheckout("bleeding_edge") + + # Store latest version and revision. + self.ReadAndPersistVersion() + self["latest_version"] = self.ArrayToVersion("") + self["latest"] = self.GitLog(n=1, format="%H") + print "Bleeding edge version: %s" % self["latest_version"] + + +# This step is pure paranoia. It forbids the script to continue if the last +# commit changed version.cc. Just in case the other bailout has a bug, this +# prevents the script from continuously commiting version changes. +class LastChangeBailout(Step): + MESSAGE = "Stop script if the last change modified the version." + + def RunStep(self): + if self._config[VERSION_FILE] in self.GitChangedFiles(self["latest"]): + print "Stop due to recent version change." + return True + + +# TODO(machenbach): Implement this for git. +class FetchLKGR(Step): + MESSAGE = "Fetching V8 LKGR." + + def RunStep(self): + lkgr_url = "https://v8-status.appspot.com/lkgr" + self["lkgr_svn"] = self.ReadURL(lkgr_url, wait_plan=[5]) + + +# TODO(machenbach): Implement this for git. With a git lkgr we could simply +# checkout that revision. With svn, we have to search backwards until that +# revision is found. +class GetLKGRVersion(Step): + MESSAGE = "Get bleeding edge lkgr version." + + def RunStep(self): + self.GitCheckout("bleeding_edge") + # If the commit was made from svn, there is a mapping entry in the commit + # message. + self["lkgr"] = self.GitLog( + grep="^git-svn-id: [^@]*@%s [A-Za-z0-9-]*$" % self["lkgr_svn"], + format="%H") + + # FIXME(machenbach): http://crbug.com/391712 can lead to svn lkgrs on the + # trunk branch (rarely). + if not self["lkgr"]: # pragma: no cover + self.Die("No git hash found for svn lkgr.") + + self.GitCreateBranch(VERSION_BRANCH, self["lkgr"]) + self.ReadAndPersistVersion("lkgr_") + self["lkgr_version"] = self.ArrayToVersion("lkgr_") + print "LKGR version: %s" % self["lkgr_version"] + + # Ensure a clean version branch. + self.GitCheckout("bleeding_edge") + self.DeleteBranch(VERSION_BRANCH) + + +class LKGRVersionUpToDateBailout(Step): + MESSAGE = "Stop script if the lkgr has a renewed version." + + def RunStep(self): + # If a version-change commit becomes the lkgr, don't bump up the version + # again. + if self._config[VERSION_FILE] in self.GitChangedFiles(self["lkgr"]): + print "Stop because the lkgr is a version change itself." 
+ return True + + # Don't bump up the version if it got updated already after the lkgr. + if SortingKey(self["lkgr_version"]) < SortingKey(self["latest_version"]): + print("Stop because the latest version already changed since the lkgr " + "version.") + return True + + +class GetTrunkVersion(Step): + MESSAGE = "Get latest trunk version." + + def RunStep(self): + # TODO(machenbach): This should be called trunk after the git switch. + self.GitCheckout("master") + self.GitPull() + self.ReadAndPersistVersion("trunk_") + self["trunk_version"] = self.ArrayToVersion("trunk_") + print "Trunk version: %s" % self["trunk_version"] + + +class CalculateVersion(Step): + MESSAGE = "Calculate the new version." + + def RunStep(self): + if self["lkgr_build"] == "9999": # pragma: no cover + # If version control on bleeding edge was switched off, just use the last + # trunk version. + self["lkgr_version"] = self["trunk_version"] + + # The new version needs to be greater than the max on bleeding edge and + # trunk. + max_version = max(self["trunk_version"], + self["lkgr_version"], + key=SortingKey) + + # Strip off possible leading zeros. + self["new_major"], self["new_minor"], self["new_build"], _ = ( + map(str, map(int, max_version.split(".")))) + + self["new_build"] = str(int(self["new_build"]) + 1) + self["new_patch"] = "0" + + self["new_version"] = ("%s.%s.%s.0" % + (self["new_major"], self["new_minor"], self["new_build"])) + print "New version is %s" % self["new_version"] + + if self._options.dry_run: # pragma: no cover + print "Dry run, skipping version change." + return True + + +class CheckTreeStatus(Step): + MESSAGE = "Checking v8 tree status message." + + def RunStep(self): + status_url = "https://v8-status.appspot.com/current?format=json" + status_json = self.ReadURL(status_url, wait_plan=[5, 20, 300, 300]) + message = json.loads(status_json)["message"] + if re.search(r"maintenance|no commits", message, flags=re.I): + print "Skip version change by tree status: \"%s\"" % message + return True + + +class ChangeVersion(Step): + MESSAGE = "Bump up the version." + + def RunStep(self): + self.GitCreateBranch(VERSION_BRANCH, "bleeding_edge") + + self.SetVersion(self.Config(VERSION_FILE), "new_") + + try: + self.GitCommit("[Auto-roll] Bump up version to %s\n\nTBR=%s" % + (self["new_version"], self._options.author)) + self.GitUpload(author=self._options.author, + force=self._options.force_upload, + bypass_hooks=True) + self.GitDCommit() + print "Successfully changed the version." + finally: + # Clean up. 
+ self.GitCheckout("bleeding_edge") + self.DeleteBranch(VERSION_BRANCH) + + +class BumpUpVersion(ScriptsBase): + def _PrepareOptions(self, parser): + parser.add_argument("--dry_run", help="Don't commit the new version.", + default=False, action="store_true") + + def _ProcessOptions(self, options): # pragma: no cover + if not options.dry_run and not options.author: + print "Specify your chromium.org email with -a" + return False + options.wait_for_lgtm = False + options.force_readline_defaults = True + options.force_upload = True + return True + + def _Steps(self): + return [ + Preparation, + GetCurrentBleedingEdgeVersion, + LastChangeBailout, + FetchLKGR, + GetLKGRVersion, + LKGRVersionUpToDateBailout, + GetTrunkVersion, + CalculateVersion, + CheckTreeStatus, + ChangeVersion, + ] + +if __name__ == "__main__": # pragma: no cover + sys.exit(BumpUpVersion(CONFIG).Run()) diff --git a/deps/v8/tools/push-to-trunk/chromium_roll.py b/deps/v8/tools/push-to-trunk/chromium_roll.py index 35ab24b05d6..0138ff8e72f 100755 --- a/deps/v8/tools/push-to-trunk/chromium_roll.py +++ b/deps/v8/tools/push-to-trunk/chromium_roll.py @@ -105,7 +105,8 @@ def RunStep(self): % self["sheriff"]) self.GitCommit("%s%s\n\nTBR=%s" % (commit_title, sheriff, rev)) self.GitUpload(author=self._options.author, - force=self._options.force_upload) + force=self._options.force_upload, + cq=self._options.use_commit_queue) print "CL uploaded." @@ -143,6 +144,9 @@ def _PrepareOptions(self, parser): "directory to automate the V8 roll.")) parser.add_argument("-l", "--last-push", help="The git commit ID of the last push to trunk.") + parser.add_argument("--use-commit-queue", + help="Check the CQ bit on upload.", + default=False, action="store_true") def _ProcessOptions(self, options): # pragma: no cover if not options.manual and not options.reviewer: diff --git a/deps/v8/tools/push-to-trunk/common_includes.py b/deps/v8/tools/push-to-trunk/common_includes.py index 482509f7d19..0e57a25bb14 100644 --- a/deps/v8/tools/push-to-trunk/common_includes.py +++ b/deps/v8/tools/push-to-trunk/common_includes.py @@ -28,6 +28,7 @@ import argparse import datetime +import httplib import imp import json import os @@ -36,6 +37,7 @@ import sys import textwrap import time +import urllib import urllib2 from git_recipes import GitRecipesMixin @@ -169,6 +171,16 @@ def FormatIssues(prefix, bugs): return "" +def SortingKey(version): + """Key for sorting version number strings: '3.11' > '3.2.1.1'""" + version_keys = map(int, version.split(".")) + # Fill up to full version numbers to normalize comparison. + while len(version_keys) < 4: # pragma: no cover + version_keys.append(0) + # Fill digits. + return ".".join(map("{0:04d}".format, version_keys)) + + # Some commands don't like the pipe, e.g. calling vi from within the script or # from subscripts like git cl upload. def Command(cmd, args="", prefix="", pipe=True): @@ -207,12 +219,34 @@ def ReadURL(self, url, params=None): finally: url_fh.close() + def ReadClusterFuzzAPI(self, api_key, **params): + params["api_key"] = api_key.strip() + params = urllib.urlencode(params) + + headers = {"Content-type": "application/x-www-form-urlencoded"} + + conn = httplib.HTTPSConnection("backend-dot-cluster-fuzz.appspot.com") + conn.request("POST", "/_api/", params, headers) + + response = conn.getresponse() + data = response.read() + + try: + return json.loads(data) + except: + print data + print "ERROR: Could not read response. Is your key valid?" 
+ raise + def Sleep(self, seconds): time.sleep(seconds) def GetDate(self): return datetime.date.today().strftime("%Y-%m-%d") + def GetUTCStamp(self): + return time.mktime(datetime.datetime.utcnow().timetuple()) + DEFAULT_SIDE_EFFECT_HANDLER = SideEffectHandler() @@ -348,7 +382,7 @@ def Confirm(self, msg): def DeleteBranch(self, name): for line in self.GitBranch().splitlines(): - if re.match(r".*\s+%s$" % name, line): + if re.match(r"\*?\s*%s$" % re.escape(name), line): msg = "Branch %s exists, do you want to delete it?" % name if self.Confirm(msg): self.GitDeleteBranch(name) @@ -446,6 +480,25 @@ def FindLastTrunkPush(self, parent_hash="", include_patches=False): return self.GitLog(n=1, format="%H", grep=push_pattern, parent_hash=parent_hash, branch=branch) + def ArrayToVersion(self, prefix): + return ".".join([self[prefix + "major"], + self[prefix + "minor"], + self[prefix + "build"], + self[prefix + "patch"]]) + + def SetVersion(self, version_file, prefix): + output = "" + for line in FileToText(version_file).splitlines(): + if line.startswith("#define MAJOR_VERSION"): + line = re.sub("\d+$", self[prefix + "major"], line) + elif line.startswith("#define MINOR_VERSION"): + line = re.sub("\d+$", self[prefix + "minor"], line) + elif line.startswith("#define BUILD_NUMBER"): + line = re.sub("\d+$", self[prefix + "build"], line) + elif line.startswith("#define PATCH_LEVEL"): + line = re.sub("\d+$", self[prefix + "patch"], line) + output += "%s\n" % line + TextToFile(output, version_file) class UploadStep(Step): MESSAGE = "Upload for code review." diff --git a/deps/v8/tools/push-to-trunk/git_recipes.py b/deps/v8/tools/push-to-trunk/git_recipes.py index 8c1e314d7d8..6ffb2da8340 100644 --- a/deps/v8/tools/push-to-trunk/git_recipes.py +++ b/deps/v8/tools/push-to-trunk/git_recipes.py @@ -68,6 +68,9 @@ def GitReset(self, name): assert name self.Git(MakeArgs(["reset --hard", name])) + def GitStash(self): + self.Git(MakeArgs(["stash"])) + def GitRemotes(self): return map(str.strip, self.Git(MakeArgs(["branch -r"])).splitlines()) @@ -144,7 +147,8 @@ def GitApplyPatch(self, patch_file, reverse=False): args.append(Quoted(patch_file)) self.Git(MakeArgs(args)) - def GitUpload(self, reviewer="", author="", force=False): + def GitUpload(self, reviewer="", author="", force=False, cq=False, + bypass_hooks=False): args = ["cl upload --send-mail"] if author: args += ["--email", Quoted(author)] @@ -152,6 +156,10 @@ def GitUpload(self, reviewer="", author="", force=False): args += ["-r", Quoted(reviewer)] if force: args.append("-f") + if cq: + args.append("--use-commit-queue") + if bypass_hooks: + args.append("--bypass-hooks") # TODO(machenbach): Check output in forced mode. Verify that all required # base files were uploaded, if not retry. self.Git(MakeArgs(args), pipe=False) @@ -180,6 +188,9 @@ def GitPull(self): def GitSVNFetch(self): self.Git("svn fetch") + def GitSVNRebase(self): + self.Git("svn rebase") + # TODO(machenbach): Unused? Remove. @Strip def GitSVNLog(self): diff --git a/deps/v8/tools/push-to-trunk/push_to_trunk.py b/deps/v8/tools/push-to-trunk/push_to_trunk.py index c317bdc7305..56375fe79b4 100755 --- a/deps/v8/tools/push-to-trunk/push_to_trunk.py +++ b/deps/v8/tools/push-to-trunk/push_to_trunk.py @@ -124,6 +124,20 @@ def RunStep(self): self["last_push_bleeding_edge"] = last_push_bleeding_edge +# TODO(machenbach): Code similarities with bump_up_version.py. Merge after +# turning this script into a pure git script. 
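Aside: SortingKey, moved into common_includes.py above (and widened from the three-digit padding of the old releases.py copy to "{0:04d}"), normalizes dotted version strings to four fixed-width numeric fields so that plain lexicographic comparison matches numeric version order. A quick standalone check of that normalization:

    def sorting_key(version):
        # Pad to four numeric fields, then zero-pad each to four digits.
        keys = [int(p) for p in version.split(".")]
        while len(keys) < 4:
            keys.append(0)
        return ".".join("{0:04d}".format(k) for k in keys)

    assert sorting_key("3.11") > sorting_key("3.2.1.1")    # '0003.0011...' > '0003.0002...'
    assert sorting_key("3.22.4.0") < sorting_key("3.22.5.0")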
+class GetCurrentBleedingEdgeVersion(Step): + MESSAGE = "Get latest bleeding edge version." + + def RunStep(self): + self.GitCheckoutFile(self.Config(VERSION_FILE), "svn/bleeding_edge") + + # Store latest version. + self.ReadAndPersistVersion("latest_") + self["latest_version"] = self.ArrayToVersion("latest_") + print "Bleeding edge version: %s" % self["latest_version"] + + class IncrementVersion(Step): MESSAGE = "Increment version number." @@ -131,11 +145,23 @@ def RunStep(self): # Retrieve current version from last trunk push. self.GitCheckoutFile(self.Config(VERSION_FILE), self["last_push_trunk"]) self.ReadAndPersistVersion() + self["trunk_version"] = self.ArrayToVersion("") + + if self["latest_build"] == "9999": # pragma: no cover + # If version control on bleeding edge was switched off, just use the last + # trunk version. + self["latest_version"] = self["trunk_version"] + + if SortingKey(self["trunk_version"]) < SortingKey(self["latest_version"]): + # If the version on bleeding_edge is newer than on trunk, use it. + self.GitCheckoutFile(self.Config(VERSION_FILE), "svn/bleeding_edge") + self.ReadAndPersistVersion() if self.Confirm(("Automatically increment BUILD_NUMBER? (Saying 'n' will " "fire up your EDITOR on %s so you can make arbitrary " "changes. When you're done, save the file and exit your " "EDITOR.)" % self.Config(VERSION_FILE))): + text = FileToText(self.Config(VERSION_FILE)) text = MSub(r"(?<=#define BUILD_NUMBER)(?P<space>\s+)\d*$", r"\g<space>%s" % str(int(self["build"]) + 1), @@ -147,6 +173,10 @@ def RunStep(self): # Variables prefixed with 'new_' contain the new version numbers for the # ongoing trunk push. self.ReadAndPersistVersion("new_") + + # Make sure patch level is 0 in a new push. + self["new_patch"] = "0" + self["version"] = "%s.%s.%s" % (self["new_major"], self["new_minor"], self["new_build"]) @@ -307,20 +337,7 @@ def RunStep(self): # The version file has been modified by the patch. Reset it to the version # on trunk and apply the correct version. self.GitCheckoutFile(self.Config(VERSION_FILE), "svn/trunk") - output = "" - for line in FileToText(self.Config(VERSION_FILE)).splitlines(): - if line.startswith("#define MAJOR_VERSION"): - line = re.sub("\d+$", self["new_major"], line) - elif line.startswith("#define MINOR_VERSION"): - line = re.sub("\d+$", self["new_minor"], line) - elif line.startswith("#define BUILD_NUMBER"): - line = re.sub("\d+$", self["new_build"], line) - elif line.startswith("#define PATCH_LEVEL"): - line = re.sub("\d+$", "0", line) - elif line.startswith("#define IS_CANDIDATE_VERSION"): - line = re.sub("\d+$", "0", line) - output += "%s\n" % line - TextToFile(output, self.Config(VERSION_FILE)) + self.SetVersion(self.Config(VERSION_FILE), "new_") class CommitTrunk(Step): @@ -428,6 +445,7 @@ def _Steps(self): FreshBranch, PreparePushRevision, DetectLastPush, + GetCurrentBleedingEdgeVersion, IncrementVersion, PrepareChangeLog, EditChangeLog, diff --git a/deps/v8/tools/push-to-trunk/releases.py b/deps/v8/tools/push-to-trunk/releases.py index 2a22b912ebf..ff578449680 100755 --- a/deps/v8/tools/push-to-trunk/releases.py +++ b/deps/v8/tools/push-to-trunk/releases.py @@ -52,15 +52,10 @@ '|"http\:\/\/v8\.googlecode\.com\/svn\/trunk@)' '([0-9]+)".*$', re.M) - -def SortingKey(version): - """Key for sorting version number strings: '3.11' > '3.2.1.1'""" - version_keys = map(int, version.split(".")) - # Fill up to full version numbers to normalize comparison. - while len(version_keys) < 4: - version_keys.append(0) - # Fill digits. 
- return ".".join(map("{0:03d}".format, version_keys)) +# Expression to pick tag and revision for bleeding edge tags. To be used with +# output of 'svn log'. +BLEEDING_EDGE_TAGS_RE = re.compile( + r"A \/tags\/([^\s]+) \(from \/branches\/bleeding_edge\:(\d+)\)") def SortBranches(branches): @@ -150,24 +145,14 @@ def GetMergedPatches(self, body): patches = "-%s" % patches return patches - def GetRelease(self, git_hash, branch): - self.ReadAndPersistVersion() - base_version = [self["major"], self["minor"], self["build"]] - version = ".".join(base_version) - body = self.GitLog(n=1, format="%B", git_hash=git_hash) - - patches = "" - if self["patch"] != "0": - version += ".%s" % self["patch"] - patches = self.GetMergedPatches(body) - - title = self.GitLog(n=1, format="%s", git_hash=git_hash) + def GetReleaseDict( + self, git_hash, bleeding_edge_rev, branch, version, patches, cl_body): revision = self.GitSVNFindSVNRev(git_hash) return { # The SVN revision on the branch. "revision": revision, # The SVN revision on bleeding edge (only for newer trunk pushes). - "bleeding_edge": self.GetBleedingEdgeFromPush(title), + "bleeding_edge": bleeding_edge_rev, # The branch name. "branch": branch, # The version for displaying in the form 3.26.3 or 3.26.3.12. @@ -182,14 +167,45 @@ def GetRelease(self, git_hash, branch): "chromium_branch": "", # Link to the CL on code review. Trunk pushes are not uploaded, so this # field will be populated below with the recent roll CL link. - "review_link": MatchSafe(REVIEW_LINK_RE.search(body)), + "review_link": MatchSafe(REVIEW_LINK_RE.search(cl_body)), # Link to the commit message on google code. "revision_link": ("https://code.google.com/p/v8/source/detail?r=%s" % revision), - }, self["patch"] + } + + def GetRelease(self, git_hash, branch): + self.ReadAndPersistVersion() + base_version = [self["major"], self["minor"], self["build"]] + version = ".".join(base_version) + body = self.GitLog(n=1, format="%B", git_hash=git_hash) + + patches = "" + if self["patch"] != "0": + version += ".%s" % self["patch"] + patches = self.GetMergedPatches(body) + + title = self.GitLog(n=1, format="%s", git_hash=git_hash) + return self.GetReleaseDict( + git_hash, self.GetBleedingEdgeFromPush(title), branch, version, + patches, body), self["patch"] + + def GetReleasesFromBleedingEdge(self): + tag_text = self.SVN("log https://v8.googlecode.com/svn/tags -v --limit 20") + releases = [] + for (tag, revision) in re.findall(BLEEDING_EDGE_TAGS_RE, tag_text): + git_hash = self.GitSVNFindGitHash(revision) + + # Add bleeding edge release. It does not contain patches or a code + # review link, as tags are not uploaded. + releases.append(self.GetReleaseDict( + git_hash, revision, "bleeding_edge", tag, "", "")) + return releases def GetReleasesFromBranch(self, branch): self.GitReset("svn/%s" % branch) + if branch == 'bleeding_edge': + return self.GetReleasesFromBleedingEdge() + releases = [] try: for git_hash in self.GitLog(format="%H").splitlines(): @@ -235,14 +251,16 @@ def RunStep(self): releases += self.GetReleasesFromBranch(stable) releases += self.GetReleasesFromBranch(beta) releases += self.GetReleasesFromBranch("trunk") + releases += self.GetReleasesFromBranch("bleeding_edge") elif self._options.branch == 'all': # pragma: no cover # Retrieve the full release history. 
for branch in branches: releases += self.GetReleasesFromBranch(branch) releases += self.GetReleasesFromBranch("trunk") + releases += self.GetReleasesFromBranch("bleeding_edge") else: # pragma: no cover # Retrieve history for a specified branch. - assert self._options.branch in branches + ["trunk"] + assert self._options.branch in branches + ["trunk", "bleeding_edge"] releases += self.GetReleasesFromBranch(self._options.branch) self["releases"] = sorted(releases, diff --git a/deps/v8/tools/push-to-trunk/test_scripts.py b/deps/v8/tools/push-to-trunk/test_scripts.py index bc79cfd5d7b..82a4d15f2ec 100644 --- a/deps/v8/tools/push-to-trunk/test_scripts.py +++ b/deps/v8/tools/push-to-trunk/test_scripts.py @@ -35,6 +35,7 @@ from auto_push import CheckLastPush from auto_push import SETTINGS_LOCATION import auto_roll +from auto_roll import CLUSTERFUZZ_API_KEY_FILE import common_includes from common_includes import * import merge_to_branch @@ -47,6 +48,11 @@ from chromium_roll import ChromiumRoll import releases from releases import Releases +import bump_up_version +from bump_up_version import BumpUpVersion +from bump_up_version import LastChangeBailout +from bump_up_version import LKGRVersionUpToDateBailout +from auto_tag import AutoTag TEST_CONFIG = { @@ -66,6 +72,7 @@ "/tmp/test-merge-to-branch-tempfile-already-merging", COMMIT_HASHES_FILE: "/tmp/test-merge-to-branch-tempfile-PATCH_COMMIT_HASHES", TEMPORARY_PATCH_FILE: "/tmp/test-merge-to-branch-tempfile-temporary-patch", + CLUSTERFUZZ_API_KEY_FILE: "/tmp/test-fake-cf-api-key", } @@ -350,7 +357,7 @@ def MakeStep(self): def RunStep(self, script=PushToTrunk, step_class=Step, args=None): """Convenience wrapper.""" - args = args or ["-m"] + args = args if args is not None else ["-m"] return script(TEST_CONFIG, self, self._state).RunSteps([step_class], args) def GitMock(self, cmd, args="", pipe=True): @@ -384,12 +391,20 @@ def ReadURL(self, url, params): else: return self._url_mock.Call("readurl", url) + def ReadClusterFuzzAPI(self, api_key, **params): + # TODO(machenbach): Use a mock for this and add a test that stops rolling + # due to clustefuzz results. + return [] + def Sleep(self, seconds): pass def GetDate(self): return "1999-07-31" + def GetUTCStamp(self): + return "100000" + def ExpectGit(self, *args): """Convenience wrapper.""" self._git_mock.Expect(*args) @@ -590,13 +605,19 @@ def testEditChangeLog(self): self.assertEquals("New\n Lines", FileToText(TEST_CONFIG[CHANGELOG_ENTRY_FILE])) + # Version on trunk: 3.22.4.0. Version on master (bleeding_edge): 3.22.6. + # Make sure that the increment is 3.22.7.0. 
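Aside: the RunStep change above (`args or ["-m"]` becoming `args if args is not None else ["-m"]`) fixes a truthiness pitfall: an intentionally empty argument list is falsy, so the old code silently replaced it with the default. A minimal illustration:

    def run(args=None):
        args = args if args is not None else ["-m"]
        return args

    assert run([]) == []          # empty list is preserved now
    assert run(None) == ["-m"]    # only None selects the default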
def testIncrementVersion(self): TEST_CONFIG[VERSION_FILE] = self.MakeEmptyTempFile() self.WriteFakeVersionFile() self._state["last_push_trunk"] = "hash1" + self._state["latest_build"] = "6" + self._state["latest_version"] = "3.22.6.0" self.ExpectGit([ - Git("checkout -f hash1 -- %s" % TEST_CONFIG[VERSION_FILE], "") + Git("checkout -f hash1 -- %s" % TEST_CONFIG[VERSION_FILE], ""), + Git("checkout -f svn/bleeding_edge -- %s" % TEST_CONFIG[VERSION_FILE], + "", cb=lambda: self.WriteFakeVersionFile(22, 6)), ]) self.ExpectReadline([ @@ -607,7 +628,7 @@ def testIncrementVersion(self): self.assertEquals("3", self._state["new_major"]) self.assertEquals("22", self._state["new_minor"]) - self.assertEquals("5", self._state["new_build"]) + self.assertEquals("7", self._state["new_build"]) self.assertEquals("0", self._state["new_patch"]) def _TestSquashCommits(self, change_log, expected_msg): @@ -734,6 +755,8 @@ def CheckSVNCommit(): Git("log -1 --format=%s hash2", "Version 3.4.5 (based on bleeding_edge revision r1234)\n"), Git("svn find-rev r1234", "hash3\n"), + Git("checkout -f svn/bleeding_edge -- %s" % TEST_CONFIG[VERSION_FILE], + "", cb=self.WriteFakeVersionFile), Git("checkout -f hash2 -- %s" % TEST_CONFIG[VERSION_FILE], "", cb=self.WriteFakeVersionFile), Git("log --format=%H hash3..push_hash", "rev1\n"), @@ -996,6 +1019,8 @@ def testAutoRollUpToDate(self): self.assertEquals(1, result) def testAutoRoll(self): + TEST_CONFIG[CLUSTERFUZZ_API_KEY_FILE] = self.MakeEmptyTempFile() + TextToFile("fake key", TEST_CONFIG[CLUSTERFUZZ_API_KEY_FILE]) self.ExpectReadURL([ URL("https://codereview.chromium.org/search", "owner=author%40chromium.org&limit=30&closed=3&format=json", @@ -1142,6 +1167,33 @@ def VerifySVNCommit(): MergeToBranch(TEST_CONFIG, self).Run(args) def testReleases(self): + tag_response_text = """ +------------------------------------------------------------------------ +r22631 | author1@chromium.org | 2014-07-28 02:05:29 +0200 (Mon, 28 Jul 2014) +Changed paths: + A /tags/3.28.43 (from /trunk:22630) + +Tagging version 3.28.43 +------------------------------------------------------------------------ +r22629 | author2@chromium.org | 2014-07-26 05:09:29 +0200 (Sat, 26 Jul 2014) +Changed paths: + A /tags/3.28.41 (from /branches/bleeding_edge:22626) + +Tagging version 3.28.41 +------------------------------------------------------------------------ +r22556 | author3@chromium.org | 2014-07-23 13:31:59 +0200 (Wed, 23 Jul 2014) +Changed paths: + A /tags/3.27.34.7 (from /branches/3.27:22555) + +Tagging version 3.27.34.7 +------------------------------------------------------------------------ +r22627 | author4@chromium.org | 2014-07-26 01:39:15 +0200 (Sat, 26 Jul 2014) +Changed paths: + A /tags/3.28.40 (from /branches/bleeding_edge:22624) + +Tagging version 3.28.40 +------------------------------------------------------------------------ +""" json_output = self.MakeEmptyTempFile() csv_output = self.MakeEmptyTempFile() TEST_CONFIG[VERSION_FILE] = self.MakeEmptyTempFile() @@ -1205,6 +1257,15 @@ def ResetDEPS(revision): Git("log -1 --format=%ci hash6", ""), Git("checkout -f HEAD -- %s" % TEST_CONFIG[VERSION_FILE], "", cb=ResetVersion(22, 5)), + Git("reset --hard svn/bleeding_edge", ""), + Git("log https://v8.googlecode.com/svn/tags -v --limit 20", + tag_response_text), + Git("svn find-rev r22626", "hash_22626"), + Git("svn find-rev hash_22626", "22626"), + Git("log -1 --format=%ci hash_22626", "01:23"), + Git("svn find-rev r22624", "hash_22624"), + Git("svn find-rev hash_22624", "22624"), + Git("log -1 
--format=%ci hash_22624", "02:34"), Git("status -s -uno", ""), Git("checkout -f master", ""), Git("pull", ""), @@ -1235,12 +1296,22 @@ def ResetDEPS(revision): Releases(TEST_CONFIG, self).Run(args) # Check expected output. - csv = ("3.22.3,trunk,345,4567,\r\n" + csv = ("3.28.41,bleeding_edge,22626,,\r\n" + "3.28.40,bleeding_edge,22624,,\r\n" + "3.22.3,trunk,345,4567,\r\n" "3.21.2,3.21,123,,\r\n" "3.3.1.1,3.3,234,,12\r\n") self.assertEquals(csv, FileToText(csv_output)) expected_json = [ + {"bleeding_edge": "22626", "patches_merged": "", "version": "3.28.41", + "chromium_revision": "", "branch": "bleeding_edge", "revision": "22626", + "review_link": "", "date": "01:23", "chromium_branch": "", + "revision_link": "https://code.google.com/p/v8/source/detail?r=22626"}, + {"bleeding_edge": "22624", "patches_merged": "", "version": "3.28.40", + "chromium_revision": "", "branch": "bleeding_edge", "revision": "22624", + "review_link": "", "date": "02:34", "chromium_branch": "", + "revision_link": "https://code.google.com/p/v8/source/detail?r=22624"}, {"bleeding_edge": "", "patches_merged": "", "version": "3.22.3", "chromium_revision": "4567", "branch": "trunk", "revision": "345", "review_link": "", "date": "", "chromium_branch": "7", @@ -1257,6 +1328,145 @@ def ResetDEPS(revision): self.assertEquals(expected_json, json.loads(FileToText(json_output))) + def testBumpUpVersion(self): + TEST_CONFIG[VERSION_FILE] = self.MakeEmptyTempFile() + self.WriteFakeVersionFile() + + def ResetVersion(minor, build, patch=0): + return lambda: self.WriteFakeVersionFile(minor=minor, + build=build, + patch=patch) + + self.ExpectGit([ + Git("status -s -uno", ""), + Git("checkout -f bleeding_edge", "", cb=ResetVersion(11, 4)), + Git("pull", ""), + Git("branch", ""), + Git("checkout -f bleeding_edge", ""), + Git("log -1 --format=%H", "latest_hash"), + Git("diff --name-only latest_hash latest_hash^", ""), + Git("checkout -f bleeding_edge", ""), + Git("log --format=%H --grep=\"^git-svn-id: [^@]*@12345 [A-Za-z0-9-]*$\"", + "lkgr_hash"), + Git("checkout -b auto-bump-up-version lkgr_hash", ""), + Git("checkout -f bleeding_edge", ""), + Git("branch", ""), + Git("diff --name-only lkgr_hash lkgr_hash^", ""), + Git("checkout -f master", "", cb=ResetVersion(11, 5)), + Git("pull", ""), + Git("checkout -b auto-bump-up-version bleeding_edge", "", + cb=ResetVersion(11, 4)), + Git("commit -am \"[Auto-roll] Bump up version to 3.11.6.0\n\n" + "TBR=author@chromium.org\"", ""), + Git("cl upload --send-mail --email \"author@chromium.org\" -f " + "--bypass-hooks", ""), + Git("cl dcommit -f --bypass-hooks", ""), + Git("checkout -f bleeding_edge", ""), + Git("branch", "auto-bump-up-version\n* bleeding_edge"), + Git("branch -D auto-bump-up-version", ""), + ]) + + self.ExpectReadURL([ + URL("https://v8-status.appspot.com/lkgr", "12345"), + URL("https://v8-status.appspot.com/current?format=json", + "{\"message\": \"Tree is open\"}"), + ]) + + BumpUpVersion(TEST_CONFIG, self).Run(["-a", "author@chromium.org"]) + + def testAutoTag(self): + TEST_CONFIG[VERSION_FILE] = self.MakeEmptyTempFile() + self.WriteFakeVersionFile() + + def ResetVersion(minor, build, patch=0): + return lambda: self.WriteFakeVersionFile(minor=minor, + build=build, + patch=patch) + + self.ExpectGit([ + Git("status -s -uno", ""), + Git("status -s -b -uno", "## some_branch\n"), + Git("svn fetch", ""), + Git("branch", " branch1\n* branch2\n"), + Git("checkout -f master", ""), + Git("svn rebase", ""), + Git("checkout -b %s" % TEST_CONFIG[BRANCHNAME], "", + cb=ResetVersion(4, 5)), + 
Git("branch -r", "svn/tags/3.4.2\nsvn/tags/3.2.1.0\nsvn/branches/3.4"), + Git("log --format=%H --grep=\"\\[Auto\\-roll\\] Bump up version to\"", + "hash125\nhash118\nhash111\nhash101"), + Git("checkout -f hash125 -- %s" % TEST_CONFIG[VERSION_FILE], "", + cb=ResetVersion(4, 4)), + Git("checkout -f HEAD -- %s" % TEST_CONFIG[VERSION_FILE], "", + cb=ResetVersion(4, 5)), + Git("checkout -f hash118 -- %s" % TEST_CONFIG[VERSION_FILE], "", + cb=ResetVersion(4, 3)), + Git("checkout -f HEAD -- %s" % TEST_CONFIG[VERSION_FILE], "", + cb=ResetVersion(4, 5)), + Git("checkout -f hash111 -- %s" % TEST_CONFIG[VERSION_FILE], "", + cb=ResetVersion(4, 2)), + Git("checkout -f HEAD -- %s" % TEST_CONFIG[VERSION_FILE], "", + cb=ResetVersion(4, 5)), + Git("svn find-rev hash118", "118"), + Git("svn find-rev hash125", "125"), + Git("svn find-rev r123", "hash123"), + Git("log -1 --format=%at hash123", "1"), + Git("reset --hard hash123", ""), + Git("svn tag 3.4.3 -m \"Tagging version 3.4.3\"", ""), + Git("checkout -f some_branch", ""), + Git("branch -D %s" % TEST_CONFIG[BRANCHNAME], ""), + ]) + + self.ExpectReadURL([ + URL("https://v8-status.appspot.com/revisions?format=json", + "[{\"revision\": \"126\", \"status\": true}," + "{\"revision\": \"123\", \"status\": true}," + "{\"revision\": \"112\", \"status\": true}]"), + ]) + + AutoTag(TEST_CONFIG, self).Run(["-a", "author@chromium.org"]) + + # Test that we bail out if the last change was a version change. + def testBumpUpVersionBailout1(self): + TEST_CONFIG[VERSION_FILE] = self.MakeEmptyTempFile() + self._state["latest"] = "latest_hash" + + self.ExpectGit([ + Git("diff --name-only latest_hash latest_hash^", + TEST_CONFIG[VERSION_FILE]), + ]) + + self.assertEquals(1, + self.RunStep(BumpUpVersion, LastChangeBailout, ["--dry_run"])) + + # Test that we bail out if the lkgr was a version change. + def testBumpUpVersionBailout2(self): + TEST_CONFIG[VERSION_FILE] = self.MakeEmptyTempFile() + self._state["lkgr"] = "lkgr_hash" + + self.ExpectGit([ + Git("diff --name-only lkgr_hash lkgr_hash^", TEST_CONFIG[VERSION_FILE]), + ]) + + self.assertEquals(1, + self.RunStep(BumpUpVersion, LKGRVersionUpToDateBailout, ["--dry_run"])) + + # Test that we bail out if the last version is already newer than the lkgr's + # version. 
+ def testBumpUpVersionBailout3(self): + TEST_CONFIG[VERSION_FILE] = self.MakeEmptyTempFile() + self._state["lkgr"] = "lkgr_hash" + self._state["lkgr_version"] = "3.22.4.0" + self._state["latest_version"] = "3.22.5.0" + + self.ExpectGit([ + Git("diff --name-only lkgr_hash lkgr_hash^", ""), + ]) + + self.assertEquals(1, + self.RunStep(BumpUpVersion, LKGRVersionUpToDateBailout, ["--dry_run"])) + + class SystemTest(unittest.TestCase): def testReload(self): step = MakeStep(step_class=PrepareChangeLog, number=0, state={}, config={}, diff --git a/deps/v8/tools/run-deopt-fuzzer.py b/deps/v8/tools/run-deopt-fuzzer.py index 21894ff5209..57cb6b278c5 100755 --- a/deps/v8/tools/run-deopt-fuzzer.py +++ b/deps/v8/tools/run-deopt-fuzzer.py @@ -319,8 +319,11 @@ def Main(): for mode in options.mode: for arch in options.arch: - code = Execute(arch, mode, args, options, suites, workspace) - exit_code = exit_code or code + try: + code = Execute(arch, mode, args, options, suites, workspace) + exit_code = exit_code or code + except KeyboardInterrupt: + return 2 return exit_code @@ -366,8 +369,12 @@ def Execute(arch, mode, args, options, suites, workspace): timeout, options.isolates, options.command_prefix, options.extra_flags, - False, - options.random_seed) + False, # Keep i18n on by default. + options.random_seed, + True, # No sorting of test cases. + 0, # Don't rerun failing tests. + 0, # No use of a rerun-failing-tests maximum. + False) # No predictable mode. # Find available test suites and read test cases from them. variables = { @@ -381,6 +388,7 @@ def Execute(arch, mode, args, options, suites, workspace): "no_snap": False, "simulator": utils.UseSimulator(arch), "system": utils.GuessOS(), + "tsan": False, } all_tests = [] num_tests = 0 @@ -409,17 +417,11 @@ def Execute(arch, mode, args, options, suites, workspace): print "No tests to run." return 0 - try: - print(">>> Collection phase") - progress_indicator = progress.PROGRESS_INDICATORS[options.progress]() - runner = execution.Runner(suites, progress_indicator, ctx) + print(">>> Collection phase") + progress_indicator = progress.PROGRESS_INDICATORS[options.progress]() + runner = execution.Runner(suites, progress_indicator, ctx) - exit_code = runner.Run(options.j) - if runner.terminate: - return exit_code - - except KeyboardInterrupt: - return 1 + exit_code = runner.Run(options.j) print(">>> Analysis phase") num_tests = 0 @@ -462,19 +464,12 @@ def Execute(arch, mode, args, options, suites, workspace): print "No tests to run." 
return 0 - try: - print(">>> Deopt fuzzing phase (%d test cases)" % num_tests) - progress_indicator = progress.PROGRESS_INDICATORS[options.progress]() - runner = execution.Runner(suites, progress_indicator, ctx) - - exit_code = runner.Run(options.j) - if runner.terminate: - return exit_code + print(">>> Deopt fuzzing phase (%d test cases)" % num_tests) + progress_indicator = progress.PROGRESS_INDICATORS[options.progress]() + runner = execution.Runner(suites, progress_indicator, ctx) - except KeyboardInterrupt: - return 1 - - return exit_code + code = runner.Run(options.j) + return exit_code or code if __name__ == "__main__": diff --git a/deps/v8/tools/run-tests.py b/deps/v8/tools/run-tests.py index 5cf49049f48..6e9f5549d81 100755 --- a/deps/v8/tools/run-tests.py +++ b/deps/v8/tools/run-tests.py @@ -50,7 +50,8 @@ ARCH_GUESS = utils.DefaultArch() -DEFAULT_TESTS = ["mjsunit", "fuzz-natives", "cctest", "message", "preparser"] +DEFAULT_TESTS = ["mjsunit", "fuzz-natives", "base-unittests", + "cctest", "compiler-unittests", "message", "preparser"] TIMEOUT_DEFAULT = 60 TIMEOUT_SCALEFACTOR = {"debug" : 4, "release" : 1 } @@ -59,9 +60,10 @@ VARIANT_FLAGS = { "default": [], "stress": ["--stress-opt", "--always-opt"], + "turbofan": ["--turbo-filter=*", "--always-opt"], "nocrankshaft": ["--nocrankshaft"]} -VARIANTS = ["default", "stress", "nocrankshaft"] +VARIANTS = ["default", "stress", "turbofan", "nocrankshaft"] MODE_FLAGS = { "debug" : ["--nohard-abort", "--nodead-code-elimination", @@ -80,11 +82,14 @@ "android_ia32", "arm", "ia32", + "x87", "mips", "mipsel", + "mips64el", "nacl_ia32", "nacl_x64", "x64", + "x32", "arm64"] # Double the timeout for these: SLOW_ARCHS = ["android_arm", @@ -93,8 +98,10 @@ "arm", "mips", "mipsel", + "mips64el", "nacl_ia32", "nacl_x64", + "x87", "arm64"] @@ -155,6 +162,9 @@ def BuildOptions(): result.add_option("--no-snap", "--nosnap", help='Test a build compiled without snapshot.', default=False, dest="no_snap", action="store_true") + result.add_option("--no-sorting", "--nosorting", + help="Don't sort tests according to duration of last run.", + default=False, dest="no_sorting", action="store_true") result.add_option("--no-stress", "--nostress", help="Don't run crankshaft --always-opt --stress-op test", default=False, dest="no_stress", action="store_true") @@ -165,6 +175,9 @@ def BuildOptions(): help="Comma-separated list of testing variants") result.add_option("--outdir", help="Base directory with compile output", default="out") + result.add_option("--predictable", + help="Compare output of several reruns of each test", + default=False, action="store_true") result.add_option("-p", "--progress", help=("The style of progress indicator" " (verbose, dots, color, mono)"), @@ -173,6 +186,15 @@ def BuildOptions(): help=("Quick check mode (skip slow/flaky tests)")) result.add_option("--report", help="Print a summary of the tests to be run", default=False, action="store_true") + result.add_option("--json-test-results", + help="Path to a file for storing json results.") + result.add_option("--rerun-failures-count", + help=("Number of times to rerun each failing test case. 
" + "Very slow tests will be rerun only once."), + default=0, type="int") + result.add_option("--rerun-failures-max", + help="Maximum number of failing test cases to rerun.", + default=100, type="int") result.add_option("--shard-count", help="Split testsuites into this number of shards", default=1, type="int") @@ -193,6 +215,9 @@ def BuildOptions(): default=False, action="store_true") result.add_option("-t", "--timeout", help="Timeout in seconds", default= -1, type="int") + result.add_option("--tsan", + help="Regard test expectations for TSAN", + default=False, action="store_true") result.add_option("-v", "--verbose", help="Verbose output", default=False, action="store_true") result.add_option("--valgrind", help="Run tests through valgrind", @@ -252,6 +277,12 @@ def ProcessOptions(options): if options.gc_stress: options.extra_flags += GC_STRESS_FLAGS + if options.asan: + options.extra_flags.append("--invoke-weak-callbacks") + + if options.tsan: + VARIANTS = ["default"] + if options.j == 0: options.j = multiprocessing.cpu_count() @@ -283,6 +314,11 @@ def excl(*args): options.flaky_tests = "skip" options.slow_tests = "skip" options.pass_fail_tests = "skip" + if options.predictable: + VARIANTS = ["default"] + options.extra_flags.append("--predictable") + options.extra_flags.append("--verify_predictable") + options.extra_flags.append("--no-inline-new") if not options.shell_dir: if options.shell: @@ -336,9 +372,8 @@ def Main(): workspace = os.path.abspath(join(os.path.dirname(sys.argv[0]), "..")) if not options.no_presubmit: print ">>> running presubmit tests" - code = subprocess.call( + exit_code = subprocess.call( [sys.executable, join(workspace, "tools", "presubmit.py")]) - exit_code = code suite_paths = utils.GetSuitePaths(join(workspace, "test")) @@ -399,13 +434,22 @@ def Execute(arch, mode, args, options, suites, workspace): timeout = TIMEOUT_DEFAULT; timeout *= TIMEOUT_SCALEFACTOR[mode] + + if options.predictable: + # Predictable mode is slower. + timeout *= 2 + ctx = context.Context(arch, mode, shell_dir, mode_flags, options.verbose, timeout, options.isolates, options.command_prefix, options.extra_flags, options.no_i18n, - options.random_seed) + options.random_seed, + options.no_sorting, + options.rerun_failures_count, + options.rerun_failures_max, + options.predictable) # TODO(all): Combine "simulator" and "simulator_run". simulator_run = not options.dont_skip_simulator_slow_tests and \ @@ -423,6 +467,7 @@ def Execute(arch, mode, args, options, suites, workspace): "simulator_run": simulator_run, "simulator": utils.UseSimulator(arch), "system": utils.GuessOS(), + "tsan": options.tsan, } all_tests = [] num_tests = 0 @@ -459,44 +504,42 @@ def Execute(arch, mode, args, options, suites, workspace): return 0 # Run the tests, either locally or distributed on the network. 
- try: - start_time = time.time() - progress_indicator = progress.PROGRESS_INDICATORS[options.progress]() - if options.junitout: - progress_indicator = progress.JUnitTestProgressIndicator( - progress_indicator, options.junitout, options.junittestsuite) - - run_networked = not options.no_network - if not run_networked: - print("Network distribution disabled, running tests locally.") - elif utils.GuessOS() != "linux": - print("Network distribution is only supported on Linux, sorry!") + start_time = time.time() + progress_indicator = progress.PROGRESS_INDICATORS[options.progress]() + if options.junitout: + progress_indicator = progress.JUnitTestProgressIndicator( + progress_indicator, options.junitout, options.junittestsuite) + if options.json_test_results: + progress_indicator = progress.JsonTestProgressIndicator( + progress_indicator, options.json_test_results, arch, mode) + + run_networked = not options.no_network + if not run_networked: + print("Network distribution disabled, running tests locally.") + elif utils.GuessOS() != "linux": + print("Network distribution is only supported on Linux, sorry!") + run_networked = False + peers = [] + if run_networked: + peers = network_execution.GetPeers() + if not peers: + print("No connection to distribution server; running tests locally.") run_networked = False - peers = [] - if run_networked: - peers = network_execution.GetPeers() - if not peers: - print("No connection to distribution server; running tests locally.") - run_networked = False - elif len(peers) == 1: - print("No other peers on the network; running tests locally.") - run_networked = False - elif num_tests <= 100: - print("Less than 100 tests, running them locally.") - run_networked = False - - if run_networked: - runner = network_execution.NetworkedRunner(suites, progress_indicator, - ctx, peers, workspace) - else: - runner = execution.Runner(suites, progress_indicator, ctx) - - exit_code = runner.Run(options.j) - if runner.terminate: - return exit_code - overall_duration = time.time() - start_time - except KeyboardInterrupt: - raise + elif len(peers) == 1: + print("No other peers on the network; running tests locally.") + run_networked = False + elif num_tests <= 100: + print("Less than 100 tests, running them locally.") + run_networked = False + + if run_networked: + runner = network_execution.NetworkedRunner(suites, progress_indicator, + ctx, peers, workspace) + else: + runner = execution.Runner(suites, progress_indicator, ctx) + + exit_code = runner.Run(options.j) + overall_duration = time.time() - start_time if options.time: verbose.PrintTestDurations(suites, overall_duration) diff --git a/deps/v8/tools/run.py b/deps/v8/tools/run.py new file mode 100755 index 00000000000..5a656e19b59 --- /dev/null +++ b/deps/v8/tools/run.py @@ -0,0 +1,12 @@ +#!/usr/bin/env python +# Copyright 2014 the V8 project authors. All rights reserved. +# Use of this source code is governed by a BSD-style license that can be +# found in the LICENSE file. + +"""This program wraps an arbitrary command since gn currently can only execute +scripts.""" + +import subprocess +import sys + +sys.exit(subprocess.call(sys.argv[1:])) diff --git a/deps/v8/tools/run_benchmarks.py b/deps/v8/tools/run_benchmarks.py new file mode 100755 index 00000000000..d6e9145dace --- /dev/null +++ b/deps/v8/tools/run_benchmarks.py @@ -0,0 +1,421 @@ +#!/usr/bin/env python +# Copyright 2014 the V8 project authors. All rights reserved. +# Use of this source code is governed by a BSD-style license that can be +# found in the LICENSE file. 
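Usage note for the tools/run.py wrapper added above: it forwards its entire argument vector via subprocess.call() and propagates the child's exit status through sys.exit(), so a gn script action can wrap any command. A hypothetical invocation:

  $ python tools/run.py echo hello   # prints "hello", exits with echo's status 0
  $ python tools/run.py false        # exits with status 1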
+ +""" +Performance runner for d8. + +Call e.g. with tools/run-benchmarks.py --arch ia32 some_suite.json + +The suite json format is expected to be: +{ + "path": <relative path chunks to benchmark resources and main file>, + "name": <optional suite name, file name is default>, + "archs": [<architecture name for which this suite is run>, ...], + "binary": <name of binary to run, default "d8">, + "flags": [<flag to d8>, ...], + "run_count": <how often will this suite run (optional)>, + "run_count_XXX": <how often will this suite run for arch XXX (optional)>, + "resources": [<js file to be loaded before main>, ...] + "main": <main js benchmark runner file>, + "results_regexp": <optional regexp>, + "results_processor": <optional python results processor script>, + "units": <the unit specification for the performance dashboard>, + "benchmarks": [ + { + "name": <name of the benchmark>, + "results_regexp": <optional more specific regexp>, + "results_processor": <optional python results processor script>, + "units": <the unit specification for the performance dashboard>, + }, ... + ] +} + +The benchmarks field can also nest other suites in arbitrary depth. A suite +with a "main" file is a leaf suite that can contain one more level of +benchmarks. + +A suite's results_regexp is expected to have one string place holder +"%s" for the benchmark name. A benchmark's results_regexp overwrites suite +defaults. + +A suite's results_processor may point to an optional python script. If +specified, it is called after running the benchmarks like this (with a path +relatve to the suite level's path): +<results_processor file> <same flags as for d8> <suite level name> <output> + +The <output> is a temporary file containing d8 output. The results_regexp will +be applied to the output of this script. + +A suite without "benchmarks" is considered a benchmark itself. + +Full example (suite with one runner): +{ + "path": ["."], + "flags": ["--expose-gc"], + "archs": ["ia32", "x64"], + "run_count": 5, + "run_count_ia32": 3, + "main": "run.js", + "results_regexp": "^%s: (.+)$", + "units": "score", + "benchmarks": [ + {"name": "Richards"}, + {"name": "DeltaBlue"}, + {"name": "NavierStokes", + "results_regexp": "^NavierStokes: (.+)$"} + ] +} + +Full example (suite with several runners): +{ + "path": ["."], + "flags": ["--expose-gc"], + "archs": ["ia32", "x64"], + "run_count": 5, + "units": "score", + "benchmarks": [ + {"name": "Richards", + "path": ["richards"], + "main": "run.js", + "run_count": 3, + "results_regexp": "^Richards: (.+)$"}, + {"name": "NavierStokes", + "path": ["navier_stokes"], + "main": "run.js", + "results_regexp": "^NavierStokes: (.+)$"} + ] +} + +Path pieces are concatenated. D8 is always run with the suite's path as cwd. 
+""" + +import json +import optparse +import os +import re +import sys + +from testrunner.local import commands +from testrunner.local import utils + +ARCH_GUESS = utils.DefaultArch() +SUPPORTED_ARCHS = ["android_arm", + "android_arm64", + "android_ia32", + "arm", + "ia32", + "mips", + "mipsel", + "nacl_ia32", + "nacl_x64", + "x64", + "arm64"] + + +class Results(object): + """Place holder for result traces.""" + def __init__(self, traces=None, errors=None): + self.traces = traces or [] + self.errors = errors or [] + + def ToDict(self): + return {"traces": self.traces, "errors": self.errors} + + def WriteToFile(self, file_name): + with open(file_name, "w") as f: + f.write(json.dumps(self.ToDict())) + + def __add__(self, other): + self.traces += other.traces + self.errors += other.errors + return self + + def __str__(self): # pragma: no cover + return str(self.ToDict()) + + +class Node(object): + """Represents a node in the benchmark suite tree structure.""" + def __init__(self, *args): + self._children = [] + + def AppendChild(self, child): + self._children.append(child) + + +class DefaultSentinel(Node): + """Fake parent node with all default values.""" + def __init__(self): + super(DefaultSentinel, self).__init__() + self.binary = "d8" + self.run_count = 10 + self.path = [] + self.graphs = [] + self.flags = [] + self.resources = [] + self.results_regexp = None + self.stddev_regexp = None + self.units = "score" + + +class Graph(Node): + """Represents a benchmark suite definition. + + Can either be a leaf or an inner node that provides default values. + """ + def __init__(self, suite, parent, arch): + super(Graph, self).__init__() + self._suite = suite + + assert isinstance(suite.get("path", []), list) + assert isinstance(suite["name"], basestring) + assert isinstance(suite.get("flags", []), list) + assert isinstance(suite.get("resources", []), list) + + # Accumulated values. + self.path = parent.path[:] + suite.get("path", []) + self.graphs = parent.graphs[:] + [suite["name"]] + self.flags = parent.flags[:] + suite.get("flags", []) + self.resources = parent.resources[:] + suite.get("resources", []) + + # Descrete values (with parent defaults). + self.binary = suite.get("binary", parent.binary) + self.run_count = suite.get("run_count", parent.run_count) + self.run_count = suite.get("run_count_%s" % arch, self.run_count) + self.units = suite.get("units", parent.units) + + # A regular expression for results. If the parent graph provides a + # regexp and the current suite has none, a string place holder for the + # suite name is expected. + # TODO(machenbach): Currently that makes only sense for the leaf level. + # Multiple place holders for multiple levels are not supported. + if parent.results_regexp: + regexp_default = parent.results_regexp % re.escape(suite["name"]) + else: + regexp_default = None + self.results_regexp = suite.get("results_regexp", regexp_default) + + # A similar regular expression for the standard deviation (optional). + if parent.stddev_regexp: + stddev_default = parent.stddev_regexp % re.escape(suite["name"]) + else: + stddev_default = None + self.stddev_regexp = suite.get("stddev_regexp", stddev_default) + + +class Trace(Graph): + """Represents a leaf in the benchmark suite tree structure. + + Handles collection of measurements. 
+ """ + def __init__(self, suite, parent, arch): + super(Trace, self).__init__(suite, parent, arch) + assert self.results_regexp + self.results = [] + self.errors = [] + self.stddev = "" + + def ConsumeOutput(self, stdout): + try: + self.results.append( + re.search(self.results_regexp, stdout, re.M).group(1)) + except: + self.errors.append("Regexp \"%s\" didn't match for benchmark %s." + % (self.results_regexp, self.graphs[-1])) + + try: + if self.stddev_regexp and self.stddev: + self.errors.append("Benchmark %s should only run once since a stddev " + "is provided by the benchmark." % self.graphs[-1]) + if self.stddev_regexp: + self.stddev = re.search(self.stddev_regexp, stdout, re.M).group(1) + except: + self.errors.append("Regexp \"%s\" didn't match for benchmark %s." + % (self.stddev_regexp, self.graphs[-1])) + + def GetResults(self): + return Results([{ + "graphs": self.graphs, + "units": self.units, + "results": self.results, + "stddev": self.stddev, + }], self.errors) + + +class Runnable(Graph): + """Represents a runnable benchmark suite definition (i.e. has a main file). + """ + @property + def main(self): + return self._suite["main"] + + def ChangeCWD(self, suite_path): + """Changes the cwd to to path defined in the current graph. + + The benchmarks are supposed to be relative to the suite configuration. + """ + suite_dir = os.path.abspath(os.path.dirname(suite_path)) + bench_dir = os.path.normpath(os.path.join(*self.path)) + os.chdir(os.path.join(suite_dir, bench_dir)) + + def GetCommand(self, shell_dir): + # TODO(machenbach): This requires +.exe if run on windows. + return ( + [os.path.join(shell_dir, self.binary)] + + self.flags + + self.resources + + [self.main] + ) + + def Run(self, runner): + """Iterates over several runs and handles the output for all traces.""" + for stdout in runner(): + for trace in self._children: + trace.ConsumeOutput(stdout) + return reduce(lambda r, t: r + t.GetResults(), self._children, Results()) + + +class RunnableTrace(Trace, Runnable): + """Represents a runnable benchmark suite definition that is a leaf.""" + def __init__(self, suite, parent, arch): + super(RunnableTrace, self).__init__(suite, parent, arch) + + def Run(self, runner): + """Iterates over several runs and handles the output.""" + for stdout in runner(): + self.ConsumeOutput(stdout) + return self.GetResults() + + +def MakeGraph(suite, arch, parent): + """Factory method for making graph objects.""" + if isinstance(parent, Runnable): + # Below a runnable can only be traces. + return Trace(suite, parent, arch) + elif suite.get("main"): + # A main file makes this graph runnable. + if suite.get("benchmarks"): + # This graph has subbenchmarks (traces). + return Runnable(suite, parent, arch) + else: + # This graph has no subbenchmarks, it's a leaf. + return RunnableTrace(suite, parent, arch) + elif suite.get("benchmarks"): + # This is neither a leaf nor a runnable. + return Graph(suite, parent, arch) + else: # pragma: no cover + raise Exception("Invalid benchmark suite configuration.") + + +def BuildGraphs(suite, arch, parent=None): + """Builds a tree structure of graph objects that corresponds to the suite + configuration. + """ + parent = parent or DefaultSentinel() + + # TODO(machenbach): Implement notion of cpu type? 
+ if arch not in suite.get("archs", ["ia32", "x64"]): + return None + + graph = MakeGraph(suite, arch, parent) + for subsuite in suite.get("benchmarks", []): + BuildGraphs(subsuite, arch, graph) + parent.AppendChild(graph) + return graph + + +def FlattenRunnables(node): + """Generator that traverses the tree structure and iterates over all + runnables. + """ + if isinstance(node, Runnable): + yield node + elif isinstance(node, Node): + for child in node._children: + for result in FlattenRunnables(child): + yield result + else: # pragma: no cover + raise Exception("Invalid benchmark suite configuration.") + + +# TODO: Implement results_processor. +def Main(args): + parser = optparse.OptionParser() + parser.add_option("--arch", + help=("The architecture to run tests for, " + "'auto' or 'native' for auto-detect"), + default="x64") + parser.add_option("--buildbot", + help="Adapt to path structure used on buildbots", + default=False, action="store_true") + parser.add_option("--json-test-results", + help="Path to a file for storing json results.") + parser.add_option("--outdir", help="Base directory with compile output", + default="out") + (options, args) = parser.parse_args(args) + + if len(args) == 0: # pragma: no cover + parser.print_help() + return 1 + + if options.arch in ["auto", "native"]: # pragma: no cover + options.arch = ARCH_GUESS + + if not options.arch in SUPPORTED_ARCHS: # pragma: no cover + print "Unknown architecture %s" % options.arch + return 1 + + workspace = os.path.abspath(os.path.join(os.path.dirname(__file__), "..")) + + if options.buildbot: + shell_dir = os.path.join(workspace, options.outdir, "Release") + else: + shell_dir = os.path.join(workspace, options.outdir, + "%s.release" % options.arch) + + results = Results() + for path in args: + path = os.path.abspath(path) + + if not os.path.exists(path): # pragma: no cover + results.errors.append("Benchmark file %s does not exist." % path) + continue + + with open(path) as f: + suite = json.loads(f.read()) + + # If no name is given, default to the file name without .json. + suite.setdefault("name", os.path.splitext(os.path.basename(path))[0]) + + for runnable in FlattenRunnables(BuildGraphs(suite, options.arch)): + print ">>> Running suite: %s" % "/".join(runnable.graphs) + runnable.ChangeCWD(path) + + def Runner(): + """Output generator that reruns several times.""" + for i in xrange(0, max(1, runnable.run_count)): + # TODO(machenbach): Make timeout configurable in the suite definition. + # Allow timeout per arch like with run_count per arch. + output = commands.Execute(runnable.GetCommand(shell_dir), timeout=60) + print ">>> Stdout (#%d):" % (i + 1) + print output.stdout + if output.stderr: # pragma: no cover + # Print stderr for debugging. + print ">>> Stderr (#%d):" % (i + 1) + print output.stderr + yield output.stdout + + # Let runnable iterate over all runs and handle output. 
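Aside: Results.__add__ (defined at the top of this file) mutates and returns self, so folding traces with reduce() accumulates everything into a single Results object; the `results += runnable.Run(Runner)` line just below performs that folding once per suite. A minimal sketch, assuming the Results class above:

  r = Results()
  r += Results(traces=[{"graphs": ["suite", "Richards"], "units": "score",
                        "results": ["1234"], "stddev": ""}])
  r += Results(errors=["some regexp didn't match"])
  # r.ToDict() now carries one trace and one error.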
+ results += runnable.Run(Runner) + + if options.json_test_results: + results.WriteToFile(options.json_test_results) + else: # pragma: no cover + print results + + return min(1, len(results.errors)) + +if __name__ == "__main__": # pragma: no cover + sys.exit(Main(sys.argv[1:])) diff --git a/deps/v8/tools/testrunner/local/commands.py b/deps/v8/tools/testrunner/local/commands.py index 4f3dc51e02b..d6445d0c7a6 100644 --- a/deps/v8/tools/testrunner/local/commands.py +++ b/deps/v8/tools/testrunner/local/commands.py @@ -64,49 +64,46 @@ def Win32SetErrorMode(mode): def RunProcess(verbose, timeout, args, **rest): - try: - if verbose: print "#", " ".join(args) - popen_args = args - prev_error_mode = SEM_INVALID_VALUE - if utils.IsWindows(): - popen_args = subprocess.list2cmdline(args) - # Try to change the error mode to avoid dialogs on fatal errors. Don't - # touch any existing error mode flags by merging the existing error mode. - # See http://blogs.msdn.com/oldnewthing/archive/2004/07/27/198410.aspx. - error_mode = SEM_NOGPFAULTERRORBOX - prev_error_mode = Win32SetErrorMode(error_mode) - Win32SetErrorMode(error_mode | prev_error_mode) - process = subprocess.Popen( - shell=utils.IsWindows(), - args=popen_args, - **rest - ) - if (utils.IsWindows() and prev_error_mode != SEM_INVALID_VALUE): - Win32SetErrorMode(prev_error_mode) - # Compute the end time - if the process crosses this limit we - # consider it timed out. - if timeout is None: end_time = None - else: end_time = time.time() + timeout - timed_out = False - # Repeatedly check the exit code from the process in a - # loop and keep track of whether or not it times out. - exit_code = None - sleep_time = INITIAL_SLEEP_TIME - while exit_code is None: - if (not end_time is None) and (time.time() >= end_time): - # Kill the process and wait for it to exit. - KillProcessWithID(process.pid) - exit_code = process.wait() - timed_out = True - else: - exit_code = process.poll() - time.sleep(sleep_time) - sleep_time = sleep_time * SLEEP_TIME_FACTOR - if sleep_time > MAX_SLEEP_TIME: - sleep_time = MAX_SLEEP_TIME - return (exit_code, timed_out) - except KeyboardInterrupt: - raise + if verbose: print "#", " ".join(args) + popen_args = args + prev_error_mode = SEM_INVALID_VALUE + if utils.IsWindows(): + popen_args = subprocess.list2cmdline(args) + # Try to change the error mode to avoid dialogs on fatal errors. Don't + # touch any existing error mode flags by merging the existing error mode. + # See http://blogs.msdn.com/oldnewthing/archive/2004/07/27/198410.aspx. + error_mode = SEM_NOGPFAULTERRORBOX + prev_error_mode = Win32SetErrorMode(error_mode) + Win32SetErrorMode(error_mode | prev_error_mode) + process = subprocess.Popen( + shell=utils.IsWindows(), + args=popen_args, + **rest + ) + if (utils.IsWindows() and prev_error_mode != SEM_INVALID_VALUE): + Win32SetErrorMode(prev_error_mode) + # Compute the end time - if the process crosses this limit we + # consider it timed out. + if timeout is None: end_time = None + else: end_time = time.time() + timeout + timed_out = False + # Repeatedly check the exit code from the process in a + # loop and keep track of whether or not it times out. + exit_code = None + sleep_time = INITIAL_SLEEP_TIME + while exit_code is None: + if (not end_time is None) and (time.time() >= end_time): + # Kill the process and wait for it to exit. 
+ KillProcessWithID(process.pid) + exit_code = process.wait() + timed_out = True + else: + exit_code = process.poll() + time.sleep(sleep_time) + sleep_time = sleep_time * SLEEP_TIME_FACTOR + if sleep_time > MAX_SLEEP_TIME: + sleep_time = MAX_SLEEP_TIME + return (exit_code, timed_out) def PrintError(string): @@ -142,11 +139,9 @@ def Execute(args, verbose=False, timeout=None): stdout=fd_out, stderr=fd_err ) - except KeyboardInterrupt: - raise - except: - raise finally: + # TODO(machenbach): A keyboard interrupt before the assignment to + # fd_out|err can lead to reference errors here. os.close(fd_out) os.close(fd_err) out = file(outname).read() diff --git a/deps/v8/tools/testrunner/local/execution.py b/deps/v8/tools/testrunner/local/execution.py index f4a40204e48..36ce7be83f6 100644 --- a/deps/v8/tools/testrunner/local/execution.py +++ b/deps/v8/tools/testrunner/local/execution.py @@ -26,19 +26,16 @@ # OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. -import multiprocessing import os -import threading +import shutil import time +from pool import Pool from . import commands +from . import perfdata from . import utils -BREAK_NOW = -1 -EXCEPTION = -2 - - class Job(object): def __init__(self, command, dep_command, test_id, timeout, verbose): self.command = command @@ -49,29 +46,31 @@ def __init__(self, command, dep_command, test_id, timeout, verbose): def RunTest(job): - try: - start_time = time.time() - if job.dep_command is not None: - dep_output = commands.Execute(job.dep_command, job.verbose, job.timeout) - # TODO(jkummerow): We approximate the test suite specific function - # IsFailureOutput() by just checking the exit code here. Currently - # only cctests define dependencies, for which this simplification is - # correct. - if dep_output.exit_code != 0: - return (job.id, dep_output, time.time() - start_time) - output = commands.Execute(job.command, job.verbose, job.timeout) - return (job.id, output, time.time() - start_time) - except KeyboardInterrupt: - return (-1, BREAK_NOW, 0) - except Exception, e: - print(">>> EXCEPTION: %s" % e) - return (-1, EXCEPTION, 0) - + start_time = time.time() + if job.dep_command is not None: + dep_output = commands.Execute(job.dep_command, job.verbose, job.timeout) + # TODO(jkummerow): We approximate the test suite specific function + # IsFailureOutput() by just checking the exit code here. Currently + # only cctests define dependencies, for which this simplification is + # correct. 
+ if dep_output.exit_code != 0: + return (job.id, dep_output, time.time() - start_time) + output = commands.Execute(job.command, job.verbose, job.timeout) + return (job.id, output, time.time() - start_time) class Runner(object): def __init__(self, suites, progress_indicator, context): + self.datapath = os.path.join("out", "testrunner_data") + self.perf_data_manager = perfdata.PerfDataManager(self.datapath) + self.perfdata = self.perf_data_manager.GetStore(context.arch, context.mode) + self.perf_failures = False + self.printed_allocations = False self.tests = [ t for s in suites for t in s.tests ] + if not context.no_sorting: + for t in self.tests: + t.duration = self.perfdata.FetchPerfData(t) or 1.0 + self.tests.sort(key=lambda t: t.duration, reverse=True) self._CommonInit(len(self.tests), progress_indicator, context) def _CommonInit(self, num_tests, progress_indicator, context): @@ -83,8 +82,119 @@ def _CommonInit(self, num_tests, progress_indicator, context): self.remaining = num_tests self.failed = [] self.crashed = 0 - self.terminate = False - self.lock = threading.Lock() + self.reran_tests = 0 + + def _RunPerfSafe(self, fun): + try: + fun() + except Exception, e: + print("PerfData exception: %s" % e) + self.perf_failures = True + + def _GetJob(self, test): + command = self.GetCommand(test) + timeout = self.context.timeout + if ("--stress-opt" in test.flags or + "--stress-opt" in self.context.mode_flags or + "--stress-opt" in self.context.extra_flags): + timeout *= 4 + if test.dependency is not None: + dep_command = [ c.replace(test.path, test.dependency) for c in command ] + else: + dep_command = None + return Job(command, dep_command, test.id, timeout, self.context.verbose) + + def _MaybeRerun(self, pool, test): + if test.run <= self.context.rerun_failures_count: + # Possibly rerun this test if its run count is below the maximum per + # test. <= as the flag controls reruns not including the first run. + if test.run == 1: + # Count the overall number of reran tests on the first rerun. + if self.reran_tests < self.context.rerun_failures_max: + self.reran_tests += 1 + else: + # Don't rerun this if the overall number of rerun tests has been + # reached. + return + if test.run >= 2 and test.duration > self.context.timeout / 20.0: + # Rerun slow tests at most once. + return + + # Rerun this test. + test.duration = None + test.output = None + test.run += 1 + pool.add([self._GetJob(test)]) + self.remaining += 1 + + def _ProcessTestNormal(self, test, result, pool): + self.indicator.AboutToRun(test) + test.output = result[1] + test.duration = result[2] + has_unexpected_output = test.suite.HasUnexpectedOutput(test) + if has_unexpected_output: + self.failed.append(test) + if test.output.HasCrashed(): + self.crashed += 1 + else: + self.succeeded += 1 + self.remaining -= 1 + # For the indicator, everything that happens after the first run is treated + # as unexpected even if it flakily passes in order to include it in the + # output. + self.indicator.HasRun(test, has_unexpected_output or test.run > 1) + if has_unexpected_output: + # Rerun test failures after the indicator has processed the results. + self._MaybeRerun(pool, test) + # Update the perf database if the test succeeded. 
+ return not has_unexpected_output + + def _ProcessTestPredictable(self, test, result, pool): + def HasDifferentAllocations(output1, output2): + def AllocationStr(stdout): + for line in reversed((stdout or "").splitlines()): + if line.startswith("### Allocations = "): + self.printed_allocations = True + return line + return "" + return (AllocationStr(output1.stdout) != AllocationStr(output2.stdout)) + + # Always pass the test duration for the database update. + test.duration = result[2] + if test.run == 1 and result[1].HasTimedOut(): + # If we get a timeout in the first run, we are already in an + # unpredictable state. Just report it as a failure and don't rerun. + self.indicator.AboutToRun(test) + test.output = result[1] + self.remaining -= 1 + self.failed.append(test) + self.indicator.HasRun(test, True) + if test.run > 1 and HasDifferentAllocations(test.output, result[1]): + # From the second run on, check for different allocations. If a + # difference is found, call the indicator twice to report both tests. + # All runs of each test are counted as one for the statistic. + self.indicator.AboutToRun(test) + self.remaining -= 1 + self.failed.append(test) + self.indicator.HasRun(test, True) + self.indicator.AboutToRun(test) + test.output = result[1] + self.indicator.HasRun(test, True) + elif test.run >= 3: + # No difference on the third run -> report a success. + self.indicator.AboutToRun(test) + self.remaining -= 1 + self.succeeded += 1 + test.output = result[1] + self.indicator.HasRun(test, False) + else: + # No difference yet and less than three runs -> add another run and + # remember the output for comparison. + test.run += 1 + test.output = result[1] + pool.add([self._GetJob(test)]) + # Always update the perf database. + return True def Run(self, jobs): self.indicator.Starting() @@ -95,71 +205,46 @@ def Run(self, jobs): return 0 def _RunInternal(self, jobs): - pool = multiprocessing.Pool(processes=jobs) + pool = Pool(jobs) test_map = {} + # TODO(machenbach): Instead of filling the queue completely before + # pool.imap_unordered, make this a generator that already starts testing + # while the queue is filled. queue = [] queued_exception = None for test in self.tests: assert test.id >= 0 test_map[test.id] = test try: - command = self.GetCommand(test) + queue.append([self._GetJob(test)]) except Exception, e: # If this failed, save the exception and re-raise it later (after # all other tests have had a chance to run). 
queued_exception = e continue - timeout = self.context.timeout - if ("--stress-opt" in test.flags or - "--stress-opt" in self.context.mode_flags or - "--stress-opt" in self.context.extra_flags): - timeout *= 4 - if test.dependency is not None: - dep_command = [ c.replace(test.path, test.dependency) for c in command ] - else: - dep_command = None - job = Job(command, dep_command, test.id, timeout, self.context.verbose) - queue.append(job) try: - kChunkSize = 1 - it = pool.imap_unordered(RunTest, queue, kChunkSize) + it = pool.imap_unordered(RunTest, queue) for result in it: - test_id = result[0] - if test_id < 0: - if result[1] == BREAK_NOW: - self.terminate = True - else: - continue - if self.terminate: - pool.terminate() - pool.join() - raise BreakNowException("User pressed Ctrl+C or IO went wrong") - test = test_map[test_id] - self.indicator.AboutToRun(test) - test.output = result[1] - test.duration = result[2] - has_unexpected_output = test.suite.HasUnexpectedOutput(test) - if has_unexpected_output: - self.failed.append(test) - if test.output.HasCrashed(): - self.crashed += 1 + test = test_map[result[0]] + if self.context.predictable: + update_perf = self._ProcessTestPredictable(test, result, pool) else: - self.succeeded += 1 - self.remaining -= 1 - self.indicator.HasRun(test, has_unexpected_output) - except KeyboardInterrupt: - pool.terminate() - pool.join() - raise - except Exception, e: - print("Exception: %s" % e) + update_perf = self._ProcessTestNormal(test, result, pool) + if update_perf: + self._RunPerfSafe(lambda: self.perfdata.UpdatePerfData(test)) + finally: pool.terminate() - pool.join() - raise + self._RunPerfSafe(lambda: self.perf_data_manager.close()) + if self.perf_failures: + # Nuke perf data in case of failures. This might not work on windows as + # some files might still be open. + print "Deleting perf test data due to db corruption." + shutil.rmtree(self.datapath) if queued_exception: raise queued_exception - return + # Make sure that any allocations were printed in predictable mode. + assert not self.context.predictable or self.printed_allocations def GetCommand(self, test): d8testflag = [] diff --git a/deps/v8/tools/testrunner/network/perfdata.py b/deps/v8/tools/testrunner/local/perfdata.py similarity index 100% rename from deps/v8/tools/testrunner/network/perfdata.py rename to deps/v8/tools/testrunner/local/perfdata.py diff --git a/deps/v8/tools/testrunner/local/pool.py b/deps/v8/tools/testrunner/local/pool.py new file mode 100644 index 00000000000..602a2d4b309 --- /dev/null +++ b/deps/v8/tools/testrunner/local/pool.py @@ -0,0 +1,146 @@ +#!/usr/bin/env python +# Copyright 2014 the V8 project authors. All rights reserved. +# Use of this source code is governed by a BSD-style license that can be +# found in the LICENSE file. + +from multiprocessing import Event, Process, Queue + +class NormalResult(): + def __init__(self, result): + self.result = result + self.exception = False + self.break_now = False + + +class ExceptionResult(): + def __init__(self): + self.exception = True + self.break_now = False + + +class BreakResult(): + def __init__(self): + self.exception = False + self.break_now = True + + +def Worker(fn, work_queue, done_queue, done): + """Worker to be run in a child process. + The worker stops on two conditions. 1. When the poison pill "STOP" is + reached or 2. 
when the event "done" is set.""" + try: + for args in iter(work_queue.get, "STOP"): + if done.is_set(): + break + try: + done_queue.put(NormalResult(fn(*args))) + except Exception, e: + print(">>> EXCEPTION: %s" % e) + done_queue.put(ExceptionResult()) + except KeyboardInterrupt: + done_queue.put(BreakResult()) + + +class Pool(): + """Distributes tasks to a number of worker processes. + New tasks can be added dynamically even after the workers have been started. + Requirement: Tasks can only be added from the parent process, e.g. while + consuming the results generator.""" + + # Factor to calculate the maximum number of items in the work/done queue. + # Necessary to not overflow the queue's pipe if a keyboard interrupt happens. + BUFFER_FACTOR = 4 + + def __init__(self, num_workers): + self.num_workers = num_workers + self.processes = [] + self.terminated = False + + # Invariant: count >= #work_queue + #done_queue. It is greater when a + # worker takes an item from the work_queue and before the result is + # submitted to the done_queue. It is equal when no worker is working, + # e.g. when all workers have finished, and when no results are processed. + # Count is only accessed by the parent process. Only the parent process is + # allowed to remove items from the done_queue and to add items to the + # work_queue. + self.count = 0 + self.work_queue = Queue() + self.done_queue = Queue() + self.done = Event() + + def imap_unordered(self, fn, gen): + """Maps function "fn" to items in generator "gen" on the worker processes + in an arbitrary order. The items are expected to be lists of arguments to + the function. Returns a results iterator.""" + try: + gen = iter(gen) + self.advance = self._advance_more + + for w in xrange(self.num_workers): + p = Process(target=Worker, args=(fn, + self.work_queue, + self.done_queue, + self.done)) + self.processes.append(p) + p.start() + + self.advance(gen) + while self.count > 0: + result = self.done_queue.get() + self.count -= 1 + if result.exception: + # Ignore items with unexpected exceptions. + continue + elif result.break_now: + # A keyboard interrupt happened in one of the worker processes. + raise KeyboardInterrupt + else: + yield result.result + self.advance(gen) + finally: + self.terminate() + + def _advance_more(self, gen): + while self.count < self.num_workers * self.BUFFER_FACTOR: + try: + self.work_queue.put(gen.next()) + self.count += 1 + except StopIteration: + self.advance = self._advance_empty + break + + def _advance_empty(self, gen): + pass + + def add(self, args): + """Adds an item to the work queue. Can be called dynamically while + processing the results from imap_unordered.""" + self.work_queue.put(args) + self.count += 1 + + def terminate(self): + if self.terminated: + return + self.terminated = True + + # For exceptional tear down set the "done" event to stop the workers before + # they empty the queue buffer. + self.done.set() + + for p in self.processes: + # During normal tear down the workers block on get(). Feed a poison pill + # per worker to make them stop. + self.work_queue.put("STOP") + + for p in self.processes: + p.join() + + # Drain the queues to prevent failures when queues are garbage collected. 
+    try:
+      while True: self.work_queue.get(False)
+    except:
+      pass
+    try:
+      while True: self.done_queue.get(False)
+    except:
+      pass
diff --git a/deps/v8/tools/testrunner/local/pool_unittest.py b/deps/v8/tools/testrunner/local/pool_unittest.py
new file mode 100644
index 00000000000..bf2b3f85624
--- /dev/null
+++ b/deps/v8/tools/testrunner/local/pool_unittest.py
@@ -0,0 +1,41 @@
+#!/usr/bin/env python
+# Copyright 2014 the V8 project authors. All rights reserved.
+# Use of this source code is governed by a BSD-style license that can be
+# found in the LICENSE file.
+
+import unittest
+
+from pool import Pool
+
+def Run(x):
+  if x == 10:
+    raise Exception("Expected exception triggered by test.")
+  return x
+
+class PoolTest(unittest.TestCase):
+  def testNormal(self):
+    results = set()
+    pool = Pool(3)
+    for result in pool.imap_unordered(Run, [[x] for x in range(0, 10)]):
+      results.add(result)
+    self.assertEquals(set(range(0, 10)), results)
+
+  def testException(self):
+    results = set()
+    pool = Pool(3)
+    for result in pool.imap_unordered(Run, [[x] for x in range(0, 12)]):
+      # Item 10 will not appear in results due to an internal exception.
+      results.add(result)
+    expect = set(range(0, 12))
+    expect.remove(10)
+    self.assertEquals(expect, results)
+
+  def testAdd(self):
+    results = set()
+    pool = Pool(3)
+    for result in pool.imap_unordered(Run, [[x] for x in range(0, 10)]):
+      results.add(result)
+      if result < 30:
+        pool.add([result + 20])
+    self.assertEquals(set(range(0, 10) + range(20, 30) + range(40, 50)),
+                      results)
diff --git a/deps/v8/tools/testrunner/local/progress.py b/deps/v8/tools/testrunner/local/progress.py
index 03116ee768d..8caa58c44cc 100644
--- a/deps/v8/tools/testrunner/local/progress.py
+++ b/deps/v8/tools/testrunner/local/progress.py
@@ -26,11 +26,17 @@
 # OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
 
 
+import json
+import os
 import sys
 import time
 
 from . import junit_output
+
+ABS_PATH_PREFIX = os.getcwd() + os.sep
+
+
 def EscapeCommand(command):
   parts = []
   for part in command:
@@ -277,6 +283,59 @@ def HasRun(self, test, has_unexpected_output):
       fail_text)
 
 
+class JsonTestProgressIndicator(ProgressIndicator):
+
+  def __init__(self, progress_indicator, json_test_results, arch, mode):
+    self.progress_indicator = progress_indicator
+    self.json_test_results = json_test_results
+    self.arch = arch
+    self.mode = mode
+    self.results = []
+
+  def Starting(self):
+    self.progress_indicator.runner = self.runner
+    self.progress_indicator.Starting()
+
+  def Done(self):
+    self.progress_indicator.Done()
+    complete_results = []
+    if os.path.exists(self.json_test_results):
+      with open(self.json_test_results, "r") as f:
+        # Buildbot might start out with an empty file.
+        complete_results = json.loads(f.read() or "[]")
+
+    complete_results.append({
+      "arch": self.arch,
+      "mode": self.mode,
+      "results": self.results,
+    })
+
+    with open(self.json_test_results, "w") as f:
+      f.write(json.dumps(complete_results))
+
+  def AboutToRun(self, test):
+    self.progress_indicator.AboutToRun(test)
+
+  def HasRun(self, test, has_unexpected_output):
+    self.progress_indicator.HasRun(test, has_unexpected_output)
+    if not has_unexpected_output:
+      # Omit tests that run as expected. Passing tests of reruns after
+      # failures will be reported here with unexpected output as well.
+ return + + self.results.append({ + "name": test.GetLabel(), + "flags": test.flags, + "command": EscapeCommand(self.runner.GetCommand(test)).replace( + ABS_PATH_PREFIX, ""), + "run": test.run, + "stdout": test.output.stdout, + "stderr": test.output.stderr, + "exit_code": test.output.exit_code, + "result": test.suite.GetOutcome(test), + }) + + PROGRESS_INDICATORS = { 'verbose': VerboseProgressIndicator, 'dots': DotsProgressIndicator, diff --git a/deps/v8/tools/testrunner/local/statusfile.py b/deps/v8/tools/testrunner/local/statusfile.py index a45add33ddd..7c3ca7fb51b 100644 --- a/deps/v8/tools/testrunner/local/statusfile.py +++ b/deps/v8/tools/testrunner/local/statusfile.py @@ -52,8 +52,8 @@ # Support arches, modes to be written as keywords instead of strings. VARIABLES = {ALWAYS: True} -for var in ["debug", "release", "android_arm", "android_arm64", "android_ia32", - "arm", "arm64", "ia32", "mips", "mipsel", "x64", "nacl_ia32", +for var in ["debug", "release", "android_arm", "android_arm64", "android_ia32", "android_x87", + "arm", "arm64", "ia32", "mips", "mipsel", "mips64el", "x64", "x87", "nacl_ia32", "nacl_x64", "macos", "windows", "linux"]: VARIABLES[var] = var diff --git a/deps/v8/tools/testrunner/local/testsuite.py b/deps/v8/tools/testrunner/local/testsuite.py index ff51196a563..0fd3f3a3000 100644 --- a/deps/v8/tools/testrunner/local/testsuite.py +++ b/deps/v8/tools/testrunner/local/testsuite.py @@ -190,18 +190,19 @@ def HasFailed(self, testcase): else: return execution_failed - def HasUnexpectedOutput(self, testcase): + def GetOutcome(self, testcase): if testcase.output.HasCrashed(): - outcome = statusfile.CRASH + return statusfile.CRASH elif testcase.output.HasTimedOut(): - outcome = statusfile.TIMEOUT + return statusfile.TIMEOUT elif self.HasFailed(testcase): - outcome = statusfile.FAIL + return statusfile.FAIL else: - outcome = statusfile.PASS - if not testcase.outcomes: - return outcome != statusfile.PASS - return not outcome in testcase.outcomes + return statusfile.PASS + + def HasUnexpectedOutput(self, testcase): + outcome = self.GetOutcome(testcase) + return not outcome in (testcase.outcomes or [statusfile.PASS]) def StripOutputForTransmit(self, testcase): if not self.HasUnexpectedOutput(testcase): diff --git a/deps/v8/tools/testrunner/local/utils.py b/deps/v8/tools/testrunner/local/utils.py index a5252b06a8a..707fa24fbf1 100644 --- a/deps/v8/tools/testrunner/local/utils.py +++ b/deps/v8/tools/testrunner/local/utils.py @@ -32,6 +32,7 @@ from os.path import join import platform import re +import urllib2 def GetSuitePaths(test_root): @@ -113,3 +114,10 @@ def GuessWordsize(): def IsWindows(): return GuessOS() == 'windows' + + +def URLRetrieve(source, destination): + """urllib is broken for SSL connections via a proxy therefore we + can't use urllib.urlretrieve().""" + with open(destination, 'w') as f: + f.write(urllib2.urlopen(source).read()) diff --git a/deps/v8/tools/testrunner/network/network_execution.py b/deps/v8/tools/testrunner/network/network_execution.py index 0f53a6bb645..a43a6cfdedf 100644 --- a/deps/v8/tools/testrunner/network/network_execution.py +++ b/deps/v8/tools/testrunner/network/network_execution.py @@ -33,8 +33,8 @@ import time from . import distro -from . 
import perfdata from ..local import execution +from ..local import perfdata from ..objects import peer from ..objects import workpacket from ..server import compression @@ -54,6 +54,8 @@ def __init__(self, suites, progress_indicator, context, peers, workspace): self.suites = suites num_tests = 0 datapath = os.path.join("out", "testrunner_data") + # TODO(machenbach): These fields should exist now in the superclass. + # But there is no super constructor call. Check if this is a problem. self.perf_data_manager = perfdata.PerfDataManager(datapath) self.perfdata = self.perf_data_manager.GetStore(context.arch, context.mode) for s in suites: diff --git a/deps/v8/tools/testrunner/objects/context.py b/deps/v8/tools/testrunner/objects/context.py index 68c19892410..937d9089f35 100644 --- a/deps/v8/tools/testrunner/objects/context.py +++ b/deps/v8/tools/testrunner/objects/context.py @@ -28,7 +28,9 @@ class Context(): def __init__(self, arch, mode, shell_dir, mode_flags, verbose, timeout, - isolates, command_prefix, extra_flags, noi18n, random_seed): + isolates, command_prefix, extra_flags, noi18n, random_seed, + no_sorting, rerun_failures_count, rerun_failures_max, + predictable): self.arch = arch self.mode = mode self.shell_dir = shell_dir @@ -40,15 +42,20 @@ def __init__(self, arch, mode, shell_dir, mode_flags, verbose, timeout, self.extra_flags = extra_flags self.noi18n = noi18n self.random_seed = random_seed + self.no_sorting = no_sorting + self.rerun_failures_count = rerun_failures_count + self.rerun_failures_max = rerun_failures_max + self.predictable = predictable def Pack(self): return [self.arch, self.mode, self.mode_flags, self.timeout, self.isolates, self.command_prefix, self.extra_flags, self.noi18n, - self.random_seed] + self.random_seed, self.no_sorting, self.rerun_failures_count, + self.rerun_failures_max, self.predictable] @staticmethod def Unpack(packed): # For the order of the fields, refer to Pack() above. return Context(packed[0], packed[1], None, packed[2], False, packed[3], packed[4], packed[5], packed[6], packed[7], - packed[8]) + packed[8], packed[9], packed[10], packed[11], packed[12]) diff --git a/deps/v8/tools/testrunner/objects/testcase.py b/deps/v8/tools/testrunner/objects/testcase.py index cfc522ea738..ca826067c7c 100644 --- a/deps/v8/tools/testrunner/objects/testcase.py +++ b/deps/v8/tools/testrunner/objects/testcase.py @@ -38,6 +38,7 @@ def __init__(self, suite, path, flags=[], dependency=None): self.output = None self.id = None # int, used to map result back to TestCase instance self.duration = None # assigned during execution + self.run = 1 # The nth time this test is executed. def CopyAddingFlags(self, flags): copy = TestCase(self.suite, self.path, self.flags + flags, self.dependency) @@ -60,6 +61,7 @@ def UnpackTask(task): test = TestCase(str(task[0]), task[1], task[2], task[3]) test.outcomes = set(task[4]) test.id = task[5] + test.run = 1 return test def SetSuiteObject(self, suites): diff --git a/deps/v8/tools/tickprocessor.js b/deps/v8/tools/tickprocessor.js index 187e647033f..acd7a71c41b 100644 --- a/deps/v8/tools/tickprocessor.js +++ b/deps/v8/tools/tickprocessor.js @@ -441,12 +441,6 @@ TickProcessor.prototype.printStatistics = function() { if (this.ticks_.total == 0) return; - // Print the unknown ticks percentage if they are not ignored. 
- if (!this.ignoreUnknown_ && this.ticks_.unaccounted > 0) { - this.printHeader('Unknown'); - this.printCounter(this.ticks_.unaccounted, this.ticks_.total); - } - var flatProfile = this.profile_.getFlatProfile(); var flatView = this.viewBuilder_.buildView(flatProfile); // Sort by self time, desc, then by name, desc. @@ -457,33 +451,39 @@ TickProcessor.prototype.printStatistics = function() { if (this.ignoreUnknown_) { totalTicks -= this.ticks_.unaccounted; } - // Our total time contains all the ticks encountered, - // while profile only knows about the filtered ticks. - flatView.head.totalTime = totalTicks; // Count library ticks var flatViewNodes = flatView.head.children; var self = this; + var libraryTicks = 0; - this.processProfile(flatViewNodes, + this.printHeader('Shared libraries'); + this.printEntries(flatViewNodes, totalTicks, null, function(name) { return self.isSharedLibrary(name); }, function(rec) { libraryTicks += rec.selfTime; }); var nonLibraryTicks = totalTicks - libraryTicks; - this.printHeader('Shared libraries'); - this.printEntries(flatViewNodes, null, - function(name) { return self.isSharedLibrary(name); }); - + var jsTicks = 0; this.printHeader('JavaScript'); - this.printEntries(flatViewNodes, nonLibraryTicks, - function(name) { return self.isJsCode(name); }); + this.printEntries(flatViewNodes, totalTicks, nonLibraryTicks, + function(name) { return self.isJsCode(name); }, + function(rec) { jsTicks += rec.selfTime; }); + var cppTicks = 0; this.printHeader('C++'); - this.printEntries(flatViewNodes, nonLibraryTicks, - function(name) { return self.isCppCode(name); }); - - this.printHeader('GC'); - this.printCounter(this.ticks_.gc, totalTicks); + this.printEntries(flatViewNodes, totalTicks, nonLibraryTicks, + function(name) { return self.isCppCode(name); }, + function(rec) { cppTicks += rec.selfTime; }); + + this.printHeader('Summary'); + this.printLine('JavaScript', jsTicks, totalTicks, nonLibraryTicks); + this.printLine('C++', cppTicks, totalTicks, nonLibraryTicks); + this.printLine('GC', this.ticks_.gc, totalTicks, nonLibraryTicks); + this.printLine('Shared libraries', libraryTicks, totalTicks, null); + if (!this.ignoreUnknown_ && this.ticks_.unaccounted > 0) { + this.printLine('Unaccounted', this.ticks_.unaccounted, + this.ticks_.total, null); + } this.printHeavyProfHeader(); var heavyProfile = this.profile_.getBottomUpProfile(); @@ -517,6 +517,18 @@ TickProcessor.prototype.printHeader = function(headerTitle) { }; +TickProcessor.prototype.printLine = function( + entry, ticks, totalTicks, nonLibTicks) { + var pct = ticks * 100 / totalTicks; + var nonLibPct = nonLibTicks != null + ? 
padLeft((ticks * 100 / nonLibTicks).toFixed(1), 5) + '% ' + : ' '; + print(' ' + padLeft(ticks, 5) + ' ' + + padLeft(pct.toFixed(1), 5) + '% ' + + nonLibPct + + entry); +} + TickProcessor.prototype.printHeavyProfHeader = function() { print('\n [Bottom up (heavy) profile]:'); print(' Note: percentage shows a share of a particular caller in the ' + @@ -529,12 +541,6 @@ TickProcessor.prototype.printHeavyProfHeader = function() { }; -TickProcessor.prototype.printCounter = function(ticksCount, totalTicksCount) { - var pct = ticksCount * 100.0 / totalTicksCount; - print(' ' + padLeft(ticksCount, 5) + ' ' + padLeft(pct.toFixed(1), 5) + '%'); -}; - - TickProcessor.prototype.processProfile = function( profile, filterP, func) { for (var i = 0, n = profile.length; i < n; ++i) { @@ -580,18 +586,13 @@ TickProcessor.prototype.formatFunctionName = function(funcName) { }; TickProcessor.prototype.printEntries = function( - profile, nonLibTicks, filterP) { + profile, totalTicks, nonLibTicks, filterP, callback) { var that = this; this.processProfile(profile, filterP, function (rec) { if (rec.selfTime == 0) return; - var nonLibPct = nonLibTicks != null ? - rec.selfTime * 100.0 / nonLibTicks : 0.0; + callback(rec); var funcName = that.formatFunctionName(rec.internalFuncName); - - print(' ' + padLeft(rec.selfTime, 5) + ' ' + - padLeft(rec.selfPercent.toFixed(1), 5) + '% ' + - padLeft(nonLibPct.toFixed(1), 5) + '% ' + - funcName); + that.printLine(funcName, rec.selfTime, totalTicks, nonLibTicks); }); }; diff --git a/deps/v8/tools/unittests/run_benchmarks_test.py b/deps/v8/tools/unittests/run_benchmarks_test.py new file mode 100644 index 00000000000..37a816e760d --- /dev/null +++ b/deps/v8/tools/unittests/run_benchmarks_test.py @@ -0,0 +1,297 @@ +#!/usr/bin/env python +# Copyright 2014 the V8 project authors. All rights reserved. +# Use of this source code is governed by a BSD-style license that can be +# found in the LICENSE file. + +from collections import namedtuple +import coverage +import json +from mock import DEFAULT +from mock import MagicMock +import os +from os import path, sys +import shutil +import tempfile +import unittest + +# Requires python-coverage and python-mock. Native python coverage +# version >= 3.7.1 should be installed to get the best speed. 
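+# (Both packages can typically be installed with `pip install coverage mock`,
+# assuming pip is available on the system.)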
+ +TEST_WORKSPACE = path.join(tempfile.gettempdir(), "test-v8-run-benchmarks") + +V8_JSON = { + "path": ["."], + "binary": "d7", + "flags": ["--flag"], + "main": "run.js", + "run_count": 1, + "results_regexp": "^%s: (.+)$", + "benchmarks": [ + {"name": "Richards"}, + {"name": "DeltaBlue"}, + ] +} + +V8_NESTED_SUITES_JSON = { + "path": ["."], + "flags": ["--flag"], + "run_count": 1, + "units": "score", + "benchmarks": [ + {"name": "Richards", + "path": ["richards"], + "binary": "d7", + "main": "run.js", + "resources": ["file1.js", "file2.js"], + "run_count": 2, + "results_regexp": "^Richards: (.+)$"}, + {"name": "Sub", + "path": ["sub"], + "benchmarks": [ + {"name": "Leaf", + "path": ["leaf"], + "run_count_x64": 3, + "units": "ms", + "main": "run.js", + "results_regexp": "^Simple: (.+) ms.$"}, + ] + }, + {"name": "DeltaBlue", + "path": ["delta_blue"], + "main": "run.js", + "flags": ["--flag2"], + "results_regexp": "^DeltaBlue: (.+)$"}, + {"name": "ShouldntRun", + "path": ["."], + "archs": ["arm"], + "main": "run.js"}, + ] +} + +Output = namedtuple("Output", "stdout, stderr") + +class BenchmarksTest(unittest.TestCase): + @classmethod + def setUpClass(cls): + cls.base = path.dirname(path.dirname(path.abspath(__file__))) + sys.path.append(cls.base) + cls._cov = coverage.coverage( + include=([os.path.join(cls.base, "run_benchmarks.py")])) + cls._cov.start() + import run_benchmarks + from testrunner.local import commands + global commands + global run_benchmarks + + @classmethod + def tearDownClass(cls): + cls._cov.stop() + print "" + print cls._cov.report() + + def setUp(self): + self.maxDiff = None + if path.exists(TEST_WORKSPACE): + shutil.rmtree(TEST_WORKSPACE) + os.makedirs(TEST_WORKSPACE) + + def tearDown(self): + if path.exists(TEST_WORKSPACE): + shutil.rmtree(TEST_WORKSPACE) + + def _WriteTestInput(self, json_content): + self._test_input = path.join(TEST_WORKSPACE, "test.json") + with open(self._test_input, "w") as f: + f.write(json.dumps(json_content)) + + def _MockCommand(self, *args): + # Fake output for each benchmark run. + benchmark_outputs = [Output(stdout=arg, stderr=None) for arg in args[1]] + def execute(*args, **kwargs): + return benchmark_outputs.pop() + commands.Execute = MagicMock(side_effect=execute) + + # Check that d8 is called from the correct cwd for each benchmark run. 
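+    # (Note: os.chdir is replaced with a mock below, so the working directory
+    # never actually changes; each call is only checked against the expected
+    # directory list.)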
+ dirs = [path.join(TEST_WORKSPACE, arg) for arg in args[0]] + def chdir(*args, **kwargs): + self.assertEquals(dirs.pop(), args[0]) + os.chdir = MagicMock(side_effect=chdir) + + def _CallMain(self, *args): + self._test_output = path.join(TEST_WORKSPACE, "results.json") + all_args=[ + "--json-test-results", + self._test_output, + self._test_input, + ] + all_args += args + return run_benchmarks.Main(all_args) + + def _LoadResults(self): + with open(self._test_output) as f: + return json.load(f) + + def _VerifyResults(self, suite, units, traces): + self.assertEquals([ + {"units": units, + "graphs": [suite, trace["name"]], + "results": trace["results"], + "stddev": trace["stddev"]} for trace in traces], + self._LoadResults()["traces"]) + + def _VerifyErrors(self, errors): + self.assertEquals(errors, self._LoadResults()["errors"]) + + def _VerifyMock(self, binary, *args): + arg = [path.join(path.dirname(self.base), binary)] + arg += args + commands.Execute.assert_called_with(arg, timeout=60) + + def _VerifyMockMultiple(self, *args): + expected = [] + for arg in args: + a = [path.join(path.dirname(self.base), arg[0])] + a += arg[1:] + expected.append(((a,), {"timeout": 60})) + self.assertEquals(expected, commands.Execute.call_args_list) + + def testOneRun(self): + self._WriteTestInput(V8_JSON) + self._MockCommand(["."], ["x\nRichards: 1.234\nDeltaBlue: 10657567\ny\n"]) + self.assertEquals(0, self._CallMain()) + self._VerifyResults("test", "score", [ + {"name": "Richards", "results": ["1.234"], "stddev": ""}, + {"name": "DeltaBlue", "results": ["10657567"], "stddev": ""}, + ]) + self._VerifyErrors([]) + self._VerifyMock(path.join("out", "x64.release", "d7"), "--flag", "run.js") + + def testTwoRuns_Units_SuiteName(self): + test_input = dict(V8_JSON) + test_input["run_count"] = 2 + test_input["name"] = "v8" + test_input["units"] = "ms" + self._WriteTestInput(test_input) + self._MockCommand([".", "."], + ["Richards: 100\nDeltaBlue: 200\n", + "Richards: 50\nDeltaBlue: 300\n"]) + self.assertEquals(0, self._CallMain()) + self._VerifyResults("v8", "ms", [ + {"name": "Richards", "results": ["50", "100"], "stddev": ""}, + {"name": "DeltaBlue", "results": ["300", "200"], "stddev": ""}, + ]) + self._VerifyErrors([]) + self._VerifyMock(path.join("out", "x64.release", "d7"), "--flag", "run.js") + + def testTwoRuns_SubRegexp(self): + test_input = dict(V8_JSON) + test_input["run_count"] = 2 + del test_input["results_regexp"] + test_input["benchmarks"][0]["results_regexp"] = "^Richards: (.+)$" + test_input["benchmarks"][1]["results_regexp"] = "^DeltaBlue: (.+)$" + self._WriteTestInput(test_input) + self._MockCommand([".", "."], + ["Richards: 100\nDeltaBlue: 200\n", + "Richards: 50\nDeltaBlue: 300\n"]) + self.assertEquals(0, self._CallMain()) + self._VerifyResults("test", "score", [ + {"name": "Richards", "results": ["50", "100"], "stddev": ""}, + {"name": "DeltaBlue", "results": ["300", "200"], "stddev": ""}, + ]) + self._VerifyErrors([]) + self._VerifyMock(path.join("out", "x64.release", "d7"), "--flag", "run.js") + + def testNestedSuite(self): + self._WriteTestInput(V8_NESTED_SUITES_JSON) + self._MockCommand(["delta_blue", "sub/leaf", "richards"], + ["DeltaBlue: 200\n", + "Simple: 1 ms.\n", + "Simple: 2 ms.\n", + "Simple: 3 ms.\n", + "Richards: 100\n", + "Richards: 50\n"]) + self.assertEquals(0, self._CallMain()) + self.assertEquals([ + {"units": "score", + "graphs": ["test", "Richards"], + "results": ["50", "100"], + "stddev": ""}, + {"units": "ms", + "graphs": ["test", "Sub", "Leaf"], + "results": ["3", "2", 
"1"], + "stddev": ""}, + {"units": "score", + "graphs": ["test", "DeltaBlue"], + "results": ["200"], + "stddev": ""}, + ], self._LoadResults()["traces"]) + self._VerifyErrors([]) + self._VerifyMockMultiple( + (path.join("out", "x64.release", "d7"), "--flag", "file1.js", + "file2.js", "run.js"), + (path.join("out", "x64.release", "d7"), "--flag", "file1.js", + "file2.js", "run.js"), + (path.join("out", "x64.release", "d8"), "--flag", "run.js"), + (path.join("out", "x64.release", "d8"), "--flag", "run.js"), + (path.join("out", "x64.release", "d8"), "--flag", "run.js"), + (path.join("out", "x64.release", "d8"), "--flag", "--flag2", "run.js")) + + def testOneRunStdDevRegExp(self): + test_input = dict(V8_JSON) + test_input["stddev_regexp"] = "^%s\-stddev: (.+)$" + self._WriteTestInput(test_input) + self._MockCommand(["."], ["Richards: 1.234\nRichards-stddev: 0.23\n" + "DeltaBlue: 10657567\nDeltaBlue-stddev: 106\n"]) + self.assertEquals(0, self._CallMain()) + self._VerifyResults("test", "score", [ + {"name": "Richards", "results": ["1.234"], "stddev": "0.23"}, + {"name": "DeltaBlue", "results": ["10657567"], "stddev": "106"}, + ]) + self._VerifyErrors([]) + self._VerifyMock(path.join("out", "x64.release", "d7"), "--flag", "run.js") + + def testTwoRunsStdDevRegExp(self): + test_input = dict(V8_JSON) + test_input["stddev_regexp"] = "^%s\-stddev: (.+)$" + test_input["run_count"] = 2 + self._WriteTestInput(test_input) + self._MockCommand(["."], ["Richards: 3\nRichards-stddev: 0.7\n" + "DeltaBlue: 6\nDeltaBlue-boom: 0.9\n", + "Richards: 2\nRichards-stddev: 0.5\n" + "DeltaBlue: 5\nDeltaBlue-stddev: 0.8\n"]) + self.assertEquals(1, self._CallMain()) + self._VerifyResults("test", "score", [ + {"name": "Richards", "results": ["2", "3"], "stddev": "0.7"}, + {"name": "DeltaBlue", "results": ["5", "6"], "stddev": "0.8"}, + ]) + self._VerifyErrors( + ["Benchmark Richards should only run once since a stddev is provided " + "by the benchmark.", + "Benchmark DeltaBlue should only run once since a stddev is provided " + "by the benchmark.", + "Regexp \"^DeltaBlue\-stddev: (.+)$\" didn't match for benchmark " + "DeltaBlue."]) + self._VerifyMock(path.join("out", "x64.release", "d7"), "--flag", "run.js") + + def testBuildbot(self): + self._WriteTestInput(V8_JSON) + self._MockCommand(["."], ["Richards: 1.234\nDeltaBlue: 10657567\n"]) + self.assertEquals(0, self._CallMain("--buildbot")) + self._VerifyResults("test", "score", [ + {"name": "Richards", "results": ["1.234"], "stddev": ""}, + {"name": "DeltaBlue", "results": ["10657567"], "stddev": ""}, + ]) + self._VerifyErrors([]) + self._VerifyMock(path.join("out", "Release", "d7"), "--flag", "run.js") + + def testRegexpNoMatch(self): + self._WriteTestInput(V8_JSON) + self._MockCommand(["."], ["x\nRichaards: 1.234\nDeltaBlue: 10657567\ny\n"]) + self.assertEquals(1, self._CallMain()) + self._VerifyResults("test", "score", [ + {"name": "Richards", "results": [], "stddev": ""}, + {"name": "DeltaBlue", "results": ["10657567"], "stddev": ""}, + ]) + self._VerifyErrors( + ["Regexp \"^Richards: (.+)$\" didn't match for benchmark Richards."]) + self._VerifyMock(path.join("out", "x64.release", "d7"), "--flag", "run.js") diff --git a/deps/v8/tools/whitespace.txt b/deps/v8/tools/whitespace.txt new file mode 100644 index 00000000000..64a6f4c9851 --- /dev/null +++ b/deps/v8/tools/whitespace.txt @@ -0,0 +1,8 @@ +You can modify this file to create no-op changelists. + +Try to write something funny. And please don't add trailing whitespace. 
+ +A Smi walks into a bar and says: +"I'm so deoptimized today!" +The doubles heard this and started to unbox. +The Smi looked at them and...................... diff --git a/doc/api/all.markdown b/doc/api/all.markdown index 5ccef037f5e..2a164abb78e 100644 --- a/doc/api/all.markdown +++ b/doc/api/all.markdown @@ -35,4 +35,3 @@ @include debugger @include cluster @include smalloc -@include tracing diff --git a/doc/api/assert.markdown b/doc/api/assert.markdown index 1a66022219d..0fb7f0b8d42 100644 --- a/doc/api/assert.markdown +++ b/doc/api/assert.markdown @@ -9,35 +9,35 @@ access it with `require('assert')`. Throws an exception that displays the values for `actual` and `expected` separated by the provided operator. -## assert(value, message), assert.ok(value, [message]) +## assert(value, message), assert.ok(value[, message]) Tests if value is truthy; it is equivalent to `assert.equal(true, !!value, message);` -## assert.equal(actual, expected, [message]) +## assert.equal(actual, expected[, message]) Tests shallow, coercive equality with the equal comparison operator ( `==` ). -## assert.notEqual(actual, expected, [message]) +## assert.notEqual(actual, expected[, message]) Tests shallow, coercive non-equality with the not equal comparison operator ( `!=` ). -## assert.deepEqual(actual, expected, [message]) +## assert.deepEqual(actual, expected[, message]) Tests for deep equality. -## assert.notDeepEqual(actual, expected, [message]) +## assert.notDeepEqual(actual, expected[, message]) Tests for any deep inequality. -## assert.strictEqual(actual, expected, [message]) +## assert.strictEqual(actual, expected[, message]) Tests strict equality, as determined by the strict equality operator ( `===` ) -## assert.notStrictEqual(actual, expected, [message]) +## assert.notStrictEqual(actual, expected[, message]) Tests strict non-equality, as determined by the strict not equal operator ( `!==` ) -## assert.throws(block, [error], [message]) +## assert.throws(block[, error][, message]) Expects `block` to throw an error. `error` can be a constructor, `RegExp`, or validation function. @@ -74,7 +74,7 @@ Custom error validation: "unexpected error" ); -## assert.doesNotThrow(block, [message]) +## assert.doesNotThrow(block[, message]) Expects `block` not to throw an error; see `assert.throws` for details. diff --git a/doc/api/buffer.markdown b/doc/api/buffer.markdown index b384b05f43d..5d9cd9b9b7a 100644 --- a/doc/api/buffer.markdown +++ b/doc/api/buffer.markdown @@ -72,7 +72,13 @@ will be thrown here. Allocates a new buffer using an `array` of octets. -### new Buffer(str, [encoding]) +### new Buffer(buffer) + +* `buffer` {Buffer} + +Copies the passed `buffer` data onto a new `Buffer` instance. + +### new Buffer(str[, encoding]) * `str` String - string to encode. * `encoding` String - encoding to use, Optional. @@ -94,7 +100,7 @@ otherwise. Tests if `obj` is a `Buffer`. -### Class Method: Buffer.byteLength(string, [encoding]) +### Class Method: Buffer.byteLength(string[, encoding]) * `string` String * `encoding` String, Optional, Default: 'utf8' @@ -113,7 +119,7 @@ Example: // ½ + ¼ = ¾: 9 characters, 12 bytes -### Class Method: Buffer.concat(list, [totalLength]) +### Class Method: Buffer.concat(list[, totalLength]) * `list` {Array} List of Buffer objects to concat * `totalLength` {Number} Total length of the buffers when concatenated @@ -162,7 +168,7 @@ buffer object. It does not change when the contents of the buffer are changed.
// 1234 // 1234 -### buf.write(string, [offset], [length], [encoding]) +### buf.write(string[, offset][, length][, encoding]) * `string` String - data to be written to buffer * `offset` Number, Optional, Default: 0 @@ -180,8 +186,50 @@ The method will not write partial characters. len = buf.write('\u00bd + \u00bc = \u00be', 0); console.log(len + " bytes: " + buf.toString('utf8', 0, len)); +### buf.writeUIntLE(value, offset, byteLength[, noAssert]) +### buf.writeUIntBE(value, offset, byteLength[, noAssert]) +### buf.writeIntLE(value, offset, byteLength[, noAssert]) +### buf.writeIntBE(value, offset, byteLength[, noAssert]) + +* `value` {Number} Bytes to be written to buffer +* `offset` {Number} `0 <= offset <= buf.length` +* `byteLength` {Number} `0 < byteLength <= 6` +* `noAssert` {Boolean} Default: false +* Return: {Number} + +Writes `value` to the buffer at the specified `offset`, using `byteLength` bytes. +Supports up to 48 bits of accuracy. For example: + + var b = new Buffer(6); + b.writeUIntBE(0x1234567890ab, 0, 6); + // <Buffer 12 34 56 78 90 ab> + +Set `noAssert` to `true` to skip validation of `value` and `offset`. Defaults +to `false`. + +### buf.readUIntLE(offset, byteLength[, noAssert]) +### buf.readUIntBE(offset, byteLength[, noAssert]) +### buf.readIntLE(offset, byteLength[, noAssert]) +### buf.readIntBE(offset, byteLength[, noAssert]) + +* `offset` {Number} `0 <= offset <= buf.length` +* `byteLength` {Number} `0 < byteLength <= 6` +* `noAssert` {Boolean} Default: false +* Return: {Number} + +A generalized version of all numeric read methods. Supports up to 48 bits of +accuracy. For example: + + var b = new Buffer(6); + b.writeUInt16LE(0x90ab, 0); + b.writeUInt32LE(0x12345678, 2); + b.readUIntLE(0, 6).toString(16); // Specify 6 bytes (48 bits) + // output: '1234567890ab' + +Set `noAssert` to true to skip validation of `offset`. This means that `offset` +may be beyond the end of the buffer. Defaults to `false`. -### buf.toString([encoding], [start], [end]) +### buf.toString([encoding][, start][, end]) * `encoding` String, Optional, Default: 'utf8' * `start` Number, Optional, Default: 0 @@ -252,7 +300,7 @@ Returns a number indicating whether `this` comes before or after or is the same as the `otherBuffer` in sort order. -### buf.copy(targetBuffer, [targetStart], [sourceStart], [sourceEnd]) +### buf.copy(targetBuffer[, targetStart][, sourceStart][, sourceEnd]) * `targetBuffer` Buffer object - Buffer to copy into * `targetStart` Number, Optional, Default: 0 @@ -283,7 +331,7 @@ into `buf2`, starting at the 8th byte in `buf2`. // !!!!!!!!qrst!!!!!!!!!!!!! -### buf.slice([start], [end]) +### buf.slice([start][, end]) * `start` Number, Optional, Default: 0 * `end` Number, Optional, Default: `buffer.length` @@ -311,7 +359,7 @@ byte from the original Buffer.
// abc // !bc -### buf.readUInt8(offset, [noAssert]) +### buf.readUInt8(offset[, noAssert]) * `offset` Number * `noAssert` Boolean, Optional, Default: false @@ -340,8 +388,8 @@ Example: // 0x23 // 0x42 -### buf.readUInt16LE(offset, [noAssert]) -### buf.readUInt16BE(offset, [noAssert]) +### buf.readUInt16LE(offset[, noAssert]) +### buf.readUInt16BE(offset[, noAssert]) * `offset` Number * `noAssert` Boolean, Optional, Default: false @@ -376,8 +424,8 @@ Example: // 0x2342 // 0x4223 -### buf.readUInt32LE(offset, [noAssert]) -### buf.readUInt32BE(offset, [noAssert]) +### buf.readUInt32LE(offset[, noAssert]) +### buf.readUInt32BE(offset[, noAssert]) * `offset` Number * `noAssert` Boolean, Optional, Default: false @@ -404,7 +452,7 @@ Example: // 0x03042342 // 0x42230403 -### buf.readInt8(offset, [noAssert]) +### buf.readInt8(offset[, noAssert]) * `offset` Number * `noAssert` Boolean, Optional, Default: false @@ -418,8 +466,8 @@ may be beyond the end of the buffer. Defaults to `false`. Works as `buffer.readUInt8`, except buffer contents are treated as two's complement signed values. -### buf.readInt16LE(offset, [noAssert]) -### buf.readInt16BE(offset, [noAssert]) +### buf.readInt16LE(offset[, noAssert]) +### buf.readInt16BE(offset[, noAssert]) * `offset` Number * `noAssert` Boolean, Optional, Default: false @@ -434,8 +482,8 @@ may be beyond the end of the buffer. Defaults to `false`. Works as `buffer.readUInt16*`, except buffer contents are treated as two's complement signed values. -### buf.readInt32LE(offset, [noAssert]) -### buf.readInt32BE(offset, [noAssert]) +### buf.readInt32LE(offset[, noAssert]) +### buf.readInt32BE(offset[, noAssert]) * `offset` Number * `noAssert` Boolean, Optional, Default: false @@ -450,8 +498,8 @@ may be beyond the end of the buffer. Defaults to `false`. Works as `buffer.readUInt32*`, except buffer contents are treated as two's complement signed values. -### buf.readFloatLE(offset, [noAssert]) -### buf.readFloatBE(offset, [noAssert]) +### buf.readFloatLE(offset[, noAssert]) +### buf.readFloatBE(offset[, noAssert]) * `offset` Number * `noAssert` Boolean, Optional, Default: false @@ -476,8 +524,8 @@ Example: // 0x01 -### buf.readDoubleLE(offset, [noAssert]) -### buf.readDoubleBE(offset, [noAssert]) +### buf.readDoubleLE(offset[, noAssert]) +### buf.readDoubleBE(offset[, noAssert]) * `offset` Number * `noAssert` Boolean, Optional, Default: false @@ -506,7 +554,7 @@ Example: // 0.3333333333333333 -### buf.writeUInt8(value, offset, [noAssert]) +### buf.writeUInt8(value, offset[, noAssert]) * `value` Number * `offset` Number @@ -532,8 +580,8 @@ Example: // <Buffer 03 04 23 42> -### buf.writeUInt16LE(value, offset, [noAssert]) -### buf.writeUInt16BE(value, offset, [noAssert]) +### buf.writeUInt16LE(value, offset[, noAssert]) +### buf.writeUInt16BE(value, offset[, noAssert]) * `value` Number * `offset` Number @@ -563,8 +611,8 @@ Example: // <Buffer de ad be ef> // <Buffer ad de ef be> -### buf.writeUInt32LE(value, offset, [noAssert]) -### buf.writeUInt32BE(value, offset, [noAssert]) +### buf.writeUInt32LE(value, offset[, noAssert]) +### buf.writeUInt32BE(value, offset[, noAssert]) * `value` Number * `offset` Number @@ -592,7 +640,7 @@ Example: // <Buffer fe ed fa ce> // <Buffer ce fa ed fe> -### buf.writeInt8(value, offset, [noAssert]) +### buf.writeInt8(value, offset[, noAssert]) * `value` Number * `offset` Number @@ -609,8 +657,8 @@ should not be used unless you are certain of correctness. Defaults to `false`. 
Works as `buffer.writeUInt8`, except value is written out as a two's complement signed integer into `buffer`. -### buf.writeInt16LE(value, offset, [noAssert]) -### buf.writeInt16BE(value, offset, [noAssert]) +### buf.writeInt16LE(value, offset[, noAssert]) +### buf.writeInt16BE(value, offset[, noAssert]) * `value` Number * `offset` Number @@ -627,8 +675,8 @@ should not be used unless you are certain of correctness. Defaults to `false`. Works as `buffer.writeUInt16*`, except value is written out as a two's complement signed integer into `buffer`. -### buf.writeInt32LE(value, offset, [noAssert]) -### buf.writeInt32BE(value, offset, [noAssert]) +### buf.writeInt32LE(value, offset[, noAssert]) +### buf.writeInt32BE(value, offset[, noAssert]) * `value` Number * `offset` Number @@ -645,8 +693,8 @@ should not be used unless you are certain of correctness. Defaults to `false`. Works as `buffer.writeUInt32*`, except value is written out as a two's complement signed integer into `buffer`. -### buf.writeFloatLE(value, offset, [noAssert]) -### buf.writeFloatBE(value, offset, [noAssert]) +### buf.writeFloatLE(value, offset[, noAssert]) +### buf.writeFloatBE(value, offset[, noAssert]) * `value` Number * `offset` Number @@ -674,8 +722,8 @@ Example: // <Buffer 4f 4a fe bb> // <Buffer bb fe 4a 4f> -### buf.writeDoubleLE(value, offset, [noAssert]) -### buf.writeDoubleBE(value, offset, [noAssert]) +### buf.writeDoubleLE(value, offset[, noAssert]) +### buf.writeDoubleBE(value, offset[, noAssert]) * `value` Number * `offset` Number @@ -703,7 +751,7 @@ Example: // <Buffer 43 eb d5 b7 dd f9 5f d7> // <Buffer d7 5f f9 dd b7 d5 eb 43> -### buf.fill(value, [offset], [end]) +### buf.fill(value[, offset][, end]) * `value` * `offset` Number, Optional diff --git a/doc/api/child_process.markdown b/doc/api/child_process.markdown index 5e6ecfb9c92..7db48829237 100644 --- a/doc/api/child_process.markdown +++ b/doc/api/child_process.markdown @@ -167,7 +167,7 @@ to a process. See `kill(2)` -### child.send(message, [sendHandle]) +### child.send(message[, sendHandle]) * `message` {Object} * `sendHandle` {Handle object} @@ -303,7 +303,7 @@ child process has any open IPC channels with the parent (i.e `fork()`). These methods follow the common async programming patterns (accepting a callback or returning an EventEmitter). -### child_process.spawn(command, [args], [options]) +### child_process.spawn(command[, args][, options]) * `command` {String} The command to run * `args` {Array} List of string arguments @@ -473,7 +473,7 @@ inherited, the child will remain attached to the controlling terminal. See also: `child_process.exec()` and `child_process.fork()` -### child_process.exec(command, [options], callback) +### child_process.exec(command[, options], callback) * `command` {String} The command to run, with space-separated arguments * `options` {Object} @@ -531,7 +531,7 @@ amount of data allowed on stdout or stderr - if this value is exceeded then the child process is killed. -### child_process.execFile(file, [args], [options], [callback]) +### child_process.execFile(file[, args][, options][, callback]) * `file` {String} The filename of the program to run * `args` {Array} List of string arguments @@ -555,7 +555,7 @@ subshell but rather the specified file directly. This makes it slightly leaner than `child_process.exec`. It has the same options. 
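+
+A minimal sketch of collecting a program's output (the program and arguments
+here are purely illustrative):
+
+    var execFile = require('child_process').execFile;
+    execFile('node', ['--version'], function (error, stdout, stderr) {
+      if (error) {
+        throw error;
+      }
+      console.log('version: ' + stdout);
+    });
+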
-### child_process.fork(modulePath, [args], [options]) +### child_process.fork(modulePath[, args][, options]) * `modulePath` {String} The module to run in the child * `args` {Array} List of string arguments @@ -598,7 +598,7 @@ Blocking calls like these are mostly useful for simplifying general purpose scripting tasks and for simplifying the loading/processing of application configuration at startup. -### child_process.spawnSync(command, [args], [options]) +### child_process.spawnSync(command[, args][, options]) * `command` {String} The command to run * `args` {Array} List of string arguments @@ -629,7 +629,7 @@ until the process has completely exited. That is to say, if the process handles the `SIGTERM` signal and doesn't exit, your process will wait until the child process has exited. -### child_process.execFileSync(command, [args], [options]) +### child_process.execFileSync(command[, args][, options]) * `command` {String} The command to run * `args` {Array} List of string arguments @@ -660,7 +660,7 @@ throw. The `Error` object will contain the entire result from [`child_process.spawnSync`](#child_process_child_process_spawnsync_command_args_options) -### child_process.execSync(command, [options]) +### child_process.execSync(command[, options]) * `command` {String} The command to run * `options` {Object} diff --git a/doc/api/cluster.markdown b/doc/api/cluster.markdown index 8228e34c1ee..f65e11c3592 100644 --- a/doc/api/cluster.markdown +++ b/doc/api/cluster.markdown @@ -415,7 +415,7 @@ exit, the master may choose not to respawn a worker based on this value. // kill worker worker.kill(); -### worker.send(message, [sendHandle]) +### worker.send(message[, sendHandle]) * `message` {Object} * `sendHandle` {Handle object} diff --git a/doc/api/console.markdown b/doc/api/console.markdown index 5cfe068f26c..46ab65fad36 100644 --- a/doc/api/console.markdown +++ b/doc/api/console.markdown @@ -22,29 +22,31 @@ In daily use, the blocking/non-blocking dichotomy is not something you should worry about unless you log huge amounts of data. -## console.log([data], [...]) +## console.log([data][, ...]) Prints to stdout with newline. This function can take multiple arguments in a `printf()`-like way. Example: + var count = 5; console.log('count: %d', count); + // prints 'count: 5' If formatting elements are not found in the first string then `util.inspect` is used on each argument. See [util.format()][] for more information. -## console.info([data], [...]) +## console.info([data][, ...]) Same as `console.log`. -## console.error([data], [...]) +## console.error([data][, ...]) Same as `console.log` but prints to stderr. -## console.warn([data], [...]) +## console.warn([data][, ...]) Same as `console.error`. -## console.dir(obj, [options]) +## console.dir(obj[, options]) Uses `util.inspect` on `obj` and prints resulting string to stdout. This function bypasses any custom `inspect()` function on `obj`. An optional *options* object @@ -73,13 +75,14 @@ Finish timer, record output. Example: ; } console.timeEnd('100-elements'); + // prints 100-elements: 262ms -## console.trace(message, [...]) +## console.trace(message[, ...]) Print to stderr `'Trace :'`, followed by the formatted message and stack trace to the current position. -## console.assert(value, [message], [...]) +## console.assert(value[, message][, ...]) Similar to [assert.ok()][], but the error message is formatted as `util.format(message...)`. 
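+
+For example (a small sketch; a failing assertion throws an `AssertionError`):
+
+    console.assert(1 === 1, 'this assertion passes silently');
+    console.assert(1 === 2, 'value was %d', 2);
+    // AssertionError: value was 2
+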
diff --git a/doc/api/crypto.markdown b/doc/api/crypto.markdown index a414d9b9221..b6dcf46124a 100644 --- a/doc/api/crypto.markdown +++ b/doc/api/crypto.markdown @@ -12,7 +12,7 @@ It also offers a set of wrappers for OpenSSL's hash, hmac, cipher, decipher, sign and verify methods. -## crypto.setEngine(engine, [flags]) +## crypto.setEngine(engine[, flags]) Load and set engine for some/all OpenSSL functions (selected by flags). @@ -58,7 +58,7 @@ Example: ## crypto.createCredentials(details) -Stability: 0 - Deprecated. Use [tls.createSecureContext][] instead. + Stability: 0 - Deprecated. Use [tls.createSecureContext][] instead. Creates a credentials object, with the optional details being a dictionary with keys: @@ -122,7 +122,7 @@ digest. The legacy `update` and `digest` methods are also supported. Returned by `crypto.createHash`. -### hash.update(data, [input_encoding]) +### hash.update(data[, input_encoding]) Updates the hash content with the given `data`, the encoding of which is given in `input_encoding` and can be `'utf8'`, `'ascii'` or @@ -214,7 +214,7 @@ writable. The written plain text data is used to produce the encrypted data on the readable side. The legacy `update` and `final` methods are also supported. -### cipher.update(data, [input_encoding], [output_encoding]) +### cipher.update(data[, input_encoding][, output_encoding]) Updates the cipher with `data`, the encoding of which is given in `input_encoding` and can be `'utf8'`, `'ascii'` or `'binary'`. If no @@ -280,7 +280,7 @@ writable. The written enciphered data is used to produce the plain-text data on the readable side. The legacy `update` and `final` methods are also supported. -### decipher.update(data, [input_encoding], [output_encoding]) +### decipher.update(data[, input_encoding][, output_encoding]) Updates the decipher with `data`, which is encoded in `'binary'`, `'base64'` or `'hex'`. If no encoding is provided, then a buffer is @@ -345,7 +345,7 @@ written, the `sign` method will return the signature. The legacy Updates the sign object with data. This can be called many times with new data as it is streamed. -### sign.sign(private_key, [output_format]) +### sign.sign(private_key[, output_format]) Calculates the signature on all the updated data passed through the sign. @@ -387,7 +387,7 @@ supported. Updates the verifier object with data. This can be called many times with new data as it is streamed. -### verifier.verify(object, signature, [signature_format]) +### verifier.verify(object, signature[, signature_format]) Verifies the signed data by using the `object` and `signature`. `object` is a string containing a PEM encoded object, which can be @@ -402,13 +402,13 @@ the data and public key. Note: the `verifier` object cannot be used after the `verify()` method has been called. -## crypto.createDiffieHellman(prime_length, [generator]) +## crypto.createDiffieHellman(prime_length[, generator]) Creates a Diffie-Hellman key exchange object and generates a prime of `prime_length` bits, using an optional specific numeric `generator`. If no `generator` is specified, then `2` is used. -## crypto.createDiffieHellman(prime, [prime_encoding], [generator], [generator_encoding]) +## crypto.createDiffieHellman(prime[, prime_encoding][, generator][, generator_encoding]) Creates a Diffie-Hellman key exchange object using the supplied `prime` and an optional specific `generator`. @@ -442,7 +442,7 @@ the public key in the specified encoding. This key should be transferred to the other party.
Encoding can be `'binary'`, `'hex'`, or `'base64'`. If no encoding is provided, then a buffer is returned. -### diffieHellman.computeSecret(other_public_key, [input_encoding], [output_encoding]) +### diffieHellman.computeSecret(other_public_key[, input_encoding][, output_encoding]) Computes the shared secret using `other_public_key` as the other party's public key and returns the computed shared secret. Supplied @@ -477,13 +477,13 @@ Returns the Diffie-Hellman private key in the specified encoding, which can be `'binary'`, `'hex'`, or `'base64'`. If no encoding is provided, then a buffer is returned. -### diffieHellman.setPublicKey(public_key, [encoding]) +### diffieHellman.setPublicKey(public_key[, encoding]) Sets the Diffie-Hellman public key. Key encoding can be `'binary'`, `'hex'` or `'base64'`. If no encoding is provided, then a buffer is expected. -### diffieHellman.setPrivateKey(private_key, [encoding]) +### diffieHellman.setPrivateKey(private_key[, encoding]) Sets the Diffie-Hellman private key. Key encoding can be `'binary'`, `'hex'` or `'base64'`. If no encoding is provided, then a buffer is @@ -541,7 +541,7 @@ Format specifies point encoding and can be `'compressed'`, `'uncompressed'`, or Encoding can be `'binary'`, `'hex'`, or `'base64'`. If no encoding is provided, then a buffer is returned. -### ECDH.computeSecret(other_public_key, [input_encoding], [output_encoding]) +### ECDH.computeSecret(other_public_key[, input_encoding][, output_encoding]) Computes the shared secret using `other_public_key` as the other party's public key and returns the computed shared secret. Supplied @@ -569,13 +569,13 @@ Returns the EC Diffie-Hellman private key in the specified encoding, which can be `'binary'`, `'hex'`, or `'base64'`. If no encoding is provided, then a buffer is returned. -### ECDH.setPublicKey(public_key, [encoding]) +### ECDH.setPublicKey(public_key[, encoding]) Sets the EC Diffie-Hellman public key. Key encoding can be `'binary'`, `'hex'` or `'base64'`. If no encoding is provided, then a buffer is expected. -### ECDH.setPrivateKey(private_key, [encoding]) +### ECDH.setPrivateKey(private_key[, encoding]) Sets the EC Diffie-Hellman private key. Key encoding can be `'binary'`, `'hex'` or `'base64'`. If no encoding is provided, then a buffer is @@ -596,7 +596,7 @@ Example (obtaining a shared secret): /* alice_secret and bob_secret should be the same */ console.log(alice_secret == bob_secret); -## crypto.pbkdf2(password, salt, iterations, keylen, [digest], callback) +## crypto.pbkdf2(password, salt, iterations, keylen[, digest], callback) Asynchronous PBKDF2 function. Applies the selected HMAC digest function (default: SHA1) to derive a key of the requested length from the password, @@ -614,11 +614,11 @@ Example: You can get a list of supported digest functions with [crypto.getHashes()](#crypto_crypto_gethashes). -## crypto.pbkdf2Sync(password, salt, iterations, keylen, [digest]) +## crypto.pbkdf2Sync(password, salt, iterations, keylen[, digest]) Synchronous PBKDF2 function. Returns derivedKey or throws error. -## crypto.randomBytes(size, [callback]) +## crypto.randomBytes(size[, callback]) Generates cryptographically strong pseudo-random data. Usage: @@ -642,7 +642,7 @@ accumulated entropy to generate cryptographically strong data. In other words, `crypto.randomBytes` without callback will not block even if all entropy sources are drained. 
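+
+A sketch of the synchronous form (wrapped in `try`/`catch`, since an error is
+thrown if enough entropy has not yet been gathered):
+
+    try {
+      var buf = crypto.randomBytes(256);
+      console.log('Have %d bytes of random data: %s', buf.length, buf);
+    } catch (ex) {
+      // handle error; most likely, entropy sources are drained
+    }
+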
-## crypto.pseudoRandomBytes(size, [callback]) +## crypto.pseudoRandomBytes(size[, callback]) Generates *non*-cryptographically strong pseudo-random data. The data returned will be unique if it is sufficiently long, but is not diff --git a/doc/api/dgram.markdown b/doc/api/dgram.markdown index 891c0c70300..09bacaf098c 100644 --- a/doc/api/dgram.markdown +++ b/doc/api/dgram.markdown @@ -21,14 +21,9 @@ You have to change it to this: }); -## dgram.createSocket(type, [callback]) -## dgram.createSocket(options, [callback]) +## dgram.createSocket(type[, callback]) * `type` String. Either 'udp4' or 'udp6' -* `options` Object. Should contain a `type` property and could contain - `reuseAddr` property. `false` by default. - When `reuseAddr` is `true` - `socket.bind()` will reuse address, even if the - other process has already bound a socket on it. * `callback` Function. Attached as a listener to `message` events. Optional * Returns: Socket object @@ -38,9 +33,28 @@ and `udp6`. Takes an optional callback which is added as a listener for `message` events. -Call `socket.bind` if you want to receive datagrams. `socket.bind()` will bind -to the "all interfaces" address on a random port (it does the right thing for -both `udp4` and `udp6` sockets). You can then retrieve the address and port +Call `socket.bind()` if you want to receive datagrams. `socket.bind()` will +bind to the "all interfaces" address on a random port (it does the right thing +for both `udp4` and `udp6` sockets). You can then retrieve the address and port +with `socket.address().address` and `socket.address().port`. + +## dgram.createSocket(options[, callback]) +* `options` Object +* `callback` Function. Attached as a listener to `message` events. +* Returns: Socket object + +The `options` object should contain a `type` field of either `udp4` or `udp6` +and an optional boolean `reuseAddr` field. + +When `reuseAddr` is true `socket.bind()` will reuse the address, even if +another process has already bound a socket on it. `reuseAddr` defaults to +`false`. + +Takes an optional callback which is added as a listener for `message` events. + +Call `socket.bind()` if you want to receive datagrams. `socket.bind()` will +bind to the "all interfaces" address on a random port (it does the right thing +for both `udp4` and `udp6` sockets). You can then retrieve the address and port with `socket.address().address` and `socket.address().port`. ## Class: dgram.Socket @@ -77,7 +91,7 @@ on this socket. Emitted when an error occurs. -### socket.send(buf, offset, length, port, address, [callback]) +### socket.send(buf, offset, length, port, address[, callback]) * `buf` Buffer object or string. Message to be sent * `offset` Integer. Offset in the buffer where the message starts. @@ -142,7 +156,7 @@ a packet might travel, and that generally sending a datagram greater than the (receiver) `MTU` won't work (the packet gets silently dropped, without informing the source that the data did not reach its intended recipient). -### socket.bind(port, [address], [callback]) +### socket.bind(port[, address][, callback]) * `port` Integer * `address` String, Optional @@ -188,7 +202,7 @@ Example of a UDP server listening on port 41234: // server listening 0.0.0.0:41234 -### socket.bind(options, [callback]) +### socket.bind(options[, callback]) * `options` {Object} - Required. Supports the following properties: * `port` {Number} - Required. @@ -262,7 +276,7 @@ systems is 1. Sets or clears the `IP_MULTICAST_LOOP` socket option. 
When this option is set, multicast packets will also be received on the local interface. -### socket.addMembership(multicastAddress, [multicastInterface]) +### socket.addMembership(multicastAddress[, multicastInterface]) * `multicastAddress` String * `multicastInterface` String, Optional @@ -272,7 +286,7 @@ Tells the kernel to join a multicast group with `IP_ADD_MEMBERSHIP` socket optio If `multicastInterface` is not specified, the OS will try to add membership to all valid interfaces. -### socket.dropMembership(multicastAddress, [multicastInterface]) +### socket.dropMembership(multicastAddress[, multicastInterface]) * `multicastAddress` String * `multicastInterface` String, Optional diff --git a/doc/api/dns.markdown b/doc/api/dns.markdown index 5b8477fcb48..d080d666186 100644 --- a/doc/api/dns.markdown +++ b/doc/api/dns.markdown @@ -31,7 +31,7 @@ resolves the IP addresses which are returned. }); }); -## dns.lookup(hostname, [options], callback) +## dns.lookup(hostname[, options], callback) Resolves a hostname (e.g. `'google.com'`) into the first found A (IPv4) or AAAA (IPv6) record. `options` can be an object or integer. If `options` is @@ -79,7 +79,7 @@ The callback has arguments `(err, hostname, service)`. The `hostname` and On error, `err` is an `Error` object, where `err.code` is the error code. -## dns.resolve(hostname, [rrtype], callback) +## dns.resolve(hostname[, rrtype], callback) Resolves a hostname (e.g. `'google.com'`) into an array of the record types specified by rrtype. diff --git a/doc/api/events.markdown b/doc/api/events.markdown index 1625f748a17..895debba026 100644 --- a/doc/api/events.markdown +++ b/doc/api/events.markdown @@ -104,7 +104,7 @@ Returns an array of listeners for the specified event. console.log(util.inspect(server.listeners('connection'))); // [ [Function] ] -### emitter.emit(event, [arg1], [arg2], [...]) +### emitter.emit(event[, arg1][, arg2][, ...]) Execute each of the listeners in order with the supplied arguments. diff --git a/doc/api/fs.markdown b/doc/api/fs.markdown index dca35674cd3..124962c2a11 100644 --- a/doc/api/fs.markdown +++ b/doc/api/fs.markdown @@ -208,7 +208,7 @@ the completion callback. Synchronous link(2). -## fs.symlink(srcpath, dstpath, [type], callback) +## fs.symlink(srcpath, dstpath[, type], callback) Asynchronous symlink(2). No arguments other than a possible exception are given to the completion callback. @@ -217,7 +217,7 @@ is `'file'`) and is only available on Windows (ignored on other platforms). Note that Windows junction points require the destination path to be absolute. When using `'junction'`, the `destination` argument will automatically be normalized to absolute path. -## fs.symlinkSync(srcpath, dstpath, [type]) +## fs.symlinkSync(srcpath, dstpath[, type]) Synchronous symlink(2). @@ -230,7 +230,7 @@ linkString)`. Synchronous readlink(2). Returns the symbolic link's string value. -## fs.realpath(path, [cache], callback) +## fs.realpath(path[, cache], callback) Asynchronous realpath(2). The `callback` gets two arguments `(err, resolvedPath)`. May use `process.cwd` to resolve relative paths. `cache` is an @@ -245,7 +245,7 @@ Example: console.log(resolvedPath); }); -## fs.realpathSync(path, [cache]) +## fs.realpathSync(path[, cache]) Synchronous realpath(2). Returns the resolved path. @@ -267,12 +267,12 @@ to the completion callback. Synchronous rmdir(2). -## fs.mkdir(path, [mode], callback) +## fs.mkdir(path[, mode], callback) Asynchronous mkdir(2). 
No arguments other than a possible exception are given to the completion callback. `mode` defaults to `0777`. -## fs.mkdirSync(path, [mode]) +## fs.mkdirSync(path[, mode]) Synchronous mkdir(2). @@ -296,7 +296,7 @@ to the completion callback. Synchronous close(2). -## fs.open(path, flags, [mode], callback) +## fs.open(path, flags[, mode], callback) Asynchronous file open. See open(2). `flags` can be: @@ -353,7 +353,7 @@ On Linux, positional writes don't work when the file is opened in append mode. The kernel ignores the position argument and always appends the data to the end of the file. -## fs.openSync(path, flags, [mode]) +## fs.openSync(path, flags[, mode]) Synchronous version of `fs.open()`. @@ -451,7 +451,7 @@ The callback is given the three arguments, `(err, bytesRead, buffer)`. Synchronous version of `fs.read`. Returns the number of `bytesRead`. -## fs.readFile(filename, [options], callback) +## fs.readFile(filename[, options], callback) * `filename` {String} * `options` {Object} @@ -472,7 +472,7 @@ contents of the file. If no encoding is specified, then the raw buffer is returned. -## fs.readFileSync(filename, [options]) +## fs.readFileSync(filename[, options]) Synchronous version of `fs.readFile`. Returns the contents of the `filename`. @@ -480,7 +480,7 @@ If the `encoding` option is specified then this function returns a string. Otherwise it returns a buffer. -## fs.writeFile(filename, data, [options], callback) +## fs.writeFile(filename, data[, options], callback) * `filename` {String} * `data` {String | Buffer} @@ -503,11 +503,11 @@ Example: console.log('It\'s saved!'); }); -## fs.writeFileSync(filename, data, [options]) +## fs.writeFileSync(filename, data[, options]) The synchronous version of `fs.writeFile`. -## fs.appendFile(filename, data, [options], callback) +## fs.appendFile(filename, data[, options], callback) * `filename` {String} * `data` {String | Buffer} @@ -527,11 +527,11 @@ Example: console.log('The "data to append" was appended to file!'); }); -## fs.appendFileSync(filename, data, [options]) +## fs.appendFileSync(filename, data[, options]) The synchronous version of `fs.appendFile`. -## fs.watchFile(filename, [options], listener) +## fs.watchFile(filename[, options], listener) Stability: 2 - Unstable. Use fs.watch instead, if possible. @@ -557,7 +557,7 @@ These stat objects are instances of `fs.Stats`. If you want to be notified when the file was modified, not just accessed, you need to compare `curr.mtime` and `prev.mtime`. -## fs.unwatchFile(filename, [listener]) +## fs.unwatchFile(filename[, listener]) Stability: 2 - Unstable. Use fs.watch instead, if possible. @@ -568,7 +568,7 @@ have effectively stopped watching `filename`. Calling `fs.unwatchFile()` with a filename that is not being watched is a no-op, not an error. -## fs.watch(filename, [options], [listener]) +## fs.watch(filename[, options][, listener]) Stability: 2 - Unstable. @@ -728,7 +728,7 @@ Prior to Node v0.12, the `ctime` held the `birthtime` on Windows systems. Note that as of v0.12, `ctime` is not "creation time", and on Unix systems, it never was. -## fs.createReadStream(path, [options]) +## fs.createReadStream(path[, options]) Returns a new ReadStream object (See `Readable Stream`). @@ -745,6 +745,9 @@ Returns a new ReadStream object (See `Readable Stream`). the file instead of the entire file. Both `start` and `end` are inclusive and start at 0. The `encoding` can be `'utf8'`, `'ascii'`, or `'base64'`.
+If `fd` is specified, `ReadStream` will ignore the `path` argument and will use +the specified file descriptor. This means that no `open` event will be emitted. + If `autoClose` is false, then the file descriptor won't be closed, even if there's an error. It is your responsibility to close it and make sure there's no file descriptor leak. If `autoClose` is set to true (default @@ -767,7 +770,7 @@ An example to read the last 10 bytes of a file which is 100 bytes long: Emitted when the ReadStream's file is opened. -## fs.createWriteStream(path, [options]) +## fs.createWriteStream(path[, options]) Returns a new WriteStream object (See `Writable Stream`). @@ -775,6 +778,7 @@ Returns a new WriteStream object (See `Writable Stream`). { flags: 'w', encoding: null, + fd: null, mode: 0666 } `options` may also include a `start` option to allow writing data at @@ -782,6 +786,11 @@ some position past the beginning of the file. Modifying a file rather than replacing it may require a `flags` mode of `r+` rather than the default mode `w`. +Like `ReadStream` above, if `fd` is specified, `WriteStream` will ignore the +`path` argument and will use the specified file descriptor. This means that no +`open` event will be emitted. + + ## Class: fs.WriteStream `WriteStream` is a [Writable Stream](stream.html#stream_class_stream_writable). diff --git a/doc/api/http.markdown b/doc/api/http.markdown index c0a8730ca99..3d482b441c8 100644 --- a/doc/api/http.markdown +++ b/doc/api/http.markdown @@ -59,12 +59,12 @@ Found'`. ## http.createServer([requestListener]) -Returns a new web server object. +Returns a new instance of [http.Server](#http_class_http_server). The `requestListener` is a function which is automatically added to the `'request'` event. -## http.createClient([port], [host]) +## http.createClient([port][, host]) This function is **deprecated**; please use [http.request()][] instead. Constructs a new HTTP client. `port` and `host` refer to the server to be @@ -155,12 +155,12 @@ sent to the server on that socket. `function (exception, socket) { }` -If a client connection emits an 'error' event - it will forwarded here. +If a client connection emits an 'error' event, it will be forwarded here. `socket` is the `net.Socket` object that the error originated from. -### server.listen(port, [hostname], [backlog], [callback]) +### server.listen(port[, hostname][, backlog][, callback]) Begin accepting connections on the specified port and hostname. If the hostname is omitted, the server will accept connections directed to any @@ -177,7 +177,7 @@ This function is asynchronous. The last parameter `callback` will be added as a listener for the ['listening'][] event. See also [net.Server.listen(port)][]. -### server.listen(path, [callback]) +### server.listen(path[, callback]) Start a UNIX socket server listening for connections on the given `path`. @@ -185,7 +185,7 @@ This function is asynchronous. The last parameter `callback` will be added as a listener for the ['listening'][] event. See also [net.Server.listen(path)][]. -### server.listen(handle, [callback]) +### server.listen(handle[, callback]) * `handle` {Object} * `callback` {Function} @@ -275,7 +275,7 @@ After this event, no more events will be emitted on the response object. Sends a HTTP/1.1 100 Continue message to the client, indicating that the request body should be sent. See the ['checkContinue'][] event on `Server`. 
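+
+A short sketch of pairing the two (illustrative only):
+
+    var http = require('http');
+
+    var server = http.createServer();
+    server.on('checkContinue', function (req, res) {
+      // decide here whether the client may send the request body
+      res.writeContinue();
+      res.end('OK');
+    });
+    server.listen(8000);
+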
-### response.writeHead(statusCode, [statusMessage], [headers]) +### response.writeHead(statusCode[, statusMessage][, headers]) Sends a response header to the request. The status code is a 3-digit HTTP status code, like `404`. The last argument, `headers`, are the response headers. @@ -295,7 +295,7 @@ be called before [response.end()][] is called. If you call [response.write()][] or [response.end()][] before calling this, the implicit/mutable headers will be calculated and call this function for you. -Note: that Content-Length is given in bytes not characters. The above example +Note that Content-Length is given in bytes not characters. The above example works because the string `'hello world'` contains only single byte characters. If the body contains higher coded characters then `Buffer.byteLength()` should be used to determine the number of bytes in a given encoding. @@ -389,7 +389,7 @@ Example: response.removeHeader("Content-Encoding"); -### response.write(chunk, [encoding]) +### response.write(chunk[, encoding][, callback]) If this method is called and [response.writeHead()][] has not been called, it will switch to implicit header mode and flush the implicit headers. @@ -399,7 +399,8 @@ be called multiple times to provide successive parts of the body. `chunk` can be a string or a buffer. If `chunk` is a string, the second parameter specifies how to encode it into a byte stream. -By default the `encoding` is `'utf8'`. +By default the `encoding` is `'utf8'`. The last parameter `callback` +will be called when this chunk of data is flushed. **Note**: This is the raw HTTP body and has nothing to do with higher-level multi-part body encodings that may be used. @@ -412,7 +413,7 @@ first chunk of body. Returns `true` if the entire data was flushed successfully to the kernel buffer. Returns `false` if all or part of the data was queued in user memory. -`'drain'` will be emitted when the buffer is again free. +`'drain'` will be emitted when the buffer is free again. ### response.addTrailers(headers) @@ -433,18 +434,20 @@ emit trailers, with a list of the header fields in its value. E.g., response.end(); -### response.end([data], [encoding]) +### response.end([data][, encoding][, callback]) This method signals to the server that all of the response headers and body have been sent; that server should consider this message complete. The method, `response.end()`, MUST be called on each response. -If `data` is specified, it is equivalent to calling `response.write(data, encoding)` -followed by `response.end()`. +If `data` is specified, it is equivalent to calling +`response.write(data, encoding)` followed by `response.end(callback)`. +If `callback` is specified, it will be called when the response stream +is finished. -## http.request(options, [callback]) +## http.request(options[, callback]) Node maintains several connections per server to make HTTP requests. This function allows one to transparently issue requests. @@ -489,11 +492,19 @@ upload a file with a POST request, then write to the `ClientRequest` object. Example: + var postData = querystring.stringify({ + 'msg' : 'Hello World!' 
+ }); + var options = { hostname: 'www.google.com', port: 80, path: '/upload', - method: 'POST' + method: 'POST', + headers: { + 'Content-Type': 'application/x-www-form-urlencoded', + 'Content-Length': postData.length + } }; var req = http.request(options, function(res) { @@ -510,8 +521,7 @@ Example: }); // write data to request body - req.write('data\n'); - req.write('data\n'); + req.write(postData); req.end(); Note that in the example `req.end()` was called. With `http.request()` one @@ -537,7 +547,7 @@ There are a few special headers that should be noted. * Sending an Authorization header will override using the `auth` option to compute basic authentication. -## http.get(options, [callback]) +## http.get(options[, callback]) Since most requests are GET requests without bodies, Node provides this convenience method. The only difference between this method and `http.request()` @@ -857,7 +867,7 @@ That's usually what you want (it saves a TCP round-trip) but not when the first data isn't sent until possibly much later. `request.flush()` lets you bypass the optimization and kickstart the request. -### request.write(chunk, [encoding]) +### request.write(chunk[, encoding][, callback]) Sends a chunk of the body. By calling this method many times, the user can stream a request body to a @@ -870,21 +880,26 @@ The `chunk` argument should be a [Buffer][] or a string. The `encoding` argument is optional and only applies when `chunk` is a string. Defaults to `'utf8'`. +The `callback` argument is optional and will be called when this chunk of data +is flushed. -### request.end([data], [encoding]) +### request.end([data][, encoding][, callback]) Finishes sending the request. If any parts of the body are unsent, it will flush them to the stream. If the request is chunked, this will send the terminating `'0\r\n\r\n'`. If `data` is specified, it is equivalent to calling -`request.write(data, encoding)` followed by `request.end()`. +`request.write(data, encoding)` followed by `request.end(callback)`. + +If `callback` is specified, it will be called when the request stream +is finished. ### request.abort() Aborts a request. (New since v0.3.8.) -### request.setTimeout(timeout, [callback]) +### request.setTimeout(timeout[, callback]) Once a socket is assigned to this request and is connected [socket.setTimeout()][] will be called. @@ -894,7 +909,7 @@ Once a socket is assigned to this request and is connected Once a socket is assigned to this request and is connected [socket.setNoDelay()][] will be called. -### request.setSocketKeepAlive([enable], [initialDelay]) +### request.setSocketKeepAlive([enable][, initialDelay]) Once a socket is assigned to this request and is connected [socket.setKeepAlive()][] will be called. @@ -914,7 +929,7 @@ following additional events, methods, and properties. `function () { }` -Indicates that the underlaying connection was closed. +Indicates that the underlying connection was closed. Just like `'end'`, this event occurs only once per response. ### message.httpVersion diff --git a/doc/api/https.markdown b/doc/api/https.markdown index 9371db0a88d..464677e014c 100644 --- a/doc/api/https.markdown +++ b/doc/api/https.markdown @@ -18,7 +18,7 @@ See [http.Server#setTimeout()][]. See [http.Server#timeout][]. -## https.createServer(options, [requestListener]) +## https.createServer(options[, requestListener]) Returns a new HTTPS web server object. The `options` is similar to [tls.createServer()][]. 
The `requestListener` is a function which is @@ -55,9 +55,9 @@ Or }).listen(8000); -### server.listen(port, [host], [backlog], [callback]) -### server.listen(path, [callback]) -### server.listen(handle, [callback]) +### server.listen(port[, host][, backlog][, callback]) +### server.listen(path[, callback]) +### server.listen(handle[, callback]) See [http.listen()][] for details. diff --git a/doc/api/modules.markdown b/doc/api/modules.markdown index 2e08ae17246..950d72d891c 100644 --- a/doc/api/modules.markdown +++ b/doc/api/modules.markdown @@ -61,8 +61,8 @@ The module system is implemented in the `require("module")` module. <!--type=misc--> -When there are circular `require()` calls, a module might not be -done being executed when it is returned. +When there are circular `require()` calls, a module might not have finished +executing when it is returned. Consider this situation: @@ -93,7 +93,7 @@ Consider this situation: When `main.js` loads `a.js`, then `a.js` in turn loads `b.js`. At that point, `b.js` tries to load `a.js`. In order to prevent an infinite -loop an **unfinished copy** of the `a.js` exports object is returned to the +loop, an **unfinished copy** of the `a.js` exports object is returned to the `b.js` module. `b.js` then finishes loading, and its `exports` object is provided to the `a.js` module. @@ -175,6 +175,12 @@ this order: This allows programs to localize their dependencies, so that they do not clash. +You can require specific files or sub modules distributed with a module by +including a path suffix after the module name. For instance +`require('example-module/path/to/file')` would resolve `path/to/file` +relative to where `example-module` is located. The suffixed path follows the +same module resolution semantics. + ## Folders as Modules <!--type=misc--> @@ -248,7 +254,7 @@ a global but rather local to each module. * {Object} The `module.exports` object is created by the Module system. Sometimes this is not -acceptable; many want their module to be an instance of some class. To do this +acceptable; many want their module to be an instance of some class. To do this, assign the desired export object to `module.exports`. Note that assigning the desired object to `exports` will simply rebind the local `exports` variable, which is probably not what you want to do. diff --git a/doc/api/net.markdown b/doc/api/net.markdown index eb4988a7069..380f3458e4a 100644 --- a/doc/api/net.markdown +++ b/doc/api/net.markdown @@ -6,14 +6,16 @@ The `net` module provides you with an asynchronous network wrapper. It contains methods for creating both servers and clients (called streams). You can include this module with `require('net');` -## net.createServer([options], [connectionListener]) +## net.createServer([options][, connectionListener]) Creates a new TCP server. The `connectionListener` argument is automatically set as a listener for the ['connection'][] event. `options` is an object with the following defaults: - { allowHalfOpen: false + { + allowHalfOpen: false, + pauseOnConnect: false } If `allowHalfOpen` is `true`, then the socket won't automatically send a FIN @@ -21,6 +23,11 @@ packet when the other end of the socket sends a FIN packet. The socket becomes non-readable, but still writable. You should call the `end()` method explicitly. See ['end'][] event for more information. +If `pauseOnConnect` is `true`, then the socket associated with each incoming +connection will be paused, and no data will be read from its handle. 
This allows +connections to be passed between processes without any data being read by the +original process. To begin reading data from a paused socket, call `resume()`. + Here is an example of an echo server which listens for connections on port 8124: @@ -50,8 +57,8 @@ Use `nc` to connect to a UNIX domain socket server: nc -U /tmp/echo.sock -## net.connect(options, [connectionListener]) -## net.createConnection(options, [connectionListener]) +## net.connect(options[, connectionListener]) +## net.createConnection(options[, connectionListener]) A factory method, which returns a new ['net.Socket'](#net_class_net_socket) and connects to the supplied address and port. @@ -107,8 +114,8 @@ changed to var client = net.connect({path: '/tmp/echo.sock'}); -## net.connect(port, [host], [connectListener]) -## net.createConnection(port, [host], [connectListener]) +## net.connect(port[, host][, connectListener]) +## net.createConnection(port[, host][, connectListener]) Creates a TCP connection to `port` on `host`. If `host` is omitted, `'localhost'` will be assumed. @@ -117,8 +124,8 @@ The `connectListener` parameter will be added as a listener for the Is a factory method which returns a new ['net.Socket'](#net_class_net_socket). -## net.connect(path, [connectListener]) -## net.createConnection(path, [connectListener]) +## net.connect(path[, connectListener]) +## net.createConnection(path[, connectListener]) Creates a unix socket connection to `path`. The `connectListener` parameter will be added as a listener for the @@ -130,7 +137,7 @@ A factory method which returns a new ['net.Socket'](#net_class_net_socket). This class is used to create a TCP or local server. -### server.listen(port, [host], [backlog], [callback]) +### server.listen(port[, host][, backlog][, callback]) Begin accepting connections on the specified `port` and `host`. If the `host` is omitted, the server will accept connections directed to any @@ -162,7 +169,7 @@ would be to wait a second and then try again. This can be done with (Note: All sockets in Node set `SO_REUSEADDR` already) -### server.listen(path, [callback]) +### server.listen(path[, callback]) * `path` {String} * `callback` {Function} @@ -189,7 +196,7 @@ double-backslashes, such as: net.createServer().listen( path.join('\\\\?\\pipe', process.cwd(), 'myctl')) -### server.listen(handle, [callback]) +### server.listen(handle[, callback]) * `handle` {Object} * `callback` {Function} @@ -208,7 +215,7 @@ This function is asynchronous. When the server has been bound, the last parameter `callback` will be added as a listener for the ['listening'][] event. -### server.listen(options, [callback]) +### server.listen(options[, callback]) * `options` {Object} - Required. Supports the following properties: * `port` {Number} - Optional. @@ -352,8 +359,8 @@ Set `readable` and/or `writable` to `true` to allow reads and/or writes on this socket (NOTE: Works only when `fd` is passed). About `allowHalfOpen`, refer to `createServer()` and `'end'` event. -### socket.connect(port, [host], [connectListener]) -### socket.connect(path, [connectListener]) +### socket.connect(port[, host][, connectListener]) +### socket.connect(path[, connectListener]) Opens the connection for a given socket. If `port` and `host` are given, then the socket will be opened as a TCP socket; if `host` is omitted, @@ -395,7 +402,7 @@ Users who experience large or growing `bufferSize` should attempt to Set the encoding for the socket as a Readable Stream. See [stream.setEncoding()][] for more information.
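A minimal sketch of the `pauseOnConnect` option documented earlier in this file, assuming a hypothetical companion script `worker.js` that calls `resume()` on the socket it receives:

```
var net = require('net');
var fork = require('child_process').fork;

var worker = fork('worker.js'); // hypothetical companion script

var server = net.createServer({ pauseOnConnect: true }, function(socket) {
  // The handle is paused: no 'data' events fire here, so the bytes stay
  // unread until the receiving process calls socket.resume().
  worker.send('socket', socket);
});
server.listen(8124);
```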
-### socket.write(data, [encoding], [callback]) +### socket.write(data[, encoding][, callback]) Sends data on the socket. The second parameter specifies the encoding in the case of a string--it defaults to UTF8 encoding. @@ -407,7 +414,7 @@ buffer. Returns `false` if all or part of the data was queued in user memory. The optional `callback` parameter will be executed when the data is finally written out - this may not be immediately. -### socket.end([data], [encoding]) +### socket.end([data][, encoding]) Half-closes the socket, i.e., it sends a FIN packet. It is possible the server will still send some data. @@ -429,7 +436,7 @@ Useful to throttle back an upload. Resumes reading after a call to `pause()`. -### socket.setTimeout(timeout, [callback]) +### socket.setTimeout(timeout[, callback]) Sets the socket to timeout after `timeout` milliseconds of inactivity on the socket. By default `net.Socket` does not have a timeout. @@ -450,7 +457,7 @@ algorithm, they buffer data before sending it off. Setting `true` for `noDelay` will immediately fire off data each time `socket.write()` is called. `noDelay` defaults to `true`. -### socket.setKeepAlive([enable], [initialDelay]) +### socket.setKeepAlive([enable][, initialDelay]) Enable/disable keep-alive functionality, and optionally set the initial delay before the first keepalive probe is sent on an idle socket. @@ -601,5 +608,6 @@ Returns true if input is a version 6 IP address, otherwise returns false. [EventEmitter]: events.html#events_class_events_eventemitter ['listening']: #net_event_listening [server.getConnections()]: #net_server_getconnections_callback -[Readable Stream]: stream.html#stream_readable_stream +[Readable Stream]: stream.html#stream_class_stream_readable [stream.setEncoding()]: stream.html#stream_stream_setencoding_encoding +[dns.lookup()]: dns.html#dns_dns_lookup_domain_family_callback diff --git a/doc/api/path.markdown b/doc/api/path.markdown index 9446a4043aa..3489f1811f5 100644 --- a/doc/api/path.markdown +++ b/doc/api/path.markdown @@ -22,7 +22,7 @@ Example: // returns '/foo/bar/baz/asdf' -## path.join([path1], [path2], [...]) +## path.join([path1][, path2][, ...]) Join all arguments together and normalize the resulting path. @@ -127,7 +127,7 @@ Example: // returns '/foo/bar/baz/asdf' -## path.basename(p, [ext]) +## path.basename(p[, ext]) Return the last portion of a path. Similar to the Unix `basename` command. @@ -201,3 +201,55 @@ An example on Windows: process.env.PATH.split(path.delimiter) // returns ['C:\Windows\system32', 'C:\Windows', 'C:\Program Files\nodejs\'] + +## path.parse(pathString) + +Returns an object from a path string. + +An example on *nix: + + path.parse('/home/user/dir/file.txt') + // returns + { + root : "/", + dir : "/home/user/dir", + base : "file.txt", + ext : ".txt", + name : "file" + } + +An example on Windows: + + path.parse('C:\\path\\dir\\index.html') + // returns + { + root : "C:\", + dir : "C:\path\dir", + base : "index.html", + ext : ".html", + name : "index" + } + +## path.format(pathObject) + +Returns a path string from an object, the opposite of `path.parse` above. + + path.format({ + root : "/", + dir : "/home/user/dir", + base : "file.txt", + ext : ".txt", + name : "file" + }) + // returns + '/home/user/dir/file.txt' + +## path.posix + +Provides access to the aforementioned `path` methods but always interacts in +a posix-compatible way. + +## path.win32 + +Provides access to the aforementioned `path` methods but always interacts in +a win32-compatible way.
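As a usage sketch for the new `path.parse`/`path.format` pair (assuming a *nix platform), swapping a file's extension by round-tripping through the two methods:

```
var path = require('path');

var p = path.parse('/home/user/dir/file.txt');
p.base = p.name + '.md'; // format() joins dir and base back together
p.ext = '.md';
path.format(p); // => '/home/user/dir/file.md'
```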
diff --git a/doc/api/process.markdown b/doc/api/process.markdown index 66e95129c61..ba0032509a2 100644 --- a/doc/api/process.markdown +++ b/doc/api/process.markdown @@ -181,7 +181,8 @@ Example: the definition of `console.log` }; `process.stderr` and `process.stdout` are unlike other streams in Node in -that writes to them are usually blocking. +that they cannot be closed (`end()` will throw), they never emit the `finish` +event and that writes are usually blocking. - They are blocking in the case that they refer to regular files or TTY file descriptors. @@ -209,7 +210,8 @@ See [the tty docs](tty.html#tty_tty) for more information. A writable stream to stderr. `process.stderr` and `process.stdout` are unlike other streams in Node in -that writes to them are usually blocking. +that they cannot be closed (`end()` will throw), they never emit the `finish` +event and that writes are usually blocking. - They are blocking in the case that they refer to regular files or TTY file descriptors. @@ -539,7 +541,7 @@ An example of the possible output looks like: target_arch: 'x64', v8_use_snapshot: 'true' } } -## process.kill(pid, [signal]) +## process.kill(pid[, signal]) Send a signal to a process. `pid` is the process id and `signal` is the string describing the signal to send. Signal names are strings like @@ -706,7 +708,7 @@ Sets or reads the process's file mode creation mask. Child processes inherit the mask from the parent process. Returns the old mask if `mask` argument is given, otherwise returns the current mask. - var oldmask, newmask = 0644; + var oldmask, newmask = 0022; oldmask = process.umask(newmask); console.log('Changed umask from: ' + oldmask.toString(8) + diff --git a/doc/api/querystring.markdown b/doc/api/querystring.markdown index 67f7e451f7c..e907c4e7d5a 100644 --- a/doc/api/querystring.markdown +++ b/doc/api/querystring.markdown @@ -7,7 +7,7 @@ This module provides utilities for dealing with query strings. It provides the following methods: -## querystring.stringify(obj, [sep], [eq], [options]) +## querystring.stringify(obj[, sep][, eq][, options]) Serialize an object to a query string. Optionally override the default separator (`'&'`) and assignment (`'='`) @@ -33,7 +33,7 @@ Example: // returns 'w=%D6%D0%CE%C4&foo=bar' -## querystring.parse(str, [sep], [eq], [options]) +## querystring.parse(str[, sep][, eq][, options]) Deserialize a query string to an object. Optionally override the default separator (`'&'`) and assignment (`'='`) diff --git a/doc/api/readline.markdown b/doc/api/readline.markdown index 43655bc89ef..055bc39bdf9 100644 --- a/doc/api/readline.markdown +++ b/doc/api/readline.markdown @@ -30,7 +30,7 @@ the following values: - `input` - the readable stream to listen to (Required). - - `output` - the writable stream to write readline data to (Required). + - `output` - the writable stream to write readline data to (Optional). - `completer` - an optional function that is used for Tab autocompletion. See below for an example of using this. @@ -100,6 +100,9 @@ to `true` to prevent the cursor placement being reset to `0`. This will also resume the `input` stream used with `createInterface` if it has been paused. +If `output` is set to `null` or `undefined` when calling `createInterface`, the +prompt is not written. + ### rl.question(query, callback) Prepends the prompt with `query` and invokes `callback` with the user's @@ -109,6 +112,9 @@ with the user's response after it has been typed. 
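Given the new optional `output` (see the notes above and below), a minimal sketch of a prompt-less question; the callback still fires, but nothing is echoed:

```
var readline = require('readline');

// No `output` stream: the query is not written anywhere, but the
// callback still receives whatever the user types.
var rl = readline.createInterface({ input: process.stdin });

rl.question('ignored prompt', function(answer) {
  // use `answer` here
});
```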
This will also resume the `input` stream used with `createInterface` if it has been paused. +If `output` is set to `null` or `undefined` when calling `createInterface`, +nothing is displayed. + Example usage: interface.question('What is your favorite food?', function(answer) { @@ -119,6 +125,8 @@ Example usage: Pauses the readline `input` stream, allowing it to be resumed later if needed. +Note that this doesn't immediately pause the stream of events. Several events may be emitted after calling `pause`, including `line`. + ### rl.resume() Resumes the readline `input` stream. @@ -128,10 +136,11 @@ Resumes the readline `input` stream. Closes the `Interface` instance, relinquishing control on the `input` and `output` streams. The "close" event will also be emitted. -### rl.write(data, [key]) +### rl.write(data[, key]) -Writes `data` to `output` stream. `key` is an object literal to represent a key -sequence; available if the terminal is a TTY. +Writes `data` to `output` stream, unless `output` is set to `null` or +`undefined` when calling `createInterface`. `key` is an object literal to +represent a key sequence; available if the terminal is a TTY. This will also resume the `input` stream if it has been paused. diff --git a/doc/api/smalloc.markdown b/doc/api/smalloc.markdown index 72e46ea5874..ff905714e5e 100644 --- a/doc/api/smalloc.markdown +++ b/doc/api/smalloc.markdown @@ -2,18 +2,20 @@ Stability: 1 - Experimental -## smalloc.alloc(length[, receiver][, type]) +## Class: smalloc + +Buffers are backed by a simple allocator that only handles the assignment of +external raw memory. Smalloc exposes that functionality. + +### smalloc.alloc(length[, receiver][, type]) * `length` {Number} `<= smalloc.kMaxLength` -* `receiver` {Object}, Optional, Default: `new Object` -* `type` {Enum}, Optional, Default: `Uint8` +* `receiver` {Object} Default: `new Object` +* `type` {Enum} Default: `Uint8` Returns `receiver` with allocated external array data. If no `receiver` is passed then a new Object will be created and returned. -Buffers are backed by a simple allocator that only handles the assignation of -external raw memory. Smalloc exposes that functionality. - This can be used to create your own Buffer-like classes. No other properties are set, so the user will need to keep track of other necessary information (e.g. `length` of the allocation). @@ -46,13 +48,17 @@ possible options are listed in `smalloc.Types`. Example usage: // { '0': 0, '1': 0.1, '2': 0.2 } -## smalloc.copyOnto(source, sourceStart, dest, destStart, copyLength); +It is not possible to freeze, seal and prevent extensions of objects with +external data using `Object.freeze`, `Object.seal` and +`Object.preventExtensions` respectively. + +### smalloc.copyOnto(source, sourceStart, dest, destStart, copyLength); -* `source` Object with external array allocation -* `sourceStart` Position to begin copying from -* `dest` Object with external array allocation -* `destStart` Position to begin copying onto -* `copyLength` Length of copy +* `source` {Object} with external array allocation +* `sourceStart` {Number} Position to begin copying from +* `dest` {Object} with external array allocation +* `destStart` {Number} Position to begin copying onto +* `copyLength` {Number} Length of copy Copy memory from one external array allocation to another. No arguments are optional, and any violation will throw. @@ -75,7 +81,7 @@ optional, and any violation will throw.
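A small sketch of `copyOnto` under the rules just stated (all five arguments are required):

```
var smalloc = require('smalloc');

var a = smalloc.alloc(4);
var b = smalloc.alloc(4);
for (var i = 0; i < 4; i++)
  a[i] = i;

// Copy a[0..1] into b starting at index 2; omitting any argument throws.
smalloc.copyOnto(a, 0, b, 2, 2);
```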
`copyOnto` automatically detects the length of the allocation internally, so no need to set any additional properties for this to work. -## smalloc.dispose(obj) +### smalloc.dispose(obj) * `obj` Object @@ -103,21 +109,23 @@ careful. Cryptic errors may arise in applications that are difficult to trace. smalloc.copyOnto(b, 2, a, 0, 2); // now results in: - // Error: source has no external array data + // RangeError: copy_length > source_length +After `dispose()` is called, the object still behaves as one with external data; +for example, `smalloc.hasExternalData()` returns `true`. `dispose()` does not support Buffers, and will throw if passed. -## smalloc.hasExternalData(obj) +### smalloc.hasExternalData(obj) * `obj` {Object} Returns `true` if the `obj` has externally allocated memory. -## smalloc.kMaxLength +### smalloc.kMaxLength Size of maximum allocation. This is also applicable to Buffer creation. -## smalloc.Types +### smalloc.Types Enum of possible external array types. Contains: diff --git a/doc/api/stream.markdown b/doc/api/stream.markdown index bdd709f7e27..5ffcef91456 100644 --- a/doc/api/stream.markdown +++ b/doc/api/stream.markdown @@ -65,7 +65,7 @@ var server = http.createServer(function (req, res) { // Readable streams emit 'data' events once a listener is added req.on('data', function (chunk) { body += chunk; - }) + }); // the end event tells you that you have the entire body req.on('end', function () { @@ -80,8 +80,8 @@ var server = http.createServer(function (req, res) { // write back something interesting to the user: res.write(typeof data); res.end(); - }) -}) + }); +}); server.listen(1337); @@ -158,7 +158,7 @@ hadn't already. var readable = getReadableStreamSomehow(); readable.on('readable', function() { // there is some data to read now -}) +}); ``` Once the internal buffer is drained, a `readable` event will fire @@ -179,7 +179,7 @@ possible, this is the best way to do so. var readable = getReadableStreamSomehow(); readable.on('data', function(chunk) { console.log('got %d bytes of data', chunk.length); -}) +}); ``` #### Event: 'end' @@ -194,7 +194,7 @@ or by calling `read()` repeatedly until you get to the end.
var readable = getReadableStreamSomehow(); readable.on('data', function(chunk) { console.log('got %d bytes of data', chunk.length); -}) +}); readable.on('end', function() { console.log('there will be no more data.'); }); @@ -266,7 +266,7 @@ readable.setEncoding('utf8'); readable.on('data', function(chunk) { assert.equal(typeof chunk, 'string'); console.log('got %d characters of string data', chunk.length); -}) +}); ``` #### readable.resume() @@ -286,7 +286,7 @@ var readable = getReadableStreamSomehow(); readable.resume(); readable.on('end', function(chunk) { console.log('got to the end, but did not read anything'); -}) +}); ``` #### readable.pause() @@ -307,7 +307,7 @@ readable.on('data', function(chunk) { console.log('now data will start flowing again'); readable.resume(); }, 1000); -}) +}); ``` #### readable.isPaused() @@ -328,7 +328,7 @@ readable.resume() readable.isPaused() // === false ``` -#### readable.pipe(destination, [options]) +#### readable.pipe(destination[, options]) * `destination` {[Writable][] Stream} The destination for writing data * `options` {Object} Pipe options @@ -501,7 +501,7 @@ Examples of writable streams include: * [child process stdin](child_process.html#child_process_child_stdin) * [process.stdout][], [process.stderr][] -#### writable.write(chunk, [encoding], [callback]) +#### writable.write(chunk[, encoding][, callback]) * `chunk` {String | Buffer} The data to write * `encoding` {String} The encoding, if `chunk` is a String @@ -564,7 +564,15 @@ Buffered data will be flushed either at `.uncork()` or at `.end()` call. Flush all data buffered since the `.cork()` call. -#### writable.end([chunk], [encoding], [callback]) +#### writable.setDefaultEncoding(encoding) + +* `encoding` {String} The new default encoding +* Return: `Boolean` + +Sets the default encoding for a writable stream. Returns `true` if the encoding +is valid and is set. Otherwise returns `false`. + +#### writable.end([chunk][, encoding][, callback]) * `chunk` {String | Buffer} Optional data to write * `encoding` {String} The encoding, if `chunk` is a String @@ -943,7 +951,7 @@ TLS, may ignore this argument, and simply provide data whenever it becomes available. There is no need, for example, to "wait" until `size` bytes are available before calling [`stream.push(chunk)`][]. -#### readable.push(chunk, [encoding]) +#### readable.push(chunk[, encoding]) * `chunk` {Buffer | null | String} Chunk of data to push into the read queue * `encoding` {String} Encoding of String chunks. Must be a valid diff --git a/doc/api/timers.markdown b/doc/api/timers.markdown index 7ba209e5ee9..d05046a50ef 100644 --- a/doc/api/timers.markdown +++ b/doc/api/timers.markdown @@ -5,7 +5,7 @@ All of the timer functions are globals. You do not need to `require()` this module in order to use them. -## setTimeout(callback, delay, [arg], [...]) +## setTimeout(callback, delay[, arg][, ...]) To schedule execution of a one-time `callback` after `delay` milliseconds. Returns a `timeoutObject` for possible use with `clearTimeout()`. Optionally you can @@ -20,7 +20,7 @@ be called as close as possible to the time specified. Prevents a timeout from triggering. -## setInterval(callback, delay, [arg], [...]) +## setInterval(callback, delay[, arg][, ...]) To schedule the repeated execution of `callback` every `delay` milliseconds. Returns an `intervalObject` for possible use with `clearInterval()`. Optionally @@ -28,7 +28,7 @@ you can also pass arguments to the callback. ## clearInterval(intervalObject) -Stops a interval from triggering.
+Stops an interval from triggering. ## unref() @@ -47,7 +47,7 @@ If you had previously `unref()`d a timer you can call `ref()` to explicitly request the timer hold the program open. If the timer is already `ref`d, calling `ref` again will have no effect. -## setImmediate(callback, [arg], [...]) +## setImmediate(callback[, arg][, ...]) To schedule the "immediate" execution of `callback` after I/O event callbacks and before `setTimeout` and `setInterval`. Returns an @@ -56,7 +56,7 @@ can also pass arguments to the callback. Callbacks for immediates are queued in the order in which they were created. The entire callback queue is processed every event loop iteration. If you queue -an immediate from a inside an executing callback that immediate won't fire +an immediate from inside an executing callback, that immediate won't fire until the next event loop iteration. ## clearImmediate(immediateObject) diff --git a/doc/api/tls.markdown b/doc/api/tls.markdown index daa169c1083..c03845e8721 100644 --- a/doc/api/tls.markdown +++ b/doc/api/tls.markdown @@ -111,7 +111,7 @@ Example: console.log(ciphers); // ['AES128-SHA', 'AES256-SHA', ...] -## tls.createServer(options, [secureConnectionListener]) +## tls.createServer(options[, secureConnectionListener]) Creates a new [tls.Server][]. The `connectionListener` argument is automatically set as a listener for the [secureConnection][] event. The @@ -221,7 +221,7 @@ automatically set as a listener for the [secureConnection][] event. The NOTE: Automatically shared between `cluster` module workers. - - `sessionIdContext`: A string containing a opaque identifier for session + - `sessionIdContext`: A string containing an opaque identifier for session resumption. If `requestCert` is `true`, the default is an MD5 hash value generated from the command-line. Otherwise, the default is not provided. @@ -285,8 +285,8 @@ You can test this server by connecting to it with `openssl s_client`: openssl s_client -connect 127.0.0.1:8000 -## tls.connect(options, [callback]) -## tls.connect(port, [host], [options], [callback]) +## tls.connect(options[, callback]) +## tls.connect(port[, host][, options][, callback]) Creates a new client connection to the given `port` and `host` (old API) or `options.port` and `options.host`. (If `host` is omitted, it defaults to @@ -428,8 +428,6 @@ Construct a new TLSSocket object from an existing TCP socket. ## tls.createSecureContext(details) -Stability: 0 - Deprecated. Use tls.createSecureContext instead. - Creates a credentials object, with the optional details being a dictionary with keys: @@ -455,9 +453,7 @@ publicly trusted list of CAs as given in <http://mxr.mozilla.org/mozilla/source/security/nss/lib/ckfw/builtins/certdata.txt>. -## tls.createSecurePair([context], [isServer], [requestCert], [rejectUnauthorized]) - - Stability: 0 - Deprecated. Use tls.TLSSocket instead. +## tls.createSecurePair([context][, isServer][, requestCert][, rejectUnauthorized]) Creates a new secure pair object with two streams, one of which reads/writes encrypted data, and one reads/writes cleartext data. @@ -505,7 +501,7 @@ connections using TLS or SSL. `function (tlsSocket) {}` This event is emitted after a new connection has been successfully -handshaked. The argument is a instance of [tls.TLSSocket][]. It has all the +handshaked. The argument is an instance of [tls.TLSSocket][]. It has all the common stream methods and events.
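A handler sketch for the `secureConnection` event described above; the certificate file names are assumptions matching the server example earlier in this file:

```
var fs = require('fs');
var tls = require('tls');

var options = {
  key: fs.readFileSync('server-key.pem'),   // hypothetical file names
  cert: fs.readFileSync('server-cert.pem')
};

var server = tls.createServer(options, function(tlsSocket) {
  // tlsSocket is a tls.TLSSocket with the common stream API.
  if (tlsSocket.authorized) {
    tlsSocket.write('certificate verified\n');
  } else {
    tlsSocket.write('unauthorized: ' + tlsSocket.authorizationError + '\n');
  }
});
server.listen(8000);
```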
`socket.authorized` is a boolean value which indicates if the @@ -594,7 +590,7 @@ NOTE: you may want to use some npm module like [asn1.js] to parse the certificates. -### server.listen(port, [host], [callback]) +### server.listen(port[, host][, callback]) Begin accepting connections on the specified `port` and `host`. If the `host` is omitted, the server will accept connections directed to any diff --git a/doc/api/tracing.markdown b/doc/api/tracing.markdown deleted file mode 100644 index 92e6809a04a..00000000000 --- a/doc/api/tracing.markdown +++ /dev/null @@ -1,273 +0,0 @@ -# Tracing - - Stability: 1 - Experimental - -The tracing module is designed for instrumenting your Node application. It is -not meant for general purpose use. - -***Be very careful with callbacks used in conjunction with this module*** - -Many of these callbacks interact directly with asynchronous subsystems in a -synchronous fashion. That is to say, you may be in a callback where a call to -`console.log()` could result in an infinite recursive loop. Also of note, many -of these callbacks are in hot execution code paths. That is to say your -callbacks are executed quite often in the normal operation of Node, so be wary -of doing CPU bound or synchronous workloads in these functions. Consider a ring -buffer and a timer to defer processing. - -`require('tracing')` to use this module. - -## v8 - -The `v8` property is an [EventEmitter][], it exposes events and interfaces -specific to the version of `v8` built with node. These interfaces are subject -to change by upstream and are therefore not covered under the stability index. - -### Event: 'gc' - -`function (before, after) { }` - -Emitted each time a GC run is completed. - -`before` and `after` are objects with the following properties: - -``` -{ - type: 'mark-sweep-compact', - flags: 0, - timestamp: 905535650119053, - total_heap_size: 6295040, - total_heap_size_executable: 4194304, - total_physical_size: 6295040, - used_heap_size: 2855416, - heap_size_limit: 1535115264 -} -``` - -### getHeapStatistics() - -Returns an object with the following properties - -``` -{ - total_heap_size: 7326976, - total_heap_size_executable: 4194304, - total_physical_size: 7326976, - used_heap_size: 3476208, - heap_size_limit: 1535115264 -} -``` - - -# Async Listeners - -The `AsyncListener` API is the JavaScript interface for the `AsyncWrap` -class which allows developers to be notified about key events in the -lifetime of an asynchronous event. Node performs a lot of asynchronous -events internally, and significant use of this API may have a -**significant performance impact** on your application. - - -## tracing.createAsyncListener(callbacksObj[, userData]) - -* `callbacksObj` {Object} Contains optional callbacks that will fire at -specific times in the life cycle of the asynchronous event. -* `userData` {Value} a value that will be passed to all callbacks. - -Returns a constructed `AsyncListener` object. - -To begin capturing asynchronous events pass either the `callbacksObj` or -pass an existing `AsyncListener` instance to [`tracing.addAsyncListener()`][]. -The same `AsyncListener` instance can only be added once to the active -queue, and subsequent attempts to add the instance will be ignored. - -To stop capturing pass the `AsyncListener` instance to -[`tracing.removeAsyncListener()`][]. This does _not_ mean the -`AsyncListener` previously added will stop triggering callbacks. Once -attached to an asynchronous event it will persist with the lifetime of the -asynchronous call stack. 
- -Explanation of function parameters: - - -`callbacksObj`: An `Object` which may contain several optional fields: - -* `create(userData)`: A `Function` called when an asynchronous -event is instantiated. If a `Value` is returned then it will be attached -to the event and overwrite any value that had been passed to -`tracing.createAsyncListener()`'s `userData` argument. If an initial -`userData` was passed when created, then `create()` will -receive that as a function argument. - -* `before(context, userData)`: A `Function` that is called immediately -before the asynchronous callback is about to run. It will be passed both -the `context` (i.e. `this`) of the calling function and the `userData` -either returned from `create()` or passed during construction (if -either occurred). - -* `after(context, userData)`: A `Function` called immediately after -the asynchronous event's callback has run. Note this will not be called -if the callback throws and the error is not handled. - -* `error(userData, error)`: A `Function` called if the event's -callback threw. If this registered callback returns `true` then Node will -assume the error has been properly handled and resume execution normally. -When multiple `error()` callbacks have been registered only **one** of -those callbacks needs to return `true` for `AsyncListener` to accept that -the error has been handled, but all `error()` callbacks will always be run. - -`userData`: A `Value` (i.e. anything) that will be, by default, -attached to all new event instances. This will be overwritten if a `Value` -is returned by `create()`. - -Here is an example of overwriting the `userData`: - - tracing.createAsyncListener({ - create: function listener(value) { - // value === true - return false; - }, { - before: function before(context, value) { - // value === false - } - }, true); - -**Note:** The [EventEmitter][], while used to emit status of an asynchronous -event, is not itself asynchronous. So `create()` will not fire when -an event is added, and `before()`/`after()` will not fire when emitted -callbacks are called. - - -## tracing.addAsyncListener(callbacksObj[, userData]) -## tracing.addAsyncListener(asyncListener) - -Returns a constructed `AsyncListener` object and immediately adds it to -the listening queue to begin capturing asynchronous events. - -Function parameters can either be the same as -[`tracing.createAsyncListener()`][], or a constructed `AsyncListener` -object. - -Example usage for capturing errors: - - var fs = require('fs'); - - var cntr = 0; - var key = tracing.addAsyncListener({ - create: function onCreate() { - return { uid: cntr++ }; - }, - before: function onBefore(context, storage) { - // Write directly to stdout or we'll enter a recursive loop - fs.writeSync(1, 'uid: ' + storage.uid + ' is about to run\n'); - }, - after: function onAfter(context, storage) { - fs.writeSync(1, 'uid: ' + storage.uid + ' ran\n'); - }, - error: function onError(storage, err) { - // Handle known errors - if (err.message === 'everything is fine') { - // Writing to stderr this time. 
- fs.writeSync(2, 'handled error just threw:\n'); - fs.writeSync(2, err.stack + '\n'); - return true; - } - } - }); - - process.nextTick(function() { - throw new Error('everything is fine'); - }); - - // Output: - // uid: 0 is about to run - // handled error just threw: - // Error: really, it's ok - // at /tmp/test2.js:27:9 - // at process._tickCallback (node.js:583:11) - // at Function.Module.runMain (module.js:492:11) - // at startup (node.js:123:16) - // at node.js:1012:3 - -## tracing.removeAsyncListener(asyncListener) - -Removes the `AsyncListener` from the listening queue. - -Removing the `AsyncListener` from the active queue does _not_ mean the -`asyncListener` callbacks will cease to fire on the events they've been -registered. Subsequently, any asynchronous events fired during the -execution of a callback will also have the same `asyncListener` callbacks -attached for future execution. For example: - - var fs = require('fs'); - - var key = tracing.createAsyncListener({ - create: function asyncListener() { - // Write directly to stdout or we'll enter a recursive loop - fs.writeSync(1, 'You summoned me?\n'); - } - }); - - // We want to begin capturing async events some time in the future. - setTimeout(function() { - tracing.addAsyncListener(key); - - // Perform a few additional async events. - setTimeout(function() { - setImmediate(function() { - process.nextTick(function() { }); - }); - }); - - // Removing the listener doesn't mean to stop capturing events that - // have already been added. - tracing.removeAsyncListener(key); - }, 100); - - // Output: - // You summoned me? - // You summoned me? - // You summoned me? - // You summoned me? - -The fact that we logged 4 asynchronous events is an implementation detail -of Node's [Timers][]. - -To stop capturing from a specific asynchronous event stack -`tracing.removeAsyncListener()` must be called from within the call -stack itself. For example: - - var fs = require('fs'); - - var key = tracing.createAsyncListener({ - create: function asyncListener() { - // Write directly to stdout or we'll enter a recursive loop - fs.writeSync(1, 'You summoned me?\n'); - } - }); - - // We want to begin capturing async events some time in the future. - setTimeout(function() { - tracing.addAsyncListener(key); - - // Perform a few additional async events. - setImmediate(function() { - // Stop capturing from this call stack. - tracing.removeAsyncListener(key); - - process.nextTick(function() { }); - }); - }, 100); - - // Output: - // You summoned me? - -The user must be explicit and always pass the `AsyncListener` they wish -to remove. It is not possible to simply remove all listeners at once. - - -[EventEmitter]: events.html#events_class_events_eventemitter -[Timers]: timers.html -[`tracing.createAsyncListener()`]: #tracing_tracing_createasynclistener_asynclistener_callbacksobj_storagevalue -[`tracing.addAsyncListener()`]: #tracing_tracing_addasynclistener_asynclistener -[`tracing.removeAsyncListener()`]: #tracing_tracing_removeasynclistener_asynclistener diff --git a/doc/api/url.markdown b/doc/api/url.markdown index 6aa6863ecdd..3ace6995794 100644 --- a/doc/api/url.markdown +++ b/doc/api/url.markdown @@ -65,13 +65,14 @@ string will not be in the parsed object. Examples are shown for the URL The following methods are provided by the URL module: -## url.parse(urlStr, [parseQueryString], [slashesDenoteHost]) +## url.parse(urlStr[, parseQueryString][, slashesDenoteHost]) Take a URL string, and return an object. 
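A quick sketch of the `parseQueryString` behavior clarified below:

```
var url = require('url');

var parsed = url.parse('http://example.com/page?a=1', true);
// With `true`, `query` is always an object and `search` a string:
parsed.query;  // { a: '1' }
parsed.search; // '?a=1'
```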
-Pass `true` as the second argument to also parse -the query string using the `querystring` module. -Defaults to `false`. +Pass `true` as the second argument to also parse the query string using the +`querystring` module. If `true`, then the `query` property will always be +assigned an object, and the `search` property will always be a (possibly +empty) string. Defaults to `false`. Pass `true` as the third argument to treat `//foo/bar` as `{ host: 'foo', pathname: '/bar' }` rather than @@ -94,11 +95,12 @@ Take a parsed URL object, and return a formatted URL string. * `hostname` will only be used if `host` is absent. * `port` will only be used if `host` is absent. * `host` will be used in place of `hostname` and `port` -* `pathname` is treated the same with or without the leading `/` (slash) -* `search` will be used in place of `query` +* `pathname` is treated the same with or without the leading `/` (slash). +* `path` is treated the same as `pathname` but is able to contain `query` as well. +* `search` will be used in place of `query`. * `query` (object; see `querystring`) will only be used if `search` is absent. -* `search` is treated the same with or without the leading `?` (question mark) -* `hash` is treated the same with or without the leading `#` (pound sign, anchor) +* `search` is treated the same with or without the leading `?` (question mark). +* `hash` is treated the same with or without the leading `#` (pound sign, anchor). ## url.resolve(from, to) diff --git a/doc/api/util.markdown b/doc/api/util.markdown index dc6c0f64eed..cc639ddb494 100644 --- a/doc/api/util.markdown +++ b/doc/api/util.markdown @@ -43,7 +43,7 @@ environment variable set, then it will not print anything. You may separate multiple `NODE_DEBUG` environment variables with a comma. For example, `NODE_DEBUG=fs,net,tls`. -## util.format(format, [...]) +## util.format(format[, ...]) Returns a formatted string using the first argument as a `printf`-like format. @@ -81,7 +81,7 @@ Output with timestamp on `stdout`. require('util').log('Timestamped message.'); -## util.inspect(object, [options]) +## util.inspect(object[, options]) Return a string representation of `object`, which is useful for debugging. @@ -297,7 +297,7 @@ Deprecated predecessor of `console.log`. Deprecated predecessor of `console.log`. -## util.pump(readableStream, writableStream, [callback]) +## util.pump(readableStream, writableStream[, callback]) Stability: 0 - Deprecated: Use readableStream.pipe(writableStream) diff --git a/doc/api/vm.markdown b/doc/api/vm.markdown index 0f963ca4351..b1453249ec0 100644 --- a/doc/api/vm.markdown +++ b/doc/api/vm.markdown @@ -11,7 +11,7 @@ You can access this module with: JavaScript code can be compiled and run immediately or compiled, saved, and run later. -## vm.runInThisContext(code, [options]) +## vm.runInThisContext(code[, options]) `vm.runInThisContext()` compiles `code`, runs it and returns the result. Running code does not have access to local scope, but does have access to the current @@ -76,7 +76,7 @@ Returns whether or not a sandbox object has been contextified by calling `vm.createContext` on it. -## vm.runInContext(code, contextifiedSandbox, [options]) +## vm.runInContext(code, contextifiedSandbox[, options]) `vm.runInContext` compiles `code`, then runs it in `contextifiedSandbox` and returns the result. Running code does not have access to local scope.
The `vm.runInContext` takes the same options as `vm.runInThisContext`. -Example: compile and execute differnt scripts in a single existing context. +Example: compile and execute different scripts in a single existing context. var util = require('util'); var vm = require('vm'); @@ -105,7 +105,7 @@ Note that running untrusted code is a tricky business requiring great care. separate process. -## vm.runInNewContext(code, [sandbox], [options]) +## vm.runInNewContext(code[, sandbox][, options]) `vm.runInNewContext` compiles `code`, contextifies `sandbox` if passed or creates a new contextified sandbox if it's omitted, and then runs the code with @@ -204,7 +204,7 @@ The options for running a script are: execution. If execution is terminated, an `Error` will be thrown. -### script.runInContext(contextifiedSandbox, [options]) +### script.runInContext(contextifiedSandbox[, options]) Similar to `vm.runInContext` but a method of a precompiled `Script` object. `script.runInContext` runs `script`'s compiled code in `contextifiedSandbox` @@ -238,7 +238,7 @@ Note that running untrusted code is a tricky business requiring great care. requires a separate process. -### script.runInNewContext([sandbox], [options]) +### script.runInNewContext([sandbox][, options]) Similar to `vm.runInNewContext` but a method of a precompiled `Script` object. `script.runInNewContext` contextifies `sandbox` if passed or creates a new diff --git a/doc/api/zlib.markdown b/doc/api/zlib.markdown index 68087ead666..3ccd2638e1e 100644 --- a/doc/api/zlib.markdown +++ b/doc/api/zlib.markdown @@ -201,38 +201,38 @@ callback with `callback(error, result)`. Every method has a `*Sync` counterpart, which accept the same arguments, but without a callback. -## zlib.deflate(buf, [options], callback) -## zlib.deflateSync(buf, [options]) +## zlib.deflate(buf[, options], callback) +## zlib.deflateSync(buf[, options]) Compress a string with Deflate. -## zlib.deflateRaw(buf, [options], callback) -## zlib.deflateRawSync(buf, [options]) +## zlib.deflateRaw(buf[, options], callback) +## zlib.deflateRawSync(buf[, options]) Compress a string with DeflateRaw. -## zlib.gzip(buf, [options], callback) -## zlib.gzipSync(buf, [options]) +## zlib.gzip(buf[, options], callback) +## zlib.gzipSync(buf[, options]) Compress a string with Gzip. -## zlib.gunzip(buf, [options], callback) -## zlib.gunzipSync(buf, [options]) +## zlib.gunzip(buf[, options], callback) +## zlib.gunzipSync(buf[, options]) Decompress a raw Buffer with Gunzip. -## zlib.inflate(buf, [options], callback) -## zlib.inflateSync(buf, [options]) +## zlib.inflate(buf[, options], callback) +## zlib.inflateSync(buf[, options]) Decompress a raw Buffer with Inflate. -## zlib.inflateRaw(buf, [options], callback) -## zlib.inflateRawSync(buf, [options]) +## zlib.inflateRaw(buf[, options], callback) +## zlib.inflateRawSync(buf[, options]) Decompress a raw Buffer with InflateRaw. -## zlib.unzip(buf, [options], callback) -## zlib.unzipSync(buf, [options]) +## zlib.unzip(buf[, options], callback) +## zlib.unzipSync(buf[, options]) Decompress a raw Buffer with Unzip. diff --git a/lib/_debugger.js b/lib/_debugger.js index 4b01d39e5b5..b8a3177d687 100644 --- a/lib/_debugger.js +++ b/lib/_debugger.js @@ -249,6 +249,10 @@ Client.prototype._onResponse = function(res) { this._removeScript(res.body.body.script); handled = true; + } else if (res.body && res.body.event === 'compileError') { + // This event is not used anywhere right now, perhaps somewhere in the + // future? 
+ handled = true; } if (cb) { @@ -1356,6 +1360,12 @@ Interface.prototype.setBreakpoint = function(script, line, script = this.client.currentScript; } + if (script === undefined) { + this.print('Cannot determine the current script, ' + + 'make sure the debugged process is paused.'); + return; + } + if (/\(\)$/.test(script)) { // setBreakpoint('functionname()'); var req = { diff --git a/lib/_http_incoming.js b/lib/_http_incoming.js index b31754d95b1..69d3d86ec6a 100644 --- a/lib/_http_incoming.js +++ b/lib/_http_incoming.js @@ -125,11 +125,12 @@ IncomingMessage.prototype._addHeaderLines = function(headers, n) { raw = this.rawHeaders; dest = this.headers; } - raw.push.apply(raw, headers); for (var i = 0; i < n; i += 2) { var k = headers[i]; var v = headers[i + 1]; + raw.push(k); + raw.push(v); this._addHeaderLine(k, v, dest); } } diff --git a/lib/_http_outgoing.js b/lib/_http_outgoing.js index ff42e653bba..20aa3654018 100644 --- a/lib/_http_outgoing.js +++ b/lib/_http_outgoing.js @@ -82,9 +82,13 @@ function OutgoingMessage() { this.finished = false; this._hangupClose = false; + this._headerSent = false; this.socket = null; this.connection = null; + this._header = null; + this._headers = null; + this._headerNames = {}; } util.inherits(OutgoingMessage, Stream); @@ -323,23 +327,22 @@ function storeHeader(self, state, field, value) { OutgoingMessage.prototype.setHeader = function(name, value) { - if (arguments.length < 2) { - throw new Error('`name` and `value` are required for setHeader().'); - } - - if (this._header) { + if (typeof name !== 'string') + throw new TypeError('"name" should be a string'); + if (value === undefined) + throw new Error('"name" and "value" are required for setHeader().'); + if (this._header) throw new Error('Can\'t set headers after they are sent.'); - } + + if (this._headers === null) + this._headers = {}; var key = name.toLowerCase(); - this._headers = this._headers || {}; - this._headerNames = this._headerNames || {}; this._headers[key] = value; this._headerNames[key] = name; - if (automaticHeaders[key]) { + if (automaticHeaders[key]) this._removedHeader[key] = false; - } }; @@ -387,6 +390,7 @@ OutgoingMessage.prototype._renderHeaders = function() { var headers = {}; var keys = Object.keys(this._headers); + for (var i = 0, l = keys.length; i < l; i++) { var key = keys[i]; headers[this._headerNames[key]] = this._headers[key]; @@ -403,6 +407,18 @@ Object.defineProperty(OutgoingMessage.prototype, 'headersSent', { OutgoingMessage.prototype.write = function(chunk, encoding, callback) { + var self = this; + + if (this.finished) { + var err = new Error('write after end'); + process.nextTick(function() { + self.emit('error', err); + if (callback) callback(err); + }); + + return true; + } + if (!this._header) { this._implicitHeader(); } diff --git a/lib/_stream_writable.js b/lib/_stream_writable.js index dbc227bbaf3..39eee61460f 100644 --- a/lib/_stream_writable.js +++ b/lib/_stream_writable.js @@ -28,6 +28,7 @@ Writable.WritableState = WritableState; var util = require('util'); var Stream = require('stream'); +var debug = util.debuglog('stream'); util.inherits(Writable, Stream); @@ -35,6 +36,7 @@ function WriteReq(chunk, encoding, cb) { this.chunk = chunk; this.encoding = encoding; this.callback = cb; + this.next = null; } function WritableState(options, stream) { @@ -109,7 +111,8 @@ function WritableState(options, stream) { // the amount that is being written when _write is called. 
this.writelen = 0; - this.buffer = []; + this.bufferedRequest = null; + this.lastBufferedRequest = null; // number of pending user-supplied write callbacks // this must be 0 before 'finish' can be emitted @@ -123,6 +126,23 @@ function WritableState(options, stream) { this.errorEmitted = false; } +WritableState.prototype.getBuffer = function writableStateGetBuffer() { + var current = this.bufferedRequest; + var out = []; + while (current) { + out.push(current); + current = current.next; + } + return out; +}; + +Object.defineProperty(WritableState.prototype, 'buffer', { + get: util.deprecate(function() { + return this.getBuffer(); + }, '_writableState.buffer is deprecated. Use ' + + '_writableState.getBuffer() instead.') +}); + function Writable(options) { // Writable ctor is applied to Duplexes, though they're not // instanceof Writable, they're instanceof Readable. @@ -216,11 +236,20 @@ Writable.prototype.uncork = function() { !state.corked && !state.finished && !state.bufferProcessing && - state.buffer.length) + state.bufferedRequest) clearBuffer(this, state); } }; +Writable.prototype.setDefaultEncoding = function setDefaultEncoding(encoding) { + // node::ParseEncoding() requires lower case. + if (typeof encoding === 'string') + encoding = encoding.toLowerCase(); + if (!Buffer.isEncoding(encoding)) + throw new TypeError('Unknown encoding: ' + encoding); + this._writableState.defaultEncoding = encoding; +}; + function decodeChunk(state, chunk, encoding) { if (!state.objectMode && state.decodeStrings !== false && @@ -246,8 +275,15 @@ function writeOrBuffer(stream, state, chunk, encoding, cb) { if (!ret) state.needDrain = true; - if (state.writing || state.corked) - state.buffer.push(new WriteReq(chunk, encoding, cb)); + if (state.writing || state.corked) { + var last = state.lastBufferedRequest; + state.lastBufferedRequest = new WriteReq(chunk, encoding, cb); + if (last) { + last.next = state.lastBufferedRequest; + } else { + state.bufferedRequest = state.lastBufferedRequest; + } + } else doWrite(stream, state, false, len, chunk, encoding, cb); @@ -304,7 +340,7 @@ function onwrite(stream, er) { if (!finished && !state.corked && !state.bufferProcessing && - state.buffer.length) { + state.bufferedRequest) { clearBuffer(stream, state); } @@ -340,17 +376,23 @@ function onwriteDrain(stream, state) { // if there's something in the buffer waiting, then process it function clearBuffer(stream, state) { state.bufferProcessing = true; + var entry = state.bufferedRequest; - if (stream._writev && state.buffer.length > 1) { + if (stream._writev && entry && entry.next) { // Fast case, write everything using _writev() + var buffer = []; var cbs = []; - for (var c = 0; c < state.buffer.length; c++) - cbs.push(state.buffer[c].callback); + while (entry) { + cbs.push(entry.callback); + buffer.push(entry); + entry = entry.next; + } // count the one we are adding, as well. 
// TODO(isaacs) clean this up state.pendingcb++; - doWrite(stream, state, true, state.length, state.buffer, '', function(err) { + state.lastBufferedRequest = null; + doWrite(stream, state, true, state.length, buffer, '', function(err) { for (var i = 0; i < cbs.length; i++) { state.pendingcb--; cbs[i](err); @@ -358,34 +400,29 @@ function clearBuffer(stream, state) { }); // Clear buffer - state.buffer = []; } else { // Slow case, write chunks one-by-one - for (var c = 0; c < state.buffer.length; c++) { - var entry = state.buffer[c]; + while (entry) { var chunk = entry.chunk; var encoding = entry.encoding; var cb = entry.callback; var len = state.objectMode ? 1 : chunk.length; doWrite(stream, state, false, len, chunk, encoding, cb); - + entry = entry.next; // if we didn't call the onwrite immediately, then // it means that we need to wait until it does. // also, that means that the chunk and cb are currently // being processed, so move the buffer counter past them. if (state.writing) { - c++; break; } } - if (c < state.buffer.length) - state.buffer = state.buffer.slice(c); - else - state.buffer.length = 0; + if (entry === null) + state.lastBufferedRequest = null; } - + state.bufferedRequest = entry; state.bufferProcessing = false; } @@ -426,7 +463,7 @@ Writable.prototype.end = function(chunk, encoding, cb) { function needFinish(stream, state) { return (state.ending && state.length === 0 && - state.buffer.length === 0 && + state.bufferedRequest === null && !state.finished && !state.writing); } diff --git a/lib/buffer.js b/lib/buffer.js index bd69a972003..2e29ae4eba7 100644 --- a/lib/buffer.js +++ b/lib/buffer.js @@ -49,23 +49,31 @@ function Buffer(subject, encoding) { if (!util.isBuffer(this)) return new Buffer(subject, encoding); - if (util.isNumber(subject)) + if (util.isNumber(subject)) { this.length = subject > 0 ? subject >>> 0 : 0; - else if (util.isString(subject)) - this.length = Buffer.byteLength(subject, encoding = encoding || 'utf8'); - else if (util.isObject(subject)) { + + } else if (util.isString(subject)) { + if (!util.isString(encoding) || encoding.length === 0) + encoding = 'utf8'; + this.length = Buffer.byteLength(subject, encoding); + + // Handle Arrays, Buffers, Uint8Arrays or JSON. + } else if (util.isObject(subject)) { if (subject.type === 'Buffer' && util.isArray(subject.data)) subject = subject.data; - + // Must use floor() because array length may be > kMaxLength. this.length = +subject.length > 0 ? Math.floor(+subject.length) : 0; - } else + + } else { throw new TypeError('must start with number, buffer, array or string'); + } if (this.length > kMaxLength) { throw new RangeError('Attempt to allocate Buffer larger than maximum ' + 'size: 0x' + kMaxLength.toString(16) + ' bytes'); } + this.parent = undefined; if (this.length <= (Buffer.poolSize >>> 1) && this.length > 0) { if (this.length > poolSize - poolOffset) createPool(); @@ -78,25 +86,31 @@ function Buffer(subject, encoding) { alloc(this, this.length); } - if (!util.isNumber(subject)) { - if (util.isString(subject)) { - // In the case of base64 it's possible that the size of the buffer - // allocated was slightly too large. In this case we need to rewrite - // the length to the actual length written. 
- var len = this.write(subject, encoding); - - // Buffer was truncated after decode, realloc internal ExternalArray - if (len !== this.length) { - this.length = len; - truncate(this, this.length); - } - } else { - if (util.isBuffer(subject)) - subject.copy(this, 0, 0, this.length); - else if (util.isNumber(subject.length) || util.isArray(subject)) - for (var i = 0; i < this.length; i++) - this[i] = subject[i]; + if (util.isNumber(subject)) { + return; + } + + if (util.isString(subject)) { + // In the case of base64 it's possible that the size of the buffer + // allocated was slightly too large. In this case we need to rewrite + // the length to the actual length written. + var len = this.write(subject, encoding); + // Buffer was truncated after decode, realloc internal ExternalArray + if (len !== this.length) { + var prevLen = this.length; + this.length = len; + truncate(this, this.length); + poolOffset -= (prevLen - len); } + + } else if (util.isBuffer(subject)) { + subject.copy(this, 0, 0, this.length); + + } else if (util.isNumber(subject.length) || util.isArray(subject)) { + // Really crappy way to handle Uint8Arrays, but V8 doesn't give a simple + // way to access the data from the C++ API. + for (var i = 0; i < this.length; i++) + this[i] = subject[i]; } } @@ -117,7 +131,9 @@ function SlowBuffer(length) { // Objects created in C++. Significantly faster than calling the Buffer // function. function NativeBuffer(length) { - this.length = length; + this.length = length >>> 0; + // Set this to keep the object map the same. + this.parent = undefined; } NativeBuffer.prototype = Buffer.prototype; @@ -217,11 +233,6 @@ Buffer.byteLength = function(str, enc) { }; -// pre-set for values that may exist in the future -Buffer.prototype.length = undefined; -Buffer.prototype.parent = undefined; - - // toString(encoding, start=0, end=buffer.length) Buffer.prototype.toString = function(encoding, start, end) { var loweredCase = false; @@ -297,6 +308,29 @@ Buffer.prototype.compare = function compare(b) { }; +Buffer.prototype.fill = function fill(val, start, end) { + start = start >> 0; + end = (end === undefined) ? 
this.length : end >> 0; + + if (start < 0 || end > this.length) + throw new RangeError('out of range index'); + if (end <= start) + return this; + + if (typeof val !== 'string') { + val = val >>> 0; + } else if (val.length === 1) { + var code = val.charCodeAt(0); + if (code < 256) + val = code; + } + + internal.fill(this, val, start, end); + + return this; +}; + + // XXX remove in v0.13 Buffer.prototype.get = util.deprecate(function get(offset) { offset = ~~offset; @@ -461,6 +495,37 @@ function checkOffset(offset, ext, length) { } +Buffer.prototype.readUIntLE = function(offset, byteLength, noAssert) { + offset = offset >>> 0; + byteLength = byteLength >>> 0; + if (!noAssert) + checkOffset(offset, byteLength, this.length); + + var val = this[offset]; + var mul = 1; + var i = 0; + while (++i < byteLength && (mul *= 0x100)) + val += this[offset + i] * mul; + + return val; +}; + + +Buffer.prototype.readUIntBE = function(offset, byteLength, noAssert) { + offset = offset >>> 0; + byteLength = byteLength >>> 0; + if (!noAssert) + checkOffset(offset, byteLength, this.length); + + var val = this[offset + --byteLength]; + var mul = 1; + while (byteLength > 0 && (mul *= 0x100)) + val += this[offset + --byteLength] * mul; + + return val; +}; + + Buffer.prototype.readUInt8 = function(offset, noAssert) { offset = offset >>> 0; if (!noAssert) @@ -509,6 +574,46 @@ Buffer.prototype.readUInt32BE = function(offset, noAssert) { }; +Buffer.prototype.readIntLE = function(offset, byteLength, noAssert) { + offset = offset >>> 0; + byteLength = byteLength >>> 0; + if (!noAssert) + checkOffset(offset, byteLength, this.length); + + var val = this[offset]; + var mul = 1; + var i = 0; + while (++i < byteLength && (mul *= 0x100)) + val += this[offset + i] * mul; + mul *= 0x80; + + if (val >= mul) + val -= Math.pow(2, 8 * byteLength); + + return val; +}; + + +Buffer.prototype.readIntBE = function(offset, byteLength, noAssert) { + offset = offset >>> 0; + byteLength = byteLength >>> 0; + if (!noAssert) + checkOffset(offset, byteLength, this.length); + + var i = byteLength; + var mul = 1; + var val = this[offset + --i]; + while (i > 0 && (mul *= 0x100)) + val += this[offset + --i] * mul; + mul *= 0x80; + + if (val >= mul) + val -= Math.pow(2, 8 * byteLength); + + return val; +}; + + Buffer.prototype.readInt8 = function(offset, noAssert) { offset = offset >>> 0; if (!noAssert) @@ -560,6 +665,38 @@ Buffer.prototype.readInt32BE = function(offset, noAssert) { }; +Buffer.prototype.readFloatLE = function readFloatLE(offset, noAssert) { + offset = offset >>> 0; + if (!noAssert) + checkOffset(offset, 4, this.length); + return internal.readFloatLE(this, offset); +}; + + +Buffer.prototype.readFloatBE = function readFloatBE(offset, noAssert) { + offset = offset >>> 0; + if (!noAssert) + checkOffset(offset, 4, this.length); + return internal.readFloatBE(this, offset); +}; + + +Buffer.prototype.readDoubleLE = function readDoubleLE(offset, noAssert) { + offset = offset >>> 0; + if (!noAssert) + checkOffset(offset, 8, this.length); + return internal.readDoubleLE(this, offset); +}; + + +Buffer.prototype.readDoubleBE = function readDoubleBE(offset, noAssert) { + offset = offset >>> 0; + if (!noAssert) + checkOffset(offset, 8, this.length); + return internal.readDoubleBE(this, offset); +}; + + function checkInt(buffer, value, offset, ext, max, min) { if (!(buffer instanceof Buffer)) throw new TypeError('buffer must be a Buffer instance'); @@ -570,6 +707,40 @@ function checkInt(buffer, value, offset, ext, max, min) { } 
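// A usage sketch for the variable-width integer accessors added in this file
// (readUIntLE/BE and readIntLE/BE above; writeUIntLE/BE and writeIntLE/BE
// below). Values one to six bytes wide round-trip exactly:
//
//   var b = new Buffer(3);
//   b.writeUIntLE(0x123456, 0, 3);  // stores bytes 56 34 12
//   b.readUIntLE(0, 3);             // => 0x123456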
+Buffer.prototype.writeUIntLE = function(value, offset, byteLength, noAssert) { + value = +value; + offset = offset >>> 0; + byteLength = byteLength >>> 0; + if (!noAssert) + checkInt(this, value, offset, byteLength, Math.pow(2, 8 * byteLength), 0); + + var mul = 1; + var i = 0; + this[offset] = value; + while (++i < byteLength && (mul *= 0x100)) + this[offset + i] = (value / mul) >>> 0; + + return offset + byteLength; +}; + + +Buffer.prototype.writeUIntBE = function(value, offset, byteLength, noAssert) { + value = +value; + offset = offset >>> 0; + byteLength = byteLength >>> 0; + if (!noAssert) + checkInt(this, value, offset, byteLength, Math.pow(2, 8 * byteLength), 0); + + var i = byteLength - 1; + var mul = 1; + this[offset + i] = value; + while (--i >= 0 && (mul *= 0x100)) + this[offset + i] = (value / mul) >>> 0; + + return offset + byteLength; +}; + + Buffer.prototype.writeUInt8 = function(value, offset, noAssert) { value = +value; offset = offset >>> 0; @@ -628,6 +799,52 @@ Buffer.prototype.writeUInt32BE = function(value, offset, noAssert) { }; +Buffer.prototype.writeIntLE = function(value, offset, byteLength, noAssert) { + value = +value; + offset = offset >>> 0; + if (!noAssert) { + checkInt(this, + value, + offset, + byteLength, + Math.pow(2, 8 * byteLength - 1) - 1, + -Math.pow(2, 8 * byteLength - 1)); + } + + var i = 0; + var mul = 1; + var sub = value < 0 ? 1 : 0; + this[offset] = value; + while (++i < byteLength && (mul *= 0x100)) + this[offset + i] = ((value / mul) >> 0) - sub; + + return offset + byteLength; +}; + + +Buffer.prototype.writeIntBE = function(value, offset, byteLength, noAssert) { + value = +value; + offset = offset >>> 0; + if (!noAssert) { + checkInt(this, + value, + offset, + byteLength, + Math.pow(2, 8 * byteLength - 1) - 1, + -Math.pow(2, 8 * byteLength - 1)); + } + + var i = byteLength - 1; + var mul = 1; + var sub = value < 0 ? 
1 : 0; + this[offset + i] = value; + while (--i >= 0 && (mul *= 0x100)) + this[offset + i] = ((value / mul) >> 0) - sub; + + return offset + byteLength; +}; + + Buffer.prototype.writeInt8 = function(value, offset, noAssert) { value = +value; offset = offset >>> 0; @@ -684,3 +901,51 @@ Buffer.prototype.writeInt32BE = function(value, offset, noAssert) { this[offset + 3] = value; return offset + 4; }; + + +function checkFloat(buffer, value, offset, ext) { + if (!(buffer instanceof Buffer)) + throw new TypeError('buffer must be a Buffer instance'); + if (offset + ext > buffer.length) + throw new RangeError('index out of range'); +} + + +Buffer.prototype.writeFloatLE = function writeFloatLE(val, offset, noAssert) { + val = +val; + offset = offset >>> 0; + if (!noAssert) + checkFloat(this, val, offset, 4); + internal.writeFloatLE(this, val, offset); + return offset + 4; +}; + + +Buffer.prototype.writeFloatBE = function writeFloatBE(val, offset, noAssert) { + val = +val; + offset = offset >>> 0; + if (!noAssert) + checkFloat(this, val, offset, 4); + internal.writeFloatBE(this, val, offset); + return offset + 4; +}; + + +Buffer.prototype.writeDoubleLE = function writeDoubleLE(val, offset, noAssert) { + val = +val; + offset = offset >>> 0; + if (!noAssert) + checkFloat(this, val, offset, 8); + internal.writeDoubleLE(this, val, offset); + return offset + 8; +}; + + +Buffer.prototype.writeDoubleBE = function writeDoubleBE(val, offset, noAssert) { + val = +val; + offset = offset >>> 0; + if (!noAssert) + checkFloat(this, val, offset, 8); + internal.writeDoubleBE(this, val, offset); + return offset + 8; +}; diff --git a/lib/child_process.js b/lib/child_process.js index 23963828c0c..25ec30af9a4 100644 --- a/lib/child_process.js +++ b/lib/child_process.js @@ -27,8 +27,11 @@ var assert = require('assert'); var util = require('util'); var Process = process.binding('process_wrap').Process; +var WriteWrap = process.binding('stream_wrap').WriteWrap; var uv = process.binding('uv'); +var debug = util.debuglog('child_process'); + var spawn_sync; // Lazy-loaded process.binding('spawn_sync') var constants; // Lazy-loaded process.binding('constants') @@ -320,6 +323,7 @@ function handleMessage(target, message, handle) { message.cmd.slice(0, INTERNAL_PREFIX.length) === INTERNAL_PREFIX) { eventName = 'internalMessage'; } + debug('%s: %j handle:', eventName, message, handle); target.emit(eventName, message, handle); } @@ -409,6 +413,7 @@ function setupChannel(target, channel) { }); target.send = function(message, handle) { + debug('send message: %j handle:', message, handle); if (!this.connected) this.emit('error', new Error('channel closed')); else @@ -473,7 +478,8 @@ function setupChannel(target, channel) { return; } - var req = { oncomplete: nop }; + var req = new WriteWrap(); + req.oncomplete = nop; var string = JSON.stringify(message) + '\n'; var err = channel.writeUtf8String(req, string, handle); @@ -930,28 +936,29 @@ function _validateStdio(stdio, sync) { } -function normalizeSpawnArguments(/*file, args, options*/) { +function normalizeSpawnArguments(file /*, args, options*/) { var args, options; - var file = arguments[0]; - if (Array.isArray(arguments[1])) { args = arguments[1].slice(0); options = arguments[2]; - } else if (arguments[1] && !Array.isArray(arguments[1])) { + } else if (arguments[1] !== undefined && !util.isObject(arguments[1])) { throw new TypeError('Incorrect value of args option'); } else { args = []; options = arguments[1]; } - if (!options) + if (options === undefined) options = {}; + else if 
(!util.isObject(options)) + throw new TypeError('options argument must be an object'); args.unshift(file); - var env = (options && options.env ? options.env : null) || process.env; + var env = options.env || process.env; var envPairs = []; + for (var key in env) { envPairs.push(key + '=' + env[key]); } @@ -969,24 +976,19 @@ function normalizeSpawnArguments(/*file, args, options*/) { var spawn = exports.spawn = function(/*file, args, options*/) { var opts = normalizeSpawnArguments.apply(null, arguments); - - var file = opts.file; - var args = opts.args; var options = opts.options; - var envPairs = opts.envPairs; - var child = new ChildProcess(); child.spawn({ - file: file, - args: args, - cwd: options ? options.cwd : null, - windowsVerbatimArguments: !!(options && options.windowsVerbatimArguments), - detached: !!(options && options.detached), - envPairs: envPairs, - stdio: options ? options.stdio : null, - uid: options ? options.uid : null, - gid: options ? options.gid : null + file: opts.file, + args: opts.args, + cwd: options.cwd, + windowsVerbatimArguments: !!options.windowsVerbatimArguments, + detached: !!options.detached, + envPairs: opts.envPairs, + stdio: options.stdio, + uid: options.uid, + gid: options.gid }); return child; @@ -1264,6 +1266,7 @@ function spawnSync(/*file, args, options*/) { options.file = opts.file; options.args = opts.args; + options.envPairs = opts.envPairs; if (options.killSignal) options.killSignal = lookupSignal(options.killSignal); @@ -1340,7 +1343,7 @@ function checkExecSyncError(ret) { function execFileSync(/*command, options*/) { var opts = normalizeSpawnArguments.apply(null, arguments); - var inheritStderr = !!!opts.options.stdio; + var inheritStderr = !opts.options.stdio; var ret = spawnSync(opts.file, opts.args.slice(1), opts.options); @@ -1359,7 +1362,7 @@ exports.execFileSync = execFileSync; function execSync(/*comand, options*/) { var opts = normalizeExecArgs.apply(null, arguments); - var inheritStderr = opts.options ? !!!opts.options.stdio : true; + var inheritStderr = opts.options ? !opts.options.stdio : true; var ret = spawnSync(opts.file, opts.args, opts.options); ret.cmd = opts.cmd; diff --git a/lib/cluster.js b/lib/cluster.js index c994f433b96..60aca0c8186 100644 --- a/lib/cluster.js +++ b/lib/cluster.js @@ -28,6 +28,8 @@ var util = require('util'); var SCHED_NONE = 1; var SCHED_RR = 2; +var debug = util.debuglog('cluster'); + var cluster = new EventEmitter; module.exports = cluster; cluster.Worker = Worker; @@ -266,16 +268,34 @@ function masterInit() { assert(schedulingPolicy === SCHED_NONE || schedulingPolicy === SCHED_RR, 'Bad cluster.schedulingPolicy: ' + schedulingPolicy); - process.on('internalMessage', function(message) { - if (message.cmd !== 'NODE_DEBUG_ENABLED') return; - var key; - for (key in cluster.workers) - process._debugProcess(cluster.workers[key].process.pid); + var hasDebugArg = process.execArgv.some(function(argv) { + return /^(--debug|--debug-brk)(=\d+)?$/.test(argv); }); process.nextTick(function() { cluster.emit('setup', settings); }); + + // Send debug signal only if not started in debug mode, this helps a lot + // on windows, because RegisterDebugHandler is not called when node starts + // with --debug.* arg. 
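For illustration (not part of the patch): the hasDebugArg test above matches only the bare debug switches, so the early return that follows fires exactly when node itself was started under the debugger. A quick sketch of what the execArgv regex accepts:

    var re = /^(--debug|--debug-brk)(=\d+)?$/;
    re.test('--debug');            // true
    re.test('--debug-brk=5859');   // true
    re.test('--debug-port=5859');  // false: signal-based handling still used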
+ if (hasDebugArg) + return; + + process.on('internalMessage', function(message) { + if (message.cmd !== 'NODE_DEBUG_ENABLED') return; + var key; + for (key in cluster.workers) { + var worker = cluster.workers[key]; + if (worker.state === 'online') { + process._debugProcess(worker.process.pid); + } else { + worker.once('online', function() { + process._debugProcess(this.process.pid); + }); + } + } + }); }; function createWorkerProcess(id, env) { @@ -438,6 +458,7 @@ function masterInit() { message.fd]; var key = args.join(':'); var handle = handles[key]; + debug('queryServer worker %d key `%s` handle: %j', worker.id, key, handle); if (util.isUndefined(handle)) { var constructor = RoundRobinHandle; // UDP is exempt from round-robin connection balancing for what should diff --git a/lib/crypto.js b/lib/crypto.js index a38ccb77c16..2f0a00b1523 100644 --- a/lib/crypto.js +++ b/lib/crypto.js @@ -376,6 +376,11 @@ function DiffieHellman(sizeOrKey, keyEncoding, generator, genEncoding) { if (!(this instanceof DiffieHellman)) return new DiffieHellman(sizeOrKey, keyEncoding, generator, genEncoding); + if (!util.isBuffer(sizeOrKey) && + typeof sizeOrKey !== 'number' && + typeof sizeOrKey !== 'string') + throw new TypeError('First argument should be number, string or Buffer'); + if (keyEncoding) { if (typeof keyEncoding !== 'string' || (!Buffer.isEncoding(keyEncoding) && keyEncoding !== 'buffer')) { diff --git a/lib/dgram.js b/lib/dgram.js index aae2f51bc86..d1bfa14caa0 100644 --- a/lib/dgram.js +++ b/lib/dgram.js @@ -25,6 +25,7 @@ var events = require('events'); var constants = require('constants'); var UDP = process.binding('udp_wrap').UDP; +var SendWrap = process.binding('udp_wrap').SendWrap; var BIND_STATE_UNBOUND = 0; var BIND_STATE_BINDING = 1; @@ -317,7 +318,9 @@ Socket.prototype.send = function(buffer, self.emit('error', ex); } else if (self._handle) { - var req = { buffer: buffer, length: length }; // Keep reference alive. + var req = new SendWrap(); + req.buffer = buffer; // Keep reference alive. 
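This plain-object-to-wrap conversion is the pattern the change applies across the tree: WriteWrap in child_process and net, SendWrap here in dgram, GetAddrInfoReqWrap and GetNameInfoReqWrap in dns, FSReqWrap in fs. For illustration only (process.binding() is internal, not public API), the before/after shape looks like:

    // before: request state on a plain object literal
    //   var req = { oncomplete: afterSend, buffer: buffer, length: length };

    // after: an instance of the C++-backed wrap class, with the same
    // properties attached so `buffer` stays reachable until the
    // completion callback fires
    var SendWrap = process.binding('udp_wrap').SendWrap;
    var req = new SendWrap();
    req.oncomplete = afterSend;
    req.buffer = buffer;   // keep reference alive
    req.length = length;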
+ req.length = length; if (callback) { req.callback = callback; req.oncomplete = afterSend; diff --git a/lib/dns.js b/lib/dns.js index 4eb34d6651b..18023fab162 100644 --- a/lib/dns.js +++ b/lib/dns.js @@ -25,6 +25,9 @@ var util = require('util'); var cares = process.binding('cares_wrap'); var uv = process.binding('uv'); +var GetAddrInfoReqWrap = cares.GetAddrInfoReqWrap; +var GetNameInfoReqWrap = cares.GetNameInfoReqWrap; + var isIp = net.isIP; @@ -142,12 +145,11 @@ exports.lookup = function lookup(hostname, options, callback) { return {}; } - var req = { - callback: callback, - family: family, - hostname: hostname, - oncomplete: onlookup - }; + var req = new GetAddrInfoReqWrap(); + req.callback = callback; + req.family = family; + req.hostname = hostname; + req.oncomplete = onlookup; var err = cares.getaddrinfo(req, hostname, family, hints); if (err) { @@ -178,12 +180,12 @@ exports.lookupService = function(host, port, callback) { callback = makeAsync(callback); - var req = { - callback: callback, - host: host, - port: port, - oncomplete: onlookupservice - }; + var req = new GetNameInfoReqWrap(); + req.callback = callback; + req.host = host; + req.port = port; + req.oncomplete = onlookupservice; + var err = cares.getnameinfo(req, host, port); if (err) throw errnoException(err, 'getnameinfo', host); diff --git a/lib/fs.js b/lib/fs.js index 3301a6af847..a97ba3aa639 100644 --- a/lib/fs.js +++ b/lib/fs.js @@ -33,6 +33,7 @@ var constants = process.binding('constants'); var fs = exports; var Stream = require('stream').Stream; var EventEmitter = require('events').EventEmitter; +var FSReqWrap = binding.FSReqWrap; var Readable = Stream.Readable; var Writable = Stream.Writable; @@ -182,7 +183,9 @@ fs.Stats.prototype.isSocket = function() { fs.exists = function(path, callback) { if (!nullCheck(path, cb)) return; - binding.stat(pathModule._makeLong(path), cb); + var req = new FSReqWrap(); + req.oncomplete = cb; + binding.stat(pathModule._makeLong(path), req); function cb(err, stats) { if (callback) callback(err ? false : true); } @@ -421,7 +424,9 @@ Object.defineProperty(exports, '_stringToFlags', { // list to make the arguments clear. fs.close = function(fd, callback) { - binding.close(fd, makeCallback(callback)); + var req = new FSReqWrap(); + req.oncomplete = makeCallback(callback); + binding.close(fd, req); }; fs.closeSync = function(fd) { @@ -443,10 +448,14 @@ fs.open = function(path, flags, mode, callback) { mode = modeNum(mode, 438 /*=0666*/); if (!nullCheck(path, callback)) return; + + var req = new FSReqWrap(); + req.oncomplete = callback; + binding.open(pathModule._makeLong(path), stringToFlags(flags), mode, - callback); + req); }; fs.openSync = function(path, flags, mode) { @@ -482,7 +491,10 @@ fs.read = function(fd, buffer, offset, length, position, callback) { callback && callback(err, bytesRead || 0, buffer); } - binding.read(fd, buffer, offset, length, position, wrapper); + var req = new FSReqWrap(); + req.oncomplete = wrapper; + + binding.read(fd, buffer, offset, length, position, req); }; fs.readSync = function(fd, buffer, offset, length, position) { @@ -515,6 +527,16 @@ fs.readSync = function(fd, buffer, offset, length, position) { // OR // fs.write(fd, string[, position[, encoding]], callback); fs.write = function(fd, buffer, offset, length, position, callback) { + function strWrapper(err, written) { + // Retain a reference to buffer so that it can't be GC'ed too soon. 
+ callback(err, written || 0, buffer); + } + + function bufWrapper(err, written) { + // retain reference to string in case it's external + callback(err, written || 0, buffer); + } + if (util.isBuffer(buffer)) { // if no position is passed then assume null if (util.isFunction(position)) { @@ -522,11 +544,9 @@ fs.write = function(fd, buffer, offset, length, position, callback) { position = null; } callback = maybeCallback(callback); - var wrapper = function(err, written) { - // Retain a reference to buffer so that it can't be GC'ed too soon. - callback(err, written || 0, buffer); - }; - return binding.writeBuffer(fd, buffer, offset, length, position, wrapper); + var req = new FSReqWrap(); + req.oncomplete = strWrapper; + return binding.writeBuffer(fd, buffer, offset, length, position, req); } if (util.isString(buffer)) @@ -541,11 +561,9 @@ fs.write = function(fd, buffer, offset, length, position, callback) { length = 'utf8'; } callback = maybeCallback(position); - position = function(err, written) { - // retain reference to string in case it's external - callback(err, written || 0, buffer); - }; - return binding.writeString(fd, buffer, offset, length, position); + var req = new FSReqWrap(); + req.oncomplete = bufWrapper; + return binding.writeString(fd, buffer, offset, length, req); }; // usage: @@ -569,9 +587,11 @@ fs.rename = function(oldPath, newPath, callback) { callback = makeCallback(callback); if (!nullCheck(oldPath, callback)) return; if (!nullCheck(newPath, callback)) return; + var req = new FSReqWrap(); + req.oncomplete = callback; binding.rename(pathModule._makeLong(oldPath), pathModule._makeLong(newPath), - callback); + req); }; fs.renameSync = function(oldPath, newPath) { @@ -583,8 +603,9 @@ fs.renameSync = function(oldPath, newPath) { fs.truncate = function(path, len, callback) { if (util.isNumber(path)) { - // legacy - return fs.ftruncate(path, len, callback); + var req = new FSReqWrap(); + req.oncomplete = callback; + return fs.ftruncate(path, len, req); } if (util.isFunction(len)) { callback = len; @@ -592,14 +613,17 @@ fs.truncate = function(path, len, callback) { } else if (util.isUndefined(len)) { len = 0; } + callback = maybeCallback(callback); fs.open(path, 'r+', function(er, fd) { if (er) return callback(er); - binding.ftruncate(fd, len, function(er) { + var req = new FSReqWrap(); + req.oncomplete = function ftruncateCb(er) { fs.close(fd, function(er2) { callback(er || er2); }); - }); + }; + binding.ftruncate(fd, len, req); }); }; @@ -628,7 +652,9 @@ fs.ftruncate = function(fd, len, callback) { } else if (util.isUndefined(len)) { len = 0; } - binding.ftruncate(fd, len, makeCallback(callback)); + var req = new FSReqWrap(); + req.oncomplete = makeCallback(callback); + binding.ftruncate(fd, len, req); }; fs.ftruncateSync = function(fd, len) { @@ -639,9 +665,11 @@ fs.ftruncateSync = function(fd, len) { }; fs.rmdir = function(path, callback) { - callback = makeCallback(callback); + callback = maybeCallback(callback); if (!nullCheck(path, callback)) return; - binding.rmdir(pathModule._makeLong(path), callback); + var req = new FSReqWrap(); + req.oncomplete = callback; + binding.rmdir(pathModule._makeLong(path), req); }; fs.rmdirSync = function(path) { @@ -650,7 +678,9 @@ fs.rmdirSync = function(path) { }; fs.fdatasync = function(fd, callback) { - binding.fdatasync(fd, makeCallback(callback)); + var req = new FSReqWrap(); + req.oncomplete = makeCallback(callback); + binding.fdatasync(fd, req); }; fs.fdatasyncSync = function(fd) { @@ -658,7 +688,9 @@ fs.fdatasyncSync = 
function(fd) { }; fs.fsync = function(fd, callback) { - binding.fsync(fd, makeCallback(callback)); + var req = new FSReqWrap(); + req.oncomplete = makeCallback(callback); + binding.fsync(fd, req); }; fs.fsyncSync = function(fd) { @@ -669,9 +701,11 @@ fs.mkdir = function(path, mode, callback) { if (util.isFunction(mode)) callback = mode; callback = makeCallback(callback); if (!nullCheck(path, callback)) return; + var req = new FSReqWrap(); + req.oncomplete = callback; binding.mkdir(pathModule._makeLong(path), modeNum(mode, 511 /*=0777*/), - callback); + req); }; fs.mkdirSync = function(path, mode) { @@ -683,7 +717,9 @@ fs.mkdirSync = function(path, mode) { fs.readdir = function(path, callback) { callback = makeCallback(callback); if (!nullCheck(path, callback)) return; - binding.readdir(pathModule._makeLong(path), callback); + var req = new FSReqWrap(); + req.oncomplete = callback; + binding.readdir(pathModule._makeLong(path), req); }; fs.readdirSync = function(path) { @@ -692,19 +728,25 @@ fs.readdirSync = function(path) { }; fs.fstat = function(fd, callback) { - binding.fstat(fd, makeCallback(callback)); + var req = new FSReqWrap(); + req.oncomplete = makeCallback(callback); + binding.fstat(fd, req); }; fs.lstat = function(path, callback) { callback = makeCallback(callback); if (!nullCheck(path, callback)) return; - binding.lstat(pathModule._makeLong(path), callback); + var req = new FSReqWrap(); + req.oncomplete = callback; + binding.lstat(pathModule._makeLong(path), req); }; fs.stat = function(path, callback) { callback = makeCallback(callback); if (!nullCheck(path, callback)) return; - binding.stat(pathModule._makeLong(path), callback); + var req = new FSReqWrap(); + req.oncomplete = callback; + binding.stat(pathModule._makeLong(path), req); }; fs.fstatSync = function(fd) { @@ -724,7 +766,9 @@ fs.statSync = function(path) { fs.readlink = function(path, callback) { callback = makeCallback(callback); if (!nullCheck(path, callback)) return; - binding.readlink(pathModule._makeLong(path), callback); + var req = new FSReqWrap(); + req.oncomplete = callback; + binding.readlink(pathModule._makeLong(path), req); }; fs.readlinkSync = function(path) { @@ -752,10 +796,13 @@ fs.symlink = function(destination, path, type_, callback) { if (!nullCheck(destination, callback)) return; if (!nullCheck(path, callback)) return; + var req = new FSReqWrap(); + req.oncomplete = callback; + binding.symlink(preprocessSymlinkDestination(destination, type), pathModule._makeLong(path), type, - callback); + req); }; fs.symlinkSync = function(destination, path, type) { @@ -774,9 +821,12 @@ fs.link = function(srcpath, dstpath, callback) { if (!nullCheck(srcpath, callback)) return; if (!nullCheck(dstpath, callback)) return; + var req = new FSReqWrap(); + req.oncomplete = callback; + binding.link(pathModule._makeLong(srcpath), pathModule._makeLong(dstpath), - callback); + req); }; fs.linkSync = function(srcpath, dstpath) { @@ -789,7 +839,9 @@ fs.linkSync = function(srcpath, dstpath) { fs.unlink = function(path, callback) { callback = makeCallback(callback); if (!nullCheck(path, callback)) return; - binding.unlink(pathModule._makeLong(path), callback); + var req = new FSReqWrap(); + req.oncomplete = callback; + binding.unlink(pathModule._makeLong(path), req); }; fs.unlinkSync = function(path) { @@ -798,7 +850,9 @@ fs.unlinkSync = function(path) { }; fs.fchmod = function(fd, mode, callback) { - binding.fchmod(fd, modeNum(mode), makeCallback(callback)); + var req = new FSReqWrap(); + req.oncomplete = 
makeCallback(callback); + binding.fchmod(fd, modeNum(mode), req); }; fs.fchmodSync = function(fd, mode) { @@ -848,9 +902,11 @@ if (constants.hasOwnProperty('O_SYMLINK')) { fs.chmod = function(path, mode, callback) { callback = makeCallback(callback); if (!nullCheck(path, callback)) return; + var req = new FSReqWrap(); + req.oncomplete = callback; binding.chmod(pathModule._makeLong(path), modeNum(mode), - callback); + req); }; fs.chmodSync = function(path, mode) { @@ -877,7 +933,9 @@ if (constants.hasOwnProperty('O_SYMLINK')) { } fs.fchown = function(fd, uid, gid, callback) { - binding.fchown(fd, uid, gid, makeCallback(callback)); + var req = new FSReqWrap(); + req.oncomplete = makeCallback(callback); + binding.fchown(fd, uid, gid, req); }; fs.fchownSync = function(fd, uid, gid) { @@ -887,7 +945,9 @@ fs.fchownSync = function(fd, uid, gid) { fs.chown = function(path, uid, gid, callback) { callback = makeCallback(callback); if (!nullCheck(path, callback)) return; - binding.chown(pathModule._makeLong(path), uid, gid, callback); + var req = new FSReqWrap(); + req.oncomplete = callback; + binding.chown(pathModule._makeLong(path), uid, gid, req); }; fs.chownSync = function(path, uid, gid) { @@ -913,10 +973,12 @@ fs._toUnixTimestamp = toUnixTimestamp; fs.utimes = function(path, atime, mtime, callback) { callback = makeCallback(callback); if (!nullCheck(path, callback)) return; + var req = new FSReqWrap(); + req.oncomplete = callback; binding.utimes(pathModule._makeLong(path), toUnixTimestamp(atime), toUnixTimestamp(mtime), - callback); + req); }; fs.utimesSync = function(path, atime, mtime) { @@ -929,7 +991,9 @@ fs.utimesSync = function(path, atime, mtime) { fs.futimes = function(fd, atime, mtime, callback) { atime = toUnixTimestamp(atime); mtime = toUnixTimestamp(mtime); - binding.futimes(fd, atime, mtime, makeCallback(callback)); + var req = new FSReqWrap(); + req.oncomplete = makeCallback(callback); + binding.futimes(fd, atime, mtime, req); }; fs.futimesSync = function(fd, atime, mtime) { diff --git a/lib/module.js b/lib/module.js index 564f6c49d6c..da51cf5ec0a 100644 --- a/lib/module.js +++ b/lib/module.js @@ -360,8 +360,8 @@ Module.prototype.load = function(filename) { // Loads a module at the given file path. Returns that module's // `exports` property. Module.prototype.require = function(path) { - assert(util.isString(path), 'path must be a string'); assert(path, 'missing path'); + assert(util.isString(path), 'path must be a string'); return Module._load(path, this); }; diff --git a/lib/net.js b/lib/net.js index 478c04a1f83..0ece1b0f97c 100644 --- a/lib/net.js +++ b/lib/net.js @@ -28,6 +28,11 @@ var cares = process.binding('cares_wrap'); var uv = process.binding('uv'); var Pipe = process.binding('pipe_wrap').Pipe; +var TCPConnectWrap = process.binding('tcp_wrap').TCPConnectWrap; +var PipeConnectWrap = process.binding('pipe_wrap').PipeConnectWrap; +var ShutdownWrap = process.binding('stream_wrap').ShutdownWrap; +var WriteWrap = process.binding('stream_wrap').WriteWrap; + var cluster; var errnoException = util._errnoException; @@ -180,8 +185,16 @@ function Socket(options) { // if we have a handle, then start the flow of data into the // buffer. 
if not, then this will happen when we connect - if (this._handle && options.readable !== false) - this.read(0); + if (this._handle && options.readable !== false) { + if (options.pauseOnCreate) { + // stop the handle from reading and pause the stream + this._handle.reading = false; + this._handle.readStop(); + this._readableState.flowing = false; + } else { + this.read(0); + } + } } util.inherits(Socket, stream.Duplex); @@ -210,7 +223,8 @@ function onSocketFinish() { if (!this._handle || !this._handle.shutdown) return this.destroy(); - var req = { oncomplete: afterShutdown }; + var req = new ShutdownWrap(); + req.oncomplete = afterShutdown; var err = this._handle.shutdown(req); if (err) @@ -633,7 +647,9 @@ Socket.prototype._writeGeneric = function(writev, data, encoding, cb) { return false; } - var req = { oncomplete: afterWrite, async: false }; + var req = new WriteWrap(); + req.oncomplete = afterWrite; + req.async = false; var err; if (writev) { @@ -716,7 +732,7 @@ Socket.prototype.__defineGetter__('bytesWritten', function() { data = this._pendingData, encoding = this._pendingEncoding; - state.buffer.forEach(function(el) { + state.getBuffer().forEach(function(el) { if (util.isBuffer(el.chunk)) bytes += el.chunk.length; else @@ -769,34 +785,18 @@ function connect(self, address, port, addressType, localAddress, localPort) { assert.ok(self._connecting); var err; - if (localAddress || localPort) { - if (localAddress && !exports.isIP(localAddress)) - err = new TypeError( - 'localAddress should be a valid IP: ' + localAddress); - - if (localPort && !util.isNumber(localPort)) - err = new TypeError('localPort should be a number: ' + localPort); + if (localAddress || localPort) { var bind; - switch (addressType) { - case 4: - if (!localAddress) - localAddress = '0.0.0.0'; - bind = self._handle.bind; - break; - case 6: - if (!localAddress) - localAddress = '::'; - bind = self._handle.bind6; - break; - default: - err = new TypeError('Invalid addressType: ' + addressType); - break; - } - - if (err) { - self._destroy(err); + if (addressType === 4) { + localAddress = localAddress || '0.0.0.0'; + bind = self._handle.bind; + } else if (addressType === 6) { + localAddress = localAddress || '::'; + bind = self._handle.bind6; + } else { + self._destroy(new TypeError('Invalid addressType: ' + addressType)); return; } @@ -813,18 +813,18 @@ function connect(self, address, port, addressType, localAddress, localPort) { } } - var req = { oncomplete: afterConnect }; if (addressType === 6 || addressType === 4) { - port = port | 0; - if (port <= 0 || port > 65535) - throw new RangeError('Port should be > 0 and < 65536'); + var req = new TCPConnectWrap(); + req.oncomplete = afterConnect; - if (addressType === 6) { - err = self._handle.connect6(req, address, port); - } else if (addressType === 4) { + if (addressType === 4) err = self._handle.connect(req, address, port); - } + else + err = self._handle.connect6(req, address, port); + } else { + var req = new PipeConnectWrap(); + req.oncomplete = afterConnect; err = self._handle.connect(req, address, afterConnect); } @@ -879,19 +879,26 @@ Socket.prototype.connect = function(options, cb) { if (pipe) { connect(self, options.path); - } else if (!options.host) { - debug('connect: missing host'); - self._host = '127.0.0.1'; - connect(self, self._host, options.port, 4); - } else { var dns = require('dns'); - var host = options.host; + var host = options.host || 'localhost'; + var port = options.port | 0; + var localAddress = options.localAddress; + var localPort = 
options.localPort; var dnsopts = { family: options.family, hints: 0 }; + if (localAddress && !exports.isIP(localAddress)) + throw new TypeError('localAddress must be a valid IP: ' + localAddress); + + if (localPort && !util.isNumber(localPort)) + throw new TypeError('localPort should be a number: ' + localPort); + + if (port <= 0 || port > 65535) + throw new RangeError('port should be > 0 and < 65536: ' + port); + if (dnsopts.family !== 4 && dnsopts.family !== 6) dnsopts.hints = dns.ADDRCONFIG | dns.V4MAPPED; @@ -917,19 +924,12 @@ Socket.prototype.connect = function(options, cb) { }); } else { timers._unrefActive(self); - - addressType = addressType || 4; - - // node_net.cc handles null host names graciously but user land - // expects remoteAddress to have a meaningful value - ip = ip || (addressType === 4 ? '127.0.0.1' : '0:0:0:0:0:0:0:1'); - connect(self, ip, - options.port, + port, addressType, - options.localAddress, - options.localPort); + localAddress, + localPort); } }); } @@ -1016,7 +1016,7 @@ function Server(/* [ options, ] listener */) { set: util.deprecate(function(val) { return (self._connections = val); }, 'connections property is deprecated. Use getConnections() method'), - configurable: true, enumerable: true + configurable: true, enumerable: false }); this._handle = null; @@ -1024,6 +1024,7 @@ function Server(/* [ options, ] listener */) { this._slaves = []; this.allowHalfOpen = options.allowHalfOpen || false; + this.pauseOnConnect = !!options.pauseOnConnect; } util.inherits(Server, events.EventEmitter); exports.Server = Server; @@ -1287,7 +1288,8 @@ function onconnection(err, clientHandle) { var socket = new Socket({ handle: clientHandle, - allowHalfOpen: self.allowHalfOpen + allowHalfOpen: self.allowHalfOpen, + pauseOnCreate: self.pauseOnConnect }); socket.readable = socket.writable = true; diff --git a/lib/path.js b/lib/path.js index 44ef0d68e25..1765ef075d3 100644 --- a/lib/path.js +++ b/lib/path.js @@ -25,425 +25,525 @@ var util = require('util'); // resolves . and .. 
elements in a path array with directory names there -// must be no slashes, empty elements, or device names (c:\) in the array +// must be no slashes or device names (c:\) in the array // (so also no leading and trailing slashes - it does not distinguish // relative and absolute paths) function normalizeArray(parts, allowAboveRoot) { - // if the path tries to go above the root, `up` ends up > 0 - var up = 0; - for (var i = parts.length - 1; i >= 0; i--) { - var last = parts[i]; - if (last === '.') { - parts.splice(i, 1); - } else if (last === '..') { - parts.splice(i, 1); - up++; - } else if (up) { - parts.splice(i, 1); - up--; + var res = []; + for (var i = 0; i < parts.length; i++) { + var p = parts[i]; + + // ignore empty parts + if (!p || p === '.') + continue; + + if (p === '..') { + if (res.length && res[res.length - 1] !== '..') { + res.pop(); + } else if (allowAboveRoot) { + res.push('..'); + } + } else { + res.push(p); } } - // if the path is allowed to go above the root, restore leading ..s - if (allowAboveRoot) { - for (; up--; up) { - parts.unshift('..'); + return res; +} + +// Regex to split a windows path into three parts: [*, device, slash, +// tail] windows-only +var splitDeviceRe = + /^([a-zA-Z]:|[\\\/]{2}[^\\\/]+[\\\/]+[^\\\/]+)?([\\\/])?([\s\S]*?)$/; + +// Regex to split the tail part of the above into [*, dir, basename, ext] +var splitTailRe = + /^([\s\S]*?)((?:\.{1,2}|[^\\\/]+?|)(\.[^.\/\\]*|))(?:[\\\/]*)$/; + +var win32 = {}; + +// Function to split a filename into [root, dir, basename, ext] +function win32SplitPath(filename) { + // Separate device+slash from tail + var result = splitDeviceRe.exec(filename), + device = (result[1] || '') + (result[2] || ''), + tail = result[3] || ''; + // Split the tail into dir, basename and extension + var result2 = splitTailRe.exec(tail), + dir = result2[1], + basename = result2[2], + ext = result2[3]; + return [device, dir, basename, ext]; +} + +var normalizeUNCRoot = function(device) { + return '\\\\' + device.replace(/^[\\\/]+/, '').replace(/[\\\/]+/g, '\\'); +}; + +// path.resolve([from ...], to) +win32.resolve = function() { + var resolvedDevice = '', + resolvedTail = '', + resolvedAbsolute = false; + + for (var i = arguments.length - 1; i >= -1; i--) { + var path; + if (i >= 0) { + path = arguments[i]; + } else if (!resolvedDevice) { + path = process.cwd(); + } else { + // Windows has the concept of drive-specific current working + // directories. If we've resolved a drive letter but not yet an + // absolute path, get cwd for that drive. We're sure the device is not + // an unc path at this points, because unc paths are always absolute. + path = process.env['=' + resolvedDevice]; + // Verify that a drive-local cwd was found and that it actually points + // to our drive. If not, default to the drive's root. 
+ if (!path || path.substr(0, 3).toLowerCase() !== + resolvedDevice.toLowerCase() + '\\') { + path = resolvedDevice + '\\'; + } + } + + // Skip empty and invalid entries + if (!util.isString(path)) { + throw new TypeError('Arguments to path.resolve must be strings'); + } else if (!path) { + continue; + } + + var result = splitDeviceRe.exec(path), + device = result[1] || '', + isUnc = device && device.charAt(1) !== ':', + isAbsolute = win32.isAbsolute(path), + tail = result[3]; + + if (device && + resolvedDevice && + device.toLowerCase() !== resolvedDevice.toLowerCase()) { + // This path points to another device so it is not applicable + continue; + } + + if (!resolvedDevice) { + resolvedDevice = device; + } + if (!resolvedAbsolute) { + resolvedTail = tail + '\\' + resolvedTail; + resolvedAbsolute = isAbsolute; + } + + if (resolvedDevice && resolvedAbsolute) { + break; } } - return parts; -} + // Convert slashes to backslashes when `resolvedDevice` points to an UNC + // root. Also squash multiple slashes into a single one where appropriate. + if (isUnc) { + resolvedDevice = normalizeUNCRoot(resolvedDevice); + } + // At this point the path should be resolved to a full absolute path, + // but handle relative paths to be safe (might happen when process.cwd() + // fails) -if (isWindows) { - // Regex to split a windows path into three parts: [*, device, slash, - // tail] windows-only - var splitDeviceRe = - /^([a-zA-Z]:|[\\\/]{2}[^\\\/]+[\\\/]+[^\\\/]+)?([\\\/])?([\s\S]*?)$/; - - // Regex to split the tail part of the above into [*, dir, basename, ext] - var splitTailRe = - /^([\s\S]*?)((?:\.{1,2}|[^\\\/]+?|)(\.[^.\/\\]*|))(?:[\\\/]*)$/; - - // Function to split a filename into [root, dir, basename, ext] - // windows version - var splitPath = function(filename) { - // Separate device+slash from tail - var result = splitDeviceRe.exec(filename), - device = (result[1] || '') + (result[2] || ''), - tail = result[3] || ''; - // Split the tail into dir, basename and extension - var result2 = splitTailRe.exec(tail), - dir = result2[1], - basename = result2[2], - ext = result2[3]; - return [device, dir, basename, ext]; - }; + // Normalize the tail path + resolvedTail = normalizeArray(resolvedTail.split(/[\\\/]+/), + !resolvedAbsolute).join('\\'); - var normalizeUNCRoot = function(device) { - return '\\\\' + device.replace(/^[\\\/]+/, '').replace(/[\\\/]+/g, '\\'); - }; + // If device is a drive letter, we'll normalize to lower case. + if (resolvedDevice && resolvedDevice.charAt(1) === ':') { + resolvedDevice = resolvedDevice[0].toLowerCase() + + resolvedDevice.substr(1); + } - // path.resolve([from ...], to) - // windows version - exports.resolve = function() { - var resolvedDevice = '', - resolvedTail = '', - resolvedAbsolute = false; - - for (var i = arguments.length - 1; i >= -1; i--) { - var path; - if (i >= 0) { - path = arguments[i]; - } else if (!resolvedDevice) { - path = process.cwd(); - } else { - // Windows has the concept of drive-specific current working - // directories. If we've resolved a drive letter but not yet an - // absolute path, get cwd for that drive. We're sure the device is not - // an unc path at this points, because unc paths are always absolute. - path = process.env['=' + resolvedDevice]; - // Verify that a drive-local cwd was found and that it actually points - // to our drive. If not, default to the drive's root. 
- if (!path || path.substr(0, 3).toLowerCase() !== - resolvedDevice.toLowerCase() + '\\') { - path = resolvedDevice + '\\'; - } - } + return (resolvedDevice + (resolvedAbsolute ? '\\' : '') + resolvedTail) || + '.'; +}; - // Skip empty and invalid entries - if (!util.isString(path)) { - throw new TypeError('Arguments to path.resolve must be strings'); - } else if (!path) { - continue; - } - var result = splitDeviceRe.exec(path), - device = result[1] || '', - isUnc = device && device.charAt(1) !== ':', - isAbsolute = exports.isAbsolute(path), - tail = result[3]; - - if (device && - resolvedDevice && - device.toLowerCase() !== resolvedDevice.toLowerCase()) { - // This path points to another device so it is not applicable - continue; - } +win32.normalize = function(path) { + var result = splitDeviceRe.exec(path), + device = result[1] || '', + isUnc = device && device.charAt(1) !== ':', + isAbsolute = win32.isAbsolute(path), + tail = result[3], + trailingSlash = /[\\\/]$/.test(tail); - if (!resolvedDevice) { - resolvedDevice = device; - } - if (!resolvedAbsolute) { - resolvedTail = tail + '\\' + resolvedTail; - resolvedAbsolute = isAbsolute; - } + // If device is a drive letter, we'll normalize to lower case. + if (device && device.charAt(1) === ':') { + device = device[0].toLowerCase() + device.substr(1); + } - if (resolvedDevice && resolvedAbsolute) { - break; - } - } + // Normalize the tail path + tail = normalizeArray(tail.split(/[\\\/]+/), !isAbsolute).join('\\'); - // Convert slashes to backslashes when `resolvedDevice` points to an UNC - // root. Also squash multiple slashes into a single one where appropriate. - if (isUnc) { - resolvedDevice = normalizeUNCRoot(resolvedDevice); - } + if (!tail && !isAbsolute) { + tail = '.'; + } + if (tail && trailingSlash) { + tail += '\\'; + } - // At this point the path should be resolved to a full absolute path, - // but handle relative paths to be safe (might happen when process.cwd() - // fails) + // Convert slashes to backslashes when `device` points to an UNC root. + // Also squash multiple slashes into a single one where appropriate. + if (isUnc) { + device = normalizeUNCRoot(device); + } - // Normalize the tail path + return device + (isAbsolute ? '\\' : '') + tail; +}; - function f(p) { - return !!p; + +win32.isAbsolute = function(path) { + var result = splitDeviceRe.exec(path), + device = result[1] || '', + isUnc = !!device && device.charAt(1) !== ':'; + // UNC paths are always absolute + return !!result[2] || isUnc; +}; + +win32.join = function() { + function f(p) { + if (!util.isString(p)) { + throw new TypeError('Arguments to path.join must be strings'); } + return p; + } - resolvedTail = normalizeArray(resolvedTail.split(/[\\\/]+/).filter(f), - !resolvedAbsolute).join('\\'); + var paths = Array.prototype.filter.call(arguments, f); + var joined = paths.join('\\'); + + // Make sure that the joined path doesn't start with two slashes, because + // normalize() will mistake it for an UNC path then. + // + // This step is skipped when it is very clear that the user actually + // intended to point at an UNC path. This is assumed when the first + // non-empty string arguments starts with exactly two slashes followed by + // at least one more non-slash character. + // + // Note that for normalize() to treat a path as an UNC path it needs to + // have at least 2 components, so we don't filter for that here. 
+ // This means that the user can use join to construct UNC paths from + // a server name and a share name; for example: + // path.join('//server', 'share') -> '\\\\server\\share\') + if (!/^[\\\/]{2}[^\\\/]/.test(paths[0])) { + joined = joined.replace(/^[\\\/]{2,}/, '\\'); + } - return (resolvedDevice + (resolvedAbsolute ? '\\' : '') + resolvedTail) || - '.'; - }; + return win32.normalize(joined); +}; - // windows version - exports.normalize = function(path) { - var result = splitDeviceRe.exec(path), - device = result[1] || '', - isUnc = device && device.charAt(1) !== ':', - isAbsolute = exports.isAbsolute(path), - tail = result[3], - trailingSlash = /[\\\/]$/.test(tail); - // If device is a drive letter, we'll normalize to lower case. - if (device && device.charAt(1) === ':') { - device = device[0].toLowerCase() + device.substr(1); - } +// path.relative(from, to) +// it will solve the relative path from 'from' to 'to', for instance: +// from = 'C:\\orandea\\test\\aaa' +// to = 'C:\\orandea\\impl\\bbb' +// The output of the function should be: '..\\..\\impl\\bbb' +win32.relative = function(from, to) { + from = win32.resolve(from); + to = win32.resolve(to); - // Normalize the tail path - tail = normalizeArray(tail.split(/[\\\/]+/).filter(function(p) { - return !!p; - }), !isAbsolute).join('\\'); + // windows is not case sensitive + var lowerFrom = from.toLowerCase(); + var lowerTo = to.toLowerCase(); - if (!tail && !isAbsolute) { - tail = '.'; - } - if (tail && trailingSlash) { - tail += '\\'; + function trim(arr) { + var start = 0; + for (; start < arr.length; start++) { + if (arr[start] !== '') break; } - // Convert slashes to backslashes when `device` points to an UNC root. - // Also squash multiple slashes into a single one where appropriate. - if (isUnc) { - device = normalizeUNCRoot(device); + var end = arr.length - 1; + for (; end >= 0; end--) { + if (arr[end] !== '') break; } - return device + (isAbsolute ? '\\' : '') + tail; - }; + if (start > end) return []; + return arr.slice(start, end + 1); + } - // windows version - exports.isAbsolute = function(path) { - var result = splitDeviceRe.exec(path), - device = result[1] || '', - isUnc = !!device && device.charAt(1) !== ':'; - // UNC paths are always absolute - return !!result[2] || isUnc; - }; + var toParts = trim(to.split('\\')); - // windows version - exports.join = function() { - function f(p) { - if (!util.isString(p)) { - throw new TypeError('Arguments to path.join must be strings'); - } - return p; - } + var lowerFromParts = trim(lowerFrom.split('\\')); + var lowerToParts = trim(lowerTo.split('\\')); - var paths = Array.prototype.filter.call(arguments, f); - var joined = paths.join('\\'); - - // Make sure that the joined path doesn't start with two slashes, because - // normalize() will mistake it for an UNC path then. - // - // This step is skipped when it is very clear that the user actually - // intended to point at an UNC path. This is assumed when the first - // non-empty string arguments starts with exactly two slashes followed by - // at least one more non-slash character. - // - // Note that for normalize() to treat a path as an UNC path it needs to - // have at least 2 components, so we don't filter for that here. 
- // This means that the user can use join to construct UNC paths from - // a server name and a share name; for example: - // path.join('//server', 'share') -> '\\\\server\\share\') - if (!/^[\\\/]{2}[^\\\/]/.test(paths[0])) { - joined = joined.replace(/^[\\\/]{2,}/, '\\'); + var length = Math.min(lowerFromParts.length, lowerToParts.length); + var samePartsLength = length; + for (var i = 0; i < length; i++) { + if (lowerFromParts[i] !== lowerToParts[i]) { + samePartsLength = i; + break; } + } - return exports.normalize(joined); - }; + if (samePartsLength == 0) { + return to; + } - // path.relative(from, to) - // it will solve the relative path from 'from' to 'to', for instance: - // from = 'C:\\orandea\\test\\aaa' - // to = 'C:\\orandea\\impl\\bbb' - // The output of the function should be: '..\\..\\impl\\bbb' - // windows version - exports.relative = function(from, to) { - from = exports.resolve(from); - to = exports.resolve(to); - - // windows is not case sensitive - var lowerFrom = from.toLowerCase(); - var lowerTo = to.toLowerCase(); - - function trim(arr) { - var start = 0; - for (; start < arr.length; start++) { - if (arr[start] !== '') break; - } + var outputParts = []; + for (var i = samePartsLength; i < lowerFromParts.length; i++) { + outputParts.push('..'); + } - var end = arr.length - 1; - for (; end >= 0; end--) { - if (arr[end] !== '') break; - } + outputParts = outputParts.concat(toParts.slice(samePartsLength)); - if (start > end) return []; - return arr.slice(start, end + 1); - } + return outputParts.join('\\'); +}; - var toParts = trim(to.split('\\')); - var lowerFromParts = trim(lowerFrom.split('\\')); - var lowerToParts = trim(lowerTo.split('\\')); +win32._makeLong = function(path) { + // Note: this will *probably* throw somewhere. + if (!util.isString(path)) + return path; - var length = Math.min(lowerFromParts.length, lowerToParts.length); - var samePartsLength = length; - for (var i = 0; i < length; i++) { - if (lowerFromParts[i] !== lowerToParts[i]) { - samePartsLength = i; - break; - } - } + if (!path) { + return ''; + } - if (samePartsLength == 0) { - return to; - } + var resolvedPath = win32.resolve(path); - var outputParts = []; - for (var i = samePartsLength; i < lowerFromParts.length; i++) { - outputParts.push('..'); - } + if (/^[a-zA-Z]\:\\/.test(resolvedPath)) { + // path is local filesystem path, which needs to be converted + // to long UNC path. + return '\\\\?\\' + resolvedPath; + } else if (/^\\\\[^?.]/.test(resolvedPath)) { + // path is network UNC path, which needs to be converted + // to long UNC path. + return '\\\\?\\UNC\\' + resolvedPath.substring(2); + } - outputParts = outputParts.concat(toParts.slice(samePartsLength)); + return path; +}; - return outputParts.join('\\'); - }; - exports.sep = '\\'; - exports.delimiter = ';'; +win32.dirname = function(path) { + var result = win32SplitPath(path), + root = result[0], + dir = result[1]; -} else /* posix */ { + if (!root && !dir) { + // No dirname whatsoever + return '.'; + } - // Split a filename into [root, dir, basename, ext], unix version - // 'root' is just a slash, or nothing. 
- var splitPathRe = - /^(\/?|)([\s\S]*?)((?:\.{1,2}|[^\/]+?|)(\.[^.\/]*|))(?:[\/]*)$/; - var splitPath = function(filename) { - return splitPathRe.exec(filename).slice(1); - }; + if (dir) { + // It has a dirname, strip trailing slash + dir = dir.substr(0, dir.length - 1); + } - // path.resolve([from ...], to) - // posix version - exports.resolve = function() { - var resolvedPath = '', - resolvedAbsolute = false; + return root + dir; +}; - for (var i = arguments.length - 1; i >= -1 && !resolvedAbsolute; i--) { - var path = (i >= 0) ? arguments[i] : process.cwd(); - // Skip empty and invalid entries - if (!util.isString(path)) { - throw new TypeError('Arguments to path.resolve must be strings'); - } else if (!path) { - continue; - } +win32.basename = function(path, ext) { + var f = win32SplitPath(path)[2]; + // TODO: make this comparison case-insensitive on windows? + if (ext && f.substr(-1 * ext.length) === ext) { + f = f.substr(0, f.length - ext.length); + } + return f; +}; - resolvedPath = path + '/' + resolvedPath; - resolvedAbsolute = path.charAt(0) === '/'; - } - // At this point the path should be resolved to a full absolute path, but - // handle relative paths to be safe (might happen when process.cwd() fails) +win32.extname = function(path) { + return win32SplitPath(path)[3]; +}; - // Normalize the path - resolvedPath = normalizeArray(resolvedPath.split('/').filter(function(p) { - return !!p; - }), !resolvedAbsolute).join('/'); - return ((resolvedAbsolute ? '/' : '') + resolvedPath) || '.'; - }; +win32.format = function(pathObject) { + if (!util.isObject(pathObject)) { + throw new TypeError( + "Parameter 'pathObject' must be an object, not " + typeof pathObject + ); + } - // path.normalize(path) - // posix version - exports.normalize = function(path) { - var isAbsolute = exports.isAbsolute(path), - trailingSlash = path[path.length - 1] === '/', - segments = path.split('/'), - nonEmptySegments = []; - - // Normalize the path - for (var i = 0; i < segments.length; i++) { - if (segments[i]) { - nonEmptySegments.push(segments[i]); - } - } - path = normalizeArray(nonEmptySegments, !isAbsolute).join('/'); + var root = pathObject.root || ''; - if (!path && !isAbsolute) { - path = '.'; - } - if (path && trailingSlash) { - path += '/'; - } + if (!util.isString(root)) { + throw new TypeError( + "'pathObject.root' must be a string or undefined, not " + + typeof pathObject.root + ); + } - return (isAbsolute ? 
'/' : '') + path; - }; + var dir = pathObject.dir; + var base = pathObject.base || ''; + if (dir.slice(dir.length - 1, dir.length) === win32.sep) { + return dir + base; + } - // posix version - exports.isAbsolute = function(path) { - return path.charAt(0) === '/'; - }; + if (dir) { + return dir + win32.sep + base; + } - // posix version - exports.join = function() { - var path = ''; - for (var i = 0; i < arguments.length; i++) { - var segment = arguments[i]; - if (!util.isString(segment)) { - throw new TypeError('Arguments to path.join must be strings'); - } - if (segment) { - if (!path) { - path += segment; - } else { - path += '/' + segment; - } - } - } - return exports.normalize(path); + return base; +}; + + +win32.parse = function(pathString) { + if (!util.isString(pathString)) { + throw new TypeError( + "Parameter 'pathString' must be a string, not " + typeof pathString + ); + } + var allParts = win32SplitPath(pathString); + if (!allParts || allParts.length !== 4) { + throw new TypeError("Invalid path '" + pathString + "'"); + } + return { + root: allParts[0], + dir: allParts[0] + allParts[1].slice(0, allParts[1].length - 1), + base: allParts[2], + ext: allParts[3], + name: allParts[2].slice(0, allParts[2].length - allParts[3].length) }; +}; - // path.relative(from, to) - // posix version - exports.relative = function(from, to) { - from = exports.resolve(from).substr(1); - to = exports.resolve(to).substr(1); +win32.sep = '\\'; +win32.delimiter = ';'; - function trim(arr) { - var start = 0; - for (; start < arr.length; start++) { - if (arr[start] !== '') break; - } - var end = arr.length - 1; - for (; end >= 0; end--) { - if (arr[end] !== '') break; - } +// Split a filename into [root, dir, basename, ext], unix version +// 'root' is just a slash, or nothing. +var splitPathRe = + /^(\/?|)([\s\S]*?)((?:\.{1,2}|[^\/]+?|)(\.[^.\/]*|))(?:[\/]*)$/; +var posix = {}; + + +function posixSplitPath(filename) { + return splitPathRe.exec(filename).slice(1); +} - if (start > end) return []; - return arr.slice(start, end + 1); + +// path.resolve([from ...], to) +// posix version +posix.resolve = function() { + var resolvedPath = '', + resolvedAbsolute = false; + + for (var i = arguments.length - 1; i >= -1 && !resolvedAbsolute; i--) { + var path = (i >= 0) ? arguments[i] : process.cwd(); + + // Skip empty and invalid entries + if (!util.isString(path)) { + throw new TypeError('Arguments to path.resolve must be strings'); + } else if (!path) { + continue; } - var fromParts = trim(from.split('/')); - var toParts = trim(to.split('/')); + resolvedPath = path + '/' + resolvedPath; + resolvedAbsolute = path.charAt(0) === '/'; + } + + // At this point the path should be resolved to a full absolute path, but + // handle relative paths to be safe (might happen when process.cwd() fails) + + // Normalize the path + resolvedPath = normalizeArray(resolvedPath.split('/'), + !resolvedAbsolute).join('/'); - var length = Math.min(fromParts.length, toParts.length); - var samePartsLength = length; - for (var i = 0; i < length; i++) { - if (fromParts[i] !== toParts[i]) { - samePartsLength = i; - break; + return ((resolvedAbsolute ? 
'/' : '') + resolvedPath) || '.'; +}; + +// path.normalize(path) +// posix version +posix.normalize = function(path) { + var isAbsolute = posix.isAbsolute(path), + trailingSlash = path.substr(-1) === '/'; + + // Normalize the path + path = normalizeArray(path.split('/'), !isAbsolute).join('/'); + + if (!path && !isAbsolute) { + path = '.'; + } + if (path && trailingSlash) { + path += '/'; + } + + return (isAbsolute ? '/' : '') + path; +}; + +// posix version +posix.isAbsolute = function(path) { + return path.charAt(0) === '/'; +}; + +// posix version +posix.join = function() { + var path = ''; + for (var i = 0; i < arguments.length; i++) { + var segment = arguments[i]; + if (!util.isString(segment)) { + throw new TypeError('Arguments to path.join must be strings'); + } + if (segment) { + if (!path) { + path += segment; + } else { + path += '/' + segment; } } + } + return posix.normalize(path); +}; + + +// path.relative(from, to) +// posix version +posix.relative = function(from, to) { + from = posix.resolve(from).substr(1); + to = posix.resolve(to).substr(1); + + function trim(arr) { + var start = 0; + for (; start < arr.length; start++) { + if (arr[start] !== '') break; + } - var outputParts = []; - for (var i = samePartsLength; i < fromParts.length; i++) { - outputParts.push('..'); + var end = arr.length - 1; + for (; end >= 0; end--) { + if (arr[end] !== '') break; } - outputParts = outputParts.concat(toParts.slice(samePartsLength)); + if (start > end) return []; + return arr.slice(start, end + 1); + } - return outputParts.join('/'); - }; + var fromParts = trim(from.split('/')); + var toParts = trim(to.split('/')); - exports.sep = '/'; - exports.delimiter = ':'; -} + var length = Math.min(fromParts.length, toParts.length); + var samePartsLength = length; + for (var i = 0; i < length; i++) { + if (fromParts[i] !== toParts[i]) { + samePartsLength = i; + break; + } + } + + var outputParts = []; + for (var i = samePartsLength; i < fromParts.length; i++) { + outputParts.push('..'); + } + + outputParts = outputParts.concat(toParts.slice(samePartsLength)); -exports.dirname = function(path) { - var result = splitPath(path), + return outputParts.join('/'); +}; + + +posix._makeLong = function(path) { + return path; +}; + + +posix.dirname = function(path) { + var result = posixSplitPath(path), root = result[0], dir = result[1]; @@ -461,8 +561,8 @@ exports.dirname = function(path) { }; -exports.basename = function(path, ext) { - var f = splitPath(path)[2]; +posix.basename = function(path, ext) { + var f = posixSplitPath(path)[2]; // TODO: make this comparison case-insensitive on windows? 
if (ext && f.substr(-1 * ext.length) === ext) { f = f.substr(0, f.length - ext.length); @@ -471,47 +571,65 @@ exports.basename = function(path, ext) { }; -exports.extname = function(path) { - return splitPath(path)[3]; +posix.extname = function(path) { + return posixSplitPath(path)[3]; }; -exports.exists = util.deprecate(function(path, callback) { - require('fs').exists(path, callback); -}, 'path.exists is now called `fs.exists`.'); +posix.format = function(pathObject) { + if (!util.isObject(pathObject)) { + throw new TypeError( + "Parameter 'pathObject' must be an object, not " + typeof pathObject + ); + } + var root = pathObject.root || ''; -exports.existsSync = util.deprecate(function(path) { - return require('fs').existsSync(path); -}, 'path.existsSync is now called `fs.existsSync`.'); + if (!util.isString(root)) { + throw new TypeError( + "'pathObject.root' must be a string or undefined, not " + + typeof pathObject.root + ); + } + var dir = pathObject.dir ? pathObject.dir + posix.sep : ''; + var base = pathObject.base || ''; + return dir + base; +}; -if (isWindows) { - exports._makeLong = function(path) { - // Note: this will *probably* throw somewhere. - if (!util.isString(path)) - return path; - if (!path) { - return ''; - } +posix.parse = function(pathString) { + if (!util.isString(pathString)) { + throw new TypeError( + "Parameter 'pathString' must be a string, not " + typeof pathString + ); + } + var allParts = posixSplitPath(pathString); + if (!allParts || allParts.length !== 4) { + throw new TypeError("Invalid path '" + pathString + "'"); + } + allParts[1] = allParts[1] || ''; + allParts[2] = allParts[2] || ''; + allParts[3] = allParts[3] || ''; + + return { + root: allParts[0], + dir: allParts[0] + allParts[1].slice(0, allParts[1].length - 1), + base: allParts[2], + ext: allParts[3], + name: allParts[2].slice(0, allParts[2].length - allParts[3].length) + }; +}; - var resolvedPath = exports.resolve(path); - if (/^[a-zA-Z]\:\\/.test(resolvedPath)) { - // path is local filesystem path, which needs to be converted - // to long UNC path. - return '\\\\?\\' + resolvedPath; - } else if (/^\\\\[^?.]/.test(resolvedPath)) { - // path is network UNC path, which needs to be converted - // to long UNC path. 
- return '\\\\?\\UNC\\' + resolvedPath.substring(2); - } +posix.sep = '/'; +posix.delimiter = ':'; - return path; - }; -} else { - exports._makeLong = function(path) { - return path; - }; -} + +if (isWindows) + module.exports = win32; +else /* posix */ + module.exports = posix; + +module.exports.posix = posix; +module.exports.win32 = win32; diff --git a/lib/readline.js b/lib/readline.js index add3a6a4bd9..bafef00e0dd 100644 --- a/lib/readline.js +++ b/lib/readline.js @@ -68,7 +68,7 @@ function Interface(input, output, completer, terminal) { // backwards compat; check the isTTY prop of the output stream // when `terminal` was not specified - if (util.isUndefined(terminal)) { + if (util.isUndefined(terminal) && !util.isNullOrUndefined(output)) { terminal = !!output.isTTY; } @@ -142,11 +142,15 @@ function Interface(input, output, completer, terminal) { this.history = []; this.historyIndex = -1; - output.on('resize', onresize); + if (!util.isNullOrUndefined(output)) + output.on('resize', onresize); + self.once('close', function() { input.removeListener('keypress', onkeypress); input.removeListener('end', ontermend); - output.removeListener('resize', onresize); + if (!util.isNullOrUndefined(output)) { + output.removeListener('resize', onresize); + } }); } @@ -156,7 +160,10 @@ function Interface(input, output, completer, terminal) { inherits(Interface, EventEmitter); Interface.prototype.__defineGetter__('columns', function() { - return this.output.columns || Infinity; + var columns = Infinity; + if (this.output && this.output.columns) + columns = this.output.columns; + return columns; }); Interface.prototype.setPrompt = function(prompt) { @@ -177,7 +184,7 @@ Interface.prototype.prompt = function(preserveCursor) { if (!preserveCursor) this.cursor = 0; this._refreshLine(); } else { - this.output.write(this._prompt); + this._writeToOutput(this._prompt); } }; @@ -207,6 +214,13 @@ Interface.prototype._onLine = function(line) { } }; +Interface.prototype._writeToOutput = function _writeToOutput(stringToWrite) { + if (!util.isString(stringToWrite)) + throw new TypeError('stringToWrite must be a string'); + + if (!util.isNullOrUndefined(this.output)) + this.output.write(stringToWrite); +}; Interface.prototype._addHistory = function() { if (this.line.length === 0) return ''; @@ -245,11 +259,11 @@ Interface.prototype._refreshLine = function() { exports.clearScreenDown(this.output); // Write the prompt and the current buffer content. - this.output.write(line); + this._writeToOutput(line); // Force terminal to allocate a new line if (lineCols === 0) { - this.output.write(' '); + this._writeToOutput(' '); } // Move cursor to original position. @@ -310,11 +324,14 @@ Interface.prototype._normalWrite = function(b) { this._sawReturn = false; } + // Run test() on the new string chunk, not on the entire line buffer. 
+ var newPartContainsEnding = lineEnding.test(string); + if (this._line_buffer) { string = this._line_buffer + string; this._line_buffer = null; } - if (lineEnding.test(string)) { + if (newPartContainsEnding) { this._sawReturn = /\r$/.test(string); // got one or more newlines; process into "line" events @@ -348,7 +365,7 @@ Interface.prototype._insertString = function(c) { if (this._getCursorPos().cols === 0) { this._refreshLine(); } else { - this.output.write(c); + this._writeToOutput(c); } // a hack to get the line refreshed if it's needed @@ -375,7 +392,7 @@ Interface.prototype._tabComplete = function() { if (completions.length === 1) { self._insertString(completions[0].slice(completeOn.length)); } else { - self.output.write('\r\n'); + self._writeToOutput('\r\n'); var width = completions.reduce(function(a, b) { return a.length > b.length ? a : b; }).length + 2; // 2 space padding @@ -419,17 +436,17 @@ function handleGroup(self, group, width, maxColumns) { break; } var item = group[idx]; - self.output.write(item); + self._writeToOutput(item); if (col < maxColumns - 1) { for (var s = 0, itemLen = item.length; s < width - itemLen; s++) { - self.output.write(' '); + self._writeToOutput(' '); } } } - self.output.write('\r\n'); + self._writeToOutput('\r\n'); } - self.output.write('\r\n'); + self._writeToOutput('\r\n'); } function commonPrefix(strings) { @@ -522,7 +539,7 @@ Interface.prototype._deleteLineRight = function() { Interface.prototype.clearLine = function() { this._moveCursor(+Infinity); - this.output.write('\r\n'); + this._writeToOutput('\r\n'); this.line = ''; this.cursor = 0; this.prevRows = 0; @@ -1165,6 +1182,9 @@ function emitKeys(stream, s) { */ function cursorTo(stream, x, y) { + if (util.isNullOrUndefined(stream)) + return; + if (!util.isNumber(x) && !util.isNumber(y)) return; @@ -1185,6 +1205,9 @@ exports.cursorTo = cursorTo; */ function moveCursor(stream, dx, dy) { + if (util.isNullOrUndefined(stream)) + return; + if (dx < 0) { stream.write('\x1b[' + (-dx) + 'D'); } else if (dx > 0) { @@ -1208,6 +1231,9 @@ exports.moveCursor = moveCursor; */ function clearLine(stream, dir) { + if (util.isNullOrUndefined(stream)) + return; + if (dir < 0) { // to the beginning stream.write('\x1b[1K'); @@ -1227,6 +1253,9 @@ exports.clearLine = clearLine; */ function clearScreenDown(stream) { + if (util.isNullOrUndefined(stream)) + return; + stream.write('\x1b[0J'); } exports.clearScreenDown = clearScreenDown; diff --git a/lib/repl.js b/lib/repl.js index 578f99ed225..a8fa060c593 100644 --- a/lib/repl.js +++ b/lib/repl.js @@ -72,8 +72,7 @@ exports.writer = util.inspect; exports._builtinLibs = ['assert', 'buffer', 'child_process', 'cluster', 'crypto', 'dgram', 'dns', 'domain', 'events', 'fs', 'http', 'https', 'net', 'os', 'path', 'punycode', 'querystring', 'readline', 'stream', - 'string_decoder', 'tls', 'tty', 'url', 'util', 'vm', 'zlib', 'smalloc', - 'tracing']; + 'string_decoder', 'tls', 'tty', 'url', 'util', 'vm', 'zlib', 'smalloc']; function REPLServer(prompt, stream, eval_, useGlobal, ignoreUndefined) { diff --git a/lib/smalloc.js b/lib/smalloc.js index 22525ad0dec..2b4175e0cb1 100644 --- a/lib/smalloc.js +++ b/lib/smalloc.js @@ -37,6 +37,7 @@ Object.defineProperty(exports, 'kMaxLength', { // enumerated values for different external array types var Types = {}; +// Must match enum v8::ExternalArrayType. 
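The smalloc hunk below makes two guarantees explicit: the Types table must mirror v8::ExternalArrayType, and dispose() now rejects objects it cannot safely free. A hedged usage sketch, relying on the alloc(n, obj, type) signature already in this file (the commented-out error messages are taken verbatim from the new guards):

    var smalloc = require('smalloc');

    // Attach 8 slots of external Uint32 data to a plain object.
    var obj = smalloc.alloc(8, {}, smalloc.Types.Uint32);

    smalloc.dispose(obj);                  // ok: plain object with external data
    // smalloc.dispose(new Buffer(8));     // TypeError: obj cannot be a Buffer
    // smalloc.dispose(new Uint8Array(8)); // TypeError: obj cannot be a typed array
    // smalloc.dispose({});                // Error: obj has no external array data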
Object.defineProperties(Types, { 'Int8': { enumerable: true, value: 1, writable: false }, 'Uint8': { enumerable: true, value: 2, writable: false }, @@ -68,7 +69,7 @@ function alloc(n, obj, type) { throw new TypeError('obj must be an Object'); } - // 1 == v8::kExternalByteArray, 9 == v8::kExternalPixelArray + // 1 == v8::kExternalUint8Array, 9 == v8::kExternalUint8ClampedArray if (type < 1 || type > 9) throw new TypeError('unknown external array type: ' + type); if (util.isArray(obj)) @@ -85,6 +86,10 @@ function dispose(obj) { throw new TypeError('obj must be an Object'); if (util.isBuffer(obj)) throw new TypeError('obj cannot be a Buffer'); + if (smalloc.isTypedArray(obj)) + throw new TypeError('obj cannot be a typed array'); + if (!smalloc.hasExternalData(obj)) + throw new Error('obj has no external array data'); smalloc.dispose(obj); } diff --git a/lib/timers.js b/lib/timers.js index 3039b49f2c3..68e3e65e9aa 100644 --- a/lib/timers.js +++ b/lib/timers.js @@ -30,21 +30,6 @@ var TIMEOUT_MAX = 2147483647; // 2^31-1 var debug = require('util').debuglog('timer'); -var tracing = require('tracing'); -var asyncFlags = tracing._asyncFlags; -var runAsyncQueue = tracing._runAsyncQueue; -var loadAsyncQueue = tracing._loadAsyncQueue; -var unloadAsyncQueue = tracing._unloadAsyncQueue; - -// Same as in AsyncListener in env.h -var kHasListener = 0; - -// Do a little housekeeping. -delete tracing._asyncFlags; -delete tracing._runAsyncQueue; -delete tracing._loadAsyncQueue; -delete tracing._unloadAsyncQueue; - // IDLE TIMEOUTS // @@ -59,11 +44,6 @@ delete tracing._unloadAsyncQueue; // value = list var lists = {}; -// Make Timer as monomorphic as possible. -Timer.prototype._asyncQueue = undefined; -Timer.prototype._asyncData = undefined; -Timer.prototype._asyncFlags = 0; - // the main function - creates lists on demand and the watchers associated // with them. function insert(item, msecs) { @@ -100,7 +80,7 @@ function listOnTimeout() { var now = Timer.now(); debug('now: %s', now); - var diff, first, hasQueue, threw; + var diff, first, threw; while (first = L.peek(list)) { diff = now - first._idleStart; if (diff < msecs) { @@ -122,19 +102,13 @@ function listOnTimeout() { if (domain && domain._disposed) continue; - hasQueue = !!first._asyncQueue; - try { - if (hasQueue) - loadAsyncQueue(first); if (domain) domain.enter(); threw = true; first._onTimeout(); if (domain) domain.exit(); - if (hasQueue) - unloadAsyncQueue(first); threw = false; } finally { if (threw) { @@ -204,11 +178,6 @@ exports.active = function(item) { L.append(list, item); } } - // Whether or not a new TimerWrap needed to be created, this should run - // for each item. This way each "item" (i.e. timer) can properly have - // their own domain assigned. - if (asyncFlags[kHasListener] > 0) - runAsyncQueue(item); }; @@ -354,18 +323,15 @@ L.init(immediateQueue); function processImmediate() { var queue = immediateQueue; - var domain, hasQueue, immediate; + var domain, immediate; immediateQueue = {}; L.init(immediateQueue); while (L.isEmpty(queue) === false) { immediate = L.shift(queue); - hasQueue = !!immediate._asyncQueue; domain = immediate.domain; - if (hasQueue) - loadAsyncQueue(immediate); if (domain) domain.enter(); @@ -389,8 +355,6 @@ function processImmediate() { if (domain) domain.exit(); - if (hasQueue) - unloadAsyncQueue(immediate); } // Only round-trip to C++ land if we have to. 
Calling clearImmediate() on an @@ -406,11 +370,8 @@ function Immediate() { } Immediate.prototype.domain = undefined; Immediate.prototype._onImmediate = undefined; -Immediate.prototype._asyncQueue = undefined; -Immediate.prototype._asyncData = undefined; Immediate.prototype._idleNext = undefined; Immediate.prototype._idlePrev = undefined; -Immediate.prototype._asyncFlags = 0; exports.setImmediate = function(callback) { @@ -436,9 +397,6 @@ exports.setImmediate = function(callback) { process._immediateCallback = processImmediate; } - // setImmediates are handled more like nextTicks. - if (asyncFlags[kHasListener] > 0) - runAsyncQueue(immediate); if (process.domain) immediate.domain = process.domain; @@ -472,7 +430,7 @@ function unrefTimeout() { debug('unrefTimer fired'); - var diff, domain, first, hasQueue, threw; + var diff, domain, first, threw; while (first = L.peek(unrefList)) { diff = now - first._idleStart; @@ -490,11 +448,8 @@ function unrefTimeout() { if (!first._onTimeout) continue; if (domain && domain._disposed) continue; - hasQueue = !!first._asyncQueue; try { - if (hasQueue) - loadAsyncQueue(first); if (domain) domain.enter(); threw = true; debug('unreftimer firing timeout'); @@ -502,8 +457,6 @@ function unrefTimeout() { threw = false; if (domain) domain.exit(); - if (hasQueue) - unloadAsyncQueue(first); } finally { if (threw) process.nextTick(unrefTimeout); } diff --git a/lib/tls.js b/lib/tls.js index eff5d313e9f..f772d771de0 100644 --- a/lib/tls.js +++ b/lib/tls.js @@ -246,8 +246,4 @@ exports.TLSSocket = require('_tls_wrap').TLSSocket; exports.Server = require('_tls_wrap').Server; exports.createServer = require('_tls_wrap').createServer; exports.connect = require('_tls_wrap').connect; - -// Legacy API -exports.__defineGetter__('createSecurePair', util.deprecate(function() { - return require('_tls_legacy').createSecurePair; -}, 'createSecurePair() is deprecated, use TLSSocket instead')); +exports.createSecurePair = require('_tls_legacy').createSecurePair; diff --git a/lib/tracing.js b/lib/tracing.js deleted file mode 100644 index 49d0dec35c2..00000000000 --- a/lib/tracing.js +++ /dev/null @@ -1,395 +0,0 @@ -// Copyright Joyent, Inc. and other Node contributors. -// -// Permission is hereby granted, free of charge, to any person obtaining a -// copy of this software and associated documentation files (the -// "Software"), to deal in the Software without restriction, including -// without limitation the rights to use, copy, modify, merge, publish, -// distribute, sublicense, and/or sell copies of the Software, and to permit -// persons to whom the Software is furnished to do so, subject to the -// following conditions: -// -// The above copyright notice and this permission notice shall be included -// in all copies or substantial portions of the Software. -// -// THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS -// OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF -// MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN -// NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, -// DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR -// OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE -// USE OR OTHER DEALINGS IN THE SOFTWARE. - -var EventEmitter = require('events'); -var v8binding, process; - -// This needs to be loaded early, and before the "process" object is made -// global. So allow src/node.js to pass the process object in during -// initialization. 
-exports._nodeInitialization = function nodeInitialization(pobj) { - process = pobj; - v8binding = process.binding('v8'); - - // Finish setting up the v8 Object. - v8.getHeapStatistics = v8binding.getHeapStatistics; - - // Part of the AsyncListener setup to share objects/callbacks with the - // native layer. - process._setupAsyncListener(asyncFlags, - runAsyncQueue, - loadAsyncQueue, - unloadAsyncQueue); - - // Do a little housekeeping. - delete exports._nodeInitialization; -}; - - -// v8 - -var v8 = exports.v8 = new EventEmitter(); - - -function emitGC(before, after) { - v8.emit('gc', before, after); -} - - -v8.on('newListener', function(name) { - if (name === 'gc' && EventEmitter.listenerCount(this, name) === 0) { - v8binding.startGarbageCollectionTracking(emitGC); - } -}); - - -v8.on('removeListener', function(name) { - if (name === 'gc' && EventEmitter.listenerCount(this, name) === 0) { - v8binding.stopGarbageCollectionTracking(); - } -}); - - -// AsyncListener - -// new Array() is used here because it is more efficient for sparse -// arrays. Please *do not* change these to simple bracket notation. - -// Track the active queue of AsyncListeners that have been added. -var asyncQueue = new Array(); - -// Keep the stack of all contexts that have been loaded in the -// execution chain of asynchronous events. -var contextStack = new Array(); -var currentContext = undefined; - -// Incremental uid for new AsyncListener instances. -var alUid = 0; - -// Stateful flags shared with Environment for quick JS/C++ -// communication. -var asyncFlags = {}; - -// Prevent accidentally suppressed thrown errors from before/after. -var inAsyncTick = false; - -// To prevent infinite recursion when an error handler also throws -// flag when an error is currenly being handled. -var inErrorTick = false; - -// Needs to be the same as src/env.h -var kHasListener = 0; - -// Flags to determine what async listeners are available. -var HAS_CREATE_AL = 1 << 0; -var HAS_BEFORE_AL = 1 << 1; -var HAS_AFTER_AL = 1 << 2; -var HAS_ERROR_AL = 1 << 3; - -// _errorHandler is scoped so it's also accessible by _fatalException. -exports._errorHandler = errorHandler; - -// Needs to be accessible from lib/timers.js so they know when async -// listeners are currently in queue. They'll be cleaned up once -// references there are made. -exports._asyncFlags = asyncFlags; -exports._runAsyncQueue = runAsyncQueue; -exports._loadAsyncQueue = loadAsyncQueue; -exports._unloadAsyncQueue = unloadAsyncQueue; - -// Public API. -exports.createAsyncListener = createAsyncListener; -exports.addAsyncListener = addAsyncListener; -exports.removeAsyncListener = removeAsyncListener; - -// Load the currently executing context as the current context, and -// create a new asyncQueue that can receive any added queue items -// during the executing of the callback. -function loadContext(ctx) { - contextStack.push(currentContext); - currentContext = ctx; - - asyncFlags[kHasListener] = 1; -} - -function unloadContext() { - currentContext = contextStack.pop(); - - if (currentContext === undefined && asyncQueue.length === 0) - asyncFlags[kHasListener] = 0; -} - -// Run all the async listeners attached when an asynchronous event is -// instantiated. -function runAsyncQueue(context) { - var queue = new Array(); - var data = new Array(); - var ccQueue, i, queueItem, value; - - context._asyncQueue = queue; - context._asyncData = data; - context._asyncFlags = 0; - - inAsyncTick = true; - - // First run through all callbacks in the currentContext. 
These may - // add new AsyncListeners to the asyncQueue during execution. Hence - // why they need to be evaluated first. - if (currentContext) { - ccQueue = currentContext._asyncQueue; - context._asyncFlags |= currentContext._asyncFlags; - for (i = 0; i < ccQueue.length; i++) { - queueItem = ccQueue[i]; - queue[queue.length] = queueItem; - if ((queueItem.callback_flags & HAS_CREATE_AL) === 0) { - data[queueItem.uid] = queueItem.data; - continue; - } - value = queueItem.create(queueItem.data); - data[queueItem.uid] = (value === undefined) ? queueItem.data : value; - } - } - - // Then run through all items in the asyncQueue - if (asyncQueue) { - for (i = 0; i < asyncQueue.length; i++) { - queueItem = asyncQueue[i]; - // Quick way to check if an AL instance with the same uid was - // already run from currentContext. - if (data[queueItem.uid] !== undefined) - continue; - queue[queue.length] = queueItem; - context._asyncFlags |= queueItem.callback_flags; - if ((queueItem.callback_flags & HAS_CREATE_AL) === 0) { - data[queueItem.uid] = queueItem.data; - continue; - } - value = queueItem.create(queueItem.data); - data[queueItem.uid] = (value === undefined) ? queueItem.data : value; - } - } - - inAsyncTick = false; -} - -// Load the AsyncListener queue attached to context and run all -// "before" callbacks, if they exist. -function loadAsyncQueue(context) { - loadContext(context); - - if ((context._asyncFlags & HAS_BEFORE_AL) === 0) - return; - - var queue = context._asyncQueue; - var data = context._asyncData; - var i, queueItem; - - inAsyncTick = true; - for (i = 0; i < queue.length; i++) { - queueItem = queue[i]; - if ((queueItem.callback_flags & HAS_BEFORE_AL) > 0) - queueItem.before(context, data[queueItem.uid]); - } - inAsyncTick = false; -} - -// Unload the AsyncListener queue attached to context and run all -// "after" callbacks, if they exist. -function unloadAsyncQueue(context) { - if ((context._asyncFlags & HAS_AFTER_AL) === 0) { - unloadContext(); - return; - } - - var queue = context._asyncQueue; - var data = context._asyncData; - var i, queueItem; - - inAsyncTick = true; - for (i = 0; i < queue.length; i++) { - queueItem = queue[i]; - if ((queueItem.callback_flags & HAS_AFTER_AL) > 0) - queueItem.after(context, data[queueItem.uid]); - } - inAsyncTick = false; - - unloadContext(); -} - -// Handle errors that are thrown while in the context of an -// AsyncListener. If an error is thrown from an AsyncListener -// callback error handlers will be called once more to report -// the error, then the application will die forcefully. -function errorHandler(er) { - if (inErrorTick) - return false; - - var handled = false; - var i, queueItem, threw; - - inErrorTick = true; - - // First process error callbacks from the current context. - if (currentContext && (currentContext._asyncFlags & HAS_ERROR_AL) > 0) { - var queue = currentContext._asyncQueue; - var data = currentContext._asyncData; - for (i = 0; i < queue.length; i++) { - queueItem = queue[i]; - if ((queueItem.callback_flags & HAS_ERROR_AL) === 0) - continue; - try { - threw = true; - // While it would be possible to pass in currentContext, if - // the error is thrown from the "create" callback then there's - // a chance the object hasn't been fully constructed. - handled = queueItem.error(data[queueItem.uid], er) || handled; - threw = false; - } finally { - // If the error callback thew then die quickly. Only allow the - // exit events to be processed. 
- if (threw) { - process._exiting = true; - process.emit('exit', 1); - } - } - } - } - - // Now process callbacks from any existing queue. - if (asyncQueue) { - for (i = 0; i < asyncQueue.length; i++) { - queueItem = asyncQueue[i]; - if ((queueItem.callback_flags & HAS_ERROR_AL) === 0 || - (data && data[queueItem.uid] !== undefined)) - continue; - try { - threw = true; - handled = queueItem.error(queueItem.data, er) || handled; - threw = false; - } finally { - // If the error callback thew then die quickly. Only allow the - // exit events to be processed. - if (threw) { - process._exiting = true; - process.emit('exit', 1); - } - } - } - } - - inErrorTick = false; - - unloadContext(); - - // TODO(trevnorris): If the error was handled, should the after callbacks - // be fired anyways? - - return handled && !inAsyncTick; -} - -// Instance function of an AsyncListener object. -function AsyncListenerInst(callbacks, data) { - if (typeof callbacks.create === 'function') { - this.create = callbacks.create; - this.callback_flags |= HAS_CREATE_AL; - } - if (typeof callbacks.before === 'function') { - this.before = callbacks.before; - this.callback_flags |= HAS_BEFORE_AL; - } - if (typeof callbacks.after === 'function') { - this.after = callbacks.after; - this.callback_flags |= HAS_AFTER_AL; - } - if (typeof callbacks.error === 'function') { - this.error = callbacks.error; - this.callback_flags |= HAS_ERROR_AL; - } - - this.uid = ++alUid; - this.data = data === undefined ? null : data; -} -AsyncListenerInst.prototype.create = undefined; -AsyncListenerInst.prototype.before = undefined; -AsyncListenerInst.prototype.after = undefined; -AsyncListenerInst.prototype.error = undefined; -AsyncListenerInst.prototype.data = undefined; -AsyncListenerInst.prototype.uid = 0; -AsyncListenerInst.prototype.callback_flags = 0; - -// Create new async listener object. Useful when instantiating a new -// object and want the listener instance, but not add it to the stack. -// If an existing AsyncListenerInst is passed then any new "data" is -// ignored. -function createAsyncListener(callbacks, data) { - if (typeof callbacks !== 'object' || callbacks == null) - throw new TypeError('callbacks argument must be an object'); - - if (callbacks instanceof AsyncListenerInst) - return callbacks; - else - return new AsyncListenerInst(callbacks, data); -} - -// Add a listener to the current queue. -function addAsyncListener(callbacks, data) { - // Fast track if a new AsyncListenerInst has to be created. - if (!(callbacks instanceof AsyncListenerInst)) { - callbacks = createAsyncListener(callbacks, data); - asyncQueue.push(callbacks); - asyncFlags[kHasListener] = 1; - return callbacks; - } - - var inQueue = false; - // The asyncQueue will be small. Probably always <= 3 items. - for (var i = 0; i < asyncQueue.length; i++) { - if (callbacks === asyncQueue[i]) { - inQueue = true; - break; - } - } - - // Make sure the callback doesn't already exist in the queue. - if (!inQueue) { - asyncQueue.push(callbacks); - asyncFlags[kHasListener] = 1; - } - - return callbacks; -} - -// Remove listener from the current queue. Though this will not remove -// the listener from the current context. So callback propagation will -// continue. 
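Since this file deletion removes the module's entire public surface, a short sketch of how the deleted API was consumed may help when reading the rest of the removal (handler bodies are placeholders; the names are exactly the exports that disappear here):

    var tracing = require('tracing');

    var listener = tracing.addAsyncListener({
      create: function(storage) { /* a new async event was instantiated */ },
      before: function(context, storage) { /* about to run its callback */ },
      after: function(context, storage) { /* its callback returned */ },
      error: function(storage, err) { return false; /* true = err handled */ }
    });

    // Stop attaching this listener to newly created async events:
    tracing.removeAsyncListener(listener);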
-function removeAsyncListener(obj) { - for (var i = 0; i < asyncQueue.length; i++) { - if (obj === asyncQueue[i]) { - asyncQueue.splice(i, 1); - break; - } - } - - if (asyncQueue.length > 0 || currentContext !== undefined) - asyncFlags[kHasListener] = 1; - else - asyncFlags[kHasListener] = 0; -} diff --git a/lib/url.js b/lib/url.js index 4c0ef0102fa..ac82d251179 100644 --- a/lib/url.js +++ b/lib/url.js @@ -70,8 +70,9 @@ var protocolPattern = /^([a-z0-9.+-]+:)/i, nonHostChars = ['%', '/', '?', ';', '#'].concat(autoEscape), hostEndingChars = ['/', '?', '#'], hostnameMaxLen = 255, - hostnamePartPattern = /^[a-z0-9A-Z_-]{0,63}$/, - hostnamePartStart = /^([a-z0-9A-Z_-]{0,63})(.*)$/, + hostnamePatternString = '[^' + nonHostChars.join('') + ']{0,63}', + hostnamePartPattern = new RegExp('^' + hostnamePatternString + '$'), + hostnamePartStart = new RegExp('^(' + hostnamePatternString + ')(.*)$'), // protocols that can allow "unsafe" and "unwise" chars. unsafeProtocol = { 'javascript': true, @@ -111,10 +112,15 @@ Url.prototype.parse = function(url, parseQueryString, slashesDenoteHost) { } // Copy chrome, IE, opera backslash-handling behavior. + // Back slashes before the query string get converted to forward slashes // See: https://code.google.com/p/chromium/issues/detail?id=25916 - var hashSplit = url.split('#'); - hashSplit[0] = hashSplit[0].replace(/\\/g, '/'); - url = hashSplit.join('#'); + var queryIndex = url.indexOf('?'), + splitter = + (queryIndex !== -1 && queryIndex < url.indexOf('#')) ? '?' : '#', + uSplit = url.split(splitter), + slashRegex = /\\/g; + uSplit[0] = uSplit[0].replace(slashRegex, '/'); + url = uSplit.join(splitter); var rest = url; @@ -122,7 +128,7 @@ Url.prototype.parse = function(url, parseQueryString, slashesDenoteHost) { // This is to support parse stuff like " http://foo.com \n" rest = rest.trim(); - if (!slashesDenoteHost && hashSplit.length === 1) { + if (!slashesDenoteHost && url.split('#').length === 1) { // Try fast path regexp var simplePath = simplePathPattern.exec(rest); if (simplePath) { @@ -136,6 +142,9 @@ Url.prototype.parse = function(url, parseQueryString, slashesDenoteHost) { } else { this.query = this.search.substr(1); } + } else if (parseQueryString) { + this.search = ''; + this.query = {}; } return this; } @@ -309,6 +318,8 @@ Url.prototype.parse = function(url, parseQueryString, slashesDenoteHost) { // need to be. for (var i = 0, l = autoEscape.length; i < l; i++) { var ae = autoEscape[i]; + if (rest.indexOf(ae) === -1) + continue; var esc = encodeURIComponent(ae); if (esc === ae) { esc = escape(ae); @@ -352,7 +363,7 @@ Url.prototype.parse = function(url, parseQueryString, slashesDenoteHost) { } // finally, reconstruct the href based on what has been validated. - this.href = this.format(); + this.href = this.format(parseQueryString); return this; }; @@ -367,7 +378,7 @@ function urlFormat(obj) { return obj.format(); } -Url.prototype.format = function() { +Url.prototype.format = function(parseQueryString) { var auth = this.auth || ''; if (auth) { auth = encodeURIComponent(auth); @@ -379,7 +390,26 @@ Url.prototype.format = function() { pathname = this.pathname || '', hash = this.hash || '', host = false, - query = ''; + query = '', + search = ''; + + if (this.path) { + var qm = this.path.indexOf('?'); + if (qm !== -1) { + query = this.path.slice(qm + 1); + search = '?' 
+ query; + pathname = this.path.slice(0, qm); + } else { + if (parseQueryString) { + this.query = {}; + this.search = ''; + } else { + this.query = null; + this.search = null; + } + pathname = this.path; + } + } if (this.host) { host = auth + this.host; @@ -392,13 +422,15 @@ Url.prototype.format = function() { } } - if (this.query && + if (!query && + this.query && util.isObject(this.query) && Object.keys(this.query).length) { query = querystring.stringify(this.query); } - var search = this.search || (query && ('?' + query)) || ''; + if (!search) + search = this.search || (query && ('?' + query)) || ''; if (protocol && protocol.substr(-1) !== ':') protocol += ':'; diff --git a/lib/util.js b/lib/util.js index 2de944586fe..e97820c9af3 100644 --- a/lib/util.js +++ b/lib/util.js @@ -174,6 +174,7 @@ inspect.styles = { 'undefined': 'grey', 'null': 'bold', 'string': 'green', + 'symbol': 'green', 'date': 'magenta', // "name": intentionally not styling 'regexp': 'red' @@ -388,6 +389,9 @@ function formatPrimitive(ctx, value) { // For some reason typeof null is "object", so special case here. if (isNull(value)) return ctx.stylize('null', 'null'); + // es6 symbol primitive + if (isSymbol(value)) + return ctx.stylize(value.toString(), 'symbol'); } diff --git a/node.gyp b/node.gyp index 5454af2f963..b59474ee0ac 100644 --- a/node.gyp +++ b/node.gyp @@ -57,7 +57,6 @@ 'lib/string_decoder.js', 'lib/sys.js', 'lib/timers.js', - 'lib/tracing.js', 'lib/tls.js', 'lib/_tls_common.js', 'lib/_tls_legacy.js', @@ -67,6 +66,7 @@ 'lib/util.js', 'lib/vm.js', 'lib/zlib.js', + 'deps/debugger-agent/lib/_debugger_agent.js', ], }, @@ -77,6 +77,7 @@ 'dependencies': [ 'node_js2c#host', + 'deps/debugger-agent/debugger-agent.gyp:debugger-agent', ], 'include_dirs': [ @@ -87,6 +88,7 @@ ], 'sources': [ + 'src/async-wrap.cc', 'src/fs_event_wrap.cc', 'src/cares_wrap.cc', 'src/handle_wrap.cc', @@ -103,6 +105,7 @@ 'src/node_stat_watcher.cc', 'src/node_watchdog.cc', 'src/node_zlib.cc', + 'src/node_i18n.cc', 'src/pipe_wrap.cc', 'src/signal_wrap.cc', 'src/smalloc.cc', @@ -134,6 +137,7 @@ 'src/node_version.h', 'src/node_watchdog.h', 'src/node_wrap.h', + 'src/node_i18n.h', 'src/pipe_wrap.h', 'src/queue.h', 'src/smalloc.h', @@ -164,6 +168,17 @@ ], 'conditions': [ + [ 'v8_enable_i18n_support==1', { + 'defines': [ 'NODE_HAVE_I18N_SUPPORT=1' ], + 'dependencies': [ + '<(icu_gyp_path):icui18n', + '<(icu_gyp_path):icuuc', + ], + 'conditions': [ + [ 'icu_small=="true"', { + 'defines': [ 'NODE_HAVE_SMALL_ICU=1' ], + }]], + }], [ 'node_use_openssl=="true"', { 'defines': [ 'HAVE_OPENSSL=1' ], 'sources': [ @@ -230,8 +245,7 @@ 'conditions': [ [ 'OS=="linux"', { 'sources': [ - '<(SHARED_INTERMEDIATE_DIR)/node_dtrace_provider.o', - '<(SHARED_INTERMEDIATE_DIR)/libuv_dtrace_provider.o', + '<(SHARED_INTERMEDIATE_DIR)/node_dtrace_provider.o' ], }], [ 'OS!="mac" and OS!="linux"', { @@ -361,6 +375,10 @@ 'VCLinkerTool': { 'SubSystem': 1, # /subsystem:console }, + 'VCManifestTool': { + 'EmbedManifest': 'true', + 'AdditionalManifestFiles': 'src/res/node.exe.extra.manifest' + } }, }, # generate ETW header and resource files @@ -491,15 +509,13 @@ { 'action_name': 'node_dtrace_provider_o', 'inputs': [ - '<(OBJ_DIR)/libuv/deps/uv/src/unix/core.o', '<(OBJ_DIR)/node/src/node_dtrace.o', ], 'outputs': [ '<(OBJ_DIR)/node/src/node_dtrace_provider.o' ], 'action': [ 'dtrace', '-G', '-xnolibs', '-s', 'src/node_provider.d', - '-s', 'deps/uv/src/unix/uv-dtrace.d', '<@(_inputs)', - '-o', '<@(_outputs)' ] + '<@(_inputs)', '-o', '<@(_outputs)' ] } ] }], @@ -514,17 +530,7 @@ 
'action': [ 'dtrace', '-C', '-G', '-s', '<@(_inputs)', '-o', '<@(_outputs)' ], - }, - { - 'action_name': 'libuv_dtrace_provider_o', - 'inputs': [ 'deps/uv/src/unix/uv-dtrace.d' ], - 'outputs': [ - '<(SHARED_INTERMEDIATE_DIR)/libuv_dtrace_provider.o' - ], - 'action': [ - 'dtrace', '-C', '-G', '-s', '<@(_inputs)', '-o', '<@(_outputs)' - ], - }, + } ], }], ] diff --git a/src/async-wrap-inl.h b/src/async-wrap-inl.h index 324c57b9aef..f064ea96cf4 100644 --- a/src/async-wrap-inl.h +++ b/src/async-wrap-inl.h @@ -27,197 +27,64 @@ #include "base-object-inl.h" #include "env.h" #include "env-inl.h" +#include "node_internals.h" #include "util.h" #include "util-inl.h" #include "v8.h" -#include <assert.h> namespace node { inline AsyncWrap::AsyncWrap(Environment* env, v8::Handle<v8::Object> object, - ProviderType provider) + ProviderType provider, + AsyncWrap* parent) : BaseObject(env, object), - async_flags_(NO_OPTIONS), + has_async_queue_(false), provider_type_(provider) { - if (!env->has_async_listener()) + // Check user controlled flag to see if the init callback should run. + if (!env->call_async_init_hook()) return; - // TODO(trevnorris): Do we really need to TryCatch this call? - v8::TryCatch try_catch; - try_catch.SetVerbose(true); - - v8::Local<v8::Value> val = object.As<v8::Value>(); - env->async_listener_run_function()->Call(env->process_object(), 1, &val); - - if (!try_catch.HasCaught()) - async_flags_ |= HAS_ASYNC_LISTENER; -} - - -inline AsyncWrap::~AsyncWrap() { -} - -inline uint32_t AsyncWrap::provider_type() const { - return provider_type_; -} - - -inline bool AsyncWrap::has_async_listener() { - return async_flags_ & HAS_ASYNC_LISTENER; -} - - -// I hate you domains. -inline v8::Handle<v8::Value> AsyncWrap::MakeDomainCallback( - const v8::Handle<v8::Function> cb, - int argc, - v8::Handle<v8::Value>* argv) { - assert(env()->context() == env()->isolate()->GetCurrentContext()); + // TODO(trevnorris): Until it's verified all passed objects are not weak, + // add a HandleScope to make sure there's no leak. + v8::HandleScope scope(env->isolate()); - v8::Local<v8::Object> context = object(); - v8::Local<v8::Object> process = env()->process_object(); - v8::Local<v8::Value> domain_v = context->Get(env()->domain_string()); - v8::Local<v8::Object> domain; + v8::Local<v8::Object> parent_obj; v8::TryCatch try_catch; - try_catch.SetVerbose(true); - - if (has_async_listener()) { - v8::Local<v8::Value> val = context.As<v8::Value>(); - env()->async_listener_load_function()->Call(process, 1, &val); + // If a parent value was sent then call its pre/post functions to let it know + // a conceptual "child" is being instantiated (e.g. that a server has + // received a connection).
+ if (parent != NULL) { + parent_obj = parent->object(); + env->async_hooks_pre_function()->Call(parent_obj, 0, NULL); if (try_catch.HasCaught()) - return v8::Undefined(env()->isolate()); + FatalError("node::AsyncWrap::AsyncWrap", "parent pre hook threw"); } - bool has_domain = domain_v->IsObject(); - if (has_domain) { - domain = domain_v.As<v8::Object>(); - - if (domain->Get(env()->disposed_string())->IsTrue()) - return Undefined(env()->isolate()); - - v8::Local<v8::Function> enter = - domain->Get(env()->enter_string()).As<v8::Function>(); - if (enter->IsFunction()) { - enter->Call(domain, 0, NULL); - if (try_catch.HasCaught()) - return Undefined(env()->isolate()); - } - } + env->async_hooks_init_function()->Call(object, 0, NULL); - v8::Local<v8::Value> ret = cb->Call(context, argc, argv); - - if (try_catch.HasCaught()) { - return Undefined(env()->isolate()); - } - - if (has_domain) { - v8::Local<v8::Function> exit = - domain->Get(env()->exit_string()).As<v8::Function>(); - if (exit->IsFunction()) { - exit->Call(domain, 0, NULL); - if (try_catch.HasCaught()) - return Undefined(env()->isolate()); - } - } + if (try_catch.HasCaught()) + FatalError("node::AsyncWrap::AsyncWrap", "init hook threw"); - if (has_async_listener()) { - v8::Local<v8::Value> val = context.As<v8::Value>(); - env()->async_listener_unload_function()->Call(process, 1, &val); + has_async_queue_ = true; + if (parent != NULL) { + env->async_hooks_post_function()->Call(parent_obj, 0, NULL); if (try_catch.HasCaught()) - return Undefined(env()->isolate()); + FatalError("node::AsyncWrap::AsyncWrap", "parent post hook threw"); } - - Environment::TickInfo* tick_info = env()->tick_info(); - - if (tick_info->in_tick()) { - return ret; - } - - if (tick_info->length() == 0) { - tick_info->set_index(0); - return ret; - } - - tick_info->set_in_tick(true); - - env()->tick_callback_function()->Call(process, 0, NULL); - - tick_info->set_in_tick(false); - - if (try_catch.HasCaught()) { - tick_info->set_last_threw(true); - return Undefined(env()->isolate()); - } - - return ret; } -inline v8::Handle<v8::Value> AsyncWrap::MakeCallback( - const v8::Handle<v8::Function> cb, - int argc, - v8::Handle<v8::Value>* argv) { - if (env()->using_domains()) - return MakeDomainCallback(cb, argc, argv); - - assert(env()->context() == env()->isolate()->GetCurrentContext()); - - v8::Local<v8::Object> context = object(); - v8::Local<v8::Object> process = env()->process_object(); - - v8::TryCatch try_catch; - try_catch.SetVerbose(true); - - if (has_async_listener()) { - v8::Local<v8::Value> val = context.As<v8::Value>(); - env()->async_listener_load_function()->Call(process, 1, &val); - - if (try_catch.HasCaught()) - return v8::Undefined(env()->isolate()); - } - - v8::Local<v8::Value> ret = cb->Call(context, argc, argv); - - if (try_catch.HasCaught()) { - return Undefined(env()->isolate()); - } - - if (has_async_listener()) { - v8::Local<v8::Value> val = context.As<v8::Value>(); - env()->async_listener_unload_function()->Call(process, 1, &val); - - if (try_catch.HasCaught()) - return v8::Undefined(env()->isolate()); - } - - Environment::TickInfo* tick_info = env()->tick_info(); - - if (tick_info->in_tick()) { - return ret; - } - - if (tick_info->length() == 0) { - tick_info->set_index(0); - return ret; - } - - tick_info->set_in_tick(true); - - env()->tick_callback_function()->Call(process, 0, NULL); - - tick_info->set_in_tick(false); +inline AsyncWrap::~AsyncWrap() { +} - if (try_catch.HasCaught()) { - tick_info->set_last_threw(true); - return 
Undefined(env()->isolate()); - } - return ret; +inline uint32_t AsyncWrap::provider_type() const { + return provider_type_; } @@ -226,10 +93,8 @@ inline v8::Handle<v8::Value> AsyncWrap::MakeCallback( int argc, v8::Handle<v8::Value>* argv) { v8::Local<v8::Value> cb_v = object()->Get(symbol); - v8::Local<v8::Function> cb = cb_v.As<v8::Function>(); - assert(cb->IsFunction()); - - return MakeCallback(cb, argc, argv); + ASSERT(cb_v->IsFunction()); + return MakeCallback(cb_v.As<v8::Function>(), argc, argv); } @@ -238,10 +103,8 @@ inline v8::Handle<v8::Value> AsyncWrap::MakeCallback( int argc, v8::Handle<v8::Value>* argv) { v8::Local<v8::Value> cb_v = object()->Get(index); - v8::Local<v8::Function> cb = cb_v.As<v8::Function>(); - assert(cb->IsFunction()); - - return MakeCallback(cb, argc, argv); + ASSERT(cb_v->IsFunction()); + return MakeCallback(cb_v.As<v8::Function>(), argc, argv); } } // namespace node diff --git a/src/async-wrap.cc b/src/async-wrap.cc new file mode 100644 index 00000000000..de1007e73d1 --- /dev/null +++ b/src/async-wrap.cc @@ -0,0 +1,188 @@ +// Copyright Joyent, Inc. and other Node contributors. +// +// Permission is hereby granted, free of charge, to any person obtaining a +// copy of this software and associated documentation files (the +// "Software"), to deal in the Software without restriction, including +// without limitation the rights to use, copy, modify, merge, publish, +// distribute, sublicense, and/or sell copies of the Software, and to permit +// persons to whom the Software is furnished to do so, subject to the +// following conditions: +// +// The above copyright notice and this permission notice shall be included +// in all copies or substantial portions of the Software. +// +// THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS +// OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF +// MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN +// NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, +// DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR +// OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE +// USE OR OTHER DEALINGS IN THE SOFTWARE. + +#include "async-wrap.h" +#include "async-wrap-inl.h" +#include "env.h" +#include "env-inl.h" +#include "util.h" +#include "util-inl.h" + +#include "v8.h" + +using v8::Context; +using v8::Function; +using v8::FunctionCallbackInfo; +using v8::Handle; +using v8::HandleScope; +using v8::Integer; +using v8::Isolate; +using v8::Local; +using v8::Object; +using v8::TryCatch; +using v8::Value; +using v8::kExternalUint32Array; + +namespace node { + +static void SetupHooks(const FunctionCallbackInfo<Value>& args) { + Environment* env = Environment::GetCurrent(args.GetIsolate()); + + CHECK(args[0]->IsObject()); + CHECK(args[1]->IsFunction()); + CHECK(args[2]->IsFunction()); + CHECK(args[3]->IsFunction()); + + // Attach Fields enum from Environment::AsyncHooks. + // Flags attached to this object are: + // - kCallInitHook (0): Tells the AsyncWrap constructor whether it should + // make a call to the init JS callback. This is disabled by default, so + // even after setting the callbacks the flag will have to be set to + // non-zero to have those callbacks called. This only affects the init + // callback. If the init callback was called, then the pre/post callbacks + // will automatically be called. 
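The comment above pins down the contract; a hedged sketch of how the JS side can drive this binding (the hook bodies and variable names are placeholders, but setupHooks, the indexed flag object, and the index-0 kCallInitHook flag all come from the code here and from NODE_MODULE_CONTEXT_AWARE_BUILTIN(async_wrap, ...) below):

    var asyncWrap = process.binding('async_wrap');

    var asyncHookFields = {}; // indexed props are mapped onto AsyncHooks::fields_
    function init() { /* runs in the AsyncWrap constructor */ }
    function pre() { /* runs before a hooked callback */ }
    function post() { /* runs after a hooked callback */ }

    asyncWrap.setupHooks(asyncHookFields, init, pre, post);

    // Nothing fires until the kCallInitHook flag is set to non-zero:
    asyncHookFields[0] = 1; // enable init (and with it pre/post) calls
    asyncHookFields[0] = 0; // disable for handles created from here on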
+ Local<Object> async_hooks_obj = args[0].As<Object>(); + Environment::AsyncHooks* async_hooks = env->async_hooks(); + async_hooks_obj->SetIndexedPropertiesToExternalArrayData( + async_hooks->fields(), + kExternalUint32Array, + async_hooks->fields_count()); + + env->set_async_hooks_init_function(args[1].As<Function>()); + env->set_async_hooks_pre_function(args[2].As<Function>()); + env->set_async_hooks_post_function(args[3].As<Function>()); +} + + +static void Initialize(Handle<Object> target, + Handle<Value> unused, + Handle<Context> context) { + Environment* env = Environment::GetCurrent(context); + Isolate* isolate = env->isolate(); + HandleScope scope(isolate); + + NODE_SET_METHOD(target, "setupHooks", SetupHooks); + + Local<Object> async_providers = Object::New(isolate); +#define V(PROVIDER) \ + async_providers->Set(FIXED_ONE_BYTE_STRING(isolate, #PROVIDER), \ + Integer::New(isolate, AsyncWrap::PROVIDER_ ## PROVIDER)); + NODE_ASYNC_PROVIDER_TYPES(V) +#undef V + target->Set(FIXED_ONE_BYTE_STRING(isolate, "Providers"), async_providers); +} + + +Handle<Value> AsyncWrap::MakeCallback(const Handle<Function> cb, + int argc, + Handle<Value>* argv) { + CHECK(env()->context() == env()->isolate()->GetCurrentContext()); + + Local<Object> context = object(); + Local<Object> process = env()->process_object(); + Local<Object> domain; + bool has_domain = false; + + if (env()->using_domains()) { + Local<Value> domain_v = context->Get(env()->domain_string()); + has_domain = domain_v->IsObject(); + if (has_domain) { + domain = domain_v.As<Object>(); + if (domain->Get(env()->disposed_string())->IsTrue()) + return Undefined(env()->isolate()); + } + } + + TryCatch try_catch; + try_catch.SetVerbose(true); + + if (has_domain) { + Local<Value> enter_v = domain->Get(env()->enter_string()); + if (enter_v->IsFunction()) { + enter_v.As<Function>()->Call(domain, 0, NULL); + if (try_catch.HasCaught()) + return Undefined(env()->isolate()); + } + } + + if (has_async_queue_) { + try_catch.SetVerbose(false); + env()->async_hooks_pre_function()->Call(context, 0, NULL); + if (try_catch.HasCaught()) + FatalError("node::AsyncWrap::MakeCallback", "pre hook threw"); + try_catch.SetVerbose(true); + } + + Local<Value> ret = cb->Call(context, argc, argv); + + if (try_catch.HasCaught()) { + return Undefined(env()->isolate()); + } + + if (has_async_queue_) { + try_catch.SetVerbose(false); + env()->async_hooks_post_function()->Call(context, 0, NULL); + if (try_catch.HasCaught()) + FatalError("node::AsyncWrap::MakeCallback", "post hook threw"); + try_catch.SetVerbose(true); + } + + if (has_domain) { + Local<Value> exit_v = domain->Get(env()->exit_string()); + if (exit_v->IsFunction()) { + exit_v.As<Function>()->Call(domain, 0, NULL); + if (try_catch.HasCaught()) + return Undefined(env()->isolate()); + } + } + + Environment::TickInfo* tick_info = env()->tick_info(); + + if (tick_info->in_tick()) { + return ret; + } + + if (tick_info->length() == 0) { + env()->isolate()->RunMicrotasks(); + } + + if (tick_info->length() == 0) { + tick_info->set_index(0); + return ret; + } + + tick_info->set_in_tick(true); + + env()->tick_callback_function()->Call(process, 0, NULL); + + tick_info->set_in_tick(false); + + if (try_catch.HasCaught()) { + tick_info->set_last_threw(true); + return Undefined(env()->isolate()); + } + + return ret; +} + +} // namespace node + +NODE_MODULE_CONTEXT_AWARE_BUILTIN(async_wrap, node::Initialize) diff --git a/src/async-wrap.h b/src/async-wrap.h index 1b1802a68ff..403002a63f5 100644 --- a/src/async-wrap.h +++ 
b/src/async-wrap.h @@ -28,49 +28,52 @@ namespace node { +#define NODE_ASYNC_PROVIDER_TYPES(V) \ + V(NONE) \ + V(CARES) \ + V(CONNECTWRAP) \ + V(CRYPTO) \ + V(FSEVENTWRAP) \ + V(FSREQWRAP) \ + V(GETADDRINFOREQWRAP) \ + V(GETNAMEINFOREQWRAP) \ + V(PIPEWRAP) \ + V(PROCESSWRAP) \ + V(QUERYWRAP) \ + V(REQWRAP) \ + V(SHUTDOWNWRAP) \ + V(SIGNALWRAP) \ + V(STATWATCHER) \ + V(TCPWRAP) \ + V(TIMERWRAP) \ + V(TLSWRAP) \ + V(TTYWRAP) \ + V(UDPWRAP) \ + V(WRITEWRAP) \ + V(ZLIB) + class AsyncWrap : public BaseObject { public: - enum AsyncFlags { - NO_OPTIONS = 0, - HAS_ASYNC_LISTENER = 1 - }; - enum ProviderType { - PROVIDER_NONE = 1 << 0, - PROVIDER_CARES = 1 << 1, - PROVIDER_CONNECTWRAP = 1 << 2, - PROVIDER_CRYPTO = 1 << 3, - PROVIDER_FSEVENTWRAP = 1 << 4, - PROVIDER_GETADDRINFOREQWRAP = 1 << 5, - PROVIDER_PIPEWRAP = 1 << 6, - PROVIDER_PROCESSWRAP = 1 << 7, - PROVIDER_REQWRAP = 1 << 8, - PROVIDER_SHUTDOWNWRAP = 1 << 9, - PROVIDER_SIGNALWRAP = 1 << 10, - PROVIDER_STATWATCHER = 1 << 11, - PROVIDER_TCPWRAP = 1 << 12, - PROVIDER_TIMERWRAP = 1 << 13, - PROVIDER_TLSWRAP = 1 << 14, - PROVIDER_TTYWRAP = 1 << 15, - PROVIDER_UDPWRAP = 1 << 16, - PROVIDER_ZLIB = 1 << 17, - PROVIDER_GETNAMEINFOREQWRAP = 1 << 18 +#define V(PROVIDER) \ + PROVIDER_ ## PROVIDER, + NODE_ASYNC_PROVIDER_TYPES(V) +#undef V }; inline AsyncWrap(Environment* env, v8::Handle<v8::Object> object, - ProviderType provider); + ProviderType provider, + AsyncWrap* parent = NULL); inline ~AsyncWrap(); - inline bool has_async_listener(); - inline uint32_t provider_type() const; // Only call these within a valid HandleScope. - inline v8::Handle<v8::Value> MakeCallback(const v8::Handle<v8::Function> cb, - int argc, - v8::Handle<v8::Value>* argv); + v8::Handle<v8::Value> MakeCallback(const v8::Handle<v8::Function> cb, + int argc, + v8::Handle<v8::Value>* argv); inline v8::Handle<v8::Value> MakeCallback(const v8::Handle<v8::String> symbol, int argc, v8::Handle<v8::Value>* argv); @@ -81,15 +84,11 @@ class AsyncWrap : public BaseObject { private: inline AsyncWrap(); - // TODO(trevnorris): BURN IN FIRE! Remove this as soon as a suitable - // replacement is committed. - inline v8::Handle<v8::Value> MakeDomainCallback( - const v8::Handle<v8::Function> cb, - int argc, - v8::Handle<v8::Value>* argv); - - uint32_t async_flags_; - uint32_t provider_type_; + // When the async hooks init JS function is called from the constructor, it is + // expected that the context object will receive an _asyncQueue object property + // that will be used to call pre/post in MakeCallback.
+ bool has_async_queue_; + ProviderType provider_type_; }; } // namespace node diff --git a/src/base-object-inl.h b/src/base-object-inl.h index 4d726df7ca7..8cd9e2fd073 100644 --- a/src/base-object-inl.h +++ b/src/base-object-inl.h @@ -72,7 +72,7 @@ inline void BaseObject::MakeWeak(Type* ptr) { v8::HandleScope scope(env_->isolate()); v8::Local<v8::Object> handle = object(); assert(handle->InternalFieldCount() > 0); - Wrap<Type>(handle, ptr); + Wrap(handle, ptr); handle_.MarkIndependent(); handle_.SetWeak<Type>(ptr, WeakCallback<Type>); } diff --git a/src/cares_wrap.cc b/src/cares_wrap.cc index f3911df5ce3..39c88001563 100644 --- a/src/cares_wrap.cc +++ b/src/cares_wrap.cc @@ -55,6 +55,7 @@ using v8::Context; using v8::EscapableHandleScope; using v8::Function; using v8::FunctionCallbackInfo; +using v8::FunctionTemplate; using v8::Handle; using v8::HandleScope; using v8::Integer; @@ -64,8 +65,39 @@ using v8::Object; using v8::String; using v8::Value; -typedef class ReqWrap<uv_getaddrinfo_t> GetAddrInfoReqWrap; -typedef class ReqWrap<uv_getnameinfo_t> GetNameInfoReqWrap; + +class GetAddrInfoReqWrap : public ReqWrap<uv_getaddrinfo_t> { + public: + GetAddrInfoReqWrap(Environment* env, Local<Object> req_wrap_obj); +}; + +GetAddrInfoReqWrap::GetAddrInfoReqWrap(Environment* env, + Local<Object> req_wrap_obj) + : ReqWrap(env, req_wrap_obj, AsyncWrap::PROVIDER_GETADDRINFOREQWRAP) { + Wrap(req_wrap_obj, this); +} + + +static void NewGetAddrInfoReqWrap(const FunctionCallbackInfo<Value>& args) { + CHECK(args.IsConstructCall()); +} + + +class GetNameInfoReqWrap : public ReqWrap<uv_getnameinfo_t> { + public: + GetNameInfoReqWrap(Environment* env, Local<Object> req_wrap_obj); +}; + +GetNameInfoReqWrap::GetNameInfoReqWrap(Environment* env, + Local<Object> req_wrap_obj) + : ReqWrap(env, req_wrap_obj, AsyncWrap::PROVIDER_GETNAMEINFOREQWRAP) { + Wrap(req_wrap_obj, this); +} + + +static void NewGetNameInfoReqWrap(const FunctionCallbackInfo<Value>& args) { + CHECK(args.IsConstructCall()); +} static int cmp_ares_tasks(const ares_task_t* a, const ares_task_t* b) { @@ -229,7 +261,9 @@ static Local<Array> HostentToNames(Environment* env, struct hostent* host) { class QueryWrap : public AsyncWrap { public: QueryWrap(Environment* env, Local<Object> req_wrap_obj) - : AsyncWrap(env, req_wrap_obj, AsyncWrap::PROVIDER_CARES) { + : AsyncWrap(env, req_wrap_obj, AsyncWrap::PROVIDER_QUERYWRAP) { + if (env->in_domain()) + req_wrap_obj->Set(env->domain_string(), env->domain_array()->Get(0)); } virtual ~QueryWrap() { @@ -1035,10 +1069,7 @@ static void GetAddrInfo(const FunctionCallbackInfo<Value>& args) { abort(); } - GetAddrInfoReqWrap* req_wrap = - new GetAddrInfoReqWrap(env, - req_wrap_obj, - AsyncWrap::PROVIDER_GETADDRINFOREQWRAP); + GetAddrInfoReqWrap* req_wrap = new GetAddrInfoReqWrap(env, req_wrap_obj); struct addrinfo hints; memset(&hints, 0, sizeof(struct addrinfo)); @@ -1075,10 +1106,7 @@ static void GetNameInfo(const FunctionCallbackInfo<Value>& args) { CHECK(uv_ip4_addr(*ip, port, reinterpret_cast<sockaddr_in*>(&addr)) == 0 || uv_ip6_addr(*ip, port, reinterpret_cast<sockaddr_in6*>(&addr)) == 0); - GetNameInfoReqWrap* req_wrap = - new GetNameInfoReqWrap(env, - req_wrap_obj, - AsyncWrap::PROVIDER_GETNAMEINFOREQWRAP); + GetNameInfoReqWrap* req_wrap = new GetNameInfoReqWrap(env, req_wrap_obj); int err = uv_getnameinfo(env->event_loop(), &req_wrap->req_, @@ -1200,6 +1228,20 @@ static void StrError(const FunctionCallbackInfo<Value>& args) { } +static void CaresTimerCloseCb(uv_handle_t* handle) { + Environment* env = 
Environment::from_cares_timer_handle( + reinterpret_cast<uv_timer_t*>(handle)); + env->FinishHandleCleanup(handle); +} + + +static void CaresTimerClose(Environment* env, + uv_handle_t* handle, + void* arg) { + uv_close(handle, CaresTimerCloseCb); +} + + static void Initialize(Handle<Object> target, Handle<Value> unused, Handle<Context> context) { @@ -1223,6 +1265,10 @@ static void Initialize(Handle<Object> target, /* Initialize the timeout timer. The timer won't be started until the */ /* first socket is opened. */ uv_timer_init(env->event_loop(), env->cares_timer_handle()); + env->RegisterHandleCleanup( + reinterpret_cast<uv_handle_t*>(env->cares_timer_handle()), + CaresTimerClose, + NULL); NODE_SET_METHOD(target, "queryA", Query<QueryAWrap>); NODE_SET_METHOD(target, "queryAaaa", Query<QueryAaaaWrap>); @@ -1253,6 +1299,22 @@ static void Initialize(Handle<Object> target, Integer::New(env->isolate(), AI_ADDRCONFIG)); target->Set(FIXED_ONE_BYTE_STRING(env->isolate(), "AI_V4MAPPED"), Integer::New(env->isolate(), AI_V4MAPPED)); + + Local<FunctionTemplate> aiw = + FunctionTemplate::New(env->isolate(), NewGetAddrInfoReqWrap); + aiw->InstanceTemplate()->SetInternalFieldCount(1); + aiw->SetClassName( + FIXED_ONE_BYTE_STRING(env->isolate(), "GetAddrInfoReqWrap")); + target->Set(FIXED_ONE_BYTE_STRING(env->isolate(), "GetAddrInfoReqWrap"), + aiw->GetFunction()); + + Local<FunctionTemplate> niw = + FunctionTemplate::New(env->isolate(), NewGetNameInfoReqWrap); + niw->InstanceTemplate()->SetInternalFieldCount(1); + niw->SetClassName( + FIXED_ONE_BYTE_STRING(env->isolate(), "GetNameInfoReqWrap")); + target->Set(FIXED_ONE_BYTE_STRING(env->isolate(), "GetNameInfoReqWrap"), + niw->GetFunction()); } } // namespace cares_wrap diff --git a/src/env-inl.h b/src/env-inl.h index 4bc5eb09866..a2245130664 100644 --- a/src/env-inl.h +++ b/src/env-inl.h @@ -74,10 +74,10 @@ inline Environment::IsolateData* Environment::IsolateData::Get( } inline Environment::IsolateData* Environment::IsolateData::GetOrCreate( - v8::Isolate* isolate) { + v8::Isolate* isolate, uv_loop_t* loop) { IsolateData* isolate_data = Get(isolate); if (isolate_data == NULL) { - isolate_data = new IsolateData(isolate); + isolate_data = new IsolateData(isolate, loop); isolate->SetData(kIsolateSlot, isolate_data); } isolate_data->ref_count_ += 1; @@ -91,8 +91,9 @@ inline void Environment::IsolateData::Put() { } } -inline Environment::IsolateData::IsolateData(v8::Isolate* isolate) - : event_loop_(uv_default_loop()), +inline Environment::IsolateData::IsolateData(v8::Isolate* isolate, + uv_loop_t* loop) + : event_loop_(loop), isolate_(isolate), #define V(PropertyName, StringValue) \ PropertyName ## _(isolate, FIXED_ONE_BYTE_STRING(isolate, StringValue)), @@ -110,25 +111,20 @@ inline v8::Isolate* Environment::IsolateData::isolate() const { return isolate_; } -inline Environment::AsyncListener::AsyncListener() { - for (int i = 0; i < kFieldsCount; ++i) - fields_[i] = 0; +inline Environment::AsyncHooks::AsyncHooks() { + for (int i = 0; i < kFieldsCount; i++) fields_[i] = 0; } -inline uint32_t* Environment::AsyncListener::fields() { +inline uint32_t* Environment::AsyncHooks::fields() { return fields_; } -inline int Environment::AsyncListener::fields_count() const { +inline int Environment::AsyncHooks::fields_count() const { return kFieldsCount; } -inline bool Environment::AsyncListener::has_listener() const { - return fields_[kHasListener] > 0; -} - -inline uint32_t Environment::AsyncListener::watched_providers() const { - return fields_[kWatchedProviders]; 
+inline bool Environment::AsyncHooks::call_init_hook() { + return fields_[kCallInitHook] != 0; } inline Environment::DomainFlag::DomainFlag() { @@ -188,8 +184,9 @@ inline void Environment::TickInfo::set_last_threw(bool value) { last_threw_ = value; } -inline Environment* Environment::New(v8::Local<v8::Context> context) { - Environment* env = new Environment(context); +inline Environment* Environment::New(v8::Local<v8::Context> context, + uv_loop_t* loop) { + Environment* env = new Environment(context, loop); env->AssignToContext(context); return env; } @@ -207,12 +204,14 @@ inline Environment* Environment::GetCurrent(v8::Local<v8::Context> context) { context->GetAlignedPointerFromEmbedderData(kContextEmbedderDataIndex)); } -inline Environment::Environment(v8::Local<v8::Context> context) +inline Environment::Environment(v8::Local<v8::Context> context, + uv_loop_t* loop) : isolate_(context->GetIsolate()), - isolate_data_(IsolateData::GetOrCreate(context->GetIsolate())), + isolate_data_(IsolateData::GetOrCreate(context->GetIsolate(), loop)), using_smalloc_alloc_cb_(false), using_domains_(false), printed_error_(false), + debugger_agent_(this), context_(context->GetIsolate(), context) { // We'll be creating new objects so make sure we've entered the context. v8::HandleScope handle_scope(isolate()); @@ -221,6 +220,10 @@ inline Environment::Environment(v8::Local<v8::Context> context) set_module_load_list_array(v8::Array::New(isolate())); RB_INIT(&cares_task_list_); QUEUE_INIT(&gc_tracker_queue_); + QUEUE_INIT(&req_wrap_queue_); + QUEUE_INIT(&handle_wrap_queue_); + QUEUE_INIT(&handle_cleanup_queue_); + handle_cleanup_waiting_ = 0; } inline Environment::~Environment() { @@ -233,6 +236,21 @@ inline Environment::~Environment() { isolate_data()->Put(); } +inline void Environment::CleanupHandles() { + while (!QUEUE_EMPTY(&handle_cleanup_queue_)) { + QUEUE* q = QUEUE_HEAD(&handle_cleanup_queue_); + QUEUE_REMOVE(q); + + HandleCleanup* hc = ContainerOf(&HandleCleanup::handle_cleanup_queue_, q); + handle_cleanup_waiting_++; + hc->cb_(this, hc->handle_, hc->arg_); + delete hc; + } + + while (handle_cleanup_waiting_ != 0) + uv_run(event_loop(), UV_RUN_ONCE); +} + inline void Environment::Dispose() { delete this; } @@ -241,14 +259,9 @@ inline v8::Isolate* Environment::isolate() const { return isolate_; } -inline bool Environment::has_async_listener() const { - // The const_cast is okay, it doesn't violate conceptual const-ness. - return const_cast<Environment*>(this)->async_listener()->has_listener(); -} - -inline uint32_t Environment::watched_providers() const { +inline bool Environment::call_async_init_hook() const { // The const_cast is okay, it doesn't violate conceptual const-ness. 
- return const_cast<Environment*>(this)->async_listener()->watched_providers(); + return const_cast<Environment*>(this)->async_hooks()->call_init_hook(); } inline bool Environment::in_domain() const { @@ -287,12 +300,23 @@ inline uv_check_t* Environment::idle_check_handle() { return &idle_check_handle_; } +inline void Environment::RegisterHandleCleanup(uv_handle_t* handle, + HandleCleanupCb cb, + void *arg) { + HandleCleanup* hc = new HandleCleanup(handle, cb, arg); + QUEUE_INSERT_TAIL(&handle_cleanup_queue_, &hc->handle_cleanup_queue_); +} + +inline void Environment::FinishHandleCleanup(uv_handle_t* handle) { + handle_cleanup_waiting_--; +} + inline uv_loop_t* Environment::event_loop() const { return isolate_data()->event_loop(); } -inline Environment::AsyncListener* Environment::async_listener() { - return &async_listener_count_; +inline Environment::AsyncHooks* Environment::async_hooks() { + return &async_hooks_; } inline Environment::DomainFlag* Environment::domain_flag() { diff --git a/src/env.h b/src/env.h index 7fe132ce69b..e028a23f048 100644 --- a/src/env.h +++ b/src/env.h @@ -28,6 +28,7 @@ #include "uv.h" #include "v8.h" #include "queue.h" +#include "debugger-agent.h" #include <stdint.h> @@ -63,8 +64,8 @@ namespace node { V(address_string, "address") \ V(args_string, "args") \ V(argv_string, "argv") \ - V(async_queue_string, "_asyncQueue") \ V(async, "async") \ + V(async_queue_string, "_asyncQueue") \ V(atime_string, "atime") \ V(birthtime_string, "birthtime") \ V(blksize_string, "blksize") \ @@ -72,7 +73,6 @@ namespace node { V(buffer_string, "buffer") \ V(bytes_string, "bytes") \ V(bytes_parsed_string, "bytesParsed") \ - V(byte_length_string, "byteLength") \ V(callback_string, "callback") \ V(change_string, "change") \ V(close_string, "close") \ @@ -250,9 +250,9 @@ namespace node { V(zero_return_string, "ZERO_RETURN") \ #define ENVIRONMENT_STRONG_PERSISTENT_PROPERTIES(V) \ - V(async_listener_run_function, v8::Function) \ - V(async_listener_load_function, v8::Function) \ - V(async_listener_unload_function, v8::Function) \ + V(async_hooks_init_function, v8::Function) \ + V(async_hooks_pre_function, v8::Function) \ + V(async_hooks_post_function, v8::Function) \ V(binding_cache_object, v8::Object) \ V(buffer_constructor_function, v8::Function) \ V(context, v8::Context) \ @@ -286,26 +286,25 @@ RB_HEAD(ares_task_list, ares_task_t); class Environment { public: - class AsyncListener { + class AsyncHooks { public: inline uint32_t* fields(); inline int fields_count() const; - inline bool has_listener() const; - inline uint32_t watched_providers() const; + inline bool call_init_hook(); private: friend class Environment; // So we can call the constructor. - inline AsyncListener(); + inline AsyncHooks(); enum Fields { - kHasListener, - kWatchedProviders, + // Set this to a non-zero value if the init hook should be called.
+ kCallInitHook, kFieldsCount }; uint32_t fields_[kFieldsCount]; - DISALLOW_COPY_AND_ASSIGN(AsyncListener); + DISALLOW_COPY_AND_ASSIGN(AsyncHooks); }; class DomainFlag { @@ -357,11 +356,34 @@ class Environment { DISALLOW_COPY_AND_ASSIGN(TickInfo); }; + typedef void (*HandleCleanupCb)(Environment* env, + uv_handle_t* handle, + void* arg); + + class HandleCleanup { + private: + friend class Environment; + + HandleCleanup(uv_handle_t* handle, HandleCleanupCb cb, void* arg) + : handle_(handle), + cb_(cb), + arg_(arg) { + QUEUE_INIT(&handle_cleanup_queue_); + } + + uv_handle_t* handle_; + HandleCleanupCb cb_; + void* arg_; + QUEUE handle_cleanup_queue_; + }; + static inline Environment* GetCurrent(v8::Isolate* isolate); static inline Environment* GetCurrent(v8::Local<v8::Context> context); // See CreateEnvironment() in src/node.cc. - static inline Environment* New(v8::Local<v8::Context> context); + static inline Environment* New(v8::Local<v8::Context> context, + uv_loop_t* loop); + inline void CleanupHandles(); inline void Dispose(); // Defined in src/node_profiler.cc. @@ -372,7 +394,7 @@ class Environment { inline v8::Isolate* isolate() const; inline uv_loop_t* event_loop() const; - inline bool has_async_listener() const; + inline bool call_async_init_hook() const; inline bool in_domain() const; inline uint32_t watched_providers() const; @@ -386,7 +408,13 @@ class Environment { static inline Environment* from_idle_check_handle(uv_check_t* handle); inline uv_check_t* idle_check_handle(); - inline AsyncListener* async_listener(); + // Register clean-up cb to be called on env->Dispose() + inline void RegisterHandleCleanup(uv_handle_t* handle, + HandleCleanupCb cb, + void *arg); + inline void FinishHandleCleanup(uv_handle_t* handle); + + inline AsyncHooks* async_hooks(); inline DomainFlag* domain_flag(); inline TickInfo* tick_info(); @@ -435,12 +463,19 @@ class Environment { ENVIRONMENT_STRONG_PERSISTENT_PROPERTIES(V) #undef V + inline debugger::Agent* debugger_agent() { + return &debugger_agent_; + } + + inline QUEUE* handle_wrap_queue() { return &handle_wrap_queue_; } + inline QUEUE* req_wrap_queue() { return &req_wrap_queue_; } + private: static const int kIsolateSlot = NODE_ISOLATE_SLOT; class GCInfo; class IsolateData; - inline explicit Environment(v8::Local<v8::Context> context); + inline Environment(v8::Local<v8::Context> context, uv_loop_t* loop); inline ~Environment(); inline IsolateData* isolate_data() const; void AfterGarbageCollectionCallback(const GCInfo* before, @@ -456,7 +491,7 @@ class Environment { uv_idle_t immediate_idle_handle_; uv_prepare_t idle_prepare_handle_; uv_check_t idle_check_handle_; - AsyncListener async_listener_count_; + AsyncHooks async_hooks_; DomainFlag domain_flag_; TickInfo tick_info_; uv_timer_t cares_timer_handle_; @@ -466,6 +501,12 @@ class Environment { bool using_domains_; QUEUE gc_tracker_queue_; bool printed_error_; + debugger::Agent debugger_agent_; + + QUEUE handle_wrap_queue_; + QUEUE req_wrap_queue_; + QUEUE handle_cleanup_queue_; + int handle_cleanup_waiting_; #define V(PropertyName, TypeName) \ v8::Persistent<TypeName> PropertyName ## _; @@ -495,7 +536,8 @@ class Environment { // Per-thread, reference-counted singleton. 
class IsolateData { public: - static inline IsolateData* GetOrCreate(v8::Isolate* isolate); + static inline IsolateData* GetOrCreate(v8::Isolate* isolate, + uv_loop_t* loop); inline void Put(); inline uv_loop_t* event_loop() const; @@ -510,7 +552,7 @@ class Environment { private: inline static IsolateData* Get(v8::Isolate* isolate); - inline explicit IsolateData(v8::Isolate* isolate); + inline explicit IsolateData(v8::Isolate* isolate, uv_loop_t* loop); inline v8::Isolate* isolate() const; // Defined in src/node_profiler.cc. diff --git a/src/handle_wrap.cc b/src/handle_wrap.cc index f713750d7f9..a355e5bbba5 100644 --- a/src/handle_wrap.cc +++ b/src/handle_wrap.cc @@ -39,9 +39,6 @@ using v8::Local; using v8::Object; using v8::Value; -// defined in node.cc -extern QUEUE handle_wrap_queue; - void HandleWrap::Ref(const FunctionCallbackInfo<Value>& args) { Environment* env = Environment::GetCurrent(args.GetIsolate()); @@ -93,14 +90,15 @@ void HandleWrap::Close(const FunctionCallbackInfo<Value>& args) { HandleWrap::HandleWrap(Environment* env, Handle<Object> object, uv_handle_t* handle, - AsyncWrap::ProviderType provider) - : AsyncWrap(env, object, provider), + AsyncWrap::ProviderType provider, + AsyncWrap* parent) + : AsyncWrap(env, object, provider, parent), flags_(0), handle__(handle) { handle__->data = this; HandleScope scope(env->isolate()); - Wrap<HandleWrap>(object, this); - QUEUE_INSERT_TAIL(&handle_wrap_queue, &handle_wrap_queue_); + Wrap(object, this); + QUEUE_INSERT_TAIL(env->handle_wrap_queue(), &handle_wrap_queue_); } diff --git a/src/handle_wrap.h b/src/handle_wrap.h index 4f6f6e03236..95551a75635 100644 --- a/src/handle_wrap.h +++ b/src/handle_wrap.h @@ -63,7 +63,8 @@ class HandleWrap : public AsyncWrap { HandleWrap(Environment* env, v8::Handle<v8::Object> object, uv_handle_t* handle, - AsyncWrap::ProviderType provider); + AsyncWrap::ProviderType provider, + AsyncWrap* parent = NULL); virtual ~HandleWrap(); private: diff --git a/src/node.cc b/src/node.cc index 76d6503366d..d15b47577f4 100644 --- a/src/node.cc +++ b/src/node.cc @@ -35,6 +35,10 @@ #include "node_crypto.h" #endif +#if defined(NODE_HAVE_I18N_SUPPORT) +#include "node_i18n.h" +#endif + #if defined HAVE_DTRACE || defined HAVE_ETW #include "node_dtrace.h" #endif @@ -116,11 +120,7 @@ using v8::TryCatch; using v8::Uint32; using v8::V8; using v8::Value; -using v8::kExternalUnsignedIntArray; - -// FIXME(bnoordhuis) Make these per-context? 
-QUEUE handle_wrap_queue = { &handle_wrap_queue, &handle_wrap_queue }; -QUEUE req_wrap_queue = { &req_wrap_queue, &req_wrap_queue }; +using v8::kExternalUint32Array; static bool print_eval = false; static bool force_repl = false; @@ -131,10 +131,17 @@ static bool use_debug_agent = false; static bool debug_wait_connect = false; static int debug_port = 5858; static bool v8_is_profiling = false; +static bool node_is_initialized = false; static node_module* modpending; static node_module* modlist_builtin; +static node_module* modlist_linked; static node_module* modlist_addon; +#if defined(NODE_HAVE_I18N_SUPPORT) +// Path to ICU data (for i18n / Intl) +static const char* icu_data_dir = NULL; +#endif + // used by C++ modules as well bool no_deprecation = false; @@ -902,32 +909,6 @@ Local<Value> WinapiErrnoException(Isolate* isolate, #endif -void SetupAsyncListener(const FunctionCallbackInfo<Value>& args) { - HandleScope handle_scope(args.GetIsolate()); - Environment* env = Environment::GetCurrent(args.GetIsolate()); - - assert(args[0]->IsObject()); - assert(args[1]->IsFunction()); - assert(args[2]->IsFunction()); - assert(args[3]->IsFunction()); - - env->set_async_listener_run_function(args[1].As<Function>()); - env->set_async_listener_load_function(args[2].As<Function>()); - env->set_async_listener_unload_function(args[3].As<Function>()); - - Local<Object> async_listener_flag_obj = args[0].As<Object>(); - Environment::AsyncListener* async_listener = env->async_listener(); - async_listener_flag_obj->SetIndexedPropertiesToExternalArrayData( - async_listener->fields(), - kExternalUnsignedIntArray, - async_listener->fields_count()); - - // Do a little housekeeping. - env->process_object()->Delete( - FIXED_ONE_BYTE_STRING(args.GetIsolate(), "_setupAsyncListener")); -} - - void SetupDomainUse(const FunctionCallbackInfo<Value>& args) { Environment* env = Environment::GetCurrent(args.GetIsolate()); @@ -959,7 +940,7 @@ void SetupDomainUse(const FunctionCallbackInfo<Value>& args) { Environment::DomainFlag* domain_flag = env->domain_flag(); domain_flag_obj->SetIndexedPropertiesToExternalArrayData( domain_flag->fields(), - kExternalUnsignedIntArray, + kExternalUint32Array, domain_flag->fields_count()); // Do a little housekeeping. @@ -967,6 +948,10 @@ void SetupDomainUse(const FunctionCallbackInfo<Value>& args) { FIXED_ONE_BYTE_STRING(args.GetIsolate(), "_setupDomainUse")); } +void RunMicrotasks(const FunctionCallbackInfo<Value>& args) { + args.GetIsolate()->RunMicrotasks(); +} + void SetupNextTick(const FunctionCallbackInfo<Value>& args) { HandleScope handle_scope(args.GetIsolate()); @@ -974,170 +959,109 @@ void SetupNextTick(const FunctionCallbackInfo<Value>& args) { assert(args[0]->IsObject()); assert(args[1]->IsFunction()); + assert(args[2]->IsObject()); // Values use to cross communicate with processNextTick. Local<Object> tick_info_obj = args[0].As<Object>(); tick_info_obj->SetIndexedPropertiesToExternalArrayData( env->tick_info()->fields(), - kExternalUnsignedIntArray, + kExternalUint32Array, env->tick_info()->fields_count()); env->set_tick_callback_function(args[1].As<Function>()); + NODE_SET_METHOD(args[2].As<Object>(), "runMicrotasks", RunMicrotasks); + // Do a little housekeeping. 
   env->process_object()->Delete(
       FIXED_ONE_BYTE_STRING(args.GetIsolate(), "_setupNextTick"));
 }
 
 
-Handle<Value> MakeDomainCallback(Environment* env,
-                                 Handle<Value> recv,
-                                 const Handle<Function> callback,
-                                 int argc,
-                                 Handle<Value> argv[]) {
+Handle<Value> MakeCallback(Environment* env,
+                           Handle<Value> recv,
+                           const Handle<Function> callback,
+                           int argc,
+                           Handle<Value> argv[]) {
   // If you hit this assertion, you forgot to enter the v8::Context first.
-  assert(env->context() == env->isolate()->GetCurrentContext());
+  CHECK(env->context() == env->isolate()->GetCurrentContext());
 
   Local<Object> process = env->process_object();
   Local<Object> object, domain;
-  Local<Value> domain_v;
-
-  TryCatch try_catch;
-  try_catch.SetVerbose(true);
-
   bool has_async_queue = false;
+  bool has_domain = false;
 
   if (recv->IsObject()) {
     object = recv.As<Object>();
-    // TODO(trevnorris): This is sucky for performance. Fix it.
-    has_async_queue = object->Has(env->async_queue_string());
-    if (has_async_queue) {
-      env->async_listener_load_function()->Call(process, 1, &recv);
-
-      if (try_catch.HasCaught())
-        return Undefined(env->isolate());
-    }
+    Local<Value> async_queue_v = object->Get(env->async_queue_string());
+    if (async_queue_v->IsObject())
+      has_async_queue = true;
   }
 
-  bool has_domain = false;
-
-  if (!object.IsEmpty()) {
-    domain_v = object->Get(env->domain_string());
+  if (env->using_domains()) {
+    CHECK(recv->IsObject());
+    Local<Value> domain_v = object->Get(env->domain_string());
     has_domain = domain_v->IsObject();
     if (has_domain) {
       domain = domain_v.As<Object>();
-
-      if (domain->Get(env->disposed_string())->IsTrue()) {
-        // domain has been disposed of.
+      if (domain->Get(env->disposed_string())->IsTrue())
         return Undefined(env->isolate());
-      }
-
-      Local<Function> enter = domain->Get(env->enter_string()).As<Function>();
-      if (enter->IsFunction()) {
-        enter->Call(domain, 0, NULL);
-        if (try_catch.HasCaught())
-          return Undefined(env->isolate());
-      }
     }
   }
 
-  Local<Value> ret = callback->Call(recv, argc, argv);
-
-  if (try_catch.HasCaught()) {
-    return Undefined(env->isolate());
-  }
+  TryCatch try_catch;
+  try_catch.SetVerbose(true);
 
   if (has_domain) {
-    Local<Function> exit = domain->Get(env->exit_string()).As<Function>();
-    if (exit->IsFunction()) {
-      exit->Call(domain, 0, NULL);
+    Local<Value> enter_v = domain->Get(env->enter_string());
+    if (enter_v->IsFunction()) {
+      enter_v.As<Function>()->Call(domain, 0, NULL);
       if (try_catch.HasCaught())
         return Undefined(env->isolate());
     }
   }
 
   if (has_async_queue) {
-    env->async_listener_unload_function()->Call(process, 1, &recv);
-
+    try_catch.SetVerbose(false);
+    env->async_hooks_pre_function()->Call(object, 0, NULL);
     if (try_catch.HasCaught())
-      return Undefined(env->isolate());
+      FatalError("node::MakeCallback", "pre hook threw");
+    try_catch.SetVerbose(true);
   }
 
-  Environment::TickInfo* tick_info = env->tick_info();
-
-  if (tick_info->last_threw() == 1) {
-    tick_info->set_last_threw(0);
-    return ret;
-  }
-
-  if (tick_info->in_tick()) {
-    return ret;
-  }
-
-  if (tick_info->length() == 0) {
-    tick_info->set_index(0);
-    return ret;
-  }
-
-  tick_info->set_in_tick(true);
-
-  env->tick_callback_function()->Call(process, 0, NULL);
-
-  tick_info->set_in_tick(false);
-
-  if (try_catch.HasCaught()) {
-    tick_info->set_last_threw(true);
-    return Undefined(env->isolate());
-  }
-
-  return ret;
-}
-
-
-Handle<Value> MakeCallback(Environment* env,
-                           Handle<Value> recv,
-                           const Handle<Function> callback,
-                           int argc,
-                           Handle<Value> argv[]) {
-  if (env->using_domains())
-    return
MakeDomainCallback(env, recv, callback, argc, argv); - - // If you hit this assertion, you forgot to enter the v8::Context first. - assert(env->context() == env->isolate()->GetCurrentContext()); - - Local<Object> process = env->process_object(); - - TryCatch try_catch; - try_catch.SetVerbose(true); + Local<Value> ret = callback->Call(recv, argc, argv); - // TODO(trevnorris): This is sucky for performance. Fix it. - bool has_async_queue = - recv->IsObject() && recv.As<Object>()->Has(env->async_queue_string()); if (has_async_queue) { - env->async_listener_load_function()->Call(process, 1, &recv); + try_catch.SetVerbose(false); + env->async_hooks_post_function()->Call(object, 0, NULL); if (try_catch.HasCaught()) - return Undefined(env->isolate()); + FatalError("node::MakeCallback", "post hook threw"); + try_catch.SetVerbose(true); } - Local<Value> ret = callback->Call(recv, argc, argv); + if (has_domain) { + Local<Value> exit_v = domain->Get(env->exit_string()); + if (exit_v->IsFunction()) { + exit_v.As<Function>()->Call(domain, 0, NULL); + if (try_catch.HasCaught()) + return Undefined(env->isolate()); + } + } if (try_catch.HasCaught()) { return Undefined(env->isolate()); } - if (has_async_queue) { - env->async_listener_unload_function()->Call(process, 1, &recv); - - if (try_catch.HasCaught()) - return Undefined(env->isolate()); - } - Environment::TickInfo* tick_info = env->tick_info(); if (tick_info->in_tick()) { return ret; } + if (tick_info->length() == 0) { + env->isolate()->RunMicrotasks(); + } + if (tick_info->length() == 0) { tick_info->set_index(0); return ret; @@ -1165,10 +1089,9 @@ Handle<Value> MakeCallback(Environment* env, uint32_t index, int argc, Handle<Value> argv[]) { - Local<Function> callback = recv->Get(index).As<Function>(); - assert(callback->IsFunction()); - - return MakeCallback(env, recv.As<Value>(), callback, argc, argv); + Local<Value> cb_v = recv->Get(index); + CHECK(cb_v->IsFunction()); + return MakeCallback(env, recv.As<Value>(), cb_v.As<Function>(), argc, argv); } @@ -1177,9 +1100,9 @@ Handle<Value> MakeCallback(Environment* env, Handle<String> symbol, int argc, Handle<Value> argv[]) { - Local<Function> callback = recv->Get(symbol).As<Function>(); - assert(callback->IsFunction()); - return MakeCallback(env, recv.As<Value>(), callback, argc, argv); + Local<Value> cb_v = recv->Get(symbol); + CHECK(cb_v->IsFunction()); + return MakeCallback(env, recv.As<Value>(), cb_v.As<Function>(), argc, argv); } @@ -1236,20 +1159,6 @@ Handle<Value> MakeCallback(Isolate* isolate, } -Handle<Value> MakeDomainCallback(Handle<Object> recv, - Handle<Function> callback, - int argc, - Handle<Value> argv[]) { - Local<Context> context = recv->CreationContext(); - Environment* env = Environment::GetCurrent(context); - Context::Scope context_scope(context); - EscapableHandleScope handle_scope(env->isolate()); - return handle_scope.Escape(Local<Value>::New( - env->isolate(), - MakeDomainCallback(env, recv, callback, argc, argv))); -} - - enum encoding ParseEncoding(Isolate* isolate, Handle<Value> encoding_v, enum encoding _default) { @@ -1530,13 +1439,14 @@ static Local<Value> ExecuteString(Environment* env, static void GetActiveRequests(const FunctionCallbackInfo<Value>& args) { - HandleScope scope(args.GetIsolate()); + Environment* env = Environment::GetCurrent(args.GetIsolate()); + HandleScope scope(env->isolate()); Local<Array> ary = Array::New(args.GetIsolate()); QUEUE* q = NULL; int i = 0; - QUEUE_FOREACH(q, &req_wrap_queue) { + QUEUE_FOREACH(q, env->req_wrap_queue()) { 
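    // The QUEUE macros implement an intrusive circular list; ContainerOf
    // below recovers the enclosing ReqWrap from its embedded queue node via
    // offsetof-style pointer arithmetic, so iterating the active requests
    // allocates no memory.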
ReqWrap<uv_req_t>* w = ContainerOf(&ReqWrap<uv_req_t>::req_wrap_queue_, q); if (w->persistent().IsEmpty()) continue; @@ -1559,7 +1469,7 @@ void GetActiveHandles(const FunctionCallbackInfo<Value>& args) { Local<String> owner_sym = env->owner_string(); - QUEUE_FOREACH(q, &handle_wrap_queue) { + QUEUE_FOREACH(q, env->handle_wrap_queue()) { HandleWrap* w = ContainerOf(&HandleWrap::handle_wrap_queue_, q); if (w->persistent().IsEmpty() || (w->flags_ & HandleWrap::kUnref)) continue; @@ -1615,7 +1525,7 @@ static void Cwd(const FunctionCallbackInfo<Value>& args) { Local<String> cwd = String::NewFromUtf8(env->isolate(), buf, String::kNormalString, - cwd_len - 1); + cwd_len); args.GetReturnValue().Set(cwd); } @@ -1934,7 +1844,7 @@ static void InitGroups(const FunctionCallbackInfo<Value>& args) { void Exit(const FunctionCallbackInfo<Value>& args) { Environment* env = Environment::GetCurrent(args.GetIsolate()); HandleScope scope(env->isolate()); - exit(args[0]->IntegerValue()); + exit(args[0]->Int32Value()); } @@ -1943,8 +1853,8 @@ static void Uptime(const FunctionCallbackInfo<Value>& args) { HandleScope scope(env->isolate()); double uptime; - uv_update_time(uv_default_loop()); - uptime = uv_now(uv_default_loop()) - prog_start_time; + uv_update_time(env->event_loop()); + uptime = uv_now(env->event_loop()) - prog_start_time; args.GetReturnValue().Set(Number::New(env->isolate(), uptime / 1000)); } @@ -1986,7 +1896,7 @@ void Kill(const FunctionCallbackInfo<Value>& args) { return env->ThrowError("Bad argument."); } - int pid = args[0]->IntegerValue(); + int pid = args[0]->Int32Value(); int sig = args[1]->Int32Value(); int err = uv_kill(pid, sig); args.GetReturnValue().Set(err); @@ -2030,7 +1940,15 @@ extern "C" void node_module_register(void* m) { if (mp->nm_flags & NM_F_BUILTIN) { mp->nm_link = modlist_builtin; modlist_builtin = mp; + } else if (!node_is_initialized) { + // "Linked" modules are included as part of the node project. + // Like builtins they are registered *before* node::Init runs. + mp->nm_flags = NM_F_LINKED; + mp->nm_link = modlist_linked; + modlist_linked = mp; } else { + // Once node::Init was called we can only register dynamic modules. + // See DLOpen. assert(modpending == NULL); modpending = mp; } @@ -2048,6 +1966,18 @@ struct node_module* get_builtin_module(const char* name) { return (mp); } +struct node_module* get_linked_module(const char* name) { + struct node_module* mp; + + for (mp = modlist_linked; mp != NULL; mp = mp->nm_link) { + if (strcmp(mp->nm_modname, name) == 0) + break; + } + + CHECK(mp == NULL || (mp->nm_flags & NM_F_LINKED) != 0); + return mp; +} + typedef void (UV_DYNAMIC* extInit)(Handle<Object> exports); // DLOpen is process.dlopen(module, filename). 
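For embedders, the NM_F_LINKED path above means a module compiled into the node binary can register itself before node::Init() runs and later be fetched from script through the process._linkedBinding() hook added in the next hunk. A minimal sketch of such a registration; the module name "mymod" and the InitMyMod initializer are hypothetical, and the field order is assumed to follow the node_module struct that this patch extends in src/node.h:

    #include "node.h"
    #include "v8.h"

    static void InitMyMod(v8::Handle<v8::Object> exports,
                          v8::Handle<v8::Value> module,
                          void* priv) {
      // Populate `exports` here, e.g. with NODE_SET_METHOD(exports, ...).
    }

    static node::node_module mymod_module = {
      NODE_MODULE_VERSION,  // nm_version
      NM_F_LINKED,          // nm_flags: linked-in, neither builtin nor addon
      NULL,                 // nm_dso_handle
      __FILE__,             // nm_filename
      InitMyMod,            // nm_register_func
      NULL,                 // nm_context_register_func
      "mymod",              // nm_modname: the key _linkedBinding looks up
      NULL,                 // nm_priv
      NULL                  // nm_link: filled in by node_module_register()
    };

    // Must run before node::Init(); once node_is_initialized is set,
    // node_module_register() treats any new registration as a dynamic addon.
    void RegisterMyMod() {
      node_module_register(&mymod_module);
    }

Script code can then obtain the exports object with process._linkedBinding('mymod').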
@@ -2254,6 +2184,46 @@ static void Binding(const FunctionCallbackInfo<Value>& args) { args.GetReturnValue().Set(exports); } +static void LinkedBinding(const FunctionCallbackInfo<Value>& args) { + Environment* env = Environment::GetCurrent(args.GetIsolate()); + + Local<String> module = args[0]->ToString(); + + Local<Object> cache = env->binding_cache_object(); + Local<Value> exports_v = cache->Get(module); + + if (exports_v->IsObject()) + return args.GetReturnValue().Set(exports_v.As<Object>()); + + node::Utf8Value module_v(module); + node_module* mod = get_linked_module(*module_v); + + if (mod == NULL) { + char errmsg[1024]; + snprintf(errmsg, + sizeof(errmsg), + "No such module was linked: %s", + *module_v); + return env->ThrowError(errmsg); + } + + Local<Object> exports = Object::New(env->isolate()); + + if (mod->nm_context_register_func != NULL) { + mod->nm_context_register_func(exports, + module, + env->context(), + mod->nm_priv); + } else if (mod->nm_register_func != NULL) { + mod->nm_register_func(exports, module, mod->nm_priv); + } else { + return env->ThrowError("Linked module has no declared entry point."); + } + + cache->Set(module, exports); + + args.GetReturnValue().Set(exports); +} static void ProcessTitleGetter(Local<String> property, const PropertyCallbackInfo<Value>& info) { @@ -2495,7 +2465,7 @@ static void DebugPortSetter(Local<String> property, const PropertyCallbackInfo<void>& info) { Environment* env = Environment::GetCurrent(info.GetIsolate()); HandleScope scope(env->isolate()); - debug_port = value->NumberValue(); + debug_port = value->Int32Value(); } @@ -2582,7 +2552,7 @@ void StopProfilerIdleNotifier(const FunctionCallbackInfo<Value>& args) { #define READONLY_PROPERTY(obj, str, var) \ do { \ - obj->Set(OneByteString(env->isolate(), str), var, v8::ReadOnly); \ + obj->ForceSet(OneByteString(env->isolate(), str), var, v8::ReadOnly); \ } while (0) @@ -2795,8 +2765,8 @@ void SetupProcessObject(Environment* env, NODE_SET_METHOD(process, "memoryUsage", MemoryUsage); NODE_SET_METHOD(process, "binding", Binding); + NODE_SET_METHOD(process, "_linkedBinding", LinkedBinding); - NODE_SET_METHOD(process, "_setupAsyncListener", SetupAsyncListener); NODE_SET_METHOD(process, "_setupNextTick", SetupNextTick); NODE_SET_METHOD(process, "_setupDomainUse", SetupDomainUse); @@ -2836,9 +2806,12 @@ static void RawDebug(const FunctionCallbackInfo<Value>& args) { } -void Load(Environment* env) { +void LoadEnvironment(Environment* env) { HandleScope handle_scope(env->isolate()); + V8::SetFatalErrorHandler(node::OnFatalError); + V8::AddMessageListener(OnMessage); + // Compile, execute the src/node.js file. (Which was included as static C // string in node_natives.h. 'natve_node' is the string containing that // source code.) 
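Together with the new CreateEnvironment() overload that appears further down, renaming Load() to LoadEnvironment() gives embedders explicit control over when script execution starts, for example so the debug agent can be started in between. A rough sketch of the resulting driver sequence, mirroring the rewritten Start() later in this patch (isolate, context and argument setup omitted):

    uv_loop_t* loop = uv_default_loop();
    node::Environment* env = node::CreateEnvironment(
        isolate, loop, context, argc, argv, exec_argc, exec_argv);
    node::LoadEnvironment(env);  // compiles and runs src/node.js

    bool more;
    do {
      more = uv_run(loop, UV_RUN_ONCE);
      if (more == false) {
        node::EmitBeforeExit(env);
        // 'beforeExit' handlers may have made the loop alive again.
        more = uv_loop_alive(loop);
        if (uv_run(loop, UV_RUN_NOWAIT) != 0)
          more = true;
      }
    } while (more == true);
    int exit_code = node::EmitExit(env);
    node::RunAtExit(env);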
@@ -2946,6 +2919,14 @@ static void PrintHelp() { " --trace-deprecation show stack traces on deprecations\n" " --v8-options print v8 command line options\n" " --max-stack-size=val set max v8 stack size (bytes)\n" +#if defined(NODE_HAVE_I18N_SUPPORT) + " --icu-data-dir=dir set ICU data load path to dir\n" + " (overrides NODE_ICU_DATA)\n" +#if !defined(NODE_HAVE_SMALL_ICU) + " Note: linked-in ICU data is\n" + " present.\n" +#endif +#endif "\n" "Environment variables:\n" #ifdef _WIN32 @@ -2957,6 +2938,12 @@ static void PrintHelp() { "NODE_MODULE_CONTEXTS Set to 1 to load modules in their own\n" " global contexts.\n" "NODE_DISABLE_COLORS Set to 1 to disable colors in the REPL\n" +#if defined(NODE_HAVE_I18N_SUPPORT) + "NODE_ICU_DATA Data path for ICU (Intl object) data\n" +#if !defined(NODE_HAVE_SMALL_ICU) + " (will extend linked-in data)\n" +#endif +#endif "\n" "Documentation can be found at http://nodejs.org/\n"); } @@ -3047,6 +3034,10 @@ static void ParseArgs(int* argc, } else if (strcmp(arg, "--v8-options") == 0) { new_v8_argv[new_v8_argc] = "--help"; new_v8_argc += 1; +#if defined(NODE_HAVE_I18N_SUPPORT) + } else if (strncmp(arg, "--icu-data-dir=", 15) == 0) { + icu_data_dir = arg + 15; +#endif } else { // V8 option. Pass through as-is. new_v8_argv[new_v8_argc] = arg; @@ -3079,40 +3070,33 @@ static void ParseArgs(int* argc, // Called from V8 Debug Agent TCP thread. -static void DispatchMessagesDebugAgentCallback() { +static void DispatchMessagesDebugAgentCallback(Environment* env) { + // TODO(indutny): move async handle to environment uv_async_send(&dispatch_debug_messages_async); } -// Called from the main thread. -static void EnableDebug(Isolate* isolate, bool wait_connect) { - assert(debugger_running == false); - Isolate::Scope isolate_scope(isolate); - HandleScope handle_scope(isolate); - v8::Debug::SetDebugMessageDispatchHandler(DispatchMessagesDebugAgentCallback, - false); - debugger_running = v8::Debug::EnableAgent("node " NODE_VERSION, - debug_port, - wait_connect); +static void StartDebug(Environment* env, bool wait) { + CHECK(!debugger_running); + + env->debugger_agent()->set_dispatch_handler( + DispatchMessagesDebugAgentCallback); + debugger_running = env->debugger_agent()->Start(debug_port, wait); if (debugger_running == false) { fprintf(stderr, "Starting debugger on port %d failed\n", debug_port); fflush(stderr); return; } - fprintf(stderr, "Debugger listening on port %d\n", debug_port); - fflush(stderr); +} - if (isolate == NULL) - return; // Still starting up. - Local<Context> context = isolate->GetCurrentContext(); - if (context.IsEmpty()) - return; // Still starting up. - Environment* env = Environment::GetCurrent(context); - // Assign environment to the debugger's context - env->AssignToContext(v8::Debug::GetDebugContext()); +// Called from the main thread. 
+static void EnableDebug(Environment* env) { + CHECK(debugger_running); + + // Send message to enable debug in workers + HandleScope handle_scope(env->isolate()); - Context::Scope context_scope(env->context()); Local<Object> message = Object::New(env->isolate()); message->Set(FIXED_ONE_BYTE_STRING(env->isolate(), "cmd"), FIXED_ONE_BYTE_STRING(env->isolate(), "NODE_DEBUG_ENABLED")); @@ -3121,6 +3105,9 @@ static void EnableDebug(Isolate* isolate, bool wait_connect) { message }; MakeCallback(env, env->process_object(), "emit", ARRAY_SIZE(argv), argv); + + // Enabled debugger, possibly making it wait on a semaphore + env->debugger_agent()->Enable(); } @@ -3128,7 +3115,12 @@ static void EnableDebug(Isolate* isolate, bool wait_connect) { static void DispatchDebugMessagesAsyncCallback(uv_async_t* handle) { if (debugger_running == false) { fprintf(stderr, "Starting debugger agent.\n"); - EnableDebug(node_isolate, false); + + Environment* env = Environment::GetCurrent(node_isolate); + Context::Scope context_scope(env->context()); + + StartDebug(env, false); + EnableDebug(env); } Isolate::Scope isolate_scope(node_isolate); v8::Debug::ProcessDebugMessages(); @@ -3357,7 +3349,8 @@ static void DebugPause(const FunctionCallbackInfo<Value>& args) { static void DebugEnd(const FunctionCallbackInfo<Value>& args) { if (debugger_running) { - v8::Debug::DisableAgent(); + Environment* env = Environment::GetCurrent(args.GetIsolate()); + env->debugger_agent()->Stop(); debugger_running = false; } } @@ -3368,7 +3361,7 @@ void Init(int* argc, int* exec_argc, const char*** exec_argv) { // Initialize prog_start_time to get relative uptime. - prog_start_time = uv_now(uv_default_loop()); + prog_start_time = static_cast<double>(uv_now(uv_default_loop())); // Make inherited handles noninheritable. uv_disable_stdio_inheritance(); @@ -3403,6 +3396,18 @@ void Init(int* argc, } } +#if defined(NODE_HAVE_I18N_SUPPORT) + if (icu_data_dir == NULL) { + // if the parameter isn't given, use the env variable. + icu_data_dir = getenv("NODE_ICU_DATA"); + } + // Initialize ICU. + // If icu_data_dir is NULL here, it will load the 'minimal' data. + if (!i18n::InitializeICUDirectory(icu_data_dir)) { + FatalError(NULL, "Could not initialize ICU " + "(check NODE_ICU_DATA or --icu-data-dir parameters)"); + } +#endif // The const_cast doesn't violate conceptual const-ness. V8 doesn't modify // the argv array or the elements it points to. V8::SetFlagsFromCommandLine(&v8_argc, const_cast<char**>(v8_argv), true); @@ -3427,7 +3432,8 @@ void Init(int* argc, // Fetch a reference to the main isolate, so we have a reference to it // even when we need it to access it from another (debugger) thread. - node_isolate = Isolate::GetCurrent(); + node_isolate = Isolate::New(); + Isolate::Scope isolate_scope(node_isolate); #ifdef __POSIX__ // Raise the open file descriptor limit. @@ -3458,13 +3464,7 @@ void Init(int* argc, RegisterSignalHandler(SIGTERM, SignalExit, true); #endif // __POSIX__ - V8::SetFatalErrorHandler(node::OnFatalError); - V8::AddMessageListener(OnMessage); - - // If the --debug flag was specified then initialize the debug thread. 
- if (use_debug_agent) { - EnableDebug(node_isolate, debug_wait_connect); - } else { + if (!use_debug_agent) { RegisterDebugSignalHandler(); } } @@ -3523,7 +3523,7 @@ int EmitExit(Environment* env) { process_object->Set(env->exiting_string(), True(env->isolate())); Handle<String> exitCode = env->exit_code_string(); - int code = process_object->Get(exitCode)->IntegerValue(); + int code = process_object->Get(exitCode)->Int32Value(); Local<Value> args[] = { env->exit_string(), @@ -3533,24 +3533,66 @@ int EmitExit(Environment* env) { MakeCallback(env, process_object, "emit", ARRAY_SIZE(args), args); // Reload exit code, it may be changed by `emit('exit')` - return process_object->Get(exitCode)->IntegerValue(); + return process_object->Get(exitCode)->Int32Value(); } +// Just a convenience method Environment* CreateEnvironment(Isolate* isolate, Handle<Context> context, int argc, const char* const* argv, int exec_argc, const char* const* exec_argv) { + Environment* env; + Context::Scope context_scope(context); + + env = CreateEnvironment(isolate, + uv_default_loop(), + context, + argc, + argv, + exec_argc, + exec_argv); + + LoadEnvironment(env); + + return env; +} + + +static void HandleCloseCb(uv_handle_t* handle) { + Environment* env = reinterpret_cast<Environment*>(handle->data); + env->FinishHandleCleanup(handle); +} + + +static void HandleCleanup(Environment* env, + uv_handle_t* handle, + void* arg) { + handle->data = env; + uv_close(handle, HandleCloseCb); +} + + +Environment* CreateEnvironment(Isolate* isolate, + uv_loop_t* loop, + Handle<Context> context, + int argc, + const char* const* argv, + int exec_argc, + const char* const* exec_argv) { HandleScope handle_scope(isolate); Context::Scope context_scope(context); - Environment* env = Environment::New(context); + Environment* env = Environment::New(context, loop); + + isolate->SetAutorunMicrotasks(false); uv_check_init(env->event_loop(), env->immediate_check_handle()); uv_unref( reinterpret_cast<uv_handle_t*>(env->immediate_check_handle())); + uv_idle_init(env->event_loop(), env->immediate_idle_handle()); // Inform V8's CPU profiler when we're idle. 
The profiler is sampling-based @@ -3567,6 +3609,24 @@ Environment* CreateEnvironment(Isolate* isolate, uv_unref(reinterpret_cast<uv_handle_t*>(env->idle_prepare_handle())); uv_unref(reinterpret_cast<uv_handle_t*>(env->idle_check_handle())); + // Register handle cleanups + env->RegisterHandleCleanup( + reinterpret_cast<uv_handle_t*>(env->immediate_check_handle()), + HandleCleanup, + NULL); + env->RegisterHandleCleanup( + reinterpret_cast<uv_handle_t*>(env->immediate_idle_handle()), + HandleCleanup, + NULL); + env->RegisterHandleCleanup( + reinterpret_cast<uv_handle_t*>(env->idle_prepare_handle()), + HandleCleanup, + NULL); + env->RegisterHandleCleanup( + reinterpret_cast<uv_handle_t*>(env->idle_check_handle()), + HandleCleanup, + NULL); + if (v8_is_profiling) { StartProfilerIdleNotifier(env); } @@ -3578,7 +3638,6 @@ Environment* CreateEnvironment(Isolate* isolate, env->set_process_object(process_object); SetupProcessObject(env, argc, argv, exec_argc, exec_argv); - Load(env); return env; } @@ -3614,40 +3673,48 @@ int Start(int argc, char** argv) { int code; V8::Initialize(); + node_is_initialized = true; { Locker locker(node_isolate); Isolate::Scope isolate_scope(node_isolate); HandleScope handle_scope(node_isolate); Local<Context> context = Context::New(node_isolate); Environment* env = CreateEnvironment( - node_isolate, context, argc, argv, exec_argc, exec_argv); - // Assign env to the debugger's context - if (debugger_running) { - HandleScope scope(env->isolate()); - env->AssignToContext(v8::Debug::GetDebugContext()); - } - // This Context::Scope is here so EnableDebug() can look up the current - // environment with Environment::GetCurrent(). - // TODO(bnoordhuis) Reorder the debugger initialization logic so it can - // be removed. - { - Context::Scope context_scope(env->context()); - bool more; - do { - more = uv_run(env->event_loop(), UV_RUN_ONCE); - if (more == false) { - EmitBeforeExit(env); - - // Emit `beforeExit` if the loop became alive either after emitting - // event, or after running some callbacks. - more = uv_loop_alive(env->event_loop()); - if (uv_run(env->event_loop(), UV_RUN_NOWAIT) != 0) - more = true; - } - } while (more == true); - code = EmitExit(env); - RunAtExit(env); - } + node_isolate, + uv_default_loop(), + context, + argc, + argv, + exec_argc, + exec_argv); + Context::Scope context_scope(context); + + // Start debug agent when argv has --debug + if (use_debug_agent) + StartDebug(env, debug_wait_connect); + + LoadEnvironment(env); + + // Enable debugger + if (use_debug_agent) + EnableDebug(env); + + bool more; + do { + more = uv_run(env->event_loop(), UV_RUN_ONCE); + if (more == false) { + EmitBeforeExit(env); + + // Emit `beforeExit` if the loop became alive either after emitting + // event, or after running some callbacks. 
+ more = uv_loop_alive(env->event_loop()); + if (uv_run(env->event_loop(), UV_RUN_NOWAIT) != 0) + more = true; + } + } while (more == true); + code = EmitExit(env); + RunAtExit(env); + env->Dispose(); env = NULL; } diff --git a/src/node.h b/src/node.h index 4eaaa07859e..bb8a3de0e43 100644 --- a/src/node.h +++ b/src/node.h @@ -63,6 +63,9 @@ #define NODE_DEPRECATED(msg, fn) V8_DEPRECATED(msg, fn) +// Forward-declare libuv loop +struct uv_loop_s; + // Forward-declare these functions now to stop MSVS from becoming // terminally confused when it's done in node_internals.h namespace node { @@ -177,12 +180,26 @@ NODE_EXTERN void Init(int* argc, class Environment; +NODE_EXTERN Environment* CreateEnvironment(v8::Isolate* isolate, + struct uv_loop_s* loop, + v8::Handle<v8::Context> context, + int argc, + const char* const* argv, + int exec_argc, + const char* const* exec_argv); +NODE_EXTERN void LoadEnvironment(Environment* env); + +// NOTE: Calling this is the same as calling +// CreateEnvironment() + LoadEnvironment() from above. +// `uv_default_loop()` will be passed as `loop`. NODE_EXTERN Environment* CreateEnvironment(v8::Isolate* isolate, v8::Handle<v8::Context> context, int argc, const char* const* argv, int exec_argc, const char* const* exec_argv); + + NODE_EXTERN void EmitBeforeExit(Environment* env); NODE_EXTERN int EmitExit(Environment* env); NODE_EXTERN void RunAtExit(Environment* env); @@ -202,7 +219,7 @@ NODE_EXTERN void RunAtExit(Environment* env); v8::Number::New(isolate, static_cast<double>(constant)); \ v8::PropertyAttribute constant_attributes = \ static_cast<v8::PropertyAttribute>(v8::ReadOnly | v8::DontDelete); \ - (target)->Set(constant_name, constant_value, constant_attributes); \ + (target)->ForceSet(constant_name, constant_value, constant_attributes); \ } \ while (0) @@ -330,6 +347,7 @@ typedef void (*addon_context_register_func)( void* priv); #define NM_F_BUILTIN 0x01 +#define NM_F_LINKED 0x02 struct node_module { int nm_version; @@ -344,6 +362,7 @@ struct node_module { }; node_module* get_builtin_module(const char *name); +node_module* get_linked_module(const char *name); extern "C" NODE_EXTERN void node_module_register(void* mod); diff --git a/src/node.js b/src/node.js index 65092249d55..50460383ff7 100644 --- a/src/node.js +++ b/src/node.js @@ -39,9 +39,6 @@ process.EventEmitter = EventEmitter; // process.EventEmitter is deprecated - // Setup the tracing module - NativeModule.require('tracing')._nodeInitialization(process); - // do this good and early, since it handles errors. startup.processFatal(); @@ -56,7 +53,10 @@ startup.processKillAndExit(); startup.processSignalHandlers(); - startup.processChannel(); + // Do not initialize channel in debugger agent, it deletes env variable + // and the main thread won't see it. + if (process.argv[1] !== '--debug-agent') + startup.processChannel(); startup.processRawDebug(); @@ -80,6 +80,11 @@ var d = NativeModule.require('_debugger'); d.start(); + } else if (process.argv[1] == '--debug-agent') { + // Start the debugger agent + var d = NativeModule.require('_debugger_agent'); + d.start(); + } else if (process._eval != null) { // User passed '-e' or '--eval' arguments to Node. evalScript('[eval]'); @@ -221,14 +226,8 @@ }; startup.processFatal = function() { - var tracing = NativeModule.require('tracing'); - var _errorHandler = tracing._errorHandler; - // Cleanup - delete tracing._errorHandler; - process._fatalException = function(er) { - // First run through error handlers from asyncListener. 
- var caught = _errorHandler(er); + var caught; if (process.domain && process.domain._errorHandler) caught = process.domain._errorHandler(er) || caught; @@ -250,11 +249,7 @@ // if we handled an error, then make sure any ticks get processed } else { - var t = setImmediate(process._tickCallback); - // Complete hack to make sure any errors thrown from async - // listeners don't cause an infinite loop. - if (t._asyncQueue) - t._asyncQueue = []; + NativeModule.require('timers').setImmediate(process._tickCallback); } return caught; @@ -288,12 +283,11 @@ }; startup.processNextTick = function() { - var tracing = NativeModule.require('tracing'); var nextTickQueue = []; - var asyncFlags = tracing._asyncFlags; - var _runAsyncQueue = tracing._runAsyncQueue; - var _loadAsyncQueue = tracing._loadAsyncQueue; - var _unloadAsyncQueue = tracing._unloadAsyncQueue; + var microtasksScheduled = false; + + // Used to run V8's micro task queue. + var _runMicrotasks = {}; // This tickInfo thing is used so that the C++ code in src/node.cc // can have easy accesss to our nextTick state, and avoid unnecessary @@ -303,16 +297,14 @@ var kIndex = 0; var kLength = 1; - // For asyncFlags. - // *Must* match Environment::AsyncListeners::Fields in src/env.h - var kCount = 0; - process.nextTick = nextTick; // Needs to be accessible from beyond this scope. process._tickCallback = _tickCallback; process._tickDomainCallback = _tickDomainCallback; - process._setupNextTick(tickInfo, _tickCallback); + process._setupNextTick(tickInfo, _tickCallback, _runMicrotasks); + + _runMicrotasks = _runMicrotasks.runMicrotasks; function tickDone() { if (tickInfo[kLength] !== 0) { @@ -327,18 +319,38 @@ tickInfo[kIndex] = 0; } + function scheduleMicrotasks() { + if (microtasksScheduled) + return; + + nextTickQueue.push({ + callback: runMicrotasksCallback, + domain: null + }); + + tickInfo[kLength]++; + microtasksScheduled = true; + } + + function runMicrotasksCallback() { + microtasksScheduled = false; + _runMicrotasks(); + + if (tickInfo[kIndex] < tickInfo[kLength]) + scheduleMicrotasks(); + } + // Run callbacks that have no domain. // Using domains will cause this to be overridden. 
function _tickCallback() { - var callback, hasQueue, threw, tock; + var callback, threw, tock; + + scheduleMicrotasks(); while (tickInfo[kIndex] < tickInfo[kLength]) { tock = nextTickQueue[tickInfo[kIndex]++]; callback = tock.callback; threw = true; - hasQueue = !!tock._asyncQueue; - if (hasQueue) - _loadAsyncQueue(tock); try { callback(); threw = false; @@ -346,8 +358,6 @@ if (threw) tickDone(); } - if (hasQueue) - _unloadAsyncQueue(tock); if (1e4 < tickInfo[kIndex]) tickDone(); } @@ -356,15 +366,14 @@ } function _tickDomainCallback() { - var callback, domain, hasQueue, threw, tock; + var callback, domain, threw, tock; + + scheduleMicrotasks(); while (tickInfo[kIndex] < tickInfo[kLength]) { tock = nextTickQueue[tickInfo[kIndex]++]; callback = tock.callback; domain = tock.domain; - hasQueue = !!tock._asyncQueue; - if (hasQueue) - _loadAsyncQueue(tock); if (domain) domain.enter(); threw = true; @@ -375,8 +384,6 @@ if (threw) tickDone(); } - if (hasQueue) - _unloadAsyncQueue(tock); if (1e4 < tickInfo[kIndex]) tickDone(); if (domain) @@ -393,13 +400,9 @@ var obj = { callback: callback, - domain: process.domain || null, - _asyncQueue: undefined + domain: process.domain || null }; - if (asyncFlags[kCount] > 0) - _runAsyncQueue(obj); - nextTickQueue.push(obj); tickInfo[kLength]++; } @@ -602,8 +605,8 @@ process.kill = function(pid, sig) { var err; - if (typeof pid !== 'number' || !isFinite(pid)) { - throw new TypeError('pid must be a number'); + if (pid != (pid | 0)) { + throw new TypeError('invalid pid'); } // preserve null signal diff --git a/src/node_buffer.cc b/src/node_buffer.cc index dada1002baf..cd66a8ac740 100644 --- a/src/node_buffer.cc +++ b/src/node_buffer.cc @@ -87,7 +87,7 @@ bool HasInstance(Handle<Object> obj) { if (!obj->HasIndexedPropertiesInExternalArrayData()) return false; v8::ExternalArrayType type = obj->GetIndexedPropertiesExternalArrayDataType(); - return type == v8::kExternalUnsignedByteArray; + return type == v8::kExternalUint8Array; } @@ -338,37 +338,31 @@ void Copy(const FunctionCallbackInfo<Value> &args) { } -// buffer.fill(value[, start][, end]); void Fill(const FunctionCallbackInfo<Value>& args) { - Environment* env = Environment::GetCurrent(args.GetIsolate()); - HandleScope scope(env->isolate()); + ARGS_THIS(args[0].As<Object>()) - ARGS_THIS(args.This()) - SLICE_START_END(args[1], args[2], obj_length) - args.GetReturnValue().Set(args.This()); + size_t start = args[2]->Uint32Value(); + size_t end = args[3]->Uint32Value(); + size_t length = end - start; + CHECK(length + start <= obj_length); - if (args[0]->IsNumber()) { - int value = args[0]->Uint32Value() & 255; + if (args[1]->IsNumber()) { + int value = args[1]->Uint32Value() & 255; memset(obj_data + start, value, length); return; } - node::Utf8Value at(args[0]); - size_t at_length = at.length(); + node::Utf8Value str(args[1]); + size_t str_length = str.length(); + size_t in_there = str_length; + char* ptr = obj_data + start + str_length; - // optimize single ascii character case - if (at_length == 1) { - int value = static_cast<int>((*at)[0]); - memset(obj_data + start, value, length); + if (str_length == 0) return; - } - - size_t in_there = at_length; - char* ptr = obj_data + start + at_length; - memcpy(obj_data + start, *at, MIN(at_length, length)); + memcpy(obj_data + start, *str, MIN(str_length, length)); - if (at_length >= length) + if (str_length >= length) return; while (in_there < length - in_there) { @@ -468,17 +462,10 @@ static inline void Swizzle(char* start, unsigned int len) { template <typename T, 
enum Endianness endianness> void ReadFloatGeneric(const FunctionCallbackInfo<Value>& args) { - Environment* env = Environment::GetCurrent(args.GetIsolate()); - bool doAssert = !args[1]->BooleanValue(); - size_t offset; - - CHECK_NOT_OOB(ParseArrayIndex(args[0], 0, &offset)); + ARGS_THIS(args[0].As<Object>()); - if (doAssert) { - size_t len = Length(args.This()); - if (offset + sizeof(T) > len || offset + sizeof(T) < offset) - return env->ThrowRangeError("Trying to read beyond buffer length"); - } + uint32_t offset = args[1]->Uint32Value(); + CHECK_LE(offset + sizeof(T), obj_length); union NoAlias { T val; @@ -486,8 +473,7 @@ void ReadFloatGeneric(const FunctionCallbackInfo<Value>& args) { }; union NoAlias na; - const void* data = args.This()->GetIndexedPropertiesExternalArrayData(); - const char* ptr = static_cast<const char*>(data) + offset; + const char* ptr = static_cast<const char*>(obj_data) + offset; memcpy(na.bytes, ptr, sizeof(na.bytes)); if (endianness != GetEndianness()) Swizzle(na.bytes, sizeof(na.bytes)); @@ -518,24 +504,11 @@ void ReadDoubleBE(const FunctionCallbackInfo<Value>& args) { template <typename T, enum Endianness endianness> uint32_t WriteFloatGeneric(const FunctionCallbackInfo<Value>& args) { - Environment* env = Environment::GetCurrent(args.GetIsolate()); - bool doAssert = !args[2]->BooleanValue(); - - T val = static_cast<T>(args[0]->NumberValue()); - size_t offset; - - if (!ParseArrayIndex(args[1], 0, &offset)) { - env->ThrowRangeError("out of range index"); - return 0; - } + ARGS_THIS(args[0].As<Object>()) - if (doAssert) { - size_t len = Length(args.This()); - if (offset + sizeof(T) > len || offset + sizeof(T) < offset) { - env->ThrowRangeError("Trying to write beyond buffer length"); - return 0; - } - } + T val = args[1]->NumberValue(); + uint32_t offset = args[2]->Uint32Value(); + CHECK_LE(offset + sizeof(T), obj_length); union NoAlias { T val; @@ -543,8 +516,7 @@ uint32_t WriteFloatGeneric(const FunctionCallbackInfo<Value>& args) { }; union NoAlias na = { val }; - void* data = args.This()->GetIndexedPropertiesExternalArrayData(); - char* ptr = static_cast<char*>(data) + offset; + char* ptr = static_cast<char*>(obj_data) + offset; if (endianness != GetEndianness()) Swizzle(na.bytes, sizeof(na.bytes)); memcpy(ptr, na.bytes, sizeof(na.bytes)); @@ -649,37 +621,31 @@ void SetupBufferJS(const FunctionCallbackInfo<Value>& args) { NODE_SET_METHOD(proto, "ucs2Write", Ucs2Write); NODE_SET_METHOD(proto, "utf8Write", Utf8Write); - NODE_SET_METHOD(proto, "readDoubleBE", ReadDoubleBE); - NODE_SET_METHOD(proto, "readDoubleLE", ReadDoubleLE); - NODE_SET_METHOD(proto, "readFloatBE", ReadFloatBE); - NODE_SET_METHOD(proto, "readFloatLE", ReadFloatLE); - - NODE_SET_METHOD(proto, "writeDoubleBE", WriteDoubleBE); - NODE_SET_METHOD(proto, "writeDoubleLE", WriteDoubleLE); - NODE_SET_METHOD(proto, "writeFloatBE", WriteFloatBE); - NODE_SET_METHOD(proto, "writeFloatLE", WriteFloatLE); - NODE_SET_METHOD(proto, "copy", Copy); - NODE_SET_METHOD(proto, "fill", Fill); // for backwards compatibility - proto->Set(env->offset_string(), - Uint32::New(env->isolate(), 0), - v8::ReadOnly); + proto->ForceSet(env->offset_string(), + Uint32::New(env->isolate(), 0), + v8::ReadOnly); assert(args[1]->IsObject()); Local<Object> internal = args[1].As<Object>(); + ASSERT(internal->IsObject()); + + NODE_SET_METHOD(internal, "byteLength", ByteLength); + NODE_SET_METHOD(internal, "compare", Compare); + NODE_SET_METHOD(internal, "fill", Fill); - Local<Function> byte_length = FunctionTemplate::New( - 
env->isolate(), ByteLength)->GetFunction(); - byte_length->SetName(env->byte_length_string()); - internal->Set(env->byte_length_string(), byte_length); + NODE_SET_METHOD(internal, "readDoubleBE", ReadDoubleBE); + NODE_SET_METHOD(internal, "readDoubleLE", ReadDoubleLE); + NODE_SET_METHOD(internal, "readFloatBE", ReadFloatBE); + NODE_SET_METHOD(internal, "readFloatLE", ReadFloatLE); - Local<Function> compare = FunctionTemplate::New( - env->isolate(), Compare)->GetFunction(); - compare->SetName(env->compare_string()); - internal->Set(env->compare_string(), compare); + NODE_SET_METHOD(internal, "writeDoubleBE", WriteDoubleBE); + NODE_SET_METHOD(internal, "writeDoubleLE", WriteDoubleLE); + NODE_SET_METHOD(internal, "writeFloatBE", WriteFloatBE); + NODE_SET_METHOD(internal, "writeFloatLE", WriteFloatLE); } diff --git a/src/node_contextify.cc b/src/node_contextify.cc index bb43536d87c..2e8fd2cade6 100644 --- a/src/node_contextify.cc +++ b/src/node_contextify.cc @@ -205,7 +205,7 @@ class ContextifyContext { if (wrapper.IsEmpty()) return scope.Escape(Local<Value>::New(env->isolate(), Handle<Value>())); - Wrap<ContextifyContext>(wrapper, this); + Wrap(wrapper, this); return scope.Escape(wrapper); } @@ -537,18 +537,24 @@ class ContextifyScript : public BaseObject { Environment* env = Environment::GetCurrent(args.GetIsolate()); HandleScope scope(env->isolate()); + int64_t timeout; + bool display_errors; + // Assemble arguments - TryCatch try_catch; if (!args[0]->IsObject()) { return env->ThrowTypeError( "contextifiedSandbox argument must be an object."); } + Local<Object> sandbox = args[0].As<Object>(); - int64_t timeout = GetTimeoutArg(args, 1); - bool display_errors = GetDisplayErrorsArg(args, 1); - if (try_catch.HasCaught()) { - try_catch.ReThrow(); - return; + { + TryCatch try_catch; + timeout = GetTimeoutArg(args, 1); + display_errors = GetDisplayErrorsArg(args, 1); + if (try_catch.HasCaught()) { + try_catch.ReThrow(); + return; + } } // Get the context from the sandbox @@ -563,14 +569,22 @@ class ContextifyScript : public BaseObject { if (contextify_context->context().IsEmpty()) return; - // Do the eval within the context - Context::Scope context_scope(contextify_context->context()); - if (EvalMachine(contextify_context->env(), - timeout, - display_errors, - args, - try_catch)) { - contextify_context->CopyProperties(); + { + TryCatch try_catch; + // Do the eval within the context + Context::Scope context_scope(contextify_context->context()); + if (EvalMachine(contextify_context->env(), + timeout, + display_errors, + args, + try_catch)) { + contextify_context->CopyProperties(); + } + + if (try_catch.HasCaught()) { + try_catch.ReThrow(); + return; + } } } diff --git a/src/node_crypto.cc b/src/node_crypto.cc index 85f18bc1994..36836c1a790 100644 --- a/src/node_crypto.cc +++ b/src/node_crypto.cc @@ -81,6 +81,7 @@ using v8::Boolean; using v8::Context; using v8::EscapableHandleScope; using v8::Exception; +using v8::External; using v8::False; using v8::FunctionCallbackInfo; using v8::FunctionTemplate; @@ -120,8 +121,7 @@ X509_STORE* root_cert_store; template class SSLWrap<TLSCallbacks>; template void SSLWrap<TLSCallbacks>::AddMethods(Environment* env, Handle<FunctionTemplate> t); -template void SSLWrap<TLSCallbacks>::InitNPN(SecureContext* sc, - TLSCallbacks* base); +template void SSLWrap<TLSCallbacks>::InitNPN(SecureContext* sc); template SSL_SESSION* SSLWrap<TLSCallbacks>::GetSessionCallback( SSL* s, unsigned char* key, @@ -151,7 +151,8 @@ template int SSLWrap<TLSCallbacks>::TLSExtStatusCallback(SSL* 
s, void* arg); static void crypto_threadid_cb(CRYPTO_THREADID* tid) { - CRYPTO_THREADID_set_numeric(tid, uv_thread_self()); + assert(sizeof(uv_thread_t) <= sizeof(void*)); // NOLINT(runtime/sizeof) + CRYPTO_THREADID_set_pointer(tid, reinterpret_cast<void*>(uv_thread_self())); } @@ -287,6 +288,11 @@ void SecureContext::Initialize(Environment* env, Handle<Object> target) { "getIssuer", SecureContext::GetCertificate<false>); + NODE_SET_EXTERNAL( + t->PrototypeTemplate(), + "_external", + CtxGetter); + target->Set(FIXED_ONE_BYTE_STRING(env->isolate(), "SecureContext"), t->GetFunction()); env->set_secure_context_constructor_template(t); @@ -957,6 +963,16 @@ void SecureContext::SetTicketKeys(const FunctionCallbackInfo<Value>& args) { } +void SecureContext::CtxGetter(Local<String> property, + const PropertyCallbackInfo<Value>& info) { + HandleScope scope(info.GetIsolate()); + + SSL_CTX* ctx = Unwrap<SecureContext>(info.Holder())->ctx_; + Local<External> ext = External::New(info.GetIsolate(), ctx); + info.GetReturnValue().Set(ext); +} + + template <bool primary> void SecureContext::GetCertificate(const FunctionCallbackInfo<Value>& args) { HandleScope scope(args.GetIsolate()); @@ -1007,32 +1023,35 @@ void SSLWrap<Base>::AddMethods(Environment* env, Handle<FunctionTemplate> t) { #ifdef OPENSSL_NPN_NEGOTIATED NODE_SET_PROTOTYPE_METHOD(t, "getNegotiatedProtocol", GetNegotiatedProto); - NODE_SET_PROTOTYPE_METHOD(t, "setNPNProtocols", SetNPNProtocols); #endif // OPENSSL_NPN_NEGOTIATED + +#ifdef OPENSSL_NPN_NEGOTIATED + NODE_SET_PROTOTYPE_METHOD(t, "setNPNProtocols", SetNPNProtocols); +#endif + + NODE_SET_EXTERNAL( + t->PrototypeTemplate(), + "_external", + SSLGetter); } template <class Base> -void SSLWrap<Base>::InitNPN(SecureContext* sc, Base* base) { - if (base->is_server()) { -#ifdef OPENSSL_NPN_NEGOTIATED - // Server should advertise NPN protocols - SSL_CTX_set_next_protos_advertised_cb(sc->ctx_, - AdvertiseNextProtoCallback, - base); -#endif // OPENSSL_NPN_NEGOTIATED - } else { +void SSLWrap<Base>::InitNPN(SecureContext* sc) { #ifdef OPENSSL_NPN_NEGOTIATED - // Client should select protocol from list of advertised - // If server supports NPN - SSL_CTX_set_next_proto_select_cb(sc->ctx_, SelectNextProtoCallback, base); + // Server should advertise NPN protocols + SSL_CTX_set_next_protos_advertised_cb(sc->ctx_, + AdvertiseNextProtoCallback, + NULL); + // Client should select protocol from list of advertised + // If server supports NPN + SSL_CTX_set_next_proto_select_cb(sc->ctx_, SelectNextProtoCallback, NULL); #endif // OPENSSL_NPN_NEGOTIATED - } #ifdef NODE__HAVE_TLSEXT_STATUS_CB // OCSP stapling SSL_CTX_set_tlsext_status_cb(sc->ctx_, TLSExtStatusCallback); - SSL_CTX_set_tlsext_status_arg(sc->ctx_, base); + SSL_CTX_set_tlsext_status_arg(sc->ctx_, NULL); #endif // NODE__HAVE_TLSEXT_STATUS_CB } @@ -1688,7 +1707,7 @@ int SSLWrap<Base>::AdvertiseNextProtoCallback(SSL* s, const unsigned char** data, unsigned int* len, void* arg) { - Base* w = static_cast<Base*>(arg); + Base* w = static_cast<Base*>(SSL_get_app_data(s)); Environment* env = w->env(); HandleScope handle_scope(env->isolate()); Context::Scope context_scope(env->context()); @@ -1714,7 +1733,7 @@ int SSLWrap<Base>::SelectNextProtoCallback(SSL* s, const unsigned char* in, unsigned int inlen, void* arg) { - Base* w = static_cast<Base*>(arg); + Base* w = static_cast<Base*>(SSL_get_app_data(s)); Environment* env = w->env(); HandleScope handle_scope(env->isolate()); Context::Scope context_scope(env->context()); @@ -1806,7 +1825,7 @@ void 
SSLWrap<Base>::SetNPNProtocols(const FunctionCallbackInfo<Value>& args) { #ifdef NODE__HAVE_TLSEXT_STATUS_CB template <class Base> int SSLWrap<Base>::TLSExtStatusCallback(SSL* s, void* arg) { - Base* w = static_cast<Base*>(arg); + Base* w = static_cast<Base*>(SSL_get_app_data(s)); Environment* env = w->env(); HandleScope handle_scope(env->isolate()); @@ -1852,6 +1871,17 @@ int SSLWrap<Base>::TLSExtStatusCallback(SSL* s, void* arg) { #endif // NODE__HAVE_TLSEXT_STATUS_CB +template <class Base> +void SSLWrap<Base>::SSLGetter(Local<String> property, + const PropertyCallbackInfo<Value>& info) { + HandleScope scope(info.GetIsolate()); + + SSL* ssl = Unwrap<Base>(info.Holder())->ssl_; + Local<External> ext = External::New(info.GetIsolate(), ssl); + info.GetReturnValue().Set(ext); +} + + void Connection::OnClientHelloParseEnd(void* arg) { Connection* conn = static_cast<Connection*>(arg); @@ -2031,15 +2061,6 @@ void Connection::Initialize(Environment* env, Handle<Object> target) { SSLWrap<Connection>::AddMethods(env, t); -#ifdef OPENSSL_NPN_NEGOTIATED - NODE_SET_PROTOTYPE_METHOD(t, - "getNegotiatedProtocol", - Connection::GetNegotiatedProto); - NODE_SET_PROTOTYPE_METHOD(t, - "setNPNProtocols", - Connection::SetNPNProtocols); -#endif - #ifdef SSL_CTRL_SET_TLSEXT_SERVERNAME_CB NODE_SET_PROTOTYPE_METHOD(t, "getServername", Connection::GetServername); @@ -2122,7 +2143,7 @@ int Connection::SelectSNIContextCallback_(SSL *s, int *ad, void* arg) { if (secure_context_constructor_template->HasInstance(ret)) { conn->sniContext_.Reset(env->isolate(), ret); SecureContext* sc = Unwrap<SecureContext>(ret.As<Object>()); - InitNPN(sc, conn); + InitNPN(sc); SSL_set_SSL_CTX(s, sc->ctx_); } else { return SSL_TLSEXT_ERR_NOACK; @@ -2158,7 +2179,7 @@ void Connection::New(const FunctionCallbackInfo<Value>& args) { if (is_server) SSL_set_info_callback(conn->ssl_, SSLInfoCallback); - InitNPN(sc, conn); + InitNPN(sc); #ifdef SSL_CTRL_SET_TLSEXT_SERVERNAME_CB if (is_server) { @@ -4723,7 +4744,7 @@ void RandomBytes(const FunctionCallbackInfo<Value>& args) { // maybe allow a buffer to write to? 
cuts down on object creation // when generating random data in a loop if (!args[0]->IsUint32()) { - return env->ThrowTypeError("Argument #1 must be number > 0"); + return env->ThrowTypeError("size must be a number >= 0"); } const uint32_t size = args[0]->Uint32Value(); diff --git a/src/node_crypto.h b/src/node_crypto.h index c88f3945e64..1a719b9058a 100644 --- a/src/node_crypto.h +++ b/src/node_crypto.h @@ -105,6 +105,8 @@ class SecureContext : public BaseObject { static void LoadPKCS12(const v8::FunctionCallbackInfo<v8::Value>& args); static void GetTicketKeys(const v8::FunctionCallbackInfo<v8::Value>& args); static void SetTicketKeys(const v8::FunctionCallbackInfo<v8::Value>& args); + static void CtxGetter(v8::Local<v8::String> property, + const v8::PropertyCallbackInfo<v8::Value>& info); template <bool primary> static void GetCertificate(const v8::FunctionCallbackInfo<v8::Value>& args); @@ -188,7 +190,7 @@ class SSLWrap { inline bool is_waiting_new_session() const { return new_session_wait_; } protected: - static void InitNPN(SecureContext* sc, Base* base); + static void InitNPN(SecureContext* sc); static void AddMethods(Environment* env, v8::Handle<v8::FunctionTemplate> t); static SSL_SESSION* GetSessionCallback(SSL* s, @@ -237,6 +239,8 @@ class SSLWrap { void* arg); #endif // OPENSSL_NPN_NEGOTIATED static int TLSExtStatusCallback(SSL* s, void* arg); + static void SSLGetter(v8::Local<v8::String> property, + const v8::PropertyCallbackInfo<v8::Value>& info); inline Environment* ssl_env() const { return env_; diff --git a/src/node_crypto_bio.cc b/src/node_crypto_bio.cc index 22f0f6e8c71..441e9b17f78 100644 --- a/src/node_crypto_bio.cc +++ b/src/node_crypto_bio.cc @@ -272,6 +272,8 @@ size_t NodeBIO::Read(char* out, size_t size) { void NodeBIO::FreeEmpty() { + if (write_head_ == NULL) + return; Buffer* child = write_head_->next_; if (child == write_head_ || child == read_head_) return; @@ -281,13 +283,6 @@ void NodeBIO::FreeEmpty() { Buffer* prev = child; while (cur != read_head_) { - // Skip embedded buffer, and continue deallocating again starting from it - if (cur == &head_) { - prev->next_ = cur; - prev = cur; - cur = head_.next_; - continue; - } assert(cur != write_head_); assert(cur->write_pos_ == cur->read_pos_); @@ -295,7 +290,6 @@ void NodeBIO::FreeEmpty() { delete cur; cur = next; } - assert(prev == child || prev == &head_); prev->next_ = cur; } @@ -330,7 +324,7 @@ size_t NodeBIO::IndexOf(char delim, size_t limit) { } // Move to next buffer - if (current->read_pos_ + avail == kBufferLength) { + if (current->read_pos_ + avail == current->len_) { current = current->next_; } } @@ -343,10 +337,14 @@ size_t NodeBIO::IndexOf(char delim, size_t limit) { void NodeBIO::Write(const char* data, size_t size) { size_t offset = 0; size_t left = size; + + // Allocate initial buffer if the ring is empty + TryAllocateForWrite(left); + while (left > 0) { size_t to_write = left; - assert(write_head_->write_pos_ <= kBufferLength); - size_t avail = kBufferLength - write_head_->write_pos_; + assert(write_head_->write_pos_ <= write_head_->len_); + size_t avail = write_head_->len_ - write_head_->write_pos_; if (to_write > avail) to_write = avail; @@ -361,12 +359,12 @@ void NodeBIO::Write(const char* data, size_t size) { offset += to_write; length_ += to_write; write_head_->write_pos_ += to_write; - assert(write_head_->write_pos_ <= kBufferLength); + assert(write_head_->write_pos_ <= write_head_->len_); // Go to next buffer if there still are some bytes to write if (left != 0) { - 
-        assert(write_head_->write_pos_ == kBufferLength);
-        TryAllocateForWrite();
+        assert(write_head_->write_pos_ == write_head_->len_);
+        TryAllocateForWrite(left);
         write_head_ = write_head_->next_;

         // Additionally, since we're moved to the next buffer, read head
@@ -379,7 +377,9 @@ void NodeBIO::Write(const char* data, size_t size) {


 char* NodeBIO::PeekWritable(size_t* size) {
-  size_t available = kBufferLength - write_head_->write_pos_;
+  TryAllocateForWrite(*size);
+
+  size_t available = write_head_->len_ - write_head_->write_pos_;
   if (*size != 0 && available > *size)
     available = *size;
   else
@@ -392,12 +392,12 @@ char* NodeBIO::PeekWritable(size_t* size) {

 void NodeBIO::Commit(size_t size) {
   write_head_->write_pos_ += size;
   length_ += size;
-  assert(write_head_->write_pos_ <= kBufferLength);
+  assert(write_head_->write_pos_ <= write_head_->len_);

   // Allocate new buffer if write head is full,
   // and there're no other place to go
-  TryAllocateForWrite();
-  if (write_head_->write_pos_ == kBufferLength) {
+  TryAllocateForWrite(0);
+  if (write_head_->write_pos_ == write_head_->len_) {
     write_head_ = write_head_->next_;

     // Additionally, since we're moved to the next buffer, read head
@@ -407,19 +407,35 @@ void NodeBIO::Commit(size_t size) {
 }


-void NodeBIO::TryAllocateForWrite() {
+void NodeBIO::TryAllocateForWrite(size_t hint) {
+  Buffer* w = write_head_;
+  Buffer* r = read_head_;
   // If write head is full, next buffer is either read head or not empty.
-  if (write_head_->write_pos_ == kBufferLength &&
-      (write_head_->next_ == read_head_ ||
-       write_head_->next_->write_pos_ != 0)) {
-    Buffer* next = new Buffer();
-    next->next_ = write_head_->next_;
-    write_head_->next_ = next;
+  if (w == NULL ||
+      (w->write_pos_ == w->len_ &&
+       (w->next_ == r || w->next_->write_pos_ != 0))) {
+    size_t len = w == NULL ? initial_ :
+                             kThroughputBufferLength;
+    if (len < hint)
+      len = hint;
+    Buffer* next = new Buffer(len);
+
+    if (w == NULL) {
+      next->next_ = next;
+      write_head_ = next;
+      read_head_ = next;
+    } else {
+      next->next_ = w->next_;
+      w->next_ = next;
+    }
   }
 }


 void NodeBIO::Reset() {
+  if (read_head_ == NULL)
+    return;
+
   while (read_head_->read_pos_ != read_head_->write_pos_) {
     assert(read_head_->write_pos_ > read_head_->read_pos_);

@@ -435,12 +451,15 @@ void NodeBIO::Reset() {


 NodeBIO::~NodeBIO() {
-  Buffer* current = head_.next_;
-  while (current != &head_) {
+  if (read_head_ == NULL)
+    return;
+
+  Buffer* current = read_head_;
+  do {
     Buffer* next = current->next_;
     delete current;
     current = next;
-  }
+  } while (current != read_head_);

   read_head_ = NULL;
   write_head_ = NULL;
diff --git a/src/node_crypto_bio.h b/src/node_crypto_bio.h
index 0b9b3440c5e..163a82124f6 100644
--- a/src/node_crypto_bio.h
+++ b/src/node_crypto_bio.h
@@ -29,9 +29,10 @@ namespace node {

 class NodeBIO {
  public:
-  NodeBIO() : length_(0), read_head_(&head_), write_head_(&head_) {
-    // Loop head
-    head_.next_ = &head_;
+  NodeBIO() : initial_(kInitialBufferLength),
+              length_(0),
+              read_head_(NULL),
+              write_head_(NULL) {
   }

   ~NodeBIO();
@@ -42,7 +43,7 @@ class NodeBIO {
   void TryMoveReadHead();

   // Allocate new buffer for write if needed
-  void TryAllocateForWrite();
+  void TryAllocateForWrite(size_t hint);

   // Read `len` bytes maximum into `out`, return actual number of read bytes
   size_t Read(char* out, size_t size);
@@ -76,11 +77,16 @@ class NodeBIO {
   // Commit reserved data
   void Commit(size_t size);

+
   // Return size of buffer in bytes
-  size_t inline Length() {
+  inline size_t Length() const {
     return length_;
   }

+  inline void set_initial(size_t initial) {
+    initial_ = initial;
+  }
+
   static inline NodeBIO* FromBIO(BIO* bio) {
     assert(bio->ptr != NULL);
     return static_cast<NodeBIO*>(bio->ptr);
@@ -95,24 +101,34 @@ class NodeBIO {
   static int Gets(BIO* bio, char* out, int size);
   static long Ctrl(BIO* bio, int cmd, long num, void* ptr);

-  // NOTE: Size is maximum TLS frame length, this is required if we want
-  // to fit whole ClientHello into one Buffer of NodeBIO.
-  static const size_t kBufferLength = 16 * 1024 + 5;
+  // Enough to handle the most of the client hellos
+  static const size_t kInitialBufferLength = 1024;
+  static const size_t kThroughputBufferLength = 16384;
+
   static const BIO_METHOD method;

   class Buffer {
    public:
-    Buffer() : read_pos_(0), write_pos_(0), next_(NULL) {
+    explicit Buffer(size_t len) : read_pos_(0),
+                                  write_pos_(0),
+                                  len_(len),
+                                  next_(NULL) {
+      data_ = new char[len];
+    }
+
+    ~Buffer() {
+      delete[] data_;
     }

     size_t read_pos_;
     size_t write_pos_;
+    size_t len_;
     Buffer* next_;
-    char data_[kBufferLength];
+    char* data_;
   };

+  size_t initial_;
   size_t length_;
-  Buffer head_;
   Buffer* read_head_;
   Buffer* write_head_;
 };
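The NodeBIO rework above replaces fixed 16 KB-per-connection buffers with lazily allocated, variable-size ones: the first buffer is `initial_` bytes (1024 by default, adjustable through `set_initial()`), later buffers use the 16 KB throughput size, and either is raised to the caller's `hint` so a single large write still fits in one buffer. A minimal compilable sketch of just that sizing policy; `next_buffer_size` is a hypothetical helper, not part of the patch:

```cpp
#include <cstddef>

// Constants taken from the patch above.
static const size_t kInitialBufferLength = 1024;      // first allocation
static const size_t kThroughputBufferLength = 16384;  // steady-state size

// Hypothetical helper: size of the next buffer NodeBIO would allocate.
// `first` is true when no buffer exists yet (write_head_ == NULL).
static size_t next_buffer_size(bool first, size_t initial, size_t hint) {
  size_t len = first ? initial : kThroughputBufferLength;
  if (len < hint)  // make sure one large write still fits in one buffer
    len = hint;
  return len;
}

// e.g. next_buffer_size(true,  kInitialBufferLength, 0)     -> 1024
//      next_buffer_size(false, kInitialBufferLength, 0)     -> 16384
//      next_buffer_size(true,  kInitialBufferLength, 20000) -> 20000
```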
diff --git a/src/node_file.cc b/src/node_file.cc
index 3b287ec1cc8..6736864bc57 100644
--- a/src/node_file.cc
+++ b/src/node_file.cc
@@ -71,11 +71,15 @@ class FSReqWrap: public ReqWrap<uv_fs_t> {
   void* operator new(size_t size) { return new char[size]; }
   void* operator new(size_t size, char* storage) { return storage; }

-  FSReqWrap(Environment* env, const char* syscall, char* data = NULL)
-      : ReqWrap<uv_fs_t>(env, Object::New(env->isolate())),
+  FSReqWrap(Environment* env,
+            Local<Object> req,
+            const char* syscall,
+            char* data = NULL)
+      : ReqWrap(env, req, AsyncWrap::PROVIDER_FSREQWRAP),
        syscall_(syscall),
        data_(data),
        dest_len_(0) {
+    Wrap(object(), this);
   }

   void ReleaseEarly() {
@@ -98,6 +102,11 @@ class FSReqWrap: public ReqWrap<uv_fs_t> {
 };


+static void NewFSReqWrap(const FunctionCallbackInfo<Value>& args) {
+  CHECK(args.IsConstructCall());
+}
+
+
 #define ASSERT_OFFSET(a) \
   if (!(a)->IsUndefined() && !(a)->IsNull() && !IsInt64((a)->NumberValue())) { \
     return env->ThrowTypeError("Not an integer"); \
   }
@@ -205,23 +214,28 @@ static void After(uv_fs_t *req) {
         argv[1] = Integer::New(env->isolate(), req->result);
         break;

-      case UV_FS_READDIR:
+      case UV_FS_SCANDIR:
         {
-          char *namebuf = static_cast<char*>(req->ptr);
-          int nnames = req->result;
-
-          Local<Array> names = Array::New(env->isolate(), nnames);
-
-          for (int i = 0; i < nnames; i++) {
-            Local<String> name = String::NewFromUtf8(env->isolate(), namebuf);
+          int r;
+          Local<Array> names = Array::New(env->isolate(), 0);
+
+          for (int i = 0; ; i++) {
+            uv_dirent_t ent;
+
+            r = uv_fs_scandir_next(req, &ent);
+            if (r == UV_EOF)
+              break;
+            if (r != 0) {
+              argv[0] = UVException(r,
+                                    NULL,
+                                    req_wrap->syscall(),
+                                    static_cast<const char*>(req->path));
+              break;
+            }
+
+            Local<String> name = String::NewFromUtf8(env->isolate(),
+                                                     ent.name);
             names->Set(i, name);
-#ifndef NDEBUG
-            namebuf += strlen(namebuf);
-            assert(*namebuf == '\0');
-            namebuf += 1;
-#else
-            namebuf += strlen(namebuf) + 1;
-#endif
           }

           argv[1] = names;
@@ -251,35 +265,35 @@ struct fs_req_wrap {
 };


-#define ASYNC_DEST_CALL(func, callback, dest_path, ...) \
+#define ASYNC_DEST_CALL(func, req, dest_path, ...) \
   Environment* env = Environment::GetCurrent(args.GetIsolate()); \
   FSReqWrap* req_wrap; \
   char* dest_str = (dest_path); \
   int dest_len = dest_str == NULL ? 0 : strlen(dest_str); \
   char* storage = new char[sizeof(*req_wrap) + dest_len]; \
-  req_wrap = new(storage) FSReqWrap(env, #func); \
+  CHECK(req->IsObject()); \
+  req_wrap = new(storage) FSReqWrap(env, req.As<Object>(), #func); \
   req_wrap->dest_len(dest_len); \
   if (dest_str != NULL) { \
     memcpy(const_cast<char*>(req_wrap->dest()), \
            dest_str, \
            dest_len + 1); \
   } \
-  int err = uv_fs_ ## func(env->event_loop() , \
+  int err = uv_fs_ ## func(env->event_loop(), \
                            &req_wrap->req_, \
                            __VA_ARGS__, \
                            After); \
-  req_wrap->object()->Set(env->oncomplete_string(), callback); \
   req_wrap->Dispatched(); \
   if (err < 0) { \
-    uv_fs_t* req = &req_wrap->req_; \
-    req->result = err; \
-    req->path = NULL; \
-    After(req); \
+    uv_fs_t* uv_req = &req_wrap->req_; \
+    uv_req->result = err; \
+    uv_req->path = NULL; \
+    After(uv_req); \
   } \
   args.GetReturnValue().Set(req_wrap->persistent());

-#define ASYNC_CALL(func, callback, ...) \
-  ASYNC_DEST_CALL(func, callback, NULL, __VA_ARGS__) \
+#define ASYNC_CALL(func, req, ...) \
+  ASYNC_DEST_CALL(func, req, NULL, __VA_ARGS__) \

 #define SYNC_DEST_CALL(func, path, dest, ...) \
   fs_req_wrap req_wrap; \
@@ -316,7 +330,7 @@ static void Close(const FunctionCallbackInfo<Value>& args) {

   int fd = args[0]->Int32Value();

-  if (args[1]->IsFunction()) {
+  if (args[1]->IsObject()) {
     ASYNC_CALL(close, args[1], fd)
   } else {
     SYNC_CALL(close, 0, fd)
@@ -432,7 +446,7 @@ static void Stat(const FunctionCallbackInfo<Value>& args) {

   node::Utf8Value path(args[0]);

-  if (args[1]->IsFunction()) {
+  if (args[1]->IsObject()) {
     ASYNC_CALL(stat, args[1], *path)
   } else {
     SYNC_CALL(stat, *path, *path)
@@ -452,7 +466,7 @@ static void LStat(const FunctionCallbackInfo<Value>& args) {

   node::Utf8Value path(args[0]);

-  if (args[1]->IsFunction()) {
+  if (args[1]->IsObject()) {
     ASYNC_CALL(lstat, args[1], *path)
   } else {
     SYNC_CALL(lstat, *path, *path)
@@ -471,7 +485,7 @@ static void FStat(const FunctionCallbackInfo<Value>& args) {

   int fd = args[0]->Int32Value();

-  if (args[1]->IsFunction()) {
+  if (args[1]->IsObject()) {
     ASYNC_CALL(fstat, args[1], fd)
   } else {
     SYNC_CALL(fstat, 0, fd)
@@ -509,10 +523,10 @@ static void Symlink(const FunctionCallbackInfo<Value>& args) {
     }
   }

-  if (args[3]->IsFunction()) {
-    ASYNC_DEST_CALL(symlink, args[3], *dest, *dest, *path, flags)
+  if (args[3]->IsObject()) {
+    ASYNC_DEST_CALL(symlink, args[3], *path, *dest, *path, flags)
   } else {
-    SYNC_DEST_CALL(symlink, *path, *dest, *dest, *path, flags)
+    SYNC_DEST_CALL(symlink, *dest, *path, *dest, *path, flags)
   }
 }

@@ -533,7 +547,7 @@ static void Link(const FunctionCallbackInfo<Value>& args) {
   node::Utf8Value orig_path(args[0]);
   node::Utf8Value new_path(args[1]);

-  if (args[2]->IsFunction()) {
+  if (args[2]->IsObject()) {
     ASYNC_DEST_CALL(link, args[2], *new_path, *orig_path, *new_path)
   } else {
     SYNC_DEST_CALL(link, *orig_path, *new_path, *orig_path, *new_path)
@@ -551,7 +565,7 @@ static void ReadLink(const FunctionCallbackInfo<Value>& args) {

   node::Utf8Value path(args[0]);

-  if (args[1]->IsFunction()) {
+  if (args[1]->IsObject()) {
     ASYNC_CALL(readlink, args[1], *path)
   } else {
     SYNC_CALL(readlink, *path, *path)
@@ -578,7 +592,7 @@ static void Rename(const FunctionCallbackInfo<Value>& args) {
   node::Utf8Value old_path(args[0]);
   node::Utf8Value new_path(args[1]);

-  if (args[2]->IsFunction()) {
+  if (args[2]->IsObject()) {
     ASYNC_DEST_CALL(rename, args[2], *new_path, *old_path, *new_path)
   } else {
     SYNC_DEST_CALL(rename, *old_path, *new_path, *old_path, *new_path)
@@ -598,7 +612,7 @@ static void FTruncate(const FunctionCallbackInfo<Value>& args) {

   ASSERT_TRUNCATE_LENGTH(args[1]);
   int64_t len = GET_TRUNCATE_LENGTH(args[1]);

-  if (args[2]->IsFunction()) {
+  if (args[2]->IsObject()) {
     ASYNC_CALL(ftruncate, args[2], fd, len)
   } else {
     SYNC_CALL(ftruncate, 0, fd, len)
@@ -615,7 +629,7 @@ static void Fdatasync(const FunctionCallbackInfo<Value>& args) {

   int fd = args[0]->Int32Value();

-  if (args[1]->IsFunction()) {
+  if (args[1]->IsObject()) {
     ASYNC_CALL(fdatasync, args[1], fd)
   } else {
     SYNC_CALL(fdatasync, 0, fd)
@@ -632,7 +646,7 @@ static void Fsync(const FunctionCallbackInfo<Value>& args) {

   int fd = args[0]->Int32Value();

-  if (args[1]->IsFunction()) {
+  if (args[1]->IsObject()) {
     ASYNC_CALL(fsync, args[1], fd)
   } else {
     SYNC_CALL(fsync, 0, fd)
@@ -650,7 +664,7 @@ static void Unlink(const FunctionCallbackInfo<Value>& args) {

   node::Utf8Value path(args[0]);

-  if (args[1]->IsFunction()) {
+  if (args[1]->IsObject()) {
     ASYNC_CALL(unlink, args[1], *path)
   } else {
     SYNC_CALL(unlink, *path, *path)
@@ -668,7 +682,7 @@ static void RMDir(const FunctionCallbackInfo<Value>& args) {

   node::Utf8Value path(args[0]);

-  if (args[1]->IsFunction()) {
+  if (args[1]->IsObject()) {
     ASYNC_CALL(rmdir, args[1], *path)
   } else {
     SYNC_CALL(rmdir, *path, *path)
@@ -686,7 +700,7 @@ static void MKDir(const FunctionCallbackInfo<Value>& args) {
   node::Utf8Value path(args[0]);
   int mode = static_cast<int>(args[1]->Int32Value());

-  if (args[2]->IsFunction()) {
+  if (args[2]->IsObject()) {
     ASYNC_CALL(mkdir, args[2], *path, mode)
   } else {
     SYNC_CALL(mkdir, *path, *path, mode)
@@ -704,25 +718,27 @@ static void ReadDir(const FunctionCallbackInfo<Value>& args) {

   node::Utf8Value path(args[0]);

-  if (args[1]->IsFunction()) {
-    ASYNC_CALL(readdir, args[1], *path, 0 /*flags*/)
+  if (args[1]->IsObject()) {
+    ASYNC_CALL(scandir, args[1], *path, 0 /*flags*/)
   } else {
-    SYNC_CALL(readdir, *path, *path, 0 /*flags*/)
+    SYNC_CALL(scandir, *path, *path, 0 /*flags*/)

     assert(SYNC_REQ.result >= 0);
-    char* namebuf = static_cast<char*>(SYNC_REQ.ptr);
-    uint32_t nnames = SYNC_REQ.result;
-    Local<Array> names = Array::New(env->isolate(), nnames);
-
-    for (uint32_t i = 0; i < nnames; ++i) {
-      names->Set(i, String::NewFromUtf8(env->isolate(), namebuf));
-#ifndef NDEBUG
-      namebuf += strlen(namebuf);
-      assert(*namebuf == '\0');
-      namebuf += 1;
-#else
-      namebuf += strlen(namebuf) + 1;
-#endif
+    int r;
+    Local<Array> names = Array::New(env->isolate(), 0);
+
+    for (int i = 0; ; i++) {
+      uv_dirent_t ent;
+
+      r = uv_fs_scandir_next(&SYNC_REQ, &ent);
+      if (r == UV_EOF)
+        break;
+      if (r != 0)
+        return env->ThrowUVException(r, "readdir", "", *path);
+
+      Local<String> name = String::NewFromUtf8(env->isolate(),
+                                               ent.name);
+      names->Set(i, name);
     }

     args.GetReturnValue().Set(names);
@@ -751,7 +767,7 @@ static void Open(const FunctionCallbackInfo<Value>& args) {
   int flags = args[1]->Int32Value();
   int mode = static_cast<int>(args[2]->Int32Value());

-  if (args[3]->IsFunction()) {
+  if (args[3]->IsObject()) {
     ASYNC_CALL(open, args[3], *path, flags, mode)
   } else {
     SYNC_CALL(open, *path, *path, flags, mode)
@@ -783,7 +799,7 @@ static void WriteBuffer(const FunctionCallbackInfo<Value>& args) {
   size_t off = args[2]->Uint32Value();
   size_t len = args[3]->Uint32Value();
   int64_t pos = GET_OFFSET(args[4]);
-  Local<Value> cb = args[5];
+  Local<Value> req = args[5];

   if (off > buffer_length)
     return env->ThrowRangeError("offset out of bounds");
@@ -798,8 +814,8 @@ static void WriteBuffer(const FunctionCallbackInfo<Value>& args) {

   uv_buf_t uvbuf = uv_buf_init(const_cast<char*>(buf), len);

-  if (cb->IsFunction()) {
-    ASYNC_CALL(write, cb, fd, &uvbuf, 1, pos)
+  if (req->IsObject()) {
+    ASYNC_CALL(write, req, fd, &uvbuf, 1, pos)
     return;
   }

@@ -823,7 +839,7 @@ static void WriteString(const FunctionCallbackInfo<Value>& args) {
   if (!args[0]->IsInt32())
     return env->ThrowTypeError("First argument must be file descriptor");

-  Local<Value> cb;
+  Local<Value> req;
   Local<Value> string = args[1];
   int fd = args[0]->Int32Value();
   char* buf = NULL;
@@ -845,18 +861,19 @@ static void WriteString(const FunctionCallbackInfo<Value>& args) {
     must_free = true;
   }
   pos = GET_OFFSET(args[2]);
-  cb = args[4];
+  req = args[4];

   uv_buf_t uvbuf = uv_buf_init(const_cast<char*>(buf), len);

-  if (!cb->IsFunction()) {
+  if (!req->IsObject()) {
     SYNC_CALL(write, NULL, fd, &uvbuf, 1, pos)
     if (must_free)
       delete[] buf;
     return args.GetReturnValue().Set(SYNC_RESULT);
   }

-  FSReqWrap* req_wrap = new FSReqWrap(env, "write", must_free ? buf : NULL);
+  FSReqWrap* req_wrap =
+      new FSReqWrap(env, req.As<Object>(), "write", must_free ? buf : NULL);
   int err = uv_fs_write(env->event_loop(),
                         &req_wrap->req_,
                         fd,
@@ -864,7 +881,6 @@ static void WriteString(const FunctionCallbackInfo<Value>& args) {
                         1,
                         pos,
                         After);
-  req_wrap->object()->Set(env->oncomplete_string(), cb);
   req_wrap->Dispatched();
   if (err < 0) {
     uv_fs_t* req = &req_wrap->req_;
@@ -899,7 +915,7 @@ static void Read(const FunctionCallbackInfo<Value>& args) {

   int fd = args[0]->Int32Value();

-  Local<Value> cb;
+  Local<Value> req;

   size_t len;
   int64_t pos;
@@ -929,10 +945,10 @@ static void Read(const FunctionCallbackInfo<Value>& args) {

   uv_buf_t uvbuf = uv_buf_init(const_cast<char*>(buf), len);

-  cb = args[5];
+  req = args[5];

-  if (cb->IsFunction()) {
-    ASYNC_CALL(read, cb, fd, &uvbuf, 1, pos);
+  if (req->IsObject()) {
+    ASYNC_CALL(read, req, fd, &uvbuf, 1, pos);
   } else {
     SYNC_CALL(read, 0, fd, &uvbuf, 1, pos)
     args.GetReturnValue().Set(SYNC_RESULT);
@@ -953,7 +969,7 @@ static void Chmod(const FunctionCallbackInfo<Value>& args) {
   node::Utf8Value path(args[0]);
   int mode = static_cast<int>(args[1]->Int32Value());

-  if (args[2]->IsFunction()) {
+  if (args[2]->IsObject()) {
     ASYNC_CALL(chmod, args[2], *path, mode);
   } else {
     SYNC_CALL(chmod, *path, *path, mode);
@@ -974,7 +990,7 @@ static void FChmod(const FunctionCallbackInfo<Value>& args) {
   int fd = args[0]->Int32Value();
   int mode = static_cast<int>(args[1]->Int32Value());

-  if (args[2]->IsFunction()) {
+  if (args[2]->IsObject()) {
     ASYNC_CALL(fchmod, args[2], fd, mode);
   } else {
     SYNC_CALL(fchmod, 0, fd, mode);
@@ -1007,7 +1023,7 @@ static void Chown(const FunctionCallbackInfo<Value>& args) {
   uv_uid_t uid = static_cast<uv_uid_t>(args[1]->Uint32Value());
   uv_gid_t gid = static_cast<uv_gid_t>(args[2]->Uint32Value());

-  if (args[3]->IsFunction()) {
+  if (args[3]->IsObject()) {
     ASYNC_CALL(chown, args[3], *path, uid, gid);
   } else {
     SYNC_CALL(chown, *path, *path, uid, gid);
@@ -1040,7 +1056,7 @@ static void FChown(const FunctionCallbackInfo<Value>& args) {
   uv_uid_t uid = static_cast<uv_uid_t>(args[1]->Uint32Value());
   uv_gid_t gid = static_cast<uv_gid_t>(args[2]->Uint32Value());

-  if (args[3]->IsFunction()) {
+  if (args[3]->IsObject()) {
     ASYNC_CALL(fchown, args[3], fd, uid, gid);
   } else {
     SYNC_CALL(fchown, 0, fd, uid, gid);
@@ -1070,7 +1086,7 @@ static void UTimes(const FunctionCallbackInfo<Value>& args) {
   const double atime = static_cast<double>(args[1]->NumberValue());
   const double mtime = static_cast<double>(args[2]->NumberValue());

-  if (args[3]->IsFunction()) {
+  if (args[3]->IsObject()) {
     ASYNC_CALL(utime, args[3], *path, atime, mtime);
   } else {
     SYNC_CALL(utime, *path, *path, atime, mtime);
@@ -1099,7 +1115,7 @@ static void FUTimes(const FunctionCallbackInfo<Value>& args) {
   const double atime = static_cast<double>(args[1]->NumberValue());
   const double mtime = static_cast<double>(args[2]->NumberValue());

-  if (args[3]->IsFunction()) {
+  if (args[3]->IsObject()) {
     ASYNC_CALL(futime, args[3], fd, atime, mtime);
   } else {
     SYNC_CALL(futime, 0, fd, atime, mtime);
@@ -1157,6 +1173,14 @@ void InitFs(Handle<Object> target,
   NODE_SET_METHOD(target, "futimes", FUTimes);

   StatWatcher::Initialize(env, target);
+
+  // Create FunctionTemplate for FSReqWrap
+  Local<FunctionTemplate> fst =
+      FunctionTemplate::New(env->isolate(), NewFSReqWrap);
+  fst->InstanceTemplate()->SetInternalFieldCount(1);
+  fst->SetClassName(FIXED_ONE_BYTE_STRING(env->isolate(), "FSReqWrap"));
+  target->Set(FIXED_ONE_BYTE_STRING(env->isolate(), "FSReqWrap"),
+              fst->GetFunction());
 }

 }  // end namespace node
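The readdir conversion above moves from libuv's old packed `char*` name buffer to the libuv 1.x scandir API, where directory entries are pulled one at a time with `uv_fs_scandir_next()` until `UV_EOF`. A small standalone sketch against plain libuv (outside Node's binding layer), assuming a libuv 1.x install:

```cpp
#include <cstdio>
#include <uv.h>

int main() {
  uv_fs_t req;
  // A NULL callback makes the request synchronous; on success the
  // return value is the number of directory entries found.
  int r = uv_fs_scandir(uv_default_loop(), &req, ".", 0 /* flags */, NULL);
  if (r < 0) {
    fprintf(stderr, "scandir: %s\n", uv_strerror(r));
    return 1;
  }

  // Pull entries one at a time; uv_fs_scandir_next() returns UV_EOF
  // (or a negative error) when there is nothing left.
  uv_dirent_t ent;
  while (uv_fs_scandir_next(&req, &ent) == 0)
    printf("%s\n", ent.name);

  uv_fs_req_cleanup(&req);
  return 0;
}
```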
diff --git a/src/node_i18n.cc b/src/node_i18n.cc
new file mode 100644
index 00000000000..6d6144dc782
--- /dev/null
+++ b/src/node_i18n.cc
@@ -0,0 +1,88 @@
+// Copyright Joyent, Inc. and other Node contributors.
+//
+// Permission is hereby granted, free of charge, to any person obtaining a
+// copy of this software and associated documentation files (the
+// "Software"), to deal in the Software without restriction, including
+// without limitation the rights to use, copy, modify, merge, publish,
+// distribute, sublicense, and/or sell copies of the Software, and to permit
+// persons to whom the Software is furnished to do so, subject to the
+// following conditions:
+//
+// The above copyright notice and this permission notice shall be included
+// in all copies or substantial portions of the Software.
+//
+// THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS
+// OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
+// MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN
+// NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM,
+// DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR
+// OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE
+// USE OR OTHER DEALINGS IN THE SOFTWARE.
+
+
+/*
+ * notes: by srl295
+ *  - When in NODE_HAVE_SMALL_ICU mode, ICU is linked against "stub" (null) data
+ *    ( stubdata/libicudata.a ) containing nothing, no data, and it's also
+ *    linked against a "small" data file which the SMALL_ICUDATA_ENTRY_POINT
+ *    macro names. That's the "english+root" data.
+ *
+ *    If icu_data_path is non-null, the user has provided a path and we assume
+ *    it goes somewhere useful. We set that path in ICU, and exit.
+ *    If icu_data_path is null, they haven't set a path and we want the
+ *    "english+root" data. We call
+ *       udata_setCommonData(SMALL_ICUDATA_ENTRY_POINT,...)
+ *    to load up the english+root data.
+ *
+ *  - when NOT in NODE_HAVE_SMALL_ICU mode, ICU is linked directly with its full
+ *    data. All of the variables and command line options for changing data at
+ *    runtime are disabled, as they wouldn't fully override the internal data.
+ *    See:  http://bugs.icu-project.org/trac/ticket/10924
+ */
+
+
+#include "node_i18n.h"
+
+#if defined(NODE_HAVE_I18N_SUPPORT)
+
+#include <unicode/putil.h>
+#include <unicode/udata.h>
+
+#ifdef NODE_HAVE_SMALL_ICU
+/* if this is defined, we have a 'secondary' entry point.
+   compare following to utypes.h defs for U_ICUDATA_ENTRY_POINT */
+#define SMALL_ICUDATA_ENTRY_POINT \
+  SMALL_DEF2(U_ICU_VERSION_MAJOR_NUM, U_LIB_SUFFIX_C_NAME)
+#define SMALL_DEF2(major, suff) SMALL_DEF(major, suff)
+#ifndef U_LIB_SUFFIX_C_NAME
+#define SMALL_DEF(major, suff) icusmdt##major##_dat
+#else
+#define SMALL_DEF(major, suff) icusmdt##suff##major##_dat
+#endif
+
+extern "C" const char U_DATA_API SMALL_ICUDATA_ENTRY_POINT[];
+#endif
+
+namespace node {
+namespace i18n {
+
+bool InitializeICUDirectory(const char* icu_data_path) {
+  if (icu_data_path != NULL) {
+    u_setDataDirectory(icu_data_path);
+    return true;  // no error
+  } else {
+    UErrorCode status = U_ZERO_ERROR;
+#ifdef NODE_HAVE_SMALL_ICU
+    // install the 'small' data.
+    udata_setCommonData(&SMALL_ICUDATA_ENTRY_POINT, &status);
+#else  // !NODE_HAVE_SMALL_ICU
+    // no small data, so nothing to do.
+#endif  // !NODE_HAVE_SMALL_ICU
+    return (status == U_ZERO_ERROR);
+  }
+}
+
+}  // namespace i18n
+}  // namespace node
+
+#endif  // NODE_HAVE_I18N_SUPPORT
diff --git a/src/node_i18n.h b/src/node_i18n.h
new file mode 100644
index 00000000000..f6807a911ec
--- /dev/null
+++ b/src/node_i18n.h
@@ -0,0 +1,39 @@
+// Copyright Joyent, Inc. and other Node contributors.
+//
+// Permission is hereby granted, free of charge, to any person obtaining a
+// copy of this software and associated documentation files (the
+// "Software"), to deal in the Software without restriction, including
+// without limitation the rights to use, copy, modify, merge, publish,
+// distribute, sublicense, and/or sell copies of the Software, and to permit
+// persons to whom the Software is furnished to do so, subject to the
+// following conditions:
+//
+// The above copyright notice and this permission notice shall be included
+// in all copies or substantial portions of the Software.
+//
+// THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS
+// OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
+// MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN
+// NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM,
+// DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR
+// OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE
+// USE OR OTHER DEALINGS IN THE SOFTWARE.
+
+#ifndef SRC_NODE_I18N_H_
+#define SRC_NODE_I18N_H_
+
+#include "node.h"
+
+#if defined(NODE_HAVE_I18N_SUPPORT)
+
+namespace node {
+namespace i18n {
+
+NODE_EXTERN bool InitializeICUDirectory(const char* icu_data_path);
+
+}  // namespace i18n
+}  // namespace node
+
+#endif  // NODE_HAVE_I18N_SUPPORT
+
+#endif  // SRC_NODE_I18N_H_
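`InitializeICUDirectory()` implements the policy described in the notes comment above: a non-NULL path is handed to ICU's `u_setDataDirectory()`, otherwise small-ICU builds install the linked-in english+root data through `udata_setCommonData()`. A hedged sketch of how an embedder or startup path might call it; `SetupICU` and the flag plumbing are hypothetical, only `node::i18n::InitializeICUDirectory` comes from the patch:

```cpp
#include "node_i18n.h"  // added by this patch

// Hypothetical start-up helper: `icu_data_path` would come from a
// command-line flag or environment variable; NULL falls back to the
// small-ICU data compiled into the binary (or, in full-ICU builds,
// to the data ICU was linked with, in which case this is a no-op).
static bool SetupICU(const char* icu_data_path) {
#if defined(NODE_HAVE_I18N_SUPPORT)
  // Returns false only when udata_setCommonData() reports an error.
  return node::i18n::InitializeICUDirectory(icu_data_path);
#else
  (void) icu_data_path;  // built without Intl support; nothing to do
  return true;
#endif
}
```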
diff --git a/src/node_internals.h b/src/node_internals.h
index d38a3f019fb..253bd38d962 100644
--- a/src/node_internals.h
+++ b/src/node_internals.h
@@ -216,6 +216,21 @@ NODE_DEPRECATED("Use ThrowUVException(isolate)",
   return ThrowUVException(isolate, errorno, syscall, message, path);
 })

+inline void NODE_SET_EXTERNAL(v8::Handle<v8::ObjectTemplate> target,
+                              const char* key,
+                              v8::AccessorGetterCallback getter) {
+  v8::Isolate* isolate = v8::Isolate::GetCurrent();
+  v8::HandleScope handle_scope(isolate);
+  v8::Local<v8::String> prop = v8::String::NewFromUtf8(isolate, key);
+  target->SetAccessor(prop,
+                      getter,
+                      NULL,
+                      v8::Handle<v8::Value>(),
+                      v8::DEFAULT,
+                      static_cast<v8::PropertyAttribute>(v8::ReadOnly |
+                                                         v8::DontDelete));
+}
+
 }  // namespace node

 #endif  // SRC_NODE_INTERNALS_H_
diff --git a/src/node_version.h b/src/node_version.h
index ab3d2176337..ca154026c7f 100644
--- a/src/node_version.h
+++ b/src/node_version.h
@@ -24,7 +24,7 @@

 #define NODE_MAJOR_VERSION 0
 #define NODE_MINOR_VERSION 11
-#define NODE_PATCH_VERSION 14
+#define NODE_PATCH_VERSION 15

 #define NODE_VERSION_IS_RELEASE 0
diff --git a/src/pipe_wrap.cc b/src/pipe_wrap.cc
index 05472de5d92..69cdfcdff3c 100644
--- a/src/pipe_wrap.cc
+++ b/src/pipe_wrap.cc
@@ -21,6 +21,7 @@

 #include "pipe_wrap.h"

+#include "async-wrap.h"
 #include "env.h"
 #include "env-inl.h"
 #include "handle_wrap.h"
@@ -37,6 +38,7 @@ namespace node {
 using v8::Boolean;
 using v8::Context;
 using v8::EscapableHandleScope;
+using v8::External;
 using v8::Function;
 using v8::FunctionCallbackInfo;
 using v8::FunctionTemplate;
@@ -50,8 +52,23 @@ using v8::String;
 using v8::Undefined;
 using v8::Value;

+
 // TODO(bnoordhuis) share with TCPWrap?
-typedef class ReqWrap<uv_connect_t> ConnectWrap;
+class PipeConnectWrap : public ReqWrap<uv_connect_t> {
+ public:
+  PipeConnectWrap(Environment* env, Local<Object> req_wrap_obj);
+};
+
+
+PipeConnectWrap::PipeConnectWrap(Environment* env, Local<Object> req_wrap_obj)
+    : ReqWrap(env, req_wrap_obj, AsyncWrap::PROVIDER_PIPEWRAP) {
+  Wrap(req_wrap_obj, this);
+}
+
+
+static void NewPipeConnectWrap(const FunctionCallbackInfo<Value>& args) {
+  CHECK(args.IsConstructCall());
+}

 uv_pipe_t* PipeWrap::UVHandle() {
@@ -59,12 +76,13 @@ uv_pipe_t* PipeWrap::UVHandle() {
 }

-Local<Object> PipeWrap::Instantiate(Environment* env) {
+Local<Object> PipeWrap::Instantiate(Environment* env, AsyncWrap* parent) {
   EscapableHandleScope handle_scope(env->isolate());
   assert(!env->pipe_constructor_template().IsEmpty());
   Local<Function> constructor = env->pipe_constructor_template()->GetFunction();
   assert(!constructor.IsEmpty());
-  Local<Object> instance = constructor->NewInstance();
+  Local<Value> ptr = External::New(env->isolate(), parent);
+  Local<Object> instance = constructor->NewInstance(1, &ptr);
   assert(!instance.IsEmpty());
   return handle_scope.Escape(instance);
 }
@@ -119,6 +137,14 @@ void PipeWrap::Initialize(Handle<Object> target,

   target->Set(FIXED_ONE_BYTE_STRING(env->isolate(), "Pipe"), t->GetFunction());
   env->set_pipe_constructor_template(t);
+
+  // Create FunctionTemplate for PipeConnectWrap.
+  Local<FunctionTemplate> cwt =
+      FunctionTemplate::New(env->isolate(), NewPipeConnectWrap);
+  cwt->InstanceTemplate()->SetInternalFieldCount(1);
+  cwt->SetClassName(FIXED_ONE_BYTE_STRING(env->isolate(), "PipeConnectWrap"));
+  target->Set(FIXED_ONE_BYTE_STRING(env->isolate(), "PipeConnectWrap"),
+              cwt->GetFunction());
 }

@@ -127,17 +153,25 @@ void PipeWrap::New(const FunctionCallbackInfo<Value>& args) {
   // Therefore we assert that we are not trying to call this as a
   // normal function.
   assert(args.IsConstructCall());
-  HandleScope handle_scope(args.GetIsolate());
   Environment* env = Environment::GetCurrent(args.GetIsolate());
-  new PipeWrap(env, args.This(), args[0]->IsTrue());
+  if (args[0]->IsExternal()) {
+    void* ptr = args[0].As<External>()->Value();
+    new PipeWrap(env, args.This(), false, static_cast<AsyncWrap*>(ptr));
+  } else {
+    new PipeWrap(env, args.This(), args[0]->IsTrue(), NULL);
+  }
 }

-PipeWrap::PipeWrap(Environment* env, Handle<Object> object, bool ipc)
+PipeWrap::PipeWrap(Environment* env,
+                   Handle<Object> object,
+                   bool ipc,
+                   AsyncWrap* parent)
     : StreamWrap(env,
                  object,
                  reinterpret_cast<uv_stream_t*>(&handle_),
-                 AsyncWrap::PROVIDER_PIPEWRAP) {
+                 AsyncWrap::PROVIDER_PIPEWRAP,
+                 parent) {
   int r = uv_pipe_init(env->event_loop(), &handle_, ipc);
   assert(r == 0);  // How do we proxy this error up to javascript?
                    // Suggestion: uv_pipe_init() returns void.
@@ -209,7 +243,7 @@ void PipeWrap::OnConnection(uv_stream_t* handle, int status) {
   }

   // Instanciate the client javascript object and handle.
-  Local<Object> client_obj = Instantiate(env);
+  Local<Object> client_obj = Instantiate(env, pipe_wrap);

   // Unwrap the client javascript object.
   PipeWrap* wrap = Unwrap<PipeWrap>(client_obj);
@@ -224,7 +258,7 @@ void PipeWrap::OnConnection(uv_stream_t* handle, int status) {

 // TODO(bnoordhuis) Maybe share this with TCPWrap?
 void PipeWrap::AfterConnect(uv_connect_t* req, int status) {
-  ConnectWrap* req_wrap = static_cast<ConnectWrap*>(req->data);
+  PipeConnectWrap* req_wrap = static_cast<PipeConnectWrap*>(req->data);
   PipeWrap* wrap = static_cast<PipeWrap*>(req->handle->data);
   assert(req_wrap->env() == wrap->env());
   Environment* env = wrap->env();
@@ -287,9 +321,7 @@ void PipeWrap::Connect(const FunctionCallbackInfo<Value>& args) {
   Local<Object> req_wrap_obj = args[0].As<Object>();
   node::Utf8Value name(args[1]);

-  ConnectWrap* req_wrap = new ConnectWrap(env,
-                                          req_wrap_obj,
-                                          AsyncWrap::PROVIDER_CONNECTWRAP);
+  PipeConnectWrap* req_wrap = new PipeConnectWrap(env, req_wrap_obj);

   uv_pipe_connect(&req_wrap->req_,
                   &wrap->handle_,
                   *name,
diff --git a/src/pipe_wrap.h b/src/pipe_wrap.h
index 92492c42b73..959f28f4dca 100644
--- a/src/pipe_wrap.h
+++ b/src/pipe_wrap.h
@@ -22,6 +22,7 @@
 #ifndef SRC_PIPE_WRAP_H_
 #define SRC_PIPE_WRAP_H_

+#include "async-wrap.h"
 #include "env.h"
 #include "stream_wrap.h"

@@ -31,13 +32,16 @@ class PipeWrap : public StreamWrap {
  public:
   uv_pipe_t* UVHandle();

-  static v8::Local<v8::Object> Instantiate(Environment* env);
+  static v8::Local<v8::Object> Instantiate(Environment* env, AsyncWrap* parent);
   static void Initialize(v8::Handle<v8::Object> target,
                          v8::Handle<v8::Value> unused,
                          v8::Handle<v8::Context> context);

  private:
-  PipeWrap(Environment* env, v8::Handle<v8::Object> object, bool ipc);
+  PipeWrap(Environment* env,
+           v8::Handle<v8::Object> object,
+           bool ipc,
+           AsyncWrap* parent);

   static void New(const v8::FunctionCallbackInfo<v8::Value>& args);
   static void Bind(const v8::FunctionCallbackInfo<v8::Value>& args);
diff --git a/src/req_wrap.h b/src/req_wrap.h
index 56d63eb7113..8c818e634a8 100644
--- a/src/req_wrap.h
+++ b/src/req_wrap.h
@@ -31,20 +31,17 @@

 namespace node {

-// defined in node.cc
-extern QUEUE req_wrap_queue;
-
 template <typename T>
 class ReqWrap : public AsyncWrap {
  public:
   ReqWrap(Environment* env,
           v8::Handle<v8::Object> object,
-          AsyncWrap::ProviderType provider = AsyncWrap::PROVIDER_REQWRAP)
-      : AsyncWrap(env, object, AsyncWrap::PROVIDER_REQWRAP) {
+          AsyncWrap::ProviderType provider)
+      : AsyncWrap(env, object, provider) {
     if (env->in_domain())
       object->Set(env->domain_string(), env->domain_array()->Get(0));

-    QUEUE_INSERT_TAIL(&req_wrap_queue, &req_wrap_queue_);
+    QUEUE_INSERT_TAIL(env->req_wrap_queue(), &req_wrap_queue_);
   }
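The pipe, TCP and UDP wraps in this patch all pass an optional parent `AsyncWrap*` into the JS constructor by boxing the raw pointer in a `v8::External`, so the `New()` callback can tell an internal instantiation (an accepted connection) apart from a user-level `new`. A reduced sketch of the pattern against the v8 API of this era; `Parent`, `InstantiateWithParent` and `New` here are illustrative stand-ins, not names from the patch:

```cpp
#include <cstddef>
#include "v8.h"

using namespace v8;

struct Parent;  // stand-in for node's AsyncWrap

// C++ side: instantiate the JS object, smuggling the parent pointer in
// as the single constructor argument.
Local<Object> InstantiateWithParent(Isolate* isolate,
                                    Local<Function> constructor,
                                    Parent* parent) {
  EscapableHandleScope scope(isolate);
  Local<Value> arg = External::New(isolate, parent);
  return scope.Escape(constructor->NewInstance(1, &arg));
}

// Constructor callback: an External argument marks internal instantiation;
// anything else means script-level `new`.
void New(const FunctionCallbackInfo<Value>& args) {
  Parent* parent = NULL;
  if (args.Length() > 0 && args[0]->IsExternal())
    parent = static_cast<Parent*>(args[0].As<External>()->Value());
  // ... construct the wrap, chaining to `parent` when non-NULL ...
  (void) parent;
}
```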
diff --git a/src/res/node.exe.extra.manifest b/src/res/node.exe.extra.manifest
new file mode 100644
index 00000000000..c4cc80a141d
--- /dev/null
+++ b/src/res/node.exe.extra.manifest
@@ -0,0 +1,15 @@
+<?xml version='1.0' encoding='UTF-8' standalone='yes'?>
+<assembly xmlns='urn:schemas-microsoft-com:asm.v1' manifestVersion='1.0'>
+  <compatibility xmlns="urn:schemas-microsoft-com:compatibility.v1">
+    <application>
+      <!-- Windows 8.1 -->
+      <supportedOS Id="{1f676c76-80e1-4239-95bb-83d0f6d0da78}"/>
+      <!-- Windows 8 -->
+      <supportedOS Id="{4a2f28e3-53b9-4441-ba9c-d69d4a4a6e38}"/>
+      <!-- Windows 7 -->
+      <supportedOS Id="{35138b9a-5d96-4fbd-8e2d-a2440225f93a}"/>
+      <!-- Windows Vista -->
+      <supportedOS Id="{e2011457-1546-43c5-a5fe-008deee3d3f0}"/>
+    </application>
+  </compatibility>
+</assembly>
diff --git a/src/smalloc.cc b/src/smalloc.cc
index f2eedc369f3..0cd8f3eb9e4 100644
--- a/src/smalloc.cc
+++ b/src/smalloc.cc
@@ -51,7 +51,7 @@ using v8::RetainedObjectInfo;
 using v8::Uint32;
 using v8::Value;
 using v8::WeakCallbackData;
-using v8::kExternalUnsignedByteArray;
+using v8::kExternalUint8Array;

 class CallbackInfo {
@@ -132,7 +132,7 @@ void CallbackInfo::WeakCallback(Isolate* isolate, Local<Object> object) {
       object->GetIndexedPropertiesExternalArrayDataType();
   size_t array_size = ExternalArraySize(array_type);
   CHECK_GT(array_size, 0);
-  if (array_size > 1) {
+  if (array_size > 1 && array_data != NULL) {
     CHECK_GT(array_length * array_size, array_length);  // Overflow check.
     array_length *= array_size;
   }
@@ -147,23 +147,23 @@ void CallbackInfo::WeakCallback(Isolate* isolate, Local<Object> object) {
 // return size of external array type, or 0 if unrecognized
 size_t ExternalArraySize(enum ExternalArrayType type) {
   switch (type) {
-    case v8::kExternalUnsignedByteArray:
+    case v8::kExternalUint8Array:
       return sizeof(uint8_t);
-    case v8::kExternalByteArray:
+    case v8::kExternalInt8Array:
       return sizeof(int8_t);
-    case v8::kExternalShortArray:
+    case v8::kExternalInt16Array:
      return sizeof(int16_t);
-    case v8::kExternalUnsignedShortArray:
+    case v8::kExternalUint16Array:
       return sizeof(uint16_t);
-    case v8::kExternalIntArray:
+    case v8::kExternalInt32Array:
       return sizeof(int32_t);
-    case v8::kExternalUnsignedIntArray:
+    case v8::kExternalUint32Array:
       return sizeof(uint32_t);
-    case v8::kExternalFloatArray:
+    case v8::kExternalFloat32Array:
       return sizeof(float);  // NOLINT(runtime/sizeof)
-    case v8::kExternalDoubleArray:
+    case v8::kExternalFloat64Array:
       return sizeof(double);  // NOLINT(runtime/sizeof)
-    case v8::kExternalPixelArray:
+    case v8::kExternalUint8ClampedArray:
       return sizeof(uint8_t);
   }
   return 0;
@@ -207,7 +207,7 @@ void CopyOnto(const FunctionCallbackInfo<Value>& args) {
   size_t dest_size = ExternalArraySize(dest_type);

   // optimization for Uint8 arrays (i.e. Buffers)
-  if (source_size != 1 && dest_size != 1) {
+  if (source_size != 1 || dest_size != 1) {
     if (source_size == 0)
       return env->ThrowTypeError("unknown source external array type");
     if (dest_size == 0)
@@ -304,7 +304,7 @@ void Alloc(const FunctionCallbackInfo<Value>& args) {

   // it's faster to not pass the default argument then use Uint32Value
   if (args[2]->IsUndefined()) {
-    array_type = kExternalUnsignedByteArray;
+    array_type = kExternalUint8Array;
   } else {
     array_type = static_cast<ExternalArrayType>(args[2]->Uint32Value());
     size_t type_length = ExternalArraySize(array_type);
@@ -385,7 +385,7 @@ void AllocDispose(Environment* env, Handle<Object> obj) {

   if (data != NULL) {
     obj->SetIndexedPropertiesToExternalArrayData(NULL,
-                                                 kExternalUnsignedByteArray,
+                                                 kExternalUint8Array,
                                                  0);
     free(data);
   }
@@ -446,6 +446,9 @@ bool HasExternalData(Environment* env, Local<Object> obj) {
   return obj->HasIndexedPropertiesInExternalArrayData();
 }

+void IsTypedArray(const FunctionCallbackInfo<Value>& args) {
+  args.GetReturnValue().Set(args[0]->IsTypedArray());
+}

 void AllocTruncate(const FunctionCallbackInfo<Value>& args) {
   Environment* env = Environment::GetCurrent(args.GetIsolate());
@@ -547,6 +550,7 @@ void Initialize(Handle<Object> exports,
   NODE_SET_METHOD(exports, "truncate", AllocTruncate);

   NODE_SET_METHOD(exports, "hasExternalData", HasExternalData);
+  NODE_SET_METHOD(exports, "isTypedArray", IsTypedArray);

   exports->Set(FIXED_ONE_BYTE_STRING(env->isolate(), "kMaxLength"),
                Uint32::NewFromUnsigned(env->isolate(), kMaxLength));
diff --git a/src/spawn_sync.cc b/src/spawn_sync.cc
index 59de8d46304..f13a9e9ed33 100644
--- a/src/spawn_sync.cc
+++ b/src/spawn_sync.cc
@@ -737,9 +737,9 @@ int SyncProcessRunner::ParseOptions(Local<Value> js_value) {
     r = CopyJsStringArray(js_env_pairs, &env_buffer_);
     if (r < 0)
       return r;

-    uv_process_options_.args = reinterpret_cast<char**>(env_buffer_);
-  }
+
+    uv_process_options_.env = reinterpret_cast<char**>(env_buffer_);
+  }
   Local<Value> js_uid = js_options->Get(env()->uid_string());
   if (IsSet(js_uid)) {
     if (!CheckRange<uv_uid_t>(js_uid))
diff --git a/src/stream_wrap.cc b/src/stream_wrap.cc
index fe3e82799df..840b16614a1 100644
--- a/src/stream_wrap.cc
+++ b/src/stream_wrap.cc
@@ -43,6 +43,7 @@ using v8::Array;
 using v8::Context;
 using v8::EscapableHandleScope;
 using v8::FunctionCallbackInfo;
+using v8::FunctionTemplate;
 using v8::Handle;
 using v8::HandleScope;
 using v8::Integer;
@@ -56,11 +57,37 @@ using v8::Undefined;
 using v8::Value;

+void StreamWrap::Initialize(Handle<Object> target,
+                            Handle<Value> unused,
+                            Handle<Context> context) {
+  Environment* env = Environment::GetCurrent(context);
+
+  Local<FunctionTemplate> sw =
+      FunctionTemplate::New(env->isolate(), ShutdownWrap::NewShutdownWrap);
+  sw->InstanceTemplate()->SetInternalFieldCount(1);
+  sw->SetClassName(FIXED_ONE_BYTE_STRING(env->isolate(), "ShutdownWrap"));
+  target->Set(FIXED_ONE_BYTE_STRING(env->isolate(), "ShutdownWrap"),
+              sw->GetFunction());
+
+  Local<FunctionTemplate> ww =
+      FunctionTemplate::New(env->isolate(), WriteWrap::NewWriteWrap);
+  ww->InstanceTemplate()->SetInternalFieldCount(1);
+  ww->SetClassName(FIXED_ONE_BYTE_STRING(env->isolate(), "WriteWrap"));
+  target->Set(FIXED_ONE_BYTE_STRING(env->isolate(), "WriteWrap"),
+              ww->GetFunction());
+}
+
+
 StreamWrap::StreamWrap(Environment* env,
                        Local<Object> object,
                        uv_stream_t* stream,
-                       AsyncWrap::ProviderType provider)
-    : HandleWrap(env, object, reinterpret_cast<uv_handle_t*>(stream), provider),
+                       AsyncWrap::ProviderType provider,
+                       AsyncWrap* parent)
+    : HandleWrap(env,
+                 object,
+                 reinterpret_cast<uv_handle_t*>(stream),
+                 provider,
+                 parent),
       stream_(stream),
       default_callbacks_(this),
       callbacks_(&default_callbacks_),
@@ -89,6 +116,7 @@ void StreamWrap::UpdateWriteQueueSize() {
   object()->Set(env()->write_queue_size_string(), write_queue_size);
 }

+
 void StreamWrap::ReadStart(const FunctionCallbackInfo<Value>& args) {
   Environment* env = Environment::GetCurrent(args.GetIsolate());
   HandleScope scope(env->isolate());
@@ -122,12 +150,14 @@ void StreamWrap::OnAlloc(uv_handle_t* handle,

 template <class WrapType, class UVType>
-static Local<Object> AcceptHandle(Environment* env, uv_stream_t* pipe) {
+static Local<Object> AcceptHandle(Environment* env,
+                                  uv_stream_t* pipe,
+                                  AsyncWrap* parent) {
   EscapableHandleScope scope(env->isolate());
   Local<Object> wrap_obj;
   UVType* handle;

-  wrap_obj = WrapType::Instantiate(env);
+  wrap_obj = WrapType::Instantiate(env, parent);
   if (wrap_obj.IsEmpty())
     return Local<Object>();

@@ -560,9 +590,7 @@ void StreamWrap::Shutdown(const FunctionCallbackInfo<Value>& args) {
   assert(args[0]->IsObject());
   Local<Object> req_wrap_obj = args[0].As<Object>();

-  ShutdownWrap* req_wrap = new ShutdownWrap(env,
-                                            req_wrap_obj,
-                                            AsyncWrap::PROVIDER_SHUTDOWNWRAP);
+  ShutdownWrap* req_wrap = new ShutdownWrap(env, req_wrap_obj);
   int err = wrap->callbacks()->DoShutdown(req_wrap, AfterShutdown);
   req_wrap->Dispatched();
   if (err)
@@ -722,11 +750,11 @@ void StreamWrapCallbacks::DoRead(uv_stream_t* handle,

     Local<Object> pending_obj;
     if (pending == UV_TCP) {
-      pending_obj = AcceptHandle<TCPWrap, uv_tcp_t>(env, handle);
+      pending_obj = AcceptHandle<TCPWrap, uv_tcp_t>(env, handle, wrap());
     } else if (pending == UV_NAMED_PIPE) {
-      pending_obj = AcceptHandle<PipeWrap, uv_pipe_t>(env, handle);
+      pending_obj = AcceptHandle<PipeWrap, uv_pipe_t>(env, handle, wrap());
     } else if (pending == UV_UDP) {
-      pending_obj = AcceptHandle<UDPWrap, uv_udp_t>(env, handle);
+      pending_obj = AcceptHandle<UDPWrap, uv_udp_t>(env, handle, wrap());
     } else {
       assert(pending == UV_UNKNOWN_HANDLE);
     }
@@ -744,3 +772,5 @@ int StreamWrapCallbacks::DoShutdown(ShutdownWrap* req_wrap, uv_shutdown_cb cb) {
 }

 }  // namespace node
+
+NODE_MODULE_CONTEXT_AWARE_BUILTIN(stream_wrap, node::StreamWrap::Initialize)
diff --git a/src/stream_wrap.h b/src/stream_wrap.h
index 34e2799357d..38e5d484225 100644
--- a/src/stream_wrap.h
+++ b/src/stream_wrap.h
@@ -33,15 +33,26 @@ namespace node {

 // Forward declaration
 class StreamWrap;

-typedef class ReqWrap<uv_shutdown_t> ShutdownWrap;
+class ShutdownWrap : public ReqWrap<uv_shutdown_t> {
+ public:
+  ShutdownWrap(Environment* env, v8::Local<v8::Object> req_wrap_obj)
+      : ReqWrap(env, req_wrap_obj, AsyncWrap::PROVIDER_SHUTDOWNWRAP) {
+    Wrap(req_wrap_obj, this);
+  }
+
+  static void NewShutdownWrap(const v8::FunctionCallbackInfo<v8::Value>& args) {
+    CHECK(args.IsConstructCall());
+  }
+};

 class WriteWrap: public ReqWrap<uv_write_t> {
  public:
   // TODO(trevnorris): WrapWrap inherits from ReqWrap, which I've globbed
   // into the same provider. How should these be broken apart?
   WriteWrap(Environment* env, v8::Local<v8::Object> obj, StreamWrap* wrap)
-      : ReqWrap<uv_write_t>(env, obj),
+      : ReqWrap(env, obj, AsyncWrap::PROVIDER_WRITEWRAP),
         wrap_(wrap) {
+    Wrap(obj, this);
   }

   void* operator new(size_t size, char* storage) { return storage; }
@@ -54,6 +65,10 @@ class WriteWrap: public ReqWrap<uv_write_t> {
     return wrap_;
   }

+  static void NewWriteWrap(const v8::FunctionCallbackInfo<v8::Value>& args) {
+    CHECK(args.IsConstructCall());
+  }
+
  private:
   // People should not be using the non-placement new and delete operator on a
   // WriteWrap. Ensure this never happens.
@@ -105,6 +120,10 @@ class StreamWrapCallbacks {

 class StreamWrap : public HandleWrap {
  public:
+  static void Initialize(v8::Handle<v8::Object> target,
+                         v8::Handle<v8::Value> unused,
+                         v8::Handle<v8::Context> context);
+
   void OverrideCallbacks(StreamWrapCallbacks* callbacks, bool gc) {
     StreamWrapCallbacks* old = callbacks_;
     callbacks_ = callbacks;
@@ -158,7 +177,8 @@ class StreamWrap : public HandleWrap {
   StreamWrap(Environment* env,
              v8::Local<v8::Object> object,
              uv_stream_t* stream,
-             AsyncWrap::ProviderType provider);
+             AsyncWrap::ProviderType provider,
+             AsyncWrap* parent = NULL);

   ~StreamWrap() {
     if (!callbacks_gc_ && callbacks_ != &default_callbacks_) {
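ShutdownWrap and WriteWrap above follow the same recipe used throughout this patch for FSReqWrap, PipeConnectWrap, TCPConnectWrap and SendWrap: export a constructor whose instances reserve one internal field, have the C++ constructor callback do nothing but assert it was invoked with `new`, and let the C++ `ReqWrap` attach itself later via `Wrap(obj, this)`. A condensed sketch of that export step; the `ReqWrap` class name and `ExportReqWrap` helper here are illustrative:

```cpp
#include <assert.h>
#include "v8.h"

using namespace v8;

// Constructor callback: instances may only be created with `new`; all
// real setup happens later, when C++ calls Wrap(obj, this) on the object.
static void NewReqWrap(const FunctionCallbackInfo<Value>& args) {
  assert(args.IsConstructCall());
}

// Export the constructor on `target` (e.g. a binding's exports object).
void ExportReqWrap(Isolate* isolate, Local<Object> target) {
  Local<FunctionTemplate> t = FunctionTemplate::New(isolate, NewReqWrap);
  // One internal field to hold the ReqWrap* stored by Wrap().
  t->InstanceTemplate()->SetInternalFieldCount(1);
  t->SetClassName(String::NewFromUtf8(isolate, "ReqWrap"));
  target->Set(String::NewFromUtf8(isolate, "ReqWrap"), t->GetFunction());
}
```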
diff --git a/src/tcp_wrap.cc b/src/tcp_wrap.cc
index 09671d0095e..a5b20a6ac5a 100644
--- a/src/tcp_wrap.cc
+++ b/src/tcp_wrap.cc
@@ -38,6 +38,7 @@ namespace node {

 using v8::Context;
 using v8::EscapableHandleScope;
+using v8::External;
 using v8::Function;
 using v8::FunctionCallbackInfo;
 using v8::FunctionTemplate;
@@ -52,15 +53,31 @@ using v8::Undefined;
 using v8::Value;
 using v8::Boolean;

-typedef class ReqWrap<uv_connect_t> ConnectWrap;
+class TCPConnectWrap : public ReqWrap<uv_connect_t> {
+ public:
+  TCPConnectWrap(Environment* env, Local<Object> req_wrap_obj);
+};

-Local<Object> TCPWrap::Instantiate(Environment* env) {
+
+TCPConnectWrap::TCPConnectWrap(Environment* env, Local<Object> req_wrap_obj)
+    : ReqWrap(env, req_wrap_obj, AsyncWrap::PROVIDER_TCPWRAP) {
+  Wrap(req_wrap_obj, this);
+}
+
+
+static void NewTCPConnectWrap(const FunctionCallbackInfo<Value>& args) {
+  CHECK(args.IsConstructCall());
+}
+
+
+Local<Object> TCPWrap::Instantiate(Environment* env, AsyncWrap* parent) {
   EscapableHandleScope handle_scope(env->isolate());
   assert(env->tcp_constructor_template().IsEmpty() == false);
   Local<Function> constructor = env->tcp_constructor_template()->GetFunction();
   assert(constructor.IsEmpty() == false);
-  Local<Object> instance = constructor->NewInstance();
+  Local<Value> ptr = External::New(env->isolate(), parent);
+  Local<Object> instance = constructor->NewInstance(1, &ptr);
   assert(instance.IsEmpty() == false);
   return handle_scope.Escape(instance);
 }
@@ -135,6 +152,14 @@ void TCPWrap::Initialize(Handle<Object> target,

   target->Set(FIXED_ONE_BYTE_STRING(env->isolate(), "TCP"), t->GetFunction());
   env->set_tcp_constructor_template(t);
+
+  // Create FunctionTemplate for TCPConnectWrap.
+  Local<FunctionTemplate> cwt =
+      FunctionTemplate::New(env->isolate(), NewTCPConnectWrap);
+  cwt->InstanceTemplate()->SetInternalFieldCount(1);
+  cwt->SetClassName(FIXED_ONE_BYTE_STRING(env->isolate(), "TCPConnectWrap"));
+  target->Set(FIXED_ONE_BYTE_STRING(env->isolate(), "TCPConnectWrap"),
+              cwt->GetFunction());
 }

@@ -148,18 +173,26 @@ void TCPWrap::New(const FunctionCallbackInfo<Value>& args) {
   // Therefore we assert that we are not trying to call this as a
   // normal function.
   assert(args.IsConstructCall());
-  HandleScope handle_scope(args.GetIsolate());
   Environment* env = Environment::GetCurrent(args.GetIsolate());
-  TCPWrap* wrap = new TCPWrap(env, args.This());
+  TCPWrap* wrap;
+  if (args.Length() == 0) {
+    wrap = new TCPWrap(env, args.This(), NULL);
+  } else if (args[0]->IsExternal()) {
+    void* ptr = args[0].As<External>()->Value();
+    wrap = new TCPWrap(env, args.This(), static_cast<AsyncWrap*>(ptr));
+  } else {
+    UNREACHABLE();
+  }
   assert(wrap);
 }

-TCPWrap::TCPWrap(Environment* env, Handle<Object> object)
+TCPWrap::TCPWrap(Environment* env, Handle<Object> object, AsyncWrap* parent)
     : StreamWrap(env,
                  object,
                  reinterpret_cast<uv_stream_t*>(&handle_),
-                 AsyncWrap::PROVIDER_TCPWRAP) {
+                 AsyncWrap::PROVIDER_TCPWRAP,
+                 parent) {
   int r = uv_tcp_init(env->event_loop(), &handle_);
   assert(r == 0);  // How do we proxy this error up to javascript?
                    // Suggestion: uv_tcp_init() returns void.
@@ -342,7 +375,8 @@ void TCPWrap::OnConnection(uv_stream_t* handle, int status) {

   if (status == 0) {
     // Instantiate the client javascript object and handle.
-    Local<Object> client_obj = Instantiate(env);
+    Local<Object> client_obj =
+        Instantiate(env, static_cast<AsyncWrap*>(tcp_wrap));

     // Unwrap the client javascript object.
     TCPWrap* wrap = Unwrap<TCPWrap>(client_obj);
@@ -359,7 +393,7 @@ void TCPWrap::OnConnection(uv_stream_t* handle, int status) {

 void TCPWrap::AfterConnect(uv_connect_t* req, int status) {
-  ConnectWrap* req_wrap = static_cast<ConnectWrap*>(req->data);
+  TCPConnectWrap* req_wrap = static_cast<TCPConnectWrap*>(req->data);
   TCPWrap* wrap = static_cast<TCPWrap*>(req->handle->data);
   assert(req_wrap->env() == wrap->env());
   Environment* env = wrap->env();
@@ -404,9 +438,7 @@ void TCPWrap::Connect(const FunctionCallbackInfo<Value>& args) {

   int err = uv_ip4_addr(*ip_address, port, &addr);

   if (err == 0) {
-    ConnectWrap* req_wrap = new ConnectWrap(env,
-                                            req_wrap_obj,
-                                            AsyncWrap::PROVIDER_CONNECTWRAP);
+    TCPConnectWrap* req_wrap = new TCPConnectWrap(env, req_wrap_obj);
     err = uv_tcp_connect(&req_wrap->req_,
                          &wrap->handle_,
                          reinterpret_cast<const sockaddr*>(&addr),
@@ -438,9 +470,7 @@ void TCPWrap::Connect6(const FunctionCallbackInfo<Value>& args) {

   int err = uv_ip6_addr(*ip_address, port, &addr);

   if (err == 0) {
-    ConnectWrap* req_wrap = new ConnectWrap(env,
-                                            req_wrap_obj,
-                                            AsyncWrap::PROVIDER_CONNECTWRAP);
+    TCPConnectWrap* req_wrap = new TCPConnectWrap(env, req_wrap_obj);
     err = uv_tcp_connect(&req_wrap->req_,
                          &wrap->handle_,
                          reinterpret_cast<const sockaddr*>(&addr),
diff --git a/src/tcp_wrap.h b/src/tcp_wrap.h
index c9981f572db..c923b387f0e 100644
--- a/src/tcp_wrap.h
+++ b/src/tcp_wrap.h
@@ -22,6 +22,7 @@
 #ifndef SRC_TCP_WRAP_H_
 #define SRC_TCP_WRAP_H_

+#include "async-wrap.h"
 #include "env.h"
 #include "stream_wrap.h"

@@ -29,7 +30,7 @@ namespace node {

 class TCPWrap : public StreamWrap {
  public:
-  static v8::Local<v8::Object> Instantiate(Environment* env);
+  static v8::Local<v8::Object> Instantiate(Environment* env, AsyncWrap* parent);
   static void Initialize(v8::Handle<v8::Object> target,
                          v8::Handle<v8::Value> unused,
                          v8::Handle<v8::Context> context);
@@ -37,7 +38,7 @@ class TCPWrap : public StreamWrap {
   uv_tcp_t* UVHandle();

  private:
-  TCPWrap(Environment* env, v8::Handle<v8::Object> object);
+  TCPWrap(Environment* env, v8::Handle<v8::Object> object, AsyncWrap* parent);
   ~TCPWrap();

   static void New(const v8::FunctionCallbackInfo<v8::Value>& args);
diff --git a/src/timer_wrap.cc b/src/timer_wrap.cc
index 099a54ec95d..71e6a613431 100644
--- a/src/timer_wrap.cc
+++ b/src/timer_wrap.cc
@@ -97,8 +97,6 @@ class TimerWrap : public HandleWrap {
   }

   static void Start(const FunctionCallbackInfo<Value>& args) {
-    Environment* env = Environment::GetCurrent(args.GetIsolate());
-    HandleScope scope(env->isolate());
     TimerWrap* wrap = Unwrap<TimerWrap>(args.Holder());

     int64_t timeout = args[0]->IntegerValue();
@@ -108,8 +106,6 @@ class TimerWrap : public HandleWrap {
   }

   static void Stop(const FunctionCallbackInfo<Value>& args) {
-    Environment* env = Environment::GetCurrent(args.GetIsolate());
-    HandleScope scope(env->isolate());
     TimerWrap* wrap = Unwrap<TimerWrap>(args.Holder());

     int err = uv_timer_stop(&wrap->handle_);
@@ -117,8 +113,6 @@ class TimerWrap : public HandleWrap {
   }

   static void Again(const FunctionCallbackInfo<Value>& args) {
-    Environment* env = Environment::GetCurrent(args.GetIsolate());
-    HandleScope scope(env->isolate());
     TimerWrap* wrap = Unwrap<TimerWrap>(args.Holder());

     int err = uv_timer_again(&wrap->handle_);
@@ -126,8 +120,6 @@ class TimerWrap : public HandleWrap {
   }

   static void SetRepeat(const FunctionCallbackInfo<Value>& args) {
-    Environment* env = Environment::GetCurrent(args.GetIsolate());
-    HandleScope scope(env->isolate());
     TimerWrap* wrap = Unwrap<TimerWrap>(args.Holder());

     int64_t repeat = args[0]->IntegerValue();
@@ -136,12 +128,13 @@ class TimerWrap : public HandleWrap {
   }

   static void GetRepeat(const FunctionCallbackInfo<Value>& args) {
-    Environment* env = Environment::GetCurrent(args.GetIsolate());
-    HandleScope scope(env->isolate());
     TimerWrap* wrap = Unwrap<TimerWrap>(args.Holder());

     int64_t repeat = uv_timer_get_repeat(&wrap->handle_);
-    args.GetReturnValue().Set(static_cast<double>(repeat));
+    if (repeat <= 0xfffffff)
+      args.GetReturnValue().Set(static_cast<uint32_t>(repeat));
+    else
+      args.GetReturnValue().Set(static_cast<double>(repeat));
   }

   static void OnTimeout(uv_timer_t* handle) {
@@ -153,11 +146,13 @@ class TimerWrap : public HandleWrap {
   }

   static void Now(const FunctionCallbackInfo<Value>& args) {
-    HandleScope handle_scope(args.GetIsolate());
     Environment* env = Environment::GetCurrent(args.GetIsolate());
     uv_update_time(env->event_loop());
-    double now = static_cast<double>(uv_now(env->event_loop()));
-    args.GetReturnValue().Set(now);
+    uint64_t now = uv_now(env->event_loop());
+    if (now <= 0xfffffff)
+      args.GetReturnValue().Set(static_cast<uint32_t>(now));
+    else
+      args.GetReturnValue().Set(static_cast<double>(now));
   }

   uv_timer_t handle_;
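The timer changes drop per-call `HandleScope`s that the `Unwrap`-only methods no longer need and, in `GetRepeat()`/`Now()`, return a `uint32_t` whenever the value fits under the `0xfffffff` cutoff, letting V8 hand back a tagged small integer rather than a heap-allocated number; larger values still fall back to `double`. A sketch of that return-value trick (the helper name is hypothetical, and the small-integer rationale is my reading of the cutoff):

```cpp
#include <stdint.h>
#include "v8.h"

// Hypothetical helper mirroring TimerWrap::Now()/GetRepeat(): prefer a
// uint32_t return so the common (small) case avoids allocating a heap
// number on every call.
void SetMillisecondsReturn(const v8::FunctionCallbackInfo<v8::Value>& args,
                           uint64_t ms) {
  if (ms <= 0xfffffff)  // small enough for a tagged integer
    args.GetReturnValue().Set(static_cast<uint32_t>(ms));
  else                  // otherwise fall back to a double
    args.GetReturnValue().Set(static_cast<double>(ms));
}
```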
diff --git a/src/tls_wrap.cc b/src/tls_wrap.cc
index d74954f7eef..607f786501e 100644
--- a/src/tls_wrap.cc
+++ b/src/tls_wrap.cc
@@ -78,7 +78,8 @@ TLSCallbacks::TLSCallbacks(Environment* env,
       error_(NULL),
       cycle_depth_(0),
       eof_(false) {
-  node::Wrap<TLSCallbacks>(object(), this);
+  node::Wrap(object(), this);
+  MakeWeak(this);

   // Initialize queue for clearIn writes
   QUEUE_INIT(&write_item_queue_);
@@ -186,11 +187,13 @@ void TLSCallbacks::InitSSL() {
   }
 #endif  // SSL_CTRL_SET_TLSEXT_SERVERNAME_CB

-  InitNPN(sc_, this);
+  InitNPN(sc_);

   if (is_server()) {
     SSL_set_accept_state(ssl_);
   } else if (is_client()) {
+    // Enough space for server response (hello, cert)
+    NodeBIO::FromBIO(enc_in_)->set_initial(kInitialClientBufferLength);
     SSL_set_connect_state(ssl_);
   } else {
     // Unexpected
@@ -253,6 +256,7 @@ void TLSCallbacks::Receive(const FunctionCallbackInfo<Value>& args) {
     wrap->DoAlloc(reinterpret_cast<uv_handle_t*>(stream), len, &buf);
     size_t copy = buf.len > len ? len : buf.len;
     memcpy(buf.base, data, copy);
+    buf.len = copy;
     wrap->DoRead(stream, buf.len, &buf, UV_UNKNOWN_HANDLE);

     data += copy;
@@ -443,6 +447,10 @@ void TLSCallbacks::ClearOut() {
   if (!hello_parser_.IsEnded())
     return;

+  // No reads after EOF
+  if (eof_)
+    return;
+
   HandleScope handle_scope(env()->isolate());
   Context::Scope context_scope(env()->context());

@@ -472,6 +480,10 @@ void TLSCallbacks::ClearOut() {
     int err;
     Local<Value> arg = GetSSLError(read, &err, NULL);

+    // Ignore ZERO_RETURN after EOF, it is basically not a error
+    if (err == SSL_ERROR_ZERO_RETURN && eof_)
+      return;
+
     if (!arg.IsEmpty()) {
       // When TLS Alert are stored in wbio,
       // it should be flushed to socket before destroyed.
@@ -614,8 +626,9 @@ void TLSCallbacks::AfterWrite(WriteWrap* w) {
 void TLSCallbacks::DoAlloc(uv_handle_t* handle,
                            size_t suggested_size,
                            uv_buf_t* buf) {
-  buf->base = NodeBIO::FromBIO(enc_in_)->PeekWritable(&suggested_size);
-  buf->len = suggested_size;
+  size_t size = 0;
+  buf->base = NodeBIO::FromBIO(enc_in_)->PeekWritable(&size);
+  buf->len = size;
 }

@@ -719,6 +732,7 @@ void TLSCallbacks::EnableHelloParser(const FunctionCallbackInfo<Value>& args) {

   TLSCallbacks* wrap = Unwrap<TLSCallbacks>(args.Holder());

+  NodeBIO::FromBIO(wrap->enc_in_)->set_initial(kMaxHelloLength);
   wrap->hello_parser_.Start(SSLWrap<TLSCallbacks>::OnClientHello,
                             OnClientHelloParseEnd,
                             wrap);
@@ -800,7 +814,7 @@ int TLSCallbacks::SelectSNIContextCallback(SSL* s, int* ad, void* arg) {
     p->sni_context_.Reset(env->isolate(), ctx);

     SecureContext* sc = Unwrap<SecureContext>(ctx.As<Object>());
-    InitNPN(sc, p);
+    InitNPN(sc);
     SSL_set_SSL_CTX(s, sc->ctx_);
     return SSL_TLSEXT_ERR_OK;
   }
diff --git a/src/tls_wrap.h b/src/tls_wrap.h
index 13a53bfb307..b12a6b66122 100644
--- a/src/tls_wrap.h
+++ b/src/tls_wrap.h
@@ -46,6 +46,8 @@ class TLSCallbacks : public crypto::SSLWrap<TLSCallbacks>,
                      public StreamWrapCallbacks,
                      public AsyncWrap {
  public:
+  ~TLSCallbacks();
+
   static void Initialize(v8::Handle<v8::Object> target,
                          v8::Handle<v8::Value> unused,
                          v8::Handle<v8::Context> context);
@@ -72,6 +74,12 @@ class TLSCallbacks : public crypto::SSLWrap<TLSCallbacks>,
  protected:
   static const int kClearOutChunkSize = 1024;

+  // Maximum number of bytes for hello parser
+  static const int kMaxHelloLength = 16384;
+
+  // Usual ServerHello + Certificate size
+  static const int kInitialClientBufferLength = 4096;
+
   // Maximum number of buffers passed to uv_write()
   static const int kSimultaneousBufferCount = 10;

@@ -94,7 +102,6 @@ class TLSCallbacks : public crypto::SSLWrap<TLSCallbacks>,
                Kind kind,
                v8::Handle<v8::Object> sc,
                StreamWrapCallbacks* old);
-  ~TLSCallbacks();

   static void SSLInfoCallback(const SSL* ssl_, int where, int ret);
   void InitSSL();
"SendWrap"), + swt->GetFunction()); } void UDPWrap::New(const FunctionCallbackInfo<Value>& args) { - assert(args.IsConstructCall()); - HandleScope handle_scope(args.GetIsolate()); + CHECK(args.IsConstructCall()); Environment* env = Environment::GetCurrent(args.GetIsolate()); - new UDPWrap(env, args.This()); + if (args.Length() == 0) { + new UDPWrap(env, args.This(), NULL); + } else if (args[0]->IsExternal()) { + new UDPWrap(env, + args.This(), + static_cast<AsyncWrap*>(args[0].As<External>()->Value())); + } else { + UNREACHABLE(); + } } @@ -429,10 +452,12 @@ void UDPWrap::OnRecv(uv_udp_t* handle, } -Local<Object> UDPWrap::Instantiate(Environment* env) { +Local<Object> UDPWrap::Instantiate(Environment* env, AsyncWrap* parent) { // If this assert fires then Initialize hasn't been called yet. assert(env->udp_constructor_function().IsEmpty() == false); - return env->udp_constructor_function()->NewInstance(); + EscapableHandleScope scope(env->isolate()); + Local<Value> ptr = External::New(env->isolate(), parent); + return scope.Escape(env->udp_constructor_function()->NewInstance(1, &ptr)); } diff --git a/src/udp_wrap.h b/src/udp_wrap.h index ab0d6fa8e69..693fa51b718 100644 --- a/src/udp_wrap.h +++ b/src/udp_wrap.h @@ -22,6 +22,7 @@ #ifndef SRC_UDP_WRAP_H_ #define SRC_UDP_WRAP_H_ +#include "async-wrap.h" #include "env.h" #include "handle_wrap.h" #include "req_wrap.h" @@ -53,11 +54,11 @@ class UDPWrap: public HandleWrap { static void SetBroadcast(const v8::FunctionCallbackInfo<v8::Value>& args); static void SetTTL(const v8::FunctionCallbackInfo<v8::Value>& args); - static v8::Local<v8::Object> Instantiate(Environment* env); + static v8::Local<v8::Object> Instantiate(Environment* env, AsyncWrap* parent); uv_udp_t* UVHandle(); private: - UDPWrap(Environment* env, v8::Handle<v8::Object> object); + UDPWrap(Environment* env, v8::Handle<v8::Object> object, AsyncWrap* parent); virtual ~UDPWrap(); static void DoBind(const v8::FunctionCallbackInfo<v8::Value>& args, diff --git a/src/util.cc b/src/util.cc index 7459dbb7b84..67c96645303 100644 --- a/src/util.cc +++ b/src/util.cc @@ -31,6 +31,8 @@ Utf8Value::Utf8Value(v8::Handle<v8::Value> value) return; v8::Local<v8::String> val_ = value->ToString(); + if (val_.IsEmpty()) + return; // Allocate enough space to include the null terminator size_t len = StringBytes::StorageSize(val_, UTF8) + 1; diff --git a/test/common.js b/test/common.js index bd7318c95ef..622b0a3985f 100644 --- a/test/common.js +++ b/test/common.js @@ -30,12 +30,12 @@ exports.libDir = path.join(exports.testDir, '../lib'); exports.tmpDir = path.join(exports.testDir, 'tmp'); exports.PORT = +process.env.NODE_COMMON_PORT || 12346; +exports.opensslCli = path.join(path.dirname(process.execPath), 'openssl-cli'); if (process.platform === 'win32') { exports.PIPE = '\\\\.\\pipe\\libuv-test'; - exports.opensslCli = path.join(process.execPath, '..', 'openssl-cli.exe'); + exports.opensslCli += '.exe'; } else { exports.PIPE = exports.tmpDir + '/test.sock'; - exports.opensslCli = path.join(process.execPath, '..', 'openssl-cli'); } if (!fs.existsSync(exports.opensslCli)) exports.opensslCli = false; diff --git a/test/simple/test-debug-brk-no-arg.js b/test/disabled/test-debug-brk-no-arg.js similarity index 100% rename from test/simple/test-debug-brk-no-arg.js rename to test/disabled/test-debug-brk-no-arg.js diff --git a/test/fixtures/clustered-server/app.js b/test/fixtures/clustered-server/app.js index 4053cd3af27..3fca2aaa42f 100644 --- a/test/fixtures/clustered-server/app.js +++ 
b/test/fixtures/clustered-server/app.js @@ -16,6 +16,16 @@ if (cluster.isMaster) { } }); + process.on('message', function(msg) { + if (msg.type === 'getpids') { + var pids = []; + pids.push(process.pid); + for (var key in cluster.workers) + pids.push(cluster.workers[key].process.pid); + process.send({ type: 'pids', pids: pids }); + } + }); + for (var i = 0; i < NUMBER_OF_WORKERS; i++) { cluster.fork(); } diff --git a/test/internet/internet.status b/test/internet/internet.status index 34aea6a6af7..584e9e5aac0 100644 --- a/test/internet/internet.status +++ b/test/internet/internet.status @@ -1 +1,6 @@ prefix internet + +test-dns : PASS,FLAKY + +[$system==solaris] +test-http-dns-fail : PASS,FLAKY diff --git a/test/internet/test-dns.js b/test/internet/test-dns.js index 60227df7ca6..623a845c03f 100644 --- a/test/internet/test-dns.js +++ b/test/internet/test-dns.js @@ -632,8 +632,9 @@ var getaddrinfoCallbackCalled = false; console.log('looking up nodejs.org...'); -var req = {}; -var err = process.binding('cares_wrap').getaddrinfo(req, 'nodejs.org', 4); +var cares = process.binding('cares_wrap'); +var req = new cares.GetAddrInfoReqWrap(); +var err = cares.getaddrinfo(req, 'nodejs.org', 4); req.oncomplete = function(err, domains) { assert.strictEqual(err, 0); diff --git a/test/simple/simple.status b/test/simple/simple.status index 438ce39f5f2..8a0cbfb3cb8 100644 --- a/test/simple/simple.status +++ b/test/simple/simple.status @@ -1 +1,18 @@ prefix simple + +test-crypto-domains : PASS,FLAKY +test-debug-signal-cluster : PASS,FLAKY +test-cluster-basic : PASS,FLAKY + +[$system==win32] +test-timers-first-fire : PASS,FLAKY + +[$system==linux] +test-fs-readfile-error : PASS,FLAKY +test-net-GH-5504 : PASS,FLAKY +test-stdin-script-child : PASS,FLAKY +test-util-debug : PASS,FLAKY + +[$system==macos] + +[$system==solaris] diff --git a/test/simple/test-abort-fatal-error.js b/test/simple/test-abort-fatal-error.js index 64d31d9e881..79e3d72e869 100644 --- a/test/simple/test-abort-fatal-error.js +++ b/test/simple/test-abort-fatal-error.js @@ -30,7 +30,7 @@ if (process.platform === 'win32') { var exec = require('child_process').exec; var cmdline = 'ulimit -c 0; ' + process.execPath; -cmdline += ' --max-old-space-size=4 --max-new-space-size=1'; +cmdline += ' --max-old-space-size=4 --max-semi-space-size=1'; cmdline += ' -e "a = []; for (i = 0; i < 1e9; i++) { a.push({}) }"'; exec(cmdline, function(err, stdout, stderr) { diff --git a/test/simple/test-asynclistener-error-multiple-handled.js b/test/simple/test-asynclistener-error-multiple-handled.js deleted file mode 100644 index 576ce58e4cf..00000000000 --- a/test/simple/test-asynclistener-error-multiple-handled.js +++ /dev/null @@ -1,76 +0,0 @@ -// Copyright Joyent, Inc. and other Node contributors. -// -// Permission is hereby granted, free of charge, to any person obtaining a -// copy of this software and associated documentation files (the -// "Software"), to deal in the Software without restriction, including -// without limitation the rights to use, copy, modify, merge, publish, -// distribute, sublicense, and/or sell copies of the Software, and to permit -// persons to whom the Software is furnished to do so, subject to the -// following conditions: -// -// The above copyright notice and this permission notice shall be included -// in all copies or substantial portions of the Software. 
-// -// THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS -// OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF -// MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN -// NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, -// DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR -// OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE -// USE OR OTHER DEALINGS IN THE SOFTWARE. - -var common = require('../common'); -var assert = require('assert'); -var tracing = require('tracing'); - -var active = null; -var cntr = 0; - -function onAsync0() { - return 0; -} - -function onAsync1() { - return 1; -} - -function onError(stor) { - results.push(stor); - return true; -} - -var results = []; -var asyncNoHandleError0 = { - create: onAsync0, - error: onError -}; -var asyncNoHandleError1 = { - create: onAsync1, - error: onError -}; - -var listeners = [ - tracing.addAsyncListener(asyncNoHandleError0), - tracing.addAsyncListener(asyncNoHandleError1) -]; - -process.nextTick(function() { - throw new Error(); -}); - -tracing.removeAsyncListener(listeners[0]); -tracing.removeAsyncListener(listeners[1]); - -process.on('exit', function(code) { - // If the exit code isn't ok then return early to throw the stack that - // caused the bad return code. - if (code !== 0) - return; - - // Handling of errors should propagate to all listeners. - assert.equal(results[0], 0); - assert.equal(results[1], 1); - assert.equal(results.length, 2); - - console.log('ok'); -}); diff --git a/test/simple/test-asynclistener-error-net.js b/test/simple/test-asynclistener-error-net.js deleted file mode 100644 index 26a337a5044..00000000000 --- a/test/simple/test-asynclistener-error-net.js +++ /dev/null @@ -1,108 +0,0 @@ -// Copyright Joyent, Inc. and other Node contributors. -// -// Permission is hereby granted, free of charge, to any person obtaining a -// copy of this software and associated documentation files (the -// "Software"), to deal in the Software without restriction, including -// without limitation the rights to use, copy, modify, merge, publish, -// distribute, sublicense, and/or sell copies of the Software, and to permit -// persons to whom the Software is furnished to do so, subject to the -// following conditions: -// -// The above copyright notice and this permission notice shall be included -// in all copies or substantial portions of the Software. -// -// THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS -// OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF -// MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN -// NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, -// DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR -// OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE -// USE OR OTHER DEALINGS IN THE SOFTWARE. 
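
This file and the surrounding deletions remove the test suite for the experimental tracing async-listener API, which is dropped in this release. For readers skimming the removed files, the shape of the API as these tests used it was (sketch reconstructed from the deleted tests themselves; none of this exists after this change):

var tracing = require('tracing'); // module removed by this commit

var listener = tracing.createAsyncListener({
  create: function() { return 42; },      // returns per-async-op storage
  before: function(context, storage) {},  // runs before each callback
  after: function(context, storage) {},   // runs after each callback
  error: function(storage, err) {
    return true;                          // true marks the error handled
  }
});

tracing.addAsyncListener(listener);  // observe async ops started from here
process.nextTick(function() {});
tracing.removeAsyncListener(listener);
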
- -var common = require('../common'); -var assert = require('assert'); -var dns = require('dns'); -var fs = require('fs'); -var net = require('net'); -var tracing = require('tracing'); - -var errorMsgs = []; -var caught = 0; -var expectCaught = 0; - -var callbacksObj = { - error: function(value, er) { - var idx = errorMsgs.indexOf(er.message); - caught++; - - process._rawDebug('Handling error: ' + er.message); - - if (-1 < idx) - errorMsgs.splice(idx, 1); - else - throw new Error('Message not found: ' + er.message); - - return true; - } -}; - -var listener = tracing.addAsyncListener(callbacksObj); - -process.on('exit', function(code) { - tracing.removeAsyncListener(listener); - - if (code > 0) - return; - - if (errorMsgs.length > 0) - throw new Error('Errors not fired: ' + errorMsgs); - - assert.equal(caught, expectCaught); - process._rawDebug('ok'); -}); - - -// Net -var iter = 3; -for (var i = 0; i < iter; i++) { - errorMsgs.push('net - error: server connection'); - errorMsgs.push('net - error: client data'); - errorMsgs.push('net - error: server data'); -} -errorMsgs.push('net - error: server closed'); - -var server = net.createServer(function(c) { - c.on('data', function() { - if (0 === --iter) { - server.close(function() { - process._rawDebug('net - server closing'); - throw new Error('net - error: server closed'); - }); - expectCaught++; - } - process._rawDebug('net - connection received data'); - throw new Error('net - error: server data'); - }); - expectCaught++; - - c.end('bye'); - process._rawDebug('net - connection received'); - throw new Error('net - error: server connection'); -}); -expectCaught += iter; - -server.listen(common.PORT, function() { - for (var i = 0; i < iter; i++) - clientConnect(); -}); - -function clientConnect() { - var client = net.connect(common.PORT, function() { }); - - client.on('data', function() { - client.end('see ya'); - process._rawDebug('net - client received data'); - throw new Error('net - error: client data'); - }); - expectCaught++; -} diff --git a/test/simple/test-asynclistener-error-throw-in-before-multiple.js b/test/simple/test-asynclistener-error-throw-in-before-multiple.js deleted file mode 100644 index b9aecf3977b..00000000000 --- a/test/simple/test-asynclistener-error-throw-in-before-multiple.js +++ /dev/null @@ -1,80 +0,0 @@ -// Copyright Joyent, Inc. and other Node contributors. -// -// Permission is hereby granted, free of charge, to any person obtaining a -// copy of this software and associated documentation files (the -// "Software"), to deal in the Software without restriction, including -// without limitation the rights to use, copy, modify, merge, publish, -// distribute, sublicense, and/or sell copies of the Software, and to permit -// persons to whom the Software is furnished to do so, subject to the -// following conditions: -// -// The above copyright notice and this permission notice shall be included -// in all copies or substantial portions of the Software. -// -// THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS -// OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF -// MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN -// NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, -// DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR -// OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE -// USE OR OTHER DEALINGS IN THE SOFTWARE. 
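
A recurring helper in these deleted tests is process._rawDebug(), which writes synchronously to stderr without going through console.log's stream machinery; that is why the tests call it safely from 'exit' handlers, where asynchronous writes can be lost. A minimal sketch (internal, undocumented API; stability not guaranteed):

process.on('exit', function(code) {
  // console.log() output can be dropped this late; _rawDebug is synchronous.
  process._rawDebug('exit handler ran, code = ' + code);
});

process._rawDebug('printed immediately, even if the process dies next');
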
- -var common = require('../common'); -var assert = require('assert'); -var tracing = require('tracing'); - -var once = 0; - -var results = []; -var handlers = { - before: function() { - throw 1; - }, - error: function(stor, err) { - // Must catch error thrown in before callback. - assert.equal(err, 1); - once++; - return true; - } -} - -var handlers1 = { - before: function() { - throw 2; - }, - error: function(stor, err) { - // Must catch *other* handlers throw by error callback. - assert.equal(err, 1); - once++; - return true; - } -} - -var listeners = [ - tracing.addAsyncListener(handlers), - tracing.addAsyncListener(handlers1) -]; - -var uncaughtFired = false; -process.on('uncaughtException', function(err) { - uncaughtFired = true; - - // Both error handlers must fire. - assert.equal(once, 2); -}); - -process.nextTick(function() { }); - -for (var i = 0; i < listeners.length; i++) - tracing.removeAsyncListener(listeners[i]); - -process.on('exit', function(code) { - // If the exit code isn't ok then return early to throw the stack that - // caused the bad return code. - if (code !== 0) - return; - // Make sure uncaughtException actually fired. - assert.ok(uncaughtFired); - console.log('ok'); -}); - diff --git a/test/simple/test-asynclistener-error-throw-in-before.js b/test/simple/test-asynclistener-error-throw-in-before.js deleted file mode 100644 index fb6b6eeecae..00000000000 --- a/test/simple/test-asynclistener-error-throw-in-before.js +++ /dev/null @@ -1,64 +0,0 @@ -// Copyright Joyent, Inc. and other Node contributors. -// -// Permission is hereby granted, free of charge, to any person obtaining a -// copy of this software and associated documentation files (the -// "Software"), to deal in the Software without restriction, including -// without limitation the rights to use, copy, modify, merge, publish, -// distribute, sublicense, and/or sell copies of the Software, and to permit -// persons to whom the Software is furnished to do so, subject to the -// following conditions: -// -// The above copyright notice and this permission notice shall be included -// in all copies or substantial portions of the Software. -// -// THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS -// OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF -// MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN -// NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, -// DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR -// OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE -// USE OR OTHER DEALINGS IN THE SOFTWARE. - -var common = require('../common'); -var assert = require('assert'); -var tracing = require('tracing'); - -var once = 0; - -var results = []; -var handlers = { - before: function() { - throw 1; - }, - error: function(stor, err) { - // Error handler must be called exactly *once*. - once++; - assert.equal(err, 1); - return true; - } -} - -var key = tracing.addAsyncListener(handlers); - -var uncaughtFired = false; -process.on('uncaughtException', function(err) { - uncaughtFired = true; - - // Process should propagate error regardless of handlers return value. - assert.equal(once, 1); -}); - -process.nextTick(function() { }); - -tracing.removeAsyncListener(key); - -process.on('exit', function(code) { - // If the exit code isn't ok then return early to throw the stack that - // caused the bad return code. - if (code !== 0) - return; - - // Make sure that the uncaughtException actually fired. 
- assert.ok(uncaughtFired); - console.log('ok'); -}); diff --git a/test/simple/test-asynclistener-error-throw-in-error.js b/test/simple/test-asynclistener-error-throw-in-error.js deleted file mode 100644 index c66d688fbea..00000000000 --- a/test/simple/test-asynclistener-error-throw-in-error.js +++ /dev/null @@ -1,87 +0,0 @@ -// Copyright Joyent, Inc. and other Node contributors. -// -// Permission is hereby granted, free of charge, to any person obtaining a -// copy of this software and associated documentation files (the -// "Software"), to deal in the Software without restriction, including -// without limitation the rights to use, copy, modify, merge, publish, -// distribute, sublicense, and/or sell copies of the Software, and to permit -// persons to whom the Software is furnished to do so, subject to the -// following conditions: -// -// The above copyright notice and this permission notice shall be included -// in all copies or substantial portions of the Software. -// -// THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS -// OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF -// MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN -// NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, -// DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR -// OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE -// USE OR OTHER DEALINGS IN THE SOFTWARE. - -var common = require('../common'); -var assert = require('assert'); -var spawn = require('child_process').spawn; -var tracing = require('tracing'); - -var checkStr = 'WRITTEN ON EXIT'; - -if (process.argv[2] === 'child') - runChild(); -else - runParent(); - - -function runChild() { - var cntr = 0; - - var key = tracing.addAsyncListener({ - error: function onError() { - cntr++; - throw new Error('onError'); - } - }); - - process.on('unhandledException', function() { - // Throwing in 'error' should bypass unhandledException. - process.exit(2); - }); - - process.on('exit', function() { - // Make sure that we can still write out to stderr even when the - // process dies. - process._rawDebug(checkStr); - }); - - process.nextTick(function() { - throw new Error('nextTick'); - }); -} - - -function runParent() { - var childDidExit = false; - var childStr = ''; - var child = spawn(process.execPath, [__filename, 'child']); - child.stderr.on('data', function(chunk) { - process._rawDebug('received data from child'); - childStr += chunk.toString(); - }); - - child.on('exit', function(code) { - process._rawDebug('child process exiting'); - childDidExit = true; - // This is thrown when Node throws from _fatalException. - assert.equal(code, 7); - }); - - process.on('exit', function() { - process._rawDebug('child ondata message:', - childStr.substr(0, checkStr.length)); - - assert.ok(childDidExit); - assert.equal(childStr.substr(0, checkStr.length), checkStr); - console.log('ok'); - }); -} - diff --git a/test/simple/test-asynclistener-error.js b/test/simple/test-asynclistener-error.js deleted file mode 100644 index 6e5f31b0348..00000000000 --- a/test/simple/test-asynclistener-error.js +++ /dev/null @@ -1,257 +0,0 @@ -// Copyright Joyent, Inc. and other Node contributors. 
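
The deleted test above (test-asynclistener-error-throw-in-error.js) relies on a pattern used throughout this suite: the test file re-spawns itself with a marker in argv, so crashing behaviour runs in a child process while the assertions run in the parent. A minimal sketch of the pattern; the exit code 7 is purely illustrative here:

var assert = require('assert');
var spawn = require('child_process').spawn;

if (process.argv[2] === 'child') {
  // Behaviour under test lives in the child, where it is free to crash.
  process.exit(7);
} else {
  var child = spawn(process.execPath, [__filename, 'child']);
  child.on('exit', function(code) {
    assert.equal(code, 7);
    console.log('ok');
  });
}
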
-// -// Permission is hereby granted, free of charge, to any person obtaining a -// copy of this software and associated documentation files (the -// "Software"), to deal in the Software without restriction, including -// without limitation the rights to use, copy, modify, merge, publish, -// distribute, sublicense, and/or sell copies of the Software, and to permit -// persons to whom the Software is furnished to do so, subject to the -// following conditions: -// -// The above copyright notice and this permission notice shall be included -// in all copies or substantial portions of the Software. -// -// THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS -// OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF -// MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN -// NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, -// DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR -// OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE -// USE OR OTHER DEALINGS IN THE SOFTWARE. - -var common = require('../common'); -var assert = require('assert'); -var dns = require('dns'); -var fs = require('fs'); -var net = require('net'); -var tracing = require('tracing'); - -var addListener = tracing.addAsyncListener; -var removeListener = tracing.removeAsyncListener; -var errorMsgs = []; -var currentMsg = ''; -var caught = 0; -var expectCaught = 0; -var exitCbRan = false; - -var callbacksObj = { - error: function(value, er) { - var idx = errorMsgs.indexOf(er.message); - - caught++; - - if (-1 < idx) - errorMsgs.splice(idx, 1); - - return currentMsg === er.message; - } -}; - -var listener = tracing.createAsyncListener(callbacksObj); - -process.on('exit', function(code) { - removeListener(listener); - - // Something else went wrong, no need to further check. - if (code > 0) - return; - - // Make sure the exit callback only runs once. - assert.ok(!exitCbRan); - exitCbRan = true; - - // Check if any error messages weren't removed from the msg queue. 
- if (errorMsgs.length > 0) - throw new Error('Errors not fired: ' + errorMsgs); - - assert.equal(caught, expectCaught, 'caught all expected errors'); - process._rawDebug('ok'); -}); - - -// Catch synchronous throws -errorMsgs.push('sync throw'); -process.nextTick(function() { - addListener(listener); - - expectCaught++; - currentMsg = 'sync throw'; - throw new Error(currentMsg); - - removeListener(listener); -}); - - -// Simple cases -errorMsgs.push('setTimeout - simple'); -errorMsgs.push('setImmediate - simple'); -errorMsgs.push('setInterval - simple'); -errorMsgs.push('process.nextTick - simple'); -process.nextTick(function() { - addListener(listener); - - setTimeout(function() { - currentMsg = 'setTimeout - simple'; - throw new Error(currentMsg); - }); - expectCaught++; - - setImmediate(function() { - currentMsg = 'setImmediate - simple'; - throw new Error(currentMsg); - }); - expectCaught++; - - var b = setInterval(function() { - clearInterval(b); - currentMsg = 'setInterval - simple'; - throw new Error(currentMsg); - }); - expectCaught++; - - process.nextTick(function() { - currentMsg = 'process.nextTick - simple'; - throw new Error(currentMsg); - }); - expectCaught++; - - removeListener(listener); -}); - - -// Deeply nested -errorMsgs.push('setInterval - nested'); -errorMsgs.push('setImmediate - nested'); -errorMsgs.push('process.nextTick - nested'); -errorMsgs.push('setTimeout2 - nested'); -errorMsgs.push('setTimeout - nested'); -process.nextTick(function() { - addListener(listener); - - setTimeout(function() { - process.nextTick(function() { - setImmediate(function() { - var b = setInterval(function() { - clearInterval(b); - currentMsg = 'setInterval - nested'; - throw new Error(currentMsg); - }); - expectCaught++; - currentMsg = 'setImmediate - nested'; - throw new Error(currentMsg); - }); - expectCaught++; - currentMsg = 'process.nextTick - nested'; - throw new Error(currentMsg); - }); - expectCaught++; - setTimeout(function() { - currentMsg = 'setTimeout2 - nested'; - throw new Error(currentMsg); - }); - expectCaught++; - currentMsg = 'setTimeout - nested'; - throw new Error(currentMsg); - }); - expectCaught++; - - removeListener(listener); -}); - - -// FS -errorMsgs.push('fs - file does not exist'); -errorMsgs.push('fs - exists'); -errorMsgs.push('fs - realpath'); -process.nextTick(function() { - addListener(listener); - - fs.stat('does not exist', function(err, stats) { - currentMsg = 'fs - file does not exist'; - throw new Error(currentMsg); - }); - expectCaught++; - - fs.exists('hi all', function(exists) { - currentMsg = 'fs - exists'; - throw new Error(currentMsg); - }); - expectCaught++; - - fs.realpath('/some/path', function(err, resolved) { - currentMsg = 'fs - realpath'; - throw new Error(currentMsg); - }); - expectCaught++; - - removeListener(listener); -}); - - -// Nested FS -errorMsgs.push('fs - nested file does not exist'); -process.nextTick(function() { - addListener(listener); - - setTimeout(function() { - setImmediate(function() { - var b = setInterval(function() { - clearInterval(b); - process.nextTick(function() { - fs.stat('does not exist', function(err, stats) { - currentMsg = 'fs - nested file does not exist'; - throw new Error(currentMsg); - }); - expectCaught++; - }); - }); - }); - }); - - removeListener(listener); -}); - - -// Net -errorMsgs.push('net - connection listener'); -errorMsgs.push('net - client connect'); -errorMsgs.push('net - server listening'); -process.nextTick(function() { - addListener(listener); - - var server = 
net.createServer(function(c) { - server.close(); - currentMsg = 'net - connection listener'; - throw new Error(currentMsg); - }); - expectCaught++; - - server.listen(common.PORT, function() { - var client = net.connect(common.PORT, function() { - client.end(); - currentMsg = 'net - client connect'; - throw new Error(currentMsg); - }); - expectCaught++; - currentMsg = 'net - server listening'; - throw new Error(currentMsg); - }); - expectCaught++; - - removeListener(listener); -}); - - -// DNS -errorMsgs.push('dns - lookup'); -process.nextTick(function() { - addListener(listener); - - dns.lookup('localhost', function() { - currentMsg = 'dns - lookup'; - throw new Error(currentMsg); - }); - expectCaught++; - - removeListener(listener); -}); diff --git a/test/simple/test-asynclistener-multi-timeout.js b/test/simple/test-asynclistener-multi-timeout.js deleted file mode 100644 index 9af48205450..00000000000 --- a/test/simple/test-asynclistener-multi-timeout.js +++ /dev/null @@ -1,70 +0,0 @@ -// Copyright Joyent, Inc. and other Node contributors. -// -// Permission is hereby granted, free of charge, to any person obtaining a -// copy of this software and associated documentation files (the -// "Software"), to deal in the Software without restriction, including -// without limitation the rights to use, copy, modify, merge, publish, -// distribute, sublicense, and/or sell copies of the Software, and to permit -// persons to whom the Software is furnished to do so, subject to the -// following conditions: -// -// The above copyright notice and this permission notice shall be included -// in all copies or substantial portions of the Software. -// -// THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS -// OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF -// MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN -// NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, -// DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR -// OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE -// USE OR OTHER DEALINGS IN THE SOFTWARE. - -var common = require('../common'); -var assert = require('assert'); -var tracing = require('tracing'); - -var addListener = tracing.addAsyncListener; -var removeListener = tracing.removeAsyncListener; -var caught = []; -var expect = []; - -var callbacksObj = { - error: function(value, er) { - process._rawDebug('caught', er.message); - caught.push(er.message); - return (expect.indexOf(er.message) !== -1); - } -}; - -var listener = tracing.createAsyncListener(callbacksObj); - -process.on('exit', function(code) { - removeListener(listener); - - if (code > 0) - return; - - expect = expect.sort(); - caught = caught.sort(); - - process._rawDebug('expect', expect); - process._rawDebug('caught', caught); - assert.deepEqual(caught, expect, 'caught all expected errors'); - process._rawDebug('ok'); -}); - - -expect.push('immediate simple a'); -expect.push('immediate simple b'); -process.nextTick(function() { - addListener(listener); - // Tests for a setImmediate specific bug encountered while implementing - // AsyncListeners. 
- setImmediate(function() { - throw new Error('immediate simple a'); - }); - setImmediate(function() { - throw new Error('immediate simple b'); - }); - removeListener(listener); -}); diff --git a/test/simple/test-asynclistener-throw-before-infinite-recursion.js b/test/simple/test-asynclistener-throw-before-infinite-recursion.js deleted file mode 100644 index fdbb50be17d..00000000000 --- a/test/simple/test-asynclistener-throw-before-infinite-recursion.js +++ /dev/null @@ -1,48 +0,0 @@ -// Copyright Joyent, Inc. and other Node contributors. -// -// Permission is hereby granted, free of charge, to any person obtaining a -// copy of this software and associated documentation files (the -// "Software"), to deal in the Software without restriction, including -// without limitation the rights to use, copy, modify, merge, publish, -// distribute, sublicense, and/or sell copies of the Software, and to permit -// persons to whom the Software is furnished to do so, subject to the -// following conditions: -// -// The above copyright notice and this permission notice shall be included -// in all copies or substantial portions of the Software. -// -// THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS -// OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF -// MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN -// NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, -// DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR -// OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE -// USE OR OTHER DEALINGS IN THE SOFTWARE. - -var common = require('../common'); -var assert = require('assert'); -var tracing = require('tracing'); - -// If there is an uncaughtException listener then the error thrown from -// "before" will be considered handled, thus calling setImmediate to -// finish execution of the nextTickQueue. This in turn will cause "before" -// to fire again, entering into an infinite loop. -// So the asyncQueue is cleared from the returned setImmediate in -// _fatalException to prevent this from happening. -var cntr = 0; - - -tracing.addAsyncListener({ - before: function() { - if (++cntr > 1) { - // Can't throw since uncaughtException will also catch that. - process._rawDebug('Error: Multiple before callbacks called'); - process.exit(1); - } - throw new Error('before'); - } -}); - -process.on('uncaughtException', function() { }); - -process.nextTick(); diff --git a/test/simple/test-asynclistener.js b/test/simple/test-asynclistener.js deleted file mode 100644 index 4c29ec9054f..00000000000 --- a/test/simple/test-asynclistener.js +++ /dev/null @@ -1,188 +0,0 @@ -// Copyright Joyent, Inc. and other Node contributors. -// -// Permission is hereby granted, free of charge, to any person obtaining a -// copy of this software and associated documentation files (the -// "Software"), to deal in the Software without restriction, including -// without limitation the rights to use, copy, modify, merge, publish, -// distribute, sublicense, and/or sell copies of the Software, and to permit -// persons to whom the Software is furnished to do so, subject to the -// following conditions: -// -// The above copyright notice and this permission notice shall be included -// in all copies or substantial portions of the Software. 
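
The comment block in the deleted infinite-recursion test above describes a feedback loop: with an 'uncaughtException' listener installed, a throw from the async-listener 'before' hook was treated as handled, the nextTick queue was re-run, and 'before' fired again. The same class of loop can be shown without the tracing module; a sketch of the general hazard (the counter guard exists only to stop this demonstration):

var calls = 0;

process.on('uncaughtException', function(er) {
  if (++calls > 3)
    return; // break the loop for demonstration purposes

  // Rescheduling work that throws keeps re-entering this handler.
  process.nextTick(function() {
    throw new Error('thrown again (' + calls + ')');
  });
});

process.nextTick(function() {
  throw new Error('initial throw');
});
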
-// -// THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS -// OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF -// MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN -// NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, -// DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR -// OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE -// USE OR OTHER DEALINGS IN THE SOFTWARE. - -var common = require('../common'); -var assert = require('assert'); -var net = require('net'); -var fs = require('fs'); -var dgram = require('dgram'); -var tracing = require('tracing'); - -var addListener = tracing.addAsyncListener; -var removeListener = tracing.removeAsyncListener; -var actualAsync = 0; -var expectAsync = 0; - -var callbacks = { - create: function onAsync() { - actualAsync++; - } -}; - -var listener = tracing.createAsyncListener(callbacks); - -process.on('exit', function() { - process._rawDebug('expected', expectAsync); - process._rawDebug('actual ', actualAsync); - // TODO(trevnorris): Not a great test. If one was missed, but others - // overflowed then the test would still pass. - assert.ok(actualAsync >= expectAsync); -}); - - -// Test listeners side-by-side -process.nextTick(function() { - addListener(listener); - - var b = setInterval(function() { - clearInterval(b); - }); - expectAsync++; - - var c = setInterval(function() { - clearInterval(c); - }); - expectAsync++; - - setTimeout(function() { }); - expectAsync++; - - setTimeout(function() { }); - expectAsync++; - - process.nextTick(function() { }); - expectAsync++; - - process.nextTick(function() { }); - expectAsync++; - - setImmediate(function() { }); - expectAsync++; - - setImmediate(function() { }); - expectAsync++; - - setTimeout(function() { }, 10); - expectAsync++; - - setTimeout(function() { }, 10); - expectAsync++; - - removeListener(listener); -}); - - -// Async listeners should propagate with nested callbacks -process.nextTick(function() { - addListener(listener); - var interval = 3; - - process.nextTick(function() { - setTimeout(function() { - setImmediate(function() { - var i = setInterval(function() { - if (--interval <= 0) - clearInterval(i); - }); - expectAsync++; - }); - expectAsync++; - process.nextTick(function() { - setImmediate(function() { - setTimeout(function() { }, 20); - expectAsync++; - }); - expectAsync++; - }); - expectAsync++; - }); - expectAsync++; - }); - expectAsync++; - - removeListener(listener); -}); - - -// Test triggers with two async listeners -process.nextTick(function() { - addListener(listener); - addListener(listener); - - setTimeout(function() { - process.nextTick(function() { }); - expectAsync += 2; - }); - expectAsync += 2; - - removeListener(listener); - removeListener(listener); -}); - - -// Test callbacks from fs I/O -process.nextTick(function() { - addListener(listener); - - fs.stat('something random', function(err, stat) { }); - expectAsync++; - - setImmediate(function() { - fs.stat('random again', function(err, stat) { }); - expectAsync++; - }); - expectAsync++; - - removeListener(listener); -}); - - -// Test net I/O -process.nextTick(function() { - addListener(listener); - - var server = net.createServer(function(c) { }); - expectAsync++; - - server.listen(common.PORT, function() { - server.close(); - expectAsync++; - }); - expectAsync++; - - removeListener(listener); -}); - - -// Test UDP -process.nextTick(function() { - addListener(listener); - - var server = 
dgram.createSocket('udp4'); - expectAsync++; - - server.bind(common.PORT); - - server.close(); - expectAsync++; - - removeListener(listener); -}); diff --git a/test/simple/test-v8-stats.js b/test/simple/test-buffer-slice.js similarity index 79% rename from test/simple/test-v8-stats.js rename to test/simple/test-buffer-slice.js index 6d70fb9a02a..1a462bd0be3 100644 --- a/test/simple/test-v8-stats.js +++ b/test/simple/test-buffer-slice.js @@ -21,16 +21,12 @@ var common = require('../common'); var assert = require('assert'); -var v8 = require('tracing').v8; -var s = v8.getHeapStatistics(); -var keys = [ - 'heap_size_limit', - 'total_heap_size', - 'total_heap_size_executable', - 'total_physical_size', - 'used_heap_size']; -assert.deepEqual(Object.keys(s).sort(), keys); -keys.forEach(function(key) { - assert.equal(typeof s[key], 'number'); -}); +var Buffer = require('buffer').Buffer; + +var buff = new Buffer(Buffer.poolSize + 1); +var slicedBuffer = buff.slice(); +assert.equal(slicedBuffer.parent, + buff, + "slicedBuffer should have its parent set to the original " + + "buffer"); diff --git a/test/simple/test-buffer.js b/test/simple/test-buffer.js index 70cc5908af4..bf742f93480 100644 --- a/test/simple/test-buffer.js +++ b/test/simple/test-buffer.js @@ -49,6 +49,56 @@ var c = new Buffer(512); console.log('c.length == %d', c.length); assert.strictEqual(512, c.length); +// First check Buffer#fill() works as expected. + +assert.throws(function() { + Buffer(8).fill('a', -1); +}); + +assert.throws(function() { + Buffer(8).fill('a', 0, 9); +}); + +// Make sure this doesn't hang indefinitely. +Buffer(8).fill(''); + +var buf = new Buffer(64); +buf.fill(10); +for (var i = 0; i < buf.length; i++) + assert.equal(buf[i], 10); + +buf.fill(11, 0, buf.length >> 1); +for (var i = 0; i < buf.length >> 1; i++) + assert.equal(buf[i], 11); +for (var i = buf.length >> 1; i < buf.length; i++) + assert.equal(buf[i], 10); + +buf.fill('h'); +for (var i = 0; i < buf.length; i++) + assert.equal('h'.charCodeAt(0), buf[i]); + +buf.fill(0); +for (var i = 0; i < buf.length; i++) + assert.equal(0, buf[i]); + +buf.fill(null); +for (var i = 0; i < buf.length; i++) + assert.equal(0, buf[i]); + +buf.fill(1, 16, 32); +for (var i = 0; i < 16; i++) + assert.equal(0, buf[i]); +for (; i < 32; i++) + assert.equal(1, buf[i]); +for (; i < buf.length; i++) + assert.equal(0, buf[i]); + +var buf = new Buffer(10); +buf.fill('abc'); +assert.equal(buf.toString(), 'abcabcabca'); +buf.fill('է'); +assert.equal(buf.toString(), 'էէէէէ'); + // copy 512 bytes, from 0 to 512. 
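
The new test-buffer.js assertions above pin down Buffer#fill(value[, offset[, end]]): numeric values set every byte in the range, string values repeat across it (and may be cut mid-character for multi-byte input), and an out-of-range offset or end now throws. A short illustration before the hunk continues below (values chosen arbitrarily):

var b = new Buffer(10);

b.fill('abc');             // string pattern repeats
console.log(b.toString()); // 'abcabcabca'

b.fill(0, 0, 4);           // zero only bytes [0, 4)
console.log(b.toString('ascii', 4)); // 'bcabca'
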
b.fill(++cntr); c.fill(++cntr); @@ -642,28 +692,6 @@ assert.equal(0x6f, z[1]); assert.equal(0, Buffer('hello').slice(0, 0).length); -b = new Buffer(50); -b.fill('h'); -for (var i = 0; i < b.length; i++) { - assert.equal('h'.charCodeAt(0), b[i]); -} - -b.fill(0); -for (var i = 0; i < b.length; i++) { - assert.equal(0, b[i]); -} - -b.fill(1, 16, 32); -for (var i = 0; i < 16; i++) assert.equal(0, b[i]); -for (; i < 32; i++) assert.equal(1, b[i]); -for (; i < b.length; i++) assert.equal(0, b[i]); - -var buf = new Buffer(10); -buf.fill('abc'); -assert.equal(buf.toString(), 'abcabcabca'); -buf.fill('է'); -assert.equal(buf.toString(), 'էէէէէ'); - ['ucs2', 'ucs-2', 'utf16le', 'utf-16le'].forEach(function(encoding) { var b = new Buffer(10); b.write('あいうえお', encoding); @@ -922,8 +950,6 @@ var buf = new Buffer([0xFF]); assert.equal(buf.readUInt8(0), 255); assert.equal(buf.readInt8(0), -1); - - [16, 32].forEach(function(bits) { var buf = new Buffer(bits / 8 - 1); @@ -960,6 +986,91 @@ assert.equal(buf.readInt8(0), -1); (0xFFFFFFFF >> (32 - bits))); }); +// test for common read(U)IntLE/BE +(function() { + var buf = new Buffer([0x01, 0x02, 0x03, 0x04, 0x05, 0x06]); + + assert.equal(buf.readUIntLE(0, 1), 0x01); + assert.equal(buf.readUIntBE(0, 1), 0x01); + assert.equal(buf.readUIntLE(0, 3), 0x030201); + assert.equal(buf.readUIntBE(0, 3), 0x010203); + assert.equal(buf.readUIntLE(0, 5), 0x0504030201); + assert.equal(buf.readUIntBE(0, 5), 0x0102030405); + assert.equal(buf.readUIntLE(0, 6), 0x060504030201); + assert.equal(buf.readUIntBE(0, 6), 0x010203040506); + assert.equal(buf.readIntLE(0, 1), 0x01); + assert.equal(buf.readIntBE(0, 1), 0x01); + assert.equal(buf.readIntLE(0, 3), 0x030201); + assert.equal(buf.readIntBE(0, 3), 0x010203); + assert.equal(buf.readIntLE(0, 5), 0x0504030201); + assert.equal(buf.readIntBE(0, 5), 0x0102030405); + assert.equal(buf.readIntLE(0, 6), 0x060504030201); + assert.equal(buf.readIntBE(0, 6), 0x010203040506); +})(); + +// test for common write(U)IntLE/BE +(function() { + var buf = new Buffer(3); + buf.writeUIntLE(0x123456, 0, 3); + assert.deepEqual(buf.toJSON().data, [0x56, 0x34, 0x12]); + assert.equal(buf.readUIntLE(0, 3), 0x123456); + + buf = new Buffer(3); + buf.writeUIntBE(0x123456, 0, 3); + assert.deepEqual(buf.toJSON().data, [0x12, 0x34, 0x56]); + assert.equal(buf.readUIntBE(0, 3), 0x123456); + + buf = new Buffer(3); + buf.writeIntLE(0x123456, 0, 3); + assert.deepEqual(buf.toJSON().data, [0x56, 0x34, 0x12]); + assert.equal(buf.readIntLE(0, 3), 0x123456); + + buf = new Buffer(3); + buf.writeIntBE(0x123456, 0, 3); + assert.deepEqual(buf.toJSON().data, [0x12, 0x34, 0x56]); + assert.equal(buf.readIntBE(0, 3), 0x123456); + + buf = new Buffer(3); + buf.writeIntLE(-0x123456, 0, 3); + assert.deepEqual(buf.toJSON().data, [0xaa, 0xcb, 0xed]); + assert.equal(buf.readIntLE(0, 3), -0x123456); + + buf = new Buffer(3); + buf.writeIntBE(-0x123456, 0, 3); + assert.deepEqual(buf.toJSON().data, [0xed, 0xcb, 0xaa]); + assert.equal(buf.readIntBE(0, 3), -0x123456); + + buf = new Buffer(5); + buf.writeUIntLE(0x1234567890, 0, 5); + assert.deepEqual(buf.toJSON().data, [0x90, 0x78, 0x56, 0x34, 0x12]); + assert.equal(buf.readUIntLE(0, 5), 0x1234567890); + + buf = new Buffer(5); + buf.writeUIntBE(0x1234567890, 0, 5); + assert.deepEqual(buf.toJSON().data, [0x12, 0x34, 0x56, 0x78, 0x90]); + assert.equal(buf.readUIntBE(0, 5), 0x1234567890); + + buf = new Buffer(5); + buf.writeIntLE(0x1234567890, 0, 5); + assert.deepEqual(buf.toJSON().data, [0x90, 0x78, 0x56, 0x34, 0x12]); + 
assert.equal(buf.readIntLE(0, 5), 0x1234567890); + + buf = new Buffer(5); + buf.writeIntBE(0x1234567890, 0, 5); + assert.deepEqual(buf.toJSON().data, [0x12, 0x34, 0x56, 0x78, 0x90]); + assert.equal(buf.readIntBE(0, 5), 0x1234567890); + + buf = new Buffer(5); + buf.writeIntLE(-0x1234567890, 0, 5); + assert.deepEqual(buf.toJSON().data, [0x70, 0x87, 0xa9, 0xcb, 0xed]); + assert.equal(buf.readIntLE(0, 5), -0x1234567890); + + buf = new Buffer(5); + buf.writeIntBE(-0x1234567890, 0, 5); + assert.deepEqual(buf.toJSON().data, [0xed, 0xcb, 0xa9, 0x87, 0x70]); + assert.equal(buf.readIntBE(0, 5), -0x1234567890); +})(); + // test Buffer slice (function() { var buf = new Buffer('0123456789'); diff --git a/test/simple/test-child-process-spawn-typeerror.js b/test/simple/test-child-process-spawn-typeerror.js index 791adcbc208..4fd360a3f86 100644 --- a/test/simple/test-child-process-spawn-typeerror.js +++ b/test/simple/test-child-process-spawn-typeerror.js @@ -22,12 +22,15 @@ var spawn = require('child_process').spawn, assert = require('assert'), windows = (process.platform === 'win32'), - cmd = (windows) ? 'ls' : 'dir', + cmd = (windows) ? 'rundll32' : 'ls', + invalidcmd = 'hopefully_you_dont_have_this_on_your_machine', + invalidArgsMsg = /Incorrect value of args option/, + invalidOptionsMsg = /options argument must be an object/, errors = 0; try { // Ensure this throws a TypeError - var child = spawn(cmd, 'this is not an array'); + var child = spawn(invalidcmd, 'this is not an array'); child.on('error', function (err) { errors++; @@ -37,6 +40,44 @@ try { assert.equal(e instanceof TypeError, true); } +// verify that valid argument combinations do not throw +assert.doesNotThrow(function() { + spawn(cmd); +}); + +assert.doesNotThrow(function() { + spawn(cmd, []); +}); + +assert.doesNotThrow(function() { + spawn(cmd, {}); +}); + +assert.doesNotThrow(function() { + spawn(cmd, [], {}); +}); + +// verify that invalid argument combinations throw +assert.throws(function() { + spawn(); +}, /Bad argument/); + +assert.throws(function() { + spawn(cmd, null); +}, invalidArgsMsg); + +assert.throws(function() { + spawn(cmd, true); +}, invalidArgsMsg); + +assert.throws(function() { + spawn(cmd, [], null); +}, invalidOptionsMsg); + +assert.throws(function() { + spawn(cmd, [], 1); +}, invalidOptionsMsg); + process.on('exit', function() { assert.equal(errors, 0); }); diff --git a/test/simple/test-child-process-spawnsync-env.js b/test/simple/test-child-process-spawnsync-env.js new file mode 100644 index 00000000000..0cde9ffeefa --- /dev/null +++ b/test/simple/test-child-process-spawnsync-env.js @@ -0,0 +1,35 @@ +// Copyright Joyent, Inc. and other Node contributors. +// +// Permission is hereby granted, free of charge, to any person obtaining a +// copy of this software and associated documentation files (the +// "Software"), to deal in the Software without restriction, including +// without limitation the rights to use, copy, modify, merge, publish, +// distribute, sublicense, and/or sell copies of the Software, and to permit +// persons to whom the Software is furnished to do so, subject to the +// following conditions: +// +// The above copyright notice and this permission notice shall be included +// in all copies or substantial portions of the Software. +// +// THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS +// OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF +// MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. 
IN +// NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, +// DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR +// OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE +// USE OR OTHER DEALINGS IN THE SOFTWARE. + +var common = require('../common'); +var assert = require('assert'); +var cp = require('child_process'); + +if (process.argv[2] === 'child') { + console.log(process.env.foo); +} else { + var expected = 'bar'; + var child = cp.spawnSync(process.execPath, [__filename, 'child'], { + env: {foo: expected} + }); + + assert.equal(child.stdout.toString().trim(), expected); +} diff --git a/test/simple/test-cluster-dgram-3.js b/test/simple/test-cluster-dgram-3.js new file mode 100644 index 00000000000..7e759c15309 --- /dev/null +++ b/test/simple/test-cluster-dgram-3.js @@ -0,0 +1,47 @@ +// Copyright Joyent, Inc. and other Node contributors. +// +// Permission is hereby granted, free of charge, to any person obtaining a +// copy of this software and associated documentation files (the +// "Software"), to deal in the Software without restriction, including +// without limitation the rights to use, copy, modify, merge, publish, +// distribute, sublicense, and/or sell copies of the Software, and to permit +// persons to whom the Software is furnished to do so, subject to the +// following conditions: +// +// The above copyright notice and this permission notice shall be included +// in all copies or substantial portions of the Software. +// +// THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS +// OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF +// MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN +// NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, +// DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR +// OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE +// USE OR OTHER DEALINGS IN THE SOFTWARE. + +var assert = require('assert'); +var cluster = require('cluster'); +var dgram = require('dgram'); + +if (cluster.isMaster) { + // ensure that the worker exits peacefully + var worker = cluster.fork(); + worker.on('exit', function(statusCode) { + assert.equal(statusCode, 0); + worker = null; + }); + process.on('exit', function() { + assert.equal(worker, null); + }); + + return; +} + +// Should return two sockets, open on the same ephemeral port +var A = dgram.createSocket('udp4'); +A.bind(0); + +var B = dgram.createSocket('udp4'); +B.bind(0); + +setTimeout(process.disconnect.bind(process), 10); diff --git a/test/simple/test-asynclistener-remove-inflight-error.js b/test/simple/test-cluster-dgram-4.js similarity index 61% rename from test/simple/test-asynclistener-remove-inflight-error.js rename to test/simple/test-cluster-dgram-4.js index 1b9150152a7..a604966d10f 100644 --- a/test/simple/test-asynclistener-remove-inflight-error.js +++ b/test/simple/test-cluster-dgram-4.js @@ -19,40 +19,41 @@ // OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE // USE OR OTHER DEALINGS IN THE SOFTWARE. 
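
test-child-process-spawnsync-env.js above exercises the synchronous spawnSync() API: it blocks until the child exits and returns a plain result object rather than a ChildProcess. A minimal sketch of the calling convention and the result fields the test relies on, plus status, which spawnSync also reports:

var cp = require('child_process');

var result = cp.spawnSync(
    process.execPath,
    ['-e', 'console.log(process.env.foo)'],
    { env: { foo: 'bar' } });

console.log(result.status);                   // exit code (0 on success)
console.log(result.stdout.toString().trim()); // 'bar'
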
-var common = require('../common'); var assert = require('assert'); -var tracing = require('tracing'); - -var set = 0; -var asyncNoHandleError = { - error: function() { - set++; - } +var cluster = require('cluster'); +var dgram = require('dgram'); + +if (cluster.isMaster) { + // ensure that the worker exits peacefully + var worker = cluster.fork(); + worker.on('exit', function(statusCode) { + assert.equal(statusCode, 0); + worker = null; + }); + worker.on('message', function(msg) { + if (msg === 'PASS') { + worker.disconnect(); + } + }); + process.on('exit', function() { + assert.equal(worker, null); + }); + + return; } -var key = tracing.addAsyncListener(asyncNoHandleError); - -process.nextTick(function() { - throw 1; -}); - -tracing.removeAsyncListener(key); - -var uncaughtFired = false; -process.on('uncaughtException', function() { - uncaughtFired = true; - - // Throwing should call the error handler once, then propagate to - // uncaughtException - assert.equal(set, 1); -}); +// Should open the same ephemeral port twice in a row. +var A = dgram.createSocket('udp4'); +A.bind(0, function() { + A.close(); -process.on('exit', function(code) { - // If the exit code isn't ok then return early to throw the stack that - // caused the bad return code. - if (code !== 0) - return; + A.on('close', function() { + var B = dgram.createSocket('udp4'); + B.bind(0); - assert.ok(uncaughtFired); - console.log('ok'); + setTimeout(function() { + B.close(); + process.send('PASS'); + }, 0); + }); }); diff --git a/test/simple/test-cluster-net-listen-2.js b/test/simple/test-cluster-net-listen-2.js new file mode 100644 index 00000000000..2d9c67dc2e7 --- /dev/null +++ b/test/simple/test-cluster-net-listen-2.js @@ -0,0 +1,56 @@ +// Copyright Joyent, Inc. and other Node contributors. +// +// Permission is hereby granted, free of charge, to any person obtaining a +// copy of this software and associated documentation files (the +// "Software"), to deal in the Software without restriction, including +// without limitation the rights to use, copy, modify, merge, publish, +// distribute, sublicense, and/or sell copies of the Software, and to permit +// persons to whom the Software is furnished to do so, subject to the +// following conditions: +// +// The above copyright notice and this permission notice shall be included +// in all copies or substantial portions of the Software. +// +// THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS +// OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF +// MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN +// NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, +// DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR +// OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE +// USE OR OTHER DEALINGS IN THE SOFTWARE. + +var assert = require('assert'); +var cluster = require('cluster'); +var net = require('net'); + +if (cluster.isMaster) { + // ensure that the worker exits peacefully + var worker = cluster.fork(); + worker.on('exit', function(statusCode) { + assert.equal(statusCode, 0); + worker = null; + }); + worker.on('message', function(msg) { + if (msg === 'PASS') { + worker.disconnect(); + } + }); + process.on('exit', function() { + assert.equal(worker, null); + }); +} +else { + // Should open the same ephemeral port twice in a row. 
+ var A, B; + A = net.createServer().listen(0, function() { + console.log('Listen A'); + A.close(); + A.on('close', function() { + B = net.createServer().listen(0, function() { + console.log('Listen B'); + B.close(); + process.send('PASS'); + }); + }); + }); +} diff --git a/test/simple/test-asynclistener-run-error-once.js b/test/simple/test-cluster-net-listen-3.js similarity index 61% rename from test/simple/test-asynclistener-run-error-once.js rename to test/simple/test-cluster-net-listen-3.js index 154decb99af..573d5665b71 100644 --- a/test/simple/test-asynclistener-run-error-once.js +++ b/test/simple/test-cluster-net-listen-3.js @@ -19,41 +19,36 @@ // OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE // USE OR OTHER DEALINGS IN THE SOFTWARE. -var common = require('../common'); var assert = require('assert'); +var cluster = require('cluster'); var net = require('net'); -var tracing = require('tracing'); -var errMsg = 'net - error: server connection'; -var cntr = 0; -var al = tracing.addAsyncListener({ - error: function(stor, er) { - cntr++; - process._rawDebug('Handling error: ' + er.message); - assert.equal(errMsg, er.message); - return true; - } -}); - -process.on('exit', function(status) { - tracing.removeAsyncListener(al); - - console.log('exit status:', status); - assert.equal(status, 0); - console.log('cntr:', cntr); - assert.equal(cntr, 1); - console.log('ok'); -}); - - -var server = net.createServer(function(c) { - this.close(); - throw new Error(errMsg); -}); - - -server.listen(common.PORT, function() { - net.connect(common.PORT, function() { - this.destroy(); +if (cluster.isMaster) { + // ensure that the worker exits peacefully + var worker = cluster.fork(); + worker.on('exit', function(statusCode) { + assert.equal(statusCode, 0); + worker = null; + }); + worker.on('message', function(msg) { + if (msg === 'PASS') { + worker.disconnect(); + } + }); + process.on('exit', function() { + assert.equal(worker, null); + }); +} +else { + // Should return two sockets, open on the same ephemeral port + var A, B; + A = net.createServer().listen(0, function() { + console.log('Listen A'); + B = net.createServer().listen(0, function() { + console.log('Listen B'); + A.close(); + B.close(); + process.send('PASS'); + }); }); -}); +} diff --git a/test/simple/test-cluster-worker-destroy.js b/test/simple/test-cluster-worker-destroy.js new file mode 100644 index 00000000000..318b55caf6f --- /dev/null +++ b/test/simple/test-cluster-worker-destroy.js @@ -0,0 +1,79 @@ +// Copyright Joyent, Inc. and other Node contributors. +// +// Permission is hereby granted, free of charge, to any person obtaining a +// copy of this software and associated documentation files (the +// "Software"), to deal in the Software without restriction, including +// without limitation the rights to use, copy, modify, merge, publish, +// distribute, sublicense, and/or sell copies of the Software, and to permit +// persons to whom the Software is furnished to do so, subject to the +// following conditions: +// +// The above copyright notice and this permission notice shall be included +// in all copies or substantial portions of the Software. +// +// THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS +// OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF +// MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. 
IN +// NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, +// DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR +// OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE +// USE OR OTHER DEALINGS IN THE SOFTWARE. + +/* + * The goal of this test is to cover the Workers' implementation of + * Worker.prototype.destroy. Worker.prototype.destroy is called within + * the worker's context: once when the worker is still connected to the + * master, and another time when it's not connected to it, so that we cover + * both code paths. + */ + +require('../common'); +var cluster = require('cluster'); +var assert = require('assert'); + +var worker1, worker2, workerExited, workerDisconnected; + +if (cluster.isMaster) { + worker1 = cluster.fork(); + worker2 = cluster.fork(); + + workerExited = 0; + workerDisconnected = 0; + + [worker1, worker2].forEach(function(worker) { + worker.on('disconnect', ondisconnect); + worker.on('exit', onexit); + }); + + process.on('exit', onProcessExit); + +} else { + if (cluster.worker.id === 1) { + // Call destroy when worker is disconnected + cluster.worker.process.on('disconnect', function() { + cluster.worker.destroy(); + }); + + cluster.worker.disconnect(); + } else { + // Call destroy when worker is not disconnected yet + cluster.worker.destroy(); + } +} + +function onProcessExit() { + assert.equal(workerExited, + 2, + 'When master exits, all workers should have exited too'); + assert.equal(workerDisconnected, + 2, + 'When master exits, all workers should have disconnected'); +} + +function ondisconnect() { + ++workerDisconnected; +} + +function onexit() { + ++workerExited; +} diff --git a/test/simple/test-crypto.js b/test/simple/test-crypto.js index 4b623380b46..d3f1ade321c 100644 --- a/test/simple/test-crypto.js +++ b/test/simple/test-crypto.js @@ -719,6 +719,22 @@ assert.equal(secret1, secret2.toString('base64')); assert.equal(dh1.verifyError, 0); assert.equal(dh2.verifyError, 0); +assert.throws(function() { + crypto.createDiffieHellman([0x1, 0x2]); +}); + +assert.throws(function() { + crypto.createDiffieHellman(function() { }); +}); + +assert.throws(function() { + crypto.createDiffieHellman(/abc/); +}); + +assert.throws(function() { + crypto.createDiffieHellman({}); +}); + // Create "another dh1" using generated keys from dh1, // and compute secret again var dh3 = crypto.createDiffieHellman(p1, 'buffer'); diff --git a/test/simple/test-debug-signal-cluster.js b/test/simple/test-debug-signal-cluster.js index 09dc3b4c2fd..27d53910b74 100644 --- a/test/simple/test-debug-signal-cluster.js +++ b/test/simple/test-debug-signal-cluster.js @@ -24,11 +24,15 @@ var assert = require('assert'); var spawn = require('child_process').spawn; var args = [ common.fixturesDir + '/clustered-server/app.js' ]; -var child = spawn(process.execPath, args); +var child = spawn(process.execPath, args, { + stdio: [ 'pipe', 'pipe', 'pipe', 'ipc' ] +}); var outputLines = []; var outputTimerId; var waitingForDebuggers = false; +var pids = null; + child.stderr.on('data', function(data) { var lines = data.toString().replace(/\r/g, '').trim().split('\n'); var line = lines[0]; @@ -42,8 +46,20 @@ child.stderr.on('data', function(data) { outputLines = outputLines.concat(lines); outputTimerId = setTimeout(onNoMoreLines, 800); } else if (line === 'all workers are running') { - waitingForDebuggers = true; - process._debugProcess(child.pid); + child.on('message', function(msg) { + if (msg.type !== 'pids') + return; + + pids = msg.pids; + 
console.error('got pids %j', pids); + + waitingForDebuggers = true; + process._debugProcess(child.pid); + }); + + child.send({ + type: 'getpids' + }); } }); @@ -57,7 +73,9 @@ setTimeout(function testTimedOut() { }, 6000); process.on('exit', function onExit() { - child.kill(); + pids.forEach(function(pid) { + process.kill(pid); + }); }); function assertOutputLines() { diff --git a/test/simple/test-dgram-bind-shared-ports.js b/test/simple/test-dgram-bind-shared-ports.js index 709d3ad5518..c9e22b5921f 100644 --- a/test/simple/test-dgram-bind-shared-ports.js +++ b/test/simple/test-dgram-bind-shared-ports.js @@ -24,6 +24,11 @@ var assert = require('assert'); var cluster = require('cluster'); var dgram = require('dgram'); +// TODO XXX FIXME when windows supports clustered dgram ports re-enable this +// test +if (process.platform == 'win32') + process.exit(0); + function noop() {} if (cluster.isMaster) { diff --git a/test/simple/test-asynclistener-remove-add-in-before.js b/test/simple/test-dns-cares-domains.js similarity index 70% rename from test/simple/test-asynclistener-remove-add-in-before.js rename to test/simple/test-dns-cares-domains.js index af0bc78e0e8..8581ed64db2 100644 --- a/test/simple/test-asynclistener-remove-add-in-before.js +++ b/test/simple/test-dns-cares-domains.js @@ -21,27 +21,26 @@ var common = require('../common'); var assert = require('assert'); -var tracing = require('tracing'); -var val; -var callbacks = { - create: function() { - return 42; - }, - before: function() { - tracing.removeAsyncListener(listener); - tracing.addAsyncListener(listener); - }, - after: function(context, storage) { - val = storage; - } -}; +var dns = require('dns'); +var domain = require('domain'); -var listener = tracing.addAsyncListener(callbacks); +var methods = [ + 'resolve4', + 'resolve6', + 'resolveCname', + 'resolveMx', + 'resolveNs', + 'resolveTxt', + 'resolveSrv', + 'resolveNaptr', + 'resolveSoa' +] -process.nextTick(function() {}); - -process.on('exit', function(status) { - tracing.removeAsyncListener(listener); - assert.equal(status, 0); - assert.equal(val, 42); +methods.forEach(function(method) { + var d = domain.create(); + d.run(function() { + dns[method]('google.com', function() { + assert.strictEqual(process.domain, d, method + ' retains domain') + }); + }); }); diff --git a/test/simple/test-fs-error-messages.js b/test/simple/test-fs-error-messages.js index 773faa69d74..16b5dd92b3f 100644 --- a/test/simple/test-fs-error-messages.js +++ b/test/simple/test-fs-error-messages.js @@ -56,6 +56,10 @@ fs.link(existingFile, existingFile2, function(err) { assert.ok(0 <= err.message.indexOf(existingFile2)); }); +fs.symlink(existingFile, existingFile2, function(err) { + assert.ok(0 <= err.message.indexOf(existingFile2)); +}); + fs.unlink(fn, function(err) { assert.ok(0 <= err.message.indexOf(fn)); }); @@ -153,6 +157,14 @@ try { assert.ok(0 <= err.message.indexOf(existingFile2)); } +try { + ++expected; + fs.symlinkSync(existingFile, existingFile2); +} catch (err) { + errors.push('symlink'); + assert.ok(0 <= err.message.indexOf(existingFile2)); +} + try { ++expected; fs.unlinkSync(fn); diff --git a/test/simple/test-asynclistener-remove-inflight.js b/test/simple/test-http-res-write-after-end.js similarity index 65% rename from test/simple/test-asynclistener-remove-inflight.js rename to test/simple/test-http-res-write-after-end.js index a21b1923f40..71a2564bfe4 100644 --- a/test/simple/test-asynclistener-remove-inflight.js +++ b/test/simple/test-http-res-write-after-end.js @@ -21,33 +21,29 @@ var 
common = require('../common'); var assert = require('assert'); -var tracing = require('tracing'); +var http = require('http'); -var set = 0; -var asyncNoHandleError = { - before: function() { - set++; - }, - after: function() { - set++; - } -} +var responseError; -var key = tracing.addAsyncListener(asyncNoHandleError); +var server = http.Server(function(req, res) { + res.on('error', function onResError(err) { + responseError = err; + }); -process.nextTick(function() { }); + res.write('This should write.'); + res.end(); -tracing.removeAsyncListener(key); - -process.on('exit', function(code) { - // If the exit code isn't ok then return early to throw the stack that - // caused the bad return code. - if (code !== 0) - return; + var r = res.write('This should raise an error.'); + assert.equal(r, true, 'write after end should return true'); +}); - // Calling removeAsyncListener *after* a callback is scheduled - // should not affect the handler from responding to the callback. - assert.equal(set, 2); - console.log('ok'); +server.listen(common.PORT, function() { + var req = http.get({port: common.PORT}, function(res) { + server.close(); + }); }); +process.on('exit', function onProcessExit(code) { + assert(responseError, 'response should have emitted error'); + assert.equal(responseError.message, 'write after end'); +}); diff --git a/test/simple/test-http-write-head.js b/test/simple/test-http-write-head.js index 2de59fe2eda..88923ef27ac 100644 --- a/test/simple/test-http-write-head.js +++ b/test/simple/test-http-write-head.js @@ -28,6 +28,17 @@ var http = require('http'); var s = http.createServer(function(req, res) { res.setHeader('test', '1'); + + // toLowerCase() is used on the name argument, so it must be a string. + var threw = false; + try { + res.setHeader(0xf00, 'bar'); + } catch (e) { + assert.ok(e instanceof TypeError); + threw = true; + } + assert.ok(threw, 'Non-string names should throw'); + res.writeHead(200, { Test: '2' }); res.end(); }); diff --git a/test/simple/test-intl.js b/test/simple/test-intl.js new file mode 100644 index 00000000000..841239a8d94 --- /dev/null +++ b/test/simple/test-intl.js @@ -0,0 +1,103 @@ +// Copyright Joyent, Inc. and other Node contributors. +// +// Permission is hereby granted, free of charge, to any person obtaining a +// copy of this software and associated documentation files (the +// "Software"), to deal in the Software without restriction, including +// without limitation the rights to use, copy, modify, merge, publish, +// distribute, sublicense, and/or sell copies of the Software, and to permit +// persons to whom the Software is furnished to do so, subject to the +// following conditions: +// +// The above copyright notice and this permission notice shall be included +// in all copies or substantial portions of the Software. +// +// THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS +// OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF +// MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN +// NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, +// DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR +// OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE +// USE OR OTHER DEALINGS IN THE SOFTWARE. + +var common = require('../common'); +var assert = require('assert'); + +// does node think that i18n was enabled? 
+var enablei18n = process.config.variables.v8_enable_i18n_support; +if (enablei18n === undefined) { + enablei18n = false; +} + +// is the Intl object present? +var haveIntl = (global.Intl != undefined); + +// Returns true if no specific locale ids were configured (i.e. "all") +// Else, returns true if loc is in the configured list +// Else, returns false +function haveLocale(loc) { + var locs = process.config.variables.icu_locales.split(','); + return locs.indexOf(loc) !== -1; +} + +if (!haveIntl) { + var erMsg = + '"Intl" object is NOT present but v8_enable_i18n_support is ' + + enablei18n; + assert.equal(enablei18n, false, erMsg); + console.log('Skipping Intl tests because Intl object not present.'); + +} else { + var erMsg = + '"Intl" object is present but v8_enable_i18n_support is ' + + enablei18n + + '. Is this test out of date?'; + assert.equal(enablei18n, true, erMsg); + + // Construct a new date at the beginning of Unix time + var date0 = new Date(0); + + // Use the GMT time zone + var GMT = 'Etc/GMT'; + + // Construct an English formatter. Should format to "Jan 70" + var dtf = + new Intl.DateTimeFormat(['en'], + {timeZone: GMT, month: 'short', year: '2-digit'}); + + // If list is specified and doesn't contain 'en' then return. + if (process.config.variables.icu_locales && !haveLocale('en')) { + console.log('Skipping detailed Intl tests because English is not listed ' + + 'as supported.'); + // Smoke test. Does it format anything, or fail? + console.log('Date(0) formatted to: ' + dtf.format(date0)); + return; + } + + // Check with toLocaleString + var localeString = dtf.format(date0); + assert.equal(localeString, 'Jan 70'); + + // Options to request GMT + var optsGMT = {timeZone: GMT}; + + // Test format + localeString = date0.toLocaleString(['en'], optsGMT); + assert.equal(localeString, '1/1/1970, 12:00:00 AM'); + + // number format + assert.equal(new Intl.NumberFormat(['en']).format(12345.67890), '12,345.679'); + + var collOpts = { sensitivity: 'base', ignorePunctuation: true }; + var coll = new Intl.Collator(['en'], collOpts); + + assert.equal(coll.compare('blackbird', 'black-bird'), 0, + 'ignore punctuation failed'); + assert.equal(coll.compare('blackbird', 'red-bird'), -1, + 'compare less failed'); + assert.equal(coll.compare('bluebird', 'blackbird'), 1, + 'compare greater failed'); + assert.equal(coll.compare('Bluebird', 'bluebird'), 0, + 'ignore case failed'); + assert.equal(coll.compare('\ufb03', 'ffi'), 0, + 'ffi ligature (contraction) failed'); +} diff --git a/test/simple/test-microtask-queue-integration-domain.js b/test/simple/test-microtask-queue-integration-domain.js new file mode 100644 index 00000000000..2197bf9212e --- /dev/null +++ b/test/simple/test-microtask-queue-integration-domain.js @@ -0,0 +1,70 @@ +// Copyright Joyent, Inc. and other Node contributors. +// +// Permission is hereby granted, free of charge, to any person obtaining a +// copy of this software and associated documentation files (the +// "Software"), to deal in the Software without restriction, including +// without limitation the rights to use, copy, modify, merge, publish, +// distribute, sublicense, and/or sell copies of the Software, and to permit +// persons to whom the Software is furnished to do so, subject to the +// following conditions: +// +// The above copyright notice and this permission notice shall be included +// in all copies or substantial portions of the Software. 
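For reference, the expected strings asserted above come straight from the English locale data; any build that bundles the `en` locale should reproduce them. A minimal sketch:

```js
var dtf = new Intl.DateTimeFormat(['en'],
    {timeZone: 'Etc/GMT', month: 'short', year: '2-digit'});
console.log(dtf.format(new Date(0)));                           // 'Jan 70'
console.log(new Intl.NumberFormat(['en']).format(12345.67890)); // '12,345.679'

var coll = new Intl.Collator(['en'],
    {sensitivity: 'base', ignorePunctuation: true});
console.log(coll.compare('blackbird', 'black-bird'));           // 0
```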
+// +// THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS +// OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF +// MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN +// NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, +// DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR +// OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE +// USE OR OTHER DEALINGS IN THE SOFTWARE. + +var common = require('../common'); +var assert = require('assert'); +var domain = require('domain'); + +var implementations = [ + function (fn) { + Promise.resolve().then(fn); + }, + function (fn) { + var obj = {}; + + Object.observe(obj, fn); + + obj.a = 1; + } +]; + +var expected = 0; +var done = 0; + +process.on('exit', function () { + assert.equal(done, expected); +}); + +function test (scheduleMicrotask) { + var nextTickCalled = false; + expected++; + + scheduleMicrotask(function () { + process.nextTick(function () { + nextTickCalled = true; + }); + + setTimeout(function () { + assert(nextTickCalled); + done++; + }, 0); + }); +} + +// first tick case +implementations.forEach(test); + +// tick callback case +setTimeout(function () { + implementations.forEach(function (impl) { + process.nextTick(test.bind(null, impl)); + }); +}, 0); diff --git a/test/simple/test-asynclistener-error-throw-in-after.js b/test/simple/test-microtask-queue-integration.js similarity index 62% rename from test/simple/test-asynclistener-error-throw-in-after.js rename to test/simple/test-microtask-queue-integration.js index 3eb02e165d4..af01548477f 100644 --- a/test/simple/test-asynclistener-error-throw-in-after.js +++ b/test/simple/test-microtask-queue-integration.js @@ -21,42 +21,49 @@ var common = require('../common'); var assert = require('assert'); -var tracing = require('tracing'); -var once = 0; - -var results = []; -var handlers = { - after: function() { - throw 1; +var implementations = [ + function (fn) { + Promise.resolve().then(fn); }, - error: function(stor, err) { - // Error handler must be called exactly *once*. - once++; - assert.equal(err, 1); - return true; - } -} + function (fn) { + var obj = {}; + + Object.observe(obj, fn); -var key = tracing.addAsyncListener(handlers); + obj.a = 1; + } +]; -var uncaughtFired = false; -process.on('uncaughtException', function(err) { - uncaughtFired = true; +var expected = 0; +var done = 0; - assert.equal(once, 1); +process.on('exit', function () { + assert.equal(done, expected); }); -process.nextTick(function() { }); +function test (scheduleMicrotask) { + var nextTickCalled = false; + expected++; + + scheduleMicrotask(function () { + process.nextTick(function () { + nextTickCalled = true; + }); -tracing.removeAsyncListener(key); + setTimeout(function () { + assert(nextTickCalled); + done++; + }, 0); + }); +} -process.on('exit', function(code) { - // If the exit code isn't ok then return early to throw the stack that - // caused the bad return code. 
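Both scheduling strategies that these microtask tests iterate over reduce to the same primitive, shown below as a sketch. Note that `Object.observe` existed in the V8 shipped with this release but was later withdrawn from the language, so only the Promise form aged well.

```js
// Each helper queues fn on V8's microtask queue; the queue drains after the
// current JS stack unwinds, before control returns to the event loop.
function viaPromise(fn) {
  Promise.resolve().then(fn);
}

function viaObserve(fn) {
  var obj = {};
  Object.observe(obj, fn); // the change notification is delivered as a microtask
  obj.a = 1;
}
```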
- if (code !== 0) - return; +// first tick case +implementations.forEach(test); - assert.ok(uncaughtFired); - console.log('ok'); -}); +// tick callback case +setTimeout(function () { + implementations.forEach(function (impl) { + process.nextTick(test.bind(null, impl)); + }); +}, 0); diff --git a/test/simple/test-asynclistener-remove-before.js b/test/simple/test-microtask-queue-run-domain.js similarity index 69% rename from test/simple/test-asynclistener-remove-before.js rename to test/simple/test-microtask-queue-run-domain.js index bc306dbc3c8..2b3b76315ef 100644 --- a/test/simple/test-asynclistener-remove-before.js +++ b/test/simple/test-microtask-queue-run-domain.js @@ -21,33 +21,39 @@ var common = require('../common'); var assert = require('assert'); -var tracing = require('tracing'); -var set = 0; - -var asyncNoHandleError = { - before: function() { - set++; - }, - after: function() { - set++; - } +var domain = require('domain'); + +function enqueueMicrotask(fn) { + Promise.resolve().then(fn); } -var key = tracing.addAsyncListener(asyncNoHandleError); +var done = 0; + +process.on('exit', function() { + assert.equal(done, 2); +}); -tracing.removeAsyncListener(key); +// no nextTick, microtask +setTimeout(function() { + enqueueMicrotask(function() { + done++; + }); +}, 0); -process.nextTick(function() { }); -process.on('exit', function(code) { - // If the exit code isn't ok then return early to throw the stack that - // caused the bad return code. - if (code !== 0) - return; +// no nextTick, microtask with nextTick +setTimeout(function() { + var called = false; - // The async handler should never be called. - assert.equal(set, 0); - console.log('ok'); -}); + enqueueMicrotask(function() { + process.nextTick(function() { + called = true; + }); + }); + setTimeout(function() { + if (called) + done++; + }, 0); +}, 0); diff --git a/test/simple/test-microtask-queue-run-immediate-domain.js b/test/simple/test-microtask-queue-run-immediate-domain.js new file mode 100644 index 00000000000..8f95fadd586 --- /dev/null +++ b/test/simple/test-microtask-queue-run-immediate-domain.js @@ -0,0 +1,59 @@ +// Copyright Joyent, Inc. and other Node contributors. +// +// Permission is hereby granted, free of charge, to any person obtaining a +// copy of this software and associated documentation files (the +// "Software"), to deal in the Software without restriction, including +// without limitation the rights to use, copy, modify, merge, publish, +// distribute, sublicense, and/or sell copies of the Software, and to permit +// persons to whom the Software is furnished to do so, subject to the +// following conditions: +// +// The above copyright notice and this permission notice shall be included +// in all copies or substantial portions of the Software. +// +// THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS +// OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF +// MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN +// NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, +// DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR +// OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE +// USE OR OTHER DEALINGS IN THE SOFTWARE. 
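The invariant every one of these -run files asserts, reduced to its smallest form: a `process.nextTick` callback queued from inside a microtask still runs before the next timer fires. A sketch:

```js
Promise.resolve().then(function() {
  process.nextTick(function() {
    console.log('nextTick queued from a microtask'); // printed first
  });
});

setTimeout(function() {
  console.log('timer'); // printed second
}, 0);
```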
+ +var common = require('../common'); +var assert = require('assert'); +var domain = require('domain'); + +function enqueueMicrotask(fn) { + Promise.resolve().then(fn); +} + +var done = 0; + +process.on('exit', function() { + assert.equal(done, 2); +}); + +// no nextTick, microtask +setImmediate(function() { + enqueueMicrotask(function() { + done++; + }); +}); + + +// no nextTick, microtask with nextTick +setImmediate(function() { + var called = false; + + enqueueMicrotask(function() { + process.nextTick(function() { + called = true; + }); + }); + + setImmediate(function() { + if (called) + done++; + }); + +}); diff --git a/test/simple/test-asynclistener-run-inst-once.js b/test/simple/test-microtask-queue-run-immediate.js similarity index 75% rename from test/simple/test-asynclistener-run-inst-once.js rename to test/simple/test-microtask-queue-run-immediate.js index b1492528eed..b5423eb6b4f 100644 --- a/test/simple/test-asynclistener-run-inst-once.js +++ b/test/simple/test-microtask-queue-run-immediate.js @@ -21,26 +21,38 @@ var common = require('../common'); var assert = require('assert'); -var tracing = require('tracing'); -var cntr = 0; -var al = tracing.createAsyncListener({ - create: function() { cntr++; }, -}); +function enqueueMicrotask(fn) { + Promise.resolve().then(fn); +} + +var done = 0; process.on('exit', function() { - assert.equal(cntr, 4); - console.log('ok'); + assert.equal(done, 2); +}); + +// no nextTick, microtask +setImmediate(function() { + enqueueMicrotask(function() { + done++; + }); }); -tracing.addAsyncListener(al); -process.nextTick(function() { - tracing.addAsyncListener(al); - process.nextTick(function() { - tracing.addAsyncListener(al); +// no nextTick, microtask with nextTick +setImmediate(function() { + var called = false; + + enqueueMicrotask(function() { process.nextTick(function() { - process.nextTick(function() { }); + called = true; }); }); + + setImmediate(function() { + if (called) + done++; + }); + }); diff --git a/test/simple/test-microtask-queue-run.js b/test/simple/test-microtask-queue-run.js new file mode 100644 index 00000000000..c4138454f54 --- /dev/null +++ b/test/simple/test-microtask-queue-run.js @@ -0,0 +1,58 @@ +// Copyright Joyent, Inc. and other Node contributors. +// +// Permission is hereby granted, free of charge, to any person obtaining a +// copy of this software and associated documentation files (the +// "Software"), to deal in the Software without restriction, including +// without limitation the rights to use, copy, modify, merge, publish, +// distribute, sublicense, and/or sell copies of the Software, and to permit +// persons to whom the Software is furnished to do so, subject to the +// following conditions: +// +// The above copyright notice and this permission notice shall be included +// in all copies or substantial portions of the Software. +// +// THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS +// OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF +// MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN +// NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, +// DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR +// OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE +// USE OR OTHER DEALINGS IN THE SOFTWARE. 
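The -immediate variants repeat the same assertions from the check phase of the event loop instead of the timers phase. The two phases are kept in separate test files, presumably because their relative order on the first loop turn is not guaranteed:

```js
setTimeout(function() { console.log('timers phase'); }, 0);
setImmediate(function() { console.log('check phase'); });
// Run from a main script, either line may print first, so asserting an
// order between the two schedulers would make the tests flaky.
```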
+ +var common = require('../common'); +var assert = require('assert'); + +function enqueueMicrotask(fn) { + Promise.resolve().then(fn); +} + +var done = 0; + +process.on('exit', function() { + assert.equal(done, 2); +}); + +// no nextTick, microtask +setTimeout(function() { + enqueueMicrotask(function() { + done++; + }); +}, 0); + + +// no nextTick, microtask with nextTick +setTimeout(function() { + var called = false; + + enqueueMicrotask(function() { + process.nextTick(function() { + called = true; + }); + }); + + setTimeout(function() { + if (called) + done++; + }, 0); + +}, 0); diff --git a/test/simple/test-module-loading-error.js b/test/simple/test-module-loading-error.js index c1afd58e077..beddb5d37b9 100644 --- a/test/simple/test-module-loading-error.js +++ b/test/simple/test-module-loading-error.js @@ -42,3 +42,15 @@ try { } catch (e) { assert.notEqual(e.toString().indexOf(dlerror_msg), -1); } + +try { + require(); +} catch (e) { + assert.notEqual(e.toString().indexOf('missing path'), -1); +} + +try { + require({}); +} catch (e) { + assert.notEqual(e.toString().indexOf('path must be a string'), -1); +} diff --git a/test/simple/test-module-nodemodulepaths.js b/test/simple/test-module-nodemodulepaths.js index 3d48d99ab94..af44840b4b4 100644 --- a/test/simple/test-module-nodemodulepaths.js +++ b/test/simple/test-module-nodemodulepaths.js @@ -21,6 +21,7 @@ var common = require('../common'); var assert = require('assert'); +var path = require('path'); var module = require('module'); @@ -29,7 +30,7 @@ var isWindows = process.platform === 'win32'; var file, delimiter, paths; if (isWindows) { - file = 'C:\\Users\\Rocko Artischocko\\node_stuff\\foo'; + file = path.normalize('C:\\Users\\Rocko Artischocko\\node_stuff\\foo'); delimiter = '\\' } else { file = '/usr/test/lib/node_modules/npm/foo'; @@ -39,4 +40,4 @@ if (isWindows) { paths = module._nodeModulePaths(file); assert.ok(paths.indexOf(file + delimiter + 'node_modules') !== -1); -assert.ok(Array.isArray(paths)); \ No newline at end of file +assert.ok(Array.isArray(paths)); diff --git a/test/simple/test-net-localerror.js b/test/simple/test-net-localerror.js index c4d04aa921a..d04d9c70720 100644 --- a/test/simple/test-net-localerror.js +++ b/test/simple/test-net-localerror.js @@ -23,39 +23,30 @@ var common = require('../common'); var assert = require('assert'); var net = require('net'); -var server = net.createServer(function(socket) { - assert.ok(false, 'no clients should connect'); -}).listen(common.PORT).on('listening', function() { - server.unref(); + connect({ + host: 'localhost', + port: common.PORT, + localPort: 'foobar', + }, 'localPort should be a number: foobar'); - function test1(next) { - connect({ - host: '127.0.0.1', - port: common.PORT, - localPort: 'foobar', - }, - 'localPort should be a number: foobar', - next); - } + connect({ + host: 'localhost', + port: common.PORT, + localAddress: 'foobar', + }, 'localAddress should be a valid IP: foobar'); - function test2(next) { - connect({ - host: '127.0.0.1', - port: common.PORT, - localAddress: 'foobar', - }, - 'localAddress should be a valid IP: foobar', - next) - } + connect({ + host: 'localhost', + port: 65536 + }, 'port should be > 0 and < 65536: 65536'); - test1(test2); -}) + connect({ + host: 'localhost', + port: 0 + }, 'port should be > 0 and < 65536: 0'); -function connect(opts, msg, cb) { - var client = net.connect(opts).on('connect', function() { - assert.ok(false, 'we should never connect'); - }).on('error', function(err) { - assert.strictEqual(err.message, msg); - if 
(cb) cb(); - }); +function connect(opts, msg) { + assert.throws(function() { + var client = net.connect(opts); + }, msg); } diff --git a/test/simple/test-asynclistener-remove-in-before.js b/test/simple/test-net-server-connections.js similarity index 75% rename from test/simple/test-asynclistener-remove-in-before.js rename to test/simple/test-net-server-connections.js index 06f470ce075..423fcb8e382 100644 --- a/test/simple/test-asynclistener-remove-in-before.js +++ b/test/simple/test-net-server-connections.js @@ -21,23 +21,12 @@ var common = require('../common'); var assert = require('assert'); -var tracing = require('tracing'); -var done = false; -var callbacks = { - before: function() { - tracing.removeAsyncListener(listener); - }, - after: function() { - done = true; - } -}; -var listener = tracing.addAsyncListener(callbacks); +var net = require('net'); -process.nextTick(function() {}); +// test that server.connections property is no longer enumerable now that it +// has been marked as deprecated -process.on('exit', function(status) { - tracing.removeAsyncListener(listener); - assert.equal(status, 0); - assert.ok(done); -}); +var server = new net.Server(); + +assert.equal(Object.keys(server).indexOf('connections'), -1); diff --git a/test/simple/test-asynclistener-error-multiple-mix.js b/test/simple/test-net-server-pause-on-connect.js similarity index 57% rename from test/simple/test-asynclistener-error-multiple-mix.js rename to test/simple/test-net-server-pause-on-connect.js index 83716bc281c..3a8255e8f77 100644 --- a/test/simple/test-asynclistener-error-multiple-mix.js +++ b/test/simple/test-net-server-pause-on-connect.js @@ -21,45 +21,39 @@ var common = require('../common'); var assert = require('assert'); -var tracing = require('tracing'); - -var results = []; -var asyncNoHandleError = { - error: function(stor) { - results.push(1); - } -}; - -var asyncHandleError = { - error: function(stor) { - results.push(0); - return true; - } -}; - -var listeners = [ - tracing.addAsyncListener(asyncHandleError), - tracing.addAsyncListener(asyncNoHandleError) -]; - -// Even if an error handler returns true, both should fire. -process.nextTick(function() { - throw new Error(); +var net = require('net'); +var msg = 'test'; +var stopped = true; +var server1 = net.createServer({pauseOnConnect: true}, function(socket) { + socket.on('data', function(data) { + if (stopped) { + assert(false, 'data event should not have happened yet'); + } + + assert.equal(data.toString(), msg, 'invalid data received'); + socket.end(); + server1.close(); + }); + + setTimeout(function() { + assert.equal(socket.bytesRead, 0, 'no data should have been read yet'); + socket.resume(); + stopped = false; + }, 3000); }); -tracing.removeAsyncListener(listeners[0]); -tracing.removeAsyncListener(listeners[1]); - -process.on('exit', function(code) { - // If the exit code isn't ok then return early to throw the stack that - // caused the bad return code. - if (code !== 0) - return; +var server2 = net.createServer({pauseOnConnect: false}, function(socket) { + socket.on('data', function(data) { + assert.equal(data.toString(), msg, 'invalid data received'); + socket.end(); + server2.close(); + }); +}); - // Mixed handling of errors should propagate to all listeners. 
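server1 in the renamed test exercises the new `pauseOnConnect` server option: the incoming socket is accepted but not read until user code resumes it, so it can be handed off or wired up without losing data. Typical use looks roughly like this sketch:

```js
var net = require('net');

var server = net.createServer({pauseOnConnect: true}, function(socket) {
  socket.on('data', function(chunk) { // does not fire while paused
    console.log('read %d bytes', chunk.length);
    socket.end();
    server.close();
  });
  setTimeout(function() {
    socket.resume(); // reads (and 'data' events) start here
  }, 100);
});

server.listen(0, function() {
  net.createConnection({port: server.address().port}).write('hi');
});
```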
- assert.equal(results[0], 0); - assert.equal(results[1], 1); - assert.equal(results.length, 2); +server1.listen(common.PORT, function() { + net.createConnection({port: common.PORT}).write(msg); +}); - console.log('ok'); +server2.listen(common.PORT + 1, function() { + net.createConnection({port: common.PORT + 1}).write(msg); }); diff --git a/test/simple/test-path-parse-format.js b/test/simple/test-path-parse-format.js new file mode 100644 index 00000000000..4f6e5af45c0 --- /dev/null +++ b/test/simple/test-path-parse-format.js @@ -0,0 +1,99 @@ +// Copyright Joyent, Inc. and other Node contributors. +// +// Permission is hereby granted, free of charge, to any person obtaining a +// copy of this software and associated documentation files (the +// "Software"), to deal in the Software without restriction, including +// without limitation the rights to use, copy, modify, merge, publish, +// distribute, sublicense, and/or sell copies of the Software, and to permit +// persons to whom the Software is furnished to do so, subject to the +// following conditions: +// +// The above copyright notice and this permission notice shall be included +// in all copies or substantial portions of the Software. +// +// THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS +// OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF +// MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN +// NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, +// DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR +// OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE +// USE OR OTHER DEALINGS IN THE SOFTWARE. + +var assert = require('assert'); +var path = require('path'); + +var winPaths = [ + 'C:\\path\\dir\\index.html', + 'C:\\another_path\\DIR\\1\\2\\33\\index', + 'another_path\\DIR with spaces\\1\\2\\33\\index', + '\\foo\\C:', + 'file', + '.\\file', + + // unc + '\\\\server\\share\\file_path', + '\\\\server two\\shared folder\\file path.zip', + '\\\\teela\\admin$\\system32', + '\\\\?\\UNC\\server\\share' + +]; + +var unixPaths = [ + '/home/user/dir/file.txt', + '/home/user/a dir/another File.zip', + '/home/user/a dir//another&File.', + '/home/user/a$$$dir//another File.zip', + 'user/dir/another File.zip', + 'file', + '.\\file', + './file', + 'C:\\foo' +]; + +var errors = [ + {method: 'parse', input: [null], message: /Parameter 'pathString' must be a string, not/}, + {method: 'parse', input: [{}], message: /Parameter 'pathString' must be a string, not object/}, + {method: 'parse', input: [true], message: /Parameter 'pathString' must be a string, not boolean/}, + {method: 'parse', input: [1], message: /Parameter 'pathString' must be a string, not number/}, + {method: 'parse', input: [], message: /Parameter 'pathString' must be a string, not undefined/}, + // {method: 'parse', input: [''], message: /Invalid path/}, // omitted because it's hard to trigger! 
+ {method: 'format', input: [null], message: /Parameter 'pathObject' must be an object, not/}, + {method: 'format', input: [''], message: /Parameter 'pathObject' must be an object, not string/}, + {method: 'format', input: [true], message: /Parameter 'pathObject' must be an object, not boolean/}, + {method: 'format', input: [1], message: /Parameter 'pathObject' must be an object, not number/}, + {method: 'format', input: [{root: true}], message: /'pathObject.root' must be a string or undefined, not boolean/}, + {method: 'format', input: [{root: 12}], message: /'pathObject.root' must be a string or undefined, not number/}, +]; + +check(path.win32, winPaths); +check(path.posix, unixPaths); +checkErrors(path.win32); +checkErrors(path.posix); + +function checkErrors(path) { + errors.forEach(function(errorCase) { + try { + path[errorCase.method].apply(path, errorCase.input); + } catch(err) { + assert.ok(err instanceof TypeError); + assert.ok( + errorCase.message.test(err.message), + 'expected ' + errorCase.message + ' to match ' + err.message + ); + return; + } + + assert.fail('should have thrown'); + }); +} + + +function check(path, paths) { + paths.forEach(function(element, index, array) { + var output = path.parse(element); + assert.strictEqual(path.format(output), element); + assert.strictEqual(output.dir, output.dir ? path.dirname(element) : ''); + assert.strictEqual(output.base, path.basename(element)); + assert.strictEqual(output.ext, path.extname(element)); + }); +} diff --git a/test/simple/test-path.js b/test/simple/test-path.js index cdeebcd0697..cdd59bcd5d9 100644 --- a/test/simple/test-path.js +++ b/test/simple/test-path.js @@ -37,22 +37,19 @@ assert.equal(path.basename('basename.ext'), 'basename.ext'); assert.equal(path.basename('basename.ext/'), 'basename.ext'); assert.equal(path.basename('basename.ext//'), 'basename.ext'); -if (isWindows) { - // On Windows a backslash acts as a path separator. - assert.equal(path.basename('\\dir\\basename.ext'), 'basename.ext'); - assert.equal(path.basename('\\basename.ext'), 'basename.ext'); - assert.equal(path.basename('basename.ext'), 'basename.ext'); - assert.equal(path.basename('basename.ext\\'), 'basename.ext'); - assert.equal(path.basename('basename.ext\\\\'), 'basename.ext'); +// On Windows a backslash acts as a path separator. +assert.equal(path.win32.basename('\\dir\\basename.ext'), 'basename.ext'); +assert.equal(path.win32.basename('\\basename.ext'), 'basename.ext'); +assert.equal(path.win32.basename('basename.ext'), 'basename.ext'); +assert.equal(path.win32.basename('basename.ext\\'), 'basename.ext'); +assert.equal(path.win32.basename('basename.ext\\\\'), 'basename.ext'); -} else { - // On unix a backslash is just treated as any other character. - assert.equal(path.basename('\\dir\\basename.ext'), '\\dir\\basename.ext'); - assert.equal(path.basename('\\basename.ext'), '\\basename.ext'); - assert.equal(path.basename('basename.ext'), 'basename.ext'); - assert.equal(path.basename('basename.ext\\'), 'basename.ext\\'); - assert.equal(path.basename('basename.ext\\\\'), 'basename.ext\\\\'); -} +// On unix a backslash is just treated as any other character. 
+assert.equal(path.posix.basename('\\dir\\basename.ext'), '\\dir\\basename.ext'); +assert.equal(path.posix.basename('\\basename.ext'), '\\basename.ext'); +assert.equal(path.posix.basename('basename.ext'), 'basename.ext'); +assert.equal(path.posix.basename('basename.ext\\'), 'basename.ext\\'); +assert.equal(path.posix.basename('basename.ext\\\\'), 'basename.ext\\\\'); // POSIX filenames may include control characters // c.f. http://www.dwheeler.com/essays/fixing-unix-linux-filenames.html @@ -73,35 +70,33 @@ assert.equal(path.dirname(''), '.'); assert.equal(path.dirname('/'), '/'); assert.equal(path.dirname('////'), '/'); -if (isWindows) { - assert.equal(path.dirname('c:\\'), 'c:\\'); - assert.equal(path.dirname('c:\\foo'), 'c:\\'); - assert.equal(path.dirname('c:\\foo\\'), 'c:\\'); - assert.equal(path.dirname('c:\\foo\\bar'), 'c:\\foo'); - assert.equal(path.dirname('c:\\foo\\bar\\'), 'c:\\foo'); - assert.equal(path.dirname('c:\\foo\\bar\\baz'), 'c:\\foo\\bar'); - assert.equal(path.dirname('\\'), '\\'); - assert.equal(path.dirname('\\foo'), '\\'); - assert.equal(path.dirname('\\foo\\'), '\\'); - assert.equal(path.dirname('\\foo\\bar'), '\\foo'); - assert.equal(path.dirname('\\foo\\bar\\'), '\\foo'); - assert.equal(path.dirname('\\foo\\bar\\baz'), '\\foo\\bar'); - assert.equal(path.dirname('c:'), 'c:'); - assert.equal(path.dirname('c:foo'), 'c:'); - assert.equal(path.dirname('c:foo\\'), 'c:'); - assert.equal(path.dirname('c:foo\\bar'), 'c:foo'); - assert.equal(path.dirname('c:foo\\bar\\'), 'c:foo'); - assert.equal(path.dirname('c:foo\\bar\\baz'), 'c:foo\\bar'); - assert.equal(path.dirname('\\\\unc\\share'), '\\\\unc\\share'); - assert.equal(path.dirname('\\\\unc\\share\\foo'), '\\\\unc\\share\\'); - assert.equal(path.dirname('\\\\unc\\share\\foo\\'), '\\\\unc\\share\\'); - assert.equal(path.dirname('\\\\unc\\share\\foo\\bar'), - '\\\\unc\\share\\foo'); - assert.equal(path.dirname('\\\\unc\\share\\foo\\bar\\'), - '\\\\unc\\share\\foo'); - assert.equal(path.dirname('\\\\unc\\share\\foo\\bar\\baz'), - '\\\\unc\\share\\foo\\bar'); -} +assert.equal(path.win32.dirname('c:\\'), 'c:\\'); +assert.equal(path.win32.dirname('c:\\foo'), 'c:\\'); +assert.equal(path.win32.dirname('c:\\foo\\'), 'c:\\'); +assert.equal(path.win32.dirname('c:\\foo\\bar'), 'c:\\foo'); +assert.equal(path.win32.dirname('c:\\foo\\bar\\'), 'c:\\foo'); +assert.equal(path.win32.dirname('c:\\foo\\bar\\baz'), 'c:\\foo\\bar'); +assert.equal(path.win32.dirname('\\'), '\\'); +assert.equal(path.win32.dirname('\\foo'), '\\'); +assert.equal(path.win32.dirname('\\foo\\'), '\\'); +assert.equal(path.win32.dirname('\\foo\\bar'), '\\foo'); +assert.equal(path.win32.dirname('\\foo\\bar\\'), '\\foo'); +assert.equal(path.win32.dirname('\\foo\\bar\\baz'), '\\foo\\bar'); +assert.equal(path.win32.dirname('c:'), 'c:'); +assert.equal(path.win32.dirname('c:foo'), 'c:'); +assert.equal(path.win32.dirname('c:foo\\'), 'c:'); +assert.equal(path.win32.dirname('c:foo\\bar'), 'c:foo'); +assert.equal(path.win32.dirname('c:foo\\bar\\'), 'c:foo'); +assert.equal(path.win32.dirname('c:foo\\bar\\baz'), 'c:foo\\bar'); +assert.equal(path.win32.dirname('\\\\unc\\share'), '\\\\unc\\share'); +assert.equal(path.win32.dirname('\\\\unc\\share\\foo'), '\\\\unc\\share\\'); +assert.equal(path.win32.dirname('\\\\unc\\share\\foo\\'), '\\\\unc\\share\\'); +assert.equal(path.win32.dirname('\\\\unc\\share\\foo\\bar'), + '\\\\unc\\share\\foo'); +assert.equal(path.win32.dirname('\\\\unc\\share\\foo\\bar\\'), + '\\\\unc\\share\\foo'); 
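The new test-path-parse-format.js above pins down the round-trip contract of the `parse`/`format` pair: `path.format(path.parse(p)) === p`, with the pieces agreeing with `dirname`/`basename`/`extname`. A sketch (field values inferred from those assertions):

```js
var path = require('path');

var parts = path.posix.parse('/home/user/dir/file.txt');
// parts.dir  === '/home/user/dir'   (path.dirname of the input)
// parts.base === 'file.txt'         (path.basename)
// parts.ext  === '.txt'             (path.extname)
console.log(path.posix.format(parts)); // '/home/user/dir/file.txt'
```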
+assert.equal(path.win32.dirname('\\\\unc\\share\\foo\\bar\\baz'),
+             '\\\\unc\\share\\foo\\bar');
 
 
 assert.equal(path.extname(''), '');
@@ -146,28 +141,25 @@ assert.equal(path.extname('file//'), '');
 assert.equal(path.extname('file./'), '.');
 assert.equal(path.extname('file.//'), '.');
 
-if (isWindows) {
-  // On windows, backspace is a path separator.
-  assert.equal(path.extname('.\\'), '');
-  assert.equal(path.extname('..\\'), '');
-  assert.equal(path.extname('file.ext\\'), '.ext');
-  assert.equal(path.extname('file.ext\\\\'), '.ext');
-  assert.equal(path.extname('file\\'), '');
-  assert.equal(path.extname('file\\\\'), '');
-  assert.equal(path.extname('file.\\'), '.');
-  assert.equal(path.extname('file.\\\\'), '.');
+// On Windows, a backslash is a path separator.
+assert.equal(path.win32.extname('.\\'), '');
+assert.equal(path.win32.extname('..\\'), '');
+assert.equal(path.win32.extname('file.ext\\'), '.ext');
+assert.equal(path.win32.extname('file.ext\\\\'), '.ext');
+assert.equal(path.win32.extname('file\\'), '');
+assert.equal(path.win32.extname('file\\\\'), '');
+assert.equal(path.win32.extname('file.\\'), '.');
+assert.equal(path.win32.extname('file.\\\\'), '.');
 
-} else {
-  // On unix, backspace is a valid name component like any other character.
-  assert.equal(path.extname('.\\'), '');
-  assert.equal(path.extname('..\\'), '.\\');
-  assert.equal(path.extname('file.ext\\'), '.ext\\');
-  assert.equal(path.extname('file.ext\\\\'), '.ext\\\\');
-  assert.equal(path.extname('file\\'), '');
-  assert.equal(path.extname('file\\\\'), '');
-  assert.equal(path.extname('file.\\'), '.\\');
-  assert.equal(path.extname('file.\\\\'), '.\\\\');
-}
+// On unix, a backslash is a valid name component like any other character.
+assert.equal(path.posix.extname('.\\'), '');
+assert.equal(path.posix.extname('..\\'), '.\\');
+assert.equal(path.posix.extname('file.ext\\'), '.ext\\');
+assert.equal(path.posix.extname('file.ext\\\\'), '.ext\\\\');
+assert.equal(path.posix.extname('file\\'), '');
+assert.equal(path.posix.extname('file\\\\'), '');
+assert.equal(path.posix.extname('file.\\'), '.\\');
+assert.equal(path.posix.extname('file.\\\\'), '.\\\\');
 
 // path.join tests
 var failures = [];
@@ -294,23 +286,21 @@ joinThrowTests.forEach(function(test) {
 
 // path normalize tests
-if (isWindows) {
-  assert.equal(path.normalize('./fixtures///b/../b/c.js'),
-               'fixtures\\b\\c.js');
-  assert.equal(path.normalize('/foo/../../../bar'), '\\bar');
-  assert.equal(path.normalize('a//b//../b'), 'a\\b');
-  assert.equal(path.normalize('a//b//./c'), 'a\\b\\c');
-  assert.equal(path.normalize('a//b//.'), 'a\\b');
-  assert.equal(path.normalize('//server/share/dir/file.ext'),
-               '\\\\server\\share\\dir\\file.ext');
-} else {
-  assert.equal(path.normalize('./fixtures///b/../b/c.js'),
-               'fixtures/b/c.js');
-  assert.equal(path.normalize('/foo/../../../bar'), '/bar');
-  assert.equal(path.normalize('a//b//../b'), 'a/b');
-  assert.equal(path.normalize('a//b//./c'), 'a/b/c');
-  assert.equal(path.normalize('a//b//.'), 'a/b');
-}
+assert.equal(path.win32.normalize('./fixtures///b/../b/c.js'),
+             'fixtures\\b\\c.js');
+assert.equal(path.win32.normalize('/foo/../../../bar'), '\\bar');
+assert.equal(path.win32.normalize('a//b//../b'), 'a\\b');
+assert.equal(path.win32.normalize('a//b//./c'), 'a\\b\\c');
+assert.equal(path.win32.normalize('a//b//.'), 'a\\b');
+assert.equal(path.win32.normalize('//server/share/dir/file.ext'),
+             '\\\\server\\share\\dir\\file.ext');
+
+assert.equal(path.posix.normalize('./fixtures///b/../b/c.js'),
+             'fixtures/b/c.js');
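Dropping the isWindows branches works because both implementations are now exported unconditionally; only the top-level binding is platform-dependent, as the deepEqual checks at the end of this file confirm. For example:

```js
var path = require('path');

console.log(path.win32.normalize('a//b//../b')); // 'a\\b'
console.log(path.posix.normalize('a//b//../b')); // 'a/b'
// `path` itself is whichever of the two matches process.platform.
```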
+assert.equal(path.posix.normalize('/foo/../../../bar'), '/bar'); +assert.equal(path.posix.normalize('a//b//../b'), 'a/b'); +assert.equal(path.posix.normalize('a//b//./c'), 'a/b/c'); +assert.equal(path.posix.normalize('a//b//.'), 'a/b'); // path.resolve tests if (isWindows) { @@ -321,7 +311,7 @@ if (isWindows) { [['c:/ignore', 'd:\\a/b\\c/d', '\\e.exe'], 'd:\\e.exe'], [['c:/ignore', 'c:/some/file'], 'c:\\some\\file'], [['d:/ignore', 'd:some/dir//'], 'd:\\ignore\\some\\dir'], - [['.'], process.cwd()], + [['.'], path.normalize(process.cwd())], [['//server/share', '..', 'relative\\'], '\\\\server\\share\\relative'], [['c:/', '//'], 'c:\\'], [['c:/', '//dir'], 'c:\\dir'], @@ -352,21 +342,19 @@ resolveTests.forEach(function(test) { assert.equal(failures.length, 0, failures.join('')); // path.isAbsolute tests -if (isWindows) { - assert.equal(path.isAbsolute('//server/file'), true); - assert.equal(path.isAbsolute('\\\\server\\file'), true); - assert.equal(path.isAbsolute('C:/Users/'), true); - assert.equal(path.isAbsolute('C:\\Users\\'), true); - assert.equal(path.isAbsolute('C:cwd/another'), false); - assert.equal(path.isAbsolute('C:cwd\\another'), false); - assert.equal(path.isAbsolute('directory/directory'), false); - assert.equal(path.isAbsolute('directory\\directory'), false); -} else { - assert.equal(path.isAbsolute('/home/foo'), true); - assert.equal(path.isAbsolute('/home/foo/..'), true); - assert.equal(path.isAbsolute('bar/'), false); - assert.equal(path.isAbsolute('./baz'), false); -} +assert.equal(path.win32.isAbsolute('//server/file'), true); +assert.equal(path.win32.isAbsolute('\\\\server\\file'), true); +assert.equal(path.win32.isAbsolute('C:/Users/'), true); +assert.equal(path.win32.isAbsolute('C:\\Users\\'), true); +assert.equal(path.win32.isAbsolute('C:cwd/another'), false); +assert.equal(path.win32.isAbsolute('C:cwd\\another'), false); +assert.equal(path.win32.isAbsolute('directory/directory'), false); +assert.equal(path.win32.isAbsolute('directory\\directory'), false); + +assert.equal(path.posix.isAbsolute('/home/foo'), true); +assert.equal(path.posix.isAbsolute('/home/foo/..'), true); +assert.equal(path.posix.isAbsolute('bar/'), false); +assert.equal(path.posix.isAbsolute('./baz'), false); // path.relative tests if (isWindows) { @@ -405,20 +393,20 @@ relativeTests.forEach(function(test) { }); assert.equal(failures.length, 0, failures.join('')); -// path.sep tests -if (isWindows) { - // windows - assert.equal(path.sep, '\\'); -} else { - // posix - assert.equal(path.sep, '/'); -} +// windows +assert.equal(path.win32.sep, '\\'); +// posix +assert.equal(path.posix.sep, '/'); // path.delimiter tests -if (isWindows) { - // windows - assert.equal(path.delimiter, ';'); -} else { - // posix - assert.equal(path.delimiter, ':'); -} +// windows +assert.equal(path.win32.delimiter, ';'); + +// posix +assert.equal(path.posix.delimiter, ':'); + + +if (isWindows) + assert.deepEqual(path, path.win32, 'should be win32 path module'); +else + assert.deepEqual(path, path.posix, 'should be posix path module'); diff --git a/test/simple/test-process-active-wraps.js b/test/simple/test-process-active-wraps.js index 63fc218debc..ae129868f3f 100644 --- a/test/simple/test-process-active-wraps.js +++ b/test/simple/test-process-active-wraps.js @@ -41,9 +41,16 @@ var handles = []; })(); (function() { + function onlookup() { + setImmediate(function() { + assert.equal(process._getActiveRequests().length, 0); + }); + }; + expect(1, 0); var conn = net.createConnection(common.PORT); - conn.on('error', function() 
{ /* ignore */ }); + conn.on('lookup', onlookup); + conn.on('error', function() { assert(false); }); expect(2, 1); conn.destroy(); expect(2, 1); // client handle doesn't shut down until next tick @@ -52,10 +59,21 @@ var handles = []; (function() { var n = 0; + handles.forEach(function(handle) { handle.once('close', onclose); }); function onclose() { - if (++n === handles.length) setImmediate(expect, 0, 0); + if (++n === handles.length) { + // Allow the server handle a few loop iterations to wind down. + // This test is highly dependent on the implementation of handle + // closing. If this test breaks in the future, it does not + // necessarily mean that Node is broken. + setImmediate(function() { + setImmediate(function() { + assert.equal(process._getActiveHandles().length, 0); + }); + }); + } } })(); diff --git a/test/simple/test-process-kill-pid.js b/test/simple/test-process-kill-pid.js index e45adf1f6b7..6e4e7d83a7d 100644 --- a/test/simple/test-process-kill-pid.js +++ b/test/simple/test-process-kill-pid.js @@ -23,15 +23,6 @@ var common = require('../common'); var assert = require('assert'); -var pass; -var wait = setInterval(function(){}, 1000); - -process.on('exit', function(code) { - if (code === 0) { - assert.ok(pass); - } -}); - // test variants of pid // // null: TypeError @@ -43,58 +34,55 @@ process.on('exit', function(code) { // // Nan, Infinity, -Infinity: TypeError // -// 0: our group process +// 0, String(0): our group process // -// process.pid: ourself +// process.pid, String(process.pid): ourself assert.throws(function() { process.kill('SIGTERM'); }, TypeError); -assert.throws(function() { process.kill(String(process.pid)); }, TypeError); assert.throws(function() { process.kill(null); }, TypeError); assert.throws(function() { process.kill(undefined); }, TypeError); assert.throws(function() { process.kill(+'not a number'); }, TypeError); assert.throws(function() { process.kill(1/0); }, TypeError); assert.throws(function() { process.kill(-1/0); }, TypeError); -/* Sending SIGHUP is not supported on Windows */ -if (process.platform === 'win32') { - pass = true; - clearInterval(wait); - return; -} +// Test kill argument processing in valid cases. +// +// Monkey patch _kill so that we don't actually send any signals, particularly +// that we don't kill our process group, or try to actually send ANY signals on +// windows, which doesn't support them. +function kill(tryPid, trySig, expectPid, expectSig) { + var getPid; + var getSig; + var origKill = process._kill; + process._kill = function(pid, sig) { + getPid = pid; + getSig = sig; -process.once('SIGHUP', function() { - process.once('SIGHUP', function() { - pass = true; - clearInterval(wait); - }); - process.kill(process.pid, 'SIGHUP'); -}); + // un-monkey patch process._kill + process._kill = origKill; + }; + process.kill(tryPid, trySig); -/* - * Monkey patch _kill so that, in the specific case - * when we want to test sending a signal to pid 0, - * we don't actually send the signal to the process group. - * Otherwise, it could cause a lot of trouble, like terminating - * the test runner, or any other process that belongs to the - * same process group as this test. - */ -var origKill = process._kill; -process._kill = function(pid, sig) { - /* - * make sure we patch _kill only when - * we want to test sending a signal - * to the process group. 
- */
-  assert.strictEqual(pid, 0);
+  assert.equal(getPid, expectPid);
+  assert.equal(getSig, expectSig);
+}
 
-  // make sure that _kill is passed the correct args types
-  assert(typeof pid === 'number');
-  assert(typeof sig === 'number');
+// Note that SIGHUP and SIGTERM map to 1 and 15 respectively, even on Windows
+// (for Windows, libuv maps 1 and 15 to the correct behaviour).
 
-  // un-monkey patch process._kill
-  process._kill = origKill;
-  process._kill(process.pid, sig);
-}
+kill(0, 'SIGHUP', 0, 1);
+kill(0, undefined, 0, 15);
+kill('0', 'SIGHUP', 0, 1);
+kill('0', undefined, 0, 15);
+
+// negative numbers are meaningful on unix
+kill(-1, 'SIGHUP', -1, 1);
+kill(-1, undefined, -1, 15);
+kill('-1', 'SIGHUP', -1, 1);
+kill('-1', undefined, -1, 15);
 
-process.kill(0, 'SIGHUP');
+kill(process.pid, 'SIGHUP', process.pid, 1);
+kill(process.pid, undefined, process.pid, 15);
+kill(String(process.pid), 'SIGHUP', process.pid, 1);
+kill(String(process.pid), undefined, process.pid, 15);
diff --git a/test/simple/test-readline-interface.js b/test/simple/test-readline-interface.js
index f91c10821a1..b86dd5a8a9b 100644
--- a/test/simple/test-readline-interface.js
+++ b/test/simple/test-readline-interface.js
@@ -307,5 +307,36 @@ function isWarned(emitter) {
   assert.equal(isWarned(process.stdout._events), false);
 }
 
+  // can create a new readline Interface with a null output argument
+  fi = new FakeInput();
+  rli = new readline.Interface({input: fi, output: null, terminal: terminal });
+
+  called = false;
+  rli.on('line', function(line) {
+    called = true;
+    assert.equal(line, 'asdf');
+  });
+  fi.emit('data', 'asdf\n');
+  assert.ok(called);
+
+  assert.doesNotThrow(function() {
+    rli.setPrompt("ddd> ");
+  });
+
+  assert.doesNotThrow(function() {
+    rli.prompt();
+  });
+
+  assert.doesNotThrow(function() {
+    rli.write('really shouldnt be seeing this');
+  });
+
+  assert.doesNotThrow(function() {
+    rli.question("What do you think of node.js? 
", function(answer) { + console.log("Thank you for your valuable feedback:", answer); + rli.close(); + }) + }); + }); diff --git a/test/simple/test-smalloc.js b/test/simple/test-smalloc.js index be7e7ac3263..198b5c7f5cc 100644 --- a/test/simple/test-smalloc.js +++ b/test/simple/test-smalloc.js @@ -161,6 +161,20 @@ if (os.endianness() === 'LE') { copyOnto(c, 0, b, 0, 2); assert.equal(b[0], 0.1); +var b = alloc(1, Types.Uint16); +var c = alloc(2, Types.Uint8); +c[0] = c[1] = 0xff; +copyOnto(c, 0, b, 0, 2); +assert.equal(b[0], 0xffff); + +var b = alloc(2, Types.Uint8); +var c = alloc(1, Types.Uint16); +c[0] = 0xffff; +copyOnto(c, 0, b, 0, 1); +assert.equal(b[0], 0xff); +assert.equal(b[1], 0xff); + + // verify checking external if has external memory @@ -300,11 +314,19 @@ for (var i = 0; i < 5; i++) // only allow object to be passed to dispose assert.throws(function() { - alloc.dispose(null); + smalloc.dispose(null); }); // can't dispose a Buffer assert.throws(function() { - alloc.dispose(new Buffer()); + smalloc.dispose(new Buffer()); +}); + +assert.throws(function() { + smalloc.dispose(new Uint8Array(new ArrayBuffer(1))); +}); + +assert.throws(function() { + smalloc.dispose({}); }); diff --git a/test/simple/test-asynclistener-error-multiple-unhandled.js b/test/simple/test-stream-writable-change-default-encoding.js similarity index 50% rename from test/simple/test-asynclistener-error-multiple-unhandled.js rename to test/simple/test-stream-writable-change-default-encoding.js index 33afaaa81b1..eb71cf2b4a6 100644 --- a/test/simple/test-asynclistener-error-multiple-unhandled.js +++ b/test/simple/test-stream-writable-change-default-encoding.js @@ -21,59 +21,52 @@ var common = require('../common'); var assert = require('assert'); -var tracing = require('tracing'); -function onAsync0() { - return 0; -} +var stream = require('stream'); +var util = require('util'); -function onAsync1() { - return 1; -} - -function onError(stor) { - results.push(stor); -} - -var results = []; -var asyncNoHandleError0 = { - create: onAsync0, - error: onError -}; -var asyncNoHandleError1 = { - create: onAsync1, - error: onError +function MyWritable(fn, options) { + stream.Writable.call(this, options); + this.fn = fn; }; -var listeners = [ - tracing.addAsyncListener(asyncNoHandleError0), - tracing.addAsyncListener(asyncNoHandleError1) -]; +util.inherits(MyWritable, stream.Writable); -var uncaughtFired = false; -process.on('uncaughtException', function() { - uncaughtFired = true; - - // Unhandled errors should propagate to all listeners. - assert.equal(results[0], 0); - assert.equal(results[1], 1); - assert.equal(results.length, 2); -}); +MyWritable.prototype._write = function (chunk, encoding, callback) { + this.fn(Buffer.isBuffer(chunk), typeof chunk, encoding); + callback(); +}; -process.nextTick(function() { - throw new Error(); -}); +(function defaultCondingIsUtf8() { + var m = new MyWritable(function(isBuffer, type, enc) { + assert.equal(enc, 'utf8'); + }, { decodeStrings: false }); + m.write('foo'); + m.end(); +}()); -process.on('exit', function(code) { - // If the exit code isn't ok then return early to throw the stack that - // caused the bad return code. 
- if (code !== 0) - return; +(function changeDefaultEncodingToAscii() { + var m = new MyWritable(function(isBuffer, type, enc) { + assert.equal(enc, 'ascii'); + }, { decodeStrings: false }); + m.setDefaultEncoding('ascii'); + m.write('bar'); + m.end(); +}()); - // Need to remove the async listeners or tests will always pass - for (var i = 0; i < listeners.length; i++) - tracing.removeAsyncListener(listeners[i]); +assert.throws(function changeDefaultEncodingToInvalidValue() { + var m = new MyWritable(function(isBuffer, type, enc) { + }, { decodeStrings: false }); + m.setDefaultEncoding({}); + m.write('bar'); + m.end(); +}, TypeError); - assert.ok(uncaughtFired); - console.log('ok'); -}); +(function checkVairableCaseEncoding() { + var m = new MyWritable(function(isBuffer, type, enc) { + assert.equal(enc, 'ascii'); + }, { decodeStrings: false }); + m.setDefaultEncoding('AsCii'); + m.write('bar'); + m.end(); +}()); diff --git a/test/simple/test-stream2-transform.js b/test/simple/test-stream2-transform.js index 9c9ddd8efc3..6064565be0a 100644 --- a/test/simple/test-stream2-transform.js +++ b/test/simple/test-stream2-transform.js @@ -81,7 +81,7 @@ test('writable side consumption', function(t) { t.equal(tx._readableState.length, 10); t.equal(transformed, 10); t.equal(tx._transformState.writechunk.length, 5); - t.same(tx._writableState.buffer.map(function(c) { + t.same(tx._writableState.getBuffer().map(function(c) { return c.chunk.length; }), [6, 7, 8, 9, 10]); diff --git a/test/simple/test-tcp-wrap-connect.js b/test/simple/test-tcp-wrap-connect.js index 43fb37ac701..9e915d243ba 100644 --- a/test/simple/test-tcp-wrap-connect.js +++ b/test/simple/test-tcp-wrap-connect.js @@ -22,11 +22,13 @@ var common = require('../common'); var assert = require('assert'); var TCP = process.binding('tcp_wrap').TCP; +var TCPConnectWrap = process.binding('tcp_wrap').TCPConnectWrap; +var ShutdownWrap = process.binding('stream_wrap').ShutdownWrap; function makeConnection() { var client = new TCP(); - var req = {}; + var req = new TCPConnectWrap(); var err = client.connect(req, '127.0.0.1', common.PORT); assert.equal(err, 0); @@ -36,7 +38,7 @@ function makeConnection() { assert.equal(req, req_); console.log('connected'); - var shutdownReq = {}; + var shutdownReq = new ShutdownWrap(); var err = client.shutdown(shutdownReq); assert.equal(err, 0); diff --git a/test/simple/test-tcp-wrap-listen.js b/test/simple/test-tcp-wrap-listen.js index fb3175a008a..5801368ba1e 100644 --- a/test/simple/test-tcp-wrap-listen.js +++ b/test/simple/test-tcp-wrap-listen.js @@ -23,6 +23,7 @@ var common = require('../common'); var assert = require('assert'); var TCP = process.binding('tcp_wrap').TCP; +var WriteWrap = process.binding('stream_wrap').WriteWrap; var server = new TCP(); @@ -55,7 +56,8 @@ server.onconnection = function(err, client) { assert.equal(0, client.writeQueueSize); - var req = { async: false }; + var req = new WriteWrap(); + req.async = false; var err = client.writeBuffer(req, buffer); assert.equal(err, 0); client.pendingWrites.push(req); diff --git a/test/simple/test-url.js b/test/simple/test-url.js index 3271b0ba21a..f12a00dbed0 100644 --- a/test/simple/test-url.js +++ b/test/simple/test-url.js @@ -45,6 +45,30 @@ var parseTests = { href: 'http://evil-phisher/foo.html#h%5Ca%5Cs%5Ch' }, + 'http:\\\\evil-phisher\\foo.html?json="\\"foo\\""#h\\a\\s\\h': { + protocol: 'http:', + slashes: true, + host: 'evil-phisher', + hostname: 'evil-phisher', + pathname: '/foo.html', + search: '?json=%22%5C%22foo%5C%22%22', + query: 
'json=%22%5C%22foo%5C%22%22', + path: '/foo.html?json=%22%5C%22foo%5C%22%22', + hash: '#h%5Ca%5Cs%5Ch', + href: 'http://evil-phisher/foo.html?json=%22%5C%22foo%5C%22%22#h%5Ca%5Cs%5Ch' + }, + + 'http:\\\\evil-phisher\\foo.html#h\\a\\s\\h?blarg': { + protocol: 'http:', + slashes: true, + host: 'evil-phisher', + hostname: 'evil-phisher', + pathname: '/foo.html', + path: '/foo.html', + hash: '#h%5Ca%5Cs%5Ch?blarg', + href: 'http://evil-phisher/foo.html#h%5Ca%5Cs%5Ch?blarg' + }, + 'http:\\\\evil-phisher\\foo.html': { protocol: 'http:', @@ -153,32 +177,44 @@ var parseTests = { 'path': '/Y' }, + // + not an invalid host character + // per https://url.spec.whatwg.org/#host-parsing + 'http://x.y.com+a/b/c' : { + 'href': 'http://x.y.com+a/b/c', + 'protocol': 'http:', + 'slashes': true, + 'host': 'x.y.com+a', + 'hostname': 'x.y.com+a', + 'pathname': '/b/c', + 'path': '/b/c' + }, + // an unexpected invalid char in the hostname. - 'HtTp://x.y.cOm*a/b/c?d=e#f g<h>i' : { - 'href': 'http://x.y.com/*a/b/c?d=e#f%20g%3Ch%3Ei', + 'HtTp://x.y.cOm;a/b/c?d=e#f g<h>i' : { + 'href': 'http://x.y.com/;a/b/c?d=e#f%20g%3Ch%3Ei', 'protocol': 'http:', 'slashes': true, 'host': 'x.y.com', 'hostname': 'x.y.com', - 'pathname': '/*a/b/c', + 'pathname': ';a/b/c', 'search': '?d=e', 'query': 'd=e', 'hash': '#f%20g%3Ch%3Ei', - 'path': '/*a/b/c?d=e' + 'path': ';a/b/c?d=e' }, // make sure that we don't accidentally lcast the path parts. - 'HtTp://x.y.cOm*A/b/c?d=e#f g<h>i' : { - 'href': 'http://x.y.com/*A/b/c?d=e#f%20g%3Ch%3Ei', + 'HtTp://x.y.cOm;A/b/c?d=e#f g<h>i' : { + 'href': 'http://x.y.com/;A/b/c?d=e#f%20g%3Ch%3Ei', 'protocol': 'http:', 'slashes': true, 'host': 'x.y.com', 'hostname': 'x.y.com', - 'pathname': '/*A/b/c', + 'pathname': ';A/b/c', 'search': '?d=e', 'query': 'd=e', 'hash': '#f%20g%3Ch%3Ei', - 'path': '/*A/b/c?d=e' + 'path': ';A/b/c?d=e' }, 'http://x...y...#p': { @@ -493,17 +529,17 @@ var parseTests = { 'path': '/' }, - 'http://www.Äffchen.cOm*A/b/c?d=e#f g<h>i' : { - 'href': 'http://www.xn--ffchen-9ta.com/*A/b/c?d=e#f%20g%3Ch%3Ei', + 'http://www.Äffchen.cOm;A/b/c?d=e#f g<h>i' : { + 'href': 'http://www.xn--ffchen-9ta.com/;A/b/c?d=e#f%20g%3Ch%3Ei', 'protocol': 'http:', 'slashes': true, 'host': 'www.xn--ffchen-9ta.com', 'hostname': 'www.xn--ffchen-9ta.com', - 'pathname': '/*A/b/c', + 'pathname': ';A/b/c', 'search': '?d=e', 'query': 'd=e', 'hash': '#f%20g%3Ch%3Ei', - 'path': '/*A/b/c?d=e' + 'path': ';A/b/c?d=e' }, 'http://SÉLIER.COM/' : { @@ -868,6 +904,20 @@ var parseTestsWithQueryString = { 'pathname': '/', 'path': '/' }, + '/example': { + protocol: null, + slashes: null, + auth: null, + host: null, + port: null, + hostname: null, + hash: null, + search: '', + query: {}, + pathname: '/example', + path: '/example', + href: '/example' + }, '/example?query=value':{ protocol: null, slashes: null, @@ -1047,7 +1097,7 @@ var formatTests = { // `#`,`?` in path '/path/to/%%23%3F+=&.txt?foo=theA1#bar' : { - href : '/path/to/%%23%3F+=&.txt?foo=theA1#bar', + href: '/path/to/%%23%3F+=&.txt?foo=theA1#bar', pathname: '/path/to/%#?+=&.txt', query: { foo: 'theA1' @@ -1057,7 +1107,7 @@ var formatTests = { // `#`,`?` in path + `#` in query '/path/to/%%23%3F+=&.txt?foo=the%231#bar' : { - href : '/path/to/%%23%3F+=&.txt?foo=the%231#bar', + href: '/path/to/%%23%3F+=&.txt?foo=the%231#bar', pathname: '/path/to/%#?+=&.txt', query: { foo: 'the#1' @@ -1072,7 +1122,7 @@ var formatTests = { hostname: 'ex.com', hash: '#frag', search: '?abc=the#1?&foo=bar', - pathname: '/foo?100%m#r', + pathname: '/foo?100%m#r' }, // `?` and `#` in search only @@ 
-1082,8 +1132,77 @@ var formatTests = { hostname: 'ex.com', hash: '#frag', search: '?abc=the#1?&foo=bar', - pathname: '/fooA100%mBr', + pathname: '/fooA100%mBr' + }, + + // path + 'http://github.com/joyent/node#js1': { + href: 'http://github.com/joyent/node#js1', + protocol: 'http:', + hostname: 'github.com', + hash: '#js1', + path: '/joyent/node' + }, + + // pathname vs. path, path wins + 'http://github.com/joyent/node2#js1': { + href: 'http://github.com/joyent/node2#js1', + protocol: 'http:', + hostname: 'github.com', + hash: '#js1', + path: '/joyent/node2', + pathname: '/joyent/node' + }, + + // pathname with query/search + 'http://github.com/joyent/node?foo=bar#js2': { + href: 'http://github.com/joyent/node?foo=bar#js2', + protocol: 'http:', + hostname: 'github.com', + hash: '#js2', + path: '/joyent/node?foo=bar' + }, + + // path vs. query, path wins + 'http://github.com/joyent/node?foo=bar2#js3': { + href: 'http://github.com/joyent/node?foo=bar2#js3', + protocol: 'http:', + hostname: 'github.com', + hash: '#js3', + path: '/joyent/node?foo=bar2', + query: {foo: 'bar'} + }, + + // path vs. search, path wins + 'http://github.com/joyent/node?foo=bar3#js4': { + href: 'http://github.com/joyent/node?foo=bar3#js4', + protocol: 'http:', + hostname: 'github.com', + hash: '#js4', + path: '/joyent/node?foo=bar3', + search: '?foo=bar' + }, + + // path is present without ? vs. query given + 'http://github.com/joyent/node#js5': { + href: 'http://github.com/joyent/node#js5', + protocol: 'http:', + hostname: 'github.com', + hash: '#js5', + path: '/joyent/node', + query: {foo: 'bar'} + }, + + // path is present without ? vs. search given + 'http://github.com/joyent/node#js6': { + href: 'http://github.com/joyent/node#js6', + protocol: 'http:', + hostname: 'github.com', + hash: '#js6', + path: '/joyent/node', + search: '?foo=bar' } + }; for (var u in formatTests) { var expect = formatTests[u].href; diff --git a/test/simple/test-util-inspect.js b/test/simple/test-util-inspect.js index 8651a2df019..2a7c2c4fcb5 100644 --- a/test/simple/test-util-inspect.js +++ b/test/simple/test-util-inspect.js @@ -235,3 +235,12 @@ assert.equal(util.inspect(bool), '{ [Boolean: true] foo: \'bar\' }'); var num = new Number(13.37); num.foo = 'bar'; assert.equal(util.inspect(num), '{ [Number: 13.37] foo: \'bar\' }'); + +// test es6 Symbol +if (typeof Symbol !== 'undefined') { + assert.equal(util.inspect(Symbol()), 'Symbol()'); + assert.equal(util.inspect(Symbol(123)), 'Symbol(123)'); + assert.equal(util.inspect(Symbol('hi')), 'Symbol(hi)'); + assert.equal(util.inspect([Symbol()]), '[ Symbol() ]'); + assert.equal(util.inspect({ foo: Symbol() }), '{ foo: Symbol() }'); +} diff --git a/test/simple/test-v8-gc.js b/test/simple/test-v8-gc.js deleted file mode 100644 index 077e48c574e..00000000000 --- a/test/simple/test-v8-gc.js +++ /dev/null @@ -1,53 +0,0 @@ -// Copyright Joyent, Inc. and other Node contributors. -// -// Permission is hereby granted, free of charge, to any person obtaining a -// copy of this software and associated documentation files (the -// "Software"), to deal in the Software without restriction, including -// without limitation the rights to use, copy, modify, merge, publish, -// distribute, sublicense, and/or sell copies of the Software, and to permit -// persons to whom the Software is furnished to do so, subject to the -// following conditions: -// -// The above copyright notice and this permission notice shall be included -// in all copies or substantial portions of the Software. 
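The block of new format() cases above establishes a precedence rule: an explicit `path` property supplies both the pathname and the query string, and wins over `pathname`, `query`, or `search` when they disagree. For example (expected output per the cases above):

```js
var url = require('url');

console.log(url.format({
  protocol: 'http:',
  hostname: 'github.com',
  hash: '#js1',
  path: '/joyent/node?foo=bar',   // wins...
  pathname: '/ignored',           // ...over this
  search: '?ignored'              // ...and this
}));
// -> 'http://github.com/joyent/node?foo=bar#js1'
```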
-// -// THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS -// OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF -// MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN -// NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, -// DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR -// OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE -// USE OR OTHER DEALINGS IN THE SOFTWARE. - -// Flags: --expose_gc - -var common = require('../common'); -var assert = require('assert'); -var v8 = require('tracing').v8; - -assert(typeof gc === 'function', 'Run this test with --expose_gc.'); - -var ncalls = 0; -var before; -var after; - -function ongc(before_, after_) { - // Try very hard to not create garbage because that could kick off another - // garbage collection cycle. - before = before_; - after = after_; - ncalls += 1; -} - -gc(); -v8.on('gc', ongc); -gc(); -v8.removeListener('gc', ongc); -gc(); - -assert.equal(ncalls, 1); -assert.equal(typeof before, 'object'); -assert.equal(typeof after, 'object'); -assert.equal(typeof before.timestamp, 'number'); -assert.equal(typeof after.timestamp, 'number'); -assert.equal(before.timestamp <= after.timestamp, true); diff --git a/test/simple/test-vm-harmony-symbols.js b/test/simple/test-vm-harmony-symbols.js index 4aa41612666..200084fdf42 100644 --- a/test/simple/test-vm-harmony-symbols.js +++ b/test/simple/test-vm-harmony-symbols.js @@ -19,8 +19,6 @@ // OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE // USE OR OTHER DEALINGS IN THE SOFTWARE. -// Flags: --harmony_symbols - var common = require('../common'); var assert = require('assert'); var vm = require('vm'); diff --git a/tools/configure.d/nodedownload.py b/tools/configure.d/nodedownload.py new file mode 100644 index 00000000000..e24efd865f3 --- /dev/null +++ b/tools/configure.d/nodedownload.py @@ -0,0 +1,127 @@ +#!/usr/bin/env python +# Moved some utilities here from ../../configure + +import urllib +import hashlib +import sys +import zipfile +import tarfile +import fpformat +import contextlib + +def formatSize(amt): + """Format a size as a string in MB""" + return fpformat.fix(amt / 1024000., 1) + +def spin(c): + """print out an ASCII 'spinner' based on the value of counter 'c'""" + spin = ".:|'" + return (spin[c % len(spin)]) + +class ConfigOpener(urllib.FancyURLopener): + """fancy opener used by retrievefile. Set a UA""" + # append to existing version (UA) + version = '%s node.js/configure' % urllib.URLopener.version + +def reporthook(count, size, total): + """internal hook used by retrievefile""" + sys.stdout.write(' Fetch: %c %sMB total, %sMB downloaded \r' % + (spin(count), + formatSize(total), + formatSize(count*size))) + +def retrievefile(url, targetfile): + """fetch file 'url' as 'targetfile'. Return targetfile or throw.""" + try: + sys.stdout.write(' <%s>\nConnecting...\r' % url) + sys.stdout.flush() + msg = ConfigOpener().retrieve(url, targetfile, reporthook=reporthook) + print '' # clear the line + return targetfile + except: + print ' ** Error occurred while downloading\n <%s>' % url + raise + +def md5sum(targetfile): + """md5sum a file. Return the hex digest.""" + digest = hashlib.md5() + with open(targetfile, 'rb') as f: + chunk = f.read(1024) + while chunk != "": + digest.update(chunk) + chunk = f.read(1024) + return digest.hexdigest() + +def unpack(packedfile, parent_path): + """Unpacks packedfile into parent_path. Assumes .zip. 
Returns parent_path"""
+  if zipfile.is_zipfile(packedfile):
+    with contextlib.closing(zipfile.ZipFile(packedfile, 'r')) as icuzip:
+      print ' Extracting zipfile: %s' % packedfile
+      icuzip.extractall(parent_path)
+      return parent_path
+  elif tarfile.is_tarfile(packedfile):
+    with tarfile.TarFile.open(packedfile, 'r') as icuzip:
+      print ' Extracting tarfile: %s' % packedfile
+      icuzip.extractall(parent_path)
+      return parent_path
+  else:
+    packedsuffix = packedfile.lower().split('.')[-1]  # .zip, .tgz etc
+    raise Exception('Error: Don\'t know how to unpack %s with extension %s' % (packedfile, packedsuffix))
+
+# List of possible "--download=" types.
+download_types = set(['icu'])
+
+# Default options for --download.
+download_default = "none"
+
+def help():
+  """This function calculates the '--help' text for '--download'."""
+  return """Select which packages may be auto-downloaded.
+Valid values are: none, all, %s. (default is "%s").""" % (", ".join(download_types), download_default)
+
+def set2dict(keys, value=None):
+  """Convert some keys (iterable) to a dict."""
+  return dict((key, value) for (key) in keys)
+
+def parse(opt):
+  """This function parses the options to --download and returns a set such as { icu: true }, etc. """
+  if not opt:
+    opt = download_default
+
+  theOpts = set(opt.split(','))
+
+  if 'all' in theOpts:
+    # all on
+    return set2dict(download_types, True)
+  elif 'none' in theOpts:
+    # all off
+    return set2dict(download_types, False)
+
+  # OK. Now, process each of the opts.
+  theRet = set2dict(download_types, False)
+  for anOpt in opt.split(','):
+    if not anOpt or anOpt == "":
+      # ignore stray commas, etc.
+      continue
+    elif anOpt == 'all':
+      # all on
+      theRet = dict((key, True) for (key) in download_types)
+    else:
+      # turn this one on
+      if anOpt in download_types:
+        theRet[anOpt] = True
+      else:
+        # future proof: ignore unknown types
+        print 'Warning: ignoring unknown --download= type "%s"' % anOpt
+  # all done
+  return theRet
+
+def candownload(auto_downloads, package):
+  if not (package in auto_downloads.keys()):
+    raise Exception('Internal error: "%s" is not in the --downloads list. Check nodedownload.py' % package)
+  if auto_downloads[package]:
+    return True
+  else:
+    print """Warning: Not downloading package "%s". You could pass "--download=all"
+    (Windows: "download-all") to try auto-downloading it.""" % package
+    return False
diff --git a/tools/doc/json.js b/tools/doc/json.js
index 2459bc26aed..9fdc3bc1b4a 100644
--- a/tools/doc/json.js
+++ b/tools/doc/json.js
@@ -287,13 +287,13 @@ function processList(section) {
 }
 
 
-// textRaw = "someobject.someMethod(a, [b=100], [c])"
+// textRaw = "someobject.someMethod(a[, b=100][, c])"
 function parseSignature(text, sig) {
   var params = text.match(paramExpr);
   if (!params) return;
   params = params[1];
-  // the ] is irrelevant. [ indicates optionalness.
-  params = params.replace(/\]/g, '');
+  // the [ is irrelevant. ] indicates optionalness.
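+  // e.g. "a[, b=100][, c]" becomes "a, b=100], c]"; after the comma
+  // split below, a trailing "]" marks that param (and its default) optional.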
+ params = params.replace(/\[/g, ''); params = params.split(/,/) params.forEach(function(p, i, _) { p = p.trim(); @@ -302,9 +302,10 @@ function parseSignature(text, sig) { var optional = false; var def; // [foo] -> optional - if (p.charAt(0) === '[') { + if (p.charAt(p.length - 1) === ']') { optional = true; - p = p.substr(1); + p = p.substr(0, p.length - 1); + p = p.trim(); } var eq = p.indexOf('='); if (eq !== -1) { diff --git a/tools/icu/README.md b/tools/icu/README.md new file mode 100644 index 00000000000..40d5287d9f1 --- /dev/null +++ b/tools/icu/README.md @@ -0,0 +1,26 @@ +Notes about the icu directory. +=== + +The files in this directory were written for the node.js effort. It's +the intent of their author (Steven R. Loomis / srl295) to merge them +upstream into ICU, pending much discussion within the ICU-PMC. + +`icu_small.json` is somewhat node-specific as it specifies a "small ICU" +configuration file for the `icutrim.py` script. `icutrim.py` and +`iculslocs.cpp` may themselves be superseded by components built into +ICU in the future. + +The following tickets were opened during this work, and their +resolution may inform the reader as to the current state of icu-trim +upstream: + + * [#10919](http://bugs.icu-project.org/trac/ticket/10919) + (experimental branch - may copy any source patches here) + * [#10922](http://bugs.icu-project.org/trac/ticket/10922) + (data packaging improvements) + * [#10923](http://bugs.icu-project.org/trac/ticket/10923) + (rewrite data building in python) + +When/if components (not including the `.json` file) are merged into +ICU, this code and `configure` will be updated to detect and use those +variants rather than the ones in this directory. diff --git a/tools/icu/icu-generic.gyp b/tools/icu/icu-generic.gyp new file mode 100644 index 00000000000..bb2b5e5e4d5 --- /dev/null +++ b/tools/icu/icu-generic.gyp @@ -0,0 +1,524 @@ +# Copyright (c) IBM Corporation and Others. All Rights Reserved. +# very loosely based on icu.gyp from Chromium: +# Copyright (c) 2012 The Chromium Authors. All rights reserved. +# Use of this source code is governed by a BSD-style license that can be +# found in the LICENSE file. + + +{ + 'variables': { + 'icu_src_derb': [ '../../deps/icu/source/tools/genrb/derb.c' ], + }, + 'includes': [ '../../icu_config.gypi' ], + 'targets': [ + { + # a target for additional uconfig defines, target only + 'target_name': 'icu_uconfig_target', + 'type': 'none', + 'toolsets': [ 'target' ], + 'direct_dependent_settings': { + 'defines': [ + 'UCONFIG_NO_CONVERSION=1', + ] + }, + }, + { + # a target to hold uconfig defines. + # for now these are hard coded, but could be defined. + 'target_name': 'icu_uconfig', + 'type': 'none', + 'toolsets': [ 'host', 'target' ], + 'direct_dependent_settings': { + 'defines': [ + 'UCONFIG_NO_LEGACY_CONVERSION=1', + 'UCONFIG_NO_IDNA=1', + 'UCONFIG_NO_TRANSLITERATION=1', + 'UCONFIG_NO_SERVICE=1', + 'UCONFIG_NO_REGULAR_EXPRESSIONS=1', + 'U_ENABLE_DYLOAD=0', + 'U_STATIC_IMPLEMENTATION=1', + # Don't need std::string in API. + # Also, problematic: <http://bugs.icu-project.org/trac/ticket/11333> + 'U_HAVE_STD_STRING=0', + # TODO(srl295): reenable following pending + # https://code.google.com/p/v8/issues/detail?id=3345 + # (saves some space) + 'UCONFIG_NO_BREAK_ITERATION=0', + ], + } + }, + { + # a target to hold common settings. + # make any target that is ICU implementation depend on this. 
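+      # (Dependents inherit the settings below via
+      # direct_dependent_settings: RTTI on, MSVC exception handling on,
+      # and the listed defines.)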
+      'target_name': 'icu_implementation',
+      'toolsets': [ 'host', 'target' ],
+      'type': 'none',
+      'direct_dependent_settings': {
+        'conditions': [
+          [ 'os_posix == 1 and OS != "mac" and OS != "ios"', {
+            'cflags': [ '-Wno-deprecated-declarations' ],
+            'cflags_cc': [ '-frtti' ],
+          }],
+          [ 'OS == "mac" or OS == "ios"', {
+            'xcode_settings': {'GCC_ENABLE_CPP_RTTI': 'YES' },
+          }],
+          [ 'OS == "win"', {
+            'msvs_settings': {
+              'VCCLCompilerTool': {'RuntimeTypeInfo': 'true'},
+            }
+          }],
+        ],
+        'msvs_settings': {
+          'VCCLCompilerTool': {
+            'RuntimeTypeInfo': 'true',
+            'ExceptionHandling': '1',
+          },
+        },
+        'configurations': {
+          # TODO: why does this need to be redefined for Release and Debug?
+          # Maybe this should be pushed into common.gypi with an "if v8 i18n"?
+          'Release': {
+            'msvs_settings': {
+              'VCCLCompilerTool': {
+                'RuntimeTypeInfo': 'true',
+                'ExceptionHandling': '1',
+              },
+            },
+          },
+          'Debug': {
+            'msvs_settings': {
+              'VCCLCompilerTool': {
+                'RuntimeTypeInfo': 'true',
+                'ExceptionHandling': '1',
+              },
+            },
+          },
+        },
+        'defines': [
+          'U_ATTRIBUTE_DEPRECATED=',
+          '_CRT_SECURE_NO_DEPRECATE=',
+          'U_STATIC_IMPLEMENTATION=1',
+        ],
+      },
+    },
+    {
+      'target_name': 'icui18n',
+      'toolsets': [ 'target', 'host' ],
+      'conditions' : [
+        ['_toolset=="target"', {
+          'type': '<(library)',
+          'sources': [
+            '<@(icu_src_i18n)'
+          ],
+          'conditions': [
+            [ 'icu_ver_major == 54', { 'sources!': [
+              ## Strip out the following for ICU 54 only.
+              ## add more conditions in the future?
+              ## if your compiler can dead-strip, this will
+              ## make ZERO difference to binary size.
+              ## Made ICU-specific for future-proofing.
+
+              # alphabetic index
+              '../../deps/icu/source/i18n/alphaindex.cpp',
+              # BOCSU
+              # misc
+              '../../deps/icu/source/i18n/regexcmp.cpp',
+              '../../deps/icu/source/i18n/regexcmp.h',
+              '../../deps/icu/source/i18n/regexcst.h',
+              '../../deps/icu/source/i18n/regeximp.cpp',
+              '../../deps/icu/source/i18n/regeximp.h',
+              '../../deps/icu/source/i18n/regexst.cpp',
+              '../../deps/icu/source/i18n/regexst.h',
+              '../../deps/icu/source/i18n/regextxt.cpp',
+              '../../deps/icu/source/i18n/regextxt.h',
+              '../../deps/icu/source/i18n/region.cpp',
+              '../../deps/icu/source/i18n/region_impl.h',
+              '../../deps/icu/source/i18n/reldatefmt.cpp',
+              '../../deps/icu/source/i18n/reldatefmt.h',
+              '../../deps/icu/source/i18n/scientificformathelper.cpp',
+              '../../deps/icu/source/i18n/tmunit.cpp',
+              '../../deps/icu/source/i18n/tmutamt.cpp',
+              '../../deps/icu/source/i18n/tmutfmt.cpp',
+              '../../deps/icu/source/i18n/uregex.cpp',
+              '../../deps/icu/source/i18n/uregexc.cpp',
+              '../../deps/icu/source/i18n/uregion.cpp',
+              '../../deps/icu/source/i18n/uspoof.cpp',
+              '../../deps/icu/source/i18n/uspoof_build.cpp',
+              '../../deps/icu/source/i18n/uspoof_conf.cpp',
+              '../../deps/icu/source/i18n/uspoof_conf.h',
+              '../../deps/icu/source/i18n/uspoof_impl.cpp',
+              '../../deps/icu/source/i18n/uspoof_impl.h',
+              '../../deps/icu/source/i18n/uspoof_wsconf.cpp',
+              '../../deps/icu/source/i18n/uspoof_wsconf.h',
+            ]}]],
+          'include_dirs': [
+            '../../deps/icu/source/i18n',
+          ],
+          'defines': [
+            'U_I18N_IMPLEMENTATION=1',
+          ],
+          'dependencies': [ 'icuucx', 'icu_implementation', 'icu_uconfig', 'icu_uconfig_target' ],
+          'direct_dependent_settings': {
+            'include_dirs': [
+              '../../deps/icu/source/i18n',
+            ],
+          },
+          'export_dependent_settings': [ 'icuucx', 'icu_uconfig_target' ],
+        }],
+        ['_toolset=="host"', {
+          'type': 'none',
+          'dependencies': [ 'icutools' ],
+          'export_dependent_settings': [ 'icutools' ],
+        }],
+      ],
+    },
+    # This exports actual ICU data
+    {
+      'target_name': 'icudata',
+      'type': 
'<(library)', + 'toolsets': [ 'target' ], + 'conditions': [ + [ 'OS == "win"', { + 'conditions': [ + [ 'icu_small == "false"', { # and OS=win + # full data - just build the full data file, then we are done. + 'sources': [ '<(SHARED_INTERMEDIATE_DIR)/icudt<(icu_ver_major)<(icu_endianness)_dat.obj' ], + 'dependencies': [ 'genccode#host' ], + 'actions': [ + { + 'action_name': 'icudata', + 'inputs': [ '<(icu_data_in)' ], + 'outputs': [ '<(SHARED_INTERMEDIATE_DIR)/icudt<(icu_ver_major)<(icu_endianness)_dat.obj' ], + 'action': [ '<(PRODUCT_DIR)/genccode', + '-o', + '-d', '<(SHARED_INTERMEDIATE_DIR)', + '-n', 'icudata', + '-e', 'icudt<(icu_ver_major)', + '<@(_inputs)' ], + }, + ], + }, { # icu_small == TRUE and OS == win + # link against stub data primarily + # then, use icupkg and genccode to rebuild data + 'dependencies': [ 'icustubdata', 'genccode#host', 'icupkg#host', 'genrb#host', 'iculslocs#host' ], + 'export_dependent_settings': [ 'icustubdata' ], + 'actions': [ + { + # trim down ICU + 'action_name': 'icutrim', + 'inputs': [ '<(icu_data_in)', 'icu_small.json' ], + 'outputs': [ '<(SHARED_INTERMEDIATE_DIR)/icutmp/icudt<(icu_ver_major)<(icu_endianness).dat' ], + 'action': [ 'python', + 'icutrim.py', + '-P', '../../<(CONFIGURATION_NAME)', + '-D', '<(icu_data_in)', + '--delete-tmp', + '-T', '<(SHARED_INTERMEDIATE_DIR)/icutmp', + '-F', 'icu_small.json', + '-O', 'icudt<(icu_ver_major)<(icu_endianness).dat', + '-v', + '-L', '<(icu_locales)'], + }, + { + # build final .dat -> .obj + 'action_name': 'genccode', + 'inputs': [ '<(SHARED_INTERMEDIATE_DIR)/icutmp/icudt<(icu_ver_major)<(icu_endianness).dat' ], + 'outputs': [ '<(SHARED_INTERMEDIATE_DIR)/icudt<(icu_ver_major)<(icu_endianness)_dat.obj' ], + 'action': [ '../../<(CONFIGURATION_NAME)/genccode', + '-o', + '-d', '<(SHARED_INTERMEDIATE_DIR)/', + '-n', 'icudata', + '-e', 'icusmdt<(icu_ver_major)', + '<@(_inputs)' ], + }, + ], + # This file contains the small ICU data. + 'sources': [ '<(SHARED_INTERMEDIATE_DIR)/icudt<(icu_ver_major)<(icu_endianness)_dat.obj' ], + } ] ], #end of OS==win and icu_small == true + }, { # OS != win + 'conditions': [ + [ 'icu_small == "false"', { + # full data - just build the full data file, then we are done. 
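+            # (See the actions below: icupkg copies the .dat file,
+            # swapping endianness if needed; the copy is renamed; then
+            # genccode wraps it into a .c array compiled into the binary.)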
+ 'sources': [ '<(SHARED_INTERMEDIATE_DIR)/icudt<(icu_ver_major)_dat.c' ], + 'dependencies': [ 'genccode#host', 'icupkg#host', 'icu_implementation#host', 'icu_uconfig' ], + 'include_dirs': [ + '../../deps/icu/source/common', + ], + 'actions': [ + { + # Swap endianness (if needed), or at least copy the file + 'action_name': 'icupkg', + 'inputs': [ '<(icu_data_in)' ], + 'outputs':[ '<(SHARED_INTERMEDIATE_DIR)/icudt<(icu_ver_major)<(icu_endianness).dat' ], + 'action': [ '<(PRODUCT_DIR)/icupkg', + '-t<(icu_endianness)', + '<@(_inputs)', + '<@(_outputs)', + ], + }, + { + # Rename without the endianness marker + 'action_name': 'copy', + 'inputs': [ '<(SHARED_INTERMEDIATE_DIR)/icudt<(icu_ver_major)<(icu_endianness).dat' ], + 'outputs':[ '<(SHARED_INTERMEDIATE_DIR)/icudt<(icu_ver_major).dat' ], + 'action': [ 'cp', + '<@(_inputs)', + '<@(_outputs)', + ], + }, + { + 'action_name': 'icudata', + 'inputs': [ '<(SHARED_INTERMEDIATE_DIR)/icudt<(icu_ver_major).dat' ], + 'outputs':[ '<(SHARED_INTERMEDIATE_DIR)/icudt<(icu_ver_major)_dat.c' ], + 'action': [ '<(PRODUCT_DIR)/genccode', + '-e', 'icudt<(icu_ver_major)', + '-d', '<(SHARED_INTERMEDIATE_DIR)', + '-f', 'icudt<(icu_ver_major)_dat', + '<@(_inputs)' ], + }, + ], # end actions + }, { # icu_small == true ( and OS != win ) + # link against stub data (as primary data) + # then, use icupkg and genccode to rebuild small data + 'dependencies': [ 'icustubdata', 'genccode#host', 'icupkg#host', 'genrb#host', 'iculslocs#host', + 'icu_implementation', 'icu_uconfig' ], + 'export_dependent_settings': [ 'icustubdata' ], + 'actions': [ + { + # trim down ICU + 'action_name': 'icutrim', + 'inputs': [ '<(icu_data_in)', 'icu_small.json' ], + 'outputs': [ '<(SHARED_INTERMEDIATE_DIR)/icutmp/icudt<(icu_ver_major)<(icu_endianness).dat' ], + 'action': [ 'python', + 'icutrim.py', + '-P', '<(PRODUCT_DIR)', + '-D', '<(icu_data_in)', + '--delete-tmp', + '-T', '<(SHARED_INTERMEDIATE_DIR)/icutmp', + '-F', 'icu_small.json', + '-O', 'icudt<(icu_ver_major)<(icu_endianness).dat', + '-v', + '-L', '<(icu_locales)'], + }, { + # rename to get the final entrypoint name right + 'action_name': 'rename', + 'inputs': [ '<(SHARED_INTERMEDIATE_DIR)/icutmp/icudt<(icu_ver_major)<(icu_endianness).dat' ], + 'outputs': [ '<(SHARED_INTERMEDIATE_DIR)/icutmp/icusmdt<(icu_ver_major).dat' ], + 'action': [ 'cp', + '<@(_inputs)', + '<@(_outputs)', + ], + }, { + # build final .dat -> .obj + 'action_name': 'genccode', + 'inputs': [ '<(SHARED_INTERMEDIATE_DIR)/icutmp/icusmdt<(icu_ver_major).dat' ], + 'outputs': [ '<(SHARED_INTERMEDIATE_DIR)/icusmdt<(icu_ver_major)_dat.c' ], + 'action': [ '<(PRODUCT_DIR)/genccode', + '-d', '<(SHARED_INTERMEDIATE_DIR)', + '<@(_inputs)' ], + }, + ], + # This file contains the small ICU data + 'sources': [ '<(SHARED_INTERMEDIATE_DIR)/icusmdt<(icu_ver_major)_dat.c' ], + # for umachine.h + 'include_dirs': [ + '../../deps/icu/source/common', + ], + }]], # end icu_small == true + }]], # end OS != win + }, # end icudata + # icustubdata is a tiny (~1k) symbol with no ICU data in it. + # tools must link against it as they are generating the full data. + { + 'target_name': 'icustubdata', + 'type': '<(library)', + 'toolsets': [ 'target' ], + 'dependencies': [ 'icu_implementation' ], + 'sources': [ + '<@(icu_src_stubdata)' + ], + 'include_dirs': [ + '../../deps/icu/source/common', + ], + }, + # this target is for v8 consumption. 
+ # it is icuuc + stubdata + # it is only built for target + { + 'target_name': 'icuuc', + 'type': 'none', + 'toolsets': [ 'target', 'host' ], + 'conditions' : [ + ['_toolset=="host"', { + 'dependencies': [ 'icutools' ], + 'export_dependent_settings': [ 'icutools' ], + }], + ['_toolset=="target"', { + 'dependencies': [ 'icuucx', 'icudata' ], + 'export_dependent_settings': [ 'icuucx', 'icudata' ], + }], + ], + }, + # This is the 'real' icuuc. + { + 'target_name': 'icuucx', + 'type': '<(library)', + 'dependencies': [ 'icu_implementation', 'icu_uconfig', 'icu_uconfig_target' ], + 'toolsets': [ 'target' ], + 'sources': [ + '<@(icu_src_common)', + ], + 'conditions': [ + [ 'icu_ver_major == 54', { 'sources!': [ + ## Strip out the following for ICU 54 only. + ## add more conditions in the future? + ## if your compiler can dead-strip, this will + ## make ZERO difference to binary size. + ## Made ICU-specific for future-proofing. + + # bidi- not needed (yet!) + '../../deps/icu/source/common/ubidi.c', + '../../deps/icu/source/common/ubidiimp.h', + '../../deps/icu/source/common/ubidiln.c', + '../../deps/icu/source/common/ubidiwrt.c', + #'../../deps/icu/source/common/ubidi_props.c', + #'../../deps/icu/source/common/ubidi_props.h', + #'../../deps/icu/source/common/ubidi_props_data.h', + # and the callers + '../../deps/icu/source/common/ushape.cpp', + '../../deps/icu/source/common/usprep.cpp', + '../../deps/icu/source/common/uts46.cpp', + ]}], + [ 'OS == "solaris"', { 'defines': [ + '_XOPEN_SOURCE_EXTENDED=0', + ]}], + ], + 'include_dirs': [ + '../../deps/icu/source/common', + ], + 'defines': [ + 'U_COMMON_IMPLEMENTATION=1', + ], + 'cflags_c': ['-std=c99'], + 'export_dependent_settings': [ 'icu_uconfig', 'icu_uconfig_target' ], + 'direct_dependent_settings': { + 'include_dirs': [ + '../../deps/icu/source/common', + ], + 'conditions': [ + [ 'OS=="win"', { + 'link_settings': { + 'libraries': [ '-lAdvAPI32.Lib', '-lUser32.lib' ], + }, + }], + ], + }, + }, + # tools library + { + 'target_name': 'icutools', + 'type': '<(library)', + 'toolsets': [ 'host' ], + 'dependencies': [ 'icu_implementation', 'icu_uconfig' ], + 'sources': [ + '<@(icu_src_tools)', + '<@(icu_src_common)', + '<@(icu_src_i18n)', + '<@(icu_src_io)', + '<@(icu_src_stubdata)', + ], + 'sources!': [ + '../../deps/icu/source/tools/toolutil/udbgutil.cpp', + '../../deps/icu/source/tools/toolutil/udbgutil.h', + '../../deps/icu/source/tools/toolutil/dbgutil.cpp', + '../../deps/icu/source/tools/toolutil/dbgutil.h', + ], + 'include_dirs': [ + '../../deps/icu/source/common', + '../../deps/icu/source/i18n', + '../../deps/icu/source/io', + '../../deps/icu/source/tools/toolutil', + ], + 'defines': [ + 'U_COMMON_IMPLEMENTATION=1', + 'U_I18N_IMPLEMENTATION=1', + 'U_IO_IMPLEMENTATION=1', + 'U_TOOLUTIL_IMPLEMENTATION=1', + #'DEBUG=0', # http://bugs.icu-project.org/trac/ticket/10977 + ], + 'cflags_c': ['-std=c99'], + 'conditions': [ + ['OS == "solaris"', { + 'defines': [ '_XOPEN_SOURCE_EXTENDED=0' ] + }] + ], + 'direct_dependent_settings': { + 'include_dirs': [ + '../../deps/icu/source/common', + '../../deps/icu/source/i18n', + '../../deps/icu/source/io', + '../../deps/icu/source/tools/toolutil', + ], + 'conditions': [ + [ 'OS=="win"', { + 'link_settings': { + 'libraries': [ '-lAdvAPI32.Lib', '-lUser32.lib' ], + }, + }], + ], + }, + 'export_dependent_settings': [ 'icu_uconfig' ], + }, + # This tool is needed to rebuild .res files from .txt, + # or to build index (res_index.txt) files for small-icu + { + 'target_name': 'genrb', + 'type': 'executable', + 
'toolsets': [ 'host' ], + 'dependencies': [ 'icutools' ], + 'sources': [ + '<@(icu_src_genrb)' + ], + # derb is a separate executable + # (which is not currently built) + 'sources!': [ + '<@(icu_src_derb)', + 'no-op.cc', + ], + }, + # This tool is used to rebuild res_index.res manifests + { + 'target_name': 'iculslocs', + 'toolsets': [ 'host' ], + 'type': 'executable', + 'dependencies': [ 'icutools' ], + 'sources': [ + 'iculslocs.cc', + 'no-op.cc', + ], + }, + # This tool is used to package, unpackage, repackage .dat files + # and convert endianesses + { + 'target_name': 'icupkg', + 'toolsets': [ 'host' ], + 'type': 'executable', + 'dependencies': [ 'icutools' ], + 'sources': [ + '<@(icu_src_icupkg)', + 'no-op.cc', + ], + }, + # this is used to convert .dat directly into .obj + { + 'target_name': 'genccode', + 'toolsets': [ 'host' ], + 'type': 'executable', + 'dependencies': [ 'icutools' ], + 'sources': [ + '<@(icu_src_genccode)', + 'no-op.cc', + ], + }, + ], +} diff --git a/tools/icu/icu-system.gyp b/tools/icu/icu-system.gyp new file mode 100644 index 00000000000..44d4f5feff4 --- /dev/null +++ b/tools/icu/icu-system.gyp @@ -0,0 +1,18 @@ +# Copyright (c) 2014 IBM Corporation and Others. All Rights Reserved. + +# This variant is used for the '--with-intl=system-icu' option. +# 'configure' has already set 'libs' and 'cflags' - so, +# there's nothing to do in these targets. + +{ + 'targets': [ + { + 'target_name': 'icuuc', + 'type': 'none', + }, + { + 'target_name': 'icui18n', + 'type': 'none', + }, + ], +} diff --git a/tools/icu/icu_small.json b/tools/icu/icu_small.json new file mode 100644 index 00000000000..e434794e91c --- /dev/null +++ b/tools/icu/icu_small.json @@ -0,0 +1,48 @@ +{ + "copyright": "Copyright (c) 2014 IBM Corporation and Others. All Rights Reserved.", + "comment": "icutrim.py config: Trim down ICU to just a certain locale set, needed for node.js use.", + "variables": { + "none": { + "only": [] + }, + "locales": { + "only": [ + "root", + "en" + ] + }, + "leavealone": { + } + }, + "trees": { + "ROOT": "locales", + "brkitr": "none", + "coll": "locales", + "curr": "locales", + "lang": "none", + "rbnf": "none", + "region": "none", + "zone": "locales", + "converters": "none", + "stringprep": "none", + "translit": "none", + "brkfiles": "none", + "brkdict": "none", + "confusables": "none", + "unit": "none" + }, + "remove": [ + "cnvalias.icu", + "postalCodeData.res", + "uts46.nrm", + "genderList.res", + "brkitr/root.res", + "unames.icu" + ], + "keep": [ + "pool.res", + "supplementalData.res", + "zoneinfo64.res", + "likelySubtags.res" + ] +} diff --git a/tools/icu/iculslocs.cc b/tools/icu/iculslocs.cc new file mode 100644 index 00000000000..66becace0a4 --- /dev/null +++ b/tools/icu/iculslocs.cc @@ -0,0 +1,388 @@ +/* +********************************************************************** +* Copyright (C) 2014, International Business Machines +* Corporation and others. All Rights Reserved. +********************************************************************** +* +* Created 2014-06-20 by Steven R. Loomis +* +* See: http://bugs.icu-project.org/trac/ticket/10922 +* +*/ + +/* +WHAT IS THIS? + +Here's the problem: It's difficult to reconfigure ICU from the command +line without using the full makefiles. You can do a lot, but not +everything. + +Consider: + + $ icupkg -r 'ja*' icudt53l.dat + +Great, you've now removed the (main) Japanese data. But something's +still wrong-- res_index (and thus, getAvailable* functions) still +claim the locale is present. 
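+
+(Enumeration entry points such as the getAvailable* APIs consult
+res_index.res rather than probing each bundle, so "ja" would still be
+reported as available.)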
+
+You are reading the source to a tool (using only public API C code)
+that can solve this problem. Use as follows:
+
+  $ iculslocs -i . -N icudt53l -b res_index.txt
+
+.. Generates a NEW res_index.txt (by looking at the .dat file, and
+figuring out which locales are actually available). It comments out
+the ones which are no longer available:
+
+   ...
+   it_SM {""}
+//  ja {""}
+//  ja_JP {""}
+   jgo {""}
+   ...
+
+Then you can build and in-place patch it with existing ICU tools:
+  $ genrb res_index.txt
+  $ icupkg -a res_index.res icudt53l.dat
+
+.. Now you have a patched icudt53l.dat that not only doesn't have
+Japanese, it doesn't *claim* to have Japanese.
+
+*/
+
+#include "string.h"
+#include "charstr.h"  // ICU internal header
+#include <unicode/ustdio.h>
+#include <unicode/ures.h>
+#include <unicode/udata.h>
+
+const char* PROG = "iculslocs";
+const char* NAME = U_ICUDATA_NAME;  // assume ICU data
+const char* TREE = "ROOT";
+int VERBOSE = 0;
+
+#define RES_INDEX "res_index"
+#define INSTALLEDLOCALES "InstalledLocales"
+
+CharString packageName;
+const char* locale = RES_INDEX;  // locale referring to our index
+
+void usage() {
+  u_printf("Usage: %s [options]\n", PROG);
+  u_printf(
+      "This program lists and optionally regenerates the locale "
+      "manifests\n"
+      " in ICU 'res_index.res' files.\n");
+  u_printf(
+      " -i ICUDATA  Set ICUDATA dir to ICUDATA.\n"
+      "    NOTE: this must be the first option given.\n");
+  u_printf(" -h          This Help\n");
+  u_printf(" -v          Verbose Mode on\n");
+  u_printf(" -l          List locales to stdout\n");
+  u_printf(
+      "   if Verbose mode, then missing (unopenable) "
+      "locales\n"
+      "   will be listed preceded by a '#'.\n");
+  u_printf(
+      " -b res_index.txt  Write 'corrected' bundle "
+      "to res_index.txt\n"
+      "    missing bundles will be "
+      "OMITTED\n");
+  u_printf(
+      " -T TREE     Choose tree TREE\n"
+      "    (TREE should be one of: \n"
+      "     ROOT, brkitr, coll, curr, lang, rbnf, region, zone)\n");
+  // see ureslocs.h and elsewhere
+  u_printf(
+      " -N NAME     Choose name NAME\n"
+      "    (default: '%s')\n",
+      U_ICUDATA_NAME);
+  u_printf(
+      "\nNOTE: for best results, this tool ought to be "
+      "linked against\n"
+      "stubdata. i.e. '%s -l' SHOULD return an error with "
+      "no data.\n",
+      PROG);
+}
+
+#define ASSERT_SUCCESS(what)              \
+  if (U_FAILURE(status)) {                \
+    u_printf("%s:%d: %s: ERROR: %s %s\n", \
+             __FILE__,                    \
+             __LINE__,                    \
+             PROG,                        \
+             u_errorName(status),         \
+             what);                       \
+    return 1;                             \
+  }
+
+/**
+ * @param status changed from reference to pointer to match node.js style
+ */
+void calculatePackageName(UErrorCode* status) {
+  packageName.clear();
+  if (strcmp(NAME, "NONE")) {
+    packageName.append(NAME, *status);
+    if (strcmp(TREE, "ROOT")) {
+      packageName.append(U_TREE_SEPARATOR_STRING, *status);
+      packageName.append(TREE, *status);
+    }
+  }
+  if (VERBOSE) {
+    u_printf("packageName: %s\n", packageName.data());
+  }
+}
+
+/**
+ * Does the locale exist?
+ * Sets *exists; the return value reports only unexpected errors.
+ * Assumes calculatePackageName was called.
+ * @param exists set to TRUE if exists, FALSE otherwise.
+ * Changed from reference to pointer to match node.js style
+ * @return 0 on "OK" (success or resource-missing),
+ *         1 on "FAILURE" (unexpected error)
+ */
+int localeExists(const char* loc, UBool* exists) {
+  UErrorCode status = U_ZERO_ERROR;
+  if (VERBOSE > 1) {
+    u_printf("Trying to open %s:%s\n", packageName.data(), loc);
+  }
+  LocalUResourceBundlePointer aResource(
+      ures_openDirect(packageName.data(), loc, &status));
+  *exists = FALSE;
+  if (U_SUCCESS(status)) {
+    *exists = TRUE;
+    if (VERBOSE > 1) {
+      u_printf("%s:%s existed!\n", packageName.data(), loc);
+    }
+    return 0;
+  } else if (status == U_MISSING_RESOURCE_ERROR) {
+    *exists = FALSE;
+    if (VERBOSE > 1) {
+      u_printf("%s:%s did NOT exist (%s)!\n",
+               packageName.data(),
+               loc,
+               u_errorName(status));
+    }
+    return 0;  // "good" failure
+  } else {
+    // some other failure..
+    u_printf("%s:%d: %s: ERROR %s opening %s:%s for test.\n",
+             __FILE__,
+             __LINE__,
+             PROG,
+             u_errorName(status),
+             packageName.data(),
+             loc);
+    return 1;  // abort
+  }
+}
+
+void printIndent(const LocalUFILEPointer& bf, int indent) {
+  for (int i = 0; i < indent + 1; i++) {
+    u_fprintf(bf.getAlias(), "    ");
+  }
+}
+
+/**
+ * Dumps a table resource's contents.
+ * If lev==0, skips INSTALLEDLOCALES.
+ * @return 0 for OK, 1 for err
+ */
+int dumpAllButInstalledLocales(int lev,
+                               LocalUResourceBundlePointer& bund,
+                               LocalUFILEPointer& bf,
+                               UErrorCode& status) {
+  ures_resetIterator(bund.getAlias());
+  const UBool isTable = (UBool)(ures_getType(bund.getAlias()) == URES_TABLE);
+  LocalUResourceBundlePointer t;
+  while (U_SUCCESS(status) && ures_hasNext(bund.getAlias())) {
+    t.adoptInstead(ures_getNextResource(bund.getAlias(), t.orphan(), &status));
+    ASSERT_SUCCESS("while processing table");
+    const char* key = ures_getKey(t.getAlias());
+    if (VERBOSE > 1) {
+      u_printf("dump@%d: got key %s\n", lev, key);
+    }
+    if (lev == 0 && !strcmp(key, INSTALLEDLOCALES)) {
+      if (VERBOSE > 1) {
+        u_printf("dump: skipping '%s' as it must be evaluated.\n", key);
+      }
+    } else {
+      printIndent(bf, lev);
+      u_fprintf(bf.getAlias(), "%s", key);
+      switch (ures_getType(t.getAlias())) {
+        case URES_STRING: {
+          int32_t len = 0;
+          const UChar* s = ures_getString(t.getAlias(), &len, &status);
+          ASSERT_SUCCESS("getting string");
+          u_fprintf(bf.getAlias(), ":string {\"");
+          u_file_write(s, len, bf.getAlias());
+          u_fprintf(bf.getAlias(), "\"}");
+        } break;
+        default: {
+          u_printf("ERROR: unhandled type in dumpAllButInstalledLocales().\n");
+          return 1;
+        } break;
+      }
+      u_fprintf(bf.getAlias(), "\n");
+    }
+  }
+  return 0;
+}
+
+int list(const char* toBundle) {
+  UErrorCode status = U_ZERO_ERROR;
+
+  LocalUFILEPointer bf;
+
+  if (toBundle != NULL) {
+    if (VERBOSE) {
+      u_printf("writing to bundle %s\n", toBundle);
+    }
+    // we write UTF-8 with BOM only. No exceptions.
+    bf.adoptInstead(u_fopen(toBundle, "w", "en_US_POSIX", "UTF-8"));
+    if (bf.isNull()) {
+      u_printf("ERROR: Could not open '%s' for writing.\n", toBundle);
+      return 1;
+    }
+    u_fputc(0xFEFF, bf.getAlias());  // write BOM
+    u_fprintf(bf.getAlias(), "// -*- Coding: utf-8; -*-\n//\n");
+  }
+
+  // first, calculate the bundle name.
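+  // (e.g. NAME "icudt53l" with TREE "coll" yields the package name
+  // "icudt53l-coll"; with TREE "ROOT" it stays "icudt53l".)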
+  calculatePackageName(&status);
+  ASSERT_SUCCESS("calculating package name");
+
+  if (VERBOSE) {
+    u_printf("\"locale\": %s\n", locale);
+  }
+
+  LocalUResourceBundlePointer bund(
+      ures_openDirect(packageName.data(), locale, &status));
+  ASSERT_SUCCESS("while opening the bundle");
+  LocalUResourceBundlePointer installedLocales(
+      ures_getByKey(bund.getAlias(), INSTALLEDLOCALES, NULL, &status));
+  ASSERT_SUCCESS("while fetching installed locales");
+
+  int32_t count = ures_getSize(installedLocales.getAlias());
+  if (VERBOSE) {
+    u_printf("Locales: %d\n", count);
+  }
+
+  if (bf.isValid()) {
+    // write the HEADER
+    u_fprintf(bf.getAlias(),
+              "// Warning: this file is automatically generated\n"
+              "// Updated by %s based on %s:%s.txt\n",
+              PROG,
+              packageName.data(),
+              locale);
+    u_fprintf(bf.getAlias(),
+              "%s:table(nofallback) {\n"
+              "    // First, everything besides InstalledLocales:\n",
+              locale);
+    if (dumpAllButInstalledLocales(0, bund, bf, status)) {
+      u_printf("Error dumping prolog for %s\n", toBundle);
+      return 1;
+    }
+    ASSERT_SUCCESS("while writing prolog");  // in case an error was missed
+
+    u_fprintf(bf.getAlias(),
+              "    %s:table { // %d locales in input %s.res\n",
+              INSTALLEDLOCALES,
+              count,
+              locale);
+  }
+
+  // OK, now list them.
+  LocalUResourceBundlePointer subkey;
+
+  int validCount = 0;
+  for (int32_t i = 0; i < count; i++) {
+    subkey.adoptInstead(ures_getByIndex(
+        installedLocales.getAlias(), i, subkey.orphan(), &status));
+    ASSERT_SUCCESS("while fetching an installed locale's name");
+
+    const char* key = ures_getKey(subkey.getAlias());
+    if (VERBOSE > 1) {
+      u_printf("@%d: %s\n", i, key);
+    }
+    // now, see if the locale is installed..
+
+    UBool exists;
+    if (localeExists(key, &exists)) {
+      return 1;  // get out.
+    }
+    if (exists) {
+      validCount++;
+      u_printf("%s\n", key);
+      if (bf.isValid()) {
+        u_fprintf(bf.getAlias(), "    %s {\"\"}\n", key);
+      }
+    } else {
+      if (bf.isValid()) {
+        u_fprintf(bf.getAlias(), "//  %s {\"\"}\n", key);
+      }
+      if (VERBOSE) {
+        u_printf("#%s\n", key);  // verbosity one - '' vs '#'
+      }
+    }
+  }
+
+  if (bf.isValid()) {
+    u_fprintf(bf.getAlias(), "  } // %d/%d valid\n", validCount, count);
+    // write the FOOTER
+    u_fprintf(bf.getAlias(), "}\n");
+  }
+  return 0;
+}
+
+int main(int argc, const char* argv[]) {
+  PROG = argv[0];
+  for (int i = 1; i < argc; i++) {
+    const char* arg = argv[i];
+    int argsLeft = argc - i - 1; /* how many remain? */
+    if (!strcmp(arg, "-v")) {
+      VERBOSE++;
+    } else if (!strcmp(arg, "-i") && (argsLeft >= 1)) {
+      if (i != 1) {
+        u_printf("ERROR: -i must be the first argument given.\n");
+        usage();
+        return 1;
+      }
+      const char* dir = argv[++i];
+      u_setDataDirectory(dir);
+      if (VERBOSE) {
+        u_printf("ICUDATA is now %s\n", dir);
+      }
+    } else if (!strcmp(arg, "-T") && (argsLeft >= 1)) {
+      TREE = argv[++i];
+      if (VERBOSE) {
+        u_printf("TREE is now %s\n", TREE);
+      }
+    } else if (!strcmp(arg, "-N") && (argsLeft >= 1)) {
+      NAME = argv[++i];
+      if (VERBOSE) {
+        u_printf("NAME is now %s\n", NAME);
+      }
+    } else if (!strcmp(arg, "-?") || !strcmp(arg, "-h")) {
+      usage();
+      return 0;
+    } else if (!strcmp(arg, "-l")) {
+      if (list(NULL)) {
+        return 1;
+      }
+    } else if (!strcmp(arg, "-b") && (argsLeft >= 1)) {
+      if (list(argv[++i])) {
+        return 1;
+      }
+    } else {
+      u_printf("Unknown or malformed option: %s\n", arg);
+      usage();
+      return 1;
+    }
+  }
+}
+
+// Local Variables:
+// compile-command: "icurun iculslocs.cpp"
+// End:
diff --git a/tools/icu/icutrim.py b/tools/icu/icutrim.py
new file mode 100755
index 00000000000..517bf39bad3
--- /dev/null
+++ b/tools/icu/icutrim.py
@@ -0,0 +1,356 @@
+#!/usr/bin/python
+#
+# Copyright (C) 2014 IBM Corporation and Others. All Rights Reserved.
+#
+# @author Steven R. Loomis <srl@icu-project.org>
+#
+# This tool slims down an ICU data (.dat) file according to a config file.
+#
+# See: http://bugs.icu-project.org/trac/ticket/10922
+#
+# Usage:
+#  Use "-h" to get help options.
+
+import sys
+import shutil
+# for utf-8
+reload(sys)
+sys.setdefaultencoding("utf-8")
+
+import optparse
+import os
+import json
+import re
+
+endian=sys.byteorder
+
+parser = optparse.OptionParser(usage="usage: mkdir tmp ; %prog -D ~/Downloads/icudt53l.dat -T tmp -F trim_en.json -O icudt53l.dat" )
+
+parser.add_option("-P","--tool-path",
+                  action="store",
+                  dest="toolpath",
+                  help="set the prefix directory for ICU tools")
+
+parser.add_option("-D","--input-file",
+                  action="store",
+                  dest="datfile",
+                  help="input data file (icudt__.dat)",
+                  ) # required
+
+parser.add_option("-F","--filter-file",
+                  action="store",
+                  dest="filterfile",
+                  help="filter file (JSON format)",
+                  ) # required
+
+parser.add_option("-T","--tmp-dir",
+                  action="store",
+                  dest="tmpdir",
+                  help="working directory.",
+                  ) # required
+
+parser.add_option("--delete-tmp",
+                  action="count",
+                  dest="deltmpdir",
+                  help="delete working directory.",
+                  default=0)
+
+parser.add_option("-O","--outfile",
+                  action="store",
+                  dest="outfile",
+                  help="outfile (NOT a full path)",
+                  ) # required
+
+parser.add_option("-v","--verbose",
+                  action="count",
+                  default=0)
+
+parser.add_option('-L',"--locales",
+                  action="store",
+                  dest="locales",
+                  help="sets the 'locales.only' variable",
+                  default=None)
+
+parser.add_option('-e', '--endian', action='store', dest='endian', help='endian, big, little or host, your default is "%s".' % endian, default=endian, metavar='endianness')
+
+(options, args) = parser.parse_args()
+
+optVars = vars(options)
+
+for opt in [ "datfile", "filterfile", "tmpdir", "outfile" ]:
+  if optVars[opt] is None:
+    print "Missing required option: %s" % opt
+    sys.exit(1)
+
+if options.verbose>0:
+  print "Options: "+str(options)
+
+if (os.path.isdir(options.tmpdir) and options.deltmpdir):
+  if options.verbose>1:
+    print "Deleting tmp dir %s.." % (options.tmpdir)
+  shutil.rmtree(options.tmpdir)
+
+if not (os.path.isdir(options.tmpdir)):
+  os.mkdir(options.tmpdir)
+else:
+  print "Please delete tmpdir %s before beginning." % options.tmpdir
+  sys.exit(1)
+
+if options.endian not in ("big","little","host"):
+  print "Unknown endianness: %s" % options.endian
+  sys.exit(1)
+
+if options.endian == "host":
+  options.endian = endian
+
+if not os.path.isdir(options.tmpdir):
+  print "Error, tmpdir not a directory: %s" % (options.tmpdir)
+  sys.exit(1)
+
+if not os.path.isfile(options.filterfile):
+  print "Filterfile doesn't exist: %s" % (options.filterfile)
+  sys.exit(1)
+
+if not os.path.isfile(options.datfile):
+  print "Datfile doesn't exist: %s" % (options.datfile)
+  sys.exit(1)
+
+if not options.datfile.endswith(".dat"):
+  print "Datfile doesn't end with .dat: %s" % (options.datfile)
+  sys.exit(1)
+
+outfile = os.path.join(options.tmpdir, options.outfile)
+
+if os.path.isfile(outfile):
+  print "Error, output file does exist: %s" % (outfile)
+  sys.exit(1)
+
+if not options.outfile.endswith(".dat"):
+  print "Outfile doesn't end with .dat: %s" % (options.outfile)
+  sys.exit(1)
+
+dataname=options.outfile[0:-4]
+
+
+## TODO: need to improve this. Quotes, etc.
+def runcmd(tool, cmd, doContinue=False):
+  if(options.toolpath):
+    cmd = os.path.join(options.toolpath, tool) + " " + cmd
+  else:
+    cmd = tool + " " + cmd
+
+  if(options.verbose>4):
+    print "# " + cmd
+
+  rc = os.system(cmd)
+  if rc != 0 and not doContinue:
+    print "FAILED: %s" % cmd
+    sys.exit(1)
+  return rc
+
+## STEP 0 - read in json config
+fi= open(options.filterfile, "rb")
+config=json.load(fi)
+fi.close()
+
+if (options.locales):
+  if not config.has_key("variables"):
+    config["variables"] = {}
+  if not config["variables"].has_key("locales"):
+    config["variables"]["locales"] = {}
+  config["variables"]["locales"]["only"] = options.locales.split(',')
+
+if (options.verbose > 6):
+  print config
+
+if(config.has_key("comment")):
+  print "%s: %s" % (options.filterfile, config["comment"])
+
+## STEP 1 - copy the data file, swapping endianness
+## The first letter of endian_letter will be 'b' or 'l' for big or little
+endian_letter = options.endian[0]
+
+runcmd("icupkg", "-t%s %s %s" % (endian_letter, options.datfile, outfile))
+
+## STEP 2 - get listing
+listfile = os.path.join(options.tmpdir,"icudata.lst")
+runcmd("icupkg", "-l %s > %s" % (outfile, listfile))
+
+fi = open(listfile, 'rb')
+items = fi.readlines()
+items = [items[i].strip() for i in range(len(items))]
+fi.close()
+
+itemset = set(items)
+
+if (options.verbose>1):
+  print "input file: %d items" % (len(items))
+
+# list of all trees
+trees = {}
+RES_INDX = "res_index.res"
+remove = None
+# remove - always remove these
+if config.has_key("remove"):
+  remove = set(config["remove"])
+else:
+  remove = set()
+
+# keep - always keep these
+if config.has_key("keep"):
+  keep = set(config["keep"])
+else:
+  keep = set()
+
+def queueForRemoval(tree):
+  global remove
+  if not config.has_key("trees"):
+    # no config
+    return
+  if not config["trees"].has_key(tree):
+    return
+  mytree = trees[tree]
+  if(options.verbose>0):
+    print "* %s: %d items" % (tree, len(mytree["locs"]))
+  # do variable substitution for this tree here
+  if type(config["trees"][tree]) == str or type(config["trees"][tree]) == unicode:
+    treeStr = config["trees"][tree]
+    if(options.verbose>5):
+      print " Substituting $%s for tree %s" % (treeStr, tree)
+    if(not config.has_key("variables") or not config["variables"].has_key(treeStr)):
+      print " ERROR: no variable: variables.%s for tree %s" % (treeStr, tree)
+      sys.exit(1)
+    config["trees"][tree] = config["variables"][treeStr]
+  myconfig = config["trees"][tree]
+  if(options.verbose>4):
+    print "   Config: %s" % (myconfig)
Config: %s" % (myconfig) + # Process this tree + if(len(myconfig)==0 or len(mytree["locs"])==0): + if(options.verbose>2): + print " No processing for %s - skipping" % (tree) + else: + only = None + if myconfig.has_key("only"): + only = set(myconfig["only"]) + if (len(only)==0) and (mytree["treeprefix"] != ""): + thePool = "%spool.res" % (mytree["treeprefix"]) + if (thePool in itemset): + if(options.verbose>0): + print "Removing %s because tree %s is empty." % (thePool, tree) + remove.add(thePool) + else: + print "tree %s - no ONLY" + for l in range(len(mytree["locs"])): + loc = mytree["locs"][l] + if (only is not None) and not loc in only: + # REMOVE loc + toRemove = "%s%s%s" % (mytree["treeprefix"], loc, mytree["extension"]) + if(options.verbose>6): + print "Queueing for removal: %s" % toRemove + remove.add(toRemove) + +def addTreeByType(tree, mytree): + if(options.verbose>1): + print "(considering %s): %s" % (tree, mytree) + trees[tree] = mytree + mytree["locs"]=[] + for i in range(len(items)): + item = items[i] + if item.startswith(mytree["treeprefix"]) and item.endswith(mytree["extension"]): + mytree["locs"].append(item[len(mytree["treeprefix"]):-4]) + # now, process + queueForRemoval(tree) + +addTreeByType("converters",{"treeprefix":"", "extension":".cnv"}) +addTreeByType("stringprep",{"treeprefix":"", "extension":".spp"}) +addTreeByType("translit",{"treeprefix":"translit/", "extension":".res"}) +addTreeByType("brkfiles",{"treeprefix":"brkitr/", "extension":".brk"}) +addTreeByType("brkdict",{"treeprefix":"brkitr/", "extension":"dict"}) +addTreeByType("confusables",{"treeprefix":"", "extension":".cfu"}) + +for i in range(len(items)): + item = items[i] + if item.endswith(RES_INDX): + treeprefix = item[0:item.rindex(RES_INDX)] + tree = None + if treeprefix == "": + tree = "ROOT" + else: + tree = treeprefix[0:-1] + if(options.verbose>6): + print "procesing %s" % (tree) + trees[tree] = { "extension": ".res", "treeprefix": treeprefix, "hasIndex": True } + # read in the resource list for the tree + treelistfile = os.path.join(options.tmpdir,"%s.lst" % tree) + runcmd("iculslocs", "-i %s -N %s -T %s -l > %s" % (outfile, dataname, tree, treelistfile)) + fi = open(treelistfile, 'rb') + treeitems = fi.readlines() + trees[tree]["locs"] = [treeitems[i].strip() for i in range(len(treeitems))] + fi.close() + if(not config.has_key("trees") or not config["trees"].has_key(tree)): + print " Warning: filter file %s does not mention trees.%s - will be kept as-is" % (options.filterfile, tree) + else: + queueForRemoval(tree) + +def removeList(count=0): + # don't allow "keep" items to creep in here. + global remove + remove = remove - keep + if(count > 10): + print "Giving up - %dth attempt at removal." % count + sys.exit(1) + if(options.verbose>1): + print "%d items to remove - try #%d" % (len(remove),count) + if(len(remove)>0): + oldcount = len(remove) + hackerrfile=os.path.join(options.tmpdir, "REMOVE.err") + removefile = os.path.join(options.tmpdir, "REMOVE.lst") + fi = open(removefile, 'wb') + for i in remove: + print >>fi, i + fi.close() + rc = runcmd("icupkg","-r %s %s 2> %s" % (removefile,outfile,hackerrfile),True) + if rc is not 0: + if(options.verbose>5): + print "## Damage control, trying to parse stderr from icupkg.." 
+      fi = open(hackerrfile, 'rb')
+      erritems = fi.readlines()
+      fi.close()
+      # Item zone/zh_Hant_TW.res depends on missing item zone/zh_Hant.res
+      pat = re.compile("""^Item ([^ ]+) depends on missing item ([^ ]+).*""")
+      for i in range(len(erritems)):
+        line = erritems[i].strip()
+        m = pat.match(line)
+        if m:
+          toDelete = m.group(1)
+          if(options.verbose > 5):
+            print "<< %s added to delete" % toDelete
+          remove.add(toDelete)
+        else:
+          print "ERROR: could not match error line: %s" % line
+          sys.exit(1)
+      if(options.verbose > 5):
+        print " now %d items to remove" % len(remove)
+      if(oldcount == len(remove)):
+        print " ERROR: could not add any more items to remove. Fail."
+        sys.exit(1)
+      removeList(count+1)
+
+# fire it up
+removeList(1)
+
+# now, fixup res_index, one at a time
+for tree in trees:
+  # skip trees that don't have res_index
+  if not trees[tree].has_key("hasIndex"):
+    continue
+  treebunddir = options.tmpdir
+  if(trees[tree]["treeprefix"]):
+    treebunddir = os.path.join(treebunddir, trees[tree]["treeprefix"])
+  if not (os.path.isdir(treebunddir)):
+    os.mkdir(treebunddir)
+  treebundres = os.path.join(treebunddir,RES_INDX)
+  treebundtxt = "%s.txt" % (treebundres[0:-4])
+  runcmd("iculslocs", "-i %s -N %s -T %s -b %s" % (outfile, dataname, tree, treebundtxt))
+  runcmd("genrb","-d %s -s %s res_index.txt" % (treebunddir, treebunddir))
+  runcmd("icupkg","-s %s -a %s%s %s" % (options.tmpdir, trees[tree]["treeprefix"], RES_INDX, outfile))
diff --git a/tools/icu/no-op.cc b/tools/icu/no-op.cc
new file mode 100644
index 00000000000..08d1599a264
--- /dev/null
+++ b/tools/icu/no-op.cc
@@ -0,0 +1,18 @@
+/*
+**********************************************************************
+* Copyright (C) 2014, International Business Machines
+* Corporation and others. All Rights Reserved.
+**********************************************************************
+*
+*/
+
+//
+// ICU needs the C++, not the C linker to be used, even if the main function
+// is in C.
+//
+// This is a dummy function just to get gyp to compile some internal
+// tools as C++.
+//
+// It should not appear in production node binaries.
+ +extern void icu_dummy_cxx() {} diff --git a/tools/test.py b/tools/test.py index 0772f9ad321..579d444f6c5 100755 --- a/tools/test.py +++ b/tools/test.py @@ -55,8 +55,9 @@ class ProgressIndicator(object): - def __init__(self, cases): + def __init__(self, cases, flaky_tests_mode): self.cases = cases + self.flaky_tests_mode = flaky_tests_mode self.queue = Queue(len(cases)) for case in cases: self.queue.put_nowait(case) @@ -234,13 +235,19 @@ def HasRun(self, output): self._done += 1 command = basename(output.command[-1]) if output.UnexpectedOutput(): - print 'not ok %i - %s' % (self._done, command) + status_line = 'not ok %i - %s' % (self._done, command) + if FLAKY in output.test.outcomes and self.flaky_tests_mode == "dontcare": + status_line = status_line + " # TODO : Fix flaky test" + print status_line for l in output.output.stderr.splitlines(): print '#' + l for l in output.output.stdout.splitlines(): print '#' + l else: - print 'ok %i - %s' % (self._done, command) + status_line = 'ok %i - %s' % (self._done, command) + if FLAKY in output.test.outcomes: + status_line = status_line + " # TODO : Fix flaky test" + print status_line duration = output.test.duration @@ -258,8 +265,8 @@ def Done(self): class CompactProgressIndicator(ProgressIndicator): - def __init__(self, cases, templates): - super(CompactProgressIndicator, self).__init__(cases) + def __init__(self, cases, flaky_tests_mode, templates): + super(CompactProgressIndicator, self).__init__(cases, flaky_tests_mode) self.templates = templates self.last_status_length = 0 self.start_time = time.time() @@ -314,13 +321,13 @@ def PrintProgress(self, name): class ColorProgressIndicator(CompactProgressIndicator): - def __init__(self, cases): + def __init__(self, cases, flaky_tests_mode): templates = { 'status_line': "[%(mins)02i:%(secs)02i|\033[34m%%%(remaining) 4d\033[0m|\033[32m+%(passed) 4d\033[0m|\033[31m-%(failed) 4d\033[0m]: %(test)s", 'stdout': "\033[1m%s\033[0m", 'stderr': "\033[31m%s\033[0m", } - super(ColorProgressIndicator, self).__init__(cases, templates) + super(ColorProgressIndicator, self).__init__(cases, flaky_tests_mode, templates) def ClearLine(self, last_line_length): print "\033[1K\r", @@ -328,7 +335,7 @@ def ClearLine(self, last_line_length): class MonochromeProgressIndicator(CompactProgressIndicator): - def __init__(self, cases): + def __init__(self, cases, flaky_tests_mode): templates = { 'status_line': "[%(mins)02i:%(secs)02i|%%%(remaining) 4d|+%(passed) 4d|-%(failed) 4d]: %(test)s", 'stdout': '%s', @@ -336,7 +343,7 @@ def __init__(self, cases): 'clear': lambda last_line_length: ("\r" + (" " * last_line_length) + "\r"), 'max_length': 78 } - super(MonochromeProgressIndicator, self).__init__(cases, templates) + super(MonochromeProgressIndicator, self).__init__(cases, flaky_tests_mode, templates) def ClearLine(self, last_line_length): print ("\r" + (" " * last_line_length) + "\r"), @@ -738,8 +745,8 @@ def GetVmFlags(self, testcase, mode): def GetTimeout(self, mode): return self.timeout * TIMEOUT_SCALEFACTOR[mode] -def RunTestCases(cases_to_run, progress, tasks): - progress = PROGRESS_INDICATORS[progress](cases_to_run) +def RunTestCases(cases_to_run, progress, tasks, flaky_tests_mode): + progress = PROGRESS_INDICATORS[progress](cases_to_run, flaky_tests_mode) return progress.Run(tasks) @@ -763,6 +770,7 @@ def BuildRequirements(context, requirements, mode, scons_flags): TIMEOUT = 'timeout' CRASH = 'crash' SLOW = 'slow' +FLAKY = 'flaky' class Expression(object): @@ -1212,6 +1220,9 @@ def BuildOptions(): default=False, 
action="store_true") result.add_option("--cat", help="Print the source of the tests", default=False, action="store_true") + result.add_option("--flaky-tests", + help="Regard tests marked as flaky (run|skip|dontcare)", + default="run") result.add_option("--warn-unused", help="Report unused rules", default=False, action="store_true") result.add_option("-j", help="The number of parallel tasks to run", @@ -1258,6 +1269,13 @@ def ProcessOptions(options): options.scons_flags.append("arch=" + options.arch) if options.snapshot: options.scons_flags.append("snapshot=on") + def CheckTestMode(name, option): + if not option in ["run", "skip", "dontcare"]: + print "Unknown %s mode %s" % (name, option) + return False + return True + if not CheckTestMode("--flaky-tests", options.flaky_tests): + return False return True @@ -1457,15 +1475,15 @@ def wrap(processor): result = None def DoSkip(case): - return SKIP in case.outcomes or SLOW in case.outcomes + return SKIP in case.outcomes or SLOW in case.outcomes or (FLAKY in case.outcomes and options.flaky_tests == "skip") cases_to_run = [ c for c in all_cases if not DoSkip(c) ] if len(cases_to_run) == 0: print "No tests to run." - return 0 + return 1 else: try: start = time.time() - if RunTestCases(cases_to_run, options.progress, options.j): + if RunTestCases(cases_to_run, options.progress, options.j, options.flaky_tests): result = 0 else: result = 1 diff --git a/tools/upgrade-npm.sh b/tools/upgrade-npm.sh new file mode 100755 index 00000000000..02700324c93 --- /dev/null +++ b/tools/upgrade-npm.sh @@ -0,0 +1,7 @@ +#!/bin/bash + +set -xe + +rm -rf deps/npm + +(cd deps && curl https://registry.npmjs.org/npm/-/npm-$1.tgz | tar xz && mv package npm) diff --git a/vcbuild.bat b/vcbuild.bat index 0c9b0b285a6..39c656f1878 100644 --- a/vcbuild.bat +++ b/vcbuild.bat @@ -35,6 +35,8 @@ set noetw_msi_arg= set noperfctr= set noperfctr_arg= set noperfctr_msi_arg= +set i18n_arg= +set download_arg= :next-arg if "%1"=="" goto args-done @@ -62,6 +64,10 @@ if /i "%1"=="test" set test=test&goto arg-ok if /i "%1"=="msi" set msi=1&set licensertf=1&goto arg-ok if /i "%1"=="upload" set upload=1&goto arg-ok if /i "%1"=="jslint" set jslint=1&goto arg-ok +if /i "%1"=="small-icu" set i18n_arg=%1&goto arg-ok +if /i "%1"=="full-icu" set i18n_arg=%1&goto arg-ok +if /i "%1"=="intl-none" set i18n_arg=%1&goto arg-ok +if /i "%1"=="download-all" set download_arg="--download=all"&goto arg-ok echo Warning: ignoring invalid command line option `%1`. @@ -80,6 +86,10 @@ if defined nosnapshot set nosnapshot_arg=--without-snapshot if defined noetw set noetw_arg=--without-etw& set noetw_msi_arg=/p:NoETW=1 if defined noperfctr set noperfctr_arg=--without-perfctr& set noperfctr_msi_arg=/p:NoPerfCtr=1 +if "%i18n_arg%"=="full-icu" set i18n_arg=--with-intl=full-icu +if "%i18n_arg%"=="small-icu" set i18n_arg=--with-intl=small-icu +if "%i18n_arg%"=="intl-none" set i18n_arg=--with-intl=none + :project-gen @rem Skip project generation if requested. if defined noprojgen goto msbuild @@ -89,7 +99,7 @@ if defined NIGHTLY set TAG=nightly-%NIGHTLY% @rem Generate the VS project. 
SETLOCAL if defined VS100COMNTOOLS call "%VS100COMNTOOLS%\VCVarsQueryRegistry.bat" - python configure %debug_arg% %nosnapshot_arg% %noetw_arg% %noperfctr_arg% --dest-cpu=%target_arch% --tag=%TAG% + python configure %download_arg% %i18n_arg% %debug_arg% %nosnapshot_arg% %noetw_arg% %noperfctr_arg% --dest-cpu=%target_arch% --tag=%TAG% if errorlevel 1 goto create-msvs-files-failed if not exist node.sln goto create-msvs-files-failed echo Project files generated. @@ -102,7 +112,10 @@ if defined nobuild goto sign @rem Look for Visual Studio 2013 if not defined VS120COMNTOOLS goto vc-set-2012 if not exist "%VS120COMNTOOLS%\..\..\vc\vcvarsall.bat" goto vc-set-2012 -call "%VS120COMNTOOLS%\..\..\vc\vcvarsall.bat" +if "%VCVARS_VER%" NEQ "120" ( + call "%VS120COMNTOOLS%\..\..\vc\vcvarsall.bat" + SET VCVARS_VER=120 +) if not defined VCINSTALLDIR goto msbuild-not-found set GYP_MSVS_VERSION=2013 goto msbuild-found @@ -111,7 +124,10 @@ goto msbuild-found @rem Look for Visual Studio 2012 if not defined VS110COMNTOOLS goto vc-set-2010 if not exist "%VS110COMNTOOLS%\..\..\vc\vcvarsall.bat" goto vc-set-2010 -call "%VS110COMNTOOLS%\..\..\vc\vcvarsall.bat" +if "%VCVARS_VER%" NEQ "110" ( + call "%VS110COMNTOOLS%\..\..\vc\vcvarsall.bat" + SET VCVARS_VER=110 +) if not defined VCINSTALLDIR goto msbuild-not-found set GYP_MSVS_VERSION=2012 goto msbuild-found @@ -119,7 +135,10 @@ goto msbuild-found :vc-set-2010 if not defined VS100COMNTOOLS goto msbuild-not-found if not exist "%VS100COMNTOOLS%\..\..\vc\vcvarsall.bat" goto msbuild-not-found -call "%VS100COMNTOOLS%\..\..\vc\vcvarsall.bat" +if "%VCVARS_VER%" NEQ "100" ( + call "%VS100COMNTOOLS%\..\..\vc\vcvarsall.bat" + SET VCVARS_VER=100 +) if not defined VCINSTALLDIR goto msbuild-not-found goto msbuild-found @@ -217,7 +236,7 @@ python tools/closure_linter/closure_linter/gjslint.py --unix_mode --strict --noj goto exit :help -echo vcbuild.bat [debug/release] [msi] [test-all/test-uv/test-internet/test-pummel/test-simple/test-message] [clean] [noprojgen] [nobuild] [nosign] [x86/x64] +echo vcbuild.bat [debug/release] [msi] [test-all/test-uv/test-internet/test-pummel/test-simple/test-message] [clean] [noprojgen] [small-icu/full-icu/intl-none] [nobuild] [nosign] [x86/x64] [download-all] echo Examples: echo vcbuild.bat : builds release build echo vcbuild.bat debug : builds debug build